add en pages

Karen Bradshaw 2020-05-30 15:10:23 -04:00
parent 1502e0281d
commit ecc27bbbe7
347 changed files with 2900 additions and 2537 deletions
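The 347 files in this commit change in lockstep: the `content_template` front-matter key becomes `content_type`, and the `{{% capture %}}` shortcodes become HTML comments plus a heading shortcode. A minimal Python sketch of that repeated substitution, inferred from the hunks below (the commit page does not show what tooling, if any, was actually used):

```python
import re

# Rewrite rules inferred from the diff hunks. These names and this script are
# illustrative only -- they are not part of the commit itself.
RULES = [
    (r"^content_template: templates/concept$", "content_type: concept"),
    (r"^\{\{% capture overview %\}\}$", "<!-- overview -->"),
    (r"^\{\{% capture body %\}\}$", "<!-- body -->"),
    (r"^\{\{% capture whatsnext %\}\}$", '## {{% heading "whatsnext" %}}'),
    (r"^\{\{% /capture %\}\}\n?", ""),  # closing captures are dropped outright
]

def migrate(page: str) -> str:
    """Apply each rewrite rule to a whole page of Hugo markdown."""
    for pattern, replacement in RULES:
        page = re.sub(pattern, replacement, page, flags=re.MULTILINE)
    return page
```

Running `migrate()` over a page reproduces exactly the kind of hunks shown in the files below.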

@@ -1,17 +1,17 @@
 ---
 title: Concepts
 main_menu: true
-content_template: templates/concept
+content_type: concept
 weight: 40
 ---
-{{% capture overview %}}
+<!-- overview -->
 The Concepts section helps you learn about the parts of the Kubernetes system and the abstractions Kubernetes uses to represent your {{< glossary_tooltip text="cluster" term_id="cluster" length="all" >}}, and helps you obtain a deeper understanding of how Kubernetes works.
-{{% /capture %}}
-{{% capture body %}}
+<!-- body -->
 ## Overview
@@ -60,12 +60,13 @@ The Kubernetes master is responsible for maintaining the desired state for your
 The nodes in a cluster are the machines (VMs, physical servers, etc) that run your applications and cloud workflows. The Kubernetes master controls each node; you'll rarely interact with nodes directly.
-{{% /capture %}}
-{{% capture whatsnext %}}
+## {{% heading "whatsnext" %}}
 If you would like to write a concept page, see
 [Using Page Templates](/docs/home/contribute/page-templates/)
 for information about the concept page type and the concept template.
-{{% /capture %}}
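Every file in this commit instantiates the same pattern. As a schematic before/after on a hypothetical page (front matter abbreviated), with removed lines marked `-` and added lines marked `+`:

```diff
 ---
 title: Example Page
-content_template: templates/concept
+content_type: concept
 ---
-{{% capture overview %}}
+<!-- overview -->
 Overview text.
-{{% /capture %}}
-{{% capture body %}}
+<!-- body -->
 Body text.
-{{% /capture %}}
-{{% capture whatsnext %}}
+## {{% heading "whatsnext" %}}
 Links.
-{{% /capture %}}
```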

@@ -1,10 +1,10 @@
 ---
 title: Cloud Controller Manager
-content_template: templates/concept
+content_type: concept
 weight: 40
 ---
-{{% capture overview %}}
+<!-- overview -->
 {{< feature-state state="beta" for_k8s_version="v1.11" >}}
@@ -17,9 +17,9 @@ components.
 The cloud-controller-manager is structured using a plugin
 mechanism that allows different cloud providers to integrate their platforms with Kubernetes.
-{{% /capture %}}
-{{% capture body %}}
+<!-- body -->
 ## Design
@@ -200,8 +200,9 @@ rules:
 - update
 ```
-{{% /capture %}}
-{{% capture whatsnext %}}
+## {{% heading "whatsnext" %}}
 [Cloud Controller Manager Administration](/docs/tasks/administer-cluster/running-cloud-controller/#cloud-controller-manager)
 has instructions on running and managing the cloud controller manager.
@@ -212,4 +213,3 @@ The cloud controller manager uses Go interfaces to allow implementations from an
 The implementation of the shared controllers highlighted in this document (Node, Route, and Service), and some scaffolding along with the shared cloudprovider interface, is part of the Kubernetes core. Implementations specific to cloud providers are outside the core of Kubernetes and implement the `CloudProvider` interface.
 For more information about developing plugins, see [Developing Cloud Controller Manager](/docs/tasks/administer-cluster/developing-cloud-controller-manager/).
-{{% /capture %}}

@@ -3,19 +3,19 @@ reviewers:
 - dchen1107
 - liggitt
 title: Control Plane-Node Communication
-content_template: templates/concept
+content_type: concept
 weight: 20
 aliases:
 - master-node-communication
 ---
-{{% capture overview %}}
+<!-- overview -->
 This document catalogs the communication paths between the control plane (really the apiserver) and the Kubernetes cluster. The intent is to allow users to customize their installation to harden the network configuration such that the cluster can be run on an untrusted network (or on fully public IPs on a cloud provider).
-{{% /capture %}}
-{{% capture body %}}
+<!-- body -->
 ## Node to Control Plane
 All communication paths from the nodes to the control plane terminate at the apiserver (none of the other master components are designed to expose remote services). In a typical deployment, the apiserver is configured to listen for remote connections on a secure HTTPS port (443) with one or more forms of client [authentication](/docs/reference/access-authn-authz/authentication/) enabled.

@@ -1,10 +1,10 @@
 ---
 title: Controllers
-content_template: templates/concept
+content_type: concept
 weight: 30
 ---
-{{% capture overview %}}
+<!-- overview -->
 In robotics and automation, a _control loop_ is
 a non-terminating loop that regulates the state of a system.
@@ -18,10 +18,10 @@ closer to the desired state, by turning equipment on or off.
 {{< glossary_definition term_id="controller" length="short">}}
-{{% /capture %}}
-{{% capture body %}}
+<!-- body -->
 ## Controller pattern
@@ -150,11 +150,12 @@ You can run your own controller as a set of Pods,
 or externally to Kubernetes. What fits best will depend on what that particular
 controller does.
-{{% /capture %}}
-{{% capture whatsnext %}}
+## {{% heading "whatsnext" %}}
 * Read about the [Kubernetes control plane](/docs/concepts/#kubernetes-control-plane)
 * Discover some of the basic [Kubernetes objects](/docs/concepts/#kubernetes-objects)
 * Learn more about the [Kubernetes API](/docs/concepts/overview/kubernetes-api/)
 * If you want to write your own controller, see [Extension Patterns](/docs/concepts/extend-kubernetes/extend-cluster/#extension-patterns) in Extending Kubernetes.
-{{% /capture %}}

@@ -3,11 +3,11 @@ reviewers:
 - caesarxuchao
 - dchen1107
 title: Nodes
-content_template: templates/concept
+content_type: concept
 weight: 10
 ---
-{{% capture overview %}}
+<!-- overview -->
 Kubernetes runs your workload by placing containers into Pods to run on _Nodes_.
 A node may be a virtual or physical machine, depending on the cluster. Each node
@@ -23,9 +23,9 @@ The [components](/docs/concepts/overview/components/#node-components) on a node
 {{< glossary_tooltip text="container runtime" term_id="container-runtime" >}}, and the
 {{< glossary_tooltip text="kube-proxy" term_id="kube-proxy" >}}.
-{{% /capture %}}
-{{% capture body %}}
+<!-- body -->
 ## Management
@@ -332,12 +332,13 @@ the kubelet can use topology hints when making resource assignment decisions.
 See [Control Topology Management Policies on a Node](/docs/tasks/administer-cluster/topology-manager/)
 for more information.
-{{% /capture %}}
-{{% capture whatsnext %}}
+## {{% heading "whatsnext" %}}
 * Learn about the [components](/docs/concepts/overview/components/#node-components) that make up a node.
 * Read the [API definition for Node](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#node-v1-core).
 * Read the [Node](https://git.k8s.io/community/contributors/design-proposals/architecture/architecture.md#the-kubernetes-node)
 section of the architecture design document.
 * Read about [taints and tolerations](/docs/concepts/configuration/taint-and-toleration/).
 * Read about [cluster autoscaling](/docs/tasks/administer-cluster/cluster-management/#cluster-autoscaling).
-{{% /capture %}}

@@ -1,9 +1,9 @@
 ---
 title: Installing Addons
-content_template: templates/concept
+content_type: concept
 ---
-{{% capture overview %}}
+<!-- overview -->
 Add-ons extend the functionality of Kubernetes.
@@ -12,10 +12,10 @@ This page lists some of the available add-ons and links to their respective inst
 Add-ons in each section are sorted alphabetically - the ordering does not imply any preferential status.
-{{% /capture %}}
-{{% capture body %}}
+<!-- body -->
 ## Networking and Network Policy
@@ -55,4 +55,4 @@ There are several other add-ons documented in the deprecated [cluster/addons](ht
 Well-maintained ones should be linked to here. PRs welcome!
-{{% /capture %}}

@@ -1,19 +1,19 @@
 ---
 title: Certificates
-content_template: templates/concept
+content_type: concept
 weight: 20
 ---
-{{% capture overview %}}
+<!-- overview -->
 When using client certificate authentication, you can generate certificates
 manually through `easyrsa`, `openssl` or `cfssl`.
-{{% /capture %}}
-{{% capture body %}}
+<!-- body -->
 ### easyrsa
@@ -249,4 +249,4 @@ You can use the `certificates.k8s.io` API to provision
 x509 certificates to use for authentication as documented
 [here](/docs/tasks/tls/managing-tls-in-a-cluster).
-{{% /capture %}}

@@ -1,16 +1,16 @@
 ---
 title: Cloud Providers
-content_template: templates/concept
+content_type: concept
 weight: 30
 ---
-{{% capture overview %}}
+<!-- overview -->
 This page explains how to manage Kubernetes running on a specific
 cloud provider.
-{{% /capture %}}
-{{% capture body %}}
+<!-- body -->
 ### kubeadm
 [kubeadm](/docs/reference/setup-tools/kubeadm/kubeadm/) is a popular option for creating kubernetes clusters.
 kubeadm has configuration options to specify configuration information for cloud providers. For example a typical
@@ -363,7 +363,7 @@ Kubernetes network plugin and should appear in the `[Route]` section of the
 [kubenet]: /docs/concepts/cluster-administration/network-plugins/#kubenet
-{{% /capture %}}
 ## OVirt

@@ -3,16 +3,16 @@ reviewers:
 - davidopp
 - lavalamp
 title: Cluster Administration Overview
-content_template: templates/concept
+content_type: concept
 weight: 10
 ---
-{{% capture overview %}}
+<!-- overview -->
 The cluster administration overview is for anyone creating or administering a Kubernetes cluster.
 It assumes some familiarity with core Kubernetes [concepts](/docs/concepts/).
-{{% /capture %}}
-{{% capture body %}}
+<!-- body -->
 ## Planning a cluster
 See the guides in [Setup](/docs/setup/) for examples of how to plan, set up, and configure Kubernetes clusters. The solutions listed in this article are called *distros*.
@@ -68,6 +68,6 @@ Note: Not all distros are actively maintained. Choose distros which have been te
 * [Logging and Monitoring Cluster Activity](/docs/concepts/cluster-administration/logging/) explains how logging in Kubernetes works and how to implement it.
-{{% /capture %}}

@@ -1,10 +1,10 @@
 ---
 title: API Priority and Fairness
-content_template: templates/concept
+content_type: concept
 min-kubernetes-server-version: v1.18
 ---
-{{% capture overview %}}
+<!-- overview -->
 {{< feature-state state="alpha" for_k8s_version="v1.18" >}}
@@ -33,9 +33,9 @@ the `--max-requests-inflight` flag without the API Priority and
 Fairness feature enabled.
 {{< /caution >}}
-{{% /capture %}}
-{{% capture body %}}
+<!-- body -->
 ## Enabling API Priority and Fairness
@@ -366,13 +366,13 @@ poorly-behaved workloads that may be harming system health.
 request and the PriorityLevel to which it was assigned.
-{{% /capture %}}
-{{% capture whatsnext %}}
+## {{% heading "whatsnext" %}}
 For background information on design details for API priority and fairness, see
 the [enhancement proposal](https://github.com/kubernetes/enhancements/blob/master/keps/sig-api-machinery/20190228-priority-and-fairness.md).
 You can make suggestions and feature requests via [SIG API
 Machinery](https://github.com/kubernetes/community/tree/master/sig-api-machinery).
-{{% /capture %}}

@@ -1,20 +1,20 @@
 ---
 reviewers:
 title: Configuring kubelet Garbage Collection
-content_template: templates/concept
+content_type: concept
 weight: 70
 ---
-{{% capture overview %}}
+<!-- overview -->
 Garbage collection is a helpful function of kubelet that will clean up unused images and unused containers. Kubelet will perform garbage collection for containers every minute and garbage collection for images every five minutes.
 External garbage collection tools are not recommended as these tools can potentially break the behavior of kubelet by removing containers expected to exist.
-{{% /capture %}}
-{{% capture body %}}
+<!-- body -->
 ## Image Collection
@@ -77,10 +77,11 @@ Including:
 | `--low-diskspace-threshold-mb` | `--eviction-hard` or `eviction-soft` | eviction generalizes disk thresholds to other resources |
 | `--outofdisk-transition-frequency` | `--eviction-pressure-transition-period` | eviction generalizes disk pressure transition to other resources |
-{{% /capture %}}
-{{% capture whatsnext %}}
+## {{% heading "whatsnext" %}}
 See [Configuring Out Of Resource Handling](/docs/tasks/administer-cluster/out-of-resource/) for more details.
-{{% /capture %}}

@@ -3,20 +3,20 @@ reviewers:
 - piosz
 - x13n
 title: Logging Architecture
-content_template: templates/concept
+content_type: concept
 weight: 60
 ---
-{{% capture overview %}}
+<!-- overview -->
 Application and systems logs can help you understand what is happening inside your cluster. The logs are particularly useful for debugging problems and monitoring cluster activity. Most modern applications have some kind of logging mechanism; as such, most container engines are likewise designed to support some kind of logging. The easiest and most embraced logging method for containerized applications is to write to the standard output and standard error streams.
 However, the native functionality provided by a container engine or runtime is usually not enough for a complete logging solution. For example, if a container crashes, a pod is evicted, or a node dies, you'll usually still want to access your application's logs. As such, logs should have a separate storage and lifecycle independent of nodes, pods, or containers. This concept is called _cluster-level-logging_. Cluster-level logging requires a separate backend to store, analyze, and query logs. Kubernetes provides no native storage solution for log data, but you can integrate many existing logging solutions into your Kubernetes cluster.
-{{% /capture %}}
-{{% capture body %}}
+<!-- body -->
 Cluster-level logging architectures are described in assumption that
 a logging backend is present inside or outside of your cluster. If you're
@@ -267,4 +267,4 @@ You can implement cluster-level logging by exposing or pushing logs directly fro
 every application; however, the implementation for such a logging mechanism
 is outside the scope of Kubernetes.
-{{% /capture %}}

@@ -2,18 +2,18 @@
 reviewers:
 - janetkuo
 title: Managing Resources
-content_template: templates/concept
+content_type: concept
 weight: 40
 ---
-{{% capture overview %}}
+<!-- overview -->
 You've deployed your application and exposed it via a service. Now what? Kubernetes provides a number of tools to help you manage your application deployment, including scaling and updating. Among the features that we will discuss in more depth are [configuration files](/docs/concepts/configuration/overview/) and [labels](/docs/concepts/overview/working-with-objects/labels/).
-{{% /capture %}}
-{{% capture body %}}
+<!-- body -->
 ## Organizing resource configurations
@@ -449,11 +449,12 @@ kubectl edit deployment/my-nginx
 That's it! The Deployment will declaratively update the deployed nginx application progressively behind the scene. It ensures that only a certain number of old replicas may be down while they are being updated, and only a certain number of new replicas may be created above the desired number of pods. To learn more details about it, visit [Deployment page](/docs/concepts/workloads/controllers/deployment/).
-{{% /capture %}}
-{{% capture whatsnext %}}
+## {{% heading "whatsnext" %}}
 - Learn about [how to use `kubectl` for application introspection and debugging](/docs/tasks/debug-application-cluster/debug-application-introspection/).
 - See [Configuration Best Practices and Tips](/docs/concepts/configuration/overview/).
-{{% /capture %}}

@@ -4,21 +4,21 @@ reviewers:
 - brancz
 - logicalhan
 - RainbowMango
-content_template: templates/concept
+content_type: concept
 weight: 60
 aliases:
 - controller-metrics.md
 ---
-{{% capture overview %}}
+<!-- overview -->
 System component metrics can give a better look into what is happening inside them. Metrics are particularly useful for building dashboards and alerts.
 Metrics in Kubernetes control plane are emitted in [prometheus format](https://prometheus.io/docs/instrumenting/exposition_formats/) and are human readable.
-{{% /capture %}}
-{{% capture body %}}
+<!-- body -->
 ## Metrics in Kubernetes
@@ -124,10 +124,11 @@ cloudprovider_gce_api_request_duration_seconds { request = "detach_disk"}
 cloudprovider_gce_api_request_duration_seconds { request = "list_disk"}
 ```
-{{% /capture %}}
-{{% capture whatsnext %}}
+## {{% heading "whatsnext" %}}
 * Read about the [Prometheus text format](https://github.com/prometheus/docs/blob/master/content/docs/instrumenting/exposition_formats.md#text-based-format) for metrics
 * See the list of [stable Kubernetes metrics](https://github.com/kubernetes/kubernetes/blob/master/test/instrumentation/testdata/stable-metrics-list.yaml)
 * Read about the [Kubernetes deprecation policy](https://kubernetes.io/docs/reference/using-api/deprecation-policy/#deprecating-a-feature-or-behavior )
-{{% /capture %}}

@@ -2,11 +2,11 @@
 reviewers:
 - thockin
 title: Cluster Networking
-content_template: templates/concept
+content_type: concept
 weight: 50
 ---
-{{% capture overview %}}
+<!-- overview -->
 Networking is a central part of Kubernetes, but it can be challenging to
 understand exactly how it is expected to work. There are 4 distinct networking
 problems to address:
@@ -17,10 +17,10 @@ problems to address:
 3. Pod-to-Service communications: this is covered by [services](/docs/concepts/services-networking/service/).
 4. External-to-Service communications: this is covered by [services](/docs/concepts/services-networking/service/).
-{{% /capture %}}
-{{% capture body %}}
+<!-- body -->
 Kubernetes is all about sharing machines between applications. Typically,
 sharing machines requires ensuring that two applications do not try to use the
@@ -312,12 +312,13 @@ Weave Net runs as a [CNI plug-in](https://www.weave.works/docs/net/latest/cni-pl
 or stand-alone. In either version, it doesn't require any configuration or extra code
 to run, and in both cases, the network provides one IP address per pod - as is standard for Kubernetes.
-{{% /capture %}}
-{{% capture whatsnext %}}
+## {{% heading "whatsnext" %}}
 The early design of the networking model and its rationale, and some future
 plans are described in more detail in the [networking design
 document](https://git.k8s.io/community/contributors/design-proposals/network/networking.md).
-{{% /capture %}}

@@ -1,14 +1,14 @@
 ---
 title: Proxies in Kubernetes
-content_template: templates/concept
+content_type: concept
 weight: 90
 ---
-{{% capture overview %}}
+<!-- overview -->
 This page explains proxies used with Kubernetes.
-{{% /capture %}}
-{{% capture body %}}
+<!-- body -->
 ## Proxies
@@ -62,6 +62,6 @@ will typically ensure that the latter types are setup correctly.
 Proxies have replaced redirect capabilities. Redirects have been deprecated.
-{{% /capture %}}

@@ -1,10 +1,10 @@
 ---
 title: ConfigMaps
-content_template: templates/concept
+content_type: concept
 weight: 20
 ---
-{{% capture overview %}}
+<!-- overview -->
 {{< glossary_definition term_id="configmap" prepend="A ConfigMap is" length="all" >}}
@@ -15,9 +15,9 @@ If the data you want to store are confidential, use a
 or use additional (third party) tools to keep your data private.
 {{< /caution >}}
-{{% /capture %}}
-{{% capture body %}}
+<!-- body -->
 ## Motivation
 Use a ConfigMap for setting configuration data separately from application code.
@@ -243,12 +243,13 @@ Existing Pods maintain a mount point to the deleted ConfigMap - it is recommende
 these pods.
 {{< /note >}}
-{{% /capture %}}
-{{% capture whatsnext %}}
+## {{% heading "whatsnext" %}}
 * Read about [Secrets](/docs/concepts/configuration/secret/).
 * Read [Configure a Pod to Use a ConfigMap](/docs/tasks/configure-pod-container/configure-pod-configmap/).
 * Read [The Twelve-Factor App](https://12factor.net/) to understand the motivation for
 separating code from configuration.
-{{% /capture %}}

@@ -1,6 +1,6 @@
 ---
 title: Managing Resources for Containers
-content_template: templates/concept
+content_type: concept
 weight: 40
 feature:
 title: Automatic bin packing
@@ -8,7 +8,7 @@ feature:
 Automatically places containers based on their resource requirements and other constraints, while not sacrificing availability. Mix critical and best-effort workloads in order to drive up utilization and save even more resources.
 ---
-{{% capture overview %}}
+<!-- overview -->
 When you specify a {{< glossary_tooltip term_id="pod" >}}, you can optionally specify how
 much of each resource a {{< glossary_tooltip text="Container" term_id="container" >}} needs.
@@ -21,10 +21,10 @@ allowed to use more of that resource than the limit you set. The kubelet also re
 at least the _request_ amount of that system resource specifically for that container
 to use.
-{{% /capture %}}
-{{% capture body %}}
+<!-- body -->
 ## Requests and limits
@@ -740,10 +740,11 @@ You can see that the Container was terminated because of `reason:OOM Killed`, wh
-{{% /capture %}}
-{{% capture whatsnext %}}
+## {{% heading "whatsnext" %}}
 * Get hands-on experience [assigning Memory resources to Containers and Pods](/docs/tasks/configure-pod-container/assign-memory-resource/).
@@ -758,4 +759,4 @@ You can see that the Container was terminated because of `reason:OOM Killed`, wh
 * Read about [project quotas](http://xfs.org/docs/xfsdocs-xml-dev/XFS_User_Guide/tmp/en-US/html/xfs-quotas.html) in XFS
-{{% /capture %}}

@@ -1,10 +1,10 @@
 ---
 title: Organizing Cluster Access Using kubeconfig Files
-content_template: templates/concept
+content_type: concept
 weight: 60
 ---
-{{% capture overview %}}
+<!-- overview -->
 Use kubeconfig files to organize information about clusters, users, namespaces, and
 authentication mechanisms. The `kubectl` command-line tool uses kubeconfig files to
@@ -25,10 +25,10 @@ variable or by setting the
 For step-by-step instructions on creating and specifying kubeconfig files, see
 [Configure Access to Multiple Clusters](/docs/tasks/access-application-cluster/configure-access-multiple-clusters).
-{{% /capture %}}
-{{% capture body %}}
+<!-- body -->
 ## Supporting multiple clusters, users, and authentication mechanisms
@@ -143,14 +143,15 @@ File references on the command line are relative to the current working director
 In `$HOME/.kube/config`, relative paths are stored relatively, and absolute paths
 are stored absolutely.
-{{% /capture %}}
-{{% capture whatsnext %}}
+## {{% heading "whatsnext" %}}
 * [Configure Access to Multiple Clusters](/docs/tasks/access-application-cluster/configure-access-multiple-clusters/)
 * [`kubectl config`](/docs/reference/generated/kubectl/kubectl-commands#config)
-{{% /capture %}}

@@ -2,17 +2,17 @@
 reviewers:
 - mikedanese
 title: Configuration Best Practices
-content_template: templates/concept
+content_type: concept
 weight: 10
 ---
-{{% capture overview %}}
+<!-- overview -->
 This document highlights and consolidates configuration best practices that are introduced throughout the user guide, Getting Started documentation, and examples.
 This is a living document. If you think of something that is not on this list but might be useful to others, please don't hesitate to file an issue or submit a PR.
-{{% /capture %}}
-{{% capture body %}}
+<!-- body -->
 ## General Configuration Tips
 - When defining configurations, specify the latest stable API version.
@@ -105,5 +105,5 @@ The caching semantics of the underlying image provider make even `imagePullPolic
 - Use `kubectl run` and `kubectl expose` to quickly create single-container Deployments and Services. See [Use a Service to Access an Application in a Cluster](/docs/tasks/access-application-cluster/service-access-application-cluster/) for an example.
-{{% /capture %}}

@@ -4,11 +4,11 @@ reviewers:
 - egernst
 - tallclair
 title: Pod Overhead
-content_template: templates/concept
+content_type: concept
 weight: 50
 ---
-{{% capture overview %}}
+<!-- overview -->
 {{< feature-state for_k8s_version="v1.18" state="beta" >}}
@@ -19,10 +19,10 @@ _Pod Overhead_ is a feature for accounting for the resources consumed by the Pod
 on top of the container requests & limits.
-{{% /capture %}}
-{{% capture body %}}
+<!-- body -->
 In Kubernetes, the Pod's overhead is set at
 [admission](/docs/reference/access-authn-authz/extensible-admission-controllers/#what-are-admission-webhooks)
@@ -188,11 +188,12 @@ running with a defined Overhead. This functionality is not available in the 1.9
 kube-state-metrics, but is expected in a following release. Users will need to build kube-state-metrics
 from source in the meantime.
-{{% /capture %}}
-{{% capture whatsnext %}}
+## {{% heading "whatsnext" %}}
 * [RuntimeClass](/docs/concepts/containers/runtime-class/)
 * [PodOverhead Design](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/20190226-pod-overhead.md)
-{{% /capture %}}

@@ -3,11 +3,11 @@ reviewers:
 - davidopp
 - wojtek-t
 title: Pod Priority and Preemption
-content_template: templates/concept
+content_type: concept
 weight: 70
 ---
-{{% capture overview %}}
+<!-- overview -->
 {{< feature-state for_k8s_version="v1.14" state="stable" >}}
@@ -16,9 +16,9 @@ importance of a Pod relative to other Pods. If a Pod cannot be scheduled, the
 scheduler tries to preempt (evict) lower priority Pods to make scheduling of the
 pending Pod possible.
-{{% /capture %}}
-{{% capture body %}}
+<!-- body -->
 {{< warning >}}
@@ -407,7 +407,8 @@ usage does not exceed their requests. If a Pod with lower priority is not
 exceeding its requests, it won't be evicted. Another Pod with higher priority
 that exceeds its requests may be evicted.
-{{% /capture %}}
-{{% capture whatsnext %}}
+## {{% heading "whatsnext" %}}
 * Read about using ResourceQuotas in connection with PriorityClasses: [limit Priority Class consumption by default](/docs/concepts/policy/resource-quotas/#limit-priority-class-consumption-by-default)
-{{% /capture %}}

@@ -4,19 +4,19 @@ reviewers:
 - k82cn
 - ahg-g
 title: Resource Bin Packing for Extended Resources
-content_template: templates/concept
+content_type: concept
 weight: 50
 ---
-{{% capture overview %}}
+<!-- overview -->
 {{< feature-state for_k8s_version="v1.16" state="alpha" >}}
 The kube-scheduler can be configured to enable bin packing of resources along with extended resources using `RequestedToCapacityRatioResourceAllocation` priority function. Priority functions can be used to fine-tune the kube-scheduler as per custom needs.
-{{% /capture %}}
-{{% capture body %}}
+<!-- body -->
 ## Enabling Bin Packing using RequestedToCapacityRatioResourceAllocation
@@ -194,4 +194,4 @@ NodeScore = (5 * 5) + (7 * 1) + (10 * 3) / (5 + 1 + 3)
 ```
-{{% /capture %}}

@@ -2,7 +2,7 @@
 reviewers:
 - mikedanese
 title: Secrets
-content_template: templates/concept
+content_type: concept
 feature:
 title: Secret and configuration management
 description: >
@@ -10,16 +10,16 @@ feature:
 weight: 30
 ---
-{{% capture overview %}}
+<!-- overview -->
 Kubernetes Secrets let you store and manage sensitive information, such
 as passwords, OAuth tokens, and ssh keys. Storing confidential information in a Secret
 is safer and more flexible than putting it verbatim in a
 {{< glossary_tooltip term_id="pod" >}} definition or in a {{< glossary_tooltip text="container image" term_id="image" >}}. See [Secrets design document](https://git.k8s.io/community/contributors/design-proposals/auth/secrets.md) for more information.
-{{% /capture %}}
-{{% capture body %}}
+<!-- body -->
 ## Overview of Secrets

@@ -3,18 +3,18 @@ reviewers:
 - mikedanese
 - thockin
 title: Container Environment
-content_template: templates/concept
+content_type: concept
 weight: 20
 ---
-{{% capture overview %}}
+<!-- overview -->
 This page describes the resources available to Containers in the Container environment.
-{{% /capture %}}
-{{% capture body %}}
+<!-- body -->
 ## Container environment
@@ -53,12 +53,13 @@ FOO_SERVICE_PORT=<the port the service is running on>
 Services have dedicated IP addresses and are available to the Container via DNS,
 if [DNS addon](http://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/dns/) is enabled.
-{{% /capture %}}
-{{% capture whatsnext %}}
+## {{% heading "whatsnext" %}}
 * Learn more about [Container lifecycle hooks](/docs/concepts/containers/container-lifecycle-hooks/).
 * Get hands-on experience
 [attaching handlers to Container lifecycle events](/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/).
-{{% /capture %}}

@@ -3,19 +3,19 @@ reviewers:
 - mikedanese
 - thockin
 title: Container Lifecycle Hooks
-content_template: templates/concept
+content_type: concept
 weight: 30
 ---
-{{% capture overview %}}
+<!-- overview -->
 This page describes how kubelet managed Containers can use the Container lifecycle hook framework
 to run code triggered by events during their management lifecycle.
-{{% /capture %}}
-{{% capture body %}}
+<!-- body -->
 ## Overview
@@ -112,12 +112,13 @@ Events:
 1m 22s 2 {kubelet gke-test-cluster-default-pool-a07e5d30-siqd} spec.containers{main} Warning FailedPostStartHook
 ```
-{{% /capture %}}
-{{% capture whatsnext %}}
+## {{% heading "whatsnext" %}}
 * Learn more about the [Container environment](/docs/concepts/containers/container-environment/).
 * Get hands-on experience
 [attaching handlers to Container lifecycle events](/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/).
-{{% /capture %}}

@@ -3,20 +3,20 @@ reviewers:
 - erictune
 - thockin
 title: Images
-content_template: templates/concept
+content_type: concept
 weight: 10
 ---
-{{% capture overview %}}
+<!-- overview -->
 You create your Docker image and push it to a registry before referring to it in a Kubernetes pod.
 The `image` property of a container supports the same syntax as the `docker` command does, including private registries and tags.
-{{% /capture %}}
-{{% capture body %}}
+<!-- body -->
 ## Updating Images
@@ -370,4 +370,4 @@ common use cases and suggested solutions.
 If you need access to multiple registries, you can create one secret for each registry.
 Kubelet will merge any `imagePullSecrets` into a single virtual `.docker/config.json`
-{{% /capture %}}

@@ -3,11 +3,11 @@ reviewers:
 - erictune
 - thockin
 title: Containers overview
-content_template: templates/concept
+content_type: concept
 weight: 1
 ---
-{{% capture overview %}}
+<!-- overview -->
 Containers are a technology for packaging the (compiled) code for an
 application along with the dependencies it needs at run time. Each
@@ -18,10 +18,10 @@ run it.
 Containers decouple applications from underlying host infrastructure.
 This makes deployment easier in different cloud or OS environments.
-{{% /capture %}}
-{{% capture body %}}
+<!-- body -->
 ## Container images
 A [container image](/docs/concepts/containers/images/) is a ready-to-run
@@ -38,8 +38,9 @@ the change, then recreate the container to start from the updated image.
 {{< glossary_definition term_id="container-runtime" length="all" >}}
-{{% /capture %}}
-{{% capture whatsnext %}}
+## {{% heading "whatsnext" %}}
 * Read about [container images](/docs/concepts/containers/images/)
 * Read about [Pods](/docs/concepts/workloads/pods/)
-{{% /capture %}}

@@ -3,11 +3,11 @@ reviewers:
 - tallclair
 - dchen1107
 title: Runtime Class
-content_template: templates/concept
+content_type: concept
 weight: 20
 ---
-{{% capture overview %}}
+<!-- overview -->
 {{< feature-state for_k8s_version="v1.14" state="beta" >}}
@@ -16,10 +16,10 @@ This page describes the RuntimeClass resource and runtime selection mechanism.
 RuntimeClass is a feature for selecting the container runtime configuration. The container runtime
 configuration is used to run a Pod's containers.
-{{% /capture %}}
-{{% capture body %}}
+<!-- body -->
 ## Motivation
@@ -180,12 +180,13 @@ Pod overhead is defined in RuntimeClass through the `overhead` fields. Through t
 you can specify the overhead of running pods utilizing this RuntimeClass and ensure these overheads
 are accounted for in Kubernetes.
-{{% /capture %}}
-{{% capture whatsnext %}}
+## {{% heading "whatsnext" %}}
 - [RuntimeClass Design](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/runtime-class.md)
 - [RuntimeClass Scheduling Design](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/runtime-class-scheduling.md)
 - Read about the [Pod Overhead](/docs/concepts/configuration/pod-overhead/) concept
 - [PodOverhead Feature Design](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/20190226-pod-overhead.md)
-{{% /capture %}}

@@ -2,11 +2,11 @@
 title: Example Concept Template
 reviewers:
 - chenopis
-content_template: templates/concept
+content_type: concept
 toc_hide: true
 ---
-{{% capture overview %}}
+<!-- overview -->
 {{< note >}}
 Be sure to also [create an entry in the table of contents](/docs/home/contribute/write-new-topic/#creating-an-entry-in-the-table-of-contents) for your new document.
@@ -14,9 +14,9 @@ Be sure to also [create an entry in the table of contents](/docs/home/contribute
 This page explains ...
-{{% /capture %}}
-{{% capture body %}}
+<!-- body -->
 ## Understanding ...
@@ -26,15 +26,16 @@ Kubernetes provides ...
 To use ...
-{{% /capture %}}
-{{% capture whatsnext %}}
+## {{% heading "whatsnext" %}}
 **[Optional Section]**
 * Learn more about [Writing a New Topic](/docs/home/contribute/write-new-topic/).
 * See [Using Page Templates - Concept template](/docs/home/contribute/page-templates/#concept_template) for how to use this template.
-{{% /capture %}}

View File

@ -4,20 +4,20 @@ reviewers:
- lavalamp
- cheftako
- chenopis
content_template: templates/concept
content_type: concept
weight: 20
---
{{% capture overview %}}
<!-- overview -->
The aggregation layer allows Kubernetes to be extended with additional APIs, beyond what is offered by the core Kubernetes APIs.
The additional APIs can either be ready-made solutions such as [service-catalog](/docs/concepts/extend-kubernetes/service-catalog/), or APIs that you develop yourself.
The aggregation layer is different from [Custom Resources](/docs/concepts/extend-kubernetes/api-extension/custom-resources/), which are a way to make the {{< glossary_tooltip term_id="kube-apiserver" text="kube-apiserver" >}} recognise new kinds of object.
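Extension APIs are registered with the aggregation layer through an `APIService` object. A minimal sketch (the `example.com` group, version, and service names below are illustrative assumptions, not values from this page):

```yaml
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1alpha1.example.com   # hypothetical <version>.<group>
spec:
  group: example.com           # hypothetical API group served by the extension server
  version: v1alpha1
  groupPriorityMinimum: 1000
  versionPriority: 15
  service:
    name: example-api          # hypothetical Service fronting the extension api-server
    namespace: example-system
```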
{{% /capture %}}
{{% capture body %}}
<!-- body -->
## Aggregation layer
@ -34,13 +34,14 @@ If your extension API server cannot achieve that latency requirement, consider m
`EnableAggregatedDiscoveryTimeout=false` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) on the kube-apiserver
to disable the timeout restriction. This deprecated feature gate will be removed in a future release.
{{% /capture %}}
{{% capture whatsnext %}}
## {{% heading "whatsnext" %}}
* To get the aggregator working in your environment, [configure the aggregation layer](/docs/tasks/access-kubernetes-api/configure-aggregation-layer/).
* Then, [setup an extension api-server](/docs/tasks/access-kubernetes-api/setup-extension-api-server/) to work with the aggregation layer.
* Also, learn how to [extend the Kubernetes API using Custom Resource Definitions](/docs/tasks/access-kubernetes-api/extend-api-custom-resource-definitions/).
* Read the specification for [APIService](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#apiservice-v1-apiregistration-k8s-io)
{{% /capture %}}

View File

@ -3,19 +3,19 @@ title: Custom Resources
reviewers:
- enisoc
- deads2k
content_template: templates/concept
content_type: concept
weight: 10
---
{{% capture overview %}}
<!-- overview -->
*Custom resources* are extensions of the Kubernetes API. This page discusses when to add a custom
resource to your Kubernetes cluster and when to use a standalone service. It describes the two
methods for adding custom resources and how to choose between them.
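The first of those methods, a CustomResourceDefinition, can be sketched as follows (the `example.com` group and `CronTab` kind are illustrative assumptions):

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  # name must match <plural>.<group>
  name: crontabs.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              cronSpec:
                type: string
```

Once applied, the API server serves a new `/apis/example.com/v1/.../crontabs` endpoint.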
{{% /capture %}}
{{% capture body %}}
<!-- body -->
## Custom resources
A *resource* is an endpoint in the [Kubernetes API](/docs/reference/using-api/api-overview/) that stores a collection of
@ -246,12 +246,13 @@ When you add a custom resource, you can access it using:
- A REST client that you write.
- A client generated using [Kubernetes client generation tools](https://github.com/kubernetes/code-generator) (generating one is an advanced undertaking, but some projects may provide a client along with the CRD or AA).
{{% /capture %}}
{{% capture whatsnext %}}
## {{% heading "whatsnext" %}}
* Learn how to [Extend the Kubernetes API with the aggregation layer](/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/).
* Learn how to [Extend the Kubernetes API with CustomResourceDefinition](/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definitions/).
{{% /capture %}}

View File

@ -2,11 +2,11 @@
reviewers:
title: Device Plugins
description: Use the Kubernetes device plugin framework to implement plugins for GPUs, NICs, FPGAs, InfiniBand, and similar resources that require vendor-specific setup.
content_template: templates/concept
content_type: concept
weight: 20
---
{{% capture overview %}}
<!-- overview -->
{{< feature-state for_k8s_version="v1.10" state="beta" >}}
Kubernetes provides a [device plugin framework](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/resource-management/device-plugin.md)
@ -19,9 +19,9 @@ The targeted devices include GPUs, high-performance NICs, FPGAs, InfiniBand adap
and other similar computing resources that may require vendor specific initialization
and setup.
{{% /capture %}}
{{% capture body %}}
<!-- body -->
## Device plugin registration
@ -225,12 +225,13 @@ Here are some examples of device plugin implementations:
* The [SR-IOV Network device plugin](https://github.com/intel/sriov-network-device-plugin)
* The [Xilinx FPGA device plugins](https://github.com/Xilinx/FPGA_as_a_Service/tree/master/k8s-fpga-device-plugin/trunk) for Xilinx FPGA devices
{{% /capture %}}
{{% capture whatsnext %}}
## {{% heading "whatsnext" %}}
* Learn about [scheduling GPU resources](/docs/tasks/manage-gpus/scheduling-gpus/) using device plugins
* Learn about [advertising extended resources](/docs/tasks/administer-cluster/extended-resource-node/) on a node
* Read about using [hardware acceleration for TLS ingress](https://kubernetes.io/blog/2019/04/24/hardware-accelerated-ssl/tls-termination-in-ingress-controllers-using-kubernetes-device-plugins-and-runtimeclass/) with Kubernetes
* Learn about the [Topology Manager](/docs/tasks/administer-cluster/topology-manager/)
{{% /capture %}}

View File

@ -4,12 +4,12 @@ reviewers:
- freehan
- thockin
title: Network Plugins
content_template: templates/concept
content_type: concept
weight: 10
---
{{% capture overview %}}
<!-- overview -->
{{< feature-state state="alpha" >}}
{{< caution >}}Alpha features can change rapidly. {{< /caution >}}
@ -19,9 +19,9 @@ Network plugins in Kubernetes come in a few flavors:
* CNI plugins: adhere to the appc/CNI specification, designed for interoperability.
* Kubenet plugin: implements basic `cbr0` using the `bridge` and `host-local` CNI plugins
{{% /capture %}}
{{% capture body %}}
<!-- body -->
## Installation
@ -166,8 +166,9 @@ This option is provided to the network-plugin; currently **only kubenet supports
* `--network-plugin=kubenet` specifies that we use the `kubenet` network plugin with CNI `bridge` and `host-local` plugins placed in `/opt/cni/bin` or `cni-bin-dir`.
* `--network-plugin-mtu=9001` specifies the MTU to use, currently only used by the `kubenet` network plugin.
{{% /capture %}}
{{% capture whatsnext %}}
{{% /capture %}}
## {{% heading "whatsnext" %}}

View File

@ -5,11 +5,11 @@ reviewers:
- lavalamp
- cheftako
- chenopis
content_template: templates/concept
content_type: concept
weight: 10
---
{{% capture overview %}}
<!-- overview -->
Kubernetes is highly configurable and extensible. As a result,
there is rarely a need to fork or submit patches to the Kubernetes
@ -22,10 +22,10 @@ their work environment. Developers who are prospective {{< glossary_tooltip text
useful as an introduction to what extension points and patterns
exist, and their trade-offs and limitations.
{{% /capture %}}
{{% capture body %}}
<!-- body -->
## Overview
@ -194,10 +194,11 @@ The scheduler also supports a
that permits a webhook backend (scheduler extension) to filter and prioritize
the nodes chosen for a pod.
{{% /capture %}}
{{% capture whatsnext %}}
## {{% heading "whatsnext" %}}
* Learn more about [Custom Resources](/docs/concepts/api-extension/custom-resources/)
* Learn about [Dynamic admission control](/docs/reference/access-authn-authz/extensible-admission-controllers/)
@ -207,4 +208,4 @@ the nodes chosen for a pod.
* Learn about [kubectl plugins](/docs/tasks/extend-kubectl/kubectl-plugins/)
* Learn about the [Operator pattern](/docs/concepts/extend-kubernetes/operator/)
{{% /capture %}}

View File

@ -1,20 +1,20 @@
---
title: Operator pattern
content_template: templates/concept
content_type: concept
weight: 30
---
{{% capture overview %}}
<!-- overview -->
Operators are software extensions to Kubernetes that make use of [custom
resources](/docs/concepts/extend-kubernetes/api-extension/custom-resources/)
to manage applications and their components. Operators follow
Kubernetes principles, notably the [control loop](/docs/concepts/#kubernetes-control-plane).
{{% /capture %}}
{{% capture body %}}
<!-- body -->
## Motivation
@ -113,9 +113,10 @@ Operator.
You also implement an Operator (that is, a Controller) using any language / runtime
that can act as a [client for the Kubernetes API](/docs/reference/using-api/client-libraries/).
{{% /capture %}}
{{% capture whatsnext %}}
## {{% heading "whatsnext" %}}
* Learn more about [Custom Resources](/docs/concepts/extend-kubernetes/api-extension/custom-resources/)
* Find ready-made operators on [OperatorHub.io](https://operatorhub.io/) to suit your use case
@ -129,4 +130,3 @@ that can act as a [client for the Kubernetes API](/docs/reference/using-api/clie
* Read [CoreOS' original article](https://coreos.com/blog/introducing-operators.html) that introduced the Operator pattern
* Read an [article](https://cloud.google.com/blog/products/containers-kubernetes/best-practices-for-building-kubernetes-operators-and-stateful-apps) from Google Cloud about best practices for building Operators
{{% /capture %}}

View File

@ -1,18 +1,18 @@
---
title: Poseidon-Firmament Scheduler
content_template: templates/concept
content_type: concept
weight: 80
---
{{% capture overview %}}
<!-- overview -->
{{< feature-state for_k8s_version="v1.6" state="alpha" >}}
The Poseidon-Firmament scheduler is an alternate scheduler that can be deployed alongside the default Kubernetes scheduler.
{{% /capture %}}
{{% capture body %}}
<!-- body -->
## Introduction
@ -102,10 +102,11 @@ Pod-by-pod schedulers, such as the Kubernetes default scheduler, process Pods in
These downsides of pod-by-pod schedulers are addressed by batching or bulk scheduling in Poseidon-Firmament scheduler. Processing several pods in a batch allows the scheduler to jointly consider their placement, and thus to find the best trade-off for the whole batch instead of one pod. At the same time it amortizes work across pods resulting in much higher throughput.
{{% /capture %}}
{{% capture whatsnext %}}
## {{% heading "whatsnext" %}}
* See [Poseidon-Firmament](https://github.com/kubernetes-sigs/poseidon#readme) on GitHub for more information.
* See the [design document](https://github.com/kubernetes-sigs/poseidon/blob/master/docs/design/README.md) for Poseidon.
* Read [Firmament: Fast, Centralized Cluster Scheduling at Scale](https://www.usenix.org/system/files/conference/osdi16/osdi16-gog.pdf), the academic paper on the Firmament scheduling design.
* If you'd like to contribute to Poseidon-Firmament, refer to the [developer setup instructions](https://github.com/kubernetes-sigs/poseidon/blob/master/docs/devel/README.md).
{{% /capture %}}

View File

@ -2,11 +2,11 @@
title: Service Catalog
reviewers:
- chenopis
content_template: templates/concept
content_type: concept
weight: 40
---
{{% capture overview %}}
<!-- overview -->
{{< glossary_definition term_id="service-catalog" length="all" prepend="Service Catalog is" >}}
A service broker, as defined by the [Open service broker API spec](https://github.com/openservicebrokerapi/servicebroker/blob/v2.13/spec.md), is an endpoint for a set of managed services offered and maintained by a third-party, which could be a cloud provider such as AWS, GCP, or Azure.
@ -14,10 +14,10 @@ Some examples of managed services are Microsoft Azure Cloud Queue, Amazon Simple
Using Service Catalog, a {{< glossary_tooltip text="cluster operator" term_id="cluster-operator" >}} can browse the list of managed services offered by a service broker, provision an instance of a managed service, and bind with it to make it available to an application in the Kubernetes cluster.
{{% /capture %}}
{{% capture body %}}
<!-- body -->
## Example use case
An {{< glossary_tooltip text="application developer" term_id="application-developer" >}} wants to use message queuing as part of their application running in a Kubernetes cluster.
@ -222,16 +222,17 @@ The following example describes how to map secret values into application enviro
key: topic
```
{{% /capture %}}
{{% capture whatsnext %}}
## {{% heading "whatsnext" %}}
* If you are familiar with {{< glossary_tooltip text="Helm Charts" term_id="helm-chart" >}}, [install Service Catalog using Helm](/docs/tasks/service-catalog/install-service-catalog-using-helm/) into your Kubernetes cluster. Alternatively, you can [install Service Catalog using the SC tool](/docs/tasks/service-catalog/install-service-catalog-using-sc/).
* View [sample service brokers](https://github.com/openservicebrokerapi/servicebroker/blob/master/gettingStarted.md#sample-service-brokers).
* Explore the [kubernetes-incubator/service-catalog](https://github.com/kubernetes-incubator/service-catalog) project.
* View [svc-cat.io](https://svc-cat.io/docs/).
{{% /capture %}}

View File

@ -2,14 +2,14 @@
reviewers:
- lavalamp
title: Kubernetes Components
content_template: templates/concept
content_type: concept
weight: 20
card:
name: concepts
weight: 20
---
{{% capture overview %}}
<!-- overview -->
When you deploy Kubernetes, you get a cluster.
{{< glossary_definition term_id="cluster" length="all" prepend="A Kubernetes cluster consists of">}}
@ -20,9 +20,9 @@ Here's the diagram of a Kubernetes cluster with all the components tied together
![Components of Kubernetes](/images/docs/components-of-kubernetes.png)
{{% /capture %}}
{{% capture body %}}
<!-- body -->
## Control Plane Components
The control plane's components make global decisions about the cluster (for example, scheduling), as well as detecting and responding to cluster events (for example, starting up a new {{< glossary_tooltip text="pod" term_id="pod">}} when a deployment's `replicas` field is unsatisfied).
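As a sketch of the reconciliation described above, a Deployment's `replicas` field states the desired Pod count that the control plane continuously works to satisfy (the name and image are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment   # illustrative name
spec:
  replicas: 3              # the control plane starts new Pods until 3 are running
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
```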
@ -122,10 +122,11 @@ about containers in a central database, and provides a UI for browsing that data
A [cluster-level logging](/docs/concepts/cluster-administration/logging/) mechanism is responsible for
saving container logs to a central log store with a search/browsing interface.
{{% /capture %}}
{{% capture whatsnext %}}
## {{% heading "whatsnext" %}}
* Learn about [Nodes](/docs/concepts/architecture/nodes/)
* Learn about [Controllers](/docs/concepts/architecture/controller/)
* Learn about [kube-scheduler](/docs/concepts/scheduling-eviction/kube-scheduler/)
* Read etcd's official [documentation](https://etcd.io/docs/)
{{% /capture %}}

View File

@ -2,14 +2,14 @@
reviewers:
- chenopis
title: The Kubernetes API
content_template: templates/concept
content_type: concept
weight: 30
card:
name: concepts
weight: 30
---
{{% capture overview %}}
<!-- overview -->
The core of Kubernetes' {{< glossary_tooltip text="control plane" term_id="control-plane" >}}
is the {{< glossary_tooltip text="API server" term_id="kube-apiserver" >}}. The API server
@ -21,9 +21,10 @@ The Kubernetes API lets you query and manipulate the state of objects in the Kub
API endpoints, resource types and samples are described in the [API Reference](/docs/reference/kubernetes-api/).
{{% /capture %}}
{{% capture body %}}
<!-- body -->
## API changes
@ -166,8 +167,9 @@ For example: to enable deployments and daemonsets, set
Kubernetes stores its serialized state in terms of the API resources by writing them into
{{< glossary_tooltip term_id="etcd" >}}.
{{% /capture %}}
{{% capture whatsnext %}}
## {{% heading "whatsnext" %}}
[Controlling API Access](/docs/reference/access-authn-authz/controlling-access/) describes
how the cluster manages authentication and authorization for API access.
@ -176,5 +178,3 @@ Overall API conventions are described in the
document.
API endpoints, resource types and samples are described in the [API Reference](/docs/reference/kubernetes-api/).
{{% /capture %}}

View File

@ -5,18 +5,18 @@ reviewers:
title: What is Kubernetes?
description: >
Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services, that facilitates both declarative configuration and automation. It has a large, rapidly growing ecosystem. Kubernetes services, support, and tools are widely available.
content_template: templates/concept
content_type: concept
weight: 10
card:
name: concepts
weight: 10
---
{{% capture overview %}}
<!-- overview -->
This page is an overview of Kubernetes.
{{% /capture %}}
{{% capture body %}}
<!-- body -->
Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services, that facilitates both declarative configuration and automation. It has a large, rapidly growing ecosystem. Kubernetes services, support, and tools are widely available.
The name Kubernetes originates from Greek, meaning helmsman or pilot. Google open-sourced the Kubernetes project in 2014. Kubernetes combines [over 15 years of Google's experience](/blog/2015/04/borg-predecessor-to-kubernetes/) running production workloads at scale with best-of-breed ideas and practices from the community.
@ -86,9 +86,10 @@ Kubernetes:
* Does not provide nor adopt any comprehensive machine configuration, maintenance, management, or self-healing systems.
* Additionally, Kubernetes is not a mere orchestration system. In fact, it eliminates the need for orchestration. The technical definition of orchestration is execution of a defined workflow: first do A, then B, then C. In contrast, Kubernetes comprises a set of independent, composable control processes that continuously drive the current state towards the provided desired state. It shouldn't matter how you get from A to C. Centralized control is also not required. This results in a system that is easier to use and more powerful, robust, resilient, and extensible.
{{% /capture %}}
{{% capture whatsnext %}}
## {{% heading "whatsnext" %}}
* Take a look at the [Kubernetes Components](/docs/concepts/overview/components/)
* Ready to [Get Started](/docs/setup/)?
{{% /capture %}}

View File

@ -1,15 +1,15 @@
---
title: Annotations
content_template: templates/concept
content_type: concept
weight: 50
---
{{% capture overview %}}
<!-- overview -->
You can use Kubernetes annotations to attach arbitrary non-identifying metadata
to objects. Clients such as tools and libraries can retrieve this metadata.
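A sketch of annotations attached in an object's `metadata` (the keys and values below are illustrative assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: annotations-demo          # illustrative name
  annotations:
    imageregistry: "https://hub.docker.com/"
    example.com/build: "ci-1234"  # hypothetical build reference
spec:
  containers:
  - name: nginx
    image: nginx:1.14.2
```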
{{% /capture %}}
{{% capture body %}}
<!-- body -->
## Attaching metadata to objects
You can use either labels or annotations to attach metadata to Kubernetes
@ -88,10 +88,11 @@ spec:
```
{{% /capture %}}
{{% capture whatsnext %}}
## {{% heading "whatsnext" %}}
Learn more about [Labels and Selectors](/docs/concepts/overview/working-with-objects/labels/).
{{% /capture %}}

View File

@ -1,18 +1,18 @@
---
title: Recommended Labels
content_template: templates/concept
content_type: concept
---
{{% capture overview %}}
<!-- overview -->
You can visualize and manage Kubernetes objects with more tools than kubectl and
the dashboard. A common set of labels allows tools to work interoperably, describing
objects in a common manner that all tools can understand.
In addition to supporting tooling, the recommended labels describe applications
in a way that can be queried.
{{% /capture %}}
{{% capture body %}}
<!-- body -->
The metadata is organized around the concept of an _application_. Kubernetes is not
a platform as a service (PaaS) and doesn't have or enforce a formal notion of an application.
Instead, applications are informal and described with metadata. The definition of
@ -170,4 +170,4 @@ metadata:
With the MySQL `StatefulSet` and `Service` you'll notice information about both MySQL and Wordpress, the broader application, are included.
{{% /capture %}}

View File

@ -1,17 +1,17 @@
---
title: Understanding Kubernetes Objects
content_template: templates/concept
content_type: concept
weight: 10
card:
name: concepts
weight: 40
---
{{% capture overview %}}
<!-- overview -->
This page explains how Kubernetes objects are represented in the Kubernetes API, and how you can express them in `.yaml` format.
{{% /capture %}}
{{% capture body %}}
<!-- body -->
## Understanding Kubernetes objects {#kubernetes-objects}
*Kubernetes objects* are persistent entities in the Kubernetes system. Kubernetes uses these entities to represent the state of your cluster. Specifically, they can describe:
@ -87,12 +87,13 @@ For example, the `spec` format for a Pod can be found in
and the `spec` format for a Deployment can be found in
[DeploymentSpec v1 apps](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#deploymentspec-v1-apps).
{{% /capture %}}
{{% capture whatsnext %}}
## {{% heading "whatsnext" %}}
* [Kubernetes API overview](/docs/reference/using-api/api-overview/) explains some more API concepts
* Learn about the most important basic Kubernetes objects, such as [Pod](/docs/concepts/workloads/pods/pod-overview/).
* Learn about [controllers](/docs/concepts/architecture/controller/) in Kubernetes
{{% /capture %}}

View File

@ -2,11 +2,11 @@
reviewers:
- mikedanese
title: Labels and Selectors
content_template: templates/concept
content_type: concept
weight: 40
---
{{% capture overview %}}
<!-- overview -->
_Labels_ are key/value pairs that are attached to objects, such as pods.
Labels are intended to be used to specify identifying attributes of objects that are meaningful and relevant to users, but do not directly imply semantics to the core system.
@ -24,10 +24,10 @@ Each object can have a set of key/value labels defined. Each Key must be unique
Labels allow for efficient queries and watches and are ideal for use in UIs and CLIs. Non-identifying information should be recorded using [annotations](/docs/concepts/overview/working-with-objects/annotations/).
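A sketch of labels set in an object's `metadata` (the keys and values are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: label-demo
  labels:
    environment: production
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.14.2
```

Selectors such as `environment=production,app=nginx` can then match this Pod.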
{{% /capture %}}
{{% capture body %}}
<!-- body -->
## Motivation
@ -228,4 +228,4 @@ selector:
One use case for selecting over labels is to constrain the set of nodes onto which a pod can schedule.
See the documentation on [node selection](/docs/concepts/scheduling-eviction/assign-pod-node/) for more information.
{{% /capture %}}

View File

@ -3,11 +3,11 @@ reviewers:
- mikedanese
- thockin
title: Object Names and IDs
content_template: templates/concept
content_type: concept
weight: 20
---
{{% capture overview %}}
<!-- overview -->
Each object in your cluster has a [_Name_](#names) that is unique for that type of resource.
Every Kubernetes object also has a [_UID_](#uids) that is unique across your whole cluster.
@ -16,9 +16,9 @@ For example, you can only have one Pod named `myapp-1234` within the same [names
For non-unique user-provided attributes, Kubernetes provides [labels](/docs/concepts/overview/working-with-objects/labels/) and [annotations](/docs/concepts/overview/working-with-objects/annotations/).
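For instance, the `myapp-1234` name mentioned above is the client-chosen `metadata.name`; the cluster assigns `metadata.uid` on creation (the container details below are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  # client-chosen name, unique per resource type within a namespace;
  # Kubernetes sets metadata.uid itself when the object is created
  name: myapp-1234
spec:
  containers:
  - name: myapp
    image: nginx:1.14.2
```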
{{% /capture %}}
{{% capture body %}}
<!-- body -->
## Names
@ -81,8 +81,9 @@ Some resource types have additional restrictions on their names.
Kubernetes UIDs are universally unique identifiers (also known as UUIDs).
UUIDs are standardized as ISO/IEC 9834-8 and as ITU-T X.667.
{{% /capture %}}
{{% capture whatsnext %}}
## {{% heading "whatsnext" %}}
* Read about [labels](/docs/concepts/overview/working-with-objects/labels/) in Kubernetes.
* See the [Identifiers and Names in Kubernetes](https://git.k8s.io/community/contributors/design-proposals/architecture/identifiers.md) design document.
{{% /capture %}}

View File

@ -4,19 +4,19 @@ reviewers:
- mikedanese
- thockin
title: Namespaces
content_template: templates/concept
content_type: concept
weight: 30
---
{{% capture overview %}}
<!-- overview -->
Kubernetes supports multiple virtual clusters backed by the same physical cluster.
These virtual clusters are called namespaces.
{{% /capture %}}
{{% capture body %}}
<!-- body -->
## When to Use Multiple Namespaces
@ -112,11 +112,12 @@ kubectl api-resources --namespaced=true
kubectl api-resources --namespaced=false
```
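A namespace is itself a Kubernetes object; a minimal manifest sketch (the name is illustrative):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: development   # illustrative namespace name
```

Namespaced objects are then placed into it by setting their own `metadata.namespace` field.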
{{% /capture %}}
{{% capture whatsnext %}}
## {{% heading "whatsnext" %}}
* Learn more about [creating a new namespace](/docs/tasks/administer-cluster/namespaces/#creating-a-new-namespace).
* Learn more about [deleting a namespace](/docs/tasks/administer-cluster/namespaces/#deleting-a-namespace).
{{% /capture %}}

View File

@ -1,17 +1,17 @@
---
title: Kubernetes Object Management
content_template: templates/concept
content_type: concept
weight: 15
---
{{% capture overview %}}
<!-- overview -->
The `kubectl` command-line tool supports several different ways to create and manage
Kubernetes objects. This document provides an overview of the different
approaches. Read the [Kubectl book](https://kubectl.docs.kubernetes.io) for
details of managing objects by Kubectl.
{{% /capture %}}
{{% capture body %}}
<!-- body -->
## Management techniques
@ -173,9 +173,10 @@ Disadvantages compared to imperative object configuration:
- Declarative object configuration is harder to debug and understand results when they are unexpected.
- Partial updates using diffs create complex merge and patch operations.
{{% /capture %}}
{{% capture whatsnext %}}
## {{% heading "whatsnext" %}}
- [Managing Kubernetes Objects Using Imperative Commands](/docs/tasks/manage-kubernetes-objects/imperative-command/)
- [Managing Kubernetes Objects Using Object Configuration (Imperative)](/docs/tasks/manage-kubernetes-objects/imperative-config/)
@ -185,4 +186,4 @@ Disadvantages compared to imperative object configuration:
- [Kubectl Book](https://kubectl.docs.kubernetes.io)
- [Kubernetes API Reference](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/)
{{% /capture %}}

View File

@ -2,20 +2,20 @@
reviewers:
- nelvadas
title: Limit Ranges
content_template: templates/concept
content_type: concept
weight: 10
---
{{% capture overview %}}
<!-- overview -->
By default, containers run with unbounded [compute resources](/docs/user-guide/compute-resources) on a Kubernetes cluster.
With resource quotas, cluster administrators can restrict resource consumption and creation on a {{< glossary_tooltip text="namespace" term_id="namespace" >}} basis.
Within a namespace, a Pod or Container can consume as much CPU and memory as defined by the namespace's resource quota. There is a concern that one Pod or Container could monopolize all available resources. A LimitRange is a policy to constrain resource allocations (to Pods or Containers) in a namespace.
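A sketch of such a policy, applying default CPU and memory limits to Containers that set none (the values are illustrative assumptions):

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: cpu-mem-limit-range   # illustrative name
spec:
  limits:
  - type: Container
    default:            # applied when a container specifies no limit
      cpu: 500m
      memory: 256Mi
    defaultRequest:     # applied when a container specifies no request
      cpu: 250m
      memory: 128Mi
```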
{{% /capture %}}
{{% capture body %}}
<!-- body -->
A _LimitRange_ provides constraints that can:
@ -56,9 +56,10 @@ there may be contention for resources. In this case, the Containers or Pods will
Neither contention nor changes to a LimitRange will affect already created resources.
{{% /capture %}}
{{% capture whatsnext %}}
## {{% heading "whatsnext" %}}
Refer to the [LimitRanger design document](https://git.k8s.io/community/contributors/design-proposals/resource-management/admission_control_limit_range.md) for more information.
@ -72,4 +73,4 @@ For examples on using limits, see:
- a [detailed example on configuring quota per namespace](/docs/tasks/administer-cluster/quota-memory-cpu-namespace/).
{{% /capture %}}

View File

@ -3,21 +3,21 @@ reviewers:
- pweil-
- tallclair
title: Pod Security Policies
content_template: templates/concept
content_type: concept
weight: 20
---
{{% capture overview %}}
<!-- overview -->
{{< feature-state state="beta" >}}
Pod Security Policies enable fine-grained authorization of pod creation and
updates.
{{% /capture %}}
{{% capture body %}}
<!-- body -->
## What is a Pod Security Policy?
@ -631,12 +631,13 @@ By default, all safe sysctls are allowed.
Refer to the [Sysctl documentation](
/docs/concepts/cluster-administration/sysctl-cluster/#podsecuritypolicy).
{{% /capture %}}
{{% capture whatsnext %}}
## {{% heading "whatsnext" %}}
See [Pod Security Standards](/docs/concepts/security/pod-security-standards/) for policy recommendations.
Refer to [Pod Security Policy Reference](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#podsecuritypolicy-v1beta1-policy) for the API details.
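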
{{% /capture %}}

View File

@ -2,21 +2,21 @@
reviewers:
- derekwaynecarr
title: Resource Quotas
content_template: templates/concept
content_type: concept
weight: 10
---
{{% capture overview %}}
<!-- overview -->
When several users or teams share a cluster with a fixed number of nodes,
there is a concern that one team could use more than its fair share of resources.
Resource quotas are a tool for administrators to address this concern.
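A sketch of such a quota, capping aggregate requests and object counts in one namespace (the namespace and values are illustrative assumptions):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-quota       # illustrative name
  namespace: team-a      # hypothetical namespace being constrained
spec:
  hard:
    requests.cpu: "4"    # total CPU requests allowed in the namespace
    requests.memory: 8Gi
    pods: "10"           # maximum number of Pods
```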
{{% /capture %}}
{{% capture body %}}
<!-- body -->
A resource quota, defined by a `ResourceQuota` object, provides constraints that limit
aggregate resource consumption per namespace. It can limit the quantity of objects that can
@ -596,10 +596,11 @@ See [LimitedResources](https://github.com/kubernetes/kubernetes/pull/36765) and
See a [detailed example for how to use resource quota](/docs/tasks/administer-cluster/quota-api-object/).
{{% /capture %}}
{{% capture whatsnext %}}
## {{% heading "whatsnext" %}}
See [ResourceQuota design doc](https://git.k8s.io/community/contributors/design-proposals/resource-management/admission_control_resource_quota.md) for more information.
{{% /capture %}}

View File

@ -4,12 +4,12 @@ reviewers:
- kevin-wangzefeng
- bsalamat
title: Assigning Pods to Nodes
content_template: templates/concept
content_type: concept
weight: 50
---
{{% capture overview %}}
<!-- overview -->
You can constrain a {{< glossary_tooltip text="Pod" term_id="pod" >}} to only be able to run on particular
{{< glossary_tooltip text="Node(s)" term_id="node" >}}, or to prefer to run on particular nodes.
@ -21,9 +21,9 @@ but there are some circumstances where you may want more control on a node where
that a pod ends up on a machine with an SSD attached to it, or to co-locate pods from two different
services that communicate a lot into the same availability zone.
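For the SSD case mentioned above, a `nodeSelector` sketch might look like this (it assumes the relevant nodes carry a `disktype=ssd` label; names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: ssd-pod          # illustrative name
spec:
  nodeSelector:
    disktype: ssd        # assumes nodes with SSDs are labeled disktype=ssd
  containers:
  - name: app
    image: nginx:1.14.2
```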
{{% /capture %}}
{{% capture body %}}
<!-- body -->
## nodeSelector
@ -388,9 +388,10 @@ spec:
The above pod will run on the node kube-01.
{{% /capture %}}
{{% capture whatsnext %}}
## {{% heading "whatsnext" %}}
[Taints](/docs/concepts/scheduling-eviction/taint-and-toleration/) allow a Node to *repel* a set of Pods.
@ -402,4 +403,4 @@ Once a Pod is assigned to a Node, the kubelet runs the Pod and allocates node-lo
The [topology manager](/docs/tasks/administer-cluster/topology-manager/) can take part in node-level
resource allocation decisions.
{{% /capture %}}

View File

@ -1,18 +1,18 @@
---
title: Kubernetes Scheduler
content_template: templates/concept
content_type: concept
weight: 10
---
{{% capture overview %}}
<!-- overview -->
In Kubernetes, _scheduling_ refers to making sure that {{< glossary_tooltip text="Pods" term_id="pod" >}}
are matched to {{< glossary_tooltip text="Nodes" term_id="node" >}} so that
{{< glossary_tooltip term_id="kubelet" >}} can run them.
{{% /capture %}}
{{% capture body %}}
<!-- body -->
## Scheduling overview {#scheduling}
@ -86,12 +86,13 @@ of the scheduler:
`QueueSort`, `Filter`, `Score`, `Bind`, `Reserve`, `Permit`, and others. You
can also configure the kube-scheduler to run different profiles.
{{% /capture %}}
{{% capture whatsnext %}}
## {{% heading "whatsnext" %}}
* Read about [scheduler performance tuning](/docs/concepts/scheduling-eviction/scheduler-perf-tuning/)
* Read about [Pod topology spread constraints](/docs/concepts/workloads/pods/pod-topology-spread-constraints/)
* Read the [reference documentation](/docs/reference/command-line-tools-reference/kube-scheduler/) for kube-scheduler
* Learn about [configuring multiple schedulers](/docs/tasks/administer-cluster/configure-multiple-schedulers/)
* Learn about [topology management policies](/docs/tasks/administer-cluster/topology-manager/)
* Learn about [Pod Overhead](/docs/concepts/configuration/pod-overhead/)
{{% /capture %}}

View File

@ -2,11 +2,11 @@
reviewers:
- bsalamat
title: Scheduler Performance Tuning
content_template: templates/concept
content_type: concept
weight: 70
---
{{% capture overview %}}
<!-- overview -->
{{< feature-state for_k8s_version="v1.14" state="beta" >}}
@@ -24,9 +24,9 @@ in a process called _Binding_.
This page explains performance tuning optimizations that are relevant for
large Kubernetes clusters.
{{% /capture %}}
{{% capture body %}}
<!-- body -->
In large clusters, you can tune the scheduler's behaviour balancing
scheduling outcomes between latency (new Pods are placed quickly) and
@@ -164,4 +164,4 @@ Node 1, Node 5, Node 2, Node 6, Node 3, Node 4
After going over all the Nodes, it goes back to Node 1.
{{% /capture %}}
@@ -2,11 +2,11 @@
reviewers:
- ahg-g
title: Scheduling Framework
content_template: templates/concept
content_type: concept
weight: 60
---
{{% capture overview %}}
<!-- overview -->
{{< feature-state for_k8s_version="v1.15" state="alpha" >}}
@@ -20,9 +20,9 @@ framework.
[kep]: https://github.com/kubernetes/enhancements/blob/master/keps/sig-scheduling/20180409-scheduling-framework.md
{{% /capture %}}
{{% capture body %}}
<!-- body -->
# Framework workflow
@@ -239,4 +239,3 @@ If you are using Kubernetes v1.18 or later, you can configure a set of plugins a
a scheduler profile and then define multiple profiles to fit various kinds of workload.
Learn more at [multiple profiles](/docs/reference/scheduling/profiles/#multiple-profiles).
{{% /capture %}}
@@ -4,12 +4,12 @@ reviewers:
- kevin-wangzefeng
- bsalamat
title: Taints and Tolerations
content_template: templates/concept
content_type: concept
weight: 40
---
{{% capture overview %}}
<!-- overview -->
[_Node affinity_](/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity),
is a property of {{< glossary_tooltip text="Pods" term_id="pod" >}} that *attracts* them to
a set of {{< glossary_tooltip text="nodes" term_id="node" >}} (either as a preference or a
@@ -22,9 +22,9 @@ Taints and tolerations work together to ensure that pods are not scheduled
onto inappropriate nodes. One or more taints are applied to a node; this
marks that the node should not accept any pods that do not tolerate the taints.
{{% /capture %}}
{{% capture body %}}
<!-- body -->
## Concepts
@@ -282,9 +282,10 @@ tolerations to all daemons, to prevent DaemonSets from breaking.
Adding these tolerations ensures backward compatibility. You can also add
arbitrary tolerations to DaemonSets.
{{% /capture %}}
{{% capture whatsnext %}}
## {{% heading "whatsnext" %}}
* Read about [out of resource handling](/docs/tasks/administer-cluster/out-of-resource/) and how you can configure it
* Read about [pod priority](/docs/concepts/configuration/pod-priority-preemption/)
{{% /capture %}}
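As context for the Taints and Tolerations page touched above: a taint is applied to a node (for example with `kubectl taint nodes node1 key=value:NoSchedule`), and a Pod opts in with a matching toleration. A minimal sketch, with placeholder key and value names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
  # This toleration matches the hypothetical taint key=value:NoSchedule,
  # allowing the Pod to be scheduled onto tainted nodes.
  tolerations:
  - key: "key"
    operator: "Equal"
    value: "value"
    effect: "NoSchedule"
```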
@@ -2,13 +2,13 @@
reviewers:
- zparnold
title: Overview of Cloud Native Security
content_template: templates/concept
content_type: concept
weight: 1
---
{{< toc >}}
{{% capture overview %}}
<!-- overview -->
Kubernetes Security (and security in general) is an immense topic that has many
highly interrelated parts. In today's era where open source software is
integrated into many of the systems that help web applications run,
@@ -17,9 +17,9 @@ think about security holistically. This guide will define a mental model
for some general concepts surrounding Cloud Native Security. The mental model is completely arbitrary
and you should only use it if it helps you think about where to secure your software
stack.
{{% /capture %}}
{{% capture body %}}
<!-- body -->
## The 4C's of Cloud Native Security
Let's start with a diagram that may help you understand how you can think about security in layers.
@@ -153,12 +153,13 @@ Most of the above mentioned suggestions can actually be automated in your code
delivery pipeline as part of a series of checks in security. To learn about a
more "Continuous Hacking" approach to software delivery, [this article](https://thenewstack.io/beyond-ci-cd-how-continuous-hacking-of-docker-containers-and-pipeline-driven-security-keeps-ygrene-secure/) provides more detail.
{{% /capture %}}
{{% capture whatsnext %}}
## {{% heading "whatsnext" %}}
* Read about [network policies for Pods](/docs/concepts/services-networking/network-policies/)
* Read about [securing your cluster](/docs/tasks/administer-cluster/securing-a-cluster/)
* Read about [API access control](/docs/reference/access-authn-authz/controlling-access/)
* Read about [data encryption in transit](/docs/tasks/tls/managing-tls-in-a-cluster/) for the control plane
* Read about [data encryption at rest](/docs/tasks/administer-cluster/encrypt-data/)
* Read about [Secrets in Kubernetes](/docs/concepts/configuration/secret/)
{{% /capture %}}
@@ -2,11 +2,11 @@
reviewers:
- tallclair
title: Pod Security Standards
content_template: templates/concept
content_type: concept
weight: 10
---
{{% capture overview %}}
<!-- overview -->
Security settings for Pods are typically applied by using [security
contexts](/docs/tasks/configure-pod-container/security-context/). Security Contexts allow for the
@@ -21,9 +21,9 @@ However, numerous means of policy enforcement have arisen that augment or replac
PodSecurityPolicy. The intent of this page is to detail recommended Pod security profiles, decoupled
from any specific instantiation.
{{% /capture %}}
{{% capture body %}}
<!-- body -->
## Policy Types
@@ -322,4 +322,4 @@ kernel. This allows for workloads requiring heightened permissions to still be i
Additionally, the protection of sandboxed workloads is highly dependent on the method of
sandboxing. As such, no single recommended policy is recommended for all sandboxed workloads.
{{% /capture %}}
@@ -3,19 +3,19 @@ reviewers:
- rickypai
- thockin
title: Adding entries to Pod /etc/hosts with HostAliases
content_template: templates/concept
content_type: concept
weight: 60
---
{{< toc >}}
{{% capture overview %}}
<!-- overview -->
Adding entries to a Pod's /etc/hosts file provides Pod-level override of hostname resolution when DNS and other options are not applicable. In 1.7, users can add these custom entries with the HostAliases field in PodSpec.
Modification not using HostAliases is not suggested because the file is managed by the kubelet and can be overwritten during Pod creation/restart.
{{% /capture %}}
{{% capture body %}}
<!-- body -->
## Default Hosts File Content
@@ -125,5 +125,5 @@ overwritten whenever the `hosts` file is remounted by Kubelet in the event of
a container restart or a Pod reschedule. Thus, it is not suggested to modify
the contents of the file.
{{% /capture %}}
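For reference, the `hostAliases` field described by the page above lives in the PodSpec; a minimal sketch (the IP and hostnames are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hostaliases-pod
spec:
  # Extra entries written into the Pod's /etc/hosts by the kubelet.
  hostAliases:
  - ip: "127.0.0.1"
    hostnames:
    - "foo.local"
    - "bar.local"
  containers:
  - name: cat-hosts
    image: busybox
    command: ["cat", "/etc/hosts"]
```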
@@ -4,12 +4,12 @@ reviewers:
- lavalamp
- thockin
title: Connecting Applications with Services
content_template: templates/concept
content_type: concept
weight: 30
---
{{% capture overview %}}
<!-- overview -->
## The Kubernetes model for connecting containers
@@ -21,9 +21,9 @@ Coordinating port allocations across multiple developers or teams that provide c
This guide uses a simple nginx server to demonstrate proof of concept.
{{% /capture %}}
{{% capture body %}}
<!-- body -->
## Exposing pods to the cluster
@@ -418,12 +418,13 @@ LoadBalancer Ingress: a320587ffd19711e5a37606cf4a74574-1142138393.us-east-1.el
...
```
{{% /capture %}}
{{% capture whatsnext %}}
## {{% heading "whatsnext" %}}
* Learn more about [Using a Service to Access an Application in a Cluster](/docs/tasks/access-application-cluster/service-access-application-cluster/)
* Learn more about [Connecting a Front End to a Back End Using a Service](/docs/tasks/access-application-cluster/connecting-frontend-backend/)
* Learn more about [Creating an External Load Balancer](/docs/tasks/access-application-cluster/create-external-load-balancer/)
{{% /capture %}}
@@ -3,14 +3,14 @@ reviewers:
- davidopp
- thockin
title: DNS for Services and Pods
content_template: templates/concept
content_type: concept
weight: 20
---
{{% capture overview %}}
<!-- overview -->
This page provides an overview of DNS support by Kubernetes.
{{% /capture %}}
{{% capture body %}}
<!-- body -->
## Introduction
@@ -262,11 +262,11 @@ The availability of Pod DNS Config and DNS Policy "`None`" is shown as below.
| 1.10 | Beta (on by default)|
| 1.9 | Alpha |
{{% /capture %}}
{{% capture whatsnext %}}
## {{% heading "whatsnext" %}}
For guidance on administering DNS configurations, check
[Configure DNS Service](/docs/tasks/administer-cluster/dns-custom-nameservers/)
{{% /capture %}}
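As context for the Pod DNS Config feature mentioned above: a Pod can opt out of cluster DNS with `dnsPolicy: "None"` and supply its own resolver settings via `dnsConfig`. A sketch with placeholder nameserver and search-domain values:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dns-example
spec:
  containers:
  - name: test
    image: nginx
  # "None" ignores the cluster DNS settings; dnsConfig supplies everything.
  dnsPolicy: "None"
  dnsConfig:
    nameservers:
    - 1.2.3.4
    searches:
    - ns1.svc.cluster-domain.example
    options:
    - name: ndots
      value: "2"
```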
@@ -9,11 +9,11 @@ feature:
description: >
Allocation of IPv4 and IPv6 addresses to Pods and Services
content_template: templates/concept
content_type: concept
weight: 70
---
{{% capture overview %}}
<!-- overview -->
{{< feature-state for_k8s_version="v1.16" state="alpha" >}}
@@ -21,9 +21,9 @@ weight: 70
If you enable IPv4/IPv6 dual-stack networking for your Kubernetes cluster, the cluster will support the simultaneous assignment of both IPv4 and IPv6 addresses.
{{% /capture %}}
{{% capture body %}}
<!-- body -->
## Supported Features
@@ -103,10 +103,11 @@ The use of publicly routable and non-publicly routable IPv6 address blocks is ac
* Kubenet forces IPv4,IPv6 positional reporting of IPs (--cluster-cidr)
{{% /capture %}}
{{% capture whatsnext %}}
## {{% heading "whatsnext" %}}
* [Validate IPv4/IPv6 dual-stack](/docs/tasks/network/validate-dual-stack) networking
{{% /capture %}}
@@ -2,12 +2,12 @@
reviewers:
- freehan
title: EndpointSlices
content_template: templates/concept
content_type: concept
weight: 15
---
{{% capture overview %}}
<!-- overview -->
{{< feature-state for_k8s_version="v1.17" state="beta" >}}
@@ -15,9 +15,9 @@ _EndpointSlices_ provide a simple way to track network endpoints within a
Kubernetes cluster. They offer a more scalable and extensible alternative to
Endpoints.
{{% /capture %}}
{{% capture body %}}
<!-- body -->
## Motivation
@@ -175,11 +175,12 @@ necessary soon anyway. Rolling updates of Deployments also provide a natural
repacking of EndpointSlices with all pods and their corresponding endpoints
getting replaced.
{{% /capture %}}
{{% capture whatsnext %}}
## {{% heading "whatsnext" %}}
* [Enabling EndpointSlices](/docs/tasks/administer-cluster/enabling-endpointslices)
* Read [Connecting Applications with Services](/docs/concepts/services-networking/connect-applications-service/)
{{% /capture %}}
@@ -1,11 +1,11 @@
---
title: Ingress Controllers
reviewers:
content_template: templates/concept
content_type: concept
weight: 40
---
{{% capture overview %}}
<!-- overview -->
In order for the Ingress resource to work, the cluster must have an ingress controller running.
@@ -16,9 +16,9 @@ that best fits your cluster.
Kubernetes as a project currently supports and maintains [GCE](https://git.k8s.io/ingress-gce/README.md) and
[nginx](https://git.k8s.io/ingress-nginx/README.md) controllers.
{{% /capture %}}
{{% capture body %}}
<!-- body -->
## Additional controllers
@@ -64,11 +64,12 @@ controllers operate slightly differently.
Make sure you review your ingress controller's documentation to understand the caveats of choosing it.
{{< /note >}}
{{% /capture %}}
{{% capture whatsnext %}}
## {{% heading "whatsnext" %}}
* Learn more about [Ingress](/docs/concepts/services-networking/ingress/).
* [Set up Ingress on Minikube with the NGINX Controller](/docs/tasks/access-application-cluster/ingress-minikube).
{{% /capture %}}
@@ -2,16 +2,16 @@
reviewers:
- bprashanth
title: Ingress
content_template: templates/concept
content_type: concept
weight: 40
---
{{% capture overview %}}
<!-- overview -->
{{< feature-state for_k8s_version="v1.1" state="beta" >}}
{{< glossary_definition term_id="ingress" length="all" >}}
{{% /capture %}}
{{% capture body %}}
<!-- body -->
## Terminology
@@ -542,10 +542,11 @@ You can expose a Service in multiple ways that don't directly involve the Ingres
* Use [Service.Type=LoadBalancer](/docs/concepts/services-networking/service/#loadbalancer)
* Use [Service.Type=NodePort](/docs/concepts/services-networking/service/#nodeport)
{{% /capture %}}
{{% capture whatsnext %}}
## {{% heading "whatsnext" %}}
* Learn about the [Ingress API](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#ingress-v1beta1-networking-k8s-io)
* Learn about [Ingress Controllers](/docs/concepts/services-networking/ingress-controllers/)
* [Set up Ingress on Minikube with the NGINX Controller](/docs/tasks/access-application-cluster/ingress-minikube)
{{% /capture %}}
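For context, a minimal single-rule Ingress of the era this change targets (the `networking.k8s.io/v1beta1` API; the path and backend Service name are illustrative) might look like:

```yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
  - http:
      paths:
      - path: /testpath
        # Traffic matching /testpath is forwarded to the hypothetical
        # Service "test" on port 80.
        backend:
          serviceName: test
          servicePort: 80
```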
@@ -4,20 +4,20 @@ reviewers:
- caseydavenport
- danwinship
title: Network Policies
content_template: templates/concept
content_type: concept
weight: 50
---
{{< toc >}}
{{% capture overview %}}
<!-- overview -->
A network policy is a specification of how groups of {{< glossary_tooltip text="pods" term_id="pod">}} are allowed to communicate with each other and other network endpoints.
NetworkPolicy resources use {{< glossary_tooltip text="labels" term_id="label">}} to select pods and define rules which specify what traffic is allowed to the selected pods.
{{% /capture %}}
{{% capture body %}}
<!-- body -->
## Prerequisites
Network policies are implemented by the [network plugin](/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/). To use network policies, you must be using a networking solution which supports NetworkPolicy. Creating a NetworkPolicy resource without a controller that implements it will have no effect.
@@ -215,12 +215,13 @@ You must be using a {{< glossary_tooltip text="CNI" term_id="cni" >}} plugin tha
{{< /note >}}
{{% /capture %}}
{{% capture whatsnext %}}
## {{% heading "whatsnext" %}}
- See the [Declare Network Policy](/docs/tasks/administer-cluster/declare-network-policy/)
walkthrough for further examples.
- See more [recipes](https://github.com/ahmetb/kubernetes-network-policy-recipes) for common scenarios enabled by the NetworkPolicy resource.
{{% /capture %}}
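As a concrete illustration of the NetworkPolicy resource described above, a common starting point is a "default deny" policy that selects every Pod in a namespace and allows no ingress traffic:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  # An empty podSelector selects all Pods in the policy's namespace.
  podSelector: {}
  policyTypes:
  - Ingress
```

Remember that this only takes effect when the cluster's network plugin implements NetworkPolicy.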
@@ -8,12 +8,12 @@ feature:
description: >
Routing of service traffic based upon cluster topology.
content_template: templates/concept
content_type: concept
weight: 10
---
{{% capture overview %}}
<!-- overview -->
{{< feature-state for_k8s_version="v1.17" state="alpha" >}}
@@ -22,9 +22,9 @@ topology of the cluster. For example, a service can specify that traffic be
preferentially routed to endpoints that are on the same Node as the client, or
in the same availability zone.
{{% /capture %}}
{{% capture body %}}
<!-- body -->
## Introduction
@@ -192,11 +192,12 @@ spec:
```
{{% /capture %}}
{{% capture whatsnext %}}
## {{% heading "whatsnext" %}}
* Read about [enabling Service Topology](/docs/tasks/administer-cluster/enabling-service-topology)
* Read [Connecting Applications with Services](/docs/concepts/services-networking/connect-applications-service/)
{{% /capture %}}
@@ -7,12 +7,12 @@ feature:
description: >
No need to modify your application to use an unfamiliar service discovery mechanism. Kubernetes gives Pods their own IP addresses and a single DNS name for a set of Pods, and can load-balance across them.
content_template: templates/concept
content_type: concept
weight: 10
---
{{% capture overview %}}
<!-- overview -->
{{< glossary_definition term_id="service" length="short" >}}
@@ -20,9 +20,9 @@ With Kubernetes you don't need to modify your application to use an unfamiliar s
Kubernetes gives Pods their own IP addresses and a single DNS name for a set of Pods,
and can load-balance across them.
{{% /capture %}}
{{% capture body %}}
<!-- body -->
## Motivation
@@ -1227,12 +1227,13 @@ SCTP is not supported on Windows based nodes.
The kube-proxy does not support the management of SCTP associations when it is in userspace mode.
{{< /warning >}}
{{% /capture %}}
{{% capture whatsnext %}}
## {{% heading "whatsnext" %}}
* Read [Connecting Applications with Services](/docs/concepts/services-networking/connect-applications-service/)
* Read about [Ingress](/docs/concepts/services-networking/ingress/)
* Read about [EndpointSlices](/docs/concepts/services-networking/endpoint-slices/)
{{% /capture %}}
@@ -5,11 +5,11 @@ reviewers:
- thockin
- msau42
title: Dynamic Volume Provisioning
content_template: templates/concept
content_type: concept
weight: 40
---
{{% capture overview %}}
<!-- overview -->
Dynamic volume provisioning allows storage volumes to be created on-demand.
Without dynamic provisioning, cluster administrators have to manually make
@@ -19,10 +19,10 @@ to represent them in Kubernetes. The dynamic provisioning feature eliminates
the need for cluster administrators to pre-provision storage. Instead, it
automatically provisions storage when it is requested by users.
{{% /capture %}}
{{% capture body %}}
<!-- body -->
## Background
@@ -133,4 +133,4 @@ Zones in a Region. Single-Zone storage backends should be provisioned in the Zon
Pods are scheduled. This can be accomplished by setting the [Volume Binding
Mode](/docs/concepts/storage/storage-classes/#volume-binding-mode).
{{% /capture %}}
@@ -11,18 +11,18 @@ feature:
description: >
Automatically mount the storage system of your choice, whether from local storage, a public cloud provider such as <a href="https://cloud.google.com/storage/">GCP</a> or <a href="https://aws.amazon.com/products/storage/">AWS</a>, or a network storage system such as NFS, iSCSI, Gluster, Ceph, Cinder, or Flocker.
content_template: templates/concept
content_type: concept
weight: 20
---
{{% capture overview %}}
<!-- overview -->
This document describes the current state of _persistent volumes_ in Kubernetes. Familiarity with [volumes](/docs/concepts/storage/volumes/) is suggested.
{{% /capture %}}
{{% capture body %}}
<!-- body -->
## Introduction
@@ -746,8 +746,9 @@ and need persistent storage, it is recommended that you use the following patter
dynamic storage support (in which case the user should create a matching PV)
or the cluster has no storage system (in which case the user cannot deploy
config requiring PVCs).
{{% /capture %}}
{{% capture whatsnext %}}
## {{% heading "whatsnext" %}}
* Learn more about [Creating a PersistentVolume](/docs/tasks/configure-pod-container/configure-persistent-volume-storage/#create-a-persistentvolume).
* Learn more about [Creating a PersistentVolumeClaim](/docs/tasks/configure-pod-container/configure-persistent-volume-storage/#create-a-persistentvolumeclaim).
@@ -759,4 +760,3 @@ and need persistent storage, it is recommended that you use the following patter
* [PersistentVolumeSpec](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#persistentvolumespec-v1-core)
* [PersistentVolumeClaim](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#persistentvolumeclaim-v1-core)
* [PersistentVolumeClaimSpec](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#persistentvolumeclaimspec-v1-core)
{{% /capture %}}
@@ -5,19 +5,19 @@ reviewers:
- thockin
- msau42
title: Storage Classes
content_template: templates/concept
content_type: concept
weight: 30
---
{{% capture overview %}}
<!-- overview -->
This document describes the concept of a StorageClass in Kubernetes. Familiarity
with [volumes](/docs/concepts/storage/volumes/) and
[persistent volumes](/docs/concepts/storage/persistent-volumes) is suggested.
{{% /capture %}}
{{% capture body %}}
<!-- body -->
## Introduction
@@ -821,4 +821,4 @@ Delaying volume binding allows the scheduler to consider all of a Pod's
scheduling constraints when choosing an appropriate PersistentVolume for a
PersistentVolumeClaim.
{{% /capture %}}
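The volume binding mode mentioned above is set on the StorageClass itself. A minimal sketch using delayed binding (the class name is illustrative, and the GCE PD provisioner is just one example of a provisioner that supports `WaitForFirstConsumer`):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard-delayed
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
# Binding and provisioning are delayed until a Pod using the PVC is scheduled,
# so the scheduler can take all of the Pod's constraints into account.
volumeBindingMode: WaitForFirstConsumer
```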
@@ -5,10 +5,10 @@ reviewers:
- thockin
- msau42
title: Node-specific Volume Limits
content_template: templates/concept
content_type: concept
---
{{% capture overview %}}
<!-- overview -->
This page describes the maximum number of volumes that can be attached
to a Node for various cloud providers.
@@ -18,9 +18,9 @@ how many volumes can be attached to a Node. It is important for Kubernetes to
respect those limits. Otherwise, Pods scheduled on a Node could get stuck
waiting for volumes to attach.
{{% /capture %}}
{{% capture body %}}
<!-- body -->
## Kubernetes default limits
@@ -78,4 +78,4 @@ Refer to the [CSI specifications](https://github.com/container-storage-interface
* For volumes managed by in-tree plugins that have been migrated to a CSI driver, the maximum number of volumes will be the one reported by the CSI driver.
{{% /capture %}}
@@ -5,18 +5,18 @@ reviewers:
- thockin
- msau42
title: CSI Volume Cloning
content_template: templates/concept
content_type: concept
weight: 30
---
{{% capture overview %}}
<!-- overview -->
This document describes the concept of cloning existing CSI Volumes in Kubernetes. Familiarity with [Volumes](/docs/concepts/storage/volumes) is suggested.
{{% /capture %}}
{{% capture body %}}
<!-- body -->
## Introduction
@@ -70,4 +70,4 @@ The result is a new PVC with the name `clone-of-pvc-1` that has the exact same c
Upon availability of the new PVC, the cloned PVC is consumed the same as any other PVC. It is also expected at this point that the newly created PVC is an independent object. It can be consumed, cloned, snapshotted, or deleted independently and without consideration for its original dataSource PVC. This also implies that the source is not linked in any way to the newly created clone; it may also be modified or deleted without affecting the newly created clone.
{{% /capture %}}
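The clone described above is requested by pointing a new PVC's `dataSource` at an existing PVC. A sketch of the `clone-of-pvc-1` example (the storage class name and size are illustrative; the size must be at least that of the source):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: clone-of-pvc-1
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: cloning
  resources:
    requests:
      storage: 5Gi
  # dataSource references the existing PVC to clone; for PVCs the apiGroup
  # is omitted because PersistentVolumeClaim is in the core API group.
  dataSource:
    kind: PersistentVolumeClaim
    name: pvc-1
```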
@@ -7,20 +7,20 @@ reviewers:
- xing-yang
- yuxiangqian
title: Volume Snapshot Classes
content_template: templates/concept
content_type: concept
weight: 30
---
{{% capture overview %}}
<!-- overview -->
This document describes the concept of `VolumeSnapshotClass` in Kubernetes. Familiarity
with [volume snapshots](/docs/concepts/storage/volume-snapshots/) and
[storage classes](/docs/concepts/storage/storage-classes) is suggested.
{{% /capture %}}
{{% capture body %}}
<!-- body -->
## Introduction
@@ -69,4 +69,4 @@ Volume snapshot classes have parameters that describe volume snapshots belonging
the volume snapshot class. Different parameters may be accepted depending on the
`driver`.
{{% /capture %}}
@@ -7,19 +7,19 @@ reviewers:
- xing-yang
- yuxiangqian
title: Volume Snapshots
content_template: templates/concept
content_type: concept
weight: 20
---
{{% capture overview %}}
<!-- overview -->
{{< feature-state for_k8s_version="v1.17" state="beta" >}}
In Kubernetes, a _VolumeSnapshot_ represents a snapshot of a volume on a storage system. This document assumes that you are already familiar with Kubernetes [persistent volumes](/docs/concepts/storage/persistent-volumes/).
{{% /capture %}}
{{% capture body %}}
<!-- body -->
## Introduction
@@ -154,4 +154,4 @@ the *dataSource* field in the `PersistentVolumeClaim` object.
For more details, see
[Volume Snapshot and Restore Volume from Snapshot](/docs/concepts/storage/persistent-volumes/#volume-snapshot-and-restore-volume-from-snapshot-support).
{{% /capture %}}
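A VolumeSnapshot as described above is requested against an existing PVC. A minimal sketch using the beta API of this era (the snapshot class and PVC names are illustrative):

```yaml
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshot
metadata:
  name: new-snapshot-test
spec:
  volumeSnapshotClassName: csi-hostpath-snapclass
  source:
    # The PVC whose underlying volume should be snapshotted.
    persistentVolumeClaimName: pvc-test
```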
@@ -5,11 +5,11 @@ reviewers:
- thockin
- msau42
title: Volumes
content_template: templates/concept
content_type: concept
weight: 10
---
{{% capture overview %}}
<!-- overview -->
On-disk files in a Container are ephemeral, which presents some problems for
non-trivial applications when running in Containers. First, when a Container
@@ -20,10 +20,10 @@ Kubernetes `Volume` abstraction solves both of these problems.
Familiarity with [Pods](/docs/user-guide/pods) is suggested.
{{% /capture %}}
{{% capture body %}}
<!-- body -->
## Background
@@ -1481,6 +1481,7 @@ sudo systemctl restart docker
{{% capture whatsnext %}}
## {{% heading "whatsnext" %}}
* Follow an example of [deploying WordPress and MySQL with Persistent Volumes](/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/).
{{% /capture %}}
@@ -4,11 +4,11 @@ reviewers:
- soltysh
- janetkuo
title: CronJob
content_template: templates/concept
content_type: concept
weight: 80
---
{{% capture overview %}}
<!-- overview -->
{{< feature-state for_k8s_version="v1.8" state="beta" >}}
@@ -33,8 +33,8 @@ append 11 characters to the job name provided and there is a constraint that the
maximum length of a Job name is no more than 63 characters.
{{% /capture %}}
{{% capture body %}}
<!-- body -->
## CronJob
@@ -82,12 +82,13 @@ be down for the same period as the previous example (`08:29:00` to `10:21:00`,)
The CronJob is only responsible for creating Jobs that match its schedule, and
the Job in turn is responsible for the management of the Pods it represents.
{{% /capture %}}
{{% capture whatsnext %}}
## {{% heading "whatsnext" %}}
[Cron expression format](https://pkg.go.dev/github.com/robfig/cron?tab=doc#hdr-CRON_Expression_Format)
documents the format of CronJob `schedule` fields.
For instructions on creating and working with cron jobs, and for an example of CronJob
manifest, see [Running automated tasks with cron jobs](/docs/tasks/job/automated-tasks-with-cron-jobs).
{{% /capture %}}
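To illustrate the schedule-to-Job relationship described above, a minimal CronJob of this era (the name, image, and command are illustrative):

```yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  # Standard cron syntax: run once per minute.
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args: ["/bin/sh", "-c", "date; echo Hello"]
          restartPolicy: OnFailure
```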
@@ -6,11 +6,11 @@ reviewers:
- janetkuo
- kow3ns
title: DaemonSet
content_template: templates/concept
content_type: concept
weight: 50
---
{{% capture overview %}}
<!-- overview -->
A _DaemonSet_ ensures that all (or some) Nodes run a copy of a Pod. As nodes are added to the
cluster, Pods are added to them. As nodes are removed from the cluster, those Pods are garbage
@@ -26,10 +26,10 @@ In a simple case, one DaemonSet, covering all nodes, would be used for each type
A more complex setup might use multiple DaemonSets for a single type of daemon, but with
different flags and/or different memory and cpu requests for different hardware types.
{{% /capture %}}
{{% capture body %}}
<!-- body -->
## Writing a DaemonSet Spec
@@ -229,4 +229,4 @@ number of replicas and rolling out updates are more important than controlling e
the Pod runs on. Use a DaemonSet when it is important that a copy of a Pod always run on
all or certain hosts, and when it needs to start before other Pods.
{{% /capture %}}
@@ -7,11 +7,11 @@ feature:
description: >
Kubernetes progressively rolls out changes to your application or its configuration, while monitoring application health to ensure it doesn't kill all your instances at the same time. If something goes wrong, Kubernetes will rollback the change for you. Take advantage of a growing ecosystem of deployment solutions.
content_template: templates/concept
content_type: concept
weight: 30
---
{{% capture overview %}}
<!-- overview -->
A _Deployment_ provides declarative updates for [Pods](/docs/concepts/workloads/pods/pod/) and
[ReplicaSets](/docs/concepts/workloads/controllers/replicaset/).
@@ -22,10 +22,10 @@ You describe a _desired state_ in a Deployment, and the Deployment {{< glossary_
Do not manage ReplicaSets owned by a Deployment. Consider opening an issue in the main Kubernetes repository if your use case is not covered below.
{{< /note >}}
{{% /capture %}}
{{% capture body %}}
<!-- body -->
## Use Case
@@ -1166,4 +1166,4 @@ a paused Deployment and one that is not paused, is that any changes into the Pod
Deployment will not trigger new rollouts as long as it is paused. A Deployment is not paused by default when
it is created.
{{% /capture %}}
@@ -1,18 +1,18 @@
---
title: Garbage Collection
content_template: templates/concept
content_type: concept
weight: 60
---
{{% capture overview %}}
<!-- overview -->
The role of the Kubernetes garbage collector is to delete certain objects
that once had an owner, but no longer have an owner.
{{% /capture %}}
{{% capture body %}}
<!-- body -->
## Owners and dependents
@@ -168,16 +168,17 @@ See [kubeadm/#149](https://github.com/kubernetes/kubeadm/issues/149#issuecomment
Tracked at [#26120](https://github.com/kubernetes/kubernetes/issues/26120)
{{% /capture %}}
{{% capture whatsnext %}}
## {{% heading "whatsnext" %}}
[Design Doc 1](https://git.k8s.io/community/contributors/design-proposals/api-machinery/garbage-collection.md)
[Design Doc 2](https://git.k8s.io/community/contributors/design-proposals/api-machinery/synchronous-garbage-collection.md)
{{% /capture %}}
@@ -3,7 +3,7 @@ reviewers:
- erictune
- soltysh
title: Jobs - Run to Completion
content_template: templates/concept
content_type: concept
feature:
title: Batch execution
description: >
@@ -11,7 +11,7 @@ feature:
weight: 70
---
{{% capture overview %}}
<!-- overview -->
A Job creates one or more Pods and ensures that a specified number of them successfully terminate.
As pods successfully complete, the Job tracks the successful completions. When a specified number
@@ -24,10 +24,10 @@ due to a node hardware failure or a node reboot).
You can also use a Job to run multiple Pods in parallel.
{{% /capture %}}
{{% capture body %}}
<!-- body -->
## Running an example Job
@@ -478,4 +478,4 @@ object, but maintains complete control over what Pods are created and how work i
You can use a [`CronJob`](/docs/concepts/workloads/controllers/cron-jobs/) to create a Job that will run at specified times/dates, similar to the Unix tool `cron`.
{{% /capture %}}
@@ -4,19 +4,19 @@ reviewers:
- bprashanth
- madhusudancs
title: ReplicaSet
content_template: templates/concept
content_type: concept
weight: 10
---
{{% capture overview %}}
<!-- overview -->
A ReplicaSet's purpose is to maintain a stable set of replica Pods running at any given time. As such, it is often
used to guarantee the availability of a specified number of identical Pods.
{{% /capture %}}
{{% capture body %}}
<!-- body -->
## How a ReplicaSet works
@@ -366,4 +366,4 @@ The two serve the same purpose, and behave similarly, except that a ReplicationC
selector requirements as described in the [labels user guide](/docs/concepts/overview/working-with-objects/labels/#label-selectors).
As such, ReplicaSets are preferred over ReplicationControllers.
{{% /capture %}}
@@ -9,11 +9,11 @@ feature:
description: >
Restarts containers that fail, replaces and reschedules containers when nodes die, kills containers that don't respond to your user-defined health check, and doesn't advertise them to clients until they are ready to serve.
content_template: templates/concept
content_type: concept
weight: 20
---
{{% capture overview %}}
<!-- overview -->
{{< note >}}
A [`Deployment`](/docs/concepts/workloads/controllers/deployment/) that configures a [`ReplicaSet`](/docs/concepts/workloads/controllers/replicaset/) is now the recommended way to set up replication.
@@ -23,10 +23,10 @@ A _ReplicationController_ ensures that a specified number of pod replicas are ru
time. In other words, a ReplicationController makes sure that a pod or a homogeneous set of pods is
always up and available.
{{% /capture %}}
{{% capture body %}}
<!-- body -->
## How a ReplicationController Works
@@ -285,4 +285,4 @@ safe to terminate when the machine is otherwise ready to be rebooted/shutdown.
Read [Run Stateless AP Replication Controller](/docs/tutorials/stateless-application/run-stateless-ap-replication-controller/).
{{% /capture %}}
@@ -7,18 +7,18 @@ reviewers:
- kow3ns
- smarterclayton
title: StatefulSets
content_template: templates/concept
content_type: concept
weight: 40
---
{{% capture overview %}}
<!-- overview -->
StatefulSet is the workload API object used to manage stateful applications.
{{< glossary_definition term_id="statefulset" length="all" >}}
{{% /capture %}}
{{% capture body %}}
<!-- body -->
## Using StatefulSets
@@ -270,12 +270,13 @@ After reverting the template, you must also delete any Pods that StatefulSet had
already attempted to run with the bad configuration.
StatefulSet will then begin to recreate the Pods using the reverted template.
{{% /capture %}}
{{% capture whatsnext %}}
## {{% heading "whatsnext" %}}
* Follow an example of [deploying a stateful application](/docs/tutorials/stateful-application/basic-stateful-set/).
* Follow an example of [deploying Cassandra with Stateful Sets](/docs/tutorials/stateful-application/cassandra/).
* Follow an example of [running a replicated stateful application](/docs/tasks/run-application/run-replicated-stateful-application/).
{{% /capture %}}
@@ -2,11 +2,11 @@
reviewers:
- janetkuo
title: TTL Controller for Finished Resources
content_template: templates/concept
content_type: concept
weight: 65
---
{{% capture overview %}}
<!-- overview -->
{{< feature-state for_k8s_version="v1.12" state="alpha" >}}
@@ -21,12 +21,12 @@ Alpha Disclaimer: this feature is currently alpha, and can be enabled with both
`TTLAfterFinished`.
{{% /capture %}}
{{% capture body %}}
<!-- body -->
## TTL Controller
@@ -78,12 +78,13 @@ In Kubernetes, it's required to run NTP on all nodes
to avoid time skew. Clocks aren't always correct, but the difference should be
very small. Please be aware of this risk when setting a non-zero TTL.
{{% /capture %}}
{{% capture whatsnext %}}
## {{% heading "whatsnext" %}}
[Clean up Jobs automatically](/docs/concepts/workloads/controllers/jobs-run-to-completion/#clean-up-finished-jobs-automatically)
[Design doc](https://github.com/kubernetes/enhancements/blob/master/keps/sig-apps/0026-ttl-after-finish.md)
{{% /capture %}}
@@ -4,11 +4,11 @@ reviewers:
- foxish
- davidopp
title: Disruptions
content_template: templates/concept
content_type: concept
weight: 60
---
{{% capture overview %}}
<!-- overview -->
This guide is for application owners who want to build
highly available applications, and thus need to understand
what types of Disruptions can happen to Pods.
@@ -16,10 +16,10 @@ what types of Disruptions can happen to Pods.
It is also for Cluster Administrators who want to perform automated
cluster actions, like upgrading and autoscaling clusters.
{{% /capture %}}
{{% capture body %}}
<!-- body -->
## Voluntary and Involuntary Disruptions
@@ -262,13 +262,14 @@ the nodes in your cluster, such as a node or system software upgrade, here are s
disruptions largely overlaps with work to support autoscaling and tolerating
involuntary disruptions.
{{% /capture %}}
{{% capture whatsnext %}}
## {{% heading "whatsnext" %}}
* Follow steps to protect your application by [configuring a Pod Disruption Budget](/docs/tasks/run-application/configure-pdb/).
* Learn more about [draining nodes](/docs/tasks/administer-cluster/safely-drain-node/)
{{% /capture %}}
@@ -3,11 +3,11 @@ reviewers:
- verb
- yujuhong
title: Ephemeral Containers
content_template: templates/concept
content_type: concept
weight: 80
---
{{% capture overview %}}
<!-- overview -->
{{< feature-state state="alpha" for_k8s_version="v1.16" >}}
@@ -23,9 +23,9 @@ clusters. In accordance with the [Kubernetes Deprecation Policy](
significantly in the future or be removed entirely.
{{< /warning >}}
{{% /capture %}}
{{% capture body %}}
<!-- body -->
## Understanding ephemeral containers
@@ -192,4 +192,4 @@ example:
kubectl attach -it example-pod -c debugger
```
{{% /capture %}}
@@ -2,20 +2,20 @@
reviewers:
- erictune
title: Init Containers
content_template: templates/concept
content_type: concept
weight: 40
---
{{% capture overview %}}
<!-- overview -->
This page provides an overview of init containers: specialized containers that run
before app containers in a {{< glossary_tooltip text="Pod" term_id="pod" >}}.
Init containers can contain utilities or setup scripts not present in an app image.
You can specify init containers in the Pod specification alongside the `containers`
array (which describes app containers).
{{% /capture %}}
{{% capture body %}}
<!-- body -->
## Understanding init containers
@@ -317,12 +317,13 @@ reasons:
forcing a restart, and the init container completion record has been lost due
to garbage collection.
{{% /capture %}}
{{% capture whatsnext %}}
## {{% heading "whatsnext" %}}
* Read about [creating a Pod that has an init container](/docs/tasks/configure-pod-container/configure-pod-initialization/#create-a-pod-that-has-an-init-container)
* Learn how to [debug init containers](/docs/tasks/debug-application-cluster/debug-init-containers/)
{{% /capture %}}
@@ -1,20 +1,20 @@
---
title: Pod Lifecycle
content_template: templates/concept
content_type: concept
weight: 30
---
{{% capture overview %}}
<!-- overview -->
{{< comment >}}Updated: 4/14/2015{{< /comment >}}
{{< comment >}}Edited and moved to Concepts section: 2/2/17{{< /comment >}}
This page describes the lifecycle of a Pod.
{{% /capture %}}
{{% capture body %}}
<!-- body -->
## Pod phase
@@ -390,10 +390,11 @@ spec:
* Node controller sets Pod `phase` to Failed.
* If running under a controller, Pod is recreated elsewhere.
{{% /capture %}}
{{% capture whatsnext %}}
## {{% heading "whatsnext" %}}
* Get hands-on experience
[attaching handlers to Container lifecycle events](/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/).
@@ -403,7 +404,7 @@ spec:
* Learn more about [Container lifecycle hooks](/docs/concepts/containers/container-lifecycle-hooks/).
{{% /capture %}}
@@ -2,19 +2,19 @@
reviewers:
- erictune
title: Pod Overview
content_template: templates/concept
content_type: concept
weight: 10
card:
name: concepts
weight: 60
---
{{% capture overview %}}
<!-- overview -->
This page provides an overview of `Pod`, the smallest deployable object in the Kubernetes object model.
{{% /capture %}}
{{% capture body %}}
<!-- body -->
## Understanding Pods
A *Pod* is the basic execution unit of a Kubernetes application--the smallest and simplest unit in the Kubernetes object model that you create or deploy. A Pod represents processes running on your {{< glossary_tooltip term_id="cluster" text="cluster" >}}.
@@ -111,12 +111,13 @@ For example, a Deployment controller ensures that the running Pods match the cur
On Nodes, the {{< glossary_tooltip term_id="kubelet" text="kubelet" >}} does not directly observe or manage any of the details around pod templates and updates; those details are abstracted away. That abstraction and separation of concerns simplifies system semantics, and makes it feasible to extend the cluster's behavior without changing existing code.
{{% /capture %}}
{{% capture whatsnext %}}
## {{% heading "whatsnext" %}}
* Learn more about [Pods](/docs/concepts/workloads/pods/pod/)
* [The Distributed System Toolkit: Patterns for Composite Containers](https://kubernetes.io/blog/2015/06/the-distributed-system-toolkit-patterns) explains common layouts for Pods with more than one container
* Learn more about Pod behavior:
* [Pod Termination](/docs/concepts/workloads/pods/pod/#termination-of-pods)
* [Pod Lifecycle](/docs/concepts/workloads/pods/pod-lifecycle/)
{{% /capture %}}
@@ -1,18 +1,18 @@
---
title: Pod Topology Spread Constraints
content_template: templates/concept
content_type: concept
weight: 50
---
{{% capture overview %}}
<!-- overview -->
{{< feature-state for_k8s_version="v1.18" state="beta" >}}
You can use _topology spread constraints_ to control how {{< glossary_tooltip text="Pods" term_id="Pod" >}} are spread across your cluster among failure-domains such as regions, zones, nodes, and other user-defined topology domains. This can help to achieve high availability as well as efficient resource utilization.
{{% /capture %}}
{{% capture body %}}
<!-- body -->
## Prerequisites
@@ -246,4 +246,4 @@ As of 1.18, at which this feature is Beta, there are some known limitations:
- Scaling down a Deployment may result in imbalanced Pods distribution.
- Pods matched on tainted nodes are respected. See [Issue 80921](https://github.com/kubernetes/kubernetes/issues/80921)
{{% /capture %}}
@@ -1,19 +1,19 @@
---
reviewers:
title: Pods
content_template: templates/concept
content_type: concept
weight: 20
---
{{% capture overview %}}
<!-- overview -->
_Pods_ are the smallest deployable units of computing that can be created and
managed in Kubernetes.
{{% /capture %}}
{{% capture body %}}
<!-- body -->
## What is a Pod?
@@ -206,4 +206,4 @@ describes the object in detail.
When creating the manifest for a Pod object, make sure the name specified is a valid
[DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names).
{{% /capture %}}
@@ -2,20 +2,20 @@
reviewers:
- jessfraz
title: Pod Preset
content_template: templates/concept
content_type: concept
weight: 50
---
{{% capture overview %}}
<!-- overview -->
{{< feature-state for_k8s_version="v1.6" state="alpha" >}}
This page provides an overview of PodPresets, which are objects for injecting
certain information into pods at creation time. The information can include
secrets, volumes, volume mounts, and environment variables.
{{% /capture %}}
{{% capture body %}}
<!-- body -->
## Understanding Pod presets
A PodPreset is an API resource for injecting additional runtime requirements
@@ -82,12 +82,13 @@ There may be instances where you wish for a Pod to not be altered by any Pod
Preset mutations. In these cases, you can add an annotation in the Pod Spec
of the form: `podpreset.admission.kubernetes.io/exclude: "true"`.
{{% /capture %}}
{{% capture whatsnext %}}
## {{% heading "whatsnext" %}}
See [Injecting data into a Pod using PodPreset](/docs/tasks/inject-data-application/podpreset/)
For more information about the background, see the [design proposal for PodPreset](https://git.k8s.io/community/contributors/design-proposals/service-catalog/pod-preset.md).
{{% /capture %}}
@@ -1,5 +1,5 @@
---
content_template: templates/concept
content_type: concept
title: Contribute to Kubernetes docs
linktitle: Contribute
main_menu: true
@@ -10,7 +10,7 @@ card:
title: Start contributing
---
{{% capture overview %}}
<!-- overview -->
This website is maintained by [Kubernetes SIG Docs](/docs/contribute/#get-involved-with-sig-docs).
@@ -23,9 +23,9 @@ Kubernetes documentation contributors:
Kubernetes documentation welcomes improvements from all contributors, new and experienced!
{{% /capture %}}
{{% capture body %}}
<!-- body -->
## Getting started
@@ -75,4 +75,4 @@ SIG Docs communicates with different methods:
- Read the [contributor cheatsheet](https://github.com/kubernetes/community/tree/master/contributors/guide/contributor-cheatsheet) to get involved with Kubernetes feature development.
- Submit a [blog post or case study](/docs/contribute/new-content/blogs-case-studies/).
{{% /capture %}}
@@ -1,11 +1,11 @@
---
title: Advanced contributing
slug: advanced
content_template: templates/concept
content_type: concept
weight: 98
---
{{% capture overview %}}
<!-- overview -->
This page assumes that you understand how to
[contribute to new content](/docs/contribute/new-content/overview) and
@@ -13,9 +13,9 @@ This page assumes that you understand how to
to learn about more ways to contribute. You need to use the Git command line
client and other tools for some of these tasks.
{{% /capture %}}
{{% capture body %}}
<!-- body -->
## Be the PR Wrangler for a week
@@ -245,4 +245,4 @@ When you're ready to stop recording, click Stop.
The video uploads automatically to YouTube.
{{% /capture %}}
@@ -1,10 +1,10 @@
---
title: Contributing to the Upstream Kubernetes Code
content_template: templates/task
content_type: task
weight: 20
---
{{% capture overview %}}
<!-- overview -->
This page shows how to contribute to the upstream `kubernetes/kubernetes` project.
You can fix bugs found in the Kubernetes API documentation or the content of
@@ -16,9 +16,10 @@ API or the `kube-*` components from the upstream code, see the following instruc
- [Generating Reference Documentation for the Kubernetes API](/docs/contribute/generate-ref-docs/kubernetes-api/)
- [Generating Reference Documentation for the Kubernetes Components and Tools](/docs/contribute/generate-ref-docs/kubernetes-components/)
{{% /capture %}}
{{% capture prerequisites %}}
## {{% heading "prerequisites" %}}
- You need to have these tools installed:
@@ -35,9 +36,9 @@ API or the `kube-*` components from the upstream code, see the following instruc
For more information, see [Creating a Pull Request](https://help.github.com/articles/creating-a-pull-request/)
and [GitHub Standard Fork & Pull Request Workflow](https://gist.github.com/Chaser324/ce0505fbed06b947d962).
{{% /capture %}}
{{% capture steps %}}
<!-- steps -->
## The big picture
@@ -230,12 +231,13 @@ the API reference documentation.
You are now ready to follow the [Generating Reference Documentation for the Kubernetes API](/docs/contribute/generate-ref-docs/kubernetes-api/) guide to generate the
[published Kubernetes API reference documentation](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/).
{{% /capture %}}
{{% capture whatsnext %}}
## {{% heading "whatsnext" %}}
* [Generating Reference Documentation for the Kubernetes API](/docs/contribute/generate-ref-docs/kubernetes-api/)
* [Generating Reference Docs for Kubernetes Components and Tools](/docs/contribute/generate-ref-docs/kubernetes-components/)
* [Generating Reference Documentation for kubectl Commands](/docs/contribute/generate-ref-docs/kubectl/)
{{% /capture %}}
@@ -1,10 +1,10 @@
---
title: Generating Reference Documentation for kubectl Commands
content_template: templates/task
content_type: task
weight: 90
---
{{% capture overview %}}
<!-- overview -->
This page shows how to generate the `kubectl` command reference.
@@ -21,15 +21,16 @@ reference page, see
[Generating Reference Pages for Kubernetes Components and Tools](/docs/home/contribute/generated-reference/kubernetes-components/).
{{< /note >}}
{{% /capture %}}
{{% capture prerequisites %}}
## {{% heading "prerequisites" %}}
{{< include "prerequisites-ref-docs.md" >}}
{{% /capture %}}
{{% capture steps %}}
<!-- steps -->
## Setting up the local repositories
@@ -253,12 +254,13 @@ A few minutes after your pull request is merged, your updated reference
topics will be visible in the
[published documentation](/docs/home).
{{% /capture %}}
{{% capture whatsnext %}}
## {{% heading "whatsnext" %}}
* [Generating Reference Documentation Quickstart](/docs/contribute/generate-ref-docs/quickstart/)
* [Generating Reference Documentation for Kubernetes Components and Tools](/docs/contribute/generate-ref-docs/kubernetes-components/)
* [Generating Reference Documentation for the Kubernetes API](/docs/contribute/generate-ref-docs/kubernetes-api/)
{{% /capture %}}
@@ -1,10 +1,10 @@
---
title: Generating Reference Documentation for the Kubernetes API
content_template: templates/task
content_type: task
weight: 50
---
{{% capture overview %}}
<!-- overview -->
This page shows how to update the Kubernetes API reference documentation.
@@ -18,15 +18,16 @@ If you find bugs in the generated documentation, you need to
If you need only to regenerate the reference documentation from the [OpenAPI](https://github.com/OAI/OpenAPI-Specification)
spec, continue reading this page.
{{% /capture %}}
{{% capture prerequisites %}}
## {{% heading "prerequisites" %}}
{{< include "prerequisites-ref-docs.md" >}}
{{% /capture %}}
{{% capture steps %}}
<!-- steps -->
## Setting up the local repositories
@@ -194,12 +195,13 @@ Submit your changes as a
Monitor your pull request, and respond to reviewer comments as needed. Continue
to monitor your pull request until it has been merged.
{{% /capture %}}
{{% capture whatsnext %}}
## {{% heading "whatsnext" %}}
* [Generating Reference Documentation Quickstart](/docs/contribute/generate-ref-docs/quickstart/)
* [Generating Reference Docs for Kubernetes Components and Tools](/docs/contribute/generate-ref-docs/kubernetes-components/)
* [Generating Reference Documentation for kubectl Commands](/docs/contribute/generate-ref-docs/kubectl/)
{{% /capture %}}
@@ -1,34 +1,36 @@
---
title: Generating Reference Pages for Kubernetes Components and Tools
content_template: templates/task
content_type: task
weight: 120
---
{{% capture overview %}}
<!-- overview -->
This page shows how to build the Kubernetes component and tool reference pages.
{{% /capture %}}
{{% capture prerequisites %}}
## {{% heading "prerequisites" %}}
Start with the [Prerequisites section](/docs/contribute/generate-ref-docs/quickstart/#before-you-begin)
in the Reference Documentation Quickstart guide.
{{% /capture %}}
{{% capture steps %}}
<!-- steps -->
Follow the [Reference Documentation Quickstart](/docs/contribute/generate-ref-docs/quickstart/)
to generate the Kubernetes component and tool reference pages.
{{% /capture %}}
{{% capture whatsnext %}}
## {{% heading "whatsnext" %}}
* [Generating Reference Documentation Quickstart](/docs/contribute/generate-ref-docs/quickstart/)
* [Generating Reference Documentation for kubectl Commands](/docs/contribute/generate-ref-docs/kubectl/)
* [Generating Reference Documentation for the Kubernetes API](/docs/contribute/generate-ref-docs/kubernetes-api/)
* [Contributing to the Upstream Kubernetes Project for Documentation](/docs/contribute/generate-ref-docs/contribute-upstream/)
{{% /capture %}}
@@ -1,24 +1,25 @@
---
title: Quickstart
content_template: templates/task
content_type: task
weight: 40
---
{{% capture overview %}}
<!-- overview -->
This page shows how to use the `update-imported-docs` script to generate
the Kubernetes reference documentation. The script automates
the build setup and generates the reference documentation for a release.
{{% /capture %}}
{{% capture prerequisites %}}
## {{% heading "prerequisites" %}}
{{< include "prerequisites-ref-docs.md" >}}
{{% /capture %}}
{{% capture steps %}}
<!-- steps -->
## Getting the docs repository
@@ -246,9 +247,10 @@ A few minutes after your pull request is merged, your updated reference
topics will be visible in the
[published documentation](/docs/home/).
{{% /capture %}}
{{% capture whatsnext %}}
## {{% heading "whatsnext" %}}
To generate the individual reference documentation by manually setting up the required build repositories and
running the build targets, see the following guides:
@@ -257,4 +259,4 @@ running the build targets, see the following guides:
* [Generating Reference Documentation for kubectl Commands](/docs/contribute/generate-ref-docs/kubectl/)
* [Generating Reference Documentation for the Kubernetes API](/docs/contribute/generate-ref-docs/kubernetes-api/)
{{% /capture %}}
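Every file in this commit follows the same mechanical migration: `content_template: templates/<kind>` in the front matter becomes `content_type: <kind>`, the opening `overview`, `body`, and `steps` captures become HTML comments, the `whatsnext` and `prerequisites` captures become `heading` shortcodes, and every closing `{{% /capture %}}` marker is dropped. As a sketch only (this script is hypothetical and not part of the commit; the actual migration may have been done with different tooling), the transformation could be expressed as:

```python
import re

def migrate(text: str) -> str:
    """Rewrite a docs page from the legacy capture-shortcode layout
    to the content_type layout used throughout this commit."""
    # Front matter: content_template: templates/<kind> -> content_type: <kind>
    text = re.sub(r"content_template: templates/(\w+)", r"content_type: \1", text)
    # Opening captures for overview/body/steps become HTML comments
    for section in ("overview", "body", "steps"):
        text = text.replace("{{% capture " + section + " %}}",
                            "<!-- " + section + " -->")
    # whatsnext/prerequisites captures become heading shortcodes
    for section in ("whatsnext", "prerequisites"):
        text = text.replace("{{% capture " + section + " %}}",
                            '## {{% heading "' + section + '" %}}')
    # Closing capture markers are dropped entirely
    return re.sub(r"\{\{% /capture %\}\}\n?", "", text)
```

Run over a page shaped like the ones above, this should reproduce the per-hunk changes shown in the diff.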