Merge pull request #23363 from savitharaghunathan/merged-master-dev-1.19
Branch sync - Merge master with dev 1.19
commit f7049579a1

@@ -107,10 +107,12 @@ aliases:
    - danninov
  sig-docs-it-owners: # Admins for Italian content
    - fabriziopandini
    - Fale
    - mattiaperi
    - micheleberardi
  sig-docs-it-reviews: # PR reviews for Italian content
    - fabriziopandini
    - Fale
    - mattiaperi
    - micheleberardi
  sig-docs-ja-owners: # Admins for Japanese content

@@ -138,6 +140,7 @@ aliases:
    - ianychoi
    - seokho-son
    - ysyukr
    - pjhwa
  sig-docs-leads: # Website chairs and tech leads
    - jimangel
    - kbarnard10

@@ -214,4 +217,4 @@ aliases:
    - butuzov
    - idvoretskyi
    - MaxymVlasov
    - Potapy4

README.md

@@ -58,6 +58,43 @@ make serve
This will start the local Hugo server on port 1313. Open up your browser to http://localhost:1313 to view the website. As you make changes to the source files, Hugo updates the website and forces a browser refresh.

### Troubleshooting macOS for too many open files

If you run `make serve` on macOS and receive the following error:

```
ERROR 2020/08/01 19:09:18 Error: listen tcp 127.0.0.1:1313: socket: too many open files
make: *** [serve] Error 1
```

Try checking the current limit for open files:

`launchctl limit maxfiles`

Then run the following commands:

```
#!/bin/sh

# These are the original gist links, linking to my gists now.
# curl -O https://gist.githubusercontent.com/a2ikm/761c2ab02b7b3935679e55af5d81786a/raw/ab644cb92f216c019a2f032bbf25e258b01d87f9/limit.maxfiles.plist
# curl -O https://gist.githubusercontent.com/a2ikm/761c2ab02b7b3935679e55af5d81786a/raw/ab644cb92f216c019a2f032bbf25e258b01d87f9/limit.maxproc.plist

curl -O https://gist.githubusercontent.com/tombigel/d503800a282fcadbee14b537735d202c/raw/ed73cacf82906fdde59976a0c8248cce8b44f906/limit.maxfiles.plist
curl -O https://gist.githubusercontent.com/tombigel/d503800a282fcadbee14b537735d202c/raw/ed73cacf82906fdde59976a0c8248cce8b44f906/limit.maxproc.plist

sudo mv limit.maxfiles.plist /Library/LaunchDaemons
sudo mv limit.maxproc.plist /Library/LaunchDaemons

sudo chown root:wheel /Library/LaunchDaemons/limit.maxfiles.plist
sudo chown root:wheel /Library/LaunchDaemons/limit.maxproc.plist

sudo launchctl load -w /Library/LaunchDaemons/limit.maxfiles.plist
```

This works on macOS Catalina as well as Mojave.

# Get involved with SIG Docs

Learn more about the SIG Docs Kubernetes community and meetings on the [community page](https://github.com/kubernetes/community/tree/master/sig-docs#meetings).

@@ -102,4 +139,4 @@ Participation in the Kubernetes community is governed by the [CNCF Code of Condu
# Thank you!

Kubernetes thrives on community participation, and we appreciate your contributions to our website and our documentation!

@@ -832,3 +832,16 @@ section#cncf {
    font-size: 1rem;
  }
}

/* DOCUMENTATION */

body.td-documentation {
  header > .header-filler {
    height: $hero-padding-top;
    background-color: black;
  }
  /* Special case for if an announcement is active */
  header section#announcement ~ .header-filler {
    display: none;
  }
}

@@ -1,3 +1,5 @@
+$announcement-size-adjustment: 8px;
+
/* GLOBAL */
.td-main {
  .row {

@@ -284,6 +286,82 @@ blockquote {
  border-left-color: #d9534f !important;
}

.deprecation-warning {
  padding: 20px;
  margin: 20px 0;
  background-color: #faf5b6;
  color: #000;
}

body.td-home .deprecation-warning, body.td-blog .deprecation-warning, body.td-documentation .deprecation-warning {
  border-radius: 3px;
}

body.td-home #deprecation-warning {
  max-width: 1000px;
  margin-top: 2.5rem;
  margin-left: auto;
  margin-right: auto;
}

#caseStudies body > #deprecation-warning, body.cid-casestudies > #deprecation-warning, body.cid-community > #deprecation-warning {
  display: inline-block;
  vertical-align: top;
  position: relative;
  background-color: #326ce5; // Kubernetes blue
  color: #fff;
  padding: 0;
  margin: 0;
  width: 100vw;
}
#caseStudies body > #deprecation-warning, body.cid-casestudies > #deprecation-warning {
  padding-top: 32px;
}
body.cid-partners > #deprecation-warning {
  padding: 0;
  margin-right: 0;
  margin-left: 0;
  margin-top: 0;
  width: 100vw;
}
body.cid-partners > #deprecation-warning > .content {
  width: 100%;
  max-width: initial;
  margin-right: 0;
  margin-left: 0;
  margin-top: 0;
  padding-left: 5vw;
  padding-right: 5vw;
  padding-top: 2rem;
  padding-bottom: 2rem;
}
body.cid-community > #deprecation-warning > .deprecation-warning {
  margin-left: 20px;
  margin-right: 20px;
  color: #faf5b6;
  background-color: inherit;
}
body.cid-community > #deprecation-warning > .deprecation-warning > * {
  color: inherit;
  background-color: inherit;
}

#caseStudies body > #deprecation-warning > .deprecation-warning, body.cid-casestudies > #deprecation-warning > .deprecation-warning {
  color: inherit;
  background: inherit;
  width: 80%;
  margin: 0;
  margin-top: 120px;
  margin-left: auto;
  margin-right: auto;
  border-radius: initial;
}
#deprecation-warning > .deprecation-warning a {
  background: transparent;
  color: inherit;
  text-decoration: underline;
}

// search & sidebar
.td-sidebar {
  @media only screen and (min-width: 768px) {

@@ -390,4 +468,47 @@ main.content {
        }
      }
    }
  }
}

/* ANNOUNCEMENTS */
section#fp-announcement ~ .header-hero {
  padding: $announcement-size-adjustment 0;

  > div {
    margin-top: $announcement-size-adjustment;
    margin-bottom: $announcement-size-adjustment;
  }

  h1, h2, h3, h4, h5 {
    margin: $announcement-size-adjustment 0;
  }
}

section#announcement ~ .header-hero {
  padding: #{$announcement-size-adjustment / 2} 0;

  > div {
    margin-top: #{$announcement-size-adjustment / 2};
    margin-bottom: #{$announcement-size-adjustment / 2};
    padding-bottom: #{$announcement-size-adjustment / 2};
  }

  h1, h2, h3, h4, h5 {
    margin: #{$announcement-size-adjustment / 2} 0;
  }
}

/* DOCUMENTATION */

/* Don't show lead text */
body.td-documentation {
  main {
    @media only screen {
      > * {
        > .lead:first-of-type {
          display: none;
        }
      }
    }
  }
}

@@ -23,7 +23,7 @@ $main-nav-left-button-size: 50px;
$main-nav-left-button-font-size: 18px;

// hero
-$hero-padding-top: 136px;
+$hero-padding-top: 116px;
$headline-wrapper-margin-bottom: 40px;
$quickstart-button-padding: 0 50px;
$vendor-strip-font-size: 16px;

@@ -1,5 +1,6 @@
---
title: "Administer a cluster"
description: Learn about common tasks for administering a cluster.
weight: 20
---

@@ -0,0 +1,5 @@
---
title: "Administration with kubeadm"
weight: 10
---

@@ -46,16 +46,6 @@ Bevor Sie die einzelnen Lernprogramme durchgehen, möchten Sie möglicherweise e
* [ZooKeeper, a distributed CP system](/docs/tutorials/stateful-application/zookeeper/)

-## CI/CD Pipeline
-
-* [Setting up a CI/CD pipeline with Kubernetes part 1: overview](https://www.linux.com/blog/learn/chapter/Intro-to-Kubernetes/2017/5/set-cicd-pipeline-kubernetes-part-1-overview)
-
-* [Setting up a CI/CD pipeline with a Jenkins pod in Kubernetes (part 2)](https://www.linux.com/blog/learn/chapter/Intro-to-Kubernetes/2017/6/set-cicd-pipeline-jenkins-pod-kubernetes-part-2)
-
-* [Running and scaling a distributed crossword puzzle app with CI/CD on Kubernetes (part 3)](https://www.linux.com/blog/learn/chapter/intro-to-kubernetes/2017/6/run-and-scale-distributed-crossword-puzzle-app-cicd-kubernetes-part-3)
-
-* [Setting up CI/CD for a distributed crossword puzzle app on Kubernetes (part 4)](https://www.linux.com/blog/learn/chapter/intro-to-kubernetes/2017/6/set-cicd-distributed-crossword-puzzle-app-kubernetes-part-4)

## Clusters

* [AppArmor](/docs/tutorials/clusters/apparmor/)

@@ -23,7 +23,7 @@ To keep kubeadm lean, focused, and vendor/infrastructure agnostic, the following
- Non-critical add-ons, e.g. for monitoring, logging, and visualization
- Specific cloud provider integrations

-Infrastructure provisioning, for example, is left to other SIG Cluster Lifecycle projects, such as the [Cluster API](https://github.com/kubernetes-sigs/cluster-api). Instead, kubeadm covers only the common denominator in every Kubernetes cluster: the [control plane](/docs/concepts/#kubernetes-control-plane). The user may install their preferred networking solution and other add-ons on top of Kubernetes *after* cluster creation.
+Infrastructure provisioning, for example, is left to other SIG Cluster Lifecycle projects, such as the [Cluster API](https://github.com/kubernetes-sigs/cluster-api). Instead, kubeadm covers only the common denominator in every Kubernetes cluster: the [control plane](/docs/concepts/overview/components/#control-plane-components). The user may install their preferred networking solution and other add-ons on top of Kubernetes *after* cluster creation.

### What kubeadm's GA release means

@@ -43,7 +43,7 @@ Those tasks are addressed by other SIG Cluster Lifecycle projects, such as the
[Cluster API](https://github.com/kubernetes-sigs/cluster-api) for infrastructure provisioning and management.

Instead, kubeadm covers only the common denominator in every Kubernetes cluster: the
-[control plane](https://kubernetes.io/docs/concepts/#kubernetes-control-plane).
+[control plane](/docs/concepts/overview/components/#control-plane-components).

@@ -175,7 +175,7 @@ We calculate this by measuring Kuberhealthy's [deployment check](https://github.
- PromQL Query (Availability % over the past 30 days):
  ```promql
- 1 - (sum(count_over_time(kuberhealthy_check{check="kuberhealthy/deployment", status="0"}[30d])) OR vector(0))/(sum(count_over_time(kuberhealthy_check{check="kuberhealthy/deployment", status="1"}[30d])) * 100)
+ 1 - (sum(count_over_time(kuberhealthy_check{check="kuberhealthy/deployment", status="0"}[30d])) OR vector(0)) / sum(count_over_time(kuberhealthy_check{check="kuberhealthy/deployment", status="1"}[30d]))
  ```

*Utilization*

@@ -0,0 +1,199 @@
---
layout: blog
title: "Introducing Hierarchical Namespaces"
date: 2020-08-14
---

**Author**: Adrian Ludwin (Google)

Safely hosting large numbers of users on a single Kubernetes cluster has always
been a troublesome task. One key reason for this is that different organizations
use Kubernetes in different ways, and so no one tenancy model is likely to suit
everyone. Instead, Kubernetes offers you building blocks to create your own
tenancy solution, such as Role Based Access Control (RBAC) and NetworkPolicies;
the better these building blocks, the easier it is to safely build a multitenant
cluster.

# Namespaces for tenancy

By far the most important of these building blocks is the namespace, which forms
the backbone of almost all Kubernetes control plane security and sharing
policies. For example, RBAC, NetworkPolicies and ResourceQuotas all respect
namespaces by default, and objects such as Secrets, ServiceAccounts and
Ingresses are freely usable _within_ any one namespace, but fully segregated
from _other_ namespaces.
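To make that scoping concrete, here is a minimal sketch (with hypothetical
names): a ResourceQuota placed in one namespace constrains objects there and
nowhere else.

```yaml
# Hypothetical quota: caps pods and memory in team-a only;
# workloads in every other namespace are unaffected.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a
spec:
  hard:
    pods: "20"
    requests.memory: 16Gi
```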
Namespaces have two key properties that make them ideal for policy enforcement.
Firstly, they can be used to **represent ownership**. Most Kubernetes objects
_must_ be in a namespace, so if you use namespaces to represent ownership, you
can always count on there being an owner.

Secondly, namespaces have **authorized creation and use**. Only
highly-privileged users can create namespaces, and other users require explicit
permission to use those namespaces - that is, create, view or modify objects in
those namespaces. This allows them to be carefully created with appropriate
policies, before unprivileged users can create “regular” objects like pods and
services.

# The limits of namespaces

However, in practice, namespaces are not flexible enough to meet some common use
cases. For example, let’s say that one team owns several microservices with
different secrets and quotas. Ideally, they should place these services into
different namespaces in order to isolate them from each other, but this presents
two problems.

Firstly, these namespaces have no common concept of ownership, even though
they’re all owned by the same team. This means that if the team controls
multiple namespaces, not only does Kubernetes not have any record of their
common owner, but namespace-scoped policies cannot be applied uniformly across
them.

Secondly, teams generally work best if they can operate autonomously, but since
namespace creation is highly privileged, it’s unlikely that any member of the
dev team is allowed to create namespaces. This means that whenever a team wants
a new namespace, they must raise a ticket to the cluster administrator. While
this is probably acceptable for small organizations, it generates unnecessary
toil as the organization grows.

# Introducing hierarchical namespaces

[Hierarchical
namespaces](https://github.com/kubernetes-sigs/multi-tenancy/blob/master/incubator/hnc/docs/user-guide/concepts.md#basic)
are a new concept developed by the [Kubernetes Working Group for Multi-Tenancy
(wg-multitenancy)](https://github.com/kubernetes-sigs/multi-tenancy) in order to
solve these problems. In its simplest form, a hierarchical namespace is a
regular Kubernetes namespace that contains a small custom resource that
identifies a single, optional, parent namespace. This establishes the concept of
ownership _across_ namespaces, not just _within_ them.

This concept of ownership enables two additional types of behaviours:

* **Policy inheritance:** if one namespace is a child of another, policy objects
  such as RBAC RoleBindings are [copied from the parent to the
  child](https://github.com/kubernetes-sigs/multi-tenancy/blob/master/incubator/hnc/docs/user-guide/concepts.md#basic-propagation).
* **Delegated creation:** you usually need cluster-level privileges to create a
  namespace, but hierarchical namespaces add an alternative:
  [_subnamespaces_](https://github.com/kubernetes-sigs/multi-tenancy/blob/master/incubator/hnc/docs/user-guide/concepts.md#basic-subns),
  which can be manipulated using only limited permissions in the parent
  namespace.

This solves both of the problems for our dev team. The cluster administrator can
create a single “root” namespace for the team, along with all necessary
policies, and then delegate permission to create subnamespaces to members of
that team. Those team members can then create subnamespaces for their own use,
without violating the policies that were imposed by the cluster administrators.

# Hands-on with hierarchical namespaces

Hierarchical namespaces are provided by a Kubernetes extension known as the
[**Hierarchical Namespace
Controller**](https://github.com/kubernetes-sigs/multi-tenancy/tree/master/incubator/hnc),
or **HNC**. The HNC consists of two components:

* The **manager** runs on your cluster, manages subnamespaces, propagates policy
  objects, ensures that your hierarchies are legal and manages extension points.
* The **kubectl plugin**, called `kubectl-hns`, makes it easy for users to
  interact with the manager.

Both can be easily installed from the [releases page of our
repo](https://github.com/kubernetes-sigs/multi-tenancy/releases).

Let’s see HNC in action. Imagine that I do not have namespace creation
privileges, but I can view the namespace `team-a` and create subnamespaces
within it<sup>[1](#note-1)</sup>. Using the plugin, I can now say:

```bash
$ kubectl hns create svc1-team-a -n team-a
```

This creates a subnamespace called `svc1-team-a`. Note that since subnamespaces
are just regular Kubernetes namespaces, all subnamespace names must still be
unique.

I can view the structure of these namespaces by asking for a tree view:

```bash
$ kubectl hns tree team-a
# Output:
team-a
└── svc1-team-a
```

And if there were any policies in the parent namespace, these now appear in the
child as well<sup>[2](#note-2)</sup>. For example, let’s say that `team-a` had
an RBAC RoleBinding called `sres`. This rolebinding will also be present in the
subnamespace:

```bash
$ kubectl describe rolebinding sres -n svc1-team-a
# Output:
Name:         sres
Labels:       hnc.x-k8s.io/inheritedFrom=team-a  # inserted by HNC
Annotations:  <none>
Role:
  Kind:  ClusterRole
  Name:  admin
Subjects: ...
```

Finally, HNC adds labels to these namespaces with useful information about the
hierarchy which you can use to apply other policies. For example, you can create
the following NetworkPolicy:

```yaml
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-team-a
  namespace: team-a
spec:
  ingress:
  - from:
    - namespaceSelector:
        matchExpressions:
          - key: 'team-a.tree.hnc.x-k8s.io/depth' # Label created by HNC
            operator: Exists
```

This policy will both be propagated to all descendants of `team-a`, and will
_also_ allow ingress traffic between all of those namespaces. The “tree” label
can only be applied by HNC, and is guaranteed to reflect the latest hierarchy.

You can learn all about the features of HNC from the [user
guide](https://github.com/kubernetes-sigs/multi-tenancy/tree/master/incubator/hnc/docs/user-guide).

# Next steps and getting involved

If you think that hierarchical namespaces can work for your organization, [HNC
v0.5.1 is available on
GitHub](https://github.com/kubernetes-sigs/multi-tenancy/releases/tag/hnc-v0.5.1).
We’d love to know what you think of it, what problems you’re using it to solve
and what features you’d most like to see added. As with all early software, you
should be cautious about using HNC in production environments, but the more
feedback we get, the sooner we’ll be able to drive to HNC 1.0.

We’re also open to additional contributors, whether it’s to fix or report bugs,
or help prototype new features such as exceptions, improved monitoring,
hierarchical resource quotas or fine-grained configuration.

Please get in touch with us via our
[repo](https://github.com/kubernetes-sigs/multi-tenancy), [mailing
list](https://groups.google.com/g/kubernetes-wg-multitenancy) or on
[Slack](https://kubernetes.slack.com/messages/wg-multitenancy) - we look forward
to hearing from you!

---

_[Adrian Ludwin](https://twitter.com/aludwin) is a software engineer and the
tech lead for the Hierarchical Namespace Controller._

<a name="note-1"/>

_Note 1: technically, you create a small object called a "subnamespace anchor"
in the parent namespace, and then HNC creates the subnamespace for you._
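A sketch of what such an anchor might look like, assuming the
`hnc.x-k8s.io/v1alpha1` API group served by HNC v0.5 and reusing the example
names above:

```yaml
# The kubectl-hns plugin creates an anchor like this on your behalf;
# the apiVersion here is an assumption based on HNC v0.5.
apiVersion: hnc.x-k8s.io/v1alpha1
kind: SubnamespaceAnchor
metadata:
  name: svc1-team-a    # becomes the name of the subnamespace
  namespace: team-a    # the parent namespace
```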
<a name="note-2"/>

_Note 2: By default, only RBAC Roles and RoleBindings are propagated, but you
can configure HNC to propagate any namespaced Kubernetes object._

@@ -0,0 +1,47 @@
<?xml version="1.0" encoding="UTF-8"?>
<svg width="120mm" height="50mm" version="1.1" viewBox="0 0 120 50" xmlns="http://www.w3.org/2000/svg" xmlns:cc="http://creativecommons.org/ns#" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
  <style>
    text {
      font-family: "Open Sans", sans-serif;
      font-weight: initial;
      font-size: 3px;
      letter-spacing: 0;
      word-spacing: 0;
    }

    @media (max-width: 1200px) {
      text {
        font-size: 3.5px;
      }
    }

    @media (max-width: 600px) {
      text {
        font-size: 4px;
      }
    }
  </style>
  <!-- Long <path> elements (two arrow glyphs plus the lettering of the three
       stage boxes) omitted as a placeholder. -->
  <rect x="9.4527" y="16.25" width="23.568" height="17.5" fill="#3573e3" />
  <g transform="translate(-1.59)">
    <rect x="48.216" y="16.25" width="23.568" height="17.5" fill="#3573e3" />
  </g>
  <rect x="83.799" y="16.25" width="23.568" height="17.5" fill="#3573e3" />
  <text x="82.5" y="38.5" fill="#000" xml:space="preserve"><tspan x="94.5" y="38.5" text-anchor="middle">(general availability)</tspan></text>
</svg>

@@ -0,0 +1,111 @@
---
layout: blog
title: "Moving Forward From Beta"
date: 2020-08-21
slug: moving-forward-from-beta
---

**Author**: Tim Bannister, The Scale Factory

In Kubernetes, features follow a defined
[lifecycle](/docs/reference/command-line-tools-reference/feature-gates/#feature-stages).
First, as a twinkle in the eye of an interested developer. Maybe, then,
sketched in online discussions, drawn on the online equivalent of a cafe
napkin. This rough work typically becomes a
[Kubernetes Enhancement Proposal](https://github.com/kubernetes/enhancements/blob/master/keps/0001-kubernetes-enhancement-proposal-process.md#kubernetes-enhancement-proposal-process) (KEP), and
from there it usually turns into code.

For Kubernetes v1.20 and onwards, we're focusing on helping that code
graduate into stable features.

That lifecycle I mentioned runs as follows:

(figure: the alpha → beta → stable graduation flow)

Usually, alpha features aren't enabled by default. You turn them on by setting a feature
gate; usually, by setting a command line flag on each of the components that use the
feature.
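For components that support a configuration file, the same switch can live
there instead of on the command line. A minimal sketch for the kubelet, where
the gate name `MyAlphaFeature` is a made-up placeholder rather than a real
Kubernetes feature gate:

```yaml
# KubeletConfiguration accepts a featureGates map of gate names to booleans;
# "MyAlphaFeature" is a hypothetical example.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  MyAlphaFeature: true
```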
(If you use Kubernetes through a managed service offering such as AKS, EKS, GKE, etc then
the vendor who runs that service may have decided what feature gates are enabled for you).

There's a defined process for graduating an existing, alpha feature into the beta phase.
This is important because **beta features are enabled by default**, with the feature flag still
there so cluster operators can opt out if they want.

A similar but more thorough set of graduation criteria govern the transition to general
availability (GA), also known as "stable". GA features are part of Kubernetes, with a
commitment that they are staying in place throughout the current major version.

Having beta features on by default lets Kubernetes and its contributors get valuable
real-world feedback. However, there's a mismatch of incentives. Once a feature is enabled
by default, people will use it. Even if there might be a few details to shake out,
the way Kubernetes' REST APIs and conventions work mean that any future stable API is going
to be compatible with the most recent beta API: your API objects won't stop working when
a beta feature graduates to GA.

For the API and its resources in particular, there's a much weaker incentive to move
features from beta to GA than from alpha to beta. Vendors who want a particular feature
have had good reason to help get code to the point where features are enabled by default,
and beyond that the journey has been less clear.

KEPs track more than code improvements. Essentially, anything that would need
communicating to the wider community merits a KEP. That said, most KEPs cover
Kubernetes features (and the code to implement them).

You might know that [Ingress](/docs/concepts/services-networking/ingress/)
has been in Kubernetes for a while, but did you realize that it actually went beta in 2015? To help
drive things forward, Kubernetes' Architecture Special Interest Group (SIG) have a new approach in
mind.

## Avoiding permanent beta

For Kubernetes REST APIs, when a new feature's API reaches beta, that starts a countdown.
The beta-quality API now has **nine calendar months** to either:
- reach GA, and deprecate the beta, or
- have a new beta version (_and deprecate the previous beta_).

To be clear, at this point **only REST APIs are affected**. For example, _APIListChunking_ is
a beta feature but isn't itself a REST API. Right now there are no plans to automatically
deprecate _APIListChunking_ nor any other features that aren't REST APIs.

If a REST API reaches the end of that 9 month countdown, then the next Kubernetes release
will deprecate that API version. There's no option for the REST API to stay at the same
beta version beyond the first Kubernetes release to come out after the 9 month window.

### What this means for you

If you're using Kubernetes, there's a good chance that you're using a beta feature. Like
I said, there are lots of them about.
As well as Ingress, you might be using [CronJob](/docs/concepts/workloads/controllers/cron-jobs/),
or [PodSecurityPolicy](/docs/concepts/policy/pod-security-policy/), or others.
There's an even bigger chance that you're running on a control plane with at least one beta
feature enabled.

If you're using or generating Kubernetes manifests that use beta APIs like Ingress, you'll
need to plan to revise those. The current APIs are going to be deprecated following a
schedule (the 9 months I mentioned earlier) and after a further 9 months those deprecated
APIs will be removed. At that point, to stay current with Kubernetes, you should already
have migrated.
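For Ingress specifically, that revision means moving from the beta
`networking.k8s.io/v1beta1` API to `networking.k8s.io/v1`, which graduated in
Kubernetes v1.19. A minimal sketch, with placeholder names and host:

```yaml
# Minimal networking.k8s.io/v1 Ingress; the name, host and backing
# Service are hypothetical. Note the v1 pathType and backend structure.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
  - host: example.local
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: example-service
            port:
              number: 80
```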
### What this means for Kubernetes contributors

The motivation here seems pretty clear: get features stable. Guaranteeing that beta
features will be deprecated adds a pretty big incentive so that people who want the
feature continue their effort until the code, documentation and tests are ready for this
feature to graduate to stable, backed by several Kubernetes releases of evidence from
real-world use.

### What this means for the ecosystem

In my opinion, these harsh-seeming measures make a lot of sense, and are going to be
good for Kubernetes. Deprecating existing APIs, through a rule that applies across all
the different Special Interest Groups (SIGs), helps avoid stagnation and encourages
fixes.

Let's say that an API goes to beta and then real-world experience shows that it
just isn't right - that, fundamentally, the API has shortcomings. With that 9 month
countdown ticking, the people involved have the means and the justification to revise
and release an API that deals with the problem cases. Anyone who wants to live with
the deprecated API is welcome to - Kubernetes is open source - but their needs do not
have to hold up progress on the feature.

@@ -1,3 +1,4 @@
---
linktitle: Kubernetes Documentation
title: Documentation
---

@@ -113,9 +113,9 @@ useful changes, it doesn't matter if the overall state is or is not stable.
As a tenet of its design, Kubernetes uses lots of controllers that each manage
a particular aspect of cluster state. Most commonly, a particular control loop
(controller) uses one kind of resource as its desired state, and has a different
kind of resource that it manages to make that desired state happen. For example,
a controller for Jobs tracks Job objects (to discover new work) and Pod objects
(to run the Jobs, and then to see when the work is finished). In this case
something else creates the Jobs, whereas the Job controller creates Pods.

It's useful to have simple controllers rather than one, monolithic set of control

@@ -154,8 +154,7 @@ controller does.
## {{% heading "whatsnext" %}}

-* Read about the [Kubernetes control plane](/docs/concepts/#kubernetes-control-plane)
-* Discover some of the basic [Kubernetes objects](/docs/concepts/#kubernetes-objects)
+* Read about the [Kubernetes control plane](/docs/concepts/overview/components/#control-plane-components)
+* Discover some of the basic [Kubernetes objects](/docs/concepts/overview/working-with-objects/kubernetes-objects/)
* Learn more about the [Kubernetes API](/docs/concepts/overview/kubernetes-api/)
* If you want to write your own controller, see [Extension Patterns](/docs/concepts/extend-kubernetes/extend-cluster/#extension-patterns) in Extending Kubernetes.
|
|||
For example, you can set labels on an existing Node, or mark it unschedulable.
|
||||
|
||||
You can use labels on Nodes in conjunction with node selectors on Pods to control
|
||||
scheduling. For example, you can to constrain a Pod to only be eligible to run on
|
||||
scheduling. For example, you can constrain a Pod to only be eligible to run on
|
||||
a subset of the available nodes.
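A minimal sketch of that pattern, where the `disktype=ssd` label and all names
are hypothetical:

```yaml
# Schedule this Pod only onto nodes carrying the label disktype=ssd,
# e.g. applied beforehand with: kubectl label nodes <node-name> disktype=ssd
apiVersion: v1
kind: Pod
metadata:
  name: nginx-ssd
spec:
  nodeSelector:
    disktype: ssd
  containers:
  - name: nginx
    image: nginx
```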

Marking a node as unschedulable prevents the scheduler from placing new pods onto

@@ -29,7 +29,7 @@ Add-ons in each section are sorted alphabetically - the ordering does not imply
* [NSX-T](https://docs.vmware.com/en/VMware-NSX-T/2.0/nsxt_20_ncp_kubernetes.pdf) Container Plug-in (NCP) provides integration between VMware NSX-T and container orchestrators such as Kubernetes, as well as integration between NSX-T and container-based CaaS/PaaS platforms such as Pivotal Container Service (PKS) and OpenShift.
* [Nuage](https://github.com/nuagenetworks/nuage-kubernetes/blob/v5.1.1-1/docs/kubernetes-1-installation.rst) is an SDN platform that provides policy-based networking between Kubernetes Pods and non-Kubernetes environments with visibility and security monitoring.
* [Romana](https://romana.io) is a Layer 3 networking solution for pod networks that also supports the [NetworkPolicy API](/docs/concepts/services-networking/network-policies/). Kubeadm add-on installation details available [here](https://github.com/romana/romana/tree/master/containerize).
-* [Weave Net](https://www.weave.works/docs/net/latest/kube-addon/) provides networking and network policy, will carry on working on both sides of a network partition, and does not require an external database.
+* [Weave Net](https://www.weave.works/docs/net/latest/kubernetes/kube-addon/) provides networking and network policy, will carry on working on both sides of a network partition, and does not require an external database.

## Service Discovery

@@ -49,5 +49,3 @@ Add-ons in each section are sorted alphabetically - the ordering does not imply
There are several other add-ons documented in the deprecated [cluster/addons](https://git.k8s.io/kubernetes/cluster/addons) directory.

Well-maintained ones should be linked to here. PRs welcome!

@@ -6,7 +6,7 @@ weight: 30
<!-- overview -->
This page explains how to manage Kubernetes running on a specific
-cloud provider.
+cloud provider. There are many other third-party cloud provider projects, but this list is specific to projects embedded within, or relied upon by Kubernetes itself.

<!-- body -->
### kubeadm

@@ -116,15 +116,6 @@ If you wish to use the external cloud provider, its repository is [kubernetes/cl
The Azure cloud provider uses the hostname of the node (as determined by the kubelet or overridden with `--hostname-override`) as the name of the Kubernetes Node object.
Note that the Kubernetes Node name must match the Azure VM name.

-## CloudStack
-
-If you wish to use the external cloud provider, its repository is [apache/cloudstack-kubernetes-provider](https://github.com/apache/cloudstack-kubernetes-provider)
-
-### Node Name
-
-The CloudStack cloud provider uses the hostname of the node (as determined by the kubelet or overridden with `--hostname-override`) as the name of the Kubernetes Node object.
-Note that the Kubernetes Node name must match the CloudStack VM name.
-
## GCE

If you wish to use the external cloud provider, its repository is [kubernetes/cloud-provider-gcp](https://github.com/kubernetes/cloud-provider-gcp#readme)

@@ -138,11 +129,6 @@ Note that the first segment of the Kubernetes Node name must match the GCE insta
If you wish to use the external cloud provider, its repository is [kubernetes-sigs/cloud-provider-huaweicloud](https://github.com/kubernetes-sigs/cloud-provider-huaweicloud).

-### Node Name
-
-The HUAWEI CLOUD provider needs the private IP address of the node as the name of the Kubernetes Node object.
-Please make sure indicating `--hostname-override=<node private IP>` when starting kubelet on the node.
-
## OpenStack
This section describes all the possible configurations which can
be used when using OpenStack with Kubernetes.

@@ -251,11 +237,9 @@ file:
  values are `v1` or `v2`. Where no value is provided automatic detection will
  select the highest supported version exposed by the underlying OpenStack
  cloud.
-* `use-octavia` (Optional): Used to determine whether to look for and use an
-  Octavia LBaaS V2 service catalog endpoint. Valid values are `true` or `false`.
-  Where `true` is specified and an Octaiva LBaaS V2 entry can not be found, the
-  provider will fall back and attempt to find a Neutron LBaaS V2 endpoint
-  instead. The default value is `false`.
+* `use-octavia` (Optional): Whether or not to use Octavia for the LoadBalancer
+  type of Service implementation instead of Neutron-LBaaS. Default: `true`.
+  Attention: the OpenStack CCM uses Octavia as the default load balancer
+  implementation since v1.17.0.
* `subnet-id` (Optional): Used to specify the id of the subnet you want to
  create your loadbalancer on. Can be found at Network > Networks. Click on the
  respective network to get its subnets.

@@ -362,19 +346,7 @@ Kubernetes network plugin and should appear in the `[Route]` section of the
[kubenet](/docs/concepts/cluster-administration/network-plugins/#kubenet)
on OpenStack.

-## OVirt
-
-### Node Name
-
-The OVirt cloud provider uses the hostname of the node (as determined by the kubelet or overridden with `--hostname-override`) as the name of the Kubernetes Node object.
-Note that the Kubernetes Node name must match the VM FQDN (reported by OVirt under `<vm><guest_info><fqdn>...</fqdn></guest_info></vm>`)
-
-## Photon
-
-### Node Name
-
-The Photon cloud provider uses the hostname of the node (as determined by the kubelet or overridden with `--hostname-override`) as the name of the Kubernetes Node object.
-Note that the Kubernetes Node name must match the Photon VM name (or if `overrideIP` is set to true in the `--cloud-config`, the Kubernetes Node name must match the Photon VM IP address).
-[kubenet]: /docs/concepts/cluster-administration/network-plugins/#kubenet

## vSphere

@@ -388,46 +360,3 @@ If you are running vSphere < 6.7U3, the in-tree vSphere cloud provider is recomm
{{< /tabs >}}

For in-depth documentation on the vSphere cloud provider, visit the [vSphere cloud provider docs site](https://cloud-provider-vsphere.sigs.k8s.io).
-
-## IBM Cloud Kubernetes Service
-
-### Compute nodes
-By using the IBM Cloud Kubernetes Service provider, you can create clusters with a mixture of virtual and physical (bare metal) nodes in a single zone or across multiple zones in a region. For more information, see [Planning your cluster and worker node setup](https://cloud.ibm.com/docs/containers?topic=containers-planning_worker_nodes).
-
-The name of the Kubernetes Node object is the private IP address of the IBM Cloud Kubernetes Service worker node instance.
-
-### Networking
-The IBM Cloud Kubernetes Service provider provides VLANs for quality network performance and network isolation for nodes. You can set up custom firewalls and Calico network policies to add an extra layer of security for your cluster, or connect your cluster to your on-prem data center via VPN. For more information, see [Planning your cluster network setup](https://cloud.ibm.com/docs/containers?topic=containers-plan_clusters).
-
-To expose apps to the public or within the cluster, you can leverage NodePort, LoadBalancer, or Ingress services. You can also customize the Ingress application load balancer with annotations. For more information, see [Choosing an app exposure service](https://cloud.ibm.com/docs/containers?topic=containers-cs_network_planning#cs_network_planning).
-
-### Storage
-The IBM Cloud Kubernetes Service provider leverages Kubernetes-native persistent volumes to enable users to mount file, block, and cloud object storage to their apps. You can also use database-as-a-service and third-party add-ons for persistent storage of your data. For more information, see [Planning highly available persistent storage](https://cloud.ibm.com/docs/containers?topic=containers-storage_planning#storage_planning).
-
-## Baidu Cloud Container Engine
-
-### Node Name
-
-The Baidu cloud provider uses the private IP address of the node (as determined by the kubelet or overridden with `--hostname-override`) as the name of the Kubernetes Node object.
-Note that the Kubernetes Node name must match the Baidu VM private IP.
-
-## Tencent Kubernetes Engine
-
-If you wish to use the external cloud provider, its repository is [TencentCloud/tencentcloud-cloud-controller-manager](https://github.com/TencentCloud/tencentcloud-cloud-controller-manager).
-
-### Node Name
-
-The Tencent cloud provider uses the hostname of the node (as determined by the kubelet or overridden with `--hostname-override`) as the name of the Kubernetes Node object.
-Note that the Kubernetes Node name must match the Tencent VM private IP.
-
-## Alibaba Cloud Kubernetes
-
-If you wish to use the external cloud provider, its repository is [kubernetes/cloud-provider-alibaba-cloud](https://github.com/kubernetes/cloud-provider-alibaba-cloud).
-
-### Node Name
-
-Alibaba Cloud does not require the format of node name, but the kubelet needs to add `--provider-id=${REGION_ID}.${INSTANCE_ID}`. The parameter `${REGION_ID}` represents the region id of the Kubernetes and `${INSTANCE_ID}` denotes the Alibaba ECS (Elastic Compute Service) ID.
-
-### Load Balancers
-
-You can setup external load balancers to use specific features in Alibaba Cloud by configuring the [annotations](https://www.alibabacloud.com/help/en/doc-detail/86531.htm) .

@@ -111,7 +111,7 @@ CPU is always requested as an absolute quantity, never as a relative quantity;
### Meaning of memory

Limits and requests for `memory` are measured in bytes. You can express memory as
-a plain integer or as a fixed-point integer using one of these suffixes:
+a plain integer or as a fixed-point number using one of these suffixes:
E, P, T, G, M, K. You can also use the power-of-two equivalents: Ei, Pi, Ti, Gi,
Mi, Ki. For example, the following represent roughly the same value:
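As one hypothetical illustration in manifest form, `123Mi` (binary) and `129M`
(decimal) both denote roughly 129 million bytes:

```yaml
# Hypothetical Pod: the 123Mi request and 129M limit are roughly the
# same quantity expressed in power-of-two vs decimal suffixes.
apiVersion: v1
kind: Pod
metadata:
  name: memory-units-demo
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        memory: "123Mi"
      limits:
        memory: "129M"
```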

@@ -311,7 +311,7 @@ You can use _ephemeral-storage_ for managing local ephemeral storage. Each Conta
* `spec.containers[].resources.requests.ephemeral-storage`

Limits and requests for `ephemeral-storage` are measured in bytes. You can express storage as
-a plain integer or as a fixed-point integer using one of these suffixes:
+a plain integer or as a fixed-point number using one of these suffixes:
E, P, T, G, M, K. You can also use the power-of-two equivalents: Ei, Pi, Ti, Gi,
Mi, Ki. For example, the following represent roughly the same value:

@@ -172,7 +172,7 @@ Node Score:
intel.com/foo = resourceScoringFunction((2+2),8)
             = 100 - ((8-4)*100/8)
-            = (100 - 25)
+            = (100 - 50)
             = 50
             = rawScoringFunction(50)
             = 5
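Spelling out the arithmetic implied by the worked example, with 4 units
requested out of a capacity of 8:

$$
100 - \frac{(8 - 4) \times 100}{8} = 100 - 50 = 50
$$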

@@ -352,6 +352,8 @@ NAME TYPE DATA
db-user-pass-96mffmfh4k   Opaque   2      51s
```

You can view a description of the secret:

```shell
kubectl describe secrets/db-user-pass-96mffmfh4k
```

@@ -1002,6 +1004,8 @@ The output is similar to:
secret "prod-db-secret" created
|
||||
```
|
||||
|
||||
You can also create a secret for test environment credentials.
|
||||
|
||||
```shell
|
||||
kubectl create secret generic test-db-secret --from-literal=username=testuser --from-literal=password=iluvtests
|
||||
```
|
||||
|
|
|
@ -30,7 +30,7 @@ There are two hooks that are exposed to Containers:
|
|||
|
||||
`PostStart`
|
||||
|
||||
This hook executes immediately after a container is created.
|
||||
This hook is executed immediately after a container is created.
|
||||
However, there is no guarantee that the hook will execute before the container ENTRYPOINT.
|
||||
No parameters are passed to the handler.
|
||||
|
||||
|
|
|
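A minimal sketch of a Pod with a `PostStart` hook (the name, image, and command are illustrative); because the hook may run concurrently with the container ENTRYPOINT, nothing in it should rely on ordering:

```shell
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: hook-demo
spec:
  containers:
  - name: app
    image: nginx
    lifecycle:
      postStart:
        exec:
          # Runs after the container is created; no parameters are passed in.
          command: ["/bin/sh", "-c", "echo container started > /tmp/started"]
EOF
```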
@ -94,7 +94,7 @@ runtime to authenticate to a private container registry.

This approach is suitable if you can control node configuration.

{{< note >}}
Kubernetes as only supports the `auths` and `HttpHeaders` section in Docker configuration.
Default Kubernetes only supports the `auths` and `HttpHeaders` section in Docker configuration.
Docker credential helpers (`credHelpers` or `credsStore`) are not supported.
{{< /note >}}
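A sketch of a node-level Docker configuration that stays within the supported `auths` section; the registry host, credentials, and file path below are placeholders, so adjust the path to wherever your kubelet looks for `config.json`:

```shell
# Placeholder registry and credentials; do not use real secrets inline like this
# outside of a sketch.
cat <<EOF > /var/lib/kubelet/config.json
{
  "auths": {
    "registry.example.com": {
      "auth": "$(echo -n 'myuser:mypassword' | base64)"
    }
  }
}
EOF
```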
@ -9,7 +9,7 @@ weight: 30

Operators are software extensions to Kubernetes that make use of
[custom resources](/docs/concepts/extend-kubernetes/api-extension/custom-resources/)
to manage applications and their components. Operators follow
Kubernetes principles, notably the [control loop](/docs/concepts/#kubernetes-control-plane).
Kubernetes principles, notably the [control loop](/docs/concepts/architecture/controller).

<!-- body -->

@ -209,11 +209,11 @@ well as lower-trust users.The following listed controls should be enforced/disal

<tr>
  <td>Privilege Escalation</td>
  <td>
    Privilege escalation to root should not be allowed.<br>
    Privilege escalation (such as via set-user-ID or set-group-ID file mode) should not be allowed.<br>
    <br><b>Restricted Fields:</b><br>
    spec.containers[*].securityContext.privileged<br>
    spec.initContainers[*].securityContext.privileged<br>
    <br><b>Allowed Values:</b> false, undefined/nil<br>
    spec.containers[*].securityContext.allowPrivilegeEscalation<br>
    spec.initContainers[*].securityContext.allowPrivilegeEscalation<br>
    <br><b>Allowed Values:</b> false<br>
  </td>
</tr>
<tr>

@ -259,7 +259,7 @@ Expanding EBS volumes is a time-consuming operation. Also, there is a per-volume

If expanding underlying storage fails, the cluster administrator can manually recover the Persistent Volume Claim (PVC) state and cancel the resize requests. Otherwise, the resize requests are continuously retried by the controller without administrator intervention.

1. Mark the PersistentVolume (PV) that is bound to the PersistentVolumeClaim (PVC) with the `Retain` reclaim policy.
2. Delete the PVC. Since PV has `Retain` reclaim policy - we will not loose any data when we recreate the PVC.
2. Delete the PVC. Since PV has `Retain` reclaim policy - we will not lose any data when we recreate the PVC.
3. Delete the `claimRef` entry from the PV spec, so that a new PVC can bind to it. This should make the PV `Available`.
4. Re-create the PVC with a smaller size than the PV and set the `volumeName` field of the PVC to the name of the PV. This should bind the new PVC to the existing PV.
5. Don't forget to restore the reclaim policy of the PV (see the command sketch after this list).
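A command-level sketch of the recovery steps above; the PV and PVC names are illustrative:

```shell
# 1. Protect the data: switch the bound PV to the Retain reclaim policy.
kubectl patch pv example-pv -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'

# 2. Delete the PVC; the Retain policy keeps the underlying volume and its data.
kubectl delete pvc example-pvc

# 3. Clear the claimRef so a new PVC can bind; the PV should become Available.
kubectl patch pv example-pv --type json -p '[{"op":"remove","path":"/spec/claimRef"}]'

# 4. Re-create the PVC with the smaller size and spec.volumeName set to example-pv.
# 5. Finally, restore the PV's original reclaim policy, for example:
kubectl patch pv example-pv -p '{"spec":{"persistentVolumeReclaimPolicy":"Delete"}}'
```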
@ -94,14 +94,14 @@ using the attribute `volumeSnapshotClassName`. If nothing is set, then the defau

For pre-provisioned snapshots, you need to specify a `volumeSnapshotContentName` as the source for the snapshot as shown in the following example. The `volumeSnapshotContentName` source field is required for pre-provisioned snapshots.

```
```yaml
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshot
metadata:
  name: test-snapshot
spec:
  source:
        volumeSnapshotContentName: test-content
    volumeSnapshotContentName: test-content
```

## Volume Snapshot Contents

@ -7,7 +7,7 @@ reviewers:

- kow3ns
title: DaemonSet
content_type: concept
weight: 50
weight: 40
---

<!-- overview -->

@ -8,7 +8,7 @@ feature:

    Kubernetes progressively rolls out changes to your application or its configuration, while monitoring application health to ensure it doesn't kill all your instances at the same time. If something goes wrong, Kubernetes will rollback the change for you. Take advantage of a growing ecosystem of deployment solutions.

content_type: concept
weight: 30
weight: 10
---

<!-- overview -->

@ -84,7 +84,7 @@ Follow the steps given below to create the above Deployment:

2. Run `kubectl get deployments` to check if the Deployment was created.

   If the Deployment is still being created, the output is similar to the following:
   ```shell
   ```
   NAME               READY   UP-TO-DATE   AVAILABLE   AGE
   nginx-deployment   0/3     0            0           1s
   ```

@ -100,21 +100,21 @@ Follow the steps given below to create the above Deployment:

3. To see the Deployment rollout status, run `kubectl rollout status deployment.v1.apps/nginx-deployment`.

   The output is similar to:
   ```shell
   ```
   Waiting for rollout to finish: 2 out of 3 new replicas have been updated...
   deployment.apps/nginx-deployment successfully rolled out
   ```

4. Run the `kubectl get deployments` again a few seconds later.
   The output is similar to this:
   ```shell
   ```
   NAME               READY   UP-TO-DATE   AVAILABLE   AGE
   nginx-deployment   3/3     3            3           18s
   ```
   Notice that the Deployment has created all three replicas, and all replicas are up-to-date (they contain the latest Pod template) and available.

5. To see the ReplicaSet (`rs`) created by the Deployment, run `kubectl get rs`. The output is similar to this:
   ```shell
   ```
   NAME                          DESIRED   CURRENT   READY   AGE
   nginx-deployment-75675f5897   3         3         3       18s
   ```

@ -131,7 +131,7 @@ Follow the steps given below to create the above Deployment:

6. To see the labels automatically generated for each Pod, run `kubectl get pods --show-labels`.
   The output is similar to:
   ```shell
   ```
   NAME                                READY   STATUS    RESTARTS   AGE   LABELS
   nginx-deployment-75675f5897-7ci7o   1/1     Running   0          18s   app=nginx,pod-template-hash=3123191453
   nginx-deployment-75675f5897-kzszj   1/1     Running   0          18s   app=nginx,pod-template-hash=3123191453

@ -1054,7 +1054,7 @@ The `.spec.template` and `.spec.selector` are the only required field of the `.s

The `.spec.template` is a [Pod template](/docs/concepts/workloads/pods/#pod-templates). It has exactly the same schema as a {{< glossary_tooltip text="Pod" term_id="pod" >}}, except it is nested and does not have an `apiVersion` or `kind`.

In addition to required fields for a Pod, a Pod template in a Deployment must specify appropriate
labels and an appropriate restart policy. For labels, make sure not to overlap with other controllers. See [selector](#selector)).
labels and an appropriate restart policy. For labels, make sure not to overlap with other controllers. See [selector](#selector).

Only a [`.spec.template.spec.restartPolicy`](/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy) equal to `Always` is
allowed, which is the default if not specified.

@ -1,7 +1,7 @@

---
title: Garbage Collection
content_type: concept
weight: 70
weight: 60
---

<!-- overview -->

@ -8,7 +8,7 @@ feature:

  title: Batch execution
  description: >
    In addition to services, Kubernetes can manage your batch and CI workloads, replacing containers that fail, if desired.
weight: 60
weight: 50
---

<!-- overview -->

@ -5,7 +5,7 @@ reviewers:

- madhusudancs
title: ReplicaSet
content_type: concept
weight: 10
weight: 20
---

<!-- overview -->

@ -245,9 +245,10 @@ For the template's [restart policy](/docs/concepts/workloads/Pods/pod-lifecycle/

The `.spec.selector` field is a [label selector](/docs/concepts/overview/working-with-objects/labels/). As discussed
[earlier](#how-a-replicaset-works) these are the labels used to identify potential Pods to acquire. In our
`frontend.yaml` example, the selector was:
```shell

```yaml
matchLabels:
	tier: frontend
  tier: frontend
```

In the ReplicaSet, `.spec.template.metadata.labels` must match `spec.selector`, or it will

@ -10,7 +10,7 @@ feature:

    Restarts containers that fail, replaces and reschedules containers when nodes die, kills containers that don't respond to your user-defined health check, and doesn't advertise them to clients until they are ready to serve.

content_type: concept
weight: 20
weight: 90
---

<!-- overview -->

@ -8,7 +8,7 @@ reviewers:

- smarterclayton
title: StatefulSets
content_type: concept
weight: 40
weight: 30
---

<!-- overview -->

@ -124,9 +124,9 @@ about when the container entered the `Running` state.

### `Terminated` {#container-state-terminated}

A container in the `Terminated` state has begin execution and has then either run to
completion or has failed for some reason. When you use `kubectl` to query a Pod with
a container that is `Terminated`, you see a reason, and exit code, and the start and
A container in the `Terminated` state began execution and then either ran to
completion or failed for some reason. When you use `kubectl` to query a Pod with
a container that is `Terminated`, you see a reason, an exit code, and the start and
finish time for that container's period of execution.

If a container has a `preStop` hook configured, that runs before the container enters
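For example, assuming a Pod named `my-pod` whose first container is currently in the `Terminated` state, a sketch of querying those fields directly:

```shell
# Reason, exit code, and start/finish times of the terminated container.
kubectl get pod my-pod \
  -o jsonpath='{.status.containerStatuses[0].state.terminated}'
```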
@ -341,8 +341,8 @@ before the Pod is allowed to be forcefully killed. With that forceful shutdown t

place, the {{< glossary_tooltip text="kubelet" term_id="kubelet" >}} attempts graceful
shutdown.

Typically, the container runtime sends a a TERM signal is sent to the main process in each
container. Once the grace period has expired, the KILL signal is sent to any remainig
Typically, the container runtime sends a TERM signal to the main process in each
container. Once the grace period has expired, the KILL signal is sent to any remaining
processes, and the Pod is then deleted from the
{{< glossary_tooltip text="API Server" term_id="kube-apiserver" >}}. If the kubelet or the
container runtime's management service is restarted while waiting for processes to terminate, the
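A sketch of observing this sequence, with an illustrative Pod name and grace period:

```shell
# The container runtime sends TERM to each container's main process immediately;
# after 30 seconds any surviving processes receive KILL and the Pod object is
# removed from the API server.
kubectl delete pod my-app-pod --grace-period=30
```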
@ -442,4 +442,3 @@ This avoids a resource leak as Pods are created and terminated over time.

* For detailed information about Pod / Container status in the API, see [PodStatus](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#podstatus-v1-core)
  and
  [ContainerStatus](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#containerstatus-v1-core).

@ -669,7 +669,7 @@ rules:
```

Extra fields are evaluated as sub-resources of the resource "userextras". To
allow a user to use impersonation headers for the extra field "scopes," a user
allow a user to use impersonation headers for the extra field "scopes", a user
should be granted the following role:

```yaml

@ -2,7 +2,7 @@
title: API server
id: kube-apiserver
date: 2018-04-12
full_link: /docs/reference/generated/kube-apiserver/
full_link: /docs/concepts/overview/components/#kube-apiserver
short_description: >
  Control plane component that serves the Kubernetes API.
@ -12,17 +12,11 @@ card:

<!-- overview -->

See also: [Kubectl Overview](/docs/reference/kubectl/overview/) and [JsonPath Guide](/docs/reference/kubectl/jsonpath).

This page is an overview of the `kubectl` command.

This page contains a list of commonly used `kubectl` commands and flags.

<!-- body -->

# kubectl - Cheat Sheet

## Kubectl Autocomplete
## Kubectl autocomplete

### BASH

@ -31,7 +25,7 @@ source <(kubectl completion bash) # setup autocomplete in bash into the current

echo "source <(kubectl completion bash)" >> ~/.bashrc # add autocomplete permanently to your bash shell.
```

You can also use a shorthand alias for `kubectl` that also works with completion:
You can also use a shorthand alias for `kubectl` that also works with completion:

```bash
alias k=kubectl

@ -45,7 +39,7 @@ source <(kubectl completion zsh)  # setup autocomplete in zsh into the current s

echo "[[ $commands[kubectl] ]] && source <(kubectl completion zsh)" >> ~/.zshrc # add autocomplete permanently to your zsh shell
```

## Kubectl Context and Configuration
## Kubectl context and configuration

Set which Kubernetes cluster `kubectl` communicates with and modifies configuration
information. See [Authenticating Across Clusters with kubeconfig](/docs/tasks/access-application-cluster/configure-access-multiple-clusters/) documentation for

@ -77,14 +71,15 @@ kubectl config set-context --current --namespace=ggckad-s2

# set a context utilizing a specific username and namespace.
kubectl config set-context gce --user=cluster-admin --namespace=foo \
  && kubectl config use-context gce

kubectl config unset users.foo   # delete user foo
```

## Apply
## Kubectl apply

`apply` manages applications through files defining Kubernetes resources. It creates and updates resources in a cluster through running `kubectl apply`. This is the recommended way of managing Kubernetes applications on production. See [Kubectl Book](https://kubectl.docs.kubernetes.io).

## Creating Objects
## Creating objects

Kubernetes manifests can be defined in YAML or JSON. The file extension `.yaml`,
`.yml`, and `.json` can be used.

@ -138,7 +133,7 @@ EOF

```

## Viewing, Finding Resources
## Viewing, finding resources

```bash
# Get commands with basic output

@ -213,8 +208,7 @@ kubectl get nodes -o json | jq -c 'path(..)|[.[]|tostring]|join(".")'

kubectl get pods -o json | jq -c 'path(..)|[.[]|tostring]|join(".")'
```

## Updating Resources
## Updating resources

```bash
kubectl set image deployment/frontend www=image:v2   # Rolling update "www" containers of "frontend" deployment, updating the image

@ -241,7 +235,7 @@ kubectl annotate pods my-pod icon-url=http://goo.gl/XXBTWq # Add an annota

kubectl autoscale deployment foo --min=2 --max=10   # Auto scale a deployment "foo"
```

## Patching Resources
## Patching resources

```bash
# Partially update a node

@ -260,7 +254,8 @@ kubectl patch deployment valid-deployment  --type json   -p='[{"op": "remove", "

kubectl patch sa default --type='json' -p='[{"op": "add", "path": "/secrets/1", "value": {"name": "whatever" } }]'
```

## Editing Resources
## Editing resources

Edit any API resource in your preferred editor.

```bash

@ -268,7 +263,7 @@ kubectl edit svc/docker-registry                      # Edit the service named d

KUBE_EDITOR="nano" kubectl edit svc/docker-registry   # Use an alternative editor
```

## Scaling Resources
## Scaling resources

```bash
kubectl scale --replicas=3 rs/foo   # Scale a replicaset named 'foo' to 3

@ -277,7 +272,7 @@ kubectl scale --current-replicas=2 --replicas=3 deployment/mysql  # If the deplo

kubectl scale --replicas=5 rc/foo rc/bar rc/baz   # Scale multiple replication controllers
```

## Deleting Resources
## Deleting resources

```bash
kubectl delete -f ./pod.json   # Delete a pod using the type and name specified in pod.json

@ -313,7 +308,7 @@ kubectl exec my-pod -c my-container -- ls /   # Run command in existing po

kubectl top pod POD_NAME --containers   # Show metrics for a given pod and its containers
```

## Interacting with Nodes and Cluster
## Interacting with Nodes and cluster

```bash
kubectl cordon my-node   # Mark my-node as unschedulable

@ -393,17 +388,12 @@ Verbosity | Description

`--v=8` | Display HTTP request contents.
`--v=9` | Display HTTP request contents without truncation of contents.

## {{% heading "whatsnext" %}}

* Learn more about [Overview of kubectl](/docs/reference/kubectl/overview/).
* Read the [kubectl overview](/docs/reference/kubectl/overview/) and learn about [JsonPath](/docs/reference/kubectl/jsonpath).

* See [kubectl](/docs/reference/kubectl/kubectl/) options.

* Also [kubectl Usage Conventions](/docs/reference/kubectl/conventions/) to understand how to use it in reusable scripts.
* Also read [kubectl Usage Conventions](/docs/reference/kubectl/conventions/) to understand how to use kubectl in reusable scripts.

* See more community [kubectl cheatsheets](https://github.com/dennyzhang/cheatsheet-kubernetes-A4).
@ -13,7 +13,17 @@ file and passing its path as a command line argument.

<!-- body -->

## Minimal Configuration
A scheduling Profile allows you to configure the different stages of scheduling
in the {{< glossary_tooltip text="kube-scheduler" term_id="kube-scheduler" >}}.
Each stage is exposed in an extension point. Plugins provide scheduling behaviors
by implementing one or more of these extension points.

You can specify scheduling profiles by running `kube-scheduler --config <filename>`,
using the component config APIs
([`v1alpha1`](https://pkg.go.dev/k8s.io/kube-scheduler@v0.18.0/config/v1alpha1?tab=doc#KubeSchedulerConfiguration)
or [`v1alpha2`](https://pkg.go.dev/k8s.io/kube-scheduler@v0.18.0/config/v1alpha2?tab=doc#KubeSchedulerConfiguration)).
The `v1alpha2` API allows you to configure kube-scheduler to run
[multiple profiles](#multiple-profiles).

A minimal configuration looks as follows:

@ -90,6 +100,10 @@ for that extension point. This can also be used to rearrange plugins order, if

desired.

### Scheduling plugins
1. `UnReserve`: This is an informational extension point that is called if
   a Pod is rejected after being reserved and put on hold by a `Permit` plugin.

## Scheduling plugins

The following plugins, enabled by default, implement one or more of these
extension points:

@ -236,4 +250,3 @@ only has one pending pods queue.

* Read the [kube-scheduler reference](https://kubernetes.io/docs/reference/command-line-tools-reference/kube-scheduler/)
* Learn about [scheduling](/docs/concepts/scheduling-eviction/kube-scheduler/)
@ -221,64 +221,89 @@ sysctl --system

{{< tabs name="tab-cri-cri-o-installation" >}}
{{% tab name="Debian" %}}

To install CRI-O on the following operating systems, set the environment variable $OS to the appropriate field in the following table:

| Operating system | $OS               |
| ---------------- | ----------------- |
| Debian Unstable  | `Debian_Unstable` |
| Debian Testing   | `Debian_Testing`  |

<br />
Then, set `$VERSION` to the CRI-O version that matches your Kubernetes version.
For instance, if you want to install CRI-O 1.18, set `VERSION=1.18`.
You can pin your installation to a specific release.
To install version 1.18.3, set `VERSION=1.18:1.18.3`.
<br />

Then run
```shell
# Debian Unstable/Sid
echo 'deb http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/Debian_Unstable/ /' > /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list
wget -nv https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable/Debian_Unstable/Release.key -O- | sudo apt-key add -
echo "deb https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/ /" > /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list
echo "deb http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/$VERSION/$OS/ /" > /etc/apt/sources.list.d/devel:kubic:libcontainers:stable:cri-o:$VERSION.list

curl -L https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable:cri-o:$VERSION/$OS/Release.key | apt-key add -
curl -L https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/Release.key | apt-key add -

apt-get update
apt-get install cri-o cri-o-runc
```

```shell
# Debian Testing
echo 'deb http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/Debian_Testing/ /' > /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list
wget -nv https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable/Debian_Testing/Release.key -O- | sudo apt-key add -
```
{{% /tab %}}

```shell
# Debian 10
echo 'deb http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/Debian_10/ /' > /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list
wget -nv https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable/Debian_10/Release.key -O- | sudo apt-key add -
```
{{% tab name="Ubuntu" %}}

```shell
# Raspbian 10
echo 'deb http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/Raspbian_10/ /' > /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list
wget -nv https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable/Raspbian_10/Release.key -O- | sudo apt-key add -
```
To install on the following operating systems, set the environment variable $OS to the appropriate field in the following table:

and then install CRI-O:
| Operating system | $OS               |
| ---------------- | ----------------- |
| Ubuntu 20.04     | `xUbuntu_20.04`   |
| Ubuntu 19.10     | `xUbuntu_19.10`   |
| Ubuntu 19.04     | `xUbuntu_19.04`   |
| Ubuntu 18.04     | `xUbuntu_18.04`   |

<br />
Then, set `$VERSION` to the CRI-O version that matches your Kubernetes version.
For instance, if you want to install CRI-O 1.18, set `VERSION=1.18`.
You can pin your installation to a specific release.
To install version 1.18.3, set `VERSION=1.18:1.18.3`.
<br />

Then run
```shell
sudo apt-get install cri-o-1.17
echo "deb https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/ /" > /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list
echo "deb http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/$VERSION/$OS/ /" > /etc/apt/sources.list.d/devel:kubic:libcontainers:stable:cri-o:$VERSION.list

curl -L https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable:cri-o:$VERSION/$OS/Release.key | apt-key add -
curl -L https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/Release.key | apt-key add -

apt-get update
apt-get install cri-o cri-o-runc
```
{{% /tab %}}

{{% tab name="Ubuntu 18.04, 19.04 and 19.10" %}}
{{% tab name="CentOS" %}}

To install on the following operating systems, set the environment variable $OS to the appropriate field in the following table:

| Operating system | $OS               |
| ---------------- | ----------------- |
| Centos 8         | `CentOS_8`        |
| Centos 8 Stream  | `CentOS_8_Stream` |
| Centos 7         | `CentOS_7`        |

<br />
Then, set `$VERSION` to the CRI-O version that matches your Kubernetes version.
For instance, if you want to install CRI-O 1.18, set `VERSION=1.18`.
You can pin your installation to a specific release.
To install version 1.18.3, set `VERSION=1.18:1.18.3`.
<br />

Then run
```shell
# Configure package repository
. /etc/os-release
sudo sh -c "echo 'deb http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/x${NAME}_${VERSION_ID}/ /' > /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list"
wget -nv https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable/x${NAME}_${VERSION_ID}/Release.key -O- | sudo apt-key add -
sudo apt-get update
curl -L -o /etc/yum.repos.d/devel:kubic:libcontainers:stable.repo https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/devel:kubic:libcontainers:stable.repo
curl -L -o /etc/yum.repos.d/devel:kubic:libcontainers:stable:cri-o:$VERSION.repo https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable:cri-o:$VERSION/$OS/devel:kubic:libcontainers:stable:cri-o:$VERSION.repo
yum install cri-o
```

```shell
# Install CRI-O
sudo apt-get install cri-o-1.17
```
{{% /tab %}}

{{% tab name="CentOS/RHEL 7.4+" %}}

```shell
# Install prerequisites
curl -L -o /etc/yum.repos.d/devel:kubic:libcontainers:stable.repo https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable/CentOS_7/devel:kubic:libcontainers:stable.repo
curl -L -o /etc/yum.repos.d/devel:kubic:libcontainers:stable:cri-o:{{< skew latestVersion >}}.repo https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable:cri-o:{{< skew latestVersion >}}/CentOS_7/devel:kubic:libcontainers:stable:cri-o:{{< skew latestVersion >}}.repo
```

```shell
# Install CRI-O
yum install -y cri-o
```
{{% /tab %}}

{{% tab name="openSUSE Tumbleweed" %}}

@ -286,6 +311,23 @@ yum install -y cri-o

```shell
sudo zypper install cri-o
```
{{% /tab %}}
{{% tab name="Fedora" %}}

Set `$VERSION` to the CRI-O version that matches your Kubernetes version.
For instance, if you want to install CRI-O 1.18, `VERSION=1.18`
You can find available versions with:
```shell
dnf module list cri-o
```
CRI-O does not support pinning to specific releases on Fedora.

Then run
```shell
dnf module enable cri-o:$VERSION
dnf install cri-o
```

{{% /tab %}}
{{< /tabs >}}
@ -1,14 +1,14 @@

---
reviewers:
- sig-cluster-lifecycle
title: Creating a single control-plane cluster with kubeadm
title: Creating a cluster with kubeadm
content_type: task
weight: 30
---

<!-- overview -->

<img src="https://raw.githubusercontent.com/kubernetes/kubeadm/master/logos/stacked/color/kubeadm-stacked-color.png" align="right" width="150px">The `kubeadm` tool helps you bootstrap a minimum viable Kubernetes cluster that conforms to best practices. In fact, you can use `kubeadm` to set up a cluster that will pass the [Kubernetes Conformance tests](https://kubernetes.io/blog/2017/10/software-conformance-certification).
<img src="https://raw.githubusercontent.com/kubernetes/kubeadm/master/logos/stacked/color/kubeadm-stacked-color.png" align="right" width="150px">Creating a minimum viable Kubernetes cluster that conforms to best practices. In fact, you can use `kubeadm` to set up a cluster that will pass the [Kubernetes Conformance tests](https://kubernetes.io/blog/2017/10/software-conformance-certification).
`kubeadm` also supports other cluster
lifecycle functions, such as [bootstrap tokens](/docs/reference/access-authn-authz/bootstrap-tokens/) and cluster upgrades.

@ -60,7 +60,7 @@ Any commands under `kubeadm alpha` are, by definition, supported on an alpha lev

## Objectives

* Install a single control-plane Kubernetes cluster or [high-availability cluster](/docs/setup/production-environment/tools/kubeadm/high-availability/)
* Install a single control-plane Kubernetes cluster
* Install a Pod network on the cluster so that your Pods can
  talk to each other

@ -554,7 +554,7 @@ Workarounds:

* Use multiple control-plane nodes. You can read
  [Options for Highly Available topology](/docs/setup/production-environment/tools/kubeadm/ha-topology/) to pick a cluster
  topology that provides higher availabilty.
  topology that provides [high-availability](/docs/setup/production-environment/tools/kubeadm/high-availability/).

### Platform compatibility {#multi-platform}
@ -155,7 +155,7 @@ Running a cluster with `kubelet` instances that are persistently two minor versi

Example:

If `kube-proxy` version is **{{< skew latestVersion >}}**:
If `kube-proxy` version is **{{< skew oldestMinorVersion >}}**:

* `kubelet` version must be at the same minor version as **{{< skew latestVersion >}}**.
* `kubelet` version must be at the same minor version as **{{< skew oldestMinorVersion >}}**.
* `kube-apiserver` version must be between **{{< skew oldestMinorVersion >}}** and **{{< skew latestVersion >}}**, inclusive.

@ -12,4 +12,4 @@ show how to do individual tasks. A task page shows how to do a

single thing, typically by giving a short sequence of steps.

If you would like to write a task page, see
[Creating a Documentation Pull Request](/docs/home/contribute/create-pull-request/).
[Creating a Documentation Pull Request](/docs/contribute/new-content/open-a-pr/).
@ -80,7 +80,8 @@ spec:

    requests:
      cpu: 700m
      memory: 200Mi
...
...
status:
  qosClass: Guaranteed
```

@ -134,7 +135,8 @@ spec:

      memory: 200Mi
    requests:
      memory: 100Mi
...
...
status:
  qosClass: Burstable
```

@ -174,6 +176,7 @@ spec:

  ...
  resources: {}
  ...
status:
  qosClass: BestEffort
```

@ -219,6 +222,7 @@ spec:

    name: qos-demo-4-ctr-2
    resources: {}
  ...
status:
  qosClass: Burstable
```
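To check which class Kubernetes assigned without dumping the whole manifest, a sketch with an illustrative Pod name:

```shell
kubectl get pod qos-demo -o jsonpath='{.status.qosClass}'
# Prints one of: Guaranteed, Burstable, BestEffort
```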
@ -251,7 +251,7 @@ different audit policies.

### Use fluentd to collect and distribute audit events from log file

[Fluentd](http://www.fluentd.org/) is an open source data collector for unified logging layer.
[Fluentd](https://www.fluentd.org/) is an open source data collector for unified logging layer.
In this example, we will use fluentd to split audit events by different namespaces.

{{< note >}}

@ -379,7 +379,7 @@ different users into different files.

bin/logstash -f /etc/logstash/config --path.settings /etc/logstash/
```

1. create a [kubeconfig file](/docs/tasks/access-application-cluster/authenticate-across-clusters-kubeconfig/) for kube-apiserver webhook audit backend
1. create a [kubeconfig file](/docs/tasks/access-application-cluster/configure-access-multiple-clusters/) for kube-apiserver webhook audit backend

   cat <<EOF > /etc/kubernetes/audit-webhook-kubeconfig
   apiVersion: v1

@ -413,9 +413,5 @@ plugin which supports full-text search and analytics.

## {{% heading "whatsnext" %}}

Visit [Auditing with Falco](/docs/tasks/debug-application-cluster/falco).

Learn about [Mutating webhook auditing annotations](/docs/reference/access-authn-authz/extensible-admission-controllers/#mutating-webhook-auditing-annotations).
@ -10,10 +10,7 @@ content_type: concept

This guide is to help users debug applications that are deployed into Kubernetes and not behaving correctly.
This is *not* a guide for people who want to debug their cluster. For that you should check out
[this guide](/docs/admin/cluster-troubleshooting).
[this guide](/docs/tasks/debug-application-cluster/debug-cluster).

<!-- body -->

@ -46,7 +43,8 @@ there are insufficient resources of one type or another that prevent scheduling.

your pod. Reasons include:

* **You don't have enough resources**: You may have exhausted the supply of CPU or Memory in your cluster, in this case
  you need to delete Pods, adjust resource requests, or add new nodes to your cluster. See [Compute Resources document](/docs/user-guide/compute-resources/#my-pods-are-pending-with-event-message-failedscheduling) for more information.
  you need to delete Pods, adjust resource requests, or add new nodes to your cluster. See
  [Compute Resources document](/docs/concepts/configuration/manage-resources-containers/) for more information.

* **You are using `hostPort`**: When you bind a Pod to a `hostPort` there are a limited number of places that pod can be
  scheduled. In most cases, `hostPort` is unnecessary, try using a Service object to expose your Pod. If you do require

@ -161,13 +159,13 @@ check:

* Can you connect to your pods directly? Get the IP address for the Pod, and try to connect directly to that IP.
* Is your application serving on the port that you configured? Kubernetes doesn't do port remapping, so if your application serves on 8080, the `containerPort` field needs to be 8080.

## {{% heading "whatsnext" %}}

If none of the above solves your problem, follow the instructions in
[Debugging Service document](/docs/tasks/debug-application-cluster/debug-service/)
to make sure that your `Service` is running, has `Endpoints`, and your `Pods` are
actually serving; you have DNS working, iptables rules installed, and kube-proxy
does not seem to be misbehaving.

If none of the above solves your problem, follow the instructions in [Debugging Service document](/docs/user-guide/debugging-services) to make sure that your `Service` is running, has `Endpoints`, and your `Pods` are actually serving; you have DNS working, iptables rules installed, and kube-proxy does not seem to be misbehaving.

You may also visit [troubleshooting document](/docs/troubleshooting/) for more information.

You may also visit [troubleshooting document](/docs/tasks/debug-application-cluster/troubleshooting/) for more information.
@ -10,10 +10,7 @@ content_type: concept

This doc is about cluster troubleshooting; we assume you have already ruled out your application as the root cause of the
problem you are experiencing. See
the [application troubleshooting guide](/docs/tasks/debug-application-cluster/debug-application) for tips on application debugging.
You may also visit [troubleshooting document](/docs/troubleshooting/) for more information.

You may also visit [troubleshooting document](/docs/tasks/debug-application-cluster/troubleshooting/) for more information.

<!-- body -->
@ -15,10 +15,8 @@ content_type: task

This page shows how to investigate problems related to the execution of
Init Containers. The example command lines below refer to the Pod as
`<pod-name>` and the Init Containers as `<init-container-1>` and
`<init-container-2>`.

`<pod-name>` and the Init Containers as `<init-container-1>` and
`<init-container-2>`.

## {{% heading "prerequisites" %}}

@ -26,11 +24,9 @@ Init Containers. The example command lines below refer to the Pod as

{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}

* You should be familiar with the basics of
  [Init Containers](/docs/concepts/abstractions/init-containers/).
  [Init Containers](/docs/concepts/workloads/pods/init-containers/).
* You should have [Configured an Init Container](/docs/tasks/configure-pod-container/configure-pod-initialization/#creating-a-pod-that-has-an-init-container/).

<!-- steps -->

## Checking the status of Init Containers
@ -9,8 +9,6 @@ content_type: task

This page shows how to debug Pods and ReplicationControllers.

## {{% heading "prerequisites" %}}

@ -20,8 +18,6 @@ This page shows how to debug Pods and ReplicationControllers.

{{< glossary_tooltip text="Pods" term_id="pod" >}} and with
Pods' [lifecycles](/docs/concepts/workloads/pods/pod-lifecycle/).

<!-- steps -->

## Debugging Pods

@ -51,9 +47,9 @@ can not schedule your pod. Reasons include:

You may have exhausted the supply of CPU or Memory in your cluster. In this
case you can try several things:

* [Add more nodes](/docs/admin/cluster-management/#resizing-a-cluster) to the cluster.
* [Add more nodes](/docs/tasks/administer-cluster/cluster-management/#resizing-a-cluster) to the cluster.

* [Terminate unneeded pods](/docs/user-guide/pods/single-container/#deleting_a_pod)
* [Terminate unneeded pods](/docs/concepts/workloads/pods/#pod-termination)
  to make room for pending pods.

* Check that the pod is not larger than your nodes. For example, if all
|
|||
get no response when you try to access it. This document will hopefully help
|
||||
you to figure out what's going wrong.
|
||||
|
||||
|
||||
|
||||
|
||||
<!-- body -->
|
||||
|
||||
## Running commands in a Pod
|
||||
|
@ -658,7 +655,7 @@ This might sound unlikely, but it does happen and it is supposed to work.
|
|||
This can happen when the network is not properly configured for "hairpin"
|
||||
traffic, usually when `kube-proxy` is running in `iptables` mode and Pods
|
||||
are connected with bridge network. The `Kubelet` exposes a `hairpin-mode`
|
||||
[flag](/docs/admin/kubelet/) that allows endpoints of a Service to loadbalance
|
||||
[flag](/docs/reference/command-line-tools-reference/kubelet/) that allows endpoints of a Service to loadbalance
|
||||
back to themselves if they try to access their own Service VIP. The
|
||||
`hairpin-mode` flag must either be set to `hairpin-veth` or
|
||||
`promiscuous-bridge`.
|
||||
|
@ -724,15 +721,13 @@ Service is not working. Please let us know what is going on, so we can help
|
|||
investigate!
|
||||
|
||||
Contact us on
|
||||
[Slack](/docs/troubleshooting/#slack) or
|
||||
[Slack](/docs/tasks/debug-application-cluster/troubleshooting/#slack) or
|
||||
[Forum](https://discuss.kubernetes.io) or
|
||||
[GitHub](https://github.com/kubernetes/kubernetes).
|
||||
|
||||
|
||||
|
||||
## {{% heading "whatsnext" %}}
|
||||
|
||||
|
||||
Visit [troubleshooting document](/docs/troubleshooting/) for more information.
|
||||
Visit [troubleshooting document](/docs/tasks/debug-application-cluster/troubleshooting/)
|
||||
for more information.
|
||||
|
||||
|
||||
|
|
|
@ -12,19 +12,13 @@ content_type: task

---

<!-- overview -->

This task shows you how to debug a StatefulSet.

## {{% heading "prerequisites" %}}

* You need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster.
* You should have a StatefulSet running that you want to investigate.

<!-- steps -->

## Debugging a StatefulSet

@ -37,18 +31,12 @@ kubectl get pods -l app=myapp

```

If you find that any Pods listed are in `Unknown` or `Terminating` state for an extended period of time,
refer to the [Deleting StatefulSet Pods](/docs/tasks/manage-stateful-set/delete-pods/) task for
refer to the [Deleting StatefulSet Pods](/docs/tasks/run-application/delete-stateful-set/) task for
instructions on how to deal with them.
You can debug individual Pods in a StatefulSet using the
[Debugging Pods](/docs/tasks/debug-application-cluster/debug-pod-replication-controller/) guide.

## {{% heading "whatsnext" %}}

Learn more about [debugging an init-container](/docs/tasks/debug-application-cluster/debug-init-containers/).
|
|||
|
||||
On the Google Compute Engine (GCE) platform, the default logging support targets
|
||||
[Stackdriver Logging](https://cloud.google.com/logging/), which is described in detail
|
||||
in the [Logging With Stackdriver Logging](/docs/user-guide/logging/stackdriver).
|
||||
in the [Logging With Stackdriver Logging](/docs/tasks/debug-application-cluster/logging-stackdriver).
|
||||
|
||||
This article describes how to set up a cluster to ingest logs into
|
||||
[Elasticsearch](https://www.elastic.co/products/elasticsearch) and view
|
||||
|
@ -90,7 +90,8 @@ Elasticsearch, and is part of a service named `kibana-logging`.
|
|||
|
||||
The Elasticsearch and Kibana services are both in the `kube-system` namespace
|
||||
and are not directly exposed via a publicly reachable IP address. To reach them,
|
||||
follow the instructions for [Accessing services running in a cluster](/docs/concepts/cluster-administration/access-cluster/#accessing-services-running-on-the-cluster).
|
||||
follow the instructions for
|
||||
[Accessing services running in a cluster](/docs/tasks/access-application-cluster/access-cluster/#accessing-services-running-on-the-cluster).
|
||||
|
||||
If you try accessing the `elasticsearch-logging` service in your browser, you'll
|
||||
see a status page that looks something like this:
|
||||
|
@ -102,7 +103,7 @@ like. See [Elasticsearch's documentation](https://www.elastic.co/guide/en/elasti
|
|||
for more details on how to do so.
|
||||
|
||||
Alternatively, you can view your cluster's logs using Kibana (again using the
|
||||
[instructions for accessing a service running in the cluster](/docs/user-guide/accessing-the-cluster/#accessing-services-running-on-the-cluster)).
|
||||
[instructions for accessing a service running in the cluster](/docs/tasks/access-application-cluster/access-cluster/#accessing-services-running-on-the-cluster)).
|
||||
The first time you visit the Kibana URL you will be presented with a page that
|
||||
asks you to configure your view of the ingested logs. Select the option for
|
||||
timeseries values and select `@timestamp`. On the following page select the
|
||||
|
|
|
@ -317,8 +317,8 @@ After some time, Stackdriver Logging agent pods will be restarted with the new c
|
|||
### Changing fluentd parameters
|
||||
|
||||
Fluentd configuration is stored in the `ConfigMap` object. It is effectively a set of configuration
|
||||
files that are merged together. You can learn about fluentd configuration on the [official
|
||||
site](http://docs.fluentd.org).
|
||||
files that are merged together. You can learn about fluentd configuration on the
|
||||
[official site](https://docs.fluentd.org).
|
||||
|
||||
Imagine you want to add a new parsing logic to the configuration, so that fluentd can understand
|
||||
default Python logging format. An appropriate fluentd filter looks similar to this:
|
||||
|
@ -356,7 +356,7 @@ using [guide above](#changing-daemonset-parameters).
|
|||
### Adding fluentd plugins
|
||||
|
||||
Fluentd is written in Ruby and allows to extend its capabilities using
|
||||
[plugins](http://www.fluentd.org/plugins). If you want to use a plugin, which is not included
|
||||
[plugins](https://www.fluentd.org/plugins). If you want to use a plugin, which is not included
|
||||
in the default Stackdriver Logging container image, you have to build a custom image. Imagine
|
||||
you want to add Kafka sink for messages from a particular container for additional processing.
|
||||
You can re-use the default [container image sources](https://git.k8s.io/contrib/fluentd/fluentd-gcp-image)
|
||||
|
|
|
@ -13,9 +13,6 @@ are available in Kubernetes through the Metrics API. These metrics can be either
|
|||
by user, for example by using `kubectl top` command, or used by a controller in the cluster, e.g.
|
||||
Horizontal Pod Autoscaler, to make decisions.
|
||||
|
||||
|
||||
|
||||
|
||||
<!-- body -->
|
||||
|
||||
## The Metrics API
|
||||
|
@ -41,11 +38,19 @@ The API requires metrics server to be deployed in the cluster. Otherwise it will
|
|||
|
||||
### CPU
|
||||
|
||||
CPU is reported as the average usage, in [CPU cores](/docs/concepts/configuration/manage-compute-resources-container/#meaning-of-cpu), over a period of time. This value is derived by taking a rate over a cumulative CPU counter provided by the kernel (in both Linux and Windows kernels). The kubelet chooses the window for the rate calculation.
|
||||
CPU is reported as the average usage, in
|
||||
[CPU cores](/docs/concepts/configuration/manage-resources-containers/#meaning-of-cpu),
|
||||
over a period of time. This value is derived by taking a rate over a cumulative CPU counter
|
||||
provided by the kernel (in both Linux and Windows kernels).
|
||||
The kubelet chooses the window for the rate calculation.
|
||||
|
||||
### Memory
|
||||
|
||||
Memory is reported as the working set, in bytes, at the instant the metric was collected. In an ideal world, the "working set" is the amount of memory in-use that cannot be freed under memory pressure. However, calculation of the working set varies by host OS, and generally makes heavy use of heuristics to produce an estimate. It includes all anonymous (non-file-backed) memory since kubernetes does not support swap. The metric typically also includes some cached (file-backed) memory, because the host OS cannot always reclaim such pages.
|
||||
Memory is reported as the working set, in bytes, at the instant the metric was collected.
|
||||
In an ideal world, the "working set" is the amount of memory in-use that cannot be freed under memory pressure.
|
||||
However, calculation of the working set varies by host OS, and generally makes heavy use of heuristics to produce an estimate.
|
||||
It includes all anonymous (non-file-backed) memory since kubernetes does not support swap.
|
||||
The metric typically also includes some cached (file-backed) memory, because the host OS cannot always reclaim such pages.
|
||||
|
||||
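Assuming metrics-server is deployed in the cluster, a sketch of reading these CPU and memory metrics directly:

```shell
kubectl top node                 # per-node CPU (cores) and memory (working set)
kubectl top pod --containers     # per-container usage within each pod
```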
## Metrics Server

@ -54,9 +59,12 @@ It is deployed by default in clusters created by `kube-up.sh` script

as a Deployment object. If you use a different Kubernetes setup mechanism you can deploy it using the provided
[deployment components.yaml](https://github.com/kubernetes-sigs/metrics-server/releases) file.

Metric server collects metrics from the Summary API, exposed by [Kubelet](/docs/admin/kubelet/) on each node.
Metric server collects metrics from the Summary API, exposed by
[Kubelet](/docs/reference/command-line-tools-reference/kubelet/) on each node.

Metrics Server is registered with the main API server through
[Kubernetes aggregator](/docs/concepts/api-extension/apiserver-aggregation/).
[Kubernetes aggregator](/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/).

Learn more about the metrics server in
[the design doc](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/instrumentation/metrics-server.md).

Learn more about the metrics server in [the design doc](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/instrumentation/metrics-server.md).
|
|||
To scale an application and provide a reliable service, you need to
|
||||
understand how the application behaves when it is deployed. You can examine
|
||||
application performance in a Kubernetes cluster by examining the containers,
|
||||
[pods](/docs/user-guide/pods), [services](/docs/user-guide/services), and
|
||||
[pods](/docs/concepts/workloads/pods/),
|
||||
[services](/docs/concepts/services-networking/service/), and
|
||||
the characteristics of the overall cluster. Kubernetes provides detailed
|
||||
information about an application's resource usage at each of these levels.
|
||||
This information allows you to evaluate your application's performance and
|
||||
where bottlenecks can be removed to improve overall performance.
|
||||
|
||||
|
||||
|
||||
<!-- body -->
|
||||
|
||||
In Kubernetes, application monitoring does not depend on a single monitoring solution. On new clusters, you can use [resource metrics](#resource-metrics-pipeline) or [full metrics](#full-metrics-pipeline) pipelines to collect monitoring statistics.
|
||||
In Kubernetes, application monitoring does not depend on a single monitoring solution.
|
||||
On new clusters, you can use [resource metrics](#resource-metrics-pipeline) or
|
||||
[full metrics](#full-metrics-pipeline) pipelines to collect monitoring statistics.
|
||||
|
||||
## Resource metrics pipeline
|
||||
|
||||
The resource metrics pipeline provides a limited set of metrics related to
|
||||
cluster components such as the [Horizontal Pod Autoscaler](/docs/tasks/run-application/horizontal-pod-autoscale) controller, as well as the `kubectl top` utility.
|
||||
cluster components such as the
|
||||
[Horizontal Pod Autoscaler](/docs/tasks/run-application/horizontal-pod-autoscale/)
|
||||
controller, as well as the `kubectl top` utility.
|
||||
These metrics are collected by the lightweight, short-term, in-memory
|
||||
[metrics-server](https://github.com/kubernetes-incubator/metrics-server) and
|
||||
are exposed via the `metrics.k8s.io` API.
|
||||
|
||||
metrics-server discovers all nodes on the cluster and
|
||||
queries each node's
|
||||
[kubelet](/docs/reference/command-line-tools-reference/kubelet) for CPU and
|
||||
[kubelet](/docs/reference/command-line-tools-reference/kubelet/) for CPU and
|
||||
memory usage. The kubelet acts as a bridge between the Kubernetes master and
|
||||
the nodes, managing the pods and containers running on a machine. The kubelet
|
||||
translates each pod into its constituent containers and fetches individual
|
||||
|
|
|
@ -11,15 +11,14 @@ title: Troubleshooting
|
|||
Sometimes things go wrong. This guide is aimed at making them right. It has
|
||||
two sections:
|
||||
|
||||
* [Troubleshooting your application](/docs/tasks/debug-application-cluster/debug-application/) - Useful for users who are deploying code into Kubernetes and wondering why it is not working.
|
||||
* [Troubleshooting your cluster](/docs/tasks/debug-application-cluster/debug-cluster/) - Useful for cluster administrators and people whose Kubernetes cluster is unhappy.
|
||||
* [Troubleshooting your application](/docs/tasks/debug-application-cluster/debug-application/) - Useful
|
||||
for users who are deploying code into Kubernetes and wondering why it is not working.
|
||||
* [Troubleshooting your cluster](/docs/tasks/debug-application-cluster/debug-cluster/) - Useful
|
||||
for cluster administrators and people whose Kubernetes cluster is unhappy.
|
||||
|
||||
You should also check the known issues for the [release](https://github.com/kubernetes/kubernetes/releases)
|
||||
you're using.
|
||||
|
||||
|
||||
|
||||
|
||||
<!-- body -->
|
||||
|
||||
## Getting help
|
||||
|
@ -37,12 +36,12 @@ accomplish commonly used tasks, and [Tutorials](/docs/tutorials/) are more
|
|||
comprehensive walkthroughs of real-world, industry-specific, or end-to-end
|
||||
development scenarios. The [Reference](/docs/reference/) section provides
|
||||
detailed documentation on the [Kubernetes API](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/)
|
||||
and command-line interfaces (CLIs), such as [`kubectl`](/docs/user-guide/kubectl-overview/).
|
||||
and command-line interfaces (CLIs), such as [`kubectl`](/docs/reference/kubectl/overview/).
|
||||
|
||||
You may also find the Stack Overflow topics relevant:
|
||||
|
||||
* [Kubernetes](http://stackoverflow.com/questions/tagged/kubernetes)
|
||||
* [Google Kubernetes Engine](http://stackoverflow.com/questions/tagged/google-container-engine)
|
||||
* [Kubernetes](https://stackoverflow.com/questions/tagged/kubernetes)
|
||||
* [Google Kubernetes Engine](https://stackoverflow.com/questions/tagged/google-container-engine)
|
||||
|
||||
## Help! My question isn't covered! I need help now!
|
||||
|
||||
|
@ -50,15 +49,16 @@ You may also find the Stack Overflow topics relevant:
|
|||
|
||||
Someone else from the community may have already asked a similar question or may
|
||||
be able to help with your problem. The Kubernetes team will also monitor
|
||||
[posts tagged Kubernetes](http://stackoverflow.com/questions/tagged/kubernetes).
|
||||
If there aren't any existing questions that help, please [ask a new one](http://stackoverflow.com/questions/ask?tags=kubernetes)!
|
||||
[posts tagged Kubernetes](https://stackoverflow.com/questions/tagged/kubernetes).
|
||||
If there aren't any existing questions that help, please
|
||||
[ask a new one](https://stackoverflow.com/questions/ask?tags=kubernetes)!
|
||||
|
||||
### Slack
|
||||
|
||||
The Kubernetes team hangs out on Slack in the `#kubernetes-users` channel. You
|
||||
can participate in discussion with the Kubernetes team [here](https://kubernetes.slack.com).
|
||||
Slack requires registration, but the Kubernetes team is open invitation to
|
||||
anyone to register [here](http://slack.kubernetes.io). Feel free to come and ask
|
||||
anyone to register [here](https://slack.kubernetes.io). Feel free to come and ask
|
||||
any and all questions.
|
||||
|
||||
Once registered, browse the growing list of channels for various subjects of
|
||||
|
|
|
@ -673,6 +673,8 @@ spec:
|
|||
cronSpec:
|
||||
type: string
|
||||
pattern: '^(\d+|\*)(/\d+)?(\s+(\d+|\*)(/\d+)?){4}$'
|
||||
image:
|
||||
type: string
|
||||
replicas:
|
||||
type: integer
|
||||
minimum: 1
|
||||
|
@ -720,6 +722,8 @@ spec:
|
|||
cronSpec:
|
||||
type: string
|
||||
pattern: '^(\d+|\*)(/\d+)?(\s+(\d+|\*)(/\d+)?){4}$'
|
||||
image:
|
||||
type: string
|
||||
replicas:
|
||||
type: integer
|
||||
minimum: 1
|
||||
|
|
|
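With such a `pattern` in place, the API server rejects objects whose `cronSpec` does not match the regular expression. A sketch using a hypothetical CronTab custom resource (group and field names follow the page's own CRD example):

```shell
kubectl apply -f - <<EOF
apiVersion: stable.example.com/v1
kind: CronTab
metadata:
  name: bad-crontab
spec:
  cronSpec: "not-a-schedule"   # fails the pattern; "* * * * */5" would pass
  image: my-cron-image
EOF
# The request is rejected with a validation error citing the pattern.
```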
@ -1005,7 +1005,7 @@ template:
|
|||
|
||||
* [Managing Kubernetes Objects Using Imperative Commands](/docs/tasks/manage-kubernetes-objects/imperative-command/)
|
||||
* [Imperative Management of Kubernetes Objects Using Configuration Files](/docs/tasks/manage-kubernetes-objects/imperative-config/)
|
||||
* [Kubectl Command Reference](/docs/reference/generated/kubectl/kubectl/)
|
||||
* [Kubectl Command Reference](/docs/reference/generated/kubectl/kubectl-commands/)
|
||||
* [Kubernetes API Reference](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/)
|
||||
|
||||
|
||||
|
|
|
@@ -167,7 +167,7 @@ kubectl create --edit -f /tmp/srv.yaml

* [Managing Kubernetes Objects Using Object Configuration (Imperative)](/docs/tasks/manage-kubernetes-objects/imperative-config/)
* [Managing Kubernetes Objects Using Object Configuration (Declarative)](/docs/tasks/manage-kubernetes-objects/declarative-config/)
* [Kubectl Command Reference](/docs/reference/generated/kubectl/kubectl-commands/)
* [Kubernetes API Reference](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/)
@@ -150,7 +150,7 @@ template:

* [Managing Kubernetes Objects Using Imperative Commands](/docs/tasks/manage-kubernetes-objects/imperative-command/)
* [Managing Kubernetes Objects Using Object Configuration (Declarative)](/docs/tasks/manage-kubernetes-objects/declarative-config/)
* [Kubectl Command Reference](/docs/reference/generated/kubectl/kubectl-commands/)
* [Kubernetes API Reference](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/)
@@ -832,7 +832,7 @@ deployment.apps "dev-my-nginx" deleted

* [Kustomize](https://github.com/kubernetes-sigs/kustomize)
* [Kubectl Book](https://kubectl.docs.kubernetes.io)
* [Kubectl Command Reference](/docs/reference/generated/kubectl/kubectl-commands/)
* [Kubernetes API Reference](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/)
@@ -229,7 +229,7 @@ The `kubectl patch` command has a `type` parameter that you can set to one of th
</table>

For a comparison of JSON patch and JSON merge patch, see
[JSON Patch and JSON Merge Patch](https://erosb.github.io/post/json-patch-vs-merge-patch/).

The default value for the `type` parameter is `strategic`. So in the preceding exercise, you
did a strategic merge patch.
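To make the `type` parameter concrete, here is a minimal sketch; `--type` and `--patch` are real `kubectl patch` flags, while the `patch-demo` Deployment name is an assumption carried over from this style of exercise:

```shell
# Apply a JSON merge patch instead of the default strategic merge patch.
kubectl patch deployment patch-demo --type merge --patch '{"spec": {"replicas": 2}}'
```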
@@ -37,13 +37,23 @@ You can perform a graceful pod deletion with the following command:
kubectl delete pods <pod>
```

For the above to lead to graceful termination, the Pod **must not** specify a
`pod.Spec.TerminationGracePeriodSeconds` of 0. The practice of setting a
`pod.Spec.TerminationGracePeriodSeconds` of 0 seconds is unsafe and strongly discouraged
for StatefulSet Pods. Graceful deletion is safe and will ensure that the Pod
[shuts down gracefully](/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination)
before the kubelet deletes the name from the apiserver.
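As a sketch of what the paragraph above asks for (the Pod name and image are illustrative), keep the grace period nonzero:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: graceful-demo              # hypothetical name
spec:
  # 30 is the default; never set this to 0 for StatefulSet Pods.
  terminationGracePeriodSeconds: 30
  containers:
  - name: app
    image: nginx
```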
Kubernetes (versions 1.5 or newer) will not delete Pods just because a Node is unreachable.
The Pods running on an unreachable Node enter the 'Terminating' or 'Unknown' state after a
[timeout](/docs/concepts/architecture/nodes/#node-condition).
Pods may also enter these states when the user attempts graceful deletion of a Pod
on an unreachable Node.
The only ways in which a Pod in such a state can be removed from the apiserver are as follows:

* The Node object is deleted (either by you, or by the [Node Controller](/docs/concepts/architecture/nodes/)).
* The kubelet on the unresponsive Node starts responding, kills the Pod and removes the entry from the apiserver.
* Force deletion of the Pod by the user.

The recommended best practice is to use the first or second approach. If a Node is confirmed to be dead (e.g. permanently disconnected from the network, powered down, etc.), then delete the Node object. If the Node is suffering from a network partition, then try to resolve this or wait for it to resolve. When the partition heals, the kubelet will complete the deletion of the Pod and free up its name in the apiserver.
@@ -15,11 +15,9 @@ Horizontal Pod Autoscaler automatically scales the number of pods
in a replication controller, deployment, replica set or stateful set based on observed CPU utilization
(or, with beta support, on some other, application-provided metrics).

This document walks you through an example of enabling Horizontal Pod Autoscaler for the php-apache server.
For more information on how Horizontal Pod Autoscaler behaves, see the
[Horizontal Pod Autoscaler user guide](/docs/tasks/run-application/horizontal-pod-autoscale/).

## {{% heading "prerequisites" %}}
@@ -459,12 +457,12 @@ HorizontalPodAutoscaler.
## Appendix: Quantities

All metrics in the HorizontalPodAutoscaler and metrics APIs are specified using
a special whole-number notation known in Kubernetes as a
{{< glossary_tooltip term_id="quantity" text="quantity">}}. For example,
the quantity `10500m` would be written as `10.5` in decimal notation. The metrics APIs
will return whole numbers without a suffix when possible, and will generally return
quantities in milli-units otherwise. This means you might see your metric value fluctuate
between `1` and `1500m`, or `1` and `1.5` when written in decimal notation.
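A short worked conversion may help: the `m` suffix denotes milli-units, so divide by 1000 to get decimal notation.

```
10500m = 10500 / 1000 = 10.5
1500m  =  1500 / 1000 =  1.5
```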
## Appendix: Other possible scenarios
@@ -33,7 +33,7 @@ on general patterns for running stateful applications in Kubernetes.

* This tutorial assumes you are familiar with
  [PersistentVolumes](/docs/concepts/storage/persistent-volumes/)
  and [StatefulSets](/docs/concepts/workloads/controllers/statefulset/),
  as well as other core concepts like [Pods](/docs/concepts/workloads/pods/),
  [Services](/docs/concepts/services-networking/service/), and
  [ConfigMaps](/docs/tasks/configure-pod-container/configure-pod-configmap/).
* Some familiarity with MySQL helps, but this tutorial aims to present
@@ -297,7 +297,7 @@ running while you force a Pod out of the Ready state.

### Break the Readiness Probe

The [readiness probe](/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-readiness-probes)
for the `mysql` container runs the command `mysql -h 127.0.0.1 -e 'SELECT 1'`
to make sure the server is up and able to execute queries.
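As a sketch of how this probe can be forced to fail (the `mysql-2` Pod name follows the naming used elsewhere in this tutorial and is an assumption here):

```shell
# Rename the mysql client binary so the readiness probe command starts failing;
# a second mv restores it.
kubectl exec mysql-2 -c mysql -- mv /usr/bin/mysql /usr/bin/mysql.off
```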
@@ -8,14 +8,13 @@ content_type: task
<!-- overview -->
{{< glossary_definition term_id="service-catalog" length="all" prepend="Service Catalog is" >}}

Use [Helm](https://helm.sh/) to install Service Catalog on your Kubernetes cluster.
Up to date information on this process can be found at the
[kubernetes-sigs/service-catalog](https://github.com/kubernetes-sigs/service-catalog/blob/master/docs/install.md) repo.
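A hedged sketch of those Helm steps; the `svc-cat` repository name, its URL, and the chart and namespace names follow the upstream install guide linked above and may have changed, so treat them as assumptions:

```shell
# Add the Service Catalog chart repository, then install the chart.
helm repo add svc-cat https://kubernetes-sigs.github.io/service-catalog
helm install catalog svc-cat/catalog --namespace catalog --create-namespace
```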
## {{% heading "prerequisites" %}}

* Understand the key concepts of [Service Catalog](/docs/concepts/extend-kubernetes/service-catalog/).
* Service Catalog requires a Kubernetes cluster running version 1.7 or higher.
* You must have a Kubernetes cluster with cluster DNS enabled.
  * If you are using a cloud-based Kubernetes cluster or {{< glossary_tooltip text="Minikube" term_id="minikube" >}}, you may already have cluster DNS enabled.
@@ -19,7 +19,7 @@ Service Catalog itself can work with any kind of managed service, not just Googl

## {{% heading "prerequisites" %}}

* Understand the key concepts of [Service Catalog](/docs/concepts/extend-kubernetes/service-catalog/).
* Install [Go 1.6+](https://golang.org/dl/) and set the `GOPATH`.
* Install the [cfssl](https://github.com/cloudflare/cfssl) tool needed for generating SSL artifacts.
* Service Catalog requires Kubernetes version 1.7+.
@@ -33,7 +33,7 @@ Once you have Minikube working, you can use it to

## kind

Like Minikube, [kind](https://kind.sigs.k8s.io/docs/) lets you run Kubernetes on
your local computer. Unlike Minikube, kind only works with a single container runtime:
it requires that you have [Docker](https://docs.docker.com/get-docker/) installed
and configured.
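For instance, once Docker is configured, a single command brings up a cluster (the `--name` flag is optional; `demo` is an illustrative name):

```shell
kind create cluster --name demo
```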
@@ -11,13 +11,18 @@ card:
---

<!-- overview -->
The Kubernetes command-line tool, [kubectl](/docs/reference/kubectl/kubectl/), allows
you to run commands against Kubernetes clusters.
You can use kubectl to deploy applications, inspect and manage cluster resources,
and view logs. For a complete list of kubectl operations, see
[Overview of kubectl](/docs/reference/kubectl/overview/).

## {{% heading "prerequisites" %}}

You must use a kubectl version that is within one minor version difference of your cluster.
For example, a v1.2 client should work with v1.1, v1.2, and v1.3 master.
Using the latest version of kubectl helps avoid unforeseen issues.
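You can check the skew directly; `kubectl version` prints the client version and, when a cluster is reachable, the server version:

```shell
kubectl version
```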
<!-- steps -->
@@ -45,16 +45,6 @@ Before walking through each tutorial, you may want to bookmark the

* [Running ZooKeeper, A CP Distributed System](/docs/tutorials/stateful-application/zookeeper/)

## Clusters

* [AppArmor](/docs/tutorials/clusters/apparmor/)
@@ -29,7 +29,7 @@ weight: 10

<div class="col-md-8">
    <h2>Kubernetes Pods</h2>
    <p>When you created a Deployment in Module <a href="/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro/">2</a>, Kubernetes created a <b>Pod</b> to host your application instance. A Pod is a Kubernetes abstraction that represents a group of one or more application containers (such as Docker), and some shared resources for those containers. Those resources include:</p>
    <ul>
        <li>Shared storage, as Volumes</li>
        <li>Networking, as a unique cluster IP address</li>
@@ -51,7 +51,7 @@ weight: 10
    </div>
    <div class="content__box content__box_fill">
        <p><i>
            A Pod is a group of one or more application containers (such as Docker) and includes shared storage (volumes), IP address and information about how to run them.
        </i></p>
    </div>
</div>
@@ -79,7 +79,7 @@ weight: 10
    <p>Every Kubernetes Node runs at least:</p>
    <ul>
        <li>Kubelet, a process responsible for communication between the Kubernetes Master and the Node; it manages the Pods and the containers running on a machine.</li>
        <li>A container runtime (like Docker) responsible for pulling the container image from a registry, unpacking the container, and running the application.</li>
    </ul>

</div>
@@ -150,7 +150,7 @@ ip addr

…then use `wget` to query the local webserver
```shell
# Replace "10.0.170.92" with the IPv4 address of the Service named "clusterip"
wget -qO - 10.0.170.92
```
```
@@ -49,16 +49,6 @@ Before going through each tutorial, we recommend bookmarking the

* [Running ZooKeeper, A CP Distributed System](/docs/tutorials/stateful-application/zookeeper/)

## Clusters

* [AppArmor](/docs/tutorials/clusters/apparmor/)
@@ -41,7 +41,6 @@ Kubernetes is an open-source solution that lets you take advantage of your in
<button id="desktopShowVideoButton" onclick="kub.showVideo()">Watch the video (in English)</button>
<br>
<br>
<a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-europe/?utm_source=kubernetes.io&utm_medium=nav&utm_campaign=kccnceu20" button id="desktopKCButton">Attend KubeCon in Amsterdam, August 13-16, 2020</a>
<br>
<br>
@@ -75,7 +75,8 @@ spec:
      requests:
        cpu: 700m
        memory: 200Mi
  ...
status:
  qosClass: Guaranteed
```
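For context, a minimal sketch (container name and image are illustrative) of a spec that yields this `Guaranteed` class, since requests equal limits for every resource:

```yaml
spec:
  containers:
  - name: qos-demo-ctr        # illustrative name
    image: nginx
    resources:
      limits:
        cpu: 700m
        memory: 200Mi
      requests:
        cpu: 700m
        memory: 200Mi
```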
@@ -124,7 +125,8 @@ spec:
        memory: 200Mi
      requests:
        memory: 100Mi
  ...
status:
  qosClass: Burstable
```
@@ -163,6 +165,7 @@ spec:
    ...
    resources: {}
  ...
status:
  qosClass: BestEffort
```
@@ -208,6 +211,7 @@ spec:
    name: qos-demo-4-ctr-2
    resources: {}
  ...
status:
  qosClass: Burstable
```
@@ -47,17 +47,6 @@ Before exploring each of the tutorials, it may be useful to bookmark the

* [Running ZooKeeper, a CP distributed system (EN)](/docs/tutorials/stateful-application/zookeeper/)

## Clusters

* [AppArmor (EN)](/docs/tutorials/clusters/apparmor/)
@@ -0,0 +1,254 @@
---
reviewers:
title: Debugging DNS Resolution
content_type: task
min-kubernetes-server-version: v1.6
---

<!-- overview -->
This page provides some hints on diagnosing DNS problems.

## {{% heading "prerequisites" %}}

{{< include "task-tutorial-prereqs.md" >}}
Your cluster must be configured to use the CoreDNS
{{< glossary_tooltip text="addon" term_id="addons" >}} or its
predecessor, kube-dns.

{{% version-check %}}

<!-- steps -->
### Create a simple Pod to use as a test environment

{{< codenew file="admin/dns/dnsutils.yaml" >}}

Use the following manifest to create a Pod:

```shell
kubectl apply -f https://k8s.io/examples/admin/dns/dnsutils.yaml
```
```
pod/dnsutils created
```
…and verify its status:
```shell
kubectl get pods dnsutils
```
```
NAME       READY     STATUS    RESTARTS   AGE
dnsutils   1/1       Running   0          <some-time>
```

Once that Pod is running, you can run `nslookup` in that environment.
If you see something like the following, DNS is working correctly.

```shell
kubectl exec -i -t dnsutils -- nslookup kubernetes.default
```
```
Server:    10.0.0.10
Address 1: 10.0.0.10

Name:      kubernetes.default
Address 1: 10.0.0.1
```

If the `nslookup` command fails, check the following:
### Check the local DNS configuration first

Take a look inside the resolv.conf file.
(See [Inheriting DNS from the node](/docs/tasks/administer-cluster/dns-custom-nameservers/#inheriting-dns-from-the-node) and
[Known issues](#known-issues) below for more information.)

```shell
kubectl exec -ti dnsutils -- cat /etc/resolv.conf
```

Verify that the search path and name server are set up like the following (note that the search path may vary for different cloud providers):

```
search default.svc.cluster.local svc.cluster.local cluster.local google.internal c.gce_project_id.internal
nameserver 10.0.0.10
options ndots:5
```

Errors such as the following indicate a problem with the CoreDNS (or kube-dns) add-on or with the associated Services:

```shell
kubectl exec -i -t dnsutils -- nslookup kubernetes.default
```
```
Server:    10.0.0.10
Address 1: 10.0.0.10

nslookup: can't resolve 'kubernetes.default'
```

or

```shell
kubectl exec -i -t dnsutils -- nslookup kubernetes.default
```
```
Server:    10.0.0.10
Address 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local

nslookup: can't resolve 'kubernetes.default'
```
### Check if the DNS Pod is running

Use the `kubectl get pods` command to verify that the DNS Pod is running.

```shell
kubectl get pods --namespace=kube-system -l k8s-app=kube-dns
```
```
NAME                       READY     STATUS    RESTARTS   AGE
...
coredns-7b96bf9f76-5hsxb   1/1       Running   0          1h
coredns-7b96bf9f76-mvmmt   1/1       Running   0          1h
...
```

{{< note >}}
The value of the `k8s-app` label is `kube-dns` for both CoreDNS and kube-dns.
{{< /note >}}

If you see that no CoreDNS Pod is running, or that the Pod has failed/completed, the DNS add-on may not be deployed by default in your current environment and you will have to deploy it manually.
### Check for errors in the DNS Pod

Use the `kubectl logs` command to see logs for the DNS containers.

For CoreDNS:
```shell
kubectl logs --namespace=kube-system -l k8s-app=kube-dns
```

Here is an example of a healthy CoreDNS log:

```
.:53
2018/08/15 14:37:17 [INFO] CoreDNS-1.2.2
2018/08/15 14:37:17 [INFO] linux/amd64, go1.10.3, 2e322f6
CoreDNS-1.2.2
linux/amd64, go1.10.3, 2e322f6
2018/08/15 14:37:17 [INFO] plugin/reload: Running configuration MD5 = 24e6c59e83ce706f07bcc82c31b1ea1c
```

See if there are any suspicious or unexpected messages in the logs.
### Is the DNS service up?

Verify that the DNS service is up by using the `kubectl get service` command.

```shell
kubectl get svc --namespace=kube-system
```
```
NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
...
kube-dns   ClusterIP   10.0.0.10    <none>        53/UDP,53/TCP   1h
...
```

{{< note >}}
The service name is `kube-dns` for both CoreDNS and kube-dns.
{{< /note >}}

If you have created the Service, or in the case it should be created by default but it does not appear, see
[debugging Services](/docs/tasks/debug-application-cluster/debug-service/) for more information.
### Are the DNS endpoints exposed?

You can verify that DNS endpoints are exposed by using the `kubectl get endpoints` command.

```shell
kubectl get endpoints kube-dns --namespace=kube-system
```
```
NAME       ENDPOINTS                       AGE
kube-dns   10.180.3.17:53,10.180.3.17:53   1h
```

If you do not see the endpoints, see the endpoints section in the
[debugging Services](/docs/tasks/debug-application-cluster/debug-service/) documentation.

For additional Kubernetes DNS examples, see the
[cluster-dns examples](https://github.com/kubernetes/examples/tree/master/staging/cluster-dns) in the Kubernetes GitHub repository.
### Are DNS queries being received/processed?

You can verify whether queries are being received by CoreDNS by adding the `log` plugin to the CoreDNS configuration (aka Corefile).
The CoreDNS Corefile is held in a {{< glossary_tooltip text="ConfigMap" term_id="configmap" >}} named `coredns`. To edit it, use the command:

```
kubectl -n kube-system edit configmap coredns
```

Then add `log` in the Corefile section per the example below:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        log
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
          pods insecure
          upstream
          fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        proxy . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
```

After saving the changes, it may take up to a minute or two for Kubernetes to propagate these changes to the CoreDNS Pods.

Next, make some queries and view the logs per the sections above in this document. If the CoreDNS Pods are receiving the queries, you should see them in the logs.

Here is an example of a query in the log:

```
.:53
2018/08/15 14:37:15 [INFO] CoreDNS-1.2.0
2018/08/15 14:37:15 [INFO] linux/amd64, go1.10.3, 2e322f6
CoreDNS-1.2.0
linux/amd64, go1.10.3, 2e322f6
2018/09/07 15:29:04 [INFO] plugin/reload: Running configuration MD5 = 162475cdf272d8aa601e6fe67a6ad42f
2018/09/07 15:29:04 [INFO] Reloading complete
172.17.0.18:41675 - [07/Sep/2018:15:29:11 +0000] 59925 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd,ra 106 0.000066649s
```
## Known issues

Some Linux distributions (e.g. Ubuntu) use a local DNS resolver by default (systemd-resolved).
systemd-resolved moves and replaces `/etc/resolv.conf` with a stub file that can cause a fatal forwarding loop when resolving names on upstream servers. This can be fixed manually by using the kubelet's `--resolv-conf` flag
to point to the correct `resolv.conf` (with `systemd-resolved`, this is `/run/systemd/resolve/resolv.conf`).
kubeadm automatically detects `systemd-resolved` and adjusts the kubelet flags accordingly.
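As a sketch, the config-file equivalent of that flag is the `resolvConf` field of the kubelet configuration; the path below is the systemd-resolved location named above:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Point the kubelet at the real resolv.conf on systemd-resolved hosts.
resolvConf: /run/systemd/resolve/resolv.conf
```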
Kubernetes installs do not configure the nodes' `resolv.conf` files to use the cluster DNS by default, because that process is inherently distribution-specific. This may be implemented eventually.

Linux's libc (a.k.a. glibc) has a limit of 3 DNS `nameserver` records by default. In addition, for glibc versions older than glibc-2.17-222 ([see this issue for newer versions](https://access.redhat.com/solutions/58028)), the number of DNS `search` records is limited to 6 ([see this bug from 2005](https://bugzilla.redhat.com/show_bug.cgi?id=168253)). Kubernetes needs 1 `nameserver` record and 3 `search` records. This means that if a local installation already uses 3 `nameserver`s or uses more than 3 `search`es, while your glibc version is among those affected, some of those settings will be lost. To work around the DNS `nameserver` record limit, the node can run `dnsmasq`, which will provide more `nameserver` entries. You can also use the kubelet's `--resolv-conf` flag. To fix the `search` record limit, consider upgrading your Linux distribution or upgrading glibc to an unaffected version.

If you are using Alpine version 3.3 or earlier as your base image, DNS may not work properly due to a known issue with Alpine.
Kubernetes [issue 30215](https://github.com/kubernetes/kubernetes/issues/30215) details more information on this.

## {{% heading "whatsnext" %}}

- See [Autoscaling the DNS Service in a Cluster](/docs/tasks/administer-cluster/dns-horizontal-autoscaling/).
- Read [DNS for Services and Pods](/docs/concepts/services-networking/dns-pod-service/)
@@ -0,0 +1,243 @@
---
title: Create Static Pods
weight: 170
content_type: task
---

<!-- overview -->

Static Pods are managed directly by the kubelet daemon on a specific node,
without the {{< glossary_tooltip text="API server" term_id="kube-apiserver" >}}
observing them.
Unlike Pods that are managed by the control plane (for example, a
{{< glossary_tooltip text="Deployment" term_id="deployment" >}}),
the kubelet watches each static Pod (and restarts it if
it fails).

Static Pods are always bound to one {{< glossary_tooltip term_id="kubelet" >}}
on a specific node.

The kubelet automatically tries to create a
{{< glossary_tooltip text="mirror Pod" term_id="mirror-pod" >}}
on the Kubernetes API server for each static Pod.
This means that the Pods running on a node are visible on the API server,
but cannot be controlled from there.

{{< note >}}
If you are running clustered Kubernetes and are using static Pods
to run a Pod on every node, you should probably be using a
{{< glossary_tooltip text="DaemonSet" term_id="daemonset" >}} instead.
{{< /note >}}

## {{% heading "prerequisites" %}}

{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}

This page assumes you're using {{< glossary_tooltip term_id="docker" >}}
to run Pods, and that your nodes are running the Fedora operating system.
Instructions for other distributions or Kubernetes installations may vary.

<!-- steps -->
## Create a static Pod

You can configure a static Pod with either a
[file-system-hosted configuration file](#konfigurasi-melalui-berkas-sistem)
or a [web-hosted configuration file](#konfigurasi-melalui-http).

### Static Pod manifests on the filesystem {#konfigurasi-melalui-berkas-sistem}

Manifests are standard Pod definitions in JSON or YAML format in a specific directory.
Use the `staticPodPath: <the directory>` field in the
[kubelet configuration file](/docs/tasks/administer-cluster/kubelet-config-file),
which periodically scans the directory and creates or deletes static Pods
as YAML/JSON files appear or disappear there.

Note that the kubelet will ignore files starting with a dot
when scanning a directory.
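A minimal sketch of that field in context (the directory is the one used in the example below):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# The kubelet scans this directory for static Pod manifests.
staticPodPath: /etc/kubelet.d
```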
For example, this is how to start a simple web server as a static Pod:

1. Choose the node where you want to run the static Pod. In this example, it's `my-node1`.

    ```shell
    ssh my-node1
    ```

2. Choose a directory, say `/etc/kubelet.d`, and place a web server Pod definition there, for example `/etc/kubelet.d/static-web.yaml`:

    ```shell
    # Run this command on the node where the kubelet is running
    mkdir /etc/kubelet.d/
    cat <<EOF >/etc/kubelet.d/static-web.yaml
    apiVersion: v1
    kind: Pod
    metadata:
      name: static-web
      labels:
        role: myrole
    spec:
      containers:
        - name: web
          image: nginx
          ports:
            - name: web
              containerPort: 80
              protocol: TCP
    EOF
    ```

3. Configure the kubelet on that node to use this directory by running it with the `--pod-manifest-path=/etc/kubelet.d/` argument. On Fedora, edit `/etc/kubernetes/kubelet` to include this line:

    ```
    KUBELET_ARGS="--cluster-dns=10.254.0.10 --cluster-domain=kube.local --pod-manifest-path=/etc/kubelet.d/"
    ```
    or add the `staticPodPath: <the directory>` field in the [kubelet configuration file](/docs/tasks/administer-cluster/kubelet-config-file).

4. Restart the kubelet. On Fedora, you would run:

    ```shell
    # Run this command on the node where the kubelet is running
    systemctl restart kubelet
    ```
### Web-hosted static Pod manifest {#konfigurasi-melalui-http}

The kubelet periodically downloads the file specified by the `--manifest-url=<URL>` argument
and interprets it as a JSON/YAML file that contains Pod definitions.
Similar to how [filesystem-hosted manifests](#konfigurasi-melalui-berkas-sistem) work,
the kubelet refetches the manifest on a schedule. If there are changes to the list of
static Pods, the kubelet applies them.

To use this approach:

1. Create a YAML file and store it on a web server so that you can pass the URL of that file to the kubelet.

    ```yaml
    apiVersion: v1
    kind: Pod
    metadata:
      name: static-web
      labels:
        role: myrole
    spec:
      containers:
        - name: web
          image: nginx
          ports:
            - name: web
              containerPort: 80
              protocol: TCP
    ```

2. Configure the kubelet on your selected node to use this web manifest by running it with the `--manifest-url=<manifest-url>` argument. On Fedora, edit `/etc/kubernetes/kubelet` to include this line:

    ```
    KUBELET_ARGS="--cluster-dns=10.254.0.10 --cluster-domain=kube.local --manifest-url=<manifest-url>"
    ```

3. Restart the kubelet. On Fedora, you would run:

    ```shell
    # Run this command on the node where the kubelet is running
    systemctl restart kubelet
    ```
## Observe static Pod behavior

When the kubelet starts, it automatically starts all defined static Pods.
As you have defined a static Pod and restarted the kubelet, the new static Pod
should already be running.

You can view running containers (including static Pods) by running (on the node):
```shell
# Run this command on the node where the kubelet is running
docker ps
```

The output might be something like:

```
CONTAINER ID IMAGE         COMMAND  CREATED        STATUS         PORTS     NAMES
f6d05272b57e nginx:latest  "nginx"  8 minutes ago  Up 8 minutes             k8s_web.6f802af4_static-web-fk-node1_default_67e24ed9466ba55986d120c867395f3c_378e5f3c
```

You can see the mirror Pod on the API server:

```shell
kubectl get pods
```
```
NAME                  READY   STATUS    RESTARTS   AGE
static-web-my-node1   1/1     Running   0          2m
```

{{< note >}}
Make sure the kubelet has permission to create the mirror Pod in the API server. If not,
the creation request is rejected by the API server. See
[PodSecurityPolicy](/id/docs/concepts/policy/pod-security-policy/).
{{< /note >}}


{{< glossary_tooltip term_id="label" text="Labels" >}} from the static Pod are
propagated into the mirror Pod. You can use those labels as normal via
{{< glossary_tooltip term_id="selector" text="selectors" >}},
or whatever else suits you.

You can try to use kubectl to delete the mirror Pod from the API server,
but the kubelet doesn't remove the static Pod:

```shell
kubectl delete pod static-web-my-node1
```
```
pod "static-web-my-node1" deleted
```
You can see that the Pod is still running:
```shell
kubectl get pods
```
```
NAME                  READY   STATUS    RESTARTS   AGE
static-web-my-node1   1/1     Running   0          12s
```

Back on the node where the kubelet is running, you can try to stop the Docker
container manually.
You'll see that, after a while, the kubelet will notice and will restart the Pod
automatically:

```shell
# Run these commands on the node where the kubelet is running
docker stop f6d05272b57e # replace with the ID of your container
sleep 20
docker ps
```
```
CONTAINER ID        IMAGE         COMMAND                CREATED       ...
5b920cbaf8b1        nginx:latest  "nginx -g 'daemon of   2 seconds ago ...
```

## Dynamic addition and removal of static Pods

The kubelet periodically scans the configured directory (`/etc/kubelet.d` in our example)
for changes and adds/removes Pods as files appear/disappear in this directory.

```shell
# This assumes you are using filesystem-hosted static Pod configuration
# Run these commands on the node where the kubelet is running
#
mv /etc/kubelet.d/static-web.yaml /tmp
sleep 20
docker ps
# You see that no nginx container is running
mv /tmp/static-web.yaml /etc/kubelet.d/
sleep 20
docker ps
```
```
CONTAINER ID        IMAGE         COMMAND                CREATED        ...
e7a62e3427f1        nginx:latest  "nginx -g 'daemon of   27 seconds ago
```
@@ -0,0 +1,253 @@
---
title: Distribute Credentials Securely Using Secrets
content_type: task
weight: 50
min-kubernetes-server-version: v1.6
---

<!-- overview -->
This page shows how to securely inject sensitive data, such as passwords and encryption keys, into Pods.


## {{% heading "prerequisites" %}}


{{< include "task-tutorial-prereqs.md" >}}
### Convert your secret data to a base-64 representation

Suppose you have two pieces of secret data: a username `my-app` and a password
`39528$vdg7Jb`. First, use a base64 encoding tool to convert your username and password to their base64 representations. Here's an example using the commonly available base64 program:

```shell
echo -n 'my-app' | base64
echo -n '39528$vdg7Jb' | base64
```

The output shows that the base-64 representation of your username is `bXktYXBw`,
and the base-64 representation of your password is `Mzk1MjgkdmRnN0pi`.
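As a quick sanity check, you can decode the values back (a sketch using the same common base64 program):

```shell
echo 'bXktYXBw' | base64 --decode          # prints: my-app
echo 'Mzk1MjgkdmRnN0pi' | base64 --decode  # prints: 39528$vdg7Jb
```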
{{< caution >}}
Use a local tool trusted by your OS to decrease the security risks of external tools.
{{< /caution >}}

<!-- steps -->
## Create a Secret

Here is a configuration file you can use to create a Secret that holds your username and password:

{{< codenew file="pods/inject/secret.yaml" >}}

1. Create the Secret

    ```shell
    kubectl apply -f https://k8s.io/examples/pods/inject/secret.yaml
    ```

1. View information about the Secret:

    ```shell
    kubectl get secret test-secret
    ```

    Output:

    ```
    NAME          TYPE      DATA      AGE
    test-secret   Opaque    2         1m
    ```

1. View more detailed information about the Secret:

    ```shell
    kubectl describe secret test-secret
    ```

    Output:

    ```
    Name:            test-secret
    Namespace:       default
    Labels:          <none>
    Annotations:     <none>

    Type:            Opaque

    Data
    ====
    password:        13 bytes
    username:        7 bytes
    ```

### Create a Secret directly with kubectl

If you want to skip the Base64 encoding step, you can create the same Secret directly using the `kubectl create secret` command. For example:

```shell
kubectl create secret generic test-secret --from-literal='username=my-app' --from-literal='password=39528$vdg7Jb'
```

This is more convenient, of course. The detailed approach above runs through each step explicitly to demonstrate what is happening.
## Create a Pod that has access to the secret data through a Volume

Here is a configuration file you can use to create a Pod:

{{< codenew file="pods/inject/secret-pod.yaml" >}}

1. Create the Pod:

    ```shell
    kubectl apply -f https://k8s.io/examples/pods/inject/secret-pod.yaml
    ```

1. Verify that your Pod is running:

    ```shell
    kubectl get pod secret-test-pod
    ```

    Output:
    ```
    NAME              READY     STATUS    RESTARTS   AGE
    secret-test-pod   1/1       Running   0          42m
    ```

1. Get a shell into the Container that is running in your Pod:
    ```shell
    kubectl exec -i -t secret-test-pod -- /bin/bash
    ```

1. The secret data is exposed to the Container through a Volume mounted at
   `/etc/secret-volume`.

    In your shell, list the files in the `/etc/secret-volume` directory:
    ```shell
    # Run this in the shell inside the container
    ls /etc/secret-volume
    ```
    The output shows two files, one for each piece of secret data:
    ```
    password username
    ```

1. In your shell, display the contents of the `username` and `password` files:
    ```shell
    # Run this in the shell inside the container
    echo "$( cat /etc/secret-volume/username )"
    echo "$( cat /etc/secret-volume/password )"
    ```
    The output is your username and password:
    ```
    my-app
    39528$vdg7Jb
    ```
## Define container environment variables using secret data

### Define a container environment variable with data from a single Secret

* Define an environment variable as a key-value pair in a Secret:

    ```shell
    kubectl create secret generic backend-user --from-literal=backend-username='backend-admin'
    ```

* Assign the `backend-username` value defined in the Secret to the `SECRET_USERNAME` environment variable in the Pod specification.

    {{< codenew file="pods/inject/pod-single-secret-env-variable.yaml" >}}

* Create the Pod:

    ```shell
    kubectl create -f https://k8s.io/examples/pods/inject/pod-single-secret-env-variable.yaml
    ```

* In your shell, display the content of the `SECRET_USERNAME` container environment variable

    ```shell
    kubectl exec -i -t env-single-secret -- /bin/sh -c 'echo $SECRET_USERNAME'
    ```

    The output is
    ```
    backend-admin
    ```

### Define container environment variables with data from multiple Secrets

* As with the previous example, create the Secrets first.

    ```shell
    kubectl create secret generic backend-user --from-literal=backend-username='backend-admin'
    kubectl create secret generic db-user --from-literal=db-username='db-admin'
    ```

* Define the environment variables in the Pod specification.

    {{< codenew file="pods/inject/pod-multiple-secret-env-variable.yaml" >}}

* Create the Pod:

    ```shell
    kubectl create -f https://k8s.io/examples/pods/inject/pod-multiple-secret-env-variable.yaml
    ```

* In your shell, display the container environment variables

    ```shell
    kubectl exec -i -t envvars-multiple-secrets -- /bin/sh -c 'env | grep _USERNAME'
    ```
    The output is
    ```
    DB_USERNAME=db-admin
    BACKEND_USERNAME=backend-admin
    ```
## Configure all key-value pairs in a Secret as container environment variables

{{< note >}}
This functionality is available in Kubernetes v1.6 and later.
{{< /note >}}

* Create a Secret containing multiple key-value pairs

    ```shell
    kubectl create secret generic test-secret --from-literal=username='my-app' --from-literal=password='39528$vdg7Jb'
    ```

* Use envFrom to define all of the Secret's data as container environment variables. The keys from the Secret become the environment variable names in the Pod.

    {{< codenew file="pods/inject/pod-secret-envFrom.yaml" >}}

* Create the Pod:

    ```shell
    kubectl create -f https://k8s.io/examples/pods/inject/pod-secret-envFrom.yaml
    ```

* In your shell, display the `username` and `password` container environment variables

    ```shell
    kubectl exec -i -t envfrom-secret -- /bin/sh -c 'echo "username: $username\npassword: $password\n"'
    ```

    The output is
    ```
    username: my-app
    password: 39528$vdg7Jb
    ```

### References

* [Secret](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#secret-v1-core)
* [Volume](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#volume-v1-core)
* [Pod](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#pod-v1-core)

## {{% heading "whatsnext" %}}

* Learn more about [Secrets](/id/docs/concepts/configuration/secret/).
* Learn more about [Volumes](/id/docs/concepts/storage/volumes/).
@@ -46,16 +46,6 @@ Before moving on to the tutorials, it's best to first bookmark the [Glossary

* [Running ZooKeeper, A CP Distributed System](/docs/tutorials/stateful-application/zookeeper/)

## Clusters

* [AppArmor](/docs/tutorials/clusters/apparmor/)
@@ -0,0 +1,14 @@
apiVersion: v1
kind: Pod
metadata:
  name: dnsutils
  namespace: default
spec:
  containers:
  - name: dnsutils
    image: gcr.io/kubernetes-e2e-test-images/dnsutils:1.3
    command:
      - sleep
      - "3600"
    imagePullPolicy: IfNotPresent
  restartPolicy: Always
@@ -0,0 +1,19 @@
apiVersion: v1
kind: Pod
metadata:
  name: envvars-multiple-secrets
spec:
  containers:
  - name: envars-test-container
    image: nginx
    env:
    - name: BACKEND_USERNAME
      valueFrom:
        secretKeyRef:
          name: backend-user
          key: backend-username
    - name: DB_USERNAME
      valueFrom:
        secretKeyRef:
          name: db-user
          key: db-username
@@ -0,0 +1,11 @@
apiVersion: v1
kind: Pod
metadata:
  name: envfrom-secret
spec:
  containers:
  - name: envars-test-container
    image: nginx
    envFrom:
    - secretRef:
        name: test-secret
@@ -0,0 +1,14 @@
apiVersion: v1
kind: Pod
metadata:
  name: env-single-secret
spec:
  containers:
  - name: envars-test-container
    image: nginx
    env:
    - name: SECRET_USERNAME
      valueFrom:
        secretKeyRef:
          name: backend-user
          key: backend-username
@@ -0,0 +1,17 @@
apiVersion: v1
kind: Pod
metadata:
  name: secret-test-pod
spec:
  containers:
    - name: test-container
      image: nginx
      volumeMounts:
          # name must match the volume name below
          - name: secret-volume
            mountPath: /etc/secret-volume
  # The secret data is exposed to Containers in the Pod through a Volume
  volumes:
    - name: secret-volume
      secret:
        secretName: test-secret
@@ -0,0 +1,7 @@
apiVersion: v1
kind: Secret
metadata:
  name: test-secret
data:
  username: bXktYXBw
  password: Mzk1MjgkdmRnN0pi
@@ -46,16 +46,6 @@ Before proceeding with the various tutorials, we recommend adding the

* [Running ZooKeeper, a CP distributed system](/docs/tutorials/stateful-application/zookeeper/)

## Clusters

* [AppArmor](/docs/tutorials/clusters/apparmor/)
@@ -9,7 +9,7 @@ logo: appdirect_featured_logo.png
featured: true
weight: 4
quote: >
  We try all sorts of different strategies to capture people's interest. Kubernetes and the cloud-native technologies are now seen as the de facto ecosystem.
---

<div class="banner1" style="background-image: url('/images/CaseStudy_appdirect_banner1.jpg')">
@@ -48,10 +48,10 @@ quote: >
<h2>Since 2009, <a href="https://www.appdirect.com/">AppDirect</a>'s end-to-end e-commerce platform for cloud-based products and services has helped organizations such as Comcast and GoDaddy simplify their digital supply chains.</h2><br> When Director of Software Development Pierre-Alexandre Lacerte began working there in 2014, the company "was deploying a monolithic application to a tomcat-based infrastructure, and the whole release process was more complicated than it needed to be," he recalls. "There were lots of manual steps: one engineer would build a feature and create a pull request, and a QA engineer or another engineer would validate the feature. Then, once it was merged, somebody else would take care of the deployment. So we had bottlenecks all along the delivery pipeline."<br><br> At the same time, as the 40-person engineering team grew, the company realized it would need better infrastructure to both support that growth and accelerate it. Lacerte, then on the platform team, began hearing from multiple teams that wanted to use different frameworks and languages, from <a href="https://nodejs.org/">Node.js</a> to <a href="http://spring.io/projects/spring-boot">Spring Boot Java</a>. He soon realized that for the company to grow at speed, it needed better infrastructure and systems in which teams could operate autonomously, deploy on their own, and own their services in production.</div>
</section>
<div class="banner3" style="background-image: url('/images/CaseStudy_appdirect_banner3.jpg')">
<div class="banner3text">"We made the right decisions at the right time. Kubernetes and the cloud-native technologies are now seen as the de facto ecosystem. We know where to focus our efforts in order to tackle the new challenges we face as we scale out. The community is so active, and it's a wonderful complement to our own talented team."<br style="height:25px"><span style="font-size:14px;letter-spacing:2px;text-transform:uppercase;margin-top:5% !important;"><br>- Alexandre Gervais, Software Developer, AppDirect </span></div>
</div>
<section class="section3">
<div class="fullcol">From the start, Lacerte says, "My idea was: let's create an environment where teams can deploy their services faster, and they will say, 'Yes, we don't want to build on the monolith anymore; we want to build services.'" (Lacerte left the company in 2019.)<br><br>Working with the operations team, Lacerte's group got more access to and control over the company's <a href="https://aws.amazon.com/">AWS infrastructure</a>, and started prototyping several orchestration technologies. "Looking back, Kubernetes was a bit underground; it wasn't as well known," he says. "But when we looked at the community, the number of pull requests, and the velocity on GitHub, we saw it was gaining momentum. And we found it was much easier to manage than the other technologies." They built the first few services on Kubernetes, using <a href="https://www.chef.io/">Chef</a> and <a href="https://www.terraform.io/">Terraform</a> for provisioning, and more services and more automation followed. "We have clusters all over the world: in Korea, in Australia, in Germany, and in the U.S.," says Lacerte. "Automation is critical for us." Today they mostly use <a href="https://github.com/kubernetes/kops">Kops</a>, and are looking at the managed Kubernetes services offered by several cloud providers.<br><br> The monolith still exists, but there are fewer and fewer commits and features. All teams are deploying on the new infrastructure, and services are the norm. AppDirect now runs more than 50 microservices in production, on 15 Kubernetes clusters deployed on AWS and on premises around the world.<br><br> Lacerte's strategy ultimately worked because of the huge impact the Kubernetes platform had on deployment time. By reducing the dependency on custom-made, brittle shell scripts using SCP commands, the time to deploy a new version shrank from 4 hours to a few minutes. On top of that, the company invested a lot of effort to make developers self-service around their own services. "Launching a new service no longer requires a <a href="https://www.atlassian.com/software/jira">Jira</a> ticket or a meeting with another team," says Lacerte. Where the company used to deploy 1-30 times per week, it now deploys 1,600 times per week.
</div>
</section>
<div class="banner4" style="background-image: url('/images/CaseStudy_appdirect_banner4.jpg');width:100%;">
@ -66,5 +66,5 @@ quote: >
|
|||
<div class="banner5text">「私たちは、『ブランチにコードをプッシュする』だけのカルチャーから、コードベースを越えた、刺激的な新しい責務に移行しました。機能や設定のデプロイ、アプリケーションとビジネスメトリクスのモニタリング、そして機能停止が起きた場合の電話サポートなどがそれにあたります。それは計り知れないほどのエンジニアリング文化のシフトでしたが、規模とスピードを考えるとそのメリットは否定できません。」<br style="height:25px"><span style="font-size:14px;letter-spacing:2px;text-transform:uppercase;margin-top:5% !important;"><br>- AppDirect ソフトウェア開発 ディレクター Pierre-Alexandre Lacerte</span></div>
|
||||
</div>
|
||||
|
||||
<div class="fullcol">もちろんそれは、より多くの責任も意味しています。「私たちはエンジニアに視野を広げるように依頼しました」とGervaisは言います。「私たちは、『ブランチにコードをプッシュする』だけのカルチャーから、コードベースを越えた、刺激的な新しい責務へ移行しました。機能やコンフィグのデプロイ、アプリケーションとビジネスメトリクスのモニタリング、そして機能停止が起きた場合の電話サポートなどがそれにあたります。「それは計り知れないほどのエンジニアリング文化のシフトでしたが、規模とスピードを考えるとそのメリットは否定できません。」 <br><br> エンジニアリングのレベルが上がり続けるにつれて、プラットフォームチームは新たな課題を抱えることになります。Kubernetesプラットフォームが誰からでもアクセス可能で簡単に利用できる、それを確実にしていくことが求められるのです。「チームにより多くの人を追加したとき、彼らが効率的で生産的であり、プラットフォームの強化の仕方を知っていることを確実にするにはどうすればいいでしょうか?」とLacerteは問います。そのために、私たちにはエバンジェリストがいて、ドキュメントを用意して、いくつかのプロジェクトの事例を紹介できるようにしているのです。なので実際にデモをして、AMA(Ask Me Anything: 何でも聞いてほしい)セッションを設けるのです。私たちはたくさんの人からの関心を得るためにさまざまな戦略を試みています。」<br><br>Kubernetesの3年半もの旅を振り返り、GervaisはAppDirectが「正しいタイミングで正しい判断ができた」と感じています。「Kubernetesとクラウドネイティブ技術は、いまやデファクトのエコシステムとみなされています。スケールアウトしていく中で直面する新たな難題に取り組むにはどこに注力すべきか、私たちはわかっています。このコミュニティーはとても活発で、当社の優秀なチームをすばらしく補完してくれています。前進していくために私たちが注力すべきなのは、エコシステムから恩恵を受けながら、日々のオペレーションにビジネス的な付加価値を提供していくことでしょう。」</div>
|
||||
<div class="fullcol">もちろんそれは、より多くの責任も意味しています。「私たちはエンジニアに視野を広げるように依頼しました」とGervaisは言います。「私たちは、『ブランチにコードをプッシュする』だけのカルチャーから、コードベースを越えた、刺激的な新しい責務へ移行しました。機能やコンフィグのデプロイ、アプリケーションとビジネスメトリクスのモニタリング、そして機能停止が起きた場合の電話サポートなどがそれにあたります。「それは計り知れないほどのエンジニアリング文化のシフトでしたが、規模とスピードを考えるとそのメリットは否定できません。」 <br><br> エンジニアリングのレベルが上がり続けるにつれて、プラットフォームチームは新たな課題を抱えることになります。Kubernetesプラットフォームが誰からでもアクセス可能で簡単に利用できる、それを確実にしていくことが求められるのです。「チームにより多くの人を追加したとき、彼らが効率的で生産的であり、プラットフォームの強化の仕方を知っていることを確実にするにはどうすればいいでしょうか?」とLacerteは問います。そのために、私たちにはエバンジェリストがいて、ドキュメントを用意して、いくつかのプロジェクトの事例を紹介できるようにしているのです。なので実際にデモをして、AMA(Ask Me Anything: 何でも聞いてほしい)セッションを設けるのです。私たちはたくさんの人からの関心を得るためにさまざまな戦略を試みています。」<br><br>Kubernetesの3年半もの旅を振り返り、GervaisはAppDirectが「正しいタイミングで正しい判断ができた」と感じています。「Kubernetesとクラウドネイティブ技術は、いまやデファクトのエコシステムとみなされています。スケールアウトしていく中で直面する新たな難題に取り組むにはどこに注力すべきか、私たちはわかっています。このコミュニティはとても活発で、当社の優秀なチームをすばらしく補完してくれています。前進していくために私たちが注力すべきなのは、エコシステムから恩恵を受けながら、日々のオペレーションにビジネス的な付加価値を提供していくことでしょう。」</div>
|
||||
</section>
|
||||
|
|
|
@ -0,0 +1,236 @@
|
|||
---
|
||||
title: コミュニティ
|
||||
layout: basic
|
||||
cid: community
|
||||
---
|
||||
|
||||
<div class="newcommunitywrapper">
|
||||
<div class="banner1">
|
||||
<img src="/images/community/kubernetes-community-final-02.jpg" alt="Kubernetesカンファレンスギャラリー" style="width:100%;padding-left:0px" class="desktop">
|
||||
<img src="/images/community/kubernetes-community-02-mobile.jpg" alt="Kubernetesカンファレンスギャラリー" style="width:100%;padding-left:0px" class="mobile">
|
||||
</div>
|
||||
|
||||
<div class="intro">
|
||||
<br class="mobile">
|
||||
<p>Kubernetesコミュニティ(ユーザー、コントリビューター、そして私たちがともに作り上げた文化)は、このオープンソースプロジェクトが急速に広まった最大の理由の1つです。プロジェクト自体の成長と変化に伴って、私たちの文化と価値も成長と変化を続けています。私たちはプロジェクトとプロジェクトへの取り組み方を絶えず改善するために協力して活動しています。<br><br>私たちはissueとpull requestを作り、SIGミーティング、Kubernetesミートアップ、Kubeconに参加する人々です。そして、Kubernetesの採用とイノベーションを提唱し、<code>kubectl get pods</code>を実行し、数多くの重要な方法でコントリビュートする人々です。このページを読み進めて、この素晴らしいコミュニティに参加してコミュニティの一員になる方法について知ってください。</p>
|
||||
<br class="mobile">
|
||||
</div>
|
||||
|
||||
<div class="community__navbar">
|
||||
|
||||
<a href="#conduct">行動規範</a>
|
||||
<a href="#videos">動画</a>
|
||||
<a href="#discuss">ディスカッション</a>
|
||||
<a href="#events">イベントとミートアップ</a>
|
||||
<a href="#news">ニュース</a>
|
||||
|
||||
</div>
|
||||
<br class="mobile"><br class="mobile">
|
||||
<div class="imagecols">
|
||||
<br class="mobile">
|
||||
<div class="imagecol">
|
||||
<img src="/images/community/kubernetes-community-final-03.jpg" alt="Kubernetesカンファレンスギャラリー" style="width:100%" class="desktop">
|
||||
</div>
|
||||
|
||||
<div class="imagecol">
|
||||
<img src="/images/community/kubernetes-community-final-04.jpg" alt="Kubernetesカンファレンスギャラリー" style="width:100%" class="desktop">
|
||||
</div>
|
||||
|
||||
<div class="imagecol" style="margin-right:0% important">
|
||||
<img src="/images/community/kubernetes-community-final-05.jpg" alt="Kubernetesカンファレンスギャラリー" style="width:100%;margin-right:0% important" class="desktop">
|
||||
</div>
|
||||
<img src="/images/community/kubernetes-community-04-mobile.jpg" alt="Kubernetesカンファレンスギャラリー" style="width:100%;margin-bottom:3%" class="mobile">
|
||||
|
||||
<a name="conduct"></a>
|
||||
</div>
|
||||
|
||||
|
||||
|
||||
<div class="conduct">
|
||||
<div class="conducttext">
|
||||
<br class="mobile"><br class="mobile">
|
||||
<br class="tablet"><br class="tablet">
|
||||
<div class="conducttextnobutton" style="margin-bottom:2%"><h1>行動規範</h1>
|
||||
Kubernetesコミュニティは尊重と包摂を大切にしているため、あらゆるやりとりにおいて行動規範の遵守を徹底しています。イベント、ミーティング、Slack、その他のコミュニケーション手段において行動規範の違反を見つけた場合には、Kubernetes Code of Conduct Committee(行動規範委員会)の<a href="mailto:conduct@kubernetes.io" style="color:#0662EE;font-weight:300">conduct@kubernetes.io</a>に連絡してください。すべての報告は秘密として扱われます。委員会(Committee)については<a href="https://github.com/kubernetes/community/tree/master/committee-code-of-conduct" style="color:#0662EE;font-weight:300">こちら</a>で読めます。
|
||||
<br>
|
||||
<a href="/ja/community/code-of-conduct/">
|
||||
<br class="mobile"><br class="mobile">
|
||||
|
||||
<span class="fullbutton">
|
||||
もっと読む
|
||||
</span>
|
||||
</a>
|
||||
</div><a name="videos"></a>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
|
||||
|
||||
<div class="videos">
|
||||
<br class="mobile"><br class="mobile">
|
||||
<br class="tablet"><br class="tablet">
|
||||
<h1 style="margin-top:0px">動画</h1>
|
||||
|
||||
<div style="margin-bottom:4%;font-weight:300;text-align:center;padding-left:10%;padding-right:10%">YouTubeでは、多数の動画を公開しています。チャンネル登録をして幅広いトピックの動画をご覧ください。</div>
|
||||
|
||||
<div class="videocontainer">
|
||||
|
||||
<div class="video">
|
||||
|
||||
<iframe width="100%" height="250" src="https://www.youtube.com/embed/videoseries?list=PL69nYSiGNLP3azFUvYJjGn45YbF6C-uIg" title="Monthly office hours" frameborder="0" allow="autoplay; encrypted-media" allowfullscreen></iframe>
|
||||
|
||||
<a href="https://www.youtube.com/playlist?list=PL69nYSiGNLP3azFUvYJjGn45YbF6C-uIg">
|
||||
<div class="videocta">
|
||||
月例のオフィスアワーを視聴する ▶</div>
|
||||
</a>
|
||||
</div>
|
||||
|
||||
<div class="video">
|
||||
<iframe width="100%" height="250" src="https://www.youtube.com/embed/videoseries?list=PL69nYSiGNLP1pkHsbPjzAewvMgGUpkCnJ" title="Weekly community meetings" frameborder="0" allow="autoplay; encrypted-media" allowfullscreen></iframe>
|
||||
<a href="https://www.youtube.com/playlist?list=PL69nYSiGNLP1pkHsbPjzAewvMgGUpkCnJ">
|
||||
<div class="videocta">
|
||||
週例のコミュニティミーティングを視聴する ▶
|
||||
</div>
|
||||
</a>
|
||||
</div>
|
||||
|
||||
<div class="video">
|
||||
|
||||
<iframe width="100%" height="250" src="https://www.youtube.com/embed/videoseries?list=PL69nYSiGNLP3QpQrhZq_sLYo77BVKv09F" title="Talk from a community member" frameborder="0" allow="autoplay; encrypted-media" allowfullscreen></iframe>
|
||||
|
||||
<a href="https://www.youtube.com/playlist?list=PL69nYSiGNLP3QpQrhZq_sLYo77BVKv09F">
|
||||
<div class="videocta">
|
||||
コミュニティメンバーのトークを視聴する ▶
|
||||
</div>
|
||||
|
||||
</a>
|
||||
<a name="discuss"></a>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
|
||||
<div class="resources">
|
||||
<br class="mobile"><br class="mobile">
|
||||
<br class="tablet"><br class="tablet">
|
||||
<h1 style="padding-top:1%">ディスカッション</h1>
|
||||
|
||||
<div style="font-weight:300;text-align:center">さまざまな話題について話し合いが行われています。次のようなプラットフォームで会話に参加してください。</div>
|
||||
|
||||
<div class="resourcecontainer">
|
||||
|
||||
<div class="resourcebox">
|
||||
<img src="/images/community/discuss.png" alt=フォーラム" style="width:80%;padding-bottom:2%">
|
||||
<a href="https://discuss.kubernetes.io/" style="color:#0662EE;display:block;margin-top:1%">
|
||||
フォーラム ▶
|
||||
</a>
|
||||
<div class="resourceboxtext" style="font-size:12px;text-transform:none !important;font-weight:300;line-height:1.4em;color:#333333;margin-top:4%">
|
||||
ドキュメントやStackOverflow、その他の場所の橋渡しをするトピックベースの技術的な議論
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<div class="resourcebox">
|
||||
<img src="/images/community/twitter.png" alt="Twitter" style="width:80%;padding-bottom:2%">
|
||||
<a href="https://twitter.com/kubernetesio" style="color:#0662EE;display:block;margin-top:1%">
|
||||
Twitter ▶
|
||||
</a>
|
||||
<div class="resourceboxtext" style="font-size:12px;text-transform:none !important;font-weight:300;line-height:1.4em;color:#333333;margin-top:4%">
|
||||
ブログ記事、イベント、ニュース、アイデアに関するリアルタイムのお知らせ
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<div class="resourcebox">
|
||||
<img src="/images/community/github.png" alt="GitHub" style="width:80%;padding-bottom:2%">
|
||||
<a href="https://github.com/kubernetes/kubernetes" style="color:#0662EE;display:block;margin-top:1%">
|
||||
GitHub ▶
|
||||
</a>
|
||||
<div class="resourceboxtext" style="font-size:12px;text-transform:none !important;font-weight:300;line-height:1.4em;color:#333333;margin-top:4%">
|
||||
すべてのプロジェクトとissueの管理、そしてもちろんコードがあります
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<div class="resourcebox">
|
||||
<img src="/images/community/stack.png" alt="Stack Overflow" style="width:80%;padding-bottom:2%">
|
||||
<a href="https://stackoverflow.com/search?q=kubernetes" style="color:#0662EE;display:block;margin-top:1%">
|
||||
stack overflow ▶
|
||||
</a>
|
||||
<div class="resourceboxtext" style="font-size:12px;text-transform:none !important;font-weight:300;line-height:1.4em;color:#333333;margin-top:4%">
|
||||
あらゆるユースケースの技術的な問題の解決
|
||||
<a name="events"></a>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<!--
|
||||
<div class="resourcebox">
|
||||
|
||||
<img src="/images/community/slack.png" style="width:80%">
|
||||
|
||||
slack ▶
|
||||
|
||||
<div class="resourceboxtext" style="font-size:11px;text-transform:none !important;font-weight:200;line-height:1.4em;color:#333333;margin-top:4%">
|
||||
With 170+ channels, you'll find one that fits your needs.
|
||||
</div>
|
||||
|
||||
</div>-->
|
||||
|
||||
</div>
|
||||
</div>
|
||||
<div class="events">
|
||||
<br class="mobile"><br class="mobile">
|
||||
<br class="tablet"><br class="tablet">
|
||||
<div class="eventcontainer">
|
||||
<h1 style="color:white !important">今後のイベント</h1>
|
||||
{{< upcoming-events >}}
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<div class="meetups">
|
||||
<div class="meetupcol">
|
||||
<div class="meetuptext">
|
||||
<h1 style="text-align:left">グローバルコミュニティ</h1>
|
||||
世界中で150以上のミートアップが開催されています。地元でKubernetes仲間を見つけましょう。近くで開催されていない場合は、指揮を取って新しいミートアップを作りましょう。
|
||||
</div>
|
||||
<a href="https://www.meetup.com/topics/kubernetes/">
|
||||
<div class="button">
|
||||
ミートアップを探す
|
||||
</div>
|
||||
</a>
|
||||
<a name="news"></a>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
|
||||
<!--
|
||||
<div class="contributor">
|
||||
<div class="contributortext">
|
||||
<br>
|
||||
<h1 style="text-align:left">
|
||||
New Contributors Site
|
||||
</h1>
|
||||
Text about new contributors site.
|
||||
|
||||
<br><br>
|
||||
|
||||
<div class="button">
|
||||
VISIT SITE
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
-->
|
||||
|
||||
<div class="news">
|
||||
<br class="mobile"><br class="mobile">
|
||||
<br class="tablet"><br class="tablet">
|
||||
<h1 style="margin-bottom:2%">最新ニュース</h1>
|
||||
|
||||
<br>
|
||||
<div class="twittercol1">
|
||||
<a class="twitter-timeline" data-tweet-limit="1" href="https://twitter.com/kubernetesio?ref_src=twsrc%5Etfw">kubernetesioさんのツイート</a> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
|
||||
</div>
|
||||
|
||||
<br>
|
||||
<br><br><br><br>
|
||||
</div>
|
||||
|
||||
</div>
|
|
@ -0,0 +1,19 @@
|
|||
---
|
||||
title: コミュニティ
|
||||
layout: basic
|
||||
cid: community
|
||||
css: /css/community.css
|
||||
---
|
||||
|
||||
<div class="community_main">
|
||||
<h1>Kubernetesコミュニティ行動規範</h1>
|
||||
|
||||
Kubernetesは、<a href="https://github.com/cncf/foundation/blob/master/code-of-conduct-languages/jp.md">CNCF行動規範</a>に従います。
|
||||
以下のCNCF行動規範のテキストは、<a href="https://github.com/cncf/foundation/blob/214585e24aab747fb85c2ea44fbf4a2442e30de6/code-of-conduct-languages/jp.md">commit 214585e</a>から複製したものです。この内容が古くなっていることに気がついた場合には、<a href="https://github.com/kubernetes/website/issues/new">issueを作成</a>してください。
|
||||
|
||||
イベント、ミーティング、Slack、その他のコミュニケーション手段において行動規範の違反を見つけた場合には、<a href="https://git.k8s.io/community/committee-code-of-conduct">Kubernetes Code of Conduct Committee(行動規範委員会)</a>に連絡してください。連絡はメールアドレス<a href="mailto:conduct@kubernetes.io">conduct@kubernetes.io</a>から行えます。連絡者の匿名性は守られます。
|
||||
|
||||
<div class="cncf_coc_container">
|
||||
{{< include "static/cncf-code-of-conduct.md" >}}
|
||||
</div>
|
||||
</div>
|
|
@ -0,0 +1,5 @@
|
|||
The files in this directory have been imported from other sources. Do not
|
||||
edit them directly, except by replacing them with new versions.
|
||||
|
||||
For Japanese translation, cncf-code-of-conduct.md is copied from the following CNCF repository:
|
||||
https://github.com/cncf/foundation/blob/master/code-of-conduct-languages/jp.md
|
|
@ -0,0 +1,32 @@
|
|||
<!-- Do not edit this file directly. Get the latest from
|
||||
https://github.com/cncf/foundation/blob/master/code-of-conduct/jp.md -->
|
||||
|
||||
CNCF コミュニティ行動規範 v1.0
|
||||
------------------------------
|
||||
|
||||
### コントリビューター行動規範
|
||||
|
||||
本プロジェクトのコントリビューターおよびメンテナーとして、オープンかつ快適なコミュニティを促進するために、私たちは、問題の報告、機能要求の投稿、ドキュメントの更新、プル要求またはパッチの送信、および他の活動を通して貢献するすべての人々を尊重することを誓います。
|
||||
|
||||
私たちは、経験レベル、性別、性の自認および表明、性的嗜好、障がい、個人的な外見、身体の大きさ、人種、民族、年齢、宗教、または国籍に関係なく、このプロジェクトへの参加を、全員にとってハラスメントがない経験にすることを約束します。
|
||||
|
||||
認められない参加者の行動の例:
|
||||
|
||||
- 性的な言語またはイメージの使用
|
||||
- 個人的な攻撃
|
||||
- 煽りや侮辱/軽蔑的なコメント
|
||||
- 公開または非公開のハラスメント
|
||||
- 明示的な許可なく、物理的な住所や電子アドレスなど、他者の個人情報を公開すること
|
||||
- 他の倫理または職業倫理に反する行為。
|
||||
|
||||
プロジェクト メンテナーは、本行動規範に準拠していないコメント、コミット、コード、Wiki 編集、問題、他のコントリビューションを削除、編集、または却下する権利および責任を有しています。プロジェクト メンテナーは、本行動規範を採用することで、公正かつ一貫した方法で、これらの原則を本プロジェクトの管理におけるすべての要素に適用することに注力します。本行動規範を遵守または施行しないプロジェクト メンテナーは、永久的にプロジェクト チームから抹消される可能性があります。
|
||||
|
||||
個人がプロジェクトまたはコミュニティを代表するときには、本行動規範は、プロジェクト スペースおよび公共のスペースの両方に適用されます。
|
||||
|
||||
Kubernetesで虐待的、嫌がらせ、または許されない行動があった場合には、<conduct@kubernetes.io>から[Kubernetes Code of Conduct Committee](https://git.k8s.io/community/committee-code-of-conduct)(行動規範委員会)にご連絡ください。その他のプロジェクトにつきましては、CNCFプロジェクト管理者または仲介者<mishi@linux.com>にご連絡ください。
|
||||
|
||||
本行動規範は、コントリビューターの合意 (http://contributor-covenant.org) バージョン 1.2.0 http://contributor-covenant.org/version/1/2/0/ から適応されています。
|
||||
|
||||
### CNCF イベント行動規範
|
||||
|
||||
CNCF イベントは、イベント ページにある Linux Foundation 行動規範 に準拠します。これは、前述のポリシーに対応し、インシデントへの対応に関する詳細を含めることを目的としています。
|
|
@ -0,0 +1,231 @@
|
|||
---
|
||||
title: 証明書
|
||||
content_type: concept
|
||||
weight: 20
|
||||
---
|
||||
|
||||
|
||||
<!-- overview -->
|
||||
|
||||
クライアント証明書認証を使用する場合、`easyrsa`や`openssl`、`cfssl`を用いて、手動で証明書を生成できます。
|
||||
|
||||
|
||||
<!-- body -->
|
||||
|
||||
### easyrsa
|
||||
|
||||
**easyrsa**を用いると、クラスターの証明書を手動で生成できます。
|
||||
|
||||
1. パッチを当てたバージョンのeasyrsa3をダウンロードして解凍し、初期化します。
|
||||
|
||||
curl -LO https://storage.googleapis.com/kubernetes-release/easy-rsa/easy-rsa.tar.gz
|
||||
tar xzf easy-rsa.tar.gz
|
||||
cd easy-rsa-master/easyrsa3
|
||||
./easyrsa init-pki
|
||||
1. 新しい認証局(CA)を生成します。`--batch`は自動モードを設定し、`--req-cn`はCAの新しいルート証明書の共通名(CN)を指定します。
|
||||
|
||||
./easyrsa --batch "--req-cn=${MASTER_IP}@`date +%s`" build-ca nopass
|
||||
1. サーバー証明書と鍵を生成します。
|
||||
引数`--subject-alt-name`は、APIサーバーへのアクセスに使用できるIPおよびDNS名を設定します。
|
||||
`MASTER_CLUSTER_IP`は通常、APIサーバーとコントローラーマネージャーコンポーネントの両方で引数`--service-cluster-ip-range`として指定されるサービスCIDRの最初のIPです。
|
||||
引数`--days`は、証明書の有効期限が切れるまでの日数を設定するために使われます。
|
||||
以下の例は、デフォルトのDNSドメイン名として`cluster.local`を使用していることを前提とします。
|
||||
|
||||
./easyrsa --subject-alt-name="IP:${MASTER_IP},"\
|
||||
"IP:${MASTER_CLUSTER_IP},"\
|
||||
"DNS:kubernetes,"\
|
||||
"DNS:kubernetes.default,"\
|
||||
"DNS:kubernetes.default.svc,"\
|
||||
"DNS:kubernetes.default.svc.cluster,"\
|
||||
"DNS:kubernetes.default.svc.cluster.local" \
|
||||
--days=10000 \
|
||||
build-server-full server nopass
|
||||
1. `pki/ca.crt`、`pki/issued/server.crt`、`pki/private/server.key`をディレクトリーにコピーします。
|
||||
1. 以下のパラメーターを、APIサーバーの開始パラメーターとして追加します。
|
||||
|
||||
--client-ca-file=/yourdirectory/ca.crt
|
||||
--tls-cert-file=/yourdirectory/server.crt
|
||||
--tls-private-key-file=/yourdirectory/server.key
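(補足)生成したサーバー証明書に、意図したIPアドレスやDNS名がSAN(Subject Alternative Name)として含まれているかどうかは、たとえば次のように確認できます(パスは上記の手順で生成されるものを想定した一例です)。

```bash
openssl x509 -noout -text -in pki/issued/server.crt | grep -A1 "Subject Alternative Name"
```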
|
||||
|
||||
### openssl
|
||||
|
||||
**openssl**を用いると、クラスターの証明書を手動で生成できます。
|
||||
|
||||
1. 2048ビットのca.keyを生成します。
|
||||
|
||||
openssl genrsa -out ca.key 2048
|
||||
1. ca.keyを用いて、ca.crtを生成します。証明書の有効期間を設定するには、-daysを使用します。
|
||||
|
||||
openssl req -x509 -new -nodes -key ca.key -subj "/CN=${MASTER_IP}" -days 10000 -out ca.crt
|
||||
1. 2048ビットのserver.keyを生成します。
|
||||
|
||||
openssl genrsa -out server.key 2048
|
||||
1. 証明書署名要求(CSR)を生成するための設定ファイルを生成します。
|
||||
ファイル(例: `csr.conf`)に保存する前に、山括弧で囲まれた値(例: `<MASTER_IP>`)を必ず実際の値に置き換えてください。
|
||||
`MASTER_CLUSTER_IP`の値は、前節で説明したAPIサーバーのサービスクラスターIPであることに注意してください。
|
||||
以下の例は、デフォルトのDNSドメイン名として`cluster.local`を使用していることを前提とします。
|
||||
|
||||
[ req ]
|
||||
default_bits = 2048
|
||||
prompt = no
|
||||
default_md = sha256
|
||||
req_extensions = req_ext
|
||||
distinguished_name = dn
|
||||
|
||||
[ dn ]
|
||||
C = <country>
|
||||
ST = <state>
|
||||
L = <city>
|
||||
O = <organization>
|
||||
OU = <organization unit>
|
||||
CN = <MASTER_IP>
|
||||
|
||||
[ req_ext ]
|
||||
subjectAltName = @alt_names
|
||||
|
||||
[ alt_names ]
|
||||
DNS.1 = kubernetes
|
||||
DNS.2 = kubernetes.default
|
||||
DNS.3 = kubernetes.default.svc
|
||||
DNS.4 = kubernetes.default.svc.cluster
|
||||
DNS.5 = kubernetes.default.svc.cluster.local
|
||||
IP.1 = <MASTER_IP>
|
||||
IP.2 = <MASTER_CLUSTER_IP>
|
||||
|
||||
[ v3_ext ]
|
||||
authorityKeyIdentifier=keyid,issuer:always
|
||||
basicConstraints=CA:FALSE
|
||||
keyUsage=keyEncipherment,dataEncipherment
|
||||
extendedKeyUsage=serverAuth,clientAuth
|
||||
subjectAltName=@alt_names
|
||||
1. 設定ファイルに基づいて、証明書署名要求を生成します。
|
||||
|
||||
openssl req -new -key server.key -out server.csr -config csr.conf
|
||||
1. ca.key、ca.crt、server.csrを使用してサーバー証明書を生成します。
|
||||
|
||||
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key \
|
||||
-CAcreateserial -out server.crt -days 10000 \
|
||||
-extensions v3_ext -extfile csr.conf
|
||||
1. 証明書を表示します。
|
||||
|
||||
openssl x509 -noout -text -in ./server.crt
|
||||
|
||||
最後にAPIサーバーの起動パラメーターに、同様のパラメーターを追加します。
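たとえばeasyrsaの節と同様に、次のようなフラグを追加します(`/yourdirectory`は実際に証明書を配置したディレクトリーに置き換えてください)。

```
--client-ca-file=/yourdirectory/ca.crt
--tls-cert-file=/yourdirectory/server.crt
--tls-private-key-file=/yourdirectory/server.key
```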
|
||||
|
||||
### cfssl
|
||||
|
||||
**cfssl**も証明書を生成するためのツールです。
|
||||
|
||||
1. 以下のように、ダウンロードして解凍し、コマンドラインツールを用意します。
|
||||
使用しているハードウェアアーキテクチャやcfsslのバージョンに応じて、サンプルコマンドの調整が必要な場合があります。
|
||||
|
||||
curl -L https://github.com/cloudflare/cfssl/releases/download/v1.4.1/cfssl_1.4.1_linux_amd64 -o cfssl
|
||||
chmod +x cfssl
|
||||
curl -L https://github.com/cloudflare/cfssl/releases/download/v1.4.1/cfssljson_1.4.1_linux_amd64 -o cfssljson
|
||||
chmod +x cfssljson
|
||||
curl -L https://github.com/cloudflare/cfssl/releases/download/v1.4.1/cfssl-certinfo_1.4.1_linux_amd64 -o cfssl-certinfo
|
||||
chmod +x cfssl-certinfo
|
||||
1. アーティファクトを保持するディレクトリーを生成し、cfsslを初期化します。
|
||||
|
||||
mkdir cert
|
||||
cd cert
|
||||
../cfssl print-defaults config > config.json
|
||||
../cfssl print-defaults csr > csr.json
|
||||
1. CAファイルを生成するためのJSON設定ファイル(例: `ca-config.json`)を生成します。
|
||||
|
||||
{
|
||||
"signing": {
|
||||
"default": {
|
||||
"expiry": "8760h"
|
||||
},
|
||||
"profiles": {
|
||||
"kubernetes": {
|
||||
"usages": [
|
||||
"signing",
|
||||
"key encipherment",
|
||||
"server auth",
|
||||
"client auth"
|
||||
],
|
||||
"expiry": "8760h"
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
1. CA証明書署名要求(CSR)用のJSON設定ファイル(例: `ca-csr.json`)を生成します。
|
||||
山括弧で囲まれた値は、必ず使用したい実際の値に置き換えてください。
|
||||
|
||||
{
|
||||
"CN": "kubernetes",
|
||||
"key": {
|
||||
"algo": "rsa",
|
||||
"size": 2048
|
||||
},
|
||||
"names":[{
|
||||
"C": "<country>",
|
||||
"ST": "<state>",
|
||||
"L": "<city>",
|
||||
"O": "<organization>",
|
||||
"OU": "<organization unit>"
|
||||
}]
|
||||
}
|
||||
1. CA鍵(`ca-key.pem`)と証明書(`ca.pem`)を生成します。
|
||||
|
||||
../cfssl gencert -initca ca-csr.json | ../cfssljson -bare ca
|
||||
1. APIサーバーの鍵と証明書を生成するためのJSON設定ファイル(例: `server-csr.json`)を生成します。
|
||||
山括弧で囲まれた値は、必ず使用したい実際の値に置き換えてください。
|
||||
`MASTER_CLUSTER_IP`の値は、前節で説明したAPIサーバーのサービスクラスターIPです。
|
||||
以下の例は、デフォルトのDNSドメイン名として`cluster.local`を使用していることを前提とします。
|
||||
|
||||
{
|
||||
"CN": "kubernetes",
|
||||
"hosts": [
|
||||
"127.0.0.1",
|
||||
"<MASTER_IP>",
|
||||
"<MASTER_CLUSTER_IP>",
|
||||
"kubernetes",
|
||||
"kubernetes.default",
|
||||
"kubernetes.default.svc",
|
||||
"kubernetes.default.svc.cluster",
|
||||
"kubernetes.default.svc.cluster.local"
|
||||
],
|
||||
"key": {
|
||||
"algo": "rsa",
|
||||
"size": 2048
|
||||
},
|
||||
"names": [{
|
||||
"C": "<country>",
|
||||
"ST": "<state>",
|
||||
"L": "<city>",
|
||||
"O": "<organization>",
|
||||
"OU": "<organization unit>"
|
||||
}]
|
||||
}
|
||||
1. APIサーバーの鍵と証明書を生成します。デフォルトでは、それぞれ`server-key.pem`と`server.pem`というファイルに保存されます。
|
||||
|
||||
../cfssl gencert -ca=ca.pem -ca-key=ca-key.pem \
|
||||
--config=ca-config.json -profile=kubernetes \
|
||||
server-csr.json | ../cfssljson -bare server
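(補足)手順1でダウンロードした`cfssl-certinfo`を使うと、生成した証明書の内容を確認できます。以下は一例です。

```bash
../cfssl-certinfo -cert server.pem
```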
|
||||
|
||||
|
||||
## 自己署名CA証明書の配布
|
||||
|
||||
クライアントノードは、自己署名CA証明書を有効だと認識しないことがあります。
|
||||
プロダクション用でない場合や、会社のファイアウォールの背後で実行する場合は、自己署名CA証明書をすべてのクライアントに配布し、有効な証明書のローカルリストを更新できます。
|
||||
|
||||
各クライアントで、以下の操作を実行します。
|
||||
|
||||
```bash
|
||||
sudo cp ca.crt /usr/local/share/ca-certificates/kubernetes.crt
|
||||
sudo update-ca-certificates
|
||||
```
|
||||
|
||||
```
|
||||
Updating certificates in /etc/ssl/certs...
|
||||
1 added, 0 removed; done.
|
||||
Running hooks in /etc/ca-certificates/update.d....
|
||||
done.
|
||||
```
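なお、上記はDebian/Ubuntu系のクライアントを想定した手順です。RHEL/CentOS系のクライアントでは、一般に次のようなコマンドになります(あくまで一例であり、ディストリビューションによって異なる場合があります)。

```bash
sudo cp ca.crt /etc/pki/ca-trust/source/anchors/kubernetes.crt
sudo update-ca-trust extract
```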
|
||||
|
||||
## 証明書API
|
||||
|
||||
`certificates.k8s.io`APIを用いることで、[こちら](/ja/docs/tasks/tls/managing-tls-in-a-cluster)のドキュメントにあるように、認証に使用するx509証明書をプロビジョニングすることができます。
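以下は、前節で生成したCSR(`server.csr`)から`CertificateSigningRequest`オブジェクトを作成する場合のスケッチです(APIバージョン`certificates.k8s.io/v1beta1`とオブジェクト名`my-server-csr`は、この例のための仮定です。実際の手順の詳細は上記リンク先を参照してください)。

```bash
# CSRをbase64エンコードしてCertificateSigningRequestを作成する例(スケッチ)
# ("my-server-csr"という名前はこの例のための仮のものです)
cat <<EOF | kubectl apply -f -
apiVersion: certificates.k8s.io/v1beta1
kind: CertificateSigningRequest
metadata:
  name: my-server-csr
spec:
  request: $(base64 < server.csr | tr -d '\n')
  usages:
  - digital signature
  - key encipherment
  - server auth
EOF
```

作成したCSRは、クラスター管理者が`kubectl certificate approve`で承認すると証明書が発行されます。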
|
|
@ -0,0 +1,6 @@
|
|||
---
|
||||
title: "セキュリティ"
|
||||
weight: 81
|
||||
description: >
|
||||
クラウドネイティブなワークロードをセキュアに維持するための概念
|
||||
---
|
|
@ -92,7 +92,7 @@ echo $pods
|
|||
pi-5rwd7
|
||||
```
|
||||
|
||||
ここでのセレクターは、Jobのセレクターと同じです。`--output = jsonpath`オプションは、返されたリストの各Podから名前だけを取得する式を指定します。
|
||||
ここでのセレクターは、Jobのセレクターと同じです。`--output=jsonpath`オプションは、返されたリストの各Podから名前だけを取得する式を指定します。
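たとえば、このページの例のようにJob名が`pi`である場合、変数`$pods`は次のような形で設定できます。

```
pods=$(kubectl get pods --selector=job-name=pi --output=jsonpath='{.items[*].metadata.name}')
echo $pods
```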
|
||||
|
||||
|
||||
いずれかのPodの標準出力を表示します。
|
||||
|
@ -107,7 +107,7 @@ kubectl logs $pods
|
|||
|
||||
## Jobの仕様の作成
|
||||
|
||||
他のすべてのKubernetesの設定と同様に、Jobには`apiVersion`、` kind`、および`metadata`フィールドが必要です。
|
||||
他のすべてのKubernetesの設定と同様に、Jobには`apiVersion`、`kind`、および`metadata`フィールドが必要です。
|
||||
その名前は有効な[DNSサブドメイン名](/ja/docs/concepts/overview/working-with-objects/names#dns-subdomain-names)である必要があります。
|
||||
|
||||
Jobには[`.spec`セクション](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status)も必要です。
|
||||
|
|
|
@ -86,28 +86,29 @@ card:
|
|||
|
||||
英語 | 日本語
|
||||
--------- | ---------
|
||||
Addon/Add-on|アドオン
|
||||
Aggregation Layer | アグリゲーションレイヤー
|
||||
architecture | アーキテクチャ
|
||||
binary | バイナリ
|
||||
cluster|クラスター
|
||||
community | コミュニティ
|
||||
container | コンテナ
|
||||
controller | コントローラー
|
||||
Deployment/Deploy|KubernetesリソースとしてのDeploymentはママ表記、一般的な用語としてのdeployの場合は、デプロイ
|
||||
directory | ディレクトリ
|
||||
For more information|さらなる情報(一時的)
|
||||
GitHub | GitHub (ママ表記)
|
||||
Issue | Issue (ママ表記)
|
||||
operator | オペレーター
|
||||
orchestrate(動詞)|オーケストレーションする
|
||||
Persistent Volume|KubernetesリソースとしてのPersistentVolumeはママ表記、一般的な用語としての場合は、永続ボリューム
|
||||
Deployment/Deploy|KubernetesリソースとしてのDeploymentはママ表記、一般的な用語としてのdeployの場合は、デプロイ
|
||||
Addon/Add-on|アドオン
|
||||
Quota|クォータ
|
||||
For more information|さらなる情報(一時的)
|
||||
prefix | プレフィックス
|
||||
container | コンテナ
|
||||
directory | ディレクトリ
|
||||
binary | バイナリ
|
||||
controller | コントローラー
|
||||
operator | オペレーター
|
||||
Aggregation Layer | アグリゲーションレイヤー
|
||||
Issue | Issue (ママ表記)
|
||||
Pull Request | Pull Request (ママ表記)
|
||||
GitHub | GitHub (ママ表記)
|
||||
Quota|クォータ
|
||||
registry | レジストリ
|
||||
architecture | アーキテクチャ
|
||||
secure | セキュア
|
||||
stacked | 積層(例: stacked etcd clusterは積層etcdクラスター)
|
||||
a set of ~ | ~の集合
|
||||
stacked | 積層(例: stacked etcd clusterは積層etcdクラスター)
|
||||
|
||||
### 備考
|
||||
|
||||
|
|
|
@ -0,0 +1,7 @@
|
|||
---
|
||||
title: ドキュメントスタイルの概要
|
||||
main_menu: true
|
||||
weight: 80
|
||||
---
|
||||
|
||||
このセクション内のトピックでは、文章のスタイル、コンテンツの形式や構成、特にKubernetesのドキュメント特有のHugoカスタマイズの使用方法に関するガイダンスを提供します。
|
|
@ -0,0 +1,18 @@
|
|||
---
|
||||
title: Secret
|
||||
id: secret
|
||||
date: 2018-04-12
|
||||
full_link: /ja/docs/concepts/configuration/secret/
|
||||
short_description: >
|
||||
パスワードやOAuthトークン、SSHキーのような機密の情報を保持します。
|
||||
|
||||
aka:
|
||||
tags:
|
||||
- core-object
|
||||
- security
|
||||
---
|
||||
パスワードやOAuthトークン、SSHキーのような機密の情報を保持します。
|
||||
|
||||
<!--more-->
|
||||
|
||||
機密情報の取り扱い方法を細かく制御することができ、保存時には[暗号化](/ja/docs/tasks/administer-cluster/encrypt-data/#ensure-all-secrets-are-encrypted)するなど、誤って公開してしまうリスクを減らすことができます。{{< glossary_tooltip text="Pod" term_id="pod" >}}は、ボリュームマウントされたファイルとして、またはPodのイメージをPullするkubeletによって、Secretを参照します。Secretは機密情報を扱うのに最適で、機密でない情報には[ConfigMap](/ja/docs/tasks/configure-pod-container/configure-pod-configmap/)が適しています。
|
|
@ -0,0 +1,4 @@
|
|||
---
|
||||
title: セットアップツールのリファレンス
|
||||
weight: 50
|
||||
---
|