Clean zh/docs directory for dangling YAML files (#17341)
There are many YAML manifests sneaking into the `zh/docs/` directory. These files should go to the `zh/examples` directory instead. Having these "garbage" files (not referenced anywhere) creates confusion for the release meister when merging branches: for example, some YAML files found in the 'master' branch are no longer present in the release-1.16 branch, and it is tedious, if possible at all, to resolve all conflicts of this kind during a rebase. This PR cleans the zh/docs directory of all dangling YAML files.
This commit is contained in:
parent 8c5faa68b7
commit 9bf026e951
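As context for how dangling manifests of this kind can be found, the sketch below is only an illustration (not part of this commit); it assumes it is run from the repository root and that `content/zh/docs` is the localization path in question:

```shell
# List YAML manifests under the zh docs tree and report any whose filename is
# not referenced by any other file under content/zh (candidates for removal
# or relocation to zh/examples).
find content/zh/docs -name '*.yaml' | while read -r f; do
  name=$(basename "$f")
  if ! grep -R --exclude="$name" -q "$name" content/zh; then
    echo "dangling: $f"
  fi
done
```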
@@ -78,7 +78,7 @@ A URL can also be specified as a configuration source, which is handy for deploy
还可以使用 URL 作为配置源,便于直接使用已经提交到 Github 上的配置文件进行部署:

```shell
kubectl apply -f https://raw.githubusercontent.com/kubernetes/website/master/content/en/examples/application/nginx/nginx-deployment.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/website/master/content/zh/examples/application/nginx/nginx-deployment.yaml
```

```shell
@@ -1,26 +0,0 @@
apiVersion: v1
kind: Pod
metadata:
  name: with-node-affinity
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/e2e-az-name
            operator: In
            values:
            - e2e-az1
            - e2e-az2
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        preference:
          matchExpressions:
          - key: another-node-label-key
            operator: In
            values:
            - another-node-label-value
  containers:
  - name: with-node-affinity
    image: k8s.gcr.io/pause:2.0
@@ -1,29 +0,0 @@
apiVersion: v1
kind: Pod
metadata:
  name: with-pod-affinity
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: security
            operator: In
            values:
            - S1
        topologyKey: failure-domain.beta.kubernetes.io/zone
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchExpressions:
            - key: security
              operator: In
              values:
              - S2
          topologyKey: kubernetes.io/hostname
  containers:
  - name: with-pod-affinity
    image: k8s.gcr.io/pause:2.0
@@ -1,13 +0,0 @@
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    env: test
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
  nodeSelector:
    disktype: ssd
@@ -1,16 +0,0 @@
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
@@ -8,7 +8,6 @@ redirect_from:
---

`PodSecurityPolicy` 类型的对象能够控制,是否可以向 Pod 发送请求,该 Pod 能够影响被应用到 Pod 和容器的 `SecurityContext`。
查看 [Pod 安全策略建议](https://git.k8s.io/community/contributors/design-proposals/security-context-constraints.md) 获取更多信息。
@@ -143,23 +142,33 @@ _Pod 安全策略_ 由设置和策略组成,它们能够控制 Pod 访问的
Pod 必须基于 PSP 验证每个字段。

<!--
### Create a policy and a pod

## 创建 Pod 安全策略
Define the example PodSecurityPolicy object in a file. This is a policy that
simply prevents the creation of privileged pods.

下面是一个 Pod 安全策略的例子,所有字段的设置都被允许:
{{< codenew file="policy/example-psp.yaml" >}}

{{< code file="psp.yaml" >}}

下载示例文件可以创建该策略,然后执行如下命令:
And create it with kubectl:

```shell
$ kubectl create -f ./psp.yaml
podsecuritypolicy "permissive" created
kubectl-admin create -f example-psp.yaml
```
-->

### 创建一个策略和一个 Pod

在一个文件中定义 PodSecurityPolicy 对象实例。这里的策略只是用来禁止创建有特权
要求的 Pods。

{{< codenew file="policy/example-psp.yaml" >}}

使用 kubectl 执行创建操作:

```shell
kubectl-admin create -f example-psp.yaml
```

## 获取 Pod 安全策略列表
@@ -14,7 +14,6 @@ redirect_from:
{{< toc >}}

## Cron Job 是什么?

_Cron Job_ 管理基于时间的 [Job](/docs/concepts/jobs/run-to-completion-finite-workloads/),即:
@@ -43,21 +42,6 @@ _Cron Job_ 管理基于时间的 [Job](/docs/concepts/jobs/run-to-completion-fin

当使用的 Kubernetes 集群,版本 >= 1.4(对 ScheduledJob),>= 1.5(对 CronJob),当启动 API Server(参考 [为集群开启或关闭 API 版本](/docs/admin/cluster-management/#turn-on-or-off-an-api-version-for-your-cluster) 获取更多信息)时,通过传递选项 `--runtime-config=batch/v2alpha1=true` 可以开启 batch/v2alpha1 API。

## 创建 Cron Job

下面是一个 Cron Job 的例子。它会每分钟运行一个 Job,打印出当前时间并输出问候语 hello。

{% include code.html language="yaml" file="cronjob.yaml" ghlink="/docs/concepts/workloads/controllers/cronjob.yaml" %}

下载并运行该示例 Cron Job,然后执行如下命令:

```shell
$ kubectl create -f ./cronjob.yaml
cronjob "hello" created
```

可选地,使用 `kubectl run` 创建一个 Cron Job,不需要写完整的配置:

```shell
@@ -65,8 +49,6 @@ $ kubectl run hello --schedule="*/1 * * * *" --restart=OnFailure --image=busybox
cronjob "hello" created
```

创建该 Cron Job 之后,通过如下命令获取它的状态信息:

```shell
@@ -1,42 +0,0 @@
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-elasticsearch
  namespace: kube-system
  labels:
    k8s-app: fluentd-logging
spec:
  selector:
    matchLabels:
      name: fluentd-elasticsearch
  template:
    metadata:
      labels:
        name: fluentd-elasticsearch
    spec:
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - name: fluentd-elasticsearch
        image: quay.io/fluentd_elasticsearch/fluentd:v2.5.2
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
@@ -38,13 +38,12 @@ The following are typical use cases for Deployments:

Here is an example Deployment. It creates a ReplicaSet to bring up three nginx Pods.

{{< code file="nginx-deployment.yaml" >}}
{{< codenew file="controllers/nginx-deployment.yaml" >}}

Run the example by downloading the example file and then running this command:

```shell
$ kubectl create -f docs/user-guide/nginx-deployment.yaml --record
deployment "nginx-deployment" created
kubectl apply -f https://k8s.io/examples/controllers/nginx-deployment.yaml
```

Setting the kubectl flag `--record` to `true` allows you to record current command in the annotations of
@@ -361,7 +360,7 @@ First, check the revisions of this deployment:
$ kubectl rollout history deployment/nginx-deployment
deployments "nginx-deployment"
REVISION    CHANGE-CAUSE
1           kubectl create -f docs/user-guide/nginx-deployment.yaml --record
1           kubectl apply -f https://k8s.io/examples/controllers/nginx-deployment.yaml
2           kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1
3           kubectl set image deployment/nginx-deployment nginx=nginx:1.91
```
@@ -1,45 +0,0 @@
apiVersion: extensions/v1beta1
kind: ReplicaSet
metadata:
  name: frontend
  # these labels can be applied automatically
  # from the labels in the pod template if not set
  # labels:
  #   app: guestbook
  #   tier: frontend
spec:
  # this replicas value is default
  # modify it according to your case
  replicas: 3
  # selector can be applied automatically
  # from the labels in the pod template if not set,
  # but we are specifying the selector here to
  # demonstrate its usage.
  selector:
    matchLabels:
      tier: frontend
    matchExpressions:
      - {key: tier, operator: In, values: [frontend]}
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google_samples/gb-frontend:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below.
          # value: env
        ports:
        - containerPort: 80
@@ -39,9 +39,7 @@ Kubernetes 垃圾收集器的角色是删除指定的对象,这些对象曾经

这里有一个配置文件,表示一个具有 3 个 Pod 的 ReplicaSet:

{{< code file="my-repset.yaml" >}}
{{< codenew file="controllers/replicaset.yaml" >}}

如果创建该 ReplicaSet,然后查看 Pod 的 metadata 字段,能够看到 OwnerReferences 字段:
@@ -1,11 +0,0 @@
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: frontend-scaler
spec:
  scaleTargetRef:
    kind: ReplicaSet
    name: frontend
  minReplicas: 3
  maxReplicas: 10
  targetCPUUtilizationPercentage: 50
@@ -1,15 +0,0 @@
apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  template:
    metadata:
      name: pi
    spec:
      containers:
      - name: pi
        image: perl
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never
@@ -1,19 +0,0 @@
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    app: nginx
  template:
    metadata:
      name: nginx
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
@@ -11,13 +11,10 @@ content_template: templates/task

{{% capture overview %}}

本文旨在说明如何让一个 Pod 内的两个容器使用一个卷(Volume)进行通信。

{{% /capture %}}

{{% capture prerequisites %}}

{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
@@ -30,36 +27,24 @@ content_template: templates/task

## 创建一个包含两个容器的 Pod

在这个练习中,你会创建一个包含两个容器的 Pod。两个容器共享一个卷用于他们之间的通信。
Pod 的配置文件如下:

{{< code file="two-container-pod.yaml" >}}
{{< codenew file="pods/two-container-pod.yaml" >}}

在配置文件中,你可以看到 Pod 有一个共享卷,名为 `shared-data`。

配置文件中的第一个容器运行了一个 nginx 服务器。共享卷的挂载路径是 `/usr/share/nginx/html`。
第二个容器是基于 debian 镜像的,有一个 `/pod-data` 的挂载路径。第二个容器运行了下面的命令然后终止。

echo Hello from the debian container > /pod-data/index.html

注意,第二个容器在 nginx 服务器的根目录下写了 `index.html` 文件。

创建一个包含两个容器的 Pod:

kubectl create -f https://k8s.io/docs/tasks/access-application-cluster/two-container-pod.yaml
kubectl apply -f https://k8s.io/examples/pods/two-container-pod.yaml

查看 Pod 和容器的信息:
@@ -44,19 +44,15 @@ content_template: templates/tutorial

### 使用部署对象(Deployment)创建后端

后端是一个简单的 hello 欢迎微服务应用。这是后端应用的 Deployment 配置文件:

{{< code file="hello.yaml" >}}
{{< codenew file="service/access/hello.yaml" >}}

创建后端 Deployment:

```shell
kubectl apply -f https://k8s.io/examples/service/access/hello.yaml
```
kubectl create -f https://k8s.io/docs/tasks/access-application-cluster/hello.yaml
```

查看后端的 Deployment 信息:
@ -64,7 +60,6 @@ kubectl create -f https://k8s.io/docs/tasks/access-application-cluster/hello.yam
|
|||
kubectl describe deployment hello
|
||||
```
|
||||
|
||||
|
||||
输出类似于:
|
||||
|
||||
```
|
||||
|
|
@@ -98,8 +93,7 @@ Events:

首先,浏览 Service 的配置文件:

{{< code file="hello-service.yaml" >}}
{{< codenew file="service/access/hello-service.yaml" >}}

配置文件中,你可以看到 Service 将流量路由到包含 `app: hello` 和 `tier: backend` 标签的 Pod。
@@ -107,46 +101,33 @@ Events:

创建 `hello` Service:

```shell
kubectl apply -f https://k8s.io/examples/service/access/hello-service.yaml
```
kubectl create -f https://k8s.io/docs/tasks/access-application-cluster/hello-service.yaml
```

此时,你已经有了一个在运行的后端 Deployment,你也有了一个 Service 用于路由网络流量。

### 创建前端应用

既然你已经有了后端应用,你可以创建一个前端应用连接到后端。前端应用通过 DNS 名连接到后端的工作 Pods。
DNS 名是 "hello",也就是 Service 配置文件中 `name` 字段的值。

前端 Deployment 中的 Pods 运行一个 nginx 镜像,这个已经配置好镜像去寻找后端的 hello Service。
只是 nginx 的配置文件:

{{< code file="frontend/frontend.conf" >}}

{{< codenew file="service/access/frontend.conf" >}}

与后端类似,前端用包含一个 Deployment 和一个 Service。Service 的配置文件包含了 `type: LoadBalancer`,
也就是说,Service 会使用你的云服务商的默认负载均衡设备。

{{< code file="frontend.yaml" >}}
{{< codenew file="service/access/frontend.yaml" >}}

创建前端 Deployment 和 Service:

```shell
kubectl apply -f https://k8s.io/examples/service/access/frontend.yaml
```
kubectl create -f https://k8s.io/docs/tasks/access-application-cluster/frontend.yaml
```

通过输出确认两个资源都已经被创建:
@@ -155,27 +136,17 @@ deployment "frontend" created
service "frontend" created
```

**注意**:这个 nginx 配置文件是被打包在 [容器镜像](/docs/tasks/access-application-cluster/frontend/Dockerfile) 里的。
更好的方法是使用 [ConfigMap](/docs/tasks/configure-pod-container/configure-pod-configmap/),这样的话你可以更轻易地更改配置。

### 与前端 Service 交互

一旦你创建了 LoadBalancer 类型的 Service,你可以使用这条命令查看外部 IP:

```
kubectl get service frontend
```

外部 IP 字段的生成可能需要一些时间。如果是这种情况,外部 IP 会显示为 `<pending>`。

```
@ -1,34 +0,0 @@
|
|||
kind: Service
|
||||
apiVersion: v1
|
||||
metadata:
|
||||
name: frontend
|
||||
spec:
|
||||
selector:
|
||||
app: hello
|
||||
tier: frontend
|
||||
ports:
|
||||
- protocol: "TCP"
|
||||
port: 80
|
||||
targetPort: 80
|
||||
type: LoadBalancer
|
||||
---
|
||||
apiVersion: apps/v1beta1
|
||||
kind: Deployment
|
||||
metadata:
|
||||
name: frontend
|
||||
spec:
|
||||
replicas: 1
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
app: hello
|
||||
tier: frontend
|
||||
track: stable
|
||||
spec:
|
||||
containers:
|
||||
- name: nginx
|
||||
image: "gcr.io/google-samples/hello-frontend:1.0"
|
||||
lifecycle:
|
||||
preStop:
|
||||
exec:
|
||||
command: ["/usr/sbin/nginx","-s","quit"]
|
||||
|
|
@ -1,11 +0,0 @@
|
|||
upstream hello {
|
||||
server hello;
|
||||
}
|
||||
|
||||
server {
|
||||
listen 80;
|
||||
|
||||
location / {
|
||||
proxy_pass http://hello;
|
||||
}
|
||||
}
|
||||
|
|
@ -1,12 +0,0 @@
|
|||
kind: Service
|
||||
apiVersion: v1
|
||||
metadata:
|
||||
name: hello
|
||||
spec:
|
||||
selector:
|
||||
app: hello
|
||||
tier: backend
|
||||
ports:
|
||||
- protocol: TCP
|
||||
port: 80
|
||||
targetPort: http
|
||||
|
|
@ -1,19 +0,0 @@
|
|||
apiVersion: apps/v1beta1
|
||||
kind: Deployment
|
||||
metadata:
|
||||
name: hello
|
||||
spec:
|
||||
replicas: 7
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
app: hello
|
||||
tier: backend
|
||||
track: stable
|
||||
spec:
|
||||
containers:
|
||||
- name: hello
|
||||
image: "gcr.io/google-samples/hello-go-gke:1.0"
|
||||
ports:
|
||||
- name: http
|
||||
containerPort: 80
|
||||
|
|
@ -1,33 +0,0 @@
|
|||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
labels:
|
||||
name: redis
|
||||
redis-sentinel: "true"
|
||||
role: master
|
||||
name: redis-master
|
||||
spec:
|
||||
containers:
|
||||
- name: master
|
||||
image: k8s.gcr.io/redis:v1
|
||||
env:
|
||||
- name: MASTER
|
||||
value: "true"
|
||||
ports:
|
||||
- containerPort: 6379
|
||||
resources:
|
||||
limits:
|
||||
cpu: "0.1"
|
||||
volumeMounts:
|
||||
- mountPath: /redis-master-data
|
||||
name: data
|
||||
- name: sentinel
|
||||
image: kubernetes/redis:v1
|
||||
env:
|
||||
- name: SENTINEL
|
||||
value: "true"
|
||||
ports:
|
||||
- containerPort: 26379
|
||||
volumes:
|
||||
- name: data
|
||||
emptyDir: {}
|
||||
|
|
@ -1,412 +0,0 @@
|
|||
---
|
||||
approvers:
|
||||
- derekwaynecarr
|
||||
- janetkuo
|
||||
|
||||
title: 应用资源配额和限额
|
||||
redirect_from:
|
||||
- "/docs/admin/resourcequota/walkthrough/"
|
||||
- "/docs/admin/resourcequota/walkthrough.html"
|
||||
- "/docs/tasks/configure-pod-container/apply-resource-quota-limit/"
|
||||
- "/docs/tasks/configure-pod-container/apply-resource-quota-limit.html"
|
||||
content_template: templates/task
|
||||
---
|
||||
|
||||
{{% capture overview %}}
|
||||
|
||||
|
||||
本示例展示了在一个 namespace 中控制资源用量的典型设置。
|
||||
|
||||
|
||||
本文展示了以下资源的使用: [Namespace](/docs/admin/namespaces), [ResourceQuota](/docs/concepts/policy/resource-quotas/) 和 [LimitRange](/docs/tasks/configure-pod-container/limit-range/)。
|
||||
|
||||
{{% /capture %}}
|
||||
|
||||
{{% capture prerequisites %}}
|
||||
|
||||
* {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
|
||||
|
||||
{{% /capture %}}
|
||||
|
||||
{{% capture steps %}}
|
||||
|
||||
## 场景
|
||||
|
||||
|
||||
集群管理员正在操作一个代表用户群体的集群,他希望控制一个特定 namespace 中可以被使用的资源总量,以达到促进对集群的公平共享及控制成本的目的。
|
||||
|
||||
|
||||
集群管理员有以下目标:
|
||||
|
||||
|
||||
* 限制运行中 pods 使用的计算资源数量
|
||||
* 限制 persistent volume claims 数量以控制对存储的访问
|
||||
* 限制 load balancers 数量以控制成本
|
||||
* 防止使用 node ports 以保留稀缺资源
|
||||
* 提供默认计算资源请求以实现更好的调度决策
|
||||
|
||||
|
||||
## 创建 namespace
|
||||
|
||||
|
||||
本示例将在一个自定义的 namespace 中运行,以展示相关概念。
|
||||
|
||||
|
||||
让我们创建一个叫做 quota-example 的新 namespace:
|
||||
|
||||
```shell
|
||||
$ kubectl create namespace quota-example
|
||||
namespace "quota-example" created
|
||||
$ kubectl get namespaces
|
||||
NAME STATUS AGE
|
||||
default Active 2m
|
||||
kube-system Active 2m
|
||||
quota-example Active 39s
|
||||
```
|
||||
|
||||
|
||||
## 应用 object-count 配额到 namespace
|
||||
|
||||
|
||||
集群管理员想要控制下列资源:
|
||||
|
||||
* persistent volume claims
|
||||
* load balancers
|
||||
* node ports
|
||||
|
||||
|
||||
我们来创建一个简单的配额,用于控制这个 namespace 中那些资源类型的对象数量。
|
||||
|
||||
```shell
|
||||
$ kubectl create -f https://k8s.io/docs/tasks/configure-pod-container/rq-object-counts.yaml --namespace=quota-example
|
||||
resourcequota "object-counts" created
|
||||
```
|
||||
|
||||
|
||||
配额系统将察觉到有一个配额被创建,并且会计算 namespace 中的资源消耗量作为响应。这应该会很快发生。
|
||||
|
||||
|
||||
让我们显示一下配额来观察这个 namespace 中当前被消耗的资源:
|
||||
|
||||
```shell
|
||||
$ kubectl describe quota object-counts --namespace=quota-example
|
||||
Name: object-counts
|
||||
Namespace: quota-example
|
||||
Resource Used Hard
|
||||
-------- ---- ----
|
||||
persistentvolumeclaims 0 2
|
||||
services.loadbalancers 0 2
|
||||
services.nodeports 0 0
|
||||
```
|
||||
|
||||
|
||||
配额系统现在将阻止用户创建比各个资源指定数量更多的资源。
|
||||
|
||||
|
||||
|
||||
## 应用计算资源配额到 namespace
|
||||
|
||||
|
||||
为了限制这个 namespace 可以被使用的计算资源数量,让我们创建一个跟踪计算资源的配额。
|
||||
|
||||
```shell
|
||||
$ kubectl create -f https://k8s.io/docs/tasks/configure-pod-container/rq-compute-resources.yaml --namespace=quota-example
|
||||
resourcequota "compute-resources" created
|
||||
```
|
||||
|
||||
|
||||
让我们显示一下配额来观察这个 namespace 中当前被消耗的资源:
|
||||
|
||||
```shell
|
||||
$ kubectl describe quota compute-resources --namespace=quota-example
|
||||
Name: compute-resources
|
||||
Namespace: quota-example
|
||||
Resource Used Hard
|
||||
-------- ---- ----
|
||||
limits.cpu 0 2
|
||||
limits.memory 0 2Gi
|
||||
pods 0 4
|
||||
requests.cpu 0 1
|
||||
requests.memory 0 1Gi
|
||||
```
|
||||
|
||||
|
||||
配额系统现在会防止 namespace 拥有超过 4 个没有终止的 pods。此外它还将强制 pod 中的每个容器配置一个 `request` 并为 `cpu` 和 `memory` 定义 `limit`。
|
||||
|
||||
|
||||
## 应用默认资源请求和限制
|
||||
|
||||
|
||||
Pod 的作者很少为它们的 pods 指定资源请求和限制。
|
||||
|
||||
|
||||
既然我们对项目应用了配额,我们来看一下当终端用户通过创建一个没有 cpu 和 内存限制的 pod 时会发生什么。这通过在 pod 里创建一个 nginx 容器实现。
|
||||
|
||||
|
||||
作为演示,让我们来创建一个运行 nginx 的 deployment:
|
||||
|
||||
```shell
|
||||
$ kubectl run nginx --image=nginx --replicas=1 --namespace=quota-example
|
||||
deployment "nginx" created
|
||||
```
|
||||
|
||||
|
||||
现在我们来看一下创建的 pods。
|
||||
|
||||
```shell
|
||||
$ kubectl get pods --namespace=quota-example
|
||||
```
|
||||
|
||||
|
||||
发生了什么?我一个 pods 都没有!让我们 describe 这个 deployment 来看看发生了什么。
|
||||
|
||||
```shell
|
||||
$ kubectl describe deployment nginx --namespace=quota-example
|
||||
Name: nginx
|
||||
Namespace: quota-example
|
||||
CreationTimestamp: Mon, 06 Jun 2016 16:11:37 -0400
|
||||
Labels: run=nginx
|
||||
Selector: run=nginx
|
||||
Replicas: 0 updated | 1 total | 0 available | 1 unavailable
|
||||
StrategyType: RollingUpdate
|
||||
MinReadySeconds: 0
|
||||
RollingUpdateStrategy: 1 max unavailable, 1 max surge
|
||||
OldReplicaSets: <none>
|
||||
NewReplicaSet: nginx-3137573019 (0/1 replicas created)
|
||||
...
|
||||
```
|
||||
|
||||
|
||||
Deployment 创建了一个对应的 replica set 并尝试按照大小来创建一个 pod。
|
||||
|
||||
|
||||
让我们看看 replica set 的更多细节。
|
||||
|
||||
```shell
|
||||
$ kubectl describe rs nginx-3137573019 --namespace=quota-example
|
||||
Name: nginx-3137573019
|
||||
Namespace: quota-example
|
||||
Image(s): nginx
|
||||
Selector: pod-template-hash=3137573019,run=nginx
|
||||
Labels: pod-template-hash=3137573019
|
||||
run=nginx
|
||||
Replicas: 0 current / 1 desired
|
||||
Pods Status: 0 Running / 0 Waiting / 0 Succeeded / 0 Failed
|
||||
No volumes.
|
||||
Events:
|
||||
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
|
||||
--------- -------- ----- ---- ------------- -------- ------ -------
|
||||
4m 7s 11 {replicaset-controller } Warning FailedCreate Error creating: pods "nginx-3137573019-" is forbidden: Failed quota: compute-resources: must specify limits.cpu,limits.memory,requests.cpu,requests.memory
|
||||
```
|
||||
|
||||
|
||||
Kubernetes API server 拒绝了 replica set 创建一个 pod 的请求,因为我们的 pods 没有为 `cpu` 和 `memory` 指定 `requests` 或 `limits`。
|
||||
|
||||
|
||||
因此,我们来为 pod 指定它可以使用的 `cpu` 和 `memory` 默认数量。
|
||||
|
||||
```shell
|
||||
$ kubectl create -f https://k8s.io/docs/tasks/configure-pod-container/rq-limits.yaml --namespace=quota-example
|
||||
limitrange "limits" created
|
||||
$ kubectl describe limits limits --namespace=quota-example
|
||||
Name: limits
|
||||
Namespace: quota-example
|
||||
Type Resource Min Max Default Request Default Limit Max Limit/Request Ratio
|
||||
---- -------- --- --- --------------- ------------- -----------------------
|
||||
Container memory - - 256Mi 512Mi -
|
||||
Container cpu - - 100m 200m -
|
||||
```
|
||||
|
||||
|
||||
如果 Kubernetes API server 发现一个 namespace 中有一个创建 pod 的请求,并且 pod 中的容器没有设置任何计算资源请求时,作为准入控制的一部分,一个默认的 request 和 limit 将会被应用。
|
||||
|
||||
|
||||
在本例中,创建的每个 pod 都将拥有如下的计算资源限制:
|
||||
|
||||
```shell
|
||||
$ kubectl run nginx \
|
||||
--image=nginx \
|
||||
--replicas=1 \
|
||||
--requests=cpu=100m,memory=256Mi \
|
||||
--limits=cpu=200m,memory=512Mi \
|
||||
--namespace=quota-example
|
||||
```
|
||||
|
||||
|
||||
由于已经为我们的 namespace 申请了默认的计算资源,我们的 replica set 应该能够创建它的 pods 了。
|
||||
|
||||
```shell
|
||||
$ kubectl get pods --namespace=quota-example
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
nginx-3137573019-fvrig 1/1 Running 0 6m
|
||||
```
|
||||
|
||||
|
||||
而且如果打印出我们在这个 namespace 中的配额使用情况:
|
||||
|
||||
```shell
|
||||
$ kubectl describe quota --namespace=quota-example
|
||||
Name: compute-resources
|
||||
Namespace: quota-example
|
||||
Resource Used Hard
|
||||
-------- ---- ----
|
||||
limits.cpu 200m 2
|
||||
limits.memory 512Mi 2Gi
|
||||
pods 1 4
|
||||
requests.cpu 100m 1
|
||||
requests.memory 256Mi 1Gi
|
||||
|
||||
|
||||
Name: object-counts
|
||||
Namespace: quota-example
|
||||
Resource Used Hard
|
||||
-------- ---- ----
|
||||
persistentvolumeclaims 0 2
|
||||
services.loadbalancers 0 2
|
||||
services.nodeports 0 0
|
||||
```
|
||||
|
||||
|
||||
就像你看到的,创建的 pod 消耗了明确的计算资源量,并且正被 Kubernetes 正确的追踪着。
|
||||
|
||||
|
||||
## 高级配额 scopes
|
||||
|
||||
|
||||
让我们想象一下如果你不希望为你的 namespace 指定默认计算资源使用量。
|
||||
|
||||
|
||||
作为替换,你希望用户在它们的 namespace 中运行指定数量的 `BestEffort` pods,以从宽松的计算资源中获得好处。然后要求用户为需要更高质量服务的 pods 配置一个显式的资源请求。
|
||||
|
||||
|
||||
让我们新建一个拥有两个配额的 namespace 来演示这种行为:
|
||||
|
||||
```shell
|
||||
$ kubectl create namespace quota-scopes
|
||||
namespace "quota-scopes" created
|
||||
$ kubectl create -f https://k8s.io/docs/tasks/configure-pod-container/rq-best-effort.yaml --namespace=quota-scopes
|
||||
resourcequota "best-effort" created
|
||||
$ kubectl create -f https://k8s.io/docs/tasks/configure-pod-container/rq-not-best-effort.yaml --namespace=quota-scopes
|
||||
resourcequota "not-best-effort" created
|
||||
$ kubectl describe quota --namespace=quota-scopes
|
||||
Name: best-effort
|
||||
Namespace: quota-scopes
|
||||
Scopes: BestEffort
|
||||
* Matches all pods that have best effort quality of service.
|
||||
Resource Used Hard
|
||||
-------- ---- ----
|
||||
pods 0 10
|
||||
|
||||
|
||||
Name: not-best-effort
|
||||
Namespace: quota-scopes
|
||||
Scopes: NotBestEffort
|
||||
* Matches all pods that do not have best effort quality of service.
|
||||
Resource Used Hard
|
||||
-------- ---- ----
|
||||
limits.cpu 0 2
|
||||
limits.memory 0 2Gi
|
||||
pods 0 4
|
||||
requests.cpu 0 1
|
||||
requests.memory 0 1Gi
|
||||
```
|
||||
|
||||
|
||||
在这种场景下,一个没有配置计算资源请求的 pod 将会被 `best-effort` 配额跟踪。
|
||||
|
||||
|
||||
而配置了计算资源请求的则会被 `not-best-effort` 配额追踪。
|
||||
|
||||
|
||||
让我们创建两个 deployments 作为演示:
|
||||
|
||||
```shell
|
||||
$ kubectl run best-effort-nginx --image=nginx --replicas=8 --namespace=quota-scopes
|
||||
deployment "best-effort-nginx" created
|
||||
$ kubectl run not-best-effort-nginx \
|
||||
--image=nginx \
|
||||
--replicas=2 \
|
||||
--requests=cpu=100m,memory=256Mi \
|
||||
--limits=cpu=200m,memory=512Mi \
|
||||
--namespace=quota-scopes
|
||||
deployment "not-best-effort-nginx" created
|
||||
```
|
||||
|
||||
|
||||
虽然没有指定默认的 limits,`best-effort-nginx` deployment 还是会创建 8 个 pods。这是由于它被 `best-effort` 配额追踪,而 `not-best-effort` 配额将忽略它。`not-best-effort` 配额将追踪 `not-best-effort-nginx` deployment,因为它创建的 pods 具有 `Burstable` 服务质量。
|
||||
|
||||
|
||||
让我们列出 namespace 中的 pods:
|
||||
|
||||
```shell
|
||||
$ kubectl get pods --namespace=quota-scopes
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
best-effort-nginx-3488455095-2qb41 1/1 Running 0 51s
|
||||
best-effort-nginx-3488455095-3go7n 1/1 Running 0 51s
|
||||
best-effort-nginx-3488455095-9o2xg 1/1 Running 0 51s
|
||||
best-effort-nginx-3488455095-eyg40 1/1 Running 0 51s
|
||||
best-effort-nginx-3488455095-gcs3v 1/1 Running 0 51s
|
||||
best-effort-nginx-3488455095-rq8p1 1/1 Running 0 51s
|
||||
best-effort-nginx-3488455095-udhhd 1/1 Running 0 51s
|
||||
best-effort-nginx-3488455095-zmk12 1/1 Running 0 51s
|
||||
not-best-effort-nginx-2204666826-7sl61 1/1 Running 0 23s
|
||||
not-best-effort-nginx-2204666826-ke746 1/1 Running 0 23s
|
||||
```
|
||||
|
||||
|
||||
如你看到的,所有 10 个 pods 都已经被准许创建。
|
||||
|
||||
|
||||
让我们 describe 这个 namespace 当前的配额使用情况:
|
||||
|
||||
```shell
|
||||
$ kubectl describe quota --namespace=quota-scopes
|
||||
Name: best-effort
|
||||
Namespace: quota-scopes
|
||||
Scopes: BestEffort
|
||||
* Matches all pods that have best effort quality of service.
|
||||
Resource Used Hard
|
||||
-------- ---- ----
|
||||
pods 8 10
|
||||
|
||||
|
||||
Name: not-best-effort
|
||||
Namespace: quota-scopes
|
||||
Scopes: NotBestEffort
|
||||
* Matches all pods that do not have best effort quality of service.
|
||||
Resource Used Hard
|
||||
-------- ---- ----
|
||||
limits.cpu 400m 2
|
||||
limits.memory 1Gi 2Gi
|
||||
pods 2 4
|
||||
requests.cpu 200m 1
|
||||
requests.memory 512Mi 1Gi
|
||||
```
|
||||
|
||||
|
||||
如你看到的,`best-effort` 配额追踪了我们在 `best-effort-nginx` deployment 中创建的 8 个 pods 的资源用量,而 `not-best-effort` 配额追踪了我们在 `not-best-effort-nginx` deployment 中创的两个 pods 的用量。
|
||||
|
||||
|
||||
Scopes 提供了一种来对任何配额文档追踪的资源集合进行细分的机制,给操作人员部署和追踪资源消耗带来更大的灵活性。
|
||||
|
||||
|
||||
除 `BestEffort` 和 `NotBestEffort` scopes 之外,还有用于限制长时间运行和有时限 pods 的scopes。`Terminating` scope 将匹配任何 `spec.activeDeadlineSeconds` 不为 `nil` 的 pod。`NotTerminating` scope 将匹配任何 `spec.activeDeadlineSeconds` 为 `nil` 的 pod。这些 scopes 允许你基于 pods 在你集群中 node 上的预期持久程度来为它们指定配额。
|
||||
|
||||
{{% /capture %}}
|
||||
|
||||
{{% capture discussion %}}
|
||||
|
||||
## 总结
|
||||
|
||||
|
||||
消耗节点 cpu 和 memory 资源的动作受到 namespace 配额定义的硬性配额限制的管制。
|
||||
|
||||
|
||||
任意消耗那些资源的动作能够被调整,或者获得一个 namespace 级别的默认值以符合你最终的目标。
|
||||
|
||||
|
||||
可以基于服务质量或者在你集群中节点上的预期持久程度来分配配额。
|
||||
|
||||
{{% /capture %}}
|
||||
|
||||
|
||||
|
|
@ -1,13 +0,0 @@
|
|||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: constraints-cpu-demo-2
|
||||
spec:
|
||||
containers:
|
||||
- name: constraints-cpu-demo-2-ctr
|
||||
image: nginx
|
||||
resources:
|
||||
limits:
|
||||
cpu: "1.5"
|
||||
requests:
|
||||
cpu: "500m"
|
||||
|
|
@ -1,13 +0,0 @@
|
|||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: constraints-cpu-demo-4
|
||||
spec:
|
||||
containers:
|
||||
- name: constraints-cpu-demo-4-ctr
|
||||
image: nginx
|
||||
resources:
|
||||
limits:
|
||||
cpu: "800m"
|
||||
requests:
|
||||
cpu: "100m"
|
||||
|
|
@ -1,8 +0,0 @@
|
|||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: constraints-cpu-demo-4
|
||||
spec:
|
||||
containers:
|
||||
- name: constraints-cpu-demo-4-ctr
|
||||
image: vish/stress
|
||||
|
|
@ -1,13 +0,0 @@
|
|||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: constraints-cpu-demo
|
||||
spec:
|
||||
containers:
|
||||
- name: constraints-cpu-demo-ctr
|
||||
image: nginx
|
||||
resources:
|
||||
limits:
|
||||
cpu: "800m"
|
||||
requests:
|
||||
cpu: "500m"
|
||||
|
|
@ -1,11 +0,0 @@
|
|||
apiVersion: v1
|
||||
kind: LimitRange
|
||||
metadata:
|
||||
name: cpu-min-max-demo-lr
|
||||
spec:
|
||||
limits:
|
||||
- max:
|
||||
cpu: "800m"
|
||||
min:
|
||||
cpu: "200m"
|
||||
type: Container
|
||||
|
|
@ -1,11 +0,0 @@
|
|||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: default-cpu-demo-2
|
||||
spec:
|
||||
containers:
|
||||
- name: default-cpu-demo-2-ctr
|
||||
image: nginx
|
||||
resources:
|
||||
limits:
|
||||
cpu: "1"
|
||||
|
|
@ -1,11 +0,0 @@
|
|||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: default-cpu-demo-3
|
||||
spec:
|
||||
containers:
|
||||
- name: default-cpu-demo-3-ctr
|
||||
image: nginx
|
||||
resources:
|
||||
requests:
|
||||
cpu: "0.75"
|
||||
|
|
@ -1,8 +0,0 @@
|
|||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: default-cpu-demo
|
||||
spec:
|
||||
containers:
|
||||
- name: default-cpu-demo-ctr
|
||||
image: nginx
|
||||
|
|
@ -1,11 +0,0 @@
|
|||
apiVersion: v1
|
||||
kind: LimitRange
|
||||
metadata:
|
||||
name: cpu-limit-range
|
||||
spec:
|
||||
limits:
|
||||
- default:
|
||||
cpu: 1
|
||||
defaultRequest:
|
||||
cpu: 0.5
|
||||
type: Container
|
||||
|
|
@ -1,33 +0,0 @@
|
|||
apiVersion: apps/v1
|
||||
kind: Deployment
|
||||
metadata:
|
||||
name: kube-dns-autoscaler
|
||||
namespace: kube-system
|
||||
labels:
|
||||
k8s-app: kube-dns-autoscaler
|
||||
spec:
|
||||
selector:
|
||||
matchLabels:
|
||||
k8s-app: kube-dns-autoscaler
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
k8s-app: kube-dns-autoscaler
|
||||
spec:
|
||||
containers:
|
||||
- name: autoscaler
|
||||
image: k8s.gcr.io/cluster-proportional-autoscaler-amd64:1.1.1
|
||||
resources:
|
||||
requests:
|
||||
cpu: "20m"
|
||||
memory: "10Mi"
|
||||
command:
|
||||
- /cluster-proportional-autoscaler
|
||||
- --namespace=kube-system
|
||||
- --configmap=kube-dns-autoscaler
|
||||
- --target=<SCALE_TARGET>
|
||||
# When cluster is using large nodes(with more cores), "coresPerReplica" should dominate.
|
||||
# If using small nodes, "nodesPerReplica" should dominate.
|
||||
- --default-params={"linear":{"coresPerReplica":256,"nodesPerReplica":16,"min":1}}
|
||||
- --logtostderr=true
|
||||
- --v=2
|
||||
|
|
@@ -1,5 +1,7 @@
---
title: 配置命名空间下pod总数
content_template: templates/task
weight: 60
---
@@ -32,12 +34,12 @@ kubectl create namespace quota-pod-example

下面是一个资源配额的配置文件:

{{< code file="quota-pod.yaml" >}}
{{< codenew file="admin/resource/quota-pod.yaml" >}}

创建这个资源配额:

```shell
kubectl create -f https://k8s.io/docs/tasks/administer-cluster/quota-pod.yaml --namespace=quota-pod-example
kubectl apply -f https://k8s.io/examples/admin/resource/quota-pod.yaml --namespace=quota-pod-example
```

查看资源配额的详细信息:
@@ -61,14 +63,14 @@ status:

下面是一个Deployment的配置文件:

{{< code file="quota-pod-deployment.yaml" >}}
{{< codenew file="admin/resource/quota-pod-deployment.yaml" >}}

在配置文件中, `replicas: 3` 告诉kubernetes尝试创建三个pods,且运行相同的应用。

创建这个Deployment:

```shell
kubectl create -f https://k8s.io/docs/tasks/administer-cluster/quota-pod-deployment.yaml --namespace=quota-pod-example
kubectl apply -f https://k8s.io/examples/admin/resource/quota-pod-deployment.yaml --namespace=quota-pod-example
```

查看Deployment的详细信息:
@ -1,13 +0,0 @@
|
|||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: constraints-mem-demo-2
|
||||
spec:
|
||||
containers:
|
||||
- name: constraints-mem-demo-2-ctr
|
||||
image: nginx
|
||||
resources:
|
||||
limits:
|
||||
memory: "1.5Gi"
|
||||
requests:
|
||||
memory: "800Mi"
|
||||
|
|
@ -1,13 +0,0 @@
|
|||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: constraints-mem-demo-3
|
||||
spec:
|
||||
containers:
|
||||
- name: constraints-mem-demo-3-ctr
|
||||
image: nginx
|
||||
resources:
|
||||
limits:
|
||||
memory: "800Mi"
|
||||
requests:
|
||||
memory: "100Mi"
|
||||
|
|
@ -1,9 +0,0 @@
|
|||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: constraints-mem-demo-4
|
||||
spec:
|
||||
containers:
|
||||
- name: constraints-mem-demo-4-ctr
|
||||
image: nginx
|
||||
|
||||
|
|
@ -1,13 +0,0 @@
|
|||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: constraints-mem-demo
|
||||
spec:
|
||||
containers:
|
||||
- name: constraints-mem-demo-ctr
|
||||
image: nginx
|
||||
resources:
|
||||
limits:
|
||||
memory: "800Mi"
|
||||
requests:
|
||||
memory: "600Mi"
|
||||
|
|
@ -1,11 +0,0 @@
|
|||
apiVersion: v1
|
||||
kind: LimitRange
|
||||
metadata:
|
||||
name: mem-min-max-demo-lr
|
||||
spec:
|
||||
limits:
|
||||
- max:
|
||||
memory: 1Gi
|
||||
min:
|
||||
memory: 500Mi
|
||||
type: Container
|
||||
|
|
@ -1,11 +0,0 @@
|
|||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: default-mem-demo-2
|
||||
spec:
|
||||
containers:
|
||||
- name: defalt-mem-demo-2-ctr
|
||||
image: nginx
|
||||
resources:
|
||||
limits:
|
||||
memory: "1Gi"
|
||||
|
|
@ -1,11 +0,0 @@
|
|||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: default-mem-demo-3
|
||||
spec:
|
||||
containers:
|
||||
- name: default-mem-demo-3-ctr
|
||||
image: nginx
|
||||
resources:
|
||||
requests:
|
||||
memory: "128Mi"
|
||||
|
|
@ -1,8 +0,0 @@
|
|||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: default-mem-demo
|
||||
spec:
|
||||
containers:
|
||||
- name: default-mem-demo-ctr
|
||||
image: nginx
|
||||
|
|
@ -1,11 +0,0 @@
|
|||
apiVersion: v1
|
||||
kind: LimitRange
|
||||
metadata:
|
||||
name: mem-limit-range
|
||||
spec:
|
||||
limits:
|
||||
- default:
|
||||
memory: 512Mi
|
||||
defaultRequest:
|
||||
memory: 256Mi
|
||||
type: Container
|
||||
|
|
@ -1,43 +0,0 @@
|
|||
apiVersion: apps/v1beta1
|
||||
kind: Deployment
|
||||
metadata:
|
||||
labels:
|
||||
component: scheduler
|
||||
tier: control-plane
|
||||
name: my-scheduler
|
||||
namespace: kube-system
|
||||
spec:
|
||||
replicas: 1
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
component: scheduler
|
||||
tier: control-plane
|
||||
version: second
|
||||
spec:
|
||||
containers:
|
||||
- command:
|
||||
- /usr/local/bin/kube-scheduler
|
||||
- --address=0.0.0.0
|
||||
- --leader-elect=false
|
||||
- --scheduler-name=my-scheduler
|
||||
image: gcr.io/my-gcp-project/my-kube-scheduler:1.0
|
||||
livenessProbe:
|
||||
httpGet:
|
||||
path: /healthz
|
||||
port: 10251
|
||||
initialDelaySeconds: 15
|
||||
name: kube-second-scheduler
|
||||
readinessProbe:
|
||||
httpGet:
|
||||
path: /healthz
|
||||
port: 10251
|
||||
resources:
|
||||
requests:
|
||||
cpu: '0.1'
|
||||
securityContext:
|
||||
privileged: false
|
||||
volumeMounts: []
|
||||
hostNetwork: false
|
||||
hostPID: false
|
||||
volumes: []
|
||||
|
|
@ -1,10 +0,0 @@
|
|||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: no-annotation
|
||||
labels:
|
||||
name: multischeduler-example
|
||||
spec:
|
||||
containers:
|
||||
- name: pod-with-no-annotation-container
|
||||
image: k8s.gcr.io/pause:2.0
|
||||
|
|
@ -1,11 +0,0 @@
|
|||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: annotation-default-scheduler
|
||||
labels:
|
||||
name: multischeduler-example
|
||||
spec:
|
||||
schedulerName: default-scheduler
|
||||
containers:
|
||||
- name: pod-with-default-annotation-container
|
||||
image: k8s.gcr.io/pause:2.0
|
||||
|
|
@ -1,11 +0,0 @@
|
|||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: annotation-second-scheduler
|
||||
labels:
|
||||
name: multischeduler-example
|
||||
spec:
|
||||
schedulerName: my-scheduler
|
||||
containers:
|
||||
- name: pod-with-second-annotation-container
|
||||
image: k8s.gcr.io/pause:2.0
|
||||
|
|
@ -1,16 +0,0 @@
|
|||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: quota-mem-cpu-demo-2
|
||||
spec:
|
||||
containers:
|
||||
- name: quota-mem-cpu-demo-2-ctr
|
||||
image: redis
|
||||
resources:
|
||||
limits:
|
||||
memory: "1Gi"
|
||||
cpu: "800m"
|
||||
requests:
|
||||
memory: "700Mi"
|
||||
cpu: "400m"
|
||||
|
||||
|
|
@ -1,16 +0,0 @@
|
|||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: quota-mem-cpu-demo
|
||||
spec:
|
||||
containers:
|
||||
- name: quota-mem-cpu-demo-ctr
|
||||
image: nginx
|
||||
resources:
|
||||
limits:
|
||||
memory: "800Mi"
|
||||
cpu: "800m"
|
||||
requests:
|
||||
memory: "600Mi"
|
||||
cpu: "400m"
|
||||
|
||||
|
|
@ -1,10 +0,0 @@
|
|||
apiVersion: v1
|
||||
kind: ResourceQuota
|
||||
metadata:
|
||||
name: mem-cpu-demo
|
||||
spec:
|
||||
hard:
|
||||
requests.cpu: "1"
|
||||
requests.memory: 1Gi
|
||||
limits.cpu: "2"
|
||||
limits.memory: 2Gi
|
||||
|
|
@ -1,11 +0,0 @@
|
|||
kind: PersistentVolumeClaim
|
||||
apiVersion: v1
|
||||
metadata:
|
||||
name: pvc-quota-demo-2
|
||||
spec:
|
||||
storageClassName: manual
|
||||
accessModes:
|
||||
- ReadWriteOnce
|
||||
resources:
|
||||
requests:
|
||||
storage: 4Gi
|
||||
|
|
@ -1,11 +0,0 @@
|
|||
kind: PersistentVolumeClaim
|
||||
apiVersion: v1
|
||||
metadata:
|
||||
name: pvc-quota-demo
|
||||
spec:
|
||||
storageClassName: manual
|
||||
accessModes:
|
||||
- ReadWriteOnce
|
||||
resources:
|
||||
requests:
|
||||
storage: 3Gi
|
||||
|
|
@ -1,9 +0,0 @@
|
|||
apiVersion: v1
|
||||
kind: ResourceQuota
|
||||
metadata:
|
||||
name: object-quota-demo
|
||||
spec:
|
||||
hard:
|
||||
persistentvolumeclaims: "1"
|
||||
services.loadbalancers: "2"
|
||||
services.nodeports: "0"
|
||||
|
|
@ -1,11 +0,0 @@
|
|||
kind: PersistentVolumeClaim
|
||||
apiVersion: v1
|
||||
metadata:
|
||||
name: pvc-quota-demo-2
|
||||
spec:
|
||||
storageClassName: manual
|
||||
accessModes:
|
||||
- ReadWriteOnce
|
||||
resources:
|
||||
requests:
|
||||
storage: 4Gi
|
||||
|
|
@ -1,16 +0,0 @@
|
|||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: cpu-demo-2
|
||||
spec:
|
||||
containers:
|
||||
- name: cpu-demo-ctr-2
|
||||
image: vish/stress
|
||||
resources:
|
||||
limits:
|
||||
cpu: "100"
|
||||
requests:
|
||||
cpu: "100"
|
||||
args:
|
||||
- -cpus
|
||||
- "2"
|
||||
|
|
@ -1,16 +0,0 @@
|
|||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: cpu-demo
|
||||
spec:
|
||||
containers:
|
||||
- name: cpu-demo-ctr
|
||||
image: vish/stress
|
||||
resources:
|
||||
limits:
|
||||
cpu: "1"
|
||||
requests:
|
||||
cpu: "0.5"
|
||||
args:
|
||||
- -cpus
|
||||
- "2"
|
||||
|
|
@ -1,26 +0,0 @@
|
|||
apiVersion: v1
|
||||
kind: Pod
|
||||
|
||||
metadata:
|
||||
labels:
|
||||
test: liveness
|
||||
name: liveness-exec
|
||||
spec:
|
||||
containers:
|
||||
|
||||
- name: liveness
|
||||
|
||||
args:
|
||||
- /bin/sh
|
||||
- -c
|
||||
- touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 600
|
||||
|
||||
image: k8s.gcr.io/busybox
|
||||
|
||||
livenessProbe:
|
||||
exec:
|
||||
command:
|
||||
- cat
|
||||
- /tmp/healthy
|
||||
initialDelaySeconds: 5
|
||||
periodSeconds: 5
|
||||
|
|
@ -1,21 +0,0 @@
|
|||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
labels:
|
||||
test: liveness
|
||||
name: liveness-http
|
||||
spec:
|
||||
containers:
|
||||
- name: liveness
|
||||
args:
|
||||
- /server
|
||||
image: k8s.gcr.io/liveness
|
||||
livenessProbe:
|
||||
httpGet:
|
||||
path: /healthz
|
||||
port: 8080
|
||||
httpHeaders:
|
||||
- name: X-Custom-Header
|
||||
value: Awesome
|
||||
initialDelaySeconds: 3
|
||||
periodSeconds: 3
|
||||
|
|
@ -1,30 +0,0 @@
|
|||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: init-demo
|
||||
spec:
|
||||
containers:
|
||||
- name: nginx
|
||||
image: nginx
|
||||
ports:
|
||||
- containerPort: 80
|
||||
volumeMounts:
|
||||
- name: workdir
|
||||
mountPath: /usr/share/nginx/html
|
||||
# These containers are run during pod initialization
|
||||
initContainers:
|
||||
- name: install
|
||||
image: busybox
|
||||
command:
|
||||
- wget
|
||||
- "-O"
|
||||
- "/work-dir/index.html"
|
||||
- http://kubernetes.io
|
||||
volumeMounts:
|
||||
- name: workdir
|
||||
mountPath: "/work-dir"
|
||||
dnsPolicy: Default
|
||||
volumes:
|
||||
- name: workdir
|
||||
emptyDir: {}
|
||||
|
||||
|
|
@ -1,17 +0,0 @@
|
|||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: lifecycle-demo
|
||||
spec:
|
||||
containers:
|
||||
- name: lifecycle-demo-container
|
||||
image: nginx
|
||||
|
||||
lifecycle:
|
||||
postStart:
|
||||
exec:
|
||||
command: ["/bin/sh", "-c", "echo Hello from the postStart handler > /usr/share/message"]
|
||||
preStop:
|
||||
exec:
|
||||
command: ["/usr/sbin/nginx","-s","quit"]
|
||||
|
||||
|
|
@ -1,11 +0,0 @@
|
|||
apiVersion: v1
|
||||
kind: LimitRange
|
||||
metadata:
|
||||
name: mem-limit-range
|
||||
spec:
|
||||
limits:
|
||||
- default:
|
||||
memory: 512Mi
|
||||
defaultRequest:
|
||||
memory: 256Mi
|
||||
type: Container
|
||||
|
|
@ -1,20 +0,0 @@
|
|||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: memory-demo-2
|
||||
spec:
|
||||
containers:
|
||||
- name: memory-demo-2-ctr
|
||||
image: vish/stress
|
||||
resources:
|
||||
requests:
|
||||
memory: 50Mi
|
||||
limits:
|
||||
memory: "100Mi"
|
||||
args:
|
||||
- -mem-total
|
||||
- 250Mi
|
||||
- -mem-alloc-size
|
||||
- 10Mi
|
||||
- -mem-alloc-sleep
|
||||
- 1s
|
||||
|
|
@ -1,20 +0,0 @@
|
|||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: memory-demo-3
|
||||
spec:
|
||||
containers:
|
||||
- name: memory-demo-3-ctr
|
||||
image: vish/stress
|
||||
resources:
|
||||
limits:
|
||||
memory: "1000Gi"
|
||||
requests:
|
||||
memory: "1000Gi"
|
||||
args:
|
||||
- -mem-total
|
||||
- 150Mi
|
||||
- -mem-alloc-size
|
||||
- 10Mi
|
||||
- -mem-alloc-sleep
|
||||
- 1s
|
||||
|
|
@ -1,20 +0,0 @@
|
|||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: memory-demo
|
||||
spec:
|
||||
containers:
|
||||
- name: memory-demo-ctr
|
||||
image: vish/stress
|
||||
resources:
|
||||
limits:
|
||||
memory: "200Mi"
|
||||
requests:
|
||||
memory: "100Mi"
|
||||
args:
|
||||
- -mem-total
|
||||
- 150Mi
|
||||
- -mem-alloc-size
|
||||
- 10Mi
|
||||
- -mem-alloc-sleep
|
||||
- 1s
|
||||
|
|
@ -1,11 +0,0 @@
|
|||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: oir-demo-2
|
||||
spec:
|
||||
containers:
|
||||
- name: oir-demo-2-ctr
|
||||
image: nginx
|
||||
resources:
|
||||
requests:
|
||||
pod.alpha.kubernetes.io/opaque-int-resource-dongle: 2
|
||||
|
|
@ -1,11 +0,0 @@
|
|||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: oir-demo
|
||||
spec:
|
||||
containers:
|
||||
- name: oir-demo-ctr
|
||||
image: nginx
|
||||
resources:
|
||||
requests:
|
||||
pod.alpha.kubernetes.io/opaque-int-resource-dongle: 3
|
||||
|
|
@ -1,138 +0,0 @@
|
|||
---
|
||||
title: 给容器分配非透明整型资源
|
||||
content_template: templates/task
|
||||
---
|
||||
|
||||
{{% capture overview %}}
|
||||
|
||||
本页展示了如何给容器分配非透明整型资源。
|
||||
|
||||
{{< feature-state state="alpha" >}}
|
||||
|
||||
{{% /capture %}}
|
||||
|
||||
|
||||
{{% capture prerequisites %}}
|
||||
|
||||
{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
|
||||
|
||||
在做这个练习之前,请在[给节点配置非透明整型资源](/docs/tasks/administer-cluster/opaque-integer-resource-node/)文档中进行练习,
|
||||
该文档介绍了在一个节点上配置dongle资源。
|
||||
|
||||
{{% /capture %}}
|
||||
|
||||
|
||||
{{% capture steps %}}
|
||||
|
||||
## 给Pod分配非透明整型资源
|
||||
|
||||
为了请求一个非透明整型资源,需要在容器配置文件中包含`resources:requests`字段。
|
||||
非透明整型资源类型前缀是`pod.alpha.kubernetes.io/opaque-int-resource-`。
|
||||
|
||||
下面是含有一个容器的Pod的配置文件:
|
||||
|
||||
{{< code file="oir-pod.yaml" >}}
|
||||
|
||||
在配置文件中,可以看到容器请求了3个dongles资源。
|
||||
|
||||
创建Pod:
|
||||
|
||||
```shell
|
||||
kubectl create -f https://k8s.io/docs/tasks/configure-pod-container/oir-pod.yaml
|
||||
```
|
||||
|
||||
验证Pod是否正在运行:
|
||||
|
||||
```shell
|
||||
kubectl get pod oir-demo
|
||||
```
|
||||
|
||||
查询Pod的状态:
|
||||
|
||||
```shell
|
||||
kubectl describe pod oir-demo
|
||||
```
|
||||
|
||||
输出显示了dongle请求:
|
||||
|
||||
```yaml
|
||||
Requests:
|
||||
pod.alpha.kubernetes.io/opaque-int-resource-dongle: 3
|
||||
```
|
||||
|
||||
## 尝试创建第二个Pod
|
||||
|
||||
下面是含有一个容器的Pod的配置文件。该容器请求了两个dongles资源。
|
||||
|
||||
{{< code file="oir-pod-2.yaml" >}}
|
||||
|
||||
Kubernetes无法再满足两个dongles的请求,因为第一个Pod已经使用了四个可用dongles中的三个。
|
||||
|
||||
尝试创建Pod:
|
||||
|
||||
```shell
|
||||
kubectl create -f https://k8s.io/docs/tasks/configure-pod-container/oir-pod-2.yaml
|
||||
```
|
||||
|
||||
查询Pod的状态
|
||||
|
||||
```shell
|
||||
kubectl describe pod oir-demo-2
|
||||
```
|
||||
|
||||
输出显示该Pod无法被调度,因为没有节点有两个可用的dongles资源:
|
||||
|
||||
|
||||
```
|
||||
Conditions:
|
||||
Type Status
|
||||
PodScheduled False
|
||||
...
|
||||
Events:
|
||||
...
|
||||
... Warning FailedScheduling pod (oir-demo-2) failed to fit in any node
|
||||
fit failure summary on nodes : Insufficient pod.alpha.kubernetes.io/opaque-int-resource-dongle (1)
|
||||
```
|
||||
|
||||
查看Pod的状态:
|
||||
|
||||
```shell
|
||||
kubectl get pod oir-demo-2
|
||||
```
|
||||
|
||||
输出显示Pod已创建,但是没有被调度并运行在节点上。
|
||||
它的状态为Pending:
|
||||
|
||||
```yaml
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
oir-demo-2 0/1 Pending 0 6m
|
||||
```
|
||||
|
||||
## 删除
|
||||
|
||||
删除本练习中创建的Pod:
|
||||
|
||||
```shell
|
||||
kubectl delete pod oir-demo
|
||||
```
|
||||
|
||||
{{% /capture %}}
|
||||
|
||||
{{% capture whatsnext %}}
|
||||
|
||||
### 对于应用开发者
|
||||
|
||||
* [分配内存资源](/docs/tasks/configure-pod-container/assign-memory-resource/)
|
||||
* [分配CPU资源](/docs/tasks/configure-pod-container/assign-cpu-resource/)
|
||||
|
||||
### 对于集群管理员
|
||||
|
||||
* [给节点配置非透明整型资源](/docs/tasks/administer-cluster/opaque-integer-resource-node/)
|
||||
|
||||
{{% /capture %}}
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
|
@ -1,14 +0,0 @@
|
|||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: redis
|
||||
spec:
|
||||
containers:
|
||||
- name: redis
|
||||
image: redis
|
||||
volumeMounts:
|
||||
- name: redis-storage
|
||||
mountPath: /data/redis
|
||||
volumes:
|
||||
- name: redis-storage
|
||||
emptyDir: {}
|
||||
|
|
@ -1,13 +0,0 @@
|
|||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: nginx
|
||||
labels:
|
||||
env: test
|
||||
spec:
|
||||
containers:
|
||||
- name: nginx
|
||||
image: nginx
|
||||
imagePullPolicy: IfNotPresent
|
||||
nodeSelector:
|
||||
disktype: ssd
|
||||
|
|
@ -1,11 +0,0 @@
|
|||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: private-reg
|
||||
spec:
|
||||
containers:
|
||||
- name: private-reg-container
|
||||
image: <your-private-image>
|
||||
imagePullSecrets:
|
||||
- name: regsecret
|
||||
|
||||
|
|
@ -1,23 +0,0 @@
|
|||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: test-projected-volume
|
||||
spec:
|
||||
containers:
|
||||
- name: test-projected-volume
|
||||
image: busybox
|
||||
args:
|
||||
- sleep
|
||||
- "86400"
|
||||
volumeMounts:
|
||||
- name: all-in-one
|
||||
mountPath: "/projected-volume"
|
||||
readOnly: true
|
||||
volumes:
|
||||
- name: all-in-one
|
||||
projected:
|
||||
sources:
|
||||
- secret:
|
||||
name: user
|
||||
- secret:
|
||||
name: pass
|
||||
|
|
@ -1,14 +0,0 @@
|
|||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: qos-demo-2
|
||||
namespace: qos-example
|
||||
spec:
|
||||
containers:
|
||||
- name: qos-demo-2-ctr
|
||||
image: nginx
|
||||
resources:
|
||||
limits:
|
||||
memory: "200Mi"
|
||||
requests:
|
||||
memory: "100Mi"
|
||||
|
|
@ -1,9 +0,0 @@
|
|||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: qos-demo-3
|
||||
namespace: qos-example
|
||||
spec:
|
||||
containers:
|
||||
- name: qos-demo-3-ctr
|
||||
image: nginx
|
||||
|
|
@ -1,16 +0,0 @@
|
|||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: qos-demo-4
|
||||
namespace: qos-example
|
||||
spec:
|
||||
containers:
|
||||
|
||||
- name: qos-demo-4-ctr-1
|
||||
image: nginx
|
||||
resources:
|
||||
requests:
|
||||
memory: "200Mi"
|
||||
|
||||
- name: qos-demo-4-ctr-2
|
||||
image: redis
|
||||
|
|
@ -1,16 +0,0 @@
|
|||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: qos-demo
|
||||
namespace: qos-example
|
||||
spec:
|
||||
containers:
|
||||
- name: qos-demo-ctr
|
||||
image: nginx
|
||||
resources:
|
||||
limits:
|
||||
memory: "200Mi"
|
||||
cpu: "700m"
|
||||
requests:
|
||||
memory: "200Mi"
|
||||
cpu: "700m"
|
||||
|
|
@ -1,11 +0,0 @@
|
|||
apiVersion: v1
|
||||
kind: ResourceQuota
|
||||
metadata:
|
||||
name: compute-resources
|
||||
spec:
|
||||
hard:
|
||||
pods: "4"
|
||||
requests.cpu: "1"
|
||||
requests.memory: 1Gi
|
||||
limits.cpu: "2"
|
||||
limits.memory: 2Gi
|
||||
|
|
@ -1,12 +0,0 @@
|
|||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: security-context-demo-2
|
||||
spec:
|
||||
securityContext:
|
||||
runAsUser: 1000
|
||||
containers:
|
||||
- name: sec-ctx-demo-2
|
||||
image: gcr.io/google-samples/node-hello:1.0
|
||||
securityContext:
|
||||
runAsUser: 2000
|
||||
|
|
@ -1,11 +0,0 @@
|
|||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: security-context-demo-3
|
||||
spec:
|
||||
containers:
|
||||
- name: sec-ctx-3
|
||||
image: gcr.io/google-samples/node-hello:1.0
|
||||
|
||||
|
||||
|
||||
|
|
@ -1,11 +0,0 @@
|
|||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: security-context-demo-4
|
||||
spec:
|
||||
containers:
|
||||
- name: sec-ctx-4
|
||||
image: gcr.io/google-samples/node-hello:1.0
|
||||
securityContext:
|
||||
capabilities:
|
||||
add: ["NET_ADMIN", "SYS_TIME"]
|
||||
|
|
@ -1,17 +0,0 @@
|
|||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: security-context-demo
|
||||
spec:
|
||||
securityContext:
|
||||
runAsUser: 1000
|
||||
fsGroup: 2000
|
||||
volumes:
|
||||
- name: sec-ctx-vol
|
||||
emptyDir: {}
|
||||
containers:
|
||||
- name: sec-ctx-demo
|
||||
image: gcr.io/google-samples/node-hello:1.0
|
||||
volumeMounts:
|
||||
- name: sec-ctx-vol
|
||||
mountPath: /data/demo
|
||||
|
|
@ -1,11 +0,0 @@
|
|||
kind: PersistentVolumeClaim
|
||||
apiVersion: v1
|
||||
metadata:
|
||||
name: task-pv-claim
|
||||
spec:
|
||||
storageClassName: manual
|
||||
accessModes:
|
||||
- ReadWriteOnce
|
||||
resources:
|
||||
requests:
|
||||
storage: 3Gi
|
||||
|
|
@ -1,22 +0,0 @@
|
|||
kind: Pod
|
||||
apiVersion: v1
|
||||
metadata:
|
||||
name: task-pv-pod
|
||||
spec:
|
||||
|
||||
volumes:
|
||||
- name: task-pv-storage
|
||||
persistentVolumeClaim:
|
||||
claimName: task-pv-claim
|
||||
|
||||
containers:
|
||||
- name: task-pv-container
|
||||
image: nginx
|
||||
ports:
|
||||
- containerPort: 80
|
||||
name: "http-server"
|
||||
volumeMounts:
|
||||
- mountPath: "/usr/share/nginx/html"
|
||||
name: task-pv-storage
|
||||
|
||||
|
||||
|
|
@ -1,14 +0,0 @@
|
|||
kind: PersistentVolume
|
||||
apiVersion: v1
|
||||
metadata:
|
||||
name: task-pv-volume
|
||||
labels:
|
||||
type: local
|
||||
spec:
|
||||
storageClassName: manual
|
||||
capacity:
|
||||
storage: 10Gi
|
||||
accessModes:
|
||||
- ReadWriteOnce
|
||||
hostPath:
|
||||
path: "/tmp/data"
|
||||
|
|
@ -1,22 +0,0 @@
|
|||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: goproxy
|
||||
labels:
|
||||
app: goproxy
|
||||
spec:
|
||||
containers:
|
||||
- name: goproxy
|
||||
image: k8s.gcr.io/goproxy:0.1
|
||||
ports:
|
||||
- containerPort: 8080
|
||||
readinessProbe:
|
||||
tcpSocket:
|
||||
port: 8080
|
||||
initialDelaySeconds: 5
|
||||
periodSeconds: 10
|
||||
livenessProbe:
|
||||
tcpSocket:
|
||||
port: 8080
|
||||
initialDelaySeconds: 15
|
||||
periodSeconds: 20
|
||||
|
|
@ -1,13 +0,0 @@
|
|||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: command-demo
|
||||
labels:
|
||||
purpose: demonstrate-command
|
||||
spec:
|
||||
containers:
|
||||
- name: command-demo-container
|
||||
image: debian
|
||||
command: ["printenv"]
|
||||
args: ["HOSTNAME", "KUBERNETES_PORT"]
|
||||
restartPolicy: OnFailure
|
||||
|
|
@@ -31,21 +31,19 @@ content_template: templates/task

本示例中,将创建一个只包含单个容器的Pod。在Pod配置文件中设置了一个命令与两个入参:

{{< code file="commands.yaml" >}}
{{< codenew file="pods/commands.yaml" >}}

1. 基于YAML文件创建一个Pod:
1. 基于 YAML 文件创建一个 Pod:

```shell
kubectl create -f https://k8s.io/docs/tasks/inject-data-application/commands.yaml
kubectl apply -f https://k8s.io/examples/pods/commands.yaml
```

2. List the running Pods:
2. 列举运行中的 Pods:

获取正在运行的 pod

```shell
kubectl get pods
```
```shell
kubectl get pods
```

查询结果显示在command-demo这个Pod下运行的容器已经启动完成
@@ -55,7 +53,7 @@ content_template: templates/task
kubectl logs command-demo
```

日志中显示了HOSTNAME 与KUBERNETES_PORT 这两个环境变量的值:
日志中显示了 HOSTNAME 与 KUBERNETES_PORT 这两个环境变量的值:

```
command-demo
@@ -99,12 +97,12 @@ args: ["-c", "while true; do echo hello; sleep 10;done"]

下表给出了Docker 与 Kubernetes中对应的字段名称。

| Description                          | Docker field name | Kubernetes field name |
|--------------------------------------|-------------------|-----------------------|
| The command run by the container     | Entrypoint        | command               |
| The arguments passed to the command  | Cmd               | args                  |
| 描述 | Docker 字段名称 | Kubernetes 字段名称 |
|------|-----------------|---------------------|
| 容器运行的命令 | Entrypoint | command |
| 传递给命令的参数集合 | Cmd | args |

如果要覆盖默认的Entrypoint 与 Cmd,需要遵循如下规则:
如果要覆盖默认的 Entrypoint 与 Cmd,需要遵循如下规则:

* 如果在容器配置中没有设置`command` 或者 `args`,那么将使用Docker镜像自带的命
令及其入参。
@ -121,22 +119,22 @@ args: ["-c", "while true; do echo hello; sleep 10;done"]
|
|||
|
||||
下表涵盖了各类设置场景:
|
||||
|
||||
| Image Entrypoint | Image Cmd | Container command | Container args | Command run |
|
||||
|--------------------|------------------|---------------------|--------------------|------------------|
|
||||
| `[/ep-1]` | `[foo bar]` | <not set> | <not set> | `[ep-1 foo bar]` |
|
||||
| `[/ep-1]` | `[foo bar]` | `[/ep-2]` | <not set> | `[ep-2]` |
|
||||
| `[/ep-1]` | `[foo bar]` | <not set> | `[zoo boo]` | `[ep-1 zoo boo]` |
|
||||
| `[/ep-1]` | `[foo bar]` | `[/ep-2]` | `[zoo boo]` | `[ep-2 zoo boo]` |
|
||||
| 镜像 Entrypoint | 镜像 Cmd | 容器命令 | 容器参数 | 运行的命令 |
|
||||
|-------------------|----------|----------|----------|------------|
|
||||
| `[/ep-1]` | `[foo bar]` | <not set> | <not set> | `[ep-1 foo bar]` |
|
||||
| `[/ep-1]` | `[foo bar]` | `[/ep-2]` | <not set> | `[ep-2]` |
|
||||
| `[/ep-1]` | `[foo bar]` | <not set> | `[zoo boo]` | `[ep-1 zoo boo]` |
|
||||
| `[/ep-1]` | `[foo bar]` | `[/ep-2]` | `[zoo boo]` | `[ep-2 zoo boo]` |
|
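下面给出一个最小的命令行示意(仅作说明,并非正式做法;`printenv` 与其参数沿用上文 commands.yaml 中的取值,Pod 名称 `override-demo` 为任意示例名),演示如何在不写 YAML 的情况下套用上表规则、同时覆盖镜像的 Entrypoint 与 Cmd:

```shell
# 通过 --command 覆盖镜像默认的 Entrypoint;"--" 之后的内容即容器的 command 与 args
kubectl run override-demo --image=debian --restart=Never \
  --command -- printenv HOSTNAME KUBERNETES_PORT

# 若省略 --command,则 "--" 之后的内容只作为 args(即只覆盖镜像的 Cmd)
```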
||||
|
||||
|
||||
{{% /capture %}}
|
||||
|
||||
{{% capture whatsnext %}}
|
||||
|
||||
* 获取更多资讯可参考 [containers and commands](/docs/user-guide/containers/).
|
||||
* 获取更多资讯可参考 [configuring pods and containers](/docs/tasks/).
|
||||
* 获取更多资讯可参考 [running commands in a container](/docs/tasks/debug-application-cluster/get-shell-running-container/).
|
||||
* 参考 [Container](/docs/api-reference/{{< param "version" >}}/#container-v1-core).
|
||||
* 深入了解 [容器和命令](/docs/user-guide/containers/).
|
||||
* 深入了解 [配置 Pods 和容器](/docs/tasks/).
|
||||
* 深入了解 [在容器中运行命令](/docs/tasks/debug-application-cluster/get-shell-running-container/).
|
||||
* 参考 [Container](/docs/api-reference/{{< param "version" >}}/#container-v1-core) 资源
|
||||
|
||||
{{% /capture %}}
|
||||
|
||||
|
|
|
|||
|
|
@ -26,12 +26,12 @@ content_template: templates/task
|
|||
本示例中,将创建一个只包含单个容器的Pod。Pod的配置文件中设置环境变量的名称为`DEMO_GREETING`,
|
||||
其值为`"Hello from the environment"`。下面是Pod的配置文件内容:
|
||||
|
||||
{{< code file="envars.yaml" >}}
|
||||
{{< codenew file="pods/inject/envars.yaml" >}}
|
||||
|
||||
1. 基于YAML文件创建一个Pod:
|
||||
|
||||
```shell
|
||||
kubectl create -f https://k8s.io/docs/tasks/inject-data-application/envars.yaml
|
||||
kubectl apply -f https://k8s.io/examples/pods/inject/envars.yaml
|
||||
```
|
||||
|
||||
1. 获取一下当前正在运行的Pods信息:
|
||||
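Pod 运行之后,若想直接确认 `DEMO_GREETING` 已经注入到容器中,可以参考下面的示意命令(`<pod-name>` 需替换为清单中 `metadata.name` 所对应的实际 Pod 名称):

```shell
# 在容器内打印该环境变量;预期输出为上文设置的取值 "Hello from the environment"
kubectl exec <pod-name> -- printenv DEMO_GREETING
```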
|
|
|
|||
|
|
@ -31,7 +31,7 @@ content_template: templates/task
|
|||
|
||||
在这个练习中,你将创建一个包含一个容器的pod。这是该pod的配置文件:
|
||||
|
||||
{{< code file="dapi-volume.yaml" >}}
|
||||
{{< codenew file="pods/inject/dapi-volume.yaml" >}}
|
||||
|
||||
在配置文件中,你可以看到Pod有一个`downwardAPI`类型的Volume,并且挂载到容器中的`/etc`。
|
||||
|
||||
|
|
@ -46,7 +46,7 @@ content_template: templates/task
|
|||
创建 Pod:
|
||||
|
||||
```shell
|
||||
kubectl create -f https://k8s.io/cn/docs/tasks/inject-data-application/dapi-volume.yaml
|
||||
kubectl apply -f https://k8s.io/examples/pods/inject/dapi-volume.yaml
|
||||
```
|
||||
|
||||
验证Pod中的容器运行正常:
|
||||
|
|
@ -134,7 +134,7 @@ total 8
|
|||
|
||||
前面的练习中,你将Pod字段保存到DownwardAPIVolumeFile中。接下来这个练习,你将存储容器字段。这里是包含一个容器的pod的配置文件:
|
||||
|
||||
{{< code file="dapi-volume-resources.yaml" >}}
|
||||
{{< codenew file="pods/inject/dapi-volume-resources.yaml" >}}
|
||||
|
||||
在这个配置文件中,你可以看到Pod有一个`downwardAPI`类型的Volume,并且挂载到容器的`/etc`目录。
|
||||
|
||||
|
|
@ -145,7 +145,7 @@ total 8
|
|||
创建Pod:
|
||||
|
||||
```shell
|
||||
kubectl create -f https://k8s.io/cn/docs/tasks/inject-data-application/dapi-volume-resources.yaml
|
||||
kubectl apply -f https://k8s.io/examples/pods/inject/dapi-volume-resources.yaml
|
||||
```
|
||||
|
||||
进入Pod中运行的容器,打开一个shell:
|
||||
|
|
|
|||
|
|
@ -37,7 +37,7 @@ content_template: templates/task
|
|||
|
||||
在这个练习中,你将创建一个包含一个容器的pod。这是该pod的配置文件:
|
||||
|
||||
{{< code file="dapi-envars-pod.yaml" >}}
|
||||
{{< codenew file="pods/inject/dapi-envars-pod.yaml" >}}
|
||||
|
||||
这个配置文件中,你可以看到五个环境变量。`env`字段是一个[EnvVars](/docs/resources-reference/{{< param "version" >}}/#envvar-v1-core)类型的数组。
|
||||
数组中第一个元素指定`MY_NODE_NAME`这个环境变量从Pod的`spec.nodeName`字段获取变量值。同样,其它环境变量也是从Pod的字段获取它们的变量值。
|
||||
|
|
@ -49,7 +49,7 @@ content_template: templates/task
|
|||
创建Pod:
|
||||
|
||||
```shell
|
||||
kubectl create -f https://k8s.io/cn/docs/tasks/inject-data-application/dapi-envars-pod.yaml
|
||||
kubectl apply -f https://k8s.io/examples/pods/inject/dapi-envars-pod.yaml
|
||||
```
|
||||
|
||||
验证Pod中的容器运行正常:
|
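若容器像本例那样把这些环境变量周期性地打印到标准输出(上文 `MY_POD_NAME=dapi-envars-fieldref` 的片段即来自这类输出),也可以直接查看日志来快速验证;这里假设 Pod 名称即为 `dapi-envars-fieldref`,请以清单中的 `metadata.name` 为准:

```shell
# 查看容器日志,其中应包含 MY_NODE_NAME、MY_POD_NAME 等变量的取值
kubectl logs dapi-envars-fieldref
```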
||||
|
|
@ -105,7 +105,7 @@ MY_POD_NAME=dapi-envars-fieldref
|
|||
|
||||
前面的练习中,你将Pod字段作为环境变量的值。接下来这个练习,你将用容器字段作为环境变量的值。这里是包含一个容器的pod的配置文件:
|
||||
|
||||
{{< code file="dapi-envars-container.yaml" >}}
|
||||
{{< codenew file="pods/inject/dapi-envars-container.yaml" >}}
|
||||
|
||||
这个配置文件中,你可以看到四个环境变量。`env`字段是一个[EnvVars](/docs/resources-reference/{{< param "version" >}}/#envvar-v1-core)
|
||||
类型的数组。数组中第一个元素指定`MY_CPU_REQUEST`这个环境变量从容器的`requests.cpu`字段获取变量值。同样,其它环境变量也是从容器的字段获取它们的变量值。
|
||||
|
|
@ -113,7 +113,7 @@ MY_POD_NAME=dapi-envars-fieldref
|
|||
创建Pod:
|
||||
|
||||
```shell
|
||||
kubectl create -f https://k8s.io/cn/docs/tasks/inject-data-application/dapi-envars-container.yaml
|
||||
kubectl apply -f https://k8s.io/examples/pods/inject/dapi-envars-container.yaml
|
||||
```
|
||||
|
||||
验证Pod中的容器运行正常:
|
||||
|
|
|
|||
|
|
@ -16,37 +16,76 @@ title: 使用 PodPreset 将信息注入 Pods
|
|||
|
||||
这里是一个简单的示例,展示了如何通过 PodPreset 修改 Pod spec。
|
||||
|
||||
**用户提交的 pod spec:**
|
||||
{{< codenew file="podpreset/preset.yaml" >}}
|
||||
|
||||
{{< code file="podpreset-pod.yaml" >}}
|
||||
创建 PodPreset:
|
||||
|
||||
**Pod Preset 示例:**
|
||||
```shell
|
||||
kubectl apply -f https://k8s.io/examples/podpreset/preset.yaml
|
||||
```
|
||||
|
||||
{{< code file="podpreset-preset.yaml" >}}
|
||||
检查所创建的 PodPreset:
|
||||
|
||||
**通过准入控制器后的 Pod spec:**
|
||||
```shell
|
||||
kubectl get podpreset
|
||||
```
|
||||
```
|
||||
NAME AGE
|
||||
allow-database 1m
|
||||
```
|
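若想进一步确认该 PodPreset 的选择算符以及将要注入的环境变量或卷,可以查看其详细信息(下面仅为示意,输出内容以集群实际情况为准):

```shell
kubectl describe podpreset allow-database
# 或者以 YAML 形式查看完整定义
kubectl get podpreset allow-database -o yaml
```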
||||
|
||||
{{< code file="podpreset-merged.yaml" >}}
|
||||
|
||||
### 带有 `ConfigMap` 的 Pod Spec 示例
|
||||
|
||||
这里的示例展示了如何通过 Pod Preset 修改 Pod spec,Pod Preset 中定义了 `ConfigMap` 作为环境变量取值来源。
|
||||
新的 PodPreset 会对所有具有标签 `role: frontend` 的 Pods 采取行动。
|
||||
|
||||
**用户提交的 pod spec:**
|
||||
|
||||
{{< code file="podpreset-pod.yaml" >}}
|
||||
{{< codenew file="podpreset/pod.yaml" >}}
|
||||
|
||||
创建 Pod:
|
||||
|
||||
```shell
|
||||
kubectl create -f https://k8s.io/examples/podpreset/pod.yaml
|
||||
```
|
||||
|
||||
列举运行中的 Pods:
|
||||
|
||||
```shell
|
||||
kubectl get pods
|
||||
```
|
||||
```
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
website 1/1 Running 0 4m
|
||||
```
|
||||
|
||||
**通过准入控制器后的 Pod 规约:**
|
||||
|
||||
{{< codenew file="podpreset/merged.yaml" >}}
|
||||
|
||||
要查看如上输出,运行下面的命令:
|
||||
|
||||
```shell
|
||||
kubectl get pod website -o yaml
|
||||
```
|
||||
|
||||
### 带有 ConfigMap 的 Pod Spec 示例
|
||||
|
||||
这里的示例展示了如何通过 PodPreset 修改 Pod 规约,PodPreset 中定义了 `ConfigMap`
|
||||
作为环境变量取值来源。
|
||||
|
||||
**用户提交的 pod spec:**
|
||||
|
||||
{{< codenew file="podpreset/pod.yaml" >}}
|
||||
|
||||
**用户提交的 `ConfigMap`:**
|
||||
|
||||
{{< code file="podpreset-configmap.yaml" >}}
|
||||
{{< codenew file="podpreset/configmap.yaml" >}}
|
||||
|
||||
**Pod Preset 示例:**
|
||||
**PodPreset 示例:**
|
||||
|
||||
{{< code file="podpreset-allow-db.yaml" >}}
|
||||
{{< codenew file="podpreset/allow-db.yaml" >}}
|
||||
|
||||
**通过准入控制器后的 Pod spec:**
|
||||
|
||||
{{< code file="podpreset-allow-db-merged.yaml" >}}
|
||||
{{< codenew file="podpreset/allow-db-merged.yaml" >}}
|
||||
|
||||
### 带有 Pod Spec 的 ReplicaSet 示例
|
||||
|
||||
|
|
@ -54,53 +93,53 @@ title: 使用 PodPreset 将信息注入 Pods
|
|||
|
||||
**用户提交的 ReplicaSet:**
|
||||
|
||||
{{< code file="podpreset-replicaset.yaml" >}}
|
||||
{{< codenew file="podpreset/replicaset.yaml" >}}
|
||||
|
||||
**Pod Preset 示例:**
|
||||
**PodPreset 示例:**
|
||||
|
||||
{{< code file="podpreset-preset.yaml" >}}
|
||||
{{< codenew file="podpreset/preset.yaml" >}}
|
||||
|
||||
**通过准入控制器后的 Pod spec:**
|
||||
|
||||
注意 ReplicaSet spec 没有改变,用户必须检查单独的 pod 来验证 PodPreset 已被应用。
|
||||
|
||||
{{< code file="podpreset-replicaset-merged.yaml" >}}
|
||||
{{< codenew file="podpreset/replicaset-merged.yaml" >}}
|
||||
|
||||
### 多 PodPreset 示例
|
||||
|
||||
这里的示例展示了如何通过多个 Pod 注入策略修改 Pod spec。
|
||||
|
||||
**用户提交的 pod spec:**
|
||||
**用户提交的 Pod 规约:**
|
||||
|
||||
{{< code file="podpreset-pod.yaml" >}}
|
||||
{{< codenew file="podpreset/pod.yaml" >}}
|
||||
|
||||
**Pod Preset 示例:**
|
||||
**PodPreset 示例:**
|
||||
|
||||
{{< code file="podpreset-preset.yaml" >}}
|
||||
{{< codenew file="podpreset/preset.yaml" >}}
|
||||
|
||||
**另一个 Pod Preset 示例:**
|
||||
|
||||
{{< code file="podpreset-proxy.yaml" >}}
|
||||
{{< codenew file="podpreset/proxy.yaml" >}}
|
||||
|
||||
**通过准入控制器后的 Pod spec:**
|
||||
**通过准入控制器后的 Pod 规约:**
|
||||
|
||||
{{< code file="podpreset-multi-merged.yaml" >}}
|
||||
{{< codenew file="podpreset/multi-merged.yaml" >}}
|
||||
|
||||
### 冲突示例
|
||||
|
||||
这里的示例展示了 Pod Preset 与原 Pod 存在冲突时,Pod spec 不会被修改。
|
||||
这里的示例展示了 PodPreset 与原 Pod 存在冲突时,Pod spec 不会被修改。
|
||||
|
||||
**用户提交的 pod spec:**
|
||||
**用户提交的 Pod 规约:**
|
||||
|
||||
{{< code file="podpreset-conflict-pod.yaml" >}}
|
||||
{{< codenew file="podpreset/conflict-pod.yaml" >}}
|
||||
|
||||
**Pod Preset 示例:**
|
||||
**PodPreset 示例:**
|
||||
|
||||
{{< code file="podpreset-conflict-preset.yaml" >}}
|
||||
{{< codenew file="podpreset/conflict-preset.yaml" >}}
|
||||
|
||||
**因存在冲突,通过准入控制器后的 Pod spec 不会改变:**
|
||||
|
||||
{{< code file="podpreset-conflict-pod.yaml" >}}
|
||||
{{< codenew file="podpreset/conflict-pod.yaml" >}}
|
||||
|
||||
**如果运行 `kubectl describe...` 用户会看到以下事件:**
|
||||
|
||||
|
|
@ -117,7 +156,9 @@ Events:
|
|||
一旦用户不再需要 pod preset,可以使用 `kubectl` 进行删除:
|
||||
|
||||
```shell
|
||||
$ kubectl delete podpreset allow-database
|
||||
kubectl delete podpreset allow-database
|
||||
```
|
||||
```
|
||||
podpreset "allow-database" deleted
|
||||
```
|
||||
|
||||
|
|
|
|||
|
|
@ -1,17 +0,0 @@
|
|||
apiVersion: apps/v1beta2
|
||||
kind: Deployment
|
||||
metadata:
|
||||
name: patch-demo
|
||||
spec:
|
||||
replicas: 2
|
||||
selector:
|
||||
matchLabels:
|
||||
app: nginx
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
app: nginx
|
||||
spec:
|
||||
containers:
|
||||
- name: patch-demo-ctr
|
||||
image: nginx
|
||||
|
|
@ -1,18 +0,0 @@
|
|||
apiVersion: apps/v1beta1
|
||||
kind: Deployment
|
||||
metadata:
|
||||
name: nginx-deployment
|
||||
spec:
|
||||
replicas: 2 # tells deployment to run 2 pods matching the template
|
||||
template: # create pods using pod definition in this template
|
||||
metadata:
|
||||
# unlike pod-nginx.yaml, the name is not included in the meta data as a unique name is
|
||||
# generated from the deployment name
|
||||
labels:
|
||||
app: nginx
|
||||
spec:
|
||||
containers:
|
||||
- name: nginx
|
||||
image: nginx:1.7.9
|
||||
ports:
|
||||
- containerPort: 80
|
||||
|
|
@ -1,12 +0,0 @@
|
|||
apiVersion: v1
|
||||
kind: PersistentVolume
|
||||
metadata:
|
||||
name: mysql-pv
|
||||
spec:
|
||||
capacity:
|
||||
storage: 20Gi
|
||||
accessModes:
|
||||
- ReadWriteOnce
|
||||
gcePersistentDisk:
|
||||
pdName: mysql-disk
|
||||
fsType: ext4
|
||||
|
|
@ -1,16 +0,0 @@
|
|||
apiVersion: v1
|
||||
kind: ConfigMap
|
||||
metadata:
|
||||
name: mysql
|
||||
labels:
|
||||
app: mysql
|
||||
data:
|
||||
master.cnf: |
|
||||
# Apply this config only on the master.
|
||||
[mysqld]
|
||||
log-bin
|
||||
slave.cnf: |
|
||||
# Apply this config only on slaves.
|
||||
[mysqld]
|
||||
super-read-only
|
||||
|
||||
|
|
@ -1,30 +0,0 @@
|
|||
# Headless service for stable DNS entries of StatefulSet members.
|
||||
apiVersion: v1
|
||||
kind: Service
|
||||
metadata:
|
||||
name: mysql
|
||||
labels:
|
||||
app: mysql
|
||||
spec:
|
||||
ports:
|
||||
- name: mysql
|
||||
port: 3306
|
||||
clusterIP: None
|
||||
selector:
|
||||
app: mysql
|
||||
---
|
||||
# Client service for connecting to any MySQL instance for reads.
|
||||
# For writes, you must instead connect to the master: mysql-0.mysql.
|
||||
apiVersion: v1
|
||||
kind: Service
|
||||
metadata:
|
||||
name: mysql-read
|
||||
labels:
|
||||
app: mysql
|
||||
spec:
|
||||
ports:
|
||||
- name: mysql
|
||||
port: 3306
|
||||
selector:
|
||||
app: mysql
|
||||
|
||||
|
|
@ -1,163 +0,0 @@
|
|||
apiVersion: apps/v1beta1
|
||||
kind: StatefulSet
|
||||
metadata:
|
||||
name: mysql
|
||||
spec:
|
||||
serviceName: mysql
|
||||
replicas: 3
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
app: mysql
|
||||
annotations:
|
||||
pod.beta.kubernetes.io/init-containers: '[
|
||||
{
|
||||
"name": "init-mysql",
|
||||
"image": "mysql:5.7",
|
||||
"command": ["bash", "-c", "
|
||||
set -ex\n
|
||||
# Generate mysql server-id from pod ordinal index.\n
|
||||
[[ `hostname` =~ -([0-9]+)$ ]] || exit 1\n
|
||||
ordinal=${BASH_REMATCH[1]}\n
|
||||
echo [mysqld] > /mnt/conf.d/server-id.cnf\n
|
||||
# Add an offset to avoid reserved server-id=0 value.\n
|
||||
echo server-id=$((100 + $ordinal)) >> /mnt/conf.d/server-id.cnf\n
|
||||
# Copy appropriate conf.d files from config-map to emptyDir.\n
|
||||
if [[ $ordinal -eq 0 ]]; then\n
|
||||
cp /mnt/config-map/master.cnf /mnt/conf.d/\n
|
||||
else\n
|
||||
cp /mnt/config-map/slave.cnf /mnt/conf.d/\n
|
||||
fi\n
|
||||
"],
|
||||
"volumeMounts": [
|
||||
{"name": "conf", "mountPath": "/mnt/conf.d"},
|
||||
{"name": "config-map", "mountPath": "/mnt/config-map"}
|
||||
]
|
||||
},
|
||||
{
|
||||
"name": "clone-mysql",
|
||||
"image": "gcr.io/google-samples/xtrabackup:1.0",
|
||||
"command": ["bash", "-c", "
|
||||
set -ex\n
|
||||
# Skip the clone if data already exists.\n
|
||||
[[ -d /var/lib/mysql/mysql ]] && exit 0\n
|
||||
# Skip the clone on master (ordinal index 0).\n
|
||||
[[ `hostname` =~ -([0-9]+)$ ]] || exit 1\n
|
||||
ordinal=${BASH_REMATCH[1]}\n
|
||||
[[ $ordinal -eq 0 ]] && exit 0\n
|
||||
# Clone data from previous peer.\n
|
||||
ncat --recv-only mysql-$(($ordinal-1)).mysql 3307 | xbstream -x -C /var/lib/mysql\n
|
||||
# Prepare the backup.\n
|
||||
xtrabackup --prepare --target-dir=/var/lib/mysql\n
|
||||
"],
|
||||
"volumeMounts": [
|
||||
{"name": "data", "mountPath": "/var/lib/mysql", "subPath": "mysql"},
|
||||
{"name": "conf", "mountPath": "/etc/mysql/conf.d"}
|
||||
]
|
||||
}
|
||||
]'
|
||||
spec:
|
||||
containers:
|
||||
- name: mysql
|
||||
image: mysql:5.7
|
||||
env:
|
||||
- name: MYSQL_ALLOW_EMPTY_PASSWORD
|
||||
value: "1"
|
||||
ports:
|
||||
- name: mysql
|
||||
containerPort: 3306
|
||||
volumeMounts:
|
||||
- name: data
|
||||
mountPath: /var/lib/mysql
|
||||
subPath: mysql
|
||||
- name: conf
|
||||
mountPath: /etc/mysql/conf.d
|
||||
resources:
|
||||
requests:
|
||||
cpu: 1
|
||||
memory: 1Gi
|
||||
livenessProbe:
|
||||
exec:
|
||||
command: ["mysqladmin", "ping"]
|
||||
initialDelaySeconds: 30
|
||||
timeoutSeconds: 5
|
||||
readinessProbe:
|
||||
exec:
|
||||
# Check we can execute queries over TCP (skip-networking is off).
|
||||
command: ["mysql", "-h", "127.0.0.1", "-e", "SELECT 1"]
|
||||
initialDelaySeconds: 5
|
||||
timeoutSeconds: 1
|
||||
- name: xtrabackup
|
||||
image: gcr.io/google-samples/xtrabackup:1.0
|
||||
ports:
|
||||
- name: xtrabackup
|
||||
containerPort: 3307
|
||||
command:
|
||||
- bash
|
||||
- "-c"
|
||||
- |
|
||||
set -ex
|
||||
cd /var/lib/mysql
|
||||
|
||||
# Determine binlog position of cloned data, if any.
|
||||
if [[ -f xtrabackup_slave_info && "x$(<xtrabackup_slave_info)" != "x" ]]; then
|
||||
# XtraBackup already generated a partial "CHANGE MASTER TO" query
|
||||
# because we're cloning from an existing slave. (Need to remove the tailing semicolon!)
|
||||
cat xtrabackup_slave_info | sed -E 's/;$//g' > change_master_to.sql.in
|
||||
# Ignore xtrabackup_binlog_info in this case (it's useless).
|
||||
rm -f xtrabackup_slave_info xtrabackup_binlog_info
|
||||
elif [[ -f xtrabackup_binlog_info ]]; then
|
||||
# We're cloning directly from master. Parse binlog position.
|
||||
[[ `cat xtrabackup_binlog_info` =~ ^(.*?)[[:space:]]+(.*?)$ ]] || exit 1
|
||||
rm -f xtrabackup_binlog_info xtrabackup_slave_info
|
||||
echo "CHANGE MASTER TO MASTER_LOG_FILE='${BASH_REMATCH[1]}',\
|
||||
MASTER_LOG_POS=${BASH_REMATCH[2]}" > change_master_to.sql.in
|
||||
fi
|
||||
|
||||
# Check if we need to complete a clone by starting replication.
|
||||
if [[ -f change_master_to.sql.in ]]; then
|
||||
echo "Waiting for mysqld to be ready (accepting connections)"
|
||||
until mysql -h 127.0.0.1 -e "SELECT 1"; do sleep 1; done
|
||||
|
||||
echo "Initializing replication from clone position"
|
||||
mysql -h 127.0.0.1 \
|
||||
-e "$(<change_master_to.sql.in), \
|
||||
MASTER_HOST='mysql-0.mysql', \
|
||||
MASTER_USER='root', \
|
||||
MASTER_PASSWORD='', \
|
||||
MASTER_CONNECT_RETRY=10; \
|
||||
START SLAVE;" || exit 1
|
||||
# In case of container restart, attempt this at-most-once.
|
||||
mv change_master_to.sql.in change_master_to.sql.orig
|
||||
fi
|
||||
|
||||
# Start a server to send backups when requested by peers.
|
||||
exec ncat --listen --keep-open --send-only --max-conns=1 3307 -c \
|
||||
"xtrabackup --backup --slave-info --stream=xbstream --host=127.0.0.1 --user=root"
|
||||
volumeMounts:
|
||||
- name: data
|
||||
mountPath: /var/lib/mysql
|
||||
subPath: mysql
|
||||
- name: conf
|
||||
mountPath: /etc/mysql/conf.d
|
||||
resources:
|
||||
requests:
|
||||
cpu: 100m
|
||||
memory: 100Mi
|
||||
volumes:
|
||||
- name: conf
|
||||
emptyDir: {}
|
||||
- name: config-map
|
||||
configMap:
|
||||
name: mysql
|
||||
volumeClaimTemplates:
|
||||
- metadata:
|
||||
name: data
|
||||
annotations:
|
||||
volume.alpha.kubernetes.io/storage-class: default
|
||||
spec:
|
||||
accessModes: ["ReadWriteOnce"]
|
||||
resources:
|
||||
requests:
|
||||
storage: 10Gi
|
||||
|
||||
|
|
@ -14,7 +14,7 @@ content_template: templates/tutorial
|
|||
|
||||
* 在环境中通过磁盘创建一个PersistentVolume.
|
||||
* 创建一个MySQL Deployment.
|
||||
* 在集群内以一个已知的DNS名将MySQL暴露给其他pods.
|
||||
* 在集群内以一个已知的 DNS 名将 MySQL 暴露给其他 pods.
|
||||
|
||||
{{% /capture %}}
|
||||
|
||||
|
|
@ -23,38 +23,12 @@ content_template: templates/tutorial
|
|||
|
||||
* {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
|
||||
|
||||
* 为了数据持久性我们将在环境上通过磁盘创建一个持久卷. 环境支持的类型见这里[here](/docs/user-guide/persistent-volumes/#types-of-persistent-volumes). 本篇文档将介绍 `GCEPersistentDisk` . `GCEPersistentDisk`卷只能工作在Google Compute Engine平台上.
|
||||
* {{< include "default-storage-class-prereqs.md" >}}
|
||||
|
||||
{{% /capture %}}
|
||||
|
||||
|
||||
{{% capture lessoncontent %}}
|
||||
|
||||
## 在环境中设置一个磁盘
|
||||
|
||||
你可以为有状态的应用使用任何类型的持久卷. 有关支持环境的磁盘列表,请参考持久卷类型[Types of Persistent Volumes](/docs/user-guide/persistent-volumes/#types-of-persistent-volumes). 对于Google Compute Engine, 请运行:
|
||||
|
||||
```
|
||||
gcloud compute disks create --size=20GB mysql-disk
|
||||
```
|
||||
|
||||
|
||||
接下来创建一个指向刚创建的 `mysql-disk`磁盘的PersistentVolume. 下面是一个PersistentVolume的配置文件,它指向上面创建的Compute Engine磁盘:
|
||||
|
||||
{{< code file="gce-volume.yaml" >}}
|
||||
|
||||
注意`pdName: mysql-disk` 这行与Compute Engine环境中的磁盘名称相匹配. 有关为其
|
||||
他环境编写PersistentVolume配置文件的详细信息,请参见持久卷[Persistent Volumes](/docs/concepts/storage/persistent-volumes/).
|
||||
|
||||
|
||||
创建持久卷:
|
||||
|
||||
```
|
||||
kubectl create -f https://k8s.io/docs/tasks/run-application/gce-volume.yaml
|
||||
```
|
||||
|
||||
|
||||
|
||||
## 部署MySQL
|
||||
|
||||
通过创建Kubernetes Deployment并使用PersistentVolumeClaim将其连接到现已存在的PersistentVolume上来运行一个有状态的应用. 例如, 下面这个YAML文件描述了一个运行MySQL
|
||||
|
|
@ -65,13 +39,16 @@ kubectl create -f https://k8s.io/docs/tasks/run-application/gce-volume.yaml
|
|||
注意: 在配置的yaml文件中定义密码的做法是不安全的. 具体安全解决方案请参考
|
||||
[Kubernetes Secrets](/docs/concepts/configuration/secret/).
|
||||
|
||||
{{< code file="mysql-deployment.yaml" >}}
|
||||
{{< codenew file="application/mysql/mysql-deployment.yaml" >}}
|
||||
{{< codenew file="application/mysql/mysql-pv.yaml" >}}
|
||||
|
||||
1. 部署 YAML 文件中定义的 PV 和 PVC:
|
||||
|
||||
1. 部署YAML文件中定义的内容:
|
||||
kubectl apply -f https://k8s.io/examples/application/mysql/mysql-pv.yaml
|
||||
|
||||
kubectl create -f https://k8s.io/docs/tasks/run-application/mysql-deployment.yaml
|
||||
1. 部署 YAML 文件中定义的 Deployment:
|
||||
|
||||
kubectl apply -f https://k8s.io/examples/application/mysql/mysql-deployment.yaml
|
||||
|
||||
1. 展示Deployment相关信息:
|
||||
|
||||
|
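在执行上面两条 `kubectl apply` 之后,可以先用下面的示意命令粗略确认 PV 与 PVC 已经绑定、Deployment 已经创建(具体资源名称以清单中的定义为准),再继续查看 Deployment 的详细信息:

```shell
kubectl get pv,pvc
kubectl get deployments
```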
|
|
|||
|
|
@ -32,12 +32,12 @@ content_template: templates/tutorial
|
|||
|
||||
你可以通过创建一个 Kubernetes Deployment 对象来运行一个应用,并在一个 YAML 文件中描述该 Deployment。例如,下面这个 YAML 文件描述了一个运行 nginx:1.7.9 Docker 镜像的 Deployment:
|
||||
|
||||
{{< code file="deployment.yaml" >}}
|
||||
{{< codenew file="application/deployment.yaml" >}}
|
||||
|
||||
|
||||
1. 通过YAML文件创建一个Deployment:
|
||||
|
||||
kubectl create -f https://k8s.io/docs/tasks/run-application/deployment.yaml
|
||||
kubectl apply -f https://k8s.io/examples/application/deployment.yaml
|
||||
|
||||
1. 展示Deployment相关信息:
|
||||
|
||||
|
|
@ -90,29 +90,29 @@ content_template: templates/tutorial
|
|||
|
||||
你可以通过应用一个新的 YAML 文件来更新 Deployment。下面的 YAML 文件将该 Deployment 的镜像更新为 nginx 1.8。
|
||||
|
||||
{{< code file="deployment-update.yaml" >}}
|
||||
{{< codenew file="application/deployment-update.yaml" >}}
|
||||
|
||||
1. 应用新的YAML:
|
||||
|
||||
kubectl apply -f https://k8s.io/docs/tutorials/stateless-application/deployment-update.yaml
|
||||
kubectl apply -f https://k8s.io/examples/application/deployment-update.yaml
|
||||
|
||||
1. 观察该 Deployment 以新的名称创建 Pod,同时删除旧的 Pod:
|
||||
|
||||
kubectl get pods -l app=nginx
|
||||
kubectl get pods -l app=nginx
|
||||
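除了列出 Pod,也可以用下面的示意命令跟踪整个滚动更新的进度(这里假设 Deployment 名称为 `nginx-deployment`,请以你所应用清单中的 `metadata.name` 为准):

```shell
kubectl rollout status deployment/nginx-deployment
```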
|
||||
## 通过增加副本数来弹缩应用
|
||||
|
||||
你可以通过应用新的YAML文件来增加Deployment中pods的数量. 该YAML文件将`replicas`设置为4, 指定该Deployment应有4个pods:
|
||||
|
||||
{{< code file="deployment-scale.yaml" >}}
|
||||
{{< codenew file="application/deployment-scale.yaml" >}}
|
||||
|
||||
1. 应用新的YAML文件:
|
||||
|
||||
kubectl apply -f https://k8s.io/docs/tutorials/stateless-application/deployment-scale.yaml
|
||||
kubectl apply -f https://k8s.io/examples/application/deployment-scale.yaml
|
||||
|
||||
1. 验证Deployment有4个pods:
|
||||
|
||||
kubectl get pods -l app=nginx
|
||||
kubectl get pods -l app=nginx
|
||||
|
||||
输出的结果类似于:
|
||||
|
||||
|
|
|
|||
|
|
@ -51,8 +51,7 @@ StatefulSets 旨在与有状态的应用及分布式系统一起使用。然而
|
|||
|
||||
作为开始,使用如下示例创建一个 StatefulSet。它和 [StatefulSets](/docs/concepts/abstractions/controllers/statefulsets/) 概念中的示例相似。它创建了一个 [Headless Service](/docs/user-guide/services/#headless-services) `nginx` 用来发布 StatefulSet `web` 中的 Pod 的 IP 地址。
|
||||
|
||||
{{< code file="web.yaml" >}}
|
||||
|
||||
{{< codenew file="application/web/web.yaml" >}}
|
||||
|
||||
下载上面的例子并保存为文件 `web.yaml`。
|
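如果希望按下文那样以本地文件方式使用该清单,可以参考下面的示意命令下载(URL 按照本文其他示例 `https://k8s.io/examples/<路径>` 的惯例拼写,仅供参考):

```shell
curl -LO https://k8s.io/examples/application/web/web.yaml
```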
||||
|
||||
|
|
@ -67,12 +66,11 @@ kubectl get pods -w -l app=nginx
|
|||
在另一个终端中,使用 [`kubectl apply`](/docs/user-guide/kubectl/{{< param "version" >}}/#apply) 来创建定义在 `web.yaml` 中的 Headless Service 和 StatefulSet。
|
||||
|
||||
```shell
|
||||
kubectl create -f web.yaml
|
||||
service "nginx" created
|
||||
statefulset "web" created
|
||||
kubectl apply -f web.yaml
|
||||
service/nginx created
|
||||
statefulset.apps/web created
|
||||
```
|
||||
|
||||
|
||||
上面的命令创建了两个 Pod,每个都运行了一个 [NGINX](https://www.nginx.com) web 服务器。获取 `nginx` Service 和 `web` StatefulSet 来验证是否成功的创建了它们。
|
||||
|
||||
```shell
|
||||
|
|
@ -85,7 +83,6 @@ NAME DESIRED CURRENT AGE
|
|||
web 2 1 20s
|
||||
```
|
||||
|
||||
|
||||
### 顺序创建 Pod
|
||||
|
||||
|
||||
|
|
@ -989,10 +986,9 @@ statefulset "web" deleted
|
|||
|
||||
`Parallel` pod 管理策略告诉 StatefulSet 控制器并行的终止所有 Pod,在启动或终止另一个 Pod 前,不必等待这些 Pod 变成 Running 和 Ready 或者完全终止状态。
|
||||
|
||||
{{< code file="webp.yaml" >}}
|
||||
{{< codenew file="application/web/web-parallel.yaml" >}}
|
||||
|
||||
|
||||
下载上面的例子并保存为 `webp.yaml`。
|
||||
下载上面的例子并保存为 `web-parallel.yaml`。
|
||||
|
||||
|
||||
这份清单和你在上文下载的完全一样,只是 `web` StatefulSet 的 `.spec.podManagementPolicy` 设置成了 `Parallel`。
|
||||
|
|
@ -1008,12 +1004,11 @@ kubectl get po -lapp=nginx -w
|
|||
在另一个终端窗口创建清单中的 StatefulSet 和 Service。
|
||||
|
||||
```shell
|
||||
kubectl create -f webp.yaml
|
||||
service "nginx" created
|
||||
statefulset "web" created
|
||||
kubectl apply -f web-parallel.yaml
|
||||
service/nginx created
|
||||
statefulset.apps/web created
|
||||
```
|
||||
|
||||
|
||||
查看你在第一个终端中运行的 `kubectl get` 命令的输出。
|
||||
|
||||
```shell
|
||||
|
|
|
|||
|
|
@ -60,6 +60,8 @@ Pod 使用来自 Google [容器仓库](https://cloud.google.com/container-regist
|
|||
## 快速入门
|
||||
|
||||
|
||||
{{< codenew file="application/cassandra/cassandra-service.yaml" >}}
|
||||
|
||||
如果你希望直接跳到我们使用的命令,以下是全部步骤:
|
||||
|
||||
<!--
|
||||
|
|
@ -71,19 +73,15 @@ Pod 使用来自 Google [容器仓库](https://cloud.google.com/container-regist
|
|||
-->
|
||||
|
||||
```sh
|
||||
#
|
||||
# StatefulSet
|
||||
#
|
||||
|
||||
# 克隆示例存储库
|
||||
git clone https://github.com/kubernetes/examples
|
||||
cd examples
|
||||
kubectl apply -f https://k8s.io/examples/application/cassandra/cassandra-service.yaml
|
||||
```
|
||||
|
||||
# 创建服务来跟踪所有 cassandra statefulset 节点
|
||||
kubectl create -f cassandra/cassandra-service.yaml
|
||||
{{< codenew file="application/cassandra/cassandra-statefulset.yaml" >}}
|
||||
|
||||
```
|
||||
# 创建 statefulset
|
||||
kubectl create -f cassandra/cassandra-statefulset.yaml
|
||||
kubectl apply -f https://k8s.io/examples/application/cassandra/cassandra-statefulset.yaml
|
||||
|
||||
# 验证 Cassandra 集群。请将下面命令中的 cassandra-0 替换为你自己集群中某个 Pod 的名称。
|
||||
kubectl exec -ti cassandra-0 -- nodetool status
|
||||
|
|
@ -157,18 +155,15 @@ spec:
|
|||
```
|
||||
|
||||
|
||||
[下载示例](https://raw.githubusercontent.com/kubernetes/examples/master/cassandra-service.yaml)
|
||||
|
||||
|
||||
Download [`cassandra-service.yaml`](/examples/application/cassandra/cassandra-service.yaml)
|
||||
and [`cassandra-statefulset.yaml`](/examples/application/cassandra/cassandra-statefulset.yaml)
|
||||
|
||||
为 StatefulSet 创建 service
|
||||
|
||||
|
||||
```console
|
||||
$ kubectl create -f cassandra/cassandra-service.yaml
|
||||
kubectl apply -f https://k8s.io/examples/application/cassandra/cassandra-service.yaml
|
||||
```
|
||||
|
||||
|
||||
以下命令显示了 service 是否被成功创建。
|
||||
|
||||
```console
|
||||
|
|
@ -291,17 +286,14 @@ parameters:
|
|||
type: pd-ssd
|
||||
```
|
||||
|
||||
[下载示例](https://raw.githubusercontent.com/kubernetes/examples/master/cassandra-statefulset.yaml)
|
||||
|
||||
创建 Cassandra StatefulSet 如下:
|
||||
创建 Cassandra StatefulSet 如下:
|
||||
|
||||
```console
|
||||
$ kubectl create -f cassandra/cassandra-statefulset.yaml
|
||||
kubectl apply -f https://k8s.io/examples/application/cassandra/cassandra-statefulset.yaml
|
||||
```
|
||||
|
||||
## 步骤 3:验证和修改 Cassandra StatefulSet
|
||||
|
||||
|
||||
这个 StatefulSet 的部署展示了 StatefulSets 提供的两个新特性:
|
||||
|
||||
1. Pod 的名称已知
|
||||
|
|
|
|||
|
|
@ -1,328 +1,241 @@
|
|||
---
|
||||
title: "基于 Persistent Volumes 搭建 WordPress 和 MySQL 应用"
|
||||
approvers:
|
||||
title: "Example: Deploying WordPress and MySQL with Persistent Volumes"
|
||||
reviewers:
|
||||
- ahmetb
|
||||
- jeffmendoza
|
||||
content_template: templates/tutorial
|
||||
weight: 20
|
||||
card:
|
||||
name: tutorials
|
||||
weight: 40
|
||||
title: "Stateful Example: Wordpress with Persistent Volumes"
|
||||
---
|
||||
|
||||
{{% capture overview %}}
|
||||
This tutorial shows you how to deploy a WordPress site and a MySQL database using Minikube. Both applications use PersistentVolumes and PersistentVolumeClaims to store data.
|
||||
|
||||
本示例描述了如何在 Kubernetes 上持久化安装 [WordPress](https://wordpress.org/) 和
|
||||
[MySQL](https://www.mysql.com/) 。在这个安装里我们将使用官方的 [MySQL](https://registry.hub.docker.com/_/mysql/) 和
|
||||
[WordPress](https://registry.hub.docker.com/_/wordpress/) 镜像(WordPress 镜像包含一个 Apache 服务)。
|
||||
A [PersistentVolume](/docs/concepts/storage/persistent-volumes/) (PV) is a piece of storage in the cluster that has been manually provisioned by an administrator, or dynamically provisioned by Kubernetes using a [StorageClass](/docs/concepts/storage/storage-classes). A [PersistentVolumeClaim](/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims) (PVC) is a request for storage by a user that can be fulfilled by a PV. PersistentVolumes and PersistentVolumeClaims are independent from Pod lifecycles and preserve data through restarting, rescheduling, and even deleting Pods.
|
||||
|
||||
{{< warning >}}
|
||||
This deployment is not suitable for production use cases, as it uses single instance WordPress and MySQL Pods. Consider using [WordPress Helm Chart](https://github.com/kubernetes/charts/tree/master/stable/wordpress) to deploy WordPress in production.
|
||||
{{< /warning >}}
|
||||
|
||||
展示的 Kubernetes 概念:
|
||||
{{< note >}}
|
||||
The files provided in this tutorial are using GA Deployment APIs and are specific to kubernetes version 1.9 and later. If you wish to use this tutorial with an earlier version of Kubernetes, please update the API version appropriately, or reference earlier versions of this tutorial.
|
||||
{{< /note >}}
|
||||
|
||||
* [Persistent Volumes](https://kubernetes.io/docs/concepts/storage/persistent-volumes/) 定义持久化磁盘(磁盘生命周期不和 Pods 绑定)。
|
||||
* [Services](https://kubernetes.io/docs/concepts/services-networking/service/) 使得 Pods 能够找到其它 Pods。
|
||||
* [External Load Balancers](https://kubernetes.io/docs/concepts/services-networking/service/#type-loadbalancer) 对外暴露 Services。
|
||||
* [Deployments](http://kubernetes.io/docs/user-guide/deployments/) 确保 Pods 持续运行。
|
||||
* [Secrets](http://kubernetes.io/docs/user-guide/secrets/) 保存敏感密码信息。
|
||||
{{% /capture %}}
|
||||
|
||||
{{% capture objectives %}}
|
||||
* Create PersistentVolumeClaims and PersistentVolumes
|
||||
* Create a `kustomization.yaml` with
|
||||
* a Secret generator
|
||||
* MySQL resource configs
|
||||
* WordPress resource configs
|
||||
* Apply the kustomization directory by `kubectl apply -k ./`
|
||||
* Clean up
|
||||
|
||||
## 快速入门
|
||||
{{% /capture %}}
|
||||
|
||||
{{% capture prerequisites %}}
|
||||
|
||||
在一个名为 `password.txt` 的文件中放置你期望的 MySQL 密码,结尾不要有空行。如果你的编辑器添加了一个空行,开始的 `tr` 命令将会删除它。
|
||||
{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
|
||||
The example shown on this page works with `kubectl` 1.14 and above.
|
||||
|
||||
Download the following configuration files:
|
||||
|
||||
**请注意:**如果你的集群强制启用 **_selinux_** 特性并且你将使用 [Host Path](#host-path) 作为存储,请遵照这个[额外步骤](#selinux)。
|
||||
1. [mysql-deployment.yaml](/examples/application/wordpress/mysql-deployment.yaml)
|
||||
|
||||
1. [wordpress-deployment.yaml](/examples/application/wordpress/wordpress-deployment.yaml)
|
||||
|
||||
{{% /capture %}}
|
||||
|
||||
{{% capture lessoncontent %}}
|
||||
|
||||
## Create PersistentVolumeClaims and PersistentVolumes
|
||||
|
||||
MySQL and Wordpress each require a PersistentVolume to store data. Their PersistentVolumeClaims will be created at the deployment step.
|
||||
|
||||
Many cluster environments have a default StorageClass installed. When a StorageClass is not specified in the PersistentVolumeClaim, the cluster's default StorageClass is used instead.
|
||||
|
||||
When a PersistentVolumeClaim is created, a PersistentVolume is dynamically provisioned based on the StorageClass configuration.
|
||||
|
||||
{{< warning >}}
|
||||
In local clusters, the default StorageClass uses the `hostPath` provisioner. `hostPath` volumes are only suitable for development and testing. With `hostPath` volumes, your data lives in `/tmp` on the node the Pod is scheduled onto and does not move between nodes. If a Pod dies and gets scheduled to another node in the cluster, or the node is rebooted, the data is lost.
|
||||
{{< /warning >}}
|
||||
|
||||
{{< note >}}
|
||||
If you are bringing up a cluster that needs to use the `hostPath` provisioner, the `--enable-hostpath-provisioner` flag must be set in the `controller-manager` component.
|
||||
{{< /note >}}
|
||||
|
||||
{{< note >}}
|
||||
If you have a Kubernetes cluster running on Google Kubernetes Engine, please follow [this guide](https://cloud.google.com/kubernetes-engine/docs/tutorials/persistent-disk).
|
||||
{{< /note >}}
|
||||
|
||||
## Create a kustomization.yaml
|
||||
|
||||
### Add a Secret generator
|
||||
A [Secret](/docs/concepts/configuration/secret/) is an object that stores a piece of sensitive data like a password or key. Since 1.14, `kubectl` supports the management of Kubernetes objects using a kustomization file. You can create a Secret by generators in `kustomization.yaml`.
|
||||
|
||||
Add a Secret generator in `kustomization.yaml` from the following command. You will need to replace `YOUR_PASSWORD` with the password you want to use.
|
||||
|
||||
```shell
|
||||
tr --delete '\n' <password.txt >.strippedpassword.txt && mv .strippedpassword.txt password.txt
|
||||
kubectl create -f https://raw.githubusercontent.com/kubernetes/examples/master/mysql-wordpress-pd/local-volumes.yaml
|
||||
kubectl create secret generic mysql-pass --from-file=password.txt
|
||||
kubectl create -f https://raw.githubusercontent.com/kubernetes/examples/master/mysql-wordpress-pd/mysql-deployment.yaml
|
||||
kubectl create -f https://raw.githubusercontent.com/kubernetes/examples/master/mysql-wordpress-pd/wordpress-deployment.yaml
|
||||
cat <<EOF >./kustomization.yaml
|
||||
secretGenerator:
|
||||
- name: mysql-pass
|
||||
literals:
|
||||
- password=YOUR_PASSWORD
|
||||
EOF
|
||||
```
|
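If you want to preview the Secret that this generator will produce before applying anything, you can render the kustomization locally. This is only a quick sketch and assumes `kubectl` 1.14+ as noted in the prerequisites; the generated Secret name carries a content-hash suffix (similar to the `mysql-pass-c57bb4t7mf` shown in the verification section below):

```shell
kubectl kustomize ./
```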
||||
|
||||
## Add resource configs for MySQL and WordPress
|
||||
|
||||
## 目录
|
||||
The following manifest describes a single-instance MySQL Deployment. The MySQL container mounts the PersistentVolume at /var/lib/mysql. The `MYSQL_ROOT_PASSWORD` environment variable sets the database password from the Secret.
|
||||
|
||||
{{< codenew file="application/wordpress/mysql-deployment.yaml" >}}
|
||||
|
||||
The following manifest describes a single-instance WordPress Deployment. The WordPress container mounts the
|
||||
PersistentVolume at `/var/www/html` for website data files. The `WORDPRESS_DB_HOST` environment variable sets
|
||||
the name of the MySQL Service defined above, and WordPress will access the database by Service. The
|
||||
`WORDPRESS_DB_PASSWORD` environment variable sets the database password from the Secret kustomize generated.
|
||||
|
||||
{{< codenew file="application/wordpress/wordpress-deployment.yaml" >}}
|
||||
|
||||
[在 Kubernetes 上持久化安装 MySQL 和 WordPress](#persistent-installation-of-mysql-and-wordpress-on-kubernetes)
|
||||
- [快速入门](#quickstart)
|
||||
- [目录](#table-of-contents)
|
||||
- [集群要求](#cluster-requirements)
|
||||
- [决定在哪里存储你的数据](#decide-where-you-will-store-your-data)
|
||||
- [Host Path](#host-path)
|
||||
- [SELinux](#selinux)
|
||||
- [GCE Persistent Disk](#gce-persistent-disk)
|
||||
- [创建 MySQL 密码 secret](#create-the-mysql-password-secret)
|
||||
- [部署 MySQL](#deploy-mysql)
|
||||
- [部署 WordPress](#deploy-wordpress)
|
||||
- [访问你的新 WordPress 博客](#visit-your-new-wordpress-blog)
|
||||
- [删除并重启你的博客](#take-down-and-restart-your-blog)
|
||||
- [接下来的步骤](#next-steps)
|
||||
1. Download the MySQL deployment configuration file.
|
||||
|
||||
```shell
|
||||
curl -LO https://k8s.io/examples/application/wordpress/mysql-deployment.yaml
|
||||
```
|
||||
|
||||
2. Download the WordPress configuration file.
|
||||
|
||||
```shell
|
||||
curl -LO https://k8s.io/examples/application/wordpress/wordpress-deployment.yaml
|
||||
```
|
||||
|
||||
3. Add them to `kustomization.yaml` file.
|
||||
|
||||
```shell
|
||||
cat <<EOF >>./kustomization.yaml
|
||||
resources:
|
||||
- mysql-deployment.yaml
|
||||
- wordpress-deployment.yaml
|
||||
EOF
|
||||
```
|
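At this point `kustomization.yaml` should contain both the Secret generator and the two resource entries. The expected contents shown in the comments below are only a sketch assembled from the two snippets above; a quick check before applying:

```shell
cat ./kustomization.yaml
# secretGenerator:
# - name: mysql-pass
#   literals:
#   - password=YOUR_PASSWORD
# resources:
#   - mysql-deployment.yaml
#   - wordpress-deployment.yaml
```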
||||
|
||||
## 集群要求
|
||||
|
||||
|
||||
Kubernetes本质是模块化的,可以在各种环境中运行。但并不是所有集群都相同。此处是本示例的一些要求:
|
||||
* 需要 1.2 版本以上的 Kubernetes,以使用更新的特性,例如 PV Claims 和 Deployments。运行 `kubectl version` 来查看你的集群版本。
|
||||
* [Cluster DNS](https://github.com/kubernetes/dns) 将被用于服务发现。
|
||||
* 一个 [external load balancer](https://kubernetes.io/docs/concepts/services-networking/service/#type-loadbalancer) 将被用于接入 WordPress。
|
||||
* 使用了 [Persistent Volume Claims](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims)。你必须创建集群中需要的 Persistent Volumes。本示例将展示两种类型的 volume 的创建方法,但是任何类型的 volume 都是足够使用的。
|
||||
|
||||
|
||||
查阅 [Getting Started Guide](http://kubernetes.io/docs/getting-started-guides/),搭建一个集群并安装 [kubectl](http://kubernetes.io/docs/user-guide/prereqs/) 命令行工具。
|
||||
|
||||
|
||||
## 决定在哪里存储你的数据
|
||||
|
||||
|
||||
MySQL 和 WordPress 各自使用一个 [Persistent Volume](https://kubernetes.io/docs/concepts/storage/persistent-volumes/) 来存储自己的数据。我们将使用一个 Persistent Volume Claim 来取得一个可用的持久化存储。本示例覆盖了 HostPath 和
|
||||
GCEPersistentDisk 卷类型。你可以从两者中选择一个,或者查看 [Persistent Volumes的类型](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#types-of-persistent-volumes)。
|
||||
|
||||
|
||||
### Host Path
|
||||
|
||||
|
||||
Host paths 是映射到主机上目录的卷。**这种类型应该只用于测试目的或者单节点集群**。如果 pod 在一个新的节点上重建,数据将不会在节点之间移动。如果 pod 被删除并在一个新的节点上重建,数据将会丢失。
|
||||
|
||||
|
||||
##### SELinux
|
||||
|
||||
|
||||
在支持 selinux 的系统上,保持它为 enabled/enforcing 是最佳选择。然而,docker 容器使用 "_svirt_sandbox_file_t_" 标签类型挂载 host path,这和默认的 /tmp ("_tmp_t_") 标签类型不兼容。在 mysql 容器试图对 _/var/lib/mysql_ 执行 `chown` 时将导致权限错误。
|
||||
因此,要在一个启用 selinx 的系统上使用 host path,你应该预先创建 host path 路径(/tmp/data/)并将它的 selinux 标签类型改变为 "_svirt_sandbox_file_t_",就像下面一样:
|
||||
|
||||
## Apply and Verify
|
||||
The `kustomization.yaml` contains all the resources for deploying a WordPress site and a
|
||||
MySQL database. You can apply the directory by running:
|
||||
```shell
|
||||
## on every node:
|
||||
mkdir -p /tmp/data
|
||||
chmod a+rwt /tmp/data # match /tmp permissions
|
||||
chcon -Rt svirt_sandbox_file_t /tmp/data
|
||||
kubectl apply -k ./
|
||||
```
|
||||
|
||||
Now you can verify that all objects exist.
|
||||
|
||||
继续进行 host path 配置,在 Kubernetes 中使用 [local-volumes.yaml](https://git.k8s.io/examples/mysql-wordpress-pd/local-volumes.yaml) 创建 persistent volume 对象:
|
||||
1. Verify that the Secret exists by running the following command:
|
||||
|
||||
```shell
|
||||
export KUBE_REPO=https://raw.githubusercontent.com/kubernetes/examples/master
|
||||
kubectl create -f $KUBE_REPO/mysql-wordpress-pd/local-volumes.yaml
|
||||
```
|
||||
```shell
|
||||
kubectl get secrets
|
||||
```
|
||||
|
||||
The response should be like this:
|
||||
|
||||
```shell
|
||||
NAME TYPE DATA AGE
|
||||
mysql-pass-c57bb4t7mf Opaque 1 9s
|
||||
```
|
||||
|
||||
### GCE Persistent Disk
|
||||
2. Verify that a PersistentVolume got dynamically provisioned.
|
||||
|
||||
```shell
|
||||
kubectl get pvc
|
||||
```
|
||||
|
||||
{{< note >}}
|
||||
It can take up to a few minutes for the PVs to be provisioned and bound.
|
||||
{{< /note >}}
|
||||
|
||||
The response should be like this:
|
||||
|
||||
如果在 [Google Compute Engine](http://kubernetes.io/docs/getting-started-guides/gce/) 上运行集群,你可以使用这个存储选项。
|
||||
```shell
|
||||
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
|
||||
mysql-pv-claim Bound pvc-8cbd7b2e-4044-11e9-b2bb-42010a800002 20Gi RWO standard 77s
|
||||
wp-pv-claim Bound pvc-8cd0df54-4044-11e9-b2bb-42010a800002 20Gi RWO standard 77s
|
||||
```
|
||||
|
||||
3. Verify that the Pod is running by running the following command:
|
||||
|
||||
创建两个永久磁盘。你需要在和 Kubernetes 集群相同的 [GCE zone](https://cloud.google.com/compute/docs/zones) 中创建这些磁盘。默认的安装脚本将在 `us-central1-b` zone 中创建集群,就像你在 [config-default.sh](https://git.k8s.io/kubernetes/cluster/gce/config-default.sh) 文件中看到的。替换下面的 `<zone>` 为合适的 zone。`wordpress-1` 和 `wordpress-2` 的名字必须和 [gce-volumes.yaml](https://git.k8s.io/examples/mysql-wordpress-pd/gce-volumes.yaml) 指定的 `pdName` 字段匹配。
|
||||
```shell
|
||||
kubectl get pods
|
||||
```
|
||||
|
||||
```shell
|
||||
gcloud compute disks create --size=20GB --zone=<zone> wordpress-1
|
||||
gcloud compute disks create --size=20GB --zone=<zone> wordpress-2
|
||||
```
|
||||
{{< note >}}
|
||||
It can take up to a few minutes for the Pod's Status to be `RUNNING`.
|
||||
{{< /note >}}
|
||||
|
||||
The response should be like this:
|
||||
|
||||
在 Kubernetes 为这些磁盘创建 persistent volume 对象:
|
||||
```
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
wordpress-mysql-1894417608-x5dzt 1/1 Running 0 40s
|
||||
```
|
||||
|
||||
```shell
|
||||
export KUBE_REPO=https://raw.githubusercontent.com/kubernetes/examples/master
|
||||
kubectl create -f $KUBE_REPO/mysql-wordpress-pd/gce-volumes.yaml
|
||||
```
|
||||
4. Verify that the Service is running by running the following command:
|
||||
|
||||
```shell
|
||||
kubectl get services wordpress
|
||||
```
|
||||
|
||||
## 创建 MySQL 密码 Secret
|
||||
The response should be like this:
|
||||
|
||||
```
|
||||
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
|
||||
wordpress ClusterIP 10.0.0.89 <pending> 80:32406/TCP 4m
|
||||
```
|
||||
|
||||
使用一个 [Secret](http://kubernetes.io/docs/user-guide/secrets/) 对象存储 MySQL 密码。首先,创建一个名为 `password.txt` 的文件(和 wordpress 示例文件在相同的文件夹),并且将你的密码保存于其中。请确保密码文件的结尾没有空行。如果你的编辑器添加了一个,开始的 `tr` 命令将会删除这个空行。然后,创建这个 Secret 对象。
|
||||
{{< note >}}
|
||||
Minikube can only expose Services through `NodePort`. The EXTERNAL-IP is always pending.
|
||||
{{< /note >}}
|
||||
|
||||
```shell
|
||||
tr --delete '\n' <password.txt >.strippedpassword.txt && mv .strippedpassword.txt password.txt
|
||||
kubectl create secret generic mysql-pass --from-file=password.txt
|
||||
```
|
||||
5. Run the following command to get the IP Address for the WordPress Service:
|
||||
|
||||
```shell
|
||||
minikube service wordpress --url
|
||||
```
|
||||
|
||||
MySQL 和 WordPress pod 配置引用了这个 secret,所以这些 pods 就可以访问它。MySQL pod 会设置数据库密码,并且 WordPress 将使用这个密码来访问数据库。
|
||||
The response should be like this:
|
||||
|
||||
```
|
||||
http://1.2.3.4:32406
|
||||
```
|
||||
|
||||
## 部署 MySQL
|
||||
6. Copy the IP address, and load the page in your browser to view your site.
|
||||
|
||||
<!--
|
||||
Now that the persistent disks and secrets are defined, the Kubernetes
|
||||
pods can be launched. Start MySQL using
|
||||
[mysql-deployment.yaml](https://git.k8s.io/examples/mysql-wordpress-pd/mysql-deployment.yaml).
|
||||
-->
|
||||
现在我们已经定义了永久磁盘和 secrets,可以启动 Kubernetes pods 了。使用 [mysql-deployment.yaml](https://git.k8s.io/examples/mysql-wordpress-pd/mysql-deployment.yaml) 启动 MySQL。
|
||||
You should see the WordPress set up page similar to the following screenshot.
|
||||
|
||||
```shell
|
||||
kubectl create -f $KUBE_REPO/mysql-wordpress-pd/mysql-deployment.yaml
|
||||
```
|
||||

|
||||
|
||||
{{< warning >}}
|
||||
Do not leave your WordPress installation on this page. If another user finds it, they can set up a website on your instance and use it to serve malicious content. <br/><br/>Either install WordPress by creating a username and password or delete your instance.
|
||||
{{< /warning >}}
|
||||
|
||||
查看 [mysql-deployment.yaml](https://git.k8s.io/examples/mysql-wordpress-pd/mysql-deployment.yaml),注意到我们定义了一个挂载到 `/var/lib/mysql` 的卷,然后创建了一个请求 20G 卷的 Persistent Volume Claim。这个要求可以被任何符合这个要求的卷满足,在我们的例子中,可以是上面创建的卷中的一个。
|
||||
{{% /capture %}}
|
||||
|
||||
{{% capture cleanup %}}
|
||||
|
||||
再看一下 `env` 一节,我们引用上面创建的 `mysql-pass` secret 来指定密码。Secrets 可以有多组键值对。我们的只有一个键 `password.txt`,它是我们用来创建 secret 的文件名。[MySQL镜像](https://hub.docker.com/_/mysql/) 使用 `MYSQL_ROOT_PASSWORD` 环境变量设置数据库密码。
|
||||
1. Run the following command to delete your Secret, Deployments, Services and PersistentVolumeClaims:
|
||||
|
||||
```shell
|
||||
kubectl delete -k ./
|
||||
```
|
||||
|
||||
在很短的时间内,新建的 pod 将达到 `Running` 状态。列出所有的 pods,查看新建的 pod 的状态。
|
||||
{{% /capture %}}
|
||||
|
||||
```shell
|
||||
kubectl get pods
|
||||
```
|
||||
{{% capture whatsnext %}}
|
||||
|
||||
```
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
wordpress-mysql-cqcf4-9q8lo 1/1 Running 0 1m
|
||||
```
|
||||
* Learn more about [Introspection and Debugging](/docs/tasks/debug-application-cluster/debug-application-introspection/)
|
||||
* Learn more about [Jobs](/docs/concepts/workloads/controllers/jobs-run-to-completion/)
|
||||
* Learn more about [Port Forwarding](/docs/tasks/access-application-cluster/port-forward-access-application-cluster/)
|
||||
* Learn how to [Get a Shell to a Container](/docs/tasks/debug-application-cluster/get-shell-running-container/)
|
||||
|
||||
|
||||
Kubernetes 记录每个 pod 的 stderr 和 stdout。使用 `kubectl log` 查看一个 pod 的日志。从 `get pods` 复制 pod 名字,然后:
|
||||
|
||||
```shell
|
||||
kubectl logs <pod-name>
|
||||
```
|
||||
|
||||
```
|
||||
...
|
||||
2016-02-19 16:58:05 1 [Note] InnoDB: 128 rollback segment(s) are active.
|
||||
2016-02-19 16:58:05 1 [Note] InnoDB: Waiting for purge to start
|
||||
2016-02-19 16:58:05 1 [Note] InnoDB: 5.6.29 started; log sequence number 1626007
|
||||
2016-02-19 16:58:05 1 [Note] Server hostname (bind-address): '*'; port: 3306
|
||||
2016-02-19 16:58:05 1 [Note] IPv6 is available.
|
||||
2016-02-19 16:58:05 1 [Note] - '::' resolves to '::';
|
||||
2016-02-19 16:58:05 1 [Note] Server socket created on IP: '::'.
|
||||
2016-02-19 16:58:05 1 [Warning] 'proxies_priv' entry '@ root@wordpress-mysql-cqcf4-9q8lo' ignored in --skip-name-resolve mode.
|
||||
2016-02-19 16:58:05 1 [Note] Event Scheduler: Loaded 0 events
|
||||
2016-02-19 16:58:05 1 [Note] mysqld: ready for connections.
|
||||
Version: '5.6.29' socket: '/var/run/mysqld/mysqld.sock' port: 3306 MySQL Community Server (GPL)
|
||||
```
|
||||
|
||||
|
||||
我们还需要在 [mysql-deployment.yaml](https://git.k8s.io/examples/mysql-wordpress-pd/mysql-deployment.yaml) 中创建一个 service 以允许其它 pods 访问这个 mysql 示例。`wordpress-mysql` 名称被解析为这个 pod 的 IP。
|
||||
|
||||
|
||||
到此为止,我们创建了一个 Deployment,一个 Pod,一个 PVC,一个 Service,一个 Endpoint,两个 PV 和一个 Secret,显示如下:
|
||||
|
||||
```shell
|
||||
kubectl get deployment,pod,svc,endpoints,pvc -l app=wordpress -o wide && \
|
||||
kubectl get secret mysql-pass && \
|
||||
kubectl get pv
|
||||
```
|
||||
|
||||
```shell
|
||||
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
|
||||
deploy/wordpress-mysql 1 1 1 1 3m
|
||||
NAME READY STATUS RESTARTS AGE IP NODE
|
||||
po/wordpress-mysql-3040864217-40soc 1/1 Running 0 3m 172.17.0.2 127.0.0.1
|
||||
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
|
||||
svc/wordpress-mysql None <none> 3306/TCP 3m app=wordpress,tier=mysql
|
||||
NAME ENDPOINTS AGE
|
||||
ep/wordpress-mysql 172.17.0.2:3306 3m
|
||||
NAME STATUS VOLUME CAPACITY ACCESSMODES AGE
|
||||
pvc/mysql-pv-claim Bound local-pv-2 20Gi RWO 3m
|
||||
NAME TYPE DATA AGE
|
||||
mysql-pass Opaque 1 3m
|
||||
NAME CAPACITY ACCESSMODES STATUS CLAIM REASON AGE
|
||||
local-pv-1 20Gi RWO Available 3m
|
||||
local-pv-2 20Gi RWO Bound default/mysql-pv-claim 3m
|
||||
```
|
||||
|
||||
|
||||
## 部署 WordPress
|
||||
|
||||
|
||||
接下来使用 [wordpress-deployment.yaml](https://git.k8s.io/examples/mysql-wordpress-pd/wordpress-deployment.yaml) 部署 WordPress:
|
||||
|
||||
```shell
|
||||
kubectl create -f $KUBE_REPO/mysql-wordpress-pd/wordpress-deployment.yaml
|
||||
```
|
||||
|
||||
|
||||
我们在这里使用了许多相同的特性,比如对 persistent storage 的 volume claim 和 password 的 secret。
|
||||
|
||||
|
||||
[WordPress 镜像](https://hub.docker.com/_/wordpress/) 通过环境变量 `WORDPRESS_DB_HOST` 接收数据库的主机名。我们将这个环境变量值设置为我们创建的 MySQL
|
||||
service 的名字:`wordpress-mysql`。
|
||||
|
||||
|
||||
WordPress service 具有 `type: LoadBalancer` 的设置。这将 wordpress service 置于一个外部 IP 之下。
|
||||
|
||||
|
||||
找到你的 WordPress service 的外部 IP 地址。**为这个 service 分配一个外部 IP 地址可能会耗时一分钟左右,这取决于你的集群环境。**
|
||||
|
||||
```shell
|
||||
kubectl get services wordpress
|
||||
```
|
||||
|
||||
```
|
||||
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
|
||||
wordpress 10.0.0.5 1.2.3.4 80/TCP 19h
|
||||
```
|
||||
|
||||
|
||||
## 访问你的新 WordPress 博客
|
||||
|
||||
|
||||
现在,我们可以访问这个运行的 WordPress 应用。请使用你上面获取的 service 的外部 IP 地址。
|
||||
|
||||
```
|
||||
http://<external-ip>
|
||||
```
|
||||
|
||||
|
||||
你应该可以看到熟悉的 WordPress 初始页面。
|
||||
|
||||

|
||||
|
||||
|
||||
> 警告:不要在这个页面上留下你的 WordPress 设置。如果被其他用户发现,他们可能在你的实例上创建一个网站并用它来为可能有害的内容提供服务。你应该继续创建用户名密码之后的安装过程,删除你的实例,或者建立一个防火墙来限制接入。
|
||||
|
||||
|
||||
## 删除并重启你的博客
|
||||
|
||||
|
||||
建立你的 WordPress 博客并简单使用一下。然后删除它的 pods 并再次启动它们。由于使用了永久磁盘,你的博客的状态将被保留。
|
||||
|
||||
|
||||
所有的资源都被标记为 `app=wordpress`,你可以使用 label selector 轻松的删除它们:
|
||||
|
||||
```shell
|
||||
kubectl delete deployment,service -l app=wordpress
|
||||
kubectl delete secret mysql-pass
|
||||
```
|
||||
|
||||
|
||||
稍后使用原来的命令重建资源,这将会选择包含原来的完整数据的磁盘。由于我们没有删除 PV Claims,在删除我们的 pods 后,集群中没有任何一个 pod 能够 claim 它们。保留 PV Claims 也保证了重建 Pods 不会导致 PD 切换 Pods。
|
||||
|
||||
|
||||
如果你已经准备好了释放你的永久磁盘及其上的数据,请运行:
|
||||
|
||||
```shell
|
||||
kubectl delete pvc -l app=wordpress
|
||||
```
|
||||
|
||||
And then delete the volume objects themselves:
|
||||
|
||||
```shell
|
||||
kubectl delete pv local-pv-1 local-pv-2
|
||||
```
|
||||
|
||||
|
||||
或者
|
||||
|
||||
```shell
|
||||
kubectl delete pv wordpress-pv-1 wordpress-pv-2
|
||||
```
|
||||
|
||||
|
||||
## 接下来的步骤
|
||||
|
||||
* [Introspection and Debugging](http://kubernetes.io/docs/user-guide/introspection-and-debugging/)
|
||||
* [Jobs](http://kubernetes.io/docs/user-guide/jobs/) may be useful to run SQL queries.
|
||||
* [Exec](http://kubernetes.io/docs/user-guide/getting-into-containers/)
|
||||
* [Port Forwarding](http://kubernetes.io/docs/user-guide/connecting-to-applications-port-forward/)
|
||||
|
||||
|
||||
[]()
|
||||
{{% /capture %}}
|
||||
|
||||
|
|
|
|||
Some files were not shown because too many files have changed in this diff.