Consolidate YAML files [part-4] (#9241)
This PR consolidates YAML files used in the `tasks/debug-application-cluster` subdirectory. Depends-On: #9236
parent a11e02d575 · commit d15da5a726
@@ -28,15 +28,15 @@ the description of how logs are stored and handled on the node to be useful.
 
 In this section, you can see an example of basic logging in Kubernetes that
 outputs data to the standard output stream. This demonstration uses
-a [pod specification](/docs/concepts/cluster-administration/counter-pod.yaml) with
+a [pod specification](/examples/debug/counter-pod.yaml) with
 a container that writes some text to standard output once per second.
 
-{{< code file="counter-pod.yaml" >}}
+{{< codenew file="debug/counter-pod.yaml" >}}
 
 To run this pod, use the following command:
 
 ```shell
-$ kubectl create -f https://k8s.io/docs/tasks/debug-application-cluster/counter-pod.yaml
+$ kubectl create -f https://k8s.io/examples/debug/counter-pod.yaml
 pod "counter" created
 ```
 
@@ -251,4 +251,4 @@ You can implement cluster-level logging by exposing or pushing logs directly fro
 every application; however, the implementation for such a logging mechanism
 is outside the scope of Kubernetes.
 
-{{% /capture %}}
+{{% /capture %}}
@@ -77,7 +77,7 @@ A policy with no (0) rules is treated as illegal.
 
 Below is an example audit policy file:
 
-{{< code file="audit-policy.yaml" >}}
+{{< codenew file="audit/audit-policy.yaml" >}}
 
 You can use a minimal audit policy file to log all requests at the `Metadata` level:
 
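The minimal `Metadata`-level policy mentioned in the hunk above is small enough to sketch inline. This matches the shape of the published Kubernetes example, though the `audit.k8s.io/v1beta1` API version is an assumption tied to the era of this PR; newer clusters use `audit.k8s.io/v1`:

```yaml
# Minimal sketch of an audit policy: log every request at the Metadata level.
# API version is an assumption for this PR's vintage; verify against your cluster.
apiVersion: audit.k8s.io/v1beta1
kind: Policy
rules:
- level: Metadata
```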
@@ -1,10 +0,0 @@
-apiVersion: v1
-kind: Pod
-metadata:
-  name: counter
-spec:
-  containers:
-  - name: count
-    image: busybox
-    args: [/bin/sh, -c,
-            'i=0; while true; do echo "$i: $(date)"; i=$((i+1)); sleep 1; done']
@@ -22,12 +22,12 @@ your pods. But there are a number of ways to get even more information about you
 
 For this example we'll use a Deployment to create two pods, similar to the earlier example.
 
-{{< code file="nginx-dep.yaml" >}}
+{{< codenew file="application/nginx-with-request.yaml" >}}
 
 Create deployment by running following command:
 
 ```shell
-$ kubectl create -f https://k8s.io/docs/tasks/debug-application-cluster/nginx-dep.yaml
+$ kubectl create -f https://k8s.io/examples/application/nginx-with-request.yaml
 deployment "nginx-deployment" created
 ```
 
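The renamed `application/nginx-with-request.yaml` is not reproduced in this diff. For orientation, a plausible sketch of a two-replica nginx Deployment with explicit resource requests follows; the API version, image tag, and request values are assumptions, not the file's verbatim contents (the test hunk later in this diff only confirms it contains a single Deployment):

```yaml
# Sketch of a Deployment like application/nginx-with-request.yaml (assumed contents).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        resources:
          requests:        # the "with-request" part: explicit resource requests
            cpu: 100m
            memory: 64Mi
        ports:
        - containerPort: 80
```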
@@ -34,11 +34,11 @@ In this exercise, you create a Pod that runs one container.
 The configuration file specifies a command that runs when
 the container starts.
 
-{{< code file="termination.yaml" >}}
+{{< codenew file="debug/termination.yaml" >}}
 
 1. Create a Pod based on the YAML configuration file:
 
-        kubectl create -f https://k8s.io/docs/tasks/debug-application-cluster/termination.yaml
+        kubectl create -f https://k8s.io/examples/debug/termination.yaml
 
 In the YAML file, in the `cmd` and `args` fields, you can see that the
 container sleeps for 10 seconds and then writes "Sleep expired" to
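`debug/termination.yaml` itself is not shown in the hunk. Based on the surrounding description (the container sleeps 10 seconds, then writes "Sleep expired"), it looks roughly like this sketch; the Pod and container names and the image are assumptions:

```yaml
# Sketch of debug/termination.yaml (assumed names/image): the container sleeps
# 10 seconds, writes "Sleep expired" to the termination log, and exits.
apiVersion: v1
kind: Pod
metadata:
  name: termination-demo
spec:
  containers:
  - name: termination-demo-container
    image: debian
    command: ["/bin/sh"]
    args: ["-c", "sleep 10 && echo Sleep expired > /dev/termination-log"]
```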
@@ -60,7 +60,7 @@ average, approximately 100Mb RAM and 100m CPU is needed.
 Deploy event exporter to your cluster using the following command:
 
 ```shell
-kubectl create -f https://k8s.io/docs/tasks/debug-application-cluster/event-exporter-deploy.yaml
+kubectl create -f https://k8s.io/examples/debug/event-exporter.yaml
 ```
 
 Since event exporter accesses the Kubernetes API, it requires permissions to
@@ -70,7 +70,7 @@ to allow event exporter to read events. To make sure that event exporter
 pod will not be evicted from the node, you can additionally set up resource
 requests. As mentioned earlier, 100Mb RAM and 100m CPU should be enough.
 
-{{< code file="event-exporter-deploy.yaml" >}}
+{{< codenew file="debug/event-exporter.yaml" >}}
 
 ## User Guide
 
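The renamed `debug/event-exporter.yaml` bundles a ServiceAccount, a ClusterRoleBinding, and a Deployment (per the test hunk below) but is not shown here. A minimal sketch of just the Deployment portion, with the 100m CPU / 100Mi memory requests the text recommends; the image, tag, and ServiceAccount name are placeholders, not the file's real values:

```yaml
# Sketch (assumed names, placeholder image): event exporter Deployment with the
# recommended resource requests so the pod is unlikely to be evicted.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: event-exporter
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: event-exporter
  template:
    metadata:
      labels:
        app: event-exporter
    spec:
      serviceAccountName: event-exporter   # assumed; must match the RBAC objects
      containers:
      - name: event-exporter
        image: registry.example.com/event-exporter:latest   # placeholder image
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
```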
@@ -94,4 +94,4 @@ jsonPayload.involvedObject.name:"nginx-deployment"
 
 {{< figure src="/images/docs/stackdriver-event-exporter-filter.png" alt="Filtered events in the Stackdriver Logging interface" width="500" >}}
 
-{{% /capture %}}
+{{% /capture %}}
@@ -28,12 +28,12 @@ running Container.
 In this exercise, you create a Pod that has one Container. The Container
 runs the nginx image. Here is the configuration file for the Pod:
 
-{{< code file="shell-demo.yaml" >}}
+{{< codenew file="application/shell-demo.yaml" >}}
 
 Create the Pod:
 
 ```shell
-kubectl create -f https://k8s.io/docs/tasks/debug-application-cluster/shell-demo.yaml
+kubectl create -f https://k8s.io/examples/application/shell-demo.yaml
 ```
 
 Verify that the Container is running:
 
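`application/shell-demo.yaml` is referenced but not included in the hunk. A sketch consistent with "one Container ... runs the nginx image"; the shared `emptyDir` volume is an assumption carried over from the published get-shell example:

```yaml
# Sketch of application/shell-demo.yaml (assumed): a single nginx container
# with an emptyDir volume to poke at from the interactive shell.
apiVersion: v1
kind: Pod
metadata:
  name: shell-demo
spec:
  volumes:
  - name: shared-data
    emptyDir: {}
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/nginx/html
```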
@@ -96,7 +96,7 @@ than Google Kubernetes Engine. Proceed at your own risk.
 1. Deploy a `ConfigMap` with the logging agent configuration by running the following command:
 
    ```
-   kubectl create -f https://k8s.io/docs/tasks/debug-application-cluster/fluentd-gcp-configmap.yaml
+   kubectl create -f https://k8s.io/examples/debug/fluentd-gcp-configmap.yaml
    ```
 
    The command creates the `ConfigMap` in the `default` namespace. You can download the file
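`debug/fluentd-gcp-configmap.yaml` is a ConfigMap whose data keys hold fluentd configuration fragments; the file itself is not part of this hunk. A skeletal sketch, where the key name and the tail source are assumptions rather than the file's actual contents:

```yaml
# Skeletal sketch of a fluentd agent ConfigMap (assumed key and config body).
apiVersion: v1
kind: ConfigMap
metadata:
  name: fluentd-gcp-config
  namespace: default
data:
  containers.input.conf: |
    <source>
      @type tail
      path /var/log/containers/*.log
      pos_file /var/log/gcp-containers.log.pos
      tag kubernetes.*
    </source>
```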
@@ -105,7 +105,7 @@ than Google Kubernetes Engine. Proceed at your own risk.
 1. Deploy the logging agent `DaemonSet` by running the following command:
 
    ```
-   kubectl create -f https://k8s.io/docs/tasks/debug-application-cluster/fluentd-gcp-ds.yaml
+   kubectl create -f https://k8s.io/examples/debug/fluentd-gcp-ds.yaml
    ```
 
    You can download and edit this file before using it as well.
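The companion `debug/fluentd-gcp-ds.yaml` (a single DaemonSet, per the test hunk below) is likewise not shown. A skeletal sketch of the pattern: mount the ConfigMap from the previous step plus the node's log directory; names and image are placeholders:

```yaml
# Skeletal sketch (assumed names, placeholder image): fluentd DaemonSet that
# mounts the agent ConfigMap and the node's /var/log directory.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-gcp
  namespace: default
spec:
  selector:
    matchLabels:
      app: fluentd-gcp
  template:
    metadata:
      labels:
        app: fluentd-gcp
    spec:
      containers:
      - name: fluentd-gcp
        image: registry.example.com/fluentd-gcp:latest   # placeholder image
        volumeMounts:
        - name: config
          mountPath: /etc/fluent/config.d
        - name: varlog
          mountPath: /var/log
      volumes:
      - name: config
        configMap:
          name: fluentd-gcp-config
      - name: varlog
        hostPath:
          path: /var/log
```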
@@ -129,16 +129,16 @@ default fluentd-gcp-v2.0 3 3 3 beta.kubernetes.i
 ```
 
 To understand how logging with Stackdriver works, consider the following
-synthetic log generator pod specification [counter-pod.yaml](/docs/tasks/debug-application-cluster/counter-pod.yaml):
+synthetic log generator pod specification [counter-pod.yaml](/examples/debug/counter-pod.yaml):
 
-{{< code file="counter-pod.yaml" >}}
+{{< codenew file="debug/counter-pod.yaml" >}}
 
 This pod specification has one container that runs a bash script
 that writes out the value of a counter and the date once per
 second, and runs indefinitely. Let's create this pod in the default namespace.
 
 ```shell
-kubectl create -f https://k8s.io/docs/tasks/debug-application-cluster/counter-pod.yaml
+kubectl create -f https://k8s.io/examples/debug/counter-pod.yaml
 ```
 
 You can observe the running pod:
 
@@ -175,7 +175,7 @@ pod "counter" deleted
 and then recreating it:
 
 ```shell
-$ kubectl create -f https://k8s.io/docs/tasks/debug-application-cluster/counter-pod.yaml
+$ kubectl create -f https://k8s.io/examples/debug/counter-pod.yaml
 pod "counter" created
 ```
 
@@ -64,7 +64,7 @@ customized node problems.
 
 * **Step 1:** `node-problem-detector.yaml`:
 
-{{< code file="node-problem-detector.yaml" >}}
+{{< codenew file="debug/node-problem-detector.yaml" >}}
 
 
 ***Notice that you should make sure the system log directory is right for your
@@ -73,7 +73,7 @@ OS distro.***
 * **Step 2:** Start node problem detector with `kubectl`:
 
 ```shell
-kubectl create -f https://k8s.io/docs/tasks/debug-application-cluster/node-problem-detector.yaml
+kubectl create -f https://k8s.io/examples/debug/node-problem-detector.yaml
 ```
 
 ### Addon Pod
@@ -98,14 +98,14 @@ following the steps:
   node-problem-detector-config --from-file=config/`.
 * **Step 3:** Change the `node-problem-detector.yaml` to use the ConfigMap:
 
-{{< code file="node-problem-detector-configmap.yaml" >}}
+{{< codenew file="debug/node-problem-detector-configmap.yaml" >}}
 
 
 * **Step 4:** Re-create the node problem detector with the new yaml file:
 
 ```shell
-kubectl delete -f https://k8s.io/docs/tasks/debug-application-cluster/node-problem-detector.yaml # If you have a node-problem-detector running
-kubectl create -f https://k8s.io/docs/tasks/debug-application-cluster/node-problem-detector-configmap.yaml
+kubectl delete -f https://k8s.io/examples/debug/node-problem-detector.yaml # If you have a node-problem-detector running
+kubectl create -f https://k8s.io/examples/debug/node-problem-detector-configmap.yaml
 ```
 
 ***Notice that this approach only applies to node problem detector started with `kubectl`.***
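Neither node-problem-detector manifest appears in the diff; the Step 3 variant differs from Step 1 chiefly in sourcing the detector's configuration from the ConfigMap created in the earlier `kubectl create configmap` step. A condensed DaemonSet sketch of that variant, with placeholder image and assumed mount paths:

```yaml
# Condensed sketch (placeholder image, assumed paths): node-problem-detector
# DaemonSet reading its config from the node-problem-detector-config ConfigMap.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-problem-detector
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: node-problem-detector
  template:
    metadata:
      labels:
        app: node-problem-detector
    spec:
      containers:
      - name: node-problem-detector
        image: registry.example.com/node-problem-detector:latest   # placeholder
        volumeMounts:
        - name: log
          mountPath: /var/log
          readOnly: true
        - name: config          # the ConfigMap-backed config directory
          mountPath: /config
          readOnly: true
      volumes:
      - name: log
        hostPath:
          path: /var/log/       # must match the node's system log directory
      - name: config
        configMap:
          name: node-problem-detector-config
```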
@@ -177,4 +177,4 @@ resource overhead on each node. Usually this is fine, because:
 * Even under high load, the resource usage is acceptable.
   (see [benchmark result](https://github.com/kubernetes/node-problem-detector/issues/2#issuecomment-220255629))
 
-{{% /capture %}}
+{{% /capture %}}
@@ -268,7 +268,6 @@ func TestExampleObjectSchemas(t *testing.T) {
 	// Please help maintain the alphabeta order in the map
 	cases := map[string]map[string][]runtime.Object{
 		"docs/concepts/cluster-administration": {
-			"counter-pod":            {&api.Pod{}},
 			"fluentd-sidecar-config": {&api.ConfigMap{}},
 			"nginx-app":              {&api.Service{}, &extensions.Deployment{}},
 			"nginx-deployment":       {&extensions.Deployment{}},
@@ -393,17 +392,6 @@ func TestExampleObjectSchemas(t *testing.T) {
 			"task-pv-volume":         {&api.PersistentVolume{}},
 			"tcp-liveness-readiness": {&api.Pod{}},
 		},
-		"docs/tasks/debug-application-cluster": {
-			"counter-pod":                     {&api.Pod{}},
-			"event-exporter-deploy":           {&api.ServiceAccount{}, &rbac.ClusterRoleBinding{}, &extensions.Deployment{}},
-			"fluentd-gcp-configmap":           {&api.ConfigMap{}},
-			"fluentd-gcp-ds":                  {&extensions.DaemonSet{}},
-			"nginx-dep":                       {&extensions.Deployment{}},
-			"node-problem-detector":           {&extensions.DaemonSet{}},
-			"node-problem-detector-configmap": {&extensions.DaemonSet{}},
-			"shell-demo":                      {&api.Pod{}},
-			"termination":                     {&api.Pod{}},
-		},
 		// TODO: decide whether federation examples should be added
 		"docs/tasks/inject-data-application": {
 			"commands": {&api.Pod{}},
@@ -445,6 +433,17 @@ func TestExampleObjectSchemas(t *testing.T) {
 			"deployment-patch":  {&extensions.Deployment{}},
 			"deployment-scale":  {&extensions.Deployment{}},
 			"deployment-update": {&extensions.Deployment{}},
+			"nginx-with-request": {&extensions.Deployment{}},
+			"shell-demo":         {&api.Pod{}},
 		},
+		"examples/debug": {
+			"counter-pod":                     {&api.Pod{}},
+			"event-exporter":                  {&api.ServiceAccount{}, &rbac.ClusterRoleBinding{}, &extensions.Deployment{}},
+			"fluentd-gcp-configmap":           {&api.ConfigMap{}},
+			"fluentd-gcp-ds":                  {&extensions.DaemonSet{}},
+			"node-problem-detector":           {&extensions.DaemonSet{}},
+			"node-problem-detector-configmap": {&extensions.DaemonSet{}},
+			"termination":                     {&api.Pod{}},
+		},
 		"examples/application/mysql": {
 			"mysql-configmap": {&api.ConfigMap{}},
@@ -493,7 +492,7 @@ func TestExampleObjectSchemas(t *testing.T) {
 
 	// Note a key in the following map has to be complete relative path
 	filesIgnore := map[string]map[string]bool{
-		"../content/en/docs/tasks/debug-application-cluster": {
+		"../content/en/examples/audit": {
 			"audit-policy": true,
 		},
 	}