Consolidate YAML files [part-7] (#9262)

* Consolidate YAML files [part-7]

This PR relocates YAML files used by Job/CronJob examples.

* Update examples_test.go
Author: Qiming, 2018-07-03 04:54:18 +08:00, committed by k8s-ci-robot
parent 3a0c618734
commit 48897cc47d
17 changed files with 39 additions and 27 deletions

content/en/docs/tasks/job/_index.md Executable file → Normal file
View File

@@ -43,7 +43,7 @@ component.
 Cron jobs require a config file.
 This example cron job config `.spec` file prints the current time and a hello message every minute:
-{{< code file="cronjob.yaml" >}}
+{{< codenew file="application/job/cronjob.yaml" >}}
 Run the example cron job by downloading the example file and then running this command:
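The cron job's container does nothing more than print the time and a greeting. As an illustration only, the per-minute task body could be sketched in Python like this (the actual example uses a shell `date; echo` command, not Python):

```python
from datetime import datetime

def hello_task() -> str:
    # Rough equivalent of the example container's `date; echo` command:
    # print the current time followed by the greeting.
    return "%s Hello from the Kubernetes cluster" % datetime.utcnow().isoformat()

print(hello_task())
```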

View File

@@ -181,13 +181,13 @@ We will use the `amqp-consume` utility to read the message
 from the queue and run our actual program. Here is a very simple
 example program:
-{{< code language="python" file="coarse-parallel-processing-work-queue/worker.py" >}}
+{{< codenew language="python" file="application/job/rabbitmq/worker.py" >}}
 Now, build an image. If you are working in the source
 tree, then change directory to `examples/job/work-queue-1`.
 Otherwise, make a temporary directory, change to it,
-download the [Dockerfile](Dockerfile?raw=true),
-and [worker.py](worker.py?raw=true). In either case,
+download the [Dockerfile](/examples/application/job/rabbitmq/Dockerfile),
+and [worker.py](/examples/application/job/rabbitmq/worker.py). In either case,
 build the image with this command:
 ```shell
@@ -219,7 +219,7 @@ Here is a job definition. You'll need to make a copy of the Job and edit the
 image to match the name you used, and call it `./job.yaml`.
-{{< code file="coarse-parallel-processing-work-queue/job.yaml" >}}
+{{< codenew file="application/job/rabbitmq/job.yaml" >}}
 In this example, each pod works on one item from the queue and then exits.
 So, the completion count of the Job corresponds to the number of work items

View File

@@ -12,7 +12,6 @@ worker processes in a given pod.
 In this example, as each pod is created, it picks up one unit of work
 from a task queue, processes it, and repeats until the end of the queue is reached.
 Here is an overview of the steps in this example:
-
 1. **Start a storage service to hold the work queue.** In this example, we use Redis to store
@@ -50,16 +49,27 @@ For this example, for simplicity, we will start a single instance of Redis.
 See the [Redis Example](https://github.com/kubernetes/examples/tree/master/guestbook) for an example
 of deploying Redis scalably and redundantly.
-Start a temporary Pod running Redis and a service so we can find it.
+If you are working from the website source tree, you can go to the following
+directory and start a temporary Pod running Redis and a service so we can find it.
 ```shell
-$ kubectl create -f docs/tasks/job/fine-parallel-processing-work-queue/redis-pod.yaml
+$ cd content/en/examples/application/job/redis
+$ kubectl create -f ./redis-pod.yaml
 pod "redis-master" created
-$ kubectl create -f docs/tasks/job/fine-parallel-processing-work-queue/redis-service.yaml
+$ kubectl create -f ./redis-service.yaml
 service "redis" created
 ```
-If you're not working from the source tree, you could also download [`redis-pod.yaml`](redis-pod.yaml?raw=true) and [`redis-service.yaml`](redis-service.yaml?raw=true) directly.
+If you're not working from the source tree, you could also download the following
+files directly:
+
+- [`redis-pod.yaml`](/examples/application/job/redis/redis-pod.yaml)
+- [`redis-service.yaml`](/examples/application/job/redis/redis-service.yaml)
+- [`Dockerfile`](/examples/application/job/redis/Dockerfile)
+- [`job.yaml`](/examples/application/job/redis/job.yaml)
+- [`rediswq.py`](/examples/application/job/redis/rediswq.py)
+- [`worker.py`](/examples/application/job/redis/worker.py)

 ## Filling the Queue with tasks
@@ -122,17 +132,19 @@ We will use a python worker program with a redis client to read
 the messages from the message queue.
 A simple Redis work queue client library is provided,
-called rediswq.py ([Download](rediswq.py?raw=true)).
+called rediswq.py ([Download](/examples/application/job/redis/rediswq.py)).
 The "worker" program in each Pod of the Job uses the work queue
 client library to get work. Here it is:
-{{< code language="python" file="fine-parallel-processing-work-queue/worker.py" >}}
+{{< codenew language="python" file="application/job/redis/worker.py" >}}
-If you are working from the source tree,
-change directory to the `docs/tasks/job/fine-parallel-processing-work-queue/` directory.
-Otherwise, download [`worker.py`](worker.py?raw=true), [`rediswq.py`](rediswq.py?raw=true), and [`Dockerfile`](Dockerfile?raw=true)
-using above links. Then build the image:
+If you are working from the source tree, change directory to the
+`content/en/examples/application/job/redis/` directory.
+Otherwise, download [`worker.py`](/examples/application/job/redis/worker.py),
+[`rediswq.py`](/examples/application/job/redis/rediswq.py), and
+[`Dockerfile`](/examples/application/job/redis/Dockerfile) files, then build
+the image:
 ```shell
 docker build -t job-wq-2 .
@@ -166,7 +178,7 @@ gcloud docker -- push gcr.io/<project>/job-wq-2
 Here is the job definition:
-{{< code file="fine-parallel-processing-work-queue/job.yaml" >}}
+{{< codenew file="application/job/redis/job.yaml" >}}
 Be sure to edit the job template to
 change `gcr.io/myproject` to your own path.
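The `rediswq.py` client referenced above implements a lease-based work queue on Redis: each worker moves an item from a pending list to an in-progress list, works on it, and then marks it complete. As an illustration of that pattern only (not the actual `rediswq.py` API), here is a minimal in-memory sketch:

```python
import collections

class WorkQueue:
    """Illustrative lease-based queue: items move from `pending` to
    `in_progress` when leased and are dropped on completion. If a worker
    dies, its item stays in `in_progress` and could be requeued after a
    lease timeout (omitted here for brevity)."""

    def __init__(self, items):
        self.pending = collections.deque(items)
        self.in_progress = set()

    def lease(self):
        # In real rediswq.py this is an atomic Redis move (RPOPLPUSH-style);
        # here we just shift one item from pending to in-progress.
        if not self.pending:
            return None
        item = self.pending.popleft()
        self.in_progress.add(item)
        return item

    def complete(self, item):
        # Worker finished: remove the item's in-progress marker.
        self.in_progress.discard(item)

    def empty(self):
        return not self.pending and not self.in_progress

# Worker loop: lease, "process", complete, repeat until the queue drains.
q = WorkQueue(["apple", "banana", "cherry"])
done = []
while True:
    item = q.lease()
    if item is None:
        break
    done.append(item)
    q.complete(item)
```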

View File

@@ -18,9 +18,9 @@ non-parallel, use of [Jobs](/docs/concepts/jobs/run-to-completion-finite-workloads/).
 ## Basic Template Expansion
-First, download the following template of a job to a file called `job.yaml`
-{{< code file="job.yaml" >}}
+First, download the following template of a job to a file called `job-tmpl.yaml`
+{{< codenew file="application/job/job-tmpl.yaml" >}}
 Unlike a *pod template*, our *job template* is not a Kubernetes API type. It is just
 a yaml representation of a Job object that has some placeholders that need to be filled
@@ -47,7 +47,7 @@ Next, expand the template into multiple files, one for each item to be processed
 $ mkdir ./jobs
 $ for i in apple banana cherry
 do
-  cat job.yaml | sed "s/\$ITEM/$i/" > ./jobs/job-$i.yaml
+  cat job-tmpl.yaml | sed "s/\$ITEM/$i/" > ./jobs/job-$i.yaml
 done
 ```
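The shell loop above does nothing more than substitute the `$ITEM` placeholder once per work item. The same expansion can be sketched in Python (file names and the `$ITEM` placeholder follow the example above; the template string here is an illustrative fragment, not the real `job-tmpl.yaml`):

```python
def expand_template(template, items):
    # Returns {filename: expanded-yaml} pairs, one per work item,
    # mirroring the `sed "s/\$ITEM/$i/"` loop from the example.
    return {
        "job-%s.yaml" % item: template.replace("$ITEM", item)
        for item in items
    }

template = "metadata:\n  name: process-item-$ITEM\n"
jobs = expand_template(template, ["apple", "banana", "cherry"])
```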
@@ -195,4 +195,4 @@ If you have a large number of job objects, you may find that:
 In this case, you can consider one of the
 other [job patterns](/docs/concepts/jobs/run-to-completion-finite-workloads/#job-patterns).
 {{% /capture %}}

View File

@@ -443,14 +443,14 @@ func TestExampleObjectSchemas(t *testing.T) {
 			"secret-envars-pod": {&api.Pod{}},
 			"secret-pod":        {&api.Pod{}},
 		},
-		"docs/tasks/job": {
-			"cronjob": {&batch.CronJob{}},
-			"job":     {&batch.Job{}},
+		"examples/application/job": {
+			"job-tmpl": {&batch.Job{}},
+			"cronjob":  {&batch.CronJob{}},
 		},
-		"docs/tasks/job/coarse-parallel-processing-work-queue": {
+		"examples/application/job/rabbitmq": {
 			"job": {&batch.Job{}},
 		},
-		"docs/tasks/job/fine-parallel-processing-work-queue": {
+		"examples/application/job/redis": {
 			"job":           {&batch.Job{}},
 			"redis-pod":     {&api.Pod{}},
 			"redis-service": {&api.Service{}},