Consolidate YAML files [part-7] (#9262)

* Consolidate YAML files [part-7]

  This PR relocates the YAML files used by the Job/CronJob examples.

* Update examples_test.go
This commit is contained in:
parent 3a0c618734
commit 48897cc47d
@@ -43,7 +43,7 @@ component.
 Cron jobs require a config file.
 This example cron job config `.spec` file prints the current time and a hello message every minute:
 
-{{< code file="cronjob.yaml" >}}
+{{< codenew file="application/job/cronjob.yaml" >}}
 
 Run the example cron job by downloading the example file and then running this command:
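For context on the hunk above: the relocated cronjob.yaml prints the current time and a hello message every minute. A minimal Python sketch of that per-run behavior (the real manifest runs a shell command in a container; the function name here is hypothetical):

```python
from datetime import datetime, timezone

def hello_message(now=None):
    # Mimics the example cron job's output: current time plus a greeting.
    now = now or datetime.now(timezone.utc)
    return f"{now.isoformat()} Hello from the Kubernetes cluster"

print(hello_message())
```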
@@ -181,13 +181,13 @@ We will use the `amqp-consume` utility to read the message
 from the queue and run our actual program. Here is a very simple
 example program:
 
-{{< code language="python" file="coarse-parallel-processing-work-queue/worker.py" >}}
+{{< codenew language="python" file="application/job/rabbitmq/worker.py" >}}
 
 Now, build an image. If you are working in the source
 tree, then change directory to `examples/job/work-queue-1`.
 Otherwise, make a temporary directory, change to it,
-download the [Dockerfile](Dockerfile?raw=true),
-and [worker.py](worker.py?raw=true). In either case,
+download the [Dockerfile](/examples/application/job/rabbitmq/Dockerfile),
+and [worker.py](/examples/application/job/rabbitmq/worker.py). In either case,
 build the image with this command:
 
 ```shell
@@ -219,7 +219,7 @@ Here is a job definition. You'll need to make a copy of the Job and edit the
 image to match the name you used, and call it `./job.yaml`.
 
 
-{{< code file="coarse-parallel-processing-work-queue/job.yaml" >}}
+{{< codenew file="application/job/rabbitmq/job.yaml" >}}
 
 In this example, each pod works on one item from the queue and then exits.
 So, the completion count of the Job corresponds to the number of work items
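The correspondence described in the hunk above (each pod consumes exactly one queue item, so the Job's completion count equals the number of work items) can be sketched as follows. This is an illustrative helper, not part of the docs or of any Kubernetes API:

```python
def job_spec_for_queue(work_items, parallelism=2):
    # Coarse-parallel pattern: one pod per item, so .spec.completions
    # must equal the queue length; parallelism is capped by it.
    return {
        "completions": len(work_items),
        "parallelism": min(parallelism, len(work_items)),
    }

spec = job_spec_for_queue(["task-1", "task-2", "task-3"])
print(spec)  # {'completions': 3, 'parallelism': 2}
```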

@@ -12,7 +12,6 @@ worker processes in a given pod.
 In this example, as each pod is created, it picks up one unit of work
 from a task queue, processes it, and repeats until the end of the queue is reached.
 
-
 Here is an overview of the steps in this example:
 
 1. **Start a storage service to hold the work queue.** In this example, we use Redis to store
@@ -50,16 +49,27 @@ For this example, for simplicity, we will start a single instance of Redis.
 See the [Redis Example](https://github.com/kubernetes/examples/tree/master/guestbook) for an example
 of deploying Redis scalably and redundantly.
 
-Start a temporary Pod running Redis and a service so we can find it.
+If you are working from the website source tree, you can go to the following
+directory and start a temporary Pod running Redis and a service so we can find it.
 
 ```shell
-$ kubectl create -f docs/tasks/job/fine-parallel-processing-work-queue/redis-pod.yaml
+$ cd content/en/examples/application/job/redis
+$ kubectl create -f ./redis-pod.yaml
 pod "redis-master" created
-$ kubectl create -f docs/tasks/job/fine-parallel-processing-work-queue/redis-service.yaml
+$ kubectl create -f ./redis-service.yaml
 service "redis" created
 ```
 
-If you're not working from the source tree, you could also download [`redis-pod.yaml`](redis-pod.yaml?raw=true) and [`redis-service.yaml`](redis-service.yaml?raw=true) directly.
+If you're not working from the source tree, you could also download the following
+files directly:
+
+- [`redis-pod.yaml`](/examples/application/job/redis/redis-pod.yaml)
+- [`redis-service.yaml`](/examples/application/job/redis/redis-service.yaml)
+- [`Dockerfile`](/examples/application/job/redis/Dockerfile)
+- [`job.yaml`](/examples/application/job/redis/job.yaml)
+- [`rediswq.py`](/examples/application/job/redis/rediswq.py)
+- [`worker.py`](/examples/application/job/redis/worker.py)
 
 
 ## Filling the Queue with tasks
@@ -122,17 +132,19 @@ We will use a python worker program with a redis client to read
 the messages from the message queue.
 
 A simple Redis work queue client library is provided,
-called rediswq.py ([Download](rediswq.py?raw=true)).
+called rediswq.py ([Download](/examples/application/job/redis/rediswq.py)).
 
 The "worker" program in each Pod of the Job uses the work queue
 client library to get work. Here it is:
 
-{{< code language="python" file="fine-parallel-processing-work-queue/worker.py" >}}
+{{< codenew language="python" file="application/job/redis/worker.py" >}}
 
-If you are working from the source tree,
-change directory to the `docs/tasks/job/fine-parallel-processing-work-queue/` directory.
-Otherwise, download [`worker.py`](worker.py?raw=true), [`rediswq.py`](rediswq.py?raw=true), and [`Dockerfile`](Dockerfile?raw=true)
-using above links. Then build the image:
+If you are working from the source tree, change directory to the
+`content/en/examples/application/job/redis/` directory.
+Otherwise, download [`worker.py`](/examples/application/job/redis/worker.py),
+[`rediswq.py`](/examples/application/job/redis/rediswq.py), and
+[`Dockerfile`](/examples/application/job/redis/Dockerfile) files, then build
+the image:
 
 ```shell
 docker build -t job-wq-2 .
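The worker behavior the hunk above describes (lease an item from the queue, process it, repeat until the queue is empty) can be illustrated with an in-memory stand-in for the rediswq.py client. The class and method names here are illustrative only, not the real library's API:

```python
import queue

class InMemoryWorkQueue:
    """In-memory stand-in for a Redis-backed work queue (illustrative)."""
    def __init__(self, items):
        self._q = queue.Queue()
        for item in items:
            self._q.put(item)

    def empty(self):
        return self._q.empty()

    def lease(self):
        return self._q.get_nowait()

def run_worker(q, process):
    # Each worker drains items until the queue reports empty, then exits.
    results = []
    while not q.empty():
        results.append(process(q.lease()))
    return results

q = InMemoryWorkQueue(["apple", "banana", "cherry"])
print(run_worker(q, str.upper))  # ['APPLE', 'BANANA', 'CHERRY']
```

The real rediswq.py additionally leases items with a timeout so that a crashed pod's item can be picked up again; that recovery logic is omitted here.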
@@ -166,7 +178,7 @@ gcloud docker -- push gcr.io/<project>/job-wq-2
 
 Here is the job definition:
 
-{{< code file="fine-parallel-processing-work-queue/job.yaml" >}}
+{{< codenew file="application/job/redis/job.yaml" >}}
 
 Be sure to edit the job template to
 change `gcr.io/myproject` to your own path.

@@ -18,9 +18,9 @@ non-parallel, use of [Jobs](/docs/concepts/jobs/run-to-completion-finite-workloa
 
 ## Basic Template Expansion
 
-First, download the following template of a job to a file called `job.yaml`
+First, download the following template of a job to a file called `job-tmpl.yaml`
 
-{{< code file="job.yaml" >}}
+{{< codenew file="application/job/job-tmpl.yaml" >}}
 
 Unlike a *pod template*, our *job template* is not a Kubernetes API type. It is just
 a yaml representation of a Job object that has some placeholders that need to be filled
@@ -47,7 +47,7 @@ Next, expand the template into multiple files, one for each item to be processed
 $ mkdir ./jobs
 $ for i in apple banana cherry
 do
-  cat job.yaml | sed "s/\$ITEM/$i/" > ./jobs/job-$i.yaml
+  cat job-tmpl.yaml | sed "s/\$ITEM/$i/" > ./jobs/job-$i.yaml
 done
 ```
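The sed loop in the hunk above renders the template once per work item, substituting `$ITEM` each time. The same expansion sketched in Python, assuming the template uses a literal `$ITEM` placeholder as the docs describe:

```python
def expand_template(template, items):
    # One rendered manifest per item, mirroring `sed "s/\$ITEM/$i/"`.
    # Note: str.replace substitutes every occurrence, whereas sed's `s`
    # without the /g flag replaces only the first match on each line.
    return {f"job-{item}.yaml": template.replace("$ITEM", item)
            for item in items}

template = "metadata:\n  name: process-item-$ITEM\n"
rendered = expand_template(template, ["apple", "banana", "cherry"])
print(sorted(rendered))  # ['job-apple.yaml', 'job-banana.yaml', 'job-cherry.yaml']
```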
@@ -195,4 +195,4 @@ If you have a large number of job objects, you may find that:
 In this case, you can consider one of the
 other [job patterns](/docs/concepts/jobs/run-to-completion-finite-workloads/#job-patterns).
 
-{{% /capture %}}
+{{% /capture %}}

@@ -443,14 +443,14 @@ func TestExampleObjectSchemas(t *testing.T) {
 		"secret-envars-pod": {&api.Pod{}},
 		"secret-pod":        {&api.Pod{}},
 	},
-	"docs/tasks/job": {
-		"cronjob": {&batch.CronJob{}},
-		"job":     {&batch.Job{}},
+	"examples/application/job": {
+		"job-tmpl": {&batch.Job{}},
+		"cronjob":  {&batch.CronJob{}},
 	},
-	"docs/tasks/job/coarse-parallel-processing-work-queue": {
-		"job": {&batch.Job{}},
+	"examples/application/job/rabbitmq": {
+		"job": {&batch.Job{}},
 	},
-	"docs/tasks/job/fine-parallel-processing-work-queue": {
+	"examples/application/job/redis": {
 		"job":           {&batch.Job{}},
 		"redis-pod":     {&api.Pod{}},
 		"redis-service": {&api.Service{}},