[zh] Remove Master/Slave terminology from stateless application tutorial
This commit is contained in:
parent 7c94fc38b4
commit a89880b826
@@ -62,13 +62,13 @@ This section of the Kubernetes documentation contains tutorials. Each tutorial shows how to accomplish
* [Exposing an External IP Address to Access an Application in a Cluster](/docs/tutorials/stateless-application/expose-external-ip-address/)
-* [Example: Deploying PHP Guestbook application with Redis](/docs/tutorials/stateless-application/guestbook/)
+* [Example: Deploying PHP Guestbook application with MongoDB](/docs/tutorials/stateless-application/guestbook/)
-->

## Stateless Applications

* [Exposing an External IP Address to Access an Application in a Cluster](/zh/docs/tutorials/stateless-application/expose-external-ip-address/)
-* [Example: Deploying PHP Guestbook application with Redis](/zh/docs/tutorials/stateless-application/guestbook/)
+* [Example: Deploying PHP Guestbook application with MongoDB](/zh/docs/tutorials/stateless-application/guestbook/)

<!--

@@ -1,723 +0,0 @@
---
title: "Example: Add logging and metrics to the PHP / Redis Guestbook example"
reviewers:
- sftim
content_type: tutorial
weight: 21
card:
  name: tutorials
  weight: 31
  title: "Example: Add logging and metrics to the PHP / Redis Guestbook example"
---

<!-- overview -->

This tutorial builds upon the [PHP Guestbook with Redis](/zh/docs/tutorials/stateless-application/guestbook) tutorial. *Beats*, Elastic's lightweight open source shippers for log, metric, and network data, are deployed in the same Kubernetes cluster as the guestbook. The Beats collect, parse, and index the data into Elasticsearch so that you can view and analyze the resulting operational information in Kibana. This example consists of the following components:

* A running instance of the [PHP Guestbook with Redis tutorial](/zh/docs/tutorials/stateless-application/guestbook)
* Elasticsearch and Kibana
* Filebeat
* Metricbeat
* Packetbeat

## {{% heading "objectives" %}}

* Start up the PHP Guestbook with Redis.
* Install kube-state-metrics.
* Create a Kubernetes Secret.
* Deploy the Beats.
* View dashboards of your logs and metrics.
## {{% heading "prerequisites" %}}

{{< include "task-tutorial-prereqs.md" >}}
{{< version-check >}}

Additionally you need:

* A running deployment of the [PHP Guestbook with Redis](/zh/docs/tutorials/stateless-application/guestbook) tutorial.
* A running Elasticsearch and Kibana deployment. You can use [Elasticsearch Service in Elastic Cloud](https://cloud.elastic.co), run the [download files](https://www.elastic.co/guide/en/elastic-stack-get-started/current/get-started-elastic-stack.html) on your workstation or servers, or use the [Elastic Helm Charts](https://github.com/elastic/helm-charts).

<!-- lessoncontent -->

## Start up the PHP Guestbook with Redis {#start-up-the-php-guestbook-with-redis}

This tutorial builds on the [PHP Guestbook with Redis](/zh/docs/tutorials/stateless-application/guestbook) tutorial. If you have the guestbook application running, then you can monitor that. If you do not have it running, follow the instructions to deploy the guestbook, skip the **Cleanup** steps, and come back to this page once the guestbook is running.

## Add a Cluster role binding {#add-a-cluster-role-binding}

Create a [cluster level role binding](/zh/docs/reference/access-authn-authz/rbac/#rolebinding-和-clusterrolebinding) so that you can deploy kube-state-metrics and the Beats at the cluster level (in kube-system).

```shell
kubectl create clusterrolebinding cluster-admin-binding \
  --clusterrole=cluster-admin --user=<your email associated with the k8s provider account>
```
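
To confirm the binding exists before moving on (a quick check, assuming you kept the `cluster-admin-binding` name used above):

```shell
kubectl get clusterrolebinding cluster-admin-binding
```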

## Install kube-state-metrics {#install-kube-state-metrics}

Kubernetes [*kube-state-metrics*](https://github.com/kubernetes/kube-state-metrics) is a simple service that listens to the Kubernetes API server and generates metrics about the state of the objects. Metricbeat reports these metrics. Add kube-state-metrics to the Kubernetes cluster that the guestbook is running in.

```shell
git clone https://github.com/kubernetes/kube-state-metrics.git kube-state-metrics
kubectl apply -f kube-state-metrics/examples/standard
```

### Check to see if kube-state-metrics is running {#check-to-see-if-kube-state-metrics-is-running}

```shell
kubectl get pods --namespace=kube-system -l app.kubernetes.io/name=kube-state-metrics
```

Output:

```
NAME                                 READY   STATUS    RESTARTS   AGE
kube-state-metrics-89d656bf8-vdthm   1/1     Running   0          21s
```
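
You can also spot-check the raw metrics that Metricbeat will later scrape. This is a sketch; it assumes the standard manifests created a `kube-state-metrics` Service listening on port 8080 in `kube-system`:

```shell
# Forward the kube-state-metrics port to your workstation, then
# fetch a sample of the Prometheus-format metrics it exposes.
kubectl port-forward -n kube-system svc/kube-state-metrics 8080:8080 &
curl -s http://localhost:8080/metrics | head -n 20
```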

## Clone the Elastic examples GitHub repo {#clone-the-elastic-examples-github-repo}

```shell
git clone https://github.com/elastic/examples.git
```

The rest of the commands will reference files in the `examples/beats-k8s-send-anywhere` directory, so change directory there:

```shell
cd examples/beats-k8s-send-anywhere
```

## Create a Kubernetes Secret {#create-a-kubernetes-secret}

A Kubernetes {{< glossary_tooltip text="Secret" term_id="secret" >}} is an object that contains a small amount of sensitive data such as a password, a token, or a key. Such information might otherwise be put in a Pod specification or in an image; putting it in a Secret object allows for more control over how it is used, and reduces the risk of accidental exposure.

{{< note >}}
There are two sets of steps here, one for *self managed* Elasticsearch and Kibana (running on your servers or using the Elastic Helm Charts), and a second separate set for the *managed service* Elasticsearch Service in Elastic Cloud. Only create the secret for the type of Elasticsearch and Kibana system that you will use for this tutorial.
{{< /note >}}

{{< tabs name="tab_with_md" >}}
{{% tab name="Self managed" %}}

### Self managed {#self-managed}

Switch to the **Managed service** tab if you are connecting to Elasticsearch Service in Elastic Cloud.

### Set the credentials {#set-the-credentials}

There are four files to edit to create a k8s secret when you are connecting to self managed Elasticsearch and Kibana (self managed is effectively anything other than the managed Elasticsearch Service in Elastic Cloud). The files are:

1. `ELASTICSEARCH_HOSTS`
1. `ELASTICSEARCH_PASSWORD`
1. `ELASTICSEARCH_USERNAME`
1. `KIBANA_HOST`

Set these with the information for your Elasticsearch cluster and your Kibana host. Here are some examples (also see [*this configuration*](https://stackoverflow.com/questions/59892896/how-to-connect-from-minikube-to-elasticsearch-installed-on-host-local-developme/59892897#59892897)).

#### `ELASTICSEARCH_HOSTS` {#elasticsearch-hosts}

1. A nodeGroup from the Elastic Elasticsearch Helm Chart:

   ```
   ["http://elasticsearch-master.default.svc.cluster.local:9200"]
   ```

1. A single Elasticsearch node running on a Mac where your Beats are running in Docker for Mac:

   ```
   ["http://host.docker.internal:9200"]
   ```

1. Two Elasticsearch nodes running in VMs or on physical hardware:

   ```
   ["http://host1.example.com:9200", "http://host2.example.com:9200"]
   ```

Edit `ELASTICSEARCH_HOSTS`:

```shell
vi ELASTICSEARCH_HOSTS
```

#### `ELASTICSEARCH_PASSWORD` {#elasticsearch-password}

Just the password; no whitespace, quotes, or `<` and `>`:

```
<yoursecretpassword>
```

Edit `ELASTICSEARCH_PASSWORD`:

```shell
vi ELASTICSEARCH_PASSWORD
```

#### `ELASTICSEARCH_USERNAME` {#elasticsearch-username}

Just the username; no whitespace, quotes, or `<` and `>`:

```
<your ingest username for Elasticsearch>
```

Edit `ELASTICSEARCH_USERNAME`:

```shell
vi ELASTICSEARCH_USERNAME
```

#### `KIBANA_HOST` {#kibana-host}

1. The Kibana instance from the Elastic Kibana Helm Chart. The subdomain `default` refers to the default namespace. If you have deployed the Helm Chart using a different namespace, then your subdomain will be different:

   ```
   "kibana-kibana.default.svc.cluster.local:5601"
   ```

1. A Kibana instance running on a Mac where your Beats are running in Docker for Mac:

   ```
   "host.docker.internal:5601"
   ```

1. A Kibana instance running in a VM or on physical hardware:

   ```
   "host1.example.com:5601"
   ```

Edit `KIBANA_HOST`:

```shell
vi KIBANA_HOST
```

### Create a Kubernetes secret {#create-a-kubernetes-secret-self-managed}

This command creates a secret in the Kubernetes system level namespace (kube-system) based on the files you just edited:

```shell
kubectl create secret generic dynamic-logging \
  --from-file=./ELASTICSEARCH_HOSTS \
  --from-file=./ELASTICSEARCH_PASSWORD \
  --from-file=./ELASTICSEARCH_USERNAME \
  --from-file=./KIBANA_HOST \
  --namespace=kube-system
```
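
To confirm the secret was created with the expected keys:

```shell
kubectl describe secret dynamic-logging -n kube-system
```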

{{% /tab %}}
{{% tab name="Managed service" %}}

## Managed service {#managed-service}

This tab is for Elasticsearch Service in Elastic Cloud only. If you have already created a secret for a self managed Elasticsearch and Kibana deployment, then continue with [Deploy the Beats](#deploy-the-beats).

### Set the credentials {#set-the-credentials-managed}

There are two files to edit to create a k8s secret when you are connecting to the managed Elasticsearch Service in Elastic Cloud. The files are:

1. `ELASTIC_CLOUD_AUTH`
1. `ELASTIC_CLOUD_ID`

Set these with the information provided to you from the Elasticsearch Service console when you created the deployment. Here are some examples:

#### ELASTIC_CLOUD_ID {#elastic-cloud-id}

```
devk8s:ABC123def456ghi789jkl123mno456pqr789stu123vwx456yza789bcd012efg345hijj678klm901nop345zEwOTJjMTc5YWQ0YzQ5OThlN2U5MjAwYTg4NTIzZQ==
```

#### ELASTIC_CLOUD_AUTH {#elastic-cloud-auth}

Just the username, a colon (`:`), and the password; no whitespace or quotes:

```
elastic:VFxJJf9Tjwer90wnfTghsn8w
```

### Edit the required files {#edit-the-required-files}

```shell
vi ELASTIC_CLOUD_ID
vi ELASTIC_CLOUD_AUTH
```

### Create a Kubernetes secret {#create-a-kubernetes-secret-managed}

This command creates a secret in the Kubernetes system level namespace (kube-system) based on the files you just edited:

```shell
kubectl create secret generic dynamic-logging \
  --from-file=./ELASTIC_CLOUD_ID \
  --from-file=./ELASTIC_CLOUD_AUTH \
  --namespace=kube-system
```

{{% /tab %}}
{{< /tabs >}}

## Deploy the Beats {#deploy-the-beats}

Manifest files are provided for each Beat. These manifest files use the secret created earlier to configure the Beats to connect to your Elasticsearch and Kibana servers.

### About Filebeat {#about-filebeat}

Filebeat will collect logs from the Kubernetes nodes and the containers running in each pod on those nodes. Filebeat is deployed as a {{< glossary_tooltip text="DaemonSet" term_id="daemonset" >}}. Filebeat can autodiscover applications running in your Kubernetes cluster. At startup Filebeat scans existing containers and launches the proper configurations for them, then it watches for new start/stop events.

Here is the autodiscover configuration that enables Filebeat to locate and parse Redis logs from the Redis containers deployed with the guestbook application. This configuration is in the file `filebeat-kubernetes.yaml`:

```yaml
- condition.contains:
    kubernetes.labels.app: redis
  config:
    - module: redis
      log:
        input:
          type: docker
          containers.ids:
            - ${data.kubernetes.container.id}
      slowlog:
        enabled: true
        var.hosts: ["${data.host}:${data.port}"]
```

This configures Filebeat to apply the Filebeat module `redis` when a container is detected with a label `app` containing the string `redis`. The redis module can collect the `log` stream from the container by using the docker input type (reading the file on the Kubernetes node associated with the STDOUT stream from this Redis container). Additionally, the module can collect Redis `slowlog` entries by connecting to the proper pod host and port, which is provided in the container metadata.

### Deploy Filebeat {#deploy-filebeat}

```shell
kubectl create -f filebeat-kubernetes.yaml
```

#### Verify {#verify}

```shell
kubectl get pods -n kube-system -l k8s-app=filebeat-dynamic
```

### About Metricbeat {#about-metricbeat}

Metricbeat autodiscover is configured in the same way as Filebeat. Here is the Metricbeat autodiscover configuration for the Redis containers. This configuration is in the file `metricbeat-kubernetes.yaml`:

```yaml
- condition.equals:
    kubernetes.labels.tier: backend
  config:
    - module: redis
      metricsets: ["info", "keyspace"]
      period: 10s

      # Redis hosts
      hosts: ["${data.host}:${data.port}"]
```

This configures Metricbeat to apply the Metricbeat module `redis` when a container is detected with a label `tier` equal to the string `backend`. The `redis` module can collect the `info` and `keyspace` metrics from the container by connecting to the proper pod host and port, which is provided in the container metadata.

### Deploy Metricbeat {#deploy-metricbeat}

```shell
kubectl create -f metricbeat-kubernetes.yaml
```

#### Verify {#verify2}

```shell
kubectl get pods -n kube-system -l k8s-app=metricbeat
```

### About Packetbeat {#about-packetbeat}

Packetbeat configuration is different than Filebeat and Metricbeat. Rather than specifying patterns to match against container labels, the configuration is based on the protocols and port numbers involved. Shown below is a subset of the port numbers.

{{< note >}}
If you are running a service on a non-standard port, add that port number to the appropriate type in `filebeat.yaml` and delete / create the Packetbeat DaemonSet.
{{< /note >}}

```yaml
packetbeat.interfaces.device: any

packetbeat.protocols:
- type: dns
  ports: [53]
  include_authorities: true
  include_additionals: true

- type: http
  ports: [80, 8000, 8080, 9200]

- type: mysql
  ports: [3306]

- type: redis
  ports: [6379]

packetbeat.flows:
  timeout: 30s
  period: 10s
```
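
If you are not sure which ports your Services actually expose, you can list them across all namespaces before editing the Packetbeat configuration:

```shell
kubectl get services --all-namespaces
```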

### Deploy Packetbeat {#deploy-packetbeat}

```shell
kubectl create -f packetbeat-kubernetes.yaml
```

#### Verify {#verify3}

```shell
kubectl get pods -n kube-system -l k8s-app=packetbeat-dynamic
```

## View in Kibana {#view-in-kibana}

Open Kibana in your browser and then open the **Dashboard** application. In the search bar type Kubernetes and click on the Metricbeat dashboard for Kubernetes. This dashboard reports on the state of your Nodes, deployments, and so on.

Search for Packetbeat on the Dashboard page, and view the Packetbeat overview.

Similarly, view dashboards for Apache and Redis. You will see dashboards for logs and metrics for each. The Apache Metricbeat dashboard will be blank. Look at the Apache Filebeat dashboard and scroll to the bottom to view the Apache error logs. This will tell you why there are no metrics available for Apache.

To enable Metricbeat to retrieve the Apache metrics, enable server-status by adding a ConfigMap including a mod-status configuration file and re-deploy the guestbook.
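
As a rough sketch of that approach (the `apache-status` name and the `status.conf` file name are hypothetical, and the guestbook frontend Deployment would still need a corresponding volume and volumeMount into Apache's configuration directory before re-deploying):

```shell
# Hypothetical ConfigMap enabling Apache mod_status; adjust the name,
# access controls, and how it is mounted to match your deployment.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: apache-status
data:
  status.conf: |
    <Location "/server-status">
      SetHandler server-status
    </Location>
EOF
```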

## Scale your deployments and see new pods being monitored {#scale-your-deployments-and-see-new-pods-being-monitored}

List the existing deployments:

```shell
kubectl get deployments
```

The output:

```
NAME           READY   UP-TO-DATE   AVAILABLE   AGE
frontend       3/3     3            3           3h27m
redis-master   1/1     1            1           3h27m
redis-slave    2/2     2            2           3h27m
```

Scale the frontend down to two pods:

```shell
kubectl scale --replicas=2 deployment/frontend
```

The output:

```
deployment.extensions/frontend scaled
```

Scale the frontend back up to three pods:

```shell
kubectl scale --replicas=3 deployment/frontend
```

## View the changes in Kibana {#view-the-changes-in-kibana}

See the screenshot, add the indicated filters, and then add the columns to the view. You can see the ScalingReplicaSet entry that is marked; following from there to the top of the list of events shows the image being pulled, the volumes mounted, the pod starting, and so on.

![Kibana Discover](https://raw.githubusercontent.com/elastic/examples/master/beats-k8s-send-anywhere/scaling-up.png)

## {{% heading "cleanup" %}}

Deleting the Deployments and Services also deletes any running Pods. Use labels to delete multiple resources with one command.

1. Run the following commands to delete all Pods, Deployments, and Services.

   ```shell
   kubectl delete deployment -l app=redis
   kubectl delete service -l app=redis
   kubectl delete deployment -l app=guestbook
   kubectl delete service -l app=guestbook
   kubectl delete -f filebeat-kubernetes.yaml
   kubectl delete -f metricbeat-kubernetes.yaml
   kubectl delete -f packetbeat-kubernetes.yaml
   kubectl delete secret dynamic-logging -n kube-system
   ```

1. Query the list of Pods to verify that no Pods are running:

   ```shell
   kubectl get pods
   ```

   The response should be this:

   ```
   No resources found.
   ```

## {{% heading "whatsnext" %}}

* Learn about [tools for monitoring resources](/zh/docs/tasks/debug-application-cluster/resource-usage-monitoring/)
* Read more about [logging architecture](/zh/docs/concepts/cluster-administration/logging/)
* Read more about [application introspection and debugging](/zh/docs/tasks/debug-application-cluster/)
* Read more about [troubleshooting applications](/zh/docs/tasks/debug-application-cluster/resource-usage-monitoring/)

@@ -1,15 +1,16 @@
---
-title: "Example: Deploying PHP Guestbook application with Redis"
+title: "Example: Deploying PHP Guestbook application with MongoDB"
reviewers:
- ahmetb
content_type: tutorial
weight: 20

@@ -17,25 +18,24 @@ weight: 20
card:
  name: tutorials
  weight: 30
-  title: "Stateless Example: PHP Guestbook with Redis"
+  title: "Stateless Example: PHP Guestbook with MongoDB"
min-kubernetes-server-version: v1.14
---

<!-- overview -->

-This tutorial shows you how to build and deploy a simple, multi-tier web application using Kubernetes and [Docker](https://www.docker.com/). This example consists of the following components:
+This tutorial shows you how to build and deploy a simple _(not production ready)_, multi-tier web application using Kubernetes and [Docker](https://www.docker.com/). This example consists of the following components:

-* A single-instance [Redis](https://redis.io/) master to store guestbook entries
-* Multiple [replicated Redis](https://redis.io/topics/replication) instances to serve reads
+* A single-instance [MongoDB](https://www.mongodb.com/) to store guestbook entries
* Multiple web frontend instances

@@ -45,15 +45,13 @@ This tutorial shows you how to build and deploy a simple, multi-tier web applica

-* Start up a Redis master.
-* Start up Redis slaves.
+* Start up a Mongo database.
* Start up the guestbook frontend.
* Expose and view the Frontend Service.
* Clean up.

@@ -72,44 +70,50 @@ This tutorial shows you how to build and deploy a simple, multi-tier web applica
<!-- lessoncontent -->

-## Start up the Redis Master
+## Start up the Mongo Database

-The guestbook application uses Redis to store its data. It writes its data to a Redis master instance and reads data from multiple Redis slave instances.
+The guestbook application uses MongoDB to store its data.

-### Creating the Redis Master Deployment
+### Creating the Mongo Deployment

-The manifest file, included below, specifies a Deployment controller that runs a single replica Redis master Pod.
+The manifest file, included below, specifies a Deployment controller that runs a single replica MongoDB Pod.

-{{< codenew file="application/guestbook/redis-master-deployment.yaml" >}}
+{{< codenew file="application/guestbook/mongo-deployment.yaml" >}}

1. Launch a terminal window in the directory you downloaded the manifest files.
-1. Apply the Redis Master Deployment from the `redis-master-deployment.yaml` file:
+1. Apply the MongoDB Deployment from the `mongo-deployment.yaml` file:

   ```shell
-   kubectl apply -f https://k8s.io/examples/application/guestbook/redis-master-deployment.yaml
+   kubectl apply -f https://k8s.io/examples/application/guestbook/mongo-deployment.yaml
   ```

+   <!---
+   for local testing of the content via relative file path
+   kubectl apply -f ./content/en/examples/application/guestbook/mongo-deployment.yaml
+   -->

-1. Query the list of Pods to verify that the Redis Master Pod is running:
+1. Query the list of Pods to verify that the MongoDB Pod is running:

   ```shell
   kubectl get pods

@@ -122,53 +126,49 @@ The manifest file, included below, specifies a Deployment controller that runs a

   ```shell
   NAME                            READY   STATUS    RESTARTS   AGE
-   redis-master-1068406935-3lswp   1/1     Running   0          28s
+   mongo-5cfd459dd4-lrcjb          1/1     Running   0          28s
   ```

-1. Run the following command to view the logs from the Redis Master Pod:
+1. Run the following command to view the logs from the MongoDB Deployment:

   ```shell
-   kubectl logs -f POD-NAME
+   kubectl logs -f deployment/mongo
   ```

-   {{< note >}}
-   Replace POD-NAME with the name of your Pod.
-   {{< /note >}}

-### Creating the Redis Master Service
+### Creating the MongoDB Service

-The guestbook application needs to communicate to the Redis master to write its data. You need to apply a [Service](/zh/docs/concepts/services-networking/service/) to proxy the traffic to the Redis master Pod. A Service defines a policy to access the Pods.
+The guestbook application needs to communicate to MongoDB to write its data. You need to apply a [Service](/zh/docs/concepts/services-networking/service/) to proxy the traffic to the MongoDB Pod. A Service defines a policy to access the Pods.

-{{< codenew file="application/guestbook/redis-master-service.yaml" >}}
+{{< codenew file="application/guestbook/mongo-service.yaml" >}}

-1. Apply the Redis Master Service from the following `redis-master-service.yaml` file:
+1. Apply the MongoDB Service from the following `mongo-service.yaml` file:

   ```shell
-   kubectl apply -f https://k8s.io/examples/application/guestbook/redis-master-service.yaml
+   kubectl apply -f https://k8s.io/examples/application/guestbook/mongo-service.yaml
   ```

+   <!---
+   for local testing of the content via relative file path
+   kubectl apply -f ./content/en/examples/application/guestbook/mongo-service.yaml
+   -->

-1. Query the list of Services to verify that the Redis Master Service is running:
+1. Query the list of Services to verify that the MongoDB Service is running:

   ```shell
   kubectl get service

@@ -182,134 +182,26 @@ The guestbook application needs to communicate to the Redis master to write its

   ```shell
   NAME           TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)     AGE
   kubernetes     ClusterIP   10.0.0.1     <none>        443/TCP     1m
-   redis-master   ClusterIP   10.0.0.151   <none>        6379/TCP    8s
+   mongo          ClusterIP   10.0.0.151   <none>        27017/TCP   8s
   ```

   {{< note >}}
-   This manifest file creates a Service named `redis-master` with a set of labels that match the labels previously defined, so the Service routes network traffic to the Redis master Pod.
+   This manifest file creates a Service named `mongo` with a set of labels that match the labels previously defined, so the Service routes network traffic to the MongoDB Pod.
   {{< /note >}}
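
If you want to confirm that the Service actually reaches the database, one quick check is to run a throwaway client Pod against the `mongo` Service name. This is a sketch; it assumes the `mongo:4.2` image used by this tutorial and working in-cluster DNS:

```shell
# Start a temporary pod with the mongo shell and ping the database
# through the Service's DNS name; the pod is removed afterwards.
kubectl run mongo-client --rm -it --restart=Never --image=mongo:4.2 -- \
  mongo --host mongo --eval 'db.runCommand({ ping: 1 })'
```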

-## Start up the Redis Slaves
-
-Although the Redis master is a single pod, you can make it highly available to meet traffic demands by adding replica Redis slaves.
-
-### Creating the Redis Slave Deployment
-
-Deployments scale based off of the configurations set in the manifest file. In this case, the Deployment object specifies two replicas.
-
-If there are not any replicas running, this Deployment would start the two replicas on your container cluster. Conversely, if there are more than two replicas running, it would scale down until two replicas are running.
-
-{{< codenew file="application/guestbook/redis-slave-deployment.yaml" >}}
-
-1. Apply the Redis Slave Deployment from the `redis-slave-deployment.yaml` file:
-
-   ```shell
-   kubectl apply -f https://k8s.io/examples/application/guestbook/redis-slave-deployment.yaml
-   ```
-
-1. Query the list of Pods to verify that the Redis Slave Pods are running:
-
-   ```shell
-   kubectl get pods
-   ```
-
-   The response should be similar to this:
-
-   ```shell
-   NAME                            READY   STATUS              RESTARTS   AGE
-   redis-master-1068406935-3lswp   1/1     Running             0          1m
-   redis-slave-2005841000-fpvqc    0/1     ContainerCreating   0          6s
-   redis-slave-2005841000-phfv9    0/1     ContainerCreating   0          6s
-   ```
-
-### Creating the Redis Slave Service
-
-The guestbook application needs to communicate to Redis slaves to read data. To make the Redis slaves discoverable, you need to set up a Service. A Service provides transparent load balancing to a set of Pods.
-
-{{< codenew file="application/guestbook/redis-slave-service.yaml" >}}
-
-1. Apply the Redis Slave Service from the following `redis-slave-service.yaml` file:
-
-   ```shell
-   kubectl apply -f https://k8s.io/examples/application/guestbook/redis-slave-service.yaml
-   ```
-
-1. Query the list of Services to verify that the Redis slave service is running:
-
-   ```shell
-   kubectl get services
-   ```
-
-   The response should be similar to this:
-
-   ```
-   NAME           TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
-   kubernetes     ClusterIP   10.0.0.1     <none>        443/TCP    2m
-   redis-master   ClusterIP   10.0.0.151   <none>        6379/TCP   1m
-   redis-slave    ClusterIP   10.0.0.223   <none>        6379/TCP   6s
-   ```

## Set up and Expose the Guestbook Frontend

-The guestbook application has a web frontend serving the HTTP requests written in PHP. It is configured to connect to the `redis-master` Service for write requests and the `redis-slave` service for Read requests.
+The guestbook application has a web frontend serving the HTTP requests written in PHP. It is configured to connect to the `mongo` Service to store Guestbook entries.

### Creating the Guestbook Frontend Deployment

@@ -328,13 +220,18 @@ The guestbook application has a web frontend serving the HTTP requests written i

   ```shell
   kubectl apply -f https://k8s.io/examples/application/guestbook/frontend-deployment.yaml
   ```

+   <!---
+   for local testing of the content via relative file path
+   kubectl apply -f ./content/en/examples/application/guestbook/frontend-deployment.yaml
+   -->

1. Query the list of Pods to verify that the three frontend replicas are running:

   ```shell
-   kubectl get pods -l app=guestbook -l tier=frontend
+   kubectl get pods -l app.kubernetes.io/name=guestbook -l app.kubernetes.io/component=frontend
   ```

@@ -356,24 +253,22 @@ The guestbook application has a web frontend serving the HTTP requests written i
### Creating the Frontend Service

-The `redis-slave` and `redis-master` Services you applied are only accessible within the container cluster because the default type for a Service is [ClusterIP](/zh/docs/concepts/services-networking/service/#publishing-services---service-types). `ClusterIP` provides a single IP address for the set of Pods the Service is pointing to. This IP address is accessible only within the cluster.
+The `mongo` Service you applied is only accessible within the Kubernetes cluster because the default type for a Service is [ClusterIP](/zh/docs/concepts/services-networking/service/#publishing-services---service-types). `ClusterIP` provides a single IP address for the set of Pods the Service is pointing to. This IP address is accessible only within the cluster.

-If you want guests to be able to access your guestbook, you must configure the frontend Service to be externally visible, so a client can request the Service from outside the Kubernetes cluster. Minikube can only expose Services through `NodePort`.
+If you want guests to be able to access your guestbook, you must configure the frontend Service to be externally visible, so a client can request the Service from outside the Kubernetes cluster. However, as a Kubernetes user you can use `kubectl port-forward` to access the Service even though it uses a `ClusterIP`.

{{< note >}}
-Some cloud providers, like Google Compute Engine or Google Kubernetes Engine, support external load balancers. If your cloud provider supports load balancers and you want to use it, simply delete or comment out `type: NodePort`, and uncomment `type: LoadBalancer`.
+Some cloud providers, like Google Compute Engine or Google Kubernetes Engine, support external load balancers. If your cloud provider supports load balancers and you want to use it, uncomment `type: LoadBalancer`.
{{< /note >}}

{{< codenew file="application/guestbook/frontend-service.yaml" >}}

@@ -387,6 +282,11 @@ Some cloud providers, like Google Compute Engine or Google Kubernetes Engine, su

   ```shell
   kubectl apply -f https://k8s.io/examples/application/guestbook/frontend-service.yaml
   ```

+   <!---
+   for local testing of the content via relative file path
+   kubectl apply -f ./content/en/examples/application/guestbook/frontend-service.yaml
+   -->

1. Query the list of Services to verify that the frontend Service is running:

@@ -403,30 +303,24 @@ Some cloud providers, like Google Compute Engine or Google Kubernetes Engine, su

   ```
   NAME           TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
-   frontend       NodePort    10.0.0.112   <none>        80:31323/TCP   6s
+   frontend       ClusterIP   10.0.0.112   <none>        80/TCP         6s
   kubernetes     ClusterIP   10.0.0.1     <none>        443/TCP        4m
-   redis-master   ClusterIP   10.0.0.151   <none>        6379/TCP       2m
-   redis-slave    ClusterIP   10.0.0.223   <none>        6379/TCP       1m
+   mongo          ClusterIP   10.0.0.151   <none>        27017/TCP      2m
   ```

-### Viewing the Frontend Service via `NodePort`
+### Viewing the Frontend Service via `kubectl port-forward`

-If you deployed this application to Minikube or a local cluster, you need to find the IP address to view your Guestbook.
-
-1. Run the following command to get the IP address for the frontend Service.
+1. Run the following command to forward port `8080` on your local machine to port `80` on the Service.

   ```shell
-   minikube service frontend --url
+   kubectl port-forward svc/frontend 8080:80
   ```

@@ -435,13 +329,14 @@ If you deployed this application to Minikube or a local cluster, you need to fin

   The response should be similar to this:

   ```
-   http://192.168.99.100:31323
+   Forwarding from 127.0.0.1:8080 -> 80
+   Forwarding from [::1]:8080 -> 80
   ```

-1. Copy the IP address, and load the page in your browser to view your guestbook.
+1. Load the page [http://localhost:8080](http://localhost:8080) in your browser to view your guestbook.

### Viewing the Frontend Service via `LoadBalancer`

@@ -519,9 +414,7 @@ Scaling up or down is easy because your servers are defined as a Service that us

   ```
   frontend-3823415956-k22zn       1/1     Running   0          54m
   frontend-3823415956-w9gbt       1/1     Running   0          54m
   frontend-3823415956-x2pld       1/1     Running   0          5s
-   redis-master-1068406935-3lswp   1/1     Running   0          56m
-   redis-slave-2005841000-fpvqc    1/1     Running   0          55m
-   redis-slave-2005841000-phfv9    1/1     Running   0          55m
+   mongo-1068406935-3lswp          1/1     Running   0          56m
   ```

@@ -551,9 +444,7 @@ Scaling up or down is easy because your servers are defined as a Service that us

   ```
   NAME                            READY   STATUS    RESTARTS   AGE
   frontend-3823415956-k22zn       1/1     Running   0          1h
   frontend-3823415956-w9gbt       1/1     Running   0          1h
-   redis-master-1068406935-3lswp   1/1     Running   0          1h
-   redis-slave-2005841000-fpvqc    1/1     Running   0          1h
-   redis-slave-2005841000-phfv9    1/1     Running   0          1h
+   mongo-1068406935-3lswp          1/1     Running   0          1h
   ```

@@ -569,13 +460,13 @@ Deleting the Deployments and Services also deletes any running Pods. Use labels

-5. Run the following commands to delete all Pods, Deployments, and Services.
+1. Run the following commands to delete all Pods, Deployments, and Services.

   ```shell
-   kubectl delete deployment -l app=redis
-   kubectl delete service -l app=redis
-   kubectl delete deployment -l app=guestbook
-   kubectl delete service -l app=guestbook
+   kubectl delete deployment -l app.kubernetes.io/name=mongo
+   kubectl delete service -l app.kubernetes.io/name=mongo
+   kubectl delete deployment -l app.kubernetes.io/name=guestbook
+   kubectl delete service -l app.kubernetes.io/name=guestbook
   ```

@@ -584,10 +475,8 @@ Deleting the Deployments and Services also deletes any running Pods. Use labels

   The response should be:

   ```
-   deployment.apps "redis-master" deleted
-   deployment.apps "redis-slave" deleted
-   service "redis-master" deleted
-   service "redis-slave" deleted
+   deployment.apps "mongo" deleted
+   service "mongo" deleted
   deployment.apps "frontend" deleted
   service "frontend" deleted
   ```

@@ -595,7 +484,7 @@ Deleting the Deployments and Services also deletes any running Pods. Use labels

-6. Query the list of Pods to verify that no Pods are running:
+2. Query the list of Pods to verify that no Pods are running:

   ```shell
   kubectl get pods

@@ -616,15 +505,12 @@ Deleting the Deployments and Services also deletes any running Pods. Use labels

-* Add [ELK logging and monitoring](/zh/docs/tutorials/stateless-application/guestbook-logs-metrics-with-elk/) to your Guestbook application
* Complete the [Kubernetes Basics](/zh/docs/tutorials/kubernetes-basics/) Interactive Tutorials
* Use Kubernetes to create a blog using [Persistent Volumes for MySQL and Wordpress](/zh/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/#visit-your-new-wordpress-blog)
* Read more about [connecting applications](/zh/docs/concepts/services-networking/connect-applications-service/)
* Read more about [Managing Resources](/docs/concepts/cluster-administration/manage-deployment/#using-labels-effectively)

@@ -3,22 +3,24 @@ kind: Deployment
metadata:
  name: frontend
  labels:
-    app: guestbook
+    app.kubernetes.io/name: guestbook
+    app.kubernetes.io/component: frontend
spec:
  selector:
    matchLabels:
-      app: guestbook
-      tier: frontend
+      app.kubernetes.io/name: guestbook
+      app.kubernetes.io/component: frontend
  replicas: 3
  template:
    metadata:
      labels:
-        app: guestbook
-        tier: frontend
+        app.kubernetes.io/name: guestbook
+        app.kubernetes.io/component: frontend
    spec:
      containers:
-      - name: php-redis
-        image: gcr.io/google-samples/gb-frontend:v4
+      - name: guestbook
+        image: paulczar/gb-frontend:v5
+        # image: gcr.io/google-samples/gb-frontend:v4
        resources:
          requests:
            cpu: 100m

@@ -26,13 +28,5 @@ spec:
-        env:
-        - name: GET_HOSTS_FROM
-          value: dns
-          # Using `GET_HOSTS_FROM=dns` requires your cluster to
-          # provide a dns service. As of Kubernetes 1.3, DNS is a built-in
-          # service launched automatically. However, if the cluster you are using
-          # does not have a built-in DNS service, you can instead
-          # access an environment variable to find the master
-          # service's host. To do so, comment out the 'value: dns' line above, and
-          # uncomment the line below:
-          # value: env
        ports:
        - containerPort: 80

@@ -3,16 +3,14 @@ kind: Service
metadata:
  name: frontend
  labels:
-    app: guestbook
-    tier: frontend
+    app.kubernetes.io/name: guestbook
+    app.kubernetes.io/component: frontend
spec:
-  # comment or delete the following line if you want to use a LoadBalancer
-  type: NodePort
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
-    app: guestbook
-    tier: frontend
+    app.kubernetes.io/name: guestbook
+    app.kubernetes.io/component: frontend
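
If you do uncomment `type: LoadBalancer` in this Service, you can watch for the provider to assign an external IP (a sketch; the exact output depends on your cloud provider):

```shell
kubectl get service frontend --watch
```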

@@ -0,0 +1,31 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongo
  labels:
    app.kubernetes.io/name: mongo
    app.kubernetes.io/component: backend
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: mongo
      app.kubernetes.io/component: backend
  replicas: 1
  template:
    metadata:
      labels:
        app.kubernetes.io/name: mongo
        app.kubernetes.io/component: backend
    spec:
      containers:
      - name: mongo
        image: mongo:4.2
        args:
          - --bind_ip
          - 0.0.0.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 27017
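
Once this manifest is applied, one way to wait until the single replica is ready (assuming the `mongo` Deployment name above):

```shell
kubectl rollout status deployment/mongo
```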

@@ -0,0 +1,14 @@
apiVersion: v1
kind: Service
metadata:
  name: mongo
  labels:
    app.kubernetes.io/name: mongo
    app.kubernetes.io/component: backend
spec:
  ports:
  - port: 27017
    targetPort: 27017
  selector:
    app.kubernetes.io/name: mongo
    app.kubernetes.io/component: backend
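
A quick way to check that this Service has matched the Deployment's Pod (assuming both manifests above are applied unchanged) is to list its endpoints; the output should show one Pod IP on port 27017:

```shell
kubectl get endpoints mongo
```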

@@ -1,29 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master
  labels:
    app: redis
spec:
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: k8s.gcr.io/redis:e2e  # or just image: redis
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

@@ -1,16 +0,0 @@
apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

@@ -1,40 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-slave
  labels:
    app: redis
spec:
  selector:
    matchLabels:
      app: redis
      role: slave
      tier: backend
  replicas: 2
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google_samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # Using `GET_HOSTS_FROM=dns` requires your cluster to
          # provide a dns service. As of Kubernetes 1.3, DNS is a built-in
          # service launched automatically. However, if the cluster you are using
          # does not have a built-in DNS service, you can instead
          # access an environment variable to find the master
          # service's host. To do so, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

@@ -1,15 +0,0 @@
apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend