Translate 'set up Konnectivity service' into Chinese

bryan 2020-06-30 20:43:08 +08:00
parent 4738206e67
commit 31aa397284
5 changed files with 242 additions and 0 deletions

@@ -0,0 +1,74 @@
---
title: Set up Konnectivity service
content_type: task
weight: 70
---
<!-- overview -->
The Konnectivity service provides a TCP level proxy for the control plane to cluster
communication.

## {{% heading "prerequisites" %}}
{{< include "task-tutorial-prereqs.md" >}}
<!-- steps -->
## Configure the Konnectivity service

The following steps require an egress configuration, for example:

{{< codenew file="admin/konnectivity/egress-selector-configuration.yaml" >}}

You need to configure the API Server to use the Konnectivity service
and direct the network traffic to the cluster nodes:

1. Create an egress configuration file such as `admin/konnectivity/egress-selector-configuration.yaml`.
1. Set the `--egress-selector-config-file` flag of the API Server to the path of
   your API Server egress configuration file; a sketch of what this can look
   like follows this list.
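
How you set this flag depends on how your API Server runs. As a minimal sketch,
assuming a kubeadm-style cluster where the API Server is a static Pod defined in
`/etc/kubernetes/manifests/kube-apiserver.yaml` (that path and the mount layout
below are illustrative assumptions, not part of the reference setup):

```yaml
# Hypothetical fragment of a kube-apiserver static Pod manifest.
spec:
  containers:
  - name: kube-apiserver
    command:
    - kube-apiserver
    # Point the API Server at the egress configuration created in step 1.
    - --egress-selector-config-file=/etc/kubernetes/konnectivity/egress-selector-configuration.yaml
    volumeMounts:
    # Make the egress configuration file visible inside the container.
    - name: konnectivity-egress
      mountPath: /etc/kubernetes/konnectivity
      readOnly: true
    # With the UDS transport, the API Server also needs access to the socket
    # the Konnectivity server listens on.
    - name: konnectivity-uds
      mountPath: /etc/srv/kubernetes/konnectivity-server
  volumes:
  - name: konnectivity-egress
    hostPath:
      path: /etc/kubernetes/konnectivity
      type: Directory
  - name: konnectivity-uds
    hostPath:
      path: /etc/srv/kubernetes/konnectivity-server
      type: DirectoryOrCreate
```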
Next, you need to deploy the Konnectivity server and agents.
[kubernetes-sigs/apiserver-network-proxy](https://github.com/kubernetes-sigs/apiserver-network-proxy)
is a reference implementation.

Deploy the Konnectivity server on your control plane node. The provided
`konnectivity-server.yaml` manifest assumes that the Kubernetes components are
deployed as a {{< glossary_tooltip text="static Pod" term_id="static-pod" >}}
in your cluster. If not, you can deploy the Konnectivity server as a DaemonSet.
{{< codenew file="admin/konnectivity/konnectivity-server.yaml" >}}
Then deploy the Konnectivity agents in your cluster:
{{< codenew file="admin/konnectivity/konnectivity-agent.yaml" >}}
Last, if RBAC is enabled in your cluster, create the relevant RBAC rules:
{{< codenew file="admin/konnectivity/konnectivity-rbac.yaml" >}}

admin/konnectivity/egress-selector-configuration.yaml

@@ -0,0 +1,21 @@
apiVersion: apiserver.k8s.io/v1beta1
kind: EgressSelectorConfiguration
egressSelections:
# Since we want to control the egress traffic to the cluster, we use
# "cluster" as the name. Other supported values are "etcd" and "master".
- name: cluster
  connection:
    # This controls the protocol between the API Server and the Konnectivity
    # server. Supported values are "GRPC" and "HTTPConnect". There is no
    # end user visible difference between the two modes. You need to set the
    # Konnectivity server to work in the same mode.
    proxyProtocol: GRPC
    transport:
      # This controls what transport the API Server uses to communicate with the
      # Konnectivity server. UDS is recommended if the Konnectivity server
      # is located on the same machine as the API Server. You need to configure the
      # Konnectivity server to listen on the same UDS socket.
      # The other supported transport is "tcp". You will need to set up TLS
      # config to secure the TCP transport.
      uds:
        udsName: /etc/srv/kubernetes/konnectivity-server/konnectivity-server.socket
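
The comments above mention "tcp" as the other supported transport. For
reference, a sketch of that variant in `HTTPConnect` mode; the field names
follow the `apiserver.k8s.io/v1beta1` types, while the server address and
certificate paths are placeholder assumptions:

```yaml
apiVersion: apiserver.k8s.io/v1beta1
kind: EgressSelectorConfiguration
egressSelections:
- name: cluster
  connection:
    # The Konnectivity server must be started in the matching mode.
    proxyProtocol: HTTPConnect
    transport:
      tcp:
        # Address the API Server dials to reach the Konnectivity server
        # (placeholder value).
        url: https://192.0.2.10:8131
        # TLS material securing the TCP transport; all paths are
        # illustrative assumptions.
        tlsConfig:
          caBundle: /etc/srv/kubernetes/pki/konnectivity-ca.crt
          clientCert: /etc/srv/kubernetes/pki/konnectivity-client.crt
          clientKey: /etc/srv/kubernetes/pki/konnectivity-client.key
```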

admin/konnectivity/konnectivity-agent.yaml

@@ -0,0 +1,53 @@
apiVersion: apps/v1
# Alternatively, you can deploy the agents as Deployments. It is not necessary
# to have an agent on each node.
kind: DaemonSet
metadata:
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
    k8s-app: konnectivity-agent
  namespace: kube-system
  name: konnectivity-agent
spec:
  selector:
    matchLabels:
      k8s-app: konnectivity-agent
  template:
    metadata:
      labels:
        k8s-app: konnectivity-agent
    spec:
      priorityClassName: system-cluster-critical
      tolerations:
        - key: "CriticalAddonsOnly"
          operator: "Exists"
      containers:
        - image: us.gcr.io/k8s-artifacts-prod/kas-network-proxy/proxy-agent:v0.0.8
          name: konnectivity-agent
          command: ["/proxy-agent"]
          args: [
                  "--logtostderr=true",
                  "--ca-cert=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt",
                  # Since the konnectivity server runs with hostNetwork=true,
                  # this is the IP address of the master machine.
                  "--proxy-server-host=35.225.206.7",
                  "--proxy-server-port=8132",
                  "--service-account-token-path=/var/run/secrets/tokens/konnectivity-agent-token"
                  ]
          volumeMounts:
            - mountPath: /var/run/secrets/tokens
              name: konnectivity-agent-token
          livenessProbe:
            httpGet:
              port: 8093
              path: /healthz
            initialDelaySeconds: 15
            timeoutSeconds: 15
      serviceAccountName: konnectivity-agent
      volumes:
        - name: konnectivity-agent-token
          projected:
            sources:
              - serviceAccountToken:
                  path: konnectivity-agent-token
                  audience: system:konnectivity-server
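
As the leading comment notes, the agents can also run as a Deployment rather
than a DaemonSet, since one agent per node is not required. A minimal sketch of
that variant, reusing the values from the DaemonSet above (the replica count is
an illustrative choice):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: konnectivity-agent
  namespace: kube-system
  labels:
    k8s-app: konnectivity-agent
spec:
  # Illustrative choice; a small fixed number of agents is enough.
  replicas: 2
  selector:
    matchLabels:
      k8s-app: konnectivity-agent
  template:
    metadata:
      labels:
        k8s-app: konnectivity-agent
    spec:
      priorityClassName: system-cluster-critical
      serviceAccountName: konnectivity-agent
      containers:
      - name: konnectivity-agent
        image: us.gcr.io/k8s-artifacts-prod/kas-network-proxy/proxy-agent:v0.0.8
        command: ["/proxy-agent"]
        args:
        - --logtostderr=true
        - --ca-cert=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        # As in the DaemonSet, replace this with the address of your own
        # control plane machine.
        - --proxy-server-host=35.225.206.7
        - --proxy-server-port=8132
        - --service-account-token-path=/var/run/secrets/tokens/konnectivity-agent-token
        volumeMounts:
        - mountPath: /var/run/secrets/tokens
          name: konnectivity-agent-token
        livenessProbe:
          httpGet:
            port: 8093
            path: /healthz
          initialDelaySeconds: 15
          timeoutSeconds: 15
      volumes:
      - name: konnectivity-agent-token
        projected:
          sources:
          - serviceAccountToken:
              path: konnectivity-agent-token
              audience: system:konnectivity-server
```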

admin/konnectivity/konnectivity-rbac.yaml

@@ -0,0 +1,24 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:konnectivity-server
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: system:konnectivity-server
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: konnectivity-agent
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile

admin/konnectivity/konnectivity-server.yaml

@@ -0,0 +1,70 @@
apiVersion: v1
kind: Pod
metadata:
  name: konnectivity-server
  namespace: kube-system
spec:
  priorityClassName: system-cluster-critical
  hostNetwork: true
  containers:
  - name: konnectivity-server-container
    image: us.gcr.io/k8s-artifacts-prod/kas-network-proxy/proxy-server:v0.0.8
    command: ["/proxy-server"]
    args: [
            "--log-file=/var/log/konnectivity-server.log",
            "--logtostderr=false",
            "--log-file-max-size=0",
            # This needs to be consistent with the value set in egressSelectorConfiguration.
            "--uds-name=/etc/srv/kubernetes/konnectivity-server/konnectivity-server.socket",
            # The following two lines assume the Konnectivity server is
            # deployed on the same machine as the apiserver, and the certs and
            # key of the API Server are at the specified location.
            "--cluster-cert=/etc/srv/kubernetes/pki/apiserver.crt",
            "--cluster-key=/etc/srv/kubernetes/pki/apiserver.key",
            # This needs to be consistent with the value set in egressSelectorConfiguration.
            "--mode=grpc",
            "--server-port=0",
            "--agent-port=8132",
            "--admin-port=8133",
            "--agent-namespace=kube-system",
            "--agent-service-account=konnectivity-agent",
            "--kubeconfig=/etc/srv/kubernetes/konnectivity-server/kubeconfig",
            "--authentication-audience=system:konnectivity-server"
            ]
    livenessProbe:
      httpGet:
        scheme: HTTP
        host: 127.0.0.1
        port: 8133
        path: /healthz
      initialDelaySeconds: 30
      timeoutSeconds: 60
    ports:
    - name: agentport
      containerPort: 8132
      hostPort: 8132
    - name: adminport
      containerPort: 8133
      hostPort: 8133
    volumeMounts:
    - name: varlogkonnectivityserver
      mountPath: /var/log/konnectivity-server.log
      readOnly: false
    - name: pki
      mountPath: /etc/srv/kubernetes/pki
      readOnly: true
    - name: konnectivity-uds
      mountPath: /etc/srv/kubernetes/konnectivity-server
      readOnly: false
  volumes:
  - name: varlogkonnectivityserver
    hostPath:
      path: /var/log/konnectivity-server.log
      type: FileOrCreate
  - name: pki
    hostPath:
      path: /etc/srv/kubernetes/pki
  - name: konnectivity-uds
    hostPath:
      path: /etc/srv/kubernetes/konnectivity-server
      type: DirectoryOrCreate