fix some typos (#2817)
* modified document:csi-migration.md
* modified document:rescheduling.md
* fix a typo
* fix a typo
* fix a typo
* fix a typo
* fix a typo
* fix a typo
* fix a typo
* fix a typo
* fix a typo
* revert a typo
commit e07eec742c
parent e59e666e34
@@ -197,7 +197,7 @@ Among these controller loops, the following are cloud provider dependent.
 
 The nodeIpamController uses the cloudprovider to handle cloud specific CIDR assignment of a node. Currently the only
 cloud provider using this functionality is GCE. So the current plan is to break this functionality out of the common
-verion of the nodeIpamController. Most cloud providers can just run the default version of this controller. However any
+version of the nodeIpamController. Most cloud providers can just run the default version of this controller. However any
 cloud provider which needs cloud specific version of this functionality and disable the default version running in the
 KCM and run their own version in the CCM.
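For reference, the split this hunk describes boils down to a pair of controller-manager invocations. A minimal sketch, assuming the standard `--controllers` flag (where `-name` disables a controller) and a hypothetical provider-built CCM binary name:

```shell
# Disable the default nodeipam controller in the kube-controller-manager;
# "*" keeps every other controller enabled, "-nodeipam" switches this one off.
kube-controller-manager --controllers=*,-nodeipam

# Run the cloud specific version inside the provider's cloud-controller-manager.
# The binary name and the "nodeipam" registration below are assumptions; an
# out-of-tree CCM picks its own names when it registers controllers with the
# controller manager framework.
gce-cloud-controller-manager --cloud-provider=gce --controllers=*,nodeipam
```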
@@ -198,7 +198,7 @@ manager framework its own K8s/K8s Staging repo.
 It should be generally possible for cloud providers to determine where a controller runs and even over-ride specific
 controller functionality. Please note that if a cloud provider exercises this possibility it is up to that cloud provider
 to keep their custom controller conformant to the K8s/K8s standard. This means any controllers may be run in either KCM
-or CCM. As an example the NodeIpamController, will be shared acrosss K8s/K8s and K8s/cloud-provider-gce, both in the
+or CCM. As an example the NodeIpamController, will be shared across K8s/K8s and K8s/cloud-provider-gce, both in the
 short and long term. Currently it needs to take a cloud provider to allow it to do GCE CIDR management. We could handle
 this by leaving the cloud provider interface with the controller manager framework code. The GCE controller manager could
 then inject the cloud provider for that controller. For everyone else (especially the KCM) NodeIpamController is
@@ -256,7 +256,7 @@ With the additions needed in the short term to make this work; the Staging area
 - Sample-Controller
 
 When we complete the cloud provider work, several of the new modules in staging should be moving to their permanent new
-home in the appropriate K8s/Cloud-provider repoas they will no longer be needed in the K8s/K8s repo. There are however
+home in the appropriate K8s/Cloud-provider repos they will no longer be needed in the K8s/K8s repo. There are however
 other new modules we will add which continue to be needed by both K8s/K8s and K8s/Cloud-provider. Those modules will
 remain in Staging until the Staging initiative completes and they are moved into some other Kubernetes shared code repo.
 - Api
@@ -149,7 +149,7 @@ There are 3 proxy modes in ipvs - NAT (masq), IPIP and DR. Only NAT mode support
 ```shell
 # ipvsadm -ln
 IP Virtual Server version 1.2.1 (size=4096)
-Prot LocalAddress:Port Scheduler Flags
+Port LocalAddress:Port Scheduler Flags
   -> RemoteAddress:Port Forward Weight ActiveConn InActConn
 TCP 10.102.128.4:3080 rr
   -> 10.244.0.235:8080 Masq 1 0 0
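The `ipvsadm -ln` listing in this hunk is what a ClusterIP service looks like once kube-proxy programs it as an IPVS virtual server in NAT (Masq) mode. A minimal sketch of reproducing it, assuming the upstream kube-proxy flags and a made-up kubeconfig path:

```shell
# Run kube-proxy in IPVS mode with the round-robin scheduler that produces
# the "rr" entries shown above (the kubeconfig path is an assumption).
kube-proxy --proxy-mode=ipvs --ipvs-scheduler=rr \
  --kubeconfig=/var/lib/kube-proxy/kubeconfig

# On the node, list the virtual servers and their real-server backends;
# each ClusterIP:port should show up as a virtual service with Masq forwarding.
ipvsadm -ln
```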
@@ -177,7 +177,7 @@ And, IPVS proxier will maintain 5 kubernetes-specific chains in nat table
 **1. kube-proxy start with --masquerade-all=true**
 
 If kube-proxy starts with `--masquerade-all=true`, the IPVS proxier will masquerade all traffic accessing service ClusterIP, which behaves same as what iptables proxier does.
-Suppose there is a serivice with Cluster IP `10.244.5.1` and port `8080`:
+Suppose there is a service with Cluster IP `10.244.5.1` and port `8080`:
 
 ```shell
 # iptables -t nat -nL
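The rules this hunk refers to can be inspected directly on a node. A sketch, assuming kube-proxy was started with `--proxy-mode=ipvs --masquerade-all=true`; the chain names in the comment are examples of what to look for rather than the document's full list of five:

```shell
# Dump the nat table and look for the kube-proxy owned chains
# (e.g. KUBE-SERVICES, KUBE-POSTROUTING, KUBE-MARK-MASQ).
iptables -t nat -nL

# The IPVS proxier keeps matched ClusterIP:port pairs in ipsets instead of
# per-service iptables rules, so the sets are worth listing as well.
ipset list
```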
@@ -45,7 +45,7 @@ superseded-by:
 
 The goal of the SCTP support feature is to enable the usage of the SCTP protocol in Kubernetes [Service][], [NetworkPolicy][], and [ContainerPort][]as an additional protocol value option beside the current TCP and UDP options.
 SCTP is an IETF protocol specified in [RFC4960][], and it is used widely in telecommunications network stacks.
-Once SCTP support is added as a new protocol option those applications that require SCTP as L4 protocol on their interfaces can be deployed on Kubernetes clusters on a more straightforward way. For example they can use the native kube-dns based service discvery, and their communication can be controlled on the native NetworkPolicy way.
+Once SCTP support is added as a new protocol option those applications that require SCTP as L4 protocol on their interfaces can be deployed on Kubernetes clusters on a more straightforward way. For example they can use the native kube-dns based service discovery, and their communication can be controlled on the native NetworkPolicy way.
 
 [Service]: https://kubernetes.io/docs/concepts/services-networking/service/
 [NetworkPolicy]:
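As an illustration of the protocol value this section proposes, a minimal Service sketch; the resource names are made up and the cluster is assumed to have the (at the time alpha) SCTP support enabled:

```shell
# Hypothetical example: a ClusterIP Service using the proposed SCTP value
# beside the existing TCP and UDP options. Names and ports are made up.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: sctp-echo
spec:
  selector:
    app: sctp-echo
  ports:
  - protocol: SCTP
    port: 30100
    targetPort: 30100
EOF
```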
@@ -68,7 +68,7 @@ It is also a goal to enable ingress SCTP connections from clients outside the Ku
 
 It is not a goal here to add SCTP support to load balancers that are provided by cloud providers. The Kubernetes side implementation will not restrict the usage of SCTP as the protocol for the Services with type=LoadBalancer, but we do not implement the support of SCTP into the cloud specific load balancer implementations.
 
-It is not a goal to support multi-homed SCTP associations. Such a support also depends on the ability to manage multiple IP addresses for a pod, and in the case of Services with ClusterIP or NodePort the support of multi-homed assocations would also require the support of NAT for multihomed associations in the SCTP related NF conntrack modules.
+It is not a goal to support multi-homed SCTP associations. Such a support also depends on the ability to manage multiple IP addresses for a pod, and in the case of Services with ClusterIP or NodePort the support of multi-homed associations would also require the support of NAT for multihomed associations in the SCTP related NF conntrack modules.
 
 ## Proposal
 
@@ -148,7 +148,7 @@ spec:
 
 #### SCTP port accessible from outside the cluster
 
-As a user of Kubernetes I want to have the option that clien applications that reside outside of the cluster can access my SCTP based services that run in the cluster.
+As a user of Kubernetes I want to have the option that client applications that reside outside of the cluster can access my SCTP based services that run in the cluster.
 
 Example:
 ```
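The document's own example for this user story is cut off by the hunk boundary; as an illustration only, a NodePort Service is one way such an SCTP port could be reached from outside the cluster, assuming the nodes and the network in between pass SCTP. All names and port numbers below are made up:

```shell
# Hypothetical sketch: expose the SCTP port on every node so external clients
# can reach it at <nodeIP>:30200 over SCTP.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: sctp-echo-external
spec:
  type: NodePort
  selector:
    app: sctp-echo
  ports:
  - protocol: SCTP
    port: 30100
    targetPort: 30100
    nodePort: 30200
EOF
```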