Example DNS SRV record added. Userspace SCTP alternatives updated.

Janosi Laszlo 2018-07-03 13:51:58 +02:00 committed by janosi
parent fdf2fc20a3
commit d0ad13a09e
1 changed file with 10 additions and 5 deletions


@@ -157,7 +157,11 @@ As a user of Kubernetes I want to deploy and run my applications that use a user
The Kubernetes API modification for Services is obvious.
The selected port shall be reserved on the node, just like for TCP and UDP today. Unfortunately, Go does not have native SCTP support in the "net" package, so in order to access the kernel's SCTP API we have to introduce a new third-party vendor package. We plan to use the Go SCTP library from github.com/ishidawataru/sctp.
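A minimal sketch of how a component could accept SCTP associations via that library; the ResolveSCTPAddr/ListenSCTP names follow the library's README and may differ between versions, and the listen address and port here are illustrative:
```
// A sketch of accepting SCTP associations with the ishidawataru/sctp
// package; names follow the library's README and may differ between
// versions, and the listen address/port are illustrative.
package main

import (
	"log"

	"github.com/ishidawataru/sctp"
)

func main() {
	// Resolve a (possibly multi-homed) SCTP address and listen on it.
	laddr, err := sctp.ResolveSCTPAddr("sctp", "0.0.0.0:1234")
	if err != nil {
		log.Fatalf("resolve: %v", err)
	}
	ln, err := sctp.ListenSCTP("sctp", laddr)
	if err != nil {
		log.Fatalf("listen: %v", err)
	}
	for {
		conn, err := ln.Accept() // one connection per SCTP association
		if err != nil {
			log.Fatalf("accept: %v", err)
		}
		go func() {
			defer conn.Close()
			buf := make([]byte, 2048)
			n, err := conn.Read(buf)
			if err != nil {
				return
			}
			conn.Write(buf[:n]) // echo the received chunk back
		}()
	}
}
```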
For Services with type=LoadBalancer we have to check how the cloud provider implementations handle new protocols, and we have to make sure that if SCTP is not supported, a request for a new load balancer, firewall rule, etc. with protocol=SCTP is rejected gracefully.
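For illustration, a hedged sketch of such a graceful rejection; rejectUnsupportedSCTP is a hypothetical helper, not an existing cloud-provider API, and the "SCTP" protocol value is the one this proposal would introduce:
```
// A hypothetical helper showing the graceful rejection described above;
// a cloud-provider implementation could call it before trying to program
// a load balancer. The function name is illustrative, not an existing API.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func rejectUnsupportedSCTP(svc *corev1.Service) error {
	for _, p := range svc.Spec.Ports {
		if p.Protocol == corev1.Protocol("SCTP") { // proposed new value
			return fmt.Errorf("cloud provider does not support SCTP; rejecting load balancer for %s/%s",
				svc.Namespace, svc.Name)
		}
	}
	return nil
}

func main() {
	svc := &corev1.Service{}
	svc.Namespace, svc.Name = "default", "my-service"
	svc.Spec.Ports = []corev1.ServicePort{{Protocol: corev1.Protocol("SCTP"), Port: 1234}}
	fmt.Println(rejectUnsupportedSCTP(svc))
}
```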
Kube DNS shall support SRV records with "_sctp" as the "proto" value. According to our investigations, the DNS controller is very flexible in this respect, and it can create SRV records with any protocol name. Example:
```
_diameter._sctp.my-service.default.svc.cluster.local. 30 IN SRV 10 100 1234 my-service.default.svc.cluster.local.
```
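A client could consume such a record with Go's standard resolver; this short sketch assumes the example record above is served by the cluster DNS:
```
// A sketch of resolving the SRV record above with Go's standard resolver;
// net.LookupSRV queries _diameter._sctp.my-service.default.svc.cluster.local.
package main

import (
	"fmt"
	"log"
	"net"
)

func main() {
	_, addrs, err := net.LookupSRV("diameter", "sctp", "my-service.default.svc.cluster.local")
	if err != nil {
		log.Fatalf("SRV lookup failed: %v", err)
	}
	for _, srv := range addrs {
		fmt.Printf("target=%s port=%d priority=%d weight=%d\n",
			srv.Target, srv.Port, srv.Priority, srv.Weight)
	}
}
```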
#### SCTP in NetworkPolicy
The Kubernetes API modification for the NetworkPolicy is obvious.
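As an illustration of the intended API change, a hedged sketch of a NetworkPolicy ingress rule that allows SCTP traffic; the "SCTP" Protocol value is the addition this proposal would make next to "TCP" and "UDP":
```
// A sketch of a NetworkPolicy ingress rule allowing SCTP on port 1234.
// The "SCTP" Protocol value is the proposed addition, so it is written as
// a cast here instead of a (not yet existing) ProtocolSCTP constant.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	networkingv1 "k8s.io/api/networking/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	proto := corev1.Protocol("SCTP") // proposed new value
	port := intstr.FromInt(1234)
	rule := networkingv1.NetworkPolicyIngressRule{
		Ports: []networkingv1.NetworkPolicyPort{
			{Protocol: &proto, Port: &port},
		},
	}
	fmt.Printf("%+v\n", rule)
}
```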
@@ -186,17 +190,18 @@ The next task is to ensure that the packets that are sent by applications to the
NOTE: the handling of TCP and UDP Services does not change on those dedicated nodes, i.e. the current iptables/ipvs/etc. mechanisms can still be used for them.
We propose the following alternatives for consideration in the community:
##### Documentation only
In this alternative we would describe in the Kubernetes documentation the mutually exclusive nature of userspace and kernel-space SCTP stacks. We would highlight that the new SCTP Service feature must not be used in clusters where applications based on a userspace SCTP stack are deployed, and, in turn, that such applications cannot be deployed in clusters where applications based on the kernel-space SCTP stack have already been deployed. We would also highlight that headless SCTP Services remain allowed, because they do not trigger the creation of iptables/ipvs rules and thus do not trigger the loading of the SCTP kernel module on every node.
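A short sketch of the kind of headless SCTP Service that would remain allowed, built with the k8s.io/api types; the "SCTP" protocol value is the proposed addition:
```
// A sketch of a headless SCTP Service: ClusterIP "None" means kube-proxy
// programs no iptables/ipvs rules for it, so the SCTP kernel module is not
// loaded on the nodes. The "SCTP" protocol value is the proposed addition.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	svc := corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "my-service", Namespace: "default"},
		Spec: corev1.ServiceSpec{
			ClusterIP: corev1.ClusterIPNone, // headless: DNS records only
			Selector:  map[string]string{"app": "diameter"},
			Ports: []corev1.ServicePort{{
				Name:     "diameter",
				Protocol: corev1.Protocol("SCTP"), // proposed new value
				Port:     1234,
			}},
		},
	}
	fmt.Printf("%+v\n", svc.Spec)
}
```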
##### Dedicated nodes without ClusterIP proxy
In this alternative we would implement the option to dedicate nodes to userspace SCTP applications, but we would not implement the userspace proxy. That is:
* there would be a kube-proxy parameter that indicates to the kube-proxy that it must not create iptables or ipvs rules for SCTP Services on its local node
* there would not be a userspace proxy to direct traffic sent to the SCTP Service's ClusterIP to the actual service backends
As userspace SCTP applications could not use the benefits of Kubernetes Services before this enhancement, they already had to implement their own service discovery and SCTP traffic handling mechanisms. Following this assumption, if they continue using their current logic, they do not and will not obtain the ClusterIP from KubeDNS; instead they find their peers in some alternative way and connect to them by other means, e.g. by connecting to the peers' IP addresses directly without any ClusterIP-like indirection. That is, they will not miss the possibility to use the ClusterIP of their peers, and consequently they do not need a proxy solution on their local nodes.
We must also note that even these userspace SCTP applications can enjoy the benefits of having the peer SCTP endpoints in KubeDNS, and of having the relevant Service/Endpoints information in the Kubernetes API. For example, they can replace their own service discovery mechanisms with a KubeDNS-based one, and their custom controllers (if any) can use the state of SCTP Services/Endpoints reported via the Kubernetes API.
##### Dedicated nodes and userspace proxy
In this alternative we would implement all the tasks that we listed above: