Broken Links fix for 2696

FILE: ./sig-cloud-provider/CHARTER.md
[✖] https://github.com/kubernetes/community/blob/master/keps/0002-controller-manager.md
→ Status: 404
[✖] https://github.com/kubernetes/community/blob/master/keps/0002-controller-manager.md#repository-requirements
→ Status: 404
FILE: ./github-management/setting-up-cla-check.md
[✖] https://identity.linuxfoundation.org/lfcla/github/postreceive?group=284&comment=no&target=https://identity.linuxfoundation.org/projects/cncf
→ Status: 404
FILE: ./sig-scalability/blogs/k8s-services-scalability-issues.md
[✖] https://cilium.io/blog/2018/04/17/why-is-the-kernel-community-replacing-iptables)
→ Status: 404
FILE: ./sig-scalability/goals.md
[✖] provider-configs.md
→ Status: 400 Error: ENOENT: no such file or directory, access '/home/jorge/src/kubernetes/community/sig-scalability/provider-configs.md'
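The `[✖] <link>` / `→ Status: <code>` lines above are typical Markdown link-checker output (the format resembles the npm `markdown-link-check` tool, though the commit does not name the checker). As a rough illustration only, a minimal Python sketch of the two failure modes reported here: an HTTP status probe for absolute URLs, and filesystem resolution for relative links, which is where the `ENOENT` comes from.

```python
import os
import re
import urllib.error
import urllib.request

# Illustrative sketch only; the commit does not say which checker was used.
LINK_RE = re.compile(r"\[[^\]]*\]\(([^)\s]+)\)")  # naive Markdown link matcher

def check_links(md_path):
    base_dir = os.path.dirname(os.path.abspath(md_path))
    with open(md_path, encoding="utf-8") as f:
        targets = LINK_RE.findall(f.read())
    for target in targets:
        if target.startswith(("http://", "https://")):
            # Absolute URLs: probe and report the HTTP status code.
            try:
                req = urllib.request.Request(target, method="HEAD")
                status = urllib.request.urlopen(req, timeout=10).status
            except urllib.error.HTTPError as err:
                status = err.code
            mark = "✓" if status < 400 else "✖"
        else:
            # Relative links resolve against the containing file's directory;
            # a missing file surfaces as the ENOENT error shown above.
            exists = os.path.exists(os.path.join(base_dir, target))
            status = 200 if exists else "400 Error: ENOENT"
            mark = "✓" if exists else "✖"
        print(f"[{mark}] {target}\n→ Status: {status}")
```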
ramnar 2018-11-28 01:37:08 +05:30
parent 4fc2b6bf79
commit 249a50eb5e
4 changed files with 5 additions and 5 deletions

github-management/setting-up-cla-check.md

@@ -11,7 +11,7 @@ setup the Linux Foundation CNCF CLA check for your repositories, please read on.
 1. Go to the settings for your organization or webhook, and choose Webhooks from
    the menu, then "Add webhook"
    - Payload URL:
-      https://identity.linuxfoundation.org/lfcla/github/postreceive?group=284&comment=no&target=https://identity.linuxfoundation.org/projects/cncf
+      `https://identity.linuxfoundation.org/lfcla/github/postreceive?group=284&comment=no&target=https://identity.linuxfoundation.org/projects/cncf`
    - `group=284` specifies the ID of the CNCF project authorized committers
      group in our CLA system.
    - `comment=no` specifies that our system should not post help comments
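Wrapping the payload URL in backticks is what clears this report entry: the checker only probes Markdown links and bare URLs, and the webhook endpoint answers the checker's plain GET with a 404 (presumably because it only accepts webhook POSTs), so turning the URL into a code span takes it out of the checker's reach. For reference, the URL decomposes into the three documented query parameters; a small Python sketch, not part of the commit, that rebuilds it:

```python
from urllib.parse import urlencode

# Rebuild the CLA webhook payload URL from its documented query parameters.
base = "https://identity.linuxfoundation.org/lfcla/github/postreceive"
params = {
    "group": 284,      # ID of the CNCF authorized-committers group
    "comment": "no",   # do not post help comments
    "target": "https://identity.linuxfoundation.org/projects/cncf",
}
# safe=":/" keeps the nested URL in `target` readable instead of percent-encoded.
print(f"{base}?{urlencode(params, safe=':/')}")
```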

sig-cloud-provider/CHARTER.md

@@ -19,7 +19,7 @@ The Cloud Provider SIG ensures that the Kubernetes ecosystem is evolving in a wa
 * Developing future functionality in Kubernetes to support use cases common to all providers while also allowing custom and pluggable implementations when required, some examples include but are not limited to:
   * Extendable node status and machine states based on provider
   * Extendable node address types based on provider
-  * See also [Cloud Controller Manager KEP](https://github.com/kubernetes/community/blob/master/keps/0002-controller-manager.md)
+  * See also [Cloud Controller Manager KEP](https://github.com/kubernetes/community/blob/master/keps/sig-cloud-provider/0002-cloud-controller-manager.md)
 * The collection of user experience reports from Kubernetes operators running on provider subprojects; and the delivery of roadmap information to SIG PM
 ## Organizational Management
@@ -32,7 +32,7 @@ The Cloud Provider SIG ensures that the Kubernetes ecosystem is evolving in a wa
 ## Subproject Creation
 Each Kubernetes provider will (eventually) be a subproject under SIG Cloud Provider. To add new sub projects (providers), SIG Cloud Provider will maintain an open list of requirements that must be satisfied.
-The current requirements can be seen [here](https://github.com/kubernetes/community/blob/master/keps/0002-controller-manager.md#repository-requirements). Each provider subproject is entitled to create 1..N repositories related to cluster turn up or operation on their platform, subject to technical standards set by SIG Cloud Provider.
+The current requirements can be seen [here](https://github.com/kubernetes/community/blob/master/keps/sig-cloud-provider/0002-cloud-controller-manager.md#repository-requirements). Each provider subproject is entitled to create 1..N repositories related to cluster turn up or operation on their platform, subject to technical standards set by SIG Cloud Provider.
 Creation of a repository SHOULD follow the KEP process to preserve the motivation for the repository and any additional instructions for how other SIGs (e.g SIG Documentation and SIG Release) should interact with the repository
 Subprojects that fall under SIG Cloud Provider may also be features in Kubernetes that is requested or needed by all, or at least a large majority of providers. The creation process for these subprojects will follow the usual KEP process.

sig-scalability/blogs/k8s-services-scalability-issues.md

@@ -21,7 +21,7 @@ Iptables can be slow in packet processing when a large number of services exist.
 - Moving from iptables to IPVS may help here if that works stably. We have the IPVS alternative [implemented](https://github.com/kubernetes/kubernetes/pull/46580), but it still hasn't gone to GA.
 - For a long time the official kernel upstream answer for this issue was that nftables was going to solve all iptables-related scalability problems (both with packet processing and with rule changes), and there's now an out-of-tree nftables kube-proxy backend too ([#62720](https://github.com/kubernetes/kubernetes/issues/62720)).
-- There's also an alternative plan to fix the packet processing speed problems by rewriting iptables to use eBPF inside the kernel (see - https://cilium.io/blog/2018/04/17/why-is-the-kernel-community-replacing-iptables).
+- There's also an alternative plan to fix the packet processing speed problems by rewriting iptables to use eBPF inside the kernel (see - https://cilium.io/blog/2018/04/17/why-is-the-kernel-community-replacing-iptables ).
 ## Slow/failing iptables-restore operations when large number of rules exist
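The blog link itself was never dead; the trailing `)` was the problem. Because the URL is bare (no Markdown link syntax), autolinkers and link checkers consume every non-whitespace character after `https://`, so the closing parenthesis of "(see - ...)" became part of the requested URL and produced the 404. The fix simply inserts a space before the parenthesis. A tiny illustration with a naive bare-URL matcher (the regex is ours, not the checker's):

```python
import re

# Naive bare-URL matcher: runs until the first whitespace character,
# so trailing punctuation like ')' is swallowed into the URL.
BARE_URL = re.compile(r"https?://\S+")

before = "(see - https://cilium.io/blog/2018/04/17/why-is-the-kernel-community-replacing-iptables)."
after = "(see - https://cilium.io/blog/2018/04/17/why-is-the-kernel-community-replacing-iptables )."

print(BARE_URL.search(before).group())  # ends in ")." -> request 404s
print(BARE_URL.search(after).group())   # clean URL    -> request succeeds
```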

sig-scalability/goals.md

@@ -72,7 +72,7 @@ NOTES:
 ## Control Plane Configurations for Testing
-Configuration of the control plane for cluster testing varies by provider, and there are multiple reasonable configurations. Discussion and guideline of control plane configuration options and standards are documented [here](provider-configs.md).
+Configuration of the control plane for cluster testing varies by provider, and there are multiple reasonable configurations. Discussion and guideline of control plane configuration options and standards are documented [here](configs-and-limits/provider-configs.md).
 ## Open Questions
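This last hunk repoints a relative link after `provider-configs.md` moved into `configs-and-limits/`. Relative Markdown links resolve against the directory of the file that contains them, which is exactly the resolution the checker performed when it reported `ENOENT` above. A short sketch of that resolution (the helper function is ours; the paths come from the checker output):

```python
from pathlib import Path

def resolve_relative_link(md_file: str, link: str) -> Path:
    # Relative Markdown links resolve against the containing file's directory.
    return (Path(md_file).parent / link).resolve()

md = "/home/jorge/src/kubernetes/community/sig-scalability/goals.md"
print(resolve_relative_link(md, "provider-configs.md"))
# .../sig-scalability/provider-configs.md -- moved away, hence ENOENT
print(resolve_relative_link(md, "configs-and-limits/provider-configs.md"))
# .../sig-scalability/configs-and-limits/provider-configs.md -- the new home
```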