Merge pull request #640 from euank/fix-broken-version-links
Update github tree links to use a valid version
This commit is contained in:
commit 71637ae006

@@ -18,13 +18,13 @@ The kubelet has a single default network plugin, and a default network common to
## Network Plugin Requirements

-Besides providing the [`NetworkPlugin` interface](https://github.com/kubernetes/kubernetes/tree/{{page.version}}/pkg/kubelet/network/plugins.go) to configure and clean up pod networking, the plugin may also need specific support for kube-proxy. The iptables proxy obviously depends on iptables, and the plugin may need to ensure that container traffic is made available to iptables. For example, if the plugin connects containers to a Linux bridge, the plugin must set the `net/bridge/bridge-nf-call-iptables` sysctl to `1` to ensure that the iptables proxy functions correctly. If the plugin does not use a Linux bridge (but instead something like Open vSwitch or some other mechanism) it should ensure container traffic is appropriately routed for the proxy.
+Besides providing the [`NetworkPlugin` interface](https://github.com/kubernetes/kubernetes/tree/{{page.version}}.0/pkg/kubelet/network/plugins.go) to configure and clean up pod networking, the plugin may also need specific support for kube-proxy. The iptables proxy obviously depends on iptables, and the plugin may need to ensure that container traffic is made available to iptables. For example, if the plugin connects containers to a Linux bridge, the plugin must set the `net/bridge/bridge-nf-call-iptables` sysctl to `1` to ensure that the iptables proxy functions correctly. If the plugin does not use a Linux bridge (but instead something like Open vSwitch or some other mechanism) it should ensure container traffic is appropriately routed for the proxy.
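
For illustration only (not part of the diff above): a minimal sketch of how an operator, or a bridge-based plugin, might check and enable the `net/bridge/bridge-nf-call-iptables` sysctl referenced in this paragraph. The module name and the persistence file path are assumptions that vary by kernel and distribution.

```shell
# Make sure the bridge netfilter hook is available (module name varies by kernel).
modprobe br_netfilter 2>/dev/null || true

# Check whether bridged container traffic is currently passed to iptables.
cat /proc/sys/net/bridge/bridge-nf-call-iptables

# If it prints 0, enable it so the iptables proxy can see bridged traffic.
sysctl -w net.bridge.bridge-nf-call-iptables=1

# Optionally persist the setting across reboots (location varies by distribution).
echo 'net.bridge.bridge-nf-call-iptables = 1' > /etc/sysctl.d/99-kube-bridge.conf
```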

By default, if no kubelet network plugin is specified, the `noop` plugin is used, which sets `net/bridge/bridge-nf-call-iptables=1` to ensure simple configurations (like Docker with a bridge) work correctly with the iptables proxy.

### Exec

-Place plugins in `network-plugin-dir/plugin-name/plugin-name`, i.e. if you have a bridge plugin and `network-plugin-dir` is `/usr/lib/kubernetes`, you'd place the bridge plugin executable at `/usr/lib/kubernetes/bridge/bridge`. See [this comment](https://github.com/kubernetes/kubernetes/tree/{{page.version}}/pkg/kubelet/network/exec/exec.go) for more details.
+Place plugins in `network-plugin-dir/plugin-name/plugin-name`, i.e. if you have a bridge plugin and `network-plugin-dir` is `/usr/lib/kubernetes`, you'd place the bridge plugin executable at `/usr/lib/kubernetes/bridge/bridge`. See [this comment](https://github.com/kubernetes/kubernetes/tree/{{page.version}}.0/pkg/kubelet/network/exec/exec.go) for more details.
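
As a hedged illustration of the layout described above: installing a hypothetical exec plugin named `bridge` might look like the sketch below. The source binary name and the kubelet flags shown are assumptions for this example, not taken from the diff; check the flags supported by your kubelet version.

```shell
# Hypothetical plugin binary "my-bridge-plugin"; the on-disk layout follows
# the network-plugin-dir/plugin-name/plugin-name convention from the paragraph above.
mkdir -p /usr/lib/kubernetes/bridge
cp ./my-bridge-plugin /usr/lib/kubernetes/bridge/bridge
chmod +x /usr/lib/kubernetes/bridge/bridge

# The kubelet is then pointed at the plugin directory (flag names assumed here):
kubelet --network-plugin=exec --network-plugin-dir=/usr/lib/kubernetes
```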

### CNI

@@ -38,4 +38,4 @@ The plugin requires a few things:
* The standard CNI `bridge` and `host-local` plugins to be placed in `/opt/cni/bin`.
* Kubelet must be run with the `--network-plugin=kubenet` argument to enable the plugin
* Kubelet must also be run with the `--reconcile-cidr` argument to ensure the IP subnet assigned to the node by configuration or the controller-manager is propagated to the plugin
* The node must be assigned an IP subnet through either the `--pod-cidr` kubelet command-line option or the `--allocate-node-cidrs=true --cluster-cidr=<cidr>` controller-manager command-line options (a combined sketch follows this list).
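
For illustration only, a sketch of how these requirements might fit together on one node when kubenet is in use; the CIDR value and any omitted flags are placeholders, not recommendations.

```shell
# controller-manager allocates each node a pod subnet out of the cluster CIDR
# (10.244.0.0/16 is a placeholder value):
kube-controller-manager --allocate-node-cidrs=true --cluster-cidr=10.244.0.0/16

# kubelet enables kubenet and propagates the node's assigned subnet to the plugin:
kubelet --network-plugin=kubenet --reconcile-cidr

# the standard CNI bridge and host-local binaries must already be installed:
ls /opt/cni/bin/bridge /opt/cni/bin/host-local
```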

@@ -168,7 +168,7 @@ kubectl create -f https://raw.githubusercontent.com/projectcalico/calico-cni/k8s
## Launch other Services With Calico-Kubernetes

-At this point, you have a fully functioning cluster running on Kubernetes with a master and two nodes networked with Calico. You can now follow any of the [standard documentation](https://github.com/kubernetes/kubernetes/tree/{{page.version}}/examples/) to set up other services on your cluster.
+At this point, you have a fully functioning cluster running on Kubernetes with a master and two nodes networked with Calico. You can now follow any of the [standard documentation](https://github.com/kubernetes/kubernetes/tree/{{page.version}}.0/examples/) to set up other services on your cluster.

## Connectivity to outside the cluster

@@ -436,7 +436,7 @@ kubectl create -f https://raw.githubusercontent.com/projectcalico/calico-cni/k8s
## Launch other Services With Calico-Kubernetes

-At this point, you have a fully functioning cluster running on Kubernetes with a master and two nodes networked with Calico. You can now follow any of the [standard documentation](https://github.com/kubernetes/kubernetes/tree/{{page.version}}/examples/) to set up other services on your cluster.
+At this point, you have a fully functioning cluster running on Kubernetes with a master and two nodes networked with Calico. You can now follow any of the [standard documentation](https://github.com/kubernetes/kubernetes/tree/{{page.version}}.0/examples/) to set up other services on your cluster.

## Connectivity to outside the cluster