Merge branch 'rancher:master' into master

This commit is contained in:
Jen Travinski 2021-08-09 13:55:08 -04:00 committed by GitHub
commit dabc16e26d
59 changed files with 230 additions and 126 deletions

View File

@ -23,7 +23,12 @@ Windows
./scripts/dev-windows.ps1
```
and then navigate to http://localhost:9001/. You can customize the port by passing it as an argument:
and then navigate to http://localhost:9001/. Click the link on the card associated with a given Rancher version to
access the documentation. For example, clicking the link on the Rancher v2.5 card will redirect to
http://localhost:9001/rancher/v2.5/en/. Note that due to the way the Rancher website is built, links in the top
navigation panel will not work.
You can customize the port by passing it as an argument:
Linux
```bash

View File

@ -46,6 +46,11 @@ When using this method to install K3s, the following environment variables can b
| `INSTALL_K3S_CHANNEL_URL` | Channel URL for fetching K3s download URL. Defaults to https://update.k3s.io/v1-release/channels. |
| `INSTALL_K3S_CHANNEL` | Channel to use for fetching K3s download URL. Defaults to "stable". Options include: `stable`, `latest`, `testing`. |
This example shows where to place the aforementioned environment variables as options (after the pipe):
```
curl -sfL https://get.k3s.io | INSTALL_K3S_CHANNEL=latest sh -
```
Environment variables which begin with `K3S_` will be preserved for the systemd and openrc services to use.
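For example, a `K3S_` variable can be combined with the install options shown above. The following sketch assumes the `K3S_KUBECONFIG_MODE` variable is available in your K3s version; it sets the file mode of the generated kubeconfig:
```
curl -sfL https://get.k3s.io | INSTALL_K3S_CHANNEL=latest K3S_KUBECONFIG_MODE="644" sh -
```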

View File

@ -28,11 +28,11 @@ If you don't install CoreDNS, you will need to install a cluster DNS provider yo
[Traefik](https://traefik.io/) is a modern HTTP reverse proxy and load balancer made to deploy microservices with ease. It simplifies networking complexity while designing, deploying, and running applications.
Traefik is deployed by default when starting the server. For more information see [Auto Deploying Manifests]({{<baseurl>}}/k3s/latest/en/advanced/#auto-deploying-manifests). The default config file is found in `/var/lib/rancher/k3s/server/manifests/traefik.yaml` and any changes made to this file will automatically be deployed to Kubernetes in a manner similar to `kubectl apply`.
Traefik is deployed by default when starting the server. For more information see [Auto Deploying Manifests]({{<baseurl>}}/k3s/latest/en/advanced/#auto-deploying-manifests). The default config file is found in `/var/lib/rancher/k3s/server/manifests/traefik.yaml`.
The Traefik ingress controller will use ports 80 and 443 on the host (i.e. these will not be usable for HostPort or NodePort).
Traefik can be configured by editing the `traefik.yaml` file. To prevent k3s from using or overwriting the modified version, deploy k3s with `--no-deploy traefik` and store the modified copy in the `k3s/server/manifests` directory. For more information, refer to the official [Traefik for Helm Configuration Parameters.](https://github.com/helm/charts/tree/master/stable/traefik#configuration)
The `traefik.yaml` file should not be edited manually, because k3s would overwrite it again once it is restarted. Instead, you can customize Traefik by creating an additional `HelmChartConfig` manifest in `/var/lib/rancher/k3s/server/manifests`. For more details and an example, see [Customizing Packaged Components with HelmChartConfig]({{<baseurl>}}/k3s/latest/en/helm/#customizing-packaged-components-with-helmchartconfig). For more information on the possible configuration values, refer to the official [Traefik Helm Configuration Parameters](https://github.com/traefik/traefik-helm-chart/tree/master/traefik).
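As a hedged sketch of the `HelmChartConfig` approach described above, a manifest such as the following could be placed in `/var/lib/rancher/k3s/server/manifests/` (the file name and values shown are illustrative; consult the Traefik Helm chart for the keys supported by your chart version):
```yaml
# illustrative example only; adjust values to your Traefik chart version
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: traefik
  namespace: kube-system
spec:
  valuesContent: |-
    additionalArguments:
      - "--log.level=DEBUG"
```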
To disable it, start each server with the `--disable traefik` option.

View File

@ -302,6 +302,54 @@ spec:
- Ingress
```
If you are using the default traefik ingress controller with k3s, it will also be blocked by default, so the following network policies must be added to allow traffic to both the traefik pods and the svclb pods in the `kube-system` namespace. Kubernetes v1.20 and below uses the `app: traefik` label, while v1.21 and above uses `app.kubernetes.io/name: traefik`, so remove the policy that does not match your Kubernetes version.
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all-svclbtraefik-ingress
  namespace: kube-system
spec:
  podSelector:
    matchLabels:
      app: svclb-traefik
  ingress:
  - {}
  policyTypes:
  - Ingress
---
# 1.20
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all-traefik-v120-ingress
  namespace: kube-system
spec:
  podSelector:
    matchLabels:
      app: traefik
  ingress:
  - {}
  policyTypes:
  - Ingress
---
# 1.21
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all-traefik-v121-ingress
  namespace: kube-system
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: traefik
  ingress:
  - {}
  policyTypes:
  - Ingress
```
> **Note:** Operators must manage network policies as normal for additional namespaces that are created.
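As a usage sketch, the policies above can be saved to a manifest (the file name here is illustrative) and applied with kubectl:
```
kubectl apply -f allow-traefik-and-svclb.yaml
```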
## Known Issues

View File

@ -5,7 +5,7 @@ description: RancherOS is a simplified Linux distribution built from containers,
weight: 1
---
> RancherOS 1.x is currently in a maintain-only-as-essential mode. It is no longer being actively maintained at a code level other than addressing critical or security fixes. For more information about the support status of RancherOS, see [this page.](https://support.rancher.com/hc/en-us/articles/360041771072#development-status-0-0)
> RancherOS 1.x is currently in a maintain-only-as-essential mode. It is no longer being actively maintained at a code level other than addressing critical or security fixes. For more information about the support status of RancherOS, see [this page.](https://rancher.zendesk.com/hc/en-us/articles/360041771072-Could-you-help-us-understand-the-development-and-support-status-of-RancherOS-for-2020-and-beyond-)
RancherOS is the smallest, easiest way to run Docker in production. Every process in RancherOS is a container managed by Docker. This includes system services such as `udev` and `syslog`. Because it only includes the services necessary to run Docker, RancherOS is significantly smaller than most traditional operating systems. By removing unnecessary libraries and services, requirements for security patches and other maintenance are also reduced. This is possible because, with Docker, users typically package all necessary libraries into their containers.

View File

@ -71,7 +71,7 @@ spec:
```
Directive | Description
---------|----------|
`apiVersion: autoscaling/v2beta1` | The version of the Kubernetes `autoscaling` API group in use. This example manifest uses the beta version, so scaling by CPU and memory is enabled. |
`name: hello-world` | Indicates that HPA is performing autoscaling for the `hello-world` deployment. |
@ -172,7 +172,7 @@ For HPA to use custom metrics from Prometheus, package [k8s-prometheus-adapter](
I0724 10:18:45.696703 1 round_trippers.go:445] Content-Type: application/json
I0724 10:18:45.696706 1 round_trippers.go:445] Content-Length: 2581
I0724 10:18:45.696766 1 request.go:836] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/default/pods","resourceVersion":"6237"},"items":[{"metadata":{"name":"hello-world-54764dfbf8-q6l82","generateName":"hello-world-54764dfbf8-","namespace":"default","selfLink":"/api/v1/namespaces/default/pods/hello-world-54764dfbf8-q6l82","uid":"484cb929-8f29-11e8-99d2-067cac34e79c","resourceVersion":"4066","creationTimestamp":"2018-07-24T10:06:50Z","labels":{"app":"hello-world","pod-template-hash":"1032089694"},"annotations":{"cni.projectcalico.org/podIP":"10.42.0.7/32"},"ownerReferences":[{"apiVersion":"extensions/v1beta1","kind":"ReplicaSet","name":"hello-world-54764dfbf8","uid":"4849b9b1-8f29-11e8-99d2-067cac34e79c","controller":true,"blockOwnerDeletion":true}]},"spec":{"volumes":[{"name":"default-token-ncvts","secret":{"secretName":"default-token-ncvts","defaultMode":420}}],"containers":[{"name":"hello-world","image":"rancher/hello-world","ports":[{"containerPort":80,"protocol":"TCP"}],"resources":{"requests":{"cpu":"500m","memory":"64Mi"}},"volumeMounts":[{"name":"default-token-ncvts","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"Always"}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"34.220.18.140","securityContext":{},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute","tolerationSeconds":300},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute","tolerationSeconds":300}]},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2018-07-24T10:06:50Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2018-07-24T10:06:54Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2018-07-24T10:06:50Z"}],"hostIP":"34.220.18.140","podIP":"10.42.0.7","startTime":"2018-07-24T10:06:50Z","containerStatuses":[{"name":"hello-world","state":{"running":{"startedAt":"2018-07-24T10:06:54Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"rancher/hello-world:latest","imageID":"docker-pullable://rancher/hello-world@sha256:4b1559cb4b57ca36fa2b313a3c7dde774801aa3a2047930d94e11a45168bc053","containerID":"docker://cce4df5fc0408f03d4adf82c90de222f64c302bf7a04be1c82d584ec31530773"}],"qosClass":"Burstable"}}]}
I0724 10:18:45.699525 1 api.go:74] GET http://prometheus-server.prometheus.34.220.18.140.xip.io/api/v1/query?query=sum%28rate%28container_fs_read_seconds_total%7Bpod_name%3D%22hello-world-54764dfbf8-q6l82%22%2Ccontainer_name%21%3D%22POD%22%2Cnamespace%3D%22default%22%7D%5B5m%5D%29%29+by+%28pod_name%29&time=1532427525.697 200 OK
I0724 10:18:45.699525 1 api.go:74] GET http://prometheus-server.prometheus.34.220.18.140.sslip.io/api/v1/query?query=sum%28rate%28container_fs_read_seconds_total%7Bpod_name%3D%22hello-world-54764dfbf8-q6l82%22%2Ccontainer_name%21%3D%22POD%22%2Cnamespace%3D%22default%22%7D%5B5m%5D%29%29+by+%28pod_name%29&time=1532427525.697 200 OK
I0724 10:18:45.699620 1 api.go:93] Response Body: {"status":"success","data":{"resultType":"vector","result":[{"metric":{"pod_name":"hello-world-54764dfbf8-q6l82"},"value":[1532427525.697,"0"]}]}}
I0724 10:18:45.699939 1 wrap.go:42] GET /apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/%2A/fs_read?labelSelector=app%3Dhello-world: (12.431262ms) 200 [[kube-controller-manager/v1.10.1 (linux/amd64) kubernetes/d4ab475/system:serviceaccount:kube-system:horizontal-pod-autoscaler] 10.42.0.0:24268]
I0724 10:18:51.727845 1 request.go:836] Request Body: {"kind":"SubjectAccessReview","apiVersion":"authorization.k8s.io/v1beta1","metadata":{"creationTimestamp":null},"spec":{"nonResourceAttributes":{"path":"/","verb":"get"},"user":"system:anonymous","group":["system:unauthenticated"]},"status":{"allowed":false}}

View File

@ -4,7 +4,7 @@ description: Ingresses can be added for workloads to provide load balancing, SSL
weight: 3042
aliases:
- /rancher/v2.0-v2.4/en/tasks/workloads/add-ingress/
- /rancher/v2.0-v2.4/en/k8s-in-rancher/load-balancers-and-ingress/ingress
---
Ingress can be added for workloads to provide load balancing, SSL termination and host/path based routing. When using ingresses in a project, you can program the ingress hostname to an external DNS by setting up a [Global DNS entry]({{<baseurl>}}/rancher/v2.0-v2.4/en/helm-charts/globaldns/).
@ -14,24 +14,24 @@ Ingress can be added for workloads to provide load balancing, SSL termination an
1. Enter a **Name** for the ingress.
1. Select an existing **Namespace** from the drop-down list. Alternatively, you can create a new namespace on the fly by clicking **Add to a new namespace**.
1. Create ingress forwarding **Rules**. For help configuring the rules, refer to [this section.](#ingress-rule-configuration) If any of your ingress rules handle requests for encrypted ports, add a certificate to encrypt/decrypt communications.
1. **Optional:** click **Add Rule** to create additional ingress rules. For example, after you create ingress rules to direct requests for your hostname, you'll likely want to create a default backend to handle 404s.
**Result:** Your ingress is added to the project. The ingress begins enforcing your ingress rules.
# Ingress Rule Configuration
- [Automatically generate a xip.io hostname](#automatically-generate-a-xip-io-hostname)
- [Automatically generate a sslip.io hostname](#automatically-generate-a-sslip-io-hostname)
- [Specify a hostname to use](#specify-a-hostname-to-use)
- [Use as the default backend](#use-as-the-default-backend)
- [Certificates](#certificates)
- [Labels and Annotations](#labels-and-annotations)
### Automatically generate a xip.io hostname
### Automatically generate a sslip.io hostname
If you choose this option, the ingress routes requests for an automatically generated hostname. Rancher uses [xip.io](http://xip.io/) to automatically generate the DNS name. This option is best used for testing, _not_ production environments.
If you choose this option, the ingress routes requests for an automatically generated hostname. Rancher uses [sslip.io](http://sslip.io/) to automatically generate the DNS name. This option is best used for testing, _not_ production environments.
>**Note:** To use this option, you must be able to resolve to `xip.io` addresses.
>**Note:** To use this option, you must be able to resolve to `sslip.io` addresses.
1. Add a **Target Backend**. By default, a workload is added to the ingress, but you can add more targets by clicking either **Service** or **Workload**.
1. **Optional:** If you want to specify a workload or service to handle requests sent to a particular hostname path, add a **Path** for the target. For example, if you want requests for `www.mysite.com/contact-us` to be sent to a different service than `www.mysite.com`, enter `/contact-us` in the **Path** field. Typically, the first rule that you create does not include a path.

View File

@ -58,9 +58,9 @@ Some cloud-managed layer-7 load balancers (such as the ALB ingress controller on
Other layer-7 load balancers, such as the Google Load Balancer or Nginx Ingress Controller, directly expose one or more IP addresses. Google Load Balancer provides a single routable IP address. Nginx Ingress Controller exposes the external IP of all nodes that run the Nginx Ingress Controller. You can do either of the following:
1. Configure your own DNS to map (via A records) your domain name to the IP addresses exposed by the Layer-7 load balancer.
2. Ask Rancher to generate an xip.io host name for your ingress rule. Rancher will take one of your exposed IPs, say a.b.c.d, and generate a host name <ingressname>.<namespace>.a.b.c.d.xip.io.
2. Ask Rancher to generate an sslip.io host name for your ingress rule. Rancher will take one of your exposed IPs, say a.b.c.d, and generate a host name <ingressname>.<namespace>.a.b.c.d.sslip.io.
The benefit of using xip.io is that you obtain a working entrypoint URL immediately after you create the ingress rule. Setting up your own domain name, on the other hand, requires you to configure DNS servers and wait for DNS to propagate.
The benefit of using sslip.io is that you obtain a working entrypoint URL immediately after you create the ingress rule. Setting up your own domain name, on the other hand, requires you to configure DNS servers and wait for DNS to propagate.
## Related Links

View File

@ -24,7 +24,7 @@ The following steps will quickly deploy a Rancher Server on AWS with a single no
1. Rename the `terraform.tfvars.example` file to `terraform.tfvars`.
1. Edit `terraform.tfvars` and customize the following variables:
- `aws_access_key` - Amazon AWS Access Key
- `aws_secret_key` - Amazon AWS Secret Key
- `rancher_server_admin_password` - Admin password for created Rancher server
@ -45,7 +45,7 @@ Suggestions include:
Outputs:
rancher_node_ip = xx.xx.xx.xx
rancher_server_url = https://rancher.xx.xx.xx.xx.xip.io
rancher_server_url = https://rancher.xx.xx.xx.xx.sslip.io
workload_node_ip = yy.yy.yy.yy
```

View File

@ -45,7 +45,7 @@ Suggestions include:
Outputs:
rancher_node_ip = xx.xx.xx.xx
rancher_server_url = https://rancher.xx.xx.xx.xx.xip.io
rancher_server_url = https://rancher.xx.xx.xx.xx.sslip.io
workload_node_ip = yy.yy.yy.yy
```

View File

@ -25,7 +25,7 @@ The following steps will quickly deploy a Rancher server on GCP in a single-node
1. Rename the `terraform.tfvars.example` file to `terraform.tfvars`.
1. Edit `terraform.tfvars` and customize the following variables:
- `gcp_account_json` - GCP service account file path and file name
- `rancher_server_admin_password` - Admin password for created Rancher server
1. **Optional:** Modify optional variables within `terraform.tfvars`.
@ -46,7 +46,7 @@ Suggestions include:
Outputs:
rancher_node_ip = xx.xx.xx.xx
rancher_server_url = https://rancher.xx.xx.xx.xx.xip.io
rancher_server_url = https://rancher.xx.xx.xx.xx.sslip.io
workload_node_ip = yy.yy.yy.yy
```

View File

@ -27,7 +27,7 @@ The following steps will quickly deploy a Rancher server on Azure in a single-no
1. Rename the `terraform.tfvars.example` file to `terraform.tfvars`.
1. Edit `terraform.tfvars` and customize the following variables:
- `azure_subscription_id` - Microsoft Azure Subscription ID
- `azure_client_id` - Microsoft Azure Client ID
- `azure_client_secret` - Microsoft Azure Client Secret
- `azure_tenant_id` - Microsoft Azure Tenant ID
@ -51,7 +51,7 @@ Suggestions include:
Outputs:
rancher_node_ip = xx.xx.xx.xx
rancher_server_url = https://rancher.xx.xx.xx.xx.xip.io
rancher_server_url = https://rancher.xx.xx.xx.xx.sslip.io
workload_node_ip = yy.yy.yy.yy
```

View File

@ -61,11 +61,11 @@ Now that the application is up and running it needs to be exposed so that other
9. Leave everything else as default and click **Save**.
**Result:** The application is assigned a `xip.io` address and exposed. It may take a minute or two to populate.
**Result:** The application is assigned a `sslip.io` address and exposed. It may take a minute or two to populate.
### View Your Application
From the **Load Balancing** page, click the target link, which will look something like `hello.default.xxx.xxx.xxx.xxx.xip.io > hello-world`.
From the **Load Balancing** page, click the target link, which will look something like `hello.default.xxx.xxx.xxx.xxx.sslip.io > hello-world`.
Your application will open in a separate window.

View File

@ -7,6 +7,9 @@ Rancher is committed to informing the community of security issues in our produc
| ID | Description | Date | Resolution |
|----|-------------|------|------------|
| [CVE-2021-31999](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-31999) | A vulnerability was discovered where a malicious Rancher user could craft an API request directed at the proxy for the Kubernetes API of a managed cluster to gain access to information they do not have access to. This is done by passing the "Impersonate-User" or "Impersonate-Group" header in the Connection header, which is then removed by the proxy. At this point, instead of impersonating the user and their permissions, the request will act as if it was from the Rancher management server, i.e. local server, and return the requested information. You are vulnerable if you are running any Rancher 2.x version. Only valid Rancher users who have some level of permission on the cluster can perform the request. There is no direct mitigation besides upgrading to the patched versions. You can limit wider exposure by ensuring all Rancher users are trusted. | 14 Jul 2021 | [Rancher v2.5.9](https://github.com/rancher/rancher/releases/tag/v2.5.9), [Rancher v2.4.16](https://github.com/rancher/rancher/releases/tag/v2.4.16) |
| [CVE-2021-25318](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-25318) | A vulnerability was discovered in Rancher where users were granted access to resources regardless of the resource's API group. For example Rancher should have allowed users access to `apps.catalog.cattle.io`, but instead incorrectly gave access to `apps.*`. You are vulnerable if you are running any Rancher 2.x version. The extent of the exploit increases if there are other matching CRD resources installed in the cluster. There is no direct mitigation besides upgrading to the patched versions. | 14 Jul 2021 | [Rancher v2.5.9](https://github.com/rancher/rancher/releases/tag/v2.5.9), [Rancher v2.4.16](https://github.com/rancher/rancher/releases/tag/v2.4.16) |
| [CVE-2021-25320](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-25320) | A vulnerability was discovered in Rancher where cloud credentials weren't being properly validated through the Rancher API. Specifically through a proxy designed to communicate with cloud providers. Any Rancher user that was logged-in and aware of a cloud credential ID that was valid for a given cloud provider could make requests against that cloud provider's API through the proxy API, and the cloud credential would be attached. You are vulnerable if you are running any Rancher 2.2.0 or above and use cloud credentials. The exploit is limited to valid Rancher users. There is no direct mitigation besides upgrading to the patched versions. You can limit wider exposure by ensuring all Rancher users are trusted. | 14 Jul 2021 | [Rancher v2.5.9](https://github.com/rancher/rancher/releases/tag/v2.5.9), [Rancher v2.4.16](https://github.com/rancher/rancher/releases/tag/v2.4.16) |
| [CVE-2018-20321](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-20321) | Any project member with access to the `default` namespace can mount the `netes-default` service account in a pod and then use that pod to execute administrative privileged commands against the Kubernetes cluster. | 29 Jan 2019 | [Rancher v2.1.6](https://github.com/rancher/rancher/releases/tag/v2.1.6) and [Rancher v2.0.11](https://github.com/rancher/rancher/releases/tag/v2.0.11) - Rolling back from these versions or greater have specific [instructions]({{<baseurl>}}/rancher/v2.0-v2.4/en/upgrades/rollbacks/). |
| [CVE-2019-6287](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-6287) | Project members continue to get access to namespaces from projects that they were removed from if they were added to more than one project. | 29 Jan 2019 | [Rancher v2.1.6](https://github.com/rancher/rancher/releases/tag/v2.1.6) and [Rancher v2.0.11](https://github.com/rancher/rancher/releases/tag/v2.0.11) |
| [CVE-2019-11202](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-11202) | The default admin, that is shipped with Rancher, will be re-created upon restart of Rancher despite being explicitly deleted. | 16 Apr 2019 | [Rancher v2.2.2](https://github.com/rancher/rancher/releases/tag/v2.2.2), [Rancher v2.1.9](https://github.com/rancher/rancher/releases/tag/v2.1.9) and [Rancher v2.0.14](https://github.com/rancher/rancher/releases/tag/v2.0.14) |

View File

@ -1,5 +1,5 @@
---
title: Rancher 2.5.7-2.5.8+ (Latest)
title: Rancher 2.5.7-2.5.9 (Latest)
weight: 1
showBreadcrumb: false
---

View File

@ -1,8 +1,8 @@
---
title: "Rancher 2.5.7-2.5.8+ (Latest)"
shortTitle: "Rancher 2.5.7-2.5.8+ (Latest)"
title: "Rancher 2.5.7-2.5.9 (Latest)"
shortTitle: "Rancher 2.5.7-2.5.9 (Latest)"
description: "Rancher adds significant value on top of Kubernetes: managing hundreds of clusters from one interface, centralizing RBAC, enabling monitoring and alerting. Read more."
metaTitle: "Rancher 2.x Docs: What is New?"
metaTitle: "Rancher 2.5.7-2.5.9 Docs: What is New?"
metaDescription: "Rancher 2 adds significant value on top of Kubernetes: managing hundreds of clusters from one interface, centralizing RBAC, enabling monitoring and alerting. Read more."
insertOneSix: false
weight: 1
@ -18,4 +18,4 @@ Rancher adds significant value on top of Kubernetes, first by centralizing authe
It then enables detailed monitoring and alerting for clusters and their resources, ships logs to external providers, and integrates directly with Helm via the Application Catalog. If you have an external CI/CD system, you can plug it into Rancher, but if you don't, Rancher even includes [Fleet](http://fleet.rancher.io/) to help you automatically deploy and upgrade workloads.
Rancher is a _complete_ container management platform for Kubernetes, giving you the tools to successfully run Kubernetes anywhere.

View File

@ -135,4 +135,6 @@ To assign a custom global role to a group, follow these steps:
# Privilege Escalation
The `Configure Catalogs` custom permission is powerful and should be used with caution. When an admin assigns the `Configure Catalogs` permission to a standard user, it could result in privilege escalation in which the user could give themselves admin access to Rancher provisioned clusters. Anyone with this permission should be considered equivalent to an admin.
The `Manage Users` role grants the ability to create, update, and delete _any_ user. This presents the risk of privilege escalation, as even non-admin users with this role will be able to create, update, and delete admin users. Admins should exercise caution when assigning this role.

View File

@ -123,7 +123,7 @@ s3:
s3:
credentialSecretName: minio-creds
bucketName: rancherbackups
endpoint: minio.35.202.130.254.xip.io
endpoint: minio.35.202.130.254.sslip.io
endpointCA: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURHakNDQWdLZ0F3SUJBZ0lKQUtpWFZpNEpBb0J5TUEwR0NTcUdTSWIzRFFFQkN3VUFNQkl4RURBT0JnTlYKQkFNTUIzUmxjM1F0WTJFd0hoY05NakF3T0RNd01UZ3lOVFE1V2hjTk1qQXhNREk1TVRneU5UUTVXakFTTVJBdwpEZ1lEVlFRRERBZDBaWE4wTFdOaE1JSUJJakFOQmdrcWhraUc5dzBCQVFFRkFBT0NBUThBTUlJQkNnS0NBUUVBCjA4dnV3Q2Y0SEhtR2Q2azVNTmozRW5NOG00T2RpS3czSGszd1NlOUlXQkwyVzY5WDZxenBhN2I2M3U2L05mMnkKSnZWNDVqeXplRFB6bFJycjlpbEpWaVZ1NFNqWlFjdG9jWmFCaVNsL0xDbEFDdkFaUlYvKzN0TFVTZSs1ZDY0QQpWcUhDQlZObU5xM3E3aVY0TE1aSVpRc3N6K0FxaU1Sd0pOMVVKQTZ6V0tUc2Yzc3ByQ0J2dWxJWmZsVXVETVAyCnRCTCt6cXZEc0pDdWlhNEEvU2JNT29tVmM2WnNtTGkwMjdub3dGRld3MnRpSkM5d0xMRE14NnJoVHQ4a3VvVHYKQXJpUjB4WktiRU45L1Uzb011eUVKbHZyck9YS2ZuUDUwbk8ycGNaQnZCb3pUTStYZnRvQ1d5UnhKUmI5cFNTRApKQjlmUEFtLzNZcFpMMGRKY2sxR1h3SURBUUFCbzNNd2NUQWRCZ05WSFE0RUZnUVU5NHU4WXlMdmE2MTJnT1pyCm44QnlFQ2NucVFjd1FnWURWUjBqQkRzd09ZQVU5NHU4WXlMdmE2MTJnT1pybjhCeUVDY25xUWVoRnFRVU1CSXgKRURBT0JnTlZCQU1NQjNSbGMzUXRZMkdDQ1FDb2wxWXVDUUtBY2pBTUJnTlZIUk1FQlRBREFRSC9NQTBHQ1NxRwpTSWIzRFFFQkN3VUFBNElCQVFER1JRZ1RtdzdVNXRQRHA5Q2psOXlLRW9Vd2pYWWM2UlAwdm1GSHpubXJ3dUVLCjFrTkVJNzhBTUw1MEpuS29CY0ljVDNEeGQ3TGdIbTNCRE5mVVh2anArNnZqaXhJYXR2UWhsSFNVaWIyZjJsSTkKVEMxNzVyNCtROFkzelc1RlFXSDdLK08vY3pJTGh5ei93aHRDUlFkQ29lS1dXZkFiby8wd0VSejZzNkhkVFJzNwpHcWlGNWZtWGp6S0lOcTBjMHRyZ0xtalNKd1hwSnU0ZnNGOEcyZUh4b2pOKzdJQ1FuSkg5cGRIRVpUQUtOL2ppCnIvem04RlZtd1kvdTBndEZneWVQY1ZWbXBqRm03Y0ZOSkc4Y2ZYd0QzcEFwVjhVOGNocTZGeFBHTkVvWFZnclMKY1VRMklaU0RJd1FFY3FvSzFKSGdCUWw2RXBaUVpWMW1DRklrdFBwSQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0t
```
### Example credentialSecret
@ -145,7 +145,7 @@ There are two ways to set up the `rancher-backup` operator to use S3 as the back
One way is to configure the `credentialSecretName` in the Backup custom resource, which refers to AWS credentials that have access to S3.
If the cluster nodes are in Amazon EC2, the S3 access can also be set up by assigning IAM permissions to the EC2 nodes so that they can access S3.
To allow a node to access S3, follow the instructions in the [AWS documentation](https://aws.amazon.com/premiumsupport/knowledge-center/ec2-instance-access-s3-bucket/) to create an IAM role for EC2. When you add a custom policy to the role, add the following permissions, and replace the `Resource` with your bucket name:
@ -178,7 +178,7 @@ To allow a node to access S3, follow the instructions in the [AWS documentation]
}
```
After the role is created, and you have attached the corresponding instance profile to your EC2 instance(s), the `credentialSecretName` directive can be left empty in the Backup custom resource.
# Examples

View File

@ -85,7 +85,7 @@ spec:
credentialSecretName: minio-creds
credentialSecretNamespace: default
bucketName: rancherbackups
endpoint: minio.xip.io
endpoint: minio.sslip.io
endpointCA: LS0tLS1CRUdJTi3VUFNQkl5UUT.....pbEpWaVzNkRS0tLS0t
resourceSetName: rancher-resource-set
encryptionConfigSecretName: encryptionconfig
@ -214,7 +214,7 @@ spec:
credentialSecretName: minio-creds
credentialSecretNamespace: default
bucketName: rancherbackups
endpoint: minio.xip.io
endpoint: minio.sslip.io
endpointCA: LS0tLS1CRUdJTi3VUFNQkl5UUT.....pbEpWaVzNkRS0tLS0t
encryptionConfigSecretName: test-encryptionconfig
```
@ -298,6 +298,3 @@ resources:
- name: key1
secret: YWJjZGVmZ2hpamtsbW5vcHFyc3R1dnd4eXoxMjM0NTY=
```

View File

@ -148,7 +148,7 @@ The following settings are also configurable. All of these except for the "Node
| Desired ASG Size | The desired number of instances. |
| Maximum ASG Size | The maximum number of instances. This setting won't take effect until the [Cluster Autoscaler](https://docs.aws.amazon.com/eks/latest/userguide/cluster-autoscaler.html) is installed. |
| Minimum ASG Size | The minimum number of instances. This setting won't take effect until the [Cluster Autoscaler](https://docs.aws.amazon.com/eks/latest/userguide/cluster-autoscaler.html) is installed. |
| Labels | Kubernetes labels applied to the nodes in the managed node group. |
| Labels | Kubernetes labels applied to the nodes in the managed node group. Note: Invalid labels can prevent upgrades or can prevent Rancher from starting. For details on label syntax requirements, see the [Kubernetes documentation.](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#syntax-and-character-set) |
| Tags | These are tags for the managed node group and do not propagate to any of the associated resources. |

View File

@ -5,7 +5,7 @@ weight: 3
---
{{% tabs %}}
{{% tab "v2.5.8" %}}
{{% tab "Rancher v2.5.8+" %}}
# Changes in v2.5.8
@ -237,6 +237,7 @@ _Mutable: no_
You can apply labels to the node pool, which applies the labels to all nodes in the pool.
Invalid labels can prevent upgrades or can prevent Rancher from starting. For details on label syntax requirements, see the [Kubernetes documentation.](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#syntax-and-character-set)
# Group Details
@ -309,6 +310,8 @@ The shorter the refresh window, the less likely any race conditions will occur,
Add Kubernetes [labels](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/) or [annotations](https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/) to the cluster.
Invalid labels can prevent upgrades or can prevent Rancher from starting. For details on label syntax requirements, see the [Kubernetes documentation.](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#syntax-and-character-set)
# Kubernetes Options
### Location Type
@ -429,6 +432,8 @@ When you apply a taint to a node, only Pods that tolerate the taint are allowed
### Node Labels
You can apply labels to the node pool, which applies the labels to all nodes in the pool.
Invalid labels can prevent upgrades or can prevent Rancher from starting. For details on label syntax requirements, see the [Kubernetes documentation.](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#syntax-and-character-set)
## Security Options
### Service Account

View File

@ -3,57 +3,59 @@ headless: true
---
{{% tabs %}}
{{% tab "Rancher v2.5.8" %}}
{{% tab "Rancher v2.5.8+" %}}
| Action | Rancher Launched Kubernetes Clusters | EKS and GKE Clusters* | Other Hosted Kubernetes Clusters | Non-EKS or GKE Registered Clusters |
| Action | Rancher Launched Kubernetes Clusters | EKS and GKE Clusters<sup>1</sup> | Other Hosted Kubernetes Clusters | Non-EKS or GKE Registered Clusters |
| --- | --- | ---| ---|----|
| [Using kubectl and a kubeconfig file to Access a Cluster]({{<baseurl>}}/rancher/v2.5/en/cluster-admin/cluster-access/kubectl/) | ✓ | ✓ | ✓ | ✓ |
| [Managing Cluster Members]({{<baseurl>}}/rancher/v2.5/en/cluster-admin/cluster-access/cluster-members/) | ✓ | ✓ | ✓ | ✓ |
| [Editing and Upgrading Clusters]({{<baseurl>}}/rancher/v2.5/en/cluster-admin/editing-clusters/) | ✓ | ✓ | ✓ | ** |
| [Managing Nodes]({{<baseurl>}}/rancher/v2.5/en/cluster-admin/nodes) | ✓ | ✓ | ✓ | ✓ *** |
| [Editing and Upgrading Clusters]({{<baseurl>}}/rancher/v2.5/en/cluster-admin/editing-clusters/) | ✓ | ✓ | ✓ | <sup>2</sup> |
| [Managing Nodes]({{<baseurl>}}/rancher/v2.5/en/cluster-admin/nodes) | ✓ | ✓ | ✓ | ✓<sup>3</sup> |
| [Managing Persistent Volumes and Storage Classes]({{<baseurl>}}/rancher/v2.5/en/cluster-admin/volumes-and-storage/) | ✓ | ✓ | ✓ | ✓ |
| [Managing Projects, Namespaces and Workloads]({{<baseurl>}}/rancher/v2.5/en/cluster-admin/projects-and-namespaces/) | ✓ | ✓ | ✓ | ✓ |
| [Using App Catalogs]({{<baseurl>}}/rancher/v2.5/en/catalog/) | ✓ | ✓ | ✓ | ✓ |
| [Configuring Tools (Alerts, Notifiers, Logging, Monitoring, Istio)]({{<baseurl>}}/rancher/v2.5/en/cluster-admin/tools/) | ✓ | ✓ | ✓ | ✓ |
| [Running Security Scans]({{<baseurl>}}/rancher/v2.5/en/security/security-scan/) | ✓ | ✓ | ✓ | ✓ |
| [Cloning Clusters]({{<baseurl>}}/rancher/v2.5/en/cluster-admin/cloning-clusters/)| ✓ | ✓ |✓ | |
| [Use existing configuration to create additional clusters]({{<baseurl>}}/rancher/v2.5/en/cluster-admin/cloning-clusters/)| ✓ | ✓ |✓ | |
| [Ability to rotate certificates]({{<baseurl>}}/rancher/v2.5/en/cluster-admin/certificate-rotation/) | ✓ | ✓ | | |
| [Ability to back up your Kubernetes Clusters]({{<baseurl>}}/rancher/v2.5/en/cluster-admin/backing-up-etcd/) | ✓ | ✓ | | |
| [Ability to recover and restore etcd]({{<baseurl>}}/rancher/v2.5/en/cluster-admin/restoring-etcd/) | ✓ | ✓ | | |
| Ability to [backup]({{<baseurl>}}/rancher/v2.5/en/cluster-admin/backing-up-etcd/) and [restore]({{<baseurl>}}/rancher/v2.5/en/cluster-admin/restoring-etcd/) Rancher-launched clusters | ✓ | ✓ | | ✓<sup>4</sup> |
| [Cleaning Kubernetes components when clusters are no longer reachable from Rancher]({{<baseurl>}}/rancher/v2.5/en/cluster-admin/cleaning-cluster-nodes/) | ✓ | | | |
| [Configuring Pod Security Policies]({{<baseurl>}}/rancher/v2.5/en/cluster-admin/pod-security-policy/) | ✓ | ✓ | ||
\* Registered GKE and EKS clusters have the same options available as GKE and EKS clusters created from the Rancher UI. The difference is that when a registered cluster is deleted from the Rancher UI, [it is not destroyed.]({{<baseurl>}}/rancher/v2.5/en/cluster-provisioning/registered-clusters/#additional-features-for-registered-eks-and-gke-clusters)
1. Registered GKE and EKS clusters have the same options available as GKE and EKS clusters created from the Rancher UI. The difference is that when a registered cluster is deleted from the Rancher UI, [it is not destroyed.]({{<baseurl>}}/rancher/v2.5/en/cluster-provisioning/registered-clusters/#additional-features-for-registered-eks-and-gke-clusters)
\* \* Cluster configuration options can't be edited for imported clusters, except for [K3s and RKE2 clusters.]({{<baseurl>}}/rancher/v2.5/en/cluster-provisioning/imported-clusters/)
2. Cluster configuration options can't be edited for registered clusters, except for [K3s and RKE2 clusters.]({{<baseurl>}}/rancher/v2.5/en/cluster-provisioning/imported-clusters/)
\* \* \* For registered cluster nodes, the Rancher UI exposes the ability to cordon drain, and edit the node.
3. For registered cluster nodes, the Rancher UI exposes the ability to cordon, drain, and edit the node.
4. For registered clusters using etcd as a control plane, snapshots must be taken manually outside of the Rancher UI to use for backup and recovery.
{{% /tab %}}
{{% tab "Rancher v2.5.0-v2.5.7" %}}
{{% tab "Rancher before v2.5.8" %}}
| Action | Rancher Launched Kubernetes Clusters | Hosted Kubernetes Clusters | Registered EKS Clusters | All Other Registered Clusters |
| --- | --- | ---| ---|----|
| [Using kubectl and a kubeconfig file to Access a Cluster]({{<baseurl>}}/rancher/v2.5/en/cluster-admin/cluster-access/kubectl/) | ✓ | ✓ | ✓ | ✓ |
| [Managing Cluster Members]({{<baseurl>}}/rancher/v2.5/en/cluster-admin/cluster-access/cluster-members/) | ✓ | ✓ | ✓ | ✓ |
| [Editing and Upgrading Clusters]({{<baseurl>}}/rancher/v2.5/en/cluster-admin/editing-clusters/) | ✓ | ✓ | ✓ | * |
| [Managing Nodes]({{<baseurl>}}/rancher/v2.5/en/cluster-admin/nodes) | ✓ | ✓ | ✓ | ✓ ** |
| [Editing and Upgrading Clusters]({{<baseurl>}}/rancher/v2.5/en/cluster-admin/editing-clusters/) | ✓ | ✓ | ✓ | <sup>1</sup> |
| [Managing Nodes]({{<baseurl>}}/rancher/v2.5/en/cluster-admin/nodes) | ✓ | ✓ | ✓ | ✓<sup>2</sup> |
| [Managing Persistent Volumes and Storage Classes]({{<baseurl>}}/rancher/v2.5/en/cluster-admin/volumes-and-storage/) | ✓ | ✓ | ✓ | ✓ |
| [Managing Projects, Namespaces and Workloads]({{<baseurl>}}/rancher/v2.5/en/cluster-admin/projects-and-namespaces/) | ✓ | ✓ | ✓ | ✓ |
| [Using App Catalogs]({{<baseurl>}}/rancher/v2.5/en/catalog/) | ✓ | ✓ | ✓ | ✓ |
| [Configuring Tools (Alerts, Notifiers, Logging, Monitoring, Istio)]({{<baseurl>}}/rancher/v2.5/en/cluster-admin/tools/) | ✓ | ✓ | ✓ | ✓ |
| [Running Security Scans]({{<baseurl>}}/rancher/v2.5/en/security/security-scan/) | ✓ | ✓ | ✓ | ✓ |
| [Cloning Clusters]({{<baseurl>}}/rancher/v2.5/en/cluster-admin/cloning-clusters/)| ✓ | ✓ |✓ | |
| [Use existing configuration to create additional clusters]({{<baseurl>}}/rancher/v2.5/en/cluster-admin/cloning-clusters/)| ✓ | ✓ |✓ | |
| [Ability to rotate certificates]({{<baseurl>}}/rancher/v2.5/en/cluster-admin/certificate-rotation/) | ✓ | | ✓ | |
| [Ability to back up your Kubernetes Clusters]({{<baseurl>}}/rancher/v2.5/en/cluster-admin/backing-up-etcd/) | ✓ | | ✓ | |
| [Ability to recover and restore etcd]({{<baseurl>}}/rancher/v2.5/en/cluster-admin/restoring-etcd/) | ✓ | | ✓ | |
| Ability to [backup]({{<baseurl>}}/rancher/v2.5/en/cluster-admin/backing-up-etcd/) and [restore]({{<baseurl>}}/rancher/v2.5/en/cluster-admin/restoring-etcd/) Rancher-launched clusters | ✓ | ✓ | | ✓<sup>3</sup> |
| [Cleaning Kubernetes components when clusters are no longer reachable from Rancher]({{<baseurl>}}/rancher/v2.5/en/cluster-admin/cleaning-cluster-nodes/) | ✓ | | | |
| [Configuring Pod Security Policies]({{<baseurl>}}/rancher/v2.5/en/cluster-admin/pod-security-policy/) | ✓ | | ✓ ||
\* Cluster configuration options can't be edited for imported clusters, except for [K3s and RKE2 clusters.]({{<baseurl>}}/rancher/v2.5/en/cluster-provisioning/imported-clusters/)
1. Cluster configuration options can't be edited for registered clusters, except for [K3s and RKE2 clusters.]({{<baseurl>}}/rancher/v2.5/en/cluster-provisioning/imported-clusters/)
\* \* For registered cluster nodes, the Rancher UI exposes the ability to cordon drain, and edit the node.
2. For registered cluster nodes, the Rancher UI exposes the ability to cordon, drain, and edit the node.
3. For registered clusters using etcd as a control plane, snapshots must be taken manually outside of the Rancher UI to use for backup and recovery.
{{% /tab %}}
{{% /tabs %}}

View File

@ -82,4 +82,4 @@ You can access your cluster after its state is updated to **Active.**
| EIP Charge Mode | This option will only be shown when `Create EIP` is selected. The options are pay by `BandWidth` and pay by `Traffic`. |
| EIP Bandwidth Size | This option will only be shown when `Create EIP` is selected. The BandWidth of the EIPs. |
| Authentication Mode | Whether to enable `RBAC` only, or to also enable `Authenticating Proxy`. If you select `Authenticating Proxy`, the certificate used for the authenticating proxy is also required. |
| Node Label | The labels for the cluster node(s). |
| Node Label | The labels for the cluster node(s). Invalid labels can prevent upgrades or can prevent Rancher from starting. For details on label syntax requirements, see the [Kubernetes documentation.](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#syntax-and-character-set) |

View File

@ -125,7 +125,7 @@ The capabilities for registered clusters are listed in the table on [this page.]
{{% /tab %}}
{{% tab "Rancher v2.5.0-v2.5.8" %}}
{{% tab "Rancher before v2.5.8" %}}
- [Features for All Registered Clusters](#before-2-5-8-features-for-all-registered-clusters)
- [Additional Features for Registered K3s Clusters](#before-2-5-8-additional-features-for-registered-k3s-clusters)

View File

@ -35,6 +35,8 @@ After you create a node template in Rancher, it's saved so that you can use this
You can add [labels](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/) on each node template, so that any nodes created from the node template will automatically have these labels on them.
Invalid labels can prevent upgrades or can prevent Rancher from starting. For details on label syntax requirements, see the [Kubernetes documentation.](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#syntax-and-character-set)
### Node Taints
You can add [taints](https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/) on each node template, so that any nodes created from the node template will automatically have these taints on them.

View File

@ -83,6 +83,32 @@ If you are configuring DHCP options sets for an AWS virtual private cloud, note
> Some Linux operating systems accept multiple domain names separated by spaces. However, other Linux operating systems and Windows treat the value as a single domain, which results in unexpected behavior. If your DHCP options set is associated with a VPC that has instances with multiple operating systems, specify only one domain name.
#### Rancher on vSphere with ESXi 6.7u2 and above
If you are using Rancher on VMware vSphere with ESXi 6.7u2 or later with Red Hat Enterprise Linux 8.3, CentOS 8.3, or SUSE Enterprise Linux 15 SP2 or later, you must disable the `vmxnet3` virtual network adapter hardware offloading feature. Failure to do so will cause all network connections between pods on different cluster nodes to fail with timeout errors. All connections from Windows pods to critical services running on Linux nodes, such as CoreDNS, will fail as well, and external connections may also fail. This issue is caused by Linux distributions enabling the hardware offloading feature in `vmxnet3`, combined with a bug in that feature that discards packets for guest overlay traffic. To address this issue, disable the `vmxnet3` hardware offloading feature. This setting does not survive a reboot, so it must be disabled on every boot. The recommended course of action is to create a systemd unit file at `/etc/systemd/system/disable_hw_offloading.service`, which disables the `vmxnet3` hardware offloading feature on boot. A sample systemd unit file that disables the feature is shown below. Note that `<VM network interface>` must be replaced with the host's `vmxnet3` network interface, e.g., `ens192`:
```
[Unit]
Description=Disable vmxnet3 hardware offloading feature
[Service]
Type=oneshot
ExecStart=ethtool -K <VM network interface> tx-udp_tnl-segmentation off
ExecStart=ethtool -K <VM network interface> tx-udp_tnl-csum-segmentation off
StandardOutput=journal
[Install]
WantedBy=multi-user.target
```
Then set the appropriate permissions on the systemd unit file:
```
chmod 0644 /etc/systemd/system/disable_hw_offloading.service
```
Finally, enable the systemd service:
```
systemctl enable disable_hw_offloading.service
```
### Architecture Requirements
The Kubernetes cluster management nodes (`etcd` and `controlplane`) must be run on Linux nodes.
@ -249,4 +275,4 @@ After creating your cluster, you can access it through the Rancher UI. As a best
# Configuration for Storage Classes in Azure
If you are using Azure VMs for your nodes, you can use [Azure files](https://docs.microsoft.com/en-us/azure/aks/azure-files-dynamic-pv) as a StorageClass for the cluster. For details, refer to [this section.]({{<baseurl>}}/rancher/v2.5/en/cluster-provisioning/rke-clusters/windows-clusters/azure-storageclass)

View File

@ -137,7 +137,7 @@ Placeholder | Description
`<CERTMANAGER_VERSION>` | Cert-manager version running on k8s cluster.
{{% tabs %}}
{{% tab "Rancher v2.5.8" %}}
{{% tab "Rancher v2.5.8+" %}}
```plain
helm template rancher ./rancher-<VERSION>.tgz --output-dir . \
--no-hooks \ # prevent files for Helm hooks from being generated

View File

@ -77,7 +77,7 @@ Here is an example of a command for passing in the feature flag names when rende
The Helm 3 command is as follows:
{{% tabs %}}
{{% tab "Rancher v2.5.8" %}}
{{% tab "Rancher v2.5.8+" %}}
```
helm template rancher ./rancher-<VERSION>.tgz --output-dir . \

View File

@ -66,7 +66,7 @@ spec:
```
Directive | Description
---------|----------|
`apiVersion: autoscaling/v2beta1` | The version of the Kubernetes `autoscaling` API group in use. This example manifest uses the beta version, so scaling by CPU and memory is enabled. |
`name: hello-world` | Indicates that HPA is performing autoscaling for the `hello-world` deployment. |
@ -78,7 +78,7 @@ Directive | Description
##### Configuring HPA to Scale Using Resource Metrics (CPU and Memory)
Clusters created in Rancher v2.0.7 and higher have all the requirements needed (metrics-server and Kubernetes cluster configuration) to use Horizontal Pod Autoscaler.
Run the following commands to check if metrics are available in your installation:
@ -168,7 +168,7 @@ For HPA to use custom metrics from Prometheus, package [k8s-prometheus-adapter](
I0724 10:18:45.696703 1 round_trippers.go:445] Content-Type: application/json
I0724 10:18:45.696706 1 round_trippers.go:445] Content-Length: 2581
I0724 10:18:45.696766 1 request.go:836] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/default/pods","resourceVersion":"6237"},"items":[{"metadata":{"name":"hello-world-54764dfbf8-q6l82","generateName":"hello-world-54764dfbf8-","namespace":"default","selfLink":"/api/v1/namespaces/default/pods/hello-world-54764dfbf8-q6l82","uid":"484cb929-8f29-11e8-99d2-067cac34e79c","resourceVersion":"4066","creationTimestamp":"2018-07-24T10:06:50Z","labels":{"app":"hello-world","pod-template-hash":"1032089694"},"annotations":{"cni.projectcalico.org/podIP":"10.42.0.7/32"},"ownerReferences":[{"apiVersion":"extensions/v1beta1","kind":"ReplicaSet","name":"hello-world-54764dfbf8","uid":"4849b9b1-8f29-11e8-99d2-067cac34e79c","controller":true,"blockOwnerDeletion":true}]},"spec":{"volumes":[{"name":"default-token-ncvts","secret":{"secretName":"default-token-ncvts","defaultMode":420}}],"containers":[{"name":"hello-world","image":"rancher/hello-world","ports":[{"containerPort":80,"protocol":"TCP"}],"resources":{"requests":{"cpu":"500m","memory":"64Mi"}},"volumeMounts":[{"name":"default-token-ncvts","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"Always"}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"34.220.18.140","securityContext":{},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute","tolerationSeconds":300},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute","tolerationSeconds":300}]},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2018-07-24T10:06:50Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2018-07-24T10:06:54Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2018-07-24T10:06:50Z"}],"hostIP":"34.220.18.140","podIP":"10.42.0.7","startTime":"2018-07-24T10:06:50Z","containerStatuses":[{"name":"hello-world","state":{"running":{"startedAt":"2018-07-24T10:06:54Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"rancher/hello-world:latest","imageID":"docker-pullable://rancher/hello-world@sha256:4b1559cb4b57ca36fa2b313a3c7dde774801aa3a2047930d94e11a45168bc053","containerID":"docker://cce4df5fc0408f03d4adf82c90de222f64c302bf7a04be1c82d584ec31530773"}],"qosClass":"Burstable"}}]}
I0724 10:18:45.699525 1 api.go:74] GET http://prometheus-server.prometheus.34.220.18.140.xip.io/api/v1/query?query=sum%28rate%28container_fs_read_seconds_total%7Bpod_name%3D%22hello-world-54764dfbf8-q6l82%22%2Ccontainer_name%21%3D%22POD%22%2Cnamespace%3D%22default%22%7D%5B5m%5D%29%29+by+%28pod_name%29&time=1532427525.697 200 OK
I0724 10:18:45.699525 1 api.go:74] GET http://prometheus-server.prometheus.34.220.18.140.sslip.io/api/v1/query?query=sum%28rate%28container_fs_read_seconds_total%7Bpod_name%3D%22hello-world-54764dfbf8-q6l82%22%2Ccontainer_name%21%3D%22POD%22%2Cnamespace%3D%22default%22%7D%5B5m%5D%29%29+by+%28pod_name%29&time=1532427525.697 200 OK
I0724 10:18:45.699620 1 api.go:93] Response Body: {"status":"success","data":{"resultType":"vector","result":[{"metric":{"pod_name":"hello-world-54764dfbf8-q6l82"},"value":[1532427525.697,"0"]}]}}
I0724 10:18:45.699939 1 wrap.go:42] GET /apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/%2A/fs_read?labelSelector=app%3Dhello-world: (12.431262ms) 200 [[kube-controller-manager/v1.10.1 (linux/amd64) kubernetes/d4ab475/system:serviceaccount:kube-system:horizontal-pod-autoscaler] 10.42.0.0:24268]
I0724 10:18:51.727845 1 request.go:836] Request Body: {"kind":"SubjectAccessReview","apiVersion":"authorization.k8s.io/v1beta1","metadata":{"creationTimestamp":null},"spec":{"nonResourceAttributes":{"path":"/","verb":"get"},"user":"system:anonymous","group":["system:unauthenticated"]},"status":{"allowed":false}}

View File

@ -4,7 +4,7 @@ description: Ingresses can be added for workloads to provide load balancing, SSL
weight: 3042
aliases:
- /rancher/v2.5/en/tasks/workloads/add-ingress/
- /rancher/v2.5/en/k8s-in-rancher/load-balancers-and-ingress/ingress
---
Ingress can be added for workloads to provide load balancing, SSL termination and host/path based routing. When using ingresses in a project, you can program the ingress hostname to an external DNS by setting up a Global DNS entry.
@ -14,24 +14,24 @@ Ingress can be added for workloads to provide load balancing, SSL termination an
1. Enter a **Name** for the ingress.
1. Select an existing **Namespace** from the drop-down list. Alternatively, you can create a new namespace on the fly by clicking **Add to a new namespace**.
1. Create ingress forwarding **Rules**. For help configuring the rules, refer to [this section.](#ingress-rule-configuration) If any of your ingress rules handle requests for encrypted ports, add a certificate to encrypt/decrypt communications.
1. **Optional:** click **Add Rule** to create additional ingress rules. For example, after you create ingress rules to direct requests for your hostname, you'll likely want to create a default backend to handle 404s.
**Result:** Your ingress is added to the project. The ingress begins enforcing your ingress rules.
# Ingress Rule Configuration
- [Automatically generate a xip.io hostname](#automatically-generate-a-xip-io-hostname)
- [Automatically generate a sslip.io hostname](#automatically-generate-a-sslip-io-hostname)
- [Specify a hostname to use](#specify-a-hostname-to-use)
- [Use as the default backend](#use-as-the-default-backend)
- [Certificates](#certificates)
- [Labels and Annotations](#labels-and-annotations)
### Automatically generate a xip.io hostname
### Automatically generate a sslip.io hostname
If you choose this option, the ingress routes requests for an automatically generated hostname. Rancher uses [xip.io](http://xip.io/) to automatically generate the DNS name. This option is best used for testing, _not_ production environments.
If you choose this option, the ingress routes requests for an automatically generated hostname. Rancher uses [sslip.io](http://sslip.io/) to automatically generate the DNS name. This option is best used for testing, _not_ production environments.
>**Note:** To use this option, you must be able to resolve to `xip.io` addresses.
>**Note:** To use this option, you must be able to resolve to `sslip.io` addresses.
1. Add a **Target Backend**. By default, a workload is added to the ingress, but you can add more targets by clicking either **Service** or **Workload**.
1. **Optional:** If you want to specify a workload or service to handle requests sent to a particular hostname path, add a **Path** for the target. For example, if you want requests for `www.mysite.com/contact-us` to be sent to a different service than `www.mysite.com`, enter `/contact-us` in the **Path** field. Typically, the first rule that you create does not include a path.

View File

@ -58,9 +58,9 @@ Some cloud-managed layer-7 load balancers (such as the ALB ingress controller on
Other layer-7 load balancers, such as the Google Load Balancer or Nginx Ingress Controller, directly expose one or more IP addresses. Google Load Balancer provides a single routable IP address. Nginx Ingress Controller exposes the external IP of all nodes that run the Nginx Ingress Controller. You can do either of the following:
1. Configure your own DNS to map (via A records) your domain name to the IP addresses exposed by the Layer-7 load balancer.
2. Ask Rancher to generate an xip.io host name for your ingress rule. Rancher will take one of your exposed IPs, say a.b.c.d, and generate a host name <ingressname>.<namespace>.a.b.c.d.xip.io.
2. Ask Rancher to generate an sslip.io host name for your ingress rule. Rancher will take one of your exposed IPs, say a.b.c.d, and generate a host name <ingressname>.<namespace>.a.b.c.d.sslip.io.
The benefit of using xip.io is that you obtain a working entrypoint URL immediately after you create the ingress rule. Setting up your own domain name, on the other hand, requires you to configure DNS servers and wait for DNS to propagate.
The benefit of using sslip.io is that you obtain a working entrypoint URL immediately after you create the ingress rule. Setting up your own domain name, on the other hand, requires you to configure DNS servers and wait for DNS to propagate.
## Related Links

View File

@ -79,13 +79,13 @@ For a list of options that can be configured when the logging application is ins
### Windows Support
{{% tabs %}}
{{% tab "Rancher v2.5.8" %}}
{{% tab "Rancher v2.5.8+" %}}
As of Rancher v2.5.8, logging support for Windows clusters has been added and logs can be collected from Windows nodes.
For details on how to enable or disable Windows node logging, see [this section.](./helm-chart-options/#enable-disable-windows-node-logging)
{{% /tab %}}
{{% tab "Rancher v2.5.0-2.5.7" %}}
{{% tab "Rancher before v2.5.8" %}}
Clusters with Windows workers support exporting logs from Linux nodes, but Windows node logs are currently unable to be exported.
Only Linux node logs are able to be exported.

View File

@ -74,7 +74,7 @@ Matches, filters and `Outputs` are configured for `ClusterFlows` in the same way
After the `ClusterFlow` selects logs from all namespaces in the cluster, they are collected and routed to the selected `ClusterOutput`.
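A minimal sketch of such a `ClusterFlow` is shown below; the resource names and the `cattle-logging-system` control namespace are assumptions for illustration:

```yaml
apiVersion: logging.banzaicloud.io/v1beta1
kind: ClusterFlow
metadata:
  name: all-namespaces
  namespace: cattle-logging-system
spec:
  # No match rules: logs from every namespace in the cluster are selected.
  globalOutputRefs:
    - example-output
```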
{{% /tab %}}
{{% tab "Rancher v2.5.0-v2.5.7" %}}
{{% tab "Rancher before v2.5.8" %}}
- [Flows](#flows-2-5-0)
- [Matches](#matches-2-5-0)

View File

@ -69,7 +69,7 @@ For example configuration for each logging plugin supported by the logging opera
For the details of the `ClusterOutput` custom resource, see [ClusterOutput.](https://banzaicloud.com/docs/one-eye/logging-operator/configuration/crds/v1beta1/clusteroutput_types/)
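As a rough example, a `ClusterOutput` that ships logs to an Elasticsearch endpoint could look like the sketch below; the host, port, and index name are placeholders, not values from this documentation:

```yaml
apiVersion: logging.banzaicloud.io/v1beta1
kind: ClusterOutput
metadata:
  name: example-output
  namespace: cattle-logging-system
spec:
  elasticsearch:
    host: elasticsearch.example.com   # placeholder endpoint
    port: 9200
    scheme: https
    index_name: cluster-logs
```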
{{% /tab %}}
{{% tab "v2.5.0-v2.5.7" %}}
{{% tab "Rancher before v2.5.8" %}}
- [Outputs](#outputs-2-5-0)
@ -343,4 +343,4 @@ spec:
ignore_network_errors_at_startup: false
```
Let's break down what is happening here. First, we create a deployment of a container that has the additional `syslog` plugin and accepts logs forwarded from another `fluentd`. Next we create an `Output` configured as a forwarder to our deployment. The deployment `fluentd` will then forward all logs to the configured `syslog` destination.
Let's break down what is happening here. First, we create a deployment of a container that has the additional `syslog` plugin and accepts logs forwarded from another `fluentd`. Next we create an `Output` configured as a forwarder to our deployment. The deployment `fluentd` will then forward all logs to the configured `syslog` destination.

View File

@ -20,13 +20,13 @@ Both provide choice for the what node(s) the pod will run on.
### Default Implementation in Rancher's Logging Stack
{{% tabs %}}
{{% tab "Rancher v2.5.8" %}}
{{% tab "Rancher v2.5.8+" %}}
By default, Rancher taints all Linux nodes with `cattle.io/os=linux`, and does not taint Windows nodes.
The logging stack pods have `tolerations` for this taint, which enables them to run on Linux nodes.
Moreover, most logging stack pods run on Linux only and have a `nodeSelector` added to ensure they run on Linux nodes.
{{% /tab %}}
{{% tab "Rancher v2.5.0-2.5.7" %}}
{{% tab "Rancher before v2.5.8" %}}
By default, Rancher taints all Linux nodes with `cattle.io/os=linux`, and does not taint Windows nodes.
The logging stack pods have `tolerations` for this taint, which enables them to run on Linux nodes.
Moreover, we can populate the `nodeSelector` to ensure that our pods *only* run on Linux nodes.
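As a concrete sketch, a toleration for that taint together with a Linux `nodeSelector` could be expressed like this (the `NoSchedule` effect is an assumption; match whatever effect is applied to your nodes):

```yaml
tolerations:
  - key: cattle.io/os
    operator: Equal
    value: linux
    effect: NoSchedule   # assumed effect of the cattle.io/os=linux taint
nodeSelector:
  kubernetes.io/os: linux   # ensure the pod is scheduled only on Linux nodes
```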
@ -74,4 +74,4 @@ However, if you would like to add tolerations for *only* the `fluentbit` contain
```yaml
fluentbit_tolerations:
# insert tolerations list for fluentbit containers only...
```
```

View File

@ -63,7 +63,7 @@ As an [administrator]({{<baseurl>}}/rancher/v2.5/en/admin-settings/rbac/global-p
> - When installing monitoring on an RKE cluster using RancherOS or Flatcar Linux nodes, change the etcd node certificate directory to `/opt/rke/etc/kubernetes/ssl`.
{{% tabs %}}
{{% tab "Rancher v2.5.8" %}}
{{% tab "Rancher v2.5.8+" %}}
### Enable Monitoring for use without SSL
@ -101,7 +101,7 @@ key.pfx=`base64-content`
Then **Cert File Path** would be set to `/etc/alertmanager/secrets/cert.pem`.
{{% /tab %}}
{{% tab "Rancher v2.5.0-2.5.7" %}}
{{% tab "Rancher before v2.5.8" %}}
1. In the Rancher UI, go to the cluster where you want to install monitoring and click **Cluster Explorer.**
1. Click **Apps.**

View File

@ -91,7 +91,7 @@ Rancher v2.5.8 added Microsoft Teams and SMS as configurable receivers in the Ra
Rancher v2.5.4 introduced the capability to configure receivers by filling out forms in the Rancher UI.
{{% tabs %}}
{{% tab "Rancher v2.5.8" %}}
{{% tab "Rancher v2.5.8+" %}}
The following types of receivers can be configured in the Rancher UI:

View File

@ -84,7 +84,7 @@ grafana.sidecar.dashboards.searchNamespace=ALL
Note that the RBAC roles exposed by the Monitoring chart for adding Grafana dashboards still only grant users permission to add dashboards in the namespace defined in `grafana.dashboards.namespace`, which defaults to `cattle-dashboards`.
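For example, a dashboard can be added as a ConfigMap in that namespace roughly as follows; the `grafana_dashboard` label key is the sidecar's conventional default and is an assumption here, as is the dashboard content:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-custom-dashboard
  namespace: cattle-dashboards
  labels:
    grafana_dashboard: "1"   # label the dashboard sidecar watches for (assumed default)
data:
  my-custom-dashboard.json: |
    {
      "title": "My Custom Dashboard",
      "panels": []
    }
```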
{{% /tab %}}
{{% tab "Rancher v2.5.0-v2.5.8" %}}
{{% tab "Rancher before v2.5.8" %}}
> **Prerequisites:**
>
> - The monitoring application needs to be installed.

View File

@ -24,7 +24,7 @@ The following steps will quickly deploy a Rancher Server on AWS with a single no
1. Rename the `terraform.tfvars.example` file to `terraform.tfvars`.
1. Edit `terraform.tfvars` and customize the following variables:
- `aws_access_key` - Amazon AWS Access Key
- `aws_access_key` - Amazon AWS Access Key
- `aws_secret_key` - Amazon AWS Secret Key
- `rancher_server_admin_password` - Admin password for created Rancher server
@ -45,7 +45,7 @@ Suggestions include:
Outputs:
rancher_node_ip = xx.xx.xx.xx
rancher_server_url = https://rancher.xx.xx.xx.xx.xip.io
rancher_server_url = https://rancher.xx.xx.xx.xx.sslip.io
workload_node_ip = yy.yy.yy.yy
```

View File

@ -45,7 +45,7 @@ Suggestions include:
Outputs:
rancher_node_ip = xx.xx.xx.xx
rancher_server_url = https://rancher.xx.xx.xx.xx.xip.io
rancher_server_url = https://rancher.xx.xx.xx.xx.sslip.io
workload_node_ip = yy.yy.yy.yy
```

View File

@ -25,7 +25,7 @@ The following steps will quickly deploy a Rancher server on GCP in a single-node
1. Rename the `terraform.tfvars.example` file to `terraform.tfvars`.
1. Edit `terraform.tfvars` and customize the following variables:
- `gcp_account_json` - GCP service account file path and file name
- `gcp_account_json` - GCP service account file path and file name
- `rancher_server_admin_password` - Admin password for created Rancher server
1. **Optional:** Modify optional variables within `terraform.tfvars`.
@ -46,7 +46,7 @@ Suggestions include:
Outputs:
rancher_node_ip = xx.xx.xx.xx
rancher_server_url = https://rancher.xx.xx.xx.xx.xip.io
rancher_server_url = https://rancher.xx.xx.xx.xx.sslip.io
workload_node_ip = yy.yy.yy.yy
```

View File

@ -27,7 +27,7 @@ The following steps will quickly deploy a Rancher server on Azure in a single-no
1. Rename the `terraform.tfvars.example` file to `terraform.tfvars`.
1. Edit `terraform.tfvars` and customize the following variables:
- `azure_subscription_id` - Microsoft Azure Subscription ID
- `azure_subscription_id` - Microsoft Azure Subscription ID
- `azure_client_id` - Microsoft Azure Client ID
- `azure_client_secret` - Microsoft Azure Client Secret
- `azure_tenant_id` - Microsoft Azure Tenant ID
@ -50,7 +50,7 @@ Suggestions include:
Outputs:
rancher_node_ip = xx.xx.xx.xx
rancher_server_url = https://rancher.xx.xx.xx.xx.xip.io
rancher_server_url = https://rancher.xx.xx.xx.xx.sslip.io
workload_node_ip = yy.yy.yy.yy
```

View File

@ -61,11 +61,11 @@ Now that the application is up and running it needs to be exposed so that other
9. Leave everything else as default and click **Save**.
**Result:** The application is assigned a `xip.io` address and exposed. It may take a minute or two to populate.
**Result:** The application is assigned a `sslip.io` address and exposed. It may take a minute or two to populate.
### View Your Application
From the **Load Balancing** page, click the target link, which will look something like `hello.default.xxx.xxx.xxx.xxx.xip.io > hello-world`.
From the **Load Balancing** page, click the target link, which will look something like `hello.default.xxx.xxx.xxx.xxx.sslip.io > hello-world`.
Your application will open in a separate window.

View File

@ -7,6 +7,9 @@ Rancher is committed to informing the community of security issues in our produc
| ID | Description | Date | Resolution |
|----|-------------|------|------------|
| [CVE-2021-31999](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-31999) | A vulnerability was discovered where a malicious Rancher user could craft an API request directed at the proxy for the Kubernetes API of a managed cluster to gain access to information they do not have access to. This is done by passing the "Impersonate-User" or "Impersonate-Group" header in the Connection header, which is then removed by the proxy. At this point, instead of impersonating the user and their permissions, the request will act as if it was from the Rancher management server, i.e. local server, and return the requested information. You are vulnerable if you are running any Rancher 2.x version. Only valid Rancher users who have some level of permission on the cluster can perform the request. There is no direct mitigation besides upgrading to the patched versions. You can limit wider exposure by ensuring all Rancher users are trusted. | 14 Jul 2021 | [Rancher v2.5.9](https://github.com/rancher/rancher/releases/tag/v2.5.9), [Rancher v2.4.16](https://github.com/rancher/rancher/releases/tag/v2.4.16) |
| [CVE-2021-25318](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-25318) | A vulnerability was discovered in Rancher where users were granted access to resources regardless of the resource's API group. For example Rancher should have allowed users access to `apps.catalog.cattle.io`, but instead incorrectly gave access to `apps.*`. You are vulnerable if you are running any Rancher 2.x version. The extent of the exploit increases if there are other matching CRD resources installed in the cluster. There is no direct mitigation besides upgrading to the patched versions. | 14 Jul 2021 | [Rancher v2.5.9](https://github.com/rancher/rancher/releases/tag/v2.5.9), [Rancher v2.4.16](https://github.com/rancher/rancher/releases/tag/v2.4.16) |
| [CVE-2021-25320](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-25320) | A vulnerability was discovered in Rancher where cloud credentials weren't being properly validated through the Rancher API. Specifically through a proxy designed to communicate with cloud providers. Any Rancher user that was logged-in and aware of a cloud credential ID that was valid for a given cloud provider could make requests against that cloud provider's API through the proxy API, and the cloud credential would be attached. You are vulnerable if you are running any Rancher 2.2.0 or above and use cloud credentials. The exploit is limited to valid Rancher users. There is no direct mitigation besides upgrading to the patched versions. You can limit wider exposure by ensuring all Rancher users are trusted. | 14 Jul 2021 | [Rancher v2.5.9](https://github.com/rancher/rancher/releases/tag/v2.5.9), [Rancher v2.4.16](https://github.com/rancher/rancher/releases/tag/v2.4.16) |
| [CVE-2021-25313](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-25313) | A security vulnerability was discovered on all Rancher 2 versions. When accessing the Rancher API with a browser, the URL was not properly escaped, making it vulnerable to an XSS attack. Specially crafted URLs to these API endpoints could include JavaScript which would be embedded in the page and execute in a browser. There is no direct mitigation. Avoid clicking on untrusted links to your Rancher server. | 2 Mar 2021 | [Rancher v2.5.6](https://github.com/rancher/rancher/releases/tag/v2.5.6), [Rancher v2.4.14](https://github.com/rancher/rancher/releases/tag/v2.4.14), and [Rancher v2.3.11](https://github.com/rancher/rancher/releases/tag/v2.3.11) |
| [CVE-2019-14435](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-14435) | This vulnerability allows authenticated users to potentially extract otherwise private data out of IPs reachable from system service containers used by Rancher. This can include, but is not limited to, services such as cloud provider metadata services. Although Rancher allows users to configure whitelisted domains for system service access, this flaw can still be exploited by a carefully crafted HTTP request. The issue was found and reported by Matt Belisle and Alex Stevenson at Workiva. | 5 Aug 2019 | [Rancher v2.2.7](https://github.com/rancher/rancher/releases/tag/v2.2.7) and [Rancher v2.1.12](https://github.com/rancher/rancher/releases/tag/v2.1.12) |
| [CVE-2019-14436](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-14436) | The vulnerability allows a member of a project that has access to edit role bindings to be able to assign themselves or others a cluster level role granting them administrator access to that cluster. The issue was found and reported by Michal Lipinski at Nokia. | 5 Aug 2019 | [Rancher v2.2.7](https://github.com/rancher/rancher/releases/tag/v2.2.7) and [Rancher v2.1.12](https://github.com/rancher/rancher/releases/tag/v2.1.12) |

View File

@ -45,7 +45,7 @@ Rancher ships with two default Pod Security Policies (PSPs): the `restricted` an
This policy is based on the Kubernetes [example restricted policy](https://raw.githubusercontent.com/kubernetes/website/master/content/en/examples/policy/restricted-psp.yaml). It significantly restricts what types of pods can be deployed to a cluster or project. This policy (sketched after the list below):
- Prevents pods from running as a privileged user and prevents escalation of privileges.
- Validates that server-required security mechanisms are in place (such as restricting what volumes can be mounted to only the core volume types and preventing root supplemental groups from being added.
- Validates that server-required security mechanisms are in place (such as restricting what volumes can be mounted to only the core volume types and preventing root supplemental groups from being added).
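An abridged sketch of such a policy, loosely based on the upstream example, is shown below; the field values are illustrative and not the exact contents of Rancher's built-in `restricted` policy:

```yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted-example
spec:
  privileged: false
  allowPrivilegeEscalation: false
  requiredDropCapabilities:
    - ALL
  runAsUser:
    rule: MustRunAsNonRoot          # pods may not run as root
  supplementalGroups:
    rule: MustRunAs
    ranges:
      - min: 1                      # forbid adding the root group
        max: 65535
  fsGroup:
    rule: MustRunAs
    ranges:
      - min: 1
        max: 65535
  seLinux:
    rule: RunAsAny
  volumes:                          # restrict mounts to core volume types
    - configMap
    - emptyDir
    - projected
    - secret
    - downwardAPI
    - persistentVolumeClaim
```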
### Unrestricted

View File

@ -124,7 +124,7 @@ s3:
s3:
credentialSecretName: minio-creds
bucketName: rancherbackups
endpoint: minio.35.202.130.254.xip.io
endpoint: minio.35.202.130.254.sslip.io
endpointCA: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURHakNDQWdLZ0F3SUJBZ0lKQUtpWFZpNEpBb0J5TUEwR0NTcUdTSWIzRFFFQkN3VUFNQkl4RURBT0JnTlYKQkFNTUIzUmxjM1F0WTJFd0hoY05NakF3T0RNd01UZ3lOVFE1V2hjTk1qQXhNREk1TVRneU5UUTVXakFTTVJBdwpEZ1lEVlFRRERBZDBaWE4wTFdOaE1JSUJJakFOQmdrcWhraUc5dzBCQVFFRkFBT0NBUThBTUlJQkNnS0NBUUVBCjA4dnV3Q2Y0SEhtR2Q2azVNTmozRW5NOG00T2RpS3czSGszd1NlOUlXQkwyVzY5WDZxenBhN2I2M3U2L05mMnkKSnZWNDVqeXplRFB6bFJycjlpbEpWaVZ1NFNqWlFjdG9jWmFCaVNsL0xDbEFDdkFaUlYvKzN0TFVTZSs1ZDY0QQpWcUhDQlZObU5xM3E3aVY0TE1aSVpRc3N6K0FxaU1Sd0pOMVVKQTZ6V0tUc2Yzc3ByQ0J2dWxJWmZsVXVETVAyCnRCTCt6cXZEc0pDdWlhNEEvU2JNT29tVmM2WnNtTGkwMjdub3dGRld3MnRpSkM5d0xMRE14NnJoVHQ4a3VvVHYKQXJpUjB4WktiRU45L1Uzb011eUVKbHZyck9YS2ZuUDUwbk8ycGNaQnZCb3pUTStYZnRvQ1d5UnhKUmI5cFNTRApKQjlmUEFtLzNZcFpMMGRKY2sxR1h3SURBUUFCbzNNd2NUQWRCZ05WSFE0RUZnUVU5NHU4WXlMdmE2MTJnT1pyCm44QnlFQ2NucVFjd1FnWURWUjBqQkRzd09ZQVU5NHU4WXlMdmE2MTJnT1pybjhCeUVDY25xUWVoRnFRVU1CSXgKRURBT0JnTlZCQU1NQjNSbGMzUXRZMkdDQ1FDb2wxWXVDUUtBY2pBTUJnTlZIUk1FQlRBREFRSC9NQTBHQ1NxRwpTSWIzRFFFQkN3VUFBNElCQVFER1JRZ1RtdzdVNXRQRHA5Q2psOXlLRW9Vd2pYWWM2UlAwdm1GSHpubXJ3dUVLCjFrTkVJNzhBTUw1MEpuS29CY0ljVDNEeGQ3TGdIbTNCRE5mVVh2anArNnZqaXhJYXR2UWhsSFNVaWIyZjJsSTkKVEMxNzVyNCtROFkzelc1RlFXSDdLK08vY3pJTGh5ei93aHRDUlFkQ29lS1dXZkFiby8wd0VSejZzNkhkVFJzNwpHcWlGNWZtWGp6S0lOcTBjMHRyZ0xtalNKd1hwSnU0ZnNGOEcyZUh4b2pOKzdJQ1FuSkg5cGRIRVpUQUtOL2ppCnIvem04RlZtd1kvdTBndEZneWVQY1ZWbXBqRm03Y0ZOSkc4Y2ZYd0QzcEFwVjhVOGNocTZGeFBHTkVvWFZnclMKY1VRMklaU0RJd1FFY3FvSzFKSGdCUWw2RXBaUVpWMW1DRklrdFBwSQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0t
```
### Example credentialSecret
@ -143,7 +143,7 @@ data:
Make sure to encode the keys to base64 in the YAML file.
Run the following command to encode the keys.
```
echo -n "your_key" |base64
echo -n "your_key" |base64
```
### IAM Permissions for EC2 Nodes to Access S3
@ -152,7 +152,7 @@ There are two ways to set up the `rancher-backup` operator to use S3 as the back
One way is to configure the `credentialSecretName` in the Backup custom resource, which refers to AWS credentials that have access to S3.
If the cluster nodes are in Amazon EC2, the S3 access can also be set up by assigning IAM permissions to the EC2 nodes so that they can access S3.
If the cluster nodes are in Amazon EC2, the S3 access can also be set up by assigning IAM permissions to the EC2 nodes so that they can access S3.
To allow a node to access S3, follow the instructions in the [AWS documentation](https://aws.amazon.com/premiumsupport/knowledge-center/ec2-instance-access-s3-bucket/) to create an IAM role for EC2. When you add a custom policy to the role, add the following permissions, and replace the `Resource` with your bucket name:
@ -185,7 +185,7 @@ To allow a node to access S3, follow the instructions in the [AWS documentation]
}
```
After the role is created, and you have attached the corresponding instance profile to your EC2 instance(s), the `credentialSecretName` directive can be left empty in the Backup custom resource.
After the role is created, and you have attached the corresponding instance profile to your EC2 instance(s), the `credentialSecretName` directive can be left empty in the Backup custom resource.
# Examples

View File

@ -85,7 +85,7 @@ spec:
credentialSecretName: minio-creds
credentialSecretNamespace: default
bucketName: rancherbackups
endpoint: minio.xip.io
endpoint: minio.sslip.io
endpointCA: LS0tLS1CRUdJTi3VUFNQkl5UUT.....pbEpWaVzNkRS0tLS0t
resourceSetName: rancher-resource-set
encryptionConfigSecretName: encryptionconfig
@ -214,7 +214,7 @@ spec:
credentialSecretName: minio-creds
credentialSecretNamespace: default
bucketName: rancherbackups
endpoint: minio.xip.io
endpoint: minio.sslip.io
endpointCA: LS0tLS1CRUdJTi3VUFNQkl5UUT.....pbEpWaVzNkRS0tLS0t
encryptionConfigSecretName: test-encryptionconfig
```
@ -298,6 +298,3 @@ resources:
- name: key1
secret: YWJjZGVmZ2hpamtsbW5vcHFyc3R1dnd4eXoxMjM0NTY=
```

View File

@ -71,7 +71,7 @@ spec:
```
Directive | Description
Directive | Description
---------|----------|
`apiVersion: autoscaling/v2beta1` | The version of the Kubernetes `autoscaling` API group in use. This example manifest uses the beta version, so scaling by CPU and memory is enabled. |
`name: hello-world` | Indicates that HPA is performing autoscaling for the `hello-world` deployment. |
@ -172,7 +172,7 @@ For HPA to use custom metrics from Prometheus, package [k8s-prometheus-adapter](
I0724 10:18:45.696703 1 round_trippers.go:445] Content-Type: application/json
I0724 10:18:45.696706 1 round_trippers.go:445] Content-Length: 2581
I0724 10:18:45.696766 1 request.go:836] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/default/pods","resourceVersion":"6237"},"items":[{"metadata":{"name":"hello-world-54764dfbf8-q6l82","generateName":"hello-world-54764dfbf8-","namespace":"default","selfLink":"/api/v1/namespaces/default/pods/hello-world-54764dfbf8-q6l82","uid":"484cb929-8f29-11e8-99d2-067cac34e79c","resourceVersion":"4066","creationTimestamp":"2018-07-24T10:06:50Z","labels":{"app":"hello-world","pod-template-hash":"1032089694"},"annotations":{"cni.projectcalico.org/podIP":"10.42.0.7/32"},"ownerReferences":[{"apiVersion":"extensions/v1beta1","kind":"ReplicaSet","name":"hello-world-54764dfbf8","uid":"4849b9b1-8f29-11e8-99d2-067cac34e79c","controller":true,"blockOwnerDeletion":true}]},"spec":{"volumes":[{"name":"default-token-ncvts","secret":{"secretName":"default-token-ncvts","defaultMode":420}}],"containers":[{"name":"hello-world","image":"rancher/hello-world","ports":[{"containerPort":80,"protocol":"TCP"}],"resources":{"requests":{"cpu":"500m","memory":"64Mi"}},"volumeMounts":[{"name":"default-token-ncvts","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"Always"}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"34.220.18.140","securityContext":{},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute","tolerationSeconds":300},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute","tolerationSeconds":300}]},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2018-07-24T10:06:50Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2018-07-24T10:06:54Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2018-07-24T10:06:50Z"}],"hostIP":"34.220.18.140","podIP":"10.42.0.7","startTime":"2018-07-24T10:06:50Z","containerStatuses":[{"name":"hello-world","state":{"running":{"startedAt":"2018-07-24T10:06:54Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"rancher/hello-world:latest","imageID":"docker-pullable://rancher/hello-world@sha256:4b1559cb4b57ca36fa2b313a3c7dde774801aa3a2047930d94e11a45168bc053","containerID":"docker://cce4df5fc0408f03d4adf82c90de222f64c302bf7a04be1c82d584ec31530773"}],"qosClass":"Burstable"}}]}
I0724 10:18:45.699525 1 api.go:74] GET http://prometheus-server.prometheus.34.220.18.140.xip.io/api/v1/query?query=sum%28rate%28container_fs_read_seconds_total%7Bpod_name%3D%22hello-world-54764dfbf8-q6l82%22%2Ccontainer_name%21%3D%22POD%22%2Cnamespace%3D%22default%22%7D%5B5m%5D%29%29+by+%28pod_name%29&time=1532427525.697 200 OK
I0724 10:18:45.699525 1 api.go:74] GET http://prometheus-server.prometheus.34.220.18.140.sslip.io/api/v1/query?query=sum%28rate%28container_fs_read_seconds_total%7Bpod_name%3D%22hello-world-54764dfbf8-q6l82%22%2Ccontainer_name%21%3D%22POD%22%2Cnamespace%3D%22default%22%7D%5B5m%5D%29%29+by+%28pod_name%29&time=1532427525.697 200 OK
I0724 10:18:45.699620 1 api.go:93] Response Body: {"status":"success","data":{"resultType":"vector","result":[{"metric":{"pod_name":"hello-world-54764dfbf8-q6l82"},"value":[1532427525.697,"0"]}]}}
I0724 10:18:45.699939 1 wrap.go:42] GET /apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/%2A/fs_read?labelSelector=app%3Dhello-world: (12.431262ms) 200 [[kube-controller-manager/v1.10.1 (linux/amd64) kubernetes/d4ab475/system:serviceaccount:kube-system:horizontal-pod-autoscaler] 10.42.0.0:24268]
I0724 10:18:51.727845 1 request.go:836] Request Body: {"kind":"SubjectAccessReview","apiVersion":"authorization.k8s.io/v1beta1","metadata":{"creationTimestamp":null},"spec":{"nonResourceAttributes":{"path":"/","verb":"get"},"user":"system:anonymous","group":["system:unauthenticated"]},"status":{"allowed":false}}

View File

@ -4,7 +4,7 @@ description: Ingresses can be added for workloads to provide load balancing, SSL
weight: 3042
aliases:
- /rancher/v2.x/en/tasks/workloads/add-ingress/
- /rancher/v2.x/en/k8s-in-rancher/load-balancers-and-ingress/ingress
- /rancher/v2.x/en/k8s-in-rancher/load-balancers-and-ingress/ingress
---
Ingress can be added for workloads to provide load balancing, SSL termination and host/path based routing. When using ingresses in a project, you can program the ingress hostname to an external DNS by setting up a [Global DNS entry]({{<baseurl>}}/rancher/v2.x/en/helm-charts/legacy-catalogs/globaldns/).
@ -14,24 +14,24 @@ Ingress can be added for workloads to provide load balancing, SSL termination an
1. Enter a **Name** for the ingress.
1. Select an existing **Namespace** from the drop-down list. Alternatively, you can create a new [namespace]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/#namespaces) on the fly by clicking **Add to a new namespace**.
1. Create ingress forwarding **Rules**. For help configuring the rules, refer to [this section.](#ingress-rule-configuration) If any of your ingress rules handle requests for encrypted ports, add a certificate to encrypt/decrypt communications.
1. **Optional:** click **Add Rule** to create additional ingress rules. For example, after you create ingress rules to direct requests for your hostname, you'll likely want to create a default backend to handle 404s.
1. **Optional:** click **Add Rule** to create additional ingress rules. For example, after you create ingress rules to direct requests for your hostname, you'll likely want to create a default backend to handle 404s.
**Result:** Your ingress is added to the project. The ingress begins enforcing your ingress rules.
# Ingress Rule Configuration
- [Automatically generate a xip.io hostname](#automatically-generate-a-xip-io-hostname)
- [Automatically generate a sslip.io hostname](#automatically-generate-a-sslip-io-hostname)
- [Specify a hostname to use](#specify-a-hostname-to-use)
- [Use as the default backend](#use-as-the-default-backend)
- [Certificates](#certificates)
- [Labels and Annotations](#labels-and-annotations)
### Automatically generate a xip.io hostname
### Automatically generate a sslip.io hostname
If you choose this option, the ingress routes requests for the hostname to a DNS name that's automatically generated. Rancher uses [xip.io](http://xip.io/) to automatically generate the DNS name. This option is best used for testing, _not_ production environments.
If you choose this option, the ingress routes requests for the hostname to a DNS name that's automatically generated. Rancher uses [sslip.io](http://sslip.io/) to automatically generate the DNS name. This option is best used for testing, _not_ production environments.
>**Note:** To use this option, you must be able to resolve to `xip.io` addresses.
>**Note:** To use this option, you must be able to resolve to `sslip.io` addresses.
1. Add a **Target Backend**. By default, a workload is added to the ingress, but you can add more targets by clicking either **Service** or **Workload**.
1. **Optional:** If you want to specify a workload or service when a request is sent to a particular hostname path, add a **Path** for the target. For example, if you want requests for `www.mysite.com/contact-us` to be sent to a different service than `www.mysite.com`, enter `/contact-us` in the **Path** field. Typically, the first rule that you create does not include a path.

View File

@ -58,9 +58,9 @@ Some cloud-managed layer-7 load balancers (such as the ALB ingress controller on
Other layer-7 load balancers, such as the Google Load Balancer or Nginx Ingress Controller, directly expose one or more IP addresses. Google Load Balancer provides a single routable IP address. Nginx Ingress Controller exposes the external IP of all nodes that run the Nginx Ingress Controller. You can do either of the following:
1. Configure your own DNS to map (via A records) your domain name to the IP addresses exposed by the Layer-7 load balancer.
2. Ask Rancher to generate an xip.io host name for your ingress rule. Rancher will take one of your exposed IPs, say a.b.c.d, and generate a host name <ingressname>.<namespace>.a.b.c.d.xip.io.
2. Ask Rancher to generate an sslip.io host name for your ingress rule. Rancher will take one of your exposed IPs, say a.b.c.d, and generate a host name <ingressname>.<namespace>.a.b.c.d.sslip.io.
The benefit of using xip.io is that you obtain a working entrypoint URL immediately after you create the ingress rule. Setting up your own domain name, on the other hand, requires you to configure DNS servers and wait for DNS to propagate.
The benefit of using sslip.io is that you obtain a working entrypoint URL immediately after you create the ingress rule. Setting up your own domain name, on the other hand, requires you to configure DNS servers and wait for DNS to propagate.
## Related Links

View File

@ -24,7 +24,7 @@ The following steps will quickly deploy a Rancher Server on AWS with a single no
1. Rename the `terraform.tfvars.example` file to `terraform.tfvars`.
1. Edit `terraform.tfvars` and customize the following variables:
- `aws_access_key` - Amazon AWS Access Key
- `aws_access_key` - Amazon AWS Access Key
- `aws_secret_key` - Amazon AWS Secret Key
- `rancher_server_admin_password` - Admin password for created Rancher server
@ -45,7 +45,7 @@ Suggestions include:
Outputs:
rancher_node_ip = xx.xx.xx.xx
rancher_server_url = https://rancher.xx.xx.xx.xx.xip.io
rancher_server_url = https://rancher.xx.xx.xx.xx.sslip.io
workload_node_ip = yy.yy.yy.yy
```

View File

@ -45,7 +45,7 @@ Suggestions include:
Outputs:
rancher_node_ip = xx.xx.xx.xx
rancher_server_url = https://rancher.xx.xx.xx.xx.xip.io
rancher_server_url = https://rancher.xx.xx.xx.xx.sslip.io
workload_node_ip = yy.yy.yy.yy
```

View File

@ -25,7 +25,7 @@ The following steps will quickly deploy a Rancher server on GCP in a single-node
1. Rename the `terraform.tfvars.example` file to `terraform.tfvars`.
1. Edit `terraform.tfvars` and customize the following variables:
- `gcp_account_json` - GCP service account file path and file name
- `gcp_account_json` - GCP service account file path and file name
- `rancher_server_admin_password` - Admin password for created Rancher server
1. **Optional:** Modify optional variables within `terraform.tfvars`.
@ -46,7 +46,7 @@ Suggestions include:
Outputs:
rancher_node_ip = xx.xx.xx.xx
rancher_server_url = https://rancher.xx.xx.xx.xx.xip.io
rancher_server_url = https://rancher.xx.xx.xx.xx.sslip.io
workload_node_ip = yy.yy.yy.yy
```

View File

@ -27,7 +27,7 @@ The following steps will quickly deploy a Rancher server on Azure in a single-no
1. Rename the `terraform.tfvars.example` file to `terraform.tfvars`.
1. Edit `terraform.tfvars` and customize the following variables:
- `azure_subscription_id` - Microsoft Azure Subscription ID
- `azure_subscription_id` - Microsoft Azure Subscription ID
- `azure_client_id` - Microsoft Azure Client ID
- `azure_client_secret` - Microsoft Azure Client Secret
- `azure_tenant_id` - Microsoft Azure Tenant ID
@ -50,7 +50,7 @@ Suggestions include:
Outputs:
rancher_node_ip = xx.xx.xx.xx
rancher_server_url = https://rancher.xx.xx.xx.xx.xip.io
rancher_server_url = https://rancher.xx.xx.xx.xx.sslip.io
workload_node_ip = yy.yy.yy.yy
```

View File

@ -61,11 +61,11 @@ Now that the application is up and running it needs to be exposed so that other
9. Leave everything else as default and click **Save**.
**Result:** The application is assigned a `xip.io` address and exposed. It may take a minute or two to populate.
**Result:** The application is assigned a `sslip.io` address and exposed. It may take a minute or two to populate.
### View Your Application
From the **Load Balancing** page, click the target link, which will look something like `hello.default.xxx.xxx.xxx.xxx.xip.io > hello-world`.
From the **Load Balancing** page, click the target link, which will look something like `hello.default.xxx.xxx.xxx.xxx.sslip.io > hello-world`.
Your application will open in a separate window.

View File

@ -7,6 +7,9 @@ Rancher is committed to informing the community of security issues in our produc
| ID | Description | Date | Resolution |
|----|-------------|------|------------|
| [CVE-2021-31999](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-31999) | A vulnerability was discovered where a malicious Rancher user could craft an API request directed at the proxy for the Kubernetes API of a managed cluster to gain access to information they do not have access to. This is done by passing the "Impersonate-User" or "Impersonate-Group" header in the Connection header, which is then removed by the proxy. At this point, instead of impersonating the user and their permissions, the request will act as if it was from the Rancher management server, i.e. local server, and return the requested information. You are vulnerable if you are running any Rancher 2.x version. Only valid Rancher users who have some level of permission on the cluster can perform the request. There is no direct mitigation besides upgrading to the patched versions. You can limit wider exposure by ensuring all Rancher users are trusted. | 14 Jul 2021 | [Rancher v2.5.9](https://github.com/rancher/rancher/releases/tag/v2.5.9), [Rancher v2.4.16](https://github.com/rancher/rancher/releases/tag/v2.4.16) |
| [CVE-2021-25318](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-25318) | A vulnerability was discovered in Rancher where users were granted access to resources regardless of the resource's API group. For example Rancher should have allowed users access to `apps.catalog.cattle.io`, but instead incorrectly gave access to `apps.*`. You are vulnerable if you are running any Rancher 2.x version. The extent of the exploit increases if there are other matching CRD resources installed in the cluster. There is no direct mitigation besides upgrading to the patched versions. | 14 Jul 2021 | [Rancher v2.5.9](https://github.com/rancher/rancher/releases/tag/v2.5.9), [Rancher v2.4.16](https://github.com/rancher/rancher/releases/tag/v2.4.16) |
| [CVE-2021-25320](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-25320) | A vulnerability was discovered in Rancher where cloud credentials weren't being properly validated through the Rancher API. Specifically through a proxy designed to communicate with cloud providers. Any Rancher user that was logged-in and aware of a cloud credential ID that was valid for a given cloud provider could make requests against that cloud provider's API through the proxy API, and the cloud credential would be attached. You are vulnerable if you are running any Rancher 2.2.0 or above and use cloud credentials. The exploit is limited to valid Rancher users. There is no direct mitigation besides upgrading to the patched versions. You can limit wider exposure by ensuring all Rancher users are trusted. | 14 Jul 2021 | [Rancher v2.5.9](https://github.com/rancher/rancher/releases/tag/v2.5.9), [Rancher v2.4.16](https://github.com/rancher/rancher/releases/tag/v2.4.16) |
| [CVE-2018-20321](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-20321) | Any project member with access to the `default` namespace can mount the `netes-default` service account in a pod and then use that pod to execute administrative privileged commands against the Kubernetes cluster. | 29 Jan 2019 | [Rancher v2.1.6](https://github.com/rancher/rancher/releases/tag/v2.1.6) and [Rancher v2.0.11](https://github.com/rancher/rancher/releases/tag/v2.0.11) - Rolling back from these versions or greater requires following specific [instructions]({{<baseurl>}}/rancher/v2.x/en/upgrades/rollbacks/). |
| [CVE-2019-6287](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-6287) | Project members continue to get access to namespaces from projects that they were removed from if they were added to more than one project. | 29 Jan 2019 | [Rancher v2.1.6](https://github.com/rancher/rancher/releases/tag/v2.1.6) and [Rancher v2.0.11](https://github.com/rancher/rancher/releases/tag/v2.0.11) |
| [CVE-2019-11202](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-11202) | The default admin that ships with Rancher will be re-created when Rancher restarts, even if it was explicitly deleted. | 16 Apr 2019 | [Rancher v2.2.2](https://github.com/rancher/rancher/releases/tag/v2.2.2), [Rancher v2.1.9](https://github.com/rancher/rancher/releases/tag/v2.1.9) and [Rancher v2.0.14](https://github.com/rancher/rancher/releases/tag/v2.0.14) |

View File

@ -5,6 +5,8 @@ weight: 150
_Available as of v0.2.0_
> **Note:** This is not "TLS certificate management in Kubernetes". Refer to the [Kubernetes documentation](https://kubernetes.io/docs/tasks/tls/managing-tls-in-a-cluster/) and the RKE [cluster.yaml example]({{<baseurl>}}/rke/latest/en/example-yamls/) for more details.
Certificates are an important part of Kubernetes clusters and are used for all Kubernetes cluster components. RKE has a `rke cert` command to help work with certificates.
* [Ability to generate certificate sign requests for the Kubernetes components](#generating-certificate-signing-requests-csrs-and-keys)

View File

@ -30,8 +30,8 @@ time="2018-05-04T18:43:16Z" level=info msg="Created backup" name="2018-05-04T18:
|Option|Description| S3 Specific |
|---|---| --- |
|**interval_hours**| The duration in hours between recurring backups. This supersedes the `creation` option (which was used in RKE before v0.2.0) and will override it if both are specified.| |
|**retention**| The number of snapshots to retain before rotation. If the retention is configured in both `etcd.retention` (time period to keep snapshots in hours), which was required in RKE before v0.2.0, and at `etcd.backup_config.retention` (number of snapshots), the latter will be used. | |
|**interval_hours**| The duration in hours between recurring backups. This supersedes the `creation` option (which was used in RKE before v0.2.0) and will override it if both are specified. (Default: 12)| |
|**retention**| The number of snapshots to retain before rotation. If the retention is configured in both `etcd.retention` (time period to keep snapshots in hours), which was required in RKE before v0.2.0, and at `etcd.backup_config.retention` (number of snapshots), the latter will be used. (Default: 6) | |
|**bucket_name**| S3 bucket name where backups will be stored| * |
|**folder**| Folder inside S3 bucket where backups will be stored. This is optional. _Available as of v0.3.0_ | * |
|**access_key**| S3 access key with permission to access the backup bucket.| * |
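Putting a few of these options together, a recurring S3 backup could be configured in `cluster.yml` roughly as in the sketch below; the bucket, credentials, endpoint, and region values are placeholders:

```yaml
services:
  etcd:
    backup_config:
      interval_hours: 12        # take a snapshot every 12 hours (default)
      retention: 6              # keep the 6 most recent snapshots (default)
      s3backupconfig:
        access_key: S3_ACCESS_KEY
        secret_key: S3_SECRET_KEY
        bucket_name: rke-etcd-backups
        folder: mycluster       # optional, available as of v0.3.0
        endpoint: s3.amazonaws.com
        region: us-east-1
```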

View File

@ -250,6 +250,10 @@ services:
v: 4
# Enable RotateKubeletServerCertificate feature gate
feature-gates: RotateKubeletServerCertificate=true
# Enable TLS Certificates management
# https://kubernetes.io/docs/tasks/tls/managing-tls-in-a-cluster/
cluster-signing-cert-file: "/etc/kubernetes/ssl/kube-ca.pem"
cluster-signing-key-file: "/etc/kubernetes/ssl/kube-ca-key.pem"
kubelet:
# Base domain for the cluster
cluster_domain: cluster.local