Merge pull request #39103 from windsonsea/lvlpss

Fix indentations in cluster-level-pss.md
This commit is contained in:
Kubernetes Prow Robot 2023-01-27 02:44:26 -08:00 committed by GitHub
commit 24cde2766a
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
1 changed file with 142 additions and 142 deletions


@@ -41,56 +41,55 @@ that are most appropriate for your configuration, do the following:
1. Create a cluster with no Pod Security Standards applied:
```shell
kind create cluster --name psa-wo-cluster-pss --image kindest/node:v1.24.0
```
The output is similar to this:
```
Creating cluster "psa-wo-cluster-pss" ...
✓ Ensuring node image (kindest/node:v1.24.0) 🖼
✓ Preparing nodes 📦
✓ Writing configuration 📜
✓ Starting control-plane 🕹️
✓ Installing CNI 🔌
✓ Installing StorageClass 💾
Set kubectl context to "kind-psa-wo-cluster-pss"
You can now use your cluster with:
kubectl cluster-info --context kind-psa-wo-cluster-pss
Thanks for using kind! 😊
```
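If you want to double-check that the cluster exists before switching contexts, you can list the clusters that kind manages (an optional verification step):
```shell
# List the clusters managed by kind; "psa-wo-cluster-pss" should appear in the output
kind get clusters
```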
1. Set the kubectl context to the new cluster:
```shell
kubectl cluster-info --context kind-psa-wo-cluster-pss
```
The output is similar to this:
```
Kubernetes control plane is running at https://127.0.0.1:61350
CoreDNS is running at https://127.0.0.1:61350/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
```
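You can also list the available contexts to confirm which cluster kubectl is currently pointing at (optional):
```shell
# Show all kubeconfig contexts; the active context is marked with an asterisk
kubectl config get-contexts
```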
1. Get a list of namespaces in the cluster:
```shell
kubectl get ns
```
The output is similar to this:
```
NAME STATUS AGE
default Active 9m30s
kube-node-lease Active 9m32s
kube-public Active 9m32s
kube-system Active 9m32s
local-path-storage Active 9m26s
```
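To verify that none of these namespaces carry Pod Security labels yet, you can also display their labels (optional check):
```shell
# Show namespace labels; no pod-security.kubernetes.io/* labels should be present at this point
kubectl get ns --show-labels
```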
1. Use `--dry-run=server` to understand what happens when different Pod Security Standards
are applied:
@@ -100,7 +99,7 @@ that are most appropriate for your configuration, do the following:
kubectl label --dry-run=server --overwrite ns --all \
pod-security.kubernetes.io/enforce=privileged
```
The output is similar to this:
```
namespace/default labeled
namespace/kube-node-lease labeled
@@ -113,7 +112,7 @@ that are most appropriate for your configuration, do the following:
kubectl label --dry-run=server --overwrite ns --all \
pod-security.kubernetes.io/enforce=baseline
```
The output is similar to this:
```
namespace/default labeled
namespace/kube-node-lease labeled
@@ -127,11 +126,11 @@ that are most appropriate for your configuration, do the following:
```
3. Restricted
```shell
kubectl label --dry-run=server --overwrite ns --all \
pod-security.kubernetes.io/enforce=restricted
```
The output is similar to this:
```
namespace/default labeled
namespace/kube-node-lease labeled
@@ -179,72 +178,72 @@ following:
1. Create a configuration file that can be consumed by the Pod Security
Admission Controller to implement these Pod Security Standards:
```
mkdir -p /tmp/pss
cat <<EOF > /tmp/pss/cluster-level-pss.yaml
apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
- name: PodSecurity
  configuration:
    apiVersion: pod-security.admission.config.k8s.io/v1
    kind: PodSecurityConfiguration
    defaults:
      enforce: "baseline"
      enforce-version: "latest"
      audit: "restricted"
      audit-version: "latest"
      warn: "restricted"
      warn-version: "latest"
    exemptions:
      usernames: []
      runtimeClasses: []
      namespaces: [kube-system]
EOF
```
{{< note >}}
`pod-security.admission.config.k8s.io/v1` configuration requires v1.25+.
For v1.23 and v1.24, use [v1beta1](https://v1-24.docs.kubernetes.io/docs/tasks/configure-pod-container/enforce-standards-admission-controller/).
For v1.22, use [v1alpha1](https://v1-22.docs.kubernetes.io/docs/tasks/configure-pod-container/enforce-standards-admission-controller/).
{{< /note >}}
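For example, on a v1.23 or v1.24 control plane the equivalent configuration keeps the same fields and only changes the inner API version to `v1beta1`; the following heredoc is a sketch of that variant, based on the linked v1.24 documentation:
```shell
cat <<EOF > /tmp/pss/cluster-level-pss.yaml
apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
- name: PodSecurity
  configuration:
    # v1beta1 of the PodSecurity configuration API, for v1.23 and v1.24 control planes
    apiVersion: pod-security.admission.config.k8s.io/v1beta1
    kind: PodSecurityConfiguration
    defaults:
      enforce: "baseline"
      enforce-version: "latest"
      audit: "restricted"
      audit-version: "latest"
      warn: "restricted"
      warn-version: "latest"
    exemptions:
      usernames: []
      runtimeClasses: []
      namespaces: [kube-system]
EOF
```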
1. Configure the API server to consume this file during cluster creation:
```
cat <<EOF > /tmp/pss/cluster-config.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  kubeadmConfigPatches:
  - |
    kind: ClusterConfiguration
    apiServer:
      extraArgs:
        admission-control-config-file: /etc/config/cluster-level-pss.yaml
      extraVolumes:
      - name: accf
        hostPath: /etc/config
        mountPath: /etc/config
        readOnly: false
        pathType: "DirectoryOrCreate"
  extraMounts:
  - hostPath: /tmp/pss
    containerPath: /etc/config
    # optional: if set, the mount is read-only.
    # default false
    readOnly: false
    # optional: if set, the mount needs SELinux relabeling.
    # default false
    selinuxRelabel: false
    # optional: set propagation mode (None, HostToContainer or Bidirectional)
    # see https://kubernetes.io/docs/concepts/storage/volumes/#mount-propagation
    # default None
    propagation: None
EOF
```
{{<note>}}
If you use Docker Desktop with KinD on macOS, you can
@@ -256,56 +255,57 @@ following:
these Pod Security Standards:
```shell
kind create cluster --name psa-with-cluster-pss --image kindest/node:v1.24.0 --config /tmp/pss/cluster-config.yaml
```
The output is similar to this:
```
Creating cluster "psa-with-cluster-pss" ...
✓ Ensuring node image (kindest/node:v1.24.0) 🖼
✓ Preparing nodes 📦
✓ Writing configuration 📜
✓ Starting control-plane 🕹️
✓ Installing CNI 🔌
✓ Installing StorageClass 💾
Set kubectl context to "kind-psa-with-cluster-pss"
You can now use your cluster with:

kubectl cluster-info --context kind-psa-with-cluster-pss

Have a question, bug, or feature request? Let us know! https://kind.sigs.k8s.io/#community 🙂
```
1. Point kubectl to the cluster:
```shell
kubectl cluster-info --context kind-psa-with-cluster-pss
```
The output is similar to this:
```
Kubernetes control plane is running at https://127.0.0.1:63855
CoreDNS is running at https://127.0.0.1:63855/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
```
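Optionally, you can verify that the API server in this cluster picked up the admission configuration by checking its command line for the `--admission-control-config-file` flag (a rough check; the label selector and jsonpath below assume a standard kubeadm/kind control plane):
```shell
# Print the kube-apiserver command line and look for the admission configuration flag
kubectl -n kube-system get pod -l component=kube-apiserver \
  -o jsonpath='{.items[0].spec.containers[0].command}' | tr ',' '\n' | grep admission-control-config-file
```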
1. Create the following Pod specification for a minimal configuration in the default namespace:
```
cat <<EOF > /tmp/pss/nginx-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - image: nginx
    name: nginx
    ports:
    - containerPort: 80
EOF
```
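Before creating the Pod, you can optionally run a server-side dry run; because the `warn` mode above is set to `restricted`, the API server is expected to return warnings about the fields this minimal spec does not set (the exact messages vary by version):
```shell
# Server-side dry run: the Pod is validated (and warnings emitted) but not persisted
kubectl apply --dry-run=server -f /tmp/pss/nginx-pod.yaml
```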
1. Create the Pod in the cluster:
```shell
kubectl apply -f /tmp/pss/nginx-pod.yaml
```
The output is similar to this:
```