Merge pull request #45550 from my-git9/apparmor042
[zh-cn]sync security/apparmor
@@ -16,7 +16,7 @@ weight: 30
{{< feature-state for_k8s_version="v1.4" state="beta" >}}

<!--
AppArmor is a Linux kernel security module that supplements the standard Linux user and group based
[AppArmor](https://apparmor.net/) is a Linux kernel security module that supplements the standard Linux user and group based
permissions to confine programs to a limited set of resources. AppArmor can be configured for any
application to reduce its potential attack surface and provide greater in-depth defense. It is
configured through profiles tuned to allow the access needed by a specific program or container,
@@ -24,7 +24,7 @@ such as Linux capabilities, network access, file permissions, etc. Each profile
*enforcing* mode, which blocks access to disallowed resources, or *complain* mode, which only reports
violations.
-->
AppArmor 是一个 Linux 内核安全模块,
[AppArmor](https://apparmor.net/) 是一个 Linux 内核安全模块,
它补充了基于标准 Linux 用户和组的权限,将程序限制在一组有限的资源中。
AppArmor 可以配置为任何应用程序减少潜在的攻击面,并且提供更加深入的防御。
它通过调整配置文件进行配置,以允许特定程序或容器所需的访问,
@@ -33,13 +33,13 @@ AppArmor 可以配置为任何应用程序减少潜在的攻击面,并且提
模式(阻止访问不允许的资源)或 **投诉(complain)** 模式(仅报告冲突)下运行。

<!--
AppArmor can help you to run a more secure deployment by restricting what containers are allowed to
On Kubernetes, AppArmor can help you to run a more secure deployment by restricting what containers are allowed to
do, and/or provide better auditing through system logs. However, it is important to keep in mind
that AppArmor is not a silver bullet and can only do so much to protect against exploits in your
application code. It is important to provide good, restrictive profiles, and harden your
applications and cluster from other angles as well.
-->
AppArmor 可以通过限制允许容器执行的操作,
在 Kubernetes 中,AppArmor 可以通过限制允许容器执行的操作,
和/或通过系统日志提供更好的审计来帮助你运行更安全的部署。
但是,重要的是要记住 AppArmor 不是灵丹妙药,
只能做部分事情来防止应用程序代码中的漏洞。
@@ -48,7 +48,7 @@ AppArmor 可以通过限制允许容器执行的操作,
## {{% heading "objectives" %}}

<!--
* See an example of how to load a profile on a node
* See an example of how to load a profile on a Node
* Learn how to enforce the profile on a Pod
* Learn how to check that the profile is loaded
* See what happens when a profile is violated
@@ -63,38 +63,18 @@ AppArmor 可以通过限制允许容器执行的操作,
## {{% heading "prerequisites" %}}

<!--
Make sure:
AppArmor is an optional kernel module and Kubernetes feature, so verify it is supported on your
Nodes before proceeding:
-->
确保:
AppArmor 是一个可选的内核模块和 Kubernetes 特性,因此请在继续之前验证你的节点是否支持它:

<!--
1. Kubernetes version is at least v1.4 -- Kubernetes support for AppArmor was added in
v1.4. Kubernetes components older than v1.4 are not aware of the new AppArmor annotations, and
will **silently ignore** any AppArmor settings that are provided. To ensure that your Pods are
receiving the expected protections, it is important to verify the Kubelet version of your nodes:
-->
1. Kubernetes 版本至少是 v1.4 —— AppArmor 在 Kubernetes v1.4 版本中才添加了对 AppArmor 的支持。
早于 v1.4 版本的 Kubernetes 组件不知道新的 AppArmor
注解并且将会 **默认忽略** 提供的任何 AppArmor 设置。
为了确保你的 Pod 能够得到预期的保护,必须验证节点的 Kubelet 版本:

```shell
kubectl get nodes -o=jsonpath=$'{range .items[*]}{@.metadata.name}: {@.status.nodeInfo.kubeletVersion}\n{end}'
```

```
gke-test-default-pool-239f5d02-gyn2: v1.4.0
gke-test-default-pool-239f5d02-x1kf: v1.4.0
gke-test-default-pool-239f5d02-xwux: v1.4.0
```

<!--
2. AppArmor kernel module is enabled -- For the Linux kernel to enforce an AppArmor profile, the
1. AppArmor kernel module is enabled -- For the Linux kernel to enforce an AppArmor profile, the
AppArmor kernel module must be installed and enabled. Several distributions enable the module by
default, such as Ubuntu and SUSE, and many others provide optional support. To check whether the
module is enabled, check the `/sys/module/apparmor/parameters/enabled` file:
-->
2. AppArmor 内核模块已启用 —— 要使 Linux 内核强制执行 AppArmor 配置文件,
1. AppArmor 内核模块已启用 —— 要使 Linux 内核强制执行 AppArmor 配置文件,
必须安装并且启用 AppArmor 内核模块。默认情况下,有几个发行版支持该模块,
如 Ubuntu 和 SUSE,还有许多发行版提供可选支持。要检查模块是否已启用,请检查
`/sys/module/apparmor/parameters/enabled` 文件:
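下面是一个示意性的检查方式(假设你可以在节点上执行命令);输出为 `Y` 时表示模块已启用:

```shell
# 在节点上执行;若输出 Y,则说明 AppArmor 内核模块已启用
cat /sys/module/apparmor/parameters/enabled
```

```
Y
```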
@ -105,43 +85,30 @@ Make sure:
|
|||
```
|
||||
|
||||
<!--
|
||||
If the Kubelet contains AppArmor support (>= v1.4), it will refuse to run a Pod with AppArmor
|
||||
options if the kernel module is not enabled.
|
||||
The Kubelet verifies that AppArmor is enabled on the host before admitting a pod with AppArmor
|
||||
explicitly configured.
|
||||
-->
|
||||
如果 Kubelet 包含 AppArmor 支持(>= v1.4),
|
||||
但是内核模块未启用,它将拒绝运行带有 AppArmor 选项的 Pod。
|
||||
|
||||
{{< note >}}
|
||||
<!--
|
||||
Ubuntu carries many AppArmor patches that have not been merged into the upstream Linux
|
||||
kernel, including patches that add additional hooks and features. Kubernetes has only been
|
||||
tested with the upstream version, and does not promise support for other features.
|
||||
-->
|
||||
Ubuntu 携带了许多没有合并到上游 Linux 内核中的 AppArmor 补丁,
|
||||
包括添加附加钩子和特性的补丁。Kubernetes 只在上游版本中测试过,不承诺支持其他特性。
|
||||
{{< /note >}}
|
||||
kubelet 会先验证主机上是否已启用 AppArmor,然后再接纳显式配置了 AppArmor 的 Pod。
|
||||
|
||||
<!--
|
||||
3. Container runtime supports AppArmor -- Currently all common Kubernetes-supported container
|
||||
runtimes should support AppArmor, like {{< glossary_tooltip term_id="docker">}},
|
||||
{{< glossary_tooltip term_id="cri-o" >}} or {{< glossary_tooltip term_id="containerd" >}}.
|
||||
Please refer to the corresponding runtime documentation and verify that the cluster fulfills
|
||||
the requirements to use AppArmor.
|
||||
3. Container runtime supports AppArmor -- All common Kubernetes-supported container
|
||||
runtimes should support AppArmor, including {{< glossary_tooltip term_id="cri-o" >}} and
|
||||
{{< glossary_tooltip term_id="containerd" >}}. Please refer to the corresponding runtime
|
||||
documentation and verify that the cluster fulfills the requirements to use AppArmor.
|
||||
-->
|
||||
3. 容器运行时支持 AppArmor —— 目前所有常见的 Kubernetes 支持的容器运行时都应该支持 AppArmor,
|
||||
像 {{< glossary_tooltip term_id="docker">}}、{{< glossary_tooltip term_id="cri-o" >}}
|
||||
或 {{< glossary_tooltip term_id="containerd" >}}。
|
||||
2. 容器运行时支持 AppArmor —— 所有常见的 Kubernetes 支持的容器运行时都应该支持 AppArmor,
|
||||
包括 {{< glossary_tooltip term_id="cri-o" >}} 和 {{< glossary_tooltip term_id="containerd" >}}。
|
||||
请参考相应的运行时文档并验证集群是否满足使用 AppArmor 的要求。
|
||||
|
||||
<!--
|
||||
4. Profile is loaded -- AppArmor is applied to a Pod by specifying an AppArmor profile that each
|
||||
container should be run with. If any of the specified profiles is not already loaded in the
|
||||
kernel, the Kubelet (>= v1.4) will reject the Pod. You can view which profiles are loaded on a
|
||||
3. Profile is loaded -- AppArmor is applied to a Pod by specifying an AppArmor profile that each
|
||||
container should be run with. If any of the specified profiles is not loaded in the
|
||||
kernel, the Kubelet will reject the Pod. You can view which profiles are loaded on a
|
||||
node by checking the `/sys/kernel/security/apparmor/profiles` file. For example:
|
||||
-->
|
||||
4. 配置文件已加载 —— 通过指定每个容器都应使用的 AppArmor 配置文件,
|
||||
AppArmor 会被应用到 Pod 上。如果指定的任何配置文件尚未加载到内核,
|
||||
Kubelet(>= v1.4)将拒绝 Pod。
|
||||
3. 配置文件已加载 —— 通过指定每个容器应使用的 AppArmor 配置文件,
|
||||
AppArmor 会被应用到 Pod 上。如果所指定的配置文件未加载到内核,
|
||||
kubelet 将拒绝 Pod。
|
||||
通过检查 `/sys/kernel/security/apparmor/profiles` 文件,
|
||||
可以查看节点加载了哪些配置文件。例如:
|
||||
|
||||
|
|
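下面给出一个示意性的检查方式(假设你可以通过 SSH 登录节点;`<node-name>` 仅为占位符):

```shell
# 列出该节点内核当前已加载的 AppArmor 配置文件
ssh <node-name> "sudo cat /sys/kernel/security/apparmor/profiles | sort"
```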
@@ -163,26 +130,6 @@ Make sure:

有关在节点上加载配置文件的详细信息,请参见[使用配置文件设置节点](#setting-up-nodes-with-profiles)。

<!--
As long as the Kubelet version includes AppArmor support (>= v1.4), the Kubelet will reject a Pod
with AppArmor options if any of the prerequisites are not met. You can also verify AppArmor support
on nodes by checking the node ready condition message (though this is likely to be removed in a
later release):
-->
只要 Kubelet 版本包含 AppArmor 支持(>=v1.4),
如果不满足这些先决条件,Kubelet 将拒绝带有 AppArmor 选项的 Pod。
你还可以通过检查节点就绪状况消息来验证节点上的 AppArmor 支持(尽管这可能会在以后的版本中删除):

```shell
kubectl get nodes -o=jsonpath='{range .items[*]}{@.metadata.name}: {.status.conditions[?(@.reason=="KubeletReady")].message}{"\n"}{end}'
```

```
gke-test-default-pool-239f5d02-gyn2: kubelet is posting ready status. AppArmor enabled
gke-test-default-pool-239f5d02-x1kf: kubelet is posting ready status. AppArmor enabled
gke-test-default-pool-239f5d02-xwux: kubelet is posting ready status. AppArmor enabled
```

<!-- lessoncontent -->

<!--
@@ -212,7 +159,7 @@ container.apparmor.security.beta.kubernetes.io/<container_name>: <profile_ref>

<!--
Where `<container_name>` is the name of the container to apply the profile to, and `<profile_ref>`
specifies the profile to apply. The `profile_ref` can be one of:
specifies the profile to apply. The `<profile_ref>` can be one of:
-->
`<container_name>` 是要应用配置文件的容器的名称,`<profile_ref>` 则指定要应用的配置文件。
`<profile_ref>` 可以是以下取值之一:
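下面是一个示意性示例,演示如何通过注解为容器指定 AppArmor 配置文件
(假设名为 k8s-apparmor-example-deny-write 的配置文件已加载到节点上;Pod 名称与容器名 hello 仅为示例):

```shell
kubectl create -f /dev/stdin <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: annotation-example
  annotations:
    # 键名格式:container.apparmor.security.beta.kubernetes.io/<container_name>
    container.apparmor.security.beta.kubernetes.io/hello: localhost/k8s-apparmor-example-deny-write
spec:
  containers:
  - name: hello
    image: busybox:1.28
    command: [ "sh", "-c", "echo 'Hello AppArmor!' && sleep 1h" ]
EOF
```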
@@ -232,26 +179,24 @@ See the [API Reference](#api-reference) for the full details on the annotation a

有关注解和配置文件名称格式的详细信息,请参阅 [API 参考](#api-reference)。

<!--
Kubernetes AppArmor enforcement works by first checking that all the prerequisites have been
met, and then forwarding the profile selection to the container runtime for enforcement. If the
prerequisites have not been met, the Pod will be rejected, and will not run.
To verify that the profile was applied, you can check that the container's root process is
running with the correct profile by examining its proc attr:
-->
Kubernetes AppArmor 强制执行机制首先检查所有先决条件都已满足,
然后将所选的配置文件转发到容器运行时进行强制执行。
如果未满足先决条件,Pod 将被拒绝,并且不会运行。

<!--
To verify that the profile was applied, you can look for the AppArmor security option listed in the container created event:
-->
要验证是否应用了配置文件,可以在容器创建事件中查找所列出的 AppArmor 安全选项:
要验证是否应用了配置文件,
你可以通过检查容器根进程的进程属性来检查该进程是否正在使用正确的配置文件运行:

```shell
kubectl get events | grep Created
kubectl exec <pod_name> -- cat /proc/1/attr/current
```

```
22s 22s 1 hello-apparmor Pod spec.containers{hello} Normal Created {kubelet e2e-test-stclair-node-pool-31nt} Created container with docker id 269a53b202d3; Security:[seccomp=unconfined apparmor=k8s-apparmor-example-deny-write]
```
<!--
The output should look something like this:
-->
输出应如下所示:

```
k8s-apparmor-example-deny-write (enforce)
```

<!--
You can also verify directly that the container's root process is running with the correct profile by checking its proc attr:
@@ -277,11 +222,11 @@ k8s-apparmor-example-deny-write (enforce)

**本例假设你已经搭建了一个支持 AppArmor 的集群。**

<!--
First, we need to load the profile we want to use onto our nodes. This profile denies all file writes:
First, load the profile you want to use onto your Nodes. This profile denies all file writes:
-->
首先,我们需要将要使用的配置文件加载到节点上。配置文件拒绝所有文件写入:
首先,将要使用的配置文件加载到节点上,此配置文件拒绝所有文件写入:

```shell
```
#include <tunables/global>

profile k8s-apparmor-example-deny-write flags=(attach_disconnected) {
@@ -295,20 +240,20 @@ profile k8s-apparmor-example-deny-write flags=(attach_disconnected) {
```

<!--
Since we don't know where the Pod will be scheduled, we'll need to load the profile on all our
nodes. For this example we'll use SSH to install the profiles, but other approaches are
The profile needs to be loaded onto all nodes, since you don't know where the pod will be scheduled.
For this example we'll use SSH to install the profiles, but other approaches are
discussed in [Setting up nodes with profiles](#setting-up-nodes-with-profiles).
-->
由于我们不知道 Pod 将被调度到哪里,我们需要在所有节点上加载配置文件。
由于不知道 Pod 将被调度到哪里,该配置文件需要加载到所有节点上。
在本例中,我们将使用 SSH 来安装配置文件,
但是在[使用配置文件设置节点](#setting-up-nodes-with-profiles)中讨论了其他方法。

<!--
# This example assumes that node names match host names, and are reachable via SSH.
-->
```shell
NODES=(
# 你的节点的可通过 SSH 访问的域名
gke-test-default-pool-239f5d02-gyn2.us-central1-a.my-k8s
gke-test-default-pool-239f5d02-x1kf.us-central1-a.my-k8s
gke-test-default-pool-239f5d02-xwux.us-central1-a.my-k8s)
# 此示例假设节点名称与主机名称匹配,并且可通过 SSH 访问。
NODES=($(kubectl get nodes -o name))
for NODE in ${NODES[*]}; do ssh $NODE 'sudo apparmor_parser -q <<EOF
#include <tunables/global>
@@ -325,52 +270,38 @@ done
```

<!--
Next, we'll run a simple "Hello AppArmor" pod with the deny-write profile:
Next, run a simple "Hello AppArmor" pod with the deny-write profile:
-->
接下来,我们将运行一个带有拒绝写入配置文件的简单 “Hello AppArmor” Pod:
接下来,运行一个带有拒绝写入配置文件的简单 “Hello AppArmor” Pod:

{{% code_sample file="pods/security/hello-apparmor.yaml" %}}

```shell
kubectl create -f ./hello-apparmor.yaml
kubectl create -f hello-apparmor.yaml
```
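创建之后,可以先确认 Pod 是否正常启动(示意性检查,并非本教程的必需步骤):

```shell
# Pod 应进入 Running 状态;若停留在 Pending,请检查前述先决条件是否满足
kubectl get pod hello-apparmor
```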
<!--
If we look at the pod events, we can see that the Pod container was created with the AppArmor
profile "k8s-apparmor-example-deny-write":
You can verify that the container is actually running with that profile by checking its `/proc/1/attr/current`:
-->
如果我们查看 Pod 事件,我们可以看到 Pod 容器是用 AppArmor
配置文件 “k8s-apparmor-example-deny-write” 所创建的:

```shell
kubectl get events | grep hello-apparmor
```

```
14s 14s 1 hello-apparmor Pod Normal Scheduled {default-scheduler } Successfully assigned hello-apparmor to gke-test-default-pool-239f5d02-gyn2
14s 14s 1 hello-apparmor Pod spec.containers{hello} Normal Pulling {kubelet gke-test-default-pool-239f5d02-gyn2} pulling image "busybox"
13s 13s 1 hello-apparmor Pod spec.containers{hello} Normal Pulled {kubelet gke-test-default-pool-239f5d02-gyn2} Successfully pulled image "busybox"
13s 13s 1 hello-apparmor Pod spec.containers{hello} Normal Created {kubelet gke-test-default-pool-239f5d02-gyn2} Created container with docker id 06b6cd1c0989; Security:[seccomp=unconfined apparmor=k8s-apparmor-example-deny-write]
13s 13s 1 hello-apparmor Pod spec.containers{hello} Normal Started {kubelet gke-test-default-pool-239f5d02-gyn2} Started container with docker id 06b6cd1c0989
```

<!--
We can verify that the container is actually running with that profile by checking its proc attr:
-->
我们可以通过检查该配置文件的 proc attr 来验证容器是否实际使用该配置文件运行:
你可以通过检查其 `/proc/1/attr/current` 来验证容器是否确实使用该配置文件运行:

```shell
kubectl exec hello-apparmor -- cat /proc/1/attr/current
```

<!--
The output should be:
-->
输出应该是:

```
k8s-apparmor-example-deny-write (enforce)
```

<!--
Finally, we can see what happens if we try to violate the profile by writing to a file:
Finally, you can see what happens if you violate the profile by writing to a file:
-->
最后,我们可以看到,如果我们尝试通过写入文件来违反配置文件会发生什么:
最后,你可以看到,如果你通过写入文件来违反配置文件会发生什么:

```shell
kubectl exec hello-apparmor -- touch /tmp/test
@@ -382,15 +313,12 @@ error: error executing remote command: command terminated with non-zero exit cod
```
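被 AppArmor 拒绝的操作通常也会记录到节点的内核日志中,可以在运行该 Pod 的节点上查看
(示意性检查;日志的具体格式与位置取决于发行版以及是否运行 auditd):

```shell
# 查看最近的 AppArmor 记录,拒绝操作通常显示为 apparmor="DENIED"
dmesg | grep -i apparmor | tail
```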
<!--
To wrap up, let's look at what happens if we try to specify a profile that hasn't been loaded:
To wrap up, see what happens if you try to specify a profile that hasn't been loaded:
-->
最后,让我们看看如果我们试图指定一个尚未加载的配置文件会发生什么:
最后,看看如果你尝试指定尚未加载的配置文件会发生什么:

```shell
kubectl create -f /dev/stdin <<EOF
```

```yaml
apiVersion: v1
kind: Pod
metadata:
@@ -403,9 +331,16 @@ spec:
image: busybox:1.28
command: [ "sh", "-c", "echo 'Hello AppArmor!' && sleep 1h" ]
EOF
```
```
pod/hello-apparmor-2 created
```

<!--
Although the Pod was created successfully, further examination will show that it is stuck in pending:
-->
虽然 Pod 创建成功,但进一步检查会发现它陷入 pending 状态:

```shell
kubectl describe pod hello-apparmor-2
```
@@ -413,59 +348,30 @@ kubectl describe pod hello-apparmor-2
```
Name: hello-apparmor-2
Namespace: default
Node: gke-test-default-pool-239f5d02-x1kf/
Node: gke-test-default-pool-239f5d02-x1kf/10.128.0.27
Start Time: Tue, 30 Aug 2016 17:58:56 -0700
Labels: <none>
Annotations: container.apparmor.security.beta.kubernetes.io/hello=localhost/k8s-apparmor-example-allow-write
Status: Pending
Reason: AppArmor
Message: Pod Cannot enforce AppArmor: profile "k8s-apparmor-example-allow-write" is not loaded
IP:
Controllers: <none>
Containers:
hello:
Container ID:
Image: busybox
Image ID:
Port:
Command:
sh
-c
echo 'Hello AppArmor!' && sleep 1h
State: Waiting
Reason: Blocked
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-dnz7v (ro)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
default-token-dnz7v:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-dnz7v
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: <none>
...
Events:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
23s 23s 1 {default-scheduler } Normal Scheduled Successfully assigned hello-apparmor-2 to e2e-test-stclair-node-pool-t1f5
23s 23s 1 {kubelet e2e-test-stclair-node-pool-t1f5} Warning AppArmor Cannot enforce AppArmor: profile "k8s-apparmor-example-allow-write" is not loaded
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 10s default-scheduler Successfully assigned default/hello-apparmor to gke-test-default-pool-239f5d02-x1kf
Normal Pulled 8s kubelet Successfully pulled image "busybox:1.28" in 370.157088ms (370.172701ms including waiting)
Normal Pulling 7s (x2 over 9s) kubelet Pulling image "busybox:1.28"
Warning Failed 7s (x2 over 8s) kubelet Error: failed to get container spec opts: failed to generate apparmor spec opts: apparmor profile not found k8s-apparmor-example-allow-write
Normal Pulled 7s kubelet Successfully pulled image "busybox:1.28" in 90.980331ms (91.005869ms including waiting)
```

<!--
Note the pod status is Pending, with a helpful error message: `Pod Cannot enforce AppArmor: profile
"k8s-apparmor-example-allow-write" is not loaded`. An event was also recorded with the same message.
An Event provides the error message with the reason, the specific wording is runtime-dependent:
-->
注意 Pod 呈现 Pending 状态,并且显示一条有用的错误信息:
`Pod Cannot enforce AppArmor: profile "k8s-apparmor-example-allow-write" is not loaded`。
还用相同的消息记录了一个事件。
事件提供错误消息及其原因,具体措辞与运行时相关:

```
Warning Failed 7s (x2 over 8s) kubelet Error: failed to get container spec opts: failed to generate apparmor spec opts: apparmor profile not found
```
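如果集群中事件较多,也可以按对象名称过滤(示意性命令):

```shell
# 只查看与 hello-apparmor-2 这个 Pod 相关的事件
kubectl get events --field-selector involvedObject.name=hello-apparmor-2
```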
<!--
## Administration
@@ -473,74 +379,31 @@ Note the pod status is Pending, with a helpful error message: `Pod Cannot enforc

## 管理 {#administration}

<!--
### Setting up nodes with profiles
### Setting up Nodes with profiles
-->
### 使用配置文件设置节点 {#setting-up-nodes-with-profiles}

<!--
Kubernetes does not currently provide any native mechanisms for loading AppArmor profiles onto
nodes. There are lots of ways to set up the profiles though, such as:
Kubernetes does not currently provide any built-in mechanisms for loading AppArmor profiles onto
Nodes. Profiles can be loaded through custom infrastructure or tools like the
[Kubernetes Security Profiles Operator](https://github.com/kubernetes-sigs/security-profiles-operator).
-->
Kubernetes 目前不提供任何本地机制来将 AppArmor 配置文件加载到节点上。
有很多方法可以设置配置文件,例如:
可以通过自定义基础设施或工具(例如 [Kubernetes Security Profiles Operator](https://github.com/kubernetes-sigs/security-profiles-operator))
加载配置文件。

<!--
* Through a [DaemonSet](/docs/concepts/workloads/controllers/daemonset/) that runs a Pod on each node to
ensure the correct profiles are loaded. An example implementation can be found
[here](https://git.k8s.io/kubernetes/test/images/apparmor-loader).
* At node initialization time, using your node initialization scripts (e.g. Salt, Ansible, etc.) or
image.
* By copying the profiles to each node and loading them through SSH, as demonstrated in the
[Example](#example).
-->
* 通过在每个节点上运行 Pod 的
[DaemonSet](/zh-cn/docs/concepts/workloads/controllers/daemonset/) 来确保加载了正确的配置文件。
可以在[这里](https://git.k8s.io/kubernetes/test/images/apparmor-loader)找到实现示例。
* 在节点初始化时,使用节点初始化脚本(例如 Salt、Ansible 等)或镜像。
* 通过将配置文件复制到每个节点并通过 SSH 加载它们,如[示例](#example)。

<!--
The scheduler is not aware of which profiles are loaded onto which node, so the full set of profiles
must be loaded onto every node. An alternative approach is to add a node label for each profile (or
class of profiles) on the node, and use a
The scheduler is not aware of which profiles are loaded onto which Node, so the full set of profiles
must be loaded onto every Node. An alternative approach is to add a Node label for each profile (or
class of profiles) on the Node, and use a
[node selector](/docs/concepts/scheduling-eviction/assign-pod-node/) to ensure the Pod is run on a
node with the required profile.
Node with the required profile.
-->
调度程序不知道哪些配置文件加载到哪个节点上,因此必须将全套配置文件加载到每个节点上。
另一种方法是为节点上的每个配置文件(或配置文件类)添加节点标签,
并使用[节点选择器](/zh-cn/docs/concepts/scheduling-eviction/assign-pod-node/)确保
Pod 在具有所需配置文件的节点上运行。
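下面是“节点标签 + 节点选择器”这种方式的一个简化示意(标签键 `apparmor-profile-deny-write`
与取值只是假设的示例,并非 Kubernetes 的约定;`<node-name>` 为占位符):

```shell
# 为已加载 k8s-apparmor-example-deny-write 配置文件的节点打上标签
kubectl label node <node-name> apparmor-profile-deny-write=loaded

# 创建 Pod 时通过 nodeSelector 约束其只调度到带有该标签的节点
kubectl create -f /dev/stdin <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: hello-apparmor-selector
  annotations:
    container.apparmor.security.beta.kubernetes.io/hello: localhost/k8s-apparmor-example-deny-write
spec:
  nodeSelector:
    apparmor-profile-deny-write: loaded
  containers:
  - name: hello
    image: busybox:1.28
    command: [ "sh", "-c", "echo 'Hello AppArmor!' && sleep 1h" ]
EOF
```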
<!--
### Disabling AppArmor
-->
### 禁用 AppArmor {#disabling-apparmor}

<!--
If you do not want AppArmor to be available on your cluster, it can be disabled by a command-line flag:
-->
如果你不希望 AppArmor 在集群上可用,可以通过命令行标志禁用它:

```
--feature-gates=AppArmor=false
```

<!--
When disabled, any Pod that includes an AppArmor profile will fail validation with a "Forbidden"
error.
-->
禁用时,任何包含 AppArmor 配置文件的 Pod 都将导致验证失败,且返回 “Forbidden” 错误。

{{<note>}}
<!--
Even if the Kubernetes feature is disabled, runtimes may still enforce the default profile. The
option to disable the AppArmor feature will be removed when AppArmor graduates to general
availability (GA).
-->
即使此 Kubernetes 特性被禁用,运行时仍可能强制执行默认配置文件。
当 AppArmor 升级为正式版 (GA) 时,禁用 AppArmor 功能的选项将被删除。
{{</note>}}

<!--
## Authoring Profiles
-->
@@ -609,8 +472,7 @@ Specifying the profile a container will run with:

<!--
- `runtime/default`: Refers to the default runtime profile.
- Equivalent to not specifying a profile, except it still
requires AppArmor to be enabled.
- Equivalent to not specifying a profile, except it still requires AppArmor to be enabled.
- In practice, many container runtimes use the same OCI default profile, defined here:
https://github.com/containers/common/blob/main/pkg/apparmor/apparmor_linux_template.go
- `localhost/<profile_name>`: Refers to a profile loaded on the node (localhost) by name.