removed owners and sig-security-external-audit (#6176)

Signed-off-by: Ayushman <ayushvidushi01@gmail.com>

removed security-tooling/owners security-audit-2019 security-audit-2021

Signed-off-by: Ayushman <ayushvidushi01@gmail.com>
Ayushman 2021-11-18 22:07:02 +05:30 committed by GitHub
parent 97dd845870
commit 282eb7e767
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
28 changed files with 0 additions and 2003 deletions


@@ -1,118 +0,0 @@
# Request for Proposal
## Kubernetes Third Party Security Audit
The Kubernetes Third-Party Audit Working Group (working group, henceforth) is soliciting proposals from select Information Security vendors for a comprehensive security audit of the Kubernetes Project.
### Eligible Vendors
Only the following vendors will be permitted to submit proposals:
- NCC Group
- Trail of Bits
- Cure53
- Bishop Fox
- Insomnia
- Atredis Partners
If your proposal includes sub-contractors, please include relevant details from their firm such as CVs, past works, etc.
### RFP Process
This RFP will be open between 2018/10/29 and 2018/11/30.
The working group will answer questions for the first two weeks of this period.
Questions can be submitted [here](https://docs.google.com/forms/d/e/1FAIpQLSd5rXSDYQ0KMjzSEGxv0pkGxInkdW1NEQHvUJpxgX3y0o9IEw/viewform?usp=sf_link). All questions will be answered publicly in this document.
Proposals must include CVs, resumes, and/or example reports from staff that will be working on the project.
- 2018/10/29: RFP Open, Question period open
- 2018/11/12: Question period closes
- 2018/11/30: RFP Closes
- 2018/12/11: The working group will announce vendor selection
## Audit Scope
The scope of the audit is the most recent release (1.12) of the core [Kubernetes project](https://github.com/kubernetes/kubernetes).
- Findings within the [bug bounty program](https://github.com/kubernetes/community/blob/master/contributors/guide/bug-bounty.md) scope are in scope.
We want the focus of the audit to be on bugs in Kubernetes. While Kubernetes relies upon container runtimes such as Docker and CRI-O, we aren't looking for (for example) container escapes that rely upon bugs in the container runtime (unless the escape is made possible by a defect in the way that Kubernetes sets up the container).
### Focus Areas
The Kubernetes Third-Party Audit Working Group is specifically interested in the following areas. Proposals should indicate their level of expertise in these fields as it relates to Kubernetes.
- Networking
- Cryptography
- Authentication & Authorization (including Role Based Access Controls)
- Secrets management
- Multi-tenancy isolation: Specifically soft (non-hostile co-tenants)
### Out of Scope
Findings specifically excluded from the [bug bounty program](https://github.com/kubernetes/community/blob/master/contributors/guide/bug-bounty.md) scope are out of scope.
## Methodology
We are allowing 8 weeks for the audit; the start date can be negotiated after vendor selection. We recognize that November and December can be very high-utilization periods for security vendors.
The audit should not be treated as a penetration test, or red team exercise. It should be comprehensive and not end with the first successful exploit or critical vulnerability.
The vendor should perform both source code analysis as well as live evaluation of Kubernetes.
The vendor should document the Kubernetes configuration and architecture that the audit was performed against for the creation of an "audited reference architecture" artifact. The working group must approve this configuration before the audit continues.
The working group will establish a 60 minute kick-off meeting to answer any initial questions and explain Kubernetes architecture.
The working group will be available weekly to meet with the selected vendor, and will provide subject matter experts for requested components.
The vendor must report urgent security issues immediately to both the working group and security@kubernetes.io.
## Confidentiality and Embargo
All information gathered and artifacts created as a part of the audit must not be shared outside the vendor or the working group without the explicit consent of the working group.
## Artifacts
The audit should result in the following artifacts, which will be made public after any sensitive security issues are mitigated.
- Findings report, including an executive summary
- Audited reference architecture specification. Should take the form of a summary and associated configuration yaml files.
- Formal threat model
- Any proof of concept exploits that we can use to investigate and fix defects
- Retrospective white paper(s) on important security considerations in Kubernetes
*This artifact can be provided up to 3 weeks after the deadline for the others.*
- E.g. [NCC Group: Understanding hardening linux containers](https://www.nccgroup.trust/globalassets/our-research/us/whitepapers/2016/april/ncc_group_understanding_hardening_linux_containers-1-1.pdf)
- E.g. [NCC Group: Abusing Privileged and Unprivileged Linux
Containers](https://www.nccgroup.trust/globalassets/our-research/us/whitepapers/2016/june/container_whitepaper.pdf)
## Q & A
| # | Question | Answer |
|---|----------|--------|
| 1 | The RFP says that any area included in the out of scope section of the k8s bug bounty programme is not in-scope of this review. There are some areas which are out of scope of the bug bounty which would appear to be relatively core to k8s, for example Kubernetes on Windows. Can we have 100% confirmation that these areas are out of scope? | Yes. If you encounter a vulnerability in Kubernetes' use of an out-of-scope element, like etcd or the container network interface (to Calico, Weave, Flannel, ...), that is in scope. If you encounter a direct vulnerability in a third-party component during the audit you should follow the embargo section of the RFP. |
| 2 | On the subject of target Distribution and configuration option review:<br> The RFP mentions an "audited reference architecture".<br> - Is the expectation that this will be based on a specific k8s install mechanism (e.g. kubeadm)? <br> - On a related note, is it expected that High Availability configurations (e.g. multiple control plane nodes) should be included?<br> - The assessment mentions Networking as a focus area. Should a specific set of network plugins (e.g. weave, calico, flannel) be considered as in-scope, or are all areas outside of the core Kubernetes code for this out of scope?<br> - Where features of Kubernetes have been deprecated but not removed in 1.12, should they be considered in-scope or not? | 1. No, we are interested in the final topology -- the installation mechanism, as well as its default configuration, is tangential. The purpose is to contextualise the findings.<br>2. High-availability configurations should be included. To bound the level of effort, the vendor could create one single-master configuration and one high-availability configuration.<br>3. All plugins are out of scope per the bug bounty scope -- for clarification regarding the interface to plug-ins, please see the previous question.<br> 4. Deprecated features should be considered out of scope. |
| 3 | On the subject of dependencies:<br>- Will any of the project dependencies be in scope for the assessment? (e.g. https://github.com/kubernetes/kubernetes/blob/v1.14.3/Godeps/Godeps.json) | Project dependencies are in scope in the sense that they are **allowed** to be tested, but they should not be considered a **required** testing area. We would be interested in cases where Kubernetes is exploitable due to a vulnerability in a project dependency. Vulnerabilities found in third-party dependencies should follow the embargo section of the RFP.|
| 4 | Is the 8 weeks mentioned in the scope intended to be a limit on effort applied to the review, or just the timeframe for the review to occur in? | This is only a restriction on time frame, but is not intended to convey level of effort. |
| 5| Will the report be released in its entirety after the issues have been remediated? | Yes. |
| 6| What goals must be met to make this project a success? | We have several goals in mind:<br>1) Document a full and complete understanding of Kubernetes dataflow.<br>2) Achieve a reasonable understanding of potential vulnerability vectors for subsequent research.<br>3) Creation of artifacts that help third parties make a practical assessment of Kubernetes security position.<br>4) Eliminate design and architecture-level vulnerabilities.<br>5) Discover the most significant vulnerabilities, in both number and severity. |
| 7 | Would you be open to two firms partnering on the proposal? | Yes, however both firms should collaborate on the proposal and individual contributors should all provide C.V.s or past works.|
| 8| From a deliverables perspective, will the final report (aside from the whitepaper) be made public? | Yes. |
| 9| The bug bounty document states the following is in scope, "Community maintained stable cloud platform plugins", however will the scope of the assessment include review of the cloud providers' k8s implementation? Reference of cloud providers: https://kubernetes.io/docs/concepts/cluster-administration/cloud-providers/ | Cloud provider-specific issues are excluded from the scope. |
| 10| The bug bounty doc lists supply chain attacks as in scope and also says, "excluding social engineering attacks against maintainers". We can assume phishing these individuals is out of scope, but does the exclusion of social engineering against maintainers include all attacks involving individuals? For example, if we were to discover that one of these developers accidentally committed their SSH keys to a git repo unassociated with k8s and we could use these keys to gain access to the k8s project. Is that in scope? | Attacks against individual developers, such as the example provided, are out of scope for this engagement. |
| 11| While suppression of logs is explicitly in scope, is log injection also in scope? | Log injection is in scope for the purposes of this audit.|
| 12| Are all the various networking implementations in scope for the assessment? Ref: https://kubernetes.io/docs/concepts/cluster-administration/networking/#how-to-implement-the-kubernetes-networking-model | Please refer to question 1. |
| 13| What does the working group refer to with formal threat model? Would STRIDE be a formal threat model in that sense?| A formal threat model should include a comprehensive dataflow diagram which shows data moving between different trust levels and assesses threats to that data using a system like STRIDE as the data moves between each process/component. Many good examples are present in Threat Modeling: Designing for Security by Adam Shostack. |
| 14| Does Kubernetes uses any GoLang non-standard signing libraries? | An initial investigation has not uncovered any, however with a code base as large as Kubernetes, it is possible. |
| 15| Does Kubernetes implement any cryptographic primitives on its own, i.e. primitives which are not part of the standard libraries? | An initial investigation has not uncovered any, however with a code base as large as Kubernetes, it is possible. |
| 16| Presuming that live testing is part of the project, how does the working group see the "audited reference architecture" being defined? Is there a representative deployment, or a document describing a "default installation" that you foresee the engagement team using to inform the buildout of a test environment?| The purpose of the reference architecture is to define and document the configuration against which live testing was performed. It should be generated collaboratively with the working group at the beginning of the project. We will want it to represent at least a common configuration, as in practice Kubernetes itself has no default configuration. It should take the form of a document detailing the set-up and configuration steps the vendor took to create their environment, ensuring an easily repeatable reference implementation. |
| 17| The RFP describes "networking and multi-tenancy isolation" as one of the focus areas. <br/><br/>Can you describe for us what these terms mean to you? Can you also help us understand how you define a soft non-hostile co-tenant? Is a _hostile_ co-tenant also in scope?| By networking we mean vulnerabilities related to communication within and to/from the cluster: container to container, pod to pod, pod to service, and external to internal communications as described in [the networking documentation](https://kubernetes.io/docs/concepts/cluster-administration/networking/). <br/><br/>The concept of soft multi-tenancy is that you have a single cluster being shared by applications or groups within the same company or organization, with fewer intended restrictions than a hard multi-tenant platform like a PaaS that hosts multiple distinct and potentially hostile competing customers on a single cluster, which requires stricter security assumptions. These definitions may vary by group and use case, but the idea is that you can have a cluster with multiple groups with their own namespaces, isolated by networking/storage/RBAC roles.|
| 18| In the Artifacts section, you describe a Formal Threat Model as one of the outputs of the engagement. Can you expound on what this means to you? Are there any representative public examples you could point us to?| Please refer to question 13.|


@@ -1,27 +0,0 @@
# Security Audit WG - RFP Decision Process
The Security Audit Working Group was tasked with leading the process of having a third party security audit conducted for the Kubernetes project. Our first steps were to select the vendors to be included in the Request for Proposal (RFP) process and create the RFP document setting out the goals of the audit. We were then responsible for evaluating the submitted proposals in order to select the vendor best suited to complete a security assessment against Kubernetes, a very complex and widely scoped project.
After publishing the initial RFP and distributing it to the eligible vendors, we had a period open for vendors to submit questions to better understand the project's goals, which we made publicly available in the RFP document. While six (6) vendors were invited to participate, we ultimately received four (4) RFP responses, due to one vendor dropping out and two vendors partnering to submit a combined proposal.
The next stage of this project was more difficult: evaluating the responses and determining which vendor to use for the audit. With the list of eligible vendors already limited to a small set of very strong and well-known firms, it came as no surprise to us that they each had extremely compelling proposals that made choosing one over the other very difficult. The working group leads have years of experience on both sides of the table: writing proposals and conducting audits, as well as working with these vendors and their teams to assess companies we've worked at. To help us combine objective evaluations with our individual past experiences and knowledge of each of the vendors' work and relevant experience (conference talks, white papers, published research and reports), we came up with a set of criteria that each of us used to rank the proposals on a scale of 1 to 5:
* Personnel fit and talent
* Relevant understanding and experience (orchestration systems, containers, hardening, etc.)
* The individual work products requested in the RFP:
- Threat Model
- Reference architecture
- White paper
- Assessment and report
While budget constraints became a part of the final selection, we wanted to leave cost out of the process as much as possible and focus on ensuring the community received the best possible audit. Based on these criteria, the scoring overall was extremely close, with the total scores all within a few points of each other.
| Vendor | Total Score |
|--------|-------------|
| Vendor A | 149 |
| Vendor B | 149 |
| Vendor C | 144 |
| Vendor D | 135 |
After narrowing it down to our top two choices and some further discussions with those vendors, we decided to select the partnership of Atredis and Trail of Bits to complete this audit. We felt very strongly that the combination of these two firms, both composed of very senior and well known staff in the information security industry, would provide the best possible results. We look forward to working with them to kick off the actual audit process soon and for other Kubernetes contributors from the various SIGs to help partner with the working group on this assessment.


@@ -1,47 +0,0 @@
digraph K8S {
subgraph cluster_apiserverinternal {
node [style=filled];
color=green;
etcd[label="etcd"];
label = "API Server Data Layer";
}
subgraph cluster_apiserver {
node [style=filled];
color=blue;
kubeapiserver[label="kube-apiserver"];
kubeapiserver->etcd[label="HTTPS"]
label = "API Server";
}
subgraph cluster_mastercomponents {
node [style=filled];
label = "Master Control Plane Components";
scheduler[label="Scheduler"];
controllers[label="Controllers"]
scheduler->kubeapiserver[label="Callback/HTTPS"];
controllers->kubeapiserver[label="Callback/HTTPS"];
color=black;
}
subgraph cluster_worker {
label="Worker"
color="blue"
kubelet->kubeapiserver[label="authenticated HTTPS"]
kubeproxy[label="kube-proxy"]
iptables->kubeproxy->iptables
pods[label="pods with various containers"]
pods->kubeproxy->pods
}
subgraph cluster_internet {
label="Internet"
authuser[label="Authorized User via kubectl"]
generaluser[label="General User"]
authuser->kubeapiserver[label="Authenticated HTTPS"]
generaluser->pods[label="application-specific connection protocol"]
}
kubeapiserver->kubelet[label="HTTPS"]
kubeapiserver->pods[label="HTTP",color=red]
}

Binary file not shown (image, 100 KiB)


@@ -1,3 +0,0 @@
python3 tm.py --dfd > updated-dataflow.dot
dot -Tpng < updated-dataflow.dot > updated-dataflow.png
open updated-dataflow.png


@@ -1,106 +0,0 @@
#!/usr/bin/env python3
from pytm.pytm import TM, Server, Datastore, Dataflow, Boundary, Actor, Lambda, Process
tm = TM("Kubernetes Threat Model")
tm.description = "a deep-dive threat model of Kubernetes"
# Boundaries
inet = Boundary("Internet")
mcdata = Boundary("Master Control Data")
apisrv = Boundary("API Server")
mcomps = Boundary("Master Control Components")
worker = Boundary("Worker")
contain = Boundary("Container")
# Actors
miu = Actor("Malicious Internal User")
ia = Actor("Internal Attacker")
ea = Actor("External Actor")
admin = Actor("Administrator")
dev = Actor("Developer")
eu = Actor("End User")
# Server & OS Components
etcd = Datastore("N-ary etcd servers")
apiserver = Server("kube-apiserver")
kubelet = Server("kubelet")
kubeproxy = Server("kube-proxy")
scheduler = Server("kube-scheduler")
controllers = Server("CCM/KCM")
pods = Server("Pods")
iptables = Process("iptables")
# Component <> Boundary Relations
etcd.inBoundary = mcdata
mcdata.inBoundary = apisrv
apiserver.inBoundary = apisrv
kubelet.inBoundary = worker
kubeproxy.inBoundary = worker
pods.inBoundary = contain
scheduler.inBoundary = mcomps
controllers.inBoundary = mcomps
pods.inBoundary = contain
iptables.inBoundary = worker
miu.inBoundary = apisrv
ia.inBoundary = contain
ea.inBoundary = inet
admin.inBoundary = apisrv
dev.inBoundary = inet
eu.inBoundary = inet
# Dataflows
apiserver2etcd = Dataflow(apiserver, etcd, "All kube-apiserver data")
apiserver2etcd.isEncrypted = True
apiserver2etcd.protocol = "HTTPS"
apiserver2kubelet = Dataflow(apiserver, kubelet, "kubelet Health, Status, &amp;c.")
apiserver2kubelet.isEncrypted = False
apiserver2kubelet.protocol = "HTTP"
apiserver2kubeproxy = Dataflow(apiserver, kubeproxy, "kube-proxy Health, Status, &amp;c.")
apiserver2kubeproxy.isEncrypted = False
apiserver2kubeproxy.protocol = "HTTP"
apiserver2scheduler = Dataflow(apiserver, scheduler, "kube-scheduler Health, Status, &amp;c.")
apiserver2scheduler.isEncrypted = False
apiserver2scheduler.protocol = "HTTP"
apiserver2controllers = Dataflow(apiserver, controllers, "{kube, cloud}-controller-manager Health, Status, &amp;c.")
apiserver2controllers.isEncrypted = False
apiserver2controllers.protocol = "HTTP"
kubelet2apiserver = Dataflow(kubelet, apiserver, "HTTP watch for resources on kube-apiserver")
kubelet2apiserver.isEncrypted = True
kubelet2apiserver.protocol = "HTTPS"
kubeproxy2apiserver = Dataflow(kubeproxy, apiserver, "HTTP watch for resources on kube-apiserver")
kubeproxy2apiserver.isEncrypted = True
kubeproxy2apiserver.protocol = "HTTPS"
controllers2apiserver = Dataflow(controllers, apiserver, "HTTP watch for resources on kube-apiserver")
controllers2apiserver.isEncrypted = True
controllers2apiserver.protocol = "HTTPS"
scheduler2apiserver = Dataflow(scheduler, apiserver, "HTTP watch for resources on kube-apiserver")
scheduler2apiserver.isEncrypted = True
scheduler2apiserver.protocol = "HTTPS"
kubelet2iptables = Dataflow(kubelet, iptables, "kubenet update of iptables (... ipvs, &amp;c) to setup Host-level ports")
kubelet2iptables.protocol = "IPC"
kubeproxy2iptables = Dataflow(kubeproxy, iptables, "kube-proxy update of iptables (... ipvs, &amp;c) to setup all pod networking")
kubeproxy2iptables.protocol = "IPC"
kubelet2pods = Dataflow(kubelet, pods, "kubelet to pod/CRI runtime, to spin up pods within a host")
kubelet2pods.protocol = "IPC"
eu2pods = Dataflow(eu, pods, "End-user access of Kubernetes-hosted applications")
ea2pods = Dataflow(ea, pods, "External Attacker attempting to compromise a trust boundary")
ia2cnts = Dataflow(ia, pods, "Internal Attacker with access to a compromised or malicious pod")
tm.process()


@@ -1,217 +0,0 @@
digraph tm {
graph [
fontname = Arial;
fontsize = 14;
]
node [
fontname = Arial;
fontsize = 14;
rankdir = lr;
]
edge [
shape = none;
fontname = Arial;
fontsize = 12;
]
labelloc = "t";
fontsize = 20;
nodesep = 1;
subgraph cluster_bfaefefcfbeeafeefac {
graph [
fontsize = 10;
fontcolor = firebrick2;
style = dashed;
color = firebrick2;
label = <<i>Internet</i>>;
]
bfbeacdafaceebdccfdffcdfcedfec [
shape = square;
label = <<table border="0" cellborder="0" cellpadding="2"><tr><td><b>External Actor</b></td></tr></table>>;
]
abaadcacbbafdffbcffffbeedef [
shape = square;
label = <<table border="0" cellborder="0" cellpadding="2"><tr><td><b>Developer</b></td></tr></table>>;
]
adafdaeaedeedcafe [
shape = square;
label = <<table border="0" cellborder="0" cellpadding="2"><tr><td><b>End User</b></td></tr></table>>;
]
}
subgraph cluster_bbfdadaacbdaedcebfec {
graph [
fontsize = 10;
fontcolor = firebrick2;
style = dashed;
color = firebrick2;
label = <<i>Master Control Data</i>>;
]
bfffcaeeeeedccabfaaeff [
shape = none;
color = black;
label = <<table sides="TB" cellborder="0" cellpadding="2"><tr><td><font color="black"><b>N-ary etcd servers</b></font></td></tr></table>>;
]
}
subgraph cluster_afeffbbfdbeeefcabddacdba {
graph [
fontsize = 10;
fontcolor = firebrick2;
style = dashed;
color = firebrick2;
label = <<i>API Server</i>>;
]
bdfbefabdbefeacdfcabaac [
shape = square;
label = <<table border="0" cellborder="0" cellpadding="2"><tr><td><b>Malicious Internal User</b></td></tr></table>>;
]
fabeebdadbcdffdcdec [
shape = square;
label = <<table border="0" cellborder="0" cellpadding="2"><tr><td><b>Administrator</b></td></tr></table>>;
]
eadddadcfbabebaed [
shape = circle
color = black
label = <<table border="0" cellborder="0" cellpadding="2"><tr><td><b>kube-apiserver</b></td></tr></table>>;
]
}
subgraph cluster_cebcbebffccbfedcaffbb {
graph [
fontsize = 10;
fontcolor = firebrick2;
style = dashed;
color = firebrick2;
label = <<i>Master Control Components</i>>;
]
ffceacecdbcacdddddffbfa [
shape = circle
color = black
label = <<table border="0" cellborder="0" cellpadding="2"><tr><td><b>kube-scheduler</b></td></tr></table>>;
]
adffdceecfcfbcfdaefca [
shape = circle
color = black
label = <<table border="0" cellborder="0" cellpadding="2"><tr><td><b>CCM/KCM</b></td></tr></table>>;
]
}
subgraph cluster_baaffdafbdceebaaafaefeea {
graph [
fontsize = 10;
fontcolor = firebrick2;
style = dashed;
color = firebrick2;
label = <<i>Worker</i>>;
]
dbddcfaeaacebaecba [
shape = circle
color = black
label = <<table border="0" cellborder="0" cellpadding="2"><tr><td><b>kubelet</b></td></tr></table>>;
]
ddcaffdfdebdaeff [
shape = circle
color = black
label = <<table border="0" cellborder="0" cellpadding="2"><tr><td><b>kube-proxy</b></td></tr></table>>;
]
bcdcebabbdaadffeaeddcce [
shape = circle;
color = black;
label = <<table border="0" cellborder="0" cellpadding="2"><tr><td><font color="black"><b>iptables</b></font></td></tr></table>>;
]
}
subgraph cluster_fdcecbcfbeadaccab {
graph [
fontsize = 10;
fontcolor = firebrick2;
style = dashed;
color = firebrick2;
label = <<i>Container</i>>;
]
bdfadfbeeaedceab [
shape = square;
label = <<table border="0" cellborder="0" cellpadding="2"><tr><td><b>Internal Attacker</b></td></tr></table>>;
]
eefbffbeaaeecaceaaabe [
shape = circle
color = black
label = <<table border="0" cellborder="0" cellpadding="2"><tr><td><b>Pods</b></td></tr></table>>;
]
}
eadddadcfbabebaed -> bfffcaeeeeedccabfaaeff [
color = black;
label = <<table border="0" cellborder="0" cellpadding="2"><tr><td><font color ="black"><b>All kube-apiserver data</b></font></td></tr></table>>;
]
eadddadcfbabebaed -> dbddcfaeaacebaecba [
color = black;
label = <<table border="0" cellborder="0" cellpadding="2"><tr><td><font color ="black"><b>kubelet Health, Status, &amp;c.</b></font></td></tr></table>>;
]
eadddadcfbabebaed -> ddcaffdfdebdaeff [
color = black;
label = <<table border="0" cellborder="0" cellpadding="2"><tr><td><font color ="black"><b>kube-proxy Health, Status, &amp;c.</b></font></td></tr></table>>;
]
eadddadcfbabebaed -> ffceacecdbcacdddddffbfa [
color = black;
label = <<table border="0" cellborder="0" cellpadding="2"><tr><td><font color ="black"><b>kube-scheduler Health, Status, &amp;c.</b></font></td></tr></table>>;
]
eadddadcfbabebaed -> adffdceecfcfbcfdaefca [
color = black;
label = <<table border="0" cellborder="0" cellpadding="2"><tr><td><font color ="black"><b>{kube, cloud}-controller-manager Health, Status, &amp;c.</b></font></td></tr></table>>;
]
dbddcfaeaacebaecba -> eadddadcfbabebaed [
color = black;
label = <<table border="0" cellborder="0" cellpadding="2"><tr><td><font color ="black"><b>HTTP watch for resources on kube-apiserver</b></font></td></tr></table>>;
]
ddcaffdfdebdaeff -> eadddadcfbabebaed [
color = black;
label = <<table border="0" cellborder="0" cellpadding="2"><tr><td><font color ="black"><b>HTTP watch for resources on kube-apiserver</b></font></td></tr></table>>;
]
adffdceecfcfbcfdaefca -> eadddadcfbabebaed [
color = black;
label = <<table border="0" cellborder="0" cellpadding="2"><tr><td><font color ="black"><b>HTTP watch for resources on kube-apiserver</b></font></td></tr></table>>;
]
ffceacecdbcacdddddffbfa -> eadddadcfbabebaed [
color = black;
label = <<table border="0" cellborder="0" cellpadding="2"><tr><td><font color ="black"><b>HTTP watch for resources on kube-apiserver</b></font></td></tr></table>>;
]
dbddcfaeaacebaecba -> bcdcebabbdaadffeaeddcce [
color = black;
label = <<table border="0" cellborder="0" cellpadding="2"><tr><td><font color ="black"><b>kubenet update of iptables (... ipvs, &amp;c) to setup Host-level ports</b></font></td></tr></table>>;
]
ddcaffdfdebdaeff -> bcdcebabbdaadffeaeddcce [
color = black;
label = <<table border="0" cellborder="0" cellpadding="2"><tr><td><font color ="black"><b>kube-proxy update of iptables (... ipvs, &amp;c) to setup all pod networking</b></font></td></tr></table>>;
]
dbddcfaeaacebaecba -> eefbffbeaaeecaceaaabe [
color = black;
label = <<table border="0" cellborder="0" cellpadding="2"><tr><td><font color ="black"><b>kubelet to pod/CRI runtime, to spin up pods within a host</b></font></td></tr></table>>;
]
adafdaeaedeedcafe -> eefbffbeaaeecaceaaabe [
color = black;
label = <<table border="0" cellborder="0" cellpadding="2"><tr><td><font color ="black"><b>End-user access of Kubernetes-hosted applications</b></font></td></tr></table>>;
]
bfbeacdafaceebdccfdffcdfcedfec -> eefbffbeaaeecaceaaabe [
color = black;
label = <<table border="0" cellborder="0" cellpadding="2"><tr><td><font color ="black"><b>External Attacker attempting to compromise a trust boundary</b></font></td></tr></table>>;
]
bdfadfbeeaedceab -> eefbffbeaaeecaceaaabe [
color = black;
label = <<table border="0" cellborder="0" cellpadding="2"><tr><td><font color ="black"><b>Internal Attacker with access to a compromised or malicious pod</b></font></td></tr></table>>;
]
}

Binary file not shown (image, 314 KiB)


@@ -1,141 +0,0 @@
# Overview
- Component: Container Runtime
- Owner(s): [sig-node](https://github.com/kubernetes/community/blob/master/sig-node/README.md)
- SIG/WG(s) at meeting:
- Service Data Classification: High
- Highest Risk Impact:
# Service Notes
This portion should walk through the component and discuss its connections, their relevant controls, and generally lay out how the component serves its relevant function. For example,
a component that accepts an HTTP connection may have relevant questions about channel security (TLS and Cryptography), authentication, authorization, non-repudiation/auditing,
and logging. The questions aren't the *only* drivers of what may be spoken about; they are meant to guide what we discuss and keep things on task for the duration
of a meeting/call.
## How does the service work?
- Container Runtimes expose an IPC endpoint such as a file system socket
- kubelet retrieves pods to be executed from the kube-apiserver
- The Container Runtime Interface then executes the necessary commands/requests from the actual container system (e.g. docker) to run the pod
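The IPC endpoint above is typically a Unix domain socket on the node, so Linux file permissions are the effective access control for the runtime. A minimal sketch of inspecting and connecting to such a socket (the path is an assumption for illustration, not the audited configuration):

```python
#!/usr/bin/env python3
"""Sketch: inspect a container runtime's IPC endpoint (a Unix domain socket)."""
import os
import socket
import stat

CRI_SOCKET = "/run/containerd/containerd.sock"  # hypothetical path; adjust per runtime

st = os.stat(CRI_SOCKET)
print("is a socket:", stat.S_ISSOCK(st.st_mode))
print("mode bits  :", oct(stat.S_IMODE(st.st_mode)))
print("owner uid  :", st.st_uid)

# Anyone who can open this socket can speak to the runtime over gRPC,
# so the permissions printed above are the real trust boundary.
with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
    s.connect(CRI_SOCKET)
    print("connected as uid", os.getuid())
```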
## Are there any subcomponents or shared boundaries?
Yes
- The Container Runtime technically interfaces with kubelet, and runs on the same host
- However, the Container Runtime is logically a separate Trust Zone within the node
## What communications protocols does it use?
Various, depends on the IPC mechanism required by the Container Runtime
## Where does it store data?
Most data should be provided by kubelet or the CRI in running the container
## What is the most sensitive data it stores?
N/A
## How is that data stored?
N/A
# Meeting Notes
# Data Dictionary
| Name | Classification/Sensitivity | Comments |
| :--: | :--: | :--: |
| Data | Goes | Here |
# Control Families
These are the areas of controls that we're interested in based on what the audit working group selected.
When we say "controls," we mean a logical section of an application or system that handles a security requirement. Per CNSSI:
> The management, operational, and technical controls (i.e., safeguards or countermeasures) prescribed for an information system to protect the confidentiality, integrity, and availability of the system and its information.
For example, a system may have authorization requirements that say:
- users must be registered with a central authority
- all requests must be verified to be owned by the requesting user
- each account must have attributes associated with it to uniquely identify the user
and so on.
For this assessment, we're looking at six basic control families:
- Networking
- Cryptography
- Secrets Management
- Authentication
- Authorization (Access Control)
- Multi-tenancy Isolation
Obviously we can skip control families as "not applicable" in the event that the component does not require it. For example,
something with the sole purpose of interacting with the local file system may have no meaningful Networking component; this
isn't a weakness, it's simply "not applicable."
For each control family we want to ask:
- What does the component do for this control?
- What sorts of data passes through that control?
- for example, a component may have sensitive data (Secrets Management), but that data never leaves the component's storage via Networking
- What can an attacker do with access to this component?
- What's the simplest attack against it?
- Are there mitigations that we recommend (e.g. "Always use an interstitial firewall")?
- What happens if the component stops working (via DoS or other means)?
- Have there been similar vulnerabilities in the past? What were the mitigations?
# Threat Scenarios
- An External Attacker without access to the client application
- An External Attacker with valid access to the client application
- An Internal Attacker with access to cluster
- A Malicious Internal User
## Networking
- CRI Runs an HTTP server
- port forwarding, exec, attach
- !FINDING TLS by default, but not mutual TLS, and self-signed
- kubelet -> exec request to CRI over gRPC
- Returns URL with single use Token
- gRPC is Unix Domain by default
- Kubelet proxies or responds w/ redirect to API server (locally hosted CRI only)
- !FINDING(same HTTP finding for pull as kubectl) CRI actually pulls images, no egress filtering
- image tag is SHA256, CRI checks that
- Not sure how CNI is invoked; it might be exec
- only responds to connections
- CRI uses Standard Go HTTP
## Cryptography
- Nothing beyond TLS
## Secrets Management
- !FINDING credentials for auth'd container repos, passed in via podspec and fetched by kubelet, are passed via CLI
- so anyone with access to the host running the container can see those secrets
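To illustrate why CLI-passed credentials are exposed, the sketch below scans process command lines via `/proc`, which any local user can read on Linux (the flag name searched for is a hypothetical placeholder):

```python
#!/usr/bin/env python3
"""Sketch: command-line arguments are readable by other users via /proc."""
import os

NEEDLE = b"--registry-password"  # hypothetical flag name, for illustration only

for pid in filter(str.isdigit, os.listdir("/proc")):
    try:
        with open(f"/proc/{pid}/cmdline", "rb") as f:
            argv = f.read().split(b"\0")
    except OSError:
        continue  # process exited or is otherwise unreadable
    if any(NEEDLE in arg for arg in argv):
        print(pid, b" ".join(argv).decode(errors="replace"))
```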
## Authentication
- Unix Domain Socket for gRPC, so Linux authN/authZ
- !FINDING 8 character random single use token with 1 minute lifetime (response to line 109)
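A sketch of the single-use, time-limited token scheme noted above; the 8-character length and 1-minute lifetime mirror the finding, while the helper names are illustrative:

```python
#!/usr/bin/env python3
"""Sketch: a single-use exec/attach token with a short lifetime."""
import secrets
import string
import time

ALPHABET = string.ascii_lowercase + string.digits
LIFETIME_SECONDS = 60
_tokens = {}  # token -> expiry (monotonic timestamp)

def issue_token(length=8):
    """Issue a random token that may be redeemed once within LIFETIME_SECONDS."""
    token = "".join(secrets.choice(ALPHABET) for _ in range(length))
    _tokens[token] = time.monotonic() + LIFETIME_SECONDS
    return token

def redeem_token(token):
    """Return True only if the token is known, unexpired, and unused."""
    expiry = _tokens.pop(token, None)  # pop makes the token single-use
    return expiry is not None and time.monotonic() <= expiry

t = issue_token()
assert redeem_token(t)      # first use succeeds
assert not redeem_token(t)  # replay fails
```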
## Authorization
- no authZ
## Multi-tenancy Isolation
- knows nothing about tenants or namespaces
- low-level component, kubelet/api-server is the arbiter
## Summary
# Recommendations


@@ -1,162 +0,0 @@
# Overview
- Component: etcd
- Owner(s): Technically external to Kubernetes itself, but managed by [sig-api-machinery](https://github.com/kubernetes/community/tree/master/sig-api-machinery)
- SIG/WG(s) at meeting:
- Service Data Classification: Critical (on a cluster with an API server, access to etcd is root access to the cluster)
- Highest Risk Impact:
# Service Notes
This portion should walk through the component and discuss its connections, their relevant controls, and generally lay out how the component serves its relevant function. For example,
a component that accepts an HTTP connection may have relevant questions about channel security (TLS and Cryptography), authentication, authorization, non-repudiation/auditing,
and logging. The questions aren't the *only* drivers of what may be spoken about; they are meant to guide what we discuss and keep things on task for the duration
of a meeting/call.
## How does the service work?
- Distributed key-value store
- uses RAFT for consensus
- always deploy an odd number of members (2N + 1 to tolerate N failures) to avoid leader election issues
- five is recommended for production usage
- listens for requests from clients
- clients are simple REST clients that interact via JSON or other mechanisms
- in Kubernetes' case, data is stored under `/registry`
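As a concrete illustration of that layout, the sketch below lists the Kubernetes keyspace under `/registry` by shelling out to `etcdctl`; the endpoint and certificate paths are placeholders:

```python
#!/usr/bin/env python3
"""Sketch: enumerate the Kubernetes key prefix in etcd via etcdctl."""
import os
import subprocess

env = dict(os.environ, ETCDCTL_API="3")
cmd = [
    "etcdctl",
    "--endpoints", "https://127.0.0.1:2379",        # placeholder endpoint
    "--cacert", "/etc/kubernetes/pki/etcd/ca.crt",  # placeholder cert paths
    "--cert", "/etc/kubernetes/pki/etcd/client.crt",
    "--key", "/etc/kubernetes/pki/etcd/client.key",
    "get", "/registry", "--prefix", "--keys-only",
]
# With API-server-grade credentials this enumerates everything the cluster stores.
keys = subprocess.run(cmd, env=env, check=True, capture_output=True, text=True).stdout
print(keys)
```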
## Are there any subcomponents or shared boundaries?
There shouldn't be; documentation specifically states:
- should be in own cluster
- limited to access by the API server(s) only
- should use some sort of authentication (hopefully certificate auth)
## What communications protocols does it use?
- HTTPS (with optional client-side or two-way TLS)
- can also use basic auth
- there's technically gRPC as well
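A sketch of the two-way (mutual) TLS handshake mentioned above, using only the Python standard library; the host name and certificate paths are placeholders:

```python
#!/usr/bin/env python3
"""Sketch: a two-way (mutual) TLS connection to an etcd client port."""
import socket
import ssl

ETCD_HOST, ETCD_PORT = "etcd.example.internal", 2379  # placeholder endpoint

# Verify the server against the etcd CA, and present a client certificate
# so the server can authenticate us in turn (two-way TLS).
ctx = ssl.create_default_context(cafile="/etc/kubernetes/pki/etcd/ca.crt")
ctx.load_cert_chain(certfile="/etc/kubernetes/pki/etcd/client.crt",
                    keyfile="/etc/kubernetes/pki/etcd/client.key")

with socket.create_connection((ETCD_HOST, ETCD_PORT)) as raw:
    with ctx.wrap_socket(raw, server_hostname=ETCD_HOST) as tls:
        print("negotiated", tls.version(), "peer:", tls.getpeercert()["subject"])
```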
## Where does it store data?
- typical database-style:
- data directory
- snapshot directory
- write-ahead log (WAL) directory
- all three may be the same, depends on command line options
- Consensus is then achieved across nodes via RAFT (leader election + log replication via distributed state machine)
## What is the most sensitive data it stores?
- literally holds the keys to the kingdom:
- pod specs
- secrets
- roles/attributes for {R, A}BAC
- literally any data stored in Kubernetes via the kube-apiserver
- [Access to etcd is equivalent to root permission in the cluster](https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/#securing-etcd-clusters)
## How is that data stored?
- Outside the scope of this assessment per se, but not encrypted at rest
- Kubernetes supports this itself with Encryption providers
- the typical process of a WAL + data + snapshot is used
- this is then replicated across the cluster with Raft
# Meeting Notes
- No authorization (from k8s perspective)
- Authentication by local port access in current k8s
- working towards mTLS for all connections
- Raft consensus port, listener port
- backups in etcd (system-level) not encrypted
- metrics aren't encrypted at all either
- multi-tenant: no multi-tenant controls at all
- the kube-apiserver is the arbiter of namespaces
- could add namespaces to the registry, but that is a large amount of work
- no migration plan or test
- watches (like kubelet watching for pod spec changes) would break
- multi-single tenant is best route
- RAFT port may be open by default, even in single etcd configurations
- runs in a container (a static pod under the master's kubelet), but is run as root
- [CONTROL WEAKNESS] CA is passed on command line
- Types of files: WAL, Snapshot, Data file (and maybe backup)
- [FINDING] no checksums on WAL/Snapshot/Data
- [RECOMMENDATION] checksum individual WAL entries and checksum the entire snapshot file (a sketch follows these notes)
- do this because it's fast enough for individual entries, and then the snapshot should never change
- Crypto, really only TLS (std go) and checksums for backups (but not other files, as noted above)
- No auditing, but that's less useful
- kube-apiserver is the arbiter of what things are
- kube-apiserver uses a single connection credential to etcd w/o impersonation, so harder to tell who did what
- major events end up in the app log
- debug mode allows you to see all events when they happen
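A sketch of the checksum recommendation flagged in the notes above: stream a whole snapshot/backup file through a hash, and hash WAL-style records individually (paths and record contents are placeholders):

```python
#!/usr/bin/env python3
"""Sketch: integrity checksums for snapshot files and individual WAL records."""
import hashlib

def file_sha256(path, chunk_size=1 << 20):
    """Stream an entire snapshot/backup file through SHA-256."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def record_sha256(record: bytes) -> str:
    """Checksum a single record; cheap enough to do per WAL entry."""
    return hashlib.sha256(record).hexdigest()

# Example usage with a placeholder record (and, commented out, a placeholder path):
print(record_sha256(b"example WAL entry"))
# print(file_sha256("/var/lib/etcd/member/snap/0000000000000001.snap"))
```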
# Data Dictionary
| Name | Classification/Sensitivity | Comments |
| :--: | :--: | :--: |
| Data | Goes | Here |
# Control Families
These are the areas of controls that we're interested in based on what the audit working group selected.
When we say "controls," we mean a logical section of an application or system that handles a security requirement. Per CNSSI:
> The management, operational, and technical controls (i.e., safeguards or countermeasures) prescribed for an information system to protect the confidentiality, integrity, and availability of the system and its information.
For example, a system may have authorization requirements that say:
- users must be registered with a central authority
- all requests must be verified to be owned by the requesting user
- each account must have attributes associated with it to uniquely identify the user
and so on.
For this assessment, we're looking at six basic control families:
- Networking
- Cryptography
- Secrets Management
- Authentication
- Authorization (Access Control)
- Multi-tenancy Isolation
Obviously we can skip control families as "not applicable" in the event that the component does not require it. For example,
something with the sole purpose of interacting with the local file system may have no meaningful Networking component; this
isn't a weakness, it's simply "not applicable."
For each control family we want to ask:
- What does the component do for this control?
- What sorts of data passes through that control?
- for example, a component may have sensitive data (Secrets Management), but that data never leaves the component's storage via Networking
- What can an attacker do with access to this component?
- What's the simplest attack against it?
- Are there mitigations that we recommend (e.g. "Always use an interstitial firewall")?
- What happens if the component stops working (via DoS or other means)?
- Have there been similar vulnerabilities in the past? What were the mitigations?
# Threat Scenarios
- An External Attacker without access to the client application
- An External Attacker with valid access to the client application
- An Internal Attacker with access to cluster
- A Malicious Internal User
## Networking
## Cryptography
## Secrets Management
## Authentication
- by default Kubernetes doesn't use two-way TLS to the etcd cluster, which would be the most secure (combined with IP restrictions so that stolen creds can't be reused on new infrastructure)
## Authorization
## Multi-tenancy Isolation
## Summary
# Recommendations


@@ -1,18 +0,0 @@
# Meeting notes
- CCM per cloud provider
- same host as kube-apiserver
- caches live in memory
- refresh cache, but can be forced to by request
- Controller manager attempts to use PoLA, but the service account controller has permission to write to its own policies
- Cloud controller (routes, IPAM, &c.) can talk to external resources
- CCM/KCM have no notion of multi-tenant, and there are implications going forward
- Deployments across namespace
- cloud controller has access to cloud credentials (passed in by various means, as we saw in the code)
- CCM is a reference implementation, meant to separate out other companies' code
- So Amazon doesn't need to have Red Hat's code running, &c.
- shared cache across all controllers
- [FINDING] separate out high privileged controllers from lower privileged ones, so there's no confused deputy
- single binary for controller
- if you can trick the service account controller into granting access to things you shouldn't (for example) that would be problematic
- make a "privileged controller manager" which bundles high and low-privileged controllers, and adds another trust boundary


@@ -1,187 +0,0 @@
# Overview
- Component: kube-apiserver
- Owner(s): [sig-api-machinery](https://github.com/kubernetes/community/tree/master/sig-api-machinery)
- SIG/WG(s) at meeting:
- Service Data Classification: Critical (technically, it isn't needed, but most clusters will use it extensively)
- Highest Risk Impact:
# Service Notes
This portion should walk through the component and discuss its connections, their relevant controls, and generally lay out how the component serves its relevant function. For example,
a component that accepts an HTTP connection may have relevant questions about channel security (TLS and Cryptography), authentication, authorization, non-repudiation/auditing,
and logging. The questions aren't the *only* drivers of what may be spoken about; they are meant to guide what we discuss and keep things on task for the duration
of a meeting/call.
## How does the service work?
- RESTful API server
- made up of multiple subcomponents:
- authenticators
- authorizers
- admission controllers
- resource validators
- users issue a request, which is authenticated via one (or more) plugins
- the request is then authorized by one or more authorizers
- it is then potentially modified and validated by an admission controller
- resource validation then validates the object, stores it in etcd, and responds
- clients issue HTTP requests (via TLS, i.e. HTTPS) to "watch" resources and poll for changes from the server; for example:
1. a client updates a pod definition via `kubectl` and a `POST` request
1. the scheduler is "watching" for pod updates via an HTTP watch request to retrieve new pods
1. the scheduler then updates the pod list via a `POST` to the kube-apiserver
1. a node's `kubelet` retrieves a list of pods assigned to it via an HTTP watch request
1. the node's `kubelet` then updates the running pod list on the kube-apiserver
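A sketch of the watch pattern in the example above, streaming pod change events from the kube-apiserver; the server address, token path, and CA path are placeholders, and the `?watch=true` query parameter selects the watch behavior:

```python
#!/usr/bin/env python3
"""Sketch: an HTTP watch on pods, in the spirit of the scheduler/kubelet loops."""
import json
import ssl
import urllib.request

APISERVER = "https://127.0.0.1:6443"                        # placeholder address
TOKEN = open("/path/to/bearer-token").read().strip()        # placeholder token file
CTX = ssl.create_default_context(cafile="/path/to/ca.crt")  # placeholder CA bundle

req = urllib.request.Request(
    f"{APISERVER}/api/v1/pods?watch=true",
    headers={"Authorization": f"Bearer {TOKEN}"},
)

# The server holds the connection open and streams one JSON event per line.
with urllib.request.urlopen(req, context=CTX) as resp:
    for line in resp:
        event = json.loads(line)
        print(event["type"], event["object"]["metadata"]["name"])
```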
## Are there any subcomponents or shared boundaries?
Yes
- Controllers technically run on the kube-apiserver
- the various subcomponents (authenticators, authorizers, and so on) run on the kube-apiserver
Additionally, depending on the configuration, there may be any number of other Master Control Plane components running on the same physical/logical host.
## What communications protocols does it use?
- Communications to the kube-apiserver use HTTPS and various authentication mechanisms
- Communications from the kube-apiserver to etcd use HTTPS, with optional client-side (two-way) TLS
- Communications from the kube-apiserver to kubelets can use HTTP or HTTPS; the latter is without validation by default (find this again in the docs)
## Where does it store data?
- Most data is stored in etcd, mainly under `/registry`
- Some data is obviously stored on the local host, to bootstrap the connection to etcd
## What is the most sensitive data it stores?
- Not much sensitive data is directly stored on the kube-apiserver
- However, all sensitive data within the system (save for in MCP-less setups) is processed and transacted via the kube-apiserver
## How is that data stored?
- On etcd, with the level of protection requested by the user
- looks like encryption [is a command line flag](https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/#configuration-and-determining-whether-encryption-at-rest-is-already-enabled)
# Meeting notes
- web hooks: kube-apiserver can call external resources
- authorization webhook (for when you wish to auth a request without setting up a new authorizer)
- images, other resources
- [FINDING] supports HTTP
- Aggregate API server // Aggregator
- for adding extensibility resources
- a type of CRD, basically
- component status -> reaches out to every component on the cluster
- Network proxy: restrict outbound connections from kube-apiserver (currently no restriction)
- honestly a weakness: no egress filtering
- Business logic in controllers, but kube-apiserver is info
- cloud providers, auth, &c
- sharding by group/version/kind, put all GVKs into the same etcd
- listeners: insecure and secure
- check if the insecure listener is configured by default (a probe sketch follows these notes)
- would be a finding if so
- Not comfortable doing true multi-tenant on k8s
- multi-single tenants (as in, if Pepsi wants to have marketing & accounting that's fine, but not Coke & Pepsi on the same cluster)
- Best way to restrict access to kube-apiserver
- and working on a proxy as noted above
- kube-apiserver is the root CA for *at least two* PKIs:
- two CAs, but not on by default w/o flags (check what happens w/o two CAs...)
- that would be a finding, if you can cross CAs really
- TLS (multiple domains):
- etcd -> kube-apiserver
- the other is webhooks/kubelet/components...
- check secrets: can you tell k8s to encrypt a secret but not provide the flag? what does it do?
- Alt route for secrets: volumes, write to a volume, then mount
- Can't really do much about that, since it's opaque to the kube-apiserver
- ConfigMap: people can stuff secrets into ConfigMaps
- untyped data blob
- cannot encrypt
- recommend moving away from ConfigMaps
- Logging to /var/log
- resource names in logs (namespace, secret name, &c). Can be sensitive
- [FINDING] no logs by default who did what
- need to turn on auditing for that
- look at metrics as well, similar to CRDs
- Data Validation
- can have admission controller, webhooks, &c.
- everything goes through validation
- Session
- upgrade to HTTP/2, channel, or SPDY
- JWT is long lived (we know)
- Certain requests like proxy and logs require upgrade to channels
- look at k8s enhancement ... kube-apiserver dot md
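For the insecure-listener question noted above, a quick probe sketch; the host is an assumption and 8080 is the historical default port for the insecure (plain-HTTP, unauthenticated) listener:

```python
#!/usr/bin/env python3
"""Sketch: probe for an insecure (plain-HTTP, unauthenticated) API listener."""
import urllib.error
import urllib.request

INSECURE_URL = "http://127.0.0.1:8080/version"  # assumed host; historical default port

try:
    with urllib.request.urlopen(INSECURE_URL, timeout=3) as resp:
        # Any successful, unauthenticated response here would be worth flagging.
        print("insecure listener responded:", resp.status, resp.read(200))
except (urllib.error.URLError, OSError) as exc:
    print("no insecure listener reachable:", exc)
```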
# Data Dictionary
| Name | Classification/Sensitivity | Comments |
| :--: | :--: | :--: |
| Data | Goes | Here |
# Control Families
These are the areas of controls that we're interested in based on what the audit working group selected.
When we say "controls," we mean a logical section of an application or system that handles a security requirement. Per CNSSI:
> The management, operational, and technical controls (i.e., safeguards or countermeasures) prescribed for an information system to protect the confidentiality, integrity, and availability of the system and its information.
For example, a system may have authorization requirements that say:
- users must be registered with a central authority
- all requests must be verified to be owned by the requesting user
- each account must have attributes associated with it to uniquely identify the user
and so on.
For this assessment, we're looking at six basic control families:
- Networking
- Cryptography
- Secrets Management
- Authentication
- Authorization (Access Control)
- Multi-tenancy Isolation
Obviously we can skip control families as "not applicable" in the event that the component does not require it. For example,
something with the sole purpose of interacting with the local file system may have no meaningful Networking component; this
isn't a weakness, it's simply "not applicable."
For each control family we want to ask:
- What does the component do for this control?
- What sorts of data passes through that control?
- for example, a component may have sensitive data (Secrets Management), but that data never leaves the component's storage via Networking
- What can an attacker do with access to this component?
- What's the simplest attack against it?
- Are there mitigations that we recommend (e.g. "Always use an interstitial firewall")?
- What happens if the component stops working (via DoS or other means)?
- Have there been similar vulnerabilities in the past? What were the mitigations?
# Threat Scenarios
- An External Attacker without access to the client application
- An External Attacker with valid access to the client application
- An Internal Attacker with access to cluster
- A Malicious Internal User
## Networking
- in the version of k8s we are testing, no outbound limits on external connections
## Cryptography
- Not encrypting secrets in etcd by default
- requiring [a command line flag](https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/#configuration-and-determining-whether-encryption-at-rest-is-already-enabled)
- Supports HTTP for webhooks and component status
## Secrets Management
## Authentication
## Authorization
## Multi-tenancy Isolation
## Summary
# Recommendations


@@ -1,227 +0,0 @@
# Overview
- Component: kube-proxy
- Owner(s): [sig-network](https://github.com/kubernetes/community/tree/master/sig-network)
- SIG/WG(s) at meeting:
- Service Data Classification: Medium
- Highest Risk Impact:
# Service Notes
This portion should walk through the component and discuss its connections, their relevant controls, and generally lay out how the component serves its relevant function. For example,
a component that accepts an HTTP connection may have relevant questions about channel security (TLS and Cryptography), authentication, authorization, non-repudiation/auditing,
and logging. The questions aren't the *only* drivers of what may be spoken about; they are meant to guide what we discuss and keep things on task for the duration
of a meeting/call.
## How does the service work?
- kube-proxy has several main modes of operation:
- as a literal network proxy, handling networking between nodes
- as a bridge between the Container Network Interface (CNI), which handles the actual networking, and the host operating system
- `iptables` mode
- `ipvs` mode
- two Microsoft Windows-specific modes (not covered by the RRA)
- in any of these modes, kube-proxy interfaces with the host's routing table so as to achieve a seamless, flat network across the Kubernetes cluster
## Are there any subcomponents or shared boundaries?
Yes.
- Similar to kubelet, kube-proxy runs on the node, with an implicit trust boundary between Worker components and Container components (i.e. pods)
## What communications protocols does it use?
- Direct IPC to `iptables` or `ipvs`
- HTTPS to the kube-apiserver
- HTTP Healthz port (which is a literal counter plus a `200 Ok` response)
## Where does it store data?
Minimal data should be stored by kube-proxy itself; this should mainly be handled by kubelet and some file system configuration
## What is the most sensitive data it stores?
N/A
## How is that data stored?
N/A
# Data Dictionary
| Name | Classification/Sensitivity | Comments |
| :--: | :--: | :--: |
| Data | Goes | Here |
# Control Families
These are the areas of controls that we're interested in based on what the audit working group selected.
When we say "controls," we mean a logical section of an application or system that handles a security requirement. Per CNSSI:
> The management, operational, and technical controls (i.e., safeguards or countermeasures) prescribed for an information system to protect the confidentiality, integrity, and availability of the system and its information.
For example, a system may have authorization requirements that say:
- users must be registered with a central authority
- all requests must be verified to be owned by the requesting user
- each account must have attributes associated with it to uniquely identify the user
and so on.
For this assessment, we're looking at six basic control families:
- Networking
- Cryptography
- Secrets Management
- Authentication
- Authorization (Access Control)
- Multi-tenancy Isolation
Obviously we can skip control families as "not applicable" in the event that the component does not require it. For example,
something with the sole purpose of interacting with the local file system may have no meaningful Networking component; this
isn't a weakness, it's simply "not applicable."
For each control family we want to ask:
- What does the component do for this control?
- What sorts of data passes through that control?
- for example, a component may have sensitive data (Secrets Management), but that data never leaves the component's storage via Networking
- What can an attacker do with access to this component?
- What's the simplest attack against it?
- Are there mitigations that we recommend (e.g. "Always use an interstitial firewall")?
- What happens if the component stops working (via DoS or other means)?
- Have there been similar vulnerabilities in the past? What were the mitigations?
# Threat Scenarios
- An External Attacker without access to the client application
- An External Attacker with valid access to the client application
- An Internal Attacker with access to cluster
- A Malicious Internal User
## Networking
- kube-proxy is actually five programs
- proxy: mostly deprecated, but a literal proxy, in that it intercepts requests and proxies them to backend services
- IPVS/iptables: very similar modes, handle connecting virtual IPs (VIPs) and the like via low-level routing (the preferred mode)
- two Windows-specific modes (out of scope for this discussion, but if there are details we can certainly add them)
Node ports:
- captures traffic from Host IP
- shuffles to backend (used for building load balancers)
- kube-proxy shells out to `iptables` or `ipvs`
- Also uses a netlink socket for IPVS (netlink are similar to Unix Domain Sockets)
- *Also* shells out to `ipset` under certain circumstances for IPVS (building sets of IPs and such)
### User space proxy
Setup:
1. Connect to the kube-apiserver
1. Watch the API server for services/endpoints/&c
1. Build in-memory caching map: for services, for every port a service maps, open a port, write iptables rule for VIP & Virt Port
1. Watch for updates of services/endpoints/&c
when a consumer connects to the port:
1. Service is running VIP:VPort
1. Root NS -> iptable -> kube-proxy port
1. look at the src/dst port, check the map, pick a service on that port at random (if that fails, try another until either success or a retry count has been exceeded)
1. Shuffle bytes back and forth between backend service and client until termination or failure
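A compact sketch of the userspace behavior just described: accept on the opened port, pick a backend at random, and shuffle bytes between client and backend until the connection ends (addresses are placeholders, and the retry-on-failure step is omitted for brevity):

```python
#!/usr/bin/env python3
"""Sketch: a minimal userspace proxy in the spirit of kube-proxy's proxy mode."""
import random
import socket
import threading

LISTEN_ADDR = ("0.0.0.0", 10080)                       # placeholder service port
BACKENDS = [("10.0.0.11", 8080), ("10.0.0.12", 8080)]  # placeholder endpoints

def pump(src, dst):
    """Copy bytes one way until the connection closes, then close the other side."""
    try:
        while data := src.recv(65536):
            dst.sendall(data)
    except OSError:
        pass
    finally:
        dst.close()

def handle(client):
    backend = socket.create_connection(random.choice(BACKENDS), timeout=5)
    # Shuffle bytes in both directions between client and backend.
    threading.Thread(target=pump, args=(client, backend), daemon=True).start()
    threading.Thread(target=pump, args=(backend, client), daemon=True).start()

with socket.create_server(LISTEN_ADDR) as srv:
    while True:
        conn, _ = srv.accept()
        handle(conn)
```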
### iptables
1. Same initial setup (sans opening a port directly)
1. iptables restore command set
1. giant string of services
1. User VIP -> Random Backend -> Rewrite packets (at the kernel level, so kube-proxy never sees the data)
1. At the end of the sync loop, write (write in batches to avoid iptables contention)
1. no more routing table touches until service updates (from watching kube-apiserver or a timeout, expanded below)
**NOTE**: rate limited (bounded frequency) updates:
- no later than 10 minutes by default
- no sooner than 15s by default (if there are no service map updates)
This point came out of the following question: is having access to kube-proxy *worse* than having root access to the host machine?
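A sketch of the bounded-frequency sync described above: a rules rewrite happens no sooner than the minimum interval after the previous one, and no later than the maximum interval even with no updates (the intervals mirror the defaults quoted above; the rule-writing step is a stub):

```python
#!/usr/bin/env python3
"""Sketch: bounded-frequency rule syncing (no sooner than 15s, no later than 10min)."""
import queue
import time

MIN_INTERVAL = 15.0        # never rewrite rules more often than this
MAX_INTERVAL = 10 * 60.0   # always rewrite at least this often

updates: "queue.Queue[str]" = queue.Queue()  # fed by a watch on the kube-apiserver

def write_rules():
    print("iptables-restore batch at", time.strftime("%X"))  # stub for the real work

def sync_loop():
    last_sync = time.monotonic() - MAX_INTERVAL  # force an initial sync
    while True:
        deadline = last_sync + MAX_INTERVAL
        try:
            # Wake early if a service/endpoint update arrives before the deadline.
            updates.get(timeout=max(0.0, deadline - time.monotonic()))
        except queue.Empty:
            pass  # hit the max interval with no updates; sync anyway
        # Enforce the minimum spacing between rewrites.
        wait = last_sync + MIN_INTERVAL - time.monotonic()
        if wait > 0:
            time.sleep(wait)
        while not updates.empty():
            updates.get_nowait()  # coalesce any updates that arrived meanwhile
        write_rules()
        last_sync = time.monotonic()

# sync_loop() would run for the life of the process; call it from a main routine.
```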
### ipvs
1. Same setup as iptables & proxy mode
1. `ipvsadm` and `ipset` commands instead of `iptables`
1. This does have some strange changes:
- ip address needs a dummy adapter
- !NOTE Any service bound to 0.0.0.0 is also bound to _all_ adapters
- somewhat expected because 0.0.0.0, but can still lead to interesting behavior
### concern points within networking
- !NOTE: ARP table attacks (such as if someone has `CAP_NET_RAW` in a container or host access) can impact kube-proxy
- Endpoint selection is namespace & pod-based, so injection could overwrite (I don't think this is worth a finding/note because kube-apiserver is the arbiter of truth)
- !FINDING (but low...): Pod IP reuse: by causing a machine to churn through IPs (a factor of 2 x max), you could cause kube-proxy to forward ports to your pod if you win the race condition.
- this would be limited to the window of routing updates
- however, established connections would remain
- kube-apiserver could be the arbiter of routing, but that may require more watches and connections to the central component
- [editor] I think just noting this potential issue and maybe warning on it in kube-proxy logs would be enough
### with root access?
Access to kube-proxy is mostly the same as root access
- set syscalls, route local, &c could gobble memory
- Node/VIP level
- Recommend `CAP_NET_BIND_SERVICE` (bind to low ports without needing root) for containers/pods to alleviate concerns there (see the sketch after this list)
- Can map low ports to high ports in kube-proxy as well, but mucks with anything that pretends to be a VIP
- LB forwards packets to service without new connection (based on srcport)
- 2-hop LB, can't do direct LB
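For the `CAP_NET_BIND_SERVICE` recommendation above, a hedged sketch of how a container could request just that capability using the Kubernetes Go API types (the names and image are illustrative):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Illustrative container spec: allow binding to low ports without
	// granting full root; the capability name is NET_BIND_SERVICE.
	container := corev1.Container{
		Name:  "low-port-server",
		Image: "example.com/app:latest", // hypothetical image
		SecurityContext: &corev1.SecurityContext{
			Capabilities: &corev1.Capabilities{
				Drop: []corev1.Capability{"ALL"},
				Add:  []corev1.Capability{"NET_BIND_SERVICE"},
			},
		},
	}
	fmt.Printf("%+v\n", container.SecurityContext.Capabilities)
}
```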
## Cryptography
- kube-proxy itself does not handle cryptography other than the TLS connection to kube-apiserver
## Secrets Management
- kube-proxy itself does not handle secrets, but rather only consumes credentials from the command line (like all other k8s components)
## Authentication
- kube-proxy does not handle any authentication other than credentials to the kube-apiserver
## Authorization
- kube-proxy does not handle any authorization; the arbiters of authorization are the kubelet and kube-apiserver
## Multi-tenancy Isolation
- kube-proxy does not currently segment clients from one another, as clients on the same pod/host must use the same iptables/ipvs configuration
- kube-proxy does have a conception of namespaces, but currently avoids enforcing much at that level
- routes still must be added to iptables or the like
- iptables contention could be problematic
- much better handled in higher-level components, namely the kube-apiserver
## Logging
- stderr directed to a file
- same as with kubelet
- !FINDING (but same as all other components) logs namespaces, service names (same as every other service)
# Additional Notes
## kubelet to iptables
- per pod network management
- pods can request a host port, docker style
- kubenet and CNI plugins
- kubenet uses CNI
- sets up kubenet iptables rules to map ports to a single pod
- overly broad, should be appended to iptables list
- all local IPs to the host
!FINDING: don't use host ports, they can cause problems with services and such; we may recommend deprecating them
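For reference, a host port is requested in the pod spec as a container port with `HostPort` set; the following minimal sketch with the Go API types (values hypothetical) shows exactly the pattern the finding above discourages.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Requesting hostPort 8080 maps port 8080 on the node directly to the
	// container; kubenet implements this with broad iptables rules.
	port := corev1.ContainerPort{
		Name:          "web",
		ContainerPort: 80,
		HostPort:      8080,
		Protocol:      corev1.ProtocolTCP,
	}
	fmt.Printf("%+v\n", port)
}
```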
## Summary
# Recommendations

View File

@ -1,162 +0,0 @@
# Overview
- Component: kube-scheduler
- Owner(s): [sig-scheduling](https://github.com/kubernetes/community/tree/master/sig-scheduling)
- SIG/WG(s) at meeting:
- Service Data Classification: Moderate (the scheduler adds pods to nodes, but will not remove pods, for the most part)
- Highest Risk Impact:
# Service Notes
This portion should walk through the component and discuss connections, their relevant controls, and generally lay out how the component serves its relevant function. For example, a component that accepts an HTTP connection may have relevant questions about channel security (TLS and Cryptography), authentication, authorization, non-repudiation/auditing, and logging. The questions aren't the *only* drivers as to what may be spoken about; they are meant to drive what we discuss and keep things on task for the duration of a meeting/call.
## How does the service work?
- Similar to most other components:
1. Watches for unscheduled/new pods
1. Watches nodes and their resource constraints
1. Chooses a node, via various mechanisms, to allocate based on best fit of resource requirements (see the sketch after this list)
1. Updates the pod spec on the kube-apiserver
1. that update is then retrieved by the node, which is also Watching components via the kube-apiserver
- there may be multiple schedulers with various names, and parameters (such as pod-specific schedulers)
- !NOTE schedulers are coöperative
- !NOTE schedulers are *supposed* to honor the name, but need not
- Interesting note: this makes a DoS via a huge list of schedulers interesting
- !NOTE the idea there was to add a *huge* number of pods to be scheduled that are associated with a poorly named scheduler
- !NOTE people shouldn't request specific schedulers in the podspec; rather, there should be some webhook to process that
- !NOTE team wasn't sure what would happen with large number of pods to be scheduled
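A toy, self-contained Go illustration of the "choose a node by best fit of resource requirements" step referenced above; real scheduling uses many more predicates and priorities, and the names and numbers here are made up.

```go
package main

import "fmt"

// Node capacity and pod request in millicores of CPU and MiB of memory.
type resources struct{ cpu, mem int64 }

type node struct {
	name string
	free resources
}

// bestFit returns the feasible node with the least leftover CPU+memory,
// a crude stand-in for the scheduler's filtering and scoring phases.
func bestFit(nodes []node, req resources) (string, bool) {
	bestName, bestScore, found := "", int64(0), false
	for _, n := range nodes {
		if n.free.cpu < req.cpu || n.free.mem < req.mem {
			continue // infeasible: fails the "fit" predicate
		}
		score := (n.free.cpu - req.cpu) + (n.free.mem - req.mem)
		if !found || score < bestScore {
			bestName, bestScore, found = n.name, score, true
		}
	}
	return bestName, found
}

func main() {
	nodes := []node{
		{"node-a", resources{cpu: 2000, mem: 4096}},
		{"node-b", resources{cpu: 500, mem: 1024}},
	}
	if name, ok := bestFit(nodes, resources{cpu: 400, mem: 512}); ok {
		fmt.Println("schedule pod on", name) // node-b: tighter fit
	}
}
```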
## Are there any subcomponents or shared boundaries?
Yes
- there may be multiple schedulers on the same MCP host
- schedulers may run on the same host as the API server
## What communications protocols does it use?
- standard HTTPS + auth (chosen by the cluster)
## Where does it store data?
- most should be stored in etcd (via kube-apiserver)
- some data will be stored on command line (configuration options) or on the file system (certificate paths for authentication)
## What is the most sensitive data it stores?
- No direct storage
## How is that data stored?
- N/A
# Data Dictionary
| Name | Classification/Sensitivity | Comments |
| :--: | :--: | :--: |
| Data | Goes | Here |
# Control Families
These are the areas of controls that we're interested in based on what the audit working group selected.
When we say "controls," we mean a logical section of an application or system that handles a security requirement. Per CNSSI:
> The management, operational, and technical controls (i.e., safeguards or countermeasures) prescribed for an information system to protect the confidentiality, integrity, and availability of the system and its information.
For example, a system may have authorization requirements that say:
- users must be registered with a central authority
- all requests must be verified to be owned by the requesting user
- each account must have attributes associated with it to uniquely identify the user
and so on.
For this assessment, we're looking at six basic control families:
- Networking
- Cryptography
- Secrets Management
- Authentication
- Authorization (Access Control)
- Multi-tenancy Isolation
Obviously we can skip control families as "not applicable" in the event that the component does not require it. For example,
something with the sole purpose of interacting with the local file system may have no meaningful Networking component; this
isn't a weakness, it's simply "not applicable."
For each control family we want to ask:
- What does the component do for this control?
- What sorts of data passes through that control?
- for example, a component may have sensitive data (Secrets Management), but that data never leaves the component's storage via Networking
- What can an attacker do with access to this component?
- What's the simplest attack against it?
- Are there mitigations that we recommend (i.e. "Always use an interstitial firewall")?
- What happens if the component stops working (via DoS or other means)?
- Have there been similar vulnerabilities in the past? What were the mitigations?
# Threat Scenarios
- An External Attacker without access to the client application
- An External Attacker with valid access to the client application
- An Internal Attacker with access to cluster
- A Malicious Internal User
## Networking
- only talks to kube-apiserver
- colocated on the same host generally as kube-apiserver, but needn't be
- has a web server (HTTP)
- !FINDING: same HTTP server finding as all other components
- metrics endpoint: qps, scheduling latency, &c
- healthz endpoint, which is just a 200 Ok response
- by default doesn't verify cert (maybe)
## Cryptography
- None
## Secrets Management
- Logs are the only persistence mechanism
- !FINDING (to be added to all the other "you expose secrets in env and CLI" finding locations) auth token/cred passed in via CLI
## Authentication
- no authN really
- pods, nodes, related objects; doesn't deal in authN
- unaware of any service/user accounts
## Authorization
- scheduling concepts protected by authZ
- quotas
- priority classes
- &c
- this authZ is not enforced by the scheduler, however; it is enforced by the kube-apiserver
## Multi-tenancy Isolation
- tenant: different users of workloads that don't want to trust one another
- namespaces are usually the boundaries
- affinity/anti-affinity for namespace
- scheduler doesn't have data plane access
- can have noisy neighbor problems
- is that the scheduler's issue?
- not sure
- namespace agnostic
- can use priority classes which can be RBAC'd to a specific namespace, like kube-system
- does not handle tenant fairness; handles priority class fairness
- no visibility into network boundary or usage information
- no cgroup for network counts
- !FINDING anti-affinity can be abused ("only I can have this one host, no one else"), applicable from `kubectl`
- !NOTE no backoff process for the scheduler to reschedule a pod rejected by the kubelet; the replicaset controller can create a tight loop (RSC -> Scheduler -> Kubelet -> Reject -> RSC...)
## Summary
# Recommendations

View File

@ -1,180 +0,0 @@
# Overview
- Component: kubelet
- Owner(s): [sig-node](https://github.com/kubernetes/community/tree/master/sig-node)
- SIG/WG(s) at meeting:
- Service Data Classification: High
- Highest Risk Impact:
# Service Notes
This portion should walk through the component and discuss connections, their relevant controls, and generally lay out how the component serves its relevant function. For example, a component that accepts an HTTP connection may have relevant questions about channel security (TLS and Cryptography), authentication, authorization, non-repudiation/auditing, and logging. The questions aren't the *only* drivers as to what may be spoken about; they are meant to drive what we discuss and keep things on task for the duration of a meeting/call.
## How does the service work?
- `kubelet` issues a watch request on the `kube-apiserver`
- `kubelet` watches for pod allocations assigned to the node the kubelet is currently running on
- when a new pod has been allocated to the kubelet's host, it retrieves the pod spec and interacts with the Container Runtime via local Interprocess Communication to run the container
- Kubelet also handles:
- answering log requests from the kube-apiserver
- monitoring pod health for failures
- working with the Container Runtime to deschedule pods when the pod has been deleted
- updating the kube-apiserver with host status (for use by the scheduler)
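A hedged client-go sketch of the core watch pattern described above, filtering on `spec.nodeName`; the node name and kubeconfig path are illustrative, and the real kubelet does considerably more (caching, syncing, CRI calls).

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/fields"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path; the kubelet normally uses bootstrap/client
	// certificates configured via flags.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Watch only pods assigned to this node, across all namespaces.
	sel := fields.OneTermEqualSelector("spec.nodeName", "node-1").String()
	w, err := client.CoreV1().Pods(metav1.NamespaceAll).Watch(context.Background(),
		metav1.ListOptions{FieldSelector: sel})
	if err != nil {
		panic(err)
	}
	for ev := range w.ResultChan() {
		pod, ok := ev.Object.(*corev1.Pod)
		if !ok {
			continue
		}
		// Here the real kubelet would hand the pod spec to the container
		// runtime (via CRI) to start, update, or tear down containers.
		fmt.Println(ev.Type, pod.Namespace+"/"+pod.Name)
	}
}
```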
## Are there any subcomponents or shared boundaries?
Yes.
- Technically, kubelet runs on the same host as the Container Runtime and kube-proxy
- There is a Trust Zone boundary between the Container Runtime and the kubelet
## What communications protocols does it use?
- HTTPS with certificate validation and some authentication mechanism for communication with the kube-apiserver as a client
- HTTPS without certificate validation by default
## Where does it store data?
- kubelet itself should not store much data
- kubelet can be run in an "apiserver-less mode" that loads pod manifests from the file system
- most data should be retrieved from the kube-apiserver via etcd
- authentication credentials for the kube-apiserver may be stored on the file system or in memory (both in CLI parameter as well as actual program memory) for the duration of execution
## What is the most sensitive data it stores?
- authentication credentials are stored in memory or are out of scope
## How is that data stored?
N/A
# Data Dictionary
| Name | Classification/Sensitivity | Comments |
| :--: | :--: | :--: |
| Data | Goes | Here |
# Control Families
These are the areas of controls that we're interested in based on what the audit working group selected.
When we say "controls," we mean a logical section of an application or system that handles a security requirement. Per CNSSI:
> The management, operational, and technical controls (i.e., safeguards or countermeasures) prescribed for an information system to protect the confidentiality, integrity, and availability of the system and its information.
For example, a system may have authorization requirements that say:
- users must be registered with a central authority
- all requests must be verified to be owned by the requesting user
- each account must have attributes associated with it to uniquely identify the user
and so on.
For this assessment, we're looking at six basic control families:
- Networking
- Cryptography
- Secrets Management
- Authentication
- Authorization (Access Control)
- Multi-tenancy Isolation
Obviously we can skip control families as "not applicable" in the event that the component does not require it. For example,
something with the sole purpose of interacting with the local file system may have no meaningful Networking component; this
isn't a weakness, it's simply "not applicable."
For each control family we want to ask:
- What does the component do for this control?
- What sorts of data passes through that control?
- for example, a component may have sensitive data (Secrets Management), but that data never leaves the component's storage via Networking
- What can an attacker do with access to this component?
- What's the simplest attack against it?
- Are there mitigations that we recommend (i.e. "Always use an interstitial firewall")?
- What happens if the component stops working (via DoS or other means)?
- Have there been similar vulnerabilities in the past? What were the mitigations?
# Threat Scenarios
- An External Attacker without access to the client application
- An External Attacker with valid access to the client application
- An Internal Attacker with access to cluster
- A Malicious Internal User
## Networking
- Port 10250: read/write, authenticated
- Port 10255: read-only, unauthenticated
- cadvisor uses this, going to be deprecated
- 10248: healthz, unauth'd
- static pod manifest directory
- Static pod fetch via HTTP(S)
### Routes:
- Auth filter on API, for 10250
- delegated to apiserver, subject access review, HTTPS request
- `/pods` podspec on node -> leaks data
- `/healthz`
- `/spec`
- `/stats-{cpu, mem, &c}`
- on 10250 only:
- `/exec`
- `/attach`
- `/portforward`
- `/kube-auth`
- `/debug-flags`
- `/cri/{exec, attach, portforward}`
### Findings:
- !FINDING: 10255 is unauthenticated and leaks secrets
- !FINDING: 10255/10248
- !FINDING: 10250 is self-signed TLS
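The read-only-port finding above can be illustrated with a trivial unauthenticated request; the node address is hypothetical, and the endpoint only answers when the read-only port is enabled.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// No credentials, no TLS: the read-only port (10255) answers anyway when
	// enabled, returning pod specs that can include sensitive detail.
	resp, err := http.Get("http://node-1.example.internal:10255/pods")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, len(body), "bytes of pod specs")
}
```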
## Cryptography
- None
## Secrets Management
- returned from kube-apiserver unencrypted
- in memory cache
- if pod mounts disk, written to tmpfs
- !FINDING (already captured) ENV vars can expose secrets
- configmaps are treated like secrets by kubelet
- !FINDING keynames and secret names may be logged
- maintains its own certs, secrets, bootstrap credential
- bootstrap: initial cert used to issue CSR to kube-apiserver
- !NOTE certs are written to disk unencrypted
- !FINDING bootstrap cert may be long lived, w/o a TTL
## Authentication
- delegated to kube-apiserver, via HTTPS request, with subject access review
- two-way TLS by default (we believe)
- token auth
- bearer token
- passed to request to API server
- "token review"
- kube-apiserver responds w/ ident
- response is boolean (yes/no is this a user) and username/uid/groups/arbitrary data as a tuple
- no auditing on the kubelet, but logged on the kube-apiserver
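The "token review" exchange described above is the TokenReview API; below is a hedged client-go sketch of the call the kubelet's webhook authenticator effectively makes (kubeconfig path and token are placeholders).

```go
package main

import (
	"context"
	"fmt"

	authv1 "k8s.io/api/authentication/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.kubeconfig") // hypothetical path
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Ask the kube-apiserver whether this bearer token identifies a user.
	review := &authv1.TokenReview{
		Spec: authv1.TokenReviewSpec{Token: "REDACTED-BEARER-TOKEN"},
	}
	result, err := client.AuthenticationV1().TokenReviews().Create(
		context.Background(), review, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	// The response is effectively a boolean plus an identity tuple.
	fmt.Println("authenticated:", result.Status.Authenticated,
		"user:", result.Status.User.Username, result.Status.User.UID, result.Status.User.Groups)
}
```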
## Authorization
- delegated to kube-apiserver
## Multi-tenancy Isolation
- kube-apiserver is the arbiter
- kubelet doesn't know namespaces really
- every pod is a separate tenant
- pods are security boundaries
## Summary
# Recommendations

View File

@ -1,95 +0,0 @@
# Overview
- Component:
- Owner(s):
- SIG/WG(s) at meeting:
- Service Data Classification:
- Highest Risk Impact:
# Service Notes
This portion should walk through the component and discuss connections, their relevant controls, and generally lay out how the component serves its relevant function. For example, a component that accepts an HTTP connection may have relevant questions about channel security (TLS and Cryptography), authentication, authorization, non-repudiation/auditing, and logging. The questions aren't the *only* drivers as to what may be spoken about; they are meant to drive what we discuss and keep things on task for the duration of a meeting/call.
## How does the service work?
## Are there any subcomponents or shared boundaries?
## What communications protocols does it use?
## Where does it store data?
## What is the most sensitive data it stores?
## How is that data stored?
# Data Dictionary
| Name | Classification/Sensitivity | Comments |
| :--: | :--: | :--: |
| Data | Goes | Here |
# Control Families
These are the areas of controls that we're interested in based on what the audit working group selected.
When we say "controls," we mean a logical section of an application or system that handles a security requirement. Per CNSSI:
> The management, operational, and technical controls (i.e., safeguards or countermeasures) prescribed for an information system to protect the confidentiality, integrity, and availability of the system and its information.
For example, a system may have authorization requirements that say:
- users must be registered with a central authority
- all requests must be verified to be owned by the requesting user
- each account must have attributes associated with it to uniquely identify the user
and so on.
For this assessment, we're looking at six basic control families:
- Networking
- Cryptography
- Secrets Management
- Authentication
- Authorization (Access Control)
- Multi-tenancy Isolation
Obviously we can skip control families as "not applicable" in the event that the component does not require it. For example,
something with the sole purpose of interacting with the local file system may have no meaningful Networking component; this
isn't a weakness, it's simply "not applicable."
For each control family we want to ask:
- What does the component do for this control?
- What sorts of data passes through that control?
- for example, a component may have sensitive data (Secrets Management), but that data never leaves the component's storage via Networking
- What can an attacker do with access to this component?
- What's the simplest attack against it?
- Are there mitigations that we recommend (i.e. "Always use an interstitial firewall")?
- What happens if the component stops working (via DoS or other means)?
- Have there been similar vulnerabilities in the past? What were the mitigations?
# Threat Scenarios
- An External Attacker without access to the client application
- An External Attacker with valid access to the client application
- An Internal Attacker with access to cluster
- A Malicious Internal User
## Networking
## Cryptography
## Secrets Management
## Authentication
## Authorization
## Multi-tenancy Isolation
## Summary
# Recommendations

View File

@ -1,171 +0,0 @@
# Request for Proposal
## Kubernetes Third-Party Security Audit
The Kubernetes SIG Security Third-Party Audit sub-project (working group, henceforth) is soliciting proposals from Information Security vendors for a comprehensive security audit of the Kubernetes Project.
### Background
In August of 2019, the Kubernetes Security Audit working group, in concert with the CNCF, Trail of Bits, and Atredis Partners, completed the first comprehensive security audit of the Kubernetes project's [codebase](https://github.com/kubernetes/kubernetes/), working from version 1.13.
These findings, linked below, paint a broad picture of Kubernetes security as of version 1.13 and highlight some areas that warrant further, deeper research.
* [Kubernetes Security Review](../security-audit-2019/findings/Kubernetes%20Final%20Report.pdf)
* [Attacking and Defending Kubernetes Installations](../security-audit-2019/findings/AtredisPartners_Attacking_Kubernetes-v1.0.pdf)
* [Whitepaper](../security-audit-2019/findings/Kubernetes%20White%20Paper.pdf)
* [Threat Model](../security-audit-2019/findings/Kubernetes%20Threat%20Model.pdf)
### Project Goals and Scope
This subsequent audit is intended to be the second in a series of recurring audits, each focusing on a specific aspect of Kubernetes while maintaining coverage of all aspects that have changed since the previous audit ([1.13](../security-audit-2019/findings/)).
The scope of this audit is the most recent release at commencement of audit of the core [Kubernetes project](https://github.com/kubernetes/kubernetes) and certain other code maintained by [Kubernetes SIGs](https://github.com/kubernetes-sigs/).
This audit will focus on the following components of Kubernetes:
* kube-apiserver
* kube-scheduler
* etcd, Kubernetes use of
* kube-controller-manager
* cloud-controller-manager
* kubelet
* kube-proxy
* secrets-store-csi-driver
Adjacent findings within the scope of the [bug bounty program](https://hackerone.com/kubernetes?type=team#scope) may be included, but are not the primary goal.
This audit is intended to find vulnerabilities or weaknesses in Kubernetes. While Kubernetes relies upon container runtimes such as Docker and CRI-O, we aren't looking for (for example) container escapes that rely upon bugs in the container runtime (unless, for example, the escape is made possible by a defect in the way that Kubernetes sets up the container).
The working group is specifically interested in the following aspects of the in-scope components. Proposals should indicate the specific proposed personnel's level of expertise in these fields as it relates to Kubernetes.
* Golang analysis and fuzzing
* Networking
* Cryptography
* Evaluation of component privilege
* Trust relationships and architecture evaluation
* Authentication & Authorization (including Role Based Access Controls)
* Secrets management
* Multi-tenancy isolation: Specifically soft (non-hostile co-tenants)
Personnel written into the proposal must serve on the engagement, unless explicit approvals for staff changes are made by the Security Audit Working Group.
#### Out of Scope
Findings specifically excluded from the [bug bounty program](https://hackerone.com/kubernetes?type=team#scope) scope are out of scope for this audit.
### Eligible Vendors
This RFP is open to proposals from all vendors.
#### Constraints
If your proposal includes subcontractors, please include relevant details from their firms such as CVs, past works, etc. The selected vendor will be wholly responsible for fulfillment of the audit and subcontractors must be wholly managed by the selected vendor.
### Anticipated Selection Schedule
This RFP will be open until 4 proposals have been received.
The RFP closing date will be set 2 calendar weeks after the fourth proposal is received.
The working group will announce the vendor selection after reviewing proposals.
Upon receipt of the fourth proposal, the working group will update the RFP closure date and vendor selection date in this document.
The working group will answer questions for the RFP period.
Questions can be submitted [here](https://docs.google.com/forms/d/e/1FAIpQLScjApMDAJ5o5pIBFKpJ3mUhdY9w5s9VYd_TffcMSvYH_O7-og/viewform). All questions will be answered publicly in this document.
We understand scheduling can be complex but we prefer to have proposals include CVs, resumes, and/or example reports from staff that will be working on the project.
Proposals should be submitted to kubernetes-security-audit-2021@googlegroups.com
* 2021/02/08: RFP Open, Question period open
* 2021/06/22: Fourth proposal received
* 2021/07/06: RFP Closes, Question period closes
* 2021/08/31: The working group will announce vendor selection
## Methodology
The start and end dates will be negotiated after vendor selection. The timeline for this audit is flexible.
The working group will establish a 60 minute kick-off meeting to answer any initial questions and discuss the Kubernetes architecture.
This is a comprehensive audit, not a penetration test or red team exercise. The audit does not end with the first successful exploit or critical vulnerability.
The vendor will document the Kubernetes configuration and architecture that they will audit and provide this to the working group. The cluster deployment assessed must not be specific to any public cloud. The working group must approve this configuration before the audit continues. This documented configuration will result in the "audited reference architecture specification" deliverable.
The vendor will perform source code analysis on the Kubernetes code base, finding vulnerabilities and, where possible and where it makes the most judicious use of time, providing proof-of-concept exploits that the Kubernetes project can use to investigate and fix defects. The vendor will discuss findings on a weekly basis and, at the vendor's discretion, bring draft write-ups to status meetings.
The working group will be available weekly to meet with the selected vendor and will provide subject matter experts as requested.
The vendor will develop and deliver a draft report, describing their methodology, how much attention the various components received (to inform future work), and the work's findings. The working group will review and comment on the draft report, either requesting updates or declaring the draft final. This draft-review-comment-draft cycle may repeat several times.
## Expectations
The vendor must report urgent security issues immediately to both the working group and security@kubernetes.io.
## Selection Criteria
To help us combine objective evaluations with the working group members' individual past experiences and knowledge of the vendors' work and relevant experience, the vendors will be evaluated against the following criteria. Each member of the working group will measure the RFP against the criteria on a scale of 1 to 5:
* Relevant understanding and experience in code audit, threat modeling, and related work
* Relevant understanding and experience in Kubernetes, other orchestration systems, containers, Linux, hardening of distributed systems, and related work
* Strength of the vendor's proposal and examples of previous work product, redacted as necessary
A writeup which details our process and results of the last RFP is available [here](../security-audit-2019/RFP_Decision.md).
## Confidentiality and Embargo
All information gathered and deliverables created as a part of the audit must not be shared outside the vendor or the working group without the explicit consent of the working group.
## Deliverables
The audit should result in the following deliverables, which will be made public after any sensitive security issues are mitigated.
* Audited reference architecture specification. Should take the form of a summary and associated configuration YAML files.
* Findings report including an executive summary.
* Where possible, and where in the vendor's opinion it makes the most judicious use of time, proof-of-concept exploits that the Kubernetes project can use to investigate and fix defects.
## Questions Asked during RFP Response Process
### Do we need to use our own hardware and infrastructure or should we use a cloud?
Strong preference would be for the vendor to provide their own infrastructure or use a public cloud provider, just NOT a managed offering like GKE or EKS. The reasoning is to prevent accidentally auditing a cloud provider's Kubernetes service instead of kubernetes/kubernetes. Depending on the scope and approach, it may make sense to use a local cluster (e.g. kind) for API fuzzing and anything that doesn't impact the underlying OS, as it is an easy-to-use, repeatable setup (see Methodology above).
### What is the intellectual property ownership of the report and all work product?
The report must be licensed under the Creative Commons Attribution 4.0 International Public License (CC BY 4.0) based on [section 11.(f) of the Cloud Native Computing Foundation (CNCF) Charter](https://github.com/cncf/foundation/blob/master/charter.md#11-ip-policy).
Separately, any code released with or as part of the report needs to be under the Apache License, version 2.0. Please refer to [sections 11.(e) and (d) in the CNCF Charter](https://github.com/cncf/foundation/blob/master/charter.md#11-ip-policy).
### Must I use the report format from the previous audit? Can the SIG provide a report format template I can use?
Vendors who wish to use either the previous report format, as allowed by CC BY 4.0, or a report format provided by the community may do so as long as it is also available under CC BY 4.0. Vendors who wish to publish 2 versions of the report, one tailored for the community under CC BY 4.0 and one that they host on their own site using their proprietary fonts, formats, branding, or other copyrights, under their own license may do so, in order to differentiate their commercial report format from this report. Vendors may also publish a synopsis and marketing materials regarding the report on their website as long as it links to the original report in this repository. In the community report, vendors can place links in the report to materials hosted on their commercial site. This does not imply that linked materials are themselves CC BY 4.0.
### Do you have any developer documentation or design documentation specifications that aren't available on the internet that you would be able to share?
Kubernetes is an open source project, all documentation is available on https://kubernetes.io or on https://github.com/kubernetes.
### What are the most important publicly available pages detailing the design of the system and the data it receives?
- Overview of [Kubernetes components](https://kubernetes.io/docs/concepts/overview/components/)
- [kube-apiserver overview](https://kubernetes.io/docs/reference/command-line-tools-reference/kube-apiserver/)
- [kube-scheduler overview](https://kubernetes.io/docs/concepts/scheduling-eviction/kube-scheduler/)
- [Operating etcd clusters for Kubernetes](https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/)
- [etcd clustering guide](https://etcd.io/docs/next/op-guide/clustering/)
- [kube-controller-manager overview](https://kubernetes.io/docs/reference/command-line-tools-reference/kube-controller-manager/)
- [cloud-controller-manager overview](https://kubernetes.io/docs/concepts/architecture/cloud-controller/)
- [cloud-controller-manager administration](https://kubernetes.io/docs/tasks/administer-cluster/running-cloud-controller/)
- [kubelet overview](https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/)
- [kube-proxy overview](https://kubernetes.io/docs/reference/command-line-tools-reference/kube-proxy/)
- [secrets-store-csi-driver](https://github.com/kubernetes-sigs/secrets-store-csi-driver)
### How long does the Working Group envision the engagement lasting and what is the latest date you can receive the deliverables?
The latest date to receive deliverables will be negotiated with the selected vendor.
### Which attack vectors are of most concern to the Working Group?
1. The attack vector of most concern is unauthenticated access to a cluster resulting in compromise of the [components in scope](#project-goals-and-scope)
2. Crossing namespace boundaries: an authenticated attacker being able to affect resources their credentials do not directly allow
3. Any other attack vector that exists against the components in scope
### Is there flexibility to wait for staff to be available to work on the audit?
Yes, the timeline for the audit is flexible and the timeline will be further discussed and negotiated with the selected vendor.

View File

@ -1,6 +0,0 @@
# See the OWNERS docs at https://go.k8s.io/owners
reviewers:
- savitharaghunathan
approvers:
- savitharaghunathan

View File

@ -1,6 +0,0 @@
# See the OWNERS docs at https://go.k8s.io/owners
reviewers:
- reylejano
approvers:
- reylejano

View File

@ -1,86 +0,0 @@
# SIG Security External Audit Subproject
## Overview
The SIG Security External Audit subproject (subproject, henceforth) is responsible for coordinating regular,
comprehensive, third-party security audits.
The subproject publishes the deliverables of the audit after abiding by the
[Security Release Process](https://github.com/kubernetes/committee-security-response/blob/main/security-release-process.md) and
[embargo policy](https://github.com/kubernetes/committee-security-response/blob/main/private-distributors-list.md#embargo-policy).
- [Request for Proposal (RFP)](#rfp)
- [Security Audit Scope](#security-audit-scope)
- [Vendor and Community Questions](#vendor-and-community-questions)
- [Review of Proposals](#review-of-proposals)
- [Vendor Selection](#vendor-selection)
- [Deliverables](#deliverables)
## RFP
The subproject produces an RFP for a third-party, comprehensive security audit. The subproject publishes the RFP in the
`sig-security` folder in the `kubernetes/community` repository. The subproject defines the scope, schedule,
methodology, selection criteria, and deliverables in the RFP.
Previous RFPs:
- [2019](https://github.com/kubernetes/community/blob/master/sig-security/security-audit-2019/RFP.md)
- [2021](https://github.com/kubernetes/community/blob/master/sig-security/security-audit-2021/RFP.md)
As efforts begin for the year's security audit, create a tracking issue for the security audit in
`kubernetes/community` with the `/sig security` label.
### Security Audit Scope
The scope of an audit is the most recent release at commencement of audit of the core
[Kubernetes project](https://github.com/kubernetes/kubernetes) and certain other code maintained by
[Kubernetes SIGs](https://github.com/kubernetes-sigs/).
Core Kubernetes components remain as focus areas of regular audits. Additional focus areas are finalized by the
subproject.
### Vendor and Community Questions
Potential vendors and the community can submit questions regarding the RFP through a Google form. The Google form is
linked in the RFP.
[Example from the 2021 audit](https://docs.google.com/forms/d/e/1FAIpQLScjApMDAJ5o5pIBFKpJ3mUhdY9w5s9VYd_TffcMSvYH_O7-og/viewform).
The subproject answers questions publicly on the RFP with pull requests to update the RFP.
[Example from the 2021 audit](https://github.com/kubernetes/community/pull/5813).
The question period is typically open between the RFP's opening date and closing date.
## Review of Proposals
Proposals are reviewed by the subproject proposal reviewers after the RFP closing date. An understanding of security audits is required to be a proposal reviewer.
All proposal reviewers must agree to abide by the
**[Security Release Process](https://github.com/kubernetes/committee-security-response/blob/main/security-release-process.md)**,
**[embargo policy](https://github.com/kubernetes/committee-security-response/blob/main/private-distributors-list.md#embargo-policy)**,
and have no [conflict of interest](#conflict-of-interest), acknowledged on the tracking issue.
This is done by placing a comment on the issue associated with the security audit.
e.g. `I agree to abide by the guidelines set forth in the Security Release Process, specifically the embargo on CVE
communications and have no conflict of interest`
Proposal reviewers are members of a private Google group and private Slack channel to exchange sensitive, confidential information and to share artifacts.
### Conflict of Interest
There is a possibility of a conflict of interest between a proposal reviewer and a vendor. Proposal reviewers should not have a conflict of interest. Examples of conflict of interest:
- Proposal reviewer is employed by a vendor who submitted a proposal
- Proposal reviewer has financial interest directly tied to the audit
Should a conflict arise during the proposal review, reviewers should notify the subproject owner and SIG Security chairs when they become aware of the conflict.
> The _Conflict of Interest_ section is inspired by the
[CNCF Security TAG security reviewer process](https://github.com/cncf/tag-security/blob/main/assessments/guide/security-reviewer.md#conflict-of-interest).
## Vendor Selection
On the vendor selection date, the subproject will publish the selected vendor in the 'sig-security' folder in the `kubernetes/community` repository.
[Example from the 2019 audit](https://github.com/kubernetes/community/blob/master/sig-security/security-audit-2019/RFP_Decision.md).
## Deliverables
The deliverables of the audit are defined in the RFP (e.g. findings report, threat model, white paper, audited reference architecture spec with YAML manifests) and are published in the 'sig-security' folder in the `kubernetes/community` repository.
[Example from the 2019 audit](https://github.com/kubernetes/community/tree/master/sig-security/security-audit-2019/findings).
**All information gathered and deliverables created as a part of the audit must not be shared outside the vendor or the subproject without the explicit consent of the subproject and SIG Security chairs.**

View File

@ -1,36 +0,0 @@
Past external security audits have not comprehensively covered the entire Kubernetes project.
This roadmap lists previously audited focus areas and focus areas requested to be included in future audits.
The Kubernetes community is invited to create issues and PRs to request additional components to be audited.
| **Kubernetes Focus Area** | **Audit Year**| **Links** |
|---------------------------|---------------|-----------|
| Networking | 2019 | |
| Cryptography | 2019 | |
| Authentication & Authorization (including Role Based Access Controls) | 2019 | |
| Secrets Management | 2019 | |
| Multi-tenancy isolation: Specifically soft (non-hostile co-tenants) | 2019 | |
| kube-apiserver | 2021 | |
| kube-scheduler | 2021 | |
| etcd (in the context of Kubernetes use of etcd) | 2021 | |
| kube-controller-manager | 2021 | |
| cloud-controller-manager | 2021 | |
| kubelet | 2021 | https://github.com/kubernetes/kubelet https://github.com/kubernetes/kubernetes/tree/master/staging/src/k8s.io/kubelet |
| kube-proxy | 2021 | https://github.com/kubernetes/kubernetes/tree/master/staging/src/k8s.io/kube-proxy https://github.com/kubernetes/kube-proxy |
| secrets-store-csi-driver | 2021 | https://github.com/kubernetes-sigs/secrets-store-csi-driver |
| cluster API | TBD | https://github.com/kubernetes-sigs/cluster-api |
| kubectl | TBD | https://github.com/kubernetes/kubectl |
| kubeadm | TBD | https://github.com/kubernetes/kubeadm |
| metrics server | TBD | https://github.com/kubernetes-sigs/metrics-server |
| nginx-ingress (in the context of a Kubernetes ingress controller) | TBD | https://github.com/kubernetes/ingress-nginx |
| kube-state-metrics | TBD | https://github.com/kubernetes/kube-state-metrics |
| node feature discovery | TBD | https://github.com/kubernetes-sigs/node-feature-discovery |
| hierarchical namespace | TBD | https://github.com/kubernetes-sigs/multi-tenancy/tree/master/incubator/hnc |
| pod security policy replacement | TBD | https://github.com/kubernetes/enhancements/tree/master/keps/sig-auth/2579-psp-replacement |
| CoreDNS (in the context of Kubernetes use of CoreDNS) | TBD | Concept: https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/ Reference: https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/ |
| cluster autoscaler | TBD | https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler |
| kube rbac proxy | TBD | https://github.com/brancz/kube-rbac-proxy |
| kms plugins | TBD | https://kubernetes.io/docs/tasks/administer-cluster/kms-provider/#implementing-a-kms-plugin |
| cni plugins | TBD | https://github.com/containernetworking/cni |
| csi plugins | TBD | https://github.com/kubernetes-csi |
| aggregator layer | TBD | https://github.com/kubernetes/kube-aggregator |

View File

@ -1,7 +0,0 @@
# See the OWNERS docs at https://go.k8s.io/owners
reviewers:
- pushkarj
approvers:
- pushkarj
- sig-security-leads