Compare commits


86 Commits

Author SHA1 Message Date
dependabot[bot] 7ad10b8063 chore(deps): Bump lycheeverse/lychee-action from 2.4.0 to 2.5.0
Bumps [lycheeverse/lychee-action](https://github.com/lycheeverse/lychee-action) from 2.4.0 to 2.5.0.
- [Release notes](https://github.com/lycheeverse/lychee-action/releases)
- [Commits](1d97d84f0b...5c4ee84814)

---
updated-dependencies:
- dependency-name: lycheeverse/lychee-action
  dependency-version: 2.5.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-08-08 08:29:15 +02:00
Federico Di Pierro cc96a4dde6 fix(charts/falco/tests): fixed Falco chart tests.
Signed-off-by: Federico Di Pierro <nierro92@gmail.com>
2025-08-04 22:36:52 +02:00
Federico Di Pierro 9717814edb update(charts/falco): updated CHANGELOG.
Signed-off-by: Federico Di Pierro <nierro92@gmail.com>
2025-08-04 22:36:52 +02:00
Federico Di Pierro 6305d9bf7d chore(charts/falco): bump chart version + variables.
Signed-off-by: Federico Di Pierro <nierro92@gmail.com>
2025-08-04 22:36:52 +02:00
Federico Di Pierro 0b9b5a01d4 update(charts/falco): bump container and k8smeta plugin to latest.
Signed-off-by: Federico Di Pierro <nierro92@gmail.com>
2025-08-04 22:36:52 +02:00
Leonardo Grasso 01ed738a2c docs(charts/falco): update docs for v6.2.1
Signed-off-by: Leonardo Grasso <me@leonardograsso.com>
2025-07-24 15:15:40 +02:00
Leonardo Grasso 11be245149 update(charts/falco): bump version to 6.2.1
Signed-off-by: Leonardo Grasso <me@leonardograsso.com>
2025-07-24 15:15:40 +02:00
Leonardo Grasso 65ba4c266e update(charts/falco): bump container plugin to v0.3.3
Signed-off-by: Leonardo Grasso <me@leonardograsso.com>
2025-07-24 15:15:40 +02:00
Leonardo Grasso 530eded713 docs(charts/falco): update docs for v6.2.0
Signed-off-by: Leonardo Grasso <me@leonardograsso.com>
2025-07-24 12:17:40 +02:00
Leonardo Grasso 9e1550ab44 update(charts/falco): bump charts to v6.2.0
Signed-off-by: Leonardo Grasso <me@leonardograsso.com>
2025-07-24 12:17:40 +02:00
Leonardo Grasso 3a7cb6edba update(charts/falco): bump container plugin to v0.3.2
Signed-off-by: Leonardo Grasso <me@leonardograsso.com>
2025-07-24 12:17:40 +02:00
Leonardo Grasso 2646171e4c chore(charts/falco): adapt volume mounts for new containerEngine
Signed-off-by: Leonardo Grasso <me@leonardograsso.com>
2025-07-24 12:17:40 +02:00
Leonardo Grasso 9f5ead4705 update(charts/falco): update containerEngines configuration
Signed-off-by: Leonardo Grasso <me@leonardograsso.com>
2025-07-24 12:17:40 +02:00
Benjamin FERNANDEZ 3cbf72bd9c feat(falco): Add possibility to custom falco pods hostname
Signed-off-by: Benjamin FERNANDEZ <benjamin2.fernandez.ext@orange.com>
2025-07-24 09:56:38 +02:00
Leonardo Grasso ff984cc8a8 update: remove falco-exporter
Signed-off-by: Leonardo Grasso <me@leonardograsso.com>
2025-07-22 15:06:29 +02:00
Leonardo Di Giovanna cd4dc68cb1 docs(OWNERS): add `ekoops` as approver
Signed-off-by: Leonardo Di Giovanna <41296180+ekoops@users.noreply.github.com>
2025-07-18 10:36:10 +02:00
Leonardo Di Giovanna 56f2eb7ccf update(charts/falco): update `README.md` for 6.0.2
Signed-off-by: Leonardo Di Giovanna <leonardodigiovanna1@gmail.com>
2025-07-01 14:36:21 +02:00
Leonardo Di Giovanna 489e4d67b6 update(charts/falco): update `CHANGELOG.md` for 6.0.2
Signed-off-by: Leonardo Di Giovanna <leonardodigiovanna1@gmail.com>
2025-07-01 14:36:21 +02:00
Leonardo Di Giovanna b821e9db06 update(falco): bump container plugin to 0.3.1
Signed-off-by: Leonardo Di Giovanna <leonardodigiovanna1@gmail.com>
2025-07-01 14:36:21 +02:00
Leonardo Di Giovanna 4ba195cc61 update(falco): upgrade chart for Falco 0.41.3
Signed-off-by: Leonardo Di Giovanna <leonardodigiovanna1@gmail.com>
2025-07-01 14:36:21 +02:00
dependabot[bot] 2206d6ce36 chore(deps): Bump sigstore/cosign-installer from 3.9.0 to 3.9.1
Bumps [sigstore/cosign-installer](https://github.com/sigstore/cosign-installer) from 3.9.0 to 3.9.1.
- [Release notes](https://github.com/sigstore/cosign-installer/releases)
- [Commits](fb28c2b633...398d4b0eee)

---
updated-dependencies:
- dependency-name: sigstore/cosign-installer
  dependency-version: 3.9.1
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-06-24 07:57:43 +02:00
dependabot[bot] b31c2881eb chore(deps): Bump sigstore/cosign-installer from 3.8.2 to 3.9.0
Bumps [sigstore/cosign-installer](https://github.com/sigstore/cosign-installer) from 3.8.2 to 3.9.0.
- [Release notes](https://github.com/sigstore/cosign-installer/releases)
- [Commits](3454372f43...fb28c2b633)

---
updated-dependencies:
- dependency-name: sigstore/cosign-installer
  dependency-version: 3.9.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-06-18 11:44:05 +02:00
Leonardo Di Giovanna 1f52c29818 update(charts/falco): update `README.md` for 6.0.1
Signed-off-by: Leonardo Di Giovanna <leonardodigiovanna1@gmail.com>
2025-06-17 14:38:44 +02:00
Leonardo Di Giovanna 42b6a54d71 update(charts/falco): update `CHANGELOG.md` for 6.0.1
Signed-off-by: Leonardo Di Giovanna <leonardodigiovanna1@gmail.com>
2025-06-17 14:38:44 +02:00
Leonardo Di Giovanna f9c9f14e04 update(falco): bump container plugin to 0.3.0
Signed-off-by: Leonardo Di Giovanna <leonardodigiovanna1@gmail.com>
2025-06-17 14:38:44 +02:00
Leonardo Di Giovanna d7816c9a2f update(falco): upgrade chart for Falco 0.41.2
Signed-off-by: Leonardo Di Giovanna <leonardodigiovanna1@gmail.com>
2025-06-17 14:38:44 +02:00
Leonardo Grasso 36b77e4937 docs(falco): update docs for v6.0.0
Signed-off-by: Leonardo Grasso <me@leonardograsso.com>
2025-06-13 10:31:16 +02:00
Leonardo Grasso a03331d05c update(falco): bump to v6.0.0
Signed-off-by: Leonardo Grasso <me@leonardograsso.com>
2025-06-13 10:31:16 +02:00
Igor Eulalio 60f43e7ad4 chore: response_actions to responseActions
Signed-off-by: Igor Eulalio <igor.eulalio@sysdig.com>
2025-06-13 10:31:16 +02:00
Igor Eulalio 9655a0f6da make docs
Signed-off-by: Igor Eulalio <igor.eulalio@sysdig.com>
2025-06-13 10:31:16 +02:00
Igor Eulalio a6baf31059 feat: enable talon trough response_actions.enabled proposal
Signed-off-by: Igor Eulalio <igor.eulalio@sysdig.com>
2025-06-13 10:31:16 +02:00
Igor Eulalio 049a366d92 feat: bump talon to 0.3.0, rename talon subchart to follow standard
Signed-off-by: Igor Eulalio <igor.eulalio@sysdig.com>

2025-06-13 10:31:16 +02:00
Federico Di Pierro 90dea388ad update(charts/falco): bump Falco chart to 5.0.3.
Signed-off-by: Federico Di Pierro <nierro92@gmail.com>
2025-06-05 16:07:31 +02:00
Federico Di Pierro da70b354c2 update(tests): bump container plugin to 0.2.5.
Signed-off-by: Federico Di Pierro <nierro92@gmail.com>
2025-06-05 10:43:30 +02:00
Federico Di Pierro e9164d6a17 chore(docs): run `make docs`.
Signed-off-by: Federico Di Pierro <nierro92@gmail.com>
2025-06-05 10:43:30 +02:00
Federico Di Pierro fdf085c249 update(charts/falco): update charts for Falco 0.41.1.
Signed-off-by: Federico Di Pierro <nierro92@gmail.com>
2025-06-05 10:43:30 +02:00
Leonardo Grasso e90802d1bf docs(.github): clean up leftover in PULL_REQUEST_TEMPLATE.md
Signed-off-by: Leonardo Grasso <me@leonardograsso.com>
2025-06-03 14:21:13 +02:00
Leonardo Grasso 4c48271f75 chore(charts/falco): bump chart for release
Signed-off-by: Leonardo Grasso <me@leonardograsso.com>
2025-06-03 14:20:14 +02:00
Leonardo Grasso 14b38d251b chore(charts/falco): clarify the notice does not require action items in all cases
Signed-off-by: Leonardo Grasso <me@leonardograsso.com>
2025-06-03 14:20:14 +02:00
Leonardo Grasso 560fd390bc fix(charts/falco): some volumes do not depend on artifact install
Signed-off-by: Leonardo Grasso <me@leonardograsso.com>
2025-06-03 14:20:14 +02:00
Leonardo Grasso 29c627a4ff update(charts/falco)!: bump chart to 5.0.0
Signed-off-by: Leonardo Grasso <me@leonardograsso.com>
2025-05-30 16:09:37 +02:00
Leonardo Grasso 6d9ccd5078 update(charts/falco): don't use `debian` flavor anymore
Signed-off-by: Leonardo Grasso <me@leonardograsso.com>
2025-05-30 16:09:37 +02:00
Leonardo Grasso ea882813a8 chore(charts/falco): chart release
Signed-off-by: Leonardo Grasso <me@leonardograsso.com>
2025-05-30 16:09:37 +02:00
Leonardo Grasso 36160ce337 update(charts/falco): bump falcoctl to v0.11.2
Signed-off-by: Leonardo Grasso <me@leonardograsso.com>
2025-05-30 16:09:37 +02:00
Leonardo Grasso 7baa31fbd5 docs(charts/falco): update readme
Signed-off-by: Leonardo Grasso <me@leonardograsso.com>
2025-05-29 12:01:25 +02:00
Leonardo Grasso 601643dbbf update(charts/falco): add `json_include_output_fields_property` in config
Co-authored-by: Federico Di Pierro <nierro92@gmail.com>
Signed-off-by: Leonardo Grasso <me@leonardograsso.com>
2025-05-29 12:01:25 +02:00
Aldo Lacuku 3b9030f7ef update(falco): bump container plugin to 0.2.4
Co-authored-by: Federico Di Pierro <nierro92@gmail.com>
Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2025-05-29 12:01:25 +02:00
Aldo Lacuku 7a89dc0a6d update(falco): docs and changelog
Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2025-05-29 12:01:25 +02:00
Aldo Lacuku 931b11fd6e remove(falco): delete outdated unit tests
Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2025-05-29 12:01:25 +02:00
Aldo Lacuku bb053c5c12 update(falco): bump rules to version 4.0.0
Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2025-05-29 12:01:25 +02:00
Aldo Lacuku dba12e1e0d new(falco): unit tests for container plugin's configuration
Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2025-05-29 12:01:25 +02:00
Aldo Lacuku 092de0da9d new(falco): add container plugins's volumes and volumeMounts
Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2025-05-29 12:01:25 +02:00
Aldo Lacuku 911c16ce46 refactor(falco/tests): reorganize unit tests
Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2025-05-29 12:01:25 +02:00
Aldo Lacuku 1c8fb43e58 new(falco): add container plugin's configuration
* new helper
* unit tests
* configuration values in values.yaml

Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2025-05-29 12:01:25 +02:00
Aldo Lacuku cc6cb3771d update(falco): bump k8smeta to version 0.3.0
Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2025-05-29 12:01:25 +02:00
Aldo Lacuku 4e49371d8c update(falco): update config_files configuration
Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2025-05-29 12:01:25 +02:00
Aldo Lacuku 8ab13ea972 update(falco): enable libs_logger
Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2025-05-29 12:01:25 +02:00
Aldo Lacuku e74948391f feat(falco): add new http_output option max_consecutive_timeouts
Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2025-05-29 12:01:25 +02:00
Aldo Lacuku 1c5ac83a96 feat(falco): add suggested_output configuration to falco
Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2025-05-29 12:01:25 +02:00
Aldo Lacuku 003d93735e chore(falco): drop -pk args
Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2025-05-29 12:01:25 +02:00
Mathieu Garstecki ac7d08f06b chore: bump falcosidekick chart version (+changelog)
Signed-off-by: Mathieu Garstecki <mathieu.garstecki@pigment.com>
2025-05-28 15:04:18 +02:00
Mathieu Garstecki 4bbc57bc78 fix: avoid ArgoCD diff on volumeClaimTemplates
K8S adds those fields by itself on apply, which creates a diff in ArgoCD.
We could ignore it, but there is little reason to omit those fields in
the definition.

Signed-off-by: Mathieu Garstecki <mathieu.garstecki@pigment.com>
2025-05-28 15:04:18 +02:00
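The reasoning in the commit above (Kubernetes defaults `apiVersion` and `kind` inside a StatefulSet's `volumeClaimTemplates` on apply, so a template omitting them shows a permanent ArgoCD diff) can be sketched as a minimal fragment. This is not the actual chart template; the claim name and storage size below are hypothetical:

```yaml
# Sketch only (hypothetical claim name/size): declaring the fields the API
# server would otherwise default keeps the rendered manifest identical to the
# live object, so ArgoCD reports no drift.
volumeClaimTemplates:
  - apiVersion: v1               # defaulted by Kubernetes if omitted
    kind: PersistentVolumeClaim  # defaulted by Kubernetes if omitted
    metadata:
      name: redis-data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
```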
dependabot[bot] 6db1b396ae chore(deps): Bump actions/setup-python from 5.5.0 to 5.6.0
Bumps [actions/setup-python](https://github.com/actions/setup-python) from 5.5.0 to 5.6.0.
- [Release notes](https://github.com/actions/setup-python/releases)
- [Commits](8d9ed9ac5c...a26af69be9)

---
updated-dependencies:
- dependency-name: actions/setup-python
  dependency-version: 5.6.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-04-25 08:58:38 +02:00
dependabot[bot] 0547284c57 chore(deps): Bump sigstore/cosign-installer from 3.8.1 to 3.8.2
Bumps [sigstore/cosign-installer](https://github.com/sigstore/cosign-installer) from 3.8.1 to 3.8.2.
- [Release notes](https://github.com/sigstore/cosign-installer/releases)
- [Commits](d7d6bc7722...3454372f43)

---
updated-dependencies:
- dependency-name: sigstore/cosign-installer
  dependency-version: 3.8.2
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-04-23 09:05:30 +02:00
dependabot[bot] a306f299a3 chore(deps): Bump golang.org/x/net from 0.37.0 to 0.38.0
Bumps [golang.org/x/net](https://github.com/golang/net) from 0.37.0 to 0.38.0.
- [Commits](https://github.com/golang/net/compare/v0.37.0...v0.38.0)

---
updated-dependencies:
- dependency-name: golang.org/x/net
  dependency-version: 0.38.0
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-04-17 10:17:49 +02:00
dependabot[bot] 00dacd98de chore(deps): Bump lycheeverse/lychee-action from 2.3.0 to 2.4.0
Bumps [lycheeverse/lychee-action](https://github.com/lycheeverse/lychee-action) from 2.3.0 to 2.4.0.
- [Release notes](https://github.com/lycheeverse/lychee-action/releases)
- [Commits](f613c4a64e...1d97d84f0b)

---
updated-dependencies:
- dependency-name: lycheeverse/lychee-action
  dependency-version: 2.4.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-04-07 10:57:46 +02:00
Aliloya e648b42f87 Update CHANGELOG.md
Signed-off-by: Aliloya <53705110+Aliloya-eng@users.noreply.github.com>
2025-03-27 15:23:36 +01:00
Aliloya bd24ca27db fix: add an or condition for configmap
Co-authored-by: Igor Eulalio <41654187+IgorEulalio@users.noreply.github.com>
Signed-off-by: Aliloya <53705110+Aliloya-eng@users.noreply.github.com>
2025-03-27 15:23:36 +01:00
Ali Hassan 8e24293503 fix: remove faulty condition for configmap
Signed-off-by: Ali Hassan <53705110+Aliloya-eng@users.noreply.github.com>
2025-03-27 15:23:36 +01:00
dependabot[bot] 347a231b17 chore(deps): Bump actions/setup-python from 5.4.0 to 5.5.0
Bumps [actions/setup-python](https://github.com/actions/setup-python) from 5.4.0 to 5.5.0.
- [Release notes](https://github.com/actions/setup-python/releases)
- [Commits](42375524e2...8d9ed9ac5c)

---
updated-dependencies:
- dependency-name: actions/setup-python
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-03-26 10:23:28 +01:00
dependabot[bot] 9c59182fe2 chore(deps): Bump actions/setup-go from 5.3.0 to 5.4.0
Bumps [actions/setup-go](https://github.com/actions/setup-go) from 5.3.0 to 5.4.0.
- [Release notes](https://github.com/actions/setup-go/releases)
- [Commits](f111f3307d...0aaccfd150)

---
updated-dependencies:
- dependency-name: actions/setup-go
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-03-20 07:47:25 +01:00
Igor Eulalio 56a04dac13 fix: disable Talon by default on Falco installation
Signed-off-by: Igor Eulalio <igor.eulalio@sysdig.com>
2025-03-19 15:53:21 +01:00
Igor Eulalio 6d160f3560 chore: make docs
Signed-off-by: Igor Eulalio <igor.eulalio@sysdig.com>
2025-03-19 14:14:21 +01:00
Igor Eulalio a23ab7247b feat: set falco and sidekick chart version to latest
Signed-off-by: Igor Eulalio <igor.eulalio@sysdig.com>
2025-03-19 14:14:21 +01:00
Igor Eulalio 06156a4c23 feat: bump chart version and add changelog.md
Signed-off-by: Igor Eulalio <igor.eulalio@sysdig.com>
2025-03-19 14:14:21 +01:00
Igor Eulalio 61bd0f0fc5 feat: add falco-talon as a falco subchart
Signed-off-by: Igor Eulalio <igor.eulalio@sysdig.com>
2025-03-19 14:14:21 +01:00
dependabot[bot] a759414abf chore(deps): Bump docker/login-action from 3.3.0 to 3.4.0
Bumps [docker/login-action](https://github.com/docker/login-action) from 3.3.0 to 3.4.0.
- [Release notes](https://github.com/docker/login-action/releases)
- [Commits](9780b0c442...74a5d14239)

---
updated-dependencies:
- dependency-name: docker/login-action
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-03-17 08:34:06 +01:00
Leonardo Grasso 5b033a204a chore: remove test settings for deprecated charts
Signed-off-by: Leonardo Grasso <me@leonardograsso.com>
2025-03-14 18:07:53 +01:00
Leonardo Grasso 1442228ac7 update(event-generator): set chart as deprecated
Signed-off-by: Leonardo Grasso <me@leonardograsso.com>
2025-03-14 18:07:53 +01:00
Leonardo Grasso 0dbc58494c fix(falco): ci testing
Signed-off-by: Leonardo Grasso <me@leonardograsso.com>
2025-03-14 18:07:53 +01:00
Leonardo Grasso bc78f46298 chore(falco-expoter): bump to latest version
Signed-off-by: Leonardo Grasso <me@leonardograsso.com>
2025-03-14 18:07:53 +01:00
Leonardo Grasso 894f7f7968 update(falco): falco-exporter deprecation
Signed-off-by: Leonardo Grasso <me@leonardograsso.com>
2025-03-14 18:07:53 +01:00
Leonardo Grasso c3123e77ac chore: falco-exporter deprecation
Signed-off-by: Leonardo Grasso <me@leonardograsso.com>
2025-03-14 18:07:53 +01:00
Leonardo Grasso 06bd1848c8 docs(falco-exporter): update changelog
Signed-off-by: Leonardo Grasso <me@leonardograsso.com>
2025-03-14 18:07:53 +01:00
Leonardo Grasso 3ac90fe8c6 update(falco-exporter): add deprecation notice
Signed-off-by: Leonardo Grasso <me@leonardograsso.com>

update(falco-exporter): bump version for deprecation

Signed-off-by: Leonardo Grasso <me@leonardograsso.com>

docs(falco-exporter):

Signed-off-by: Leonardo Grasso <me@leonardograsso.com>
2025-03-14 18:07:53 +01:00
Aldo Lacuku 168b69d3f1 chore(tests): bump go version to 1.23.0
Furthermore this commit updates other go modules.

Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2025-03-14 17:43:53 +01:00
63 changed files with 1999 additions and 2593 deletions


@@ -35,8 +35,6 @@ Please remove the leading whitespace before the `/kind <>` you uncommented.
> /area falco-chart
> /area falco-exporter-chart
> /area falcosidekick-chart
> /area falco-talon-chart
@@ -63,7 +61,6 @@ Fixes #
**Special notes for your reviewer**:
I've already fixed the corrisponding labels in GitHub
**Checklist**
<!--


@@ -17,7 +17,7 @@ jobs:
fetch-depth: 0
- name: Link Checker
-       uses: lycheeverse/lychee-action@f613c4a64e50d792e0b31ec34bbcbba12263c6a6 #v2.3.0
+       uses: lycheeverse/lychee-action@5c4ee84814c983aa7164eaee476f014e53ff3963 #v2.5.0
with:
args: --no-progress './**/*.yml' './**/*.yaml' './**/*.md' './**/*.gotmpl' './**/*.tpl' './**/OWNERS' './**/LICENSE'
token: ${{ secrets.GITHUB_TOKEN }}


@@ -24,7 +24,7 @@ jobs:
fetch-depth: 0
- name: Install Cosign
-       uses: sigstore/cosign-installer@d7d6bc7722e3daa8354c50bcb52f4837da5e9b6a # v3.8.1
+       uses: sigstore/cosign-installer@398d4b0eeef1380460a10c8013a76f728fb906ac # v3.9.1
- name: Configure Git
run: |
@@ -47,7 +47,7 @@ jobs:
CR_TOKEN: "${{ secrets.GITHUB_TOKEN }}"
- name: Login to GitHub Container Registry
-       uses: docker/login-action@9780b0c442fbb1117ed29e0efdff1e18412f7567 # v3.3.0
+       uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
with:
registry: ghcr.io
username: ${{ github.actor }}


@@ -15,11 +15,11 @@ jobs:
- name: Set up Helm
uses: azure/setup-helm@b9e51907a09c216f16ebe8536097933489208112 # v4.3.0
with:
-       version: '3.14.0'
+       version: "3.14.0"
-     - uses: actions/setup-python@42375524e23c412d93fb67b49958b491fce71c38 # v5.4.0
+     - uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065 # v5.6.0
      with:
-       python-version: '3.x'
+       python-version: "3.x"
- name: Set up chart-testing
uses: helm/chart-testing-action@0d28d3144d3a25ea2cc349d6e59901c4ff469b3b # v2.7.0
@@ -41,22 +41,9 @@ jobs:
with:
config: ./tests/kind-config.yaml
-     - name: install falco if needed (ie for falco-exporter)
-       if: steps.list-changed.outputs.changed == 'true'
-       run: |
-         changed=$(ct list-changed --config ct.yaml)
-         if [[ "$changed[@]" =~ "charts/falco-exporter" ]]; then
-           helm repo add falcosecurity https://falcosecurity.github.io/charts
-           helm repo update
-           helm install falco falcosecurity/falco -f ./tests/falco-test-ci.yaml
-           kubectl get po -A
-           sleep 120
-           kubectl get po -A
-         fi
      - name: Run chart-testing (install)
        if: steps.list-changed.outputs.changed == 'true'
-       run: ct install --config ct.yaml
+       run: ct install --exclude-deprecated --config ct.yaml
go-unit-tests:
runs-on: ubuntu-latest
@@ -69,15 +56,15 @@ jobs:
- name: Set up Helm
uses: azure/setup-helm@b9e51907a09c216f16ebe8536097933489208112 # v4.3.0
with:
-       version: '3.10.3'
+       version: "3.10.3"
- name: Update repo deps
-       run: helm dependency update ./charts/falco
+       run: helm dependency update ./charts/falco
- name: Setup Go
-     uses: actions/setup-go@f111f3307d8850f501ac008e886eec1fd1932a34 # v5.3.0
+     uses: actions/setup-go@0aaccfd150d50ccaeb58ebd88d36e91967a5f35b # v5.4.0
with:
-       go-version: '1.21'
+       go-version: "1.21"
check-latest: true
- name: K8s-metacollector unit tests

OWNERS

@@ -3,6 +3,7 @@ approvers:
- Issif
- cpanato
- alacuku
+ - ekoops
reviewers:
- bencer
emeritus_approvers:


@@ -23,7 +23,6 @@ The Charts in the `master` branch (with a corresponding [GitHub release](https:/
Charts currently available are listed below.
- [falco](./charts/falco)
- - [falco-exporter](./charts/falco-exporter)
- [falcosidekick](./charts/falcosidekick)
- [event-generator](./charts/event-generator)
- [k8s-metacollector](./charts/k8s-metacollector)


@@ -1,239 +0,0 @@
# Change Log
This file documents all notable changes to `falco-exporter` Helm Chart. The release
numbering uses [semantic versioning](http://semver.org).
## v0.12.1
* fix bug in 'for' for falco exporter prometheus rules
## v0.12.0
* make 'for' configurable for falco exporter prometheus rules
## v0.11.0
* updated grafana dashboard
## v0.10.1
* Enhanced the service Monitor to support additional Properties.
## v0.10.0
* added ability to set the grafana folder annotation name
## v0.9.11
* fix dead links in README.md
## v0.9.10
* update configuration values in README.md
* introduce helm docs for the chart
## v0.9.9
* update tolerations
## v0.9.8
* add annotation for set of folder's grafana-chart
## v0.9.7
* noop change just to test the ci
## v0.9.6
### Minor Changes
* Bump falco-exporter to v0.8.3
## v0.9.5
### Minor Changes
* Removed unnecessary capabilities from security context
* Setted filesystem on read-only
## v0.9.4
### Minor Changes
* Add options to configure readiness/liveness probe values
## v0.9.3
### Minor Changes
* Bump falco-exporter to v0.8.2
## v0.9.2
### Minor Changes
* Add option to place Grafana dashboard in a folder
## v0.9.1
### Minor Changes
* Fix PSP allowed host path prefix to match grpc socket path change.
## v0.8.3
### Major Changes
* Changing the grpc socket path from `unix:///var/run/falco/falco.soc` to `unix:///run/falco/falco.sock`.
### Minor Changes
* Bump falco-exporter to v0.8.0
## v0.8.2
### Minor Changes
* Support configuration of updateStrategy of the Daemonset
## v0.8.0
* Upgrade falco-exporter version to v0.7.0 (see the [falco-exporter changelog](https://github.com/falcosecurity/falco-exporter/releases/tag/v0.7.0))
### Major Changes
* Add option to add labels to the Daemonset pods
## v0.7.2
### Minor Changes
* Add option to add labels to the Daemonset pods
## v0.7.1
### Minor Changes
* Fix `FalcoExporterAbsent` expression
## v0.7.0
### Major Changes
* Adds ability to create custom PrometheusRules for alerting
## v0.6.2
## Minor Changes
* Add Check availability of 'monitoring.coreos.com/v1' api version
## v0.6.1
### Minor Changes
* Add option the add annotations to the Daemonset
## v0.6.0
### Minor Changes
* Upgrade falco-exporter version to v0.6.0 (see the [falco-exporter changelog](https://github.com/falcosecurity/falco-exporter/releases/tag/v0.6.0))
## v0.5.2
### Minor changes
* Make image registry configurable
## v0.5.1
* Display only non-zero rates in Grafana dashboard template
## v0.5.0
### Minor Changes
* Upgrade falco-exporter version to v0.5.0
* Add metrics about Falco drops
* Make `unix://` prefix optional
## v0.4.2
### Minor Changes
* Fix Prometheus datasource name reference in grafana dashboard template
## v0.4.1
### Minor Changes
* Support release namespace configuration
## v0.4.0
### Mayor Changes
* Add Mutual TLS for falco-exporter enable/disabled feature
## v0.3.8
### Minor Changes
* Replace extensions apiGroup/apiVersion because of deprecation
## v0.3.7
### Minor Changes
* Fixed falco-exporter PSP by allowing secret volumes
## v0.3.6
### Minor Changes
* Add SecurityContextConstraint to allow deploying in Openshift
## v0.3.5
### Minor Changes
* Added the possibility to automatically add a PSP (in combination with a Role and a RoleBindung) via the podSecurityPolicy values
* Namespaced the falco-exporter ServiceAccount and Service
## v0.3.4
### Minor Changes
* Add priorityClassName to values
## v0.3.3
### Minor Changes
* Add grafana dashboard to helm chart
## v0.3.2
### Minor Changes
* Fix for additional labels for falco-exporter servicemonitor
## v0.3.1
### Minor Changes
* Added the support to deploy a Prometheus Service Monitor. Is disables by default.
## v0.3.0
### Major Changes
* Chart moved to [falcosecurity/charts](https://github.com/falcosecurity/charts) repository
* gRPC over unix socket support (by default)
* Updated falco-exporter version to `0.3.0`
### Minor Changes
* README.md and CHANGELOG.md added


@@ -1,36 +0,0 @@
apiVersion: v2
name: falco-exporter
description: Prometheus Metrics Exporter for Falco output events
# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
version: 0.12.1
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application.
appVersion: 0.8.3
keywords:
- monitoring
- security
- alerting
- metric
- troubleshooting
- run-time
sources:
- https://github.com/falcosecurity/falco-exporter
maintainers:
- name: leogr
email: me@leonardograsso.com


@@ -1,75 +0,0 @@
# falco-exporter Helm Chart
[falco-exporter](https://github.com/falcosecurity/falco-exporter) is a Prometheus Metrics Exporter for Falco output events.
Before using this chart, you need [Falco installed](https://falco.org/docs/installation/) and running with the [gRPC Output](https://falco.org/docs/grpc/) enabled (over Unix socket by default).
This chart is compatible with the [Falco Chart](https://github.com/falcosecurity/charts/tree/master/charts/falco) version `v1.2.0` or greater. Instructions to enable the gRPC Output in the Falco Helm Chart can be found [here](https://github.com/falcosecurity/charts/tree/master/charts/falco#enabling-grpc). We also strongly recommend using [gRPC over Unix socket](https://github.com/falcosecurity/charts/tree/master/charts/falco#grpc-over-unix-socket-default).
## Introduction
The chart deploys **falco-exporter** as Daemon Set on your the Kubernetes cluster. If a [Prometheus installation](https://github.com/helm/charts/tree/master/stable/prometheus) is running within your cluster, metrics provided by **falco-exporter** will be automatically discovered.
## Adding `falcosecurity` repository
Prior to installing the chart, add the `falcosecurity` charts repository:
```bash
helm repo add falcosecurity https://falcosecurity.github.io/charts
helm repo update
```
## Installing the Chart
To install the chart with the release name `falco-exporter` run:
```bash
helm install falco-exporter falcosecurity/falco-exporter
```
After a few seconds, **falco-exporter** should be running.
> **Tip**: List all releases using `helm list`, a release is a name used to track a specific deployment
## Uninstalling the Chart
To uninstall the `falco-exporter` deployment:
```bash
helm uninstall falco-exporter
```
The command removes all the Kubernetes components associated with the chart and deletes the release.
```bash
helm install falco-exporter --set falco.grpcTimeout=3m falcosecurity/falco-exporter
```
Alternatively, a YAML file that specifies the parameters' values can be provided while installing the chart. For example,
```bash
helm install falco-exporter -f values.yaml falcosecurity/falco-exporter
```
### Enable Mutual TLS
Mutual TLS for `/metrics` endpoint can be enabled to prevent alerts content from being consumed by unauthorized components.
To install falco-exporter with Mutual TLS enabled, you have to:
```shell
helm install falco-exporter \
--set service.mTLS.enabled=true \
--set-file service.mTLS.server.key=/path/to/server.key \
--set-file service.mTLS.server.crt=/path/to/server.crt \
--set-file service.mTLS.ca.crt=/path/to/ca.crt \
falcosecurity/falco-exporter
```
> **Tip**: You can use the default [values.yaml](values.yaml)
## Configuration
The following table lists the main configurable parameters of the {{ template "chart.name" . }} chart v{{ template "chart.version" . }} and their default values. Please, refer to [values.yaml](./values.yaml) for the full list of configurable parameters.
{{ template "chart.valuesSection" . }}


@@ -1,160 +0,0 @@
# falco-exporter Helm Chart
[falco-exporter](https://github.com/falcosecurity/falco-exporter) is a Prometheus Metrics Exporter for Falco output events.
Before using this chart, you need [Falco installed](https://falco.org/docs/installation/) and running with the [gRPC Output](https://falco.org/docs/grpc/) enabled (over Unix socket by default).
This chart is compatible with the [Falco Chart](https://github.com/falcosecurity/charts/tree/master/charts/falco) version `v1.2.0` or greater. Instructions to enable the gRPC Output in the Falco Helm Chart can be found [here](https://github.com/falcosecurity/charts/tree/master/charts/falco#enabling-grpc). We also strongly recommend using [gRPC over Unix socket](https://github.com/falcosecurity/charts/tree/master/charts/falco#grpc-over-unix-socket-default).
## Introduction
The chart deploys **falco-exporter** as Daemon Set on your the Kubernetes cluster. If a [Prometheus installation](https://github.com/helm/charts/tree/master/stable/prometheus) is running within your cluster, metrics provided by **falco-exporter** will be automatically discovered.
## Adding `falcosecurity` repository
Prior to installing the chart, add the `falcosecurity` charts repository:
```bash
helm repo add falcosecurity https://falcosecurity.github.io/charts
helm repo update
```
## Installing the Chart
To install the chart with the release name `falco-exporter` run:
```bash
helm install falco-exporter falcosecurity/falco-exporter
```
After a few seconds, **falco-exporter** should be running.
> **Tip**: List all releases using `helm list`, a release is a name used to track a specific deployment
## Uninstalling the Chart
To uninstall the `falco-exporter` deployment:
```bash
helm uninstall falco-exporter
```
The command removes all the Kubernetes components associated with the chart and deletes the release.
```bash
helm install falco-exporter --set falco.grpcTimeout=3m falcosecurity/falco-exporter
```
Alternatively, a YAML file that specifies the parameters' values can be provided while installing the chart. For example,
```bash
helm install falco-exporter -f values.yaml falcosecurity/falco-exporter
```
### Enable Mutual TLS
Mutual TLS for the `/metrics` endpoint can be enabled to prevent alert content from being consumed by unauthorized components.
To install falco-exporter with Mutual TLS enabled, run:
```shell
helm install falco-exporter \
--set service.mTLS.enabled=true \
--set-file service.mTLS.server.key=/path/to/server.key \
--set-file service.mTLS.server.crt=/path/to/server.crt \
--set-file service.mTLS.ca.crt=/path/to/ca.crt \
falcosecurity/falco-exporter
```
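The chart expects you to supply these certificates; it does not generate them. For a quick test setup, a self-signed CA plus a server certificate can be produced with `openssl` along these lines (file names and subject CNs are illustrative, not mandated by the chart):

```bash
# Create a self-signed CA (illustrative CN).
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout ca.key -out ca.crt -subj "/CN=falco-exporter-ca"

# Create a server key and certificate signing request.
openssl req -newkey rsa:2048 -nodes \
  -keyout server.key -out server.csr -subj "/CN=falco-exporter"

# Sign the server certificate with the CA.
openssl x509 -req -in server.csr -days 365 \
  -CA ca.crt -CAkey ca.key -CAcreateserial -out server.crt
```

The resulting `ca.crt`, `server.crt`, and `server.key` can then be passed via the `--set-file` flags shown above.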
> **Tip**: You can use the default [values.yaml](values.yaml)
## Configuration
The following table lists the main configurable parameters of the falco-exporter chart v0.12.1 and their default values. Please refer to [values.yaml](./values.yaml) for the full list of configurable parameters.
## Values
| Key | Type | Default | Description |
|-----|------|---------|-------------|
| affinity | object | `{}` | affinity allows pod placement based on node characteristics, or any other custom labels assigned to nodes. |
| daemonset | object | `{"annotations":{},"podLabels":{},"updateStrategy":{"type":"RollingUpdate"}}` | daemonset holds the configuration for the daemonset. |
| daemonset.annotations | object | `{}` | annotations to add to the DaemonSet pods. |
| daemonset.podLabels | object | `{}` | podLabels labels to add to the pods. |
| falco | object | `{"grpcTimeout":"2m","grpcUnixSocketPath":"unix:///run/falco/falco.sock"}` | falco holds the configuration used to connect to Falco. |
| falco.grpcTimeout | string | `"2m"` | grpcTimeout is the timeout value for the gRPC connection. |
| falco.grpcUnixSocketPath | string | `"unix:///run/falco/falco.sock"` | grpcUnixSocketPath is the path to Falco's gRPC Unix socket. |
| fullnameOverride | string | `""` | fullnameOverride is the same as nameOverride but applies to the full name. |
| grafanaDashboard | object | `{"enabled":false,"folder":"","folderAnnotation":"grafana_dashboard_folder","namespace":"default","prometheusDatasourceName":"Prometheus"}` | grafanaDashboard contains the configuration related to grafana dashboards. |
| grafanaDashboard.enabled | bool | `false` | enabled specifies whether the dashboard should be deployed. |
| grafanaDashboard.folder | string | `""` | folder, when set, specifies the Grafana folder in which the dashboard is stored (applied via folderAnnotation). |
| grafanaDashboard.folderAnnotation | string | `"grafana_dashboard_folder"` | folderAnnotation sets the annotation name used by Grafana's Helm chart to resolve the dashboard folder. |
| grafanaDashboard.namespace | string | `"default"` | namespace specifies the namespace for the configmap. |
| grafanaDashboard.prometheusDatasourceName | string | `"Prometheus"` | prometheusDatasourceName name of the data source. |
| healthChecks | object | `{"livenessProbe":{"initialDelaySeconds":60,"periodSeconds":15,"probesPort":19376,"timeoutSeconds":5},"readinessProbe":{"initialDelaySeconds":30,"periodSeconds":15,"probesPort":19376,"timeoutSeconds":5}}` | healthChecks contains the configuration for liveness and readiness probes. |
| healthChecks.livenessProbe | object | `{"initialDelaySeconds":60,"periodSeconds":15,"probesPort":19376,"timeoutSeconds":5}` | livenessProbe is a diagnostic mechanism used to determine whether a container within a Pod is still running and healthy. |
| healthChecks.livenessProbe.initialDelaySeconds | int | `60` | initialDelaySeconds tells the kubelet that it should wait X seconds before performing the first probe. |
| healthChecks.livenessProbe.periodSeconds | int | `15` | periodSeconds specifies the interval at which the liveness probe will be repeated. |
| healthChecks.livenessProbe.probesPort | int | `19376` | probesPort is the liveness probe's port. |
| healthChecks.livenessProbe.timeoutSeconds | int | `5` | timeoutSeconds number of seconds after which the probe times out. |
| healthChecks.readinessProbe | object | `{"initialDelaySeconds":30,"periodSeconds":15,"probesPort":19376,"timeoutSeconds":5}` | readinessProbe is a mechanism used to determine whether a container within a Pod is ready to serve traffic. |
| healthChecks.readinessProbe.initialDelaySeconds | int | `30` | initialDelaySeconds tells the kubelet that it should wait X seconds before performing the first probe. |
| healthChecks.readinessProbe.periodSeconds | int | `15` | periodSeconds specifies the interval at which the readiness probe will be repeated. |
| healthChecks.readinessProbe.timeoutSeconds | int | `5` | timeoutSeconds is the number of seconds after which the probe times out. |
| image | object | `{"pullPolicy":"IfNotPresent","registry":"docker.io","repository":"falcosecurity/falco-exporter","tag":"0.8.3"}` | image is the configuration for the exporter image. |
| image.pullPolicy | string | `"IfNotPresent"` | pullPolicy is the policy used to determine when a node should attempt to pull the container image. |
| image.registry | string | `"docker.io"` | registry is the image registry to pull from. |
| image.repository | string | `"falcosecurity/falco-exporter"` | repository is the image repository to pull from. |
| image.tag | string | `"0.8.3"` | tag is the image tag to pull. |
| imagePullSecrets | list | `[]` | imagePullSecrets is a list of secrets containing credentials used when pulling from private/secure registries. |
| nameOverride | string | `""` | nameOverride is the new name used to override the release name used for exporter's components. |
| nodeSelector | object | `{}` | nodeSelector specifies a set of key-value pairs that must match labels assigned to nodes for the Pod to be eligible for scheduling on that node. |
| podSecurityContext | object | `{}` | podSecurityContext holds the security context settings for the pod. |
| podSecurityPolicy | object | `{"annotations":{},"create":false,"name":""}` | podSecurityPolicy holds the security policy settings for the pod. |
| podSecurityPolicy.annotations | object | `{}` | annotations to add to the PSP, Role and RoleBinding |
| podSecurityPolicy.create | bool | `false` | create specifies whether a PSP, Role and RoleBinding should be created |
| podSecurityPolicy.name | string | `""` | name of the PSP, Role and RoleBinding to use. If not set and create is true, a name is generated using the fullname template |
| priorityClassName | string | `""` | priorityClassName specifies the name of the PriorityClass for the pods. |
| prometheusRules.alerts.additionalAlerts | object | `{}` | |
| prometheusRules.alerts.alert.enabled | bool | `true` | |
| prometheusRules.alerts.alert.for | string | `"5m"` | |
| prometheusRules.alerts.alert.rate_interval | string | `"5m"` | |
| prometheusRules.alerts.alert.threshold | int | `0` | |
| prometheusRules.alerts.critical.enabled | bool | `true` | |
| prometheusRules.alerts.critical.for | string | `"15m"` | |
| prometheusRules.alerts.critical.rate_interval | string | `"5m"` | |
| prometheusRules.alerts.critical.threshold | int | `0` | |
| prometheusRules.alerts.emergency.enabled | bool | `true` | |
| prometheusRules.alerts.emergency.for | string | `"1m"` | |
| prometheusRules.alerts.emergency.rate_interval | string | `"1m"` | |
| prometheusRules.alerts.emergency.threshold | int | `0` | |
| prometheusRules.alerts.error.enabled | bool | `true` | |
| prometheusRules.alerts.error.for | string | `"15m"` | |
| prometheusRules.alerts.error.rate_interval | string | `"5m"` | |
| prometheusRules.alerts.error.threshold | int | `0` | |
| prometheusRules.alerts.warning.enabled | bool | `true` | |
| prometheusRules.alerts.warning.for | string | `"15m"` | |
| prometheusRules.alerts.warning.rate_interval | string | `"5m"` | |
| prometheusRules.alerts.warning.threshold | int | `0` | |
| prometheusRules.enabled | bool | `false` | enabled specifies whether the prometheus rules should be deployed. |
| resources | object | `{}` | resources defines the computing resources (CPU and memory) that are allocated to the containers running within the Pod. |
| scc.create | bool | `true` | |
| securityContext | object | `{"allowPrivilegeEscalation":false,"capabilities":{"drop":["ALL"]},"privileged":false,"readOnlyRootFilesystem":true,"seccompProfile":{"type":"RuntimeDefault"}}` | securityContext holds the security context for the daemonset. |
| securityContext.capabilities | object | `{"drop":["ALL"]}` | capabilities to be assigned to the daemonset. |
| service | object | `{"annotations":{"prometheus.io/port":"9376","prometheus.io/scrape":"true"},"clusterIP":"None","labels":{},"mTLS":{"enabled":false},"port":9376,"targetPort":9376,"type":"ClusterIP"}` | service exposes the exporter service to be accessed from within the cluster. |
| service.annotations | object | `{"prometheus.io/port":"9376","prometheus.io/scrape":"true"}` | annotations set of annotations to be applied to the service. |
| service.clusterIP | string | `"None"` | clusterIP is set to `"None"`, making it a headless service. |
| service.labels | object | `{}` | labels set of labels to be applied to the service. |
| service.mTLS | object | `{"enabled":false}` | mTLS mutual TLS for HTTP metrics server. |
| service.mTLS.enabled | bool | `false` | enabled specifies whether the mTLS should be enabled. |
| service.port | int | `9376` | port is the port on which the Service will listen. |
| service.targetPort | int | `9376` | targetPort is the port on which the Pod is listening. |
| service.type | string | `"ClusterIP"` | type denotes the service type. Setting it to "ClusterIP" ensures the service is accessible only from within the cluster. |
| serviceAccount | object | `{"annotations":{},"create":true,"name":""}` | serviceAccount is the configuration for the service account. |
| serviceAccount.name | string | `""` | name is the name of the service account to use. If not set and create is true, a name is generated using the fullname template. If set and create is false, an already existing serviceAccount must be provided. |
| serviceMonitor | object | `{"additionalLabels":{},"additionalProperties":{},"enabled":false,"interval":"","scrapeTimeout":""}` | serviceMonitor holds the configuration for the ServiceMonitor CRD. A ServiceMonitor is a custom resource definition (CRD) used to configure how Prometheus should discover and scrape metrics from the exporter service. |
| serviceMonitor.additionalLabels | object | `{}` | additionalLabels specifies labels to be added on the Service Monitor. |
| serviceMonitor.additionalProperties | object | `{}` | additionalProperties allows setting additional properties on the endpoint, such as relabelings, metricRelabelings, etc. |
| serviceMonitor.enabled | bool | `false` | enable the deployment of a Service Monitor for the Prometheus Operator. |
| serviceMonitor.interval | string | `""` | interval specifies the time interval at which Prometheus should scrape metrics from the service. |
| serviceMonitor.scrapeTimeout | string | `""` | scrapeTimeout determines the maximum time Prometheus should wait for a target to respond to a scrape request. If the target does not respond within the specified timeout, Prometheus considers the scrape as failed for that target. |
| tolerations | list | `[{"effect":"NoSchedule","key":"node-role.kubernetes.io/master"},{"effect":"NoSchedule","key":"node-role.kubernetes.io/control-plane"}]` | tolerations are applied to pods and allow them to be scheduled on nodes with matching taints. |
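To illustrate how the `prometheusRules.alerts.*` keys above fit together, the snippet below enables the bundled alert rules and tightens the warning alert. Note that the chart renders the PrometheusRule only when `serviceMonitor.enabled` is also true; the values are illustrative:

```yaml
serviceMonitor:
  enabled: true          # required for the PrometheusRule to be rendered

prometheusRules:
  enabled: true
  alerts:
    warning:
      enabled: true
      for: 10m           # fire after 10 minutes above threshold
      rate_interval: 5m  # window used in the rate() expression
      threshold: 1       # events/sec considered "high"
```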


@@ -1,16 +0,0 @@
Get the falco-exporter metrics URL by running these commands:
{{- if contains "NodePort" .Values.service.type }}
export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ include "falco-exporter.fullname" . }})
export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}")
echo {{- if .Values.service.mTLS.enabled }} https{{- else }} http{{- end }}://$NODE_IP:$NODE_PORT/metrics
{{- else if contains "LoadBalancer" .Values.service.type }}
NOTE: It may take a few minutes for the LoadBalancer IP to be available.
You can watch the status by running 'kubectl get svc -w {{ template "falco-exporter.fullname" . }}'
export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ include "falco-exporter.fullname" . }} --template "{{"{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}"}}")
echo {{- if .Values.service.mTLS.enabled }} https{{- else }} http{{- end }}://$SERVICE_IP:{{ .Values.service.port }}/metrics
{{- else if contains "ClusterIP" .Values.service.type }}
export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app.kubernetes.io/name={{ include "falco-exporter.name" . }},app.kubernetes.io/instance={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}")
echo "Visit {{- if .Values.service.mTLS.enabled }} https{{- else }} http{{- end }}://127.0.0.1:{{ .Values.service.targetPort }}/metrics to use your application"
kubectl port-forward --namespace {{ .Release.Namespace }} $POD_NAME {{ .Values.service.targetPort }}
{{- end }}
echo {{- if .Values.service.mTLS.enabled }} "You'll need a valid client certificate and its corresponding key for Mutual TLS handshake" {{- end }}


@@ -1,98 +0,0 @@
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "falco-exporter.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "falco-exporter.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "falco-exporter.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Common labels
*/}}
{{- define "falco-exporter.labels" -}}
{{ include "falco-exporter.selectorLabels" . }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
{{- if not .Values.skipHelm }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end }}
{{- if not .Values.skipHelm }}
helm.sh/chart: {{ include "falco-exporter.chart" . }}
{{- end }}
{{- end -}}
{{/*
Selector labels
*/}}
{{- define "falco-exporter.selectorLabels" -}}
app.kubernetes.io/name: {{ include "falco-exporter.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end -}}
{{/*
Create the name of the service account to use
*/}}
{{- define "falco-exporter.serviceAccountName" -}}
{{- if .Values.serviceAccount.create -}}
{{ default (include "falco-exporter.fullname" .) .Values.serviceAccount.name }}
{{- else -}}
{{ default "default" .Values.serviceAccount.name }}
{{- end -}}
{{- end -}}
{{/*
Create the name of the PSP to use
*/}}
{{- define "falco-exporter.podSecurityPolicyName" -}}
{{- if .Values.podSecurityPolicy.create -}}
{{ default (include "falco-exporter.fullname" .) .Values.podSecurityPolicy.name }}
{{- else -}}
{{ default "default" .Values.podSecurityPolicy.name }}
{{- end -}}
{{- end -}}
{{/*
Extract the unixSocket's directory path
*/}}
{{- define "falco-exporter.unixSocketDir" -}}
{{- if .Values.falco.grpcUnixSocketPath -}}
{{- .Values.falco.grpcUnixSocketPath | trimPrefix "unix://" | dir -}}
{{- end -}}
{{- end -}}
{{/*
Return the appropriate apiVersion for rbac.
*/}}
{{- define "rbac.apiVersion" -}}
{{- if .Capabilities.APIVersions.Has "rbac.authorization.k8s.io/v1" }}
{{- print "rbac.authorization.k8s.io/v1" -}}
{{- else -}}
{{- print "rbac.authorization.k8s.io/v1beta1" -}}
{{- end -}}
{{- end -}}


@@ -1,132 +0,0 @@
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: {{ include "falco-exporter.fullname" . }}
labels:
{{- include "falco-exporter.labels" . | nindent 4 }}
namespace: {{ .Release.Namespace }}
spec:
selector:
matchLabels:
{{- include "falco-exporter.selectorLabels" . | nindent 6 }}
updateStrategy:
{{ toYaml .Values.daemonset.updateStrategy | indent 4 }}
template:
metadata:
labels:
{{- include "falco-exporter.selectorLabels" . | nindent 8 }}
{{- if .Values.daemonset.podLabels }}
{{ toYaml .Values.daemonset.podLabels | nindent 8 }}
{{- end }}
{{- if .Values.daemonset.annotations }}
annotations:
{{ toYaml .Values.daemonset.annotations | nindent 8 }}
{{- end }}
spec:
{{- with .Values.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- if .Values.priorityClassName }}
priorityClassName: "{{ .Values.priorityClassName }}"
{{- end }}
serviceAccountName: {{ include "falco-exporter.serviceAccountName" . }}
securityContext:
{{- toYaml .Values.podSecurityContext | nindent 8 }}
containers:
- name: {{ .Chart.Name }}
securityContext:
{{- toYaml .Values.securityContext | nindent 12 }}
image: "{{ .Values.image.registry }}/{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
args:
- /usr/bin/falco-exporter
{{- if .Values.falco.grpcUnixSocketPath }}
- --client-socket={{ .Values.falco.grpcUnixSocketPath }}
{{- else }}
- --client-hostname={{ .Values.falco.grpcHostname }}
- --client-port={{ .Values.falco.grpcPort }}
{{- end }}
- --timeout={{ .Values.falco.grpcTimeout }}
- --listen-address=0.0.0.0:{{ .Values.service.port }}
{{- if .Values.service.mTLS.enabled }}
- --server-ca=/etc/falco/server-certs/ca.crt
- --server-cert=/etc/falco/server-certs/server.crt
- --server-key=/etc/falco/server-certs/server.key
{{- end }}
ports:
- name: metrics
containerPort: {{ .Values.service.targetPort }}
protocol: TCP
livenessProbe:
initialDelaySeconds: {{ .Values.healthChecks.livenessProbe.initialDelaySeconds }}
timeoutSeconds: {{ .Values.healthChecks.livenessProbe.timeoutSeconds }}
periodSeconds: {{ .Values.healthChecks.livenessProbe.periodSeconds }}
httpGet:
path: /liveness
port: {{ .Values.healthChecks.livenessProbe.probesPort }}
readinessProbe:
initialDelaySeconds: {{ .Values.healthChecks.readinessProbe.initialDelaySeconds }}
timeoutSeconds: {{ .Values.healthChecks.readinessProbe.timeoutSeconds }}
periodSeconds: {{ .Values.healthChecks.readinessProbe.periodSeconds }}
httpGet:
path: /readiness
port: {{ .Values.healthChecks.readinessProbe.probesPort }}
resources:
{{- toYaml .Values.resources | nindent 12 }}
volumeMounts:
{{- if .Values.falco.grpcUnixSocketPath }}
- mountPath: {{ include "falco-exporter.unixSocketDir" . }}
name: falco-socket-dir
readOnly: true
{{- else }}
- mountPath: /etc/falco/certs
name: certs-volume
readOnly: true
{{- end }}
{{- if .Values.service.mTLS.enabled }}
- mountPath: /etc/falco/server-certs
name: server-certs-volume
readOnly: true
{{- end }}
volumes:
{{- if .Values.falco.grpcUnixSocketPath }}
- name: falco-socket-dir
hostPath:
path: {{ include "falco-exporter.unixSocketDir" . }}
{{- else }}
- name: certs-volume
secret:
secretName: {{ include "falco-exporter.fullname" . }}-certs
items:
- key: client.key
path: client.key
- key: client.crt
path: client.crt
- key: ca.crt
path: ca.crt
{{- end }}
{{- if .Values.service.mTLS.enabled }}
- name: server-certs-volume
secret:
secretName: {{ include "falco-exporter.fullname" . }}-server-certs
items:
- key: server.key
path: server.key
- key: server.crt
path: server.crt
- key: ca.crt
path: ca.crt
{{- end }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}


@@ -1,648 +0,0 @@
{{- if .Values.grafanaDashboard.enabled }}
apiVersion: v1
data:
grafana-falco.json: |-
{
"__inputs": [
],
"__requires": [
{
"type": "grafana",
"id": "grafana",
"name": "Grafana",
"version": "7.0.3"
},
{
"type": "panel",
"id": "graph",
"name": "Graph",
"version": ""
},
{
"type": "datasource",
"id": "prometheus",
"name": "Prometheus",
"version": "1.0.0"
},
{
"type": "panel",
"id": "table",
"name": "Table",
"version": ""
}
],
"annotations": {
"list": [
{
"builtIn": 1,
"datasource": {
"type": "datasource",
"uid": "grafana"
},
"enable": true,
"hide": true,
"iconColor": "rgba(0, 211, 255, 1)",
"name": "Annotations & Alerts",
"target": {
"limit": 100,
"matchAny": false,
"tags": [],
"type": "dashboard"
},
"type": "dashboard"
}
]
},
"description": "",
"editable": true,
"fiscalYearStartMonth": 0,
"graphTooltip": 1,
"id": null,
"links": [],
"liveNow": false,
"panels": [
{
"datasource": {
"type": "prometheus",
"uid": "${datasource}"
},
"fieldConfig": {
"defaults": {
"color": {
"mode": "palette-classic",
"seriesBy": "last"
},
"custom": {
"axisBorderShow": false,
"axisCenteredZero": false,
"axisColorMode": "text",
"axisLabel": "",
"axisPlacement": "auto",
"barAlignment": 0,
"drawStyle": "line",
"fillOpacity": 10,
"gradientMode": "none",
"hideFrom": {
"legend": false,
"tooltip": false,
"viz": false
},
"insertNulls": false,
"lineInterpolation": "smooth",
"lineStyle": {
"fill": "solid"
},
"lineWidth": 2,
"pointSize": 5,
"scaleDistribution": {
"type": "linear"
},
"showPoints": "never",
"spanNulls": false,
"stacking": {
"group": "A",
"mode": "none"
},
"thresholdsStyle": {
"mode": "off"
}
},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
}
]
},
"unit": "none"
},
"overrides": []
},
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 0
},
"id": 90,
"options": {
"legend": {
"calcs": [],
"displayMode": "table",
"placement": "right",
"showLegend": true
},
"tooltip": {
"mode": "multi",
"sort": "asc"
}
},
"pluginVersion": "8.3.3",
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "${datasource}"
},
"editorMode": "code",
"expr": "sum(rate(falco_events[$__rate_interval])) by (rule)",
"hide": false,
"instant": false,
"legendFormat": "__auto",
"range": true,
"refId": "A"
}
],
"title": "Events rate by rule",
"type": "timeseries"
},
{
"datasource": {
"type": "prometheus",
"uid": "${datasource}"
},
"fieldConfig": {
"defaults": {
"color": {
"mode": "palette-classic",
"seriesBy": "last"
},
"custom": {
"axisBorderShow": false,
"axisCenteredZero": false,
"axisColorMode": "text",
"axisLabel": "",
"axisPlacement": "auto",
"barAlignment": 0,
"drawStyle": "line",
"fillOpacity": 10,
"gradientMode": "none",
"hideFrom": {
"legend": false,
"tooltip": false,
"viz": false
},
"insertNulls": false,
"lineInterpolation": "smooth",
"lineStyle": {
"fill": "solid"
},
"lineWidth": 2,
"pointSize": 5,
"scaleDistribution": {
"type": "linear"
},
"showPoints": "never",
"spanNulls": false,
"stacking": {
"group": "A",
"mode": "none"
},
"thresholdsStyle": {
"mode": "off"
}
},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
}
]
},
"unit": "none"
},
"overrides": []
},
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 0
},
"id": 72,
"options": {
"legend": {
"calcs": [],
"displayMode": "table",
"placement": "right",
"showLegend": true
},
"tooltip": {
"mode": "multi",
"sort": "asc"
}
},
"pluginVersion": "8.3.3",
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "${datasource}"
},
"editorMode": "code",
"expr": "sum(rate(falco_events[$__rate_interval])) by (priority)",
"hide": false,
"instant": false,
"legendFormat": "__auto",
"range": true,
"refId": "A"
}
],
"title": "Events rate by priority",
"type": "timeseries"
},
{
"datasource": {
"type": "prometheus",
"uid": "${datasource}"
},
"fieldConfig": {
"defaults": {
"color": {
"mode": "palette-classic",
"seriesBy": "last"
},
"custom": {
"axisBorderShow": false,
"axisCenteredZero": false,
"axisColorMode": "text",
"axisLabel": "",
"axisPlacement": "auto",
"barAlignment": 0,
"drawStyle": "line",
"fillOpacity": 10,
"gradientMode": "none",
"hideFrom": {
"legend": false,
"tooltip": false,
"viz": false
},
"insertNulls": false,
"lineInterpolation": "smooth",
"lineStyle": {
"fill": "solid"
},
"lineWidth": 2,
"pointSize": 5,
"scaleDistribution": {
"type": "linear"
},
"showPoints": "never",
"spanNulls": false,
"stacking": {
"group": "A",
"mode": "none"
},
"thresholdsStyle": {
"mode": "off"
}
},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
}
]
},
"unit": "none"
},
"overrides": []
},
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 8
},
"id": 89,
"options": {
"legend": {
"calcs": [],
"displayMode": "table",
"placement": "right",
"showLegend": true
},
"tooltip": {
"mode": "multi",
"sort": "asc"
}
},
"pluginVersion": "8.3.3",
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "${datasource}"
},
"editorMode": "code",
"expr": "sum(rate(falco_events[$__rate_interval])) by (tags)",
"hide": false,
"instant": false,
"legendFormat": "__auto",
"range": true,
"refId": "A"
}
],
"title": "Events rate by tags",
"type": "timeseries"
},
{
"datasource": {
"type": "prometheus",
"uid": "${datasource}"
},
"fieldConfig": {
"defaults": {
"color": {
"mode": "palette-classic",
"seriesBy": "last"
},
"custom": {
"axisBorderShow": false,
"axisCenteredZero": false,
"axisColorMode": "text",
"axisLabel": "",
"axisPlacement": "auto",
"barAlignment": 0,
"drawStyle": "line",
"fillOpacity": 10,
"gradientMode": "none",
"hideFrom": {
"legend": false,
"tooltip": false,
"viz": false
},
"insertNulls": false,
"lineInterpolation": "smooth",
"lineStyle": {
"fill": "solid"
},
"lineWidth": 2,
"pointSize": 5,
"scaleDistribution": {
"type": "linear"
},
"showPoints": "never",
"spanNulls": false,
"stacking": {
"group": "A",
"mode": "none"
},
"thresholdsStyle": {
"mode": "off"
}
},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
}
]
},
"unit": "none"
},
"overrides": []
},
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 8
},
"id": 91,
"options": {
"legend": {
"calcs": [],
"displayMode": "table",
"placement": "right",
"showLegend": true
},
"tooltip": {
"mode": "multi",
"sort": "asc"
}
},
"pluginVersion": "8.3.3",
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "${datasource}"
},
"editorMode": "code",
"expr": "sum(rate(falco_events[$__rate_interval])) by (pod, hostname)",
"hide": false,
"instant": false,
"legendFormat": "{{`{{ pod }} ({{hostname}})`}}",
"range": true,
"refId": "A"
}
],
"title": "Events rate by pod, hostname",
"type": "timeseries"
},
{
"datasource": {
"type": "prometheus",
"uid": "$datasource"
},
"fieldConfig": {
"defaults": {
"color": {
"mode": "thresholds"
},
"custom": {
"align": "auto",
"cellOptions": {
"type": "color-text"
},
"filterable": true,
"inspect": false,
"minWidth": 50
},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "text",
"value": null
},
{
"color": "#EAB839",
"value": 100
},
{
"color": "red",
"value": 1000
}
]
}
},
"overrides": []
},
"gridPos": {
"h": 12,
"w": 24,
"x": 0,
"y": 16
},
"id": 94,
"options": {
"cellHeight": "sm",
"footer": {
"countRows": false,
"enablePagination": true,
"fields": "",
"reducer": [
"sum"
],
"show": false
},
"showHeader": true,
"sortBy": [
{
"desc": true,
"displayName": "Count"
}
]
},
"pluginVersion": "10.4.1",
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "prometheus"
},
"editorMode": "code",
"exemplar": false,
"expr": "falco_events",
"format": "table",
"instant": true,
"legendFormat": "__auto",
"range": false,
"refId": "A"
}
],
"title": "Events Total",
"transformations": [
{
"id": "organize",
"options": {
"excludeByName": {
"Time": true,
"__name__": true,
"container": true,
"endpoint": true,
"instance": true,
"job": true,
"k8s_ns_name": true,
"k8s_pod_name": true,
"service": true
},
"includeByName": {},
"indexByName": {},
"renameByName": {
"Value": "Count"
}
}
}
],
"type": "table"
}
],
"refresh": "30s",
"schemaVersion": 39,
"tags": [
"security",
"falco"
],
"templating": {
"list": [
{
"current": {
"selected": false,
"text": "Prometheus",
"value": "prometheus"
},
"hide": 0,
"includeAll": false,
"multi": false,
"name": "datasource",
"options": [],
"query": "prometheus",
"queryValue": "",
"refresh": 1,
"regex": "",
"skipUrlSync": false,
"type": "datasource"
},
{
"current": {
"isNone": true,
"selected": false,
"text": "None",
"value": ""
},
"datasource": {
"type": "prometheus",
"uid": "${datasource}"
},
"definition": "label_values(kube_node_info,cluster)",
"hide": 0,
"includeAll": false,
"multi": false,
"name": "cluster",
"options": [],
"query": {
"qryType": 1,
"query": "label_values(kube_node_info,cluster)",
"refId": "PrometheusVariableQueryEditor-VariableQuery"
},
"refresh": 1,
"regex": "",
"skipUrlSync": false,
"sort": 1,
"type": "query"
}
]
},
"time": {
"from": "now-1h",
"to": "now"
},
"timepicker": {},
"timezone": "",
"title": "Falco Events",
"uid": "FvUFlfuZz",
"version": 2,
"weekStart": ""
}
kind: ConfigMap
metadata:
labels:
grafana_dashboard: "1"
{{- if .Values.grafanaDashboard.folder }}
annotations:
k8s-sidecar-target-directory: /tmp/dashboards/{{ .Values.grafanaDashboard.folder }}
{{ .Values.grafanaDashboard.folderAnnotation }}: {{ .Values.grafanaDashboard.folder }}
{{- end }}
name: grafana-falco
{{- if .Values.grafanaDashboard.namespace }}
namespace: {{ .Values.grafanaDashboard.namespace }}
{{- else }}
namespace: {{ .Release.Namespace }}
{{- end}}
{{- end -}}


@@ -1,28 +0,0 @@
{{- if and .Values.podSecurityPolicy.create (.Capabilities.APIVersions.Has "policy/v1beta1") }}
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
name: {{ include "falco-exporter.podSecurityPolicyName" . }}
labels:
{{- include "falco-exporter.labels" . | nindent 4 }}
{{- with .Values.podSecurityPolicy.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
spec:
allowPrivilegeEscalation: false
allowedHostPaths:
- pathPrefix: "/run/falco"
readOnly: true
fsGroup:
rule: RunAsAny
runAsUser:
rule: RunAsAny
seLinux:
rule: RunAsAny
supplementalGroups:
rule: RunAsAny
volumes:
- 'hostPath'
- 'secret'
{{- end -}}


@@ -1,81 +0,0 @@
{{- if and .Values.prometheusRules.enabled .Values.serviceMonitor.enabled }}
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
name: {{ include "falco-exporter.fullname" . }}
{{- if .Values.prometheusRules.namespace }}
namespace: {{ .Values.prometheusRules.namespace }}
{{- end }}
labels:
{{- include "falco-exporter.labels" . | nindent 4 }}
{{- if .Values.prometheusRules.additionalLabels }}
{{- toYaml .Values.prometheusRules.additionalLabels | nindent 4 }}
{{- end }}
spec:
groups:
- name: falco-exporter
rules:
{{- if .Values.prometheusRules.enabled }}
- alert: FalcoExporterAbsent
expr: absent(up{job="{{- include "falco-exporter.fullname" . }}"})
for: 10m
annotations:
summary: Falco Exporter has disappeared from Prometheus service discovery.
description: No metrics are being scraped from Falco. No events will trigger any alerts.
labels:
severity: critical
{{- end }}
{{- if .Values.prometheusRules.alerts.warning.enabled }}
- alert: FalcoWarningEventsRateHigh
annotations:
summary: Falco is experiencing a high rate of warning events
description: A high rate of warning events is being detected by Falco
expr: rate(falco_events{priority="4"}[{{ .Values.prometheusRules.alerts.warning.rate_interval }}]) > {{ .Values.prometheusRules.alerts.warning.threshold }}
for: {{ .Values.prometheusRules.alerts.warning.for }}
labels:
severity: warning
{{- end }}
{{- if .Values.prometheusRules.alerts.error.enabled }}
- alert: FalcoErrorEventsRateHigh
annotations:
summary: Falco is experiencing a high rate of error events
description: A high rate of error events is being detected by Falco
expr: rate(falco_events{priority="3"}[{{ .Values.prometheusRules.alerts.error.rate_interval }}]) > {{ .Values.prometheusRules.alerts.error.threshold }}
for: {{ .Values.prometheusRules.alerts.error.for }}
labels:
severity: warning
{{- end }}
{{- if .Values.prometheusRules.alerts.critical.enabled }}
- alert: FalcoCriticalEventsRateHigh
annotations:
summary: Falco is experiencing a high rate of critical events
description: A high rate of critical events is being detected by Falco
expr: rate(falco_events{priority="2"}[{{ .Values.prometheusRules.alerts.critical.rate_interval }}]) > {{ .Values.prometheusRules.alerts.critical.threshold }}
for: {{ .Values.prometheusRules.alerts.critical.for }}
labels:
severity: critical
{{- end }}
{{- if .Values.prometheusRules.alerts.alert.enabled }}
- alert: FalcoAlertEventsRateHigh
annotations:
summary: Falco is experiencing a high rate of alert events
description: A high rate of alert events is being detected by Falco
expr: rate(falco_events{priority="1"}[{{ .Values.prometheusRules.alerts.alert.rate_interval }}]) > {{ .Values.prometheusRules.alerts.alert.threshold }}
for: {{ .Values.prometheusRules.alerts.alert.for }}
labels:
severity: critical
{{- end }}
{{- if .Values.prometheusRules.alerts.emergency.enabled }}
- alert: FalcoEmergencyEventsRateHigh
annotations:
summary: Falco is experiencing a high rate of emergency events
description: A high rate of emergency events is being detected by Falco
expr: rate(falco_events{priority="0"}[{{ .Values.prometheusRules.alerts.emergency.rate_interval }}]) > {{ .Values.prometheusRules.alerts.emergency.threshold }}
for: {{ .Values.prometheusRules.alerts.emergency.for }}
labels:
severity: critical
{{- end }}
{{- with .Values.prometheusRules.additionalAlerts }}
{{ . | nindent 4 }}
{{- end }}
{{- end }}
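The alert blocks above all follow the same pattern: a `rate()` over `falco_events` filtered by priority, compared against a threshold. A minimal values sketch enabling the rules and tuning one alert (the numbers are illustrative, not shipped defaults):

```yaml
prometheusRules:
  enabled: true                # also renders the FalcoExporterAbsent alert
  alerts:
    warning:
      enabled: true
      rate_interval: "10m"     # window passed to rate()
      threshold: 0.1           # fire when warning events exceed 0.1/s
      for: "30m"
```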

@@ -1,22 +0,0 @@
{{- if .Values.podSecurityPolicy.create -}}
kind: Role
apiVersion: {{ template "rbac.apiVersion" . }}
metadata:
name: {{ include "falco-exporter.podSecurityPolicyName" . }}
namespace: {{ .Release.Namespace }}
labels:
{{- include "falco-exporter.labels" . | nindent 4 }}
{{- with .Values.podSecurityPolicy.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
rules:
- apiGroups:
- policy
resources:
- podsecuritypolicies
resourceNames:
- {{ include "falco-exporter.podSecurityPolicyName" . }}
verbs:
- use
{{- end -}}

@@ -1,20 +0,0 @@
{{- if .Values.podSecurityPolicy.create -}}
kind: RoleBinding
apiVersion: {{ template "rbac.apiVersion" . }}
metadata:
name: {{ include "falco-exporter.podSecurityPolicyName" . }}
namespace: {{ .Release.Namespace }}
labels:
{{- include "falco-exporter.labels" . | nindent 4 }}
{{- with .Values.podSecurityPolicy.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
subjects:
- kind: ServiceAccount
name: {{ include "falco-exporter.serviceAccountName" . }}
roleRef:
kind: Role
name: {{ include "falco-exporter.podSecurityPolicyName" . }}
apiGroup: rbac.authorization.k8s.io
{{- end -}}

@@ -1,24 +0,0 @@
{{- if .Values.certs }}
apiVersion: v1
kind: Secret
metadata:
name: {{ include "falco-exporter.fullname" . }}-certs
namespace: {{ .Release.Namespace }}
labels:
{{- include "falco-exporter.labels" . | nindent 4 }}
type: Opaque
data:
{{- if .Values.certs }}
{{- if and .Values.certs.ca .Values.certs.ca.crt }}
ca.crt: {{ .Values.certs.ca.crt | b64enc | quote }}
{{- end }}
{{- if .Values.certs.client }}
{{- if .Values.certs.client.key }}
client.key: {{ .Values.certs.client.key | b64enc | quote }}
{{- end }}
{{- if .Values.certs.client.crt }}
client.crt: {{ .Values.certs.client.crt | b64enc | quote }}
{{- end }}
{{- end }}
{{- end }}
{{- end }}
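The secret above only renders the keys that are actually provided. A hedged values sketch supplying a CA and a client pair (the PEM bodies are placeholders, not real material):

```yaml
certs:
  ca:
    crt: |
      -----BEGIN CERTIFICATE-----
      # CA certificate PEM body elided
      -----END CERTIFICATE-----
  client:
    crt: |
      -----BEGIN CERTIFICATE-----
      # client certificate PEM body elided
      -----END CERTIFICATE-----
    key: |
      -----BEGIN PRIVATE KEY-----
      # client key PEM body elided
      -----END PRIVATE KEY-----
```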

@@ -1,41 +0,0 @@
{{- if and .Values.scc.create (.Capabilities.APIVersions.Has "security.openshift.io/v1") }}
apiVersion: security.openshift.io/v1
kind: SecurityContextConstraints
metadata:
annotations:
kubernetes.io/description: |
This provides the minimum requirements for falco-exporter to run in OpenShift.
name: {{ template "falco-exporter.fullname" . }}
namespace: {{ .Release.Namespace }}
labels:
{{- include "falco-exporter.labels" . | nindent 4 }}
allowHostDirVolumePlugin: true
allowHostIPC: false
allowHostNetwork: false
allowHostPID: false
allowHostPorts: false
allowPrivilegeEscalation: false
allowPrivilegedContainer: false
allowedCapabilities: []
allowedUnsafeSysctls: []
defaultAddCapabilities: []
fsGroup:
type: RunAsAny
groups: []
priority: 0
readOnlyRootFilesystem: false
requiredDropCapabilities: []
runAsUser:
type: RunAsAny
seLinuxContext:
type: RunAsAny
seccompProfiles:
- '*'
supplementalGroups:
type: RunAsAny
users:
- system:serviceaccount:{{ .Release.Namespace }}:{{ include "falco-exporter.serviceAccountName" . }}
volumes:
- hostPath
- secret
{{- end }}

@@ -1,14 +0,0 @@
{{- if .Values.service.mTLS.enabled }}
apiVersion: v1
kind: Secret
metadata:
name: {{ include "falco-exporter.fullname" . }}-server-certs
namespace: {{ .Release.Namespace }}
labels:
{{- include "falco-exporter.labels" . | nindent 4 }}
type: Opaque
data:
server.crt: {{ .Values.service.mTLS.server.crt | b64enc | quote }}
server.key: {{ .Values.service.mTLS.server.key | b64enc | quote }}
ca.crt: {{ .Values.service.mTLS.ca.crt | b64enc | quote }}
{{- end }}
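Unlike the client-certs secret, all three keys here are rendered unconditionally, so enabling mTLS without supplying them would fail at template time. A sketch of the corresponding values (PEM bodies are placeholders):

```yaml
service:
  mTLS:
    enabled: true
    server:
      crt: |
        # server certificate PEM body elided
      key: |
        # server key PEM body elided
    ca:
      crt: |
        # CA certificate PEM body elided
```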

@@ -1,42 +0,0 @@
apiVersion: v1
kind: Service
metadata:
name: {{ include "falco-exporter.fullname" . }}
{{- if .Values.service.annotations }}
annotations:
{{ toYaml .Values.service.annotations | indent 4 }}
{{- end }}
labels:
{{- include "falco-exporter.labels" . | nindent 4 }}
{{- if .Values.service.labels }}
{{ toYaml .Values.service.labels | indent 4 }}
{{- end }}
namespace: {{ .Release.Namespace }}
spec:
{{- if .Values.service.clusterIP }}
clusterIP: {{ .Values.service.clusterIP }}
{{- end }}
{{- if .Values.service.externalIPs }}
externalIPs:
{{ toYaml .Values.service.externalIPs | indent 4 }}
{{- end }}
{{- if .Values.service.loadBalancerIP }}
loadBalancerIP: {{ .Values.service.loadBalancerIP }}
{{- end }}
{{- if .Values.service.loadBalancerSourceRanges }}
loadBalancerSourceRanges:
{{- range $cidr := .Values.service.loadBalancerSourceRanges }}
- {{ $cidr }}
{{- end }}
{{- end }}
type: {{ .Values.service.type }}
ports:
- port: {{ .Values.service.port }}
{{- if ( and (eq .Values.service.type "NodePort" ) (not (empty .Values.service.nodePort)) ) }}
nodePort: {{ .Values.service.nodePort }}
{{- end }}
targetPort: {{ .Values.service.targetPort }}
protocol: TCP
name: metrics
selector:
{{- include "falco-exporter.selectorLabels" . | nindent 4 }}
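The `nodePort` branch above only renders when the service type is `NodePort` and a port is set. A hypothetical values override exercising it (the port number is illustrative, not a chart default):

```yaml
service:
  type: NodePort     # enables the nodePort branch in the template
  port: 9376
  targetPort: 9376
  nodePort: 30937    # illustrative; must fall within the cluster's NodePort range
```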

@@ -1,13 +0,0 @@
{{- if .Values.serviceAccount.create -}}
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ include "falco-exporter.serviceAccountName" . }}
labels:
{{- include "falco-exporter.labels" . | nindent 4 }}
{{- with .Values.serviceAccount.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
namespace: {{ .Release.Namespace }}
{{- end -}}

@@ -1,27 +0,0 @@
{{- if and ( .Capabilities.APIVersions.Has "monitoring.coreos.com/v1" ) .Values.serviceMonitor.enabled }}
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: {{ include "falco-exporter.fullname" . }}
labels:
{{- include "falco-exporter.labels" . | nindent 4 }}
{{- range $key, $value := .Values.serviceMonitor.additionalLabels }}
{{ $key }}: {{ $value | quote }}
{{- end }}
namespace: {{ .Release.Namespace }}
spec:
endpoints:
- port: metrics
{{- if .Values.serviceMonitor.interval }}
interval: {{ .Values.serviceMonitor.interval }}
{{- end }}
{{- if .Values.serviceMonitor.scrapeTimeout }}
scrapeTimeout: {{ .Values.serviceMonitor.scrapeTimeout }}
{{- end }}
{{- with .Values.serviceMonitor.additionalProperties }}
{{- toYaml . | nindent 4 }}
{{- end }}
selector:
matchLabels:
{{- include "falco-exporter.selectorLabels" . | nindent 6 }}
{{- end }}
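The ServiceMonitor only renders when the `monitoring.coreos.com/v1` API is available and the value is enabled. A sketch of a values override, assuming a Prometheus Operator whose selector matches a `release` label (the label value is hypothetical):

```yaml
serviceMonitor:
  enabled: true
  interval: "30s"
  scrapeTimeout: "10s"
  additionalLabels:
    release: kube-prometheus-stack   # hypothetical; must match your Prometheus's serviceMonitorSelector
```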

@@ -1,15 +0,0 @@
apiVersion: v1
kind: Pod
metadata:
name: "{{ include "falco-exporter.fullname" . }}-test-connection"
labels:
{{- include "falco-exporter.labels" . | nindent 4 }}
annotations:
"helm.sh/hook": test-success
spec:
containers:
- name: wget
image: busybox
command: ['wget']
args: ['{{ include "falco-exporter.fullname" . }}:{{ .Values.service.port }}/metrics']
restartPolicy: Never

@@ -1,222 +0,0 @@
# Default values for falco-exporter.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
# -- service exposes the exporter service to be accessed from within the cluster.
service:
# -- type denotes the service type. Setting it to "ClusterIP" ensures that the service
# is accessible from within the cluster.
type: ClusterIP
# -- clusterIP is set to None, making this a headless service.
clusterIP: None
# -- port is the port on which the Service will listen.
port: 9376
# -- targetPort is the port on which the Pod is listening.
targetPort: 9376
# -- labels set of labels to be applied to the service.
labels: {}
# -- annotations set of annotations to be applied to the service.
annotations:
prometheus.io/scrape: "true"
prometheus.io/port: "9376"
# -- mTLS enables mutual TLS for the HTTP metrics server.
mTLS:
# -- enabled specifies whether the mTLS should be enabled.
enabled: false
# -- healthChecks contains the configuration for liveness and readiness probes.
healthChecks:
# -- livenessProbe is a diagnostic mechanism used to determine whether a container within a Pod is still running and healthy.
livenessProbe:
# -- probesPort is liveness probes port.
probesPort: 19376
# -- initialDelaySeconds tells the kubelet that it should wait X seconds before performing the first probe.
initialDelaySeconds: 60
# -- timeoutSeconds number of seconds after which the probe times out.
timeoutSeconds: 5
# -- periodSeconds specifies the interval at which the liveness probe will be repeated.
periodSeconds: 15
# -- readinessProbe is a mechanism used to determine whether a container within a Pod is ready to serve traffic.
readinessProbe:
# -- probesPort is the readiness probe port.
probesPort: 19376
# -- initialDelaySeconds tells the kubelet that it should wait X seconds before performing the first probe.
initialDelaySeconds: 30
# -- timeoutSeconds is the number of seconds after which the probe times out.
timeoutSeconds: 5
# -- periodSeconds specifies the interval at which the readiness probe will be repeated.
periodSeconds: 15
# -- image is the configuration for the exporter image.
image:
# -- registry is the image registry to pull from.
registry: docker.io
# -- repository is the image repository to pull from.
repository: falcosecurity/falco-exporter
# -- tag is the image tag to pull.
tag: "0.8.3"
# -- pullPolicy is the policy used to determine when a node should attempt to pull the container image.
pullPolicy: IfNotPresent
# -- imagePullSecrets is a list of secrets containing credentials used when pulling from private/secure registries.
imagePullSecrets: []
# -- nameOverride is the new name used to override the release name used for exporter's components.
nameOverride: ""
# -- fullnameOverride is the same as nameOverride but for the full name.
fullnameOverride: ""
# -- priorityClassName specifies the name of the PriorityClass for the pods.
priorityClassName: ""
# -- falco holds the configuration used to connect to Falco.
falco:
# -- grpcUnixSocketPath is the path to Falco's gRPC Unix socket.
grpcUnixSocketPath: "unix:///run/falco/falco.sock"
# -- grpcTimeout is the timeout value for the gRPC connection.
grpcTimeout: 2m
# -- serviceAccount is the configuration for the service account.
serviceAccount:
# -- create specifies whether a service account should be created.
create: true
# -- annotations to add to the service account.
annotations: {}
# -- name is the name of the service account to use.
# If not set and create is true, a name is generated using the fullname template.
# If set and create is false, an already existing serviceAccount must be provided.
name: ""
# -- podSecurityPolicy holds the security policy settings for the pod.
podSecurityPolicy:
# -- create specifies whether a PSP, Role and RoleBinding should be created
create: false
# -- annotations to add to the PSP, Role and RoleBinding
annotations: {}
# -- name of the PSP, Role and RoleBinding to use.
# If not set and create is true, a name is generated using the fullname template
name: ""
# -- podSecurityContext holds the pod-level security context settings.
podSecurityContext:
{}
# fsGroup: 2000
# -- daemonset holds the configuration for the daemonset.
daemonset:
# -- updateStrategy performs rolling updates by default in the DaemonSet agent.
# ref: https://kubernetes.io/docs/tasks/manage-daemon/update-daemon-set/
updateStrategy:
# type of the strategy. Can also customize maxUnavailable or minReadySeconds based on your needs.
type: RollingUpdate
# -- annotations to add to the DaemonSet pods.
annotations: {}
# -- podLabels labels to add to the pods.
podLabels: {}
# -- securityContext holds the security context for the daemonset.
securityContext:
# -- capabilities to be assigned to the daemonset.
capabilities:
drop:
- ALL
readOnlyRootFilesystem: true
allowPrivilegeEscalation: false
privileged: false
seccompProfile:
type: RuntimeDefault
# -- resources defines the computing resources (CPU and memory) that are allocated to the containers running within the Pod.
resources:
{}
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
# -- nodeSelector specifies a set of key-value pairs that must match labels assigned to nodes
# for the Pod to be eligible for scheduling on that node
nodeSelector: {}
# -- tolerations are applied to pods and allow them to be scheduled on nodes with matching taints.
tolerations:
- effect: NoSchedule
key: node-role.kubernetes.io/master
- effect: NoSchedule
key: node-role.kubernetes.io/control-plane
# -- affinity allows pod placement based on node characteristics, or any other custom labels assigned to nodes.
affinity: {}
# -- serviceMonitor holds the configuration for the ServiceMonitor CRD.
# A ServiceMonitor is a custom resource definition (CRD) used to configure how Prometheus should
# discover and scrape metrics from the exporter service.
serviceMonitor:
# -- enable the deployment of a Service Monitor for the Prometheus Operator.
enabled: false
# -- additionalLabels specifies labels to be added on the Service Monitor.
additionalLabels: {}
# -- interval specifies the time interval at which Prometheus should scrape metrics from the service.
interval: ""
# -- scrapeTimeout determines the maximum time Prometheus should wait for a target to respond to a scrape request.
# If the target does not respond within the specified timeout, Prometheus considers the scrape as failed for
# that target.
scrapeTimeout: ""
# -- additionalProperties allows setting additional properties on the endpoint such as relabelings, metricRelabelings etc.
additionalProperties: {}
# -- grafanaDashboard contains the configuration related to grafana dashboards.
grafanaDashboard:
# -- enabled specifies whether the dashboard should be deployed.
enabled: false
# -- folder creates the folder annotation that specifies where the dashboard is stored in Grafana.
folder: ""
# -- folderAnnotation sets the name of the annotation used by the Grafana sidecar to determine the dashboard folder.
folderAnnotation: "grafana_dashboard_folder"
# -- namespace specifies the namespace for the configmap.
namespace: default
# -- prometheusDatasourceName name of the data source.
prometheusDatasourceName: Prometheus
scc:
# -- create enables the creation of SecurityContextConstraints in OpenShift.
create: true
# -- prometheusRules holds the configuration for alerting on priority events.
prometheusRules:
# -- enabled specifies whether the prometheus rules should be deployed.
enabled: false
alerts:
warning:
enabled: true
rate_interval: "5m"
threshold: 0
for: "15m"
error:
enabled: true
rate_interval: "5m"
threshold: 0
for: "15m"
critical:
enabled: true
rate_interval: "5m"
threshold: 0
for: "15m"
alert:
enabled: true
rate_interval: "5m"
threshold: 0
for: "5m"
emergency:
enabled: true
rate_interval: "1m"
threshold: 0
for: "1m"
additionalAlerts: {}
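`additionalAlerts` is injected verbatim by the PrometheusRule template, so it can be set to a YAML string of extra rules. A hedged sketch (the alert name and expression are illustrative, not part of the chart):

```yaml
prometheusRules:
  enabled: true
  additionalAlerts: |
    - alert: FalcoNoEventsScraped
      expr: rate(falco_events[1h]) == 0
      for: 1h
      labels:
        severity: warning
```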

@@ -1,4 +1,6 @@
# Helm chart Breaking Changes
- [5.0.0](#500)
- [Default Falco Image](#default-falco-image)
- [4.0.0](#400)
- [Drivers](#drivers)
- [K8s Collector](#k8s-collector)
@@ -9,6 +11,21 @@
- [Falco Images](#drop-support-for-falcosecurityfalco-image)
- [Driver Loader Init Container](#driver-loader-simplified-logic)
## 6.0.0
### Falco Talon configuration changes
The following backward-incompatible changes have been made to `values.yaml`:
- `falcotalon` configuration has been renamed to `falco-talon`
- `falcotalon.enabled` has been renamed to `responseActions.enabled`
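In practice the rename means a pre-6.0.0 values file like the first snippet must be rewritten as the second (keys taken from the bullet points above; the layout is a sketch):

```yaml
# Before chart 6.0.0:
falcotalon:
  enabled: true

# From chart 6.0.0 onward:
responseActions:
  enabled: true
falco-talon: {}   # subchart configuration now lives under this key
```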
## 5.0.0
### Default Falco Image
**Starting with version 5.0.0, the Helm chart now uses the default Falco container image, which is a distroless image without any additional tools installed.**
Previously, the chart used the `debian` image with several tools included to avoid breaking changes during upgrades. The new image is more secure and lightweight, but it does not include these tools.
If you rely on some tool—for example, when using the `program_output` feature—you can manually override the `image.tag` value to use a different image flavor. For instance, setting `image.tag` to `0.41.0-debian` will restore access to the tools available in the Debian-based image.
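For example, pinning the Debian flavor described above is a one-line values override:

```yaml
image:
  tag: "0.41.0-debian"   # restores the tool-equipped Debian-based image flavor
```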
## 4.0.0
### Drivers
The `driver` section has been reworked based on the following PR: https://github.com/falcosecurity/falco/pull/2413.

@@ -3,6 +3,75 @@
This file documents all notable changes to Falco Helm Chart. The release
numbering uses [semantic versioning](http://semver.org).
## v6.2.2
* Bump container plugin to 0.3.5
* Bump k8smeta plugin to 0.3.1
## v6.2.1
* Bump container plugin to 0.3.3
## v6.2.0
* Switch to `collectors.containerEngine` configuration by default
* Update `collectors.containerEngine.engines` default values
* Fix containerd socket path configuration
* Address "container.name shows container.id" issue
* Address "Missing k8s.pod name, container.name, other metadata with k3s" issue
* Bump container plugin to 0.3.2
## v6.1.0
* feat(falco): Add possibility to custom falco pods hostname
## v6.0.2
* Bump Falco to 0.41.3
* Bump container plugin to 0.3.1
## v6.0.1
* Bump Falco to 0.41.2
* Bump container plugin to 0.3.0
## v6.0.0
* Rename Falco Talon configuration keys naming
## v5.0.3
* Bump container plugin to 0.2.6
## v5.0.2
* Bump container plugin to 0.2.5
* Bump Falco to 0.41.1
## v5.0.1
* Correct installation issue when both artifact installation and follow are enabled
## v5.0.0
* Bump falcoctl to 0.11.2
* Use default falco image flavor (wolfi) by default
## v4.22.0
* Bump Falco to 0.41.0;
* Bump falco rules to 4.0.0;
* Deprecate old container engines in favor of the new container plugin;
* Add support for the new container plugin;
* Update k8smeta plugin to 0.3.0;
* Update falco configuration;
## v4.21.2
* add falco-talon as falco subchart
## v4.21.1
* removed falco-exporter (now deprecated) references from the readme
## v4.21.0
* feat(falco): adding imagePullSecrets at the service account level

@@ -1,7 +1,7 @@
apiVersion: v2
name: falco
version: 4.21.0
appVersion: "0.40.0"
version: 6.2.2
appVersion: "0.41.3"
description: Falco
keywords:
- monitoring
@@ -26,3 +26,7 @@ dependencies:
version: 0.1.*
repository: https://falcosecurity.github.io/charts
condition: collectors.kubernetes.enabled
- name: falco-talon
version: 0.3.*
repository: https://falcosecurity.github.io/charts
condition: responseActions.enabled
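Because the dependency is guarded by a condition, the subchart is only pulled in when the corresponding value is set:

```yaml
responseActions:
  enabled: true   # satisfies the condition above and deploys the falco-talon subchart
```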

@@ -510,8 +510,6 @@ Moreover, Falco supports running a gRPC server with two main binding types:
- Over a local **Unix socket** with no authentication
- Over the **network** with mandatory mutual TLS authentication (mTLS)
> **Tip**: Once gRPC is enabled, you can deploy [falco-exporter](https://github.com/falcosecurity/falco-exporter) to export metrics to Prometheus.
### gRPC over unix socket (default)
The preferred way to use the gRPC server is over a Unix socket.

@@ -506,8 +506,6 @@ Moreover, Falco supports running a gRPC server with two main binding types:
- Over a local **Unix socket** with no authentication
- Over the **network** with mandatory mutual TLS authentication (mTLS)
> **Tip**: Once gRPC is enabled, you can deploy [falco-exporter](https://github.com/falcosecurity/falco-exporter) to export metrics to Prometheus.
### gRPC over unix socket (default)
The preferred way to use the gRPC server is over a Unix socket.
@@ -585,7 +583,7 @@ If you use a Proxy in your cluster, the requests between `Falco` and `Falcosidek
## Configuration
The following table lists the main configurable parameters of the falco chart v4.21.0 and their default values. See [values.yaml](./values.yaml) for full list.
The following table lists the main configurable parameters of the falco chart v6.2.2 and their default values. See [values.yaml](./values.yaml) for full list.
## Values
@@ -599,18 +597,28 @@ The following table lists the main configurable parameters of the falco chart v4
| certs.existingSecret | string | `""` | Existing secret containing the following key, crt and ca as well as the bundle pem. |
| certs.server.crt | string | `""` | Certificate used by gRPC and webserver. |
| certs.server.key | string | `""` | Key used by gRPC and webserver. |
| collectors.containerd.enabled | bool | `true` | Enable ContainerD support. |
| collectors.containerd.socket | string | `"/run/containerd/containerd.sock"` | The path of the ContainerD socket. |
| collectors.crio.enabled | bool | `true` | Enable CRI-O support. |
| collectors.containerEngine | object | `{"enabled":true,"engines":{"bpm":{"enabled":true},"containerd":{"enabled":true,"sockets":["/run/host-containerd/containerd.sock"]},"cri":{"enabled":true,"sockets":["/run/containerd/containerd.sock","/run/crio/crio.sock","/run/k3s/containerd/containerd.sock","/run/host-containerd/containerd.sock"]},"docker":{"enabled":true,"sockets":["/var/run/docker.sock"]},"libvirt_lxc":{"enabled":true},"lxc":{"enabled":true},"podman":{"enabled":true,"sockets":["/run/podman/podman.sock"]}},"hooks":["create"],"labelMaxLen":100,"pluginRef":"ghcr.io/falcosecurity/plugins/plugin/container:0.3.5","withSize":false}` | This collector is the new container engine collector that replaces the old docker, containerd, crio and podman collectors. It is designed to collect metadata from various container engines and provide a unified interface through the container plugin. When enabled, it will deploy the container plugin and use it to collect metadata from the container engines. Keep in mind that the old collectors (docker, containerd, crio, podman) will use the container plugin to collect metadata under the hood. |
| collectors.containerEngine.enabled | bool | `true` | Enable Container Engine support. |
| collectors.containerEngine.engines | object | `{"bpm":{"enabled":true},"containerd":{"enabled":true,"sockets":["/run/host-containerd/containerd.sock"]},"cri":{"enabled":true,"sockets":["/run/containerd/containerd.sock","/run/crio/crio.sock","/run/k3s/containerd/containerd.sock","/run/host-containerd/containerd.sock"]},"docker":{"enabled":true,"sockets":["/var/run/docker.sock"]},"libvirt_lxc":{"enabled":true},"lxc":{"enabled":true},"podman":{"enabled":true,"sockets":["/run/podman/podman.sock"]}}` | engines specify the container engines that will be used to collect metadata. See https://github.com/falcosecurity/plugins/blob/main/plugins/container/README.md#configuration |
| collectors.containerEngine.hooks | list | `["create"]` | hooks specify the hooks that will be used to collect metadata from the container engine. The available hooks are: create, start. |
| collectors.containerEngine.labelMaxLen | int | `100` | labelMaxLen is the maximum length of the labels that can be used in the container plugin. container labels larger than this value won't be collected. |
| collectors.containerEngine.pluginRef | string | `"ghcr.io/falcosecurity/plugins/plugin/container:0.3.5"` | pluginRef is the OCI reference for the container plugin. It could be a full reference such as "ghcr.io/falcosecurity/plugins/plugin/container:0.3.5". Or just name + tag: container:0.3.5. |
| collectors.containerEngine.withSize | bool | `false` | withSize specifies whether to enable container size inspection, which is inherently slow. |
| collectors.containerd | object | `{"enabled":false,"socket":"/run/host-containerd/containerd.sock"}` | This collector is deprecated and will be removed in the future. Please use the containerEngine collector instead. |
| collectors.containerd.enabled | bool | `false` | Enable ContainerD support. |
| collectors.containerd.socket | string | `"/run/host-containerd/containerd.sock"` | The path of the ContainerD socket. |
| collectors.crio | object | `{"enabled":false,"socket":"/run/crio/crio.sock"}` | This collector is deprecated and will be removed in the future. Please use the containerEngine collector instead. |
| collectors.crio.enabled | bool | `false` | Enable CRI-O support. |
| collectors.crio.socket | string | `"/run/crio/crio.sock"` | The path of the CRI-O socket. |
| collectors.docker.enabled | bool | `true` | Enable Docker support. |
| collectors.docker | object | `{"enabled":false,"socket":"/var/run/docker.sock"}` | This collector is deprecated and will be removed in the future. Please use the containerEngine collector instead. |
| collectors.docker.enabled | bool | `false` | Enable Docker support. |
| collectors.docker.socket | string | `"/var/run/docker.sock"` | The path of the Docker daemon socket. |
| collectors.enabled | bool | `true` | Enable/disable all the metadata collectors. |
| collectors.kubernetes | object | `{"collectorHostname":"","collectorPort":"","enabled":false,"hostProc":"/host","pluginRef":"ghcr.io/falcosecurity/plugins/plugin/k8smeta:0.2.1","verbosity":"info"}` | kubernetes holds the configuration for the kubernetes collector. Starting from version 0.37.0 of Falco, the legacy kubernetes client has been removed. A new standalone component named k8s-metacollector and a Falco plugin have been developed to solve the issues that were present in the old implementation. More info here: https://github.com/falcosecurity/falco/issues/2973 |
| collectors.kubernetes | object | `{"collectorHostname":"","collectorPort":"","enabled":false,"hostProc":"/host","pluginRef":"ghcr.io/falcosecurity/plugins/plugin/k8smeta:0.3.1","verbosity":"info"}` | kubernetes holds the configuration for the kubernetes collector. Starting from version 0.37.0 of Falco, the legacy kubernetes client has been removed. A new standalone component named k8s-metacollector and a Falco plugin have been developed to solve the issues that were present in the old implementation. More info here: https://github.com/falcosecurity/falco/issues/2973 |
| collectors.kubernetes.collectorHostname | string | `""` | collectorHostname is the address of the k8s-metacollector. When not specified it will be set to match the k8s-metacollector service, e.g. falco-k8smetacollector.falco.svc. If for any reason you need to override it, make sure to set the address of the k8s-metacollector here. It is used by the k8smeta plugin to connect to the k8s-metacollector. |
| collectors.kubernetes.collectorPort | string | `""` | collectorPort designates the port on which the k8s-metacollector gRPC service listens. If not specified the value of the port named `broker-grpc` in k8s-metacollector.service.ports is used. The default value is 45000. It is used by the k8smeta plugin to connect to the k8s-metacollector. |
| collectors.kubernetes.enabled | bool | `false` | enabled specifies whether the Kubernetes metadata should be collected using the k8smeta plugin and the k8s-metacollector component. It will deploy the k8s-metacollector external component that fetches Kubernetes metadata and pushes them to Falco instances. For more info see: https://github.com/falcosecurity/k8s-metacollector https://github.com/falcosecurity/charts/tree/master/charts/k8s-metacollector When this option is disabled, Falco falls back to the container annotations to grab the metadata. In such a case, only the ID, name, namespace, labels of the pod will be available. |
| collectors.kubernetes.pluginRef | string | `"ghcr.io/falcosecurity/plugins/plugin/k8smeta:0.2.1"` | pluginRef is the OCI reference for the k8smeta plugin. It could be a full reference such as: "ghcr.io/falcosecurity/plugins/plugin/k8smeta:0.1.0". Or just name + tag: k8smeta:0.1.0. |
| collectors.kubernetes.pluginRef | string | `"ghcr.io/falcosecurity/plugins/plugin/k8smeta:0.3.1"` | pluginRef is the OCI reference for the k8smeta plugin. It could be a full reference such as: "ghcr.io/falcosecurity/plugins/plugin/k8smeta:0.1.0". Or just name + tag: k8smeta:0.1.0. |
| containerSecurityContext | object | `{}` | Set securityContext for the Falco container. For more info see the "falco.securityContext" helper in "pod-template.tpl" |
| controller.annotations | object | `{}` | |
| controller.daemonset.updateStrategy.type | string | `"RollingUpdate"` | Perform rolling updates by default in the DaemonSet agent ref: https://kubernetes.io/docs/tasks/manage-daemon/update-daemon-set/ |
@@ -650,7 +658,8 @@ The following table lists the main configurable parameters of the falco chart v4
| extra.args | list | `[]` | Extra command-line arguments. |
| extra.env | list | `[]` | Extra environment variables that will be passed onto Falco containers. |
| extra.initContainers | list | `[]` | Additional initContainers for Falco pods. |
| falco.append_output | list | `[]` | |
| falco-talon | object | `{}` | It must be used in conjunction with the `responseActions.enabled` option. |
| falco.append_output[0].suggested_output | bool | `true` | |
| falco.base_syscalls | object | `{"custom_set":[],"repair":false}` | - [Suggestions] NOTE: setting `base_syscalls.repair: true` automates the following suggestions for you. These suggestions are subject to change as Falco and its state engine evolve. For execve* events: Some Falco fields for an execve* syscall are retrieved from the associated `clone`, `clone3`, `fork`, `vfork` syscalls when spawning a new process. The `close` syscall is used to purge file descriptors from Falco's internal thread / process cache table and is necessary for rules relating to file descriptors (e.g. open, openat, openat2, socket, connect, accept, accept4 ... and many more) Consider enabling the following syscalls in `base_syscalls.custom_set` for process rules: [clone, clone3, fork, vfork, execve, execveat, close] For networking related events: While you can log `connect` or `accept*` syscalls without the socket syscall, the log will not contain the ip tuples. Additionally, for `listen` and `accept*` syscalls, the `bind` syscall is also necessary. We recommend the following as the minimum set for networking-related rules: [clone, clone3, fork, vfork, execve, execveat, close, socket, bind, getsockopt] Lastly, for tracking the correct `uid`, `gid` or `sid`, `pgid` of a process when the running process opens a file or makes a network connection, consider adding the following to the above recommended syscall sets: ... setresuid, setsid, setuid, setgid, setpgid, setresgid, setsid, capset, chdir, chroot, fchdir ... |
| falco.buffered_outputs | bool | `false` | Enabling buffering for the output queue can offer performance optimization, efficient resource usage, and smoother data flow, resulting in a more reliable output mechanism. By default, buffering is disabled (false). |
| falco.config_files[0] | string | `"/etc/falco/config.d"` | |
@@ -669,7 +678,7 @@ The following table lists the main configurable parameters of the falco chart v4
| falco.grpc | object | `{"bind_address":"unix:///run/falco/falco.sock","enabled":false,"threadiness":0}` | gRPC server using a local unix socket |
| falco.grpc.threadiness | int | `0` | When the `threadiness` value is set to 0, Falco will automatically determine the appropriate number of threads based on the number of online cores in the system. |
| falco.grpc_output | object | `{"enabled":false}` | Use gRPC as an output service. gRPC is a modern and high-performance framework for remote procedure calls (RPC). It utilizes protocol buffers for efficient data serialization. The gRPC output in Falco provides a modern and efficient way to integrate with other systems. By default the setting is turned off. Enabling this option stores output events in memory until they are consumed by a gRPC client. Ensure that you have a consumer for the output events or leave it disabled. |
| falco.http_output | object | `{"ca_bundle":"","ca_cert":"","ca_path":"/etc/falco/certs/","client_cert":"/etc/falco/certs/client/client.crt","client_key":"/etc/falco/certs/client/client.key","compress_uploads":false,"echo":false,"enabled":false,"insecure":false,"keep_alive":false,"mtls":false,"url":"","user_agent":"falcosecurity/falco"}` | Send logs to an HTTP endpoint or webhook. |
| falco.http_output | object | `{"ca_bundle":"","ca_cert":"","ca_path":"/etc/falco/certs/","client_cert":"/etc/falco/certs/client/client.crt","client_key":"/etc/falco/certs/client/client.key","compress_uploads":false,"echo":false,"enabled":false,"insecure":false,"keep_alive":false,"max_consecutive_timeouts":5,"mtls":false,"url":"","user_agent":"falcosecurity/falco"}` | Send logs to an HTTP endpoint or webhook. |
| falco.http_output.ca_bundle | string | `""` | Path to a specific file that will be used as the CA certificate store. |
| falco.http_output.ca_cert | string | `""` | Path to the CA certificate that can verify the remote server. |
| falco.http_output.ca_path | string | `"/etc/falco/certs/"` | Path to a folder that will be used as the CA certificate store. CA certificates need to be stored as individual PEM files in this directory. |
@ -681,10 +690,11 @@ The following table lists the main configurable parameters of the falco chart v4
| falco.http_output.keep_alive | bool | `false` | keep_alive whether to keep alive the connection. |
| falco.http_output.mtls | bool | `false` | Tell Falco to use mTLS |
| falco.json_include_message_property | bool | `false` | |
| falco.json_include_output_fields_property | bool | `true` | |
| falco.json_include_output_property | bool | `true` | When using JSON output in Falco, you have the option to include the "output" property itself in the generated JSON output. The "output" property provides additional information about the purpose of the rule. To reduce the logging volume, it is recommended to turn it off if it's not necessary for your use case. |
| falco.json_include_tags_property | bool | `true` | When using JSON output in Falco, you have the option to include the "tags" field of the rules in the generated JSON output. The "tags" field provides additional metadata associated with the rule. To reduce the logging volume, if the tags associated with the rule are not needed for your use case or can be added at a later stage, it is recommended to turn it off. |
| falco.json_output | bool | `false` | When enabled, Falco will output alert messages and rules file loading/validation results in JSON format, making it easier for downstream programs to process and consume the data. By default, this option is disabled. |
| falco.libs_logger | object | `{"enabled":false,"severity":"debug"}` | The `libs_logger` setting in Falco determines the minimum log level to include in the logs related to the functioning of the software of the underlying `libs` library, which Falco utilizes. This setting is independent of the `priority` field of rules and the `log_level` setting that controls Falco's operational logs. It allows you to specify the desired log level for the `libs` library specifically, providing more granular control over the logging behavior of the underlying components used by Falco. Only logs of a certain severity level or higher will be emitted. Supported levels: "emergency", "alert", "critical", "error", "warning", "notice", "info", "debug". It is not recommended for production use. |
| falco.libs_logger | object | `{"enabled":true,"severity":"info"}` | The `libs_logger` setting in Falco determines the minimum log level to include in the logs related to the functioning of the software of the underlying `libs` library, which Falco utilizes. This setting is independent of the `priority` field of rules and the `log_level` setting that controls Falco's operational logs. It allows you to specify the desired log level for the `libs` library specifically, providing more granular control over the logging behavior of the underlying components used by Falco. Only logs of a certain severity level or higher will be emitted. Supported levels: "emergency", "alert", "critical", "error", "warning", "notice", "info", "debug". It is not recommended for production use. |
| falco.load_plugins | list | `[]` | Add here all plugins and their configuration. Please consult the plugins documentation for more info. Remember to add the plugin names in "load_plugins: []" in order to load them in Falco. |
| falco.log_level | string | `"info"` | The `log_level` setting determines the minimum log level to include in Falco's logs related to the functioning of the software. This setting is separate from the `priority` field of rules and specifically controls the log level of Falco's operational logging. By specifying a log level, you can control the verbosity of Falco's operational logs. Only logs of a certain severity level or higher will be emitted. Supported levels: "emergency", "alert", "critical", "error", "warning", "notice", "info", "debug". |
| falco.log_stderr | bool | `true` | Send information logs to stderr. Note these are *not* security notification logs! These are just Falco lifecycle (and possibly error) logs. |
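As an illustration of the `http_output` and JSON keys above, the following values fragment enables JSON alerts shipped to a webhook over mTLS. The endpoint URL is a placeholder, not a chart default; the certificate paths are the defaults shown in the table:

```yaml
# Illustrative values.yaml fragment combining the JSON and http_output keys.
falco:
  json_output: true
  json_include_output_property: true
  json_include_tags_property: false        # drop tags to reduce log volume
  http_output:
    enabled: true
    url: "https://example.internal/falco-events"   # placeholder endpoint
    mtls: true
    client_cert: /etc/falco/certs/client/client.crt
    client_key: /etc/falco/certs/client/client.key
    ca_path: /etc/falco/certs/
```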
@ -721,23 +731,23 @@ The following table lists the main configurable parameters of the falco chart v4
| falcoctl.artifact.install.mounts | object | `{"volumeMounts":[]}` | A list of volume mounts you want to add to the falcoctl-artifact-install init container. |
| falcoctl.artifact.install.resources | object | `{}` | Resources requests and limits for the falcoctl-artifact-install init container. |
| falcoctl.artifact.install.securityContext | object | `{}` | Security context for the falcoctl init container. |
| falcoctl.config | object | `{"artifact":{"allowedTypes":["rulesfile","plugin"],"follow":{"every":"6h","falcoversions":"http://localhost:8765/versions","pluginsDir":"/plugins","refs":["falco-rules:3"],"rulesfilesDir":"/rulesfiles"},"install":{"pluginsDir":"/plugins","refs":["falco-rules:3"],"resolveDeps":true,"rulesfilesDir":"/rulesfiles"}},"indexes":[{"name":"falcosecurity","url":"https://falcosecurity.github.io/falcoctl/index.yaml"}]}` | Configuration file of the falcoctl tool. It is saved in a configmap and mounted on the falcoctl containers. |
| falcoctl.config.artifact | object | `{"allowedTypes":["rulesfile","plugin"],"follow":{"every":"6h","falcoversions":"http://localhost:8765/versions","pluginsDir":"/plugins","refs":["falco-rules:3"],"rulesfilesDir":"/rulesfiles"},"install":{"pluginsDir":"/plugins","refs":["falco-rules:3"],"resolveDeps":true,"rulesfilesDir":"/rulesfiles"}}` | Configuration used by the artifact commands. |
| falcoctl.config | object | `{"artifact":{"allowedTypes":["rulesfile","plugin"],"follow":{"every":"6h","falcoversions":"http://localhost:8765/versions","pluginsDir":"/plugins","refs":["falco-rules:4"],"rulesfilesDir":"/rulesfiles"},"install":{"pluginsDir":"/plugins","refs":["falco-rules:4"],"resolveDeps":true,"rulesfilesDir":"/rulesfiles"}},"indexes":[{"name":"falcosecurity","url":"https://falcosecurity.github.io/falcoctl/index.yaml"}]}` | Configuration file of the falcoctl tool. It is saved in a configmap and mounted on the falcoctl containers. |
| falcoctl.config.artifact | object | `{"allowedTypes":["rulesfile","plugin"],"follow":{"every":"6h","falcoversions":"http://localhost:8765/versions","pluginsDir":"/plugins","refs":["falco-rules:4"],"rulesfilesDir":"/rulesfiles"},"install":{"pluginsDir":"/plugins","refs":["falco-rules:4"],"resolveDeps":true,"rulesfilesDir":"/rulesfiles"}}` | Configuration used by the artifact commands. |
| falcoctl.config.artifact.allowedTypes | list | `["rulesfile","plugin"]` | List of artifact types that falcoctl will handle. If the configured refs resolve to an artifact whose type is not contained in the list, it will refuse to download and install that artifact. |
| falcoctl.config.artifact.follow.every | string | `"6h"` | How often the tool checks for new versions of the followed artifacts. |
| falcoctl.config.artifact.follow.falcoversions | string | `"http://localhost:8765/versions"` | HTTP endpoint that serves the api versions of the Falco instance. It is used to check if the new versions are compatible with the running Falco instance. |
| falcoctl.config.artifact.follow.pluginsDir | string | `"/plugins"` | See the fields of the artifact.install section. |
| falcoctl.config.artifact.follow.refs | list | `["falco-rules:3"]` | List of artifacts to be followed by the falcoctl sidecar container. |
| falcoctl.config.artifact.follow.refs | list | `["falco-rules:4"]` | List of artifacts to be followed by the falcoctl sidecar container. |
| falcoctl.config.artifact.follow.rulesfilesDir | string | `"/rulesfiles"` | See the fields of the artifact.install section. |
| falcoctl.config.artifact.install.pluginsDir | string | `"/plugins"` | Directory where the plugins are saved. Same as `rulesfilesDir`, but for plugin artifacts. |
| falcoctl.config.artifact.install.refs | list | `["falco-rules:3"]` | List of artifacts to be installed by the falcoctl init container. |
| falcoctl.config.artifact.install.refs | list | `["falco-rules:4"]` | List of artifacts to be installed by the falcoctl init container. |
| falcoctl.config.artifact.install.resolveDeps | bool | `true` | Resolve the dependencies for artifacts. |
| falcoctl.config.artifact.install.rulesfilesDir | string | `"/rulesfiles"` | Directory where the rulesfiles are saved. The path is relative to the container, which in this case is an emptyDir mounted also by the Falco pod. |
| falcoctl.config.indexes | list | `[{"name":"falcosecurity","url":"https://falcosecurity.github.io/falcoctl/index.yaml"}]` | List of indexes that falcoctl downloads and uses to locate and download artifacts. For more info see: https://github.com/falcosecurity/falcoctl/blob/main/proposals/20220916-rules-and-plugin-distribution.md#index-file-overview |
| falcoctl.image.pullPolicy | string | `"IfNotPresent"` | The image pull policy. |
| falcoctl.image.registry | string | `"docker.io"` | The image registry to pull from. |
| falcoctl.image.repository | string | `"falcosecurity/falcoctl"` | The image repository to pull from. |
| falcoctl.image.tag | string | `"0.11.0"` | The image tag to pull. |
| falcoctl.image.tag | string | `"0.11.2"` | The image tag to pull. |
| falcosidekick | object | `{"enabled":false,"fullfqdn":false,"listenPort":""}` | For configuration values, see https://github.com/falcosecurity/charts/blob/master/charts/falcosidekick/values.yaml |
| falcosidekick.enabled | bool | `false` | Enable falcosidekick deployment. |
| falcosidekick.fullfqdn | bool | `false` | Enable usage of full FQDN of falcosidekick service (useful when a Proxy is used). |
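Putting the `falcoctl.config` keys above together, a values fragment that pins the rules artifact looks roughly like this. The values shown are the chart defaults from the table (after the bump to `falco-rules:4`), reproduced here only for orientation:

```yaml
# Sketch of the falcoctl configuration, mirroring the chart defaults above.
falcoctl:
  config:
    indexes:
      - name: falcosecurity
        url: https://falcosecurity.github.io/falcoctl/index.yaml
    artifact:
      allowedTypes: [rulesfile, plugin]
      install:
        refs: ["falco-rules:4"]
        resolveDeps: true
        rulesfilesDir: /rulesfiles
        pluginsDir: /plugins
      follow:
        every: 6h
        refs: ["falco-rules:4"]
```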
@ -789,12 +799,14 @@ The following table lists the main configurable parameters of the falco chart v4
| namespaceOverride | string | `""` | Override the deployment namespace |
| nodeSelector | object | `{}` | Selectors used to deploy Falco on a given node/nodes. |
| podAnnotations | object | `{}` | Add additional pod annotations |
| podHostname | string | `nil` | Override hostname in falco pod |
| podLabels | object | `{}` | Add additional pod labels |
| podPriorityClassName | string | `nil` | Set pod priorityClassName |
| podSecurityContext | object | `{}` | Set securityContext for the pods. These security settings are overridden by the ones specified for the specific containers when there is overlap. |
| rbac.create | bool | `true` | |
| resources.limits | object | `{"cpu":"1000m","memory":"1024Mi"}` | Maximum amount of resources that the Falco container can get. If you enable more than one source in Falco, consider increasing the CPU limits. |
| resources.requests | object | `{"cpu":"100m","memory":"512Mi"}` | Although the resources needed depend on the actual workload, we provide sane defaults. If you have more questions or concerns, please refer to the #falco Slack channel. |
| responseActions | object | `{"enabled":false}` | Enable the response actions using Falco Talon. |
| scc.create | bool | `true` | Create OpenShift's Security Context Constraint. |
| serviceAccount.annotations | object | `{}` | Annotations to add to the service account. |
| serviceAccount.create | bool | `true` | Specifies whether a service account should be created. |
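For example, the resource defaults listed in the table can be overridden in `values.yaml`; the numbers below are the chart defaults from the rows above, and the limits are the knob to raise when multiple event sources are enabled:

```yaml
# Chart-default resources for the Falco container; raise limits for multi-source setups.
resources:
  requests:
    cpu: 100m
    memory: 512Mi
  limits:
    cpu: 1000m
    memory: 1024Mi
```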


@ -1,16 +0,0 @@
# CI values for Falco.
# The following values will bypass the installation of the kernel module
# and disable the kernel space driver.
# disable the kernel space driver
driver:
enabled: false
# make Falco run in userspace only mode
extra:
args:
- --userspace
# enforce /proc mounting since Falco still tries to scan it
mounts:
enforceProcMount: true


@ -41,6 +41,7 @@ WARNING(drivers):
{{- if and (not (empty .Values.falco.load_plugins)) (or .Values.falcoctl.artifact.follow.enabled .Values.falcoctl.artifact.install.enabled) }}
WARNING:
{{ printf "It seems you are loading the following plugins %v, please make sure to install them by adding the correct reference to falcoctl.config.artifact.install.refs: %v" .Values.falco.load_plugins .Values.falcoctl.config.artifact.install.refs -}}
NOTICE:
{{ printf "It seems you are loading the following plugins %v, please make sure to install them by specifying the correct reference to falcoctl.config.artifact.install.refs: %v" .Values.falco.load_plugins .Values.falcoctl.config.artifact.install.refs -}}
{{ printf "Ignore this notice if the value of falcoctl.config.artifact.install.refs is correct already." -}}
{{- end }}


@ -89,7 +89,7 @@ Return the proper Falco image name
{{- . }}/
{{- end -}}
{{- .Values.image.repository }}:
{{- .Values.image.tag | default (printf "%s-debian" .Chart.AppVersion) -}}
{{- .Values.image.tag | default (printf "%s" .Chart.AppVersion) -}}
{{- end -}}
{{/*
@ -435,22 +435,127 @@ Based on the user input it populates the metrics configuration in the falco conf
{{- end -}}
{{/*
Based on the user input it populates the container_engines configuration in the falco config map.
This helper is used to add the container plugin to the falco configuration.
*/}}
{{- define "falco.containerEnginesConfiguration" -}}
{{- if .Values.collectors.enabled -}}
{{- $criSockets := list -}}
{{- $criEnabled := false }}
{{- $_ := set .Values.falco.container_engines "docker" (dict "enabled" .Values.collectors.docker.enabled) -}}
{{- if or .Values.collectors.crio.enabled .Values.collectors.containerd.enabled }}
{{- $criEnabled = true }}
{{ define "falco.containerPlugin" -}}
{{ if and .Values.driver.enabled .Values.collectors.enabled -}}
{{ if and (or .Values.collectors.docker.enabled .Values.collectors.crio.enabled .Values.collectors.containerd.enabled) .Values.collectors.containerEngine.enabled -}}
{{ fail "You can not enable any of the [docker, containerd, crio] collectors configuration and the containerEngine configuration at the same time. Please use the containerEngine configuration since the old configurations are deprecated." }}
{{ else if or .Values.collectors.docker.enabled .Values.collectors.crio.enabled .Values.collectors.containerd.enabled .Values.collectors.containerEngine.enabled -}}
{{ if or .Values.collectors.docker.enabled .Values.collectors.crio.enabled .Values.collectors.containerd.enabled -}}
{{ $_ := set .Values.collectors.containerEngine.engines.docker "enabled" .Values.collectors.docker.enabled -}}
{{ $_ = set .Values.collectors.containerEngine.engines.docker "sockets" (list .Values.collectors.docker.socket) -}}
{{ $_ = set .Values.collectors.containerEngine.engines.containerd "enabled" .Values.collectors.containerd.enabled -}}
{{ $_ = set .Values.collectors.containerEngine.engines.containerd "sockets" (list .Values.collectors.containerd.socket) -}}
{{ $_ = set .Values.collectors.containerEngine.engines.cri "enabled" .Values.collectors.crio.enabled -}}
{{ $_ = set .Values.collectors.containerEngine.engines.cri "sockets" (list .Values.collectors.crio.socket) -}}
{{ $_ = set .Values.collectors.containerEngine.engines.podman "enabled" false -}}
{{ $_ = set .Values.collectors.containerEngine.engines.lxc "enabled" false -}}
{{ $_ = set .Values.collectors.containerEngine.engines.libvirt_lxc "enabled" false -}}
{{ $_ = set .Values.collectors.containerEngine.engines.bpm "enabled" false -}}
{{ end -}}
{{ $hasConfig := false -}}
{{ range .Values.falco.plugins -}}
{{ if eq (get . "name") "container" -}}
{{ $hasConfig = true -}}
{{ end -}}
{{ end -}}
{{ if not $hasConfig -}}
{{ $pluginConfig := dict -}}
{{ with .Values.collectors.containerEngine -}}
{{ $pluginConfig = dict "name" "container" "library_path" "libcontainer.so" "init_config" (dict "label_max_len" .labelMaxLen "with_size" .withSize "hooks" .hooks "engines" .engines) -}}
{{ end -}}
{{ $newConfig := append .Values.falco.plugins $pluginConfig -}}
{{ $_ := set .Values.falco "plugins" ($newConfig | uniq) -}}
{{ $loadedPlugins := append .Values.falco.load_plugins "container" -}}
{{ $_ = set .Values.falco "load_plugins" ($loadedPlugins | uniq) -}}
{{ end -}}
{{ $_ := set .Values.falcoctl.config.artifact.install "refs" ((append .Values.falcoctl.config.artifact.install.refs .Values.collectors.containerEngine.pluginRef) | uniq) -}}
{{ $_ = set .Values.falcoctl.config.artifact "allowedTypes" ((append .Values.falcoctl.config.artifact.allowedTypes "plugin") | uniq) -}}
{{ end -}}
{{ end -}}
{{ end -}}
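Reading the `falco.containerPlugin` helper above: when a collector is enabled and no explicit `container` entry exists in `falco.plugins`, it appends one built from `collectors.containerEngine` and adds `container` to `load_plugins`. The rendered `falco.yaml` would then contain something along these lines; the key names come from the helper itself, while the concrete values are illustrative and depend on your `collectors.containerEngine.*` settings:

```yaml
# Sketch of the config the helper injects (values illustrative).
plugins:
  - name: container
    library_path: libcontainer.so
    init_config:
      label_max_len: 100          # from collectors.containerEngine.labelMaxLen
      with_size: false            # from collectors.containerEngine.withSize
      hooks: ["create"]           # from collectors.containerEngine.hooks
      engines:
        docker:
          enabled: true
          sockets: ["/var/run/docker.sock"]
load_plugins: [container]
```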
{{/*
This helper is used to add container plugin volumes to the falco pod.
*/}}
{{- define "falco.containerPluginVolumes" -}}
{{- if and .Values.driver.enabled .Values.collectors.enabled -}}
{{- if and (or .Values.collectors.docker.enabled .Values.collectors.crio.enabled .Values.collectors.containerd.enabled) .Values.collectors.containerEngine.enabled -}}
{{ fail "You can not enable any of the [docker, containerd, crio] collectors configuration and the containerEngine configuration at the same time. Please use the containerEngine configuration since the old configurations are deprecated." }}
{{- end -}}
{{ $volumes := list -}}
{{- if .Values.collectors.docker.enabled -}}
{{ $volumes = append $volumes (dict "name" "docker-socket" "hostPath" (dict "path" .Values.collectors.docker.socket)) -}}
{{- end -}}
{{- if .Values.collectors.crio.enabled -}}
{{ $volumes = append $volumes (dict "name" "crio-socket" "hostPath" (dict "path" .Values.collectors.crio.socket)) -}}
{{- end -}}
{{- if .Values.collectors.containerd.enabled -}}
{{- $criSockets = append $criSockets .Values.collectors.containerd.socket -}}
{{- end }}
{{ $volumes = append $volumes (dict "name" "containerd-socket" "hostPath" (dict "path" .Values.collectors.containerd.socket)) -}}
{{- end -}}
{{- if .Values.collectors.containerEngine.enabled -}}
{{- $seenPaths := dict -}}
{{- $idx := 0 -}}
{{- $engineOrder := list "docker" "podman" "containerd" "cri" "lxc" "libvirt_lxc" "bpm" -}}
{{- range $engineName := $engineOrder -}}
{{- $val := index $.Values.collectors.containerEngine.engines $engineName -}}
{{- if and $val $val.enabled -}}
{{- range $index, $socket := $val.sockets -}}
{{- $mountPath := print "/host" $socket -}}
{{- if not (hasKey $seenPaths $mountPath) -}}
{{ $volumes = append $volumes (dict "name" (printf "container-engine-socket-%d" $idx) "hostPath" (dict "path" $socket)) -}}
{{- $idx = add $idx 1 -}}
{{- $_ := set $seenPaths $mountPath true -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{- if gt (len $volumes) 0 -}}
{{ toYaml $volumes -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
This helper is used to add container plugin volumeMounts to the falco pod.
*/}}
{{- define "falco.containerPluginVolumeMounts" -}}
{{- if and .Values.driver.enabled .Values.collectors.enabled -}}
{{- if and (or .Values.collectors.docker.enabled .Values.collectors.crio.enabled .Values.collectors.containerd.enabled) .Values.collectors.containerEngine.enabled -}}
{{ fail "You can not enable any of the [docker, containerd, crio] collectors configuration and the containerEngine configuration at the same time. Please use the containerEngine configuration since the old configurations are deprecated." }}
{{- end -}}
{{ $volumeMounts := list -}}
{{- if .Values.collectors.docker.enabled -}}
{{ $volumeMounts = append $volumeMounts (dict "name" "docker-socket" "mountPath" (print "/host" .Values.collectors.docker.socket)) -}}
{{- end -}}
{{- if .Values.collectors.crio.enabled -}}
{{- $criSockets = append $criSockets .Values.collectors.crio.socket -}}
{{ $volumeMounts = append $volumeMounts (dict "name" "crio-socket" "mountPath" (print "/host" .Values.collectors.crio.socket)) -}}
{{- end -}}
{{- if .Values.collectors.containerd.enabled -}}
{{ $volumeMounts = append $volumeMounts (dict "name" "containerd-socket" "mountPath" (print "/host" .Values.collectors.containerd.socket)) -}}
{{- end -}}
{{- if .Values.collectors.containerEngine.enabled -}}
{{- $seenPaths := dict -}}
{{- $idx := 0 -}}
{{- $engineOrder := list "docker" "podman" "containerd" "cri" "lxc" "libvirt_lxc" "bpm" -}}
{{- range $engineName := $engineOrder -}}
{{- $val := index $.Values.collectors.containerEngine.engines $engineName -}}
{{- if and $val $val.enabled -}}
{{- range $index, $socket := $val.sockets -}}
{{- $mountPath := print "/host" $socket -}}
{{- if not (hasKey $seenPaths $mountPath) -}}
{{ $volumeMounts = append $volumeMounts (dict "name" (printf "container-engine-socket-%d" $idx) "mountPath" $mountPath) -}}
{{- $idx = add $idx 1 -}}
{{- $_ := set $seenPaths $mountPath true -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{- if gt (len $volumeMounts) 0 -}}
{{ toYaml ($volumeMounts) }}
{{- end -}}
{{- $_ = set .Values.falco.container_engines "cri" (dict "enabled" $criEnabled "sockets" $criSockets) -}}
{{- end -}}
{{- end -}}
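Taken together, the two helpers above render paired `volumes`/`volumeMounts` entries, one per unique socket path, named via the `container-engine-socket-%d` pattern and mounted under `/host`. For a single enabled engine socket the output would look roughly like this (socket path illustrative):

```yaml
# Sketch of the rendered pod spec fragments for one engine socket.
volumes:
  - name: container-engine-socket-0
    hostPath:
      path: /run/containerd/containerd.sock
volumeMounts:
  - name: container-engine-socket-0
    mountPath: /host/run/containerd/containerd.sock
```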


@ -11,5 +11,5 @@ data:
{{- include "k8smeta.configuration" . -}}
{{- include "falco.engineConfiguration" . -}}
{{- include "falco.metricsConfiguration" . -}}
{{- include "falco.containerEnginesConfiguration" . -}}
{{- include "falco.containerPlugin" . -}}
{{- toYaml .Values.falco | nindent 4 }}


@ -9,5 +9,6 @@ metadata:
data:
falcoctl.yaml: |-
{{- include "k8smeta.configuration" . -}}
{{- include "falco.containerPlugin" . -}}
{{- toYaml .Values.falcoctl.config | nindent 4 }}
{{- end }}


@ -27,6 +27,9 @@ metadata:
{{- toYaml . | nindent 4 }}
{{- end }}
spec:
{{- if .Values.falco.podHostname }}
hostname: {{ .Values.falco.podHostname }}
{{- end }}
serviceAccountName: {{ include "falco.serviceAccountName" . }}
{{- with .Values.podSecurityContext }}
securityContext:
@ -73,11 +76,6 @@ spec:
args:
- /usr/bin/falco
{{- include "falco.configSyscallSource" . | indent 8 }}
{{- with .Values.collectors }}
{{- if .enabled }}
- -pk
{{- end }}
{{- end }}
{{- with .Values.extra.args }}
{{- toYaml . | nindent 8 }}
{{- end }}
@ -123,6 +121,7 @@ spec:
{{- end }}
{{- end }}
volumeMounts:
{{- include "falco.containerPluginVolumeMounts" . | nindent 8 -}}
{{- if or .Values.falcoctl.artifact.install.enabled .Values.falcoctl.artifact.follow.enabled }}
{{- if has "rulesfile" .Values.falcoctl.config.artifact.allowedTypes }}
- mountPath: /etc/falco
@ -168,22 +167,6 @@ spec:
- name: debugfs
mountPath: /sys/kernel/debug
{{- end }}
{{- with .Values.collectors }}
{{- if .enabled }}
{{- if .docker.enabled }}
- mountPath: /host{{ dir .docker.socket }}
name: docker-socket
{{- end }}
{{- if .containerd.enabled }}
- mountPath: /host{{ dir .containerd.socket }}
name: containerd-socket
{{- end }}
{{- if .crio.enabled }}
- mountPath: /host{{ dir .crio.socket }}
name: crio-socket
{{- end }}
{{- end }}
{{- end }}
- mountPath: /etc/falco/falco.yaml
name: falco-yaml
subPath: falco.yaml
@ -233,6 +216,7 @@ spec:
{{- include "falcoctl.initContainer" . | nindent 4 }}
{{- end }}
volumes:
{{- include "falco.containerPluginVolumes" . | nindent 4 -}}
{{- if eq (include "driverLoader.enabled" .) "true" }}
- name: specialized-falco-configs
emptyDir: {}
@ -272,25 +256,6 @@ spec:
hostPath:
path: /sys/kernel/debug
{{- end }}
{{- with .Values.collectors }}
{{- if .enabled }}
{{- if .docker.enabled }}
- name: docker-socket
hostPath:
path: {{ dir .docker.socket }}
{{- end }}
{{- if .containerd.enabled }}
- name: containerd-socket
hostPath:
path: {{ dir .containerd.socket }}
{{- end }}
{{- if .crio.enabled }}
- name: crio-socket
hostPath:
path: {{ dir .crio.socket }}
{{- end }}
{{- end }}
{{- end }}
- name: proc-fs
hostPath:
path: /proc


@ -22,7 +22,8 @@ import (
"gopkg.in/yaml.v3"
)
func chartInfo(t *testing.T, chartPath string) (map[string]interface{}, error) {
// ChartInfo returns chart's information.
func ChartInfo(t *testing.T, chartPath string) (map[string]interface{}, error) {
// Get chart info.
output, err := helm.RunHelmCommandAndGetOutputE(t, &helm.Options{}, "show", "chart", chartPath)
if err != nil {


@ -16,7 +16,14 @@
package unit
const (
releaseName = "rendered-resources"
patternK8sMetacollectorFiles = `# Source: falco/charts/k8s-metacollector/templates/([^\n]+)`
k8sMetaPluginName = "k8smeta"
// ReleaseName is the name of the release we expect in the rendered resources.
ReleaseName = "rendered-resources"
// PatternK8sMetacollectorFiles is the regex pattern we expect to find in the rendered resources.
PatternK8sMetacollectorFiles = `# Source: falco/charts/k8s-metacollector/templates/([^\n]+)`
// K8sMetaPluginName is the name of the k8smeta plugin we expect in the falco configuration.
K8sMetaPluginName = "k8smeta"
// ContainerPluginName name of the container plugin we expect in the falco configuration.
ContainerPluginName = "container"
// ChartPath is the path to the chart.
ChartPath = "../../.."
)


@ -1,230 +0,0 @@
// SPDX-License-Identifier: Apache-2.0
// Copyright 2024 The Falco Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package unit
import (
"path/filepath"
"testing"
"gopkg.in/yaml.v3"
"github.com/gruntwork-io/terratest/modules/helm"
"github.com/stretchr/testify/require"
corev1 "k8s.io/api/core/v1"
)
type Config struct {
ContainerEngines ContainerEngines `yaml:"container_engines"`
}
type ContainerEngines struct {
Docker EngineConfig `yaml:"docker"`
Cri CriConfig `yaml:"cri"`
Podman EngineConfig `yaml:"podman"`
Lxc EngineConfig `yaml:"lxc"`
LibvirtLxc EngineConfig `yaml:"libvirt_lxc"`
Bpm EngineConfig `yaml:"bpm"`
}
type EngineConfig struct {
Enabled bool `yaml:"enabled"`
}
type CriConfig struct {
Enabled bool `yaml:"enabled"`
Sockets []string `yaml:"sockets"`
DisableAsync bool `yaml:"disable_async"`
}
func TestContainerEnginesConfig(t *testing.T) {
t.Parallel()
helmChartPath, err := filepath.Abs(chartPath)
require.NoError(t, err)
testCases := []struct {
name string
values map[string]string
expected func(t *testing.T, engines ContainerEngines)
}{
{
"defaultValues",
nil,
func(t *testing.T, engines ContainerEngines) {
require.True(t, engines.Docker.Enabled)
require.True(t, engines.Cri.Enabled)
require.False(t, engines.Cri.DisableAsync)
require.Contains(t, engines.Cri.Sockets, "/run/crio/crio.sock")
require.Contains(t, engines.Cri.Sockets, "/run/containerd/containerd.sock")
require.False(t, engines.Podman.Enabled)
require.False(t, engines.Lxc.Enabled)
require.False(t, engines.LibvirtLxc.Enabled)
require.False(t, engines.Bpm.Enabled)
},
},
{
"collectors disabled",
map[string]string{
"collectors.enabled": "false",
},
func(t *testing.T, engines ContainerEngines) {
require.False(t, engines.Docker.Enabled)
require.False(t, engines.Cri.Enabled)
require.False(t, engines.Cri.DisableAsync)
require.Contains(t, engines.Cri.Sockets, "/run/crio/crio.sock")
require.Contains(t, engines.Cri.Sockets, "/run/containerd/containerd.sock")
require.False(t, engines.Podman.Enabled)
require.False(t, engines.Lxc.Enabled)
require.False(t, engines.LibvirtLxc.Enabled)
require.False(t, engines.Bpm.Enabled)
},
},
{
"Disable containerd",
map[string]string{
"collectors.containerd.enabled": "false",
},
func(t *testing.T, engines ContainerEngines) {
require.True(t, engines.Docker.Enabled)
require.True(t, engines.Cri.Enabled)
require.False(t, engines.Cri.DisableAsync)
require.Len(t, engines.Cri.Sockets, 1)
require.Contains(t, engines.Cri.Sockets, "/run/crio/crio.sock")
require.NotContains(t, engines.Cri.Sockets, "/run/containerd/containerd.sock")
require.False(t, engines.Podman.Enabled)
require.False(t, engines.Lxc.Enabled)
require.False(t, engines.LibvirtLxc.Enabled)
require.False(t, engines.Bpm.Enabled)
},
},
{
"Customize containerd socket",
map[string]string{
"collectors.containerd.socket": "/var/run/containerd/my.socket",
},
func(t *testing.T, engines ContainerEngines) {
require.True(t, engines.Docker.Enabled)
require.True(t, engines.Cri.Enabled)
require.False(t, engines.Cri.DisableAsync)
require.Len(t, engines.Cri.Sockets, 2)
require.Contains(t, engines.Cri.Sockets, "/run/crio/crio.sock")
require.Contains(t, engines.Cri.Sockets, "/var/run/containerd/my.socket")
require.False(t, engines.Podman.Enabled)
require.False(t, engines.Lxc.Enabled)
require.False(t, engines.LibvirtLxc.Enabled)
require.False(t, engines.Bpm.Enabled)
},
},
{
"Disable docker",
map[string]string{
"collectors.docker.enabled": "false",
},
func(t *testing.T, engines ContainerEngines) {
require.False(t, engines.Docker.Enabled)
require.True(t, engines.Cri.Enabled)
require.False(t, engines.Cri.DisableAsync)
require.Len(t, engines.Cri.Sockets, 2)
require.Contains(t, engines.Cri.Sockets, "/run/crio/crio.sock")
require.Contains(t, engines.Cri.Sockets, "/run/containerd/containerd.sock")
require.False(t, engines.Podman.Enabled)
require.False(t, engines.Lxc.Enabled)
require.False(t, engines.LibvirtLxc.Enabled)
require.False(t, engines.Bpm.Enabled)
},
},
{
"Disable crio",
map[string]string{
"collectors.crio.enabled": "false",
},
func(t *testing.T, engines ContainerEngines) {
require.True(t, engines.Docker.Enabled)
require.True(t, engines.Cri.Enabled)
require.False(t, engines.Cri.DisableAsync)
require.Len(t, engines.Cri.Sockets, 1)
require.NotContains(t, engines.Cri.Sockets, "/run/crio/crio.sock")
require.Contains(t, engines.Cri.Sockets, "/run/containerd/containerd.sock")
require.False(t, engines.Podman.Enabled)
require.False(t, engines.Lxc.Enabled)
require.False(t, engines.LibvirtLxc.Enabled)
require.False(t, engines.Bpm.Enabled)
},
},
{
"Customize crio socket",
map[string]string{
"collectors.crio.socket": "/run/crio/my.socket",
},
func(t *testing.T, engines ContainerEngines) {
require.True(t, engines.Docker.Enabled)
require.True(t, engines.Cri.Enabled)
require.False(t, engines.Cri.DisableAsync)
require.Len(t, engines.Cri.Sockets, 2)
require.Contains(t, engines.Cri.Sockets, "/run/crio/my.socket")
require.Contains(t, engines.Cri.Sockets, "/run/containerd/containerd.sock")
require.False(t, engines.Podman.Enabled)
require.False(t, engines.Lxc.Enabled)
require.False(t, engines.LibvirtLxc.Enabled)
require.False(t, engines.Bpm.Enabled)
},
},
{
"Disable crio and containerd",
map[string]string{
"collectors.crio.enabled": "false",
"collectors.containerd.enabled": "false",
},
func(t *testing.T, engines ContainerEngines) {
require.True(t, engines.Docker.Enabled)
require.False(t, engines.Cri.Enabled)
require.False(t, engines.Cri.DisableAsync)
require.Len(t, engines.Cri.Sockets, 0)
require.NotContains(t, engines.Cri.Sockets, "/run/crio/my.socket")
require.NotContains(t, engines.Cri.Sockets, "/run/containerd/containerd.sock")
require.False(t, engines.Podman.Enabled)
require.False(t, engines.Lxc.Enabled)
require.False(t, engines.LibvirtLxc.Enabled)
require.False(t, engines.Bpm.Enabled)
},
},
}
for _, testCase := range testCases {
testCase := testCase
t.Run(testCase.name, func(t *testing.T) {
t.Parallel()
options := &helm.Options{SetValues: testCase.values}
output := helm.RenderTemplate(t, options, helmChartPath, releaseName, []string{"templates/configmap.yaml"})
var cm corev1.ConfigMap
helm.UnmarshalK8SYaml(t, output, &cm)
var config map[string]interface{}
helm.UnmarshalK8SYaml(t, cm.Data["falco.yaml"], &config)
engineConfigRaw := config["container_engines"]
engineConfigBytes, err := yaml.Marshal(engineConfigRaw)
require.NoError(t, err)
var containerEngines ContainerEngines
err = yaml.Unmarshal(engineConfigBytes, &containerEngines)
require.NoError(t, err)
testCase.expected(t, containerEngines)
})
}
}


@@ -1,190 +0,0 @@
package unit
import (
"path/filepath"
"testing"
"github.com/gruntwork-io/terratest/modules/helm"
"github.com/stretchr/testify/require"
appsv1 "k8s.io/api/apps/v1"
v1 "k8s.io/api/core/v1"
)
func TestContainerEngineSocketMounts(t *testing.T) {
helmChartPath, err := filepath.Abs(chartPath)
require.NoError(t, err)
testCases := []struct {
name string
values map[string]string
expected func(t *testing.T, volumes []v1.Volume, volumeMounts []v1.VolumeMount)
}{
{
"defaultValues",
nil,
func(t *testing.T, volumes []v1.Volume, volumeMounts []v1.VolumeMount) {
require.Contains(t, volumes, volume("crio-socket", "/run/crio/crio.sock"))
require.Contains(t, volumes, volume("containerd-socket", "/run/containerd/containerd.sock"))
require.Contains(t, volumes, volume("docker-socket", "/var/run/docker.sock"))
require.Contains(t, volumeMounts, volumeMount("crio-socket", "/host/run/crio/crio.sock"))
require.Contains(t, volumeMounts, volumeMount("containerd-socket", "/host/run/containerd/containerd.sock"))
require.Contains(t, volumeMounts, volumeMount("docker-socket", "/host/var/run/docker.sock"))
},
},
{
"disableCrioSocket",
map[string]string{"collectors.crio.enabled": "false"},
func(t *testing.T, volumes []v1.Volume, volumeMounts []v1.VolumeMount) {
require.NotContains(t, volumes, volume("crio-socket", "/run/crio/crio.sock"))
require.Contains(t, volumes, volume("containerd-socket", "/run/containerd/containerd.sock"))
require.Contains(t, volumes, volume("docker-socket", "/var/run/docker.sock"))
require.NotContains(t, volumeMounts, volumeMount("crio-socket", "/host/run/crio/crio.sock"))
require.Contains(t, volumeMounts, volumeMount("containerd-socket", "/host/run/containerd/containerd.sock"))
require.Contains(t, volumeMounts, volumeMount("docker-socket", "/host/var/run/docker.sock"))
},
},
{
"disableContainerdSocket",
map[string]string{"collectors.containerd.enabled": "false"},
func(t *testing.T, volumes []v1.Volume, volumeMounts []v1.VolumeMount) {
require.Contains(t, volumes, volume("crio-socket", "/run/crio/crio.sock"))
require.NotContains(t, volumes, volume("containerd-socket", "/run/containerd/containerd.sock"))
require.Contains(t, volumes, volume("docker-socket", "/var/run/docker.sock"))
require.Contains(t, volumeMounts, volumeMount("crio-socket", "/host/run/crio/crio.sock"))
require.NotContains(t, volumeMounts, volumeMount("containerd-socket", "/host/run/containerd/containerd.sock"))
require.Contains(t, volumeMounts, volumeMount("docker-socket", "/host/var/run/docker.sock"))
},
},
{
"disableDockerSocket",
map[string]string{"collectors.docker.enabled": "false"},
func(t *testing.T, volumes []v1.Volume, volumeMounts []v1.VolumeMount) {
require.Contains(t, volumes, volume("crio-socket", "/run/crio/crio.sock"))
require.Contains(t, volumes, volume("containerd-socket", "/run/containerd/containerd.sock"))
require.NotContains(t, volumes, volume("docker-socket", "/var/run/docker.sock"))
require.Contains(t, volumeMounts, volumeMount("crio-socket", "/host/run/crio/crio.sock"))
require.Contains(t, volumeMounts, volumeMount("containerd-socket", "/host/run/containerd/containerd.sock"))
require.NotContains(t, volumeMounts, volumeMount("docker-socket", "/host/var/run/docker.sock"))
},
},
{
"disableAllCollectors",
map[string]string{"collectors.enabled": "false"},
func(t *testing.T, volumes []v1.Volume, volumeMounts []v1.VolumeMount) {
require.NotContains(t, volumes, volume("crio-socket", "/run/crio/crio.sock"))
require.NotContains(t, volumes, volume("containerd-socket", "/run/containerd/containerd.sock"))
require.NotContains(t, volumes, volume("docker-socket", "/var/run/docker.sock"))
require.NotContains(t, volumeMounts, volumeMount("crio-socket", "/host/run/crio/crio.sock"))
require.NotContains(t, volumeMounts, volumeMount("containerd-socket", "/host/run/containerd/containerd.sock"))
require.NotContains(t, volumeMounts, volumeMount("docker-socket", "/host/var/run/docker.sock"))
},
},
{
"customCrioSocketPath",
map[string]string{"collectors.crio.socket": "/custom/path/crio.sock"},
func(t *testing.T, volumes []v1.Volume, volumeMounts []v1.VolumeMount) {
require.Contains(t, volumes, volume("crio-socket", "/custom/path/crio.sock"))
require.Contains(t, volumes, volume("containerd-socket", "/run/containerd/containerd.sock"))
require.Contains(t, volumes, volume("docker-socket", "/var/run/docker.sock"))
require.Contains(t, volumeMounts, volumeMount("crio-socket", "/host/custom/path/crio.sock"))
require.Contains(t, volumeMounts, volumeMount("containerd-socket", "/host/run/containerd/containerd.sock"))
require.Contains(t, volumeMounts, volumeMount("docker-socket", "/host/var/run/docker.sock"))
},
},
{
"customContainerdSocketPath",
map[string]string{"collectors.containerd.socket": "/custom/path/containerd.sock"},
func(t *testing.T, volumes []v1.Volume, volumeMounts []v1.VolumeMount) {
require.Contains(t, volumes, volume("crio-socket", "/run/crio/crio.sock"))
require.Contains(t, volumes, volume("containerd-socket", "/custom/path/containerd.sock"))
require.Contains(t, volumes, volume("docker-socket", "/var/run/docker.sock"))
require.Contains(t, volumeMounts, volumeMount("crio-socket", "/host/run/crio/crio.sock"))
require.Contains(t, volumeMounts, volumeMount("containerd-socket", "/host/custom/path/containerd.sock"))
require.Contains(t, volumeMounts, volumeMount("docker-socket", "/host/var/run/docker.sock"))
},
},
{
"customDockerSocketPath",
map[string]string{"collectors.docker.socket": "/custom/path/docker.sock"},
func(t *testing.T, volumes []v1.Volume, volumeMounts []v1.VolumeMount) {
require.Contains(t, volumes, volume("crio-socket", "/run/crio/crio.sock"))
require.Contains(t, volumes, volume("containerd-socket", "/run/containerd/containerd.sock"))
require.Contains(t, volumes, volume("docker-socket", "/custom/path/docker.sock"))
require.Contains(t, volumeMounts, volumeMount("crio-socket", "/host/run/crio/crio.sock"))
require.Contains(t, volumeMounts, volumeMount("containerd-socket", "/host/run/containerd/containerd.sock"))
require.Contains(t, volumeMounts, volumeMount("docker-socket", "/host/custom/path/docker.sock"))
},
},
}
for _, testCase := range testCases {
testCase := testCase
t.Run(testCase.name, func(t *testing.T) {
t.Parallel()
options := &helm.Options{SetValues: testCase.values}
output := helm.RenderTemplate(t, options, helmChartPath, releaseName, []string{"templates/daemonset.yaml"})
var ds appsv1.DaemonSet
helm.UnmarshalK8SYaml(t, output, &ds)
for i := range ds.Spec.Template.Spec.Containers {
if ds.Spec.Template.Spec.Containers[i].Name == "falco" {
testCase.expected(t, ds.Spec.Template.Spec.Volumes, ds.Spec.Template.Spec.Containers[i].VolumeMounts)
return
}
}
})
}
}
func volume(name, path string) v1.Volume {
return v1.Volume{
Name: name,
VolumeSource: v1.VolumeSource{
HostPath: &v1.HostPathVolumeSource{
Path: filepath.Dir(path),
Type: nil,
},
EmptyDir: nil,
GCEPersistentDisk: nil,
AWSElasticBlockStore: nil,
GitRepo: nil,
Secret: nil,
NFS: nil,
ISCSI: nil,
Glusterfs: nil,
PersistentVolumeClaim: nil,
RBD: nil,
FlexVolume: nil,
Cinder: nil,
CephFS: nil,
Flocker: nil,
DownwardAPI: nil,
FC: nil,
AzureFile: nil,
ConfigMap: nil,
VsphereVolume: nil,
Quobyte: nil,
AzureDisk: nil,
PhotonPersistentDisk: nil,
Projected: nil,
PortworxVolume: nil,
ScaleIO: nil,
StorageOS: nil,
CSI: nil,
Ephemeral: nil,
},
}
}
func volumeMount(name, path string) v1.VolumeMount {
return v1.VolumeMount{
Name: name,
ReadOnly: false,
MountPath: filepath.Dir(path),
SubPath: "",
MountPropagation: nil,
SubPathExpr: "",
}
}


@@ -0,0 +1,13 @@
package containerPlugin
var volumeNames = []string{
"docker-socket",
"containerd-socket",
"crio-socket",
"container-engine-socket-0",
"container-engine-socket-1",
"container-engine-socket-2",
"container-engine-socket-3",
"container-engine-socket-4",
"container-engine-socket-5",
}


@@ -0,0 +1,767 @@
package containerPlugin
import (
"path/filepath"
"slices"
"testing"
"github.com/stretchr/testify/require"
"gopkg.in/yaml.v3"
corev1 "k8s.io/api/core/v1"
"github.com/falcosecurity/charts/charts/falco/tests/unit"
"github.com/gruntwork-io/terratest/modules/helm"
)
func TestContainerPluginConfiguration(t *testing.T) {
t.Parallel()
helmChartPath, err := filepath.Abs(unit.ChartPath)
require.NoError(t, err)
testCases := []struct {
name string
values map[string]string
expected func(t *testing.T, config any)
}{
{
"defaultValues",
nil,
func(t *testing.T, config any) {
plugin := config.(map[string]interface{})
initConfig, ok := plugin["init_config"]
require.True(t, ok)
initConfigMap := initConfig.(map[string]interface{})
// Check engines configurations.
engines, ok := initConfigMap["engines"].(map[string]interface{})
require.True(t, ok, "checking if engines section exists")
require.Len(t, engines, 7, "checking number of engines")
var engineConfig ContainerEngineConfig
// Unmarshal the engines configuration.
data, err := yaml.Marshal(engines)
require.NoError(t, err)
err = yaml.Unmarshal(data, &engineConfig)
require.NoError(t, err)
// Check the default values for each engine.
require.True(t, engineConfig.Docker.Enabled)
require.Equal(t, []string{"/var/run/docker.sock"}, engineConfig.Docker.Sockets)
require.True(t, engineConfig.Podman.Enabled)
require.Equal(t, []string{"/run/podman/podman.sock"}, engineConfig.Podman.Sockets)
require.True(t, engineConfig.Containerd.Enabled)
require.Equal(t, []string{"/run/host-containerd/containerd.sock"}, engineConfig.Containerd.Sockets)
require.True(t, engineConfig.CRI.Enabled)
require.Equal(t, []string{"/run/containerd/containerd.sock", "/run/crio/crio.sock", "/run/k3s/containerd/containerd.sock", "/run/host-containerd/containerd.sock"}, engineConfig.CRI.Sockets)
require.True(t, engineConfig.LXC.Enabled)
require.True(t, engineConfig.LibvirtLXC.Enabled)
require.True(t, engineConfig.BPM.Enabled)
},
},
{
name: "changeDockerSocket",
values: map[string]string{
"collectors.containerEngine.engines.docker.enabled": "true",
"collectors.containerEngine.engines.docker.sockets[0]": "/custom/docker.sock",
},
expected: func(t *testing.T, config any) {
plugin := config.(map[string]interface{})
initConfig, ok := plugin["init_config"]
require.True(t, ok)
initConfigMap := initConfig.(map[string]interface{})
engines, ok := initConfigMap["engines"].(map[string]interface{})
require.True(t, ok)
var engineConfig ContainerEngineConfig
data, err := yaml.Marshal(engines)
require.NoError(t, err)
err = yaml.Unmarshal(data, &engineConfig)
require.NoError(t, err)
require.True(t, engineConfig.Docker.Enabled)
require.Equal(t, []string{"/custom/docker.sock"}, engineConfig.Docker.Sockets)
},
},
{
name: "changeCriSocket",
values: map[string]string{
"collectors.containerEngine.engines.cri.enabled": "true",
"collectors.containerEngine.engines.cri.sockets[0]": "/custom/cri.sock",
},
expected: func(t *testing.T, config any) {
plugin := config.(map[string]interface{})
initConfig, ok := plugin["init_config"]
require.True(t, ok)
initConfigMap := initConfig.(map[string]interface{})
engines, ok := initConfigMap["engines"].(map[string]interface{})
require.True(t, ok)
var engineConfig ContainerEngineConfig
data, err := yaml.Marshal(engines)
require.NoError(t, err)
err = yaml.Unmarshal(data, &engineConfig)
require.NoError(t, err)
require.True(t, engineConfig.CRI.Enabled)
require.Equal(t, []string{"/custom/cri.sock"}, engineConfig.CRI.Sockets)
},
},
{
name: "disableDockerSocket",
values: map[string]string{
"collectors.containerEngine.engines.docker.enabled": "false",
},
expected: func(t *testing.T, config any) {
plugin := config.(map[string]interface{})
initConfig, ok := plugin["init_config"]
require.True(t, ok)
initConfigMap := initConfig.(map[string]interface{})
engines, ok := initConfigMap["engines"].(map[string]interface{})
require.True(t, ok)
var engineConfig ContainerEngineConfig
data, err := yaml.Marshal(engines)
require.NoError(t, err)
err = yaml.Unmarshal(data, &engineConfig)
require.NoError(t, err)
require.False(t, engineConfig.Docker.Enabled)
},
},
{
name: "disableCriSocket",
values: map[string]string{
"collectors.containerEngine.engines.cri.enabled": "false",
},
expected: func(t *testing.T, config any) {
plugin := config.(map[string]interface{})
initConfig, ok := plugin["init_config"]
require.True(t, ok)
initConfigMap := initConfig.(map[string]interface{})
engines, ok := initConfigMap["engines"].(map[string]interface{})
require.True(t, ok)
var engineConfig ContainerEngineConfig
data, err := yaml.Marshal(engines)
require.NoError(t, err)
err = yaml.Unmarshal(data, &engineConfig)
require.NoError(t, err)
require.False(t, engineConfig.CRI.Enabled)
},
},
{
name: "changeContainerdSocket",
values: map[string]string{
"collectors.containerEngine.engines.containerd.enabled": "true",
"collectors.containerEngine.engines.containerd.sockets[0]": "/custom/containerd.sock",
},
expected: func(t *testing.T, config any) {
plugin := config.(map[string]interface{})
initConfig, ok := plugin["init_config"]
require.True(t, ok)
initConfigMap := initConfig.(map[string]interface{})
engines, ok := initConfigMap["engines"].(map[string]interface{})
require.True(t, ok)
var engineConfig ContainerEngineConfig
data, err := yaml.Marshal(engines)
require.NoError(t, err)
err = yaml.Unmarshal(data, &engineConfig)
require.NoError(t, err)
require.True(t, engineConfig.Containerd.Enabled)
require.Equal(t, []string{"/custom/containerd.sock"}, engineConfig.Containerd.Sockets)
},
},
{
name: "disableContainerdSocket",
values: map[string]string{
"collectors.containerEngine.engines.containerd.enabled": "false",
},
expected: func(t *testing.T, config any) {
plugin := config.(map[string]interface{})
initConfig, ok := plugin["init_config"]
require.True(t, ok)
initConfigMap := initConfig.(map[string]interface{})
engines, ok := initConfigMap["engines"].(map[string]interface{})
require.True(t, ok)
var engineConfig ContainerEngineConfig
data, err := yaml.Marshal(engines)
require.NoError(t, err)
err = yaml.Unmarshal(data, &engineConfig)
require.NoError(t, err)
require.False(t, engineConfig.Containerd.Enabled)
},
},
{
name: "defaultContainerEngineConfig",
values: map[string]string{},
expected: func(t *testing.T, config any) {
plugin := config.(map[string]interface{})
initConfig, ok := plugin["init_config"]
require.True(t, ok)
initConfigMap := initConfig.(map[string]interface{})
require.Equal(t, float64(100), initConfigMap["label_max_len"])
require.False(t, initConfigMap["with_size"].(bool))
hooks := initConfigMap["hooks"].([]interface{})
require.Len(t, hooks, 1)
require.Contains(t, hooks, "create")
engines := initConfigMap["engines"].(map[string]interface{})
var engineConfig ContainerEngineConfig
data, err := yaml.Marshal(engines)
require.NoError(t, err)
err = yaml.Unmarshal(data, &engineConfig)
require.NoError(t, err)
// Check default engine configurations
require.True(t, engineConfig.Docker.Enabled)
require.Equal(t, []string{"/var/run/docker.sock"}, engineConfig.Docker.Sockets)
require.True(t, engineConfig.Podman.Enabled)
require.Equal(t, []string{"/run/podman/podman.sock"}, engineConfig.Podman.Sockets)
require.True(t, engineConfig.Containerd.Enabled)
require.Equal(t, []string{"/run/host-containerd/containerd.sock"}, engineConfig.Containerd.Sockets)
require.True(t, engineConfig.CRI.Enabled)
require.Equal(t, []string{"/run/containerd/containerd.sock", "/run/crio/crio.sock", "/run/k3s/containerd/containerd.sock", "/run/host-containerd/containerd.sock"}, engineConfig.CRI.Sockets)
require.True(t, engineConfig.LXC.Enabled)
require.True(t, engineConfig.LibvirtLXC.Enabled)
require.True(t, engineConfig.BPM.Enabled)
},
},
{
name: "customContainerEngineConfig",
values: map[string]string{
"collectors.docker.enabled": "false",
"collectors.containerd.enabled": "false",
"collectors.crio.enabled": "false",
"collectors.containerEngine.enabled": "true",
"collectors.containerEngine.labelMaxLen": "200",
"collectors.containerEngine.withSize": "true",
"collectors.containerEngine.hooks[0]": "create",
"collectors.containerEngine.hooks[1]": "start",
"collectors.containerEngine.engines.docker.enabled": "false",
"collectors.containerEngine.engines.podman.enabled": "false",
"collectors.containerEngine.engines.containerd.sockets[0]": "/custom/containerd.sock",
"collectors.containerEngine.engines.cri.sockets[0]": "/custom/crio.sock",
"collectors.containerEngine.engines.lxc.enabled": "false",
"collectors.containerEngine.engines.libvirt_lxc.enabled": "false",
"collectors.containerEngine.engines.bpm.enabled": "false",
},
expected: func(t *testing.T, config any) {
plugin := config.(map[string]interface{})
initConfig, ok := plugin["init_config"]
require.True(t, ok)
initConfigMap := initConfig.(map[string]interface{})
require.Equal(t, float64(200), initConfigMap["label_max_len"])
require.True(t, initConfigMap["with_size"].(bool))
hooks := initConfigMap["hooks"].([]interface{})
require.Len(t, hooks, 2)
require.Contains(t, hooks, "create")
require.Contains(t, hooks, "start")
engines := initConfigMap["engines"].(map[string]interface{})
var engineConfig ContainerEngineConfig
data, err := yaml.Marshal(engines)
require.NoError(t, err)
err = yaml.Unmarshal(data, &engineConfig)
require.NoError(t, err)
// Check custom engine configurations
require.False(t, engineConfig.Docker.Enabled)
require.False(t, engineConfig.Podman.Enabled)
require.True(t, engineConfig.Containerd.Enabled)
require.Equal(t, []string{"/custom/containerd.sock"}, engineConfig.Containerd.Sockets)
require.True(t, engineConfig.CRI.Enabled)
require.Equal(t, []string{"/custom/crio.sock"}, engineConfig.CRI.Sockets)
require.False(t, engineConfig.LXC.Enabled)
require.False(t, engineConfig.LibvirtLXC.Enabled)
require.False(t, engineConfig.BPM.Enabled)
},
},
{
name: "customDockerEngineConfigInContainerEngine",
values: map[string]string{
"collectors.docker.enabled": "false",
"collectors.containerd.enabled": "false",
"collectors.crio.enabled": "false",
"collectors.containerEngine.enabled": "true",
"collectors.containerEngine.engines.docker.enabled": "false",
"collectors.containerEngine.engines.docker.sockets[0]": "/custom/docker.sock",
"collectors.containerEngine.engines.docker.sockets[1]": "/custom/docker.sock2",
},
expected: func(t *testing.T, config any) {
plugin := config.(map[string]interface{})
initConfig, ok := plugin["init_config"]
require.True(t, ok)
initConfigMap := initConfig.(map[string]interface{})
engines, ok := initConfigMap["engines"].(map[string]interface{})
require.True(t, ok)
var engineConfig ContainerEngineConfig
data, err := yaml.Marshal(engines)
require.NoError(t, err)
err = yaml.Unmarshal(data, &engineConfig)
require.NoError(t, err)
// Check Docker engine configuration
require.False(t, engineConfig.Docker.Enabled)
require.Equal(t, []string{"/custom/docker.sock", "/custom/docker.sock2"}, engineConfig.Docker.Sockets)
},
},
{
name: "customContainerdEngineConfigInContainerEngine",
values: map[string]string{
"collectors.docker.enabled": "false",
"collectors.containerd.enabled": "false",
"collectors.crio.enabled": "false",
"collectors.containerEngine.enabled": "true",
"collectors.containerEngine.engines.containerd.enabled": "false",
"collectors.containerEngine.engines.containerd.sockets[0]": "/custom/containerd.sock",
"collectors.containerEngine.engines.containerd.sockets[1]": "/custom/containerd.sock2",
},
expected: func(t *testing.T, config any) {
plugin := config.(map[string]interface{})
initConfig, ok := plugin["init_config"]
require.True(t, ok)
initConfigMap := initConfig.(map[string]interface{})
engines, ok := initConfigMap["engines"].(map[string]interface{})
require.True(t, ok)
var engineConfig ContainerEngineConfig
data, err := yaml.Marshal(engines)
require.NoError(t, err)
err = yaml.Unmarshal(data, &engineConfig)
require.NoError(t, err)
// Check Containerd engine configuration
require.False(t, engineConfig.Containerd.Enabled)
require.Equal(t, []string{"/custom/containerd.sock", "/custom/containerd.sock2"}, engineConfig.Containerd.Sockets)
},
},
{
name: "customPodmanEngineConfigInContainerEngine",
values: map[string]string{
"collectors.docker.enabled": "false",
"collectors.containerd.enabled": "false",
"collectors.crio.enabled": "false",
"collectors.containerEngine.enabled": "true",
"collectors.containerEngine.engines.podman.enabled": "true",
"collectors.containerEngine.engines.podman.sockets[0]": "/custom/podman.sock",
"collectors.containerEngine.engines.podman.sockets[1]": "/custom/podman.sock2",
},
expected: func(t *testing.T, config any) {
plugin := config.(map[string]interface{})
initConfig, ok := plugin["init_config"]
require.True(t, ok)
initConfigMap := initConfig.(map[string]interface{})
engines, ok := initConfigMap["engines"].(map[string]interface{})
require.True(t, ok)
var engineConfig ContainerEngineConfig
data, err := yaml.Marshal(engines)
require.NoError(t, err)
err = yaml.Unmarshal(data, &engineConfig)
require.NoError(t, err)
// Check Podman engine configuration
require.True(t, engineConfig.Podman.Enabled)
require.Equal(t, []string{"/custom/podman.sock", "/custom/podman.sock2"}, engineConfig.Podman.Sockets)
},
},
{
name: "customCRIEngineConfigInContainerEngine",
values: map[string]string{
"collectors.docker.enabled": "false",
"collectors.containerd.enabled": "false",
"collectors.crio.enabled": "false",
"collectors.containerEngine.enabled": "true",
"collectors.containerEngine.engines.cri.enabled": "true",
"collectors.containerEngine.engines.cri.sockets[0]": "/custom/cri.sock",
"collectors.containerEngine.engines.cri.sockets[1]": "/custom/cri.sock2",
},
expected: func(t *testing.T, config any) {
plugin := config.(map[string]interface{})
initConfig, ok := plugin["init_config"]
require.True(t, ok)
initConfigMap := initConfig.(map[string]interface{})
engines, ok := initConfigMap["engines"].(map[string]interface{})
require.True(t, ok)
var engineConfig ContainerEngineConfig
data, err := yaml.Marshal(engines)
require.NoError(t, err)
err = yaml.Unmarshal(data, &engineConfig)
require.NoError(t, err)
// Check CRI engine configuration
require.True(t, engineConfig.CRI.Enabled)
require.Equal(t, []string{"/custom/cri.sock", "/custom/cri.sock2"}, engineConfig.CRI.Sockets)
},
},
{
name: "customLXCEngineConfigInContainerEngine",
values: map[string]string{
"collectors.docker.enabled": "false",
"collectors.containerd.enabled": "false",
"collectors.crio.enabled": "false",
"collectors.containerEngine.enabled": "true",
"collectors.containerEngine.engines.lxc.enabled": "true",
},
expected: func(t *testing.T, config any) {
plugin := config.(map[string]interface{})
initConfig, ok := plugin["init_config"]
require.True(t, ok)
initConfigMap := initConfig.(map[string]interface{})
engines, ok := initConfigMap["engines"].(map[string]interface{})
require.True(t, ok)
var engineConfig ContainerEngineConfig
data, err := yaml.Marshal(engines)
require.NoError(t, err)
err = yaml.Unmarshal(data, &engineConfig)
require.NoError(t, err)
// Check LXC engine configuration
require.True(t, engineConfig.LXC.Enabled)
},
},
{
name: "customLibvirtLXCEngineConfigInContainerEngine",
values: map[string]string{
"collectors.docker.enabled": "false",
"collectors.containerd.enabled": "false",
"collectors.crio.enabled": "false",
"collectors.containerEngine.enabled": "true",
"collectors.containerEngine.engines.libvirt_lxc.enabled": "true",
},
expected: func(t *testing.T, config any) {
plugin := config.(map[string]interface{})
initConfig, ok := plugin["init_config"]
require.True(t, ok)
initConfigMap := initConfig.(map[string]interface{})
engines, ok := initConfigMap["engines"].(map[string]interface{})
require.True(t, ok)
var engineConfig ContainerEngineConfig
data, err := yaml.Marshal(engines)
require.NoError(t, err)
err = yaml.Unmarshal(data, &engineConfig)
require.NoError(t, err)
// Check LibvirtLXC engine configuration
require.True(t, engineConfig.LibvirtLXC.Enabled)
},
},
{
name: "customBPMEngineConfigInContainerEngine",
values: map[string]string{
"collectors.docker.enabled": "false",
"collectors.containerd.enabled": "false",
"collectors.crio.enabled": "false",
"collectors.containerEngine.enabled": "true",
"collectors.containerEngine.engines.bpm.enabled": "true",
},
expected: func(t *testing.T, config any) {
plugin := config.(map[string]interface{})
initConfig, ok := plugin["init_config"]
require.True(t, ok)
initConfigMap := initConfig.(map[string]interface{})
engines, ok := initConfigMap["engines"].(map[string]interface{})
require.True(t, ok)
var engineConfig ContainerEngineConfig
data, err := yaml.Marshal(engines)
require.NoError(t, err)
err = yaml.Unmarshal(data, &engineConfig)
require.NoError(t, err)
// Check BPM engine configuration
require.True(t, engineConfig.BPM.Enabled)
},
},
{
name: "allCollectorsDisabled",
values: map[string]string{
"collectors.docker.enabled": "false",
"collectors.containerd.enabled": "false",
"collectors.crio.enabled": "false",
"collectors.containerEngine.enabled": "false",
},
expected: func(t *testing.T, config any) {
// When config is nil, it means the plugin wasn't found in the configuration
require.Nil(t, config, "container plugin should not be present in configuration when all collectors are disabled")
// If somehow the config exists (which it shouldn't), verify there are no engine configurations
if config != nil {
plugin := config.(map[string]interface{})
initConfig, ok := plugin["init_config"]
if ok {
initConfigMap := initConfig.(map[string]interface{})
engines, ok := initConfigMap["engines"]
if ok {
engineMap := engines.(map[string]interface{})
require.Empty(t, engineMap, "engines configuration should be empty when all collectors are disabled")
}
}
}
},
},
{
name: "allCollectorsDisabledTopLevel",
values: map[string]string{
"collectors.enabled": "false",
},
expected: func(t *testing.T, config any) {
// When config is nil, it means the plugin wasn't found in the configuration
require.Nil(t, config, "container plugin should not be present in configuration when all collectors are disabled")
// If somehow the config exists (which it shouldn't), verify there are no engine configurations
if config != nil {
plugin := config.(map[string]interface{})
initConfig, ok := plugin["init_config"]
if ok {
initConfigMap := initConfig.(map[string]interface{})
engines, ok := initConfigMap["engines"]
if ok {
engineMap := engines.(map[string]interface{})
require.Empty(t, engineMap, "engines configuration should be empty when all collectors are disabled")
}
}
}
},
},
}
for _, testCase := range testCases {
testCase := testCase
t.Run(testCase.name, func(t *testing.T) {
t.Parallel()
options := &helm.Options{SetValues: testCase.values}
// Render the chart with the given options.
output := helm.RenderTemplate(t, options, helmChartPath, unit.ReleaseName, []string{"templates/configmap.yaml"})
var cm corev1.ConfigMap
// Unmarshal the output into a ConfigMap object.
helm.UnmarshalK8SYaml(t, output, &cm)
// Unmarshal the data field of the ConfigMap into a map.
var config map[string]interface{}
helm.UnmarshalK8SYaml(t, cm.Data["falco.yaml"], &config)
// Extract the container plugin configuration.
plugins, ok := config["plugins"]
require.True(t, ok, "checking if plugins section exists")
pluginsList := plugins.([]interface{})
found := false
// Get the container plugin configuration.
for _, plugin := range pluginsList {
if name, ok := plugin.(map[string]interface{})["name"]; ok && name == unit.ContainerPluginName {
testCase.expected(t, plugin)
found = true
}
}
if found {
// Check that the plugin has been added to the ones that are enabled.
loadPlugins := config["load_plugins"]
require.True(t, slices.Contains(loadPlugins.([]interface{}), unit.ContainerPluginName))
} else {
testCase.expected(t, nil)
loadPlugins := config["load_plugins"]
require.False(t, slices.Contains(loadPlugins.([]interface{}), unit.ContainerPluginName))
}
})
}
}
func TestInvalidCollectorConfiguration(t *testing.T) {
t.Parallel()
helmChartPath, err := filepath.Abs(unit.ChartPath)
require.NoError(t, err)
testCases := []struct {
name string
values map[string]string
expectedErr string
}{
{
name: "dockerAndContainerEngine",
values: map[string]string{
"collectors.docker.enabled": "true",
"collectors.containerd.enabled": "false",
"collectors.crio.enabled": "false",
"collectors.containerEngine.enabled": "true",
},
expectedErr: "You can not enable any of the [docker, containerd, crio] collectors configuration and the containerEngine configuration at the same time. Please use the containerEngine configuration since the old configurations are deprecated.",
},
{
name: "containerdAndContainerEngine",
values: map[string]string{
"collectors.docker.enabled": "false",
"collectors.containerd.enabled": "true",
"collectors.crio.enabled": "false",
"collectors.containerEngine.enabled": "true",
},
expectedErr: "You can not enable any of the [docker, containerd, crio] collectors configuration and the containerEngine configuration at the same time. Please use the containerEngine configuration since the old configurations are deprecated.",
},
{
name: "crioAndContainerEngine",
values: map[string]string{
"collectors.docker.enabled": "false",
"collectors.containerd.enabled": "false",
"collectors.crio.enabled": "true",
"collectors.containerEngine.enabled": "true",
},
expectedErr: "You can not enable any of the [docker, containerd, crio] collectors configuration and the containerEngine configuration at the same time. Please use the containerEngine configuration since the old configurations are deprecated.",
},
}
for _, tc := range testCases {
tc := tc
t.Run(tc.name, func(t *testing.T) {
t.Parallel()
options := &helm.Options{
SetValues: tc.values,
}
// Attempt to render the template, expect an error
_, err := helm.RenderTemplateE(t, options, helmChartPath, unit.ReleaseName, []string{"templates/configmap.yaml"})
require.Error(t, err)
require.Contains(t, err.Error(), tc.expectedErr)
})
}
}
// Test that the helper does not overwrite the user's configuration
// and that the container plugin reference is added to the ConfigMap.
func TestFalcoctlRefs(t *testing.T) {
t.Parallel()
refShouldBeSet := func(t *testing.T, config any) {
// Get artifact configuration map.
configMap := config.(map[string]interface{})
artifactConfig := (configMap["artifact"]).(map[string]interface{})
// Test allowed types.
allowedTypes := artifactConfig["allowedTypes"]
require.Len(t, allowedTypes, 2)
require.True(t, slices.Contains(allowedTypes.([]interface{}), "plugin"))
require.True(t, slices.Contains(allowedTypes.([]interface{}), "rulesfile"))
// Test plugin reference.
refs := artifactConfig["install"].(map[string]interface{})["refs"].([]interface{})
require.Len(t, refs, 2)
require.True(t, slices.Contains(refs, "falco-rules:4"))
require.True(t, slices.Contains(refs, "ghcr.io/falcosecurity/plugins/plugin/container:0.3.5"))
}
refShouldNotBeSet := func(t *testing.T, config any) {
// Get artifact configuration map.
configMap := config.(map[string]interface{})
artifactConfig := (configMap["artifact"]).(map[string]interface{})
// Test allowed types.
allowedTypes := artifactConfig["allowedTypes"]
require.Len(t, allowedTypes, 2)
require.True(t, slices.Contains(allowedTypes.([]interface{}), "plugin"))
require.True(t, slices.Contains(allowedTypes.([]interface{}), "rulesfile"))
// Test plugin reference.
refs := artifactConfig["install"].(map[string]interface{})["refs"].([]interface{})
require.Len(t, refs, 1)
require.True(t, slices.Contains(refs, "falco-rules:4"))
require.False(t, slices.Contains(refs, "ghcr.io/falcosecurity/plugins/plugin/container:0.3.5"))
}
testCases := []struct {
name string
values map[string]string
expected func(t *testing.T, config any)
}{
{
name: "defaultValues",
values: nil,
expected: refShouldBeSet,
},
{
name: "setPluginConfiguration",
values: map[string]string{
"collectors.enabled": "false",
},
expected: refShouldNotBeSet,
},
{
name: "driverDisabled",
values: map[string]string{
"driver.enabled": "false",
},
expected: refShouldNotBeSet,
},
}
helmChartPath, err := filepath.Abs(unit.ChartPath)
require.NoError(t, err)
for _, testCase := range testCases {
testCase := testCase
t.Run(testCase.name, func(t *testing.T) {
t.Parallel()
options := &helm.Options{SetValues: testCase.values}
output := helm.RenderTemplate(t, options, helmChartPath, unit.ReleaseName, []string{"templates/falcoctl-configmap.yaml"})
var cm corev1.ConfigMap
helm.UnmarshalK8SYaml(t, output, &cm)
var config map[string]interface{}
helm.UnmarshalK8SYaml(t, cm.Data["falcoctl.yaml"], &config)
testCase.expected(t, config)
})
}
}
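The nested type assertions in `refShouldBeSet` and `refShouldNotBeSet` can be factored into a small helper. A minimal sketch (`extractRefs` is a hypothetical name, not part of the chart's test suite) that mirrors the `artifact.install.refs` lookup the tests perform:

```go
package main

import "fmt"

// extractRefs digs artifact.install.refs out of an unmarshalled falcoctl
// configuration map, mirroring the type-assertion chain used in the tests.
// It returns nil if any level is missing or has an unexpected type.
func extractRefs(config map[string]interface{}) []string {
	artifact, ok := config["artifact"].(map[string]interface{})
	if !ok {
		return nil
	}
	install, ok := artifact["install"].(map[string]interface{})
	if !ok {
		return nil
	}
	raw, ok := install["refs"].([]interface{})
	if !ok {
		return nil
	}
	refs := make([]string, 0, len(raw))
	for _, r := range raw {
		if s, ok := r.(string); ok {
			refs = append(refs, s)
		}
	}
	return refs
}

func main() {
	config := map[string]interface{}{
		"artifact": map[string]interface{}{
			"install": map[string]interface{}{
				"refs": []interface{}{"falco-rules:4"},
			},
		},
	}
	fmt.Println(extractRefs(config))
}
```

Guarding each assertion with `ok` avoids a panic when a test renders a configmap with an unexpected shape, turning it into a plain assertion failure instead.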
type ContainerEngineSocket struct {
Enabled bool `yaml:"enabled"`
Sockets []string `yaml:"sockets,omitempty"`
}
type ContainerEngineConfig struct {
Docker ContainerEngineSocket `yaml:"docker"`
Podman ContainerEngineSocket `yaml:"podman"`
Containerd ContainerEngineSocket `yaml:"containerd"`
CRI ContainerEngineSocket `yaml:"cri"`
LXC ContainerEngineSocket `yaml:"lxc"`
LibvirtLXC ContainerEngineSocket `yaml:"libvirt_lxc"`
BPM ContainerEngineSocket `yaml:"bpm"`
}

View File

@ -0,0 +1,310 @@
package containerPlugin
import (
"path/filepath"
"slices"
"testing"
"github.com/falcosecurity/charts/charts/falco/tests/unit"
"github.com/gruntwork-io/terratest/modules/helm"
"github.com/stretchr/testify/require"
appsv1 "k8s.io/api/apps/v1"
corev1 "k8s.io/api/core/v1"
)
func TestContainerPluginVolumeMounts(t *testing.T) {
t.Parallel()
helmChartPath, err := filepath.Abs(unit.ChartPath)
require.NoError(t, err)
testCases := []struct {
name string
values map[string]string
expected func(t *testing.T, volumeMounts []corev1.VolumeMount)
}{
{
name: "defaultValues",
values: nil,
expected: func(t *testing.T, volumeMounts []corev1.VolumeMount) {
require.Len(t, volumeMounts, 6)
require.Equal(t, "container-engine-socket-0", volumeMounts[0].Name)
require.Equal(t, "/host/var/run/docker.sock", volumeMounts[0].MountPath)
require.Equal(t, "container-engine-socket-1", volumeMounts[1].Name)
require.Equal(t, "/host/run/podman/podman.sock", volumeMounts[1].MountPath)
require.Equal(t, "container-engine-socket-2", volumeMounts[2].Name)
require.Equal(t, "/host/run/host-containerd/containerd.sock", volumeMounts[2].MountPath)
require.Equal(t, "container-engine-socket-3", volumeMounts[3].Name)
require.Equal(t, "/host/run/containerd/containerd.sock", volumeMounts[3].MountPath)
require.Equal(t, "container-engine-socket-4", volumeMounts[4].Name)
require.Equal(t, "/host/run/crio/crio.sock", volumeMounts[4].MountPath)
require.Equal(t, "container-engine-socket-5", volumeMounts[5].Name)
require.Equal(t, "/host/run/k3s/containerd/containerd.sock", volumeMounts[5].MountPath)
},
},
{
name: "defaultDockerVolumeMount",
values: map[string]string{
"collectors.containerEngine.engines.docker.enabled": "true",
"collectors.containerEngine.engines.containerd.enabled": "false",
"collectors.containerEngine.engines.cri.enabled": "false",
"collectors.containerEngine.engines.podman.enabled": "false",
},
expected: func(t *testing.T, volumeMounts []corev1.VolumeMount) {
require.Len(t, volumeMounts, 1)
require.Equal(t, "container-engine-socket-0", volumeMounts[0].Name)
require.Equal(t, "/host/var/run/docker.sock", volumeMounts[0].MountPath)
},
},
{
name: "customDockerSocket",
values: map[string]string{
"collectors.containerEngine.engines.docker.enabled": "true",
"collectors.containerEngine.engines.docker.sockets[0]": "/custom/docker.sock",
"collectors.containerEngine.engines.containerd.enabled": "false",
"collectors.containerEngine.engines.cri.enabled": "false",
"collectors.containerEngine.engines.podman.enabled": "false",
},
expected: func(t *testing.T, volumeMounts []corev1.VolumeMount) {
require.Len(t, volumeMounts, 1)
require.Equal(t, "container-engine-socket-0", volumeMounts[0].Name)
require.Equal(t, "/host/custom/docker.sock", volumeMounts[0].MountPath)
},
},
{
name: "defaultCriVolumeMount",
values: map[string]string{
"collectors.containerEngine.engines.docker.enabled": "false",
"collectors.containerEngine.engines.containerd.enabled": "false",
"collectors.containerEngine.engines.cri.enabled": "true",
"collectors.containerEngine.engines.podman.enabled": "false",
},
expected: func(t *testing.T, volumeMounts []corev1.VolumeMount) {
require.Len(t, volumeMounts, 4)
require.Equal(t, "container-engine-socket-0", volumeMounts[0].Name)
require.Equal(t, "/host/run/containerd/containerd.sock", volumeMounts[0].MountPath)
require.Equal(t, "container-engine-socket-1", volumeMounts[1].Name)
require.Equal(t, "/host/run/crio/crio.sock", volumeMounts[1].MountPath)
require.Equal(t, "container-engine-socket-2", volumeMounts[2].Name)
require.Equal(t, "/host/run/k3s/containerd/containerd.sock", volumeMounts[2].MountPath)
require.Equal(t, "container-engine-socket-3", volumeMounts[3].Name)
require.Equal(t, "/host/run/host-containerd/containerd.sock", volumeMounts[3].MountPath)
},
},
{
name: "customCriSocket",
values: map[string]string{
"collectors.containerEngine.engines.cri.enabled": "true",
"collectors.containerEngine.engines.cri.sockets[0]": "/custom/crio.sock",
"collectors.containerEngine.engines.containerd.enabled": "false",
"collectors.containerEngine.engines.docker.enabled": "false",
"collectors.containerEngine.engines.podman.enabled": "false",
},
expected: func(t *testing.T, volumeMounts []corev1.VolumeMount) {
require.Len(t, volumeMounts, 1)
require.Equal(t, "container-engine-socket-0", volumeMounts[0].Name)
require.Equal(t, "/host/custom/crio.sock", volumeMounts[0].MountPath)
},
},
{
name: "defaultContainerdVolumeMount",
values: map[string]string{
"collectors.containerEngine.engines.docker.enabled": "false",
"collectors.containerEngine.engines.containerd.enabled": "true",
"collectors.containerEngine.engines.cri.enabled": "false",
"collectors.containerEngine.engines.podman.enabled": "false",
},
expected: func(t *testing.T, volumeMounts []corev1.VolumeMount) {
require.Len(t, volumeMounts, 1)
require.Equal(t, "container-engine-socket-0", volumeMounts[0].Name)
require.Equal(t, "/host/run/host-containerd/containerd.sock", volumeMounts[0].MountPath)
},
},
{
name: "customContainerdSocket",
values: map[string]string{
"collectors.containerEngine.engines.containerd.enabled": "true",
"collectors.containerEngine.engines.containerd.sockets[0]": "/custom/containerd.sock",
"collectors.containerEngine.engines.cri.enabled": "false",
"collectors.containerEngine.engines.docker.enabled": "false",
"collectors.containerEngine.engines.podman.enabled": "false",
},
expected: func(t *testing.T, volumeMounts []corev1.VolumeMount) {
require.Len(t, volumeMounts, 1)
require.Equal(t, "container-engine-socket-0", volumeMounts[0].Name)
require.Equal(t, "/host/custom/containerd.sock", volumeMounts[0].MountPath)
},
},
{
name: "ContainerEnginesDefaultValues",
values: map[string]string{},
expected: func(t *testing.T, volumeMounts []corev1.VolumeMount) {
require.Len(t, volumeMounts, 6)
dockerV := findVolumeMount("container-engine-socket-0", volumeMounts)
require.NotNil(t, dockerV)
require.Equal(t, "/host/var/run/docker.sock", dockerV.MountPath)
podmanV := findVolumeMount("container-engine-socket-1", volumeMounts)
require.NotNil(t, podmanV)
require.Equal(t, "/host/run/podman/podman.sock", podmanV.MountPath)
containerdV := findVolumeMount("container-engine-socket-2", volumeMounts)
require.NotNil(t, containerdV)
require.Equal(t, "/host/run/host-containerd/containerd.sock", containerdV.MountPath)
criV0 := findVolumeMount("container-engine-socket-3", volumeMounts)
require.NotNil(t, criV0)
require.Equal(t, "/host/run/containerd/containerd.sock", criV0.MountPath)
criV1 := findVolumeMount("container-engine-socket-4", volumeMounts)
require.NotNil(t, criV1)
require.Equal(t, "/host/run/crio/crio.sock", criV1.MountPath)
criV2 := findVolumeMount("container-engine-socket-5", volumeMounts)
require.NotNil(t, criV2)
require.Equal(t, "/host/run/k3s/containerd/containerd.sock", criV2.MountPath)
},
},
{
name: "ContainerEnginesDockerWithMultipleSockets",
values: map[string]string{
"collectors.containerEngine.engines.docker.enabled": "true",
"collectors.containerEngine.engines.docker.sockets[0]": "/var/run/docker.sock",
"collectors.containerEngine.engines.docker.sockets[1]": "/custom/docker.sock",
"collectors.containerEngine.engines.containerd.enabled": "false",
"collectors.containerEngine.engines.cri.enabled": "false",
"collectors.containerEngine.engines.podman.enabled": "false",
},
expected: func(t *testing.T, volumeMounts []corev1.VolumeMount) {
require.Len(t, volumeMounts, 2)
dockerV0 := findVolumeMount("container-engine-socket-0", volumeMounts)
require.NotNil(t, dockerV0)
require.Equal(t, "/host/var/run/docker.sock", dockerV0.MountPath)
dockerV1 := findVolumeMount("container-engine-socket-1", volumeMounts)
require.NotNil(t, dockerV1)
require.Equal(t, "/host/custom/docker.sock", dockerV1.MountPath)
},
},
{
name: "ContainerEnginesCrioWithMultipleSockets",
values: map[string]string{
"collectors.containerEngine.engines.docker.enabled": "false",
"collectors.containerEngine.engines.containerd.enabled": "false",
"collectors.containerEngine.engines.cri.enabled": "true",
"collectors.containerEngine.engines.cri.sockets[0]": "/run/crio/crio.sock",
"collectors.containerEngine.engines.cri.sockets[1]": "/custom/crio.sock",
"collectors.containerEngine.engines.podman.enabled": "false",
},
expected: func(t *testing.T, volumeMounts []corev1.VolumeMount) {
require.Len(t, volumeMounts, 2)
crioV0 := findVolumeMount("container-engine-socket-0", volumeMounts)
require.NotNil(t, crioV0)
require.Equal(t, "/host/run/crio/crio.sock", crioV0.MountPath)
crioV1 := findVolumeMount("container-engine-socket-1", volumeMounts)
require.NotNil(t, crioV1)
require.Equal(t, "/host/custom/crio.sock", crioV1.MountPath)
},
},
{
name: "noVolumeMountsWhenCollectorsDisabled",
values: map[string]string{
"collectors.enabled": "false",
},
expected: func(t *testing.T, volumeMounts []corev1.VolumeMount) {
require.Len(t, volumeMounts, 0)
},
},
{
name: "noVolumeMountsWhenDriverDisabled",
values: map[string]string{
"driver.enabled": "false",
},
expected: func(t *testing.T, volumeMounts []corev1.VolumeMount) {
require.Len(t, volumeMounts, 0)
},
},
}
for _, tc := range testCases {
tc := tc
t.Run(tc.name, func(t *testing.T) {
t.Parallel()
options := &helm.Options{
SetValues: tc.values,
}
// Render the template
output := helm.RenderTemplate(t, options, helmChartPath, unit.ReleaseName, []string{"templates/daemonset.yaml"})
// Parse the YAML output
var daemonset appsv1.DaemonSet
helm.UnmarshalK8SYaml(t, output, &daemonset)
// Find volumeMounts in the falco container
var pluginVolumeMounts []corev1.VolumeMount
for _, container := range daemonset.Spec.Template.Spec.Containers {
if container.Name == "falco" {
for _, volumeMount := range container.VolumeMounts {
if slices.Contains(volumeNames, volumeMount.Name) {
pluginVolumeMounts = append(pluginVolumeMounts, volumeMount)
}
}
}
}
// Run the test case's assertions
tc.expected(t, pluginVolumeMounts)
})
}
}
func TestInvalidVolumeMountConfiguration(t *testing.T) {
t.Parallel()
helmChartPath, err := filepath.Abs(unit.ChartPath)
require.NoError(t, err)
testCases := []struct {
name string
values map[string]string
expectedErr string
}{
{
name: "bothOldAndNewConfigEnabled",
values: map[string]string{
"collectors.docker.enabled": "true",
"collectors.containerEngine.enabled": "true",
},
expectedErr: "You can not enable any of the [docker, containerd, crio] collectors configuration and the containerEngine configuration at the same time",
},
}
for _, tc := range testCases {
tc := tc
t.Run(tc.name, func(t *testing.T) {
t.Parallel()
options := &helm.Options{
SetValues: tc.values,
}
// Attempt to render the template, expect an error
_, err := helm.RenderTemplateE(t, options, helmChartPath, unit.ReleaseName, []string{"templates/daemonset.yaml"})
require.Error(t, err)
require.Contains(t, err.Error(), tc.expectedErr)
})
}
}
func findVolumeMount(name string, volumeMounts []corev1.VolumeMount) *corev1.VolumeMount {
for _, v := range volumeMounts {
if v.Name == name {
return &v
}
}
return nil
}
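The relationship between the two test files is a single convention: each host socket path becomes a hostPath volume as-is, and is mounted inside the Falco container under a `/host` prefix. A tiny illustrative helper (not chart code) capturing that mapping:

```go
package main

import (
	"fmt"
	"path/filepath"
)

// hostMountPath sketches the convention the assertions above rely on: the
// chart mounts each host socket under a /host prefix inside the Falco
// container, so the hostPath /var/run/docker.sock is asserted as the
// mountPath /host/var/run/docker.sock.
func hostMountPath(hostSocket string) string {
	return filepath.Join("/host", hostSocket)
}

func main() {
	fmt.Println(hostMountPath("/var/run/docker.sock"))
	fmt.Println(hostMountPath("/run/crio/crio.sock"))
}
```

This is why the volume tests below assert raw paths (`/run/crio/crio.sock`) while the volume-mount tests above assert the prefixed ones (`/host/run/crio/crio.sock`).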

View File

@ -0,0 +1,373 @@
package containerPlugin
import (
"path/filepath"
"slices"
"testing"
appsv1 "k8s.io/api/apps/v1"
corev1 "k8s.io/api/core/v1"
"github.com/falcosecurity/charts/charts/falco/tests/unit"
"github.com/gruntwork-io/terratest/modules/helm"
"github.com/stretchr/testify/require"
)
func TestContainerPluginVolumes(t *testing.T) {
t.Parallel()
helmChartPath, err := filepath.Abs(unit.ChartPath)
require.NoError(t, err)
testCases := []struct {
name string
values map[string]string
expected func(t *testing.T, volumes []corev1.Volume)
}{
{
name: "defaultValues",
values: nil,
expected: func(t *testing.T, volumes []corev1.Volume) {
require.Len(t, volumes, 6)
require.Equal(t, "container-engine-socket-0", volumes[0].Name)
require.Equal(t, "/var/run/docker.sock", volumes[0].HostPath.Path)
require.Equal(t, "container-engine-socket-1", volumes[1].Name)
require.Equal(t, "/run/podman/podman.sock", volumes[1].HostPath.Path)
require.Equal(t, "container-engine-socket-2", volumes[2].Name)
require.Equal(t, "/run/host-containerd/containerd.sock", volumes[2].HostPath.Path)
require.Equal(t, "container-engine-socket-3", volumes[3].Name)
require.Equal(t, "/run/containerd/containerd.sock", volumes[3].HostPath.Path)
require.Equal(t, "container-engine-socket-4", volumes[4].Name)
require.Equal(t, "/run/crio/crio.sock", volumes[4].HostPath.Path)
require.Equal(t, "container-engine-socket-5", volumes[5].Name)
require.Equal(t, "/run/k3s/containerd/containerd.sock", volumes[5].HostPath.Path)
},
},
{
name: "defaultDockerVolume",
values: map[string]string{
"collectors.containerEngine.engines.docker.enabled": "true",
"collectors.containerEngine.engines.containerd.enabled": "false",
"collectors.containerEngine.engines.cri.enabled": "false",
"collectors.containerEngine.engines.podman.enabled": "false",
},
expected: func(t *testing.T, volumes []corev1.Volume) {
require.Len(t, volumes, 1)
require.Equal(t, "container-engine-socket-0", volumes[0].Name)
require.Equal(t, "/var/run/docker.sock", volumes[0].HostPath.Path)
},
},
{
name: "customDockerSocket",
values: map[string]string{
"collectors.containerEngine.engines.docker.enabled": "true",
"collectors.containerEngine.engines.docker.sockets[0]": "/custom/docker.sock",
"collectors.containerEngine.engines.containerd.enabled": "false",
"collectors.containerEngine.engines.cri.enabled": "false",
"collectors.containerEngine.engines.podman.enabled": "false",
},
expected: func(t *testing.T, volumes []corev1.Volume) {
require.Len(t, volumes, 1)
require.Equal(t, "container-engine-socket-0", volumes[0].Name)
require.Equal(t, "/custom/docker.sock", volumes[0].HostPath.Path)
},
},
{
name: "defaultCriVolume",
values: map[string]string{
"collectors.containerEngine.engines.docker.enabled": "false",
"collectors.containerEngine.engines.containerd.enabled": "false",
"collectors.containerEngine.engines.cri.enabled": "true",
"collectors.containerEngine.engines.podman.enabled": "false",
},
expected: func(t *testing.T, volumes []corev1.Volume) {
require.Len(t, volumes, 4)
require.Equal(t, "container-engine-socket-0", volumes[0].Name)
require.Equal(t, "/run/containerd/containerd.sock", volumes[0].HostPath.Path)
require.Equal(t, "container-engine-socket-1", volumes[1].Name)
require.Equal(t, "/run/crio/crio.sock", volumes[1].HostPath.Path)
require.Equal(t, "container-engine-socket-2", volumes[2].Name)
require.Equal(t, "/run/k3s/containerd/containerd.sock", volumes[2].HostPath.Path)
require.Equal(t, "container-engine-socket-3", volumes[3].Name)
require.Equal(t, "/run/host-containerd/containerd.sock", volumes[3].HostPath.Path)
},
},
{
name: "customCriSocket",
values: map[string]string{
"collectors.containerEngine.engines.docker.enabled": "false",
"collectors.containerEngine.engines.cri.enabled": "true",
"collectors.containerEngine.engines.cri.sockets[0]": "/custom/crio.sock",
"collectors.containerEngine.engines.containerd.enabled": "false",
"collectors.containerEngine.engines.podman.enabled": "false",
},
expected: func(t *testing.T, volumes []corev1.Volume) {
require.Len(t, volumes, 1)
require.Equal(t, "container-engine-socket-0", volumes[0].Name)
require.Equal(t, "/custom/crio.sock", volumes[0].HostPath.Path)
},
},
{
name: "defaultContainerdVolume",
values: map[string]string{
"collectors.containerEngine.engines.docker.enabled": "false",
"collectors.containerEngine.engines.containerd.enabled": "true",
"collectors.containerEngine.engines.cri.enabled": "false",
"collectors.containerEngine.engines.podman.enabled": "false",
},
expected: func(t *testing.T, volumes []corev1.Volume) {
require.Len(t, volumes, 1)
require.Equal(t, "container-engine-socket-0", volumes[0].Name)
require.Equal(t, "/run/host-containerd/containerd.sock", volumes[0].HostPath.Path)
},
},
{
name: "customContainerdSocket",
values: map[string]string{
"collectors.containerEngine.engines.docker.enabled": "false",
"collectors.containerEngine.engines.containerd.enabled": "true",
"collectors.containerEngine.engines.containerd.sockets[0]": "/custom/containerd.sock",
"collectors.containerEngine.engines.cri.enabled": "false",
"collectors.containerEngine.engines.podman.enabled": "false",
},
expected: func(t *testing.T, volumes []corev1.Volume) {
require.Len(t, volumes, 1)
require.Equal(t, "container-engine-socket-0", volumes[0].Name)
require.Equal(t, "/custom/containerd.sock", volumes[0].HostPath.Path)
},
},
{
name: "ContainerEnginesDefaultValues",
values: map[string]string{
"collectors.docker.enabled": "false",
"collectors.containerd.enabled": "false",
"collectors.crio.enabled": "false",
"collectors.containerEngine.enabled": "true",
},
expected: func(t *testing.T, volumes []corev1.Volume) {
require.Len(t, volumes, 6)
require.Equal(t, "container-engine-socket-0", volumes[0].Name)
require.Equal(t, "/var/run/docker.sock", volumes[0].HostPath.Path)
require.Equal(t, "container-engine-socket-1", volumes[1].Name)
require.Equal(t, "/run/podman/podman.sock", volumes[1].HostPath.Path)
require.Equal(t, "container-engine-socket-2", volumes[2].Name)
require.Equal(t, "/run/host-containerd/containerd.sock", volumes[2].HostPath.Path)
require.Equal(t, "container-engine-socket-3", volumes[3].Name)
require.Equal(t, "/run/containerd/containerd.sock", volumes[3].HostPath.Path)
require.Equal(t, "container-engine-socket-4", volumes[4].Name)
require.Equal(t, "/run/crio/crio.sock", volumes[4].HostPath.Path)
require.Equal(t, "container-engine-socket-5", volumes[5].Name)
require.Equal(t, "/run/k3s/containerd/containerd.sock", volumes[5].HostPath.Path)
},
},
{
name: "ContainerEnginesDockerWithMultipleSockets",
values: map[string]string{
"collectors.docker.enabled": "false",
"collectors.containerd.enabled": "false",
"collectors.crio.enabled": "false",
"collectors.containerEngine.enabled": "true",
"collectors.containerEngine.engines.docker.enabled": "true",
"collectors.containerEngine.engines.docker.sockets[0]": "/var/run/docker.sock",
"collectors.containerEngine.engines.docker.sockets[1]": "/custom/docker.sock",
"collectors.containerEngine.engines.containerd.enabled": "false",
"collectors.containerEngine.engines.cri.enabled": "false",
"collectors.containerEngine.engines.podman.enabled": "false",
},
expected: func(t *testing.T, volumes []corev1.Volume) {
require.Len(t, volumes, 2)
require.Equal(t, "container-engine-socket-0", volumes[0].Name)
require.Equal(t, "/var/run/docker.sock", volumes[0].HostPath.Path)
require.Equal(t, "container-engine-socket-1", volumes[1].Name)
require.Equal(t, "/custom/docker.sock", volumes[1].HostPath.Path)
},
},
{
name: "ContainerEnginesCrioWithMultipleSockets",
values: map[string]string{
"collectors.docker.enabled": "false",
"collectors.containerd.enabled": "false",
"collectors.crio.enabled": "false",
"collectors.containerEngine.enabled": "true",
"collectors.containerEngine.engines.docker.enabled": "false",
"collectors.containerEngine.engines.containerd.enabled": "false",
"collectors.containerEngine.engines.cri.enabled": "true",
"collectors.containerEngine.engines.cri.sockets[0]": "/run/crio/crio.sock",
"collectors.containerEngine.engines.cri.sockets[1]": "/custom/crio.sock",
"collectors.containerEngine.engines.podman.enabled": "false",
},
expected: func(t *testing.T, volumes []corev1.Volume) {
require.Len(t, volumes, 2)
require.Equal(t, "container-engine-socket-0", volumes[0].Name)
require.Equal(t, "/run/crio/crio.sock", volumes[0].HostPath.Path)
require.Equal(t, "container-engine-socket-1", volumes[1].Name)
require.Equal(t, "/custom/crio.sock", volumes[1].HostPath.Path)
},
},
{
name: "ContainerEnginesPodmanWithMultipleSockets",
values: map[string]string{
"collectors.docker.enabled": "false",
"collectors.containerd.enabled": "false",
"collectors.crio.enabled": "false",
"collectors.containerEngine.enabled": "true",
"collectors.containerEngine.engines.docker.enabled": "false",
"collectors.containerEngine.engines.containerd.enabled": "false",
"collectors.containerEngine.engines.cri.enabled": "false",
"collectors.containerEngine.engines.podman.enabled": "true",
"collectors.containerEngine.engines.podman.sockets[0]": "/run/podman/podman.sock",
"collectors.containerEngine.engines.podman.sockets[1]": "/custom/podman.sock",
},
expected: func(t *testing.T, volumes []corev1.Volume) {
require.Len(t, volumes, 2)
require.Equal(t, "container-engine-socket-0", volumes[0].Name)
require.Equal(t, "/run/podman/podman.sock", volumes[0].HostPath.Path)
require.Equal(t, "container-engine-socket-1", volumes[1].Name)
require.Equal(t, "/custom/podman.sock", volumes[1].HostPath.Path)
},
},
{
name: "ContainerEnginesContainerdWithMultipleSockets",
values: map[string]string{
"collectors.docker.enabled": "false",
"collectors.containerd.enabled": "false",
"collectors.crio.enabled": "false",
"collectors.containerEngine.enabled": "true",
"collectors.containerEngine.engines.docker.enabled": "false",
"collectors.containerEngine.engines.containerd.enabled": "true",
"collectors.containerEngine.engines.containerd.sockets[0]": "/run/containerd/containerd.sock",
"collectors.containerEngine.engines.containerd.sockets[1]": "/custom/containerd.sock",
"collectors.containerEngine.engines.cri.enabled": "false",
"collectors.containerEngine.engines.podman.enabled": "false",
},
expected: func(t *testing.T, volumes []corev1.Volume) {
require.Len(t, volumes, 2)
require.Equal(t, "container-engine-socket-0", volumes[0].Name)
require.Equal(t, "/run/containerd/containerd.sock", volumes[0].HostPath.Path)
require.Equal(t, "container-engine-socket-1", volumes[1].Name)
require.Equal(t, "/custom/containerd.sock", volumes[1].HostPath.Path)
},
},
{
name: "ContainerEnginesMultipleWithCustomSockets",
values: map[string]string{
"collectors.docker.enabled": "false",
"collectors.containerd.enabled": "false",
"collectors.crio.enabled": "false",
"collectors.containerEngine.enabled": "true",
"collectors.containerEngine.engines.docker.enabled": "true",
"collectors.containerEngine.engines.docker.sockets[0]": "/custom/docker/socket.sock",
"collectors.containerEngine.engines.containerd.enabled": "true",
"collectors.containerEngine.engines.cri.enabled": "true",
"collectors.containerEngine.engines.cri.sockets[0]": "/var/custom/crio.sock",
"collectors.containerEngine.engines.podman.enabled": "true",
"collectors.containerEngine.engines.podman.sockets[0]": "/run/podman/podman.sock",
},
expected: func(t *testing.T, volumes []corev1.Volume) {
require.Len(t, volumes, 4)
require.Equal(t, "container-engine-socket-0", volumes[0].Name)
require.Equal(t, "/custom/docker/socket.sock", volumes[0].HostPath.Path)
require.Equal(t, "container-engine-socket-1", volumes[1].Name)
require.Equal(t, "/run/podman/podman.sock", volumes[1].HostPath.Path)
require.Equal(t, "container-engine-socket-2", volumes[2].Name)
require.Equal(t, "/run/host-containerd/containerd.sock", volumes[2].HostPath.Path)
require.Equal(t, "container-engine-socket-3", volumes[3].Name)
require.Equal(t, "/var/custom/crio.sock", volumes[3].HostPath.Path)
},
},
{
name: "noVolumesWhenCollectorsDisabled",
values: map[string]string{
"collectors.enabled": "false",
},
expected: func(t *testing.T, volumes []corev1.Volume) {
require.Len(t, volumes, 0)
},
},
{
name: "noVolumesWhenDriverDisabled",
values: map[string]string{
"driver.enabled": "false",
},
expected: func(t *testing.T, volumes []corev1.Volume) {
require.Len(t, volumes, 0)
},
},
}
for _, tc := range testCases {
tc := tc
t.Run(tc.name, func(t *testing.T) {
t.Parallel()
options := &helm.Options{
SetValues: tc.values,
}
// Render the template
output := helm.RenderTemplate(t, options, helmChartPath, unit.ReleaseName, []string{"templates/daemonset.yaml"})
// Parse the YAML output
var daemonset appsv1.DaemonSet
helm.UnmarshalK8SYaml(t, output, &daemonset)
// Find volumes that match our container plugin pattern
var pluginVolumes []corev1.Volume
for _, volume := range daemonset.Spec.Template.Spec.Volumes {
// Check if the volume is for container sockets
if volume.HostPath != nil && slices.Contains(volumeNames, volume.Name) {
pluginVolumes = append(pluginVolumes, volume)
}
}
// Run the test case's assertions
tc.expected(t, pluginVolumes)
})
}
}
func TestInvalidVolumeConfiguration(t *testing.T) {
t.Parallel()
helmChartPath, err := filepath.Abs(unit.ChartPath)
require.NoError(t, err)
testCases := []struct {
name string
values map[string]string
expectedErr string
}{
{
name: "bothOldAndNewConfigEnabled",
values: map[string]string{
"collectors.docker.enabled": "true",
"collectors.containerEngine.enabled": "true",
},
expectedErr: "You can not enable any of the [docker, containerd, crio] collectors configuration and the containerEngine configuration at the same time",
},
}
for _, tc := range testCases {
tc := tc
t.Run(tc.name, func(t *testing.T) {
t.Parallel()
options := &helm.Options{
SetValues: tc.values,
}
// Attempt to render the template, expect an error
_, err := helm.RenderTemplateE(t, options, helmChartPath, unit.ReleaseName, []string{"templates/daemonset.yaml"})
require.Error(t, err)
require.Contains(t, err.Error(), tc.expectedErr)
})
}
}
func findVolume(name string, volumes []corev1.Volume) *corev1.Volume {
for _, v := range volumes {
if v.Name == name {
return &v
}
}
return nil
}

View File

@ -13,10 +13,11 @@
// See the License for the specific language governing permissions and
// limitations under the License.
package unit
package falcoTemplates
import (
"fmt"
"github.com/falcosecurity/charts/charts/falco/tests/unit"
"path/filepath"
"strings"
"testing"
@ -29,7 +30,7 @@ import (
func TestDriverConfigInFalcoConfig(t *testing.T) {
t.Parallel()
helmChartPath, err := filepath.Abs(chartPath)
helmChartPath, err := filepath.Abs(unit.ChartPath)
require.NoError(t, err)
testCases := []struct {
@ -241,7 +242,7 @@ func TestDriverConfigInFalcoConfig(t *testing.T) {
t.Parallel()
options := &helm.Options{SetValues: testCase.values}
output := helm.RenderTemplate(t, options, helmChartPath, releaseName, []string{"templates/configmap.yaml"})
output := helm.RenderTemplate(t, options, helmChartPath, unit.ReleaseName, []string{"templates/configmap.yaml"})
var cm corev1.ConfigMap
helm.UnmarshalK8SYaml(t, output, &cm)
@ -257,14 +258,14 @@ func TestDriverConfigInFalcoConfig(t *testing.T) {
func TestDriverConfigWithUnsupportedDriver(t *testing.T) {
t.Parallel()
helmChartPath, err := filepath.Abs(chartPath)
helmChartPath, err := filepath.Abs(unit.ChartPath)
require.NoError(t, err)
values := map[string]string{
"driver.kind": "notExisting",
}
options := &helm.Options{SetValues: values}
_, err = helm.RenderTemplateE(t, options, helmChartPath, releaseName, []string{"templates/configmap.yaml"})
_, err = helm.RenderTemplateE(t, options, helmChartPath, unit.ReleaseName, []string{"templates/configmap.yaml"})
require.Error(t, err)
require.True(t, strings.Contains(err.Error(),
"unsupported driver kind: \"notExisting\". Supported drivers [kmod ebpf modern_ebpf gvisor auto], alias [module modern-bpf]"))

View File

@ -13,9 +13,10 @@
// See the License for the specific language governing permissions and
// limitations under the License.
package unit
package falcoTemplates
import (
"github.com/falcosecurity/charts/charts/falco/tests/unit"
"path/filepath"
"testing"
@ -38,7 +39,7 @@ var (
configmapEnvVar = v1.EnvVar{
Name: "FALCOCTL_DRIVER_CONFIG_CONFIGMAP",
Value: releaseName + "-falco",
Value: unit.ReleaseName + "-falco",
}
updateConfigMapEnvVar = v1.EnvVar{
@ -51,7 +52,7 @@ var (
func TestDriverLoaderEnabled(t *testing.T) {
t.Parallel()
helmChartPath, err := filepath.Abs(chartPath)
helmChartPath, err := filepath.Abs(unit.ChartPath)
require.NoError(t, err)
testCases := []struct {
@ -197,7 +198,7 @@ func TestDriverLoaderEnabled(t *testing.T) {
t.Parallel()
options := &helm.Options{SetValues: testCase.values}
output := helm.RenderTemplate(t, options, helmChartPath, releaseName, []string{"templates/daemonset.yaml"})
output := helm.RenderTemplate(t, options, helmChartPath, unit.ReleaseName, []string{"templates/daemonset.yaml"})
var ds appsv1.DaemonSet
helm.UnmarshalK8SYaml(t, output, &ds)

View File

@ -13,10 +13,11 @@
// See the License for the specific language governing permissions and
// limitations under the License.
package unit
package falcoTemplates
import (
"fmt"
"github.com/falcosecurity/charts/charts/falco/tests/unit"
"io"
"os"
"path/filepath"
@ -40,7 +41,7 @@ type grafanaDashboardsTemplateTest struct {
func TestGrafanaDashboardsTemplate(t *testing.T) {
t.Parallel()
chartFullPath, err := filepath.Abs(chartPath)
chartFullPath, err := filepath.Abs(unit.ChartPath)
require.NoError(t, err)
suite.Run(t, &grafanaDashboardsTemplateTest{
@ -131,7 +132,7 @@ func (g *grafanaDashboardsTemplateTest) TestConfig() {
// Check that contains the right label.
g.Contains(cfgMap.Labels, "grafana_dashboard")
// Check that the dashboard is contained in the config map.
file, err := os.Open("../../dashboards/falco-dashboard.json")
file, err := os.Open("../../../dashboards/falco-dashboard.json")
g.NoError(err)
content, err := io.ReadAll(file)
g.NoError(err)

View File

@ -13,9 +13,10 @@
// See the License for the specific language governing permissions and
// limitations under the License.
package unit
package falcoTemplates
import (
"github.com/falcosecurity/charts/charts/falco/tests/unit"
"path/filepath"
"testing"
@ -52,7 +53,7 @@ type webServerConfig struct {
func TestMetricsConfigInFalcoConfig(t *testing.T) {
t.Parallel()
helmChartPath, err := filepath.Abs(chartPath)
helmChartPath, err := filepath.Abs(unit.ChartPath)
require.NoError(t, err)
testCases := []struct {
@ -167,7 +168,7 @@ func TestMetricsConfigInFalcoConfig(t *testing.T) {
t.Parallel()
options := &helm.Options{SetValues: testCase.values}
output := helm.RenderTemplate(t, options, helmChartPath, releaseName, []string{"templates/configmap.yaml"})
output := helm.RenderTemplate(t, options, helmChartPath, unit.ReleaseName, []string{"templates/configmap.yaml"})
var cm corev1.ConfigMap
helm.UnmarshalK8SYaml(t, output, &cm)

View File

@ -1,6 +1,7 @@
package unit
package falcoTemplates
import (
"github.com/falcosecurity/charts/charts/falco/tests/unit"
"github.com/gruntwork-io/terratest/modules/helm"
"github.com/stretchr/testify/require"
corev1 "k8s.io/api/core/v1"
@ -12,7 +13,7 @@ import (
func TestServiceAccount(t *testing.T) {
t.Parallel()
helmChartPath, err := filepath.Abs(chartPath)
helmChartPath, err := filepath.Abs(unit.ChartPath)
require.NoError(t, err)
testCases := []struct {
@ -45,7 +46,7 @@ func TestServiceAccount(t *testing.T) {
t.Parallel()
options := &helm.Options{SetValues: testCase.values}
output, err := helm.RenderTemplateE(t, options, helmChartPath, releaseName, []string{"templates/serviceaccount.yaml"})
output, err := helm.RenderTemplateE(t, options, helmChartPath, unit.ReleaseName, []string{"templates/serviceaccount.yaml"})
if err != nil {
require.True(t, strings.Contains(err.Error(), "Error: could not find template templates/serviceaccount.yaml in chart"))
}

View File

@ -13,10 +13,11 @@
// See the License for the specific language governing permissions and
// limitations under the License.
package unit
package falcoTemplates
import (
"encoding/json"
"github.com/falcosecurity/charts/charts/falco/tests/unit"
"path/filepath"
"reflect"
"testing"
@ -38,7 +39,7 @@ type serviceMonitorTemplateTest struct {
func TestServiceMonitorTemplate(t *testing.T) {
t.Parallel()
chartFullPath, err := filepath.Abs(chartPath)
chartFullPath, err := filepath.Abs(unit.ChartPath)
require.NoError(t, err)
suite.Run(t, &serviceMonitorTemplateTest{

View File

@ -13,10 +13,11 @@
// See the License for the specific language governing permissions and
// limitations under the License.
package unit
package falcoTemplates
import (
"fmt"
"github.com/falcosecurity/charts/charts/falco/tests/unit"
"path/filepath"
"testing"
@ -37,7 +38,7 @@ type serviceTemplateTest struct {
func TestServiceTemplate(t *testing.T) {
t.Parallel()
chartFullPath, err := filepath.Abs(chartPath)
chartFullPath, err := filepath.Abs(unit.ChartPath)
require.NoError(t, err)
suite.Run(t, &serviceTemplateTest{
@ -61,7 +62,7 @@ func (s *serviceTemplateTest) TestDefaultLabelsValues() {
output, err := helm.RenderTemplateE(s.T(), options, s.chartPath, s.releaseName, s.templates)
s.NoError(err, "should render template")
cInfo, err := chartInfo(s.T(), s.chartPath)
cInfo, err := unit.ChartInfo(s.T(), s.chartPath)
s.NoError(err)
// Get app version.
appVersion, found := cInfo["appVersion"]
@ -96,16 +97,14 @@ func (s *serviceTemplateTest) TestDefaultLabelsValues() {
}
}
func (s *serviceTemplateTest) TestCustomLabelsValues() {
options := &helm.Options{SetValues: map[string]string{"metrics.enabled": "true",
"metrics.service.labels.customLabel": "customLabelValues"}}
output, err := helm.RenderTemplateE(s.T(), options, s.chartPath, s.releaseName, s.templates)
s.NoError(err, "should render template")
cInfo, err := chartInfo(s.T(), s.chartPath)
cInfo, err := unit.ChartInfo(s.T(), s.chartPath)
s.NoError(err)
// Get app version.
appVersion, found := cInfo["appVersion"]
@ -139,7 +138,7 @@ func (s *serviceTemplateTest) TestCustomLabelsValues() {
expectedVal := labels[key]
s.Equal(expectedVal, value)
}
}
func (s *serviceTemplateTest) TestDefaultAnnotationsValues() {
@ -149,7 +148,7 @@ func (s *serviceTemplateTest) TestDefaultAnnotationsValues() {
s.NoError(err)
var svc corev1.Service
helm.UnmarshalK8SYaml(s.T(), output, &svc)
helm.UnmarshalK8SYaml(s.T(), output, &svc)
s.Nil(svc.Annotations, "should be nil")
}
@ -175,4 +174,4 @@ func (s *serviceTemplateTest) TestCustomAnnotationsValues() {
expectedVal := annotations[key]
s.Equal(expectedVal, value)
}
}
}

View File

@ -13,7 +13,7 @@
// See the License for the specific language governing permissions and
// limitations under the License.
package unit
package k8smetaPlugin
import (
"encoding/json"
@ -23,6 +23,8 @@ import (
"strings"
"testing"
"github.com/falcosecurity/charts/charts/falco/tests/unit"
"slices"
"github.com/gruntwork-io/terratest/modules/helm"
@ -30,22 +32,20 @@ import (
corev1 "k8s.io/api/core/v1"
)
const chartPath = "../../"
// Using the default values we want to test that all the expected resources for the k8s-metacollector are rendered.
func TestRenderedResourcesWithDefaultValues(t *testing.T) {
t.Parallel()
helmChartPath, err := filepath.Abs(chartPath)
helmChartPath, err := filepath.Abs(unit.ChartPath)
require.NoError(t, err)
options := &helm.Options{}
// Template the chart using the default values.yaml file.
output, err := helm.RenderTemplateE(t, options, helmChartPath, releaseName, nil)
output, err := helm.RenderTemplateE(t, options, helmChartPath, unit.ReleaseName, nil)
require.NoError(t, err)
// Extract all rendered files from the output.
re := regexp.MustCompile(patternK8sMetacollectorFiles)
re := regexp.MustCompile(unit.PatternK8sMetacollectorFiles)
matches := re.FindAllStringSubmatch(output, -1)
require.Len(t, matches, 0)
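The extraction step above relies on `helm template` prefixing every rendered manifest with a `# Source: <path>` comment, which `unit.PatternK8sMetacollectorFiles` matches against. A minimal stdlib sketch of that pattern; the exact regex below is an illustrative assumption, not the chart's constant:

```go
package main

import (
	"fmt"
	"regexp"
)

// metacollectorTemplates pulls the k8s-metacollector template paths out of
// helm template output by matching its "# Source: <path>" comments.
func metacollectorTemplates(output string) []string {
	re := regexp.MustCompile(`# Source: falco/charts/k8s-metacollector/(templates/\S+)`)
	var files []string
	for _, m := range re.FindAllStringSubmatch(output, -1) {
		files = append(files, m[1])
	}
	return files
}

func main() {
	out := "# Source: falco/templates/configmap.yaml\n" +
		"# Source: falco/charts/k8s-metacollector/templates/deployment.yaml\n"
	fmt.Println(metacollectorTemplates(out))
}
```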
@ -54,7 +54,7 @@ func TestRenderedResourcesWithDefaultValues(t *testing.T) {
func TestRenderedResourcesWhenNotEnabled(t *testing.T) {
t.Parallel()
helmChartPath, err := filepath.Abs(chartPath)
helmChartPath, err := filepath.Abs(unit.ChartPath)
require.NoError(t, err)
// Template files that we expect to be rendered.
@ -73,11 +73,11 @@ func TestRenderedResourcesWhenNotEnabled(t *testing.T) {
}}
// Template the chart using the default values.yaml file.
output, err := helm.RenderTemplateE(t, options, helmChartPath, releaseName, nil)
output, err := helm.RenderTemplateE(t, options, helmChartPath, unit.ReleaseName, nil)
require.NoError(t, err)
// Extract all rendered files from the output.
re := regexp.MustCompile(patternK8sMetacollectorFiles)
re := regexp.MustCompile(unit.PatternK8sMetacollectorFiles)
matches := re.FindAllStringSubmatch(output, -1)
var renderedTemplates []string
@ -99,7 +99,7 @@ func TestRenderedResourcesWhenNotEnabled(t *testing.T) {
func TestPluginConfigurationInFalcoConfig(t *testing.T) {
t.Parallel()
helmChartPath, err := filepath.Abs(chartPath)
helmChartPath, err := filepath.Abs(unit.ChartPath)
require.NoError(t, err)
testCases := []struct {
@ -125,7 +125,7 @@ func TestPluginConfigurationInFalcoConfig(t *testing.T) {
require.Equal(t, "${FALCO_K8S_NODE_NAME}", nodeName.(string))
// Check that the collector hostname is correctly set.
hostName := initConfigMap["collectorHostname"]
require.Equal(t, fmt.Sprintf("%s-k8s-metacollector.default.svc", releaseName), hostName.(string))
require.Equal(t, fmt.Sprintf("%s-k8s-metacollector.default.svc", unit.ReleaseName), hostName.(string))
// Check that the loglevel has been set.
verbosity := initConfigMap["verbosity"]
require.Equal(t, "info", verbosity.(string))
@ -157,7 +157,7 @@ func TestPluginConfigurationInFalcoConfig(t *testing.T) {
require.Equal(t, "${FALCO_K8S_NODE_NAME}", nodeName.(string))
// Check that the collector hostname is correctly set.
hostName := initConfigMap["collectorHostname"]
require.Equal(t, fmt.Sprintf("%s-k8s-metacollector.test.svc", releaseName), hostName.(string))
require.Equal(t, fmt.Sprintf("%s-k8s-metacollector.test.svc", unit.ReleaseName), hostName.(string))
// Check that the loglevel has been set.
verbosity := initConfigMap["verbosity"]
require.Equal(t, "info", verbosity.(string))
@ -327,7 +327,7 @@ func TestPluginConfigurationInFalcoConfig(t *testing.T) {
require.Equal(t, "${FALCO_K8S_NODE_NAME}", nodeName.(string))
// Check that the collector hostname is correctly set.
hostName := initConfigMap["collectorHostname"]
require.Equal(t, fmt.Sprintf("%s-k8s-metacollector.default.svc", releaseName), hostName.(string))
require.Equal(t, fmt.Sprintf("%s-k8s-metacollector.default.svc", unit.ReleaseName), hostName.(string))
// Check that the loglevel has been set.
verbosity := initConfigMap["verbosity"]
require.Equal(t, "info", verbosity.(string))
@ -361,7 +361,7 @@ func TestPluginConfigurationInFalcoConfig(t *testing.T) {
require.Equal(t, "${FALCO_K8S_NODE_NAME}", nodeName.(string))
// Check that the collector hostname is correctly set.
hostName := initConfigMap["collectorHostname"]
require.Equal(t, fmt.Sprintf("%s-k8s-metacollector.default.svc", releaseName), hostName.(string))
require.Equal(t, fmt.Sprintf("%s-k8s-metacollector.default.svc", unit.ReleaseName), hostName.(string))
// Check that the loglevel has been set.
verbosity := initConfigMap["verbosity"]
require.Equal(t, "trace", verbosity.(string))
@ -398,7 +398,7 @@ func TestPluginConfigurationInFalcoConfig(t *testing.T) {
}
options := &helm.Options{SetValues: testCase.values}
output := helm.RenderTemplate(t, options, helmChartPath, releaseName, []string{"templates/configmap.yaml"})
output := helm.RenderTemplate(t, options, helmChartPath, unit.ReleaseName, []string{"templates/configmap.yaml"})
var cm corev1.ConfigMap
helm.UnmarshalK8SYaml(t, output, &cm)
@ -410,7 +410,7 @@ func TestPluginConfigurationInFalcoConfig(t *testing.T) {
found := false
// Find the k8smeta plugin configuration.
for _, plugin := range pluginsArray {
if name, ok := plugin.(map[string]interface{})["name"]; ok && name == k8sMetaPluginName {
if name, ok := plugin.(map[string]interface{})["name"]; ok && name == unit.K8sMetaPluginName {
testCase.expected(t, plugin)
found = true
}
@ -418,11 +418,11 @@ func TestPluginConfigurationInFalcoConfig(t *testing.T) {
if found {
// Check that the plugin has been added to the ones that need to be loaded.
loadplugins := config["load_plugins"]
require.True(t, slices.Contains(loadplugins.([]interface{}), k8sMetaPluginName))
require.True(t, slices.Contains(loadplugins.([]interface{}), unit.K8sMetaPluginName))
} else {
testCase.expected(t, nil)
loadplugins := config["load_plugins"]
require.True(t, !slices.Contains(loadplugins.([]interface{}), k8sMetaPluginName))
require.True(t, !slices.Contains(loadplugins.([]interface{}), unit.K8sMetaPluginName))
}
})
}
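The lookup pattern used throughout these tests, scanning the decoded `falco.plugins` array for an entry whose `name` matches and checking `load_plugins` membership with `slices.Contains`, can be isolated as a small sketch (the `findPlugin` helper is illustrative, not part of the test suite):

```go
package main

import (
	"fmt"
	"slices"
)

// findPlugin scans a decoded plugins array (as produced by YAML/JSON
// unmarshalling) for the entry whose "name" field matches.
func findPlugin(plugins []any, name string) (map[string]any, bool) {
	for _, p := range plugins {
		if m, ok := p.(map[string]any); ok && m["name"] == name {
			return m, true
		}
	}
	return nil, false
}

func main() {
	plugins := []any{
		map[string]any{"name": "k8smeta", "library_path": "libk8smeta.so"},
		map[string]any{"name": "container", "library_path": "libcontainer.so"},
	}
	loaded := []any{"k8smeta", "container"}
	p, ok := findPlugin(plugins, "k8smeta")
	fmt.Println(ok, p["library_path"], slices.Contains(loaded, any("k8smeta")))
}
```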
@ -456,21 +456,68 @@ func TestPluginConfigurationUniqueEntries(t *testing.T) {
},
"library_path": "libk8smeta.so",
"name": "k8smeta"
},
{
"init_config": {
"engines": {
"bpm": {
"enabled": false
},
"containerd": {
"enabled": true,
"sockets": [
"/run/containerd/containerd.sock"
]
},
"cri": {
"enabled": true,
"sockets": [
"/run/crio/crio.sock"
]
},
"docker": {
"enabled": true,
"sockets": [
"/var/run/docker.sock"
]
},
"libvirt_lxc": {
"enabled": false
},
"lxc": {
"enabled": false
},
"podman": {
"enabled": false,
"sockets": [
"/run/podman/podman.sock"
]
}
},
"hooks": [
"create"
],
"label_max_len": 100,
"with_size": false
},
"library_path": "libcontainer.so",
"name": "container"
}
]`
loadPluginsJSON := `[
"k8smeta",
"k8saudit"
"k8saudit",
"container"
]`
helmChartPath, err := filepath.Abs(chartPath)
helmChartPath, err := filepath.Abs(unit.ChartPath)
require.NoError(t, err)
options := &helm.Options{SetJsonValues: map[string]string{
"falco.plugins": pluginsJSON,
"falco.load_plugins": loadPluginsJSON,
}, SetValues: map[string]string{"collectors.kubernetes.enabled": "true"}}
output := helm.RenderTemplate(t, options, helmChartPath, releaseName, []string{"templates/configmap.yaml"})
output := helm.RenderTemplate(t, options, helmChartPath, unit.ReleaseName, []string{"templates/configmap.yaml"})
var cm corev1.ConfigMap
helm.UnmarshalK8SYaml(t, output, &cm)
@ -486,7 +533,7 @@ func TestPluginConfigurationUniqueEntries(t *testing.T) {
// Find the k8smeta plugin configuration.
numConfigK8smeta := 0
for _, plugin := range pluginsArray {
if name, ok := plugin.(map[string]interface{})["name"]; ok && name == k8sMetaPluginName {
if name, ok := plugin.(map[string]interface{})["name"]; ok && name == unit.K8sMetaPluginName {
numConfigK8smeta++
}
}
@ -495,8 +542,8 @@ func TestPluginConfigurationUniqueEntries(t *testing.T) {
// Check that the plugin has been added to the ones that need to be loaded.
loadplugins := config["load_plugins"]
require.Len(t, loadplugins.([]interface{}), 2)
require.True(t, slices.Contains(loadplugins.([]interface{}), k8sMetaPluginName))
require.Len(t, loadplugins.([]interface{}), 3)
require.True(t, slices.Contains(loadplugins.([]interface{}), unit.K8sMetaPluginName))
}
// Test that the helper does not overwrite user's configuration.
@ -541,9 +588,10 @@ func TestFalcoctlRefs(t *testing.T) {
require.True(t, slices.Contains(allowedTypes.([]interface{}), "rulesfile"))
// Test plugin reference.
refs := artifactConfig["install"].(map[string]interface{})["refs"].([]interface{})
require.Len(t, refs, 2)
require.True(t, slices.Contains(refs, "falco-rules:3"))
require.True(t, slices.Contains(refs, "ghcr.io/falcosecurity/plugins/plugin/k8smeta:0.2.1"))
require.Len(t, refs, 3)
require.True(t, slices.Contains(refs, "falco-rules:4"))
require.True(t, slices.Contains(refs, "ghcr.io/falcosecurity/plugins/plugin/k8smeta:0.3.1"))
require.True(t, slices.Contains(refs, "ghcr.io/falcosecurity/plugins/plugin/container:0.3.5"))
}
testCases := []struct {
@ -579,7 +627,7 @@ func TestFalcoctlRefs(t *testing.T) {
},
}
helmChartPath, err := filepath.Abs(chartPath)
helmChartPath, err := filepath.Abs(unit.ChartPath)
require.NoError(t, err)
for _, testCase := range testCases {
@ -589,7 +637,7 @@ func TestFalcoctlRefs(t *testing.T) {
t.Parallel()
options := &helm.Options{SetJsonValues: testCase.valuesJSON, SetValues: map[string]string{"collectors.kubernetes.enabled": "true"}}
output := helm.RenderTemplate(t, options, helmChartPath, releaseName, []string{"templates/falcoctl-configmap.yaml"})
output := helm.RenderTemplate(t, options, helmChartPath, unit.ReleaseName, []string{"templates/falcoctl-configmap.yaml"})
var cm corev1.ConfigMap
helm.UnmarshalK8SYaml(t, output, &cm)

View File

@ -53,11 +53,11 @@ falcoctl:
install:
# -- List of artifacts to be installed by the falcoctl init container.
# We do not recommend installing (or following) plugins for security reasons since they are executable objects.
refs: [falco-rules:3]
refs: [falco-rules:4]
follow:
# -- List of artifacts to be followed by the falcoctl sidecar container.
# We do not recommend installing (or following) plugins for security reasons since they are executable objects.
refs: [falco-rules:3]
refs: [falco-rules:4]
# Set this to true to force Falco to output the logs as soon as they are emitted.
tty: false

View File

@ -30,10 +30,10 @@ falcoctl:
artifact:
install:
# -- List of artifacts to be installed by the falcoctl init container.
refs: [falco-rules:3, k8saudit-rules:0.11, k8saudit:0.11]
refs: [falco-rules:4, k8saudit-rules:0.11, k8saudit:0.11]
follow:
# -- List of artifacts to be followed by the falcoctl sidecar container.
refs: [falco-rules:3, k8saudit-rules:0.11, k8saudit:0.11]
refs: [falco-rules:4, k8saudit-rules:0.11, k8saudit:0.11]
services:
- name: k8saudit-webhook

View File

@ -362,24 +362,73 @@ collectors:
# -- Enable/disable all the metadata collectors.
enabled: true
# -- This collector is deprecated and will be removed in the future. Please use the containerEngine collector instead.
docker:
# -- Enable Docker support.
enabled: true
enabled: false
# -- The path of the Docker daemon socket.
socket: /var/run/docker.sock
# -- This collector is deprecated and will be removed in the future. Please use the containerEngine collector instead.
containerd:
# -- Enable ContainerD support.
enabled: true
enabled: false
# -- The path of the ContainerD socket.
socket: /run/containerd/containerd.sock
socket: /run/host-containerd/containerd.sock
# -- This collector is deprecated and will be removed in the future. Please use the containerEngine collector instead.
crio:
# -- Enable CRI-O support.
enabled: true
enabled: false
# -- The path of the CRI-O socket.
socket: /run/crio/crio.sock
# -- This collector is the new container engine collector that replaces the old docker, containerd, crio and podman collectors.
# It is designed to collect metadata from various container engines and provide a unified interface through the container plugin.
# When enabled, it will deploy the container plugin and use it to collect metadata from the container engines.
# Keep in mind that the old collectors (docker, containerd, crio, podman) will use the container plugin to collect metadata under the hood.
containerEngine:
# -- Enable Container Engine support.
enabled: true
# -- pluginRef is the OCI reference for the container plugin. It could be a full reference such as
# "ghcr.io/falcosecurity/plugins/plugin/container:0.3.5". Or just name + tag: container:0.3.5.
pluginRef: "ghcr.io/falcosecurity/plugins/plugin/container:0.3.5"
# -- labelMaxLen is the maximum length of the labels that can be used in the container plugin.
# Container labels longer than this value won't be collected.
labelMaxLen: 100
# -- withSize specifies whether to enable container size inspection, which is inherently slow.
withSize: false
# -- hooks specify the hooks that will be used to collect metadata from the container engine.
# The available hooks are: create, start.
hooks: ["create"]
# -- engines specify the container engines that will be used to collect metadata.
# See https://github.com/falcosecurity/plugins/blob/main/plugins/container/README.md#configuration
engines:
docker:
enabled: true
sockets: ["/var/run/docker.sock"]
podman:
enabled: true
sockets: ["/run/podman/podman.sock"]
containerd:
enabled: true
sockets: ["/run/host-containerd/containerd.sock"]
cri:
enabled: true
sockets:
[
"/run/containerd/containerd.sock",
"/run/crio/crio.sock",
"/run/k3s/containerd/containerd.sock",
"/run/host-containerd/containerd.sock",
]
lxc:
enabled: true
libvirt_lxc:
enabled: true
bpm:
enabled: true
# -- kubernetes holds the configuration for the kubernetes collector. Starting from version 0.37.0 of Falco, the legacy
# kubernetes client has been removed. A new standalone component named k8s-metacollector and a Falco plugin have been developed
# to solve the issues that were present in the old implementation. More info here: https://github.com/falcosecurity/falco/issues/2973
@ -394,7 +443,7 @@ collectors:
enabled: false
# -- pluginRef is the OCI reference for the k8smeta plugin. It could be a full reference such as:
# "ghcr.io/falcosecurity/plugins/plugin/k8smeta:0.1.0". Or just name + tag: k8smeta:0.1.0.
pluginRef: "ghcr.io/falcosecurity/plugins/plugin/k8smeta:0.2.1"
pluginRef: "ghcr.io/falcosecurity/plugins/plugin/k8smeta:0.3.1"
# -- collectorHostname is the address of the k8s-metacollector. When not specified it will be set to match
# k8s-metacollector service, e.g. falco-k8smetacollector.falco.svc. If for any reason you need to override
# it, make sure to set here the address of the k8s-metacollector.
@ -410,6 +459,7 @@ collectors:
# In Falco usually we put the host '/proc' folder under '/host/proc' so
# the default for this config is '/host'.
# The path used here must not have a final '/'.
# Deprecated since falco 0.41.0 and k8smeta 0.3.0.
hostProc: /host
###########################
@ -424,6 +474,9 @@ extra:
# -- Additional initContainers for Falco pods.
initContainers: []
# -- Override hostname in falco pod
podHostname:
# -- certificates used by webserver and grpc server.
# paste certificate content or use helm with --set-file
# or use existing secret containing key, crt, ca as well as pem bundle
@ -471,6 +524,14 @@ falcosidekick:
# -- Listen port. Default value: 2801
listenPort: ""
# -- Enable the response actions using Falco Talon.
responseActions:
enabled: false
# -- For configuration values, see https://github.com/falcosecurity/charts/blob/master/charts/falco-talon/values.yaml
# -- It must be used in conjunction with the responseActions.enabled option.
falco-talon: {}
####################
# falcoctl config #
####################
@ -483,7 +544,7 @@ falcoctl:
# -- The image repository to pull from.
repository: falcosecurity/falcoctl
# -- The image tag to pull.
tag: "0.11.0"
tag: "0.11.2"
artifact:
# -- Runs "falcoctl artifact install" command as an init container. It is used to install artifacts before
# Falco starts. It provides them to Falco by using an emptyDir volume.
@ -536,7 +597,7 @@ falcoctl:
# -- Resolve the dependencies for artifacts.
resolveDeps: true
# -- List of artifacts to be installed by the falcoctl init container.
refs: [falco-rules:3]
refs: [falco-rules:4]
# -- Directory where the rulesfiles are saved. The path is relative to the container, which in this case is an emptyDir
# mounted also by the Falco pod.
rulesfilesDir: /rulesfiles
@ -544,7 +605,7 @@ falcoctl:
pluginsDir: /plugins
follow:
# -- List of artifacts to be followed by the falcoctl sidecar container.
refs: [falco-rules:3]
refs: [falco-rules:4]
# -- How often the tool checks for new versions of the followed artifacts.
every: 6h
# -- HTTP endpoint that serves the api versions of the Falco instance. It is used to check if the new versions are compatible
@ -799,8 +860,30 @@ falco:
# Also, nested include is not allowed, i.e. included config files won't be able to include other config files.
#
# Like for 'rules_files', specifying a folder will load all the config files present in it in lexicographical order.
#
# Three merge-strategies are available:
# `append` (default):
# * existing sequence keys will be appended
# * existing scalar keys will be overridden
# * non-existing keys will be added
# `override`:
# * existing keys will be overridden
# * non-existing keys will be added
# `add-only`:
# * existing keys will be ignored
# * non-existing keys will be added
#
# Each item on the list can be either a yaml map or a simple string.
# The simple string will be interpreted as the config file path, and the `append` merge-strategy will be enforced.
# When the item is a yaml map instead, it will be of the form: ` path: foo\n strategy: X`.
# When `strategy` is omitted, once again `append` is used.
#
# When a merge-strategy is enabled for a folder entry, all the included config files will use that merge-strategy.
config_files:
- /etc/falco/config.d
# Example of config file specified as yaml map with strategy made explicit.
# - path: $HOME/falco_local_configs/
# strategy: add-only
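The three strategies described above can be modeled as a map merge. A minimal Go sketch of those semantics, for illustration only; Falco performs this merge internally on the parsed YAML:

```go
package main

import "fmt"

// mergeConfig applies one of the three documented merge-strategies when
// overlaying a config file onto the base config.
func mergeConfig(base, overlay map[string]any, strategy string) map[string]any {
	out := make(map[string]any, len(base))
	for k, v := range base {
		out[k] = v
	}
	for k, v := range overlay {
		existing, exists := out[k]
		switch strategy {
		case "add-only": // existing keys ignored, new keys added
			if !exists {
				out[k] = v
			}
		case "override": // existing keys overridden, new keys added
			out[k] = v
		default: // "append": sequences appended, scalars overridden
			if seq, ok := existing.([]any); exists && ok {
				if more, ok := v.([]any); ok {
					out[k] = append(seq, more...)
					continue
				}
			}
			out[k] = v
		}
	}
	return out
}

func main() {
	base := map[string]any{"priority": "debug", "tags": []any{"a"}}
	overlay := map[string]any{"priority": "info", "tags": []any{"b"}, "extra": 1}
	fmt.Println(mergeConfig(base, overlay, "append"))
	fmt.Println(mergeConfig(base, overlay, "add-only"))
}
```

Note that when a strategy is set on a folder entry, every config file loaded from that folder is merged with that same strategy.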
# [Stable] `watch_config_files`
#
@ -870,6 +953,13 @@ falco:
# information.
json_include_message_property: false
# [Incubating] `json_include_output_fields_property`
#
# When using JSON output in Falco, you have the option to include the individual
# output fields for easier access. To reduce the logging volume, it is recommended
# to turn it off if it's not necessary for your use case.
json_include_output_fields_property: true
# [Stable] `buffered_outputs`
#
# -- Enabling buffering for the output queue can offer performance optimization,
@ -903,6 +993,7 @@ falco:
# affect the regular Falco message in any way. These can be specified as a
# custom name with a custom format or as any supported field
# (see: https://falco.org/docs/reference/rules/supported-fields/)
# `suggested_output`: enable the use of extractor plugins suggested fields for the matching source output.
#
# Example:
#
@ -918,7 +1009,13 @@ falco:
# at the end telling the CPU number. In addition, if `json_output` is true, in the "output_fields"
# property you will find three new ones: "evt.cpu", "home_directory" which will contain the value of the
# environment variable $HOME, and "evt.hostname" which will contain the hostname.
append_output: []
# By default, we enable suggested_output for any source.
# This means that any extractor plugin that marks some of its fields
# as suggested output will see those fields appended to the output
# in the form "foo_bar=$foo.bar"
append_output:
- suggested_output: true
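The `foo_bar=$foo.bar` shape described in the comment above follows a simple rule: the appended key is the field name with dots replaced by underscores, and the value is the field reference. A small sketch of that mapping (the helper name is illustrative; the real formatting happens inside Falco):

```go
package main

import (
	"fmt"
	"strings"
)

// suggestedOutputEntry maps a plugin field name like "foo.bar" to the
// output fragment "foo_bar=$foo.bar" appended for suggested fields.
func suggestedOutputEntry(field string) string {
	key := strings.ReplaceAll(field, ".", "_")
	return fmt.Sprintf("%s=$%s", key, field)
}

func main() {
	fmt.Println(suggestedOutputEntry("foo.bar")) // foo_bar=$foo.bar
}
```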
##########################
# Falco outputs channels #
@ -983,6 +1080,8 @@ falco:
compress_uploads: false
# -- keep_alive whether to keep alive the connection.
keep_alive: false
# Maximum number of consecutive libcurl timeouts to ignore
max_consecutive_timeouts: 5
# [Stable] `program_output`
#
@ -1145,8 +1244,8 @@ falco:
# "alert", "critical", "error", "warning", "notice", "info", "debug". It is not
# recommended for production use.
libs_logger:
enabled: false
severity: debug
enabled: true
severity: info
#################################################################################
# Falco logging / alerting / metrics related to software functioning (advanced) #
@ -1553,7 +1652,6 @@ falco:
falco_libs:
thread_table_size: 262144
# [Incubating] `container_engines`
#
# This option allows you to explicitly enable or disable API lookups against container
@ -1584,7 +1682,12 @@ falco:
enabled: false
cri:
enabled: false
sockets: ["/run/containerd/containerd.sock", "/run/crio/crio.sock", "/run/k3s/containerd/containerd.sock"]
sockets:
[
"/run/containerd/containerd.sock",
"/run/crio/crio.sock",
"/run/k3s/containerd/containerd.sock",
]
disable_async: false
podman:
enabled: false

View File

@ -5,6 +5,14 @@ numbering uses [semantic versioning](http://semver.org).
Before release 0.1.20, the helm chart can be found in `falcosidekick` [repository](https://github.com/falcosecurity/falcosidekick/tree/master/deploy/helm/falcosidekick).
## 0.10.2
- Add type information to `volumeClaimTemplates`.
## 0.10.1
- Add an "or" condition for `configmap-ui`
## 0.10.0
- Add new features to the Loki dashboard

View File

@ -3,7 +3,7 @@ appVersion: 2.31.1
description: Connect Falco to your ecosystem
icon: https://raw.githubusercontent.com/falcosecurity/falcosidekick/master/imgs/falcosidekick_color.png
name: falcosidekick
version: 0.10.0
version: 0.10.2
keywords:
- monitoring
- security

View File

@ -1,4 +1,4 @@
{{- if and (.Values.webui.enabled) (.Values.webui.redis.enabled) -}}
{{- if and (.Values.webui.enabled) (or (.Values.webui.redis.enabled) (.Values.webui.externalRedis.enabled)) -}}
---
apiVersion: v1
kind: ConfigMap

View File

@ -276,7 +276,9 @@ spec:
{{ end }}
{{- if .Values.webui.redis.storageEnabled }}
volumeClaimTemplates:
- metadata:
- apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: {{ include "falcosidekick.fullname" . }}-ui-redis-data
spec:
accessModes: [ "ReadWriteOnce" ]

go.mod
View File

@ -1,6 +1,8 @@
module github.com/falcosecurity/charts
go 1.21.0
go 1.23.0
toolchain go1.23.4
require (
github.com/gruntwork-io/terratest v0.46.8
@ -50,12 +52,12 @@ require (
github.com/russross/blackfriday/v2 v2.1.0 // indirect
github.com/spf13/pflag v1.0.5 // indirect
github.com/urfave/cli v1.22.2 // indirect
golang.org/x/crypto v0.31.0 // indirect
golang.org/x/net v0.33.0 // indirect
golang.org/x/crypto v0.36.0 // indirect
golang.org/x/net v0.38.0 // indirect
golang.org/x/oauth2 v0.8.0 // indirect
golang.org/x/sys v0.28.0 // indirect
golang.org/x/term v0.27.0 // indirect
golang.org/x/text v0.21.0 // indirect
golang.org/x/sys v0.31.0 // indirect
golang.org/x/term v0.30.0 // indirect
golang.org/x/text v0.23.0 // indirect
golang.org/x/time v0.3.0 // indirect
google.golang.org/appengine v1.6.7 // indirect
google.golang.org/protobuf v1.33.0 // indirect

go.sum
View File

@ -138,8 +138,8 @@ github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9dec
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.31.0 h1:ihbySMvVjLAeSH1IbfcRTkD/iNscyz8rGzjF/E5hV6U=
golang.org/x/crypto v0.31.0/go.mod h1:kDsLvtWBEx7MV9tJOj9bnXsPbxwJQ6csT/x4KIN4Ssk=
golang.org/x/crypto v0.36.0 h1:AnAEvhDddvBdpY+uR+MyHmuZzzNqXSe/GvuDeob5L34=
golang.org/x/crypto v0.36.0/go.mod h1:Y4J0ReaxCR1IMaabaSMugxJES1EpwhBHhv2bDHklZvc=
golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
@ -148,8 +148,8 @@ golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLL
golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20220127200216-cd36cc0744dd/go.mod h1:CfG3xpIq0wQ8r1q4Su4UZFWDARRcnwPjda9FqA0JpMk=
golang.org/x/net v0.33.0 h1:74SYHlV8BIgHIFC/LrYkOGIwL19eTYXQ5wc6TBuO36I=
golang.org/x/net v0.33.0/go.mod h1:HXLR5J+9DxmrqMwG9qjGCxZ+zKXxBru04zlTvWlWuN4=
golang.org/x/net v0.38.0 h1:vRMAPTMaeGqVhG5QyLJHqNDwecKTomGeqbnfZyKlBI8=
golang.org/x/net v0.38.0/go.mod h1:ivrbrMbzFq5J41QOQh0siUuly180yBYtLp+CKbEaFx8=
golang.org/x/oauth2 v0.8.0 h1:6dkIjl3j3LtZ/O3sTgZTMsLKSftL/B8Zgq4huOIIUu8=
golang.org/x/oauth2 v0.8.0/go.mod h1:yr7u4HXZRm1R1kBWqr/xKNqewf0plRYoB7sla+BCIXE=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
@ -163,17 +163,17 @@ golang.org/x/sys v0.0.0-20191026070338-33540a1f6037/go.mod h1:h1NjWce9XRLGQEsW7w
golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20211216021012-1d35b9e2eb4e/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.28.0 h1:Fksou7UEQUWlKvIdsqzJmUmCX3cZuD2+P3XyyzwMhlA=
golang.org/x/sys v0.28.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.31.0 h1:ioabZlmFYtWhL+TRYpcnNlLwhyxaM9kWTDEmfnprqik=
golang.org/x/sys v0.31.0/go.mod h1:BJP2sWEmIv4KK5OTEluFJCKSidICx8ciO85XgH3Ak8k=
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/term v0.27.0 h1:WP60Sv1nlK1T6SupCHbXzSaN0b9wUmsPoRS9b61A23Q=
golang.org/x/term v0.27.0/go.mod h1:iMsnZpn0cago0GOrHO2+Y7u7JPn5AylBrcoWkElMTSM=
golang.org/x/term v0.30.0 h1:PQ39fJZ+mfadBm0y5WlL4vlM7Sx1Hgf13sMIY2+QS9Y=
golang.org/x/term v0.30.0/go.mod h1:NYYFdzHoI5wRh/h5tDMdMqCqPJZEuNqVR5xJLd/n67g=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ=
golang.org/x/text v0.21.0 h1:zyQAAkrwaneQ066sspRyJaG9VNi/YJ1NfzcGB3hZ/qo=
golang.org/x/text v0.21.0/go.mod h1:4IBbMaMmOPCJ8SecivzSH54+73PCFmPWxNTLm+vZkEQ=
golang.org/x/text v0.23.0 h1:D71I7dUrlY+VX0gQShAThNGHFxZ13dGLBHQLVl1mJlY=
golang.org/x/text v0.23.0/go.mod h1:/BLNzu4aZCJ1+kcD0DNRotWKage4q2rGVAg4o22unh4=
golang.org/x/time v0.3.0 h1:rg5rLMjNzMS1RkNLzCG38eapWhnYLFYXDXj2gOlr8j4=
golang.org/x/time v0.3.0/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=

View File

@ -16,7 +16,7 @@ Thus, to trigger it, the following actions need to happen:
> The approvers may differ depending on the chart. Please, refer to the `OWNERS` file under the specific chart directory.
Once the CI has done its job, a new tag is live on [GitHub](https://github.com/falcosecurity/falco-exporter/releases), and the site [https://falcosecurity.github.io/charts](https://falcosecurity.github.io/charts) indexes the new chart version.
Once the CI has done its job, a new tag is live on [GitHub](https://github.com/falcosecurity/charts/releases), and the site [https://falcosecurity.github.io/charts](https://falcosecurity.github.io/charts) indexes the new chart version.
## Automation explained