Compare commits

...

408 Commits

Author SHA1 Message Date
dependabot[bot] 7ad10b8063 chore(deps): Bump lycheeverse/lychee-action from 2.4.0 to 2.5.0
Bumps [lycheeverse/lychee-action](https://github.com/lycheeverse/lychee-action) from 2.4.0 to 2.5.0.
- [Release notes](https://github.com/lycheeverse/lychee-action/releases)
- [Commits](1d97d84f0b...5c4ee84814)

---
updated-dependencies:
- dependency-name: lycheeverse/lychee-action
  dependency-version: 2.5.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-08-08 08:29:15 +02:00
Federico Di Pierro cc96a4dde6 fix(charts/falco/tests): fixed Falco chart tests.
Signed-off-by: Federico Di Pierro <nierro92@gmail.com>
2025-08-04 22:36:52 +02:00
Federico Di Pierro 9717814edb update(charts/falco): updated CHANGELOG.
Signed-off-by: Federico Di Pierro <nierro92@gmail.com>
2025-08-04 22:36:52 +02:00
Federico Di Pierro 6305d9bf7d chore(charts/falco): bump chart version + variables.
Signed-off-by: Federico Di Pierro <nierro92@gmail.com>
2025-08-04 22:36:52 +02:00
Federico Di Pierro 0b9b5a01d4 update(charts/falco): bump container and k8smeta plugin to latest.
Signed-off-by: Federico Di Pierro <nierro92@gmail.com>
2025-08-04 22:36:52 +02:00
Leonardo Grasso 01ed738a2c docs(charts/falco): update docs for v6.2.1
Signed-off-by: Leonardo Grasso <me@leonardograsso.com>
2025-07-24 15:15:40 +02:00
Leonardo Grasso 11be245149 update(charts/falco): bump version to 6.2.1
Signed-off-by: Leonardo Grasso <me@leonardograsso.com>
2025-07-24 15:15:40 +02:00
Leonardo Grasso 65ba4c266e update(charts/falco): bump container plugin to v0.3.3
Signed-off-by: Leonardo Grasso <me@leonardograsso.com>
2025-07-24 15:15:40 +02:00
Leonardo Grasso 530eded713 docs(charts/falco): update docs for v6.2.0
Signed-off-by: Leonardo Grasso <me@leonardograsso.com>
2025-07-24 12:17:40 +02:00
Leonardo Grasso 9e1550ab44 update(charts/falco): bump charts to v6.2.0
Signed-off-by: Leonardo Grasso <me@leonardograsso.com>
2025-07-24 12:17:40 +02:00
Leonardo Grasso 3a7cb6edba update(charts/falco): bump container plugin to v0.3.2
Signed-off-by: Leonardo Grasso <me@leonardograsso.com>
2025-07-24 12:17:40 +02:00
Leonardo Grasso 2646171e4c chore(charts/falco): adapt volume mounts for new containerEngine
Signed-off-by: Leonardo Grasso <me@leonardograsso.com>
2025-07-24 12:17:40 +02:00
Leonardo Grasso 9f5ead4705 update(charts/falco): update containerEngines configuration
Signed-off-by: Leonardo Grasso <me@leonardograsso.com>
2025-07-24 12:17:40 +02:00
Benjamin FERNANDEZ 3cbf72bd9c feat(falco): Add possibility to customize falco pods hostname
Signed-off-by: Benjamin FERNANDEZ <benjamin2.fernandez.ext@orange.com>
2025-07-24 09:56:38 +02:00
Leonardo Grasso ff984cc8a8 update: remove falco-exporter
Signed-off-by: Leonardo Grasso <me@leonardograsso.com>
2025-07-22 15:06:29 +02:00
Leonardo Di Giovanna cd4dc68cb1 docs(OWNERS): add `ekoops` as approver
Signed-off-by: Leonardo Di Giovanna <41296180+ekoops@users.noreply.github.com>
2025-07-18 10:36:10 +02:00
Leonardo Di Giovanna 56f2eb7ccf update(charts/falco): update `README.md` for 6.0.2
Signed-off-by: Leonardo Di Giovanna <leonardodigiovanna1@gmail.com>
2025-07-01 14:36:21 +02:00
Leonardo Di Giovanna 489e4d67b6 update(charts/falco): update `CHANGELOG.md` for 6.0.2
Signed-off-by: Leonardo Di Giovanna <leonardodigiovanna1@gmail.com>
2025-07-01 14:36:21 +02:00
Leonardo Di Giovanna b821e9db06 update(falco): bump container plugin to 0.3.1
Signed-off-by: Leonardo Di Giovanna <leonardodigiovanna1@gmail.com>
2025-07-01 14:36:21 +02:00
Leonardo Di Giovanna 4ba195cc61 update(falco): upgrade chart for Falco 0.41.3
Signed-off-by: Leonardo Di Giovanna <leonardodigiovanna1@gmail.com>
2025-07-01 14:36:21 +02:00
dependabot[bot] 2206d6ce36 chore(deps): Bump sigstore/cosign-installer from 3.9.0 to 3.9.1
Bumps [sigstore/cosign-installer](https://github.com/sigstore/cosign-installer) from 3.9.0 to 3.9.1.
- [Release notes](https://github.com/sigstore/cosign-installer/releases)
- [Commits](fb28c2b633...398d4b0eee)

---
updated-dependencies:
- dependency-name: sigstore/cosign-installer
  dependency-version: 3.9.1
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-06-24 07:57:43 +02:00
dependabot[bot] b31c2881eb chore(deps): Bump sigstore/cosign-installer from 3.8.2 to 3.9.0
Bumps [sigstore/cosign-installer](https://github.com/sigstore/cosign-installer) from 3.8.2 to 3.9.0.
- [Release notes](https://github.com/sigstore/cosign-installer/releases)
- [Commits](3454372f43...fb28c2b633)

---
updated-dependencies:
- dependency-name: sigstore/cosign-installer
  dependency-version: 3.9.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-06-18 11:44:05 +02:00
Leonardo Di Giovanna 1f52c29818 update(charts/falco): update `README.md` for 6.0.1
Signed-off-by: Leonardo Di Giovanna <leonardodigiovanna1@gmail.com>
2025-06-17 14:38:44 +02:00
Leonardo Di Giovanna 42b6a54d71 update(charts/falco): update `CHANGELOG.md` for 6.0.1
Signed-off-by: Leonardo Di Giovanna <leonardodigiovanna1@gmail.com>
2025-06-17 14:38:44 +02:00
Leonardo Di Giovanna f9c9f14e04 update(falco): bump container plugin to 0.3.0
Signed-off-by: Leonardo Di Giovanna <leonardodigiovanna1@gmail.com>
2025-06-17 14:38:44 +02:00
Leonardo Di Giovanna d7816c9a2f update(falco): upgrade chart for Falco 0.41.2
Signed-off-by: Leonardo Di Giovanna <leonardodigiovanna1@gmail.com>
2025-06-17 14:38:44 +02:00
Leonardo Grasso 36b77e4937 docs(falco): update docs for v6.0.0
Signed-off-by: Leonardo Grasso <me@leonardograsso.com>
2025-06-13 10:31:16 +02:00
Leonardo Grasso a03331d05c update(falco): bump to v6.0.0
Signed-off-by: Leonardo Grasso <me@leonardograsso.com>
2025-06-13 10:31:16 +02:00
Igor Eulalio 60f43e7ad4 chore: response_actions to responseActions
Signed-off-by: Igor Eulalio <igor.eulalio@sysdig.com>
2025-06-13 10:31:16 +02:00
Igor Eulalio 9655a0f6da make docs
Signed-off-by: Igor Eulalio <igor.eulalio@sysdig.com>
2025-06-13 10:31:16 +02:00
Igor Eulalio a6baf31059 feat: enable talon through response_actions.enabled proposal
Signed-off-by: Igor Eulalio <igor.eulalio@sysdig.com>
2025-06-13 10:31:16 +02:00
Igor Eulalio 049a366d92 feat: bump talon to 0.3.0, rename talon subchart to follow standard
Signed-off-by: Igor Eulalio <igor.eulalio@sysdig.com>
2025-06-13 10:31:16 +02:00
Federico Di Pierro 90dea388ad update(charts/falco): bump Falco chart to 5.0.3.
Signed-off-by: Federico Di Pierro <nierro92@gmail.com>
2025-06-05 16:07:31 +02:00
Federico Di Pierro da70b354c2 update(tests): bump container plugin to 0.2.5.
Signed-off-by: Federico Di Pierro <nierro92@gmail.com>
2025-06-05 10:43:30 +02:00
Federico Di Pierro e9164d6a17 chore(docs): run `make docs`.
Signed-off-by: Federico Di Pierro <nierro92@gmail.com>
2025-06-05 10:43:30 +02:00
Federico Di Pierro fdf085c249 update(charts/falco): update charts for Falco 0.41.1.
Signed-off-by: Federico Di Pierro <nierro92@gmail.com>
2025-06-05 10:43:30 +02:00
Leonardo Grasso e90802d1bf docs(.github): clean up leftover in PULL_REQUEST_TEMPLATE.md
Signed-off-by: Leonardo Grasso <me@leonardograsso.com>
2025-06-03 14:21:13 +02:00
Leonardo Grasso 4c48271f75 chore(charts/falco): bump chart for release
Signed-off-by: Leonardo Grasso <me@leonardograsso.com>
2025-06-03 14:20:14 +02:00
Leonardo Grasso 14b38d251b chore(charts/falco): clarify the notice does not require action items in all cases
Signed-off-by: Leonardo Grasso <me@leonardograsso.com>
2025-06-03 14:20:14 +02:00
Leonardo Grasso 560fd390bc fix(charts/falco): some volumes do not depend on artifact install
Signed-off-by: Leonardo Grasso <me@leonardograsso.com>
2025-06-03 14:20:14 +02:00
Leonardo Grasso 29c627a4ff update(charts/falco)!: bump chart to 5.0.0
Signed-off-by: Leonardo Grasso <me@leonardograsso.com>
2025-05-30 16:09:37 +02:00
Leonardo Grasso 6d9ccd5078 update(charts/falco): don't use `debian` flavor anymore
Signed-off-by: Leonardo Grasso <me@leonardograsso.com>
2025-05-30 16:09:37 +02:00
Leonardo Grasso ea882813a8 chore(charts/falco): chart release
Signed-off-by: Leonardo Grasso <me@leonardograsso.com>
2025-05-30 16:09:37 +02:00
Leonardo Grasso 36160ce337 update(charts/falco): bump falcoctl to v0.11.2
Signed-off-by: Leonardo Grasso <me@leonardograsso.com>
2025-05-30 16:09:37 +02:00
Leonardo Grasso 7baa31fbd5 docs(charts/falco): update readme
Signed-off-by: Leonardo Grasso <me@leonardograsso.com>
2025-05-29 12:01:25 +02:00
Leonardo Grasso 601643dbbf update(charts/falco): add `json_include_output_fields_property` in config
Co-authored-by: Federico Di Pierro <nierro92@gmail.com>
Signed-off-by: Leonardo Grasso <me@leonardograsso.com>
2025-05-29 12:01:25 +02:00
Aldo Lacuku 3b9030f7ef update(falco): bump container plugin to 0.2.4
Co-authored-by: Federico Di Pierro <nierro92@gmail.com>
Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2025-05-29 12:01:25 +02:00
Aldo Lacuku 7a89dc0a6d update(falco): docs and changelog
Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2025-05-29 12:01:25 +02:00
Aldo Lacuku 931b11fd6e remove(falco): delete outdated unit tests
Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2025-05-29 12:01:25 +02:00
Aldo Lacuku bb053c5c12 update(falco): bump rules to version 4.0.0
Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2025-05-29 12:01:25 +02:00
Aldo Lacuku dba12e1e0d new(falco): unit tests for container plugin's configuration
Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2025-05-29 12:01:25 +02:00
Aldo Lacuku 092de0da9d new(falco): add container plugin's volumes and volumeMounts
Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2025-05-29 12:01:25 +02:00
Aldo Lacuku 911c16ce46 refactor(falco/tests): reorganize unit tests
Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2025-05-29 12:01:25 +02:00
Aldo Lacuku 1c8fb43e58 new(falco): add container plugin's configuration
* new helper
* unit tests
* configuration values in values.yaml

Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2025-05-29 12:01:25 +02:00
Aldo Lacuku cc6cb3771d update(falco): bump k8smeta to version 0.3.0
Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2025-05-29 12:01:25 +02:00
Aldo Lacuku 4e49371d8c update(falco): update config_files configuration
Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2025-05-29 12:01:25 +02:00
Aldo Lacuku 8ab13ea972 update(falco): enable libs_logger
Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2025-05-29 12:01:25 +02:00
Aldo Lacuku e74948391f feat(falco): add new http_output option max_consecutive_timeouts
Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2025-05-29 12:01:25 +02:00
Aldo Lacuku 1c5ac83a96 feat(falco): add suggested_output configuration to falco
Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2025-05-29 12:01:25 +02:00
Aldo Lacuku 003d93735e chore(falco): drop -pk args
Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2025-05-29 12:01:25 +02:00
Mathieu Garstecki ac7d08f06b chore: bump falcosidekick chart version (+changelog)
Signed-off-by: Mathieu Garstecki <mathieu.garstecki@pigment.com>
2025-05-28 15:04:18 +02:00
Mathieu Garstecki 4bbc57bc78 fix: avoid ArgoCD diff on volumeClaimTemplates
K8S adds those fields by itself on apply, which creates a diff in ArgoCD.
We could ignore it, but there is little reason to omit those fields in
the definition.

Signed-off-by: Mathieu Garstecki <mathieu.garstecki@pigment.com>
2025-05-28 15:04:18 +02:00
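[Editor's note on the fix above — a hedged sketch, not the actual chart diff: the idea is to spell out in `volumeClaimTemplates` the fields the Kubernetes API server would otherwise default on apply, so ArgoCD no longer reports drift. Names and sizes below are illustrative.]

```yaml
volumeClaimTemplates:
  - apiVersion: v1                 # defaulted by the API server if omitted
    kind: PersistentVolumeClaim    # defaulted by the API server if omitted
    metadata:
      name: data                   # illustrative name
    spec:
      accessModes: ["ReadWriteOnce"]
      volumeMode: Filesystem       # defaulted by K8s; stating it avoids a diff
      resources:
        requests:
          storage: 1Gi             # illustrative size
```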
dependabot[bot] 6db1b396ae chore(deps): Bump actions/setup-python from 5.5.0 to 5.6.0
Bumps [actions/setup-python](https://github.com/actions/setup-python) from 5.5.0 to 5.6.0.
- [Release notes](https://github.com/actions/setup-python/releases)
- [Commits](8d9ed9ac5c...a26af69be9)

---
updated-dependencies:
- dependency-name: actions/setup-python
  dependency-version: 5.6.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-04-25 08:58:38 +02:00
dependabot[bot] 0547284c57 chore(deps): Bump sigstore/cosign-installer from 3.8.1 to 3.8.2
Bumps [sigstore/cosign-installer](https://github.com/sigstore/cosign-installer) from 3.8.1 to 3.8.2.
- [Release notes](https://github.com/sigstore/cosign-installer/releases)
- [Commits](d7d6bc7722...3454372f43)

---
updated-dependencies:
- dependency-name: sigstore/cosign-installer
  dependency-version: 3.8.2
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-04-23 09:05:30 +02:00
dependabot[bot] a306f299a3 chore(deps): Bump golang.org/x/net from 0.37.0 to 0.38.0
Bumps [golang.org/x/net](https://github.com/golang/net) from 0.37.0 to 0.38.0.
- [Commits](https://github.com/golang/net/compare/v0.37.0...v0.38.0)

---
updated-dependencies:
- dependency-name: golang.org/x/net
  dependency-version: 0.38.0
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-04-17 10:17:49 +02:00
dependabot[bot] 00dacd98de chore(deps): Bump lycheeverse/lychee-action from 2.3.0 to 2.4.0
Bumps [lycheeverse/lychee-action](https://github.com/lycheeverse/lychee-action) from 2.3.0 to 2.4.0.
- [Release notes](https://github.com/lycheeverse/lychee-action/releases)
- [Commits](f613c4a64e...1d97d84f0b)

---
updated-dependencies:
- dependency-name: lycheeverse/lychee-action
  dependency-version: 2.4.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-04-07 10:57:46 +02:00
Aliloya e648b42f87 Update CHANGELOG.md
Signed-off-by: Aliloya <53705110+Aliloya-eng@users.noreply.github.com>
2025-03-27 15:23:36 +01:00
Aliloya bd24ca27db fix: add an or condition for configmap
Co-authored-by: Igor Eulalio <41654187+IgorEulalio@users.noreply.github.com>
Signed-off-by: Aliloya <53705110+Aliloya-eng@users.noreply.github.com>
2025-03-27 15:23:36 +01:00
Ali Hassan 8e24293503 fix: remove faulty condition for configmap
Signed-off-by: Ali Hassan <53705110+Aliloya-eng@users.noreply.github.com>
2025-03-27 15:23:36 +01:00
dependabot[bot] 347a231b17 chore(deps): Bump actions/setup-python from 5.4.0 to 5.5.0
Bumps [actions/setup-python](https://github.com/actions/setup-python) from 5.4.0 to 5.5.0.
- [Release notes](https://github.com/actions/setup-python/releases)
- [Commits](42375524e2...8d9ed9ac5c)

---
updated-dependencies:
- dependency-name: actions/setup-python
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-03-26 10:23:28 +01:00
dependabot[bot] 9c59182fe2 chore(deps): Bump actions/setup-go from 5.3.0 to 5.4.0
Bumps [actions/setup-go](https://github.com/actions/setup-go) from 5.3.0 to 5.4.0.
- [Release notes](https://github.com/actions/setup-go/releases)
- [Commits](f111f3307d...0aaccfd150)

---
updated-dependencies:
- dependency-name: actions/setup-go
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-03-20 07:47:25 +01:00
Igor Eulalio 56a04dac13 fix: disable Talon by default on Falco installation
Signed-off-by: Igor Eulalio <igor.eulalio@sysdig.com>
2025-03-19 15:53:21 +01:00
Igor Eulalio 6d160f3560 chore: make docs
Signed-off-by: Igor Eulalio <igor.eulalio@sysdig.com>
2025-03-19 14:14:21 +01:00
Igor Eulalio a23ab7247b feat: set falco and sidekick chart version to latest
Signed-off-by: Igor Eulalio <igor.eulalio@sysdig.com>
2025-03-19 14:14:21 +01:00
Igor Eulalio 06156a4c23 feat: bump chart version and add changelog.md
Signed-off-by: Igor Eulalio <igor.eulalio@sysdig.com>
2025-03-19 14:14:21 +01:00
Igor Eulalio 61bd0f0fc5 feat: add falco-talon as a falco subchart
Signed-off-by: Igor Eulalio <igor.eulalio@sysdig.com>
2025-03-19 14:14:21 +01:00
dependabot[bot] a759414abf chore(deps): Bump docker/login-action from 3.3.0 to 3.4.0
Bumps [docker/login-action](https://github.com/docker/login-action) from 3.3.0 to 3.4.0.
- [Release notes](https://github.com/docker/login-action/releases)
- [Commits](9780b0c442...74a5d14239)

---
updated-dependencies:
- dependency-name: docker/login-action
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-03-17 08:34:06 +01:00
Leonardo Grasso 5b033a204a chore: remove test settings for deprecated charts
Signed-off-by: Leonardo Grasso <me@leonardograsso.com>
2025-03-14 18:07:53 +01:00
Leonardo Grasso 1442228ac7 update(event-generator): set chart as deprecated
Signed-off-by: Leonardo Grasso <me@leonardograsso.com>
2025-03-14 18:07:53 +01:00
Leonardo Grasso 0dbc58494c fix(falco): ci testing
Signed-off-by: Leonardo Grasso <me@leonardograsso.com>
2025-03-14 18:07:53 +01:00
Leonardo Grasso bc78f46298 chore(falco-exporter): bump to latest version
Signed-off-by: Leonardo Grasso <me@leonardograsso.com>
2025-03-14 18:07:53 +01:00
Leonardo Grasso 894f7f7968 update(falco): falco-exporter deprecation
Signed-off-by: Leonardo Grasso <me@leonardograsso.com>
2025-03-14 18:07:53 +01:00
Leonardo Grasso c3123e77ac chore: falco-exporter deprecation
Signed-off-by: Leonardo Grasso <me@leonardograsso.com>
2025-03-14 18:07:53 +01:00
Leonardo Grasso 06bd1848c8 docs(falco-exporter): update changelog
Signed-off-by: Leonardo Grasso <me@leonardograsso.com>
2025-03-14 18:07:53 +01:00
Leonardo Grasso 3ac90fe8c6 update(falco-exporter): add deprecation notice
Signed-off-by: Leonardo Grasso <me@leonardograsso.com>

update(falco-exporter): bump version for deprecation

Signed-off-by: Leonardo Grasso <me@leonardograsso.com>

docs(falco-exporter):

Signed-off-by: Leonardo Grasso <me@leonardograsso.com>
2025-03-14 18:07:53 +01:00
Aldo Lacuku 168b69d3f1 chore(tests): bump go version to 1.23.0
Furthermore this commit updates other go modules.

Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2025-03-14 17:43:53 +01:00
Leonardo Grasso 5e50321e52 chore(falco): bump version to 4.21.0
Signed-off-by: Leonardo Grasso <me@leonardograsso.com>
2025-03-14 10:23:50 +01:00
Ignacio Íñigo 4612281076 new(falco): adding imagePullSecrets at the service account level
Signed-off-by: Ignacio Íñigo <megalucio@users.noreply.github.com>
2025-03-14 10:23:50 +01:00
dependabot[bot] 6b132bbcae chore(deps): Bump sigstore/cosign-installer from 3.8.0 to 3.8.1
Bumps [sigstore/cosign-installer](https://github.com/sigstore/cosign-installer) from 3.8.0 to 3.8.1.
- [Release notes](https://github.com/sigstore/cosign-installer/releases)
- [Commits](c56c2d3e59...d7d6bc7722)

---
updated-dependencies:
- dependency-name: sigstore/cosign-installer
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-02-21 08:35:53 +01:00
dependabot[bot] c6140ee5b2 chore(deps): Bump azure/setup-helm from 4.2.0 to 4.3.0
Bumps [azure/setup-helm](https://github.com/azure/setup-helm) from 4.2.0 to 4.3.0.
- [Release notes](https://github.com/azure/setup-helm/releases)
- [Changelog](https://github.com/Azure/setup-helm/blob/main/CHANGELOG.md)
- [Commits](fe7b79cd5e...b9e51907a0)

---
updated-dependencies:
- dependency-name: azure/setup-helm
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-02-19 09:26:43 +01:00
Aldo Lacuku f7f219ae14 chore(falco): bump chart version to 4.20.1
Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2025-02-14 12:44:16 +01:00
Aldo Lacuku 0f539f3336 new(falco): unit tests for container engines socket paths
Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2025-02-14 12:44:16 +01:00
Aldo Lacuku 9cd296cd3e fix(falco): correctly mount the volumes based on socket path
This commit fixes a bug that had the socket paths hardcoded in the volumeMounts.

Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2025-02-14 12:44:16 +01:00
Alfonso Cobo Canela 8bbd18ea0c Update falcosidekick-loki-dashboard.json
get logs from 24h ago instead of 4d ago

Signed-off-by: Alfonso Cobo Canela <165585176+cobcan@users.noreply.github.com>
2025-02-13 12:28:10 +01:00
Alfonso Cobo Canela 19ecc8fe26 feat(falcosidekick): changelog updated
Signed-off-by: Alfonso Cobo Canela <165585176+cobcan@users.noreply.github.com>
2025-02-13 12:28:10 +01:00
Alfonso Cobo Canela 6acbc6c1a9 feat(falcosidekick): bumped chart version
Signed-off-by: Alfonso Cobo Canela <165585176+cobcan@users.noreply.github.com>
2025-02-13 12:28:10 +01:00
Alfonso Cobo Canela 659637942d Update falcosidekick-loki-dashboard.json
New update on the falcosidekick dashboard for Loki; added some filtering options

Signed-off-by: Alfonso Cobo Canela <165585176+cobcan@users.noreply.github.com>
2025-02-13 12:28:10 +01:00
Thomas Labarussias 8f45a9bb4a add customtags
Signed-off-by: Thomas Labarussias <issif+github@gadz.org>
2025-02-12 11:10:04 +01:00
Thomas Labarussias 95b273be5b fix missing values in the readme
Signed-off-by: Thomas Labarussias <issif+github@gadz.org>
2025-02-10 11:39:51 +01:00
Thomas Labarussias 4f6e4312a9 bump up talon version to 0.3.0 + fix missing usage of imagePullSecrets
Co-authored-by: ddrp <darodriguez@gmv.com>
Signed-off-by: Thomas Labarussias <issif+github@gadz.org>
2025-02-10 11:24:52 +01:00
Alex of Cyberia 96209d4898 feat(helm): add Azure Workload Identity support for Falcosidekick
Signed-off-by: Alex of Cyberia <cy83r14n@proton.me>
2025-02-07 11:20:38 +01:00
Aldo Lacuku 966f414577 chore(falco): bump falcoctl to 0.11.0
Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2025-02-06 10:00:26 +01:00
Thomas Labarussias 06766a82f8 upgrade falcosidekick to v2.31.1 (fix)
Signed-off-by: Thomas Labarussias <issif+github@gadz.org>
2025-02-05 14:59:21 +01:00
dependabot[bot] 6ccf1a8b7e chore(deps): Bump lycheeverse/lychee-action from 2.2.0 to 2.3.0
Bumps [lycheeverse/lychee-action](https://github.com/lycheeverse/lychee-action) from 2.2.0 to 2.3.0.
- [Release notes](https://github.com/lycheeverse/lychee-action/releases)
- [Commits](f796c8b7d4...f613c4a64e)

---
updated-dependencies:
- dependency-name: lycheeverse/lychee-action
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-02-05 08:53:18 +01:00
dependabot[bot] 06e6e42d4b chore(deps): Bump sigstore/cosign-installer from 3.7.0 to 3.8.0
Bumps [sigstore/cosign-installer](https://github.com/sigstore/cosign-installer) from 3.7.0 to 3.8.0.
- [Release notes](https://github.com/sigstore/cosign-installer/releases)
- [Commits](dc72c7d5c4...c56c2d3e59)

---
updated-dependencies:
- dependency-name: sigstore/cosign-installer
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-02-05 08:52:19 +01:00
Thomas Labarussias c872eb7463 upgrade falcosidekick to v2.31.1
Signed-off-by: Thomas Labarussias <issif+github@gadz.org>
2025-02-04 13:48:13 +01:00
Thomas Labarussias 00e51da381 upgrade falcosidekick to v2.31.0
Signed-off-by: Thomas Labarussias <issif+github@gadz.org>
2025-02-03 18:48:09 +01:00
dependabot[bot] 77021a105a chore(deps): Bump actions/setup-python from 5.3.0 to 5.4.0
Bumps [actions/setup-python](https://github.com/actions/setup-python) from 5.3.0 to 5.4.0.
- [Release notes](https://github.com/actions/setup-python/releases)
- [Commits](0b93645e9f...42375524e2)

---
updated-dependencies:
- dependency-name: actions/setup-python
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-01-29 07:34:36 +01:00
Aldo Lacuku 986a7ad988 fix(falco): bump version to 0.40.0
Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2025-01-28 13:15:33 +01:00
Aldo Lacuku 8ff6323cbe chore(falco): add changelogs
Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2025-01-28 11:47:34 +01:00
Aldo Lacuku 0b78768f83 update(falco/docs): update docs reflecting latest changes in values.yaml
Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2025-01-28 11:47:34 +01:00
Aldo Lacuku 316932811c chore(falco): bump chart and falco version
chart version: 4.18.0
falco version: 0.40.0

Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2025-01-28 11:47:34 +01:00
Aldo Lacuku f3fbfb2594 update(falco): use new falco images
More info at: https://github.com/falcosecurity/falco/issues/3165

Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2025-01-28 11:47:34 +01:00
Aldo Lacuku a6c5c7b686 new(falco/tests): add unit tests for the new configuration keys
Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2025-01-28 11:47:34 +01:00
Aldo Lacuku cf7545718c update(cri-flag): remove deprecated cli flag
--cri is removed and uses container_engines to configure collectors.

Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2025-01-28 11:47:34 +01:00
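[Editor's note on the commit above — a minimal sketch of the `container_engines` key in falco.yaml that replaces the removed `--cri` flag; the socket path and engine selection are illustrative, not the chart's defaults.]

```yaml
container_engines:
  docker:
    enabled: true
  cri:
    enabled: true
    sockets: ["/run/containerd/containerd.sock"]  # illustrative socket path
  podman:
    enabled: false
```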
Aldo Lacuku f0e7921270 chore(falco/values.yaml): remove outdated comment
Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2025-01-28 11:47:34 +01:00
dependabot[bot] 16e88288c9 chore(deps): Bump actions/setup-go from 5.2.0 to 5.3.0
Bumps [actions/setup-go](https://github.com/actions/setup-go) from 5.2.0 to 5.3.0.
- [Release notes](https://github.com/actions/setup-go/releases)
- [Commits](3041bf56c9...f111f3307d)

---
updated-dependencies:
- dependency-name: actions/setup-go
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-01-22 08:43:02 +01:00
dependabot[bot] efbd212da2 chore(deps): Bump golang.org/x/net from 0.23.0 to 0.33.0
Bumps [golang.org/x/net](https://github.com/golang/net) from 0.23.0 to 0.33.0.
- [Commits](https://github.com/golang/net/compare/v0.23.0...v0.33.0)

---
updated-dependencies:
- dependency-name: golang.org/x/net
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-01-21 09:12:53 +01:00
dependabot[bot] 8103605112 chore(deps): Bump helm/chart-testing-action from 2.6.1 to 2.7.0
Bumps [helm/chart-testing-action](https://github.com/helm/chart-testing-action) from 2.6.1 to 2.7.0.
- [Release notes](https://github.com/helm/chart-testing-action/releases)
- [Commits](e6669bcd63...0d28d3144d)

---
updated-dependencies:
- dependency-name: helm/chart-testing-action
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-01-21 09:09:53 +01:00
dependabot[bot] 6b8234fd3d chore(deps): Bump helm/chart-releaser-action from 1.6.0 to 1.7.0
Bumps [helm/chart-releaser-action](https://github.com/helm/chart-releaser-action) from 1.6.0 to 1.7.0.
- [Release notes](https://github.com/helm/chart-releaser-action/releases)
- [Commits](a917fd15b2...cae68fefc6)

---
updated-dependencies:
- dependency-name: helm/chart-releaser-action
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-01-21 09:08:53 +01:00
Sylvester Damgaard b5118f9cab Update readme for v4.17.2
Signed-off-by: Sylvester Damgaard <sylvester@laravel.com>
2025-01-20 14:18:48 +01:00
Sylvester Damgaard fe8df69d7d Update chart version to v4.17.2
Signed-off-by: Sylvester Damgaard <sylvester@laravel.com>
2025-01-20 14:18:48 +01:00
Sylvester Damgaard 83686cbb42 Add ports definition in falco container spec
Update changelog for v4.17.2

Signed-off-by: Sylvester Damgaard <sylvester@laravel.com>
2025-01-20 14:18:48 +01:00
Leonardo Grasso 989a399116 update(charts/falco): bump to 4.17.1
Signed-off-by: Leonardo Grasso <me@leonardograsso.com>
2025-01-17 14:59:33 +01:00
Leonardo Grasso affbecd2bf docs(charts/falco): update driver documentation and fix links
Signed-off-by: Leonardo Grasso <me@leonardograsso.com>
2025-01-17 14:59:33 +01:00
Leonardo Grasso c0d1ba51ab docs(charts/falco/CHANGELOG.md): fix broken URL
Signed-off-by: Leonardo Grasso <me@leonardograsso.com>
2025-01-17 14:59:33 +01:00
Leonardo Grasso c617abfbc9 docs(charts/falco/values.yaml): fix broken URLs
Signed-off-by: Leonardo Grasso <me@leonardograsso.com>
2025-01-17 14:59:33 +01:00
Tiago Martins bd57711e7c fix(falcosidekick): Move Prometheus scrape annotation to default values
Signed-off-by: Tiago Martins <tiago.martins@hotjar.com>
2025-01-13 13:11:39 +01:00
Frank 2a68bfe1a2 chore(github): Generate release notes
Signed-off-by: Frank <639906+syphernl@users.noreply.github.com>
2025-01-07 09:51:34 +01:00
Carlos Tadeu Panato Junior b4e6411fca remove non-existent labels
Signed-off-by: cpanato <ctadeu@gmail.com>
2024-12-24 14:36:18 +01:00
dependabot[bot] bc297e36af chore(deps): Bump helm/kind-action from 1.11.0 to 1.12.0
Bumps [helm/kind-action](https://github.com/helm/kind-action) from 1.11.0 to 1.12.0.
- [Release notes](https://github.com/helm/kind-action/releases)
- [Commits](ae94020eaf...a1b0e39133)

---
updated-dependencies:
- dependency-name: helm/kind-action
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-12-24 10:45:18 +01:00
dependabot[bot] 0bac3d3884 chore(deps): Bump lycheeverse/lychee-action from 2.1.0 to 2.2.0
Bumps [lycheeverse/lychee-action](https://github.com/lycheeverse/lychee-action) from 2.1.0 to 2.2.0.
- [Release notes](https://github.com/lycheeverse/lychee-action/releases)
- [Commits](f81112d0d2...f796c8b7d4)

---
updated-dependencies:
- dependency-name: lycheeverse/lychee-action
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-12-20 10:43:36 +01:00
Frédéric Collonval e9784ecb65 Update charts/falcosidekick/CHANGELOG.md
Co-authored-by: Thomas Labarussias <issif+github@gadz.org>
Signed-off-by: Frédéric Collonval <fcollonval@gmail.com>
2024-12-19 15:34:49 +01:00
Frédéric Collonval b8cc0959ea Fix metrics nomenclature
Add changelog entry
Bump Chart version

Signed-off-by: Frédéric Collonval <fcollonval@gmail.com>
2024-12-19 15:34:49 +01:00
Leonardo Grasso 013117730a docs(.github): correct area labels
Signed-off-by: Leonardo Grasso <me@leonardograsso.com>
2024-12-19 10:24:48 +01:00
Thomas Labarussias 7dfa1e0929 add a grafana dashboard for the falco talon metrics
Signed-off-by: Thomas Labarussias <issif+github@gadz.org>
2024-12-18 09:23:42 +01:00
Thomas Labarussias b8dbf1f0be add a grafana dashboard for the falcosidekick metrics
Signed-off-by: Thomas Labarussias <issif+github@gadz.org>
2024-12-17 18:11:39 +01:00
dependabot[bot] 4f70f39a26 chore(deps): Bump helm/kind-action from 1.10.0 to 1.11.0
Bumps [helm/kind-action](https://github.com/helm/kind-action) from 1.10.0 to 1.11.0.
- [Release notes](https://github.com/helm/kind-action/releases)
- [Commits](0025e74a8c...ae94020eaf)

---
updated-dependencies:
- dependency-name: helm/kind-action
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-12-17 11:15:39 +01:00
Alfonso Cobo Canela f3a8b2e27a changes suggested
Signed-off-by: Alfonso Cobo Canela <165585176+cobcan@users.noreply.github.com>
2024-12-16 11:23:33 +01:00
Alfonso Cobo Canela 355fbcf1a9 change default value
Co-authored-by: Alfonso Cobo <alfonso.cobo@cttexpress.com>
Signed-off-by: Alfonso Cobo <alfonso.cobo@cttexpress.com>
2024-12-16 11:23:33 +01:00
Alfonso Cobo Canela 29abc02c2c fix indentation
Co-authored-by: Alfonso Cobo <alfonso.cobo@cttexpress.com>
Signed-off-by: Alfonso Cobo <alfonso.cobo@cttexpress.com>
2024-12-16 11:23:33 +01:00
Alfonso Cobo Canela e7ab64bd70 feat: added dashboard and template
Co-authored-by: Alfonso Cobo <alfonso.cobo@cttexpress.com>
Signed-off-by: Alfonso Cobo <alfonso.cobo@cttexpress.com>
2024-12-16 11:23:33 +01:00
Leonardo Grasso 5e50b52572 docs(falco): update changelog and readme
Signed-off-by: Leonardo Grasso <me@leonardograsso.com>
2024-12-12 11:53:11 +01:00
Leonardo Grasso d4e9b6f9d3 update(falco): bump chart version
Signed-off-by: Leonardo Grasso <me@leonardograsso.com>
2024-12-12 11:53:11 +01:00
Leonardo Grasso 2eeffce731 update(falco): bump k8saudit version to 0.11 in values-k8saudit.yaml
Signed-off-by: Leonardo Grasso <me@leonardograsso.com>
2024-12-12 11:53:11 +01:00
dependabot[bot] e457572a3d chore(deps): Bump golang.org/x/crypto from 0.21.0 to 0.31.0
Bumps [golang.org/x/crypto](https://github.com/golang/crypto) from 0.21.0 to 0.31.0.
- [Commits](https://github.com/golang/crypto/compare/v0.21.0...v0.31.0)

---
updated-dependencies:
- dependency-name: golang.org/x/crypto
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-12-12 10:48:12 +01:00
dependabot[bot] bbc8397a61 chore(deps): Bump actions/setup-go from 5.1.0 to 5.2.0
Bumps [actions/setup-go](https://github.com/actions/setup-go) from 5.1.0 to 5.2.0.
- [Release notes](https://github.com/actions/setup-go/releases)
- [Commits](41dfa10bad...3041bf56c9)

---
updated-dependencies:
- dependency-name: actions/setup-go
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-12-12 09:15:15 +01:00
NoOverflow fb9d2db88f chore(falco): bump chart version in README.md
Signed-off-by: NoOverflow <54811870+NoOverflow@users.noreply.github.com>
2024-12-12 07:20:11 +01:00
NoOverflow f10d0a48f5 chore(falco): bump chart version and update changelog
Signed-off-by: NoOverflow <54811870+NoOverflow@users.noreply.github.com>
2024-12-12 07:20:11 +01:00
NoOverflow 186f916634 fix(falco): set dnsPolicy to ClusterFirstWithHostNet when gvisor driver is enabled
Signed-off-by: NoOverflow <54811870+NoOverflow@users.noreply.github.com>
2024-12-12 07:20:11 +01:00
Aldo Lacuku 15f97d55f6 chore(falco): bump chart version and update docs
Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2024-12-10 13:27:02 +01:00
Aldo Lacuku d1a1384ef7 new(falco/tests): add unit tests for serviceMonitor label selector
Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2024-12-10 13:27:02 +01:00
Aldo Lacuku 227325789b fix(falco/serviceMonitor): set service label selector
Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2024-12-10 13:27:02 +01:00
Thomas Labarussias d952b7b1bd update talon version to v0.2.1 for bug fixes
Signed-off-by: Thomas Labarussias <issif+github@gadz.org>
2024-12-10 09:29:00 +01:00
Thomas Labarussias 3d0e6ffdf3 fix value conversion errors
Signed-off-by: Thomas Labarussias <issif+github@gadz.org>
2024-12-03 21:11:28 +01:00
Thomas Labarussias 70d76b08b4 bump falcosidekick dependency to v0.9.* to match with future versions
Signed-off-by: Thomas Labarussias <issif+github@gadz.org>
2024-12-02 11:32:20 +01:00
Thomas Labarussias b4b4ae091c upgrade to falcosidekick 2.30.0
Signed-off-by: Thomas Labarussias <issif+github@gadz.org>
2024-11-28 09:37:57 +01:00
Igor Eulalio 56d5b2822a make docs
Signed-off-by: Igor Eulalio <igor.eulalio@sysdig.com>
2024-11-27 19:53:54 +01:00
Igor Eulalio 4c8848de9b fix: point to right docker.io repository
Signed-off-by: Igor Eulalio <igor.eulalio@sysdig.com>
2024-11-27 19:53:54 +01:00
Igor Eulalio 94b9db3ab0 fix lint
Signed-off-by: Igor Eulalio <igor.eulalio@sysdig.com>
2024-11-27 19:53:54 +01:00
Igor Eulalio 5fa1fcc710 rebase origin/master
Signed-off-by: Igor Eulalio <igor.eulalio@sysdig.com>
2024-11-27 19:53:54 +01:00
Igor Eulalio 4a1350f9d0 feat(falco-talon): Configure Talon pod to not rollout on configmap changes, allow user to input rules.yaml directly, configure Talon to rollout on secret change, bump appVersion v0.2.0
Signed-off-by: Igor Eulalio <igor.eulalio@sysdig.com>

feat: trigger rollout based on secret change

Signed-off-by: Igor Eulalio <igor.eulalio@sysdig.com>

feat: remove rules_override.yaml file, add field so users can specify custom rules directly via values

Signed-off-by: Igor Eulalio <igor.eulalio@sysdig.com>

chore: bump chart version, update CHANGELOG.md and make docs

Signed-off-by: Igor Eulalio <igor.eulalio@sysdig.com>

feat: allow users to specify custom service accounts for deployment

Signed-off-by: Igor Eulalio <igor.eulalio@sysdig.com>

chore: modify changelog.md

Signed-off-by: Igor Eulalio <igor.eulalio@sysdig.com>

chore(deps): Bump lycheeverse/lychee-action from 2.0.2 to 2.1.0

Bumps [lycheeverse/lychee-action](https://github.com/lycheeverse/lychee-action) from 2.0.2 to 2.1.0.
- [Release notes](https://github.com/lycheeverse/lychee-action/releases)
- [Commits](7cd0af4c74...f81112d0d2)

---
updated-dependencies:
- dependency-name: lycheeverse/lychee-action
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

feat: remove helm-generated labels and timestamp so that pod isn't recycled with a new update

Signed-off-by: Igor Eulalio <igor.eulalio@sysdig.com>

change the key for the rulesfiles range

Signed-off-by: Thomas Labarussias <issif+github@gadz.org>

chore(falco/k8smeta): bump plugin version

Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>

chore(falco/test): update unit tests to reflect changes in k8smeta tag

Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>

chore(falco/k8smeta): bump chart version

Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>

fix(falco/dashboard): make pod variable independent of triggered rules

CPU and memory are now visible for each pod, even when no rules have been triggered for
that falco instance.

Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>

chore(falco): bump chart version

Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>

chore(falco): apply suggestions

Co-authored-by: Thomas Labarussias <issif+github@gadz.org>
Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>

fix(falco/readme): use rules_files instead of deprecated rules_file in config snippet

Using rules_file causes collision with rules_files and falco does not start

```
Tue Nov 12 14:23:17 2024: Using deprecated config key 'rules_file' (singular form). Please use new 'rules_files' config key (plural form).
Error: Error reading config file (/etc/falco/falco.yaml): both 'rules_files' and 'rules_file' keys set
```

Signed-off-by: Robin Landström <robinlandstrom@users.noreply.github.com>

chore(falco): bump chart version

Signed-off-by: Robin Landström <robinlandstrom@users.noreply.github.com>

update(falco): bump falco version to 0.39.2 and falcoctl to 0.10.1

Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>

chore: bump chart version

Signed-off-by: Igor Eulalio <igor.eulalio@sysdig.com>

chore: update docs

Signed-off-by: Igor Eulalio <igor.eulalio@sysdig.com>

chore: bump appVersion to match talon version

Signed-off-by: Igor Eulalio <igor.eulalio@sysdig.com>
2024-11-27 19:53:54 +01:00
Thomas Labarussias 9a7685dd0c fix: update the url for the docs about the concurrent queue classes
Signed-off-by: Thomas Labarussias <issif+github@gadz.org>
2024-11-26 17:38:47 +01:00
Aldo Lacuku b273725559 update(falco): bump falco version to 0.39.2 and falcoctl to 0.10.1
Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2024-11-22 09:58:26 +01:00
Robin Landström e980a2c5bb chore(falco): bump chart version
Signed-off-by: Robin Landström <robinlandstrom@users.noreply.github.com>
2024-11-18 15:13:11 +01:00
Robin Landström 87478dcace fix(falco/readme): use rules_files instead of deprecated rules_file in config snippet
Using rules_file causes collision with rules_files and falco does not start

```
Tue Nov 12 14:23:17 2024: Using deprecated config key 'rules_file' (singular form). Please use new 'rules_files' config key (plural form).
Error: Error reading config file (/etc/falco/falco.yaml): both 'rules_files' and 'rules_file' keys set
```

Signed-off-by: Robin Landström <robinlandstrom@users.noreply.github.com>
2024-11-18 15:13:11 +01:00
Aldo Lacuku 64cc7959a4 chore(falco): apply suggestions
Co-authored-by: Thomas Labarussias <issif+github@gadz.org>
Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2024-11-11 16:34:42 +01:00
Aldo Lacuku a6a4cfbb41 chore(falco): bump chart version
Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2024-11-11 16:34:42 +01:00
Aldo Lacuku b72b40c18a fix(falco/dashboard): make pod variable independent of triggered rules
CPU and memory are now visible for each pod, even when no rules have been triggered for
that falco instance.

Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2024-11-11 16:34:42 +01:00
Aldo Lacuku 9f5ea39b63 chore(falco/k8smeta): bump chart version
Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2024-11-08 17:48:30 +01:00
Aldo Lacuku cc71346aee chore(falco/test): update unit tests to reflect changes in k8smeta tag
Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2024-11-08 17:48:30 +01:00
Aldo Lacuku 4183124d1f chore(falco/k8smeta): bump plugin version
Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2024-11-08 17:48:30 +01:00
Thomas Labarussias ba7220bb9a change the key for the rulesfiles range
Signed-off-by: Thomas Labarussias <issif+github@gadz.org>
2024-11-08 12:09:28 +01:00
dependabot[bot] 63edaffa81 chore(deps): Bump lycheeverse/lychee-action from 2.0.2 to 2.1.0
Bumps [lycheeverse/lychee-action](https://github.com/lycheeverse/lychee-action) from 2.0.2 to 2.1.0.
- [Release notes](https://github.com/lycheeverse/lychee-action/releases)
- [Commits](7cd0af4c74...f81112d0d2)

---
updated-dependencies:
- dependency-name: lycheeverse/lychee-action
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-11-08 07:19:14 +01:00
Aldo Lacuku 3fde64336c chore(falco): bump charts version and update docs
Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2024-11-06 12:14:16 +01:00
Aldo Lacuku 3f00ed2095 new(falco/collectors): expose new config options for k8smeta plugins
This commit exposes two new config entries for k8smeta plugins:
  * verbosity;
  * hostProc.

Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2024-11-06 12:14:16 +01:00
Leonardo Grasso 6018dfb241 docs(charts/falco): update README.md
Signed-off-by: Leonardo Grasso <me@leonardograsso.com>
2024-10-30 11:34:36 +01:00
doublez13 a9c342383e Update the changelog to document unconfined apparmor
Signed-off-by: doublez13 <zakraise@eng.utah.edu>
2024-10-30 11:34:36 +01:00
doublez13 17df3b9473 Falco: Bump chart to 4.12.0
Signed-off-by: doublez13 <zakraise@eng.utah.edu>
2024-10-30 11:34:36 +01:00
doublez13 1310e60b75 RFC falco: when leastPrivileged is true, set the apparmor profile to unconfined
It appears that when setting leastPrivileged: true, apparmor does not allow falco to ptrace, which appears to leave the container fields null. If leastPrivileged: true, set the apparmor profile to unconfined.

Oct 24 09:52:57 hostname kernel: audit: type=1400 audit(1729785177.339:404624): apparmor="DENIED" operation="ptrace" profile="cri-containerd.apparmor.d" pid=2389102 comm="falco" requested_mask="read" denied_mask="read" peer="unconfined"


Signed-off-by: doublez13 <zakraise@eng.utah.edu>
2024-10-30 11:34:36 +01:00
José Carlos Chávez a3e4db32f8 fix: only prints env key if there are env values to be passed.
Signed-off-by: José Carlos Chávez <josecarlos.chavez@okta.com>
2024-10-29 09:51:32 +01:00
dependabot[bot] 22b3f58f5e chore(deps): Bump actions/setup-go from 5.0.2 to 5.1.0
Bumps [actions/setup-go](https://github.com/actions/setup-go) from 5.0.2 to 5.1.0.
- [Release notes](https://github.com/actions/setup-go/releases)
- [Commits](0a12ed9d6a...41dfa10bad)

---
updated-dependencies:
- dependency-name: actions/setup-go
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-10-25 09:55:15 +02:00
dependabot[bot] 6f1c964128 chore(deps): Bump actions/setup-python from 5.2.0 to 5.3.0
Bumps [actions/setup-python](https://github.com/actions/setup-python) from 5.2.0 to 5.3.0.
- [Release notes](https://github.com/actions/setup-python/releases)
- [Commits](f677139bbe...0b93645e9f)

---
updated-dependencies:
- dependency-name: actions/setup-python
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-10-25 09:54:16 +02:00
dependabot[bot] 5c8ef86e88 chore(deps): Bump actions/checkout from 4.2.1 to 4.2.2
Bumps [actions/checkout](https://github.com/actions/checkout) from 4.2.1 to 4.2.2.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](eef61447b9...11bd71901b)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-10-24 09:36:13 +02:00
Aldo Lacuku 6ff5758424 chore(readme): add falco-talon chart link to readme
Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2024-10-17 14:58:43 +02:00
dependabot[bot] f717e3bb34 chore(deps): Bump lycheeverse/lychee-action from 2.0.1 to 2.0.2
Bumps [lycheeverse/lychee-action](https://github.com/lycheeverse/lychee-action) from 2.0.1 to 2.0.2.
- [Release notes](https://github.com/lycheeverse/lychee-action/releases)
- [Commits](2bb232618b...7cd0af4c74)

---
updated-dependencies:
- dependency-name: lycheeverse/lychee-action
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-10-15 09:06:36 +02:00
Aldo Lacuku 1f8db46be0 update(falco): add details for the scap drops buffer charts with the dir and drops labels
Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2024-10-14 14:34:31 +02:00
Thomas Labarussias 78a720f9ea falco-talon: remove all refs to the previous org
Signed-off-by: Thomas Labarussias <issif+github@gadz.org>
2024-10-14 10:24:31 +02:00
dependabot[bot] 40d27a7cec chore(deps): Bump lycheeverse/lychee-action from 2.0.0 to 2.0.1
Bumps [lycheeverse/lychee-action](https://github.com/lycheeverse/lychee-action) from 2.0.0 to 2.0.1.
- [Release notes](https://github.com/lycheeverse/lychee-action/releases)
- [Commits](7da8ec1fc4...2bb232618b)

---
updated-dependencies:
- dependency-name: lycheeverse/lychee-action
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-10-14 08:37:31 +02:00
Aldo Lacuku f49025e5ef docs(falco): update readme and bump the chart version
Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2024-10-10 16:34:18 +02:00
Aldo Lacuku 146aa19cc3 test(falco): add unit tests for the new grafana dashboard
Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2024-10-10 16:34:18 +02:00
Aldo Lacuku 2fcaa862cd new(falco): add grafana dashboard for falco
A default grafana dashboard can now be configured using the
chart.

Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2024-10-10 16:34:18 +02:00
Thomas Labarussias 684145440e add falco-talon chart
Signed-off-by: Thomas Labarussias <issif+github@gadz.org>
2024-10-09 18:09:14 +02:00
Aldo Lacuku e91f988285 update(falco): bump falco version to 0.39.1
Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2024-10-09 12:28:14 +02:00
krkjack 1550223b81 fix(falcosidekick): customConfig mountPath fix for webui redis
Signed-off-by: Krystian Jackowski <krkjackowski@gmail.com>
2024-10-09 11:05:13 +02:00
dependabot[bot] 9483726553 chore(deps): Bump lycheeverse/lychee-action from 1.10.0 to 2.0.0
Bumps [lycheeverse/lychee-action](https://github.com/lycheeverse/lychee-action) from 1.10.0 to 2.0.0.
- [Release notes](https://github.com/lycheeverse/lychee-action/releases)
- [Commits](2b973e86fc...7da8ec1fc4)

---
updated-dependencies:
- dependency-name: lycheeverse/lychee-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-10-09 06:53:13 +02:00
Leonardo Di Giovanna 5f318e167f fix(release.md): fix release process workflows explanation
Signed-off-by: Leonardo Di Giovanna <leonardodigiovanna1@gmail.com>
2024-10-08 17:51:09 +02:00
afreyermuth98 5bb945686b 🔧 Add possibility to add annotations to the metrics service
Signed-off-by: afreyermuth98 <anthofreyer@gmail.com>

🔧 Resolving conflicts

Signed-off-by: afreyermuth98 <anthofreyer@gmail.com>

🐛 Fixing CI

Signed-off-by: afreyermuth98 <anthofreyer@gmail.com>

🐛 Missing dot before Values

Signed-off-by: afreyermuth98 <anthofreyer@gmail.com>

🔧 Conflicts

Signed-off-by: afreyermuth98 <anthofreyer@gmail.com>

🐛 Removed extra space

Signed-off-by: afreyermuth98 <anthofreyer@gmail.com>

🔧 Reviews

Signed-off-by: afreyermuth98 <anthofreyer@gmail.com>

🐛 Extra toYaml

Signed-off-by: afreyermuth98 <anthofreyer@gmail.com>

🔧 Working on tests

Signed-off-by: afreyermuth98 <anthofreyer@gmail.com>

🔧 Reviews

Signed-off-by: afreyermuth98 <anthofreyer@gmail.com>

🔧 Annotation test + duplicate chartInfo

Signed-off-by: afreyermuth98 <anthofreyer@gmail.com>

🔧 Ref to options

Signed-off-by: afreyermuth98 <anthofreyer@gmail.com>

🐛 Helm options type

Signed-off-by: afreyermuth98 <anthofreyer@gmail.com>

🔧 Patch

Signed-off-by: afreyermuth98 <anthofreyer@gmail.com>

🐛 Fixed bad conflicts resolution

Signed-off-by: afreyermuth98 <anthofreyer@gmail.com>

⬆️ Bump chart version

Signed-off-by: afreyermuth98 <anthofreyer@gmail.com>
2024-10-08 15:29:08 +02:00
dependabot[bot] 8347de739a chore(deps): Bump actions/checkout from 4.1.7 to 4.2.1
Bumps [actions/checkout](https://github.com/actions/checkout) from 4.1.7 to 4.2.1.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](692973e3d9...eef61447b9)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-10-08 09:37:07 +02:00
dependabot[bot] 7f2731bb73 chore(deps): Bump sigstore/cosign-installer from 3.6.0 to 3.7.0
Bumps [sigstore/cosign-installer](https://github.com/sigstore/cosign-installer) from 3.6.0 to 3.7.0.
- [Release notes](https://github.com/sigstore/cosign-installer/releases)
- [Commits](4959ce089c...dc72c7d5c4)

---
updated-dependencies:
- dependency-name: sigstore/cosign-installer
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-10-07 11:06:04 +02:00
krkjack f3c3fd8ba7 Update CHANGELOG.md
Signed-off-by: krkjack <56771214+krkjack@users.noreply.github.com>
2024-10-02 10:03:39 +02:00
krkjack b7d2ca20d0 Update Chart.yaml
Signed-off-by: krkjack <56771214+krkjack@users.noreply.github.com>
2024-10-02 10:03:39 +02:00
krkjack 04f146ab02 Update CHANGELOG.md
Signed-off-by: krkjack <56771214+krkjack@users.noreply.github.com>
2024-10-02 10:03:39 +02:00
krkjack 980234e6da Fix customConfig template in configmap-ui.yaml
Signed-off-by: krkjack <56771214+krkjack@users.noreply.github.com>
2024-10-02 10:03:39 +02:00
Aldo Lacuku d3148cac82 chore(falco): bump falco to 0.39.0
Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2024-10-01 11:59:35 +02:00
Aldo Lacuku 401b9c2336 fix(falco): update broken link pointing to Falco docs
After the changes made by the following PR to the Falco docs,
https://github.com/falcosecurity/falco-website/pull/1362, this
commit updates a broken link.

Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2024-10-01 11:59:35 +02:00
Aldo Lacuku 47541d456b update(falco): mount proc filesystem for plugins
The following PR in the libs https://github.com/falcosecurity/libs/pull/1969
introduces a new platform for plugins that requires access to the
proc filesystem.

Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2024-10-01 11:59:35 +02:00
Aldo Lacuku ed8c5351e6 cleanup(falco): remove deprecated falco configuration
This commit removes the "output" config key that has
been deprecated in falco.

Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2024-10-01 11:59:35 +02:00
Aldo Lacuku 0d870e7556 update(falco): add new configuration entries for Falco
This commit adds new config keys introduced in Falco 0.39.0.
Furthermore, it updates the unit tests for the latest changes
in the values.yaml.

Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2024-10-01 11:59:35 +02:00
Thomas Berreis 27c637d05e fix(falcosidekick): securityContext for webui initContainer
Signed-off-by: Thomas Berreis <thomas@berreis.de>
2024-09-24 16:23:07 +02:00
Thomas Labarussias b81522f94b use the names of the priorities in the prometheus rules
Signed-off-by: Thomas Labarussias <issif+github@gadz.org>
2024-09-23 09:35:00 +02:00
Thomas Labarussias 1fd1c1c8e4 fix wrong values for OTLP_TRACES_PROTOCOL
Signed-off-by: Thomas Labarussias <issif+github@gadz.org>
2024-09-23 09:35:00 +02:00
Thomas Labarussias 15dfc64bfd use redis-cli for the initContainer check + allow to override the redis server settings + allow to use a password for the external redis
Signed-off-by: Thomas Labarussias <issif+github@gadz.org>
2024-09-23 09:35:00 +02:00
Aldo Lacuku 4e87255cef update(falco): support latest changes in falco-driver-loader
The init container, when driver.kind=auto, automatically generates
a new Falco configuration file and selects the appropriate engine
kind based on the environment where Falco is deployed.

With this commit, along with falcoctl PR #630, the Helm charts now
support different driver kinds for Falco instances based on the
specific node they are running on. When driver.kind=auto is set,
each Falco instance dynamically selects the most suitable
driver (e.g., ebpf, kmod, modern_ebpf) for the node.

  +----------------------------------------------------+
  | Kubernetes Cluster                                  |
  |                                                    |
  |  +---------------------+  +---------------------+  |
  |  | Node 1              |  | Node 2              |  |
  |  |                     |  |                     |  |
  |  | Falco (ebpf)        |  | Falco (kmod)        |  |
  |  +---------------------+  +---------------------+  |
  |                                                    |
  |              +---------------------+               |
  |              | Node 3              |               |
  |              |                     |               |
  |              | Falco (modern_ebpf) |               |
  |              +---------------------+               |
  +----------------------------------------------------+
Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2024-09-17 11:26:50 +02:00
Aldo Lacuku 98897b00df fix(falco): correctly mount host filesystems when driver.kind is auto
When falco runs with the kmod/module driver it needs special filesystems
to be mounted from the host, such as /dev and /sys/module/falco.
This commit ensures that we mount them in the falco container.

Note that /sys/module/falco is now mounted as /sys/module
since we do not know which kind of driver will be used. The falco
folder exists under /sys/module only when the kernel module is
loaded, hence it's not possible to use the /sys/module/falco
hostpath when driver.kind is set to auto.

Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2024-09-11 13:34:31 +02:00
Thomas Labarussias 3d3ab261f6 fix the error when the custom CA cert is missing, even if it's the default, see: https://github.com/falcosecurity/falcosidekick/issues/987
Signed-off-by: Thomas Labarussias <issif+github@gadz.org>
2024-09-11 11:19:31 +02:00
dependabot[bot] 4d2da46d13 chore(deps): Bump actions/setup-python from 5.1.1 to 5.2.0
Bumps [actions/setup-python](https://github.com/actions/setup-python) from 5.1.1 to 5.2.0.
- [Release notes](https://github.com/actions/setup-python/releases)
- [Commits](39cd14951b...f677139bbe)

---
updated-dependencies:
- dependency-name: actions/setup-python
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-08-30 09:27:41 +02:00
Cam Smith 46516b090c Bump version and update changelog.
Signed-off-by: Cam Smith <cam@tactiq.io>
2024-08-28 08:25:31 +02:00
Cam Smith 31ba1705c2 Run all actions.
Signed-off-by: Cam Smith <cam@tactiq.io>
2024-08-28 08:25:31 +02:00
Daniel Beilin 961efb68c6 fix(falcosidekick): support custom service type for webui redis in Helm chart
Signed-off-by: Daniel Beilin <daniel.beilin@outlook.com>
2024-08-22 14:43:50 +02:00
Daniel Beilin 9382b35f49 fix(falcosidekick): support custom service type for webui redis in Helm chart
Signed-off-by: Daniel Beilin <daniel.beilin@outlook.com>
2024-08-22 11:25:51 +02:00
Daniel Beilin 9ff842d99a fix(falcosidekick): support custom service type for webui redis in Helm chart
Signed-off-by: Daniel Beilin <daniel.beilin@outlook.com>
2024-08-22 11:25:51 +02:00
Leonardo Grasso 7747f9852d docs(charts/k8s-metacollector): update readme
Signed-off-by: Leonardo Grasso <me@leonardograsso.com>
2024-08-20 17:56:45 +02:00
Nicolas Lamirault b1fea7cdf6 feat(helm): bump chart version
Signed-off-by: Nicolas Lamirault <nicolas.lamirault@gmail.com>
Signed-off-by: Leonardo Grasso <me@leonardograsso.com>
2024-08-20 17:56:45 +02:00
Nicolas Lamirault c052931a84 fix(helm): documentation and chart version
Signed-off-by: Nicolas Lamirault <nicolas.lamirault@gmail.com>
Signed-off-by: Leonardo Grasso <me@leonardograsso.com>
2024-08-20 17:56:45 +02:00
Nicolas Lamirault 66bb77c075 [fix] Helm : Grafana dashboard for k8s metacollector
Signed-off-by: Nicolas Lamirault <nicolas.lamirault@gmail.com>
Signed-off-by: Leonardo Grasso <me@leonardograsso.com>
2024-08-20 17:56:45 +02:00
dependabot[bot] 196cb665b4 chore(deps): Bump sigstore/cosign-installer from 3.5.0 to 3.6.0
Bumps [sigstore/cosign-installer](https://github.com/sigstore/cosign-installer) from 3.5.0 to 3.6.0.
- [Release notes](https://github.com/sigstore/cosign-installer/releases)
- [Commits](59acb6260d...4959ce089c)

---
updated-dependencies:
- dependency-name: sigstore/cosign-installer
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-08-20 14:51:45 +02:00
Luca Guerra a27ba1877a update(falco): update chart to 0.38.2
Signed-off-by: Luca Guerra <luca@guerra.sh>
2024-08-19 15:54:41 +02:00
Aldo Lacuku ba95f4cbf2 fix(falco): use rules_files key in the preset values files
Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2024-08-01 17:15:47 +02:00
Aldo Lacuku d17eebf46c fix(falco/config): use rules_files instead of deprecated key rules_file
Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2024-08-01 16:38:47 +02:00
Aldo Lacuku 4fba8a3d70 chore(falco/tests): update tests to reflect the new k8smeta version
Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2024-07-25 11:12:15 +02:00
Aldo Lacuku fa426c37d4 update(k8smeta): bump k8smeta version to 0.2.0
The new version resolves a bug that prevented the k8smeta
fields from being populated for pods deployed before Falco.

Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2024-07-25 11:12:15 +02:00
Jochem 90cfd82e6b Resolve bug in the PrometheusRule for line in falco exporter
Signed-off-by: Jochem <j.bruijns@fullstaq.com>
2024-07-23 11:26:45 +02:00
dependabot[bot] 0a7da65024 chore(deps): Bump docker/login-action from 3.2.0 to 3.3.0
Bumps [docker/login-action](https://github.com/docker/login-action) from 3.2.0 to 3.3.0.
- [Release notes](https://github.com/docker/login-action/releases)
- [Commits](0d4c9c5ea7...9780b0c442)

---
updated-dependencies:
- dependency-name: docker/login-action
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-07-23 09:24:45 +02:00
Yohan Boyer 467c2a270d fix(falco): mount client-certs-volume only if certs.existingClientSecret is defined
Signed-off-by: Yohan Boyer <25897753+yohboy@users.noreply.github.com>
2024-07-18 17:17:28 +02:00
Thomas Labarussias ad412ee45d bump falcosidekick dependency to v0.8.* to match with future versions
Signed-off-by: Thomas Labarussias <issif+github@gadz.org>
2024-07-18 15:39:28 +02:00
Thomas Labarussias 10161c82c0 Add a condition to create the secrets for the redis only if the webui is deployed
Signed-off-by: Thomas Labarussias <issif+github@gadz.org>
2024-07-17 18:09:44 +02:00
Jochem ebf1ff84b3 Make 'for' configurable for falco exporter prometheus rules
Signed-off-by: Jochem <j.bruijns@fullstaq.com>

Bump chart version and run helm docs

Signed-off-by: Jochem <j.bruijns@fullstaq.com>

Update changelog

Signed-off-by: Jochem <j.bruijns@fullstaq.com>

Update charts/falco-exporter/CHANGELOG.md

Co-authored-by: Thomas Labarussias <issif+github@gadz.org>
Signed-off-by: Jochem <33828672+TheChef23@users.noreply.github.com>
2024-07-11 16:01:47 +02:00
sipr-invivo e1deeb0c82 chore(k8s-metacollector): Add podLabels
Signed-off-by: sipr-invivo <160140834+sipr-invivo@users.noreply.github.com>
2024-07-11 14:37:47 +02:00
Dominique Burnand e8a0387945 Bump falco-sidekick dependency version to include redis UI check
Signed-off-by: Dominique Burnand <dominique.burnand@jegger.ch>
2024-07-11 12:19:47 +02:00
Dominique Burnand 78aa87c107 Fix redis-availability check of the UI init-container in case externalRedis is enabled
Signed-off-by: Dominique Burnand <dominique.burnand@jegger.ch>
2024-07-11 12:00:47 +02:00
dependabot[bot] 1e11d6d5c2 chore(deps): Bump actions/setup-go from 5.0.1 to 5.0.2
Bumps [actions/setup-go](https://github.com/actions/setup-go) from 5.0.1 to 5.0.2.
- [Release notes](https://github.com/actions/setup-go/releases)
- [Commits](cdcb360436...0a12ed9d6a)

---
updated-dependencies:
- dependency-name: actions/setup-go
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-07-11 08:30:47 +02:00
dependabot[bot] cf21212a29 chore(deps): Bump actions/setup-python from 5.1.0 to 5.1.1
Bumps [actions/setup-python](https://github.com/actions/setup-python) from 5.1.0 to 5.1.1.
- [Release notes](https://github.com/actions/setup-python/releases)
- [Commits](82c7e631bb...39cd14951b)

---
updated-dependencies:
- dependency-name: actions/setup-python
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-07-11 08:29:48 +02:00
Thomas Berreis 05dd011762 feat(falcosidekick): allow setting resources, securityContext and image override for the wait-redis initContainer
Signed-off-by: Thomas Berreis <thomas@berreis.de>
2024-07-05 16:32:24 +02:00
Aldo Lacuku bb4fc15ba9 feat(falco): add support for Falco metrics
Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2024-07-03 12:00:18 +02:00
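The metrics support referenced in the commit above maps to Falco's `metrics` configuration block, exposed through the chart's values. A minimal sketch of such a values fragment follows; the key names are assumptions based on Falco's metrics config, so verify them against the chart's values.yaml before use:

```yaml
# Hedged sketch: enabling Falco metrics via chart values.
# Key names are assumptions; check the falco chart's values.yaml.
metrics:
  enabled: true     # turn on periodic metrics snapshots
  interval: 1h      # how often Falco emits a metrics event
  outputRule: true  # emit metrics as a Falco rule-style output
```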
Thomas Labarussias b9d09260cf bump falcosidekick dependency version to v0.8.0, for falcosidekick 2.29.0
Signed-off-by: Thomas Labarussias <issif+github@gadz.org>
2024-07-02 14:51:12 +02:00
Thomas Labarussias f8205957df upgrade falcosidekick chart to falcosidekick 2.29.0 + custom
labels/annotations + add initContainer to check whether Redis is up for falcosidekick-ui

Signed-off-by: Thomas Labarussias <issif+github@gadz.org>
2024-07-02 14:34:13 +02:00
Tom Müller 50dc3b1c0c ordered volume list in falco scc yaml alphabetically
Signed-off-by: Tom Müller <60851960+toamto94@users.noreply.github.com>
2024-07-01 12:21:10 +02:00
Fabian Zimmermann 7527d0f635 use directory mapping instead of a simple containerd socket-file mapping
to allow Falco to reconnect if containerd is restarted on the host

Fixes #632

Signed-off-by: Fabian Zimmermann <dev.faz@gmail.com>
2024-06-21 14:34:20 +02:00
Aldo Lacuku df1606ce6b update(falco): bump falco version to 0.38.1
Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2024-06-19 11:46:08 +02:00
Strigix 040aea0137 Added the controller.labels field for extra deployment/daemonset labeling.
Signed-off-by: Strigix <stefan@vtveld.nl>
2024-06-18 13:11:03 +02:00
Thomas Berreis 3ed680173b chore(falcosidekick): upgrade redis-stack image to 7.2.0-v11
Signed-off-by: Thomas Berreis <thomas@berreis.de>
2024-06-18 12:05:03 +02:00
dependabot[bot] 37a10fb98e chore(deps): Bump actions/checkout from 4.1.6 to 4.1.7
Bumps [actions/checkout](https://github.com/actions/checkout) from 4.1.6 to 4.1.7.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](a5ac7e51b4...692973e3d9)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-06-13 09:25:47 +02:00
Paul Rey d9c3ff9c91 Update CHANGELOG.md
Signed-off-by: Paul Rey <contact@paulrey.io>
2024-06-11 14:16:40 +02:00
Paul Rey 1b9c56ea08 Update Chart.yaml
Signed-off-by: Paul Rey <contact@paulrey.io>
2024-06-11 14:16:40 +02:00
Paul Rey 2ba9c51743 Fixes #686
Signed-off-by: Paul Rey <contact@paulrey.io>
2024-06-11 14:16:40 +02:00
Paul Rey 5b69ab8d84 Update CHANGELOG.md
Signed-off-by: Paul Rey <contact@paulrey.io>
2024-06-10 22:39:36 +02:00
Paul Rey 7ea44ac0aa Bump chart version
Signed-off-by: Paul Rey <contact@paulrey.io>
2024-06-10 22:39:36 +02:00
Paul Rey 1aa6e5d817 Use webui.service.port for WEBUI_URL secret value
Signed-off-by: Paul Rey <contact@paulrey.io>
2024-06-10 22:39:36 +02:00
Loïc Lajeanne 34448915ba bump version number in readme
Signed-off-by: Loïc Lajeanne <loic.lajeanne@orange.com>
2024-06-10 10:30:33 +02:00
Loïc Lajeanne e3ed4d18fe bump falco chart version to 4.4.2
Signed-off-by: Loïc Lajeanne <loic.lajeanne@orange.com>
2024-06-10 10:30:33 +02:00
Loïc Lajeanne b798b44e85 fix wrong check when using an existingClientSecret
Signed-off-by: Loïc Lajeanne <loic.lajeanne@orange.com>
2024-06-10 10:30:33 +02:00
Aldo Lacuku b76280cfa5 update(falco): bump k8s-metacollector dependency version to v0.1.1
Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2024-06-06 15:04:22 +02:00
Aldo Lacuku addf0d3cc3 update(k8s-metacollector): bump application version to 0.1.1
Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2024-06-06 10:38:22 +02:00
Leonardo Grasso c90a63978f docs(charts/falco): update README
Signed-off-by: Leonardo Grasso <me@leonardograsso.com>
2024-05-30 14:46:56 +02:00
Leonardo Grasso c5cf615993 update(charts/falco): bump falcoctl to 0.8.0
Signed-off-by: Leonardo Grasso <me@leonardograsso.com>
2024-05-30 14:46:56 +02:00
Leonardo Grasso 307e59bf4f update(charts/falco): bump appVersion to 0.38.0
Signed-off-by: Leonardo Grasso <me@leonardograsso.com>
2024-05-30 14:46:56 +02:00
Aldo Lacuku fe11d265b1 update(falco): update readme.md
Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2024-05-30 14:46:56 +02:00
Aldo Lacuku c18039eb67 update(falco): bump chart's version
Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2024-05-30 14:46:56 +02:00
Aldo Lacuku f5a6974b12 new(falco): update falco configuration settings in values.yaml
Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2024-05-30 14:46:56 +02:00
Aldo Lacuku 8c6542c342 feat(falco): support automatic driver selection
Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2024-05-30 14:46:56 +02:00
dependabot[bot] 439d14042f chore(deps): Bump docker/login-action from 3.1.0 to 3.2.0
Bumps [docker/login-action](https://github.com/docker/login-action) from 3.1.0 to 3.2.0.
- [Release notes](https://github.com/docker/login-action/releases)
- [Commits](e92390c5fb...0d4c9c5ea7)

---
updated-dependencies:
- dependency-name: docker/login-action
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-05-29 15:10:50 +02:00
Arnaud CHATELAIN 22073b516f bump falcosidekick dependency version to v0.7.19
Signed-off-by: Arnaud CHATELAIN <a.chatelain@actongroup.com>
2024-05-21 20:00:14 +02:00
ArnaudCHT 69efe9fcc7 Update charts/falcosidekick/CHANGELOG.md
Co-authored-by: Thomas Labarussias <issif+github@gadz.org>
Signed-off-by: ArnaudCHT <135008892+ArnaudCHT@users.noreply.github.com>
2024-05-21 19:14:14 +02:00
dependabot[bot] 61e99e2aca CHART: falcosidekick service monitor update, add additional properties and fix Prometheus rules
Signed-off-by: Arnaud CHATELAIN <a.chatelain@actongroup.com>
2024-05-21 19:14:14 +02:00
dependabot[bot] 79d1dfff24 CHART: falcosidekick service monitor update, add additional properties
Signed-off-by: Arnaud CHATELAIN <a.chatelain@actongroup.com>
2024-05-21 19:14:14 +02:00
dependabot[bot] 94073c768d feat: falcosidekick service monitor update, add additional properties
Signed-off-by: Arnaud CHATELAIN <a.chatelain@actongroup.com>
2024-05-21 19:14:14 +02:00
dependabot[bot] 9b52cb7e18 build(deps): bump actions/checkout from 4.1.5 to 4.1.6
Bumps [actions/checkout](https://github.com/actions/checkout) from 4.1.5 to 4.1.6.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](44c2b7a8a4...a5ac7e51b4)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-05-17 07:59:56 +02:00
dependabot[bot] 11e5d98538 build(deps): bump actions/checkout from 4.1.4 to 4.1.5
Bumps [actions/checkout](https://github.com/actions/checkout) from 4.1.4 to 4.1.5.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](0ad4b8fada...44c2b7a8a4)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-05-07 09:02:48 +02:00
dependabot[bot] c52f43857c build(deps): bump actions/setup-go from 5.0.0 to 5.0.1
Bumps [actions/setup-go](https://github.com/actions/setup-go) from 5.0.0 to 5.0.1.
- [Release notes](https://github.com/actions/setup-go/releases)
- [Commits](0c52d547c9...cdcb360436)

---
updated-dependencies:
- dependency-name: actions/setup-go
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-05-03 09:41:01 +02:00
David Calvert 127563906e feat: updated falco-exporter grafana dashboard
Signed-off-by: David Calvert <david@0xdc.me>
2024-04-30 14:13:50 +02:00
dependabot[bot] 619b9f2347 build(deps): bump helm/kind-action from 1.9.0 to 1.10.0
Bumps [helm/kind-action](https://github.com/helm/kind-action) from 1.9.0 to 1.10.0.
- [Release notes](https://github.com/helm/kind-action/releases)
- [Commits](99576bfa6d...0025e74a8c)

---
updated-dependencies:
- dependency-name: helm/kind-action
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-04-30 11:53:50 +02:00
Belazar Mohamed 3799bd9a80 Update prometheusrule.yaml
Prometheus does not accept duplicate rule names, so the chart's rule can't be installed.

Signed-off-by: Belazar Mohamed <50041651+kwop@users.noreply.github.com>

Update Chart.yaml

Bump chart version

Signed-off-by: Belazar Mohamed <50041651+kwop@users.noreply.github.com>

Update CHANGELOG.md

Signed-off-by: Belazar Mohamed <50041651+kwop@users.noreply.github.com>

Fix PrometheusRule duplicate alert name
2024-04-30 10:51:49 +02:00
Arnaud CHATELAIN bab068e34f CHART: falco-exporter service monitor update, add additional properties
Signed-off-by: Arnaud CHATELAIN <a.chatelain@actongroup.com>
2024-04-29 16:20:47 +02:00
dependabot[bot] 90cd218cc0 build(deps): bump lycheeverse/lychee-action from 1.9.3 to 1.10.0
Bumps [lycheeverse/lychee-action](https://github.com/lycheeverse/lychee-action) from 1.9.3 to 1.10.0.
- [Release notes](https://github.com/lycheeverse/lychee-action/releases)
- [Commits](c053181aa0...2b973e86fc)

---
updated-dependencies:
- dependency-name: lycheeverse/lychee-action
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-04-26 09:50:31 +02:00
dependabot[bot] 9dacaf69d2 build(deps): bump actions/checkout from 4.1.3 to 4.1.4
Bumps [actions/checkout](https://github.com/actions/checkout) from 4.1.3 to 4.1.4.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](1d96c772d1...0ad4b8fada)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-04-25 13:59:28 +02:00
dependabot[bot] f47d8c91d3 build(deps): bump actions/checkout from 4.1.2 to 4.1.3
Bumps [actions/checkout](https://github.com/actions/checkout) from 4.1.2 to 4.1.3.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](9bb56186c3...1d96c772d1)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-04-20 07:36:59 +02:00
dependabot[bot] c333878923 build(deps): bump sigstore/cosign-installer from 3.4.0 to 3.5.0
Bumps [sigstore/cosign-installer](https://github.com/sigstore/cosign-installer) from 3.4.0 to 3.5.0.
- [Release notes](https://github.com/sigstore/cosign-installer/releases)
- [Commits](e1523de757...59acb6260d)

---
updated-dependencies:
- dependency-name: sigstore/cosign-installer
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-04-19 12:26:57 +02:00
dependabot[bot] 181239680d build(deps): bump golang.org/x/net from 0.19.0 to 0.23.0
Bumps [golang.org/x/net](https://github.com/golang/net) from 0.19.0 to 0.23.0.
- [Commits](https://github.com/golang/net/compare/v0.19.0...v0.23.0)

---
updated-dependencies:
- dependency-name: golang.org/x/net
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-04-19 12:25:56 +02:00
dependabot[bot] 144c33e9c8 build(deps): bump azure/setup-helm from 3.5 to 4
Bumps [azure/setup-helm](https://github.com/azure/setup-helm) from 3.5 to 4.
- [Release notes](https://github.com/azure/setup-helm/releases)
- [Changelog](https://github.com/Azure/setup-helm/blob/main/CHANGELOG.md)
- [Commits](5119fcb908...b7246b12e7)

---
updated-dependencies:
- dependency-name: azure/setup-helm
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-04-19 12:22:56 +02:00
David Calvert 138eb3a145 fix: value name and comments
Signed-off-by: David Calvert <david@0xdc.me>
2024-04-15 14:48:44 +02:00
David Calvert 51a377c0dd docs: updated falco-exporter/CHANGELOG.md
Signed-off-by: David Calvert <david@0xdc.me>
2024-04-15 14:48:44 +02:00
David Calvert 32ba0bbe5d feat: added ability to set the grafana folder annotation name
Signed-off-by: David Calvert <david@0xdc.me>
2024-04-15 14:48:44 +02:00
Leonardo Grasso 8653678175 docs(falco): add changelog for 4.3.0
Signed-off-by: Leonardo Grasso <me@leonardograsso.com>
2024-04-12 09:58:31 +02:00
Leonardo Grasso 1d59e8a018 update(charts/falco): bump chart version to 4.3.0
Signed-off-by: Leonardo Grasso <me@leonardograsso.com>
2024-04-12 09:58:31 +02:00
Leonardo Grasso b9776c8b62 update(falco/templates): add `HOST_ROOT` env
Although HOST_ROOT is already set in all container images consumed by this chart, its default value (i.e. `/host`) is hard-coded at many points across this chart. So, for consistency, we also explicitly set it in `env:` for the main container and the initContainer.

The alternative would be to make it parametric, but since this is just an implementation detail with no user-facing effect, there is no compelling reason for users to modify it. Moreover, the hard-coded value works because its usage is consistent across, and limited to, the containers managed by this chart.

Signed-off-by: Leonardo Grasso <me@leonardograsso.com>
2024-04-12 09:58:31 +02:00
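The `HOST_ROOT` change described above amounts to an explicit env entry in the pod spec. An illustrative sketch (not the chart's exact template) of what such an entry looks like:

```yaml
# Illustrative pod-spec fragment: HOST_ROOT explicitly set to the
# same hard-coded value (/host) the chart uses elsewhere.
containers:
  - name: falco
    env:
      - name: HOST_ROOT
        value: /host
```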
Leonardo Grasso 8598119dbc update(falco/templates): always add `FALCO_HOSTNAME` env var
This env var is consumed by Falco's gRPC servers and passed to libs to populate `evt.hostname`; it is also used in metrics (and possibly for other purposes in the future).

Signed-off-by: Leonardo Grasso <me@leonardograsso.com>
2024-04-12 09:58:31 +02:00
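A common way to populate a hostname env var like `FALCO_HOSTNAME` is the Kubernetes downward API; whether the chart sources it from `spec.nodeName` or elsewhere is an assumption here, so treat this as a sketch:

```yaml
# Sketch using the downward API to set FALCO_HOSTNAME to the node name.
# The exact fieldPath the chart uses is an assumption.
env:
  - name: FALCO_HOSTNAME
    valueFrom:
      fieldRef:
        fieldPath: spec.nodeName
```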
Thomas Labarussias 20f7b67921 bump falcosidekick dependency version to v0.7.17 to install the latest version through the falco chart
Signed-off-by: Thomas Labarussias <issif+github@gadz.org>
2024-04-09 11:48:14 +02:00
Thomas Labarussias fb6d059bae fix the labels for the serviceMonitor of falcosidekick
Signed-off-by: Thomas Labarussias <issif+github@gadz.org>
2024-04-09 11:40:14 +02:00
Aldo Lacuku 91bfff2bf1 docs(OWNERS): add alacuku (Aldo Lacuku) to approvers
Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2024-04-03 11:31:50 +02:00
Thomas Labarussias e34aadfc48 fix the error with the NOTES when the ingress for falcosidekick-ui is enabled (at <index .paths 0>: error calling index: index of untyped nil Use)
Signed-off-by: Thomas Labarussias <issif+github@gadz.org>
2024-04-02 15:22:44 +02:00
dependabot[bot] 4a04852dde build(deps): bump actions/setup-python from 5.0.0 to 5.1.0
Bumps [actions/setup-python](https://github.com/actions/setup-python) from 5.0.0 to 5.1.0.
- [Release notes](https://github.com/actions/setup-python/releases)
- [Commits](0a5c615913...82c7e631bb)

---
updated-dependencies:
- dependency-name: actions/setup-python
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-03-27 13:41:13 +01:00
dependabot[bot] aafdba37bc build(deps): bump google.golang.org/protobuf from 1.31.0 to 1.33.0
Bumps google.golang.org/protobuf from 1.31.0 to 1.33.0.

---
updated-dependencies:
- dependency-name: google.golang.org/protobuf
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-03-14 10:02:24 +01:00
dependabot[bot] e3354e2dae build(deps): bump actions/checkout from 4.1.1 to 4.1.2
Bumps [actions/checkout](https://github.com/actions/checkout) from 4.1.1 to 4.1.2.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](b4ffde65f4...9bb56186c3)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-03-14 09:58:24 +01:00
dependabot[bot] 45c363686c build(deps): bump docker/login-action from 3.0.0 to 3.1.0
Bumps [docker/login-action](https://github.com/docker/login-action) from 3.0.0 to 3.1.0.
- [Release notes](https://github.com/docker/login-action/releases)
- [Commits](343f7c4344...e92390c5fb)

---
updated-dependencies:
- dependency-name: docker/login-action
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-03-14 09:57:24 +01:00
cpanato 68db066571 fix docs
Signed-off-by: cpanato <ctadeu@gmail.com>
2024-03-14 09:54:24 +01:00
Simon Kranz 73f17ba805 Add version to changelog description
Signed-off-by: Simon Kranz <simon.kranz@inovex.de>
2024-03-12 15:42:17 +01:00
Simon Kranz 77b233c4dc bump falcosidekick version in falco chart dependency
Signed-off-by: Simon Kranz <simon.kranz@inovex.de>
2024-03-12 15:42:17 +01:00
Thomas Berreis 5535326247 [falcosidekick] docs: update CHANGELOG
Signed-off-by: Thomas Berreis <thomas@berreis.de>
2024-03-11 14:14:15 +01:00
Thomas Berreis d97cbf7539 [falcosidekick] chore: bump chart version
Signed-off-by: Thomas Berreis <thomas@berreis.de>
2024-03-11 14:14:15 +01:00
Thomas Berreis a6623b20a4 [falcosidekick] fix: component label in matchers for servicemonitor
Signed-off-by: Thomas Berreis <thomas@berreis.de>
2024-03-11 14:14:15 +01:00
Aldo Lacuku 34f7db6fba fix(falco/helpers): adjust formatting to be compatible with older helm versions
Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2024-03-08 10:00:02 +01:00
Thomas Berreis 23f9fe870d [falcosidekick] fix: remove duplicate component label
This commit partly reverts #626

Signed-off-by: Thomas Berreis <thomas@berreis.de>
2024-02-26 14:48:21 +01:00
Filipp Akinfiev 70efc03efe servicemonitor fix port and selector
Signed-off-by: Filipp Akinfiev <filipp.akinfiev@clyso.com>
2024-02-19 18:14:48 +01:00
Aldo Lacuku 9fbf3ee93f new(CI): add link checker
Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2024-02-16 15:58:37 +01:00
Aldo Lacuku bf8c072f8c fix(falco/README.md): dead link
Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>

fix(falco/README.md): dead link

Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2024-02-16 15:39:37 +01:00
dependabot[bot] e7879b81a1 build(deps): bump helm/kind-action from 1.8.0 to 1.9.0
Bumps [helm/kind-action](https://github.com/helm/kind-action) from 1.8.0 to 1.9.0.
- [Release notes](https://github.com/helm/kind-action/releases)
- [Commits](dda0770415...99576bfa6d)

---
updated-dependencies:
- dependency-name: helm/kind-action
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-02-16 10:32:36 +01:00
Aldo Lacuku 399474d1b7 fix(readme): link tags
Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2024-02-15 17:14:32 +01:00
Aldo Lacuku 27cea0a8a6 chore(falco-exporter): bump chart version
Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2024-02-15 16:10:31 +01:00
Aldo Lacuku 747df180b9 fix(exporter/readme): update dead links
Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2024-02-15 16:10:31 +01:00
Aldo Lacuku 8004bef2a1 chore(falco): bump chart version
Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2024-02-15 14:39:31 +01:00
Aldo Lacuku b16bf3a2a2 fix(falco/README): typos, formatting and broken links
Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2024-02-15 14:39:31 +01:00
Aldo Lacuku f644c8becd update(README.md): update contributing section
Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2024-02-13 16:25:23 +01:00
Aldo Lacuku a670ff35eb update(ci): process all charts for changes in values.yaml
Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2024-02-13 16:12:22 +01:00
Aldo Lacuku c96fdb0e5d update(falco-exporter): bump chart and update changelog
Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2024-02-13 15:57:22 +01:00
Aldo Lacuku efed89519c update(falco-exporter): update docs
Furthermore, it adds the autogeneration of the configuration from the values.yaml file.

Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2024-02-13 15:57:22 +01:00
Aldo Lacuku ed573c82d7 update(falco): bump falco to 0.37.1
Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2024-02-13 14:19:21 +01:00
Johannes Schnatterer b5f79e2687 NOTES.txt: Fix links
Signed-off-by: Johannes Schnatterer <johannes.schnatterer@cloudogu.com>
2024-02-13 13:39:21 +01:00
Aldo Lacuku af72185d8b new(CI): enforce the helm docs
Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2024-02-07 09:38:56 +01:00
Aldo Lacuku 2f56b9e99e fix(event-generator): update README.md
Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2024-02-06 16:27:54 +01:00
Aldo Lacuku 6415a1f51a fix(falco): update README.md
Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2024-02-06 16:21:54 +01:00
Aldo Lacuku 22df8b2a3b update(falcosidekick): update README.md
Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2024-02-06 16:06:54 +01:00
Aldo Lacuku c1fb1da531 chore(CI): bump helm to v3.14.0
Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2024-02-06 14:36:53 +01:00
Aldo Lacuku d8850d667e fix(tests): enable grpc output in falco deployment
Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2024-02-06 14:36:53 +01:00
Aldo Lacuku 05112ae3dc fix(exporter): update tolerations
Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2024-02-06 14:36:53 +01:00
Aldo Lacuku 53e41ca8d2 fix(falco): reintroduce service account
Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2024-02-06 09:19:52 +01:00
dependabot[bot] 8880802615 build(deps): bump sigstore/cosign-installer from 3.3.0 to 3.4.0
Bumps [sigstore/cosign-installer](https://github.com/sigstore/cosign-installer) from 3.3.0 to 3.4.0.
- [Release notes](https://github.com/sigstore/cosign-installer/releases)
- [Commits](9614fae9e5...e1523de757)

---
updated-dependencies:
- dependency-name: sigstore/cosign-installer
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-02-01 09:03:36 +01:00
Aldo Lacuku 0274949af6 update(falco): bump rules in preset values file
Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2024-01-30 12:56:32 +01:00
Aldo Lacuku 5fffb0774d fix(falco): use alias for kmod and modern_ebpf drivers
Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2024-01-30 12:56:32 +01:00
Aldo Lacuku 2bbde3926d update(falco/k8saudit): bump k8saudit plugin and rulesfile to v0.7.0
Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2024-01-30 12:56:32 +01:00
Aldo Lacuku 601f0f394b update(falco/tests): use falco with k8saudit plugin for testing
Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2024-01-30 12:56:32 +01:00
Aldo Lacuku 232899ecab update(falco): disable k8s-metacollector by default
Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2024-01-30 12:56:32 +01:00
Aldo Lacuku e913bcb24c update(falco): bump falcoctl version to 0.7.1
Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2024-01-30 12:56:32 +01:00
Aldo Lacuku cc75790274 update(falco): bump chart and app versions
Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2024-01-30 12:56:32 +01:00
Aldo Lacuku c231d6397d update(falco/README.md): update README.md file
Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2024-01-30 12:56:32 +01:00
Aldo Lacuku d58c98cf75 new(falco): add breaking changes notice
Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2024-01-30 12:56:32 +01:00
Aldo Lacuku 968f65b9b3 update(falco/changelog): add changelog for chart 4.0.0
Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2024-01-30 12:56:32 +01:00
Aldo Lacuku b411be9302 update(falco/gvisor): update gvisor-gke preset values file
Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2024-01-30 12:56:32 +01:00
Aldo Lacuku caca714086 fix(falco): mount /etc in falco pods
Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2024-01-30 12:56:32 +01:00
Aldo Lacuku 95496d0b1e update(falco/driver)!: use the same names for drivers as falco
Please see: https://github.com/falcosecurity/falco/pull/2413.

Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2024-01-30 12:56:32 +01:00
Aldo Lacuku d5056d212a update(falco/falcoctl): bump falcoctl version
Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2024-01-30 12:56:32 +01:00
Aldo Lacuku 135ad04dbe new(falco): enable falcoctl deps resolver
Falco 0.37.0 will not ship with bundled plugins. Falcoctl, with the deps resolver
enabled, downloads the required plugins starting from the rulesfiles.

Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2024-01-30 12:56:32 +01:00
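The deps-resolver behavior described above is typically toggled in the chart's falcoctl configuration. A hedged values.yaml sketch follows; the exact key names are assumptions, so check the chart's values.yaml:

```yaml
# Hedged sketch: enabling falcoctl's deps resolver via chart values.
# Key names and the example ref are assumptions, not the chart's
# confirmed schema.
falcoctl:
  config:
    artifact:
      install:
        resolveDeps: true    # pull in plugins required by the rulesfiles
        refs:
          - falco-rules:3    # example rulesfile ref (illustrative)
```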
Aldo Lacuku a4db1cd330 new(falco/tests): add unit tests for k8s-metacollector integration
Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2024-01-30 12:56:32 +01:00
Aldo Lacuku 5e05e6d9e4 update(falco): remove service account and related resources
Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2024-01-30 12:56:32 +01:00
Aldo Lacuku 55cf23c568 new(falco): integrate k8s-metacollector and k8smeta plugin
The default mode for getting Kubernetes metadata is to use the
k8s-metacollector and the k8smeta plugin. This commit
adds the required helpers and variables to enable the
k8s-metacollector by default.

Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2024-01-30 12:56:32 +01:00
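Enabling the k8s-metacollector/k8smeta pairing introduced above is a values-level switch. A sketch with assumed key names (verify against the falco chart's values.yaml):

```yaml
# Hedged sketch: enabling Kubernetes metadata collection.
# The collectors.kubernetes.* key names are assumptions.
collectors:
  kubernetes:
    enabled: true  # deploy the k8smeta plugin + k8s-metacollector hookup
```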
Aldo Lacuku ecde0d5443 new(falco): add k8s-metacollector chart as a dependency
Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2024-01-30 12:56:32 +01:00
Aldo Lacuku d5986c747f update(k8s-metacollector): lower initial delay of readiness and liveness probes
Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2024-01-26 16:52:18 +01:00
Aldo Lacuku 2b31ca4004 chore(k8s-metacollector): add license header to go files
Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2024-01-26 12:38:18 +01:00
Aldo Lacuku 4a95d1784a new(k8s-metacollector): add unit tests for grafana dashboard
Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2024-01-26 12:38:18 +01:00
Aldo Lacuku ff1fb8284e new(k8s-metacollector): add grafana dashboard
Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2024-01-26 12:38:18 +01:00
Aldo Lacuku 9e37025f56 fix(k8s-metacollector): correctly indent the service monitor
Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2024-01-25 17:39:16 +01:00
Victor Login 487b6c697c falco-exporter: add annotation to set the Grafana dashboard folder
Signed-off-by: Victor Login <batazor111@gmail.com>
2024-01-25 11:30:15 +01:00
Aldo Lacuku 2536390e98 update(k8s-metacollector): lower interval and scrape_timeout values for service monitor
Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2024-01-25 11:29:15 +01:00
Aldo Lacuku 00558f3aa5 update(README.md): add k8s-metacollector to the list of charts
Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2024-01-24 17:00:14 +01:00
dependabot[bot] 1f31115eaf build(deps): bump actions/setup-go from 4.0.0 to 5.0.0
Bumps [actions/setup-go](https://github.com/actions/setup-go) from 4.0.0 to 5.0.0.
- [Release notes](https://github.com/actions/setup-go/releases)
- [Commits](4d34df0c23...0c52d547c9)

---
updated-dependencies:
- dependency-name: actions/setup-go
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-01-23 07:19:08 +01:00
Aldo Lacuku 018b8212ff new(k8s-metacollector): enable unit tests
Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2024-01-22 16:43:07 +01:00
Aldo Lacuku d914f25d49 update(k8s-metacollector): bump app version to 0.1.0
Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2024-01-22 16:43:07 +01:00
Aldo Lacuku 6e5332eb6c update(k8s-metacollector): update changelog and bump chart version
Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2024-01-02 14:00:07 +01:00
Aldo Lacuku 6369df8298 fix(k8s-metacollector/tests): update tests to reflect latest changes
Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2024-01-02 14:00:07 +01:00
Aldo Lacuku 6dbdad47e7 fix(makefile): correctly update dependencies for each chart
Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2023-12-21 12:03:14 +01:00
Aldo Lacuku cf0312611e update(pull-request-template): add k8s-metacollector area
Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2023-12-21 10:25:15 +01:00
Aldo Lacuku b182b74226 update(k8s-metacollector): bump chart's version
Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2023-12-21 09:45:14 +01:00
Aldo Lacuku ef54f73f49 update(k8s-metacollector): add work in progress disclaimer
Furthermore, fix indentation in commands example

Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2023-12-21 09:45:14 +01:00
Aldo Lacuku bebb9241ee update(k8s-metacollector): update chart's info
Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2023-12-21 09:45:14 +01:00
dependabot[bot] 723ba3a2c1 build(deps): bump golang.org/x/crypto from 0.16.0 to 0.17.0
Bumps [golang.org/x/crypto](https://github.com/golang/crypto) from 0.16.0 to 0.17.0.
- [Commits](https://github.com/golang/crypto/compare/v0.16.0...v0.17.0)

---
updated-dependencies:
- dependency-name: golang.org/x/crypto
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-12-20 17:39:13 +01:00
Aldo Lacuku 79f241ea3b new(k8s-metacollector): add CHANGELOG.md file
Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2023-12-20 17:15:14 +01:00
Aldo Lacuku 2898397135 update(k8s-metacollector/README.md): update readme.
Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2023-12-20 17:15:14 +01:00
Aldo Lacuku fe643cfa87 new(.gitignore): add .gitignore file
Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2023-12-20 17:15:14 +01:00
Aldo Lacuku eea4adee78 new(tests): add unit tests for k8s-metacollector chart
Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2023-12-20 17:15:14 +01:00
Aldo Lacuku 72eba5c4f7 new(charts): add k8s-metacollector chart
Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2023-12-20 17:15:14 +01:00
Nicolas Trangosi 275433ad9d Upgrade falcosidekick chart to `v0.7.11` in falco chart
Signed-off-by: Nicolas Trangosi <nicolas.trangosi@dcbrain.com>
2023-12-18 17:37:01 +01:00
Marcel Birkner 2f57edbebe Update links in README
Signed-off-by: Marcel Birkner <marcel.birkner@dash0.com>
2023-12-18 16:59:01 +01:00
Brian Irish f77aabcc4e fix: README link to Policy Report documentation
Signed-off-by: Brian Irish <brian@teamraincoat.com>
2023-12-18 15:54:02 +01:00
Nicolas Trangosi 46b578dc52 Update chart version and changelog
Signed-off-by: Nicolas Trangosi <nicolas.trangosi@dcbrain.com>
2023-12-18 15:25:01 +01:00
Nicolas Lamirault 48caa59a1d Update: use core and ui labels.
- Use the recommended label app.kubernetes.io/component for Core and UI
  resources.
- Fix: UI RBAC and secrets resources are only generated when webui.enabled is
  true

Signed-off-by: Nicolas Lamirault <nicolas.lamirault@gmail.com>
2023-12-18 15:25:01 +01:00
Nicolas Lamirault d69f384bef Add: missing recommended labels
Signed-off-by: Nicolas Lamirault <nicolas.lamirault@gmail.com>
2023-12-18 15:25:01 +01:00
Nicolas Lamirault 2d58d3cce7 Add: missing recommended labels
Signed-off-by: Nicolas Lamirault <nicolas.lamirault@gmail.com>
2023-12-18 15:25:01 +01:00
Nicolas Lamirault f9e592fd2c Fix: do not change ingress ui
Signed-off-by: Nicolas Lamirault <nicolas.lamirault@gmail.com>
2023-12-18 15:25:01 +01:00
Nicolas Lamirault f1c8651fa7 Update: refactor Kubernetes recommended labels
Signed-off-by: Nicolas Lamirault <nicolas.lamirault@gmail.com>
2023-12-18 15:25:01 +01:00
Nicolas Lamirault cbaaff4ead Fix: labels for Falcosidekick
Signed-off-by: Nicolas Lamirault <nicolas.lamirault@gmail.com>
2023-12-18 15:25:01 +01:00
Aldo Lacuku 55f2829b9f chore(falcosidekick): bump chart version
Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2023-12-18 09:44:00 +01:00
Aldo Lacuku 8513db9efa chore(falco): bump chart version
Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2023-12-18 09:44:00 +01:00
Aldo Lacuku 6c3057f85b chore(event-generator): bump chart version
Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2023-12-18 09:44:00 +01:00
Aldo Lacuku a39194ead9 fix(Makefile): unify all makefiles in a single one
After the last refactor when porting the CI from CircleCI to GHA
makefiles were broken. This commit fixes them, and instead of having
a makefile for each chart we have a single one.

Signed-off-by: Aldo Lacuku <aldo@lacuku.eu>
2023-12-18 09:44:00 +01:00
dependabot[bot] 4cf83929c2 build(deps): bump sigstore/cosign-installer from 3.2.0 to 3.3.0
Bumps [sigstore/cosign-installer](https://github.com/sigstore/cosign-installer) from 3.2.0 to 3.3.0.
- [Release notes](https://github.com/sigstore/cosign-installer/releases)
- [Commits](1fc5bd396d...9614fae9e5)

---
updated-dependencies:
- dependency-name: sigstore/cosign-installer
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-12-12 11:03:43 +01:00
dependabot[bot] 77f13968ab build(deps): bump actions/setup-python from 4.8.0 to 5.0.0
Bumps [actions/setup-python](https://github.com/actions/setup-python) from 4.8.0 to 5.0.0.
- [Release notes](https://github.com/actions/setup-python/releases)
- [Commits](b64ffcaf5b...0a5c615913)

---
updated-dependencies:
- dependency-name: actions/setup-python
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-12-07 00:20:27 +01:00
dependabot[bot] 764cab2194 build(deps): bump actions/setup-python from 4.7.1 to 4.8.0
Bumps [actions/setup-python](https://github.com/actions/setup-python) from 4.7.1 to 4.8.0.
- [Release notes](https://github.com/actions/setup-python/releases)
- [Commits](65d7f2d534...b64ffcaf5b)

---
updated-dependencies:
- dependency-name: actions/setup-python
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-12-06 06:04:26 +01:00
Juan Gonzalez ae53304323 Changed label import
Signed-off-by: Juan Gonzalez <jgfm@novonordisk.com>
2023-11-22 17:41:43 +01:00
Juan Gonzalez 91ed41d4a9 Bumped chart version and changelog
Signed-off-by: Juan Gonzalez <jgfm@novonordisk.com>
2023-11-22 17:41:43 +01:00
Juan Gonzalez 76e6e85e00 Add mTLS certificate loading capability
Signed-off-by: Juan Gonzalez <jgfm@novonordisk.com>
2023-11-22 17:41:43 +01:00
dependabot[bot] 2a9dd9aff3 build(deps): bump sigstore/cosign-installer from 3.1.2 to 3.2.0
Bumps [sigstore/cosign-installer](https://github.com/sigstore/cosign-installer) from 3.1.2 to 3.2.0.
- [Release notes](https://github.com/sigstore/cosign-installer/releases)
- [Commits](11086d2504...1fc5bd396d)

---
updated-dependencies:
- dependency-name: sigstore/cosign-installer
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-11-08 01:27:24 +01:00
dependabot[bot] ee4d0fa79d build(deps): bump helm/chart-releaser-action from 1.5.0 to 1.6.0
Bumps [helm/chart-releaser-action](https://github.com/helm/chart-releaser-action) from 1.5.0 to 1.6.0.
- [Release notes](https://github.com/helm/chart-releaser-action/releases)
- [Commits](be16258da8...a917fd15b2)

---
updated-dependencies:
- dependency-name: helm/chart-releaser-action
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-11-03 16:39:08 +01:00
dependabot[bot] cbeffdbbf1 build(deps): bump helm/chart-testing-action from 2.4.0 to 2.6.1
Bumps [helm/chart-testing-action](https://github.com/helm/chart-testing-action) from 2.4.0 to 2.6.1.
- [Release notes](https://github.com/helm/chart-testing-action/releases)
- [Commits](e878887317...e6669bcd63)

---
updated-dependencies:
- dependency-name: helm/chart-testing-action
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-11-03 13:56:08 +01:00
cpanato 3f7f616d8a update documentation
Signed-off-by: cpanato <ctadeu@gmail.com>
2023-10-29 21:23:55 +01:00
Federico Di Pierro 7264025aaa update(falco): bump chart and Falco version
Signed-off-by: Federico Di Pierro <nierro92@gmail.com>
2023-10-27 14:08:48 +02:00
Thomas Labarussias 30c2966175 fix the condition for the missing cert files
Signed-off-by: Thomas Labarussias <issif+github@gadz.org>
2023-10-24 15:37:29 +02:00
Cyprien Oger 74690e2bcf chore(falco): upgrade falcosidekick chart to v0.7.7
Signed-off-by: Cyprien Oger <cyprien.oger@opendatasoft.com>
2023-10-24 14:31:29 +02:00
Cyprien Oger 2697b2e711 feat(falcosidekick): support extra args in the chart
Signed-off-by: Cyprien Oger <cyprien.oger@opendatasoft.com>
2023-10-23 20:21:18 +02:00
154 changed files with 22306 additions and 2723 deletions


@ -35,12 +35,14 @@ Please remove the leading whitespace before the `/kind <>` you uncommented.
> /area falco-chart
> /area falco-exporter-chart
> /area falcosidekick-chart
> /area falco-talon-chart
> /area event-generator-chart
> /area k8s-metacollector-chart
<!--
Please remove the leading whitespace before the `/area <>` you uncommented.
-->
@ -59,6 +61,7 @@ Fixes #
**Special notes for your reviewer**:
**Checklist**
<!--
Place an '[x]' (no spaces) in all applicable fields. Please remove unrelated fields.


@ -4,7 +4,3 @@ updates:
directory: "/"
schedule:
interval: "daily"
labels:
- "area/dependency"
- "release-note-none"
- "ok-to-test"

.github/workflows/docs.yml (new file, +35 lines)

@ -0,0 +1,35 @@
name: Check Helm Docs
on:
pull_request:
jobs:
readme:
runs-on: ubuntu-latest
steps:
- name: Checkout repository
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- name: Run Helm Docs and check the outcome
run: |
for chart in $(ls ./charts); do
docker run \
--rm \
--workdir=/helm-docs \
--volume "$(pwd):/helm-docs" \
-u $(id -u) \
jnorwood/helm-docs:v1.11.0 \
helm-docs -c ./charts/$chart -t ./README.gotmpl -o ./README.md
done
git diff --exit-code
- name: Print a comment in case of failure
run: |
echo "The README.md files are not up to date.
Please, run \"make docs\" before pushing."
exit 1
if: |
failure() && github.event.pull_request.head.repo.full_name == github.repository

.github/workflows/links.yml (new file, +24 lines)

@ -0,0 +1,24 @@
name: Links
on:
push:
branches:
- main
- master
pull_request:
jobs:
linkChecker:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
fetch-depth: 0
- name: Link Checker
uses: lycheeverse/lychee-action@5c4ee84814c983aa7164eaee476f014e53ff3963 #v2.5.0
with:
args: --no-progress './**/*.yml' './**/*.yaml' './**/*.md' './**/*.gotmpl' './**/*.tpl' './**/OWNERS' './**/LICENSE'
token: ${{ secrets.GITHUB_TOKEN }}
fail: true


@ -19,12 +19,12 @@ jobs:
steps:
- name: Checkout
uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11 # v4.1.1
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
fetch-depth: 0
- name: Install Cosign
uses: sigstore/cosign-installer@11086d25041f77fe8fe7b9ea4e48e3b9192b8f19 # v3.1.2
uses: sigstore/cosign-installer@398d4b0eeef1380460a10c8013a76f728fb906ac # v3.9.1
- name: Configure Git
run: |
@ -32,21 +32,22 @@ jobs:
git config user.email "$GITHUB_ACTOR@users.noreply.github.com"
- name: Set up Helm
uses: azure/setup-helm@5119fcb9089d432beecbf79bb2c7915207344b78 # v3.5
uses: azure/setup-helm@b9e51907a09c216f16ebe8536097933489208112 # v4.3.0
- name: Add dependency chart repos
run: |
helm repo add falcosecurity https://falcosecurity.github.io/charts
- name: Run chart-releaser
uses: helm/chart-releaser-action@be16258da8010256c6e82849661221415f031968 # v1.5.0
uses: helm/chart-releaser-action@cae68fefc6b5f367a0275617c9f83181ba54714f # v1.7.0
with:
charts_dir: charts
config: cr.yaml
env:
CR_TOKEN: "${{ secrets.GITHUB_TOKEN }}"
- name: Login to GitHub Container Registry
uses: docker/login-action@343f7c4344506bcbf9b4de18042ae17996df046d # v3.0.0
uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
with:
registry: ghcr.io
username: ${{ github.actor }}


@ -8,19 +8,21 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11 # v4.1.1
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
fetch-depth: 0
- name: Set up Helm
uses: azure/setup-helm@5119fcb9089d432beecbf79bb2c7915207344b78 # v3.5
- uses: actions/setup-python@65d7f2d534ac1bc67fcd62888c5f4f3d2cb2b236 # v4.7.1
uses: azure/setup-helm@b9e51907a09c216f16ebe8536097933489208112 # v4.3.0
with:
python-version: '3.x'
version: "3.14.0"
- uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065 # v5.6.0
with:
python-version: "3.x"
- name: Set up chart-testing
uses: helm/chart-testing-action@e8788873172cb653a90ca2e819d79d65a66d4e76 # v2.4.0
uses: helm/chart-testing-action@0d28d3144d3a25ea2cc349d6e59901c4ff469b3b # v2.7.0
- name: Run chart-testing (lint)
run: ct lint --config ct.yaml
@ -35,23 +37,38 @@ jobs:
- name: Create KIND Cluster
if: steps.list-changed.outputs.changed == 'true'
uses: helm/kind-action@dda0770415bac9fc20092cacbc54aa298604d140 # v1.8.0
uses: helm/kind-action@a1b0e391336a6ee6713a0583f8c6240d70863de3 # v1.12.0
with:
config: ./tests/kind-config.yaml
- name: install falco if needed (ie for falco-exporter)
if: steps.list-changed.outputs.changed == 'true'
run: |
changed=$(ct list-changed --config ct.yaml)
if [[ "$changed[@]" =~ "charts/falco-exporter" ]]; then
helm repo add falcosecurity https://falcosecurity.github.io/charts
helm repo update
helm install falco falcosecurity/falco -f ./tests/falco-test-ci.yaml
kubectl get po -A
sleep 120
kubectl get po -A
fi
- name: Run chart-testing (install)
if: steps.list-changed.outputs.changed == 'true'
run: ct install --config ct.yaml
run: ct install --exclude-deprecated --config ct.yaml
go-unit-tests:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
fetch-depth: 0
- name: Set up Helm
uses: azure/setup-helm@b9e51907a09c216f16ebe8536097933489208112 # v4.3.0
with:
version: "3.10.3"
- name: Update repo deps
run: helm dependency update ./charts/falco
- name: Setup Go
uses: actions/setup-go@0aaccfd150d50ccaeb58ebd88d36e91967a5f35b # v5.4.0
with:
go-version: "1.21"
check-latest: true
- name: K8s-metacollector unit tests
run: go test ./charts/k8s-metacollector/tests/unit/...
- name: Falco unit tests
run: go test ./charts/falco/tests/unit/...

.gitignore (new file, +6 lines)

@ -0,0 +1,6 @@
# editor and IDE paraphernalia
.idea
*.swp
*.swo
*~
.vscode

.lycheeignore (new file, +22 lines)

@ -0,0 +1,22 @@
nats:/host:port
https://yds.serverless.yandexcloud.net/
http:/host:port
https://chat.googleapis.com/v1/spaces/XXXXXX/YYYYYY
https://xxxx/hooks/YYYY
https://cliq.zoho.eu/api/v2/channelsbyname/XXXX/message*
https://outlook.office.com/webhook/XXXXXX/IncomingWebhook/YYYYYY
https://outlook.office.com/webhook/XXXXXX/IncomingWebhook/YYYYYY
https://discord.com/api/webhooks/xxxxxxxxxx
http://kafkarest:8082/topics/test
https://api.spyderbat.com/
https://hooks.slack.com/services/XXXX/YYYY/ZZZZ
http://\{domain*
https://github.com/falcosecurity/falcosidekick/tree/master/deploy/helm/falcosidekick
http://some.url/some/path/
https://localhost:32765/k8s-audit
https://some.url/some/path/
http://localhost:8765/versions
https://environmentid.live.dynatrace.com/api
https://yourdomain/e/ENVIRONMENTID/api
http://falco-talon:2803
https://http-intake.logs.datadoghq.com/

Makefile (new file, +40 lines)

@ -0,0 +1,40 @@
DOCS_IMAGE_VERSION="v1.11.0"
LINT_IMAGE_VERSION="v3.8.0"
# Charts's path relative to the current directory.
CHARTS := $(wildcard ./charts/*)
CHARTS_NAMES := $(notdir $(CHARTS))
.PHONY: lint
lint: helm-deps-update $(addprefix lint-, $(CHARTS_NAMES))
lint-%:
@docker run \
-it \
-e HOME=/home/ct \
--mount type=tmpfs,destination=/home/ct \
--workdir=/data \
--volume $$(pwd):/data \
-u $$(id -u) \
quay.io/helmpack/chart-testing:$(LINT_IMAGE_VERSION) \
ct lint --config ./ct.yaml --charts ./charts/$*
.PHONY: docs
docs: $(addprefix docs-, $(CHARTS_NAMES))
docs-%:
@docker run \
--rm \
--workdir=/helm-docs \
--volume "$$(pwd):/helm-docs" \
-u $$(id -u) \
jnorwood/helm-docs:$(DOCS_IMAGE_VERSION) \
helm-docs -c ./charts/$* -t ./README.gotmpl -o ./README.md
.PHONY: helm-deps-update
helm-deps-update: $(addprefix helm-deps-update-, $(CHARTS_NAMES))
helm-deps-update-%:
helm dependency update ./charts/$*
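The `%` pattern rules above fan a single target out across every chart directory. The mechanics can be seen with a tiny self-contained Makefile (chart names here are illustrative, and `.RECIPEPREFIX` is used only to avoid literal tabs):

```shell
# Write a miniature version of the dispatch pattern to a temp file and run it.
cat > /tmp/charts-demo.mk <<'EOF'
.RECIPEPREFIX = >
CHARTS_NAMES := falco falcosidekick event-generator
.PHONY: lint
lint: $(addprefix lint-, $(CHARTS_NAMES))
lint-%:
> @echo "linting chart: $*"
EOF

# "make lint" expands to lint-falco lint-falcosidekick lint-event-generator;
# each prerequisite matches the lint-% rule with $* bound to the chart name.
make -f /tmp/charts-demo.mk lint
```

In the real Makefile each `lint-%`/`docs-%` recipe runs a container against `./charts/$*` instead of echoing.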

OWNERS (3 lines changed)

@ -2,9 +2,10 @@ approvers:
- leogr
- Issif
- cpanato
- alacuku
- ekoops
reviewers:
- bencer
- alacuku
emeritus_approvers:
- leodido
- fntlnz


@ -4,7 +4,7 @@
This GitHub project is the source for the [Falco](https://github.com/falcosecurity/falco) Helm chart repository that you can use to [deploy](https://falco.org/docs/getting-started/deployment/) Falco in your Kubernetes infrastructure.
The purpose of this repository is to provide a place for maintaining and contributing Charts related to the Falco project, with CI processes in place for managing the releasing of Charts into [our Helm Chart Repository]((https://falcosecurity.github.io/charts)).
The purpose of this repository is to provide a place for maintaining and contributing Charts related to the Falco project, with CI processes in place for managing the releasing of Charts into [our Helm Chart Repository](https://falcosecurity.github.io/charts).
For more information about installing and using Helm, see the
[Helm Docs](https://helm.sh/docs/).
@ -12,19 +12,21 @@ For more information about installing and using Helm, see the
## Repository Structure
This GitHub repository contains the source for the packaged and versioned charts released to [https://falcosecurity.github.io/charts](https://falcosecurity.github.io/charts) (our Helm Chart Repository).
We also publish the charts as OCI images, hosted on [GitHub Packages](https://github.com/orgs/falcosecurity/packages?repo_name=charts)
The Charts in this repository are organized into folders: each directory that contains a `Chart.yaml` is a chart.
The Charts in the `master` branch (with a corresponding [GitHub release](https://github.com/falcosecurity/charts/releases)) match the latest packaged Charts in [our Helm Chart Repository]((https://falcosecurity.github.io/charts)), though there may be previous versions of a Chart available in that Chart Repository.
The Charts in the `master` branch (with a corresponding [GitHub release](https://github.com/falcosecurity/charts/releases)) match the latest packaged Charts in [our Helm Chart Repository](https://falcosecurity.github.io/charts), though there may be previous versions of a Chart available in that Chart Repository.
## Charts
Charts currently available are listed below.
- [falco](falco)
- [falco-exporter](falco-exporter)
- [falcosidekick](falcosidekick)
- [event-generator](event-generator)
- [falco](./charts/falco)
- [falcosidekick](./charts/falcosidekick)
- [event-generator](./charts/event-generator)
- [k8s-metacollector](./charts/k8s-metacollector)
- [falco-talon](./charts/falco-talon)
## Usage
@ -39,7 +41,7 @@ helm repo update
### Installing a chart
Please refer to the instruction provided by the Chart you want to install. For installing Falco via Helm, the documentation is [here](https://github.com/falcosecurity/charts/tree/master/falco#adding-falcosecurity-repository).
Please refer to the instruction provided by the Chart you want to install. For installing Falco via Helm, the documentation is [here](https://github.com/falcosecurity/charts/tree/master/charts/falco#adding-falcosecurity-repository).
## Contributing
@ -50,9 +52,8 @@ So, we ask you to follow these simple steps when making your PR:
- The [DCO](https://github.com/falcosecurity/.github/blob/master/CONTRIBUTING.md#developer-certificate-of-origin) is required to contribute to a `falcosecurity` project. So ensure that all your commits have been signed off. We will not be able to merge the PR if a commit is not signed off.
- Bump the version number of the chart by modifying the `version` value in the chart's `Chart.yaml` file. This is particularly important, as it allows our CI to release a new chart version. If the version has not been increased, we will not be able to merge the PR.
- Add a new section in the chart's `CHANGELOG.md` file with the new version number of the chart.
- If your changes affect any chart variables, please update the chart's `README.md` file accordingly and run `make docs` in the chart folder.
Finally, when opening your PR, please fill in the provided PR template, including the final checklist of items to indicate that all the steps above have been performed.
- If your changes affect any chart variables, please update the chart's `README.gotmpl` file accordingly and run `make docs` in the main folder.
Finally, when opening your PR, please fill in the provided PR template, including the final checklist of items to indicate that all the steps above have been performed.
If you have any questions, please feel free to contact us via [GitHub issues](https://github.com/falcosecurity/charts/issues).


@ -4,6 +4,18 @@
This file documents all notable changes to `event-generator` Helm Chart. The release
numbering uses [semantic versioning](http://semver.org).
## v0.3.4
* Pass `--all` flag to event-generator binary to allow disabled rules to run, e.g. the k8saudit ruleset.
## v0.3.3
* Update README.md.
## v0.3.2
* no change to the chart itself. Updated README.md and makefile.
## v0.3.1
* noop change just to test the ci


@ -15,7 +15,7 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.3.1
version: 0.3.4
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to


@ -1,25 +0,0 @@
#generate helm documentation
DOCS_IMAGE_VERSION="v1.11.0"
#Here we use the "latest" tag since our CI uses the same(https://github.com/falcosecurity/charts/blob/2f04bccb5cacbbf3ecc2d2659304b74f865f41dd/.circleci/config.yml#L16).
LINT_IMAGE_VERSION="latest"
docs:
docker run \
--rm \
--workdir=/helm-docs \
--volume "$$(pwd):/helm-docs" \
-u $$(id -u) \
jnorwood/helm-docs:$(DOCS_IMAGE_VERSION) \
helm-docs -t ./README.gotmpl -o ./generated/helm-values.md
lint: helm-repo-update
docker run \
-it \
--workdir=/data \
--volume $$(pwd)/..:/data \
quay.io/helmpack/chart-testing:latest \
ct lint --config ./tests/ct.yaml --charts ./event-generator --chart-dirs .
helm-repo-update:
helm repo update


@ -1,4 +1,123 @@
{{ template "chart.header" . }}
{{ template "chart.description" . }}
{{ template "chart.valuesSection" . }}
# Event-generator
[event-generator](https://github.com/falcosecurity/event-generator) is a tool designed to generate events for both syscalls and k8s audit. The tool can be used to check if Falco is working properly. It does so by performing a variety of suspect actions that trigger security events. The event-generator implements a [minimalistic framework](https://github.com/falcosecurity/event-generator/tree/master/events) which makes it easy to implement new actions.
## Introduction
This chart helps to deploy the event-generator in a kubernetes cluster in order to test an already deployed Falco instance.
## Adding `falcosecurity` repository
Before installing the chart, add the `falcosecurity` charts repository:
```bash
helm repo add falcosecurity https://falcosecurity.github.io/charts
helm repo update
```
## Installing the Chart
To install the chart with default values and release name `event-generator` run:
```bash
helm install event-generator falcosecurity/event-generator
```
After a few seconds, event-generator should be running in the `default` namespace.
In order to install the event-generator in a custom namespace run:
```bash
# change the name of the namespace to fit your requirements.
kubectl create ns "ns-event-generator"
helm install event-generator falcosecurity/event-generator --namespace "ns-event-generator"
```
When the event-generator is installed using the default values in the `values.yaml` file, it is deployed as a k8s job running the `run` command, and it generates activity only for the k8s audit.
For more info check the next section.
> **Tip**: List all releases using `helm list`; a release is a name used to track a specific deployment
### Commands, actions and options
The event-generator tool accepts two commands: `run` and `test`. The former just generates activity; the latter, which is more sophisticated, also checks that Falco triggers the expected rule for each generated activity. Both of them accept an argument that determines the actions to be performed:
```bash
event-generator run/test [regexp]
```
Without arguments, all actions are performed; otherwise, only those actions matching the given regular expression. If we want to `test` just the actions related to k8s, the following command does the trick:
```bash
event-generator test ^k8saudit
```
The list of the supported actions can be found [here](https://github.com/falcosecurity/event-generator#list-actions)
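Since the argument is a regular expression matched against action names, anchoring matters. A toy illustration using plain `grep` (the action names below are illustrative stand-ins, not the binary's real registry):

```shell
# Hypothetical action names, in the "<collection>.<action>" style the docs use.
actions='k8saudit.createDisallowedPod
k8saudit.createConfigmapWithPrivateCredentials
syscall.writeBelowEtc
syscall.readSensitiveFileUntrusted'

# `event-generator test ^k8saudit` would keep only the first two:
printf '%s\n' "$actions" | grep -E '^k8saudit'
```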
Before diving into how this helm chart deploys and manages instances of the event-generator in Kubernetes, there are two more options we need to talk about:
+ `--loop` to run actions in a loop
+ `--sleep` to set the length of time to wait before running an action (defaults to 1s)
### Deployment modes in k8s
Based on the commands, actions and options configured, the event-generator can be deployed as a k8s `job` or a `deployment`. If the `config.loop` value is set, a `deployment` is used, since it is a long-running process; otherwise, a `job`.
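In template terms, the choice boils down to a conditional on `.Values.config.loop`; a hypothetical simplification (not the chart's actual template) looks like:

```yaml
# Long-running loop => Deployment; one-shot run => Job (simplified sketch).
kind: {{ if .Values.config.loop }}Deployment{{ else }}Job{{ end }}
```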
A configuration like the one below, set in the `values.yaml` file, will deploy the event-generator using a `deployment` with the `run` command passed to it, and will generate activity only for the syscalls:
```yaml
config:
# -- The event-generator accepts two commands (run, test):
# run: runs actions.
# test: runs and tests actions.
# For more info see: https://github.com/falcosecurity/event-generator
command: run
# -- Regular expression used to select the actions to be run.
actions: "^syscall"
# -- Runs in a loop the actions.
# If set to "true" the event-generator is deployed using a k8s deployment otherwise a k8s job.
loop: true
# -- The length of time to wait before running an action. Non-zero values should contain
# a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means no sleep. (default 100ms)
sleep: ""
grpc:
# -- Set it to true if you are deploying in "test" mode.
enabled: false
# -- Path to the Falco grpc socket.
bindAddress: "unix:///var/run/falco/falco.sock"
```
The following configuration will use a k8s `job` since we want to perform the k8s activity once and check that Falco reacts properly to those actions:
```yaml
config:
# -- The event-generator accepts two commands (run, test):
# run: runs actions.
# test: runs and tests actions.
# For more info see: https://github.com/falcosecurity/event-generator
command: test
# -- Regular expression used to select the actions to be run.
actions: "^k8saudit"
# -- Runs in a loop the actions.
# If set to "true" the event-generator is deployed using a k8s deployment otherwise a k8s job.
loop: false
# -- The length of time to wait before running an action. Non-zero values should contain
# a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means no sleep. (default 100ms)
sleep: ""
grpc:
# -- Set it to true if you are deploying in "test" mode.
enabled: true
# -- Path to the Falco grpc socket.
bindAddress: "unix:///var/run/falco/falco.sock"
```
Note that **grpc.enabled must be set to true when running with the test command. Be sure that Falco exposes the grpc socket and emits output to it**.
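For `test` mode to have something to connect to, the Falco chart can be installed with its gRPC output enabled. A sketch of the values override, assuming the key names used by the upstream falco chart's `values.yaml` at the time of writing (verify against your chart version):

```yaml
# Override for the *falco* chart (not this chart): expose the gRPC unix
# socket and emit Falco outputs over it, as `test` mode expects.
falco:
  grpc:
    enabled: true
    bind_address: "unix:///run/falco/falco.sock"
  grpc_output:
    enabled: true
```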
## Uninstalling the Chart
To uninstall the `event-generator` release:
```bash
helm uninstall event-generator
```
The command removes all the Kubernetes components associated with the chart and deletes the release.
## Configuration
The following table lists the main configurable parameters of the {{ template "chart.name" . }} chart v{{ template "chart.version" . }} and their default values. See `values.yaml` for full list.
{{ template "chart.valuesSection" . }}


@ -32,7 +32,7 @@ In order to install the event-generator in a custom namespace run:
kubectl create ns "ns-event-generator"
helm install event-generator falcosecurity/event-generator --namespace "ns-event-generator"
```
When the event-generator is installed using the default values in the `values.yaml` file, it is deployed as a k8s job running the `run` command, and it generates activity only for the k8s audit.
For more info check the next section.
> **Tip**: List all releases using `helm list`; a release is a name used to track a specific deployment
@ -61,7 +61,7 @@ Based on commands, actions and options configured the event-generator could be d
A configuration like the one below, set in the `values.yaml` file, will deploy the event-generator using a `deployment` with the `run` command passed to it, and will generate activity only for the syscalls:
```yaml
config:
# -- The event-generator accepts two commands (run, test):
# run: runs actions.
# test: runs and tests actions.
# For more info see: https://github.com/falcosecurity/event-generator
@ -71,10 +71,10 @@ config:
# -- Runs in a loop the actions.
# If set to "true" the event-generator is deployed using a k8s deployment otherwise a k8s job.
loop: true
# -- The length of time to wait before running an action. Non-zero values should contain
# a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means no sleep. (default 100ms)
sleep: ""
grpc:
# -- Set it to true if you are deploying in "test" mode.
enabled: false
@ -85,7 +85,7 @@ config:
The following configuration will use a k8s `job` since we want to perform the k8s activity once and check that Falco reacts properly to those actions:
```yaml
config:
# -- The event-generator accepts two commands (run, test):
# run: runs actions.
# test: runs and tests actions.
# For more info see: https://github.com/falcosecurity/event-generator
@ -95,10 +95,10 @@ config:
# -- Runs in a loop the actions.
# If set to "true" the event-generator is deployed using a k8s deployment otherwise a k8s job.
loop: false
# -- The length of time to wait before running an action. Non-zero values should contain
# a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means no sleep. (default 100ms)
sleep: ""
grpc:
# -- Set it to true if you are deploying in "test" mode.
enabled: true
@ -108,7 +108,6 @@ config:
Note that **grpc.enabled must be set to true when running with the test command. Be sure that Falco exposes the grpc socket and emits output to it**.
## Uninstalling the Chart
To uninstall the `event-generator` release:
```bash
@ -118,4 +117,29 @@ The command removes all the Kubernetes components associated with the chart and
## Configuration
All the configurable parameters of the event-generator chart and their default values can be found [here](./generated/helm-values.md).
The following table lists the main configurable parameters of the event-generator chart v0.3.4 and their default values. See `values.yaml` for full list.
## Values
| Key | Type | Default | Description |
|-----|------|---------|-------------|
| affinity | object | `{}` | Affinity, like the nodeSelector but with more expressive syntax. |
| config.actions | string | `"^syscall"` | Regular expression used to select the actions to be run. |
| config.command | string | `"run"` | The event-generator accepts two commands (run, test): run: runs actions. test: runs and tests actions. For more info see: https://github.com/falcosecurity/event-generator. |
| config.grpc.bindAddress | string | `"unix:///run/falco/falco.sock"` | Path to the Falco grpc socket. |
| config.grpc.enabled | bool | `false` | Set it to true if you are deploying in "test" mode. |
| config.loop | bool | `true` | Runs in a loop the actions. If set to "true" the event-generator is deployed using a k8s deployment otherwise a k8s job. |
| config.sleep | string | `""` | The length of time to wait before running an action. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means no sleep. (default 100ms) |
| fullnameOverride | string | `""` | Used to override the chart full name. |
| image | object | `{"pullPolicy":"IfNotPresent","repository":"falcosecurity/event-generator","tag":"latest"}` | Image settings for the event-generator (see the individual keys below). |
| image.pullPolicy | string | `"IfNotPresent"` | Pull policy for the event-generator image |
| image.repository | string | `"falcosecurity/event-generator"` | Repository from where the image is pulled. |
| image.tag | string | `"latest"` | Images' tag to select a development/custom version of event-generator instead of a release. Overrides the image tag whose default is the chart appVersion. |
| imagePullSecrets | list | `[]` | Secrets used to pull the image from a private repository. |
| nameOverride | string | `""` | Used to override the chart name. |
| nodeSelector | object | `{}` | Selectors to choose a given node where to run the pods. |
| podAnnotations | object | `{}` | Annotations to be added to the pod. |
| podSecurityContext | object | `{}` | Security context for the pod. |
| replicasCount | int | `1` | Number of replicas of the event-generator (meaningful when installed as a deployment). |
| securityContext | object | `{}` | Security context for the containers. |
| tolerations | list | `[]` | Tolerations to allow the pods to be scheduled on nodes whose taints the pod tolerates. |
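Putting the table above to use, a minimal override file for a one-shot `test` run might look like this (a sketch built only from the keys listed above; adjust to your cluster):

```yaml
# my-values.yaml -- one-shot "test" run against the k8saudit actions
config:
  command: test         # run and verify actions against Falco
  actions: "^k8saudit"  # regexp selecting the actions
  loop: false           # one-shot => deployed as a k8s job
  grpc:
    enabled: true       # required in "test" mode
    bindAddress: "unix:///var/run/falco/falco.sock"
```

It would then be applied with `helm install event-generator falcosecurity/event-generator -f my-values.yaml`.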


@ -1,27 +0,0 @@
# event-generator
A Helm chart used to deploy the event-generator in Kubernetes cluster.
## Values
| Key | Type | Default | Description |
|-----|------|---------|-------------|
| affinity | object | `{}` | Affinity, like the nodeSelector but with more expressive syntax. |
| config.actions | string | `"^syscall"` | Regular expression used to select the actions to be run. |
| config.command | string | `"run"` | The event-generator accepts two commands (run, test): run: runs actions. test: runs and tests actions. For more info see: https://github.com/falcosecurity/event-generator. |
| config.grpc.bindAddress | string | `"unix:///run/falco/falco.sock"` | Path to the Falco grpc socket. |
| config.grpc.enabled | bool | `false` | Set it to true if you are deploying in "test" mode. |
| config.loop | bool | `true` | Runs in a loop the actions. If set to "true" the event-generator is deployed using a k8s deployment otherwise a k8s job. |
| config.sleep | string | `""` | The length of time to wait before running an action. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means no sleep. (default 100ms) |
| fullnameOverride | string | `""` | Used to override the chart full name. |
| image.pullPolicy | string | `"IfNotPresent"` | Pull policy for the event-generator image. |
| image.repository | string | `"falcosecurity/event-generator"` | Repository from where the image is pulled. |
| image.tag | string | `"latest"` | Image tag used to select a development/custom version of the event-generator instead of a release. Overrides the image tag whose default is the chart appVersion. |
| imagePullSecrets | list | `[]` | Secrets used to pull the image from a private repository. |
| nameOverride | string | `""` | Used to override the chart name. |
| nodeSelector | object | `{}` | Selectors used to choose the nodes where the pods will run. |
| podAnnotations | object | `{}` | Annotations to be added to the pod. |
| podSecurityContext | object | `{}` | Security context for the pod. |
| replicasCount | int | `1` | Number of replicas of the event-generator (meaningful when installed as a deployment). |
| securityContext | object | `{}` | Security context for the containers. |
| tolerations | list | `[]` | Tolerations to allow the pods to be scheduled on nodes whose taints the pod tolerates. |
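The table above maps onto a values override. As an illustrative sketch only (key names taken from the table; the test-mode combination is an assumption based on the descriptions — verify against the chart's values.yaml before use):

```yaml
# example-values.yaml -- illustrative override for the event-generator chart
config:
  command: "test"       # run and test actions (requires the Falco gRPC output)
  actions: "^syscall"   # regex selecting which actions to run
  loop: false           # false -> deployed as a k8s job instead of a deployment
  sleep: "500ms"        # wait between actions (time unit required)
  grpc:
    enabled: true       # needed when deploying in "test" mode
    bindAddress: "unix:///run/falco/falco.sock"
image:
  tag: "latest"
```

Such a file would be passed at install time, e.g. `helm install event-generator falcosecurity/event-generator -f example-values.yaml`.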


@ -23,6 +23,7 @@ spec:
command:
- /bin/event-generator
- {{ .Values.config.command }}
- --all
{{- if .Values.config.actions }}
- {{ .Values.config.actions }}
{{- end }}


@ -1,202 +0,0 @@
# Change Log
This file documents all notable changes to the `falco-exporter` Helm Chart. The release
numbering uses [semantic versioning](http://semver.org).
## v0.9.7
* No-op change to test the CI
## v0.9.6
### Minor Changes
* Bump falco-exporter to v0.8.3
## v0.9.5
### Minor Changes
* Removed unnecessary capabilities from security context
* Set the filesystem to read-only
## v0.9.4
### Minor Changes
* Add options to configure readiness/liveness probe values
## v0.9.3
### Minor Changes
* Bump falco-exporter to v0.8.2
## v0.9.2
### Minor Changes
* Add option to place Grafana dashboard in a folder
## v0.9.1
### Minor Changes
* Fix PSP allowed host path prefix to match grpc socket path change.
## v0.8.3
### Major Changes
* Changing the grpc socket path from `unix:///var/run/falco/falco.sock` to `unix:///run/falco/falco.sock`.
### Minor Changes
* Bump falco-exporter to v0.8.0
## v0.8.2
### Minor Changes
* Support configuration of updateStrategy of the Daemonset
## v0.8.0
### Major Changes
* Upgrade falco-exporter version to v0.7.0 (see the [falco-exporter changelog](https://github.com/falcosecurity/falco-exporter/releases/tag/v0.7.0))
* Add option to add labels to the Daemonset pods
## v0.7.2
### Minor Changes
* Add option to add labels to the Daemonset pods
## v0.7.1
### Minor Changes
* Fix `FalcoExporterAbsent` expression
## v0.7.0
### Major Changes
* Adds ability to create custom PrometheusRules for alerting
## v0.6.2
### Minor Changes
* Add check for availability of the 'monitoring.coreos.com/v1' API version
## v0.6.1
### Minor Changes
* Add option to add annotations to the Daemonset
## v0.6.0
### Minor Changes
* Upgrade falco-exporter version to v0.6.0 (see the [falco-exporter changelog](https://github.com/falcosecurity/falco-exporter/releases/tag/v0.6.0))
## v0.5.2
### Minor Changes
* Make image registry configurable
## v0.5.1
* Display only non-zero rates in Grafana dashboard template
## v0.5.0
### Minor Changes
* Upgrade falco-exporter version to v0.5.0
* Add metrics about Falco drops
* Make `unix://` prefix optional
## v0.4.2
### Minor Changes
* Fix Prometheus datasource name reference in grafana dashboard template
## v0.4.1
### Minor Changes
* Support release namespace configuration
## v0.4.0
### Major Changes
* Add option to enable/disable Mutual TLS for falco-exporter
## v0.3.8
### Minor Changes
* Replace extensions apiGroup/apiVersion because of deprecation
## v0.3.7
### Minor Changes
* Fixed falco-exporter PSP by allowing secret volumes
## v0.3.6
### Minor Changes
* Add SecurityContextConstraint to allow deploying in Openshift
## v0.3.5
### Minor Changes
* Added the possibility to automatically add a PSP (in combination with a Role and a RoleBinding) via the podSecurityPolicy values
* Namespaced the falco-exporter ServiceAccount and Service
## v0.3.4
### Minor Changes
* Add priorityClassName to values
## v0.3.3
### Minor Changes
* Add grafana dashboard to helm chart
## v0.3.2
### Minor Changes
* Fix for additional labels for falco-exporter servicemonitor
## v0.3.1
### Minor Changes
* Added support for deploying a Prometheus Service Monitor. It is disabled by default.
## v0.3.0
### Major Changes
* Chart moved to [falcosecurity/charts](https://github.com/falcosecurity/charts) repository
* gRPC over unix socket support (by default)
* Updated falco-exporter version to `0.3.0`
### Minor Changes
* README.md and CHANGELOG.md added


@ -1,108 +0,0 @@
# falco-exporter Helm Chart
[falco-exporter](https://github.com/falcosecurity/falco-exporter) is a Prometheus Metrics Exporter for Falco output events.
Before using this chart, you need [Falco installed](https://falco.org/docs/installation/) and running with the [gRPC Output](https://falco.org/docs/grpc/) enabled (over Unix socket by default).
This chart is compatible with the [Falco Chart](https://github.com/falcosecurity/charts/tree/master/falco) version `v1.2.0` or greater. Instructions to enable the gRPC Output in the Falco Helm Chart can be found [here](https://github.com/falcosecurity/charts/tree/master/falco#enabling-grpc). We also strongly recommend using [gRPC over Unix socket](https://github.com/falcosecurity/charts/tree/master/falco#grpc-over-unix-socket-default).
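For reference, enabling the gRPC output directly in Falco's configuration looks roughly like the fragment below (key names follow the Falco documentation linked above; when installing Falco via its Helm chart, use the chart's gRPC-related values instead of editing this file):

```yaml
# falco.yaml fragment -- illustrative; see https://falco.org/docs/grpc/ for the authoritative keys.
grpc:
  enabled: true
  bind_address: "unix:///run/falco/falco.sock"  # default Unix socket path
  threadiness: 0                                # 0 lets Falco pick automatically
grpc_output:
  enabled: true
```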
## Introduction
The chart deploys **falco-exporter** as a DaemonSet on your Kubernetes cluster. If a [Prometheus installation](https://github.com/helm/charts/tree/master/stable/prometheus) is running within your cluster, metrics provided by **falco-exporter** will be automatically discovered.
## Adding `falcosecurity` repository
Prior to installing the chart, add the `falcosecurity` charts repository:
```bash
helm repo add falcosecurity https://falcosecurity.github.io/charts
helm repo update
```
## Installing the Chart
To install the chart with the release name `falco-exporter` run:
```bash
helm install falco-exporter falcosecurity/falco-exporter
```
After a few seconds, **falco-exporter** should be running.
> **Tip**: List all releases using `helm list`; a release is a name used to track a specific deployment.
## Uninstalling the Chart
To uninstall the `falco-exporter` deployment:
```bash
helm uninstall falco-exporter
```
The command removes all the Kubernetes components associated with the chart and deletes the release.
## Configuration
The following table lists the main configurable parameters of the chart and their default values.
| Parameter | Description | Default |
| ------------------------------------------------ | ------------------------------------------------------------------------------------------------ | ---------------------------------- |
| `image.registry` | The image registry to pull from | `docker.io` |
| `image.repository` | The image repository to pull from | `falcosecurity/falco-exporter` |
| `image.tag` | The image tag to pull | `0.8.3` |
| `image.pullPolicy` | The image pull policy | `IfNotPresent` |
| `falco.grpcUnixSocketPath` | Unix socket path for connecting to a Falco gRPC server | `unix:///var/run/falco/falco.sock` |
| `falco.grpcTimeout` | gRPC connection timeout | `2m` |
| `serviceAccount.create` | Specify if a service account should be created | `true` |
| `podSecurityPolicy.create` | Specify if a PSP, Role & RoleBinding should be created | `false` |
| `serviceMonitor.enabled` | Enable deployment of a Prometheus Operator Service Monitor | `false` |
| `serviceMonitor.additionalLabels` | Add additional Labels to the Service Monitor | `{}` |
| `serviceMonitor.interval` | Specify a user defined interval for the Service Monitor | `""` |
| `serviceMonitor.scrapeTimeout` | Specify a user defined scrape timeout for the Service Monitor | `""` |
| `grafanaDashboard.enabled` | Enable the falco security dashboard, see https://github.com/falcosecurity/falco-exporter#grafana | `false` |
| `grafanaDashboard.folder` | The Grafana folder to deploy the dashboard in | `""` |
| `grafanaDashboard.namespace` | The namespace to deploy the dashboard configmap in | `default` |
| `grafanaDashboard.prometheusDatasourceName` | The prometheus datasource name to be used for the dashboard | `Prometheus` |
| `scc.create` | Create OpenShift's Security Context Constraint | `true` |
| `service.mTLS.enabled` | Enable falco-exporter server Mutual TLS feature | `false` |
| `prometheusRules.enabled` | Enable the creation of falco-exporter PrometheusRules | `false` |
| `daemonset.podLabels` | Customized Daemonset pod labels | `{}` |
| `healthChecks.livenessProbe.probesPort` | Liveness probes port | `19376` |
| `healthChecks.readinessProbe.probesPort` | Readiness probes port | `19376` |
| `healthChecks.livenessProbe.initialDelaySeconds` | Number of seconds before performing the first liveness probe | `60` |
| `healthChecks.readinessProbe.initialDelaySeconds`| Number of seconds before performing the first readiness probe | `30` |
| `healthChecks.livenessProbe.timeoutSeconds` | Number of seconds after which the liveness probe times out | `5` |
| `healthChecks.readinessProbe.timeoutSeconds` | Number of seconds after which the readiness probe times out | `5` |
| `healthChecks.livenessProbe.periodSeconds` | Time interval in seconds to perform the liveness probe | `15` |
| `healthChecks.readinessProbe.periodSeconds` | Time interval in seconds to perform the readiness probe | `15` |
Please refer to [values.yaml](./values.yaml) for the full list of configurable parameters.
Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example,
```bash
helm install falco-exporter --set falco.grpcTimeout=3m falcosecurity/falco-exporter
```
Alternatively, a YAML file that specifies the parameters' values can be provided while installing the chart. For example,
```bash
helm install falco-exporter -f values.yaml falcosecurity/falco-exporter
```
### Enable Mutual TLS
Mutual TLS for the `/metrics` endpoint can be enabled to prevent alert contents from being consumed by unauthorized components.
To install falco-exporter with Mutual TLS enabled, run:
```shell
helm install falco-exporter \
--set service.mTLS.enabled=true \
--set-file service.mTLS.server.key=/path/to/server.key \
--set-file service.mTLS.server.crt=/path/to/server.crt \
--set-file service.mTLS.ca.crt=/path/to/ca.crt \
falcosecurity/falco-exporter
```
> **Tip**: You can use the default [values.yaml](values.yaml)
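For local testing only, a throwaway CA and a server certificate for the `--set-file` flags above can be generated with standard OpenSSL commands (file names and subject CNs below are arbitrary examples, not values the chart requires):

```shell
# Generate a test CA and a server certificate signed by it.
# For testing only -- use your real PKI in production.
openssl genrsa -out ca.key 2048
openssl req -x509 -new -key ca.key -sha256 -days 365 -subj "/CN=test-ca" -out ca.crt
openssl genrsa -out server.key 2048
openssl req -new -key server.key -subj "/CN=falco-exporter" -out server.csr
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -days 365 -sha256 -out server.crt
```

The resulting `server.key`, `server.crt`, and `ca.crt` can then be passed to the `--set-file` flags shown in the install command above.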


@ -1,16 +0,0 @@
Get the falco-exporter metrics URL by running these commands:
{{- if contains "NodePort" .Values.service.type }}
export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ include "falco-exporter.fullname" . }})
export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}")
echo {{- if .Values.service.mTLS.enabled }} https{{- else }} http{{- end }}://$NODE_IP:$NODE_PORT/metrics
{{- else if contains "LoadBalancer" .Values.service.type }}
NOTE: It may take a few minutes for the LoadBalancer IP to be available.
You can watch the status by running 'kubectl get svc -w {{ template "falco-exporter.fullname" . }}'
export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ include "falco-exporter.fullname" . }} --template "{{"{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}"}}")
echo {{- if .Values.service.mTLS.enabled }} https{{- else }} http{{- end }}://$SERVICE_IP:{{ .Values.service.port }}/metrics
{{- else if contains "ClusterIP" .Values.service.type }}
export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app.kubernetes.io/name={{ include "falco-exporter.name" . }},app.kubernetes.io/instance={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}")
echo "Visit {{- if .Values.service.mTLS.enabled }} https{{- else }} http{{- end }}://127.0.0.1:{{ .Values.service.targetPort }}/metrics to use your application"
kubectl port-forward --namespace {{ .Release.Namespace }} $POD_NAME {{ .Values.service.targetPort }}
{{- end }}
echo {{- if .Values.service.mTLS.enabled }} "You'll need a valid client certificate and its corresponding key for Mutual TLS handshake" {{- end }}


@ -1,98 +0,0 @@
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "falco-exporter.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "falco-exporter.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "falco-exporter.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Common labels
*/}}
{{- define "falco-exporter.labels" -}}
{{ include "falco-exporter.selectorLabels" . }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
{{- if not .Values.skipHelm }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
helm.sh/chart: {{ include "falco-exporter.chart" . }}
{{- end }}
{{- end -}}
{{/*
Selector labels
*/}}
{{- define "falco-exporter.selectorLabels" -}}
app.kubernetes.io/name: {{ include "falco-exporter.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end -}}
{{/*
Create the name of the service account to use
*/}}
{{- define "falco-exporter.serviceAccountName" -}}
{{- if .Values.serviceAccount.create -}}
{{ default (include "falco-exporter.fullname" .) .Values.serviceAccount.name }}
{{- else -}}
{{ default "default" .Values.serviceAccount.name }}
{{- end -}}
{{- end -}}
{{/*
Create the name of the PSP to use
*/}}
{{- define "falco-exporter.podSecurityPolicyName" -}}
{{- if .Values.podSecurityPolicy.create -}}
{{ default (include "falco-exporter.fullname" .) .Values.podSecurityPolicy.name }}
{{- else -}}
{{ default "default" .Values.podSecurityPolicy.name }}
{{- end -}}
{{- end -}}
{{/*
Extract the unixSocket's directory path
*/}}
{{- define "falco-exporter.unixSocketDir" -}}
{{- if .Values.falco.grpcUnixSocketPath -}}
{{- .Values.falco.grpcUnixSocketPath | trimPrefix "unix://" | dir -}}
{{- end -}}
{{- end -}}
{{/*
Return the appropriate apiVersion for rbac.
*/}}
{{- define "rbac.apiVersion" -}}
{{- if .Capabilities.APIVersions.Has "rbac.authorization.k8s.io/v1" }}
{{- print "rbac.authorization.k8s.io/v1" -}}
{{- else -}}
{{- print "rbac.authorization.k8s.io/v1beta1" -}}
{{- end -}}
{{- end -}}


@ -1,132 +0,0 @@
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: {{ include "falco-exporter.fullname" . }}
labels:
{{- include "falco-exporter.labels" . | nindent 4 }}
namespace: {{ .Release.Namespace }}
spec:
selector:
matchLabels:
{{- include "falco-exporter.selectorLabels" . | nindent 6 }}
updateStrategy:
{{ toYaml .Values.daemonset.updateStrategy | indent 4 }}
template:
metadata:
labels:
{{- include "falco-exporter.selectorLabels" . | nindent 8 }}
{{- if .Values.daemonset.podLabels }}
{{ toYaml .Values.daemonset.podLabels | nindent 8 }}
{{- end }}
{{- if .Values.daemonset.annotations }}
annotations:
{{ toYaml .Values.daemonset.annotations | nindent 8 }}
{{- end }}
spec:
{{- with .Values.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- if .Values.priorityClassName }}
priorityClassName: "{{ .Values.priorityClassName }}"
{{- end }}
serviceAccountName: {{ include "falco-exporter.serviceAccountName" . }}
securityContext:
{{- toYaml .Values.podSecurityContext | nindent 8 }}
containers:
- name: {{ .Chart.Name }}
securityContext:
{{- toYaml .Values.securityContext | nindent 12 }}
image: "{{ .Values.image.registry }}/{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
args:
- /usr/bin/falco-exporter
{{- if .Values.falco.grpcUnixSocketPath }}
- --client-socket={{ .Values.falco.grpcUnixSocketPath }}
{{- else }}
- --client-hostname={{ .Values.falco.grpcHostname }}
- --client-port={{ .Values.falco.grpcPort }}
{{- end }}
- --timeout={{ .Values.falco.grpcTimeout }}
- --listen-address=0.0.0.0:{{ .Values.service.port }}
{{- if .Values.service.mTLS.enabled }}
- --server-ca=/etc/falco/server-certs/ca.crt
- --server-cert=/etc/falco/server-certs/server.crt
- --server-key=/etc/falco/server-certs/server.key
{{- end }}
ports:
- name: metrics
containerPort: {{ .Values.service.targetPort }}
protocol: TCP
livenessProbe:
initialDelaySeconds: {{ .Values.healthChecks.livenessProbe.initialDelaySeconds }}
timeoutSeconds: {{ .Values.healthChecks.livenessProbe.timeoutSeconds }}
periodSeconds: {{ .Values.healthChecks.livenessProbe.periodSeconds }}
httpGet:
path: /liveness
port: {{ .Values.healthChecks.livenessProbe.probesPort }}
readinessProbe:
initialDelaySeconds: {{ .Values.healthChecks.readinessProbe.initialDelaySeconds }}
timeoutSeconds: {{ .Values.healthChecks.readinessProbe.timeoutSeconds }}
periodSeconds: {{ .Values.healthChecks.readinessProbe.periodSeconds }}
httpGet:
path: /readiness
port: {{ .Values.healthChecks.readinessProbe.probesPort }}
resources:
{{- toYaml .Values.resources | nindent 12 }}
volumeMounts:
{{- if .Values.falco.grpcUnixSocketPath }}
- mountPath: {{ include "falco-exporter.unixSocketDir" . }}
name: falco-socket-dir
readOnly: true
{{- else }}
- mountPath: /etc/falco/certs
name: certs-volume
readOnly: true
{{- end }}
{{- if .Values.service.mTLS.enabled }}
- mountPath: /etc/falco/server-certs
name: server-certs-volume
readOnly: true
{{- end }}
volumes:
{{- if .Values.falco.grpcUnixSocketPath }}
- name: falco-socket-dir
hostPath:
path: {{ include "falco-exporter.unixSocketDir" . }}
{{- else }}
- name: certs-volume
secret:
secretName: {{ include "falco-exporter.fullname" . }}-certs
items:
- key: client.key
path: client.key
- key: client.crt
path: client.crt
- key: ca.crt
path: ca.crt
{{- end }}
{{- if .Values.service.mTLS.enabled }}
- name: server-certs-volume
secret:
secretName: {{ include "falco-exporter.fullname" . }}-server-certs
items:
- key: server.key
path: server.key
- key: server.crt
path: server.crt
- key: ca.crt
path: ca.crt
{{- end }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}


@ -1,315 +0,0 @@
{{- if .Values.grafanaDashboard.enabled }}
apiVersion: v1
data:
grafana-falco.json: |-
{
"__inputs": [
],
"__requires": [
{
"type": "grafana",
"id": "grafana",
"name": "Grafana",
"version": "6.7.3"
},
{
"type": "panel",
"id": "graph",
"name": "Graph",
"version": ""
},
{
"type": "datasource",
"id": "prometheus",
"name": "Prometheus",
"version": "1.0.0"
},
{
"type": "panel",
"id": "table",
"name": "Table",
"version": ""
}
],
"annotations": {
"list": [
{
"builtIn": 1,
"datasource": "-- Grafana --",
"enable": true,
"hide": true,
"iconColor": "rgba(0, 211, 255, 1)",
"name": "Annotations & Alerts",
"type": "dashboard"
}
]
},
"editable": true,
"gnetId": null,
"graphTooltip": 0,
"id": null,
"links": [],
"panels": [
{
"aliasColors": {},
"bars": false,
"dashLength": 10,
"dashes": false,
"datasource": "$datasource",
"description": "",
"fill": 1,
"fillGradient": 0,
"gridPos": {
"h": 11,
"w": 24,
"x": 0,
"y": 0
},
"hiddenSeries": false,
"id": 2,
"legend": {
"alignAsTable": true,
"avg": false,
"current": false,
"max": false,
"min": false,
"rightSide": true,
"show": true,
"total": false,
"values": false
},
"lines": true,
"linewidth": 1,
"nullPointMode": "null as zero",
"options": {
"dataLinks": []
},
"percentage": false,
"pointradius": 2,
"points": false,
"renderer": "flot",
"seriesOverrides": [],
"spaceLength": 10,
"stack": true,
"steppedLine": false,
"targets": [
{
"expr": "rate(falco_events[5m]) > 0",
"interval": "",
"intervalFactor": 1,
"legendFormat": "{{`{{rule}} (node=\"{{kubernetes_node}}\",ns=\"{{k8s_ns_name}}\",pod=\"{{k8s_pod_name}}\")"`}},
"refId": "A"
}
],
"thresholds": [],
"timeFrom": null,
"timeRegions": [],
"timeShift": null,
"title": "Events rate",
"tooltip": {
"shared": true,
"sort": 0,
"value_type": "individual"
},
"type": "graph",
"xaxis": {
"buckets": null,
"mode": "time",
"name": null,
"show": true,
"values": []
},
"yaxes": [
{
"format": "short",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
},
{
"format": "short",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
}
],
"yaxis": {
"align": false,
"alignLevel": null
}
},
{
"columns": [],
"datasource": "$datasource",
"fontSize": "100%",
"gridPos": {
"h": 10,
"w": 24,
"x": 0,
"y": 11
},
"id": 4,
"links": [],
"pageSize": null,
"showHeader": true,
"sort": {
"col": null,
"desc": false
},
"styles": [
{
"alias": "Time",
"align": "auto",
"dateFormat": "YYYY-MM-DD HH:mm:ss",
"pattern": "Time",
"type": "date"
},
{
"alias": "",
"align": "auto",
"colorMode": null,
"colors": [
"rgba(245, 54, 54, 0.9)",
"rgba(237, 129, 40, 0.89)",
"rgba(50, 172, 45, 0.97)"
],
"dateFormat": "YYYY-MM-DD HH:mm:ss",
"decimals": 2,
"link": false,
"mappingType": 1,
"pattern": "/__name__|instance|job|kubernetes_name|(__name|helm_|app_).*/",
"sanitize": false,
"thresholds": [],
"type": "hidden",
"unit": "short"
},
{
"alias": "Count",
"align": "auto",
"colorMode": null,
"colors": [
"rgba(245, 54, 54, 0.9)",
"rgba(237, 129, 40, 0.89)",
"rgba(50, 172, 45, 0.97)"
],
"dateFormat": "YYYY-MM-DD HH:mm:ss",
"decimals": 0,
"mappingType": 1,
"pattern": "Value",
"thresholds": [],
"type": "number",
"unit": "short"
},
{
"alias": "",
"align": "left",
"colorMode": null,
"colors": [
"rgba(245, 54, 54, 0.9)",
"rgba(237, 129, 40, 0.89)",
"rgba(50, 172, 45, 0.97)"
],
"dateFormat": "YYYY-MM-DD HH:mm:ss",
"decimals": 0,
"mappingType": 1,
"pattern": "priority",
"thresholds": [
""
],
"type": "number",
"unit": "none",
"valueMaps": [
{
"text": "5",
"value": "5"
}
]
},
{
"alias": "",
"align": "left",
"colorMode": null,
"colors": [
"rgba(245, 54, 54, 0.9)",
"rgba(237, 129, 40, 0.89)",
"rgba(50, 172, 45, 0.97)"
],
"decimals": 2,
"pattern": "/.*/",
"thresholds": [],
"type": "string",
"unit": "short"
}
],
"targets": [
{
"expr": "falco_events",
"format": "table",
"instant": true,
"refId": "A"
}
],
"timeFrom": null,
"timeShift": null,
"title": "Totals",
"transform": "table",
"transparent": true,
"type": "table"
}
],
"schemaVersion": 22,
"style": "dark",
"tags": [],
"templating": {
"list": [
{
"current": {
"selected": false,
"text": "{{ .Values.grafanaDashboard.prometheusDatasourceName }}",
"value": "{{ .Values.grafanaDashboard.prometheusDatasourceName }}"
},
"hide": 0,
"includeAll": false,
"label": null,
"multi": false,
"name": "datasource",
"options": [],
"query": "prometheus",
"refresh": 1,
"regex": "",
"skipUrlSync": false,
"type": "datasource"
}
]
},
"time": {
"from": "now-6h",
"to": "now"
},
"timepicker": {},
"timezone": "",
"title": "Falco Dashboard",
"uid": "FvUFlfuZz"
}
kind: ConfigMap
metadata:
labels:
grafana_dashboard: "1"
{{- if .Values.grafanaDashboard.folder }}
annotations:
k8s-sidecar-target-directory: /tmp/dashboards/{{ .Values.grafanaDashboard.folder }}
{{- end }}
name: grafana-falco
{{- if .Values.grafanaDashboard.namespace }}
namespace: {{ .Values.grafanaDashboard.namespace }}
{{- else }}
namespace: {{ .Release.Namespace }}
{{- end}}
{{- end -}}


@ -1,28 +0,0 @@
{{- if and .Values.podSecurityPolicy.create (.Capabilities.APIVersions.Has "policy/v1beta1") }}
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
name: {{ include "falco-exporter.podSecurityPolicyName" . }}
labels:
{{- include "falco-exporter.labels" . | nindent 4 }}
{{- with .Values.podSecurityPolicy.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
spec:
allowPrivilegeEscalation: false
allowedHostPaths:
- pathPrefix: "/run/falco"
readOnly: true
fsGroup:
rule: RunAsAny
runAsUser:
rule: RunAsAny
seLinux:
rule: RunAsAny
supplementalGroups:
rule: RunAsAny
volumes:
- 'hostPath'
- 'secret'
{{- end -}}


@ -1,81 +0,0 @@
{{- if and .Values.prometheusRules.enabled .Values.serviceMonitor.enabled }}
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
name: {{ include "falco-exporter.fullname" . }}
{{- if .Values.prometheusRules.namespace }}
namespace: {{ .Values.prometheusRules.namespace }}
{{- end }}
labels:
{{- include "falco-exporter.labels" . | nindent 4 }}
{{- if .Values.prometheusRules.additionalLabels }}
{{- toYaml .Values.prometheusRules.additionalLabels | nindent 4 }}
{{- end }}
spec:
groups:
- name: falco-exporter
rules:
{{- if .Values.prometheusRules.enabled }}
- alert: FalcoExporterAbsent
expr: absent(up{job="{{- include "falco-exporter.fullname" . }}"})
for: 10m
annotations:
summary: Falco Exporter has disappeared from Prometheus service discovery.
description: No metrics are being scraped from Falco. No events will trigger any alerts.
labels:
severity: critical
{{- end }}
{{- if .Values.prometheusRules.alerts.warning.enabled }}
- alert: FalcoWarningEventsRateHigh
annotations:
summary: Falco is experiencing a high rate of warning events
description: A high rate of warning events is being detected by Falco
expr: rate(falco_events{priority="4"}[{{ .Values.prometheusRules.alerts.warning.rate_interval }}]) > {{ .Values.prometheusRules.alerts.warning.threshold }}
for: 15m
labels:
severity: warning
{{- end }}
{{- if .Values.prometheusRules.alerts.error.enabled }}
- alert: FalcoErrorEventsRateHigh
annotations:
summary: Falco is experiencing a high rate of error events
description: A high rate of error events is being detected by Falco
expr: rate(falco_events{priority="3"}[{{ .Values.prometheusRules.alerts.error.rate_interval }}]) > {{ .Values.prometheusRules.alerts.error.threshold }}
for: 15m
labels:
severity: warning
{{- end }}
{{- if .Values.prometheusRules.alerts.critical.enabled }}
- alert: FalcoCriticalEventsRateHigh
annotations:
summary: Falco is experiencing a high rate of critical events
description: A high rate of critical events is being detected by Falco
expr: rate(falco_events{priority="2"}[{{ .Values.prometheusRules.alerts.critical.rate_interval }}]) > {{ .Values.prometheusRules.alerts.critical.threshold }}
for: 15m
labels:
severity: critical
{{- end }}
{{- if .Values.prometheusRules.alerts.alert.enabled }}
- alert: FalcoAlertEventsRateHigh
annotations:
summary: Falco is experiencing a high rate of alert events
description: A high rate of alert events is being detected by Falco
expr: rate(falco_events{priority="1"}[{{ .Values.prometheusRules.alerts.alert.rate_interval }}]) > {{ .Values.prometheusRules.alerts.alert.threshold }}
for: 5m
labels:
severity: critical
{{- end }}
{{- if .Values.prometheusRules.alerts.emergency.enabled }}
- alert: FalcoEmergencyEventsRateHigh
annotations:
summary: Falco is experiencing a high rate of emergency events
description: A high rate of emergency events is being detected by Falco
expr: rate(falco_events{priority="0"}[{{ .Values.prometheusRules.alerts.emergency.rate_interval }}]) > {{ .Values.prometheusRules.alerts.emergency.threshold }}
for: 1m
labels:
severity: critical
{{- end }}
{{- with .Values.prometheusRules.additionalAlerts }}
{{ . | nindent 4 }}
{{- end }}
{{- end }}


@ -1,22 +0,0 @@
{{- if .Values.podSecurityPolicy.create -}}
kind: Role
apiVersion: {{ template "rbac.apiVersion" . }}
metadata:
name: {{ include "falco-exporter.podSecurityPolicyName" . }}
namespace: {{ .Release.Namespace }}
labels:
{{- include "falco-exporter.labels" . | nindent 4 }}
{{- with .Values.podSecurityPolicy.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
rules:
- apiGroups:
- policy
resources:
- podsecuritypolicies
resourceNames:
- {{ include "falco-exporter.podSecurityPolicyName" . }}
verbs:
- use
{{- end -}}


@ -1,20 +0,0 @@
{{- if .Values.podSecurityPolicy.create -}}
kind: RoleBinding
apiVersion: {{ template "rbac.apiVersion" . }}
metadata:
name: {{ include "falco-exporter.podSecurityPolicyName" . }}
namespace: {{ .Release.Namespace }}
labels:
{{- include "falco-exporter.labels" . | nindent 4 }}
{{- with .Values.podSecurityPolicy.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
subjects:
- kind: ServiceAccount
name: {{ include "falco-exporter.serviceAccountName" . }}
roleRef:
kind: Role
name: {{ include "falco-exporter.podSecurityPolicyName" . }}
apiGroup: rbac.authorization.k8s.io
{{- end -}}


@ -1,24 +0,0 @@
{{- if .Values.certs }}
apiVersion: v1
kind: Secret
metadata:
name: {{ include "falco-exporter.fullname" . }}-certs
namespace: {{ .Release.Namespace }}
labels:
{{- include "falco-exporter.labels" . | nindent 4 }}
type: Opaque
data:
{{- if .Values.certs }}
{{- if and .Values.certs.ca .Values.certs.ca.crt }}
ca.crt: {{ .Values.certs.ca.crt | b64enc | quote }}
{{- end }}
{{- if .Values.certs.client }}
{{- if .Values.certs.client.key }}
client.key: {{ .Values.certs.client.key | b64enc | quote }}
{{- end }}
{{- if .Values.certs.client.crt }}
client.crt: {{ .Values.certs.client.crt | b64enc | quote }}
{{- end }}
{{- end }}
{{- end }}
{{- end }}


@ -1,41 +0,0 @@
{{- if and .Values.scc.create (.Capabilities.APIVersions.Has "security.openshift.io/v1") }}
apiVersion: security.openshift.io/v1
kind: SecurityContextConstraints
metadata:
annotations:
kubernetes.io/description: |
This provides the minimum requirements for falco-exporter to run in OpenShift.
name: {{ template "falco-exporter.fullname" . }}
namespace: {{ .Release.Namespace }}
labels:
{{- include "falco-exporter.labels" . | nindent 4 }}
allowHostDirVolumePlugin: true
allowHostIPC: false
allowHostNetwork: false
allowHostPID: false
allowHostPorts: false
allowPrivilegeEscalation: false
allowPrivilegedContainer: false
allowedCapabilities: []
allowedUnsafeSysctls: []
defaultAddCapabilities: []
fsGroup:
type: RunAsAny
groups: []
priority: 0
readOnlyRootFilesystem: false
requiredDropCapabilities: []
runAsUser:
type: RunAsAny
seLinuxContext:
type: RunAsAny
seccompProfiles:
- '*'
supplementalGroups:
type: RunAsAny
users:
- system:serviceaccount:{{ .Release.Namespace }}:{{ include "falco-exporter.serviceAccountName" . }}
volumes:
- hostPath
- secret
{{- end }}


@ -1,14 +0,0 @@
{{- if .Values.service.mTLS.enabled }}
apiVersion: v1
kind: Secret
metadata:
name: {{ include "falco-exporter.fullname" . }}-server-certs
namespace: {{ .Release.Namespace }}
labels:
{{- include "falco-exporter.labels" . | nindent 4 }}
type: Opaque
data:
server.crt: {{ .Values.service.mTLS.server.crt | b64enc | quote }}
server.key: {{ .Values.service.mTLS.server.key | b64enc | quote }}
ca.crt: {{ .Values.service.mTLS.ca.crt | b64enc | quote }}
{{- end }}


@ -1,42 +0,0 @@
apiVersion: v1
kind: Service
metadata:
  name: {{ include "falco-exporter.fullname" . }}
{{- if .Values.service.annotations }}
  annotations:
{{ toYaml .Values.service.annotations | indent 4 }}
{{- end }}
  labels:
    {{- include "falco-exporter.labels" . | nindent 4 }}
{{- if .Values.service.labels }}
{{ toYaml .Values.service.labels | indent 4 }}
{{- end }}
  namespace: {{ .Release.Namespace }}
spec:
{{- if .Values.service.clusterIP }}
  clusterIP: {{ .Values.service.clusterIP }}
{{- end }}
{{- if .Values.service.externalIPs }}
  externalIPs:
{{ toYaml .Values.service.externalIPs | indent 4 }}
{{- end }}
{{- if .Values.service.loadBalancerIP }}
  loadBalancerIP: {{ .Values.service.loadBalancerIP }}
{{- end }}
{{- if .Values.service.loadBalancerSourceRanges }}
  loadBalancerSourceRanges:
  {{- range $cidr := .Values.service.loadBalancerSourceRanges }}
    - {{ $cidr }}
  {{- end }}
{{- end }}
  type: {{ .Values.service.type }}
  ports:
    - port: {{ .Values.service.port }}
      {{- if (and (eq .Values.service.type "NodePort") (not (empty .Values.service.nodePort))) }}
      nodePort: {{ .Values.service.nodePort }}
      {{- end }}
      targetPort: {{ .Values.service.targetPort }}
      protocol: TCP
      name: metrics
  selector:
    {{- include "falco-exporter.selectorLabels" . | nindent 4 }}


@ -1,24 +0,0 @@
{{- if and (.Capabilities.APIVersions.Has "monitoring.coreos.com/v1") .Values.serviceMonitor.enabled }}
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: {{ include "falco-exporter.fullname" . }}
  labels:
    {{- include "falco-exporter.labels" . | nindent 4 }}
    {{- range $key, $value := .Values.serviceMonitor.additionalLabels }}
    {{ $key }}: {{ $value | quote }}
    {{- end }}
  namespace: {{ .Release.Namespace }}
spec:
  endpoints:
    - port: metrics
      {{- if .Values.serviceMonitor.interval }}
      interval: {{ .Values.serviceMonitor.interval }}
      {{- end }}
      {{- if .Values.serviceMonitor.scrapeTimeout }}
      scrapeTimeout: {{ .Values.serviceMonitor.scrapeTimeout }}
      {{- end }}
  selector:
    matchLabels:
      {{- include "falco-exporter.selectorLabels" . | nindent 6 }}
{{- end }}


@ -1,15 +0,0 @@
apiVersion: v1
kind: Pod
metadata:
  name: "{{ include "falco-exporter.fullname" . }}-test-connection"
  labels:
    {{- include "falco-exporter.labels" . | nindent 4 }}
  annotations:
    "helm.sh/hook": test-success
spec:
  containers:
    - name: wget
      image: busybox
      command: ['wget']
      args: ['{{ include "falco-exporter.fullname" . }}:{{ .Values.service.port }}/metrics']
  restartPolicy: Never


@ -1,165 +0,0 @@
# Default values for falco-exporter.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

service:
  type: ClusterIP
  clusterIP: None
  port: 9376
  targetPort: 9376
  nodePort:
  labels: {}
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "9376"
  # Enable Mutual TLS for HTTP metrics server
  mTLS:
    enabled: false

healthChecks:
  livenessProbe:
    # liveness probes port
    probesPort: 19376
    # -- Tells the kubelet that it should wait X seconds before performing the first probe.
    initialDelaySeconds: 60
    # -- Number of seconds after which the probe times out.
    timeoutSeconds: 5
    # -- Specifies that the kubelet should perform the check every X seconds.
    periodSeconds: 15
  readinessProbe:
    # readiness probes port
    probesPort: 19376
    # -- Tells the kubelet that it should wait X seconds before performing the first probe.
    initialDelaySeconds: 30
    # -- Number of seconds after which the probe times out.
    timeoutSeconds: 5
    # -- Specifies that the kubelet should perform the check every X seconds.
    periodSeconds: 15

image:
  registry: docker.io
  repository: falcosecurity/falco-exporter
  tag: 0.8.3
  pullPolicy: IfNotPresent

imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""
priorityClassName:

falco:
  grpcUnixSocketPath: "unix:///run/falco/falco.sock"
  grpcTimeout: 2m

serviceAccount:
  # Specifies whether a service account should be created
  create: true
  # Annotations to add to the service account
  annotations: {}
  # The name of the service account to use.
  # If not set and create is true, a name is generated using the fullname template.
  # If set and create is false, an already existing serviceAccount must be provided.
  name:

podSecurityPolicy:
  # Specifies whether a PSP, Role and RoleBinding should be created
  create: false
  # Annotations to add to the PSP, Role and RoleBinding
  annotations: {}
  # The name of the PSP, Role and RoleBinding to use.
  # If not set and create is true, a name is generated using the fullname template
  name:

podSecurityContext:
  {}
  # fsGroup: 2000

daemonset:
  # Perform rolling updates by default in the DaemonSet agent
  # ref: https://kubernetes.io/docs/tasks/manage-daemon/update-daemon-set/
  updateStrategy:
    # You can also customize maxUnavailable or minReadySeconds if you
    # need it
    type: RollingUpdate
  # Annotations to add to the DaemonSet pods
  annotations: {}
  podLabels: {}

securityContext:
  capabilities:
    drop:
      - ALL
  readOnlyRootFilesystem: true
  allowPrivilegeEscalation: false
  privileged: false
  seccompProfile:
    type: RuntimeDefault

resources:
  {}
  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases chances charts run on environments with little
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  # limits:
  #   cpu: 100m
  #   memory: 128Mi
  # requests:
  #   cpu: 100m
  #   memory: 128Mi

nodeSelector: {}

# Allow falco-exporter to run on Kubernetes 1.6 masters.
tolerations:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master

affinity: {}

serviceMonitor:
  # Enable the deployment of a Service Monitor for the Prometheus Operator.
  enabled: false
  # Specify additional labels to be added on the Service Monitor.
  additionalLabels: {}
  # Specify a user-defined interval. When not specified, the Prometheus default interval is used.
  interval: ""
  # Specify a user-defined scrape timeout. When not specified, the Prometheus default scrape timeout is used.
  scrapeTimeout: ""

grafanaDashboard:
  enabled: false
  folder:
  namespace: default
  prometheusDatasourceName: Prometheus

scc:
  # true here enables creation of Security Context Constraints in OpenShift
  create: true

# Create PrometheusRules for alerting on priority events
prometheusRules:
  enabled: false
  alerts:
    warning:
      enabled: true
      rate_interval: "5m"
      threshold: 0
    error:
      enabled: true
      rate_interval: "5m"
      threshold: 0
    critical:
      enabled: true
      rate_interval: "5m"
      threshold: 0
    alert:
      enabled: true
      rate_interval: "5m"
      threshold: 0
    emergency:
      enabled: true
      rate_interval: "1m"
      threshold: 0
  additionalAlerts: {}
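Each alert tier above pairs a `rate_interval` with a `threshold`: the generated PrometheusRule fires when the per-second rate of events at that priority, over the given window, exceeds the threshold. A sketch of the kind of expression such a rule evaluates (the `falco_events` metric name and the `priority` label value are assumptions about the exporter's output, not taken from this chart):

```
rate(falco_events{priority="Critical"}[5m]) > 0
```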


@ -0,0 +1,39 @@
# Change Log
This file documents all notable changes to the Falco Talon Helm Chart. The release
numbering uses [semantic versioning](http://semver.org).
## 0.3.0 - 2025-02-07
- bump up version to `v0.3.0`
- fix missing usage of the `imagePullSecrets`
## 0.2.3 - 2024-12-18
- add a Grafana dashboard for the Prometheus metrics
## 0.2.1 - 2024-12-09
- bump up version to `v0.2.1` for bug fixes
## 0.2.0 - 2024-11-26
- configure pod to not rollout on configmap change
- configure pod to rollout on secret change
- add config.rulesOverride allowing users to override config rules
## 0.1.3 - 2024-11-08
- change the key for the range over the rules files
## 0.1.2 - 2024-10-14
- remove all refs to the previous org
## 0.1.1 - 2024-10-01
- Use version `0.1.1`
- Fix wrong port for the `serviceMonitor`
## 0.1.0 - 2024-09-05
- First release


@ -0,0 +1,18 @@
apiVersion: v1
appVersion: 0.3.0
description: React to the events from Falco
name: falco-talon
version: 0.3.0
keywords:
  - falco
  - monitoring
  - security
  - response-engine
home: https://github.com/falcosecurity/falco-talon
sources:
  - https://github.com/falcosecurity/falco-talon
maintainers:
  - name: Issif
    email: issif+github@gadz.org
  - name: IgorEulalio
    email: igoreulalio.ie@gmail.com


@ -0,0 +1,76 @@
# Falco Talon
![release](https://flat.badgen.net/github/release/falcosecurity/falco-talon/latest?color=green) ![last commit](https://flat.badgen.net/github/last-commit/falcosecurity/falco-talon) ![licence](https://flat.badgen.net/badge/license/Apache2.0/blue) ![docker pulls](https://flat.badgen.net/docker/pulls/issif/falco-talon?icon=docker)
## Description
`Falco Talon` is a Response Engine for managing threats in your Kubernetes clusters. It enhances the solutions proposed by the Falco community with a no-code, tailor-made solution. With simple rules, you can react to `events` from [`Falco`](https://falco.org) in milliseconds.
## Architecture
`Falco Talon` can receive the `events` from [`Falco`](https://falco.org) or [`Falcosidekick`](https://github.com/falcosecurity/falcosidekick):
```mermaid
flowchart LR
falco
falcosidekick
falco-talon
falco -- event --> falcosidekick
falco -- event --> falco-talon
falcosidekick -- event --> falco-talon
kubernetes -- context --> falco-talon
falco-talon -- action --> aws
falco-talon -- output --> minio
falco-talon -- action --> kubernetes
falco-talon -- notification --> slack
```
## Documentation
The full documentation is available on its own website: [https://docs.falco-talon.org/docs](https://docs.falco-talon.org/docs).
## Installation
```shell
helm repo add falcosecurity https://falcosecurity.github.io/charts
helm repo update
helm install falco-talon falcosecurity/falco-talon -n falco --create-namespace -f values.yaml
```
### Update the rules
Update `rules.yaml` then:
```shell
helm upgrade falco-talon falcosecurity/falco-talon -n falco -f values.yaml
```
### Uninstall Falco Talon
```shell
helm delete falco-talon -n falco
```
## Configuration
{{ template "chart.valuesSection" . }}
## Connect Falcosidekick
Once you have installed `Falco Talon` with Helm, you need to connect `Falcosidekick` by adding the flag `--set falcosidekick.config.talon.address=http://falco-talon:2803`:
```shell
helm upgrade -i falco falcosecurity/falco --namespace falco \
--create-namespace \
--set tty=true \
--set falcosidekick.enabled=true \
--set falcosidekick.config.talon.address=http://falco-talon:2803
```
## License
Falco Talon is licensed to you under the **Apache 2.0** open source license.
## Author
Thomas Labarussias (https://github.com/Issif)


@ -0,0 +1,184 @@
# Falco Talon
![release](https://flat.badgen.net/github/release/falcosecurity/falco-talon/latest?color=green) ![last commit](https://flat.badgen.net/github/last-commit/falcosecurity/falco-talon) ![licence](https://flat.badgen.net/badge/license/Apache2.0/blue) ![docker pulls](https://flat.badgen.net/docker/pulls/issif/falco-talon?icon=docker)
## Description
`Falco Talon` is a Response Engine for managing threats in your Kubernetes clusters. It enhances the solutions proposed by the Falco community with a no-code, tailor-made solution. With simple rules, you can react to `events` from [`Falco`](https://falco.org) in milliseconds.
## Architecture
`Falco Talon` can receive the `events` from [`Falco`](https://falco.org) or [`Falcosidekick`](https://github.com/falcosecurity/falcosidekick):
```mermaid
flowchart LR
falco
falcosidekick
falco-talon
falco -- event --> falcosidekick
falco -- event --> falco-talon
falcosidekick -- event --> falco-talon
kubernetes -- context --> falco-talon
falco-talon -- action --> aws
falco-talon -- output --> minio
falco-talon -- action --> kubernetes
falco-talon -- notification --> slack
```
## Documentation
The full documentation is available on its own website: [https://docs.falco-talon.org/docs](https://docs.falco-talon.org/docs).
## Installation
```shell
helm repo add falcosecurity https://falcosecurity.github.io/charts
helm repo update
helm install falco-talon falcosecurity/falco-talon -n falco --create-namespace -f values.yaml
```
### Update the rules
Update `rules.yaml` then:
```shell
helm upgrade falco-talon falcosecurity/falco-talon -n falco -f values.yaml
```
### Uninstall Falco Talon
```shell
helm delete falco-talon -n falco
```
## Configuration
## Values
| Key | Type | Default | Description |
|-----|------|---------|-------------|
| affinity | object | `{}` | affinity |
| config | object | `{"aws":{"accesKey":"","externalId":"","region":"","roleArn":"","secretKey":""},"deduplication":{"leaderElection":true,"timeWindowSeconds":5},"defaultNotifiers":["k8sevents"],"listenAddress":"0.0.0.0","listenPort":2803,"minio":{"accessKey":"","endpoint":"","secretKey":"","useSsl":false},"notifiers":{"elasticsearch":{"createIndexTemplate":true,"numberOfReplicas":1,"numberOfShards":1,"url":""},"loki":{"apiKey":"","customHeaders":[],"hostPort":"","tenant":"","user":""},"slack":{"footer":"https://github.com/falcosecurity/falco-talon","format":"long","icon":"https://upload.wikimedia.org/wikipedia/commons/2/26/Circaetus_gallicus_claw.jpg","username":"Falco Talon","webhookUrl":""},"smtp":{"format":"html","from":"","hostPort":"","password":"","tls":false,"to":"","user":""},"webhook":{"url":""}},"otel":{"collectorEndpoint":"","collectorPort":4317,"collectorUseInsecureGrpc":false,"metricsEnabled":false,"tracesEnabled":false},"printAllEvents":false,"rulesOverride":"- action: Terminate Pod\n actionner: kubernetes:terminate\n parameters:\n ignore_daemonsets: true\n ignore_statefulsets: true\n grace_period_seconds: 20\n","watchRules":true}` | config of Falco Talon (See https://docs.falco-talon.org/docs/configuration/) |
| config.aws | object | `{"accesKey":"","externalId":"","region":"","roleArn":"","secretKey":""}` | aws |
| config.aws.accesKey | string | `""` | access key (if not specified, default access_key from provider credential chain will be used) |
| config.aws.externalId | string | `""` | external id |
| config.aws.region | string | `""` | region (if not specified, default region from provider credential chain will be used) |
| config.aws.roleArn | string | `""` | role arn |
| config.aws.secretKey | string | `""` | secret key (if not specified, default secret_key from provider credential chain will be used) |
| config.deduplication | object | `{"leaderElection":true,"timeWindowSeconds":5}` | deduplication of the Falco events |
| config.deduplication.leaderElection | bool | `true` | enable the leader election for cluster mode |
| config.deduplication.timeWindowSeconds | int | `5` | duration in seconds for the deduplication time window |
| config.defaultNotifiers | list | `["k8sevents"]` | default notifiers for all rules |
| config.listenAddress | string | `"0.0.0.0"` | listen address |
| config.listenPort | int | `2803` | listen port |
| config.minio | object | `{"accessKey":"","endpoint":"","secretKey":"","useSsl":false}` | minio |
| config.minio.accessKey | string | `""` | access key |
| config.minio.endpoint | string | `""` | endpoint |
| config.minio.secretKey | string | `""` | secret key |
| config.minio.useSsl | bool | `false` | use ssl |
| config.notifiers | object | `{"elasticsearch":{"createIndexTemplate":true,"numberOfReplicas":1,"numberOfShards":1,"url":""},"loki":{"apiKey":"","customHeaders":[],"hostPort":"","tenant":"","user":""},"slack":{"footer":"https://github.com/falcosecurity/falco-talon","format":"long","icon":"https://upload.wikimedia.org/wikipedia/commons/2/26/Circaetus_gallicus_claw.jpg","username":"Falco Talon","webhookUrl":""},"smtp":{"format":"html","from":"","hostPort":"","password":"","tls":false,"to":"","user":""},"webhook":{"url":""}}` | notifiers (See https://docs.falco-talon.org/docs/notifiers/list/ for the settings) |
| config.notifiers.elasticsearch | object | `{"createIndexTemplate":true,"numberOfReplicas":1,"numberOfShards":1,"url":""}` | elasticsearch |
| config.notifiers.elasticsearch.createIndexTemplate | bool | `true` | create the index template |
| config.notifiers.elasticsearch.numberOfReplicas | int | `1` | number of replicas |
| config.notifiers.elasticsearch.numberOfShards | int | `1` | number of shards |
| config.notifiers.elasticsearch.url | string | `""` | url |
| config.notifiers.loki | object | `{"apiKey":"","customHeaders":[],"hostPort":"","tenant":"","user":""}` | loki |
| config.notifiers.loki.apiKey | string | `""` | api key |
| config.notifiers.loki.customHeaders | list | `[]` | custom headers |
| config.notifiers.loki.hostPort | string | `""` | host:port |
| config.notifiers.loki.tenant | string | `""` | tenant |
| config.notifiers.loki.user | string | `""` | user |
| config.notifiers.slack | object | `{"footer":"https://github.com/falcosecurity/falco-talon","format":"long","icon":"https://upload.wikimedia.org/wikipedia/commons/2/26/Circaetus_gallicus_claw.jpg","username":"Falco Talon","webhookUrl":""}` | slack |
| config.notifiers.slack.footer | string | `"https://github.com/falcosecurity/falco-talon"` | footer |
| config.notifiers.slack.format | string | `"long"` | format |
| config.notifiers.slack.icon | string | `"https://upload.wikimedia.org/wikipedia/commons/2/26/Circaetus_gallicus_claw.jpg"` | icon |
| config.notifiers.slack.username | string | `"Falco Talon"` | username |
| config.notifiers.slack.webhookUrl | string | `""` | webhook url |
| config.notifiers.smtp | object | `{"format":"html","from":"","hostPort":"","password":"","tls":false,"to":"","user":""}` | smtp |
| config.notifiers.smtp.format | string | `"html"` | format |
| config.notifiers.smtp.from | string | `""` | from |
| config.notifiers.smtp.hostPort | string | `""` | host:port |
| config.notifiers.smtp.password | string | `""` | password |
| config.notifiers.smtp.tls | bool | `false` | enable tls |
| config.notifiers.smtp.to | string | `""` | to |
| config.notifiers.smtp.user | string | `""` | user |
| config.notifiers.webhook | object | `{"url":""}` | webhook |
| config.notifiers.webhook.url | string | `""` | url |
| config.otel | object | `{"collectorEndpoint":"","collectorPort":4317,"collectorUseInsecureGrpc":false,"metricsEnabled":false,"tracesEnabled":false}` | open telemetry parameters |
| config.otel.collectorEndpoint | string | `""` | collector endpoint |
| config.otel.collectorPort | int | `4317` | collector port |
| config.otel.collectorUseInsecureGrpc | bool | `false` | use insecure grpc |
| config.otel.metricsEnabled | bool | `false` | enable otel metrics |
| config.otel.tracesEnabled | bool | `false` | enable otel traces |
| config.printAllEvents | bool | `false` | print in stdout all received events, not only those which match a rule |
| config.watchRules | bool | `true` | auto reload the rules when the files change |
| extraEnv | list | `[{"name":"LOG_LEVEL","value":"warning"}]` | extra env |
| grafana | object | `{"dashboards":{"configMaps":{"talon":{"folder":"","name":"falco-talon-grafana-dashboard","namespace":""}},"enabled":false}}` | grafana contains the configuration related to grafana. |
| grafana.dashboards | object | `{"configMaps":{"talon":{"folder":"","name":"falco-talon-grafana-dashboard","namespace":""}},"enabled":false}` | dashboards contains configuration for grafana dashboards. |
| grafana.dashboards.configMaps | object | `{"talon":{"folder":"","name":"falco-talon-grafana-dashboard","namespace":""}}` | configmaps to be deployed that contain a grafana dashboard. |
| grafana.dashboards.configMaps.talon | object | `{"folder":"","name":"falco-talon-grafana-dashboard","namespace":""}` | falco-talon contains the configuration for falco talon's dashboard. |
| grafana.dashboards.configMaps.talon.folder | string | `""` | folder where the dashboard is stored by grafana. |
| grafana.dashboards.configMaps.talon.name | string | `"falco-talon-grafana-dashboard"` | name specifies the name for the configmap. |
| grafana.dashboards.configMaps.talon.namespace | string | `""` | namespace specifies the namespace for the configmap. |
| grafana.dashboards.enabled | bool | `false` | enabled specifies whether the dashboards should be deployed. |
| image | object | `{"pullPolicy":"Always","registry":"falco.docker.scarf.sh","repository":"falcosecurity/falco-talon","tag":""}` | image parameters |
| image.pullPolicy | string | `"Always"` | The image pull policy |
| image.registry | string | `"falco.docker.scarf.sh"` | The image registry to pull from |
| image.repository | string | `"falcosecurity/falco-talon"` | The image repository to pull from |
| image.tag | string | `""` | Override the image tag to pull |
| imagePullSecrets | list | `[]` | one or more secrets to be used when pulling images |
| ingress | object | `{"annotations":{},"enabled":false,"hosts":[{"host":"falco-talon.local","paths":[{"path":"/"}]}],"tls":[]}` | ingress parameters |
| ingress.annotations | object | `{}` | annotations of the ingress |
| ingress.enabled | bool | `false` | enable the ingress |
| ingress.hosts | list | `[{"host":"falco-talon.local","paths":[{"path":"/"}]}]` | hosts |
| ingress.tls | list | `[]` | tls |
| nameOverride | string | `""` | override name |
| nodeSelector | object | `{}` | node selector |
| podAnnotations | object | `{}` | pod annotations |
| podSecurityContext | object | `{"fsGroup":1234,"runAsUser":1234}` | pod security context |
| podSecurityContext.fsGroup | int | `1234` | group |
| podSecurityContext.runAsUser | int | `1234` | user id |
| podSecurityPolicy | object | `{"create":false}` | pod security policy |
| podSecurityPolicy.create | bool | `false` | enable the creation of the PSP |
| priorityClassName | string | `""` | priority class name |
| rbac | object | `{"caliconetworkpolicies":["get","update","patch","create"],"ciliumnetworkpolicies":["get","update","patch","create"],"clusterroles":["get","delete"],"configmaps":["get","delete"],"daemonsets":["get","delete"],"deployments":["get","delete"],"events":["get","update","patch","create"],"leases":["get","update","patch","watch","create"],"namespaces":["get","delete"],"networkpolicies":["get","update","patch","create"],"nodes":["get","update","patch","watch","create"],"pods":["get","update","patch","delete","list"],"podsEphemeralcontainers":["patch","create"],"podsEviction":["get","create"],"podsExec":["get","create"],"podsLog":["get"],"replicasets":["get","delete"],"roles":["get","delete"],"secrets":["get","delete"],"serviceAccount":{"create":true,"name":""},"statefulsets":["get","delete"]}` | rbac |
| rbac.serviceAccount.create | bool | `true` | create the service account. If create is false, name is required |
| rbac.serviceAccount.name | string | `""` | name of the service account |
| replicaCount | int | `2` | number of running pods |
| resources | object | `{}` | resources |
| service | object | `{"annotations":{},"port":2803,"type":"ClusterIP"}` | service parameters |
| service.annotations | object | `{}` | annotations of the service |
| service.port | int | `2803` | port of the service |
| service.type | string | `"ClusterIP"` | type of service |
| serviceMonitor | object | `{"additionalLabels":{},"enabled":false,"interval":"30s","path":"/metrics","port":"http","relabelings":[],"scheme":"http","scrapeTimeout":"10s","targetLabels":[],"tlsConfig":{}}` | serviceMonitor holds the configuration for the ServiceMonitor CRD. |
| serviceMonitor.additionalLabels | object | `{}` | additionalLabels specifies labels to be added on the Service Monitor. |
| serviceMonitor.enabled | bool | `false` | enable the deployment of a Service Monitor for the Prometheus Operator. |
| serviceMonitor.interval | string | `"30s"` | interval specifies the time interval at which Prometheus should scrape metrics from the service. |
| serviceMonitor.path | string | `"/metrics"` | path at which the metrics are exposed |
| serviceMonitor.port | string | `"http"` | portname at which the metrics are exposed |
| serviceMonitor.relabelings | list | `[]` | relabelings configures the relabeling rules to apply the targets metadata labels. |
| serviceMonitor.scheme | string | `"http"` | scheme specifies network protocol used by the metrics endpoint. In this case HTTP. |
| serviceMonitor.scrapeTimeout | string | `"10s"` | scrapeTimeout determines the maximum time Prometheus should wait for a target to respond to a scrape request. If the target does not respond within the specified timeout, Prometheus considers the scrape as failed for that target. |
| serviceMonitor.targetLabels | list | `[]` | targetLabels defines the labels which are transferred from the associated Kubernetes service object onto the ingested metrics. |
| serviceMonitor.tlsConfig | object | `{}` | tlsConfig specifies TLS (Transport Layer Security) configuration for secure communication when scraping metrics from a service. It allows you to define the details of the TLS connection, such as CA certificate, client certificate, and client key. Currently, the k8s-metacollector does not support TLS configuration for the metrics endpoint. |
| tolerations | list | `[]` | tolerations |
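The table above can be hard to map back to a concrete `values.yaml`. As a minimal illustration, an override using only keys from the table (the webhook URL is a placeholder, not a recommendation):

```yaml
config:
  defaultNotifiers:
    - k8sevents
    - slack
  notifiers:
    slack:
      webhookUrl: "https://hooks.slack.com/services/XXXX/YYYY/ZZZZ"
serviceMonitor:
  enabled: true
```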
## Connect Falcosidekick
Once you have installed `Falco Talon` with Helm, you need to connect `Falcosidekick` by adding the flag `--set falcosidekick.config.talon.address=http://falco-talon:2803`:
```shell
helm upgrade -i falco falcosecurity/falco --namespace falco \
--create-namespace \
--set tty=true \
--set falcosidekick.enabled=true \
--set falcosidekick.config.talon.address=http://falco-talon:2803
```
## License
Falco Talon is licensed to you under the **Apache 2.0** open source license.
## Author
Thomas Labarussias (https://github.com/Issif)

File diff suppressed because it is too large


@ -0,0 +1,8 @@
- action: Terminate Pod
  actionner: kubernetes:terminate
- action: Label Pod as Suspicious
  actionner: kubernetes:label
  parameters:
    labels:
      analysis/status: "suspicious"
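Actions like these are referenced from Talon rules that match incoming Falco events by name. A hedged sketch of a complete rule (the matched Falco rule name is a placeholder, and the exact schema should be checked against the Falco Talon documentation):

```yaml
- rule: Terminate pods spawning a shell
  match:
    rules:
      - Terminal shell in container
  actions:
    - action: Terminate Pod
```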


@ -0,0 +1,73 @@
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "falco-talon.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "falco-talon.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Return the appropriate apiVersion for ingress.
*/}}
{{- define "falco-talon.ingress.apiVersion" -}}
{{- if and (.Capabilities.APIVersions.Has "networking.k8s.io/v1") (semverCompare ">= 1.19-0" .Capabilities.KubeVersion.Version) -}}
{{- print "networking.k8s.io/v1" -}}
{{- else if .Capabilities.APIVersions.Has "networking.k8s.io/v1beta1" -}}
{{- print "networking.k8s.io/v1beta1" -}}
{{- else -}}
{{- print "extensions/v1beta1" -}}
{{- end -}}
{{- end -}}
{{/*
Common labels
*/}}
{{- define "falco-talon.labels" -}}
helm.sh/chart: {{ include "falco-talon.chart" . }}
app.kubernetes.io/part-of: {{ include "falco-talon.name" . }}
app.kubernetes.io/managed-by: {{ .Release.Name }}
{{ include "falco-talon.selectorLabels" . }}
{{- if .Values.image.tag }}
app.kubernetes.io/version: {{ .Values.image.tag }}
{{- else }}
app.kubernetes.io/version: {{ .Chart.AppVersion }}
{{- end }}
{{- end }}
{{/*
Selector labels
*/}}
{{- define "falco-talon.selectorLabels" -}}
app.kubernetes.io/name: {{ include "falco-talon.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}
{{/*
Return if ingress is stable.
*/}}
{{- define "falco-talon.ingress.isStable" -}}
{{- eq (include "falco-talon.ingress.apiVersion" .) "networking.k8s.io/v1" -}}
{{- end -}}
{{/*
Return if ingress supports pathType.
*/}}
{{- define "falco-talon.ingress.supportsPathType" -}}
{{- or (eq (include "falco-talon.ingress.isStable" .) "true") (and (eq (include "falco-talon.ingress.apiVersion" .) "networking.k8s.io/v1beta1") (semverCompare ">= 1.18-0" .Capabilities.KubeVersion.Version)) -}}
{{- end -}}
{{/*
Validate if either serviceAccount create is set to true or serviceAccount name is passed
*/}}
{{- define "falco-talon.validateServiceAccount" -}}
{{- if and (not .Values.rbac.serviceAccount.create) (not .Values.rbac.serviceAccount.name) -}}
{{- fail ".Values.rbac.serviceAccount.create is set to false and .Values.rbac.serviceAccount.name is not provided or is provided as empty string." -}}
{{- end -}}
{{- end -}}
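The `falco-talon.chart` helper above builds a label value that must fit Kubernetes' 63-character limit for label values. A Python sketch of the `printf "%s-%s" ... | replace "+" "_" | trunc 63 | trimSuffix "-"` pipeline (illustrative; Sprig's `trimSuffix` removes a single trailing occurrence, which the sketch reproduces):

```python
def chart_label(chart_name: str, chart_version: str) -> str:
    # printf "%s-%s": join name and version with a dash.
    # replace "+" with "_": "+" (SemVer build metadata) is invalid in label values.
    label = f"{chart_name}-{chart_version}".replace("+", "_")
    # trunc 63: Kubernetes label values are capped at 63 characters.
    label = label[:63]
    # trimSuffix "-": a label value must not end with a dash.
    if label.endswith("-"):
        label = label[:-1]
    return label

print(chart_label("falco-talon", "0.3.0"))  # → falco-talon-0.3.0
```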


@ -0,0 +1,18 @@
{{- if .Values.podSecurityPolicy.create }}
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: {{ template "falco-talon.name" . }}
  labels:
    {{- include "falco-talon.labels" . | nindent 4 }}
rules:
  - apiGroups:
      - policy
    resources:
      - podsecuritypolicies
    resourceNames:
      - {{ template "falco-talon.name" . }}
    verbs:
      - use
{{- end }}


@ -0,0 +1,22 @@
{{- if .Values.grafana.dashboards.enabled -}}
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Values.grafana.dashboards.configMaps.talon.name }}
  {{- if .Values.grafana.dashboards.configMaps.talon.namespace }}
  namespace: {{ .Values.grafana.dashboards.configMaps.talon.namespace }}
  {{- else }}
  namespace: {{ .Release.Namespace }}
  {{- end }}
  labels:
    {{- include "falco-talon.labels" . | nindent 4 }}
    grafana_dashboard: "1"
  annotations:
    {{- if .Values.grafana.dashboards.configMaps.talon.folder }}
    k8s-sidecar-target-directory: /tmp/dashboards/{{ .Values.grafana.dashboards.configMaps.talon.folder }}
    grafana_dashboard_folder: {{ .Values.grafana.dashboards.configMaps.talon.folder }}
    {{- end }}
data:
  falco-talon-grafana-dashboard.json: |-
    {{- .Files.Get "dashboards/falco-talon-grafana-dashboard.json" | nindent 4 }}
{{- end -}}


@ -0,0 +1,13 @@
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ include "falco-talon.name" . }}-rules
  namespace: {{ .Release.Namespace }}
  labels:
    {{- include "falco-talon.labels" . | nindent 4 }}
data:
  rules.yaml: |-
    {{- $.Files.Get "rules.yaml" | nindent 4 }}
    {{- if .Values.config.rulesOverride }}
    {{- .Values.config.rulesOverride | nindent 4 }}
    {{- end }}


@ -0,0 +1,101 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "falco-talon.name" . }}
  namespace: {{ .Release.Namespace }}
  labels:
    {{- include "falco-talon.labels" . | nindent 4 }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app.kubernetes.io/name: {{ include "falco-talon.name" . }}
      app.kubernetes.io/instance: {{ .Release.Name }}
  template:
    metadata:
      labels:
        app.kubernetes.io/name: {{ include "falco-talon.name" . }}
        app.kubernetes.io/instance: {{ .Release.Name }}
      annotations:
        secret-checksum: {{ (lookup "v1" "Secret" .Release.Namespace (include "falco-talon.name" . | cat "-config")).data | toJson | sha256sum }}
        {{- if .Values.podAnnotations }}
        {{- toYaml .Values.podAnnotations | nindent 8 }}
        {{- end }}
    spec:
      serviceAccountName: {{ include "falco-talon.name" . }}
      {{- if .Values.imagePullSecrets }}
      imagePullSecrets:
        {{- range .Values.imagePullSecrets }}
        - name: {{ . }}
        {{- end }}
      {{- end }}
      {{- if .Values.priorityClassName }}
      priorityClassName: "{{ .Values.priorityClassName }}"
      {{- end }}
      securityContext:
        runAsUser: {{ .Values.podSecurityContext.runAsUser }}
        fsGroup: {{ .Values.podSecurityContext.fsGroup }}
      restartPolicy: Always
      containers:
        - name: {{ .Chart.Name }}
          {{- if .Values.image.registry }}
          image: "{{ .Values.image.registry }}/{{ .Values.image.repository }}:{{ default .Chart.AppVersion .Values.image.tag }}"
          {{- else }}
          image: "{{ .Values.image.repository }}:{{ default .Chart.AppVersion .Values.image.tag }}"
          {{- end }}
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          args: ["server", "-c", "/etc/falco-talon/config.yaml", "-r", "/etc/falco-talon/rules.yaml"]
          ports:
            - name: http
              containerPort: 2803
              protocol: TCP
            - name: nats
              containerPort: 4222
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /healthz
              port: http
            initialDelaySeconds: 10
            periodSeconds: 5
          readinessProbe:
            httpGet:
              path: /healthz
              port: http
            initialDelaySeconds: 10
            periodSeconds: 5
          {{- if .Values.extraEnv }}
          env:
            {{- toYaml .Values.extraEnv | nindent 12 }}
          {{- end }}
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
          volumeMounts:
            - name: "config"
              mountPath: "/etc/falco-talon/config.yaml"
              subPath: config.yaml
              readOnly: true
            - name: "rules"
              mountPath: "/etc/falco-talon/rules.yaml"
              subPath: rules.yaml
              readOnly: true
      {{- with .Values.nodeSelector }}
      nodeSelector:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.affinity }}
      affinity:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.tolerations }}
      tolerations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      volumes:
        - name: "rules"
          configMap:
            name: "{{ include "falco-talon.name" . }}-rules"
        - name: "config"
          secret:
            secretName: "{{ include "falco-talon.name" . }}-config"


@ -0,0 +1,50 @@
{{- if .Values.ingress.enabled -}}
{{- $name := include "falco-talon.name" . -}}
{{- $ingressApiIsStable := eq (include "falco-talon.ingress.isStable" .) "true" -}}
{{- $ingressSupportsPathType := eq (include "falco-talon.ingress.supportsPathType" .) "true" -}}
---
apiVersion: {{ include "falco-talon.ingress.apiVersion" . }}
kind: Ingress
metadata:
  name: {{ $name }}
  namespace: {{ .Release.Namespace }}
  labels:
    {{- include "falco-talon.labels" . | nindent 4 }}
  {{- with .Values.ingress.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
spec:
  {{- if .Values.ingress.tls }}
  tls:
    {{- range .Values.ingress.tls }}
    - hosts:
        {{- range .hosts }}
        - {{ . | quote }}
        {{- end }}
      secretName: {{ .secretName }}
    {{- end }}
  {{- end }}
  rules:
    {{- range .Values.ingress.hosts }}
    - host: {{ .host | quote }}
      http:
        paths:
          {{- range .paths }}
          - path: {{ .path }}
            {{- if $ingressSupportsPathType }}
            pathType: {{ default "ImplementationSpecific" .pathType }}
            {{- end }}
            backend:
              {{- if $ingressApiIsStable }}
              service:
                name: {{ $name }}
                port:
                  name: http
              {{- else }}
              serviceName: {{ $name }}
              servicePort: http
              {{- end }}
          {{- end }}
    {{- end }}
{{- end }}


@ -0,0 +1,32 @@
{{- if .Values.podSecurityPolicy.create }}
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
name: {{ template "falco-talon.name" . }}
labels:
{{- include "falco-talon.labels" . | nindent 4 }}
spec:
privileged: false
allowPrivilegeEscalation: false
hostNetwork: false
readOnlyRootFilesystem: true
requiredDropCapabilities:
- ALL
fsGroup:
ranges:
- max: 65535
min: 1
rule: MustRunAs
runAsUser:
rule: MustRunAsNonRoot
seLinux:
rule: RunAsAny
supplementalGroups:
ranges:
- max: 65535
min: 1
rule: MustRunAs
volumes:
- configMap
- secret
{{- end }}


@ -0,0 +1,216 @@
{{- include "falco-talon.validateServiceAccount" . -}}
---
{{- if .Values.rbac.serviceAccount.create }}
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ include "falco-talon.name" . }}
namespace: {{ .Release.Namespace }}
labels:
{{- include "falco-talon.labels" . | nindent 4 }}
{{- end }}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: {{ include "falco-talon.name" . }}
labels:
app.kubernetes.io/name: {{ include "falco-talon.name" . }}
helm.sh/chart: {{ include "falco-talon.chart" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
rules:
{{- if .Values.rbac.namespaces }}
- apiGroups:
- ""
resources:
- namespaces
verbs:
{{ toYaml .Values.rbac.namespaces | indent 6 }}
{{- end }}
{{- if .Values.rbac.pods }}
- apiGroups:
- ""
resources:
- pods
verbs:
{{ toYaml .Values.rbac.pods | indent 6 }}
{{- end }}
{{- if .Values.rbac.podsEphemeralcontainers }}
- apiGroups:
- ""
resources:
- pods/ephemeralcontainers
verbs:
{{ toYaml .Values.rbac.podsEphemeralcontainers | indent 6 }}
{{- end }}
{{- if .Values.rbac.nodes }}
- apiGroups:
- ""
resources:
- nodes
verbs:
{{ toYaml .Values.rbac.nodes | indent 6 }}
{{- end }}
{{- if .Values.rbac.podsLog }}
- apiGroups:
- ""
resources:
- pods/log
verbs:
{{ toYaml .Values.rbac.podsLog | indent 6 }}
{{- end }}
{{- if .Values.rbac.podsExec }}
- apiGroups:
- ""
resources:
- pods/exec
verbs:
{{ toYaml .Values.rbac.podsExec | indent 6 }}
{{- end }}
{{- if .Values.rbac.podsEviction }}
- apiGroups:
- ""
resources:
- pods/eviction
verbs:
{{ toYaml .Values.rbac.podsEviction | indent 6 }}
{{- end }}
{{- if .Values.rbac.events }}
- apiGroups:
- ""
resources:
- events
verbs:
{{ toYaml .Values.rbac.events | indent 6 }}
{{- end }}
{{- if .Values.rbac.daemonsets }}
- apiGroups:
- "apps"
resources:
- daemonsets
verbs:
{{ toYaml .Values.rbac.daemonsets | indent 6 }}
{{- end }}
{{- if .Values.rbac.deployments }}
- apiGroups:
- "apps"
resources:
- deployments
verbs:
{{ toYaml .Values.rbac.deployments | indent 6 }}
{{- end }}
{{- if .Values.rbac.replicasets }}
- apiGroups:
- "apps"
resources:
- replicasets
verbs:
{{ toYaml .Values.rbac.replicasets | indent 6 }}
{{- end }}
{{- if .Values.rbac.statefulsets }}
- apiGroups:
- "apps"
resources:
- statefulsets
verbs:
{{ toYaml .Values.rbac.statefulsets | indent 6 }}
{{- end }}
{{- if .Values.rbac.networkpolicies }}
- apiGroups:
- "networking.k8s.io"
resources:
- networkpolicies
verbs:
{{ toYaml .Values.rbac.networkpolicies | indent 6 }}
{{- end }}
{{- if .Values.rbac.caliconetworkpolicies }}
- apiGroups:
- "projectcalico.org"
resources:
- caliconetworkpolicies
verbs:
{{ toYaml .Values.rbac.caliconetworkpolicies | indent 6 }}
{{- end }}
{{- if .Values.rbac.ciliumnetworkpolicies }}
- apiGroups:
- "cilium.io"
resources:
- ciliumnetworkpolicies
verbs:
{{ toYaml .Values.rbac.ciliumnetworkpolicies | indent 6 }}
{{- end }}
{{- if .Values.rbac.roles }}
- apiGroups:
- "rbac.authorization.k8s.io"
resources:
- roles
verbs:
{{ toYaml .Values.rbac.roles | indent 6 }}
{{- end }}
{{- if .Values.rbac.clusterroles }}
- apiGroups:
- "rbac.authorization.k8s.io"
resources:
- clusterroles
verbs:
{{ toYaml .Values.rbac.clusterroles | indent 6 }}
{{- end }}
{{- if .Values.rbac.configmaps }}
- apiGroups:
- ""
resources:
- configmaps
verbs:
{{ toYaml .Values.rbac.configmaps | indent 6 }}
{{- end }}
{{- if .Values.rbac.secrets }}
- apiGroups:
- ""
resources:
- secrets
verbs:
{{ toYaml .Values.rbac.secrets | indent 6 }}
{{- end }}
{{- if .Values.rbac.leases }}
- apiGroups:
- "coordination.k8s.io"
resources:
- leases
verbs:
{{ toYaml .Values.rbac.leases | indent 6 }}
{{- end }}
{{- if .Values.podSecurityPolicy.create }}
- apiGroups:
- policy
resources:
- podsecuritypolicies
resourceNames:
- {{ template "falco-talon.name" . }}
verbs:
- use
{{- end }}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: {{ include "falco-talon.name" . }}
labels:
app.kubernetes.io/name: {{ include "falco-talon.name" . }}
helm.sh/chart: {{ include "falco-talon.chart" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: {{ include "falco-talon.name" . }}
subjects:
- kind: ServiceAccount
{{- if .Values.rbac.serviceAccount.create }}
name: {{ include "falco-talon.name" . }}
{{- else }}
name: {{ .Values.rbac.serviceAccount.name }}
{{- end }}
namespace: {{ .Release.Namespace }}


@ -0,0 +1,71 @@
apiVersion: v1
kind: Secret
metadata:
name: {{ include "falco-talon.name" . }}-config
labels:
{{- include "falco-talon.labels" . | nindent 4 }}
stringData:
config.yaml: |
listen_address: {{ default "0.0.0.0" .Values.config.listenAddress }}
listen_port: {{ default 2803 .Values.config.listenPort }}
watch_rules: {{ default true .Values.config.watchRules }}
print_all_events: {{ default false .Values.config.printAllEvents }}
deduplication:
leader_election: {{ default true .Values.config.deduplication.leaderElection }}
time_window_seconds: {{ default 5 .Values.config.deduplication.timeWindowSeconds }}
default_notifiers:
{{- range .Values.config.defaultNotifiers }}
- {{ . -}}
{{ end }}
otel:
traces_enabled: {{ default false .Values.config.otel.tracesEnabled }}
metrics_enabled: {{ default false .Values.config.otel.metricsEnabled }}
collector_port: {{ default 4317 .Values.config.otel.collectorPort }}
collector_endpoint: {{ .Values.config.otel.collectorEndpoint }}
collector_use_insecure_grpc: {{ default false .Values.config.otel.collectorUseInsecureGrpc }}
notifiers:
slack:
webhook_url: {{ .Values.config.notifiers.slack.webhookUrl }}
icon: {{ .Values.config.notifiers.slack.icon }}
username: {{ .Values.config.notifiers.slack.username }}
footer: {{ .Values.config.notifiers.slack.footer }}
format: {{ .Values.config.notifiers.slack.format }}
webhook:
url: {{ .Values.config.notifiers.webhook.url }}
smtp:
host_port: {{ .Values.config.notifiers.smtp.hostPort }}
from: {{ .Values.config.notifiers.smtp.from }}
to: {{ .Values.config.notifiers.smtp.to }}
user: {{ .Values.config.notifiers.smtp.user }}
password: {{ .Values.config.notifiers.smtp.password }}
format: {{ .Values.config.notifiers.smtp.format }}
tls: {{ .Values.config.notifiers.smtp.tls }}
loki:
host_port: {{ .Values.config.notifiers.loki.hostPort }}
user: {{ .Values.config.notifiers.loki.user }}
api_key: {{ .Values.config.notifiers.loki.apiKey }}
tenant: {{ .Values.config.notifiers.loki.tenant }}
custom_headers:
{{- range .Values.config.notifiers.loki.customHeaders }}
- {{ . -}}
{{ end }}
elasticsearch:
url: {{ .Values.config.notifiers.elasticsearch.url }}
create_index_template: {{ .Values.config.notifiers.elasticsearch.createIndexTemplate }}
number_of_shards: {{ .Values.config.notifiers.elasticsearch.numberOfShards }}
number_of_replicas: {{ .Values.config.notifiers.elasticsearch.numberOfReplicas }}
aws:
role_arn: {{ .Values.config.aws.roleArn }}
external_id: {{ .Values.config.aws.externalId }}
region: {{ .Values.config.aws.region }}
access_key: {{ .Values.config.aws.accessKey }}
secret_key: {{ .Values.config.aws.secretKey }}
minio:
endpoint: {{ .Values.config.minio.endpoint }}
access_key: {{ .Values.config.minio.accessKey }}
secret_key: {{ .Values.config.minio.secretKey }}
use_ssl: {{ .Values.config.minio.useSsl }}


@ -0,0 +1,44 @@
{{- if .Values.serviceMonitor.enabled }}
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
labels:
{{- include "falco-talon.labels" . | nindent 4 }}
{{- with .Values.serviceMonitor.additionalLabels }}
{{- toYaml . | nindent 4 }}
{{- end }}
name: {{ include "falco-talon.name" . }}
namespace: {{ .Release.Namespace }}
spec:
endpoints:
- port: {{ .Values.serviceMonitor.port }}
{{- with .Values.serviceMonitor.interval }}
interval: {{ . }}
{{- end }}
{{- with .Values.serviceMonitor.scrapeTimeout }}
scrapeTimeout: {{ . }}
{{- end }}
honorLabels: true
path: {{ .Values.serviceMonitor.path }}
scheme: {{ .Values.serviceMonitor.scheme }}
{{- with .Values.serviceMonitor.tlsConfig }}
tlsConfig:
{{- toYaml . | nindent 6 }}
{{- end }}
{{- with .Values.serviceMonitor.relabelings }}
relabelings:
{{- toYaml . | nindent 6 }}
{{- end }}
jobLabel: "{{ .Release.Name }}"
selector:
matchLabels:
{{- include "falco-talon.selectorLabels" . | nindent 6 }}
namespaceSelector:
matchNames:
- {{ .Release.Namespace }}
{{- with .Values.serviceMonitor.targetLabels }}
targetLabels:
{{- toYaml . | nindent 4 }}
{{- end }}
{{- end }}


@ -0,0 +1,21 @@
---
apiVersion: v1
kind: Service
metadata:
name: {{ include "falco-talon.name" . }}
namespace: {{ .Release.Namespace }}
labels:
{{- include "falco-talon.labels" . | nindent 4 }}
{{- with .Values.service.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
spec:
type: {{ .Values.service.type }}
ports:
- port: {{ .Values.service.port }}
targetPort: http
protocol: TCP
name: http
selector:
{{- include "falco-talon.selectorLabels" . | nindent 4 }}


@ -0,0 +1,309 @@
# Default values for falco-talon.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
# -- number of running pods
replicaCount: 2
# -- image parameters
image:
# -- The image registry to pull from
registry: falco.docker.scarf.sh
# -- The image repository to pull from
repository: falcosecurity/falco-talon
# -- Override the image tag to pull
tag: ""
# -- The image pull policy
pullPolicy: Always
# -- pod security policy
podSecurityPolicy:
# -- enable the creation of the PSP
create: false
# -- pod security context
podSecurityContext:
# -- user id
runAsUser: 1234
# -- group
fsGroup: 1234
# -- one or more secrets to be used when pulling images
imagePullSecrets: []
# - registrySecretName
# -- override name
nameOverride: ""
# -- extra env
extraEnv:
- name: LOG_LEVEL
value: warning
# - name: AWS_REGION # Specify if running on EKS, ECS or EC2
# value: us-east-1
# -- priority class name
priorityClassName: ""
# -- pod annotations
podAnnotations: {}
# -- service parameters
service:
# -- type of service
type: ClusterIP
# -- port of the service
port: 2803
# -- annotations of the service
annotations: {}
# networking.gke.io/load-balancer-type: Internal
# -- ingress parameters
ingress:
# -- enable the ingress
enabled: false
# -- annotations of the ingress
annotations: {}
# kubernetes.io/ingress.class: nginx
# kubernetes.io/tls-acme: "true"
# -- hosts
hosts:
- host: falco-talon.local
paths:
- path: /
# -- pathType (e.g. ImplementationSpecific, Prefix, etc.)
# pathType: Prefix
# -- tls
tls: []
# - secretName: chart-example-tls
# hosts:
# - chart-example.local
# -- resources
resources: {}
# -- limits
# limits:
# # -- cpu limit
# cpu: 100m
# # -- memory limit
# memory: 128Mi
# -- requests
# requests:
# # -- cpu request
# cpu: 100m
# # -- memory request
# memory: 128Mi
# -- node selector
nodeSelector: {}
# -- tolerations
tolerations: []
# -- affinity
affinity: {}
# -- rbac
rbac:
serviceAccount:
# -- create the service account. If create is false, name is required
create: true
# -- name of the service account
name: ""
namespaces: ["get", "delete"]
pods: ["get", "update", "patch", "delete", "list"]
podsEphemeralcontainers: ["patch", "create"]
nodes: ["get", "update", "patch", "watch", "create"]
podsLog: ["get"]
podsExec: ["get", "create"]
podsEviction: ["get", "create"]
events: ["get", "update", "patch", "create"]
daemonsets: ["get", "delete"]
deployments: ["get", "delete"]
replicasets: ["get", "delete"]
statefulsets: ["get", "delete"]
networkpolicies: ["get", "update", "patch", "create"]
caliconetworkpolicies: ["get", "update", "patch", "create"]
ciliumnetworkpolicies: ["get", "update", "patch", "create"]
roles: ["get", "delete"]
clusterroles: ["get", "delete"]
configmaps: ["get", "delete"]
secrets: ["get", "delete"]
leases: ["get", "update", "patch", "watch", "create"]
# -- config of Falco Talon (See https://docs.falco-talon.org/docs/configuration/)
config:
# -- listen address
listenAddress: 0.0.0.0
# -- listen port
listenPort: 2803
# -- default notifiers for all rules
defaultNotifiers:
# - slack
- k8sevents
# -- auto reload the rules when the files change
watchRules: true
# -- deduplication of the Falco events
deduplication:
# -- enable the leader election for cluster mode
leaderElection: true
# -- duration in seconds for the deduplication time window
timeWindowSeconds: 5
# -- print in stdout all received events, not only those which match a rule
printAllEvents: false
# User-defined additional rules for rules_override.yaml
rulesOverride: |
- action: Terminate Pod
actionner: kubernetes:terminate
parameters:
ignore_daemonsets: true
ignore_statefulsets: true
grace_period_seconds: 20
# -- open telemetry parameters
otel:
# -- enable otel traces
tracesEnabled: false
# -- enable otel metrics
metricsEnabled: false
# -- collector port
collectorPort: 4317
# -- collector endpoint
collectorEndpoint: ""
# -- use insecure grpc
collectorUseInsecureGrpc: false
# -- notifiers (See https://docs.falco-talon.org/docs/notifiers/list/ for the settings)
notifiers:
# -- slack
slack:
# -- webhook url
webhookUrl: ""
# -- icon
icon: "https://upload.wikimedia.org/wikipedia/commons/2/26/Circaetus_gallicus_claw.jpg"
# -- username
username: "Falco Talon"
# -- footer
footer: "https://github.com/falcosecurity/falco-talon"
# -- format
format: "long"
# -- webhook
webhook:
# -- url
url: ""
# -- smtp
smtp:
# -- host:port
hostPort: ""
# -- from
from: ""
# -- to
to: ""
# -- user
user: ""
# -- password
password: ""
# -- format
format: "html"
# -- enable tls
tls: false
# -- loki
loki:
# -- host:port
hostPort: ""
# -- user
user: ""
# -- api key
apiKey: ""
# -- tenant
tenant: ""
# -- custom headers
customHeaders: []
# -- elasticsearch
elasticsearch:
# -- url
url: ""
# -- create the index template
createIndexTemplate: true
# -- number of shards
numberOfShards: 1
# -- number of replicas
numberOfReplicas: 1
# -- aws
aws:
# -- role arn
roleArn: ""
# -- external id
externalId: ""
# -- region (if not specified, default region from provider credential chain will be used)
region: ""
# -- access key (if not specified, default access_key from provider credential chain will be used)
accessKey: ""
# -- secret key (if not specified, default secret_key from provider credential chain will be used)
secretKey: ""
# -- minio
minio:
# -- endpoint
endpoint: ""
# -- access key
accessKey: ""
# -- secret key
secretKey: ""
# -- use ssl
useSsl: false
# -- serviceMonitor holds the configuration for the ServiceMonitor CRD.
serviceMonitor:
# -- enable the deployment of a Service Monitor for the Prometheus Operator.
enabled: false
# -- portname at which the metrics are exposed
port: http
# -- path at which the metrics are exposed
path: /metrics
# -- additionalLabels specifies labels to be added on the Service Monitor.
additionalLabels: {}
# -- interval specifies the time interval at which Prometheus should scrape metrics from the service.
interval: "30s"
# -- scheme specifies network protocol used by the metrics endpoint. In this case HTTP.
scheme: http
# -- scrapeTimeout determines the maximum time Prometheus should wait for a target to respond to a scrape request.
# If the target does not respond within the specified timeout, Prometheus considers the scrape as failed for
# that target.
scrapeTimeout: "10s"
# -- relabelings configures the relabeling rules to apply the targets metadata labels.
relabelings: []
# -- targetLabels defines the labels which are transferred from the associated Kubernetes service object onto the ingested metrics.
targetLabels: []
# -- tlsConfig specifies TLS (Transport Layer Security) configuration for secure communication when
# scraping metrics from a service. It allows you to define the details of the TLS connection, such as
# CA certificate, client certificate, and client key. Currently, Falco Talon does not support
# TLS configuration for the metrics endpoint.
tlsConfig: {}
# insecureSkipVerify: false
# caFile: /path/to/ca.crt
# certFile: /path/to/client.crt
# keyFile: /path/to/client.key
# -- grafana contains the configuration related to grafana.
grafana:
# -- dashboards contains configuration for grafana dashboards.
dashboards:
# -- enabled specifies whether the dashboards should be deployed.
enabled: false
# -- configMaps to be deployed that contain a grafana dashboard.
configMaps:
# -- talon contains the configuration for falco talon's dashboard.
talon:
# -- name specifies the name for the configmap.
name: falco-talon-grafana-dashboard
# -- namespace specifies the namespace for the configmap.
namespace: ""
# -- folder where the dashboard is stored by grafana.
folder: ""


@ -1,11 +1,62 @@
# Helm chart Breaking Changes
- [6.0.0](#600)
  - [Falco Talon configuration changes](#falco-talon-configuration-changes)
- [5.0.0](#500)
- [Default Falco Image](#default-falco-image)
- [4.0.0](#400)
- [Drivers](#drivers)
- [K8s Collector](#k8s-collector)
- [Plugins](#plugins)
- [3.0.0](#300)
- [Falcoctl](#falcoctl-support)
- [Rulesfiles](#rulesfiles)
- [Falco Images](#drop-support-for-falcosecurityfalco-image)
- [Driver Loader Init Container](#driver-loader-simplified-logic)
## 6.0.0
### Falco Talon configuration changes
The following backward-incompatible changes have been made to `values.yaml`:
- `falcotalon` configuration has been renamed to `falco-talon`
- `falcotalon.enabled` has been renamed to `responseActions.enabled`
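For example, an override using the old keys can be migrated as follows (a minimal sketch based only on the renames listed above):

```yaml
# Before (chart < 6.0.0)
falcotalon:
  enabled: true
---
# After (chart >= 6.0.0)
responseActions:
  enabled: true
```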
## 5.0.0
### Default Falco Image
**Starting with version 5.0.0, the Helm chart now uses the default Falco container image, which is a distroless image without any additional tools installed.**
Previously, the chart used the `debian` image, which included several additional tools, to avoid breaking changes during upgrades. The new image is more secure and lightweight, but it does not include these tools.
If you rely on some tool—for example, when using the `program_output` feature—you can manually override the `image.tag` value to use a different image flavor. For instance, setting `image.tag` to `0.41.0-debian` will restore access to the tools available in the Debian-based image.
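For instance, the Debian flavor can be pinned through a values override such as the following (the tag shown is only an example; pick the flavor matching your Falco version):

```yaml
image:
  # Use the tools-included Debian flavor instead of the default distroless image
  tag: 0.41.0-debian
```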
## 4.0.0
### Drivers
The `driver` section has been reworked based on the following PR: https://github.com/falcosecurity/falco/pull/2413.
It is an attempt to unify how a driver is configured in Falco.
It also groups the configuration based on the driver type.
Some of the drivers have been renamed:
* the kernel module driver has been renamed from `module` to `kmod`;
* the ebpf probe has not been changed. It's still `ebpf`;
* the modern ebpf probe has been renamed from `modern-bpf` to `modern_ebpf`.
The `gvisor` configuration has been moved under the `driver` section since it is considered a driver on its own.
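A sketch of the reworked layout in `values.yaml` under the new naming (the values shown are illustrative, not chart defaults):

```yaml
driver:
  # was `module` in charts < 4.0.0; `modern-bpf` is now `modern_ebpf`
  kind: kmod
  # gvisor is now configured under the driver section as well
  gvisor: {}
```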
### K8s Collector
The old Kubernetes client has been removed in Falco 0.37.0. For more info checkout this issue: https://github.com/falcosecurity/falco/issues/2973#issuecomment-1877803422.
The [k8s-metacollector](https://github.com/falcosecurity/k8s-metacollector) and [k8s-meta](https://github.com/falcosecurity/plugins/tree/master/plugins/k8smeta) substitute
the old implementation.
The following resources needed by Falco to connect to the API server are no longer needed and have been removed from the chart:
* service account;
* cluster role;
* cluster role binding.
When `collectors.kubernetes` is enabled the chart deploys the [k8s-metacollector](https://github.com/falcosecurity/k8s-metacollector) and configures Falco to load the
[k8s-meta](https://github.com/falcosecurity/plugins/tree/master/plugins/k8smeta) plugin.
By default, `collectors.kubernetes.enabled` is off; for more info, see the following issue: https://github.com/falcosecurity/falco/issues/2995.
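Enabling the new collector is a single switch in `values.yaml` (a minimal sketch):

```yaml
collectors:
  kubernetes:
    # off by default; turns on the k8s-metacollector deployment
    # and the k8smeta plugin configuration
    enabled: true
```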
### Plugins
The Falco docker image no longer ships the plugins: https://github.com/falcosecurity/falco/pull/2997.
For this reason, `resolveDeps` is now enabled in relevant values files (i.e. `values-k8saudit.yaml`).
When installing `rulesfile` artifacts, `falcoctl` will try to resolve their dependencies and install the required plugins.
## 3.0.0
The new chart deploys new *k8s* resources and new configuration variables have been added to the `values.yaml` file. People upgrading the chart from `v2.x.y` have to port their configuration variables to the new `values.yaml` file used by the `v3.0.0` chart.
@ -25,7 +76,7 @@ This way you will upgrade Falco to `v0.34.0`.
### Falcoctl support
[Falcoctl](https://github.com/falcosecurity/falcoctl) is a new tool born to automatize operations when deploying Falco.
Before the `v3.0.0` of the charts, *rulesfiles* and *plugins* were shipped bundled in the Falco docker image. This precluded the possibility to update the *rulesfiles* and *plugins* until a new version of Falco was released. Operators had to manually update the *rulesfiles* or add new *plugins* to Falco. The process was cumbersome and error-prone. Operators had to create their own Falco docker images with the new plugins baked into it or wait for a new Falco release.
@ -178,11 +229,15 @@ Starting from `v0.3.0`, the chart drops the bundled **rulesfiles**. The previous
The reason why we are dropping them is pretty simple: the files are already shipped within the Falco image and do not provide any benefit. On the other hand, we had to manually update those files for each Falco release.
For users out there, do not worry, we have you covered. As said before, the **rulesfiles** are already shipped inside the Falco image. Still, this solution has some drawbacks, such as users having to wait for the next releases of Falco to get the latest version of those **rulesfiles**. Or they could manually update them by using the [custom rules](./README.md#loading-custom-rules).
We came up with a better solution and that is **falcoctl**. Users can configure the **falcoctl** tool to fetch and install the latest **rulesfiles** as provided by the *falcosecurity* organization. For more info, please check the **falcoctl** section.
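A minimal sketch of such a falcoctl configuration in the chart's values (the artifact refs are examples, not required values):

```yaml
falcoctl:
  artifact:
    # install artifacts once at startup via an init container
    install:
      enabled: true
    # keep following new artifact releases via a sidecar
    follow:
      enabled: true
  config:
    artifact:
      install:
        refs: [falco-rules:4]
      follow:
        refs: [falco-rules:4]
```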
**NOTE**: if any user (wrongly) used to customize those files before deploying Falco, please switch to using the [custom rules](./README.md#loading-custom-rules).
### Drop support for `falcosecurity/falco` image


@ -3,6 +3,375 @@
This file documents all notable changes to Falco Helm Chart. The release
numbering uses [semantic versioning](http://semver.org).
## v6.2.2
* Bump container plugin to 0.3.5
* Bump k8smeta plugin to 0.3.1
## v6.2.1
* Bump container plugin to 0.3.3
## v6.2.0
* Switch to `collectors.containerEngine` configuration by default
* Update `collectors.containerEngine.engines` default values
* Fix containerd socket path configuration
* Address "container.name shows container.id" issue
* Address "Missing k8s.pod name, container.name, other metadata with k3s" issue
* Bump container plugin to 0.3.2
## v6.1.0
* feat(falco): Add possibility to customize falco pods hostname
## v6.0.2
* Bump Falco to 0.41.3
* Bump container plugin to 0.3.1
## v6.0.1
* Bump Falco to 0.41.2
* Bump container plugin to 0.3.0
## v6.0.0
* Rename Falco Talon configuration keys
## v5.0.3
* Bump container plugin to 0.2.6
## v5.0.2
* Bump container plugin to 0.2.5
* Bump Falco to 0.41.1
## v5.0.1
* Correct installation issue when both artifact installation and follow are enabled
## v5.0.0
* Bump falcoctl to 0.11.2
* Use default falco image flavor (wolfi) by default
## v4.22.0
* Bump Falco to 0.41.0;
* Bump falco rules to 4.0.0;
* Deprecate old container engines in favor of the new container plugin;
* Add support for the new container plugin;
* Update k8smeta plugin to 0.3.0;
* Update falco configuration;
## v4.21.2
* add falco-talon as falco subchart
## v4.21.1
* removed falco-exporter (now deprecated) references from the readme
## v4.21.0
* feat(falco): adding imagePullSecrets at the service account level
## v4.20.1
* correctly mount the volumes based on socket path
* unit tests for container engines socket paths
## v4.20.0
* bump falcoctl to 0.11.0
## v4.19.0
* fix falco version to 0.40.0
## v4.18.0
* update the chart for falco 0.40;
* remove deprecated cli flag `--cri` and use the configuration file instead. More info here: https://github.com/falcosecurity/falco/pull/3329
* use new falco images, for more info see: https://github.com/falcosecurity/falco/issues/3165
## v4.17.2
* update(falco): add ports definition in falco container spec
## v4.17.1
* docs(falco): update README.md to reflect latest driver configuration and correct broken links
## v4.17.0
* update(falco): bump k8saudit version to 0.11
## v4.16.2
* fix(falco): set dnsPolicy to ClusterFirstWithHostNet when gvisor driver is enabled to prevent DNS lookup failures for cluster-internal services
## v4.16.1
* fix(falco/serviceMonitor): set service label selector
* new(falco/tests): add unit tests for serviceMonitor label selector
## v4.16.0
* bump falcosidekick dependency to v0.9.* to match with future versions
## v4.15.1
* fix: change the url for the concurrent queue classes docs
## v4.15.0
* update(falco): bump falco version to 0.39.2 and falcoctl to 0.10.1
## v4.14.2
* fix(falco/readme): use `rules_files` instead of deprecated `rules_file` in README config snippet
## v4.14.1
* fix(falco/dashboard): make pod variable independent of triggered rules. CPU and memory are now visible for each
pod, even when no rules have been triggered for that falco instance.
## v4.14.0
* Bump k8smeta plugin to 0.2.1, see: https://github.com/falcosecurity/plugins/releases/tag/plugins%2Fk8smeta%2Fv0.2.1
## v4.13.0
* Expose new config entries for k8smeta plugin:`verbosity` and `hostProc`.
## v4.12.0
* Set apparmor to `unconfined` (disabled) when `leastPrivileged: true` and (`kind: modern_ebpf` or `kind: ebpf`)
## v4.11.2
* only prints env key if there are env values to be passed on `falcoctl.initContainer` and `falcoctl.sidecar`
## v4.11.1
* add details for the scap drops buffer charts with the dir and drops labels
## v4.11.0
* new(falco): add grafana dashboard for falco
## v4.10.0
* Bump Falco to v0.39.1
## v4.9.1
* feat(falco): add labels and annotations to the metrics service
## v4.9.0
* Bump Falco to v0.39.0
* update(falco): add new configuration entries for Falco
This commit adds new config keys introduced in Falco 0.39.0.
Furthermore, updates the unit tests for the latest changes
in the values.yaml.
* cleanup(falco): remove deprecated falco configuration
This commit removes the "output" config key that has
been deprecated in falco.
* update(falco): mount proc filesystem for plugins
The following PR in libs https://github.com/falcosecurity/libs/pull/1969
introduces a new platform for plugins that requires access to the
proc filesystem.
* fix(falco): update broken link pointing to Falco docs
After the changes made by the following PR to the Falco docs https://github.com/falcosecurity/falco-website/pull/1362
this commit updates a broken link.
## v4.8.3
* The init container, when driver.kind=auto, automatically generates
a new Falco configuration file and selects the appropriate engine
kind based on the environment where Falco is deployed.
With this commit, along with falcoctl PR #630, the Helm charts now
support different driver kinds for Falco instances based on the
specific node they are running on. When driver.kind=auto is set,
each Falco instance dynamically selects the most suitable
driver (e.g., ebpf, kmod, modern_ebpf) for the node.
+-------------------------------------------------------+
| Kubernetes Cluster |
| |
| +-------------------+ +-------------------+ |
| | Node 1 | | Node 2 | |
| | | | | |
| | Falco (ebpf) | | Falco (kmod) | |
| +-------------------+ +-------------------+ |
| |
| +-------------------+ |
| | Node 3 | |
| | | |
| | Falco (modern_ebpf)| |
| +-------------------+ |
+-------------------------------------------------------+
## v4.8.2
* fix(falco): correctly mount host filesystems when driver.kind is auto
When falco runs with the kmod/module driver it needs special filesystems
to be mounted from the host, such as /dev and /sys/module/falco.
This commit ensures that we mount them in the falco container.
Note that /sys/module/falco is now mounted as /sys/module since
we do not know which kind of driver will be used. The falco folder
exists under /sys/module only when the kernel module is loaded,
hence it's not possible to use the /sys/module/falco hostpath when driver.kind
is set to auto.
## v4.8.1
* fix(falcosidekick): add support for custom service type for webui redis
## v4.8.0
* Upgrade Falco version to 0.38.2
## v4.7.2
* use rules_files key in the preset values files
## v4.7.1
* fix(falco/config): use rules_files instead of deprecated key rules_file
## v4.7.0
* bump k8smeta plugin to version 0.2.0. The new version resolves a bug that prevented the plugin
from populating the k8smeta fields. For more info see:
* https://github.com/falcosecurity/plugins/issues/514
* https://github.com/falcosecurity/plugins/pull/517
## v4.6.3
* fix(falco): mount client-certs-volume only if certs.existingClientSecret is defined
## v4.6.2
* bump falcosidekick dependency to v0.8.* to match with future versions
## v4.6.1
* bump falcosidekick dependency to v0.8.2 (fixes bug when using externalRedis in UI)
## v4.6.0
* feat(falco): add support for Falco metrics
## v4.5.2
* bump falcosidekick dependency version to v0.8.0, for falcosidekick 2.29.0
## v4.5.2
* reordering scc configuration, making it more robust to plain yaml comparison
## v4.5.1
* falco is now able to reconnect to containerd.socket
## v4.5.0
* bump Falco version to 0.38.1
## v4.4.3
* Added a `labels` field in the controller to provide extra labeling for the daemonset/deployment
## v4.4.2
* fix wrong check in pod template where `existingSecret` was used instead of `existingClientSecret`
## v4.4.1
* bump k8s-metacollector dependency version to v0.1.1. See: https://github.com/falcosecurity/k8s-metacollector/releases
## v4.3.1
* bump falcosidekick dependency version to v0.7.19 to install the latest version through the falco chart
## v4.3.0
* `FALCO_HOSTNAME` and `HOST_ROOT` are now set by default in pods configuration.
## v4.2.6
* bump falcosidekick dependency version to v0.7.17 to install the latest version through the falco chart
## v4.2.5
* fix docs
## v4.2.4
* bump falcosidekick dependency version to v0.7.15 to install the latest version through the falco chart
## v4.2.3
* fix(falco/helpers): adjust formatting to be compatible with older helm versions
## v4.2.2
* fix(falco/README): dead link
## v4.2.1
* fix(falco/README): typos, formatting and broken links
## v4.2.0
* Bump falco to v0.37.1 and falcoctl to v0.7.2
## v4.1.2
* Fix links in output after falco install without sidekick
## v4.1.1
* Update README.md.
## v4.1.0
* Reintroduce the service account.
## v4.0.0
The new chart introduces some breaking changes. For folks upgrading Falco please see the BREAKING-CHANGES.md file.
* Uniform driver names and configuration to the Falco one: https://github.com/falcosecurity/falco/pull/2413;
* Fix usernames and groupnames resolution by mounting the `/etc` filesystem;
* Drop old kubernetes collector related resources;
* Introduce the new k8s-metacollector and k8smeta plugin (experimental);
* Enable the dependency resolver for artifacts in falcoctl, since the Falco image no longer ships the plugins;
* Bump Falco to 0.37.0;
* Bump falcoctl to 0.7.0.
## v3.8.7
* Upgrade falcosidekick chart to `v0.7.11`.
## v3.8.6
* no changes to the chart itself. Updated README.md and makefile.
## v3.8.5
* Add mTLS cryptographic material load via Helm for Falco
## v3.8.4
* Upgrade Falco to 0.36.2: https://github.com/falcosecurity/falco/releases/tag/0.36.2
## v3.8.3
* Upgrade falcosidekick chart to `v0.7.7`.
## v3.8.2
* Upgrade falcosidekick chart to `v0.7.6`.
## v3.3.0
* Upgrade Falco to 0.35.1. For more info see the release notes: https://github.com/falcosecurity/falco/releases/tag/0.35.1
* Upgrade falcoctl to 0.5.1. For more info see the release notes: https://github.com/falcosecurity/falcoctl/releases/tag/v0.5.1
* Introduce least privileged mode in modern ebpf. For more info see: https://falco.org/docs/setup/container/#docker-least-privileged-modern-ebpf
## v3.2.1
* Set falco.http_output.url to empty string in values.yaml file
## v2.5.4
* Fix incorrect entry in v2.5.2 changelog
## v2.5.3
### Minor Changes
* Upgrade to Falco 0.26.2, `DRIVERS_REPO` now defaults to https://download.falco.org/?prefix=driver/ (see the [Falco changelog](https://github.com/falcosecurity/falco/blob/0.26.2/CHANGELOG.md))
## v1.5.3

apiVersion: v2
name: falco
version: 6.2.2
appVersion: "0.41.3"
description: Falco
keywords:
- monitoring
maintainers:
  - email: cncf-falco-dev@lists.cncf.io
dependencies:
  - name: falcosidekick
    version: "0.9.*"
    condition: falcosidekick.enabled
    repository: https://falcosecurity.github.io/charts
  - name: k8s-metacollector
    version: 0.1.*
    repository: https://falcosecurity.github.io/charts
    condition: collectors.kubernetes.enabled
  - name: falco-talon
    version: 0.3.*
    repository: https://falcosecurity.github.io/charts
    condition: responseActions.enabled

# generate helm documentation
DOCS_IMAGE_VERSION="v1.11.0"
# Here we use the "latest" tag since our CI uses the same (https://github.com/falcosecurity/charts/blob/2f04bccb5cacbbf3ecc2d2659304b74f865f41dd/.circleci/config.yml#L16).
LINT_IMAGE_VERSION="v3.8.0"

docs:
	docker run \
	--rm \
	--workdir=/helm-docs \
	--volume "$$(pwd):/helm-docs" \
	-u $$(id -u) \
	jnorwood/helm-docs:$(DOCS_IMAGE_VERSION) \
	helm-docs -t ./README.gotmpl -o ./generated/helm-values.md

lint: helm-repo-update
	docker run \
	-it \
	--workdir=/data \
	--volume $$(pwd)/..:/data \
	quay.io/helmpack/chart-testing:$(LINT_IMAGE_VERSION) \
	ct lint --config ./tests/ct.yaml --charts ./falco --chart-dirs .

helm-repo-update:
	helm repo update

# Configuration values for {{ template "chart.name" . }} chart
`Chart version: v{{ template "chart.version" . }}`
# Falco
[Falco](https://falco.org) is a *Cloud Native Runtime Security* tool designed to detect anomalous activity in your applications. You can use Falco to monitor runtime security of your Kubernetes applications and internal components.
## Introduction
The deployment of Falco in a Kubernetes cluster is managed through a **Helm chart**. This chart manages the lifecycle of Falco in a cluster by handling all the k8s objects needed by Falco to be seamlessly integrated in your environment. Based on the configuration in the [values.yaml](./values.yaml) file, the chart will render and install the required k8s objects. Keep in mind that Falco can be deployed in your cluster using a `daemonset` or a `deployment`. See the next sections for more info.
## Attention
Before installing Falco in a Kubernetes cluster, a user should check that the kernel version used in the nodes is supported by the community. Also, before reporting any issue with Falco (missing kernel image, CrashLoopBackOff and similar), make sure to read [about the driver](#about-the-driver) section and adjust your setup as required.
## Adding `falcosecurity` repository
Before installing the chart, add the `falcosecurity` charts repository:
```bash
helm repo add falcosecurity https://falcosecurity.github.io/charts
helm repo update
```
## Installing the Chart
To install the chart with the release name `falco` in namespace `falco` run:
```bash
helm install falco falcosecurity/falco \
--create-namespace \
--namespace falco
```
After a few minutes Falco instances should be running on all your nodes. The status of Falco pods can be inspected through *kubectl*:
```bash
kubectl get pods -n falco -o wide
```
If everything went smoothly, you should observe an output similar to the following, indicating that all Falco instances are up and running in your cluster:
```bash
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
falco-57w7q 1/1 Running 0 3m12s 10.244.0.1 control-plane <none> <none>
falco-h4596 1/1 Running 0 3m12s 10.244.1.2 worker-node-1 <none> <none>
falco-kb55h 1/1 Running 0 3m12s 10.244.2.3 worker-node-2 <none> <none>
```
The cluster in our example has three nodes, one *control-plane* node and two *worker* nodes. The default configuration in [values.yaml](./values.yaml) of our helm chart deploys Falco using a `daemonset`. That's the reason why we have one Falco pod in each node.
> **Tip**: List Falco release using `helm list -n falco`, a release is a name used to track a specific deployment.
### Falco, Event Sources and Kubernetes
Starting from Falco 0.31.0 the [new plugin system](https://falco.org/docs/plugins/) is stable and production ready. The **plugin system** can be seen as the next step in the evolution of Falco. Historically, Falco monitored system events from the **kernel**, trying to detect malicious behaviors on Linux systems. It also had the capability to process k8s Audit Logs to detect suspicious activities in Kubernetes clusters. Since Falco 0.32.0 all the code related to the k8s Audit Logs was removed from Falco and ported to a [plugin](https://github.com/falcosecurity/plugins/tree/master/plugins/k8saudit). Currently Falco supports different event sources coming from **plugins** or **drivers** (system events).
Note that **a Falco instance can handle multiple event sources in parallel**. You can deploy Falco leveraging **drivers** for syscall events while at the same time loading **plugins**. A step-by-step guide on how to deploy Falco with multiple sources can be found [here](https://falco.org/docs/getting-started/learning-environments/#falco-with-multiple-sources).
#### About Drivers
Falco needs a **driver** to analyze the system workload and pass security events to userspace. The supported drivers are:
* [Modern eBPF probe](https://falco.org/docs/concepts/event-sources/kernel/#modern-ebpf-probe)
* [Kernel module](https://falco.org/docs/concepts/event-sources/kernel/#kernel-module)
* [Legacy eBPF probe](https://falco.org/docs/concepts/event-sources/kernel/#legacy-ebpf-probe)
The driver must be loaded on the node where Falco is running. Falco now prefers the **Modern eBPF probe** by default. When using **falcoctl** with `driver.kind=auto`, it will automatically choose the best driver for your system. Specifically, it first attempts to use the Modern eBPF probe (which is shipped directly within the Falco binary) and will fall back to the _kernel module_ or the _original eBPF probe_ if the necessary BPF features are not available.
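As a sketch, the driver kind can be pinned in a custom values file instead of relying on auto-selection (key names taken from the chart's `driver` section; verify against your chart version):

```yaml
driver:
  enabled: true
  # Accepted kinds include: auto, modern_ebpf, ebpf, kmod.
  # "auto" lets the driver loader pick the best fit for the node;
  # pinning a kind skips the auto-detection step.
  kind: modern_ebpf
```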
##### Pre-built drivers
The [kernel-crawler](https://github.com/falcosecurity/kernel-crawler) automatically discovers kernel versions and flavors. At the moment, it runs weekly. We have a site where users can check for the discovered kernel flavors and versions, [example for Amazon Linux 2](https://falcosecurity.github.io/kernel-crawler/?arch=x86_64&target=AmazonLinux2).
The discovery of a kernel version by the [kernel-crawler](https://falcosecurity.github.io/kernel-crawler/) does not imply that pre-built kernel modules and bpf probes are available. That is because once kernel-crawler has discovered new kernel versions, the drivers need to be built by jobs running on our [Driver Build Grid infra](https://github.com/falcosecurity/test-infra#dbg). Please keep in mind that the building process is based on best effort. Users can check the existence of prebuilt modules at the following [link](https://download.falco.org/driver/site/index.html?lib=3.0.1%2Bdriver&target=all&arch=all&kind=all).
##### Building the driver on the fly (fallback)
If a prebuilt driver is not available for your distribution/kernel, you can build the driver yourself, or install the kernel headers on the nodes so that the init container (falco-driver-loader) will try to build the driver on the fly.
Falco needs **kernel headers** installed on the host as a prerequisite to build the driver on the fly correctly. You can find instructions for installing the kernel headers for your system under the [Install section](https://falco.org/docs/getting-started/installation/) of the official documentation.
##### Selecting a different driver loader image
Note that since Falco 0.36.0 and Helm chart version 3.7.0 the driver loader image has been updated to be compatible with newer kernels (5.x and above) meaning that if you have an older kernel version and you are trying to build the kernel module you may experience issues. In that case you can use the `falco-driver-loader-legacy` image to use the previous version of the toolchain. To do so you can set the appropriate value, i.e. `--set driver.loader.initContainer.image.repository=falcosecurity/falco-driver-loader-legacy`.
#### About Plugins
[Plugins](https://falco.org/docs/plugins/) are used to extend Falco to support new **data sources**. The current **plugin framework** supports *plugins* with the following *capabilities*:
* Event sourcing capability;
* Field extraction capability;
Plugin capabilities are *composable*: a single plugin can offer both capabilities, or we can load two different plugins, each with its own capability, one as a source of events and another as an extractor. A good example of this is the [Kubernetes Audit Events](https://github.com/falcosecurity/plugins/tree/master/plugins/k8saudit) and the [Falcosecurity Json](https://github.com/falcosecurity/plugins/tree/master/plugins/json) *plugins*. By deploying them both we have support for the **K8s Audit Logs** in Falco.
Note that **the driver is not required when using plugins**.
#### About gVisor
gVisor is an application kernel, written in Go, that implements a substantial portion of the Linux system call interface. It provides an additional layer of isolation between running applications and the host operating system. For more information please consult the [official docs](https://gvisor.dev/docs/). In version `0.32.1`, Falco first introduced support for gVisor by leveraging the stream of system call information coming from gVisor.
Falco requires the version of [runsc](https://gvisor.dev/docs/user_guide/install/) to be equal to or above `20220704.0`. The following snippet shows the gVisor configuration variables found in [values.yaml](./values.yaml):
```yaml
driver:
  gvisor:
    enabled: true
    runsc:
      path: /home/containerd/usr/local/sbin
      root: /run/containerd/runsc
      config: /run/containerd/runsc/config.toml
```
Falco uses the [runsc](https://gvisor.dev/docs/user_guide/install/) binary to interact with sandboxed containers. The following variables need to be set:
* `runsc.path`: absolute path of the `runsc` binary in the k8s nodes;
* `runsc.root`: absolute path of the root directory of the `runsc` container runtime. It is of vital importance for Falco since `runsc` stores there the information of the workloads handled by it;
* `runsc.config`: absolute path of the `runsc` configuration file, used by Falco to set its configuration and make gVisor aware of its presence.
If you want to know more about how Falco uses those configuration paths, please have a look at the `falco.gvisor.initContainer` helper in [helpers.tpl](./templates/_helpers.tpl).
A preset `values.yaml` file [values-gvisor-gke.yaml](./values-gvisor-gke.yaml) is provided and can be used as it is to deploy Falco with gVisor support in a [GKE](https://cloud.google.com/kubernetes-engine/docs/how-to/sandbox-pods) cluster. It is also a good starting point for custom deployments.
##### Example: running Falco on GKE, with or without gVisor-enabled pods
If you use GKE with k8s version at least `1.24.4-gke.1800` or `1.25.0-gke.200` with gVisor sandboxed pods, you can install a Falco instance to monitor them with, e.g.:
```
helm install falco-gvisor falcosecurity/falco \
--create-namespace \
--namespace falco-gvisor \
-f https://raw.githubusercontent.com/falcosecurity/charts/master/charts/falco/values-gvisor-gke.yaml
```
Note that the instance of Falco above will only monitor gVisor sandboxed workloads on gVisor-enabled node pools. If you also need to monitor regular workloads on regular node pools you can use the eBPF driver as usual:
```
helm install falco falcosecurity/falco \
--create-namespace \
--namespace falco \
--set driver.kind=ebpf
```
The two instances of Falco will operate independently and can be installed, uninstalled or configured as needed. If you were already monitoring your regular node pools with eBPF you don't need to reinstall it.
##### Falco+gVisor additional resources
An exhaustive blog post about Falco and gVisor can be found on the [Falco blog](https://falco.org/blog/intro-gvisor-falco/).
If you need help setting up gVisor in your environment, please have a look at the [gVisor official docs](https://gvisor.dev/docs/user_guide/quick_start/kubernetes/).
### About Falco Artifacts
Historically **rules files** and **plugins** used to be shipped inside the Falco docker image and/or inside the chart. Starting from version `v0.3.0` of the chart, the [**falcoctl tool**](https://github.com/falcosecurity/falcoctl) can be used to install/update **rules files** and **plugins**. When referring to such objects we will use the term **artifact**. For more info please check out the following [proposal](https://github.com/falcosecurity/falcoctl/blob/main/proposals/20220916-rules-and-plugin-distribution.md).
The default configuration of the chart for new installations is to use the **falcoctl** tool to handle **artifacts**. The chart will deploy two new containers along the Falco one:
* `falcoctl-artifact-install` an init container that makes sure to install the configured **artifacts** before the Falco container starts;
* `falcoctl-artifact-follow` a sidecar container that periodically checks for new artifacts (currently only *falco-rules*) and downloads them;
For more info on how to enable/disable and configure the **falcoctl** tool checkout the config values [here](./README.md#Configuration) and the [upgrading notes](./BREAKING-CHANGES.md#300)
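As an illustration, a minimal values fragment that makes the default falcoctl behavior explicit might look like this (a sketch; the `falco-rules` artifact reference is illustrative — check the chart's values.yaml for the exact defaults of your version):

```yaml
falcoctl:
  artifact:
    install:
      # Init container: install the configured artifacts before Falco starts.
      enabled: true
    follow:
      # Sidecar: periodically check for and download new rules artifacts.
      enabled: true
  config:
    artifact:
      install:
        refs: [falco-rules:latest]
      follow:
        refs: [falco-rules:latest]
```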
### Deploying Falco in Kubernetes
After the clarification of the different [**event sources**](#falco-event-sources-and-kubernetes) and how they are consumed by Falco using the **drivers** and the **plugins**, now let us discuss how Falco is deployed in Kubernetes.
The chart deploys Falco using a `daemonset` or a `deployment` depending on the **event sources**.
#### Daemonset
When using the [drivers](#about-the-driver), Falco is typically deployed as a `DaemonSet`. By using a DaemonSet, Kubernetes ensures that a Falco instance is running on each node even as new nodes are added to your cluster. This makes it a perfect fit for monitoring across the entire cluster.
By default, with `driver.kind=auto`, the correct driver will be automatically selected for each node. This is accomplished through the **driver loader** (implemented by `falcoctl`), which generates a new Falco configuration file and picks the right engine driver (Modern eBPF, kmod, or legacy eBPF) based on the underlying environment. If you prefer to manually force a specific driver, see the other available options below.
**Kernel module**
To run Falco with the [kernel module](https://falco.org/docs/concepts/event-sources/kernel/#kernel-module) you just need to set `driver.kind=kmod` as shown in the following snippet:
```bash
helm install falco falcosecurity/falco \
--create-namespace \
--namespace falco \
--set driver.kind=kmod
```
**Legacy eBPF probe**
To run Falco with the [eBPF probe](http://falco.org/docs/concepts/event-sources/kernel/#legacy-ebpf-probe) you just need to set `driver.kind=ebpf` as shown in the following snippet:
```bash
helm install falco falcosecurity/falco \
--create-namespace \
--namespace falco \
--set driver.kind=ebpf
```
There are other configurations related to the eBPF probe, for more info please check the [values.yaml](./values.yaml) file. After you have made your changes to the configuration file you just need to run:
```bash
helm install falco falcosecurity/falco \
--create-namespace \
--namespace "your-custom-name-space" \
-f "path-to-custom-values.yaml-file"
```
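For reference, the same eBPF-related options can also be expressed as a values fragment instead of `--set` flags (a hedged sketch; key names are assumed to mirror the chart's `driver.ebpf` section, so verify them against your chart version's values.yaml):

```yaml
driver:
  kind: ebpf
  ebpf:
    # Run the probe with reduced privileges instead of a fully privileged pod.
    leastPrivileged: false
    # Use host networking for the Falco pod if your environment requires it.
    hostNetwork: false
```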
**Modern eBPF probe**
To run Falco with the [modern eBPF probe](https://falco.org/docs/concepts/event-sources/kernel/#modern-ebpf-probe) you just need to set `driver.kind=modern_ebpf` as shown in the following snippet:
```bash
helm install falco falcosecurity/falco \
--create-namespace \
--namespace falco \
--set driver.kind=modern_ebpf
```
#### Deployment
In the scenario where Falco is used with **plugins** as data sources, the best option is to deploy it as a k8s `deployment`. **Plugins** can be of two types: the ones that follow the **push model** and the ones that follow the **pull model**. A plugin that adopts the first model expects to receive data from a remote source on a given endpoint: it just exposes an endpoint and waits for data to be posted. For example, [Kubernetes Audit Events](https://github.com/falcosecurity/plugins/tree/master/plugins/k8saudit) expects the data to be sent by the *k8s api server* when configured in such a way. On the other hand, plugins that abide by the **pull model** retrieve the data from a given remote service.
The following points explain why a k8s `deployment` is suitable when deploying Falco with plugins:
* need to be reachable when ingesting logs directly from remote services;
* need only one active replica, otherwise events will be sent/received to/from different Falco instances;
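These constraints translate into a small values fragment (a sketch; see the chart's `controller` section in values.yaml for the full set of options):

```yaml
controller:
  kind: deployment
  deployment:
    # A single replica avoids the same events being sent to or received
    # from multiple Falco instances.
    replicas: 1
```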
## Uninstalling the Chart
To uninstall a Falco release from your Kubernetes cluster, always use helm. It will take care of removing all the components deployed by the chart and cleaning up your environment. The following command will remove a release called `falco` in namespace `falco`:
```bash
helm uninstall falco --namespace falco
```
## Showing logs generated by Falco container
There are many reasons why we would have to inspect the messages emitted by the Falco container. When deployed in Kubernetes the Falco logs can be inspected through:
```bash
kubectl logs -n falco falco-pod-name
```
where `falco-pod-name` is the name of the Falco pod running in your cluster.
The command described above will just display the logs emitted by Falco up to the moment you run the command. The `-f` flag comes in handy when we are doing live testing or debugging and we want to see the Falco logs as soon as they are emitted. The following command:
```bash
kubectl logs -f -n falco falco-pod-name
```
The `-f (--follow)` flag follows the logs and live-streams them to your terminal; it is really useful when you are debugging a new rule and want to make sure that the rule is triggered when some actions are performed in the system.
If we need to access logs of a previous Falco run we do that by adding the `-p (--previous)` flag:
```bash
kubectl logs -p -n falco falco-pod-name
```
A scenario when we need the `-p (--previous)` flag is when we have a restart of a Falco pod and want to check what went wrong.
### Enabling real time logs
By default, Falco's output is buffered. When live streaming logs, we will notice delays between the log output (rules triggering) and the event happening.
In order to enable the logs to be emitted without delays you need to set `.Values.tty=true` in [values.yaml](./values.yaml) file.
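For example, the equivalent values fragment is a one-liner (a minimal sketch):

```yaml
# Attach a TTY to the Falco container so output is flushed line by line
# instead of being buffered.
tty: true
```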
## K8s-metacollector
Starting from Falco `0.37` the old [k8s-client](https://github.com/falcosecurity/falco/issues/2973) has been removed.
A new component named [k8s-metacollector](https://github.com/falcosecurity/k8s-metacollector) replaces it.
The *k8s-metacollector* is a self-contained module that can be deployed within a Kubernetes cluster to perform the task of gathering metadata
from various Kubernetes resources and subsequently transmitting this collected metadata to designated subscribers.
Kubernetes resources for which metadata will be collected and sent to Falco:
* pods;
* namespaces;
* deployments;
* replicationcontrollers;
* replicasets;
* services;
### Plugin
Since the *k8s-metacollector* is standalone, deployed in the cluster as a deployment, Falco instances need to connect to the component
in order to retrieve the `metadata`. This is where the [k8smeta](https://github.com/falcosecurity/plugins/tree/master/plugins/k8smeta) plugin comes in.
The plugin gathers details about Kubernetes resources from the *k8s-metacollector*. It then stores this information
in tables and provides access to Falco upon request. The plugin specifically acquires data for the node where the
associated Falco instance is deployed, resulting in node-level granularity.
### Exported Fields: Old and New
The old [k8s-client](https://github.com/falcosecurity/falco/issues/2973) used to populate the
[k8s](https://falco.org/docs/reference/rules/supported-fields/#field-class-k8s) fields. The **k8s** field class is still
available in Falco, for compatibility reasons, but most of the fields will return `N/A`. The following fields are still
usable and will return meaningful data when the `container runtime collectors` are enabled:
* k8s.pod.name;
* k8s.pod.id;
* k8s.pod.label;
* k8s.pod.labels;
* k8s.pod.ip;
* k8s.pod.cni.json;
* k8s.pod.namespace.name;
The [k8smeta](https://github.com/falcosecurity/plugins/tree/master/plugins/k8smeta) plugin exports a whole new
[field class](https://github.com/falcosecurity/plugins/tree/master/plugins/k8smeta#supported-fields). Note that the new
`k8smeta.*` fields are usable only when the **k8smeta** plugin is loaded in Falco.
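As an illustration, a hypothetical custom rule could enrich its output with `k8smeta.*` fields like this (the rule is an example written for this README, not part of the chart, and it only produces meaningful values when the k8smeta plugin is loaded, i.e. with `collectors.kubernetes.enabled=true`):

```yaml
customRules:
  k8smeta-example.yaml: |-
    - rule: Shell spawned in pod (k8smeta enriched)
      desc: Example rule whose output uses k8smeta fields for pod metadata
      condition: spawned_process and proc.name in (bash, sh)
      output: >
        Shell spawned in pod
        (pod=%k8smeta.pod.name ns=%k8smeta.ns.name deploy=%k8smeta.deployment.name
        command=%proc.cmdline)
      priority: NOTICE
```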
### Enabling the k8s-metacollector
The following command will deploy Falco + k8s-metacollector + k8smeta:
```bash
helm install falco falcosecurity/falco \
--namespace falco \
--create-namespace \
--set collectors.kubernetes.enabled=true
```
## Loading custom rules
Falco ships with a nice default ruleset. It is a good starting point, but sooner or later we will need to add custom rules that fit our needs.
So the question is: how can we load custom rules into our Falco deployment?
We are going to create a file that contains the custom rules so that we can keep it in a Git repository.
```bash
cat custom-rules.yaml
```
And the file looks like this one:
```yaml
customRules:
  rules-traefik.yaml: |-
    - macro: traefik_consider_syscalls
      condition: (evt.num < 0)

    - macro: app_traefik
      condition: container and container.image startswith "traefik"

    # Restricting listening ports to selected set
    - list: traefik_allowed_inbound_ports_tcp
      items: [443, 80, 8080]

    - rule: Unexpected inbound tcp connection traefik
      desc: Detect inbound traffic to traefik using tcp on a port outside of expected set
      condition: inbound and evt.rawres >= 0 and not fd.sport in (traefik_allowed_inbound_ports_tcp) and app_traefik
      output: Inbound network connection to traefik on unexpected port (command=%proc.cmdline pid=%proc.pid connection=%fd.name sport=%fd.sport user=%user.name %container.info image=%container.image)
      priority: NOTICE

    # Restricting spawned processes to selected set
    - list: traefik_allowed_processes
      items: ["traefik"]

    - rule: Unexpected spawned process traefik
      desc: Detect a process started in a traefik container outside of an expected set
      condition: spawned_process and not proc.name in (traefik_allowed_processes) and app_traefik
      output: Unexpected process spawned in traefik container (command=%proc.cmdline pid=%proc.pid user=%user.name %container.info image=%container.image)
      priority: NOTICE
```
So the next step is to use the `custom-rules.yaml` file when installing the Falco Helm chart:
```bash
helm install falco -f custom-rules.yaml falcosecurity/falco
```
And we will see in our logs something like:
```bash
Tue Jun 5 15:08:57 2018: Loading rules from file /etc/falco/rules.d/rules-traefik.yaml:
```
And this means that our Falco installation has loaded the rules and is ready to help us.
## Kubernetes Audit Log
The Kubernetes Audit Log is now supported via the built-in [k8saudit](https://github.com/falcosecurity/plugins/tree/master/plugins/k8saudit) plugin. It is entirely up to you to set up the [webhook backend](https://kubernetes.io/docs/tasks/debug/debug-cluster/audit/#webhook-backend) of the Kubernetes API server to forward the Audit Log event to the Falco listening port.
The following snippet shows how to deploy Falco with the [k8saudit](https://github.com/falcosecurity/plugins/tree/master/plugins/k8saudit) plugin:
```yaml
# -- Disable the drivers since we want to deploy only the k8saudit plugin.
driver:
enabled: false
# -- Disable the collectors, no syscall events to enrich with metadata.
collectors:
enabled: false
# -- Deploy Falco as a deployment. One instance of Falco is enough. Anyway the number of replicas is configurable.
controller:
kind: deployment
deployment:
# -- Number of replicas when installing Falco using a deployment. Change it if you really know what you are doing.
# For more info check the section on Plugins in the README.md file.
replicas: 1
falcoctl:
artifact:
install:
# -- Enable the init container. We do not recommend installing (or following) plugins for security reasons since they are executable objects.
enabled: true
follow:
# -- Enable the sidecar container. We do not support it yet for plugins. It is used only for rules feed such as k8saudit-rules rules.
enabled: true
config:
artifact:
install:
# -- Resolve the dependencies for artifacts.
resolveDeps: true
# -- List of artifacts to be installed by the falcoctl init container.
# Only rulesfile, the plugin will be installed as a dependency.
refs: [k8saudit-rules:0.5]
follow:
# -- List of artifacts to be followed by the falcoctl sidecar container.
refs: [k8saudit-rules:0.5]
services:
- name: k8saudit-webhook
type: NodePort
ports:
- port: 9765 # See plugin open_params
nodePort: 30007
protocol: TCP
falco:
rules_files:
- /etc/falco/k8s_audit_rules.yaml
- /etc/falco/rules.d
plugins:
- name: k8saudit
library_path: libk8saudit.so
init_config:
""
# maxEventBytes: 1048576
# sslCertificate: /etc/falco/falco.pem
open_params: "http://:9765/k8s-audit"
- name: json
library_path: libjson.so
init_config: ""
# Plugins that Falco will load. Note: the same plugins are installed by the falcoctl-artifact-install init container.
load_plugins: [k8saudit, json]
```
Here is the explanation of the above configuration:
* disable the drivers by setting `driver.enabled=false`;
* disable the collectors by setting `collectors.enabled=false`;
* deploy Falco using a k8s *deployment* by setting `controller.kind=deployment`;
* make our Falco instance reachable by the `k8s api-server` by configuring a service for it in `services`;
* enable the `falcoctl-artifact-install` init container;
* configure `falcoctl-artifact-install` to install the required plugins;
* enable the `falcoctl-artifact-follow` sidecar container to keep the `k8saudit-rules` ruleset up to date;
* load the correct ruleset for our plugin in `falco.rules_files`;
* configure the plugins to be loaded, in this case `k8saudit` and `json`;
* and finally add our plugins to `load_plugins` to be loaded by Falco.
The configuration can be found in the [values-k8saudit.yaml](./values-k8saudit.yaml) file, ready to be used:
```bash
#make sure the falco namespace exists
helm install falco falcosecurity/falco \
--create-namespace \
--namespace falco \
-f ./values-k8saudit.yaml
```
After a few minutes a Falco instance should be running on your cluster. The status of Falco pod can be inspected through *kubectl*:
```bash
kubectl get pods -n falco -o wide
```
If everything went smoothly, you should observe an output similar to the following, indicating that the Falco instance is up and running:
```bash
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
falco-64484d9579-qckms 1/1 Running 0 101s 10.244.2.2 worker-node-2 <none> <none>
```
Furthermore, you can check the Falco logs through *kubectl logs*:
```bash
kubectl logs -n falco falco-64484d9579-qckms
```
In the logs you should have something similar to the following, indicating that Falco has loaded the required plugins:
```bash
Fri Jul 8 16:07:24 2022: Falco version 0.32.0 (driver version 39ae7d40496793cf3d3e7890c9bbdc202263836b)
Fri Jul 8 16:07:24 2022: Falco initialized with configuration file /etc/falco/falco.yaml
Fri Jul 8 16:07:24 2022: Loading plugin (k8saudit) from file /usr/share/falco/plugins/libk8saudit.so
Fri Jul 8 16:07:24 2022: Loading plugin (json) from file /usr/share/falco/plugins/libjson.so
Fri Jul 8 16:07:24 2022: Loading rules from file /etc/falco/k8s_audit_rules.yaml:
Fri Jul 8 16:07:24 2022: Starting internal webserver, listening on port 8765
```
*Note that the support for the dynamic backend (also known as the `AuditSink` object) has been deprecated from Kubernetes and removed from this chart.*
### Manual setup with NodePort on kOps
Using `kops edit cluster`, ensure these options are present, then run `kops update cluster` and `kops rolling-update cluster`:
```yaml
spec:
  kubeAPIServer:
    auditLogMaxBackups: 1
    auditLogMaxSize: 10
    auditLogPath: /var/log/k8s-audit.log
    auditPolicyFile: /srv/kubernetes/assets/audit-policy.yaml
    auditWebhookBatchMaxWait: 5s
    auditWebhookConfigFile: /srv/kubernetes/assets/webhook-config.yaml
  fileAssets:
    - content: |
        # content of the webserver CA certificate
        # remove this fileAsset and certificate-authority from webhook-config if using http
      name: audit-ca.pem
      roles:
        - Master
    - content: |
        apiVersion: v1
        kind: Config
        clusters:
          - name: falco
            cluster:
              # remove 'certificate-authority' when using 'http'
              certificate-authority: /srv/kubernetes/assets/audit-ca.pem
              server: https://localhost:32765/k8s-audit
        contexts:
          - context:
              cluster: falco
              user: ""
            name: default-context
        current-context: default-context
        preferences: {}
        users: []
      name: webhook-config.yaml
      roles:
        - Master
    - content: |
        # ... paste audit-policy.yaml here ...
        # https://raw.githubusercontent.com/falcosecurity/plugins/master/plugins/k8saudit/configs/audit-policy.yaml
      name: audit-policy.yaml
      roles:
        - Master
```
## Enabling gRPC
The Falco gRPC server and the Falco gRPC Outputs APIs are not enabled by default.
Moreover, Falco supports running a gRPC server with two main binding types:
- Over a local **Unix socket** with no authentication
- Over the **network** with mandatory mutual TLS authentication (mTLS)
### gRPC over unix socket (default)
The preferred way to use gRPC is over a Unix socket.
To install Falco with gRPC enabled over a **Unix socket**, run:
```shell
helm install falco falcosecurity/falco \
  --create-namespace \
  --namespace falco \
  --set falco.grpc.enabled=true \
  --set falco.grpc_output.enabled=true
```
### gRPC over network
The gRPC server over the network can only be used with mutual authentication between the clients and the server using TLS certificates.
How to generate the certificates is [documented here](https://falco.org/docs/grpc/#generate-valid-ca).
To install Falco with gRPC enabled over the **network**, run:
```shell
helm install falco falcosecurity/falco \
  --create-namespace \
  --namespace falco \
  --set falco.grpc.enabled=true \
  --set falco.grpc_output.enabled=true \
  --set falco.grpc.unixSocketPath="" \
  --set-file certs.server.key=/path/to/server.key \
  --set-file certs.server.crt=/path/to/server.crt \
  --set-file certs.ca.crt=/path/to/ca.crt
```
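The linked documentation describes the full procedure; as a minimal, self-signed sketch for local testing only (the subject CNs are placeholder values, not required names):

```shell
# For testing only: create a throwaway CA, then a server certificate signed by it.
# The subject CNs ("falco-ca", "falco-grpc") are placeholders.
openssl req -x509 -newkey rsa:4096 -days 365 -nodes \
  -keyout ca.key -out ca.crt -subj "/CN=falco-ca"
openssl req -newkey rsa:4096 -nodes \
  -keyout server.key -out server.csr -subj "/CN=falco-grpc"
openssl x509 -req -days 365 -in server.csr \
  -CA ca.crt -CAkey ca.key -CAcreateserial -out server.crt
```

The resulting `server.key`, `server.crt`, and `ca.crt` can then be passed to the `--set-file` flags shown above.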
## Enable http_output
HTTP output enables Falco to send events through HTTP(S) via the following configuration:
```shell
helm install falco falcosecurity/falco \
  --create-namespace \
  --namespace falco \
  --set falco.http_output.enabled=true \
  --set falco.http_output.url="http://some.url/some/path/" \
  --set falco.json_output=true \
  --set falco.json_include_output_property=true
```
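The same settings can also be kept in a values file and passed to `helm install` with `-f`; a sketch equivalent to the flags above:

```yaml
falco:
  http_output:
    enabled: true
    url: "http://some.url/some/path/"
  json_output: true
  json_include_output_property: true
```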
Additionally, you can enable mTLS communication and load HTTP client cryptographic material via:
```shell
helm install falco falcosecurity/falco \
  --create-namespace \
  --namespace falco \
  --set falco.http_output.enabled=true \
  --set falco.http_output.url="https://some.url/some/path/" \
  --set falco.json_output=true \
  --set falco.json_include_output_property=true \
  --set falco.http_output.mtls=true \
  --set falco.http_output.client_cert="/etc/falco/certs/client/client.crt" \
  --set falco.http_output.client_key="/etc/falco/certs/client/client.key" \
  --set falco.http_output.ca_cert="/etc/falco/certs/client/ca.crt" \
  --set-file certs.client.key="/path/to/client.key",certs.client.crt="/path/to/client.crt",certs.ca.crt="/path/to/cacert.crt"
```
Alternatively, instead of setting the files directly via `--set-file`, you can mount an existing secret by setting the `certs.existingClientSecret` value.
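For example, such a secret might look as follows; the secret name is hypothetical, and the key names are assumed to mirror the `certs.client.*` paths shown above:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: falco-client-certs   # hypothetical name
  namespace: falco
type: Opaque
stringData:
  client.key: |
    # paste the client key (PEM) here
  client.crt: |
    # paste the client certificate (PEM) here
  ca.crt: |
    # paste the CA certificate (PEM) here
```

It could then be referenced with `--set certs.existingClientSecret=falco-client-certs`.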
## Deploy Falcosidekick with Falco
[`Falcosidekick`](https://github.com/falcosecurity/falcosidekick) can be installed with `Falco` by setting `--set falcosidekick.enabled=true`. This setting automatically configures all options of `Falco` for working with `Falcosidekick`.
All values for the configuration of `Falcosidekick` are available by prefixing them with `falcosidekick.`. The full list of available values is [here](https://github.com/falcosecurity/charts/tree/master/charts/falcosidekick#configuration).
For example, to enable the deployment of [`Falcosidekick-UI`](https://github.com/falcosecurity/falcosidekick-ui), add `--set falcosidekick.enabled=true --set falcosidekick.webui.enabled=true`.
If you use a proxy in your cluster, the requests between `Falco` and `Falcosidekick` might be captured; to avoid that, use the full FQDN of `Falcosidekick` by setting `--set falcosidekick.fullfqdn=true`.
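The flags above can equivalently be collected in a values file; a sketch using only the values named in this section:

```yaml
falcosidekick:
  enabled: true
  webui:
    enabled: true
  # set to true only if a proxy might capture in-cluster requests
  fullfqdn: false
```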
## Configuration
The following table lists the main configurable parameters of the {{ template "chart.name" . }} chart v{{ template "chart.version" . }} and their default values. See [values.yaml](./values.yaml) for the full list.
{{ template "chart.valuesSection" . }}

# CI values for Falco.
# The following values will bypass the installation of the kernel module
# and disable the kernel space driver.
# disable the kernel space driver
driver:
  enabled: false
# make Falco run in userspace only mode
extra:
  args:
    - --userspace
# enforce /proc mounting since Falco still tries to scan it
mounts:
  enforceProcMount: true

# Configuration values for falco chart
`Chart version: v3.8.0`
## Values
| Key | Type | Default | Description |
|-----|------|---------|-------------|
| affinity | object | `{}` | Affinity constraint for pods' scheduling. |
| certs | object | `{"ca":{"crt":""},"existingSecret":"","server":{"crt":"","key":""}}` | Certificates used by the webserver and gRPC server. Paste the certificate content, use helm with --set-file, or use an existing secret containing key, crt, and ca, as well as the PEM bundle. |
| certs.ca.crt | string | `""` | CA certificate used by gRPC, webserver and AuditSink validation. |
| certs.existingSecret | string | `""` | Existing secret containing key, crt, and ca, as well as the PEM bundle. |
| certs.server.crt | string | `""` | Certificate used by gRPC and webserver. |
| certs.server.key | string | `""` | Key used by gRPC and webserver. |
| collectors.containerd.enabled | bool | `true` | Enable ContainerD support. |
| collectors.containerd.socket | string | `"/run/containerd/containerd.sock"` | The path of the ContainerD socket. |
| collectors.crio.enabled | bool | `true` | Enable CRI-O support. |
| collectors.crio.socket | string | `"/run/crio/crio.sock"` | The path of the CRI-O socket. |
| collectors.docker.enabled | bool | `true` | Enable Docker support. |
| collectors.docker.socket | string | `"/var/run/docker.sock"` | The path of the Docker daemon socket. |
| collectors.enabled | bool | `true` | Enable/disable all the metadata collectors. |
| collectors.kubernetes.apiAuth | string | `"/var/run/secrets/kubernetes.io/serviceaccount/token"` | Provide the authentication method Falco should use to connect to the Kubernetes API. |
| collectors.kubernetes.apiUrl | string | `"https://$(KUBERNETES_SERVICE_HOST)"` | |
| collectors.kubernetes.enableNodeFilter | bool | `true` | If true, only the current node (on which Falco is running) will be considered when requesting metadata of pods to the API server. Disabling this option may have a performance penalty on large clusters. |
| collectors.kubernetes.enabled | bool | `true` | Enable Kubernetes metadata collection via a connection to the Kubernetes API server. When this option is disabled, Falco falls back to the container annotations to grab the metadata. In such a case, only the ID, name, namespace, and labels of the pod will be available. |
| containerSecurityContext | object | `{}` | Set securityContext for the Falco container. For more info see the "falco.securityContext" helper in "pod-template.tpl" |
| controller.annotations | object | `{}` | |
| controller.daemonset.updateStrategy.type | string | `"RollingUpdate"` | Perform rolling updates by default in the DaemonSet agent ref: https://kubernetes.io/docs/tasks/manage-daemon/update-daemon-set/ |
| controller.deployment.replicas | int | `1` | Number of replicas when installing Falco using a deployment. Change it if you really know what you are doing. For more info check the section on Plugins in the README.md file. |
| controller.kind | string | `"daemonset"` | |
| customRules | object | `{}` | Third party rules enabled for Falco. More info on the dedicated section in README.md file. |
| driver.ebpf | object | `{"hostNetwork":false,"leastPrivileged":false,"path":null}` | Configuration section for ebpf driver. |
| driver.ebpf.hostNetwork | bool | `false` | Needed to enable eBPF JIT at runtime for performance reasons. Can be skipped if eBPF JIT is enabled from outside the container |
| driver.ebpf.leastPrivileged | bool | `false` | Constrain Falco with capabilities instead of running a privileged container. Ensure the eBPF driver is enabled (i.e., setting the `driver.kind` option to `ebpf`). Capabilities used: {CAP_SYS_RESOURCE, CAP_SYS_ADMIN, CAP_SYS_PTRACE}. On kernel versions >= 5.8 'CAP_PERFMON' and 'CAP_BPF' could replace 'CAP_SYS_ADMIN' but please pay attention to the 'kernel.perf_event_paranoid' value on your system. Usually 'kernel.perf_event_paranoid>2' means that you cannot use 'CAP_PERFMON' and you should fallback to 'CAP_SYS_ADMIN', but the behavior changes across different distros. Read more on that here: https://falco.org/docs/event-sources/kernel/#least-privileged-mode-1 |
| driver.ebpf.path | string | `nil` | Path where the eBPF probe is located. It comes in handy when the probe has been installed on the nodes using tools other than the init container deployed with the chart. |
| driver.enabled | bool | `true` | Set it to false if you want to deploy Falco without the drivers. Always set it to false when using Falco with plugins. |
| driver.kind | string | `"module"` | Tell Falco which driver to use. Available options: module (kernel driver), ebpf (eBPF probe), modern-bpf (modern eBPF probe). |
| driver.loader | object | `{"enabled":true,"initContainer":{"args":[],"env":[],"image":{"pullPolicy":"IfNotPresent","registry":"docker.io","repository":"falcosecurity/falco-driver-loader","tag":""},"resources":{},"securityContext":{}}}` | Configuration for the Falco init container. |
| driver.loader.enabled | bool | `true` | Enable/disable the init container. |
| driver.loader.initContainer.args | list | `[]` | Arguments to pass to the Falco driver loader init container. |
| driver.loader.initContainer.env | list | `[]` | Extra environment variables that will be passed to the Falco driver loader init container. |
| driver.loader.initContainer.image.pullPolicy | string | `"IfNotPresent"` | The image pull policy. |
| driver.loader.initContainer.image.registry | string | `"docker.io"` | The image registry to pull from. |
| driver.loader.initContainer.image.repository | string | `"falcosecurity/falco-driver-loader"` | The image repository to pull from. |
| driver.loader.initContainer.resources | object | `{}` | Resources requests and limits for the Falco driver loader init container. |
| driver.loader.initContainer.securityContext | object | `{}` | Security context for the Falco driver loader init container. Overrides the default security context. If driver.kind == "module" you must at least set `privileged: true`. |
| driver.modern_bpf | object | `{"leastPrivileged":false}` | Configuration section for modern bpf driver. |
| driver.modern_bpf.leastPrivileged | bool | `false` | Constrain Falco with capabilities instead of running a privileged container. Ensure the modern bpf driver is enabled (i.e., setting the `driver.kind` option to `modern-bpf`). Capabilities used: {CAP_SYS_RESOURCE, CAP_BPF, CAP_PERFMON, CAP_SYS_PTRACE}. Read more on that here: https://falco.org/docs/event-sources/kernel/#least-privileged-mode-2 |
| extra.args | list | `[]` | Extra command-line arguments. |
| extra.env | list | `[]` | Extra environment variables that will be passed to Falco containers. |
| extra.initContainers | list | `[]` | Additional initContainers for Falco pods. |
| falco.base_syscalls | object | `{"custom_set":[],"repair":false}` | - [Suggestions] NOTE: setting `base_syscalls.repair: true` automates the following suggestions for you. These suggestions are subject to change as Falco and its state engine evolve. For execve* events: Some Falco fields for an execve* syscall are retrieved from the associated `clone`, `clone3`, `fork`, `vfork` syscalls when spawning a new process. The `close` syscall is used to purge file descriptors from Falco's internal thread / process cache table and is necessary for rules relating to file descriptors (e.g. open, openat, openat2, socket, connect, accept, accept4 ... and many more) Consider enabling the following syscalls in `base_syscalls.custom_set` for process rules: [clone, clone3, fork, vfork, execve, execveat, close] For networking related events: While you can log `connect` or `accept*` syscalls without the socket syscall, the log will not contain the ip tuples. Additionally, for `listen` and `accept*` syscalls, the `bind` syscall is also necessary. We recommend the following as the minimum set for networking-related rules: [clone, clone3, fork, vfork, execve, execveat, close, socket, bind, getsockopt] Lastly, for tracking the correct `uid`, `gid` or `sid`, `pgid` of a process when the running process opens a file or makes a network connection, consider adding the following to the above recommended syscall sets: ... setresuid, setsid, setuid, setgid, setpgid, setresgid, setsid, capset, chdir, chroot, fchdir ... |
| falco.buffered_outputs | bool | `false` | Enabling buffering for the output queue can offer performance optimization, efficient resource usage, and smoother data flow, resulting in a more reliable output mechanism. By default, buffering is disabled (false). |
| falco.file_output | object | `{"enabled":false,"filename":"./events.txt","keep_alive":false}` | When appending Falco alerts to a file, each new alert will be added to a new line. It's important to note that Falco does not perform log rotation for this file. If the `keep_alive` option is set to `true`, the file will be opened once and continuously written to, else the file will be reopened for each output message. Furthermore, the file will be closed and reopened if Falco receives the SIGUSR1 signal. |
| falco.grpc | object | `{"bind_address":"unix:///run/falco/falco.sock","enabled":false,"threadiness":0}` | gRPC server using a local unix socket |
| falco.grpc.threadiness | int | `0` | When the `threadiness` value is set to 0, Falco will automatically determine the appropriate number of threads based on the number of online cores in the system. |
| falco.grpc_output | object | `{"enabled":false}` | Use gRPC as an output service. gRPC is a modern and high-performance framework for remote procedure calls (RPC). It utilizes protocol buffers for efficient data serialization. The gRPC output in Falco provides a modern and efficient way to integrate with other systems. By default the setting is turned off. Enabling this option stores output events in memory until they are consumed by a gRPC client. Ensure that you have a consumer for the output events or leave it disabled. |
| falco.http_output | object | `{"ca_bundle":"","ca_cert":"","ca_path":"/etc/ssl/certs","client_cert":"/etc/ssl/certs/client.crt","client_key":"/etc/ssl/certs/client.key","echo":false,"enabled":false,"insecure":false,"mtls":false,"url":"","user_agent":"falcosecurity/falco"}` | Send logs to an HTTP endpoint or webhook. |
| falco.http_output.ca_bundle | string | `""` | Path to a specific file that will be used as the CA certificate store. |
| falco.http_output.ca_cert | string | `""` | Path to the CA certificate that can verify the remote server. |
| falco.http_output.ca_path | string | `"/etc/ssl/certs"` | Path to a folder that will be used as the CA certificate store. CA certificates need to be stored as individual PEM files in this directory. |
| falco.http_output.client_cert | string | `"/etc/ssl/certs/client.crt"` | Path to the client cert. |
| falco.http_output.client_key | string | `"/etc/ssl/certs/client.key"` | Path to the client key. |
| falco.http_output.echo | bool | `false` | Whether to echo server answers to stdout |
| falco.http_output.insecure | bool | `false` | Tell Falco to not verify the remote server. |
| falco.http_output.mtls | bool | `false` | Tell Falco to use mTLS |
| falco.json_include_output_property | bool | `true` | When using JSON output in Falco, you have the option to include the "output" property itself in the generated JSON output. The "output" property provides additional information about the purpose of the rule. To reduce the logging volume, it is recommended to turn it off if it's not necessary for your use case. |
| falco.json_include_tags_property | bool | `true` | When using JSON output in Falco, you have the option to include the "tags" field of the rules in the generated JSON output. The "tags" field provides additional metadata associated with the rule. To reduce the logging volume, if the tags associated with the rule are not needed for your use case or can be added at a later stage, it is recommended to turn it off. |
| falco.json_output | bool | `false` | When enabled, Falco will output alert messages and rules file loading/validation results in JSON format, making it easier for downstream programs to process and consume the data. By default, this option is disabled. |
| falco.libs_logger | object | `{"enabled":false,"severity":"debug"}` | The `libs_logger` setting in Falco determines the minimum log level to include in the logs related to the functioning of the software of the underlying `libs` library, which Falco utilizes. This setting is independent of the `priority` field of rules and the `log_level` setting that controls Falco's operational logs. It allows you to specify the desired log level for the `libs` library specifically, providing more granular control over the logging behavior of the underlying components used by Falco. Only logs of a certain severity level or higher will be emitted. Supported levels: "emergency", "alert", "critical", "error", "warning", "notice", "info", "debug". It is not recommended for production use. |
| falco.load_plugins | list | `[]` | Add here all plugins and their configuration. Please consult the plugins documentation for more info. Remember to add the plugins name in "load_plugins: []" in order to load them in Falco. |
| falco.log_level | string | `"info"` | The `log_level` setting determines the minimum log level to include in Falco's logs related to the functioning of the software. This setting is separate from the `priority` field of rules and specifically controls the log level of Falco's operational logging. By specifying a log level, you can control the verbosity of Falco's operational logs. Only logs of a certain severity level or higher will be emitted. Supported levels: "emergency", "alert", "critical", "error", "warning", "notice", "info", "debug". |
| falco.log_stderr | bool | `true` | Send information logs to stderr. Note these are *not* security notification logs! These are just Falco lifecycle (and possibly error) logs. |
| falco.log_syslog | bool | `true` | Send information logs to syslog. Note these are *not* security notification logs! These are just Falco lifecycle (and possibly error) logs. |
| falco.metadata_download | object | `{"chunk_wait_us":1000,"max_mb":100,"watch_freq_sec":1}` | When connected to an orchestrator like Kubernetes, Falco has the capability to collect metadata and enrich system call events with contextual data. The parameters mentioned here control the downloading process of this metadata. Please note that support for Mesos is deprecated, so these parameters currently apply only to Kubernetes. When using Falco with Kubernetes, you can enable this functionality by using the `-k` or `-K` command-line flag. However, it's worth mentioning that for important Kubernetes metadata fields such as namespace or pod name, these fields are automatically extracted from the container runtime, providing the necessary enrichment for common use cases of syscall-based threat detection. In summary, the `-k` flag is typically not required for most scenarios involving Kubernetes workload owner enrichment. The `-k` flag is primarily used when additional metadata is required beyond the standard fields, catering to more specific use cases, see https://falco.org/docs/reference/rules/supported-fields/#field-class-k8s. |
| falco.metrics | object | `{"convert_memory_to_mb":true,"enabled":false,"include_empty_values":false,"interval":"1h","kernel_event_counters_enabled":true,"libbpf_stats_enabled":true,"output_rule":true,"resource_utilization_enabled":true}` | - [Usage] `enabled`: Disabled by default. `interval`: The stats interval in Falco follows the time duration definitions used by Prometheus. https://prometheus.io/docs/prometheus/latest/querying/basics/#time-durations Time durations are specified as a number, followed immediately by one of the following units: ms - millisecond s - second m - minute h - hour d - day - assuming a day has always 24h w - week - assuming a week has always 7d y - year - assuming a year has always 365d Example of a valid time duration: 1h30m20s10ms A minimum interval of 100ms is enforced for metric collection. However, for production environments, we recommend selecting one of the following intervals for optimal monitoring: 15m 30m 1h 4h 6h `output_rule`: To enable seamless metrics and performance monitoring, we recommend emitting metrics as the rule "Falco internal: metrics snapshot". This option is particularly useful when Falco logs are preserved in a data lake. Please note that to use this option, the Falco rules config `priority` must be set to `info` at a minimum. `output_file`: Append stats to a `jsonl` file. Use with caution in production as Falco does not automatically rotate the file. `resource_utilization_enabled`: Emit CPU and memory usage metrics. CPU usage is reported as a percentage of one CPU and can be normalized to the total number of CPUs to determine overall usage. Memory metrics are provided in raw units (`kb` for `RSS`, `PSS` and `VSZ` or `bytes` for `container_memory_used`) and can be uniformly converted to megabytes (MB) using the `convert_memory_to_mb` functionality. In environments such as Kubernetes, it is crucial to track Falco's container memory usage. To customize the path of the memory metric file, you can create an environment variable named `FALCO_CGROUP_MEM_PATH` and set it to the desired file path. By default, Falco uses the file `/sys/fs/cgroup/memory/memory.usage_in_bytes` to monitor container memory usage, which aligns with Kubernetes' `container_memory_working_set_bytes` metric. `kernel_event_counters_enabled`: Emit kernel side event and drop counters, as an alternative to `syscall_event_drops`, but with some differences. These counters reflect monotonic values since Falco's start and are exported at a constant stats interval. `libbpf_stats_enabled`: Exposes statistics similar to `bpftool prog show`, providing information such as the number of invocations of each BPF program attached by Falco and the time spent in each program measured in nanoseconds. To enable this feature, the kernel must be >= 5.1, and the kernel configuration `/proc/sys/kernel/bpf_stats_enabled` must be set. This option, or an equivalent statistics feature, is not available for non `*bpf*` drivers. Additionally, please be aware that the current implementation of `libbpf` does not support granularity of statistics at the bpf tail call level. `include_empty_values`: When the option is set to true, fields with an empty numeric value will be included in the output. However, this rule does not apply to high-level fields such as `n_evts` or `n_drops`; they will always be included in the output even if their value is empty. This option can be beneficial for exploring the data schema and ensuring that fields with empty values are included in the output. todo: prometheus export option todo: syscall_counters_enabled option |
| falco.modern_bpf | object | `{"cpus_for_each_syscall_buffer":2}` | - [Suggestions] The default choice of index 2 (one syscall buffer for each CPU pair) was made because the modern bpf probe utilizes a different memory allocation strategy compared to the other two drivers (bpf and kernel module). However, you have the flexibility to experiment and find the optimal configuration for your system. When considering a fixed syscall_buf_size_preset and a fixed buffer dimension: - Increasing this configs value results in lower number of buffers and you can speed up your system and reduce memory usage - However, using too few buffers may increase contention in the kernel, leading to a slowdown. If you have low event throughputs and minimal drops, reducing the number of buffers (higher `cpus_for_each_syscall_buffer`) can lower the memory footprint. |
| falco.output_timeout | int | `2000` | The `output_timeout` parameter specifies the duration, in milliseconds, to wait before considering the deadline exceeded. By default, the timeout is set to 2000ms (2 seconds), meaning that the consumer of Falco outputs can block the Falco output channel for up to 2 seconds without triggering a timeout error. Falco actively monitors the performance of output channels. With this setting the timeout error can be logged, but please note that this requires setting Falco's operational logs `log_level` to a minimum of `notice`. It's important to note that Falco outputs will not be discarded from the output queue. This means that if an output channel becomes blocked indefinitely, it indicates a potential issue that needs to be addressed by the user. |
| falco.outputs | object | `{"max_burst":1000,"rate":0}` | A throttling mechanism, implemented as a token bucket, can be used to control the rate of Falco outputs. Each event source has its own rate limiter, ensuring that alerts from one source do not affect the throttling of others. The following options control the mechanism: - rate: the number of tokens (i.e. right to send a notification) gained per second. When 0, the throttling mechanism is disabled. Defaults to 0. - max_burst: the maximum number of tokens outstanding. Defaults to 1000. For example, setting the rate to 1 allows Falco to send up to 1000 notifications initially, followed by 1 notification per second. The burst capacity is fully restored after 1000 seconds of no activity. Throttling can be useful in various scenarios, such as preventing notification floods, managing system load, controlling event processing, or complying with rate limits imposed by external systems or APIs. It allows for better resource utilization, avoids overwhelming downstream systems, and helps maintain a balanced and controlled flow of notifications. With the default settings, the throttling mechanism is disabled. |
| falco.outputs_queue | object | `{"capacity":0}` | Falco utilizes tbb::concurrent_bounded_queue for handling outputs, and this parameter allows you to customize the queue capacity. Please refer to the official documentation: https://oneapi-src.github.io/oneTBB/main/tbb_userguide/Concurrent_Queue_Classes.html. On a healthy system with optimized Falco rules, the queue should not fill up. If it does, it is most likely happening due to the entire event flow being too slow, indicating that the server is under heavy load. `capacity`: the maximum number of items allowed in the queue is determined by this value. Setting the value to 0 (which is the default) is equivalent to keeping the queue unbounded. In other words, when this configuration is set to 0, the number of allowed items is effectively set to the largest possible long value, disabling this setting. In the case of an unbounded queue, if the available memory on the system is consumed, the Falco process would be OOM killed. When using this option and setting the capacity, the current event would be dropped, and the event loop would continue. This behavior mirrors kernel-side event drops when the buffer between kernel space and user space is full. |
| falco.plugins | list | `[{"init_config":null,"library_path":"libk8saudit.so","name":"k8saudit","open_params":"http://:9765/k8s-audit"},{"library_path":"libcloudtrail.so","name":"cloudtrail"},{"init_config":"","library_path":"libjson.so","name":"json"}]` | Customize subsettings for each enabled plugin. These settings will only be applied when the corresponding plugin is enabled using the `load_plugins` option. |
| falco.priority | string | `"debug"` | Any rule with a priority level more severe than or equal to the specified minimum level will be loaded and run by Falco. This allows you to filter and control the rules based on their severity, ensuring that only rules of a certain priority or higher are active and evaluated by Falco. Supported levels: "emergency", "alert", "critical", "error", "warning", "notice", "info", "debug" |
| falco.program_output | object | `{"enabled":false,"keep_alive":false,"program":"jq '{text: .output}' | curl -d @- -X POST https://hooks.slack.com/services/XXX"}` | Redirect the output to another program or command. Possible additional things you might want to do with program output: - send to a slack webhook: program: "jq '{text: .output}' | curl -d @- -X POST https://hooks.slack.com/services/XXX" - logging (alternate method than syslog): program: logger -t falco-test - send over a network connection: program: nc host.example.com 80 If `keep_alive` is set to `true`, the program will be started once and continuously written to, with each output message on its own line. If `keep_alive` is set to `false`, the program will be re-spawned for each output message. Furthermore, the program will be re-spawned if Falco receives the SIGUSR1 signal. |
| falco.rule_matching | string | `"first"` | |
| falco.rules_file | list | `["/etc/falco/falco_rules.yaml","/etc/falco/falco_rules.local.yaml","/etc/falco/rules.d"]` | The location of the rules files that will be consumed by Falco. |
| falco.stdout_output | object | `{"enabled":true}` | Redirect logs to standard output. |
| falco.syscall_buf_size_preset | int | `4` | - [Suggestions] The buffer size was previously fixed at 8 MB (index 4). You now have the option to adjust the size based on your needs. Increasing the size, such as to 16 MB (index 5), can reduce syscall drops in heavy production systems, but may impact performance. Decreasing the size can speed up the system but may increase syscall drops. It's important to note that the buffer size is mapped twice in the process' virtual memory, so a buffer of 8 MB will result in a 16 MB area in virtual memory. Use this parameter with caution and only modify it if the default size is not suitable for your use case. |
| falco.syscall_drop_failed_exit | bool | `false` | Enabling this option in Falco allows it to drop failed system call exit events in the kernel driver before pushing them onto the ring buffer. This optimization can result in lower CPU usage and more efficient utilization of the ring buffer, potentially reducing the number of event losses. However, it is important to note that enabling this option also means sacrificing some visibility into the system. |
| falco.syscall_event_drops | object | `{"actions":["log","alert"],"max_burst":1,"rate":0.03333,"simulate_drops":false,"threshold":0.1}` | For debugging/testing it is possible to simulate the drops using the `simulate_drops: true`. In this case the threshold does not apply. |
| falco.syscall_event_drops.actions | list | `["log","alert"]` | Actions to be taken when system calls were dropped from the circular buffer. |
| falco.syscall_event_drops.max_burst | int | `1` | Max burst of messages emitted. |
| falco.syscall_event_drops.rate | float | `0.03333` | Rate at which log/alert messages are emitted. |
| falco.syscall_event_drops.simulate_drops | bool | `false` | Flag to enable drops for debug purposes. |
| falco.syscall_event_drops.threshold | float | `0.1` | The messages are emitted when the percentage of dropped system calls with respect to the number of events in the last second is greater than the given threshold (a double in the range [0, 1]). |
| falco.syscall_event_timeouts | object | `{"max_consecutives":1000}` | Generates Falco operational logs when `log_level=notice` at minimum. Falco utilizes a shared buffer between the kernel and userspace to receive events, such as system call information, in userspace. However, there may be cases where timeouts occur in the underlying libraries due to issues in reading events or the need to skip a particular event. While it is uncommon for Falco to experience consecutive event timeouts, it has the capability to detect such situations. You can configure the maximum number of consecutive timeouts without an event after which Falco will generate an alert, but please note that this requires setting Falco's operational logs `log_level` to a minimum of `notice`. The default value is set to 1000 consecutive timeouts without receiving any events. The mapping of this value to a time interval depends on the CPU frequency. |
| falco.syslog_output | object | `{"enabled":true}` | Send logs to syslog. |
| falco.time_format_iso_8601 | bool | `false` | When enabled, Falco will display log and output messages with times in the ISO 8601 format. By default, times are shown in the local time zone determined by the /etc/localtime configuration. |
| falco.watch_config_files | bool | `true` | Watch the config file and rules files for modification. When a file is modified, Falco will propagate the new config by reloading itself. |
| falco.webserver | object | `{"enabled":true,"k8s_healthz_endpoint":"/healthz","listen_port":8765,"ssl_certificate":"/etc/falco/falco.pem","ssl_enabled":false,"threadiness":0}` | Falco supports an embedded webserver that runs within the Falco process, providing a lightweight and efficient way to expose web-based functionalities without the need for an external web server. The following endpoints are exposed: - /healthz: designed to be used for checking the health and availability of the Falco application (the name of the endpoint is configurable). - /versions: responds with a JSON object containing the version numbers of the internal Falco components (similar output as `falco --version -o json_output=true`). Please note that the /versions endpoint is particularly useful for other Falco services, such as `falcoctl`, to retrieve information about a running Falco instance. If you plan to use `falcoctl` locally or with Kubernetes, make sure the Falco webserver is enabled. The behavior of the webserver can be controlled with the following options, which are enabled by default: The `ssl_certificate` option specifies a combined SSL certificate and corresponding key that are contained in a single file. You can generate a key/cert as follows: $ openssl req -newkey rsa:2048 -nodes -keyout key.pem -x509 -days 365 -out certificate.pem $ cat certificate.pem key.pem > falco.pem $ sudo cp falco.pem /etc/falco/falco.pem |
| falcoctl.artifact.follow | object | `{"args":["--verbose"],"enabled":true,"env":[],"mounts":{"volumeMounts":[]},"resources":{},"securityContext":{}}` | Runs "falcoctl artifact follow" command as a sidecar container. It is used to automatically check for updates given a list of artifacts. If an update is found it downloads and installs it in a shared folder (emptyDir) that is accessible by Falco. Rulesfiles are automatically detected and loaded by Falco once they are installed in the correct folder by falcoctl. To prevent new versions of artifacts from breaking Falco, the tool checks if it is compatible with the running version of Falco before installing it. |
| falcoctl.artifact.follow.args | list | `["--verbose"]` | Arguments to pass to the falcoctl-artifact-follow sidecar container. |
| falcoctl.artifact.follow.env | list | `[]` | Extra environment variables that will be passed to the falcoctl-artifact-follow sidecar container. |
| falcoctl.artifact.follow.mounts | object | `{"volumeMounts":[]}` | A list of volume mounts you want to add to the falcoctl-artifact-follow sidecar container. |
| falcoctl.artifact.follow.resources | object | `{}` | Resources requests and limits for the falcoctl-artifact-follow sidecar container. |
| falcoctl.artifact.follow.securityContext | object | `{}` | Security context for the falcoctl-artifact-follow sidecar container. |
| falcoctl.artifact.install | object | `{"args":["--verbose"],"enabled":true,"env":[],"mounts":{"volumeMounts":[]},"resources":{},"securityContext":{}}` | Runs "falcoctl artifact install" command as an init container. It is used to install artifacts before Falco starts. It provides them to Falco by using an emptyDir volume. |
| falcoctl.artifact.install.args | list | `["--verbose"]` | Arguments to pass to the falcoctl-artifact-install init container. |
| falcoctl.artifact.install.env | list | `[]` | Extra environment variables that will be passed to the falcoctl-artifact-install init container. |
| falcoctl.artifact.install.mounts | object | `{"volumeMounts":[]}` | A list of volume mounts you want to add to the falcoctl-artifact-install init container. |
| falcoctl.artifact.install.resources | object | `{}` | Resources requests and limits for the falcoctl-artifact-install init container. |
| falcoctl.artifact.install.securityContext | object | `{}` | Security context for the falcoctl init container. |
| falcoctl.config | object | `{"artifact":{"allowedTypes":["rulesfile"],"follow":{"every":"6h","falcoversions":"http://localhost:8765/versions","pluginsDir":"/plugins","refs":["falco-rules:2"],"rulesfilesDir":"/rulesfiles"},"install":{"pluginsDir":"/plugins","refs":["falco-rules:2"],"resolveDeps":false,"rulesfilesDir":"/rulesfiles"}},"indexes":[{"name":"falcosecurity","url":"https://falcosecurity.github.io/falcoctl/index.yaml"}]}` | Configuration file of the falcoctl tool. It is saved in a configmap and mounted on the falcoctl containers. |
| falcoctl.config.artifact | object | `{"allowedTypes":["rulesfile"],"follow":{"every":"6h","falcoversions":"http://localhost:8765/versions","pluginsDir":"/plugins","refs":["falco-rules:2"],"rulesfilesDir":"/rulesfiles"},"install":{"pluginsDir":"/plugins","refs":["falco-rules:2"],"resolveDeps":false,"rulesfilesDir":"/rulesfiles"}}` | Configuration used by the artifact commands. |
| falcoctl.config.artifact.allowedTypes | list | `["rulesfile"]` | List of artifact types that falcoctl will handle. If a configured ref resolves to an artifact whose type is not contained in the list, falcoctl will refuse to download and install that artifact. |
| falcoctl.config.artifact.follow.every | string | `"6h"` | How often the tool checks for new versions of the followed artifacts. |
| falcoctl.config.artifact.follow.falcoversions | string | `"http://localhost:8765/versions"` | HTTP endpoint that serves the api versions of the Falco instance. It is used to check if the new versions are compatible with the running Falco instance. |
| falcoctl.config.artifact.follow.pluginsDir | string | `"/plugins"` | See the fields of the artifact.install section. |
| falcoctl.config.artifact.follow.refs | list | `["falco-rules:2"]` | List of artifacts to be followed by the falcoctl sidecar container. |
| falcoctl.config.artifact.follow.rulesfilesDir | string | `"/rulesfiles"` | See the fields of the artifact.install section. |
| falcoctl.config.artifact.install.pluginsDir | string | `"/plugins"` | Directory where the plugins are saved. Same behavior as rulesfilesDir, but for plugin artifacts. |
| falcoctl.config.artifact.install.refs | list | `["falco-rules:2"]` | List of artifacts to be installed by the falcoctl init container. |
| falcoctl.config.artifact.install.resolveDeps | bool | `false` | Do not resolve the dependencies for artifacts. The falcoctl default is true, but for this chart's use case it is disabled. |
| falcoctl.config.artifact.install.rulesfilesDir | string | `"/rulesfiles"` | Directory where the rulesfiles are saved. The path is relative to the container, which in this case is an emptyDir mounted also by the Falco pod. |
| falcoctl.config.indexes | list | `[{"name":"falcosecurity","url":"https://falcosecurity.github.io/falcoctl/index.yaml"}]` | List of indexes that falcoctl downloads and uses to locate and download artifacts. For more info see: https://github.com/falcosecurity/falcoctl/blob/main/proposals/20220916-rules-and-plugin-distribution.md#index-file-overview |
| falcoctl.image.pullPolicy | string | `"IfNotPresent"` | The image pull policy. |
| falcoctl.image.registry | string | `"docker.io"` | The image registry to pull from. |
| falcoctl.image.repository | string | `"falcosecurity/falcoctl"` | The image repository to pull from. |
| falcoctl.image.tag | string | `"0.6.2"` | The image tag to pull. |
| falcosidekick | object | `{"enabled":false,"fullfqdn":false,"listenPort":""}` | For configuration values, see https://github.com/falcosecurity/charts/blob/master/falcosidekick/values.yaml |
| falcosidekick.enabled | bool | `false` | Enable falcosidekick deployment. |
| falcosidekick.fullfqdn | bool | `false` | Enable usage of full FQDN of falcosidekick service (useful when a Proxy is used). |
| falcosidekick.listenPort | string | `""` | Listen port. Default value: 2801 |
| fullnameOverride | string | `""` | Same as nameOverride but for the fullname. |
| gvisor | object | `{"enabled":false,"runsc":{"config":"/run/containerd/runsc/config.toml","path":"/home/containerd/usr/local/sbin","root":"/run/containerd/runsc"}}` | gVisor configuration. Based on your system, you need to set the appropriate values. Please remember to add pod tolerations and affinities so that the Falco pods are scheduled on the gVisor-enabled nodes. |
| gvisor.enabled | bool | `false` | Set it to true if you want to deploy Falco with gVisor support. |
| gvisor.runsc | object | `{"config":"/run/containerd/runsc/config.toml","path":"/home/containerd/usr/local/sbin","root":"/run/containerd/runsc"}` | Runsc container runtime configuration. Falco needs to interact with it in order to intercept the activity of the sandboxed pods. |
| gvisor.runsc.config | string | `"/run/containerd/runsc/config.toml"` | Absolute path of the `runsc` configuration file, used by Falco to set its configuration and make `gVisor` aware of its presence. |
| gvisor.runsc.path | string | `"/home/containerd/usr/local/sbin"` | Absolute path of the `runsc` binary in the k8s nodes. |
| gvisor.runsc.root | string | `"/run/containerd/runsc"` | Absolute path of the root directory of the `runsc` container runtime. It is of vital importance for Falco since `runsc` stores there the information about the workloads it handles. |
| healthChecks | object | `{"livenessProbe":{"initialDelaySeconds":60,"periodSeconds":15,"timeoutSeconds":5},"readinessProbe":{"initialDelaySeconds":30,"periodSeconds":15,"timeoutSeconds":5}}` | Parameters used to configure the liveness and readiness probes of the Falco containers. |
| healthChecks.livenessProbe.initialDelaySeconds | int | `60` | Tells the kubelet that it should wait X seconds before performing the first probe. |
| healthChecks.livenessProbe.periodSeconds | int | `15` | Specifies that the kubelet should perform the check every X seconds. |
| healthChecks.livenessProbe.timeoutSeconds | int | `5` | Number of seconds after which the probe times out. |
| healthChecks.readinessProbe.initialDelaySeconds | int | `30` | Tells the kubelet that it should wait X seconds before performing the first probe. |
| healthChecks.readinessProbe.periodSeconds | int | `15` | Specifies that the kubelet should perform the check every X seconds. |
| healthChecks.readinessProbe.timeoutSeconds | int | `5` | Number of seconds after which the probe times out. |
| image.pullPolicy | string | `"IfNotPresent"` | The image pull policy. |
| image.registry | string | `"docker.io"` | The image registry to pull from. |
| image.repository | string | `"falcosecurity/falco-no-driver"` | The image repository to pull from. |
| image.tag | string | `""` | The image tag to pull. Overrides the image tag whose default is the chart appVersion. |
| imagePullSecrets | list | `[]` | Secrets containing credentials when pulling from private/secure registries. |
| mounts.enforceProcMount | bool | `false` | By default, `/proc` from the host is only mounted into the Falco pod when `driver.enabled` is set to `true`. This flag allows overriding that behaviour for edge cases where `/proc` is needed but the syscall data source is not enabled at the same time (e.g. for specific plugins). |
| mounts.volumeMounts | list | `[]` | A list of volume mounts you want to add to the Falco pods. |
| mounts.volumes | list | `[]` | A list of volumes you want to add to the Falco pods. |
| nameOverride | string | `""` | Put here the new name if you want to override the release name used for Falco components. |
| namespaceOverride | string | `""` | Override the deployment namespace |
| nodeSelector | object | `{}` | Selectors used to deploy Falco on a given node/nodes. |
| podAnnotations | object | `{}` | Add additional pod annotations |
| podLabels | object | `{}` | Add additional pod labels |
| podPriorityClassName | string | `nil` | Set pod priorityClassName |
| podSecurityContext | object | `{}` | Set securityContext for the pods. These security settings are overridden by the ones specified for the specific containers when there is overlap. |
| rbac.create | bool | `true` | |
| resources.limits | object | `{"cpu":"1000m","memory":"1024Mi"}` | Maximum amount of resources that the Falco container could get. If you enable more than one source in Falco, consider increasing the CPU limits. |
| resources.requests | object | `{"cpu":"100m","memory":"512Mi"}` | Although the resources needed depend on the actual workload, we provide sane defaults. If you have more questions or concerns, please refer to the #falco Slack channel. |
| scc.create | bool | `true` | Create OpenShift's Security Context Constraint. |
| serviceAccount.annotations | object | `{}` | Annotations to add to the service account. |
| serviceAccount.create | bool | `true` | Specifies whether a service account should be created. |
| serviceAccount.name | string | `""` | The name of the service account to use. If not set and create is true, a name is generated using the fullname template |
| services | string | `nil` | Network services configuration (scenario requirement). Add here the services to be deployed together with Falco. |
| tolerations | list | `[{"effect":"NoSchedule","key":"node-role.kubernetes.io/master"},{"effect":"NoSchedule","key":"node-role.kubernetes.io/control-plane"}]` | Tolerations to allow Falco to run on Kubernetes control-plane (master) nodes. |
| tty | bool | `false` | Attach the Falco process to a tty inside the container. Needed to flush Falco logs as soon as they are emitted. Set it to "true" when you need the Falco logs to be immediately displayed. |
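Taken together, a minimal values override combining several of the options documented above might look like this (a sketch only; every value shown mirrors a default or example already given in the table):

```yaml
# values-override.yaml -- illustrative combination of options from the table above
falco:
  time_format_iso_8601: true      # ISO 8601 timestamps instead of local time
  webserver:
    enabled: true                 # needed by falcoctl to query /versions
falcosidekick:
  enabled: true                   # deploy falcosidekick alongside Falco
resources:
  requests:
    cpu: 100m
    memory: 512Mi
  limits:
    cpu: 1000m
    memory: 1024Mi
tolerations:
  - effect: NoSchedule
    key: node-role.kubernetes.io/control-plane
```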

View File

@ -19,13 +19,29 @@ No further action should be required.
{{- if not .Values.falcosidekick.enabled }}
Tip:
You can easily forward Falco events to Slack, Kafka, AWS Lambda and more with falcosidekick.
Full list of outputs: https://github.com/falcosecurity/charts/tree/master/falcosidekick.
Full list of outputs: https://github.com/falcosecurity/charts/tree/master/charts/falcosidekick.
You can enable its deployment with `--set falcosidekick.enabled=true` or in your values.yaml.
See: https://github.com/falcosecurity/charts/blob/master/falcosidekick/values.yaml for configuration values.
See: https://github.com/falcosecurity/charts/blob/master/charts/falcosidekick/values.yaml for configuration values.
{{- end}}
{{- if (has .Values.driver.kind (list "module" "modern-bpf")) -}}
{{- println }}
WARNING(drivers):
{{- printf "\nThe driver kind: \"%s\" is an alias and might be removed in the future.\n" .Values.driver.kind -}}
{{- $driver := "" -}}
{{- if eq .Values.driver.kind "module" -}}
{{- $driver = "kmod" -}}
{{- else if eq .Values.driver.kind "modern-bpf" -}}
{{- $driver = "modern_ebpf" -}}
{{- end -}}
{{- printf "Please use \"%s\" instead." $driver}}
{{- end -}}
{{- if and (not (empty .Values.falco.load_plugins)) (or .Values.falcoctl.artifact.follow.enabled .Values.falcoctl.artifact.install.enabled) }}
WARNING:
{{ printf "It seems you are loading the following plugins %v, please make sure to install them by adding the correct reference to falcoctl.config.artifact.install.refs: %v" .Values.falco.load_plugins .Values.falcoctl.config.artifact.install.refs -}}
NOTICE:
{{ printf "It seems you are loading the following plugins %v, please make sure to install them by specifying the correct reference to falcoctl.config.artifact.install.refs: %v" .Values.falco.load_plugins .Values.falcoctl.config.artifact.install.refs -}}
{{ printf "Ignore this notice if the value of falcoctl.config.artifact.install.refs is correct already." -}}
{{- end }}
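The NOTICE above fires when `falco.load_plugins` lists a plugin that is missing from `falcoctl.config.artifact.install.refs`. A hedged sketch of values that keep the two in sync (the `k8smeta:0.1.0` ref is illustrative, not a pinned recommendation):

```yaml
falco:
  load_plugins: [k8smeta]
falcoctl:
  config:
    artifact:
      allowedTypes: [rulesfile, plugin]
      install:
        refs: [falco-rules:2, k8smeta:0.1.0]  # plugin ref added next to the rules
```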

View File

@ -89,7 +89,7 @@ Return the proper Falco image name
{{- . }}/
{{- end -}}
{{- .Values.image.repository }}:
{{- .Values.image.tag | default .Chart.AppVersion -}}
{{- .Values.image.tag | default (printf "%s" .Chart.AppVersion) -}}
{{- end -}}
{{/*
@ -185,7 +185,7 @@ we just disable the syscall source.
*/}}
{{- define "falco.configSyscallSource" -}}
{{- $userspaceDisabled := true -}}
{{- $gvisorDisabled := (not .Values.gvisor.enabled) -}}
{{- $gvisorDisabled := (ne .Values.driver.kind "gvisor") -}}
{{- $driverDisabled := (not .Values.driver.enabled) -}}
{{- if or (has "-u" .Values.extra.args) (has "--userspace" .Values.extra.args) -}}
{{- $userspaceDisabled = false -}}
@ -214,8 +214,8 @@ be temporary and will stay here until we move this logic to the falcoctl tool.
set -o nounset
set -o pipefail
root={{ .Values.gvisor.runsc.root }}
config={{ .Values.gvisor.runsc.config }}
root={{ .Values.driver.gvisor.runsc.root }}
config={{ .Values.driver.gvisor.runsc.config }}
echo "* Configuring Falco+gVisor integration...".
# Check if gVisor is configured on the node.
@ -240,12 +240,12 @@ be temporary and will stay here until we move this logic to the falcoctl tool.
echo "* Falco+gVisor correctly configured."
exit 0
volumeMounts:
- mountPath: /host{{ .Values.gvisor.runsc.path }}
- mountPath: /host{{ .Values.driver.gvisor.runsc.path }}
name: runsc-path
readOnly: true
- mountPath: /host{{ .Values.gvisor.runsc.root }}
- mountPath: /host{{ .Values.driver.gvisor.runsc.root }}
name: runsc-root
- mountPath: /host{{ .Values.gvisor.runsc.config }}
- mountPath: /host{{ .Values.driver.gvisor.runsc.config }}
name: runsc-config
- mountPath: /gvisor-config
name: falco-gvisor-config
@ -280,8 +280,8 @@ be temporary and will stay here until we move this logic to the falcoctl tool.
{{- with .Values.falcoctl.artifact.install.mounts.volumeMounts }}
{{- toYaml . | nindent 4 }}
{{- end }}
env:
{{- if .Values.falcoctl.artifact.install.env }}
env:
{{- include "falco.renderTemplate" ( dict "value" .Values.falcoctl.artifact.install.env "context" $) | nindent 4 }}
{{- end }}
{{- end -}}
@ -314,8 +314,248 @@ be temporary and will stay here until we move this logic to the falcoctl tool.
{{- with .Values.falcoctl.artifact.follow.mounts.volumeMounts }}
{{- toYaml . | nindent 4 }}
{{- end }}
env:
{{- if .Values.falcoctl.artifact.follow.env }}
env:
{{- include "falco.renderTemplate" ( dict "value" .Values.falcoctl.artifact.follow.env "context" $) | nindent 4 }}
{{- end }}
{{- end -}}
{{- end -}}
{{/*
Build configuration for k8smeta plugin and update the relevant variables.
* The configuration that needs to be built up is the initconfig section:
init_config:
collectorPort: 0
collectorHostname: ""
nodeName: ""
The falco chart exposes this configuration through two variables:
* collectors.kubernetes.collectorHostname;
* collectors.kubernetes.collectorPort;
If those two variables are not set, then we take those values from the k8smetacollector subchart.
The hostname is built using the name of the service that exposes the collector endpoints and the
port is directly taken from the service's port that exposes the gRPC endpoint.
We reuse the helpers from the k8smetacollector subchart, by passing down the variables. There is a
hardcoded value that is the chart name for the k8s-metacollector chart.
* The falcoctl configuration is updated to allow plugin artifacts to be installed. The refs in the install
section are updated by adding the reference for the k8smeta plugin that needs to be installed.
NOTE: It seems that the named templates run during the validation process, and then again during the
render phase. In our case we are setting global variables that persist during the various phases, so
we need to make the helper idempotent.
*/}}
{{- define "k8smeta.configuration" -}}
{{- if and .Values.collectors.kubernetes.enabled .Values.driver.enabled -}}
{{- $hostname := "" -}}
{{- if .Values.collectors.kubernetes.collectorHostname -}}
{{- $hostname = .Values.collectors.kubernetes.collectorHostname -}}
{{- else -}}
{{- $collectorContext := (dict "Release" .Release "Values" (index .Values "k8s-metacollector") "Chart" (dict "Name" "k8s-metacollector")) -}}
{{- $hostname = printf "%s.%s.svc" (include "k8s-metacollector.fullname" $collectorContext) (include "k8s-metacollector.namespace" $collectorContext) -}}
{{- end -}}
{{- $hasConfig := false -}}
{{- range .Values.falco.plugins -}}
{{- if eq (get . "name") "k8smeta" -}}
{{ $hasConfig = true -}}
{{- end -}}
{{- end -}}
{{- if not $hasConfig -}}
{{- $listenPort := default (index .Values "k8s-metacollector" "service" "ports" "broker-grpc" "port") .Values.collectors.kubernetes.collectorPort -}}
{{- $listenPort = int $listenPort -}}
{{- $pluginConfig := dict "name" "k8smeta" "library_path" "libk8smeta.so" "init_config" (dict "collectorHostname" $hostname "collectorPort" $listenPort "nodeName" "${FALCO_K8S_NODE_NAME}" "verbosity" .Values.collectors.kubernetes.verbosity "hostProc" .Values.collectors.kubernetes.hostProc) -}}
{{- $newConfig := append .Values.falco.plugins $pluginConfig -}}
{{- $_ := set .Values.falco "plugins" ($newConfig | uniq) -}}
{{- $loadedPlugins := append .Values.falco.load_plugins "k8smeta" -}}
{{- $_ = set .Values.falco "load_plugins" ($loadedPlugins | uniq) -}}
{{- end -}}
{{- $_ := set .Values.falcoctl.config.artifact.install "refs" ((append .Values.falcoctl.config.artifact.install.refs .Values.collectors.kubernetes.pluginRef) | uniq)}}
{{- $_ = set .Values.falcoctl.config.artifact "allowedTypes" ((append .Values.falcoctl.config.artifact.allowedTypes "plugin") | uniq)}}
{{- end -}}
{{- end -}}
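When `collectors.kubernetes` and the driver are both enabled, the helper above ends up appending a plugin entry to `falco.plugins` roughly like the following (hostname and port are illustrative values derived as described in the comment, not chart defaults):

```yaml
plugins:
  - name: k8smeta
    library_path: libk8smeta.so
    init_config:
      collectorHostname: falco-k8s-metacollector.falco.svc  # illustrative
      collectorPort: 45000                                  # illustrative
      nodeName: "${FALCO_K8S_NODE_NAME}"
load_plugins: [k8smeta]
```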
{{/*
Based on the user input it populates the driver configuration in the falco config map.
*/}}
{{- define "falco.engineConfiguration" -}}
{{- if .Values.driver.enabled -}}
{{- $supportedDrivers := list "kmod" "ebpf" "modern_ebpf" "gvisor" "auto" -}}
{{- $aliasDrivers := list "module" "modern-bpf" -}}
{{- if and (not (has .Values.driver.kind $supportedDrivers)) (not (has .Values.driver.kind $aliasDrivers)) -}}
{{- fail (printf "unsupported driver kind: \"%s\". Supported drivers %s, alias %s" .Values.driver.kind $supportedDrivers $aliasDrivers) -}}
{{- end -}}
{{- if or (eq .Values.driver.kind "kmod") (eq .Values.driver.kind "module") -}}
{{- $kmodConfig := dict "kind" "kmod" "kmod" (dict "buf_size_preset" .Values.driver.kmod.bufSizePreset "drop_failed_exit" .Values.driver.kmod.dropFailedExit) -}}
{{- $_ := set .Values.falco "engine" $kmodConfig -}}
{{- else if eq .Values.driver.kind "ebpf" -}}
{{- $ebpfConfig := dict "kind" "ebpf" "ebpf" (dict "buf_size_preset" .Values.driver.ebpf.bufSizePreset "drop_failed_exit" .Values.driver.ebpf.dropFailedExit "probe" .Values.driver.ebpf.path) -}}
{{- $_ := set .Values.falco "engine" $ebpfConfig -}}
{{- else if or (eq .Values.driver.kind "modern_ebpf") (eq .Values.driver.kind "modern-bpf") -}}
{{- $ebpfConfig := dict "kind" "modern_ebpf" "modern_ebpf" (dict "buf_size_preset" .Values.driver.modernEbpf.bufSizePreset "drop_failed_exit" .Values.driver.modernEbpf.dropFailedExit "cpus_for_each_buffer" .Values.driver.modernEbpf.cpusForEachBuffer) -}}
{{- $_ := set .Values.falco "engine" $ebpfConfig -}}
{{- else if eq .Values.driver.kind "gvisor" -}}
{{- $root := printf "/host%s/k8s.io" .Values.driver.gvisor.runsc.root -}}
{{- $gvisorConfig := dict "kind" "gvisor" "gvisor" (dict "config" "/gvisor-config/pod-init.json" "root" $root) -}}
{{- $_ := set .Values.falco "engine" $gvisorConfig -}}
{{- else if eq .Values.driver.kind "auto" -}}
{{- $engineConfig := dict "kind" "modern_ebpf" "kmod" (dict "buf_size_preset" .Values.driver.kmod.bufSizePreset "drop_failed_exit" .Values.driver.kmod.dropFailedExit) "ebpf" (dict "buf_size_preset" .Values.driver.ebpf.bufSizePreset "drop_failed_exit" .Values.driver.ebpf.dropFailedExit "probe" .Values.driver.ebpf.path) "modern_ebpf" (dict "buf_size_preset" .Values.driver.modernEbpf.bufSizePreset "drop_failed_exit" .Values.driver.modernEbpf.dropFailedExit "cpus_for_each_buffer" .Values.driver.modernEbpf.cpusForEachBuffer) -}}
{{- $_ := set .Values.falco "engine" $engineConfig -}}
{{- end -}}
{{- end -}}
{{- end -}}
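As an example of what the helper above emits: with `driver.kind: modern_ebpf`, the rendered config map would contain an engine block along these lines (the numeric values stand in for the corresponding `driver.modernEbpf.*` settings and are placeholders):

```yaml
engine:
  kind: modern_ebpf
  modern_ebpf:
    buf_size_preset: 4        # placeholder for driver.modernEbpf.bufSizePreset
    drop_failed_exit: false   # placeholder for driver.modernEbpf.dropFailedExit
    cpus_for_each_buffer: 2   # placeholder for driver.modernEbpf.cpusForEachBuffer
```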
{{/*
It returns "true" if the driver loader has to be enabled, otherwise false.
*/}}
{{- define "driverLoader.enabled" -}}
{{- if or (eq .Values.driver.kind "modern_ebpf") (eq .Values.driver.kind "modern-bpf") (eq .Values.driver.kind "gvisor") (not .Values.driver.enabled) (not .Values.driver.loader.enabled) -}}
false
{{- else -}}
true
{{- end -}}
{{- end -}}
{{/*
Based on the user input it populates the metrics configuration in the falco config map.
*/}}
{{- define "falco.metricsConfiguration" -}}
{{- if .Values.metrics.enabled -}}
{{- $_ := set .Values.falco.webserver "prometheus_metrics_enabled" true -}}
{{- $_ = set .Values.falco.webserver "enabled" true -}}
{{- $_ = set .Values.falco.metrics "enabled" .Values.metrics.enabled -}}
{{- $_ = set .Values.falco.metrics "interval" .Values.metrics.interval -}}
{{- $_ = set .Values.falco.metrics "output_rule" .Values.metrics.outputRule -}}
{{- $_ = set .Values.falco.metrics "rules_counters_enabled" .Values.metrics.rulesCountersEnabled -}}
{{- $_ = set .Values.falco.metrics "resource_utilization_enabled" .Values.metrics.resourceUtilizationEnabled -}}
{{- $_ = set .Values.falco.metrics "state_counters_enabled" .Values.metrics.stateCountersEnabled -}}
{{- $_ = set .Values.falco.metrics "kernel_event_counters_enabled" .Values.metrics.kernelEventCountersEnabled -}}
{{- $_ = set .Values.falco.metrics "kernel_event_counters_per_cpu_enabled" .Values.metrics.kernelEventCountersPerCPUEnabled -}}
{{- $_ = set .Values.falco.metrics "libbpf_stats_enabled" .Values.metrics.libbpfStatsEnabled -}}
{{- $_ = set .Values.falco.metrics "convert_memory_to_mb" .Values.metrics.convertMemoryToMB -}}
{{- $_ = set .Values.falco.metrics "include_empty_values" .Values.metrics.includeEmptyValues -}}
{{- end -}}
{{- end -}}
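In other words, the helper maps the chart's camelCase `metrics.*` values onto Falco's snake_case `falco.metrics.*` keys and force-enables the webserver. A sketch (the `interval` value is illustrative):

```yaml
# chart values (input)
metrics:
  enabled: true
  interval: 1h
# rendered falco config (output, abridged)
falco:
  webserver:
    enabled: true
    prometheus_metrics_enabled: true
  metrics:
    enabled: true
    interval: 1h
```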
{{/*
This helper is used to add the container plugin to the falco configuration.
*/}}
{{ define "falco.containerPlugin" -}}
{{ if and .Values.driver.enabled .Values.collectors.enabled -}}
{{ if and (or .Values.collectors.docker.enabled .Values.collectors.crio.enabled .Values.collectors.containerd.enabled) .Values.collectors.containerEngine.enabled -}}
{{ fail "You can not enable any of the [docker, containerd, crio] collectors configuration and the containerEngine configuration at the same time. Please use the containerEngine configuration since the old configurations are deprecated." }}
{{ else if or .Values.collectors.docker.enabled .Values.collectors.crio.enabled .Values.collectors.containerd.enabled .Values.collectors.containerEngine.enabled -}}
{{ if or .Values.collectors.docker.enabled .Values.collectors.crio.enabled .Values.collectors.containerd.enabled -}}
{{ $_ := set .Values.collectors.containerEngine.engines.docker "enabled" .Values.collectors.docker.enabled -}}
{{ $_ = set .Values.collectors.containerEngine.engines.docker "sockets" (list .Values.collectors.docker.socket) -}}
{{ $_ = set .Values.collectors.containerEngine.engines.containerd "enabled" .Values.collectors.containerd.enabled -}}
{{ $_ = set .Values.collectors.containerEngine.engines.containerd "sockets" (list .Values.collectors.containerd.socket) -}}
{{ $_ = set .Values.collectors.containerEngine.engines.cri "enabled" .Values.collectors.crio.enabled -}}
{{ $_ = set .Values.collectors.containerEngine.engines.cri "sockets" (list .Values.collectors.crio.socket) -}}
{{ $_ = set .Values.collectors.containerEngine.engines.podman "enabled" false -}}
{{ $_ = set .Values.collectors.containerEngine.engines.lxc "enabled" false -}}
{{ $_ = set .Values.collectors.containerEngine.engines.libvirt_lxc "enabled" false -}}
{{ $_ = set .Values.collectors.containerEngine.engines.bpm "enabled" false -}}
{{ end -}}
{{ $hasConfig := false -}}
{{ range .Values.falco.plugins -}}
{{ if eq (get . "name") "container" -}}
{{ $hasConfig = true -}}
{{ end -}}
{{ end -}}
{{ if not $hasConfig -}}
{{ $pluginConfig := dict -}}
{{ with .Values.collectors.containerEngine -}}
{{ $pluginConfig = dict "name" "container" "library_path" "libcontainer.so" "init_config" (dict "label_max_len" .labelMaxLen "with_size" .withSize "hooks" .hooks "engines" .engines) -}}
{{ end -}}
{{ $newConfig := append .Values.falco.plugins $pluginConfig -}}
{{ $_ := set .Values.falco "plugins" ($newConfig | uniq) -}}
{{ $loadedPlugins := append .Values.falco.load_plugins "container" -}}
{{ $_ = set .Values.falco "load_plugins" ($loadedPlugins | uniq) -}}
{{ end -}}
{{ $_ := set .Values.falcoctl.config.artifact.install "refs" ((append .Values.falcoctl.config.artifact.install.refs .Values.collectors.containerEngine.pluginRef) | uniq) -}}
{{ $_ = set .Values.falcoctl.config.artifact "allowedTypes" ((append .Values.falcoctl.config.artifact.allowedTypes "plugin") | uniq) -}}
{{ end -}}
{{ end -}}
{{ end -}}
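With only the `containerEngine` collector enabled, the helper above would append a plugin entry shaped roughly like this (the socket path and enabled engine set are illustrative):

```yaml
plugins:
  - name: container
    library_path: libcontainer.so
    init_config:
      engines:
        containerd:
          enabled: true
          sockets: [/run/containerd/containerd.sock]  # illustrative
        docker:
          enabled: false
load_plugins: [container]
```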
{{/*
This helper is used to add container plugin volumes to the falco pod.
*/}}
{{- define "falco.containerPluginVolumes" -}}
{{- if and .Values.driver.enabled .Values.collectors.enabled -}}
{{- if and (or .Values.collectors.docker.enabled .Values.collectors.crio.enabled .Values.collectors.containerd.enabled) .Values.collectors.containerEngine.enabled -}}
{{ fail "You can not enable any of the [docker, containerd, crio] collectors configuration and the containerEngine configuration at the same time. Please use the containerEngine configuration since the old configurations are deprecated." }}
{{- end -}}
{{ $volumes := list -}}
{{- if .Values.collectors.docker.enabled -}}
{{ $volumes = append $volumes (dict "name" "docker-socket" "hostPath" (dict "path" .Values.collectors.docker.socket)) -}}
{{- end -}}
{{- if .Values.collectors.crio.enabled -}}
{{ $volumes = append $volumes (dict "name" "crio-socket" "hostPath" (dict "path" .Values.collectors.crio.socket)) -}}
{{- end -}}
{{- if .Values.collectors.containerd.enabled -}}
{{ $volumes = append $volumes (dict "name" "containerd-socket" "hostPath" (dict "path" .Values.collectors.containerd.socket)) -}}
{{- end -}}
{{- if .Values.collectors.containerEngine.enabled -}}
{{- $seenPaths := dict -}}
{{- $idx := 0 -}}
{{- $engineOrder := list "docker" "podman" "containerd" "cri" "lxc" "libvirt_lxc" "bpm" -}}
{{- range $engineName := $engineOrder -}}
{{- $val := index $.Values.collectors.containerEngine.engines $engineName -}}
{{- if and $val $val.enabled -}}
{{- range $index, $socket := $val.sockets -}}
{{- $mountPath := print "/host" $socket -}}
{{- if not (hasKey $seenPaths $mountPath) -}}
{{ $volumes = append $volumes (dict "name" (printf "container-engine-socket-%d" $idx) "hostPath" (dict "path" $socket)) -}}
{{- $idx = add $idx 1 -}}
{{- $_ := set $seenPaths $mountPath true -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{- if gt (len $volumes) 0 -}}
{{ toYaml $volumes -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
This helper is used to add container plugin volumeMounts to the falco pod.
*/}}
{{- define "falco.containerPluginVolumeMounts" -}}
{{- if and .Values.driver.enabled .Values.collectors.enabled -}}
{{- if and (or .Values.collectors.docker.enabled .Values.collectors.crio.enabled .Values.collectors.containerd.enabled) .Values.collectors.containerEngine.enabled -}}
{{ fail "You can not enable any of the [docker, containerd, crio] collectors configuration and the containerEngine configuration at the same time. Please use the containerEngine configuration since the old configurations are deprecated." }}
{{- end -}}
{{ $volumeMounts := list -}}
{{- if .Values.collectors.docker.enabled -}}
{{ $volumeMounts = append $volumeMounts (dict "name" "docker-socket" "mountPath" (print "/host" .Values.collectors.docker.socket)) -}}
{{- end -}}
{{- if .Values.collectors.crio.enabled -}}
{{ $volumeMounts = append $volumeMounts (dict "name" "crio-socket" "mountPath" (print "/host" .Values.collectors.crio.socket)) -}}
{{- end -}}
{{- if .Values.collectors.containerd.enabled -}}
{{ $volumeMounts = append $volumeMounts (dict "name" "containerd-socket" "mountPath" (print "/host" .Values.collectors.containerd.socket)) -}}
{{- end -}}
{{- if .Values.collectors.containerEngine.enabled -}}
{{- $seenPaths := dict -}}
{{- $idx := 0 -}}
{{- $engineOrder := list "docker" "podman" "containerd" "cri" "lxc" "libvirt_lxc" "bpm" -}}
{{- range $engineName := $engineOrder -}}
{{- $val := index $.Values.collectors.containerEngine.engines $engineName -}}
{{- if and $val $val.enabled -}}
{{- range $index, $socket := $val.sockets -}}
{{- $mountPath := print "/host" $socket -}}
{{- if not (hasKey $seenPaths $mountPath) -}}
{{ $volumeMounts = append $volumeMounts (dict "name" (printf "container-engine-socket-%d" $idx) "mountPath" $mountPath) -}}
{{- $idx = add $idx 1 -}}
{{- $_ := set $seenPaths $mountPath true -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{- if gt (len $volumeMounts) 0 -}}
{{ toYaml ($volumeMounts) }}
{{- end -}}
{{- end -}}
{{- end -}}
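The two helpers above walk the engines in a fixed order, prefix each socket with `/host`, skip paths already seen, and number the generated volumes sequentially so the `container-engine-socket-%d` names stay in sync between volumes and volumeMounts. A minimal Go sketch of that deduplication logic (the `mount` type and `dedupMounts` helper are illustrative, not part of the chart):

```go
package main

import "fmt"

// mount pairs a generated volume name with its in-container path.
type mount struct{ Name, Path string }

// dedupMounts mirrors the template helpers: engines are visited in a fixed
// order, each socket is prefixed with /host, duplicate paths are skipped,
// and volumes are numbered sequentially.
func dedupMounts(engineSockets [][]string) []mount {
	seen := map[string]bool{}
	var out []mount
	idx := 0
	for _, sockets := range engineSockets {
		for _, s := range sockets {
			p := "/host" + s
			if seen[p] {
				continue // same host path already mounted; skip it
			}
			seen[p] = true
			out = append(out, mount{fmt.Sprintf("container-engine-socket-%d", idx), p})
			idx++
		}
	}
	return out
}

func main() {
	// The containerd socket appearing under two engines is mounted once.
	for _, m := range dedupMounts([][]string{
		{"/var/run/docker.sock"},
		{"/run/containerd/containerd.sock", "/var/run/docker.sock"},
	}) {
		fmt.Println(m.Name, m.Path)
	}
}
```

This is why both helpers must iterate engines in the same `$engineOrder`: the index-based names only line up if volumes and volumeMounts see sockets in the same sequence.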


@ -0,0 +1,18 @@
{{- if and .Values.certs.client.key .Values.certs.client.crt .Values.certs.ca.crt }}
apiVersion: v1
kind: Secret
metadata:
name: {{ include "falco.fullname" . }}-client-certs
namespace: {{ .Release.Namespace }}
labels:
{{- include "falco.labels" $ | nindent 4 }}
type: Opaque
data:
{{ $key := .Values.certs.client.key }}
client.key: {{ $key | b64enc | quote }}
{{ $crt := .Values.certs.client.crt }}
client.crt: {{ $crt | b64enc | quote }}
falcoclient.pem: {{ print $key $crt | b64enc | quote }}
ca.crt: {{ .Values.certs.ca.crt | b64enc | quote }}
ca.pem: {{ .Values.certs.ca.crt | b64enc | quote }}
{{- end }}


@ -8,4 +8,8 @@ metadata:
data:
falco.yaml: |-
{{- include "falco.falcosidekickConfig" . }}
{{- include "k8smeta.configuration" . -}}
{{- include "falco.engineConfiguration" . -}}
{{- include "falco.metricsConfiguration" . -}}
{{- include "falco.containerPlugin" . -}}
{{- toYaml .Values.falco | nindent 4 }}


@ -6,6 +6,9 @@ metadata:
namespace: {{ include "falco.namespace" . }}
labels:
{{- include "falco.labels" . | nindent 4 }}
{{- if .Values.controller.labels }}
{{- toYaml .Values.controller.labels | nindent 4 }}
{{- end }}
{{- if .Values.controller.annotations }}
annotations:
{{ toYaml .Values.controller.annotations | nindent 4 }}
@ -20,4 +23,4 @@ spec:
updateStrategy:
{{- toYaml . | nindent 4 }}
{{- end }}
{{- end }}
{{- end }}


@ -6,6 +6,9 @@ metadata:
namespace: {{ include "falco.namespace" . }}
labels:
{{- include "falco.labels" . | nindent 4 }}
{{- if .Values.controller.labels }}
{{- toYaml .Values.controller.labels | nindent 4 }}
{{- end }}
{{- if .Values.controller.annotations }}
annotations:
{{ toYaml .Values.controller.annotations | nindent 4 }}
@ -20,4 +23,4 @@ spec:
{{- include "falco.selectorLabels" . | nindent 6 }}
template:
{{- include "falco.podTemplate" . | nindent 4 }}
{{- end }}
{{- end }}


@ -0,0 +1,22 @@
{{- if .Values.grafana.dashboards.enabled -}}
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ .Values.grafana.dashboards.configMaps.falco.name }}
{{ if .Values.grafana.dashboards.configMaps.falco.namespace }}
namespace: {{ .Values.grafana.dashboards.configMaps.falco.namespace }}
{{- else -}}
namespace: {{ include "falco.namespace" . }}
{{- end }}
labels:
{{- include "falco.labels" . | nindent 4 }}
grafana_dashboard: "1"
{{- if .Values.grafana.dashboards.configMaps.falco.folder }}
annotations:
k8s-sidecar-target-directory: /tmp/dashboards/{{ .Values.grafana.dashboards.configMaps.falco.folder }}
grafana_dashboard_folder: {{ .Values.grafana.dashboards.configMaps.falco.folder }}
{{- end }}
data:
falco-dashboard.json: |-
{{- .Files.Get "dashboards/falco-dashboard.json" | nindent 4 }}
{{- end -}}


@ -8,5 +8,7 @@ metadata:
{{- include "falco.labels" . | nindent 4 }}
data:
falcoctl.yaml: |-
{{- include "k8smeta.configuration" . -}}
{{- include "falco.containerPlugin" . -}}
{{- toYaml .Values.falcoctl.config | nindent 4 }}
{{- end }}


@ -12,10 +12,24 @@ metadata:
{{- if and .Values.certs (not .Values.certs.existingSecret) }}
checksum/certs: {{ include (print $.Template.BasePath "/certs-secret.yaml") . | sha256sum }}
{{- end }}
{{- if .Values.driver.enabled }}
{{- if (or (eq .Values.driver.kind "modern_ebpf") (eq .Values.driver.kind "modern-bpf")) }}
{{- if .Values.driver.modernEbpf.leastPrivileged }}
container.apparmor.security.beta.kubernetes.io/{{ .Chart.Name }}: unconfined
{{- end }}
{{- else if eq .Values.driver.kind "ebpf" }}
{{- if .Values.driver.ebpf.leastPrivileged }}
container.apparmor.security.beta.kubernetes.io/{{ .Chart.Name }}: unconfined
{{- end }}
{{- end }}
{{- end }}
{{- with .Values.podAnnotations }}
{{- toYaml . | nindent 4 }}
{{- end }}
spec:
{{- if .Values.falco.podHostname }}
hostname: {{ .Values.falco.podHostname }}
{{- end }}
serviceAccountName: {{ include "falco.serviceAccountName" . }}
{{- with .Values.podSecurityContext }}
securityContext:
@ -46,9 +60,10 @@ spec:
imagePullSecrets:
{{- toYaml . | nindent 4 }}
{{- end }}
{{- if .Values.gvisor.enabled }}
{{- if eq .Values.driver.kind "gvisor" }}
hostNetwork: true
hostPID: true
dnsPolicy: ClusterFirstWithHostNet
{{- end }}
containers:
- name: {{ .Chart.Name }}
@ -60,56 +75,30 @@ spec:
{{- include "falco.securityContext" . | nindent 8 }}
args:
- /usr/bin/falco
{{- if and .Values.driver.enabled (eq .Values.driver.kind "modern-bpf") }}
- --modern-bpf
{{- end }}
{{- if .Values.gvisor.enabled }}
- --gvisor-config
- /gvisor-config/pod-init.json
- --gvisor-root
- /host{{ .Values.gvisor.runsc.root }}/k8s.io
{{- end }}
{{- include "falco.configSyscallSource" . | indent 8 }}
{{- with .Values.collectors }}
{{- if .enabled }}
{{- if .containerd.enabled }}
- --cri
- /run/containerd/containerd.sock
{{- end }}
{{- if .crio.enabled }}
- --cri
- /run/crio/crio.sock
{{- end }}
{{- if .kubernetes.enabled }}
- -K
- {{ .kubernetes.apiAuth }}
- -k
- {{ .kubernetes.apiUrl }}
{{- if .kubernetes.enableNodeFilter }}
- --k8s-node
- "$(FALCO_K8S_NODE_NAME)"
{{- end }}
{{- end }}
- -pk
{{- end }}
{{- end }}
{{- with .Values.extra.args }}
{{- toYaml . | nindent 8 }}
{{- end }}
env:
- name: HOST_ROOT
value: /host
- name: FALCO_HOSTNAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: FALCO_K8S_NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
{{- if and .Values.driver.enabled (eq .Values.driver.kind "ebpf") }}
- name: FALCO_BPF_PROBE
value: {{ .Values.driver.ebpf.path }}
{{- end }}
{{- if .Values.extra.env }}
{{- include "falco.renderTemplate" ( dict "value" .Values.extra.env "context" $) | nindent 8 }}
{{- end }}
tty: {{ .Values.tty }}
{{- if .Values.falco.webserver.enabled }}
ports:
- containerPort: {{ .Values.falco.webserver.listen_port }}
name: web
protocol: TCP
livenessProbe:
initialDelaySeconds: {{ .Values.healthChecks.livenessProbe.initialDelaySeconds }}
timeoutSeconds: {{ .Values.healthChecks.livenessProbe.timeoutSeconds }}
@ -132,6 +121,7 @@ spec:
{{- end }}
{{- end }}
volumeMounts:
{{- include "falco.containerPluginVolumeMounts" . | nindent 8 -}}
{{- if or .Values.falcoctl.artifact.install.enabled .Values.falcoctl.artifact.follow.enabled }}
{{- if has "rulesfile" .Values.falcoctl.config.artifact.allowedTypes }}
- mountPath: /etc/falco
@ -141,13 +131,15 @@ spec:
- mountPath: /usr/share/falco/plugins
name: plugins-install-dir
{{- end }}
{{- end }}
{{- if eq (include "driverLoader.enabled" .) "true" }}
- mountPath: /etc/falco/config.d
name: specialized-falco-configs
{{- end }}
- mountPath: /root/.falco
name: root-falco-fs
{{- if or .Values.driver.enabled .Values.mounts.enforceProcMount }}
- mountPath: /host/proc
name: proc-fs
{{- end }}
{{- if and .Values.driver.enabled (not .Values.driver.loader.enabled) }}
readOnly: true
- mountPath: /host/boot
@ -158,37 +150,23 @@ spec:
- mountPath: /host/usr
name: usr-fs
readOnly: true
{{- end }}
{{- if .Values.driver.enabled }}
- mountPath: /host/etc
name: etc-fs
readOnly: true
{{- end }}
{{- if and .Values.driver.enabled (eq .Values.driver.kind "module") }}
{{- end -}}
{{- if and .Values.driver.enabled (or (eq .Values.driver.kind "kmod") (eq .Values.driver.kind "module") (eq .Values.driver.kind "auto")) }}
- mountPath: /host/dev
name: dev-fs
readOnly: true
- name: sys-fs
mountPath: /sys/module/falco
mountPath: /sys/module
{{- end }}
{{- if and .Values.driver.enabled (and (eq .Values.driver.kind "ebpf") (contains "falco-no-driver" .Values.image.repository)) }}
- name: debugfs
mountPath: /sys/kernel/debug
{{- end }}
{{- with .Values.collectors }}
{{- if .enabled }}
{{- if .docker.enabled }}
- mountPath: /host/var/run/docker.sock
name: docker-socket
{{- end }}
{{- if .containerd.enabled }}
- mountPath: /host/run/containerd/containerd.sock
name: containerd-socket
{{- end }}
{{- if .crio.enabled }}
- mountPath: /host/run/crio/crio.sock
name: crio-socket
{{- end }}
{{- end }}
{{- end }}
- mountPath: /etc/falco/falco.yaml
name: falco-yaml
subPath: falco.yaml
@ -201,17 +179,22 @@ spec:
name: certs-volume
readOnly: true
{{- end }}
{{- if or .Values.certs.existingClientSecret (and .Values.certs.client.key .Values.certs.client.crt .Values.certs.ca.crt) }}
- mountPath: /etc/falco/certs/client
name: client-certs-volume
readOnly: true
{{- end }}
{{- include "falco.unixSocketVolumeMount" . | nindent 8 -}}
{{- with .Values.mounts.volumeMounts }}
{{- toYaml . | nindent 8 }}
{{- end }}
{{- if .Values.gvisor.enabled }}
{{- if eq .Values.driver.kind "gvisor" }}
- mountPath: /usr/local/bin/runsc
name: runsc-path
readOnly: true
- mountPath: /host{{ .Values.gvisor.runsc.root }}
- mountPath: /host{{ .Values.driver.gvisor.runsc.root }}
name: runsc-root
- mountPath: /host{{ .Values.gvisor.runsc.config }}
- mountPath: /host{{ .Values.driver.gvisor.runsc.config }}
name: runsc-config
- mountPath: /gvisor-config
name: falco-gvisor-config
@ -223,18 +206,21 @@ spec:
{{- with .Values.extra.initContainers }}
{{- toYaml . | nindent 4 }}
{{- end }}
{{- if and .Values.gvisor.enabled }}
{{- if eq .Values.driver.kind "gvisor" }}
{{- include "falco.gvisor.initContainer" . | nindent 4 }}
{{- end }}
{{- if and .Values.driver.enabled (ne .Values.driver.kind "modern-bpf") }}
{{- if.Values.driver.loader.enabled }}
{{- if eq (include "driverLoader.enabled" .) "true" }}
{{- include "falco.driverLoader.initContainer" . | nindent 4 }}
{{- end }}
{{- end }}
{{- if .Values.falcoctl.artifact.install.enabled }}
{{- include "falcoctl.initContainer" . | nindent 4 }}
{{- end }}
volumes:
{{- include "falco.containerPluginVolumes" . | nindent 4 -}}
{{- if eq (include "driverLoader.enabled" .) "true" }}
- name: specialized-falco-configs
emptyDir: {}
{{- end }}
{{- if or .Values.falcoctl.artifact.install.enabled .Values.falcoctl.artifact.follow.enabled }}
- name: plugins-install-dir
emptyDir: {}
@ -257,54 +243,33 @@ spec:
hostPath:
path: /etc
{{- end }}
{{- if and .Values.driver.enabled (eq .Values.driver.kind "module") }}
{{- if and .Values.driver.enabled (or (eq .Values.driver.kind "kmod") (eq .Values.driver.kind "module") (eq .Values.driver.kind "auto")) }}
- name: dev-fs
hostPath:
path: /dev
- name: sys-fs
hostPath:
path: /sys/module/falco
path: /sys/module
{{- end }}
{{- if and .Values.driver.enabled (and (eq .Values.driver.kind "ebpf") (contains "falco-no-driver" .Values.image.repository)) }}
- name: debugfs
hostPath:
path: /sys/kernel/debug
{{- end }}
{{- with .Values.collectors }}
{{- if .enabled }}
{{- if .docker.enabled }}
- name: docker-socket
hostPath:
path: {{ .docker.socket }}
{{- end }}
{{- if .containerd.enabled }}
- name: containerd-socket
hostPath:
path: {{ .containerd.socket }}
{{- end }}
{{- if .crio.enabled }}
- name: crio-socket
hostPath:
path: {{ .crio.socket }}
{{- end }}
{{- end }}
{{- end }}
{{- if or .Values.driver.enabled .Values.mounts.enforceProcMount }}
- name: proc-fs
hostPath:
path: /proc
{{- end }}
{{- if .Values.gvisor.enabled }}
{{- if eq .Values.driver.kind "gvisor" }}
- name: runsc-path
hostPath:
path: {{ .Values.gvisor.runsc.path }}/runsc
path: {{ .Values.driver.gvisor.runsc.path }}/runsc
type: File
- name: runsc-root
hostPath:
path: {{ .Values.gvisor.runsc.root }}
path: {{ .Values.driver.gvisor.runsc.root }}
- name: runsc-config
hostPath:
path: {{ .Values.gvisor.runsc.config }}
path: {{ .Values.driver.gvisor.runsc.config }}
type: File
- name: falco-gvisor-config
emptyDir: {}
@ -335,6 +300,15 @@ spec:
secretName: {{ include "falco.fullname" . }}-certs
{{- end }}
{{- end }}
{{- if or .Values.certs.existingClientSecret (and .Values.certs.client.key .Values.certs.client.crt .Values.certs.ca.crt) }}
- name: client-certs-volume
secret:
{{- if .Values.certs.existingClientSecret }}
secretName: {{ .Values.certs.existingClientSecret }}
{{- else }}
secretName: {{ include "falco.fullname" . }}-client-certs
{{- end }}
{{- end }}
{{- include "falco.unixSocketVolume" . | nindent 4 -}}
{{- with .Values.mounts.volumes }}
{{- toYaml . | nindent 4 }}
@ -345,10 +319,17 @@ spec:
- name: {{ .Chart.Name }}-driver-loader
image: {{ include "falco.driverLoader.image" . }}
imagePullPolicy: {{ .Values.driver.loader.initContainer.image.pullPolicy }}
{{- with .Values.driver.loader.initContainer.args }}
args:
{{- with .Values.driver.loader.initContainer.args }}
{{- toYaml . | nindent 4 }}
{{- end }}
{{- if eq .Values.driver.kind "module" }}
- kmod
{{- else if eq .Values.driver.kind "modern-bpf"}}
- modern_ebpf
{{- else }}
- {{ .Values.driver.kind }}
{{- end }}
{{- with .Values.driver.loader.initContainer.resources }}
resources:
{{- toYaml . | nindent 4 }}
@ -356,7 +337,7 @@ spec:
securityContext:
{{- if .Values.driver.loader.initContainer.securityContext }}
{{- toYaml .Values.driver.loader.initContainer.securityContext | nindent 4 }}
{{- else if eq .Values.driver.kind "module" }}
{{- else if (or (eq .Values.driver.kind "kmod") (eq .Values.driver.kind "module") (eq .Values.driver.kind "auto")) }}
privileged: true
{{- end }}
volumeMounts:
@ -376,20 +357,31 @@ spec:
- mountPath: /host/etc
name: etc-fs
readOnly: true
- mountPath: /etc/falco/config.d
name: specialized-falco-configs
env:
{{- if eq .Values.driver.kind "ebpf" }}
- name: FALCO_BPF_PROBE
value: {{ .Values.driver.ebpf.path }}
{{- end }}
- name: HOST_ROOT
value: /host
{{- if .Values.driver.loader.initContainer.env }}
{{- include "falco.renderTemplate" ( dict "value" .Values.driver.loader.initContainer.env "context" $) | nindent 4 }}
{{- end }}
{{- if eq .Values.driver.kind "auto" }}
- name: FALCOCTL_DRIVER_CONFIG_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: FALCOCTL_DRIVER_CONFIG_CONFIGMAP
value: {{ include "falco.fullname" . }}
{{- else }}
- name: FALCOCTL_DRIVER_CONFIG_UPDATE_FALCO
value: "false"
{{- end }}
{{- end -}}
{{- define "falco.securityContext" -}}
{{- $securityContext := dict -}}
{{- if .Values.driver.enabled -}}
{{- if eq .Values.driver.kind "module" -}}
{{- if (or (eq .Values.driver.kind "kmod") (eq .Values.driver.kind "module") (eq .Values.driver.kind "auto")) -}}
{{- $securityContext := set $securityContext "privileged" true -}}
{{- end -}}
{{- if eq .Values.driver.kind "ebpf" -}}
@ -399,8 +391,8 @@ spec:
{{- $securityContext := set $securityContext "privileged" true -}}
{{- end -}}
{{- end -}}
{{- if eq .Values.driver.kind "modern-bpf" -}}
{{- if .Values.driver.modern_bpf.leastPrivileged -}}
{{- if (or (eq .Values.driver.kind "modern_ebpf") (eq .Values.driver.kind "modern-bpf")) -}}
{{- if .Values.driver.modernEbpf.leastPrivileged -}}
{{- $securityContext := set $securityContext "capabilities" (dict "add" (list "BPF" "SYS_RESOURCE" "PERFMON" "SYS_PTRACE")) -}}
{{- else -}}
{{- $securityContext := set $securityContext "privileged" true -}}


@ -0,0 +1,17 @@
{{- if and .Values.rbac.create (eq .Values.driver.kind "auto")}}
kind: Role
apiVersion: {{ include "rbac.apiVersion" . }}
metadata:
name: {{ include "falco.fullname" . }}
labels:
{{- include "falco.labels" . | nindent 4 }}
rules:
- apiGroups:
- ""
resources:
- configmaps
verbs:
- get
- list
- update
{{- end }}


@ -1,5 +1,5 @@
{{- if .Values.rbac.create }}
kind: ClusterRoleBinding
{{- if and .Values.rbac.create (eq .Values.driver.kind "auto")}}
kind: RoleBinding
apiVersion: {{ include "rbac.apiVersion" . }}
metadata:
name: {{ include "falco.fullname" . }}
@ -10,7 +10,7 @@ subjects:
name: {{ include "falco.serviceAccountName" . }}
namespace: {{ include "falco.namespace" . }}
roleRef:
kind: ClusterRole
kind: Role
name: {{ include "falco.fullname" . }}
apiGroup: rbac.authorization.k8s.io
{{- end }}


@ -36,8 +36,8 @@ supplementalGroups:
users:
- system:serviceaccount:{{ include "falco.namespace" . }}:{{ include "falco.serviceAccountName" . }}
volumes:
- hostPath
- emptyDir
- secret
- configMap
{{- end }}
- emptyDir
- hostPath
- secret
{{- end }}


@ -0,0 +1,26 @@
{{- if and .Values.metrics.enabled .Values.metrics.service.create }}
apiVersion: v1
kind: Service
metadata:
name: {{ include "falco.fullname" . }}-metrics
namespace: {{ include "falco.namespace" . }}
labels:
{{- include "falco.labels" . | nindent 4 }}
{{- with .Values.metrics.service.labels }}
{{ toYaml . | nindent 4 }}
{{- end }}
type: "falco-metrics"
{{- with .Values.metrics.service.annotations }}
annotations:
{{ toYaml . | nindent 4 }}
{{- end }}
spec:
type: {{ .Values.metrics.service.type }}
ports:
- port: {{ .Values.metrics.service.ports.metrics.port }}
targetPort: {{ .Values.metrics.service.ports.metrics.targetPort }}
protocol: {{ .Values.metrics.service.ports.metrics.protocol }}
name: "metrics"
selector:
{{- include "falco.selectorLabels" . | nindent 4 }}
{{- end }}


@ -0,0 +1,51 @@
{{- if .Values.serviceMonitor.create }}
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: {{ include "falco.fullname" . }}
{{- if .Values.serviceMonitor.namespace }}
namespace: {{ tpl .Values.serviceMonitor.namespace . }}
{{- else }}
namespace: {{ include "falco.namespace" . }}
{{- end }}
labels:
{{- include "falco.labels" . | nindent 4 }}
{{- with .Values.serviceMonitor.labels }}
{{- toYaml . | nindent 4 }}
{{- end }}
spec:
endpoints:
- port: "{{ .Values.serviceMonitor.endpointPort }}"
{{- with .Values.serviceMonitor.interval }}
interval: {{ . }}
{{- end }}
{{- with .Values.serviceMonitor.scrapeTimeout }}
scrapeTimeout: {{ . }}
{{- end }}
honorLabels: true
path: {{ .Values.serviceMonitor.path }}
scheme: {{ .Values.serviceMonitor.scheme }}
{{- with .Values.serviceMonitor.tlsConfig }}
tlsConfig:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.serviceMonitor.relabelings }}
relabelings:
{{- toYaml . | nindent 8 }}
{{- end }}
jobLabel: "{{ .Release.Name }}"
selector:
matchLabels:
{{- include "falco.selectorLabels" . | nindent 6 }}
{{- with .Values.serviceMonitor.selector }}
{{- toYaml . | nindent 6 }}
{{- end }}
type: "falco-metrics"
namespaceSelector:
matchNames:
- {{ include "falco.namespace" . }}
{{- with .Values.serviceMonitor.targetLabels }}
targetLabels:
{{- toYaml . | nindent 4 }}
{{- end }}
{{- end }}


@ -1,5 +1,10 @@
{{- if .Values.serviceAccount.create -}}
apiVersion: v1
{{- with .Values.serviceAccount.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 2 }}
{{- end }}
kind: ServiceAccount
metadata:
name: {{ include "falco.serviceAccountName" . }}
@ -10,4 +15,4 @@ metadata:
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
{{- end }}
{{- end }}


@ -0,0 +1,35 @@
// SPDX-License-Identifier: Apache-2.0
// Copyright 2024 The Falco Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package unit

import (
"testing"
"github.com/gruntwork-io/terratest/modules/helm"
"gopkg.in/yaml.v3"
)

// ChartInfo returns the chart's information.
func ChartInfo(t *testing.T, chartPath string) (map[string]interface{}, error) {
// Get chart info.
output, err := helm.RunHelmCommandAndGetOutputE(t, &helm.Options{}, "show", "chart", chartPath)
if err != nil {
return nil, err
}
chartInfo := map[string]interface{}{}
err = yaml.Unmarshal([]byte(output), &chartInfo)
return chartInfo, err
}


@ -0,0 +1,29 @@
// SPDX-License-Identifier: Apache-2.0
// Copyright 2024 The Falco Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package unit

const (
// ReleaseName is the name of the release we expect in the rendered resources.
ReleaseName = "rendered-resources"
// PatternK8sMetacollectorFiles is the regex pattern we expect to find in the rendered resources.
PatternK8sMetacollectorFiles = `# Source: falco/charts/k8s-metacollector/templates/([^\n]+)`
// K8sMetaPluginName is the name of the k8smeta plugin we expect in the falco configuration.
K8sMetaPluginName = "k8smeta"
// ContainerPluginName is the name of the container plugin we expect in the falco configuration.
ContainerPluginName = "container"
// ChartPath is the path to the chart.
ChartPath = "../../.."
)


@ -0,0 +1,13 @@
package containerPlugin

var volumeNames = []string{
"docker-socket",
"containerd-socket",
"crio-socket",
"container-engine-socket-0",
"container-engine-socket-1",
"container-engine-socket-2",
"container-engine-socket-3",
"container-engine-socket-4",
"container-engine-socket-5",
}


@ -0,0 +1,767 @@
package containerPlugin

import (
"path/filepath"
"slices"
"testing"
"github.com/stretchr/testify/require"
"gopkg.in/yaml.v3"
corev1 "k8s.io/api/core/v1"
"github.com/falcosecurity/charts/charts/falco/tests/unit"
"github.com/gruntwork-io/terratest/modules/helm"
)

func TestContainerPluginConfiguration(t *testing.T) {
t.Parallel()
helmChartPath, err := filepath.Abs(unit.ChartPath)
require.NoError(t, err)
testCases := []struct {
name string
values map[string]string
expected func(t *testing.T, config any)
}{
{
"defaultValues",
nil,
func(t *testing.T, config any) {
plugin := config.(map[string]interface{})
initConfig, ok := plugin["init_config"]
require.True(t, ok)
initConfigMap := initConfig.(map[string]interface{})
// Check engines configurations.
engines, ok := initConfigMap["engines"].(map[string]interface{})
require.True(t, ok, "checking if engines section exists")
require.Len(t, engines, 7, "checking number of engines")
var engineConfig ContainerEngineConfig
// Unmarshal the engines configuration.
data, err := yaml.Marshal(engines)
require.NoError(t, err)
err = yaml.Unmarshal(data, &engineConfig)
require.NoError(t, err)
// Check the default values for each engine.
require.True(t, engineConfig.Docker.Enabled)
require.Equal(t, []string{"/var/run/docker.sock"}, engineConfig.Docker.Sockets)
require.True(t, engineConfig.Podman.Enabled)
require.Equal(t, []string{"/run/podman/podman.sock"}, engineConfig.Podman.Sockets)
require.True(t, engineConfig.Containerd.Enabled)
require.Equal(t, []string{"/run/host-containerd/containerd.sock"}, engineConfig.Containerd.Sockets)
require.True(t, engineConfig.CRI.Enabled)
require.Equal(t, []string{"/run/containerd/containerd.sock", "/run/crio/crio.sock", "/run/k3s/containerd/containerd.sock", "/run/host-containerd/containerd.sock"}, engineConfig.CRI.Sockets)
require.True(t, engineConfig.LXC.Enabled)
require.True(t, engineConfig.LibvirtLXC.Enabled)
require.True(t, engineConfig.BPM.Enabled)
},
},
{
name: "changeDockerSocket",
values: map[string]string{
"collectors.containerEngine.engines.docker.enabled": "true",
"collectors.containerEngine.engines.docker.sockets[0]": "/custom/docker.sock",
},
expected: func(t *testing.T, config any) {
plugin := config.(map[string]interface{})
initConfig, ok := plugin["init_config"]
require.True(t, ok)
initConfigMap := initConfig.(map[string]interface{})
engines, ok := initConfigMap["engines"].(map[string]interface{})
require.True(t, ok)
var engineConfig ContainerEngineConfig
data, err := yaml.Marshal(engines)
require.NoError(t, err)
err = yaml.Unmarshal(data, &engineConfig)
require.NoError(t, err)
require.True(t, engineConfig.Docker.Enabled)
require.Equal(t, []string{"/custom/docker.sock"}, engineConfig.Docker.Sockets)
},
},
{
name: "changeCriSocket",
values: map[string]string{
"collectors.containerEngine.engines.cri.enabled": "true",
"collectors.containerEngine.engines.cri.sockets[0]": "/custom/cri.sock",
},
expected: func(t *testing.T, config any) {
plugin := config.(map[string]interface{})
initConfig, ok := plugin["init_config"]
require.True(t, ok)
initConfigMap := initConfig.(map[string]interface{})
engines, ok := initConfigMap["engines"].(map[string]interface{})
require.True(t, ok)
var engineConfig ContainerEngineConfig
data, err := yaml.Marshal(engines)
require.NoError(t, err)
err = yaml.Unmarshal(data, &engineConfig)
require.NoError(t, err)
require.True(t, engineConfig.CRI.Enabled)
require.Equal(t, []string{"/custom/cri.sock"}, engineConfig.CRI.Sockets)
},
},
{
name: "disableDockerSocket",
values: map[string]string{
"collectors.containerEngine.engines.docker.enabled": "false",
},
expected: func(t *testing.T, config any) {
plugin := config.(map[string]interface{})
initConfig, ok := plugin["init_config"]
require.True(t, ok)
initConfigMap := initConfig.(map[string]interface{})
engines, ok := initConfigMap["engines"].(map[string]interface{})
require.True(t, ok)
var engineConfig ContainerEngineConfig
data, err := yaml.Marshal(engines)
require.NoError(t, err)
err = yaml.Unmarshal(data, &engineConfig)
require.NoError(t, err)
require.False(t, engineConfig.Docker.Enabled)
},
},
{
name: "disableCriSocket",
values: map[string]string{
"collectors.containerEngine.engines.cri.enabled": "false",
},
expected: func(t *testing.T, config any) {
plugin := config.(map[string]interface{})
initConfig, ok := plugin["init_config"]
require.True(t, ok)
initConfigMap := initConfig.(map[string]interface{})
engines, ok := initConfigMap["engines"].(map[string]interface{})
require.True(t, ok)
var engineConfig ContainerEngineConfig
data, err := yaml.Marshal(engines)
require.NoError(t, err)
err = yaml.Unmarshal(data, &engineConfig)
require.NoError(t, err)
require.False(t, engineConfig.CRI.Enabled)
},
},
{
name: "changeContainerdSocket",
values: map[string]string{
"collectors.containerEngine.engines.containerd.enabled": "true",
"collectors.containerEngine.engines.containerd.sockets[0]": "/custom/containerd.sock",
},
expected: func(t *testing.T, config any) {
plugin := config.(map[string]interface{})
initConfig, ok := plugin["init_config"]
require.True(t, ok)
initConfigMap := initConfig.(map[string]interface{})
engines, ok := initConfigMap["engines"].(map[string]interface{})
require.True(t, ok)
var engineConfig ContainerEngineConfig
data, err := yaml.Marshal(engines)
require.NoError(t, err)
err = yaml.Unmarshal(data, &engineConfig)
require.NoError(t, err)
require.True(t, engineConfig.Containerd.Enabled)
require.Equal(t, []string{"/custom/containerd.sock"}, engineConfig.Containerd.Sockets)
},
},
{
name: "disableContainerdSocket",
values: map[string]string{
"collectors.containerEngine.engines.containerd.enabled": "false",
},
expected: func(t *testing.T, config any) {
plugin := config.(map[string]interface{})
initConfig, ok := plugin["init_config"]
require.True(t, ok)
initConfigMap := initConfig.(map[string]interface{})
engines, ok := initConfigMap["engines"].(map[string]interface{})
require.True(t, ok)
var engineConfig ContainerEngineConfig
data, err := yaml.Marshal(engines)
require.NoError(t, err)
err = yaml.Unmarshal(data, &engineConfig)
require.NoError(t, err)
require.False(t, engineConfig.Containerd.Enabled)
},
},
{
name: "defaultContainerEngineConfig",
values: map[string]string{},
expected: func(t *testing.T, config any) {
plugin := config.(map[string]interface{})
initConfig, ok := plugin["init_config"]
require.True(t, ok)
initConfigMap := initConfig.(map[string]interface{})
require.Equal(t, float64(100), initConfigMap["label_max_len"])
require.False(t, initConfigMap["with_size"].(bool))
hooks := initConfigMap["hooks"].([]interface{})
require.Len(t, hooks, 1)
require.Contains(t, hooks, "create")
engines := initConfigMap["engines"].(map[string]interface{})
var engineConfig ContainerEngineConfig
data, err := yaml.Marshal(engines)
require.NoError(t, err)
err = yaml.Unmarshal(data, &engineConfig)
require.NoError(t, err)
// Check default engine configurations
require.True(t, engineConfig.Docker.Enabled)
require.Equal(t, []string{"/var/run/docker.sock"}, engineConfig.Docker.Sockets)
require.True(t, engineConfig.Podman.Enabled)
require.Equal(t, []string{"/run/podman/podman.sock"}, engineConfig.Podman.Sockets)
require.True(t, engineConfig.Containerd.Enabled)
require.Equal(t, []string{"/run/host-containerd/containerd.sock"}, engineConfig.Containerd.Sockets)
require.True(t, engineConfig.CRI.Enabled)
require.Equal(t, []string{"/run/containerd/containerd.sock", "/run/crio/crio.sock", "/run/k3s/containerd/containerd.sock", "/run/host-containerd/containerd.sock"}, engineConfig.CRI.Sockets)
require.True(t, engineConfig.LXC.Enabled)
require.True(t, engineConfig.LibvirtLXC.Enabled)
require.True(t, engineConfig.BPM.Enabled)
},
},
{
name: "customContainerEngineConfig",
values: map[string]string{
"collectors.docker.enabled": "false",
"collectors.containerd.enabled": "false",
"collectors.crio.enabled": "false",
"collectors.containerEngine.enabled": "true",
"collectors.containerEngine.labelMaxLen": "200",
"collectors.containerEngine.withSize": "true",
"collectors.containerEngine.hooks[0]": "create",
"collectors.containerEngine.hooks[1]": "start",
"collectors.containerEngine.engines.docker.enabled": "false",
"collectors.containerEngine.engines.podman.enabled": "false",
"collectors.containerEngine.engines.containerd.sockets[0]": "/custom/containerd.sock",
"collectors.containerEngine.engines.cri.sockets[0]": "/custom/crio.sock",
"collectors.containerEngine.engines.lxc.enabled": "false",
"collectors.containerEngine.engines.libvirt_lxc.enabled": "false",
"collectors.containerEngine.engines.bpm.enabled": "false",
},
expected: func(t *testing.T, config any) {
plugin := config.(map[string]interface{})
initConfig, ok := plugin["init_config"]
require.True(t, ok)
initConfigMap := initConfig.(map[string]interface{})
require.Equal(t, float64(200), initConfigMap["label_max_len"])
require.True(t, initConfigMap["with_size"].(bool))
hooks := initConfigMap["hooks"].([]interface{})
require.Len(t, hooks, 2)
require.Contains(t, hooks, "create")
require.Contains(t, hooks, "start")
engines := initConfigMap["engines"].(map[string]interface{})
var engineConfig ContainerEngineConfig
data, err := yaml.Marshal(engines)
require.NoError(t, err)
err = yaml.Unmarshal(data, &engineConfig)
require.NoError(t, err)
// Check custom engine configurations
require.False(t, engineConfig.Docker.Enabled)
require.False(t, engineConfig.Podman.Enabled)
require.True(t, engineConfig.Containerd.Enabled)
require.Equal(t, []string{"/custom/containerd.sock"}, engineConfig.Containerd.Sockets)
require.True(t, engineConfig.CRI.Enabled)
require.Equal(t, []string{"/custom/crio.sock"}, engineConfig.CRI.Sockets)
require.False(t, engineConfig.LXC.Enabled)
require.False(t, engineConfig.LibvirtLXC.Enabled)
require.False(t, engineConfig.BPM.Enabled)
},
},
{
name: "customDockerEngineConfigInContainerEngine",
values: map[string]string{
"collectors.docker.enabled": "false",
"collectors.containerd.enabled": "false",
"collectors.crio.enabled": "false",
"collectors.containerEngine.enabled": "true",
"collectors.containerEngine.engines.docker.enabled": "false",
"collectors.containerEngine.engines.docker.sockets[0]": "/custom/docker.sock",
"collectors.containerEngine.engines.docker.sockets[1]": "/custom/docker.sock2",
},
expected: func(t *testing.T, config any) {
plugin := config.(map[string]interface{})
initConfig, ok := plugin["init_config"]
require.True(t, ok)
initConfigMap := initConfig.(map[string]interface{})
engines, ok := initConfigMap["engines"].(map[string]interface{})
require.True(t, ok)
var engineConfig ContainerEngineConfig
data, err := yaml.Marshal(engines)
require.NoError(t, err)
err = yaml.Unmarshal(data, &engineConfig)
require.NoError(t, err)
// Check Docker engine configuration
require.False(t, engineConfig.Docker.Enabled)
require.Equal(t, []string{"/custom/docker.sock", "/custom/docker.sock2"}, engineConfig.Docker.Sockets)
},
},
{
name: "customContainerdEngineConfigInContainerEngine",
values: map[string]string{
"collectors.docker.enabled": "false",
"collectors.containerd.enabled": "false",
"collectors.crio.enabled": "false",
"collectors.containerEngine.enabled": "true",
"collectors.containerEngine.engines.containerd.enabled": "false",
"collectors.containerEngine.engines.containerd.sockets[0]": "/custom/containerd.sock",
"collectors.containerEngine.engines.containerd.sockets[1]": "/custom/containerd.sock2",
},
expected: func(t *testing.T, config any) {
plugin := config.(map[string]interface{})
initConfig, ok := plugin["init_config"]
require.True(t, ok)
initConfigMap := initConfig.(map[string]interface{})
engines, ok := initConfigMap["engines"].(map[string]interface{})
require.True(t, ok)
var engineConfig ContainerEngineConfig
data, err := yaml.Marshal(engines)
require.NoError(t, err)
err = yaml.Unmarshal(data, &engineConfig)
require.NoError(t, err)
// Check Containerd engine configuration
require.False(t, engineConfig.Containerd.Enabled)
require.Equal(t, []string{"/custom/containerd.sock", "/custom/containerd.sock2"}, engineConfig.Containerd.Sockets)
},
},
{
name: "customPodmanEngineConfigInContainerEngine",
values: map[string]string{
"collectors.docker.enabled": "false",
"collectors.containerd.enabled": "false",
"collectors.crio.enabled": "false",
"collectors.containerEngine.enabled": "true",
"collectors.containerEngine.engines.podman.enabled": "true",
"collectors.containerEngine.engines.podman.sockets[0]": "/custom/podman.sock",
"collectors.containerEngine.engines.podman.sockets[1]": "/custom/podman.sock2",
},
expected: func(t *testing.T, config any) {
plugin := config.(map[string]interface{})
initConfig, ok := plugin["init_config"]
require.True(t, ok)
initConfigMap := initConfig.(map[string]interface{})
engines, ok := initConfigMap["engines"].(map[string]interface{})
require.True(t, ok)
var engineConfig ContainerEngineConfig
data, err := yaml.Marshal(engines)
require.NoError(t, err)
err = yaml.Unmarshal(data, &engineConfig)
require.NoError(t, err)
// Check Podman engine configuration
require.True(t, engineConfig.Podman.Enabled)
require.Equal(t, []string{"/custom/podman.sock", "/custom/podman.sock2"}, engineConfig.Podman.Sockets)
},
},
{
name: "customCRIEngineConfigInContainerEngine",
values: map[string]string{
"collectors.docker.enabled": "false",
"collectors.containerd.enabled": "false",
"collectors.crio.enabled": "false",
"collectors.containerEngine.enabled": "true",
"collectors.containerEngine.engines.cri.enabled": "true",
"collectors.containerEngine.engines.cri.sockets[0]": "/custom/cri.sock",
"collectors.containerEngine.engines.cri.sockets[1]": "/custom/cri.sock2",
},
expected: func(t *testing.T, config any) {
plugin := config.(map[string]interface{})
initConfig, ok := plugin["init_config"]
require.True(t, ok)
initConfigMap := initConfig.(map[string]interface{})
engines, ok := initConfigMap["engines"].(map[string]interface{})
require.True(t, ok)
var engineConfig ContainerEngineConfig
data, err := yaml.Marshal(engines)
require.NoError(t, err)
err = yaml.Unmarshal(data, &engineConfig)
require.NoError(t, err)
// Check CRI engine configuration
require.True(t, engineConfig.CRI.Enabled)
require.Equal(t, []string{"/custom/cri.sock", "/custom/cri.sock2"}, engineConfig.CRI.Sockets)
},
},
{
name: "customLXCEngineConfigInContainerEngine",
values: map[string]string{
"collectors.docker.enabled": "false",
"collectors.containerd.enabled": "false",
"collectors.crio.enabled": "false",
"collectors.containerEngine.enabled": "true",
"collectors.containerEngine.engines.lxc.enabled": "true",
},
expected: func(t *testing.T, config any) {
plugin := config.(map[string]interface{})
initConfig, ok := plugin["init_config"]
require.True(t, ok)
initConfigMap := initConfig.(map[string]interface{})
engines, ok := initConfigMap["engines"].(map[string]interface{})
require.True(t, ok)
var engineConfig ContainerEngineConfig
data, err := yaml.Marshal(engines)
require.NoError(t, err)
err = yaml.Unmarshal(data, &engineConfig)
require.NoError(t, err)
// Check LXC engine configuration
require.True(t, engineConfig.LXC.Enabled)
},
},
{
name: "customLibvirtLXCEngineConfigInContainerEngine",
values: map[string]string{
"collectors.docker.enabled": "false",
"collectors.containerd.enabled": "false",
"collectors.crio.enabled": "false",
"collectors.containerEngine.enabled": "true",
"collectors.containerEngine.engines.libvirt_lxc.enabled": "true",
},
expected: func(t *testing.T, config any) {
plugin := config.(map[string]interface{})
initConfig, ok := plugin["init_config"]
require.True(t, ok)
initConfigMap := initConfig.(map[string]interface{})
engines, ok := initConfigMap["engines"].(map[string]interface{})
require.True(t, ok)
var engineConfig ContainerEngineConfig
data, err := yaml.Marshal(engines)
require.NoError(t, err)
err = yaml.Unmarshal(data, &engineConfig)
require.NoError(t, err)
// Check LibvirtLXC engine configuration
require.True(t, engineConfig.LibvirtLXC.Enabled)
},
},
{
name: "customBPMEngineConfigInContainerEngine",
values: map[string]string{
"collectors.docker.enabled": "false",
"collectors.containerd.enabled": "false",
"collectors.crio.enabled": "false",
"collectors.containerEngine.enabled": "true",
"collectors.containerEngine.engines.bpm.enabled": "true",
},
expected: func(t *testing.T, config any) {
plugin := config.(map[string]interface{})
initConfig, ok := plugin["init_config"]
require.True(t, ok)
initConfigMap := initConfig.(map[string]interface{})
engines, ok := initConfigMap["engines"].(map[string]interface{})
require.True(t, ok)
var engineConfig ContainerEngineConfig
data, err := yaml.Marshal(engines)
require.NoError(t, err)
err = yaml.Unmarshal(data, &engineConfig)
require.NoError(t, err)
// Check BPM engine configuration
require.True(t, engineConfig.BPM.Enabled)
},
},
{
name: "allCollectorsDisabled",
values: map[string]string{
"collectors.docker.enabled": "false",
"collectors.containerd.enabled": "false",
"collectors.crio.enabled": "false",
"collectors.containerEngine.enabled": "false",
},
expected: func(t *testing.T, config any) {
// When config is nil, it means the plugin wasn't found in the configuration
require.Nil(t, config, "container plugin should not be present in configuration when all collectors are disabled")
// If somehow the config exists (which it shouldn't), verify there are no engine configurations
if config != nil {
plugin := config.(map[string]interface{})
initConfig, ok := plugin["init_config"]
if ok {
initConfigMap := initConfig.(map[string]interface{})
engines, ok := initConfigMap["engines"]
if ok {
engineMap := engines.(map[string]interface{})
require.Empty(t, engineMap, "engines configuration should be empty when all collectors are disabled")
}
}
}
},
},
{
name: "allCollectorsDisabledTopLevel",
values: map[string]string{
"collectors.enabled": "false",
},
expected: func(t *testing.T, config any) {
// When config is nil, it means the plugin wasn't found in the configuration
require.Nil(t, config, "container plugin should not be present in configuration when all collectors are disabled")
// If somehow the config exists (which it shouldn't), verify there are no engine configurations
if config != nil {
plugin := config.(map[string]interface{})
initConfig, ok := plugin["init_config"]
if ok {
initConfigMap := initConfig.(map[string]interface{})
engines, ok := initConfigMap["engines"]
if ok {
engineMap := engines.(map[string]interface{})
require.Empty(t, engineMap, "engines configuration should be empty when all collectors are disabled")
}
}
}
},
},
}
for _, testCase := range testCases {
testCase := testCase
t.Run(testCase.name, func(t *testing.T) {
t.Parallel()
options := &helm.Options{SetValues: testCase.values}
// Render the chart with the given options.
output := helm.RenderTemplate(t, options, helmChartPath, unit.ReleaseName, []string{"templates/configmap.yaml"})
var cm corev1.ConfigMap
// Unmarshal the output into a ConfigMap object.
helm.UnmarshalK8SYaml(t, output, &cm)
// Unmarshal the data field of the ConfigMap into a map.
var config map[string]interface{}
helm.UnmarshalK8SYaml(t, cm.Data["falco.yaml"], &config)
// Extract the container plugin configuration.
plugins, ok := config["plugins"]
require.True(t, ok, "checking if plugins section exists")
pluginsList := plugins.([]interface{})
found := false
// Get the container plugin configuration.
for _, plugin := range pluginsList {
if name, ok := plugin.(map[string]interface{})["name"]; ok && name == unit.ContainerPluginName {
testCase.expected(t, plugin)
found = true
}
}
if found {
// Check that the plugin has been added to the ones that are enabled.
loadPlugins := config["load_plugins"]
require.True(t, slices.Contains(loadPlugins.([]interface{}), unit.ContainerPluginName))
} else {
testCase.expected(t, nil)
loadPlugins := config["load_plugins"]
require.False(t, slices.Contains(loadPlugins.([]interface{}), unit.ContainerPluginName))
}
})
}
}
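The loop above that scans the untyped `plugins` list for the container plugin can be factored into a small helper. The following is a self-contained sketch; the `findPlugin` name and the plain-Go types are illustrative and not part of the chart test suite:

```go
package main

import "fmt"

// findPlugin scans an untyped plugins list (as decoded from a rendered
// falco.yaml) and returns the entry whose "name" field matches, or nil
// when the plugin is absent.
func findPlugin(plugins []interface{}, name string) map[string]interface{} {
	for _, p := range plugins {
		m, ok := p.(map[string]interface{})
		if !ok {
			continue
		}
		if n, ok := m["name"]; ok && n == name {
			return m
		}
	}
	return nil
}

func main() {
	plugins := []interface{}{
		map[string]interface{}{"name": "container", "library_path": "libcontainer.so"},
		map[string]interface{}{"name": "k8smeta"},
	}
	p := findPlugin(plugins, "container")
	fmt.Println(p["library_path"]) // prints "libcontainer.so"
}
```

Returning `nil` when the plugin is missing matches how the test calls `testCase.expected(t, nil)` for the disabled-collectors cases.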
func TestInvalidCollectorConfiguration(t *testing.T) {
t.Parallel()
helmChartPath, err := filepath.Abs(unit.ChartPath)
require.NoError(t, err)
testCases := []struct {
name string
values map[string]string
expectedErr string
}{
{
name: "dockerAndContainerEngine",
values: map[string]string{
"collectors.docker.enabled": "true",
"collectoars.containerd.enabled": "false",
"collectors.crio.enabled": "false",
"collectors.containerEngine.enabled": "true",
},
expectedErr: "You can not enable any of the [docker, containerd, crio] collectors configuration and the containerEngine configuration at the same time. Please use the containerEngine configuration since the old configurations are deprecated.",
},
{
name: "containerdAndContainerEngine",
values: map[string]string{
"collectors.docker.enabled": "false",
"collectors.containerd.enabled": "true",
"collectors.crio.enabled": "false",
"collectors.containerEngine.enabled": "true",
},
expectedErr: "You can not enable any of the [docker, containerd, crio] collectors configuration and the containerEngine configuration at the same time. Please use the containerEngine configuration since the old configurations are deprecated.",
},
{
name: "crioAndContainerEngine",
values: map[string]string{
"collectors.docker.enabled": "false",
"collectoars.containerd.enabled": "false",
"collectors.crio.enabled": "true",
"collectors.containerEngine.enabled": "true",
},
expectedErr: "You can not enable any of the [docker, containerd, crio] collectors configuration and the containerEngine configuration at the same time. Please use the containerEngine configuration since the old configurations are deprecated.",
},
}
for _, tc := range testCases {
tc := tc
t.Run(tc.name, func(t *testing.T) {
t.Parallel()
options := &helm.Options{
SetValues: tc.values,
}
// Attempt to render the template, expect an error
_, err := helm.RenderTemplateE(t, options, helmChartPath, unit.ReleaseName, []string{"templates/configmap.yaml"})
require.Error(t, err)
require.Contains(t, err.Error(), tc.expectedErr)
})
}
}
// Test that the helper does not overwrite user's configuration.
// And that the container reference is added to the configmap.
func TestFalcoctlRefs(t *testing.T) {
t.Parallel()
refShouldBeSet := func(t *testing.T, config any) {
// Get artifact configuration map.
configMap := config.(map[string]interface{})
artifactConfig := (configMap["artifact"]).(map[string]interface{})
// Test allowed types.
allowedTypes := artifactConfig["allowedTypes"]
require.Len(t, allowedTypes, 2)
require.True(t, slices.Contains(allowedTypes.([]interface{}), "plugin"))
require.True(t, slices.Contains(allowedTypes.([]interface{}), "rulesfile"))
// Test plugin reference.
refs := artifactConfig["install"].(map[string]interface{})["refs"].([]interface{})
require.Len(t, refs, 2)
require.True(t, slices.Contains(refs, "falco-rules:4"))
require.True(t, slices.Contains(refs, "ghcr.io/falcosecurity/plugins/plugin/container:0.3.5"))
}
refShouldNotBeSet := func(t *testing.T, config any) {
// Get artifact configuration map.
configMap := config.(map[string]interface{})
artifactConfig := (configMap["artifact"]).(map[string]interface{})
// Test allowed types.
allowedTypes := artifactConfig["allowedTypes"]
require.Len(t, allowedTypes, 2)
require.True(t, slices.Contains(allowedTypes.([]interface{}), "plugin"))
require.True(t, slices.Contains(allowedTypes.([]interface{}), "rulesfile"))
// Test plugin reference.
refs := artifactConfig["install"].(map[string]interface{})["refs"].([]interface{})
require.Len(t, refs, 1)
require.True(t, slices.Contains(refs, "falco-rules:4"))
require.False(t, slices.Contains(refs, "ghcr.io/falcosecurity/plugins/plugin/container:0.3.5"))
}
testCases := []struct {
name string
values map[string]string
expected func(t *testing.T, config any)
}{
{
"defaultValues",
nil,
refShouldBeSet,
},
{
"setPluginConfiguration",
map[string]string{
"collectors.enabled": "false",
},
refShouldNotBeSet,
},
{
"driver disabled",
map[string]string{
"driver.enabled": "false",
},
refShouldNotBeSet,
},
}
helmChartPath, err := filepath.Abs(unit.ChartPath)
require.NoError(t, err)
for _, testCase := range testCases {
testCase := testCase
t.Run(testCase.name, func(t *testing.T) {
t.Parallel()
options := &helm.Options{SetValues: testCase.values}
output := helm.RenderTemplate(t, options, helmChartPath, unit.ReleaseName, []string{"templates/falcoctl-configmap.yaml"})
var cm corev1.ConfigMap
helm.UnmarshalK8SYaml(t, output, &cm)
var config map[string]interface{}
helm.UnmarshalK8SYaml(t, cm.Data["falcoctl.yaml"], &config)
testCase.expected(t, config)
})
}
}
type ContainerEngineSocket struct {
Enabled bool `yaml:"enabled"`
Sockets []string `yaml:"sockets,omitempty"`
}
type ContainerEngineConfig struct {
Docker ContainerEngineSocket `yaml:"docker"`
Podman ContainerEngineSocket `yaml:"podman"`
Containerd ContainerEngineSocket `yaml:"containerd"`
CRI ContainerEngineSocket `yaml:"cri"`
LXC ContainerEngineSocket `yaml:"lxc"`
LibvirtLXC ContainerEngineSocket `yaml:"libvirt_lxc"`
BPM ContainerEngineSocket `yaml:"bpm"`
}


@@ -0,0 +1,310 @@
package containerPlugin
import (
"path/filepath"
"slices"
"testing"
"github.com/falcosecurity/charts/charts/falco/tests/unit"
"github.com/gruntwork-io/terratest/modules/helm"
"github.com/stretchr/testify/require"
appsv1 "k8s.io/api/apps/v1"
corev1 "k8s.io/api/core/v1"
)
func TestContainerPluginVolumeMounts(t *testing.T) {
t.Parallel()
helmChartPath, err := filepath.Abs(unit.ChartPath)
require.NoError(t, err)
testCases := []struct {
name string
values map[string]string
expected func(t *testing.T, volumeMounts []corev1.VolumeMount)
}{
{
name: "defaultValues",
values: nil,
expected: func(t *testing.T, volumeMounts []corev1.VolumeMount) {
require.Len(t, volumeMounts, 6)
require.Equal(t, "container-engine-socket-0", volumeMounts[0].Name)
require.Equal(t, "/host/var/run/docker.sock", volumeMounts[0].MountPath)
require.Equal(t, "container-engine-socket-1", volumeMounts[1].Name)
require.Equal(t, "/host/run/podman/podman.sock", volumeMounts[1].MountPath)
require.Equal(t, "container-engine-socket-2", volumeMounts[2].Name)
require.Equal(t, "/host/run/host-containerd/containerd.sock", volumeMounts[2].MountPath)
require.Equal(t, "container-engine-socket-3", volumeMounts[3].Name)
require.Equal(t, "/host/run/containerd/containerd.sock", volumeMounts[3].MountPath)
require.Equal(t, "container-engine-socket-4", volumeMounts[4].Name)
require.Equal(t, "/host/run/crio/crio.sock", volumeMounts[4].MountPath)
require.Equal(t, "container-engine-socket-5", volumeMounts[5].Name)
require.Equal(t, "/host/run/k3s/containerd/containerd.sock", volumeMounts[5].MountPath)
},
},
{
name: "defaultDockerVolumeMount",
values: map[string]string{
"collectors.containerEngine.engines.docker.enabled": "true",
"collectors.containerEngine.engines.containerd.enabled": "false",
"collectors.containerEngine.engines.cri.enabled": "false",
"collectors.containerEngine.engines.podman.enabled": "false",
},
expected: func(t *testing.T, volumeMounts []corev1.VolumeMount) {
require.Len(t, volumeMounts, 1)
require.Equal(t, "container-engine-socket-0", volumeMounts[0].Name)
require.Equal(t, "/host/var/run/docker.sock", volumeMounts[0].MountPath)
},
},
{
name: "customDockerSocket",
values: map[string]string{
"collectors.containerEngine.engines.docker.enabled": "true",
"collectors.containerEngine.engines.docker.sockets[0]": "/custom/docker.sock",
"collectors.containerEngine.engines.containerd.enabled": "false",
"collectors.containerEngine.engines.cri.enabled": "false",
"collectors.containerEngine.engines.podman.enabled": "false",
},
expected: func(t *testing.T, volumeMounts []corev1.VolumeMount) {
require.Len(t, volumeMounts, 1)
require.Equal(t, "container-engine-socket-0", volumeMounts[0].Name)
require.Equal(t, "/host/custom/docker.sock", volumeMounts[0].MountPath)
},
},
{
name: "defaultCriVolumeMount",
values: map[string]string{
"collectors.containerEngine.engines.docker.enabled": "false",
"collectors.containerEngine.engines.containerd.enabled": "false",
"collectors.containerEngine.engines.cri.enabled": "true",
"collectors.containerEngine.engines.podman.enabled": "false",
},
expected: func(t *testing.T, volumeMounts []corev1.VolumeMount) {
require.Len(t, volumeMounts, 4)
require.Equal(t, "container-engine-socket-0", volumeMounts[0].Name)
require.Equal(t, "/host/run/containerd/containerd.sock", volumeMounts[0].MountPath)
require.Equal(t, "container-engine-socket-1", volumeMounts[1].Name)
require.Equal(t, "/host/run/crio/crio.sock", volumeMounts[1].MountPath)
require.Equal(t, "container-engine-socket-2", volumeMounts[2].Name)
require.Equal(t, "/host/run/k3s/containerd/containerd.sock", volumeMounts[2].MountPath)
require.Equal(t, "container-engine-socket-3", volumeMounts[3].Name)
require.Equal(t, "/host/run/host-containerd/containerd.sock", volumeMounts[3].MountPath)
},
},
{
name: "customCriSocket",
values: map[string]string{
"collectors.containerEngine.engines.cri.enabled": "true",
"collectors.containerEngine.engines.cri.sockets[0]": "/custom/crio.sock",
"collectors.containerEngine.engines.containerd.enabled": "false",
"collectors.containerEngine.engines.docker.enabled": "false",
"collectors.containerEngine.engines.podman.enabled": "false",
},
expected: func(t *testing.T, volumeMounts []corev1.VolumeMount) {
require.Len(t, volumeMounts, 1)
require.Equal(t, "container-engine-socket-0", volumeMounts[0].Name)
require.Equal(t, "/host/custom/crio.sock", volumeMounts[0].MountPath)
},
},
{
name: "defaultContainerdVolumeMount",
values: map[string]string{
"collectors.containerEngine.engines.docker.enabled": "false",
"collectors.containerEngine.engines.containerd.enabled": "true",
"collectors.containerEngine.engines.cri.enabled": "false",
"collectors.containerEngine.engines.podman.enabled": "false",
},
expected: func(t *testing.T, volumeMounts []corev1.VolumeMount) {
require.Len(t, volumeMounts, 1)
require.Equal(t, "container-engine-socket-0", volumeMounts[0].Name)
require.Equal(t, "/host/run/host-containerd/containerd.sock", volumeMounts[0].MountPath)
},
},
{
name: "customContainerdSocket",
values: map[string]string{
"collectors.containerEngine.engines.containerd.enabled": "true",
"collectors.containerEngine.engines.containerd.sockets[0]": "/custom/containerd.sock",
"collectors.containerEngine.engines.cri.enabled": "false",
"collectors.containerEngine.engines.docker.enabled": "false",
"collectors.containerEngine.engines.podman.enabled": "false",
},
expected: func(t *testing.T, volumeMounts []corev1.VolumeMount) {
require.Len(t, volumeMounts, 1)
require.Equal(t, "container-engine-socket-0", volumeMounts[0].Name)
require.Equal(t, "/host/custom/containerd.sock", volumeMounts[0].MountPath)
},
},
{
name: "ContainerEnginesDefaultValues",
values: map[string]string{},
expected: func(t *testing.T, volumeMounts []corev1.VolumeMount) {
require.Len(t, volumeMounts, 6)
dockerV := findVolumeMount("container-engine-socket-0", volumeMounts)
require.NotNil(t, dockerV)
require.Equal(t, "/host/var/run/docker.sock", dockerV.MountPath)
podmanV := findVolumeMount("container-engine-socket-1", volumeMounts)
require.NotNil(t, podmanV)
require.Equal(t, "/host/run/podman/podman.sock", podmanV.MountPath)
containerdV := findVolumeMount("container-engine-socket-2", volumeMounts)
require.NotNil(t, containerdV)
require.Equal(t, "/host/run/host-containerd/containerd.sock", containerdV.MountPath)
criV0 := findVolumeMount("container-engine-socket-3", volumeMounts)
require.NotNil(t, criV0)
require.Equal(t, "/host/run/containerd/containerd.sock", criV0.MountPath)
criV1 := findVolumeMount("container-engine-socket-4", volumeMounts)
require.NotNil(t, criV1)
require.Equal(t, "/host/run/crio/crio.sock", criV1.MountPath)
criV2 := findVolumeMount("container-engine-socket-5", volumeMounts)
require.NotNil(t, criV2)
require.Equal(t, "/host/run/k3s/containerd/containerd.sock", criV2.MountPath)
},
},
{
name: "ContainerEnginesDockerWithMultipleSockets",
values: map[string]string{
"collectors.containerEngine.engines.docker.enabled": "true",
"collectors.containerEngine.engines.docker.sockets[0]": "/var/run/docker.sock",
"collectors.containerEngine.engines.docker.sockets[1]": "/custom/docker.sock",
"collectors.containerEngine.engines.containerd.enabled": "false",
"collectors.containerEngine.engines.cri.enabled": "false",
"collectors.containerEngine.engines.podman.enabled": "false",
},
expected: func(t *testing.T, volumeMounts []corev1.VolumeMount) {
require.Len(t, volumeMounts, 2)
dockerV0 := findVolumeMount("container-engine-socket-0", volumeMounts)
require.NotNil(t, dockerV0)
require.Equal(t, "/host/var/run/docker.sock", dockerV0.MountPath)
dockerV1 := findVolumeMount("container-engine-socket-1", volumeMounts)
require.NotNil(t, dockerV1)
require.Equal(t, "/host/custom/docker.sock", dockerV1.MountPath)
},
},
{
name: "ContainerEnginesCrioWithMultipleSockets",
values: map[string]string{
"collectors.containerEngine.engines.docker.enabled": "false",
"collectors.containerEngine.engines.containerd.enabled": "false",
"collectors.containerEngine.engines.cri.enabled": "true",
"collectors.containerEngine.engines.cri.sockets[0]": "/run/crio/crio.sock",
"collectors.containerEngine.engines.cri.sockets[1]": "/custom/crio.sock",
"collectors.containerEngine.engines.podman.enabled": "false",
},
expected: func(t *testing.T, volumeMounts []corev1.VolumeMount) {
require.Len(t, volumeMounts, 2)
crioV0 := findVolumeMount("container-engine-socket-0", volumeMounts)
require.NotNil(t, crioV0)
require.Equal(t, "/host/run/crio/crio.sock", crioV0.MountPath)
crioV1 := findVolumeMount("container-engine-socket-1", volumeMounts)
require.NotNil(t, crioV1)
require.Equal(t, "/host/custom/crio.sock", crioV1.MountPath)
},
},
{
name: "noVolumeMountsWhenCollectorsDisabled",
values: map[string]string{
"collectors.enabled": "false",
},
expected: func(t *testing.T, volumeMounts []corev1.VolumeMount) {
require.Len(t, volumeMounts, 0)
},
},
{
name: "noVolumeMountsWhenDriverDisabled",
values: map[string]string{
"driver.enabled": "false",
},
expected: func(t *testing.T, volumeMounts []corev1.VolumeMount) {
require.Len(t, volumeMounts, 0)
},
},
}
for _, tc := range testCases {
tc := tc
t.Run(tc.name, func(t *testing.T) {
t.Parallel()
options := &helm.Options{
SetValues: tc.values,
}
// Render the template
output := helm.RenderTemplate(t, options, helmChartPath, unit.ReleaseName, []string{"templates/daemonset.yaml"})
// Parse the YAML output
var daemonset appsv1.DaemonSet
helm.UnmarshalK8SYaml(t, output, &daemonset)
// Find volumeMounts in the falco container
var pluginVolumeMounts []corev1.VolumeMount
for _, container := range daemonset.Spec.Template.Spec.Containers {
if container.Name == "falco" {
for _, volumeMount := range container.VolumeMounts {
if slices.Contains(volumeNames, volumeMount.Name) {
pluginVolumeMounts = append(pluginVolumeMounts, volumeMount)
}
}
}
}
// Run the test case's assertions
tc.expected(t, pluginVolumeMounts)
})
}
}
func TestInvalidVolumeMountConfiguration(t *testing.T) {
t.Parallel()
helmChartPath, err := filepath.Abs(unit.ChartPath)
require.NoError(t, err)
testCases := []struct {
name string
values map[string]string
expectedErr string
}{
{
name: "bothOldAndNewConfigEnabled",
values: map[string]string{
"collectors.docker.enabled": "true",
"collectors.containerEngine.enabled": "true",
},
expectedErr: "You can not enable any of the [docker, containerd, crio] collectors configuration and the containerEngine configuration at the same time",
},
}
for _, tc := range testCases {
tc := tc
t.Run(tc.name, func(t *testing.T) {
t.Parallel()
options := &helm.Options{
SetValues: tc.values,
}
// Attempt to render the template, expect an error
_, err := helm.RenderTemplateE(t, options, helmChartPath, unit.ReleaseName, []string{"templates/daemonset.yaml"})
require.Error(t, err)
require.Contains(t, err.Error(), tc.expectedErr)
})
}
}
func findVolumeMount(name string, volumeMounts []corev1.VolumeMount) *corev1.VolumeMount {
for _, v := range volumeMounts {
if v.Name == name {
return &v
}
}
return nil
}
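The filtering loop in `TestContainerPluginVolumeMounts` keeps only the mounts whose names appear in the `volumeNames` allow-list (defined elsewhere in this package), and `findVolumeMount` then looks entries up by name. A self-contained sketch of both steps follows; the local `VolumeMount` type and the name-prefix filter are stand-ins for the real `corev1.VolumeMount` and the assumed `volumeNames` list:

```go
package main

import (
	"fmt"
	"strings"
)

// Minimal stand-in for corev1.VolumeMount: only the fields the tests touch.
type VolumeMount struct {
	Name      string
	MountPath string
}

// findVolumeMount mirrors the test helper: return a pointer to the first
// mount with the given name, or nil when it is absent.
func findVolumeMount(name string, mounts []VolumeMount) *VolumeMount {
	for i := range mounts {
		if mounts[i].Name == name {
			return &mounts[i]
		}
	}
	return nil
}

// filterMounts keeps only the plugin-owned mounts; a shared name prefix
// stands in here for the package's volumeNames allow-list.
func filterMounts(mounts []VolumeMount, prefix string) []VolumeMount {
	var out []VolumeMount
	for _, m := range mounts {
		if strings.HasPrefix(m.Name, prefix) {
			out = append(out, m)
		}
	}
	return out
}

func main() {
	all := []VolumeMount{
		{Name: "container-engine-socket-0", MountPath: "/host/var/run/docker.sock"},
		{Name: "config-volume", MountPath: "/etc/falco"},
	}
	plugin := filterMounts(all, "container-engine-socket-")
	fmt.Println(len(plugin), findVolumeMount("container-engine-socket-0", plugin).MountPath)
	// prints "1 /host/var/run/docker.sock"
}
```

Note that `findVolumeMount` returns `&mounts[i]` rather than the address of a range variable, which keeps the pointer valid on every Go version.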


@@ -0,0 +1,373 @@
package containerPlugin
import (
"path/filepath"
"slices"
"testing"
appsv1 "k8s.io/api/apps/v1"
corev1 "k8s.io/api/core/v1"
"github.com/falcosecurity/charts/charts/falco/tests/unit"
"github.com/gruntwork-io/terratest/modules/helm"
"github.com/stretchr/testify/require"
)
func TestContainerPluginVolumes(t *testing.T) {
t.Parallel()
helmChartPath, err := filepath.Abs(unit.ChartPath)
require.NoError(t, err)
testCases := []struct {
name string
values map[string]string
expected func(t *testing.T, volumes []corev1.Volume)
}{
{
name: "defaultValues",
values: nil,
expected: func(t *testing.T, volumes []corev1.Volume) {
require.Len(t, volumes, 6)
require.Equal(t, "container-engine-socket-0", volumes[0].Name)
require.Equal(t, "/var/run/docker.sock", volumes[0].HostPath.Path)
require.Equal(t, "container-engine-socket-1", volumes[1].Name)
require.Equal(t, "/run/podman/podman.sock", volumes[1].HostPath.Path)
require.Equal(t, "container-engine-socket-2", volumes[2].Name)
require.Equal(t, "/run/host-containerd/containerd.sock", volumes[2].HostPath.Path)
require.Equal(t, "container-engine-socket-3", volumes[3].Name)
require.Equal(t, "/run/containerd/containerd.sock", volumes[3].HostPath.Path)
require.Equal(t, "container-engine-socket-4", volumes[4].Name)
require.Equal(t, "/run/crio/crio.sock", volumes[4].HostPath.Path)
require.Equal(t, "container-engine-socket-5", volumes[5].Name)
require.Equal(t, "/run/k3s/containerd/containerd.sock", volumes[5].HostPath.Path)
},
},
{
name: "defaultDockerVolume",
values: map[string]string{
"collectors.containerEngine.engines.docker.enabled": "true",
"collectors.containerEngine.engines.containerd.enabled": "false",
"collectors.containerEngine.engines.cri.enabled": "false",
"collectors.containerEngine.engines.podman.enabled": "false",
},
expected: func(t *testing.T, volumes []corev1.Volume) {
require.Len(t, volumes, 1)
require.Equal(t, "container-engine-socket-0", volumes[0].Name)
require.Equal(t, "/var/run/docker.sock", volumes[0].HostPath.Path)
},
},
{
name: "customDockerSocket",
values: map[string]string{
"collectors.containerEngine.engines.docker.enabled": "true",
"collectors.containerEngine.engines.docker.sockets[0]": "/custom/docker.sock",
"collectors.containerEngine.engines.containerd.enabled": "false",
"collectors.containerEngine.engines.cri.enabled": "false",
"collectors.containerEngine.engines.podman.enabled": "false",
},
expected: func(t *testing.T, volumes []corev1.Volume) {
require.Len(t, volumes, 1)
require.Equal(t, "container-engine-socket-0", volumes[0].Name)
require.Equal(t, "/custom/docker.sock", volumes[0].HostPath.Path)
},
},
{
name: "defaultCriVolume",
values: map[string]string{
"collectors.containerEngine.engines.docker.enabled": "false",
"collectors.containerEngine.engines.containerd.enabled": "false",
"collectors.containerEngine.engines.cri.enabled": "true",
"collectors.containerEngine.engines.podman.enabled": "false",
},
expected: func(t *testing.T, volumes []corev1.Volume) {
require.Len(t, volumes, 4)
require.Equal(t, "container-engine-socket-0", volumes[0].Name)
require.Equal(t, "/run/containerd/containerd.sock", volumes[0].HostPath.Path)
require.Equal(t, "container-engine-socket-1", volumes[1].Name)
require.Equal(t, "/run/crio/crio.sock", volumes[1].HostPath.Path)
require.Equal(t, "container-engine-socket-2", volumes[2].Name)
require.Equal(t, "/run/k3s/containerd/containerd.sock", volumes[2].HostPath.Path)
require.Equal(t, "container-engine-socket-3", volumes[3].Name)
require.Equal(t, "/run/host-containerd/containerd.sock", volumes[3].HostPath.Path)
},
},
{
name: "customCrioSocket",
values: map[string]string{
"collectors.containerEngine.engines.docker.enabled": "false",
"collectors.containerEngine.engines.cri.enabled": "true",
"collectors.containerEngine.engines.cri.sockets[0]": "/custom/crio.sock",
"collectors.containerEngine.engines.containerd.enabled": "false",
"collectors.containerEngine.engines.podman.enabled": "false",
},
expected: func(t *testing.T, volumes []corev1.Volume) {
require.Len(t, volumes, 1)
require.Equal(t, "container-engine-socket-0", volumes[0].Name)
require.Equal(t, "/custom/crio.sock", volumes[0].HostPath.Path)
},
},
{
name: "defaultContainerdVolume",
values: map[string]string{
"collectors.containerEngine.engines.docker.enabled": "false",
"collectors.containerEngine.engines.containerd.enabled": "true",
"collectors.containerEngine.engines.cri.enabled": "false",
"collectors.containerEngine.engines.podman.enabled": "false",
},
expected: func(t *testing.T, volumes []corev1.Volume) {
require.Len(t, volumes, 1)
require.Equal(t, "container-engine-socket-0", volumes[0].Name)
require.Equal(t, "/run/host-containerd/containerd.sock", volumes[0].HostPath.Path)
},
},
{
name: "customContainerdSocket",
values: map[string]string{
"collectors.containerEngine.engines.docker.enabled": "false",
"collectors.containerEngine.engines.containerd.enabled": "true",
"collectors.containerEngine.engines.containerd.sockets[0]": "/custom/containerd.sock",
"collectors.containerEngine.engines.cri.enabled": "false",
"collectors.containerEngine.engines.podman.enabled": "false",
},
expected: func(t *testing.T, volumes []corev1.Volume) {
require.Len(t, volumes, 1)
require.Equal(t, "container-engine-socket-0", volumes[0].Name)
require.Equal(t, "/custom/containerd.sock", volumes[0].HostPath.Path)
},
},
{
name: "ContainerEnginesDefaultValues",
values: map[string]string{
"collectors.docker.enabled": "false",
"collectors.containerd.enabled": "false",
"collectors.crio.enabled": "false",
"collectors.containerEngine.enabled": "true",
},
expected: func(t *testing.T, volumes []corev1.Volume) {
require.Len(t, volumes, 6)
require.Equal(t, "container-engine-socket-0", volumes[0].Name)
require.Equal(t, "/var/run/docker.sock", volumes[0].HostPath.Path)
require.Equal(t, "container-engine-socket-1", volumes[1].Name)
require.Equal(t, "/run/podman/podman.sock", volumes[1].HostPath.Path)
require.Equal(t, "container-engine-socket-2", volumes[2].Name)
require.Equal(t, "/run/host-containerd/containerd.sock", volumes[2].HostPath.Path)
require.Equal(t, "container-engine-socket-3", volumes[3].Name)
require.Equal(t, "/run/containerd/containerd.sock", volumes[3].HostPath.Path)
require.Equal(t, "container-engine-socket-4", volumes[4].Name)
require.Equal(t, "/run/crio/crio.sock", volumes[4].HostPath.Path)
require.Equal(t, "container-engine-socket-5", volumes[5].Name)
require.Equal(t, "/run/k3s/containerd/containerd.sock", volumes[5].HostPath.Path)
},
},
{
name: "ContainerEnginesDockerWithMultipleSockets",
values: map[string]string{
"collectors.docker.enabled": "false",
"collectors.containerd.enabled": "false",
"collectors.crio.enabled": "false",
"collectors.containerEngine.enabled": "true",
"collectors.containerEngine.engines.docker.enabled": "true",
"collectors.containerEngine.engines.docker.sockets[0]": "/var/run/docker.sock",
"collectors.containerEngine.engines.docker.sockets[1]": "/custom/docker.sock",
"collectors.containerEngine.engines.containerd.enabled": "false",
"collectors.containerEngine.engines.cri.enabled": "false",
"collectors.containerEngine.engines.podman.enabled": "false",
},
expected: func(t *testing.T, volumes []corev1.Volume) {
require.Len(t, volumes, 2)
require.Equal(t, "container-engine-socket-0", volumes[0].Name)
require.Equal(t, "/var/run/docker.sock", volumes[0].HostPath.Path)
require.Equal(t, "container-engine-socket-1", volumes[1].Name)
require.Equal(t, "/custom/docker.sock", volumes[1].HostPath.Path)
},
},
{
name: "ContainerEnginesCrioWithMultipleSockets",
values: map[string]string{
"collectors.docker.enabled": "false",
"collectors.containerd.enabled": "false",
"collectors.crio.enabled": "false",
"collectors.containerEngine.enabled": "true",
"collectors.containerEngine.engines.docker.enabled": "false",
"collectors.containerEngine.engines.containerd.enabled": "false",
"collectors.containerEngine.engines.cri.enabled": "true",
"collectors.containerEngine.engines.cri.sockets[0]": "/run/crio/crio.sock",
"collectors.containerEngine.engines.cri.sockets[1]": "/custom/crio.sock",
"collectors.containerEngine.engines.podman.enabled": "false",
},
expected: func(t *testing.T, volumes []corev1.Volume) {
require.Len(t, volumes, 2)
require.Equal(t, "container-engine-socket-0", volumes[0].Name)
require.Equal(t, "/run/crio/crio.sock", volumes[0].HostPath.Path)
require.Equal(t, "container-engine-socket-1", volumes[1].Name)
require.Equal(t, "/custom/crio.sock", volumes[1].HostPath.Path)
},
},
{
name: "ContainerEnginesPodmanWithMultipleSockets",
values: map[string]string{
"collectors.docker.enabled": "false",
"collectors.containerd.enabled": "false",
"collectors.crio.enabled": "false",
"collectors.containerEngine.enabled": "true",
"collectors.containerEngine.engines.docker.enabled": "false",
"collectors.containerEngine.engines.containerd.enabled": "false",
"collectors.containerEngine.engines.cri.enabled": "false",
"collectors.containerEngine.engines.podman.enabled": "true",
"collectors.containerEngine.engines.podman.sockets[0]": "/run/podman/podman.sock",
"collectors.containerEngine.engines.podman.sockets[1]": "/custom/podman.sock",
},
expected: func(t *testing.T, volumes []corev1.Volume) {
require.Len(t, volumes, 2)
require.Equal(t, "container-engine-socket-0", volumes[0].Name)
require.Equal(t, "/run/podman/podman.sock", volumes[0].HostPath.Path)
require.Equal(t, "container-engine-socket-1", volumes[1].Name)
require.Equal(t, "/custom/podman.sock", volumes[1].HostPath.Path)
},
},
{
name: "ContainerEnginesContainerdWithMultipleSockets",
values: map[string]string{
"collectors.docker.enabled": "false",
"collectors.containerd.enabled": "false",
"collectors.crio.enabled": "false",
"collectors.containerEngine.enabled": "true",
"collectors.containerEngine.engines.docker.enabled": "false",
"collectors.containerEngine.engines.containerd.enabled": "true",
"collectors.containerEngine.engines.containerd.sockets[0]": "/run/containerd/containerd.sock",
"collectors.containerEngine.engines.containerd.sockets[1]": "/custom/containerd.sock",
"collectors.containerEngine.engines.cri.enabled": "false",
"collectors.containerEngine.engines.podman.enabled": "false",
},
expected: func(t *testing.T, volumes []corev1.Volume) {
require.Len(t, volumes, 2)
require.Equal(t, "container-engine-socket-0", volumes[0].Name)
require.Equal(t, "/run/containerd/containerd.sock", volumes[0].HostPath.Path)
require.Equal(t, "container-engine-socket-1", volumes[1].Name)
require.Equal(t, "/custom/containerd.sock", volumes[1].HostPath.Path)
},
},
{
name: "ContainerEnginesMultipleWithCustomSockets",
values: map[string]string{
"collectors.docker.enabled": "false",
"collectors.containerd.enabled": "false",
"collectors.crio.enabled": "false",
"collectors.containerEngine.enabled": "true",
"collectors.containerEngine.engines.docker.enabled": "true",
"collectors.containerEngine.engines.docker.sockets[0]": "/custom/docker/socket.sock",
"collectors.containerEngine.engines.containerd.enabled": "true",
"collectors.containerEngine.engines.cri.enabled": "true",
"collectors.containerEngine.engines.cri.sockets[0]": "/var/custom/crio.sock",
"collectors.containerEngine.engines.podman.enabled": "true",
"collectors.containerEngine.engines.podman.sockets[0]": "/run/podman/podman.sock",
},
expected: func(t *testing.T, volumes []corev1.Volume) {
require.Len(t, volumes, 4)
require.Equal(t, "container-engine-socket-0", volumes[0].Name)
require.Equal(t, "/custom/docker/socket.sock", volumes[0].HostPath.Path)
require.Equal(t, "container-engine-socket-1", volumes[1].Name)
require.Equal(t, "/run/podman/podman.sock", volumes[1].HostPath.Path)
require.Equal(t, "container-engine-socket-2", volumes[2].Name)
require.Equal(t, "/run/host-containerd/containerd.sock", volumes[2].HostPath.Path)
require.Equal(t, "container-engine-socket-3", volumes[3].Name)
require.Equal(t, "/var/custom/crio.sock", volumes[3].HostPath.Path)
},
},
{
name: "noVolumesWhenCollectorsDisabled",
values: map[string]string{
"collectors.enabled": "false",
},
expected: func(t *testing.T, volumes []corev1.Volume) {
require.Len(t, volumes, 0)
},
},
{
name: "noVolumesWhenDriverDisabled",
values: map[string]string{
"driver.enabled": "false",
},
expected: func(t *testing.T, volumes []corev1.Volume) {
require.Len(t, volumes, 0)
},
},
}
for _, tc := range testCases {
tc := tc
t.Run(tc.name, func(t *testing.T) {
t.Parallel()
options := &helm.Options{
SetValues: tc.values,
}
// Render the template
output := helm.RenderTemplate(t, options, helmChartPath, unit.ReleaseName, []string{"templates/daemonset.yaml"})
// Parse the YAML output
var daemonset appsv1.DaemonSet
helm.UnmarshalK8SYaml(t, output, &daemonset)
// Find volumes that match our container plugin pattern
var pluginVolumes []corev1.Volume
for _, volume := range daemonset.Spec.Template.Spec.Volumes {
// Check if the volume is for container sockets
if volume.HostPath != nil && slices.Contains(volumeNames, volume.Name) {
pluginVolumes = append(pluginVolumes, volume)
}
}
// Run the test case's assertions
tc.expected(t, pluginVolumes)
})
}
}
func TestInvalidVolumeConfiguration(t *testing.T) {
t.Parallel()
helmChartPath, err := filepath.Abs(unit.ChartPath)
require.NoError(t, err)
testCases := []struct {
name string
values map[string]string
expectedErr string
}{
{
name: "bothOldAndNewConfigEnabled",
values: map[string]string{
"collectors.docker.enabled": "true",
"collectors.containerEngine.enabled": "true",
},
expectedErr: "You can not enable any of the [docker, containerd, crio] collectors configuration and the containerEngine configuration at the same time",
},
}
for _, tc := range testCases {
tc := tc
t.Run(tc.name, func(t *testing.T) {
t.Parallel()
options := &helm.Options{
SetValues: tc.values,
}
// Attempt to render the template, expect an error
_, err := helm.RenderTemplateE(t, options, helmChartPath, unit.ReleaseName, []string{"templates/daemonset.yaml"})
require.Error(t, err)
require.Contains(t, err.Error(), tc.expectedErr)
})
}
}
func findVolume(name string, volumes []corev1.Volume) *corev1.Volume {
for _, v := range volumes {
if v.Name == name {
return &v
}
}
return nil
}
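The dotted `SetValues` keys exercised in these cases map one-to-one onto nested chart values. As an illustration, the multi-socket containerd case corresponds to a values file like the following (a sketch of just the override, not the chart's full defaults):

```yaml
collectors:
  docker:
    enabled: false
  containerd:
    enabled: false
  crio:
    enabled: false
  containerEngine:
    enabled: true
    engines:
      containerd:
        enabled: true
        sockets:
          - /run/containerd/containerd.sock
          - /custom/containerd.sock
```

Passing this via `--values` should produce the same two `container-engine-socket-*` hostPath volumes that the `ContainerEnginesContainerdWithMultipleSockets` case asserts on.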


@ -0,0 +1,17 @@
// SPDX-License-Identifier: Apache-2.0
// Copyright 2024 The Falco Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// Package unit contains the unit tests for the Falco chart.
package unit


@ -0,0 +1,334 @@
// SPDX-License-Identifier: Apache-2.0
// Copyright 2024 The Falco Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package falcoTemplates
import (
"fmt"
"github.com/falcosecurity/charts/charts/falco/tests/unit"
"path/filepath"
"strings"
"testing"
"github.com/gruntwork-io/terratest/modules/helm"
"github.com/stretchr/testify/require"
corev1 "k8s.io/api/core/v1"
)
func TestDriverConfigInFalcoConfig(t *testing.T) {
t.Parallel()
helmChartPath, err := filepath.Abs(unit.ChartPath)
require.NoError(t, err)
testCases := []struct {
name string
values map[string]string
expected func(t *testing.T, config any)
}{
{
"defaultValues",
nil,
func(t *testing.T, config any) {
require.Len(t, config, 4, "should have four items")
kind, bufSizePreset, dropFailedExit, err := getKmodConfig(config)
require.NoError(t, err)
require.Equal(t, "modern_ebpf", kind)
require.Equal(t, float64(4), bufSizePreset)
require.False(t, dropFailedExit)
},
},
{
"kind=kmod",
map[string]string{
"driver.kind": "kmod",
},
func(t *testing.T, config any) {
require.Len(t, config, 2, "should have only two items")
kind, bufSizePreset, dropFailedExit, err := getKmodConfig(config)
require.NoError(t, err)
require.Equal(t, "kmod", kind)
require.Equal(t, float64(4), bufSizePreset)
require.False(t, dropFailedExit)
},
},
{
"kind=module(alias)",
map[string]string{
"driver.kind": "module",
},
func(t *testing.T, config any) {
require.Len(t, config, 2, "should have only two items")
kind, bufSizePreset, dropFailedExit, err := getKmodConfig(config)
require.NoError(t, err)
require.Equal(t, "kmod", kind)
require.Equal(t, float64(4), bufSizePreset)
require.False(t, dropFailedExit)
},
},
{
"kmod=config",
map[string]string{
"driver.kmod.bufSizePreset": "6",
"driver.kmod.dropFailedExit": "true",
"driver.kind": "module",
},
func(t *testing.T, config any) {
require.Len(t, config, 2, "should have only two items")
kind, bufSizePreset, dropFailedExit, err := getKmodConfig(config)
require.NoError(t, err)
require.Equal(t, "kmod", kind)
require.Equal(t, float64(6), bufSizePreset)
require.True(t, dropFailedExit)
},
},
{
"ebpf=config",
map[string]string{
"driver.kind": "ebpf",
"driver.ebpf.bufSizePreset": "6",
"driver.ebpf.dropFailedExit": "true",
"driver.ebpf.path": "testing/Path/ebpf",
},
func(t *testing.T, config any) {
require.Len(t, config, 2, "should have only two items")
kind, path, bufSizePreset, dropFailedExit, err := getEbpfConfig(config)
require.NoError(t, err)
require.Equal(t, "ebpf", kind)
require.Equal(t, "testing/Path/ebpf", path)
require.Equal(t, float64(6), bufSizePreset)
require.True(t, dropFailedExit)
},
},
{
"kind=ebpf",
map[string]string{
"driver.kind": "ebpf",
},
func(t *testing.T, config any) {
require.Len(t, config, 2, "should have only two items")
kind, path, bufSizePreset, dropFailedExit, err := getEbpfConfig(config)
require.NoError(t, err)
require.Equal(t, "ebpf", kind)
require.Equal(t, "${HOME}/.falco/falco-bpf.o", path)
require.Equal(t, float64(4), bufSizePreset)
require.False(t, dropFailedExit)
},
},
{
"kind=modern_ebpf",
map[string]string{
"driver.kind": "modern_ebpf",
},
func(t *testing.T, config any) {
require.Len(t, config, 2, "should have only two items")
kind, bufSizePreset, cpusForEachBuffer, dropFailedExit, err := getModernEbpfConfig(config)
require.NoError(t, err)
require.Equal(t, "modern_ebpf", kind)
require.Equal(t, float64(4), bufSizePreset)
require.Equal(t, float64(2), cpusForEachBuffer)
require.False(t, dropFailedExit)
},
},
{
"kind=modern-bpf(alias)",
map[string]string{
"driver.kind": "modern-bpf",
},
func(t *testing.T, config any) {
require.Len(t, config, 2, "should have only two items")
kind, bufSizePreset, cpusForEachBuffer, dropFailedExit, err := getModernEbpfConfig(config)
require.NoError(t, err)
require.Equal(t, "modern_ebpf", kind)
require.Equal(t, float64(4), bufSizePreset)
require.Equal(t, float64(2), cpusForEachBuffer)
require.False(t, dropFailedExit)
},
},
{
"modernEbpf=config",
map[string]string{
"driver.kind": "modern-bpf",
"driver.modernEbpf.bufSizePreset": "6",
"driver.modernEbpf.dropFailedExit": "true",
"driver.modernEbpf.cpusForEachBuffer": "8",
},
func(t *testing.T, config any) {
require.Len(t, config, 2, "should have only two items")
kind, bufSizePreset, cpusForEachBuffer, dropFailedExit, err := getModernEbpfConfig(config)
require.NoError(t, err)
require.Equal(t, "modern_ebpf", kind)
require.Equal(t, float64(6), bufSizePreset)
require.Equal(t, float64(8), cpusForEachBuffer)
require.True(t, dropFailedExit)
},
},
{
"kind=gvisor",
map[string]string{
"driver.kind": "gvisor",
},
func(t *testing.T, config any) {
require.Len(t, config, 2, "should have only two items")
kind, config, root, err := getGvisorConfig(config)
require.NoError(t, err)
require.Equal(t, "gvisor", kind)
require.Equal(t, "/gvisor-config/pod-init.json", config)
require.Equal(t, "/host/run/containerd/runsc/k8s.io", root)
},
},
{
"gvisor=config",
map[string]string{
"driver.kind": "gvisor",
"driver.gvisor.runsc.root": "/my/root/test",
},
func(t *testing.T, config any) {
require.Len(t, config, 2, "should have only two items")
kind, config, root, err := getGvisorConfig(config)
require.NoError(t, err)
require.Equal(t, "gvisor", kind)
require.Equal(t, "/gvisor-config/pod-init.json", config)
require.Equal(t, "/host/my/root/test/k8s.io", root)
},
},
{
"kind=auto",
map[string]string{
"driver.kind": "auto",
},
func(t *testing.T, config any) {
require.Len(t, config, 4, "should have four items")
// Check that configuration for kmod has been set.
kind, bufSizePreset, dropFailedExit, err := getKmodConfig(config)
require.NoError(t, err)
require.Equal(t, "modern_ebpf", kind)
require.Equal(t, float64(4), bufSizePreset)
require.False(t, dropFailedExit)
// Check that configuration for ebpf has been set.
kind, path, bufSizePreset, dropFailedExit, err := getEbpfConfig(config)
require.NoError(t, err)
require.Equal(t, "modern_ebpf", kind)
require.Equal(t, "${HOME}/.falco/falco-bpf.o", path)
require.Equal(t, float64(4), bufSizePreset)
require.False(t, dropFailedExit)
// Check that configuration for modern_ebpf has been set.
kind, bufSizePreset, cpusForEachBuffer, dropFailedExit, err := getModernEbpfConfig(config)
require.NoError(t, err)
require.Equal(t, "modern_ebpf", kind)
require.Equal(t, float64(4), bufSizePreset)
require.Equal(t, float64(2), cpusForEachBuffer)
require.False(t, dropFailedExit)
},
},
}
for _, testCase := range testCases {
testCase := testCase
t.Run(testCase.name, func(t *testing.T) {
t.Parallel()
options := &helm.Options{SetValues: testCase.values}
output := helm.RenderTemplate(t, options, helmChartPath, unit.ReleaseName, []string{"templates/configmap.yaml"})
var cm corev1.ConfigMap
helm.UnmarshalK8SYaml(t, output, &cm)
var config map[string]interface{}
helm.UnmarshalK8SYaml(t, cm.Data["falco.yaml"], &config)
engine := config["engine"]
testCase.expected(t, engine)
})
}
}
func TestDriverConfigWithUnsupportedDriver(t *testing.T) {
t.Parallel()
helmChartPath, err := filepath.Abs(unit.ChartPath)
require.NoError(t, err)
values := map[string]string{
"driver.kind": "notExisting",
}
options := &helm.Options{SetValues: values}
_, err = helm.RenderTemplateE(t, options, helmChartPath, unit.ReleaseName, []string{"templates/configmap.yaml"})
require.Error(t, err)
require.True(t, strings.Contains(err.Error(),
"unsupported driver kind: \"notExisting\". Supported drivers [kmod ebpf modern_ebpf gvisor auto], alias [module modern-bpf]"))
}
func getKmodConfig(config interface{}) (kind string, bufSizePreset float64, dropFailedExit bool, err error) {
configMap, ok := config.(map[string]interface{})
if !ok {
err = fmt.Errorf("can't assert type of config")
return
}
kind = configMap["kind"].(string)
kmod := configMap["kmod"].(map[string]interface{})
bufSizePreset = kmod["buf_size_preset"].(float64)
dropFailedExit = kmod["drop_failed_exit"].(bool)
return
}
func getEbpfConfig(config interface{}) (kind, path string, bufSizePreset float64, dropFailedExit bool, err error) {
configMap, ok := config.(map[string]interface{})
if !ok {
err = fmt.Errorf("can't assert type of config")
return
}
kind = configMap["kind"].(string)
ebpf := configMap["ebpf"].(map[string]interface{})
bufSizePreset = ebpf["buf_size_preset"].(float64)
dropFailedExit = ebpf["drop_failed_exit"].(bool)
path = ebpf["probe"].(string)
return
}
func getModernEbpfConfig(config interface{}) (kind string, bufSizePreset, cpusForEachBuffer float64, dropFailedExit bool, err error) {
configMap, ok := config.(map[string]interface{})
if !ok {
err = fmt.Errorf("can't assert type of config")
return
}
kind = configMap["kind"].(string)
modernEbpf := configMap["modern_ebpf"].(map[string]interface{})
bufSizePreset = modernEbpf["buf_size_preset"].(float64)
dropFailedExit = modernEbpf["drop_failed_exit"].(bool)
cpusForEachBuffer = modernEbpf["cpus_for_each_buffer"].(float64)
return
}
func getGvisorConfig(cfg interface{}) (kind, config, root string, err error) {
configMap, ok := cfg.(map[string]interface{})
if !ok {
err = fmt.Errorf("can't assert type of config")
return
}
kind = configMap["kind"].(string)
gvisor := configMap["gvisor"].(map[string]interface{})
config = gvisor["config"].(string)
root = gvisor["root"].(string)
return
}


@ -0,0 +1,266 @@
// SPDX-License-Identifier: Apache-2.0
// Copyright 2024 The Falco Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package falcoTemplates
import (
"github.com/falcosecurity/charts/charts/falco/tests/unit"
"path/filepath"
"testing"
v1 "k8s.io/api/core/v1"
"github.com/gruntwork-io/terratest/modules/helm"
"github.com/stretchr/testify/require"
appsv1 "k8s.io/api/apps/v1"
)
var (
namespaceEnvVar = v1.EnvVar{
Name: "FALCOCTL_DRIVER_CONFIG_NAMESPACE",
ValueFrom: &v1.EnvVarSource{
FieldRef: &v1.ObjectFieldSelector{
APIVersion: "",
FieldPath: "metadata.namespace",
},
}}
configmapEnvVar = v1.EnvVar{
Name: "FALCOCTL_DRIVER_CONFIG_CONFIGMAP",
Value: unit.ReleaseName + "-falco",
}
updateConfigMapEnvVar = v1.EnvVar{
Name: "FALCOCTL_DRIVER_CONFIG_UPDATE_FALCO",
Value: "false",
}
)
// TestDriverLoaderEnabled tests the helper that enables the driver loader based on the configuration.
func TestDriverLoaderEnabled(t *testing.T) {
t.Parallel()
helmChartPath, err := filepath.Abs(unit.ChartPath)
require.NoError(t, err)
testCases := []struct {
name string
values map[string]string
expected func(t *testing.T, initContainer any)
}{
{
"defaultValues",
nil,
func(t *testing.T, initContainer any) {
container, ok := initContainer.(v1.Container)
require.True(t, ok)
require.Contains(t, container.Args, "auto")
require.True(t, *container.SecurityContext.Privileged)
require.Contains(t, container.Env, namespaceEnvVar)
require.Contains(t, container.Env, configmapEnvVar)
require.NotContains(t, container.Env, updateConfigMapEnvVar)
// Check that the expected volumes are there.
volumeMounts(t, container.VolumeMounts)
},
},
{
"driver.kind=modern-bpf",
map[string]string{
"driver.kind": "modern-bpf",
},
func(t *testing.T, initContainer any) {
require.Nil(t, initContainer)
},
},
{
"driver.kind=modern_ebpf",
map[string]string{
"driver.kind": "modern_ebpf",
},
func(t *testing.T, initContainer any) {
require.Nil(t, initContainer)
},
},
{
"driver.kind=gvisor",
map[string]string{
"driver.kind": "gvisor",
},
func(t *testing.T, initContainer any) {
require.Nil(t, initContainer)
},
},
{
"driver.disabled",
map[string]string{
"driver.enabled": "false",
},
func(t *testing.T, initContainer any) {
require.Nil(t, initContainer)
},
},
{
"driver.loader.disabled",
map[string]string{
"driver.loader.enabled": "false",
},
func(t *testing.T, initContainer any) {
require.Nil(t, initContainer)
},
},
{
"driver.kind=kmod",
map[string]string{
"driver.kind": "kmod",
},
func(t *testing.T, initContainer any) {
container, ok := initContainer.(v1.Container)
require.True(t, ok)
require.Contains(t, container.Args, "kmod")
require.True(t, *container.SecurityContext.Privileged)
require.NotContains(t, container.Env, namespaceEnvVar)
require.NotContains(t, container.Env, configmapEnvVar)
require.Contains(t, container.Env, updateConfigMapEnvVar)
// Check that the expected volumes are there.
volumeMounts(t, container.VolumeMounts)
},
},
{
"driver.kind=module",
map[string]string{
"driver.kind": "module",
},
func(t *testing.T, initContainer any) {
container, ok := initContainer.(v1.Container)
require.True(t, ok)
require.Contains(t, container.Args, "kmod")
require.True(t, *container.SecurityContext.Privileged)
require.NotContains(t, container.Env, namespaceEnvVar)
require.NotContains(t, container.Env, configmapEnvVar)
require.Contains(t, container.Env, updateConfigMapEnvVar)
// Check that the expected volumes are there.
volumeMounts(t, container.VolumeMounts)
},
},
{
"driver.kind=ebpf",
map[string]string{
"driver.kind": "ebpf",
},
func(t *testing.T, initContainer any) {
container, ok := initContainer.(v1.Container)
require.True(t, ok)
require.Contains(t, container.Args, "ebpf")
require.Nil(t, container.SecurityContext)
require.NotContains(t, container.Env, namespaceEnvVar)
require.Contains(t, container.Env, updateConfigMapEnvVar)
require.NotContains(t, container.Env, configmapEnvVar)
// Check that the expected volumes are there.
volumeMounts(t, container.VolumeMounts)
},
},
{
"driver.kind=kmod&driver.loader.disabled",
map[string]string{
"driver.kind": "kmod",
"driver.loader.enabled": "false",
},
func(t *testing.T, initContainer any) {
require.Nil(t, initContainer)
},
},
}
for _, testCase := range testCases {
testCase := testCase
t.Run(testCase.name, func(t *testing.T) {
t.Parallel()
options := &helm.Options{SetValues: testCase.values}
output := helm.RenderTemplate(t, options, helmChartPath, unit.ReleaseName, []string{"templates/daemonset.yaml"})
var ds appsv1.DaemonSet
helm.UnmarshalK8SYaml(t, output, &ds)
for i := range ds.Spec.Template.Spec.InitContainers {
if ds.Spec.Template.Spec.InitContainers[i].Name == "falco-driver-loader" {
testCase.expected(t, ds.Spec.Template.Spec.InitContainers[i])
return
}
}
testCase.expected(t, nil)
})
}
}
// volumeMounts checks that the expected volume mounts have been configured.
func volumeMounts(t *testing.T, volumeMounts []v1.VolumeMount) {
rootFalcoFS := v1.VolumeMount{
Name: "root-falco-fs",
ReadOnly: false,
MountPath: "/root/.falco",
}
require.Contains(t, volumeMounts, rootFalcoFS)
procFS := v1.VolumeMount{
Name: "proc-fs",
ReadOnly: true,
MountPath: "/host/proc",
}
require.Contains(t, volumeMounts, procFS)
bootFS := v1.VolumeMount{
Name: "boot-fs",
ReadOnly: true,
MountPath: "/host/boot",
}
require.Contains(t, volumeMounts, bootFS)
libModulesFS := v1.VolumeMount{
Name: "lib-modules",
ReadOnly: false,
MountPath: "/host/lib/modules",
}
require.Contains(t, volumeMounts, libModulesFS)
usrFS := v1.VolumeMount{
Name: "usr-fs",
ReadOnly: true,
MountPath: "/host/usr",
}
require.Contains(t, volumeMounts, usrFS)
etcFS := v1.VolumeMount{
Name: "etc-fs",
ReadOnly: true,
MountPath: "/host/etc",
}
require.Contains(t, volumeMounts, etcFS)
specializedFalcoConfigs := v1.VolumeMount{
Name: "specialized-falco-configs",
ReadOnly: false,
MountPath: "/etc/falco/config.d",
}
require.Contains(t, volumeMounts, specializedFalcoConfigs)
}


@ -0,0 +1,145 @@
// SPDX-License-Identifier: Apache-2.0
// Copyright 2024 The Falco Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package falcoTemplates
import (
"fmt"
"github.com/falcosecurity/charts/charts/falco/tests/unit"
"io"
"os"
"path/filepath"
"strings"
"testing"
"github.com/gruntwork-io/terratest/modules/helm"
"github.com/stretchr/testify/require"
"github.com/stretchr/testify/suite"
corev1 "k8s.io/api/core/v1"
)
type grafanaDashboardsTemplateTest struct {
suite.Suite
chartPath string
releaseName string
namespace string
templates []string
}
func TestGrafanaDashboardsTemplate(t *testing.T) {
t.Parallel()
chartFullPath, err := filepath.Abs(unit.ChartPath)
require.NoError(t, err)
suite.Run(t, &grafanaDashboardsTemplateTest{
Suite: suite.Suite{},
chartPath: chartFullPath,
releaseName: "falco-test-dashboard",
namespace: "falco-test-dashboard",
templates: []string{"templates/falco-dashboard-grafana.yaml"},
})
}
func (g *grafanaDashboardsTemplateTest) TestCreationDefaultValues() {
// Attempt to render the dashboard configmap; by default it should not be rendered.
_, err := helm.RenderTemplateE(g.T(), &helm.Options{}, g.chartPath, g.releaseName, g.templates, fmt.Sprintf("--namespace=%s", g.namespace))
g.Error(err, "should error")
g.Equal("error while running command: exit status 1; Error: could not find template templates/falco-dashboard-grafana.yaml in chart", err.Error())
}
func (g *grafanaDashboardsTemplateTest) TestConfig() {
testCases := []struct {
name string
values map[string]string
expected func(cm *corev1.ConfigMap)
}{
{"dashboard enabled",
map[string]string{
"grafana.dashboards.enabled": "true",
},
func(cm *corev1.ConfigMap) {
// Check that the name is the expected one.
g.Equal("falco-grafana-dashboard", cm.Name)
// Check the namespace.
g.Equal(g.namespace, cm.Namespace)
g.Nil(cm.Annotations)
},
},
{"namespace",
map[string]string{
"grafana.dashboards.enabled": "true",
"grafana.dashboards.configMaps.falco.namespace": "custom-namespace",
},
func(cm *corev1.ConfigMap) {
// Check that the name is the expected one.
g.Equal("falco-grafana-dashboard", cm.Name)
// Check the namespace.
g.Equal("custom-namespace", cm.Namespace)
g.Nil(cm.Annotations)
},
},
{"folder",
map[string]string{
"grafana.dashboards.enabled": "true",
"grafana.dashboards.configMaps.falco.folder": "custom-folder",
},
func(cm *corev1.ConfigMap) {
// Check that the name is the expected one.
g.Equal("falco-grafana-dashboard", cm.Name)
g.NotNil(cm.Annotations)
g.Len(cm.Annotations, 2)
// Check sidecar annotation.
val, ok := cm.Annotations["k8s-sidecar-target-directory"]
g.True(ok)
g.Equal("/tmp/dashboards/custom-folder", val)
// Check grafana annotation.
val, ok = cm.Annotations["grafana_dashboard_folder"]
g.True(ok)
g.Equal("custom-folder", val)
},
},
}
for _, testCase := range testCases {
testCase := testCase
g.Run(testCase.name, func() {
subT := g.T()
subT.Parallel()
options := &helm.Options{SetValues: testCase.values}
// Render the configmap and unmarshal it.
output, err := helm.RenderTemplateE(subT, options, g.chartPath, g.releaseName, g.templates, "--namespace="+g.namespace)
g.NoError(err, "should succeed")
var cfgMap corev1.ConfigMap
helm.UnmarshalK8SYaml(subT, output, &cfgMap)
// Common checks
// Check that contains the right label.
g.Contains(cfgMap.Labels, "grafana_dashboard")
// Check that the dashboard is contained in the config map.
file, err := os.Open("../../../dashboards/falco-dashboard.json")
g.NoError(err)
content, err := io.ReadAll(file)
g.NoError(err)
cfgData, ok := cfgMap.Data["falco-dashboard.json"]
g.True(ok)
g.Equal(strings.TrimRight(string(content), "\n"), cfgData)
testCase.expected(&cfgMap)
})
}
}


@ -0,0 +1,210 @@
// SPDX-License-Identifier: Apache-2.0
// Copyright 2024 The Falco Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package falcoTemplates
import (
"github.com/falcosecurity/charts/charts/falco/tests/unit"
"path/filepath"
"testing"
"github.com/gruntwork-io/terratest/modules/helm"
"github.com/stretchr/testify/require"
"gopkg.in/yaml.v3"
corev1 "k8s.io/api/core/v1"
)
type metricsConfig struct {
Enabled bool `yaml:"enabled"`
ConvertMemoryToMB bool `yaml:"convert_memory_to_mb"`
IncludeEmptyValues bool `yaml:"include_empty_values"`
KernelEventCountersEnabled bool `yaml:"kernel_event_counters_enabled"`
KernelEventCountersPerCPUEnabled bool `yaml:"kernel_event_counters_per_cpu_enabled"`
ResourceUtilizationEnabled bool `yaml:"resource_utilization_enabled"`
RulesCountersEnabled bool `yaml:"rules_counters_enabled"`
LibbpfStatsEnabled bool `yaml:"libbpf_stats_enabled"`
OutputRule bool `yaml:"output_rule"`
StateCountersEnabled bool `yaml:"state_counters_enabled"`
Interval string `yaml:"interval"`
}
type webServerConfig struct {
Enabled bool `yaml:"enabled"`
K8sHealthzEndpoint string `yaml:"k8s_healthz_endpoint"`
ListenPort string `yaml:"listen_port"`
PrometheusMetricsEnabled bool `yaml:"prometheus_metrics_enabled"`
SSLCertificate string `yaml:"ssl_certificate"`
SSLEnabled bool `yaml:"ssl_enabled"`
Threadiness int `yaml:"threadiness"`
}
func TestMetricsConfigInFalcoConfig(t *testing.T) {
t.Parallel()
helmChartPath, err := filepath.Abs(unit.ChartPath)
require.NoError(t, err)
testCases := []struct {
name string
values map[string]string
expected func(t *testing.T, metricsConfig, webServerConfig any)
}{
{
"defaultValues",
nil,
func(t *testing.T, metricsConfig, webServerConfig any) {
require.Len(t, metricsConfig, 11, "should have eleven items")
metrics, err := getMetricsConfig(metricsConfig)
require.NoError(t, err)
require.NotNil(t, metrics)
require.True(t, metrics.ConvertMemoryToMB)
require.False(t, metrics.Enabled)
require.False(t, metrics.IncludeEmptyValues)
require.True(t, metrics.KernelEventCountersEnabled)
require.True(t, metrics.ResourceUtilizationEnabled)
require.True(t, metrics.RulesCountersEnabled)
require.Equal(t, "1h", metrics.Interval)
require.True(t, metrics.LibbpfStatsEnabled)
require.True(t, metrics.OutputRule)
require.True(t, metrics.StateCountersEnabled)
require.False(t, metrics.KernelEventCountersPerCPUEnabled)
webServer, err := getWebServerConfig(webServerConfig)
require.NoError(t, err)
require.NotNil(t, webServer)
require.True(t, webServer.Enabled)
require.False(t, webServer.PrometheusMetricsEnabled)
},
},
{
"metricsEnabled",
map[string]string{
"metrics.enabled": "true",
},
func(t *testing.T, metricsConfig, webServerConfig any) {
require.Len(t, metricsConfig, 11, "should have eleven items")
metrics, err := getMetricsConfig(metricsConfig)
require.NoError(t, err)
require.NotNil(t, metrics)
require.True(t, metrics.ConvertMemoryToMB)
require.True(t, metrics.Enabled)
require.False(t, metrics.IncludeEmptyValues)
require.True(t, metrics.KernelEventCountersEnabled)
require.True(t, metrics.ResourceUtilizationEnabled)
require.True(t, metrics.RulesCountersEnabled)
require.Equal(t, "1h", metrics.Interval)
require.True(t, metrics.LibbpfStatsEnabled)
require.False(t, metrics.OutputRule)
require.True(t, metrics.StateCountersEnabled)
require.False(t, metrics.KernelEventCountersPerCPUEnabled)
webServer, err := getWebServerConfig(webServerConfig)
require.NoError(t, err)
require.NotNil(t, webServer)
require.True(t, webServer.Enabled)
require.True(t, webServer.PrometheusMetricsEnabled)
},
},
{
"Flip/Change Values",
map[string]string{
"metrics.enabled": "true",
"metrics.convertMemoryToMB": "false",
"metrics.includeEmptyValues": "true",
"metrics.kernelEventCountersEnabled": "false",
"metrics.resourceUtilizationEnabled": "false",
"metrics.rulesCountersEnabled": "false",
"metrics.libbpfStatsEnabled": "false",
"metrics.outputRule": "false",
"metrics.stateCountersEnabled": "false",
"metrics.interval": "1s",
"metrics.kernelEventCountersPerCPUEnabled": "true",
},
func(t *testing.T, metricsConfig, webServerConfig any) {
require.Len(t, metricsConfig, 11, "should have eleven items")
metrics, err := getMetricsConfig(metricsConfig)
require.NoError(t, err)
require.NotNil(t, metrics)
require.False(t, metrics.ConvertMemoryToMB)
require.True(t, metrics.Enabled)
require.True(t, metrics.IncludeEmptyValues)
require.False(t, metrics.KernelEventCountersEnabled)
require.False(t, metrics.ResourceUtilizationEnabled)
require.False(t, metrics.RulesCountersEnabled)
require.Equal(t, "1s", metrics.Interval)
require.False(t, metrics.LibbpfStatsEnabled)
require.False(t, metrics.OutputRule)
require.False(t, metrics.StateCountersEnabled)
require.True(t, metrics.KernelEventCountersPerCPUEnabled)
webServer, err := getWebServerConfig(webServerConfig)
require.NoError(t, err)
require.NotNil(t, webServer)
require.True(t, webServer.Enabled)
require.True(t, webServer.PrometheusMetricsEnabled)
},
},
}
for _, testCase := range testCases {
testCase := testCase
t.Run(testCase.name, func(t *testing.T) {
t.Parallel()
options := &helm.Options{SetValues: testCase.values}
output := helm.RenderTemplate(t, options, helmChartPath, unit.ReleaseName, []string{"templates/configmap.yaml"})
var cm corev1.ConfigMap
helm.UnmarshalK8SYaml(t, output, &cm)
var config map[string]interface{}
helm.UnmarshalK8SYaml(t, cm.Data["falco.yaml"], &config)
metrics := config["metrics"]
webServer := config["webserver"]
testCase.expected(t, metrics, webServer)
})
}
}
func getMetricsConfig(config any) (*metricsConfig, error) {
var metrics metricsConfig
metricsByte, err := yaml.Marshal(config)
if err != nil {
return nil, err
}
if err := yaml.Unmarshal(metricsByte, &metrics); err != nil {
return nil, err
}
return &metrics, nil
}
func getWebServerConfig(config any) (*webServerConfig, error) {
var webServer webServerConfig
webServerByte, err := yaml.Marshal(config)
if err != nil {
return nil, err
}
if err := yaml.Unmarshal(webServerByte, &webServer); err != nil {
return nil, err
}
return &webServer, nil
}
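The two helpers above share one pattern: they convert an untyped `any` value (decoded from `falco.yaml`) into a typed struct by marshalling it back to bytes and unmarshalling into the target type. A minimal, self-contained sketch of that round trip, using `encoding/json` from the standard library instead of the YAML package the tests use (the mechanics are identical; the `metrics` struct here is a trimmed stand-in, not the chart's full type):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// metrics is a trimmed stand-in for the chart test's metricsConfig struct.
type metrics struct {
	Enabled  bool   `json:"enabled"`
	Interval string `json:"interval"`
}

// toMetrics re-encodes an arbitrary decoded value and decodes it into a
// typed struct, mirroring getMetricsConfig/getWebServerConfig above.
func toMetrics(config any) (*metrics, error) {
	raw, err := json.Marshal(config)
	if err != nil {
		return nil, err
	}
	var m metrics
	if err := json.Unmarshal(raw, &m); err != nil {
		return nil, err
	}
	return &m, nil
}

func main() {
	// An untyped map, as produced by unmarshalling a config file.
	untyped := map[string]any{"enabled": true, "interval": "1h"}
	m, err := toMetrics(untyped)
	if err != nil {
		panic(err)
	}
	fmt.Println(m.Enabled, m.Interval) // true 1h
}
```

The round trip is slower than a direct type assertion, but it tolerates extra keys and converts nested maps in one step, which is why it suits config snapshots like these.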


@@ -0,0 +1,60 @@
package falcoTemplates
import (
"github.com/falcosecurity/charts/charts/falco/tests/unit"
"github.com/gruntwork-io/terratest/modules/helm"
"github.com/stretchr/testify/require"
corev1 "k8s.io/api/core/v1"
"path/filepath"
"strings"
"testing"
)
func TestServiceAccount(t *testing.T) {
t.Parallel()
helmChartPath, err := filepath.Abs(unit.ChartPath)
require.NoError(t, err)
testCases := []struct {
name string
values map[string]string
expected func(t *testing.T, sa *corev1.ServiceAccount)
}{
{
"defaultValues",
nil,
func(t *testing.T, sa *corev1.ServiceAccount) {
require.Equal(t, "rendered-resources-falco", sa.Name)
},
},
{
"serviceAccount.create=false",
map[string]string{
"serviceAccount.create": "false",
},
func(t *testing.T, sa *corev1.ServiceAccount) {
require.Equal(t, "", sa.Name)
},
},
}
for _, testCase := range testCases {
testCase := testCase
t.Run(testCase.name, func(t *testing.T) {
t.Parallel()
options := &helm.Options{SetValues: testCase.values}
output, err := helm.RenderTemplateE(t, options, helmChartPath, unit.ReleaseName, []string{"templates/serviceaccount.yaml"})
if err != nil {
require.True(t, strings.Contains(err.Error(), "Error: could not find template templates/serviceaccount.yaml in chart"))
}
var sa corev1.ServiceAccount
helm.UnmarshalK8SYaml(t, output, &sa)
testCase.expected(t, &sa)
})
}
}


@@ -0,0 +1,160 @@
// SPDX-License-Identifier: Apache-2.0
// Copyright 2024 The Falco Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package falcoTemplates
import (
"encoding/json"
"github.com/falcosecurity/charts/charts/falco/tests/unit"
"path/filepath"
"reflect"
"testing"
"github.com/gruntwork-io/terratest/modules/helm"
monitoringv1 "github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring/v1"
"github.com/stretchr/testify/require"
"github.com/stretchr/testify/suite"
)
type serviceMonitorTemplateTest struct {
suite.Suite
chartPath string
releaseName string
namespace string
templates []string
}
func TestServiceMonitorTemplate(t *testing.T) {
t.Parallel()
chartFullPath, err := filepath.Abs(unit.ChartPath)
require.NoError(t, err)
suite.Run(t, &serviceMonitorTemplateTest{
Suite: suite.Suite{},
chartPath: chartFullPath,
releaseName: "falco-test",
namespace: "falco-namespace-test",
templates: []string{"templates/serviceMonitor.yaml"},
})
}
func (s *serviceMonitorTemplateTest) TestCreationDefaultValues() {
// Render the servicemonitor and check that it has not been rendered.
_, err := helm.RenderTemplateE(s.T(), &helm.Options{}, s.chartPath, s.releaseName, s.templates)
s.Error(err, "should error")
s.Equal("error while running command: exit status 1; Error: could not find template templates/serviceMonitor.yaml in chart", err.Error())
}
func (s *serviceMonitorTemplateTest) TestEndpoint() {
defaultEndpointsJSON := `[
{
"port": "metrics",
"interval": "15s",
"scrapeTimeout": "10s",
"honorLabels": true,
"path": "/metrics",
"scheme": "http"
}
]`
var defaultEndpoints []monitoringv1.Endpoint
err := json.Unmarshal([]byte(defaultEndpointsJSON), &defaultEndpoints)
s.NoError(err)
options := &helm.Options{SetValues: map[string]string{"serviceMonitor.create": "true"}}
output := helm.RenderTemplate(s.T(), options, s.chartPath, s.releaseName, s.templates)
var svcMonitor monitoringv1.ServiceMonitor
helm.UnmarshalK8SYaml(s.T(), output, &svcMonitor)
s.Len(svcMonitor.Spec.Endpoints, 1, "should have only one endpoint")
s.True(reflect.DeepEqual(svcMonitor.Spec.Endpoints[0], defaultEndpoints[0]))
}
func (s *serviceMonitorTemplateTest) TestNamespaceSelector() {
selectorsLabelJson := `{
"app.kubernetes.io/instance": "my-falco",
"foo": "bar"
}`
options := &helm.Options{SetValues: map[string]string{"serviceMonitor.create": "true"},
SetJsonValues: map[string]string{"serviceMonitor.selector": selectorsLabelJson}}
output := helm.RenderTemplate(s.T(), options, s.chartPath, s.releaseName, s.templates)
var svcMonitor monitoringv1.ServiceMonitor
helm.UnmarshalK8SYaml(s.T(), output, &svcMonitor)
s.Len(svcMonitor.Spec.NamespaceSelector.MatchNames, 1)
s.Equal("default", svcMonitor.Spec.NamespaceSelector.MatchNames[0])
}
func (s *serviceMonitorTemplateTest) TestServiceMonitorSelector() {
testCases := []struct {
name string
values string
expected map[string]string
}{
{
"defaultValues",
"",
map[string]string{
"app.kubernetes.io/instance": "falco-test",
"app.kubernetes.io/name": "falco",
"type": "falco-metrics",
},
},
{
"customValues",
`{
"foo": "bar"
}`,
map[string]string{
"app.kubernetes.io/instance": "falco-test",
"app.kubernetes.io/name": "falco",
"foo": "bar",
"type": "falco-metrics",
},
},
{
"overwriteDefaultValues",
`{
"app.kubernetes.io/instance": "falco-overwrite",
"foo": "bar"
}`,
map[string]string{
"app.kubernetes.io/instance": "falco-overwrite",
"app.kubernetes.io/name": "falco",
"foo": "bar",
"type": "falco-metrics",
},
},
}
for _, testCase := range testCases {
testCase := testCase
s.Run(testCase.name, func() {
subT := s.T()
subT.Parallel()
options := &helm.Options{SetValues: map[string]string{"serviceMonitor.create": "true"},
SetJsonValues: map[string]string{"serviceMonitor.selector": testCase.values}}
output := helm.RenderTemplate(s.T(), options, s.chartPath, s.releaseName, s.templates)
var svcMonitor monitoringv1.ServiceMonitor
helm.UnmarshalK8SYaml(s.T(), output, &svcMonitor)
s.Equal(testCase.expected, svcMonitor.Spec.Selector.MatchLabels, "should be the same")
})
}
}
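The three selector cases above exercise one behavior: user-supplied `serviceMonitor.selector` labels are merged over the chart's default labels, with the user's values winning on conflicts. A hypothetical sketch of that merge for flat label maps (the real merge happens in the chart's Helm templates, not in Go):

```go
package main

import "fmt"

// mergeLabels overlays user-supplied labels onto chart defaults,
// letting the overrides win on key conflicts.
func mergeLabels(defaults, overrides map[string]string) map[string]string {
	out := make(map[string]string, len(defaults)+len(overrides))
	for k, v := range defaults {
		out[k] = v
	}
	for k, v := range overrides {
		out[k] = v // user-supplied values overwrite defaults
	}
	return out
}

func main() {
	defaults := map[string]string{
		"app.kubernetes.io/instance": "falco-test",
		"app.kubernetes.io/name":     "falco",
		"type":                       "falco-metrics",
	}
	overrides := map[string]string{
		"app.kubernetes.io/instance": "falco-overwrite",
		"foo":                        "bar",
	}
	merged := mergeLabels(defaults, overrides)
	fmt.Println(merged["app.kubernetes.io/instance"], merged["foo"]) // falco-overwrite bar
}
```

This matches the "overwriteDefaultValues" expectation above: `app.kubernetes.io/instance` takes the override while the untouched defaults survive.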


@@ -0,0 +1,177 @@
// SPDX-License-Identifier: Apache-2.0
// Copyright 2024 The Falco Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package falcoTemplates
import (
"fmt"
"github.com/falcosecurity/charts/charts/falco/tests/unit"
"path/filepath"
"testing"
"github.com/gruntwork-io/terratest/modules/helm"
"github.com/stretchr/testify/require"
"github.com/stretchr/testify/suite"
corev1 "k8s.io/api/core/v1"
)
type serviceTemplateTest struct {
suite.Suite
chartPath string
releaseName string
namespace string
templates []string
}
func TestServiceTemplate(t *testing.T) {
t.Parallel()
chartFullPath, err := filepath.Abs(unit.ChartPath)
require.NoError(t, err)
suite.Run(t, &serviceTemplateTest{
Suite: suite.Suite{},
chartPath: chartFullPath,
releaseName: "falco-test",
namespace: "falco-namespace-test",
templates: []string{"templates/service.yaml"},
})
}
func (s *serviceTemplateTest) TestCreationDefaultValues() {
// Render the service and check that it has not been rendered.
_, err := helm.RenderTemplateE(s.T(), &helm.Options{}, s.chartPath, s.releaseName, s.templates)
s.Error(err, "should error")
s.Equal("error while running command: exit status 1; Error: could not find template templates/service.yaml in chart", err.Error())
}
func (s *serviceTemplateTest) TestDefaultLabelsValues() {
options := &helm.Options{SetValues: map[string]string{"metrics.enabled": "true"}}
output, err := helm.RenderTemplateE(s.T(), options, s.chartPath, s.releaseName, s.templates)
s.NoError(err, "should render template")
cInfo, err := unit.ChartInfo(s.T(), s.chartPath)
s.NoError(err)
// Get app version.
appVersion, found := cInfo["appVersion"]
s.True(found, "should find app version in chart info")
appVersion = appVersion.(string)
// Get chart version.
chartVersion, found := cInfo["version"]
s.True(found, "should find chart version in chart info")
// Get chart name.
chartName, found := cInfo["name"]
s.True(found, "should find chart name in chart info")
chartName = chartName.(string)
expectedLabels := map[string]string{
"helm.sh/chart": fmt.Sprintf("%s-%s", chartName, chartVersion),
"app.kubernetes.io/name": chartName.(string),
"app.kubernetes.io/instance": s.releaseName,
"app.kubernetes.io/version": appVersion.(string),
"app.kubernetes.io/managed-by": "Helm",
"type": "falco-metrics",
}
var svc corev1.Service
helm.UnmarshalK8SYaml(s.T(), output, &svc)
labels := svc.GetLabels()
for key, value := range labels {
expectedVal := expectedLabels[key]
s.Equal(expectedVal, value)
}
for key, value := range expectedLabels {
expectedVal := labels[key]
s.Equal(expectedVal, value)
}
}
func (s *serviceTemplateTest) TestCustomLabelsValues() {
options := &helm.Options{SetValues: map[string]string{"metrics.enabled": "true",
"metrics.service.labels.customLabel": "customLabelValues"}}
output, err := helm.RenderTemplateE(s.T(), options, s.chartPath, s.releaseName, s.templates)
s.NoError(err, "should render template")
cInfo, err := unit.ChartInfo(s.T(), s.chartPath)
s.NoError(err)
// Get app version.
appVersion, found := cInfo["appVersion"]
s.True(found, "should find app version in chart info")
appVersion = appVersion.(string)
// Get chart version.
chartVersion, found := cInfo["version"]
s.True(found, "should find chart version in chart info")
// Get chart name.
chartName, found := cInfo["name"]
s.True(found, "should find chart name in chart info")
chartName = chartName.(string)
expectedLabels := map[string]string{
"helm.sh/chart": fmt.Sprintf("%s-%s", chartName, chartVersion),
"app.kubernetes.io/name": chartName.(string),
"app.kubernetes.io/instance": s.releaseName,
"app.kubernetes.io/version": appVersion.(string),
"app.kubernetes.io/managed-by": "Helm",
"type": "falco-metrics",
"customLabel": "customLabelValues",
}
var svc corev1.Service
helm.UnmarshalK8SYaml(s.T(), output, &svc)
labels := svc.GetLabels()
for key, value := range labels {
expectedVal := expectedLabels[key]
s.Equal(expectedVal, value)
}
for key, value := range expectedLabels {
expectedVal := labels[key]
s.Equal(expectedVal, value)
}
}
func (s *serviceTemplateTest) TestDefaultAnnotationsValues() {
options := &helm.Options{SetValues: map[string]string{"metrics.enabled": "true"}}
output, err := helm.RenderTemplateE(s.T(), options, s.chartPath, s.releaseName, s.templates)
s.NoError(err)
var svc corev1.Service
helm.UnmarshalK8SYaml(s.T(), output, &svc)
s.Nil(svc.Annotations, "should be nil")
}
func (s *serviceTemplateTest) TestCustomAnnotationsValues() {
values := map[string]string{
"metrics.enabled": "true",
"metrics.service.annotations.annotation1": "customAnnotation1",
"metrics.service.annotations.annotation2": "customAnnotation2",
}
annotations := map[string]string{
"annotation1": "customAnnotation1",
"annotation2": "customAnnotation2",
}
options := &helm.Options{SetValues: values}
output, err := helm.RenderTemplateE(s.T(), options, s.chartPath, s.releaseName, s.templates)
s.NoError(err)
var svc corev1.Service
helm.UnmarshalK8SYaml(s.T(), output, &svc)
s.Len(svc.Annotations, 2)
for key, value := range svc.Annotations {
expectedVal := annotations[key]
s.Equal(expectedVal, value)
}
}


@@ -0,0 +1,649 @@
// SPDX-License-Identifier: Apache-2.0
// Copyright 2024 The Falco Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package k8smetaPlugin
import (
"encoding/json"
"fmt"
"path/filepath"
"regexp"
"strings"
"testing"
"github.com/falcosecurity/charts/charts/falco/tests/unit"
"slices"
"github.com/gruntwork-io/terratest/modules/helm"
"github.com/stretchr/testify/require"
corev1 "k8s.io/api/core/v1"
)
// Using the default values we want to test that no k8s-metacollector resources are rendered, since the collector is disabled by default.
func TestRenderedResourcesWithDefaultValues(t *testing.T) {
t.Parallel()
helmChartPath, err := filepath.Abs(unit.ChartPath)
require.NoError(t, err)
options := &helm.Options{}
// Template the chart using the default values.yaml file.
output, err := helm.RenderTemplateE(t, options, helmChartPath, unit.ReleaseName, nil)
require.NoError(t, err)
// Extract all rendered files from the output.
re := regexp.MustCompile(unit.PatternK8sMetacollectorFiles)
matches := re.FindAllStringSubmatch(output, -1)
require.Len(t, matches, 0)
}
// When the k8s-metacollector is enabled, all of its expected resources should be rendered.
func TestRenderedResourcesWhenEnabled(t *testing.T) {
t.Parallel()
helmChartPath, err := filepath.Abs(unit.ChartPath)
require.NoError(t, err)
// Template files that we expect to be rendered.
templateFiles := []string{
"clusterrole.yaml",
"clusterrolebinding.yaml",
"deployment.yaml",
"service.yaml",
"serviceaccount.yaml",
}
options := &helm.Options{SetValues: map[string]string{
"collectors.kubernetes.enabled": "true",
}}
// Template the chart using the default values.yaml file.
output, err := helm.RenderTemplateE(t, options, helmChartPath, unit.ReleaseName, nil)
require.NoError(t, err)
// Extract all rendered files from the output.
re := regexp.MustCompile(unit.PatternK8sMetacollectorFiles)
matches := re.FindAllStringSubmatch(output, -1)
var renderedTemplates []string
for _, match := range matches {
// Filter out test templates.
if !strings.Contains(match[1], "test-") {
renderedTemplates = append(renderedTemplates, match[1])
}
}
// Assert that the rendered resources are equal to the expected ones.
require.Equal(t, len(templateFiles), len(renderedTemplates), "should be equal")
for _, rendered := range renderedTemplates {
require.True(t, slices.Contains(templateFiles, rendered), "template files should contain all the rendered files")
}
}
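Both tests above extract the names of rendered manifests by running a regular expression over the raw `helm template` output. Helm prefixes each rendered manifest with a `# Source: <path>` comment, so a capture group on the file name is enough. The exact pattern lives in `unit.PatternK8sMetacollectorFiles` and is not shown here, so the one below is an assumed illustration:

```go
package main

import (
	"fmt"
	"regexp"
)

// Hypothetical pattern: the real one is unit.PatternK8sMetacollectorFiles,
// which is defined outside this excerpt. It captures the template file name
// from Helm's "# Source:" comment lines.
var pattern = regexp.MustCompile(`# Source: falco/charts/k8s-metacollector/templates/(\S+)`)

// renderedTemplates lists the k8s-metacollector template files present
// in raw `helm template` output.
func renderedTemplates(helmOutput string) []string {
	var names []string
	for _, m := range pattern.FindAllStringSubmatch(helmOutput, -1) {
		names = append(names, m[1]) // m[1] is the capture group
	}
	return names
}

func main() {
	output := `---
# Source: falco/charts/k8s-metacollector/templates/service.yaml
kind: Service
---
# Source: falco/charts/k8s-metacollector/templates/deployment.yaml
kind: Deployment
`
	fmt.Println(renderedTemplates(output)) // [service.yaml deployment.yaml]
}
```

With the collector disabled (the default), the pattern finds zero matches, which is exactly what `TestRenderedResourcesWithDefaultValues` asserts.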
func TestPluginConfigurationInFalcoConfig(t *testing.T) {
t.Parallel()
helmChartPath, err := filepath.Abs(unit.ChartPath)
require.NoError(t, err)
testCases := []struct {
name string
values map[string]string
expected func(t *testing.T, config any)
}{
{
"defaultValues",
nil,
func(t *testing.T, config any) {
plugin := config.(map[string]interface{})
// Get init config.
initConfig, ok := plugin["init_config"]
require.True(t, ok)
require.Len(t, initConfig, 5, "checking number of config entries in the init section")
initConfigMap := initConfig.(map[string]interface{})
// Check that the collector port is correctly set.
port := initConfigMap["collectorPort"]
require.Equal(t, float64(45000), port.(float64))
// Check that the collector nodeName is correctly set.
nodeName := initConfigMap["nodeName"]
require.Equal(t, "${FALCO_K8S_NODE_NAME}", nodeName.(string))
// Check that the collector hostname is correctly set.
hostName := initConfigMap["collectorHostname"]
require.Equal(t, fmt.Sprintf("%s-k8s-metacollector.default.svc", unit.ReleaseName), hostName.(string))
// Check that the loglevel has been set.
verbosity := initConfigMap["verbosity"]
require.Equal(t, "info", verbosity.(string))
// Check that host proc fs has been set.
hostProc := initConfigMap["hostProc"]
require.Equal(t, "/host", hostProc.(string))
// Check that the library path is set.
libPath := plugin["library_path"]
require.Equal(t, "libk8smeta.so", libPath)
},
},
{
"overrideK8s-metacollectorNamespace",
map[string]string{
"k8s-metacollector.namespaceOverride": "test",
},
func(t *testing.T, config any) {
plugin := config.(map[string]interface{})
// Get init config.
initConfig, ok := plugin["init_config"]
require.True(t, ok)
require.Len(t, initConfig, 5, "checking number of config entries in the init section")
initConfigMap := initConfig.(map[string]interface{})
// Check that the collector port is correctly set.
port := initConfigMap["collectorPort"]
require.Equal(t, float64(45000), port.(float64))
// Check that the collector nodeName is correctly set.
nodeName := initConfigMap["nodeName"]
require.Equal(t, "${FALCO_K8S_NODE_NAME}", nodeName.(string))
// Check that the collector hostname is correctly set.
hostName := initConfigMap["collectorHostname"]
require.Equal(t, fmt.Sprintf("%s-k8s-metacollector.test.svc", unit.ReleaseName), hostName.(string))
// Check that the loglevel has been set.
verbosity := initConfigMap["verbosity"]
require.Equal(t, "info", verbosity.(string))
// Check that host proc fs has been set.
hostProc := initConfigMap["hostProc"]
require.Equal(t, "/host", hostProc.(string))
// Check that the library path is set.
libPath := plugin["library_path"]
require.Equal(t, "libk8smeta.so", libPath)
},
},
{
"overrideK8s-metacollectorName",
map[string]string{
"k8s-metacollector.fullnameOverride": "collector",
},
func(t *testing.T, config any) {
plugin := config.(map[string]interface{})
// Get init config.
initConfig, ok := plugin["init_config"]
require.True(t, ok)
require.Len(t, initConfig, 5, "checking number of config entries in the init section")
initConfigMap := initConfig.(map[string]interface{})
// Check that the collector port is correctly set.
port := initConfigMap["collectorPort"]
require.Equal(t, float64(45000), port.(float64))
// Check that the collector nodeName is correctly set.
nodeName := initConfigMap["nodeName"]
require.Equal(t, "${FALCO_K8S_NODE_NAME}", nodeName.(string))
// Check that the collector hostname is correctly set.
hostName := initConfigMap["collectorHostname"]
require.Equal(t, "collector.default.svc", hostName.(string))
// Check that the loglevel has been set.
verbosity := initConfigMap["verbosity"]
require.Equal(t, "info", verbosity.(string))
// Check that host proc fs has been set.
hostProc := initConfigMap["hostProc"]
require.Equal(t, "/host", hostProc.(string))
// Check that the library path is set.
libPath := plugin["library_path"]
require.Equal(t, "libk8smeta.so", libPath)
},
},
{
"overrideK8s-metacollectorNamespaceAndName",
map[string]string{
"k8s-metacollector.namespaceOverride": "test",
"k8s-metacollector.fullnameOverride": "collector",
},
func(t *testing.T, config any) {
plugin := config.(map[string]interface{})
// Get init config.
initConfig, ok := plugin["init_config"]
require.True(t, ok)
require.Len(t, initConfig, 5, "checking number of config entries in the init section")
initConfigMap := initConfig.(map[string]interface{})
// Check that the collector port is correctly set.
port := initConfigMap["collectorPort"]
require.Equal(t, float64(45000), port.(float64))
// Check that the collector nodeName is correctly set.
nodeName := initConfigMap["nodeName"]
require.Equal(t, "${FALCO_K8S_NODE_NAME}", nodeName.(string))
// Check that the collector hostname is correctly set.
hostName := initConfigMap["collectorHostname"]
require.Equal(t, "collector.test.svc", hostName.(string))
// Check that the loglevel has been set.
verbosity := initConfigMap["verbosity"]
require.Equal(t, "info", verbosity.(string))
// Check that host proc fs has been set.
hostProc := initConfigMap["hostProc"]
require.Equal(t, "/host", hostProc.(string))
// Check that the library path is set.
libPath := plugin["library_path"]
require.Equal(t, "libk8smeta.so", libPath)
},
},
{
"set CollectorHostname",
map[string]string{
"collectors.kubernetes.collectorHostname": "test",
},
func(t *testing.T, config any) {
plugin := config.(map[string]interface{})
// Get init config.
initConfig, ok := plugin["init_config"]
require.True(t, ok)
require.Len(t, initConfig, 5, "checking number of config entries in the init section")
initConfigMap := initConfig.(map[string]interface{})
// Check that the collector port is correctly set.
port := initConfigMap["collectorPort"]
require.Equal(t, float64(45000), port.(float64))
// Check that the collector nodeName is correctly set.
nodeName := initConfigMap["nodeName"]
require.Equal(t, "${FALCO_K8S_NODE_NAME}", nodeName.(string))
// Check that the collector hostname is correctly set.
hostName := initConfigMap["collectorHostname"]
require.Equal(t, "test", hostName.(string))
// Check that the loglevel has been set.
verbosity := initConfigMap["verbosity"]
require.Equal(t, "info", verbosity.(string))
// Check that host proc fs has been set.
hostProc := initConfigMap["hostProc"]
require.Equal(t, "/host", hostProc.(string))
// Check that the library path is set.
libPath := plugin["library_path"]
require.Equal(t, "libk8smeta.so", libPath)
},
},
{
"set CollectorHostname and namespace name",
map[string]string{
"collectors.kubernetes.collectorHostname": "test-with-override",
"k8s-metacollector.namespaceOverride": "test",
"k8s-metacollector.fullnameOverride": "collector",
},
func(t *testing.T, config any) {
plugin := config.(map[string]interface{})
// Get init config.
initConfig, ok := plugin["init_config"]
require.True(t, ok)
require.Len(t, initConfig, 5, "checking number of config entries in the init section")
initConfigMap := initConfig.(map[string]interface{})
// Check that the collector port is correctly set.
port := initConfigMap["collectorPort"]
require.Equal(t, float64(45000), port.(float64))
// Check that the collector nodeName is correctly set.
nodeName := initConfigMap["nodeName"]
require.Equal(t, "${FALCO_K8S_NODE_NAME}", nodeName.(string))
// Check that the collector hostname is correctly set.
hostName := initConfigMap["collectorHostname"]
require.Equal(t, "test-with-override", hostName.(string))
// Check that the loglevel has been set.
verbosity := initConfigMap["verbosity"]
require.Equal(t, "info", verbosity.(string))
// Check that host proc fs has been set.
hostProc := initConfigMap["hostProc"]
require.Equal(t, "/host", hostProc.(string))
// Check that the library path is set.
libPath := plugin["library_path"]
require.Equal(t, "libk8smeta.so", libPath)
},
},
{
"set collectorPort",
map[string]string{
"collectors.kubernetes.collectorPort": "8888",
},
func(t *testing.T, config any) {
plugin := config.(map[string]interface{})
// Get init config.
initConfig, ok := plugin["init_config"]
require.True(t, ok)
initConfigMap := initConfig.(map[string]interface{})
// Check that the collector port is correctly set.
port := initConfigMap["collectorPort"]
require.Equal(t, float64(8888), port.(float64))
// Check that the collector nodeName is correctly set.
nodeName := initConfigMap["nodeName"]
require.Equal(t, "${FALCO_K8S_NODE_NAME}", nodeName.(string))
// Check that the collector hostname is correctly set.
hostName := initConfigMap["collectorHostname"]
require.Equal(t, fmt.Sprintf("%s-k8s-metacollector.default.svc", unit.ReleaseName), hostName.(string))
// Check that the loglevel has been set.
verbosity := initConfigMap["verbosity"]
require.Equal(t, "info", verbosity.(string))
// Check that host proc fs has been set.
hostProc := initConfigMap["hostProc"]
require.Equal(t, "/host", hostProc.(string))
// Check that the library path is set.
libPath := plugin["library_path"]
require.Equal(t, "libk8smeta.so", libPath)
},
},
{
"set collector logger level and hostProc",
map[string]string{
"collectors.kubernetes.verbosity": "trace",
"collectors.kubernetes.hostProc": "/host/test",
},
func(t *testing.T, config any) {
plugin := config.(map[string]interface{})
// Get init config.
initConfig, ok := plugin["init_config"]
require.True(t, ok)
require.Len(t, initConfig, 5, "checking number of config entries in the init section")
initConfigMap := initConfig.(map[string]interface{})
// Check that the collector port is correctly set.
port := initConfigMap["collectorPort"]
require.Equal(t, float64(45000), port.(float64))
// Check that the collector nodeName is correctly set.
nodeName := initConfigMap["nodeName"]
require.Equal(t, "${FALCO_K8S_NODE_NAME}", nodeName.(string))
// Check that the collector hostname is correctly set.
hostName := initConfigMap["collectorHostname"]
require.Equal(t, fmt.Sprintf("%s-k8s-metacollector.default.svc", unit.ReleaseName), hostName.(string))
// Check that the loglevel has been set.
verbosity := initConfigMap["verbosity"]
require.Equal(t, "trace", verbosity.(string))
// Check that host proc fs has been set.
hostProc := initConfigMap["hostProc"]
require.Equal(t, "/host/test", hostProc.(string))
// Check that the library path is set.
libPath := plugin["library_path"]
require.Equal(t, "libk8smeta.so", libPath)
},
},
{
"driver disabled",
map[string]string{
"driver.enabled": "false",
},
func(t *testing.T, config any) {
require.Nil(t, config)
},
},
}
for _, testCase := range testCases {
testCase := testCase
t.Run(testCase.name, func(t *testing.T) {
t.Parallel()
// Enable the collector.
if testCase.values != nil {
testCase.values["collectors.kubernetes.enabled"] = "true"
} else {
testCase.values = map[string]string{"collectors.kubernetes.enabled": "true"}
}
options := &helm.Options{SetValues: testCase.values}
output := helm.RenderTemplate(t, options, helmChartPath, unit.ReleaseName, []string{"templates/configmap.yaml"})
var cm corev1.ConfigMap
helm.UnmarshalK8SYaml(t, output, &cm)
var config map[string]interface{}
helm.UnmarshalK8SYaml(t, cm.Data["falco.yaml"], &config)
plugins := config["plugins"]
pluginsArray := plugins.([]interface{})
found := false
// Find the k8smeta plugin configuration.
for _, plugin := range pluginsArray {
if name, ok := plugin.(map[string]interface{})["name"]; ok && name == unit.K8sMetaPluginName {
testCase.expected(t, plugin)
found = true
}
}
if found {
// Check that the plugin has been added to the ones that need to be loaded.
loadplugins := config["load_plugins"]
require.True(t, slices.Contains(loadplugins.([]interface{}), unit.K8sMetaPluginName))
} else {
testCase.expected(t, nil)
loadplugins := config["load_plugins"]
require.False(t, slices.Contains(loadplugins.([]interface{}), unit.K8sMetaPluginName))
}
})
}
}
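The loop at the end of the test above scans the decoded `plugins` array for an entry whose `name` matches, then cross-checks `load_plugins` with `slices.Contains`. A standalone sketch of that lookup pattern (assumes Go 1.21+ for the `slices` package; the data here is a trimmed example, not the chart's full plugin list):

```go
package main

import (
	"fmt"
	"slices"
)

// findPlugin scans a decoded plugins array for the entry with the given
// name, mirroring the lookup loop in the test above.
func findPlugin(plugins []any, name string) (map[string]any, bool) {
	for _, p := range plugins {
		if m, ok := p.(map[string]any); ok && m["name"] == name {
			return m, true
		}
	}
	return nil, false
}

func main() {
	plugins := []any{
		map[string]any{"name": "k8smeta", "library_path": "libk8smeta.so"},
		map[string]any{"name": "json", "library_path": "libjson.so"},
	}
	loadPlugins := []any{"k8smeta"}

	plugin, found := findPlugin(plugins, "k8smeta")
	fmt.Println(found, plugin["library_path"]) // true libk8smeta.so

	// A plugin config entry is only useful if the plugin is also loaded.
	fmt.Println(slices.Contains(loadPlugins, any("k8smeta"))) // true
}
```

Checking both sides matters: a rendered `plugins` entry without the matching `load_plugins` name would configure the plugin but never load it.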
// Test that the helper does not overwrite user's configuration.
func TestPluginConfigurationUniqueEntries(t *testing.T) {
t.Parallel()
pluginsJSON := `[
{
"init_config": null,
"library_path": "libk8saudit.so",
"name": "k8saudit",
"open_params": "http://:9765/k8s-audit"
},
{
"library_path": "libcloudtrail.so",
"name": "cloudtrail"
},
{
"init_config": "",
"library_path": "libjson.so",
"name": "json"
},
{
"init_config": {
"collectorHostname": "rendered-resources-k8s-metacollector.default.svc",
"collectorPort": 45000,
"nodeName": "${FALCO_K8S_NODE_NAME}"
},
"library_path": "libk8smeta.so",
"name": "k8smeta"
},
{
"init_config": {
"engines": {
"bpm": {
"enabled": false
},
"containerd": {
"enabled": true,
"sockets": [
"/run/containerd/containerd.sock"
]
},
"cri": {
"enabled": true,
"sockets": [
"/run/crio/crio.sock"
]
},
"docker": {
"enabled": true,
"sockets": [
"/var/run/docker.sock"
]
},
"libvirt_lxc": {
"enabled": false
},
"lxc": {
"enabled": false
},
"podman": {
"enabled": false,
"sockets": [
"/run/podman/podman.sock"
]
}
},
"hooks": [
"create"
],
"label_max_len": 100,
"with_size": false
},
"library_path": "libcontainer.so",
"name": "container"
}
]`
loadPluginsJSON := `[
"k8smeta",
"k8saudit",
"container"
]`
helmChartPath, err := filepath.Abs(unit.ChartPath)
require.NoError(t, err)
options := &helm.Options{SetJsonValues: map[string]string{
"falco.plugins": pluginsJSON,
"falco.load_plugins": loadPluginsJSON,
}, SetValues: map[string]string{"collectors.kubernetes.enabled": "true"}}
output := helm.RenderTemplate(t, options, helmChartPath, unit.ReleaseName, []string{"templates/configmap.yaml"})
var cm corev1.ConfigMap
helm.UnmarshalK8SYaml(t, output, &cm)
var config map[string]interface{}
helm.UnmarshalK8SYaml(t, cm.Data["falco.yaml"], &config)
plugins := config["plugins"]
out, err := json.MarshalIndent(plugins, "", " ")
require.NoError(t, err)
require.Equal(t, pluginsJSON, string(out))
pluginsArray := plugins.([]interface{})
// Find the k8smeta plugin configuration.
numConfigK8smeta := 0
for _, plugin := range pluginsArray {
if name, ok := plugin.(map[string]interface{})["name"]; ok && name == unit.K8sMetaPluginName {
numConfigK8smeta++
}
}
require.Equal(t, 1, numConfigK8smeta)
// Check that the plugin has been added to the ones that need to be loaded.
loadplugins := config["load_plugins"]
require.Len(t, loadplugins.([]interface{}), 3)
require.True(t, slices.Contains(loadplugins.([]interface{}), unit.K8sMetaPluginName))
}
// Test that the helper does not overwrite user's configuration.
func TestFalcoctlRefs(t *testing.T) {
t.Parallel()
pluginsJSON := `[
{
"init_config": null,
"library_path": "libk8saudit.so",
"name": "k8saudit",
"open_params": "http://:9765/k8s-audit"
},
{
"library_path": "libcloudtrail.so",
"name": "cloudtrail"
},
{
"init_config": "",
"library_path": "libjson.so",
"name": "json"
},
{
"init_config": {
"collectorHostname": "rendered-resources-k8s-metacollector.default.svc",
"collectorPort": 45000,
"nodeName": "${FALCO_K8S_NODE_NAME}"
},
"library_path": "libk8smeta.so",
"name": "k8smeta"
}
]`
testFunc := func(t *testing.T, config any) {
// Get artifact configuration map.
configMap := config.(map[string]interface{})
artifactConfig := (configMap["artifact"]).(map[string]interface{})
// Test allowed types.
allowedTypes := artifactConfig["allowedTypes"]
require.Len(t, allowedTypes, 2)
require.True(t, slices.Contains(allowedTypes.([]interface{}), "plugin"))
require.True(t, slices.Contains(allowedTypes.([]interface{}), "rulesfile"))
// Test plugin reference.
refs := artifactConfig["install"].(map[string]interface{})["refs"].([]interface{})
require.Len(t, refs, 3)
require.True(t, slices.Contains(refs, "falco-rules:4"))
require.True(t, slices.Contains(refs, "ghcr.io/falcosecurity/plugins/plugin/k8smeta:0.3.1"))
require.True(t, slices.Contains(refs, "ghcr.io/falcosecurity/plugins/plugin/container:0.3.5"))
}
testCases := []struct {
name string
valuesJSON map[string]string
expected func(t *testing.T, config any)
}{
{
"defaultValues",
nil,
testFunc,
},
{
"setPluginConfiguration",
map[string]string{
"falco.plugins": pluginsJSON,
},
testFunc,
},
{
"driver disabled",
map[string]string{
"driver.enabled": "false",
},
func(t *testing.T, config any) {
// Get artifact configuration map.
configMap := config.(map[string]interface{})
artifactConfig := (configMap["artifact"]).(map[string]interface{})
// Test plugin reference.
refs := artifactConfig["install"].(map[string]interface{})["refs"].([]interface{})
require.True(t, !slices.Contains(refs, "ghcr.io/falcosecurity/plugins/plugin/k8smeta:0.1.0"))
},
},
}
helmChartPath, err := filepath.Abs(unit.ChartPath)
require.NoError(t, err)
for _, testCase := range testCases {
testCase := testCase
t.Run(testCase.name, func(t *testing.T) {
t.Parallel()
options := &helm.Options{SetJsonValues: testCase.valuesJSON, SetValues: map[string]string{"collectors.kubernetes.enabled": "true"}}
output := helm.RenderTemplate(t, options, helmChartPath, unit.ReleaseName, []string{"templates/falcoctl-configmap.yaml"})
var cm corev1.ConfigMap
helm.UnmarshalK8SYaml(t, output, &cm)
var config map[string]interface{}
helm.UnmarshalK8SYaml(t, cm.Data["falcoctl.yaml"], &config)
testCase.expected(t, config)
})
}
}
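The type assertions in `testFunc` walk `artifact.install.refs` out of the generic map produced by unmarshalling `falcoctl.yaml`. In isolation that navigation looks like this (the helper name and sample literal are illustrative; as in the test, a config missing any of these keys would panic on the assertion):

```go
package main

import "fmt"

// installRefs digs artifact.install.refs out of an unmarshalled
// falcoctl configuration, mirroring the assertions used in the test.
// It panics if any of the expected keys or types are missing.
func installRefs(config map[string]interface{}) []interface{} {
	artifact := config["artifact"].(map[string]interface{})
	install := artifact["install"].(map[string]interface{})
	return install["refs"].([]interface{})
}

func main() {
	// Hypothetical config shaped like the rendered falcoctl.yaml.
	config := map[string]interface{}{
		"artifact": map[string]interface{}{
			"install": map[string]interface{}{
				"refs": []interface{}{
					"falco-rules:4",
					"ghcr.io/falcosecurity/plugins/plugin/container:0.3.5",
				},
			},
		},
	}
	for _, ref := range installRefs(config) {
		fmt.Println(ref)
	}
}
```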


@@ -22,17 +22,15 @@ tolerations:
operator: Equal
value: gvisor
# Disable the driver since it is not needed.
# Enable gVisor and set the appropriate paths.
driver:
enabled: false
# Enable gVisor and set the appropriate paths.
gvisor:
enabled: true
runsc:
path: /home/containerd/usr/local/sbin
root: /run/containerd/runsc
config: /run/containerd/runsc/config.toml
kind: gvisor
gvisor:
runsc:
path: /home/containerd/usr/local/sbin
root: /run/containerd/runsc
config: /run/containerd/runsc/config.toml
# Enable the containerd collector to enrich the syscall events with metadata.
collectors:
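The hunk above mixes the removed and added lines of the gVisor example values without +/- markers, which makes it hard to read. As a sketch only — field layout assumed from the chart's current gVisor support, not copied verbatim from this diff — the new-style values select gVisor via `driver.kind` instead of a separate top-level toggle:

```yaml
# Assumed new-style gVisor configuration (sketch, not verbatim from this diff):
# gVisor is selected as a driver kind rather than via a standalone gvisor block.
driver:
  kind: gvisor
  gvisor:
    runsc:
      path: /home/containerd/usr/local/sbin
      root: /run/containerd/runsc
      config: /run/containerd/runsc/config.toml
```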
@@ -53,15 +51,13 @@ falcoctl:
config:
artifact:
install:
# -- Do not resolve the dependencies for artifacts. By default it is true, but for our use case we disable it.
resolveDeps: false
# -- List of artifacts to be installed by the falcoctl init container.
# We do not recommend installing (or following) plugins for security reasons since they are executable objects.
refs: [falco-rules:1]
refs: [falco-rules:4]
follow:
# -- List of artifacts to be followed by the falcoctl sidecar container.
# We do not recommend installing (or following) plugins for security reasons since they are executable objects.
refs: [falco-rules:1]
refs: [falco-rules:4]
# Set this to true to force Falco to output the logs as soon as they are emitted.
tty: false


@@ -14,27 +14,22 @@ controller:
# For more info check the section on Plugins in the README.md file.
replicas: 1
falcoctl:
artifact:
install:
# -- Enable the init container. We do not recommend installing (or following) plugins for security reasons since they are executable objects.
# -- Enable the init container.
enabled: true
follow:
# -- Enable the sidecar container. We do not support it yet for plugins. It is used only for rules feed such as k8saudit-rules rules.
# -- Enable the sidecar container.
enabled: true
config:
artifact:
install:
# -- Do not resolve the dependencies for artifacts. By default it is true, but for our use case we disable it.
resolveDeps: false
# -- List of artifacts to be installed by the falcoctl init container.
# Only rulesfiles; we do not recommend plugins for security reasons since they are executable objects.
refs: [k8saudit-rules:0.6]
refs: [k8saudit-rules:0.11, k8saudit:0.11]
follow:
# -- List of artifacts to be followed by the falcoctl sidecar container.
# Only rulesfiles; we do not recommend plugins for security reasons since they are executable objects.
refs: [k8saudit-rules:0.6]
refs: [k8saudit-rules:0.11]
services:
- name: k8saudit-webhook
@@ -45,7 +40,7 @@ services:
protocol: TCP
falco:
rules_file:
rules_files:
- /etc/falco/k8s_audit_rules.yaml
- /etc/falco/rules.d
plugins:


@@ -21,24 +21,19 @@ controller:
falcoctl:
artifact:
install:
# -- Enable the init container. We do not recommend installing plugins for security reasons since they are executable objects.
# We install only "rulesfiles".
# -- Enable the init container.
enabled: true
follow:
# -- Enable the sidecar container. We do not support it yet for plugins. It is used only for rules feed such as k8saudit-rules rules.
# -- Enable the sidecar container.
enabled: true
config:
artifact:
install:
# -- Do not resolve the dependencies for artifacts. By default it is true, but for our use case we disable it.
resolveDeps: false
# -- List of artifacts to be installed by the falcoctl init container.
# We do not recommend installing (or following) plugins for security reasons since they are executable objects.
refs: [falco-rules:2, k8saudit-rules:0.6]
refs: [falco-rules:4, k8saudit-rules:0.11, k8saudit:0.11]
follow:
# -- List of artifacts to be followed by the falcoctl sidecar container.
# We do not recommend installing (or following) plugins for security reasons since they are executable objects.
refs: [falco-rules:2, k8saudit-rules:0.6]
refs: [falco-rules:4, k8saudit-rules:0.11, k8saudit:0.11]
services:
- name: k8saudit-webhook
@@ -49,7 +44,7 @@ services:
protocol: TCP
falco:
rules_file:
rules_files:
- /etc/falco/falco_rules.yaml
- /etc/falco/k8s_audit_rules.yaml
- /etc/falco/rules.d

File diff suppressed because it is too large.


@@ -5,580 +5,749 @@ numbering uses [semantic versioning](http://semver.org).
Before release 0.1.20, the helm chart can be found in `falcosidekick` [repository](https://github.com/falcosecurity/falcosidekick/tree/master/deploy/helm/falcosidekick).
## 0.10.2
- Add type information to `volumeClaimTemplates`.
## 0.10.1
- Add an "or" condition for `configmap-ui`
## 0.10.0
- Add new features to the Loki dashboard
## 0.9.11
- Add `customtags` setting
## 0.9.10
- Fix missing values in the README
## 0.9.9
- Added Azure Workload Identity for Falcosidekick
## 0.9.8
- Upgrade to Falcosidekick 2.31.1 (fix last release)
## 0.9.7
- Upgrade to Falcosidekick 2.31.1
## 0.9.6
- Upgrade to Falcosidekick 2.31.0
## 0.9.5
- Move the `prometheus.io/scrape` annotation to the default values, to allow overrides.
## 0.9.4
- Fix Prometheus metrics names in Prometheus Rule
## 0.9.3
- Add a Grafana dashboard for the Prometheus metrics
## 0.9.2
- Add new dashboard with Loki
## 0.9.1
- Upgrade to Falcosidekick 2.30.0
## 0.8.9
- Fix customConfig mount path for webui redis
## 0.8.8
- Fix customConfig template for webui redis
## 0.8.7
- Fix securityContext for webui initContainer
## 0.8.6
- Use of `redis-cli` by the initContainer of Falcosidekick-UI to wait until Redis is up and running
- Add the possibility to override the default redis server settings
- Allow to set up a password to use with an external redis
- Fix wrong value used for `OTLP_TRACES_PROTOCOL` env var
- Used names for the priorities in the prometheus rules
## 0.8.5
- Fix an issue with the by default missing custom CA cert
## 0.8.4
- Fix falcosidekick chart ignoring custom service type for webui redis
## 0.8.3
- Add a condition to create the secrets for the redis only if the webui is deployed
## 0.8.2
- Fix redis-availability check of the UI init-container in case externalRedis is enabled
## 0.8.1
- Allow to set resources, securityContext and image overwrite for wait-redis initContainer
## 0.8.0
- Upgrade to Falcosidekick 2.29.0
- Allow to set custom labels and annotations on all resources
- Allow to use existing secrets and values for the env vars at the same time
- Fix missing ingressClassName settings in the values.yaml
- Add an initContainer to check if the redis for falcosidekick-ui is up
## 0.7.22
- Upgrade redis-stack image to 7.2.0-v11
## 0.7.21
- Fix the Falco Sidekick WEBUI_URL secret value.
## 0.7.20
- Align Web UI service port from values.yaml file with Falco Sidekick WEBUI_URL secret value.
## 0.7.19
- Enhanced the ServiceMonitor to support additional properties.
- Fix the promql query for prometheusRules: FalcoErrorOutputEventsRateHigh.
## 0.7.18
- Fix PrometheusRule duplicate alert name
## 0.7.17
- Fix the labels for the serviceMonitor
## 0.7.16
- Fix the error with the `NOTES` (`index of untyped nil Use`) when the ingress is enabled for falcosidekick-ui
## 0.7.15
- Fix ServiceMonitor selector labels
## 0.7.14
- Fix duplicate component labels
## 0.7.13
- Fix ServiceMonitor port name and selector labels
## 0.7.12
- Align README values with the values.yaml file
## 0.7.11
- Fix a link in the falcosidekick README to the policy report output documentation
## 0.7.10
- Set Helm recommended labels (`app.kubernetes.io/name`, `app.kubernetes.io/instance`, `app.kubernetes.io/version`, `helm.sh/chart`, `app.kubernetes.io/part-of`, `app.kubernetes.io/managed-by`) using helpers.tpl
## 0.7.9
- noop change to the chart itself. Updated makefile.
## 0.7.8
- Fix the condition for missing cert files
## 0.7.7
- Support extraArgs in the helm chart
## 0.7.6
* Fix the behavior with the `AWS IRSA` with a new value `aws.config.useirsa`
* Add a section in the README to describe how to use a subpath for `Falcosidekick-ui` ingress
* Add a `ServiceMonitor` for prometheus-operator
* Add a `PrometheusRule` for prometheus-operator
- Fix the behavior with the `AWS IRSA` with a new value `aws.config.useirsa`
- Add a section in the README to describe how to use a subpath for `Falcosidekick-ui` ingress
- Add a `ServiceMonitor` for prometheus-operator
- Add a `PrometheusRule` for prometheus-operator
## 0.7.5
* noop change just to test the ci
- noop change just to test the ci
## 0.7.4
* Fix volume mount when `config.tlsserver.servercrt`, `config.tlsserver.serverkey` and `config.tlsserver.cacrt` variables are defined.
- Fix volume mount when `config.tlsserver.servercrt`, `config.tlsserver.serverkey` and `config.tlsserver.cacrt` variables are defined.
## 0.7.3
* Allow to set (m)TLS Server cryptographic material via `config.tlsserver.servercrt`, `config.tlsserver.serverkey` and `config.tlsserver.cacrt` variables or through `config.tlsserver.existingSecret` variables.
- Allow to set (m)TLS Server cryptographic material via `config.tlsserver.servercrt`, `config.tlsserver.serverkey` and `config.tlsserver.cacrt` variables or through `config.tlsserver.existingSecret` variables.
## 0.7.2
* Fix the wrong key of the secret for the user
- Fix the wrong key of the secret for the user
## 0.7.1
* Allow to set a password `webui.redis.password` for Redis for `Falcosidekick-UI`
* The user for `Falcosidekick-UI` is now set with an env var from a secret
- Allow to set a password `webui.redis.password` for Redis for `Falcosidekick-UI`
- The user for `Falcosidekick-UI` is now set with an env var from a secret
## 0.7.0
* Support configuration of revisionHistoryLimit of the deployments
- Support configuration of revisionHistoryLimit of the deployments
## 0.6.3
* Update Falcosidekick to 2.28.0
* Add Mutual TLS Client config
* Add TLS Server config
* Add `bracketreplacer` config
* Add `customseveritymap` to `alertmanager` output
* Add Drop Event config to `alertmanager` output
* Add `customheaders` to `elasticsearch` output
* Add `customheaders` to `loki` output
* Add `customheaders` to `grafana` output
* Add `rolearn` and `externalid` for `aws` outputs
* Add `method` to `webhook` output
* Add `customattributes` to `gcp.pubsub` output
* Add `region` to `pagerduty` output
* Add `topiccreation` and `tls` to `kafka` output
* Add `Grafana OnCall` output
* Add `Redis` output
* Add `Telegram` output
* Add `N8N` output
* Add `OpenObserver` output
- Update Falcosidekick to 2.28.0
- Add Mutual TLS Client config
- Add TLS Server config
- Add `bracketreplacer` config
- Add `customseveritymap` to `alertmanager` output
- Add Drop Event config to `alertmanager` output
- Add `customheaders` to `elasticsearch` output
- Add `customheaders` to `loki` output
- Add `customheaders` to `grafana` output
- Add `rolearn` and `externalid` for `aws` outputs
- Add `method` to `webhook` output
- Add `customattributes` to `gcp.pubsub` output
- Add `region` to `pagerduty` output
- Add `topiccreation` and `tls` to `kafka` output
- Add `Grafana OnCall` output
- Add `Redis` output
- Add `Telegram` output
- Add `N8N` output
- Add `OpenObserver` output
## 0.6.2
* Fix interpolation of `SYSLOG_PORT`
- Fix interpolation of `SYSLOG_PORT`
## 0.6.1
* Add `webui.allowcors` value for `Falcosidekick-UI`
- Add `webui.allowcors` value for `Falcosidekick-UI`
## 0.6.0
* Change the docker image for the redis pod for falcosidekick-ui
- Change the docker image for the redis pod for falcosidekick-ui
## 0.5.16
* Add `affinity`, `nodeSelector` and `tolerations` values for the Falcosidekick test-connection pod
- Add `affinity`, `nodeSelector` and `tolerations` values for the Falcosidekick test-connection pod
## 0.5.15
* Set extra labels and annotations for `AlertManager` only if they're not empty
- Set extra labels and annotations for `AlertManager` only if they're not empty
## 0.5.14
* Fix Prometheus extralabels configuration in Falcosidekick
- Fix Prometheus extralabels configuration in Falcosidekick
## 0.5.13
* Fix missing quotes in Falcosidekick-UI ttl argument
- Fix missing quotes in Falcosidekick-UI ttl argument
## 0.5.12
* Fix missing space in Falcosidekick-UI ttl argument
- Fix missing space in Falcosidekick-UI ttl argument
## 0.5.11
* Fix missing space in Falcosidekick-UI arguments
- Fix missing space in Falcosidekick-UI arguments
## 0.5.10
* upgrade Falcosidekick image to 2.27.0
* upgrade Falcosidekick-UI image to 2.1.0
* Add `Yandex Data Streams` output
* Add `Node-Red` output
* Add `MQTT` output
* Add `Zincsearch` output
* Add `Gotify` output
* Add `Spyderbat` output
* Add `Tekton` output
* Add `TimescaleDB` output
* Add `AWS Security Lake` output
* Add `config.templatedfields` to set templated fields
* Add `config.slack.channel` to override `Slack` channel
* Add `config.alertmanager.extralabels` and `config.alertmanager.extraannotations` for `AlertManager` output
* Add `config.influxdb.token`, `config.influxdb.organization` and `config.influxdb.precision` for `InfluxDB` output
* Add `config.aws.checkidentity` to disallow STS checks
* Add `config.smtp.authmechanism`, `config.smtp.token`, `config.smtp.identity`, `config.smtp.trace` to manage `SMTP` auth
* Update default doc type for `Elasticsearch`
* Add `config.loki.user`, `config.loki.apikey` to manage auth to Grafana Cloud for `Loki` output
* Add `config.kafka.sasl`, `config.kafka.async`, `config.kafka.compression`, `config.kafka.balancer`, `config.kafka.clientid` to manage auth and communication for `Kafka` output
* Add `config.syslog.format` to manage the format of `Syslog` payload
* Add `webui.ttl` to set TTL of keys in Falcosidekick-UI
* Add `webui.loglevel` to set log level in Falcosidekick-UI
* Add `webui.user` to set log user:password in Falcosidekick-UI
- upgrade Falcosidekick image to 2.27.0
- upgrade Falcosidekick-UI image to 2.1.0
- Add `Yandex Data Streams` output
- Add `Node-Red` output
- Add `MQTT` output
- Add `Zincsearch` output
- Add `Gotify` output
- Add `Spyderbat` output
- Add `Tekton` output
- Add `TimescaleDB` output
- Add `AWS Security Lake` output
- Add `config.templatedfields` to set templated fields
- Add `config.slack.channel` to override `Slack` channel
- Add `config.alertmanager.extralabels` and `config.alertmanager.extraannotations` for `AlertManager` output
- Add `config.influxdb.token`, `config.influxdb.organization` and `config.influxdb.precision` for `InfluxDB` output
- Add `config.aws.checkidentity` to disallow STS checks
- Add `config.smtp.authmechanism`, `config.smtp.token`, `config.smtp.identity`, `config.smtp.trace` to manage `SMTP` auth
- Update default doc type for `Elasticsearch`
- Add `config.loki.user`, `config.loki.apikey` to manage auth to Grafana Cloud for `Loki` output
- Add `config.kafka.sasl`, `config.kafka.async`, `config.kafka.compression`, `config.kafka.balancer`, `config.kafka.clientid` to manage auth and communication for `Kafka` output
- Add `config.syslog.format` to manage the format of `Syslog` payload
- Add `webui.ttl` to set TTL of keys in Falcosidekick-UI
- Add `webui.loglevel` to set log level in Falcosidekick-UI
- Add `webui.user` to set log user:password in Falcosidekick-UI
## 0.5.9
* Fix: remove `namespace` from `clusterrole` and `clusterrolebinding` metadata
- Fix: remove `namespace` from `clusterrole` and `clusterrolebinding` metadata
## 0.5.8
* Support `storageEnabled` for `redis` to allow ephemeral installs
- Support `storageEnabled` for `redis` to allow ephemeral installs
## 0.5.7
* Removing unused Kafka config values
- Removing unused Kafka config values
## 0.5.6
* Fixing Syslog's port import in `secrets.yaml`
- Fixing Syslog's port import in `secrets.yaml`
## 0.5.5
* Add `webui.externalRedis` with `enabled`, `url` and `port` to values to set an external Redis database with RediSearch > v2 for the WebUI
* Add `webui.redis.enabled` option to disable the deployment of the database.
* `webui.redis.enabled` and `webui.externalRedis.enabled` are mutually exclusive
- Add `webui.externalRedis` with `enabled`, `url` and `port` to values to set an external Redis database with RediSearch > v2 for the WebUI
- Add `webui.redis.enabled` option to disable the deployment of the database.
- `webui.redis.enabled` and `webui.externalRedis.enabled` are mutually exclusive
## 0.5.4
* Upgrade image to fix Panic of `Prometheus` output when `customfields` is set
* Add `extralabels` for `Loki` and `Prometheus` outputs to set fields to use as labels
* Add `expiresafter` for `AlertManager` output
- Upgrade image to fix Panic of `Prometheus` output when `customfields` is set
- Add `extralabels` for `Loki` and `Prometheus` outputs to set fields to use as labels
- Add `expiresafter` for `AlertManager` output
## 0.5.3
* Support full configuration of `securityContext` blocks in falcosidekick and falcosidekick-ui deployments, and redis statefulset.
- Support full configuration of `securityContext` blocks in falcosidekick and falcosidekick-ui deployments, and redis statefulset.
## 0.5.2
* Update Falcosidekick-UI image (fix wrong redirect to localhost when an ingress is used)
- Update Falcosidekick-UI image (fix wrong redirect to localhost when an ingress is used)
## 0.5.1
* Support `ingressClassName` field in falcosidekick ingresses.
- Support `ingressClassName` field in falcosidekick ingresses.
## 0.5.0
### Major Changes
* Add `Policy Report` output
* Add `Syslog` output
* Add `AWS Kinesis` output
* Add `Zoho Cliq` output
* Support IRSA for AWS authentication
* Upgrade Falcosidekick-UI to v2.0.1
- Add `Policy Report` output
- Add `Syslog` output
- Add `AWS Kinesis` output
- Add `Zoho Cliq` output
- Support IRSA for AWS authentication
- Upgrade Falcosidekick-UI to v2.0.1
### Minor changes
* Allow to set custom Labels for pods
- Allow to set custom Labels for pods
## 0.4.5
* Allow additional service-ui annotations
- Allow additional service-ui annotations
## 0.4.4
* Fix output after chart installation when ingress is enabled
- Fix output after chart installation when ingress is enabled
## 0.4.3
* Support `annotation` block in service
- Support `annotation` block in service
## 0.4.2
* Fix: Added the rule to use the podsecuritypolicy
* Fix: Added `ServiceAccountName` to the UI deployment
- Fix: Added the rule to use the podsecuritypolicy
- Fix: Added `ServiceAccountName` to the UI deployment
## 0.4.1
* Removes duplicate `Fission` keys from secret
- Removes duplicate `Fission` keys from secret
## 0.4.0
### Major Changes
* Support Ingress API version `networking.k8s.io/v1`, see `ingress.hosts` and `webui.ingress.hosts` in [values.yaml](values.yaml) for a breaking change in the `path` parameter
- Support Ingress API version `networking.k8s.io/v1`, see `ingress.hosts` and `webui.ingress.hosts` in [values.yaml](values.yaml) for a breaking change in the `path` parameter
## 0.3.17
* Fix: Remove the value for bucket of `Yandex S3`, it enabled the output by default
- Fix: Remove the value for bucket of `Yandex S3`, it enabled the output by default
## 0.3.16
### Major Changes
* Fix: set correct new image 2.24.0
- Fix: set correct new image 2.24.0
## 0.3.15
### Major Changes
* Add `Fission` output
- Add `Fission` output
## 0.3.14
### Major Changes
* Add `Grafana` output
* Add `Yandex Cloud S3` output
* Add `Kafka REST` output
- Add `Grafana` output
- Add `Yandex Cloud S3` output
- Add `Kafka REST` output
### Minor changes
* Docker image is now available on AWS ECR Public Gallery (`--set image.registry=public.ecr.aws`)
- Docker image is now available on AWS ECR Public Gallery (`--set image.registry=public.ecr.aws`)
## 0.3.13
### Minor changes
* Enable extra volumes and volumemounts for `falcosidekick` via values
- Enable extra volumes and volumemounts for `falcosidekick` via values
## 0.3.12
* Add AWS configuration field `config.aws.rolearn`
- Add AWS configuration field `config.aws.rolearn`
## 0.3.11
### Minor changes
* Make image registries for `falcosidekick` and `falcosidekick-ui` configurable
- Make image registries for `falcosidekick` and `falcosidekick-ui` configurable
## 0.3.10
### Minor changes
* Fix table formatting in `README.md`
- Fix table formatting in `README.md`
## 0.3.9
### Fixes
* Add missing `imagePullSecrets` in `falcosidekick/templates/deployment-ui.yaml`
- Add missing `imagePullSecrets` in `falcosidekick/templates/deployment-ui.yaml`
## 0.3.8
### Major Changes
* Add `GCP Cloud Run` output
* Add `GCP Cloud Functions` output
* Add `Wavefront` output
* Allow MutualTLS for some outputs
* Add basic auth for Elasticsearch output
- Add `GCP Cloud Run` output
- Add `GCP Cloud Functions` output
- Add `Wavefront` output
- Allow MutualTLS for some outputs
- Add basic auth for Elasticsearch output
## 0.3.7
### Minor changes
* Fix table formatting in `README.md`
* Fix `config.azure.eventHub` parameter name in `README.md`
- Fix table formatting in `README.md`
- Fix `config.azure.eventHub` parameter name in `README.md`
## 0.3.6
### Fixes
* Point to the correct name of aadpodidentity
- Point to the correct name of aadpodidentity
## 0.3.5
### Minor Changes
* Fix link to Falco in the `README.md`
- Fix link to Falco in the `README.md`
## 0.3.4
### Major Changes
* Bump up version (`v1.0.1`) of image for `falcosidekick-ui`
- Bump up version (`v1.0.1`) of image for `falcosidekick-ui`
## 0.3.3
### Minor Changes
* Set default values for `OpenFaaS` output type parameters
* Fixes of documentation
- Set default values for `OpenFaaS` output type parameters
- Fixes of documentation
## 0.3.2
### Fixes
* Add config checksum annotation to deployment pods to restart pods on config change
* Fix statsd config options in the secret to make them match the docs
- Add config checksum annotation to deployment pods to restart pods on config change
- Fix statsd config options in the secret to make them match the docs
## 0.3.1
### Fixes
* Fix for `s3.bucket`, it should be empty
- Fix for `s3.bucket`, it should be empty
## 0.3.0
### Major Changes
* Add `AWS S3` output
* Add `GCP Storage` output
* Add `RabbitMQ` output
* Add `OpenFaas` output
- Add `AWS S3` output
- Add `GCP Storage` output
- Add `RabbitMQ` output
- Add `OpenFaas` output
## 0.2.9
### Major Changes
* Updated falcosidekick-ui default image version to `v0.2.0`
- Updated falcosidekick-ui default image version to `v0.2.0`
## 0.2.8
### Fixes
* Fixed to specify `kafka.hostPort` instead of `kafka.url`
- Fixed to specify `kafka.hostPort` instead of `kafka.url`
## 0.2.7
### Fixes
* Fixed missing hyphen in podidentity
- Fixed missing hyphen in podidentity
## 0.2.6
### Fixes
* Fix repo and tag for `ui` image
- Fix repo and tag for `ui` image
## 0.2.5
### Major Changes
* Add `CLOUDEVENTS` output
* Add `WEBUI` output
- Add `CLOUDEVENTS` output
- Add `WEBUI` output
### Minor Changes
* Add details about syntax for adding `custom_fields`
- Add details about syntax for adding `custom_fields`
## 0.2.4
### Minor Changes
* Add `DATADOG_HOST` to secret
- Add `DATADOG_HOST` to secret
## 0.2.3
### Minor Changes
* Allow additional pod annotations
* Remove namespace condition in aad-pod-identity
- Allow additional pod annotations
- Remove namespace condition in aad-pod-identity
## 0.2.2
### Major Changes
* Add `Kubeless` output
- Add `Kubeless` output
## 0.2.1
### Major Changes
* Add `PagerDuty` output
- Add `PagerDuty` output
## 0.2.0
### Major Changes
* Add option to use an existing secret
* Add option to add extra environment variables
* Add `Stan` output
- Add option to use an existing secret
- Add option to add extra environment variables
- Add `Stan` output
### Minor Changes
* Use the existing secret resource, add all possible variables there, and make it simpler to read and less error-prone in the deployment resource
- Use the existing secret resource, add all possible variables there, and make it simpler to read and less error-prone in the deployment resource
## 0.1.37
### Minor Changes
* Fix aws keys not being added to the deployment
- Fix aws keys not being added to the deployment
## 0.1.36
### Minor Changes
* Fix helm test
- Fix helm test
## 0.1.35
### Major Changes
* Update image to use release 2.19.1
- Update image to use release 2.19.1
## 0.1.34
* New outputs can be set : `Kafka`, `AWS CloudWatchLogs`
- New outputs can be set : `Kafka`, `AWS CloudWatchLogs`
## 0.1.33
### Minor Changes
* Fixed GCP Pub/Sub values references in `deployment.yaml`
- Fixed GCP Pub/Sub values references in `deployment.yaml`
## 0.1.32
### Major Changes
* Support release namespace configuration
- Support release namespace configuration
## 0.1.31
### Major Changes
* New outputs can be set : `Googlechat`
- New outputs can be set : `Googlechat`
## 0.1.30
### Major changes
* New output can be set : `GCP PubSub`
* Custom Headers can be set for `Webhook` output
* Fix typo `aipKey` for OpsGenie output
- New output can be set : `GCP PubSub`
- Custom Headers can be set for `Webhook` output
- Fix typo `aipKey` for OpsGenie output
## 0.1.29
* Fix falcosidekick configuration table to use full path of configuration properties in the `README.md`
- Fix falcosidekick configuration table to use full path of configuration properties in the `README.md`
## 0.1.28
### Major changes
* New output can be set : `AWS SNS`
* Metrics in `prometheus` format can be scraped from the `/metrics` URI
- New output can be set : `AWS SNS`
- Metrics in `prometheus` format can be scraped from the `/metrics` URI
## 0.1.27
### Minor Changes
* Replace extensions apiGroup/apiVersion because of deprecation
- Replace extensions apiGroup/apiVersion because of deprecation
## 0.1.26
### Minor Changes
* Allow the creation of a PodSecurityPolicy, disabled by default
- Allow the creation of a PodSecurityPolicy, disabled by default
## 0.1.25
### Minor Changes
* Allow the configuration of the Pod securityContext, set default runAsUser and fsGroup values
- Allow the configuration of the Pod securityContext, set default runAsUser and fsGroup values
## 0.1.24
### Minor Changes
* Remove duplicated `webhook` block in `values.yaml`
- Remove duplicated `webhook` block in `values.yaml`
## 0.1.23
* fake release for triggering CI for auto-publishing
- fake release for triggering CI for auto-publishing
## 0.1.22
### Major Changes
* Add `imagePullSecrets`
- Add `imagePullSecrets`
## 0.1.21
### Minor Changes
* Fix `Azure Identity` case-sensitive value
- Fix `Azure Identity` case-sensitive value
## 0.1.20
### Major Changes
* New outputs can be set : `Azure Event Hubs`, `Discord`
- New outputs can be set : `Azure Event Hubs`, `Discord`
### Minor Changes
* Fix wrong port name in output
- Fix wrong port name in output
## 0.1.17
### Major Changes
* New outputs can be set : `Mattermost`, `Rocketchat`
- New outputs can be set : `Mattermost`, `Rocketchat`
## 0.1.11
### Major Changes
* Add Pod Security Policy
- Add Pod Security Policy
## 0.1.11
### Minor Changes
* Fix wrong value reference for Elasticsearch output in deployment.yaml
- Fix wrong value reference for Elasticsearch output in deployment.yaml
## 0.1.10
### Major Changes
* New output can be set : `DogStatsD`
- New output can be set : `DogStatsD`
## 0.1.9
### Major Changes
* New output can be set : `StatsD`
- New output can be set : `StatsD`
## 0.1.7
### Major Changes
* New output can be set : `Opsgenie`
- New output can be set : `Opsgenie`
## 0.1.6
### Major Changes
* New output can be set : `NATS`
- New output can be set : `NATS`
## 0.1.5
### Major Changes
* `Falcosidekick` and its chart are now part of `falcosecurity` organization
- `Falcosidekick` and its chart are now part of `falcosecurity` organization
## 0.1.4
### Minor Changes
* Use more recent image with `Golang` 1.14
- Use more recent image with `Golang` 1.14
## 0.1.3
### Major Changes
* New output can be set : `Loki`
- New output can be set : `Loki`
## 0.1.2
### Major Changes
* New output can be set : `SMTP`
- New output can be set : `SMTP`
## 0.1.1
### Major Changes
* New outputs can be set : `AWS Lambda`, `AWS SQS`, `Teams`
- New outputs can be set : `AWS Lambda`, `AWS SQS`, `Teams`
## 0.1.0
### Major Changes
* Initial release of Falcosidekick Helm Chart
- Initial release of Falcosidekick Helm Chart


@@ -1,9 +1,9 @@
apiVersion: v1
appVersion: 2.28.0
appVersion: 2.31.1
description: Connect Falco to your ecosystem
icon: https://raw.githubusercontent.com/falcosecurity/falcosidekick/master/imgs/falcosidekick_color.png
name: falcosidekick
version: 0.7.6
version: 0.10.2
keywords:
- monitoring
- security


@@ -1,11 +0,0 @@
#generate helm documentation
DOCS_IMAGE_VERSION="v1.11.0"
docs:
docker run \
--rm \
--workdir=/helm-docs \
--volume "$$(pwd):/helm-docs" \
-u $$(id -u) \
jnorwood/helm-docs:$(DOCS_IMAGE_VERSION) \
helm-docs -t ./README.gotmpl -o ./README.md


@@ -116,7 +116,7 @@ Follow the links to get the configuration of each output.
- [**n8n**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/n8n.md)
### Other
- [**Policy Report**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/policy-reporter.md)
- [**Policy Report**](https://github.com/falcosecurity/falcosidekick/blob/master/docs/outputs/policy_report.md)
## Adding `falcosecurity` repository
@@ -150,7 +150,7 @@ After a few seconds, Falcosidekick should be running.
> **Tip**: List all releases using `helm list`, a release is a name used to track a specific deployment
## Minumiun Kubernetes version
## Minimum Kubernetes version
The minimum Kubernetes version required is 1.17.x
@@ -184,4 +184,4 @@ You may want to access the `WebUI (Falcosidekick UI)`](https://github.com/falcos
```yaml
nginx.ingress.kubernetes.io/rewrite-target: /$2
nginx.ingress.kubernetes.io/use-regex: "true"
```
```

Some files were not shown because too many files have changed in this diff.