Refactor: Remove docs which overlap with Flux website

Signed-off-by: Stefan Prodan <stefan.prodan@gmail.com>
Stefan Prodan 2022-10-19 17:28:29 +03:00
parent 0f131a0361
commit 3e935d0b8f
GPG Key ID: 3299AEB0E4085BAF
4 changed files with 98 additions and 503 deletions

README.md

@@ -6,328 +6,50 @@
[![license](https://img.shields.io/github/license/fluxcd/kustomize-controller.svg)](https://github.com/fluxcd/kustomize-controller/blob/main/LICENSE)
[![release](https://img.shields.io/github/release/fluxcd/kustomize-controller/all.svg)](https://github.com/fluxcd/kustomize-controller/releases)
The kustomize-controller is a Kubernetes operator, specialized in running
continuous delivery pipelines for infrastructure and workloads
The kustomize-controller is a [Flux](https://github.com/fluxcd/flux2) component,
specialized in running continuous delivery pipelines for infrastructure and workloads
defined with Kubernetes manifests and assembled with Kustomize.
The cluster desired state is described through a Kubernetes Custom Resource named `Kustomization`.
Based on the creation, mutation or removal of a `Kustomization` resource in the cluster,
the controller performs actions to reconcile the cluster current state with the desired state.
![overview](docs/diagrams/kustomize-controller-overview.png)
Features:
## Features
* watches for `Kustomization` objects
* fetches artifacts produced by [source-controller](https://github.com/fluxcd/source-controller) from `Source` objects
* watches `Source` objects for revision changes
* generates the `kustomization.yaml` file if needed
* generates Kubernetes manifests with kustomize build
* decrypts Kubernetes secrets with Mozilla SOPS
* validates the build output with server-side apply dry-run
* applies the generated manifests on the cluster
* generates Kubernetes manifests with Kustomize SDK
* decrypts Kubernetes secrets with Mozilla SOPS and KMS
* validates the generated manifests with Kubernetes server-side apply dry-run
* detects drift between the desired state and the cluster state
* corrects drift by patching objects with Kubernetes server-side apply
* prunes the Kubernetes objects removed from source
* checks the health of the deployed workloads
* runs `Kustomizations` in a specific order, taking into account the depends-on relationship
* notifies whenever a `Kustomization` status changes
Specifications:
## Specifications
* [API](docs/spec/v1beta2/README.md)
* [Controller](docs/spec/README.md)
## Usage
## Guides
The kustomize-controller is part of a composable [GitOps toolkit](https://fluxcd.io/flux/components/)
and depends on [source-controller](https://github.com/fluxcd/source-controller)
to acquire the Kubernetes manifests from Git repositories and S3 compatible storage buckets.
* [Get started with Flux](https://fluxcd.io/flux/get-started/)
* [Setup Notifications](https://fluxcd.io/flux/guides/notifications/)
* [Manage Kubernetes secrets with Flux and Mozilla SOPS](https://fluxcd.io/flux/guides/mozilla-sops/)
* [How to build, publish and consume OCI Artifacts with Flux](https://fluxcd.io/flux/cheatsheets/oci-artifacts/)
* [Flux and Kustomize FAQ](https://fluxcd.io/flux/faq/#kustomize-questions)
### Install the toolkit controllers
## Roadmap
Download the flux CLI:
The roadmap for the Flux family of projects can be found at <https://fluxcd.io/roadmap/>.
```bash
curl -s https://fluxcd.io/install.sh | sudo bash
```
## Contributing
Install the toolkit controllers in the `flux-system` namespace:
```bash
flux install
```
### Define a Git repository source
Create a source object that points to a Git repository containing Kubernetes and Kustomize manifests:
```yaml
apiVersion: source.toolkit.fluxcd.io/v1beta1
kind: GitRepository
metadata:
  name: podinfo
  namespace: flux-system
spec:
  interval: 1m
  url: https://github.com/stefanprodan/podinfo
  ref:
    branch: master
```
For private repositories, SSH or token based authentication can be
[configured with Kubernetes secrets](https://github.com/fluxcd/source-controller/blob/master/docs/spec/v1beta1/gitrepositories.md).
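As a minimal sketch of the token-based option (the Secret name, repository URL and credentials below are placeholders), the credentials live in a Kubernetes Secret that the `GitRepository` references through `secretRef`:

```yaml
---
apiVersion: v1
kind: Secret
metadata:
  name: podinfo-git-auth
  namespace: flux-system
stringData:
  username: git                       # for token-based HTTPS auth the username is typically not checked
  password: <personal-access-token>   # placeholder for a Git provider access token
---
apiVersion: source.toolkit.fluxcd.io/v1beta1
kind: GitRepository
metadata:
  name: podinfo-private
  namespace: flux-system
spec:
  interval: 1m
  url: https://github.com/<org>/<private-repo>
  secretRef:
    name: podinfo-git-auth
  ref:
    branch: master
```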
Save the `GitRepository` manifest and apply it on the cluster.
You can wait for the source controller to assemble an artifact from the head of the repo master branch with:
```bash
kubectl -n flux-system wait gitrepository/podinfo --for=condition=ready
```
The source controller will check for new commits in the master branch every minute. You can force a git sync with:
```bash
kubectl -n flux-system annotate --overwrite gitrepository/podinfo reconcile.fluxcd.io/requestedAt="$(date +%s)"
```
### Define a kustomization
Create a kustomization object that uses the git repository defined above:
```yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
kind: Kustomization
metadata:
  name: podinfo-dev
  namespace: flux-system
spec:
  interval: 5m
  path: "./deploy/overlays/dev/"
  prune: true
  sourceRef:
    kind: GitRepository
    name: podinfo
  healthChecks:
    - kind: Deployment
      name: frontend
      namespace: dev
    - kind: Deployment
      name: backend
      namespace: dev
  timeout: 80s
```
> **Note** that if your repository contains only plain Kubernetes manifests, the controller will
> [automatically generate](docs/spec/v1beta1/kustomization.md#generate-kustomizationyaml)
> a kustomization.yaml file inside the specified path.
A detailed explanation of the Kustomization object and its fields
can be found in the [specification doc](docs/spec/v1beta1/README.md).
Based on the above definition, the kustomize-controller fetches the Git repository content from source-controller,
generates Kubernetes manifests by running kustomize build inside `./deploy/overlays/dev/`,
and validates them with a dry-run apply. If the manifests pass validation, the controller will apply them
on the cluster and start the health assessment of the deployed workload. If the health checks are passing, the
Kustomization object status transitions to a ready state.
![workflow](docs/diagrams/kustomize-controller-flow.png)
You can wait for the kustomize controller to complete the deployment with:
```bash
kubectl -n flux-system wait kustomization/podinfo-dev --for=condition=ready
```
When the controller finishes the reconciliation, it will log the applied objects:
```bash
kubectl -n flux-system logs deploy/kustomize-controller | jq .
```
```json
{
  "level": "info",
  "ts": "2020-09-17T07:27:11.921Z",
  "logger": "controllers.Kustomization",
  "msg": "Kustomization applied in 1.436096591s",
  "kustomization": "flux-system/podinfo-dev",
  "output": {
    "namespace/dev": "created",
    "service/dev/frontend": "created",
    "deployment/dev/frontend": "created",
    "horizontalpodautoscaler/dev/frontend": "created",
    "service/dev/backend": "created",
    "deployment/dev/backend": "created",
    "horizontalpodautoscaler/dev/backend": "created"
  }
}
```
You can trigger a kustomization reconciliation any time with:
```bash
kubectl -n flux-system annotate --overwrite kustomization/podinfo-dev \
  reconcile.fluxcd.io/requestedAt="$(date +%s)"
```
When the source controller pulls a new Git revision, the kustomize controller will detect that the
source revision changed, and will reconcile those changes right away.
If the kustomization reconciliation fails, the controller sets the ready condition to `false` and logs the error:
```yaml
status:
  conditions:
    - lastTransitionTime: "2020-09-17T07:27:58Z"
      message: 'namespaces dev not found'
      reason: ReconciliationFailed
      status: "False"
      type: Ready
```
```json
{
"kustomization": "flux-system/podinfo-dev",
"error": "Error when creating 'Service/dev/frontend': namespaces dev not found"
}
```
### Control the execution order
When running a kustomization, you may need to make sure other kustomizations have been
successfully applied beforehand. A kustomization can specify a list of dependencies with `spec.dependsOn`.
When combined with health assessment, a kustomization will run after all its dependencies' health checks are passing.
For example, a service mesh proxy injector should be running before deploying applications inside the mesh:
```yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
kind: Kustomization
metadata:
  name: istio
  namespace: flux-system
spec:
  interval: 10m
  path: "./istio/system/"
  sourceRef:
    kind: GitRepository
    name: istio
  healthChecks:
    - kind: Deployment
      name: istiod
      namespace: istio-system
  timeout: 2m
---
apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
kind: Kustomization
metadata:
  name: podinfo-dev
  namespace: flux-system
spec:
  dependsOn:
    - name: istio
  interval: 5m
  path: "./deploy/overlays/dev/"
  prune: true
  sourceRef:
    kind: GitRepository
    name: podinfo
```
### Deploy releases to production
For production deployments, instead of synchronizing with a branch you can use a semver range to target stable releases:
```yaml
apiVersion: source.toolkit.fluxcd.io/v1beta1
kind: GitRepository
metadata:
  name: podinfo-releases
  namespace: flux-system
spec:
  interval: 5m
  url: https://github.com/stefanprodan/podinfo
  ref:
    semver: ">=4.0.0 <5.0.0"
```
With `ref.semver` we configure source controller to pull the Git tags and create an artifact from the most recent tag
that matches the semver range.
Create a production kustomization and reference the git source that follows the latest semver release:
```yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
kind: Kustomization
metadata:
  name: podinfo-production
  namespace: flux-system
spec:
  interval: 10m
  path: "./deploy/overlays/production/"
  sourceRef:
    kind: GitRepository
    name: podinfo-releases
```
Based on the above definition, the kustomize controller will apply the kustomization that matches the semver range
set in the Git repository.
### Configure alerting
The kustomize controller emits Kubernetes events whenever a kustomization status changes.
You can use the [notification-controller](https://github.com/fluxcd/notification-controller) to forward these events
to Slack, Microsoft Teams, Discord or Rocket.Chat.
Create a notification provider for Slack:
```yaml
apiVersion: notification.toolkit.fluxcd.io/v1beta1
kind: Provider
metadata:
  name: slack
  namespace: flux-system
spec:
  type: slack
  channel: alerts
  secretRef:
    name: slack-url
---
apiVersion: v1
kind: Secret
metadata:
  name: slack-url
  namespace: flux-system
data:
  address: <encoded-url>
```
Create an alert for a list of GitRepositories and Kustomizations:
```yaml
apiVersion: notification.toolkit.fluxcd.io/v1beta1
kind: Alert
metadata:
  name: on-call
  namespace: flux-system
spec:
  providerRef:
    name: slack
  eventSeverity: info
  eventSources:
    - kind: GitRepository
      name: podinfo-releases
    - kind: Kustomization
      name: podinfo-production
```
Multiple alerts can be used to send notifications to different channels or Slack organizations.
The event severity can be set to `info` or `error`.
When the severity is set to `error`, the controller will alert on any error encountered during the
reconciliation process. This includes kustomize build and validation errors, apply errors and
health check failures.
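As a hedged sketch, an additional Alert (the name is hypothetical) that forwards only reconciliation errors for the production pipeline could look like:

```yaml
---
apiVersion: notification.toolkit.fluxcd.io/v1beta1
kind: Alert
metadata:
  name: on-call-errors
  namespace: flux-system
spec:
  providerRef:
    name: slack
  eventSeverity: error          # alert only on failures
  eventSources:
    - kind: Kustomization
      name: podinfo-production
```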
![error alert](docs/diagrams/slack-error-alert.png)
When the severity is set to `info`, the controller will alert if:
* a Kubernetes object was created, updated or deleted
* health checks are passing
* a dependency is delaying the execution
* an error occurs
![info alert](docs/diagrams/slack-info-alert.png)
This project is Apache 2.0 licensed and accepts contributions via GitHub pull requests.
To start contributing please see the [development guide](DEVELOPMENT.md).


@@ -1,6 +1,6 @@
# Kustomize Controller
The Kustomize Controller is a Kubernetes operator, specialized in running
The kustomize-controller is a Kubernetes operator, specialized in running
continuous delivery pipelines for infrastructure and workloads
defined with Kubernetes manifests and assembled with Kustomize.
@@ -34,158 +34,20 @@ of the frontend app was deployed and if the deployment is healthy, no matter the
The reconciliation process can be defined with a Kubernetes custom resource
that describes a pipeline such as:
- **check** if depends-on conditions are met
- **fetch** manifests from source-controller (Git repository or S3 bucket)
- **generate** a kustomization if needed
- **build** the manifest using kustomization X
- **decrypt** Kubernetes secrets using Mozilla SOPS
- **validate** the resulting objects
- **impersonate** Kubernetes account
- **apply** the objects
- **fetch** manifests from source-controller
- **generate** `kustomization.yaml` if needed
- **build** the manifests using the Kustomize SDK
- **decrypt** Kubernetes secrets using Mozilla SOPS SDK
- **impersonate** the tenant's Kubernetes account
- **validate** the resulting objects using server-side apply dry-run
- **detect drift** between the desired state and the cluster state
- **correct drift** by applying the objects using server-side apply
- **prune** the objects removed from source
- **verify** the deployment status
- **alert** if something went wrong
- **notify** if the cluster state changed
- **wait** for the applied changes to rollout using Kubernetes kstatus library
- **report** the reconciliation result in the `status` sub-resource
- **alert** if something went wrong by sending events to Kubernetes API and notification-controller
- **notify** if the cluster state changed by sending events to Kubernetes API and notification-controller
The controller that runs these pipelines relies on
[source-controller](https://github.com/fluxcd/source-controller)
for providing the raw manifests from Git repositories or any
other source that source-controller could support in the future.
If a Git repository contains no Kustomize manifests, the controller can
generate the `kustomization.yaml` file automatically and label
the objects for garbage collection (GC).
A pipeline runs on a schedule and can be triggered manually by a
cluster admin or automatically by a source event such as a Git revision change.
When a pipeline is removed from the cluster, the controller's GC terminates
all the objects previously created by that pipeline.
A pipeline can be suspended, while in suspension the controller
stops the scheduler and ignores any source events.
Deleting a suspended pipeline does not trigger garbage collection.
Alerting can be configured with a Kubernetes custom resource
that specifies a webhook address, and a group of pipelines to be monitored.
The API design of the controller can be found at [kustomize.toolkit.fluxcd.io/v1beta1](v1beta1/README.md).
## Backward compatibility
| Feature | Kustomize Controller | Flux v1 |
| -------------------------------------------- | ----------------------- | ------------------ |
| Plain Kubernetes manifests sync | :heavy_check_mark: | :heavy_check_mark: |
| Kustomize build sync | :heavy_check_mark: | :heavy_check_mark: |
| Garbage collection | :heavy_check_mark: | :heavy_check_mark: |
| Secrets decryption | :heavy_check_mark: | :heavy_check_mark: |
| Generate manifests with shell scripts | :x: | :heavy_check_mark: |
Syncing will not support the `.flux.yaml` mechanism as running shell scripts and binaries to
generate manifests is not in the scope of Kustomize controller.
## Example
After installing kustomize-controller and its companion source-controller, we
can create a series of pipelines for deploying Istio, and an application made of
multiple services.
Create a source that points to where the Istio control plane manifests are,
and a kustomization for installing/upgrading Istio:
```yaml
apiVersion: source.toolkit.fluxcd.io/v1beta1
kind: GitRepository
metadata:
  name: istio
  namespace: flux-system
spec:
  interval: 5m
  url: https://github.com/stefanprodan/gitops-istio
  ref:
    branch: master
---
apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: istio
  namespace: flux-system
spec:
  interval: 10m
  path: "./istio/"
  sourceRef:
    kind: GitRepository
    name: istio
  healthChecks:
    - kind: Deployment
      name: istiod
      namespace: istio-system
  timeout: 2m
```
Create a source for the app repo, a kustomization for each service defining depends-on relationships:
```yaml
apiVersion: source.toolkit.fluxcd.io/v1beta1
kind: GitRepository
metadata:
  name: webapp
  namespace: flux-system
spec:
  interval: 1m
  url: https://github.com/stefanprodan/podinfo-deploy
  ref:
    branch: master
---
apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: webapp-common
  namespace: flux-system
spec:
  dependsOn:
    - name: istio
  interval: 5m
  path: "./webapp/common/"
  prune: true
  sourceRef:
    kind: GitRepository
    name: webapp
  validation: client
---
apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: webapp-backend
  namespace: flux-system
spec:
  dependsOn:
    - name: webapp-common
  interval: 5m
  path: "./webapp/backend/"
  prune: true
  sourceRef:
    kind: GitRepository
    name: webapp
  validation: server
  healthChecks:
    - kind: Deployment
      name: backend
      namespace: webapp
---
apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: webapp-frontend
  namespace: flux-system
spec:
  dependsOn:
    - name: webapp-backend
  interval: 5m
  path: "./webapp/frontend/"
  prune: true
  sourceRef:
    kind: GitRepository
    name: webapp
  validation: server
```
## Specifications
The latest API specifications can be found [here](v1beta2/README.md).


@@ -6,6 +6,8 @@ of Kubernetes objects generated with Kustomize.
## Specification
- [Kustomization CRD](kustomization.md)
+ [Example](kustomization.md#example)
+ [Recommended settings](kustomization.md#recommended-settings)
+ [Source reference](kustomization.md#source-reference)
+ [Generate kustomization.yaml](kustomization.md#generate-kustomizationyaml)
+ [Reconciliation](kustomization.md#reconciliation)


@@ -127,7 +127,7 @@ spec:
  timeout: 3m0s # give up waiting after three minutes
  retryInterval: 2m0s # retry every two minutes on apply or waiting failures
  prune: true # remove stale resources from cluster
  force: true # recreate resources on immutable fields changes
  force: false # enable this to recreate resources on immutable fields changes
  targetNamespace: apps # set the namespace for all resources
  sourceRef:
    kind: GitRepository
@@ -136,6 +136,8 @@ spec:
path: "./deploy/production"
```
### Disable Kustomize remote bases
For security and performance reasons, it is advised to disallow the usage of
[remote bases](https://github.com/kubernetes-sigs/kustomize/blob/a7f4db7fb41e17b2c826a524f545e6174b4dc6ac/examples/remoteBuild.md)
in Kustomize overlays. To enforce this setting, platform admins can use the `--no-remote-bases=true` controller flag.
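One possible way to set the flag, sketched here under the assumption of a standard Flux bootstrap layout with `gotk-components.yaml`, is a Kustomize patch on the kustomize-controller Deployment in the `flux-system` overlay:

```yaml
# flux-system/kustomization.yaml (assumed bootstrap layout)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - gotk-components.yaml
  - gotk-sync.yaml
patches:
  # JSON6902 patch that appends the flag to the controller arguments
  - patch: |
      - op: add
        path: /spec/template/spec/containers/0/args/-
        value: --no-remote-bases=true
    target:
      kind: Deployment
      name: kustomize-controller
```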
@@ -149,14 +151,15 @@ changes, it generates a Kubernetes event that triggers a kustomize build and app
Source supported types:
* [GitRepository](https://github.com/fluxcd/source-controller/blob/main/docs/spec/v1beta2/gitrepositories.md)
* [OCIRepository](https://github.com/fluxcd/source-controller/blob/main/docs/spec/v1beta2/ocirepositories.md)
* [Bucket](https://github.com/fluxcd/source-controller/blob/main/docs/spec/v1beta2/buckets.md)
- [GitRepository](https://github.com/fluxcd/source-controller/blob/main/docs/spec/v1beta2/gitrepositories.md)
- [OCIRepository](https://github.com/fluxcd/source-controller/blob/main/docs/spec/v1beta2/ocirepositories.md)
- [Bucket](https://github.com/fluxcd/source-controller/blob/main/docs/spec/v1beta2/buckets.md)
> **Note** that the source should contain the kustomization.yaml and all the
> Kubernetes manifests and configuration files referenced in the kustomization.yaml.
> If your Git repository or S3 bucket contains only plain manifests,
> then a kustomization.yaml will be automatically generated.
**Note:** If the source contains a `kustomization.yaml` file, then it should also contain
all the Kubernetes manifests and configuration files referenced in the Kustomize config file.
If your Git, OCI repository or S3 bucket contains **plain manifests**,
then a kustomization.yaml will be [automatically generated](#generate-kustomizationyaml)
by the controller.
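For example, a minimal sketch (object names are hypothetical) of a Flux Kustomization that builds manifests fetched by an OCIRepository:

```yaml
---
apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
kind: Kustomization
metadata:
  name: podinfo-oci
  namespace: flux-system
spec:
  interval: 10m
  path: "./"
  prune: true
  sourceRef:
    kind: OCIRepository
    name: podinfo-oci
```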
### Cross-namespace references
@@ -187,12 +190,12 @@ If your repository contains plain Kubernetes manifests, the
manifests in the directory tree specified in the `spec.path` field of the Flux `Kustomization`.
All YAML files present under that path must be valid Kubernetes
manifests, unless they're excluded either by way of the `.sourceignore`
file or the `spec.ignore` field on the corresponding `GitRepository` object.
file or the `spec.ignore` field on the corresponding source object.
Example of excluding CI workflows and SOPS config files:
```yaml
apiVersion: source.toolkit.fluxcd.io/v1beta1
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: GitRepository
metadata:
  name: podinfo
@@ -201,7 +204,6 @@ spec:
  interval: 5m
  url: https://github.com/stefanprodan/podinfo
  ignore: |
    .git/
    .github/
    .sops.yaml
    .gitlab-ci.yml
@@ -209,16 +211,13 @@ spec:
It is recommended to generate the `kustomization.yaml` on your own and store it in Git; this way you can
validate your manifests in CI (example script [here](https://github.com/fluxcd/flux2-multi-tenancy/blob/main/scripts/validate.sh)).
Assuming your manifests are inside `./clusters/my-cluster`, you can generate a `kustomization.yaml` with:
Assuming your manifests are inside `apps/my-app`, you can generate a `kustomization.yaml` with:
```sh
cd clusters/my-cluster
cd apps/my-app
# create kustomization
# create kustomization.yaml
kustomize create --autodetect --recursive
# validate kustomization
kustomize build | kubeconform -ignore-missing-schemas
```
## Reconciliation
@@ -260,7 +259,7 @@ You can configure the controller to ignore in-cluster resources by labeling or a
kustomize.toolkit.fluxcd.io/reconcile: disabled
```
Note that when the `kustomize.toolkit.fluxcd.io/reconcile` annotation is set to `disabled`,
**Note:** When the `kustomize.toolkit.fluxcd.io/reconcile` annotation is set to `disabled`,
the controller will no longer apply changes from source, nor will it prune the resource.
To resume reconciliation, set the annotation to `enabled` or remove it.
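For illustration, a sketch of an in-cluster object (name and contents are hypothetical) that the controller will skip:

```yaml
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: legacy-config                  # hypothetical resource managed outside of Flux
  namespace: default
  annotations:
    kustomize.toolkit.fluxcd.io/reconcile: disabled
data:
  managed-by: "operations team"
```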
@@ -279,7 +278,7 @@ Another option is to annotate or label objects with:
kustomize.toolkit.fluxcd.io/ssa: merge
```
Note that the fields defined in manifests will always be overridden,
**Note:** The fields defined in manifests will always be overridden,
the above procedure works only for adding new fields that don't overlap with the desired state.
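A sketch of the merge behaviour (object name and data are hypothetical): fields present in source, like the one below, stay enforced by the controller, while extra keys added in-cluster by other tooling are left in place:

```yaml
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: feature-flags                  # hypothetical object also written to by another controller
  namespace: default
  annotations:
    kustomize.toolkit.fluxcd.io/ssa: merge
data:
  defaults: "on"                       # defined in source, always overridden to this value
```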
## Garbage collection
@@ -329,18 +328,20 @@ spec:
```
If you wish to select only certain resources, list them under `spec.healthChecks`.
Note that when `spec.wait` is enabled, the `spec.healthChecks` field is ignored.
**Note:** When `spec.wait` is enabled, the `spec.healthChecks` field is ignored.
A health check entry can reference one of the following types:
* Kubernetes builtin kinds: Deployment, DaemonSet, StatefulSet, PersistentVolumeClaim, Pod, PodDisruptionBudget, Job, CronJob, Service, Secret, ConfigMap, CustomResourceDefinition
* Toolkit kinds: HelmRelease, HelmRepository, GitRepository, etc
* Custom resources that are compatible with [kstatus](https://github.com/kubernetes-sigs/cli-utils/tree/master/pkg/kstatus)
- Kubernetes builtin kinds: Deployment, DaemonSet, StatefulSet, PersistentVolumeClaim, Pod, PodDisruptionBudget, Job, CronJob, Service, Secret, ConfigMap, CustomResourceDefinition
- Toolkit kinds: HelmRelease, HelmRepository, GitRepository, etc
- Custom resources that are compatible with [kstatus](https://github.com/kubernetes-sigs/cli-utils/tree/master/pkg/kstatus)
Assuming the Kustomization source contains a Kubernetes Deployment named `backend`,
a health check can be defined as follows:
```yaml
---
apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
kind: Kustomization
metadata:
@@ -371,6 +372,7 @@ When a Kustomization contains HelmRelease objects, instead of checking the under
define a health check that waits for the HelmReleases to be reconciled with:
```yaml
---
apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
kind: Kustomization
metadata:
@@ -410,12 +412,13 @@ its last apply status condition.
Assuming two Kustomizations:
* `cert-manager` - reconciles the cert-manager CRDs and controller
* `certs` - reconciles the cert-manager custom resources
- `cert-manager` - reconciles the cert-manager CRDs and controller
- `certs` - reconciles the cert-manager custom resources
You can instruct the controller to apply the `cert-manager` Kustomization before `certs`:
```yaml
---
apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
kind: Kustomization
metadata:
@@ -453,8 +456,8 @@ spec:
When combined with health assessment, a Kustomization will run after all its dependencies' health checks are passing.
For example, a service mesh proxy injector should be running before deploying applications inside the mesh.
> **Note** that circular dependencies between Kustomizations must be avoided, otherwise the
> interdependent Kustomizations will never be applied on the cluster.
**Note:** Circular dependencies between Kustomizations must be avoided, otherwise the
interdependent Kustomizations will never be applied on the cluster.
## Role-based access control
@@ -468,6 +471,7 @@ Assuming you want to restrict a group of Kustomizations to a single namespace, y
with a role binding that grants access only to that namespace:
```yaml
---
apiVersion: v1
kind: Namespace
metadata:
@@ -504,13 +508,14 @@ subjects:
namespace: webapp
```
> **Note** that the namespace, RBAC and service account manifests should be
> placed in a Git source and applied with a Kustomization. The Kustomizations that
> are running under that service account should depend-on the one that contains the account.
**Note:** The namespace, RBAC and service account manifests should be
placed in a Git source and applied with a Kustomization. The Kustomizations that
are running under that service account should depend-on the one that contains the account.
Create a Kustomization that prevents altering the cluster state outside of the `webapp` namespace:
```yaml
---
apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
kind: Kustomization
metadata:
@@ -580,6 +585,7 @@ patch or a [JSON6902](https://kubectl.docs.kubernetes.io/references/kustomize/gl
The patch can target a single resource or multiple resources:
```yaml
---
apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
kind: Kustomization
metadata:
@@ -675,13 +681,14 @@ for [bash string replacement functions](https://github.com/drone/envsubst) e.g.:
- `${var:position:length}`
- `${var/substring/replacement}`
Note that the name of a variable can contain only alphanumeric and underscore characters.
**Note:** The name of a variable can contain only alphanumeric and underscore characters.
The controller validates the var names using this regular expression:
`^[_[:alpha:]][_[:alpha:][:digit:]]*$`.
Assuming you have manifests with the following variables:
```yaml
---
apiVersion: v1
kind: Namespace
metadata:
@@ -695,6 +702,7 @@ You can specify the variables and their values in the Kustomization definition u
`substitute` and/or `substituteFrom` post build section:
```yaml
---
apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
kind: Kustomization
metadata:
@@ -716,9 +724,10 @@ spec:
# Fail if this Secret does not exist.
```
Note that for substituting variables in a secret, `spec.stringData` field must be used i.e
**Note:** For substituting variables in a Secret, the `spec.stringData` field must be used, i.e.:
```yaml
---
apiVersion: v1
kind: Secret
metadata:
@@ -732,7 +741,7 @@ stringData:
The var values which are specified in-line with `substitute`
take precedence over the ones in `substituteFrom`.
Note that if you want to avoid var substitutions in scripts embedded in ConfigMaps or container commands,
**Note:** If you want to avoid var substitutions in scripts embedded in ConfigMaps or container commands,
you must use the format `$var` instead of `${var}`. If you want to keep the curly braces you can use `$${var}`
which will print out `${var}`.
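A short sketch of this escaping behaviour (the ConfigMap and variable names are hypothetical):

```yaml
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: startup-script                 # hypothetical ConfigMap shipped with the Kustomization
data:
  run.sh: |
    #!/usr/bin/env bash
    echo "cluster: ${cluster_env}"     # substituted by the controller at build time
    echo "home: $HOME"                 # plain $var form, left untouched
    echo "literal: $${cluster_env}"    # applied as ${cluster_env}
```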
@@ -854,11 +863,11 @@ kubectl create secret generic prod-kubeconfig \
--from-file=value.yaml=./kubeconfig
```
> **Note** that the KubeConfig should be self-contained and not rely on binaries, environment,
> or credential files from the kustomize-controller Pod.
> This matches the constraints of KubeConfigs from current Cluster API providers.
> KubeConfigs with `cmd-path` in them likely won't work without a custom,
> per-provider installation of kustomize-controller.
**Note:** The KubeConfig should be self-contained and not rely on binaries, environment,
or credential files from the kustomize-controller Pod.
This matches the constraints of KubeConfigs from current Cluster API providers.
KubeConfigs with `cmd-path` in them likely won't work without a custom,
per-provider installation of kustomize-controller.
When both `spec.kubeConfig` and `spec.serviceAccountName` are specified,
the controller will impersonate the service account on the target cluster.
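Putting this together, a hedged sketch of a Kustomization (its name and source reference are hypothetical) that reconciles a remote cluster using the `prod-kubeconfig` Secret created above:

```yaml
---
apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
kind: Kustomization
metadata:
  name: prod-apps
  namespace: flux-system
spec:
  interval: 10m
  path: "./deploy/production"
  prune: true
  sourceRef:
    kind: GitRepository
    name: podinfo
  kubeConfig:
    secretRef:
      name: prod-kubeconfig            # Secret holding the value.yaml kubeconfig
```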
@@ -871,10 +880,10 @@ and encrypt your Kubernetes Secrets data with [age](https://age-encryption.org/v
and [OpenPGP](https://www.openpgp.org) keys, or using provider
implementations like Azure Key Vault, GCP KMS or Hashicorp Vault.
> **Note:** You should encrypt only the `data` section of the Kubernetes
> Secret, encrypting the `metadata`, `kind` or `apiVersion` fields is not
> supported. An easy way to do this is by appending
> `--encrypted-regex '^(data|stringData)$'` to your `sops --encrypt` command.
**Note:** You should encrypt only the `data` section of the Kubernetes
Secret, encrypting the `metadata`, `kind` or `apiVersion` fields is not
supported. An easy way to do this is by appending
`--encrypted-regex '^(data|stringData)$'` to your `sops --encrypt` command.
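One hedged way to enforce this by default is a SOPS configuration file committed at the repository root (the age recipient below is a placeholder):

```yaml
# .sops.yaml
creation_rules:
  - path_regex: .*\.yaml
    encrypted_regex: '^(data|stringData)$'
    age: <age-public-key>              # placeholder for your age recipient
```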
### Decryption Secret reference
@@ -1171,9 +1180,9 @@ In addition to this, the
[general SOPS documentation around KMS AWS applies](https://github.com/mozilla/sops#27kms-aws-profiles),
allowing you to specify e.g. a `SOPS_KMS_ARN` environment variable.
> **Note:**: If you're mounting a secret containing the AWS credentials as a file in the `kustomize-controller` pod,
> you'd need to specify an environment variable `$HOME`, since the AWS credentials file is expected to be present
> at `~/.aws`, like so:
**Note:** If you're mounting a secret containing the AWS credentials as a file in the `kustomize-controller` pod,
you'd need to specify an environment variable `$HOME`, since the AWS credentials file is expected to be present
at `~/.aws`, like so:
```yaml
env:
- name: HOME
@@ -1422,7 +1431,7 @@ status:
lastAttemptedRevision: master/7c500d302e38e7e4a3f327343a8a5c21acaaeb87
```
> **Note** that the last applied revision is updated only on a successful reconciliation.
**Note:** The last applied revision is updated only on a successful reconciliation.
When a reconciliation fails, the controller logs the error and issues a Kubernetes event: