The algorithm for conditionally downloading object files is the same whether you are using GCP storage or an S3/Minio-compatible bucket. The only thing that differs is how the respective clients enumerate the objects in the bucket; by implementing just that in each provider, I can have the select-and-fetch code in one place.

This deliberately omits the parallelised fetching that the GCP client had, for the sake of lining the clients up. It can be reintroduced (in the factored-out code) later.

Signed-off-by: Michael Bridgen <michael@weave.works>
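A minimal sketch in Go of how this factoring might look. The names here (`ObjectLister`, `VisitObjects`, `FGetObject`, `fetchObjects`, `shouldInclude`) are illustrative assumptions, not the actual code; the point is that each provider implements only enumeration and single-object download, while selection and fetching are shared:

```go
package bucket

import (
	"context"
	"path/filepath"
)

// ObjectLister is the hypothetical per-provider surface: the GCP and
// S3/Minio clients each implement only object enumeration and single-object
// download; everything else is shared.
type ObjectLister interface {
	// VisitObjects calls visit for every object key in the bucket.
	VisitObjects(ctx context.Context, bucket string, visit func(key string) error) error
	// FGetObject downloads a single object to localPath.
	FGetObject(ctx context.Context, bucket, key, localPath string) error
}

// fetchObjects is the shared select-and-fetch code: it enumerates objects
// via the provider, skips those excluded by shouldInclude, and downloads the
// rest sequentially (the parallelised fetching mentioned above is omitted
// here, matching the commit's description).
func fetchObjects(ctx context.Context, client ObjectLister, bucket, dir string,
	shouldInclude func(key string) bool) error {
	return client.VisitObjects(ctx, bucket, func(key string) error {
		if !shouldInclude(key) {
			return nil // conditionally skip, e.g. matched by an ignore rule
		}
		return client.FGetObject(ctx, bucket, key, filepath.Join(dir, key))
	})
}
```

With this shape, reintroducing parallelism would only touch `fetchObjects`, not the providers.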
README.md
Source controller
The source-controller is a Kubernetes operator specialised in artifact acquisition from external sources such as Git repositories, Helm repositories, and S3 buckets. It implements the source.toolkit.fluxcd.io API and is a core component of the GitOps toolkit.
Features:
- authenticates to sources (SSH, user/password, API token)
- validates source authenticity (PGP)
- detects source changes based on update policies (semver)
- fetches resources on demand and on a schedule
- packages the fetched resources into a well-known format (tar.gz, yaml)
- makes the artifacts addressable by their source identifier (SHA, version, timestamp)
- makes the artifacts available in-cluster to interested 3rd parties (see the sketch after this list)
- notifies interested 3rd parties of source changes and availability (status conditions, events, hooks)
- reacts to Git push and Helm chart upload events (via notification-controller)
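To illustrate in-cluster availability, here is a minimal sketch of how a consumer might download a packaged artifact over HTTP. The URL below is a hypothetical example; real consumers would read the artifact URL from the source object's status rather than hard-coding it:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
)

func main() {
	// Hypothetical in-cluster artifact URL; in practice this comes from the
	// source object's status reported by the source-controller.
	url := "http://source-controller.flux-system.svc/gitrepository/default/app/latest.tar.gz"

	resp, err := http.Get(url)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer resp.Body.Close()

	// Save the packaged artifact (tar.gz) locally for unpacking.
	out, err := os.Create("artifact.tar.gz")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer out.Close()

	if _, err := io.Copy(out, resp.Body); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```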