Based on recommendations from Microsoft, change the order in which valid
authentication options are taken into account. This is mainly to ensure
authentication works as expected when multiple Managed Identities are
bound to the same VM node.
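As a rough sketch of the intended ordering (not the actual implementation),
explicitly configured credentials would be evaluated before any ambient
Managed Identity. The helper below assumes the `azidentity` SDK and uses a
hypothetical function name:

```go
package azure

import (
	"github.com/Azure/azure-sdk-for-go/sdk/azcore"
	"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
)

// buildCredentialChain is a hypothetical helper: credentials derived from
// explicit configuration are evaluated first, and a Managed Identity
// credential is only appended as a last resort, so multiple identities
// bound to the same VM node cannot shadow the configured one.
func buildCredentialChain(explicit ...azcore.TokenCredential) (azcore.TokenCredential, error) {
	chain := append([]azcore.TokenCredential{}, explicit...)
	if mi, err := azidentity.NewManagedIdentityCredential(nil); err == nil {
		chain = append(chain, mi)
	}
	return azidentity.NewChainedTokenCredential(chain, nil)
}
```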
Signed-off-by: Hidde Beydals <hello@hidde.co>
This commit allows a Secret to be configured with `tenantId`, `clientId`
and `clientCertificate` data fields (and optionally a
`clientCertificatePassword`) to authenticate using a client certificate.
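A minimal sketch of how such a Secret could be turned into a credential,
assuming the `azidentity` SDK; the helper name is hypothetical:

```go
package azure

import (
	"github.com/Azure/azure-sdk-for-go/sdk/azcore"
	"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
)

// clientCertificateCredential is a hypothetical helper that builds a
// credential from the Secret data fields named in this commit.
func clientCertificateCredential(data map[string][]byte) (azcore.TokenCredential, error) {
	// The password may be absent, in which case a nil slice is passed.
	certs, key, err := azidentity.ParseCertificates(
		data["clientCertificate"], data["clientCertificatePassword"])
	if err != nil {
		return nil, err
	}
	return azidentity.NewClientCertificateCredential(
		string(data["tenantId"]), string(data["clientId"]), certs, key, nil)
}
```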
Signed-off-by: Hidde Beydals <hello@hidde.co>
This commit introduces an Azure Blob BucketProvider implementation,
capable of fetching objects from public and private "container" buckets.
The supported credential types are:
- ManagedIdentity with a `resourceId` Secret data field.
- ManagedIdentity with a `clientId` Secret data field.
- ClientSecret with `tenantId`, `clientId` and `clientSecret` Secret
data fields.
- SharedKey with an `accountKey` Secret data field; the Account Name is
extracted from the endpoint URL specified on the object.
If no Secret is provided, the Bucket is assumed to be public.
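A rough sketch of the credential selection described above (SharedKey
handling omitted for brevity); the helper name is hypothetical and the SDK
calls are from `azidentity`:

```go
package azure

import (
	"github.com/Azure/azure-sdk-for-go/sdk/azcore"
	"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
)

// credentialFromSecret is a hypothetical helper that picks a token
// credential based on the Secret data fields that are present. A nil
// credential means the bucket is treated as public.
func credentialFromSecret(data map[string][]byte) (azcore.TokenCredential, error) {
	switch {
	case len(data["clientSecret"]) > 0:
		// ClientSecret: tenantId + clientId + clientSecret.
		return azidentity.NewClientSecretCredential(
			string(data["tenantId"]), string(data["clientId"]),
			string(data["clientSecret"]), nil)
	case len(data["clientId"]) > 0:
		// ManagedIdentity bound to a specific client ID.
		return azidentity.NewManagedIdentityCredential(&azidentity.ManagedIdentityCredentialOptions{
			ID: azidentity.ClientID(string(data["clientId"])),
		})
	case len(data["resourceId"]) > 0:
		// ManagedIdentity bound to a specific resource ID.
		return azidentity.NewManagedIdentityCredential(&azidentity.ManagedIdentityCredentialOptions{
			ID: azidentity.ResourceID(string(data["resourceId"])),
		})
	}
	return nil, nil
}
```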
Co-authored-by: Zhongcheng Lao <Zhongcheng.Lao@microsoft.com>
Signed-off-by: Hidde Beydals <hello@hidde.co>
Reuses the same transport across different Helm chart downloads, whilst
resetting the TLS config to avoid cross-contamination. Crypto material is
now only processed in memory and never touches the disk.
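Illustrative only: one way such transport reuse can be structured, with the
TLS config cleared before a transport goes back into the pool; the pool and
helper names below are hypothetical:

```go
package transport

import (
	"net/http"
	"sync"
)

var pool = sync.Pool{
	New: func() interface{} {
		// Base transport shared across chart downloads.
		return &http.Transport{Proxy: http.ProxyFromEnvironment}
	},
}

// Acquire returns a pooled transport for a single chart download.
func Acquire() *http.Transport {
	return pool.Get().(*http.Transport)
}

// Release resets the TLS config so crypto material configured for one
// download cannot leak into the next, then returns the transport.
func Release(t *http.Transport) {
	t.TLSClientConfig = nil
	pool.Put(t)
}
```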
Signed-off-by: Paulo Gomes <paulo.gomes@weave.works>
This commit updates to a patched version of Helm 3.8.0; the patches deal
with a memory leak and HTTP transport issues, the latter being described
in https://github.com/fluxcd/source-controller/issues/578.
Signed-off-by: Hidde Beydals <hello@hidde.co>
This commit introduces a BucketProvider interface for fetch operations
against object storage provider buckets, allowing for easier introduction
of new provider implementations.
The algorithm for conditionally downloading object files is the same
whether you are using GCP storage or an S3/Minio-compatible bucket. The
only thing that differs is how the respective clients handle enumerating
the objects in the bucket; by implementing just that in each provider, the
select-and-fetch code can live in one place.
The client implementations now include safeguards to ensure the fetched
object is the same one the metadata was collected for. In addition, minor
changes have been made to the object fetch operation to take into account
that:
- Etags can change between composition of the index and the actual fetch,
in which case the etag is now updated.
- Objects can disappear between composition of the index and the actual
fetch, in which case the item is removed from the index.
Lastly, the requirement for authentication has been removed (not referring
to a Secret at all is thus allowed), to provide support for e.g. public
buckets.
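A simplified sketch of what such an interface could look like; the method
names and signatures below are illustrative, not necessarily the exact
ones introduced here:

```go
package bucket

import "context"

// BucketProvider abstracts the provider-specific client, so that the
// select-and-fetch logic can be shared across implementations.
type BucketProvider interface {
	// BucketExists reports whether the named bucket can be reached.
	BucketExists(ctx context.Context, bucketName string) (bool, error)
	// VisitObjects calls visit for every object in the bucket with its
	// key and etag, letting the caller build the index.
	VisitObjects(ctx context.Context, bucketName string, visit func(key, etag string) error) error
	// FGetObject downloads an object to localPath and returns the etag of
	// the fetched content, which may differ from the indexed one.
	FGetObject(ctx context.Context, bucketName, key, localPath string) (string, error)
	// Close releases any resources held by the client.
	Close(ctx context.Context)
}
```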
Co-authored-by: Hidde Beydals <hello@hidde.co>
Co-authored-by: Michael Bridgen <michael@weave.works>
Signed-off-by: pa250194 <pa250194@ncr.com>
This adds a Size field to Artifacts, which reflects the number of bytes
written to the artifact when it's being archived.
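For illustration, the field on the Artifact type would look roughly like
this (the markers and comments are assumptions):

```go
package v1beta2

// Artifact (abridged); only the new field is shown.
type Artifact struct {
	// ... existing fields elided ...

	// Size is the number of bytes written to the artifact when it was
	// archived.
	// +optional
	Size *int64 `json:"size,omitempty"`
}
```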
Signed-off-by: Kevin McDermott <bigkevmcd@gmail.com>
Status content can be very long compared to other fields. Moving it to the
end helps improve the visibility of the other fields.
Signed-off-by: Sunny <darkowlzz@protonmail.com>
- Update the summarize helper to accept a patch field owner.
- Update the controllers to set the patch field owner.
Signed-off-by: Sunny <darkowlzz@protonmail.com>
Use the etagIndex to provide more information about the artifact in
NewArtifact events and remove the revision from the event message. The
revision is still kept in the event annotations.
Signed-off-by: Sunny <darkowlzz@protonmail.com>
- Update v1beta2 API descriptions and reconciling messages to be
consistent.
- Replace 'download' with 'fetch'. Since the status condition for
download failure is called FetchFailed, using the term 'fetch' makes
the messaging more consistent.
- Replace `BucketOperationSucceed` with `BucketOperationSucceeded` and
regenerate the API docs.
Signed-off-by: Sunny <darkowlzz@protonmail.com>
summarizeAndPatch() was previously implemented per reconciler, each for
its own object type. This creates a generic SummarizeAndPatch helper that
takes a conditions.Setter object and performs the same operations. All the
reconcilers are updated to use SummarizeAndPatch(), and the
summarize-and-patch process can be configured using the HelperOptions.
Introduce ResultProcessor to allow injecting middlewares in the
SummarizeAndPatch process.
Introduce RuntimeResultBuilder to allow defining how the reconciliation
result is computed for a specific reconciler. This enables different
reconcilers to have different meanings for the reconciliation results.
Introduce Conditions in summary package to store all the status
conditions related information of a reconciler. This is passed to
SummarizeAndPatch() to be used for summary and patch calculation.
Remove all the redundant summarizeAndPatch() tests per reconciler.
Add package internal/object containing helpers for interacting with
runtime.Object needed by the generic SummarizeAndPatch().
Add tests for ComputeReconcileResult().
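The overall shape, in a heavily simplified and self-contained form; all
type names and signatures below are illustrative, not the controller's
actual API:

```go
package summarize

import "context"

// Object stands in for a conditions.Setter: anything whose status
// conditions can be summarized and patched.
type Object interface {
	SummarizeConditions() string
}

// Result is a simplified reconciliation result.
type Result int

// ResultProcessor is a middleware hook run before summarizing and
// patching, e.g. to record events or notices.
type ResultProcessor func(ctx context.Context, obj Object, res Result, recErr error) error

// RuntimeResultBuilder lets each reconciler decide how a Result (and any
// error) translates into a requeue decision.
type RuntimeResultBuilder interface {
	BuildRuntimeResult(res Result, obj Object) (requeue bool)
}

// Options configure SummarizeAndPatch (HelperOptions in the description
// above).
type Options struct {
	Processors    []ResultProcessor
	ResultBuilder RuntimeResultBuilder
	FieldOwner    string
}

// SummarizeAndPatch runs the processors, summarizes the object's
// conditions and patches the object via the supplied patch function,
// returning the requeue decision computed by the RuntimeResultBuilder.
func SummarizeAndPatch(ctx context.Context, patch func(summary, fieldOwner string) error,
	obj Object, res Result, recErr error, opts Options) (bool, error) {
	for _, p := range opts.Processors {
		if err := p(ctx, obj, res, recErr); err != nil {
			return false, err
		}
	}
	if err := patch(obj.SummarizeConditions(), opts.FieldOwner); err != nil {
		return false, err
	}
	return opts.ResultBuilder.BuildRuntimeResult(res, obj), recErr
}
```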
Signed-off-by: Sunny <darkowlzz@protonmail.com>
Update Storage.RemoveAll() and Storage.RemoveAllButCurrent() to return
the details about the deleted items. This helps emit useful information
about garbage collection in the controllers and ignore no-op garbage
collections.
RemoveAll() returns the path of the deleted directory if any.
RemoveAllButCurrent() returns a slice with the paths of all the deleted
items from a resource's artifact dir.
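A minimal, self-contained sketch of the new return values, assuming a
Storage type rooted at a base path; the actual method signatures and
bodies differ:

```go
package storage

import (
	"os"
	"path/filepath"
)

// Storage is a stand-in for the controller's artifact storage.
type Storage struct {
	BasePath string
}

// RemoveAll deletes the artifact directory at dir (relative to BasePath)
// and returns the path that was deleted, or "" if nothing existed.
func (s *Storage) RemoveAll(dir string) (string, error) {
	path := filepath.Join(s.BasePath, dir)
	if _, err := os.Stat(path); os.IsNotExist(err) {
		return "", nil
	}
	return path, os.RemoveAll(path)
}

// RemoveAllButCurrent deletes everything in the resource's artifact dir
// except the current artifact, returning the paths of the deleted items.
func (s *Storage) RemoveAllButCurrent(dir, current string) ([]string, error) {
	var deleted []string
	entries, err := os.ReadDir(filepath.Join(s.BasePath, dir))
	if err != nil {
		return nil, err
	}
	for _, e := range entries {
		if e.Name() == current {
			continue
		}
		path := filepath.Join(s.BasePath, dir, e.Name())
		if err := os.RemoveAll(path); err != nil {
			return deleted, err
		}
		deleted = append(deleted, path)
	}
	return deleted, nil
}
```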
Signed-off-by: Sunny <darkowlzz@protonmail.com>
If a local reference does not contain a path to a valid file, returning
`ErrChartReference` is more correct, as it signals that the reference
itself is invalid. This also indirectly causes the reconciler to signal a
Suspend, as the source or resource requires a change before a new attempt
might be successful.
Signed-off-by: Hidde Beydals <hello@hidde.co>
Consolidate BuildRuntimeResult() into summarizeAndPatch() to simplify
where the results are computed, summarized and patched.
Move the event recording and logging of context-specific errors into
RecordContextualError() and call it in summarizeAndPatch().
Introduce a Waiting error for wait-and-requeue scenarios. Update
ComputeReconcileResult() and RecordContextualError() to take the Waiting
error into account.
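Illustratively, such a Waiting error might be a dedicated error type
carrying the requeue interval; the fields below are assumptions, not the
actual definition:

```go
package serror

import (
	"fmt"
	"time"
)

// Waiting signals that the reconciliation has not failed but must wait
// and requeue, e.g. until an external dependency becomes ready.
type Waiting struct {
	RequeueAfter time.Duration
	Reason       string
}

func (e *Waiting) Error() string {
	return fmt.Sprintf("waiting: %s (requeue after %s)", e.Reason, e.RequeueAfter)
}
```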
Signed-off-by: Sunny <darkowlzz@protonmail.com>
- Ensure all logged messages start with a lowercase letter.
- Make some pushed (and logged) events of type `EventTypeTrace` to
prevent them from being forwarded to the external event recorder, which
prevents spam.
- Only log when the artifact is up-to-date with upstream (instead of
pushing an event).
Signed-off-by: Hidde Beydals <hello@hidde.co>