Prioritize StorageOperationFailedCondition over the other artifact outdated
and unavailable conditions so that when an artifact is failing due to a
storage operation, this is visible in the Ready status condition, making the
reason for not being ready more accurate.
Signed-off-by: Sunny <darkowlzz@protonmail.com>
libgit2 network operations are blocking and provide neither timeout nor context capabilities,
which has led to several reports by users of the controllers hanging indefinitely.
By using managed transports, Go primitives such as http.Transport and net.Dial can be used
to ensure timeouts are enforced.
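A minimal sketch of the idea using only the standard library (the actual managed transport wiring in the controller differs): net.Dialer bounds connection establishment, while http.Transport and http.Client bound the remaining phases of the request, so a hung server can no longer block a reconcile forever.

```go
package example

import (
	"net"
	"net/http"
	"time"
)

// newTimeoutClient returns an HTTP client whose dial, TLS handshake and
// response-header phases are all bounded, illustrating how Go primitives can
// enforce the timeouts that libgit2's unmanaged transports lack.
func newTimeoutClient(timeout time.Duration) *http.Client {
	dialer := &net.Dialer{Timeout: timeout, KeepAlive: 30 * time.Second}
	transport := &http.Transport{
		DialContext:           dialer.DialContext,
		TLSHandshakeTimeout:   timeout,
		ResponseHeaderTimeout: timeout,
		IdleConnTimeout:       90 * time.Second,
	}
	return &http.Client{Transport: transport, Timeout: timeout}
}
```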
Co-Authored-by: Sunny <darkowlzz@protonmail.com>
Signed-off-by: Paulo Gomes <paulo.gomes@weave.works>
Introduce a new condition, StorageOperationFailedCondition, for all
failures related to storage. It is a negative polarity condition and
is considered when computing the reconciliation summary.
Also introduce more granular event reasons related to
StorageOperationFailedCondition to convey the precise reason behind a
failure. These replace the vague StorageOperationFailedReason.
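As an illustration only (the reason constant below is a stand-in for one of the new granular reasons, and the helper shown is apimachinery's generic condition setter, not necessarily what the controllers use), a storage failure would be recorded roughly like this:

```go
package example

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/meta"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// Hypothetical granular reason; the real set of reasons replacing the vague
// StorageOperationFailedReason lives in the API package.
const ArchiveOperationFailedReason = "ArchiveOperationFailed"

// markStorageOperationFailed records the negative-polarity condition with a
// granular reason, so the failure can later be prioritized when summarizing
// the Ready condition.
func markStorageOperationFailed(conds *[]metav1.Condition, reason string, err error) {
	meta.SetStatusCondition(conds, metav1.Condition{
		Type:    "StorageOperationFailed",
		Status:  metav1.ConditionTrue,
		Reason:  reason,
		Message: fmt.Sprintf("failed to store artifact: %s", err),
	})
}
```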
Signed-off-by: Sunny <darkowlzz@protonmail.com>
Details about the source reference, reconcile strategy and artifact
revision value based on the reconcile strategy.
Signed-off-by: Sunny <darkowlzz@protonmail.com>
Reuses the same transport across different Helm chart downloads,
whilst resetting the TLS config to avoid cross-contamination.
Crypto material is now only processed in memory and never
touches the disk.
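A sketch of the pattern under assumed names (the wrapper function and its locking are illustrative, not the controller's exact code): the transport is shared, its TLS settings are rebuilt from in-memory PEM data for every download, and scrubbed afterwards.

```go
package example

import (
	"crypto/tls"
	"crypto/x509"
	"net/http"
	"sync"
)

// sharedTransport is reused across chart downloads; access is serialized so
// one download's TLS settings can never leak into another's.
var (
	mu              sync.Mutex
	sharedTransport = &http.Transport{}
)

// withTransport configures the shared transport from in-memory PEM data,
// runs fn with it, and resets the TLS config afterwards. Nothing is ever
// written to disk.
func withTransport(certPEM, keyPEM, caPEM []byte, fn func(*http.Transport) error) error {
	mu.Lock()
	defer mu.Unlock()
	// Reset to avoid cross-contamination between downloads.
	defer func() { sharedTransport.TLSClientConfig = nil }()

	cfg := &tls.Config{}
	if len(certPEM) > 0 {
		cert, err := tls.X509KeyPair(certPEM, keyPEM)
		if err != nil {
			return err
		}
		cfg.Certificates = []tls.Certificate{cert}
	}
	if len(caPEM) > 0 {
		pool := x509.NewCertPool()
		pool.AppendCertsFromPEM(caPEM)
		cfg.RootCAs = pool
	}
	sharedTransport.TLSClientConfig = cfg
	return fn(sharedTransport)
}
```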
Signed-off-by: Paulo Gomes <paulo.gomes@weave.works>
This commit introduces a BucketProvider interface for fetch operations
against object storage provider buckets, allowing for easier
introduction of new provider implementations.
The algorithm for conditionally downloading object files is the same,
whether you are using GCP storage or an S3/Minio-compatible
bucket. The only thing that differs is how the respective clients
handle enumerating through the objects in the bucket; by implementing
just that in each provider, I can have the select-and-fetch code in
one place.
The client implementations now include safeguards to ensure the
fetched object is the same one the metadata was collected for. In
addition, minor changes have been made to the object fetch operation
to take into account that:
- Etags can change between composition of the index and the actual
fetch, in which case the etag is now updated.
- Objects can disappear between composition of the index and the
actual fetch, in which case the item is removed from the index.
Lastly, the requirement for authentication has been removed (and not
referring to a Secret at all is thus allowed), to provide support
for e.g. public buckets.
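A hypothetical shape of such an interface (method names and signatures are illustrative; the controller's actual definition may differ), showing how only enumeration and single-object fetching remain provider-specific:

```go
package example

import "context"

// BucketProvider is a sketch of the provider abstraction described above.
// Each provider implements only bucket access and object enumeration, while
// the shared select-and-fetch code decides what to download from the etag
// index.
type BucketProvider interface {
	// BucketExists checks the bucket is reachable before indexing begins.
	BucketExists(ctx context.Context, name string) (bool, error)
	// VisitObjects calls visit for every object key and its etag, which is
	// how the etag index is composed.
	VisitObjects(ctx context.Context, bucket string, visit func(key, etag string) error) error
	// FGetObject downloads a single object to localPath and returns the etag
	// observed during the fetch, so the index can be corrected if it changed.
	FGetObject(ctx context.Context, bucket, key, localPath string) (etag string, err error)
	// ObjectIsNotFound reports whether err means the object vanished between
	// indexing and fetching, in which case the item is dropped from the index.
	ObjectIsNotFound(err error) bool
	// Close releases any resources held by the provider client.
	Close(ctx context.Context)
}
```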
Co-authored-by: Hidde Beydals <hello@hidde.co>
Co-authored-by: Michael Bridgen <michael@weave.works>
Signed-off-by: pa250194 <pa250194@ncr.com>
This adds a Size field to Artifacts, which reflects the number of bytes
written to the artifact when it's being archived.
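Sketched with the pointer type and JSON tag as assumptions, the API change looks roughly like:

```go
package v1beta2

// Artifact is shown trimmed to the new field; existing fields such as Path,
// URL, Revision and Checksum are omitted here.
type Artifact struct {
	// Size is the number of bytes written to the artifact file while it was
	// being archived.
	// +optional
	Size *int64 `json:"size,omitempty"`
}
```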
Signed-off-by: Kevin McDermott <bigkevmcd@gmail.com>
- Update the summarize helper to support a patch field owner.
- Update the controllers to set the patch field owner.
Signed-off-by: Sunny <darkowlzz@protonmail.com>
Use the etagIndex to provide more information about the artifact in
NewArtifact events and remove the revision from the event message. The
revision is still kept in the event annotations.
Signed-off-by: Sunny <darkowlzz@protonmail.com>
- Update v1beta2 API descriptions and reconciling messages to be
consistent.
- Replace 'download' with 'fetch'. Since the status condition for
download failure is called FetchFailed, using the term 'fetch' makes
the messaging more consistent.
- Replace `BucketOperationSucceed` with `BucketOperationSucceeded` and
generate api docs.
Signed-off-by: Sunny <darkowlzz@protonmail.com>
summarizeAndPatch() was used by all the reconcilers with their own
object type. This creates a generic SummarizeAndPatch helper that takes
a conditions.Setter object and performs the same operations. All the
reconcilers are updated to use SummarizeAndPatch(). The process of
summarizing and patching can be configured using HelperOptions.
Introduce ResultProcessor to allow injecting middleware into the
SummarizeAndPatch process.
Introduce RuntimeResultBuilder to allow defining how the reconciliation
result is computed for a specific reconciler. This enables different
reconcilers to attach different meanings to their reconciliation
results.
Introduce Conditions in the summary package to store all the status
condition related information of a reconciler. This is passed to
SummarizeAndPatch() to be used for the summary and patch calculation.
Remove all the redundant summarizeAndPatch() tests per reconciler.
Add package internal/object containing helpers for interacting with
runtime.Object needed by the generic SummarizeAndPatch().
Add tests for ComputeReconcileResult().
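The extension points might be pictured roughly as below; the names mirror the description above, but the exact method sets and signatures in the internal summary package are assumptions:

```go
package summarize

import (
	"context"

	"github.com/fluxcd/pkg/runtime/conditions"
	ctrl "sigs.k8s.io/controller-runtime"
)

// ResultProcessor is a middleware hook run by SummarizeAndPatch before the
// object is patched, e.g. to record events or log contextual errors.
type ResultProcessor func(ctx context.Context, obj conditions.Setter, res ctrl.Result, recErr error)

// RuntimeResultBuilder lets each reconciler define how its reconciliation
// outcome maps onto a ctrl.Result, so "success" can mean requeue-after-interval
// for one reconciler and no requeue at all for another.
type RuntimeResultBuilder interface {
	BuildResult(obj conditions.Setter, recErr error) (ctrl.Result, error)
}

// HelperOption configures how SummarizeAndPatch computes the summary and
// performs the patch (owned conditions, patch field owner, processors, etc.).
type HelperOption func(opts *helperOptions)

type helperOptions struct {
	processors    []ResultProcessor
	resultBuilder RuntimeResultBuilder
	fieldOwner    string
}
```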
Signed-off-by: Sunny <darkowlzz@protonmail.com>
Update Storage.RemoveAll() and Storage.RemoveAllButCurrent() to return
the details about the deleted items. This helps emit useful information
about garbage collection in the controllers and ignore no-op garbage
collections.
RemoveAll() returns the path of the deleted directory if any.
RemoveAllButCurrent() returns a slice of the paths of all the items
deleted from a resource's artifact directory.
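Viewed as signatures (the interface below does not exist in the codebase; the parameter type is assumed to be the API's Artifact), the updated methods return what was actually deleted:

```go
package example

import sourcev1 "github.com/fluxcd/source-controller/api/v1beta2"

// artifactGarbageCollector is an illustrative view of the updated Storage
// methods, showing the new return values.
type artifactGarbageCollector interface {
	// RemoveAll deletes the resource's artifact directory and returns its
	// path when something was actually removed, so the controller can report
	// it and skip emitting events for no-op garbage collections.
	RemoveAll(artifact sourcev1.Artifact) (deletedDir string, err error)

	// RemoveAllButCurrent deletes everything in the artifact directory except
	// the current artifact, returning the paths of all deleted items.
	RemoveAllButCurrent(artifact sourcev1.Artifact) (deletedItems []string, err error)
}
```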
Signed-off-by: Sunny <darkowlzz@protonmail.com>
Consolidate BuildRuntimeResult() into summarizeAndPatch() to simplify
where the results are computed, summarized and patched.
Move the event recording and logging of context-specific errors into
RecordContextualError() and call it in summarizeAndPatch().
Introduce a Waiting error for wait-and-requeue scenarios. Update
ComputeReconcileResult() and RecordContextualError() to take the
Waiting error into account.
Signed-off-by: Sunny <darkowlzz@protonmail.com>
- Ensure all logged messages start with a lowercase letter.
- Make some pushed (and logged) events of type `EventTypeTrace` to
prevent them from being forwarded to the external event recorder,
which would otherwise result in spam.
- Only log if the artifact is up-to-date with upstream (instead of
pushing an event).
Signed-off-by: Hidde Beydals <hello@hidde.co>
- Remove ArtifactUnavailable condition and use Reconciling condition to
convey the same.
- Make Reconciling condition affect the ready condition.
- Introduce summarizeAndPatch() to calculate the final status conditions
and patch them.
- Introduce reconcile() to iterate through the sub-reconcilers and
execute them (see the sketch below).
- Simplify logging and event recording.
- Introduce controller-check/status checks to assert that the status
conditions are valid at the end of the tests.
- Create variables for various condition groups: owned conditions, ready
dependencies and ready dependencies negative.
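A simplified sketch of the sub-reconciler loop referenced above, using GitRepository as the example object (the function type, result merging and error handling are illustrative, not the controller's exact implementation):

```go
package example

import (
	"context"

	sourcev1 "github.com/fluxcd/source-controller/api/v1beta2"
	ctrl "sigs.k8s.io/controller-runtime"
)

// subReconcileFunc is one step of the reconciliation, e.g. reconcileStorage,
// reconcileSource or reconcileArtifact.
type subReconcileFunc func(ctx context.Context, obj *sourcev1.GitRepository) (ctrl.Result, error)

// reconcile runs the sub-reconcilers in order, stopping at the first error
// and merging the requested requeue results.
func reconcile(ctx context.Context, obj *sourcev1.GitRepository, reconcilers []subReconcileFunc) (ctrl.Result, error) {
	res := ctrl.Result{}
	for _, r := range reconcilers {
		recResult, err := r(ctx, obj)
		if err != nil {
			return recResult, err
		}
		// Illustrative result merging: keep an explicit Requeue and the
		// shortest non-zero RequeueAfter seen so far.
		if recResult.Requeue {
			res.Requeue = true
		}
		if recResult.RequeueAfter > 0 &&
			(res.RequeueAfter == 0 || recResult.RequeueAfter < res.RequeueAfter) {
			res.RequeueAfter = recResult.RequeueAfter
		}
	}
	return res, nil
}
```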
Signed-off-by: Sunny <darkowlzz@protonmail.com>
This commit rewrites the `HelmRepositoryReconciler` to new standards,
while implementing the newly introduced Condition types, and trying to
adhere better to Kubernetes API conventions.
More specifically it introduces:
- Implementation of more explicit Condition types to highlight
abnormalities.
- Extensive usage of the `conditions` subpackage from `runtime`.
- Better and more conflict-resilient (status) patching of reconciled
objects using the `patch` subpackage from runtime.
- Proper implementation of kstatus' `Reconciling` and `Stalled`
conditions.
- Refactoring of some Helm elements to make them easier to use within
the new reconciler logic.
- Integration tests that solely rely on `testenv` and do not
use Ginkgo.
There are a couple of TODOs marked in-code; these are suggestions for
the future and should be non-blocking.
In addition to the TODOs, more complex and/or edge-case test scenarios
may be added as well.
Signed-off-by: Hidde Beydals <hello@hidde.co>
- Remove ArtifactUnavailable condition and use Reconciling condition to
convey the same.
- Make Reconciling condition affect the ready condition.
- Introduce summarizeAndPatch() to calculate the final status conditions
and patch them.
- Introduce reconcile() to iterate through the sub-reconcilers and
execute them.
Signed-off-by: Sunny <darkowlzz@protonmail.com>
- Introduce mock GCP Server to test the gcp bucket client against mocked
gcp server results.
- Add tests for reconcileGCPSource().
- Patch GCPClient.BucketExists() to return no error when the bucket
doesn't exist. This keeps the GCP client compatible with the minio
client.
Signed-off-by: Sunny <darkowlzz@protonmail.com>
Add `BucketReconciler.reconcileArtifact` tests based on
`GitRepositoryReconciler.reconcileArtifact` test cases.
Signed-off-by: Sunny <darkowlzz@protonmail.com>
This commit consolidates the `DownloadFailed` and `CheckoutFailed`
Condition types into a new more generic `FetchFailed` type to simplify
the API and observations by consumers.
Signed-off-by: Hidde Beydals <hello@hidde.co>
This commit rewrites the `BucketReconciler` to new standards, while
implementing the newly introduced Condition types, and trying to
adhere better to Kubernetes API conventions.
More specifically it introduces:
- Implementation of more explicit Condition types to highlight
abnormalities.
- Extensive usage of the `conditions` subpackage from `runtime`.
- Better and more conflict-resilient (status) patching of reconciled
objects using the `patch` subpackage from runtime.
- Proper implementation of kstatus' `Reconciling` and `Stalled`
conditions.
- Refactoring of the reconciler logic, including more efficient detection
of changes to bucket objects by making use of the etag data available,
and downloading of object files in parallel with a limited number of
workers (4); see the sketch below.
- Integration tests that solely rely on `testenv` and do not
use Ginkgo.
There are a couple of TODOs marked in-code; these are suggestions for
the future and should be non-blocking.
In addition to the TODOs, more complex and/or edge-case test scenarios
may be added as well.
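A simplified sketch of the limited-worker download referenced above, using only the standard library (the etag index type, fetch function and error handling are illustrative):

```go
package example

import (
	"context"
	"sync"
)

// fetchFunc downloads a single object identified by key and returns the etag
// observed during the fetch.
type fetchFunc func(ctx context.Context, key string) (etag string, err error)

// fetchAll downloads every object in the etag index with at most `workers`
// concurrent fetches (the controller uses 4), recording the first error.
func fetchAll(ctx context.Context, index map[string]string, workers int, fetch fetchFunc) error {
	keys := make([]string, 0, len(index))
	for k := range index {
		keys = append(keys, k)
	}

	sem := make(chan struct{}, workers)
	var (
		wg       sync.WaitGroup
		mu       sync.Mutex
		firstErr error
	)
	for _, key := range keys {
		wg.Add(1)
		sem <- struct{}{}
		go func(key string) {
			defer wg.Done()
			defer func() { <-sem }()
			etag, err := fetch(ctx, key)
			mu.Lock()
			defer mu.Unlock()
			if err != nil {
				if firstErr == nil {
					firstErr = err
				}
				return
			}
			// Etags can change between indexing and fetching; keep the index
			// in sync with what was actually downloaded.
			index[key] = etag
		}(key)
	}
	wg.Wait()
	return firstErr
}
```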
Signed-off-by: Hidde Beydals <hello@hidde.co>
The artifacts built in reconcileInclude() should be persisted by writing
them to the includes. reconcileArtifact() later adds the includes to the
GitRepository object's status and persists it.
Signed-off-by: Sunny <darkowlzz@protonmail.com>
- Remove ArtifactUnavailable condition and use Reconciling condition to
convey the same.
- Make Reconciling condition affect the ready condition.
- Introduce summarizeAndPatch() to calculate the final status conditions
and patch them.
- Introduce reconcile() to iterate through the sub-reconcilers and
execute them.
Signed-off-by: Sunny <darkowlzz@protonmail.com>
While deleting, patching an object with a new status results in a "not
found" error because the object has already been deleted. The patching
operation first patches the status conditions, then the rest of the
object and, at the very end, the rest of the status. When an object is
deleted, garbage collection causes the artifact in the status to be
updated, producing a diff that the deferred patch then attempts to
apply.
Since the status patching runs at the very end, the object gets deleted
before it can be patched.
Ignore the "not found" error while patching when the deletion timestamp
is set.
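Roughly, using the patch helper from the runtime package and apimachinery's error helpers (the wrapper function itself is illustrative):

```go
package example

import (
	"context"

	"github.com/fluxcd/pkg/runtime/patch"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	kerrors "k8s.io/apimachinery/pkg/util/errors"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// patchIgnoringNotFoundWhenDeleting applies the deferred patch, but treats
// "not found" as a non-error when the object carries a deletion timestamp,
// since the object may legitimately be gone by the time the patch runs.
func patchIgnoringNotFoundWhenDeleting(ctx context.Context, helper *patch.Helper, obj client.Object) error {
	if err := helper.Patch(ctx, obj); err != nil {
		if !obj.GetDeletionTimestamp().IsZero() {
			err = kerrors.FilterOut(err, func(e error) bool { return apierrors.IsNotFound(e) })
		}
		return err
	}
	return nil
}
```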
Signed-off-by: Sunny <darkowlzz@protonmail.com>