- Attempt short-circuiting clone only when the artifact is already in the
storage.
- A successful no-op clone does not return an error, but a partial
commit that contains only a hash and a reference.
- On no-op clone, reconcileSource() populates the source build dir by
copying the existing artifact and lets the reconciliation continue.
- Reconciliation is not skipped, so that other subreconcilers can still
operate on other parts of the GitRepo object (include, ignore, etc.)
when the attributes associated with them change but the remote repo has
not changed.
- Add a function IsConcreteCommit() to differentiate between partial and
concrete commits (a minimal sketch follows below).
- Update and simplify go-git and libgit2 no-op clone tests.
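A minimal sketch of what IsConcreteCommit() could look like, assuming the
Commit type carries the raw encoded object only for concrete commits (the
field names here are illustrative, not necessarily the exact upstream layout):

    // Commit is trimmed down to the fields relevant for this sketch.
    type Commit struct {
        Hash      []byte // object hash, set for both partial and concrete commits
        Reference string // symbolic reference, e.g. "refs/heads/main"
        Encoded   []byte // raw commit object, only set for concrete commits
    }

    // IsConcreteCommit reports whether c is a full (concrete) commit, or only
    // the hash + reference returned by a short-circuited (no-op) clone.
    func IsConcreteCommit(c Commit) bool {
        return c.Hash != nil && c.Encoded != nil
    }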
Signed-off-by: Sunny <darkowlzz@protonmail.com>
Introduce a new field in the GitRepositoryReconciler to set the enabled
features. This makes it test-friendly compared to using global flags for
setting and checking feature gates in the tests.
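A rough sketch of the idea (field and helper names are assumptions): the
reconciler carries its own feature map, so tests can toggle gates per
instance instead of mutating global flags:

    type GitRepositoryReconciler struct {
        // ...client, storage, event recorder, etc.

        // features holds the feature gates enabled for this reconciler
        // instance, e.g. map[string]bool{"OptimizedGitClones": true}.
        features map[string]bool
    }

    // featureEnabled reports whether the named feature gate is enabled.
    func (r *GitRepositoryReconciler) featureEnabled(name string) bool {
        return r.features[name]
    }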
Enable default feature gates in all the GitRepo reconciler tests.
Add test cases for reconcileSource() to test the behavior of optimized
git clone when the Repo is ready and not ready. This ensures that the
full reconciliation is not skipped when GitRepo is not ready.
Signed-off-by: Sunny <darkowlzz@protonmail.com>
For a gradual migration to the Generic error, update only the GitRepo
reconciler to use the Generic error.
Replace the Waiting error for the git no-change scenario with a Generic
error carrying the appropriate no-op, early-return and error
configurations. This ensures that the no-op results only in a log entry
and K8s native events at Normal level.
Fixes a reconciliation issue when recovering from a failure state (with
a previous success state and an artifact in the storage) while the
optimized git clone feature is on: the failure would persist, because
the optimization prevented full reconciliation due to the already
existing artifact, so the negative failure conditions were never removed
from the object status. In order to allow failure recovery, the git
clone optimizations are now only applied when the object is already in a
ready state.
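A sketch of that guard, assuming the fluxcd runtime conditions helpers (the
function name and the v1beta2 import path are illustrative):

    package controllers

    import (
        "github.com/fluxcd/pkg/apis/meta"
        "github.com/fluxcd/pkg/runtime/conditions"

        sourcev1 "github.com/fluxcd/source-controller/api/v1beta2"
    )

    // cloneOptimizationApplies reports whether the no-op clone short-circuit
    // may be attempted; a failed object always gets a full reconciliation so
    // it can recover.
    func cloneOptimizationApplies(obj *sourcev1.GitRepository, gateEnabled bool) bool {
        return gateEnabled &&
            conditions.IsTrue(obj, meta.ReadyCondition) &&
            obj.GetArtifact() != nil
    }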
Signed-off-by: Sunny <darkowlzz@protonmail.com>
OptimizedGitClones decreases resource utilization for GitRepository
reconciliations. It supports both go-git and libgit2 implementations
when cloning repositories using branches or tags.
This is an opt-out feature, which can be disabled by starting the
controller with the argument '--feature-gates=OptimizedGitClones=false'.
Signed-off-by: Paulo Gomes <paulo.gomes@weave.works>
No-op reconciliations are very inefficient, as they carry out
a full clone operation of the target repository even when
no changes have taken place.
This change executes an ls-remote operation, and cancels the clone
operation if the remote tip commit is still the same as the one observed
on the last reconciliation. In such cases, a git.NoChangesError is
returned.
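A sketch of the approach using go-git's remote listing (an illustration of
the idea, not the controller's actual implementation; lastRevisionChanged is
a hypothetical helper): the branch tip advertised by the remote is compared
against the revision observed on the last reconciliation before any clone is
attempted:

    package git

    import (
        "fmt"

        gogit "github.com/go-git/go-git/v5"
        "github.com/go-git/go-git/v5/config"
        "github.com/go-git/go-git/v5/plumbing"
        "github.com/go-git/go-git/v5/storage/memory"
    )

    // lastRevisionChanged performs an ls-remote style lookup of the advertised
    // refs and reports whether the given branch now points at a different
    // commit than the one observed previously.
    func lastRevisionChanged(url, branch, lastObservedHash string) (bool, error) {
        rem := gogit.NewRemote(memory.NewStorage(), &config.RemoteConfig{
            Name: "origin",
            URLs: []string{url},
        })
        refs, err := rem.List(&gogit.ListOptions{})
        if err != nil {
            return false, err
        }
        want := plumbing.NewBranchReferenceName(branch)
        for _, ref := range refs {
            if ref.Name() == want {
                return ref.Hash().String() != lastObservedHash, nil
            }
        }
        return false, fmt.Errorf("branch %q not found on remote", branch)
    }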
Signed-off-by: Paulo Gomes <paulo.gomes@weave.works>
This commit replaces `os.MkdirTemp` with `t.TempDir` in tests. The
directory created by `t.TempDir` is automatically removed when the test
and all its subtests complete.
Prior to this commit, a temporary directory created using `os.MkdirTemp`
needed to be removed manually by calling `os.RemoveAll`, which was
omitted in some tests. The error handling boilerplate, e.g.

    defer func() {
        if err := os.RemoveAll(dir); err != nil {
            t.Fatal(err)
        }
    }()
is also tedious, but `t.TempDir` handles this for us nicely.
Reference: https://pkg.go.dev/testing#T.TempDir
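A minimal illustration of the replacement (the test name is hypothetical):

    func TestStorageArchive(t *testing.T) {
        // t.TempDir creates a unique directory and registers its removal via
        // t.Cleanup, so no defer/os.RemoveAll boilerplate is needed.
        dir := t.TempDir()

        if err := os.WriteFile(filepath.Join(dir, "artifact.tgz"), []byte("data"), 0o600); err != nil {
            t.Fatal(err)
        }
    }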
Signed-off-by: Eng Zer Jun <engzerjun@gmail.com>
This change prevents the Reconciling and ArtifactOutdated conditions from
being set on a HelmRepo when the checksum of a cached repo index changes.
Adds some tests to ensure that when the repo index is cached, the
revision and checksum of the returned artifact are the same as on the
existing object status.
Also adds checks for the returned artifact and chartRepo from
reconcileSource, to ensure that chartRepo is populated and the checksum
of a new potential artifact is always empty, as it's populated when the
artifact is written in the storage.
Signed-off-by: Sunny <darkowlzz@protonmail.com>
Avoid validating (and thus loading) indexes if the checksum already exists in storage.
In other words, if the YAML is identical to the Artifact in storage, the reconciliation should
be a no-op, and therefore can short-circuit long/heavy operations.
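A sketch of that short-circuit (helper and field names are assumptions): the
sha256 of the freshly downloaded index YAML is compared against the checksum
of the Artifact already advertised in the object status before any loading or
validation takes place:

    package controllers

    import (
        "crypto/sha256"
        "fmt"

        sourcev1 "github.com/fluxcd/source-controller/api/v1beta2"
    )

    // indexUnchanged reports whether the downloaded index is byte-for-byte
    // identical to the artifact already in storage, in which case loading and
    // validating the index can be skipped entirely.
    func indexUnchanged(index []byte, current *sourcev1.Artifact) bool {
        if current == nil {
            return false
        }
        return fmt.Sprintf("%x", sha256.Sum256(index)) == current.Checksum
    }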
Co-authored-by: Hidde Beydals <hello@hidde.co>
Signed-off-by: Paulo Gomes <paulo.gomes@weave.works>
I assume using "interval" for timeouts was an accident and "timeout" was
actually meant to be used. This also fixes flakiness of tests.
Signed-off-by: Alexander Block <ablock84@gmail.com>
This fixes the immediate issue of the nil pointer dereference but we
still haven't isolated the actual cause of the size being nil to begin
with. This is ongoing work and as soon as we have boiled that down to
the simplest case we will provide a regression test for that case.
closes #680
Signed-off-by: Max Jonas Werner <mail@makk.es>
Co-authored-by: Hidde Beydals <hiddeco@users.noreply.github.com>
If implemented, this will:
- enable the HelmChart dependency manager to use the Helm in-memory cache
to retrieve reconciled HelmRepository indexes.
- record cache events.
Signed-off-by: Soule BA <soule@weave.works>
Azure SDK dependencies cannot be updated, as this requires us to move to
Go 1.18.
- cloud.google.com/go/storage to v1.22.0
- github.com/ProtonMail/go-crypto to v0.0.0-20220407094043-a94812496cf5
- github.com/darkowlzz/controller-check to v0.0.0-20220325122359-11f5827b7981
- github.com/elazarl/goproxy to v0.0.0-20220403042543-a53172b9392e
- github.com/fluxcd/pkg/gittestserver to v0.5.2
- github.com/go-logr/logr to v1.2.3
- github.com/minio/minio-go/v7 to v7.0.24
- github.com/onsi/gomega to v1.19.0
- golang.org/x/crypto to v0.0.0-20220411220226-7b82a4e95df4
- google.golang.org/api to v0.74.0
Signed-off-by: Hidde Beydals <hello@hidde.co>
As suggested by @pjbgf
Co-authored-by: Paulo Gomes <paulo.gomes.uk@gmail.com>
Signed-off-by: Peter Gundel <mail@petergundel.de>
This better represents permissions, as Linux handles such information in
octal format, meaning that the left-most 0 carries significance and must
not be dropped as it would be for an ordinary integer.
See https://github.com/fluxcd/source-controller/issues/603
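For illustration, in Go it is the leading zero (or 0o prefix) that makes a
literal octal; 0o644 is decimal 420, whereas a bare 644 would be read as
decimal and yield a completely different mode:

    // 0o644 => rw-r--r-- (decimal 420); a bare 644 is decimal 644 (0o1204).
    if err := os.WriteFile(path, data, 0o644); err != nil {
        return err
    }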
Signed-off-by: Peter Gundel <mail@petergundel.de>
Add two new flags to enable users to configure exponential
back-off for Flux objects. The default values are now
set to 750ms for minimum retry time, and 15min for max.
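A sketch of how such defaults are typically wired into a controller's rate
limiter via client-go and controller-runtime (the setup function shown is
illustrative, not the exact upstream wiring):

    package controllers

    import (
        "time"

        "k8s.io/client-go/util/workqueue"
        ctrl "sigs.k8s.io/controller-runtime"
        "sigs.k8s.io/controller-runtime/pkg/controller"
        "sigs.k8s.io/controller-runtime/pkg/reconcile"

        sourcev1 "github.com/fluxcd/source-controller/api/v1beta2"
    )

    func setupWithManager(mgr ctrl.Manager, r reconcile.Reconciler) error {
        // Exponential back-off between retries: start at 750ms, cap at 15min.
        limiter := workqueue.NewItemExponentialFailureRateLimiter(750*time.Millisecond, 15*time.Minute)

        return ctrl.NewControllerManagedBy(mgr).
            For(&sourcev1.GitRepository{}).
            WithOptions(controller.Options{RateLimiter: limiter}).
            Complete(r)
    }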
Signed-off-by: Paulo Gomes <paulo.gomes@weave.works>
This includes some rewiring of tests, and slight changes in how we work
with the local chart reference. `Path` is expected to be relative to
`WorkDir`, and both fields are now mandatory.
Signed-off-by: Hidde Beydals <hello@hidde.co>
notify() is used to emit events for new artifact and failure recovery
scenarios. It's implemented in all the reconcilers.
Previously, when there used to be a failure due to any reason, on a
subsequent successful reconciliation, no notification was sent to
indicate that the failure has been resolved.
With notify(), the old version of the object is compared with the new
version to determine whether all, or any, of the failures have been
resolved, and a notification is sent. The notification message is the
same as the one sent about the stored artifact on a usual successful
source reconciliation.
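A sketch of the recovery check (names are assumptions): the failure
conditions that were True on the old object are compared against the new
one, and a notification is emitted once they have cleared:

    package controllers

    import "github.com/fluxcd/pkg/runtime/conditions"

    // failureRecovered reports whether any failure condition that was True on
    // the old object is no longer True on the new one.
    func failureRecovered(oldObj, newObj conditions.Getter, failConditions ...string) bool {
        for _, c := range failConditions {
            if conditions.IsTrue(oldObj, c) && !conditions.IsTrue(newObj, c) {
                return true
            }
        }
        return false
    }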
Signed-off-by: Sunny <darkowlzz@protonmail.com>
We try to avoid affecting the source reconciliation when there's a
garbage collection related failure.
The event logging was resulting in events and notifications related to
GC failure when the artifact directory isn't created in the first
reconciliation of an object.
Signed-off-by: Sunny <darkowlzz@protonmail.com>
Introduce two new flags to configure the TTL of an artifact and the
maximum number of files to retain for an artifact. Modify the GC process
to honor these options and use timeouts to prevent the controller from
hanging.
This helps in situations where the source-controller has already garbage
collected the current artifact but the advertised artifact URL is still
the same, which leads to the server returning a 404.
Signed-off-by: Sanskar Jaiswal <jaiswalsanskar078@gmail.com>
If implemented, this will provide users with a way to cache index files.
This addresses issues where the index file is loaded and unmarshalled
during concurrent reconciliations, resulting in a heavy memory footprint.
The caching strategy used is cache-aside, and the cache is a k/v store
with expiration.
The number of cache entries and the TTL for entries are configurable.
The cache is optional and is disabled by default.
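A sketch of the cache-aside flow (the cache interface here is illustrative,
not the controller's actual cache package): look in the cache first, fall
back to loading the index file from the artifact, then populate the cache
with the configured TTL:

    package cache

    import (
        "time"

        "helm.sh/helm/v3/pkg/repo"
    )

    // indexCache is a hypothetical expiring key/value store.
    type indexCache interface {
        Get(key string) (*repo.IndexFile, bool)
        Set(key string, index *repo.IndexFile, ttl time.Duration)
    }

    // loadRepositoryIndex returns the cached index when present; otherwise it
    // loads and unmarshals the index file and caches the result.
    func loadRepositoryIndex(c indexCache, key, path string, ttl time.Duration) (*repo.IndexFile, error) {
        if idx, ok := c.Get(key); ok {
            return idx, nil // cache hit: no file load, no unmarshalling
        }
        idx, err := repo.LoadIndexFile(path) // cache miss: heavy load path
        if err != nil {
            return nil, err
        }
        c.Set(key, idx, ttl)
        return idx, nil
    }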
Signed-off-by: Soule BA <soule@weave.works>
Update all the other reconcilers, similar to the GitRepository
reconciler, to introduce the positive condition ArtifactInStorage and
reorder the status conditions.
Signed-off-by: Sunny <darkowlzz@protonmail.com>
Introduce separate positive polarity conditions which are used to set the
Ready condition. Move the "artifact stored" ready condition into the
ArtifactInStorage positive polarity condition. If ArtifactInStorage is
True and there's no negative polarity condition present, the Ready
condition is summarized with the ArtifactInStorage condition value.
Also, update the priorities of the conditions. ArtifactInStorage has a
higher priority than the SourceVerified condition; if both are present,
the Ready condition reflects ArtifactInStorage.
The negative polarity conditions are reordered so that the condition most
likely to be the actual cause of failure (for example
StorageOperationFailed) has the highest priority, followed by the
conditions that are reconciled earliest in the whole reconciliation, so
that the first failure, which may be the cause of subsequent failures, is
prioritized.
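For illustration, the resulting summarization priority reads roughly as
follows (condition names are from the v1beta2 API; the exact list and order
are an assumption based on the description above):

    // Negative polarity conditions first, most likely root cause highest;
    // positive polarity conditions last, ArtifactInStorage above SourceVerified.
    var summarizeOrder = []string{
        sourcev1.StorageOperationFailedCondition,
        sourcev1.FetchFailedCondition,
        sourcev1.IncludeUnavailableCondition,
        sourcev1.ArtifactOutdatedCondition,
        sourcev1.ArtifactInStorageCondition,
        sourcev1.SourceVerifiedCondition,
    }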
Signed-off-by: Sunny <darkowlzz@protonmail.com>
A GitRepository object with an included artifact should not stall when
the included artifact is not available, since there's no way to trigger a
reconciliation when the included artifact becomes available. The
reconciliation should instead fail and retry until the included artifact
becomes available.
Signed-off-by: Sunny <darkowlzz@protonmail.com>
This is to facilitate improvements on the notification-controller side,
where annotations prefixed with the FQDN of the Group of the Involved
Object will be transformed into "fields".
Signed-off-by: Hidde Beydals <hello@hidde.co>
Add gitrepository controller test for source ignore in a repository with
subdirectories where the subdirectories are part of the ignore patterns.
Signed-off-by: Sunny <darkowlzz@protonmail.com>
Prioritize StorageOperationFailedCondition over the other artifact
outdated and unavailable conditions, so that when an artifact is failing
due to a storage operation, it's visible in the Ready status condition,
making the reason for not being ready more accurate.
Signed-off-by: Sunny <darkowlzz@protonmail.com>
libgit2 network operations are blocking and provide neither timeout nor
context capabilities, leading to several reports from users of the
controllers hanging indefinitely.
By using managed transports, Go primitives such as http.Transport and
net.Dial can be used
to ensure timeouts are enforced.
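For illustration, these are the kind of standard library primitives this
enables (the values are examples, not the controller's defaults):

    package transport

    import (
        "net"
        "net/http"
        "time"
    )

    // newManagedHTTPClient builds a client whose dial, TLS handshake and
    // response-header phases are all bounded, so a remote that stops
    // responding can no longer hang the reconciler indefinitely.
    func newManagedHTTPClient(overall time.Duration) *http.Client {
        return &http.Client{
            Timeout: overall, // upper bound for the whole request
            Transport: &http.Transport{
                Proxy: http.ProxyFromEnvironment,
                DialContext: (&net.Dialer{
                    Timeout:   30 * time.Second, // connection establishment
                    KeepAlive: 30 * time.Second,
                }).DialContext,
                TLSHandshakeTimeout:   10 * time.Second,
                ResponseHeaderTimeout: 30 * time.Second,
                IdleConnTimeout:       90 * time.Second,
            },
        }
    }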
Co-Authored-by: Sunny <darkowlzz@protonmail.com>
Signed-off-by: Paulo Gomes <paulo.gomes@weave.works>
Introduce a new condition, StorageOperationFailedCondition, for all the
failures related to the storage. It is a negative polarity condition and
is considered when computing the reconciliation summary.
Also, introduce more granular event reasons related to
StorageOperationFailedCondition for more precise reasoning behind
failures. These replace the vague StorageOperationFailedReason.
Signed-off-by: Sunny <darkowlzz@protonmail.com>
Details about the source reference, reconcile strategy and artifact
revision value based on the reconcile strategy.
Signed-off-by: Sunny <darkowlzz@protonmail.com>
Reuses the same transport across different helm chart downloads,
whilst resetting the tlsconfig to avoid cross-contamination.
Crypto material is now only processed in-memory and does not
touch the disk.
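A sketch of the reuse-and-reset idea (the pool helpers are assumptions for
illustration): transports are pooled across chart downloads and their TLS
config is cleared on return, so client certificates never leak between
repositories and never have to touch disk:

    package getter

    import (
        "crypto/tls"
        "net/http"
        "sync"
    )

    var transportPool = sync.Pool{
        New: func() interface{} {
            return &http.Transport{Proxy: http.ProxyFromEnvironment}
        },
    }

    // borrowTransport hands out a pooled transport configured with the
    // in-memory TLS material for a single repository.
    func borrowTransport(tlsConfig *tls.Config) *http.Transport {
        t := transportPool.Get().(*http.Transport)
        t.TLSClientConfig = tlsConfig
        return t
    }

    // returnTransport resets the TLS config before putting the transport
    // back, avoiding cross-contamination between downloads.
    func returnTransport(t *http.Transport) {
        t.TLSClientConfig = nil
        transportPool.Put(t)
    }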
Signed-off-by: Paulo Gomes <paulo.gomes@weave.works>
This commit introduces a BucketProvider interface for fetch operations
against object storage provider buckets, allowing for easier introduction
of new provider implementations.
The algorithm for conditionally downloading object files is the same,
whether you are using GCP storage or an S3/Minio-compatible
bucket. The only thing that differs is how the respective clients
handle enumerating through the objects in the bucket; by implementing
just that in each provider, I can have the select-and-fetch code in
one place.
The client implementations now include safeguards to ensure the fetched
object is the same one the metadata was collected for. In
addition, minor changes have been made to the object fetch operation
to take into account that:
- Etags can change between composition of index and actual fetch, in
which case the etag is now updated.
- Objects can disappear between composition of index and actual fetch,
in which case the item is removed from the index.
Lastly, the requirement for authentication has been removed (and not
referring to a Secret at all is thus allowed), to provide support
for e.g. public buckets.
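A hedged sketch of what such an interface could look like (method names and
signatures are assumptions; the point is the shape: providers only implement
enumeration and fetch, the shared select-and-fetch code lives in one place):

    package bucket

    import "context"

    // BucketProvider abstracts a single object storage backend (GCS, S3/Minio, ...).
    type BucketProvider interface {
        // BucketExists reports whether the named bucket is reachable.
        BucketExists(ctx context.Context, bucketName string) (bool, error)
        // VisitObjects enumerates the bucket, calling visit with each object's
        // key and etag.
        VisitObjects(ctx context.Context, bucketName string, visit func(key, etag string) error) error
        // FGetObject downloads an object to localPath and returns the etag
        // observed at fetch time, so it can be compared with the indexed one.
        FGetObject(ctx context.Context, bucketName, key, localPath string) (string, error)
        // ObjectIsNotFound reports whether err means the object disappeared
        // since the index was composed.
        ObjectIsNotFound(err error) bool
        // Close releases any resources held by the provider's client.
        Close(ctx context.Context)
    }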
Co-authored-by: Hidde Beydals <hello@hidde.co>
Co-authored by: Michael Bridgen <michael@weave.works>
Signed-off-by: pa250194 <pa250194@ncr.com>
This adds a Size field to Artifacts, which reflects the number of bytes
written to the artifact when it's being archived.
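For illustration, the field presumably looks along these lines on the
Artifact type (a pointer keeps "unknown" distinguishable from zero bytes; the
exact tags are assumptions):

    type Artifact struct {
        // ...existing fields such as Path, URL, Revision and Checksum...

        // Size is the number of bytes written to storage when the artifact
        // was archived.
        // +optional
        Size *int64 `json:"size,omitempty"`
    }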
Signed-off-by: Kevin McDermott <bigkevmcd@gmail.com>