Adds the SMI metrics API to the Linkerd install flow. This installs the SMI metrics controller deployment, the SMI metrics APIService object, and supporting RBAC and config resources.
This is the first step toward having Linkerd consume the SMI metrics API in the CLI and web dashboard.
Signed-off-by: Alex Leong <alex@buoyant.io>
* Check Extension api server Authentication
* Added Checks and tests for extension api-server authentication
* Fixed Failing Static Checks
* Updated the golden file
Signed-off-by: Christy Jacob <christyjacob4@gmail.com>
Fixes #4105
On my local machine, `linkerd stat` was not returning traffic until
the 17th try or so, which explains why the 20s timeout was too
close to the limit and this test was failing sometimes. So I increased
the timeout to 40s, and I'm also adding stderr to the error message.
Updated the regex for ignoring version mismatch warning events; it was
only being applied to '-*upgrade' namespaces.
It is safe to ignore such warnings because the endpoint controller
retries when that happens; if after many retries it still can't succeed,
a different warning is thrown which is _not_ whitelisted and will make
the build fail.
https://github.com/kubernetes/kubernetes/blob/v1.16.6/pkg/controller/endpoint/endpoints_controller.go#L334-L348
This PR also removes logging matches on expected warnings, to avoid
cluttering the CI log.
From time to time we get this CI error when testing the external issuer
mechanism:
```
Test script: [external_issuer_test.go] Params:
[--linkerd-namespace=l5d-integration-external-issuer
--external-issuer=true]
--- FAIL: TestExternalIssuer (33.61s)
external_issuer_test.go:89: Received error while ensuring test app
works (before cert rotation): Error stripping header and trailing
newline; full output:
FAIL
```
https://github.com/alpeb/linkerd2/runs/428273855?check_suite_focus=true#step:6:526
This is caused by the "backend" pod not receiving traffic from
"slow-cooker" in a timely manner.
After those pods are deployed we're only checking that "backend" is
ready, but not "slow-cooker", so this change adds that check.
I'm also removing the `TestHelper.CheckDeployment` call because it's
redundant: the preceding `TestHelper.CheckPods` already checks
that the deployment has all the specified replicas ready.
In light of the breaking changes we are introducing to the Helm chart and the convoluted upgrade process (see linkerd/website#647), an integration test can be quite helpful. This simply installs the latest stable release through `helm install` and then upgrades to the current head of the branch.
Signed-off-by: Zahari Dichev <zaharidichev@gmail.com>
*From the comment disabling the test*:
#2316
The response from `http://httpbin.org/get` is non-deterministic, returning
either `http://..` or `https://..` for GET requests. As #2316 mentions, this
test should not have an external dependency on this endpoint. As a workaround
for edge-20.1.3, temporarily disable this test and re-enable it with one that
has reliable behavior.
Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
## edge-20.1.3
* CLI
* Introduced `linkerd check --pre --linkerd-cni-enabled`, used when the CNI
plugin is enabled, to check that it has been properly installed before
proceeding with the control plane installation
* Added support for the `--as-group` flag so that users can impersonate
groups for Kubernetes operations (thanks @mayankshah160!)
* Controller
* Fixed an issue where an override of the Docker registry was not being
applied to debug containers (thanks @javaducky!)
* Added check for the Subject Alternate Name attributes to the API server
when access restrictions have been enabled (thanks @javaducky!)
* Added support for arbitrary pod labels so that users can leverage the
Linkerd provided Prometheus instance to scrape for their own labels
(thanks @daxmc99!)
* Fixed an issue with CNI config parsing
Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
As part of the effort to remove the "experimental" label from the CNI plugin, this PR introduces CNI checks to `linkerd check`.
Signed-off-by: Zahari Dichev <zaharidichev@gmail.com>
This integration test roughly follows the [Linkerd guide to distributed tracing](https://linkerd.io/2019/10/07/a-guide-to-distributed-tracing-with-linkerd/).
We deploy the tracing components (oc-collector and jaeger), emojivoto, and nginx as an ingress to do span initiation. We then watch the Jaeger API and check that a trace is eventually created that includes spans from all of the data plane components: nginx, linkerd-proxy, web, voting, and emoji.
Signed-off-by: Alex Leong <alex@buoyant.io>
This is an experimental release that includes large changes to the
proxy's request buffering and backpressure infrastructure.
Please exercise caution before deploying this proxy version into mission
critical environments.
Fixes a problem where the identity service can issue a certificate with a lifetime longer than that of the issuer certificate. This was causing the proxies to end up using an invalid TLS certificate. This fix ensures that the lifetime of the issued certificate is not greater than the smallest lifetime among the certs in the issuer cert trust chain.
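A minimal sketch of the clamping idea (a hypothetical helper, not the actual identity-service code): pick the earliest `NotAfter` among the requested expiry and every cert in the issuer's trust chain.
```go
package identity

import (
	"crypto/x509"
	"time"
)

// clampNotAfter returns the earliest expiry among the requested NotAfter
// and every certificate in the issuer's trust chain, so an issued leaf
// certificate can never outlive its signers. Illustration only.
func clampNotAfter(requested time.Time, chain []*x509.Certificate) time.Time {
	notAfter := requested
	for _, crt := range chain {
		if crt.NotAfter.Before(notAfter) {
			notAfter = crt.NotAfter
		}
	}
	return notAfter
}
```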
Signed-off-by: Zahari Dichev <zaharidichev@gmail.com>
We were ignoring events like
```
MountVolume.SetUp failed for volume .* : couldn't propagate object cache: timed out waiting for the condition
```
but as of k8s 1.16 those got replaced by more precise messages, like
```
MountVolume.SetUp failed for volume "linkerd-identity-token-cm4fn" :failed to sync secret cache: timed out waiting for the condition
MountVolume.SetUp failed for volume "prometheus-config" : failed to sync configmap cache: timed out waiting for the condition
```
This was causing sporadic CI test failures like
[here](https://github.com/linkerd/linkerd2/runs/368424822#step:7:562)
So I'm including another regex for that.
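For illustration, a pattern along these lines (not the exact one in the test suite) covers both the old and the new messages:
```go
package main

import (
	"fmt"
	"regexp"
)

// Matches the pre-1.16 "couldn't propagate object cache" message as well
// as the newer per-resource "failed to sync ... cache" variants.
var knownVolumeEvents = regexp.MustCompile(
	`MountVolume\.SetUp failed for volume .* ?: ?(couldn't propagate object cache|failed to sync (configmap|secret) cache): timed out waiting for the condition`)

func main() {
	msg := `MountVolume.SetUp failed for volume "prometheus-config" : failed to sync configmap cache: timed out waiting for the condition`
	fmt.Println(knownVolumeEvents.MatchString(msg)) // true
}
```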
Re: 96c41f8a1e
In various integration tests we're not showing stderr when a failure
happens, thus hiding some possibly useful debugging info.
E.g. in the latest CI failures, commands like `linkerd upgrade` were
failing with no visible reason why.
Fixes #3444, fixes #3443
## Background and Behavior
This change adds support for the destination service to resolve Get requests which contain a service cluster IP or pod IP as the `Path` parameter. It returns the stream of endpoints, just as if `Get` had been called with the service's authority. This lays the groundwork for allowing the proxy to TLS TCP connections, by allowing it to do destination lookups for the SO_ORIG_DST of TCP connections. When that IP address corresponds to a service cluster IP or pod IP, the destination service will return the endpoints stream, including the pod metadata required to establish identity.
Prior to this change, attempting to look up an IP address in the destination service would result in an `InvalidArgument` error.
Updating the `GetProfile` method to support IP address lookups is out of scope, and attempts to look up an IP address with the `GetProfile` method will result in `InvalidArgument`.
## Implementation
We do this by creating an `IPWatcher` which wraps the `EndpointsWatcher` and supports lookups by IP. `IPWatcher` maintains a mapping of cluster IPs to service IDs and translates a subscription to an IP address into a subscription to the service ID using the underlying `EndpointsWatcher`.
Since the service name is no longer always inferable directly from the input parameters, we restructure `EndpointTranslator` and `PodSet` so that we propagate the service name from the endpoints API response.
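A rough sketch of the shape of this translation (the type and method names beyond `IPWatcher` and `EndpointsWatcher` are invented for illustration):
```go
package main

import "fmt"

// ServiceID and the stub EndpointsWatcher below are stand-ins for the
// real types; the point is the translation step.
type ServiceID struct{ Namespace, Name string }

type EndpointsWatcher struct{}

func (w *EndpointsWatcher) Subscribe(id ServiceID) error {
	fmt.Printf("subscribed to endpoints of %s/%s\n", id.Namespace, id.Name)
	return nil
}

// IPWatcher translates an IP subscription into a service subscription
// on the wrapped EndpointsWatcher.
type IPWatcher struct {
	endpoints *EndpointsWatcher
	byIP      map[string]ServiceID // cluster IP -> owning service
}

func (w *IPWatcher) Subscribe(clusterIP string) error {
	id, ok := w.byIP[clusterIP]
	if !ok {
		return fmt.Errorf("no service found for cluster IP %s", clusterIP)
	}
	return w.endpoints.Subscribe(id)
}

func main() {
	w := &IPWatcher{
		endpoints: &EndpointsWatcher{},
		byIP:      map[string]ServiceID{"10.96.0.12": {"emojivoto", "web-svc"}},
	}
	_ = w.Subscribe("10.96.0.12")
}
```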
## Testing
This can be tested by running the destination service locally, using the current kube context to connect to a Kubernetes cluster:
```
go run controller/cmd/main.go destination -kubeconfig ~/.kube/config
```
Then lookups can be issued using the destination client:
```
go run controller/script/destination-client/main.go -path 192.168.54.78:80 -method get -addr localhost:8086
```
Service cluster IPs and pod IPs can be used as the `path` argument.
Signed-off-by: Alex Leong <alex@buoyant.io>
* Added checks for cert correctness
* Add warning checks for approaching expiration
* Add unit tests
* Improve unit tests
* Address comments
* Address more comments
* Prevent upgrade from breaking proxies when issuer cert is overwritten (#3821)
* Address more comments
* Add gate to upgrade cmd that checks that all proxies' roots work with the identity issuer that we are updating to
* Address comments
* Enable use of upgrade to modify both roots and issuer at the same time
Signed-off-by: Zahari Dichev <zaharidichev@gmail.com>
* Enable cert rotation test to work with dynamic namespaces
This PR adds support for dynamic cert generation when running the cert rotation integration tests. This avoids baking the namespace into the certificate CN, thereby allowing us to run these tests on cloud clusters.
The tests in #3775 were failing because the second secret holding the issuer cert replacement was a leaf cert and not a root/intermediary cert capable of signing the CSRs. This is what the replacement cert looked like:
```bash
$ k -n l5d-integration-external-issuer get secrets linkerd-identity-issuer-new -ojson | jq '.data|.["tls.crt"]' | tr -d '"' | base64 -d | step certificate inspect -
Certificate:
Data:
Version: 3 (0x2)
Serial Number: 2 (0x2)
Signature Algorithm: ECDSA-SHA256
Issuer: CN=identity.l5d-integration-external-issuer.cluster.local
Validity
Not Before: Dec 6 19:16:08 2019 UTC
Not After : Dec 5 19:16:28 2020 UTC
Subject: CN=identity.l5d-integration-external-issuer.cluster.local
Subject Public Key Info:
Public Key Algorithm: ECDSA
Public-Key: (256 bit)
X:
93:d5:fa:f8:d1:44:4f:9a:8c:aa:0c:9e:4f:98:a3:
8d:28:d9:cc:f2:74:4c:5f:76:14:52:47:b9:fb:c9:
a3:33
Y:
d2:04:74:95:2e:b4:78:28:94:8a:90:b2:fb:66:1b:
e7:60:e5:02:48:d2:02:0e:4d:9e:4f:6f:e9:0a:d9:
22:78
Curve: P-256
X509v3 extensions:
X509v3 Key Usage: critical
Digital Signature, Key Encipherment
X509v3 Extended Key Usage:
TLS Web Server Authentication, TLS Web Client Authentication
X509v3 Subject Alternative Name:
DNS:identity.l5d-integration-external-issuer.cluster.local
Signature Algorithm: ECDSA-SHA256
30:46:02:21:00:f6:93:2f:10:ba:eb:be:bf:77:1a:2d:68:e6:
04:17:a4:b4:2a:05:80:f7:c5:f7:37:82:7b:b7:9c:a1:66:6a:
e1:02:21:00:b3:65:06:37:49:06:1e:13:98:7c:cf:f9:71:ce:
5a:55:de:f6:1b:83:85:b0:a8:88:b7:cf:21:d1:16:f2:10:f9
```
For it to be a root/intermediate cert it should have had `CA:TRUE` under the `X509v3 extensions` section.
Why did the test pass sometimes? When it did pass for me, I could see in the linkerd-identity proxy logs something like:
```
ERR! [ 320.964592s] linkerd2_proxy_identity::certify Received invalid ceritficate: invalid certificate: UnknownIssuer
```
So the cert retrieved from identity was still invalid, but for some reason the proxy sometimes kept going despite that. And when one deleted the linkerd-identity pod, its proxy wouldn't come up at all, also showing that error.
With the changes from this branch, we no longer see that error in the logs and after deleting the linkerd-identity pod it comes back gracefully.
* Enable cert rotation test to work with dynamic namespaces
Signed-off-by: Zahari Dichev <zaharidichev@gmail.com>
* Address comments
Signed-off-by: Zahari Dichev <zaharidichev@gmail.com>
* Address further comments
Signed-off-by: Zahari Dichev <zaharidichev@gmail.com>
https://github.com/linkerd/linkerd2/pull/3693 caused the proxy to start resolving private IP addresses with the destination service. However, the destination service does not support IP lookups and returns failures for these lookups. This negatively affects the destination service success rate and can cause this test to fail. We disable this test for now until the destination service supports IP lookups.
Signed-off-by: Alex Leong <alex@buoyant.io>
* Traffic split integration test
Signed-off-by: zaharidichev <zaharidichev@gmail.com>
* Address comments
Signed-off-by: zaharidichev <zaharidichev@gmail.com>
* Display placeholder when there is no basic stats data
Signed-off-by: zaharidichev <zaharidichev@gmail.com>
* Replaced `uuid` with `uid` from linkerd-config resource
Fixes #3621
Removed the old `uuid` for identifying linkerd installations, and
replaced it with the `uid` property from the `linkerd-config` ConfigMap.
I tested that this `uid` remains the same by updating the config and
also upgrading linkerd, using both the CLI and Helm.
Note that this required granting `linkerd-web` RBAC access to the
`linkerd-config` ConfigMap.
I also added an integration test to verify the stability of the uid.
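A minimal sketch of the lookup, assuming a recent client-go and an already-built clientset (not the actual linkerd2 code):
```go
package idlookup

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// installID returns the UID of the linkerd-config ConfigMap. Kubernetes
// keeps this UID stable across edits and upgrades, which makes it a
// good installation identifier.
func installID(ctx context.Context, cs kubernetes.Interface) (string, error) {
	cm, err := cs.CoreV1().ConfigMaps("linkerd").Get(ctx, "linkerd-config", metav1.GetOptions{})
	if err != nil {
		return "", err
	}
	return string(cm.GetUID()), nil
}
```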
The edges integration test can fail when more edges are added to the Linkerd namespace due to https://github.com/linkerd/linkerd2/issues/3706. We disable this test until that issue can be resolved.
Signed-off-by: Alex Leong <alex@buoyant.io>
Add an integration test which exercises the behavior when one meshed pod connects to another meshed pod by pod ip address.
The current behavior is that the Linkerd proxy will not do any lookup against the destination service for this kind of connection and will proxy directly to the SO_ORIG_DST. This means that it will not have the identity metadata necessary to TLS the connection, and the connection will not be present in the `linkerd edges` command output. This test validates that behavior.
The purpose of this test is to set the stage for future work which will allow the Linkerd proxy to TLS this type of connection and display it in `linkerd edges`. The assertions in this test will be updated as part of that work.
This test will be run as part of the integration test suite. It can also be run directly:
```
go test --failfast --mod=readonly test/install_test.go --linkerd=(pwd)"/bin/linkerd" --k8s-context="$CTX" --integration-tests
go test -v --mod=readonly test/edges/edges_test.go --linkerd=(pwd)"/bin/linkerd" --k8s-context="$CTX" --integration-tests
```
Signed-off-by: Alex Leong <alex@buoyant.io>
* Add missing package to proxy Dockerfile
* Fix failing 'check' integration test
* Trim whitespaces in certs comparison.
Without this change, the integration test would fail because the trust anchor
stored in the linkerd-config config map generated by the Helm renderer is
stripped of the line breaks. See charts/linkerd2/templates/_config.tpl
Signed-off-by: Ivan Sim <ivan@buoyant.io>
* Re-add the destination container to the controller spec
This fix is necessary to avoid data plane downtime during an upgrade to
stable-2.6. All existing older proxies will continue to send requests to
this destination container, until the data plane is restarted.
On restart, the new pods will start forwarding their requests to the new
linkerd-dst service.
* Use the 2.6 destination service fqdn
* Fixed unit tests
* Fix integration test failure
Signed-off-by: Ivan Sim <ivan@buoyant.io>
Fixes #278
Add `linkerd install|upgrade --disable-heartbeat` flag, and have
`linkerd check` check for the heartbeat's SA only if it's enabled.
Also added those flags into the `linkerd upgrade -h` examples.
Signed-off-by: Alejandro Pedraza <alejandro@buoyant.io>
The integration tests check for known k8s events using a regex. This
regex included an incorrect pattern that prepended a failure reason and
object, rather than simply the event message we were trying to match on.
This resulted in failures such as:
https://github.com/linkerd/linkerd2/runs/217872818#step:6:476
Fix the regex to only check for the event message. Also explicitly
differentiate reason, object, and message in the log output.
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
We're getting flaky `KillPodSandbox` events in the integration tests:
https://github.com/linkerd/linkerd2/runs/216505657#step:6:427
This is despite adding a regex for these events in #3380.
Modify the KillPodSandbox event regex to match on a broader set of
strings.
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
Fixes #3356
1.16 removes some API groups that were already deprecated. From the k8s blog
post (https://kubernetes.io/blog/2019/07/18/api-deprecations-in-1-16/):
```
- PodSecurityPolicy: will no longer be served from extensions/v1beta1 in
v1.16.
Migrate to the policy/v1beta1 API, available since v1.10. Existing
persisted data can be retrieved/updated via the policy/v1beta1 API.
- DaemonSet, Deployment, StatefulSet, and ReplicaSet: will no longer be
served from extensions/v1beta1, apps/v1beta1, or apps/v1beta2 in v1.16.
Migrate to the apps/v1 API, available since v1.9. Existing persisted
data can be retrieved/updated via the apps/v1 API.
```
Previous PRs had already made this change at the Helm templates level,
but we still needed to do it in the API calls and tests.
The integration tests ran fine for k8s 1.12 and 1.15. They fail on 1.16
because the upgrade integration test tries to install linkerd 2.5 which is not
compatible with 1.16.
Signed-off-by: Alejandro Pedraza <alejandro@buoyant.io>
* Fix auto-injecting pods and integration tests reporting
When creating an Event upon auto-injection (#3316), we try to
fetch the parent object to associate the event with it. If the parent
doesn't exist (as in the case of stand-alone pods) the event isn't
created. I had missed handling one part where that parent was
expected.
This also adds a new integration test that I verified fails before this
fix.
Finally, I removed from `_test-run.sh` some `|| exit_code=$?` bits that were
preventing the whole suite from reporting failure whenever one of the tests
in `/tests` failed.
Signed-off-by: Alejandro Pedraza <alejandro@buoyant.io>
The `linkerd upgrade` integration test compares the output from two
commands:
- `linkerd upgrade control-plane`
- `linkerd upgrade control-plane --from-manifests`
The output of these commands includes the heartbeat cronjob schedule,
which is generated based on the current time.
Modify the upgrade integration test to retry the manifest comparison one
time, assuming that `linkerd upgrade control-plane` should not take more
than one minute to execute.
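For context, the schedule is derived from the install time roughly like this (illustrative, not the exact code), which is why two renders straddling a minute boundary differ:
```go
package heartbeat

import (
	"fmt"
	"time"
)

// cronSchedule renders a once-a-day cron expression anchored five
// minutes after install time. Two invocations in different minutes
// produce different manifests, hence the retry in the test.
func cronSchedule(installTime time.Time) string {
	t := installTime.Add(5 * time.Minute)
	return fmt.Sprintf("%d %d * * *", t.Minute(), t.Hour())
}
```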
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
* Stop ignoring client-go log entries
Pipe klog output into logrus. Without this, client-go log entries aren't
visible, for a reason I don't understand.
To enable, `--controller-log-level` must be `debug`.
This was discovered while trying to debug sending events for #3253.
I added an integration test that fails when this piping is not in place.
Signed-off-by: Alejandro Pedraza <alejandro@buoyant.io>
Followup to #3194
The namespace was too long for l5d-bot:
```
inject_test.go:117: failed to create
l5d-integration-auto-git-9688d9ba-inject-namespace-override-test
namespace: Namespace
"l5d-integration-auto-git-9688d9ba-inject-namespace-override-test"
is invalid: metadata.name: Invalid value:
"l5d-integration-auto-git-9688d9ba-inject-namespace-override-test":
must be no more than 63 characters
```
Signed-off-by: Alejandro Pedraza <alejandro@buoyant.io>
* Check for Namespace level config override annotations
* Add unit tests for namespace level config overrides
* add integration test for namespace level config override
* use different namespace for override tests
* check resource requests for integration tests
Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>
* Refactor proxy injection to use Helm charts
Fixes #3128
A new chart `/charts/patch` was created that generates the JSON patch
payload to be returned to the k8s API when doing the injection
through the proxy injector; it's also leveraged by the `linkerd
inject --manual` CLI.
The VFS was used by `linkerd install` to access the old chart under
`/chart`. Now the proxy injection also uses the Helm charts to generate
the JSON patch (see above) so we've moved the VFS from `cli/static` to a
new common place under `/pkg/charts/static`, and the new root for the VFS is
now `/charts`.
`linkerd install` hasn't yet migrated to use the new charts (that'll
happen in #3127), so the only change in that regard was the creation of
`/charts/chart` which is a symlink pointing to `/chart` that
`install.go` now uses, so that the VFS contains both the old and new
charts, as a temporary measure.
You can see that `/bin/Dockerfile-bin`, `/controller/Dockerfile` and
`/bin/build-cli-bin` do now `go generate` pointing to the new location
(and the `go generate` annotation was moved from `/cli/main.go` to
`pkg/charts/static/templates.go`).
The symlink trick doesn't work when building the binaries through
Docker, so `/bin/Dockerfile-bin` replaces the symlink with an actual
copy of `/chart`.
Also note that in `/controller/Dockerfile` we now need to include the
`prod` tag in `go install` like we do in `/bin/Dockerfile-bin` so that
the proxy injector does use the VFS instead of the local file system.
- The common logic to parse a chart has been moved from `install.go` to
`/pkg/charts/util.go`.
- The special ENV var in the proxy for "outbound router capacity" that
only applies to the Prometheus pod is now handled directly in the proxy
partial and all the associated go code could be removed.
- The `patch.go` lib for generating the JSON patch in go along
with its tests `patch_test.go` are no longer needed.
- Lots of functions in `/pkg/inject/inject.go` got removed/simplified
with their logic being moved into the charts themselves. As a
consequence lots of things in `inject_test.go` became irrelevant.
- Moved `template-values.go` from `/pkg/inject` to `pkg/charts` as that
contains the go structs representation of the chart variables that
will be leveraged in #3127.
Don't forget to run `/bin/helm.sh` whenever you make changes to charts
;-)
Signed-off-by: Alejandro Pedraza <alejandro@buoyant.io>
The Tap Service enabled tapping of any meshed pod, regardless of user
privilege.
This change introduces a new Tap APIService. Kubernetes provides
authentication and authorization of Tap requests, and then forwards
requests to a new Tap APIServer, which implements a Kubernetes
aggregated APIServer. The Tap APIServer authenticates the client TLS
from Kubernetes, and authorizes the user via a SubjectAccessReview.
This change also modifies the `linkerd tap` command to make requests
against the new APIService.
The Tap APIService implements these Kubernetes-style endpoints:
POST /apis/tap.linkerd.io/v1alpha1/watch/namespaces/:ns/tap
POST /apis/tap.linkerd.io/v1alpha1/watch/namespaces/:ns/:res/:name/tap
GET /apis
GET /apis/tap.linkerd.io
GET /apis/tap.linkerd.io/v1alpha1
GET /healthz
GET /healthz/log
GET /healthz/ping
GET /metrics
GET /openapi/v2
GET /version
Users authorize to the new `tap.linkerd.io/v1alpha1` via RBAC. Only the
`watch` verb is supported. Access is also available via subresources
such as `deployments/tap` and `pods/tap`.
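A sketch of the authorization step, assuming a recent client-go (the review fields are the real authorization/v1 API; the helper itself is hypothetical):
```go
package tapauth

import (
	"context"

	authzv1 "k8s.io/api/authorization/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// canTap asks the Kubernetes API whether `user` has the `watch` verb on
// the tap subresource of `resource` in `ns`, via a SubjectAccessReview.
func canTap(ctx context.Context, cs kubernetes.Interface, user, ns, resource string) (bool, error) {
	sar := &authzv1.SubjectAccessReview{
		Spec: authzv1.SubjectAccessReviewSpec{
			User: user,
			ResourceAttributes: &authzv1.ResourceAttributes{
				Group:       "tap.linkerd.io",
				Version:     "v1alpha1",
				Verb:        "watch",
				Namespace:   ns,
				Resource:    resource, // e.g. "deployments"
				Subresource: "tap",
			},
		},
	}
	res, err := cs.AuthorizationV1().SubjectAccessReviews().Create(ctx, sar, metav1.CreateOptions{})
	if err != nil {
		return false, err
	}
	return res.Status.Allowed, nil
}
```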
This change introduces the following resources into the default Linkerd
install:
- Global
- APIService/v1alpha1.tap.linkerd.io
- ClusterRoleBinding/linkerd-linkerd-tap-auth-delegator
- `linkerd` namespace:
- Secret/linkerd-tap-tls
- `kube-system` namespace:
- RoleBinding/linkerd-linkerd-tap-auth-reader
Tasks not covered by this PR:
- `linkerd top`
- `linkerd dashboard`
- `linkerd profile --tap`
- removal of the unauthenticated tap controller
Fixes #2725, #3162, #3172
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
Fixes https://github.com/linkerd/linkerd2/issues/2800#issuecomment-513740498
When the Linkerd proxy sends a query for a Kubernetes ExternalName service to the destination service, the destination service returns `NoEndpoints: exists=false` because an ExternalName service has no Endpoints resource. Due to a change in the proxy's fallback logic, this no longer causes the proxy to fall back to either DNS or SO_ORIG_DST and instead fails the request. The net effect is that Linkerd fails all requests to ExternalName services.
We change the destination service to instead return `InvalidArgument` for ExternalName services. This causes the proxy to fall back to SO_ORIG_DST instead of failing the request.
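A sketch of the decision (not the actual destination-service code):
```go
package destination

import (
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
	corev1 "k8s.io/api/core/v1"
)

// checkServiceType rejects ExternalName services with InvalidArgument:
// they have no Endpoints to stream, and the proxy treats InvalidArgument
// as "fall back to SO_ORIG_DST" rather than as a failed request.
func checkServiceType(svc *corev1.Service) error {
	if svc.Spec.Type == corev1.ServiceTypeExternalName {
		return status.Errorf(codes.InvalidArgument,
			"ExternalName service %s/%s cannot be resolved to endpoints",
			svc.Namespace, svc.Name)
	}
	return nil
}
```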
Signed-off-by: Alex Leong <alex@buoyant.io>
PR #3056 introduced a cluster heartbeat cronjob to the Linkerd
installation. This implies the user installing Linkerd requires the
privileges to create CronJobs.
Update `linkerd check` to validate the user has privileges necessary to
create CronJobs.
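One way to express such a check with client-go is a SelfSubjectAccessReview; a sketch under that assumption, not necessarily how `linkerd check` implements it:
```go
package healthcheck

import (
	"context"

	authzv1 "k8s.io/api/authorization/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// canCreateCronJobs asks the API server whether the current user may
// create CronJobs in the given namespace.
func canCreateCronJobs(ctx context.Context, cs kubernetes.Interface, ns string) (bool, error) {
	ssar := &authzv1.SelfSubjectAccessReview{
		Spec: authzv1.SelfSubjectAccessReviewSpec{
			ResourceAttributes: &authzv1.ResourceAttributes{
				Group:     "batch",
				Verb:      "create",
				Namespace: ns,
				Resource:  "cronjobs",
			},
		},
	}
	res, err := cs.AuthorizationV1().SelfSubjectAccessReviews().Create(ctx, ssar, metav1.CreateOptions{})
	if err != nil {
		return false, err
	}
	return res.Status.Allowed, nil
}
```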
Fixes #3057
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
`linkerd check`, the web dashboard, and Grafana all perform version
checks to validate that Linkerd is up to date. It's common for users to
seldom execute these codepaths. This makes it difficult to identify which
versions of Linkerd are currently in use and in what environments it is
being run, information that helps prioritize testing and backports.
Introduce a `heartbeat` CronJob to the default Linkerd install. The
cronjob executes every 24 hours, starting from 5 minutes after
`linkerd install` is run.
Example check URL:
https://versioncheck.linkerd.io/version.json?
install-time=1562761177&
k8s-version=v1.15.0&
meshed-pods=8&
rps=3&
source=heartbeat&
uuid=cc4bb700-3314-426a-9f0f-ec588b9df020&
version=git-b97ee9f7
Fixes #2961
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
The openAPIV3Schema validation in the ServiceProfiles CRD is very limited in what it can validate and is obviated by more sophisticated validation done by the validating admission controller. Therefore, we would like to remove the openAPIV3Schema validation to reduce the size and complexity of the CRD object.
To do so, we must also bump the version of the ServiceProfile custom resource from v1alpha1 to v1alpha2. This ensures that when the controller is upgraded, it will attempt to watch the v1alpha2 resource. If it cannot (because, for example, the controller pod started before the ServiceProfile CRD was updated and therefore the v1alpha2 version does not exist) then it will go into a crash loop backoff until it can. This essentially means that the controller will wait for the CRD to be upgraded to include v1alpha2 before it will start.
Bumping the version is necessary because if we did not, it would be possible for the controller to start before the CRD is updated (removing the validation). In this case, when the CRD is edited, the controller will lose its list watch on ServiceProfiles and will stop getting updates.
Signed-off-by: Alex Leong <alex@buoyant.io>
Integration tests may fail and leave behind namespaces that following
builds aren't able to clean up because the git sha is being included in
the namespace name, and the following builds don't know about those
shas.
This modifies the `test-cleanup` script to delete based on object labels
instead of relying on object names, now that after 2.4 all the
control plane components are labeled. Note that this will also remove
non-testing linkerd namespaces, but we were already partially doing that,
since we were removing the cluster-level resources (CRDs,
webhook configs, clusterroles, clusterrolebindings, psp).
`test-cleanup` no longer receives a namespace name as an argument.
The data plane namespaces aren't labeled though, so I've added the
`linkerd.io/is-test-data-plane` label for them in
`CreateNamespaceIfNotExists()`, making sure all tests that need a
data plane explicitly call that method instead of creating the
namespace as a side effect in `KubectlApply()`.
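A sketch of the label-based deletion in client-go terms (the actual cleanup lives in a shell script; the label key is the one added here):
```go
package cleanup

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// deleteTestNamespaces removes every namespace carrying the test
// data plane label, instead of guessing names that embed a git sha.
func deleteTestNamespaces(ctx context.Context, cs kubernetes.Interface) error {
	nss, err := cs.CoreV1().Namespaces().List(ctx, metav1.ListOptions{
		LabelSelector: "linkerd.io/is-test-data-plane",
	})
	if err != nil {
		return err
	}
	for _, ns := range nss.Items {
		if err := cs.CoreV1().Namespaces().Delete(ctx, ns.Name, metav1.DeleteOptions{}); err != nil {
			return err
		}
	}
	return nil
}
```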
Signed-off-by: Alejandro Pedraza <alejandro@buoyant.io>
During operations with `linkerd stat`, the actual pod status is sometimes
unclear.
This commit introduces a method in the `k8s` package that gets the pod status,
based on [`kubectl` logic](33a3e325f7/pkg/printers/internalversion/printers.go (L558-L640)),
to expose the `STATUS` column for pods. It also changes the stat command
in the `cli` package, adding a column when the resource type is Pod.
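A much-simplified sketch of that logic (the real kubectl printer handles many more cases):
```go
package k8s

import corev1 "k8s.io/api/core/v1"

// podStatus reports the pod phase, preferring a more specific reason
// (e.g. Evicted, CrashLoopBackOff) from the pod or its containers,
// mirroring the spirit of the kubectl printer logic.
func podStatus(pod corev1.Pod) string {
	status := string(pod.Status.Phase)
	if pod.Status.Reason != "" {
		status = pod.Status.Reason
	}
	for _, cs := range pod.Status.ContainerStatuses {
		if cs.State.Waiting != nil && cs.State.Waiting.Reason != "" {
			status = cs.State.Waiting.Reason
		}
	}
	return status
}
```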
Fixes #1967
Signed-off-by: Jonathan Juares Beber <jonathanbeber@gmail.com>
`linkerd check --pre` validates that PSPs provide `NET_ADMIN`, but was
not validating `NET_RAW`, despite `NET_RAW` being required by Linkerd's
proxy-init container since #2969.
Introduce a `has NET_RAW capability` check to `linkerd check --pre`.
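In essence (a sketch, not the actual healthcheck code), the check scans the PSPs for an allowed `NET_RAW` capability:
```go
package healthcheck

import policyv1beta1 "k8s.io/api/policy/v1beta1"

// hasNETRAW reports whether at least one PodSecurityPolicy allows the
// NET_RAW capability, either explicitly or via the "*" wildcard.
func hasNETRAW(psps []policyv1beta1.PodSecurityPolicy) bool {
	for _, psp := range psps {
		for _, c := range psp.Spec.AllowedCapabilities {
			if c == "NET_RAW" || c == "*" {
				return true
			}
		}
	}
	return false
}
```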
Fixes #3054
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
The `linkerd check` for healthy ReplicaSets had a generic
`control plane components ready` description, and a hint anchor to
`l5d-existence-psp`. While a ReplicaSet failure could definitely occur
due to psp, that hintAnchor was already in use by the "control plane
PodSecurityPolicies exist" check.
Rename the `control plane components ready` check to
`control plane replica sets are ready`, and the hintAnchor from
`l5d-existence-psp` to `l5d-existence-replicasets`.
Relates to https://github.com/linkerd/website/issues/372.
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
* Introduce new checks to determine existence of global resources and the
'linkerd-config' config map.
* Update pre-check to check for existence of global resources
This ensures that multiple control planes can't be installed into
different namespaces.
* Update integration test clean-up script to delete psp and crd
Signed-off-by: Ivan Sim <ivan@buoyant.io>
`linkerd check` validates whether PSPs exist, and if the caller has the
`NET_ADMIN` capability. This check was previously failing if `NET_ADMIN`
was not found, even in the case where the PSP admission controller was
not running. Relatedly, `linkerd install` now includes a PSP, so
`linkerd check` should also validate that the caller can create PSPs.
Modify the `NET_ADMIN` check to warn, but not fail, if PSPs are found
but the caller does not have `NET_ADMIN`. Update the warning message to
mention that this is only a problem if the PSP admission controller is
running (and will only be a problem during injection, since #2920
handles control plane installation by adding its own PSP).
Also introduce a check to validate that the caller can create PSPs.
Fixes #2884, #2849
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
Fixes #2927
Also moved `TestInstallSP` after `TestCheckPostInstall` so we're sure
the validating webhook is ready before installing a service profile.
Signed-off-by: Alejandro Pedraza Borrero <alejandro@buoyant.io>
This is a major refactor of the destination service. The goals of this refactor are to simplify the code for improved maintainability. In particular:
* Remove the "resolver" interfaces. These were a holdover from when our decision tree was more complex about how to handle different kinds of authorities. The current implementation only accepts fully qualified Kubernetes service names, and thus this was an unnecessary level of indirection.
* Moved the endpoints and profile watchers into their own package for a clearer separation of concerns. These watchers deal only in Kubernetes primitives and are agnostic to how they are used. This allows a cleaner layering when we use them from our gRPC service.
* Renamed the "listener" types to "translator" to make it more clear that the function of these structs is to translate kubernetes updates from the watcher to gRPC messages.
Signed-off-by: Alex Leong <alex@buoyant.io>
Split proxy-init into separate repo
Fixes #2563
The new repo is https://github.com/linkerd/linkerd2-proxy-init, and I
tagged the latest there `v1.0.0`.
Here, I've removed the `/proxy-init` dir and pinned the injected
proxy-init version to `v1.0.0` in the injector code and tests.
`/cni-plugin` depends on proxy-init, so I updated the import paths
there, and could verify CNI is still working (there is some flakiness,
but it's unrelated to this PR).
For consistency, I added a `--init-image-version` flag to `linkerd
inject` along with its corresponding override config annotation.
Signed-off-by: Alejandro Pedraza <alejandro@buoyant.io>
In #2679 we introduced an upgrade integration test. At the time we only
supported upgrading from a recent edge. Since that PR, a stable build
was released supporting upgrade.
Modify the upgrade integration test to upgrade from the latest stable
rather than latest edge. This fulfills the original intent of #2669.
Also add some known k8s event warnings.
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
Support for resources opting out of tap
Implements the `linkerd inject --disable-tap` flag (although hidden pending #2811) and the config override annotation `config.linkerd.io/disable-tap`.
Fixes #2778
Signed-off-by: Alejandro Pedraza <alejandro@buoyant.io>
Integration test for k8s events generated during install
Fixes #2713
I did make sure a scenario like the one described in #2964 is caught.
Signed-off-by: Alejandro Pedraza <alejandro@buoyant.io>
With the server configured to respond with a 50% failure rate, the test first
checks to ensure the actual success rate is less than 100%. Then the
service profile is edited to perform retries. The test then checks to
ensure the effective success rate is at least 95%.
This is (hopefully) more reliable than having the test wait and retry
until there is a difference between the effective and actual success
rates and then comparing them.
Signed-off-by: Ivan Sim <ivan@buoyant.io>
Add support for `linkerd check config`. Validates the existence of the
Linkerd Namespace, ClusterRoles, ClusterRoleBindings, ServiceAccounts,
and CustomResourceDefinitions.
Part of #2337
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
PR #2737 introduced a warning in the proxy-injector when owner ref
lookups failed due to not having up-to-date ReplicaSet information. That
warning may occur during integration tests, causing a failure.
Add the warning as a known controller log message. The warning will be
printed as a skipped test, allowing the integration tests to pass.
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
`linkerd install` supports a 2-stage install process; `linkerd upgrade`
did not.
Add 2-stage support for `linkerd upgrade`. Also exercise multi-stage
functionality during upgrade integration tests.
Part of #2337
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
Fixes #2720 and #2711
This changes the default behavior of `linkerd inject` to not inject the
proxy, but instead just add the `linkerd.io/inject: enabled` annotation for
the auto-injector to pick up (regardless of any namespace annotation).
A new `--manual` mode was added, which behaves as before, injecting
the proxy in the command output.
The unit tests are running with `--manual` to avoid any changes in the
fixtures.
Signed-off-by: Alejandro Pedraza <alejandro@buoyant.io>
Add config.linkerd.io/disable-identity annotation
First part of #2540
We'll tackle support for `--disable-identity` in `linkerd install` in a
separate commit.
Signed-off-by: Alejandro Pedraza <alejandro@buoyant.io>
The integration tests check container logs for errors. When an error was
encountered that matched a list of expected errors, it was hidden and
the test passed.
Modify the integration tests to report known errors in logs via
`t.Skipf`.
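A sketch of the reporting pattern (`knownErrors` and its contents are hypothetical):
```go
package testutil

import (
	"regexp"
	"strings"
	"testing"
)

// knownErrors is a hypothetical whitelist for this illustration.
var knownErrors = []*regexp.Regexp{
	regexp.MustCompile(`the object has been modified`),
}

// checkLogLine surfaces an expected error via t.Skipf, so it shows up
// as a skipped test instead of being silently hidden; any other error
// still fails the test.
func checkLogLine(t *testing.T, line string) {
	for _, re := range knownErrors {
		if re.MatchString(line) {
			t.Skipf("found known error in logs: %s", line)
			return
		}
	}
	if strings.Contains(line, "error") {
		t.Errorf("unexpected error in logs: %s", line)
	}
}
```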
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
Fixes #2465
* Add check for unschedulable pods and psp issues (#2465)
* Return error reason and message on pod or node failure
Signed-off-by: Gaurav Kumar <gaurav.kumar9825@gmail.com>
* The 'linkerd-version' CLI flag is renamed to 'control-plane-version'
* Add version field to proxy config
* Add the control plane version to the global config
* Unit test for init image version
* Use more specific control plane and proxy versions in unit tests
Signed-off-by: Ivan Sim <ivan@buoyant.io>
The `linkerd upgrade` command read the control-plane's config from
Kubernetes, which required the environment to be configured to connect
to the appropriate k8s cluster.
Introduce a `linkerd upgrade --from-manifests` flag, allowing the user
to feed the output of `linkerd install` into the upgrade command.
Fixes #2629
Signed-off-by: Andrew Seigner <siggy@buoyant.io>