Fixes #3332
Fixes the very rare test failure
```
--- FAIL: TestGetProfiles (0.33s)
--- FAIL: TestGetProfiles/Returns_server_profile (0.11s)
server_test.go:228: Expected 1 or 2 updates but got 3:
[retry_budget:<retry_ratio:0.2 min_retries_per_second:10
ttl:<seconds:10 > > routes:<condition:<path:<regex:"/a/b/c"
> > metrics_labels:<key:"route" value:"route1" >
timeout:<seconds:10 > > retry_budget:<retry_ratio:0.2
min_retries_per_second:10 ttl:<seconds:10 > >
routes:<condition:<path:<regex:"/a/b/c" > >
metrics_labels:<key:"route" value:"route1" >
timeout:<seconds:10 > > retry_budget:<retry_ratio:0.2
min_retries_per_second:10 ttl:<seconds:10 > > ]
FAIL
FAIL github.com/linkerd/linkerd2/controller/api/destination
0.624s
```
that occurs when a third, unexpected stream update arrives because the fake
API takes more time to notify its listeners about the resources created.
For all the nasty details, check #3332.
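To make the race easier to reason about, here is a minimal sketch (not the actual fix in this PR) of a test assertion that tolerates a late duplicate update from the fake API instead of failing outright; the `assertProfileUpdates` helper and the string payloads are hypothetical stand-ins for the real protobuf profile updates:
```go
// Sketch only: accept one or two updates, and tolerate a third only when it is
// a duplicate of the previous one (the fake API notifying its listeners again
// after the test resources were created).
package destination

import "testing"

func assertProfileUpdates(t *testing.T, updates []string) {
	t.Helper()
	switch len(updates) {
	case 1, 2:
		// expected
	case 3:
		if updates[2] != updates[1] {
			t.Fatalf("expected the extra update to be a duplicate, got: %v", updates)
		}
	default:
		t.Fatalf("expected 1 or 2 updates but got %d: %v", len(updates), updates)
	}
}
```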
From time to time we get this CI error when testing the external issuer
mechanism:
```
Test script: [external_issuer_test.go] Params:
[--linkerd-namespace=l5d-integration-external-issuer
--external-issuer=true]
--- FAIL: TestExternalIssuer (33.61s)
external_issuer_test.go:89: Received error while ensuring test app
works (before cert rotation): Error stripping header and trailing
newline; full output:
FAIL
```
https://github.com/alpeb/linkerd2/runs/428273855?check_suite_focus=true#step:6:526
This is caused by the "backend" pod not receiving traffic from
"slow-cooker" in a timely manner.
After those pods are deployed we're only checking that "backend" is
ready, but not "slow-cooker", so this change adds that check.
I'm also removing the `TestHelper.CheckDeployment` call because it's
redundant, since the preceding `TestHelper.CheckPods` already checks
that the deployment has all the specified replicas ready.
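A minimal sketch of what the added check might look like, assuming the `testutil` package layout and a `CheckPods(namespace, deployment, replicas)` signature like the one used by the other integration tests (the package, helper names, and signature are assumptions, not the exact code in this PR):
```go
// Sketch only: wait for both deployments, not just backend, before driving
// traffic. Newer versions of the helper may also take a context argument.
package externalissuer

import (
	"testing"

	"github.com/linkerd/linkerd2/testutil"
)

// TestHelper is normally initialized in TestMain; declared here only to make
// the sketch self-contained.
var TestHelper *testutil.TestHelper

func checkTestAppPods(t *testing.T, ns string) {
	t.Helper()
	for _, deploy := range []string{"backend", "slow-cooker"} {
		if err := TestHelper.CheckPods(ns, deploy, 1); err != nil {
			t.Fatalf("error validating pods for deploy %s: %s", deploy, err)
		}
	}
	// The previous TestHelper.CheckDeployment(ns, "backend", 1) call is dropped:
	// CheckPods above already verifies that the deployment has all replicas ready.
}
```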
* Allow CI to run concurrent builds in master
Fixes #3911
Refactors the `cloud_integration` test to run in separate GKE clusters
that are created and torn down on the fly.
It leverages a new "gcloud" github action that is also used to set up
gcloud in other build steps (`docker_deploy` and `chart_deploy`).
The action also generates unique names for those clusters, based on the
git commit SHA and `run_id`, a recently introduced variable that is
unique per CI run and available to all the jobs.
This fixes part of #3635 in that CI runs on the same SHA don't interfere
with one another (in the `cloud_integration` test; still to do for
`kind_integration`).
The "gcloud" GH action is hosted under its own repo in https://github.com/linkerd/linkerd2-action-gcloud
* Allow CI to run concurrent builds in master
Fixes #3911
Refactors the `cloud_integration` test to run in separate GKE clusters
that are created and torn down on the fly.
It leverages a new "gcloud" github action that is also used to set up
gcloud in other build steps (`docker_deploy` and `chart_deploy`).
The action also generates unique names for those clusters, based on the
git commit SHA and `run_id`, a recently introduced variable that is
unique per CI run and available to all the jobs.
This fixes part of #3635 in that CI runs on the same SHA don't interfere
with one another (in the `cloud_integration` test; still to do for
`kind_integration`).
The "gcloud" GH action is supported by `.github/actions/gcloud/index.js`
that has a couple of dependencies. To avoid having to commit
`node_modules`, after every change to that file one must run
```bash
# only needed the first time
npm i -g @zeit/ncc
cd .github/actions/gcloud
ncc build index.js
```
which generates the self-contained file
`.github/actions/gcloud/dist/index.js`.
(This last part might get easier in the future after other refactorings
outside this PR).
* Run integration tests for forked repos
Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
* Address reviews
Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
* Address more reviews
Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
* Move some conditionals to jobs
Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
* Change job name
Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
* Move more conditionals to job level
Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
* Added more flags to 'gcloud container clusters create' and consolidated
'create' and 'destroy' into one action
* Run kind cleanup only for non-forked PRs
Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
* Got rid of cloud_cleanup by using a post hook in the gcloud action
* Removed cluster naming responsibility from the gcloud action
* Consolidate .gitignore statements
* Removed bin/_gcp.sh
* Change name of Kind int. test job
Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
* Ensure `kind_cleanup` still runs on cancelled host CI runs
Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
* Add reviews
Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
* Update workflow comment
Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
* Split index.js into setup.js and destroy.js
* trigger build
* Moved the gcloud action into its own repo
* Full version for the gcloud GH action
* Rebase back to master
Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
* Remove additional changes
Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
* Remove additional changes
Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
* Trigger CI
Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
Co-authored-by: Alejandro Pedraza <alejandro.pedraza@gmail.com>
## stable-2.7.0
This release adds support for integrating Linkerd's PKI with an external
certificate issuer such as [`cert-manager`] as well as streamlining the
certificate rotation process in general. For more details about cert-manager
and certificate rotation, see the
[docs](https://linkerd.io/2/tasks/use_external_certs/). This release also
includes performance improvements to the dashboard, reduced memory usage of the
proxy, various improvements to the Helm chart, and much much more.
To install this release, run: `curl https://run.linkerd.io/install | sh`
**Upgrade notes**: This release includes breaking changes to our Helm charts.
Please see the [upgrade instructions](https://linkerd.io/2/tasks/upgrade/#upgrade-notice-stable-270).
**Special thanks to**: @alenkacz, @bmcstdio, @daxmc99, @droidnoob, @ereslibre,
@javaducky, @joakimr-axis, @JohannesEH, @KIVagant, @mayankshah1607,
@Pothulapati, and @StupidScience!
**Full release notes**:
* CLI
* Updated the mTLS trust anchor checks to eliminate false positives caused by
extra trailing spaces
* Reduced the severity level of the Linkerd version checks, so that they
don't fail when the external version endpoint is unreachable
(thanks @mayankshah1607!)
* Added a new `tap` APIService check to aid with uncovering Kubernetes API
aggregation layer issues (thanks @droidnoob!)
* Introduced CNI checks to confirm the CNI plugin is installed and ready;
this is done through `linkerd check --pre --linkerd-cni-enabled` before
installation and `linkerd check` after installation if the CNI plugin is
present
* Added support for the `--as-group` flag so that users can impersonate
groups for Kubernetes operations (thanks @mayankshah1607!)
* Added HA specific checks to `linkerd check` to ensure that the `kube-system`
namespace has the `config.linkerd.io/admission-webhooks:disabled`
label set
* Fixed a problem causing the presence of unnecessary empty fields in
generated resource definitions (thanks @mayankshah1607)
* Added the ability to pass both port numbers and port ranges to
`--skip-inbound-ports` and `--skip-outbound-ports` (thanks to @javaducky!)
* Increased the comprehensiveness of `linkerd check --pre`
* Added TLS certificate validation to `check` and `upgrade` commands
* Added support for injecting CronJobs and ReplicaSets, as well as the ability
to use them as targets in the CLI subcommands
* Introduced the new flags `--identity-issuer-certificate-file`,
`--identity-issuer-key-file` and `--identity-trust-anchors-file` to `linkerd
upgrade` to support trust anchor and issuer certificate rotation
* Added a check that ensures using `--namespace` and `--all-namespaces`
results in an error as they are mutually exclusive
* Added a `Dashboard.Replicas` parameter to the Linkerd Helm chart to allow
configuring the number of dashboard replicas (thanks @KIVagant!)
* Removed redundant service profile check (thanks @alenkacz!)
* Updated `uninject` command to work with namespace resources
(thanks @mayankshah1607!)
* Added a new `--identity-external-issuer` flag to `linkerd install` that
configures Linkerd to use certificates issued by an external certificate
issuer (such as `cert-manager`)
* Added support for injecting a namespace to `linkerd inject` (thanks
@mayankshah1607!)
* Added checks to `linkerd check --preinstall` ensuring Kubernetes Secrets
can be created and accessed
* Fixed `linkerd tap` sometimes displaying incorrect pod names for unmeshed
IPs that match multiple running pods
* Made `linkerd install --ignore-cluster` and `--skip-checks` faster
* Fixed a bug causing `linkerd upgrade` to fail when used with
`--from-manifest`
* Made `--cluster-domain` an install-only flag (thanks @bmcstdio!)
* Updated `check` to ensure that proxy trust anchors match configuration
(thanks @ereslibre!)
* Added condition to the `linkerd stat` command that requires a window size
of at least 15 seconds to work properly with Prometheus
* Controller
* Fixed an issue where an override of the Docker registry was not being
applied to debug containers (thanks @javaducky!)
* Added check for the Subject Alternate Name attributes to the API server
when access restrictions have been enabled (thanks @javaducky!)
* Added support for arbitrary pod labels so that users can leverage the
Linkerd provided Prometheus instance to scrape for their own labels
(thanks @daxmc99!)
* Fixed an issue with CNI config parsing
* Fixed a race condition in the `linkerd-web` service
* Updated Prometheus to 2.15.2 (thanks @Pothulapati)
* Increased minimum kubernetes version to 1.13.0
* Added support for pod ip and service cluster ip lookups in the destination
service
* Added recommended kubernetes labels to control-plane
* Added the `--wait-before-exit-seconds` flag to linkerd inject for the proxy
sidecar to delay the start of its shutdown process (a huge commit from
@KIVagant, thanks!)
* Added a pre-sign check to the identity service
* Fixed inject failures for pods with security context capabilities
* Added `conntrack` to the `debug` container to help with connection tracking
debugging
* Fixed a bug in `tap` where mismatched cluster domain and trust domain caused
`tap` to hang
* Fixed an issue in the `identity` RBAC resource which caused start up errors
in k8s 1.6 (thanks @Pothulapati!)
* Added support for using trust anchors from an external certificate issuer
(such as `cert-manager`) to the `linkerd-identity` service
* Added support for headless services (thanks @JohannesEH!)
* Helm
* **Breaking change**: Renamed `noInitContainer` parameter to `cniEnabled`
* **Breaking change**: Updated Helm charts to follow best practices (thanks
@Pothulapati and @javaducky!)
* Fixed an issue with `helm install` where the lists of ignored inbound and
outbound ports would not be reflected
* Fixed the `linkerd-cni` Helm chart not setting proper namespace annotations
and labels
* Fixed certificate issuance lifetime not being set when installing through
Helm
* Updated the helm build to retain previous releases
* Moved CNI template into its own Helm chart
* Proxy
* Fixed an issue that could cause the OpenCensus exporter to stall
* Improved error classification and error responses for gRPC services
* Fixed a bug where the proxy could stop receiving service discovery updates,
resulting in 503 errors
* Improved debug/error logging to include detailed contextual information
* Fixed a bug in the proxy's logging subsystem that could cause the proxy to
consume memory until the process is OOM killed, especially when the proxy was
configured to log diagnostic information
* Updated proxy dependencies to address RUSTSEC-2019-0033, RUSTSEC-2019-0034,
and RUSTSEC-2020-02
* Web UI
* Fixed an error when refreshing an already open dashboard when the Linkerd
version has changed
* Increased the speed of the dashboard by pausing network activity when the
dashboard is not visible to the user
* Added support for CronJobs and ReplicaSets, including new Grafana dashboards
for them
* Added `linkerd check` to the dashboard in the `/controlplane` view
* Added request and response headers to the `tap` expanded view in the
dashboard
* Added filter to namespace select button
* Improved how empty tables are displayed
* Added `Host:` header validation to the `linkerd-web` service, to protect
against DNS rebinding attacks
* Made the dashboard sidebar component responsive
* Changed the navigation bar color to the one used on the [Linkerd](https://linkerd.io/) website
* Internal
* Added validation to incoming sidecar injection requests that ensures
the value of `linkerd.io/inject` is either `enabled` or `disabled`
(thanks @mayankshah1607)
* Upgraded the Prometheus Go client library to v1.2.1 (thanks @daxmc99!)
* Fixed an issue causing `tap`, `injector` and `sp-validator` to use
old certificates after `helm upgrade` due to not being restarted
* Fixed incomplete Swagger definition of the tap api, causing benign
error logging in the kube-apiserver
* Removed the destination container from the linkerd-controller deployment as
it now runs in the linkerd-destination deployment
* Allowed the control plane to be injected with the `debug` container
* Updated proxy image build script to support HTTP proxy options
(thanks @joakimr-axis!)
* Updated the CLI `doc` command to auto-generate documentation for the proxy
configuration annotations (thanks @StupidScience!)
* Added new `--trace-collector` and `--trace-collector-svc-account` flags to
`linkerd inject` that configures the OpenCensus trace collector used by
proxies in the injected workload (thanks @Pothulapati!)
* Added a new `--control-plane-tracing` flag to `linkerd install` that enables
distributed tracing in the control plane (thanks @Pothulapati!)
* Added distributed tracing support to the control plane (thanks
@Pothulapati!)
[`cert-manager`]: https://github.com/jetstack/cert-manager
Signed-off-by: Alex Leong <alex@buoyant.io>
This edge release is a release candidate for `stable-2.7` and fixes an issue
where the proxy could consume inappropriate amounts of memory.
* Proxy
* Fixed a bug in the proxy's logging subsystem that could cause the proxy to
consume memory until the process is OOMKilled, especially when the proxy was
configured to log diagnostic information
* Fixed the proxy so that it properly emits `grpc-status` headers when
signaling errors to gRPC clients
* Internal
* Updated to Rust 1.40
* Updated certain proxy dependencies to address RUSTSEC-2019-0033,
RUSTSEC-2019-0034, and RUSTSEC-2020-02
Signed-off-by: Alex Leong <alex@buoyant.io>
This release fixes a bug in the proxy's logging subsystem that could
cause the proxy to consume memory until the process is OOMKilled,
especially when the proxy was configured to log diagnostic information.
The proxy also now properly emits `grpc-status` headers when signaling
proxy errors to gRPC clients.
This release upgrades the proxy's Rust version, updates the `http` crate
dependency to address RUSTSEC-2019-0033 and RUSTSEC-2019-0034, and patches the
`prost` crate dependency to address RUSTSEC-2020-02.
---
* internal: Introduce a locking middleware (linkerd/linkerd2-proxy#408)
* Update to Rust 1.40 with new Cargo.lock format (linkerd/linkerd2-proxy#410)
* Update http to v0.1.21 (linkerd/linkerd2-proxy#412)
* internal: Split retry, http-classify, and http-metrics (linkerd/linkerd2-proxy#409)
* Actually update http to v0.1.21 (linkerd/linkerd2-proxy#413)
* patch `prost` 0.5 to pick up security fix (linkerd/linkerd2-proxy#414)
* metrics: Make Counter & Gauge atomic (linkerd/linkerd2-proxy#415)
* Set grpc-status headers on dispatch errors (linkerd/linkerd2-proxy#416)
* trace: update `tracing-subscriber` to 0.2.0-alpha.4 (linkerd/linkerd2-proxy#418)
* discover: Warn on discovery error (linkerd/linkerd2-proxy#422)
* router: Avoid large up-front allocations (linkerd/linkerd2-proxy#421)
* errors: Set correct HTTP version on responses (linkerd/linkerd2-proxy#424)
* app: initialize tracing prior to parsing env vars (linkerd/linkerd2-proxy#425)
* trace: update tracing-subscriber to 0.2.0-alpha.6 (linkerd/linkerd2-proxy#423)
In light of the breaking changes we are introducing to the Helm chart and the convoluted upgrade process (see linkerd/website#647), an integration test can be quite helpful. It simply installs the latest stable release through `helm install` and then upgrades to the current head of the branch.
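Roughly, the test boils down to something like the following sketch, using plain `helm` invocations via `os/exec`; the actual test uses the repo's test helpers and extra install flags, and the release/chart names (and the Helm 2 style `--name` flag) are assumptions:
```go
// Sketch only: install the latest published stable chart, then upgrade in
// place to the chart built from the current branch.
package helmupgrade

import (
	"os/exec"
	"testing"
)

func TestHelmUpgrade(t *testing.T) {
	steps := [][]string{
		{"helm", "install", "--name", "linkerd2", "linkerd/linkerd2"},
		{"helm", "upgrade", "linkerd2", "./charts/linkerd2"},
	}
	for _, args := range steps {
		out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
		if err != nil {
			t.Fatalf("%v failed: %s\n%s", args, err, out)
		}
	}
}
```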
Signed-off-by: Zahari Dichev <zaharidichev@gmail.com>
* Refactoring to suppress eslint warnings
Upon enabling the react/no-did-update-set-state rule in .eslintrc, a couple of warnings were raised because it is bad practice to call setState() within the componentDidUpdate() hook.
The code has been refactored to satisfy this rule.
During code review, it was pointed out that react/no-did-update-set-state is enabled by default and can be removed from .eslintrc.
The flag was removed from .eslintrc accordingly.
Fixes #3928
Signed-off-by: Christy Jacob <christyjacob4@gmail.com>
This fix ensures that we ignore whitespace and newlines when checking that roots match between the Linkerd config map and the issuer secret (in the case of using an external issuer + Helm).
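A minimal sketch of the kind of comparison this implies, assuming a hypothetical `normalizePEM` helper (the actual functions in the codebase may differ):
```go
// Sketch only: normalize whitespace before comparing the trust roots from the
// linkerd-config config map with the roots in the issuer secret.
package main

import (
	"fmt"
	"strings"
)

// normalizePEM collapses all whitespace (spaces, tabs, newlines) so that two
// PEM bundles that differ only in formatting compare as equal.
func normalizePEM(pem string) string {
	return strings.Join(strings.Fields(pem), "")
}

func rootsMatch(configMapRoots, issuerSecretRoots string) bool {
	return normalizePEM(configMapRoots) == normalizePEM(issuerSecretRoots)
}

func main() {
	a := "-----BEGIN CERTIFICATE-----\nMIIB...\n-----END CERTIFICATE-----\n"
	b := "-----BEGIN CERTIFICATE-----\nMIIB...\n-----END CERTIFICATE-----"
	fmt.Println(rootsMatch(a, b)) // true: the trailing newline is ignored
}
```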
Fixes: #3907
Signed-off-by: Zahari Dichev <zaharidichev@gmail.com>
## edge-20.1.3
An update to the Helm charts has caused a **breaking change** for users who
have installed Linkerd using Helm. In order to make the purpose of the
`NoInitContainer` parameter more explicit, it has been renamed to `CniEnabled`.
* CLI
* Introduced `linkerd check --pre --linkerd-cni-enabled`, to be run when the CNI
plugin is in use, to check that it has been properly installed before proceeding
with the control plane installation
* Added support for the `--as-group` flag so that users can impersonate
groups for Kubernetes operations (thanks @mayankshah1607!)
* Controller
* Fixed an issue where an override of the Docker registry was not being
applied to debug containers (thanks @javaducky!)
* Added check for the Subject Alternate Name attributes to the API server
when access restrictions have been enabled (thanks @javaducky!)
* Added support for arbitrary pod labels so that users can leverage the
Linkerd provided Prometheus instance to scrape for their own labels
(thanks @daxmc99!)
* Fixed an issue with CNI config parsing
* Helm
* **Breaking change**: Renamed `NoInitContainer` parameter to `CniEnabled`
* Fixed an issue with `helm install` where the lists of ignored inbound and
outbound ports would not be reflected
Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
*From the comment disabling the test*:
#2316
The response from `http://httpbin.org/get` is non-deterministic, returning
either `http://..` or `https://..` for GET requests. As #2316 mentions, this
test should not have an external dependency on this endpoint. As a workaround
for edge-20.1.3, temporarily disable this test and re-enable it with one that
has reliable behavior.
Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
There was a problem that caused `helm install` to not reflect the proper list of ignored inbound and outbound ports. Namely, if you supply just one port, it would not get reflected.
To reproduce, run:
```bash
helm install \
--name=linkerd2 \
--set-file global.identityTrustAnchorsPEM=ca.crt \
--set-file identity.issuer.tls.crtPEM=issuer.crt \
--set-file identity.issuer.tls.keyPEM=issuer.key \
--set identity.issuer.crtExpiry=2021-01-14T14:21:43Z \
--set-string global.proxyInit.ignoreInboundPorts="6666" \
linkerd-edge/linkerd2
```
Check your config:
```bash
$ kubectl get configmap -n linkerd -oyaml | grep ignoreInboundPort
"ignoreInboundPorts":[],
```
Signed-off-by: Zahari Dichev <zaharidichev@gmail.com>
## edge-20.1.3
* CLI
* Introduced `linkerd check --pre --linkerd-cni-enabled`, to be run when the CNI
plugin is in use, to check that it has been properly installed before proceeding
with the control plane installation
* Added support for the `--as-group` flag so that users can impersonate
groups for Kubernetes operations (thanks @mayankshah1607!)
* Controller
* Fixed an issue where an override of the Docker registry was not being
applied to debug containers (thanks @javaducky!)
* Added check for the Subject Alternate Name attributes to the API server
when access restrictions have been enabled (thanks @javaducky!)
* Added support for arbitrary pod labels so that users can leverage the
Linkerd provided Prometheus instance to scrape for their own labels
(thanks @daxmc99!)
* Fixed an issue with CNI config parsing
Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
This allows for users of Linkerd to leverage the Prometheus instance
deployed by the mesh for their metric needs. With support for pod labels
outside of the Linkerd metrics users are able to scrape metrics
based upon their own labels.
Signed-off-by: Dax McDonald <dax@rancher.com>
**Subject**
Utilize Common Name or Subject Alternate Name for access checks (#3459)
**Problem**
When access restrictions to the API server have been enabled with the `requestheader-allowed-names` configuration, only the Common Name of the requestor certificate is checked. This check should also include the Subject Alternate Name attributes.
**Solution**
The API server will now check the SAN attributes (DNS Names, Email Addresses, IP Addresses, and URIs) when determining accessibility for allowed names.
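For illustration, a sketch of such a check using Go's `crypto/x509` certificate fields; the function and variable names are assumptions, not the actual implementation:
```go
// Sketch only: check Subject Alternative Name attributes in addition to the
// Common Name when deciding whether a client certificate matches the
// requestheader-allowed-names list.
package apiserver

import "crypto/x509"

// isAllowed reports whether any identity carried by the certificate (Common
// Name, DNS names, email addresses, IP addresses, or URIs) appears in the
// allowed-names list. An empty list means no restriction is configured.
func isAllowed(cert *x509.Certificate, allowedNames []string) bool {
	if len(allowedNames) == 0 {
		return true
	}

	identities := []string{cert.Subject.CommonName}
	identities = append(identities, cert.DNSNames...)
	identities = append(identities, cert.EmailAddresses...)
	for _, ip := range cert.IPAddresses {
		identities = append(identities, ip.String())
	}
	for _, uri := range cert.URIs {
		identities = append(identities, uri.String())
	}

	for _, id := range identities {
		for _, allowed := range allowedNames {
			if id == allowed {
				return true
			}
		}
	}
	return false
}
```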
Fixes issue #3459
Signed-off-by: Paul Balogh <javaducky@gmail.com>
This is a follow-up to #3882, which adopted a bunch of new linting rules
in our Javascript codebase. The no-use-before-define rule requires
moving some functions around, so I'm doing it in a separate branch.
Note that I was originally going to also enable the react/sort-comp rule
as part of this branch, but I decided that the sort ordering doesn't
work for our codebase.
Signed-off-by: Kevin Lingerfelt <kl@buoyant.io>
**Subject**
Fixes bug where override of Docker registry was not being applied to debug containers (#3851)
**Problem**
Overrides for Docker registry are not being applied to debug containers and provide no means to correct the image.
**Solution**
This update expands the `data.proxy` configuration section within the Linkerd `ConfigMap` to maintain the overridden image name for debug containers at _install_-time similar to handling of the `proxy` and `proxyInit` images.
This change also enables the further override option of the registry for debug containers at _inject_-time given utilization of the `--registry` CLI option.
**Validation**
Several new unit tests have been created to confirm functionality. In addition, the following workflows were run through:
### Standard Workflow with Custom Registry
This workflow installs Linkerd control plane based upon a custom registry, then injecting the debug sidecar into a service.
* Start with a k8s instance having no Linkerd installation
* Build all images locally using `bin/docker-build`
* Create custom tags (using same version) for generated images, e.g. `docker tag gcr.io/linkerd-io/debug:git-a4ebecb6 javaducky.com/linkerd-io/debug:git-a4ebecb6`
* Install Linkerd with registry override `bin/linkerd install --registry=javaducky.com/linkerd-io | kubectl apply -f -`
* Once Linkerd has been fully initialized, you should be able to confirm that the `linkerd-config` ConfigMap now contains the debug image name, pull policy, and version within the `data.proxy` section
* Request injection of the debug image into an available container. I used the Emojivoto voting service as described in https://linkerd.io/2/tasks/using-the-debug-container/ as `kubectl -n emojivoto get deploy/voting -o yaml | bin/linkerd inject --enable-debug-sidecar - | kubectl apply -f -`
* Once the deployment creates a new pod for the service, inspection should show that the container now includes the "linkerd-debug" container name based on the applicable override image seen previously within the ConfigMap
* Debugging can also be verified by viewing debug container logs as `kubectl -n emojivoto logs deploy/voting linkerd-debug -f`
* Modifying the `config.linkerd.io/enable-debug-sidecar` annotation, setting to “false”, should show that the pod will be recreated no longer running the debug container.
### Overriding the Custom Registry Override at Injection
This builds upon the “Standard Workflow with Custom Registry” by overriding the Docker registry utilized for the debug container at the time of injection.
* “Clean” the Emojivoto voting service by removing any Linkerd annotations from the deployment
* Request injection similar to before, except provide the `--registry` option as in `kubectl -n emojivoto get deploy/voting -o yaml | bin/linkerd inject --enable-debug-sidecar --registry=gcr.io/linkerd-io - | kubectl apply -f -`
* Inspection of the deployment config should now show the override annotation for `config.linkerd.io/debug-image` having the debug container from the new registry. Viewing the running pod should show that the `linkerd-debug` container was injected and running the correct image. Of note, the proxy and proxy-init images are still running the “original” override images.
* As before, modifying the `config.linkerd.io/enable-debug-sidecar` annotation setting to “false”, should show that the pod will be recreated no longer running the debug container.
### Standard Workflow with Default Registry
This workflow is the typical workflow which utilizes the standard Linkerd image registry.
* Uninstall the Linkerd control plane using `bin/linkerd install --ignore-cluster | kubectl delete -f -` as described at https://linkerd.io/2/tasks/uninstall/
* Clean the Emojivoto environment using `curl -sL https://run.linkerd.io/emojivoto.yml | kubectl delete -f -` then reinstall using `curl -sL https://run.linkerd.io/emojivoto.yml | kubectl apply -f -`
* Perform standard Linkerd installation as `bin/linkerd install | kubectl apply -f -`
* Once Linkerd has been fully initialized, you should be able to confirm that the `linkerd-config` ConfigMap references the default debug image of `gcr.io/linkerd-io/debug` within the `data.proxy` section
* Request injection of the debug image into an available container as `kubectl -n emojivoto get deploy/voting -o yaml | bin/linkerd inject --enable-debug-sidecar - | kubectl apply -f -`
* Debugging can also be verified by viewing debug container logs as `kubectl -n emojivoto logs deploy/voting linkerd-debug -f`
* Modifying the `config.linkerd.io/enable-debug-sidecar` annotation, setting to “false”, should show that the pod will be recreated no longer running the debug container.
### Overriding the Default Registry at Injection
This workflow builds upon the “Standard Workflow with Default Registry” by overriding the Docker registry utilized for the debug container at the time of injection.
* “Clean” the Emojivoto voting service by removing any Linkerd annotations from the deployment
* Request injection similar to before, except provide the `--registry` option as in `kubectl -n emojivoto get deploy/voting -o yaml | bin/linkerd inject --enable-debug-sidecar --registry=javaducky.com/linkerd-io - | kubectl apply -f -`
* Inspection of the deployment config should now show the override annotation for `config.linkerd.io/debug-image` having the debug container from the new registry. Viewing the running pod should show that the `linkerd-debug` container was injected and running the correct image. Of note, the proxy and proxy-init images are still running the “original” override images.
* As before, modifying the `config.linkerd.io/enable-debug-sidecar` annotation setting to “false”, should show that the pod will be recreated no longer running the debug container.
Fixes issue #3851
Signed-off-by: Paul Balogh <javaducky@gmail.com>
As part of the effort to remove the "experimental" label from the CNI plugin, this PR introduces CNI checks to `linkerd check`.
Signed-off-by: Zahari Dichev <zaharidichev@gmail.com>
This integration test roughly follows the [Linkerd guide to distributed tracing](https://linkerd.io/2019/10/07/a-guide-to-distributed-tracing-with-linkerd/).
We deploy the tracing components (oc-collector and jaeger), emojivoto, and nginx as an ingress to do span initiation. We then watch the jaeger API and check that a trace is eventually created that includes traces from all of the data plane components: nginx, linkerd-proxy, web, voting, and emoji.
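For illustration, a sketch of how the test might poll the Jaeger query API for such a trace; the `/api/traces` endpoint path and JSON shape follow Jaeger's UI API and may need adjusting, and the helper names are assumptions:
```go
// Sketch only: poll the Jaeger query API until a trace appears that contains
// spans from every expected component.
package tracing

import (
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

type jaegerTraces struct {
	Data []struct {
		Processes map[string]struct {
			ServiceName string `json:"serviceName"`
		} `json:"processes"`
	} `json:"data"`
}

// waitForTrace queries traces initiated by nginx and succeeds once one of them
// includes spans emitted by all of the expected services.
func waitForTrace(jaegerAddr string, expected []string) error {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := http.Get(fmt.Sprintf("http://%s/api/traces?service=nginx", jaegerAddr))
		if err == nil {
			var traces jaegerTraces
			decodeErr := json.NewDecoder(resp.Body).Decode(&traces)
			resp.Body.Close()
			if decodeErr == nil && hasAllServices(traces, expected) {
				return nil
			}
		}
		time.Sleep(5 * time.Second)
	}
	return fmt.Errorf("timed out waiting for a trace spanning %v", expected)
}

func hasAllServices(traces jaegerTraces, expected []string) bool {
	for _, trace := range traces.Data {
		seen := map[string]bool{}
		for _, p := range trace.Processes {
			seen[p.ServiceName] = true
		}
		missing := false
		for _, svc := range expected {
			if !seen[svc] {
				missing = true
				break
			}
		}
		if !missing {
			return true
		}
	}
	return false
}
```
Here `expected` would be something like `[]string{"nginx", "linkerd-proxy", "web", "voting", "emoji"}`, matching the components listed above.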
Signed-off-by: Alex Leong <alex@buoyant.io>
## edge-20.1.2
* CLI
* Added HA specific checks to `linkerd check` to ensure that the `kube-system`
namespace has the `config.linkerd.io/admission-webhooks:disabled`
label set
* Fixed a problem causing the presence of unnecessary empty fields in
generated resource definitions (thanks @mayankshah1607)
* Proxy
* Fixed an issue that could cause the OpenCensus exporter to stall
* Internal
* Added validation to incoming sidecar injection requests that ensures
the value of `linkerd.io/inject` is either `enabled` or `disabled`
(thanks @mayankshah1607)
Signed-off-by: Zahari Dichev <zaharidichev@gmail.com>
This release fixes an issue that could cause the OpenCensus exporter to
stall.
This release does NOT include the experimental changes from
v2.83.0-experimental.
---
* http: Use the endpoint type to inform URI normalization (linkerd/linkerd2-proxy#404)
* Remove clone in opencensus exporter to ensure task is notified (linkerd/linkerd2-proxy#405)
* sort alphabetically and update prometheus version
* update version field to static
* sort linkerd2-cni readme
* switch to uppercase CNI
Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>
Fixes
- https://github.com/linkerd/linkerd2/issues/2962
- https://github.com/linkerd/linkerd2/issues/2545
### Problem
Field omissions for workload objects are not respected while marshaling to JSON.
### Solution
After digging a bit into the code, I came to realize that while marshaling, workload objects have empty structs as values for various fields which would rather be omitted. As of now, the standard library `encoding/json` does not support zero values of structs with the `omitempty` tag. The relevant issue can be found [here](https://github.com/golang/go/issues/11939). To tackle this problem, the object declaration should have _pointer-to-struct_ as a field type instead of _struct_ itself. However, this approach would be out of scope as the workload object declaration is handled by the k8s library.
I was able to find a drop-in replacement for the `encoding/json` library which supports zero values of structs with the `omitempty` tag. It can be found [here](https://github.com/clarketm/json). I have made use of this library to implement a simple filter-like functionality to remove empty fields once a YAML with empty fields is generated, hence leaving the previously existing methods unaffected.
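For illustration, a minimal example of the underlying `encoding/json` limitation and the drop-in replacement linked above; the `Workload`/`Status` types are hypothetical stand-ins for the k8s workload objects:
```go
// Minimal illustration: encoding/json keeps a zero-value struct field even
// with `omitempty`, while the drop-in replacement github.com/clarketm/json
// omits it.
package main

import (
	stdjson "encoding/json"
	"fmt"

	json "github.com/clarketm/json"
)

type Status struct {
	Phase string `json:"phase,omitempty"`
}

type Workload struct {
	Name   string `json:"name"`
	Status Status `json:"status,omitempty"` // zero-value struct, not a pointer
}

func main() {
	w := Workload{Name: "web"}

	std, _ := stdjson.Marshal(w)
	fmt.Println(string(std)) // {"name":"web","status":{}} -- empty struct kept

	alt, _ := json.Marshal(w)
	fmt.Println(string(alt)) // {"name":"web"} -- empty struct omitted
}
```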
Signed-off-by: Mayank Shah <mayankshah1614@gmail.com>
There are a few dangling references to old release versions in our charts and readmes.
I've removed as many of these references as possible so that we no longer need to worry about them getting out of date. The one reference that remains is `cniPluginVersion` and this will need to be manually updated as part of the release process.
Signed-off-by: Alex Leong <alex@buoyant.io>
Adds a check to ensure the `kube-system` namespace has the `config.linkerd.io/admission-webhooks:disabled` label.
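A sketch of the shape of such a check, assuming a standard client-go clientset; the function name is illustrative and the real check lives in the healthcheck package with its own client plumbing:
```go
// Sketch only: verify that the kube-system namespace carries the
// config.linkerd.io/admission-webhooks=disabled label.
package healthcheck

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func checkKubeSystemLabel(ctx context.Context, k8sAPI kubernetes.Interface) error {
	ns, err := k8sAPI.CoreV1().Namespaces().Get(ctx, "kube-system", metav1.GetOptions{})
	if err != nil {
		return err
	}
	if ns.Labels["config.linkerd.io/admission-webhooks"] != "disabled" {
		return fmt.Errorf("kube-system namespace is missing the config.linkerd.io/admission-webhooks: disabled label")
	}
	return nil
}
```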
Fixes #3721
Signed-off-by: Zahari Dichev <zaharidichev@gmail.com>
## edge-20.1.1
This edge release includes experimental improvements to the Linkerd proxy's
request buffering and backpressure infrastructure.
Additionally, we've fixed several bugs when installing Linkerd with Helm,
updated the CLI to allow using both port numbers _and_ port ranges with the
`--skip-inbound-ports` and `--skip-outbound-ports` flags, and fixed a dashboard
error that can occur if the dashboard is open in a browser while updating Linkerd.
**Note**: The `linkerd-proxy` version included with this release is more
experimental than usual. We'd love your help testing, but be aware that there
might be stability issues.
* CLI
* Added the ability to pass both port numbers and port ranges to
`--skip-inbound-ports` and `--skip-outbound-ports` (thanks to @javaducky!)
* Controller
* Fixed a race condition in the `linkerd-web` service
* Updated Prometheus to 2.15.2 (thanks @Pothulapati)
* Web UI
* Fixed an error when refreshing an already open dashboard when the Linkerd
version has changed
* Proxy
* Internal changes to the proxy's request buffering and backpressure
infrastructure
* Helm
* Fixed the `linkerd-cni` Helm chart not setting proper namespace annotations
and labels
* Fixed certificate issuance lifetime not being set when installing through
Helm
* More improvements to Helm best practices (thanks to @Pothulapati!)
This is an experimental release that includes large changes to the
proxy's request buffering and backpressure infrastructure.
Please exercise caution before deploying this proxy version into mission
critical environments.