Commit Graph

153 Commits

Author SHA1 Message Date
Oliver Gould 27d89a8e3a
cli: Only allow HTTPS URLs with `inject` (#7940)
The CLI may access manifests over insecure channels, potentially
allowing MITM-attacks to run arbitrary code.

This change modifies `inject` to only support `https` URLs to mitigate
this risk.

This change addresses a security review finding (`TOB-LKDTM-4`).
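
For illustration, the intended behavior looks roughly like this (the manifest URL is hypothetical):

```bash
# Allowed: manifests fetched over TLS
linkerd inject https://example.com/app.yaml | kubectl apply -f -

# Rejected: plain-text URLs now produce an error instead of being fetched
linkerd inject http://example.com/app.yaml
```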

Signed-off-by: Oliver Gould <ver@buoyant.io>
2022-02-23 09:26:27 -08:00
Oliver Gould f5876c2a98
go: Enable `errorlint` checking (#7885)
Since Go 1.13, errors may "wrap" other errors. [`errorlint`][el] checks
that error formatting and inspection is wrapping-aware.

This change enables `errorlint` in golangci-lint and updates all error
handling code to pass the lint. Some comparisons in tests have been left
unchanged (using `//nolint:errorlint` comments).
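
As a minimal sketch of what wrapping-aware error handling looks like (not code from this change):

```go
package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
)

func main() {
	_, err := os.Open("missing.yaml")
	// Wrap with %w so callers can inspect the underlying error.
	wrapped := fmt.Errorf("loading manifest: %w", err)

	// errorlint flags direct comparisons like `err == fs.ErrNotExist`;
	// errors.Is walks the wrap chain instead.
	if errors.Is(wrapped, fs.ErrNotExist) {
		fmt.Println("manifest not found")
	}
}
```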

[el]: https://github.com/polyfloyd/go-errorlint

Signed-off-by: Oliver Gould <ver@buoyant.io>
2022-02-16 18:32:19 -07:00
Alejandro Pedraza 68b63269d9
Remove the `proxy.disableIdentity` config (#7729)
* Remove the `proxy.disableIdentity` config

Fixes #7724

Also:
- Removed the `linkerd.io/identity-mode` annotation.
- Removed the `config.linkerd.io/disable-identity` annotation.
- Removed the `linkerd.proxy.validation` template partial, which only
  made sense when `proxy.disableIdentity` was `true`.
- TestInjectManualParams now requires hitting the cluster to retrieve the
  trust root.
2022-01-31 10:17:10 -05:00
Eliza Weisman 9e9c9457ae
inject: support `config.linkerd.io/access-log` annotation (#7689)
With #7661, the proxy supports a `LINKERD2_PROXY_ACCESS_LOG`
configuration with the values `apache` or `json`. This configuration
causes the proxy to emit access logs to stderr. This branch makes it
possible for users to enable access logging by adding an annotation,
`config.linkerd.io/access-log`, that tells the proxy injector to set
this environment variable.
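
As a sketch of the intended usage (the pod below is hypothetical; values are `apache` or `json` per the description above):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
  annotations:
    config.linkerd.io/access-log: apache
spec:
  containers:
  - name: web
    image: nginx
```

The injector then sets `LINKERD2_PROXY_ACCESS_LOG=apache` on the injected proxy container.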

I've also added some tests to ensure that the annotation and the
environment variable are set correctly. I tried to follow the existing
tests as examples of how we do this, but please let me know if I've
overlooked anything!

Closes #7662 #1913

Signed-off-by: Eliza Weisman <eliza@buoyant.io>
2022-01-24 14:02:19 -08:00
Alejandro Pedraza f9f3ebefa9
Remove namespace from charts and split them into `linkerd-crd` and `linkerd-control-plane` (#6635)
Fixes #6584 #6620 #7405

# Namespace Removal

With this change, the `namespace.yaml` template is rendered only for CLI installs and not for Helm, and likewise for the `namespace:` entry in the namespace-level objects (using a new `partials.namespace` helper).

The `installNamespace` and `namespace` entries in `values.yaml` have been removed.

In the templates where the namespace is required, we moved from `.Values.namespace` to `.Release.Namespace`, which is filled in automatically by Helm. For the CLI, `install.go` now explicitly defines the contents of the `Release` map alongside `Values`.
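
For instance, a hypothetical template excerpt of the kind described (not a verbatim file from the charts):

```yaml
metadata:
  # Before: namespace: {{ .Values.namespace }}
  # After: Helm fills this in from the release
  namespace: {{ .Release.Namespace }}
```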

The proxy-injector has a new `linkerd-namespace` argument given the namespace is no longer persisted in the `linkerd-config` ConfigMap, so it has to be passed in. To pass it further down to `injector.Inject()` without modifying the `Handler` signature, a closure was used.

------------
Update: Merged-in #6638: Similar changes for the `linkerd-viz` chart:

Stop rendering `namespace.yaml` in the `linkerd-viz` chart.

The additional change here is the addition of the `namespace-metadata.yaml` template (and its RBAC), _not_ rendered in CLI installs, which is a Helm `post-install` hook consisting of a Job that executes a script adding the required annotations and labels to the viz namespace using a PATCH request against kube-api. The script first checks whether the namespace already has annotations/labels entries; if it doesn't, it has to add extra ops to that patch.

---------
Update: Merged-in the approved #6643, #6665 and #6669 which address the `linkerd2-cni`, `linkerd-multicluster` and `linkerd-jaeger` charts. 

Additional changes from what's already mentioned above:
- Removes the install-namespace option from `linkerd install-cni`, which isn't found in `linkerd install` or `linkerd viz install` anyway, and would add some complexity to support.
- Added a dependency on the `partials` chart to the `linkerd-multicluster-link` chart, so that we can make use of the `partials.namespace` helper.
- We no longer have the restriction that the multicluster objects must live in a separate namespace from linkerd. It's still good practice, and that's the default for the CLI install, but I removed that validation.


Finally, as a side-effect, the `linkerd mc allow` subcommand was fixed; it has been broken for a while apparently:

```console
$ linkerd mc allow --service-account-name foobar
Error: template: linkerd-multicluster/templates/remote-access-service-mirror-rbac.yaml:16:7: executing "linkerd-multicluster/templates/remote-access-service-mirror-rbac.yaml" at <include "partials.annotations.created-by" $>: error calling include: template: no template "partials.annotations.created-by" associated with template "gotpl"
```
---------
Update: see helm/helm#5465 describing the current best-practice

# Core Helm Charts Split

This removes the `linkerd2` chart, and replaces it with the `linkerd-crds` and `linkerd-control-plane` charts. Note that the viz and other extension charts are not affected by this change.

Also note the original `values.yaml` file has been split into both charts accordingly.

### UX

```console
$ helm install linkerd-crds --namespace linkerd --create-namespace linkerd/linkerd-crds
...
# certs.yaml should contain identityTrustAnchorsPEM and the identity issuer values
$ helm install linkerd-control-plane --namespace linkerd -f certs.yaml linkerd/linkerd-control-plane
```

### Upgrade

As explained in #6635, this is a breaking change. Users will have to uninstall the `linkerd2` chart and install these two, and eventually roll out the proxies (they should continue to work during the transition anyway).

### CLI

The CLI install/upgrade code was updated to be able to pick the templates from these new charts, but the CLI UX remains identical to before.

### Other changes

- The `linkerd-crds` and `linkerd-control-plane` charts now carry a version scheme independent of linkerd's own versioning, as explained in #7405.
- These charts are Helm v3, which is reflected in the `Chart.yaml` entries and in the removal of the `requirements.yaml` files.
- In the integration tests, replaced the `helm-chart` arg with `helm-charts` containing the path `./charts`, used to build the paths for both charts.

### Followups

- Now it's possible to add a `ServiceProfile` instance for Destination in the `linkerd-control-plane` chart.
2021-12-10 15:53:08 -05:00
Ahmed Al-Hulaibi 9c0d457abd
feat: add default-inbound-policy to inject flags (#7428)
PR #6750 adds the `config.linkerd.io/default-inbound-policy` annotation for setting the default inbound policy for an injected proxy.

This commit adds support for a `--default-inbound-policy` flag in `makeProxyFlags` so that it can be set with the `linkerd inject` command.
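
A usage sketch (the policy value shown is illustrative):

```bash
# Sets the config.linkerd.io/default-inbound-policy annotation on the injected workload
linkerd inject --default-inbound-policy all-authenticated deployment.yaml | kubectl apply -f -
```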

Closes #6754

Signed-off-by: ahmedalhulaibi <ahmed.alhulaibi41@gmail.com>
2021-12-09 11:04:09 -07:00
Tarun Pothulapati 92421d047a
core: use serviceAccountToken volume for pod authentication (#7117)
Fixes #3260 

## Summary

Currently, Linkerd uses a service account token to validate a pod
during the `Certify` request with identity, through which identity
is established on the proxy. This works well, as Kubernetes
attaches the `default` service account token of a namespace as a volume
(unless overridden with a specific service account by the user). The catch
is that this token is intended for the application to talk to the
Kubernetes API and is not specifically for Linkerd. This means that there
are [controls outside of Linkerd](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#use-the-default-service-account-to-access-the-api-server) to manage this service token, which
users might want to use, [causing problems with Linkerd](https://github.com/linkerd/linkerd2/issues/3183)
as Linkerd might expect it to be present.

To have more granular control over the token, and not rely on the
service token that can be managed externally, [Bound Service Account Tokens](https://github.com/kubernetes/enhancements/tree/master/keps/sig-auth/1205-bound-service-account-tokens)
can be used to generate tokens that are specifically for Linkerd,
that are bound to a specific pod, and that carry an expiry.
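
For reference, a generic Kubernetes `serviceAccountToken` projection looks roughly like this (the audience, expiry, and path are illustrative, not necessarily the exact values used here):

```yaml
volumes:
- name: linkerd-identity-token
  projected:
    sources:
    - serviceAccountToken:
        audience: identity.l5d.io   # assumed audience value
        expirationSeconds: 86400    # matches the ~24h re-validation cadence mentioned below
        path: linkerd-identity-token
```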

## Background on Bound Service Account Tokens

This feature went GA in Kubernetes 1.20, and is enabled by default
in most cloud provider distributions. Using this feature, Kubernetes can
be asked to issue specific tokens for Linkerd usage (through audience-bound
configuration), with a specific expiry time (as the validation happens every
24 hours when establishing identity, we can follow the same), bound to
a specific pod (meaning verification fails if the pod object isn’t available).

Because of all these constraints, and because the token can't be used for
anything else, it feels like the right thing to rely on to validate
a pod before issuing a certificate.

### Pod Identity Name

We still use the same service account name as the pod identity
(used with metrics, etc.) as these tokens are all generated from the
same base service account attached to the pod (either the default one or
the one overridden by the user). This can be verified by looking at the `user`
field in the `TokenReview` response.

<details>

<summary>Sample TokenReview response</summary>

Here, the new token was created for the `vault` audience for a pod which
had a serviceAccount token volume projection and was using the `mine`
serviceAccount in the default namespace.

```json
  "kind": "TokenReview",
  "apiVersion": "authentication.k8s.io/v1",
  "metadata": {
    "creationTimestamp": null,
    "managedFields": [
      {
        "manager": "curl",
        "operation": "Update",
        "apiVersion": "authentication.k8s.io/v1",
        "time": "2021-10-19T19:21:40Z",
        "fieldsType": "FieldsV1",
        "fieldsV1": {"f:spec":{"f:audiences":{},"f:token":{}}}
      }
    ]
  },
  "spec": {
    "token": "....",
    "audiences": [
      "vault"
    ]
  },
  "status": {
    "authenticated": true,
    "user": {
      "username": "system:serviceaccount:default:mine",
      "uid": "889a81bd-e31c-4423-b542-98ddca89bfd9",
      "groups": [
        "system:serviceaccounts",
        "system:serviceaccounts:default",
        "system:authenticated"
      ],
      "extra": {
        "authentication.kubernetes.io/pod-name": [
  "nginx"
],
        "authentication.kubernetes.io/pod-uid": [
  "ebf36f80-40ee-48ee-a75b-96dcc21466a6"
]
      }
    },
    "audiences": [
      "vault"
    ]
  }

```

</details>


## Changes

- Update `proxy-injector` and install scripts to include the new
  projected Volume and VolumeMount.
- Update the `identity` pod to validate the token with the linkerd
  audience key.
- Added `identity.serviceAccountTokenProjection` to disable this feature.
- Updated the erroring logic around `automountServiceAccountToken: false`
  to fail only when this feature is disabled.

Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>
2021-11-03 02:03:39 +05:30
Josh Soref 0be792fadc
Spelling (#6215)
This PR corrects misspellings identified by the [check-spelling action](https://github.com/marketplace/actions/check-spelling).

The misspellings have been reported at 0d56327e6f (commitcomment-51603624)

The action reports that the changes in this PR would make it happy: 03a9c310aa

Note: this PR does not include the action. If you're interested in running a spell check on every PR and push, that can be offered separately.

Signed-off-by: Josh Soref <jsoref@users.noreply.github.com>
2021-06-07 15:16:59 -06:00
Kevin Leimkuhler 1071ec2e77
Add support for awaiting proxy readiness (#5967)
### What

This change adds the `config.linkerd.io/proxy-await` annotation which when set will delay application container start until the proxy is ready. This allows users to force application containers to wait for the proxy container to be ready without modifying the application's Docker image. This is different from the current use-case of [linkerd-await](https://github.com/olix0r/linkerd-await) which does require modifying the image.
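
For example, a sketch of the annotation on a workload's pod template (assuming an enabled/disabled toggle):

```yaml
metadata:
  annotations:
    config.linkerd.io/proxy-await: "enabled"
```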

---

To support this, Linkerd is using the fact that containers are started in the order that they appear in `spec.containers`. If `linkerd-proxy` is the first container, then it will be started first.

Kubernetes will start each container without waiting on the result of the previous container. However, if a container has a hook that is executed immediately after container creation, then Kubernetes will wait on the result of that hook before creating the next container. Using a `PostStart` hook in the `linkerd-proxy` container, the `linkerd-await` binary can be run and force Kubernetes to pause container creation until the proxy is ready. Once `linkerd-await` completes, the container hook completes and the application container is created.

Adding the `config.linkerd.io/proxy-await` annotation to a pod's metadata results in the `linkerd-proxy` container being the first container, as well as having the container hook:

```yaml
postStart:
  exec:
    command:
    - /usr/lib/linkerd/linkerd-await
```

---

### Update after draft

There has been some additional discussion both off GitHub as well as on this PR (specifically with @electrical).

First, we decided that this feature should be enabled by default. The reason for this is that, more often than not, this feature will prevent start-up ordering issues from occurring without having any negative effects on the application. Additionally, this will be part of edge releases up until 2.11 (the next stable release), and having it enabled by default will allow us to check that it does not often conflict with applications. Once we are closer to 2.11, we'll be able to determine if this should be disabled by default because it causes more issues than it prevents.

Second, this feature will remain configurable; if disabled, then upon injection the proxy container will not be made the first container in the pod manifest. This is important for the reasons discussed with @electrical about tools that make assumptions about app containers being the first container. For example, Rancher defaults to showing overview pages for the `0` index container, and if the proxy container was always `0` then this would defeat the purpose of the overview page.

### Testing

To test this I used the `sleep.sh` script and changed `Dockerfile-proxy` to use it as its `ENTRYPOINT`. This forces the container to sleep for 20 seconds before starting the proxy.

---

`sleep.sh`:

```bash
#!/bin/bash
echo "sleeping..."
sleep 20
/usr/bin/linkerd2-proxy-run
```

`Dockerfile-proxy`:

```dockerfile
...
COPY sleep.sh /sleep.sh
RUN ["chmod", "+x", "/sleep.sh"]
ENTRYPOINT ["/sleep.sh"]
```

---

```bash
# Build and install with the above changes
$ bin/docker-build
...
$ bin/image-load --k3d
...
$ bin/linkerd install |kubectl apply -f -
```

Annotate the `emoji` deployment so that it's the only workload that waits for its proxy to be ready, and inject it:

```bash
cat emojivoto.yaml |bin/linkerd inject - |kubectl apply -f -
```

You can then see that the `emoji` deployment is not starting its application container until the proxy is ready:

```bash
$ kubectl get -n emojivoto pods
NAME                        READY   STATUS            RESTARTS   AGE
voting-ff4c54b8d-sjlnz      1/2     Running           0          9s
emoji-f985459b4-7mkzt       0/2     PodInitializing   0          9s
web-5f86686c4d-djzrz        1/2     Running           0          9s
vote-bot-6d7677bb68-mv452   1/2     Running           0          9s
```

Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
2021-04-21 17:43:23 -04:00
Tarun Pothulapati 90ad5d345a
cli: make inject add the right annotation with `--ingress` (#5983)
* cli: make inject add the right annotation with `--ingress`

Currently, when the `--ingress` flag is used with inject, the
CLI does not set `linkerd.io/inject: ingress`. Instead,
we get `linkerd.io/inject: enabled`.

This is because we set the annotation to `ingress` in
`getOverrideAnnotations`, which is then overridden to `enabled`
later in the inject pipeline at `transform`. This PR updates
`transform` to only add the `enabled` annotation if it's
not ingress-mode injection.

After this change:

```bash
 ./bin/go-run cli inject 005f4facb3/shell.yaml --ingress --proxy-cpu-limit 50m
github.com/linkerd/linkerd2/cli/cmd
github.com/linkerd/linkerd2/cli
apiVersion: v1
kind: Pod
metadata:
  annotations:
    config.linkerd.io/proxy-cpu-limit: 50m
    linkerd.io/inject: ingress
  name: shell-demo
spec:
  containers:
  - image: nginx
    name: nginx
    volumeMounts:
    - mountPath: /usr/share/nginx/html
      name: shared-data
  dnsPolicy: Default
  volumes:
  - emptyDir: {}
    name: shared-data
---

pod "shell-demo" injected
```

Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>
2021-04-05 23:11:52 +05:30
Kevin Leimkuhler a11012819c
Add opaque ports namespace inheritance to pods (#5941)
### What

When a namespace has the opaque ports annotation, pods and services should
inherit it if they do not have one themselves. Currently, services do this but
pods do not. This can lead to surprising behavior where services are correctly
marked as opaque, but pods are not.

This changes the proxy-injector so that it now passes down the opaque ports
annotation to pods from their namespace if they do not have their own annotation
set. Closes #5736.
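
A sketch of the inheritance behavior described (names and ports are hypothetical):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: my-ns
  annotations:
    config.linkerd.io/opaque-ports: "3306"
---
apiVersion: v1
kind: Pod
metadata:
  name: mysql
  namespace: my-ns
  # No opaque-ports annotation here: the injector now copies
  # config.linkerd.io/opaque-ports: "3306" down from the namespace.
spec:
  containers:
  - name: mysql
    image: mysql:8
```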

### How

The proxy-injector webhook receives admission requests for pods and services.
Regardless of the resource kind, it now checks if the resource should inherit
the opaque ports annotation from its namespace. It should inherit it if the
namespace has the annotation but the resource does not.

If the resource should inherit the annotation, the webhook creates an annotation
patch which is only responsible for adding the opaque ports annotation.

After generating the annotation patch, it checks if the resource is injectable.
From here there are a few scenarios:

1. If no annotation patch was created and the resource is not injectable, then
   admit the request with no changes. Examples of this are services with no OP
   annotation and inject-disabled pods with no OP annotation.
2. If the resource is a pod and it is injectable, create a patch that includes
   the proxy and proxy-init containers—as well as any other annotations and
   labels.
3. The above two scenarios lead to a patch being generated at this point, so no
   matter the resource the patch is returned.

### UI changes

Resources are now reported to either be "injected", "skipped", or "annotated".

The first pass at this PR worked around the fact that injection reports consider
services and namespaces injectable. This is not accurate because they don't have
pod templates that could be injected; they can however be annotated.

To fix this, an injection report now considers resources "annotatable" and uses
this to clean up some logic in the `inject` command, as well as avoid a more
complex proxy-injector webhook.

What's cool about this is it fixes some `inject` command output that would label
resources as "injected" when they were not even mutated. For example, namespaces
were always reported as being injected even if annotations were not added. Now,
it will properly report that a namespace has been "annotated" or "skipped".

### Tests

For testing, unit tests and integration tests have been added. Manual testing
can be done by installing linkerd with `debug` controller log levels, and
tailing the proxy-injector's app container when creating pods or services.

Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
2021-03-29 19:41:15 -04:00
Alex Leong abb1e69fbd
CLI: add `--opaque-ports` flag to `inject` (#5851)
Continuation of https://github.com/linkerd/linkerd2/pull/5721/

The `config.linkerd.io/opaque-ports` annotation can now be set using the `--opaque-ports` flag on `inject`

Example

```bash
$ linkerd inject /path/to/manifest.yaml --opaque-ports 3000,5000-6000,mysql
```

This annotation is the only one which is applied to services.

Signed-off-by: Alex Leong <alex@buoyant.io>
Co-authored-by: Mayank Shah <mayankshah1614@gmail.com>
2021-03-02 08:59:09 -08:00
Alejandro Pedraza 571f505b6b
Move CP check after the readiness check (#5848)
* Move CP check after the readiness check

Moved the `can initialize client` and `can query the control plane API`
checks from the `linkerd-existence` section to the `linkerd-api` section because
they required the `linkerd-controller` pod to not just be "Running" but
to actually be ready.

This was causing `linkerd check` to show some port-forwarding warnings
when run right after install.

This also allowed getting rid of the `CheckPublicAPIClientOrExit` function
and directly using `CheckPublicAPIClientOrRetryOrExit` (better naming
punted for later), which was refactored so it always runs the
`linkerd-api` checks before retrieving the client.

Other changes:

- Temporarily disabled the `upgrade-edge` test because the latest edge has this readiness check issue
- Have the upgrade tests do proper pruning (stolen from @Pothulapati's #5673 😉 )
- Added missing label to tap SA (fixes #5850)
- Complete tap-injector Service selector
2021-03-01 19:47:25 -05:00
Kevin Leimkuhler edd3812f30
Add services to proxy injector for opaque ports annotation (#5766)
This adds namespace inheritance of the opaque ports annotation to services. 

This means that the proxy injector now watches service creation in a cluster.
When a new service is created, the webhook receives an admission request for
that service and determines whether a patch needs to be created.

A patch is created if the service does not have the annotation, but the
namespace does. This means the service inherits the annotation from the
namespace.

A patch is not created if the service and the namespace do not have the
annotation, or if the service already has the annotation. In the case of the
service having the annotation, we don't even need to check the namespace
since it would not inherit it anyway.

If a namespace has the annotation value changed, this will not be reflected on
the service. The service would need to be recreated so that it goes through
another admission request.

None of this applies to the `inject` command which still skips service
injection. We rely on being able to check the namespace annotations, and this is
only possible in the proxy injector webhook when we can query the k8s API.

Closes #5737

Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
2021-02-17 20:58:18 -05:00
Jimil Desai 8bc732e483
Handle incompatible automount token setting In service account YAML (#5549)
Fixes #4651

This PR adds a check in the proxy injector webhook. It checks whether the pod's containers have the service account volume mounted. If the volume is not mounted, which essentially means that the `automountServiceAccountToken` field in the service account used by the injected workload is set to `false`, we don't inject the sidecar proxy.
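
The setting in question looks like this on a ServiceAccount (illustrative names):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-app
automountServiceAccountToken: false   # pods using this service account are no longer injected
```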

Signed-off-by: Jimil Desai <jimildesai42@gmail.com>
2021-02-12 09:37:45 -05:00
Tarun Pothulapati a393c42536
values: removal of .global field (#5699)
* values: removal of .global field

Fixes #5425

With the new extension model, we no longer need the `Global` field
as we don't rely on chart dependencies anymore. This helps us
further clean up Values and makes configuration simpler.

To make upgrades and the use of the new CLI with older configs work,
we add a new method called `config.RemoveGlobalFieldIfPresent` that
is used in the upgrade and `FetchCurrentConfiguration` paths to remove
the global field and attach its child nodes if global is present. This is
verified by `TestFetchCurrentConfiguration`'s older test case that has the
global field.
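
Roughly, the flattening looks like this (the keys shown are illustrative):

```yaml
# Before
global:
  identityTrustDomain: cluster.local
  proxy:
    logLevel: warn,linkerd=info

# After: .global removed, its child nodes attached at the top level
identityTrustDomain: cluster.local
proxy:
  logLevel: warn,linkerd=info
```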

We also don't yet remove `.global` in some Helm stable-upgrade tests so that
the initial install works.

Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>
2021-02-11 23:38:34 +05:30
Mayank Shah 96e078421c
CLI: Remove the `--disable-tap` flag from inject (#5671)
Fixes https://github.com/linkerd/linkerd2/issues/5664

- Remove the `--disable-tap` flag from `inject`
- Move `config.linkerd.io/disable-tap` to `viz.linkerd.io/disable-tap`

Signed-off-by: Mayank Shah <mayankshah1614@gmail.com>
2021-02-11 10:01:53 -05:00
Tarun Pothulapati 4c3d002501
viz: move sub-cmds using viz extension under viz cmd (#5485)
* viz: move sub-cmds using viz extension under viz cmd

Fixes #5327 , #5524 

This branch moves the following commands under the `linkerd viz`
cmd, as they use the viz extension to perform their job (see the
usage example after the list).

- dashboard
- edges
- routes
- stat
- tap
- top
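
For example (illustrative invocations against the emojivoto namespace):

```bash
linkerd viz stat deploy -n emojivoto
linkerd viz tap deploy/web -n emojivoto
```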

This also creates a new pkg `public-api`, which facilitates
interaction and communication with the public-api to be used
across extensions.

Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>
Co-authored-by: Alex Leong <alex@buoyant.io>
2021-01-13 12:11:25 +05:30
Alejandro Pedraza d661054795
Fix CLI install/upgrade overriding settings in HA (#5399)
Fixes #5385

## The problems

- `linkerd install --ha` isn't honoring flags
- `linkerd upgrade --ha` is overriding existing configs silently or failing with an error
- *Upgrading HA instances from before 2.9 to version 2.9.1 results in configs being overridden silently, or the upgrade fails with an error*

## The cause

The change in #5358 attempted to fix `linkerd install --ha` only applying some of the `values-ha.yaml` defaults, by calling `charts.NewValues(true)` and merging that with the values built from `values.yaml` overridden by the flags. It turns out the `charts.NewValues()` implementation was itself merging against `values.yaml`, and as a result any flag was getting overridden by its default.

This also happened when doing `linkerd upgrade --ha` on an existing instance, which could result in silently overriding settings, or it could also fail loudly, for example when upgrading a setup that has an external issuer (in this case the issuer cert can't be read during upgrade and an error occurs as described in #5385).

Finally, doing `linkerd upgrade` (no --ha flag) on an HA install from before 2.9 results in configs getting overridden as well (silently or with an error) because, in order to generate the `linkerd-config-overrides` secret, the original install flags are retrieved from `linkerd-config` via the `loadStoredValuesLegacy()` function, which then effectively ends up performing a `linkerd upgrade` with all the flags used for `linkerd install` and falls into the same trap as above.

## The fix

In `values.go` the faulty merging logic is not used anymore, so now `NewValues()` only returns the default values from `values.yaml` and doesn't require an argument anymore. It calls `readDefaults()`, which now only returns the appropriate values depending on whether we're in HA mode or not.
There's a new function `MergeHAValues()` that merges `values-ha.yaml` into the current values (it doesn't look into `values.yaml` anymore), which is only used when processing the `--ha` flag in `options.go`.

## How to test

To replicate the issue, try setting a custom value and check that it's not applied:
```bash
linkerd install --ha --controller-log-level debug | grep log.level
- -log-level=info
```

## Followup

This wasn't caught because we don't have HA integration tests. Now that our test infra is based on k3d, it should be easy to write such a test using a cluster with multiple nodes: either that, or issuing `linkerd install --ha` with additional configs and comparing against a golden file.
2020-12-18 12:11:52 -05:00
Alex Leong cdc57d1af0
Use linkerd-jaeger extension for control plane tracing (#5299)
Now that tracing has been split out of the main control plane and into the linkerd-jaeger extension, we remove references to tracing from the main control plane including:

* removing the tracing components from the main control plane chart
* removing the tracing injection logic from the main proxy injector and inject CLI (these will be added back into the new injector in the linkerd-jaeger extension)
* removing tracing related checks (these will be added back into `linkerd jaeger check`)
* removing related tests

We also update the `--control-plane-tracing` flag to configure the control plane components to send traces to the linkerd-jaeger extension.  To make sure this works even when the linkerd-jaeger extension is installed in a non-default namespace, we also add a `--control-plane-tracing-namespace` flag which can be used to change the namespace that the control plane components send traces to.

Note that for now, only the control plane components send traces; the proxies in the control plane do not.  This is because the linkerd-jaeger injector is not yet available.  However, this change adds the appropriate namespace annotations to the control plane namespace to configure the proxies to send traces to the linkerd-jaeger extension once the linkerd-jaeger injector is available.

I tested this by doing the following:

1. bin/linkerd install | kubectl apply -f -
1. bin/helm install jaeger jaeger/charts/jaeger
1. bin/linkerd upgrade --control-plane-tracing=true | kubectl apply -f -
1. kubectl -n linkerd-jaeger port-forward svc/jaeger 16686
1. open http://localhost:16686
1. see traces from the linkerd control plane

Signed-off-by: Alex Leong <alex@buoyant.io>
2020-12-08 14:34:26 -08:00
hodbn 92eb174e06
Add safe accessor for Global in linkerd-config (#5269)
CLI crashes if linkerd-config contains unexpected values.

Add a safe accessor that initializes an empty Global on the first
access. Refactor all accesses to use the newly introduced accessor using
gopls.
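
A minimal sketch of the accessor pattern (the struct and field names below are assumptions, not the actual code):

```go
package config

// Global is a placeholder for the linkerd-config "global" section.
type Global struct {
	LinkerdNamespace string
}

// All is a placeholder for the full parsed linkerd-config.
type All struct {
	Global *Global
}

// GetGlobal lazily initializes Global so callers never dereference a nil
// pointer when linkerd-config lacks the field.
func (a *All) GetGlobal() *Global {
	if a.Global == nil {
		a.Global = &Global{}
	}
	return a.Global
}
```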

Add test for linkerd-config data without Global.

Fixes #5215

Co-authored-by: Itai Schwartz <yitai27@gmail.com>
Signed-off-by: Hod Bin Noon <bin.noon.hod@gmail.com>
2020-11-23 12:45:58 -08:00
Tarun Pothulapati a30b5e49a6
cli: add `--ingress` flag to inject cmd (#5154)
* cli: add `--ingress` flag to inject cmd

This PR adds a new inject flag called `--ingress` which when enabled
adds a new annotation i.e `linkerd.io/inject: ingress`.

This annotation is not applied in the `--manual` case and the env
variable is directly set.

Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>
2020-11-02 14:47:16 +05:30
Alex Leong 41c1fc65b0
Upgrade using config overrides (#5005)
This is a major refactor of the install/upgrade code which removes the config protobuf and replaces it with a config overrides secret which stores overrides to the values struct.  Further background on this change can be found here: https://github.com/linkerd/linkerd2/discussions/4966

Note: as-is this PR breaks injection.  There is work to move injection onto a Values-based config which must land before this can be merged.

A summary of the high level changes:

* the install, global, and proxy fields of linkerd-config ConfigMap are no longer populated
* the CLI install flow now follows these simple steps:
  * load default Values from the chart
  * update the Values based on the provided CLI flags
  * render the chart with these values
  * also render a Secret/linkerd-config-overrides which describes the values which have been changed from their defaults
* the CLI upgrade flow now follows these simple steps:
  * load the default Values from the chart
  * if Secret/linkerd-config-overrides exists, apply the overrides onto the values
  * otherwise load the legacy ConfigMap/linkerd-config and use it to update the values
  * further update the values based on the provided CLI flags
  * render the chart and the Secret/linkerd-config-overrides as above
* Helm install and upgrade is unchanged

Signed-off-by: Alex Leong <alex@buoyant.io>
2020-10-12 14:23:14 -07:00
Tarun Pothulapati 1e7bb1217d
Update Injection to use new linkerd-config.values (#5036)
This PR updates the injection logic (both CLI and proxy-injector)
to use the `Values` struct instead of the protobuf Config, as part of
our move toward removing the protobuf.

This does not touch any of the flags or install-related code.

Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>

Co-authored-by: Alex Leong <alex@buoyant.io>
2020-10-07 09:54:34 -07:00
Tarun Pothulapati 5e774aaf05
Remove dependency of linkerd-config for control plane components (#4915)
* Remove dependency of linkerd-config for most control plane components

This PR removes the dependency on `linkerd-config` for control
plane components by passing all of that information through CLI
flags. As most of these components require only a couple of flags,
passing them as flags is more helpful, since updates to the flags trigger a
rollout, unlike a ConfigMap update.

This does not update the proxy-injector, as it needs a lot more data
and mounting `linkerd-config` is better for it.
2020-10-06 22:19:18 +05:30
Kevin Leimkuhler 2ec5245d67
Add configuration for opaque ports (#4972)
## Motivation

Closes #4950

## Solution

Add the `config.linkerd.io/opaque-ports` annotation to either a namespace or pod
spec to set the proxy `LINKERD2_PROXY_INBOUND_PORTS_DISABLE_PROTOCOL_DETECTION`
environment variable.

Currently this environment variable is not used by the proxy, but will be
addressed by #4938.

## Valid values

Ports: `config.linkerd.io/opaque-ports: 4322,3306`

Port ranges: `config.linkerd.io/opaque-ports: 4320-4325`

Mixed ports and port ranges: `config.linkerd.io/opaque-ports: 4322,4320-4325`

If the pod has named ports such as:

```yaml
- name: nginx
  image: nginx:latest
  ports:
  - name: nginx-port
    containerPort: 80
    protocol: TCP
```

The name can also be used as a value: `config.linkerd.io/opaque-ports:
nginx-port`

Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
2020-09-25 15:36:12 -04:00
Paul Balogh 62d54838b8
Ensure and update debug image during upgrade (#4823)
Some installations upgrading from versions prior to 2.7.x may be missing the debug image name and version. This fix ensures that the default values are in place for this scenario, and additionally upgrades the version of the debug image along with the control plane version.

Signed-off-by: Paul Balogh <javaducky@gmail.com>
2020-08-05 11:39:29 -07:00
Naseem 361d35bb6a
feat: add log format annotation and helm value (#4620)
* feat: add log format annotation and helm value

JSON log formatting has been added via https://github.com/linkerd/linkerd2-proxy/pull/500,
but wiring the option through as an annotation/Helm value is still
necessary.

This PR adds the annotation and helm value to configure log format.

Closes #2491

Signed-off-by: Naseem <naseem@transit.app>
2020-07-02 10:08:52 -05:00
Mayank Shah 2b0482c821
Update `inject` to throw an error while injecting non-compliant pods (#4346)
* Update inject to error out on failure

Update the injection process to throw an error when the reason for failure is an existing sidecar, UDP, `automountServiceAccountToken`, or `hostNetwork`

Signed-off-by: Mayank Shah <mayankshah1614@gmail.com>
2020-06-24 14:07:05 -05:00
Alex Leong acacf2e023
Add --close-wait-timeout inject flag (#4409)
Depends on https://github.com/linkerd/linkerd2-proxy-init/pull/10

Fixes #4276 

We add a `--close-wait-timeout` inject flag which configures the proxy-init container to run with `privileged: true` and to set `nf_conntrack_tcp_timeout_close_wait`. 
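
A hedged usage sketch (the duration value is illustrative):

```bash
# Requires proxy-init to run privileged so it can set nf_conntrack_tcp_timeout_close_wait
linkerd inject --close-wait-timeout 3600s app.yaml | kubectl apply -f -
```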

Signed-off-by: Alex Leong <alex@buoyant.io>
2020-05-21 14:14:14 -07:00
Zahari Dichev edd9b654a7
Make gateway require TLS for incoming requests (#4339)
Make gateway require TLS for incoming requests

Signed-off-by: Zahari Dichev <zaharidichev@gmail.com>
2020-05-11 10:07:48 +03:00
Mayank Shah 4429c1a5b1
Update inject to handle `automountServiceAccountToken: false` (#4145)
* Handle automountServiceAccountToken

Return an error during inject if the pod spec has `automountServiceAccountToken: false`

Signed-off-by: Mayank Shah <mayankshah1614@gmail.com>
2020-04-01 09:39:49 -05:00
Alejandro Pedraza 1cbc26a2c1
Upgrade golangci-lint to v1.23.8 (#4181)
* Upgrade golangci-lint to v1.23.8

This should help with some timeouts we're seeing in CI.

I fixed some new warnings found in `inject.go` and `uninject.go`.
Also we now have to explicitly disable linting `/controller/gen`.

The linter was also complaining that in `/pkg/k8s/fake.go` the
`spClient.Interface` and `tsclient.Interface` returned in the function
`newFakeClientSetsFromManifests()` aren't used, but I opted to ignore
that to leave them available for future tests.
2020-03-18 09:13:19 -05:00
Paul Balogh dabee12b93 Fix issue for debug containers when using custom Docker registry (#3873)
**Subject**
Fixes bug where override of Docker registry was not being applied to debug containers (#3851)

**Problem**
Overrides for Docker registry are not being applied to debug containers and provide no means to correct the image.

**Solution**
This update expands the `data.proxy` configuration section within the Linkerd `ConfigMap` to maintain the overridden image name for debug containers at _install_-time similar to handling of the `proxy` and `proxyInit` images.

This change also enables the further override option of the registry for debug containers at _inject_-time given utilization of the `--registry` CLI option.

**Validation**
Several new unit tests have been created to confirm functionality.  In addition, the following workflows were run through:

### Standard Workflow with Custom Registry
This workflow installs Linkerd control plane based upon a custom registry, then injecting the debug sidecar into a service.

* Start with a k8s instance having no Linkerd installation
* Build all images locally using `bin/docker-build`
* Create custom tags (using same version) for generated images, e.g. `docker tag gcr.io/linkerd-io/debug:git-a4ebecb6 javaducky.com/linkerd-io/debug:git-a4ebecb6`
* Install Linkerd with registry override `bin/linkerd install --registry=javaducky.com/linkerd-io | kubectl apply -f -`
* Once Linkerd has been fully initialized, you should be able to confirm that the `linkerd-config` ConfigMap now contains the debug image name, pull policy, and version within the `data.proxy` section
* Request injection of the debug image into an available container.  I used the Emojivoto voting service as described in https://linkerd.io/2/tasks/using-the-debug-container/ as `kubectl -n emojivoto get deploy/voting -o yaml | bin/linkerd inject --enable-debug-sidecar - | kubectl apply -f -`
* Once the deployment creates a new pod for the service, inspection should show that the container now includes the "linkerd-debug" container name based on the applicable override image seen previously within the ConfigMap
* Debugging can also be verified by viewing debug container logs as `kubectl -n emojivoto logs deploy/voting linkerd-debug -f`
* Setting the `config.linkerd.io/enable-debug-sidecar` annotation to “false” should show that the pod is recreated and no longer runs the debug container.

### Overriding the Custom Registry Override at Injection
This builds upon the “Standard Workflow with Custom Registry” by overriding the Docker registry utilized for the debug container at the time of injection.

* “Clean” the Emojivoto voting service by removing any Linkerd annotations from the deployment
* Request injection similar to before, except provide the `--registry` option as in `kubectl -n emojivoto get deploy/voting -o yaml | bin/linkerd inject --enable-debug-sidecar --registry=gcr.io/linkerd-io - | kubectl apply -f -`
* Inspection of the deployment config should now show the override annotation for `config.linkerd.io/debug-image` having the debug container from the new registry.  Viewing the running pod should show that the `linkerd-debug` container was injected and running the correct image.  Of note, the proxy and proxy-init images are still running the “original” override images.
* As before, setting the `config.linkerd.io/enable-debug-sidecar` annotation to “false” should show that the pod is recreated and no longer runs the debug container.

### Standard Workflow with Default Registry
This workflow is the typical workflow which utilizes the standard Linkerd image registry.

* Uninstall the Linkerd control plane using `bin/linkerd install --ignore-cluster | kubectl delete -f -` as described at https://linkerd.io/2/tasks/uninstall/
* Clean the Emojivoto environment using `curl -sL https://run.linkerd.io/emojivoto.yml | kubectl delete -f -` then reinstall using `curl -sL https://run.linkerd.io/emojivoto.yml | kubectl apply -f -`
* Perform standard Linkerd installation as `bin/linkerd install | kubectl apply -f -`
* Once Linkerd has been fully initialized, you should be able to confirm that the `linkerd-config` ConfigMap references the default debug image of `gcr.io/linkerd-io/debug` within the `data.proxy` section
* Request injection of the debug image into an available container as `kubectl -n emojivoto get deploy/voting -o yaml | bin/linkerd inject --enable-debug-sidecar - | kubectl apply -f -`
* Debugging can also be verified by viewing debug container logs as `kubectl -n emojivoto logs deploy/voting linkerd-debug -f`
* Setting the `config.linkerd.io/enable-debug-sidecar` annotation to “false” should show that the pod is recreated and no longer runs the debug container.

### Overriding the Default Registry at Injection
This workflow builds upon the “Standard Workflow with Default Registry” by overriding the Docker registry utilized for the debug container at the time of injection.

* “Clean” the Emojivoto voting service by removing any Linkerd annotations from the deployment
* Request injection similar to before, except provide the `--registry` option as in `kubectl -n emojivoto get deploy/voting -o yaml | bin/linkerd inject --enable-debug-sidecar --registry=javaducky.com/linkerd-io - | kubectl apply -f -`
* Inspection of the deployment config should now show the override annotation for `config.linkerd.io/debug-image` having the debug container from the new registry.  Viewing the running pod should show that the `linkerd-debug` container was injected and running the correct image.  Of note, the proxy and proxy-init images are still running the “original” override images.
* As before, setting the `config.linkerd.io/enable-debug-sidecar` annotation to “false” should show that the pod is recreated and no longer runs the debug container.

Fixes issue #3851 

Signed-off-by: Paul Balogh javaducky@gmail.com
2020-01-17 10:18:03 -08:00
Alex Leong 3b2c1eb540
Respect registry override during inject (#3879)
Fixes https://github.com/linkerd/linkerd2/issues/3878

If the `--registry` flag is provided to Linkerd without the `--proxy-image` or `--init-image` flags, the `--registry` flag is ignored and not applied to the existing values for the proxy or init images pulled from the configmap.

We now override the registry with the value from the `--registry` flag regardless of which other flags are provided.

Signed-off-by: Alex Leong <alex@buoyant.io>
2020-01-08 15:54:09 -08:00
Paul Balogh 2cd2ecfa30 Enable mixed configuration of skip-[inbound|outbound]-ports (#3766)
* Enable mixed configuration of skip-[inbound|outbound]-ports using port numbers and ranges (#3752)
* included tests for generated output given proxy-ignore configuration options
* renamed "validate" method to "parseAndValidate" given mutation
* updated documentation to denote inclusiveness of ranges
* Updates for expansion of ignored inbound and outbound port ranges to be handled by the proxy-init rather than CLI (#3766)

This change maintains the configured ports and ranges as strings rather than unsigned integers, while still providing validation at the command layer.

* Bump versions for proxy-init to v1.3.0

Signed-off-by: Paul Balogh <javaducky@gmail.com>
2019-12-20 09:32:13 -05:00
Eugene Glotov 748da80409 Inject preStop hook into the proxy sidecar container to stop it last (#3798)
* Inject preStop hook into the proxy sidecar container to stop it last

This commit adds support for a Graceful Shutdown technique that is used
by some Kubernetes administrators while a more comprehensive
solution is being discussed in
https://github.com/kubernetes/kubernetes/issues/65502

The problem is that the RollingUpdate strategy does not guarantee that all
traffic will be sent to a new pod _before_ the previous pod is removed.
Kubernetes is internally an event-driven system, and when a pod is
terminating, several processes can receive the event simultaneously.
If an Ingress Controller gets the event too late, or processes it more
slowly than Kubernetes removes the pod from its Service, users' requests
will continue flowing into a black hole.

According [to the documentation](https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods)

> 1. If one of the Pod’s containers has defined a `preStop` hook,
> it is invoked inside of the container. If the `preStop` hook is still
> running after the grace period expires, step 2 is then invoked with
> a small (2 second) extended grace period.
>
> 2. The container is sent the `TERM` signal. Note that not all
> containers in the Pod will receive the `TERM` signal at the same time
> and may each require a preStop hook if the order in which
> they shut down matters.

This commit adds support for the `preStop` hook that can be configured
in three forms:

1. As the command-line argument `--wait-before-exit-seconds` for
   the `linkerd inject` command.

2. As the `linkerd2` Helm chart value `Proxy.WaitBeforeExitSeconds`.

3. As the `config.alpha.linkerd.io/wait-before-exit-seconds` annotation.

If configured, it will add the following `preStop` hook to the proxy container
definition:

```yaml
lifecycle:
  preStop:
    exec:
      command:
        - /bin/bash
        - -c
        - sleep {{.Values.Proxy.WaitBeforeExitSeconds}}
```

To get the maximum benefit from the option, the main container should have
its own `preStop` hook with a `sleep` command inside that has
a smaller period than the one set for the proxy sidecar. And neither of them
must be greater than the `terminationGracePeriodSeconds` configured for the
entire pod.

An example of a rendered Kubernetes resource where
`.Values.Proxy.WaitBeforeExitSeconds` is equal to `40`:

```yaml
       # application container
        lifecycle:
          preStop:
            exec:
              command:
                - /bin/bash
                - -c
                - sleep 20

        # linkerd-proxy container
        lifecycle:
          preStop:
            exec:
              command:
                - /bin/bash
                - -c
                - sleep 40
    terminationGracePeriodSeconds: 160 # for entire pod
```

Fixes #3747

Signed-off-by: Eugene Glotov <kivagant@gmail.com>
2019-12-18 16:58:14 -05:00
StupidScience 5958111533 WIP: Added annotations parsing and doc generation (#3564)
* rework annotations doc generation from godoc parsing to map[string]string and get rid of unused yaml tags
* move annotations doc function from pkg/k8s to cli/cmd

Signed-off-by: StupidScience <tonysignal@gmail.com>
2019-11-04 14:55:50 -08:00
Zahari Dichev 86854ac845
Control plane debug (#3507)
* Add cmd to inject debug sidecar for l5d components only

Signed-off-by: zaharidichev <zaharidichev@gmail.com>

* Revert "Add cmd to inject debug sidecar for l5d components only"

This reverts commit 50b8b3577e.

Signed-off-by: zaharidichev <zaharidichev@gmail.com>

* Stop uninjecting metadata from control plane components

Signed-off-by: zaharidichev <zaharidichev@gmail.com>

* Ensure inject can be run on control plane components only if --manual is present

Signed-off-by: zaharidichev <zaharidichev@gmail.com>
2019-11-04 18:56:35 +02:00
Mayank Shah ec848d4ef3 Add inject support for namespace configs (Fix #3255) (#3607)
* Add inject support for namespaces(Fix #3255)

* Add relevant unit tests (including overridden annotations)

Signed-off-by: Mayank Shah <mayankshah1614@gmail.com>
2019-10-30 10:18:01 -05:00
Tarun Pothulapati 78b6f42ea7 Add Collector Flags for inject cmd (#3588)
* add flags to inject cmd
* add trace flags to readme
* use ns from pod

Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>
2019-10-24 10:16:13 -07:00
陈谭军 a30882ef22 remove the duplicate word (#3385)
Signed-off-by: chentanjun <2799194073@qq.com>
2019-09-04 20:13:55 -07:00
Alejandro Pedraza 02efb46e45
Have the proxy-injector emit events upon injection/skipping injection (#3316)
* Have the proxy-injector emit events upon injection/skipping injection

Fixes #3253

Have the proxy-injector emit an event whenever an injection happens, or
when injection is skipped for some reason (that reason is also added to
the proxy-injector logs). The event is associated with the parent workload
(it can't be associated with the pod because at this point the pod hasn't
been persisted).

The event recorder was setup at the `webhook/server.go` level and passed
to the proxy-injector's `Inject` function. The sp-validator thus also
has access to the event recorder, but for now it's not using it.

Related changes:

- Refactored `api.GetOwnerKindAndName()` to have it return a more
generic object.
- Refactored `report.Injectable()` to also have it return the reason why
a workload is not injectable.

Signed-off-by: Alejandro Pedraza <alejandro@buoyant.io>
2019-08-26 13:34:36 -05:00
Ivan Sim 4d01e3720e
Update install and upgrade code to use the new helm charts (#3229)
* Delete symlink to old Helm chart
* Update 'install' code to use common Helm template structs
* Remove obsolete TLS assets functions.

These are now handled by Helm functions inside the templates

* Read defaults from values.yaml and values-ha.yaml
* Ensure that webhooks TLS assets are retained during upgrade
* Fix a few bugs in the Helm templates (see bullet points):
* Merge the way the 'install' ha and non-ha options are handled into one function
* Honor the 'NoInitContainer' option in the components templates
* Control plane mTLS will not be disabled if identity context in the
config map is empty. The data plane mTLS will still be automatically disabled
if the context is nil.
* Resolve test failures from rebase with master
* Fix linter issues
* Set service account mount path read-only field
* Add TLS variables of the webhooks and tap to values.yaml

During upgrade, these secrets are preserved to ensure they remain synced
with the CA bundle in the webhook configurations. These Helm variables are used
to override the defaults in the templates.

* Remove obsolete 'chart' folder
* Fix bugs in templates
* Handle missing webhooks and tap TLS assets during upgrade

When upgrading from an older version that doesn't have these secrets, fall back to letting Helm
create them by creating an empty charts.TLS struct.

* Revert the selector labels of webhooks to be compatible with that in 2.4

In 2.4, the proxy injector and profile validator webhooks already have their selector labels defined.
Since these attributes are immutable, the recent change to these selectors introduced by the Helm chart work will cause upgrade to fail.

* Alejandro's feedback
* Siggy's feedback
* Removed redundant unexported custom types

Signed-off-by: Ivan Sim <ivan@buoyant.io>
2019-08-13 14:16:24 -07:00
Alejandro Pedraza 3ae653ae92
Refactor proxy injection to use Helm charts (#3200)
* Refactor proxy injection to use Helm charts

Fixes #3128

A new chart `/charts/patch` was created, that generates the JSON patch
payload that is to be returned to the k8s API when doing the injection
through the proxy injector, and it's also leveraged by the `linkerd
inject --manual` CLI.

The VFS was used by `linkerd install` to access the old chart under
`/chart`. Now the proxy injection also uses the Helm charts to generate
the JSON patch (see above) so we've moved the VFS from `cli/static` to a
new common place under `/pkg/charts/static`, and the new root for the VFS is
now `/charts`.

`linkerd install` hasn't yet migrated to use the new charts (that'll
happen in #3127), so the only change in that regard was the creation of
`/charts/chart` which is a symlink pointing to `/chart` that
`install.go` now uses, so that the VFS contains both the old and new
charts, as a temporary measure.

You can see that `/bin/Dockerfile-bin`, `/controller/Dockerfile` and
`/bin/build-cli-bin` do now `go generate` pointing to the new location
(and the `go generate` annotation was moved from `/cli/main.go` to
`pkg/charts/static/templates.go`).

The symlink trick doesn't work when building the binaries through
Docker, so `/bin/Dockerfile-bin` replaces the symlink with an actual
copy of `/chart`.

Also note that in `/controller/Dockerfile` we now need to include the
`prod` tag in `go install` like we do in `/bin/Dockerfile-bin` so that
the proxy injector does use the VFS instead of the local file system.

- The common logic to parse a chart has been moved from `install.go` to
`/pkg/charts/util.go`.
- The special ENV var in the proxy for "outbound router capacity" that
only applies to the Prometheus pod is now handled directly in the proxy
partial and all the associated go code could be removed.
- The `patch.go` lib for generating the JSON patch in go along
with its tests `patch_test.go` are no longer needed.
- Lots of functions in `/pkg/inject/inject.go` got removed/simplified
with their logic being moved into the charts themselves. As a
consequence lots of things in `inject_test.go` became irrelevant.
- Moved `template-values.go` from `/pkg/inject` to `pkg/charts` as that
contains the go structs representation of the chart variables that
will be leveraged in #3127.

Don't forget to run `/bin/helm.sh` whenever you make changes to charts
;-)

Signed-off-by: Alejandro Pedraza <alejandro@buoyant.io>
2019-08-07 17:32:37 -05:00
Alejandro Pedraza 8c07223f3b
Remove unused argument (#3149)
Removed unused argument in the `GetPatch()` function in
`pkg/inject/inject.go`

Signed-off-by: Alejandro Pedraza <alejandro.pedraza@gmail.com>
2019-07-26 11:39:25 -05:00
Tarun Pothulapati 7db058f096 linkerd inject from remote URL (#2988)
Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>
2019-06-28 09:47:33 -07:00
Alejandro Pedraza 74ca92ea25
Split proxy-init into separate repo (#2824)
Split proxy-init into separate repo

Fixes #2563

The new repo is https://github.com/linkerd/linkerd2-proxy-init, and I
tagged the latest there `v1.0.0`.

Here, I've removed the `/proxy-init` dir and pinned the injected
proxy-init version to `v1.0.0` in the injector code and tests.

`/cni-plugin` depends on proxy-init, so I updated the import paths
there, and could verify CNI is still working (there is some flakiness
but unrelated to this PR).

For consistency, I added a `--init-image-version` flag to `linkerd
inject` along with its corresponding override config annotation.

Signed-off-by: Alejandro Pedraza <alejandro@buoyant.io>
2019-06-03 16:24:05 -05:00
Ivan Sim 86d822f8ea
Generate the debug container spec in the shared library (#2854)
This commit refactors the changes introduced by #2842 where the debug
container spec is created in the 'cli' and 'pkg' packages. This change
follows the existing pattern of annotating the YAML in the CLI code,
and injecting the sidecar spec in the shared library.

Signed-off-by: Ivan Sim <ivan@buoyant.io>
2019-05-28 15:26:37 -07:00
Alejandro Pedraza 253068e844
Unhide CLI inject's --disable-tap now that #2811 has been addressed (#2827)
Signed-off-by: Alejandro Pedraza <alejandro.pedraza@gmail.com>
2019-05-16 11:12:20 -05:00