mirror of https://github.com/linkerd/linkerd2.git
6 Commits

cdc57d1af0: Use linkerd-jaeger extension for control plane tracing (#5299)
Now that tracing has been split out of the main control plane and into the linkerd-jaeger extension, we remove references to tracing from the main control plane, including:

* removing the tracing components from the main control plane chart
* removing the tracing injection logic from the main proxy injector and inject CLI (these will be added back into the new injector in the linkerd-jaeger extension)
* removing tracing-related checks (these will be added back into `linkerd jaeger check`)
* removing related tests

We also update the `--control-plane-tracing` flag to configure the control plane components to send traces to the linkerd-jaeger extension. To make sure this works even when the linkerd-jaeger extension is installed in a non-default namespace, we also add a `--control-plane-tracing-namespace` flag which can be used to change the namespace that the control plane components send traces to.

Note that for now, only the control plane components send traces; the proxies in the control plane do not. This is because the linkerd-jaeger injector is not yet available. However, this change adds the appropriate namespace annotations to the control plane namespace to configure the proxies to send traces to the linkerd-jaeger extension once the linkerd-jaeger injector is available.

I tested this by doing the following:

1. bin/linkerd install | kubectl apply -f -
2. bin/helm install jaeger jaeger/charts/jaeger
3. bin/linkerd upgrade --control-plane-tracing=true | kubectl apply -f -
4. kubectl -n linkerd-jaeger port-forward svc/jaeger 16686
5. open http://localhost:16686
6. see traces from the linkerd control plane

Signed-off-by: Alex Leong <alex@buoyant.io>
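For context, a minimal sketch of what such a namespace annotation could look like. The `config.linkerd.io/trace-collector` key and the collector address are assumptions about how Linkerd points proxies at a collector, not values taken from this commit:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: linkerd
  annotations:
    # Assumed annotation key and collector address: points this namespace's
    # proxies at the collector run by the linkerd-jaeger extension.
    config.linkerd.io/trace-collector: collector.linkerd-jaeger:55678
```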
---

361d35bb6a: feat: add log format annotation and helm value (#4620)
JSON log formatting was added via https://github.com/linkerd/linkerd2-proxy/pull/500, but wiring the option through as an annotation/Helm value is still necessary. This PR adds the annotation and Helm value to configure the log format.

Closes #2491

Signed-off-by: Naseem <naseem@transit.app>
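As a hedged sketch of the annotation form this PR describes: the `config.linkerd.io/proxy-log-format` key and the `json` value are assumptions based on the PR description rather than verified against the final code, and `my-app` is a hypothetical workload:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app  # hypothetical workload
spec:
  template:
    metadata:
      annotations:
        # Assumed annotation key: switch the injected proxy's logs to JSON
        config.linkerd.io/proxy-log-format: json
```

The Helm value would set the same option cluster-wide at install time; its exact name is not spelled out in this log.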
---

dabee12b93: Fix issue for debug containers when using custom Docker registry (#3873)
**Subject**

Fixes bug where an override of the Docker registry was not being applied to debug containers (#3851).

**Problem**

Overrides for the Docker registry are not being applied to debug containers, and there is no means to correct the image.

**Solution**

This update expands the `data.proxy` configuration section within the Linkerd `ConfigMap` to maintain the overridden image name for debug containers at _install_-time, similar to the handling of the `proxy` and `proxyInit` images. This change also enables a further override of the registry for debug containers at _inject_-time via the `--registry` CLI option.

**Validation**

Several new unit tests have been created to confirm functionality. In addition, the following workflows were run through:

### Standard Workflow with Custom Registry

This workflow installs the Linkerd control plane based upon a custom registry, then injects the debug sidecar into a service.

* Start with a k8s instance having no Linkerd installation
* Build all images locally using `bin/docker-build`
* Create custom tags (using the same version) for generated images, e.g. `docker tag gcr.io/linkerd-io/debug:git-a4ebecb6 javaducky.com/linkerd-io/debug:git-a4ebecb6`
* Install Linkerd with the registry override: `bin/linkerd install --registry=javaducky.com/linkerd-io | kubectl apply -f -`
* Once Linkerd has been fully initialized, you should be able to confirm that the `linkerd-config` ConfigMap now contains the debug image name, pull policy, and version within the `data.proxy` section
* Request injection of the debug image into an available container. I used the Emojivoto voting service as described in https://linkerd.io/2/tasks/using-the-debug-container/: `kubectl -n emojivoto get deploy/voting -o yaml | bin/linkerd inject --enable-debug-sidecar - | kubectl apply -f -`
* Once the deployment creates a new pod for the service, inspection should show that the pod now includes the "linkerd-debug" container based on the applicable override image seen previously within the ConfigMap
* Debugging can also be verified by viewing debug container logs: `kubectl -n emojivoto logs deploy/voting linkerd-debug -f`
* Setting the `config.linkerd.io/enable-debug-sidecar` annotation to "false" should show that the pod is recreated without the debug container

### Overriding the Custom Registry Override at Injection

This builds upon the "Standard Workflow with Custom Registry" by overriding the Docker registry used for the debug container at the time of injection.

* "Clean" the Emojivoto voting service by removing any Linkerd annotations from the deployment
* Request injection similar to before, except provide the `--registry` option as in `kubectl -n emojivoto get deploy/voting -o yaml | bin/linkerd inject --enable-debug-sidecar --registry=gcr.io/linkerd-io - | kubectl apply -f -`
* Inspection of the deployment config should now show the override annotation `config.linkerd.io/debug-image` referencing the debug container from the new registry. Viewing the running pod should show that the `linkerd-debug` container was injected and is running the correct image. Of note, the proxy and proxy-init containers are still running the "original" override images.
* As before, setting the `config.linkerd.io/enable-debug-sidecar` annotation to "false" should show that the pod is recreated without the debug container

### Standard Workflow with Default Registry

This is the typical workflow, which uses the standard Linkerd image registry.

* Uninstall the Linkerd control plane using `bin/linkerd install --ignore-cluster | kubectl delete -f -` as described at https://linkerd.io/2/tasks/uninstall/
* Clean the Emojivoto environment using `curl -sL https://run.linkerd.io/emojivoto.yml | kubectl delete -f -`, then reinstall using `curl -sL https://run.linkerd.io/emojivoto.yml | kubectl apply -f -`
* Perform a standard Linkerd installation: `bin/linkerd install | kubectl apply -f -`
* Once Linkerd has been fully initialized, you should be able to confirm that the `linkerd-config` ConfigMap references the default debug image of `gcr.io/linkerd-io/debug` within the `data.proxy` section
* Request injection of the debug image into an available container: `kubectl -n emojivoto get deploy/voting -o yaml | bin/linkerd inject --enable-debug-sidecar - | kubectl apply -f -`
* Debugging can also be verified by viewing debug container logs: `kubectl -n emojivoto logs deploy/voting linkerd-debug -f`
* Setting the `config.linkerd.io/enable-debug-sidecar` annotation to "false" should show that the pod is recreated without the debug container

### Overriding the Default Registry at Injection

This workflow builds upon the "Standard Workflow with Default Registry" by overriding the Docker registry used for the debug container at the time of injection.

* "Clean" the Emojivoto voting service by removing any Linkerd annotations from the deployment
* Request injection similar to before, except provide the `--registry` option as in `kubectl -n emojivoto get deploy/voting -o yaml | bin/linkerd inject --enable-debug-sidecar --registry=javaducky.com/linkerd-io - | kubectl apply -f -`
* Inspection of the deployment config should now show the override annotation `config.linkerd.io/debug-image` referencing the debug container from the new registry. Viewing the running pod should show that the `linkerd-debug` container was injected and is running the correct image. Of note, the proxy and proxy-init containers are still running the "original" override images.
* As before, setting the `config.linkerd.io/enable-debug-sidecar` annotation to "false" should show that the pod is recreated without the debug container

Fixes issue #3851

Signed-off-by: Paul Balogh <javaducky@gmail.com>
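To make the inject-time override concrete, here is a hedged sketch of the pod-template annotations one would expect on `deploy/voting` after the final workflow above. The annotation keys are the ones named in this commit; the exact value format is an assumption:

```yaml
# Hypothetical excerpt of the voting deployment after inject-time override.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: voting
  namespace: emojivoto
spec:
  template:
    metadata:
      annotations:
        # Keys named in this commit; the image value mirrors the --registry flag.
        config.linkerd.io/enable-debug-sidecar: "true"
        config.linkerd.io/debug-image: javaducky.com/linkerd-io/debug
```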
---

748da80409: Inject preStop hook into the proxy sidecar container to stop it last (#3798)
This commit adds support for a graceful-shutdown technique that is used by some Kubernetes administrators, while a more comprehensive solution is being discussed in https://github.com/kubernetes/kubernetes/issues/65502.

The problem is that the RollingUpdate strategy does not guarantee that all traffic will be sent to a new pod _before_ the previous pod is removed. Kubernetes is internally an event-driven system, and when a pod is being terminated, several processes can receive the event simultaneously. If an Ingress controller gets the event too late, or processes it more slowly than Kubernetes removes the pod from its Service, user requests will continue flowing into a black hole. According [to the documentation](https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods):

> 1. If one of the Pod's containers has defined a `preStop` hook,
> it is invoked inside of the container. If the `preStop` hook is still
> running after the grace period expires, step 2 is then invoked with
> a small (2 second) extended grace period.
>
> 2. The container is sent the `TERM` signal. Note that not all
> containers in the Pod will receive the `TERM` signal at the same time
> and may each require a preStop hook if the order in which
> they shut down matters.

This commit adds support for the `preStop` hook, which can be configured in three forms:

1. As the command line argument `--wait-before-exit-seconds` for the `linkerd inject` command.
2. As the `linkerd2` Helm chart value `Proxy.WaitBeforeExitSeconds`.
3. As the `config.alpha.linkerd.io/wait-before-exit-seconds` annotation.

If configured, it will add the following preStop hook to the proxy container definition:

```yaml
lifecycle:
  preStop:
    exec:
      command:
        - /bin/bash
        - -c
        - sleep {{.Values.Proxy.WaitBeforeExitSeconds}}
```

To get the maximum benefit from this option, the main container should have its own `preStop` hook with a `sleep` command whose period is smaller than the one set for the proxy sidecar, and neither period may exceed the `terminationGracePeriodSeconds` configured for the entire pod. An example of a rendered Kubernetes resource where `.Values.Proxy.WaitBeforeExitSeconds` is equal to `40`:

```yaml
# application container
lifecycle:
  preStop:
    exec:
      command:
        - /bin/bash
        - -c
        - sleep 20

# linkerd-proxy container
lifecycle:
  preStop:
    exec:
      command:
        - /bin/bash
        - -c
        - sleep 40

terminationGracePeriodSeconds: 160 # for the entire pod
```

Fixes #3747

Signed-off-by: Eugene Glotov <kivagant@gmail.com>
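A minimal sketch of the annotation form (the third option above), using the `config.alpha.linkerd.io/wait-before-exit-seconds` key named in this commit; the workload itself is hypothetical:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app  # hypothetical workload
spec:
  template:
    metadata:
      annotations:
        # Ask the injector to add a `sleep 40` preStop hook to linkerd-proxy;
        # annotation values must be strings, hence the quotes.
        config.alpha.linkerd.io/wait-before-exit-seconds: "40"
```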
---

5958111533: WIP: Added annotations parsing and doc generation (#3564)
* rework annotations doc generation from godoc parsing to map[string]string and get rid of unused yaml tags
* move annotations doc function from pkg/k8s to cli/cmd

Signed-off-by: StupidScience <tonysignal@gmail.com>
---

f9d353ea22: Generate CLI docs for usage by the website (#2296)
* Update description to match existing commands
* Remove global