* Changed the protobuf definition to take out destinationApiPort entirely
* Store destinationAPIPort as a constant in pkg/inject.go
Fixes #2351
Signed-off-by: Aditya Sharma <hello@adi.run>
- Created the pkg/inject package to hold the new injection shared lib.
- Extracted the core methods that do the workload parsing and injection
from `/cli/cmd/inject.go` and `/cli/cmd/inject_util.go`, and moved them
into `/pkg/inject/inject.go`. The CLI files should now deal only with
strictly CLI concerns and with applying the JSON patch returned by the
new lib (see the sketch after this list).
- Proceeded analogously with `/cli/cmd/uninject.go` and
`/pkg/inject/uninject.go`.
- The `InjectReport` struct and its helper methods were moved into
`/pkg/inject/report.go`
- Refactored webhook to use the new injection lib
- Removed linkerd-proxy-injector-sidecar-config ConfigMap
- Added the ability to add pod labels and annotations without having to
specify the already existing ones
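A minimal sketch of the intended split, using hypothetical names (the real exported API in `/pkg/inject` may differ): the lib owns parsing and patch generation, while the CLI only reads input, applies the returned JSON patch, and prints the result.
```go
// Hypothetical shape of the shared lib; all names here are illustrative.
package inject

// PatchOp is one RFC 6902 JSON-patch operation.
type PatchOp struct {
	Op    string      `json:"op"`
	Path  string      `json:"path"`
	Value interface{} `json:"value,omitempty"`
}

// Report summarizes the outcome for one workload; the real
// InjectReport lives in /pkg/inject/report.go.
type Report struct {
	Kind     string
	Name     string
	Injected bool
}

// GetPatch parses a workload manifest and returns the JSON patch that
// adds the proxy sidecar, plus a report (signature is an assumption).
func GetPatch(manifest []byte) ([]PatchOp, Report, error) {
	// ...parse the YAML, locate the pod spec, build the patch ops...
	return nil, Report{}, nil
}
```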
Fixes #1748, #2289
Signed-off-by: Alejandro Pedraza <alejandro.pedraza@gmail.com>
linkerd/linkerd2#2349 introduced a `SelfSubjectAccessReview` check at
startup, to determine whether each control-plane component should
establish Kubernetes watches cluster-wide or namespace-wide. If this
check occurs before the linkerd-proxy sidecar is ready, it fails, and
the control-plane component restarts.
This change configures each control-plane pod to skip outbound port 443
when injecting the proxy, allowing the control-plane to connect to
Kubernetes regardless of the `linkerd-proxy` state.
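A sketch of the mechanism, assuming proxy-init's `--outbound-ports-to-ignore` flag and the usual default ports/UID; the exact wiring in the install template may differ:
```go
// Illustrative construction of the proxy-init container args; when a
// port is ignored, the iptables rules leave its outbound traffic alone.
package inject

func proxyInitArgs(controlPlane bool) []string {
	args := []string{
		"--incoming-proxy-port", "4143",
		"--outgoing-proxy-port", "4140",
		"--proxy-uid", "2102",
	}
	if controlPlane {
		// Outbound :443 (the Kubernetes API) bypasses linkerd-proxy,
		// so startup checks succeed even if the proxy isn't ready.
		args = append(args, "--outbound-ports-to-ignore", "443")
	}
	return args
}
```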
A longer-term fix should involve a more robust control-plane startup,
that is resilient to failed Kubernetes API requests. An even longer-term
fix could involve injecting `linkerd-proxy` as a Kubernetes "sidecar"
container, when that becomes available.
Workaround for #2407
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
Also, some protobuf updates:
* Rename `api_port` to match recent changes in CLI code.
* Remove the `cni` message because it won't be used.
* Remove `registry` field from proto types. This helps avoid having to work around edge cases like fully-qualified image names in different formats, overriding the user-specified Linkerd version, etc.
Signed-off-by: Ivan Sim <ivan@buoyant.io>
Add options in CLI for setting proxy CPU and memory limits
- Deprecated `proxy-cpu` and `proxy-memory` in favor of `proxy-cpu-limit` and `proxy-memory-limit` (see the sketch below)
- Updated validations and tests to reflect the new options
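A hedged sketch of the flag wiring with cobra/pflag; the flag names come from this change, while the variable names and help text are illustrative:
```go
// Registers the new limit flags and deprecates the old ones.
package cli

import "github.com/spf13/cobra"

func addProxyResourceFlags(cmd *cobra.Command) {
	var cpuLimit, memLimit string

	flags := cmd.PersistentFlags()
	flags.StringVar(&cpuLimit, "proxy-cpu-limit", "", "Maximum amount of CPU units that the proxy can use")
	flags.StringVar(&memLimit, "proxy-memory-limit", "", "Maximum amount of memory that the proxy can use")

	// The old flags keep working but print a deprecation notice.
	flags.StringVar(&cpuLimit, "proxy-cpu", "", "")
	flags.StringVar(&memLimit, "proxy-memory", "", "")
	flags.MarkDeprecated("proxy-cpu", "use --proxy-cpu-limit instead")
	flags.MarkDeprecated("proxy-memory", "use --proxy-memory-limit instead")
}
```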
Signed-off-by: TwinProduction <twin@twinnation.org>
chart/templates/base.yaml is nearly 800 lines and contains the
Kubernetes configurations for the majority of the control plane.
Furthermore, its contents are not particularly organized (for example,
the prometheus RBAC bindings are in the middle of the controller's
configuration).
The size and complexity of this file makes it especially daunting to
introduce new functionality.
In order to make the situation easier to understand and change, this
splits base.yaml into several new template files: namespace, controller,
serviceprofile, prometheus, and grafana. The `tls.yaml` template has
been renamed `ca.yaml`, since it installs the `linkerd-ca` resources.
This change also makes the comments uniform, adding a "header" to each
logical component.
Fixes #2154
Up until now, the proxy-api controller service has been the sole service
that the proxy communicates with, implementing the majority of the API
defined in the `linkerd2-proxy-api` repo. But this is about to change:
linkerd/linkerd2-proxy-api#25 introduces a new Identity service; and
this service must be served outside of the existing proxy-api service
in the linkerd-controller deployment (so that it may run under a
distinct service account).
With this change, the "proxy-api" name becomes less descriptive. It's no
longer "the service that serves the API for the proxy," it's "the
service that serves the Destination API to the proxy." Therefore, it
seems best to bite the bullet and rename this to be the "destination"
service (i.e. because it only serves the
`io.linkerd.proxy.destination.Destination` service).
Co-authored-by: Kevin Lingerfelt <kl@buoyant.io>
Signed-off-by: Kevin Lingerfelt <kl@buoyant.io>
The `linkerd check` command performed only limited validation of
ServiceProfiles.
Make ServiceProfile validation more complete, specifically validate:
- types of all fields
- presence of required fields
- presence of unknown fields
- recursive fields
Also move all validation code into a new `Validate` function in the
profiles package.
Validation of field types and required fields is handled via
`yaml.UnmarshalStrict` in the `Validate` function. This motivated
migrating from github.com/ghodss/yaml to a fork, sigs.k8s.io/yaml.
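A minimal standalone illustration of how `yaml.UnmarshalStrict` surfaces unknown fields (the struct here is a stand-in, not the actual ServiceProfile types):
```go
package main

import (
	"fmt"

	"sigs.k8s.io/yaml"
)

type routeSpec struct {
	Name      string `json:"name"`
	Condition string `json:"condition"`
}

func main() {
	doc := []byte("name: GET /books\ncondtion: {}\n") // "condtion" is a typo
	var r routeSpec
	// Unlike plain Unmarshal, UnmarshalStrict rejects fields that do not
	// exist on the target struct, so typos become validation errors.
	if err := yaml.UnmarshalStrict(doc, &r); err != nil {
		fmt.Println("validation error:", err)
	}
}
```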
Fixes #2190
# Problem
In order to switch Linkerd template rendering to use `.yaml` files, static
assets must be bundled in the Go binary for use by `linkerd install`.
# Solution
The solution should not affect the local development process of building and
testing.
[vfsgen](https://github.com/shurcooL/vfsgen) generates Go code that statically
implements the provided `http.FileSystem`. Paired with `go generate` and Go
[build tags](https://golang.org/pkg/go/build/), we can continue to use the
template files on disk when developing with no change required.
In `!prod` Go builds, the `cli/static/templates.go` file provides an
`http.FileSystem` backed by the local templates. In `prod` Go builds, `go
generate ./cli` generates `cli/static/generated_templates.gogen.go`, which
statically provides the template files.
When built with `-tags prod`, the executable will be built with the statically
generated file instead of the local files.
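A sketch of the two sides under these assumptions (the template path, variable name, and generator file are illustrative; the output filename follows the description above). Dev builds serve the files straight from disk:
```go
//go:build !prod
// +build !prod

// cli/static/templates.go: dev builds read templates from the tree.
package static

import "net/http"

// Templates is the http.FileSystem consumed by `linkerd install`.
var Templates http.FileSystem = http.Dir("chart/templates")
```
while `go generate ./cli` produces the static counterpart for `prod` builds:
```go
//go:build ignore
// +build ignore

// Generator invoked by a //go:generate directive in the cli package.
package main

import (
	"log"
	"net/http"

	"github.com/shurcooL/vfsgen"
)

func main() {
	err := vfsgen.Generate(http.Dir("chart/templates"), vfsgen.Options{
		Filename:     "cli/static/generated_templates.gogen.go",
		PackageName:  "static",
		BuildTags:    "prod",
		VariableName: "Templates",
	})
	if err != nil {
		log.Fatalln(err)
	}
}
```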
# Validation
The binaries were compiled locally with `bin/docker-build`. The binaries were
then tested with `bin/test-run (pwd)/target/cli/darwin/linkerd`. All tests
passed.
No change was required to successfully run `bin/go-run cli install`. No change
was required to run `bin/linkerd install`.
Fixes #2153
Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
In linkerd/linkerd2-proxy#186, the proxy supports configuration of TCP
keepalive values.
This change sets `LINKERD2_PROXY_INBOUND_ACCEPT_KEEPALIVE` and
`LINKERD2_PROXY_OUTBOUND_CONNECT_KEEPALIVE` to 10s when injecting the
proxy, so that remote connections are configured with a keepalive.
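A sketch of the injected environment variables using `k8s.io/api/core/v1` types; the variable names come from this change, while the helper is illustrative:
```go
// Adds keepalive settings to the proxy container at inject time.
package inject

import corev1 "k8s.io/api/core/v1"

func withKeepalives(proxy corev1.Container) corev1.Container {
	proxy.Env = append(proxy.Env,
		corev1.EnvVar{Name: "LINKERD2_PROXY_INBOUND_ACCEPT_KEEPALIVE", Value: "10s"},
		corev1.EnvVar{Name: "LINKERD2_PROXY_OUTBOUND_CONNECT_KEEPALIVE", Value: "10s"},
	)
	return proxy
}
```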
This configuration is NOT yet exposed through the CLI. This may be done
in a followup, if necessary.
Fixes #1949
* Add pod spec annotation to disable injection in CLI and auto-injector (see the sketch after this list)
* Remove support for linkerd.io/auto-inject label entirely
* Update based on review feedback
* Fix issue with finding the namespace of deployments applied to the default ns
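A minimal sketch of the opt-out check, assuming the annotation key is `linkerd.io/inject` with the value `disabled`; the exact key/value pair is defined by this change:
```go
// Hypothetical check shared by the CLI and the auto-injector.
package inject

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

const injectAnnotation = "linkerd.io/inject" // assumed key

// injectionDisabled reports whether a workload has opted out.
func injectionDisabled(meta metav1.ObjectMeta) bool {
	return meta.Annotations[injectAnnotation] == "disabled"
}
```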
Signed-off-by: Kevin Lingerfelt <kl@buoyant.io>
Use `ca.NewCA()` for generating certs and keys for the proxy injector
- Remove from CA controller everything that dealt with the
webhook/proxy-injector
- Remove no longer needed proxy-injector volumes for 'trust-anchors' and
'webhook-secrets'
- Remove from the proxy-injector the retrieval of the trust anchor and
secrets
- The `--tls` flag during install is no longer needed for auto-inject to work
Fixes #2095 and fixes #2166
Signed-off-by: Alejandro Pedraza <alejandro@buoyant.io>
* Export RootOptions and BuildFirewallConfiguration so that the cni-plugin can use them.
* Created the cni-plugin based on istio-cni implementation
* Create skeleton files that need to be filled out.
* Create the install scripts and finish up the plugin to write iptables
* Added an integration test around install_cni.sh and updated the script to handle the case where it isn't the only plugin. Removed the istio kubernetes.go file in favor of pkg/k8s; initial usage of this package; found and fixed the typo in the ClusterRole and ClusterRoleBinding; found the docker-build-cni-plugin script
* Corrected an incorrect name in the docker build file for cni-plugin
* Rename linkerd2-cni to linkerd-cni
* Fixup Dockerfile and clean up code a bit as well as logging statements.
* Update Gopkg.lock after master merge.
* Update test file to remove temporary tag.
* Fixed the command to run during the test while building up the docker run.
* Added attributions to applicable files; in the test file, use a different container for each test scenario and also print the docker logs to stdout when there is an error
* Add the --no-init-container flag to install and inject. This flag will not output the initContainer and will add an annotation assuming that the cni will be used in this case.
* Update .travis.yml to build the cni-plugin docker image before running the tests.
* Workaround golint warnings.
* Create a new command to install the linkerd-cni plugin.
* Add the --no-init-container option to linkerd inject
* Use the setup-iptables annotation during the proxy auto-inject webhook to prevent/allow addition of an init container; move cni-plugin tests to the integration-test section of travis
* Gate the cni-plugin tests with the -integration-tests flag; remove an unnecessary deployment .yaml file.
* Incorporate PR Cleanup suggestions.
* Remove the SetupIPTablesLabel annotation and use config flags and the presence of the init container to determine whether the cni-plugin writes ip tables.
* Fix a logic bug in the cni-plugin code that prevented the iptables from being written; Address PR comments; make tests pass.
* Update go deps shas
* Changed the single file install-cni plugin filename to be .conf vs .conflist; Incorporated latest PR comments around spacing with the new renderer among others.
* Fix an issue with renaming .conf to .conflist when needed.
* Renamed some of the variables to try to make it more clear what is going on.
* Address final PR comments.
* Hide cni flags for the time being.
Signed-off-by: Cody Vandermyn <cody.vandermyn@nordstrom.com>
# Problem
In order to refactor `install` to allow for a more flexible configuration, we
should start with the format of the YAML that it renders. Using the Helm
YAML format will make it easier to add flexible configuration options in the
future. Currently, the rendered template that `install` produces does not
follow this format.
# Solution
Use the internals that Helm itself uses to render an inject template that
follows the same formatting rules. Helm's `template` cmd provides a good
outline of what is needed to make Linkerd's `install` cmd work as if it
were a Chart.
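Roughly how `install` can lean on Helm's own template machinery, sketched against the Helm v2 Go packages of that era; the chart contents here are stand-ins:
```go
package main

import (
	"fmt"
	"log"

	"k8s.io/helm/pkg/chartutil"
	"k8s.io/helm/pkg/engine"
	"k8s.io/helm/pkg/proto/hapi/chart"
)

func main() {
	// An in-memory chart standing in for the real install templates.
	chrt := &chart.Chart{
		Metadata: &chart.Metadata{Name: "linkerd"},
		Templates: []*chart.Template{
			{Name: "templates/namespace.yaml",
				Data: []byte("kind: Namespace\nmetadata:\n  name: {{ .Values.Namespace }}\n")},
		},
	}

	values, err := chartutil.ReadValues([]byte("Namespace: linkerd"))
	if err != nil {
		log.Fatal(err)
	}

	// Render with the same engine `helm template` uses, so the output
	// follows Helm's formatting rules.
	rendered, err := engine.New().Render(chrt, map[string]interface{}{"Values": values})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(rendered["linkerd/templates/namespace.yaml"])
}
```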
# Validation
There are no new tests, but there may not be anything to test at this stage.
This is a WIP PR towards the ultimate goal of `install` allowing a more
flexible configuration.
However, `install` now uses all the Helm `template` internals and therefore
satisfies the needed properties for Helm Charts.
Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
This branch removes the `--proxy-bind-timeout` flag from the
`linkerd inject` and `linkerd install` CLI commands, and the
`LINKERD2_PROXY_BIND_TIMEOUT` environment variable from their output.
This is in preparation for removing that timeout from the proxy (as
described in #2013).
I thought it was prudent to remove this from the CLIs before removing it
from the proxy, so we can't create a situation where the CLIs produce
output that results in broken proxy containers.
Fixes #2013
Signed-off-by: Eliza Weisman <eliza@buoyant.io>
* Allow input of a volume name for prometheus and grafana
* Make Prometheus and Grafana volume names 'data' by default and disallow user editing via CLI flags
* Remove volume name from options
Signed-off-by: Cody Vandermyn <cody.vandermyn@nordstrom.com>
* Add securityContext with runAsUser: {{.ProxyUID}} to the various containers in the install template
* Update golden files to reflect the new additions
* Changed to a different user ID than the proxy user ID
* Added a controller-uid install option
* Change the port that the proxy-injector runs on
* The initContainers need to be run as the root user.
* Move security contexts to the container level (see the sketch below)
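A sketch of the container-level setting with `k8s.io/api/core/v1` types; the helper and the exact usage are illustrative:
```go
// Container-level securityContext; the init container is left without
// RunAsUser so it can run as root and write the iptables rules.
package install

import corev1 "k8s.io/api/core/v1"

func runAsUser(uid int64) *corev1.SecurityContext {
	return &corev1.SecurityContext{RunAsUser: &uid}
}

// e.g. the proxy container would get runAsUser(proxyUID), where
// proxyUID comes from the --proxy-uid install option.
```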
Signed-off-by: Cody Vandermyn <cody.vandermyn@nordstrom.com>
Add support for service profiles created on external (non-service) authorities. For example, this allows you to create a service profile named `linkerd.io` which will apply to calls made to `linkerd.io`.
This is done by changing the `LINKERD2_PROXY_DESTINATION_PROFILE_SUFFIXES` to `.` so that the proxy will attempt to lookup a service profile for any authority. We provide the `--disable-external-profiles` proxy flag to revert this behavior in case it is a problem.
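A sketch of the env wiring with `k8s.io/api/core/v1` types; the env var name and flag come from this change, while the helper and the Kubernetes-only suffix value are assumptions:
```go
package inject

import corev1 "k8s.io/api/core/v1"

// profileSuffixEnv returns the proxy's profile-suffix setting: "." makes
// the proxy attempt a profile lookup for every authority.
func profileSuffixEnv(disableExternalProfiles bool) corev1.EnvVar {
	suffixes := "."
	if disableExternalProfiles {
		suffixes = "svc.cluster.local." // assumed Kubernetes-only value
	}
	return corev1.EnvVar{
		Name:  "LINKERD2_PROXY_DESTINATION_PROFILE_SUFFIXES",
		Value: suffixes,
	}
}
```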
We also refactor the proxy-api implementation of GetProfiles so that it does the profile lookup regardless of whether the authority looks like a Kubernetes service name. To simplify this, support for multiple resolvers (which was unused) was removed.
Signed-off-by: Alex Leong <alex@buoyant.io>
The proxy-api service _always_ suggests that two meshed pods communicate
via HTTP/2 (i.e. via transparent protocol upgrading, if necessary).
This can complicate debugging and diagnostics at times, so it's
important that we have a way to deploy linkerd without this auto-upgrade
behavior.
This change adds a `-disable-h2-upgrade` flag to the `linkerd install`
command that disables transparent upgrading for the whole cluster.
This change allows some advised production config to be applied to the install of the control plane.
Currently this runs 3x replicas of the controller and adds some pretty sane requests to each of the components + containers of the control plane.
Fixes #1101
Signed-off-by: Ben Lambert <ben@blam.sh>
* Add --single-namespace install flag for restricted permissions
* Better formatting in install template
* Mark --single-namespace and --proxy-auto-inject as experimental
* Fix wording of --single-namespace check flag
* Small healthcheck refactor
Signed-off-by: Kevin Lingerfelt <kl@buoyant.io>
* Support auto sidecar-injection
1. Add proxy-injector deployment spec to cli/install/template.go
2. Inject the Linkerd CA bundle into the MutatingWebhookConfiguration
during the webhook's start-up process (see the response sketch below).
3. Add a new handler to the CA controller to create a new secret for the
webhook when a new MutatingWebhookConfiguration is created.
4. Declare a config map to store the proxy and proxy-init container
specs used during the auto-inject process.
5. Ignore namespaces and pods that are labeled with
linkerd.io/auto-inject: disabled or linkerd.io/auto-inject: completed
6. Add new flag to `linkerd install` to enable/disable proxy
auto-injection
Proposed implementation for #561.
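A sketch of the webhook's admission response using `k8s.io/api/admission/v1beta1` (the API version served by clusters of this era); the package name, helper, and patch contents are illustrative:
```go
// Builds the mutating-webhook response that carries the JSON patch
// adding the proxy and proxy-init containers.
package injector

import (
	admissionv1beta1 "k8s.io/api/admission/v1beta1"
	"k8s.io/apimachinery/pkg/types"
)

func mutateResponse(uid types.UID, patchJSON []byte) *admissionv1beta1.AdmissionResponse {
	pt := admissionv1beta1.PatchTypeJSONPatch
	return &admissionv1beta1.AdmissionResponse{
		UID:       uid,
		Allowed:   true,
		Patch:     patchJSON, // RFC 6902 ops produced from the sidecar config
		PatchType: &pt,
	}
}
```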
* Resolve missing packages errors
* Move the auto-inject label to the pod level
* PR review items
* Move proxy-injector to its own deployment
* Ignore pods that already have proxy injected
This ensures the webhook doesn't error out due to proxies that were injected using the CLI
* PR review items on creating/updating the MWC on-start
* Replace API calls to ConfigMap with file reads
* Fixed post-rebase broken tests
* Don't mutate the auto-inject label
Since we started using healthcheck.HasExistingSidecars() to ensure pods with
existing proxies aren't mutated, we don't need to use the auto-inject label as
an indicator.
This resolves a bug which happens with the kubectl run command where the deployment
is also assigned the auto-inject label. The mutation causes the pod auto-inject
label to not match the deployment label, causing kubectl run to fail.
* Tidy up unit tests
* Include proxy resource requests in sidecar config map
* Fixes to broken YAML in CLI install config
The ignored inbound and outbound ports are changed to string type to
avoid broken YAML caused by the string conversion of the uint slice.
Also, parameterized the proxy bind timeout option in template.go.
Renamed the sidecar config map to
'linkerd-proxy-injector-webhook-config'.
Signed-off-by: ihcsim <ihcsim@gmail.com>
If an input file is un-injectable, existing inject behavior is to simply
output a copy of the input.
Introduce a report, printed to stderr, that communicates the end state
of the inject command. Currently this includes checking for hostNetwork
and unsupported resources.
Malformed YAML documents will continue to produce no YAML output, and return
exit code 1.
This change also modifies integration tests to handle stdout and stderr separately.
example outputs...
some pods injected, none with host networking:
```
hostNetwork: pods do not use host networking...............................[ok]
supported: at least one resource injected..................................[ok]
Summary: 4 of 8 YAML document(s) injected
deploy/emoji
deploy/voting
deploy/web
deploy/vote-bot
```
some pods injected, one host networking:
```
hostNetwork: pods do not use host networking...............................[warn] -- deploy/vote-bot uses "hostNetwork: true"
supported: at least one resource injected..................................[ok]
Summary: 3 of 8 YAML document(s) injected
deploy/emoji
deploy/voting
deploy/web
```
no pods injected:
```
hostNetwork: pods do not use host networking...............................[warn] -- deploy/emoji, deploy/voting, deploy/web, deploy/vote-bot use "hostNetwork: true"
supported: at least one resource injected..................................[warn] -- no supported objects found
Summary: 0 of 8 YAML document(s) injected
```
TODO: check for UDP and other init containers
Part of #1516
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
Currently, when a cluster has over 100 pods injected with the Linkerd2
proxy, Prometheus metrics are not collected correctly. This is because
Prometheus appears to be making more concurrent requests than its
proxy's outbound router cache can handle. See issue #1322 for further
details.
This branch introduces a workaround for this issue, by increasing the
outbound router cache capacity to 10000 routes for the Prometheus pod's
proxy only. The 100-route capacity limit exists primarily to bound the
number of active Destination service lookups, so increasing the capacity
for the Prometheus pod specifically
is probably okay, as the scrape requests are made to IP addresses
directly and therefore will not cause service discovery lookups.
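A sketch of the override with `k8s.io/api/core/v1` types; the env var name follows the proxy's `LINKERD2_PROXY_*` convention and is an assumption here:
```go
// Applied only to the Prometheus pod's proxy container; every other
// proxy keeps the default 100-route cache.
package install

import corev1 "k8s.io/api/core/v1"

var prometheusProxyEnv = corev1.EnvVar{
	Name:  "LINKERD2_PROXY_OUTBOUND_ROUTER_CAPACITY", // assumed name
	Value: "10000",
}
```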
This change was originally implemented and tested in @siggy's PR #1228.
I've rebased his branch onto the current `master`, and updated the code
to reflect the project name change.
Signed-off-by: Eliza Weisman <eliza@buoyant.io>
Co-authored-by: Andrew Seigner <siggy@buoyant.io>
This PR begins to migrate Conduit to Linkerd2:
* The proxy has been completely removed from this repo, and is now located at
github.com/linkerd/linkerd2-proxy.
* A `Dockerfile-proxy` has been added to fetch the most-recently published proxy
binary from build.l5d.io.
* Proxy-specific protobuf bindings have been moved to
github.com/linkerd/linkerd2-proxy-api.
* All docker images now use the gcr.io/linkerd-io registry.
* `inject` now uses `LINKERD2_PROXY_` environment variables
* Go paths have been updated to reflect the new (future) repo location.
* Add TLS support to `conduit inject`.
Add the settings needed to enable TLS when `--tls=optional` is passed on the
command line. Later, the requirement to pass `--tls` will be removed.
Signed-off-by: Brian Smith <brian@briansmith.org>
* Add CA certificate bundle distributor to conduit install
* Update ca-distributor to use shared informers
* Only install CA distributor when --enable-tls flag is set
* Only copy CA bundle into namespaces where inject pods have the same controller
* Update API config to only watch pods and configmaps
* Address review feedback
Signed-off-by: Kevin Lingerfelt <kl@buoyant.io>
Running `conduit install --api-port xxx` where xxx != 8086 would yield a
broken install.
Fix the install command to correctly propagate the `api-port` flag,
setting it as the serve address in the proxy-api container.
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
* Remove package-scoped vars in cmd package
* Run gofmt on all cmd package files
* Re-add missing Args setting on check command
Signed-off-by: Kevin Lingerfelt <kl@buoyant.io>
There have been a number of performance improvements and bug fixes since
v2.1.0.
Bump our Prometheus container to v2.2.1.
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
* CLI: change conduit namespace shorthand flag to -c
All of the conduit CLI subcommands accept a --conduit-namespace flag,
indicating the namespace where conduit is running. Some of the
subcommands also provide a --namespace flag, indicating the kubernetes
namespace where a user's application code is running. To prevent
confusion, I'm changing the shorthand flag for the conduit namespace to
-c, and using the -n shorthand when referring to user namespaces.
As part of this change I've also standardized the capitalization of all
of our command line flags, removed the -r shorthand for the install
--registry flag, and made the global --kubeconfig and --api-addr flags
apply to all subcommands (see the flag sketch below).
* Switch flag descriptions from lowercase to Capital
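A minimal sketch of the shorthand split with cobra/pflag; defaults and help strings are illustrative:
```go
// -c selects the namespace where conduit itself runs; -n selects the
// namespace where the user's application runs.
package cli

import "github.com/spf13/cobra"

func bindNamespaceFlags(cmd *cobra.Command) {
	var conduitNS, userNS string
	cmd.PersistentFlags().StringVarP(&conduitNS, "conduit-namespace", "c", "conduit", "Namespace in which conduit is installed")
	cmd.Flags().StringVarP(&userNS, "namespace", "n", "default", "Namespace of the user's application")
}
```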
Signed-off-by: Kevin Lingerfelt <kl@buoyant.io>
Using a vanilla Grafana Docker image as part of `conduit install`
avoided maintaining a conduit-specific Grafana Docker image, but made
packaging dashboard json files cumbersome.
Roll our own Grafana Docker image, that includes conduit-specific
dashboard json files. This significantly decreases the `conduit install`
output size, and enables dashboard integration in the docker-compose
environment.
Fixes #567
Part of #420
Signed-off-by: Andrew Seigner <siggy@buoyant.io>