Fix broken docker build by moving Service Profile conversion and validation into `/pkg`.
Fix broken integration test by adding service profile validation output to `check`'s expected output.
Testing done:
* `gotest -v ./...`
* `bin/docker-build`
* `bin/test-run $(pwd)/bin/linkerd`
Signed-off-by: Alex Leong <alex@buoyant.io>
Add a check to `linkerd check` which validates all service profile resources. In particular it checks:
* does the service profile refer to an existing service
* is the service profile valid
Signed-off-by: Alex Leong <alex@buoyant.io>
We implement the getProfiles method in the destination service. This method returns a stream of destination profiles for a given authority. It does this by looking up the ServiceProfile resource in the controller namespace named `<svc>.<ns>` where `<svc>` is the name of the service and `<ns>` is the namespace of the service.
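For illustration, the authority-to-resource-name mapping works roughly like this (a sketch; names and error handling are not the actual implementation):
```go
package main

import (
	"fmt"
	"strings"
)

// profileNameForAuthority maps an authority such as
// "web.emojivoto.svc.cluster.local" to the ServiceProfile resource name
// "web.emojivoto", which is then looked up in the controller namespace.
func profileNameForAuthority(authority string) (string, error) {
	parts := strings.Split(authority, ".")
	if len(parts) < 2 {
		return "", fmt.Errorf("unexpected authority: %s", authority)
	}
	return fmt.Sprintf("%s.%s", parts[0], parts[1]), nil
}

func main() {
	name, _ := profileNameForAuthority("web.emojivoto.svc.cluster.local")
	fmt.Println(name) // prints "web.emojivoto"
}
```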
This PR includes:
* Adding a ServiceProfile Custom Resource Definition to linkerd install
* A watch based implementation of the getProfiles method in the destination service, similar to the implementation of get.
* An update to the destination client script that allows querying the getProfiles method.
Signed-off-by: Alex Leong <alex@buoyant.io>
* Ensure that the proxy injector mutating webhook preserves the original labels
and annotations
The deployment's selector must also match the pod template labels in
newer versions of Kubernetes.
This resolves issue #1756.
* Add the Linkerd labels to the deployment metadata during auto proxy
injection
* Remove selector match labels JSON patch from proxy injector
This isn't needed to resolve the selector label mismatch errors.
Signed-off-by: ihcsim <ihcsim@gmail.com>
Appending proxy-init to the end of the init container list ensures that it
won't prevent other init containers from accessing the network before the
proxy container is created.
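A minimal sketch of the change (types from k8s.io/api/core/v1; the function name is illustrative):
```go
package inject

import corev1 "k8s.io/api/core/v1"

// appendProxyInit adds proxy-init after any existing init containers instead
// of prepending it, so the iptables redirect rules are only installed once
// the other init containers have finished running.
func appendProxyInit(spec *corev1.PodSpec, proxyInit corev1.Container) {
	spec.InitContainers = append(spec.InitContainers, proxyInit)
}
```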
This resolves bug #1760.
Signed-off-by: ihcsim <ihcsim@gmail.com>
Added support for JSON output in `linkerd stat` through a new (-o|--output)=json option.
Fixes #1417
Signed-off-by: Alejandro Pedraza <alejandro.pedraza@gmail.com>
Updates to the Kubernetes utility code in `/controller/k8s` to support interacting with ServiceProfiles.
This makes use of the code generated client added in #1752
Signed-off-by: Alex Leong <alex@buoyant.io>
To support reading and writing of the ServiceProfile custom resource, we add a codegen'd Kubernetes client for this resource.
* Adding the ServiceProfile type and related boilerplate to /controller/gen/apis/serviceprofile. This boilerplate also contains directives that control how codegen works.
* A script in /hack which invokes codegen that generates Kubernetes client machinery for interacting with ServiceProfile resources. The majority of the generated code lives in /controller/gen/client.
* The above-mentioned generated code.
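Usage of the generated client looks roughly like this (a sketch; the import path follows the /controller/gen/client layout above, and the group/version accessor name is assumed):
```go
package main

import (
	"fmt"

	spclient "github.com/linkerd/linkerd2/controller/gen/client/clientset/versioned"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := spclient.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// List ServiceProfiles via the typed, generated client.
	sps, err := cs.LinkerdV1alpha1().ServiceProfiles("linkerd").List(metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, sp := range sps.Items {
		fmt.Println(sp.Name)
	}
}
```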
Signed-off-by: Alex Leong <alex@buoyant.io>
* Add --single-namespace install flag for restricted permissions
* Better formatting in install template
* Mark --single-namespace and --proxy-auto-inject as experimental
* Fix wording of --single-namespace check flag
* Small healthcheck refactor
Signed-off-by: Kevin Lingerfelt <kl@buoyant.io>
`go test` was failing with
`Fatalf call has arguments but no formatting directives`
Fix the test failure, and make all logrus API calls consistent.
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
* Support auto sidecar-injection
1. Add proxy-injector deployment spec to cli/install/template.go
2. Inject the Linkerd CA bundle into the MutatingWebhookConfiguration
during the webhook's start-up process.
3. Add a new handler to the CA controller to create a new secret for the
webhook when a new MutatingWebhookConfiguration is created.
4. Declare a config map to store the proxy and proxy-init container
specs used during the auto-inject process.
5. Ignore namespaces and pods that are labeled with
`linkerd.io/auto-inject: disabled` or `linkerd.io/auto-inject: completed`
6. Add new flag to `linkerd install` to enable/disable proxy
auto-injection
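For reference, the admission flow is roughly the following (a sketch only: the real sidecar spec comes from the config map in item 4, and the container value here is illustrative):
```go
package injector

import (
	"encoding/json"

	admissionv1beta1 "k8s.io/api/admission/v1beta1"
)

// mutate sketches the webhook handler: it answers an AdmissionReview with a
// JSONPatch that appends the proxy sidecar to the pod spec.
func mutate(review *admissionv1beta1.AdmissionReview) *admissionv1beta1.AdmissionResponse {
	patch := []map[string]interface{}{{
		"op":   "add",
		"path": "/spec/containers/-",
		// Illustrative value; the real container spec is read from the
		// sidecar config map.
		"value": map[string]interface{}{"name": "linkerd-proxy"},
	}}
	patchBytes, err := json.Marshal(patch)
	if err != nil {
		return &admissionv1beta1.AdmissionResponse{UID: review.Request.UID, Allowed: true}
	}
	patchType := admissionv1beta1.PatchTypeJSONPatch
	return &admissionv1beta1.AdmissionResponse{
		UID:       review.Request.UID,
		Allowed:   true,
		Patch:     patchBytes,
		PatchType: &patchType,
	}
}
```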
Proposed implementation for #561.
* Resolve missing packages errors
* Move the auto-inject label to the pod level
* PR review items
* Move proxy-injector to its own deployment
* Ignore pods that already have proxy injected
This ensures the webhook doesn't error out due to proxies that are injected using the command
* PR review items on creating/updating the MWC on-start
* Replace API calls to ConfigMap with file reads
* Fixed post-rebase broken tests
* Don't mutate the auto-inject label
Since we started using healthcheck.HasExistingSidecars() to ensure pods with
existing proxies aren't mutated, we don't need to use the auto-inject label as
an indicator.
This resolves a bug which happens with the `kubectl run` command, where the deployment
is also assigned the auto-inject label. The mutation caused the pod's auto-inject
label to no longer match the deployment's label, causing `kubectl run` to fail.
* Tidy up unit tests
* Include proxy resource requests in sidecar config map
* Fixes to broken YAML in CLI install config
The ignore inbound and outbound ports options are changed to string type to
avoid broken YAML caused by the string conversion of the uint slice.
Also, parameterized the proxy bind timeout option in template.go.
Renamed the sidecar config map to
'linkerd-proxy-injector-webhook-config'.
Signed-off-by: ihcsim <ihcsim@gmail.com>
Horizontal Pod Autoscaling does not work when container definitions in pods do not all have resource requests. This adds the ability to set CPU and memory requests on the install and inject commands by providing the proxy options --proxy-cpu and --proxy-memory.
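A sketch of how the flag values could be turned into resource requests (helper name is illustrative; uses k8s.io/apimachinery's quantity parsing):
```go
package inject

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// proxyResources builds the sidecar's resource requests from the
// --proxy-cpu and --proxy-memory flag values, e.g. "100m" and "128Mi".
func proxyResources(cpu, memory string) (corev1.ResourceRequirements, error) {
	requests := corev1.ResourceList{}
	if cpu != "" {
		q, err := resource.ParseQuantity(cpu)
		if err != nil {
			return corev1.ResourceRequirements{}, err
		}
		requests[corev1.ResourceCPU] = q
	}
	if memory != "" {
		q, err := resource.ParseQuantity(memory)
		if err != nil {
			return corev1.ResourceRequirements{}, err
		}
		requests[corev1.ResourceMemory] = q
	}
	return corev1.ResourceRequirements{Requests: requests}, nil
}
```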
Fixes #1480
Signed-off-by: Ben Lambert <ben@blam.sh>
* Use ListPods always for data plane HC
* Missing changes in grpc_server.go
* Address review comments
* Read proxy version from spec
Signed-off-by: Alena Varkockova <varkockova.a@gmail.com>
Add checks to `linkerd check --pre` to verify that the user has permission to create:
* namespaces
* serviceaccounts
* clusterroles
* clusterrolebindings
* services
* deployments
* configmaps
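Each check can be phrased as a SelfSubjectAccessReview against the Kubernetes authorization API; a sketch (function name illustrative, client-go signatures of this era):
```go
package healthcheck

import (
	authv1 "k8s.io/api/authorization/v1"
	"k8s.io/client-go/kubernetes"
)

// canCreate asks the API server whether the current user may create the
// given resource type.
func canCreate(client kubernetes.Interface, group, version, resource string) (bool, error) {
	sar := &authv1.SelfSubjectAccessReview{
		Spec: authv1.SelfSubjectAccessReviewSpec{
			ResourceAttributes: &authv1.ResourceAttributes{
				Group:    group,
				Version:  version,
				Resource: resource,
				Verb:     "create",
			},
		},
	}
	resp, err := client.AuthorizationV1().SelfSubjectAccessReviews().Create(sar)
	if err != nil {
		return false, err
	}
	return resp.Status.Allowed, nil
}
```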
Signed-off-by: Alex Leong <alex@buoyant.io>
The tap server sometimes causes the controller pod to restart. This happens
when the tap server terminates its streams to its upstream clients without
first closing its gRPC tap streams to the proxies.
This PR uses the request context from the initial TapByResource request to shut down tap streams to the data plane proxies gracefully.
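The shutdown pattern looks roughly like this (a sketch; the event type and plumbing are illustrative):
```go
package tap

import "context"

// TapEvent stands in for the real tap event type.
type TapEvent struct{}

// relayTapEvents forwards proxy tap events upstream until the
// TapByResource request's context is cancelled, so proxy streams never
// outlive the client that started them.
func relayTapEvents(ctx context.Context, events <-chan *TapEvent, send func(*TapEvent) error) error {
	for {
		select {
		case <-ctx.Done():
			return ctx.Err() // client went away: close proxy streams too
		case ev := <-events:
			if err := send(ev); err != nil {
				return err
			}
		}
	}
}
```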
Fixes #1504
Signed-off-by: Dennis Adjei-Baah <dennis@buoyant.io>
If an input file is un-injectable, existing inject behavior is to simply
output a copy of the input.
Introduce a report, printed to stderr, that communicates the end state
of the inject command. Currently this includes checking for hostNetwork
and unsupported resources.
Malformed YAML documents will continue to produce no YAML output and return
error code 1.
This change also modifies integration tests to handle stdout and stderr separately.
example outputs...
some pods injected, none with host networking:
```
hostNetwork: pods do not use host networking...............................[ok]
supported: at least one resource injected..................................[ok]
Summary: 4 of 8 YAML document(s) injected
deploy/emoji
deploy/voting
deploy/web
deploy/vote-bot
```
some pods injected, one host networking:
```
hostNetwork: pods do not use host networking...............................[warn] -- deploy/vote-bot uses "hostNetwork: true"
supported: at least one resource injected..................................[ok]
Summary: 3 of 8 YAML document(s) injected
deploy/emoji
deploy/voting
deploy/web
```
no pods injected:
```
hostNetwork: pods do not use host networking...............................[warn] -- deploy/emoji, deploy/voting, deploy/web, deploy/vote-bot use "hostNetwork: true"
supported: at least one resource injected..................................[warn] -- no supported objects found
Summary: 0 of 8 YAML document(s) injected
```
TODO: check for UDP and other init containers
Part of #1516
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
Increase the MaxRps on the tap server to 100 RPS.
The max RPS for tap/top was increased for the CLI in #1531, but we were
still manually setting this to 1 RPS in the web UI and web server.
Remove the pervasive setting of MaxRps to 1 in the web frontend and server.
The default value for the max-rps argument to the tap and top commands is an overly conservative 1 RPS. This causes the data to come in very slowly and much of it to be discarded. Furthermore, because tap requests are windowed to 10 seconds, this causes long pauses between updates.
We fix this in two ways. Firstly we reduce the window size to 1s so that updates will come in at least once per second, even when the actual RPS of the data path is extremely high. Secondly, we increase the default max-rps parameter from 1 to 100. This allows tap to paint an accurate picture of the data much more quickly and sidesteps some sampling bias that happens when the max-rps is low.
In general, tap events tend to happen in bursts. For example, one request in may trigger one or more requests out. Likewise, a single upstream event may trigger several requests to the tapped pod in quick succession. Sampling bias will occur when the max-rps is less than the actual rps and when the tap event limit subdivides these event bursts (biasing towards the first few events in the burst). The greater the max-rps, the less the effects of this bias.
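The windowed limit amounts to something like this (a sketch, not the server's actual limiter): with a 1s window and a max-rps of 100, each window admits up to 100 events, so updates arrive at least once per second even on hot paths.
```go
package tap

import "time"

// windowLimiter admits at most maxPerWindow events per window.
type windowLimiter struct {
	maxPerWindow int
	window       time.Duration
	count        int
	windowStart  time.Time
}

func (l *windowLimiter) admit(now time.Time) bool {
	if now.Sub(l.windowStart) >= l.window {
		l.windowStart, l.count = now, 0 // start a fresh window
	}
	if l.count >= l.maxPerWindow {
		return false // budget for this window exhausted
	}
	l.count++
	return true
}
```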
Fixes #1525
Signed-off-by: Alex Leong <alex@buoyant.io>
Previously, we would tap any resource's pods, regardless of whether the pods
were meshed or not. We can't actually tap non-meshed pods, so I'm adding a check
that will filter out non-meshed pods from the pods that tap watches.
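The filter is conceptually this simple (a sketch; the sidecar detection is illustrative and the real check lives in shared healthcheck code):
```go
package tap

import corev1 "k8s.io/api/core/v1"

// meshedPods keeps only pods that carry the linkerd-proxy sidecar, since
// non-meshed pods can't be tapped.
func meshedPods(pods []*corev1.Pod) []*corev1.Pod {
	var meshed []*corev1.Pod
	for _, p := range pods {
		for _, c := range p.Spec.Containers {
			if c.Name == "linkerd-proxy" {
				meshed = append(meshed, p)
				break
			}
		}
	}
	return meshed
}
```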
Previous behaviour:
When attempting to tap a non-meshed pod, it would establish
a watch on the pods but never return any results, leaving the command to hang.
In the CLI you could just cancel it with Ctrl-C. In the web, clicking Stop
would send a WebSocket.close(1000) but wouldn't actually close the connection.
Behaviour after change:
If no pods under the specified resource are meshed, it'll
return an error that no pods were found to tap.
Fixes #1493.
When the tap server hydrates metadata for the source or destination peer
of a Tap event from the peer's IP address, it doesn't currently add a
namespace label. However, destinations labeled by the proxy do have such
a label.
This is because the tap server currently gets the hydrated labels from
the `GetPodLabels` function, which is also used by the Destination
service for labeling the individual endpoints in a `WeightedAddrSet`
response. The Destination service separately adds some labels, including
the namespace and service, to all the endpoints in the set, so
`GetPodLabels` doesn't return those labels; when the tap server relies
on that function alone, the service and namespace labels are missing.
This branch fixes this issue by adding those labels to the Tap event
after calling `GetPodLabels`. In addition, it fixes a missing space
between the `src/dst_res` and `src/dst_ns` labels in Tap CLI output
with the `-o wide` flag set. This issue was introduced during the
review of #1437, but was missed at the time because the namespace label
wasn't being set correctly.
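The shape of the fix, as a sketch (label keys and the helper signature are illustrative):
```go
package tap

import corev1 "k8s.io/api/core/v1"

// hydratePeerLabels combines the per-pod labels from GetPodLabels with the
// set-wide labels the Destination service would have added.
func hydratePeerLabels(pod *corev1.Pod, svc string, getPodLabels func(*corev1.Pod) map[string]string) map[string]string {
	labels := map[string]string{}
	for k, v := range getPodLabels(pod) {
		labels[k] = v
	}
	labels["namespace"] = pod.Namespace // previously missing for IP-hydrated peers
	if svc != "" {
		labels["service"] = svc
	}
	return labels
}
```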
Signed-off-by: Eliza Weisman <eliza@buoyant.io>
Closes #1170.
This branch adds a `-o wide` (or `--output wide`) flag to the Tap CLI.
Passing this flag adds `src_res` and `dst_res` elements to the Tap
output, as described in #1170. These use the metadata labels in the tap
event to describe what Kubernetes resource the source and destination
peers belong to, based on what resource type is being tapped, and fall
back to pods if either peer is not a member of the specified resource
type.
In addition, when the resource type is not `namespace`, `src_ns` and
`dst_ns` elements are added, which show what namespaces the source
and destination peers are in. For peers which are not in the Kubernetes
cluster, none of these labels are displayed.
The source metadata added in #1434 is used to populate the `src_res` and
`src_ns` fields.
Also, this branch includes some refactoring to how tap output is
formatted.
Signed-off-by: Eliza Weisman <eliza@buoyant.io>
* Upgrade to dep 0.5.0, go 1.10.3
* Remove existing dep binary if it's the wrong version
* Add version in filename of dep binary to prevent version conflicts
Signed-off-by: Kevin Lingerfelt <kl@buoyant.io>
This is an initial implementation of the `linkerd top` command. This command launches an ncurses-style tabular view of current requests (using data from tap). Most of the command line arguments are the same as tap's, and allow selecting the resource to inspect and filtering which requests to view.
Fixes #1283
Signed-off-by: Alex Leong <alex@buoyant.io>
Based on @adleong's suggestion in
https://github.com/linkerd/linkerd2/pull/1434#pullrequestreview-145428857,
this branch adds label hydration from destination IPs to the Tap server.
This works the same as the label hydration for destination IPs added in
#1434. However, it is only applied to the destination fields of events
recorded by proxies in the inbound direction, since outbound
destinations are already labeled with metadata provided by the
Destination service.
This means that when a user taps inbound traffic, the CLI will show k8s
metadata labels for the destination peer (when available). This can
be useful especially when tapping several pods at once, as it makes it
easier to distinguish what pod received a request.
This branch also refactors how the label hydration is performed,
primarily to make adding it to the destination field less repetitive.
Also, the `hydrateIPLabels` function now mutates the label map in the
`TapEvent`, rather than returning a new map of labels, so that the case
where no pod was found doesn't require an additional allocation of an
empty map.
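That signature change looks roughly like this (a sketch; the lookup callback stands in for the pod-IP index):
```go
package tap

// hydrateIPLabels writes into the event's existing label map instead of
// returning a new one, so an IP with no matching pod costs no allocation.
func hydrateIPLabels(ip string, labels map[string]string, lookupPod func(ip string) (name, ns string, ok bool)) {
	name, ns, ok := lookupPod(ip)
	if !ok {
		return // no pod found: leave the map untouched
	}
	labels["pod"] = name
	labels["namespace"] = ns
}
```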
Signed-off-by: Eliza Weisman <eliza@buoyant.io>
The `TapEvent` protobuf contains two maps, `DestinationMeta` and
`SourceMeta`. The `DestinationMeta` contains all the metadata provided
by the proxy that originated the event (ultimately originating from the
Destination service), while the `SourceMeta` currently only contains the
source connection's TLS status.
This branch modifies the Tap server to hydrate the same set of metadata
from the source IP address, when the source was within the cluster. It
does this by adding an indexer of pod IPs to pods to its k8s API client,
and looking up IPs against this index. If a pod was found, the extra
metadata is added to the tap event sent to the client.
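With client-go, the index can be added along these lines (a sketch; the index name is illustrative):
```go
package k8s

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/tools/cache"
)

const podIPIndex = "ip"

// withPodIPIndex registers an index from pod IP to pod on the informer, so
// the tap server can resolve a source IP to its pod's metadata via
// informer.GetIndexer().ByIndex(podIPIndex, ip).
func withPodIPIndex(informer cache.SharedIndexInformer) error {
	return informer.AddIndexers(cache.Indexers{
		podIPIndex: func(obj interface{}) ([]string, error) {
			if pod, ok := obj.(*corev1.Pod); ok && pod.Status.PodIP != "" {
				return []string{pod.Status.PodIP}, nil
			}
			return nil, nil
		},
	})
}
```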
This branch also changes the client so that if a source pod name was
provided in the metadata, it prints the pod name rather than the IP
address for the `src` field in its output. This mimics what is currently
done for the `dst` field in tap output. Furthermore, the added source
metadata will be necessary for adding src resource types to tap output
(see issue #1170).
Signed-off-by: Eliza Weisman <eliza@buoyant.io>
Fixes #1405.
According to the Kubernetes Endpoints API documentation, the `name`
field in the `EndpointPort` response object is "Optional if only one
port is defined". (see
https://v1-9.docs.kubernetes.io/docs/reference/generated/kubernetes-api/v1.9/#endpointport-v1-core)
However, when the Destination service receives an endpoints response for a
service with a named target port, it expects the ports in the endpoints
response to have the same name as the target port in the service.
When a user creates a `NodePort` service with an unnamed port that
targets a named container port, this behaviour results in Linkerd
failing to route to that service by hostname. Without Linkerd injected,
the hostname is still reachable.
This branch fixes this issue by changing the `endpointsToAddresses`
function in `endpoints_watcher.go` to handle the case when an endpoints
response contains only a single unnamed port.
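The port-matching logic, sketched (illustrative, not the exact controller code):
```go
package destination

import corev1 "k8s.io/api/core/v1"

// matchPort resolves the port for a subset: a single unnamed port matches
// unconditionally ("Optional if only one port is defined"); otherwise the
// port name must equal the service's named target port.
func matchPort(ports []corev1.EndpointPort, targetPortName string) (int32, bool) {
	if len(ports) == 1 && ports[0].Name == "" {
		return ports[0].Port, true
	}
	for _, p := range ports {
		if p.Name == targetPortName {
			return p.Port, true
		}
	}
	return 0, false
}
```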
I've manually verified that this fixes the issue described in #1405.
Signed-off-by: Eliza Weisman <eliza@buoyant.io>
The `reader.Read` method only reads as many bytes as are currently available from the reader. When reading the 4-byte message length header, if not all 4 of those bytes are available, `Read` will only read the available bytes and return. This causes alignment issues when the message body is read while there are still unread header bytes in the reader: those bytes will appear at the beginning of the message body and cause a crash when the message is unmarshalled.
Use `io.ReadFull` to ensure that we read all 4 of the message length header bytes.
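The pattern, sketched (byte order assumed big-endian for illustration):
```go
package stream

import (
	"encoding/binary"
	"io"
)

// readMessage reads one length-prefixed message. io.ReadFull keeps reading
// until all 4 header bytes (and then the full body) have arrived, whereas a
// bare Read may return early and leave header bytes to be misread as body.
func readMessage(r io.Reader) ([]byte, error) {
	header := make([]byte, 4)
	if _, err := io.ReadFull(r, header); err != nil {
		return nil, err
	}
	body := make([]byte, binary.BigEndian.Uint32(header))
	if _, err := io.ReadFull(r, body); err != nil {
		return nil, err
	}
	return body, nil
}
```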
Fixes #1287
Signed-off-by: Alex Leong <alex@buoyant.io>
* Update antd to 3.7.2
* Add autocomplete of namespaces/resources to Tap in web ui
* Add form fields for authority/path/method/rps/scheme
* Add the ability to clear error messages to the error banner
* Add error listener to ws object
Speed up incremental rebuilds by avoiding relinking the controller
and/or web executables when changes are made to unrelated files.
Before this change, any time the git tag changed, the executables
would have to be rebuilt (relinked at least), even if no Go files
changed.
Validated by running the integration tests.
Signed-off-by: Brian Smith <brian@briansmith.org>
Adds a tap endpoint in the web api that communicates with the dashboard
via websockets.
I've moved a bunch of code from the cli tap.go into utils so that the code
can be shared between web and CLI. I think we should consider making the
display more suited to web, but in the short term, reusing the CLI's
rendering of tap events works.
Adds a Tap page in the Web UI that you can use to make tap requests.
The form currently only allows you to enter a resource and namespace,
other filters coming in a follow-up branch.
`ca-bundle-distributor` described the original role of the program but
`ca` ("Certificate Authority") better describes its current role.
Signed-off-by: Brian Smith <brian@briansmith.org>
* Stop using `installsuffix` when building Go code.
See https://plus.google.com/117192131596509381660/posts/eNnNePihYnK.
`-installsuffix cgo` isn't necessary as of Go 1.10 (where build caching
changed substantially) and it probably wasn't necessary earlier.
Signed-off-by: Brian Smith <brian@briansmith.org>
* Ensure destination service always sends pod metadata
* Fix test that relied on hash ordering
* Stop using protobuf structs as map keys, fix logging
Signed-off-by: Kevin Lingerfelt <kl@buoyant.io>
This PR begins to migrate Conduit to Linkerd2:
* The proxy has been completely removed from this repo, and is now located at
github.com/linkerd/linkerd2-proxy.
* A `Dockerfile-proxy` has been added to fetch the most-recently published proxy
binary from build.l5d.io.
* Proxy-specific protobuf bindings have been moved to
github.com/linkerd/linkerd2-proxy-api.
* All docker images now use the gcr.io/linkerd-io registry.
* `inject` now uses `LINKERD2_PROXY_` environment variables.
* Go paths have been updated to reflect the new (future) repo location.
* Fix bug where we were using dst_authorities as a group-by label instead of authorities
* Add test to make sure we don't group by dst_authorities
Previously, we were only checking to make sure we didn't add
dst_authorities in the query labels in promDstQueryLabels but we
weren't checking the groupBy labels in promDstGroupByLabelNames -
this caused us to try to query for dst_authorities when a --from
query was sent. There are no dst_authorities, so there would be no
named results.
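The group-by fix amounts to something like this (a sketch; the helper mirrors the promDstGroupByLabelNames mentioned above, but the body is illustrative):
```go
package public

import "github.com/prometheus/common/model"

// promDstGroupByLabelNames prefixes group-by labels with "dst_" for --from
// queries, except authority: there is no dst_authority series, so grouping
// by it would return no results.
func promDstGroupByLabelNames(labels model.LabelNames) model.LabelNames {
	var out model.LabelNames
	for _, l := range labels {
		if l == "authority" {
			out = append(out, l)
			continue
		}
		out = append(out, "dst_"+l)
	}
	return out
}
```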