- Remove a conduit image from our img folder
- Add a linkerd favicon; the "favicon not found" console error should no longer appear
- Configure webpack to not hash image names
* This commit adds an application topology graph to the namespace tab. As a developer or operator, one would like an overview of the running services in order to identify dependencies. This graph gives Linkerd2 users a good overview of service dependencies.
* Add network graph test
Fixes: #924
Signed-off-by: Franziska von der Goltz <franziska@vdgoltz.eu>
`ca-bundle-distributor` described the original role of the program but
`ca` ("Certificate Authority") better describes its current role.
Signed-off-by: Brian Smith <brian@briansmith.org>
These files were accidentally created with the executable bit set, due
to the way my network file system was configured.
Signed-off-by: Brian Smith <brian@briansmith.org>
* Stop using `-installsuffix` when building Go code.
See https://plus.google.com/117192131596509381660/posts/eNnNePihYnK.
`-installsuffix cgo` isn't necessary as of Go 1.10 (where build caching
changed substantially) and it probably wasn't necessary earlier.
Signed-off-by: Brian Smith <brian@briansmith.org>
* update grafana dashboards to remove conduit references and replace them with linkerd ones
* update test install fixtures to reflect changes
Fixes: #1315
Signed-off-by: Franziska von der Goltz <franziska@vdgoltz.eu>
* Add flag that skips `dep ensure` to bin/fast-build
bin/fast-build is supposed to be fast. `dep ensure -vendor-only` is too slow
to meet this goal. Add `LINKERD_SKIP_DEP` to allow skipping it. The default
behavior is kept as-is to reduce new users' confusion.
The difference in speed isn't very noticeable right now, because the
bin/docker-build step drowns out the win. But if/when the bin/docker-build step
is replaced, this matters a lot.
Signed-off-by: Brian Smith <brian@briansmith.org>
This PR adjusts the colour of a popup in the sidebar and removes
references to conduit from the frontend test fixtures.
All that's left in the Web UI code now are a few references to the Conduit sites and
GitHub repos, as well as the CLI name.
* Remove a touch of conduit blue from the sidebar popup
* Remove minor references to conduit throughout the web code
* Fully colour the sidebar in new bg colour
The control-plane's `ClusterRole` and `ClusterRoleBinding` objects are
cluster-scoped. Because their names did not vary across multiple control-plane
deployments, multiple control-planes could not coexist (when RBAC is enabled).
Modify the `ClusterRole` and `ClusterRoleBinding` objects to include the
control-plane's namespace in their names. Also modify the integration
test to first install two control-planes, and then perform its full
suite of tests, to prevent regression.
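A minimal Go sketch of the naming idea, for illustration only (the helper and naming scheme below are assumptions, not the actual install template):
```go
package main

import "fmt"

// clusterRoleName derives a cluster-scoped RBAC name that embeds the
// control plane's namespace, so control planes installed in different
// namespaces no longer collide on the same ClusterRole/ClusterRoleBinding.
// The naming scheme here is illustrative only.
func clusterRoleName(namespace, component string) string {
	return fmt.Sprintf("linkerd-%s-%s", namespace, component)
}

func main() {
	fmt.Println(clusterRoleName("linkerd", "controller"))      // linkerd-linkerd-controller
	fmt.Println(clusterRoleName("linkerd-test", "controller")) // linkerd-linkerd-test-controller
}
```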
Fixes #1292.
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
Previously the proxy was fetched without verifying the endpoint's
signature.
Now, the `ca-certificates` package is installed prior to fetching the
package.
Additionally, the produced image includes a file recording the version.
* Ensure destination service always sends pod metadata
* Fix test that relied on hash ordering
* Stop using protobuf structs as map keys, fix logging (see the sketch below)
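A minimal Go sketch of the map-key change, using hypothetical types in place of the generated protobuf structs:
```go
package main

import "fmt"

// Pod stands in for a generated protobuf message (hypothetical fields).
// Generated structs carry unexported bookkeeping fields and are usually held
// by pointer, which makes them poor map keys: two logically equal messages
// won't land on the same entry, and iteration order gives flaky tests.
type Pod struct {
	Namespace string
	Name      string
}

// podKey is a small comparable value derived from the message's identity;
// it is safe and cheap to use as a map key.
type podKey struct {
	namespace, name string
}

func keyFor(p *Pod) podKey {
	return podKey{namespace: p.Namespace, name: p.Name}
}

func main() {
	byKey := map[podKey]*Pod{}
	p := &Pod{Namespace: "emojivoto", Name: "web-0"}
	byKey[keyFor(p)] = p

	// Lookup works with any equal key, regardless of which pointer created it.
	fmt.Println(byKey[podKey{"emojivoto", "web-0"}].Name) // web-0
}
```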
Signed-off-by: Kevin Lingerfelt <kl@buoyant.io>
This PR begins to migrate Conduit to Linkerd2:
* The proxy has been completely removed from this repo, and is now located at
github.com/linkerd/linkerd2-proxy.
* A `Dockerfile-proxy` has been added to fetch the most-recently published proxy
binary from build.l5d.io.
* Proxy-specific protobuf bindings have been moved to
github.com/linkerd/linkerd2-proxy-api.
* All docker images now use the gcr.io/linkerd-io registry.
* `inject` now uses `LINKERD2_PROXY_` environment variables.
* Go paths have been updated to reflect the new (future) repo location.
* Fix bug where we were using dst_authorities as a group-by instead of authorities
* Add test to make sure we don't group by dst_authorities
Previously, we only checked that we didn't add dst_authorities to the query
labels in promDstQueryLabels, but we weren't checking the groupBy labels in
promDstGroupByLabelNames. This caused us to query for dst_authorities when a
--from query was sent. There are no dst_authorities, so there would be no
named results.
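Roughly, the fix makes the group-by labels mirror the existing query-label check. A hedged Go sketch with illustrative names (not the actual promDstGroupByLabelNames helpers):
```go
package main

import "fmt"

// groupByLabels builds the Prometheus group-by labels for an authority query.
// When the request is scoped with --from, the results are still keyed by the
// plain "authority" label: no "dst_authority" label is exported, so grouping
// by it returns no named results. Names here are illustrative only.
func groupByLabels(fromQuery bool) []string {
	labels := []string{"namespace", "deployment"}
	if fromQuery {
		// Destination-side resource labels get the dst_ prefix...
		labels = []string{"dst_namespace", "dst_deployment"}
	}
	// ...but the authority label never does.
	return append(labels, "authority")
}

func main() {
	fmt.Println(groupByLabels(false)) // [namespace deployment authority]
	fmt.Println(groupByLabels(true))  // [dst_namespace dst_deployment authority]
}
```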
* Additional doc updates regarding protocol support
* Re-add information about server-speaks-first protocols
Signed-off-by: Kevin Lingerfelt <kl@buoyant.io>
Depends on #1141.
This PR adds a `tls_config_last_reload_seconds` Prometheus metric
that reports the last time the TLS configuration files were reloaded.
Proof that it works:
Started the proxy with no certs, then generated them:
```
➜ http GET localhost:4191/metrics
HTTP/1.1 200 OK
content-encoding: gzip
content-length: 323
content-type: text/plain
date: Mon, 25 Jun 2018 23:02:52 GMT
# HELP tls_config_reload_total Total number of times the proxy's TLS config files were reloaded.
# TYPE tls_config_reload_total counter
tls_config_reload_total{status="io_error",path="example-example.crt",error_code="2"} 9
tls_config_reload_total{status="reloaded"} 3
# HELP tls_config_last_reload_seconds Timestamp of when the TLS configuration files were last reloaded successfully (in seconds since the UNIX epoch)
# TYPE tls_config_last_reload_seconds gauge
tls_config_last_reload_seconds 1529967764
# HELP process_start_time_seconds Time that the process started (in seconds since the UNIX epoch)
# TYPE process_start_time_seconds gauge
process_start_time_seconds 1529967754
```
Started the proxy with certs already present:
```
➜ http GET localhost:4191/metrics
HTTP/1.1 200 OK
content-encoding: gzip
content-length: 285
content-type: text/plain
date: Mon, 25 Jun 2018 23:04:39 GMT
# HELP tls_config_reload_total Total number of times the proxy's TLS config files were reloaded.
# TYPE tls_config_reload_total counter
tls_config_reload_total{status="reloaded"} 4
# HELP tls_config_last_reload_seconds Timestamp of when the TLS configuration files were last reloaded successfully (in seconds since the UNIX epoch)
# TYPE tls_config_last_reload_seconds gauge
tls_config_last_reload_seconds 1529967876
# HELP process_start_time_seconds Time that the process started (in seconds since the UNIX epoch)
# TYPE process_start_time_seconds gauge
process_start_time_seconds 1529967874
```
Signed-off-by: Eliza Weisman <eliza@buoyant.io>
This PR starts removing all references to the word "Conduit" in the web UI.
In the interest of not making huge changes all at once, I'll gradually start moving away
from the usage of "conduit" in the Web UI. For example, there are a lot of components that
have conduit in their names even though they don't need to.
This branch mostly renames components and variables. There should be no visible changes, except
that the spinner is no longer a Conduit spinner.
See #1262 for visible branding changes.
- Rename ConduitLink to PrefixedLink
- Remove ConduitSpinner in favour of antd.Spin
- Remove css classnames that are conduit- centered
- Parameterize the current Product Name so that it's easier to change in the future
Tracking ticket: linkerd/linkerd#2018
* doc update to remove extra configurations for websockets and HTTP tunneling:
- remove instructions from readme and docs to set extra configs for websockets and HTTP tunneling, since the proxy handles these upgrades automatically
Signed-off-by: Franziska von der Goltz <franziska@vdgoltz.eu>
The `inotify-rs` library's `EventStream` implementation currently
calls `task::current().notify()` in a hot loop when a poll returns
`WouldBlock`, causing the task to constantly burn CPU.
This branch updates the `inotify-rs` dependency to point at a branch
of `inotify-rs` I had previously written. That branch rewrites the
`EventStream` to use `mio` to register interest in the `inotify` file
descriptor instead, fixing the out-of-control polling.
When inotify-rs/inotify#105 is merged upstream, we can go back to
depending on the master version of the library.
Fixes #1261
Signed-off-by: Eliza Weisman <eliza@buoyant.io>
- Add Reason to the error data passed from the api
- Rewrite error logic in the UI to try to make it clearer
- Show "0/0 pods meshed" instead of "0/0 pods meshed (N/A)" when 0 pods are meshed
During protocol detection, we buffer data to detect a TLS Client Hello
message. If the client disconnects while this detection occurs, we do
not properly handle the disconnect, and the proxy may busy loop.
To fix this, we must handle the case where `read(2)` returns 0 by
creating a `Connection` with the already-closed socket.
While doing this, I've moved some of the implementation of
`ConditionallyUpgradeServerToTls::poll` into helpers on
`ConditionallyUpgradeServerToTlsInner` so that the poll method is easier
to read, hiding the inner details from the polling logic.
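The proxy itself is Rust; as a hedged illustration of the general pattern in Go, the point is to treat an end-of-file on the first read as a closed peer and hand the connection on, rather than retrying in a loop:
```go
package main

import (
	"fmt"
	"io"
	"net"
)

// sniffClientHello peeks at the first byte of a connection to decide whether
// the peer is starting a TLS handshake (0x16 is the handshake record type).
// If the peer has already disconnected, Read reports io.EOF; the important
// part is to report "closed" and move on, rather than retrying and spinning.
func sniffClientHello(conn net.Conn) (isTLS, closed bool, err error) {
	buf := make([]byte, 1)
	n, err := conn.Read(buf)
	if err == io.EOF || n == 0 {
		return false, true, nil
	}
	if err != nil {
		return false, false, err
	}
	return buf[0] == 0x16, false, nil
}

func main() {
	client, server := net.Pipe()
	client.Close() // client disconnects before sending anything
	_, closed, _ := sniffClientHello(server)
	fmt.Println("peer closed before detection:", closed) // true
}
```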
This PR adds a Prometheus stat tracking the number of times
TLS config files have been reloaded, and the number of times
reloading those files has errored.
Signed-off-by: Eliza Weisman <eliza@buoyant.io>
Create an ephemeral, in-memory TLS certificate authority and integrate it into the certificate distributor.
Remove the re-creation of deleted ConfigMaps; this will be added back later in #1248.
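For illustration, a minimal Go sketch of an ephemeral, in-memory self-signed CA using only the standard library (the real CA's parameters and structure differ):
```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"time"
)

// newInMemoryCA generates a self-signed CA certificate and key that live only
// in process memory; nothing is read from or written to disk.
func newInMemoryCA() (*x509.Certificate, *ecdsa.PrivateKey, error) {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "Cluster-local CA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageCRLSign,
		BasicConstraintsValid: true,
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		return nil, nil, err
	}
	cert, err := x509.ParseCertificate(der)
	return cert, key, err
}

func main() {
	ca, _, err := newInMemoryCA()
	if err != nil {
		panic(err)
	}
	fmt.Println("created CA:", ca.Subject.CommonName)
}
```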
Signed-off-by: Brian Smith <brian@briansmith.org>
Signed-off-by: Kevin Lingerfelt <kl@buoyant.io>
The proxy's metrics are instrumented with a `tls` label that describes
the state of TLS for each connection and its associated messages.
The same level of detail is useful in `tap` output as well.
This change updates Tap in the following ways:
* `TapEvent` protobuf updated:
* Added `source_meta` field including source labels
* `proxy_direction` enum indicates which proxy server was used.
* The proxy adds a `tls` label to both the source and destination metadata, indicating the state of each peer's connection.
* The CLI uses the `proxy_direction` field to determine which `tls` label should be rendered (see the sketch below).
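A hedged Go sketch of how the CLI could choose which peer's `tls` label to render based on `proxy_direction`; the types and field names below are illustrative, not the actual Tap bindings:
```go
package main

import "fmt"

// Direction mirrors the TapEvent proxy_direction enum: whether the event was
// reported by the source's outbound proxy or the destination's inbound proxy.
type Direction int

const (
	Outbound Direction = iota
	Inbound
)

// tlsLabel picks which peer's "tls" metadata to render. An outbound event
// describes the connection to the destination peer; an inbound event the
// connection from the source peer. Field and label names are illustrative.
func tlsLabel(dir Direction, srcMeta, dstMeta map[string]string) string {
	if dir == Outbound {
		return dstMeta["tls"]
	}
	return srcMeta["tls"]
}

func main() {
	src := map[string]string{"tls": "true"}
	dst := map[string]string{"tls": "no_identity"}
	fmt.Println(tlsLabel(Outbound, src, dst)) // no_identity
	fmt.Println(tlsLabel(Inbound, src, dst))  // true
}
```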
The `tls` label could sometimes be formatted incorrectly, without a
preceding comma.
To fix this, the `TlsStatus` type no longer formats the comma itself; instead,
the comma must be provided by the caller in the context where the status is
rendered (as is done elsewhere in this file).