The openAPIV3Schema validation in the ServiceProfiles CRD is very limited in what it can validate and is obviated by more sophisticated validation done by the validating admission controller. Therefore, we would like to remove the openAPIV3Schema validation to reduce the size and complexity of the CRD object.
To do so, we must also bump the version of the ServiceProfile custom resource from v1alpha1 to v1alpha2. This ensures that when the controller is upgraded, it will attempt to watch the v1alpha2 resource. If it cannot (because, for example, the controller pod started before the ServiceProfile CRD was updated and therefore the v1alpha2 version does not exist) then it will go into a crash loop backoff until it can. This essentially means that the controller will wait for the CRD to be upgraded to include v1alpha2 before it will start.
Bumping the version is necessary because if we did not, it would be possible for the controller to start before the CRD is updated (removing the validation). In this case, when the CRD is edited, the controller will lose its list watch on ServiceProfiles and will stop getting updates.
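As a rough illustration of that startup behavior, here is a minimal sketch (not the actual controller code; names and timings are illustrative) of waiting until the API server serves the new version, using the Kubernetes discovery API:
```go
package controller

import (
	"log"
	"time"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/rest"
)

// waitForServiceProfileV1alpha2 blocks until the API server reports that it
// serves serviceprofiles.linkerd.io/v1alpha2. A controller that instead exits
// on failure achieves the same effect through its crash loop backoff.
func waitForServiceProfileV1alpha2(cfg *rest.Config) {
	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	for {
		if _, err := dc.ServerResourcesForGroupVersion("linkerd.io/v1alpha2"); err == nil {
			return
		}
		log.Println("waiting for the ServiceProfile CRD to serve v1alpha2")
		time.Sleep(5 * time.Second)
	}
}
```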
Signed-off-by: Alex Leong <alex@buoyant.io>
* Allow custom cluster domain in destination watcher
The change relaxes the constraint that an authority must carry a
`svc.cluster.local` suffix; it now only requires `svc` as the third DNS label.
A unit test could be added, though the destination server and endpoint
watcher tests already cover this behaviour (a minimal sketch of the relaxed
check follows this list).
* Update proto to allow setting custom cluster domain
Update golden templates
* Allow setting custom domain in grpc, web server
* Remove cluster domain flags from web srv and public api
* Set defaultClusterDomain in validateAndBuild if none is set
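As referenced above, a minimal sketch of the relaxed authority check (illustrative only, not the actual watcher code):
```go
package destination

import "strings"

// isK8sServiceAuthority reports whether an authority such as
// "web.emojivoto.svc.my-domain.example:8080" names a Kubernetes Service:
// rather than requiring the full "svc.cluster.local" suffix, only the third
// DNS label has to be "svc".
func isK8sServiceAuthority(authority string) bool {
	host := authority
	if i := strings.IndexByte(host, ':'); i >= 0 {
		host = host[:i] // drop the port, if present
	}
	labels := strings.Split(host, ".")
	return len(labels) >= 3 && labels[2] == "svc"
}
```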
Signed-off-by: Armin Buerkle <armin.buerkle@alfatraining.de>
* Updates for the integration test script
1. Remove existing resources prior to starting the test
2. Remove existing resources post upgrade test
3. Fail fast if `install_test.go` fails
4. Don't perform cleanup if any of the tests fail, to leave an opportunity
for debugging
* Remove pre-test cleanup from .travis.yml
This is now done in the bin/test-run script so that it can be shared
between l5d-bot and staging.
Signed-off-by: Ivan Sim <ivan@buoyant.io>
When waiting for controller pods to be created or become ready, `linkerd check` doesn't offer any hints as to whether there has been an error (such as an ImagePullBackoff).
We add pod status to the output to make this more immediately obvious.
Fixes #2877
Signed-off-by: Alex Leong <alex@buoyant.io>
* Added Anti Affinity when HA is configured
* Move check to validate()
* Test output with anti-affinity when ha upgrade
* Add anti-affinity to identity deployment
* made host anti-affinity default when ha
* Define affinity template in a separate file
Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>
When installing using some of the flags that persist in install, e.g.
`linkerd install --ha`, and then running `linkerd upgrade config`, a nil
pointer error is thrown.
Fixes #3094
`newCmdUpgradeConfig()` was passing `flags` as nil because
`linkerd upgrade config` doesn't expose any flags for the subcommand,
but it turns out they're still needed down the call stack in
`setFlagsFromInstall` to reuse the flags persisted during install.
I also added a new unit test catching this.
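For illustration, a sketch of the failure mode with hypothetical names (not the linkerd2 source): a helper like `setFlagsFromInstall` ends up calling methods on a flag set that was passed as nil.
```go
package cmd

import "github.com/spf13/pflag"

// setFlagsFromInstallSketch replays values persisted at install time onto the
// subcommand's flag set. If flags is nil (as it was for
// `linkerd upgrade config`, which registers no flags of its own), the Lookup
// call below dereferences a nil pointer. The fix is to always pass the
// persistent install flag set, even when the subcommand adds nothing to it.
func setFlagsFromInstallSketch(flags *pflag.FlagSet, persisted []*pflag.Flag) {
	for _, p := range persisted {
		if f := flags.Lookup(p.Name); f != nil && !f.Changed {
			_ = f.Value.Set(p.Value.String())
		}
	}
}
```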
Signed-off-by: Alejandro Pedraza <alejandro@buoyant.io>
This PR improves the CLI output for `linkerd edges` to reflect the latest API
changes.
Source and destination namespaces for each edge are now shown by default. The
`MSG` column has been replaced with `Secured` and contains a green checkmark or
the reason for no identity. A new `-o wide` flag shows the identity of client
and server if known.
The `TestGetServicesFor` test is flaky because it compares two slices of services which are returned in a non-deterministic order.
To make this deterministic, we first sort the slices by name.
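A minimal sketch of the fix, assuming the slices hold `*corev1.Service` values compared with `reflect.DeepEqual` (the helper name is illustrative):
```go
package watcher

import (
	"reflect"
	"sort"
	"testing"

	corev1 "k8s.io/api/core/v1"
)

// assertServicesEqual sorts both slices by namespace and name before
// comparing, so the comparison no longer depends on the order in which
// GetServicesFor happens to return services.
func assertServicesEqual(t *testing.T, expected, actual []*corev1.Service) {
	t.Helper()
	byName := func(svcs []*corev1.Service) func(i, j int) bool {
		return func(i, j int) bool {
			if svcs[i].Namespace != svcs[j].Namespace {
				return svcs[i].Namespace < svcs[j].Namespace
			}
			return svcs[i].Name < svcs[j].Name
		}
	}
	sort.Slice(expected, byName(expected))
	sort.Slice(actual, byName(actual))
	if !reflect.DeepEqual(expected, actual) {
		t.Fatalf("expected %v, got %v", expected, actual)
	}
}
```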
Signed-off-by: Alex Leong <alex@buoyant.io>
Integration tests may fail and leave behind namespaces that subsequent
builds aren't able to clean up, because the git SHA is included in
the namespace name and the subsequent builds don't know about those
SHAs.
This modifies the `test-cleanup` script to delete based on object labels
instead of relying on the object names, now that (as of 2.4) all the
control plane components are labeled. Note that this will also remove
non-testing Linkerd namespaces, but we were already partially doing that
because we were removing the cluster-level resources (CRDs,
webhook configs, clusterroles, clusterrolebindings, psp).
`test-cleanup` no longer receives a namespace name as an argument.
The data plane namespaces aren't labeled though, so I've added the
`linkerd.io/is-test-data-plane` label for them in
`CreateNamespaceIfNotExists()`, and made sure all tests that need a
data plane explicitly call that method instead of creating the
namespace as a side effect in `KubectlApply()`.
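A rough sketch of the namespace-creation side of this (simplified; the real test helper and the label value may differ):
```go
package testutil

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createNamespaceIfNotExists creates the test namespace with the
// is-test-data-plane label so that test-cleanup can later delete it with a
// label selector (e.g. `kubectl delete ns -l linkerd.io/is-test-data-plane`)
// instead of needing to know the SHA-suffixed namespace name.
func createNamespaceIfNotExists(clientset kubernetes.Interface, name string) error {
	ns := &corev1.Namespace{
		ObjectMeta: metav1.ObjectMeta{
			Name:   name,
			Labels: map[string]string{"linkerd.io/is-test-data-plane": "true"},
		},
	}
	// client-go of this era takes no context argument here.
	_, err := clientset.CoreV1().Namespaces().Create(ns)
	if errors.IsAlreadyExists(err) {
		return nil
	}
	return err
}
```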
Signed-off-by: Alejandro Pedraza <alejandro@buoyant.io>
The git-commit-proxy-version script attempted to link to the specific
SHAs. GitHub doesn't actually render these links, and the information is
redundant since we link the appropriate PR.
Each time we update the proxy from the linkerd2-proxy repo, we make the
change slightly differently. The bin/git-commit-proxy-version script does all
the steps needed to update the proxy version, up to and including making a
commit to this repo.
The proxy version is now stored in a .proxy-version file and is
consumed directly by Dockerfile-proxy, which simplifies both the
Dockerfile and the update process.
This script formats commit messages and emits output as follows:
```
commit c05198a851f69bdc7007974a0ef1f4c01c98d0ce (HEAD -> ver/proxy-update)
Author: Oliver Gould <ver@buoyant.io>
Date: Thu Jul 11 17:23:05 2019 +0000
proxy: Update to linkerd/linkerd2-proxy#3a3ec3b
* linkerd/linkerd2-proxy#0cc58cd fallback: Clarify fallback layering (linkerd/linkerd2-proxy#288)
* linkerd/linkerd2-proxy#b71349a Replace `log` and `env-logger` with `tracing` and `tracing-fmt` (linkerd/linkerd2-proxy#277)
* linkerd/linkerd2-proxy#3a3ec3b Use a constant-time load balancer (linkerd/linkerd2-proxy#266)
diff --git a/.proxy-version b/.proxy-version
index f81f40de..d7faa12d 100644
--- a/.proxy-version
+++ b/.proxy-version
@@ -1 +1 @@
-05b012d
+3a3ec3b
```
Integration tests on master broke following the 2.4 release, caused by
the recent disabling of multi control-plane support, coupled with the
upgrade integration test (which now upgrades from 2.4 to current sha).
The integration tests do the following:
1. install the current sha
2. test the current sha
3. install the latest stable in an `upgrade` namespace
4. in the `upgrade` namespace, upgrade from stable to latest sha
5. test the upgraded installation
Step 3 breaks because `linkerd install` with stable-2.4 will fail if
existing global resources (from step 1) are present.
For now, modify the integration tests to do the following:
1. install the latest stable in an `upgrade` namespace
2. in the `upgrade` namespace, upgrade from stable to latest sha
3. test the upgraded installation
4. upon successful step 3, remove all related resources
5. install the current sha
6. test the current sha
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
The `linkerd routes` command gets the list of routes for a resource by checking which services that resource is a member of. If a traffic split exists, it is possible for a resource to get traffic via a service that it is not a member of. Specifically, a resource which is a member of a leaf service can get traffic to the apex service. This means that even though the resource is serving routes associated with the apex service, these will not be displayed in the `linkerd routes` command.
We update `linkerd routes` to be traffic-split aware. This means that when a traffic split exists, we consider resources which are members of a leaf service with non-zero weight to be members of the apex service for the purpose of determining which routes to display.
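A simplified sketch of that membership rule (types here are stand-ins, not the actual controller code):
```go
package cmd

// trafficSplit is a simplified stand-in for an SMI TrafficSplit: an apex
// service and its weighted leaf services.
type trafficSplit struct {
	Apex   string
	Leaves map[string]int // leaf service name -> weight
}

// routeServices returns the services whose routes should be displayed for a
// resource, given the services the resource is directly a member of. If the
// resource backs a leaf service with non-zero weight, the apex service is
// included as well.
func routeServices(memberOf []string, splits []trafficSplit) []string {
	seen := map[string]bool{}
	var out []string
	add := func(svc string) {
		if !seen[svc] {
			seen[svc] = true
			out = append(out, svc)
		}
	}
	for _, svc := range memberOf {
		add(svc)
		for _, ts := range splits {
			if weight, ok := ts.Leaves[svc]; ok && weight > 0 {
				add(ts.Apex)
			}
		}
	}
	return out
}
```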
Signed-off-by: Alex Leong <alex@buoyant.io>
During operations with `linkerd stat`, the actual pod status is sometimes
not clear.
This commit introduces a method in the `k8s` package that gets the pod status,
based on [`kubectl` logic](33a3e325f7/pkg/printers/internalversion/printers.go (L558-L640)),
to expose the `STATUS` column for pods. It also changes the stat command
in the `cli` package, adding a column when the resource type is a Pod.
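A heavily simplified sketch of the kind of status derivation involved (the real method follows kubectl's printer logic much more closely):
```go
package k8s

import corev1 "k8s.io/api/core/v1"

// podStatusSketch surfaces a more specific status than pod.Status.Phase by
// inspecting container states, e.g. CrashLoopBackOff or ImagePullBackOff.
func podStatusSketch(pod corev1.Pod) string {
	status := string(pod.Status.Phase)
	if pod.Status.Reason != "" {
		status = pod.Status.Reason
	}
	for _, cs := range pod.Status.ContainerStatuses {
		switch {
		case cs.State.Waiting != nil && cs.State.Waiting.Reason != "":
			status = cs.State.Waiting.Reason
		case cs.State.Terminated != nil && cs.State.Terminated.Reason != "":
			status = cs.State.Terminated.Reason
		}
	}
	if pod.DeletionTimestamp != nil {
		status = "Terminating"
	}
	return status
}
```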
Fixes #1967
Signed-off-by: Jonathan Juares Beber <jonathanbeber@gmail.com>
When getting pods for specific Kubernetes resources, using just
labels as a selector generates wrong output. Since two resources can use
the same label selector and manage distinct pods, a new mechanism to check
the pods for a given resource is needed. More details in #2932.
This commit introduces a verification through the pod owner reference
`UID`s, comparing them with the given resource's. Additional logic is needed
when handling `Deployments`, since a Deployment creates a `ReplicaSet` and
that ReplicaSet is the pods' actual owner. No verification is done in the
case of `Services`.
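A hedged sketch of the ownership check (simplified; for a `Deployment`, the UID set would hold the UIDs of its `ReplicaSet`s rather than the Deployment itself):
```go
package k8s

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/types"
)

// isOwnedBy reports whether a pod's owner references include one of the
// UIDs belonging to the queried resource (or its intermediate ReplicaSets).
func isOwnedBy(pod corev1.Pod, ownerUIDs map[types.UID]struct{}) bool {
	for _, ref := range pod.OwnerReferences {
		if _, ok := ownerUIDs[ref.UID]; ok {
			return true
		}
	}
	return false
}
```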
Signed-off-by: Jonathan Juares Beber <jonathanbeber@gmail.com>
`linkerd check --pre` validates that PSPs provide `NET_ADMIN`, but was
not validating `NET_RAW`, despite `NET_RAW` being required by Linkerd's
proxy-init container since #2969.
Introduce a `has NET_RAW capability` check to `linkerd check --pre`.
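A sketch of what such a capability check can look like against a PSP spec (illustrative; a complete check also has to confirm that the caller is allowed to use the policy):
```go
package healthcheck

import policyv1beta1 "k8s.io/api/policy/v1beta1"

// providesNetRaw reports whether a PodSecurityPolicy allows containers to add
// the NET_RAW capability, either explicitly or via the "*" wildcard.
func providesNetRaw(psp policyv1beta1.PodSecurityPolicy) bool {
	for _, c := range psp.Spec.AllowedCapabilities {
		if c == "NET_RAW" || c == "*" {
			return true
		}
	}
	return false
}
```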
Fixes #3054
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
## stable-2.4.0
This release adds traffic splitting functionality, support for the Kubernetes
Service Mesh Interface (SMI), graduates high-availability support out of
experimental status, and adds a tremendous list of other improvements,
performance enhancements, and bug fixes.
Linkerd's new traffic splitting feature allows users to dynamically control the
percentage of traffic destined for a service. This powerful feature can be used
to implement rollout strategies like canary releases and blue-green deploys.
Support for the [Service Mesh Interface](https://smi-spec.io) (SMI) makes it
easier for ecosystem tools to work across all service mesh implementations.
Along with the introduction of optional install stages via the `linkerd install
config` and `linkerd install control-plane` commands, the default behavior of
the `linkerd inject` command only adds annotations and defers injection to the
always-installed proxy injector component.
Finally, there have been many performance and usability improvements to the
proxy and UI, as well as production-ready features including:
* A new `linkerd edges` command that provides fine-grained observability into
the TLS-based identity system
* A `--enable-debug-sidecar` flag for the `linkerd inject` command that improves
debugging efforts
Linkerd recently passed a CNCF-sponsored security audit! Check out the in-depth
report [here](https://github.com/linkerd/linkerd2/blob/master/SECURITY_AUDIT.pdf).
To install this release, run: `curl https://run.linkerd.io/install | sh`
**Upgrade notes**: Use the `linkerd upgrade` command to upgrade the control
plane. This command ensures that all of the existing control plane's
configuration and mTLS secrets are retained. For more details, please see the
[upgrade instructions](https://linkerd.io/2/tasks/upgrade/#upgrade-notice-stable-2-4-0).
**Special thanks to**: @alenkacz, @codeman9, @dwj300, @jackprice, @liquidslr,
@matej-g, @Pothulapati, @zaharidichev
**Full release notes**:
* CLI
* **Breaking Change** Removed the `--proxy-auto-inject` flag, as the proxy
injector is now always installed
* **Breaking Change** Replaced the `--linkerd-version` flag with the
`--proxy-version` flag in the `linkerd install` and `linkerd upgrade`
commands, which allows setting the version for the injected proxy sidecar
image, without changing the image versions for the control plane
* Introduced install stages: `linkerd install config` and `linkerd install
control-plane`
* Introduced upgrade stages: `linkerd upgrade config` and `linkerd upgrade
control-plane`
* Introduced a new `--from-manifests` flag to `linkerd upgrade` allowing
manually feeding a previously saved output of `linkerd install` into the
command, instead of requiring a connection to the cluster to fetch the
config
* Introduced a new `--manual` flag to `linkerd inject` to output the proxy
sidecar container spec
* Introduced a new `--enable-debug-sidecar` flag to `linkerd inject`, that
injects a debug sidecar to inspect traffic to and from the meshed pod
* Added a new check for unschedulable pods and PSP issues (thanks,
@liquidslr!)
* Disabled the spinner in `linkerd check` when running without a TTY
* Ensured the ServiceAccount for the proxy injector is created before its
Deployment to avoid warnings when installing the proxy injector (thanks,
@dwj300!)
* Added a `linkerd check config` command for verifying that `linkerd install
config` was successful
* Improved the help documentation of `linkerd install` to clarify flag usage
* Added support for private Kubernetes clusters by changing the CLI to connect
to the control plane using a port-forward (thanks, @jackprice!)
* Fixed `linkerd check` and `linkerd dashboard` failing when any control plane
pod is not ready, even when multiple replicas exist (as in HA mode)
* **New** Added a `linkerd edges` command that shows the source and
destination name and identity for proxied connections, to assist in
debugging
* Tap can now be disabled for specific pods during injection by using the
`--disable-tap` flag, or by using the `config.linkerd.io/disable-tap`
annotation
* Introduced pre-install healthcheck for clock skew (thanks, @matej-g!)
* Added a JSON option to the `linkerd edges` command so that output is
scripting friendly and can be parsed easily (thanks @alenkacz!)
* Fixed an issue with `linkerd upgrade` where running without `--ha`, against
a control plane originally installed with `--ha`, would unintentionally
disable the high availability features
* Added a `--init-image-version` flag to `linkerd inject` to override the
injected proxy-init container version
* Added the `--linkerd-cni-enabled` flag to the `install` subcommands so that
`NET_ADMIN` capability is omitted from the CNI-enabled control plane's PSP
* Updated `linkerd check` to validate the caller can create
`PodSecurityPolicy` resources
* Added a check to `linkerd install` to prevent installing multiple control
planes into different namespaces, to avoid conflicts between global resources
* Added support for passing a URL directly to `linkerd inject` (thanks
@Pothulapati!)
* Added more descriptive output to the `linkerd check` output for control
plane ReplicaSet readiness
* Refactored the `linkerd endpoints` command to use the same interface as
used by the proxy for service discovery information
* Fixed a bug where `linkerd inject` would fail when given a path to a file
outside the current directory
* Graduated high-availability support out of experimental status
* Modified the error message for `linkerd install` to provide instructions for
proceeding when an existing installation is found
* Controller
* Added Go pprof HTTP endpoints to all control plane components' admin servers
to better assist debugging efforts
* Fixed bug in the proxy injector, where sporadically the pod workload owner
wasn't properly determined, which would result in erroneous stats
* Added support for a new `config.linkerd.io/disable-identity` annotation to
opt out of identity for a specific pod
* Fixed pod creation failure when a `ResourceQuota` exists by adding a default
resource spec for the proxy-init init container
* Fixed control plane components failing on startup when the Kubernetes API
returns an `ErrGroupDiscoveryFailed`
* Added Controller Component Labels to the webhook config resources (thanks,
@Pothulapati!)
* Moved the tap service into its own pod
* **New** Control plane installations now generate a self-signed certificate
and private key pair for each webhook, to prepare for future work to make
the proxy injector and service profile validator HA
* Added the `config.linkerd.io/enable-debug-sidecar` annotation allowing the
`--enable-debug-sidecar` flag to work when auto-injecting Linkerd proxies
* Added multiple replicas for the `proxy-injector` and `sp-validator`
controllers when run in high availability mode (thanks to @Pothulapati!)
* Defined least privilege default security context values for the proxy
container so that auto-injection does not fail (thanks @codeman9!)
* Default the webhook failure policy to `Fail` in order to account for
unexpected errors during auto-inject; this ensures uninjected applications
are not deployed
* Introduced control plane's PSP and RBAC resources into Helm templates; these
policies are only in effect if the PSP admission controller is enabled
* Removed `UPDATE` operation from proxy-injector webhook because pod mutations
are disallowed during update operations
* Default the mutating and validating webhook configurations `sideEffects`
property to `None` to indicate that the webhooks have no side effects on
other resources (thanks @Pothulapati!)
* Added support for the SMI TrafficSplit API which allows users to define
traffic splits in TrafficSplit custom resources
* Added the `linkerd.io/control-plane-ns` label to all Linkerd resources
allowing them to be identified using a label selector
* Added Prometheus metrics for the Kubernetes watchers in the destination
service for better visibility
* Proxy
* Replaced the fixed reconnect backoff with an exponential one (thanks,
@zaharidichev!)
* Fixed an issue where load balancers can become stuck
* Added a dispatch timeout that limits the amount of time a request can be
buffered in the proxy
* Removed the limit on the number of concurrently active service discovery
queries to the destination service
* Fixed an epoll notification issue that could cause excessive CPU usage
* Added the ability to disable tap by setting an env var (thanks,
@zaharidichev!)
* Changed the proxy's routing behavior so that, when the control plane does
not resolve a destination, the proxy forwards the request with minimal
additional routing logic
* Fixed a bug in the proxy's HPACK codec that could cause requests with very
large header values to hang indefinitely
* Fixed a memory leak that can occur if an HTTP/2 request with a payload ends
before the entire payload is sent to the destination
* The `l5d-override-dst` header is now used for inbound service profile
discovery
* Added errors totals to `response_total` metrics
* Changed the load balancer to require that Kubernetes services are resolved
via the control plane
* Added the `NET_RAW` capability to the proxy-init container to be compatible
with `PodSecurityPolicy`s that use `drop: all`
* Fixed the proxy rejecting HTTP/2 requests that don't have an `:authority`
* Improved idle service eviction to reduce resource consumption for clients
that send requests to many services
* Fixed proxied HTTP/2 connections returning 502 errors when the upstream
connection is reset, rather than propagating the reset to the client
* Changed the proxy to treat unexpected HTTP/2 frames as stream errors rather
than connection errors
* Fixed a bug where DNS queries could persist longer than necessary
* Improved router eviction to remove idle services in a more timely manner
* Fixed a bug where the proxy would fail to process requests with obscure
characters in the URI
* Web UI
* Added the Font Awesome stylesheet locally; this allows both Font Awesome and
Material-UI sidebar icons to display consistently with no/limited internet
access (thanks again, @liquidslr!)
* Removed the Authorities table and sidebar link from the dashboard to prepare
for a new, improved dashboard view communicating authority data
* Fixed dashboard behavior that caused incorrect table sorting
* Removed the "Debug" page from the Linkerd dashboard while the functionality
of that page is being redesigned
* Added an Edges table to the resource detail view that shows the source,
destination name, and identity for proxied connections
* Improved UI for Edges table in dashboard by changing column names, adding a
"Secured" icon and showing an empty Edges table in the case of no returned
edges
* Internal
* Known container errors were hidden in the integration tests; now they are
reported in the output without having the tests fail
* Fixed integration tests by adding known proxy-injector log warning to tests
* Modified the integration test for `linkerd upgrade` in order to test
upgrading from the latest stable release instead of the latest edge and
reflect the typical use case
* Moved the proxy-init container to a separate `linkerd/proxy-init` Git
repository
Signed-off-by: Kevin Leimkuhler <kleimkuhler@icloud.com>
## edge-19.7.3
* CLI
* Graduated high-availability support out of experimental status
* Modified the error message for `linkerd install` to provide instructions for
proceeding when an existing installation is found
* Controller
* Added Prometheus metrics for the Kubernetes watchers in the destination
service for better visibility
Signed-off-by: Kevin Leimkuhler <kleimkuhler@icloud.com>
The existing `linkerd install` error message for existing resources was
shared with `linkerd check`. Given the different contexts, the messaging
made more sense for `linkerd check` than for `linkerd install`.
Modify the error messaging for `linkerd install` to print a bare list
of existing resources, and provide instructions for proceeding.
For example:
```bash
$ linkerd install
Unable to install the Linkerd control plane. It appears that there is an existing installation:
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-controller
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-identity
If you are sure you'd like to have a fresh install, remove these resources with:
linkerd install --ignore-cluster | kubectl delete -f -
Otherwise, you can use the --ignore-cluster flag to overwrite the existing global resources.
```
Fixes #3045
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
To give better visibility into the inner workings of the Kubernetes watchers in the destination service, we add some Prometheus metrics.
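For example, a minimal sketch of this kind of instrumentation using the Prometheus Go client (metric and label names are illustrative, not the ones actually exported):
```go
package watcher

import (
	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promauto"
)

// watcherEvents counts informer events seen per watcher, so the destination
// service's admin endpoint can show how busy each Kubernetes watcher is.
var watcherEvents = promauto.NewCounterVec(
	prometheus.CounterOpts{
		Name: "watcher_events_total",
		Help: "Number of add/update/delete events seen by a Kubernetes watcher.",
	},
	[]string{"watcher", "event"},
)

func recordEvent(watcher, event string) {
	watcherEvents.WithLabelValues(watcher, event).Inc()
}
```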
Signed-off-by: Alex Leong <alex@buoyant.io>
* CLI
* Refactored the `linkerd endpoints` command to use the same interface as
used by the proxy for service discovery information
* Fixed a bug where `linkerd inject` would fail when given a path to a file
outside the current directory
* Proxy
* Fixed a bug where DNS queries could persist longer than necessary
* Improved router eviction to remove idle services in a more timely manner
* Fixed a bug where the proxy would fail to process requests with obscure
characters in the URI
Signed-off-by: Alex Leong <alex@buoyant.io>
Pick up the following proxy changes:
* Update httparse to v1.3.4
* canonicalize: stop resolving when the receiver is dropped
* router: Remove interval from router eviction
Signed-off-by: Alex Leong <alex@buoyant.io>
PR #2603 modified the web process to read the UUID from the
`linkerd-config` ConfigMap rather than from a command line flag. The
`linkerd check` command relied on that command line flag to retrieve the
UUID as part of its version check.
Modify `linkerd check` to correctly retrieve the UUID from
`linkerd-config`. Also refactor `linkerd-config` retrieval and parsing
code to be shared between healthcheck, install, and upgrade.
Relates to #2961
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
* Have `linkerd endpoints` use `Destination.Get`
Fixes #2885
We're refactoring `linkerd endpoints` so that it hits
the `Destination.Get` endpoint directly, instead of relying on the
Discovery service.
For that, I've created a new `client.go` for Destination and added it to
the `APIClient` interface.
I've also added a `destinationClient` struct that mimics `tapClient`,
and whose common logic has been moved into `stream_client.go`.
Analogously, I added a `destinationServer` struct that mimics
`tapServer`.
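For context, consuming `Destination.Get` over gRPC looks roughly like the sketch below (the address, scheme, and path are examples, and the generated-package import path is an assumption):
```go
package main

import (
	"context"
	"fmt"
	"log"

	destinationPb "github.com/linkerd/linkerd2-proxy-api/go/destination"
	"google.golang.org/grpc"
)

func main() {
	// Assumes the destination service is reachable locally, e.g. via a
	// port-forward to the controller pod's destination container.
	conn, err := grpc.Dial("localhost:8086", grpc.WithInsecure())
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	client := destinationPb.NewDestinationClient(conn)
	stream, err := client.Get(context.Background(), &destinationPb.GetDestination{
		Scheme: "http",
		Path:   "web-svc.emojivoto.svc.cluster.local:80",
	})
	if err != nil {
		log.Fatal(err)
	}
	for {
		update, err := stream.Recv()
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println(update) // each Update carries endpoint add/remove sets
	}
}
```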
Signed-off-by: Alejandro Pedraza <alejandro@buoyant.io>
* CLI
* Added more descriptive output to the `linkerd check` output for control
plane ReplicaSet readiness
* **Breaking change** Renamed `config.linkerd.io/debug` annotation to
`config.linkerd.io/enable-debug-sidecar`, to match the
`--enable-debug-sidecar` CLI flag that sets it
* Fixed a bug in `linkerd edges` that caused incorrect identities to be
displayed when requests were sent from two or more namespaces
* Controller
* Added the `linkerd.io/control-plane-ns` label to the SMI Traffic Split CRD
* Proxy
* Fixed proxied HTTP/2 connections returning 502 errors when the upstream
connection is reset, rather than propagating the reset to the client
* Changed the proxy to treat unexpected HTTP/2 frames as stream errors rather
than connection errors
Signed-off-by: Eliza Weisman <eliza@buoyant.io>
This PR fixes a bug in the edges command where, if src_resources from two
different namespaces sent requests to the same dst_resource, the original
src_identity was overwritten.
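One plausible shape of such a fix (purely illustrative, not the actual API code) is to key the collected identities by both the source namespace and the destination rather than by the destination alone:
```go
package api

// edgeKey distinguishes edges by source namespace as well as destination, so
// identities reported from different source namespaces don't overwrite each
// other in the map below.
type edgeKey struct {
	srcNamespace string
	dst          string
}

type edgeRow struct {
	SrcNamespace, Dst, SrcIdentity string
}

// collectIdentities records the client identity observed for each
// (source namespace, destination) pair.
func collectIdentities(rows []edgeRow) map[edgeKey]string {
	identities := make(map[edgeKey]string)
	for _, r := range rows {
		identities[edgeKey{srcNamespace: r.SrcNamespace, dst: r.Dst}] = r.SrcIdentity
	}
	return identities
}
```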
The `linkerd check` for healthy ReplicaSets had a generic
`control plane components ready` description, and a hint anchor to
`l5d-existence-psp`. While a ReplicaSet failure could definitely occur
due to psp, that hintAnchor was already in use by the "control plane
PodSecurityPolicies exist" check.
Rename the `control plane components ready` check to
`control plane replica sets are ready`, and the hintAnchor from
`l5d-existence-psp` to `l5d-existence-replicasets`.
Relates to https://github.com/linkerd/website/issues/372.
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
Linkerd's CLI flags all match 1:1 with their `config.linkerd.io/*`
annotation counterparts, except `--enable-debug-sidecar`, which
corresponded to `config.linkerd.io/debug`. Additionally, the Linkerd
docs assume this 1:1 mapping.
Rename the `config.linkerd.io/debug` annotation to
`config.linkerd.io/enable-debug-sidecar`.
Relates to https://github.com/linkerd/website/issues/381
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
## edge-19.6.4
This release adds support for the SMI [Traffic Split](https://github.com/deislabs/smi-spec/blob/master/traffic-split.md)
API. Creating a TrafficSplit resource will cause Linkerd to split traffic
between the specified backend services. Please see [the spec](https://github.com/deislabs/smi-spec/blob/master/traffic-split.md)
for more details.
* CLI
* Added a check to `install` to prevent installing multiple control planes
into different namespaces
* Added support for passing a URL directly to `linkerd inject` (thanks
@Pothulapati!)
* Added the `--all-namespaces` flag to `linkerd edges`
* Controller
* Added support for the SMI TrafficSplit API which allows users to define
traffic splits in TrafficSplit custom resources
* Web UI
* Improved UI for Edges table in dashboard by changing column names, adding a
"Secured" icon and showing an empty Edges table in the case of no returned
edges
Signed-off-by: Alex Leong <alex@buoyant.io>
This change implements the DstOverrides feature of the destination profile API (aka traffic splitting).
We add a TrafficSplitWatcher to the destination service which watches for TrafficSplit resources and notifies subscribers about TrafficSplits for services that they are subscribed to. A new TrafficSplitAdaptor then merges the TrafficSplit logic into the DstOverrides field of the destination profile.
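To make the adaptor's role concrete, here is a simplified sketch of the transformation it performs (types are stand-ins, not the real linkerd2-proxy-api messages):
```go
package destination

// weightedBackend is a stand-in for a TrafficSplit backend: a leaf service
// and its weight.
type weightedBackend struct {
	Service string
	Weight  uint32
}

// profile is a stand-in for a destination profile update, carrying only the
// field relevant here.
type profile struct {
	DstOverrides []weightedBackend
}

// applyTrafficSplit rewrites the profile's DstOverrides from the
// TrafficSplit's backends, dropping zero-weight leaves so they receive no
// traffic.
func applyTrafficSplit(p *profile, backends []weightedBackend) {
	p.DstOverrides = nil
	for _, b := range backends {
		if b.Weight > 0 {
			p.DstOverrides = append(p.DstOverrides, b)
		}
	}
}
```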
Signed-off-by: Alex Leong <alex@buoyant.io>
* Introduce new checks to determine existence of global resources and the
'linkerd-config' config map.
* Update pre-check to check for existence of global resources
This ensures that multiple control planes can't be installed into
different namespaces.
* Update integration test clean-up script to delete psp and crd
Signed-off-by: Ivan Sim <ivan@buoyant.io>
This PR improves the UI for the Edges table in the dashboard, including changing column names, adding a "Secured" icon and showing an empty Edges table in the case of no returned edges.