Merge main into shared/docs/2.12 (#1431)

* improve installation guide (#1401)

* Point out that Helm is recommended for production installations.
* Remove multi-stage instructions.
* Wordsmithing.

* Pull @wmorgan's install-docs changes from 2.11 into 2.12.

Signed-off-by: Flynn <flynn@buoyant.io>

* Drop empty line for lint

Signed-off-by: Flynn <flynn@buoyant.io>

Signed-off-by: Flynn <flynn@buoyant.io>
Co-authored-by: William Morgan <william@buoyant.io>
Co-authored-by: Flynn <flynn@buoyant.io>
Authored by Flynn on 2022-08-23 12:28:58 -04:00; committed via GitHub.
parent db029d86dc
commit 91436e959e
5 changed files with 199 additions and 315 deletions


@ -1,19 +1,20 @@
+++
title = "Installing Linkerd with Helm"
description = "Install Linkerd onto your own Kubernetes cluster using Helm."
description = "Install Linkerd onto your Kubernetes cluster using Helm."
+++
Linkerd can optionally be installed via Helm rather than with the `linkerd
install` command.
Linkerd can be installed via Helm rather than with the `linkerd install`
command. This is recommended for production, since it allows for repeatability.
## Prerequisite: identity certificates
## Prerequisite: generate identity certificates
The identity component of Linkerd requires setting up a trust anchor
certificate, and an issuer certificate with its key. These must use the ECDSA
P-256 algorithm and need to be provided to Helm by the user (unlike when using
the `linkerd install` CLI which can generate these automatically). You can
provide your own, or follow [these instructions](../generate-certificates/)
to generate new ones.
To do [automatic mutual TLS](../../features/automatic-mtls/), Linkerd requires a
trust anchor certificate and an issuer certificate and key pair. When you're
using `linkerd install`, we can generate these for you. However, for Helm,
you will need to generate these yourself.
Please follow the instructions in [Generating your own mTLS root
certificates](../generate-certificates/) to generate these.
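For reference, a minimal sketch of what those instructions produce, assuming the
smallstep `step` CLI is installed (the file names here are just examples):

```bash
# Trust anchor (root) certificate and key
step certificate create root.linkerd.cluster.local ca.crt ca.key \
  --profile root-ca --no-password --insecure

# Issuer (intermediate) certificate and key, signed by the trust anchor
step certificate create identity.linkerd.cluster.local issuer.crt issuer.key \
  --profile intermediate-ca --not-after 8760h --no-password --insecure \
  --ca ca.crt --ca-key ca.key
```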
## Helm install procedure for stable releases
@ -71,20 +72,19 @@ helm install linkerd-control-plane \
linkerd-edge/linkerd-control-plane
```
## Disabling The Proxy Init Container
{{< note >}}
If you are using [Linkerd's CNI plugin](../../features/cni/), you must also add the
`--set cniEnabled=true` flag to your `helm install` command.
{{< /note >}}
If installing with CNI, make sure that you add the `--set
cniEnabled=true` flag to your `helm install` command.
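As a sketch, with the edge `linkerd-control-plane` chart the flag would simply be
added to your existing install command (certificate file names here follow the
prerequisite section; the other flags are unchanged):

```bash
helm install linkerd-control-plane \
  -n linkerd --create-namespace \
  --set cniEnabled=true \
  --set-file identityTrustAnchorsPEM=ca.crt \
  --set-file identity.issuer.tls.crtPEM=issuer.crt \
  --set-file identity.issuer.tls.keyPEM=issuer.key \
  linkerd-edge/linkerd-control-plane
```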
## Enabling high availability mode
## Setting High-Availability
The linkerd2 chart (or, for edge releases, the linkerd-control-plane chart)
contains a file called `values-ha.yaml` that overrides some default values to
enable high availability mode, analogous to the `--ha` option in `linkerd
install`.
The linkerd2 chart (linkerd-control-plane chart for edge releases) contains a
file `values-ha.yaml` that overrides some default values to set things up for a
high-availability scenario, analogous to the `--ha` option in `linkerd
install`. Values such as a higher number of replicas, higher memory/CPU limits,
and affinities are specified in that file.
You can get ahold of `values-ha.yaml` by fetching the chart files:
You can get the `values-ha.yaml` by fetching the chart files:
```bash
# for stable
@ -94,7 +94,7 @@ helm fetch --untar linkerd/linkerd2
helm fetch --untar --devel linkerd-edge/linkerd-control-plane
```
Then use the `-f` flag to provide the override file, for example:
Then use the `-f` flag to provide this override file. For example:
```bash
# for stable
@ -116,24 +116,24 @@ helm install linkerd-control-plane \
linkerd-edge/linkerd-control-plane
```
## Customizing the Namespace in the stable release
## Customizing the namespace
To install Linkerd to a different namespace than the default `linkerd`,
override the `Namespace` variable.
To install Linkerd to a different namespace, you can override the Helm
`Namespace` variable.
By default, the chart creates the control plane namespace with the
`config.linkerd.io/admission-webhooks: disabled` label. It is required for the
control plane to work correctly. This means that the chart won't work with the
`--namespace` option. If you're relying on a separate tool to create the
`config.linkerd.io/admission-webhooks: disabled` label. This is required for the
control plane to work correctly. This means that the chart won't work with
Helm's `--namespace` option. If you're relying on a separate tool to create the
control plane namespace, make sure that:
1. The namespace is labeled with `config.linkerd.io/admission-webhooks: disabled`
1. The `installNamespace` is set to `false`
1. The `namespace` variable is overridden with the name of your namespace
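Putting those three requirements together, a sketch with the stable linkerd2
chart might look like this (the namespace name is illustrative; the certificate
flags and any other install options follow the install procedure above):

```bash
# Create and label the namespace yourself
kubectl create namespace my-linkerd
kubectl label namespace my-linkerd config.linkerd.io/admission-webhooks=disabled

# Install into it, telling the chart not to create the namespace
helm install linkerd2 \
  --set installNamespace=false \
  --set namespace=my-linkerd \
  --set-file identityTrustAnchorsPEM=ca.crt \
  --set-file identity.issuer.tls.crtPEM=issuer.crt \
  --set-file identity.issuer.tls.keyPEM=issuer.key \
  linkerd/linkerd2
```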
## Helm upgrade procedure
## Upgrading with Helm
Make sure your local Helm repos are updated:
First, make sure your local Helm repos are updated:
```bash
helm repo update
@ -143,12 +143,15 @@ NAME CHART VERSION APP VERSION DESCRIPTIO
linkerd/linkerd2 <chart-semver-version> {{% latestversion %}} Linkerd gives you observability, reliability, and securit...
```
The `helm upgrade` command has a number of flags that allow you to customize
its behaviour. The ones that special attention should be paid to are
`--reuse-values` and `--reset-values` and how they behave when charts change
from version to version and/or overrides are applied through `--set` and
`--set-file`. To summarize there are the following prominent cases that can be
observed:
During an upgrade, you must choose whether you want to reuse the values in the
chart or move to the values specified in the newer chart. Our advice is to use
a `values.yaml` file that stores all custom overrides that you have for your
chart.
The `helm upgrade` command has a number of flags that allow you to customize its
behavior. Special attention should be paid to `--reuse-values` and
`--reset-values` and how they behave when charts change from version to version
and/or overrides are applied through `--set` and `--set-file`. For example:
- `--reuse-values` with no overrides - all values are reused
- `--reuse-values` with overrides - all except the values that are overridden
@ -160,12 +163,9 @@ provided release are applied together with the overrides
- no flag and no overrides - `--reuse-values` will be used by default
- no flag and overrides - `--reset-values` will be used by default
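Before deciding which flag to use, it can help to see which overrides are
currently applied to the release; for example (release name as in the upgrade
command below):

```bash
# Only the overrides that were set at install/upgrade time
helm get values linkerd2

# The fully resolved values, including chart defaults
helm get values linkerd2 --all
```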
Bearing all that in mind, you have to decide whether you want to reuse the
values in the chart or move to the values specified in the newer chart.
The advised practice is to use a `values.yaml` file that stores all custom
overrides that you have for your chart. Before upgrade, check whether there
are breaking changes to the chart (i.e. renamed or moved keys, etc). You can
consult the [edge](https://hub.helm.sh/charts/linkerd2-edge/linkerd2) or the
Finally, before upgrading, check whether there are breaking changes to the chart
(i.e. renamed or moved keys, etc). You can consult the
[edge](https://hub.helm.sh/charts/linkerd2-edge/linkerd2) or the
[stable](https://hub.helm.sh/charts/linkerd2/linkerd2) chart docs, depending on
which one you are upgrading to. If there are, make the corresponding changes to
your `values.yaml` file. Then you can use:
@ -175,4 +175,4 @@ helm upgrade linkerd2 linkerd/linkerd2 --reset-values -f values.yaml --atomic
```
The `--atomic` flag will ensure that all changes are rolled back in case the
upgrade operation fails
upgrade operation fails.


@ -1,6 +1,6 @@
+++
title = "Installing Linkerd"
description = "Install Linkerd to your own Kubernetes cluster."
description = "Install Linkerd onto your Kubernetes cluster."
aliases = [
"../upgrading/",
"../installing/",
@ -8,148 +8,81 @@ aliases = [
]
+++
Before you can use Linkerd, you'll need to install the
[core control plane](../../reference/architecture/#control-plane). This page
covers how to accomplish that, as well as common problems that you may
encounter.
Before you can use Linkerd, you'll need to install the [control
plane](../../reference/architecture/#control-plane). This page covers how to
accomplish that.
Note that the control plane is typically installed by using Linkerd's CLI. See
[Getting Started](../../getting-started/) for how to install the CLI onto your local
environment.
Linkerd's control plane can be installed in two ways: with the CLI and with
Helm. The CLI is convenient and easy, but for production use cases we recommend
Helm which allows for repeatability.
Linkerd also includes some first-party extensions which add additional features,
i.e. `viz`, `multicluster` and `jaeger`. See [Extensions](../extensions/)
to understand how to install them.
Note also that, once the control plane is installed, you'll need to "mesh" any
services you want Linkerd active for. See
[Adding Your Service](../../adding-your-service/) for how to add Linkerd's data
plane to your services.
In either case, we recommend installing the CLI itself so that you can validate
the success of the installation. See the [Getting Started
Guide](../../getting-started/) for how to install the CLI if you haven't done
this already.
## Requirements
Linkerd 2.x requires a functioning Kubernetes cluster on which to run. This
cluster may be hosted on a cloud provider or may be running locally via
Minikube or Docker for Desktop.
Linkerd requires a Kubernetes cluster on which to run. Where this cluster lives
is not important: it might be hosted on a cloud provider, may be running on your
local machine, or even somewhere else.
You can validate that this Kubernetes cluster is configured appropriately for
Linkerd by running
Before installing the control plane, validate that this Kubernetes cluster is
configured appropriately for Linkerd by running:
```bash
linkerd check --pre
```
### GKE
Be sure to address any issues that the checks identify before proceeding.
{{< note >}}
If installing Linkerd on GKE, there are some extra steps required depending on
how your cluster has been configured. If you are using any of these features,
check out the additional instructions.
check out the additional instructions on [GKE private
clusters](../../reference/cluster-configuration/#private-clusters).
{{< /note >}}
- [Private clusters](../../reference/cluster-configuration/#private-clusters)
## Installing with the CLI
## Installing
Once you have a cluster ready, generally speaking, installing Linkerd is as
easy as running `linkerd install` to generate a Kubernetes manifest, and
applying that to your cluster, for example, via
Once you have a cluster ready, installing Linkerd is as easy as running `linkerd
install` to generate a Kubernetes manifest, and applying that to your cluster.
For example:
```bash
linkerd install | kubectl apply -f -
```
See [Getting Started](../../getting-started/) for an example.
This basic installation should work for most cases. However, some configuration
options are provided as flags for `install`. See the [CLI
reference documentation](../../reference/cli/install/) for a complete list of
options. You can also use [tools like Kustomize](../customize-install/) to
programmatically alter this manifest.
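A minimal sketch of the Kustomize route, assuming an illustrative patch file of
your own:

```bash
# Render the manifest once, then patch it with Kustomize before applying
linkerd install > linkerd.yaml

cat > kustomization.yaml <<EOF
resources:
- linkerd.yaml
patchesStrategicMerge:
- my-patch.yaml   # your own patch file
EOF

kubectl kustomize . | kubectl apply -f -
```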
{{< note >}}
Most common configuration options are provided as flags for `install`. See the
[reference documentation](../../reference/cli/install/) for a complete list of
options. To do configuration that is not part of the `install` command, see how
you can create a [customized install](../customize-install/).
{{< /note >}}
## Installing via Helm
{{< note >}}
For organizations that distinguish cluster privileges by role, jump to the
[Multi-stage install](#multi-stage-install) section.
{{< /note >}}
To install Linkerd with Helm (recommended for production installations),
see the [Installing Linkerd with Helm](../install-helm/).
## Verification
After installation, you can validate that the installation was successful by
running:
After installation (whether CLI or Helm), you can validate that Linkerd is in a
good state by running:
```bash
linkerd check
```
## Uninstalling
## Next steps
Once you've installed the control plane, you may want to install some
extensions, such as `viz`, `multicluster` and `jaeger`. See [Using
extensions](../extensions/) for how to install them.
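For example, installing and then verifying the viz extension with the CLI looks
like this:

```bash
linkerd viz install | kubectl apply -f -
linkerd viz check
```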
Finally, once the control plane is installed, you'll need to "mesh" any services
you want Linkerd active for. See [Adding your services to
Linkerd](../../adding-your-service/) for how to do this.
## Uninstalling the control plane
See [Uninstalling Linkerd](../uninstall/).
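For reference, the CLI route described on that page looks roughly like the
following sketch (extensions are removed before the control plane; consult that
page for the authoritative steps):

```bash
# Remove any installed extensions first, e.g. the viz extension
linkerd viz uninstall | kubectl delete -f -

# Then remove the control plane itself
linkerd uninstall | kubectl delete -f -
```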
## Multi-stage install
If your organization assigns Kubernetes cluster privileges based on role
(typically cluster owner and service owner), Linkerd provides a "multi-stage"
installation to accommodate these two roles. The two installation stages are
`config` (for the cluster owner) and `control-plane` (for the service owner).
The cluster owner has privileges necessary to create namespaces, as well as
global resources including cluster roles, bindings, and custom resource
definitions. The service owner has privileges within a namespace necessary to
create deployments, configmaps, services, and secrets.
### Stage 1: config
The `config` stage is intended to be run by the cluster owner, the role with
more privileges. It is also the cluster owner's responsibility to run the
initial pre-install check:
```bash
linkerd check --pre
```
Once the pre-install check passes, install the config stage with:
```bash
linkerd install config | kubectl apply -f -
```
In addition to creating the `linkerd` namespace, this command installs the
following resources onto your Kubernetes cluster:
- ClusterRole
- ClusterRoleBinding
- CustomResourceDefinition
- MutatingWebhookConfiguration
- PodSecurityPolicy
- Role
- RoleBinding
- Secret
- ServiceAccount
- ValidatingWebhookConfiguration
To validate the `config` stage succeeded, run:
```bash
linkerd check config
```
### Stage 2: control-plane
Following successful installation of the `config` stage, the service owner may
install the `control-plane` with:
```bash
linkerd install control-plane | kubectl apply -f -
```
This command installs the following resources onto your Kubernetes cluster, all
within the `linkerd` namespace:
- ConfigMap
- Deployment
- Secret
- Service
To validate the `control-plane` stage succeeded, run:
```bash
linkerd check
```


@ -1,21 +1,22 @@
+++
title = "Installing Linkerd with Helm"
description = "Install Linkerd onto your own Kubernetes cluster using Helm."
description = "Install Linkerd onto your Kubernetes cluster using Helm."
+++
Linkerd can optionally be installed via Helm rather than with the `linkerd
install` command.
Linkerd can be installed via Helm rather than with the `linkerd install`
command. This is recommended for production, since it allows for repeatability.
## Prerequisite: identity certificates
## Prerequisite: generate identity certificates
The identity component of Linkerd requires setting up a trust anchor
certificate, and an issuer certificate with its key. These must use the ECDSA
P-256 algorithm and need to be provided to Helm by the user (unlike when using
the `linkerd install` CLI which can generate these automatically). You can
provide your own, or follow [these instructions](../generate-certificates/)
to generate new ones.
To do [automatic mutual TLS](../../features/automatic-mtls/), Linkerd requires a
trust anchor certificate and an issuer certificate and key pair. When you're
using `linkerd install`, we can generate these for you. However, for Helm,
you will need to generate these yourself.
## Adding Linkerd's Helm repository
Please follow the instructions in [Generating your own mTLS root
certificates](../generate-certificates/) to generate these.
## Helm install procedure for stable releases
```bash
# To add the repo for Linkerd stable releases:
@ -49,6 +50,11 @@ creating it beforehand elsewhere in your pipeline, just omit the
`--create-namespace` flag.
{{< /note >}}
{{< note >}}
If you are using [Linkerd's CNI plugin](../../features/cni/), you must also add the
`--set cniEnabled=true` flag to your `helm install` command.
{{< /note >}}
### linkerd-control-plane
The `linkerd-control-plane` chart sets up all the control plane components:
@ -62,25 +68,25 @@ helm install linkerd-control-plane \
linkerd/linkerd-control-plane
```
## Disabling The Proxy Init Container
{{< note >}}
If you are using [Linkerd's CNI plugin](../../features/cni/), you must also add the
`--set cniEnabled=true` flag to your `helm install` command.
{{< /note >}}
If installing with CNI, make sure that you add the `--set
cniEnabled=true` flag to your `helm install` command in both charts.
## Enabling high availability mode
## Setting High-Availability
`linkerd-control-plane` contains a file `values-ha.yaml` that overrides some
default values as to set things up under a high-availability scenario, analogous
The `linkerd-control-plane` chart contains a file `values-ha.yaml` that overrides
some default values to set things up under a high-availability scenario, analogous
to the `--ha` option in `linkerd install`. Values such as a higher number of
replicas, higher memory/cpu limits and affinities are specified in those files.
replicas, higher memory/cpu limits, and affinities are specified in those files.
You can get ahold of `values-ha.yaml` by fetching the chart file:
You can get `values-ha.yaml` by fetching the chart file:
```bash
helm fetch --untar linkerd/linkerd-control-plane
```
Then use the `-f` flag to provide the override file, for example:
Then use the `-f` flag to provide this override file. For example:
```bash
helm install linkerd-control-plane \
@ -91,9 +97,24 @@ helm install linkerd-control-plane \
linkerd/linkerd-control-plane
```
## Helm upgrade procedure
## Customizing the namespace
Make sure your local Helm repos are updated:
To install Linkerd to a different namespace, you can override the Helm
`Namespace` variable.
By default, the chart creates the control plane namespace with the
`config.linkerd.io/admission-webhooks: disabled` label. This is required for the
control plane to work correctly. This means that the chart won't work with
Helm's `--namespace` option. If you're relying on a separate tool to create the
control plane namespace, make sure that:
1. The namespace is labeled with `config.linkerd.io/admission-webhooks: disabled`
1. The `installNamespace` is set to `false`
1. The `namespace` variable is overridden with the name of your namespace
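If a separate tool owns namespace creation, the labeling step might look like
this (namespace name is illustrative); the `installNamespace` and `namespace`
overrides from the list above are then passed to `helm install`:

```bash
kubectl create namespace my-linkerd
kubectl label namespace my-linkerd config.linkerd.io/admission-webhooks=disabled
```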
## Upgrading with Helm
First, make sure your local Helm repos are updated:
```bash
helm repo update
@ -104,11 +125,15 @@ linkerd/linkerd-crds <chart-semver-version> Li
linkerd/linkerd-control-plane <chart-semver-version> {{% latestversion %}} Linkerd gives you observability, reliability, and securit...
```
The `helm upgrade` command has a number of flags that allow you to customize
its behaviour. The ones that special attention should be paid to are
`--reuse-values` and `--reset-values` and how they behave when charts change
from version to version and/or overrides are applied through `--set` and
`--set-file`. To summarize these are prominent cases that can be observed:
During an upgrade, you must choose whether you want to reuse the values in the
chart or move to the values specified in the newer chart. Our advice is to use
a `values.yaml` file that stores all custom overrides that you have for your
chart.
The `helm upgrade` command has a number of flags that allow you to customize its
behavior. Special attention should be paid to `--reuse-values` and
`--reset-values` and how they behave when charts change from version to version
and/or overrides are applied through `--set` and `--set-file`. For example:
- `--reuse-values` with no overrides - all values are reused
- `--reuse-values` with overrides - all except the values that are overridden
@ -120,16 +145,14 @@ provided release are applied together with the overrides
- no flag and no overrides - `--reuse-values` will be used by default
- no flag and overrides - `--reset-values` will be used by default
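It can also help to check what is currently deployed before picking a flag; for
example (assuming the default `linkerd` namespace):

```bash
# Currently deployed releases and chart versions
helm list -n linkerd

# Overrides currently applied to the control plane release
helm get values linkerd-control-plane -n linkerd
```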
Bearing all that in mind, you have to decide whether you want to reuse the
values in the chart or move to the values specified in the newer chart. The
advised practice is to use a `values.yaml` file that stores all custom overrides
that you have for your chart. Before upgrade, check whether there are breaking
changes to the chart (i.e. renamed or moved keys, etc). You can consult the
[edge](https://artifacthub.io/packages/helm/linkerd2/linkerd-control-plane#values)
Finally, before upgrading, check whether there are breaking changes to the chart
(i.e. renamed or moved keys, etc). You can consult the
[stable](https://artifacthub.io/packages/helm/linkerd2/linkerd-control-plane#values)
or the
[stable](https://artifacthub.io/packages/helm/linkerd2-edge/linkerd-control-plane#values)
chart docs, depending on which one you are upgrading to. If there are, make the
corresponding changes to your `values.yaml` file. Then you can use:
[edge](https://artifacthub.io/packages/helm/linkerd2-edge/linkerd-control-plane#values)
chart docs, depending on
which one you are upgrading to. If there are, make the corresponding changes to
your `values.yaml` file. Then you can use:
```bash
# the linkerd-crds chart currently doesn't have a values.yaml file
@ -140,4 +163,4 @@ helm upgrade linkerd-control-plane linkerd/linkerd-control-plane --reset-values
```
The `--atomic` flag will ensure that all changes are rolled back in case the
upgrade operation fails
upgrade operation fails.


@ -1,6 +1,6 @@
+++
title = "Installing Linkerd"
description = "Install Linkerd to your own Kubernetes cluster."
description = "Install Linkerd onto your Kubernetes cluster."
aliases = [
"../upgrading/",
"../installing/",
@ -8,148 +8,81 @@ aliases = [
]
+++
Before you can use Linkerd, you'll need to install the
[core control plane](../../reference/architecture/#control-plane). This page
covers how to accomplish that, as well as common problems that you may
encounter.
Before you can use Linkerd, you'll need to install the [control
plane](../../reference/architecture/#control-plane). This page covers how to
accomplish that.
Note that the control plane is typically installed by using Linkerd's CLI. See
[Getting Started](../../getting-started/) for how to install the CLI onto your local
environment.
Linkerd's control plane can be installed in two ways: with the CLI and with
Helm. The CLI is convenient and easy, but for production use cases we recommend
Helm which allows for repeatability.
Linkerd also includes some first-party extensions which add additional features,
i.e. `viz`, `multicluster` and `jaeger`. See [Extensions](../extensions/)
to understand how to install them.
Note also that, once the control plane is installed, you'll need to "mesh" any
services you want Linkerd active for. See
[Adding Your Service](../../adding-your-service/) for how to add Linkerd's data
plane to your services.
In either case, we recommend installing the CLI itself so that you can validate
the success of the installation. See the [Getting Started
Guide](../../getting-started/) for how to install the CLI if you haven't done
this already.
## Requirements
Linkerd 2.x requires a functioning Kubernetes cluster on which to run. This
cluster may be hosted on a cloud provider or may be running locally via
Minikube or Docker for Desktop.
Linkerd requires a Kubernetes cluster on which to run. Where this cluster lives
is not important: it might be hosted on a cloud provider, may be running on your
local machine, or even somewhere else.
You can validate that this Kubernetes cluster is configured appropriately for
Linkerd by running
Before installing the control plane, validate that this Kubernetes cluster is
configured appropriately for Linkerd by running:
```bash
linkerd check --pre
```
### GKE
Be sure to address any issues that the checks identify before proceeding.
{{< note >}}
If installing Linkerd on GKE, there are some extra steps required depending on
how your cluster has been configured. If you are using any of these features,
check out the additional instructions.
check out the additional instructions on [GKE private
clusters](../../reference/cluster-configuration/#private-clusters).
{{< /note >}}
- [Private clusters](../../reference/cluster-configuration/#private-clusters)
## Installing with the CLI
## Installing
Once you have a cluster ready, generally speaking, installing Linkerd is as
easy as running `linkerd install` to generate a Kubernetes manifest, and
applying that to your cluster, for example, via
Once you have a cluster ready, installing Linkerd is as easy as running `linkerd
install` to generate a Kubernetes manifest, and applying that to your cluster.
For example:
```bash
linkerd install | kubectl apply -f -
```
See [Getting Started](../../getting-started/) for an example.
This basic installation should work for most cases. However, some configuration
options are provided as flags for `install`. See the [CLI
reference documentation](../../reference/cli/install/) for a complete list of
options. You can also use [tools like Kustomize](../customize-install/) to
programmatically alter this manifest.
{{< note >}}
Most common configuration options are provided as flags for `install`. See the
[reference documentation](../../reference/cli/install/) for a complete list of
options. To do configuration that is not part of the `install` command, see how
you can create a [customized install](../customize-install/).
{{< /note >}}
## Installing via Helm
{{< note >}}
For organizations that distinguish cluster privileges by role, jump to the
[Multi-stage install](#multi-stage-install) section.
{{< /note >}}
To install Linkerd with Helm (recommended for production installations),
see the [Installing Linkerd with Helm](../install-helm/).
## Verification
After installation, you can validate that the installation was successful by
running:
After installation (whether CLI or Helm), you can validate that Linkerd is in a
good state by running:
```bash
linkerd check
```
## Uninstalling
## Next steps
Once you've installed the control plane, you may want to install some
extensions, such as `viz`, `multicluster` and `jaeger`. See [Using
extensions](../extensions/) for how to install them.
Finally, once the control plane is installed, you'll need to "mesh" any services
you want Linkerd active for. See [Adding your services to
Linkerd](../../adding-your-service/) for how to do this.
## Uninstalling the control plane
See [Uninstalling Linkerd](../uninstall/).
## Multi-stage install
If your organization assigns Kubernetes cluster privileges based on role
(typically cluster owner and service owner), Linkerd provides a "multi-stage"
installation to accommodate these two roles. The two installation stages are
`config` (for the cluster owner) and `control-plane` (for the service owner).
The cluster owner has privileges necessary to create namespaces, as well as
global resources including cluster roles, bindings, and custom resource
definitions. The service owner has privileges within a namespace necessary to
create deployments, configmaps, services, and secrets.
### Stage 1: config
The `config` stage is intended to be run by the cluster owner, the role with
more privileges. It is also the cluster owner's responsibility to run the
initial pre-install check:
```bash
linkerd check --pre
```
Once the pre-install check passes, install the config stage with:
```bash
linkerd install config | kubectl apply -f -
```
In addition to creating the `linkerd` namespace, this command installs the
following resources onto your Kubernetes cluster:
- ClusterRole
- ClusterRoleBinding
- CustomResourceDefinition
- MutatingWebhookConfiguration
- PodSecurityPolicy
- Role
- RoleBinding
- Secret
- ServiceAccount
- ValidatingWebhookConfiguration
To validate the `config` stage succeeded, run:
```bash
linkerd check config
```
### Stage 2: control-plane
Following successful installation of the `config` stage, the service owner may
install the `control-plane` with:
```bash
linkerd install control-plane | kubectl apply -f -
```
This command installs the following resources onto your Kubernetes cluster, all
within the `linkerd` namespace:
- ConfigMap
- Deployment
- Secret
- Service
To validate the `control-plane` stage succeeded, run:
```bash
linkerd check
```


@ -68,19 +68,14 @@ able to reach each other, run:
linkerd --context=west multicluster check
```
You should also see the list of gateways show up by running:
You should also see the list of gateways show up by running the command below.
Note that you'll need Linkerd's Viz extension to be installed in the source
cluster to get the list of gateways:
```bash
linkerd --context=west multicluster gateways
```
Note that you'll need Linkerd's Viz extension to be installed in the source
cluster to get the list of gateways. You can also use the [Buoyant
Cloud](https://buoyant.io/cloud) extension for visibility into gateways, and
that requires a [policy
modification](https://docs.buoyant.cloud/article/99-linkerd-multi-cluster-policy)
to grant that extension access.
For a detailed explanation of what this step does, check out the
[linking the clusters section](../multicluster/#linking-the-clusters).