Update docs to reference Linkerd (#1331)

* Update docs to reference Linkerd

* Review comments
Thomas Rampelberg 2018-07-17 15:29:46 -07:00 committed by GitHub
parent 37f8490edb
commit 74da62f8c0
12 changed files with 217 additions and 197 deletions


@@ -6,7 +6,7 @@
If you have a question about Linkerd2 or have encountered problems using it,
start by [asking a question in the forums][discourse] or join us in the
[#linkerd2 Slack channel][slack].
## Certificate of Origin ##


@@ -42,7 +42,7 @@ Linkerd is hosted by the Cloud Native Computing Foundation ([CNCF][cncf]).
* [conduit-announce mailing list][conduit-announce]: Linkerd2 announcements only
(low volume).
* Follow [@RunConduit][twitter] on Twitter.
* Join the #linkerd2 channel on the [Linkerd Slack][slack].
## Documentation


@@ -1,5 +1,6 @@
The documentation here is published at
[https://linkerd.io/docs/](https://linkerd.io/docs/) when each version is
released. The version of the documentation in GitHub should always match the
code in the same branch. For example, when a pull request changes the behavior
of a documented feature, the pull request should change the documentation here
to match the new behavior.


@@ -5,8 +5,8 @@ docpage = true
parent = "adding-your-service"
+++
In order for your service to take advantage of Linkerd, it needs to be added
to the service mesh. This is done by using the Linkerd CLI to add the Linkerd
proxy sidecar to each pod. By doing this as a rolling update, the availability
of your application will not be affected.
@@ -21,44 +21,46 @@ of your application will not be affected.
## Adding your service
### To add your service to the service mesh, run:
#### `linkerd inject deployment.yml | kubectl apply -f -`
`deployment.yml` is the Kubernetes config file containing your
application. This will trigger a rolling update of your deployment, replacing
each pod with a new one that additionally contains the Linkerd sidecar proxy.
You will know that your service has been successfully added to the service mesh
if its proxy status is green in the Linkerd dashboard.
![dashboard](images/dashboard-data-plane.png "linkerd dashboard")
### You can always get to the Linkerd dashboard by running
#### `linkerd dashboard`
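As a rough sketch of the full flow, you can watch the rolling update finish before checking the dashboard (the `web` Deployment name here is hypothetical; substitute a Deployment from your `deployment.yml`):
```bash
# Inject the Linkerd proxy sidecar and apply the updated config.
linkerd inject deployment.yml | kubectl apply -f -

# Wait for the rolling update to replace each pod ("web" is hypothetical).
kubectl rollout status deploy/web
```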
## Protocol support
Linkerd is capable of proxying all TCP traffic, including WebSockets and HTTP
tunneling, and reporting top-line metrics (success rates, latencies, etc) for
all HTTP, HTTP/2, and gRPC traffic.
### Server-speaks-first protocols
For protocols where the server sends data before the client sends data over
connections that aren't protected by TLS, Linkerd cannot automatically recognize
the protocol used on the connection. Two common examples of this type of
protocol are MySQL and SMTP. If you are using Linkerd to proxy plaintext MySQL
or SMTP requests on their default ports (3306 and 25, respectively), then Linkerd
is able to successfully identify these protocols based on the port. If you're
using non-default ports, or if you're using a different server-speaks-first
protocol, then you'll need to manually configure Linkerd to recognize these
protocols.
If you're working with a protocol that can't be automatically recognized by
Linkerd, use the `--skip-inbound-ports` and `--skip-outbound-ports` flags when
running `linkerd inject`.
### For example, if your application makes requests to a MySQL database running on port 4406, use the command:
#### `linkerd inject deployment.yml --skip-outbound-ports=4406 | kubectl apply -f -`
### Likewise, if your application runs an SMTP server that accepts incoming requests on port 35, use the command:
#### `linkerd inject deployment.yml --skip-inbound-ports=35 | kubectl apply -f -`
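If a workload needs both, the flags can be combined in a single inject; a sketch using the illustrative ports from the examples above:
```bash
# Skip the Linkerd proxy for outbound MySQL traffic and inbound SMTP traffic.
linkerd inject deployment.yml \
  --skip-outbound-ports=4406 \
  --skip-inbound-ports=35 | kubectl apply -f -
```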


@@ -5,47 +5,46 @@ docpage = true
parent = "automatic-tls"
+++
Linkerd can be configured to automatically negotiate Transport Layer Security
(TLS) for application communication.
When TLS is enabled, Linkerd automatically establishes and authenticates
secure, private connections between Linkerd proxies. This is done without
breaking unencrypted communication with endpoints that are not configured
with TLS-enabled Linkerd proxies.
This feature is currently **experimental** and is designed to _fail open_ so
that it cannot easily break existing applications. As the feature matures,
this policy will change in favor of stronger security guarantees.
## Getting started with TLS
The TLS feature is currently disabled by default. To enable it, you must
install the control plane with the `--tls` flag set to `optional`. This
configures the mesh so that TLS is enabled opportunistically:
```bash
linkerd install --tls=optional | kubectl apply -f -
```
This causes a Certificate Authority (CA) container to be run in the
control-plane. The CA watches for the creation and updates of Linkerd-enabled
pods. For each Linkerd-enabled pod, it generates a private key, issues a
certificate, and distributes the certificate and private key to each pod as a
Kubernetes Secret.
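As a rough check that credentials were issued, you can list the Secrets in an application namespace (the namespace is illustrative, and secret names vary by release):
```bash
# Secrets created by the Linkerd CA should appear alongside any existing ones.
kubectl -n emojivoto get secrets
```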
Once you've configured the control plane to support TLS, you may enable TLS
for each application when it is injected with the Linkerd proxy:
```bash
linkerd inject --tls=optional app.yml | kubectl apply -f -
```
Then, tools like `linkerd dashboard`, `linkerd stat`, and `linkerd tap` will
indicate the TLS status of traffic:
```bash
linkerd stat authority -n emojivoto
NAME MESHED SUCCESS RPS LATENCY_P50 LATENCY_P95 LATENCY_P99 TLS
emoji-svc.emojivoto:8080 - 100.00% 0.6rps 1ms 1ms 1ms 100%
emoji-svc.emojivoto:8888 - 100.00% 0.8rps 1ms 1ms 9ms 100%
@@ -53,13 +52,12 @@ voting-svc.emojivoto:8080 - 45.45% 0.6rps 4ms 10m
web-svc.emojivoto:80 - 0.00% 0.6rps 8ms 33ms 39ms 100%
```
## Known issues
As this feature is _experimental_, we know that there's still a lot of [work
to do][tls-issues]. We **LOVE** bug reports though, so please don't hesitate
to [file an issue][new-issue] if you run into any problems while testing
automatic TLS.
[tls-issues]: https://github.com/linkerd/linkerd2/issues?q=is%3Aissue+is%3Aopen+label%3Aarea%2Ftls
[new-issue]: https://github.com/linkerd/linkerd2/issues/new


@@ -6,19 +6,19 @@ docpage = true
+++
This section assumes you've followed the steps in the [Getting
Started](/getting-started) guide and have Linkerd and the demo application
running in some flavor of Kubernetes cluster.
## Using Linkerd to debug a failing service 💻🔥
Now that we have Linkerd and the demo application [up and
running](/getting-started), let's use Linkerd to diagnose issues.
First, let's use the `linkerd stat` command to get an overview of deployment
health:
#### `linkerd -n emojivoto stat deploy`
### Your results will be something like:
```bash
NAME MESHED SUCCESS RPS LATENCY_P50 LATENCY_P95 LATENCY_P99
emoji 1/1 100.00% 2.0rps 1ms 4ms 5ms
vote-bot 1/1 - - - - -
```
@@ -29,15 +29,15 @@ web 1/1 94.92% 2.0rps 5ms 10ms 18ms
We can see that the `voting` service is performing far worse than the others.
How do we figure out what's going on? Our traditional options are: looking at
the logs, attaching a debugger, etc. Linkerd gives us a new tool that we can
use: a live view of traffic going through the deployment. Let's use the `tap`
command to take a look at all requests currently flowing to this deployment.
#### `linkerd -n emojivoto tap deploy --to deploy/voting`
This gives us a lot of requests:
```bash
req id=0:1624 src=10.1.8.150:56224 dst=voting-6795f54474-6vfbs :method=POST :authority=voting-svc.emojivoto:8080 :path=/emojivoto.v1.VotingService/VoteDoughnut
rsp id=0:1624 src=10.1.8.150:56224 dst=voting-6795f54474-6vfbs :status=200 latency=1603µs
end id=0:1624 src=10.1.8.150:56224 dst=voting-6795f54474-6vfbs grpc-status=OK duration=28µs response-length=5B
```
@@ -59,9 +59,9 @@ requests.
Let's figure out where those are coming from. Let's run the `tap` command again,
and grep the output for `Unknown`s:
#### `linkerd -n emojivoto tap deploy --to deploy/voting | grep Unknown -B 2`
```bash
req id=0:2294 src=10.1.8.150:56224 dst=voting-6795f54474-6vfbs :method=POST :authority=voting-svc.emojivoto:8080 :path=/emojivoto.v1.VotingService/VotePoop
rsp id=0:2294 src=10.1.8.150:56224 dst=voting-6795f54474-6vfbs :status=200 latency=2147µs
end id=0:2294 src=10.1.8.150:56224 dst=voting-6795f54474-6vfbs grpc-status=Unknown duration=0µs response-length=0B
```
@@ -76,9 +76,9 @@ We can see that all of the `grpc-status=Unknown`s are coming from the `VotePoop`
endpoint. Let's use the `tap` command's flags to narrow down our output to just
this endpoint:
#### `linkerd -n emojivoto tap deploy/voting --path /emojivoto.v1.VotingService/VotePoop`
```bash
req id=0:2724 src=10.1.8.150:56224 dst=voting-6795f54474-6vfbs :method=POST :authority=voting-svc.emojivoto:8080 :path=/emojivoto.v1.VotingService/VotePoop
rsp id=0:2724 src=10.1.8.150:56224 dst=voting-6795f54474-6vfbs :status=200 latency=1644µs
end id=0:2724 src=10.1.8.150:56224 dst=voting-6795f54474-6vfbs grpc-status=Unknown duration=0µs response-length=0B
```
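`tap` accepts other filters besides `--path`; for example, this sketch narrows the stream to POST requests (flag availability may vary by release):
```bash
# Watch only POST requests sent to the voting deployment.
linkerd -n emojivoto tap deploy/voting --method POST
```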
@@ -96,10 +96,10 @@ when we try to vote for 💩 ourselves, in the UI? Follow the instructions in
Now click on the 💩 emoji to vote on it.
![demo application](images/emojivoto-poop.png "Demo application 💩 page")
Oh! The demo application is intentionally returning errors for all requests to
vote for 💩. We've found where the errors are coming from. At this point, we
can start diving into the logs or code for our failing service. In future
versions of Linkerd, we'll even be able to apply routing rules to change what
happens when this endpoint is called.


@@ -3,15 +3,14 @@ title = "Get involved"
docpage = true
[menu.docs]
parent = "get-involved"
+++
We're really excited to welcome contributors to [Linkerd](https://github.com/linkerd/linkerd2)!
There are several ways to get involved:
- Linkerd on [GitHub](https://github.com/linkerd/linkerd2)
- Join us on the #linkerd2 channel in [Linkerd Slack](https://slack.linkerd.io/)
- Join the mailing lists
- Users list: [conduit-users@googlegroups.com](https://groups.google.com/forum/#!forum/conduit-users)
- Developers list: [conduit-dev@googlegroups.com](https://groups.google.com/forum/#!forum/conduit-dev)


@@ -5,19 +5,20 @@ docpage = true
parent = "getting-started"
+++
Linkerd has two basic components: a *data plane* comprised of lightweight
proxies, which are deployed as sidecar containers alongside your service code,
and a *control plane* of processes that coordinate and manage these proxies.
Humans interact with the service mesh via a command-line interface (CLI) or
a web app that you use to control the cluster.
In this guide, we'll walk you through how to deploy Linkerd on your Kubernetes
cluster, and how to set up a sample gRPC application.
Afterwards, check out the [Using Linkerd to debug a service](/debugging-an-app)
page, where we'll walk you through how to use Linkerd to investigate poorly
performing services.
> Note that Linkerd v{{% latestversion %}} is an alpha release. It is capable of
proxying all TCP traffic, including WebSockets and HTTP tunneling, and reporting
top-line metrics (success rates, latencies, etc) for all HTTP, HTTP/2, and gRPC
traffic.
@@ -26,6 +27,7 @@ ____
##### STEP ONE
## Set up 🌟
First, you'll need a Kubernetes cluster running 1.8 or later, and a functioning
`kubectl` command on your local machine.
@@ -37,67 +39,71 @@ To run Kubernetes on your local machine, we suggest
#### `kubectl version --short`
### Which should display:
```bash
Client Version: v1.10.3
Server Version: v1.10.3
```
Confirm that both `Client Version` and `Server Version` are v1.8.0 or greater.
If not, or if `kubectl` displays an error message, your Kubernetes cluster may
not exist or may not be set up correctly.
____
##### STEP TWO
## Install the CLI 💻
If this is your first time running
Linkerd, you'll need to download the command-line interface (CLI) onto your
local machine. You'll then use this CLI to install Linkerd on a Kubernetes
cluster.
### To install the CLI, run:
#### `curl https://run.conduit.io/install | sh`
### Which should display:
```bash
Downloading linkerd-{{% latestversion %}}-macos...
Linkerd was successfully installed 🎉
Copy $HOME/.linkerd/bin/linkerd into your PATH. Then run
linkerd install | kubectl apply -f -
to deploy Linkerd to Kubernetes. Once deployed, run
linkerd dashboard
to view the Linkerd UI.
Visit linkerd.io for more information.
```
> Alternatively, you can download the CLI directly via the
[Linkerd releases page](https://github.com/linkerd/linkerd2/releases/v{{% latestversion %}}).
### Next, add linkerd to your path with:
#### `export PATH=$PATH:$HOME/.linkerd/bin`
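To make that persist across shell sessions, append it to your shell profile (bash shown; adjust for your shell):
```bash
# Add the linkerd CLI to PATH permanently.
echo 'export PATH=$PATH:$HOME/.linkerd/bin' >> ~/.bashrc
```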
### Verify the CLI is installed and running correctly with:
#### `linkerd version`
### Which should display:
```bash
Client version: v{{% latestversion %}}
Server version: unavailable
```
With `Server version: unavailable`, don't worry, we haven't added the control
plane... yet.
____
##### STEP THREE
## Install Linkerd onto the cluster 😎
Now that you have the CLI running locally, it's time to install the Linkerd
control plane onto your Kubernetes cluster. Don't worry if you already have
things running on this cluster---the control plane will be installed in a
separate `linkerd` namespace, where it can easily be removed.
### To install linkerd into your environment, run the following commands.
<main>
@@ -109,7 +115,7 @@ it can easily be removed.
<div class="first-tab">
<h4 class="minikube">
<code>linkerd install | kubectl apply -f -</code>
</h4>
</div>
@@ -129,20 +135,21 @@ it can easily be removed.
</blockquote>
<p style="margin-top: 1rem;">Then run:</p>
<h4>
<code>linkerd install | kubectl apply -f -</code>
</h4>
</div>
</main>
### Which should display:
```bash
namespace "linkerd" created
serviceaccount "linkerd-controller" created
clusterrole "linkerd-controller" created
clusterrolebinding "linkerd-controller" created
serviceaccount "linkerd-prometheus" created
clusterrole "linkerd-prometheus" created
clusterrolebinding "linkerd-prometheus" created
service "api" created
service "proxy-api" created
deployment "controller" created
@@ -153,41 +160,47 @@ deployment "prometheus" created
configmap "prometheus-config" created
```
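As a quick sanity check before verifying versions (component names may differ slightly by release), confirm the control-plane pods are up:
```bash
# All pods in the linkerd namespace should reach Running status.
kubectl -n linkerd get pods
```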
### To verify the Linkerd server version is v{{% latestversion %}}, run:
#### `linkerd version`
### Which should display:
```bash
Client version: v{{% latestversion %}}
Server version: v{{% latestversion %}}
```
### Now, to view the control plane locally, run:
#### `linkerd dashboard`
The first command generates a Kubernetes config, and pipes it to `kubectl`.
Kubectl then applies the config to your Kubernetes cluster.
If you see something like below, Linkerd is now running on your cluster. 🎉
![linkerd dashboard](images/dashboard.png "An example of the empty linkerd dashboard")
Of course, you haven't actually added any services to the mesh yet,
so the dashboard won't have much to display beyond the status of the service
mesh itself.
___
##### STEP FOUR
## Install the demo app 🚀
Finally, it's time to install a demo application and add it to the service mesh.
<a href="http://emoji.voto/" class="button" target="_blank">See a live version of the demo app</a>
### To install a local version of this demo and add it to Linkerd, run:
#### `curl https://raw.githubusercontent.com/runconduit/conduit-examples/master/emojivoto/emojivoto.yml | linkerd inject - | kubectl apply -f -`
### Which should display:
```bash
namespace "emojivoto" created
deployment "emoji" created
service "emoji-svc" created
@ -200,27 +213,28 @@ deployment "vote-bot" created
This command downloads the Kubernetes config for an example gRPC application
where users can vote for their favorite emoji, then runs the config through
`linkerd inject`. This rewrites the config to insert the Linkerd data plane
proxies as sidecar containers in the application pods.
Finally, `kubectl` applies the config to the Kubernetes cluster.
As with `linkerd install`, in this command, the Linkerd CLI is simply doing text
transformations, with `kubectl` doing the heavy lifting of actually applying
config to the Kubernetes cluster. This way, you can introduce additional filters
into the pipeline, or run the commands separately and inspect the output of each
one.
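To see those stages individually, you can run the same pipeline step by step and inspect each intermediate file (filenames are illustrative):
```bash
# Run each stage separately, inspecting the output of each one.
curl https://raw.githubusercontent.com/runconduit/conduit-examples/master/emojivoto/emojivoto.yml > emojivoto.yml
linkerd inject emojivoto.yml > emojivoto-injected.yml
kubectl apply -f emojivoto-injected.yml
```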
At this point, you should have an application running on your Kubernetes
cluster, and (unbeknownst to it!) also added to the Linkerd service mesh.
____
##### STEP FIVE
## Watch it run! 👟
If you glance at the Linkerd dashboard, you should see all the
HTTP/2 and HTTP/1-speaking services in the demo app show up in the list of
deployments that have been added to the Linkerd mesh.
### View the demo app by visiting the web service's public IP:
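On Minikube, for example, you can open the demo app's web frontend directly (a sketch; the `web-svc` service name comes from the install output above):
```bash
# Open the demo app's web service in a browser via Minikube.
minikube -n emojivoto service web-svc
```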
@@ -246,7 +260,7 @@ deployments that have been added to the Conduit mesh.
</div>
</main>
Finally, let's take a look back at our dashboard (run `linkerd dashboard` if you
haven't already). You should be able to browse all the services that are running
as part of the application to view:
@@ -257,7 +271,7 @@ as part of the application to view:
As well as various other bits of information about live traffic. Neat, huh?
### Views available in `linkerd dashboard`:
### SERVICE MESH
Displays continuous health metrics of the control plane itself, as well as
@@ -280,14 +294,15 @@ ___
## Using the CLI 💻
Of course, the dashboard isn't the only way to inspect what's
happening in the Linkerd service mesh. The CLI provides several interesting and
powerful commands that you should experiment with, including `linkerd stat` and `linkerd tap`.
### To view details per deployment, run:
#### `linkerd -n emojivoto stat deploy`
### Which should display:
```bash
NAME MESHED SUCCESS RPS LATENCY_P50 LATENCY_P95 LATENCY_P99
emoji 1/1 100.00% 2.0rps 1ms 2ms 3ms
vote-bot 1/1 - - - - -
```
@@ -298,10 +313,11 @@ web 1/1 90.68% 2.0rps 4ms 5ms 5ms
&nbsp;
### To see a live pipeline of requests for your application, run:
#### `linkerd -n emojivoto tap deploy`
### Which should display:
```bash
req id=0:2900 src=10.1.8.151:51978 dst=10.1.8.150:80 :method=GET :authority=web-svc.emojivoto:80 :path=/api/list
req id=0:2901 src=10.1.8.150:49246 dst=emoji-664486dccb-97kws :method=POST :authority=emoji-svc.emojivoto:8080 :path=/emojivoto.v1.EmojiService/ListAll
rsp id=0:2901 src=10.1.8.150:49246 dst=emoji-664486dccb-97kws :status=200 latency=2146µs
@@ -312,12 +328,15 @@ req id=0:2902 src=10.1.8.151:51978 dst=10.1.8.150:80 :method=GET :authority=web-
...
```
____
## That's it! 👏
For more information about Linkerd, check out the
[overview doc](/docs) and the [roadmap doc](/roadmap), or hop into the #linkerd2
channel on [the Linkerd Slack](https://slack.linkerd.io) or browse through the
[Conduit forum](https://discourse.linkerd.io/c/conduit). You can also follow
[@runconduit](https://twitter.com/runconduit) on Twitter.
We're just getting started building Linkerd, and we're extremely interested in
your feedback!

Binary image file changed and not shown (before: 1.0 MiB, after: 104 KiB).


@@ -1,35 +1,35 @@
+++
title = "Conduit overview"
title = "Linkerd overview"
docpage = true
[menu.docs]
parent = "docs"
+++
Linkerd is an ultralight service mesh for Kubernetes. It
makes running services on Kubernetes safer and more reliable by transparently
managing the runtime communication between services. It provides features for
observability, reliability, and security---all without requiring changes to your
code.
In this doc, you'll get a high-level overview of Linkerd and how it works. If
you're not familiar with the service mesh model, you may want to first read
William Morgan's overview, [What's a service mesh? And why do I need one?](https://buoyant.io/2017/04/25/whats-a-service-mesh-and-why-do-i-need-one/)
## Linkerd's architecture
The Linkerd service mesh is deployed on a Kubernetes
cluster as two basic components: a *data plane* and a *control plane*. The data
plane carries the actual application request traffic between service instances.
The control plane drives the data plane and provides APIs for modifying its
behavior (as well as for accessing aggregated metrics). The Linkerd CLI and web
UI consume this API and provide ergonomic controls for human beings.
Let's take each of these components in turn.
The Linkerd data plane is comprised of lightweight proxies, which are deployed
as sidecar containers alongside each instance of your service code. In order to
“add” a service to the Linkerd service mesh, the pods for that service must be
redeployed to include a data plane proxy in each pod. (The `linkerd inject`
command accomplishes this, as well as the configuration work necessary to
transparently funnel traffic from each instance through the proxy.)
@@ -41,16 +41,16 @@ policy.
These proxies are not designed to be configured by hand. Rather, their behavior
is driven by the control plane.
The Linkerd control plane is a set of services that run in a dedicated
Kubernetes namespace (`linkerd` by default). These services accomplish various
things---aggregating telemetry data, providing a user-facing API, providing
control data to the data plane proxies, etc. Together, they drive the behavior
of the data plane.
## Using Linkerd
In order to interact with Linkerd as a human,
you use the Linkerd CLI and the web UI (as well as associated tools like
`kubectl`). The CLI and the web UI drive the control plane via its API, and the
control plane in turn drives the behavior of the data plane.
@@ -58,21 +58,21 @@ The control plane API is designed to be generic enough that other tooling can be
built on top of it. For example, you may wish to additionally drive the API from
a CI/CD system.
A brief overview of the CLI's functionality can be seen by running `linkerd
--help`.
## Linkerd with Kubernetes
Linkerd is designed to fit seamlessly into an
existing Kubernetes system. This design has several important features.
First, the Linkerd CLI (`linkerd`) is designed to be used in conjunction with
`kubectl` whenever possible. For example, `linkerd install` and `linkerd inject`
generate Kubernetes configurations designed to be fed directly into `kubectl`.
This provides a clear division of labor between the service mesh
and the orchestrator, and makes it easier to fit Linkerd into existing
Kubernetes workflows.
Second, Linkerd's core noun in Kubernetes is the Deployment, not the Service.
For example, `linkerd inject` adds a Deployment; the Linkerd web UI displays
Deployments; aggregated performance metrics are given per Deployment. This is
because individual pods can be a part of arbitrary numbers of Services, which
can lead to complex mappings between traffic flow and pods. Deployments, by
@@ -80,14 +80,14 @@ contrast, require that a single pod be a part of at most one Deployment. By
building on Deployments rather than Services, the mapping between traffic and
pod is always clear.
These two design features compose nicely. For example, `linkerd inject` can be
used on a live Deployment, as Kubernetes rolls pods to include the data plane
proxy as it updates the Deployment.
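A sketch of that live-injection flow (the deployment and namespace names are illustrative):
```bash
# Fetch a running Deployment's config, add the proxy sidecar, and let
# Kubernetes roll the pods to pick up the change.
kubectl -n emojivoto get deploy/web -o yaml \
  | linkerd inject - \
  | kubectl apply -f -
```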
## Extending Linkerd's Behavior
The Linkerd control plane also provides a convenient place for custom
functionality to be built. While the initial release of Linkerd does not support
this yet, in the near future, you'll be able to extend Linkerd's functionality
by writing gRPC plugins that run as part of the control plane, without needing
to recompile Linkerd.


@@ -6,19 +6,19 @@
parent = "prometheus"
+++
If you have an existing Prometheus cluster, it is very easy to export Linkerd's
rich telemetry data to your cluster. Simply add the following item to your
`scrape_configs` in your Prometheus config file:
```yaml
- job_name: 'linkerd-controller'
  kubernetes_sd_configs:
  - role: pod
    namespaces:
      names: ['{{.Namespace}}']
  relabel_configs:
  - source_labels:
    - __meta_kubernetes_pod_label_linkerd_io_control_plane_component
    - __meta_kubernetes_pod_container_port_name
    action: keep
    regex: (.*);admin-http$
@@ -26,7 +26,7 @@ rich telemetry data to your cluster. Simply add the following item to your
    action: replace
    target_label: component
- job_name: 'linkerd-proxy'
  kubernetes_sd_configs:
  - role: pod
  relabel_configs:
@@ -34,7 +34,7 @@ rich telemetry data to your cluster. Simply add the following item to your
    - __meta_kubernetes_pod_container_name
    - __meta_kubernetes_pod_container_port_name
    action: keep
    regex: ^linkerd-proxy;linkerd-metrics$
  - source_labels: [__meta_kubernetes_namespace]
    action: replace
    target_label: namespace
@@ -43,28 +43,28 @@ rich telemetry data to your cluster. Simply add the following item to your
    target_label: pod
  # special case k8s' "job" label, to not interfere with prometheus' "job"
  # label
  # __meta_kubernetes_pod_label_linkerd_io_proxy_job=foo =>
  # k8s_job=foo
  - source_labels: [__meta_kubernetes_pod_label_linkerd_io_proxy_job]
    action: replace
    target_label: k8s_job
  # __meta_kubernetes_pod_label_linkerd_io_proxy_deployment=foo =>
  # deployment=foo
  - action: labelmap
    regex: __meta_kubernetes_pod_label_linkerd_io_proxy_(.+)
  # drop all labels that we just made copies of in the previous labelmap
  - action: labeldrop
    regex: __meta_kubernetes_pod_label_linkerd_io_proxy_(.+)
  # __meta_kubernetes_pod_label_foo=bar => foo=bar
  - action: labelmap
    regex: __meta_kubernetes_pod_label_(.+)
```
That's it! Your Prometheus cluster is now configured to scrape Linkerd's
metrics.
Linkerd's proxy metrics will have the label `job="linkerd-proxy"`. Linkerd's
control-plane metrics will have the label `job="linkerd-controller"`.
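Once Prometheus is scraping, you can sanity-check the new job with a query; a sketch against Prometheus' HTTP API (the Prometheus address is illustrative):
```bash
# Per-deployment request rates, computed from the proxies' request_total counter.
curl -s 'http://prometheus.example.com:9090/api/v1/query' \
  --data-urlencode 'query=sum(rate(request_total{job="linkerd-proxy"}[1m])) by (deployment)'
```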
For more information on specific metric and label definitions, have a look at
[Proxy Metrics](/proxy-metrics),


@@ -5,7 +5,7 @@ docpage = true
parent = "proxy-metrics"
+++
The Linkerd proxy exposes metrics that describe the traffic flowing through the
proxy. The following metrics are available at `/metrics` on the proxy's metrics
port (default: `:4191`) in the [Prometheus format][prom-format]:
@@ -24,11 +24,12 @@ incremented when the response stream ends.
### `response_latency_ms`
A histogram of response latencies. This measurement reflects the
[time-to-first-byte][ttfb] (TTFB) by recording the elapsed time between the
proxy processing a request's headers and the first data frame of the response.
If a response does not include any data, the end-of-stream event is used. The
TTFB measurement is used so that Linkerd accurately reflects application
behavior when a server provides response headers immediately but is slow to
begin serving the response body.
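To eyeball these metrics for a single proxy, you can port-forward to its metrics port (the pod name and namespace are illustrative):
```bash
# Forward the proxy's metrics port locally and dump the first few series.
kubectl -n emojivoto port-forward vote-bot-5b7f5657f6-xbjjw 4191:4191 &
curl -s http://localhost:4191/metrics | head -n 20
```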
Note that latency measurements are not exported to Prometheus until the stream
_completes_. This is necessary so that latencies can be labeled with the appropriate
@@ -81,7 +82,7 @@ The following labels are added by the Prometheus collector.
* `instance`: ip:port of the pod.
* `job`: The Prometheus job responsible for the collection, typically
`linkerd-proxy`.
#### Kubernetes labels added at collection time
@@ -96,11 +97,11 @@ Prometheus labels.
approximates a pod's `ReplicaSet` or
`ReplicationController`.
#### Linkerd labels added at collection time
Kubernetes labels prefixed with `linkerd.io/` are added to your application at
`linkerd inject` time. More specifically, Kubernetes labels prefixed with
`linkerd.io/proxy-*` will correspond to these Prometheus labels:
* `daemon_set`: The daemon set that the pod belongs to (if applicable).
* `deployment`: The deployment that the pod belongs to (if applicable).
@@ -119,25 +120,25 @@ name: vote-bot-5b7f5657f6-xbjjw
namespace: emojivoto
labels:
  app: vote-bot
  linkerd.io/control-plane-ns: linkerd
  linkerd.io/proxy-deployment: vote-bot
  pod-template-hash: "3957278789"
  test: vote-bot-test
```
The resulting Prometheus labels will look like this:
```bash
request_total{
  pod="vote-bot-5b7f5657f6-xbjjw",
  namespace="emojivoto",
  app="vote-bot",
  linkerd_io_control_plane_ns="linkerd",
  deployment="vote-bot",
  pod_template_hash="3957278789",
  test="vote-bot-test",
  instance="10.1.3.93:4191",
  job="linkerd-proxy"
}
```
@@ -189,11 +190,11 @@ are also added to transport-level metrics, when applicable.
### Connection Close Labels
The following labels are added only to metrics which are updated when a
connection closes (`tcp_close_total` and `tcp_connection_duration_ms`):
* `classification`: `success` if the connection terminated cleanly, `failure` if
the connection closed due to a connection failure.
[prom-format]: https://prometheus.io/docs/instrumenting/exposition_formats/#format-version-0.0.4
[pod-template-hash]: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#pod-template-hash-label