Move Conduit documentation sources to Conduit repo. (#418)

The Markdown files were all originally named "$x/_index.md"; I renamed
them as follows:

```
for x in `ls ~/conduit-site/conduit.io/content`; do
    cp ~/conduit-site/conduit.io/content/$x/_index.md doc/$x.md
done
mv doc/doc.md doc/overview.md
```

When we publish the files on conduit.io we need to do the inverse
transformation to avoid breaking existing links.
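A minimal sketch of that inverse transformation (assuming the same layout as
above; images are handled separately):

```
for x in doc/*.md; do
    name=$(basename "$x" .md)
    [ "$name" = "README" ] && continue      # README.md stays GitHub-only
    [ "$name" = "overview" ] && name="doc"  # overview.md was doc/_index.md
    mkdir -p ~/conduit-site/conduit.io/content/$name
    cp "$x" ~/conduit-site/conduit.io/content/$name/_index.md
done
```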

The images were embedded using a syntax GitHub doesn't support. Also, the
images were not originally in a subdirectory of doc/.

Use normal Markdown syntax for image embedding, and reference the images
via relative links to the images/ subdirectory. This way they will show
up in the GitHub UI. When we publish the docs on conduit.io we'll need to
figure out how to deal with this change.
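For example, the embedding syntax the docs now use, which GitHub renders
directly:

```
![](images/dashboard-data-plane.png "conduit dashboard")
```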

I took the liberty of renaming data-plane.png to dashboard-data-plane.png to
clarify it a bit.

There is no other roadmap, so there's no need to qualify this one as
"public." Before the roadmap was made public, we marked it "public" to
emphasize that it would become public, but that isn't needed now.

Signed-off-by: Brian Smith <brian@briansmith.org>
Brian Smith <brian@briansmith.org>, 2018-02-28 13:39:28 -10:00, committed via GitHub
parent 42ee56d1af
commit 82bbcbd137
11 changed files with 759 additions and 0 deletions

doc/README.md Executable file (+5)

@@ -0,0 +1,5 @@
The documentation here is published at https://conduit.io/docs/ when each
version is released. The version of the documentation in GitHub should
always match the code in the same branch. For example, when a pull request
changes the behavior of a documented feature, the pull request should change
the documentation here to match the new behavior.

doc/adding-your-service.md Executable file (+78)

@@ -0,0 +1,78 @@
+++
title = "Adding your service to the mesh"
docpage = true
[menu.docs]
parent = "adding-your-service"
+++
In order for your service to take advantage of Conduit, it needs to be added
to the service mesh. This is done by using the Conduit CLI to add the Conduit
proxy sidecar to each pod. Because this is done as a rolling update, your
application's availability is not affected.
## Prerequisites
* Applications that use WebSockets or HTTP tunneling/proxying (the HTTP
`CONNECT` method), as well as plaintext MySQL, SMTP, or other protocols where
the server sends data before the client does, require additional configuration.
See the [Protocol Support](#protocol-support) section below.
* gRPC applications that use grpc-go must use grpc-go version 1.3 or later due
to a [bug](https://github.com/grpc/grpc-go/issues/1120) in earlier versions.
* Conduit doesn't yet support external DNS lookups (e.g. proxying a call to a
third-party API). This will be addressed in an [upcoming
release](https://github.com/runconduit/conduit/issues/155).
## Adding your service
### To add your service to the service mesh, run:
#### `conduit inject deployment.yml | kubectl apply -f -`
`deployment.yml` is the Kubernetes config file containing your
application. This will trigger a rolling update of your deployment, replacing
each pod with a new one that additionally contains the Conduit sidecar proxy.
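If you'd like to review the change before rolling it out, one possible approach
is to run the injection and the apply as separate steps (the file names here
are illustrative):

```
conduit inject deployment.yml > deployment-injected.yml
diff deployment.yml deployment-injected.yml   # review the added proxy sidecar
kubectl apply -f deployment-injected.yml
```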
You will know that your service has been successfully added to the service mesh
if its proxy status is green in the Conduit dashboard.
![](images/dashboard-data-plane.png "conduit dashboard")
### You can always get to the Conduit dashboard by running:
#### `conduit dashboard`
## Protocol Support
Conduit supports most applications without requiring any configuration on your
part. To accomplish this, Conduit automatically detects the protocol used on
each connection. In some cases, however, Conduit's protocol detection can't be
fully automated and requires some configuration from you.
### HTTP Tunneling and WebSockets
Most HTTP traffic (including HTTP/2) will be handled automatically and
transparently by Conduit without any configuration on your part. However,
non-HTTPS WebSockets and HTTP tunneling/proxying (use of the HTTP `CONNECT`
method) currently require manual configuration to disable the layer 7 features
for those connections. For pods that accept incoming `CONNECT` requests and/or
incoming WebSocket connections, use the `--skip-inbound-ports` flag when running
`conduit inject`. For pods that make outgoing `CONNECT` requests and/or outgoing
WebSocket connections, use the `--skip-outbound-ports` flag when running
`conduit inject`. (Automatic transparent proxying of WebSockets will be
implemented in a [future release](https://github.com/runconduit/conduit/issues/195).)
### For example, to allow inbound traffic on ports 80 and 7777 to bypass the proxy, use the command:
#### `conduit inject deployment.yml --skip-inbound-ports=80,7777 | kubectl apply -f -`
### MySQL and SMTP
Most non-HTTP traffic will also be handled automatically and transparently by
Conduit without any configuration on your part. However, for protocols where the
server sends data before the client sends, e.g. MySQL and SMTP connections that
aren't protected by TLS, Conduit currently requires some manual configuration.
In such cases, use the `--skip-inbound-ports` flag when running `conduit
inject`. For pods that make outgoing connections using such protocols, use the
`--skip-outbound-ports` flag when running `conduit inject`. (Note that this
applies only to non-TLS'd connections; connections with TLS enabled do not
require any additional configuration irrespective of protocol.)
### For example, to allow outbound traffic to port 3306 (MySQL) to bypass the proxy, use the command:
#### `conduit inject deployment.yml --skip-outbound-ports=3306 | kubectl apply -f -`

doc/debugging-an-app.md Executable file (+105)

@@ -0,0 +1,105 @@
+++
title = "Example: debugging an app"
docpage = true
[menu.docs]
parent = "debugging-an-app"
+++
This section assumes you've followed the steps in the [Getting
Started](/getting-started) guide and have Conduit and the demo application
running in some flavor of Kubernetes cluster.
## Using Conduit to debug a failing service 💻🔥
Now that we have Conduit and the demo application [up and
running](/getting-started), let's use Conduit to diagnose issues.
First, let's use the `conduit stat` command to get an overview of deployment
health:
#### `conduit stat deployments`
### Your results will be something like:
```
NAME               REQUEST_RATE   SUCCESS_RATE   P50_LATENCY   P99_LATENCY
emojivoto/emoji    2.0rps         100.00%        0ms           0ms
emojivoto/voting   0.6rps         66.67%         0ms           0ms
emojivoto/web      2.0rps         95.00%         0ms           0ms
```
We can see that the `voting` service is performing far worse than the others.
How do we figure out what's going on? Our traditional options are looking at
the logs, attaching a debugger, and so on. Conduit gives us a new tool: a live
view of traffic going through the deployment. Let's use the `tap` command to
take a look at requests currently flowing through this deployment.
#### `conduit tap deploy emojivoto/voting`
This gives us a lot of requests:
```
req id=0:458 src=172.17.0.9:45244 dst=172.17.0.8:8080 :method=POST :authority=voting-svc.emojivoto:8080 :path=/emojivoto.v1.VotingService/VoteGhost
rsp id=0:458 src=172.17.0.9:45244 dst=172.17.0.8:8080 :status=200 latency=758µs
end id=0:458 src=172.17.0.9:45244 dst=172.17.0.8:8080 grpc-status=OK duration=9µs response-length=5B
req id=0:459 src=172.17.0.9:45244 dst=172.17.0.8:8080 :method=POST :authority=voting-svc.emojivoto:8080 :path=/emojivoto.v1.VotingService/VoteDoughnut
rsp id=0:459 src=172.17.0.9:45244 dst=172.17.0.8:8080 :status=200 latency=987µs
end id=0:459 src=172.17.0.9:45244 dst=172.17.0.8:8080 grpc-status=OK duration=9µs response-length=5B
req id=0:460 src=172.17.0.9:45244 dst=172.17.0.8:8080 :method=POST :authority=voting-svc.emojivoto:8080 :path=/emojivoto.v1.VotingService/VoteBurrito
rsp id=0:460 src=172.17.0.9:45244 dst=172.17.0.8:8080 :status=200 latency=767µs
end id=0:460 src=172.17.0.9:45244 dst=172.17.0.8:8080 grpc-status=OK duration=18µs response-length=5B
req id=0:461 src=172.17.0.9:45244 dst=172.17.0.8:8080 :method=POST :authority=voting-svc.emojivoto:8080 :path=/emojivoto.v1.VotingService/VoteDog
rsp id=0:461 src=172.17.0.9:45244 dst=172.17.0.8:8080 :status=200 latency=693µs
end id=0:461 src=172.17.0.9:45244 dst=172.17.0.8:8080 grpc-status=OK duration=10µs response-length=5B
req id=0:462 src=172.17.0.9:45244 dst=172.17.0.8:8080 :method=POST :authority=voting-svc.emojivoto:8080 :path=/emojivoto.v1.VotingService/VotePoop
```
Let's see if we can narrow down what we're looking at. We can see a few
`grpc-status=Unknown`s in these logs. This is gRPC's way of indicating failed
requests.
To figure out where those are coming from, let's run the `tap` command again
and grep the output for `Unknown`:
#### `conduit tap deploy emojivoto/voting | grep Unknown -B 2`
```
req id=0:212 src=172.17.0.8:58326 dst=172.17.0.10:8080 :method=POST :authority=voting-svc.emojivoto:8080 :path=/emojivoto.v1.VotingService/VotePoop
rsp id=0:212 src=172.17.0.8:58326 dst=172.17.0.10:8080 :status=200 latency=360µs
end id=0:212 src=172.17.0.8:58326 dst=172.17.0.10:8080 grpc-status=Unknown duration=0µs response-length=0B
--
req id=0:215 src=172.17.0.8:58326 dst=172.17.0.10:8080 :method=POST :authority=voting-svc.emojivoto:8080 :path=/emojivoto.v1.VotingService/VotePoop
rsp id=0:215 src=172.17.0.8:58326 dst=172.17.0.10:8080 :status=200 latency=414µs
end id=0:215 src=172.17.0.8:58326 dst=172.17.0.10:8080 grpc-status=Unknown duration=0µs response-length=0B
--
```
We can see that all of the `grpc-status=Unknown`s are coming from the `VotePoop`
endpoint. Let's use the `tap` command's flags to narrow down our output to just
this endpoint:
#### `conduit tap deploy emojivoto/voting --path /emojivoto.v1.VotingService/VotePoop`
```
req id=0:264 src=172.17.0.8:58326 dst=172.17.0.10:8080 :method=POST :authority=voting-svc.emojivoto:8080 :path=/emojivoto.v1.VotingService/VotePoop
rsp id=0:264 src=172.17.0.8:58326 dst=172.17.0.10:8080 :status=200 latency=696µs
end id=0:264 src=172.17.0.8:58326 dst=172.17.0.10:8080 grpc-status=Unknown duration=0µs response-length=0B
req id=0:266 src=172.17.0.8:58326 dst=172.17.0.10:8080 :method=POST :authority=voting-svc.emojivoto:8080 :path=/emojivoto.v1.VotingService/VotePoop
rsp id=0:266 src=172.17.0.8:58326 dst=172.17.0.10:8080 :status=200 latency=667µs
end id=0:266 src=172.17.0.8:58326 dst=172.17.0.10:8080 grpc-status=Unknown duration=0µs response-length=0B
req id=0:270 src=172.17.0.8:58326 dst=172.17.0.10:8080 :method=POST :authority=voting-svc.emojivoto:8080 :path=/emojivoto.v1.VotingService/VotePoop
rsp id=0:270 src=172.17.0.8:58326 dst=172.17.0.10:8080 :status=200 latency=346µs
end id=0:270 src=172.17.0.8:58326 dst=172.17.0.10:8080 grpc-status=Unknown duration=0µs response-length=0B
```
We can see that none of our `VotePoop` requests are successful. What happens
when we try to vote for 💩 ourselves in the UI? Follow the instructions in
[Step Five](/getting-started/#step-five) to open the demo app.
Now click on the 💩 emoji to vote on it.
![](images/emojivoto-poop.png "Demo application 💩 page")
Oh! The demo application is intentionally returning errors for all requests to
vote for 💩. We've found where the errors are coming from. At this point, we
can start diving into the logs or code for our failing service. In future
versions of Conduit, we'll even be able to apply routing rules to change what
happens when this endpoint is called.

doc/get-involved.md Executable file (+18)

@@ -0,0 +1,18 @@
+++
title = "Get involved"
docpage = true
[menu.docs]
parent = "get-involved"
+++
We're really excited to welcome contributors to [Conduit](https://github.com/runconduit/conduit)!
There are several ways to get involved:
- Conduit on [GitHub](https://github.com/runconduit/conduit)
- Join us on the #conduit channel in [Linkerd slack](https://slack.linkerd.io/)
- Join the mailing lists
- Users list: [conduit-users@googlegroups.com](https://groups.google.com/forum/#!forum/conduit-users)
- Developers list: [conduit-dev@googlegroups.com](https://groups.google.com/forum/#!forum/conduit-dev)
- Announcements: [conduit-announce@googlegroups.com](https://groups.google.com/forum/#!forum/conduit-announce)

doc/getting-started.md Executable file (+313)

@@ -0,0 +1,313 @@
+++
title = "Getting started"
docpage = true
[menu.docs]
parent = "getting-started"
+++
Conduit has two basic components: a *data plane* comprised of lightweight
proxies, which are deployed as sidecar containers alongside your service code,
and a *control plane* of processes that coordinate and manage these proxies.
Humans interact with the service mesh via a command-line interface (CLI) or
a web app that you use to control the cluster.
In this guide, we'll walk you through how to deploy Conduit on your Kubernetes
cluster, and how to set up a sample gRPC application.
Afterwards, check out the [Using Conduit to debug a service](/debugging-an-app) page,
where we'll walk you through how to use Conduit to investigate poorly performing services.
> Note that Conduit v{{% latestversion %}} is an alpha release. Conduit will
automatically work for most protocols. However, applications that use
WebSockets, HTTP tunneling/proxying, or protocols such as MySQL and SMTP, will
require some additional configuration. See [Adding your service to the
mesh](/adding-your-service) for details.
____
##### STEP ONE
## Set up 🌟
First, you'll need a Kubernetes cluster running 1.8 or later, and a functioning
`kubectl` command on your local machine.
To run Kubernetes on your local machine, we suggest
<a href="https://kubernetes.io/docs/tasks/tools/install-minikube/" target="_blank">Minikube</a>
--- running version 0.24.1 or later.
### When ready, make sure you're running the latest version of Kubernetes with:
#### `kubectl version --short`
### Which should display:
```
Client Version: v1.8.3
Server Version: v1.8.0
```
Confirm that both `Client Version` and `Server Version` are v1.8.0 or greater.
If not, or if `kubectl` displays an error message, your Kubernetes cluster may
not exist or may not be set up correctly.
___
##### STEP TWO
## Install the CLI 💻
If this is your first time running
Conduit, you'll need to download the command-line interface (CLI) onto your
local machine. You'll then use this CLI to install Conduit on a Kubernetes
cluster.
### To install the CLI, run:
#### `curl https://run.conduit.io/install | sh`
### Which should display:
```
Downloading conduit-{{% latestversion %}}-macos...
Conduit was successfully installed 🎉
Copy $HOME/.conduit/bin/conduit into your PATH. Then run
conduit install | kubectl apply -f -
to deploy Conduit to Kubernetes. Once deployed, run
conduit dashboard
to view the Conduit UI.
Visit conduit.io for more information.
```
> Alternatively, you can download the CLI directly via the
[Conduit releases page](https://github.com/runconduit/conduit/releases/v{{% latestversion %}}).
### Next, add conduit to your PATH with:
#### `export PATH=$PATH:$HOME/.conduit/bin`
### Verify the CLI is installed and running correctly with:
#### `conduit version`
### Which should display:
```
Client version: v{{% latestversion %}}
Server version: unavailable
```
If the server version is `unavailable`, don't worry: we haven't added the control plane... yet.
___
##### STEP THREE
## Install Conduit onto the cluster 😎
Now that you have the CLI running locally, it's time to install the Conduit control plane
onto your Kubernetes cluster. Don't worry if you already have things running on this
cluster---the control plane will be installed in a separate `conduit` namespace, where
it can easily be removed.
### To install conduit into your environment, run the following commands.
<main>
<input id="tab1" type="radio" name="tabs" checked>
<label for="tab1">Standard</label>
<input id="tab2" type="radio" name="tabs">
<label for="tab2">GKE</label>
<div class="first-tab">
<h4 class="minikube">
<code>conduit install | kubectl apply -f -</code>
</h4>
</div>
<div class="second-tab">
<p>First run:</p>
<h4 class="kubernetes">
<code>kubectl create clusterrolebinding cluster-admin-binding-$USER
--clusterrole=cluster-admin --user=$(gcloud config get-value account)</code>
</h4>
<blockquote>
If you are using GKE with RBAC enabled, you must grant a <code>ClusterRole</code> of <code>cluster-admin</code>
to your Google Cloud account first, in order to install certain telemetry features in the control plane.
</blockquote>
<blockquote>
Note that the <code>$USER</code> environment variable should be the username of your
Google Cloud account.
</blockquote>
<p style="margin-top: 1rem;">Then run:</p>
<h4>
<code>conduit install | kubectl apply -f -</code>
</h4>
</div>
</main>
### Which should display:
```
namespace "conduit" created
serviceaccount "conduit-controller" created
clusterrole "conduit-controller" created
clusterrolebinding "conduit-controller" created
serviceaccount "conduit-prometheus" created
clusterrole "conduit-prometheus" created
clusterrolebinding "conduit-prometheus" created
service "api" created
service "proxy-api" created
deployment "controller" created
service "web" created
deployment "web" created
service "prometheus" created
deployment "prometheus" created
configmap "prometheus-config" created
```
### To verify the Conduit server version is v{{% latestversion %}}, run:
#### `conduit version`
### Which should display:
```
Client version: v{{% latestversion %}}
Server version: v{{% latestversion %}}
```
### Now, to view the control plane locally, run:
#### `conduit dashboard`
The `conduit install` command generates a Kubernetes config and pipes it to
`kubectl`, which applies that config to your Kubernetes cluster.
If you see something like the dashboard below, Conduit is now running on your cluster. 🎉
![](images/dashboard.png "An example of the empty conduit dashboard")
Of course, you haven't actually added any services to the mesh yet,
so the dashboard won't have much to display beyond the status of the service mesh itself.
___
##### STEP FOUR
## Install the demo app 🚀
Finally, it's time to install a demo application and add it to the service mesh.
<a href="http://emoji.voto/" class="button" target="_blank">See a live version of the demo app</a>
### To install this demo locally and add it to Conduit, run:
#### `curl https://raw.githubusercontent.com/runconduit/conduit-examples/master/emojivoto/emojivoto.yml | conduit inject - | kubectl apply -f -`
### Which should display:
```
namespace "emojivoto" created
deployment "emoji" created
service "emoji-svc" created
deployment "voting" created
service "voting-svc" created
deployment "web" created
service "web-svc" created
deployment "vote-bot" created
```
This command downloads the Kubernetes config for an example gRPC application
where users can vote for their favorite emoji, then runs the config through
`conduit inject`. This rewrites the config to insert the Conduit data plane
proxies as sidecar containers in the application pods.
Finally, `kubectl` applies the config to the Kubernetes cluster.
As with `conduit install`, in this command, the Conduit CLI is simply doing text
transformations, with `kubectl` doing the heavy lifting of actually applying
config to the Kubernetes cluster. This way, you can introduce additional filters
into the pipeline, or run the commands separately and inspect the output of each
one.
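For example, a sketch of running the stages separately (the file names here
are illustrative):

```
curl https://raw.githubusercontent.com/runconduit/conduit-examples/master/emojivoto/emojivoto.yml > emojivoto.yml
conduit inject emojivoto.yml > emojivoto-injected.yml
# inspect emojivoto-injected.yml to see the added proxy containers, then:
kubectl apply -f emojivoto-injected.yml
```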
At this point, you should have an application running on your Kubernetes
cluster, and (unbeknownst to it!) also added to the Conduit service mesh.
___
##### STEP FIVE
## Watch it run! 👟
If you glance at the Conduit dashboard, you should see all the
HTTP/2 and HTTP/1-speaking services in the demo app show up in the list of
deployments that have been added to the Conduit mesh.
### View the demo app by visiting the web service's public IP:
<main>
<p>Find the public IP by selecting your environment below.</p>
<input id="tab3" type="radio" name="second-tabs" checked>
<label for="tab3">Kubernetes</label>
<input id="tab4" type="radio" name="second-tabs">
<label for="tab4">Minikube</label>
<div class="first-tab">
<h4 class="kubernetes">
<code>kubectl get svc web-svc -n emojivoto -o jsonpath="{.status.loadBalancer.ingress[0].*}"</code>
</h4>
</div>
<div class="second-tab">
<h4 class="minikube">
<code>minikube -n emojivoto service web-svc --url</code>
</h4>
</div>
</main>
Finally, let's take a look back at our dashboard (run `conduit dashboard` if you
haven't already). You should be able to browse all the services that are running
as part of the application to view:
- Success rates
- Request rates
- Latency distribution percentiles
- Upstream and downstream dependencies
As well as various other bits of information about live traffic. Neat, huh?
### Views available in `conduit dashboard`:
### SERVICE MESH
Displays continuous health metrics of the control plane itself, as well as
high-level health metrics of deployments in the data plane.
### DEPLOYMENTS
Lists all deployments by requests, success rate, and latency.
___
## Using the CLI 💻
Of course, the dashboard isn't the only way to inspect what's
happening in the Conduit service mesh. The CLI provides several interesting and
powerful commands that you should experiment with, including `conduit stat` and `conduit tap`.
### To view details per deployment, run:
#### `conduit stat deployments`
### Which should display:
```
NAME               REQUEST_RATE   SUCCESS_RATE   P50_LATENCY   P99_LATENCY
emojivoto/emoji    2.0rps         100.00%        0ms           0ms
emojivoto/voting   0.6rps         66.67%         0ms           0ms
emojivoto/web      2.0rps         95.00%         0ms           0ms
```
&nbsp;
### To see a live pipeline of requests for your application, run:
#### `conduit tap deploy emojivoto/voting`
### Which should display:
```
req id=0:127 src=172.17.0.11:50992 dst=172.17.0.10:8080 :method=POST :authority=voting-svc.emojivoto:8080 :path=/emojivoto.v1.VotingService/VoteManInTuxedo
rsp id=0:127 src=172.17.0.11:50992 dst=172.17.0.10:8080 :status=200 latency=588µs
end id=0:127 src=172.17.0.11:50992 dst=172.17.0.10:8080 grpc-status=OK duration=9µs response-length=5B
req id=0:128 src=172.17.0.11:50992 dst=172.17.0.10:8080 :method=POST :authority=voting-svc.emojivoto:8080 :path=/emojivoto.v1.VotingService/VotePager
rsp id=0:128 src=172.17.0.11:50992 dst=172.17.0.10:8080 :status=200 latency=601µs
end id=0:128 src=172.17.0.11:50992 dst=172.17.0.10:8080 grpc-status=OK duration=11µs response-length=5B
req id=0:129 src=172.17.0.11:50992 dst=172.17.0.10:8080 :method=POST :authority=voting-svc.emojivoto:8080 :path=/emojivoto.v1.VotingService/VotePoop
...
```
___
## That's it! 👏
For more information about Conduit, check out the
[overview doc](/docs) and the [roadmap doc](/roadmap), or hop into the #conduit channel on [the
Linkerd Slack](https://slack.linkerd.io) or browse through the
[Conduit forum](https://discourse.linkerd.io/c/conduit). You can also follow
[@runconduit](https://twitter.com/runconduit) on Twitter.
We're just getting started building Conduit, and we're extremely interested in your feedback!

BIN doc/images/dashboard-data-plane.png Executable file (59 KiB)
BIN doc/images/dashboard.png Executable file (1.0 MiB)
BIN doc/images/emojivoto-poop.png Executable file (31 KiB)

doc/overview.md Executable file (+93)

@@ -0,0 +1,93 @@
+++
title = "Conduit overview"
docpage = true
[menu.docs]
parent = "docs"
+++
Conduit is an ultralight service mesh for Kubernetes. It
makes running services on Kubernetes safer and more reliable by transparently
managing the runtime communication between services. It provides features for
observability, reliability, and security---all without requiring changes to your
code.
In this doc, you'll get a high-level overview of Conduit and how it works. If
you're not familiar with the service mesh model, you may want to first read
William Morgan's overview, [What's a service mesh? And why do I need one?](https://buoyant.io/2017/04/25/whats-a-service-mesh-and-why-do-i-need-one/)
## Conduit's architecture
The Conduit service mesh is deployed on a Kubernetes
cluster as two basic components: a *data plane* and a *control plane*. The data
plane carries the actual application request traffic between service instances.
The control plane drives the data plane and provides APIs for modifying its
behavior (as well as for accessing aggregated metrics). The Conduit CLI and web
UI consume this API and provide ergonomic controls for human beings.
Let's take each of these components in turn.
The Conduit data plane is comprised of lightweight proxies, which are deployed
as sidecar containers alongside each instance of your service code. In order to
“add” a service to the Conduit service mesh, the pods for that service must be
redeployed to include a data plane proxy in each pod. (The `conduit inject`
command accomplishes this, as well as the configuration work necessary to
transparently funnel traffic from each instance through the proxy.)
These proxies transparently intercept communication to and from each pod, and
add features such as retries and timeouts, instrumentation, and encryption
(TLS), as well as allowing and denying requests according to the relevant
policy.
These proxies are not designed to be configured by hand. Rather, their behavior
is driven by the control plane.
The Conduit control plane is a set of services that run in a dedicated
Kubernetes namespace (`conduit` by default). These services accomplish various
things---aggregating telemetry data, providing a user-facing API, providing
control data to the data plane proxies, etc. Together, they drive the behavior
of the data plane.
## Using Conduit
In order to interact with Conduit as a human,
you use the Conduit CLI and the web UI (as well as associated tools like
`kubectl`). The CLI and the web UI drive the control plane via its API, and the
control plane in turn drives the behavior of the data plane.
The control plane API is designed to be generic enough that other tooling can be
built on top of it. For example, you may wish to additionally drive the API from
a CI/CD system.
A brief overview of the CLI's functionality can be seen by running `conduit
--help`.
## Conduit with Kubernetes
Conduit is designed to fit seamlessly into an
existing Kubernetes system. This design has several important features.
First, the Conduit CLI (`conduit`) is designed to be used in conjunction with
`kubectl` whenever possible. For example, `conduit install` and `conduit
inject` generate Kubernetes configurations designed to be fed directly into
`kubectl`. This provides a clear division of labor between the service mesh
and the orchestrator, and makes it easier to fit Conduit into existing
Kubernetes workflows.
Second, Conduit's core noun in Kubernetes is the Deployment, not the Service.
For example, `conduit inject` adds a Deployment; the Conduit web UI displays
Deployments; aggregated performance metrics are given per Deployment. This is
because individual pods can be a part of arbitrary numbers of Services, which
can lead to complex mappings between traffic flow and pods. Deployments, by
contrast, require that a single pod be a part of at most one Deployment. By
building on Deployments rather than Services, the mapping between traffic and
pod is always clear.
These two design features compose nicely. For example, `conduit inject` can be
used on a live Deployment, as Kubernetes rolls pods to include the data plane
proxy as it updates the Deployment.
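For instance, a hypothetical one-liner that injects the proxy into an
already-running Deployment (the names assume the demo app from the
getting-started guide):

```
kubectl get deploy/web -n emojivoto -o yaml | conduit inject - | kubectl apply -f -
```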
## Extending Conduit's Behavior
The Conduit control plane also provides a convenient place for custom
functionality to be built. While the initial release of Conduit does not support
this yet, in the near future you'll be able to extend Conduit's functionality
by writing gRPC plugins that run as part of the control plane, without needing
to recompile Conduit.

doc/prometheus.md Executable file (+36)

@@ -0,0 +1,36 @@
+++
title = "Exporting metrics to Prometheus"
weight = 5
docpage = true
[menu.docs]
parent = "prometheus"
+++
If you have an existing Prometheus cluster, it is easy to export Conduit's
rich telemetry data to it. Simply add the following item to the
`scrape_configs` section of your Prometheus config file:
```yaml
- job_name: 'conduit'
  kubernetes_sd_configs:
  - role: pod
    namespaces:
      # Replace this with the namespace that Conduit is running in
      names: ['conduit']
  relabel_configs:
  - source_labels: [__meta_kubernetes_pod_container_port_name]
    action: keep
    regex: ^admin-http$
```
That's it! Your Prometheus cluster is now configured to scrape Conduit's
metrics. Conduit's metrics will have the label `job="conduit"` and include:
* `requests_total`: Total number of requests
* `responses_total`: Total number of responses
* `response_latency_ms`: Response latency in milliseconds
All metrics include the following labels:
* `source_deployment`: The deployment (or replicaset, job, etc.) that sent the request
* `target_deployment`: The deployment (or replicaset, job, etc.) that received the request
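As a quick sanity check once metrics are flowing, a sample PromQL query using
only the names above (a sketch):

```
# per-deployment request rate over the last minute
sum(rate(requests_total{job="conduit"}[1m])) by (target_deployment)
```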

doc/roadmap.md Executable file (+111)

@@ -0,0 +1,111 @@
+++
title = "Conduit roadmap"
docpage = true
[menu.docs]
parent = "roadmap"
+++
This is the planned roadmap for Conduit. Of course, as with any software
project (especially an open source one), even the best of plans can change
rapidly as development progresses.
Our goal is to get Conduit to production-readiness as rapidly as possible with a minimal
feature set, then to build functionality out from there. We'll make alpha / beta / GA
designations based on actual community usage, and generally will err on the side of being
overly conservative.
##### Status: alpha
## [0.3: Telemetry Stability](https://github.com/runconduit/conduit/milestone/5)
#### Late February 2018
### Visibility
- Stable, automatic top-line metrics for small-scale clusters.
### Usability
- Routing to external DNS names
### Reliability
- Least-loaded L7 load balancing
- Improved error handling
- Improved egress support
### Development
- Published (this) roadmap
- All milestones, issues, PRs, & mailing lists made public
## [0.4: Automatic TLS; Prometheus++](https://github.com/runconduit/conduit/milestone/6)
#### Late March 2018
### Usability
- Helm integration
- Mutating webhook admission controller
### Security
- Self-bootstrapping Certificate Authority
- Secured communication to and within the Conduit control plane
- Automatically provide all meshed services with cryptographic identity
- Automatically secure all meshed communication
### Visibility
- Enhanced server-side metrics, including per-path and per-status-code counts & latencies.
- Client-side metrics to surface egress traffic, etc.
### Reliability
- Latency-aware load balancing
## [0.5: Controllable Deadlines/Timeouts](https://github.com/runconduit/conduit/milestone/7)
#### Early April 2018
### Reliability
- Controllable latency objectives to configure timeouts
- Controllable response classes to inform circuit breaking, retryability, & success rate calculation
- High-availability controller
### Visibility
- OpenTracing integration
### Security
- Mutual authentication
- Key rotation
## [0.6: Controllable Response Classification & Retries](https://github.com/runconduit/conduit/milestone/8)
#### Late April 2018
### Reliability
- Automatic alerting for latency & success objectives
- Controllable retry policies
### Routing
- Rich ingress routing
- Contextual route overrides
### Security
- Authorization policy
## And Beyond:
- Controller policy plugins
- Support for non-Kubernetes services
- Failure injection (aka "chaos chihuahua")
- Speculative retries
- Dark traffic
- gRPC payload-aware `tap`
- Automated red-line testing