+ If you are using GKE with RBAC enabled, you must first grant your Google Cloud
+ account the cluster-admin ClusterRole in order to install certain telemetry features in the control plane.
+
+
+ Note that the $USER environment variable should be set to the username of your
+ Google Cloud account.
+
+
Then run:
+
+ conduit install | kubectl apply -f -
+
+
+
+
+### Which should display:
+```
+namespace "conduit" created
+serviceaccount "conduit-controller" created
+clusterrole "conduit-controller" created
+clusterrolebinding "conduit-controller" created
+serviceaccount "conduit-prometheus" created
+clusterrole "conduit-prometheus" created
+clusterrolebinding "conduit-prometheus" created
+service "api" created
+service "proxy-api" created
+deployment "controller" created
+service "web" created
+deployment "web" created
+service "prometheus" created
+deployment "prometheus" created
+configmap "prometheus-config" created
+```
+
+### To verify the Conduit server version is v{{% latestversion %}}, run:
+#### `conduit version`
+
+### Which should display:
+```
+Client version: v{{% latestversion %}}
+Server version: v{{% latestversion %}}
+```
+
+### Now, to view the control plane locally, run:
+#### `conduit dashboard`
+
+The `conduit install` command above generates a Kubernetes config and pipes it to
+`kubectl`, which then applies the config to your Kubernetes cluster.
+
+If you see something like the following, Conduit is now running on your cluster.
+
+
+
+Of course, you haven't actually added any services to the mesh yet,
+so the dashboard won't have much to display beyond the status of the service mesh itself.
+
+___
+##### STEP FOUR
+## Install the demo app
+Finally, it's time to install a demo application and add it to the service mesh.
+
+See a live version of the demo app
+
+### To install a local version of this demo app and add it to Conduit, run:
+
+#### `curl https://raw.githubusercontent.com/runconduit/conduit-examples/master/emojivoto/emojivoto.yml | conduit inject - | kubectl apply -f -`
+
+### Which should display:
+```
+namespace "emojivoto" created
+deployment "emoji" created
+service "emoji-svc" created
+deployment "voting" created
+service "voting-svc" created
+deployment "web" created
+service "web-svc" created
+deployment "vote-bot" created
+```
+
+This command downloads the Kubernetes config for an example gRPC application
+where users can vote for their favorite emoji, then runs the config through
+`conduit inject`. This rewrites the config to insert the Conduit data plane
+proxies as sidecar containers in the application pods.
+
+Finally, `kubectl` applies the config to the Kubernetes cluster.
+
+As with `conduit install`, in this command, the Conduit CLI is simply doing text
+transformations, with `kubectl` doing the heavy lifting of actually applying
+config to the Kubernetes cluster. This way, you can introduce additional filters
+into the pipeline, or run the commands separately and inspect the output of each
+one.
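Conceptually, the injection step is a pure transformation on the config: read a Deployment, append a sidecar container, write it back out. Here is a minimal Python sketch of that idea (purely illustrative; the container name and image below are made up, and the real `conduit inject` also adds init containers and proxy configuration):

```python
import copy

# Hypothetical sidecar spec -- the real proxy has its own image, ports,
# and environment, none of which are shown here.
PROXY_CONTAINER = {"name": "proxy-sidecar", "image": "example/proxy:latest"}

def inject_sidecar(deployment):
    """Return a copy of a Deployment dict with a proxy sidecar appended."""
    out = copy.deepcopy(deployment)
    containers = out["spec"]["template"]["spec"]["containers"]
    containers.append(copy.deepcopy(PROXY_CONTAINER))
    return out

app = {
    "kind": "Deployment",
    "spec": {"template": {"spec": {"containers": [
        {"name": "web", "image": "example/web:latest"},
    ]}}},
}

injected = inject_sidecar(app)
```

Because the transformation never talks to the cluster, it composes with other filters in a pipeline, which is exactly why `conduit inject` can sit between `curl` and `kubectl apply`.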
+
+At this point, you should have an application running on your Kubernetes
+cluster, and (unbeknownst to it!) also added to the Conduit service mesh.
+
+___
+
+##### STEP FIVE
+## Watch it run!
+If you glance at the Conduit dashboard, you should see all the
+HTTP/2 and HTTP/1-speaking services in the demo app show up in the list of
+deployments that have been added to the Conduit mesh.
+
+### View the demo app by visiting the web service's public IP:
+
+
+
Find the public IP by selecting your environment below.
+
+
+
+
+
+
+
+
+
+ kubectl get svc web-svc -n emojivoto -o jsonpath="{.status.loadBalancer.ingress[0].*}"
+
+
+
+
+
+ minikube -n emojivoto service web-svc --url
+
+
+
+
+Finally, let's take a look back at our dashboard (run `conduit dashboard` if you
+haven't already). You should be able to browse all the services that are running
+as part of the application to view:
+
+- Success rates
+- Request rates
+- Latency distribution percentiles
+- Upstream and downstream dependencies
+
+As well as various other bits of information about live traffic. Neat, huh?
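For intuition about the latency percentiles the dashboard reports, here is how a p50 or p99 can be computed from raw latency samples. This is a conceptual sketch using the nearest-rank method, not how Conduit's telemetry pipeline actually aggregates data:

```python
import math

def percentile(samples, p):
    """Nearest-rank percentile of a list of latency samples (in ms)."""
    ranked = sorted(samples)
    # Smallest value with at least p% of the samples at or below it.
    k = max(1, math.ceil(p / 100 * len(ranked)))
    return ranked[k - 1]

latencies_ms = [2, 3, 3, 4, 5, 5, 6, 8, 12, 40]
p50 = percentile(latencies_ms, 50)  # the median
p99 = percentile(latencies_ms, 99)  # the tail
```

Note how a single slow request dominates the p99 while leaving the p50 unchanged, which is why dashboards show both.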
+
+### Views available in `conduit dashboard`:
+
+### SERVICE MESH
+Displays continuous health metrics of the control plane itself, as well as
+high-level health metrics of deployments in the data plane.
+
+### DEPLOYMENTS
+Lists all deployments by requests, success rate, and latency.
+
+___
+
+## Using the CLI
+Of course, the dashboard isn't the only way to inspect what's
+happening in the Conduit service mesh. The CLI provides several interesting and
+powerful commands that you should experiment with, including `conduit stat` and `conduit tap`.
+
+### To view details per deployment, run:
+#### `conduit stat deployments`
+
+### Which should display:
+```
+NAME REQUEST_RATE SUCCESS_RATE P50_LATENCY P99_LATENCY
+emojivoto/emoji 2.0rps 100.00% 0ms 0ms
+emojivoto/voting 0.6rps 66.67% 0ms 0ms
+emojivoto/web 2.0rps 95.00% 0ms 0ms
+```
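The columns in this table boil down to simple arithmetic over counters in a time window. A rough sketch of that arithmetic, with made-up counter values chosen to reproduce the `emojivoto/voting` row above (illustrative only; the real numbers come from Conduit's telemetry):

```python
# Hypothetical counters for one deployment over a 60-second window.
window_seconds = 60
total_requests = 36
successful_responses = 24

request_rate = total_requests / window_seconds          # requests per second
success_rate = successful_responses / total_requests    # fraction of successes

print(f"{request_rate:.1f}rps {success_rate:.2%}")      # prints "0.6rps 66.67%"
```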
+
+
+
+### To see a live pipeline of requests for your application, run:
+#### `conduit tap deploy emojivoto/voting`
+
+### Which should display:
+```
+req id=0:127 src=172.17.0.11:50992 dst=172.17.0.10:8080 :method=POST :authority=voting-svc.emojivoto:8080 :path=/emojivoto.v1.VotingService/VoteManInTuxedo
+rsp id=0:127 src=172.17.0.11:50992 dst=172.17.0.10:8080 :status=200 latency=588µs
+end id=0:127 src=172.17.0.11:50992 dst=172.17.0.10:8080 grpc-status=OK duration=9µs response-length=5B
+req id=0:128 src=172.17.0.11:50992 dst=172.17.0.10:8080 :method=POST :authority=voting-svc.emojivoto:8080 :path=/emojivoto.v1.VotingService/VotePager
+rsp id=0:128 src=172.17.0.11:50992 dst=172.17.0.10:8080 :status=200 latency=601µs
+end id=0:128 src=172.17.0.11:50992 dst=172.17.0.10:8080 grpc-status=OK duration=11µs response-length=5B
+req id=0:129 src=172.17.0.11:50992 dst=172.17.0.10:8080 :method=POST :authority=voting-svc.emojivoto:8080 :path=/emojivoto.v1.VotingService/VotePoop
+...
+```
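Each tap line is an event type followed by space-separated `key=value` pairs, which makes ad-hoc analysis straightforward. A small sketch that parses one of the `rsp` lines above (assuming the format shown; this is not an official parser, and multi-word values would need more care):

```python
def parse_tap_line(line):
    """Split a tap line into its event type and a dict of key=value fields,
    assuming the space-separated format shown above."""
    event, *pairs = line.split()
    fields = dict(pair.split("=", 1) for pair in pairs)
    return event, fields

line = ("rsp id=0:127 src=172.17.0.11:50992 dst=172.17.0.10:8080 "
        ":status=200 latency=588µs")
event, fields = parse_tap_line(line)
```

From here it is a short step to, say, histogramming latencies per `:path` with a few more lines of Python.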
+
+___
+
+## That's it!
+For more information about Conduit, check out the
+[overview doc](/docs) and the [roadmap doc](/roadmap), or hop into the #conduit channel on [the
+Linkerd Slack](https://slack.linkerd.io) or browse through the
+[Conduit forum](https://discourse.linkerd.io/c/conduit). You can also follow
+[@runconduit](https://twitter.com/runconduit) on Twitter.
+We're just getting started building Conduit, and we're extremely interested in your feedback!
diff --git a/doc/images/dashboard-data-plane.png b/doc/images/dashboard-data-plane.png
new file mode 100755
index 000000000..e60e2c69f
Binary files /dev/null and b/doc/images/dashboard-data-plane.png differ
diff --git a/doc/images/dashboard.png b/doc/images/dashboard.png
new file mode 100755
index 000000000..06676a11e
Binary files /dev/null and b/doc/images/dashboard.png differ
diff --git a/doc/images/emojivoto-poop.png b/doc/images/emojivoto-poop.png
new file mode 100755
index 000000000..0fbe107d8
Binary files /dev/null and b/doc/images/emojivoto-poop.png differ
diff --git a/doc/overview.md b/doc/overview.md
new file mode 100755
index 000000000..f22169225
--- /dev/null
+++ b/doc/overview.md
@@ -0,0 +1,93 @@
++++
+title = "Conduit overview"
+docpage = true
+[menu.docs]
+ parent = "docs"
++++
+
+Conduit is an ultralight service mesh for Kubernetes. It
+makes running services on Kubernetes safer and more reliable by transparently
+managing the runtime communication between services. It provides features for
+observability, reliability, and security---all without requiring changes to your
+code.
+
+In this doc, you'll get a high-level overview of Conduit and how it works. If
+you're not familiar with the service mesh model, you may want to first read
+William Morgan's overview, [What's a service mesh? And why do I need one?](https://buoyant.io/2017/04/25/whats-a-service-mesh-and-why-do-i-need-one/)
+
+## Conduit's architecture
+
+The Conduit service mesh is deployed on a Kubernetes
+cluster as two basic components: a *data plane* and a *control plane*. The data
+plane carries the actual application request traffic between service instances.
+The control plane drives the data plane and provides APIs for modifying its
+behavior (as well as for accessing aggregated metrics). The Conduit CLI and web
+UI consume this API and provide ergonomic controls for human beings.
+
+Let's take each of these components in turn.
+
+The Conduit data plane consists of lightweight proxies, which are deployed
+as sidecar containers alongside each instance of your service code. In order to
+"add" a service to the Conduit service mesh, the pods for that service must be
+redeployed to include a data plane proxy in each pod. (The `conduit inject`
+command accomplishes this, as well as the configuration work necessary to
+transparently funnel traffic from each instance through the proxy.)
+
+These proxies transparently intercept communication to and from each pod, and
+add features such as retries and timeouts, instrumentation, and encryption
+(TLS), as well as allowing and denying requests according to the relevant
+policy.
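To make "retries and timeouts" concrete, here is the shape of what a proxy does on behalf of the application, sketched in Python. This is purely conceptual: the function names are invented, and the actual proxy applies these policies transparently at the network layer rather than via a wrapper like this:

```python
import time

def call_with_retries(request_fn, max_attempts=3, timeout_s=1.0):
    """Retry a failed call up to a budget, with a per-attempt deadline
    checked after the call (a real proxy would cancel in-flight work)."""
    last_error = None
    for _ in range(max_attempts):
        start = time.monotonic()
        try:
            result = request_fn()
            if time.monotonic() - start > timeout_s:
                raise TimeoutError("deadline exceeded")
            return result
        except Exception as err:
            last_error = err
    raise last_error

# A flaky upstream that fails twice, then succeeds.
attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("upstream reset")
    return "ok"

response = call_with_retries(flaky)
```

The point is that the application code (`flaky` here) never changes; the reliability behavior lives entirely in the layer wrapped around it.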
+
+These proxies are not designed to be configured by hand. Rather, their behavior
+is driven by the control plane.
+
+The Conduit control plane is a set of services that run in a dedicated
+Kubernetes namespace (`conduit` by default). These services accomplish various
+things---aggregating telemetry data, providing a user-facing API, providing
+control data to the data plane proxies, etc. Together, they drive the behavior
+of the data plane.
+
+## Using Conduit
+
+In order to interact with Conduit as a human,
+you use the Conduit CLI and the web UI (as well as associated tools like
+`kubectl`). The CLI and the web UI drive the control plane via its API, and the
+control plane in turn drives the behavior of the data plane.
+
+The control plane API is designed to be generic enough that other tooling can be
+built on top of it. For example, you may wish to additionally drive the API from
+a CI/CD system.
+
+A brief overview of the CLI's functionality can be seen by running `conduit
+--help`.
+
+## Conduit with Kubernetes
+
+Conduit is designed to fit seamlessly into an
+existing Kubernetes system. This design has several important features.
+
+First, the Conduit CLI (`conduit`) is designed to be used in conjunction with
+`kubectl` whenever possible. For example, `conduit install` and `conduit inject`
+generate Kubernetes configurations designed to be fed directly into `kubectl`.
+This provides a clear division of labor between the service mesh
+and the orchestrator, and makes it easier to fit Conduit into existing
+Kubernetes workflows.
+
+Second, Conduit's core noun in Kubernetes is the Deployment, not the Service.
+For example, `conduit inject` adds a Deployment; the Conduit web UI displays
+Deployments; aggregated performance metrics are given per Deployment. This is
+because individual pods can be a part of arbitrary numbers of Services, which
+can lead to complex mappings between traffic flow and pods. Deployments, by
+contrast, require that a single pod be a part of at most one Deployment. By
+building on Deployments rather than Services, the mapping between traffic and
+pod is always clear.
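The mapping problem described above can be made concrete with a tiny model (the pod labels and Service names here are hypothetical):

```python
# A pod can match the selectors of many Services, but belongs to at most
# one Deployment -- so attributing traffic per Deployment is unambiguous.
pod_labels = {"app": "web", "tier": "frontend"}

service_selectors = {
    "web-svc":      {"app": "web"},
    "frontend-svc": {"tier": "frontend"},
}

def matching_services(labels, selectors):
    """Services whose selector is a subset of the pod's labels."""
    return sorted(
        name for name, sel in selectors.items()
        if all(labels.get(k) == v for k, v in sel.items())
    )

services = matching_services(pod_labels, service_selectors)
# The same pod is selected by two Services, but is owned by exactly
# one Deployment.
```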
+
+These two design features compose nicely. For example, `conduit inject` can be
+used on a live Deployment; as Kubernetes updates the Deployment, it rolls the
+pods to include the data plane proxy.
+
+## Extending Conduit's Behavior
+
+The Conduit control plane also provides a convenient place for custom
+functionality to be built. While the initial release of Conduit does not support
+this yet, in the near future, you'll be able to extend Conduit's functionality
+by writing gRPC plugins that run as part of the control plane, without needing
+to recompile Conduit.
diff --git a/doc/prometheus.md b/doc/prometheus.md
new file mode 100755
index 000000000..edcae61e2
--- /dev/null
+++ b/doc/prometheus.md
@@ -0,0 +1,36 @@
++++
+title = "Exporting metrics to Prometheus"
+weight = 5
+docpage = true
+[menu.docs]
+ parent = "prometheus"
++++
+
+If you have an existing Prometheus cluster, it is very easy to export Conduit's
+rich telemetry data to your cluster. Simply add the following item to your
+`scrape_configs` in your Prometheus config file:
+
+```yaml
+ - job_name: 'conduit'
+ kubernetes_sd_configs:
+ - role: pod
+ namespaces:
+ # Replace this with the namespace that Conduit is running in
+ names: ['conduit']
+ relabel_configs:
+ - source_labels: [__meta_kubernetes_pod_container_port_name]
+ action: keep
+ regex: ^admin-http$
+```
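The `keep` action above retains only discovered targets whose container port name matches the regex, dropping everything else. In rough Python terms, the filtering works like this (a sketch of the semantics, not Prometheus's actual code):

```python
import re

def keep_targets(targets, source_label, regex):
    """Mimic a Prometheus `keep` relabel rule: retain targets whose
    source label fully matches the regex."""
    pattern = re.compile(regex)
    return [t for t in targets if pattern.fullmatch(t.get(source_label, ""))]

discovered = [
    {"__meta_kubernetes_pod_container_port_name": "admin-http"},
    {"__meta_kubernetes_pod_container_port_name": "grpc"},
    {"__meta_kubernetes_pod_container_port_name": ""},
]

scraped = keep_targets(
    discovered, "__meta_kubernetes_pod_container_port_name", "^admin-http$")
```

So only the pods exposing a port literally named `admin-http` -- the Conduit proxies' metrics port -- end up being scraped.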
+
+That's it! Your Prometheus cluster is now configured to scrape Conduit's
+metrics. Conduit's metrics will have the label `job="conduit"` and include:
+
+* `requests_total`: Total number of requests
+* `responses_total`: Total number of responses
+* `response_latency_ms`: Response latency in milliseconds
+
+All metrics include the following labels:
+
+* `source_deployment`: The deployment (or replicaset, job, etc.) that sent the request
+* `target_deployment`: The deployment (or replicaset, job, etc.) that received the request
diff --git a/doc/roadmap.md b/doc/roadmap.md
new file mode 100755
index 000000000..525d558d8
--- /dev/null
+++ b/doc/roadmap.md
@@ -0,0 +1,111 @@
++++
+title = "Conduit roadmap"
+docpage = true
+[menu.docs]
+ parent = "roadmap"
++++
+
+This is the planned roadmap for Conduit. Of course, as with any software project
+(especially an open source one), even the best of plans change rapidly as development progresses.
+
+Our goal is to get Conduit to production-readiness as rapidly as possible with a minimal
+feature set, then to build functionality out from there. We'll make alpha / beta / GA
+designations based on actual community usage, and generally will err on the side of being
+overly conservative.
+
+##### Status: alpha
+## [0.3: Telemetry Stability](https://github.com/runconduit/conduit/milestone/5)
+
+#### Late February 2018
+
+### Visibility
+
+- Stable, automatic top-line metrics for small-scale clusters.
+
+### Usability
+
+- Routing to external DNS names
+
+### Reliability
+
+- Least-loaded L7 load balancing
+- Improved error handling
+- Improved egress support
+
+### Development
+
+- Published (this) roadmap
+- All milestones, issues, PRs, & mailing lists made public
+
+## [0.4: Automatic TLS; Prometheus++](https://github.com/runconduit/conduit/milestone/6)
+
+#### Late March 2018
+
+### Usability
+
+- Helm integration
+- Mutating webhook admission controller
+
+### Security
+
+- Self-bootstrapping Certificate Authority
+- Secured communication to and within the Conduit control plane
+- Automatically provide all meshed services with cryptographic identity
+- Automatically secure all meshed communication
+
+### Visibility
+
+- Enhanced server-side metrics, including per-path and per-status-code counts & latencies.
+- Client-side metrics to surface egress traffic, etc.
+
+### Reliability
+
+- Latency-aware load balancing
+
+## [0.5: Controllable Deadlines/Timeouts](https://github.com/runconduit/conduit/milestone/7)
+
+#### Early April 2018
+
+### Reliability
+
+- Controllable latency objectives to configure timeouts
+- Controllable response classes to inform circuit breaking, retryability, & success rate calculation
+- High-availability controller
+
+### Visibility
+
+- OpenTracing integration
+
+### Security
+
+- Mutual authentication
+- Key rotation
+
+## [0.6: Controllable Response Classification & Retries](https://github.com/runconduit/conduit/milestone/8)
+
+#### Late April 2018
+
+### Reliability
+
+- Automatic alerting for latency & success objectives
+- Controllable retry policies
+
+### Routing
+
+- Rich ingress routing
+- Contextual route overrides
+
+### Security
+
+- Authorization policy
+
+## And Beyond:
+
+- Controller policy plugins
+- Support for non-Kubernetes services
+- Failure injection (aka "chaos chihuahua")
+- Speculative retries
+- Dark traffic
+- gRPC payload-aware `tap`
+- Automated red-line testing
+