# Linkerd2 Development Guide
🎈 Welcome to the Linkerd2 development guide! 👋
This document will help you build and run Linkerd2 from source. More information about testing from source can be found in the TEST.md guide.
## Table of contents
- Repo layout
- Components
- Development configurations
- Dependencies
- Linkerd Helm chart
## Repo layout
Linkerd2 is primarily written in Rust, Go, and React. At its core is a high-performance data plane written in Rust. The control plane components and their extensions are written in Go. The dashboard UI is a React application.
### Control Plane (Go/React)
- `cli`: Command-line `linkerd` utility; view and drive the control plane.
- `controller`
  - `destination`: Accepts requests from `proxy` instances and serves service discovery information.
  - `proxy-injector`: Mutating webhook triggered by pod creation, that injects the proxy container as a sidecar.
  - `identity`: Provides a CA to distribute certificates to proxies so they can establish mTLS connections between them.
- `viz extension`
  - `metrics-api`: Accepts requests from API clients such as cli and web, serving metrics from the proxies in the cluster through Prometheus queries.
  - `tap`: Provides a live pipeline of requests.
  - `tap-injector`: Mutating webhook triggered by pod creation, that injects metadata into the proxy container in order to enable tap.
  - `web`: Provides a UI dashboard to view and drive the control plane.
- `multicluster extension`
  - `linkerd-gateway`: Accepts requests from other clusters and forwards them to the appropriate destination in the local cluster.
  - `linkerd-service-mirror-xxx`: Controller observing the labeling of exported services in the target cluster; for each one it will create a mirrored service in the local cluster.
- `jaeger extension`
  - `jaeger-injector`: Mutating webhook triggered by pod creation, that configures the proxy container so it produces tracing spans.
### Data Plane (Rust)
- `linkerd2-proxy`: Rust source code for the proxy lives in the linkerd2-proxy repo.
- `linkerd2-proxy-api`: Protobuf definitions for the data plane APIs live in the linkerd2-proxy-api repo.
## Components
The following DOT source (`linkerd2_components`) shows how the components interact:

```dot
digraph G {
  rankdir=LR;

  node [style=filled, shape=rect];

  "cli" [color=lightblue];
  "destination" [color=lightblue];
  "identity" [color=lightblue];
  "metrics-api" [color=lightblue];
  "tap" [color=lightblue];
  "web" [color=lightblue];
  "proxy" [color=orange];

  "cli" -> "metrics-api";
  "cli" -> "tap";

  "web" -> "metrics-api";
  "web" -> "tap";
  "web" -> "grafana";

  "metrics-api" -> "prometheus";

  "tap" -> "proxy";

  "proxy" -> "destination";
  "proxy" -> "identity";

  "identity" -> "kubernetes api";
  "destination" -> "kubernetes api";

  "grafana" -> "prometheus";
  "prometheus" -> "proxy";
}
```
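If you have Graphviz installed, you can render the graph locally to preview changes to it (a convenience, not a required step; the `components.dot` filename is arbitrary):

```bash
# save the DOT source above as components.dot, then render it to SVG
dot -Tsvg components.dot > components.svg
```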
## Development configurations
Depending on your use case, there are several configurations with which to develop and run Linkerd2:
- Comprehensive: Integrated configuration using k3d, most closely matches release.
- Web: Development of the Linkerd2 Dashboard.
### Comprehensive
This configuration builds all Linkerd2 components in Docker images, and deploys them onto a k3d cluster. This setup most closely parallels our recommended production installation, documented in Getting Started.
Note that you need to have first installed docker buildx; see the Docker documentation for installation instructions.
```bash
# create the k3d cluster
bin/k3d cluster create

# build all docker images
bin/docker-build

# load all the images into k3d
bin/image-load --k3d

# install linkerd
bin/linkerd install | kubectl apply -f -

# wait for the core components to be ready, then install linkerd-viz
bin/linkerd viz install | kubectl apply -f -

# in order to use `linkerd viz tap` against control plane components, you need
# to restart them (so that the tap-injector enables tap on their proxies)
kubectl -n linkerd rollout restart deploy

# verify cli and server versions
bin/linkerd version

# validate installation
bin/linkerd check --expected-version $(bin/root-tag)

# view linkerd dashboard
bin/linkerd viz dashboard

# install the demo app
curl https://run.linkerd.io/emojivoto.yml | bin/linkerd inject - | kubectl apply -f -

# port-forward the demo app's frontend to see it at http://localhost:8080
kubectl -n emojivoto port-forward svc/web-svc 8080:80

# view details per deployment
bin/linkerd viz -n emojivoto stat deployments

# view a live pipeline of requests
bin/linkerd viz -n emojivoto tap deploy voting
```
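When you are done, or want to start over from a clean slate, the cluster created above can be removed with the same k3d wrapper:

```bash
# tear down the development cluster
bin/k3d cluster delete
```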
#### Deploying Control Plane components with Tracing
Control plane components have a `trace-collector` flag used to enable distributed tracing for development purposes. It can be enabled globally, i.e. for the control plane components and their proxies, by using the `--set controlPlaneTracing=true` installation flag.

This will configure all the components to send their traces to `collector.{{.Values.controlPlaneTracingNamespace}}.svc.{{.Values.ClusterDomain}}:55678`:
```bash
# install Linkerd with tracing
linkerd install --set controlPlaneTracing=true | kubectl apply -f -

# install the Jaeger extension
linkerd jaeger install | kubectl apply -f -

# restart the control plane components so that the jaeger-injector enables
# tracing in their proxies
kubectl -n linkerd rollout restart deploy
```
#### Publishing images
The example above builds and loads the docker images into k3d. To test your built images outside your local environment, you need to publish them so they become accessible in those external environments.

To tell `bin/docker-build`, or any of the more specific `bin/docker-build-*` scripts, which registry to use, set the environment variable `DOCKER_REGISTRY` (which defaults to the official registry, `cr.l5d.io/linkerd`).

After pushing those images through the usual means (`docker push`), you'll have to pass the `--registry` flag to `linkerd install` with a value matching your registry. Extensions don't have that flag; instead, you need to use the equivalent Helm value, e.g. for Viz: `linkerd viz install --set defaultRegistry=...`. A sketch of the full flow follows.
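Putting it together, a typical flow might look like this (a sketch; `ghcr.io/your-org` is a placeholder for your own registry, and the image names to push depend on which components you built):

```bash
# build images tagged for your registry
export DOCKER_REGISTRY=ghcr.io/your-org
bin/docker-build

# push them (repeat per image you need)
docker push ghcr.io/your-org/controller:$(bin/root-tag)

# install pointing at that registry
bin/linkerd install --registry ghcr.io/your-org | kubectl apply -f -
```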
### Go
#### A note about Go run
Our instructions use a `bin/go-run` script in lieu of `go run`. This is a convenience script that leverages caching via `go build` to make your build/run/debug loop faster.
In general, replace commands like this:

```bash
go run cli/main.go check
```

with this:

```bash
bin/go-run cli check
```

That is equivalent to running `linkerd check` using the code on your branch.
#### Lint
To analyze and lint the Go code using `golangci-lint`, run:

```bash
bin/lint
```
#### Formatting
All Go source code is formatted with `goimports`. The version of `goimports` used by this project is specified in `go.mod`. To ensure you have the same version installed, run:

```bash
go install -mod=readonly golang.org/x/tools/cmd/goimports
```

It's recommended that you set your IDE or other development tools to use `goimports`. Formatting is checked during CI by the `bin/fmt` script.
#### Building the CLI for development
The script for building the CLI binaries using docker is `bin/docker-build-cli-bin`. This will also be called indirectly when calling `bin/docker-build`. By default it creates binaries for your current host's OS/arch.

To cross-build targeting a different OS or architecture, set the environment variable `DOCKER_TARGET` according to any of the final stages available in `cli/Dockerfile`.

For local development and a faster edit-build-test cycle, you can build directly without going through a docker container by calling `bin/build-cli-bin`.

If you set the environment variable `LINKERD_LOCAL_BUILD_CLI=1`, then `bin/docker-build` will use this last method for the step that builds the CLI. See the examples below.
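For example (a sketch; the `linux-amd64` stage name is an assumption, so check `cli/Dockerfile` for the exact targets):

```bash
# cross-build the CLI for a different target via docker
DOCKER_TARGET=linux-amd64 bin/docker-build-cli-bin

# or skip docker entirely for a fast local loop
bin/build-cli-bin

# have bin/docker-build use the local method for its CLI step
LINKERD_LOCAL_BUILD_CLI=1 bin/docker-build
```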
#### Running the control plane for development
Linkerd2's control plane is composed of several Go microservices. You can run these components in a Kubernetes cluster, or even locally.
To run an individual component locally, you can use the `go-run` command and pass in valid Kubernetes credentials via the `-kubeconfig` flag. For instance, to run the destination service locally, run:

```bash
bin/go-run controller/cmd destination -kubeconfig ~/.kube/config -log-level debug
```
You can send test requests to the destination service using the `destination-client` in the `controller/script` directory. For instance:

```bash
bin/go-run controller/script/destination-client -path hello.default.svc.cluster.local:80
```
#### Debugging the Tap APIService for development
The Tap APIService is a Kubernetes extension API server, so it can be challenging to run outside the cluster. The most straightforward workflow is to test changes by building and loading the container image as explained in the comprehensive configuration section above (to build just this component, use `bin/docker-build-tap`).
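A typical iteration loop might look like this (a sketch, assuming the k3d cluster from the comprehensive setup above and the default `linkerd-viz` namespace):

```bash
# rebuild only the tap image, reload images into k3d, then restart the component
bin/docker-build-tap
bin/image-load --k3d
kubectl -n linkerd-viz rollout restart deploy/tap
```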
#### Generating CLI docs
The documentation for the CLI tool is partially generated from YAML, which is produced by running the `linkerd doc` command.
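Using the `bin/go-run` wrapper described above, you can run that command against the code on your branch:

```bash
# generate the CLI docs YAML from your branch's code
bin/go-run cli doc
```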
### Web
This is a React app fronting a Go process. It uses webpack to bundle assets, and PostCSS to transform CSS.
These commands assume working Go and Yarn environments.
#### First time setup
1. Install Yarn and use it to install JS dependencies:

   ```bash
   brew install yarn
   bin/web setup
   ```

2. Install Linkerd on a Kubernetes cluster.
#### Run web standalone
```bash
bin/web run
```

The web server will be running on `localhost:7777`.
#### Webpack dev server
To develop with a webpack dev server:
1. Start the development server:

   ```bash
   bin/web dev
   ```

   Note: this will start up:

   - `web` on `:7777`. This is the Go process that serves the dashboard.
   - `webpack-dev-server` on `:8080` to manage rebuilding/reloading of the JavaScript.
   - `metrics-api`, port-forwarded from the Kubernetes cluster via `kubectl`, on `:8085`.

2. Go to http://localhost:7777 to see everything running.
#### JavaScript dependencies
To add a JS dependency:

```bash
cd web/app
yarn add [dep]
```
#### Translations
To add a locale:

```bash
cd web/app
yarn lingui add-locale [locales...] # will create a messages.json file for new locale(s)
```

To extract message keys from existing components:

```bash
cd web/app
yarn lingui extract
...
yarn lingui compile # done automatically in bin/web run
```
Finally, make sure the new locale is also referred to in the following places:

- Under the `lingui` section in `package.json`
- In the `make-plural/plurals` import in `index.js`
- In the `langOptions` object in `index.js`
### Rust
All Rust development happens in the
linkerd2-proxy repo.
#### Docker
The `bin/docker-build-proxy` script builds the proxy by pulling a pre-published proxy binary:

```bash
bin/docker-build-proxy
```
#### Locally built proxy
If you want to deploy a locally built proxy, you can build it in the linkerd2-proxy repo by running:

```bash
DOCKER_TAG=cr.l5d.io/linkerd/proxy:dev make docker
```
Then, in this repo, run:

```bash
./bin/k3d image import cr.l5d.io/linkerd/proxy:dev
```
Now, to make a pod use your image, add the following annotation to it:

```yaml
config.linkerd.io/proxy-version: dev
```
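For example, to try the dev proxy on an existing workload, you can patch the annotation into its pod template, which triggers a rollout with the new proxy version (a sketch using the emojivoto `web` deployment from the comprehensive setup above):

```bash
# add the proxy-version annotation to the pod template and roll the deployment
kubectl -n emojivoto patch deploy web --type merge -p \
  '{"spec":{"template":{"metadata":{"annotations":{"config.linkerd.io/proxy-version":"dev"}}}}}'
```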
## Dependencies
### Updating protobuf dependencies
If you make Protobuf changes, run:

```bash
bin/protoc-go.sh
```
### Updating ServiceProfile generated code
The ServiceProfile client code is generated by `bin/update-codegen.sh`, which depends on the K8s code-generator, which does not yet support Go modules. To re-generate this code, check out this repo into your `GOPATH`:

```bash
go get -u github.com/linkerd/linkerd2
cd $GOPATH/src/github.com/linkerd/linkerd2
bin/update-codegen.sh
```
## Linkerd Helm chart
The Linkerd control plane chart is located in the `charts/linkerd2` folder. The `charts/patch` chart consists of the Linkerd proxy specification, which is used by the proxy injector to inject the proxy container. Both charts depend on the partials subchart, which can be found in the `charts/partials` folder.
Note that the `charts/linkerd2/values.yaml` file contains a placeholder `linkerdVersionValue` that you need to replace with an appropriate string (like `edge-20.2.2`) before proceeding.
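For example (a sketch; any release tag works, `edge-20.2.2` is just the one mentioned above):

```bash
# substitute the placeholder in-place before templating or installing the chart
# (on macOS, use `sed -i ''` instead of `sed -i`)
sed -i 's/linkerdVersionValue/edge-20.2.2/g' charts/linkerd2/values.yaml
```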
During development, please use the `bin/helm` wrapper script to invoke the Helm commands. For example:

```bash
bin/helm install linkerd2 charts/linkerd2
```

This ensures that you use the same Helm version as that of the Linkerd CI system.
For general instructions on how to install the charts, check out the docs on the Linkerd website. You also need to supply or generate your own certificates to use the chart, as explained there; a sketch follows below.
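As a sketch, the certificates can be generated with the smallstep `step` CLI, following the approach described in the Linkerd docs (any CA tooling that produces a trust anchor and an issuer certificate works):

```bash
# trust anchor (root CA)
step certificate create root.linkerd.cluster.local ca.crt ca.key \
  --profile root-ca --no-password --insecure

# issuer certificate and key, signed by the trust anchor
step certificate create identity.linkerd.cluster.local issuer.crt issuer.key \
  --profile intermediate-ca --not-after 8760h --no-password --insecure \
  --ca ca.crt --ca-key ca.key
```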
### Extensions Helm charts
Each extension provides its own chart:

- Viz: `viz/charts/linkerd-viz`
- Multicluster: `multicluster/charts/linkerd-multicluster`
- Jaeger: `jaeger/charts/linkerd-jaeger`
### Making changes to the chart templates
Whenever you make changes to the files under `charts/linkerd2/templates` or its dependency `charts/partials`, make sure to run `bin/helm-build`, which will refresh the dependencies and lint the templates.
### Generating Helm charts docs
Whenever a new chart is created or updated, a README should be generated from the chart's `values.yaml`. This can be done by using the bundled helm-docs binary. To add extra information, such as specific installation instructions, a README template needs to be created; check existing charts for examples.
#### Using helm-docs
Example usage:

```bash
bin/helm-docs
bin/helm-docs --dry-run # prints to the CLI instead
bin/helm-docs --chart-search-root=./charts # sets the search root for charts
bin/helm-docs --template-files=README.md.gotmpl # sets the template file used
```
Note: the tool searches through the current directory and sub-directories by default. For additional information, check out the helm-docs repo.
#### Annotating values.yaml
To allow helm-docs to properly document the values in `values.yaml`, a descriptive comment is required. This can be done in two ways.

Either comment the value directly above with:

```yaml
# -- This is a really nice value
```

where the double dashes automatically annotate the value. Alternatively, explicitly name the value:

```yaml
# global.MyNiceValue -- I really like this value
```
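For instance, a hypothetical `values.yaml` entry using the implicit style could look like this:

```yaml
# -- Number of replicas of the component to deploy
replicas: 1
```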
#### Markdown templates
To accommodate extra data that might not have a proper place in the `values.yaml` file, the corresponding `README.md.gotmpl` can be modified for each chart. This template allows standard markdown syntax as well as the Go templating functions. Check out helm-docs for more info.