
+++
date = "2017-04-19T13:43:54-07:00"
title = "Part IX: gRPC for fun and profit"
description = "As of Linkerd 0.8.5, released earlier this year, Linkerd supports gRPC and HTTP/2!"
weight = 10
draft = true
[menu.docs]
parent = "tutorials"
+++

Author: Risha Mars

As of Linkerd 0.8.5, released earlier this year, Linkerd supports gRPC and HTTP/2! These powerful protocols can provide significant benefits to applications that make use of them. In this post, we'll demonstrate how to use Linkerd with gRPC, allowing applications that speak gRPC to take full advantage of Linkerd's load balancing, service discovery, circuit breaking, and distributed tracing logic.


For this post we'll use our familiar hello world microservice app and configs, which can be found in the linkerd-examples repo (k8s configs here and hello world code here).

The hello world application consists of two components—a hello service which calls a world service to complete a request. hello and world use gRPC to talk to each other. We'll deploy Linkerd as a DaemonSet (so one Linkerd instance per host), and a request from hello to world will look like this:

{{< fig src="/images/tutorials/buoyant-grpc-daemonset-1024x617.png" title="DaemonSet deployment model: one Linkerd per host." >}}

As shown above, when the hello service wants to call world, the request goes through the outgoing router of its host-local Linkerd, which does not send the request directly to the destination world service, but to a Linkerd instance running on the same host as world (on its incoming router). That Linkerd instance then sends the request to the world service on its host. This three-hop model allows Linkerd to decouple the application's protocol from the transport protocol—for example, by wrapping cross-node connections in TLS. (For more on this deployment topology, see Part II of this series, Pods are great until they're not.)


## Trying this at home

Let's see this setup in action! Deploy the hello and world services to the default k8s namespace. These apps rely on the nodeName supplied by the Kubernetes downward API to find Linkerd. To check whether your cluster supports nodeName, you can run this test job:

kubectl apply -f https://raw.githubusercontent.com/linkerd/linkerd-examples/master/k8s-daemonset/k8s/node-name-test.yml

Then look at its logs:

kubectl logs node-name-test

If you see an IP address, great! Go ahead and deploy the hello world app using:

kubectl apply -f https://raw.githubusercontent.com/linkerd/linkerd-examples/master/k8s-daemonset/k8s/hello-world-grpc.yml

If instead you see a “server can't find …” error, deploy the hello-world legacy version that relies on hostIP instead of nodeName:

kubectl apply -f https://raw.githubusercontent.com/linkerd/linkerd-examples/master/k8s-daemonset/k8s/hello-world-grpc-legacy.yml
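For reference, the nodeName lookup in these configs relies on the Kubernetes downward API, which can expose the node's name to a pod as an environment variable. The wiring in the deployment spec looks roughly like this (a sketch of the pattern, not the exact manifest from the repo):

```yaml
# Downward API: inject the name of the node this pod landed on,
# so the app can address its host-local Linkerd instance.
env:
  - name: NODE_NAME
    valueFrom:
      fieldRef:
        fieldPath: spec.nodeName
```

The legacy variant uses the same mechanism with `fieldPath: status.hostIP` instead, for clusters whose API doesn't resolve `spec.nodeName` to a routable address.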

Also deploy Linkerd:

kubectl apply -f https://raw.githubusercontent.com/BuoyantIO/linkerd-examples/master/k8s-daemonset/k8s/linkerd-grpc.yml

Once Kubernetes provisions an external LoadBalancer IP for Linkerd, we can send some test requests! Note that the examples in these blog posts assume k8s is running on GKE (i.e., external load balancer IPs are available and no CNI plugins are being used). Slight modifications may be needed for other environments—see our Flavors of Kubernetes help page for environments like Minikube or CNI configurations with Calico/Weave.

We'll use the helloworld-client provided by the hello world Docker image to send test gRPC requests to our hello world service:

$ L5D_INGRESS_LB=$(kubectl get svc l5d -o jsonpath="{.status.loadBalancer.ingress[0].*}")
$ docker run --rm --entrypoint=helloworld-client buoyantio/helloworld:0.1.3 $L5D_INGRESS_LB:4140
Hello (10.196.1.242) world (10.196.1.243)!!

Or if external load balancer support is unavailable for the cluster, use hostIP:

$ L5D_INGRESS_LB=$(kubectl get po -l app=l5d -o jsonpath="{.items[0].status.hostIP}")
$ docker run --rm --entrypoint=helloworld-client buoyantio/helloworld:0.1.3 $L5D_INGRESS_LB:$(kubectl get svc l5d -o 'jsonpath={.spec.ports[0].nodePort}')
Hello (10.196.1.242) world (10.196.1.243)!!

It works!

We can check out the Linkerd admin dashboard by doing:

$ open http://$L5D_INGRESS_LB:9990 # on OSX

Or using hostIP:

$ open http://$L5D_INGRESS_LB:$(kubectl get svc l5d -o 'jsonpath={.spec.ports[2].nodePort}') # on OSX

And that's it! We now have gRPC services talking to each other, with their HTTP/2 requests being routed through Linkerd. Now we can use all of Linkerd's awesome features, including per-request routing, load balancing, circuit breaking, retries, TLS, distributed tracing, service discovery integration, and more, in our gRPC microservice applications!


## How did we configure Linkerd for gRPC over HTTP/2?

Let's take a step back and examine our config. What's different about using gRPC rather than HTTP/1.1? Actually, not very much! If you compare our Linkerd config for routing gRPC with the config for plain old HTTP/1.1, they're quite similar (full documentation on configuring an HTTP/2 router can be found here).

The changes you'll notice are:

### Protocol

We've changed the router protocol from http to h2 (naturally!) and set the experimental flag to true to opt in to experimental HTTP/2 support.

routers:
- protocol: h2
  experimental: true

### Identifier

We use the header path identifier to assign a logical name based on the gRPC request. gRPC clients set HTTP/2's :path pseudo-header to /package.Service/Method, and the header path identifier builds a logical name from this pseudo-header. Setting segments to 1 means we take only the first segment of the path, dropping the gRPC method: a request to /helloworld.Hello/Greeting is assigned the name /svc/helloworld.Hello. The resulting name can then be transformed via a dtab, where we extract the gRPC service name and route the request to a Kubernetes service of the same name. For more on how Linkerd routes requests, see our routing docs.

identifier:
  kind: io.l5d.header.path
  segments: 1
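As an illustration (this is a model of the behavior, not Linkerd's actual implementation), the truncation performed by segments can be sketched in a few lines of Python:

```python
def path_identifier(path: str, segments: int = 1) -> str:
    """Model of io.l5d.header.path: build a logical name under /svc
    from the first `segments` segments of the :path pseudo-header."""
    parts = [p for p in path.split("/") if p]
    return "/svc/" + "/".join(parts[:segments])

# A gRPC call to the Greeting method of the helloworld.Hello service:
print(path_identifier("/helloworld.Hello/Greeting"))  # /svc/helloworld.Hello
```

With segments set to 2, the method would be kept and the name would be /svc/helloworld.Hello/Greeting instead.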

### Dtab

We've adjusted the dtab slightly, now that we're routing on the /serviceName prefix from the header path identifier. The dtab below transforms the logical name assigned by the path identifier (/svc/helloworld.Hello) into a name that tells the io.l5d.k8s namer to query the API for the grpc port of the hello Service in the default namespace (/#/io.l5d.k8s/default/grpc/Hello).

The domainToPathPfx namer is used to extract the service name from the package-qualified gRPC service name, as seen in the dentry /svc => /$/io.buoyant.http.domainToPathPfx/grpc.

Delegation to world is similar, except that we've decided to version the world service, so we've added the additional rule /grpc/World => /srv/world-v1 to send requests to world-v1.

Our full dtab is now:

/srv        => /#/io.l5d.k8s/default/grpc;
/grpc       => /srv;
/grpc/World => /srv/world-v1;
/svc        => /$/io.buoyant.http.domainToPathPfx/grpc;
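To see how these rules compose, here is a simplified Python model of the rewriting (illustrative only: real Linkerd resolution is recursive and namer-driven, and the io.l5d.k8s lookup happens against the Kubernetes API; this sketch models just prefix substitution, bottom-up dtab precedence, and the label reversal performed by domainToPathPfx):

```python
DTAB = [
    ("/srv",        "/#/io.l5d.k8s/default/grpc"),
    ("/grpc",       "/srv"),
    ("/grpc/World", "/srv/world-v1"),
    ("/svc",        "/$/io.buoyant.http.domainToPathPfx/grpc"),
]

PFX_NAMER = "/$/io.buoyant.http.domainToPathPfx/"

def domain_to_path_pfx(name):
    # /$/io.buoyant.http.domainToPathPfx/<pfx>/<domain>/... ->
    # /<pfx>/<reversed domain labels>/...
    parts = name[len(PFX_NAMER):].split("/")
    pfx, domain, rest = parts[0], parts[1], parts[2:]
    return "/" + "/".join([pfx] + list(reversed(domain.split("."))) + rest)

def resolve(name):
    trace = [name]
    while True:
        if name.startswith(PFX_NAMER):
            name = domain_to_path_pfx(name)
        else:
            # Dtab entries are matched bottom-up; later rules take precedence.
            for prefix, target in reversed(DTAB):
                if name == prefix or name.startswith(prefix + "/"):
                    name = target + name[len(prefix):]
                    break
            else:
                return trace  # no rule matched: the name is fully bound
        trace.append(name)

for step in resolve("/svc/helloworld.Hello"):
    print(step)
```

Tracing /svc/helloworld.Hello in this model goes through /grpc/Hello/helloworld and /srv/Hello/helloworld before landing on /#/io.l5d.k8s/default/grpc/Hello/helloworld; the trailing /helloworld is left as a residual path for the io.l5d.k8s namer, which binds the Hello service on its grpc port. A name for the World service instead hits the more specific /grpc/World rule and ends up at world-v1.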

## Conclusion

In this article, we've seen how to use Linkerd as a service mesh for gRPC requests, adding latency-aware load balancing, circuit breaking, and request-level routing to gRPC apps. Linkerd and gRPC are a great combination, especially as gRPC's HTTP/2 underpinnings provide it with powerful mechanisms like multiplexed streaming, back pressure, and cancelation, which Linkerd can take full advantage of. Because gRPC includes routing information in the request, it's a natural fit for Linkerd, and makes it very easy to set up Linkerd to route gRPC requests. For more on Linkerd's roadmap around gRPC, see Oliver's blog post on the topic.

Finally, for a more advanced example of configuring gRPC services, take a look at our Gob microservice app. In that example, we additionally deploy Namerd, which we use to manage our routing rules centrally and to update routing rules without redeploying Linkerd. This lets us do things like canarying and blue-green deploys between different versions of a service.

{{< note >}} There are a myriad of ways to deploy Kubernetes and different environments support different features. Learn more about deployment differences here. {{< /note >}}

For more information on Linkerd, gRPC, and HTTP/2, head to the Linkerd gRPC documentation as well as our config documentation for HTTP/2.