Drop old discourse links (#1605)

* Drop old discourse links

Signed-off-by: Flynn <flynn@buoyant.io>

* Fix CONTRIBUTING.md

Signed-off-by: Flynn <flynn@buoyant.io>

* Try to force a redeploy

Signed-off-by: Flynn <flynn@buoyant.io>

---------

Signed-off-by: Flynn <flynn@buoyant.io>
Co-authored-by: Flynn <flynn@buoyant.io>
Flynn 2023-04-11 18:06:31 -04:00 committed by GitHub
parent 7b037e9264
commit c3fc926090
25 changed files with 26 additions and 41 deletions

View File

@@ -5,7 +5,7 @@
## Getting Help ##
If you have a question about Linkerd or have encountered problems using it,
start by [asking a question in the forums][discourse] or join us on
start by [asking a question in the Linkerd Support Forum][forum] or join us on
[Slack][slack].
## Developer Certificate of Origin ##
@@ -94,7 +94,8 @@ thought of as being the motivation for your change.
Describe the modifications you've made.
[discourse]: https://discourse.linkerd.io/
[forum]: https://linkerd.buoyant.io/
[issue]: https://github.com/linkerd/linkerd/issues/new
[members]: https://github.com/orgs/linkerd/people
[slack]: https://slack.linkerd.io/

View File

@@ -15,7 +15,7 @@ IgnoreURLs:
- https://monkey.org/*
- https://svn.apache.org/*
- http://roc.cs.berkeley.edu/papers/dsconfig.pdf
- https://discourse.linkerd.io
- https://linkerd.buoyant.io
- https://github.com/*
- https://www.man7.org/*
- https://offerup.com/

View File

@@ -270,4 +270,4 @@ The examples and configurations in this post drew heavily from some excellent bl
There's a lot more that you can do with Linkerd. For more details about this setup, see [Getting Started: Running in ECS](https://linkerd.io/getting-started/ecs/). For all commands and config files referenced in this post, see the [linkerd-examples repo](https://github.com/linkerd/linkerd-examples/tree/master/ecs). For more information about configuring Linkerd, see the [Linkerd Configuration](https://api.linkerd.io/latest/linkerd/index.html) page. Finally, for more information about linkerd-viz, see the [linkerd-viz Github repo](https://github.com/linkerd/linkerd-viz).
We hope this post was useful. We'd love to get your thoughts. Please join us in the Linkerd [Discourse](https://discourse.linkerd.io/) forums and the Linkerd [Slack](https://slack.linkerd.io/) channel!
We hope this post was useful. We'd love to get your thoughts. Please join us in the [Linkerd Support Forum](https://linkerd.buoyant.io/) and the Linkerd [Slack](https://slack.linkerd.io/) channel!

View File

@@ -175,6 +175,4 @@ In the meantime, for more details about running linkerd in Kubernetes, visit the
Stay tuned for Part II in this series: [Pods Are Great Until They're Not][part-ii].
{{< note >}} There are a myriad of ways to deploy Kubernetes and different environments support different features. Learn more about deployment differences [here](https://discourse.linkerd.io/t/flavors-of-kubernetes). {{< /note >}}
[part-ii]: {{< ref "a-service-mesh-for-kubernetes-part-ii-pods-are-great-until-theyre-not" >}}

View File

@@ -147,7 +147,7 @@ routers:
Deploying linkerd as a Kubernetes DaemonSet gives us the best of both worlds—it allows us to accomplish the full set of goals of a service mesh (such as transparent TLS, protocol upgrades, latency-aware load balancing, etc), while scaling linkerd instances per host rather than per pod.
For a full, working example, see the [previous blog post][part-i], or download our [example app](https://github.com/linkerd/linkerd-examples/tree/master/k8s-daemonset). And for help with this configuration or anything else about linkerd, feel free to drop into our very active [Slack](http://slack.linkerd.io/?__hstc=9342122.76ce13dbfb256ee6981b45631b434a7a.1497486135169.1498849007669.1499118552444.5&__hssc=9342122.14.1499118552444&__hsfp=188505984) or post a topic on [linkerd discourse](https://discourse.linkerd.io/?__hstc=9342122.76ce13dbfb256ee6981b45631b434a7a.1497486135169.1498849007669.1499118552444.5&__hssc=9342122.14.1499118552444&__hsfp=188505984).
For a full, working example, see the [previous blog post][part-i], or download our [example app](https://github.com/linkerd/linkerd-examples/tree/master/k8s-daemonset). And for help with this configuration or anything else about linkerd, feel free to drop into our very active [Slack](http://slack.linkerd.io/?__hstc=9342122.76ce13dbfb256ee6981b45631b434a7a.1497486135169.1498849007669.1499118552444.5&__hssc=9342122.14.1499118552444&__hsfp=188505984) or post a topic on the [Linkerd Support Forum](https://linkerd.buoyant.io/).
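As context for the DaemonSet point above (not part of this diff): a common way for application pods to reach the node-local linkerd in such a deployment is the Kubernetes downward API. A minimal sketch, assuming the linkerd-examples convention of port 4140 for outgoing HTTP; the pod name and image are illustrative.

```yaml
# Sketch only: point an app container at the linkerd instance running on its own node.
apiVersion: v1
kind: Pod
metadata:
  name: hello                      # illustrative name
spec:
  containers:
  - name: hello
    image: buoyantio/helloworld    # illustrative image
    env:
    - name: NODE_NAME              # the node this pod landed on
      valueFrom:
        fieldRef:
          fieldPath: spec.nodeName
    - name: http_proxy             # send outbound HTTP through the node-local linkerd
      value: $(NODE_NAME):4140
```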
## Acknowledgments

View File

@@ -174,6 +174,6 @@ TLS is a complex topic and we've glossed over some important security consider
Finally, adding TLS to the communications substrate is just one of many things that can be accomplished with a service mesh. Be sure to check out the rest of the articles in this series for more!
For help with this or anything else about linkerd, feel free to stop by our [linkerd community Slack](http://slack.linkerd.io/), post a topic on [linkerd discourse](https://discourse.linkerd.io/), or [contact us directly](https://linkerd.io/overview/help/)!
For help with this or anything else about linkerd, feel free to stop by our [linkerd community Slack](http://slack.linkerd.io/), post a topic on the [Linkerd Support Forum](https://linkerd.buoyant.io/), or [contact us directly](https://linkerd.io/overview/help/)!
[part-i]: {{< ref "a-service-mesh-for-kubernetes-part-i-top-line-service-metrics" >}}
[part-ii]: {{< ref "a-service-mesh-for-kubernetes-part-ii-pods-are-great-until-theyre-not" >}}
[part-iii]: {{< ref "a-service-mesh-for-kubernetes-part-iii-encrypting-all-the-things" >}}
[part-iv]: {{< ref "a-service-mesh-for-kubernetes-part-iv-continuous-deployment-via-traffic-shifting" >}}
[part-v]: {{< ref "a-service-mesh-for-kubernetes-part-v-dogfood-environments-ingress-and-edge-routing" >}}
[part-vi]: {{< ref "a-service-mesh-for-kubernetes-part-vi-staging-microservices-without-the-tears" >}}
[part-vii]: {{< ref "a-service-mesh-for-kubernetes-part-vii-distributed-tracing-made-easy" >}}
[part-viii]: {{< ref "a-service-mesh-for-kubernetes-part-viii-linkerd-as-an-ingress-controller" >}}
[part-ix]: {{< ref "a-service-mesh-for-kubernetes-part-ix-grpc-for-fun-and-profit" >}}
[part-x]: {{< ref "a-service-mesh-for-kubernetes-part-x-the-service-mesh-api" >}}
[part-xi]: {{< ref "a-service-mesh-for-kubernetes-part-xi-egress" >}}

View File

@@ -399,8 +399,6 @@ Everything looks good. Kicking off a subsequent pipeline job will deploy a `wor
In this post, we've shown a basic workflow incorporating Linkerd, namerd, and Jenkins to progressively shift traffic from an old version to a new version of a service as the final step of a continuous deployment pipeline. We've shown how Linkerd's ability to do per-request routing actually lets us stage the new version of the service without needing a separate staging cluster, by using the `l5d-dtab` header to stitch the new service into the production topology *just for that request*. Finally, we've shown how percentage-based traffic shifting can be combined with a Jenkins `input` step to allow for human-in-the-loop verification of metrics as traffic moves from 0% to 100%.
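The `l5d-dtab` header mentioned above carries a per-request dtab override. A minimal sketch, not taken from this diff: the service names, dtab prefixes, and the `$L5D_INGRESS_LB` variable are assumptions that depend on your namer configuration.

```bash
# Hypothetical names: route just this one request to world-v2 instead of world.
curl -H 'l5d-dtab: /svc/world => /svc/world-v2' http://$L5D_INGRESS_LB:4140/
```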
This was a fairly simple example, but we hope it demonstrates the basic pattern of using service mesh routing for continuous deployment and provides a template for customizing this workflow for your own organization. For help with dtabs or anything else about Linkerd, feel free to stop by our [Linkerd forum](https://discourse.linkerd.io/), [Linkerd community Slack](http://slack.linkerd.io/), or [contact us directly](https://linkerd.io/overview/help/)!
{{< note >}} There are a myriad of ways to deploy Kubernetes and different environments support different features. Learn more about deployment differences [here](https://discourse.linkerd.io/t/flavors-of-kubernetes). {{< /note >}}
This was a fairly simple example, but we hope it demonstrates the basic pattern of using service mesh routing for continuous deployment and provides a template for customizing this workflow for your own organization. For help with dtabs or anything else about Linkerd, feel free to stop by the [Linkerd Support Forum](https://linkerd.buoyant.io/), [Linkerd community Slack](http://slack.linkerd.io/), or [contact us directly](https://linkerd.io/overview/help/)!
[part-i]: {{< ref "a-service-mesh-for-kubernetes-part-i-top-line-service-metrics" >}}
[part-ii]: {{< ref "a-service-mesh-for-kubernetes-part-ii-pods-are-great-until-theyre-not" >}}
[part-iii]: {{< ref "a-service-mesh-for-kubernetes-part-iii-encrypting-all-the-things" >}}
[part-iv]: {{< ref "a-service-mesh-for-kubernetes-part-iv-continuous-deployment-via-traffic-shifting" >}}
[part-v]: {{< ref "a-service-mesh-for-kubernetes-part-v-dogfood-environments-ingress-and-edge-routing" >}}
[part-vi]: {{< ref "a-service-mesh-for-kubernetes-part-vi-staging-microservices-without-the-tears" >}}
[part-vii]: {{< ref "a-service-mesh-for-kubernetes-part-vii-distributed-tracing-made-easy" >}}
[part-viii]: {{< ref "a-service-mesh-for-kubernetes-part-viii-linkerd-as-an-ingress-controller" >}}
[part-ix]: {{< ref "a-service-mesh-for-kubernetes-part-ix-grpc-for-fun-and-profit" >}}
[part-x]: {{< ref "a-service-mesh-for-kubernetes-part-x-the-service-mesh-api" >}}
[part-xi]: {{< ref "a-service-mesh-for-kubernetes-part-xi-egress" >}}

View File

@@ -78,7 +78,7 @@ Also deploy Linkerd:
kubectl apply -f https://raw.githubusercontent.com/BuoyantIO/linkerd-examples/master/k8s-daemonset/k8s/linkerd-grpc.yml
```
Once Kubernetes provisions an external LoadBalancer IP for Linkerd, we can do some test requests! Note that the examples in these blog posts assume k8s is running on GKE (e.g. external loadbalancer IPs are available, no CNI plugins are being used). Slight modifications may be needed for other environments—see our [Flavors of Kubernetes help page](https://discourse.linkerd.io/t/flavors-of-kubernetes/53) for environments like Minikube or CNI configurations with Calico/Weave.
Once Kubernetes provisions an external LoadBalancer IP for Linkerd, we can do some test requests! Note that the examples in these blog posts assume k8s is running on GKE (e.g. external loadbalancer IPs are available, no CNI plugins are being used). Slight modifications may be needed for other environments, for example Minikube or CNI configurations with Calico/Weave.
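One way to capture that external IP once it is provisioned is the same `kubectl` jsonpath query used later in this series; this assumes the linkerd service is named `l5d`, as in the linkerd-examples config.

```bash
# Store the external LoadBalancer IP for the test requests that follow.
L5D_INGRESS_LB=$(kubectl get svc l5d -o jsonpath="{.status.loadBalancer.ingress[0].*}")
echo "$L5D_INGRESS_LB"   # empty until the LoadBalancer has been provisioned
```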
We'll use the helloworld-client provided by the `hello world` [docker image](https://hub.docker.com/r/buoyantio/helloworld/) in order to send test gRPC requests to our `hello world` service:
@@ -162,8 +162,6 @@ In this article, we've seen how to use Linkerd as a service mesh for gRPC requ
Finally, for a more advanced example of configuring gRPC services, take a look at our [Gob microservice app](https://github.com/BuoyantIO/linkerd-examples/tree/master/gob). In that example, we additionally deploy [Namerd](https://github.com/linkerd/linkerd/tree/master/namerd), which we use to manage our routing rules centrally, and update routing rules without redeploying Linkerd. This lets us do things like canarying and blue-green deploys between different versions of a service.
Note: there are a myriad of ways to deploy Kubernetes and different environments support different features. Learn more about deployment differences [here](https://discourse.linkerd.io/t/flavors-of-kubernetes).
For more information on Linkerd, gRPC, and HTTP/2 head to the [Linkerd gRPC documentation](https://linkerd.io/features/grpc/) as well as our [config documentation for HTTP/2](https://linkerd.io/config/1.0.0/linkerd/index.html#http-2-protocol).
[part-i]: {{< ref "a-service-mesh-for-kubernetes-part-i-top-line-service-metrics" >}}
[part-ii]: {{< ref "a-service-mesh-for-kubernetes-part-ii-pods-are-great-until-theyre-not" >}}
[part-iii]: {{< ref "a-service-mesh-for-kubernetes-part-iii-encrypting-all-the-things" >}}
[part-iv]: {{< ref "a-service-mesh-for-kubernetes-part-iv-continuous-deployment-via-traffic-shifting" >}}
[part-v]: {{< ref "a-service-mesh-for-kubernetes-part-v-dogfood-environments-ingress-and-edge-routing" >}}
[part-vi]: {{< ref "a-service-mesh-for-kubernetes-part-vi-staging-microservices-without-the-tears" >}}
[part-vii]: {{< ref "a-service-mesh-for-kubernetes-part-vii-distributed-tracing-made-easy" >}}
[part-viii]: {{< ref "a-service-mesh-for-kubernetes-part-viii-linkerd-as-an-ingress-controller" >}}
[part-ix]: {{< ref "a-service-mesh-for-kubernetes-part-ix-grpc-for-fun-and-profit" >}}
[part-x]: {{< ref "a-service-mesh-for-kubernetes-part-x-the-service-mesh-api" >}}
[part-xi]: {{< ref "a-service-mesh-for-kubernetes-part-xi-egress" >}}

View File

@@ -240,8 +240,6 @@ The system works! When this cookie is set, you'll be in dogfood mode. Without
In this post, we saw how to use linkerd to provide powerful and flexible ingress to a Kubernetes cluster. We've demonstrated how to deploy a nominally production-ready setup that uses linkerd for service routing. And we've demonstrated how to use some of the advanced routing features of linkerd to decouple the *traffic-serving* topology from the *deployment topology*, allowing for the creation of dogfood environments without separate clusters or deploy-time complications.
{{< note >}} There are a myriad of ways to deploy Kubernetes and different environments support different features. Learn more about deployment differences [here](https://discourse.linkerd.io/t/flavors-of-kubernetes). {{< /note >}}
For more about running linkerd in Kubernetes, or if you have any issues configuring ingress in your setup, feel free to stop by our [linkerd community Slack](http://slack.linkerd.io/), ask a question on [Discourse](https://discourse.linkerd.io), or [contact us directly](https://linkerd.io/overview/help/)!
For more about running linkerd in Kubernetes, or if you have any issues configuring ingress in your setup, feel free to stop by our [linkerd community Slack](http://slack.linkerd.io/), ask a question on the [Linkerd Support Forum](https://linkerd.buoyant.io/), or [contact us directly](https://linkerd.io/overview/help/)!
[part-i]: {{< ref "a-service-mesh-for-kubernetes-part-i-top-line-service-metrics" >}}
[part-ii]: {{< ref "a-service-mesh-for-kubernetes-part-ii-pods-are-great-until-theyre-not" >}}
[part-iii]: {{< ref "a-service-mesh-for-kubernetes-part-iii-encrypting-all-the-things" >}}
[part-iv]: {{< ref "a-service-mesh-for-kubernetes-part-iv-continuous-deployment-via-traffic-shifting" >}}
[part-v]: {{< ref "a-service-mesh-for-kubernetes-part-v-dogfood-environments-ingress-and-edge-routing" >}}
[part-vi]: {{< ref "a-service-mesh-for-kubernetes-part-vi-staging-microservices-without-the-tears" >}}
[part-vii]: {{< ref "a-service-mesh-for-kubernetes-part-vii-distributed-tracing-made-easy" >}}
[part-viii]: {{< ref "a-service-mesh-for-kubernetes-part-viii-linkerd-as-an-ingress-controller" >}}
[part-ix]: {{< ref "a-service-mesh-for-kubernetes-part-ix-grpc-for-fun-and-profit" >}}
[part-x]: {{< ref "a-service-mesh-for-kubernetes-part-x-the-service-mesh-api" >}}
[part-xi]: {{< ref "a-service-mesh-for-kubernetes-part-xi-egress" >}}

View File

@@ -175,8 +175,6 @@ Lastly, it's important to ensure that the `l5d-dtab` header is not settable
We've demonstrated how to create ad-hoc staging environments with linkerd by setting per-request routing rules. With this approach, we can stage services in the context of the production environment, without modifying existing code, provisioning extra resources for our staging environment (other than for the staging instance itself), or maintaining parallel environments for production and staging. For microservices with complex application topologies, this approach can provide an easy, low-cost way to stage services before pushing to production.
Note: there are a myriad of ways to deploy Kubernetes and different environments support different features. Learn more about deployment differences [here](https://discourse.linkerd.io/t/flavors-of-kubernetes).
For more about running linkerd in Kubernetes, or if you have any issues configuring ingress in your setup, feel free to stop by our [linkerd community Slack](https://slack.linkerd.io/), ask a question on [Discourse](https://discourse.linkerd.io), or [contact us directly](https://linkerd.io/overview/help/)!
For more about running linkerd in Kubernetes, or if you have any issues configuring ingress in your setup, feel free to stop by our [linkerd community Slack](https://slack.linkerd.io/), ask a question on the [Linkerd Support Forum](https://linkerd.buoyant.io/), or [contact us directly](https://linkerd.io/overview/help/)!
[part-i]: {{< ref "a-service-mesh-for-kubernetes-part-i-top-line-service-metrics" >}}
[part-ii]: {{< ref "a-service-mesh-for-kubernetes-part-ii-pods-are-great-until-theyre-not" >}}
[part-iii]: {{< ref "a-service-mesh-for-kubernetes-part-iii-encrypting-all-the-things" >}}
[part-iv]: {{< ref "a-service-mesh-for-kubernetes-part-iv-continuous-deployment-via-traffic-shifting" >}}
[part-v]: {{< ref "a-service-mesh-for-kubernetes-part-v-dogfood-environments-ingress-and-edge-routing" >}}
[part-vi]: {{< ref "a-service-mesh-for-kubernetes-part-vi-staging-microservices-without-the-tears" >}}
[part-vii]: {{< ref "a-service-mesh-for-kubernetes-part-vii-distributed-tracing-made-easy" >}}
[part-viii]: {{< ref "a-service-mesh-for-kubernetes-part-viii-linkerd-as-an-ingress-controller" >}}
[part-ix]: {{< ref "a-service-mesh-for-kubernetes-part-ix-grpc-for-fun-and-profit" >}}
[part-x]: {{< ref "a-service-mesh-for-kubernetes-part-x-the-service-mesh-api" >}}
[part-xi]: {{< ref "a-service-mesh-for-kubernetes-part-xi-egress" >}}

View File

@@ -200,6 +200,4 @@ For instance, consider the following trace:
In this example, an external request is routed by Linkerd to the “Web” service, which then calls “Service B” and “Service C” sequentially (via Linkerd) before returning a response. The trace has 6 spans, and a total duration of 20 milliseconds. The 3 yellow spans are *server spans*, and the 3 blue spans are *client spans*. The *root span* is Span A, which represents the time from when Linkerd initially received the external request until it returned the response. Span A has one child, Span B, which represents the amount of time that it took for the Web service to respond to Linkerd's forwarded request. Likewise Span D represents the amount of time that it took for Service B to respond to the request from the Web service. For more information about tracing, read our previous blog post, [Distributed Tracing for Polyglot Microservices][polyglot].
Note: there are a myriad of ways to deploy Kubernetes and different environments support different features. Learn more about deployment differences [here](https://discourse.linkerd.io/t/flavors-of-kubernetes).
[part-i]: {{< ref "a-service-mesh-for-kubernetes-part-i-top-line-service-metrics" >}}
[part-ii]: {{< ref "a-service-mesh-for-kubernetes-part-ii-pods-are-great-until-theyre-not" >}}
[part-iii]: {{< ref "a-service-mesh-for-kubernetes-part-iii-encrypting-all-the-things" >}}
[part-iv]: {{< ref "a-service-mesh-for-kubernetes-part-iv-continuous-deployment-via-traffic-shifting" >}}
[part-v]: {{< ref "a-service-mesh-for-kubernetes-part-v-dogfood-environments-ingress-and-edge-routing" >}}
[part-vi]: {{< ref "a-service-mesh-for-kubernetes-part-vi-staging-microservices-without-the-tears" >}}
[part-vii]: {{< ref "a-service-mesh-for-kubernetes-part-vii-distributed-tracing-made-easy" >}}
[part-viii]: {{< ref "a-service-mesh-for-kubernetes-part-viii-linkerd-as-an-ingress-controller" >}}
[part-ix]: {{< ref "a-service-mesh-for-kubernetes-part-ix-grpc-for-fun-and-profit" >}}
[part-x]: {{< ref "a-service-mesh-for-kubernetes-part-x-the-service-mesh-api" >}}
[part-xi]: {{< ref "a-service-mesh-for-kubernetes-part-xi-egress" >}}
[polyglot]: /2016/05/17/distributed-tracing-for-polyglot-microservices/

View File

@@ -264,9 +264,7 @@ $ earth (10.0.1.5)!
Linkerd provides a ton of benefits as an edge router. In addition to the dynamic routing and TLS termination described in this post, it also [pools connections](https://en.wikipedia.org/wiki/Connection_pool), [load balances dynamically](/2016/03/16/beyond-round-robin-load-balancing-for-latency/), [enables circuit breaking](/2017/01/14/making-microservices-more-resilient-with-circuit-breaking/), and supports [distributed tracing][part-vii]. Using the Linkerd ingress controller and the [Kubernetes configuration](https://raw.githubusercontent.com/linkerd/linkerd-examples/master/k8s-daemonset/k8s/linkerd-ingress-controller.yml) referenced in this post, you gain access to all these features in an easy-to-use, Kubernetes-native approach. Best of all, this method works seamlessly with the rest of the service mesh, allowing for operation, visibility, and high availability in virtually any cloud architecture.
Note: there are a myriad of ways to deploy Kubernetes and different environments support different features. Learn more about deployment differences [here](https://discourse.linkerd.io/t/flavors-of-kubernetes).
The [ingress identifier is new](https://github.com/linkerd/linkerd/pull/1116), so we'd love to get your thoughts on what features you want from an ingress controller. You can find us in the [Linkerd community Slack](https://slack.linkerd.io/) or on the [linkerd discourse](https://discourse.linkerd.io/).
The [ingress identifier is new](https://github.com/linkerd/linkerd/pull/1116), so we'd love to get your thoughts on what features you want from an ingress controller. You can find us in the [Linkerd community Slack](https://slack.linkerd.io/) or on the [Linkerd Support Forum](https://linkerd.buoyant.io/).
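For reference, the ingress controller configuration linked in that paragraph can be applied the same way as the other examples in this series (a sketch; check the linked file for the current URL):

```bash
kubectl apply -f https://raw.githubusercontent.com/linkerd/linkerd-examples/master/k8s-daemonset/k8s/linkerd-ingress-controller.yml
```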
### ACKNOWLEDGEMENTS

View File

@@ -71,7 +71,7 @@ Deploy it to your Kubernetes cluster with this command:
kubectl apply -f https://raw.githubusercontent.com/BuoyantIO/linkerd-examples/master/k8s-daemonset/k8s/hello-world-latency.yml
```
(Note that the examples in these blog posts assume Kubernetes is running in an environment like GKE, where external loadbalancer IPs are available, and no CNI plugins are being used. Slight modifications may be needed for other environments—see our [Flavors of Kubernetes forum posting](https://discourse.linkerd.io/t/flavors-of-kubernetes/53) for how to handle environments like Minikube or CNI configurations with Calico/Weave.)
(Note that the examples in these blog posts assume Kubernetes is running in an environment like GKE, where external loadbalancer IPs are available, and no CNI plugins are being used. Slight modifications may be needed for other environments, for example Minikube or CNI configurations with Calico/Weave.)
Our next step will be to deploy the Linkerd service mesh. We'd like to add a timeout so that we can abort (and potentially retry) requests that are taking too long, but we're faced with a problem. The `world` service is fast, responding in less than `100ms`, but the `hello` service is slow, taking more than `500ms` to respond. If we set our timeout just above `100ms`, requests to the `world` service will succeed, but requests to the `hello` service are guaranteed to time out. On the other hand, if we set our timeout above `500ms` then we're giving the `world` service a much longer timeout than necessary, which may cause problems to *our* callers.
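The post resolves this with per-service communication policy rather than a single router-wide timeout. A minimal sketch of what that could look like in a linkerd 1.x config, assuming the `io.l5d.static` service identifier and the `totalTimeoutMs` parameter; the prefixes and values are illustrative, not taken from this diff.

```yaml
routers:
- protocol: http
  service:
    kind: io.l5d.static
    configs:
    - prefix: /svc/world        # fast service: tight timeout budget
      totalTimeoutMs: 200
    - prefix: /svc/hello        # slow service: more headroom
      totalTimeoutMs: 600
```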
@@ -142,4 +142,4 @@ In this post, we've seen an example of using Linkerd's new per-service commu
In the coming months, we'll add this communications policy to Linkerd's service mesh API, alongside routing policy. Looking still further, other forms of policy—including [rate limiting](https://github.com/linkerd/linkerd/issues/1006), [request forking policy](https://github.com/linkerd/linkerd/issues/1277), and [security policy](https://github.com/linkerd/linkerd/issues/1276)—are all on [the Linkerd roadmap](https://github.com/linkerd/linkerd/projects/3), and will form more of Linkerd's service mesh API. A consistent, uniform, well-designed service mesh API with comprehensive control over Linkerd's runtime behavior is central to our vision of Linkerd as the service mesh for cloud native applications.
There's a lot of very exciting work ahead of us and it won't be possible without input and involvement from the amazing Linkerd community. Please comment on an issue, discuss your use case on [Discourse](https://discourse.linkerd.io/), hit us up on [Slack](https://slack.linkerd.io/), or—best of all—submit a [pull request](https://github.com/linkerd/linkerd/pulls)!
There's a lot of very exciting work ahead of us and it won't be possible without input and involvement from the amazing Linkerd community. Please comment on an issue, discuss your use case on the [Linkerd Support Forum](https://linkerd.buoyant.io/), hit us up on [Slack](https://slack.linkerd.io/), or—best of all—submit a [pull request](https://github.com/linkerd/linkerd/pulls)!

View File

@@ -118,8 +118,6 @@ kubectl apply -f https://raw.githubusercontent.com/BuoyantIO/linkerd-examples/ma
Once Kubernetes provisions an external LoadBalancer IP for Linkerd, we can test requests to the `hello` and `world` services as well as external services running outside of Kubernetes.
(Note that the examples in these blog posts assume k8s is running on GKE (e.g. external loadbalancer IPs are available, no CNI plugins are being used). Slight modifications may be needed for other environments—see our [Flavors of Kubernetes help page](https://discourse.linkerd.io/t/flavors-of-kubernetes/53) for environments like Minikube or CNI configurations with Calico/Weave.)
```bash
L5D_INGRESS_LB=$(kubectl get svc l5d -o jsonpath="{.status.loadBalancer.ingress[0].*}")
```
@@ -153,6 +151,6 @@ In the above configuration, we assume that the Linkerd DaemonSet pods are able t
By using Linkerd for egress, external services are able to share the same benefits that services running inside of Kubernetes get from the Linkerd service mesh. These include adaptive load balancing, circuit breaking, observability, dynamic routing, and TLS initiation. Most importantly, Linkerd gives you a uniform, consistent model of request handling and naming that's independent of whether those requests are destined for internal services, or for external, third-party APIs.
If you have any questions about using Linkerd for egress, please come ask on [Discourse](https://discourse.linkerd.io/) or [Slack](https://slack.linkerd.io/)!
If you have any questions about using Linkerd for egress, please come ask on the [Linkerd Support Forum](https://linkerd.buoyant.io/) or [Slack](https://slack.linkerd.io/)!
[part-i]: {{< ref "a-service-mesh-for-kubernetes-part-i-top-line-service-metrics" >}}
[part-ii]: {{< ref "a-service-mesh-for-kubernetes-part-ii-pods-are-great-until-theyre-not" >}}
[part-iii]: {{< ref "a-service-mesh-for-kubernetes-part-iii-encrypting-all-the-things" >}}
[part-iv]: {{< ref "a-service-mesh-for-kubernetes-part-iv-continuous-deployment-via-traffic-shifting" >}}
[part-v]: {{< ref "a-service-mesh-for-kubernetes-part-v-dogfood-environments-ingress-and-edge-routing" >}}
[part-vi]: {{< ref "a-service-mesh-for-kubernetes-part-vi-staging-microservices-without-the-tears" >}}
[part-vii]: {{< ref "a-service-mesh-for-kubernetes-part-vii-distributed-tracing-made-easy" >}}
[part-viii]: {{< ref "a-service-mesh-for-kubernetes-part-viii-linkerd-as-an-ingress-controller" >}}
[part-ix]: {{< ref "a-service-mesh-for-kubernetes-part-ix-grpc-for-fun-and-profit" >}}
[part-x]: {{< ref "a-service-mesh-for-kubernetes-part-x-the-service-mesh-api" >}}
[part-xi]: {{< ref "a-service-mesh-for-kubernetes-part-xi-egress" >}}

View File

@@ -50,4 +50,4 @@ More instructions on configuring your application to work with the official Kube
With this config, it's easier than ever to get started running Linkerd on Kubernetes! Whether you're running Kubernetes on a massive production cluster or in [Minikube](https://github.com/kubernetes/minikube) on your laptop; whether you're already using Linkerd to route critical production traffic or just checking it out for the first time, this config will allow you to easily set up a fully-featured Linkerd service mesh, and serve as a starting point to write your own custom configurations to best suit the needs of your application.
For more information about [Linkerd's various features](https://linkerd.io/features/index.html) on Kubernetes, see our [Service Mesh For Kubernetes]({{< ref
"a-service-mesh-for-kubernetes-part-i-top-line-service-metrics" >}}) blog series. As always, if you have any questions or just want to chat about Linkerd, join [the Linkerd Slack](http://slack.linkerd.io/) or browse [the Discourse community forum](https://discourse.linkerd.io) for more in-depth discussion.
"a-service-mesh-for-kubernetes-part-i-top-line-service-metrics" >}}) blog series. As always, if you have any questions or just want to chat about Linkerd, join [the Linkerd Slack](http://slack.linkerd.io/) or browse the [Linkerd Support Forum](https://linkerd.buoyant.io/) for more in-depth discussion.

View File

@@ -113,6 +113,6 @@ Dtabs are an incredibly powerful system that provide fine-grained control over t
## Want more?
This is just the beginning, and we have some very big plans for Linkerd-tcp. Want to get involved? [Linkerd-tcp is on Github](https://github.com/linkerd/linkerd-tcp). And for help with Linkerd-tcp, Dtabs, or anything else about the Linkerd service mesh, feel free to stop by the [Linkerd community Slack](https://slack.linkerd.io/) or post a topic on [Linkerd discourse](https://discourse.linkerd.io/)!
This is just the beginning, and we have some very big plans for Linkerd-tcp. Want to get involved? [Linkerd-tcp is on Github](https://github.com/linkerd/linkerd-tcp). And for help with Linkerd-tcp, Dtabs, or anything else about the Linkerd service mesh, feel free to stop by the [Linkerd community Slack](https://slack.linkerd.io/) or post a topic on the [Linkerd Support Forum](https://linkerd.buoyant.io/)!
[part-iv]: {{< ref "a-service-mesh-for-kubernetes-part-iv-continuous-deployment-via-traffic-shifting" >}}

View File

@@ -29,6 +29,6 @@ In 0.8.4, we started testing Linkerd against known-good gRPC clients and servers
For now, HTTP/2 and gRPC support remain behind the experimental flag. However, production-ready HTTP/2 and gRPC support are on our short term roadmap, and you should expect to see these features continue to improve over the next few releases.
We hope you enjoy this release. For more about HTTP/2 or gRPC with Linkerd, feel free to stop by our [Linkerd community Slack](http://slack.linkerd.io/), ask a question on [Discourse](https://discourse.linkerd.io), or [contact us directly](https://linkerd.io/overview/help/).
We hope you enjoy this release. For more about HTTP/2 or gRPC with Linkerd, feel free to stop by our [Linkerd community Slack](http://slack.linkerd.io/), ask a question on the [Linkerd Support Forum](https://linkerd.buoyant.io/), or [contact us directly](https://linkerd.io/overview/help/).
—William and the gang at [Buoyant](https://buoyant.io/)

View File

@@ -46,4 +46,4 @@ Finally, we'd like to thank another first-time Linkerd contributor, Robert Pan
## The Linkerd Community is Amazing
As always, we're humbled and gratified to have such a strong open source community around Linkerd. Thanks again to Robert, Carlos, Zack, Steve, Chris, Matt, and Sergey. For a first-hand view into just how helpful the community around Linkerd can be, please join us in the [Linkerd Slack](http://slack.linkerd.io) or on the Linkerd Community [Discourse forums](https://discourse.linkerd.io/)!
As always, we're humbled and gratified to have such a strong open source community around Linkerd. Thanks again to Robert, Carlos, Zack, Steve, Chris, Matt, and Sergey. For a first-hand view into just how helpful the community around Linkerd can be, please join us in the [Linkerd Slack](http://slack.linkerd.io) or on the [Linkerd Support Forum](https://linkerd.buoyant.io/)!

View File

@@ -207,7 +207,7 @@ need help, or have questions, feel free to reach out via one of the following
channels:
- The [Linkerd slack](http://slack.linkerd.io/)
- The [Linkerd discourse](https://discourse.linkerd.io/)
- The [Linkerd Support Forum](https://linkerd.buoyant.io/)
- Email us directly at support@buoyant.io
## Acknowledgments

View File

@@ -98,7 +98,7 @@ For more information on configuration options available in Linkerd, have a look
In this example, we've seen how Linkerd can improve system throughput in the presence of failing and slow components, even though Linkerd itself adds a small amount of latency to each request. In our [experience operating large-scale systems]({{< relref "linkerd-twitter-style-operability-for-microservices" >}}), this test environment demonstrates the types of performance issues and incidents that we have seen in production. A single request from the outside can hit 10s or even 100s of services, each having 10s or 100s of instances, any of which may be slow or down. Setting up Linkerd as your service mesh can help ensure latency stays low and success rate stays high in the face of inconsistent performance and partial failure in your distributed systems.
If you have any questions about this post, Linkerd, or distributed systems in general, feel free to stop by our [Linkerd community Slack](http://slack.linkerd.io/), post a topic on [Linkerd discourse](https://discourse.linkerd.io/), or [contact us directly](https://linkerd.io/overview/help/).
If you have any questions about this post, Linkerd, or distributed systems in general, feel free to stop by our [Linkerd community Slack](http://slack.linkerd.io/), post a topic on the [Linkerd Support Forum](https://linkerd.buoyant.io/), or [contact us directly](https://linkerd.io/overview/help/).
## Acknowledgements

View File

@@ -183,7 +183,7 @@ And that's it! The Linkerd pods now use the `linkerd-svc-account` and have th
## Putting it all together
For a complete Kubernetes config file that uses all of the above, just use this file: [linkerd-rbac.yml][linkerd-rbac]. This config will allow Linkerd and Namerd to have all the access needed to the Kubernetes API with the default service account. If you'd like to set this up using a dedicated service account, you'll need to modify linkerd-rbac-beta.yml, as described in the previous section. We hope this post was useful. We'd love to get your thoughts. Please join us in the Linkerd [Discourse](https://discourse.linkerd.io/) forums and the Linkerd [Slack](https://slack.linkerd.io/) channel! And for more walkthroughs of how to use [Linkerd's various features](https://linkerd.io/features/index.html) on Kubernetes, see our [Service Mesh For Kubernetes]({{< ref
For a complete Kubernetes config file that uses all of the above, just use this file: [linkerd-rbac.yml][linkerd-rbac]. This config will allow Linkerd and Namerd to have all the access needed to the Kubernetes API with the default service account. If you'd like to set this up using a dedicated service account, you'll need to modify linkerd-rbac-beta.yml, as described in the previous section. We hope this post was useful. We'd love to get your thoughts. Please join us in the [Linkerd Support Forum](https://linkerd.buoyant.io/) and the Linkerd [Slack](https://slack.linkerd.io/) channel! And for more walkthroughs of how to use [Linkerd's various features](https://linkerd.io/features/index.html) on Kubernetes, see our [Service Mesh For Kubernetes]({{< ref
"a-service-mesh-for-kubernetes-part-i-top-line-service-metrics" >}}) blog series.
[daemonset]: https://raw.githubusercontent.com/linkerd/linkerd-examples/master/k8s-daemonset/k8s/linkerd.yml
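The linkerd-rbac.yml file itself is not part of this diff. As a rough sketch of the kind of RBAC objects such a file sets up, a role granting read access to the resources linkerd watches, bound to a service account, could look like this (all names here are illustrative, not taken from that file):

```yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: linkerd-reader                 # illustrative name
rules:
- apiGroups: [""]
  resources: ["services", "endpoints", "pods"]
  verbs: ["get", "list", "watch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: linkerd-reader-binding         # illustrative name
subjects:
- kind: ServiceAccount
  name: default                        # or a dedicated account such as linkerd-svc-account
  namespace: default
roleRef:
  kind: ClusterRole
  name: linkerd-reader
  apiGroup: rbac.authorization.k8s.io
```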

View File

@@ -70,8 +70,8 @@
</a>
</div>
<div>
<a class="" href="https://discourse.linkerd.io" target="_blank">
<img style="height:24px;" src="/images/discourse.svg" alt="Discourse">
<a class="" href="https://linkerd.buoyant.io" target="_blank">
<img style="height:24px; vertical-align: middle;" src="/images/identity/png/transparent_background/2x/forum_dark@2x.png" alt="FORUM">
</a>
</div>
</div>

View File

@@ -117,8 +117,8 @@
</a>
</div>
<div>
<a class="" href="https://discourse.linkerd.io" target="_blank">
<img style="height:24px;" src="/images/discourse.svg" alt="Discourse">
<a class="" href="https://linkerd.buoyant.io" target="_blank">
<img style="height:24px; vertical-align: middle;" src="/images/identity/png/transparent_background/2x/forum_dark@2x.png" alt="FORUM">
</a>
</div>
</div>

(Two binary image files added, not shown: 9.7 KiB and 9.2 KiB.)