Merge changes from the release-1.0 branch (#2161)

* Prep for 1.0 release

* Fix typo for 1.0 announcement. (#2081)

* Updated kubectl link for IBM Cloud Private. (#2083)

* Fix generated tablegen.py (needs backport) (#2084)

Original table was dreadfully wrong.

(cherry picked from commit b3fa64fa41)

* add a VirtualService for external HTTPS ServiceEntry (#2080)

* add a VirtualService for external HTTPS ServiceEntry

* a VirtualService -> the VirtualService

(cherry picked from commit 9e57d4a5b7)

* egress gateway: use subsets for cnn in destination rules and virtual services (#1942)

* use subsets for cnn in destination rules and virtual services

* remove trailing spaces

* separate virtual services for traffic to and from egress gateway

to egress gateway: TLS match
from egress gateway: TCP match

* put back tls match for HTTPS egress for Istio without Auth

combine defining the Gateway and the VirtualServices

* use ISTIO_MUTUAL with sni in destination rules

* update the log message to print HTTP/2 as the protocol

* make two VirtualServices into one

* remove redundant explanation about SNI setting in a destination rule

* use different virtual service matches for Istio with and without SNI

* fix the case of HTTP traffic for Istio without Auth

(cherry picked from commit 81baa2e939)

* Disable Mesh Expansion page.

(cherry picked from commit dc4da48042)

* Blog fix.

* adding juspay (#2092)

* Update homepage and what is istio page (#2085)

- update the two pages
- make the links point to the Chinese document

(cherry picked from commit 993231abeb)

* Chinese: announcing istio 1.0 (#2088)

(cherry picked from commit 5301d4ea13)

* Move advanced egress tasks to examples, Advanced egress traffic control, release 1.0 (#2093)

* add advanced-egress subsection in Examples

* move egress gateway and egress tls origination tasks into advanced examples

* rename task to example and fix the links

* Tweak the HP blog post a tad.

* Another blog tweak.

* Update index.md (#2096)

Removing VM support until it's fixed

(cherry picked from commit c2e529212b)

* Make the site work when it's published to a subdirectory (for the archive) (#2095)

(cherry picked from commit 137e1d13f4)

* Change "Testing mutual TLS" tutorial to "Mutual TLS deep dive" (#1972)

(cherry picked from commit 0662e413f1)

* fix kubectl output (#2100)

fixes https://github.com/istio/istio.github.io/issues/2066

(cherry picked from commit 2a852d1408)

* Another blog tweak.

* Add section to tracing task to cover sampling. (#2097)

* Add section to tracing task to cover sampling.

* Lint fix

* Review comments.

* Review comments.

* Review comments.

* Add documentation for redisquota adapter in rate limiting doc (#2098)

* fix multicluster doc issues. (#2104)

* remove unnecessary gateway spec (#2091)

* Clarify and correct distributed tracing task (#2115)

* Cherry-pick latest changes from master (#2118)

* Translate fix zh links (#2105)

* zh: all links without '#' have been replaced

* translate: rewrite links to zh version if it exists.

(cherry picked from commit c4daa73dee)

* Translate Istio 1.0 canary into Chinese (#2110)

(cherry picked from commit 4d6eec754c)

* Fix typo in "Delayering Istio" blog post (#2102)

(cherry picked from commit 6bdb4605f4)

* Minikube settings (#2082)

(cherry picked from commit 9f6ebe9eeb)

* Fix single word in command (#2112)

It returned this:
```
kubectl get svc istio-ingress -n istio-system
Error from server (NotFound): services "istio-ingress" not found
```

Now it works correctly.
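Presumably the corrected command points at the gateway service that actually exists in an Istio 1.0 install (`istio-ingressgateway` is an assumption here; the commit body does not show the corrected word):
```
# hypothetical corrected command -- service name assumed, not taken from the commit
kubectl get svc istio-ingressgateway -n istio-system
```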

(cherry picked from commit 2bbe9eef03)

* add initial galley intro to "what is istio" concept page (#2113)

(cherry picked from commit 2db7f5648d)

* make cmd/result match (#2117)

* make cmd/result match

* address comment

* Add Rigs to the English content owners file. (#2119)

(cherry picked from commit bd577696bf)

* Cherry-picks from master (#2122)

* Add Istio security vulnerabilities disclosure and handling page (#2114)

(cherry picked from commit dfee9b8ec0)

* Fix an error in faq page (#2120)

(cherry picked from commit d3c04a5ba7)

* More work to fix use of the site in a subdirectory. (#2123) (#2124)

(cherry picked from commit 5bd9c0f0bd)

* Cherry-pick latest changes from master (#2128)

* Add a couple of entries to our preferred vocab list.

(cherry picked from commit 2cbe43aea7)

* Translate attribute-vocabulary (#2101)

* translate attribute-vocabulary

* fix Chinese link

* fix Chinese style & translate table header

(cherry picked from commit 056bf27879)

* fix the virtual-services fault injection error in the YAML (#2109)

fix the virtual-services fault injection error in the YAML

(cherry picked from commit 453012d3ab)

* Add an item to check whether mTLS is enabled for a service (#2062)

(cherry picked from commit 384f6cd8be)

* Chinese content was aliasing English content. (#2126)

Page aliases are intended to redirect users from a page's old location to its new location.
As it was, the Chinese content pages were redirecting old English locations to Chinese, which
made Chinese content show up on English systems that were using the old links.

(cherry picked from commit c86d357f2e)

* Fix formatting glitch in a few glossary entries.

(cherry picked from commit a6420a4475)

* Cherry pick latest changes from master (#2138)

* Translate into Chinese: docs/examples/multicluster/icp/index.md (#2129)

* Translate into Chinese: docs/examples/multicluster/icp/index.md

* fix link anchor

(cherry picked from commit eca46893fe)

* Add an icon for the security vulnerabilities page (#2132)

(cherry picked from commit 11ce2b3924)

* Fix security concept figure captions etc. (#2087)

(cherry picked from commit f83bb8ada0)

* Translate into Chinese: blog/2018/aws-nlb/index.md (#2130)

(cherry picked from commit 9e77fa4cd0)

* Translate: all keywords in front matters (#2135)

* Translate: all keywords

* fixed typo

* removed from terms: vm, config -> configuration

(cherry picked from commit 02392ff87e)

* Initial checkin of the setup ops guide. (#2078) (#2139)

(cherry picked from commit 3b529341a1)

* Document DestinationRule mTLS conflict (#2131)

* Document TLS conflict in DRs

* spelling errors

* lint errors

* tweak title

* tweak title

* address review comments

* Cherry-pick latest changes from master (#2143)

* Add twitch livestream blog post (#2140)

This is for the all-day Istio livestream on August 17th.

(cherry picked from commit 41d3caa211)

* Make the big boxes on the home page clickable.

(cherry picked from commit 387e54c299)

* Cherry-pick latest changes from master. (#2159)

* Fix broken Mixer Adapter Dev Guide links (#2144)

Signed-off-by: Venil Noronha <veniln@vmware.com>

(cherry picked from commit 5342ab2a80)

* Fix some more stale wiki links. (#2145)

(cherry picked from commit b641486002)

* translate tasks/traffic-management/egress-gateway to Chinese (#2146)

* translate tasks/traffic-management/egress-gateway to Chinese

* Fix internal link paths

* Remove whitespace

* Delete whitespace

(cherry picked from commit 75baef98ec)

* Improve linting (#2148)

- We now detect text blocks that are incorrectly indented.

- We now detect image captions that end in a period.

- We now detect page descriptions that don't end in a period.

- CircleCI now runs linting without minifying HTML first, improving performance and
error output.

- In CircleCI, we now have a per-build cache for HTML proofer output. This
helps reduce the frequency of link timeout errors.

- Fix errors flagged by the above new lint checks.

(cherry picked from commit fd290dc73e)

* translate: setup-kubernetes-requirements (#2147)

(cherry picked from commit 0d98eee9c4)

* Translate into Chinese: blog/2017/0.2-announcement/index.md (#2150)

(cherry picked from commit a34cfc063d)

* Translate into Chinese:  content/blog/2018/aws-nlb/index.md Sync/Update (#2153)

* Translate into Chinese: blog/2017/0.2-announcement/index.md

* Update index.md

* Update _index.md

(cherry picked from commit 4ee8e44cb6)

* Re-translate the /zh/blog/2018/egress-tcp/ page (#2151)

* Re-translate /zh/blog/2018/egress-tcp/ to reflect changes to the content/blog/2018/egress-tcp/index.md file between commits fd290dc73e and 82eb2c21a3

* fix unavailable link (#2151)

(cherry picked from commit 0b313e373b)

* Flip conditional polarity to remove useless work when linting.

(cherry picked from commit 4424563918)

* Enable extra lint stuff (#2158)

(cherry picked from commit 0b2ea1d38e)

* Fix indent, given new linting rules.
Commit 54c0506698 (parent 0b2ea1d38e), authored by Martin Taillefer on 2018-08-06 12:11:57 -07:00, committed by GitHub.
9 changed files with 235 additions and 26 deletions


@@ -55,7 +55,7 @@ Below is our list of existing features and their current phases. This informatio
| [Distributed Tracing to Zipkin / Jaeger](/docs/tasks/telemetry/distributed-tracing/) | Alpha
| [Service Tracing](/docs/tasks/telemetry/distributed-tracing/) | Alpha
| [Logging with Fluentd](/docs/tasks/telemetry/fluentd/) | Alpha
| Trace Sampling | Alpha
| [Trace Sampling](/docs/tasks/telemetry/distributed-tracing/#trace-sampling) | Alpha
### Security and Policy Enforcement


@@ -4,10 +4,10 @@ subtitle: The production ready service mesh
description: Istio is ready for production use with its 1.0 release.
publishdate: 2018-07-31
attribution: The Istio Team
weight: 84
weight: 85
---
Today, we're excited to announce [Istio 1.0](/about/notes/1.0). It's been a little over a year since our initial 0.1 release. Since then, Istio has evolved significantly with the help of a thriving and growing community of contributors and users. We've now reached the point where many companies have successfully adopted Istio in production and have gotten real value from the insight and control it provides over their deployments. We've helped large enterprises and fast-moving startups like [eBay](https://www.ebay.com/), [Auto Trader UK](https://www.autotrader.co.uk/), [Descartes Labs](http://www.descarteslabs.com/), [HP FitStation](https://www.fitstation.com/), [Namely](https://www.namely.com/), [PubNub](https://www.pubnub.com/) & [Trulia](https://www.trulia.com/) use Istio to connect, manage and secure their services from the ground up. Shipping this release as 1.0 is recognition that we've built a core set of functionality that our users can rely on for production use.
Today, we're excited to announce [Istio 1.0](/about/notes/1.0). It's been a little over a year since our initial 0.1 release. Since then, Istio has evolved significantly with the help of a thriving and growing community of contributors and users. We've now reached the point where many companies have successfully adopted Istio in production and have gotten real value from the insight and control it provides over their deployments. We've helped large enterprises and fast-moving startups like [eBay](https://www.ebay.com/), [Auto Trader UK](https://www.autotrader.co.uk/), [Descartes Labs](http://www.descarteslabs.com/), [HP FitStation](https://www.fitstation.com/), [JUSPAY](https://juspay.in), [Namely](https://www.namely.com/), [PubNub](https://www.pubnub.com/) and [Trulia](https://www.trulia.com/) use Istio to connect, manage and secure their services from the ground up. Shipping this release as 1.0 is recognition that we've built a core set of functionality that our users can rely on for production use.
## Ecosystem


@@ -1,8 +1,9 @@
---
title: Istio is a Game Changer for HP Platform
title: Istio a Game Changer for HP's FitStation Platform
subtitle: How HP is building its next-generation footwear personalization platform on Istio.
publishdate: 2018-07-31
attribution: Steven Ceuppens, Chief Software Architect @ HP FitStation, Open Source Advocate / Contributor
weight: 85
weight: 84
---
The FitStation team at HP strongly believes in the future of Kubernetes, BPF and service-mesh as the next standards in cloud infrastructure. We are also very happy to see Istio coming to its official Istio 1.0 release -- thanks to the joint collaboration that started at Google, IBM and Lyft beginning in May 2017.


@@ -65,7 +65,7 @@ cluster to the Istio control plane.
{{< text bash >}}
$ export PILOT_POD_IP=$(kubectl -n istio-system get pod -l istio=pilot -o jsonpath='{.items[0].status.podIP}')
$ export POLICY_POD_IP=$(kubectl -n istio-system get pod -l istio=mixer -o jsonpath='{.items[0].status.podIP}')
$ export POLICY_POD_IP=$(kubectl -n istio-system get pod -l istio-mixer-type=policy -o jsonpath='{.items[0].status.podIP}')
$ export STATSD_POD_IP=$(kubectl -n istio-system get pod -l istio=statsd-prom-bridge -o jsonpath='{.items[0].status.podIP}')
$ export TELEMETRY_POD_IP=$(kubectl -n istio-system get pod -l istio-mixer-type=telemetry -o jsonpath='{.items[0].status.podIP}')
$ export ZIPKIN_POD_IP=$(kubectl -n istio-system get pod -l app=jaeger -o jsonpath='{range .items[*]}{.status.podIP}{end}')
@@ -91,7 +91,7 @@ Proceed to one of the options for connecting the remote cluster to the local clu
--set global.remoteTelemetryAddress=${TELEMETRY_POD_IP} \
--set global.proxy.envoyStatsd.enabled=true \
--set global.proxy.envoyStatsd.host=${STATSD_POD_IP} \
${ZIPKIN_POD_IP:+ --set global.remoteZipkinAddress=${ZIPKIN_POD_IP}} > $HOME/istio-remote.yaml
--set global.remoteZipkinAddress=${ZIPKIN_POD_IP}} > $HOME/istio-remote.yaml
{{< /text >}}
1. Create a namespace for remote Istio.
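A rough sketch of that step and the follow-on apply (the `istio-system` namespace name on the remote cluster is an assumption; the `$HOME/istio-remote.yaml` path comes from the command above):
{{< text bash >}}
$ kubectl create ns istio-system
$ kubectl apply -f $HOME/istio-remote.yaml -n istio-system
{{< /text >}}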


@@ -36,5 +36,5 @@ Replace `<cluster-name>` with the name of the cluster you want to use in the fol
## IBM Cloud Private
[Configure `kubectl`](https://www.ibm.com/support/knowledgecenter/SSBS6K_2.1.0/manage_cluster/cfc_cli.html)
[Configure `kubectl`](https://www.ibm.com/support/knowledgecenter/SSBS6K_2.1.0.3/manage_cluster/cfc_cli.html)
to access the IBM Cloud Private Cluster.


@@ -55,13 +55,113 @@ so the configuration to enable rate limiting on both adapters is the same.
* `memquota adapter` defines memquota adapter configuration.
* `quota rule` defines when quota instance is dispatched to the memquota adapter.
Run the following command to enable rate limits.
Run the following command to enable rate limits using memquota:
{{< text bash >}}
$ kubectl apply -f @samples/bookinfo/policy/mixer-rule-productpage-ratelimit.yaml@
{{< /text >}}
1. Confirm the `memquota` handler was created:
Or
Save the following YAML file as `redisquota.yaml`. Replace [rate_limit_algorithm](/docs/reference/config/policy-and-telemetry/adapters/redisquota/#Params-QuotaAlgorithm) and
[redis_server_url](/docs/reference/config/policy-and-telemetry/adapters/redisquota/#Params) with values for your configuration.
{{< text yaml >}}
apiVersion: "config.istio.io/v1alpha2"
kind: redisquota
metadata:
  name: handler
  namespace: istio-system
spec:
  quotas:
  - name: requestcount.quota.istio-system
    maxAmount: 500
    validDuration: 1s
    bucketDuration: 500ms
    rateLimitAlgorithm: <rate_limit_algorithm>
    # The first matching override is applied.
    # A requestcount instance is checked against override dimensions.
    overrides:
    # The following override applies to 'reviews' regardless
    # of the source.
    - dimensions:
        destination: reviews
      maxAmount: 1
      validDuration: 5s
    # The following override applies to 'productpage' when
    # the source is a specific ip address.
    - dimensions:
        destination: productpage
        source: "10.28.11.20"
      maxAmount: 500
      validDuration: 1s
    # The following override applies to 'productpage' regardless
    # of the source.
    - dimensions:
        destination: productpage
      maxAmount: 2
      validDuration: 5s
  redisServerUrl: <redis_server_url>
  connectionPoolSize: 10
---
apiVersion: "config.istio.io/v1alpha2"
kind: quota
metadata:
  name: requestcount
  namespace: istio-system
spec:
  dimensions:
    source: request.headers["x-forwarded-for"] | "unknown"
    destination: destination.labels["app"] | destination.workload.name | "unknown"
    destinationVersion: destination.labels["version"] | "unknown"
---
apiVersion: config.istio.io/v1alpha2
kind: rule
metadata:
  name: quota
  namespace: istio-system
spec:
  # quota only applies if you are not logged in.
  # match: match(request.headers["cookie"], "user=*") == false
  actions:
  - handler: handler.redisquota
    instances:
    - requestcount.quota
---
apiVersion: config.istio.io/v1alpha2
kind: QuotaSpec
metadata:
  name: request-count
  namespace: istio-system
spec:
  rules:
  - quotas:
    - charge: 1
      quota: requestcount
---
apiVersion: config.istio.io/v1alpha2
kind: QuotaSpecBinding
metadata:
  name: request-count
  namespace: istio-system
spec:
  quotaSpecs:
  - name: request-count
    namespace: istio-system
  services:
  - name: productpage
    namespace: default
  # - service: '*' # Uncomment this to bind *all* services to request-count
---
{{< /text >}}
Run the following command to enable rate limits using redisquota:
{{< text bash >}}
$ kubectl apply -f redisquota.yaml
{{< /text >}}
1. Confirm the `memquota` handler was created:
{{< text bash yaml >}}
$ kubectl -n istio-system get memquota handler -o yaml
@@ -93,7 +193,51 @@ so the configuration to enable rate limiting on both adapters is the same.
`validDuration` field), if the `destination` is `reviews`.
* The second is `2` requests every `5s`, if the `destination` is `productpage`.
When a request is sent to the first matching override is picked (reading from top to bottom).
When a request is processed, the first matching override is picked (reading from top to bottom).
Or
Confirm the `redisquota` handler was created:
{{< text bash yaml >}}
$ kubectl -n istio-system get redisquota handler -o yaml
apiVersion: config.istio.io/v1alpha2
kind: redisquota
metadata:
  name: handler
  namespace: istio-system
spec:
  connectionPoolSize: 10
  quotas:
  - name: requestcount.quota.istio-system
    maxAmount: 500
    validDuration: 1s
    bucketDuration: 500ms
    rateLimitAlgorithm: ROLLING_WINDOW
    overrides:
    - dimensions:
        destination: reviews
      maxAmount: 1
    - dimensions:
        destination: productpage
        source: 10.28.11.20
      maxAmount: 500
    - dimensions:
        destination: productpage
      maxAmount: 2
{{< /text >}}
The `redisquota` handler defines 4 different rate limit schemes. The default,
if no overrides match, is `500` requests per one second (`1s`). It uses the `ROLLING_WINDOW`
algorithm for quota checks and therefore defines a `bucketDuration` of 500ms for that
algorithm. Three overrides are also defined:
* The first is `1` request (the `maxAmount` field), if the `destination` is `reviews`.
* The second is `500` requests, if the `destination` is `productpage` and the `source`
is `10.28.11.20`.
* The third is `2`, if the `destination` is `productpage`.
When a request is processed, the first matching override is picked (reading from top to bottom).
1. Confirm the `quota instance` was created:
@@ -111,7 +255,7 @@ so the configuration to enable rate limiting on both adapters is the same.
destinationVersion: destination.labels["version"] | "unknown"
{{< /text >}}
The `quota` template defines three dimensions that are used by `memquota`
The `quota` template defines three dimensions that are used by `memquota` or `redisquota`
to set overrides on requests that match certain attributes. The
`destination` will be set to the first non-empty value in
`destination.labels["app"]`, `destination.service.host`, or `"unknown"`. For
@@ -134,10 +278,10 @@ so the configuration to enable rate limiting on both adapters is the same.
- requestcount.quota
{{< /text >}}
The `rule` tells Mixer to invoke the `handler.memquota` handler (created
The `rule` tells Mixer to invoke the `handler.memquota` or `handler.redisquota` handler (created
above) and pass it the object constructed using the instance
`requestcount.quota` (also created above). This maps the
dimensions from the `quota` template to `memquota` handler.
dimensions from the `quota` template to the `memquota` or `redisquota` handler.
1. Confirm the `QuotaSpec` was created:
@@ -213,7 +357,7 @@ spec:
- requestcount.quota
{{< /text >}}
`memquota` adapter is now dispatched only if `user=<username>` cookie is absent from the request.
The `memquota` or `redisquota` adapter is now dispatched only if the `user=<username>` cookie is absent from the request.
This ensures that a logged in user is not subject to this quota.
1. Verify that rate limit does not apply to a logged in user.
@@ -239,9 +383,12 @@ returns status `HTTP 429` to the caller.
The `memquota` adapter uses a sliding window of sub-second resolution to
enforce rate limits.
The `redisquota` adapter can be configured to use either the [`ROLLING_WINDOW` or `FIXED_WINDOW`](/docs/reference/config/policy-and-telemetry/adapters/redisquota/#Params-QuotaAlgorithm)
algorithms to enforce rate limits.
The `maxAmount` in the adapter configuration sets the default limit for all
counters associated with a quota instance. This default limit applies if a quota
override does not match the request. The `memquota` adapter selects the first
override does not match the request. The `memquota` or `redisquota` adapter selects the first
override that matches a request. An override need not specify all quota
dimensions. In the example, the 0.2 qps override is selected by matching only
three out of four quota dimensions.
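As a quick way to see the limit being enforced (a sketch; `$GATEWAY_URL` is assumed to point at the Bookinfo ingress gateway as in the other Bookinfo tasks), send a burst of requests to the product page and watch for `429` responses once the quota is exhausted:
{{< text bash >}}
$ for i in $(seq 1 10); do curl -s -o /dev/null -w "%{http_code}\n" http://$GATEWAY_URL/productpage; done
{{< /text >}}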
@@ -252,12 +399,20 @@ namespace.
## Cleanup
1. Remove the rate limit configuration:
1. If using `memquota`, remove the `memquota` rate limit configuration:
{{< text bash >}}
$ kubectl delete -f @samples/bookinfo/policy/mixer-rule-ratings-ratelimit.yaml@
{{< /text >}}
Or
If using `redisquota`, remove the `redisquota` rate limit configuration:
{{< text bash >}}
$ kubectl delete -f redisquota.yaml
{{< /text >}}
1. Remove the application routing rules:
{{< text bash >}}


@@ -151,7 +151,6 @@ Re-running the testing command as above, you will see all requests between Istio
$ for from in "foo" "bar"; do for to in "foo" "bar"; do kubectl exec $(kubectl get pod -l app=sleep -n ${from} -o jsonpath={.items..metadata.name}) -c sleep -n ${from} -- curl http://httpbin.${to}:8000/ip -s -o /dev/null -w "sleep.${from} to httpbin.${to}: %{http_code}\n"; done; done
sleep.foo to httpbin.foo: 200
sleep.foo to httpbin.bar: 200
sleep.foo to httpbin.legacy: 503
sleep.bar to httpbin.foo: 200
sleep.bar to httpbin.bar: 200
{{< /text >}}


@@ -56,16 +56,13 @@ The page should look something like this:
caption="Detailed Trace View"
>}}
As you can see, the trace is comprised of spans,
As you can see, the trace is comprised of a set of spans,
where each span corresponds to a Bookinfo service invoked during the execution of a `/productpage` request.
Although every service has the same label, `istio-proxy`, because the tracing is being done by
the Istio sidecar (Envoy proxy) which wraps the call to the actual service,
the label of the destination (to the right) identifies the service for which the time is represented by each line.
The call from `productpage` to `reviews` is represented by two spans in the trace. The first of the two spans (labeled `productpage
reviews.default.svc.cluster.local:9080/`) represents the client-side span for the call. It took 24.13ms . The second span
(labeled `reviews reviews.default.svc.cluster.local:9080/`) is a child of the first span and represents the server-side
span for the call. It took 22.99ms .
Every RPC is represented by two spans in the trace. For example, the call from `productpage` to `reviews` starts
with the span labeled `productpage reviews.default.svc.cluster.local:9080/`, which represents the client-side
span for the call. It took 24.13ms. The second span (labeled `reviews reviews.default.svc.cluster.local:9080/`)
is a child of the first span and represents the server-side span for the call. It took 22.99ms.
The trace for the call to the `reviews` service reveals two subsequent RPCs in the trace. The first is to the `istio-policy`
service, reflecting the server-side Check call made for the service to authorize access. The second is the call out to
@@ -137,6 +134,34 @@ public Response bookReviewsById(@PathParam("productId") int productId,
When you make downstream calls in your applications, make sure to include these headers.
## Trace sampling
Istio captures a trace for all requests by default. For example, when
using the Bookinfo sample application above, every time you access
`/productpage` you see a corresponding trace in the Jaeger
dashboard. This sampling rate is suitable for a test or low traffic
mesh. For a high traffic mesh you can lower the trace sampling
percentage in one of two ways:
* During the mesh setup, use the Helm option `pilot.traceSampling` to
set the percentage of trace sampling. See the
[Helm Install](/docs/setup/kubernetes/helm-install/) documentation for
details on setting options.
* In a running mesh, edit the `istio-pilot` deployment and
change the environment variable with the following steps:
1. To open your text editor with the deployment configuration file
loaded, run the following command:
{{< text bash >}}
$ kubectl -n istio-system edit deploy istio-pilot
{{< /text >}}
1. Find the `PILOT_TRACE_SAMPLING` environment variable, and change
the `value:` to your desired percentage.
In both cases, valid values are from 0.0 to 100.0 with a precision of 0.01.
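For example, a rough sketch of both routes (the release name, chart path, and the `kubectl set env` shortcut are assumptions; only `pilot.traceSampling` and `PILOT_TRACE_SAMPLING` come from the text above):
{{< text bash >}}
# At install time, via the Helm chart option (Helm 2 template workflow assumed):
$ helm template install/kubernetes/helm/istio --name istio --namespace istio-system \
    --set pilot.traceSampling=1.0 > istio-sampling.yaml
$ kubectl apply -f istio-sampling.yaml

# On a running mesh, as a non-interactive alternative to editing the deployment:
$ kubectl -n istio-system set env deployment/istio-pilot PILOT_TRACE_SAMPLING=1.0
{{< /text >}}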
## Cleanup
* Remove any `kubectl port-forward` processes that may still be running:


@@ -6,6 +6,35 @@ weight: 5
This section provides specific deployment or configuration guidelines to avoid networking or traffic management issues.
## 503 errors after setting destination rule
If requests to a service immediately start generating HTTP 503 errors after you apply a `DestinationRule`
and the errors continue until you remove or revert the `DestinationRule`, then the `DestinationRule` is probably
causing a TLS conflict for the service.
For example, if you configure mutual TLS in the cluster globally, the `DestinationRule` must include the following `trafficPolicy`:
{{< text yaml >}}
trafficPolicy:
  tls:
    mode: ISTIO_MUTUAL
{{< /text >}}
Otherwise, the mode defaults to `DISABLED`, causing client proxy sidecars to make plain HTTP requests
instead of TLS-encrypted requests. Thus, the requests conflict with the server proxy because the server proxy expects
encrypted requests.
To confirm there is a conflict, check whether the `STATUS` field in the output of the `istioctl authn tls-check` command
is set to `CONFLICT` for your service. For example:
{{< text bash >}}
$ istioctl authn tls-check httpbin.default.svc.cluster.local
HOST:PORT STATUS SERVER CLIENT AUTHN POLICY DESTINATION RULE
httpbin.default.svc.cluster.local:8000 CONFLICT mTLS HTTP default/ httpbin/default
{{< /text >}}
Whenever you apply a `DestinationRule`, ensure the `trafficPolicy` TLS mode matches the global server configuration.
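For example, a minimal `DestinationRule` that avoids the conflict for the `httpbin` service shown above might look like this (a sketch; the rule name is an assumption, while the host comes from the `tls-check` output):
{{< text yaml >}}
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: httpbin   # hypothetical name for illustration
spec:
  host: httpbin.default.svc.cluster.local
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL
{{< /text >}}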
## 503 errors while reconfiguring service routes
When setting route rules to direct traffic to specific versions (subsets) of a service, care must be taken to ensure