mirror of https://github.com/istio/istio.io.git
commit c8ac8eb91c (parent b77085ccad)

@@ -306,7 +306,7 @@ This example shows there are many variables, based on whether the automatic side
- default policy (Configured in the ConfigMap `istio-sidecar-injector`)
- per-pod override annotation (`sidecar.istio.io/inject`)

The [injection status table](/docs/ops/troubleshooting/injection/) shows a clear picture of the final injection status based on the value of the above variables.
The [injection status table](/docs/ops/common-problems/injection/) shows a clear picture of the final injection status based on the value of the above variables.

## Traffic flow from application container to sidecar proxy
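A quick way to inspect the two injection inputs discussed above on a live cluster is sketched below (the `istio-injection` namespace label and the per-pod annotation are the standard injection controls; `<pod>` is a placeholder):

{{< text bash >}}
$ kubectl get namespace -L istio-injection
$ kubectl get pod <pod> -o yaml | grep sidecar.istio.io/inject
{{< /text >}}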
@@ -55,7 +55,7 @@ operators can easily expand the set of collected proxy metrics when required. Th
overall cost of monitoring across the mesh.

The [Envoy documentation site](https://www.envoyproxy.io/docs/envoy/latest/) includes a detailed overview of [Envoy statistics collection](https://www.envoyproxy.io/docs/envoy/latest/intro/arch_overview/observability/statistics.html?highlight=statistics).
The operations guide on [Envoy Statistics](/docs/ops/troubleshooting/proxy-cmd/) provides more information on controlling the generation of proxy-level metrics.
The operations guide on [Envoy Statistics](/docs/ops/diagnostic-tools/proxy-cmd/) provides more information on controlling the generation of proxy-level metrics.

Example proxy-level Metrics:
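For a quick look at raw proxy-level statistics on a single workload, one option is to query the sidecar's Envoy admin endpoint directly (a sketch; it assumes the default admin port 15000 and that `curl` is available in the proxy container, with `<pod>` as a placeholder):

{{< text bash >}}
$ kubectl exec <pod> -c istio-proxy -- curl -s localhost:15000/stats | head -n 20
{{< /text >}}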
@@ -1,7 +1,7 @@
---
title: Troubleshooting
title: Common Problems
description: Describes how to identify and resolve common problems in Istio.
weight: 50
weight: 70
keywords: [ops]
aliases:
- /help/ops/troubleshooting
@@ -1,7 +1,10 @@
---
title: Sidecar Injection Problems
description: Resolve common problems with Istio's use of Kubernetes webhooks for automatic sidecar injection.
weight: 5
force_inline_toc: true
weight: 40
aliases:
- /docs/ops/troubleshooting/injection
---

## The result of sidecar injection was not what I expected
@@ -219,3 +222,10 @@ One workaround is to remove the proxy settings from the `kube-apiserver` manifes

An [issue](https://github.com/kubernetes/kubeadm/issues/666) was filed with Kubernetes related to this and has since been closed.
[https://github.com/kubernetes/kubernetes/pull/58698#discussion_r163879443](https://github.com/kubernetes/kubernetes/pull/58698#discussion_r163879443)

## Limitations for using Tcpdump in pods

Tcpdump doesn't work in the sidecar pod - the container doesn't run as root. However, any other container in the same pod will see all the packets, since the
network namespace is shared. `iptables` will also see the pod-wide configuration.

Communication between Envoy and the app happens on 127.0.0.1, and is not encrypted.
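Because the network namespace is shared, packet capture can instead be run from any other container in the same pod; a minimal sketch, assuming that container's image ships `tcpdump` and has the privileges to use it (`<pod>` and `<app-container>` are placeholders):

{{< text bash >}}
$ kubectl exec -it <pod> -c <app-container> -- tcpdump -i lo -n
{{< /text >}}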
@@ -1,10 +1,12 @@
---
title: Network Problems
description: Tools and techniques to address common Istio traffic management and network problems.
weight: 4
title: Networking Problems
description: Techniques to address common Istio traffic management and network problems.
force_inline_toc: true
weight: 10
aliases:
- /help/ops/traffic-management/troubleshooting
- /help/ops/troubleshooting/network-issues
- /help/ops/traffic-management/troubleshooting
- /help/ops/troubleshooting/network-issues
- /docs/ops/troubleshooting/network-issues
---

## Requests are rejected by Envoy
@@ -1,15 +1,17 @@
---
title: Missing Metrics
description: Diagnose problems where metrics are not being collected.
weight: 29
title: Observability Problems
description: Dealing with telemetry collection issues.
force_inline_toc: true
weight: 30
aliases:
- /help/ops/telemetry/missing-metrics
- /help/ops/troubleshooting/missing-metrics

- /docs/ops/troubleshooting/grafana
- /docs/ops/troubleshooting/missing-traces
---

The procedures below help you diagnose problems where metrics
you are expecting to see reported and not being collected.
## Expected metrics are not being collected

The following procedure helps you diagnose problems where metrics
you are expecting to see reported are not being collected.

The expected flow for metrics is:
@@ -25,7 +27,7 @@ The Mixer default installations include a Prometheus adapter and the configurati

If the Istio Dashboard or the Prometheus queries don’t show the expected metrics, any step of the flow above may present an issue. The following sections provide instructions to troubleshoot each step.

## Verify Mixer is receiving Report calls
### Verify Mixer is receiving Report calls

Mixer generates metrics to monitor its own behavior. The first step is to check these metrics:
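Checking those self-monitoring metrics usually amounts to port-forwarding to the Mixer self-monitoring port and scraping its Prometheus endpoint; a sketch, assuming the `istio-telemetry` deployment exposes self-monitoring on port 15014 (the port differs between Istio releases):

{{< text bash >}}
$ kubectl -n istio-system port-forward deploy/istio-telemetry 15014:15014 &
$ curl -s localhost:15014/metrics | grep -i report | head
{{< /text >}}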
@@ -45,7 +47,7 @@ Mixer generates metrics to monitor its own behavior. The first step is to check

1. In this case, ensure you integrated the services properly into the mesh. You can achieve this task with either [automatic or manual sidecar injection](/docs/setup/additional-setup/sidecar-injection/).

## Verify the Mixer rules exist
### Verify the Mixer rules exist

In Kubernetes environments, issue the following command:
@@ -64,7 +66,7 @@ If the output shows no rules named `promhttp` or `promtcp`, then the Mixer confi

For reference, please consult the [default rules for Prometheus]({{< github_file >}}/install/kubernetes/helm/istio/charts/mixer/templates/config.yaml).

## Verify the Prometheus handler configuration exists
### Verify the Prometheus handler configuration exists

1. In Kubernetes environments, issue the following command:
@@ -87,7 +89,7 @@ For reference, please consult the [default rules for Prometheus]({{< github_file

For reference, please consult the [default handler configuration for Prometheus]({{< github_file >}}/install/kubernetes/helm/istio/charts/mixer/templates/config.yaml).

## Verify Mixer metric instances configuration exists
### Verify Mixer metric instances configuration exists

1. In Kubernetes environments, issue the following command:
@@ -105,7 +107,7 @@ For reference, please consult the [default rules for Prometheus]({{< github_file

For reference, please consult the [default instances configuration for metrics]({{< github_file >}}/install/kubernetes/helm/istio/charts/mixer/templates/config.yaml).

## Verify there are no known configuration errors
### Verify there are no known configuration errors

1. To establish a connection to the Istio-telemetry self-monitoring endpoint, set up a port-forward to the Istio-telemetry self-monitoring port as described in
[Verify Mixer is receiving Report calls](#verify-mixer-is-receiving-report-calls).
@@ -134,7 +136,7 @@ configured correctly.
If any of those metrics have a value, confirm that the metric value with the largest configuration ID is 0. This will verify that Mixer has generated no errors
in processing the most recent configuration as supplied.

## Verify Mixer is sending metric instances to the Prometheus adapter
### Verify Mixer is sending metric instances to the Prometheus adapter

1. Establish a connection to the `istio-telemetry` self-monitoring endpoint. Set up a port-forward to the `istio-telemetry` self-monitoring port as described in
[Verify Mixer is receiving Report calls](#verify-mixer-is-receiving-report-calls).
@@ -163,7 +165,7 @@ in processing the most recent configuration as supplied.
$ kubectl -n istio-system logs <istio-telemetry pod> -c mixer
{{< /text >}}

## Verify Prometheus configuration
### Verify Prometheus configuration

1. Connect to the Prometheus UI
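Connecting to the Prometheus UI is typically done with a port-forward to the Prometheus instance bundled with Istio; a sketch, assuming the service is named `prometheus` in `istio-system` and listens on port 9090:

{{< text bash >}}
$ kubectl -n istio-system port-forward svc/prometheus 9090:9090 &
$ # then browse to http://localhost:9090
{{< /text >}}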
@@ -192,3 +194,38 @@ in processing the most recent configuration as supplied.
static_configs:
- targets: ['istio-mixer.istio-system:42422']
{{< /text >}}

## No traces appearing in Zipkin when running Istio locally on Mac

Istio is installed and everything seems to be working except there are no traces showing up in Zipkin when there
should be.

This may be caused by a known [Docker issue](https://github.com/docker/for-mac/issues/1260) where the time inside
containers may skew significantly from the time on the host machine. If this is the case,
when you select a very long date range in Zipkin you will see the traces appearing as much as several days too early.

You can also confirm this problem by comparing the date inside a Docker container to outside:

{{< text bash >}}
$ docker run --entrypoint date gcr.io/istio-testing/ubuntu-16-04-slave:latest
Sun Jun 11 11:44:18 UTC 2017
{{< /text >}}

{{< text bash >}}
$ date -u
Thu Jun 15 02:25:42 UTC 2017
{{< /text >}}

To fix the problem, you'll need to shut down and then restart Docker before reinstalling Istio.

## Missing Grafana output

If you're unable to get Grafana output when connecting from a local web client to a remotely hosted Istio, you
should validate the client and server date and time match.

The time of the web client (e.g. Chrome) affects the output from Grafana. A simple solution
to this problem is to verify a time synchronization service is running correctly within the
Kubernetes cluster and the web client machine is also correctly using a time synchronization
service. Some common time synchronization systems are NTP and Chrony. This is especially
problematic in engineering labs with firewalls. In these scenarios, NTP may not be configured
properly to point at the lab-based NTP services.
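A quick way to compare the clock inside the cluster with the clock on the client machine (a sketch; it assumes an `alpine` image can be pulled into the cluster):

{{< text bash >}}
$ date -u
$ kubectl run -i --rm --restart=Never time-check --image=alpine -- date -u
{{< /text >}}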
@@ -1,10 +1,16 @@
---
title: Security Problems
description: Tools and techniques to address common Istio authentication, authorization, and general security-related problems.
weight: 5
description: Techniques to address common Istio authentication, authorization, and general security-related problems.
force_inline_toc: true
weight: 20
keywords: [security,citadel]
aliases:
- /help/ops/security/repairing-citadel
- /help/ops/troubleshooting/repairing-citadel
- /docs/ops/troubleshooting/repairing-citadel
---

## End-user Authentication Fails
## End-user authentication fails

With Istio, you can enable authentication for end users. Currently, the end user credential supported by the Istio authentication policy is JWT. The following is a guide for troubleshooting the end user JWT authentication.
@@ -51,7 +57,7 @@ With Istio, you can enable authentication for end users. Currently, the end user
[2018-07-04T19:13:40.463Z] "GET /ip HTTP/1.1" 401 - 0 29 0 - "-" "curl/7.35.0" "9badd659-fa0e-9ca9-b4c0-9ac225571929" "httpbin.foo:8000" "-"
{{< /text >}}

## Authorization is Too Restrictive
## Authorization is too restrictive

When you first enable authorization for a service, all requests are denied by default. After you add one or more authorization policies, then
matching requests should flow through. If all requests continue to be denied, you can try the following:
@@ -70,10 +76,10 @@ for TCP services. Otherwise, Istio ignores the policies as if they didn't exist.
`default` namespace (`metadata/namespace` line should be `default`). For non-Kubernetes environments, all `ServiceRoles` and `ServiceRoleBindings`
for a mesh should be in the same namespace.

1. Visit [Ensure Authorization is Enabled Correctly](/docs/ops/troubleshooting/security-issues/#ensure-authorization-is-enabled-correctly)
1. Visit [Ensure Authorization is Enabled Correctly](#ensure-authorization-is-enabled-correctly)
to find out the exact cause.

## Authorization is Too Permissive
## Authorization is too permissive

If authorization checks are enabled for a service and yet requests to the
service aren't being blocked, then authorization was likely not enabled
@@ -93,10 +99,10 @@ successfully. To verify, follow these steps:
You can disable Pilot's authorization plug-in if there is an error pushing
authorization policy to Envoy.

1. Visit [Ensure Authorization is Enabled Correctly](/docs/ops/troubleshooting/security-issues/#ensure-authorization-is-enabled-correctly)
1. Visit [Ensure Authorization is Enabled Correctly](#ensure-authorization-is-enabled-correctly)
to find out the exact cause.

## Ensure Authorization is Enabled Correctly
## Ensure authorization is enabled correctly

The `ClusterRbacConfig` default cluster level singleton custom resource controls the authorization functionality globally.
@@ -117,7 +123,7 @@ authorization functionality and ignores all policies.
1. If there is more than one `ClusterRbacConfig` instance, remove any additional `ClusterRbacConfig` instances and
ensure **only one** instance is named `default`.

## Ensure Pilot Accepts the Policies
## Ensure Pilot accepts the policies

Pilot converts and distributes your authorization policies to the proxies. The following steps help
you ensure Pilot is working as expected:
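To confirm that exactly one `ClusterRbacConfig` named `default` exists, the custom resources can be listed directly; a sketch (the fully qualified resource name is used to avoid short-name clashes):

{{< text bash >}}
$ kubectl get clusterrbacconfigs.rbac.istio.io
$ # expect a single entry named "default"
{{< /text >}}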
@@ -182,7 +188,7 @@ you ensure Pilot is working as expected:
- A config for `productpage.default.svc.cluster.local` and Istio will allow anyone to access it
with GET method.

## Ensure Pilot Distributes Policies to Proxies Correctly
## Ensure Pilot distributes policies to proxies correctly

Pilot distributes the authorization policies to proxies. The following steps help you ensure Pilot
is working as expected:
@@ -255,7 +261,7 @@ with rules that allows anyone to access it via `GET` method. The `shadow_rules`
},
{{< /text >}}

## Ensure Proxies Enforce Policies Correctly
## Ensure proxies enforce policies correctly

Proxies eventually enforce the authorization policies. The following steps help you ensure the proxy
is working as expected:
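One way to check whether a proxy actually received RBAC configuration is to pull its Envoy config dump and search for the RBAC filter; a sketch, assuming the Envoy admin port 15000, `curl` in the proxy container, and that the filter name contains `rbac` (it varies slightly between Envoy versions):

{{< text bash >}}
$ kubectl exec <pod> -c istio-proxy -- curl -s localhost:15000/config_dump | grep -i rbac | head
{{< /text >}}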
@@ -342,10 +348,10 @@ The `shadow denied` has no effect and you can ignore it safely.
...
{{< /text >}}

## Keys and Certificates errors
## Keys and certificates errors

If you suspect that some of the keys and/or certificates used by Istio aren't correct, the
first step is to ensure that [Citadel is healthy](/docs/ops/troubleshooting/repairing-citadel/).
first step is to ensure that [Citadel is healthy](#repairing-citadel).

You can then verify that Citadel is actually generating keys and certificates:
@@ -520,7 +526,58 @@ Certificate:

## Mutual TLS errors

If you suspect problems with mutual TLS, first ensure that [Citadel is healthy](/docs/ops/troubleshooting/repairing-citadel/), and
second ensure that [keys and certificates are being delivered](/docs/ops/troubleshooting/security-issues/) to sidecars properly.
If you suspect problems with mutual TLS, first ensure that [Citadel is healthy](#repairing-citadel), and
second ensure that [keys and certificates are being delivered](#keys-and-certificates-errors) to sidecars properly.

If everything appears to be working so far, the next step is to verify that the right [authentication policy](/docs/tasks/security/authn-policy/) is applied and the right destination rules are in place.
If everything appears to be working so far, the next step is to verify that the right [authentication policy](/docs/tasks/security/authn-policy/)
is applied and the right destination rules are in place.

## Citadel is not behaving properly {#repairing-citadel}

{{< warning >}}
Citadel does not support multiple instances. Running multiple Citadel instances
may introduce race conditions and lead to system outages.
{{< /warning >}}

{{< warning >}}
Workloads with new Kubernetes service accounts cannot be started when Citadel is
disabled for maintenance since they can't get their certificates generated.
{{< /warning >}}

Citadel is not a critical data plane component. The default workload certificate lifetime is 3
months. Certificates will be rotated by Citadel before they expire. If Citadel is disabled for
short maintenance periods, existing mutual TLS traffic will not be affected.

If you suspect Citadel isn't working properly, verify the status of the `istio-citadel` pod:

{{< text bash >}}
$ kubectl get pod -l istio=citadel -n istio-system
NAME                            READY   STATUS    RESTARTS   AGE
istio-citadel-ff5696f6f-ht4gq   1/1     Running   0          25d
{{< /text >}}

If the `istio-citadel` pod doesn't exist, try to re-deploy the pod.

If the `istio-citadel` pod is present but its status is not `Running`, run the commands below to get more
debugging information and check if there are any errors:

{{< text bash >}}
$ kubectl logs -l istio=citadel -n istio-system
$ kubectl describe pod -l istio=citadel -n istio-system
{{< /text >}}

If you want to check a workload (with `default` service account and `default` namespace)
certificate's lifetime:

{{< text bash >}}
$ kubectl get secret -o json istio.default -n default | jq -r '.data["cert-chain.pem"]' | base64 --decode | openssl x509 -noout -text | grep "Not After" -C 1
Not Before: Jun 1 18:23:30 2019 GMT
Not After : Aug 30 18:23:30 2019 GMT
Subject:
{{< /text >}}

{{< tip >}}
Remember to replace `istio.default` and `-n default` with `istio.YourServiceAccount` and
`-n YourNamespace` for other workloads. If the certificate is expired, Citadel did not
update the secret properly. Check Citadel logs for more information.
{{< /tip >}}
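For the mutual TLS checks described above, `istioctl` in this release line also offers a consistency check between the authentication policy and the destination rules for a service; a sketch (`<pod>` and the service host are placeholders, and the exact sub-command location can vary by release):

{{< text bash >}}
$ istioctl authn tls-check <pod> httpbin.default.svc.cluster.local
{{< /text >}}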
@@ -1,10 +1,12 @@
---
title: Galley Configuration Problems
description: Describes how to resolve Galley configuration problems.
weight: 20
force_inline_toc: true
weight: 50
aliases:
- /help/ops/setup/validation
- /help/ops/troubleshooting/validation
- /docs/ops/troubleshooting/validation
---

## Seemingly valid configuration is rejected
@@ -0,0 +1,8 @@
---
title: Diagnostic Tools
description: Tools and techniques to help troubleshoot an Istio mesh.
weight: 80
keywords: [ops]
aliases:
- /docs/ops/troubleshooting/proxy-cmd
---
@@ -1,10 +1,11 @@
---
title: Component Logging
description: Describes how to use component-level logging to get insights into a running component's behavior.
weight: 97
weight: 70
keywords: [ops]
aliases:
- /help/ops/component-logging
- /help/ops/component-logging
- /docs/ops/troubleshooting/component-logging
---

Istio components are built with a flexible logging framework which provides a number of features and controls to
@@ -45,7 +46,7 @@ $ mixs server --log_output_level attributes=debug,adapters=warning
{{< /text >}}

In addition to controlling the output level from the command-line, you can also control the output level of a running component
by using its [ControlZ](/docs/ops/troubleshooting/controlz) interface.
by using its [ControlZ](/docs/ops/diagnostic-tools/controlz) interface.

## Controlling output
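The ControlZ interface referenced above is served by each component on its own port; a minimal sketch of reaching it on a Mixer pod, assuming the default ControlZ port 9876:

{{< text bash >}}
$ kubectl -n istio-system port-forward <istio-telemetry pod> 9876:9876 &
$ # then browse to http://localhost:9876
{{< /text >}}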
Binary image file: 209 KiB before and after (content unchanged).
@@ -1,10 +1,11 @@
---
title: Component Introspection
description: Describes how to use ControlZ to get insight into individual running components.
weight: 98
weight: 60
keywords: [ops]
aliases:
- /help/ops/controlz
- /help/ops/controlz
- /docs/ops/troubleshooting/controlz
---

Istio components are built with a flexible introspection framework which makes it easy to inspect and manipulate the internal state
@@ -1,7 +1,7 @@
---
title: Diagnose your configuration with istioctl analyze
title: Diagnose your Configuration with Istioctl Analyze
description: Shows you how to use istioctl analyze to identify potential issues with your configuration.
weight: 90
weight: 40
keywords: [istioctl, debugging, kubernetes]
---
@@ -1,8 +1,10 @@
---
title: Understand your Mesh with istioctl describe
title: Understand your Mesh with Istioctl Describe
description: Shows you how to use istioctl describe to verify the configurations of a pod in your mesh.
weight: 90
weight: 30
keywords: [traffic-management, istioctl, debugging, kubernetes]
aliases:
- /docs/ops/troubleshooting/istioctl-describe
---

{{< boilerplate experimental-feature-warning >}}
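Since describe is experimental in this release line, the verb typically lives under the `experimental` command group; a sketch, with `<pod>` as a placeholder:

{{< text bash >}}
$ istioctl experimental describe pod <pod>
{{< /text >}}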
@@ -1,16 +1,17 @@
---
title: Using the istioctl command-line tool
title: Using the Istioctl Command-line Tool
description: Istio includes a supplemental tool that provides debugging and diagnosis for Istio service mesh deployments.
weight: 1
weight: 10
keywords: [istioctl,bash,zsh,shell,command-line]
aliases:
- /help/ops/component-debugging
- /help/ops/component-debugging
- /docs/ops/troubleshooting/istioctl
---

## Overview

You can gain insights into what individual components are doing by inspecting their [logs](/docs/ops/troubleshooting/component-logging/)
or peering inside via [introspection](/docs/ops/troubleshooting/controlz/). If that's insufficient, the steps below explain
You can gain insights into what individual components are doing by inspecting their [logs](/docs/ops/diagnostic-tools/component-logging/)
or peering inside via [introspection](/docs/ops/diagnostic-tools/controlz/). If that's insufficient, the steps below explain
how to get under the hood.

The [`istioctl`](/docs/reference/commands/istioctl) tool is a configuration command line utility that allows service operators to debug and diagnose their Istio service mesh deployments. The Istio project also includes two helpful scripts for `istioctl` that enable auto-completion for Bash and ZSH. Both of these scripts provide support for the currently available `istioctl` commands.
@@ -65,7 +66,7 @@ To retrieve information about endpoint configuration for the Envoy instance in a
$ istioctl proxy-config endpoints <pod-name> [flags]
{{< /text >}}

See [Debugging Envoy and Pilot](/docs/ops/troubleshooting/proxy-cmd/) for more advice on interpreting this information.
See [Debugging Envoy and Pilot](/docs/ops/diagnostic-tools/proxy-cmd/) for more advice on interpreting this information.

## `istioctl` auto-completion
@@ -1,7 +1,7 @@
---
title: Security
description: Helps you manage the security aspects of a running mesh.
weight: 25
weight: 50
keywords: [ops,security]
aliases:
- /help/ops/security
@@ -1,7 +1,7 @@
---
title: Installation and Configuration
description: Describes important requirements, concepts, and considerations for installing and configuring Istio.
weight: 2
description: Requirements, concepts, and considerations for installing and configuring Istio.
weight: 30
keywords: [ops,setup]
aliases:
- /help/ops/setup
@@ -1,12 +1,13 @@
---
title: Health Checking of Istio Services
description: Shows how to do health checking for Istio services.
weight: 1
weight: 60
aliases:
- /docs/tasks/traffic-management/app-health-check/
- /docs/ops/security/health-checks-and-mtls/
- /help/ops/setup/app-health-check
- /help/ops/app-health-check
- /docs/ops/app-health-check
keywords: [security,health-check]
---
@@ -17,7 +18,7 @@ offer three different options:
1. TCP request
1. HTTP request

This task shows how to use these approaches in Istio with mutual TLS is enabled.
This guide shows how to use these approaches in Istio with mutual TLS enabled.

Command and TCP type probes work with Istio regardless of whether or not mutual TLS is enabled. The HTTP request approach requires different Istio configuration with
mutual TLS enabled.
@@ -1,7 +1,7 @@
---
title: Telemetry
title: Observability
description: Helps you manage telemetry collection and visualization in a running mesh.
weight: 30
weight: 60
keywords: [ops,telemetry]
aliases:
- /help/ops/telemetry
@@ -33,7 +33,7 @@ keys are:

To see the Envoy settings for statistics data collection use
[`istioctl proxy-config bootstrap`](/docs/reference/commands/istioctl/#istioctl-proxy-config-bootstrap) and follow the
[deep dive into Envoy configuration](/docs/ops/troubleshooting/proxy-cmd/#deep-dive-into-envoy-configuration).
[deep dive into Envoy configuration](/docs/ops/diagnostic-tools/proxy-cmd/#deep-dive-into-envoy-configuration).
Envoy only collects statistical data on items matching the `inclusion_list` within
the `stats_matcher` JSON element.
@@ -1,7 +1,7 @@
---
title: Traffic Management
description: Helps you manage the networking aspects of a running mesh.
weight: 23
weight: 40
keywords: [ops,traffic-management]
aliases:
- /help/ops/traffic-management
@@ -220,7 +220,7 @@ spec:
The downside of this kind of configuration is that other configuration (e.g., route rules) for any of the
underlying microservices will also need to be included in this single configuration file, instead of
in separate resources associated with, and potentially owned by, the individual service teams.
See [Route rules have no effect on ingress gateway requests](/docs/ops/troubleshooting/network-issues)
See [Route rules have no effect on ingress gateway requests](/docs/ops/common-problems/network-issues/#route-rules-have-no-effect-on-ingress-gateway-requests)
for details.

To avoid this problem, it may be preferable to break up the configuration of `myapp.com` into several
@@ -1,18 +0,0 @@
---
title: Missing Grafana Output
description: Dealing with Grafana issues.
weight: 89
aliases:
- /help/ops/telemetry/grafana
- /help/ops/troubleshooting/grafana
---

If you're unable to get Grafana output when connecting from a local web client to Istio remotely hosted, you
should validate the client and server date and time match.

The time of the web client (e.g. Chrome) affects the output from Grafana. A simple solution
to this problem is to verify a time synchronization service is running correctly within the
Kubernetes cluster and the web client machine also is correctly using a time synchronization
service. Some common time synchronization systems are NTP and Chrony. This is especially
problematic in engineering labs with firewalls. In these scenarios, NTP may not be configured
properly to point at the lab-based NTP services.
@@ -1,29 +0,0 @@
---
title: Missing Zipkin Traces
description: Fix missing traces in Zipkin.
weight: 90
aliases:
- /help/ops/troubleshooting/missing-traces
---
## No traces appearing in Zipkin when running Istio locally on Mac

Istio is installed and everything seems to be working except there are no traces showing up in Zipkin when there
should be.

This may be caused by a known [Docker issue](https://github.com/docker/for-mac/issues/1260) where the time inside
containers may skew significantly from the time on the host machine. If this is the case,
when you select a very long date range in Zipkin you will see the traces appearing as much as several days too early.

You can also confirm this problem by comparing the date inside a Docker container to outside:

{{< text bash >}}
$ docker run --entrypoint date gcr.io/istio-testing/ubuntu-16-04-slave:latest
Sun Jun 11 11:44:18 UTC 2017
{{< /text >}}

{{< text bash >}}
$ date -u
Thu Jun 15 02:25:42 UTC 2017
{{< /text >}}

To fix the problem, you'll need to shutdown and then restart Docker before reinstalling Istio.
@@ -1,58 +0,0 @@
---
title: Repairing Citadel
description: What to do if Citadel is not behaving properly.
weight: 15
keywords: [security,citadel,ops]
aliases:
- /help/ops/security/repairing-citadel
- /help/ops/troubleshooting/repairing-citadel

---

{{< warning >}}
Citadel does not support multiple instances. Running multiple Citadel instances
may introduce race conditions and lead to system outages.
{{< /warning >}}

{{< warning >}}
Workloads with new Kubernetes service accounts can not be started when Citadel is
disabled for maintenance since they can't get their certificates generated.
{{< /warning >}}

Citadel is not a critical data plane component. The default workload certificate lifetime is 3
months. Certificates will be rotated by Citadel before they expire. If Citadel is disabled for
short maintenance periods, existing mutual TLS traffic will not be affected.

If you suspect Citadel isn't working properly, verify the status of the `istio-citadel` pod:

{{< text bash >}}
$ kubectl get pod -l istio=citadel -n istio-system
NAME                            READY   STATUS    RESTARTS   AGE
istio-citadel-ff5696f6f-ht4gq   1/1     Running   0          25d
{{< /text >}}

If the `istio-citadel` pod doesn't exist, try to re-deploy the pod.

If the `istio-citadel` pod is present but its status is not `Running`, run the commands below to get more
debugging information and check if there are any errors:

{{< text bash >}}
$ kubectl logs -l istio=citadel -n istio-system
$ kubectl describe pod -l istio=citadel -n istio-system
{{< /text >}}

If you want to check a workload (with `default` service account and `default` namespace)
certificate's lifetime:

{{< text bash >}}
$ kubectl get secret -o json istio.default -n default | jq -r '.data["cert-chain.pem"]' | base64 --decode | openssl x509 -noout -text | grep "Not After" -C 1
Not Before: Jun 1 18:23:30 2019 GMT
Not After : Aug 30 18:23:30 2019 GMT
Subject:
{{< /text >}}

{{< tip >}}
Remember to replace `istio.default` and `-n default` with `istio.YourServiceAccount` and
`-n YourNamespace` for other workloads. If the certificate is expired, Citadel did not
update the secret properly. Check Citadel logs for more information.
{{< /tip >}}
@@ -1,10 +0,0 @@
---
title: Tcpdump Limitations
description: Limitations for using Tcpdump in pods.
weight: 99
---

Tcpdump doesn't work in the sidecar pod - the container doesn't run as root. However any other container in the same pod will see all the packets, since the
network namespace is shared. `iptables` will also see the pod-wide configuration.

Communication between Envoy and the app happens on 127.0.0.1, and is not encrypted.
@@ -64,7 +64,7 @@ Download the Istio release which includes installation files, samples and a comm
$ export PATH=$PWD/bin:$PATH
{{< /text >}}

1. You can optionally enable the [auto-completion option](/docs/ops/troubleshooting/istioctl#enabling-auto-completion) when working with a bash or ZSH console.
1. You can optionally enable the [auto-completion option](/docs/ops/diagnostic-tools/istioctl#enabling-auto-completion) when working with a bash or ZSH console.

## Installing Istio
@@ -29,7 +29,8 @@ The `httpbin` application serves as the backend service for this task.
when calling the `httpbin` service:

{{< warning >}}
If you installed/configured Istio with mutual TLS authentication enabled, you must add a TLS traffic policy `mode: ISTIO_MUTUAL` to the `DestinationRule` before applying it. Otherwise requests will generate 503 errors as described [here](/docs/ops/troubleshooting/network-issues/#503-errors-after-setting-destination-rule).
If you installed/configured Istio with mutual TLS authentication enabled, you must add a TLS traffic policy `mode: ISTIO_MUTUAL` to the `DestinationRule` before applying it.
Otherwise requests will generate 503 errors as described [here](/docs/ops/common-problems/network-issues/#503-errors-after-setting-destination-rule).
{{< /warning >}}

{{< text bash >}}
@@ -192,7 +192,7 @@ Let's see how you can configure a `Gateway` on port 80 for HTTP traffic.
you can add the special value `mesh` to the list of `gateways`. Since the internal hostname for the
service is probably different (e.g., `httpbin.default.svc.cluster.local`) from the external one,
you will also need to add it to the `hosts` list. Refer to the
[troubleshooting guide](/docs/ops/troubleshooting/network-issues)
[operations guide](/docs/ops/common-problems/network-issues#route-rules-have-no-effect-on-ingress-gateway-requests)
for more details.
{{< /warning >}}
@@ -127,7 +127,8 @@ In this step, you will change that behavior so that all traffic goes to `v1`.
1. Create a default route rule to route all traffic to `v1` of the service:

{{< warning >}}
If you installed/configured Istio with mutual TLS authentication enabled, you must add a TLS traffic policy `mode: ISTIO_MUTUAL` to the `DestinationRule` before applying it. Otherwise requests will generate 503 errors as described [here](/docs/ops/troubleshooting/network-issues/#503-errors-after-setting-destination-rule).
If you installed/configured Istio with mutual TLS authentication enabled, you must add a TLS traffic policy `mode: ISTIO_MUTUAL` to the `DestinationRule` before applying it.
Otherwise requests will [generate 503 errors](/docs/ops/common-problems/network-issues/#503-errors-after-setting-destination-rule).
{{< /warning >}}

{{< text bash >}}
@@ -8,10 +8,10 @@ If mutual TLS is enabled, HTTP and TCP health checks from the kubelet will not w
As of Istio 1.1, we have several options to solve this issue.

1. Using probe rewrite to redirect liveness and readiness requests to the
workload directly. Please refer to [Probe Rewrite](/docs/ops/app-health-check/#probe-rewrite)
workload directly. Please refer to [Probe Rewrite](/docs/ops/setup/app-health-check/#probe-rewrite)
for more information.

1. Using a separate port for health checks and enabling mutual TLS only on the regular service port. Please refer to [Health Checking of Istio Services](/docs/ops/app-health-check/#separate-port) for more information.
1. Using a separate port for health checks and enabling mutual TLS only on the regular service port. Please refer to [Health Checking of Istio Services](/docs/ops/setup/app-health-check/#separate-port) for more information.

1. Using the [`PERMISSIVE` mode](/docs/tasks/security/mtls-migration) for Istio services so they can accept both HTTP and mutual TLS traffic. Please keep in mind that mutual TLS is not enforced since others can communicate with the service with HTTP traffic.
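The probe rewrite option in the first item above is enabled per deployment through a pod-template annotation; a sketch using `kubectl patch`, assuming a deployment named `httpbin` in the current namespace:

{{< text bash >}}
$ kubectl patch deployment httpbin --type merge -p '{"spec":{"template":{"metadata":{"annotations":{"sidecar.istio.io/rewriteAppHTTPProbers":"true"}}}}}'
{{< /text >}}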
@@ -15,7 +15,7 @@ We're pleased to announce the availability of Istio 1.0.3. Please see below for

## Behavior changes

- [Validating webhook](/docs/ops/troubleshooting/validation) is now mandatory. Disabling it may result in Pilot crashes.
- [Validating webhook](/docs/ops/common-problems/validation) is now mandatory. Disabling it may result in Pilot crashes.

- [Service entry](/docs/reference/config/networking/v1alpha3/service-entry/) validation now rejects the wildcard hostname (`*`) when configuring DNS resolution. The API has never allowed this, however `ServiceEntry` was erroneously excluded from validation in the previous release. Use of wildcards as part of a hostname, e.g. `*.bar.com`, remains unchanged.
@@ -23,7 +23,7 @@ aliases:
## Security

- **Improved** extend the default lifetime of self-signed Citadel root certificates to 10 years.
- **Added** Kubernetes health check prober rewrite per deployment via `sidecar.istio.io/rewriteAppHTTPProbers: "true"` in the `PodSpec` [annotation](/docs/ops/app-health-check/#use-annotations-on-pod).
- **Added** Kubernetes health check prober rewrite per deployment via `sidecar.istio.io/rewriteAppHTTPProbers: "true"` in the `PodSpec` [annotation](/docs/ops/setup/app-health-check/#use-annotations-on-pod).
- **Added** support for configuring the secret paths for Istio mutual TLS certificates. Refer [here](https://github.com/istio/istio/issues/11984) for more details.
- **Added** support for [PKCS 8](https://en.wikipedia.org/wiki/PKCS_8) private keys for workloads, enabled by the flag `pkcs8-keys` on Citadel.
- **Improved** JWT public key fetching logic to be more resilient to network failure.
@@ -76,7 +76,7 @@ Refer to the [installation option change page](/news/2019/announcing-1.2/helm-ch
- **Added** a new experimental ['a-la-carte' Istio installer](https://github.com/istio/installer/wiki) to enable users to install and upgrade Istio with desired isolation and security.
- **Added** the [DNS-discovery](https://github.com/istio-ecosystem/dns-discovery) and [iter8](https://github.com/istio-ecosystem/iter8) in [Istio ecosystem](https://github.com/istio-ecosystem).
- **Added** [environment variable and configuration file support](https://docs.google.com/document/d/1M-qqBMNbhbAxl3S_8qQfaeOLAiRqSBpSgfWebFBRuu8/edit) for configuring Galley, in addition to command-line flags.
- **Added** [ControlZ](/docs/ops/troubleshooting/controlz/) support to visualize the state of the MCP Server in Galley.
- **Added** [ControlZ](/docs/ops/diagnostic-tools/controlz/) support to visualize the state of the MCP Server in Galley.
- **Added** the [`enableServiceDiscovery` command-line flag](/docs/reference/commands/galley/#galley-server) to control the service discovery module in Galley.
- **Added** `InitialWindowSize` and `InitialConnWindowSize` parameters to Galley and Pilot to allow fine-tuning of MCP (gRPC) connection settings.
- **Graduated** configuration processing with Galley from Alpha to Beta.
@@ -63,7 +63,7 @@ Leverage Istio to integrate with Kubernetes and handle large fleets of Envoys in

- Added the new [Istio Deployment Models concept](/docs/concepts/deployment-models/) to help you decide what deployment model suits your needs.

- Organized the content of our [Operations Guide](/docs/ops/) and created a [section with all troubleshooting tasks](/docs/ops/troubleshooting) to help you find the information you seek faster.
- Organized the content of our [Operations Guide](/docs/ops/) and created a [section with all troubleshooting tasks](/docs/ops/common-problems) to help you find the information you seek faster.

As always, there is a lot happening in the [Community Meeting](https://github.com/istio/community#community-meeting); join us every other Thursday at 11 AM Pacific.
@@ -29,7 +29,7 @@ aliases:
- **Added** trust domain validation for services using mutual TLS. By default, the server only authenticates the requests from the same trust domain.
- **Added** [labels](/docs/concepts/security/#how-citadel-determines-whether-to-create-service-account-secrets) to control service account secret generation by namespace.
- **Added** SDS support to deliver the private key and certificates to each Istio control plane service.
- **Added** support for [introspection](/docs/ops/troubleshooting/controlz/) to Citadel.
- **Added** support for [introspection](/docs/ops/diagnostic-tools/controlz/) to Citadel.
- **Added** metrics to the `/metrics` endpoint of Citadel Agent on port 15014 to monitor the SDS service.
- **Added** diagnostics to the Citadel Agent using the `/debug/sds/workload` and `/debug/sds/gateway` on port 8080.
- **Improved** the ingress gateway to [load the trusted CA certificate from a separate secret](/docs/tasks/traffic-management/ingress/secure-ingress-sds/#configure-a-mutual-tls-ingress-gateway) when using SDS.
@@ -12,4 +12,4 @@ weight: 50

1. In your Kubernetes environment, check the deployments in all namespaces and make sure there are no leftover deployments that could cause errors in Istio. If you see errors when pushing authorization policies to Envoy, you can disable Pilot's authorization plug-in.

1. Follow the [debugging authorization guide](/docs/ops/troubleshooting/security-issues/) to find out the exact cause.
1. Follow the [debugging authorization guide](/docs/ops/diagnostic-tools/security-issues/) to find out the exact cause.