mirror of https://github.com/istio/istio.io.git
Reorganized the Operations node (#4765)
This commit is contained in: parent 4b21bc14aa, commit 07178c1348
@@ -44,7 +44,7 @@ Below is our list of existing features and their current phases. This information
 | Gateway: Ingress, Egress for all protocols | Stable
 | TLS termination and SNI Support in Gateways | Stable
 | SNI (multiple certs) at ingress | Stable
-| [Locality load balancing](/docs/ops/traffic-management/locality-load-balancing/) | Beta
+| [Locality load balancing](/docs/tasks/traffic-management/locality-load-balancing/) | Beta
 | Enabling custom filters in Envoy | Alpha
 | CNI container interface | Alpha
 | [Sidecar API](/docs/reference/config/networking/v1alpha3/sidecar/) | Alpha
@@ -306,7 +306,7 @@ This example shows there are many variables, based on whether the automatic sidecar
 - default policy (Configured in the ConfigMap `istio-sidecar-injector`)
 - per-pod override annotation (`sidecar.istio.io/inject`)

-The [injection status table](/docs/ops/setup/injection/) shows a clear picture of the final injection status based on the value of the above variables.
+The [injection status table](/docs/ops/troubleshooting/injection/) shows a clear picture of the final injection status based on the value of the above variables.

 ## Traffic flow from application container to sidecar proxy

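The hunk above mentions the default injection policy and the per-pod override annotation. A minimal sketch of how to inspect both, assuming the default `istio-sidecar-injector` ConfigMap name and a hypothetical pod name `my-pod`:

{{< text bash >}}
$ kubectl -n istio-system get configmap istio-sidecar-injector -o jsonpath='{.data.config}' | grep "policy:"
$ kubectl get pod my-pod -o yaml | grep "sidecar.istio.io/inject"
{{< /text >}}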
@@ -1,6 +1,6 @@
 ## Behavior changes

-- [Validating webhook](/docs/ops/setup/validation) is now mandatory. Disabling it may result in Pilot crashes.
+- [Validating webhook](/docs/ops/troubleshooting/validation) is now mandatory. Disabling it may result in Pilot crashes.

 - [Service entry](/docs/reference/config/networking/v1alpha3/service-entry/) validation now rejects the wildcard hostname (`*`) when configuring DNS resolution. The API has never allowed this, however `ServiceEntry` was erroneously excluded from validation in the previous release. Use of wildcards as part of a hostname, e.g. `*.bar.com`, remains unchanged.

@@ -6,7 +6,7 @@

 ## Traffic Management

-- **Improved** [locality based routing](/docs/ops/traffic-management/locality-load-balancing/) in multicluster environments.
+- **Improved** [locality based routing](/docs/tasks/traffic-management/locality-load-balancing/) in multicluster environments.
 - **Improved** outbound traffic policy in [`ALLOW_ANY` mode](/docs/reference/config/installation-options/#global-options). Traffic for unknown HTTP/HTTPS hosts on an existing port will be [forwarded as is](/docs/tasks/traffic-management/egress/egress-control/#envoy-passthrough-to-external-services). Unknown traffic will be logged in Envoy access logs.
 - **Added** support for setting HTTP idle timeouts to upstream services.
 - **Improved** Sidecar support for [NONE mode](/docs/reference/config/networking/v1alpha3/sidecar/#CaptureMode) (without iptables).
@@ -16,7 +16,7 @@
 ## Security

 - **Improved** the default lifetime of self-signed Citadel root certificates, extending it to 10 years.
-- **Added** Kubernetes health check prober rewrite per deployment via `sidecar.istio.io/rewriteAppHTTPProbers: "true"` in the `PodSpec` [annotation](/docs/ops/setup/app-health-check/#use-annotations-on-pod).
+- **Added** Kubernetes health check prober rewrite per deployment via `sidecar.istio.io/rewriteAppHTTPProbers: "true"` in the `PodSpec` [annotation](/docs/ops/app-health-check/#use-annotations-on-pod).
 - **Added** support for configuring the secret paths for Istio mutual TLS certificates. Refer [here](https://github.com/istio/istio/issues/11984) for more details.
 - **Added** support for [PKCS 8](https://en.wikipedia.org/wiki/PKCS_8) private keys for workloads, enabled by the flag `pkcs8-keys` on Citadel.
 - **Improved** JWT public key fetching logic to be more resilient to network failure.
@@ -69,7 +69,7 @@ Refer to the [installation option change page](/docs/reference/config/installati
 - **Added** a new experimental ['a-la-carte' Istio installer](https://github.com/istio/installer/wiki) to enable users to install and upgrade Istio with desired isolation and security.
 - **Added** [DNS-discovery](https://github.com/istio-ecosystem/dns-discovery) and [iter8](https://github.com/istio-ecosystem/iter8) to the [Istio ecosystem](https://github.com/istio-ecosystem).
 - **Added** [environment variable and configuration file support](https://docs.google.com/document/d/1M-qqBMNbhbAxl3S_8qQfaeOLAiRqSBpSgfWebFBRuu8/edit) for configuring Galley, in addition to command-line flags.
-- **Added** [ControlZ](/docs/ops/controlz/) support to visualize the state of the MCP Server in Galley.
+- **Added** [ControlZ](/docs/ops/troubleshooting/controlz/) support to visualize the state of the MCP Server in Galley.
 - **Added** the [`enableServiceDiscovery` command-line flag](/docs/reference/commands/galley/#galley-server) to control the service discovery module in Galley.
 - **Added** `InitialWindowSize` and `InitialConnWindowSize` parameters to Galley and Pilot to allow fine-tuning of MCP (gRPC) connection settings.
 - **Graduated** configuration processing with Galley from Alpha to Beta.
@@ -55,7 +55,7 @@ operators can easily expand the set of collected proxy metrics when required. Th
 overall cost of monitoring across the mesh.

 The [Envoy documentation site](https://www.envoyproxy.io/docs/envoy/latest/) includes a detailed overview of [Envoy statistics collection](https://www.envoyproxy.io/docs/envoy/latest/intro/arch_overview/observability/statistics.html?highlight=statistics).
-The operations guide on [Envoy Statistics](/docs/ops/telemetry/envoy-stats/) provides more information on controlling the generation of proxy-level metrics.
+The operations guide on [Envoy Statistics](/docs/ops/troubleshooting/proxy-cmd/) provides more information on controlling the generation of proxy-level metrics.

 Example proxy-level Metrics:

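For context on the metrics discussion in the hunk above: proxy-level metrics can also be inspected directly from Envoy's admin endpoint. A minimal sketch, assuming a Bookinfo-style `app=productpage` label and the default admin port 15000:

{{< text bash >}}
$ kubectl exec $(kubectl get pod -l app=productpage -o jsonpath='{.items[0].metadata.name}') \
    -c istio-proxy -- curl -s localhost:15000/stats | head
{{< /text >}}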
@@ -148,7 +148,7 @@ Istio supports the following load balancing methods:
 for more information.

 You can also choose to prioritize your load balancing pools based on geographic
-location. Visit the [operations guide](/docs/ops/traffic-management/locality-load-balancing/)
+location. Visit the [operations guide](/docs/tasks/traffic-management/locality-load-balancing/)
 for more information on the locality load balancing feature.

 In addition to basic service discovery and load balancing, Istio provides a rich
@@ -1,11 +1,12 @@
 ---
 title: Health Checking of Istio Services
 description: Shows how to do health checking for Istio services.
-weight: 65
+weight: 1
 aliases:
 - /docs/tasks/traffic-management/app-health-check/
 - /docs/ops/security/health-checks-and-mtls/
 - /help/ops/setup/app-health-check
+- /help/ops/app-health-check
 keywords: [security,health-check]
 ---

@@ -1,68 +0,0 @@
----
-title: Component Debugging
-description: How to do low-level debugging of Istio components.
-weight: 25
-aliases:
-- /help/ops/component-debugging
----
-
-You can gain insights into what individual components are doing by inspecting their [logs](/docs/ops/component-logging/)
-or peering inside via [introspection](/docs/ops/controlz/). If that's insufficient, the steps below explain
-how to get under the hood.
-
-## With `istioctl`
-
-### Get an overview of your mesh
-
-You can get an overview of your mesh using the `proxy-status` command:
-
-{{< text bash >}}
-$ istioctl proxy-status
-{{< /text >}}
-
-If a proxy is missing from the output list it means that it is not currently connected to a Pilot instance and so it
-will not receive any configuration. Additionally, if it is marked stale, it likely means there are networking issues or
-Pilot needs to be scaled.
-
-### Get proxy configuration
-
-`istioctl` allows you to retrieve information about proxy configuration using the `proxy-config` or `pc` command.
-
-For example, to retrieve information about cluster configuration for the Envoy instance in a specific pod:
-
-{{< text bash >}}
-$ istioctl proxy-config cluster <pod-name> [flags]
-{{< /text >}}
-
-To retrieve information about bootstrap configuration for the Envoy instance in a specific pod:
-
-{{< text bash >}}
-$ istioctl proxy-config bootstrap <pod-name> [flags]
-{{< /text >}}
-
-To retrieve information about listener configuration for the Envoy instance in a specific pod:
-
-{{< text bash >}}
-$ istioctl proxy-config listener <pod-name> [flags]
-{{< /text >}}
-
-To retrieve information about route configuration for the Envoy instance in a specific pod:
-
-{{< text bash >}}
-$ istioctl proxy-config route <pod-name> [flags]
-{{< /text >}}
-
-To retrieve information about endpoint configuration for the Envoy instance in a specific pod:
-
-{{< text bash >}}
-$ istioctl proxy-config endpoints <pod-name> [flags]
-{{< /text >}}
-
-See [Debugging Envoy and Pilot](/docs/ops/traffic-management/proxy-cmd/) for more advice on interpreting this information.
-
-## With Tcpdump
-
-Tcpdump doesn't work in the sidecar pod - the container doesn't run as root. However, any other container in the same pod will see all the packets, since the
-network namespace is shared. `iptables` will also see the pod-wide configuration.
-
-Communication between Envoy and the app happens on 127.0.0.1, and is not encrypted.
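A note on the `tcpdump` advice in the removed page above: because the pod's containers share a network namespace, you can capture the unencrypted Envoy-to-app traffic on loopback from the application container instead. A minimal sketch, assuming a hypothetical pod `my-pod` whose `app` container image ships `tcpdump`:

{{< text bash >}}
$ kubectl exec -it my-pod -c app -- tcpdump -n -i lo
{{< /text >}}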
@@ -1,105 +0,0 @@
----
-title: Miscellaneous
-description: Advice on tackling common problems with Istio.
-weight: 90
-force_inline_toc: true
-aliases:
-- /help/ops/misc
----
-
-## Verifying connectivity to Istio Pilot
-
-Verifying connectivity to Pilot is a useful troubleshooting step. Every proxy container in the service mesh should be able to communicate with Pilot. This can be accomplished in a few simple steps:
-
-1. Get the name of the Istio Ingress pod:
-
-    {{< text bash >}}
-    $ INGRESS_POD_NAME=$(kubectl get po -n istio-system | grep ingressgateway\- | awk '{print$1}'); echo ${INGRESS_POD_NAME};
-    {{< /text >}}
-
-1. Exec into the Istio Ingress pod:
-
-    {{< text bash >}}
-    $ kubectl exec -it $INGRESS_POD_NAME -n istio-system /bin/bash
-    {{< /text >}}
-
-1. Test connectivity to Pilot using `curl`. The following example invokes the v1 registration API using default Pilot configuration parameters and mutual TLS enabled:
-
-    {{< text bash >}}
-    $ curl -k --cert /etc/certs/cert-chain.pem --cacert /etc/certs/root-cert.pem --key /etc/certs/key.pem https://istio-pilot:8080/debug/edsz
-    {{< /text >}}
-
-    If mutual TLS is disabled:
-
-    {{< text bash >}}
-    $ curl http://istio-pilot:8080/debug/edsz
-    {{< /text >}}
-
-You should receive a response listing the "service-key" and "hosts" for each service in the mesh.
-
-## No traces appearing in Zipkin when running Istio locally on Mac
-
-Istio is installed and everything seems to be working except there are no traces showing up in Zipkin when there
-should be.
-
-This may be caused by a known [Docker issue](https://github.com/docker/for-mac/issues/1260) where the time inside
-containers may skew significantly from the time on the host machine. If this is the case,
-when you select a very long date range in Zipkin you will see the traces appearing as much as several days too early.
-
-You can also confirm this problem by comparing the date inside a Docker container to outside:
-
-{{< text bash >}}
-$ docker run --entrypoint date gcr.io/istio-testing/ubuntu-16-04-slave:latest
-Sun Jun 11 11:44:18 UTC 2017
-{{< /text >}}
-
-{{< text bash >}}
-$ date -u
-Thu Jun 15 02:25:42 UTC 2017
-{{< /text >}}
-
-To fix the problem, you'll need to shut down and then restart Docker before reinstalling Istio.
-
-## Automatic sidecar injection fails if the Kubernetes API server has proxy settings
-
-When the Kubernetes API server includes proxy settings such as:
-
-{{< text yaml >}}
-env:
-  - name: http_proxy
-    value: http://proxy-wsa.esl.foo.com:80
-  - name: https_proxy
-    value: http://proxy-wsa.esl.foo.com:80
-  - name: no_proxy
-    value: 127.0.0.1,localhost,dockerhub.foo.com,devhub-docker.foo.com,10.84.100.125,10.84.100.126,10.84.100.127
-{{< /text >}}
-
-With these settings, sidecar injection fails. The only related failure log can be found in the `kube-apiserver` log:
-
-{{< text plain >}}
-W0227 21:51:03.156818       1 admission.go:257] Failed calling webhook, failing open sidecar-injector.istio.io: failed calling admission webhook "sidecar-injector.istio.io": Post https://istio-sidecar-injector.istio-system.svc:443/inject: Service Unavailable
-{{< /text >}}
-
-Make sure both pod and service CIDRs are not proxied according to the `*_proxy` variables. Check the `kube-apiserver` files and logs to verify the configuration and whether any requests are being proxied.
-
-One workaround is to remove the proxy settings from the `kube-apiserver` manifest; another is to include `istio-sidecar-injector.istio-system.svc` or `.svc` in the `no_proxy` value. Make sure that `kube-apiserver` is restarted after each workaround.
-
-An [issue](https://github.com/kubernetes/kubeadm/issues/666) was filed with Kubernetes related to this and has since been closed:
-[https://github.com/kubernetes/kubernetes/pull/58698#discussion_r163879443](https://github.com/kubernetes/kubernetes/pull/58698#discussion_r163879443)
-
-## What Envoy version is Istio using?
-
-To find out the Envoy version used in deployment, you can `exec` into the container and query the `server_info` endpoint:
-
-{{< text bash >}}
-$ kubectl exec -it PODNAME -c istio-proxy -n NAMESPACE /bin/bash
-root@5c7e9d3a4b67:/# curl localhost:15000/server_info
-envoy 0/1.9.0-dev//RELEASE live 57964 57964 0
-{{< /text >}}
-
-In addition, the `Envoy` and `istio-api` repository versions are stored as labels on the image:
-
-{{< text bash >}}
-$ docker inspect -f '{{json .Config.Labels }}' ISTIO-PROXY-IMAGE
-{"envoy-vcs-ref":"b3be5713f2100ab5c40316e73ce34581245bd26a","istio-api-vcs-ref":"825044c7e15f6723d558b7b878855670663c2e1e"}
-{{< /text >}}
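To check whether your API server carries proxy settings like those shown in the removed page above, a minimal sketch assuming a kubeadm-style cluster where the API server pod carries the `component=kube-apiserver` label:

{{< text bash >}}
$ kubectl -n kube-system get pod -l component=kube-apiserver \
    -o jsonpath='{.items[0].spec.containers[0].env}'
{{< /text >}}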
@@ -1,7 +1,7 @@
 ---
 title: Security
 description: Helps you manage the security aspects of a running mesh.
-weight: 40
+weight: 25
 keywords: [ops,security]
 aliases:
 - /help/ops/security
@@ -1,27 +0,0 @@
----
-title: Authorization Too Permissive
-description: Authorization is enabled, but requests make it through anyway.
-weight: 50
-aliases:
-- /help/ops/security/authorization-permissive
----
-If authorization checks are enabled for a service and yet requests to the
-service aren't being blocked, then authorization was likely not enabled
-successfully. To verify, follow these steps:
-
-1. Check the [enable authorization docs](/docs/concepts/security/#enabling-authorization)
-   to correctly enable Istio authorization.
-
-1. Avoid enabling authorization for Istio Control Plane components, including
-   Mixer, Pilot, and Ingress. The Istio authorization features are designed for
-   authorizing access to services in an Istio mesh. Enabling the authorization
-   features for the Istio Control Plane components can cause unexpected
-   behavior.
-
-1. In your Kubernetes environment, check deployments in all namespaces to make
-   sure there is no legacy deployment left that can cause an error in Pilot.
-   You can disable Pilot's authorization plug-in if there is an error pushing
-   authorization policy to Envoy.
-
-1. Visit [Debugging Authorization](/docs/ops/security/debugging-authorization/)
-   to find out the exact cause.
@@ -1,27 +0,0 @@
----
-title: Authorization Too Restrictive
-description: Authorization is enabled and no requests make it through to the service.
-weight: 60
-aliases:
-- /help/ops/security/authorization-restrictive
----
-
-When you first enable authorization for a service, all requests are denied by default. After you add one or more authorization policies,
-matching requests should flow through. If all requests continue to be denied, you can try the following:
-
-1. Make sure there is no typo in your policy YAML file.
-
-1. Avoid enabling authorization for Istio Control Plane components, including Mixer, Pilot, and Ingress. Istio authorization policy is designed for authorizing access to services in an Istio mesh. Enabling it for Istio Control Plane components may cause unexpected behavior.
-
-1. Make sure that your `ServiceRoleBinding` and referred `ServiceRole` objects are in the same namespace (by checking the `metadata`/`namespace` line); a minimal example follows this list.
-
-1. Make sure that your service role and service role binding policies don't use any HTTP only fields
-   for TCP services. Otherwise, Istio ignores the policies as if they didn't exist.
-
-1. In a Kubernetes environment, make sure all services in a `ServiceRole` object are in the same namespace as the
-   `ServiceRole` itself. For example, if a service in a `ServiceRole` object is `a.default.svc.cluster.local`, the `ServiceRole` must be in the
-   `default` namespace (the `metadata/namespace` line should be `default`). For non-Kubernetes environments, all `ServiceRoles` and `ServiceRoleBindings`
-   for a mesh should be in the same namespace.
-
-1. Visit [Debugging Authorization](/docs/ops/security/debugging-authorization/)
-   to find out the exact cause.
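As an illustration of the namespace rules in the removed page above, a minimal sketch of a `ServiceRole` and `ServiceRoleBinding` that live in the same namespace; the object names are hypothetical:

{{< text yaml >}}
apiVersion: "rbac.istio.io/v1alpha1"
kind: ServiceRole
metadata:
  name: productpage-viewer
  namespace: default
spec:
  rules:
  - services: ["productpage.default.svc.cluster.local"]
    methods: ["GET"]
---
apiVersion: "rbac.istio.io/v1alpha1"
kind: ServiceRoleBinding
metadata:
  name: bind-productpage-viewer
  namespace: default
spec:
  subjects:
  - user: "*"
  roleRef:
    kind: ServiceRole
    name: "productpage-viewer"
{{< /text >}}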
@@ -1,262 +0,0 @@
----
-title: Debugging Authorization
-description: Demonstrates how to debug authorization.
-weight: 5
-keywords: [debug,security,authorization,rbac]
-aliases:
-- /help/ops/security/debugging-authorization
----
-
-This page demonstrates how to debug Istio authorization.
-
-{{< idea >}}
-It would be very helpful to also include a cluster state archive in your email by following the instructions in
-[reporting bugs](/about/bugs).
-{{< /idea >}}
-
-## Ensure Authorization is Enabled Correctly
-
-The `ClusterRbacConfig` custom resource, a cluster-level singleton named `default`, controls the authorization functionality globally.
-
-1. Run the following command to list existing `ClusterRbacConfig`:
-
-    {{< text bash >}}
-    $ kubectl get clusterrbacconfigs.rbac.istio.io --all-namespaces
-    {{< /text >}}
-
-1. Verify there is only **one** instance of `ClusterRbacConfig` with name `default`. Otherwise, Istio disables the
-   authorization functionality and ignores all policies.
-
-    {{< text plain >}}
-    NAMESPACE   NAME      AGE
-    default     default   1d
-    {{< /text >}}
-
-1. If there is more than one `ClusterRbacConfig` instance, remove any additional `ClusterRbacConfig` instances and
-   ensure **only one** instance is named `default`.
-
-## Ensure Pilot Accepts the Policies
-
-Pilot converts and distributes your authorization policies to the proxies. The following steps help
-you ensure Pilot is working as expected:
-
-1. Run the following command to expose the Pilot `ControlZ`:
-
-    {{< text bash >}}
-    $ kubectl port-forward $(kubectl -n istio-system get pods -l istio=pilot -o jsonpath='{.items[0].metadata.name}') -n istio-system 9876:9876
-    {{< /text >}}
-
-1. Verify you see the following output:
-
-    {{< text plain >}}
-    Forwarding from 127.0.0.1:9876 -> 9876
-    {{< /text >}}
-
-1. Start your browser and open the `ControlZ` page at `http://127.0.0.1:9876/scopez/`.
-
-1. Change the `rbac` Output Level to `debug`.
-
-1. Use `Ctrl+C` in the terminal you started in step 1 to stop the port-forward command.
-
-1. Print the log of Pilot and search for `rbac` with the following command:
-
-    {{< tip >}}
-    You probably need to first delete and then re-apply your authorization policies so that
-    the debug output is generated for these policies.
-    {{< /tip >}}

-    {{< text bash >}}
-    $ kubectl logs $(kubectl -n istio-system get pods -l istio=pilot -o jsonpath='{.items[0].metadata.name}') -c discovery -n istio-system | grep rbac
-    {{< /text >}}
-
-1. Check the output and verify:
-
-    * There are no errors.
-    * There is a `"built filter config for ..."` message, which means the filter was generated
-      for the target service.
-
-1. For example, you might see something similar to the following:
-
-    {{< text plain >}}
-    2018-07-26T22:25:41.009838Z debug rbac building filter config for {sleep.foo.svc.cluster.local map[app:sleep pod-template-hash:3326367878] map[destination.name:sleep destination.namespace:foo destination.user:default]}
-    2018-07-26T22:25:41.009915Z info rbac no service role in namespace foo
-    2018-07-26T22:25:41.009957Z info rbac no service role binding in namespace foo
-    2018-07-26T22:25:41.010000Z debug rbac generated filter config: { }
-    2018-07-26T22:25:41.010114Z info rbac built filter config for sleep.foo.svc.cluster.local
-    2018-07-26T22:25:41.182400Z debug rbac building filter config for {productpage.default.svc.cluster.local map[pod-template-hash:2600844901 version:v1 app:productpage] map[destination.name:productpage destination.namespace:default destination.user:bookinfo-productpage]}
-    2018-07-26T22:25:41.183131Z debug rbac checking role app2-grpc-viewer
-    2018-07-26T22:25:41.183214Z debug rbac role skipped for no AccessRule matched
-    2018-07-26T22:25:41.183255Z debug rbac checking role productpage-viewer
-    2018-07-26T22:25:41.183281Z debug rbac matched AccessRule[0]
-    2018-07-26T22:25:41.183390Z debug rbac generated filter config: {policies:<key:"productpage-viewer" value:<permissions:<and_rules:<rules:<or_rules:<rules:<header:<name:":method" exact_match:"GET" > > > > > > principals:<and_ids:<ids:<any:true > > > > > }
-    2018-07-26T22:25:41.184407Z info rbac built filter config for productpage.default.svc.cluster.local
-    {{< /text >}}
-
-    It means Pilot generated:
-
-    * An empty config for `sleep.foo.svc.cluster.local`, as no authorization policy matched
-      and Istio denies all requests sent to this service by default.
-
-    * A config for `productpage.default.svc.cluster.local`, meaning Istio will allow anyone to access it
-      with the GET method.
-
-## Ensure Pilot Distributes Policies to Proxies Correctly
-
-Pilot distributes the authorization policies to proxies. The following steps help you ensure Pilot
-is working as expected:
-
-{{< tip >}}
-The command used in this section assumes you have deployed the [Bookinfo application](/docs/examples/bookinfo/),
-otherwise you should replace `"-l app=productpage"` with your actual pod.
-{{< /tip >}}
-
-1. Run the following command to get the proxy configuration dump for the `productpage` service:
-
-    {{< text bash >}}
-    $ kubectl exec $(kubectl get pods -l app=productpage -o jsonpath='{.items[0].metadata.name}') -c istio-proxy -- curl localhost:15000/config_dump -s
-    {{< /text >}}
-
-1. Check the log and verify:
-
-    * The log includes an `envoy.filters.http.rbac` filter to enforce the authorization policy
-      on each incoming request.
-    * Istio updates the filter accordingly after you update your authorization policy.
-
-1. The following output means the proxy of `productpage` has enabled the `envoy.filters.http.rbac` filter
-   with rules that allow anyone to access it via the `GET` method. The `shadow_rules` are not used and you can ignore them safely.
-
-    {{< text plain >}}
-    {
-     "name": "envoy.filters.http.rbac",
-     "config": {
-      "rules": {
-       "policies": {
-        "productpage-viewer": {
-         "permissions": [
-          {
-           "and_rules": {
-            "rules": [
-             {
-              "or_rules": {
-               "rules": [
-                {
-                 "header": {
-                  "exact_match": "GET",
-                  "name": ":method"
-                 }
-                }
-               ]
-              }
-             }
-            ]
-           }
-          }
-         ],
-         "principals": [
-          {
-           "and_ids": {
-            "ids": [
-             {
-              "any": true
-             }
-            ]
-           }
-          }
-         ]
-        }
-       }
-      },
-      "shadow_rules": {
-       "policies": {}
-      }
-     }
-    },
-    {{< /text >}}
-
-## Ensure Proxies Enforce Policies Correctly
-
-Proxies eventually enforce the authorization policies. The following steps help you ensure the proxy
-is working as expected:
-
-{{< tip >}}
-The command used in this section assumes you have deployed the [Bookinfo application](/docs/examples/bookinfo/),
-otherwise you should replace `"-l app=productpage"` with your actual pod.
-{{< /tip >}}
-
-1. Turn on the authorization debug logging in the proxy with the following command:
-
-    {{< text bash >}}
-    $ kubectl exec $(kubectl get pods -l app=productpage -o jsonpath='{.items[0].metadata.name}') -c istio-proxy -- curl -X POST localhost:15000/logging?rbac=debug -s
-    {{< /text >}}
-
-1. Verify you see the following output:
-
-    {{< text plain >}}
-    active loggers:
-      ... ...
-      rbac: debug
-      ... ...
-    {{< /text >}}
-
-1. Visit the `productpage` in your browser to generate some logs.
-
-1. Print the proxy logs with the following command:
-
-    {{< text bash >}}
-    $ kubectl logs $(kubectl get pods -l app=productpage -o jsonpath='{.items[0].metadata.name}') -c istio-proxy
-    {{< /text >}}
-
-1. Check the output and verify:
-
-    * The output log shows either `enforced allowed` or `enforced denied`, depending on whether the request
-      was allowed or denied respectively.
-
-    * Your authorization policy expects the data extracted from the request.
-
-1. The following output means there is a `GET` request at path `/productpage` and the policy allows the request.
-   The `shadow denied` has no effect and you can ignore it safely.
-
-    {{< text plain >}}
-    ...
-    [2018-07-26 20:39:18.060][152][debug][rbac] external/envoy/source/extensions/filters/http/rbac/rbac_filter.cc:79] checking request: remoteAddress: 10.60.0.139:51158, localAddress: 10.60.0.93:9080, ssl: uriSanPeerCertificate: spiffe://cluster.local/ns/istio-system/sa/istio-ingressgateway-service-account, subjectPeerCertificate: O=, headers: ':authority', '35.238.0.62'
-    ':path', '/productpage'
-    ':method', 'GET'
-    'upgrade-insecure-requests', '1'
-    'user-agent', 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/67.0.3396.99 Safari/537.36'
-    'dnt', '1'
-    'accept', 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8'
-    'accept-encoding', 'gzip, deflate'
-    'accept-language', 'en-US,en;q=0.9,zh-CN;q=0.8,zh;q=0.7'
-    'x-forwarded-for', '10.60.0.1'
-    'x-forwarded-proto', 'http'
-    'x-request-id', 'e23ea62d-b25d-91be-857c-80a058d746d4'
-    'x-b3-traceid', '5983108bf6d05603'
-    'x-b3-spanid', '5983108bf6d05603'
-    'x-b3-sampled', '1'
-    'x-istio-attributes', 'CikKGGRlc3RpbmF0aW9uLnNlcnZpY2UubmFtZRINEgtwcm9kdWN0cGFnZQoqCh1kZXN0aW5hdGlvbi5zZXJ2aWNlLm5hbWVzcGFjZRIJEgdkZWZhdWx0Ck8KCnNvdXJjZS51aWQSQRI/a3ViZXJuZXRlczovL2lzdGlvLWluZ3Jlc3NnYXRld2F5LTc2NjY0Y2NmY2Ytd3hjcjQuaXN0aW8tc3lzdGVtCj4KE2Rlc3RpbmF0aW9uLnNlcnZpY2USJxIlcHJvZHVjdHBhZ2UuZGVmYXVsdC5zdmMuY2x1c3Rlci5sb2NhbApDChhkZXN0aW5hdGlvbi5zZXJ2aWNlLmhvc3QSJxIlcHJvZHVjdHBhZ2UuZGVmYXVsdC5zdmMuY2x1c3Rlci5sb2NhbApBChdkZXN0aW5hdGlvbi5zZXJ2aWNlLnVpZBImEiRpc3RpbzovL2RlZmF1bHQvc2VydmljZXMvcHJvZHVjdHBhZ2U='
-    'content-length', '0'
-    'x-envoy-internal', 'true'
-    'sec-istio-authn-payload', 'CkVjbHVzdGVyLmxvY2FsL25zL2lzdGlvLXN5c3RlbS9zYS9pc3Rpby1pbmdyZXNzZ2F0ZXdheS1zZXJ2aWNlLWFjY291bnQSRWNsdXN0ZXIubG9jYWwvbnMvaXN0aW8tc3lzdGVtL3NhL2lzdGlvLWluZ3Jlc3NnYXRld2F5LXNlcnZpY2UtYWNjb3VudA=='
-    , dynamicMetadata: filter_metadata {
-      key: "istio_authn"
-      value {
-        fields {
-          key: "request.auth.principal"
-          value {
-            string_value: "cluster.local/ns/istio-system/sa/istio-ingressgateway-service-account"
-          }
-        }
-        fields {
-          key: "source.principal"
-          value {
-            string_value: "cluster.local/ns/istio-system/sa/istio-ingressgateway-service-account"
-          }
-        }
-      }
-    }
-
-    [2018-07-26 20:39:18.060][152][debug][rbac] external/envoy/source/extensions/filters/http/rbac/rbac_filter.cc:88] shadow denied
-    [2018-07-26 20:39:18.060][152][debug][rbac] external/envoy/source/extensions/filters/http/rbac/rbac_filter.cc:98] enforced allowed
-    ...
-    {{< /text >}}
|
@@ -1,53 +0,0 @@
----
-title: End User Authentication
-description: What to do if end-user authentication doesn't work.
-weight: 80
-aliases:
-- /help/ops/security/end-user-auth
----
-
-With Istio, you can enable end-user authentication. Currently, the end-user credential supported by the Istio authentication policy is JWT.
-The following is a guide for troubleshooting end-user JWT authentication.
-
-1. Check your Istio authentication policy; `principalBinding` should be set to `USE_ORIGIN` to authenticate the end user.
-
-1. If `jwksUri` isn't set, make sure the JWT issuer is in URL format and that `url + /.well-known/openid-configuration` can be opened in a browser; for example, if the JWT issuer is `https://accounts.google.com`, make sure `https://accounts.google.com/.well-known/openid-configuration` is a valid URL that can be opened in a browser.
-
-    {{< text yaml >}}
-    apiVersion: "authentication.istio.io/v1alpha1"
-    kind: "Policy"
-    metadata:
-      name: "example-3"
-    spec:
-      targets:
-      - name: httpbin
-      peers:
-      - mtls:
-      origins:
-      - jwt:
-          issuer: "628645741881-noabiu23f5a8m8ovd8ucv698lj78vv0l@developer.gserviceaccount.com"
-          jwksUri: "https://www.googleapis.com/service_accounts/v1/jwk/628645741881-noabiu23f5a8m8ovd8ucv698lj78vv0l@developer.gserviceaccount.com"
-      principalBinding: USE_ORIGIN
-    {{< /text >}}
-
-1. If the JWT token is placed in the Authorization header of HTTP requests, make sure the JWT token is valid (not expired, etc.). The fields in a JWT token can be decoded by using online JWT parsing tools, e.g., [jwt.io](https://jwt.io/).
-
-1. Get the Istio proxy (i.e., Envoy) logs to verify the configuration which Pilot distributes is correct.
-
-   For example, if the authentication policy is enforced on the `httpbin` service in the namespace `foo`, use the command below to get logs from the Istio proxy; make sure `local_jwks` is set and that the HTTP response code appears in the Istio proxy logs.
-
-    {{< text bash >}}
-    $ kubectl logs httpbin-68fbcdcfc7-hrnzm -c istio-proxy -n foo
-    [2018-07-04 19:13:30.762][15][info][config] ./src/envoy/http/jwt_auth/auth_store.h:72] Loaded JwtAuthConfig: rules {
-      issuer: "628645741881-noabiu23f5a8m8ovd8ucv698lj78vv0l@developer.gserviceaccount.com"
-      local_jwks {
-        inline_string: "{\n \"keys\": [\n {\n \"kty\": \"RSA\",\n \"alg\": \"RS256\",\n \"use\": \"sig\",\n \"kid\": \"03bc39a6b56602c0d2ad421c3993d5e4f88e6f54\",\n \"n\": \"u9gnSMDYw4ggVKInAfxpXqItv9Ii7PlUFrAcwANQMW9fbZrFpITFD45t0gUy9CK4QewkLhqDDUJSvpH7wprS8Hi0M8wAJf_lgugdRr6Nc2qK-eywjjDK-afQjhGLcMJGS0YXi3K2lyP-oWiLingMbYRiJxTi86icWT8AU8bKoTyTPFOExAJkDFnquulU0_KlteZxbjnRIVvMKfpgZ3yK9Pzv7XjtdvO7xlr59K9Zotd4mgphIUADfw1fR0lNkjHQp9N0WP9cbOsyUwm5jjDklnyVh7yBHcEk1YHccntosxnwIn-cj538PSaL_qDZgDAsJKHPZlkiP_1mjsu3NkofIQ\",\n \"e\": \"AQAB\"\n },\n {\n \"kty\": \"RSA\",\n \"alg\": \"RS256\",\n \"use\": \"sig\",\n \"kid\": \"60aef5b0877e9f0d67b787b5be797636735efdee\",\n \"n\": \"0TmzDEN12GF9UaWJI40oKwJlu53ZQihHcaVi1thLGs1l3ubdPWv8MEsc9X2DjCRxEB6Ss1R2VOImrQ2RWFuBSNHorjE0_GyEGNzvOH-0uUQ5uES2HvEN7384XfUYj9MoTPibstDEl84pm4d3Ka3R_1wk03Jrl9MIq6fnV_4Z-F7O7ElGqk8xcsiVUowd447dwlrd55ChIyISF5PvbCLtOKz9FgTz2mEb8jmzuZQs5yICgKZCzlJ7xNOOmZcqCZf9Qzaz4OnVLXykBLzSuLMtxvvOxf53rvWB0F2__CjKlEWBCQkB39Zaa_4I8dCAVxgkeQhgoU26BdzLL28xjWzdbw\",\n \"e\": \"AQAB\"\n },\n {\n \"kty\": \"RSA\",\n \"alg\": \"RS256\",\n \"use\": \"sig\",\n \"kid\": \"62a93512c9ee4c7f8067b5a216dade2763d32a47\",\n \"n\": \"0YWnm_eplO9BFtXszMRQNL5UtZ8HJdTH2jK7vjs4XdLkPW7YBkkm_2xNgcaVpkW0VT2l4mU3KftR-6s3Oa5Rnz5BrWEUkCTVVolR7VYksfqIB2I_x5yZHdOiomMTcm3DheUUCgbJRv5OKRnNqszA4xHn3tA3Ry8VO3X7BgKZYAUh9fyZTFLlkeAh0-bLK5zvqCmKW5QgDIXSxUTJxPjZCgfx1vmAfGqaJb-nvmrORXQ6L284c73DUL7mnt6wj3H6tVqPKA27j56N0TB1Hfx4ja6Slr8S4EB3F1luYhATa1PKUSH8mYDW11HolzZmTQpRoLV8ZoHbHEaTfqX_aYahIw\",\n \"e\": \"AQAB\"\n },\n {\n \"kty\": \"RSA\",\n \"alg\": \"RS256\",\n \"use\": \"sig\",\n \"kid\": \"b3319a147514df7ee5e4bcdee51350cc890cc89e\",\n \"n\": \"qDi7Tx4DhNvPQsl1ofxxc2ePQFcs-L0mXYo6TGS64CY_2WmOtvYlcLNZjhuddZVV2X88m0MfwaSA16wE-RiKM9hqo5EY8BPXj57CMiYAyiHuQPp1yayjMgoE1P2jvp4eqF-BTillGJt5W5RuXti9uqfMtCQdagB8EC3MNRuU_KdeLgBy3lS3oo4LOYd-74kRBVZbk2wnmmb7IhP9OoLc1-7-9qU1uhpDxmE6JwBau0mDSwMnYDS4G_ML17dC-ZDtLd1i24STUw39KH0pcSdfFbL2NtEZdNeam1DDdk0iUtJSPZliUHJBI_pj8M-2Mn_oA8jBuI8YKwBqYkZCN1I95Q\",\n \"e\": \"AQAB\"\n }\n ]\n}\n"
-      }
-      forward: true
-      forward_payload_header: "istio-sec-8a85f33ec44c5ccbaf951742ff0aaa34eb94d9bd"
-    }
-    allow_missing_or_failed: true
-    [2018-07-04 19:13:30.763][15][info][upstream] external/envoy/source/server/lds_api.cc:62] lds: add/update listener '10.8.2.9_8000'
-    [2018-07-04T19:13:39.755Z] "GET /ip HTTP/1.1" 401 - 0 29 0 - "-" "curl/7.35.0" "e8374005-1957-99e4-96b6-9d6ec5bef396" "httpbin.foo:8000" "-"
-    [2018-07-04T19:13:40.463Z] "GET /ip HTTP/1.1" 401 - 0 29 0 - "-" "curl/7.35.0" "9badd659-fa0e-9ca9-b4c0-9ac225571929" "httpbin.foo:8000" "-"
-    {{< /text >}}
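As a quick check of the issuer rule in the removed page above, a minimal sketch using the example issuer from the text (any OpenID provider exposes the same discovery document):

{{< text bash >}}
$ curl -s https://accounts.google.com/.well-known/openid-configuration | grep jwks_uri
{{< /text >}}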
@@ -1,181 +0,0 @@
----
-title: Keys and Certificates
-description: What to do if you suspect problems with Istio keys and certificates.
-weight: 20
-aliases:
-- /help/ops/security/keys-and-certs
----
-
-If you suspect that some of the keys and/or certificates used by Istio aren't correct, the
-first step is to ensure that [Citadel is healthy](/docs/ops/security/repairing-citadel/).
-
-You can then verify that Citadel is actually generating keys and certificates:
-
-{{< text bash >}}
-$ kubectl get secret istio.my-sa -n my-ns
-NAME          TYPE                   DATA      AGE
-istio.my-sa   istio.io/key-and-cert  3         24d
-{{< /text >}}
-
-Where `my-ns` and `my-sa` are the namespace and service account your pod is running as.
-
-If you want to check the keys and certificates of other service accounts, you can run the following
-command to list all secrets for which Citadel has generated a key and certificate:
-
-{{< text bash >}}
-$ kubectl get secret --all-namespaces | grep istio.io/key-and-cert
-NAMESPACE      NAME                                           TYPE                   DATA      AGE
-.....
-istio-system   istio.istio-citadel-service-account            istio.io/key-and-cert  3         14d
-istio-system   istio.istio-cleanup-old-ca-service-account     istio.io/key-and-cert  3         14d
-istio-system   istio.istio-egressgateway-service-account      istio.io/key-and-cert  3         14d
-istio-system   istio.istio-ingressgateway-service-account     istio.io/key-and-cert  3         14d
-istio-system   istio.istio-mixer-post-install-account         istio.io/key-and-cert  3         14d
-istio-system   istio.istio-mixer-service-account              istio.io/key-and-cert  3         14d
-istio-system   istio.istio-pilot-service-account              istio.io/key-and-cert  3         14d
-istio-system   istio.istio-sidecar-injector-service-account   istio.io/key-and-cert  3         14d
-istio-system   istio.prometheus                               istio.io/key-and-cert  3         14d
-kube-public    istio.default                                  istio.io/key-and-cert  3         14d
-.....
-{{< /text >}}
-
-Then check that the certificate is valid with:
-
-{{< text bash >}}
-$ kubectl get secret -o json istio.my-sa -n my-ns | jq -r '.data["cert-chain.pem"]' | base64 --decode | openssl x509 -noout -text
-Certificate:
-    Data:
-        Version: 3 (0x2)
-        Serial Number:
-            99:59:6b:a2:5a:f4:20:f4:03:d7:f0:bc:59:f5:d8:40
-        Signature Algorithm: sha256WithRSAEncryption
-        Issuer: O = k8s.cluster.local
-        Validity
-            Not Before: Jun  4 20:38:20 2018 GMT
-            Not After : Sep  2 20:38:20 2018 GMT
-        Subject: O =
-        Subject Public Key Info:
-            Public Key Algorithm: rsaEncryption
-                Public-Key: (2048 bit)
-                Modulus:
-                    00:c8:a0:08:24:61:af:c1:cb:81:21:90:cc:03:76:
-                    01:25:bc:ff:ca:25:fc:81:d1:fa:b8:04:aa:d4:6b:
-                    55:e9:48:f2:e4:ab:22:78:03:47:26:bb:8f:22:10:
-                    66:47:47:c3:b2:9a:70:f1:12:f1:b3:de:d0:e9:2d:
-                    28:52:21:4b:04:33:fa:3d:92:8c:ab:7f:cc:74:c9:
-                    c4:68:86:b0:4f:03:1b:06:33:48:e3:5b:8f:01:48:
-                    6a:be:64:0e:01:f5:98:6f:57:e4:e7:b7:47:20:55:
-                    98:35:f9:99:54:cf:a9:58:1e:1b:5a:0a:63:ce:cd:
-                    ed:d3:a4:88:2b:00:ee:b0:af:e8:09:f8:a8:36:b8:
-                    55:32:80:21:8e:b5:19:c0:2f:e8:ca:4b:65:35:37:
-                    2f:f1:9e:6f:09:d4:e0:b1:3d:aa:5f:fe:25:1a:7b:
-                    d4:dd:fe:d1:d3:b6:3c:78:1d:3b:12:c2:66:bd:95:
-                    a8:3b:64:19:c0:51:05:9f:74:3d:6e:86:1e:20:f5:
-                    ed:3a:ab:44:8d:7c:5b:11:14:83:ee:6b:a1:12:2e:
-                    2a:0e:6b:be:02:ad:11:6a:ec:23:fe:55:d9:54:f3:
-                    5c:20:bc:ec:bf:a6:99:9b:7a:2e:71:10:92:51:a7:
-                    cb:79:af:b4:12:4e:26:03:ab:35:e2:5b:00:45:54:
-                    fe:91
-                Exponent: 65537 (0x10001)
-        X509v3 extensions:
-            X509v3 Key Usage: critical
-                Digital Signature, Key Encipherment
-            X509v3 Extended Key Usage:
-                TLS Web Server Authentication, TLS Web Client Authentication
-            X509v3 Basic Constraints: critical
-                CA:FALSE
-            X509v3 Subject Alternative Name:
-                URI:spiffe://cluster.local/ns/my-ns/sa/my-sa
-    Signature Algorithm: sha256WithRSAEncryption
-         78:77:7f:83:cc:fc:f4:30:12:57:78:62:e9:e2:48:d6:ea:76:
-         69:99:02:e9:62:d2:53:db:2c:13:fe:0f:00:56:2b:83:ca:d3:
-         4c:d2:01:f6:08:af:01:f2:e2:3e:bb:af:a3:bf:95:97:aa:de:
-         1e:e6:51:8c:21:ee:52:f0:d3:af:9c:fd:f7:f9:59:16:da:40:
-         4d:53:db:47:bb:9c:25:1a:6e:34:41:42:d9:26:f7:3a:a6:90:
-         2d:82:42:97:08:f4:6b:16:84:d1:ad:e3:82:2c:ce:1c:d6:cd:
-         68:e6:b0:5e:b5:63:55:3e:f1:ff:e1:a0:42:cd:88:25:56:f7:
-         a8:88:a1:ec:53:f9:c1:2a:bb:5c:d7:f8:cb:0e:d9:f4:af:2e:
-         eb:85:60:89:b3:d0:32:60:b4:a8:a1:ee:f3:3a:61:60:11:da:
-         2d:7f:2d:35:ce:6e:d4:eb:5c:82:cf:5c:9a:02:c0:31:33:35:
-         51:2b:91:79:8a:92:50:d9:e0:58:0a:78:9d:59:f4:d3:39:21:
-         bb:b4:41:f9:f7:ec:ad:dd:76:be:28:58:c0:1f:e8:26:5a:9e:
-         7b:7f:14:a9:18:8d:61:d1:06:e3:9e:0f:05:9e:1b:66:0c:66:
-         d1:27:13:6d:ab:59:46:00:77:6e:25:f6:e8:41:ef:49:58:73:
-         b4:93:04:46
-{{< /text >}}
-
-Make sure the displayed certificate contains valid information. In particular, the Subject Alternative Name field should be `URI:spiffe://cluster.local/ns/my-ns/sa/my-sa`.
-If this is not the case, it is likely that something is wrong with your Citadel. Try to redeploy Citadel and check again.
-
-Finally, you can verify that the key and certificate are correctly mounted by your sidecar proxy at the directory `/etc/certs`. You
-can use this command to check:
-
-{{< text bash >}}
-$ kubectl exec -it my-pod-id -c istio-proxy -- ls /etc/certs
-cert-chain.pem key.pem root-cert.pem
-{{< /text >}}
-
-Optionally, you could use the following command to check its contents:
-
-{{< text bash >}}
-$ kubectl exec -it my-pod-id -c istio-proxy -- cat /etc/certs/cert-chain.pem | openssl x509 -text -noout
-Certificate:
-    Data:
-        Version: 3 (0x2)
-        Serial Number:
-            7e:b4:44:fe:d0:46:ba:27:47:5a:50:c8:f0:8e:8b:da
-        Signature Algorithm: sha256WithRSAEncryption
-        Issuer: O = k8s.cluster.local
-        Validity
-            Not Before: Jul 13 01:23:13 2018 GMT
-            Not After : Oct 11 01:23:13 2018 GMT
-        Subject: O =
-        Subject Public Key Info:
-            Public Key Algorithm: rsaEncryption
-                Public-Key: (2048 bit)
-                Modulus:
-                    00:bb:c9:cd:f4:b8:b5:e4:3b:f2:35:aa:4c:67:cc:
-                    1b:a9:30:c4:b7:fd:0a:f5:ac:94:05:b5:82:96:b2:
-                    c8:98:85:f9:fc:09:b3:28:34:5e:79:7e:a9:3c:58:
-                    0a:14:43:c1:f4:d7:b8:76:ab:4e:1c:89:26:e8:55:
-                    cd:13:6b:45:e9:f1:67:e1:9b:69:46:b4:7e:8c:aa:
-                    fd:70:de:21:15:4f:f5:f3:0f:b7:d4:c6:b5:9d:56:
-                    ef:8a:91:d7:16:fa:db:6e:4c:24:71:1c:9c:f3:d9:
-                    4b:83:f1:dd:98:5b:63:5c:98:5e:2f:15:29:0f:78:
-                    31:04:bc:1d:c8:78:c3:53:4f:26:b2:61:86:53:39:
-                    0a:3b:72:3e:3d:0d:22:61:d6:16:72:5d:64:e3:78:
-                    c8:23:9d:73:17:07:5a:6b:79:75:91:ce:71:4b:77:
-                    c5:1f:60:f1:da:ca:aa:85:56:5c:13:90:23:02:20:
-                    12:66:3f:8f:58:b8:aa:72:9d:36:f1:f3:b7:2b:2d:
-                    3e:bb:7c:f9:b5:44:b9:57:cf:fc:2f:4b:3c:e6:ee:
-                    51:ba:23:be:09:7b:e2:02:6a:6e:e7:83:06:cd:6c:
-                    be:7a:90:f1:1f:2c:6d:12:9e:2f:0f:e4:8c:5f:31:
-                    b1:a2:fa:0b:71:fa:e1:6a:4a:0f:52:16:b4:11:73:
-                    65:d9
-                Exponent: 65537 (0x10001)
-        X509v3 extensions:
-            X509v3 Key Usage: critical
-                Digital Signature, Key Encipherment
-            X509v3 Extended Key Usage:
-                TLS Web Server Authentication, TLS Web Client Authentication
-            X509v3 Basic Constraints: critical
-                CA:FALSE
-            X509v3 Subject Alternative Name:
-                URI:spiffe://cluster.local/ns/default/sa/bookinfo-productpage
-    Signature Algorithm: sha256WithRSAEncryption
-         8f:be:af:a4:ee:f7:be:21:e9:c8:c9:e2:3b:d3:ac:41:18:5d:
-         f8:9a:85:0f:98:f3:35:af:b7:e1:2d:58:5a:e0:50:70:98:cc:
-         75:f6:2e:55:25:ed:66:e7:a4:b9:4a:aa:23:3b:a6:ee:86:63:
-         9f:d8:f9:97:73:07:10:25:59:cc:d9:01:09:12:f9:ab:9e:54:
-         24:8a:29:38:74:3a:98:40:87:67:e4:96:d0:e6:c7:2d:59:3d:
-         d3:ea:dd:6e:40:5f:63:bf:30:60:c1:85:16:83:66:66:0b:6a:
-         f5:ab:60:7e:f5:3b:44:c6:11:5b:a1:99:0c:bd:53:b3:a7:cc:
-         e2:4b:bd:10:eb:fb:f0:b0:e5:42:a4:b2:ab:0c:27:c8:c1:4c:
-         5b:b5:1b:93:25:9a:09:45:7c:28:31:13:a3:57:1c:63:86:5a:
-         55:ed:14:29:db:81:e3:34:47:14:ba:52:d6:3c:3d:3b:51:50:
-         89:a9:db:17:e4:c4:57:ec:f8:22:98:b7:e7:aa:8a:72:28:9a:
-         a7:27:75:60:85:20:17:1d:30:df:78:40:74:ea:bc:ce:7b:e5:
-         a5:57:32:da:6d:f2:64:fb:28:94:7d:28:37:6f:3c:97:0e:9c:
-         0c:33:42:f0:b6:f5:1c:0d:fb:70:65:aa:93:3e:ca:0e:58:ec:
-         8e:d5:d0:1e
-{{< /text >}}
|
@@ -1,13 +0,0 @@
----
-title: Mutual TLS
-description: What to do if mutual TLS authentication isn't working.
-weight: 30
-aliases:
-- /help/ops/security/mutual-tls
----
-
-If you suspect problems with mutual TLS, first ensure that [Citadel is healthy](/docs/ops/security/repairing-citadel/), and
-second ensure that [keys and certificates are being delivered](/docs/ops/security/keys-and-certs/) to sidecars properly.
-
-If everything appears to be working so far, the next step is to verify that the right [authentication policy](/docs/tasks/security/authn-policy/) is applied and
-the right destination rules are in place.
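To illustrate the final check in the removed page above: `istioctl` of this era ships an `authn tls-check` command that compares the authentication policies and destination rules that apply to a service. A minimal sketch, with a hypothetical pod name:

{{< text bash >}}
$ istioctl authn tls-check productpage-v1-6c886ff494-hm7zk productpage.default.svc.cluster.local
{{< /text >}}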
@@ -1,7 +1,7 @@
 ---
-title: Installation and Setup
-description: Helps you diagnose and repair Istio installations.
-weight: 80
+title: Installation and Configuration
+description: Describes important requirements, concepts, and considerations for installing and configuring Istio.
+weight: 2
 keywords: [ops,setup]
 aliases:
 - /help/ops/setup
@@ -0,0 +1,45 @@
+---
+title: Automatic Sidecar Injection
+description: Describes Istio's use of Kubernetes webhooks for automatic sidecar injection.
+weight: 5
+aliases:
+- /help/ops/setup/injection
+---
+
+Automatic sidecar injection adds the sidecar proxy into user-created
+pods. It uses a `MutatingWebhook` to append the sidecar's containers
+and volumes to each pod's template spec during creation
+time. Injection can be scoped to particular sets of namespaces using
+the webhook's `namespaceSelector` mechanism. Injection can also be
+enabled and disabled per-pod with an annotation.
+
+Whether or not a sidecar is injected depends on three pieces of configuration and two security rules:
+
+Configuration:
+
+- the webhook's `namespaceSelector`
+- the default `policy`
+- the per-pod override annotation (`sidecar.istio.io/inject`)
+
+Security rules:
+
+- sidecars cannot be injected in the `kube-system` or `kube-public` namespaces
+- sidecars cannot be injected into pods that use the host network
+
+The following truth table shows the final injection status based on
+the three configuration items. The security rules above cannot be overridden.
+
+| `namespaceSelector` match | default `policy` | Pod override annotation `sidecar.istio.io/inject` | Sidecar injected? |
+|---------------------------|------------------|---------------------------------------------------|-------------------|
+| yes                       | enabled          | true                                              | yes               |
+| yes                       | enabled          | false                                             | no                |
+| yes                       | enabled          |                                                   | yes               |
+| yes                       | disabled         | true                                              | yes               |
+| yes                       | disabled         | false                                             | no                |
+| yes                       | disabled         |                                                   | no                |
+| no                        | enabled          | true                                              | no                |
+| no                        | enabled          | false                                             | no                |
+| no                        | enabled          |                                                   | no                |
+| no                        | disabled         | true                                              | no                |
+| no                        | disabled         | false                                             | no                |
+| no                        | disabled         |                                                   | no                |
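As a usage sketch for the new page above: with the default injector configuration, the `namespaceSelector` match in the first column of the truth table is driven by the `istio-injection` namespace label, so it is controlled like this (the namespace name is illustrative):

{{< text bash >}}
$ kubectl label namespace default istio-injection=enabled
$ kubectl get namespace -L istio-injection
{{< /text >}}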
@@ -1,7 +1,7 @@
 ---
 title: Required Pod Capabilities
 description: Describes how to check which capabilities are allowed for your pods.
-weight: 40
+weight: 9
 aliases:
 - /help/ops/setup/required-pod-capabilities
 ---
@@ -3,7 +3,7 @@ title: Configuration Validation Webhook
 description: Describes Istio's use of Kubernetes webhooks for server-side configuration validation.
 weight: 20
 aliases:
-- /help/ops/setup/validation
+- /help/ops/setup/validation
 ---

 Galley's configuration validation ensures user authored Istio
@@ -23,299 +23,3 @@ port 443. Each webhook has its own `clientConfig`, `namespaceSelector`,
 and `rules` section. Both webhooks are scoped to all namespaces. The
 `namespaceSelector` should be empty. Both rules apply to Istio Custom
 Resource Definitions (CRDs).
-
-## Seemingly valid configuration is rejected
-
-Manually verify your configuration is correct, cross-referencing
-[Istio API reference](/docs/reference/config) when
-necessary.
-
-## Invalid configuration is accepted
-
-Verify the `istio-galley` `validatingwebhookconfiguration` exists and
-is correct. The `apiVersion`, `apiGroup`, and `resource` of the
-invalid configuration should be listed in one of the two `webhooks`
-entries.
-
-{{< text bash yaml >}}
-$ kubectl get validatingwebhookconfiguration istio-galley -o yaml
-apiVersion: admissionregistration.k8s.io/v1beta1
-kind: ValidatingWebhookConfiguration
-metadata:
-  labels:
-    app: istio-galley
-  name: istio-galley
-  ownerReferences:
-  - apiVersion: extensions/v1beta1
-    blockOwnerDeletion: true
-    controller: true
-    kind: Deployment
-    name: istio-galley
-    uid: 5c64585d-91c6-11e8-a98a-42010a8001a8
-webhooks:
-- clientConfig:
-    # caBundle should be non-empty. This is periodically (re)patched
-    # every second by the webhook service using the ca-cert
-    # from the mounted service account secret.
-    caBundle: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM1VENDQWMyZ0F3SUJBZ0lRVzVYNWpJcnJCemJmZFdLaWVoaVVSakFOQmdrcWhraUc5dzBCQVFzRkFEQWMKTVJvd0dBWURWUVFLRXhGck9ITXVZMngxYzNSbGNpNXNiMk5oYkRBZUZ3MHhPREEzTWpjeE56VTJNakJhRncweApPVEEzTWpjeE56VTJNakJhTUJ3eEdqQVlCZ05WQkFvVEVXczRjeTVqYkhWemRHVnlMbXh2WTJGc01JSUJJakFOCkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQXdVMi9SdWlyeTNnUzdPd2xJRCtaaGZiOEpOWnMKK05OL0dRWUsxbVozb3duaEw4dnJHdDBhenpjNXFuOXo2ZEw5Z1pPVFJXeFVCYXVJMUpOa3d0dSt2NmRjRzlkWgp0Q2JaQWloc1BLQWQ4MVRaa3RwYkNnOFdrcTRyNTh3QldRemNxMldsaFlPWHNlWGtRejdCbStOSUoyT0NRbmJwCjZYMmJ4Slc2OGdaZkg2UHlNR0libXJxaDgvZ2hISjFha3ptNGgzc0VGU1dTQ1Y2anZTZHVJL29NM2pBem5uZlUKU3JKY3VpQnBKZmJSMm1nQm4xVmFzNUJNdFpaaTBubDYxUzhyZ1ZiaHp4bWhpeFhlWU0zQzNHT3FlRUthY0N3WQo0TVczdEJFZ3NoN2ovZGM5cEt1ZG1wdFBFdit2Y2JnWjdreEhhazlOdFV2YmRGempJeTMxUS9Qd1NRSURBUUFCCm95TXdJVEFPQmdOVkhROEJBZjhFQkFNQ0FnUXdEd1lEVlIwVEFRSC9CQVV3QXdFQi96QU5CZ2txaGtpRzl3MEIKQVFzRkFBT0NBUUVBTnRLSnVkQ3NtbTFzU3dlS2xKTzBIY1ZMQUFhbFk4ZERUYWVLNksyakIwRnl0MkM3ZUtGSAoya3JaOWlkbWp5Yk8xS0djMVlWQndNeWlUMGhjYWFlaTdad2g0aERRWjVRN0k3ZFFuTVMzc2taR3ByaW5idU1aCmg3Tm1WUkVnV1ZIcm9OcGZEN3pBNEVqWk9FZzkwR0J6YXUzdHNmanI4RDQ1VVRJZUw3M3hwaUxmMXhRTk10RWEKd0NSelplQ3lmSUhra2ZrTCtISVVGK0lWV1g2VWp2WTRpRDdRR0JCenpHZTluNS9KM1g5OU1Gb1F3bExjNHMrTQpnLzNQdnZCYjBwaTU5MWxveXluU3lkWDVqUG5ibDhkNEFJaGZ6OU8rUTE5UGVULy9ydXFRNENOancrZmVIbTBSCjJzYmowZDd0SjkyTzgwT2NMVDlpb05NQlFLQlk3cGlOUkE9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
-    service:
-      # service corresponds to the Kubernetes service that implements the
-      # webhook, e.g. istio-galley.istio-system.svc:443
-      name: istio-galley
-      namespace: istio-system
-      path: /admitpilot
-  failurePolicy: Fail
-  name: pilot.validation.istio.io
-  namespaceSelector: {}
-  rules:
-  - apiGroups:
-    - config.istio.io
-    apiVersions:
-    - v1alpha2
-    operations:
-    - CREATE
-    - UPDATE
-    resources:
-    - httpapispecs
-    - httpapispecbindings
-    - quotaspecs
-    - quotaspecbindings
-  - apiGroups:
-    - rbac.istio.io
-    apiVersions:
-    - '*'
-    operations:
-    - CREATE
-    - UPDATE
-    resources:
-    - '*'
-  - apiGroups:
-    - authentication.istio.io
-    apiVersions:
-    - '*'
-    operations:
-    - CREATE
-    - UPDATE
-    resources:
-    - '*'
-  - apiGroups:
-    - networking.istio.io
-    apiVersions:
-    - '*'
-    operations:
-    - CREATE
-    - UPDATE
-    resources:
-    - destinationrules
-    - envoyfilters
-    - gateways
-    - virtualservices
-- clientConfig:
-    # caBundle should be non-empty. This is periodically (re)patched
-    # every second by the webhook service using the ca-cert
-    # from the mounted service account secret.
-    caBundle: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM1VENDQWMyZ0F3SUJBZ0lRVzVYNWpJcnJCemJmZFdLaWVoaVVSakFOQmdrcWhraUc5dzBCQVFzRkFEQWMKTVJvd0dBWURWUVFLRXhGck9ITXVZMngxYzNSbGNpNXNiMk5oYkRBZUZ3MHhPREEzTWpjeE56VTJNakJhRncweApPVEEzTWpjeE56VTJNakJhTUJ3eEdqQVlCZ05WQkFvVEVXczRjeTVqYkhWemRHVnlMbXh2WTJGc01JSUJJakFOCkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQXdVMi9SdWlyeTNnUzdPd2xJRCtaaGZiOEpOWnMKK05OL0dRWUsxbVozb3duaEw4dnJHdDBhenpjNXFuOXo2ZEw5Z1pPVFJXeFVCYXVJMUpOa3d0dSt2NmRjRzlkWgp0Q2JaQWloc1BLQWQ4MVRaa3RwYkNnOFdrcTRyNTh3QldRemNxMldsaFlPWHNlWGtRejdCbStOSUoyT0NRbmJwCjZYMmJ4Slc2OGdaZkg2UHlNR0libXJxaDgvZ2hISjFha3ptNGgzc0VGU1dTQ1Y2anZTZHVJL29NM2pBem5uZlUKU3JKY3VpQnBKZmJSMm1nQm4xVmFzNUJNdFpaaTBubDYxUzhyZ1ZiaHp4bWhpeFhlWU0zQzNHT3FlRUthY0N3WQo0TVczdEJFZ3NoN2ovZGM5cEt1ZG1wdFBFdit2Y2JnWjdreEhhazlOdFV2YmRGempJeTMxUS9Qd1NRSURBUUFCCm95TXdJVEFPQmdOVkhROEJBZjhFQkFNQ0FnUXdEd1lEVlIwVEFRSC9CQVV3QXdFQi96QU5CZ2txaGtpRzl3MEIKQVFzRkFBT0NBUUVBTnRLSnVkQ3NtbTFzU3dlS2xKTzBIY1ZMQUFhbFk4ZERUYWVLNksyakIwRnl0MkM3ZUtGSAoya3JaOWlkbWp5Yk8xS0djMVlWQndNeWlUMGhjYWFlaTdad2g0aERRWjVRN0k3ZFFuTVMzc2taR3ByaW5idU1aCmg3Tm1WUkVnV1ZIcm9OcGZEN3pBNEVqWk9FZzkwR0J6YXUzdHNmanI4RDQ1VVRJZUw3M3hwaUxmMXhRTk10RWEKd0NSelplQ3lmSUhra2ZrTCtISVVGK0lWV1g2VWp2WTRpRDdRR0JCenpHZTluNS9KM1g5OU1Gb1F3bExjNHMrTQpnLzNQdnZCYjBwaTU5MWxveXluU3lkWDVqUG5ibDhkNEFJaGZ6OU8rUTE5UGVULy9ydXFRNENOancrZmVIbTBSCjJzYmowZDd0SjkyTzgwT2NMVDlpb05NQlFLQlk3cGlOUkE9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
-    service:
-      # service corresponds to the Kubernetes service that implements the
-      # webhook, e.g. istio-galley.istio-system.svc:443
-      name: istio-galley
-      namespace: istio-system
-      path: /admitmixer
-  failurePolicy: Fail
-  name: mixer.validation.istio.io
-  namespaceSelector: {}
-  rules:
-  - apiGroups:
-    - config.istio.io
-    apiVersions:
-    - v1alpha2
-    operations:
-    - CREATE
-    - UPDATE
-    resources:
-    - rules
-    - attributemanifests
-    - circonuses
-    - deniers
-    - fluentds
-    - kubernetesenvs
-    - listcheckers
-    - memquotas
-    - noops
-    - opas
-    - prometheuses
-    - rbacs
-    - servicecontrols
-    - solarwindses
-    - stackdrivers
-    - statsds
-    - stdios
-    - apikeys
-    - authorizations
-    - checknothings
-    - listentries
-    - logentries
-    - metrics
-    - quotas
-    - reportnothings
-    - servicecontrolreports
-    - tracespans
-{{< /text >}}
-
-If the `validatingwebhookconfiguration` doesn't exist, verify the
-`istio-galley-configuration` `configmap` exists. `istio-galley` uses
-the data from this configmap to create and update the
-`validatingwebhookconfiguration`.
-
-{{< text bash yaml >}}
-$ kubectl -n istio-system get configmap istio-galley-configuration -o jsonpath='{.data}'
-map[validatingwebhookconfiguration.yaml:apiVersion: admissionregistration.k8s.io/v1beta1
-kind: ValidatingWebhookConfiguration
-metadata:
-  name: istio-galley
-  namespace: istio-system
-  labels:
-    app: istio-galley
-    chart: galley-1.0.0
-    release: istio
-    heritage: Tiller
-webhooks:
-  - name: pilot.validation.istio.io
-    clientConfig:
-      service:
-        name: istio-galley
-        namespace: istio-system
-        path: "/admitpilot"
-      caBundle: ""
-    rules:
-      - operations:
-      (... snip ...)
-{{< /text >}}
-
-If the webhook array in `istio-galley-configuration` is empty and
-you're using `helm template` or `helm install`, verify the `--set
-galley.enabled` and `--set global.configValidation=true` options are
-set. If you're not using helm, you'll need to generate YAML
-that includes the populated webhook array.
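For the helm options mentioned in the deleted lines above, a minimal sketch of a corresponding `helm template` invocation; the chart path is an assumption matching the layout of Istio release archives of this era:

{{< text bash >}}
$ helm template install/kubernetes/helm/istio --name istio --namespace istio-system \
    --set galley.enabled=true --set global.configValidation=true > istio.yaml
{{< /text >}}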
|
||||
The `istio-galley` validation configuration is fail-close. If
|
||||
configuration exists and is scoped properly, the webhook will be
|
||||
invoked. A missing `caBundle`, bad certificate, or network connectivity
|
||||
problem will produce an error message when the resource is
|
||||
created/updated. If you don’t see any error message and the webhook
|
||||
wasn’t invoked and the webhook configuration is valid, your cluster is
|
||||
misconfigured.
|
||||
|
||||
## Creating configuration fails with x509 certificate errors
|
||||
|
||||
`x509: certificate signed by unknown authority` related errors are
|
||||
typically caused by an empty `caBundle` in the webhook
|
||||
configuration. Verify that it is not empty (see [verify webhook
|
||||
configuration](#invalid-configuration-is-accepted)). The
|
||||
`istio-galley` deployment consciously reconciles webhook configuration
|
||||
using the `istio-galley-configuration` `configmap` and root certificate
|
||||
mounted from the `istio.istio-galley-service-account` secret in the
|
||||
`istio-system` namespace.
|
||||
|
||||
1. Verify the `istio-galley` pod(s) are running:
|
||||
|
||||
{{< text bash >}}
|
||||
$ kubectl -n istio-system get pod -listio=galley
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
istio-galley-5dbbbdb746-d676g 1/1 Running 0 2d
|
||||
{{< /text >}}
|
||||
|
||||
1. Verify you're using Istio version >= 1.0.0. Older versions of Galley
|
||||
did not properly re-patch the `caBundle`. This typically happened
|
||||
when the `istio.yaml` was re-applied, overwriting a previously
|
||||
patched `caBundle`.
|
||||
|
||||
{{< text bash >}}
|
||||
$ for pod in $(kubectl -n istio-system get pod -listio=galley -o jsonpath='{.items[*].metadata.name}'); do \
|
||||
kubectl -n istio-system exec ${pod} -it /usr/local/bin/galley version | grep ^Version; \
|
||||
done
|
||||
Version: 1.0.0
|
||||
{{< /text >}}
|
||||
|
||||
1. Check the Galley pod logs for errors. Failing to patch the
|
||||
`caBundle` should print an error.
|
||||
|
||||
{{< text bash >}}
|
||||
$ for pod in $(kubectl -n istio-system get pod -listio=galley -o jsonpath='{.items[*].metadata.name}'); do \
|
||||
kubectl -n istio-system logs ${pod}; \
|
||||
done
|
||||
{{< /text >}}
|
||||
|
||||
1. If the patching failed, verify the RBAC configuration for Galley:
|
||||
|
||||
{{< text bash yaml >}}
|
||||
$ kubectl get clusterrole istio-galley-istio-system -o yaml
|
||||
apiVersion: rbac.authorization.k8s.io/v1
|
||||
kind: ClusterRole
|
||||
metadata:
|
||||
labels:
|
||||
app: istio-galley
|
||||
name: istio-galley-istio-system
|
||||
rules:
|
||||
- apiGroups:
|
||||
- admissionregistration.k8s.io
|
||||
resources:
|
||||
- validatingwebhookconfigurations
|
||||
verbs:
|
||||
- '*'
|
||||
- apiGroups:
|
||||
- config.istio.io
|
||||
resources:
|
||||
- '*'
|
||||
verbs:
|
||||
- get
|
||||
- list
|
||||
- watch
|
||||
- apiGroups:
|
||||
- '*'
|
||||
resourceNames:
|
||||
- istio-galley
|
||||
resources:
|
||||
- deployments
|
||||
verbs:
|
||||
- get
|
||||
{{< /text >}}
|
||||
|
||||
`istio-galley` needs `validatingwebhookconfigurations` write access to
|
||||
create and update the `istio-galley` `validatingwebhookconfiguration`.
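You can check this permission directly with `kubectl auth can-i` (a sketch; the service account name assumes a default installation, and running it requires permission to impersonate that service account). It should print `yes`:

{{< text bash >}}
$ kubectl auth can-i update validatingwebhookconfigurations --as=system:serviceaccount:istio-system:istio-galley-service-account
{{< /text >}}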
|
||||
|
||||
## Creating configuration fails with `no such hosts` or `no endpoints available` errors
|
||||
|
||||
Validation is fail-close. If the `istio-galley` pod is not ready,
|
||||
configuration cannot be created or updated. In such cases you'll see
|
||||
an error about `no endpoints available`.
|
||||
|
||||
Verify the `istio-galley` pod(s) are running and endpoints are ready.
|
||||
|
||||
{{< text bash >}}
|
||||
$ kubectl -n istio-system get pod -listio=galley
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
istio-galley-5dbbbdb746-d676g 1/1 Running 0 2d
|
||||
{{< /text >}}
|
||||
|
||||
{{< text bash >}}
|
||||
$ kubectl -n istio-system get endpoints istio-galley
|
||||
NAME ENDPOINTS AGE
|
||||
istio-galley 10.48.6.108:10514,10.48.6.108:443 3d
|
||||
{{< /text >}}
|
||||
|
||||
If the pods or endpoints aren't ready, check the pod logs and
|
||||
status for any indication about why the webhook pod is failing to start
|
||||
and serve traffic.
|
||||
|
||||
{{< text bash >}}
|
||||
$ for pod in $(kubectl -n istio-system get pod -listio=galley -o jsonpath='{.items[*].metadata.name}'); do \
|
||||
kubectl -n istio-system logs ${pod}; \
|
||||
done
|
||||
{{< /text >}}
|
||||
|
||||
{{< text bash >}}
|
||||
$ for pod in $(kubectl -n istio-system get pod -listio=galley -o name); do \
|
||||
kubectl -n istio-system describe ${pod}; \
|
||||
done
|
||||
{{< /text >}}
|
||||
|
|
|
@ -1,7 +1,7 @@
|
|||
---
|
||||
title: Telemetry
|
||||
description: Helps you manage telemetry collection and visualization in a running mesh.
|
||||
weight: 50
|
||||
weight: 30
|
||||
keywords: [ops,telemetry]
|
||||
aliases:
|
||||
- /help/ops/telemetry
|
||||
|
|
|
@ -33,7 +33,7 @@ keys are:
|
|||
|
||||
To see the Envoy settings for statistics data collection use
|
||||
`istioctl proxy-config bootstrap` and follow the
|
||||
[deep dive into Envoy configuration](/docs/ops/traffic-management/proxy-cmd/#deep-dive-into-envoy-configuration).
|
||||
[deep dive into Envoy configuration](/docs/ops/troubleshooting/proxy-cmd/#deep-dive-into-envoy-configuration).
|
||||
Envoy only collects statistical data on items matching the `inclusion_list` within
|
||||
the `stats_matcher` JSON element.
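For example, you can locate that element in the bootstrap output directly (a sketch; the pod name is illustrative, and the element name may appear in camel case depending on the Envoy version):

{{< text bash >}}
$ istioctl proxy-config bootstrap productpage-v1-1234567890-abcde | grep -A 5 stats_matcher
{{< /text >}}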
|
||||
|
||||
|
|
|
@ -1,7 +1,7 @@
|
|||
---
|
||||
title: Traffic Management
|
||||
description: Helps you manage the networking aspects of a running mesh.
|
||||
weight: 30
|
||||
weight: 23
|
||||
keywords: [ops,traffic-management]
|
||||
aliases:
|
||||
- /help/ops/traffic-management
|
||||
|
|
|
@ -1,9 +1,10 @@
|
|||
---
|
||||
title: Deployment and Configuration Guidelines
|
||||
description: Provides specific deployment and configuration guidelines.
|
||||
weight: 20
|
||||
title: Avoiding Traffic Management Issues
|
||||
description: Provides specific deployment or configuration guidelines to avoid networking or traffic management issues.
|
||||
weight: 2
|
||||
aliases:
|
||||
- /help/ops/traffic-management/deploy-guidelines
|
||||
- /help/ops/deploy-guidelines
|
||||
---
|
||||
|
||||
This section provides specific deployment or configuration guidelines to avoid networking or traffic management issues.
|
||||
|
@ -219,7 +220,7 @@ spec:
|
|||
The downside of this kind of configuration is that other configuration (e.g., route rules) for any of the
|
||||
underlying microservices, will need to also be included in this single configuration file, instead of
|
||||
in separate resources associated with, and potentially owned by, the individual service teams.
|
||||
See [Route rules have no effect on ingress gateway requests](/docs/ops/traffic-management/troubleshooting/#route-rules-have-no-effect-on-ingress-gateway-requests)
|
||||
See [Route rules have no effect on ingress gateway requests](/docs/ops/troubleshooting/network-issues)
|
||||
for details.
|
||||
|
||||
To avoid this problem, it may be preferable to break up the configuration of `myapp.com` into several
|
||||
|
@ -319,30 +320,4 @@ To make sure services will have zero down-time when configuring routes with subs
|
|||
|
||||
1. Update the `DestinationRule` to remove the unused subsets.
|
||||
|
||||
## Browser problem when multiple gateways configured with same TLS certificate
|
||||
|
||||
Configuring more than one gateway using the same TLS certificate will cause browsers
|
||||
that leverage [HTTP/2 connection reuse](https://httpwg.org/specs/rfc7540.html#reuse)
|
||||
(i.e., most browsers) to produce 404 errors when accessing a second host after a
|
||||
connection to another host has already been established.
|
||||
|
||||
For example, let's say you have 2 hosts that share the same TLS certificate like this:
|
||||
|
||||
* Wildcard certificate `*.test.com` installed in `istio-ingressgateway`
|
||||
* `Gateway` configuration `gw1` with host `service1.test.com`, selector `istio: ingressgateway`, and TLS using gateway's mounted (wildcard) certificate
|
||||
* `Gateway` configuration `gw2` with host `service2.test.com`, selector `istio: ingressgateway`, and TLS using gateway's mounted (wildcard) certificate
|
||||
* `VirtualService` configuration `vs1` with host `service1.test.com` and gateway `gw1`
|
||||
* `VirtualService` configuration `vs2` with host `service2.test.com` and gateway `gw2`
|
||||
|
||||
Since both gateways are served by the same workload (i.e., selector `istio: ingressgateway`) requests to both services
|
||||
(`service1.test.com` and `service2.test.com`) will resolve to the same IP. If `service1.test.com` is accessed first, it
|
||||
will return the wildcard certificate (`*.test.com`) indicating that connections to `service2.test.com` can use the same certificate.
|
||||
Browsers like Chrome and Firefox will consequently reuse the existing connection for requests to `service2.test.com`.
|
||||
Since the gateway (`gw1`) has no route for `service2.test.com`, it will then return a 404 (Not Found) response.
|
||||
|
||||
You can avoid this problem by configuring a single wildcard `Gateway`, instead of two (`gw1` and `gw2`).
|
||||
Then, simply bind both `VirtualServices` to it like this:
|
||||
|
||||
* `Gateway` configuration `gw` with host `*.test.com`, selector `istio: ingressgateway`, and TLS using gateway's mounted (wildcard) certificate
|
||||
* `VirtualService` configuration `vs1` with host `service1.test.com` and gateway `gw`
|
||||
* `VirtualService` configuration `vs2` with host `service2.test.com` and gateway `gw`
|
||||
|
|
|
@ -1,9 +1,10 @@
|
|||
---
|
||||
title: Introduction to Network Operations
|
||||
description: An introduction to Istio networking operational aspects.
|
||||
weight: 10
|
||||
weight: 1
|
||||
aliases:
|
||||
- /help/ops/traffic-management/introduction
|
||||
- /help/ops/introduction
|
||||
---
|
||||
This section is intended as a guide to operators of an Istio-based
|
||||
deployment. It will provide information that an operator of an Istio deployment
|
||||
|
|
|
@ -0,0 +1,10 @@
|
|||
---
|
||||
title: Troubleshooting
|
||||
description: Describes how to identify and resolve common problems in Istio.
|
||||
weight: 50
|
||||
keywords: [ops]
|
||||
aliases:
|
||||
- /help/ops/troubleshooting
|
||||
- /help/ops/traffic-management/troubleshooting
|
||||
- /help/ops/setup
|
||||
---
|
|
@ -1,7 +1,7 @@
|
|||
---
|
||||
title: Component Logging
|
||||
description: Describes how to use component-level logging to get insights into a running component's behavior.
|
||||
weight: 10
|
||||
weight: 97
|
||||
keywords: [ops]
|
||||
aliases:
|
||||
- /help/ops/component-logging
|
||||
|
@ -45,7 +45,7 @@ $ mixs server --log_output_level attributes=debug,adapters=warning
|
|||
{{< /text >}}
|
||||
|
||||
In addition to controlling the output level from the command-line, you can also control the output level of a running component
|
||||
by using its [ControlZ](/docs/ops/controlz) interface.
|
||||
by using its [ControlZ](/docs/ops/troubleshooting/controlz) interface.
|
||||
|
||||
## Controlling output
|
||||
|
|
@ -1,7 +1,7 @@
|
|||
---
|
||||
title: Component Introspection
|
||||
description: Describes how to use ControlZ to get insight into individual running components.
|
||||
weight: 20
|
||||
weight: 98
|
||||
keywords: [ops]
|
||||
aliases:
|
||||
- /help/ops/controlz
|
|
@ -1,9 +1,10 @@
|
|||
---
|
||||
title: Grafana
|
||||
title: Missing Grafana Output
|
||||
description: Dealing with Grafana issues.
|
||||
weight: 90
|
||||
weight: 89
|
||||
aliases:
|
||||
- /help/ops/telemetry/grafana
|
||||
- /help/ops/troubleshooting/grafana
|
||||
---
|
||||
|
||||
If you're unable to get Grafana output when connecting from a local web client to a remotely hosted Istio instance, you
|
|
@ -1,49 +1,9 @@
|
|||
---
|
||||
title: Sidecar Injection Webhook
|
||||
description: Describes Istio's use of Kubernetes webhooks for automatic sidecar injection.
|
||||
weight: 30
|
||||
aliases:
|
||||
- /help/ops/setup/injection
|
||||
title: Sidecar Injection Problems
|
||||
description: Resolve common problems with Istio's use of Kubernetes webhooks for automatic sidecar injection.
|
||||
weight: 5
|
||||
---
|
||||
|
||||
Automatic sidecar injection adds the sidecar proxy into user-created
|
||||
pods. It uses a `MutatingWebhook` to append the sidecar’s containers
|
||||
and volumes to each pod’s template spec during creation
|
||||
time. Injection can be scoped to particular sets of namespaces using
|
||||
the webhook's `namespaceSelector` mechanism. Injection can also be
|
||||
enabled and disabled per-pod with an annotation.
|
||||
|
||||
Whether or not a sidecar is injected depends on three pieces of configuration and two security rules:
|
||||
|
||||
Configuration:
|
||||
|
||||
- webhook's `namespaceSelector`
|
||||
- default `policy`
|
||||
- per-pod override annotation
|
||||
|
||||
Security rules:
|
||||
|
||||
- sidecars cannot be injected in the `kube-system` or `kube-public` namespaces
|
||||
- sidecars cannot be injected into pods that use the host network
|
||||
|
||||
The following truth table shows the final injection status based on
|
||||
the three configuration items. The security rules above cannot be overridden.
|
||||
|
||||
| `namespaceSelector` match | default `policy` | Pod override annotation `sidecar.istio.io/inject` | Sidecar injected? |
|
||||
|---------------------------|------------------|---------------------------------------------------|-----------|
|
||||
| yes | enabled | true | yes |
|
||||
| yes | enabled | false | no |
|
||||
| yes | enabled | | yes |
|
||||
| yes | disabled | true | yes |
|
||||
| yes | disabled | false | no |
|
||||
| yes | disabled | | no |
|
||||
| no | enabled | true | no |
|
||||
| no | enabled | false | no |
|
||||
| no | enabled | | no |
|
||||
| no | disabled | true | no |
|
||||
| no | disabled | false | no |
|
||||
| no | disabled | | no |
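For example, the following pod opts out of injection even when its namespace matches the `namespaceSelector` and the default policy is `enabled` (the pod name and image are illustrative):

{{< text yaml >}}
apiVersion: v1
kind: Pod
metadata:
  name: example
  annotations:
    sidecar.istio.io/inject: "false"
spec:
  containers:
  - name: app
    image: example/app:latest
{{< /text >}}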
|
||||
|
||||
## The result of sidecar injection was not what I expected
|
||||
|
||||
This includes an injected sidecar when it wasn't expected and a lack
|
||||
|
@ -232,3 +192,30 @@ $ for pod in $(kubectl -n istio-system get pod -listio=sidecar-injector -o name)
|
|||
kubectl -n istio-system describe ${pod}; \
|
||||
done
|
||||
{{< /text >}}
|
||||
|
||||
## Automatic sidecar injection fails if the Kubernetes API server has proxy settings
|
||||
|
||||
When the Kubernetes API server includes proxy settings such as:
|
||||
|
||||
{{< text yaml >}}
|
||||
env:
|
||||
- name: http_proxy
|
||||
value: http://proxy-wsa.esl.foo.com:80
|
||||
- name: https_proxy
|
||||
value: http://proxy-wsa.esl.foo.com:80
|
||||
- name: no_proxy
|
||||
value: 127.0.0.1,localhost,dockerhub.foo.com,devhub-docker.foo.com,10.84.100.125,10.84.100.126,10.84.100.127
|
||||
{{< /text >}}
|
||||
|
||||
With these settings, sidecar injection fails. The only related failure log can be found in the `kube-apiserver` log:
|
||||
|
||||
{{< text plain >}}
|
||||
W0227 21:51:03.156818 1 admission.go:257] Failed calling webhook, failing open sidecar-injector.istio.io: failed calling admission webhook "sidecar-injector.istio.io": Post https://istio-sidecar-injector.istio-system.svc:443/inject: Service Unavailable
|
||||
{{< /text >}}
|
||||
|
||||
Make sure both pod and service CIDRs are not proxied according to the `*_proxy` variables. Check the `kube-apiserver` files and logs to verify the configuration and whether any requests are being proxied.
|
||||
|
||||
One workaround is to remove the proxy settings from the `kube-apiserver` manifest; another is to include `istio-sidecar-injector.istio-system.svc` or `.svc` in the `no_proxy` value. Make sure that `kube-apiserver` is restarted after applying either workaround.
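For example, the amended `no_proxy` entry for the settings above might look like this (all values other than the appended webhook hostname are carried over from the earlier example):

{{< text yaml >}}
- name: no_proxy
  value: 127.0.0.1,localhost,dockerhub.foo.com,devhub-docker.foo.com,10.84.100.125,10.84.100.126,10.84.100.127,istio-sidecar-injector.istio-system.svc
{{< /text >}}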
|
||||
|
||||
A related [issue](https://github.com/kubernetes/kubeadm/issues/666) was filed with Kubernetes and has since been closed; for background, see
|
||||
[this discussion](https://github.com/kubernetes/kubernetes/pull/58698#discussion_r163879443).
|
|
@ -1,12 +1,18 @@
|
|||
---
|
||||
title: Using the istioctl command-line tool
|
||||
description: Istio includes a supplemental tool that provides debugging and diagnosis for Istio service mesh deployments.
|
||||
weight: 20
|
||||
weight: 1
|
||||
keywords: [istioctl,bash,zsh,shell,command-line]
|
||||
aliases:
|
||||
- /help/ops/component-debugging
|
||||
---
|
||||
|
||||
## Overview
|
||||
|
||||
You can gain insights into what individual components are doing by inspecting their [logs](/docs/ops/troubleshooting/component-logging/)
|
||||
or peering inside via [introspection](/docs/ops/troubleshooting/controlz/). If that's insufficient, the steps below explain
|
||||
how to get under the hood.
|
||||
|
||||
The `istioctl` tool is a configuration command line utility that allows service operators to debug and diagnose their Istio service mesh deployments. The Istio project also includes two helpful scripts for `istioctl` that enable auto-completion for Bash and ZSH. Both of these scripts provide support for the currently available `istioctl` commands.
|
||||
|
||||
{{< tip >}}
|
||||
|
@ -15,6 +21,54 @@ The `istioctl` tool is a configuration command line utility that allows service
|
|||
|
||||
Documentation for the complete set of supported commands can be found in [`istioctl` reference](/docs/reference/commands/istioctl/).
|
||||
|
||||
### Get an overview of your mesh
|
||||
|
||||
You can get an overview of your mesh using the `proxy-status` command:
|
||||
|
||||
{{< text bash >}}
|
||||
$ istioctl proxy-status
|
||||
{{< /text >}}
|
||||
|
||||
If a proxy is missing from the output list, it means that it is not currently connected to a Pilot instance and so it
|
||||
will not receive any configuration. Additionally, if it is marked stale, it likely means there are networking issues or
|
||||
Pilot needs to be scaled.
|
||||
|
||||
### Get proxy configuration
|
||||
|
||||
`istioctl` allows you to retrieve information about proxy configuration using the `proxy-config` or `pc` command.
|
||||
|
||||
For example, to retrieve information about cluster configuration for the Envoy instance in a specific pod:
|
||||
|
||||
{{< text bash >}}
|
||||
$ istioctl proxy-config cluster <pod-name> [flags]
|
||||
{{< /text >}}
|
||||
|
||||
To retrieve information about bootstrap configuration for the Envoy instance in a specific pod:
|
||||
|
||||
{{< text bash >}}
|
||||
$ istioctl proxy-config bootstrap <pod-name> [flags]
|
||||
{{< /text >}}
|
||||
|
||||
To retrieve information about listener configuration for the Envoy instance in a specific pod:
|
||||
|
||||
{{< text bash >}}
|
||||
$ istioctl proxy-config listener <pod-name> [flags]
|
||||
{{< /text >}}
|
||||
|
||||
To retrieve information about route configuration for the Envoy instance in a specific pod:
|
||||
|
||||
{{< text bash >}}
|
||||
$ istioctl proxy-config route <pod-name> [flags]
|
||||
{{< /text >}}
|
||||
|
||||
To retrieve information about endpoint configuration for the Envoy instance in a specific pod:
|
||||
|
||||
{{< text bash >}}
|
||||
$ istioctl proxy-config endpoints <pod-name> [flags]
|
||||
{{< /text >}}
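As a concrete sketch, the following retrieves the cluster configuration in JSON form for a hypothetical `productpage` pod (the pod name is illustrative):

{{< text bash >}}
$ istioctl proxy-config cluster productpage-v1-1234567890-abcde -o json
{{< /text >}}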
|
||||
|
||||
See [Debugging Envoy and Pilot](/docs/ops/troubleshooting/proxy-cmd/) for more advice on interpreting this information.
|
||||
|
||||
## `istioctl` auto-completion
|
||||
|
||||
{{< tabset cookie-name="prereqs" >}}
|
|
@ -1,9 +1,11 @@
|
|||
---
|
||||
title: Missing Metrics
|
||||
description: Diagnose problems where metrics are not being collected.
|
||||
weight: 10
|
||||
weight: 29
|
||||
aliases:
|
||||
- /help/ops/telemetry/missing-metrics
|
||||
- /help/ops/troubleshooting/missing-metrics
|
||||
|
||||
---
|
||||
|
||||
The procedures below help you diagnose problems where metrics
|
|
@ -0,0 +1,29 @@
|
|||
---
|
||||
title: Missing Zipkin Traces
|
||||
description: Fix missing traces in Zipkin.
|
||||
weight: 90
|
||||
aliases:
|
||||
- /help/ops/troubleshooting/missing-traces
|
||||
---
|
||||
## No traces appearing in Zipkin when running Istio locally on Mac
|
||||
|
||||
Istio is installed and everything seems to be working except there are no traces showing up in Zipkin when there
|
||||
should be.
|
||||
|
||||
This may be caused by a known [Docker issue](https://github.com/docker/for-mac/issues/1260) where the time inside
|
||||
containers may skew significantly from the time on the host machine. If this is the case,
|
||||
when you select a very long date range in Zipkin you will see the traces appearing as much as several days too early.
|
||||
|
||||
You can also confirm this problem by comparing the date inside a Docker container to outside:
|
||||
|
||||
{{< text bash >}}
|
||||
$ docker run --entrypoint date gcr.io/istio-testing/ubuntu-16-04-slave:latest
|
||||
Sun Jun 11 11:44:18 UTC 2017
|
||||
{{< /text >}}
|
||||
|
||||
{{< text bash >}}
|
||||
$ date -u
|
||||
Thu Jun 15 02:25:42 UTC 2017
|
||||
{{< /text >}}
|
||||
|
||||
To fix the problem, you'll need to shutdown and then restart Docker before reinstalling Istio.
|
|
@ -1,13 +1,12 @@
|
|||
---
|
||||
title: Troubleshooting Networking Issues
|
||||
description: Describes common networking issues and how to recognize and avoid them.
|
||||
weight: 30
|
||||
title: Network Problems
|
||||
description: Tools and techniques to address common Istio traffic management and network problems.
|
||||
weight: 4
|
||||
aliases:
|
||||
- /help/ops/traffic-management/troubleshooting
|
||||
- /help/ops/troubleshooting/network-issues
|
||||
---
|
||||
|
||||
This section describes common problems and tools and techniques to address issues related to traffic management.
|
||||
|
||||
## Requests are rejected by Envoy
|
||||
|
||||
Requests may be rejected for various reasons. The best way to understand why requests are being rejected is
|
||||
|
@ -340,3 +339,31 @@ server {
|
|||
}
|
||||
}
|
||||
{{< /text >}}
|
||||
|
||||
## 404 errors occur when multiple gateways configured with same TLS certificate
|
||||
|
||||
Configuring more than one gateway using the same TLS certificate will cause browsers
|
||||
that leverage [HTTP/2 connection reuse](https://httpwg.org/specs/rfc7540.html#reuse)
|
||||
(i.e., most browsers) to produce 404 errors when accessing a second host after a
|
||||
connection to another host has already been established.
|
||||
|
||||
For example, let's say you have 2 hosts that share the same TLS certificate like this:
|
||||
|
||||
- Wildcard certificate `*.test.com` installed in `istio-ingressgateway`
|
||||
- `Gateway` configuration `gw1` with host `service1.test.com`, selector `istio: ingressgateway`, and TLS using gateway's mounted (wildcard) certificate
|
||||
- `Gateway` configuration `gw2` with host `service2.test.com`, selector `istio: ingressgateway`, and TLS using gateway's mounted (wildcard) certificate
|
||||
- `VirtualService` configuration `vs1` with host `service1.test.com` and gateway `gw1`
|
||||
- `VirtualService` configuration `vs2` with host `service2.test.com` and gateway `gw2`
|
||||
|
||||
Since both gateways are served by the same workload (i.e., selector `istio: ingressgateway`) requests to both services
|
||||
(`service1.test.com` and `service2.test.com`) will resolve to the same IP. If `service1.test.com` is accessed first, it
|
||||
will return the wildcard certificate (`*.test.com`) indicating that connections to `service2.test.com` can use the same certificate.
|
||||
Browsers like Chrome and Firefox will consequently reuse the existing connection for requests to `service2.test.com`.
|
||||
Since the gateway (`gw1`) has no route for `service2.test.com`, it will then return a 404 (Not Found) response.
|
||||
|
||||
You can avoid this problem by configuring a single wildcard `Gateway`, instead of two (`gw1` and `gw2`).
|
||||
Then, simply bind both `VirtualServices` to it like this (a sketch of the combined gateway follows the list):
|
||||
|
||||
- `Gateway` configuration `gw` with host `*.test.com`, selector `istio: ingressgateway`, and TLS using gateway's mounted (wildcard) certificate
|
||||
- `VirtualService` configuration `vs1` with host `service1.test.com` and gateway `gw`
|
||||
- `VirtualService` configuration `vs2` with host `service2.test.com` and gateway `gw`
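A minimal sketch of the combined gateway follows; the port and certificate paths shown are the conventional defaults for the ingress gateway's mounted certificate and may differ in your environment:

{{< text yaml >}}
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: gw
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE
      serverCertificate: /etc/istio/ingressgateway-certs/tls.crt
      privateKey: /etc/istio/ingressgateway-certs/tls.key
    hosts:
    - "*.test.com"
{{< /text >}}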
|
|
@ -1,10 +1,12 @@
|
|||
---
|
||||
title: Debugging Envoy and Pilot
|
||||
description: Describes tools and techniques to diagnose Envoy configuration issues related to traffic management.
|
||||
weight: 40
|
||||
weight: 20
|
||||
keywords: [debug,proxy,status,config,pilot,envoy]
|
||||
aliases:
|
||||
- /help/ops/traffic-management/proxy-cmd
|
||||
- /help/ops/misc
|
||||
- /help/ops/troubleshooting/proxy-cmd
|
||||
---
|
||||
|
||||
Istio provides two very valuable commands to help diagnose traffic management configuration problems,
|
||||
|
@ -313,3 +315,50 @@ $ istioctl proxy-config bootstrap -n istio-system istio-ingressgateway-7d6874b48
|
|||
},
|
||||
...
|
||||
{{< /text >}}
|
||||
|
||||
## Verifying connectivity to Istio Pilot
|
||||
|
||||
Verifying connectivity to Pilot is a useful troubleshooting step. Every proxy container in the service mesh should be able to communicate with Pilot. This can be accomplished in a few simple steps:
|
||||
|
||||
1. Get the name of the Istio Ingress pod:
|
||||
|
||||
{{< text bash >}}
|
||||
$ INGRESS_POD_NAME=$(kubectl get po -n istio-system | grep ingressgateway\- | awk '{print$1}'); echo ${INGRESS_POD_NAME};
|
||||
{{< /text >}}
|
||||
|
||||
1. Exec into the Istio Ingress pod:
|
||||
|
||||
{{< text bash >}}
|
||||
$ kubectl exec -it $INGRESS_POD_NAME -n istio-system /bin/bash
|
||||
{{< /text >}}
|
||||
|
||||
1. Test connectivity to Pilot using `curl`. The following example invokes the v1 registration API using the default Pilot configuration parameters, with mutual TLS enabled:
|
||||
|
||||
{{< text bash >}}
|
||||
$ curl -k --cert /etc/certs/cert-chain.pem --cacert /etc/certs/root-cert.pem --key /etc/certs/key.pem https://istio-pilot:8080/debug/edsz
|
||||
{{< /text >}}
|
||||
|
||||
If mutual TLS is disabled:
|
||||
|
||||
{{< text bash >}}
|
||||
$ curl http://istio-pilot:8080/debug/edsz
|
||||
{{< /text >}}
|
||||
|
||||
You should receive a response listing the "service-key" and "hosts" for each service in the mesh.
|
||||
|
||||
## What Envoy version is Istio using?
|
||||
|
||||
To find out the Envoy version used in deployment, you can `exec` into the container and query the `server_info` endpoint:
|
||||
|
||||
{{< text bash >}}
|
||||
$ kubectl exec -it PODNAME -c istio-proxy -n NAMESPACE /bin/bash
|
||||
root@5c7e9d3a4b67:/# curl localhost:15000/server_info
|
||||
envoy 0/1.9.0-dev//RELEASE live 57964 57964 0
|
||||
{{< /text >}}
|
||||
|
||||
In addition, the `Envoy` and `istio-api` repository versions are stored as labels on the image:
|
||||
|
||||
{{< text bash >}}
|
||||
$ docker inspect -f '{{json .Config.Labels }}' ISTIO-PROXY-IMAGE
|
||||
{"envoy-vcs-ref":"b3be5713f2100ab5c40316e73ce34581245bd26a","istio-api-vcs-ref":"825044c7e15f6723d558b7b878855670663c2e1e"}
|
||||
{{< /text >}}
|
|
@ -1,10 +1,12 @@
|
|||
---
|
||||
title: Repairing Citadel
|
||||
description: What to do if Citadel is not behaving properly.
|
||||
weight: 10
|
||||
weight: 15
|
||||
keywords: [security,citadel,ops]
|
||||
aliases:
|
||||
- /help/ops/security/repairing-citadel
|
||||
- /help/ops/troubleshooting/repairing-citadel
|
||||
|
||||
---
|
||||
|
||||
{{< warning >}}
|
|
@ -0,0 +1,526 @@
|
|||
---
|
||||
title: Security Problems
|
||||
description: Tools and techniques to address common Istio authentication, authorization, and general security-related problems.
|
||||
weight: 5
|
||||
---
|
||||
|
||||
## End-user Authentication Fails
|
||||
|
||||
With Istio, you can enable authentication for end users. Currently, the end user credential supported by the Istio authentication policy is JWT. The following is a guide for troubleshooting end-user JWT authentication.
|
||||
|
||||
1. Check your Istio authentication policy; `principalBinding` should be set to `USE_ORIGIN` to authenticate the end user.
|
||||
|
||||
1. If `jwksUri` isn't set, make sure the JWT issuer is in URL format and that `url + /.well-known/openid-configuration` can be opened in a browser; for example, if the JWT issuer is `https://accounts.google.com`, make sure `https://accounts.google.com/.well-known/openid-configuration` is a valid URL that can be opened in a browser.
|
||||
|
||||
{{< text yaml >}}
|
||||
apiVersion: "authentication.istio.io/v1alpha1"
|
||||
kind: "Policy"
|
||||
metadata:
|
||||
name: "example-3"
|
||||
spec:
|
||||
targets:
|
||||
- name: httpbin
|
||||
peers:
|
||||
- mtls:
|
||||
origins:
|
||||
- jwt:
|
||||
issuer: "628645741881-noabiu23f5a8m8ovd8ucv698lj78vv0l@developer.gserviceaccount.com"
|
||||
jwksUri: "https://www.googleapis.com/service_accounts/v1/jwk/628645741881-noabiu23f5a8m8ovd8ucv698lj78vv0l@developer.gserviceaccount.com"
|
||||
principalBinding: USE_ORIGIN
|
||||
{{< /text >}}
|
||||
|
||||
1. If the JWT token is placed in the Authorization header in http requests, make sure the JWT token is valid (not expired, etc). The fields in a JWT token can be decoded by using online JWT parsing tools, e.g., [jwt.io](https://jwt.io/).
|
||||
|
||||
1. Get the Istio proxy (i.e., Envoy) logs to verify that the configuration Pilot distributes is correct.
|
||||
|
||||
For example, if the authentication policy is enforced on the `httpbin` service in the namespace `foo`, use the command below to get logs from the Istio proxy, then verify that `local_jwks` is set and that the HTTP response code appears in the logs.
|
||||
|
||||
{{< text bash >}}
|
||||
$ kubectl logs httpbin-68fbcdcfc7-hrnzm -c istio-proxy -n foo
|
||||
[2018-07-04 19:13:30.762][15][info][config] ./src/envoy/http/jwt_auth/auth_store.h:72] Loaded JwtAuthConfig: rules {
|
||||
issuer: "628645741881-noabiu23f5a8m8ovd8ucv698lj78vv0l@developer.gserviceaccount.com"
|
||||
local_jwks {
|
||||
inline_string: "{\n \"keys\": [\n {\n \"kty\": \"RSA\",\n \"alg\": \"RS256\",\n \"use\": \"sig\",\n \"kid\": \"03bc39a6b56602c0d2ad421c3993d5e4f88e6f54\",\n \"n\": \"u9gnSMDYw4ggVKInAfxpXqItv9Ii7PlUFrAcwANQMW9fbZrFpITFD45t0gUy9CK4QewkLhqDDUJSvpH7wprS8Hi0M8wAJf_lgugdRr6Nc2qK-eywjjDK-afQjhGLcMJGS0YXi3K2lyP-oWiLingMbYRiJxTi86icWT8AU8bKoTyTPFOExAJkDFnquulU0_KlteZxbjnRIVvMKfpgZ3yK9Pzv7XjtdvO7xlr59K9Zotd4mgphIUADfw1fR0lNkjHQp9N0WP9cbOsyUwm5jjDklnyVh7yBHcEk1YHccntosxnwIn-cj538PSaL_qDZgDAsJKHPZlkiP_1mjsu3NkofIQ\",\n \"e\": \"AQAB\"\n },\n {\n \"kty\": \"RSA\",\n \"alg\": \"RS256\",\n \"use\": \"sig\",\n \"kid\": \"60aef5b0877e9f0d67b787b5be797636735efdee\",\n \"n\": \"0TmzDEN12GF9UaWJI40oKwJlu53ZQihHcaVi1thLGs1l3ubdPWv8MEsc9X2DjCRxEB6Ss1R2VOImrQ2RWFuBSNHorjE0_GyEGNzvOH-0uUQ5uES2HvEN7384XfUYj9MoTPibstDEl84pm4d3Ka3R_1wk03Jrl9MIq6fnV_4Z-F7O7ElGqk8xcsiVUowd447dwlrd55ChIyISF5PvbCLtOKz9FgTz2mEb8jmzuZQs5yICgKZCzlJ7xNOOmZcqCZf9Qzaz4OnVLXykBLzSuLMtxvvOxf53rvWB0F2__CjKlEWBCQkB39Zaa_4I8dCAVxgkeQhgoU26BdzLL28xjWzdbw\",\n \"e\": \"AQAB\"\n },\n {\n \"kty\": \"RSA\",\n \"alg\": \"RS256\",\n \"use\": \"sig\",\n \"kid\": \"62a93512c9ee4c7f8067b5a216dade2763d32a47\",\n \"n\": \"0YWnm_eplO9BFtXszMRQNL5UtZ8HJdTH2jK7vjs4XdLkPW7YBkkm_2xNgcaVpkW0VT2l4mU3KftR-6s3Oa5Rnz5BrWEUkCTVVolR7VYksfqIB2I_x5yZHdOiomMTcm3DheUUCgbJRv5OKRnNqszA4xHn3tA3Ry8VO3X7BgKZYAUh9fyZTFLlkeAh0-bLK5zvqCmKW5QgDIXSxUTJxPjZCgfx1vmAfGqaJb-nvmrORXQ6L284c73DUL7mnt6wj3H6tVqPKA27j56N0TB1Hfx4ja6Slr8S4EB3F1luYhATa1PKUSH8mYDW11HolzZmTQpRoLV8ZoHbHEaTfqX_aYahIw\",\n \"e\": \"AQAB\"\n },\n {\n \"kty\": \"RSA\",\n \"alg\": \"RS256\",\n \"use\": \"sig\",\n \"kid\": \"b3319a147514df7ee5e4bcdee51350cc890cc89e\",\n \"n\": \"qDi7Tx4DhNvPQsl1ofxxc2ePQFcs-L0mXYo6TGS64CY_2WmOtvYlcLNZjhuddZVV2X88m0MfwaSA16wE-RiKM9hqo5EY8BPXj57CMiYAyiHuQPp1yayjMgoE1P2jvp4eqF-BTillGJt5W5RuXti9uqfMtCQdagB8EC3MNRuU_KdeLgBy3lS3oo4LOYd-74kRBVZbk2wnmmb7IhP9OoLc1-7-9qU1uhpDxmE6JwBau0mDSwMnYDS4G_ML17dC-ZDtLd1i24STUw39KH0pcSdfFbL2NtEZdNeam1DDdk0iUtJSPZliUHJBI_pj8M-2Mn_oA8jBuI8YKwBqYkZCN1I95Q\",\n \"e\": \"AQAB\"\n }\n ]\n}\n"
|
||||
}
|
||||
forward: true
|
||||
forward_payload_header: "istio-sec-8a85f33ec44c5ccbaf951742ff0aaa34eb94d9bd"
|
||||
}
|
||||
allow_missing_or_failed: true
|
||||
[2018-07-04 19:13:30.763][15][info][upstream] external/envoy/source/server/lds_api.cc:62] lds: add/update listener '10.8.2.9_8000'
|
||||
[2018-07-04T19:13:39.755Z] "GET /ip HTTP/1.1" 401 - 0 29 0 - "-" "curl/7.35.0" "e8374005-1957-99e4-96b6-9d6ec5bef396" "httpbin.foo:8000" "-"
|
||||
[2018-07-04T19:13:40.463Z] "GET /ip HTTP/1.1" 401 - 0 29 0 - "-" "curl/7.35.0" "9badd659-fa0e-9ca9-b4c0-9ac225571929" "httpbin.foo:8000" "-"
|
||||
{{< /text >}}
|
||||
|
||||
## Authorization is Too Restrictive
|
||||
|
||||
When you first enable authorization for a service, all requests are denied by default. After you add one or more authorization policies,
|
||||
matching requests should flow through. If all requests continue to be denied, you can try the following:
|
||||
|
||||
1. Make sure there are no typos in your policy YAML file.
|
||||
|
||||
1. Avoid enabling authorization for Istio control plane components, including Mixer, Pilot, and Ingress. Istio authorization policy is designed for authorizing access to services in an Istio mesh. Enabling it for Istio control plane components may cause unexpected behavior.
|
||||
|
||||
1. Make sure that your `ServiceRoleBinding` and the referenced `ServiceRole` objects are in the same namespace (by checking the `metadata/namespace` line).
|
||||
|
||||
1. Make sure that your service role and service role binding policies don't use any HTTP only fields
|
||||
for TCP services. Otherwise, Istio ignores the policies as if they didn't exist.
|
||||
|
||||
1. In a Kubernetes environment, make sure all services in a `ServiceRole` object are in the same namespace as the
|
||||
`ServiceRole` itself. For example, if a service in a `ServiceRole` object is `a.default.svc.cluster.local`, the `ServiceRole` must be in the
|
||||
`default` namespace (`metadata/namespace` line should be `default`). For non-Kubernetes environments, all `ServiceRoles` and `ServiceRoleBindings`
|
||||
for a mesh should be in the same namespace.
|
||||
|
||||
1. Visit [Ensure Authorization is Enabled Correctly](/docs/ops/troubleshooting/security-issues/#ensure-authorization-is-enabled-correctly)
|
||||
to find out the exact cause.
|
||||
|
||||
## Authorization is Too Permissive
|
||||
|
||||
If authorization checks are enabled for a service and yet requests to the
|
||||
service aren't being blocked, then authorization was likely not enabled
|
||||
successfully. To verify, follow these steps:
|
||||
|
||||
1. Check the [enable authorization docs](/docs/concepts/security/#enabling-authorization)
|
||||
to correctly enable Istio authorization.
|
||||
|
||||
1. Avoid enabling authorization for Istio control plane components, including
|
||||
Mixer, Pilot, and Ingress. The Istio authorization features are designed for
|
||||
authorizing access to services in an Istio mesh. Enabling the authorization
|
||||
features for the Istio control plane components can cause unexpected
|
||||
behavior.
|
||||
|
||||
1. In your Kubernetes environment, check deployments in all namespaces to make
|
||||
sure there is no legacy deployment left that can cause an error in Pilot.
|
||||
You can disable Pilot's authorization plug-in if there is an error pushing
|
||||
authorization policy to Envoy.
|
||||
|
||||
1. Visit [Ensure Authorization is Enabled Correctly](/docs/ops/troubleshooting/security-issues/#ensure-authorization-is-enabled-correctly)
|
||||
to find out the exact cause.
|
||||
|
||||
## Ensure Authorization is Enabled Correctly
|
||||
|
||||
The `ClusterRbacConfig` custom resource, a cluster-level singleton that must be named `default`, controls the authorization functionality globally.
|
||||
|
||||
1. Run the following command to list existing `ClusterRbacConfig`:
|
||||
|
||||
{{< text bash >}}
|
||||
$ kubectl get clusterrbacconfigs.rbac.istio.io --all-namespaces
|
||||
{{< /text >}}
|
||||
|
||||
1. Verify there is only **one** instance of `ClusterRbacConfig` with name `default`. Otherwise, Istio disables the
|
||||
authorization functionality and ignores all policies.
|
||||
|
||||
{{< text plain >}}
|
||||
NAMESPACE NAME AGE
|
||||
default default 1d
|
||||
{{< /text >}}
|
||||
|
||||
1. If there is more than one `ClusterRbacConfig` instance, remove any additional `ClusterRbacConfig` instances and
|
||||
ensure **only one** instance is named `default`.
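For example (the instance name is a placeholder for whatever extra instance you found in the previous step):

{{< text bash >}}
$ kubectl delete clusterrbacconfigs.rbac.istio.io <extra-instance-name>
{{< /text >}}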
|
||||
|
||||
## Ensure Pilot Accepts the Policies
|
||||
|
||||
Pilot converts and distributes your authorization policies to the proxies. The following steps help
|
||||
you ensure Pilot is working as expected:
|
||||
|
||||
1. Run the following command to export the Pilot `ControlZ`:
|
||||
|
||||
{{< text bash >}}
|
||||
$ kubectl port-forward $(kubectl -n istio-system get pods -l istio=pilot -o jsonpath='{.items[0].metadata.name}') -n istio-system 9876:9876
|
||||
{{< /text >}}
|
||||
|
||||
1. Verify you see the following output:
|
||||
|
||||
{{< text plain >}}
|
||||
Forwarding from 127.0.0.1:9876 -> 9876
|
||||
{{< /text >}}
|
||||
|
||||
1. Start your browser and open the `ControlZ` page at `http://127.0.0.1:9876/scopez/`.
|
||||
|
||||
1. Change the `rbac` Output Level to `debug`.
|
||||
|
||||
1. Use `Ctrl+C` in the terminal you started in step 1 to stop the port-forward command.
|
||||
|
||||
1. Print the log of Pilot and search for `rbac` with the following command:
|
||||
|
||||
{{< tip >}}
|
||||
You probably need to first delete and then re-apply your authorization policies so that
|
||||
the debug output is generated for these policies.
|
||||
{{< /tip >}}
|
||||
|
||||
{{< text bash >}}
|
||||
$ kubectl logs $(kubectl -n istio-system get pods -l istio=pilot -o jsonpath='{.items[0].metadata.name}') -c discovery -n istio-system | grep rbac
|
||||
{{< /text >}}
|
||||
|
||||
1. Check the output and verify:
|
||||
|
||||
- There are no errors.
|
||||
- There is a `"built filter config for ..."` message which means the filter is generated
|
||||
for the target service.
|
||||
|
||||
1. For example, you might see something similar to the following:
|
||||
|
||||
{{< text plain >}}
|
||||
2018-07-26T22:25:41.009838Z debug rbac building filter config for {sleep.foo.svc.cluster.local map[app:sleep pod-template-hash:3326367878] map[destination.name:sleep destination.namespace:foo destination.user:default]}
|
||||
2018-07-26T22:25:41.009915Z info rbac no service role in namespace foo
|
||||
2018-07-26T22:25:41.009957Z info rbac no service role binding in namespace foo
|
||||
2018-07-26T22:25:41.010000Z debug rbac generated filter config: { }
|
||||
2018-07-26T22:25:41.010114Z info rbac built filter config for sleep.foo.svc.cluster.local
|
||||
2018-07-26T22:25:41.182400Z debug rbac building filter config for {productpage.default.svc.cluster.local map[pod-template-hash:2600844901 version:v1 app:productpage] map[destination.name:productpage destination.namespace:default destination.user:bookinfo-productpage]}
|
||||
2018-07-26T22:25:41.183131Z debug rbac checking role app2-grpc-viewer
|
||||
2018-07-26T22:25:41.183214Z debug rbac role skipped for no AccessRule matched
|
||||
2018-07-26T22:25:41.183255Z debug rbac checking role productpage-viewer
|
||||
2018-07-26T22:25:41.183281Z debug rbac matched AccessRule[0]
|
||||
2018-07-26T22:25:41.183390Z debug rbac generated filter config: {policies:<key:"productpage-viewer" value:<permissions:<and_rules:<rules:<or_rules:<rules:<header:<name:":method" exact_match:"GET" > > > > > > principals:<and_ids:<ids:<any:true > > > > > }
|
||||
2018-07-26T22:25:41.184407Z info rbac built filter config for productpage.default.svc.cluster.local
|
||||
{{< /text >}}
|
||||
|
||||
This means Pilot generated:
|
||||
|
||||
- An empty config for `sleep.foo.svc.cluster.local`, as no authorization policies matched,
|
||||
and Istio denies all requests sent to this service by default.
|
||||
|
||||
- A config for `productpage.default.svc.cluster.local` that allows anyone to access it
|
||||
with the `GET` method.
|
||||
|
||||
## Ensure Pilot Distributes Policies to Proxies Correctly
|
||||
|
||||
Pilot distributes the authorization policies to proxies. The following steps help you ensure Pilot
|
||||
is working as expected:
|
||||
|
||||
{{< tip >}}
|
||||
The command used in this section assumes you have deployed the [Bookinfo application](/docs/examples/bookinfo/);
|
||||
otherwise, replace `"-l app=productpage"` with a selector for your actual pod.
|
||||
{{< /tip >}}
|
||||
|
||||
1. Run the following command to get the proxy configuration dump for the `productpage` service:
|
||||
|
||||
{{< text bash >}}
|
||||
$ kubectl exec $(kubectl get pods -l app=productpage -o jsonpath='{.items[0].metadata.name}') -c istio-proxy -- curl localhost:15000/config_dump -s
|
||||
{{< /text >}}
|
||||
|
||||
1. Check the output and verify:
|
||||
|
||||
- The output includes an `envoy.filters.http.rbac` filter to enforce the authorization policy
|
||||
on each incoming request.
|
||||
- Istio updates the filter accordingly after you update your authorization policy.
|
||||
|
||||
1. The following output means the proxy of `productpage` has enabled the `envoy.filters.http.rbac` filter
|
||||
with rules that allow anyone to access it via the `GET` method. The `shadow_rules` are not used and you can safely ignore them.
|
||||
|
||||
{{< text plain >}}
|
||||
{
|
||||
"name": "envoy.filters.http.rbac",
|
||||
"config": {
|
||||
"rules": {
|
||||
"policies": {
|
||||
"productpage-viewer": {
|
||||
"permissions": [
|
||||
{
|
||||
"and_rules": {
|
||||
"rules": [
|
||||
{
|
||||
"or_rules": {
|
||||
"rules": [
|
||||
{
|
||||
"header": {
|
||||
"exact_match": "GET",
|
||||
"name": ":method"
|
||||
}
|
||||
}
|
||||
]
|
||||
}
|
||||
}
|
||||
]
|
||||
}
|
||||
}
|
||||
],
|
||||
"principals": [
|
||||
{
|
||||
"and_ids": {
|
||||
"ids": [
|
||||
{
|
||||
"any": true
|
||||
}
|
||||
]
|
||||
}
|
||||
}
|
||||
]
|
||||
}
|
||||
}
|
||||
},
|
||||
"shadow_rules": {
|
||||
"policies": {}
|
||||
}
|
||||
}
|
||||
},
|
||||
{{< /text >}}
|
||||
|
||||
## Ensure Proxies Enforce Policies Correctly
|
||||
|
||||
Proxies eventually enforce the authorization policies. The following steps help you ensure the proxy
|
||||
is working as expected:
|
||||
|
||||
{{< tip >}}
|
||||
The command used in this section assumes you have deployed the [Bookinfo application](/docs/examples/bookinfo/);
|
||||
otherwise, replace `"-l app=productpage"` with a selector for your actual pod.
|
||||
{{< /tip >}}
|
||||
|
||||
1. Turn on the authorization debug logging in proxy with the following command:
|
||||
|
||||
{{< text bash >}}
|
||||
$ kubectl exec $(kubectl get pods -l app=productpage -o jsonpath='{.items[0].metadata.name}') -c istio-proxy -- curl -X POST localhost:15000/logging?rbac=debug -s
|
||||
{{< /text >}}
|
||||
|
||||
1. Verify you see the following output:
|
||||
|
||||
{{< text plain >}}
|
||||
active loggers:
|
||||
... ...
|
||||
rbac: debug
|
||||
... ...
|
||||
{{< /text >}}
|
||||
|
||||
1. Visit the `productpage` in your browser to generate some logs.
|
||||
|
||||
1. Print the proxy logs with the following command:
|
||||
|
||||
{{< text bash >}}
|
||||
$ kubectl logs $(kubectl get pods -l app=productpage -o jsonpath='{.items[0].metadata.name}') -c istio-proxy
|
||||
{{< /text >}}
|
||||
|
||||
1. Check the output and verify:
|
||||
|
||||
- The output log shows either `enforced allowed` or `enforced denied` depending on whether the request
|
||||
was allowed or denied respectively.
|
||||
|
||||
- Your authorization policy expects the data extracted from the request.
|
||||
|
||||
1. The following output means there is a `GET` request at path `/productpage` and the policy allows the request.
|
||||
The `shadow denied` message has no effect, and you can safely ignore it.
|
||||
|
||||
{{< text plain >}}
|
||||
...
|
||||
[2018-07-26 20:39:18.060][152][debug][rbac] external/envoy/source/extensions/filters/http/rbac/rbac_filter.cc:79] checking request: remoteAddress: 10.60.0.139:51158, localAddress: 10.60.0.93:9080, ssl: uriSanPeerCertificate: spiffe://cluster.local/ns/istio-system/sa/istio-ingressgateway-service-account, subjectPeerCertificate: O=, headers: ':authority', '35.238.0.62'
|
||||
':path', '/productpage'
|
||||
':method', 'GET'
|
||||
'upgrade-insecure-requests', '1'
|
||||
'user-agent', 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/67.0.3396.99 Safari/537.36'
|
||||
'dnt', '1'
|
||||
'accept', 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8'
|
||||
'accept-encoding', 'gzip, deflate'
|
||||
'accept-language', 'en-US,en;q=0.9,zh-CN;q=0.8,zh;q=0.7'
|
||||
'x-forwarded-for', '10.60.0.1'
|
||||
'x-forwarded-proto', 'http'
|
||||
'x-request-id', 'e23ea62d-b25d-91be-857c-80a058d746d4'
|
||||
'x-b3-traceid', '5983108bf6d05603'
|
||||
'x-b3-spanid', '5983108bf6d05603'
|
||||
'x-b3-sampled', '1'
|
||||
'x-istio-attributes', 'CikKGGRlc3RpbmF0aW9uLnNlcnZpY2UubmFtZRINEgtwcm9kdWN0cGFnZQoqCh1kZXN0aW5hdGlvbi5zZXJ2aWNlLm5hbWVzcGFjZRIJEgdkZWZhdWx0Ck8KCnNvdXJjZS51aWQSQRI/a3ViZXJuZXRlczovL2lzdGlvLWluZ3Jlc3NnYXRld2F5LTc2NjY0Y2NmY2Ytd3hjcjQuaXN0aW8tc3lzdGVtCj4KE2Rlc3RpbmF0aW9uLnNlcnZpY2USJxIlcHJvZHVjdHBhZ2UuZGVmYXVsdC5zdmMuY2x1c3Rlci5sb2NhbApDChhkZXN0aW5hdGlvbi5zZXJ2aWNlLmhvc3QSJxIlcHJvZHVjdHBhZ2UuZGVmYXVsdC5zdmMuY2x1c3Rlci5sb2NhbApBChdkZXN0aW5hdGlvbi5zZXJ2aWNlLnVpZBImEiRpc3RpbzovL2RlZmF1bHQvc2VydmljZXMvcHJvZHVjdHBhZ2U='
|
||||
'content-length', '0'
|
||||
'x-envoy-internal', 'true'
|
||||
'sec-istio-authn-payload', 'CkVjbHVzdGVyLmxvY2FsL25zL2lzdGlvLXN5c3RlbS9zYS9pc3Rpby1pbmdyZXNzZ2F0ZXdheS1zZXJ2aWNlLWFjY291bnQSRWNsdXN0ZXIubG9jYWwvbnMvaXN0aW8tc3lzdGVtL3NhL2lzdGlvLWluZ3Jlc3NnYXRld2F5LXNlcnZpY2UtYWNjb3VudA=='
|
||||
, dynamicMetadata: filter_metadata {
|
||||
key: "istio_authn"
|
||||
value {
|
||||
fields {
|
||||
key: "request.auth.principal"
|
||||
value {
|
||||
string_value: "cluster.local/ns/istio-system/sa/istio-ingressgateway-service-account"
|
||||
}
|
||||
}
|
||||
fields {
|
||||
key: "source.principal"
|
||||
value {
|
||||
string_value: "cluster.local/ns/istio-system/sa/istio-ingressgateway-service-account"
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
[2018-07-26 20:39:18.060][152][debug][rbac] external/envoy/source/extensions/filters/http/rbac/rbac_filter.cc:88] shadow denied
|
||||
[2018-07-26 20:39:18.060][152][debug][rbac] external/envoy/source/extensions/filters/http/rbac/rbac_filter.cc:98] enforced allowed
|
||||
...
|
||||
{{< /text >}}
|
||||
|
||||
## Keys and Certificates errors
|
||||
|
||||
If you suspect that some of the keys and/or certificates used by Istio aren't correct, the
|
||||
first step is to ensure that [Citadel is healthy](/docs/ops/troubleshooting/repairing-citadel/).
|
||||
|
||||
You can then verify that Citadel is actually generating keys and certificates:
|
||||
|
||||
{{< text bash >}}
|
||||
$ kubectl get secret istio.my-sa -n my-ns
|
||||
NAME TYPE DATA AGE
|
||||
istio.my-sa istio.io/key-and-cert 3 24d
|
||||
{{< /text >}}
|
||||
|
||||
Where `my-ns` and `my-sa` are the namespace and service account your pod is running as.
|
||||
|
||||
If you want to check the keys and certificates of other service accounts, you can run the following
|
||||
command to list all secrets for which Citadel has generated a key and certificate:
|
||||
|
||||
{{< text bash >}}
|
||||
$ kubectl get secret --all-namespaces | grep istio.io/key-and-cert
|
||||
NAMESPACE NAME TYPE DATA AGE
|
||||
.....
|
||||
istio-system istio.istio-citadel-service-account istio.io/key-and-cert 3 14d
|
||||
istio-system istio.istio-cleanup-old-ca-service-account istio.io/key-and-cert 3 14d
|
||||
istio-system istio.istio-egressgateway-service-account istio.io/key-and-cert 3 14d
|
||||
istio-system istio.istio-ingressgateway-service-account istio.io/key-and-cert 3 14d
|
||||
istio-system istio.istio-mixer-post-install-account istio.io/key-and-cert 3 14d
|
||||
istio-system istio.istio-mixer-service-account istio.io/key-and-cert 3 14d
|
||||
istio-system istio.istio-pilot-service-account istio.io/key-and-cert 3 14d
|
||||
istio-system istio.istio-sidecar-injector-service-account istio.io/key-and-cert 3 14d
|
||||
istio-system istio.prometheus istio.io/key-and-cert 3 14d
|
||||
kube-public istio.default istio.io/key-and-cert 3 14d
|
||||
.....
|
||||
{{< /text >}}
|
||||
|
||||
Then check that the certificate is valid with:
|
||||
|
||||
{{< text bash >}}
|
||||
$ kubectl get secret -o json istio.my-sa -n my-ns | jq -r '.data["cert-chain.pem"]' | base64 --decode | openssl x509 -noout -text
|
||||
Certificate:
|
||||
Data:
|
||||
Version: 3 (0x2)
|
||||
Serial Number:
|
||||
99:59:6b:a2:5a:f4:20:f4:03:d7:f0:bc:59:f5:d8:40
|
||||
Signature Algorithm: sha256WithRSAEncryption
|
||||
Issuer: O = k8s.cluster.local
|
||||
Validity
|
||||
Not Before: Jun 4 20:38:20 2018 GMT
|
||||
Not After : Sep 2 20:38:20 2018 GMT
|
||||
Subject: O =
|
||||
Subject Public Key Info:
|
||||
Public Key Algorithm: rsaEncryption
|
||||
Public-Key: (2048 bit)
|
||||
Modulus:
|
||||
00:c8:a0:08:24:61:af:c1:cb:81:21:90:cc:03:76:
|
||||
01:25:bc:ff:ca:25:fc:81:d1:fa:b8:04:aa:d4:6b:
|
||||
55:e9:48:f2:e4:ab:22:78:03:47:26:bb:8f:22:10:
|
||||
66:47:47:c3:b2:9a:70:f1:12:f1:b3:de:d0:e9:2d:
|
||||
28:52:21:4b:04:33:fa:3d:92:8c:ab:7f:cc:74:c9:
|
||||
c4:68:86:b0:4f:03:1b:06:33:48:e3:5b:8f:01:48:
|
||||
6a:be:64:0e:01:f5:98:6f:57:e4:e7:b7:47:20:55:
|
||||
98:35:f9:99:54:cf:a9:58:1e:1b:5a:0a:63:ce:cd:
|
||||
ed:d3:a4:88:2b:00:ee:b0:af:e8:09:f8:a8:36:b8:
|
||||
55:32:80:21:8e:b5:19:c0:2f:e8:ca:4b:65:35:37:
|
||||
2f:f1:9e:6f:09:d4:e0:b1:3d:aa:5f:fe:25:1a:7b:
|
||||
d4:dd:fe:d1:d3:b6:3c:78:1d:3b:12:c2:66:bd:95:
|
||||
a8:3b:64:19:c0:51:05:9f:74:3d:6e:86:1e:20:f5:
|
||||
ed:3a:ab:44:8d:7c:5b:11:14:83:ee:6b:a1:12:2e:
|
||||
2a:0e:6b:be:02:ad:11:6a:ec:23:fe:55:d9:54:f3:
|
||||
5c:20:bc:ec:bf:a6:99:9b:7a:2e:71:10:92:51:a7:
|
||||
cb:79:af:b4:12:4e:26:03:ab:35:e2:5b:00:45:54:
|
||||
fe:91
|
||||
Exponent: 65537 (0x10001)
|
||||
X509v3 extensions:
|
||||
X509v3 Key Usage: critical
|
||||
Digital Signature, Key Encipherment
|
||||
X509v3 Extended Key Usage:
|
||||
TLS Web Server Authentication, TLS Web Client Authentication
|
||||
X509v3 Basic Constraints: critical
|
||||
CA:FALSE
|
||||
X509v3 Subject Alternative Name:
|
||||
URI:spiffe://cluster.local/ns/my-ns/sa/my-sa
|
||||
Signature Algorithm: sha256WithRSAEncryption
|
||||
78:77:7f:83:cc:fc:f4:30:12:57:78:62:e9:e2:48:d6:ea:76:
|
||||
69:99:02:e9:62:d2:53:db:2c:13:fe:0f:00:56:2b:83:ca:d3:
|
||||
4c:d2:01:f6:08:af:01:f2:e2:3e:bb:af:a3:bf:95:97:aa:de:
|
||||
1e:e6:51:8c:21:ee:52:f0:d3:af:9c:fd:f7:f9:59:16:da:40:
|
||||
4d:53:db:47:bb:9c:25:1a:6e:34:41:42:d9:26:f7:3a:a6:90:
|
||||
2d:82:42:97:08:f4:6b:16:84:d1:ad:e3:82:2c:ce:1c:d6:cd:
|
||||
68:e6:b0:5e:b5:63:55:3e:f1:ff:e1:a0:42:cd:88:25:56:f7:
|
||||
a8:88:a1:ec:53:f9:c1:2a:bb:5c:d7:f8:cb:0e:d9:f4:af:2e:
|
||||
eb:85:60:89:b3:d0:32:60:b4:a8:a1:ee:f3:3a:61:60:11:da:
|
||||
2d:7f:2d:35:ce:6e:d4:eb:5c:82:cf:5c:9a:02:c0:31:33:35:
|
||||
51:2b:91:79:8a:92:50:d9:e0:58:0a:78:9d:59:f4:d3:39:21:
|
||||
bb:b4:41:f9:f7:ec:ad:dd:76:be:28:58:c0:1f:e8:26:5a:9e:
|
||||
7b:7f:14:a9:18:8d:61:d1:06:e3:9e:0f:05:9e:1b:66:0c:66:
|
||||
d1:27:13:6d:ab:59:46:00:77:6e:25:f6:e8:41:ef:49:58:73:
|
||||
b4:93:04:46
|
||||
{{< /text >}}
|
||||
|
||||
Make sure the displayed certificate contains valid information. In particular, the Subject Alternative Name field should be `URI:spiffe://cluster.local/ns/my-ns/sa/my-sa`.
|
||||
If this is not the case, it is likely that something is wrong with your Citadel. Try to redeploy Citadel and check again.
|
||||
|
||||
Finally, you can verify that the key and certificate are correctly mounted by your sidecar proxy at the directory `/etc/certs`. You
|
||||
can use this command to check:
|
||||
|
||||
{{< text bash >}}
|
||||
$ kubectl exec -it my-pod-id -c istio-proxy -- ls /etc/certs
|
||||
cert-chain.pem key.pem root-cert.pem
|
||||
{{< /text >}}
|
||||
|
||||
Optionally, you could use the following command to check its contents:
|
||||
|
||||
{{< text bash >}}
|
||||
$ kubectl exec -it my-pod-id -c istio-proxy -- cat /etc/certs/cert-chain.pem | openssl x509 -text -noout
|
||||
Certificate:
|
||||
Data:
|
||||
Version: 3 (0x2)
|
||||
Serial Number:
|
||||
7e:b4:44:fe:d0:46:ba:27:47:5a:50:c8:f0:8e:8b:da
|
||||
Signature Algorithm: sha256WithRSAEncryption
|
||||
Issuer: O = k8s.cluster.local
|
||||
Validity
|
||||
Not Before: Jul 13 01:23:13 2018 GMT
|
||||
Not After : Oct 11 01:23:13 2018 GMT
|
||||
Subject: O =
|
||||
Subject Public Key Info:
|
||||
Public Key Algorithm: rsaEncryption
|
||||
Public-Key: (2048 bit)
|
||||
Modulus:
|
||||
00:bb:c9:cd:f4:b8:b5:e4:3b:f2:35:aa:4c:67:cc:
|
||||
1b:a9:30:c4:b7:fd:0a:f5:ac:94:05:b5:82:96:b2:
|
||||
c8:98:85:f9:fc:09:b3:28:34:5e:79:7e:a9:3c:58:
|
||||
0a:14:43:c1:f4:d7:b8:76:ab:4e:1c:89:26:e8:55:
|
||||
cd:13:6b:45:e9:f1:67:e1:9b:69:46:b4:7e:8c:aa:
|
||||
fd:70:de:21:15:4f:f5:f3:0f:b7:d4:c6:b5:9d:56:
|
||||
ef:8a:91:d7:16:fa:db:6e:4c:24:71:1c:9c:f3:d9:
|
||||
4b:83:f1:dd:98:5b:63:5c:98:5e:2f:15:29:0f:78:
|
||||
31:04:bc:1d:c8:78:c3:53:4f:26:b2:61:86:53:39:
|
||||
0a:3b:72:3e:3d:0d:22:61:d6:16:72:5d:64:e3:78:
|
||||
c8:23:9d:73:17:07:5a:6b:79:75:91:ce:71:4b:77:
|
||||
c5:1f:60:f1:da:ca:aa:85:56:5c:13:90:23:02:20:
|
||||
12:66:3f:8f:58:b8:aa:72:9d:36:f1:f3:b7:2b:2d:
|
||||
3e:bb:7c:f9:b5:44:b9:57:cf:fc:2f:4b:3c:e6:ee:
|
||||
51:ba:23:be:09:7b:e2:02:6a:6e:e7:83:06:cd:6c:
|
||||
be:7a:90:f1:1f:2c:6d:12:9e:2f:0f:e4:8c:5f:31:
|
||||
b1:a2:fa:0b:71:fa:e1:6a:4a:0f:52:16:b4:11:73:
|
||||
65:d9
|
||||
Exponent: 65537 (0x10001)
|
||||
X509v3 extensions:
|
||||
X509v3 Key Usage: critical
|
||||
Digital Signature, Key Encipherment
|
||||
X509v3 Extended Key Usage:
|
||||
TLS Web Server Authentication, TLS Web Client Authentication
|
||||
X509v3 Basic Constraints: critical
|
||||
CA:FALSE
|
||||
X509v3 Subject Alternative Name:
|
||||
URI:spiffe://cluster.local/ns/default/sa/bookinfo-productpage
|
||||
Signature Algorithm: sha256WithRSAEncryption
|
||||
8f:be:af:a4:ee:f7:be:21:e9:c8:c9:e2:3b:d3:ac:41:18:5d:
|
||||
f8:9a:85:0f:98:f3:35:af:b7:e1:2d:58:5a:e0:50:70:98:cc:
|
||||
75:f6:2e:55:25:ed:66:e7:a4:b9:4a:aa:23:3b:a6:ee:86:63:
|
||||
9f:d8:f9:97:73:07:10:25:59:cc:d9:01:09:12:f9:ab:9e:54:
|
||||
24:8a:29:38:74:3a:98:40:87:67:e4:96:d0:e6:c7:2d:59:3d:
|
||||
d3:ea:dd:6e:40:5f:63:bf:30:60:c1:85:16:83:66:66:0b:6a:
|
||||
f5:ab:60:7e:f5:3b:44:c6:11:5b:a1:99:0c:bd:53:b3:a7:cc:
|
||||
e2:4b:bd:10:eb:fb:f0:b0:e5:42:a4:b2:ab:0c:27:c8:c1:4c:
|
||||
5b:b5:1b:93:25:9a:09:45:7c:28:31:13:a3:57:1c:63:86:5a:
|
||||
55:ed:14:29:db:81:e3:34:47:14:ba:52:d6:3c:3d:3b:51:50:
|
||||
89:a9:db:17:e4:c4:57:ec:f8:22:98:b7:e7:aa:8a:72:28:9a:
|
||||
a7:27:75:60:85:20:17:1d:30:df:78:40:74:ea:bc:ce:7b:e5:
|
||||
a5:57:32:da:6d:f2:64:fb:28:94:7d:28:37:6f:3c:97:0e:9c:
|
||||
0c:33:42:f0:b6:f5:1c:0d:fb:70:65:aa:93:3e:ca:0e:58:ec:
|
||||
8e:d5:d0:1e
|
||||
{{< /text >}}
|
||||
|
||||
## Mutual TLS errors
|
||||
|
||||
If you suspect problems with mutual TLS, first ensure that [Citadel is healthy](/docs/ops/troubleshooting/repairing-citadel/), and
|
||||
second ensure that [keys and certificates are being delivered](/docs/ops/troubleshooting/security-issues/) to sidecars properly.
|
||||
|
||||
If everything appears to be working so far, the next step is to verify that the right [authentication policy](/docs/tasks/security/authn-policy/) is applied and the right destination rules are in place.
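One way to cross-check the two is the `istioctl authn tls-check` command, which reports whether the authentication policy and destination rule for a host are consistent (a sketch; it assumes an `istioctl` matching your Istio release, and the host shown is illustrative):

{{< text bash >}}
$ istioctl authn tls-check httpbin.default.svc.cluster.local
{{< /text >}}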
|
|
@ -0,0 +1,10 @@
|
|||
---
|
||||
title: Tcpdump Limitations
|
||||
description: Limitations for using Tcpdump in pods.
|
||||
weight: 99
|
||||
---
|
||||
|
||||
Tcpdump doesn't work in the sidecar pod because the container doesn't run as root. However, any other container in the same pod will see all the packets, since the
|
||||
network namespace is shared. `iptables` will also see the pod-wide configuration.
|
||||
|
||||
Communication between Envoy and the app happens on 127.0.0.1, and is not encrypted.
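For example, assuming your application container runs as root and has `tcpdump` installed (both are assumptions about your image), you could capture that unencrypted loopback traffic like this (pod name, container name, and port are illustrative):

{{< text bash >}}
$ kubectl exec -it <pod-name> -c <app-container> -- tcpdump -i lo -n port 9080
{{< /text >}}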
|
|
@ -0,0 +1,304 @@
|
|||
---
|
||||
title: Galley Configuration Problems
|
||||
description: Describes how to resolve Galley configuration problems.
|
||||
weight: 20
|
||||
aliases:
|
||||
- /help/ops/setup/validation
|
||||
- /help/ops/troubleshooting/validation
|
||||
---
|
||||
|
||||
## Seemingly valid configuration is rejected
|
||||
|
||||
Manually verify your configuration is correct, cross-referencing
|
||||
[Istio API reference](/docs/reference/config) when
|
||||
necessary.
|
||||
|
||||
## Invalid configuration is accepted
|
||||
|
||||
Verify the `istio-galley` `validatingwebhookconfiguration` exists and
|
||||
is correct. The `apiVersion`, `apiGroup`, and `resource` of the
|
||||
invalid configuration should be listed in one of the two `webhooks`
|
||||
entries.
|
||||
|
||||
{{< text bash yaml >}}
|
||||
$ kubectl get validatingwebhookconfiguration istio-galley -o yaml
|
||||
apiVersion: admissionregistration.k8s.io/v1beta1
|
||||
kind: ValidatingWebhookConfiguration
|
||||
metadata:
|
||||
labels:
|
||||
app: istio-galley
|
||||
name: istio-galley
|
||||
ownerReferences:
|
||||
- apiVersion: extensions/v1beta1
|
||||
blockOwnerDeletion: true
|
||||
controller: true
|
||||
kind: Deployment
|
||||
name: istio-galley
|
||||
uid: 5c64585d-91c6-11e8-a98a-42010a8001a8
|
||||
webhooks:
|
||||
- clientConfig:
|
||||
# caBundle should be non-empty. This is periodically (re)patched
|
||||
# every second by the webhook service using the ca-cert
|
||||
# from the mounted service account secret.
|
||||
caBundle: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM1VENDQWMyZ0F3SUJBZ0lRVzVYNWpJcnJCemJmZFdLaWVoaVVSakFOQmdrcWhraUc5dzBCQVFzRkFEQWMKTVJvd0dBWURWUVFLRXhGck9ITXVZMngxYzNSbGNpNXNiMk5oYkRBZUZ3MHhPREEzTWpjeE56VTJNakJhRncweApPVEEzTWpjeE56VTJNakJhTUJ3eEdqQVlCZ05WQkFvVEVXczRjeTVqYkhWemRHVnlMbXh2WTJGc01JSUJJakFOCkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQXdVMi9SdWlyeTNnUzdPd2xJRCtaaGZiOEpOWnMKK05OL0dRWUsxbVozb3duaEw4dnJHdDBhenpjNXFuOXo2ZEw5Z1pPVFJXeFVCYXVJMUpOa3d0dSt2NmRjRzlkWgp0Q2JaQWloc1BLQWQ4MVRaa3RwYkNnOFdrcTRyNTh3QldRemNxMldsaFlPWHNlWGtRejdCbStOSUoyT0NRbmJwCjZYMmJ4Slc2OGdaZkg2UHlNR0libXJxaDgvZ2hISjFha3ptNGgzc0VGU1dTQ1Y2anZTZHVJL29NM2pBem5uZlUKU3JKY3VpQnBKZmJSMm1nQm4xVmFzNUJNdFpaaTBubDYxUzhyZ1ZiaHp4bWhpeFhlWU0zQzNHT3FlRUthY0N3WQo0TVczdEJFZ3NoN2ovZGM5cEt1ZG1wdFBFdit2Y2JnWjdreEhhazlOdFV2YmRGempJeTMxUS9Qd1NRSURBUUFCCm95TXdJVEFPQmdOVkhROEJBZjhFQkFNQ0FnUXdEd1lEVlIwVEFRSC9CQVV3QXdFQi96QU5CZ2txaGtpRzl3MEIKQVFzRkFBT0NBUUVBTnRLSnVkQ3NtbTFzU3dlS2xKTzBIY1ZMQUFhbFk4ZERUYWVLNksyakIwRnl0MkM3ZUtGSAoya3JaOWlkbWp5Yk8xS0djMVlWQndNeWlUMGhjYWFlaTdad2g0aERRWjVRN0k3ZFFuTVMzc2taR3ByaW5idU1aCmg3Tm1WUkVnV1ZIcm9OcGZEN3pBNEVqWk9FZzkwR0J6YXUzdHNmanI4RDQ1VVRJZUw3M3hwaUxmMXhRTk10RWEKd0NSelplQ3lmSUhra2ZrTCtISVVGK0lWV1g2VWp2WTRpRDdRR0JCenpHZTluNS9KM1g5OU1Gb1F3bExjNHMrTQpnLzNQdnZCYjBwaTU5MWxveXluU3lkWDVqUG5ibDhkNEFJaGZ6OU8rUTE5UGVULy9ydXFRNENOancrZmVIbTBSCjJzYmowZDd0SjkyTzgwT2NMVDlpb05NQlFLQlk3cGlOUkE9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
|
||||
service:
|
||||
# service corresponds to the Kubernetes service that implements the
|
||||
# webhook, e.g. istio-galley.istio-system.svc:443
|
||||
name: istio-galley
|
||||
namespace: istio-system
|
||||
path: /admitpilot
|
||||
failurePolicy: Fail
|
||||
name: pilot.validation.istio.io
|
||||
namespaceSelector: {}
|
||||
rules:
|
||||
- apiGroups:
|
||||
- config.istio.io
|
||||
apiVersions:
|
||||
- v1alpha2
|
||||
operations:
|
||||
- CREATE
|
||||
- UPDATE
|
||||
resources:
|
||||
- httpapispecs
|
||||
- httpapispecbindings
|
||||
- quotaspecs
|
||||
- quotaspecbindings
|
||||
- apiGroups:
|
||||
- rbac.istio.io
|
||||
apiVersions:
|
||||
- '*'
|
||||
operations:
|
||||
- CREATE
|
||||
- UPDATE
|
||||
resources:
|
||||
- '*'
|
||||
- apiGroups:
|
||||
- authentication.istio.io
|
||||
apiVersions:
|
||||
- '*'
|
||||
operations:
|
||||
- CREATE
|
||||
- UPDATE
|
||||
resources:
|
||||
- '*'
|
||||
- apiGroups:
|
||||
- networking.istio.io
|
||||
apiVersions:
|
||||
- '*'
|
||||
operations:
|
||||
- CREATE
|
||||
- UPDATE
|
||||
resources:
|
||||
- destinationrules
|
||||
- envoyfilters
|
||||
- gateways
|
||||
- virtualservices
|
||||
- clientConfig:
|
||||
# caBundle should be non-empty. This is periodically (re)patched
|
||||
# every second by the webhook service using the ca-cert
|
||||
# from the mounted service account secret.
|
||||
caBundle: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM1VENDQWMyZ0F3SUJBZ0lRVzVYNWpJcnJCemJmZFdLaWVoaVVSakFOQmdrcWhraUc5dzBCQVFzRkFEQWMKTVJvd0dBWURWUVFLRXhGck9ITXVZMngxYzNSbGNpNXNiMk5oYkRBZUZ3MHhPREEzTWpjeE56VTJNakJhRncweApPVEEzTWpjeE56VTJNakJhTUJ3eEdqQVlCZ05WQkFvVEVXczRjeTVqYkhWemRHVnlMbXh2WTJGc01JSUJJakFOCkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQXdVMi9SdWlyeTNnUzdPd2xJRCtaaGZiOEpOWnMKK05OL0dRWUsxbVozb3duaEw4dnJHdDBhenpjNXFuOXo2ZEw5Z1pPVFJXeFVCYXVJMUpOa3d0dSt2NmRjRzlkWgp0Q2JaQWloc1BLQWQ4MVRaa3RwYkNnOFdrcTRyNTh3QldRemNxMldsaFlPWHNlWGtRejdCbStOSUoyT0NRbmJwCjZYMmJ4Slc2OGdaZkg2UHlNR0libXJxaDgvZ2hISjFha3ptNGgzc0VGU1dTQ1Y2anZTZHVJL29NM2pBem5uZlUKU3JKY3VpQnBKZmJSMm1nQm4xVmFzNUJNdFpaaTBubDYxUzhyZ1ZiaHp4bWhpeFhlWU0zQzNHT3FlRUthY0N3WQo0TVczdEJFZ3NoN2ovZGM5cEt1ZG1wdFBFdit2Y2JnWjdreEhhazlOdFV2YmRGempJeTMxUS9Qd1NRSURBUUFCCm95TXdJVEFPQmdOVkhROEJBZjhFQkFNQ0FnUXdEd1lEVlIwVEFRSC9CQVV3QXdFQi96QU5CZ2txaGtpRzl3MEIKQVFzRkFBT0NBUUVBTnRLSnVkQ3NtbTFzU3dlS2xKTzBIY1ZMQUFhbFk4ZERUYWVLNksyakIwRnl0MkM3ZUtGSAoya3JaOWlkbWp5Yk8xS0djMVlWQndNeWlUMGhjYWFlaTdad2g0aERRWjVRN0k3ZFFuTVMzc2taR3ByaW5idU1aCmg3Tm1WUkVnV1ZIcm9OcGZEN3pBNEVqWk9FZzkwR0J6YXUzdHNmanI4RDQ1VVRJZUw3M3hwaUxmMXhRTk10RWEKd0NSelplQ3lmSUhra2ZrTCtISVVGK0lWV1g2VWp2WTRpRDdRR0JCenpHZTluNS9KM1g5OU1Gb1F3bExjNHMrTQpnLzNQdnZCYjBwaTU5MWxveXluU3lkWDVqUG5ibDhkNEFJaGZ6OU8rUTE5UGVULy9ydXFRNENOancrZmVIbTBSCjJzYmowZDd0SjkyTzgwT2NMVDlpb05NQlFLQlk3cGlOUkE9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
|
||||
service:
|
||||
# service corresponds to the Kubernetes service that implements the
|
||||
# webhook, e.g. istio-galley.istio-system.svc:443
|
||||
name: istio-galley
|
||||
namespace: istio-system
|
||||
path: /admitmixer
|
||||
failurePolicy: Fail
|
||||
name: mixer.validation.istio.io
|
||||
namespaceSelector: {}
|
||||
rules:
|
||||
- apiGroups:
|
||||
- config.istio.io
|
||||
apiVersions:
|
||||
- v1alpha2
|
||||
operations:
|
||||
- CREATE
|
||||
- UPDATE
|
||||
resources:
|
||||
- rules
|
||||
- attributemanifests
|
||||
- circonuses
|
||||
- deniers
|
||||
- fluentds
|
||||
- kubernetesenvs
|
||||
- listcheckers
|
||||
- memquotas
|
||||
- noops
|
||||
- opas
|
||||
- prometheuses
|
||||
- rbacs
|
||||
- servicecontrols
|
||||
- solarwindses
|
||||
- stackdrivers
|
||||
- statsds
|
||||
- stdios
|
||||
- apikeys
|
||||
- authorizations
|
||||
- checknothings
|
||||
- listentries
|
||||
- logentries
|
||||
- metrics
|
||||
- quotas
|
||||
- reportnothings
|
||||
- servicecontrolreports
|
||||
- tracespans
|
||||
{{< /text >}}
|
||||
|
||||
If the `validatingwebhookconfiguration` doesn’t exist, verify the
|
||||
`istio-galley-configuration` `configmap` exists. `istio-galley` uses
|
||||
the data from this configmap to create and update the
|
||||
`validatingwebhookconfiguration`.
|
||||
|
||||
{{< text bash yaml >}}
|
||||
$ kubectl -n istio-system get configmap istio-galley-configuration -o jsonpath='{.data}'
|
||||
map[validatingwebhookconfiguration.yaml:apiVersion: admissionregistration.k8s.io/v1beta1
|
||||
kind: ValidatingWebhookConfiguration
|
||||
metadata:
|
||||
name: istio-galley
|
||||
namespace: istio-system
|
||||
labels:
|
||||
app: istio-galley
|
||||
chart: galley-1.0.0
|
||||
release: istio
|
||||
heritage: Tiller
|
||||
webhooks:
|
||||
- name: pilot.validation.istio.io
|
||||
clientConfig:
|
||||
service:
|
||||
name: istio-galley
|
||||
namespace: istio-system
|
||||
path: "/admitpilot"
|
||||
caBundle: ""
|
||||
rules:
|
||||
- operations:
|
||||
(... snip ...)
|
||||
{{< /text >}}
|
||||
|
||||
If the webhook array in `istio-galley-configuration` is empty and
|
||||
you're using `helm template` or `helm install`, verify `--set
|
||||
galley.enabled` and `--set global.configValidation=true` options are
|
||||
set. If you're not using helm, you'll need to find a generate
|
||||
YAML that includes the populated webhook array.
|
||||
|
||||
The `istio-galley` validation configuration is fail-close. If
|
||||
configuration exists and is scoped properly, the webhook will be
|
||||
invoked. A missing `caBundle`, bad certificate, or network connectivity
|
||||
problem will produce an error message when the resource is
|
||||
created/updated. If you don’t see any error message and the webhook
|
||||
wasn’t invoked and the webhook configuration is valid, your cluster is
|
||||
misconfigured.
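
To confirm the webhook is actually invoked, you can submit a deliberately invalid resource and check that it is rejected. This is a sketch, not part of the original page; the resource below should fail validation because a `VirtualService` requires at least one entry in `hosts`:

{{< text bash >}}
$ kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: invalid-vs
spec:
  http:
  - route:
    - destination:
        host: productpage
EOF
{{< /text >}}

If the webhook is working, the apply should fail with a validation error instead of creating the resource.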

## Creating configuration fails with x509 certificate errors

`x509: certificate signed by unknown authority` related errors are
typically caused by an empty `caBundle` in the webhook
configuration. Verify that it is not empty (see [verify webhook
configuration](#invalid-configuration-is-accepted)). The
`istio-galley` deployment continuously reconciles the webhook configuration
using the `istio-galley-configuration` `configmap` and the root certificate
mounted from the `istio.istio-galley-service-account` secret in the
`istio-system` namespace.
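
As a quick check (a sketch; the jsonpath index assumes the webhook ordering shown above), you can count the bytes in the first webhook's `caBundle`; an output of `0` means it is empty:

{{< text bash >}}
$ kubectl get validatingwebhookconfiguration istio-galley \
    -o jsonpath='{.webhooks[0].clientConfig.caBundle}' | wc -c
{{< /text >}}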

1. Verify the `istio-galley` pod(s) are running:

    {{< text bash >}}
    $ kubectl -n istio-system get pod -listio=galley
    NAME                            READY     STATUS    RESTARTS   AGE
    istio-galley-5dbbbdb746-d676g   1/1       Running   0          2d
    {{< /text >}}

1. Verify you're using Istio version >= 1.0.0. Older versions of Galley
    did not properly re-patch the `caBundle`. This typically happened
    when the `istio.yaml` was re-applied, overwriting a previously
    patched `caBundle`.

    {{< text bash >}}
    $ for pod in $(kubectl -n istio-system get pod -listio=galley -o jsonpath='{.items[*].metadata.name}'); do \
        kubectl -n istio-system exec ${pod} -it /usr/local/bin/galley version | grep ^Version; \
      done
    Version: 1.0.0
    {{< /text >}}

1. Check the Galley pod logs for errors. Failing to patch the
    `caBundle` should print an error.

    {{< text bash >}}
    $ for pod in $(kubectl -n istio-system get pod -listio=galley -o jsonpath='{.items[*].metadata.name}'); do \
        kubectl -n istio-system logs ${pod}; \
      done
    {{< /text >}}

1. If the patching failed, verify the RBAC configuration for Galley:

    {{< text bash yaml >}}
    $ kubectl get clusterrole istio-galley-istio-system -o yaml
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      labels:
        app: istio-galley
      name: istio-galley-istio-system
    rules:
    - apiGroups:
      - admissionregistration.k8s.io
      resources:
      - validatingwebhookconfigurations
      verbs:
      - '*'
    - apiGroups:
      - config.istio.io
      resources:
      - '*'
      verbs:
      - get
      - list
      - watch
    - apiGroups:
      - '*'
      resourceNames:
      - istio-galley
      resources:
      - deployments
      verbs:
      - get
    {{< /text >}}

`istio-galley` needs `validatingwebhookconfigurations` write access to
create and update the `istio-galley` `validatingwebhookconfiguration`.

## Creating configuration fails with `no such hosts` or `no endpoints available` errors

Validation is fail closed. If the `istio-galley` pod is not ready,
configuration cannot be created or updated. In such cases you'll see
an error about `no endpoints available`.

Verify the `istio-galley` pod(s) are running and endpoints are ready.

{{< text bash >}}
$ kubectl -n istio-system get pod -listio=galley
NAME                            READY     STATUS    RESTARTS   AGE
istio-galley-5dbbbdb746-d676g   1/1       Running   0          2d
{{< /text >}}

{{< text bash >}}
$ kubectl -n istio-system get endpoints istio-galley
NAME           ENDPOINTS                           AGE
istio-galley   10.48.6.108:10514,10.48.6.108:443   3d
{{< /text >}}

If the pods or endpoints aren't ready, check the pod logs and
status for any indication about why the webhook pod is failing to start
and serve traffic.

{{< text bash >}}
$ for pod in $(kubectl -n istio-system get pod -listio=galley -o jsonpath='{.items[*].metadata.name}'); do \
    kubectl -n istio-system logs ${pod}; \
  done
{{< /text >}}

{{< text bash >}}
$ for pod in $(kubectl -n istio-system get pod -listio=galley -o name); do \
    kubectl -n istio-system describe ${pod}; \
  done
{{< /text >}}

@ -104,5 +104,5 @@ services from all other namespaces.
    $ export PATH=$PWD/bin:$PATH
    {{< /text >}}

1. You can enable the [auto-completion option](/docs/ops/setup/istioctl) when working with a bash or ZSH console.
1. You can enable the [auto-completion option](/docs/ops/troubleshooting/istioctl) when working with a bash or ZSH console.


@ -0,0 +1,364 @@
---
title: Mesh Expansion
description: Integrate VMs and bare metal hosts into an Istio mesh deployed on Kubernetes.
weight: 95
keywords: [kubernetes,vms]
aliases:
- /docs/setup/kubernetes/mesh-expansion/
---

This guide provides instructions to integrate VMs and bare metal hosts into
an Istio mesh deployed on Kubernetes.

## Prerequisites

* You have already set up Istio on Kubernetes. If you haven't done so, you can find out how in the [Installation guide](/docs/setup/install/kubernetes/).

* Mesh expansion machines must have IP connectivity to the endpoints in the mesh. This
typically requires a VPC or a VPN, as well as a container network that
provides direct routing (without NAT or firewall deny rules) to the endpoints. The machine
is not required to have access to the cluster IP addresses assigned by Kubernetes.

* Mesh expansion VMs must have access to a DNS server that resolves names to cluster IP addresses. Options
include exposing the Kubernetes DNS server through an internal load balancer (see the sketch after this list), using a CoreDNS
server, or configuring the IPs in any other DNS server accessible from the VM.

* Install the [Helm client](https://docs.helm.sh/using_helm/). Helm is needed to enable mesh expansion.
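
For the internal load balancer option, a minimal sketch follows. This is not part of the original page; the `cloud.google.com/load-balancer-type` annotation is a GCP-specific assumption, and other platforms use different annotations:

{{< text bash yaml >}}
$ kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: kube-dns-ilb
  namespace: kube-system
  annotations:
    cloud.google.com/load-balancer-type: "internal"
spec:
  type: LoadBalancer
  selector:
    k8s-app: kube-dns
  ports:
  - name: dns
    port: 53
    protocol: UDP
EOF
{{< /text >}}

The VM can then use the load balancer's internal IP address as its resolver for cluster names.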

The following instructions:

* Assume the expansion VM is running on GCE.
* Use Google platform-specific commands for some steps.

## Installation steps

Setup consists of preparing the mesh for expansion and installing and configuring each VM.

### Preparing the Kubernetes cluster for expansion

The first step when adding non-Kubernetes services to an Istio mesh is to configure the Istio installation itself, and
generate the configuration files that let mesh expansion VMs connect to the mesh. To prepare the
cluster for mesh expansion, run the following commands on a machine with cluster admin privileges:

1. Ensure that mesh expansion is enabled for the cluster. If you didn't use
    the `--set global.meshExpansion.enabled=true` flag when installing Istio with Helm,
    you can use one of the following two options depending on how you originally installed
    Istio on the cluster:

    * If you installed Istio with Helm and Tiller, run `helm upgrade` with the new option:

    {{< text bash >}}
    $ cd install/kubernetes/helm/istio
    $ helm upgrade --set global.meshExpansion.enabled=true istio .
    $ cd -
    {{< /text >}}

    * If you installed Istio without Helm and Tiller, use `helm template` to update your configuration with the option and reapply with `kubectl`:

    {{< text bash >}}
    $ kubectl create namespace istio-system
    $ helm template install/kubernetes/helm/istio-init --name istio-init --namespace istio-system | kubectl apply -f -
    $ cd install/kubernetes/helm/istio
    $ helm template --set global.meshExpansion.enabled=true --namespace istio-system . > istio.yaml
    $ kubectl apply -f istio.yaml
    $ cd -
    {{< /text >}}

    {{< tip >}}
    When updating configuration with Helm, you can either set the option on the command line, as in our examples, or add
    it to a `.yaml` values file and pass it to
    the command with `--values`, which is the recommended approach when managing configurations with multiple options. You
    can see some sample values files in your Istio installation's `install/kubernetes/helm/istio` directory and find out
    more about customizing Helm charts in the [Helm documentation](https://docs.helm.sh/using_helm/#using-helm).
    {{< /tip >}}

1. Define the namespace the VM joins. This example uses the `SERVICE_NAMESPACE` environment variable to store the namespace. The value of this variable must match the namespace you use in the configuration files later on.

    {{< text bash >}}
    $ export SERVICE_NAMESPACE="default"
    {{< /text >}}

1. Determine and store the IP address of the Istio ingress gateway since the mesh expansion machines access [Citadel](/docs/concepts/security/) and [Pilot](/docs/concepts/traffic-management/#pilot) through this IP address.

    {{< text bash >}}
    $ export GWIP=$(kubectl get -n istio-system service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
    $ echo $GWIP
    35.232.112.158
    {{< /text >}}

1. Generate a `cluster.env` configuration to deploy in the VMs. This file contains the Kubernetes cluster IP address ranges
    to intercept and redirect via Envoy. The CIDR range is specified as `servicesIpv4Cidr` when you install Kubernetes.
    Replace `$MY_ZONE` and `$MY_PROJECT` in the following example commands with the appropriate values to obtain the CIDR
    after installation:

    {{< text bash >}}
    $ ISTIO_SERVICE_CIDR=$(gcloud container clusters describe $K8S_CLUSTER --zone $MY_ZONE --project $MY_PROJECT --format "value(servicesIpv4Cidr)")
    $ echo -e "ISTIO_CP_AUTH=MUTUAL_TLS\nISTIO_SERVICE_CIDR=$ISTIO_SERVICE_CIDR\n" > cluster.env
    {{< /text >}}

1. Check the contents of the generated `cluster.env` file. It should be similar to the following example:

    {{< text bash >}}
    $ cat cluster.env
    ISTIO_CP_AUTH=MUTUAL_TLS
    ISTIO_SERVICE_CIDR=10.55.240.0/20
    {{< /text >}}

1. If the VM only calls services in the mesh, you can skip this step. Otherwise, add the ports the VM exposes
    to the `cluster.env` file with the following command. You can change the ports later if necessary.

    {{< text bash >}}
    $ echo "ISTIO_INBOUND_PORTS=3306,8080" >> cluster.env
    {{< /text >}}

1. Extract the initial keys the service account needs to use on the VMs.

    {{< text bash >}}
    $ kubectl -n $SERVICE_NAMESPACE get secret istio.default \
        -o jsonpath='{.data.root-cert\.pem}' | base64 --decode > root-cert.pem
    $ kubectl -n $SERVICE_NAMESPACE get secret istio.default \
        -o jsonpath='{.data.key\.pem}' | base64 --decode > key.pem
    $ kubectl -n $SERVICE_NAMESPACE get secret istio.default \
        -o jsonpath='{.data.cert-chain\.pem}' | base64 --decode > cert-chain.pem
    {{< /text >}}

### Setting up the VM

Next, run the following commands on each machine that you want to add to the mesh:

1. Copy the previously created `cluster.env` and `*.pem` files to the VM. For example:

    {{< text bash >}}
    $ export GCE_NAME="your-gce-instance"
    $ gcloud compute scp --project=${MY_PROJECT} --zone=${MY_ZONE} {key.pem,cert-chain.pem,cluster.env,root-cert.pem} ${GCE_NAME}:~
    {{< /text >}}

1. Install the Debian package with the Envoy sidecar.

    {{< text bash >}}
    $ gcloud compute ssh --project=${MY_PROJECT} --zone=${MY_ZONE} "${GCE_NAME}"
    $ curl -L https://storage.googleapis.com/istio-release/releases/{{< istio_full_version >}}/deb/istio-sidecar.deb > istio-sidecar.deb
    $ sudo dpkg -i istio-sidecar.deb
    {{< /text >}}

1. Add the IP address of the Istio gateway to `/etc/hosts`. Revisit the [preparing the cluster](#preparing-the-kubernetes-cluster-for-expansion) section to learn how to obtain the IP address.
    The following example updates the `/etc/hosts` file with the Istio gateway address:

    {{< text bash >}}
    $ echo "35.232.112.158 istio-citadel istio-pilot istio-pilot.istio-system" | sudo tee -a /etc/hosts
    {{< /text >}}

1. Install `root-cert.pem`, `key.pem` and `cert-chain.pem` under `/etc/certs/`.

    {{< text bash >}}
    $ sudo mkdir -p /etc/certs
    $ sudo cp {root-cert.pem,cert-chain.pem,key.pem} /etc/certs
    {{< /text >}}

1. Install `cluster.env` under `/var/lib/istio/envoy/`.

    {{< text bash >}}
    $ sudo cp cluster.env /var/lib/istio/envoy
    {{< /text >}}

1. Transfer ownership of the files in `/etc/certs/` and `/var/lib/istio/envoy/` to the Istio proxy.

    {{< text bash >}}
    $ sudo chown -R istio-proxy /etc/certs /var/lib/istio/envoy
    {{< /text >}}

1. Verify the node agent works:

    {{< text bash >}}
    $ sudo node_agent
    ....
    CSR is approved successfully. Will renew cert in 1079h59m59.84568493s
    {{< /text >}}

1. Start Istio using `systemctl`.

    {{< text bash >}}
    $ sudo systemctl start istio-auth-node-agent
    $ sudo systemctl start istio
    {{< /text >}}

## Send requests from VM workloads to Kubernetes services

After setup, the machine can access services running in the Kubernetes cluster
or on other mesh expansion machines.

The following example shows accessing a service running in the Kubernetes cluster from a mesh expansion VM using
`/etc/hosts`, in this case using a service from the [Bookinfo example](/docs/examples/bookinfo/).

1. First, on the cluster admin machine, get the virtual IP address (`clusterIP`) for the service:

    {{< text bash >}}
    $ kubectl get svc productpage -o jsonpath='{.spec.clusterIP}'
    10.55.246.247
    {{< /text >}}

1. Then on the mesh expansion machine, add the service name and address to its `/etc/hosts` file. You can then connect to
    the cluster service from the VM, as in the example below:

    {{< text bash >}}
    $ echo "10.55.246.247 productpage.default.svc.cluster.local" | sudo tee -a /etc/hosts
    $ curl -v productpage.default.svc.cluster.local:9080
    < HTTP/1.1 200 OK
    < content-type: text/html; charset=utf-8
    < content-length: 1836
    < server: envoy
    ... html content ...
    {{< /text >}}

The `server: envoy` header indicates that the sidecar intercepted the traffic.

## Running services on a mesh expansion machine

1. Set up an HTTP server on the VM instance to serve HTTP traffic on port 8080:

    {{< text bash >}}
    $ gcloud compute ssh ${GCE_NAME}
    $ python -m SimpleHTTPServer 8080
    {{< /text >}}

1. Determine the VM instance's IP address. For example, find the IP address
    of the GCE instance with the following commands:

    {{< text bash >}}
    $ export GCE_IP=$(gcloud --format="value(networkInterfaces[0].networkIP)" compute instances describe ${GCE_NAME})
    $ echo ${GCE_IP}
    {{< /text >}}

1. Configure a service entry to enable service discovery for the VM. You can add VM services to the mesh using a
    [service entry](/docs/reference/config/networking/v1alpha3/service-entry/). Service entries let you manually add
    additional services to Pilot's abstract model of the mesh. Once VM services are part of the mesh's abstract model,
    other services can find and direct traffic to them. Each service entry configuration contains the IP addresses, ports,
    and appropriate labels of all VMs exposing a particular service, for example:

    {{< text bash yaml >}}
    $ kubectl -n ${SERVICE_NAMESPACE} apply -f - <<EOF
    apiVersion: networking.istio.io/v1alpha3
    kind: ServiceEntry
    metadata:
      name: vmhttp
    spec:
      hosts:
      - vmhttp.${SERVICE_NAMESPACE}.svc.cluster.local
      ports:
      - number: 8080
        name: http
        protocol: HTTP
      resolution: STATIC
      endpoints:
      - address: ${GCE_IP}
        ports:
          http: 8080
        labels:
          app: vmhttp
          version: "v1"
    EOF
    {{< /text >}}

1. The workloads in a Kubernetes cluster need a DNS mapping to resolve the domain names of VM services. To
    integrate the mapping with your own DNS system, use `istioctl register` to create a Kubernetes selector-less
    service, for example:

    {{< text bash >}}
    $ istioctl register -n ${SERVICE_NAMESPACE} vmhttp ${GCE_IP} 8080
    {{< /text >}}

    {{< tip >}}
    Make sure you have already added the `istioctl` client to your `PATH` environment variable, as described in the Download page.
    {{< /tip >}}

1. Deploy a pod running the `sleep` service in the Kubernetes cluster, and wait until it is ready:

    {{< text bash >}}
    $ kubectl apply -f @samples/sleep/sleep.yaml@
    $ kubectl get pod
    NAME                             READY     STATUS    RESTARTS   AGE
    productpage-v1-8fcdcb496-xgkwg   2/2       Running   0          1d
    sleep-88ddbcfdd-rm42k            2/2       Running   0          1s
    ...
    {{< /text >}}

1. Send a request from the `sleep` service on the pod to the VM's HTTP service:

    {{< text bash >}}
    $ kubectl exec -it sleep-88ddbcfdd-rm42k -c sleep -- curl vmhttp.${SERVICE_NAMESPACE}.svc.cluster.local:8080
    {{< /text >}}

    You should see something similar to the output below.

    ```html
    <!DOCTYPE html PUBLIC "-//W3C//DTD HTML 3.2 Final//EN"><html>
    <title>Directory listing for /</title>
    <body>
    <h2>Directory listing for /</h2>
    <hr>
    <ul>
    <li><a href=".bashrc">.bashrc</a></li>
    <li><a href=".ssh/">.ssh/</a></li>
    ...
    </body>
    ```

**Congratulations!** You successfully configured a service running in a pod within the cluster to
send traffic to a service running on a VM outside of the cluster and tested that
the configuration worked.

## Cleanup

Run the following commands to remove the expansion VM from the mesh's abstract
model.

{{< text bash >}}
$ istioctl deregister -n ${SERVICE_NAMESPACE} vmhttp ${GCE_IP}
2019-02-21T22:12:22.023775Z info Deregistered service successfull
$ kubectl delete ServiceEntry vmhttp -n ${SERVICE_NAMESPACE}
serviceentry.networking.istio.io "vmhttp" deleted
{{< /text >}}

## Troubleshooting

The following are some basic troubleshooting steps for common mesh expansion issues.

* When making requests from a VM to the cluster, ensure you don't run the requests as the `root` or
`istio-proxy` user. By default, Istio excludes both users from interception.

* Verify the machine can reach the IP addresses of all the workloads running in the cluster. For example:

    {{< text bash >}}
    $ kubectl get endpoints productpage -o jsonpath='{.subsets[0].addresses[0].ip}'
    10.52.39.13
    {{< /text >}}

    {{< text bash >}}
    $ curl 10.52.39.13:9080
    html output
    {{< /text >}}

* Check the status of the node agent and sidecar:

    {{< text bash >}}
    $ sudo systemctl status istio-auth-node-agent
    $ sudo systemctl status istio
    {{< /text >}}

* Check that the processes are running. The following is an example of the processes you should see on the VM if you run
`ps`, filtered for `istio`:

    {{< text bash >}}
    $ ps aux | grep istio
    root      6941  0.0  0.2  75392 16820 ?  Ssl  21:32  0:00 /usr/local/istio/bin/node_agent --logtostderr
    root      6955  0.0  0.0  49344  3048 ?  Ss   21:32  0:00 su -s /bin/bash -c INSTANCE_IP=10.150.0.5 POD_NAME=demo-vm-1 POD_NAMESPACE=default exec /usr/local/bin/pilot-agent proxy > /var/log/istio/istio.log istio-proxy
    istio-p+  7016  0.0  0.1 215172 12096 ?  Ssl  21:32  0:00 /usr/local/bin/pilot-agent proxy
    istio-p+  7094  4.0  0.3  69540 24800 ?  Sl   21:32  0:37 /usr/local/bin/envoy -c /etc/istio/proxy/envoy-rev1.json --restart-epoch 1 --drain-time-s 2 --parent-shutdown-time-s 3 --service-cluster istio-proxy --service-node sidecar~10.150.0.5~demo-vm-1.default~default.svc.cluster.local
    {{< /text >}}

* Check the Envoy access and error logs:

    {{< text bash >}}
    $ tail /var/log/istio/istio.log
    $ tail /var/log/istio/istio.err.log
    {{< /text >}}


@ -29,7 +29,7 @@ The `httpbin` application serves as the backend service for this task.
when calling the `httpbin` service:

{{< warning >}}
If you installed/configured Istio with mutual TLS authentication enabled, you must add a TLS traffic policy `mode: ISTIO_MUTUAL` to the `DestinationRule` before applying it. Otherwise requests will generate 503 errors as described [here](/docs/ops/traffic-management/troubleshooting/#503-errors-after-setting-destination-rule).
If you installed/configured Istio with mutual TLS authentication enabled, you must add a TLS traffic policy `mode: ISTIO_MUTUAL` to the `DestinationRule` before applying it. Otherwise requests will generate 503 errors as described [here](/docs/ops/troubleshooting/network-issues/#503-errors-after-setting-destination-rule).
{{< /warning >}}
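
For reference, such a traffic policy looks like the following sketch (not part of the original page; the rule name and the `httpbin` host are illustrative assumptions):

{{< text bash yaml >}}
$ kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: httpbin
spec:
  host: httpbin
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL   # sidecars originate mutual TLS to this host
EOF
{{< /text >}}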

{{< text bash >}}


@ -192,7 +192,7 @@ Let's see how you can configure a `Gateway` on port 80 for HTTP traffic.
you can add the special value `mesh` to the list of `gateways`. Since the internal hostname for the
service is probably different (e.g., `httpbin.default.svc.cluster.local`) from the external one,
you will also need to add it to the `hosts` list. Refer to the
[troubleshooting guide](/docs/ops/traffic-management/troubleshooting/#route-rules-have-no-effect-on-ingress-gateway-requests)
[troubleshooting guide](/docs/ops/troubleshooting/network-issues)
for more details.
{{< /warning >}}
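
As a sketch of what this looks like (not part of the original page; the gateway name and external host are illustrative assumptions), a `VirtualService` bound to both a gateway and in-mesh traffic could be:

{{< text bash yaml >}}
$ kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: httpbin
spec:
  hosts:
  - httpbin.example.com                  # external hostname, reached via the gateway
  - httpbin.default.svc.cluster.local    # internal hostname, for in-mesh clients
  gateways:
  - httpbin-gateway
  - mesh                                 # reserved name covering all sidecars in the mesh
  http:
  - route:
    - destination:
        host: httpbin.default.svc.cluster.local
EOF
{{< /text >}}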


@ -1,10 +1,12 @@
---
title: Locality Load Balancing
description: Information on how to enable and understand Locality Load Balancing.
weight: 40
weight: 98
keywords: [locality,load balancing,priority,prioritized]
aliases:
- /help/ops/traffic-management/locality-load-balancing
- /help/ops/locality-load-balancing
- /help/tasks/traffic-management/locality-load-balancing
---

A locality defines a geographic location within your mesh using the following triplet:

@ -127,7 +127,7 @@ In this step, you will change that behavior so that all traffic goes to `v1`.
1. Create a default route rule to route all traffic to `v1` of the service:

    {{< warning >}}
    If you installed/configured Istio with mutual TLS authentication enabled, you must add a TLS traffic policy `mode: ISTIO_MUTUAL` to the `DestinationRule` before applying it. Otherwise requests will generate 503 errors as described [here](/docs/ops/traffic-management/troubleshooting/#503-errors-after-setting-destination-rule).
    If you installed/configured Istio with mutual TLS authentication enabled, you must add a TLS traffic policy `mode: ISTIO_MUTUAL` to the `DestinationRule` before applying it. Otherwise requests will generate 503 errors as described [here](/docs/ops/troubleshooting/network-issues/#503-errors-after-setting-destination-rule).
    {{< /warning >}}

    {{< text bash >}}


@ -8,10 +8,10 @@ If mutual TLS is enabled, HTTP and TCP health checks from the kubelet will not w
As of Istio 1.1, we have several options to solve this issue.

1. Using probe rewrite to redirect liveness and readiness requests to the
workload directly. Please refer to [Probe Rewrite](/docs/ops/setup/app-health-check/#probe-rewrite)
workload directly. Please refer to [Probe Rewrite](/docs/ops/app-health-check/#probe-rewrite)
for more information (a sketch of this option follows the list).

1. Using a separate port for health checks and enabling mutual TLS only on the regular service port. Please refer to [Health Checking of Istio Services](/docs/ops/setup/app-health-check/#separate-port) for more information.
1. Using a separate port for health checks and enabling mutual TLS only on the regular service port. Please refer to [Health Checking of Istio Services](/docs/ops/app-health-check/#separate-port) for more information.

1. Using the [`PERMISSIVE` mode](/docs/tasks/security/mtls-migration) for Istio services so they can accept both HTTP and mutual TLS traffic. Please keep in mind that mutual TLS is not enforced since others can communicate with the service with HTTP traffic.
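
For the probe rewrite option, here is a minimal sketch of a `Deployment` carrying the opt-in annotation (not part of the original page; the names, image, probe path, and port are illustrative assumptions):

{{< text bash yaml >}}
$ kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: liveness-http
spec:
  selector:
    matchLabels:
      app: liveness-http
  template:
    metadata:
      labels:
        app: liveness-http
      annotations:
        sidecar.istio.io/rewriteAppHTTPProbers: "true"   # redirect kubelet probes to the workload
    spec:
      containers:
      - name: liveness-http
        image: k8s.gcr.io/liveness    # illustrative image
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8001
          initialDelaySeconds: 5
EOF
{{< /text >}}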


@ -16,7 +16,7 @@ Mixer, Pilot, and Galley all implement the ControlZ functionality. When these components start, they will

The following is an example of the ControlZ interface:

{{< image width="80%" link="/docs/ops/controlz/ctrlz.png" caption="The ControlZ User Interface" >}}
{{< image width="80%" link="/docs/ops/troubleshooting/controlz/ctrlz.png" caption="The ControlZ User Interface" >}}

When starting a component, you can control the address ControlZ is exposed on by specifying a particular address and port with the `--ctrlz_port` and `--ctrlz_address` command-line options.
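
To reach the interface from a workstation, one approach (a sketch, assuming the default ControlZ port of 9876 and a Pilot deployment named `istio-pilot`) is to forward the port locally and open it in a browser:

{{< text bash >}}
$ kubectl -n istio-system port-forward deploy/istio-pilot 9876:9876
$ # then browse to http://localhost:9876
{{< /text >}}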


@ -12,4 +12,4 @@ weight: 50

1. In your Kubernetes environment, check the deployments in all namespaces to make sure there are no leftover deployments that could cause errors in Istio. If errors occur when pushing authorization policies to Envoy, you can disable Pilot's authorization plugin.

1. Follow the [debugging authorization docs](/docs/ops/security/debugging-authorization/) to find the exact cause.
1. Follow the [debugging authorization docs](/docs/ops/troubleshooting/security-issues/) to find the exact cause.


@ -4,6 +4,6 @@ description: How to handle failed TLS authentication.
weight: 30
---

If you observe problems with mutual TLS, first confirm that [Citadel is healthy](/zh/docs/ops/security/repairing-citadel/), and then check whether the [keys and certificates](/docs/ops/security/keys-and-certs/) have been correctly delivered to the sidecars.
If you observe problems with mutual TLS, first confirm that [Citadel is healthy](/zh/docs/ops/security/repairing-citadel/), and then check whether the [keys and certificates](/docs/ops/troubleshooting/security-issues/) have been correctly delivered to the sidecars.

If all the above checks pass, the next step is to verify that the right [authentication policy](/zh/docs/tasks/security/authn-policy/) and the corresponding destination rules are correctly applied.
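
A quick way to inspect the certificates delivered to a sidecar is to list them inside the `istio-proxy` container; this is a sketch, assuming a Bookinfo `productpage` pod and the conventional `/etc/certs` mount path:

{{< text bash >}}
$ kubectl exec $(kubectl get pod -l app=productpage -o jsonpath='{.items[0].metadata.name}') \
    -c istio-proxy -- ls /etc/certs
cert-chain.pem
key.pem
root-cert.pem
{{< /text >}}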


@ -8,7 +8,7 @@ weight: 50
, so when this mode is turned on, they can accept both HTTP and mutual TLS traffic. This resolves the health check problem.
Keep in mind that mutual TLS is not enforced, because other services can still communicate with the service using HTTP traffic.

You can use a separate port for health checks and enable mutual TLS only on the regular service port. Refer to [Health Checking of Istio Services](/zh/docs/ops/setup/app-health-check/) for more information.
You can use a separate port for health checks and enable mutual TLS only on the regular service port. Refer to [Health Checking of Istio Services](/docs/ops/app-health-check/) for more information.
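
For the separate-port approach, a minimal sketch (not part of the original page; names and ports are illustrative assumptions, and the container must actually serve a health endpoint on the extra port):

{{< text bash yaml >}}
$ kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: healthcheck-demo
spec:
  selector:
    matchLabels:
      app: healthcheck-demo
  template:
    metadata:
      labels:
        app: healthcheck-demo
    spec:
      containers:
      - name: app
        image: example/app:latest    # illustrative image serving both ports
        ports:
        - containerPort: 8001   # regular service port, protected by mutual TLS
        - containerPort: 8002   # dedicated health check port, not part of the mesh service
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8002
EOF
{{< /text >}}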

Because of the risks of the new feature, we do not enable it by default. The future rollout plan is tracked in this [GitHub issue](https://github.com/istio/istio/issues/10357).