Fix spelling and grammar stuff throughout the site. (#3114)

Martin Taillefer 2019-01-21 09:35:38 -08:00 committed by GitHub
parent 12730e09d2
commit 1c1242ffc4
95 changed files with 438 additions and 479 deletions

.spelling
View File

@ -12,419 +12,454 @@
123456789012.my
14.60
15.30
18x
1qps
2x
22.99
24.13
2x
3.99ms
3s
404s
4s
5000qps
50Mb
6s
72.96ms
7Mb
7s
ACLs
ACS-Engine
AKS
ANDed
API
APIs
alt
Acmeair
Ansible
AppOptics
Auth0
args.yaml
AuthPolicy
Autoscalers
az
BigQuery
BluePerf
Bookinfo
Brooks
Budinsky
bring-your-own-CA
bring-your-own-identity
CAP_NET_ADMIN
CAs
CDNs
CIDRs
CIOs
Cilium
cli
CoreDNS
Costin
CSRs
Chrony
Circonus
CloudWatch
cnn.com
ConfigMap
ControlZ
CRD
CRDs
Ctrl
D3.js
Datadog
Datawire
edition.cnn.com
Elasticsearch
emojis
ExecAction
Exfiltrating
ExternalName
Firebase
FitStation
Fluentd
FQDNs
GCP-IAM
GCP_OPTS
GKE-IAM
GKE-Istio
GKE-Workloads
GitHub
GlueCon
Gmail
Grafana
Graphviz
HTTP
HTTP1.1
HTTP2
Hystrix
ILBs
IPs
IPv4
Incrementality
Initializers
Istio
IstioMesh
Istiofied
JSON-formatted
JWT
JWTs
Keepalived
Kibana
Kops
Kuat
Kube
Kubecon
Kubelet
Kubernetes
L3-4
L4-L6
LibreSSL
LightStep
Lyft
macOS
Manolache
MeshPolicy
Mesos
Minikube
MongoDB
Multicloud
Multicluster
MutatingWebhookConfiguration
MySQL
Mysql
NamespaceSelector
networking.istio.io
NodePort
OAuth2
OP_QUERY
OpenID_Connect
OpenSSL
OpenShift
Ostrowski
PaaS
Papertrail
programmatically
Pub/Sub
Rajagopalan
RawVM
Redis
Redis-based
Registrator
Reviewer1
Reviewer2
SNI
SREs
Secura
ServiceGraph
Servicegraph
Sharding
Shriram
SignalFX
Snell-Feikema
SolarWinds
Steitz
Superfeet
TCP
TCP-level
TLS
TLS-secured
Tcpdump
Tigera
TopicList
UID
UIDs
Undeploy
VMware
VM-based
VMs
VPN
ValueType
WeaveWorks
WebSocket
Websockets
WebSphere
Webhooks
X.509
X.509.
Yessenov
Zack
Zipkin
_CA_
_OK_
_V2_
8x
_blog
_CA_
_data
_docs
_help
_OK_
_proxy
_V2_
_v2_
_v3_
a.k.a.
abc
abcde12345
accesslog
accesslogs
accounts.my
ack-istio
ACLs
Acmeair
ACS-Engine
addon
addons
add-ons
strongSwan
admissionregistration
admissionregistration.k8s.io
AES-NI
AKS
Alibaba
alt
analytics
ANDed
Ansible
API
api-server
apiVersion
Apigee
APIs
Aporeto
AppOptics
appswitch
AppSwitch
archive.istio.io
archive.istio.io.
args.yaml
Auth0
AuthPolicy
autoscaler
Autoscalers
autoscalers
autoscaling
az
backend
backends
base64
BigQuery
bind-productpager-viewer
booksale
bitpipe
BluePerf
BluePerf
boilerplates
Bookinfo
boolean
bring-your-own-CA
bring-your-own-identity
Brooks
bt
Budinsky
camelCase
canaried
canarying
CAP_NET_ADMIN
CAs
CDNs
Chrony
CIDRs
Cilium
CIOs
Circonus
cli
CloudWatch
cnn.com
colocated
ConfigMap
configmap
configmap
configmaps
containerID
ControlZ
CoreDNS
coreos
Costin
CRD
CRD
CRDs
CSRs
Ctrl
D3.js
Datadog
datapath
dataset
datastore
Datawire
debian
decapsulated
Delayering
deployment
dev
Devirtualization
devops
discuss.istio.io
distro
docker-compose's
docker.io
e.g.
eBPF
edition.cnn.com
Elasticsearch
embeddable
emojis
enablement
endUser-to-Service
env
etcd
example.com
ExecAction
Exfiltrating
ExternalName
facto
failovers
faq
faq.md
fcm.googleapis.com
FDs
filename
filenames
fluentd
fine-grained
Firebase
FitStation
Fluentd
foo.yaml
fortio
FQDNs
frontend
gRPC
gbd
GCP-IAM
GCP_OPTS
gdb
getPetsById
git
GitHub
GKE-IAM
GKE-Istio
GKE-Workloads
GlueCon
Gmail
golang
googleapis.com
googlegroups.com
goroutine
goroutines
goto
Grafana
grafana-istio-dashboard
Graphviz
gRPC
grpc
helloworld
Herness
hostIP
hostname
hostnames
hotspots
HP
html
HTTP
http
HTTP1.1
HTTP2
http2
httpReqTimeout
httpbin
httpbin.org
httpReqTimeout
https
hyperkube
Hystrix
i.e.
ILBs
Incrementality
initializer
Initializers
initializers
injector
int64
interdependencies
Interdependencies
intermediation
interoperate
interoperation
intra-cluster
intrahost
ip_address
IPs
iptables
istio
istio-apiserver
istio-system1
iptables
IPv4
Istio
istio.io
istio.io.
Istiofied
IstioMesh
jason
Jog
json
JSON-formatted
JWT
jwt.io
JWTs
Keepalived
key.pem
kube-api
kube-apiserver
kube-dns
kube-inject
kube-proxy
kube-public
kube-system
Keycloak
Kiali
Kibana
Knative
Kops
Kuat
Kube
Kubecon
kubeconfig
Kubelet
kubelet
Kubernetes
kubernetes.default
L3-4
L4-L6
learnings
LibreSSL
lifecycle
LightStep
liveness
logInfo
mTLS
Lyft
machineSetup
macOS
Mandar
Manolache
memcached
memcached
memcached-2's
memquota
MeshPolicy
Mesos
mesos-dns
metadata
methodName
microservice
microservices
middleboxes
middleware
Minikube
misconfigured
misordered
MongoDB
mongodb
MSG_PEEK
Multicloud
multicloud
Multicluster
multicluster
mutatingwebhookconfiguration
mutual-tls
my-svc
my-svc-234443-5sffe
myapp
MySQL
mysql
mysqldb
namespace
namespaces
natively
netmask
networking.istio.io
nginx
nginx-proxy
nodePorts
nodeport
non-sandboxed
ns
OAuth2
oc
ok
onboarding
Onboarding
onwards
OP_QUERY
OpenID
OpenID_Connect
OpenShift
OpenSSL
openssl
OS-level
Ostrowski
p50
p99
PaaS
packageName.serviceName
Papertrail
parenthesization
passthrough
peek
pem
platform-specific
pluggability
pluggable
png
pprof
pre-connected
pre-specified
preconfigured
prefetching
preformatted
preliminary.istio.io
preliminary.istio.io.
prepends
prober
programmatically
proto
protobuf
protos
proxied
proxy_http_version
proxying
proxyless
Proxyless
Pub/Sub
PubNub
pwd
qps
quay.io
quo
radis
Rajagopalan
ratelimit-handler
RawVM
rbac
re-applied
re-patch
reachability
rearchitect
readinessProbe
recomposition
Redis
redis
Redis-based
referer
Registrator
registrator
reimplement
reimplemented
reinject
repo
repurposed
rethink
Reviewer1
Reviewer2
roadmap
roadmaps
RocketChat
roleRef
rollout
rollouts
routable
runtime
runtimes
RPC
RPCs
runtime
runtimes
sa
sayin
schemas
secretName
Secura
selinux
serverless
serviceaccount
ServiceGraph
servicegraph-example
sha256
sharded
Sharding
sharding
Shriram
sidecar
sidecar.env
SignalFX
sinkInfo
SLOs
Snell-Feikema
SNI
SolarWinds
spiffe
SREs
Stackdriver
Statsd
stdout
Steitz
strongSwan
struct
Styra
subdomain
subdomains
subnet
subnets
subresources
substring
Superfeet
svc
svc.com
svg
Sysdig
Taillefer
TCP
tcp
TCP-level
Tcpdump
team1
team1-ns
team2
team2-ns
templated
test-api
Tigera
timeframe
timestamp
TLS
TLS-secured
touchpoints
Trulia
trustability
tunneling
UID
UIDs
ulimit
uncomment
uncommented
Undeploy
unencrypted
unmanaged
unsampled
@ -443,121 +478,49 @@ v1beta1#MutatingWebhookConfiguration
v2
v2-mysql
v3
ValueType
vCPU
versioned
versioning
veth-pair
virtualization
Virtualization
virtualservices-destrules
vm-1
VM-based
VMs
VMware
VPN
WBoilerplates
WeaveWorks
webhook
webhook
Webhooks
webhooks
WebSocket
Websockets
WebSphere
whitelist
whitelists
wikipedia
wikipedia.org
wildcard
wildcards
wildcarded
wildcards
www.google.com
x-envoy-upstream-rq-timeout-ms
X.509
x.509
X.509.
x509
x86
xDS
yaml
yamls
Yessenov
yournamespace
BluePerf
embeddable
p99
vCPU
AES-NI
Stackdriver
Statsd
3.99ms
72.96ms
18x
8x
appswitch
appswitch-0
AppSwitch
bitpipe
datapath
decapsulated
Delayering
Devirtualization
FDs
fine-grained
goto
intrahost
iptables
Jog
kube-proxy
Mandar
memcached
memcached-2's
MSG_PEEK
onboarding
Onboarding
OS-level
p50
peek
pre-connected
proxyless
Proxyless
quo
rethink
reimplement
sidecar
subnet
subnets
touchpoints
tunneling
veth-pair
virtualization
Virtualization
HP
PubNub
Trulia
Sysdig
Aporeto
Styra
Kiali
Knative
Apigee
SLOs
serverless
Alibaba
ack-istio
subresources
re-patch
re-applied
CustomResourceDefinitions
CRD
injector
configmap
x509
VirtualService
sha256
deployment
webhook
Keycloak
OpenID
Istioctl
Boilerplates
boilerplates
RocketChat
discuss.istio.io
add-ons
recomposition
interdependencies
Interdependencies
middleware
interoperation
misordered
x86
repurposed
netmask
Herness
distro
docker-compose's
Taillefer
Zack
Zipkin
- search.md
searchresults

View File

@ -21,7 +21,7 @@ Don't use capitalization for emphasis.
Follow the original capitalization employed in the code or configuration files
when referencing those values directly. Use back-ticks \`\` around the
referenced content to make the connection explicit. For example, use
`IstioRoleBinding`, not "Istio Role Binding" or "istio role binding".
`IstioRoleBinding`, not `Istio Role Binding` or `istio role binding`.
If you are not referencing values or code directly, use normal sentence
capitalization, for example, "The Istio role binding configuration takes place
@ -131,6 +131,7 @@ Related Terms:
| Mixer | `mixer`
| certificate | `cert`
| configuration | `config`
| delete | `kill`
## Best practices

View File

@ -12,7 +12,7 @@ icon: notes
model to describe and store its configuration. When running in Kubernetes, configuration can now be optionally managed using the `kubectl`
command.
- **Multiple Namespace Support**. Istio control plane components are now in the dedicated "istio-system" namespace. Istio can manage
- **Multiple Namespace Support**. Istio control plane components are now in the dedicated `istio-system` namespace. Istio can manage
services in other non-system namespaces.
- **Mesh Expansion**. Initial support for adding non-Kubernetes services (in the form of VMs and/or physical machines) to a mesh. This is an early version of
@ -21,7 +21,7 @@ this feature and has some limitations (such as requiring a flat network across c
- **Multi-Environment Support**. Initial support for using Istio in conjunction with other service registries
including Consul and Eureka.
- **Automatic injection of sidecars**. Istio sidecar can automatically be injected into a Pod upon deployment using the
- **Automatic injection of sidecars**. Istio sidecar can automatically be injected into a pod upon deployment using the
[Initializers](https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/) alpha feature in Kubernetes.
## Performance and quality
@ -91,7 +91,7 @@ persistent storage to facilitate CA restarts.
- **User may get periodical 404 when accessing the application**: We have noticed that Envoy doesn't get routes properly occasionally
thus a 404 is returned to the user. We are actively working on this [issue](https://github.com/istio/istio/issues/1038).
- **Istio Ingress or Egress reports ready before Pilot is actually ready**: You can check the istio-ingress and istio-egress pods status
- **Istio Ingress or Egress reports ready before Pilot is actually ready**: You can check the `istio-ingress` and `istio-egress` pods status
in the `istio-system` namespace and wait a few seconds after all the Istio pods reach ready status. We are actively working on this
[issue](https://github.com/istio/istio/pull/1055).

View File

@ -32,7 +32,7 @@ providing a flexible fine-grained access control mechanism. [Learn more](https:/
- **Istio RBAC**. Mixer now has a role-based access control adapter.
[Learn more](/docs/concepts/security/#authorization)
- **Fluentd**. Mixer now has an adapter for log collection through [fluentd](https://www.fluentd.org).
- **Fluentd**. Mixer now has an adapter for log collection through [Fluentd](https://www.fluentd.org).
[Learn more](/docs/tasks/telemetry/fluentd/)
- **Stdio**. The stdio adapter now lets you log to files with support for log rotation & backup, along with a host

View File

@ -52,4 +52,4 @@ management functionality without dealing with Mixer or Citadel.
- There is a [problem with Google Kubernetes Engine 1.10.2](https://github.com/istio/istio/issues/5723). The workaround is to use Kubernetes 1.9 or switch the node image to Ubuntu. A fix is expected in GKE 1.10.4.
- There is a known namespace issue with `istioctl experimental convert-networking-config` tool where the desired namespace may be changed to the istio-system namespace, please manually adjust to use the desired namespace after running the conversation tool. [Learn more](https://github.com/istio/istio/issues/5817)
- There is a known namespace issue with the `istioctl experimental convert-networking-config` tool where the desired namespace may be changed to the `istio-system` namespace. Please manually adjust to use the desired namespace after running the conversion tool. [Learn more](https://github.com/istio/istio/issues/5817)

View File

@ -11,7 +11,7 @@ This release note describes what's different between Istio 1.0.4 and Istio 1.0.5
## Fixes
- Disabled the precondition cache in the istio-policy service as it lead to invalid results. The
- Disabled the precondition cache in the `istio-policy` service as it led to invalid results. The
cache will be reintroduced in a later release.
- Mixer now only generates spans when there is an enabled `tracespan` adapter, resulting in lower CPU overhead in normal cases.

View File

@ -18,7 +18,7 @@ TODO announcement
Refer to our guide on [Migrating the `RbacConfig` to `ClusterRbacConfig`](/docs/setup/kubernetes/upgrading-istio#migrating-the-rbacconfig-to-clusterrbacconfig)
for migration instructions.
## Istioctl
## `istioctl`
- Deprecated `istioctl create`, `istioctl replace`, `istioctl get`, and `istioctl delete`. Use `kubectl` instead (see <https://kubernetes.io/docs/tasks/tools/install-kubectl>). The next release (1.2) removes the deprecated commands.
- Deprecated `istioctl gen-deploy`. Use a [`helm template`](/docs/setup/kubernetes/helm-install/#option-1-install-with-helm-via-helm-template) instead. The next release (1.2) removes this command.
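
For illustration, a workflow that previously relied on the deprecated configuration commands maps onto `kubectl` roughly as follows (a sketch; the file name is a placeholder):

{{< text bash >}}
$ # previously: istioctl create -f my-rule.yaml
$ kubectl apply -f my-rule.yaml
$ # previously: istioctl get virtualservices
$ kubectl get virtualservices
$ # previously: istioctl delete -f my-rule.yaml
$ kubectl delete -f my-rule.yaml
{{< /text >}}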

View File

@ -27,7 +27,7 @@ As an example, let's say we have a deployed service, **helloworld** version **v1
Although fine for what it does, this approach is only useful when we have a properly tested version that we want to deploy, i.e., more of a blue/green, a.k.a. red/black, kind of upgrade than a "dip your feet in the water" kind of canary deployment. In fact, for the latter (for example, testing a canary version that may not even be ready or intended for wider exposure), the canary deployment in Kubernetes would be done using two Deployments with [common pod labels](https://kubernetes.io/docs/concepts/cluster-administration/manage-deployment/#using-labels-effectively). In this case, we can't use autoscaling anymore because it's now being done by two independent autoscalers, one for each Deployment, so the replica ratios (percentages) may vary from the desired ratio, depending purely on load.
Whether we use one deployment or two, canary management using deployment features of container orchestration platforms like Docker, Mesos/Marathon, or Kubernetes has a fundamental problem: the use of instance scaling to manage the traffic; traffic version distribution and replica deployment are not independent in these systems. All replica pods, regardless of version, are treated the same in the kube-proxy round-robin pool, so the only way to manage the amount of traffic that a particular version receives is by controlling the replica ratio. Maintaining canary traffic at small percentages requires many replicas (e.g., 1% would require a minimum of 100 replicas). Even if we ignore this problem, the deployment approach is still very limited in that it only supports the simple (random percentage) canary approach. If, instead, we wanted to limit the visibility of the canary to requests based on some specific criteria, we still need another solution.
Whether we use one deployment or two, canary management using deployment features of container orchestration platforms like Docker, Mesos/Marathon, or Kubernetes has a fundamental problem: the use of instance scaling to manage the traffic; traffic version distribution and replica deployment are not independent in these systems. All replica pods, regardless of version, are treated the same in the `kube-proxy` round-robin pool, so the only way to manage the amount of traffic that a particular version receives is by controlling the replica ratio. Maintaining canary traffic at small percentages requires many replicas (e.g., 1% would require a minimum of 100 replicas). Even if we ignore this problem, the deployment approach is still very limited in that it only supports the simple (random percentage) canary approach. If, instead, we wanted to limit the visibility of the canary to requests based on some specific criteria, we still need another solution.
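
To make the contrast concrete, here is a rough sketch (not from the original post) of the kind of routing rule discussed in the next section: it pins the **helloworld** canary at 1% of traffic regardless of how many replicas back each version. The subset names are illustrative and assume a matching `DestinationRule`.

{{< text yaml >}}
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: helloworld
spec:
  hosts:
  - helloworld
  http:
  - route:
    - destination:
        host: helloworld
        subset: v1
      weight: 99    # baseline version keeps 99% of traffic
    - destination:
        host: helloworld
        subset: v2
      weight: 1     # canary receives 1%, independent of replica counts
{{< /text >}}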
## Enter Istio

View File

@ -79,7 +79,7 @@ spec:
istio: ingress
{{< /text >}}
The istio-ingress exposes ports 80 and 443. Let's limit incoming traffic to just these two ports. Envoy has a [built-in administrative interface](https://www.envoyproxy.io/docs/envoy/latest/operations/admin.html#operations-admin-interface), and we don't want a misconfigured istio-ingress image to accidentally expose our admin interface to the outside world. This is an example of defense in depth: a properly configured image should not expose the interface, and a properly configured Network Policy will prevent anyone from connecting to it. Either can fail or be misconfigured and we are still protected.
The `istio-ingress` exposes ports 80 and 443. Let's limit incoming traffic to just these two ports. Envoy has a [built-in administrative interface](https://www.envoyproxy.io/docs/envoy/latest/operations/admin.html#operations-admin-interface), and we don't want a misconfigured `istio-ingress` image to accidentally expose our admin interface to the outside world. This is an example of defense in depth: a properly configured image should not expose the interface, and a properly configured Network Policy will prevent anyone from connecting to it. Either can fail or be misconfigured and we are still protected.
{{< text yaml >}}
apiVersion: networking.k8s.io/v1

View File

@ -88,7 +88,7 @@ Even though AppSwitch is not a proxy, it _does_ arbitrate connections between ap
### Zero-Cost Load Balancer, Firewall and Network Analyzer
Typical implementations of network functions such as load balancers and firewalls require an intermediate layer that needs to tap into data/packet stream. Kubernetes' implementation of load balancer (kube-proxy) for example introduces a probe into the packet stream through iptables and Istio implements the same at the proxy layer. But if all that is required is to redirect or drop connections based on policy, it is not really necessary to stay in the datapath during the entire course of the connection. AppSwitch can take care of that much more efficiently by simply manipulating the control path at the API level. Given its intimate proximity to the application, AppSwitch also has easy access to various pieces of application level metrics such as dynamics of stack and heap usage, precisely when a service comes alive, attributes of active connections etc., all of which could potentially form a rich signal for monitoring and analytics.
Typical implementations of network functions such as load balancers and firewalls require an intermediate layer that needs to tap into data/packet stream. Kubernetes' implementation of load balancer (`kube-proxy`) for example introduces a probe into the packet stream through iptables and Istio implements the same at the proxy layer. But if all that is required is to redirect or drop connections based on policy, it is not really necessary to stay in the datapath during the entire course of the connection. AppSwitch can take care of that much more efficiently by simply manipulating the control path at the API level. Given its intimate proximity to the application, AppSwitch also has easy access to various pieces of application level metrics such as dynamics of stack and heap usage, precisely when a service comes alive, attributes of active connections etc., all of which could potentially form a rich signal for monitoring and analytics.
To go a step further, AppSwitch can also perform L7 load balancing and firewall functions based on the protocol data that it obtains from the socket buffers. It can synthesize the protocol data and various other signals with the policy information acquired from Pilot to implement a highly efficient form of routing and access control enforcement. It can essentially "influence" the application to connect to the right backend server without requiring any changes to the application or its configuration. It is as if the application itself is infused with policy and traffic-management intelligence. Except in this case, the application can't escape the influence.
@ -194,7 +194,7 @@ Encrypted traffic completely undermines the ability of the service mesh to analy
AppSwitch removes a number of layers and processing from the standard service mesh stack. What does all that translate to in terms of performance?
We ran some initial experiments to characterize the extent of the opportunity for optimization based on the initial integration of AppSwitch discussed earlier. The experiments were run on GKE using fortio-0.11.0, istio-0.8.0 and appswitch-0.4.0-2. In case of the proxyless test, AppSwitch daemon was run as a `DaemonSet` on the Kubernetes cluster and the Fortio pod spec was modified to inject AppSwitch client. These were the only two changes made to the setup. The test was configured to measure the latency of GRPC requests across 100 concurrent connections.
We ran some initial experiments to characterize the extent of the opportunity for optimization based on the initial integration of AppSwitch discussed earlier. The experiments were run on GKE using `fortio-0.11.0`, `istio-0.8.0` and `appswitch-0.4.0-2`. In case of the proxyless test, AppSwitch daemon was run as a `DaemonSet` on the Kubernetes cluster and the Fortio pod spec was modified to inject AppSwitch client. These were the only two changes made to the setup. The test was configured to measure the latency of GRPC requests across 100 concurrent connections.
{{< image link="perf.png" alt="Performance comparison" caption="Latency with and without AppSwitch" >}}

View File

@ -346,7 +346,7 @@ becomes addressable by a local cluster domain name, for example by `mysqldb.vm.s
to it can be secured by
[mutual TLS authentication](/docs/concepts/security/#mutual-tls-authentication). There is no need to create a service
entry to access this service; however, the service must be registered with Istio. To enable such integration, Istio
components (_Envoy proxy_, _node-agent_, _istio-agent_) must be installed on the machine and the Istio control plane
components (_Envoy proxy_, _node-agent_, `_istio-agent_`) must be installed on the machine and the Istio control plane
(_Pilot_, _Mixer_, _Citadel_) must be accessible from it. See the
[Istio Mesh Expansion](/docs/setup/kubernetes/mesh-expansion/) instructions for more details.

View File

@ -185,13 +185,13 @@ a Stackdriver handler is described [here](/docs/reference/config/policy-and-tele
project and you should find a bucket named
`accesslog.logentry.istio-system` in your sink bucket.
* Pub/Sub: Navigate to the [Pub/Sub
TopicList](https://pantheon.corp.google.com/cloudpubsub/topicList) for
Topic List](https://pantheon.corp.google.com/cloudpubsub/topicList) for
your project and you should find a topic for `accesslog` in your sink
topic.
## Understanding what happened
`Stackdriver.yaml` file above configured Istio to send accesslogs to
`Stackdriver.yaml` file above configured Istio to send access logs to
Stackdriver and then added a sink configuration where these logs could be
exported. In detail as follows:

View File

@ -47,7 +47,7 @@ Fortunately, a standard Istio deployment already includes a [Gateway](/docs/conc
A simple way to see this type of approach in action is to first setup your Kubernetes environment using the [Platform Setup](/docs/setup/kubernetes/platform-setup/) instructions, and then install Istio using [Helm](/docs/setup/kubernetes/minimal-install/), including only the traffic management components (ingress gateway, egress gateway, Pilot). The following example uses [Google Kubernetes Engine](https://cloud.google.com/gke).
First, **setup and configure [GKE](/docs/setup/kubernetes/platform-setup/gke/)**:
First, setup and configure [GKE](/docs/setup/kubernetes/platform-setup/gke/):
{{< text bash >}}
$ gcloud container clusters create istio-inc --zone us-central1-f
@ -57,7 +57,7 @@ $ kubectl create clusterrolebinding cluster-admin-binding \
--user=$(gcloud config get-value core/account)
{{< /text >}}
Next, **[install Helm](https://docs.helm.sh/using_helm/#installing-helm) and [generate a minimal Istio install](/docs/setup/kubernetes/minimal-install/)** -- only traffic management components:
Next, [install Helm](https://docs.helm.sh/using_helm/#installing-helm) and [generate a minimal Istio install](/docs/setup/kubernetes/minimal-install/) -- only traffic management components:
{{< text bash >}}
$ helm template install/kubernetes/helm/istio \
@ -72,20 +72,20 @@ $ helm template install/kubernetes/helm/istio \
--set pilot.sidecar=false > istio-minimal.yaml
{{< /text >}}
Then **create the istio-system namespace and deploy Istio**:
Then create the `istio-system` namespace and deploy Istio:
{{< text bash >}}
$ kubectl create namespace istio-system
$ kubectl apply -f istio-minimal.yaml
{{< /text >}}
Next, **deploy the Bookinfo sample** without the Istio sidecar containers:
Next, deploy the Bookinfo sample without the Istio sidecar containers:
{{< text bash >}}
$ kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml
{{< /text >}}
Now, **configure a new Gateway** that allows access to the reviews service from outside the Istio mesh, a new `VirtualService` that splits traffic evenly between v1 and v2 of the reviews service, and a set of new `DestinationRule` resources that match destination subsets to service versions:
Now, configure a new Gateway that allows access to the reviews service from outside the Istio mesh, a new `VirtualService` that splits traffic evenly between v1 and v2 of the reviews service, and a set of new `DestinationRule` resources that match destination subsets to service versions:
{{< text bash >}}
$ cat <<EOF | kubectl apply -f -
@ -146,7 +146,7 @@ spec:
EOF
{{< /text >}}
Finally, **deploy a Pod that you can use for testing** with `curl` (and without the Istio sidecar container):
Finally, deploy a pod that you can use for testing with `curl` (and without the Istio sidecar container):
{{< text bash >}}
$ kubectl apply -f samples/sleep/sleep.yaml
@ -222,4 +222,4 @@ null
Mission accomplished! This post showed how to deploy a minimal installation of Istio that only contains the traffic management components (Pilot, ingress Gateway), and then use those components to direct traffic to specific versions of the reviews service. And it wasn't necessary to deploy the Istio sidecar proxy to gain these capabilities, so there was little to no interruption of existing workloads or applications.
Using the built-in ingress Gateway (along with some `VirtualService` and `DestinationRule` resources) this post showed how you can easily leverage Istio's traffic management for cluster-external ingress traffic and cluster-internal service-to-service traffic. This technique is a great example of an incremental approach to adopting Istio, and can be especially useful in real-world cases where Pods are owned by different teams or deployed to different namespaces.
Using the built-in ingress gateway (along with some `VirtualService` and `DestinationRule` resources) this post showed how you can easily leverage Istio's traffic management for cluster-external ingress traffic and cluster-internal service-to-service traffic. This technique is a great example of an incremental approach to adopting Istio, and can be especially useful in real-world cases where Pods are owned by different teams or deployed to different namespaces.

View File

@ -42,9 +42,9 @@ when official multi-tenancy support is provided.
Deploying multiple Istio control planes starts by replacing all `namespace` references
in a manifest file with the desired namespace. Using `istio.yaml` as an example, if two tenant
level Istio control planes are required; the first can use the `istio.yaml` default name of
*istio-system* and a second control plane can be created by generating a new yaml file with
`istio-system` and a second control plane can be created by generating a new yaml file with
a different namespace. As an example, the following command creates a yaml file with
the Istio namespace of *istio-system1*.
the Istio namespace of `istio-system1`.
{{< text bash >}}
$ cat istio.yaml | sed s/istio-system/istio-system1/g > istio-system1.yaml
@ -88,7 +88,7 @@ administrator to only the assigned namespace.
The manifest files in the Istio repositories create both common resources that would
be used by all Istio control planes as well as resources that are replicated per control
plane. Although it is a simple matter to deploy multiple control planes by replacing the
*istio-system* namespace references as described above, a better approach is to split the
`istio-system` namespace references as described above, a better approach is to split the
manifests into a common part that is deployed once for all tenants and a tenant
specific part. For the [Custom Resource Definitions](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/#customresourcedefinitions), the roles and the role
bindings should be separated out from the provided Istio manifests. Additionally, the
@ -101,7 +101,7 @@ section.
To restrict a tenant administrator to a single Istio namespace, the cluster
administrator would create a manifest containing, at a minimum, a `Role` and `RoleBinding`
similar to the one below. In this example, a tenant administrator named *sales-admin*
is limited to the namespace *istio-system1*. A completed manifest would contain many
is limited to the namespace `istio-system1`. A completed manifest would contain many
more `apiGroups` under the `Role` providing resource access to the tenant administrator.
{{< text yaml >}}
@ -137,7 +137,7 @@ Istio control plane, the Istio manifest must be updated to specify the applicati
that Pilot should watch for creation of its xDS cache. This is done by starting the Pilot
component with the additional command line arguments `--appNamespace, ns-1`. Where *ns-1*
is the namespace that the tenant's application will be deployed in. An example snippet from
the istio-system1.yaml file is included below.
the `istio-system1.yaml` file is shown below.
{{< text yaml >}}
apiVersion: extensions/v1beta1
@ -167,7 +167,7 @@ spec:
### Deploying the tenant application in a namespace
Now that the cluster administrator has created the tenant's namespace (ex. *istio-system1*) and
Now that the cluster administrator has created the tenant's namespace (ex. `istio-system1`) and
Pilot's service discovery has been configured to watch for a specific application
namespace (ex. *ns-1*), create the application manifests to deploy in that tenant's specific
namespace. For example:
@ -209,7 +209,7 @@ The *-n* option will scope the rule to the tenant's mesh and should be set to th
the tenant's app is deployed in. Note that the *-n* option can be skipped on the command line if
the .yaml file for the resource scopes it properly instead.
For example, the following command would be required to add a route rule to the *istio-system1*
For example, the following command would be required to add a route rule to the `istio-system1`
namespace:
{{< text bash >}}
@ -304,8 +304,8 @@ technology, ex. Kubernetes, rather than improvements in Istio capabilities.
## Issues
* The CA (Certificate Authority) and Mixer pod logs from one tenant's Istio control
plane (ex. *istio-system* `namespace`) contained 'info' messages from a second tenant's
Istio control plane (ex *istio-system1* `namespace`).
plane (e.g. `istio-system` namespace) contained 'info' messages from a second tenant's
Istio control plane (e.g. `istio-system1` namespace).
## Challenges with other multi-tenancy models

View File

@ -133,7 +133,7 @@ The creation of custom ingress gateway could be used in order to have different
1. Create your service:
{{< warning_icon >}} The `NodePort` used needs to be an available Port.
{{< warning_icon >}} The `NodePort` used needs to be an available port.
{{< text yaml >}}
apiVersion: v1

View File

@ -55,7 +55,7 @@ Fortio is a fast, small, reusable, embeddable go library as well as a command li
Fortio is also 100% open-source and with no external dependencies beside go and gRPC so you can reproduce all our results easily and add your own variants or scenarios you are interested in exploring.
Here is an example of scenario (one out of the 8 scenarios we run for every build) result graphing the latency distribution for istio-0.7.1 at 400 Query-Per-Second (qps) between 2 services inside the mesh (with mutual TLS, Mixer policy checks and telemetry collection):
Here is an example of scenario (one out of the 8 scenarios we run for every build) result graphing the latency distribution for `istio-0.7.1` at 400 Query-Per-Second (qps) between 2 services inside the mesh (with mutual TLS, Mixer policy checks and telemetry collection):
<iframe src="https://fortio.istio.io/browse?url=qps_400-s1_to_s2-0.7.1-2018-04-05-22-06.json&xMax=105&yLog=true" width="100%" height="1024" scrolling="no" frameborder="0"></iframe>

View File

@ -82,7 +82,7 @@ Istio service identities on different platforms:
- **AWS**: AWS IAM user/role account
- **On-premises (non-Kubernetes)**: user account, custom service account, service name, istio service account, or GCP service account.
- **On-premises (non-Kubernetes)**: user account, custom service account, service name, Istio service account, or GCP service account.
The custom service account refers to the existing service account just like the identities that the customer's Identity Directory manages.
### Istio security vs SPIFFE

View File

@ -54,7 +54,7 @@ in the mesh and uses this model to let Envoy instances know about the other Envo
Each Envoy instance maintains [load balancing information](#discovery-and-load-balancing)
based on the information it gets from Pilot and periodic health-checks
of other instances in its load-balancing pool, allowing it to intelligently
of other instances in its load balancing pool, allowing it to intelligently
distribute traffic between destination instances while following its specified
routing rules.
@ -279,7 +279,7 @@ continued unavailability of critical services in the application, resulting
in poor user experience.
Istio enables protocol-specific fault injection into the network, instead
of killing pods or delaying or corrupting packets at the TCP layer. The rationale
of deleting pods or delaying or corrupting packets at the TCP layer. The rationale
is that the failures observed by the application layer are the same
regardless of network level failures, and that more meaningful failures can
be injected at the application layer (for example, HTTP error codes) to exercise the resilience of an application.
@ -477,7 +477,7 @@ spec:
timeout: 10s
{{< /text >}}
You can also specify the number of retry attempts for an HTTP request in a VirtualService.
You can also specify the number of retry attempts for an HTTP request in a virtual service.
The maximum number of retry attempts, or the number of attempts possible within the default or overridden timeout period, can be set as follows:
{{< text yaml >}}
@ -505,7 +505,7 @@ See the [request timeouts task](/docs/tasks/traffic-management/request-timeouts)
#### Injecting faults
A VirtualService can specify one or more faults to inject
A virtual service can specify one or more faults to inject
while forwarding HTTP requests to the rule's corresponding request destination.
The faults can be either delays or aborts.

View File

@ -272,8 +272,8 @@ TLS origination for an external service, only this time using a service that req
This example is considerably more involved because you need to first:
1. generate client and server certificates
1. deploy an external service that supports the mTLS protocol
1. redeploy the egress gateway with the needed mTLS certs
1. deploy an external service that supports the mutual TLS protocol
1. redeploy the egress gateway with the needed mutual TLS certs
Only then can you configure the external traffic to go through the egress gateway which will perform
TLS origination.
@ -313,9 +313,9 @@ TLS origination.
$ cd ..
{{< /text >}}
### Deploy an mTLS server
### Deploy a mutual TLS server
To simulate an actual external service that supports the mTLS protocol,
To simulate an actual external service that supports the mutual TLS protocol,
deploy an [NGINX](https://www.nginx.com) server in your Kubernetes cluster, but running outside of
the Istio service mesh, i.e., in a namespace without Istio sidecar proxy injection enabled.

View File

@ -14,7 +14,7 @@ example extends that example to show how to configure SNI monitoring and apply p
* Configure traffic to `*.wikipedia.org` by following
[the steps](/docs/examples/advanced-gateways/wildcard-egress-hosts#wildcard-configuration-for-arbitrary-domains) in
[Configure Egress Traffic using Wildcard Hosts](/docs/examples/advanced-gateways/wildcard-egress-hosts/) example,
**with mTLS enabled**.
**with mutual TLS enabled**.
## SNI monitoring and access policies
@ -323,7 +323,7 @@ access the English and the French versions.
and Spanish.
> It may take several minutes for the Mixer policy components to synchronize on the new policy. In case you want to
quickly demonstrate the new policy without waiting until the synchronization is complete, kill the Mixer policy pods:
quickly demonstrate the new policy without waiting until the synchronization is complete, delete the Mixer policy pods:
{{< text bash >}}
$ kubectl delete pod -n istio-system -l istio-mixer-type=policy

View File

@ -86,7 +86,7 @@ Adding `"--http_port=8081"` in the ESP deployment arguments and expose the HTTP
authPolicy: MUTUAL_TLS
{{< /text >}}
1. After this, you will find access to `EXTERNAL_IP` no longer works because istio proxy only accept secure mesh connections.
1. After this, you will find that access to `EXTERNAL_IP` no longer works because the Istio proxy only accepts secure mesh connections.
Accessing through Ingress works because Ingress does HTTP terminations.
1. To secure the access at Ingress, follow the [instructions](/docs/tasks/traffic-management/secure-ingress/).

View File

@ -100,7 +100,7 @@ running in a second cluster.
`<IPofCluster2IngressGateway>:15443` over an mTLS connection.
If your cluster2 Kubernetes cluster is running in an environment that does not
support external load-balancers, you must use the IP and nodePort corresponding
support external load balancers, you must use the IP and nodePort corresponding
to port 15443 of a node running the `istio-ingressgateway` service. Instructions
for obtaining the node IP can be found in the
[Control Ingress Traffic](/docs/tasks/traffic-management/ingress/#determining-the-ingress-ip-and-ports)

View File

@ -45,7 +45,7 @@ certificate from the Istio samples directory.
The instructions, below, also set up the `remote` cluster with a selector-less service and an endpoint for `istio-pilot.istio-system`
that has the address of the `local` Istio ingress gateway.
This will be used to access the `local` pilot securely using the ingress gateway without mTLS termination.
This will be used to access the `local` pilot securely using the ingress gateway without mutual TLS termination.
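
For reference, a minimal sketch of that pattern, a Kubernetes service with no selector plus a manually managed endpoint (the address, port name, and port number below are placeholders, not the values produced by the setup instructions):

{{< text yaml >}}
apiVersion: v1
kind: Service
metadata:
  name: istio-pilot
  namespace: istio-system
spec:
  ports:
  - name: grpc-pilot
    port: 15011
    protocol: TCP
---
apiVersion: v1
kind: Endpoints
metadata:
  name: istio-pilot
  namespace: istio-system
subsets:
- addresses:
  - ip: 192.0.2.10   # placeholder: address of the local Istio ingress gateway
  ports:
  - name: grpc-pilot
    port: 15011
{{< /text >}}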
### Setup the local cluster
@ -325,7 +325,7 @@ The difference between the two instances is the version of their `helloworld` im
{{< /text >}}
Although deployed locally, this Gateway instance will also affect the `remote` cluster by configuring it to passthrough
incoming traffic to the relevant remote service (SNI-based) but keeping the mTLS all the way from the source to destination sidecars.
incoming traffic to the relevant remote service (SNI-based) but keeping mutual TLS all the way from the source to destination sidecars.
1. Deploy the files:

View File

@ -78,8 +78,8 @@ services:
Debian packages for Istio Pilot, Mixer, and Citadel are available through the
Istio release. Alternatively, these components can be run as Docker
containers (docker.io/istio/pilot, docker.io/istio/mixer,
docker.io/istio/citadel). Note that these components are stateless and can
containers (`docker.io/istio/pilot`, `docker.io/istio/mixer`,
`docker.io/istio/citadel`). Note that these components are stateless and can
be scaled horizontally. Each of these components depends on the Istio API
server, which in turn depends on the etcd cluster for persistence. To
achieve high availability, each control plane service could be run as a

View File

@ -20,7 +20,7 @@ services from all other namespaces.
{{< /text >}}
1. Move to the Istio package directory. For example, if the package is
istio-{{< istio_full_version >}}:
`istio-{{< istio_full_version >}}`:
{{< text bash >}}
$ cd istio-{{< istio_full_version >}}
@ -28,7 +28,7 @@ services from all other namespaces.
The installation directory contains:
* Installation `.yaml` files for Kubernetes in `install/`
* Installation YAML files for Kubernetes in `install/`
* Sample applications in `samples/`
* The `istioctl` client binary in the `bin/` directory. `istioctl` is
used when manually injecting Envoy as a sidecar proxy.
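
For example, manual injection of the Bookinfo sample (the file path is shown for illustration) pipes the augmented manifest straight into `kubectl`:

{{< text bash >}}
$ istioctl kube-inject -f samples/bookinfo/platform/kube/bookinfo.yaml | kubectl apply -f -
{{< /text >}}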

View File

@ -42,11 +42,12 @@ The following commands have relative references in the Istio directory. You must
1. Choose one of the following two **mutually exclusive** options described below.
> To customize Istio and install add-ons, use the `--set <key>=<value>` option in the helm template or install command. [Installation Options](/docs/reference/config/installation-options/) references supported installation key and value pairs.
> To customize Istio and install addons, use the `--set <key>=<value>` option in the helm template or install command. [Installation Options](/docs/reference/config/installation-options/) references supported installation key and value pairs.
### Option 1: Install with Helm via `helm template`
1. Install all the Istio's [Custom Resource Definitions or CRDs for short](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/#customresourcedefinitions) via `kubectl apply`, and wait a few seconds for the CRDs to be committed in the kube-apiserver:
1. Install all the Istio's [Custom Resource Definitions or CRDs for short](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/#customresourcedefinitions) via `kubectl apply`, and wait a few seconds for the CRDs to be committed to
the Kubernetes API server:
{{< text bash >}}
$ for i in install install/kubernetes/helm/istio-init/files/crd*yaml; do kubectl apply -f $i; done
@ -95,7 +96,7 @@ to manage the lifecycle of Istio.
$ helm install install/kubernetes/helm/istio-init --name istio-init --namespace istio-system
{{< /text >}}
1. Verify all the Istio's CRDs have been committed in the kube-apiserver by checking all the CRD creation jobs complete with success:
1. Verify all the Istio's CRDs have been committed to the Kubernetes API server by checking all the CRD creation jobs complete with success:
{{< text bash >}}
$ kubectl get job --namespace istio-system | grep istio-crd

View File

@ -16,7 +16,7 @@ Refer to the [prerequisites](/docs/setup/kubernetes/quick-start/#prerequisites)
## Installation steps
1. If using a Helm version prior to 2.10.0, install Istio's [Custom Resource Definitions](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/#customresourcedefinitions)
via `kubectl apply`, and wait a few seconds for the CRDs to be committed in the kube-apiserver:
via `kubectl apply`, and wait a few seconds for the CRDs to be committed to the Kubernetes API server:
{{< text bash >}}
$ kubectl apply -f install/kubernetes/helm/istio/templates/crds.yaml

View File

@ -96,7 +96,7 @@ DNS, Kubernetes' DNS needs to be configured to point to CoreDNS as the DNS
server for the `.global` DNS domain. Create one of the following ConfigMaps
or update an existing one:
For clusters that use kube-dns:
For clusters that use `kube-dns`:
{{< text bash >}}
$ kubectl apply -f - <<EOF

View File

@ -123,7 +123,7 @@ perform a manual sidecar injection refer to the [manual sidecar example](#manual
{{< /text >}}
{{< info_icon >}} All clusters must have the same namespace for the Istio
components. It is possible to override the "istio-system" name on the main
components. It is possible to override the `istio-system` name on the main
cluster as long as the namespace is the same for all Istio components in
all clusters.
@ -356,7 +356,7 @@ Before you begin, set the endpoint IP environment variables as described in the
$ helm template install/kubernetes/helm/istio-remote --namespace istio-system --name istio-remote --set global.remotePilotAddress=${PILOT_POD_IP} --set global.remotePolicyAddress=${POLICY_POD_IP} --set global.remoteTelemetryAddress=${TELEMETRY_POD_IP} --set global.proxy.envoyStatsd.enabled=true --set global.proxy.envoyStatsd.host=${STATSD_POD_IP} --set global.remoteZipkinAddress=${ZIPKIN_POD_IP} --set sidecarInjectorWebhook.enabled=false > $HOME/istio-remote_noautoinj.yaml
{{< /text >}}
1. Create the istio-system namespace for remote Istio.
1. Create the `istio-system` namespace for remote Istio:
{{< text bash >}}
$ kubectl create ns istio-system

View File

@ -69,7 +69,8 @@ This guide installs the current release version of Istio.
### Deploy the Istio Helm chart
1. If using a Helm version prior to 2.10.0, install Istio's Custom Resource Definitions via `kubectl apply`, and wait a few seconds for the CRDs to be committed in the kube-apiserver:
1. If using a Helm version prior to 2.10.0, install Istio's Custom Resource Definitions via `kubectl apply`, and wait a few seconds for the CRDs to be committed
to the Kubernetes API server:
{{< text bash >}}
$ kubectl apply -f https://raw.githubusercontent.com/IBM/charts/master/stable/ibm-istio/templates/crds.yaml
@ -156,7 +157,7 @@ This guide installs the current release version of Istio.
{{< image link="./istio-installation-1.png" caption="IBM Cloud Private - Istio Installation" >}}
- Input the Helm release name (e.g. istio-1.0.3) and select `istio-system` as the target namespace.
- Input the Helm release name (e.g. `istio-1.0.3`) and select `istio-system` as the target namespace.
- Agree to the license terms.
- (Optional) Customize the installation parameters by clicking `All parameters`.
- Click the `Install` button.

View File

@ -29,7 +29,7 @@ To install and configure Istio in a Kubernetes cluster, follow these instruction
## Installation steps
1. Install Istio's [Custom Resource Definitions](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/#customresourcedefinitions)
via `kubectl apply`, and wait a few seconds for the CRDs to be committed in the kube-apiserver:
via `kubectl apply`, and wait a few seconds for the CRDs to be committed to the Kubernetes API server:
{{< text bash >}}
$ kubectl apply -f install/kubernetes/helm/istio/templates/crds.yaml
@ -107,8 +107,8 @@ Follow our instructions on how to
> If your cluster is running in an environment that does not
> support an external load balancer (e.g., minikube), the
> `EXTERNAL-IP` of `istio-ingress` and `istio-ingressgateway` will
> say `<pending>`. You will need to access it using the service
> NodePort, or use port-forwarding instead.
> say `<pending>`. You will need to access it using the service's
> `NodePort`, or use port-forwarding instead.
1. Ensure the corresponding Kubernetes pods are deployed and all containers: `istio-citadel-*`,
`istio-egressgateway-*`, `istio-galley-*`, `istio-ingress-*`, `istio-ingressgateway-*`,

View File

@ -72,7 +72,7 @@ sleep 1 1 1 1 2h sleep,istio-pro
### Automatic sidecar injection
Sidecars can be automatically added to applicable Kubernetes pods using a
[mutating webhook admission controller](https://kubernetes.io/docs/admin/admission-controllers/). This feature requires Kubernetes 1.9 or later. Verify that the kube-apiserver process has the `admission-control` flag set with the `MutatingAdmissionWebhook` and `ValidatingAdmissionWebhook` admission controllers added and listed in the correct order and the admissionregistration API is enabled.
[mutating webhook admission controller](https://kubernetes.io/docs/admin/admission-controllers/). This feature requires Kubernetes 1.9 or later. Verify that the `kube-apiserver` process has the `admission-control` flag set with the `MutatingAdmissionWebhook` and `ValidatingAdmissionWebhook` admission controllers added and listed in the correct order and the admissionregistration API is enabled.
{{< text bash >}}
$ kubectl api-versions | grep admissionregistration
@ -161,7 +161,7 @@ sleep-776b7bcdcd-gmvnr 1/1 Running 0 2s
configures when the webhook is invoked by Kubernetes. The default
supplied with Istio selects pods in namespaces with label
`istio-injection=enabled`. The set of namespaces in which injection
is applied can be changed by editing the MutatingWebhookConfiguration
is applied can be changed by editing the `MutatingWebhookConfiguration`
with `kubectl edit mutatingwebhookconfiguration
istio-sidecar-injector`.
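
For example, to opt the `default` namespace into automatic injection and confirm the label (a minimal sketch of the step described above):

{{< text bash >}}
$ kubectl label namespace default istio-injection=enabled
$ kubectl get namespace -L istio-injection
{{< /text >}}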

View File

@ -18,7 +18,7 @@ In the following steps, we assume that the Istio components are installed and up
and change directory to the new release directory.
1. Upgrade Istio's [Custom Resource Definitions](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/#customresourcedefinitions)
via `kubectl apply`, and wait a few seconds for the CRDs to be committed in the kube-apiserver:
via `kubectl apply`, and wait a few seconds for the CRDs to be committed to the Kubernetes API server:
{{< text bash >}}
$ kubectl apply -f @install/kubernetes/helm/istio/templates/crds.yaml@

View File

@ -391,7 +391,7 @@ dimensions. In the example, the 0.2 qps override is selected by matching only
three out of four quota dimensions.
If you want the policies enforced for a given namespace instead of the entire
Istio mesh, you can replace all occurrences of istio-system with the given
Istio mesh, you can replace all occurrences of `istio-system` with the given
namespace.
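
One way to perform that substitution before applying the configuration (a sketch; `my-namespace` and the file name are placeholders):

{{< text bash >}}
$ sed 's/istio-system/my-namespace/g' mixer-rule-ratelimit.yaml | kubectl apply -f -
{{< /text >}}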
## Cleanup

View File

@ -132,8 +132,8 @@ Before you start, please make sure that you have finished [preparation task](#be
1. Verify the logs stream has been created and check `permissiveResponseCode`.
In a Kubernetes environment, search through the logs for the istio-telemetry
pods as follows:
In a Kubernetes environment, search through the `istio-telemetry`
pods' logs as follows:
{{< text bash json >}}
$ kubectl -n istio-system logs -l istio-mixer-type=telemetry -c mixer | grep \"instance\":\"rbacsamplelog.logentry.istio-system\"
@ -142,7 +142,7 @@ Before you start, please make sure that you have finished [preparation task](#be
{"level":"warn","time":"2018-08-30T21:53:41.019851Z","instance":"rbacsamplelog.logentry.istio-system","destination":"productpage","latency":"1.112521495s","permissiveResponseCode":"denied","permissiveResponsePolicyID":"","responseCode":200,"responseSize":5723,"source":"istio-ingressgateway","user":"cluster.local/ns/istio-system/sa/istio-ingressgateway-service-account"}
{{< /text >}}
In telemetry logs above, the `responseCode` is 200 which is what user see now.
In the above telemetry logs, the `responseCode` is 200, which is what the user sees now.
The `permissiveResponseCode` is `denied`, which is what the user will see after switching the
global authorization configuration from `PERMISSIVE` mode to `ENFORCED` mode, which
indicates the global authorization configuration will work as expected after rolling
@ -172,8 +172,8 @@ Before you start, please make sure that you have finished [preparation task](#be
1. Verify the logs and check `permissiveResponseCode` again.
In a Kubernetes environment, search through the logs for the istio-telemetry
pods as follows:
In a Kubernetes environment, search through the `istio-telemetry`
pods' logs as follows:
{{< text bash json >}}
$ kubectl -n istio-system logs -l istio-mixer-type=telemetry -c mixer | grep \"instance\":\"rbacsamplelog.logentry.istio-system\"
@ -256,8 +256,8 @@ Before you start, please make sure that you have finished [step 1](#step-1-allow
1. Verify the logs and check `permissiveResponseCode` again.
In a Kubernetes environment, search through the logs for the istio-telemetry
pods as follows:
In a Kubernetes environment, search through the `istio-telemetry`
pods' logs as follows:
{{< text bash json >}}
$ kubectl -n istio-system logs -l istio-mixer-type=telemetry -c mixer | grep \"instance\":\"rbacsamplelog.logentry.istio-system\"

View File

@ -18,7 +18,7 @@ The activities in this task assume that you:
* Follow the instructions in the [quick start](/docs/setup/kubernetes/quick-start/) to install Istio on
Kubernetes **with authentication enabled**.
* Enable mutual TLS (mTLS) authentication when running the [installation steps](/docs/setup/kubernetes/quick-start/#installation-steps).
* Enable mutual TLS authentication when running the [installation steps](/docs/setup/kubernetes/quick-start/#installation-steps).
The commands used in this task assume the Bookinfo example application is deployed in the default
namespace. To specify a namespace other than the default namespace, use the `-n` option in the command.

View File

@ -72,7 +72,7 @@ my-nginx-jwwck 1/1 Running 0 1h
sleep-847544bbfc-d27jg 2/2 Running 0 18h
{{< /text >}}
Ssh into the istio-proxy container of sleep pod.
Ssh into the `istio-proxy` container of sleep pod.
{{< text bash >}}
$ kubectl exec -it $(kubectl get pod -l app=sleep -o jsonpath={.items..metadata.name}) -c istio-proxy /bin/bash
@ -131,7 +131,7 @@ $ kubectl exec $(kubectl get pod -l app=sleep -o jsonpath={.items..metadata.name
...
{{< /text >}}
If you run from istio-proxy container, it should work as well
If you run from the `istio-proxy` container, it should work as well:
{{< text bash >}}
$ kubectl exec $(kubectl get pod -l app=sleep -o jsonpath={.items..metadata.name}) -c istio-proxy -- curl https://my-nginx -k
@ -144,7 +144,7 @@ $ kubectl exec $(kubectl get pod -l app=sleep -o jsonpath={.items..metadata.name
### Create an HTTPS service with Istio sidecar with mutual TLS enabled
You need to deploy Istio control plane with mutual TLS enabled. If you have istio
You need to deploy Istio control plane with mutual TLS enabled. If you have the Istio
control plane with mutual TLS disabled installed, please delete it. For example, if
you followed the quick start:
@ -152,7 +152,7 @@ you followed the quick start:
$ kubectl delete -f install/kubernetes/istio-demo.yaml
{{< /text >}}
And wait for everything is down, i.e., there is no pod in control plane namespace (istio-system).
And wait for everything to have been deleted, i.e., there is no pod in the control plane namespace (`istio-system`):
{{< text bash >}}
$ kubectl get pod -n istio-system
@ -212,11 +212,11 @@ $ kubectl exec $(kubectl get pod -l app=sleep -o jsonpath={.items..metadata.name
...
{{< /text >}}
The reason is that for the workflow "sleep -> sleep-proxy -> nginx-proxy -> nginx",
the whole flow is L7 traffic, and there is a L4 mutual TLS encryption between sleep-proxy
and nginx-proxy. In this case, everything works fine.
The reason is that for the workflow "sleep -> `sleep-proxy` -> `nginx-proxy` -> nginx",
the whole flow is L7 traffic, and there is a L4 mutual TLS encryption between `sleep-proxy`
and `nginx-proxy`. In this case, everything works fine.
However, if you run this command from istio-proxy container, it will not work.
However, if you run this command from the `istio-proxy` container, it will not work:
{{< text bash >}}
$ kubectl exec $(kubectl get pod -l app=sleep -o jsonpath={.items..metadata.name}) -c istio-proxy -- curl https://my-nginx -k

View File

@ -60,7 +60,7 @@ The following steps enable plugging in the certificates and key into Citadel:
{{< /text >}}
1. To make sure the workloads obtain the new certificates promptly,
delete the secrets generated by Citadel (named as istio.\*).
delete the secrets generated by Citadel (named as `istio.\*`).
In this example, `istio.default`. Citadel will issue new certificates for the workloads.
{{< text bash >}}

View File

@ -315,7 +315,7 @@ spec:
latency: response.duration | "0ms"
monitored_resource_type: '"UNSPECIFIED"'
---
# Configuration for a fluentd handler
# Configuration for a Fluentd handler
apiVersion: "config.istio.io/v1alpha2"
kind: fluentd
metadata:
@ -324,7 +324,7 @@ metadata:
spec:
address: "fluentd-es.logging:24224"
---
# Rule to send logentry instances to the fluentd handler
# Rule to send logentry instances to the Fluentd handler
apiVersion: "config.istio.io/v1alpha2"
kind: rule
metadata:

View File

@ -166,7 +166,7 @@ as the example application throughout this task.
1. Verify that the logs stream has been created and is being populated for
requests.
In a Kubernetes environment, search through the logs for the istio-telemetry pods as
In a Kubernetes environment, search through the logs for the `istio-telemetry` pods as
follows:
{{< text bash json >}}

View File

@ -177,13 +177,13 @@ If you want to completely bypass Istio for a specific IP range,
you can configure the Envoy sidecars to prevent them from
[intercepting](/docs/concepts/traffic-management/#communication-between-services)
the external requests. This can be done by setting the `global.proxy.includeIPRanges` variable of
[Helm](/docs/reference/config/installation-options/) and updating the `ConfigMap` _istio-sidecar-injector_ by using `kubectl apply`. After _istio-sidecar-injector_ is updated, the value of `global.proxy.includeIPRanges` will affect all the future deployments of the application pods.
[Helm](/docs/reference/config/installation-options/) and updating the `istio-sidecar-injector` configmap by using `kubectl apply`. After `istio-sidecar-injector` is updated, the value of `global.proxy.includeIPRanges` will affect all future deployments of the application pods.
The simplest way to use the `global.proxy.includeIPRanges` variable is to pass it the IP range(s)
used for internal cluster services, thereby excluding external IPs from being redirected
to the sidecar proxy.
The values used for the internal IP range(s), however, depend on where your cluster is running.
For example, with Minikube the range is 10.0.0.1&#47;24, so you would update your `ConfigMap` _istio-sidecar-injector_ like this:
For example, with Minikube the range is 10.0.0.1&#47;24, so you would update your `istio-sidecar-injector` configmap like this:
{{< text bash >}}
$ helm template install/kubernetes/helm/istio <the flags you used to install Istio> --set global.proxy.includeIPRanges="10.0.0.1/24" -x templates/sidecar-injector-configmap.yaml | kubectl apply -f -
@ -255,7 +255,7 @@ $ kubectl describe pod kube-apiserver -n kube-system | grep 'service-cluster-ip-
### Access the external services
After updating the `ConfigMap` _istio-sidecar-injector_ and redeploying the `sleep` application,
After updating the `istio-sidecar-injector` configmap and redeploying the `sleep` application,
the Istio sidecar will only intercept and manage internal requests
within the cluster. Any external request bypasses the sidecar and goes straight to its intended destination. For example:
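A minimal spot check, assuming the `sleep` sample pod has been redeployed and that `httpbin.org` stands in for any external destination, is to curl it from the application container and confirm the request succeeds without passing through the sidecar:

{{< text bash >}}
$ kubectl exec -it $(kubectl get pod -l app=sleep -o jsonpath={.items..metadata.name}) -c sleep -- curl http://httpbin.org/headers
{{< /text >}}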
@ -295,7 +295,7 @@ cluster provider specific knowledge and configuration.
$ kubectl delete -f @samples/sleep/sleep.yaml@
{{< /text >}}
1. Update the `ConfigMap` _istio-sidecar-injector_ to redirect all outbound traffic to the sidecar proxies:
1. Update the `istio-sidecar-injector` configmap to redirect all outbound traffic to the sidecar proxies:
{{< text bash >}}
$ helm template install/kubernetes/helm/istio <the flags you used to install Istio> -x templates/sidecar-injector-configmap.yaml | kubectl apply -f -

View File

@ -493,7 +493,7 @@ they have valid values, according to the output of the following commands:
$ kubectl logs -n istio-system -l istio=ingressgateway
{{< /text >}}
1. If the secret was created but the keys were not mounted, kill the ingress gateway pod and force it to reload certs:
1. If the secret was created but the keys were not mounted, delete the ingress gateway pod and force it to reload certs:
{{< text bash >}}
$ kubectl delete pod -n istio-system -l istio=ingressgateway
@ -521,7 +521,7 @@ In addition to the steps in the previous section, perform the following:
Subject: C=US, ST=Denial, L=Springfield, O=Dis, CN=httpbin.example.com
{{< /text >}}
1. If the secret was created but the keys were not mounted, kill the ingress gateway pod and force it to reload certs:
1. If the secret was created but the keys were not mounted, delete the ingress gateway pod and force it to reload certs:
{{< text bash >}}
$ kubectl delete pod -n istio-system -l istio=ingressgateway

View File

@ -3,4 +3,4 @@ title: Consul - My application isn't working, where can I troubleshoot this?
weight: 40
---
Please ensure all required containers are running: etcd, istio-apiserver, consul, registrator, pilot. If one of them is not running, you may find the {containerID} using `docker ps -a` and then use `docker logs {containerID}` to read the logs.
Please ensure all required containers are running: `etcd`, `istio-apiserver`, `consul`, `registrator`, `pilot`. If one of them is not running, find its container ID using `docker ps -a` and then use `docker logs {containerID}` to read its logs.
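As a quick sketch of that flow (the container ID below is a placeholder copied from the `docker ps -a` output):

{{< text bash >}}
$ docker ps -a
$ docker logs <containerID>
{{< /text >}}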

View File

@ -6,6 +6,6 @@ weight: 20
Ensure that your cluster has met the
[prerequisites](/docs/setup/kubernetes/sidecar-injection/#automatic-sidecar-injection) for
the automatic sidecar injection. If your microservice is deployed in
kube-system, kube-public or istio-system namespaces, they are exempted
the `kube-system`, `kube-public` or `istio-system` namespaces, it is exempted
from automatic sidecar injection. Please use a different namespace
instead.
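If you do want injection for that workload, a common approach (assuming automatic injection is installed and you control the namespace) is to label a non-system namespace for injection and redeploy the workload there:

{{< text bash >}}
$ kubectl label namespace <your-namespace> istio-injection=enabled
$ kubectl -n <your-namespace> apply -f <your-deployment>.yaml
{{< /text >}}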

View File

@ -58,9 +58,9 @@ Thu Jun 15 02:25:42 UTC 2017
To fix the problem, you'll need to shut down and then restart Docker before reinstalling Istio.
## Automatic sidecar injection will fail if the kube-apiserver has proxy settings
## Automatic sidecar injection fails if the Kubernetes API server has proxy settings
When the Kube-apiserver included proxy settings such as:
When the Kubernetes API server includes proxy settings such as:
{{< text yaml >}}
env:
@ -72,20 +72,20 @@ env:
value: 127.0.0.1,localhost,dockerhub.foo.com,devhub-docker.foo.com,10.84.100.125,10.84.100.126,10.84.100.127
{{< /text >}}
The sidecar injection would fail. The only related failure logs was in the kube-apiserver log:
With these settings, sidecar injection fails. The only related failure log can be found in the `kube-apiserver` log:
{{< text plain >}}
W0227 21:51:03.156818 1 admission.go:257] Failed calling webhook, failing open sidecar-injector.istio.io: failed calling admission webhook "sidecar-injector.istio.io": Post https://istio-sidecar-injector.istio-system.svc:443/inject: Service Unavailable
{{< /text >}}
Make sure both pod and service CIDRs are not proxied according to *_proxy variables. Check the kube-apiserver files and logs to verify the configuration and whether any requests are being proxied.
Make sure both the pod and service CIDRs are not being proxied according to the `*_proxy` variables. Check the `kube-apiserver` files and logs to verify the configuration and whether any requests are being proxied.
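One way to check, assuming a kubeadm-style cluster where the API server runs as a static pod labeled `component=kube-apiserver` in `kube-system`, is to dump its environment and confirm the pod and service CIDRs appear in `no_proxy`:

{{< text bash >}}
$ kubectl -n kube-system get pod -l component=kube-apiserver -o yaml | grep -iE 'no_proxy|http_proxy|https_proxy'
{{< /text >}}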
A workaround is to remove the proxy settings from the kube-apiserver manifest and restart the server or use a later version of Kubernetes.
A workaround is to remove the proxy settings from the `kube-apiserver` manifest and restart the server or use a later version of Kubernetes.
An issue was filed with Kubernetes related to this and has since been closed. [https://github.com/kubernetes/kubeadm/issues/666](https://github.com/kubernetes/kubeadm/issues/666)
An [issue](https://github.com/kubernetes/kubeadm/issues/666) was filed with Kubernetes related to this and has since been closed.
[https://github.com/kubernetes/kubernetes/pull/58698#discussion_r163879443](https://github.com/kubernetes/kubernetes/pull/58698#discussion_r163879443)
## What Envoy version is istio using?
## What Envoy version is Istio using?
To find out the Envoy version in use, you can follow these steps:
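For example, one way to check (a sketch, assuming at least one pod with an injected sidecar) is to query the Envoy admin interface, which listens on port 15000 inside the `istio-proxy` container:

{{< text bash >}}
$ kubectl exec -it <your-pod> -c istio-proxy -- curl -s localhost:15000/server_info
{{< /text >}}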

View File

@ -11,7 +11,7 @@ Kubernetes `ValidatingWebhook`. The `istio-galley`
* `pilot.validation.istio.io` - Served on path `/admitpilot` and is
responsible for validating configuration consumed by Pilot
(e.g. VirtualService, Authentication).
(e.g. `VirtualService`, Authentication).
* `mixer.validation.istio.io` - Served on path `/admitmixer` and is
responsible for validating configuration consumed by Mixer.
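To see both webhooks and the paths they are registered on, you can inspect the webhook configuration directly (the resource name `istio-galley` matches the default install described above):

{{< text bash >}}
$ kubectl get validatingwebhookconfiguration istio-galley -o yaml
{{< /text >}}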

View File

@ -25,7 +25,7 @@ If the Istio Dashboard or the Prometheus queries dont show the expected metri
Mixer generates metrics to monitor its own behavior. The first step is to check these metrics:
1. Establish a connection to the Mixer self-monitoring endpoint for the istio-telemetry deployment. In Kubernetes environments, execute the following command:
1. Establish a connection to the Mixer self-monitoring endpoint for the `istio-telemetry` deployment. In Kubernetes environments, execute the following command:
{{< text bash >}}
$ kubectl -n istio-system port-forward <istio-telemetry pod> 9093 &
@ -126,7 +126,7 @@ Confirm that the metric value with the largest configuration ID is 0. This will
## Verify Mixer is sending metric instances to the Prometheus adapter
1. Establish a connection to the istio-telemetry self-monitoring endpoint. Setup a port-forward to the istio-telemetry self-monitoring port as described in
1. Establish a connection to the `istio-telemetry` self-monitoring endpoint. Set up a port-forward to the `istio-telemetry` self-monitoring port as described in
[Verify Mixer is receiving Report calls](#verify-mixer-is-receiving-report-calls).
1. On the Mixer self-monitoring port, search for `mixer_runtime_dispatch_count`. The output should be similar to:
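As a sketch of that search, assuming the self-monitoring port was forwarded to `localhost:9093` in the earlier step and that it serves Prometheus-format metrics on `/metrics`:

{{< text bash >}}
$ curl -s localhost:9093/metrics | grep mixer_runtime_dispatch_count
{{< /text >}}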

View File

@ -8,7 +8,7 @@ icon: notes
- **更新了配置模型**。Istio 在Kubernetes中运行时使用 Kubernetes [自定义资源](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/)模型来描述和存储其配置,现在可以使用 `kubectl` 命令选择性地管理配置。
- **多命名空间支持**。Istio 控制平面组件现在位于专用的 "istio-system" 命名空间中, Istio 可以管理其他非系统命名空间中的服务。
- **多命名空间支持**。Istio 控制平面组件现在位于专用的 `istio-system` 命名空间中, Istio 可以管理其他非系统命名空间中的服务。
- **网格扩展**。初始支持将非 Kubernetes 服务(以 VM 和/或物理机的形式)添加到网格中,这是此功能的早期版本,并且存在一些限制(例如需要跨容器和 VM 的扁平网络)。
@ -68,7 +68,7 @@ icon: notes
- **用户在访问应用程序时可能会获得定期404**: 我们注意到 Envoy 偶尔不会正确获取路由,因此 404 会返回给用户,我们正积极致力于[问题](https://github.com/istio/istio/issues/1038)。
- **在Pilot实际准备就绪之前Istio Ingress 或 Egress 报告已准备就绪**:您可以在 `istio-system` 命名空间中检查 istio-ingress 和 istio-egress pods 状态,并在所有 Istio pod 达到就绪状态后等待几秒钟,我们正积极致力于[问题](https://github.com/istio/istio/pull/1055)。
- **在Pilot实际准备就绪之前Istio Ingress 或 Egress 报告已准备就绪**:您可以在 `istio-system` 命名空间中检查 `istio-ingress``istio-egress` pods 状态,并在所有 Istio pod 达到就绪状态后等待几秒钟,我们正积极致力于[问题](https://github.com/istio/istio/pull/1055)。
- **启用了 `Istio Auth` 的服务无法与没有 Istio 的服务通信**:此限制将在不久的将来删除。

View File

@ -22,7 +22,7 @@ icon: notes
- **Istio RBAC**Mixer 有了一套基于角色的访问控制适配器。[参考资料](/zh/docs/concepts/security/#授权和鉴权)
- **`Fluentd`**Mixer 新增了使用 [fluentd](https://www.fluentd.org) 进行日志收集的功能。
- **`Fluentd`**Mixer 新增了使用 [Fluentd](https://www.fluentd.org) 进行日志收集的功能。
- **`Stdio`**:该适配器可以将日志存储到主机的本地文件,并且支持日志的翻转和备份功能。

View File

@ -44,4 +44,4 @@ icon: notes
- [Google Kubernetes Engine 1.10.2](https://github.com/istio/istio/issues/5723) 中,使用 Kubernetes 1.9 或者把节点切换为 Ubuntu 就会复现这一问题。该问题在 GKE 1.10.4 中有望得到更正。
- `istioctl experimental convert-networking-config` 会引发一个命名相关的问题——目标命名空间可能被替换为 istio-system因此在运行这一工具之后需要手工调整命名空间。[参考资料](https://github.com/istio/istio/issues/5817)
- `istioctl experimental convert-networking-config` 会引发一个命名相关的问题——目标命名空间可能被替换为 `istio-system`,因此在运行这一工具之后,需要手工调整命名空间。[参考资料](https://github.com/istio/istio/issues/5817)

View File

@ -10,7 +10,7 @@ icon: notes
- 改善了 Pilot 的可伸缩性和 Envoy 的启动时间。
- 修复了增加一个端口时 VirtualService Host 不匹配的问题。
- 修复了增加一个端口时 virtual service host 不匹配的问题。
- 添加了同一个主机内对 [合并多个 `VirtualService` 或 `DestinationRule` 定义](/zh/help/ops/traffic-management/deploy-guidelines/#在网关中配置多个-tls-主机) 的有限支持。

View File

@ -10,7 +10,7 @@ icon: notes
## 修复
- 禁用 istio-policy 服务中的前置条件缓存,因为它会导致无效的结果。缓存将在以后的版本中被重新引入。
- 禁用 `istio-policy` 服务中的前置条件缓存,因为它会导致无效的结果。缓存将在以后的版本中被重新引入。
- Mixer 现在只在启用 `tracespan` 适配器时才生成 span从而在正常情况下降低 CPU 开销。

View File

@ -17,7 +17,7 @@ TBD
- 弃用的 `RbacConfig``ClusterRbacConfig` 代替,以正确实现针对集群范围。
参考我们的指南 [迁移 `RbacConfig` 到 `ClusterRbacConfig`](/zh/docs/setup/kubernetes/upgrading-istio/#迁移-rbacconfig-到-clusterrbacconfig) 中的迁移说明。
## Istioctl
## `istioctl`
- 弃用 `istioctl create``istioctl replace` `istioctl get``istioctl delete`。使用 `kubectl` 代替(参考<https://kubernetes.io/docs/tasks/tools/install-kubectl>。下个版本1.2)将删除这些弃用的命令。
- 弃用 `istioctl gen-deploy`。使用 [`helm template`](/zh/docs/setup/kubernetes/helm-install/#选项1-通过-helm-的-helm-template-安装-istio) 代替。下个版本1.2)将删除这些弃用的命令。

View File

@ -19,7 +19,7 @@ keywords: [traffic-management,canary]
尽管这种机制能够很好工作,但这种方式只适用于部署的经过适当测试的版本,也就是说,更多的是蓝/绿发布,又称红/黑发布,而不是 “蜻蜓点水“ 式的金丝雀部署。实际上对于后者例如并没有完全准备好或者无意对外暴露的版本Kubernetes 中的金丝雀部署将使用具有[公共 pod 标签](https://kubernetes.io/docs/concepts/cluster-administration/manage-deployment/#using-labels-effectively)的两个 Deployment 来完成。在这种情况下,我们不能再使用自动缩放器,因为是有由两个独立的自动缩放器来进行控制,不同负载情况下,副本比例(百分比)可能与所需的比例不同。
无论我们使用一个或者两个部署,使用 DockerMesos/Marathon 或 Kubernetes 等容器编排平台进行的金丝雀发布管理都存在一个根本问题:使用实例扩容来管理流量;版本流量分发和副本部署在上述平台中并独立。所有 pod 副本,无论版本如何,在 kube-proxy 循环池中都被一视同仁地对待因此管理特定版本接收的流量的唯一方法是控制副本比例。以小百分比维持金丝雀流量需要许多副本例如1 将需要至少 100 个副本)。即使我们可以忽略这个问题,部署方式功能仍然非常有限,因为它只支持简单(随机百分比)金丝雀部署。如果我们想根据某些特定规则将请求路由到金丝雀版本上,我们仍然需要另一种解决方案。
无论我们使用一个或者两个部署,使用 DockerMesos/Marathon 或 Kubernetes 等容器编排平台进行的金丝雀发布管理都存在一个根本问题:使用实例扩容来管理流量;版本流量分发和副本部署在上述平台中并独立。所有 pod 副本,无论版本如何,在 `kube-proxy` 循环池中都被一视同仁地对待因此管理特定版本接收的流量的唯一方法是控制副本比例。以小百分比维持金丝雀流量需要许多副本例如1 将需要至少 100 个副本)。即使我们可以忽略这个问题,部署方式功能仍然非常有限,因为它只支持简单(随机百分比)金丝雀部署。如果我们想根据某些特定规则将请求路由到金丝雀版本上,我们仍然需要另一种解决方案。
## 使用 Istio

View File

@ -74,7 +74,7 @@ spec:
istio: ingress
{{< /text >}}
istio-ingress 暴露端口 80 和 443 . 我们需要将流入流量限制在这两个端口上。 Envoy 有[`内置管理接口`](https://www.envoyproxy.io/docs/envoy/latest/operations/admin.html#operations-admin-interface),我们不希望错误配置 istio-ingress 镜像而导致意外地将我们的管理接口暴露给外界。这里深度防御的示例:正确配置的镜像应该暴露接口,正确配置的网络策略将阻止任何人连接到它,要么失败,要么配置错误,受到保护。
`istio-ingress` 暴露端口 80 和 443 . 我们需要将流入流量限制在这两个端口上。 Envoy 有[`内置管理接口`](https://www.envoyproxy.io/docs/envoy/latest/operations/admin.html#operations-admin-interface),我们不希望错误配置 `istio-ingress` 镜像而导致意外地将我们的管理接口暴露给外界。这里深度防御的示例:正确配置的镜像应该暴露接口,正确配置的网络策略将阻止任何人连接到它,要么失败,要么配置错误,受到保护。
{{< text yaml >}}
apiVersion: networking.k8s.io/v1

View File

@ -100,4 +100,4 @@ Mixer 还很年轻。在 Istio 0.3 中Mixer 并没有性能方面的重要改
我们希望本文能够让读者能够意识到 Mixer 对 Istio 的益处。
如果有说明或者问题,无需犹豫,尽管去 [istio-policies-and-telemetry](https://groups.google.com/forum/#!forum/istio-policies-and-telemetry) 提出吧。
如果有说明或者问题,无需犹豫,

View File

@ -83,7 +83,7 @@ keywords: [ingress,traffic-management,aws]
## 重写 Istio Ingress 服务
你需要使用以下内容来重写 istio ingress 服务:
你需要使用以下内容来重写 `istio-ingress` 服务:
{{< text yaml >}}
apiVersion: v1

View File

@ -88,7 +88,7 @@ TCP/IP 几乎是所有通信过程的媒介。如果恰好应用端点处于同
### 零损耗的负载均衡器、防火墙和网络分析器
负载均衡器和防火墙这样的典型网络功能,通常的实现方式都是引入一个中间层,介入到数据/包之中。Kubernetes 的负载均衡器kube-proxy实现利用 iptables 完成对数据包流的探测Istio 也在代理层中实现了同样的功能。但是如果目标只是根据策略来对连接进行重定向或者丢弃那么在整个连接过程中都介入到数据通路上是不必要的。AppSwitch 能够更有效的处理这些任务,只要简单的在 API 层面处理控制路径即可。AppSwitch 和应用紧密结合,因此还能够获取更多的应用信息,例如堆栈动态和堆利用情况、服务就绪时间以及活动连接属性等,这些信息可以为监控和分析提供更大发的操作空间。
负载均衡器和防火墙这样的典型网络功能,通常的实现方式都是引入一个中间层,介入到数据/包之中。Kubernetes 的负载均衡器(`kube-proxy`)实现利用 iptables 完成对数据包流的探测Istio 也在代理层中实现了同样的功能。但是如果目标只是根据策略来对连接进行重定向或者丢弃那么在整个连接过程中都介入到数据通路上是不必要的。AppSwitch 能够更有效的处理这些任务,只要简单的在 API 层面处理控制路径即可。AppSwitch 和应用紧密结合,因此还能够获取更多的应用信息,例如堆栈动态和堆利用情况、服务就绪时间以及活动连接属性等,这些信息可以为监控和分析提供更大发的操作空间。
更进一步AppSwitch 还能够利用从 Socket 缓冲区获取的协议数据,完成七层负载均衡和防火墙功能。它能够利用从 Pilot 获取的策略信息合成协议数据和各种其它信号从而实现高效的路由和访问控制能力。实际上无需对应用程序自身或配置做出任何变动AppSwitch 也可以“诱导”应用连接到正确的后端服务器。看起来好像应用程序本身就具备了策略和流量管理能力。

View File

@ -267,7 +267,7 @@ keywords: [traffic-management,egress,tcp]
## 与网格扩展的关系
请注意,本文中描述的场景与[集成虚拟机](/zh/docs/examples/integrating-vms/)示例中描述的网格扩展场景不同。 在这种情况下MySQL 实例在与 Istio 服务网格集成的外部集群外机器裸机或VM上运行 MySQL 服务成为网格的一等公民,具有 Istio 的所有有益功能,除此之外,服务可以通过本地集群域名寻址,例如通过 `mysqldb.vm.svc.cluster.local`,并且可以通过[双向 TLS 身份验证](/zh/docs/concepts/security/#双向-tls-认证)保护与它的通信,无需创建服务入口来访问此服务; 但是,该服务必须在 Istio 注侧,要启用此类集成,必须在计算机上安装 Istio 组件( _Envoy proxy_ _node-agent_ _istio-agent_ ),并且必须可以从中访问 Istio 控制平面_Pilot_ _Mixer_ _Citadel_ )。有关详细信息,请参阅 [Istio Mesh Expansion](/zh/docs/setup/kubernetes/mesh-expansion/) 说明。
请注意,本文中描述的场景与[集成虚拟机](/zh/docs/examples/integrating-vms/)示例中描述的网格扩展场景不同。 在这种情况下MySQL 实例在与 Istio 服务网格集成的外部集群外机器裸机或VM上运行 MySQL 服务成为网格的一等公民,具有 Istio 的所有有益功能,除此之外,服务可以通过本地集群域名寻址,例如通过 `mysqldb.vm.svc.cluster.local`,并且可以通过[双向 TLS 身份验证](/zh/docs/concepts/security/#双向-tls-认证)保护与它的通信,无需创建服务入口来访问此服务; 但是,该服务必须在 Istio 注侧,要启用此类集成,必须在计算机上安装 Istio 组件( _Envoy proxy_ _node-agent_ `_istio-agent_` ),并且必须可以从中访问 Istio 控制平面_Pilot_ _Mixer_ _Citadel_ )。有关详细信息,请参阅 [Istio Mesh Expansion](/zh/docs/setup/kubernetes/mesh-expansion/) 说明。
在我们的示例中MySQL 实例可以在任何计算机上运行,也可以由云提供商作为服务进行配置,无需集成机器
与 Istio ,无需从机器访问 Istio 控制平面,在 MySQL 作为服务的情况下MySQL 运行的机器可能无法访问并在其上安装所需的组件可能是不可能的在我们的例子中MySQL 实例可以通过其全局域名进行寻址,如果消费应用程序希望使用该域名,这可能是有益的,当在消费应用程序的部署配置中无法更改预期的域名时,这尤其重要。

View File

@ -69,7 +69,7 @@ Istio 中的安全性涉及多个组件:
- **AWS**: AWS IAM 用户/角色 帐户
- **On-premises (非 Kubernetes**: 用户帐户、自定义服务帐户、服务名称、istio 服务帐户或 GCP 服务帐户。
- **On-premises (非 Kubernetes**: 用户帐户、自定义服务帐户、服务名称、Istio 服务帐户或 GCP 服务帐户。
自定义服务帐户引用现有服务帐户,就像客户的身份目录管理的身份一样。

View File

@ -59,7 +59,7 @@ Istio 引入了服务版本的概念,可以通过版本(`v1`、`v2`)或环
Istio 还为同一服务版本的多个实例提供流量负载均衡。可以在[服务发现和负载均衡](/zh/docs/concepts/traffic-management/#服务发现和负载均衡)中找到更多信息。
Istio 不提供 DNS。应用程序可以尝试使用底层平台kube-dns、mesos-dns 等)中存在的 DNS 服务来解析 FQDN。
Istio 不提供 DNS。应用程序可以尝试使用底层平台`kube-dns``mesos-dns` 等)中存在的 DNS 服务来解析 FQDN。
### Ingress 和 Egress

View File

@ -88,7 +88,7 @@ Istio 服务网格逻辑上分为**数据平面**和**控制平面**。
### Envoy
Istio 使用 [Envoy](https://www.envoyproxy.io/) 代理的扩展版本Envoy 是以 C++ 开发的高性能代理用于调解服务网格中所有服务的所有入站和出站流量。Envoy 的许多内置功能被 istio 发扬光大,例如:
Istio 使用 [Envoy](https://www.envoyproxy.io/) 代理的扩展版本Envoy 是以 C++ 开发的高性能代理用于调解服务网格中所有服务的所有入站和出站流量。Envoy 的许多内置功能被 Istio 发扬光大,例如:
* 动态服务发现
* 负载均衡

View File

@ -8,7 +8,7 @@ keywords: [流量管理,egress]
[配置出口网关](/zh/docs/examples/advanced-gateways/egress-gateway)示例描述了如何配置 Istio 以通过名为 _egress gateway_ 的专用服务引导出口流量。
此示例展示如何配置出口网关以启用到外部服务的流量的双向 TLS。
要模拟支持 mTLS 协议的实际外部服务,首先在 Kubernetes 集群中部署 [NGINX](https://www.nginx.com) 服务器,但在 Istio 服务网格之外运行,即在命名空间中运行没有启用 Istio 的代理注入 sidecar 。
要模拟支持 mutual TLS 协议的实际外部服务,首先在 Kubernetes 集群中部署 [NGINX](https://www.nginx.com) 服务器,但在 Istio 服务网格之外运行,即在命名空间中运行没有启用 Istio 的代理注入 sidecar 。
接下来,配置出口网关以与外部 NGINX 服务器执行双向 TLS。
最后,通过出口网关将流量从网格内的应用程序 pod 引导到网格外的 NGINX 服务器。

View File

@ -74,7 +74,7 @@ $ curl --request POST --header "content-type:application/json" --data '{"message
authPolicy: MUTUAL_TLS
{{< /text >}}
1. 在此之后,你会发现访问 `EXTERNAL_IP` 不再生效, 因为 istio 代理仅接受安全网格链接。通过 Ingress 访问有效是因为 Ingress 使 HTTP 终止。
1. 在此之后,你会发现访问 `EXTERNAL_IP` 不再生效, 因为 Istio 代理仅接受安全网格链接。通过 Ingress 访问有效是因为 Ingress 使 HTTP 终止。
1. 安全访问 Ingress查看相关[说明](/zh/docs/tasks/traffic-management/secure-ingress/)。

View File

@ -22,7 +22,7 @@ $ istio_ca [flags]
| `--custom-dns-names <string>` | `account.namespace: customdns` 名称列表,以逗号分隔。(默认 `''` |
| `--enable-profiling` | 启用监视 Citadel 的性能分析。|
| `--experimental-dual-use` | 启用两用模式。使用与 `SAN` 相同的 `CommonName` 生成证书。|
| `--grpc-host-identities <string>` | istio ca server 的主机名列表,以逗号分隔。(默认 `istio-ca``istio-citadel` |
| `--grpc-host-identities <string>` | Istio CA server 的主机名列表,以逗号分隔。(默认 `istio-ca``istio-citadel` |
| `--grpc-port <int>` | Citadel GRPC 服务器的端口号。如果未指定Citadel 将不会提供 GRPC 请求。(默认为 `8060` |
| `--identity-domain <string>` | 用于标识的域(`default: cluster.local`)(默认为 `cluster.local` |
| `--key-size <int>` | 生成私钥的大小(默认为 `2048` |
@ -55,7 +55,7 @@ $ istio_ca [flags]
| `--workload-cert-min-grace-period <duration>` | 最小工作负载证书轮换宽期限或者周期。( 默认 `10m0s` |
| `--workload-cert-ttl <duration>` | 已发布工作负载证书的 TTL 默认为 `2160h0m0s` |
## istio\_ca probe
## `istio_ca` probe
检查本地运行的服务器的活跃度或准备情况
@ -80,7 +80,7 @@ $ istio_ca probe [flags]
| `--log_target <stringArray>` | 输出日志的路径集。这可以是任何路径以及特殊值 `stdout``stderr` 默认 `[stdout]` |
| `--probe-path <string>` | 用于检查可用性的文件的路径。( 默认 `''` |
## istio\_ca version
## `istio_ca` version
打印出版本信息

View File

@ -161,7 +161,7 @@ istioctl deregister my-svc 172.17.0.2
## `istioctl experimental convert-ingress`
将 Ingress 转化为 VirtualService 配置。其输出内容可以作为 Istio 配置的起点,可能需要进行一些小修改。如果指定配置无法完美的完成转化,就会出现警告信息。输入内容必须是 Kubernetes Ingress。对 v1alpha1 的 Istio 规则的转换支持现在已经移除。
将 Ingress 转化为 `VirtualService` 配置。其输出内容可以作为 Istio 配置的起点,可能需要进行一些小修改。如果指定配置无法完美的完成转化,就会出现警告信息。输入内容必须是 Kubernetes Ingress。对 v1alpha1 的 Istio 规则的转换支持现在已经移除。
基本用法:
@ -268,7 +268,7 @@ $ istioctl gen-deploy [选项]
|`--helm-chart-dir <string>`|在这一目录中查找 Helm chart 用来渲染生成 Istio 部署。`-o yaml` 会用这个参数在本地进行 Helm chart 的渲染。(缺省值 `.`|
|`--hyperkube-hub <string>`|用于拉取 Hyperkube 镜像的容器仓库(缺省值 `quay.io/coreos/hyperkube`|
|`--hyperkube-tag <Hyperkube>`|Hyperkube 镜像的 Tag缺省值 `v1.7.6_coreos.0`|
|`--ingress-node-port <uint16>`|如果指定了这一选项Istio ingress 会以 NodePort 的形式运行,并映射到这一选项指定的端口。注意,如果 `ingress` 选项没有打开,这一选项会被忽略(缺省值 `0`|
|`--ingress-node-port <uint16>`|如果指定了这一选项Istio ingress 会以 `NodePort` 的形式运行,并映射到这一选项指定的端口。注意,如果 `ingress` 选项没有打开,这一选项会被忽略(缺省值 `0`|
|`--values <string>`|`values.yaml` 文件的路径,在使用 `--out=yaml` 时,会用来在本地渲染 YAML。如果直接使用这一文件会忽略上面的选项值缺省值 `''`|
典型用例:

View File

@ -4,7 +4,7 @@ description: 用于将日志发送给 Fluentd 守护进程的适配器。
weight: 70
---
Fluentd 适配器的设计目的是将 Istio 的日志发送给 [fluentd](https://www.fluentd.org/) 守护进程。
Fluentd 适配器的设计目的是将 Istio 的日志发送给 [Fluentd](https://www.fluentd.org/) 守护进程。
该适配器支持 [logentry template](/zh/docs/reference/config/policy-and-telemetry/templates/logentry/)。

View File

@ -69,7 +69,7 @@ services:
### 其他 Istio 组件
Istio Pilot 、Mixer 和 Citadel 的 Debian 包可以通过 Istio 的发行版获得。同时,这些组件可以运行在 Docker 容器( docker.io/istio/pilot, docker.io/istio/mixer, docker.io/istio/citadel ) 中。请注意,这些组件都是无状态的并且可以水平伸缩。每个组件都依赖 Istio API server而 Istio API server 依赖 etcd 集群做持久存储。为了实现高可用,每个控制平面服务可以作为 [job](https://www.nomadproject.io/docs/job-specification/index.html) 在 Nomad 中运行,其中 [service stanza](https://www.nomadproject.io/docs/job-specification/service.html) 可以用来描述控制平面服务的期望属性。
Istio Pilot 、Mixer 和 Citadel 的 Debian 包可以通过 Istio 的发行版获得。同时,这些组件可以运行在 Docker 容器( `docker.io/istio/pilot`, `docker.io/istio/mixer`, `docker.io/istio/citadel` ) 中。请注意,这些组件都是无状态的并且可以水平伸缩。每个组件都依赖 Istio API server而 Istio API server 依赖 etcd 集群做持久存储。为了实现高可用,每个控制平面服务可以作为 [job](https://www.nomadproject.io/docs/job-specification/index.html) 在 Nomad 中运行,其中 [service stanza](https://www.nomadproject.io/docs/job-specification/service.html) 可以用来描述控制平面服务的期望属性。
其中的一些组件可能需要在 Istio API 服务器中进行额外的安装步骤才能正常工作。
## 将 sidecars 添加到服务实例中

View File

@ -15,7 +15,7 @@ Istio 会被安装到自己的 `istio-system` 命名空间,并且能够对所
$ curl -L https://git.io/getLatestIstio | sh -
{{< /text >}}
1. 进入 Istio 包目录。例如,假设这个包是 istio-{{< istio_full_version >}}.0
1. 进入 Istio 包目录。例如,假设这个包是 `istio-{{< istio_full_version >}}.0`
{{< text bash >}}
$ cd istio-{{< istio_full_version >}}

View File

@ -32,13 +32,13 @@ icon: helm
以下命令在 Istio 目录执行使用相对引用。您必须在 Istio 的根目录中执行下面的命令。
1. 如果使用 Helm 2.10.0 之前的版本,通过 `kubectl apply` [自定义资源定义](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/#customresourcedefinitions),然后等待几秒钟,直到 kube-apiserver 中的 CRDs 提交完成:
1. 如果使用 Helm 2.10.0 之前的版本,通过 `kubectl apply` [自定义资源定义](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/#customresourcedefinitions),然后等待几秒钟,直到 `kube-apiserver` 中的 CRDs 提交完成:
{{< text bash >}}
$ kubectl apply -f install/kubernetes/helm/istio/templates/crds.yaml
{{< /text >}}
> 如果您正在启用 `certmanager`,那么您还需要安装它的 CRDs并等待几秒钟以便在 kube-apiserver 中提交 CRDs :
> 如果您正在启用 `certmanager`,那么您还需要安装它的 CRDs并等待几秒钟以便在 `kube-apiserver` 中提交 CRDs :
{{< text bash >}}
$ kubectl apply -f install/kubernetes/helm/subcharts/certmanager/templates/crds.yaml

View File

@ -11,7 +11,7 @@ keywords: [kubernetes,vms]
* 根据[安装指南](/zh/docs/setup/kubernetes/quick-start/)的步骤在 Kubernetes 上部署 Istio。
* 待接入服务器必须能够通过 IP 接入网格中的服务端点。通常这需要 VPN 或者 VPC 的支持,或者容器网络为服务端点提供直接路由(非 NAT 或者防火墙屏蔽)。该服务器无需访问 Kubernetes 指派的集群 IP 地址。
* Istio 控制平面服务Pilot、Mixer、Citadel以及 Kubernetes 的 DNS 服务器必须能够从虚拟机进行访问,通常会使用[内部负载均衡器](https://kubernetes.io/docs/concepts/services-networking/service/#internal-load-balancer)(也可以使用 NodePort来满足这一要求在虚拟机上运行 Istio 组件,或者使用自定义网络配置,相关的高级配置另有文档描述。
* Istio 控制平面服务Pilot、Mixer、Citadel以及 Kubernetes 的 DNS 服务器必须能够从虚拟机进行访问,通常会使用[内部负载均衡器](https://kubernetes.io/docs/concepts/services-networking/service/#internal-load-balancer)(也可以使用 `NodePort`)来满足这一要求,在虚拟机上运行 Istio 组件,或者使用自定义网络配置,相关的高级配置另有文档描述。
## 安装步骤
@ -117,7 +117,7 @@ $ @install/tools/setupMeshEx.sh@ machineSetup VM_NAME
istio-pilot.istio-system.svc.cluster.local has address 10.63.247.248
{{< /text >}}
用类似的方法检查 istio-ingress
用类似的方法检查 `istio-ingress`
{{< text bash >}}
$ host istio-ingress.istio-system.svc.cluster.local.
@ -158,7 +158,7 @@ $ @install/tools/setupMeshEx.sh@ machineSetup VM_NAME
$ @install/tools/setupMeshEx.sh@ machineCerts ACCOUNT NAMESPACE
{{< /text >}}
生成的几个文件 (`key.pem`, `root-cert.pem`, `cert-chain.pem`) 必须复制到每台服务器的 `/etc/certs`,让 istio proxy 访问。
生成的几个文件 (`key.pem`, `root-cert.pem`, `cert-chain.pem`) 必须复制到每台服务器的 `/etc/certs`,让 `istio-proxy` 访问。
* 安装 Istio Debian 文件,启动 `istio` 以及 `istio-auth-node-agent` 服务。从 [GitHub 发布页面](https://github.com/istio/istio/releases) 可以得到 Debian 文件,或者:

View File

@ -14,7 +14,7 @@ icon: helm
## 安装步骤
1. 如果你的 Helm 版本低于 2.10.0,通过 `kubectl apply` 安装 Istio 的 [Custom Resource Definitions](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/#customresourcedefinitions),稍等片刻 CRD 会被提交到 kube-apiserver
1. 如果你的 Helm 版本低于 2.10.0,通过 `kubectl apply` 安装 Istio 的 [Custom Resource Definitions](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/#customresourcedefinitions),稍等片刻 CRD 会被提交到 `kube-apiserver`
{{< text bash >}}
$ kubectl apply -f install/kubernetes/helm/istio/templates/crds.yaml

View File

@ -20,13 +20,13 @@ keywords: [kubernetes,multicluster,federation,gateway]
* 在**每个** Kubernetes 集群上授权[使用 Helm 部署 Istio 控制平面](/zh/docs/setup/kubernetes/helm-install/)。
* 一个 **Root CA**。跨集群通信需要在 service 之间使用 mTLS 连接。为了启用跨集群 mTLS 通信,每个集群的 Citadel
* 一个 **Root CA**。跨集群通信需要在 service 之间使用 mutual TLS 连接。为了启用跨集群 mutual TLS 通信,每个集群的 Citadel
都将被配置使用共享 root CA 生成的中间 CA 凭证。出于演示目的,我们将使用 `samples/certs` 目录下的简单 root CA
证书,该证书作为 Istio 安装的一部分提供。
## 在每个集群中部署 Istio 控制平面
1. 从您的组织 root CA 生成每个集群 Citadel 使用的中间证书。使用共享的 root CA 启用跨越不同集群的 mTLS 通信。
1. 从您的组织 root CA 生成每个集群 Citadel 使用的中间证书。使用共享的 root CA 启用跨越不同集群的 mutual TLS 通信。
出于演示目的,我们将使用相同的简单 root 证书作为中间证书。
1. 在每个集群中,使用类似如下的命令,为您生成的 CA 证书创建一个 Kubernetes secret
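As a sketch of that command, using the sample certificates shipped under `samples/certs` and the `cacerts` secret name that a plugged-in CA configuration typically expects:

{{< text bash >}}
$ kubectl create secret generic cacerts -n istio-system \
    --from-file=samples/certs/ca-cert.pem \
    --from-file=samples/certs/ca-key.pem \
    --from-file=samples/certs/root-cert.pem \
    --from-file=samples/certs/cert-chain.pem
{{< /text >}}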
@ -154,7 +154,7 @@ spec:
为了验证设置,请尝试从 `cluster1` 上的任意 pod 访问 `bar.ns2.global``bar.ns2`
两个 DNS 名称都应该被解析到 127.255.0.2 这个在 service entry 配置中使用的地址。
以上配置将使得 `cluster1` 中所有到 `bar.ns2.global` 和*任意端口*的流量通过 mTLS 连接路由到
以上配置将使得 `cluster1` 中所有到 `bar.ns2.global` 和*任意端口*的流量通过 mutual TLS 连接路由到
endpoint `<IPofCluster2IngressGateway>:15443`
端口 15443 的 gateway 是一个特殊的 SNI 感知 Envoy它已经预先进行了配置并作为前提条件中描述的 Istio

View File

@ -1,7 +0,0 @@
---
title: 多集群安装
description: 跨多个 kubernetes 集群配置 Istio 网格。
weight: 60
type: 章节索引
keywords: [kubernetes,多集群,联邦]
---

View File

@ -107,7 +107,7 @@ $ helm template install/kubernetes/helm/istio-remote --namespace istio-system \
{{< /text >}}
{{< info_icon >}} 所有集群必须有相同的 Istio 组件命名空间。
只要命名空间对有所有集群中的 Istio 组件都相同,就可以覆盖住集群上的“istio-system”名称。
只要该命名空间对所有集群中的 Istio 组件都相同,就可以覆盖集群上的 `istio-system` 名称。
1. 通过以下命令实例化远程集群与 Istio 控制平面的连接:
@ -312,7 +312,7 @@ $ helm delete --purge istio-remote
$ helm template install/kubernetes/helm/istio-remote --namespace istio-system --name istio-remote --set global.remotePilotAddress=${PILOT_POD_IP} --set global.remotePolicyAddress=${POLICY_POD_IP} --set global.remoteTelemetryAddress=${TELEMETRY_POD_IP} --set global.proxy.envoyStatsd.enabled=true --set global.proxy.envoyStatsd.host=${STATSD_POD_IP} --set global.remoteZipkinAddress=${ZIPKIN_POD_IP} --set sidecarInjectorWebhook.enabled=false > $HOME/istio-remote_noautoinj.yaml
{{< /text >}}
1. 为远程 Istio 创建 istio-system 命名空间:
1. 为远程 Istio 创建 `istio-system` 命名空间:
{{< text bash >}}
$ kubectl create ns istio-system

View File

@ -6,7 +6,7 @@ skip_seealso: true
keywords: [platform-setup,kubernetes,docker-for-desktop]
---
如果你想在桌面版 docker 内置的 Kubernetes 下运行 istio你可能需要在 docker 首选项的 *Advanced* 面板下增加 docker 的内存限制。Pilot 默认请求内存为 `2048Mi`,这是 docker 的默认限制。
如果你想在桌面版 Docker 内置的 Kubernetes 下运行 Istio你可能需要在 Docker 首选项的 *Advanced* 面板下增加 Docker 的内存限制。Pilot 默认请求内存为 `2048Mi`,这是 Docker 的默认限制。
{{< image width="60%" link="./dockerprefs.png" caption="Docker 首选项" >}}

View File

@ -15,7 +15,7 @@ keywords: [kubernetes,alibabacloud,aliyun]
- 确保 `kubectl` 对你的 Kubernetes 集群工作正常
- 你可以创建一个命名空间用来部署 Istio 组建。例如如下创建的命名空间 “istio-system”
- 你可以创建一个命名空间用来部署 Istio 组建。例如如下创建的命名空间 `istio-system`
{{< text bash >}}
$ kubectl create namespace istio-system

View File

@ -28,7 +28,7 @@ keywords: [kubernetes]
## 安装步骤
1. 使用 `kubectl apply` 安装 Istio 的[自定义资源定义CRD](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/#customresourcedefinitions)几秒钟之后CRD 被提交给 kube-apiserver
1. 使用 `kubectl apply` 安装 Istio 的[自定义资源定义CRD](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/#customresourcedefinitions)几秒钟之后CRD 被提交给 `kube-apiserver`
{{< text bash >}}
$ kubectl apply -f install/kubernetes/helm/istio/templates/crds.yaml

View File

@ -55,7 +55,7 @@ sleep 1 1 1 1 2h sleep,istio-pro
### Sidecar 的自动注入
使用 Kubernetes 的 [mutating webhook admission controller](https://kubernetes.io/docs/admin/admission-controllers/),可以进行 Sidecar 的自动注入。Kubernetes 1.9 以后的版本才具备这一能力。使用这一功能之前首先要检查 kube-apiserver 的进程,是否具备 `admission-control` 参数,并且这个参数的值中需要包含 `MutatingAdmissionWebhook` 以及 `ValidatingAdmissionWebhook` 两项,并且按照正确的顺序加载,这样才能启用 `admissionregistration` API
使用 Kubernetes 的 [mutating webhook admission controller](https://kubernetes.io/docs/admin/admission-controllers/),可以进行 Sidecar 的自动注入。Kubernetes 1.9 以后的版本才具备这一能力。使用这一功能之前首先要检查 `kube-apiserver` 的进程,是否具备 `admission-control` 参数,并且这个参数的值中需要包含 `MutatingAdmissionWebhook` 以及 `ValidatingAdmissionWebhook` 两项,并且按照正确的顺序加载,这样才能启用 `admissionregistration` API
{{< text bash >}}
$ kubectl api-versions | grep admissionregistration

View File

@ -15,7 +15,7 @@ keywords: [kubernetes,upgrading]
1. [下载新的 Istio 版本](/zh/docs/setup/kubernetes/download-release/)并将目录更改为新版本目录。
1. 升级 Istio 的[自定义资源定义](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/#customresourcedefinitions)
通过 `kubectl apply` ,等待几秒钟,让 CRD 在 kube-apiserver 中提交:
通过 `kubectl apply` ,等待几秒钟,让 CRD 在 `kube-apiserver` 中提交:
{{< text bash >}}
$ kubectl apply -f @install/kubernetes/helm/istio/templates/crds.yaml@

View File

@ -339,7 +339,7 @@ spec:
适配器配置中的 `maxAmount` 设置了关联到 Quota 实例中的所有计数器的缺省限制。如果所有 `overrides` 条目都无法匹配到一个请求,就只能使用 `maxAmount` 限制了。Memquota 会选择适合请求的第一条 `override`。`override` 条目无需定义所有 quota dimension 例如例子中的 `0.2 qps` 条目在 4 条 quota dimensions 中只选用了三条。
如果要把上面的策略应用到某个命名空间而非整个 Istio 网格,可以把所有 istio-system 替换成为给定的命名空间。
如果要把上面的策略应用到某个命名空间而非整个 Istio 网格,可以把所有 `istio-system` 替换成为给定的命名空间。
## 清理

View File

@ -108,7 +108,7 @@ Istio 采用基于角色的访问控制方式,本文内容涵盖了为 HTTP
1. 查看日志,并检查 `permissiveResponseCode`
Kubernetes 环境中,可以用下列操作搜索 istio-telemetry 的 Pod 日志:
Kubernetes 环境中,可以用下列操作搜索 `istio-telemetry` 的 Pod 日志:
{{< text bash json >}}
$ kubectl -n istio-system logs -l istio-mixer-type=telemetry -c mixer | grep \"instance\":\"rbacsamplelog.logentry.istio-system\"
@ -139,7 +139,7 @@ Istio 采用基于角色的访问控制方式,本文内容涵盖了为 HTTP
1. 再一次查看日志,并检查 `permissiveResponseCode`
Kubernetes 环境中,可以用下列操作搜索 istio-telemetry 的 Pod 日志:
Kubernetes 环境中,可以用下列操作搜索 `istio-telemetry` 的 Pod 日志:
{{< text bash json >}}
$ kubectl -n istio-system logs -l istio-mixer-type=telemetry -c mixer | grep \"instance\":\"rbacsamplelog.logentry.istio-system\"
@ -211,7 +211,7 @@ Istio 采用基于角色的访问控制方式,本文内容涵盖了为 HTTP
1. 查看日志,检查 `permissiveResponseCode`
在 Kubernetes 环境中,查看 istio-telemetry pod 的日志:
在 Kubernetes 环境中,查看 `istio-telemetry` pod 的日志:
{{< text bash json >}}
$ kubectl -n istio-system logs -l istio-mixer-type=telemetry -c mixer | grep \"instance\":\"rbacsamplelog.logentry.istio-system\"

View File

@ -13,7 +13,7 @@ keywords: [security,access-control,rbac,tcp,authorization]
* 阅读 [Istio 中的授权和鉴权](/zh/docs/concepts/security/#授权和鉴权)。
* 按照[快速开始](/zh/docs/setup/kubernetes/quick-start/)一文的指导,在 Kubernetes 中安装**启用了认证功能**的 Istio。
* 执行[安装步骤](/zh/docs/setup/kubernetes/quick-start/#安装步骤)时启用双向 TLSmTLS认证
* 执行[安装步骤](/zh/docs/setup/kubernetes/quick-start/#安装步骤)时启用双向 TLS 认证
任务中所执行的命令还假设 Bookinfo 示例应用部署在 `default` 命名空间中。如果使用的是其它命名空间,在命令中需要加入 `-n` 参数。

View File

@ -59,7 +59,7 @@ my-nginx-jwwck 1/1 Running 0 1h
sleep-847544bbfc-d27jg 2/2 Running 0 18h
{{< /text >}}
SSH 进入包含 sleep pod 的 istio-proxy 容器。
SSH 进入包含 sleep pod 的 `istio-proxy` 容器。
{{< text bash >}}
$ kubectl exec -it $(kubectl get pod -l app=sleep -o jsonpath={.items..metadata.name}) -c istio-proxy /bin/bash
@ -117,7 +117,7 @@ $ kubectl exec sleep-847544bbfc-d27jg -c sleep -- curl https://my-nginx -k
...
{{< /text >}}
如果从 istio-proxy 容器运行,它也应该正常运行
如果从 `istio-proxy` 容器运行,它也应该正常运行
{{< text bash >}}
$ kubectl exec $(kubectl get pod -l app=sleep -o jsonpath={.items..metadata.name}) -c istio-proxy -- curl https://my-nginx -k
@ -130,13 +130,13 @@ $ kubectl exec $(kubectl get pod -l app=sleep -o jsonpath={.items..metadata.name
### 用 Istio sidecar 创建一个 HTTPS 服务,并使用双向 TLS
您需要使用启用了双向 TLS 的 Istio 控制平面。如果您已经安装了 istio 控制平面,并安装了双向 TLS请删除它
您需要使用启用了双向 TLS 的 Istio 控制平面。如果您已经安装了 Istio 控制平面,并安装了双向 TLS请删除它
{{< text bash >}}
$ kubectl delete -f install/kubernetes/istio-demo.yaml
{{< /text >}}
等待一切都完成了也就是说在控制平面名称空间istio-system中没有 pod。
等待一切都完成了,也就是说在控制平面名称空间(`istio-system`)中没有 pod。
{{< text bash >}}
$ kubectl get pod -n istio-system
@ -198,7 +198,7 @@ $ kubectl exec $(kubectl get pod -l app=sleep -o jsonpath={.items..metadata.name
因为工作流"sleep --> sleep-proxy --> nginx-proxy --> nginx”整个过程是7层流量在 sleep-proxy 和 nginx-proxy 之间有一个 L4 双向 TLS 加密。在这种情况下,一切都很好。
但是,如果您从 istio-proxy 容器运行这个命令,它将无法工作。
但是,如果您从 `istio-proxy` 容器运行这个命令,它将无法工作。
{{< text bash >}}
$ kubectl exec $(kubectl get pod -l app=sleep -o jsonpath={.items..metadata.name}) -c istio-proxy -- curl https://my-nginx -k

View File

@ -112,7 +112,7 @@ keywords: [安全,访问控制,rbac,鉴权]
1. 验证已创建日志流并检查 `permissiveResponseCode`
在 Kubernetes 环境中,搜索日志以查找 istio-telemetry pods如下所示
在 Kubernetes 环境中,搜索日志以查找 `istio-telemetry` pods如下所示
{{< text bash json >}}
$ kubectl -n istio-system logs -l istio-mixer-type=telemetry -c mixer | grep \"instance\":\"rbacsamplelog.logentry.istio-system\"
@ -146,7 +146,7 @@ keywords: [安全,访问控制,rbac,鉴权]
1. 验证已创建日志流并检查 `permissiveResponseCode`
在 Kubernetes 环境中,搜索日志以查找 istio-telemetry pods如下所示
在 Kubernetes 环境中,搜索日志以查找 `istio-telemetry` pods如下所示
{{< text bash json >}}
$ kubectl -n istio-system logs -l istio-mixer-type=telemetry -c mixer | grep \"instance\":\"rbacsamplelog.logentry.istio-system\"
@ -223,7 +223,7 @@ keywords: [安全,访问控制,rbac,鉴权]
1. 验证日志并再次检查 `permissiveResponseCode`
在 Kubernetes 环境中,搜索日志以查找 istio-telemetry pods如下所示
在 Kubernetes 环境中,搜索日志以查找 `istio-telemetry` pods如下所示
{{< text bash json >}}
$ kubectl -n istio-system logs -l istio-mixer-type=telemetry -c mixer | grep \"instance\":\"rbacsamplelog.logentry.istio-system\"

View File

@ -149,7 +149,7 @@ keywords: [telemetry,metrics]
1. 检查请求过程中生成和处理的日志流。
在 Kubernetes 环境中,像这样在 istio-telemetry pods 中搜索日志:
在 Kubernetes 环境中,像这样在 `istio-telemetry` pods 中搜索日志:
{{< text bash json >}}
$ kubectl -n istio-system logs -l istio-mixer-type=telemetry -c mixer | grep \"instance\":\"newlog.logentry.istio-system\"

View File

@ -81,12 +81,12 @@ Mixer 中内置了 Prometheus 适配器,这一适配器将生成的指标值
配置好的 Prometheus 插件会抓取以下的端点:
1. *istio-mesh* (`istio-telemetry.istio-system:42422`): 所有 Mixer 生成的网格指标。
1. *mixer* (`istio-telemetry.istio-system:9093`): 所有特定于 Mixer 的指标, 用于监控 Mixer 本身。
1. *envoy* (`istio-proxy:15090`): envoy 生成的原始统计数据。Prometheus 从 envoy 暴露的端口获取统计数据,过滤掉不想要的数据。
1. *pilot* (`istio-pilot.istio-system:9093`): 所有 pilot 的指标。
1. *galley* (`istio-galley.istio-system:9093`): 所有 galley 的指标。
1. *istio-policy* (`istio-policy.istio-system:9093`): 所有 policy 的指标。
1. `istio-mesh` (`istio-telemetry.istio-system:42422`): 所有 Mixer 生成的网格指标。
1. `mixer` (`istio-telemetry.istio-system:9093`): 所有特定于 Mixer 的指标, 用于监控 Mixer 本身。
1. `envoy` (`istio-proxy:15090`): envoy 生成的原始统计数据。Prometheus 从 envoy 暴露的端口获取统计数据,过滤掉不想要的数据。
1. `pilot` (`istio-pilot.istio-system:9093`): 所有 pilot 的指标。
1. `galley` (`istio-galley.istio-system:9093`): 所有 galley 的指标。
1. `istio-policy` (`istio-policy.istio-system:9093`): 所有 policy 的指标。
有关查询 Prometheus 的更多信息,请阅读他们的[查询文档](https://prometheus.io/docs/querying/basics/)。
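One quick way to run such a query, assuming the default Prometheus add-on (pods labeled `app=prometheus`, UI on port 9090) and the standard mesh metric `istio_requests_total`:

{{< text bash >}}
$ kubectl -n istio-system port-forward $(kubectl -n istio-system get pod -l app=prometheus -o jsonpath={.items[0].metadata.name}) 9090:9090 &
$ curl -s 'http://localhost:9090/api/v1/query?query=istio_requests_total'
{{< /text >}}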

View File

@ -19,7 +19,7 @@ keywords: [security,health-check]
* 了解 [Kubernetes liveness 和 readiness 探针](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/)Istio [认证策略](/zh/docs/concepts/security/#认证策略)和[双向 TLS 认证](/zh/docs/concepts/security/#双向-tls-认证)的概念。
* 具有一个安装了 Istio 的 Kubernetes 集群,但没有启用全局双向 TLS例如按照 [安装步骤](/zh/docs/setup/kubernetes/quick-start/#安装步骤) 中的描述使用 istio-demo.yaml或者在使用 [Helm](/zh/docs/setup/kubernetes/helm-install/) 时将 `global.mtls.enabled` 设置为 false
* 具有一个安装了 Istio 的 Kubernetes 集群,但没有启用全局双向 TLS例如按照 [安装步骤](/zh/docs/setup/kubernetes/quick-start/#安装步骤) 中的描述使用 `istio-demo.yaml`,或者在使用 [Helm](/zh/docs/setup/kubernetes/helm-install/) 时将 `global.mtls.enabled` 设置为 false
## 使用命令选项的 liveness 和 readiness 探针

View File

@ -220,7 +220,7 @@ keywords: [traffic-management,circuit-breaking]
Code 503 : 11 (36.7 %)
{{< /text >}}
1. 我们可以查询 istio-proxy 的状态,获取更多相关信息:
1. 我们可以查询 `istio-proxy` 的状态,获取更多相关信息:
{{< text bash >}}
$ kubectl exec -it $FORTIO_POD -c istio-proxy -- sh -c 'curl localhost:15000/stats' | grep httpbin | grep pending

View File

@ -3,4 +3,4 @@ title: Consul - 我的应用程序无法正常工作,我该如何调试并解
weight: 40
---
请确保所有需要的容器都运行正常etcd、istio-apiserver、consul、registrator、pilot。如果以上某个容器未正常运行你可以使用 `docker ps -a` 命令找到 {containerID},然后使用命令 `docker logs {containerID}` 来查阅日志。
请确保所有需要的容器都运行正常etcd、`istio-apiserver`、consul、registrator、pilot。如果以上某个容器未正常运行你可以使用 `docker ps -a` 命令找到 {containerID},然后使用命令 `docker logs {containerID}` 来查阅日志。

View File

@ -3,4 +3,4 @@ title: Kubernetes - 我该如何调试 sidecar 自动注入的问题?
weight: 20
---
为了支持 sidecar 自动注入,请确保你的集群符合此[前提条件](/docs/setup/kubernetes/sidecar-injection/#automatic-sidecar-injection)。如果你的微服务是部署在 kube-system、kube-public 或者 istio-system 这些命名空间,那么就会被免除 sidecar 自动注入。请使用其他命名空间替代。
为了支持 sidecar 自动注入,请确保你的集群符合此[前提条件](/docs/setup/kubernetes/sidecar-injection/#automatic-sidecar-injection)。如果你的微服务是部署在 `kube-system``kube-public` 或者 `istio-system` 这些命名空间,那么就会被免除 sidecar 自动注入。请使用其他命名空间替代。

View File

@ -1,4 +1,4 @@
---
title: Pilot
---
Pilot 是 istio 里的一个组件,它控制 [envoy](#envoy) 代理,负责服务发现、负载均衡和路由分发。
Pilot 是 Istio 里的一个组件,它控制 [Envoy](#envoy) 代理,负责服务发现、负载均衡和路由分发。

View File

@ -56,9 +56,9 @@ Thu Jun 15 02:25:42 UTC 2017
要解决此问题,您需要在重新安装 Istio 之前关闭然后重新启动 Docker。
## 如果 kube-apiserver 具有代理设置,则 sidecar 自动注入将失败
## 如果 `kube-apiserver` 具有代理设置,则 sidecar 自动注入将失败
当 Kube-apiserver 包含代理设置时,例如:
`Kube-apiserver` 包含代理设置时,例如:
{{< text yaml >}}
env:
@ -70,15 +70,15 @@ env:
value: 127.0.0.1,localhost,dockerhub.foo.com,devhub-docker.foo.com,10.84.100.125,10.84.100.126,10.84.100.127
{{< /text >}}
sidecar 注入将失败。唯一相关的故障日志位于 kube-apiserver 日志中:
sidecar 注入将失败。唯一相关的故障日志位于 `kube-apiserver` 日志中:
{{< text plain >}}
W0227 21:51:03.156818 1 admission.go:257] Failed calling webhook, failing open sidecar-injector.istio.io: failed calling admission webhook "sidecar-injector.istio.io": Post https://istio-sidecar-injector.istio-system.svc:443/inject: Service Unavailable
{{< /text >}}
确保 pod 和 service CIDR 都没有通过 *_proxy 变量来代理。检查 kube-apiserver 文件和日志以验证配置以及是否正在代理任何请求。
确保 pod 和 service CIDR 都没有通过 *_proxy 变量来代理。检查 `kube-apiserver` 文件和日志以验证配置以及是否正在代理任何请求。
一个解决方法是从 kube-apiserver 配置中删除代理设置,然后重新启动服务器或使用更高版本的 Kubernetes。
一个解决方法是从 `kube-apiserver` 配置中删除代理设置,然后重新启动服务器或使用更高版本的 Kubernetes。
向Kubernetes提出了与此相关的问题此后一直关闭。
[https://github.com/kubernetes/kubeadm/issues/666](https://github.com/kubernetes/kubeadm/issues/666)

View File

@ -6,7 +6,7 @@ weight: 20
Galley 配置验证确保用户授权的 Istio 配置在语法和语义上都是有效的。Galley 使用 Kubernetes `ValidatingWebhook``istio-galley` `ValidationWebhookConfiguration` 有两个 webhook。
* `pilot.validation.istio.io` - 服务地址路径为 `/admitpilot`,负责验证 Pilot 使用的配置(例如 VirtualService 、Authentication
* `pilot.validation.istio.io` - 服务地址路径为 `/admitpilot`,负责验证 Pilot 使用的配置(例如 `VirtualService` 、Authentication
* `mixer.validation.istio.io` - 服务地址路径为 `/admitmixer`,负责验证 Mixer 使用的配置。

View File

@ -24,7 +24,7 @@ Mixer 默认安装了一套包括 Prometheus adapter 和用于生成一组[默
Mixer 会生成指标来监控它自身行为。第一步是检查这些指标:
1. 建立与 mixer 自监控 endpoint 的连接以进行 istio 遥测部署。在 Kubernetes 环境中,执行以下命令:
1. 建立与 mixer 自监控 endpoint 的连接以进行 Istio 遥测部署。在 Kubernetes 环境中,执行以下命令:
{{< text bash >}}
$ kubectl -n istio-system port-forward <istio-telemetry pod> 9093 &
@ -93,7 +93,7 @@ istio-system tcpkubeattrgenrulerule 13d
## 验证没有配置错误
1. 要建立与 Istio 遥测自监控 (istio-telemetry self-monitoring) endpoint 的连接,像上面[确认 Mixer 可以收到指标报告的调用](#%E7%A1%AE%E8%AE%A4-mixer-%E5%8F%AF%E4%BB%A5%E6%94%B6%E5%88%B0%E6%8C%87%E6%A0%87%E6%8A%A5%E5%91%8A%E7%9A%84%E8%B0%83%E7%94%A8)描述那样,设置一个到 Istio 遥测自监控 endpoint 的 port forward。
1. 要建立与 Istio 遥测自监控 (`istio-telemetry` self-monitoring) endpoint 的连接,像上面[确认 Mixer 可以收到指标报告的调用](#%E7%A1%AE%E8%AE%A4-mixer-%E5%8F%AF%E4%BB%A5%E6%94%B6%E5%88%B0%E6%8C%87%E6%A0%87%E6%8A%A5%E5%91%8A%E7%9A%84%E8%B0%83%E7%94%A8)描述那样,设置一个到 Istio 遥测自监控 endpoint 的 port forward。
1. 确认以下的指标的最新的值是0
@ -123,7 +123,7 @@ mixer_config_rule_config_match_error_count{configID="1"} 0</td>
## 验证 Mixer 可以将指标实例发送到 Prometheus adapter
1. 要建立与 Istio 遥测自监控 (istio-telemetry self-monitoring) endpoint 的连接,像上面[确认 Mixer 可以收到指标报告的调用](#%E7%A1%AE%E8%AE%A4-mixer-%E5%8F%AF%E4%BB%A5%E6%94%B6%E5%88%B0%E6%8C%87%E6%A0%87%E6%8A%A5%E5%91%8A%E7%9A%84%E8%B0%83%E7%94%A8)描述那样,设置一个到 Istio 遥测自监控 endpoint 的 port forward。
1. 要建立与 Istio 遥测自监控 (`istio-telemetry` self-monitoring) endpoint 的连接,像上面[确认 Mixer 可以收到指标报告的调用](#%E7%A1%AE%E8%AE%A4-mixer-%E5%8F%AF%E4%BB%A5%E6%94%B6%E5%88%B0%E6%8C%87%E6%A0%87%E6%8A%A5%E5%91%8A%E7%9A%84%E8%B0%83%E7%94%A8)描述那样,设置一个到 Istio 遥测自监控 endpoint 的 port forward。
1. 在 Mixer 自监控 endpoint 上,搜索 `mixer_runtime_dispatch_count`。输出应该大致是: