Setup for linting markdown files. (#1147)

- linters.sh will run spell-checking and a style checker on
markdown files (a rough sketch of such a pass appears below).

- Fix a whole bunch of typos and bad markdown content throughout. There are many more fixes
to come before we can enable the linters as a check-in gate, but this takes care of the majority
of items.
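
As an illustration only (a hedged sketch, not the actual linters.sh; the tool choices and flags are assumptions), such a pass could combine markdown-spellcheck, which reads the `.spelling` dictionary added in this commit, with a generic markdown style checker:

```bash
#!/bin/bash
# Hypothetical sketch of a markdown linting pass; the real linters.sh may differ.
set -e

# Spell-check the markdown sources against the .spelling dictionary in the repo
# root (markdown-spellcheck's mdspell looks for .spelling in the working directory).
mdspell --en-us --ignore-acronyms --ignore-numbers --report \
    '_docs/**/*.md' '_blog/**/*.md' '_help/**/*.md'

# Run a style checker over the same files; markdownlint is used here purely as
# an example of "a style checker".
markdownlint '_docs/**/*.md' '_blog/**/*.md' '_help/**/*.md'
```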
Martin Taillefer 2018-04-04 22:22:14 -07:00 committed by GitHub
parent cc68009847
commit a6fae1b368
117 changed files with 1753 additions and 1423 deletions

.spelling (new file, 521 lines)
View File

@ -0,0 +1,521 @@
# markdown-spellcheck spelling configuration file
# Format - lines beginning # are comments
# global dictionary is at the start, file overrides afterwards
# one word per line, to define a file override use ' - filename'
# where filename is relative to this configuration file
0.1.x
0.2.x
1.x
10s
123456789012.my
15ms
1qps
243ms
24ms
290ms
2x
3s
404s
4s
5000qps
50Mb
6s
7Mb
7s
ACLs
API
APIs
Ansible
AppOptics
AuthPolicy
Autoscalers
Bookinfo
CAP_NET_ADMIN
CAs
CDNs
CIDRs
CIOs
CSRs
Chrony
Circonus
Cmd
Config
ConfigMap
Ctrl
CustomResourceDefinition
D3.js
DaemonSet
Datadog
Datawire
EgressRule
Elasticsearch
ExecAction
Exfiltrating
ExternalName
Fluentd
GATEWAY_URL
GCP-IAM
GCP_OPTS
GKE-IAM
GKE-Istio
GKE-Workloads
GitHub
GlueCon
Gmail
Grafana
Graphviz
Hystrix
ILBs
IPs
IPv4
ISTIO_INBOUND_PORTS
Incrementality
Initializers
Istio
IstioMesh
IstioRBAC.svg
Istiofied
JSON-formatted
Kibana
Kops
Kube
Kubelet
Kubernetes
L3-4
LabelDescription
LoadBalancers
LoadBalancing.svg
Lyft
MacOS
Memquota
Mesos
Minikube
MongoDB
MutatingWebhookConfiguration
MySQL
Mysql
NamespaceSelector
NodePort
OAuth2
OP_QUERY
OpenID_Connect
OpenSSL
OpenShift
Papertrail
PilotAdapters.svg
RawVM
Redis
Redis-based
Redisquota
Registrator
Reviewer1
Reviewer2
SREs
ServiceGraph
ServiceModel_RequestFlow.svg
ServiceModel_Versions.svg
ServiceRole
Servicegraph
Sharding
SolarWinds
SolarWindws
TCP-level
TLS-secured
Tcpdump
Tigera
TrafficManagementOverview.svg
Undeploy
VM-based
VMs
ValueType
WeaveWorks
WebSocket
Webhooks
X.509
X.509.
Zipkin
_CA_
_OK_
_V2_
_blog
_data
_docs
_help
_proxy
_v2_
_v3_
a.k.a.
abc
abcde12345
accounts.my
adapters.svg
addon
addons
admissionregistration
admissionregistration.k8s.io
analytics
api-server
api.operation
api.protocol
api.service
api.version
apiVersion
arch.svg
archive.istio.io
archive.istio.io.
auth.svg
autoscaler
autoscalers
autoscaling
backend
backends
base64
bind-productpager-viewer
bookinfo
booksale
bookstore.default.svc.cluster.local
boolean
bt
camelCase
canaried
canarying
check.error_code
check.error_message
cluster.local
colocated
concepts.yaml
config
configmap
configmaps
connection.duration
connection.id
connection.received.bytes
connection.received.bytes_total
connection.sent.bytes
connection.sent.bytes_total
containerID
context.protocol
context.time
coreos
current.istio.io
current.istio.io.
datastore
debian
default.svc.cluster.local
destination.domain
destination.ip
destination.labels
destination.name
destination.namespace
destination.port
destination.service
destination.uid
destination.user
dev
dm_bookinfo.png
dm_gcp_iam.png
dm_grafana.png
dm_kubernetes.png
dm_kubernetes_workloads.png
dm_launcher.png
dm_prometheus.png
dm_servicegraph.png
dm_zipkin.png
docker.io
e.g.
eBPF
enablement
endUser-to-Service
env
envars.yaml
errorFetchingBookDetails.png
errorFetchingBookRating.png
etcd
example.com
externalBookDetails.png
externalMySQLRatings.png
facto
failovers
faq
faq.md
fcm.googleapis.com
figure.html
filename
filenames
fluentd
fortio
frontend
gRPC
gbd
gcloud
gdb
getPetsById
git
global.hyperkube.hub
global.hyperkube.tag
global.ingress.nodeport_port
global.ingress.use_nodeport
global.initializer.enabled
global.mixer.enabled
global.mixer.hub
global.mixer.tag
global.namespace
global.pilot.enabled
global.pilot.hub
global.pilot.tag
global.proxy.debug
global.proxy.hub
global.proxy.tag
global.security.enabled
global.security.hub
global.security.tag
golang
googleapis.com
googlegroups.com
goroutine
goroutines
grafana-istio-dashboard
grpc
helloworld
hostIP
hostname
hotspots
html
http
http2
httpReqTimeout
httpbin
httpbin.org
httpbin.yaml
https
https_from_the_app.svg
hyperkube
i.e.
img
initializer
initializers
int64
intermediation
interoperate
intra-cluster
ip_address
iptables
istio
istio-apiserver
istio.github.io
istio.github.io.
istio.io
istio.io.
istio.yaml
istio_auth_overview.svg
istio_auth_workflow.svg
istio_grafana_dashboard-new
istio_grafana_disashboard-new
istio_zipkin_dashboard.png
istioctl
jaeger_dashboard.png
jaeger_trace.png
jason
k8s
key.pem
kube-api
kube-apiserver
kube-dns
kube-inject
kube-proxy
kube-public
kube-system
kubectl
kubelet
kubernetes
kubernetes.default
learnings
lifecycle
listchecker
liveness
mTLS
machine.svg
memcached
memquota
mesos-dns
metadata
metadata.initializers.pending
methodName
microservice
microservices
minikube
misconfigured
mixer-spof-myth-1
mixer-spof-myth-2
mongodb
mtls_excluded_services
mutual-tls
my-svc
my-svc-234443-5sffe
mysql
mysqldb
namespace
namespaces
natively
nginx
nginx-proxy
nodePorts
nodeport
noistio.svg
non-sandboxed
ns
oc
ok
openssl
packageName.serviceName
parenthesization
pem
phases.svg
platform-specific
pluggable
png
pprof
pre-specified
preconfigured
prefetching
preformatted
preliminary.istio.io
preliminary.istio.io.
prepends
prober
productpage
productpage.ns.svc.cluster.local
products.default.svc.cluster.local
prometheus
prometheus_query_result.png
proto
protobuf
protos
proxied
proxy_http_version
proxying
pwd
qps
quay.io
radis
ratelimit-handler
raw.githubusercontent.com
raw.githubusercontent.com)
readinessProbe
redis
redis-master-2353460263-1ecey
referer
registrator
reinject
repo
request.api_key
request.auth.audiences
request.auth.presenter
request.auth.principal
request.headers
request.host
request.id
request.method
request.path
request.reason
request.referer
request.responseheaders
request.scheme
request.size
request.time
request.useragent
requestcontext
response.code
response.duration
response.headers
response.size
response.time
reviews.abc.svc.cluster.local
roadmap
roleRef
rollout
rollouts
runtime
runtimes
sa
sayin
schemas
secretName
serviceaccount
servicegraph-example
setupIstioVM.sh
setupMeshEx.sh
sharded
sharding
sidecar.env
sleep.legacy
sleep.yaml
source.domain
source.ip
source.labels
source.name
source.namespace
source.service
source.uid
source.user
spiffe
stackdriver
statsd
stdout
struct
subdomains
substring
svc.cluster.local
svc.com
svg
tcp
team1
team1-ns
team2
team2-ns
templated
test-api
timeframe
timestamp
traffic.svg
trustability
ulimit
uncomment
uncommented
unencrypted
uptime
url
user
user1
v1
v1.7.4
v1.7.6_coreos.0
v1alpha1
v1alpha3
v1beta1#MutatingWebhookConfiguration
v2
v2-mysql
v3
versioned
versioning
vm-1
webhook
webhooks
whitelist
whitelists
wikipedia.org
wildcard
withistio.svg
www.google.com
x-envoy-upstream-rq-timeout-ms
x.509
yaml
yamls
yournamespace
zipkin_dashboard.png
zipkin_span.png
qcc
- search.md
searchresults
gcse

View File

@ -1,6 +1,6 @@
# Contribution guidelines
## Contribution guidelines
So, you want to hack on the Istio web site? Yay! Please refer to Istio's overall
So, you want to hack on the Istio web site? Cool! Please refer to Istio's overall
[contribution guidelines](https://github.com/istio/community/blob/master/CONTRIBUTING.md)
to find out how you can help.

View File

@ -1,4 +1,4 @@
# istio.github.io
## istio.github.io
This repository contains the source code for the [istio.io](https://istio.io),
[preliminary.istio.io](https://preliminary.istio.io) and [archive.istio.io](https://archive.istio.io) websites.
@ -18,13 +18,13 @@ see the Istio [contribution guidelines](https://github.com/istio/community/blob/
The website uses [Jekyll](https://jekyllrb.com/) templates. Please make sure you are
familiar with these before editing.
To run the site locally with Docker, use the following command from the toplevel directory for this git repo
To run the site locally with Docker, use the following command from the top level directory for this git repo
(e.g. pwd must be `~/github/istio.github.io` if you were in `~/github` when you issued
`git clone https://github.com/istio/istio.github.io.git`)
```bash
# First time: (slow)
docker run --name istio-jekyll --volume=$(pwd):/srv/jekyll -it -p 4000:4000 jekyll/jekyll:3.5.2 sh -c "bundle install && rake test && bundle exec jekyll serve --incremental --host 0.0.0.0"
docker run --name istio-jekyll --volume=$(pwd):/srv/jekyll -it -p 4000:4000 jekyll/jekyll:3.7.3 sh -c "bundle install && rake test && bundle exec jekyll serve --incremental --host 0.0.0.0"
# Then open browser with url 127.0.0.1:4000 to see the change.
# Subsequently, each time you want to see a new change and you stopped the previous run by ctrl+c: (much faster)
docker start istio-jekyll -a -i
@ -32,15 +32,19 @@ docker start istio-jekyll -a -i
docker rm istio-jekyll
```
The `rake test` part is to make sure you are not introducing html errors or bad links, you should see
```
The `rake test` part is to make sure you are not introducing HTML errors or bad links; you should see
```bash
HTML-Proofer finished successfully.
```
in the output.
> In some cases the `--incremental` may not work properly and you might have to remove it.
Alternatively, if you just want to develop locally w/o Docker/Kubernetes/Minikube, you can try installing Jekyll locally. You may need to install other prerequisites manually (which is where using the docker image shines). Here's an example of doing so for Mac OS X:
Alternatively, if you just want to develop locally w/o Docker/Kubernetes/Minikube, you can try installing Jekyll locally.
You may need to install other prerequisites manually (which is where using the docker image shines). Here's an example of doing
so for Mac OS X:
```bash
xcode-select --install
@ -110,33 +114,35 @@ and subsequent entries should point to archive.istio.io.
1. Commit the previous edit to GitHub.
1. Go to the Google Search Console and create a new search engine that searches the archive.istio.io/V<major>.<minor>
1. Go to the Google Search Console and create a new search engine that searches the archive.istio.io/V&lt;major&gt;.&lt;minor&gt;
directory. This search engine will be used to perform version-specific searches on archive.istio.io.
1. In the **previous release's** branch (in this case release-0.6), edit the file `_data/istio.yml`. Set the
`archive` field to true, the `archive_date` field to the current date, and the `search_engine_id` field
to the ID of the search engine you created in the prior step.
1. Switch to the istio/admin-sites repo. In this repo:
1. Switch to the istio/admin-sites repo.
1. Navigate to the archive.istio.io directory and edit the `build.sh` script to add the newest archive version (in this case
release-0.6) to the `TOBUILD` variable.
1. Navigate to the archive.istio.io directory.
1. Commit the previous edit to GitHub.
1. Edit the `build.sh` script to add the newest archive version (in this case
release-0.6) to the `TOBUILD` variable.
1. Run the `build.sh` script.
1. Commit the previous edit to GitHub.
1. Once the script completes, run 'firebase deploy'. This will update archive.istio.io to contain the
right set of archives, based on the above steps.
1. Run the `build.sh` script.
1. Navigate to the current.istio.io directory
1. Once the script completes, run 'firebase deploy'. This will update archive.istio.io to contain the
right set of archives, based on the above steps.
1. Edit the build.sh script to set the `BRANCH` variable to the current release branch (in this case release-0.7)
1. Navigate to the current.istio.io directory.
1. Run the `build.sh` script.
1. Edit the `build.sh` script to set the `BRANCH` variable to the current release branch (in this case release-0.7)
1. Once the script completes, run `firebase deploy`. This will update the content of istio.io to reflect the new release
branch you created.
1. Run the `build.sh` script.
1. Once the script completes, run `firebase deploy`. This will update the content of istio.io to reflect the new release
branch you created.
Once all this is done, browse the three sites (preliminary.istio.io, istio.io, and archive.istio.io) to make sure
everything looks good.
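
For reference, here is a command-level sketch of the publishing steps above. It is hedged: the directory layout and variable names are taken from the steps themselves, and the real `build.sh` scripts may behave differently.

```bash
# Hypothetical sketch of the archive/current publishing flow described above.
cd admin-sites/archive.istio.io
# Edit build.sh: add the newest archive branch (e.g. release-0.6) to TOBUILD.
./build.sh
firebase deploy   # updates archive.istio.io with the new set of archives

cd ../current.istio.io
# Edit build.sh: set BRANCH to the current release branch (e.g. release-0.7).
./build.sh
firebase deploy   # updates istio.io to serve the new release branch
```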

View File

@ -10,12 +10,12 @@ redirect_from: /docs/welcome/contribute/creating-a-pull-request.html
---
To contribute to Istio documentation, create a pull request against the
[istio/istio.github.io](https://github.com/istio/istio.github.io){: target="_blank"}
[istio/istio.github.io](https://github.com/istio/istio.github.io)
repository. This page shows the steps necessary to create a pull request.
## Before you begin
1. Create a [GitHub account](https://github.com){: target="_blank"}.
1. Create a [GitHub account](https://github.com).
1. Sign the [Contributor License Agreement](https://github.com/istio/community/blob/master/CONTRIBUTING.md#contributor-license-agreements)
@ -26,7 +26,7 @@ Documentation will be published under the [Apache 2.0](https://github.com/istio/
Before you can edit documentation, you need to create a fork of Istio's documentation GitHub repository:
1. Go to the
[istio/istio.github.io](https://github.com/istio/istio.github.io){: target="_blank"}
[istio/istio.github.io](https://github.com/istio/istio.github.io)
repository.
1. In the upper-right corner, click **Fork**. This creates a copy of Istio's

View File

@ -1,7 +1,7 @@
---
title: Editing Docs
overview: Lets you start editing this site's documentation.
order: 10
layout: about
@ -10,7 +10,7 @@ redirect_from: /docs/welcome/contribute/editing.html
---
Click the button below to visit the GitHub repository for this whole web site. You can then click the
**Fork** button in the upper-right area of the screen to
**Fork** button in the upper-right area of the screen to
create a copy of our site in your GitHub account called a _fork_. Make any changes you want in your fork, and when you
are ready to send those changes to us, go to the index page for your fork and click **New Pull Request** to let us know about it.

View File

@ -1,7 +1,7 @@
---
title: Doc Issues
overview: Explains the process involved in accepting documentation updates.
order: 60
layout: about
@ -10,7 +10,7 @@ redirect_from: /docs/welcome/contribute/reviewing-doc-issues.html
---
This page explains how documentation issues are reviewed and prioritized for the
[istio/istio.github.io](https://github.com/istio/istio.github.io){: target="_blank"} repository.
[istio/istio.github.io](https://github.com/istio/istio.github.io) repository.
The purpose is to provide a way to organize issues and make it easier to contribute to
Istio documentation. The following should be used as the standard way of prioritizing,
labeling, and interacting with issues.
@ -25,7 +25,7 @@ the issue with your reasoning for the change.
<td>P1</td>
<td><ul>
<li>Major content errors affecting more than 1 page</li>
<li>Broken code sample on a heavily trafficked page</li>
<li>Broken code sample on a heavily trafficked page</li>
<li>Errors on a “getting started” page</li>
<li>Well known or highly publicized customer pain points</li>
<li>Automation issues</li>
@ -52,7 +52,7 @@ the issue with your reasoning for the change.
## Handling special issue types
If a single problem has one or more issues open for it, the problem should be consolidated into a single issue. You should decide which issue to keep open
If a single problem has one or more issues open for it, the problem should be consolidated into a single issue. You should decide which issue to keep open
(or open a new issue), port over all relevant information, link related issues, and close all the other issues that describe the same problem. Only having
a single issue to work on will help reduce confusion and avoid duplicating work on the same problem.

View File

@ -1,7 +1,7 @@
---
title: Style Guide
overview: Explains the dos and don'ts of writing Istio docs.
order: 70
layout: about
@ -29,18 +29,18 @@ objects use
[camelCase](https://en.wikipedia.org/wiki/Camel_case).
Don't split the API object name into separate words. For example, use
PodTemplateList, not Pod Template List.
`PodTemplateList`, not Pod Template List.
Refer to API objects without saying "object," unless omitting "object"
leads to an awkward construction.
|Do |Don't
|--------------------------------------------|------
|The Pod has two Containers. |The pod has two containers.
|The Deployment is responsible for ... |The Deployment object is responsible for ...
|A PodList is a list of Pods. |A Pod List is a list of pods.
|The two ContainerPorts ... |The two ContainerPort objects ...
|The two ContainerStateTerminated objects ...|The two ContainerStateTerminated ...
|The `Pod` has two Containers. |The pod has two containers.
|The `Deployment` is responsible for ... |The `Deployment` object is responsible for ...
|A `PodList` is a list of Pods. |A Pod List is a list of pods.
|The two `ContainerPorts` ... |The two `ContainerPort` objects ...
|The two `ContainerStateTerminated` objects ...|The two `ContainerStateTerminated` ...
### Use angle brackets for placeholders
@ -52,7 +52,7 @@ represents.
```bash
kubectl describe pod <pod-name>
```
where `<pod-name>` is the name of one of your pods.
### Use **bold** for user interface elements
@ -81,7 +81,7 @@ represents.
|Do | Don't
|----------------------------|------
|The `kubectl run` command creates a Deployment.|The "kubectl run" command creates a Deployment.
|The `kubectl run` command creates a `Deployment`.|The "kubectl run" command creates a `Deployment`.
|For declarative management, use `kubectl apply`.|For declarative management, use "kubectl apply".
### Use `code` style for object field names
@ -97,19 +97,20 @@ For field values of type string or integer, use normal style without quotation m
|Do | Don't
|----------------------------------------------|------
|Set the value of `imagePullPolicy` to Always. | Set the value of `imagePullPolicy` to "Always".|Set the value of `image` to nginx:1.8. | Set the value of `image` to `nginx:1.8`.
|Set the value of `imagePullPolicy` to Always. | Set the value of `imagePullPolicy` to "Always".
|Set the value of `image` to nginx:1.8. | Set the value of `image` to `nginx:1.8`.
|Set the value of the `replicas` field to 2. | Set the value of the `replicas` field to `2`.
### Only capitalize the first letter of headings
For any headings, only apply an uppercase letter to the first word of the heading,
except is a word is a proper noun or an acronym.
except if a word is a proper noun or an acronym.
|Do | Don't
|------------------------|-----
|Configuring rate limits | Configuring Rate Limits
|Using Envoy for ingress | Using envoy for ingress
|Using HTTPS | Using https
|Using HTTPS | Using https
## Code snippet formatting
@ -128,7 +129,7 @@ kubectl get pods --output=wide
```
The output is similar to this:
```bash
```xxx
NAME READY STATUS RESTARTS AGE IP NODE
nginx 1/1 Running 0 13s 10.200.0.4 worker0
```
@ -150,7 +151,7 @@ Synonyms:
- “Sidecar” -- mostly restricted to conceptual docs
- “Proxy” -- only if context is obvious
Related Terms
Related Terms:
- Proxy agent - This is a minor infrastructural component and should only show up in low-level detail documentation.
It is not a proper noun.
@ -171,7 +172,7 @@ forms of configuration.
No dash, it's *load balancing* not *load-balancing*.
### Service mesh
### Service mesh
Not a proper noun. Use in place of service fabric.
@ -208,7 +209,7 @@ Use simple and direct language. Avoid using unnecessary phrases, such as saying
|Do | Don't
|----------------------------|------
|To create a ReplicaSet, ... | In order to create a ReplicaSet, ...
|To create a `ReplicaSet`, ... | In order to create a `ReplicaSet`, ...
|See the configuration file. | Please see the configuration file.
|View the Pods. | With this next command, we'll view the Pods.
@ -216,7 +217,7 @@ Use simple and direct language. Avoid using unnecessary phrases, such as saying
|Do | Don't
|---------------------------------------|------
|You can create a Deployment by ... | We'll create a Deployment by ...
|You can create a `Deployment` by ... | We'll create a `Deployment` by ...
|In the preceding output, you can see...| In the preceding output, we can see ...
### Create useful links
@ -237,7 +238,7 @@ whether they're part of the "we" you're describing.
|Do | Don't
|------------------------------------------|------
|Version 1.4 includes ... | In version 1.4, we have added ...
|Kubernetes provides a new feature for ... | We provide a new feature ...
|Istio provides a new feature for ... | We provide a new feature ...
|This page teaches you how to use pods. | In this page, we are going to learn about pods.
### Avoid jargon and idioms

View File

@ -1,7 +1,7 @@
---
title: Writing a New Topic
overview: Explains the mechanics of creating new documentation pages.
order: 30
layout: about
@ -25,7 +25,7 @@ is the best fit for your content:
<table>
<tr>
<td>Concept</td>
<td>A concept page explains some significant aspect of Istio. For example, a concept page might describe the
<td>A concept page explains some significant aspect of Istio. For example, a concept page might describe the
Mixer's configuration model and explain some of its subtleties.
Typically, concept pages don't include sequences of steps, but instead provide links to
tasks that do.</td>
@ -58,7 +58,7 @@ is the best fit for your content:
activities.
</td>
</tr>
<tr>
<td>Blog Post</td>
<td>
@ -79,13 +79,13 @@ all in lower case.
## Updating the front matter
Every documentation file needs to start with Jekyll
Every documentation file needs to start with Jekyll
[front matter](https://jekyllrb.com/docs/frontmatter/).
The front matter is a block of YAML that is between the
triple-dashed lines at the top of each file. Here's the
chunk of front matter you should start with:
```
```yaml
---
title: <title>
overview: <overview>
@ -117,12 +117,12 @@ matter fields are:
Depending on your page type, put your new file in a subdirectory of one of these:
* _blog/
* _docs/concepts/
* _docs/guides/
* _docs/reference/
* _docs/setup/
* _docs/tasks/
- _blog/
- _docs/concepts/
- _docs/guides/
- _docs/reference/
- _docs/setup/
- _docs/tasks/
You can put your file in an existing subdirectory, or you can create a new
subdirectory. For blog posts, put the file into a subdirectory for the current
@ -135,7 +135,7 @@ Put image files in an `img` subdirectory of where you put your markdown file. Th
If you must use a PNG or JPEG file instead, and the file
was generated from an original SVG file, please include the
SVG file in the repository even if it isn't used in the web
site itself. This is so we can update the imagery over time
site itself. This is so we can update the imagery over time
if needed.
Within markdown, use the following sequence to add the image:
@ -181,10 +181,10 @@ current hierarchy:
{% raw %}[see here]({{home}}/docs/adir/afile.html){% endraw %}
```
In order to use \{\{home\}\} in a file,
In order to use \{\{home\}\} in a file,
you need to make sure that the file contains the following
line of boilerplate right after the block of front matter:
```markdown
...
---
@ -206,7 +206,7 @@ func HelloWorld() {
The above produces this kind of output:
```
```xxx
func HelloWorld() {
fmt.Println("Hello World")
}
@ -230,7 +230,7 @@ func HelloWorld() {
}
```
You can use `markdown`, `yaml`, `json`, `java`, `javascript`, `c`, `cpp`, `csharp`, `go`, `html`, `protobuf`,
You can use `markdown`, `yaml`, `json`, `java`, `javascript`, `c`, `cpp`, `csharp`, `go`, `html`, `protobuf`,
`perl`, `docker`, and `bash`.
## Displaying file content
@ -261,13 +261,13 @@ redirects to the site very easily.
In the page that is the target of the redirect (where you'd like users to land), you simply add the
following to the front-matter:
```
```xxx
redirect_from: <url>
```
For example
```
```xxx
---
title: Frequently Asked Questions
overview: Questions Asked Frequently
@ -279,14 +279,14 @@ type: markdown
redirect_from: /faq
---
```
```
With the above in a page saved as _help/faq.md, the user will be able to access the page by going
to istio.io/help/faq as normal, as well as istio.io/faq.
You can also add many redirects like so:
```
```xxx
---
title: Frequently Asked Questions
overview: Questions Asked Frequently
@ -301,4 +301,4 @@ redirect_from:
- /faq3
---
```
```

View File

@ -17,22 +17,22 @@ This page lists the relative maturity and support
level of every Istio feature. Please note that the phases (Alpha, Beta, and Stable) are applied to individual features
within the project, not to the project as a whole. Here is a high-level description of what these labels mean:
## Feature Phase Definition
## Feature phase definitions
| | Alpha | Beta | Stable
| | Alpha | Beta | Stable
|-------------------|-------------------|-------------------|-------------------
| **Purpose** | Demo-able, works end-to-end but has limitations | Usable in production, not a toy anymore | Dependable, production hardened
| **Purpose** | Demo-able, works end-to-end but has limitations | Usable in production, not a toy anymore | Dependable, production hardened
| **API** | No guarantees on backward compatibility | APIs are versioned | Dependable, production-worthy. APIs are versioned, with automated version conversion for backward compatibility
| **Performance** | Not quantified or guaranteed | Not quantified or guaranteed | Perf (latency/scale) is quantified, documented, with guarantees against regression
| **Performance** | Not quantified or guaranteed | Not quantified or guaranteed | Performance (latency/scale) is quantified, documented, with guarantees against regression
| **Deprecation Policy** | None | Weak - 3 months | Dependable, Firm. 1 year notice will be provided before changes
## Istio features
Below is our list of existing features and their current phases. This information will be updated after every monthly release.
### Traffic Management
### Traffic management
| Feature | Phase
| Feature | Phase
|-------------------|-------------------
| [Protocols: HTTP 1.1](https://www.envoyproxy.io/docs/envoy/latest/intro/arch_overview/http_connection_management.html#http-protocols) | Beta
| [Protocols: HTTP 2.0](https://www.envoyproxy.io/docs/envoy/latest/intro/arch_overview/http_connection_management.html#http-protocols) | Alpha
@ -50,23 +50,20 @@ Below is our list of existing features and their current phases. This informatio
### Observability
| Feature | Phase
| Feature | Phase
|-------------------|-------------------
| [Prometheus Integration]({{home}}/docs/guides/telemetry.html) | Beta
| [Local Logging (STDIO)]({{home}}/docs/guides/telemetry.html) | Beta
| [Statsd Integration]({{home}}/docs/reference/config/adapters/statsd.html) | Stable
| [Service Dashboard in Grafana]({{home}}/docs/tasks/telemetry/using-istio-dashboard.html) | Beta
| [Stackdriver Integration]({{home}}/docs/reference/config/adapters/stackdriver.html) | Alpha
| [Service Graph]({{home}}/docs/tasks/telemetry/servicegraph.html) | Alpha
| [Distributed Tracing to Zipkin / Jaeger]({{home}}/docs/tasks/telemetry/distributed-tracing.html) | Alpha
| [Prometheus Integration]({{home}}/docs/guides/telemetry.html) | Beta
| [Local Logging (STDIO)]({{home}}/docs/guides/telemetry.html) | Beta
| [Statsd Integration]({{home}}/docs/reference/config/adapters/statsd.html) | Stable
| [Service Dashboard in Grafana]({{home}}/docs/tasks/telemetry/using-istio-dashboard.html) | Beta
| [Stackdriver Integration]({{home}}/docs/reference/config/adapters/stackdriver.html) | Alpha
| [Service Graph]({{home}}/docs/tasks/telemetry/servicegraph.html) | Alpha
| [Distributed Tracing to Zipkin / Jaeger]({{home}}/docs/tasks/telemetry/distributed-tracing.html) | Alpha
| [Istio Component Dashboard in Grafana]({{home}}/docs/tasks/telemetry/using-istio-dashboard.html) - **New to 0.5** | Alpha
### Security
| Feature | Phase
| Feature | Phase
|-------------------|-------------------
| [Deny Checker]({{home}}/docs/reference/config/adapters/denier.html) | Beta
| [List Checker]({{home}}/docs/reference/config/adapters/list.html) | Beta
@ -77,12 +74,9 @@ Below is our list of existing features and their current phases. This informatio
| [VM: Service Credential Distribution]({{home}}/docs/concepts/security/mutual-tls.html) | Alpha
| [OPA Checker](https://github.com/istio/istio/blob/41a8aa4f75f31bf0c1911d844a18da4cff8ac584/mixer/adapter/opa/README.md) | Alpha
### Core
| Feature | Phase
| Feature | Phase
|-------------------|-------------------
| [Kubernetes: Envoy Installation and Traffic Interception]({{home}}/docs/setup/kubernetes/) | Beta
| [Kubernetes: Istio Control Plane Installation]({{home}}/docs/setup/kubernetes/) | Beta
@ -92,13 +86,10 @@ Below is our list of existing features and their current phases. This informatio
| [VM: Envoy Installation, Traffic Interception and Service Registration]({{home}}/docs/guides/integrating-vms.html) | Alpha
| [VM: Istio Control Plane Installation and Upgrade (Galley, Mixer, Pilot, CA)](https://github.com/istio/istio/issues/2083) | Alpha
| [Kubernetes: Istio Control Plane Upgrade]({{home}}/docs/setup/kubernetes/) | Alpha
| [Pilot Integration into Consul]({{home}}/docs/setup/consul/quick-start.html) | Alpha
| [Pilot Integration into Eureka]({{home}}/docs/setup/consul/quick-start.html) | Alpha
| [Pilot Integration into Consul]({{home}}/docs/setup/consul/quick-start.html) | Alpha
| [Pilot Integration into Eureka]({{home}}/docs/setup/consul/quick-start.html) | Alpha
| [Pilot Integration into Cloud Foundry Service Discovery]({{home}}/docs/setup/consul/quick-start.html) | Alpha
| [Basic Config Resource Validation](https://github.com/istio/istio/issues/1894) | Alpha
| [Basic Config Resource Validation](https://github.com/istio/istio/issues/1894) | Alpha
> <img src="{{home}}/img/bulb.svg" alt="Bulb" title="Help" style="width: 32px; display:inline" />
Please get in touch by joining our [community]({{home}}/community.html) if there are features you'd like to see in our future releases!

View File

@ -25,7 +25,7 @@ rate limits and quotas.
- Automatic metrics, logs, and traces for all traffic within a cluster,
including cluster ingress and egress.
- Secure service-to-service communication in a cluster with strong
- Secure service-to-service communication in a cluster with strong
identity-based authentication and authorization.
Istio can be deployed on [Kubernetes](https://kubernetes.io),

View File

@ -11,33 +11,35 @@ redirect_from: /docs/welcome/notes/0.2.html
## General
- **Updated Config Model**. Istio now uses the Kubernetes [Custom Resource](https://kubernetes.io/docs/concepts/api-extension/custom-resources/)
model to describe and store its configuration. When running in Kubernetes, configuration can now be optionally managed using the `kubectl`
model to describe and store its configuration. When running in Kubernetes, configuration can now be optionally managed using the `kubectl`
command.
- **Multiple Namespace Support**. Istio control plane components are now in the dedicated "istio-system" namespace. Istio can manage
- **Multiple Namespace Support**. Istio control plane components are now in the dedicated "istio-system" namespace. Istio can manage
services in other non-system namespaces.
- **Mesh Expansion**. Initial support for adding non-Kubernetes services (in the form of VMs and/or physical machines) to a mesh. This is an early version of
this feature and has some limitations (such as requiring a flat network across containers and VMs).
- **Multi-Environment Support**. Initial support for using Istio in conjunction with other service registries
including Consul and Eureka.
- **Multi-Environment Support**. Initial support for using Istio in conjunction with other service registries
including Consul and Eureka.
- **Automatic injection of sidecars**. Istio sidecar can automatically be injected into a Pod upon deployment using the [Initializers](https://kubernetes.io/docs/admin/extensible-admission-controllers/#what-are-initializers) alpha feature in Kubernetes.
- **Automatic injection of sidecars**. Istio sidecar can automatically be injected into a Pod upon deployment using the
[Initializers](https://kubernetes.io/docs/admin/extensible-admission-controllers/#what-are-initializers) alpha feature in Kubernetes.
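
As a small, purely illustrative aside to the config model bullet above (the resource kind and file name here are assumptions and depend on the Istio version), configuration expressed as Kubernetes custom resources can be handled with ordinary `kubectl` commands:

```bash
# Illustrative only: Istio config stored as custom resources is managed like
# any other Kubernetes resource. "routerules" and the file name are examples.
kubectl apply -f my-route-rule.yaml
kubectl get routerules -n default
kubectl delete routerule my-route-rule -n default
```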
## Perf and quality
## Performance and quality
There have been many performance and reliability improvements throughout the system. We don't consider Istio 0.2 ready for production yet, but we've made excellent progress in that direction. Here are a few items of note:
There have been many performance and reliability improvements throughout the system. We don't consider Istio 0.2 ready for production yet, but
we've made excellent progress in that direction. Here are a few items of note:
- **Caching Client**. The Mixer client library used by Envoy now provides caching for Check calls and batching for Report calls, considerably reducing
- **Caching Client**. The Mixer client library used by Envoy now provides caching for Check calls and batching for Report calls, considerably reducing
end-to-end overhead.
- **Avoid Hot Restarts**. The need to hot-restart Envoy has been mostly eliminated through effective use of LDS/RDS/CDS/EDS.
- **Reduced Memory Use**. Significantly reduced the size of the sidecar helper agent, from 50Mb to 7Mb.
- **Improved Mixer Latency**. Mixer now clearly delineates configuration-time vs. request-time computations, which avoids doing extra setup work at
request-time for initial requests and thus delivers a smoother average latency. Better resource caching also contributes to better end-to-end perf.
- **Improved Mixer Latency**. Mixer now clearly delineates configuration-time vs. request-time computations, which avoids doing extra setup work at
request-time for initial requests and thus delivers a smoother average latency. Better resource caching also contributes to better end-to-end performance.
- **Reduced Latency for Egress Traffic**. We now forward traffic to external services directly from the sidecar.
@ -55,13 +57,13 @@ Jaeger tracing.
- **Ingress Policies**. In addition to east-west traffic supported in 0.1, policies can now be applied to north-south traffic.
- **Support for TCP Services**. In addition to the HTTP-level policy controls available in 0.1, 0.2 introduces policy controls for
- **Support for TCP Services**. In addition to the HTTP-level policy controls available in 0.1, 0.2 introduces policy controls for
TCP services.
- **New Mixer API**. The API that Envoy uses to interact with Mixer has been completely redesigned for increased robustness, flexibility, and to support
- **New Mixer API**. The API that Envoy uses to interact with Mixer has been completely redesigned for increased robustness, flexibility, and to support
rich proxy-side caching and batching for increased performance.
- **New Mixer Adapter Model**. A new adapter composition model makes it easier to extend Mixer by adding whole new classes of adapters via templates. This
- **New Mixer Adapter Model**. A new adapter composition model makes it easier to extend Mixer by adding whole new classes of adapters via templates. This
new model will serve as the foundational building block for many features in the future. See the
[Adapter Developer's Guide](https://github.com/istio/istio/blob/master/mixer/doc/adapters.md) to learn how
to write adapters.
@ -84,10 +86,15 @@ identity provisioning. This agent runs on each node (VM / physical machine) and
- **Bring Your Own CA Certificates**. Allows users to provide their own key and certificate for Istio CA.
- **Persistent CA Key/Certificate Storage**. Istio CA now stores signing key/certificates in
persistent storage to facilitate CA restarts.
persistent storage to facilitate CA restarts.
## Known issues
- **User may get periodical 404 when accessing the application**: We have noticed that Envoy doesn't get routes properly occasionally thus a 404 is returned to the user. We are actively working on this [issue](https://github.com/istio/istio/issues/1038).
- **Istio Ingress or Egress reports ready before Pilot is actually ready**: You can check the istio-ingress and istio-egress pods status in the `istio-system` namespace and wait a few seconds after all the Istio pods reach ready status. We are actively working on this [issue](https://github.com/istio/istio/pull/1055).
- **User may get periodical 404 when accessing the application**: We have noticed that Envoy doesn't get routes properly occasionally
thus a 404 is returned to the user. We are actively working on this [issue](https://github.com/istio/istio/issues/1038).
- **Istio Ingress or Egress reports ready before Pilot is actually ready**: You can check the istio-ingress and istio-egress pods status
in the `istio-system` namespace and wait a few seconds after all the Istio pods reach ready status. We are actively working on this
[issue](https://github.com/istio/istio/pull/1055).
- **A service with Istio Auth enabled can't communicate with a service without Istio**: This limitation will be removed in the near future.

View File

@ -42,6 +42,5 @@ significant drop in average latency for authorization checks.
- **Config Validation**. Mixer does more extensive validation of configuration state in order to catch problems earlier.
We expect to invest more in this area in coming releases.
If you're into the nitty-gritty details, you can see our more detailed low-level
release notes [here](https://github.com/istio/istio/wiki/v0.3.0).

View File

@ -10,8 +10,8 @@ redirect_from: /docs/welcome/notes/0.4.html
---
{% include home.html %}
This release has only got a few weeks' worth of changes, as we stabilize our monthly release process.
In addition to the usual pile of bug fixes and perf improvements, this release includes:
This release has only got a few weeks' worth of changes, as we stabilize our monthly release process.
In addition to the usual pile of bug fixes and performance improvements, this release includes:
- **Cloud Foundry**. Added minimum Pilot support for the [Cloud Foundry](https://www.cloudfoundry.org) platform, making it
possible for Pilot to discover CF services and service instances.

View File

@ -8,7 +8,7 @@ type: markdown
---
{% include home.html %}
In addition to the usual pile of bug fixes and perf improvements, this release includes the new or
In addition to the usual pile of bug fixes and performance improvements, this release includes the new or
updated features detailed below.
## Networking
@ -17,9 +17,11 @@ updated features detailed below.
the components you want (e.g., Pilot+Ingress only as the minimal Istio install). Refer to the `istioctl` CLI tool for
information on generating customized Istio deployments.
- **Automatic Proxy Injection**. We leverage Kubernetes 1.9's new [mutating webhook feature](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.9.md#api-machinery) to provide automatic
- **Automatic Proxy Injection**. We leverage Kubernetes 1.9's new
[mutating webhook feature](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.9.md#api-machinery) to provide automatic
pod-level proxy injection. Automatic injection requires Kubernetes 1.9 or beyond and
therefore doesn't work on older versions. The alpha initializer mechanism is no longer supported. [Learn more]({{home}}/docs/setup/kubernetes/sidecar-injection.html#automatic-sidecar-injection)
therefore doesn't work on older versions. The alpha initializer mechanism is no longer supported.
[Learn more]({{home}}/docs/setup/kubernetes/sidecar-injection.html#automatic-sidecar-injection)
- **Revised Traffic Rules**. Based on user feedback, we have made significant changes to Istio's traffic management
(routing rules, destination rules, etc.). We would love your continuing feedback while we polish this in the coming weeks.
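
As a hedged aside to the automatic injection bullet above (the manifest name is a placeholder), manual injection with `istioctl` remains available when the Kubernetes 1.9 mutating webhook cannot be used:

```bash
# Illustrative only: inject the sidecar into a manifest by hand and apply the result.
istioctl kube-inject -f app.yaml | kubectl apply -f -
```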
@ -32,7 +34,7 @@ providing a flexible fine-grained access control mechanism. [Learn more](https:/
- **Istio RBAC**. Mixer now has a role-based access control adapter.
[Learn more]({{home}}/docs/concepts/security/rbac.html)
- **Fluentd**. Mixer now has an adapter for log collection through [fluentd](https://www.fluentd.org).
- **Fluentd**. Mixer now has an adapter for log collection through [fluentd](https://www.fluentd.org).
[Learn more]({{home}}/docs/tasks/telemetry/fluentd.html)
- **Stdio**. The stdio adapter now lets you log to files with support for log rotation & backup, along with a host
@ -51,10 +53,10 @@ of controls.
## Other
- **Release-Mode Binaries**. We switched release and installation default to release for improved
performance and security.
performance and security.
- **Component Logging**. Istio components now offer a rich set of command-line options to control local logging, including
common support for log rotation.
common support for log rotation.
- **Consistent Version Reporting**. Istio components now offer a consistent command-line interface to report their version information.

View File

@ -8,7 +8,7 @@ type: markdown
---
{% include home.html %}
In addition to the usual pile of bug fixes and perf improvements, this release includes the new or
In addition to the usual pile of bug fixes and performance improvements, this release includes the new or
updated features detailed below.
## Networking

View File

@ -11,22 +11,24 @@ redirect_from:
- "/release-notes"
- "/docs/welcome/notes/index.html"
- "/docs/references/notes"
toc: false
toc: false
---
{% include section-index.html docs=site.about %}
The latest Istio monthly release is {{site.data.istio.version}} ([release notes]({{site.data.istio.version}}.html)). You can [download {{site.data.istio.version}}](https://github.com/istio/istio/releases) with:
The latest Istio monthly release is {{site.data.istio.version}} ([release notes]({{site.data.istio.version}}.html)). You can
[download {{site.data.istio.version}}](https://github.com/istio/istio/releases) with:
```
```bash
curl -L https://git.io/getLatestIstio | sh -
```
The most recent stable release is 0.2.12. You can [download 0.2.12](https://github.com/istio/istio/releases/tag/0.2.12) with:
```
```bash
curl -L https://git.io/getIstio | sh -
```
[Archived documentation for the 0.2.12 release](https://archive.istio.io/v0.2/docs/).
> As we don't control the `git.io` domain, please examine the output of the `curl` command before piping it to a shell if running in any sensitive or non sandboxed environment.
> As we don't control the `git.io` domain, please examine the output of the `curl` command before piping it to a shell if running in any
sensitive or non-sandboxed environment.
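
One way to follow that advice (an illustrative sketch rather than a documented procedure) is to download the script to a file, read it, and only then run it:

```bash
# Download the installer, inspect it, then run it explicitly rather than piping
# the download straight into a shell.
curl -L https://git.io/getLatestIstio -o getLatestIstio
less getLatestIstio   # examine the script before executing it
sh getLatestIstio
```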

View File

@ -51,7 +51,7 @@ Google, IBM and Lyft joined forces to create Istio from a desire to provide a re
caption='Zipkin Dashboard'
%}
**Resiliency and efficiency**: When developing microservices, operators need to assume that the network will be unreliable. Operators can use retries, load balancing, flow-control (HTTP/2), and circuit-breaking to compensate for some of the common failure modes due to an unreliable network. Istio provides a uniform approach to configuring these features, making it easier to operate a highly resilient service mesh.
**Resiliency and efficiency**: When developing microservices, operators need to assume that the network will be unreliable. Operators can use retries, load balancing, flow-control (HTTP/2), and circuit-breaking to compensate for some of the common failure modes due to an unreliable network. Istio provides a uniform approach to configuring these features, making it easier to operate a highly resilient service mesh.
**Developer productivity**: Istio provides a significant boost to developer productivity by letting them focus on building service features in their language of choice, while Istio handles resiliency and networking challenges in a uniform way. Developers are freed from having to bake solutions to distributed systems problems into their code. Istio further improves productivity by providing common functionality supporting A/B testing, canarying, and fault injection.
@ -59,14 +59,14 @@ Google, IBM and Lyft joined forces to create Istio from a desire to provide a re
**Secure by default**: It is a common fallacy of distributed computing that the network is secure. Istio enables operators to authenticate and secure all communication between services using a mutual TLS connection, without burdening the developer or the operator with cumbersome certificate management tasks. Our security framework is aligned with the emerging [SPIFFE](https://spiffe.github.io/) specification, and is based on similar systems that have been tested extensively inside Google.
**Incremental Adoption**: We designed Istio to be completely transparent to the services running in the mesh, allowing teams to incrementally adopt features of Istio over time. Adopters can start with enabling fleet-wide visibility and once they're comfortable with Istio in their environment they can switch on other features as needed.
**Incremental Adoption**: We designed Istio to be completely transparent to the services running in the mesh, allowing teams to incrementally adopt features of Istio over time. Adopters can start with enabling fleet-wide visibility and once they're comfortable with Istio in their environment they can switch on other features as needed.
## Join us in this journey
Istio is a completely open development project. Today we are releasing version 0.1, which works in a Kubernetes cluster, and we plan to have major new
releases every 3 months, including support for additional environments. Our goal is to enable developers and operators to rollout and operate microservices
with agility, complete visibility of the underlying network, and uniform control and security in all environments. We look forward to working with the Istio
community and our partners towards these goals, following our [roadmap]({{home}}/docs/reference/release-roadmap.html).
Istio is a completely open development project. Today we are releasing version 0.1, which works in a Kubernetes cluster, and we plan to have major new
releases every 3 months, including support for additional environments. Our goal is to enable developers and operators to rollout and operate microservices
with agility, complete visibility of the underlying network, and uniform control and security in all environments. We look forward to working with the Istio
community and our partners towards these goals, following our [roadmap]({{home}}/docs/reference/release-roadmap.html).
Visit [here](https://github.com/istio/istio/releases) to get the latest released bits.
@ -75,9 +75,9 @@ View the [presentation]({{home}}/talks/istio_talk_gluecon_2017.pdf) from GlueCon
## Community
We are excited to see early commitment to support the project from many companies in the community:
[Red Hat](https://blog.openshift.com/red-hat-istio-launch/) with Red Hat Openshift and OpenShift Application Runtimes,
[Red Hat](https://blog.openshift.com/red-hat-istio-launch/) with Red Hat OpenShift and OpenShift Application Runtimes,
Pivotal with [Pivotal Cloud Foundry](https://content.pivotal.io/blog/pivotal-and-istio-advancing-the-ecosystem-for-microservices-in-the-enterprise),
Weaveworks with [Weave Cloud](https://www.weave.works/blog/istio-weave-cloud/) and Weave Net 2.0,
WeaveWorks with [Weave Cloud](https://www.weave.works/blog/istio-weave-cloud/) and Weave Net 2.0,
[Tigera](https://www.projectcalico.org/welcoming-istio-to-the-kubernetes-networking-community) with the Project Calico Network Policy Engine
and [Datawire](https://www.datawire.io/istio-and-datawire-ecosystem/) with the Ambassador project. We hope to see many more companies join us in
this journey.
@ -87,12 +87,12 @@ To get involved, connect with us via any of these channels:
* [istio.io]({{home}}) for documentation and examples.
* The [istio-users@googlegroups.com](https://groups.google.com/forum/#!forum/istio-users) mailing list for general discussions,
or [istio-announce@googlegroups.com](https://groups.google.com/forum/#!forum/istio-announce) for key announcements regarding the project.
or [istio-announce@googlegroups.com](https://groups.google.com/forum/#!forum/istio-announce) for key announcements regarding the project.
* [Stack Overflow](https://stackoverflow.com/questions/tagged/istio) for curated questions and answers
* [GitHub](https://github.com/istio/issues/issues) for filing issues
* [@IstioMesh](https://twitter.com/IstioMesh) on Twitter
* [@IstioMesh](https://twitter.com/IstioMesh) on Twitter
From everyone working on Istio, welcome aboard!

View File

@ -16,23 +16,23 @@ redirect_from:
{% include home.html %}
Conventional network security approaches fail to address security threats to distributed applications deployed in dynamic production environments. Today, we describe how Istio Auth enables enterprises to transform their security posture from just protecting the edge to consistently securing all inter-service communications deep within their applications. With Istio Auth, developers and operators can protect services with sensitive data against unauthorized insider access and they can achieve this without any changes to the application code!
Istio Auth is the security component of the broader [Istio platform]({{home}}/). It incorporates the learnings of securing millions of microservice
Istio Auth is the security component of the broader [Istio platform]({{home}}/). It incorporates the learnings of securing millions of microservice
endpoints in Google's production environment.
## Background
Modern application architectures are increasingly based on shared services that are deployed and scaled dynamically on cloud platforms. Traditional network edge security (e.g. firewall) is too coarse-grained and allows access from unintended clients. An example of a security risk is stolen authentication tokens that can be replayed from another client. This is a major risk for companies with sensitive data that are concerned about insider threats. Other network security approaches like IP whitelists have to be statically defined, are hard to manage at scale, and are unsuitable for dynamic production environments.
Thus, security administrators need a tool that enables them to consistently, and by default, secure all communication between services across diverse production environments.
Thus, security administrators need a tool that enables them to consistently, and by default, secure all communication between services across diverse production environments.
## Solution: strong service identity and authentication
Google has, over the years, developed architecture and technology to uniformly secure millions of microservice endpoints in its production environment against
external
attacks and insider threats. Key security principles include trusting the endpoints and not the network, strong mutual authentication based on service identity and service level authorization. Istio Auth is based on the same principles.
Google has, over the years, developed architecture and technology to uniformly secure millions of microservice endpoints in its production environment against
external
attacks and insider threats. Key security principles include trusting the endpoints and not the network, strong mutual authentication based on service identity and service level authorization. Istio Auth is based on the same principles.
The version 0.1 release of Istio Auth runs on Kubernetes and provides the following features:
The version 0.1 release of Istio Auth runs on Kubernetes and provides the following features:
* Strong identity assertion between services
@ -67,7 +67,7 @@ Istio Auth uses [Kubernetes service accounts](https://kubernetes.io/docs/tasks/c
### Communication security
Service-to-service communication is tunneled through high performance client side and server side [Envoy](https://envoyproxy.github.io/envoy/) proxies. The communication between the proxies is secured using mutual TLS. The benefit of using mutual TLS is that the service identity is not expressed as a bearer token that can be stolen or replayed from another source. Istio Auth also introduces the concept of Secure Naming to protect from server spoofing attacks - the client side proxy verifies that the authenticated server's service account is allowed to run the named service.
Service-to-service communication is tunneled through high performance client side and server side [Envoy](https://envoyproxy.github.io/envoy/) proxies. The communication between the proxies is secured using mutual TLS. The benefit of using mutual TLS is that the service identity is not expressed as a bearer token that can be stolen or replayed from another source. Istio Auth also introduces the concept of Secure Naming to protect from server spoofing attacks - the client side proxy verifies that the authenticated server's service account is allowed to run the named service.
### Key management and distribution
@ -77,10 +77,10 @@ Istio Auth provides a per-cluster CA (Certificate Authority) and automated key &
* Distributes keys and certificates to the appropriate pods using [Kubernetes Secrets](https://kubernetes.io/docs/concepts/configuration/secret/).
* Rotates keys and certificates periodically.
* Rotates keys and certificates periodically.
* Revokes a specific key and certificate pair when necessary (future).
The following diagram explains the end to end Istio Auth authentication workflow on Kubernetes:
{% include figure.html width='100%' ratio='56.25%'
@ -95,14 +95,14 @@ Istio Auth is part of the broader security story for containers. Red Hat, a part
## Benefits of Istio Auth
**Defense in depth**: When used in conjunction with Kubernetes (or infrastructure) network policies, users achieve higher levels of confidence, knowing that pod-to-pod or service-to-service communication is secured both at network and application layers.
**Secure by default**: When used with Istio's proxy and centralized policy engine, Istio Auth can be configured during deployment with minimal or no application change. Administrators and operators can thus ensure that service communications are secured by default and that they can enforce these policies consistently across diverse protocols and runtimes.
**Strong service authentication**: Istio Auth secures service communication using mutual TLS to ensure that the service identity is not expressed as a bearer token that can be stolen or replayed from another source. This ensures that services with sensitive data can only be accessed from strongly authenticated and authorized clients.
**Secure by default**: When used with Istio's proxy and centralized policy engine, Istio Auth can be configured during deployment with minimal or no application change. Administrators and operators can thus ensure that service communications are secured by default and that they can enforce these policies consistently across diverse protocols and runtimes.
**Strong service authentication**: Istio Auth secures service communication using mutual TLS to ensure that the service identity is not expressed as a bearer token that can be stolen or replayed from another source. This ensures that services with sensitive data can only be accessed from strongly authenticated and authorized clients.
## Join us in this journey
Istio Auth is the first step towards providing a full stack of capabilities to protect services with sensitive data from external attacks and insider
threats. While the initial version runs on Kubernetes, our goal is to enable Istio Auth to secure services across diverse production environments. We encourage the
community to [join us](https://github.com/istio/istio/blob/master/security) in making robust service security easy and ubiquitous across different application
stacks and runtime platforms.
threats. While the initial version runs on Kubernetes, our goal is to enable Istio Auth to secure services across diverse production environments. We encourage the
community to [join us](https://github.com/istio/istio/blob/master/security) in making robust service security easy and ubiquitous across different application
stacks and runtime platforms.

View File

@ -30,8 +30,8 @@ Whether we use one deployment or two, canary management using deployment feature
With Istio, traffic routing and replica deployment are two completely independent functions. The number of pods implementing services are free to scale up and down based on traffic load, completely orthogonal to the control of version traffic routing. This makes managing a canary version in the presence of autoscaling a much simpler problem. Autoscalers may, in fact, respond to load variations resulting from traffic routing changes, but they are nevertheless functioning independently and no differently than when loads change for other reasons.
Istio's [routing rules]({{home}}/docs/concepts/traffic-management/rules-configuration.html) also provide other important advantages; you can easily control
fine-grained traffic percentages (e.g., route 1% of traffic without requiring 100 pods) and you can control traffic using other criteria (e.g., route traffic for specific users to the canary version). To illustrate, let's look at deploying the **helloworld** service and see how simple the problem becomes.
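As a rough sketch of what such a rule can look like (using the `RouteRule` resource that appears elsewhere in these posts; the exact fields depend on your Istio version), the following would send just 1% of **helloworld** traffic to the canary:
```yaml
apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: helloworld-canary   # illustrative name
spec:
  destination:
    name: helloworld
  route:
  - labels:
      version: v1
    weight: 99   # the stable version keeps almost all traffic
  - labels:
      version: v2
    weight: 1    # 1% goes to the canary, regardless of how many pods each version has
```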
We begin by defining the **helloworld** Service, just like any other Kubernetes service, something like this:
@ -211,7 +211,7 @@ As before, the autoscalers bound to the 2 version Deployments will automatically
## Summary
In this article we've shown how Istio supports general scalable canary deployments, and how this differs from the basic deployment support in Kubernetes. Istio's service mesh provides the control necessary to manage traffic distribution with complete independence from deployment scaling. This allows for a simpler, yet significantly more functional, way to do canary test and rollout.
Intelligent routing in support of canary deployment is just one of the many features of Istio that will make the production deployment of large-scale microservices-based applications much simpler. Check out [istio.io]({{home}}) for more information and to try it out.
The sample code used in this article can be found [here](https://github.com/istio/istio/tree/master/samples/helloworld).
@ -19,8 +19,8 @@ Lets start with the basics: why might you want to use both Istio and Kubernet
| | Istio Policy | Network Policy |
| --------------------- | ----------------- | ------------------ |
| **Layer**             | "Service" --- L7  | "Network" --- L3-4 |
| **Implementation**    | User space        | Kernel             |
| **Enforcement Point** | Pod | Node |
## Layer
@ -33,13 +33,16 @@ In contrast, operating at the network layer has the advantage of being universal
## Implementation
The Istio proxy is based on [Envoy](https://envoyproxy.github.io/envoy/), which is implemented as a user space daemon in the data plane that
interacts with the network layer using standard sockets. This gives it a large amount of flexibility in processing, and allows it to be
distributed (and upgraded!) in a container.
The Network Policy data plane is typically implemented in kernel space (e.g. using iptables, eBPF filters, or even custom kernel modules). Being in kernel space
allows it to be extremely fast, but not as flexible as the Envoy proxy.
## Enforcement Point
Policy enforcement using the Envoy proxy is implemented inside the pod, as a sidecar container in the same network namespace. This allows a simple deployment model. Some containers are given permission to reconfigure the networking inside their pod (CAP_NET_ADMIN). If such a service instance is compromised, or misbehaves (as in a malicious tenant) the proxy can be bypassed.
While this won't let an attacker access other Istio-enabled pods, so long as they are correctly configured, it opens several attack vectors:
@ -14,42 +14,44 @@ redirect_from: "/blog/istio-0.2-announcement.html"
---
{% include home.html %}
We launched Istio, an open platform to connect, manage, monitor, and secure microservices, on May 24, 2017. We have been humbled by the incredible interest and
rapid community growth of developers, operators, and partners. Our 0.1 release was focused on showing all the concepts of Istio in Kubernetes.
Today we are happy to announce the 0.2 release which improves stability and performance, allows for cluster wide deployment and automated injection of sidecars in Kubernetes, adds policy and authentication for TCP services, and enables expansion of the mesh to include services deployed in virtual machines. In addition, Istio can now run outside Kubernetes, leveraging Consul/Nomad or Eureka. Beyond core features, Istio is now ready for extensions to be written by third party companies and developers.
## Highlights for the 0.2 release
### Usability improvements
* _Multiple namespace support_: Istio now works cluster-wide, across multiple namespaces, and this was one of the top requests from the community since the 0.1 release.
* _Policy and security for TCP services_: In addition to HTTP, we have added transparent mutual TLS authentication and policy enforcement for TCP services as well. This will allow you to secure more of your
Kubernetes deployment, and get Istio features like telemetry, policy and security.
* _Automated sidecar injection_: By leveraging the alpha [initializer](https://kubernetes.io/docs/admin/extensible-admission-controllers/#what-are-initializers) feature provided by Kubernetes 1.7, Envoy sidecars can now be automatically injected into application deployments when your cluster has the initializer enabled. This enables you to deploy microservices using `kubectl`, the exact same command that you normally use for deploying the microservices without Istio.
* _Extending Istio_: An improved Mixer design that lets vendors write Mixer adapters to implement support for their own systems, such as application
management or policy enforcement. The
[Mixer Adapter Developer's Guide](https://github.com/istio/istio/blob/master/mixer/doc/adapters.md) can help
you easily integrate your solution with Istio.
* _Bring your own CA certificates_: Allows users to provide their own key and certificate for Istio CA and persistent CA key/certificate storage. Enables storing signing key/certificates in persistent storage to facilitate CA restarts.
* _Improved routing & metrics_: Support for WebSocket, MongoDB and Redis protocols. You can apply resilience features like circuit breakers on traffic to third party services. In addition to Mixer's metrics, hundreds of metrics from Envoy are now visible inside Prometheus for all traffic entering, leaving and within the Istio mesh.
### Cross environment support
* _Mesh expansion_: Istio mesh can now span services running outside of Kubernetes - like those running in virtual machines while enjoying benefits such as automatic mutual TLS authentication, traffic management, telemetry, and policy enforcement across the mesh.
* _Running outside Kubernetes_: We know many customers use other service registry and orchestration solutions like [Consul/Nomad]({{home}}/docs/setup/consul/quick-start.html) and [Eureka]({{home}}/docs/setup/eureka/quick-start.html). Istio Pilot can now run standalone outside Kubernetes, consuming information from these systems, and manage the Envoy fleet in VMs or containers.
## Get involved in shaping the future of Istio
We have a growing [roadmap]({{home}}/docs/reference/release-roadmap.html) ahead of us, full of great features to implement. Our focus next release is going to be on stability, reliability, integration with third party tools and multi-cluster use cases.
To learn how to get involved and contribute to Istio's future, check out our [community](https://github.com/istio/community) GitHub repository which
will introduce you to our working groups, our mailing lists, our various community meetings, our general procedures and our guidelines.
We want to thank our fantastic community for field testing new versions, filing bug reports, contributing code, helping out other community members, and shaping Istio by participating in countless productive discussions. This has enabled the project to accrue 3000 stars on GitHub since launch and hundreds of active community members on Istio mailing lists.
Thank you
@ -26,11 +26,11 @@ In addition to insulating application-level code from the details of infrastruct
Given that individual infrastructure backends each have different interfaces and operational models, Mixer needs custom
code to deal with each and we call these custom bundles of code [*adapters*](https://github.com/istio/istio/blob/master/mixer/doc/adapters.md).
Adapters are Go packages that are directly linked into the Mixer binary. It's fairly simple to create custom Mixer binaries linked with specialized sets of adapters, in case the default set of adapters is not sufficient for specific use cases.
## Philosophy
Mixer is essentially an attribute processing and routing machine. The proxy sends it [attributes]({{home}}/docs/concepts/policy-and-control/attributes.html) as part of doing precondition checks and telemetry reports, which it turns into a series of calls into adapters. The operator supplies configuration which describes how to map incoming attributes to inputs for the adapters.
{% assign url = home | append: "/docs/concepts/policy-and-control/img/mixer-config/machine.svg" %}
{% include figure.html width='60%' ratio='42.60%'
@ -46,7 +46,7 @@ Configuration is a complex task. In fact, evidence shows that the overwhelming m
Each adapter that Mixer uses requires some configuration to operate. Typically, adapters need things like the URL to their backend, credentials, caching options, and so forth. Each adapter defines the exact configuration data it needs via a [protobuf](https://developers.google.com/protocol-buffers/) message.
You configure each adapter by creating [*handlers*]({{home}}/docs/concepts/policy-and-control/mixer-config.html#handlers) for them. A handler is a
configuration resource which represents a fully configured adapter ready for use. There can be any number of handlers for a single adapter, making it possible to reuse an adapter in different scenarios.
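For instance, a handler for the `listchecker` adapter (the same adapter used as an example in the Mixer configuration documentation) might look roughly like this; the `spec` fields are adapter-specific:
```yaml
apiVersion: config.istio.io/v1alpha2
kind: listchecker
metadata:
  name: staticversion
  namespace: istio-system
spec:
  # listchecker-specific configuration: check values against a static list
  overrides: ["v1", "v2"]
  blacklist: false
```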
## Templates: adapter input schema
@ -54,9 +54,9 @@ configuration resource which represents a fully configured adapter ready for use
Mixer is typically invoked twice for every incoming request to a mesh service, once for precondition checks and once for telemetry reporting. For every such call, Mixer invokes one or more adapters. Different adapters need different pieces of data as input in order to do their work. A logging adapter needs a log entry, a metric adapter needs a metric, an authorization adapter needs credentials, etc.
Mixer [*templates*]({{home}}/docs/reference/config/template/) are used to describe the exact data that an adapter consumes at request time.
Each template is specified as a [protobuf](https://developers.google.com/protocol-buffers/) message. A single template describes a bundle of data that is delivered to one or more adapters at runtime. Any given adapter can be designed to support any number of templates; the specific templates the adapter supports are determined by the adapter developer.
[metric]({{home}}/docs/reference/config/template/metric.html) and [logentry]({{home}}/docs/reference/config/template/logentry.html) are two of the most essential templates used within Istio. They represent respectively the payload to report a single metric and a single log entry to appropriate backends.
## Instances: attribute mapping
@ -75,10 +75,10 @@ to a string field. This kind of strong typing is designed to minimize the risk
## Rules: delivering data to adapters
The last piece to the puzzle is telling Mixer which instances to send to which handler and when. This is done by
creating [*rules*]({{home}}/docs/concepts/policy-and-control/mixer-config.html#rules). Each rule identifies a specific handler and the set of
instances to send to that handler. Whenever Mixer processes an incoming call, it invokes the indicated handler and gives it the specific set of instances for processing.
Rules contain matching predicates. A predicate is an attribute expression which returns a true/false value. A rule only takes effect if its predicate expression returns true. Otherwise, it's like the rule didn't exist and the indicated handler isn't invoked.
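Putting it together, a rule might look roughly like the following sketch, which assumes a Prometheus handler and the `requestduration` metric instance mentioned in the Mixer configuration documentation, and only fires for HTTP traffic:
```yaml
apiVersion: config.istio.io/v1alpha2
kind: rule
metadata:
  name: promhttp
  namespace: istio-system
spec:
  # The predicate: the actions below run only when this evaluates to true.
  match: context.protocol == "http"
  actions:
  - handler: handler.prometheus
    instances:
    - requestduration.metric.istio-system
```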
## Future
@ -17,7 +17,7 @@ As [Mixer]({{home}}/docs/concepts/policy-and-control/mixer.html) is in the reque
overall system availability and latency. A common refrain we hear when people first glance at Istio architecture diagrams is
"Isn't this just introducing a single point of failure?"
In this post, we'll dig deeper and cover the design principles that underpin Mixer and the surprising fact that Mixer actually
increases overall mesh availability and reduces average request latency.
Istio's use of Mixer has two main benefits in terms of overall system availability and latency:
@ -43,7 +43,7 @@ The older system was built around a centralized fleet of fairly heavy proxies in
caption="Google's API & Service Management System"
%}
Look familiar? Of course: it's just like Istio! Istio was conceived as a second generation of this distributed proxy architecture. We took the core lessons from this internal system, generalized many of the concepts by working with our partners, and created Istio.
## Architecture recap
@ -95,7 +95,7 @@ We have opportunities ahead to continue improving the system in many ways.
### Config canaries
Mixer is highly scaled so it is generally resistant to individual instance failures. However, Mixer is still susceptible to cascading failures in the case when a poison configuration is deployed which causes all Mixer instances to crash basically at the same time (yeah, that would be a bad day). To prevent this from happening, config changes can be canaried to a small set of Mixer instances, and then more broadly rolled out.
Mixer doesn't yet do canarying of config changes, but we expect this to come online as part of Istio's ongoing work on reliable config distribution.
@ -112,17 +112,17 @@ At the moment, each Mixer instance operates independently of all other instances
In very large meshes, the load on Mixer can be great. There can be a large number of Mixer instances, each straining to keep caches primed to
satisfy incoming traffic. We expect to eventually introduce intelligent sharding such that Mixer instances become slightly specialized in
handling particular data streams in order to increase the likelihood of cache hits. In other words, sharding helps improve cache
efficiency by routing related traffic to the same Mixer instance over time, rather than randomly dispatching to
any available Mixer instance.
## Conclusion
Practical experience at Google showed that the model of a slim sidecar proxy and a large shared caching control plane intermediary hits a sweet
spot, delivering excellent perceived availability and latency. We've taken the lessons learned there and applied them to create more sophisticated and
effective caching, prefetching, and buffering strategies in Istio. We've also optimized the communication protocols to reduce overhead when a cache miss does occur.
Mixer is still young. As of Istio 0.3, we haven't really done significant performance work within Mixer itself. This means when a request misses the sidecar
cache, we spend more time in Mixer to respond to requests than we should. We're doing a lot of work to improve this in coming months to reduce the overhead
that Mixer imparts in the synchronous precondition check case.
We hope this post makes you appreciate the inherent benefits that Mixer brings to Istio.
@ -20,9 +20,10 @@ In this blog post, I modify the [Istio Bookinfo Sample Application]({{home}}/doc
## Bookinfo sample application with external details web service
### Initial setting
To demonstrate the scenario of consuming an external web service, I start with a Kubernetes cluster with [Istio installed]({{home}}/docs/setup/kubernetes/quick-start.html#installation-steps). Then I deploy [Istio Bookinfo Sample Application]({{home}}/docs/guides/bookinfo.html). This application uses the _details_ microservice to fetch book details, such as the number of pages and the publisher. The original _details_ microservice provides the book details without consulting any external service.
The example commands in this blog post work with Istio 0.2+, with or without [Mutual TLS]({{home}}/docs/concepts/security/mutual-tls.html) enabled.
The Bookinfo configuration files required for the scenario of this post appear starting from [Istio release version 0.5](https://github.com/istio/istio/releases/tag/0.5.0).
The Bookinfo configuration files reside in the `samples/bookinfo/kube` directory of the Istio release archive.
@ -38,6 +39,7 @@ Here is a copy of the end-to-end architecture of the application from the origin
%}
### Bookinfo with details version 2
Let's add a new version of the _details_ microservice, _v2_, that fetches the book details from [Google Books APIs](https://developers.google.com/books/docs/v1/getting_started).
```bash
@ -89,7 +91,9 @@ The good news is that our application did not crash. With a good microservice de
So what might have gone wrong? Ah... The answer is that I forgot to enable traffic from inside the mesh to an external service, in this case to the Google Books web service. By default, the Istio sidecar proxies ([Envoy proxies](https://www.envoyproxy.io)) **block all the traffic to destinations outside the cluster**. To enable such traffic, we must define an [egress rule]({{home}}/docs/reference/config/istio.routing.v1alpha1.html#EgressRule).
### Egress rule for Google Books web service
No worries, let's define an **egress rule** and fix our application:
```bash
cat <<EOF | istioctl create -f -
apiVersion: config.istio.io/v1alpha2
@ -115,7 +119,7 @@ Now accessing the web page of the application displays the book details without
caption='Book Details Displayed Correctly'
%}
Note that our egress rule allows traffic to any domain matching _*.googleapis.com_, on port 443, using the HTTPS protocol. Let's assume for the sake of the example that the applications in our Istio service mesh must access multiple subdomains of _googleapis.com_, for example _www.googleapis.com_ and also _fcm.googleapis.com_. Our rule allows traffic to both _www.googleapis.com_ and _fcm.googleapis.com_, since they both match _*.googleapis.com_. This **wildcard** feature allows us to enable traffic to multiple domains using a single egress rule.
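To make the wildcard concrete, the rule created above is shaped roughly like this (a sketch; field names follow the `EgressRule` examples used throughout this post):
```yaml
apiVersion: config.istio.io/v1alpha2
kind: EgressRule
metadata:
  name: googleapis
  namespace: default
spec:
  destination:
    # The wildcard matches www.googleapis.com, fcm.googleapis.com, and so on
    service: "*.googleapis.com"
  ports:
  - port: 443
    protocol: https
```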
We can query our egress rules:
```bash
@ -123,25 +127,30 @@ istioctl get egressrules
```
and see our new egress rule in the output:
```bash
NAME         KIND                                   NAMESPACE
googleapis   EgressRule.v1alpha2.config.istio.io    default
```
We can delete our egress rule:
```bash
istioctl delete egressrule googleapis -n default
```
and see in the output of _istioctl delete_ that the egress rule is deleted:
```bash
Deleted config: egressrule googleapis
```
Accessing the web page after deleting the egress rule produces the same error that we experienced before, namely _Error fetching product details_. As we can see, the egress rules are defined **dynamically**, as are many other Istio configuration artifacts. The Istio operators can decide dynamically which domains they allow the microservices to access. They can enable and disable traffic to the external domains on the fly, without redeploying the microservices.
## Issues with Istio egress traffic control
### TLS origination by Istio
There is a caveat to this story. In HTTPS, all the HTTP details (hostname, path, headers etc.) are encrypted, so Istio cannot know the destination domain of the encrypted requests. Well, Istio could know the destination domain by the [SNI](https://tools.ietf.org/html/rfc3546#section-3.1) (_Server Name Indication_) field. This feature, however, is not yet implemented in Istio. Therefore, currently Istio cannot perform filtering of HTTPS requests based on the destination domains.
To allow Istio to perform filtering of egress requests based on domains, the microservices must issue HTTP requests. Istio then opens an HTTPS connection to the destination (performs TLS origination). The code of the microservices must be written differently or configured differently, according to whether the microservice runs inside or outside an Istio service mesh. This contradicts the Istio design goal of [maximizing transparency]({{home}}/docs/concepts/what-is-istio/goals.html). Sometimes we need to compromise...
@ -157,6 +166,7 @@ sends regular HTTPS requests, encrypted end-to-end. On the bottom, the same micr
%}
Here is how we code this behavior in the [the Bookinfo details microservice code](https://github.com/istio/istio/blob/master/samples/bookinfo/src/details/details.rb), using the Ruby [net/http module](https://docs.ruby-lang.org/en/2.0.0/Net/HTTP.html):
```ruby
uri = URI.parse('https://www.googleapis.com/books/v1/volumes?q=isbn:' + isbn)
http = Net::HTTP.new(uri.host, uri.port)
@ -170,7 +180,10 @@ Note that the port is derived by the `URI.parse` from the URI's schema (https://
When the `WITH_ISTIO` environment variable is defined, the request is performed without SSL (plain HTTP).
We set the `WITH_ISTIO` environment variable to _"true"_ in the
[Kubernetes deployment spec of _details v2_](https://github.com/istio/istio/blob/master/samples/bookinfo/kube/bookinfo-details-v2.yaml),
the `container` section:
```yaml
env:
- name: WITH_ISTIO
@ -178,22 +191,27 @@ env:
```
#### Relation to Istio mutual TLS
Note that the TLS origination in this case is unrelated to [the mutual TLS]({{home}}/docs/concepts/security/mutual-tls.html) applied by Istio. The TLS origination for the external services will work, whether the Istio mutual TLS is enabled or not. The **mutual** TLS secures service-to-service communication **inside** the service mesh and provides each service with a strong identity. In the case of the **external services**, we have **one-way** TLS, the same mechanism used to secure communication between a web browser and a web server. TLS is applied to the communication with external services to verify the identity of the external server and to encrypt the traffic.
### Malicious microservices threat
Another issue is that the egress rules are currently **not a security feature**; they only **enable** traffic to external services. For HTTP-based protocols, the rules are based on domains. Istio does not check that the destination IP of the request matches the _Host_ header. This means that a malicious microservice inside a service mesh could trick Istio to allow traffic to a malicious IP. The attack is to set one of the domains allowed by some existing Egress Rule as the _Host_ header of the malicious request.
Securing egress traffic is currently not supported in Istio and should be performed elsewhere, for example by a firewall or by an additional proxy outside Istio. Right now, we're working to enable the application of Mixer security policies on the egress traffic and to prevent the attack described above.
### No tracing, telemetry, and no Mixer checks
Note that currently no tracing and telemetry information can be collected for the egress traffic. Mixer policies cannot be applied. We are working to fix this in future Istio releases.
## Future work
In my next blog posts I will demonstrate Istio egress rules for TCP traffic and will show examples of combining routing rules and egress rules.
In Istio, we are working on making Istio egress traffic more secure, and in particular on enabling tracing, telemetry, and Mixer checks for the egress traffic.
## Conclusion
In this blog post I demonstrated how the microservices in an Istio service mesh can consume external web services via HTTPS. By default, Istio blocks all the traffic to the hosts outside the cluster. To enable such traffic, egress rules must be created for the service mesh. It is possible to access the external sites by HTTPS; however, the microservices must issue HTTP requests while Istio performs TLS origination. Currently, no tracing, telemetry, or Mixer checks are enabled for the egress traffic. Egress rules are currently not a security feature, so additional mechanisms are required for securing egress traffic. We're working to enable logging/telemetry and security policies for the egress traffic in future releases.
To read more about Istio egress traffic control, see [Control Egress Traffic Task]({{home}}/docs/tasks/traffic-management/egress.html).
@ -16,9 +16,11 @@ redirect_from: "/blog/egress-tcp.html"
In my previous blog post, [Consuming External Web Services]({{home}}/blog/2018/egress-https.html), I described how external services can be consumed by in-mesh Istio applications via HTTPS. In this post, I demonstrate consuming external services over TCP. I use the [Istio Bookinfo sample application]({{home}}/docs/guides/bookinfo.html), the version in which the book ratings data is persisted in a MySQL database. I deploy this database outside the cluster and configure the _ratings_ microservice to use it. I define an [egress rule]({{home}}/docs/reference/config/istio.routing.v1alpha1.html#EgressRule) to allow the in-mesh applications to access the external database.
## Bookinfo sample application with external ratings database
First, I set up a MySQL database instance to hold book ratings data, outside my Kubernetes cluster. Then I modify the [Bookinfo sample application]({{home}}/docs/guides/bookinfo.html) to use my database.
### Setting up the database for ratings data
For this task I set up an instance of [MySQL](https://www.mysql.com). You can use any MySQL instance; I use [Compose for MySQL](https://www.ibm.com/cloud/compose/mysql). I use `mysqlsh` ([MySQL Shell](https://dev.mysql.com/doc/refman/5.7/en/mysqlsh.html)) as a MySQL client to feed the ratings data.
1. To initialize the database, I run the following command entering the password when prompted. The command is performed with the credentials of the `admin` user, created by default by [Compose for MySQL](https://www.ibm.com/cloud/compose/mysql).
@ -35,7 +37,7 @@ For this task I set up an instance of [MySQL](https://www.mysql.com). You can us
mysql -u root -p
```
1. I then create a user with the name _bookinfo_ and grant it _SELECT_ privilege on the `test.ratings` table:
```bash
mysqlsh --sql --ssl-mode=REQUIRED -u admin -p --host <the database host> --port <the database port> \
-e "CREATE USER 'bookinfo' IDENTIFIED BY '<password you choose>'; GRANT SELECT ON test.ratings to 'bookinfo';"
@ -52,7 +54,7 @@ For this task I set up an instance of [MySQL](https://www.mysql.com). You can us
After running the command to create the user, I will clean my bash history by checking the number of the last command and running `history -d <the number of the command that created the user>`. I don't want the password of the new user to be stored in the bash history. If I'm using `mysql`, I'll remove the last command from `~/.mysql_history` file as well. Read more about password protection of the newly created user in [MySQL documentation](https://dev.mysql.com/doc/refman/5.5/en/create-user.html).
1. I inspect the created ratings to see that everything worked as expected:
```bash
mysqlsh --sql --ssl-mode=REQUIRED -u bookinfo -p --host <the database host> --port <the database port> \
-e "select * from test.ratings;"
@ -83,7 +85,7 @@ For this task I set up an instance of [MySQL](https://www.mysql.com). You can us
+----------+--------+
```
1. I set the ratings temporarily to 1 to provide a visual clue when our database is used by the Bookinfo _ratings_ service:
```bash
mysqlsh --sql --ssl-mode=REQUIRED -u admin -p --host <the database host> --port <the database port> \
-e "update test.ratings set rating=1; select * from test.ratings;"
@ -118,9 +120,10 @@ For this task I set up an instance of [MySQL](https://www.mysql.com). You can us
Now I am ready to deploy a version of the Bookinfo application that will use my database.
### Initial setting of Bookinfo application
To demonstrate the scenario of using an external database, I start with a Kubernetes cluster with [Istio installed]({{home}}/docs/setup/kubernetes/quick-start.html#installation-steps). Then I deploy the [Istio Bookinfo sample application]({{home}}/docs/guides/bookinfo.html). This application uses the _ratings_ microservice to fetch book ratings, a number between 1 and 5. The ratings are displayed as stars for each review. There are several versions of the _ratings_ microservice. Some use [MongoDB](https://www.mongodb.com), others use [MySQL](https://www.mysql.com) as their database.
The example commands in this blog post work with Istio 0.3+, with or without [Mutual TLS]({{home}}/docs/concepts/security/mutual-tls.html) enabled.
As a reminder, here is the end-to-end architecture of the application from the [Bookinfo Guide]({{home}}/docs/guides/bookinfo.html).
@ -133,6 +136,7 @@ As a reminder, here is the end-to-end architecture of the application from the [
%}
### Use the database for ratings data in Bookinfo application
1. I modify the deployment spec of a version of the _ratings_ microservice that uses a MySQL database, to use my database instance. The spec is in `samples/bookinfo/kube/bookinfo-ratings-v2-mysql.yaml` of an Istio release archive. I edit the following lines:
```yaml
@ -147,7 +151,7 @@ As a reminder, here is the end-to-end architecture of the application from the [
```
I replace the values in the snippet above, specifying the database host, port, user, and password. Note that the correct way to work with passwords in a container's environment variables in Kubernetes is [to use secrets](https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets-as-environment-variables). For this example task only, I write the password directly in the deployment spec. **Do not do it** in a real environment! I also assume everyone realizes that `"password"` should not be used as a password...
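As a hedged sketch of the recommended approach, the password could instead come from a Kubernetes Secret; the Secret name, key, and environment variable name below are placeholders for whatever your deployment spec uses:
```yaml
env:
- name: MYSQL_DB_PASSWORD        # placeholder: the variable your spec expects
  valueFrom:
    secretKeyRef:
      name: mysql-credentials    # hypothetical Secret holding the database password
      key: password
```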
1. I apply the modified spec to deploy the version of the _ratings_ microservice, _v2-mysql_, that will use my database.
```bash
kubectl apply -f <(istioctl kube-inject -f samples/bookinfo/kube/bookinfo-ratings-v2-mysql.yaml)
@ -156,7 +160,8 @@ As a reminder, here is the end-to-end architecture of the application from the [
deployment "ratings-v2-mysql" created
```
1. I route all the traffic destined to the _reviews_ service to its _v3_ version. I do this to ensure that the _reviews_ service always calls the _ratings_
service. In addition, I route all the traffic destined to the _ratings_ service to _ratings v2-mysql_ that uses my database. I add routing for both services above by adding two [route rules]({{home}}/docs/reference/config/istio.routing.v1alpha1.html). These rules are specified in `samples/bookinfo/kube/route-rule-ratings-mysql.yaml` of an Istio release archive.
```bash
istioctl create -f samples/bookinfo/kube/route-rule-ratings-mysql.yaml
@ -178,6 +183,7 @@ The updated architecture appears below. Note that the blue arrows inside the mes
Note that the MySQL database is outside the Istio service mesh, or more precisely outside the Kubernetes cluster. The boundary of the service mesh is marked by a dashed line.
### Access the webpage
Let's access the webpage of the application, after [determining the ingress IP and port]({{home}}/docs/guides/bookinfo.html#determining-the-ingress-ip-and-port).
We have a problem... Instead of the rating stars, the message _"Ratings service is currently unavailable"_ is currently displayed below each review:
@ -194,6 +200,7 @@ As in [Consuming External Web Services]({{home}}/blog/2018/egress-https.html), w
We have the same problem as in [Consuming External Web Services]({{home}}/blog/2018/egress-https.html), namely all the traffic outside the Kubernetes cluster, both TCP and HTTP, is blocked by default by the sidecar proxies. To enable such traffic for TCP, an egress rule for TCP must be defined.
### Egress rule for an external MySQL instance
TCP egress rules come to our rescue. I copy the following YAML spec to a text file (let's call it `egress-rule-mysql.yaml`) and edit it to specify the IP of my database instance and its port.
```yaml
@ -211,9 +218,11 @@ spec:
```
Then I run `istioctl` to add the egress rule to the service mesh:
```bash
istioctl create -f egress-rule-mysql.yaml
```
```bash
Created config egress-rule/default/mysql at revision 1954425
```
@ -233,13 +242,17 @@ Note that we see a one-star rating for both displayed reviews, as expected. I ch
As with egress rules for HTTP/HTTPS, we can delete and create egress rules for TCP using `istioctl`, dynamically.
## Motivation for egress TCP traffic control
Some in-mesh Istio applications must access external services, for example legacy systems. In many cases, the access is not performed over HTTP or HTTPS protocols. Other TCP protocols are used, such as database-specific protocols like [MongoDB Wire Protocol](https://docs.mongodb.com/manual/reference/mongodb-wire-protocol/) and [MySQL Client/Server Protocol](https://dev.mysql.com/doc/internals/en/client-server-protocol.html) to communicate with external databases.
Note that in the case of access to external HTTPS services, as described in the [Control Egress Traffic]({{home}}/docs/tasks/traffic-management/egress.html) task, an application must issue HTTP requests to the external service. The Envoy sidecar proxy attached to the pod or the VM will intercept the requests and open an HTTPS connection to the external service. The traffic will be unencrypted inside the pod or the VM, but it will leave the pod or the VM encrypted.
However, sometimes this approach cannot work due to the following reasons:
* The code of the application is configured to use an HTTPS URL and cannot be changed
* The code of the application uses some library to access the external service and that library uses HTTPS only
* There are compliance requirements that do not allow unencrypted traffic, even if the traffic is unencrypted only inside the pod or the VM
In this case, HTTPS can be treated by Istio as _opaque TCP_ and can be handled in the same way as other TCP non-HTTP protocols.
@ -247,6 +260,7 @@ In this case, HTTPS can be treated by Istio as _opaque TCP_ and can be handled i
Next let's see how we define egress rules for TCP traffic.
## Egress rules for TCP traffic
The egress rules for enabling TCP traffic to a specific port must specify `TCP` as the protocol of the port. Additionally, for the [MongoDB Wire Protocol](https://docs.mongodb.com/manual/reference/mongodb-wire-protocol/), the protocol can be specified as `MONGO`, instead of `TCP`.
For the `destination.service` field of the rule, an IP or a block of IPs in [CIDR](https://tools.ietf.org/html/rfc2317) notation must be used.
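For example, a sketch of a TCP egress rule for an external MongoDB instance could look like this; the CIDR block and port are placeholders for your own deployment:
```yaml
apiVersion: config.istio.io/v1alpha2
kind: EgressRule
metadata:
  name: mongo
  namespace: default
spec:
  destination:
    # Placeholder CIDR block covering the external MongoDB instance(s)
    service: 203.0.113.0/24
  ports:
  - port: 27017
    protocol: mongo   # MONGO for protocol-aware handling; plain TCP also works
```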
@ -258,6 +272,7 @@ Note that all the IPs of an external service are not always known. To enable TCP
Also note that the IPs of an external service are not always static, for example in the case of [CDNs](https://en.wikipedia.org/wiki/Content_delivery_network). Sometimes the IPs are static most of the time, but can be changed from time to time, for example due to infrastructure changes. In these cases, if the range of the possible IPs is known, you should specify the range by CIDR blocks (even by multiple egress rules if needed). As an example, see the approach we used in the case of `wikipedia.org`, described in [Control Egress TCP Traffic Task]({{home}}/docs/tasks/traffic-management/egress-tcp.html). If the range of the possible IPs is not known, egress rules for TCP cannot be used and [the external services must be called directly]({{home}}/docs/tasks/traffic-management/egress.html#calling-external-services-directly), circumventing the sidecar proxies.
## Relation to mesh expansion
Note that the scenario described in this post is different from the mesh expansion scenario, described in the
[Integrating Virtual Machines]({{home}}/docs/guides/integrating-vms.html) guide. In that scenario, a MySQL instance runs on an external
(outside the cluster) machine (a bare metal or a VM), integrated with the Istio service mesh. The MySQL service becomes a first-class citizen of the mesh with all the beneficial features of Istio applicable. Among other things, the service becomes addressable by a local cluster domain name, for example by `mysqldb.vm.svc.cluster.local`, and the communication to it can be secured by
@ -266,9 +281,11 @@ service must be registered with Istio. To enable such integration, Istio compone
installed on the machine and the Istio control plane (_Pilot_, _Mixer_, _CA_) must be accessible from it. See the
[Istio Mesh Expansion]({{home}}/docs/setup/kubernetes/mesh-expansion.html) instructions for more details.
In our case, the MySQL instance can run on any machine or can be provisioned as a service by a cloud provider. There is no requirement to integrate the machine
with Istio. The Istio control plane does not have to be accessible from the machine. In the case of MySQL as a service, the machine which MySQL runs on may not be accessible, and installing the required components on it may be impossible. In our case, the MySQL instance is addressable by its global domain name, which could be beneficial if the consuming applications expect to use that domain name. This is especially relevant when that expected domain name cannot be changed in the deployment configuration of the consuming applications.
## Cleanup
1. Drop the _test_ database and the _bookinfo_ user:
```bash
mysqlsh --sql --ssl-mode=REQUIRED -u admin -p --host <the database host> --port <the database port> \
@ -281,7 +298,7 @@ In our case, the MySQL instance can run on any machine or can be provisioned as
```bash
mysql -u root -p -e "drop database test; drop user bookinfo;"
```
1. Remove the route rules:
```bash
istioctl delete -f samples/bookinfo/kube/route-rule-ratings-mysql.yaml
```
@ -289,7 +306,7 @@ In our case, the MySQL instance can run on any machine or can be provisioned as
Deleted config: route-rule/default/ratings-test-v2-mysql
Deleted config: route-rule/default/reviews-test-ratings-v2
```
1. Undeploy _ratings v2-mysql_:
```bash
kubectl delete -f <(istioctl kube-inject -f samples/bookinfo/kube/bookinfo-ratings-v2-mysql.yaml)
```
@ -297,7 +314,7 @@ In our case, the MySQL instance can run on any machine or can be provisioned as
deployment "ratings-v2-mysql" deleted
```
1. Delete the egress rule:
```bash
istioctl delete egressrule mysql -n default
```
@ -306,13 +323,17 @@ In our case, the MySQL instance can run on any machine or can be provisioned as
```
## Future work
In my next blog posts, I will show examples of combining route rules and egress rules, and also examples of accessing external services via Kubernetes _ExternalName_ services.
## Conclusion
In this blog post, I demonstrated how the microservices in an Istio service mesh can consume external services via TCP. By default, Istio blocks all the traffic, TCP and HTTP, to the hosts outside the cluster. To enable such traffic for TCP, TCP egress rules must be created for the service mesh.
## What's next
To read more about Istio egress traffic control:
* for TCP, see [Control Egress TCP Traffic Task]({{home}}/docs/tasks/traffic-management/egress-tcp.html)
* for HTTP/HTTPS, see [Control Egress Traffic Task]({{home}}/docs/tasks/traffic-management/egress.html)
@ -13,11 +13,10 @@ redirect_from: "/blog/traffic-mirroring.html"
---
{% include home.html %}
Trying to enumerate all the possible combinations of test cases for testing services in non-production/test environments can be daunting. In some cases, you'll find that all of the effort that goes into cataloging these use cases doesn't match up to real production use cases. Ideally, we could use live production use cases and traffic to help illuminate all of the feature areas of the service under test that we might miss in more contrived testing environments.
Istio can help here. With the release of [Istio 0.5.0]({{home}}/about/notes/0.5.html), Istio can mirror traffic to help test your services. You can write route rules similar to the following to enable traffic mirroring:
```yaml
apiVersion: config.istio.io/v1alpha2
kind: RouteRule
@ -31,14 +30,14 @@ spec:
- labels:
version: v1
weight: 100
- labels:
version: v2
weight: 0
mirror:
name: httpbin
labels:
version: v2
```
A few things to note here:
@ -47,4 +46,4 @@ A few things to note here:
* You'll need to have the 0-weighted route to hint to Istio to create the proper Envoy cluster under the covers; [this should be ironed out in future releases](https://github.com/istio/istio/issues/3270).
Learn more about mirroring by visiting the [Mirroring Task]({{home}}/docs/tasks/traffic-management/mirroring.html) and see a more
[comprehensive treatment of this scenario on my blog](https://blog.christianposta.com/microservices/traffic-shadowing-with-istio-reduce-the-risk-of-code-release/).
@ -1,7 +1,7 @@
---
title: Attributes
overview: Explains the important notion of attributes, which is a central mechanism for how policies and control are applied to services within the mesh.
order: 10
layout: docs
@ -19,11 +19,13 @@ environment this traffic occurs in. An Istio attribute carries a specific piece
of information such as the error code of an API request, the latency of an API request, or the
original IP address of a TCP connection. For example:
```xxx
request.path: xyz/abc
request.size: 234
request.time: 12:34:56.789 04/17/2017
source.ip: 192.168.0.1
destination.service: example
```
## Attribute vocabulary
@ -1,7 +1,7 @@
---
title: Mixer Configuration
overview: An overview of the key concepts used to configure Mixer.
order: 30
layout: docs
@ -81,18 +81,18 @@ metadata:
namespace: istio-system
spec:
# kind specific configuration.
```
- **apiVersion** - A constant for an Istio release.
- **kind** - A Mixer assigned unique "kind" for every adapter and template.
- **name** - The configuration resource name.
- **namespace** - The namespace in which the configuration resource is applicable.
- **spec** - The `kind`-specific configuration.
### Handlers
[Adapters](./mixer.html#adapters) encapsulate the logic necessary to interface Mixer with specific external infrastructure
backends such as [Prometheus](https://prometheus.io), [New Relic](https://newrelic.com), or [Stackdriver](https://cloud.google.com/logging).
Individual adapters generally need operational parameters in order to do their work. For example, a logging adapter may require
the IP address and port of the log sink.
Here is an example showing how to configure an adapter of kind = `listchecker`. The listchecker adapter checks an input value against a list.
@ -109,7 +109,7 @@ spec:
blacklist: false
```
`{metadata.name}.{kind}.{metadata.namespace}` is the fully qualified name of a handler. The fully qualified name of the above handler is
`staticversion.listchecker.istio-system` and it must be unique.
The schema of the data in the `spec` stanza depends on the specific adapter being configured.
@ -189,8 +189,8 @@ spec:
instances:
- requestduration.metric.istio-system
```
A rule contains a `match` predicate expression and a list of actions to perform if the predicate is true.
An action specifies the list of instances to be delivered to a handler.
A rule must use the fully qualified names of handlers and instances.
If the rule, handlers, and instances are all in the same namespace, the namespace suffix can be elided from the fully qualified name as seen in `handler.prometheus`.
@ -221,7 +221,7 @@ destination_version: destination.labels["version"] | "unknown"
With the above, the `destination_version` label is assigned the value of `destination.labels["version"]`. However if that attribute
is not present, the literal `"unknown"` is used.
The attributes that can be used in attribute expressions must be defined in an
[*attribute manifest*](#manifests) for the deployment. Within the manifest, each attribute has
a type which represents the kind of data that the attribute carries. In the
same way, attribute expressions are also typed, and their type is derived from
@ -244,9 +244,9 @@ Mixer goes through the following steps to arrive at the set of `actions`.
1. Extract the value of the identity attribute from the request.
1. Extract the service namespace from the identity attribute.
1. Evaluate the `match` predicate for all rules in the `configDefaultNamespace` and the service namespace.
The actions resulting from these steps are performed by Mixer.
@ -291,4 +291,4 @@ configuration](https://github.com/istio/istio/blob/master/mixer/testdata/config)
## What's next
* Read the [blog post]({{home}}/blog/mixer-adapter-model.html) describing Mixer's adapter model.
@ -1,7 +1,7 @@
---
title: Mixer
overview: Architectural deep-dive into the design of Mixer, which provides the policy and control mechanisms within the service mesh.
order: 20
layout: docs
@ -147,4 +147,4 @@ phases:
## What's next
* Read the [blog post]({{home}}/blog/2017/adapter-model.html) describing Mixer's adapter model.
@ -13,18 +13,18 @@ type: markdown
Istio Auth's aim is to enhance the security of microservices and their communication without requiring service code changes. It is responsible for:
* Providing each service with a strong identity that represents its role to enable interoperability across clusters and clouds
* Securing service-to-service communication and end-user-to-service communication
* Providing a key management system to automate key and certificate generation, distribution, rotation, and revocation
## Architecture
The diagram below shows Istio Auth's architecture, which includes three primary components: identity, key management, and communication
security. This diagram describes how Istio Auth is used to secure the service-to-service communication between service 'frontend' running
as the service account 'frontend-team' and service 'backend' running as the service account 'backend-team'. Istio supports services running
on both Kubernetes containers and VM/bare-metal machines.
{% include figure.html width='80%' ratio='56.25%'
img='./img/mutual-tls/auth.svg'
@ -33,7 +33,9 @@ The diagram below shows Istio Auth's architecture, which includes three primary
caption='Istio Auth Architecture'
%}
As illustrated in the diagram, Istio Auth leverages secret volume mount to deliver keys/certs from Istio CA to Kubernetes containers. For services running on
VM/bare-metal machines, we introduce a node agent, which is a process running on each VM/bare-metal machine. It generates the private key and CSR (certificate
signing request) locally, sends CSR to Istio CA for signing, and delivers the generated certificate together with the private key to Envoy.
## Components
@ -41,47 +43,48 @@ As illustrated in the diagram, Istio Auth leverages secret volume mount to deliv
Istio Auth uses [Kubernetes service accounts](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/) to identify who runs the service:
* A service account in Istio has the format "spiffe://\<_domain_\>/ns/\<_namespace_>/sa/\<_serviceaccount_\>".
* _domain_ is currently _cluster.local_. We will support customization of domain in the near future.
* _namespace_ is the namespace of the Kubernetes service account.
* _serviceaccount_ is the Kubernetes service account name.
* A service account is **the identity (or role) a workload runs as**, which represents that workload's privileges. For systems requiring strong security, the
amount of privilege for a workload should not be identified by a random string (e.g., service name, label, etc.), or by the binary that is deployed.
* For example, let's say we have a workload pulling data from a multi-tenant database. If Alice ran this workload, she would be able to pull
a different set of data than if Bob ran this workload.
* Service accounts enable strong security policies by offering the flexibility to identify a machine, a user, a workload, or a group of workloads (different
workloads can run as the same service account).
* The service account a workload runs as won't change during the lifetime of the workload.
* Service account uniqueness can be ensured with a domain name constraint.
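For example, with the current _cluster.local_ domain, the 'frontend' service from the architecture diagram above, running as the service account 'frontend-team', would have an identity of the form "spiffe://cluster.local/ns/default/sa/frontend-team" (the 'default' namespace here is illustrative).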
### Communication security
Service-to-service communication is tunneled through the client side [Envoy](https://envoyproxy.github.io/envoy/) and the server side Envoy. End-to-end communication is secured by:
* Local TCP connections between the service and Envoy
* Mutual TLS connections between proxies
* Secure Naming: during the handshake process, the client side Envoy checks that the service account provided by the server side certificate is allowed to run the target service
### Key management
Istio 0.2 supports services running on both Kubernetes pods and VM/bare-metal machines. We use different key provisioning mechanisms for each scenario.
For services running on Kubernetes pods, the per-cluster Istio CA (Certificate Authority) automates the key & certificate management process. It mainly performs four critical operations:
* Generate a [SPIFFE](https://spiffe.github.io/docs/svid) key and certificate pair for each service account
* Distribute a key and certificate pair to each pod according to the service account
* Rotate keys and certificates periodically
* Revoke a specific key and certificate pair when necessary
For services running on VM/bare-metal machines, the above four operations are performed by Istio CA together with node agents.
@ -91,78 +94,70 @@ The Istio Auth workflow consists of two phases, deployment and runtime. For the
### Deployment phase (Kubernetes Scenario)
1. Istio CA watches Kubernetes API Server, creates a [SPIFFE](https://spiffe.github.io/docs/svid) key and certificate pair for each of the existing and new service accounts, and sends them to API Server.
1. When a pod is created, API Server mounts the key and certificate pair according to the service account using [Kubernetes secrets](https://kubernetes.io/docs/concepts/configuration/secret/).
1. [Pilot]({{home}}/docs/concepts/traffic-management/pilot.html) generates the config with proper key and certificate and secure naming information,
which defines what service account(s) can run a certain service, and passes it to Envoy.
### Deployment phase (VM/bare-metal Machines Scenario)
1. Istio CA creates a gRPC service to take CSR requests.
1. Node agent creates the private key and CSR, sends the CSR to Istio CA for signing.
1. Istio CA validates the credentials carried in the CSR, and signs the CSR to generate the certificate.
1. Node agent delivers the certificate received from Istio CA and the private key to Envoy.
1. The above CSR process repeats periodically for rotation.
### Runtime phase
1. The outbound traffic from a client service is rerouted to its local Envoy.
1. The client side Envoy starts a mutual TLS handshake with the server side Envoy. During the handshake, it also does a secure naming check to verify that the service account presented in the server certificate can run the server service.
1. The traffic is forwarded to the server side Envoy after an mTLS connection is established, and is then forwarded to the server service through local TCP connections.
## Best practices
In this section, we provide a few deployment guidelines and then discuss a real-world scenario.
### Deployment guidelines
* If there are multiple service operators (a.k.a. [SREs](https://en.wikipedia.org/wiki/Site_reliability_engineering)) deploying different services in a cluster (typically in a medium- or large-size cluster), we recommend creating a separate [namespace](https://kubernetes.io/docs/tasks/administer-cluster/namespaces-walkthrough/) for each SRE team to isolate their access. For example, you could create a "team1-ns" namespace for team1, and "team2-ns" namespace for team2, such that both teams won't be able to access each other's services.
* If Istio CA is compromised, all its managed keys and certificates in the cluster may be exposed. We *strongly* recommend running Istio CA on a dedicated namespace (for example, istio-ca-ns), which only cluster admins have access to.
### Example
Let's consider a 3-tier application with three services: photo-frontend, photo-backend, and datastore. Photo-frontend and photo-backend services are managed by the photo SRE team while the datastore service is managed by the datastore SRE team. Photo-frontend can access photo-backend, and photo-backend can access datastore. However, photo-frontend cannot access datastore.
In this scenario, a cluster admin creates 3 namespaces: istio-ca-ns, photo-ns, and datastore-ns. Admin has access to all namespaces, and each team only has
access to its own namespace. The photo SRE team creates 2 service accounts to run photo-frontend and photo-backend respectively in namespace photo-ns. The
datastore SRE team creates 1 service account to run the datastore service in namespace datastore-ns. Moreover, we need to enforce the service access control
in [Istio Mixer]({{home}}/docs/concepts/policy-and-control/mixer.html) such that photo-frontend cannot access datastore.
In this setup, Istio CA is able to provide keys and certificates management for all namespaces, and isolate microservice deployments from each other.
## Future work
* Inter-cluster service-to-service authentication
* Powerful authorization mechanisms: ABAC, RBAC, etc
* Per-service auth enablement support
* Secure Istio components (Mixer, Pilot)
* End-user to service authentication using JWT/OAuth2/OpenID_Connect.
* Support GCP service account
* Unix domain socket for local communication between service and Envoy
* Middle proxy support
* Pluggable key management component

View File

@ -9,15 +9,20 @@ type: markdown
{% include home.html %}
## Overview
Istio Role-Based Access Control (RBAC) provides namespace-level, service-level, and method-level access control for services in the Istio Mesh.
It features:
* Role-based semantics, which are simple and easy to use.
* Service-to-service and endUser-to-Service authorization.
* Flexibility through custom properties support in roles and role-bindings.
## Architecture
The diagram below shows the Istio RBAC architecture. Operators specify Istio RBAC policies. The policies are saved in
the Istio config store.
{% include figure.html width='80%' ratio='56.25%'
img='./img/IstioRBAC.svg'
@ -26,14 +31,14 @@ The diagram below shows Istio RBAC architecture. The admins specify Istio RBAC p
caption='Istio RBAC Architecture'
%}
The Istio RBAC engine does two things:
* **Fetch RBAC policy.** Istio RBAC engine watches for changes on RBAC policy. It fetches the updated RBAC policy if it sees any changes.
* **Authorize Requests.** At runtime, when a request comes, the request context is passed to Istio RBAC engine. RBAC engine evaluates the
request context against the RBAC policies, and returns the authorization result (ALLOW or DENY).
### Request context
In the current release, the Istio RBAC engine is implemented as a [Mixer adapter]({{home}}/docs/concepts/policy-and-control/mixer.html#adapters).
The request context is provided as an instance of the
[authorization template](https://github.com/istio/istio/blob/master/mixer/template/authorization/template.proto). The request context
contains all the information about the request and the environment that an authorization module needs to know. In particular, it has two parts:
@ -44,7 +49,7 @@ or any additional properties about the subject such as namespace, service name.
and any additional properties about the action.
Below we show an example "requestcontext".
```yaml
apiVersion: "config.istio.io/v1alpha2"
kind: authorization
metadata:
@ -66,27 +71,27 @@ Below we show an example "requestcontext".
version: request.headers["version"] | ""
```
## Istio RBAC policy
Istio RBAC introduces `ServiceRole` and `ServiceRoleBinding`, both of which are defined as Kubernetes CustomResourceDefinition (CRD) objects.
* **`ServiceRole`** defines a role for access to services in the mesh.
* **`ServiceRoleBinding`** grants a role to subjects (e.g., a user, a group, a service).
### `ServiceRole`
A `ServiceRole` specification includes a list of rules. Each rule has the following standard fields:
* **services**: A list of service names, which are matched against the `action.service` field of the "requestcontext".
* **methods**: A list of method names which are matched against the `action.method` field of the "requestcontext". In the above "requestcontext",
this is the HTTP or gRPC method. Note that gRPC methods are formatted in the form of "packageName.serviceName/methodName" (case sensitive).
* **paths**: A list of HTTP paths which are matched against the `action.path` field of the "requestcontext". It is ignored in gRPC case.
A `ServiceRole` specification only applies to the **namespace** specified in the `"metadata"` section. The "services" and "methods" are required
fields in a rule. "paths" is optional. If not specified or set to "*", it applies to "any" instance.
Here is an example of a simple role "service-admin", which has full access to all services in "default" namespace.
```yaml
apiVersion: "config.istio.io/v1alpha2"
kind: ServiceRole
metadata:
@ -101,7 +106,7 @@ Here is an example of a simple role "service-admin", which has full access to al
Here is another role "products-viewer", which has read ("GET" and "HEAD") access to service "products.default.svc.cluster.local"
in "default" namespace.
```yaml
apiVersion: "config.istio.io/v1alpha2"
kind: ServiceRole
metadata:
@ -113,13 +118,13 @@ in "default" namespace.
methods: ["GET", "HEAD"]
```
In addition, we support **prefix matching** and **suffix matching** for all the fields in a rule. For example, you can define a "tester" role that
has the following permissions in "default" namespace:
* Full access to all services with prefix "test-" (e.g., "test-bookstore", "test-performance", "test-api.default.svc.cluster.local").
* Read ("GET") access to all paths with "/reviews" suffix (e.g., "/books/reviews", "/events/booksale/reviews", "/reviews")
in service "bookstore.default.svc.cluster.local".
```yaml
apiVersion: "config.istio.io/v1alpha2"
kind: ServiceRole
metadata:
@ -134,15 +139,15 @@ in service "bookstore.default.svc.cluster.local".
methods: ["GET"]
```
In `ServiceRole`, the combination of "namespace"+"services"+"paths"+"methods" defines "how a service (services) is allowed to be accessed".
In some situations, you may need to specify additional constraints that a rule applies to. For example, a rule may apply only to a
certain "version" of a service, or only to services that are labeled "foo". You can easily specify these constraints using
custom fields.
For example, the following `ServiceRole` definition extends the previous "products-viewer" role by adding a constraint on service "version"
being "v1" or "v2". Note that the "version" property is provided by `"action.properties.version"` in "requestcontext".
```yaml
apiVersion: "config.istio.io/v1alpha2"
kind: ServiceRole
metadata:
@ -157,21 +162,21 @@ being "v1" or "v2". Note that the "version" property is provided by `"action.pro
values: ["v1", "v2"]
```
### `ServiceRoleBinding`
A `ServiceRoleBinding` specification includes two parts:
* **roleRef** refers to a `ServiceRole` resource **in the same namespace**.
* A list of **subjects** that are assigned the role.
A subject can be a "user", a "group", or a set of "properties". Each entry ("user", "group", or an entry
in "properties") must match one of the fields ("user", "groups", or an entry in "properties") in the "subject" part of the "requestcontext"
instance.
Here is an example of a `ServiceRoleBinding` resource "test-binding-products", which binds two subjects to the `ServiceRole` "products-viewer":
* user "alice@yahoo.com".
* "reviews.abc.svc.cluster.local" service in "abc" namespace.
```yaml
apiVersion: "config.istio.io/v1alpha2"
kind: ServiceRoleBinding
metadata:
@ -185,13 +190,13 @@ Here is an example of ServiceRoleBinding object "test-binding-products", which b
namespace: "abc"
roleRef:
kind: ServiceRole
name: "products-viewer"
```
If you want to make a service publicly accessible, you can set the subject to `user: "*"`. This will assign a `ServiceRole`
to all users/services.
```yaml
apiVersion: "config.istio.io/v1alpha2"
kind: ServiceRoleBinding
metadata:
@ -220,7 +225,7 @@ earlier in the document](#request-context).
In the following example, Istio RBAC is enabled for the "default" namespace, and the cache duration is set to 30 seconds.
```yaml
apiVersion: "config.istio.io/v1alpha2"
kind: rbac
metadata:
@ -248,4 +253,4 @@ In the following example, Istio RBAC is enabled for "default" namespace. And the
## What's next
Try out the [Istio RBAC with Bookinfo]({{home}}/docs/tasks/security/role-based-access-control.html) sample.

View File

@ -1,14 +1,14 @@
---
title: Fault Injection
overview: Introduces the idea of systematic fault injection that can be used to uncover conflicting failure recovery policies across services.
order: 40
layout: docs
type: markdown
toc: false
---
While Envoy sidecar/proxy provides a host of
[failure recovery mechanisms](./handling-failures.html) to services running
on Istio, it is still
@ -25,12 +25,12 @@ regardless of network level failures, and that more meaningful failures can
be injected at the application layer (e.g., HTTP error codes) to exercise
the resilience of an application.
Operators can configure faults to be injected into requests that match
specific criteria. Operators can further restrict the percentage of
requests that should be subjected to faults. Two types of faults can be
injected: delays and aborts. Delays are timing failures, mimicking
increased network latency, or an overloaded upstream service. Aborts are
crash failures that mimic failures in upstream services. Aborts usually
manifest in the form of HTTP error codes, or TCP connection failures.
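As a hedged sketch under this release's route rule schema (the rule name, service, percentage, and delay value are illustrative), a delay fault might be expressed as:

```yaml
apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: ratings-delay        # illustrative rule name
spec:
  destination:
    name: ratings
  route:
  - labels:
      version: v1
  httpFault:
    delay:
      # inject a 5 second delay into 10% of matching requests
      percent: 10
      fixedDelay: 5s
```

An abort fault is configured analogously under `httpFault.abort`, returning an HTTP error code instead of delaying the request.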
Refer to [Istio's traffic management rules](./rules-configuration.html) for more details.

View File

@ -14,18 +14,22 @@ that can be taken advantage of by the services in an application. Features
include:
1. Timeouts
1. Bounded retries with timeout budgets and variable jitter between retries
1. Limits on number of concurrent connections and requests to upstream services
1. Active (periodic) health checks on each member of the load balancing pool
1. Fine-grained circuit breakers (passive health checks) -- applied per
instance in the load balancing pool
These features can be dynamically configured at runtime through
[Istio's traffic management rules](./rules-configuration.html).
The jitter between retries minimizes the impact of retries on an overloaded
upstream service, while timeout budgets ensure that the calling service
gets a response (success/failure) within a predictable time frame.
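For example, a minimal sketch of such defaults using this release's route rule schema (the rule name, service, and values are illustrative) might set a timeout and a retry budget for the ratings service:

```yaml
apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: ratings-timeout      # illustrative rule name
spec:
  destination:
    name: ratings
  route:
  - labels:
      version: v1
  httpReqTimeout:
    simpleTimeout:
      timeout: 10s           # overall timeout budget for the request
  httpReqRetries:
    simpleRetry:
      attempts: 3            # bounded retries
      perTryTimeout: 2s      # each attempt gets its own timeout
```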
A combination of active and passive health checks (the last two features above)
minimizes the chances of accessing an unhealthy instance in the load
@ -37,7 +41,7 @@ mesh, minimizing the request failures and impact on latency.
Together, these features enable the service mesh to tolerate failing nodes
and prevent localized failures from cascading instability to other nodes.
## Fine tuning
Istio's traffic management rules allow
operators to set global defaults for failure recovery per
@ -48,7 +52,7 @@ and
defaults by providing request-level overrides through special HTTP headers.
With the Envoy proxy implementation, the headers are "x-envoy-upstream-rq-timeout-ms" and
"x-envoy-max-retries", respectively.
## FAQ
@ -61,15 +65,15 @@ a load balancing pool have failed, Envoy will return HTTP 503. It is the
responsibility of the application to implement any fallback logic that is
needed to handle the HTTP 503 error code from an upstream service.
_1. Will Envoy's failure recovery features break applications that already
use fault tolerance libraries (e.g., [Hystrix](https://github.com/Netflix/Hystrix))?_
No. Envoy is completely transparent to the application. A failure response
returned by Envoy would not be distinguishable from a failure response
returned by the upstream service to which the call was made.
_1. How will failures be handled when using application-level libraries and
Envoy at the same time?_
Given two failure recovery policies for the same destination service (e.g.,
two timeouts -- one set in Envoy and another in application's library), **the

View File

@ -1,7 +1,7 @@
---
title: Discovery & Load Balancing
overview: Describes how traffic is load balanced across instances of a service in the mesh.
order: 25
layout: docs
@ -22,7 +22,7 @@ applications.
**Service Discovery:** Pilot consumes information from the service
registry and provides a platform-agnostic service discovery
interface. Envoy instances in the mesh perform service discovery and
dynamically update their load balancing pools accordingly.
{% include figure.html width='80%' ratio='74.79%'
@ -40,7 +40,6 @@ the load balancing pool. While Envoy supports several
Istio currently allows three load balancing modes:
round robin, random, and weighted least request.
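For illustration, here is a minimal sketch of selecting one of these modes with a destination policy (the schema follows this release's `DestinationPolicy` resource; the policy name is illustrative and the reviews service is from the Bookinfo sample):

```yaml
apiVersion: config.istio.io/v1alpha2
kind: DestinationPolicy
metadata:
  name: reviews-lb-policy    # illustrative policy name
spec:
  destination:
    name: reviews
  loadBalancing:
    name: RANDOM             # or ROUND_ROBIN / LEAST_CONN
```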
In addition to load balancing, Envoy periodically checks the health of each
instance in the pool. Envoy follows a circuit breaker style pattern to
classify instances as unhealthy or healthy based on their failure rates for

View File

@ -1,7 +1,7 @@
---
title: Overview
overview: Provides a conceptual overview of traffic management in Istio and the features it enables.
order: 0
layout: docs

View File

@ -1,7 +1,7 @@
---
title: Pilot
overview: Introduces Pilot, the component responsible for managing a distributed deployment of Envoy proxies in the service mesh.
order: 10
layout: docs
@ -37,6 +37,6 @@ and [routing tables](https://www.envoyproxy.io/docs/envoy/latest/configuration/h
These APIs decouple Envoy from platform-specific nuances, simplifying the
design and increasing portability across platforms.
Operators can specify high-level traffic management rules through
[Pilot's Rules API]({{home}}/docs/reference/config/istio.routing.v1alpha1.html). These rules are translated into low-level
configurations and distributed to Envoy instances via the discovery API.

View File

@ -1,7 +1,7 @@
---
title: Request Routing
overview: Describes how requests are routed between services in an Istio service mesh.
order: 20
layout: docs
@ -21,7 +21,6 @@ etc.). Platform-specific adapters are responsible for populating the
internal model representation with various fields from the metadata found
in the platform.
Istio introduces the concept of a service version, which is a finer-grained
way to subdivide service instances by versions (`v1`, `v2`) or environment
(`staging`, `prod`). These variants are not necessarily different API

View File

@ -37,7 +37,7 @@ spec:
The destination is the name of the service to which the traffic is being
routed. The route *labels* identify the specific service instances that will
receive traffic. For example, in a Kubernetes deployment of Istio, the route
*label* "version: v1" indicates that only pods containing the label "version: v1"
will receive traffic.
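As a hedged sketch of a complete rule of this form (the rule name, precedence, and weights are illustrative; the reviews service is from the Bookinfo sample), traffic could be split across two versions like so:

```yaml
apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: reviews-default      # illustrative rule name
spec:
  destination:
    name: reviews
  precedence: 1
  route:
  - labels:
      version: v1
    weight: 75               # 75% of traffic to pods labeled version: v1
  - labels:
      version: v2
    weight: 25               # 25% of traffic to pods labeled version: v2
```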
@ -76,7 +76,7 @@ domain name (FQDN). It is used by Istio Pilot for matching rules to services.
Normally, the FQDN of the service is composed from three components: *name*,
*namespace*, and *domain*:
```xxx
FQDN = name + "." + namespace + "." + domain
```
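For example, in a Kubernetes deployment of Istio that uses the default `svc.cluster.local` domain, the Bookinfo reviews service in the default namespace would have the FQDN `reviews.default.svc.cluster.local`.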

View File

@ -1,7 +1,7 @@
---
title: Design Goals
overview: Describes the core principles that Istio's design adheres to.
order: 20
layout: docs
@ -10,29 +10,29 @@ type: markdown
This page outlines the core principles that guide Istio's design.
Istio's architecture is informed by a few key design goals that are essential to making the system capable of dealing with services at scale and with high
performance.
- **Maximize Transparency**.
To adopt Istio, an operator or developer should be required to do the minimum amount of work possible to get real value from the system. To this end, Istio
can automatically inject itself into all the network paths between services. Istio uses sidecar proxies to capture traffic, and where possible, automatically
program the networking layer to route traffic through those proxies without any changes to the deployed application code. In Kubernetes, the proxies are
injected into pods and traffic is captured by programming iptables rules. Once the sidecar proxies are injected and traffic routing is programmed, Istio is
able to mediate all traffic. This principle also applies to performance. When applying Istio to a deployment, operators should see a minimal increase in
resource costs for the
functionality being provided. Components and APIs must all be designed with performance and scale in mind.
- **Incrementality**.
As operators and developers become more dependent on the functionality that Istio provides, the system must grow with their needs. While we expect to
continue adding new features ourselves, we expect the greatest need will be the ability to extend the policy system, to integrate with other sources of policy and control and to propagate signals about mesh behavior to other systems for analysis. The policy runtime supports a standard extension mechanism for plugging in other services. In addition, it allows for the extension of its vocabulary to allow policies to be enforced based on new signals that the mesh produces.
- **Portability**.
The ecosystem in which Istio will be used varies along many dimensions. Istio must run on any cloud or on-prem environment with minimal effort. The task of
porting Istio-based services to new environments should be trivial, and it should be possible to operate a single service deployed into multiple
environments (on multiple clouds for redundancy for example) using Istio.
- **Policy Uniformity**.
The application of policy to API calls between services provides a great deal of control over mesh behavior, but it can be equally important to apply
policies to resources which are not necessarily expressed at the API level. For example, applying quota to the amount of CPU consumed by an ML training task
is more useful than applying quota to the call which initiated the work. To this end, the policy system is maintained as a distinct service with its own API
rather than being baked into the proxy/sidecar, allowing services to directly integrate with it as needed.

View File

@ -1,7 +1,7 @@
---
title: Overview
overview: Provides a conceptual introduction to Istio, including the problems it solves and its high-level architecture.
order: 15
layout: docs
@ -19,7 +19,7 @@ For detailed conceptual information about Istio components see our other [Concep
Istio addresses many of the challenges faced by developers and operators as monolithic applications transition towards a distributed microservice architecture. The term **service mesh** is often used to describe the network of
microservices that make up such applications and the interactions between them. As a service mesh grows in size and complexity, it can become harder to understand
and manage. Its requirements can include discovery, load balancing, failure recovery, metrics, and monitoring, and often more complex operational requirements
such as A/B testing, canary releases, rate limiting, access control, and end-to-end authentication.
Istio provides a complete solution to satisfy the diverse requirements of microservice applications by providing
@ -28,10 +28,10 @@ network of services:
- **Traffic Management**. Control the flow of traffic and API calls between services, make calls more reliable, and make the network more robust in the face
of adverse conditions.
- **Observability**. Gain understanding of the dependencies between services and the nature and flow of traffic between them, providing the ability to quickly identify issues.
- **Policy Enforcement**. Apply organizational policy to the interaction between services, ensure access policies are enforced and resources are fairly
distributed among consumers. Policy changes are made by configuring the mesh, not by changing application code.
- **Service Identity and Security**. Provide services in the mesh with a verifiable identity and provide the ability to protect service traffic
@ -39,14 +39,14 @@ as it flows over networks of varying degrees of trustability.
In addition to these behaviors, Istio is designed for extensibility to meet diverse deployment needs:
- **Platform Support**. Istio is designed to run in a variety of environments including ones that span Cloud, on-premise, Kubernetes, Mesos etc. We're
initially focused on Kubernetes but are working to support other environments soon.
- **Integration and Customization**. The policy enforcement component can be extended and customized to integrate with existing solutions for
ACLs, logging, monitoring, quotas, auditing and more.
These capabilities greatly decrease the coupling between application code, the underlying platform, and policy. This decreased coupling not only makes
services easier to implement, but also makes it simpler for operators to move application deployments between environments or to new policy schemes.
Applications become inherently more portable as a result.
## Architecture
@ -56,7 +56,7 @@ An Istio service mesh is logically split into a **data plane** and a **control p
- The **data plane** is composed of a set of intelligent
proxies (Envoy) deployed as sidecars that mediate and control all network communication between microservices.
- The **control plane** is responsible for managing and
configuring proxies to route traffic, as well as enforcing policies at runtime.
The following diagram shows the different components that make up each plane:
@ -70,7 +70,7 @@ The following diagram shows the different components that make up each plane:
### Envoy
Istio uses an extended version of the [Envoy](https://envoyproxy.github.io/envoy/) proxy, a high-performance proxy developed in C++, to mediate all inbound and outbound traffic for all services in the service mesh.
Istio leverages Envoy's many built-in features such as dynamic service discovery, load balancing, TLS termination, HTTP/2 & gRPC proxying, circuit breakers,
health checks, staged rollouts with %-based traffic split, fault injection, and rich metrics.
@ -78,8 +78,8 @@ Envoy is deployed as a **sidecar** to the relevant service in the same Kubernete
### Mixer
[Mixer]({{home}}/docs/concepts/policy-and-control/mixer.html) is a platform-independent component responsible for enforcing access control and usage policies across the service mesh and collecting telemetry data from the Envoy proxy and other
services. The proxy extracts request level [attributes]({{home}}/docs/concepts/policy-and-control/attributes.html), which are sent to Mixer for evaluation. More information on this attribute extraction and policy
evaluation can be found in [Mixer Configuration]({{home}}/docs/concepts/policy-and-control/mixer-config.html). Mixer includes a flexible plugin model enabling it to interface with a variety of host environments and infrastructure backends, abstracting the Envoy proxy and Istio-managed services from these details.
### Pilot
@ -90,11 +90,11 @@ for intelligent routing (e.g., A/B tests, canary deployments, etc.),
and resiliency (timeouts, retries, circuit breakers, etc.). It converts
high level routing rules that control traffic behavior into Envoy-specific
configurations, and propagates them to the sidecars at runtime. Pilot
abstracts platform-specific service discovery mechanisms and synthesizes
them into a standard format consumable by any sidecar that conforms to the
[Envoy data plane APIs](https://github.com/envoyproxy/data-plane-api).
This loose coupling allows Istio to run on multiple environments
(e.g., Kubernetes, Consul/Nomad) while maintaining the same operator
interface for traffic management.
### Istio-Auth
@ -107,10 +107,10 @@ role-based access control as well as authorization hooks.
## What's next
- Learn about Istio's [design goals]({{home}}/docs/concepts/what-is-istio/goals.html).
- Explore our [Guides]({{home}}/docs/guides/).
- Read about Istio components in detail in our other [Concepts]({{home}}/docs/concepts/) guides.
- Learn how to deploy Istio with your own services using our [Tasks]({{home}}/docs/tasks/) guides.

View File

@ -75,7 +75,7 @@ To start the application, follow the instructions below corresponding to your Is
### Running on Kubernetes
> If you use GKE, please ensure your cluster has at least 4 standard GKE nodes. If you use Minikube, please ensure you have at least 4GB RAM.
1. Change directory to the root of the Istio installation directory.
@ -113,7 +113,7 @@ To start the application, follow the instructions below corresponding to your Is
```
which produces the following output:
```bash
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
details 10.0.0.31 <none> 9080/TCP 6m
@ -128,9 +128,9 @@ To start the application, follow the instructions below corresponding to your Is
```bash
kubectl get pods
```
which produces
```bash
NAME READY STATUS RESTARTS AGE
details-v1-1520924117-48z17 2/2 Running 0 6m
@ -157,7 +157,7 @@ To start the application, follow the instructions below corresponding to your Is
```
The address of the ingress service would then be
```bash
export GATEWAY_URL=130.211.10.121:80
```
@ -183,7 +183,7 @@ To start the application, follow the instructions below corresponding to your Is
```
1. _Minikube:_ External load balancers are not supported in Minikube. You can use the host IP of the ingress service, along with the NodePort, to access the ingress.
```bash
export GATEWAY_URL=$(kubectl get po -l istio=ingress -n istio-system -o 'jsonpath={.items[0].status.hostIP}'):$(kubectl get svc istio-ingress -n istio-system -o 'jsonpath={.spec.ports[0].nodePort}')
```
@ -194,12 +194,12 @@ To start the application, follow the instructions below corresponding to your Is
1. Bring up the application containers.
- To test with Consul, run the following commands:
```bash
docker-compose -f samples/bookinfo/consul/bookinfo.yaml up -d
docker-compose -f samples/bookinfo/consul/bookinfo.sidecars.yaml up -d
```
- To test with Eureka, run the following commands:
```bash
docker-compose -f samples/bookinfo/eureka/bookinfo.yaml up -d
docker-compose -f samples/bookinfo/eureka/bookinfo.sidecars.yaml up -d
@ -225,7 +225,7 @@ To confirm that the Bookinfo application is running, run the following `curl` co
```bash
curl -o /dev/null -s -w "%{http_code}\n" http://${GATEWAY_URL}/productpage
```
```xxx
200
```
@ -236,7 +236,7 @@ stars, black stars, no stars), since we haven't yet used Istio to control the
version routing.
You can now use this sample to experiment with Istio's features for
traffic routing, fault injection, rate limiting, etc.
To proceed, refer to one or more of the [Istio Guides]({{home}}/docs/guides),
depending on your interest. [Intelligent Routing]({{home}}/docs/guides/intelligent-routing.html)
is a good place to start for beginners.
@ -270,14 +270,14 @@ uninstall and clean it up using the following instructions.
```bash
samples/bookinfo/consul/cleanup.sh
```
1. In a Eureka setup, run the following command:
```bash
samples/bookinfo/eureka/cleanup.sh
```
1. Confirm cleanup
```bash
istioctl get routerules #-- there should be no more routing rules

View File

@ -12,9 +12,9 @@ This sample deploys the Bookinfo services across Kubernetes and a set of
Virtual Machines, and illustrates how to use Istio service mesh to control
this infrastructure as a single mesh.
> This guide is still under development and has only been tested on Google Cloud Platform.
On IBM Cloud or other platforms where the overlay network of Pods is isolated from the VM network,
VMs cannot initiate any direct communication to Kubernetes Pods, even when using Istio.
## Overview
@ -27,7 +27,6 @@ this infrastructure as a single mesh.
<!-- source of the drawing https://docs.google.com/drawings/d/1gQp1OTusiccd-JUOHktQ9RFZaqREoQbwl2Vb-P3XlRQ/edit -->
## Before you begin
* Setup Istio by following the instructions in the
@ -36,11 +35,12 @@ this infrastructure as a single mesh.
* Deploy the [Bookinfo]({{home}}/docs/guides/bookinfo.html) sample application (in the `bookinfo` namespace).
* Create a VM named 'vm-1' in the same project as Istio cluster, and [Join the Mesh]({{home}}/docs/setup/kubernetes/mesh-expansion.html).
## Running MySQL on the VM
We will first install MySQL on the VM, and configure it as a backend for the ratings service.
On the VM:
```bash
sudo apt-get update && sudo apt-get install -y mariadb-server
sudo mysql
@ -49,14 +49,18 @@ GRANT ALL PRIVILEGES ON *.* TO 'root'@'localhost' IDENTIFIED BY 'password' WITH
quit;
sudo systemctl restart mysql
```
You can find details of configuring MySQL at [Mysql](https://mariadb.com/kb/en/library/download/).
On the VM, add the ratings database to MySQL.
```bash
# Add ratings db to the MySQL db
curl -q https://raw.githubusercontent.com/istio/istio/master/samples/bookinfo/src/mysql/mysqldb-init.sql | mysql -u root -ppassword
```
To make it easy to visually inspect the difference in the output of the Bookinfo application, you can change the ratings that are generated by using the
following commands
```bash
# To inspect the ratings
mysql -u root -ppassword test -e "select * from ratings;"
@ -84,13 +88,13 @@ hostname -I
```
## Registering the mysql service with the mesh
On a host with access to `istioctl` commands, register the VM and mysql db service
```bash
istioctl register -n vm mysqldb <ip-address-of-vm> 3306
```
Sample output:
```xxx
$ istioctl register -n vm mysqldb 10.150.0.5 3306
I1108 20:17:54.256699 40419 register.go:43] Registering for service 'mysqldb' ip '10.150.0.5', ports list [{3306 mysql}]
I1108 20:17:54.256815 40419 register.go:48] 0 labels ([]) and 1 annotations ([alpha.istio.io/kubernetes-serviceaccounts=default])
W1108 20:17:54.573068 40419 register.go:123] Got 'services "mysqldb" not found' looking up svc 'mysqldb' in namespace 'vm', attempting to create it
@ -104,6 +108,7 @@ Note that the 'mysqldb' virtual machine does not need and should not have specia
## Using the mysql service
The ratings service in Bookinfo will use the DB on the machine. To verify that it works, create version 2 of the ratings service that uses the MySQL db on the VM. Then specify route rules that force the reviews service to use ratings version 2.
```bash
# Create the version of ratings service that will use mysql back end
istioctl kube-inject -n bookinfo -f samples/bookinfo/kube/bookinfo-ratings-v2-mysql-vm.yaml | kubectl apply -n bookinfo -f -
@ -112,6 +117,7 @@ istioctl kube-inject -n bookinfo -f samples/bookinfo/kube/bookinfo-ratings-v2-my
istioctl create -n bookinfo -f samples/bookinfo/kube/route-rule-ratings-mysql-vm.yaml
```
You can verify the output of the Bookinfo application is showing 1 star from Reviewer1 and 4 stars from Reviewer2 or change the ratings on your VM and see the
results.
You can also find some troubleshooting and other information in the [RawVM MySQL](https://github.com/istio/istio/blob/master/samples/rawvm/README.md) document in the meantime.

View File

@ -49,7 +49,7 @@ developers to manually instrument their applications.
applications.
1. [Using the Istio Dashboard]({{home}}/docs/tasks/telemetry/using-istio-dashboard.html)
This task installs the Grafana add-on with a preconfigured dashboard
for monitoring mesh traffic.
## Cleanup

View File

@ -1,6 +1,6 @@
---
title: Integrating with External Services
overview: This sample integrates third party services with Bookinfo and demonstrates how to use the Istio service mesh to provide metrics and routing functions for these services.
order: 50
draft: true

View File

@ -18,7 +18,7 @@ This page describes how to use the Mixer config expression language (CEXL).
Mixer configuration uses an expression language (CEXL) to specify match expressions and [mapping expressions]({{mixerConfig}}#attribute-expressions). CEXL expressions map a set of typed [attributes]({{home}}/docs/concepts/policy-and-control/attributes.html) and constants to a typed
[value](https://github.com/istio/api/blob/master/policy/v1beta1/value_type.proto).
## Syntax
CEXL accepts a subset of **[Go expressions](https://golang.org/ref/spec#Expressions)**, which defines the syntax. CEXL implements a subset of the Go operators that constrains the set of accepted Go expressions. CEXL also supports arbitrary parenthesization.
@ -29,32 +29,31 @@ CEXL supports the following functions.
|Operator/Function |Definition |Example | Description
|-----------------------------------------
|`==` |Equals |`request.size == 200`
|`!=` |Not Equals |`request.auth.principal != "admin"`
|`||` |Logical OR | `(request.size == 200) || (request.auth.principal == "admin")`
|`&&` |Logical AND | `(request.size == 200) && (request.auth.principal == "admin")`
|`[ ]` |Map Access | `request.headers["x-id"]`
|`|` |First non empty | `source.labels["app"] | source.labels["svc"] | "unknown"`
|`match` | Glob match |`match(destination.service, "*.ns1.svc.cluster.local")` | Matches prefix or suffix based on the location of `*`
|`ip` | Convert a textual IPv4 address into the `IP_ADDRESS` type | `source.ip == ip("10.11.12.13")` | Use the `ip` function to create an `IP_ADDRESS` literal.
|`timestamp` | Convert a textual timestamp in RFC 3339 format into the `TIMESTAMP` type |`timestamp("2015-01-02T15:04:35Z")` | Use the `timestamp` function to create a `TIMESTAMP` literal.
|`.matches` | Regular expression match | `"svc.*".matches(destination.service)` | Matches `destination.service` against regular expression pattern `"svc.*"`.
|`.startsWith` | string prefix match | `destination.service.startsWith("acme")` | Checks whether `destination.service` starts with `"acme"`.
|`.endsWith` | string postfix match | `destination.service.endsWith("acme")` | Checks whether `destination.service` ends with `"acme"`.
## Type checking
CEXL variables are attributes from the typed [attribute vocabulary]({{home}}/docs/reference/config/mixer/attribute-vocabulary.html), constants are implicitly typed, and functions are explicitly typed.
Mixer validates a CEXL expression and resolves it to a type during config validation.
Selectors must resolve to a boolean value and mapping expressions must resolve to the type they are mapping into. Config validation fails if a selector fails to resolve to a boolean or if a mapping expression resolves to an incorrect type.
For example, if an operator specifies a *string* label as `request.size | 200`, validation fails because the expression resolves to an integer.
## Missing attributes
If an expression uses an attribute that is not available during request processing, the expression evaluation fails. Use the `|` operator to provide a default value if an attribute may be missing.
For example, the expression `request.auth.principal == "user1"` fails evaluation if the `request.auth.principal` attribute is missing. The `|` (OR) operator addresses the problem: `(request.auth.principal | "nobody" ) == "user1"`.
@ -63,7 +62,7 @@ For example, the expression `request.auth.principal == "user1"` fails evaluation
|Expression |Return Type |Description
|------------|-------------|------------
|`request.size | 200` | **int** | `request.size` if available, otherwise 200.
|`request.headers["X-FORWARDED-HOST"] == "myhost"`| **boolean**
|`(request.headers["x-user-group"] == "admin") || (request.auth.principal == "admin")`| **boolean**| True if the user is admin or in the admin group.
|`(request.auth.principal | "nobody" ) == "user1"` | **boolean** | True if `request.auth.principal` is "user1"; the expression will not error out if `request.auth.principal` is missing.
|`source.labels["app"]=="reviews" && source.labels["version"]=="v3"`| **boolean** | True if app label is reviews and version label is v3, false otherwise.
@ -1,235 +0,0 @@
---
title: Writing Configuration
overview: How to write Istio config YAML content.
order: 70
layout: docs
type: markdown
---
This page describes how to write configuration that conforms to Istio's schemas. All configuration schemas in Istio are defined as [protobuf messages](https://developers.google.com/protocol-buffers/docs/proto3). When in doubt, search for the protos.
## Translating to YAML
There's an implicit mapping from protobuf to YAML using [protobuf's mapping to JSON](https://developers.google.com/protocol-buffers/docs/proto3#json). Below are a few examples showing common mappings you'll encounter writing configuration in Istio.
**Important things to note:**
- YAML fields are implicitly strings
- Proto `repeated` fields map to YAML lists; each element in a YAML list is prefixed by a dash (`-`)
- Proto `message`s map to objects; in YAML objects are field names all at the same indentation level
- YAML is whitespace sensitive and must use spaces; tabs are never allowed
### `map` and `message` fields
<table>
<tbody>
<tr>
<th>Proto</th>
<th>YAML</th>
</tr>
<tr>
<td>
<pre>
message Metric {
string descriptor_name = 1;
string value = 2;
map<string, string> labels = 3;
}
</pre>
</td>
<td>
<pre>
descriptorName: request_count
value: "1"
labels:
source: origin.ip
destination: destination.service
</pre>
</td>
</tr>
</tbody>
</table>
*Note that when numeric literals are used as strings (like `value` above) they must be enclosed in quotes. Quotation marks (`"`) are optional for normal strings.*
### `repeated` fields
<table>
<tbody>
<tr>
<th>Proto</th>
<th>YAML</th>
</tr>
<tr>
<td>
<pre>
message Metric {
string descriptor_name = 1;
string value = 2;
map<string, string> labels = 3;
}
message MetricsParams {
repeated Metric metrics = 1;
}
</pre>
</td>
<td>
<pre>
metrics:
- descriptorName: request_count
value: "1"
labels:
source: origin.ip
destination: destination.service
- descriptorName: request_latency
value: response.duration
labels:
source: origin.ip
destination: destination.service
</pre>
</td>
</tr>
</tbody>
</table>
### `enum` fields
<table>
<tbody>
<tr>
<th>Proto</th>
<th>YAML</th>
</tr>
<tr>
<td>
<pre>
enum ValueType {
STRING = 1;
INT64 = 2;
DOUBLE = 3;
// more values omitted
}
message AttributeDescriptor {
string name = 1;
string description = 2;
ValueType value_type = 3;
}
</pre>
</td>
<td>
<pre>
name: request.duration
value_type: INT64
</pre>
or
<pre>
name: request.duration
valueType: INT64
</pre>
</td>
</tr>
</tbody>
</table>
*Note that YAML parsing will handle both `snake_case` and `lowerCamelCase` field names. `lowerCamelCase` is the canonical version in YAML.*
### Nested `message` fields
<table>
<tbody>
<tr>
<th>Proto</th>
<th>YAML</th>
</tr>
<tr>
<td>
<pre>
enum ValueType {
STRING = 1;
INT64 = 2;
DOUBLE = 3;
// more values omitted
}
message LabelDescriptor {
string name = 1;
string description = 2;
ValueType value_type = 3;
}
message MonitoredResourceDescriptor {
string name = 1;
string description = 2;
repeated LabelDescriptor labels = 3;
}
</pre>
</td>
<td>
<pre>
name: My Monitored Resource
labels:
- name: label one
valueType: STRING
- name: second label
valueType: DOUBLE
</pre>
</td>
</tr>
</tbody>
</table>
### `Timestamp`, `Duration`, and 64 bit integer fields
The protobuf spec special cases the JSON/YAML representations of a few well-known protobuf messages. 64 bit integer types are also special due to the fact that JSON numbers are implicitly doubles, which cannot represent all valid 64 bit integer values.
<table>
<tbody>
<tr>
<th>Proto</th>
<th>YAML</th>
</tr>
<tr>
<td>
<pre>
message Quota {
string descriptor_name = 1;
map<string, string> labels = 2;
int64 max_amount = 3;
google.protobuf.Duration expiration = 4;
}
</pre>
</td>
<td>
<pre>
descriptorName: RequestCount
labels:
label one: STRING
second label: DOUBLE
maxAmount: "7"
expiration: 1.000340012s
</pre>
</td>
</tr>
</tbody>
</table>
Specifically, the [protobuf spec declares](https://developers.google.com/protocol-buffers/docs/proto3#json):
| Proto | JSON/YAML | Example | Notes |
| --- | --- | --- | --- |
| Timestamp | string | "1972-01-01T10:00:20.021Z" | Uses RFC 3339, where generated output will always be Z-normalized and uses 0, 3, 6 or 9 fractional digits. |
| Duration | string | "1.000340012s", "1s" | Generated output always contains 0, 3, 6, or 9 fractional digits, depending on required precision. Accepted are any fractional digits (also none) as long as they fit into nano-seconds precision. |
| int64, fixed64, uint64 | string | "1", "-10" | JSON value will be a decimal string. Either numbers or strings are accepted.|
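As an illustration of these rules, a hypothetical quota-style message combining all three would be written with each value as a string:

```yaml
# Hypothetical values; field names follow the Quota message shown above,
# plus an illustrative google.protobuf.Timestamp field.
descriptorName: RequestCount
maxAmount: "7"                      # int64 as a quoted decimal string
expiration: "300s"                  # google.protobuf.Duration as a string
validFrom: "2018-04-04T22:22:14Z"   # Timestamp in RFC 3339, Z-normalized
```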
## What's next
* TODO: link to overall mixer config concept guide (how the config pieces fit together)
@ -8,13 +8,13 @@ layout: docs
type: markdown
---
> Setup on Nomad has not been tested.
Using Istio in a non-Kubernetes environment involves a few key tasks:
1. Setting up the Istio control plane with the Istio API server
1. Adding the Istio sidecar to every instance of a service
1. Ensuring requests are routed through the sidecars
## Setting up the Control Plane
@ -29,7 +29,7 @@ server requires an
[etcd cluster](https://kubernetes.io/docs/getting-started-guides/scratch/#etcd)
as a persistent store. Detailed instructions for setting up the API server can
be found
[here](https://kubernetes.io/docs/getting-started-guides/scratch/#apiserver-controller-manager-and-scheduler).
Documentation on the set of startup options for the Kubernetes API server can be found [here](https://kubernetes.io/docs/admin/kube-apiserver/)
#### Local Install
@ -71,15 +71,14 @@ services:
environment:
- SERVICE_IGNORE=1
command: [
"kube-apiserver", "--etcd-servers", "http://etcd:2379",
"--service-cluster-ip-range", "10.99.0.0/16",
"--insecure-port", "8080",
"-v", "2",
"--insecure-bind-address", "0.0.0.0"
]
```
### Other Istio Components
Debian packages for Istio Pilot, Mixer, and CA are available through the
@ -94,7 +93,6 @@ Nomad, where the
[service stanza](https://www.nomadproject.io/docs/job-specification/service.html)
can be used to describe the desired properties of the control plane services.
## Adding Sidecars to Service Instances
Each instance of a service in an application must be accompanied by the
@ -124,5 +122,5 @@ the Istio sidecars. The IP table script to setup such forwarding can be
found
[here](https://raw.githubusercontent.com/istio/istio/master/tools/deb/istio-iptables.sh).
> This script must be executed before starting the application or
> the sidecar process.
@ -61,7 +61,7 @@ Quick Start instructions to install and configure Istio in a Docker Compose setu
```
> If the Istio Pilot container terminates, ensure that you run the `istioctl context-create` command and re-run the command from the previous step.
1. Configure `istioctl` to use mapped local port for the Istio API server:
```bash
@ -74,7 +74,7 @@ You can now deploy your own application or one of the sample applications provid
installation like [Bookinfo]({{home}}/docs/guides/bookinfo.html).
> Note 1: Since there is no concept of pods in a Docker setup, the Istio
> sidecar runs in the same container as the application. We will use
> [Registrator](https://gliderlabs.github.io/registrator/latest/) to
> automatically register instances of services in the Consul service
> registry.
@ -8,7 +8,7 @@ layout: docs
type: markdown
---
Using Istio in a non-Kubernetes environment involves a few key tasks:
1. Setting up the Istio control plane with the Istio API server
2. Adding the Istio sidecar to every instance of a service
@ -27,7 +27,7 @@ server requires an
[etcd cluster](https://kubernetes.io/docs/getting-started-guides/scratch/#etcd)
as a persistent store. Detailed instructions for setting up the API server can
be found
[here](https://kubernetes.io/docs/getting-started-guides/scratch/#apiserver-controller-manager-and-scheduler).
Documentation on the set of startup options for the Kubernetes API server can be found
[here](https://kubernetes.io/docs/admin/kube-apiserver/)
@ -69,10 +69,10 @@ services:
environment:
- SERVICE_IGNORE=1
command: [
"kube-apiserver", "--etcd-servers", "http://etcd:2379",
"--service-cluster-ip-range", "10.99.0.0/16",
"--insecure-port", "8080",
"-v", "2",
"--insecure-bind-address", "0.0.0.0"
]
```
@ -104,5 +104,5 @@ Table rules to transparently route application's network traffic through
the Istio sidecars. The IP table script to setup such forwarding can be
found [here](https://raw.githubusercontent.com/istio/istio/master/tools/deb/istio-iptables.sh).
> This script must be executed before starting the application or
> the sidecar process.
@ -53,8 +53,8 @@ Quick Start instructions to install and configure Istio in a Docker Compose setu
```bash
docker ps -a
```
> If the Istio Pilot container terminates, ensure that you run the `istioctl context-create` command and re-run the command from the previous step.
1. Configure `istioctl` to use mapped local port for the Istio API server:
```bash
@ -67,7 +67,7 @@ You can now deploy your own application or one of the sample applications provid
installation like [Bookinfo]({{home}}/docs/guides/bookinfo.html).
> Note 1: Since there is no concept of pods in a Docker setup, the Istio
> sidecar runs in the same container as the application. We will use
> [Registrator](https://gliderlabs.github.io/registrator/latest/) to
> automatically register instances of services in the Eureka service
> registry.
@ -20,7 +20,6 @@ L7 routing capabilities such as version-aware routing, header based
routing, gRPC/HTTP2 proxying, tracing, etc. Deploy Istio Pilot only and
disable other components. Do not deploy the Istio initializer.
## Ingress Controller with Telemetry & Policies
By deploying Istio Pilot and Mixer, the Ingress controller configuration
@ -38,12 +38,12 @@ ansible-playbook main.yml
The Ansible playbook ships with reasonable defaults.
The currently exposed options are:
| Parameter | Description | Values | Default |
| --- | --- | --- | --- |
| `cluster_flavour` | Define the target cluster type | `k8s` or `ocp` | `ocp` |
| `github_api_token` | A valid GitHub API authentication token used for authenticating with GitHub | A valid GitHub API token | empty |
| `cmd_path` | Override the path to `kubectl` or `oc` | A valid path to a `kubectl` or `oc` binary | `$PATH/oc` |
| `istio.release_tag_name` | Istio release version to install | Any valid Istio release version | the latest Istio release version |
| `istio.dest` | The directory of the target machine where Istio will be installed | Any directory with read+write permissions | `~/.istio` |
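As a sketch, a variables file overriding a few of these defaults could look like this (parameter names come from the table above; the values are purely illustrative):

```yaml
# example-vars.yml -- pass to the playbook with: ansible-playbook main.yml -e @example-vars.yml
cluster_flavour: k8s                 # target a plain Kubernetes cluster instead of OpenShift
github_api_token: "<your GitHub API token>"
cmd_path: /usr/local/bin/kubectl
istio:
  release_tag_name: 0.7.1            # any valid Istio release version
  dest: ~/.istio
```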
@ -14,11 +14,11 @@ type: markdown
Quick start instructions for the setup and configuration of Istio using the Helm package manager.
*Installation with Helm prior to Istio 0.7 is unstable and not recommended.*
## Prerequisites
The following instructions require that you have access to Helm **2.7.2 or newer** in your Kubernetes environment or,
alternately, the ability to modify RBAC rules required to install Helm. Additionally, Kubernetes **1.7.3 or newer**
is required. Finally, this Helm chart **does not** yet implement automatic sidecar injection.
@ -117,7 +117,6 @@ else, e.g., on-prem or raw VM, we have to bootstrap a key/cert as credential,
which typically has a limited lifetime. And when the cert expires, you have to
rerun the above command.
Or the equivalent manual steps:
------ Manual setup steps begin ------
@ -175,7 +174,6 @@ curl 'http://istio-pilot.istio-system:8080/v1/registration/istio-pilot.istio-sys
curl 'http://10.60.1.4:8080/v1/registration/istio-pilot.istio-system.svc.cluster.local|http-discovery'
```
* Extract the initial Istio authentication secrets and copy them to the machine. The default
installation of Istio includes Istio CA and will generate Istio secrets even if
the automatic 'mTLS'
@ -195,7 +193,7 @@ install/tools/setupMeshEx.sh machineCerts ACCOUNT NAMESPACE
The generated files (`key.pem`, `root-cert.pem`, `cert-chain.pem`) must be copied to /etc/certs on each machine, readable by istio-proxy.
* Install Istio Debian files and start 'istio' and 'istio-auth-node-agent' services.
Get the debian packages from [GitHub releases](https://github.com/istio/istio/releases) or:
```bash
# Note: This will be replaced with an 'apt-get' command once the repositories are setup.
@ -226,7 +224,7 @@ Check that the processes are running:
```bash
ps aux |grep istio
```
```xxx
root 6941 0.0 0.2 75392 16820 ? Ssl 21:32 0:00 /usr/local/istio/bin/node_agent --logtostderr
root 6955 0.0 0.0 49344 3048 ? Ss 21:32 0:00 su -s /bin/bash -c INSTANCE_IP=10.150.0.5 POD_NAME=demo-vm-1 POD_NAMESPACE=default exec /usr/local/bin/pilot-agent proxy > /var/log/istio/istio.log istio-proxy
istio-p+ 7016 0.0 0.1 215172 12096 ? Ssl 21:32 0:00 /usr/local/bin/pilot-agent proxy
@ -236,7 +234,7 @@ Istio auth node agent is healthy:
```bash
sudo systemctl status istio-auth-node-agent
```
```xxx
● istio-auth-node-agent.service - istio-auth-node-agent: The Istio auth node agent
Loaded: loaded (/lib/systemd/system/istio-auth-node-agent.service; disabled; vendor preset: enabled)
Active: active (running) since Fri 2017-10-13 21:32:29 UTC; 9s ago
@ -10,13 +10,11 @@ type: markdown
{% include home.html %}
Quick Start instructions to install and run Istio in [Google Kubernetes Engine](https://cloud.google.com/kubernetes-engine/) (GKE) using [Google Cloud Deployment Manager](https://cloud.google.com/deployment-manager/).
This Quick Start creates a new GKE [zonal cluster](https://cloud.google.com/kubernetes-engine/versioning-and-upgrades#versions_available_for_new_cluster_masters), installs Istio and then deploys the [Bookinfo]({{home}}/docs/guides/bookinfo.html) sample
application. It uses Deployment Manager to automate the steps detailed in the [Istio on Kubernetes setup guide]({{home}}/docs/setup/kubernetes/quick-start.html) for Kubernetes Engine
## Prerequisites
- This sample requires a valid Google Cloud Platform project with billing enabled. If you are not an existing GCP user, you may be able to enroll for a $300 US [Free Trial](https://cloud.google.com/free/) credit.
@ -46,16 +44,16 @@ application. It uses Deployment Manager to automate the steps detailed in the [
- [Istio GKE Deployment Manager](https://accounts.google.com/signin/v2/identifier?service=cloudconsole&continue=https://console.cloud.google.com/launcher/config?templateurl=https://raw.githubusercontent.com/istio/istio/master/install/gcp/deployment_manager/istio-cluster.jinja&followup=https://console.cloud.google.com/launcher/config?templateurl=https://raw.githubusercontent.com/istio/istio/master/install/gcp/deployment_manager/istio-cluster.jinja&flowName=GlifWebSignIn&flowEntry=ServiceLogin)
We recommend that you leave the default settings as the rest of this tutorial shows how to access the installed features. By default the tool creates a
GKE alpha cluster with the specified settings, then installs the Istio [control plane]({{home}}/docs/concepts/what-is-istio/overview.html#architecture), the
[Bookinfo]({{home}}/docs/guides/bookinfo.html) sample app,
[Grafana]({{home}}/docs/tasks/telemetry/using-istio-dashboard.html) with
[Prometheus]({{home}}/docs/tasks/telemetry/querying-metrics.html),
[ServiceGraph]({{home}}/docs/tasks/telemetry/servicegraph.html),
and [Zipkin]({{home}}/docs/tasks/telemetry/distributed-tracing.html#zipkin).
You'll find out more about how to access all of these below. This script will enable Istio auto-injection on the ```default``` namespace only.
1. Click **Deploy**:
{% include figure.html width="100%" ratio="67.17%"
img='./img/dm_launcher.png'
@ -70,23 +68,22 @@ application. It uses Deployment Manager to automate the steps detailed in the [
Once deployment is complete, do the following on the workstation where you've installed `gcloud`:
1. Bootstrap `kubectl` for the cluster you just created and confirm the cluster is
running and Istio is enabled
```bash
gcloud container clusters list
```
```xxx
NAME ZONE MASTER_VERSION MASTER_IP MACHINE_TYPE NODE_VERSION NUM_NODES STATUS
istio-cluster us-central1-a v1.9.2-gke.1 130.211.216.64 n1-standard-2 v1.9.2-gke.1 3 RUNNING
```
In this case, the cluster name is ```istio-cluster```
1. Now acquire the credentials for this cluster
```bash
gcloud container clusters get-credentials istio-cluster --zone=us-central1-a
```
@ -97,7 +94,7 @@ Verify Istio is installed in its own namespace
```bash
kubectl get deployments,ing -n istio-system
```
```xxx
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deploy/grafana 1 1 1 1 3m
deploy/istio-ca 1 1 1 1 3m
@ -109,15 +106,12 @@ deploy/prometheus 1 1 1 1 3m
deploy/servicegraph 1 1 1 1 3m
deploy/zipkin 1 1 1 1 3m
```
Now confirm that the Bookinfo sample application is also installed:
```bash
kubectl get deployments,ing
```
```xxx
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deploy/details-v1 1 1 1 1 3m
deploy/productpage-v1 1 1 1 1 3m
@ -152,7 +146,7 @@ You can also view the installation using the ***Kubernetes Engine -> Workloads**
export GATEWAY_URL=35.202.120.89
```
1. Verify you can access the Bookinfo ```http://${GATEWAY_URL}/productpage```:
{% include figure.html width="100%" ratio="45.04%"
img='./img/dm_bookinfo.png'
@ -161,7 +155,7 @@ You can also view the installation using the ***Kubernetes Engine -> Workloads**
caption='Bookinfo'
%}
1. Now send some traffic to it:
```bash
for i in {1..100}; do curl -o /dev/null -s -w "%{http_code}\n" http://${GATEWAY_URL}/productpage; done
```
@ -170,7 +164,7 @@ You can also view the installation using the ***Kubernetes Engine -> Workloads**
Once you have verified that the Istio control plane and sample application are working, try accessing the installed Istio plugins.
If you are using Cloud Shell rather than the installed `gcloud` client, you can port forward and proxy using its [Web Preview](https://cloud.google.com/shell/docs/using-web-preview#previewing_the_application) feature. For example, to access Grafana from Cloud Shell, change the `kubectl` port mapping from 3000:3000 to 8080:3000. You can simultaneously preview four other consoles via Web Preview proxied on ranges 8080 to 8084.
### Grafana
@ -180,7 +174,7 @@ Set up a tunnel to Grafana:
kubectl -n istio-system port-forward $(kubectl -n istio-system get pod -l app=grafana -o jsonpath='{.items[0].metadata.name}') 3000:3000 &
```
then
```xxx
http://localhost:3000/dashboard/db/istio-dashboard
```
You should see some statistics for the requests you sent earlier.
@ -194,7 +188,6 @@ You should see some statistics for the requests you sent earlier.
For more details about using Grafana, see [About the Grafana Add-on]({{home}}/docs/tasks/telemetry/using-istio-dashboard.html#about-the-grafana-add-on).
### Prometheus
Prometheus is installed with Grafana. You can view Istio and application metrics using the console as follows:
@ -225,9 +218,10 @@ Set up a tunnel to ServiceGraph:
```bash
kubectl -n istio-system port-forward $(kubectl -n istio-system get pod -l app=servicegraph -o jsonpath='{.items[0].metadata.name}') 8088:8088 &
```
You should see the Bookinfo service topology at
```xxx
http://localhost:8088/dotviz
```
@ -250,7 +244,7 @@ kubectl port-forward -n istio-system $(kubectl get pod -n istio-system -l app=zi
You should see the trace statistics sent earlier:
```xxx
http://localhost:9411
```
@ -274,6 +268,7 @@ on our workstation or within Cloud Shell.
1. Navigate to the Deployments section of the Cloud Console at [https://console.cloud.google.com/deployments](https://console.cloud.google.com/deployments)
1. Select the deployment and click **Delete**.
1. Deployment Manager will remove all the deployed GKE artifacts - however, items such as Ingress and LoadBalancers will remain. You can delete those artifacts
by again going to the cloud console under [**Network Services** -> **LoadBalancers**](https://console.cloud.google.com/net-services/loadbalancing/loadBalancers/list)
@ -16,11 +16,11 @@ Quick Start instructions to install and configure Istio in a Kubernetes cluster.
## Prerequisites
The following instructions require you have access to a Kubernetes **1.7.3 or newer** cluster
with [RBAC (Role-Based Access Control)](https://kubernetes.io/docs/admin/authorization/rbac/) enabled. You will also need `kubectl` **1.7.3 or newer** installed.
If you wish to enable [automatic sidecar injection]({{home}}/docs/setup/kubernetes/sidecar-injection.html#automatic-sidecar-injection) or server-side configuration validation, you need Kubernetes version 1.9 or greater.
> If you installed Istio 0.1.x,
> [uninstall](https://archive.istio.io/v0.1/docs/tasks/installing-istio.html#uninstalling)
> it completely before installing the newer version (including the Istio sidecar
> for all Istio enabled application pods).
@ -51,12 +51,12 @@ Create a new cluster.
```bash
gcloud container clusters create <cluster-name> \
--cluster-version=1.9.4-gke.1
--zone <zone>
--project <project-name>
```
Retrieve your credentials for `kubectl`.
```bash
gcloud container clusters get-credentials <cluster-name> \
@ -65,7 +65,7 @@ gcloud container clusters get-credentials <cluster-name> \
```
Grant cluster admin permissions to the current user (admin permissions are required to create the necessary RBAC rules for Istio).
```bash
kubectl create clusterrolebinding cluster-admin-binding \
--clusterrole=cluster-admin \
@ -74,7 +74,7 @@ kubectl create clusterrolebinding cluster-admin-binding \
### [IBM Cloud Container Service (IKS)](https://www.ibm.com/cloud/container-service)
Kubernetes 1.9 is generally available on IBM Cloud Container Service (IKS).
At the time of writing it is not the default version, so to create a new lite cluster:
@ -88,7 +88,7 @@ Or create a new paid cluster:
bx cs cluster-create --location location --machine-type u2c.2x4 --name <cluster-name> --kube-version 1.9.3
```
Retrieve your credentials for `kubectl` (replace `<cluster-name>` with the name of the cluster you want to use):
```bash
$(bx cs cluster-config <cluster-name>|grep "export KUBECONFIG")
@ -98,9 +98,9 @@ $(bx cs cluster-config <cluster-name>|grep "export KUBECONFIG")
Configure `kubectl` CLI based on steps [here](https://www.ibm.com/support/knowledgecenter/SSBS6K_2.1.0/manage_cluster/cfc_cli.html) for how to access the IBM Cloud Private Cluster.
### [OpenShift Origin](https://www.openshift.org) (version 3.7 or later)
OpenShift by default does not allow containers running with UID 0. Enable containers running
with UID 0 for Istio's service accounts for ingress as well the Prometheus and Grafana addons:
```bash
@ -112,10 +112,10 @@ Service account that runs application pods need privileged security context cons
```bash
oc adm policy add-scc-to-user privileged -z default -n <target-namespace>
```
### AWS (w/Kops)
When you install a new cluster with Kubernetes version 1.9, the prerequisite of having `admissionregistration.k8s.io/v1beta1` enabled is already covered.

Nevertheless, the list of admission controllers needs to be updated.
@ -161,7 +161,7 @@ Validate with `kubectl` client on kube-api pod, you should see new admission con
for i in `kubectl get pods -nkube-system | grep api | awk '{print $1}'` ; do kubectl describe pods -nkube-system $i | grep "/usr/local/bin/kube-apiserver" ; done
```
Output should be:
```bash
[...] --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority [...]
```
@ -260,7 +260,7 @@ installation like [Bookinfo]({{home}}/docs/guides/bookinfo.html).
Note: the application must use HTTP/1.1 or HTTP/2.0 protocol for all its HTTP traffic because HTTP/1.0 is not supported.
If you started the [Istio-sidecar-injector]({{home}}/docs/setup/kubernetes/sidecar-injection.html#automatic-sidecar-injection),
as shown above, you can deploy the application directly using `kubectl create`.
The Istio-sidecar-injector will automatically inject Envoy containers into your application pods, assuming they are running in namespaces labeled with `istio-injection=enabled`.
@ -271,7 +271,8 @@ kubectl create -n <namespace> -f <your-app-spec>.yaml
If you do not have the Istio-sidecar-injector installed, you must
use [istioctl kube-inject]({{home}}/docs/reference/commands/istioctl.html#istioctl kube-inject) to
manually inject Envoy containers in your application pods before deploying them:
```bash
kubectl create -f <(istioctl kube-inject -f <your-app-spec>.yaml)
```
@ -9,10 +9,15 @@ type: markdown
---
{% include home.html %}
> The following requires Istio 0.5 or greater. See
> [https://archive.istio.io/v0.4/docs/setup/kubernetes/sidecar-injection](https://archive.istio.io/v0.4/docs/setup/kubernetes/sidecar-injection)
> for Istio 0.4 or prior.
>
> In previous releases, the Kubernetes initializer feature was used for automatic proxy injection. This was an Alpha feature, subject to change/removal,
> and not enabled by default in Kubernetes. Starting in Kubernetes 1.9 it was replaced by a beta feature called
> [mutating webhooks](https://kubernetes.io/docs/admin/admission-controllers/#mutatingadmissionwebhook-beta-in-19), which is now enabled by default in
> Kubernetes 1.9 and beyond. Starting with Istio 0.5.0 the automatic proxy injection uses mutating webhooks, and support for injection by initializer has been
> removed. Users who cannot upgrade to Kubernetes 1.9 should use manual injection.
## Pod spec requirements
@ -44,33 +49,33 @@ cluster must satisfy the following requirements:
ways of injecting the Istio sidecar into a pod: manually using `istioctl`
CLI tool or automatically using the Istio Initializer. Note that the
sidecar is not involved in traffic between containers in the same pod.
## Injection
Manual injection modifies the controller configuration, e.g. deployment. It
does this by modifying the pod template spec such that *all* pods for that
deployment are created with the injected sidecar. Adding/Updating/Removing
the sidecar requires modifying the entire deployment.
Automatic injection injects at pod creation time. The controller resource is
unmodified. Sidecars can be updated selectively by manually deleting pods or
systematically with a deployment rolling update.
Manual and automatic injection use the same templated configuration. Automatic
injection loads the configuration from the `istio-inject` ConfigMap in the
`istio-system` namespace. Manual injection can load from a local file or from
the ConfigMap.
Two variants of the injection configuration are provided with the default
install: `istio-sidecar-injector-configmap-release.yaml`
and `istio-sidecar-injector-configmap-debug.yaml`. The injection configmap includes
the default injection policy and sidecar injection template. The debug version
includes debug proxy images and additional logging and core dump functionality used
for debugging the sidecar proxy.
### Manual sidecar injection
Use the built-in defaults template and dynamically fetch the mesh
configuration from the `istio` ConfigMap. Additional parameter overrides
are available via flags (see `istioctl kube-inject --help`).
@ -85,7 +90,7 @@ cluster. Create local copies of the injection and mesh configmap.
kubectl create -f install/kubernetes/istio-sidecar-injector-configmap-release.yaml \
--dry-run \
-o=jsonpath='{.data.config}' > inject-config.yaml
kubectl -n istio-system get configmap istio -o=jsonpath='{.data.mesh}' > mesh-config.yaml
```
@ -102,7 +107,7 @@ istioctl kube-inject \
Deploy the injected YAML file.
```bash
kubectl apply -f sleep-injected.yaml
```
Verify that the sidecar has been injected into the deployment.
@ -110,14 +115,14 @@ Verify that the sidecar has been injected into the deployment.
```bash
kubectl get deployment sleep -o wide
```
```bash
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
sleep 1 1 1 1 2h sleep,istio-proxy tutum/curl,unknown/proxy:unknown app=sleep
```
### Automatic sidecar injection
Sidecars can be automatically added to applicable Kubernetes pods using a
[mutating webhook admission controller](https://kubernetes.io/docs/admin/admission-controllers/#validatingadmissionwebhook-alpha-in-18-beta-in-19). This feature requires Kubernetes 1.9 or later. Verify that the kube-apiserver process has the `admission-control` flag set with the `MutatingAdmissionWebhook` and `ValidatingAdmissionWebhook` admission controllers added and listed in the correct order and the admissionregistration API is enabled.
```bash
@ -134,7 +139,9 @@ Note that unlike manual injection, automatic injection occurs at the pod-level.
#### Installing the webhook
> The [0.5.0](https://github.com/istio/istio/releases/tag/0.5.0) and [0.5.1](https://github.com/istio/istio/releases/tag/0.5.1) releases are missing scripts to
provision webhook certificates. Download the missing files from [here](https://raw.githubusercontent.com/istio/istio/release-0.7/install/kubernetes/webhook-create-signed-cert.sh) and [here](https://raw.githubusercontent.com/istio/istio/release-0.7/install/kubernetes/webhook-patch-ca-bundle.sh).
Subsequent releases (> 0.5.1) should include these missing files.
Install base Istio.
@ -142,13 +149,13 @@ Install base Istio.
kubectl apply -f install/kubernetes/istio.yaml
```
Webhooks require a signed cert/key pair. Use `install/kubernetes/webhook-create-signed-cert.sh` to generate
a cert/key pair signed by the Kubernetes CA. The resulting cert/key file is stored as a Kubernetes
secret for the sidecar injector webhook to consume.
> Kubernetes CA approval requires permissions to create and approve CSR. See
[Managing TLS in a Cluster](https://kubernetes.io/docs/tasks/tls/managing-tls-in-a-cluster/)
and [install/kubernetes/webhook-create-signed-cert.sh](https://raw.githubusercontent.com/istio/istio/release-0.7/install/kubernetes/webhook-create-signed-cert.sh) for more information.
```bash
./install/kubernetes/webhook-create-signed-cert.sh \
@ -157,14 +164,14 @@ _Note_: Kubernetes CA approval requires permissions to create and approve CSR. S
--secret sidecar-injector-certs
```
Install the sidecar injection configmap.
```bash
kubectl apply -f install/kubernetes/istio-sidecar-injector-configmap-release.yaml
```
Set the `caBundle` in the webhook install YAML that the Kubernetes api-server
uses to invoke the webhook.
```bash
cat install/kubernetes/istio-sidecar-injector.yaml | \
@ -183,23 +190,25 @@ The sidecar injector webhook should now be running.
```bash
kubectl -n istio-system get deployment -listio=sidecar-injector
```
```xxx
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
istio-sidecar-injector 1 1 1 1 1d
```
NamespaceSelector decides whether to run the webhook on an object based on whether the namespace for that object matches the
selector (see <https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#label-selectors>). The default webhook configuration
uses `istio-injection=enabled`.
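For reference, the relevant portion of that configuration looks roughly like the following sketch (only the selector fragment is shown; the resource and webhook names follow the default install but are not guaranteed):

```yaml
apiVersion: admissionregistration.k8s.io/v1beta1
kind: MutatingWebhookConfiguration
metadata:
  name: istio-sidecar-injector
webhooks:
- name: sidecar-injector.istio.io
  namespaceSelector:
    matchLabels:
      istio-injection: enabled   # only namespaces with this label are injected
```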
View namespaces showing `istio-injection` label and verify the `default` namespace is not labeled.
```bash
kubectl get namespace -L istio-injection
```
```xxx
NAME STATUS AGE ISTIO-INJECTION
default Active 1h
istio-system Active 1h
kube-public Active 1h
kube-system Active 1h
```
@ -208,19 +217,19 @@ kube-system Active 1h
Deploy sleep app. Verify both deployment and pod have a single container.
```bash
kubectl apply -f samples/sleep/sleep.yaml
```
```bash
kubectl get deployment -o wide
```
```xxx
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
sleep 1 1 1 1 12m sleep tutum/curl app=sleep
```
```bash
kubectl get pod
```
```xxx
NAME READY STATUS RESTARTS AGE
sleep-776b7bcdcd-7hpnk 1/1 Running 0 4
```
@ -233,23 +242,23 @@ kubectl label namespace default istio-injection=enabled
```bash
kubectl get namespace -L istio-injection
```
```xxx
NAME STATUS AGE ISTIO-INJECTION
default Active 1h enabled
istio-system Active 1h
kube-public Active 1h
kube-system Active 1h
```
Injection occurs at pod creation time. Kill the running pod and verify a new pod is created with the injected sidecar. The original pod has 1/1 READY containers and the pod with injected sidecar has 2/2 READY containers.
```bash
kubectl delete pod sleep-776b7bcdcd-7hpnk
```
```bash
kubectl get pod
```
```xxx
NAME READY STATUS RESTARTS AGE
sleep-776b7bcdcd-7hpnk 1/1 Terminating 0 1m
sleep-776b7bcdcd-bhn9m 2/2 Running 0 7s
@ -267,12 +276,12 @@ Disable injection for the `default` namespace and verify new pods are created wi
kubectl label namespace default istio-injection-
```
```bash
kubectl delete pod sleep-776b7bcdcd-bhn9m
```
```bash
kubectl get pod
```
```xxx
NAME READY STATUS RESTARTS AGE
sleep-776b7bcdcd-bhn9m 2/2 Terminating 0 2m
sleep-776b7bcdcd-gmvnr 1/1 Running 0 2s
@ -282,21 +291,21 @@ sleep-776b7bcdcd-gmvnr 1/1 Running 0 2s
[admissionregistration.k8s.io/v1beta1#MutatingWebhookConfiguration](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.10/#mutatingwebhookconfiguration-v1beta1-admissionregistration)
configures when the webhook is invoked by Kubernetes. The default
supplied with Istio selects pods in namespaces with label `istio-injection=enabled`.
This can be changed by modifying the MutatingWebhookConfiguration in
`install/kubernetes/istio-sidecar-injector-with-ca-bundle.yaml`.
The `istio-inject` ConfigMap in the `istio-system` namespace contains the default
injection policy and sidecar injection template.
##### _**policy**_
`disabled` - The sidecar injector will not inject the sidecar into
pods by default. Add the `sidecar.istio.io/inject` annotation with
value `true` to the pod template spec to enable injection.
`enabled` - The sidecar injector will inject the sidecar into pods by
default. Add the `sidecar.istio.io/inject` annotation with
value `false` to the pod template spec to disable injection.
The following example uses the `sidecar.istio.io/inject` annotation to disable sidecar injection.
@ -317,13 +326,13 @@ spec:
image: tutum/curl
command: ["/bin/sleep","infinity"]
```
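A self-contained sketch of a deployment carrying that annotation might look like this (the resource name is illustrative):

```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: ignored
spec:
  template:
    metadata:
      annotations:
        sidecar.istio.io/inject: "false"   # opt this pod out of automatic injection
      labels:
        app: ignored
    spec:
      containers:
      - name: ignored
        image: tutum/curl
        command: ["/bin/sleep", "infinity"]
```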
##### _**template**_
The sidecar injection template uses [https://golang.org/pkg/text/template](https://golang.org/pkg/text/template) which,
when parsed and executed, is decoded to the following
struct containing the list of containers and volumes to inject into the pod.
```golang
type SidecarInjectionSpec struct {
InitContainers []v1.Container `yaml:"initContainers"`
@ -332,20 +341,20 @@ type SidecarInjectionSpec struct {
}
```
The template is applied to the following data structure at runtime.
```golang
type SidecarTemplateData struct {
ObjectMeta *metav1.ObjectMeta
Spec *v1.PodSpec
ProxyConfig *meshconfig.ProxyConfig // Defined by https://istio.io/docs/reference/config/service-mesh.html#proxyconfig
MeshConfig *meshconfig.MeshConfig // Defined by https://istio.io/docs/reference/config/service-mesh.html#meshconfig
}
```
`ObjectMeta` and `Spec` are from the pod. `ProxyConfig` and `MeshConfig`
are from the `istio` ConfigMap in the `istio-system` namespace. Templates can conditionally
define injected containers and volumes with this data.
For example, the following template snippet from `install/kubernetes/istio-sidecar-injector-configmap-release.yaml`
@ -370,7 +379,7 @@ containers:
```
{% endraw %}
expands to
```yaml
containers:
@ -386,7 +395,7 @@ containers:
- --serviceCluster
- sleep
```
when applied over a pod defined by the pod template spec in [samples/sleep/sleep.yaml](https://raw.githubusercontent.com/istio/istio/master/samples/sleep/sleep.yaml).
#### Uninstalling the webhook
@ -27,9 +27,10 @@ upgraded in the same namespace ISTIO\_NAMESPACE.
## Tasks
### Control plane upgrade
The Istio control plane components include: CA, Ingress, Pilot, Mixer, and
Sidecar injector. We can use Kubernetes rolling update mechanism to upgrade the
control plane components. It can be done by simply applying the new version
yaml file directly, e.g.
```bash
@ -38,13 +39,14 @@ kubectl apply -f istio.yaml (or istio-auth.yaml)
Note: If you have used [Helm](https://istio.io/docs/setup/kubernetes/helm.html)
to generate a customized Istio deployment, please use the customized yaml files
generated by Helm instead of the standard installation yamls.
The rolling update process will upgrade all deployments and configmaps to the
new version. If there is any issue with the new control plane, you can roll back
the changes by applying the old version yaml files.
### Sidecar upgrade
After the control plane is upgraded, you will need to re-inject the new version
of sidecar proxy. There are two cases: Manual injection and Automatic injection.
@ -1,7 +1,7 @@
---
title: Enabling Rate Limits
overview: This task shows you how to use Istio to dynamically limit the traffic to a service.
order: 10
layout: docs
@ -26,27 +26,27 @@ This task shows you how to use Istio to dynamically limit the traffic to a servi
istioctl create -f samples/bookinfo/kube/route-rule-reviews-test-v2.yaml
istioctl create -f samples/bookinfo/kube/route-rule-reviews-v3.yaml
```
> If you have conflicting rules that you set in previous tasks,
use `istioctl replace` instead of `istioctl create`.
## Rate limits
Istio enables users to rate limit traffic to a service.
Consider `ratings` as an external paid service like Rotten Tomatoes® with `1qps` free quota.
Using Istio we can ensure that `1qps` is not breached.
1. Point your browser at the Bookinfo `productpage` (http://$GATEWAY_URL/productpage).
If you log in as user "jason", you should see black ratings stars with each review,
indicating that the `ratings` service is being called by the "v2" version of the `reviews` service.
If you log in as any other user (or logout) you should see red ratings stars with each review,
indicating that the `ratings` service is being called by the "v3" version of the `reviews` service.
1. Configure a `memquota` adapter with rate limits.
Save the following YAML snippet as `ratelimit-handler.yaml`.
```yaml
@ -79,11 +79,11 @@ Using Istio we can ensure that `1qps` is not breached.
```bash
istioctl create -f ratelimit-handler.yaml
```
This configuration specifies a default 5000 qps rate limit. Traffic reaching the ratings service via
reviews-v2 is subject to a 1qps rate limit. In our example user "jason" is routed via reviews-v2 and is therefore subject
to the 1qps rate limit.
1. Configure rate limit instance and rule
Create a quota instance named `requestcount` that maps incoming attributes to quota dimensions,
@ -128,7 +128,7 @@ Using Istio we can ensure that `1qps` is not breached.
1. Refresh the `productpage` in your browser.
If you log in as user "jason" while the load generator is running (i.e., generating more than 1 req/s),
the traffic generated by your browser will be rate limited to 1qps.
The reviews-v2 service is unable to access the ratings service and you stop seeing stars.
For all other users the default 5000qps rate limit will apply and you will continue seeing red stars.
@ -165,16 +165,16 @@ In the preceding examples we saw how Mixer applies rate limits to requests that
Every named quota instance like `requestcount` represents a set of counters.
The set is defined by a Cartesian product of all quota dimensions.
If the number of requests in the last `expiration` duration exceed `maxAmount`, Mixer returns a `RESOURCE_EXHAUSTED`
message to the proxy. The proxy in turn returns status `HTTP 429` to the caller.
The `memquota` adapter uses a sliding window of sub second resolution to enforce rate limits.
The `maxAmount` in the adapter configuration sets the default limit for all counters associated with a quota instance.
This default limit applies if a quota override does not match the request. Memquota selects the first override that matches a request.
An override need not specify all quota dimensions. In the ratelimit-handler.yaml example, the `1qps` override is
selected by matching only three out of four quota dimensions (see the sketch below).
If you would like the above policies enforced for a given namespace instead of the entire Istio mesh, you can replace all occurrences of istio-system with the given namespace.
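As a rough sketch (assuming the `config.istio.io/v1alpha2` `memquota` schema of this release; the dimension names are illustrative), the handler described above might look like:

```yaml
apiVersion: config.istio.io/v1alpha2
kind: memquota
metadata:
  name: handler
  namespace: istio-system
spec:
  quotas:
  - name: requestcount.quota.istio-system
    maxAmount: 5000        # default limit for every counter of this quota
    validDuration: 1s
    overrides:
    # matches only three of the four quota dimensions, as noted above
    - dimensions:
        destination: ratings
        source: reviews
        sourceVersion: v2
      maxAmount: 1         # the 1qps limit for traffic reaching ratings via reviews-v2
      validDuration: 1s
```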
## Cleanup
@ -201,5 +201,3 @@ If you would like the above policies enforced for a given namespace instead of t
* Learn more about [Mixer]({{home}}/docs/concepts/policy-and-control/mixer.html) and [Mixer Config]({{home}}/docs/concepts/policy-and-control/mixer-config.html).
* Discover the full [Attribute Vocabulary]({{home}}/docs/reference/config/mixer/attribute-vocabulary.html).
* Read the reference guide to [Writing Config]({{home}}/docs/reference/writing-config.html).
@ -25,14 +25,14 @@ This task shows how to control access to a service using the Kubernetes labels.
istioctl create -f samples/bookinfo/kube/route-rule-reviews-test-v2.yaml
istioctl create -f samples/bookinfo/kube/route-rule-reviews-v3.yaml
```
> If you have conflicting rules that you set in previous tasks,
use `istioctl replace` instead of `istioctl create`.
> If you are using a namespace other than `default`,
use `istioctl -n namespace ...` to specify the namespace.
## Access control using _denials_
Using Istio you can control access to a service based on any attributes that are available within Mixer.
This simple form of access control is based on conditionally denying requests using Mixer selectors.
@ -44,7 +44,7 @@ of the `reviews` service. We would like to cut off access to version `v3` of the
If you log in as user "jason", you should see black rating stars with each review,
indicating that the `ratings` service is being called by the "v2" version of the `reviews` service.
If you log in as any other user (or logout) you should see red rating stars with each review,
indicating that the `ratings` service is being called by the "v3" version of the `reviews` service.
@ -67,10 +67,10 @@ of the `reviews` service. We would like to cut off access to version `v3` of the
It matches requests coming from the service `reviews` with label `v3` to the service `ratings`.
This rule uses the `denier` adapter to deny requests coming from version `v3` of the reviews service.
The adapter always denies requests with a preconfigured status code and message.
The status code and the message is specified in the [denier]({{home}}/docs/reference/config/adapters/denier.html)
adapter configuration.
1. Refresh the `productpage` in your browser.
If you are logged out or logged in as any user other than "jason" you will no longer see red ratings stars because
@ -78,7 +78,7 @@ of the `reviews` service. We would like to cut off access to version `v3` of the
In contrast, if you log in as user "jason" (the `reviews:v2` user) you continue to see
the black ratings stars.
## Access control using _whitelists_
Istio also supports attribute-based whitelists and blacklists. The following whitelist configuration is equivalent to the
`denier` configuration in the previous section. The rule effectively rejects requests from version `v3` of the `reviews` service.
@ -183,8 +183,6 @@ Verify that after logging in as "jason" you see black stars.
* Discover the full [Attribute Vocabulary]({{home}}/docs/reference/config/mixer/attribute-vocabulary.html).
* Read the reference guide to [Writing Config]({{home}}/docs/reference/writing-config.html).
* Understand the differences between Kubernetes network policies and Istio
access control policies from this
[blog]({{home}}/blog/using-network-policy-in-concert-with-istio.html).
@ -9,9 +9,9 @@ type: markdown
---
{% include home.html %}
This task shows how to enable Istio CA health check. Note this is an alpha feature since Istio 0.6.
Since Istio 0.6, Istio CA has a health check feature that can be optionally enabled.
By default, the normal Istio deployment process does not enable this feature.
Currently, the health check feature is able to detect failures of the CA CSR signing service
by periodically sending CSRs to the API. More health check features are coming shortly.
@ -64,7 +64,7 @@ EOF
## Verifying the health checker is working
Istio CA will log the health check results. Run the following in command line:
```bash
kubectl logs `kubectl get po -n istio-system | grep istio-ca | awk '{print $1}'` -n istio-system
@ -115,7 +115,7 @@ the `liveness-probe-interval` is the interval to update the health status file,
the `probe-check-interval` is the interval for the Istio CA health check.
The `interval` is the maximum time elapsed since the last update of the health status file, for the prober to consider
the Istio CA as healthy.
`initialDelaySeconds` and `periodSeconds` are the initial delay and the probe running period.
Prolonging `probe-check-interval` will reduce the health check overhead, but there will be a greater lag for the
prober to get notified of an unhealthy status.
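Putting those pieces together, a liveness probe on the Istio CA container might be sketched as follows (the prober sub-command and file path are assumptions, not verified flags):

```yaml
livenessProbe:
  exec:
    command:
    - /usr/local/bin/istio_ca          # assumed binary location
    - probe                            # assumed prober sub-command
    - --probe-path=/tmp/ca.liveness    # assumed health status file path
    - --interval=125s                  # max staleness of the status file to still count as healthy
  initialDelaySeconds: 60              # initial delay before the first probe
  periodSeconds: 60                    # how often the probe runs
```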
@ -1,6 +1,6 @@
---
title: Mutual TLS over HTTPS services
overview: This task shows how to enable mTLS on HTTPS services.
order: 80
@ -9,14 +9,14 @@ type: markdown
---
{% include home.html %}
This task shows how Istio mutual TLS works with HTTPS services. It includes: 1)
deploying an HTTPS service without an Istio sidecar; 2) deploying an HTTPS service with
the Istio sidecar and mTLS disabled; 3) deploying an HTTPS service with mTLS enabled. For each
deployment, connect to this service and verify it works.
When Istio sidecar is deployed with an https service, the proxy automatically downgrades
When the Istio sidecar is deployed with an HTTPS service, the proxy automatically downgrades
from L7 to L4 (whether or not mTLS is enabled), which means it does not terminate the
original https traffic. And this is the reason Istio can work on https services.
original HTTPS traffic. This is why Istio can work with HTTPS services.
## Before you begin
@ -25,39 +25,32 @@ original https traffic. And this is the reason Istio can work on https services.
Note that authentication should be **disabled** at step 5 in the
[installation steps]({{home}}/docs/setup/kubernetes/quick-start.html#installation-steps).
### Generate certificates and configmap
You need to have OpenSSL installed to run this command:
```bash
openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /tmp/nginx.key -out /tmp/nginx.crt -subj "/CN=my-nginx/O=my-nginx"
$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /tmp/nginx.key -out /tmp/nginx.crt -subj "/CN=my-nginx/O=my-nginx"
```
```bash
kubectl create secret tls nginxsecret --key /tmp/nginx.key --cert /tmp/nginx.crt
```
```bash
$ kubectl create secret tls nginxsecret --key /tmp/nginx.key --cert /tmp/nginx.crt
secret "nginxsecret" created
```
Create a configmap used for the https service
Create a configmap used for the HTTPS service
```bash
kubectl create configmap nginxconfigmap --from-file=samples/https/default.conf
```
```bash
$ kubectl create configmap nginxconfigmap --from-file=samples/https/default.conf
configmap "nginxconfigmap" created
```
## Deploy an https service without Istio sidecar
## Deploy an HTTPS service without Istio sidecar
This section creates a nginx-based https service.
This section creates an NGINX-based HTTPS service.
```bash
kubectl apply -f samples/https/nginx-app.yaml
```
```bash
$ kubectl apply -f samples/https/nginx-app.yaml
...
service "my-nginx" created
replicationcontroller "my-nginx" created
@ -74,7 +67,7 @@ Get the pods
```bash
kubectl get pod
```
```bash
```xxx
NAME READY STATUS RESTARTS AGE
my-nginx-jwwck 2/2 Running 0 1h
sleep-847544bbfc-d27jg 2/2 Running 0 18h
@ -89,7 +82,7 @@ Call my-nginx
```bash
curl https://my-nginx -k
```
```bash
```xxx
...
<h1>Welcome to nginx!</h1>
...
@ -100,23 +93,24 @@ You can actually combine the above three command into one:
```bash
kubectl exec $(kubectl get pod -l app=sleep -o jsonpath={.items..metadata.name}) -c istio-proxy -- curl https://my-nginx -k
```
```bash
```xxx
...
<h1>Welcome to nginx!</h1>
...
```
### Create an https service with Istio sidecar with mTLS disabled
### Create an HTTPS service with the Istio sidecar and mTLS disabled
In "Before you begin" section, the istio control plane is deployed with mTLS
disabled. So you only need to redeploy the nginx https service with sidecar.
In "Before you begin" section, the Istio control plane is deployed with mTLS
disabled. So you only need to redeploy the NGINX HTTPS service with sidecar.
Delete the HTTPS service.
Delete the https service.
```bash
kubectl delete -f nginx-app.yaml
```
Deploy it with sidecar
Deploy it with a sidecar
```bash
kubectl apply -f <(bin/istioctl kube-inject --debug -f samples/https/nginx-app.yaml)
@ -127,7 +121,7 @@ Make sure the pod is up and running
```bash
kubectl get pod
```
```bash
```xxx
NAME READY STATUS RESTARTS AGE
my-nginx-6svcc 2/2 Running 0 1h
sleep-847544bbfc-d27jg 2/2 Running 0 18h
@ -137,7 +131,7 @@ And run
```bash
kubectl exec sleep-847544bbfc-d27jg -c sleep -- curl https://my-nginx -k
```
```bash
```xxx
...
<h1>Welcome to nginx!</h1>
...
@ -147,7 +141,7 @@ If you run from istio-proxy container, it should work as well
```bash
kubectl exec sleep-847544bbfc-d27jg -c istio-proxy -- curl https://my-nginx -k
```
```bash
```xxx
...
<h1>Welcome to nginx!</h1>
...
@ -155,7 +149,7 @@ kubectl exec sleep-847544bbfc-d27jg -c istio-proxy -- curl https://my-nginx -k
Note: this example is borrowed from [kubernetes examples](https://github.com/kubernetes/examples/blob/master/staging/https-nginx/README.md).
### Create an https service with Istio sidecar with mTLS enabled
### Create an HTTPS service with Istio sidecar with mTLS enabled
You need to deploy the Istio control plane with mTLS enabled. If you have an Istio
control plane with mTLS disabled installed, delete it first:
@ -169,7 +163,7 @@ And wait for everything is down, i.e., there is no pod in control plane namespac
```bash
kubectl get pod -n istio-system
```
```bash
```xxx
No resources found.
```
@ -183,7 +177,7 @@ Make sure everything is up and running:
```bash
kubectl get po -n istio-system
```
```bash
```xxx
NAME READY STATUS RESTARTS AGE
istio-ca-58c5856966-k6nm4 1/1 Running 0 2m
istio-ingress-5789d889bc-xzdg2 1/1 Running 0 2m
@ -191,7 +185,7 @@ istio-mixer-65c55bc5bf-8n95w 3/3 Running 0 2m
istio-pilot-6954dcd96d-phh5z 2/2 Running 0 2m
```
Then redeploy the https service and sleep service
Then redeploy the HTTPS service and sleep service
```bash
kubectl delete -f <(bin/istioctl kube-inject --debug -f samples/sleep/sleep.yaml)
@ -215,21 +209,21 @@ And run
```bash
kubectl exec sleep-77f457bfdd-hdknx -c sleep -- curl https://my-nginx -k
```
```bash
```xxx
...
<h1>Welcome to nginx!</h1>
...
```
The reason is that for the workflow "sleep -> sleep-proxy -> nginx-proxy -> nginx",
the whole flow is L7 traffic, and there is L4 mTLS encryption between sleep-proxy
and nginx-proxy. In this case, everthing works fine.
and nginx-proxy. In this case, everything works fine.
However, if you run this command from the istio-proxy container, it will not work.
```bash
kubectl exec sleep-77f457bfdd-hdknx -c istio-proxy -- curl https://my-nginx -k
```
```bash
...
```xxx
curl: (35) gnutls_handshake() failed: Handshake failed
command terminated with exit code 35
```

View File

@ -124,14 +124,14 @@ There are several steps:
The service name and port are defined [here](https://github.com/istio/istio/blob/master/samples/bookinfo/kube/bookinfo.yaml).
Note that Istio uses [Kubernetes service account](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/)
Note that Istio uses [Kubernetes service accounts](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/)
as service identity, which offers stronger security than service name
(refer [here]({{home}}/docs/concepts/security/mutual-tls.html#identity) for more information).
Thus the certificates used in Istio do not have service name, which is the information that curl needs to verify
server identity. As a result, we use curl option '-k' to prevent the curl client from aborting when failing to
Thus the certificates used in Istio do not have service names, which is the information that `curl` needs to verify
server identity. As a result, we use `curl` option `-k` to prevent the `curl` client from aborting when failing to
find and verify the server name (i.e., productpage.ns.svc.cluster.local) in the certificate provided by the server.
Please check secure naming [here]({{home}}/docs/concepts/security/mutual-tls.html#workflow) for more information
Please check [secure naming]({{home}}/docs/concepts/security/mutual-tls.html#workflow) for more information
about how the client verifies the server's identity in Istio.
What we are demonstrating and verifying above is that the server accepts the connection from the client. Try omitting the client's `--key` and `--cert` options and observe that you are not allowed to connect and do not get an HTTP 200.

View File

@ -18,7 +18,7 @@ In this tutorial, you will learn:
## Before you begin
* Understand Isio [mutual TLS authentication]({{home}}/docs/concepts/security/mutual-tls.html) concepts.
* Understand Istio [mutual TLS authentication]({{home}}/docs/concepts/security/mutual-tls.html) concepts.
* Be familiar with [testing Istio mutual TLS authentication]({{home}}/docs/tasks/security/mutual-tls.html).
@ -109,7 +109,7 @@ kubectl exec $(kubectl get pod -l app=sleep -o jsonpath={.items..metadata.name})
}
```
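For reference, the field to clear lives under the `mesh` key of the `istio` configmap. A trimmed sketch of that configmap is shown below; surrounding keys are omitted and the default entry listed is an assumption, so treat it as illustrative only.

```yaml
# Trimmed, illustrative sketch of the istio configmap; only the relevant field is shown.
apiVersion: v1
kind: ConfigMap
metadata:
  name: istio
  namespace: istio-system
data:
  mesh: |
    mtlsExcludedServices: ["kubernetes.default.svc.cluster.local"]   # default entry assumed
```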
Now, run `kubectl edit configmap istio -n istio-system` and clear mtlsExcludedServices and restart pilot after done:
Now, run `kubectl edit configmap istio -n istio-system`, clear `mtlsExcludedServices`, and restart Pilot when done:
```bash
kubectl get pod $(kubectl get pod -l istio=pilot -n istio-system -o jsonpath={.items..metadata.name}) -n istio-system -o yaml | kubectl replace --force -f -

View File

@ -21,11 +21,11 @@ RBAC from [Istio RBAC concept page]({{home}}/docs/concepts/security/rbac.html).
* Deploy the [Bookinfo]({{home}}/docs/guides/bookinfo.html) sample application.
*> Note: The current Istio release may not have the up-to-date Istio RBAC samples. So before you continue, you
need to copy the following configuration files from https://github.com/istio/istio/tree/master/samples/bookinfo/kube to
"samples/bookinfo/kube" directory under where you installed Istio, and replace the original ones. The files include
`bookinfo-add-serviceaccount.yaml`, `istio-rbac-enable.yaml`, `istio-rbac-namespace.yaml`, `istio-rbac-productpage.yaml`,
`istio-rbac-details-reviews.yaml`, `istio-rbac-ratings.yaml`.*
> The current Istio release may not have the up-to-date Istio RBAC samples. So before you continue, you
need to copy the following configuration files from <https://github.com/istio/istio/tree/master/samples/bookinfo/kube> to
the `samples/bookinfo/kube` directory under your Istio installation, and replace the original ones. The files include
`bookinfo-add-serviceaccount.yaml`, `istio-rbac-enable.yaml`, `istio-rbac-namespace.yaml`, `istio-rbac-productpage.yaml`,
`istio-rbac-details-reviews.yaml`, `istio-rbac-ratings.yaml`.
* In this task, we will enable access control based on Service Accounts, which are cryptographically authenticated in the Istio mesh.
In order to give different microservices different access privileges, we will create some service accounts and redeploy Bookinfo
@ -49,9 +49,7 @@ microservices running under them.
deployment "reviews-v3" configured
```
> Note: if you are using a namespace other than `default`,
use `istioctl -n namespace ...` to specify the namespace.
> If you are using a namespace other than `default`, use `istioctl -n namespace ...` to specify the namespace.
Point your browser at the Bookinfo `productpage` (http://$GATEWAY_URL/productpage). You should see:
* "Book Details" section in the lower left part of the page, including type, pages, publisher, etc.
@ -61,16 +59,15 @@ Point your browser at the Bookinfo `productpage` (http://$GATEWAY_URL/productpag
Run the following command to enable Istio RBAC for "default" namespace.
> Note: if you are using a namespace other than `default`, edit the file `samples/bookinfo/kube/istio-rbac-enable.yaml`,
and specify the namespace, say `"your-namespace"`, in the `match` statement in `rule` spec
`"match: destination.namespace == "your-namespace"`.
> If you are using a namespace other than `default`, edit the file `samples/bookinfo/kube/istio-rbac-enable.yaml`,
and specify the namespace, say `"your-namespace"`, in the `match` statement in `rule` spec
`"match: destination.namespace == "your-namespace"`.
```bash
istioctl create -f samples/bookinfo/kube/istio-rbac-enable.yaml
```
> Note: if you have conflicting rules that you set in previous tasks, use `istioctl replace` instead of `istioctl create`.
> If you have conflicting rules that you set in previous tasks, use `istioctl replace` instead of `istioctl create`.
It also defines "requestcontext", which is an instance of the
[authorization template](https://github.com/istio/istio/blob/master/mixer/template/authorization/template.proto).
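A trimmed sketch of such an instance is shown below. The attribute expressions are typical examples and should be treated as assumptions; the shipped sample may use a different set.

```yaml
# Trimmed sketch of a requestcontext instance of the authorization template;
# attribute expressions are illustrative.
apiVersion: "config.istio.io/v1alpha2"
kind: authorization
metadata:
  name: requestcontext
  namespace: istio-system
spec:
  subject:
    user: source.user | ""
    properties:
      namespace: source.namespace | ""
      service: source.service | ""
  action:
    namespace: destination.namespace | ""
    service: destination.service | ""
    method: request.method | ""
    path: request.path | ""
    properties:
      version: destination.labels["version"] | ""
```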
@ -80,7 +77,7 @@ Point your browser at the Bookinfo `productpage` (http://$GATEWAY_URL/productpag
`"PERMISSION_DENIED:handler.rbac.istio-system:RBAC: permission denied."` This is because Istio RBAC is "deny by default",
which means that you need to explicitly define access control policy to grant access to any service.
> Note: There may be delay due to caching on browser and Istio proxy.
> There may be delay due to caching on browser and Istio proxy.
## Namespace-level access control
@ -98,7 +95,7 @@ istioctl create -f samples/bookinfo/kube/istio-rbac-namespace.yaml
```
The policy does the following:
* Creates a ServiceRole "service-viewer" which allows read access to any service in "default" namespace that has "app" label
* Creates a `ServiceRole` "service-viewer" which allows read access to any service in "default" namespace that has "app" label
set to one of the values in ["productpage", "details", "reviews", "ratings"]. Note that there is a "constraint" specifying that
the services must have one of the listed "app" labels.
@ -117,7 +114,7 @@ the services must have one of the listed "app" labels.
values: ["productpage", "details", "reviews", "ratings"]
```
* Creates a ServiceRoleBinding that assign the "service-viewer" role to all services in "istio-system" and "default" namespaces.
* Creates a `ServiceRoleBinding` that assigns the "service-viewer" role to all services in the "istio-system" and "default" namespaces.
```bash
apiVersion: "config.istio.io/v1alpha2"
@ -146,7 +143,7 @@ servicerolebinding "bind-service-viewer" created
Now if you point your browser at Bookinfo `productpage` (http://$GATEWAY_URL/productpage). You should see "Bookinfo Sample" page,
with "Book Details" section in the lower left part and "Book Reviews" section in the lower right part.
> Note: There may be delay due to caching on browser and Istio proxy.
> There may be delay due to caching on browser and Istio proxy.
### Cleanup namespace-level access control
@ -176,7 +173,7 @@ istioctl create -f samples/bookinfo/kube/istio-rbac-productpage.yaml
```
The policy does the following:
* Creates a ServiceRole "productpage-viewer" which allows read access to "productpage" service.
* Creates a `ServiceRole` "productpage-viewer" which allows read access to "productpage" service.
```bash
apiVersion: "config.istio.io/v1alpha2"
@ -190,7 +187,7 @@ The policy does the following:
methods: ["GET"]
```
* Creates a ServiceRoleBinding "bind-productpager-viewer" which assigns "productpage-viewer" role to all users/services.
* Creates a `ServiceRoleBinding` "bind-productpager-viewer" which assigns "productpage-viewer" role to all users/services.
```bash
apiVersion: "config.istio.io/v1alpha2"
@ -211,7 +208,7 @@ page. But there are errors `"Error fetching product details"` and `"Error fetchi
are expected because we have not granted "productpage" service to access "details" and "reviews" services. We will fix the errors
in the following steps.
> Note: There may be delay due to caching on browser and Istio proxy.
> There may be delay due to caching on browser and Istio proxy.
### Step 2. allowing "productpage" service to access "details" and "reviews" services
@ -225,7 +222,7 @@ istioctl create -f samples/bookinfo/kube/istio-rbac-details-reviews.yaml
```
The policy does the following:
* Creates a ServiceRole "details-reviews-viewer" which allows read access to "details" and "reviews" services.
* Creates a `ServiceRole` "details-reviews-viewer" which allows read access to "details" and "reviews" services.
```bash
apiVersion: "config.istio.io/v1alpha2"
@ -239,7 +236,7 @@ The policy does the following:
methods: ["GET"]
```
* Creates a ServiceRoleBinding "bind-details-reviews" which assigns "details-reviews-viewer" role to service
* Creates a `ServiceRoleBinding` "bind-details-reviews" which assigns "details-reviews-viewer" role to service
account "cluster.local/ns/default/sa/bookinfo-productpage" (representing the "productpage" service).
```bash
@ -262,8 +259,7 @@ there is an error `"Ratings service currently unavailable"`. This is because "re
"ratings" service. To fix this issue, you need to grant "reviews" service read access to "ratings" service.
We will show how to do that in the next step.
> Note: There may be delay due to caching on browser and Istio proxy.
> There may be delay due to caching on browser and Istio proxy.
### Step 3. allowing "reviews" service to access "ratings" service
@ -278,7 +274,8 @@ istioctl create -f samples/bookinfo/kube/istio-rbac-ratings.yaml
```
The policy does the following:
* Creates a ServiceRole "ratings-viewer" which allows read access to "ratings" service.
* Creates a `ServiceRole` "ratings-viewer" which allows read access to "ratings" service.
```bash
apiVersion: "config.istio.io/v1alpha2"
@ -292,7 +289,7 @@ The policy does the following:
methods: ["GET"]
```
* Creates a ServiceRoleBinding "bind-ratings" which assigns "ratings-viewer" role to service
* Creates a `ServiceRoleBinding` "bind-ratings" which assigns "ratings-viewer" role to service
account "cluster.local/ns/default/sa/bookinfo-reviews", which represents the "reviews" services.
```bash
@ -312,7 +309,7 @@ account "cluster.local/ns/default/sa/bookinfo-reviews", which represents the "re
Point your browser at the Bookinfo `productpage` (http://$GATEWAY_URL/productpage). Now you should see
the "black" and "red" ratings in "Book Reviews" section.
> Note: There may be delay due to caching on browser and Istio proxy.
> There may be delay due to caching on browser and Istio proxy.
If you would like to only see "red" ratings in "Book Reviews" section, you can do that by specifying that only "reviews"
service at version "v3" can access "ratings" service.
@ -343,7 +340,7 @@ spec:
istioctl delete -f samples/bookinfo/kube/istio-rbac-productpage.yaml
```
Alternatively, you can delete all ServiceRole and ServiceRoleBinding objects by running the following commands:
Alternatively, you can delete all `ServiceRole` and `ServiceRoleBinding` resources by running the following commands:
```bash
kubectl delete servicerole --all
@ -358,4 +355,4 @@ spec:
## What's next
* Learn more about [Istio RBAC]({{home}}/docs/concepts/security/rbac.html).

View File

@ -38,10 +38,8 @@ For the format of the service account in Istio, please refer to the
serviceaccount "bookinfo-productpage" created
deployment "productpage-v1" configured
```
> Note: if you are using a namespace other than `default`,
use `istioctl -n namespace ...` to specify the namespace.
> If you are using a namespace other than `default`,
use `istioctl -n namespace ...` to specify the namespace.
## Access control using _denials_
@ -70,12 +68,13 @@ the `productpage` service.
```
match: destination.labels["app"] == "details" && source.user == "cluster.local/ns/default/sa/bookinfo-productpage"
```
It matches requests coming from the serivce account
It matches requests coming from the service account
"_cluster.local/ns/default/sa/bookinfo-productpage_" on the `details` service.
> Note: If you are using a namespace other than `default`, replace the `default` with your namespace in the value of `source.user`.
> If you are using a namespace other than `default`, replace the `default` with your namespace in the value of `source.user`.
This rule uses the `denier` adapter to deny these requests.
The adapter always denies requests with a pre-configured status code and message.
The adapter always denies requests with a preconfigured status code and message.
The status code and message are specified in the [denier]({{home}}/docs/reference/config/adapters/denier.html)
adapter configuration.
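A minimal sketch of such a denier handler is shown below. The handler name, status code, and message are illustrative and not necessarily the values used in the sample.

```yaml
# Minimal sketch of a denier handler; name, code, and message are illustrative.
apiVersion: "config.istio.io/v1alpha2"
kind: denier
metadata:
  name: denyreviewsv3handler
spec:
  status:
    code: 7            # google.rpc.Code value for PERMISSION_DENIED
    message: Not allowed
```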
@ -105,8 +104,6 @@ the `productpage` service.
* Discover the full [Attribute Vocabulary]({{home}}/docs/reference/config/mixer/attribute-vocabulary.html).
* Read the reference guide to [Writing Config]({{home}}/docs/reference/writing-config.html).
* Understand the differences between Kubernetes network policies and Istio
access control policies from this
[blog]({{home}}/blog/using-network-policy-in-concert-with-istio.html).

View File

@ -10,7 +10,7 @@ redirect_from: /docs/tasks/zipkin-tracing.html
---
{% include home.html %}
This task shows you how Istio-enabled applications
can be configured to collect trace spans using [Zipkin](https://zipkin.io) or [Jaeger](https://jaeger.readthedocs.io).
After completing this task, you should understand all of the assumptions about your
application and how to have it participate in tracing, regardless of what
@ -19,20 +19,19 @@ language/framework/platform you use to build your application.
The [Bookinfo]({{home}}/docs/guides/bookinfo.html) sample is used as the
example application for this task.
## Before you begin
* Set up Istio by following the instructions in the [Installation guide]({{home}}/docs/setup/).
If you didn't start the Zipkin or Jaeger addon during installation,
you can run the following command to start it now.
For zipkin:
For Zipkin:
```bash
kubectl apply -f install/kubernetes/addons/zipkin.yaml
```
For Jaeger:
```bash
@ -41,7 +40,6 @@ example application for this task.
* Deploy the [Bookinfo]({{home}}/docs/guides/bookinfo.html) sample application.
## Accessing the dashboard
### Zipkin
@ -64,7 +62,6 @@ kubectl port-forward -n istio-system $(kubectl get pod -n istio-system -l app=ja
Then open your browser at [http://localhost:16686](http://localhost:16686)
## Generating traces using the Bookinfo sample
With the Bookinfo application up and running, generate trace information by accessing
@ -161,7 +158,7 @@ def getForwardHeaders(request):
```
The reviews application (Java) does something similar:
```java
@GET
@Path("/reviews")
@ -178,13 +175,12 @@ public Response bookReviews(@CookieParam("user") Cookie user,
if(ratings_enabled){
JsonObject ratings = getRatings(user, xreq, xtraceid, xspanid, xparentspanid, xsampled, xflags, xotspan);
```
When you make downstream calls in your applications, make sure to include these headers.
## Cleanup
* Remove the addon tracing configuration:
If you are running with Zipkin, run the following command to clean up:

View File

@ -24,6 +24,7 @@ The [Bookinfo]({{home}}/docs/guides/bookinfo.html) sample application is used
as the example application throughout this task.
## Before you begin
* [Install Istio]({{home}}/docs/setup/) in your cluster and deploy an
application. This task assumes that Mixer is set up in a default configuration
(`--configDefaultNamespace=istio-system`). If you use a different
@ -48,7 +49,7 @@ connect to a running Fluentd daemon, you may need to add a
for Fluentd. The Fluentd configuration to listen for forwarded logs
is:
```
```xml
<source>
type forward
</source>
@ -71,7 +72,7 @@ called `logging`.
Save the following as `logging-stack.yaml`.
```
```yaml
# Logging Namespace. All below are a part of this namespace.
apiVersion: v1
kind: Namespace
@ -285,7 +286,7 @@ kubectl apply -f logging-stack.yaml
You should see the following:
```
```xxx
namespace "logging" created
service "elasticsearch" created
deployment "elasticsearch" created
@ -305,7 +306,7 @@ Istio will generate and collect automatically.
Save the following as `fluentd-istio.yaml`:
```
```yaml
# Configuration for logentry instances
apiVersion: "config.istio.io/v1alpha2"
kind: logentry
@ -393,7 +394,7 @@ example stack.
1. Select `@timestamp` as the Time Filter field name, and click "Create index pattern."
1. Now click "Discover" on the left menu, and start exploring the logs generated
1. Now click "Discover" on the left menu, and start exploring the logs generated
## Cleanup
@ -422,5 +423,3 @@ example stack.
and [Mixer Config]({{home}}/docs/concepts/policy-and-control/mixer-config.html).
* Discover the full [Attribute Vocabulary]({{home}}/docs/reference/config/mixer/attribute-vocabulary.html).
* Read the reference guide to [Writing Config]({{home}}/docs/reference/writing-config.html).

View File

@ -24,7 +24,7 @@ as the example application throughout this task.
value, update the configuration and commands in this task to match the value.
* Install the Prometheus add-on. Prometheus
will be used to verify task success.
```bash
kubectl apply -f install/kubernetes/addons/prometheus.yaml
```
@ -158,16 +158,16 @@ as the example application throughout this task.
```
View values for the new metric via the [Prometheus UI](http://localhost:9090/graph#%5B%7B%22range_input%22%3A%221h%22%2C%22expr%22%3A%22istio_double_request_count%22%2C%22tab%22%3A1%7D%5D).
The provided link opens the Prometheus UI and executes a query for values of
the `istio_double_request_count` metric. The table displayed in the
**Console** tab includes entries similar to:
```xxx
istio_double_request_count{destination="details.default.svc.cluster.local",instance="istio-mixer.istio-system:42422",job="istio-mesh",message="twice the fun!",source="productpage.default.svc.cluster.local"} 2
istio_double_request_count{destination="ingress.istio-system.svc.cluster.local",instance="istio-mixer.istio-system:42422",job="istio-mesh",message="twice the fun!",source="unknown"} 2
istio_double_request_count{destination="productpage.default.svc.cluster.local",instance="istio-mixer.istio-system:42422",job="istio-mesh",message="twice the fun!",source="ingress.istio-system.svc.cluster.local"} 2
istio_double_request_count{destination="reviews.default.svc.cluster.local",instance="istio-mixer.istio-system:42422",job="istio-mesh",message="twice the fun!",source="productpage.default.svc.cluster.local"} 2
```
For more on querying Prometheus for metric values, see the [Querying Istio
@ -200,10 +200,13 @@ automatically generate and report a new metric and a new log stream for all
traffic within the mesh.
The added configuration controlled three pieces of Mixer functionality:
1. Generation of *instances* (in this example, metric values and log entries)
from Istio attributes
1. Creation of *handlers* (configured Mixer adapters) capable of processing
generated *instances*
1. Dispatch of *instances* to *handlers* according to a set of *rules*
### Understanding the metrics configuration
@ -241,7 +244,7 @@ translates received metric instances into prometheus-formatted values that can
be processed by a Prometheus backend. This configuration specified a new
Prometheus metric named `double_request_count`. The Prometheus adapter prepends
the `istio_` namespace to all metric names, therefore this metric will show up
in Promethus as `istio_double_request_count`. The metric has three labels
in Prometheus as `istio_double_request_count`. The metric has three labels
matching the dimensions configured for `doublerequestcount.metric` instances.
For `kind: prometheus` handlers, Mixer instances are matched to Prometheus
@ -249,10 +252,10 @@ metrics via the `instance_name` parameter. The `instance_name` values must be
the fully-qualified name for Mixer instances (example:
`doublerequestcount.metric.istio-system`).
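Putting those pieces together, a `kind: prometheus` handler along the following lines would produce the metric discussed above. The field values follow the names used in this task, but treat the details as a sketch rather than the exact sample configuration.

```yaml
# Sketch of the Prometheus handler for the new metric; details are illustrative.
apiVersion: "config.istio.io/v1alpha2"
kind: prometheus
metadata:
  name: doublehandler
  namespace: istio-system
spec:
  metrics:
  - name: double_request_count                            # surfaces as istio_double_request_count
    instance_name: doublerequestcount.metric.istio-system # fully-qualified instance name
    kind: COUNTER
    label_names:
    - source
    - destination
    - message
```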
The `kind: rule` stanza of config defines a new *rule* named `doubleprom`. The
rule directs Mixer to send all `doublerequestcount.metric` instances to the
`doublehandler.prometheus` handler. Because there is no `match` clause in the
rule, and because the rule is in the configured default configuration namespace
(`istio-system`), the rule is executed for all requests in the mesh.
### Understanding the logs configuration
@ -320,7 +323,4 @@ here to illustrate how to use `match` expressions to control rule execution.
* Discover the full [Attribute
Vocabulary]({{home}}/docs/reference/config/mixer/attribute-vocabulary.html).
* Read the reference guide to [Writing
Config]({{home}}/docs/reference/writing-config.html).
* Refer to the [In-Depth Telemetry]({{home}}/docs/guides/telemetry.html) guide.

View File

@ -63,7 +63,7 @@ the example application throughout this task.
In Kubernetes environments, execute the following command:
```bash
kubectl -n istio-system port-forward $(kubectl -n istio-system get pod -l app=prometheus -o jsonpath='{.items[0].metadata.name}') 9090:9090 &
```
Visit [http://localhost:9090/graph](http://localhost:9090/graph) in your web browser.
@ -95,7 +95,7 @@ the example application throughout this task.
```
istio_request_count{destination_service="reviews.default.svc.cluster.local", destination_version="v3"}
```
This query returns the current total count of all requests to v3 of the reviews service.
- Rate of requests over the past 5 minutes to all `productpage` services:
@ -108,9 +108,9 @@ the example application throughout this task.
Mixer comes with a built-in [Prometheus](https://prometheus.io) adapter that
exposes an endpoint serving generated metric values. The Prometheus add-on is a
Prometheus server that comes pre-configured to scrape Mixer endpoints to collect
Prometheus server that comes preconfigured to scrape Mixer endpoints to collect
the exposed metrics. It provides a mechanism for persistent storage and querying
of Istio metrics.
The configured Prometheus add-on scrapes three endpoints:
1. *istio-mesh* (`istio-mixer.istio-system:42422`): all Mixer-generated mesh

View File

@ -49,7 +49,7 @@ the example application throughout this task.
The output will be similar to:
```
```xxx
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
servicegraph 10.59.253.165 <none> 8088/TCP 30s
```
@ -66,7 +66,7 @@ the example application throughout this task.
Refresh the page a few times (or send the command a few times) to generate a
small amount of traffic.
Note: `$GATEWAY_URL` is the value set in the
> `$GATEWAY_URL` is the value set in the
[Bookinfo]({{home}}/docs/guides/bookinfo.html) guide.
1. Open the Servicegraph UI.
@ -74,11 +74,10 @@ the example application throughout this task.
In Kubernetes environments, execute the following command:
```bash
kubectl -n istio-system port-forward $(kubectl -n istio-system get pod -l app=servicegraph -o jsonpath='{.items[0].metadata.name}') 8088:8088 &
```
Visit
[http://localhost:8088/force/forcegraph.html](http://localhost:8088/force/forcegraph.html)
Visit [http://localhost:8088/force/forcegraph.html](http://localhost:8088/force/forcegraph.html)
in your web browser. Try clicking on a service to see details on
the service. Real time traffic data is shown in a panel below.
@ -109,20 +108,23 @@ the example application throughout this task.
### About the Servicegraph Add-on
The
[Servicegraph](https://github.com/istio/istio/tree/master/addons/servicegraph)
The [Servicegraph](https://github.com/istio/istio/tree/master/addons/servicegraph)
service provides endpoints for generating and visualizing a graph of
services within a mesh. It exposes the following endpoints:
- `/force/forcegraph.html` As explored above, this is an interactive
* `/force/forcegraph.html` As explored above, this is an interactive
[D3.js](https://d3js.org/) visualization.
- `/dotviz` is a static [Graphviz](https://www.graphviz.org/)
* `/dotviz` is a static [Graphviz](https://www.graphviz.org/)
visualization.
- `/dotgraph` provides a
* `/dotgraph` provides a
[DOT](https://en.wikipedia.org/wiki/DOT_(graph_description_language))
serialization.
- `/d3graph` provides a JSON serialization for D3 visualization.
- `/graph` provides a generic JSON serialization.
* `/d3graph` provides a JSON serialization for D3 visualization.
* `/graph` provides a generic JSON serialization.
All endpoints take the query parameters explored above.

View File

@ -27,7 +27,7 @@ as the example application throughout this task.
example configuration and commands.
* Install the Prometheus add-on. Prometheus
will be used to verify task success.
```bash
kubectl apply -f install/kubernetes/addons/prometheus.yaml
```
@ -201,7 +201,7 @@ as the example application throughout this task.
```
View values for the new metric via the [Prometheus UI](http://localhost:9090/graph#%5B%7B%22range_input%22%3A%221h%22%2C%22expr%22%3A%22istio_mongo_received_bytes%22%2C%22tab%22%3A1%7D%5D).
The provided link opens the Prometheus UI and executes a query for values of
the `istio_mongo_received_bytes` metric. The table displayed in the
**Console** tab includes entries similar to:
@ -229,7 +229,7 @@ configuration consisted of _instances_, a _handler_, and a _rule_. Please see
that Task for a complete description of the components of metric collection.
Metrics collection for TCP services differs only in the limited set of
attributes that are available for use in _instances_.
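For example, a TCP-oriented metric instance built from connection-level attributes might look like the sketch below. The instance name and dimension set are assumptions and not necessarily the sample used in this task.

```yaml
# Sketch of a metric instance for TCP traffic using connection attributes; illustrative only.
apiVersion: "config.istio.io/v1alpha2"
kind: metric
metadata:
  name: mongoreceivedbytes
  namespace: istio-system
spec:
  value: connection.received.bytes | 0
  dimensions:
    source_service: source.service | "unknown"
    destination_service: destination.service | "unknown"
  monitored_resource_type: '"UNSPECIFIED"'
```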
### TCP Attributes
@ -273,9 +273,6 @@ protocols within policies.
* Discover the full [Attribute
Vocabulary]({{home}}/docs/reference/config/mixer/attribute-vocabulary.html).
* Read the reference guide to [Writing
Config]({{home}}/docs/reference/writing-config.html).
* Refer to the [In-Depth Telemetry]({{home}}/docs/guides/telemetry.html) guide.
* Learn more about [Querying Istio

View File

@ -26,12 +26,12 @@ the example application throughout this task.
```bash
kubectl apply -f install/kubernetes/addons/prometheus.yaml
```
Use of the Prometheus add-on is _required_ for the Istio Dashboard.
## Viewing the Istio Dashboard
1. To view Istio metrics in a graphical dashboard, install the Grafana add-on.
In Kubernetes environments, execute the following command:
@ -100,7 +100,7 @@ the example application throughout this task.
### About the Grafana add-on
The Grafana add-on is a pre-configured instance of Grafana. The base image
The Grafana add-on is a preconfigured instance of Grafana. The base image
([`grafana/grafana:4.1.2`](https://hub.docker.com/r/grafana/grafana/)) has been
modified to start with both a Prometheus data source and the Istio Dashboard
installed. The base install files for Istio, and Mixer in particular, ship with

View File

@ -15,13 +15,13 @@ This task demonstrates the circuit-breaking capability for resilient application
* Set up Istio by following the instructions in the
[Installation guide]({{home}}/docs/setup/).
* Start the [httpbin](https://github.com/istio/istio/tree/master/samples/httpbin) sample
which will be used as the backend service for our task
```bash
kubectl apply -f <(istioctl kube-inject --debug -f samples/httpbin/httpbin.yaml)
```
## Circuit breaker
@ -52,13 +52,13 @@ Let's set up a scenario to demonstrate the circuit-breaking capabilities of Isti
maxEjectionPercent: 100
EOF
```
2. Verify our destination rule was created correctly:
1. Verify our destination rule was created correctly:
```bash
istioctl get destinationrule httpbin -o yaml
```
```
```xxx
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
@ -79,8 +79,8 @@ Let's set up a scenario to demonstrate the circuit-breaking capabilities of Isti
consecutiveErrors: 1
interval: 1.000s
maxEjectionPercent: 100
```
### Setting up our client
Now that we've set up rules for calling the `httpbin` service, let's create a client we can use to send traffic to our service and see whether we can trip the circuit breaking policies. We're going to use a simple load-testing client called [fortio](https://github.com/istio/fortio). With this client we can control the number of connections, concurrency, and delays of outgoing HTTP calls. In this step, we'll set up a client that is injected with the Istio sidecar proxy so our network interactions are governed by Istio:
@ -88,14 +88,14 @@ Now that we've set up rules for calling the `httpbin` service, let's create a cl
```bash
kubectl apply -f <(istioctl kube-inject --debug -f samples/httpbin/sample-client/fortio-deploy.yaml)
```
Now we should be able to log into that client pod and use the simple fortio tool to call `httpbin`. We'll pass in `-curl` to indicate we just want to make one call:
```bash
FORTIO_POD=$(kubectl get pod | grep fortio | awk '{ print $1 }')
kubectl exec -it $FORTIO_POD -c fortio /usr/local/bin/fortio -- load -curl http://httpbin:8000/get
```
```
```xxx
HTTP/1.1 200 OK
server: envoy
date: Tue, 16 Jan 2018 23:47:00 GMT
@ -106,32 +106,32 @@ content-length: 445
x-envoy-upstream-service-time: 36
{
  "args": {},
  "headers": {
    "Content-Length": "0",
    "Host": "httpbin:8000",
    "User-Agent": "istio/fortio-0.6.2",
    "X-B3-Sampled": "1",
    "X-B3-Spanid": "824fbd828d809bf4",
    "X-B3-Traceid": "824fbd828d809bf4",
    "X-Ot-Span-Context": "824fbd828d809bf4;824fbd828d809bf4;0000000000000000",
    "X-Request-Id": "1ad2de20-806e-9622-949a-bd1d9735a3f4"
  },
  "origin": "127.0.0.1",
  "url": "http://httpbin:8000/get"
}
```
You can see the request succeeded! Now, let's break something.
### Tripping the circuit breaker
In the circuit-breaking settings, we specified `maxConnections: 1` and `http1MaxPendingRequests: 1`. This means that if we exceed one connection and request concurrently, we should see the istio-proxy open the circuit for further requests/connections. Let's try with two concurrent connections (`-c 2`) and send 20 requests (`-n 20`):
```bash
kubectl exec -it $FORTIO_POD -c fortio /usr/local/bin/fortio -- load -c 2 -qps 0 -n 20 -loglevel Warning http://httpbin:8000/get
```
```
```xxx
Fortio 0.6.2 running at 0 queries per second, 2->2 procs, for 5s: http://httpbin:8000/get
Starting at max qps with 2 thread(s) [gomax 2] for exactly 20 calls (10 per thread + 0)
23:51:10 W http.go:617> Parsed non ok code 503 (HTTP/1.1 503)
@ -158,10 +158,10 @@ Response Header Sizes : count 20 avg 218.85 +/- 50.21 min 0 max 231 sum 4377
Response Body/Total Sizes : count 20 avg 652.45 +/- 99.9 min 217 max 676 sum 13049
All done 20 calls (plus 0 warmup) 10.215 ms avg, 187.8 qps
```
We see almost all requests made it through!
```xxx
Code 200 : 19 (95.0 %)
Code 503 : 1 (5.0 %)
```
@ -171,7 +171,7 @@ The istio-proxy does allow for some leeway. Let's bring the number of concurrent
```bash
kubectl exec -it $FORTIO_POD -c fortio /usr/local/bin/fortio -- load -c 3 -qps 0 -n 20 -loglevel Warning http://httpbin:8000/get
```
```
```xxx
Fortio 0.6.2 running at 0 queries per second, 2->2 procs, for 5s: http://httpbin:8000/get
Starting at max qps with 3 thread(s) [gomax 2] for exactly 30 calls (10 per thread + 0)
23:51:51 W http.go:617> Parsed non ok code 503 (HTTP/1.1 503)
@ -213,7 +213,7 @@ All done 30 calls (plus 0 warmup) 5.336 ms avg, 422.2 qps
Now we start to see the circuit breaking behavior we expect.
```
```xxx
Code 200 : 19 (63.3 %)
Code 503 : 11 (36.7 %)
```
@ -223,19 +223,19 @@ Only 63.3% of the requests made it through and the rest were trapped by circuit
```bash
kubectl exec -it $FORTIO_POD -c istio-proxy -- sh -c 'curl localhost:15000/stats' | grep httpbin | grep pending
```
```
```xxx
cluster.out.httpbin.springistio.svc.cluster.local|http|version=v1.upstream_rq_pending_active: 0
cluster.out.httpbin.springistio.svc.cluster.local|http|version=v1.upstream_rq_pending_failure_eject: 0
cluster.out.httpbin.springistio.svc.cluster.local|http|version=v1.upstream_rq_pending_overflow: 12
cluster.out.httpbin.springistio.svc.cluster.local|http|version=v1.upstream_rq_pending_total: 39
```
We see `12` for the `upstream_rq_pending_overflow` value which means `12` calls so far have been flagged for circuit breaking.
## Cleaning up
1. Remove the rules.
```bash
istioctl delete destinationrule httpbin
```

View File

@ -27,6 +27,7 @@ This task describes how to configure Istio to expose external TCP services to ap
**Note**: any pod that you can execute `curl` from is good enough.
## Using Istio external services for external TCP traffic
In this task we access `wikipedia.org` by HTTPS originated by the application. This task demonstrates the use case where an application cannot use HTTP with TLS origination by the sidecar proxy. Using HTTP with TLS origination by the sidecar proxy is described in the [Control Egress Traffic]({{home}}/docs/tasks/traffic-management-v1alpha3/egress.html) task. In that task, `https://google.com` was accessed by issuing HTTP requests to `http://www.google.com:443`.
The HTTPS traffic originated by the application will be treated by Istio as _opaque_ TCP. To enable such traffic, we define a TCP external service on port 443. In TCP external services, as opposed to HTTP-based external services, the destinations are specified by IPs or by blocks of IPs in [CIDR notation](https://tools.ietf.org/html/rfc2317).
@ -34,7 +35,9 @@ The HTTPS traffic originated by the application will be treated by Istio as _opa
Let's assume for the sake of this example that we want to access `wikipedia.org` by the domain name. This means that we have to specify all the IPs of `wikipedia.org` in our TCP external service. Fortunately, the IPs of `wikipedia.org` are published [here](https://www.mediawiki.org/wiki/Wikipedia_Zero/IP_Addresses). It is a list of IP blocks in [CIDR notation](https://tools.ietf.org/html/rfc2317): `91.198.174.192/27`, `103.102.166.224/27`, and more.
## Creating an external service
Let's create an external service to enable TCP access to `wikipedia.org`:
```bash
cat <<EOF | istioctl create -f -
apiVersion: networking.istio.io/v1alpha3
@ -66,25 +69,25 @@ This command instructs the Istio proxy to forward requests on port 443 of any of
kubectl exec -it $(kubectl get pod -l app=sleep -o jsonpath={.items..metadata.name}) -c sleep bash
```
2. Make a request and verify that we can access https://www.wikipedia.org successfully:
1. Make a request and verify that we can access https://www.wikipedia.org successfully:
```bash
curl -o /dev/null -s -w "%{http_code}\n" https://www.wikipedia.org
```
```bash
```xxx
200
```
We should see `200` printed as the output, which is the HTTP code _OK_.
3. Now let's fetch the current number of the articles available on Wikipedia in the English language:
1. Now let's fetch the current number of the articles available on Wikipedia in the English language:
```bash
curl -s https://en.wikipedia.org/wiki/Main_Page | grep articlecount | grep 'Special:Statistics'
```
The output should be similar to:
```bash
```xxx
<div id="articlecount" style="font-size:85%;"><a href="/wiki/Special:Statistics" title="Special:Statistics">5,563,121</a> articles in <a href="/wiki/English_language" title="English language">English</a></div>
```

View File

@ -12,7 +12,7 @@ type: markdown
By default, Istio-enabled services are unable to access URLs outside of the cluster because
iptables is used in the pod to transparently redirect all outbound traffic to the sidecar proxy,
which only handles intra-cluster destinations.
This task describes how to configure Istio to expose external services to Istio-enabled clients.
You'll learn how to enable access to external services by defining `ExternalService` configurations,
or alternatively, to simply bypass the Istio proxy for a specific range of IPs.
@ -24,7 +24,7 @@ or alternatively, to simply bypass the Istio proxy for a specific range of IPs.
* Start the [sleep](https://github.com/istio/istio/tree/master/samples/sleep) sample
which will be used as a test source for external calls.
```bash
kubectl apply -f <(istioctl kube-inject -f samples/sleep/sleep.yaml)
```
@ -34,7 +34,7 @@ or alternatively, to simply bypass the Istio proxy for a specific range of IPs.
## Configuring Istio external services
Using Istio `ExternalService` configurations, you can access any publicly accessible service
from within your Istio cluster. In this task we will use
[httpbin.org](http://httpbin.org) and [www.google.com](http://www.google.com) as examples.
### Configuring the external services
@ -57,7 +57,7 @@ from within your Istio cluster. In this task we will use
EOF
```
2. Create an `ExternalService` to allow access to an external HTTPS service:
1. Create an `ExternalService` to allow access to an external HTTPS service:
```bash
cat <<EOF | istioctl create -f -
@ -86,7 +86,7 @@ from within your Istio cluster. In this task we will use
```
Notice that we also create a corresponding `DestinationRule` to
initiate TLS for connections to the HTTPS service.
Callers must access this service using HTTP on port 443 and Istio will upgrade
the connection to HTTPS.
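The TLS-origination part of that `DestinationRule` looks roughly like the sketch below. The rule name follows the cleanup command used later in this task, but the service-name field and overall shape are assumptions for this Istio version.

```yaml
# Sketch of a DestinationRule that originates TLS toward the external HTTPS service;
# the service-name field and exact structure are assumed.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: google-ext
spec:
  name: www.google.com
  trafficPolicy:
    tls:
      mode: SIMPLE    # originate TLS when forwarding to the external service
```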
@ -94,19 +94,19 @@ the connection to HTTPS.
1. Exec into the pod being used as the test source. For example,
if you are using the sleep service, run the following commands:
```bash
export SOURCE_POD=$(kubectl get pod -l app=sleep -o jsonpath={.items..metadata.name})
kubectl exec -it $SOURCE_POD -c sleep bash
```
2. Make a request to the external HTTP service:
1. Make a request to the external HTTP service:
```bash
curl http://httpbin.org/headers
```
3. Make a request to the external HTTPS service.
1. Make a request to the external HTTPS service.
External services of type HTTPS must be accessed over HTTP with the port specified in the request:
```bash
@ -172,11 +172,10 @@ to set a timeout rule on calls to the httpbin.org service.
This time a 504 (Gateway Timeout) appears after 3 seconds.
Although httpbin.org was waiting 5 seconds, Istio cut off the request at 3 seconds.
## Calling external services directly
The Istio `ExternalService` currently only supports HTTP/HTTPS requests.
If you want to access services with other protocols (e.g., mongodb://host/database),
or if you want to completely bypass Istio for a specific IP range,
you will need to configure the source service's Envoy sidecar to prevent it from
[intercepting]({{home}}/docs/concepts/traffic-management/request-routing.html#communication-between-services)
@ -187,7 +186,7 @@ when starting the service.
The simplest way to use the `--includeIPRanges` option is to pass it the IP range(s)
used for internal cluster services, thereby excluding external IPs from being redirected
to the sidecar proxy.
The values used for internal IP range(s), however, depend on where your cluster is running.
For example, with Minikube the range is 10.0.0.1/24, so you would start the sleep service like this:
```bash
@ -204,7 +203,7 @@ On IBM Cloud Private, use:
A sample output is as follows:
```
```xxx
service_cluster_ip_range: 10.0.0.1/24
```
@ -226,7 +225,7 @@ need to run the `gcloud container clusters describe` command to determine the ra
```bash
gcloud container clusters describe XXXXXXX --zone=XXXXXX | grep -e clusterIpv4Cidr -e servicesIpv4Cidr
```
```
```xxx
clusterIpv4Cidr: 10.4.0.0/14
servicesIpv4Cidr: 10.7.240.0/20
```
@ -240,7 +239,6 @@ On Azure Container Service(ACS), use:
kubectl apply -f <(istioctl kube-inject -f samples/sleep/sleep.yaml --includeIPRanges=10.244.0.0/16,10.240.0.0/16)
```
After starting your service this way, the Istio sidecar will only intercept and manage internal requests
within the cluster. Any external request will simply bypass the sidecar and go straight to its intended
destination.
@ -250,28 +248,26 @@ export SOURCE_POD=$(kubectl get pod -l app=sleep -o jsonpath={.items..metadata.n
kubectl exec -it $SOURCE_POD -c sleep curl http://httpbin.org/headers
```
## Understanding what happened
In this task we looked at two ways to call external services from within an Istio cluster:
In this task we looked at two ways to call external services from an Istio mesh:
1. Using an `ExternalService` (recommended)
2. Configuring the Istio sidecar to exclude external IPs from its remapped IP table
1. Configuring the Istio sidecar to exclude external IPs from its remapped IP table
The first approach (`ExternalService`) only supports HTTP(S) requests, but allows
you to use all of the same Istio service mesh features for calls to services within or outside
of the cluster. We demonstrated this by setting a timeout rule for calls to an external service.
The second approach bypasses the Istio sidecar proxy, giving your services direct access to any
external URL. However, configuring the proxy this way requires
cloud-provider-specific knowledge and configuration.
## Cleanup
1. Remove the rules.
```bash
istioctl delete externalservice httpbin-ext google-ext
istioctl delete destinationrule google-ext
@ -284,7 +280,8 @@ cloud provider specific knowledge and configuration.
kubectl delete -f samples/sleep/sleep.yaml
```
## ExternalService and Access Control
## `ExternalService` and Access Control
Note that Istio `ExternalService` is **not a security feature**. It enables access to services outside the service mesh. It is up to the user to deploy appropriate security mechanisms, such as firewalls, to prevent unauthorized access to the external services. We are working on adding access control support for external services.
## What's next

View File

@ -18,7 +18,7 @@ is needed to allow Istio features, for example, monitoring and route rules, to b
Istio provides an Envoy-based ingress controller that implements very limited support for standard Kubernetes `Ingress` resources
as well as full support for an alternative specification,
[Istio Gateway]({{home}}/docs/reference/config/istio.networking.v1alpha3.html#Gateway).
Using a `Gateway` is the recommended approach for configuring ingress traffic for Istio services.
It is significantly more functional, not to mention the only option for non-Kubernetes environments.
This task describes how to configure Istio to expose a service outside of the service mesh using either specification.
@ -27,9 +27,9 @@ This task describes how to configure Istio to expose a service outside of the se
* Set up Istio by following the instructions in the
[Installation guide]({{home}}/docs/setup/).
* Make sure your current directory is the `istio` directory.
* Start the [httpbin](https://github.com/istio/istio/tree/master/samples/httpbin) sample,
which will be used as the destination service to be exposed externally.
@ -55,12 +55,12 @@ This task describes how to configure Istio to expose a service outside of the se
## Configuring ingress using an Istio Gateway resource (recommended)
> Note: This is still a WIP and not working yet.
> This is still a work in progress and is not yet functional.
An [Istio Gateway]({{home}}/docs/reference/config/istio.networking.v1alpha3.html#Gateway) is the preferred
model for configuring ingress traffic in Istio.
An ingress `Gateway` describes a load balancer operating at the edge of the mesh receiving incoming
HTTP/TCP connections.
It configures exposed ports, protocols, etc.,
but, unlike [Kubernetes Ingress Resources](https://kubernetes.io/docs/concepts/services-networking/ingress/),
does not include any traffic routing configuration. Traffic routing for ingress traffic is instead configured
@ -90,10 +90,10 @@ using Istio routing rules, exactly in the same was as for internal service reque
privateKey: /tmp/tls.key
EOF
```
Notice that a single `Gateway` specification can configure multiple ports; in our case, a simple HTTP port (80) and
a secure HTTPS port (443).
1. Configure routes for traffic entering via the `Gateway`
```bash
@ -125,20 +125,20 @@ using Istio routing rules, exactly in the same was as for internal service reque
number: 8000
name: httpbin
EOF
```
Here we've created a [VirtualService]({{home}}/docs/reference/config/istio.networking.v1alpha3.html#VirtualService)
Here we've created a [virtual service]({{home}}/docs/reference/config/istio.networking.v1alpha3.html#VirtualService)
configuration for the `httpbin` service, containing two route rules that allow traffic for paths `/status` and `/delay`.
The [gateways]({{home}}/docs/reference/config/istio.networking.v1alpha3.html#VirtualService.gateways) list
specifies that only requests through our `httpbin-gateway` are allowed.
All other external requests will be rejected with a 404 response.
Note that in this configuration internal requests from other services in the mesh are not subject to these rules,
but instead will simply default to round-robin routing. To apply these (or other rules) to internal calls,
we could add the special value `mesh` to the list of `gateways`.
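For instance, the `gateways` list of the virtual service would then look like the following sketch (the `hosts` value shown is an assumption):

```yaml
# Sketch: adding the reserved value `mesh` so the same routes also apply to
# requests from other services inside the mesh; the hosts value is assumed.
spec:
  hosts:
  - "*"
  gateways:
  - httpbin-gateway
  - mesh
```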
### Verifying a Gateway
The proxy instances implementing a particular `Gateway` configuration can be specified using a
[selector]({{home}}/docs/reference/config/istio.networking.v1alpha3.html#Gateway.selector) field.
If not specified, as in our case, the `Gateway` will be implemented by the default `istio-ingress` controller.
@ -176,7 +176,7 @@ Therefore, to test our `Gateway` we will send requests to the `istio-ingress` se
curl -I http://$INGRESS_HOST/status/200
```
```
```xxx
HTTP/1.1 200 OK
server: envoy
date: Mon, 29 Jan 2018 04:45:49 GMT
@ -191,7 +191,7 @@ Therefore, to test our `Gateway` we will send requests to the `istio-ingress` se
curl -I -k https://$SECURE_INGRESS_HOST/status/200
```
```
```xxx
HTTP/1.1 200 OK
server: envoy
date: Mon, 29 Jan 2018 04:45:49 GMT
@ -209,7 +209,7 @@ Therefore, to test our `Gateway` we will send requests to the `istio-ingress` se
curl -I http://$INGRESS_HOST/headers
```
```
```xxx
HTTP/1.1 404 Not Found
date: Mon, 29 Jan 2018 04:45:49 GMT
server: envoy
@ -220,7 +220,7 @@ Therefore, to test our `Gateway` we will send requests to the `istio-ingress` se
curl -I https://$SECURE_INGRESS_HOST/headers
```
```
```xxx
HTTP/1.1 404 Not Found
date: Mon, 29 Jan 2018 04:45:49 GMT
server: envoy
@ -234,12 +234,12 @@ specification, with the following differences:
1. Istio `Ingress` specification contains a `kubernetes.io/ingress.class: istio` annotation.
2. All other annotations are ignored.
1. All other annotations are ignored.
3. Path syntax is [c++11 regex format](http://en.cppreference.com/w/cpp/regex/ecmascript)
1. Path syntax is [c++11 regex format](http://en.cppreference.com/w/cpp/regex/ecmascript)
Note that `Ingress` traffic is not affected by routing rules configured for a backend
(i.e., an Istio `VirtualService` cannot be combined with an `Ingress` specification).
Traffic splitting, fault injection, mirroring, header match, etc., will not work for ingress traffic.
A `DestinationRule` associated with the backend service will, however, work as expected.
@ -275,19 +275,19 @@ service declaration.
servicePort: 8000
EOF
```
### Verifying simple Ingress
1. Determine the ingress URL:
* If your cluster is running in an environment that supports external load balancers,
- If your cluster is running in an environment that supports external load balancers,
use the ingress' external address:
```bash
kubectl get ingress simple-ingress -o wide
```
```bash
```xxx
NAME HOSTS ADDRESS PORTS AGE
simple-ingress * 130.211.10.121 80 1d
```
@ -297,37 +297,37 @@ service declaration.
```
* If load balancers are not supported, use the ingress controller pod's hostIP:
```bash
kubectl -n istio-system get po -l istio=ingress -o jsonpath='{.items[0].status.hostIP}'
```
```xxx
169.47.243.100
```
along with the istio-ingress service's nodePort for port 80:
```bash
kubectl -n istio-system get svc istio-ingress
```
```xxx
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
istio-ingress 10.10.10.155 <pending> 80:31486/TCP,443:32254/TCP 32m
```
```bash
export INGRESS_HOST=169.47.243.100:31486
```
1. Access the httpbin service using _curl_:
```bash
curl -I http://$INGRESS_HOST/status/200
```
```xxx
HTTP/1.1 200 OK
server: envoy
date: Mon, 29 Jan 2018 04:45:49 GMT
@ -345,7 +345,7 @@ service declaration.
curl -I http://$INGRESS_HOST/headers
```
```xxx
HTTP/1.1 404 Not Found
date: Mon, 29 Jan 2018 04:45:49 GMT
server: envoy
@ -359,8 +359,8 @@ service declaration.
Create the secret `istio-ingress-certs` in namespace `istio-system` using `kubectl`. The Istio ingress controller
will automatically load the secret.
> The secret MUST be called `istio-ingress-certs` in the `istio-system` namespace, or it will not
be mounted and available to the Istio ingress controller.
```bash
kubectl create -n istio-system secret tls istio-ingress-certs --key /tmp/tls.key --cert /tmp/tls.crt
@ -369,7 +369,7 @@ service declaration.
Note that by default all service accounts in the `istio-system` namespace can access this ingress key/cert,
which risks leaking the key/cert. You can change the Role-Based Access Control (RBAC) rules to protect them.
See (Link TBD) for details.
1. Create the `Ingress` specification for the httpbin service
```bash
@ -397,7 +397,7 @@ service declaration.
EOF
```
> Because SNI is not yet supported, Envoy currently only allows a single TLS secret in the ingress.
> That means the secretName field in ingress resource is not used.
### Verifying secure Ingress
@ -411,7 +411,7 @@ service declaration.
kubectl get ingress secure-ingress -o wide
```
```xxx
NAME HOSTS ADDRESS PORTS AGE
secure-ingress * 130.211.10.121 80 1d
```
@ -426,7 +426,7 @@ service declaration.
kubectl -n istio-system get po -l istio=ingress -o jsonpath='{.items[0].status.hostIP}'
```
```xxx
169.47.243.100
```
@ -436,7 +436,7 @@ service declaration.
kubectl -n istio-system get svc istio-ingress
```
```xxx
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
istio-ingress 10.10.10.155 <pending> 80:31486/TCP,443:32254/TCP 32m
```
@ -451,7 +451,7 @@ service declaration.
curl -I -k https://$SECURE_INGRESS_HOST/status/200
```
```xxx
HTTP/1.1 200 OK
server: envoy
date: Mon, 29 Jan 2018 04:45:49 GMT
@ -469,7 +469,7 @@ service declaration.
curl -I -k https://$SECURE_INGRESS_HOST/headers
```
```xxx
HTTP/1.1 404 Not Found
date: Mon, 29 Jan 2018 04:45:49 GMT
server: envoy
@ -491,19 +491,19 @@ and may be especially useful when moving existing Kubernetes applications to Ist
## Cleanup
1. Remove the `Gateway` configuration.
```bash
kubectl delete gateway httpbin-gateway
```
1. Remove the `Ingress` configuration.
```bash
kubectl delete ingress simple-ingress secure-ingress
```
1. Remove the routing rule and secret.
```bash
istioctl delete virtualservice httpbin
kubectl delete -n istio-system secret istio-ingress-certs
```
@ -11,12 +11,11 @@ type: markdown
This task demonstrates the traffic shadowing/mirroring capabilities of Istio. Traffic mirroring is a powerful concept that allows feature teams to bring changes to production with as little risk as possible. Mirroring brings a copy of live traffic to a mirrored service and happens out of band of the critical request path for the primary service.
## Before you begin
* Setup Istio by following the instructions in the
[Installation guide]({{home}}/docs/setup/).
* Start two versions of the `httpbin` service that have access logging enabled
httpbin-v1:
@ -72,7 +71,7 @@ EOF
httpbin Kubernetes service:
```bash
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Service
@ -87,14 +86,13 @@ spec:
selector:
app: httpbin
EOF
```
* Start the `sleep` service so we can use `curl` to provide load
sleep service:
```bash
cat <<EOF | istioctl kube-inject -f - | kubectl create -f -
apiVersion: extensions/v1beta1
kind: Deployment
@ -113,14 +111,11 @@ spec:
command: ["/bin/sleep","infinity"]
imagePullPolicy: IfNotPresent
EOF
```
## Mirroring
Let's set up a scenario to demonstrate the traffic-mirroring capabilities of Istio. We have two versions of our `httpbin` service. By default Kubernetes will load balance across both versions of the service. We'll use Istio to force all traffic to v1 of the `httpbin` service.
### Creating default routing policy
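A default policy that pins all traffic to the `v1` subset can be sketched as follows (v1alpha3 API; the subset names are assumed to map onto the `version` labels carried by the two deployments):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: httpbin
spec:
  host: httpbin
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: httpbin
spec:
  hosts:
  - httpbin
  http:
  - route:
    - destination:
        host: httpbin
        subset: v1    # all traffic goes to v1 until mirroring is added below
      weight: 100
```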
@ -164,16 +159,16 @@ Now all traffic should go to `httpbin v1` service. Let's try sending in some tra
export SLEEP_POD=$(kubectl get pod -l app=sleep -o jsonpath={.items..metadata.name})
kubectl exec -it $SLEEP_POD -c sleep -- sh -c 'curl http://httpbin:8080/headers'
```
```xxx
{
  "headers": {
"Accept": "*/*",
"Content-Length": "0",
"Host": "httpbin:8080",
"User-Agent": "curl/7.35.0",
"X-B3-Sampled": "1",
"X-B3-Spanid": "eca3d7ed8f2e6a0a",
"X-B3-Traceid": "eca3d7ed8f2e6a0a",
"X-Ot-Span-Context": "eca3d7ed8f2e6a0a;eca3d7ed8f2e6a0a;0000000000000000"
}
}
@ -185,7 +180,7 @@ If we check the logs for `v1` and `v2` of our `httpbin` pods, we should see acce
export V1_POD=$(kubectl get pod -l app=httpbin,version=v1 -o jsonpath={.items..metadata.name})
kubectl logs -f $V1_POD -c httpbin
```
```xxx
127.0.0.1 - - [07/Mar/2018:19:02:43 +0000] "GET /headers HTTP/1.1" 200 321 "-" "curl/7.35.0"
```
@ -193,11 +188,11 @@ kubectl logs -f $V1_POD -c httpbin
export V2_POD=$(kubectl get pod -l app=httpbin,version=v2 -o jsonpath={.items..metadata.name})
kubectl logs -f $V2_POD -c httpbin
```
```xxx
<none>
```
1. Change the route rule to mirror traffic to v2
```bash
cat <<EOF | istioctl replace -f -
@ -220,8 +215,7 @@ spec:
EOF
```
This route rule specifies we route 100% of the traffic to v1. The last stanza specifies we want to mirror to the `httpbin v2` service. When traffic gets mirrored, the requests are sent to the mirrored service with its Host/Authority header appended with *-shadow*. For example, *cluster-1* becomes *cluster-1-shadow*. Also important to realize is that these requests are mirrored as "fire and forget", i.e., the responses are discarded.
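A sketch of what such a mirroring rule might look like in the v1alpha3 API (field names can vary between releases):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: httpbin
spec:
  hosts:
  - httpbin
  http:
  - route:
    - destination:
        host: httpbin
        subset: v1
      weight: 100
    mirror:
      host: httpbin
      subset: v2    # mirrored copies are fire-and-forget; their responses are discarded
```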
Now if we send in traffic:
@ -234,7 +228,7 @@ We should see access logging for both `v1` and `v2`. The access logs created in
```bash
kubectl logs -f $V1_POD -c httpbin
```
```xxx
127.0.0.1 - - [07/Mar/2018:19:02:43 +0000] "GET /headers HTTP/1.1" 200 321 "-" "curl/7.35.0"
127.0.0.1 - - [07/Mar/2018:19:26:44 +0000] "GET /headers HTTP/1.1" 200 321 "-" "curl/7.35.0"
```
@ -242,15 +236,14 @@ kubectl logs -f $V1_POD -c httpbin
```bash
kubectl logs -f $V2_POD -c httpbin
```
```xxx
127.0.0.1 - - [07/Mar/2018:19:26:44 +0000] "GET /headers HTTP/1.1" 200 361 "-" "curl/7.35.0"
```
## Cleaning up
1. Remove the rules.
```bash
istioctl delete virtualservice httpbin
istioctl delete destinationrule httpbin
```
@ -27,7 +27,7 @@ star ratings.
This is because without an explicit default version set, Istio will
route requests to all available versions of a service in a random fashion.
> This task assumes you don't have any routes set yet. If you've already created conflicting route rules for the sample,
you'll need to use `replace` rather than `create` in the following command.
1. Set the default version for all microservices to v1.
@ -36,7 +36,7 @@ route requests to all available versions of a service in a random fashion.
istioctl create -f samples/bookinfo/routing/route-rule-all-v1.yaml
```
> In a Kubernetes deployment of Istio, you can replace `istioctl`
> with `kubectl` in the above, and for all other CLI commands.
> Note, however, that `kubectl` currently does not provide input validation.
@ -107,7 +107,7 @@ route requests to all available versions of a service in a random fashion.
---
```
> The corresponding `subset` definitions can be displayed using `istioctl get destinationrules -o yaml`.
Since rule propagation to the proxies is asynchronous, you should wait a few seconds for the rules
to propagate to all pods before attempting to access the application.
@ -1,7 +1,7 @@
---
title: Setting Request Timeouts
overview: This task shows you how to set up request timeouts in Envoy using Istio.
order: 28
layout: docs
@ -31,7 +31,7 @@ By default, the timeout is 15 seconds, but in this task we'll override the `revi
timeout to 1 second.
To see its effect, however, we'll also introduce an artificial 2 second delay in calls
to the `ratings` service.
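For orientation, a 1 second timeout on the `reviews` route expressed in the v1alpha3 API looks roughly like the sketch below; the rule applied in the steps that follow may differ in detail:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v2
    timeout: 1s    # calls to reviews that take longer than this are failed
```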
1. Route requests to v2 of the `reviews` service, i.e., a version that calls the `ratings` service
```bash
@ -80,7 +80,7 @@ to the `ratings` service.
but there is a 2 second delay whenever you refresh the page.
1. Now add a 1 second request timeout for calls to the `reviews` service
```bash
cat <<EOF | istioctl replace -f -
apiVersion: networking.istio.io/v1alpha3
@ -107,11 +107,11 @@ to the `ratings` service.
## Understanding what happened
In this task, you used Istio to set the request timeout for calls to the `reviews`
microservice to 1 second (instead of the default 15 seconds).
Since the `reviews` service subsequently calls the `ratings` service when handling requests,
you used Istio to inject a 2 second delay in calls to `ratings`, so that you would cause the
`reviews` service to take longer than 1 second to complete and consequently you could see the
timeout in action.
You observed that the Bookinfo productpage (which calls the `reviews` service to populate the page),
instead of displaying reviews, displayed
@ -128,7 +128,7 @@ More details can be found [here]({{home}}/docs/concepts/traffic-management/handl
One more thing to note about timeouts in Istio is that in addition to overriding them in route rules,
as you did in this task, they can also be overridden on a per-request basis if the application adds
an "x-envoy-upstream-rq-timeout-ms" header on outbound requests. In the header
the timeout is specified in millisecond (instead of second) units.
## Cleanup
@ -70,7 +70,7 @@ two steps: 50%, 100%.
1. Refresh the `productpage` in your browser and you should now see *red* colored star ratings approximately 50% of the time.
> With the current Envoy sidecar implementation, you may need to refresh the `productpage` many times
> to see the proper distribution. It may require 15 refreshes or more before you see any change. You can modify the rules to route 90% of the traffic to v3 to see red stars more often.
1. When version v3 of the `reviews` microservice is considered stable, we can route 100% of the traffic to `reviews:v3`:
@ -15,13 +15,13 @@ This task demonstrates the circuit-breaking capability for resilient application
* Setup Istio by following the instructions in the
[Installation guide]({{home}}/docs/setup/).
* Start the [httpbin](https://github.com/istio/istio/tree/master/samples/httpbin) sample
which will be used as the backend service for our task
```bash
kubectl apply -f <(istioctl kube-inject --debug -f samples/httpbin/httpbin.yaml)
```
## Circuit breaker
@ -36,7 +36,8 @@ Let's set up a scenario to demonstrate the circuit-breaking capabilities of Isti
istioctl create -f samples/httpbin/routerules/httpbin-v1.yaml
```
1. Create a [destination policy]({{home}}/docs/reference/config/istio.routing.v1alpha1.html#CircuitBreaker) to specify our circuit breaking settings when
calling the `httpbin` service (a sketch of the full policy appears after these steps):
```bash
cat <<EOF | istioctl create -f -
@ -60,17 +61,17 @@ Let's set up a scenario to demonstrate the circuit-breaking capabilities of Isti
httpMaxRequestsPerConnection: 1
EOF
```
1. Verify our destination policy was created correctly:
```bash
istioctl get destinationpolicy
```
```xxx
NAME KIND NAMESPACE
httpbin-circuit-breaker DestinationPolicy.v1alpha2.config.istio.io istio-samples
```
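For reference, a destination policy carrying these settings might look roughly like the following sketch (apiVersion `config.istio.io/v1alpha2`; only the three thresholds discussed in this task are shown):

```yaml
apiVersion: config.istio.io/v1alpha2
kind: DestinationPolicy
metadata:
  name: httpbin-circuit-breaker
spec:
  destination:
    name: httpbin
  circuitBreaker:
    simpleCb:
      maxConnections: 1              # at most one TCP connection to httpbin
      httpMaxPendingRequests: 1      # at most one request may wait in the queue
      httpMaxRequestsPerConnection: 1
```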
### Setting up our client
Now that we've set up rules for calling the `httpbin` service, let's create a client we can use to send traffic to our service and see whether we can trip the circuit breaking policies. We're going to use a simple load-testing client called [fortio](https://github.com/istio/fortio). With this client we can control the number of connections, concurrency, and delays of outgoing HTTP calls. In this step, we'll set up a client that is injected with the istio sidecar proxy so our network interactions are governed by Istio:
@ -78,14 +79,14 @@ Now that we've set up rules for calling the `httpbin` service, let's create a cl
```bash
kubectl apply -f <(istioctl kube-inject --debug -f samples/httpbin/sample-client/fortio-deploy.yaml)
```
Now we should be able to log into that client pod and use the simple fortio tool to call `httpbin`. We'll pass in `-curl` to indicate we just want to make one call:
```bash
FORTIO_POD=$(kubectl get pod | grep fortio | awk '{ print $1 }')
kubectl exec -it $FORTIO_POD -c fortio /usr/local/bin/fortio -- load -curl http://httpbin:8000/get
```
```xxx
HTTP/1.1 200 OK
server: envoy
date: Tue, 16 Jan 2018 23:47:00 GMT
@ -96,32 +97,32 @@ content-length: 445
x-envoy-upstream-service-time: 36
{
  "args": {},
  "headers": {
"Content-Length": "0",
"Host": "httpbin:8000",
"User-Agent": "istio/fortio-0.6.2",
"X-B3-Sampled": "1",
"X-B3-Spanid": "824fbd828d809bf4",
"X-B3-Traceid": "824fbd828d809bf4",
"X-Ot-Span-Context": "824fbd828d809bf4;824fbd828d809bf4;0000000000000000",
"X-Request-Id": "1ad2de20-806e-9622-949a-bd1d9735a3f4"
},
"origin": "127.0.0.1",
"url": "http://httpbin:8000/get"
}
```
You can see the request succeeded! Now, let's break something.
### Tripping the circuit breaker
In the circuit-breaking settings, we specified `maxConnections: 1` and `httpMaxPendingRequests: 1`. This should mean that if we exceed more than one connection and request concurrently, we should see the istio-proxy open the circuit for further requests/connections. Let's try with two concurrent connections (`-c 2`) and send 20 requests (`-n 20`)
```bash
kubectl exec -it $FORTIO_POD -c fortio /usr/local/bin/fortio -- load -c 2 -qps 0 -n 20 -loglevel Warning http://httpbin:8000/get
```
```xxx
Fortio 0.6.2 running at 0 queries per second, 2->2 procs, for 5s: http://httpbin:8000/get
Starting at max qps with 2 thread(s) [gomax 2] for exactly 20 calls (10 per thread + 0)
23:51:10 W http.go:617> Parsed non ok code 503 (HTTP/1.1 503)
@ -148,10 +149,10 @@ Response Header Sizes : count 20 avg 218.85 +/- 50.21 min 0 max 231 sum 4377
Response Body/Total Sizes : count 20 avg 652.45 +/- 99.9 min 217 max 676 sum 13049
All done 20 calls (plus 0 warmup) 10.215 ms avg, 187.8 qps
```
We see almost all requests made it through!
```xxx
Code 200 : 19 (95.0 %)
Code 503 : 1 (5.0 %)
```
@ -161,7 +162,7 @@ The istio-proxy does allow for some leeway. Let's bring the number of concurrent
```bash
kubectl exec -it $FORTIO_POD -c fortio /usr/local/bin/fortio -- load -c 3 -qps 0 -n 20 -loglevel Warning http://httpbin:8000/get
```
```xxx
Fortio 0.6.2 running at 0 queries per second, 2->2 procs, for 5s: http://httpbin:8000/get
Starting at max qps with 3 thread(s) [gomax 2] for exactly 30 calls (10 per thread + 0)
23:51:51 W http.go:617> Parsed non ok code 503 (HTTP/1.1 503)
@ -203,7 +204,7 @@ All done 30 calls (plus 0 warmup) 5.336 ms avg, 422.2 qps
Now we start to see the circuit breaking behavior we expect.
```xxx
Code 200 : 19 (63.3 %)
Code 503 : 11 (36.7 %)
```
@ -213,19 +214,19 @@ Only 63.3% of the requests made it through and the rest were trapped by circuit
```bash
kubectl exec -it $FORTIO_POD -c istio-proxy -- sh -c 'curl localhost:15000/stats' | grep httpbin | grep pending
```
```xxx
cluster.out.httpbin.springistio.svc.cluster.local|http|version=v1.upstream_rq_pending_active: 0
cluster.out.httpbin.springistio.svc.cluster.local|http|version=v1.upstream_rq_pending_failure_eject: 0
cluster.out.httpbin.springistio.svc.cluster.local|http|version=v1.upstream_rq_pending_overflow: 12
cluster.out.httpbin.springistio.svc.cluster.local|http|version=v1.upstream_rq_pending_total: 39
```
We see `12` for the `upstream_rq_pending_overflow` value which means `12` calls so far have been flagged for circuit breaking.
## Cleaning up
1. Remove the rules.
```bash
istioctl delete routerule httpbin-default-v1
istioctl delete destinationpolicy httpbin-circuit-breaker
```
@ -24,9 +24,10 @@ This task describes how to configure Istio to expose external TCP services to ap
kubectl apply -f <(istioctl kube-inject -f samples/sleep/sleep.yaml)
```
> Any pod that you can execute `curl` from is good enough.
## Using Istio egress rules for external TCP traffic
In this task we access `wikipedia.org` by HTTPS originated by the application. This task demonstrates the use case when the application cannot use HTTP with TLS origination by the sidecar proxy. Using HTTP with TLS origination by the sidecar proxy is described in the [Control Egress Traffic]({{home}}/docs/tasks/traffic-management/egress.html) task. In that task, `https://google.com` was accessed by issuing HTTP requests to `http://www.google.com:443`.
The HTTPS traffic originated by the application will be treated by Istio as _opaque_ TCP. To enable such traffic, we define a TCP egress rule on port 443.
@ -38,7 +39,9 @@ Let's assume for the sake of the example that we want to access `wikipedia.org`
Alternatively, if we want to access `wikipedia.org` by an IP, just a single egress rule for that IP must be defined.
## Creating egress rules
Let's create egress rules to enable TCP access to `wikipedia.org`:
```bash
cat <<EOF | istioctl create -f -
kind: EgressRule
@ -105,7 +108,7 @@ a single `istioctl` command.
kubectl exec -it $(kubectl get pod -l app=sleep -o jsonpath={.items..metadata.name}) -c sleep bash
```
1. Make a request and verify that we can access https://www.wikipedia.org successfully:
```bash
curl -o /dev/null -s -w "%{http_code}\n" https://www.wikipedia.org
@ -116,7 +119,7 @@ a single `istioctl` command.
We should see `200` printed as the output, which is the HTTP code _OK_.
1. Now let's fetch the current number of the articles available on Wikipedia in the English language:
```bash
curl -s https://en.wikipedia.org/wiki/Main_Page | grep articlecount | grep 'Special:Statistics'
```
@ -13,7 +13,7 @@ redirect_from: /docs/tasks/egress.html
By default, Istio-enabled services are unable to access URLs outside of the cluster because
iptables is used in the pod to transparently redirect all outbound traffic to the sidecar proxy,
which only handles intra-cluster destinations.
This task describes how to configure Istio to expose external services to Istio-enabled clients.
You'll learn how to enable access to external services using egress rules,
or alternatively, to simply bypass the Istio proxy for a specific range of IPs.
@ -25,7 +25,7 @@ or alternatively, to simply bypass the Istio proxy for a specific range of IPs.
* Start the [sleep](https://github.com/istio/istio/tree/master/samples/sleep) sample
which will be used as a test source for external calls.
```bash
kubectl apply -f <(istioctl kube-inject -f samples/sleep/sleep.yaml)
```
@ -35,7 +35,7 @@ or alternatively, to simply bypass the Istio proxy for a specific range of IPs.
## Using Istio egress rules
Using Istio egress rules, you can access any publicly accessible service
from within your Istio cluster. In this task we will use
[httpbin.org](https://httpbin.org) and [www.google.com](https://www.google.com) as examples.
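For example, an egress rule granting HTTP access to httpbin.org might look like the following sketch (apiVersion `config.istio.io/v1alpha2`; field names may vary by release):

```yaml
apiVersion: config.istio.io/v1alpha2
kind: EgressRule
metadata:
  name: httpbin-egress-rule
spec:
  destination:
    service: httpbin.org   # the external host the mesh is allowed to reach
  ports:
  - port: 80
    protocol: http
```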
### Configuring the external services
@ -57,7 +57,7 @@ from within your Istio cluster. In this task we will use
EOF
```
1. Create an egress rule to allow access to an external HTTPS service:
```bash
cat <<EOF | istioctl create -f -
@ -78,19 +78,19 @@ from within your Istio cluster. In this task we will use
1. Exec into the pod being used as the test source. For example,
if you are using the sleep service, run the following commands:
```bash
export SOURCE_POD=$(kubectl get pod -l app=sleep -o jsonpath={.items..metadata.name})
kubectl exec -it $SOURCE_POD -c sleep bash
```
1. Make a request to the external HTTP service:
```bash
curl http://httpbin.org/headers
```
1. Make a request to the external HTTPS service.
External services of type HTTPS must be accessed over HTTP with the port specified in the request:
```bash
@ -157,11 +157,10 @@ to set a timeout rule on calls to the httpbin.org service.
This time a 504 (Gateway Timeout) appears after 3 seconds.
Although httpbin.org was waiting 5 seconds, Istio cut off the request at 3 seconds.
## Calling external services directly
The Istio egress rules currently only support HTTP/HTTPS requests.
If you want to access services with other protocols (e.g., mongodb://host/database),
or if you want to completely bypass Istio for a specific IP range,
you will need to configure the source service's Envoy sidecar to prevent it from
[intercepting]({{home}}/docs/concepts/traffic-management/request-routing.html#communication-between-services)
@ -172,7 +171,7 @@ when starting the service.
The simplest way to use the `--includeIPRanges` option is to pass it the IP range(s)
used for internal cluster services, thereby excluding external IPs from being redirected
to the sidecar proxy.
The values used for internal IP range(s), however, depend on where your cluster is running.
For example, with Minikube the range is 10.0.0.1/24, so you would start the sleep service like this:
```bash
@ -189,7 +188,7 @@ On IBM Cloud Private, use:
A sample output is as follows:
```xxx
service_cluster_ip_range: 10.0.0.1/24
```
@ -211,7 +210,7 @@ need to run the `gcloud container clusters describe` command to determine the ra
```bash
gcloud container clusters describe XXXXXXX --zone=XXXXXX | grep -e clusterIpv4Cidr -e servicesIpv4Cidr
```
```xxx
clusterIpv4Cidr: 10.4.0.0/14
servicesIpv4Cidr: 10.7.240.0/20
```
@ -225,7 +224,6 @@ On Azure Container Service(ACS), use:
kubectl apply -f <(istioctl kube-inject -f samples/sleep/sleep.yaml --includeIPRanges=10.244.0.0/16,10.240.0.0/16)
```
After starting your service this way, the Istio sidecar will only intercept and manage internal requests
within the cluster. Any external request will simply bypass the sidecar and go straight to its intended
destination.
@ -235,28 +233,26 @@ export SOURCE_POD=$(kubectl get pod -l app=sleep -o jsonpath={.items..metadata.n
kubectl exec -it $SOURCE_POD -c sleep curl http://httpbin.org/headers
```
## Understanding what happened
In this task we looked at two ways to call external services from within an Istio cluster:
1. Using an egress rule (recommended)
1. Configuring the Istio sidecar to exclude external IPs from its remapped IP table
The first approach (egress rule) currently only supports HTTP(S) requests, but allows
you to use all of the same Istio service mesh features for calls to services within or outside
of the cluster. We demonstrated this by setting a timeout rule for calls to an external service.
The second approach bypasses the Istio sidecar proxy, giving your services direct access to any
external URL. However, configuring the proxy this way does require
cloud provider specific knowledge and configuration.
## Cleanup
1. Remove the rules.
```bash
istioctl delete egressrule httpbin-egress-rule google-egress-rule
istioctl delete routerule httpbin-timeout-rule
@ -269,6 +265,7 @@ cloud provider specific knowledge and configuration.
```
## Egress Rules and Access Control
Note that Istio Egress Rules are **not a security feature**. They enable access to external (out of the service mesh) services. It is up to the user to deploy appropriate security mechanisms such as firewalls to prevent unauthorized access to the external services. We are working on adding access control support for the external services.
## What's next
@ -21,24 +21,25 @@ This task shows how to inject delays and test the resiliency of your application
* Initialize the application version routing by either first doing the
[request routing](./request-routing.html) task or by running the following
commands:
> This assumes you don't have any routes set yet. If you've already created conflicting route rules for the sample, you'll need to use `replace` rather than `create` in one or both of the following commands.
```bash
istioctl create -f samples/bookinfo/kube/route-rule-all-v1.yaml
istioctl create -f samples/bookinfo/kube/route-rule-reviews-test-v2.yaml
```
> This task assumes you are deploying the application on Kubernetes.
All of the example commands are using the Kubernetes version of the rule yaml files
(e.g., `samples/bookinfo/kube/route-rule-all-v1.yaml`). If you are running this
task in a different environment, change `kube` to the directory that corresponds
to your runtime (e.g., `samples/bookinfo/consul/route-rule-all-v1.yaml` for
the Consul-based runtime).
# Fault injection
## Fault injection using HTTP delay
To test our Bookinfo application microservices for resiliency, we will _inject a 7s delay_
between the reviews:v2 and ratings microservices, for user "jason". Since the _reviews:v2_ service has a
10s timeout for its calls to the ratings service, we expect the end-to-end flow to
@ -118,6 +119,7 @@ continue without any errors.
use a 2.8 second delay and then run it against the v3 version of reviews.)
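A delay rule of the kind used in this section (a fixed 7s delay on calls to `ratings`, matched to the test user "jason") might be sketched as follows; the rule name and cookie regex are illustrative and may differ from the sample file shipped with Istio:

```yaml
apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: ratings-test-delay
spec:
  destination:
    name: ratings
  precedence: 2
  match:
    request:
      headers:
        cookie:
          regex: "^(.*?;)?(user=jason)(;.*)?$"   # only requests from the logged-in user jason
  route:
  - labels:
      version: v1
  httpFault:
    delay:
      percent: 100
      fixedDelay: 7s    # the artificial delay injected between reviews:v2 and ratings
```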
## Fault injection using HTTP Abort
As another test of resiliency, we will introduce an HTTP abort to the ratings microservices for the user "jason".
We expect the page to load immediately unlike the delay example and display the "product ratings not available"
message.
@ -21,20 +21,20 @@ The Istio Ingress specification is based on the standard [Kubernetes Ingress Res
1. Istio Ingress specification contains a `kubernetes.io/ingress.class: istio` annotation.
1. All other annotations are ignored.
The following are known limitations of Istio Ingress:
1. Regular expressions in paths are not supported.
1. Fault injection at the Ingress is not supported.
## Before you begin
* Setup Istio by following the instructions in the
[Installation guide]({{home}}/docs/setup/).
* Make sure your current directory is the `istio` directory.
* Start the [httpbin](https://github.com/istio/istio/tree/master/samples/httpbin) sample,
which will be used as the destination service to be exposed externally.
@ -76,12 +76,12 @@ The following are known limitations of Istio Ingress:
servicePort: 8000
EOF
```
`/.*` is a special Istio notation that is used to indicate a prefix
match, specifically a
[rule match configuration]({{home}}/docs/reference/config/istio.routing.v1alpha1.html#matchcondition)
of the form (`prefix: /`).
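For reference, an Ingress using this prefix notation might look like the following sketch; the service name and port follow the httpbin example used in this task:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: simple-ingress
  annotations:
    kubernetes.io/ingress.class: istio   # hand this Ingress to the Istio ingress controller
spec:
  rules:
  - http:
      paths:
      - path: /.*        # Istio prefix match of the form (prefix: /)
        backend:
          serviceName: httpbin
          servicePort: 8000
```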
### Verifying ingress
1. Determine the ingress URL:
@ -92,8 +92,8 @@ The following are known limitations of Istio Ingress:
```bash
kubectl get ingress simple-ingress -o wide
```
```xxx
NAME HOSTS ADDRESS PORTS AGE
simple-ingress * 130.211.10.121 80 1d
```
@ -103,37 +103,37 @@ The following are known limitations of Istio Ingress:
```
* If load balancers are not supported, use the ingress controller pod's hostIP:
```bash
kubectl -n istio-system get po -l istio=ingress -o jsonpath='{.items[0].status.hostIP}'
```
```xxx
169.47.243.100
```
along with the istio-ingress service's nodePort for port 80:
```bash
kubectl -n istio-system get svc istio-ingress
```
```xxx
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
istio-ingress 10.10.10.155 <pending> 80:31486/TCP,443:32254/TCP 32m
```
```bash
export INGRESS_HOST=169.47.243.100:31486
```
1. Access the httpbin service using _curl_:
```bash
curl -I http://$INGRESS_HOST/status/200
```
```xxx
HTTP/1.1 200 OK
server: envoy
date: Mon, 29 Jan 2018 04:45:49 GMT
@ -151,7 +151,7 @@ The following are known limitations of Istio Ingress:
curl -I http://$INGRESS_HOST/headers
```
```xxx
HTTP/1.1 404 Not Found
date: Mon, 29 Jan 2018 04:45:49 GMT
server: envoy
@ -173,7 +173,7 @@ The following are known limitations of Istio Ingress:
Create the secret `istio-ingress-certs` in namespace `istio-system` using `kubectl`. The Istio Ingress will automatically
load the secret.
> The secret must be called `istio-ingress-certs` in `istio-system` namespace, for it to be mounted on Istio Ingress.
```bash
kubectl create -n istio-system secret tls istio-ingress-certs --key /tmp/tls.key --cert /tmp/tls.crt
@ -206,7 +206,7 @@ The following are known limitations of Istio Ingress:
EOF
```
> Because SNI is not yet supported, Envoy currently only allows a single TLS secret in the ingress.
> That means the secretName field in ingress resource is not used.
### Verifying ingress
@ -220,7 +220,7 @@ The following are known limitations of Istio Ingress:
kubectl get ingress secure-ingress -o wide
```
```xxx
NAME HOSTS ADDRESS PORTS AGE
secure-ingress * 130.211.10.121 80 1d
```
@ -235,7 +235,7 @@ The following are known limitations of Istio Ingress:
kubectl -n istio-system get po -l istio=ingress -o jsonpath='{.items[0].status.hostIP}'
```
```xxx
169.47.243.100
```
@ -245,7 +245,7 @@ The following are known limitations of Istio Ingress:
kubectl -n istio-system get svc istio-ingress
```
```xxx
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
istio-ingress 10.10.10.155 <pending> 80:31486/TCP,443:32254/TCP 32m
```
@ -260,7 +260,7 @@ The following are known limitations of Istio Ingress:
curl -I -k https://$INGRESS_HOST/status/200
```
```xxx
HTTP/1.1 200 OK
server: envoy
date: Mon, 29 Jan 2018 04:45:49 GMT
@ -278,7 +278,7 @@ The following are known limitations of Istio Ingress:
curl -I -k https://$INGRESS_HOST/headers
```
```xxx
HTTP/1.1 404 Not Found
date: Mon, 29 Jan 2018 04:45:49 GMT
server: envoy
@ -287,18 +287,18 @@ The following are known limitations of Istio Ingress:
1. Configuring RBAC for ingress key/cert
There are service accounts which can access this ingress key/cert, and this leads to risks of
leaking key/cert. We can set up Role-Based Access Control ("RBAC") to protect it.
install/kubernetes/istio.yaml defines `ClusterRoles` and `ClusterRoleBindings` which allow service
accounts in namespace istio-system to access all secret resources. We need to update or replace
this RBAC setup to only allow istio-ingress-service-account to access the ingress key/cert.
We can use `kubectl` to list all secrets in namespace istio-system that we need to protect using RBAC.
```bash
kubectl get secrets -n istio-system
```
This produces the following output:
```xxx
NAME TYPE DATA AGE
istio-ingress-certs kubernetes.io/tls 2 7d
istio.istio-ingress-service-account istio.io/key-and-cert 3 7d
@ -306,33 +306,33 @@ The following are known limitations of Istio Ingress:
```
1. Update the RBAC setup for istio-pilot-service-account and istio-mixer-istio-service-account
Record `ClusterRole` istio-mixer-istio-system and istio-pilot-istio-system. We will refer to
these copies when we redefine them to avoid breaking access permissions to other resources.
```bash
kubectl describe ClusterRole istio-mixer-istio-system
kubectl describe ClusterRole istio-pilot-istio-system
```
Delete existing `ClusterRoleBindings` and `ClusterRole`.
```bash
kubectl delete ClusterRoleBinding istio-pilot-admin-role-binding-istio-system
kubectl delete ClusterRoleBinding istio-mixer-admin-role-binding-istio-system
kubectl delete ClusterRole istio-mixer-istio-system
```
As istio-pilot-istio-system is also bound to istio-ingress-service-account, we will delete
istio-pilot-istio-system in the next step.
Create istio-mixer-istio-system.yaml, which allows istio-mixer-service-account to read
istio.io/key-and-cert, and istio.io/ca-root types of secret instances. Refer to the recorded
copy of istio-mixer-istio-system and add access permissions to other resources.
```yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: istio-mixer-istio-system
rules:
- apiGroups: [""] # "" indicates the core API group
resources: ["secrets"]
resourceNames: ["istio.istio-ca-service-account"]
@ -341,7 +341,7 @@ The following are known limitations of Istio Ingress:
resources: ["secrets"]
resourceNames: ["istio-ca-secret"]
verbs: ["get", "list", "watch"]
......
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
@ -356,30 +356,30 @@ The following are known limitations of Istio Ingress:
name: istio-mixer-istio-system
apiGroup: rbac.authorization.k8s.io
```
```bash
kubectl apply -f istio-mixer-istio-system.yaml
```
1. Update the RBAC setup for istio-pilot-service-account and istio-ingress-service-account
Delete existing `ClusterRoleBinding` and `ClusterRole`.
```bash
kubectl delete clusterrolebinding istio-ingress-admin-role-binding-istio-system
kubectl delete ClusterRole istio-pilot-istio-system
```
Create istio-pilot-istio-system.yaml, which allows istio-pilot-service-account to read
istio.io/key-and-cert, and istio.io/ca-root types of secret instances. Refer to the recorded
copy of istio-pilot-istio-system and add access permissions to other resources.
```yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: istio-pilot-istio-system
rules:
- apiGroups: [""] # "" indicates the core API group
resources: ["secrets"]
resourceNames: ["istio.istio-ca-service-account"]
@ -388,7 +388,7 @@ The following are known limitations of Istio Ingress:
resources: ["secrets"]
resourceNames: ["istio-ca-secret"]
verbs: ["get", "list", "watch"]
......
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
@ -403,21 +403,21 @@ The following are known limitations of Istio Ingress:
name: istio-pilot-istio-system
apiGroup: rbac.authorization.k8s.io
```
```bash
kubectl apply -f istio-pilot-istio-system.yaml
```
Create istio-ingress-istio-system.yaml which allows istio-ingress-service-account to read
istio-ingress-certs as well as other secret instances. Refer to the recorded copy of
istio-pilot-istio-system and add access permissions to other resources.
```yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: istio-ingress-istio-system
rules:
- apiGroups: [""] # "" indicates the core API group
resources: ["secrets"]
resourceNames: ["istio.istio-ca-service-account"]
@ -445,23 +445,23 @@ The following are known limitations of Istio Ingress:
name: istio-ingress-istio-system
apiGroup: rbac.authorization.k8s.io
```
```bash
kubectl apply -f istio-ingress-istio-system.yaml
```
1. Update the RBAC setup for istio-ca-service-account
Record `ClusterRole` istio-ca-istio-system.
```bash
kubectl describe ClusterRole istio-ca-istio-system
```
Create istio-ca-istio-system.yaml, which updates existing `ClusterRole` istio-ca-istio-system
that allows istio-ca-service-account to read, create and modify all istio.io/key-and-cert, and
istio.io/ca-root types of secrets.
```yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
@ -492,45 +492,44 @@ The following are known limitations of Istio Ingress:
```bash
kubectl apply -f istio-ca-istio-system.yaml
```
1. Verify that the new `ClusterRoles` work as expected
```bash
kubectl auth can-i get secret/istio-ingress-certs --as system:serviceaccount:istio-system:istio-ingress-service-account -n istio-system
```
whose output should be
```xxx
yes
```
In this command, we can replace the verb "get" with "list" or "watch", and the output should always
be "yes". Now let us test with other service accounts.
```bash
kubectl auth can-i get secret/istio-ingress-certs --as system:serviceaccount:istio-system:istio-pilot-service-account -n istio-system
```
whose output should be
```xxx
no - Unknown user "system:serviceaccount:istio-system:istio-pilot-service-account"
```
In this command, we can replace the service account with istio-mixer-service-account or
istio-ca-service-account, and we can also replace the verb "get" with "watch" or "list"; the output
should look similar.
Accessibility to secret resources except istio-ingress-certs should remain the same for
istio-ca-service-account, istio-ingress-service-account, istio-pilot-service-account and
istio-mixer-service-account.
```bash
kubectl auth can-i get secret/istio-ca-service-account-token-r14xm --as system:serviceaccount:istio-system:istio-ca-service-account -n istio-system
```
whose output should be
```xxx
yes
```
1. Cleanup
We can delete these newly defined `ClusterRoles` and `ClusterRoleBindings`, and restore original
`ClusterRoles` and `ClusterRoleBindings` according to those recorded copies.
## Using Istio Routing Rules with Ingress
@ -574,13 +573,13 @@ instead of the expected 10s delay.
You can use other features of the route rules such as redirects, rewrites,
routing to multiple versions, regular expression based match in HTTP
headers, WebSocket upgrades, timeouts, retries, etc. Please refer to the
[routing rules]({{home}}/docs/reference/config/istio.routing.v1alpha1.html)
for more details.
> Fault injection does not work at the Ingress
>
> When matching requests in the routing rule, use the same exact
> path or prefix as the one used in the Ingress specification.
## Understanding ingresses
@ -589,7 +588,7 @@ Ingresses provide gateways for external traffic to enter the Istio service
mesh and make the traffic management and policy features of Istio available
for edge services.
The `servicePort` field in the Ingress specification can take a port number
(integer) or a name. The port name must follow the Istio port naming
conventions (e.g., `grpc-*`, `http2-*`, `http-*`, etc.) in order to
function properly. The name used must match the port name in the backend
@ -603,9 +602,9 @@ an Istio route rule.
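As an illustration of the port naming convention, a backend service whose port an Ingress can reference by name might be declared like the sketch below; the name `http` is what signals the protocol to Istio, and the `httpbin`/8000 values follow the example in this task:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: httpbin
  labels:
    app: httpbin
spec:
  selector:
    app: httpbin
  ports:
  - name: http    # must follow the Istio naming convention (http-*, http2-*, grpc-*, ...)
    port: 8000
```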
## Cleanup
1. Remove the secret and Ingress Resource definitions.
```bash
kubectl delete ingress simple-ingress secure-ingress
kubectl delete -n istio-system secret istio-ingress-certs
```
@ -615,7 +614,6 @@ an Istio route rule.
kubectl delete -f samples/httpbin/httpbin.yaml
```
## What's next
* Learn more about [Ingress Resources](https://kubernetes.io/docs/concepts/services-networking/ingress/).
@ -12,12 +12,11 @@ type: markdown
This task demonstrates Istio's traffic shadowing/mirroring capabilities. Traffic mirroring is a powerful concept that allows feature teams to bring
changes to production with as little risk as possible. Mirroring brings a copy of live traffic to a mirrored service and happens out of band of the critical request path for the primary service.
## Before you begin
* Setup Istio by following the instructions in the
[Installation guide]({{home}}/docs/setup/).
* Start two versions of the `httpbin` service that have access logging enabled
httpbin-v1:
@ -88,14 +87,12 @@ spec:
selector:
app: httpbin
EOF
```
* Start the `sleep` service so we can use `curl` to provide load
sleep service:
```bash
cat <<EOF | istioctl kube-inject -f - | kubectl create -f -
apiVersion: extensions/v1beta1
kind: Deployment
@ -114,14 +111,11 @@ spec:
command: ["/bin/sleep","infinity"]
imagePullPolicy: IfNotPresent
EOF
```
## Mirroring
Let's set up a scenario to demonstrate the traffic-mirroring capabilities of Istio. We have two versions of our `httpbin` service. By default Kubernetes will load balance across both versions of the service. We'll use Istio to force all traffic to v1 of the `httpbin` service.
### Creating default routing policy
@ -151,13 +145,13 @@ kubectl exec -it $SLEEP_POD -c sleep -- sh -c 'curl http://httpbin:8080/headers
{
  "headers": {
"Accept": "*/*",
"Content-Length": "0",
"Host": "httpbin:8080",
"User-Agent": "curl/7.35.0",
"X-B3-Sampled": "1",
"X-B3-Spanid": "eca3d7ed8f2e6a0a",
"X-B3-Traceid": "eca3d7ed8f2e6a0a",
"X-Ot-Span-Context": "eca3d7ed8f2e6a0a;eca3d7ed8f2e6a0a;0000000000000000"
}
}
@ -166,11 +160,13 @@ kubectl exec -it $SLEEP_POD -c sleep -- sh -c 'curl http://httpbin:8080/headers
If we check the logs for `v1` and `v2` of our `httpbin` pods, we should see access log entries for only `v1`:
```bash
kubectl logs -f httpbin-v1-2113278084-98whj -c httpbin
```
```xxx
127.0.0.1 - - [07/Feb/2018:00:07:39 +0000] "GET /headers HTTP/1.1" 200 349 "-" "curl/7.35.0"
```
1. Create a route rule to mirror traffic to v2
```bash
cat <<EOF | istioctl create -f -
@ -186,7 +182,7 @@ spec:
- labels:
version: v1
weight: 100
- labels:
version: v2
weight: 0
mirror:
@ -196,10 +192,10 @@ spec:
EOF
```
This route rule specifies we route 100% of the traffic to v1 and 0% to v2. At the moment, it's necessary to call out the v2 service explicitly because this is
what creates the envoy-cluster definitions in the background. In future versions, we'll work to improve this so we don't have to explicitly specify a 0% weighted routing.
The last stanza specifies we want to mirror to the `httpbin v2` service. When traffic gets mirrored, the requests are sent to the mirrored service with its Host/Authority header appended with *-shadow*. For example, *cluster-1* becomes *cluster-1-shadow*. Also important to realize is that these requests are mirrored as "fire and forget", i.e., the responses are discarded.
Now if we send in traffic:
@ -208,13 +204,11 @@ kubectl exec -it $SLEEP_POD -c sleep -- sh -c 'curl http://httpbin:8080/headers
```
We should see access logging for both `v1` and `v2`. The access logs created in `v2` is the mirrored requests that are actually going to `v1`.
## Cleaning up
1. Remove the rules.
```bash
istioctl delete routerule mirror-traffic-to-httbin-v2
istioctl delete routerule httpbin-default-v1
```
@ -19,12 +19,12 @@ This task shows you how to configure dynamic request routing based on weights an
* Deploy the [Bookinfo]({{home}}/docs/guides/bookinfo.html) sample application.
> This task assumes you are deploying the application on Kubernetes.
All of the example commands are using the Kubernetes version of the rule yaml files
(e.g., `samples/bookinfo/kube/route-rule-all-v1.yaml`). If you are running this
task in a different environment, change `kube` to the directory that corresponds
to your runtime (e.g., `samples/bookinfo/consul/route-rule-all-v1.yaml` for
the Consul-based runtime).
## Content-based routing
@ -35,8 +35,8 @@ star ratings.
This is because without an explicit default version set, Istio will
route requests to all available versions of a service in a random fashion.
> This task assumes you don't have any routes set yet. If you've already created conflicting route rules for the sample,
you'll need to use `replace` rather than `create` in one or both of the following commands.
1. Set the default version for all microservices to v1.
@ -44,7 +44,7 @@ route requests to all available versions of a service in a random fashion.
istioctl create -f samples/bookinfo/kube/route-rule-all-v1.yaml
```
> In a Kubernetes deployment of Istio, you can replace `istioctl`
> with `kubectl` in the above, and for all other CLI commands.
> Note, however, that `kubectl` currently does not provide input validation.
@ -1,7 +1,7 @@
---
title: Setting Request Timeouts
overview: This task shows you how to set up request timeouts in Envoy using Istio.
order: 28
layout: docs
@ -11,7 +11,6 @@ type: markdown
This task shows you how to set up request timeouts in Envoy using Istio.
## Before you begin
* Setup Istio by following the instructions in the
@ -25,12 +24,12 @@ This task shows you how to setup request timeouts in Envoy using Istio.
istioctl create -f samples/bookinfo/kube/route-rule-all-v1.yaml
```
> This task assumes you are deploying the application on Kubernetes.
All of the example commands are using the Kubernetes version of the rule yaml files
(e.g., `samples/bookinfo/kube/route-rule-all-v1.yaml`). If you are running this
task in a different environment, change `kube` to the directory that corresponds
to your runtime (e.g., `samples/bookinfo/consul/route-rule-all-v1.yaml` for
the Consul-based runtime).
## Request timeouts
@ -39,7 +38,7 @@ By default, the timeout is 15 seconds, but in this task we'll override the `revi
timeout to 1 second.
To see its effect, however, we'll also introduce an artificial 2 second delay in calls
to the `ratings` service.
1. Route requests to v2 of the `reviews` service, i.e., a version that calls the `ratings` service
```bash
@ -84,7 +83,7 @@ to the `ratings` service.
but there is a 2 second delay whenever you refresh the page.
1. Now add a 1 second request timeout for calls to the `reviews` service
```bash
cat <<EOF | istioctl replace -f -
apiVersion: config.istio.io/v1alpha2
@ -107,15 +106,14 @@ to the `ratings` service.
You should now see that it returns in 1 second (instead of 2), but the reviews are unavailable.
## Understanding what happened
In this task, you used Istio to set the request timeout for calls to the `reviews`
microservice to 1 second (instead of the default 15 seconds).
Since the `reviews` service subsequently calls the `ratings` service when handling requests,
you used Istio to inject a 2 second delay in calls to `ratings`, so that you would cause the
`reviews` service to take longer than 1 second to complete and consequently you could see the
timeout in action.
You observed that the Bookinfo productpage (which calls the `reviews` service to populate the page),
instead of displaying reviews, displayed
@ -132,7 +130,7 @@ More details can be found [here]({{home}}/docs/concepts/traffic-management/handl
One more thing to note about timeouts in Istio is that in addition to overriding them in route rules,
as you did in this task, they can also be overridden on a per-request basis if the application adds
an "x-envoy-upstream-rq-timeout-ms" header on outbound requests. In the header
the timeout is specified in millisecond (instead of second) units.
## Cleanup
@ -22,12 +22,12 @@ two steps: 50%, 100%.
* Deploy the [Bookinfo]({{home}}/docs/guides/bookinfo.html) sample application.
> This task assumes you are deploying the application on Kubernetes.
All of the example commands are using the Kubernetes version of the rule yaml files
(e.g., `samples/bookinfo/kube/route-rule-all-v1.yaml`). If you are running this
task in a different environment, change `kube` to the directory that corresponds
to your runtime (e.g., `samples/bookinfo/consul/route-rule-all-v1.yaml` for
the Consul-based runtime).
## Weight-based version routing
@ -42,8 +42,8 @@ two steps: 50%, 100%.
You should see the Bookinfo application productpage displayed.
Notice that the `productpage` is displayed with no rating stars since `reviews:v1` does not access the ratings service.
> If you previously ran the [request routing](./request-routing.html) task, you may need to either log out
as test user "jason" or delete the test rules that were created exclusively for him:
```bash
istioctl delete routerule reviews-test-v2
@ -83,7 +83,7 @@ two steps: 50%, 100%.
1. Refresh the `productpage` in your browser and you should now see *red* colored star ratings approximately 50% of the time.
> Note: With the current Envoy sidecar implementation, you may need to refresh the `productpage` very many times
> With the current Envoy sidecar implementation, you may need to refresh the `productpage` very many times
> to see the proper distribution. It may require 15 refreshes or more before you see any change. You can modify the rules to route 90% of the traffic to v3 to see red stars more often.
1. When version v3 of the `reviews` microservice is considered stable, we can route 100% of the traffic to `reviews:v3`:
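The rule is roughly of the following shape (a sketch in the v1alpha2 schema; the task's sample file contains the exact definition):

```bash
cat <<EOF | istioctl replace -f -
apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: reviews-default
spec:
  destination:
    name: reviews
  precedence: 1
  route:
  - labels:
      version: v3
    weight: 100
EOF
```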


@ -5,8 +5,5 @@ type: markdown
---
{% include home.html %}
If you'd like to speak to the Istio team about a potential integration
and/or a partnership opportunity, please complete this [form](https://goo.gl/forms/ax2SdpC6FpVh9Th02).


@ -5,7 +5,7 @@ type: markdown
---
{% include home.html %}
Istio is designed and built to be platform-independent. For our
{{site.data.istio.version}} release, Istio supports environments running
container orchestration platforms such as Kubernetes (v1.7.4 or greater)
and Nomad (with Consul).


@ -9,7 +9,7 @@ Istio is an open platform-independent service mesh that provides traffic managem
*Open*: Istio is being developed and maintained as open-source software. We encourage contributions and feedback from the community at large.
*Platform-independent*: Istio is not targeted at any specific deployment environment. During the initial stages of development, Istio will support
Kubernetes-based deployments. However, Istio is being built to enable rapid and easy adaptation to other environments.
*Service mesh*: Istio is designed to manage communications between microservices and applications. Without requiring changes to the underlying services, Istio provides automated baseline traffic resilience, service metrics collection, distributed tracing, traffic encryption, protocol upgrades, and advanced routing functionality for all service-to-service communication.


@ -7,15 +7,15 @@ type: markdown
Check out the [documentation]({{home}}/docs/) right here on istio.io. The docs include
[concept overviews]({{home}}/docs/concepts/),
[task guides]({{home}}/docs/tasks/),
[guides]({{home}}/docs/guides/),
and the [complete reference documentation]({{home}}/docs/reference/).
Detailed developer-level documentation is maintained for each component in GitHub, alongside the code. Please visit each repository for those docs:
* [Envoy](https://envoyproxy.github.io/envoy/)
* [Pilot](https://github.com/istio/istio/tree/master/pilot/doc)
* [Mixer](https://github.com/istio/istio/tree/master/mixer/doc)


@ -1,5 +1,5 @@
---
title: How do I see all of the configuration for Mixer?
title: How do I see all Mixer's configuration?
order: 10
type: markdown
---
@ -10,7 +10,7 @@ Configuration for *instances*, *handlers*, and *rules* is stored as Kubernetes
Configuration may be accessed by using `kubectl` to query the Kubernetes [API
server](https://kubernetes.io/docs/admin/kube-apiserver/) for the resources.
#### Rules
## Rules
To see the list of all rules, execute the following:
@ -20,7 +20,7 @@ kubectl get rules --all-namespaces
Output will be similar to:
```
```xxx
NAMESPACE NAME KIND
default mongoprom rule.v1alpha2.config.istio.io
istio-system promhttp rule.v1alpha2.config.istio.io
@ -34,7 +34,7 @@ To see an individual rule configuration, execute the following:
kubectl -n <namespace> get rules <name> -o yaml
```
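For instance, to dump the `promhttp` rule shown in the listing above (assuming it exists in your cluster):

```bash
# Print the full YAML for the promhttp rule in the istio-system namespace
kubectl -n istio-system get rules promhttp -o yaml
```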
#### Handlers
## Handlers
Handlers are defined based on Kubernetes [Custom Resource
Definitions](https://kubernetes.io/docs/concepts/api-extension/custom-resources/#customresourcedefinitions)
@ -48,7 +48,7 @@ kubectl get crd -listio=mixer-adapter
The output will be similar to:
```
```xxx
NAME KIND
deniers.config.istio.io CustomResourceDefinition.v1beta1.apiextensions.k8s.io
listcheckers.config.istio.io CustomResourceDefinition.v1beta1.apiextensions.k8s.io
@ -69,7 +69,7 @@ kubectl get <adapter kind name> --all-namespaces
Output for `stdios` will be similar to:
```
```xxx
NAMESPACE NAME KIND
istio-system handler stdio.v1alpha2.config.istio.io
```
@ -80,7 +80,7 @@ To see an individual handler configuration, execute the following:
kubectl -n <namespace> get <adapter kind name> <name> -o yaml
```
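For example, to inspect the `stdio` handler named `handler` from the listing above (assuming it exists in your mesh):

```bash
# Print the full YAML for the stdio handler in the istio-system namespace
kubectl -n istio-system get stdios handler -o yaml
```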
#### Instances
## Instances
Instances are defined according to Kubernetes [Custom Resource
Definitions](https://kubernetes.io/docs/concepts/api-extension/custom-resources/#customresourcedefinitions)
@ -94,7 +94,7 @@ kubectl get crd -listio=mixer-instance
The output will be similar to:
```
```xxx
NAME KIND
checknothings.config.istio.io CustomResourceDefinition.v1beta1.apiextensions.k8s.io
listentries.config.istio.io CustomResourceDefinition.v1beta1.apiextensions.k8s.io
@ -112,7 +112,7 @@ kubectl get <instance kind name> --all-namespaces
Output for `metrics` will be similar to:
```
```xxx
NAMESPACE NAME KIND
default mongoreceivedbytes metric.v1alpha2.config.istio.io
default mongosentbytes metric.v1alpha2.config.istio.io


@ -18,7 +18,7 @@ requirements. Keeping the components separate enables independent component-appr
- *Resource Usage*.
Istio depends on being able to deploy many instances of its proxy, making it important to minimize the
cost of each individual instance. Moving Mixer's complex logic into a distinct component makes it
possible for Envoy to remain svelte and agile.
- *Reliability*.
@ -28,7 +28,7 @@ it creates distinct failure domains which enables Envoy to continue operating ev
fails, preventing outages.
- *Isolation*.
Mixer provides a level of insulation between Istio and the infrastructure backends. Each Envoy instance can be configured to have a
very narrow scope of interaction, limiting the impact of potential attacks.
- *Extensibility*.


@ -16,6 +16,6 @@ kubectl edit configmap -n istio-system istio
kubectl delete pods -n istio-system -l istio=pilot
```
> Note: DO NOT use this approach to disable mTLS for services that are managed
> Do not use this approach to disable mTLS for services that are managed
by Istio (i.e. using Istio sidecar). Instead, use service-level annotations
to overwrite the authentication policy (see above).


@ -3,7 +3,7 @@ title: Can I enable Istio Auth with some services while disable others in the sa
order: 30
type: markdown
---
Starting with release 0.3, you can use service-level annotations to disable (or enable) Istio Auth for a particular service port.
The annotation key should be `auth.istio.io/{port_number}`, and the value should be `NONE` (to disable), or `MUTUAL_TLS` (to enable).
Example: disable Istio Auth on port 9080 for service `details`.
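One way to set that annotation (a sketch; the annotation can equally be added directly to the Service YAML):

```bash
# Disable Istio Auth (mTLS) on port 9080 of the details service.
# Use MUTUAL_TLS instead of NONE to enable it for that port.
kubectl annotate service details auth.istio.io/9080=NONE
```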


@ -43,7 +43,7 @@ For the workloads running on VMs and bare metal hosts, the lifetime of their Ist
`max-workload-cert-ttl` of the Istio CA.
To customize this configuration, the argument for the node agent service should be modified.
After [setting up th machines]({{home}}/docs/setup/kubernetes/mesh-expansion.html#setting-up-the-machines) for Istio
After [setting up the machines]({{home}}/docs/setup/kubernetes/mesh-expansion.html#setting-up-the-machines) for Istio
mesh expansion, modify the file `/lib/systemd/system/istio-auth-node-agent.service` on the VMs or bare metal hosts:
```bash
@ -56,7 +56,7 @@ RestartSec=10
...
```
The above configuraiton specifies that the Istio certificates for workloads running on this VM or bare metal host
The above configuration specifies that the Istio certificates for workloads running on this VM or bare metal host
will have 24 hours lifetime.
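As an illustration, the edited unit file might contain something like the following sketch; the `--workload-cert-ttl` flag name and the binary path are assumptions for illustration, not values taken from the task:

```bash
# Hypothetical excerpt of /lib/systemd/system/istio-auth-node-agent.service
ExecStart=/usr/local/bin/node_agent --workload-cert-ttl=24h
Restart=on-failure
RestartSec=10
```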
After configuring the service, run `systemctl daemon-reload` and restart the node agent.
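For example (assuming the unit name matches the file edited above):

```bash
# Pick up the edited unit file and restart the node agent
systemctl daemon-reload
systemctl restart istio-auth-node-agent
```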


@ -14,7 +14,7 @@ If you are an advanced user and understand the risks you can also do the followi
kubectl edit configmap -n istio-system istio
```
comment out or uncomment out `authPolicy: MUTUAL_TLS` to toggle mTLS and then
comment out or uncomment `authPolicy: MUTUAL_TLS` to toggle mTLS and then
```bash
kubectl delete pods -n istio-system -l istio=pilot
```
