Rebase from Istio Master (#2)

* add example for disabling injection (#1021)

* Updated reference docs. (#1045)

* Add task for Istio CA health check. (#1038)

* Add task for Istio CA health check.

* Small fix.

* Small fix.

* Updates troubleshooting guide to add pilot (#1037)

* Fix misnamed link (#1050)

* update document generation for istioctl (#1047)

* Hack to get ownership of Google analytics account for the site.

* Don't need the analytics hack anymore...

* Make the rake test ensure that we use {{home}} consistently. (#1053)

We now generate the test site into a subdirectory such that we can ensure all
links are correctly using {{home}}, which makes the site work correctly once
archived.

Fixed a bunch of broken cases.
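The subdirectory trick can be sketched as a simple link filter (a hypothetical illustration — the function name and `/testsite` prefix are invented; the real check lives in the site's rake tests):

```python
def find_unprefixed_links(hrefs, prefix="/testsite"):
    """When the test site is generated under a subdirectory, links built
    with {{home}} carry the prefix; root-relative links that skip it
    would break once the site is archived under archive.istio.io."""
    return [h for h in hrefs if h.startswith("/") and not h.startswith(prefix)]
```

Any link this filter flags is one that hard-codes the site root instead of using {{home}}.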

* Reduce the visual weight of code blocks so they don't break up the page so much. (#1054)

* Introduce support for building the site in "preliminary" mode. (#1052)

* Notes for 0.6 (#1048)

* Refresh version selection menu given 0.6.

* update instructions for mesh expansion (#1056)

* update instructions for mesh expansion

* remove ISTIO_STAGING references

* Specify --debug option to use docker.io/istio/proxy_debug image for (#1057)

deployment.

* Update reference docs.

* Update Quick start Doc (#1059)

Fix Typo

* Update Istio RBAC document to reflect sample changes. (#1062)

* Fix typo in Cleanup section (#1061)

* clarify verification of injected proxy with automatic injection (#1024)

* Fix wrong port number (#1041)

* Sidecar proxy help (#1044)

* Use same instance name in Mixer config example (#1051)

* Add a bunch of redirects for old pages (#1066)

The Google Crawl Engine reported a bunch of broken links pointing into istio.io.
This adds redirects so that these links work.

Add a hack such that the gear menu logic that lets you time travel through versions
of the site will insist that if a page existed in a given version, it must also exist
in subsequent versions. This will ensure we always create redirects when we move site
content, and thus avoid breaking links into the site. If a page is moved or removed,
this will lead to rake test errors when checking the content of archive.istio.io.
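The invariant the hack enforces — a page present in one archived version must also exist in every later version — can be sketched roughly as follows (hypothetical names; the actual check is part of the site's rake tests):

```python
def find_vanished_pages(versions):
    """versions: list of (version_name, set_of_page_paths), oldest first.
    Returns (version_name, page) pairs for pages that existed in an
    earlier version but are missing (no redirect) in a later one."""
    missing = []
    seen = set()
    for name, pages in versions:
        for page in sorted(seen - pages):
            missing.append((name, page))
        seen |= pages
    return missing
```

Any hit means a page was moved or removed without adding a redirect, which would break external links into the archive.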

* Update reference docs.

* Fix bad formatting.

* Fix typos.

* Update reference docs.

* Eliminate flickering on page load. (#1068)

- Fix another issue with my arch-nemesis, the Copy button. My last fix for Copy button issues
resulted in screen flickering upon page loading. This is now fixed.

- Pin the size of the gear and magnifying glass icons in the header, since the fonts for
those render a few ms too late and otherwise cause flickering on page load.

- Cleaned up the site's JavaScript for clarity, and included minified versions in the
site for improved perf.

* Improve formatting. (#1070)

- Remove the silly right indent used for list items. This was throwing away a lot of
useful screen real estate on mobile.

* Add support for dynamically inserting file content into the site. (#1069)

This is useful for pulling in content straight from GitHub on the fly,
rather than cut & pasting it into the site.

* Update sidecar AWS verification (#1060)

* Update sidecar AWS verification

Add verification without ssh access to the master node. Perform the check directly with the kubectl client.

* Update sidecar injection Docs

Update with @ayj remarks

* Update link 

Update link for managing tls in a cluster, add a '/'

* Fix links. (#1073)

- Add a / to links pointing to directories

- Switch a bunch of links from http: to https:

* master branch is now served from preliminary.istio.io (#1075)

* Setup 0.7.

* Forgot to update releases.yml.

* Update README

* Consolidate cluster prerequisites for webhooks into k8s quick start (#1077)

The automatic sidecar injection has its own set of k8s install instructions for webhooks. This overlaps with the general k8s install instructions. We'll also introduce server-side configuration webhooks which need the same prerequisites.

* Add missing .html suffix on some links. (#1080)

* A few more link fixes (#1081)

* Fix handling of legacy community links.

* Add missing .html extension on search page reference.

* Add Certificate lifetime configuration in FAQ. (#1079)

* Update reference docs.

* Fix some newly broken links. (#1082)

* Update reference docs.

* Remove empty document. (#1085)

* Update Ansible documentation to reflect change in Jaeger addon (#1049)

* Update Ansible documentation to reflect change in Jaeger addon

Relates to: https://github.com/istio/istio/pull/3603

* Small polish to Ansible documentation

* Remove extra tilde in the docs (#1087)

Fixes #1004

* [WIP] Update traffic routing tasks to use v1alpha3 config (#1067)

* use v1alpha3 route rules

* circuit breaking task updated to v1alpha3

* convert mirroring task to v1alpha3

* convert egress task to v1alpha3

* Egress task corrections and clarifications

* use simpler rule names

* move new tasks to separate folder (keep old versions around for now)

* update example outputs

* egress tcp task

* fix broken refs

* more broken refs

* improve wording

* add missing include home.html

* remove ingress task - will create a replacement in followup PR

* Improve sorting algorithm to use document title and not just document URL. (#1089)

This makes it so documents in the same directory get sorted by document title instead of
by the URL name (unless they have an order: directive, which takes precedence over alpha
order)
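As a rough sketch of the described ordering (illustrative only — the real implementation lives in the site's build scripts, and these field names are assumptions):

```python
def sort_key(doc):
    """doc: dict with optional 'order' (int) and a 'title'.
    Pages with an explicit order: directive sort first, by that order;
    the rest sort alphabetically by title rather than by URL."""
    if "order" in doc:
        return (0, doc["order"], "")
    return (1, 0, doc["title"].lower())
```

Sorting with `sorted(docs, key=sort_key)` then groups explicitly ordered pages ahead of the alphabetical remainder.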

* Istio RBAC doc fix. (#1093)

* Improve readability

* Add one more faq for secret encryption (#1096)

* Add note to have debug version of proxy for curl command (#1097)

* Delete some old stuff we don't need anymore.

* Delete some old stuff we don't need anymore.

* Fix problem preventing proper section indices in the "About" section of the site.

* Revise note to install curl (#1098)

* Revise note to install curl

* Revise note to install curl

* Address comment

* Fix bug with the Copy button and proto documentation.

- HTML generated from protos encodes preformatted blocks as <pre><code></code></pre>,
while HTML generated through Jekyll's markdown converter wraps an extra <div> around the
block. The logic that inserts the Copy button on preformatted blocks assumed the presence of this
DIV. If the DIV is not present on input, we now explicitly add one, which makes things work.

* Update reference docs.

* Fix bug that was messing up all the index pages in the site. (#1100)

Fix newly broken k8s link along the way...

* Revise curl instruction in master branch (#1107)

* Update intro.md (#1110)

* Update intro.md

Updating info per Wencheng's suggestion

* Update intro.md

* WIP - Combined ingress/gateway task for v1alpha3 (#1094)

* First pass combined ingress/gateway task

* Add verifying gateway section

* clarifications

* fix broken link

* fix broken build

* address review comments

* fix small grammar issue (#1112)

* Fix a few bugs and add a feature. (#1111)

- Link injection for document headers has been broken for a while due to my
misunderstanding of the "for in" syntax in JavaScript. This now works as expected.

- Same problem also prevented the feature that causes every link to outside of istio.io
to be opened in a separate window. This now works as intended.

- Made the gear dropdown menu be right-aligned such that it doesn't go off-screen on
portrait mode tablets.

- Stop importing Popper.js since it's only needed for dropdown menus that aren't in the
nav bar. Ours is in a nav bar...

- Added link injection for <dt> terms, which makes it easy to create links to individual glossary entries.

* 0.7 notes (#1101)

* Add an entry about creating quality hyperlinks. (#1114)

* 0.2.12 typo fix + doc link should be to docs/ directly + ... (#1115)

* 0.2.12 doc link should be to docs/ directly

+ note about shell security

* fix typo (for for)

* Revise wording and linking

Drop the double TOC (this page has very little traffic anyway)

* Fix inconsistent header use in this doc.

* Fix invalid index page.

* Update servicegraph docs with new viz. (#1074)

* Fix mobile navigation issues. (#1118)

When on mobile, the left sidebar is hidden by default. To make navigation easier, we allow the user to browse
the site entirely through the various index sections which provide links to all articles. This wasn't working
for the About and Blog links at the top of the page since they send you to a direct page instead of to the
relevant navigation page. So...

- Made the About link point to the about section's index page.

- Each blog page now contains a link to the next and previous blog post.

* [ImgBot] optimizes images (#1120)

/_docs/tasks/telemetry/img/servicegraph-example.png -- 41.49kb -> 28.62kb (31.03%)

* Add documentation for upgrade (#1108)

* Add upgrade doc and fixing a broken link.

* revert one file.

* Refine the doc.

* Move the doc.

* Fix syntax.

* Fix syntax

* Fix syntax

* Make non-manifest based installers have similar titles and overviews (#1086)

* Make the setup page a little more consistent.

* Make non-manifest based installers have similar titles and overviews

* Shorten the overview,tidy up the title, and add a helm.html redirect

* Installation typo in both files

* Fix inconsistent header use in this doc. (#1117)

* Improve layout on phone.

- We shrink the height of the header and footer when on mobile.

- We shrink the header font based on screen width, to avoid the nav bar being split on two lines
which leads to all sorts of bad things happening

* Since we shrink the brand more aggressively, allow the navbar to be displayed until the next bp.

* Oops, left a debugging change in accidentally, reverting.

* Add Istio mTLS support for https service demo (#1121)

* Add Istio mTLS support for https service demo

* Address comment

* Address comment

* Address comment

* Fix more headers. (#1126)

* Update procedures to access the team drive.

* Fix broken links, causing HTML proofer in circleci gates to fail (#1132)

* Fix broken links, causing HTML proofer in circleci gates to fail

* Add the same missing links to sidecar-injection.md

* Refine Helm installation warning. (#1133)

Helm charts are unstable prior to 0.7. Remove the red warning
and instead add a simple notice that Helm charts <= 0.7 are not functional.

* Fix typo

In AWS (w/Kops) section:
"openned" should be "opened"?

* prepare_proxy was refactored into istio-proxy (#1134)

* In Note 1: Consul modified to Eureka (#1122)

* Revamped nav header for better mobile experience. (#1129)

- We now only use the skinny version of the navbar instead of dynamically switching
based on viewport size. This looks cleaner, giving more screen space to the content rather than
our chrome.

- The search textbox is replaced with a search button. Clicking the button brings up the
search textbox. This looks less cluttered and works considerably better on smaller screens.

- When on a phone and the nav links are collapsed into a hamburger menu, cleanly show the
search box in the menu that comes up when you click the hamburger.

- Remove the down arrow next to the cog, it's superfluous and things look cleaner without
it.

* Add one faq item for istio on https service (#1127)

* Add one faq item for istio on https service

* Address comment

* Address comment

* Simplify the demo of plugin ca cert. (#1138)

* Update IBM Cloud Container Service (IKS) k8s setup instructions (#1136)

Copy IKS specific instructions from https://github.com/istio/istio.github.io/pull/1072 to general k8s setup page.

* Revamp the footer. (#1137)

- Remove all the redundant stuff and emphasize community resource via icons.

- Move the "Report a doc bug" and "Edit this page on GitHub" options to the gear
menu.

- Use Jekyll "include" support to store the landing page's artwork in external
SVG files instead of directly embedded in the HTML. Much nicer.

* Switching to 0.8.

* Update README

* Add placeholder 0.8 file to fix rake tests

* Create Owners

* Fix markdown (#1140)

* Cleans up the readability of the Ansible Installation (#1130)

* Cleans up the readability of the Ansible Installation

Ran the file through a yaml linter and through spell | sort | uniq.
Reorganized to semi-match the Helm installation page, as they have similar
functionality.

There are things I like about how this document is structured now,
and I will carry those over to the Helm documentation in the future as time
permits.

* Remove customization example as suggested during the review

* Change Openshift->OpenShift

* Add labels over community icons in the footer. (#1142)

* Remove $ sign in command since it breaks the copy button (#1143)

* Update 0.7.md (#1144)

helm is working in master branch but not in 0.7.1

* Fix bug caused by #1138 (#1145)

* Switch back to normal html-proofer (#1146)

As my PR was merged

Fixes #849

* Setup for linting markdown files. (#1147)

- linters.sh will run spell-checking and a style checker on
markdown files.

- Fix a whole bunch of typos and bad markdown content throughout. There are many more fixes
to come before we can enable the linters as a checkin gate, but this takes care of a majority
of items. More to come later.

* Finish fixing remaining lint errors

* Make spell checking and style checking part of our doc checkin gate. (#1154)

* Update

* Inline the TOC on mobile.

- For small screens that don't have room for the right-hand TOC, we now
display the TOC inline in the main document. This substantially improves
navigation on mobile.

- Fix the scroll offset which was off by a bit since the switch to the skinny
header.

* Update reference docs.

* Improve mobile experience. (#1158)

- The two call-to-action buttons on the landing page are now displayed one on top of
the other on small screens instead of next to one another.

- On mobile, when you scroll down a page, an arrow shows up in the top right of the screen
to let you scroll back to the top of the page. This is mighty handy since on mobile there
isn't a TOC available to click on.

- Add some convenient links on the docs' section landing page.

* Accessibility improvements. (#1159)

* www.yaml.org went missing - yaml.org seems to work. (#1166)

sdake@falkor-08:~/go/src/istio.io/istio.github.io/_docs$ dig www.yaml.org

; <<>> DiG 9.10.3-P4-Ubuntu <<>> www.yaml.org
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: SERVFAIL, id: 34828
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 512
;; QUESTION SECTION:
;www.yaml.org.			IN	A

;; Query time: 917 msec
;; SERVER: 8.8.8.8#53(8.8.8.8)
;; WHEN: Sun Apr 08 09:10:51 MST 2018

* Authn policy concept and tutorial. (#1128)

* fix service account names in the instructions for OpenShift (#1083)

This commit replaces the service account names for grafana and
prometheus in the instructions to set the security context
constraints for OpenShift.

* Improve plugin cert task for better UX. (#1150)

* Update Security section in Istio overview (#1170)

* Update Security section in Istio overview

* Fix comment

* Update documentation for automatic sidecar injection webhook. (#1169)

* Add multicluster deployment documentation to Istio (#1139)

* Add multicluster deployment documentation to Istio

* Change *Ip to *Endpoint as per request

* Fix a typo

* Address all reviewer comments

Note, SVG diagram will be handled as a follow-on PR.

* Fix legitimate spelling errors found by gate

* Some backticks to fix spelling errors and other misc cleanups

* some spelling and backticks.

* Expand spelling exemptions dictionary slightly

* Correctly spell routable.

* Address reviewer comments.

Needed a rebase in the process.

* A minor consistency change

* Address reviewer comments.

* Add a caveats and known issue tracker to the documentation

Early on during review of this PR, I believe there was a review
asking for caveats, but it has disappeared from the github comments.

* Make istio.io support quality print output. (#1163)

- Get rid of all the chrome when printing a page. So no headers, sidebars, etc.

- Ensure that PRE blocks are fully expanded when printing instead of
showing a scroll bar.

- Generate endnotes for each page printed which lists the URLs of the various links
on the page. Each link site is annotated with a superscript number referencing this
table.
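The endnote numbering can be sketched as follows (a hypothetical illustration of the idea, not the site's actual JavaScript — `collect_endnotes` is an invented name):

```python
def collect_endnotes(urls):
    """Assign a superscript number to each unique URL in first-seen order.
    Returns (numbers, table): numbers maps URL -> endnote number, and
    table lists the unique URLs in order for the printed endnote section."""
    numbers, table = {}, []
    for url in urls:
        if url not in numbers:
            numbers[url] = len(table) + 1
            table.append(url)
    return numbers, table
```

Deduplicating by URL keeps repeated links from producing duplicate endnote entries.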

* Update doc for TCP periodical report. (#1095)

* Update doc for TCP periodical report.

* Add report response arrow into svg.

* Reference: https://istio.io/docs/reference/config/istio.routing.v1alpha1.html#StringMatch (#1180)

* Fix broken links caused by changes in istio/istio.

* Update reference docs.

* Improve sidenav behavior on mobile. (#1173)

The sidenav now hovers over the main text instead of pushing the main
text sideways.

The rendering of the sidenav toggler button now matches the "back to top"
button I added last week.

* Bunch of improvements (#1181)

- New visuals for the sailboat in the header. It now overflows the header.

- The TOC now highlights the currently displayed portion of the current page.
As you scroll through the doc, the selected entry updates accordingly.

- Add previous/next page links in every doc page. These used to be present only in
blog posts, but they're useful everywhere.

- Fix a few off-by-one formatting errors that stemmed from using a mix of
min-width and max-width throughout the stylesheet. This caused some strange
formatting to happen at specific window widths. Now, we're consistently using
min-width and everything lines up properly.

- Improved footer formatting so it looks better on mobile.

- Only display the TOC on XL screens, otherwise it wraps too much.
Screens smaller than XL now all get the inlined TOC instead.

- Add support for pages to request that the TOC be generated inline instead of in a sidebar.
This is useful for pages that have headings which cause too much wrapping in the TOC,
such as the Troubleshooting Guide.

- Add some blank space between an inlined TOC and the main text so that things don't look
so crowded, especially when printing.

- Inline the sailboat SVG into each page. This avoids a network roundtrip and allows the
SVG to be controlled with the same CSS as everything else.

- Eliminate a huge amount of redundancy in the four main layout files for the site.
They now share a single primary.html include file which carries most of the weight. This
will avoid having to constantly make the same change in four different files.

- Improve the generated HTML for <figure> elements which makes
things better for screen readers.

- Simplify the HTML & CSS for the footer.

* Fix indent issue (#1182)

* Rename Isito CA to Citadel. (#1179)

* Update feature-stages.md (#1183)

Updates to features as of 0.7 release

* Update Helm Documentation (#1168)

* Modify minimum pin of Istio version with Helm and improve prereqs

* Add section describing briefly how to use helm without tiller

* Change heading description for Helm method and add upgrade warning

* Make common customization options table match current master

* Subsection the two methods for installing with Helm

* Remove Helm keys from .spelling.  Add FQDNs as an acronym.

* Backtick the keys and defaults, values.yaml, and fix 1 spelling error

* Add uninstall instructions for both kubectl and helm with tiller

* Place backticks around architecture platforms and correctly list them

* Show both uninstall methods (kubectl & Helm)

* Remove two extra CRs

* Fix yaml linting errors

* Link to requirements for automatic sidecar injection.

* Change istio-auth to istio for rendering

* Address reviewer comments.

* Fix linting error.

* Notify operator they need capability to install service accounts.

* Fix lint error

* Switch to PrismJS for syntax highlighting. (#1184)

Instead of doing syntax highlighting statically in Jekyll, we now
go back to the PrismJS library we used in the 0.2-0.4 timeframe.
It used to be problematic, but the causes of the problems were
addressed a while ago.

This gives us highlighting for non-markdown content,
such as dynamically loaded PRE blocks and PRE blocks that
come from HTML generated from protos.

* Adding info about new expression language methods. (#1186)

Adding info about dnsName, email, and uri functions.

* Fix typo liveliness -> liveness (#1188)

* Fix typo liveliness -> liveness

Add mdspell dependency to gem installations

* Add backticks around firebase deploy command

* Fix a few bugs. (#1187)

- The slide-in sidenav used on mobile went all crazy when text got too long in the expanded
panel. We now set a max width to trigger controlled wrapping and avoid the nasties.

- The hamburger menu that replaces the link in the top header on small screens didn't render
right on medium-sized screens (a.k.a. portrait-mode tablets). I had one of my breakpoints set
inconsistently.

- Dynamically loaded PRE blocks were not being syntax colored, now they are.

- The Links endnote section created for printing pages was not deduplicating identical
links.

- The Links endnote section contained entries for the next/previous links which are
normally at the bottom of each page. These links aren't visible when printing and so
shouldn't appear in the Links endnote section.

* Add rocket chat to our footer & community page. (#1189)

Also, update the mailing list icon on the community page to match what we use in the
footer.

* Add instructions to integrate Istio with existing Endpoints services.  (#1164)

* Add multitenancy blog (#1119)

* Add multitenancy blog

* Update soft-multitenancy.md

* Update soft-multitenancy.md

* Add multitenancy blog

* Add blog entry for configuring aws nlb for istio ingress (#1165)

* Don't add links from figures into endnotes. (#1192)

- The prior design for avoiding links for figures was brittle and was
in fact broken. Now it's more robust.

* [ImgBot] optimizes images (#1193)

*Total -- 683.39kb -> 440.68kb (35.52%)

/_blog/2018/img/roles_summary.png -- 101.32kb -> 61.03kb (39.77%)
/_blog/2018/img/policies.png -- 244.70kb -> 148.25kb (39.41%)
/_blog/2018/img/attach_policies.png -- 48.65kb -> 31.59kb (35.06%)
/_blog/2018/img/createpolicyjson.png -- 120.21kb -> 80.63kb (32.93%)
/_blog/2018/img/create_policy.png -- 86.38kb -> 60.62kb (29.82%)
/_blog/2018/img/createpolicystart.png -- 82.12kb -> 58.55kb (28.7%)

* Update circuit break use existing file. (#1091)

* Add proper link to Helm and Multicluster feature stages (#1196)

* Update multicluster installation to match master (#1195)

* Add a trailing / on an URL that was returning a 301

* Update multicluster installation to match master

Big usability improvements have been made.  Document
the new workflow for multicluster.

* Address reviewer comments.

* Fix linting problem

* Fix docker run command (#1201)

The command as it stands will fail with "Gemfile not found". The working directory should be set to $(pwd) as well to start execution in the istio.github.io directory and find the Gemfile.

* remove installation instructions for prometheus (#1199)

* remove installation instructions for prometheus

* more doc fixes for 0.8

* Add request.auth.claims and update source.user, source.principal, and (#1205)

request.auth.principal

* Fix command to build & serve site locally using docker (bad workdir) (#1206)

* Add attributes into documentation. (#1200)

* add a step to define ingress gateway in bookinfo guide (#1207)

* add a step to define ingress gateway in bookinfo guide

following https://github.com/istio/istio/pull/5113

* make ingress gateway lower case

* Fix broken link in README.md (#1209)

* Adding Azure support instructions (#1202)

* adding docs for Azure

* minor misspelling fix

* adding acronyms

* removing blank line

* changing bash output to reflect only necessary flags

* fixing grammar errors

* Fix link to IBM cloud private (#1216)

* Typo fix (#1208)

* clarify we support more than just k8s (#1212)

* Update reference docs. (#1219)

* Quiet GitHub warning

* v1alpha3 routing blog (#1190)

* Clarify istio.io/preliminary.istio.io stuff (#1221)

* add galley.enabled option to helm instructions (#1222)

* Fix naming collision (#1226)

ingressgateway and ingress both match the grep, resulting in the
incorrect ingress name being produced in the troubleshooting guide.

* adding the recommended namespace (#1218)

* adding the recommended namespace

https://github.com/istio/issues/issues/312

* add the recommended namespace

* add creating the namespace

* correct typos

* only need to create the namespace for the template approach

* Introduce support for newfangled PRE blocks. (#1224)

Instead of having to have two PRE blocks, one for commands and one for the output,
we can now have a single PRE block and we take care of rendering things to show the
command vs. the output. The Copy button on such a block only copies the command, and not
the output.

We now also show a $ on command-lines, but the Copy button doesn't copy that and knows to just
copy the usable part of the command-line.
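The copy behavior for these combined blocks can be sketched as follows (illustrative only — `copyable_text` is an invented name, and the real logic is in the site's JavaScript):

```python
def copyable_text(block):
    """In a combined command/output PRE block, lines starting with '$ '
    are commands; everything else is output. The Copy button should copy
    only the commands, with the '$ ' prompt stripped."""
    return "\n".join(
        line[2:] for line in block.splitlines() if line.startswith("$ ")
    )
```

Output lines are rendered for the reader but never end up on the clipboard.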

* 0.8 release notes. (#1223)

* Fix incorrect behavior of the sidenav when dealing with long non-wrapping page titles. (#1229)

- When I was last fiddling with the sidenav on mobile, I messed up the sizing for non-mobile cases.
This caused the sidenav to grow beyond its expected size when presented with long non-wrapping page
titles. The text is now wrapped instead, as it should be.

- Shrank the font size of the list items in the sidenav to 85% to reduce the amount of wrapping that
happens.

- Reduce the right margin in the side nav to again try to reduce the amount of wrapping.

* Update content to help upcoming migration from Jekyll to Hugo (#1232)

- In front matter, order: and overview: are now weight: and description:

- In front matter, we generally don't need layout: and use config to assign layouts automatically

- Remove the useless type: front-matter entries; the type is inferred from the file extension.
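The described front-matter changes amount to a small field mapping, sketched here as a hypothetical helper (the real migration was done across the site's source files, not with this code):

```python
def migrate_front_matter(fm):
    """fm: dict of Jekyll front-matter fields. Returns a Hugo-ready copy:
    order -> weight, overview -> description; layout and type are dropped
    since Hugo assigns layouts via config and infers type from extension."""
    renames = {"order": "weight", "overview": "description"}
    drop = {"layout", "type"}
    return {renames.get(k, k): v for k, v in fm.items() if k not in drop}
```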

* Improves multicluster documentation (#1217)

* Improves multicluster documentation

Improve documentation based upon fresh eyes running through the
documented process.

* Address reviewer comments.

* More refinement.

* Exclude rule MD028

Rule 028 is: https://github.com/DavidAnson/markdownlint/blob/master/doc/Rules.md#md028---blank-line-inside-blockquote

The rationale below, cut and pasted from markdownlint, seems
valid for the general case; however, our MD parser always
produces separate blockquotes, which is what I am after in
this PR. I think other people will prefer our rendering of
blockquotes (separate blockquotes):

Rationale: Some markdown parsers will treat two blockquotes
separated by one or more blank lines as the same blockquote,
while others will treat them as separate blockquotes.

* Improve the doc to apply istio-auth.yaml (#1227)

* Fix doc (#1228)

* Task/guide updates for v1alpha3 (#1231)

* Task/guide updates for v1alpha3

* fix typo

* remove trailing spaces

* tweaks

* Corrections and clarifications (#1238)

* clarify https external services support (#1239)

* clarify https external services support

* spelling error

* Hopefully finally really fix the issues with the sidenav on small screens. (#1240)

* fix manual sidecar injection docs for helm template changes (#1211)

Addresses https://github.com/istio/istio.github.io/issues/1210

* Switch most uses of ```bash to ```command. (#1242)

This takes advantage of the new rendering for command-lines and their outputs.

* Fixes to the doc after testing/reviewing it with release-0.8 istio branch (#1244)

* update format of a tcp ServiceEntry (#1237)

* Remove broken link. (#1250)

* WIP PR for v1alpha3 task corrections (#1247)

* ingress task corrections

* fault injection task version wrong

* Fault task corrections (#1253)

* update samples to align with latest proto definition (#1254)

* Traffic Shifting Review - Fixed wrong links (#1259)

* rbac.md: unindent yaml files (#1257)

also fixed a typo

Signed-off-by: Ahmet Alp Balkan <ahmetb@google.com>

* Create istio namespace before install remote cluster. (#1243)

* update instructions for gke-iam (#1260)

* Remove a broken link. (#1263)

* Fix another broken link. (#1265)

* [ImgBot] optimizes images (#1264)

*Total -- 73.77kb -> 65.13kb (11.72%)

/_docs/setup/kubernetes/img/dm_gcp_iam_role.png -- 38.54kb -> 33.47kb (13.15%)
/_docs/setup/kubernetes/img/dm_gcp_iam.png -- 35.23kb -> 31.65kb (10.15%)

* Fixes #1241 (#1258)

* Added namespace when create helm template. (#1234)

* Add istioctl proxy-config to the troubleshooting section (#1267)

* Fix istioctl proxy-config link to not point at prelim docs (#1269)

Because that would be a dumb thing to do

* Update how we insert images to make a transition from Jekyll to Hugo easier. (#1275)

* Change publish_date front-matter to publishdate to aid in the Jekyll to Hugo migration. (#1276)

* Remove stray quotes.

* Shorten long titles and descriptions. (#1278)

* Fix aspect ratio of a couple images. (#1277)

The incorrect aspect ratio value was leading to spurious top/bottom padding on
the images.

Also, delete unnecessary .png versions of some .svg files.
Commit: 6122f38a1f (parent 13ea629393)
Author: john-a-joyce, 2018-05-14 16:25:24 -04:00, committed by GitHub
354 changed files with 16689 additions and 8277 deletions

dm_prometheus.png
dm_servicegraph.png
dm_zipkin.png
docker.io
e.g.
eBPF
enablement
endUser-to-Service
env
envars.yaml
errorFetchingBookDetails.png
errorFetchingBookRating.png
etcd
example.com
externalBookDetails.png
externalMySQLRatings.png
facto
failovers
faq
faq.md
fcm.googleapis.com
filename
filenames
fluentd
fortio
frontend
gRPC
gateways.svg
gbd
gcloud
gdb
getPetsById
git
golang
googleapis.com
googlegroups.com
goroutine
goroutines
grafana-istio-dashboard
grpc
helloworld
hostIP
hostname
hotspots
html
http
http2
httpReqTimeout
httpbin
httpbin.org
httpbin.yaml
https
https_from_the_app.svg
hyperkube
i.e.
image.html
img
ingressgateway
initializer
initializers
int64
intermediation
interoperate
intra-cluster
ip_address
iptables
istio
istio-apiserver
istio-system1
istio.github.io
istio.github.io.
istio.io
istio.io.
istio.yaml
istio_auth_overview.svg
istio_auth_workflow.svg
istio_grafana_dashboard-new
istio_grafana_disashboard-new
istio_zipkin_dashboard.png
istioctl
jaeger_dashboard.png
jaeger_trace.png
jason
json
k8s
key.pem
kube-api
kube-apiserver
kube-dns
kube-inject
kube-proxy
kube-public
kube-system
kubectl
kubelet
kubernetes
kubernetes.default
learnings
lifecycle
listchecker
liveness
mTLS
machine.svg
machineSetup
memcached
memquota
mesos-dns
metadata
metadata.initializers.pending
methodName
microservice
microservices
middleboxes
minikube
misconfigured
mixer-spof-myth-1
mixer-spof-myth-2
mongodb
mtls_excluded_services
multicloud
multicluster
mutual-tls
my-svc
my-svc-234443-5sffe
mysql
mysqldb
namespace
namespaces
natively
nginx
nginx-proxy
nodePorts
nodeport
noistio.svg
non-sandboxed
ns
oc
ok
onwards
openssl
packageName.serviceName
parenthesization
pem
phases.svg
platform-specific
pluggable
png
pprof
pre-specified
preconfigured
prefetching
preformatted
preliminary.istio.io
preliminary.istio.io.
prepends
prober
productpage
productpage.ns.svc.cluster.local
products.default.svc.cluster.local
prometheus
prometheus_query_result.png
proto
protobuf
protos
proxied
proxy_http_version
proxying
pwd
qps
quay.io
radis
ratelimit-handler
raw.githubusercontent.com
raw.githubusercontent.com)
reachability
readinessProbe
redis
redis-master-2353460263-1ecey
referer
registrator
reinject
repo
request.api_key
request.auth.audiences
request.auth.claims
request.auth.presenter
request.auth.principal
request.headers
request.host
request.id
request.method
request.path
request.reason
request.referer
request.responseheaders
request.scheme
request.size
request.time
request.useragent
requestcontext
response.code
response.duration
response.headers
response.size
response.time
reviews.abc.svc.cluster.local
roadmap
roleRef
roles_summary.png
rollout
rollouts
routable
runtime
runtimes
sa
sayin
schemas
secretName
serviceaccount
servicegraph-example
setupIstioVM.sh
setupMeshEx.sh
sharded
sharding
sidecar.env
sleep.legacy
sleep.yaml
source.domain
source.ip
source.labels
source.name
source.namespace
source.principal
source.service
source.uid
source.user
spiffe
stackdriver
statsd
stdout
struct
subdomains
substring
svc.cluster.local
svc.com
svg
tcp
team1
team1-ns
team2
team2-ns
templated
test-api
timeframe
timestamp
traffic.svg
trustability
ulimit
uncomment
uncommented
unencrypted
unmanaged
untrusted
uptime
url
user
user1
v1
v1.7.4
v1.7.6_coreos.0
v1alpha1
v1alpha3
v1beta1#MutatingWebhookConfiguration
v2
v2-mysql
v3
versioned
versioning
virtualservices-destrules
vm-1
webhook
webhooks
whitelist
whitelists
wikipedia.org
wildcard
withistio.svg
www.google.com
x-envoy-upstream-rq-timeout-ms
x.509
xDS
yaml
yamls
yournamespace
zipkin_dashboard.png
zipkin_span.png
qcc
- search.md
searchresults
gcse

404.md

@ -1,16 +1,15 @@
---
title: Page Not Found
overview: Page redirection
description: Page redirection
order: 0
weight: 1
layout: notfound
type: markdown
---
{% include home.html %}
<div class="icon">
<img alt="Warning" title="Uh-ho" src="{{home}}/img/exclamation-mark.svg" />
<img alt="Warning" title="Uh-oh" src="{{home}}/img/exclamation-mark.svg" />
</div>
<div class="error">

CNAME

@ -1 +1 @@
istio.io
preliminary.istio.io


@ -1,5 +1,7 @@
# Contribution guidelines
## Contribution guidelines
So, you want to hack on the Istio web site? Yay! Please refer to Istio's overall
So, you want to hack on the Istio web site? Cool! Please refer to Istio's overall
[contribution guidelines](https://github.com/istio/community/blob/master/CONTRIBUTING.md)
to find out how you can help.
For specifics on hacking on our site, check this [info](https://istio.io/about/contribute/)


@ -3,5 +3,6 @@ source "https://rubygems.org"
gem "github-pages", group: :jekyll_plugins
gem "jekyll-include-cache", "~> 0.1"
gem "nokogiri", ">= 1.8.1"
gem "html-proofer", :git => "https://github.com/ldemailly/html-proofer.git"
gem "html-proofer", ">= 3.8.0"
gem "rake"
gem "mdl"


@ -1,17 +1,3 @@
GIT
remote: https://github.com/ldemailly/html-proofer.git
revision: a9fd7130ddf4bffc54ef938783d09a110e46574b
specs:
html-proofer (3.7.5)
activesupport (>= 4.2, < 6.0)
addressable (~> 2.3)
colorize (~> 0.8)
mercenary (~> 0.3.2)
nokogiri (~> 1.7)
parallel (~> 1.3)
typhoeus (~> 0.7)
yell (~> 2.0)
GEM
remote: https://rubygems.org/
specs:
@ -28,38 +14,42 @@ GEM
coffee-script-source (1.11.1)
colorator (1.1.0)
colorize (0.8.1)
commonmarker (0.17.7)
commonmarker (0.17.9)
ruby-enum (~> 0.5)
concurrent-ruby (1.0.5)
em-websocket (0.5.1)
eventmachine (>= 0.12.9)
http_parser.rb (~> 0.6.0)
ethon (0.11.0)
ffi (>= 1.3.0)
eventmachine (1.2.5)
execjs (2.7.0)
faraday (0.13.1)
faraday (0.14.0)
multipart-post (>= 1.2, < 3)
ffi (1.9.18)
ffi (1.9.23)
forwardable-extended (2.6.0)
gemoji (3.0.0)
github-pages (172)
github-pages (180)
activesupport (= 4.2.9)
github-pages-health-check (= 1.3.5)
jekyll (= 3.6.2)
github-pages-health-check (= 1.4.0)
jekyll (= 3.7.3)
jekyll-avatar (= 0.5.0)
jekyll-coffeescript (= 1.0.2)
jekyll-commonmark-ghpages (= 0.1.3)
jekyll-coffeescript (= 1.1.1)
jekyll-commonmark-ghpages (= 0.1.5)
jekyll-default-layout (= 0.1.4)
jekyll-feed (= 0.9.2)
jekyll-gist (= 1.4.1)
jekyll-github-metadata (= 2.9.3)
jekyll-mentions (= 1.2.0)
jekyll-feed (= 0.9.3)
jekyll-gist (= 1.5.0)
jekyll-github-metadata (= 2.9.4)
jekyll-mentions (= 1.3.0)
jekyll-optional-front-matter (= 0.3.0)
jekyll-paginate (= 1.1.0)
jekyll-readme-index (= 0.2.0)
jekyll-redirect-from (= 0.12.1)
jekyll-relative-links (= 0.5.2)
jekyll-redirect-from (= 0.13.0)
jekyll-relative-links (= 0.5.3)
jekyll-remote-theme (= 0.2.3)
jekyll-sass-converter (= 1.5.0)
jekyll-seo-tag (= 2.3.0)
jekyll-sitemap (= 1.1.1)
jekyll-sass-converter (= 1.5.2)
jekyll-seo-tag (= 2.4.0)
jekyll-sitemap (= 1.2.0)
jekyll-swiss (= 0.4.0)
jekyll-theme-architect (= 0.1.0)
jekyll-theme-cayman (= 0.1.0)
@ -74,61 +64,74 @@ GEM
jekyll-theme-slate (= 0.1.0)
jekyll-theme-tactile (= 0.1.0)
jekyll-theme-time-machine (= 0.1.0)
jekyll-titles-from-headings (= 0.5.0)
jemoji (= 0.8.1)
kramdown (= 1.14.0)
jekyll-titles-from-headings (= 0.5.1)
jemoji (= 0.9.0)
kramdown (= 1.16.2)
liquid (= 4.0.0)
listen (= 3.0.6)
listen (= 3.1.5)
mercenary (~> 0.3)
minima (= 2.1.1)
minima (= 2.4.0)
nokogiri (>= 1.8.1, < 2.0)
rouge (= 2.2.1)
terminal-table (~> 1.4)
github-pages-health-check (1.3.5)
github-pages-health-check (1.4.0)
addressable (~> 2.3)
net-dns (~> 0.8)
octokit (~> 4.0)
public_suffix (~> 2.0)
typhoeus (~> 0.7)
typhoeus (~> 1.3)
html-pipeline (2.7.1)
activesupport (>= 2)
nokogiri (>= 1.4)
i18n (0.9.1)
html-proofer (3.8.0)
activesupport (>= 4.2, < 6.0)
addressable (~> 2.3)
colorize (~> 0.8)
mercenary (~> 0.3.2)
nokogiri (~> 1.8.1)
parallel (~> 1.3)
typhoeus (~> 1.3)
yell (~> 2.0)
http_parser.rb (0.6.0)
i18n (0.9.5)
concurrent-ruby (~> 1.0)
jekyll (3.6.2)
jekyll (3.7.3)
addressable (~> 2.4)
colorator (~> 1.0)
em-websocket (~> 0.5)
i18n (~> 0.7)
jekyll-sass-converter (~> 1.0)
jekyll-watch (~> 1.1)
jekyll-watch (~> 2.0)
kramdown (~> 1.14)
liquid (~> 4.0)
mercenary (~> 0.3.3)
pathutil (~> 0.9)
rouge (>= 1.7, < 3)
rouge (>= 1.7, < 4)
safe_yaml (~> 1.0)
jekyll-avatar (0.5.0)
jekyll (~> 3.0)
jekyll-coffeescript (1.0.2)
jekyll-coffeescript (1.1.1)
coffee-script (~> 2.2)
coffee-script-source (~> 1.11.1)
jekyll-commonmark (1.1.0)
jekyll-commonmark (1.2.0)
commonmarker (~> 0.14)
jekyll (>= 3.0, < 4.0)
jekyll-commonmark-ghpages (0.1.3)
jekyll-commonmark-ghpages (0.1.5)
commonmarker (~> 0.17.6)
jekyll-commonmark (~> 1)
rouge (~> 2)
jekyll-default-layout (0.1.4)
jekyll (~> 3.0)
jekyll-feed (0.9.2)
jekyll-feed (0.9.3)
jekyll (~> 3.3)
jekyll-gist (1.4.1)
jekyll-gist (1.5.0)
octokit (~> 4.2)
jekyll-github-metadata (2.9.3)
jekyll-github-metadata (2.9.4)
jekyll (~> 3.1)
octokit (~> 4.0, != 4.4.0)
jekyll-include-cache (0.1.0)
jekyll (~> 3.3)
jekyll-mentions (1.2.0)
jekyll-mentions (1.3.0)
activesupport (~> 4.0)
html-pipeline (~> 2.3)
jekyll (~> 3.0)
@ -137,19 +140,19 @@ GEM
jekyll-paginate (1.1.0)
jekyll-readme-index (0.2.0)
jekyll (~> 3.0)
jekyll-redirect-from (0.12.1)
jekyll-redirect-from (0.13.0)
jekyll (~> 3.3)
jekyll-relative-links (0.5.2)
jekyll-relative-links (0.5.3)
jekyll (~> 3.3)
jekyll-remote-theme (0.2.3)
jekyll (~> 3.5)
rubyzip (>= 1.2.1, < 3.0)
typhoeus (>= 0.7, < 2.0)
jekyll-sass-converter (1.5.0)
jekyll-sass-converter (1.5.2)
sass (~> 3.4)
jekyll-seo-tag (2.3.0)
jekyll-seo-tag (2.4.0)
jekyll (~> 3.3)
jekyll-sitemap (1.1.1)
jekyll-sitemap (1.2.0)
jekyll (~> 3.3)
jekyll-swiss (0.4.0)
jekyll-theme-architect (0.1.0)
@ -192,45 +195,56 @@ GEM
jekyll-theme-time-machine (0.1.0)
jekyll (~> 3.5)
jekyll-seo-tag (~> 2.0)
jekyll-titles-from-headings (0.5.0)
jekyll-titles-from-headings (0.5.1)
jekyll (~> 3.3)
jekyll-watch (1.5.1)
jekyll-watch (2.0.0)
listen (~> 3.0)
jemoji (0.8.1)
jemoji (0.9.0)
activesupport (~> 4.0, >= 4.2.9)
gemoji (~> 3.0)
html-pipeline (~> 2.2)
jekyll (>= 3.0)
kramdown (1.14.0)
jekyll (~> 3.0)
kramdown (1.16.2)
liquid (4.0.0)
listen (3.0.6)
rb-fsevent (>= 0.9.3)
rb-inotify (>= 0.9.7)
listen (3.1.5)
rb-fsevent (~> 0.9, >= 0.9.4)
rb-inotify (~> 0.9, >= 0.9.7)
ruby_dep (~> 1.2)
mdl (0.4.0)
kramdown (~> 1.12, >= 1.12.0)
mixlib-cli (~> 1.7, >= 1.7.0)
mixlib-config (~> 2.2, >= 2.2.1)
mercenary (0.3.6)
mini_portile2 (2.3.0)
minima (2.1.1)
jekyll (~> 3.3)
minitest (5.10.3)
minima (2.4.0)
jekyll (~> 3.5)
jekyll-feed (~> 0.9)
jekyll-seo-tag (~> 2.1)
minitest (5.11.3)
mixlib-cli (1.7.0)
mixlib-config (2.2.6)
tomlrb
multipart-post (2.0.0)
net-dns (0.8.0)
nokogiri (1.8.1)
nokogiri (1.8.2)
mini_portile2 (~> 2.3.0)
octokit (4.7.0)
octokit (4.8.0)
sawyer (~> 0.8.0, >= 0.5.3)
parallel (1.12.0)
pathutil (0.16.0)
parallel (1.12.1)
pathutil (0.16.1)
forwardable-extended (~> 2.6)
public_suffix (2.0.5)
rake (12.3.0)
rb-fsevent (0.10.2)
rake (12.3.1)
rb-fsevent (0.10.3)
rb-inotify (0.9.10)
ffi (>= 0.5.0, < 2)
rouge (2.2.1)
ruby-enum (0.7.1)
ruby-enum (0.7.2)
i18n
ruby_dep (1.5.0)
rubyzip (1.2.1)
safe_yaml (1.0.4)
sass (3.5.3)
sass (3.5.6)
sass-listen (~> 4.0.0)
sass-listen (4.0.0)
rb-fsevent (~> 0.9, >= 0.9.4)
@ -241,9 +255,10 @@ GEM
terminal-table (1.8.0)
unicode-display_width (~> 1.1, >= 1.1.1)
thread_safe (0.3.6)
typhoeus (0.8.0)
ethon (>= 0.8.0)
tzinfo (1.2.4)
tomlrb (1.2.6)
typhoeus (1.3.0)
ethon (>= 0.9.0)
tzinfo (1.2.5)
thread_safe (~> 0.1)
unicode-display_width (1.3.0)
yell (2.0.7)
@ -253,10 +268,11 @@ PLATFORMS
DEPENDENCIES
github-pages
html-proofer!
html-proofer (>= 3.8.0)
jekyll-include-cache (~> 0.1)
mdl
nokogiri (>= 1.8.1)
rake
BUNDLED WITH
1.16.0
1.16.1

OWNERS

@ -0,0 +1,4 @@
approvers:
- geeknoid
- linsun

README.md

@ -1,22 +1,31 @@
# istio.github.io
## istio.github.io
This repository contains the source code for the [istio.io](https://istio.io) web site.
This repository contains the source code for the [istio.io](https://istio.io),
[preliminary.istio.io](https://preliminary.istio.io) and [archive.istio.io](https://archive.istio.io) sites.
Please see the main Istio [README](https://github.com/istio/istio/blob/master/README.md)
file to learn about the overall Istio project and how to get in touch with us. To learn how you can
contribute to any of the Istio components, please
see the Istio [contribution guidelines](https://github.com/istio/community/blob/master/CONTRIBUTING.md).
The website uses [Jekyll](https://jekyllrb.com/) templates and is hosted on GitHub Pages. Please make sure you are
familiar with these before editing.
* [Working with the site](#working-with-the-site)
* [Linting](#linting)
* [Versions and releases](#versions-and-releases)
* [How versioning works](#how-versioning-works)
* [Publishing content immediately](#publishing-content-immediately)
* [Creating a version](#creating-a-version)
To run the site locally with Docker, use the following command from the toplevel directory for this git repo
## Working with the site
We use [Jekyll](https://jekyllrb.com/) to generate our sites.
To run the site locally with Docker, use the following command from the top level directory for this git repo
(e.g. pwd must be `~/github/istio.github.io` if you were in `~/github` when you issued
`git clone https://github.com/istio/istio.github.io.git`)
```bash
# First time: (slow)
docker run --name istio-jekyll --volume=$(pwd):/srv/jekyll -it -p 4000:4000 jekyll/jekyll:3.5.2 sh -c "bundle install && rake test && bundle exec jekyll serve --incremental --host 0.0.0.0"
docker run --name istio-jekyll --volume=$(pwd):/srv/jekyll -w /srv/jekyll -it -p 4000:4000 jekyll/jekyll:3.7.3 sh -c "bundle install && rake test && bundle exec jekyll serve --incremental --host 0.0.0.0"
# Then open browser with url 127.0.0.1:4000 to see the change.
# Subsequent runs, when you want to see a new change after stopping the previous run with ctrl+c: (much faster)
docker start istio-jekyll -a -i
@ -24,23 +33,24 @@ docker start istio-jekyll -a -i
docker rm istio-jekyll
```
The `rake test` part is to make sure you are not introducing html errors or bad links, you should see
```
The `rake test` part makes sure you are not introducing HTML errors or bad links; you should see
```bash
HTML-Proofer finished successfully.
```
in the output
> In some cases the `--incremental` may not work properly and you might have to remove it.
in the output.
## Local/Native Jekyll install:
Alternatively, if you just want to develop locally w/o Docker/Kubernetes/Minikube, you can try installing Jekyll locally.
You may need to install other prerequisites manually (which is where using the docker image shines). Here's an example of doing
so for Mac OS X:
Alternatively, if you just want to develop locally without Docker/Kubernetes/Minikube, you can try installing Jekyll locally. You may need to install other prerequisites manually (which is where using the Docker image shines). Here's an example of doing so for Mac OS X:
```
```bash
xcode-select --install
sudo xcodebuild -license
brew install ruby
gem update --system
gem install mdspell
gem install bundler
gem install jekyll
cd istio.github.io
@ -48,3 +58,117 @@ bundle install
bundle exec rake test
bundle exec jekyll serve
```
## Linting
You should run `scripts/linters.sh` before checking in your changes.
This runs three checks:
* HTML proofing, which ensures all your links are valid along with other checks.
* Spell checking.
* Style checking, which makes sure your markdown file complies with some common style rules.
If you get a spelling error, you have three choices to address it:
* It's a real typo, so fix your markdown.
* It's a command/field/symbol name, so stick some `backticks` around it.
* It's genuinely valid, so add the word to the `.spelling` file at the root of the repo.
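Whitelisting a word comes down to appending a line to `.spelling`. A minimal sketch ("Istiofied" stands in for whatever word was flagged; in the real file, global words belong before any ` - filename` override section rather than at the very end):

```shell
# Hypothetical example: whitelist a word the spell checker flagged.
# Run from the repo root. Global words go before any " - filename"
# per-file override section, not blindly at the end of the file.
echo "Istiofied" >> .spelling
grep -n "Istiofied" .spelling    # confirm the entry landed
```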
## Versions and releases
Istio maintains three variations of its public site:
* [istio.io](https://istio.io) is the main site, showing documentation for the current release of the product.
This site is currently hosted on Firebase.
* [archive.istio.io](https://archive.istio.io) contains snapshots of the documentation for previous releases of the product.
This is useful for customers still using these older releases.
This site is currently hosted on Firebase.
* [preliminary.istio.io](https://preliminary.istio.io) contains the actively updated documentation for the next release of the product.
This site is hosted by GitHub Pages.
The user can trivially navigate between the different variations of the site using the gear menu in the top right
of each page.
### How versioning works
* Documentation changes are primarily committed to the master branch of istio.github.io. Changes committed to this branch
are automatically reflected on preliminary.istio.io.
* The content of istio.io is taken from the latest release-XXX branch. The specific branch that
is used is determined by the `BRANCH` variable in this [script](https://github.com/istio/admin-sites/blob/master/current.istio.io/build.sh)
* The content of archive.istio.io is taken from the older release-XXX branches. The set of branches that
are included on archive.istio.io is determined by the `TOBUILD` variable in this
[script](https://github.com/istio/admin-sites/blob/master/archive.istio.io/build.sh)
> The above means that if you want to make a change to the main istio.io site, you will need
to make the change in the master branch of istio.github.io and then merge that change into the
release branch.
### Publishing content immediately
Checking in updates to the master branch will automatically update preliminary.istio.io, and will only be reflected on
istio.io the next time a release is created, which can be several weeks in the future. If you'd like some changes to be
immediately reflected on istio.io, you need to check your changes both to the master branch and to the
current release branch (named release-XXX such as release-0.7).
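One way to get a change onto both branches is to cherry-pick it. The following is a self-contained sketch in a scratch repository (the branch name, file, and commit messages are illustrative, not the real istio.github.io history):

```shell
# Scratch-repo sketch: land a fix on master, then cherry-pick the same
# commit onto the release branch so istio.io picks it up immediately.
cd "$(mktemp -d)" && git init -q repo && cd repo
git config user.email you@example.com && git config user.name you
echo base > doc.md && git add doc.md && git commit -qm "initial docs"
git branch release-0.7                      # the current release branch
echo fix >> doc.md && git add doc.md && git commit -qm "docs fix"
FIX_SHA=$(git rev-parse HEAD)               # the commit now on master
git checkout -q release-0.7
git cherry-pick "$FIX_SHA"                  # change now on both branches
```

In the real repository you would then push both branches so preliminary.istio.io and istio.io each rebuild with the fix.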
### Creating a version
Here are the steps necessary to create a new documentation version. Let's assume the current
version of Istio is 0.6 and you wish to introduce 0.7, which has been under development.
1. Create a new release branch off of master, named as release-*major*.*minor*, which in this case would be
release-0.7. There is one such branch for every release.
1. In the **master** branch, edit the file `_data/istio.yml` and update the `version` field to have the version
of the next release of Istio. In this case, you would set the field to 0.8.
1. In the **master** branch, edit the file `_data/releases.yml` and add a new entry at the top of the file
for version 0.8. You'll need to make sure the URLs are updated for the first few entries. The top
entry (0.8) should point to preliminary.istio.io. The second entry (0.7) should point to istio.io. The third
and subsequent entries should point to archive.istio.io.
1. Commit the previous two edits to GitHub.
1. In the **release** branch you created, edit the file `_data/istio.yml`. Set the `preliminary` field to `false`.
1. Commit the previous edit to GitHub.
1. Go to the Google Search Console and create a new search engine that searches the archive.istio.io/V&lt;major&gt;.&lt;minor&gt;
directory. This search engine will be used to perform version-specific searches on archive.istio.io.
1. In the **previous release's** branch (in this case release-0.6), edit the file `_data/istio.yml`. Set the
`archive` field to true, the `archive_date` field to the current date, and the `search_engine_id` field
to the ID of the search engine you created in the prior step.
1. Switch to the istio/admin-sites repo.
1. Navigate to the archive.istio.io directory.
1. Edit the `build.sh` script to add the newest archive version (in this case
release-0.6) to the `TOBUILD` variable.
1. Commit the previous edit to GitHub.
1. Run the `build.sh` script.
1. Once the script completes, run `firebase deploy`. This will update archive.istio.io to contain the
right set of archives, based on the above steps.
1. Navigate to the current.istio.io directory.
1. Edit the `build.sh` script to set the `BRANCH` variable to the current release branch (in this case release-0.7).
1. Run the `build.sh` script.
1. Once the script completes, run `firebase deploy`. This will update the content of istio.io to reflect the new release
branch you created.
Once all this is done, browse the three sites (preliminary.istio.io, istio.io, and archive.istio.io) to make sure
everything looks good.
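The two `_data/istio.yml` edits in the steps above (bumping `version` on master, flipping `preliminary` on the release branch) can be sketched as follows. The stand-in file below has only the fields mentioned here; the real file has more, and the two edits happen on different branches:

```shell
# Scratch sketch of the _data/istio.yml edits (illustrative fields only).
cd "$(mktemp -d)" && mkdir _data
printf 'version: "0.7"\npreliminary: true\n' > _data/istio.yml
# On master: bump to the next development version.
sed -i.bak 's/^version: .*/version: "0.8"/' _data/istio.yml
# On the new release branch: mark the docs as released.
sed -i.bak 's/^preliminary: .*/preliminary: false/' _data/istio.yml
cat _data/istio.yml   # prints: version: "0.8" then preliminary: false
```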


@ -1,7 +1,9 @@
require 'html-proofer'
task :test do
sh "bundle exec jekyll build --incremental"
sh "rm -fr _rakesite"
sh "mkdir _rakesite"
sh "bundle exec jekyll build --config _config.yml,_rake_config_override.yml"
typhoeus_configuration = {
:timeout => 30,
# :verbose => true
@ -17,5 +19,5 @@ task :test do
:url_ignore => [/localhost|github\.com\/istio\/istio\.github\.io\/edit\/master\//],
:typhoeus => typhoeus_configuration,
}
HTMLProofer.check_directory("./_site", options).run
HTMLProofer.check_directory("./_rakesite", options).run
end
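The `_rake_config_override.yml` file referenced by the Rakefile change above isn't shown in this diff. A plausible minimal shape, assuming all it needs to do is redirect Jekyll's output into the `_rakesite` directory the task now checks (the contents below are an assumption, not the actual file):

```shell
# Assumed contents of _rake_config_override.yml: just a destination
# override so the proofed site lands in _rakesite (the commit message
# suggests building into a subdirectory so {{home}} misuse breaks links).
cat > _rake_config_override.yml <<'EOF'
destination: _rakesite
EOF
cat _rake_config_override.yml   # prints: destination: _rakesite
```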


@ -1,23 +1,21 @@
---
title: Creating a Pull Request
overview: Shows you how to create a GitHub pull request in order to submit your docs for approval.
description: Shows you how to create a GitHub pull request in order to submit your docs for approval.
order: 20
weight: 20
layout: about
type: markdown
redirect_from: /docs/welcome/contribute/creating-a-pull-request.html
---
To contribute to Istio documentation, create a pull request against the
[istio/istio.github.io](https://github.com/istio/istio.github.io){: target="_blank"}
[istio/istio.github.io](https://github.com/istio/istio.github.io)
repository. This page shows the steps necessary to create a pull request.
## Before you begin
1. Create a [GitHub account](https://github.com){: target="_blank"}.
1. Create a [GitHub account](https://github.com).
1. Sign the [Contributor License Agreement](https://github.com/istio/community/blob/master/CONTRIBUTING.md#contributor-license-agreements)
1. Sign the [Contributor License Agreement](https://github.com/istio/community/blob/master/CONTRIBUTING.md#contributor-license-agreements).
Documentation will be published under the [Apache 2.0](https://github.com/istio/istio.github.io/blob/master/LICENSE) license.
@ -26,7 +24,7 @@ Documentation will be published under the [Apache 2.0](https://github.com/istio/
Before you can edit documentation, you need to create a fork of Istio's documentation GitHub repository:
1. Go to the
[istio/istio.github.io](https://github.com/istio/istio.github.io){: target="_blank"}
[istio/istio.github.io](https://github.com/istio/istio.github.io)
repository.
1. In the upper-right corner, click **Fork**. This creates a copy of Istio's
@ -57,3 +55,6 @@ repository. This opens a page that shows the status of your pull request.
1. During the next few days, check your pull request for reviewer comments.
If needed, revise your pull request by committing changes to your
new branch in your fork.
> Once your changes have been committed, they will show up immediately on [preliminary.istio.io](https://preliminary.istio.io), but
will only show up on [istio.io](http://istio.io) the next time we produce a new release, which happens around once a month.


@ -1,17 +1,18 @@
---
title: Editing Docs
overview: Lets you start editing this site's documentation.
order: 10
description: Lets you start editing this site's documentation.
weight: 10
layout: about
type: markdown
redirect_from: /docs/welcome/contribute/editing.html
---
Click the button below to visit the GitHub repository for this whole web site. You can then click the
**Fork** button in the upper-right area of the screen to
**Fork** button in the upper-right area of the screen to
create a copy of our site in your GitHub account called a _fork_. Make any changes you want in your fork, and when you
are ready to send those changes to us, go to the index page for your fork and click **New Pull Request** to let us know about it.
<a class="btn btn-istio" href="https://github.com/istio/istio.github.io/">Browse this site's source code</a>
> Once your changes have been committed, they will show up immediately on [preliminary.istio.io](https://preliminary.istio.io), but
will only show up on [istio.io](http://istio.io) the next time we produce a new release, which happens around once a month.


@ -1,13 +1,11 @@
---
title: Contributing to the Docs
overview: Learn how to contribute to improve and expand the Istio documentation.
description: Learn how to contribute to improve and expand the Istio documentation.
order: 100
weight: 100
layout: about
type: markdown
toc: false
redirect_from: /docs/welcome/contribute/index.html
---
{% include section-index.html docs=site.docs %}
{% include section-index.html docs=site.about %}


@ -1,16 +1,14 @@
---
title: Doc Issues
overview: Explains the process involved in accepting documentation updates.
order: 60
description: Explains the process involved in accepting documentation updates.
weight: 60
layout: about
type: markdown
redirect_from: /docs/welcome/contribute/reviewing-doc-issues.html
---
This page explains how documentation issues are reviewed and prioritized for the
[istio/istio.github.io](https://github.com/istio/istio.github.io){: target="_blank"} repository.
[istio/istio.github.io](https://github.com/istio/istio.github.io) repository.
The purpose is to provide a way to organize issues and make it easier to contribute to
Istio documentation. The following should be used as the standard way of prioritizing,
labeling, and interacting with issues.
@ -25,7 +23,7 @@ the issue with your reasoning for the change.
<td>P1</td>
<td><ul>
<li>Major content errors affecting more than 1 page</li>
<li>Broken code sample on a heavily trafficked page</li>
<li>Broken code sample on a heavily trafficked page</li>
<li>Errors on a “getting started” page</li>
<li>Well known or highly publicized customer pain points</li>
<li>Automation issues</li>
@ -52,7 +50,7 @@ the issue with your reasoning for the change.
## Handling special issue types
If a single problem has one or more issues open for it, the problem should be consolidated into a single issue. You should decide which issue to keep open
If a single problem has one or more issues open for it, the problem should be consolidated into a single issue. You should decide which issue to keep open
(or open a new issue), port over all relevant information, link related issues, and close all the other issues that describe the same problem. Only having
a single issue to work on will help reduce confusion and avoid duplicating work on the same problem.


@ -1,11 +1,9 @@
---
title: Staging Your Changes
overview: Explains how to test your changes locally before submitting them.
description: Explains how to test your changes locally before submitting them.
order: 40
weight: 40
layout: about
type: markdown
redirect_from: /docs/welcome/contribute/staging-your-changes.html
---


@ -1,11 +1,9 @@
---
title: Style Guide
overview: Explains the dos and donts of writing Istio docs.
order: 70
description: Explains the dos and don'ts of writing Istio docs.
weight: 70
layout: about
type: markdown
redirect_from: /docs/welcome/contribute/style-guide.html
---
@ -29,18 +27,18 @@ objects use
[camelCase](https://en.wikipedia.org/wiki/Camel_case).
Don't split the API object name into separate words. For example, use
PodTemplateList, not Pod Template List.
`PodTemplateList`, not Pod Template List.
Refer to API objects without saying "object," unless omitting "object"
leads to an awkward construction.
|Do |Don't
|--------------------------------------------|------
|The Pod has two Containers. |The pod has two containers.
|The Deployment is responsible for ... |The Deployment object is responsible for ...
|A PodList is a list of Pods. |A Pod List is a list of pods.
|The two ContainerPorts ... |The two ContainerPort objects ...
|The two ContainerStateTerminated objects ...|The two ContainerStateTerminateds ...
|The `Pod` has two Containers. |The pod has two containers.
|The `Deployment` is responsible for ... |The `Deployment` object is responsible for ...
|A `PodList` is a list of Pods. |A Pod List is a list of pods.
|The two `ContainerPorts` ... |The two `ContainerPort` objects ...
|The two `ContainerStateTerminated` objects ...|The two `ContainerStateTerminateds` ...
### Use angle brackets for placeholders
@ -49,10 +47,10 @@ represents.
1. Display information about a pod:
```bash
kubectl describe pod <pod-name>
```command
$ kubectl describe pod <pod-name>
```
where `<pod-name>` is the name of one of your pods.
### Use **bold** for user interface elements
@ -81,7 +79,7 @@ represents.
|Do | Don't
|----------------------------|------
|The `kubectl run` command creates a Deployment.|The "kubectl run" command creates a Deployment.
|The `kubectl run` command creates a `Deployment`.|The "kubectl run" command creates a `Deployment`.
|For declarative management, use `kubectl apply`.|For declarative management, use "kubectl apply".
### Use `code` style for object field names
@ -97,19 +95,20 @@ For field values of type string or integer, use normal style without quotation m
|Do | Don't
|----------------------------------------------|------
|Set the value of `imagePullPolicy` to Always. | Set the value of `imagePullPolicy` to "Always".|Set the value of `image` to nginx:1.8. | Set the value of `image` to `nginx:1.8`.
|Set the value of `imagePullPolicy` to Always. | Set the value of `imagePullPolicy` to "Always".
|Set the value of `image` to nginx:1.8. | Set the value of `image` to `nginx:1.8`.
|Set the value of the `replicas` field to 2. | Set the value of the `replicas` field to `2`.
### Only capitalize the first letter of headings
For any headings, only apply an uppercase letter to the first word of the heading,
except is a word is a proper noun or an acronym.
except if a word is a proper noun or an acronym.
|Do | Don't
|------------------------|-----
|Configuring rate limits | Configuring Rate Limits
|Using Envoy for ingress | Using envoy for ingress
|Using HTTPS | Using https
|Using HTTPS | Using https
## Code snippet formatting
@ -123,12 +122,8 @@ except is a word is a proper noun or an acronym.
Verify that the pod is running on your chosen node:
```command
$ kubectl get pods --output=wide
NAME READY STATUS RESTARTS AGE IP NODE
nginx 1/1 Running 0 13s 10.200.0.4 worker0
```
@ -150,7 +145,7 @@ Synonyms:
- “Sidecar” -- mostly restricted to conceptual docs
- “Proxy” -- only if context is obvious
Related Terms:
- Proxy agent - This is a minor infrastructural component and should only show up in low-level detail documentation.
It is not a proper noun.
@ -171,7 +166,7 @@ forms of configuration.
No dash, it's *load balancing* not *load-balancing*.
### Service mesh
Not a proper noun. Use in place of service fabric.
@ -208,7 +203,7 @@ Use simple and direct language. Avoid using unnecessary phrases, such as saying
|Do | Don't
|----------------------------|------
|To create a `ReplicaSet`, ... | In order to create a `ReplicaSet`, ...
|See the configuration file. | Please see the configuration file.
|View the Pods. | With this next command, we'll view the Pods.
@ -216,9 +211,17 @@ Use simple and direct language. Avoid using unnecessary phrases, such as saying
|Do | Don't
|---------------------------------------|------
|You can create a `Deployment` by ... | We'll create a `Deployment` by ...
|In the preceding output, you can see...| In the preceding output, we can see ...
### Create useful links
There are good hyperlinks and bad hyperlinks. The common practice of labeling links *here* or *click here* is an example of
a bad hyperlink. Check out this excellent article explaining what makes a good hyperlink, and try to keep its guidelines in
mind when creating or reviewing site content:
[Why “click here” is a terrible link, and what to write instead](http://stephanieleary.com/2015/05/why-click-here-is-a-terrible-link-and-what-to-write-instead/).
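For example, prefer descriptive link text over a generic label (the link target below is illustrative):

```markdown
Do:    See the [sidecar injection guide]({{home}}/docs/setup/kubernetes/sidecar-injection.html).
Don't: To learn about sidecar injection, click [here]({{home}}/docs/setup/kubernetes/sidecar-injection.html).
```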
## Patterns to avoid
### Avoid using "we"
@ -229,7 +232,7 @@ whether they're part of the "we" you're describing.
|Do | Don't
|------------------------------------------|------
|Version 1.4 includes ... | In version 1.4, we have added ...
|Istio provides a new feature for ... | We provide a new feature ...
|This page teaches you how to use pods. | In this page, we are going to learn about pods.
### Avoid jargon and idioms


@ -1,11 +1,9 @@
---
title: Writing a New Topic
description: Explains the mechanics of creating new documentation pages.
weight: 30
redirect_from: /docs/welcome/contribute/writing-a-new-topic.html
---
{% include home.html %}
@ -25,7 +23,7 @@ is the best fit for your content:
<table>
<tr>
<td>Concept</td>
<td>A concept page explains some significant aspect of Istio. For example, a concept page might describe the
Mixer's configuration model and explain some of its subtleties.
Typically, concept pages don't include sequences of steps, but instead provide links to
tasks that do.</td>
@ -55,10 +53,10 @@ is the best fit for your content:
<tr>
<td>Setup</td>
<td>A setup page is similar to a task page, except that it is focused on installation
activities.
</td>
</tr>
<tr>
<td>Blog Post</td>
<td>
@ -79,50 +77,47 @@ all in lower case.
## Updating the front matter
Every documentation file needs to start with Jekyll
[front matter](https://jekyllrb.com/docs/frontmatter/).
The front matter is a block of YAML that is between the
triple-dashed lines at the top of each file. Here's the
chunk of front matter you should start with:
```yaml
---
title: <title>
description: <description>
weight: <weight>
---
```
Copy the above at the start of your new markdown file and update
the `<title>`, `<description>` and `<weight>` fields for your particular file. The available front
matter fields are:
|Field | Description
|---------------|------------
|`title` | The short title of the page
|`description` | A one-line description of what the topic is about
|`weight` | An integer used to determine the sort order of this page relative to other pages in the same directory.
|`layout` | Indicates which of the Jekyll layouts this page uses
|`index` | Indicates whether the page should appear in the doc's top nav tabs
|`draft` | When true, prevents the page from showing up in any navigation area
|`publishdate` | For blog posts, indicates the date of publication of the post
|`subtitle` | For blog posts, supplies an optional subtitle to be displayed below the main title
|`attribution` | For blog posts, supplies an optional author's name
|`toc` | Set this to false to prevent the page from having a table of contents generated for it
|`force_inline_toc` | Set this to true to force the generated table of contents to be inserted inline in the text instead of in a sidebar
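As an illustration, a blog post's front matter might combine several of these fields (all values below are hypothetical):

```yaml
---
title: Announcing a New Adapter
description: A one-line summary of what the post is about.
weight: 10
publishdate: 2018-02-14
subtitle: An optional subtitle displayed below the main title
attribution: Jane Doe
toc: false
---
```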
## Choosing a directory
Depending on your page type, put your new file in a subdirectory of one of these:
- _blog/
- _docs/concepts/
- _docs/guides/
- _docs/reference/
- _docs/setup/
- _docs/tasks/
You can put your file in an existing subdirectory, or you can create a new
subdirectory. For blog posts, put the file into a subdirectory for the current
@ -135,24 +130,28 @@ Put image files in an `img` subdirectory of where you put your markdown file. Th
If you must use a PNG or JPEG file instead, and the file
was generated from an original SVG file, please include the
SVG file in the repository even if it isn't used in the web
site itself. This is so we can update the imagery over time
if needed.
Within markdown, use the following sequence to add the image:
```html
{% raw %}
{% include image.html width="75%" ratio="69.52%"
link="./img/myfile.svg"
alt="Alternate text to display when the image is not available"
title="A tooltip displayed when hovering over the image"
caption="A caption displayed under the image"
%}
{% endraw %}
```
The `width`, `ratio`, `link` and `caption` values are required. If the `title` value isn't
supplied, it'll default to the same as `caption`. If the `alt` value is not supplied, it'll
default to `title` or, if that's not defined, to `caption`.
`width` represents the percentage of space used by the image
relative to the surrounding text. `ratio` is (image height / image width) * 100. For example, for
an image that is 1000 pixels wide and 695 pixels tall, the ratio would be (695 / 1000) * 100 = 69.5%.
## Linking to other pages
@ -181,10 +180,10 @@ current hierarchy:
{% raw %}[see here]({{home}}/docs/adir/afile.html){% endraw %}
```
In order to use \{\{home\}\} in a file,
you need to make sure that the file contains the following
line of boilerplate right after the block of front matter:
```markdown
...
---
{% include home.html %}
```
@ -197,7 +196,7 @@ current hierarchy:
You can embed blocks of preformatted content using the normal markdown technique:
<pre class="language-markdown"><code>```plain
func HelloWorld() {
fmt.Println("Hello World")
}
@ -206,14 +205,14 @@ func HelloWorld() {
The above produces this kind of output:
```plain
func HelloWorld() {
fmt.Println("Hello World")
}
```
You must indicate the nature of the content in the preformatted block by appending a name after the initial set of tick
marks:
<pre class="language-markdown"><code>```go
func HelloWorld() {
@ -230,8 +229,76 @@ func HelloWorld() {
}
```
You can use `markdown`, `yaml`, `json`, `java`, `javascript`, `c`, `cpp`, `csharp`, `go`, `html`, `protobuf`,
`perl`, `docker`, and `bash`, along with `command` and its variants described below.
### Showing commands and command output
If you want to show one or more bash command-lines with some output, you use the `command` indicator:
<pre class="language-markdown"><code>```command
$ echo "Hello"
Hello
```
</code></pre>
which produces:
```command
$ echo "Hello"
Hello
```
You can have as many command-lines as you want, but only one chunk of output is recognized.
<pre class="language-markdown"><code>```command
$ echo "Hello" >file.txt
$ cat file.txt
Hello
```
</code></pre>
which yields:
```command
$ echo "Hello" >file.txt
$ cat file.txt
Hello
```
You can also use line continuation in your command-lines:
```command
$ echo "Hello" \
>file.txt
$ echo "There" >>file.txt
$ cat file.txt
Hello
There
```
If the output of the command is JSON or YAML, you can use `command-output-as-json` and `command-output-as-yaml`
instead of merely `command` to apply syntax coloring to the command's output.
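For example, a block such as the following (the command and its output are illustrative) renders the JSON
output with syntax coloring:

<pre class="language-markdown"><code>```command-output-as-json
$ kubectl get service productpage -o json
{
  "apiVersion": "v1",
  "kind": "Service"
}
```
</code></pre>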
## Displaying file content
You can pull in an external file and display its content as a preformatted block. This is handy to display a
config file or a test file. To do so, you use a Jekyll include statement such as:
```html
{% raw %}{% include file-content.html url='https://raw.githubusercontent.com/istio/istio/master/Makefile' %}{% endraw %}
```
which produces the following result:
{% include file-content.html url='https://raw.githubusercontent.com/istio/istio/master/Makefile' %}
If the file is from a different origin site, CORS must be enabled on that site. Note that the
GitHub raw content site (raw.githubusercontent.com) is CORS-enabled, so it may be used here.
Note that unlike normal preformatted blocks, dynamically loaded preformatted blocks unfortunately
do not get syntax colored.
## Adding redirects
@ -241,44 +308,40 @@ redirects to the site very easily.
In the page that is the target of the redirect (where you'd like users to land), you simply add the
following to the front-matter:
```plain
redirect_from: <url>
redirect_from: <url>
```
For example:
```plain
---
title: Frequently Asked Questions
description: Questions Asked Frequently
weight: 12
redirect_from: /faq
---
```
With the above in a page saved as `_help/faq.md`, the user will be able to access the page by going
to `istio.io/help/faq` as normal, as well as `istio.io/faq`.
You can also add many redirects like so:
```plain
---
title: Frequently Asked Questions
description: Questions Asked Frequently
weight: 12
redirect_from:
- /faq
- /faq2
- /faq3
---
```


@ -1,11 +1,9 @@
---
title: Feature Status
description: List of features and their release stages.
weight: 10
redirect_from:
- "/docs/reference/release-roadmap.html"
- "/docs/reference/feature-stages.html"
@ -13,28 +11,26 @@ redirect_from:
---
{% include home.html %}
Starting with 0.3, Istio releases are delivered on a monthly cadence. You can download the current version by visiting our
[release page](https://github.com/istio/istio/releases).
This page lists the relative maturity and support
level of every Istio feature. Please note that the phases (Alpha, Beta, and Stable) are applied to individual features
within the project, not to the project as a whole. Here is a high-level description of what these labels mean:
## Feature phase definitions
| | Alpha | Beta | Stable
|-------------------|-------------------|-------------------|-------------------
| **Purpose** | Demo-able, works end-to-end but has limitations | Usable in production, not a toy anymore | Dependable, production hardened
| **API** | No guarantees on backward compatibility | APIs are versioned | Dependable, production-worthy. APIs are versioned, with automated version conversion for backward compatibility
| **Performance** | Not quantified or guaranteed | Not quantified or guaranteed | Performance (latency/scale) is quantified, documented, with guarantees against regression
| **Deprecation Policy** | None | Weak - 3 months | Dependable, Firm. 1 year notice will be provided before changes
## Istio features
Below is our list of existing features and their current phases. This information will be updated after every monthly release.
### Traffic management
| Feature | Phase
|-------------------|-------------------
| [Protocols: HTTP 1.1](https://www.envoyproxy.io/docs/envoy/latest/intro/arch_overview/http_connection_management.html#http-protocols) | Beta
| [Protocols: HTTP 2.0](https://www.envoyproxy.io/docs/envoy/latest/intro/arch_overview/http_connection_management.html#http-protocols) | Alpha
@ -47,60 +43,63 @@ Below is our list of existing features and their current phases. This informatio
| [Routing Rules: Circuit Break]({{home}}/docs/tasks/traffic-management/request-routing.html) | Alpha
| [Routing Rules: Header Rewrite]({{home}}/docs/tasks/traffic-management/request-routing.html) | Alpha
| [Routing Rules: Traffic Splitting]({{home}}/docs/tasks/traffic-management/request-routing.html) | Alpha
| Improved Routing Rules: Composite Service | Alpha
| [Quota / Redis Rate Limiting (Adapter and Server)]({{home}}/docs/tasks/policy-enforcement/rate-limiting.html) | Alpha
| [Memquota Implementation and Integration]({{home}}/docs/tasks/telemetry/metrics-logs.html) | Stable
| [Ingress TLS]({{home}}/docs/tasks/traffic-management/ingress.html) | Alpha
| Egress Policy and Telemetry | Alpha
### Observability
| Feature | Phase
|-------------------|-------------------
| [Prometheus Integration]({{home}}/docs/guides/telemetry.html) | Beta
| [Local Logging (STDIO)]({{home}}/docs/guides/telemetry.html) | Stable
| [Statsd Integration]({{home}}/docs/reference/config/adapters/statsd.html) | Stable
| [Service Dashboard in Grafana]({{home}}/docs/tasks/telemetry/using-istio-dashboard.html) | Beta
| [Stackdriver Integration]({{home}}/docs/reference/config/adapters/stackdriver.html) | Alpha
| [Service Graph]({{home}}/docs/tasks/telemetry/servicegraph.html) | Alpha
| [Distributed Tracing to Zipkin / Jaeger]({{home}}/docs/tasks/telemetry/distributed-tracing.html) | Alpha
| [Istio Component Dashboard in Grafana]({{home}}/docs/tasks/telemetry/using-istio-dashboard.html) | Beta
| Service Tracing | Alpha
### Security
| Feature | Phase
|-------------------|-------------------
| [Deny Checker]({{home}}/docs/reference/config/adapters/denier.html) | Stable
| [List Checker]({{home}}/docs/reference/config/adapters/list.html) | Stable
| [Kubernetes: Service Credential Distribution]({{home}}/docs/concepts/security/mutual-tls.html) | Stable
| [Pluggable Key/Cert Support for Istio CA]({{home}}/docs/tasks/security/plugin-ca-cert.html) | Stable
| [Service-to-service mutual TLS]({{home}}/docs/concepts/security/mutual-tls.html) | Stable
| [Incremental Enablement of service-to-service mutual TLS]({{home}}/docs/tasks/security/per-service-mtls.html) | Alpha
| [VM: Service Credential Distribution]({{home}}/docs/concepts/security/mutual-tls.html) | Alpha
| [OPA Checker](https://github.com/istio/istio/blob/41a8aa4f75f31bf0c1911d844a18da4cff8ac584/mixer/adapter/opa/README.md) | Alpha
| RBAC Mixer Adapter | Alpha
| API Keys | Alpha
### Core
| Feature | Phase
|-------------------|-------------------
| [Kubernetes: Envoy Installation and Traffic Interception]({{home}}/docs/setup/kubernetes/) | Beta
| [Kubernetes: Istio Control Plane Installation]({{home}}/docs/setup/kubernetes/) | Beta
| [Pilot Integration into Kubernetes Service Discovery]({{home}}/docs/setup/kubernetes/) | Stable
| [Attribute Expression Language]({{home}}/docs/reference/config/mixer/expression-language.html) | Stable
| [Mixer Adapter Authoring Model]({{home}}/blog/2017/adapter-model.html) | Stable
| [VM: Envoy Installation, Traffic Interception and Service Registration]({{home}}/docs/guides/integrating-vms.html) | Alpha
| [VM: Istio Control Plane Installation and Upgrade (Galley, Mixer, Pilot, CA)](https://github.com/istio/istio/issues/2083) | Alpha
| [Kubernetes: Istio Control Plane Upgrade]({{home}}/docs/setup/kubernetes/) | Beta
| [Pilot Integration into Consul]({{home}}/docs/setup/consul/quick-start.html) | Alpha
| [Pilot Integration into Eureka]({{home}}/docs/setup/consul/quick-start.html) | Alpha
| [Pilot Integration into Cloud Foundry Service Discovery]({{home}}/docs/setup/consul/quick-start.html) | Alpha
| [Basic Config Resource Validation](https://github.com/istio/istio/issues/1894) | Alpha
| Mixer Telemetry Collection (Tracing, Logging, Monitoring) | Alpha
| Custom Mixer Build Model | Alpha
| Enable API attributes using an IDL | Alpha
| [Helm]({{home}}/docs/setup/kubernetes/helm-install.html) | Alpha
| [Multicluster Mesh]({{home}}/docs/setup/kubernetes/multicluster-install.html) | Alpha
> <img src="{{home}}/img/bulb.svg" alt="Bulb" title="Help" style="width: 32px; display:inline" />
Please get in touch by joining our [community]({{home}}/community.html) if there are features you'd like to see in our future releases!


@ -1,12 +1,20 @@
---
title: About Istio
description: All about Istio.
weight: 15
toc: false
---
{% include home.html %}
Get a bit more in-depth info about the Istio project.
- [What is Istio?]({{home}}/about/intro.html). Get some context about what problems Istio is designed to solve.
- [Release Notes]({{home}}/about/notes/). Learn about the latest features, improvements, and bug fixes.
- [Feature Status]({{home}}/about/feature-stages.html). Get a detailed list of Istio's individual features and their relative
maturity and support level.
- [Contributing to the Docs]({{home}}/about/contribute/). Learn how you can help contribute to improve Istio's documentation.


@ -1,13 +1,10 @@
---
title: What is Istio?
description: Context about what problems Istio is designed to solve.
weight: 1
toc: false
redirect_from: /about.html
---
Istio is an open platform that provides a uniform way to connect, manage,
@ -26,11 +23,11 @@ rate limits and quotas.
- Automatic metrics, logs, and traces for all traffic within a cluster,
including cluster ingress and egress.
- Secure service-to-service communication in a cluster with strong
identity-based authentication and authorization.
Istio can be deployed on [Kubernetes](https://kubernetes.io), or on
[Nomad](https://nomadproject.io) with [Consul](https://www.consul.io/). We
plan to add support for additional platforms such as
[Cloud Foundry](https://www.cloudfoundry.org/),
and [Apache Mesos](https://mesos.apache.org/) in the near future.


@ -1,10 +1,8 @@
---
title: Istio 0.1
weight: 100
toc: false
redirect_from: /docs/welcome/notes/0.1.html
---


@ -1,43 +1,43 @@
---
title: Istio 0.2
weight: 99
redirect_from: /docs/welcome/notes/0.2.html
---
## General
- **Updated Config Model**. Istio now uses the Kubernetes [Custom Resource](https://kubernetes.io/docs/concepts/api-extension/custom-resources/)
model to describe and store its configuration. When running in Kubernetes, configuration can now be optionally managed using the `kubectl`
command.
- **Multiple Namespace Support**. Istio control plane components are now in the dedicated "istio-system" namespace. Istio can manage
services in other non-system namespaces.
- **Mesh Expansion**. Initial support for adding non-Kubernetes services (in the form of VMs and/or physical machines) to a mesh. This is an early version of
this feature and has some limitations (such as requiring a flat network across containers and VMs).
- **Multi-Environment Support**. Initial support for using Istio in conjunction with other service registries
including Consul and Eureka.
- **Automatic injection of sidecars**. Istio sidecar can automatically be injected into a Pod upon deployment using the
[Initializers](https://kubernetes.io/docs/admin/extensible-admission-controllers/#what-are-initializers) alpha feature in Kubernetes.
## Performance and quality
There have been many performance and reliability improvements throughout the system. We don't consider Istio 0.2 ready for production yet, but
we've made excellent progress in that direction. Here are a few items of note:
- **Caching Client**. The Mixer client library used by Envoy now provides caching for Check calls and batching for Report calls, considerably reducing
end-to-end overhead.
- **Avoid Hot Restarts**. The need to hot-restart Envoy has been mostly eliminated through effective use of LDS/RDS/CDS/EDS.
- **Reduced Memory Use**. Significantly reduced the size of the sidecar helper agent, from 50Mb to 7Mb.
- **Improved Mixer Latency**. Mixer now clearly delineates configuration-time vs. request-time computations, which avoids doing extra setup work at
request-time for initial requests and thus delivers a smoother average latency. Better resource caching also contributes to better end-to-end performance.
- **Reduced Latency for Egress Traffic**. We now forward traffic to external services directly from the sidecar.
@ -55,15 +55,15 @@ Jaeger tracing.
- **Ingress Policies**. In addition to east-west traffic supported in 0.1, policies can now be applied to north-south traffic.
- **Support for TCP Services**. In addition to the HTTP-level policy controls available in 0.1, 0.2 introduces policy controls for
TCP services.
- **New Mixer API**. The API that Envoy uses to interact with Mixer has been completely redesigned for increased robustness, flexibility, and to support
rich proxy-side caching and batching for increased performance.
- **New Mixer Adapter Model**. A new adapter composition model makes it easier to extend Mixer by adding whole new classes of adapters via templates. This
new model will serve as the foundational building block for many features in the future. See the
[Adapter Developer's Guide](https://github.com/istio/istio/wiki/Mixer-Adapter-Dev-Guide) to learn how
to write adapters.
- **Improved Mixer Build Model**. It's now easier to build a Mixer binary that includes custom adapters.
@ -84,10 +84,15 @@ identity provisioning. This agent runs on each node (VM / physical machine) and
- **Bring Your Own CA Certificates**. Allows users to provide their own key and certificate for Istio CA.
- **Persistent CA Key/Certificate Storage**. Istio CA now stores signing key/certificates in
persistent storage to facilitate CA restarts.
## Known issues
- **User may get periodical 404 when accessing the application**: We have noticed that Envoy doesn't get routes properly occasionally
thus a 404 is returned to the user. We are actively working on this [issue](https://github.com/istio/istio/issues/1038).
- **Istio Ingress or Egress reports ready before Pilot is actually ready**: You can check the istio-ingress and istio-egress pods status
in the `istio-system` namespace and wait a few seconds after all the Istio pods reach ready status. We are actively working on this
[issue](https://github.com/istio/istio/pull/1055).
- **A service with Istio Auth enabled can't communicate with a service without Istio**: This limitation will be removed in the near future.


@ -1,13 +1,13 @@
---
title: Istio 0.3
weight: 98
redirect_from: /docs/welcome/notes/0.3.html
---
{% include home.html %}
## General
Starting with 0.3, Istio is switching to a monthly release cadence. We hope this will help accelerate our ability
@ -40,6 +40,5 @@ significant drop in average latency for authorization checks.
- **Config Validation**. Mixer does more extensive validation of configuration state in order to catch problems earlier.
We expect to invest more in this area in coming releases.
If you're into the nitty-gritty details, check out our more detailed
[low-level release notes](https://github.com/istio/istio/wiki/v0.3.0).


@ -1,17 +1,15 @@
---
title: Istio 0.4
weight: 97
toc: false
redirect_from: /docs/welcome/notes/0.4.html
---
{% include home.html %}
This release includes only a few weeks' worth of changes, as we stabilize our monthly release process.
In addition to the usual pile of bug fixes and performance improvements, this release includes:
- **Cloud Foundry**. Added minimum Pilot support for the [Cloud Foundry](https://www.cloudfoundry.org) platform, making it
possible for Pilot to discover CF services and service instances.


@ -1,14 +1,12 @@
---
title: Istio 0.5
weight: 96
---
{% include home.html %}
In addition to the usual pile of bug fixes and performance improvements, this release includes the new or
updated features detailed below.
## Networking
@ -17,22 +15,24 @@ updated features detailed below.
the components you want (e.g., Pilot + Ingress only as the minimal Istio install). Refer to the `istioctl` CLI tool for
information on generating customized Istio deployments.
- **Automatic Proxy Injection**. We leverage Kubernetes 1.9's new
[mutating webhook feature](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.9.md#api-machinery) to provide automatic
pod-level proxy injection. Automatic injection requires Kubernetes 1.9 or beyond and
therefore doesn't work on older versions. The alpha initializer mechanism is no longer supported.
[Learn more]({{home}}/docs/setup/kubernetes/sidecar-injection.html#automatic-sidecar-injection)
- **Revised Traffic Rules**. Based on user feedback, we have made significant changes to Istio's traffic management
(routing rules, destination rules, etc.). We would love your continuing feedback while we polish this in the coming weeks.
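As a sketch of how the automatic injection feature above is typically opted into, a namespace can be labeled so the mutating webhook injects sidecars into new pods (the namespace name here is an assumption for illustration):

```command
$ kubectl label namespace default istio-injection=enabled
$ kubectl get namespace -L istio-injection
```

Pods created in a labeled namespace after this point get the sidecar automatically; existing pods must be recreated.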
## Mixer adapters
- **Open Policy Agent**. Mixer now has an authorization adapter implementing the [open policy agent](https://www.openpolicyagent.org) model,
providing a flexible fine-grained access control mechanism. [Learn more](https://docs.google.com/document/d/1U2XFmah7tYdmC5lWkk3D43VMAAQ0xkBatKmohf90ICA/edit#heading=h.fmlgl8m03gfy)
- **Istio RBAC**. Mixer now has a role-based access control adapter.
[Learn more]({{home}}/docs/concepts/security/rbac.html)
- **Fluentd**. Mixer now has an adapter for log collection through [fluentd](https://www.fluentd.org).
[Learn more]({{home}}/docs/tasks/telemetry/fluentd.html)
- **Stdio**. The stdio adapter now lets you log to files with support for log rotation & backup, along with a host
of controls.
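As a sketch of how one of these adapters might be wired up, a Mixer handler for the new fluentd adapter could look something like this (the handler name and fluentd daemon address are assumptions for illustration):

```yaml
# Mixer handler routing log entries to a fluentd daemon.
# The address below is an assumption; point it at your own fluentd service.
apiVersion: "config.istio.io/v1alpha2"
kind: fluentd
metadata:
  name: handler
  namespace: istio-system
spec:
  address: "fluentd-es.logging:24224"
```

A logentry instance and a rule would then bind log data to this handler, as with any other Mixer adapter.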
## Other
- **Release-Mode Binaries**. We switched release and installation default to release for improved
performance and security.
- **Component Logging**. Istio components now offer a rich set of command-line options to control local logging, including
common support for log rotation.
- **Consistent Version Reporting**. Istio components now offer a consistent command-line interface to report their version information.

_about/notes/0.6.md (new file)
---
title: Istio 0.6
weight: 95
---
{% include home.html %}
In addition to the usual pile of bug fixes and performance improvements, this release includes the new or
updated features detailed below.
## Networking
- **Custom Envoy Config**. Pilot now supports ferrying custom Envoy config to the
proxy. [Learn more](https://github.com/mandarjog/istioluawebhook)
## Mixer adapters
- **SolarWinds**. Mixer can now interface to AppOptics and Papertrail.
[Learn more]({{home}}/docs/reference/config/adapters/solarwinds.html)
- **Redisquota**. Mixer now supports a Redis-based adapter for rate limit tracking.
[Learn more]({{home}}/docs/reference/config/adapters/redisquota.html)
- **Datadog**. Mixer now provides an adapter to deliver metric data to a Datadog agent.
[Learn more]({{home}}/docs/reference/config/adapters/datadog.html)
## Other
- **Separate Check & Report Clusters**. When configuring Envoy, it's now possible to use different clusters
for Mixer instances that are used for Mixer's Check functionality from those used for Mixer's Report
functionality. This may be useful in large deployments for better scaling of Mixer instances.
- **Monitoring Dashboards**. There are now preliminary Mixer & Pilot monitoring dashboards in Grafana.
- **Servicegraph Visualization**. Servicegraph has a new visualization. [Learn more]({{home}}/docs/tasks/telemetry/servicegraph.html).
- **Liveness and Readiness Probes**. Istio components now provide canonical liveness and readiness
probe support to help ensure mesh infrastructure health. [Learn more]({{home}}/docs/tasks/security/health-check.html)
- **Egress Policy and Telemetry**. Istio can monitor traffic to external services defined by `EgressRule` or External Service,
and can also apply Mixer policies to this traffic.
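A sketch of the kind of `EgressRule` this egress policy and telemetry applies to (the service name and port are assumptions for illustration):

```yaml
# Allows, and now also meters, HTTPS traffic from the mesh to an external API.
apiVersion: config.istio.io/v1alpha2
kind: EgressRule
metadata:
  name: googleapis
  namespace: default
spec:
  destination:
    service: "*.googleapis.com"
  ports:
  - port: 443
    protocol: https
```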

_about/notes/0.7.md (new file)
---
title: Istio 0.7
weight: 94
---
{% include home.html %}
For this release, we focused on improving our build and test infrastructures and increasing the
quality of our tests. As a result, there are no new features for this month.
However, this release does include a large number of bug fixes and performance improvements.
Please note that this release includes preliminary support for the new v1alpha3 traffic management
functionality. This functionality is still in a great deal of flux and there may be some breaking
changes in 0.8. So if you feel like exploring, please go right ahead, but expect that this may
change in 0.8 and beyond.
Known Issues:
Our [helm chart](https://istio.io/docs/setup/kubernetes/helm.html)
currently requires a workaround to apply the chart correctly; see [issue 4701](https://github.com/istio/istio/issues/4701) for details.

_about/notes/0.8.md (new file)
---
title: Istio 0.8
weight: 93
---
{% include home.html %}
In addition to the usual pile of bug fixes and performance improvements, this release includes the new or
updated features detailed below.
## Networking
- **Revamped Traffic Management Model**. We're finally ready to take the wraps off our
new [traffic management configuration model]({{home}}/blog/2018/v1alpha3-routing.html).
The new model adds many new features and addresses usability issues
with the prior model. There is a conversion tool built into `istioctl` to help migrate your config from
the old model. [Learn more about the new traffic management model]({{home}}/docs/tasks/traffic-management-v1alpha3/).
- **Envoy V2**. Users can choose to inject Envoy V2 as the sidecar. In this mode, Pilot uses Envoy's new API to push configuration to the data plane. This new approach
increases effective scalability and should eliminate spurious 404 errors. [TBD: docs on how to control this?]
- **Gateway for Ingress/Egress**. We no longer support combining Kubernetes Ingress specs with Istio route rules, as it has led to several
bugs and reliability issues. Istio now supports a platform-independent ingress/egress Gateway that works across Kubernetes and Cloud Foundry
and works seamlessly with the routing rules. [TBD: doc link]
- **Constrained Inbound Ports**. We now restrict the inbound ports in a pod to the ones declared by the apps running inside that pod.
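A minimal sketch of the new `Gateway` resource described above (the selector label and wildcard host are assumptions; see the traffic management docs for the full model):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: my-ingress-gateway
spec:
  # Bind to the standalone ingress gateway deployment (label assumed).
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
```

A `VirtualService` bound to this gateway then carries the actual routing rules.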
## Security
- **Introducing Citadel**. We've finally given a name to our security component. What was
formerly known as Istio-Auth or Istio-CA is now called Citadel.
- **Multicluster Support**. We support per-cluster Citadel in multicluster deployments such that all Citadels share the same root cert
and workloads can authenticate each other across the mesh.
- **Authentication Policy**. We've introduced authentication policy that can be used to configure service-to-service
authentication (mutual TLS) and end user authentication. This is the recommended way for enabling mutual TLS
(over the existing config flag and service annotations). [Learn more]({{home}}/docs/tasks/security/authn-policy.html).
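A sketch of what such an authentication policy might look like (the policy and target service names are assumptions for illustration):

```yaml
# Requires mutual TLS for calls to the "details" service (name assumed).
apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
  name: details-mtls
spec:
  targets:
  - name: details
  peers:
  - mtls: {}
```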
## Telemetry
- **Self-Reporting**. Mixer and Pilot now produce telemetry that flows through the normal
Istio telemetry pipeline, just like services in the mesh.
## Setup
- **A la Carte Istio**. Istio has a rich set of features; however, you don't need to install or consume them all together. By using
Helm or `istioctl gen-deploy`, users can install only the features they want. For example, users can install Pilot only and enjoy traffic
management functionality without dealing with Mixer or Citadel.
Learn more about [customization through Helm](https://istio.io/docs/setup/kubernetes/helm-install.html#customization-with-helm)
and about [`istioctl gen-deploy`](https://istio.io/docs/reference/commands/istioctl.html#istioctl%20gen-deploy).
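For example, a reduced install might look something like this (the chart path and value names are assumptions; check the Helm customization guide for the exact flags):

```command
$ helm template install/kubernetes/helm/istio --name istio --namespace istio-system \
    --set mixer.enabled=false > istio-minimal.yaml
$ kubectl apply -f istio-minimal.yaml
```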
## Mixer adapters
- **CloudWatch**. Mixer can now report metrics to AWS CloudWatch.
[Learn more]({{home}}/docs/reference/config/adapters/cloudwatch.html)
## Known issues with 0.8
- A gateway with virtual services pointing to a headless service won't work ([Issue #5005](https://github.com/istio/istio/issues/5005)).

---
title: Release Notes
description: Description of features and improvements for every Istio release.
weight: 5
redirect_from:
- "/docs/reference/release-notes.html"
- "/release-notes"
- "/docs/welcome/notes/index.html"
- "/docs/references/notes"
toc: false
---
{% include section-index.html docs=site.about %}
The latest Istio monthly release is {{site.data.istio.version}} ([release notes]({{site.data.istio.version}}.html)). You can
[download {{site.data.istio.version}}](https://github.com/istio/istio/releases) with:
```command
$ curl -L https://git.io/getLatestIstio | sh -
```
The most recent stable release is 0.2.12. You can [download 0.2.12](https://github.com/istio/istio/releases/tag/0.2.12) with:
```command
$ curl -L https://git.io/getIstio | sh -
```
We typically wait to 'bake' the latest release for several weeks and ensure it is more stable than the previous one before promoting it to stable.
[Archived documentation for the 0.2.12 release](https://archive.istio.io/v0.2/docs/).
> As we don't control the `git.io` domain, please examine the output of the `curl` command before piping it to a shell if running in any
> sensitive or non-sandboxed environment.

---
title: Introducing Istio
description: Istio 0.1 announcement
publishdate: 2017-05-24
subtitle: A robust service mesh for microservices
attribution: The Istio Team
weight: 100
redirect_from:
- "/blog/istio-service-mesh-for-microservices.html"
- "/blog/0.1-announcement.html"
Writing reliable, loosely coupled, production-grade applications based on microservices can be challenging.
Inconsistent attempts at solving these challenges, cobbled together from libraries, scripts and Stack Overflow snippets leads to solutions that vary wildly across languages and runtimes, have poor observability characteristics and can often end up compromising security.
One solution is to standardize implementations on a common RPC library like [gRPC](https://grpc.io), but this can be costly for organizations to adopt wholesale
and leaves out brownfield applications which may be practically impossible to change. Operators need a flexible toolkit to make their microservices secure, compliant, trackable and highly available, and developers need the ability to experiment with different features in production, or deploy canary releases, without impacting the system as a whole.
## Solution: Service Mesh
**Fleet-wide Visibility**: Failures happen, and operators need tools to stay on top of the health of clusters and their graphs of microservices. Istio produces detailed monitoring data about application and network behaviors that is rendered using [Prometheus](https://prometheus.io/) & [Grafana](https://github.com/grafana/grafana), and can be easily extended to send metrics and logs to any collection, aggregation and querying system. Istio enables analysis of performance hotspots and diagnosis of distributed failure modes with [Zipkin](https://github.com/openzipkin/zipkin) tracing.
{% include image.html width="100%" ratio="55.42%"
link="./img/istio_grafana_dashboard-new.png"
caption="Grafana Dashboard with Response Size"
%}
{% include image.html width="100%" ratio="29.91%"
link="./img/istio_zipkin_dashboard.png"
caption="Zipkin Dashboard"
%}
**Resiliency and efficiency**: When developing microservices, operators need to assume that the network will be unreliable. Operators can use retries, load balancing, flow-control (HTTP/2), and circuit-breaking to compensate for some of the common failure modes due to an unreliable network. Istio provides a uniform approach to configuring these features, making it easier to operate a highly resilient service mesh.
**Developer productivity**: Istio provides a significant boost to developer productivity by letting them focus on building service features in their language of choice, while Istio handles resiliency and networking challenges in a uniform way. Developers are freed from having to bake solutions to distributed systems problems into their code. Istio further improves productivity by providing common functionality supporting A/B testing, canarying, and fault injection.
@ -58,14 +53,14 @@ Google, IBM and Lyft joined forces to create Istio from a desire to provide a re
**Secure by default**: It is a common fallacy of distributed computing that the network is secure. Istio enables operators to authenticate and secure all communication between services using a mutual TLS connection, without burdening the developer or the operator with cumbersome certificate management tasks. Our security framework is aligned with the emerging [SPIFFE](https://spiffe.github.io/) specification, and is based on similar systems that have been tested extensively inside Google.
**Incremental Adoption**: We designed Istio to be completely transparent to the services running in the mesh, allowing teams to incrementally adopt features of Istio over time. Adopters can start with enabling fleet-wide visibility and once they're comfortable with Istio in their environment they can switch on other features as needed.
## Join us in this journey
Istio is a completely open development project. Today we are releasing version 0.1, which works in a Kubernetes cluster, and we plan to have major new
releases every 3 months, including support for additional environments. Our goal is to enable developers and operators to rollout and operate microservices
with agility, complete visibility of the underlying network, and uniform control and security in all environments. We look forward to working with the Istio
community and our partners towards these goals, following our [roadmap]({{home}}/docs/reference/release-roadmap.html).
Visit [here](https://github.com/istio/istio/releases) to get the latest released bits.
View the [presentation]({{home}}/talks/istio_talk_gluecon_2017.pdf) from GlueCon 2017.
## Community
We are excited to see early commitment to support the project from many companies in the community:
[Red Hat](https://blog.openshift.com/red-hat-istio-launch/) with Red Hat OpenShift and OpenShift Application Runtimes,
Pivotal with [Pivotal Cloud Foundry](https://content.pivotal.io/blog/pivotal-and-istio-advancing-the-ecosystem-for-microservices-in-the-enterprise),
Weaveworks with [Weave Cloud](https://www.weave.works/blog/istio-weave-cloud/) and Weave Net 2.0,
[Tigera](https://www.projectcalico.org/welcoming-istio-to-the-kubernetes-networking-community) with the Project Calico Network Policy Engine
and [Datawire](https://www.datawire.io/istio-and-datawire-ecosystem/) with the Ambassador project. We hope to see many more companies join us in
this journey.
@ -86,12 +81,12 @@ To get involved, connect with us via any of these channels:
* [istio.io]({{home}}) for documentation and examples.
* The [istio-users@googlegroups.com](https://groups.google.com/forum/#!forum/istio-users) mailing list for general discussions,
or [istio-announce@googlegroups.com](https://groups.google.com/forum/#!forum/istio-announce) for key announcements regarding the project.
* [Stack Overflow](https://stackoverflow.com/questions/tagged/istio) for curated questions and answers
* [GitHub](https://github.com/istio/issues/issues) for filing issues
* [@IstioMesh](https://twitter.com/IstioMesh) on Twitter
From everyone working on Istio, welcome aboard!

---
title: Using Istio to Improve End-to-End Security
description: Istio Auth 0.1 announcement
publishdate: 2017-05-25
subtitle: Secure by default service to service communications
attribution: The Istio Team
weight: 99
redirect_from:
- "/blog/0.1-auth.html"
- "/blog/istio-auth-for-microservices.html"
{% include home.html %}
Conventional network security approaches fail to address security threats to distributed applications deployed in dynamic production environments. Today, we describe how Istio Auth enables enterprises to transform their security posture from just protecting the edge to consistently securing all inter-service communications deep within their applications. With Istio Auth, developers and operators can protect services with sensitive data against unauthorized insider access and they can achieve this without any changes to the application code!
Istio Auth is the security component of the broader [Istio platform]({{home}}/). It incorporates the learnings of securing millions of microservice
endpoints in Google's production environment.
## Background
Modern application architectures are increasingly based on shared services that are deployed and scaled dynamically on cloud platforms. Traditional network edge security (e.g. firewall) is too coarse-grained and allows access from unintended clients. An example of a security risk is stolen authentication tokens that can be replayed from another client. This is a major risk for companies with sensitive data that are concerned about insider threats. Other network security approaches like IP whitelists have to be statically defined, are hard to manage at scale, and are unsuitable for dynamic production environments.
Thus, security administrators need a tool that enables them to consistently, and by default, secure all communication between services across diverse production environments.
## Solution: strong service identity and authentication
Google has, over the years, developed architecture and technology to uniformly secure millions of microservice endpoints in its production environment
against external attacks and insider threats. Key security principles include trusting the endpoints and not the network, strong mutual authentication
based on service identity, and service-level authorization. Istio Auth is based on the same principles.
The version 0.1 release of Istio Auth runs on Kubernetes and provides the following features:
* Strong identity assertion between services
Istio Auth is based on industry standards like mutual TLS and X.509.
The diagram below provides an overview of the Istio Auth service authentication architecture on Kubernetes.
{% include image.html width="100%" ratio="56.25%"
link="./img/istio_auth_overview.svg"
caption="Istio Auth Overview"
%}
The above diagram illustrates three key security features:
### Communication security
Service-to-service communication is tunneled through high performance client side and server side [Envoy](https://envoyproxy.github.io/envoy/) proxies. The communication between the proxies is secured using mutual TLS. The benefit of using mutual TLS is that the service identity is not expressed as a bearer token that can be stolen or replayed from another source. Istio Auth also introduces the concept of Secure Naming to protect from server spoofing attacks: the client side proxy verifies that the authenticated server's service account is allowed to run the named service.
### Key management and distribution
Istio Auth provides a per-cluster CA (Certificate Authority) and automated key & certificate management. Istio Auth:
* Distributes keys and certificates to the appropriate pods using [Kubernetes Secrets](https://kubernetes.io/docs/concepts/configuration/secret/).
* Rotates keys and certificates periodically.
* Revokes a specific key and certificate pair when necessary (future).
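A sketch of how the distributed material can be inspected on Kubernetes (the `istio.<service-account>` secret naming scheme is an assumption based on the per-service-account issuance described above):

```command
$ kubectl get secret istio.default -o yaml
```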
The following diagram explains the end to end Istio Auth authentication workflow on Kubernetes:
{% include image.html width="100%" ratio="56.25%"
link="./img/istio_auth_workflow.svg"
caption="Istio Auth Workflow"
%}
Istio Auth is part of the broader security story for containers. Red Hat, a partner on the development of Kubernetes, has identified [10 Layers](https://www.redhat.com/en/resources/container-security-openshift-cloud-devops-whitepaper) of container security. Istio and Istio Auth addresses two of these layers: "Network Isolation" and "API and Service Endpoint Management". As cluster federation evolves on Kubernetes and other platforms, our intent is for Istio to secure communications across services spanning multiple federated clusters.
## Benefits of Istio Auth
**Defense in depth**: When used in conjunction with Kubernetes (or infrastructure) network policies, users achieve higher levels of confidence, knowing that pod-to-pod or service-to-service communication is secured both at network and application layers.
**Secure by default**: When used with Istio's proxy and centralized policy engine, Istio Auth can be configured during deployment with minimal or no application change. Administrators and operators can thus ensure that service communications are secured by default and that they can enforce these policies consistently across diverse protocols and runtimes.
**Strong service authentication**: Istio Auth secures service communication using mutual TLS to ensure that the service identity is not expressed as a bearer token that can be stolen or replayed from another source. This ensures that services with sensitive data can only be accessed from strongly authenticated and authorized clients.
## Join us in this journey
Istio Auth is the first step towards providing a full stack of capabilities to protect services with sensitive data from external attacks and insider
threats. While the initial version runs on Kubernetes, our goal is to enable Istio Auth to secure services across diverse production environments. We encourage the
community to [join us](https://github.com/istio/istio/blob/master/security) in making robust service security easy and ubiquitous across different application
stacks and runtime platforms.

---
title: Canary Deployments using Istio
description: Using Istio to create autoscaled canary deployments
publishdate: 2017-06-14
subtitle:
attribution: Frank Budinsky
weight: 98
redirect_from: "/blog/canary-deployments-using-istio.html"
---
{% include home.html %}
One of the benefits of the [Istio]({{home}}) project is that it provides the control needed to deploy canary services. The idea behind canary deployment (or rollout) is to introduce a new version of a service by first testing it using a small percentage of user traffic, and then if all goes well, increase, possibly gradually in increments, the percentage while simultaneously phasing out the old version. If anything goes wrong along the way, we abort and roll back to the previous version. In its simplest form, the traffic sent to the canary version is a randomly selected percentage of requests, but in more sophisticated schemes it can be based on the region, user, or other properties of the request.
Depending on your level of expertise in this area, you may wonder why Istio's support for canary deployment is even needed, given that platforms like Kubernetes already provide a way to do [version rollout](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#updating-a-deployment) and [canary deployment](https://kubernetes.io/docs/concepts/cluster-administration/manage-deployment/#canary-deployments). Problem solved, right? Well, not exactly. Although doing a rollout this way works in simple cases, it's very limited, especially in large scale cloud environments receiving lots of (and especially varying amounts of) traffic, where autoscaling is needed.
With Istio, traffic routing and replica deployment are two completely independent functions. The number of pods implementing services are free to scale up and down based on traffic load, completely orthogonal to the control of version traffic routing. This makes managing a canary version in the presence of autoscaling a much simpler problem. Autoscalers may, in fact, respond to load variations resulting from traffic routing changes, but they are nevertheless functioning independently and no differently than when loads change for other reasons.
Istio's [routing rules]({{home}}/docs/concepts/traffic-management/rules-configuration.html) also provide other important advantages; you can easily control
fine-grained traffic percentages (e.g., route 1% of traffic without requiring 100 pods) and you can control traffic using other criteria (e.g., route traffic for specific users to the canary version). To illustrate, let's look at deploying the **helloworld** service and see how simple the problem becomes.
We begin by defining the **helloworld** Service, just like any other Kubernetes service, something like this:
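A minimal definition might look like the following sketch (the port number and label names here are illustrative; adjust them to your application):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: helloworld
  labels:
    app: helloworld
spec:
  # The selector matches only the app label, so the service fronts
  # the pods of every version Deployment (v1, v2, ...).
  selector:
    app: helloworld
  ports:
  - name: http
    port: 80
```

Note that the selector deliberately omits any version label, which is what lets Istio routing rules, rather than the service itself, decide how traffic is split across versions.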
After deploying both versions, we control the traffic distribution with a routing rule. For example, if we want to send 10% of the traffic to the canary, we could use the
[istioctl]({{home}}/docs/reference/commands/istioctl.html) command to set a routing rule something like this:
```bash
cat <<EOF | istioctl create -f -
apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: helloworld-default
spec:
  destination:
    name: helloworld
  route:
  - labels:
      version: v1
    weight: 90
  - labels:
      version: v2
    weight: 10
EOF
```

After setting this rule, Istio will ensure that only one tenth of the requests will be sent to the canary version, regardless of how many replicas of each version are running.
Because we dont need to maintain replica ratios anymore, we can safely add Kubernetes [horizontal pod autoscalers](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/) to manage the replicas for both version Deployments:
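The `kubectl autoscale` commands that follow can equivalently be expressed as declarative HorizontalPodAutoscaler resources; a sketch for the **v1** Deployment (the **v2** one is analogous):

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: helloworld-v1
spec:
  # Scale the v1 Deployment between 1 and 10 replicas,
  # targeting 50% average CPU utilization.
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: helloworld-v1
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 50
```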
```command
$ kubectl autoscale deployment helloworld-v1 --cpu-percent=50 --min=1 --max=10
deployment "helloworld-v1" autoscaled
```
```command
$ kubectl autoscale deployment helloworld-v2 --cpu-percent=50 --min=1 --max=10
deployment "helloworld-v2" autoscaled
```
```command
$ kubectl get hpa
NAME REFERENCE TARGET CURRENT MINPODS MAXPODS AGE
helloworld-v1 Deployment/helloworld-v1 50% 47% 1 10 17s
helloworld-v2 Deployment/helloworld-v2 50% 40% 1 10 15s
```
If we now generate some load on the **helloworld** service, we would notice that when scaling begins, the **v1** autoscaler will scale up its replicas significantly higher than the **v2** autoscaler will for its replicas because **v1** pods are handling 90% of the load.
```command
$ kubectl get pods | grep helloworld
helloworld-v1-3523621687-3q5wh 0/2 Pending 0 15m
helloworld-v1-3523621687-73642 2/2 Running 0 11m
helloworld-v2-4095161145-963wt 2/2 Running 0 50m
```
If we then change the routing rule to send 50% of the traffic to **v2**, we should, after a short delay, notice that the **v1** autoscaler will scale down the replicas of **v1** while the **v2** autoscaler will perform a corresponding scale up.
```command
$ kubectl get pods | grep helloworld
helloworld-v1-3523621687-73642 2/2 Running 0 35m
helloworld-v1-3523621687-7hs31 2/2 Running 0 43m
helloworld-v2-4095161145-v3v9n 0/2 Pending 0 13m
```
The end result is very similar to the simple Kubernetes Deployment rollout, only now the whole process is not being orchestrated and managed in one place. Instead, we're seeing several components doing their jobs independently, albeit in a cause and effect manner.
What's different, however, is that if we now stop generating load, the replicas of both versions will eventually scale down to their minimum (1), regardless of what routing rule we set.
```command
$ kubectl get pods | grep helloworld
helloworld-v1-3523621687-dt7n7 2/2 Running 0 1h
helloworld-v2-4095161145-963wt 2/2 Running 0 1h
```
As mentioned above, the Istio routing rules can be used to route traffic based on specific criteria, allowing more sophisticated canary deployment scenarios. Say, for example, instead of exposing the canary to an arbitrary percentage of users, we want to try it out on internal users, maybe even just a percentage of them. The following command could be used to send 50% of traffic from users at *some-company-name.com* to the canary version, leaving all other users unaffected:
```bash
cat <<EOF | istioctl create -f -
apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: helloworld-test
spec:
  destination:
    name: helloworld
  match:
    request:
      headers:
        cookie:
          regex: "^(.*?;)?(email=[^;]*@some-company-name.com)(;.*)?$"
  precedence: 2
  route:
  - labels:
      version: v1
    weight: 50
  - labels:
      version: v2
    weight: 50
EOF
```

As before, the autoscalers bound to the 2 version Deployments will automatically scale the replicas accordingly, but that has no effect on the traffic distribution.
## Summary
In this article we've shown how Istio supports general scalable canary deployments, and how this differs from the basic deployment support in Kubernetes. Istio's service mesh provides the control necessary to manage traffic distribution with complete independence from deployment scaling. This allows for a simpler, yet significantly more functional, way to do canary test and rollout.
Intelligent routing in support of canary deployment is just one of the many features of Istio that will make the production deployment of large-scale microservices-based applications much simpler. Check out [istio.io]({{home}}) for more information and to try it out.
The sample code used in this article can be found [here](https://github.com/istio/istio/tree/master/samples/helloworld).


---
title: Using Network Policy with Istio
description: How Kubernetes Network Policy relates to Istio policy
publishdate: 2017-08-10
subtitle:
attribution: Spike Curtis
weight: 97
layout: blog
redirect_from: "/blog/using-network-policy-in-concert-with-istio.html"
---
{% include home.html %}
Let's start with the basics: why might you want to use both Istio and Kubernetes Network Policy? The short answer is that they are good at different things. Consider the key differences:
| | Istio Policy | Network Policy |
| --------------------- | ----------------- | ------------------ |
| **Layer** | "Service" --- L7 | "Network" --- L3-4 |
| **Implementation** | User space | Kernel |
| **Enforcement Point** | Pod | Node |
## Layer
In contrast, operating at the network layer has the advantage of being universal, since all network applications use IP.
## Implementation
The Istios proxy is based on [Envoy](https://envoyproxy.github.io/envoy/), which is implemented as a userspace daemon in the dataplane that interacts with the network layer using standard sockets. This gives it a large amount of flexibility in processing, and allows it to be distributed (and upgraded!) in a container.
The Istios proxy is based on [Envoy](https://envoyproxy.github.io/envoy/), which is implemented as a user space daemon in the data plane that
interacts with the network layer using standard sockets. This gives it a large amount of flexibility in processing, and allows it to be
distributed (and upgraded!) in a container.
The Network Policy data plane is typically implemented in kernel space (e.g. using iptables, eBPF filters, or even custom kernel modules). Being in kernel space allows it to be extremely fast, but not as flexible as the Envoy proxy.
## Enforcement Point
Policy enforcement using the Envoy proxy is implemented inside the pod, as a sidecar container in the same network namespace. This allows a simple deployment model. Some containers are given permission to reconfigure the networking inside their pod (CAP_NET_ADMIN). If such a service instance is compromised, or misbehaves (as in a malicious tenant) the proxy can be bypassed.
While this won't let an attacker access other Istio-enabled pods, so long as they are correctly configured, it opens several attack vectors:
Here is the service graph for the Bookinfo application.
{% assign url = home | append: "/docs/guides/img/bookinfo/withistio.svg" %}
{% include image.html width="80%" ratio="59.08%"
link=url
caption="Bookinfo Service Graph"
%}
This graph shows every connection that a correctly functioning application should be allowed to make. All other connections, say from the Istio Ingress directly to the Rating service, are not part of the application. Let's lock out those extraneous connections so they cannot be used by an attacker. Imagine, for example, that the Ingress pod is compromised by an exploit that allows an attacker to run arbitrary code. If we only allow connections to the Product Page pods using Network Policy, the attacker has gained no more access to the application backends _even though they have compromised a member of the service mesh_.
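A Network Policy along these lines could express that restriction. This is a sketch only: the label names (`app: productpage`, `istio: ingress`) and the port are assumptions that must match the labels and ports of your actual deployment:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: productpage-ingress-only
  namespace: default
spec:
  # Applies to the Product Page pods.
  podSelector:
    matchLabels:
      app: productpage
  ingress:
  - from:
    # Only the Istio ingress pods may connect...
    - podSelector:
        matchLabels:
          istio: ingress
    # ...and only on the application port.
    ports:
    - protocol: TCP
      port: 9080
```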


---
title: Announcing Istio 0.2
description: Istio 0.2 announcement
publishdate: 2017-10-10
subtitle: Improved mesh and support for multiple environments
attribution: The Istio Team
weight: 96
layout: blog
toc: false
redirect_from: "/blog/istio-0.2-announcement.html"
---
{% include home.html %}
We launched Istio, an open platform to connect, manage, monitor, and secure microservices, on May 24, 2017. We have been humbled by the incredible interest and rapid community growth of developers, operators, and partners. Our 0.1 release was focused on showing all the concepts of Istio in Kubernetes.
Today we are happy to announce the 0.2 release which improves stability and performance, allows for cluster wide deployment and automated injection of sidecars in Kubernetes, adds policy and authentication for TCP services, and enables expansion of the mesh to include services deployed in virtual machines. In addition, Istio can now run outside Kubernetes, leveraging Consul/Nomad or Eureka. Beyond core features, Istio is now ready for extensions to be written by third party companies and developers.
## Highlights for the 0.2 release
### Usability improvements
* _Multiple namespace support_: Istio now works cluster-wide, across multiple namespaces. This was one of the top requests from the community following the 0.1 release.
* _Policy and security for TCP services_: In addition to HTTP, we have added transparent mutual TLS authentication and policy enforcement for TCP services as well. This will allow you to secure more of your Kubernetes deployment, and get Istio features like telemetry, policy and security.
* _Automated sidecar injection_: By leveraging the alpha [initializer](https://kubernetes.io/docs/admin/extensible-admission-controllers/#what-are-initializers) feature provided by Kubernetes 1.7, envoy sidecars can now be automatically injected into application deployments when your cluster has the initializer enabled. This enables you to deploy microservices using `kubectl`, the exact same command that you normally use for deploying the microservices without Istio.
* _Extending Istio_: An improved Mixer design that lets vendors write Mixer adapters to implement support for their own systems, such as application management or policy enforcement. The [Mixer Adapter Developer's Guide](https://github.com/istio/istio/wiki/Mixer-Adapter-Dev-Guide) can help you easily integrate your solution with Istio.
* _Bring your own CA certificates_: Allows users to provide their own key and certificate for the Istio CA, along with persistent CA key/certificate storage. This enables storing signing keys/certificates in persistent storage to facilitate CA restarts.
* _Improved routing & metrics_: Support for WebSocket, MongoDB and Redis protocols. You can apply resilience features like circuit breakers on traffic to third party services. In addition to Mixer's metrics, hundreds of metrics from Envoy are now visible inside Prometheus for all traffic entering, leaving and within the Istio mesh.
### Cross environment support
* _Mesh expansion_: Istio mesh can now span services running outside of Kubernetes, like those running in virtual machines, while enjoying benefits such as automatic mutual TLS authentication, traffic management, telemetry, and policy enforcement across the mesh.
* _Running outside Kubernetes_: We know many customers use other service registry and orchestration solutions like [Consul/Nomad]({{home}}/docs/setup/consul/quick-start.html) and [Eureka]({{home}}/docs/setup/eureka/quick-start.html). Istio Pilot can now run standalone outside Kubernetes, consuming information from these systems, and manage the Envoy fleet in VMs or containers.
## Get involved in shaping the future of Istio
We have a growing [roadmap]({{home}}/docs/reference/release-roadmap.html) ahead of us, full of great features to implement. Our focus next release is going to be on stability, reliability, integration with third party tools and multicluster use cases.
To learn how to get involved and contribute to Istio's future, check out our [community](https://github.com/istio/community) GitHub repository which
will introduce you to our working groups, our mailing lists, our various community meetings, our general procedures and our guidelines.
We want to thank our fantastic community for field testing new versions, filing bug reports, contributing code, helping out other community members, and shaping Istio by participating in countless productive discussions. This has enabled the project to accrue 3000 stars on GitHub since launch and hundreds of active community members on Istio mailing lists.
Thank you


---
title: Mixer Adapter Model
description: Provides an overview of the Mixer plug-in architecture
publishdate: 2017-11-03
subtitle: Extending Istio to integrate with a world of infrastructure backends
attribution: Martin Taillefer
weight: 95
layout: blog
redirect_from: "/blog/mixer-adapter-model.html"
---
{% include home.html %}
Mixer serves as an abstraction layer between Istio and an open-ended set of infrastructure backends.
In addition to insulating application-level code from the details of infrastructure backends, Mixer provides an intermediation model that allows operators to inject and control policies between application code and backends. Operators can control which data is reported to which backend, which backend to consult for authorization, and much more.
Given that individual infrastructure backends each have different interfaces and operational models, Mixer needs custom
code to deal with each and we call these custom bundles of code [*adapters*](https://github.com/istio/istio/wiki/Mixer-Adapter-Dev-Guide).
Adapters are Go packages that are directly linked into the Mixer binary. It's fairly simple to create custom Mixer binaries linked with specialized sets of adapters, in case the default set of adapters is not sufficient for specific use cases.
## Philosophy
Mixer is essentially an attribute processing and routing machine. The proxy sends it [attributes]({{home}}/docs/concepts/policy-and-control/attributes.html) as part of doing precondition checks and telemetry reports, which it turns into a series of calls into adapters. The operator supplies configuration which describes how to map incoming attributes to inputs for the adapters.
{% assign url = home | append: "/docs/concepts/policy-and-control/img/mixer-config/machine.svg" %}
{% include image.html width="60%" ratio="42.60%"
link=url
caption="Attribute Machine"
%}
Configuration is a complex task. In fact, evidence shows that the overwhelming majority of service outages are caused by configuration errors. To help combat this, Mixer's configuration model enforces a number of constraints designed to avoid errors. For example, the configuration model uses strong typing to ensure that only meaningful attributes or attribute expressions are used in any given context.
## Handlers: adapter configuration
Each adapter that Mixer uses requires some configuration to operate. Typically, adapters need things like the URL to their backend, credentials, caching options, and so forth. Each adapter defines the exact configuration data it needs via a [protobuf](https://developers.google.com/protocol-buffers/) message.
You configure each adapter by creating [*handlers*]({{home}}/docs/concepts/policy-and-control/mixer-config.html#handlers) for them. A handler is a
configuration resource which represents a fully configured adapter ready for use. There can be any number of handlers for a single adapter, making it possible to reuse an adapter in different scenarios.
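For illustration, a handler for a hypothetical `listchecker` adapter might look like the following sketch (the adapter kind, resource name, and `overrides` values are assumptions for this example, not taken from this post):

```yaml
apiVersion: config.istio.io/v1alpha2
kind: listchecker
metadata:
  name: staticversion
  namespace: istio-system
spec:
  # Adapter-specific configuration: check incoming values
  # against a static whitelist of versions.
  overrides: ["v1", "v2"]
  blacklist: false
```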
## Templates: adapter input schema
Mixer is typically invoked twice for every incoming request to a mesh service, once for precondition checks and once for telemetry reporting. For every such call, Mixer invokes one or more adapters. Different adapters need different pieces of data as input in order to do their work. A logging adapter needs a log entry, a metric adapter needs a metric, an authorization adapter needs credentials, etc.
Mixer [*templates*]({{home}}/docs/reference/config/template/) are used to describe the exact data that an adapter consumes at request time.
Each template is specified as a [protobuf](https://developers.google.com/protocol-buffers/) message. A single template describes a bundle of data that is delivered to one or more adapters at runtime. Any given adapter can be designed to support any number of templates; the specific templates the adapter supports are determined by the adapter developer.
[metric]({{home}}/docs/reference/config/template/metric.html) and [logentry]({{home}}/docs/reference/config/template/logentry.html) are two of the most essential templates used within Istio. They represent, respectively, the payload to report a single metric and a single log entry to appropriate backends.
## Instances: attribute mapping
Instances are how operators map the attributes delivered by the proxy into individual bundles of data that can be routed to different adapters.
Creating instances generally requires using [attribute expressions]({{home}}/docs/concepts/policy-and-control/mixer-config.html#attribute-expressions). The point of these expressions is to use any attribute or literal value in order to produce a result that can be assigned to an instance's field.
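As a sketch, an instance of the `metric` template might map attributes like this (the instance name and the exact attribute expressions are illustrative):

```yaml
apiVersion: config.istio.io/v1alpha2
kind: metric
metadata:
  name: requestcount
  namespace: istio-system
spec:
  # Attribute expressions produce the value for each field of the
  # template; "| x" supplies a default when an attribute is absent.
  value: "1"
  dimensions:
    source: source.service | "unknown"
    destination: destination.service | "unknown"
    response_code: response.code | 200
```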
Every instance field has a type, as defined in the template, every attribute has a
[type](https://github.com/istio/api/blob/master/policy/v1beta1/value_type.proto), and every attribute expression has a type.
You can only assign type-compatible expressions to any given instance field. For example, you can't assign an integer expression
to a string field. This kind of strong typing is designed to minimize the risk of creating bogus configurations.
## Rules: delivering data to adapters
The last piece of the puzzle is telling Mixer which instances to send to which handler and when. This is done by
creating [*rules*]({{home}}/docs/concepts/policy-and-control/mixer-config.html#rules). Each rule identifies a specific handler and the set of
instances to send to that handler. Whenever Mixer processes an incoming call, it invokes the indicated handler and gives it the specific set of instances for processing.
Rules contain matching predicates. A predicate is an attribute expression which returns a true/false value. A rule only takes effect if its predicate expression returns true. Otherwise, it's like the rule didn't exist and the indicated handler isn't invoked.
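A rule tying these pieces together could look like the following sketch (the service name in the predicate, the `handler.prometheus` handler, and the `requestcount.metric` instance are all assumed names for this example):

```yaml
apiVersion: config.istio.io/v1alpha2
kind: rule
metadata:
  name: promhttp
  namespace: istio-system
spec:
  # Predicate: the rule only fires for calls to this service.
  match: destination.service == "myservice.ns.svc.cluster.local"
  actions:
  # Deliver the named instances to the named handler.
  - handler: handler.prometheus
    instances:
    - requestcount.metric
```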
## Future


---
title: 2017 Posts
description: Blog posts for 2017
weight: 20
layout: blog
toc: false
---


---
title: Mixer and the SPOF Myth
description: Improving availability and reducing latency
publishdate: 2017-12-07
subtitle: Improving availability and reducing latency
attribution: Martin Taillefer
weight: 94
layout: blog
redirect_from: /blog/posts/2017/mixer-spof-myth.html
---
{% include home.html %}
As [Mixer]({{home}}/docs/concepts/policy-and-control/mixer.html) is in the request path, it is natural to question how it impacts
overall system availability and latency. A common refrain we hear when people first glance at Istio architecture diagrams is
"Isn't this just introducing a single point of failure?"
In this post, well dig deeper and cover the design principles that underpin Mixer and the surprising fact Mixer actually
In this post, well dig deeper and cover the design principles that underpin Mixer and the surprising fact Mixer actually
increases overall mesh availability and reduces average request latency.
Istio's use of Mixer has two main benefits in terms of overall system availability and latency:
In 2014, we started an initiative to create a replacement architecture that would scale better.
The older system was built around a centralized fleet of fairly heavy proxies into which all incoming traffic would flow, before being forwarded to the services where the real work was done. The newer architecture jettisons the shared proxy design and instead consists of a very lean and efficient distributed sidecar proxy sitting next to service instances, along with a shared fleet of sharded control plane intermediaries:
{% include image.html width="75%" ratio="74.79%"
link="./img/mixer-spof-myth-1.svg"
title="Google System Topology"
caption="Google's API & Service Management System"
%}
Look familiar? Of course: it's just like Istio! Istio was conceived as a second generation of this distributed proxy architecture. We took the core lessons from this internal system, generalized many of the concepts by working with our partners, and created Istio.
## Architecture recap
As shown in the diagram below, Mixer sits between the mesh and the infrastructure backends that support it:
{% include image.html width="75%" ratio="65.89%"
link="./img/mixer-spof-myth-2.svg"
caption="Istio Topology"
%}
We have opportunities ahead to continue improving the system in many ways.
### Config canaries
Mixer is highly scaled so it is generally resistant to individual instance failures. However, Mixer is still susceptible to cascading failures in the case when a poison configuration is deployed which causes all Mixer instances to crash basically at the same time (yeah, that would be a bad day). To prevent this from happening, config changes can be canaried to a small set of Mixer instances, and then more broadly rolled out.
Mixer doesnt yet do canarying of config changes, but we expect this to come online as part of Istios ongoing work on reliable config distribution.
At the moment, each Mixer instance operates independently of all other instances.
In very large meshes, the load on Mixer can be great. There can be a large number of Mixer instances, each straining to keep caches primed to
satisfy incoming traffic. We expect to eventually introduce intelligent sharding such that Mixer instances become slightly specialized in
handling particular data streams in order to increase the likelihood of cache hits. In other words, sharding helps improve cache
efficiency by routing related traffic to the same Mixer instance over time, rather than randomly dispatching to
any available Mixer instance.
## Conclusion
Practical experience at Google showed that the model of a slim sidecar proxy and a large shared caching control plane intermediary hits a sweet
spot, delivering excellent perceived availability and latency. We've taken the lessons learned there and applied them to create more sophisticated and
effective caching, prefetching, and buffering strategies in Istio. We've also optimized the communication protocols to reduce overhead when a cache miss does occur.
Mixer is still young. As of Istio 0.3, we haven't really done significant performance work within Mixer itself. This means when a request misses the sidecar
cache, we spend more time in Mixer to respond to requests than we should. We're doing a lot of work to improve this in coming months to reduce the overhead
that Mixer imparts in the synchronous precondition check case.
We hope this post makes you appreciate the inherent benefits that Mixer brings to Istio.

_blog/2018/aws-nlb.md

---
title: Configuring Istio Ingress with AWS NLB
description: Describes how to configure Istio ingress with a network load balancer on AWS
publishdate: 2018-04-20
subtitle: Ingress AWS Network Load Balancer
attribution: Julien SENON
weight: 89
redirect_from: "/blog/aws-nlb.html"
---
{% include home.html %}
This blog post provides instructions for using and configuring Istio ingress with an [AWS Network Load Balancer](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/introduction.html).
A Network Load Balancer (NLB) can be used instead of a Classic Load Balancer; see this [comparison](https://aws.amazon.com/elasticloadbalancing/details/#compare) of the different AWS load balancers for more details.
## Prerequisites
The following instructions require a Kubernetes **1.9.0 or newer** cluster.
<img src="{{home}}/img/exclamation-mark.svg" alt="Warning" title="Warning" style="width: 32px; display:inline" /> Usage of an AWS NLB on Kubernetes is an alpha feature and not recommended for production clusters.
## IAM Policy
You need to apply a policy to the master role in order to be able to provision a network load balancer.
1. In the AWS `iam` console, click on Policies, then click Create policy:
{% include image.html width="80%" ratio="60%"
link="./img/createpolicystart.png"
caption="Create a new policy"
%}
1. Select `json`:
{% include image.html width="80%" ratio="60%"
link="./img/createpolicyjson.png"
caption="Select json"
%}
1. Copy/paste the text below:
```json
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "kopsK8sNLBMasterPermsRestrictive",
"Effect": "Allow",
"Action": [
"ec2:DescribeVpcs",
"elasticloadbalancing:AddTags",
"elasticloadbalancing:CreateListener",
"elasticloadbalancing:CreateTargetGroup",
"elasticloadbalancing:DeleteListener",
"elasticloadbalancing:DeleteTargetGroup",
"elasticloadbalancing:DescribeListeners",
"elasticloadbalancing:DescribeLoadBalancerPolicies",
"elasticloadbalancing:DescribeTargetGroups",
"elasticloadbalancing:DescribeTargetHealth",
"elasticloadbalancing:ModifyListener",
"elasticloadbalancing:ModifyTargetGroup",
"elasticloadbalancing:RegisterTargets",
"elasticloadbalancing:SetLoadBalancerPoliciesOfListener"
],
"Resource": [
"*"
]
},
{
"Effect": "Allow",
"Action": [
"ec2:DescribeVpcs",
"ec2:DescribeRegions"
],
"Resource": "*"
}
]
}
```
1. Click Review policy, fill in all the fields, and click Create policy:
{% include image.html width="80%" ratio="60%"
link="./img/create_policy.png"
caption="Validate policy"
%}
1. Click on Roles, select your master role, and click Attach policy:
{% include image.html width="100%" ratio="35%"
link="./img/roles_summary.png"
caption="Attach policy"
%}
1. Your policy is now attached to your master role.
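If you prefer the AWS CLI over the console, the same policy can be created and attached with two commands. This is a sketch: the policy file name, account ID, and role name below are placeholders for your own values.

```command
$ aws iam create-policy --policy-name k8s-nlb-master-perms --policy-document file://nlb-policy.json
$ aws iam attach-role-policy --role-name <your master role name> --policy-arn arn:aws:iam::<account id>:policy/k8s-nlb-master-perms
```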
## Rewrite Istio Ingress Service
You need to rewrite the ingress service with the following:
```yaml
apiVersion: v1
kind: Service
metadata:
name: istio-ingress
namespace: istio-system
labels:
istio: ingress
annotations:
service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
spec:
externalTrafficPolicy: Local
ports:
- port: 80
protocol: TCP
targetPort: 80
name: http
- port: 443
protocol: TCP
targetPort: 443
name: https
selector:
istio: ingress
type: LoadBalancer
```
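After saving the manifest above to a file (the file name here is an assumption), apply it and check that AWS provisioned the NLB; the `EXTERNAL-IP` column of the service should eventually show a DNS name ending in `.elb.amazonaws.com`:

```command
$ kubectl apply -f istio-ingress-nlb.yaml
$ kubectl get svc istio-ingress -n istio-system
```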
## What's next
Consult the Kubernetes [service networking](https://kubernetes.io/docs/concepts/services-networking/service/) documentation if you need further information.


@ -1,14 +1,12 @@
---
title: Consuming External Web Services
description: Describes a simple scenario based on Istio Bookinfo sample
publishdate: 2018-01-31
subtitle: Egress Rules for HTTPS traffic
attribution: Vadim Eisenberg
weight: 93
redirect_from: "/blog/egress-https.html"
---
{% include home.html %}
@ -20,9 +18,10 @@ In this blog post, I modify the [Istio Bookinfo Sample Application]({{home}}/doc
## Bookinfo sample application with external details web service
### Initial setting
To demonstrate the scenario of consuming an external web service, I start with a Kubernetes cluster with [Istio installed]({{home}}/docs/setup/kubernetes/quick-start.html#installation-steps). Then I deploy [Istio Bookinfo Sample Application]({{home}}/docs/guides/bookinfo.html). This application uses the _details_ microservice to fetch book details, such as the number of pages and the publisher. The original _details_ microservice provides the book details without consulting any external service.
The example commands in this blog post work with Istio 0.2+, with or without [Mutual TLS]({{home}}/docs/concepts/security/mutual-tls.html) enabled.
The Bookinfo configuration files required for the scenario of this post appear starting from [Istio release version 0.5](https://github.com/istio/istio/releases/tag/0.5.0).
The Bookinfo configuration files reside in the `samples/bookinfo/kube` directory of the Istio release archive.
@ -30,27 +29,24 @@ The Bookinfo configuration files reside in the `samples/bookinfo/kube` directory
Here is a copy of the end-to-end architecture of the application from the original [Bookinfo Guide]({{home}}/docs/guides/bookinfo.html).
{% assign url = home | append: "/docs/guides/img/bookinfo/withistio.svg" %}
{% include image.html width="80%" ratio="59.08%"
link=url
caption="The Original Bookinfo Application"
%}
### Bookinfo with details version 2
Let's add a new version of the _details_ microservice, _v2_, that fetches the book details from [Google Books APIs](https://developers.google.com/books/docs/v1/getting_started).
```command
$ kubectl apply -f <(istioctl kube-inject -f samples/bookinfo/kube/bookinfo-details-v2.yaml)
```
The updated architecture of the application now looks as follows:
{% include image.html width="80%" ratio="65.16%"
link="./img/bookinfo-details-v2.svg"
caption="The Bookinfo Application with details V2"
%}
Note that the Google Books web service is outside the Istio service mesh, the boundary of which is marked by a dashed line.
@ -77,11 +73,9 @@ Let's access the web page of the application, after [determining the ingress IP
Oops... Instead of the book details we have the _Error fetching product details_ message displayed:
{% include image.html width="80%" ratio="36.01%"
link="./img/errorFetchingBookDetails.png"
caption="The Error Fetching Product Details Message"
%}
The good news is that our application did not crash. With a good microservice design, we do not have **failure propagation**. In our case, the failing _details_ microservice does not cause the _productpage_ microservice to fail. Most of the functionality of the application is still provided, despite the failure in the _details_ microservice. We have **graceful service degradation**: as you can see, the reviews and the ratings are displayed correctly, and the application is still useful.
@ -89,7 +83,9 @@ The good news is that our application did not crash. With a good microservice de
So what might have gone wrong? Ah... The answer is that I forgot to enable traffic from inside the mesh to an external service, in this case to the Google Books web service. By default, the Istio sidecar proxies ([Envoy proxies](https://www.envoyproxy.io)) **block all the traffic to destinations outside the cluster**. To enable such traffic, we must define an [egress rule]({{home}}/docs/reference/config/istio.routing.v1alpha1.html#EgressRule).
### Egress rule for Google Books web service
No worries, let's define an **egress rule** and fix our application:
```bash
cat <<EOF | istioctl create -f -
apiVersion: config.istio.io/v1alpha2
@ -108,40 +104,35 @@ EOF
Now accessing the web page of the application displays the book details without error:
{% include image.html width="80%" ratio="34.82%"
link="./img/externalBookDetails.png"
caption="Book Details Displayed Correctly"
%}
Note that our egress rule allows traffic to any domain matching _*.googleapis.com_, on port 443, using the HTTPS protocol. Let's assume for the sake of the example that the applications in our Istio service mesh must access multiple subdomains of _googleapis.com_, for example _www.googleapis.com_ and also _fcm.googleapis.com_. Our rule allows traffic to both _www.googleapis.com_ and _fcm.googleapis.com_, since they both match _*.googleapis.com_. This **wildcard** feature allows us to enable traffic to multiple domains using a single egress rule.
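For reference, an egress rule of the shape described above looks roughly like this in the v1alpha2 API. This is a reconstruction based on the description in the text (wildcard domain, port 443, HTTPS), not a copy of the listing shown earlier:

```yaml
apiVersion: config.istio.io/v1alpha2
kind: EgressRule
metadata:
  name: googleapis
  namespace: default
spec:
  destination:
    service: "*.googleapis.com"
  ports:
    - port: 443
      protocol: https
```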
We can query our egress rules:
```command
$ istioctl get egressrules
NAME KIND NAMESPACE
googleapis EgressRule.v1alpha2.config.istio.io default
```
We can delete our egress rule:
```command
$ istioctl delete egressrule googleapis -n default
Deleted config: egressrule googleapis
```
and see in the output that the egress rule is deleted.
Accessing the web page after deleting the egress rule produces the same error that we experienced before, namely _Error fetching product details_. As we can see, the egress rules are defined **dynamically**, like many other Istio configuration artifacts. The Istio operators can decide dynamically which domains they allow the microservices to access. They can enable and disable traffic to the external domains on the fly, without redeploying the microservices.
## Issues with Istio egress traffic control
### TLS origination by Istio
There is a caveat to this story. In HTTPS, all the HTTP details (hostname, path, headers, etc.) are encrypted, so Istio cannot know the destination domain of the encrypted requests. Well, Istio could know the destination domain by the [SNI](https://tools.ietf.org/html/rfc3546#section-3.1) (_Server Name Indication_) field. This feature, however, is not yet implemented in Istio. Therefore, currently Istio cannot perform filtering of HTTPS requests based on the destination domains.
To allow Istio to perform filtering of egress requests based on domains, the microservices must issue HTTP requests. Istio then opens an HTTPS connection to the destination (performs TLS origination). The code of the microservices must be written differently or configured differently, according to whether the microservice runs inside or outside an Istio service mesh. This contradicts the Istio design goal of [maximizing transparency]({{home}}/docs/concepts/what-is-istio/goals.html). Sometimes we need to compromise...
@ -149,14 +140,13 @@ To allow Istio to perform filtering of egress requests based on domains, the mic
The diagram below shows how the HTTPS traffic to external services is performed. On the top, a microservice outside an Istio service mesh
sends regular HTTPS requests, encrypted end-to-end. On the bottom, the same microservice inside an Istio service mesh must send unencrypted HTTP requests inside a pod, which are intercepted by the sidecar Envoy proxy. The sidecar proxy performs TLS origination, so the traffic between the pod and the external service is encrypted.
{% include image.html width="80%" ratio="65.16%"
link="./img/https_from_the_app.svg"
caption="HTTPS traffic to external services, from outside vs. from inside an Istio service mesh"
%}
Here is how we code this behavior in [the Bookinfo details microservice code](https://github.com/istio/istio/blob/master/samples/bookinfo/src/details/details.rb), using the Ruby [net/http module](https://docs.ruby-lang.org/en/2.0.0/Net/HTTP.html):
```ruby
uri = URI.parse('https://www.googleapis.com/books/v1/volumes?q=isbn:' + isbn)
http = Net::HTTP.new(uri.host, uri.port)
@ -170,7 +160,10 @@ Note that the port is derived by the `URI.parse` from the URI's schema (https://
When the `WITH_ISTIO` environment variable is defined, the request is performed without SSL (plain HTTP).
We set the `WITH_ISTIO` environment variable to _"true"_ in the
[Kubernetes deployment spec of details v2](https://github.com/istio/istio/blob/master/samples/bookinfo/kube/bookinfo-details-v2.yaml),
the `container` section:
```yaml
env:
- name: WITH_ISTIO
@ -178,22 +171,27 @@ env:
```
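The conditional described above can be sketched as follows. This is an illustrative sketch, not the exact `details.rb` source, and the ISBN in the URL is a placeholder:

```ruby
require 'net/http'
require 'uri'

uri = URI.parse('https://www.googleapis.com/books/v1/volumes?q=isbn:0486424618')
http = Net::HTTP.new(uri.host, uri.port)
# Outside the mesh, speak TLS directly. Inside the mesh (WITH_ISTIO set),
# leave the request as plain HTTP so the Envoy sidecar performs TLS origination.
http.use_ssl = (uri.scheme == 'https') unless ENV['WITH_ISTIO'] == 'true'
```

The same binary thus works in both environments, switching behavior only on the environment variable.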
#### Relation to Istio mutual TLS
Note that the TLS origination in this case is unrelated to [the mutual TLS]({{home}}/docs/concepts/security/mutual-tls.html) applied by Istio. The TLS origination for the external services will work, whether the Istio mutual TLS is enabled or not. The **mutual** TLS secures service-to-service communication **inside** the service mesh and provides each service with a strong identity. In the case of the **external services**, we have **one-way** TLS, the same mechanism used to secure communication between a web browser and a web server. TLS is applied to the communication with external services to verify the identity of the external server and to encrypt the traffic.
### Malicious microservices threat
Another issue is that the egress rules are currently **not a security feature**; they only **enable** traffic to external services. For HTTP-based protocols, the rules are based on domains. Istio does not check that the destination IP of the request matches the _Host_ header. This means that a malicious microservice inside a service mesh could trick Istio into allowing traffic to a malicious IP. The attack is to set one of the domains allowed by some existing egress rule as the _Host_ header of the malicious request.
Securing egress traffic is currently not supported in Istio and should be performed elsewhere, for example by a firewall or by an additional proxy outside Istio. Right now, we're working to enable the application of Mixer security policies on the egress traffic and to prevent the attack described above.
### No tracing, telemetry, or Mixer checks
Note that currently no tracing and telemetry information can be collected for the egress traffic. Mixer policies cannot be applied. We are working to fix this in future Istio releases.
## Future work
In my next blog posts I will demonstrate Istio egress rules for TCP traffic and will show examples of combining routing rules and egress rules.
In Istio, we are working on making Istio egress traffic more secure, and in particular on enabling tracing, telemetry, and Mixer checks for the egress traffic.
## Conclusion
In this blog post I demonstrated how the microservices in an Istio service mesh can consume external web services via HTTPS. By default, Istio blocks all the traffic to the hosts outside the cluster. To enable such traffic, egress rules must be created for the service mesh. It is possible to access external sites over HTTPS; however, the microservices must issue plain HTTP requests, and Istio performs TLS origination. Currently, no tracing, telemetry, or Mixer checks are enabled for the egress traffic. Egress rules are currently not a security feature, so additional mechanisms are required for securing egress traffic. We're working to enable logging/telemetry and security policies for the egress traffic in future releases.
To read more about Istio egress traffic control, see [Control Egress Traffic Task]({{home}}/docs/tasks/traffic-management/egress.html).


@ -1,14 +1,12 @@
---
title: Consuming External TCP Services
description: Describes a simple scenario based on Istio Bookinfo sample
publishdate: 2018-02-06
subtitle: Egress rules for TCP traffic
attribution: Vadim Eisenberg
weight: 92
redirect_from: "/blog/egress-tcp.html"
---
{% include home.html %}
@ -16,48 +14,48 @@ redirect_from: "/blog/egress-tcp.html"
In my previous blog post, [Consuming External Web Services]({{home}}/blog/2018/egress-https.html), I described how external services can be consumed by in-mesh Istio applications via HTTPS. In this post, I demonstrate consuming external services over TCP. I use the [Istio Bookinfo sample application]({{home}}/docs/guides/bookinfo.html), the version in which the book ratings data is persisted in a MySQL database. I deploy this database outside the cluster and configure the _ratings_ microservice to use it. I define an [egress rule]({{home}}/docs/reference/config/istio.routing.v1alpha1.html#EgressRule) to allow the in-mesh applications to access the external database.
## Bookinfo sample application with external ratings database
First, I set up a MySQL database instance to hold book ratings data, outside my Kubernetes cluster. Then I modify the [Bookinfo sample application]({{home}}/docs/guides/bookinfo.html) to use my database.
### Setting up the database for ratings data
For this task I set up an instance of [MySQL](https://www.mysql.com). You can use any MySQL instance; I use [Compose for MySQL](https://www.ibm.com/cloud/compose/mysql). I use `mysqlsh` ([MySQL Shell](https://dev.mysql.com/doc/refman/5.7/en/mysqlsh.html)) as a MySQL client to feed the ratings data.
1. To initialize the database, I run the following command, entering the password when prompted. The command is performed with the credentials of the `admin` user, created by default by [Compose for MySQL](https://www.ibm.com/cloud/compose/mysql).
```command
$ curl -s https://raw.githubusercontent.com/istio/istio/master/samples/bookinfo/src/mysql/mysqldb-init.sql | \
mysqlsh --sql --ssl-mode=REQUIRED -u admin -p --host <the database host> --port <the database port>
```
_**OR**_
When using the `mysql` client and a local MySQL database, I would run:
```command
$ curl -s https://raw.githubusercontent.com/istio/istio/master/samples/bookinfo/src/mysql/mysqldb-init.sql | \
mysql -u root -p
```
1. I then create a user with the name _bookinfo_ and grant it _SELECT_ privilege on the `test.ratings` table:
```command
$ mysqlsh --sql --ssl-mode=REQUIRED -u admin -p --host <the database host> --port <the database port> \
-e "CREATE USER 'bookinfo' IDENTIFIED BY '<password you choose>'; GRANT SELECT ON test.ratings to 'bookinfo';"
```
_**OR**_
For `mysql` and the local database, the command would be:
```command
$ mysql -u root -p -e \
"CREATE USER 'bookinfo' IDENTIFIED BY '<password you choose>'; GRANT SELECT ON test.ratings to 'bookinfo';"
```
Here I apply the [principle of least privilege](https://en.wikipedia.org/wiki/Principle_of_least_privilege). This means that I do not use my _admin_ user in the Bookinfo application. Instead, I create a special user for the Bookinfo application, _bookinfo_, with minimal privileges. In this case, the _bookinfo_ user only has the `SELECT` privilege on a single table.
After running the command to create the user, I will clean my bash history by checking the number of the last command and running `history -d <the number of the command that created the user>`. I don't want the password of the new user to be stored in the bash history. If I'm using `mysql`, I'll remove the last command from the `~/.mysql_history` file as well. Read more about password protection of the newly created user in the [MySQL documentation](https://dev.mysql.com/doc/refman/5.5/en/create-user.html).
1. I inspect the created ratings to see that everything worked as expected:
```command
$ mysqlsh --sql --ssl-mode=REQUIRED -u bookinfo -p --host <the database host> --port <the database port> \
-e "select * from test.ratings;"
Enter password:
+----------+--------+
| ReviewID | Rating |
@ -70,10 +68,8 @@ For this task I set up an instance of [MySQL](https://www.mysql.com). You can us
_**OR**_
For `mysql` and the local database:
```command
$ mysql -u bookinfo -p -e "select * from test.ratings;"
Enter password:
+----------+--------+
| ReviewID | Rating |
@ -83,12 +79,10 @@ For this task I set up an instance of [MySQL](https://www.mysql.com). You can us
+----------+--------+
```
1. I set the ratings temporarily to 1 to provide a visual clue when our database is used by the Bookinfo _ratings_ service:
```command
$ mysqlsh --sql --ssl-mode=REQUIRED -u admin -p --host <the database host> --port <the database port> \
-e "update test.ratings set rating=1; select * from test.ratings;"
Enter password:
+----------+--------+
| ReviewID | Rating |
@ -101,10 +95,8 @@ For this task I set up an instance of [MySQL](https://www.mysql.com). You can us
_**OR**_
For `mysql` and the local database:
```command
$ mysql -u root -p -e "update test.ratings set rating=1; select * from test.ratings;"
Enter password:
+----------+--------+
| ReviewID | Rating |
@ -118,21 +110,21 @@ For this task I set up an instance of [MySQL](https://www.mysql.com). You can us
Now I am ready to deploy a version of the Bookinfo application that will use my database.
### Initial setting of Bookinfo application
To demonstrate the scenario of using an external database, I start with a Kubernetes cluster with [Istio installed]({{home}}/docs/setup/kubernetes/quick-start.html#installation-steps). Then I deploy the [Istio Bookinfo sample application]({{home}}/docs/guides/bookinfo.html). This application uses the _ratings_ microservice to fetch book ratings, a number between 1 and 5. The ratings are displayed as stars for each review. There are several versions of the _ratings_ microservice. Some use [MongoDB](https://www.mongodb.com), others use [MySQL](https://www.mysql.com) as their database.
The example commands in this blog post work with Istio 0.3+, with or without [Mutual TLS]({{home}}/docs/concepts/security/mutual-tls.html) enabled.
As a reminder, here is the end-to-end architecture of the application from the [Bookinfo Guide]({{home}}/docs/guides/bookinfo.html).
{% assign url = home | append: "/docs/guides/img/bookinfo/withistio.svg" %}
{% include image.html width="80%" ratio="59.08%"
link=url
caption="The original Bookinfo application"
%}
### Use the database for ratings data in Bookinfo application
1. I modify the deployment spec of a version of the _ratings_ microservice that uses a MySQL database, to use my database instance. The spec is in `samples/bookinfo/kube/bookinfo-ratings-v2-mysql.yaml` of an Istio release archive. I edit the following lines:
```yaml
@ -147,46 +139,40 @@ As a reminder, here is the end-to-end architecture of the application from the [
```
I replace the values in the snippet above, specifying the database host, port, user, and password. Note that the correct way to work with passwords in a container's environment variables in Kubernetes is [to use secrets](https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets-as-environment-variables). For this example task only, I write the password directly in the deployment spec. **Do not do this** in a real environment! I also assume everyone realizes that `"password"` should not be used as a password...
1. I apply the modified spec to deploy the version of the _ratings_ microservice, _v2-mysql_, that will use my database.
```command
$ kubectl apply -f <(istioctl kube-inject -f samples/bookinfo/kube/bookinfo-ratings-v2-mysql.yaml)
deployment "ratings-v2-mysql" created
```
1. I route all the traffic destined to the _reviews_ service to its _v3_ version. I do this to ensure that the _reviews_ service always calls the _ratings_
service. In addition, I route all the traffic destined to the _ratings_ service to _ratings v2-mysql_ that uses my database. I add routing for both services above by adding two [route rules]({{home}}/docs/reference/config/istio.routing.v1alpha1.html). These rules are specified in `samples/bookinfo/kube/route-rule-ratings-mysql.yaml` of an Istio release archive.
```command
$ istioctl create -f samples/bookinfo/kube/route-rule-ratings-mysql.yaml
Created config route-rule/default/ratings-test-v2-mysql at revision 1918799
Created config route-rule/default/reviews-test-ratings-v2 at revision 1918800
```
The updated architecture appears below. Note that the blue arrows inside the mesh mark the traffic configured according to the route rules we added. According to the route rules, the traffic is sent to _reviews v3_ and _ratings v2-mysql_.
{% include image.html width="80%" ratio="59.31%"
link="./img/bookinfo-ratings-v2-mysql-external.svg"
caption="The Bookinfo application with ratings v2-mysql and an external MySQL database"
%}
Note that the MySQL database is outside the Istio service mesh, or more precisely outside the Kubernetes cluster. The boundary of the service mesh is marked by a dashed line.
### Access the webpage
Let's access the webpage of the application, after [determining the ingress IP and port]({{home}}/docs/guides/bookinfo.html#determining-the-ingress-ip-and-port).
We have a problem... Instead of the rating stars, the message _"Ratings service is currently unavailable"_ is displayed below each review:
{% include image.html width="80%" ratio="36.19%"
link="./img/errorFetchingBookRating.png"
caption="The Ratings service error messages"
%}
As in [Consuming External Web Services]({{home}}/blog/2018/egress-https.html), we experience **graceful service degradation**, which is good. The application did not crash due to the error in the _ratings_ microservice. The webpage of the application correctly displayed the book information, the details, and the reviews, just without the rating stars.
@ -194,6 +180,7 @@ As in [Consuming External Web Services]({{home}}/blog/2018/egress-https.html), w
We have the same problem as in [Consuming External Web Services]({{home}}/blog/2018/egress-https.html), namely all the traffic outside the Kubernetes cluster, both TCP and HTTP, is blocked by default by the sidecar proxies. To enable such traffic for TCP, an egress rule for TCP must be defined.
### Egress rule for an external MySQL instance
TCP egress rules come to our rescue. I copy the following YAML spec to a text file (let's call it `egress-rule-mysql.yaml`) and edit it to specify the IP of my database instance and its port.
```yaml
@ -211,21 +198,18 @@ spec:
```
Then I run `istioctl` to add the egress rule to the service mesh:
```command
$ istioctl create -f egress-rule-mysql.yaml
Created config egress-rule/default/mysql at revision 1954425
```
Note that for a TCP egress rule, we specify `tcp` as the protocol of a port of the rule. Also note that we use an IP of the external service instead of its domain name. I will talk more about TCP egress rules [below](#egress-rules-for-tcp-traffic). For now, let's verify that the egress rule we added fixed the problem. Let's access the webpage and see if the stars are back.
It worked! Accessing the web page of the application displays the ratings without error:
{% include image.html width="80%" ratio="36.69%"
link="./img/externalMySQLRatings.png"
caption="Book Ratings Displayed Correctly"
%}
Note that we see a one-star rating for both displayed reviews, as expected. I changed the ratings to be one star to provide us with a visual clue that our external database is indeed being used.
@ -233,13 +217,17 @@ Note that we see a one-star rating for both displayed reviews, as expected. I ch
As with egress rules for HTTP/HTTPS, we can dynamically delete and create egress rules for TCP using `istioctl`.
## Motivation for egress TCP traffic control
Some in-mesh Istio applications must access external services, for example legacy systems. In many cases, the access is not performed over HTTP or HTTPS protocols. Other TCP protocols are used, such as database-specific protocols like [MongoDB Wire Protocol](https://docs.mongodb.com/manual/reference/mongodb-wire-protocol/) and [MySQL Client/Server Protocol](https://dev.mysql.com/doc/internals/en/client-server-protocol.html) to communicate with external databases.
Note that in the case of access to external HTTPS services, as described in the [Control Egress Traffic]({{home}}/docs/tasks/traffic-management/egress.html) task, an application must issue HTTP requests to the external service. The Envoy sidecar proxy attached to the pod or the VM will intercept the requests and open an HTTPS connection to the external service. The traffic will be unencrypted inside the pod or the VM, but it will leave the pod or the VM encrypted.
However, sometimes this approach cannot work due to the following reasons:
* The code of the application is configured to use an HTTPS URL and cannot be changed
* The code of the application uses some library to access the external service and that library uses HTTPS only
* There are compliance requirements that do not allow unencrypted traffic, even if the traffic is unencrypted only inside the pod or the VM
In this case, HTTPS can be treated by Istio as _opaque TCP_ and can be handled in the same way as other TCP non-HTTP protocols.
Next let's see how we define egress rules for TCP traffic.
## Egress rules for TCP traffic
The egress rules for enabling TCP traffic to a specific port must specify `TCP` as the protocol of the port. Additionally, for the [MongoDB Wire Protocol](https://docs.mongodb.com/manual/reference/mongodb-wire-protocol/), the protocol can be specified as `MONGO`, instead of `TCP`.
For the `destination.service` field of the rule, an IP or a block of IPs in [CIDR](https://tools.ietf.org/html/rfc2317) notation must be used.
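A minimal sketch of such a rule for the MySQL case (the IP address below is a placeholder for the actual address of the external database, written as a single-host CIDR block):

```yaml
apiVersion: config.istio.io/v1alpha2
kind: EgressRule
metadata:
  name: mysql
  namespace: default
spec:
  destination:
    # placeholder: the IP of the external MySQL host as a /32 CIDR block
    service: 203.0.113.10/32
  ports:
    - port: 3306
      protocol: tcp
```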
@ -258,6 +247,7 @@ Note that all the IPs of an external service are not always known. To enable TCP
Also note that the IPs of an external service are not always static, for example in the case of [CDNs](https://en.wikipedia.org/wiki/Content_delivery_network). Sometimes the IPs are static most of the time, but can be changed from time to time, for example due to infrastructure changes. In these cases, if the range of the possible IPs is known, you should specify the range by CIDR blocks (even by multiple egress rules if needed). As an example, see the approach we used in the case of `wikipedia.org`, described in [Control Egress TCP Traffic Task]({{home}}/docs/tasks/traffic-management/egress-tcp.html). If the range of the possible IPs is not known, egress rules for TCP cannot be used and [the external services must be called directly]({{home}}/docs/tasks/traffic-management/egress.html#calling-external-services-directly), circumventing the sidecar proxies.
## Relation to mesh expansion
Note that the scenario described in this post is different from the mesh expansion scenario, described in the
[Integrating Virtual Machines]({{home}}/docs/guides/integrating-vms.html) guide. In that scenario, a MySQL instance runs on an external
(outside the cluster) machine (a bare metal or a VM), integrated with the Istio service mesh. The MySQL service becomes a first-class citizen of the mesh with all the beneficial features of Istio applicable. Among other things, the service becomes addressable by a local cluster domain name, for example by `mysqldb.vm.svc.cluster.local`, and the communication to it can be secured by
service must be registered with Istio. To enable such integration, Istio components must be
installed on the machine and the Istio control plane (_Pilot_, _Mixer_, _CA_) must be accessible from it. See the
[Istio Mesh Expansion]({{home}}/docs/setup/kubernetes/mesh-expansion.html) instructions for more details.
In our case, the MySQL instance can run on any machine or can be provisioned as a service by a cloud provider. There is no requirement to integrate the machine
with Istio. The Istio control plane does not have to be accessible from the machine. In the case of MySQL as a service, the machine MySQL runs on may not be accessible, and installing the required components on it may be impossible. In our case, the MySQL instance is addressable by its global domain name, which could be beneficial if the consuming applications expect to use that domain name. This is especially relevant when that expected domain name cannot be changed in the deployment configuration of the consuming applications.
## Cleanup
1. Drop the _test_ database and the _bookinfo_ user:
```command
$ mysqlsh --sql --ssl-mode=REQUIRED -u admin -p --host <the database host> --port <the database port> \
-e "drop database test; drop user bookinfo;"
```
_**OR**_
For `mysql` and the local database:
```command
$ mysql -u root -p -e "drop database test; drop user bookinfo;"
```
1. Remove the route rules:
```command
$ istioctl delete -f samples/bookinfo/kube/route-rule-ratings-mysql.yaml
Deleted config: route-rule/default/ratings-test-v2-mysql
Deleted config: route-rule/default/reviews-test-ratings-v2
```
1. Undeploy _ratings v2-mysql_:
```command
$ kubectl delete -f <(istioctl kube-inject -f samples/bookinfo/kube/bookinfo-ratings-v2-mysql.yaml)
deployment "ratings-v2-mysql" deleted
```
1. Delete the egress rule:
```command
$ istioctl delete egressrule mysql -n default
Deleted config: egressrule mysql
```
## Future work
In my next blog posts, I will show examples of combining route rules and egress rules, and also examples of accessing external services via Kubernetes _ExternalName_ services.
## Conclusion
In this blog post, I demonstrated how the microservices in an Istio service mesh can consume external services via TCP. By default, Istio blocks all the traffic, TCP and HTTP, to the hosts outside the cluster. To enable such traffic for TCP, TCP egress rules must be created for the service mesh.
## What's next
To read more about Istio egress traffic control:
* for TCP, see [Control Egress TCP Traffic Task]({{home}}/docs/tasks/traffic-management/egress-tcp.html)
* for HTTP/HTTPS, see [Control Egress Traffic Task]({{home}}/docs/tasks/traffic-management/egress.html)

---
title: 2018 Posts
description: Blog posts for 2018
weight: 10
layout: blog
toc: false
---

---
title: Istio Soft Multi-tenancy Support
description: Using Kubernetes namespace and RBAC to create an Istio soft multi-tenancy environment
publishdate: 2018-04-19
subtitle: Using multiple Istio control planes and RBAC to create multi-tenancy
attribution: John Joyce and Rich Curran
weight: 90
redirect_from: "/blog/soft-multitenancy.html"
---
{% include home.html %}
Multi-tenancy is commonly used in many environments across many different applications,
but the implementation details and functionality provided on a per tenant basis do not
follow one model in all environments. The [Kubernetes multi-tenancy working group](
https://github.com/kubernetes/community/blob/master/wg-multitenancy/README.md)
is working to define the multi-tenant use cases and functionality that should be available
within Kubernetes. However, from their work so far it is clear that only "soft multi-tenancy"
is possible due to the inability to fully protect against malicious containers or workloads
gaining access to other tenants' pods or kernel resources.
## Soft multi-tenancy
For this blog, "soft multi-tenancy" is defined as having a single Kubernetes control plane
with multiple Istio control planes and multiple meshes, one control plane and one mesh
per tenant. The cluster administrator gets control and visibility across all the Istio
control planes, while the tenant administrator only gets control of a specific Istio
instance. Separation between the tenants is provided by Kubernetes namespaces and RBAC.
One use case for this deployment model is a shared corporate infrastructure where malicious
actions are not expected, but a clean separation of the tenants is still required.
Potential future Istio multi-tenant deployment models are described at the bottom of this
blog.
>Note: This blog is a high-level description of how to deploy Istio in a
limited multi-tenancy environment. The [docs]({{home}}/docs/) section will be updated
when official multi-tenancy support is provided.
## Deployment
### Multiple Istio control planes
Deploying multiple Istio control planes starts by replacing all `namespace` references
in a manifest file with the desired namespace. Using istio.yaml as an example, if two tenant-level
Istio control planes are required, the first can use the istio.yaml default name of
*istio-system* and a second control plane can be created by generating a new yaml file with
a different namespace. As an example, the following command creates a yaml file with
the Istio namespace of *istio-system1*.
```command
$ cat istio.yaml | sed s/istio-system/istio-system1/g > istio-system1.yaml
```
The istio yaml file contains the details of the Istio control plane deployment, including the
pods that make up the control plane (mixer, pilot, ingress, CA). Deploying the two Istio
control plane yaml files:
```command
$ kubectl apply -f install/kubernetes/istio.yaml
$ kubectl apply -f install/kubernetes/istio-system1.yaml
```
This results in two Istio control planes running in two namespaces:
```command
$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
istio-system istio-ca-ffbb75c6f-98w6x 1/1 Running 0 15d
istio-system istio-ingress-68d65fc5c6-dnvfl 1/1 Running 0 15d
istio-system istio-mixer-5b9f8dffb5-8875r 3/3 Running 0 15d
istio-system istio-pilot-678fc976c8-b8tv6 2/2 Running 0 15d
istio-system1 istio-ca-5f496fdbcd-lqhlk 1/1 Running 0 15d
istio-system1 istio-ingress-68d65fc5c6-2vldg 1/1 Running 0 15d
istio-system1 istio-mixer-7d4f7b9968-66z44 3/3 Running 0 15d
istio-system1 istio-pilot-5bb6b7669c-779vb 2/2 Running 0 15d
```
The Istio [sidecar]({{home}}/docs/setup/kubernetes/sidecar-injection.html) and
[addons]({{home}}/docs/tasks/telemetry/) manifests, if required, must also
be deployed to match the configured `namespace` in use by the tenant's Istio control plane.
The execution of these two yaml files is the responsibility of the cluster
administrator, not the tenant level administrator. Additional RBAC restrictions will also
need to be configured and applied by the cluster administrator, limiting the tenant
administrator to only the assigned namespace.
### Split common and namespace specific resources
The manifest files in the Istio repositories create both common resources that would
be used by all Istio control planes as well as resources that are replicated per control
plane. Although it is a simple matter to deploy multiple control planes by replacing the
*istio-system* namespace references as described above, a better approach is to split the
manifests into a common part that is deployed once for all tenants and a tenant
specific part. For the [Custom Resource Definitions](https://kubernetes.io/docs/concepts/api-extension/custom-resources/#customresourcedefinitions), the roles and the role
bindings should be separated out from the provided Istio manifests. Additionally, the
roles and role bindings in the provided Istio manifests are probably unsuitable for a
multi-tenant environment and should be modified or augmented as described in the next
section.
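A sketch of this split, using hypothetical file names: the common part is applied once by the cluster administrator, while the tenant-specific part is applied once per tenant namespace:

```command
$ kubectl apply -f istio-common.yaml
$ kubectl apply -f istio-tenant-istio-system1.yaml
$ kubectl apply -f istio-tenant-istio-system2.yaml
```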
### Kubernetes RBAC for Istio control plane resources
To restrict a tenant administrator to a single Istio namespace, the cluster
administrator would create a manifest containing, at a minimum, a `Role` and `RoleBinding`
similar to the one below. In this example, a tenant administrator named *sales-admin*
is limited to the namespace *istio-system1*. A completed manifest would contain many
more `apiGroups` under the `Role` providing resource access to the tenant administrator.
```yaml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
namespace: istio-system1
name: ns-access-for-sales-admin-istio-system1
rules:
- apiGroups: [""] # "" indicates the core API group
resources: ["*"]
verbs: ["*"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: access-all-istio-system1
namespace: istio-system1
subjects:
- kind: User
name: sales-admin
apiGroup: rbac.authorization.k8s.io
roleRef:
kind: Role
name: ns-access-for-sales-admin-istio-system1
apiGroup: rbac.authorization.k8s.io
```
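Assuming the manifest above is saved in a file (the file name below is hypothetical), the cluster administrator can apply it and then check the resulting permissions with `kubectl auth can-i`:

```command
$ kubectl apply -f sales-admin-rbac.yaml
$ kubectl auth can-i list pods --namespace istio-system1 --as sales-admin
yes
$ kubectl auth can-i list pods --namespace istio-system --as sales-admin
no
```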
### Watching specific namespaces for service discovery
In addition to creating RBAC rules limiting the tenant administrator's access to a specific
Istio control plane, the Istio manifest must be updated to specify the application namespace
that Pilot should watch for creation of its xDS cache. This is done by starting the Pilot
component with the additional command line arguments `--appNamespace ns-1`, where *ns-1*
is the namespace that the tenant's application will be deployed in. An example snippet from
the istio-system1.yaml file is included below.
```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: istio-pilot
namespace: istio-system1
annotations:
sidecar.istio.io/inject: "false"
spec:
replicas: 1
template:
metadata:
labels:
istio: pilot
spec:
serviceAccountName: istio-pilot-service-account
containers:
- name: discovery
image: docker.io/<user ID>/pilot:<tag>
imagePullPolicy: IfNotPresent
args: ["discovery", "-v", "2", "--admission-service", "istio-pilot", "--appNamespace", "ns-1"]
ports:
- containerPort: 8080
- containerPort: 443
```
### Deploying the tenant application in a namespace
Now that the cluster administrator has created the tenant's namespace (e.g. *istio-system1*) and
Pilot's service discovery has been configured to watch for a specific application
namespace (e.g. *ns-1*), create the application manifests to deploy in that tenant's specific
namespace. For example:
```yaml
apiVersion: v1
kind: Namespace
metadata:
name: ns-1
```
And add the namespace reference to each resource type included in the application's manifest
file. For example:
```yaml
apiVersion: v1
kind: Service
metadata:
name: details
labels:
app: details
namespace: ns-1
```
Although not shown, the application namespaces will also have RBAC settings limiting access
to certain resources. These RBAC settings could be set by the cluster administrator and/or
the tenant administrator.
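For example, assuming the manifests above are saved as bookinfo.yaml (a hypothetical file name), the tenant administrator could deploy the application with the sidecar injected; the resources land in *ns-1* because the namespace is set in the manifest itself:

```command
$ kubectl apply -f <(istioctl kube-inject -f bookinfo.yaml)
```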
### Using `istioctl` in a multi-tenant environment
When defining [route rules]({{home}}/docs/reference/config/istio.routing.v1alpha1.html#RouteRule)
or [destination policies]({{home}}/docs/reference/config/istio.routing.v1alpha1.html#DestinationPolicy),
it is necessary to ensure that the `istioctl` command is scoped to
the namespace the Istio control plane is running in to ensure the resource is created
in the proper namespace. Additionally, the rule itself must be scoped to the tenant's namespace
so that it will be applied properly to that tenant's mesh. The *-i* option is used to create
(or get or describe) the rule in the namespace that the Istio control plane is deployed in.
The *-n* option will scope the rule to the tenant's mesh and should be set to the namespace that
the tenant's app is deployed in. Note that the *-n* option can be skipped on the command line if
the .yaml file for the resource scopes it properly instead.
For example, the following command would be required to add a route rule to the *istio-system1*
namespace:
```command
$ istioctl -i istio-system1 create -n ns-1 -f route_rule_v2.yaml
```
And can be displayed using the command:
```command
$ istioctl -i istio-system1 -n ns-1 get routerule
NAME KIND NAMESPACE
details-default RouteRule.v1alpha2.config.istio.io ns-1
productpage-default RouteRule.v1alpha2.config.istio.io ns-1
ratings-default RouteRule.v1alpha2.config.istio.io ns-1
reviews-default RouteRule.v1alpha2.config.istio.io ns-1
```
See the [Multiple Istio control planes]({{home}}/blog/2018/soft-multitenancy.html#multiple-istio-control-planes) section of this document for more details on `namespace` requirements in a
multi-tenant environment.
### Test results
Following the instructions above, a cluster administrator can create an environment limiting,
via RBAC and namespaces, what a tenant administrator can deploy.
After deployment, accessing the Istio control plane pods assigned to a specific tenant
administrator is permitted:
```command
$ kubectl get pods -n istio-system
NAME READY STATUS RESTARTS AGE
grafana-78d649479f-8pqk9 1/1 Running 0 1d
istio-ca-ffbb75c6f-98w6x 1/1 Running 0 1d
istio-ingress-68d65fc5c6-dnvfl 1/1 Running 0 1d
istio-mixer-5b9f8dffb5-8875r 3/3 Running 0 1d
istio-pilot-678fc976c8-b8tv6 2/2 Running 0 1d
istio-sidecar-injector-7587bd559d-5tgk6 1/1 Running 0 1d
prometheus-cf8456855-hdcq7 1/1 Running 0 1d
servicegraph-75ff8f7c95-wcjs7 1/1 Running 0 1d
```
However, accessing all the cluster's pods is not permitted:
```command
$ kubectl get pods --all-namespaces
Error from server (Forbidden): pods is forbidden: User "dev-admin" cannot list pods at the cluster scope
```
And neither is accessing another tenant's namespace:
```command
$ kubectl get pods -n istio-system1
Error from server (Forbidden): pods is forbidden: User "dev-admin" cannot list pods in the namespace "istio-system1"
```
The tenant administrator can deploy applications in the application namespace configured for
that tenant. As an example, after updating the [Bookinfo]({{home}}/docs/guides/bookinfo.html)
manifests and deploying them under the tenant's application namespace of *ns-0*, listing the
pods in use by this tenant's namespace is permitted:
```command
$ kubectl get pods -n ns-0
NAME READY STATUS RESTARTS AGE
details-v1-64b86cd49-b7rkr 2/2 Running 0 1d
productpage-v1-84f77f8747-rf2mt 2/2 Running 0 1d
ratings-v1-5f46655b57-5b4c5 2/2 Running 0 1d
reviews-v1-ff6bdb95b-pm5lb 2/2 Running 0 1d
reviews-v2-5799558d68-b989t 2/2 Running 0 1d
reviews-v3-58ff7d665b-lw5j9 2/2 Running 0 1d
```
But accessing another tenant's application namespace is not:
```command
$ kubectl get pods -n ns-1
Error from server (Forbidden): pods is forbidden: User "dev-admin" cannot list pods in the namespace "ns-1"
```
If the [addon tools]({{home}}/docs/tasks/telemetry/), for example
[Prometheus]({{home}}/docs/tasks/telemetry/querying-metrics.html), are deployed
(also limited by an Istio `namespace`), the statistical results returned would represent only
the traffic seen from that tenant's application namespace.
## Conclusion
The evaluation performed indicates Istio has sufficient capabilities and security to meet a
small number of multi-tenant use cases. It also shows that Istio and Kubernetes __cannot__
provide sufficient capabilities and security for other use cases, especially those use
cases that require complete security and isolation between untrusted tenants. The improvements
required to reach a more secure model of security and isolation require work in container
technology, e.g. Kubernetes, rather than improvements in Istio capabilities.
## Issues
* The CA (Certificate Authority) and Mixer Istio pod logs from one tenant's Istio control
plane (e.g. the *istio-system* `namespace`) contained 'info' messages from a second tenant's
Istio control plane (e.g. the *istio-system1* `namespace`).
## Challenges with other multi-tenancy models
Other multi-tenancy deployment models were considered:
1. A single mesh with multiple applications, one for each tenant on the mesh. The cluster
administrator gets control and visibility mesh wide and across all applications, while the
tenant administrator only gets control of a specific application.
1. A single Istio control plane with multiple meshes, one mesh per tenant. The cluster
administrator gets control and visibility across the entire Istio control plane and all
meshes, while the tenant administrator only gets control of a specific mesh.
1. A single cloud environment (cluster controlled), but multiple Kubernetes control planes
(tenant controlled).
These options either can't be properly supported without code changes or don't fully
address the use cases.
Current Istio capabilities are poorly suited to support the first model as it lacks
sufficient RBAC capabilities to support cluster versus tenant operations. Additionally,
having multiple tenants under one mesh is too insecure with the current mesh model and the
way Istio drives configuration to the envoy proxies.
Regarding the second option, the current Istio paradigm assumes a single mesh per Istio control
plane. The needed changes to support this model are substantial. They would require
finer grained scoping of resources and security domains based on namespaces, as well as
additional Istio RBAC changes. This model will likely be addressed by future work, but is not
currently possible.
The third model doesn't satisfy most use cases, as most cluster administrators prefer
a common Kubernetes control plane which they provide as a
[PaaS](https://en.wikipedia.org/wiki/Platform_as_a_service) to their tenants.
## Future work
Allowing a single Istio control plane to control multiple meshes would be an obvious next
feature. An additional improvement is to provide a single mesh that can host different
tenants with some level of isolation and security between the tenants. This could be done
by partitioning within a single control plane using the same logical notion of namespace as
Kubernetes. A [document](https://docs.google.com/document/d/14Hb07gSrfVt5KX9qNi7FzzGwB_6WBpAnDpPG6QEEd9Q)
has been started within the Istio community to define additional use cases and the
Istio functionality required to support those use cases.
## References
* Video on Kubernetes multi-tenancy support, [Multi-Tenancy Support & Security Modeling with RBAC and Namespaces](https://www.youtube.com/watch?v=ahwCkJGItkU), and the [supporting slide deck](https://schd.ws/hosted_files/kccncna17/21/Multi-tenancy%20Support%20%26%20Security%20Modeling%20with%20RBAC%20and%20Namespaces.pdf).
* Kubecon talk on security that discusses Kubernetes support for "Cooperative soft multi-tenancy", [Building for Trust: How to Secure Your Kubernetes](https://www.youtube.com/watch?v=YRR-kZub0cA).
* Kubernetes documentation on [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/) and [namespaces](https://kubernetes.io/docs/tasks/administer-cluster/namespaces-walkthrough/).
* Kubecon slide deck on [Multi-tenancy Deep Dive](https://schd.ws/hosted_files/kccncna17/a9/kubecon-multitenancy.pdf).
* Google document on [Multi-tenancy models for Kubernetes](https://docs.google.com/document/d/15w1_fesSUZHv-vwjiYa9vN_uyc--PySRoLKTuDhimjc/edit#heading=h.3dawx97e3hz6). (Requires permission)
* Cloud Foundry WIP document, [Multi-cloud and Multi-tenancy](https://docs.google.com/document/d/14Hb07gSrfVt5KX9qNi7FzzGwB_6WBpAnDpPG6QEEd9Q)
* [Istio Auto Multi-Tenancy 101](https://docs.google.com/document/d/12F183NIRAwj2hprx-a-51ByLeNqbJxK16X06vwH5OWE/edit#heading=h.x0f9qplja3q)

View File

@ -1,23 +1,20 @@
---
title: Traffic Mirroring with Istio for Testing in Production
description: An introduction to safer, lower-risk deployments and release to production
publishdate: 2018-02-08
subtitle: Routing rules for HTTP traffic
attribution: Christian Posta
weight: 91
redirect_from: "/blog/traffic-mirroring.html"
---
{% include home.html %}
Trying to enumerate all the possible combinations of test cases for testing services in non-production/test environments can be daunting. In some cases, you'll find that all of the effort that goes into cataloging these use cases doesn't match up to real production use cases. Ideally, we could use live production use cases and traffic to help illuminate all of the feature areas of the service under test that we might miss in more contrived testing environments.
Istio can help here. With the release of [Istio 0.5.0]({{home}}/about/notes/0.5.html), Istio can mirror traffic to help test your services. You can write route rules similar to the following to enable traffic mirroring:
```yaml
apiVersion: config.istio.io/v1alpha2
kind: RouteRule
spec:
- labels:
version: v1
weight: 100
- labels:
version: v2
weight: 0
mirror:
name: httpbin
labels:
version: v2
```
A few things to note here:
@ -46,4 +43,5 @@ A few things to note here:
* Responses to any mirrored traffic are ignored; traffic is mirrored as "fire-and-forget"
* You'll need to have the 0-weighted route to hint to Istio to create the proper Envoy cluster under the covers; [this should be ironed out in future releases](https://github.com/istio/istio/issues/3270).
Learn more about mirroring by visiting the [Mirroring Task]({{home}}/docs/tasks/traffic-management/mirroring.html) and see a more
[comprehensive treatment of this scenario on my blog](https://blog.christianposta.com/microservices/traffic-shadowing-with-istio-reduce-the-risk-of-code-release/).

---
title: Introducing the Istio v1alpha3 routing API
description: Introduction, motivation and design principles for the Istio v1alpha3 routing API.
publishdate: 2018-04-25
subtitle:
attribution: Frank Budinsky (IBM) and Shriram Rajagopalan (VMware)
weight: 88
redirect_from: "/blog/v1alpha3-routing.html"
---
{% include home.html %}
Up until now, Istio has provided a simple API for traffic management using four configuration resources:
`RouteRule`, `DestinationPolicy`, `EgressRule`, and (Kubernetes) `Ingress`.
With this API, users have been able to easily manage the flow of traffic in an Istio service mesh.
The API has allowed users to route requests to specific versions of services, inject delays and failures for resilience
testing, add timeouts and circuit breakers, and more, all without changing the application code itself.
While this functionality has proven to be a very compelling part of Istio, user feedback has also shown that this API does
have some shortcomings, specifically when using it to manage very large applications containing thousands of services, and
when working with protocols other than HTTP. Furthermore, the use of Kubernetes `Ingress` resources to configure external
traffic has proven to be woefully insufficient for our needs.
To address these, and other concerns, a new traffic management API, a.k.a. `v1alpha3`, is being introduced, which will
completely replace the previous API going forward. Although the `v1alpha3` model is fundamentally the same, it is not
backward compatible and will require manual conversion from the old API. A
[conversion tool]({{home}}/docs/reference/commands/istioctl.html#istioctl%20experimental%20convert-networking-config)
will be included in the next few releases of Istio to help with the transition.
To justify this disruption, the `v1alpha3` API has gone through a long and painstaking community
review process that has hopefully resulted in a greatly improved API that will stand the test of time. In this article,
we will introduce the new configuration model and attempt to explain some of the motivation and design principles that
influenced it.
## Design principles
A few key design principles played a role in the routing model redesign:
* Explicitly model infrastructure as well as intent. For example, in addition to configuring an ingress gateway, the
component (controller) implementing it can also be specified.
* The authoring model should be "producer oriented" and "host centric" as opposed to compositional. For example, all
rules associated with a particular host are configured together, instead of individually.
* Clear separation of routing from post-routing behaviors.
## Configuration resources in v1alpha3
A typical mesh will have one or more load balancers (we call them gateways)
that terminate TLS from external networks and allow traffic into the mesh.
Traffic then flows through internal services via sidecar gateways.
It is also common for applications to consume external
services (e.g., Google Maps API). These may be called directly or, in certain deployments, all traffic
exiting the mesh may be forced through dedicated egress gateways. The following diagram depicts
this mental model.
{% include image.html width="80%" ratio="35.20%"
link="./img/gateways.svg"
alt="Role of gateways in the mesh"
caption="Gateways in an Istio service mesh"
%}
With the above setup in mind, `v1alpha3` introduces the following new
configuration resources to control traffic routing into, within, and out of the mesh.
1. `Gateway`
1. `VirtualService`
1. `DestinationRule`
1. `ServiceEntry`
`VirtualService`, `DestinationRule`, and `ServiceEntry` replace `RouteRule`,
`DestinationPolicy`, and `EgressRule` respectively. The `Gateway` is a
platform independent abstraction to model the traffic flowing into
dedicated middleboxes.
The figure below depicts the flow of control across configuration
resources.
{% include image.html width="80%" ratio="41.16%"
link="./img/virtualservices-destrules.svg"
caption="Relationship between different v1alpha3 elements"
%}
### Gateway
A [Gateway]({{home}}/docs/reference/config/istio.networking.v1alpha3.html#Gateway)
configures a load balancer for HTTP/TCP traffic, regardless of
where it will be running. Any number of gateways can exist within the mesh
and multiple different gateway implementations can co-exist. In fact, a
gateway configuration can be bound to a particular workload by specifying
the set of workload (pod) labels as part of the configuration, allowing
users to reuse off-the-shelf network appliances by writing a simple gateway
controller.
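As a sketch of that label-binding idea (the controller label and host below are hypothetical, not standard Istio values), such a gateway configuration might look like:
```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: my-gateway
spec:
  selector:
    app: my-gateway-controller  # pods/VMs carrying this label implement the gateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*.example.com"
```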
For ingress traffic management, you might ask: _Why not reuse Kubernetes Ingress APIs_?
The Ingress APIs proved to be incapable of expressing Istio's routing needs.
By trying to draw a common denominator across different HTTP proxies, the
Ingress is only able to support the most basic HTTP routing and ends up
pushing every other feature of modern proxies into non-portable
annotations.
Istio `Gateway` overcomes the `Ingress` shortcomings by separating the
L4-L6 spec from L7. It only configures the L4-L6 functions (e.g., ports to
expose, TLS configuration) that are uniformly implemented by all good L7
proxies. Users can then use standard Istio rules to control HTTP
requests as well as TCP traffic entering a `Gateway` by binding a
`VirtualService` to it.
For example, the following simple `Gateway` configures a load balancer
to allow external HTTPS traffic for host `bookinfo.com` into the mesh:
```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: bookinfo-gateway
spec:
servers:
- port:
number: 443
name: https
protocol: HTTPS
hosts:
- bookinfo.com
tls:
mode: SIMPLE
serverCertificate: /tmp/tls.crt
privateKey: /tmp/tls.key
```
To configure the corresponding routes, a `VirtualService` (described in the [following section](#virtualservice))
must be defined for the same host and bound to the `Gateway` using
the `gateways` field in the configuration:
```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: bookinfo
spec:
hosts:
- bookinfo.com
gateways:
- bookinfo-gateway # <---- bind to gateway
http:
- match:
- uri:
prefix: /reviews
route:
...
```
The `Gateway` can be used to model an edge-proxy or a purely internal proxy
as shown in the first figure. Irrespective of the location, all gateways
can be configured and controlled in the same way.
### VirtualService
Replacing route rules with something called “virtual services” might seem peculiar at first, but in reality it's
fundamentally a much better name for what is being configured, especially after redesigning the API to address the
scalability issues of the previous model.
In effect, what has changed is that instead of configuring routing using a set of individual configuration resources
(rules) for a particular destination service, each containing a precedence field to control the order of evaluation, we
now configure the (virtual) destination itself, with all of its rules in an ordered list within a corresponding
[VirtualService]({{home}}/docs/reference/config/istio.networking.v1alpha3.html#VirtualService) resource.
For example, where previously we had two `RouteRule` resources for the
[Bookinfo]({{home}}/docs/guides/bookinfo.html) applications `reviews` service, like this:
```yaml
apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
name: reviews-default
spec:
destination:
name: reviews
precedence: 1
route:
- labels:
version: v1
---
apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
name: reviews-test-v2
spec:
destination:
name: reviews
precedence: 2
match:
request:
headers:
cookie:
regex: "^(.*?;)?(user=jason)(;.*)?$"
route:
- labels:
version: v2
```
In `v1alpha3`, we provide the same configuration in a single `VirtualService` resource:
```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: reviews
spec:
hosts:
- reviews
http:
- match:
- headers:
cookie:
regex: "^(.*?;)?(user=jason)(;.*)?$"
route:
- destination:
host: reviews
subset: v2
- route:
- destination:
host: reviews
subset: v1
```
As you can see, both of the rules for the `reviews` service are consolidated in one place, which at first may or may not
seem preferable. However, if you look closer at this new model, you'll see there are fundamental differences that make
`v1alpha3` vastly more functional.
First of all, notice that the destination service for the `VirtualService` is specified using a `hosts` field (a repeated field, in fact) and is then again specified in the `destination` field of each of the route specifications. This is a
very important difference from the previous model.
A `VirtualService` describes the mapping from one or more user-addressable destinations to the actual destination workloads inside the mesh. In our example they are the same; however, the user-addressable hosts can be any DNS
names, with optional wildcard prefix or CIDR prefix, that will be used to address the service. This can be particularly
useful in facilitating the transition from a monolith to a composite service built out of distinct microservices, without requiring the
consumers of the service to adapt to the transition.
For example, the following rule allows users to address both the `reviews` and `ratings` services of the Bookinfo application
as if they are parts of a bigger (virtual) service at `http://bookinfo.com/`:
```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: bookinfo
spec:
hosts:
- bookinfo.com
http:
- match:
- uri:
prefix: /reviews
route:
- destination:
host: reviews
- match:
- uri:
prefix: /ratings
route:
- destination:
host: ratings
...
```
The hosts of a `VirtualService` do not actually have to be part of the service registry; they are simply virtual
destinations. This allows users to model traffic for virtual hosts that do not have routable entries inside the mesh.
These hosts can be exposed outside the mesh by binding the `VirtualService` to a `Gateway` configuration for the same host
(as described in the [previous section](#gateway)).
In addition to this fundamental restructuring, `VirtualService` includes several other important changes:
1. Multiple match conditions can be expressed inside the `VirtualService` configuration, reducing the need for redundant
rules.
1. Each service version has a name (called a service subset). The set of pods/VMs belonging to a subset is defined in a
`DestinationRule`, described in the following section.
1. `VirtualService` hosts can be specified using wildcard DNS prefixes to create a single rule for all matching services.
For example, in Kubernetes, to apply the same rewrite rule for all services in the `foo` namespace, the `VirtualService`
would use `*.foo.svc.cluster.local` as the host.
### DestinationRule
A [DestinationRule]({{home}}/docs/reference/config/istio.networking.v1alpha3.html#DestinationRule)
configures the set of policies to be applied while forwarding traffic to a service. These policies are
intended to be authored by service owners, describing circuit breakers, load balancer settings, TLS settings, and so on.
`DestinationRule` is more or less the same as its predecessor, `DestinationPolicy`, with the following exceptions:
1. The `host` of a `DestinationRule` can include wildcard prefixes, allowing a single rule to be specified for many actual
services.
1. A `DestinationRule` defines addressable `subsets` (i.e., named versions) of the corresponding destination host. These
subsets are used in `VirtualService` route specifications when sending traffic to specific versions of the service.
Naming versions this way allows us to cleanly refer to them across different virtual services, to simplify the stats that
Istio proxies emit, and to encode subsets in SNI headers.
A `DestinationRule` that configures policies and subsets for the reviews service might look something like this:
```yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: reviews
spec:
host: reviews
trafficPolicy:
loadBalancer:
simple: RANDOM
subsets:
- name: v1
labels:
version: v1
- name: v2
labels:
version: v2
trafficPolicy:
loadBalancer:
simple: ROUND_ROBIN
- name: v3
labels:
version: v3
```
Notice that, unlike `DestinationPolicy`, multiple policies (e.g., default and v2-specific) are specified in a single
`DestinationRule` configuration.
### ServiceEntry
[ServiceEntry]({{home}}/docs/reference/config/istio.networking.v1alpha3.html#ServiceEntry)
is used to add additional entries into the service registry that Istio maintains internally.
It is most commonly used to model traffic to external dependencies of the mesh,
such as APIs consumed from the web or traffic to services in legacy infrastructure.
Everything you could previously configure using an `EgressRule` can just as easily be done with a `ServiceEntry`.
For example, access to a simple external service from inside the mesh can be enabled using a configuration
something like this:
```yaml
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
name: foo-ext
spec:
hosts:
- foo.com
ports:
- number: 80
name: http
protocol: HTTP
```
That said, `ServiceEntry` has significantly more functionality than its predecessor.
First of all, a `ServiceEntry` is not limited to external service configuration,
it can be of two types: mesh-internal or mesh-external.
Mesh-internal entries are like all other internal services but are used to explicitly add services
to the mesh. They can be used to add services as part of expanding the service mesh to include unmanaged infrastructure
(e.g., VMs added to a Kubernetes-based service mesh).
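For instance, a mesh-internal entry for such a VM-based service might look something like this (the host name and VM address here are hypothetical):
```yaml
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: vm-svc
spec:
  hosts:
  - vmsvc.example.internal
  location: MESH_INTERNAL
  ports:
  - number: 8080
    name: http
    protocol: HTTP
  resolution: STATIC
  endpoints:
  - address: 10.1.1.2  # IP of the VM running the service
```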
Mesh-external entries represent services external to the mesh.
For them, mTLS authentication is disabled and policy enforcement is performed on the client side,
instead of on the server side as for internal service requests.
Because a `ServiceEntry` configuration simply adds a destination to the internal service registry, it can be
used in conjunction with a `VirtualService` and/or `DestinationRule`, just like any other service in the registry.
The following `DestinationRule`, for example, can be used to initiate mTLS connections for an external service:
```yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: foo-ext
spec:
host: foo.com
trafficPolicy:
tls:
mode: MUTUAL
clientCertificate: /etc/certs/myclientcert.pem
privateKey: /etc/certs/client_private_key.pem
caCertificates: /etc/certs/rootcacerts.pem
```
In addition to its expanded generality, `ServiceEntry` provides several other improvements over `EgressRule`
including the following:
1. A single `ServiceEntry` can configure multiple service endpoints, which previously would have required multiple
`EgressRules`.
1. The resolution mode for the endpoints is now configurable (`NONE`, `STATIC`, or `DNS`).
1. Additionally, we are working on addressing another pain point: the need to access secure external services over plain
text ports (e.g., `http://google.com:443`). This should be fixed in the coming weeks, allowing you to directly access
`https://google.com` from your application. Stay tuned for an Istio patch release (0.8.x) that addresses this limitation.
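As an illustrative sketch of the first two improvements (the host names are hypothetical), a single entry can cover several external endpoints and specify how they are resolved:
```yaml
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: external-apis
spec:
  hosts:
  - api-1.example.com
  - api-2.example.com
  ports:
  - number: 443
    name: https
    protocol: HTTPS
  resolution: DNS  # endpoints resolved via DNS; NONE and STATIC are also available
```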
## Creating and deleting v1alpha3 route rules
Because all route rules for a given destination are now stored together as an ordered
list in a single `VirtualService` resource, adding a second and subsequent rules for a particular destination
is no longer done by creating a new (`RouteRule`) resource, but instead by updating the one-and-only `VirtualService`
resource for the destination.
With the old routing rules:
```command
$ istioctl create -f my-second-rule-for-destination-abc.yaml
```
With `v1alpha3` routing rules:
```command
$ istioctl replace -f my-updated-rules-for-destination-abc.yaml
```
Deleting route rules other than the last one for a particular destination is also done using `istioctl replace`.
When adding or removing routes that refer to service versions, the `subsets` will need to be updated in
the service's corresponding `DestinationRule`.
As you might have guessed, this is also done using `istioctl replace`.
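For example, introducing a hypothetical `v3` version of the `reviews` service would mean updating both resources in place (the file names here are illustrative):
```command
$ istioctl replace -f reviews-virtual-service.yaml   # adds a route to the v3 subset
$ istioctl replace -f reviews-destination-rule.yaml  # defines the v3 subset labels
```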
## Summary
The Istio `v1alpha3` routing API has significantly more functionality than
its predecessor, but unfortunately is not backwards compatible, requiring a
one-time manual conversion. The previous configuration resources,
`RouteRule`, `DestinationPolicy`, and `EgressRule`, will not be supported
from Istio 0.9 onwards. Kubernetes users can continue to use `Ingress` to
configure their edge load balancers for basic routing. However, advanced
routing features (e.g., traffic split across two versions) will require use
of `Gateway`, a significantly more functional and highly
recommended `Ingress` replacement.
## Acknowledgments
Credit for the routing model redesign and implementation work goes to the
following people (in alphabetical order):
* Frank Budinsky (IBM)
* Zack Butcher (Google)
* Greg Hanson (IBM)
* Costin Manolache (Google)
* Martin Ostrowski (Google)
* Shriram Rajagopalan (VMware)
* Louis Ryan (Google)
* Isaiah Snell-Feikema (IBM)
* Kuat Yessenov (Google)

View File

@ -1,6 +1,6 @@
---
title: Istio Blog
overview: The Istio blog
description: The Istio blog
layout: compress
---
{% include latest_blog_post.html %}

View File

@ -5,6 +5,8 @@ kramdown:
auto_ids: true
input: GFM
hard_wrap: false
syntax_highlighter_opts:
disable : true
baseurl:
@ -40,6 +42,32 @@ plugins:
- jekyll-redirect-from
- jekyll-sitemap
defaults:
-
scope:
path: ""
type: "about"
values:
layout: "about"
-
scope:
path: ""
type: "docs"
values:
layout: "docs"
-
scope:
path: ""
type: "help"
values:
layout: "help"
-
scope:
path: ""
type: "blog"
values:
layout: "blog"
exclude:
- README.md
- LICENSE
@ -63,6 +91,11 @@ exclude:
- repos/*.html
- repos/*.md
- vendor/
- js/misc.js
- js/styleSwitcher.js
- firebase.json
- _rakesite
- mdl_style.rb
repository:
istio/istio.github.io

View File

@ -1,4 +1,5 @@
version: 0.6 (preliminary)
version: 0.8
preliminary: true
archive: false
archive_date: DD-MMM-YYYY
search_engine_id: "013699703217164175118:veyyqmfmpj4"

View File

@ -1,7 +1,11 @@
- name: 0.6 (preliminary)
- name: 0.8
url: https://preliminary.istio.io
- name: 0.7
url: https://istio.io
- name: 0.6
url: https://archive.istio.io/v0.6
- name: 0.5
url: https://istio.io
url: https://archive.istio.io/v0.5
- name: 0.4
url: https://archive.istio.io/v0.4
- name: 0.3

View File

@ -1,11 +1,9 @@
---
title: Concepts
overview: Concepts help you learn about the different parts of the Istio system and the abstractions it uses.
description: Concepts help you learn about the different parts of the Istio system and the abstractions it uses.
order: 10
weight: 10
layout: docs
type: markdown
toc: false
---

View File

@ -1,11 +1,9 @@
---
title: Attributes
overview: Explains the important notion of attributes, which is a central mechanism for how policies and control are applied to services within the mesh.
order: 10
description: Explains the important notion of attributes, which is a central mechanism for how policies and control are applied to services within the mesh.
weight: 10
layout: docs
type: markdown
---
{% include home.html %}
@ -19,11 +17,13 @@ environment this traffic occurs in. An Istio attribute carries a specific piece
of information such as the error code of an API request, the latency of an API request, or the
original IP address of a TCP connection. For example:
request.path: xyz/abc
request.size: 234
request.time: 12:34:56.789 04/17/2017
source.ip: 192.168.0.1
destination.service: example
```plain
request.path: xyz/abc
request.size: 234
request.time: 12:34:56.789 04/17/2017
source.ip: 192.168.0.1
destination.service: example
```
## Attribute vocabulary
@ -44,4 +44,4 @@ separator. For example, `request.size` and `source.ip`.
## Attribute types
Istio attributes are strongly typed. The supported attribute types are defined by
[ValueType](https://github.com/istio/api/blob/master/mixer/v1/config/descriptor/value_type.proto).
[ValueType](https://github.com/istio/api/blob/master/policy/v1beta1/value_type.proto).

View File

@ -1,11 +1,9 @@
---
title: Policies and Control
overview: Introduces the policy control mechanisms.
description: Introduces the policy control mechanisms.
order: 40
weight: 40
layout: docs
type: markdown
toc: false
---

View File

@ -1,11 +1,9 @@
---
title: Mixer Configuration
overview: An overview of the key concepts used to configure Mixer.
order: 30
description: An overview of the key concepts used to configure Mixer.
weight: 30
layout: docs
type: markdown
---
{% include home.html %}
@ -45,11 +43,9 @@ The set of attributes determines which backend Mixer calls for a given request a
each is given. In order to hide the details of individual backends, Mixer uses modules
known as [*adapters*](./mixer.html#adapters).
{% include figure.html width='60%' ratio='42.60%'
img='./img/mixer-config/machine.svg'
alt='Attribute Machine'
title='Attribute Machine'
caption='Attribute Machine'
{% include image.html width="60%" ratio="42.60%"
link="./img/mixer-config/machine.svg"
caption="Attribute Machine"
%}
Mixer's configuration has the following central responsibilities:
@ -81,18 +77,18 @@ metadata:
namespace: istio-system
spec:
# kind specific configuration.
```
```
- **apiVersion** - A constant for an Istio release.
- **kind** - A Mixer assigned unique "kind" for every adapter and template.
- **name** - The configuration resource name.
- **namespace** - The namespace in which the configuration resource is applicable.
- **namespace** - The namespace in which the configuration resource is applicable.
- **spec** - The `kind`-specific configuration.
### Handlers
[Adapters](./mixer.html#adapters) encapsulate the logic necessary to interface Mixer with specific external infrastructure
backends such as [Prometheus](https://prometheus.io), [New Relic](https://newrelic.com), or [Stackdriver](https://cloud.google.com/logging).
Individual adapters generally need operational parameters in order to do their work. For example, a logging adapter may require
Individual adapters generally need operational parameters in order to do their work. For example, a logging adapter may require
the IP address and port of the log sink.
Here is an example showing how to configure an adapter of kind = `listchecker`. The listchecker adapter checks an input value against a list.
@ -109,7 +105,7 @@ spec:
blacklist: false
```
`{metadata.name}.{kind}.{metadata.namespace}` is the fully qualified name of a handler. The fully qualified name of the above handler is
`{metadata.name}.{kind}.{metadata.namespace}` is the fully qualified name of a handler. The fully qualified name of the above handler is
`staticversion.listchecker.istio-system` and it must be unique.
The schema of the data in the `spec` stanza depends on the specific adapter being configured.
@ -189,8 +185,8 @@ spec:
instances:
- requestduration.metric.istio-system
```
A rule contains a `match` predicate expression and a list of actions to perform if the predicate is true.
An action specifies the list of instances to be delivered to a handler.
A rule contains a `match` predicate expression and a list of actions to perform if the predicate is true.
An action specifies the list of instances to be delivered to a handler.
A rule must use the fully qualified names of handlers and instances.
If the rule, handlers, and instances are all in the same namespace, the namespace suffix can be elided from the fully qualified name as seen in `handler.prometheus`.
@ -221,7 +217,7 @@ destination_version: destination.labels["version"] | "unknown"
With the above, the `destination_version` label is assigned the value of `destination.labels["version"]`. However if that attribute
is not present, the literal `"unknown"` is used.
The attributes that can be used in attribute expressions must be defined in an
The attributes that can be used in attribute expressions must be defined in an
[*attribute manifest*](#manifests) for the deployment. Within the manifest, each attribute has
a type which represents the kind of data that the attribute carries. In the
same way, attribute expressions are also typed, and their type is derived from
@ -244,9 +240,9 @@ Mixer goes through the following steps to arrive at the set of `actions`.
1. Extract the value of the identity attribute from the request.
2. Extract the service namespace from the identity attribute.
1. Extract the service namespace from the identity attribute.
3. Evaluate the `match` predicate for all rules in the `configDefaultNamespace` and the service namespace.
1. Evaluate the `match` predicate for all rules in the `configDefaultNamespace` and the service namespace.
The actions resulting from these steps are performed by Mixer.
@ -291,4 +287,4 @@ configuration](https://github.com/istio/istio/blob/master/mixer/testdata/config)
## What's next
* Read the [blog post]({{home}}/blog/mixer-adapter-model.html) describing Mixer's adapter model.
-Read the [blog post]({{home}}/blog/mixer-adapter-model.html) describing Mixer's adapter model.

View File

@ -1,13 +1,13 @@
---
title: Mixer
overview: Architectural deep-dive into the design of Mixer, which provides the policy and control mechanisms within the service mesh.
order: 20
description: Architectural deep-dive into the design of Mixer, which provides the policy and control mechanisms within the service mesh.
weight: 20
layout: docs
type: markdown
---
{% include home.html %}
The page explains Mixer's role and general architecture.
## Background
@ -29,11 +29,10 @@ Mixer is designed to change the boundaries between layers in order to reduce
systemic complexity, eliminating policy logic from service code and giving
control to operators instead.
{% include figure.html width='60%' ratio='59%'
img='./img/mixer/traffic.svg'
alt='Showing the flow of traffic through Mixer.'
title='Mixer Traffic Flow'
caption='Mixer Traffic Flow'
{% include image.html width="60%" ratio="59%"
link="./img/mixer/traffic.svg"
alt="Showing the flow of traffic through Mixer."
caption="Mixer Traffic Flow"
%}
Mixer provides three core features:
@ -72,11 +71,10 @@ single consistent API, independent of the backends in use. The exact set of
adapters used at runtime is determined through configuration and can easily be
extended to target new or custom infrastructure backends.
{% include figure.html width='35%' ratio='138%'
img='./img/mixer/adapters.svg'
alt='Showing Mixer with adapters.'
title='Mixer and its Adapters'
caption='Mixer and its Adapters'
{% include image.html width="35%" ratio="138%"
link="./img/mixer/adapters.svg"
alt="Showing Mixer with adapters."
caption="Mixer and its Adapters"
%}
## Configuration state
@ -88,12 +86,12 @@ operator is responsible for:
- Configuring a set of *handlers* for Mixer-generated data. Handlers are
configured adapters (adapters being binary plugins as described
[below](#adapters)). Providing a `statsd` adapter with the IP address for a
[here](#adapters)). Providing a `statsd` adapter with the IP address for a
statsd backend is an example of handler configuration.
- Configuring a set of *instances* for Mixer to generate based on attributes and
literal values. They represent a chunk of data that adapter code will operate
on. For example, an operator may configure Mixer to generate `request_count`
on. For example, an operator may configure Mixer to generate `requestcount`
metric values from attributes such as `destination.service` and
`response.code`.
@ -136,13 +134,12 @@ phases:
parameters. The Adapter Dispatching phase invokes the adapters associated with
each aspect and passes them those parameters.
{% include figure.html width='50%' ratio='144%'
img='./img/mixer/phases.svg'
alt='Phases of Mixer request processing.'
title='Request Phases'
caption='Request Phases'
{% include image.html width="50%" ratio="144%"
link="./img/mixer/phases.svg"
alt="Phases of Mixer request processing."
caption="Request Phases"
%}
## What's next
* Read the [blog post]({{home}}/blog/2017/adapter-model.html) describing Mixer's adapter model.
- Read the [blog post]({{home}}/blog/2017/adapter-model.html) describing Mixer's adapter model.

View File

@ -0,0 +1,87 @@
---
title: Istio Authentication Policy
description: Describes Istio authentication policy
weight: 10
---
{% include home.html %}
Istio authentication policy enables operators to specify authentication requirements for a service (or services). Istio authentication policy is composed of two parts:
* Peer: verifies the party, the direct client, that makes the connection. The common authentication mechanism for this is [mutual TLS]({{home}}/docs/concepts/security/mutual-tls.html). Istio is responsible for managing both client and server sides to enforce the policy.
* Origin: verifies the party, the original client, that makes the request (e.g., end users, devices, etc.). JWT is the only supported mechanism for origin authentication at the moment. Istio configures the server side to perform authentication, but doesn't enforce that the client side sends the required token.
Identities from both authentication parts, if applicable, are output to the next layer (e.g authorization, Mixer). To simplify the authorization rules, the policy can also specify which identity (peer or origin) should be used as 'the principal'. By default, it is set to the peer's identity.
## Architecture
Authentication policies are saved in the Istio config store (in 0.7, the storage implementation uses Kubernetes CRDs) and distributed by the control plane. Depending on the size of the mesh, config propagation may take a few seconds to a few minutes. During the transition, you can expect traffic loss or inconsistent authentication results.
{% include image.html width="80%" ratio="100%"
link="./img/authn.svg"
caption="Istio authentication policy architecture"
%}
Policy is scoped to namespaces, with (optional) target selector rules to narrow down the set of services (within the same namespace as the policy) on which the policy should be applied. This aligns with the ACL model based on Kubernetes RBAC. More specifically, only the admin of the namespace can set policies for services in that namespace.
Authentication is implemented by the Istio sidecars. For example, with an Envoy sidecar, it is a combination of SSL setting and HTTP filters. If authentication fails, requests will be rejected (either with SSL handshake error code, or http 401, depending on the type of authentication mechanism). If authentication succeeds, the following authenticated attributes will be generated:
* **source.principal**: peer principal. If peer authentication is not used, the attribute is not set.
* **request.auth.principal**: depends on the policy principal binding, this could be peer principal (if USE_PEER) or origin principal (if USE_ORIGIN).
* **request.auth.audiences**: reflect the audience (`aud`) claim within the origin JWT (JWT that is used for origin authentication)
* **request.auth.presenter**: similarly, reflect the authorized presenter (`azp`) claim of the origin JWT.
* **request.auth.claims**: all raw string claims from origin-JWT.
Origin principal (principal from origin authentication) is not explicitly output. In general, it can always be reconstructed by joining (`iss`) and subject (`sub`) claims with a "/" separator (for example, if `iss` and `sub` claims are "*googleapis.com*" and "*123456*" respectively, then origin principal is "*googleapis.com/123456*"). On the other hand, if principal binding is USE_ORIGIN, **request.auth.principal** carries the same value as origin principal.
## Anatomy of the policy
### Target selectors
Defines rule to find service(s) on which policy should be applied. If no rule is provided, the policy is matched to all services in the namespace, so-called namespace-level policy (as opposed to service-level policies which have non-empty selector rules). Istio uses the service-level policy if available, otherwise it falls back to namespace-level policy. If neither is defined, it uses the default policy based on service mesh config and/or service annotation, which can only set mutual TLS setting (these are mechanisms before Istio 0.7 to config mutual TLS for Istio service mesh). See [testing Istio mutual TLS]({{home}}/docs/tasks/security/mutual-tls.html) and [per-service mutual TLS enablement]({{home}}/docs/tasks/security/per-service-mtls.html) for more details.
Operators are responsible for avoiding conflicts, e.g., creating more than one service-level policy that matches the same service(s) (or more than one namespace-level policy on the same namespace).
Example: rule to select product-page service (on any port), and reviews:9000.
```yaml
targets:
- name: product-page
- name: reviews
ports:
- number: 9000
```
### Peer authentication
Defines authentication methods (and associated parameters) that are supported for peer authentication. It can list more than one method; only one of them needs to be satisfied for the authentication to pass. However, starting with the 0.7 release, only mutual TLS is supported. Omit this if peer authentication is not needed.
Example of peer authentication using mutual TLS:
```yaml
peers:
- mtls:
```
> Starting with Istio 0.7, the `mtls` setting doesn't require any parameters (hence a `- mtls: {}`, `- mtls:`, or `- mtls: null` declaration is sufficient). In the future, it may carry arguments to provide different mTLS implementations.
### Origin authentication
Defines authentication methods (and associated parameters) that are supported for origin authentication. Only JWT is supported for this, however, the policy can list multiple JWTs by different issuers. Similar to peer authentication, only one of the listed methods needs to be satisfied for the authentication to pass.
```yaml
origins:
- jwt:
issuer: "https://accounts.google.com"
jwksUri: "https://www.googleapis.com/oauth2/v3/certs"
```
### Principal binding
Defines which principal from the authentication should be used. By default, this will be the peer's principal (and if peer authentication is not applied, it will be left unset). Policy writers can choose to overwrite it with USE_ORIGIN. In the future, we will also support *conditional binding* (e.g., USE_PEER when the peer is X, otherwise USE_ORIGIN).
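For illustration, a policy fragment that binds the principal to the origin identity (reusing the JWT issuer from the example above) might look like:
```yaml
origins:
- jwt:
    issuer: "https://accounts.google.com"
    jwksUri: "https://www.googleapis.com/oauth2/v3/certs"
principalBinding: USE_ORIGIN
```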
## What's next
Try out the [Basic Istio authentication policy]({{home}}/docs/tasks/security/authn-policy.html) tutorial.

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

View File

@ -1,11 +1,9 @@
---
title: Security
overview: Describes Istio's authorization and authentication functionality.
description: Describes Istio's authorization and authentication functionality.
order: 30
weight: 30
layout: docs
type: markdown
toc: false
---

View File

@ -1,166 +1,166 @@
---
title: Mutual TLS Authentication
overview: Describes Istio's mutual TLS authentication architecture which provides a strong service identity and secure communication channels between services.
order: 10
description: Describes Istio's mutual TLS authentication architecture which provides a strong service identity and secure communication channels between services.
weight: 10
layout: docs
type: markdown
---
{% include home.html %}
## Overview
Istio Auth's aim is to enhance the security of microservices and their communication without requiring service code changes. It is responsible for:
Istio's aim is to enhance the security of microservices and their communication without requiring service code changes. It is responsible for:
* Providing each service with a strong identity that represents its role to enable interoperability across clusters and clouds
* Securing service to service communication and end-user to service communication
* Providing each service with a strong identity that represents its role to enable interoperability across clusters and clouds
* Securing service to service communication and end-user to service communication
* Providing a key management system to automate key and certificate generation, distribution, rotation, and revocation
* Providing a key management system to automate key and certificate generation, distribution, rotation, and revocation
## Architecture
The diagram below shows Istio's security-related architecture, which includes three primary components: identity, key management, and communication
security. This diagram describes how Istio is used to secure the service-to-service communication between service 'frontend' running
as the service account 'frontend-team' and service 'backend' running as the service account 'backend-team'. Istio supports services running
on both Kubernetes containers and VM/bare-metal machines.
{% include image.html width="80%" ratio="56.25%"
link="./img/mutual-tls/auth.svg"
alt="Components making up the Istio auth model."
caption="Istio Security Architecture"
%}
As illustrated in the diagram, Istio leverages secret volume mount to deliver keys/certs from Citadel to Kubernetes containers. For services running on
VM/bare-metal machines, we introduce a node agent, which is a process running on each VM/bare-metal machine. It generates the private key and CSR (certificate
signing request) locally, sends CSR to Citadel for signing, and delivers the generated certificate together with the private key to Envoy.
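To make this concrete, here is a hedged sketch of what the secret volume mount can look like in a pod spec. The secret name `istio.frontend-team`, the image name, and the mount path are illustrative assumptions, not guaranteed values:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: frontend
spec:
  serviceAccountName: frontend-team
  containers:
  - name: istio-proxy
    image: istio/proxy            # illustrative image name
    volumeMounts:
    - name: istio-certs
      mountPath: /etc/certs       # Envoy reads the key/cert pair from here (assumed path)
      readOnly: true
  volumes:
  - name: istio-certs
    secret:
      secretName: istio.frontend-team   # secret populated by Citadel per service account (assumed naming)
```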
## Components
### Identity
Istio uses [Kubernetes service accounts](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/) to identify who runs the service:
* A service account in Istio has the format "spiffe://\<_domain_\>/ns/\<_namespace_\>/sa/\<_serviceaccount_\>".
  * _domain_ is currently _cluster.local_. We will support customization of domain in the near future.
  * _namespace_ is the namespace of the Kubernetes service account.
  * _serviceaccount_ is the Kubernetes service account name.
* A service account is **the identity (or role) a workload runs as**, which represents that workload's privileges. For systems requiring strong security, the amount of privilege for a workload should not be identified by an arbitrary string (e.g., service name or label), or by the binary that is deployed.
  * For example, let's say we have a workload pulling data from a multi-tenant database. If Alice ran this workload, she would be able to pull a different set of data than if Bob ran this workload.
* Service accounts enable strong security policies by offering the flexibility to identify a machine, a user, a workload, or a group of workloads (different workloads can run as the same service account).
* The service account a workload runs as won't change during the lifetime of the workload.
* Service account uniqueness can be ensured with a domain name constraint.
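For example, following the format above, the identity of a workload running as a (hypothetical) Kubernetes service account `bookinfo-productpage` in the `default` namespace, under the default domain, would be:

```plain
spiffe://cluster.local/ns/default/sa/bookinfo-productpage
```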
### Communication security
Service-to-service communication is tunneled through the client side [Envoy](https://envoyproxy.github.io/envoy/) and the server side Envoy. End-to-end communication is secured by:
* Local TCP connections between the service and Envoy
* Mutual TLS connections between proxies
* Secure Naming: during the handshake process, the client side Envoy checks that the service account provided by the server side certificate is allowed to run the target service
### Key management
Istio 0.2 supports services running on both Kubernetes pods and VM/bare-metal machines. We use different key provisioning mechanisms for each scenario.
For services running on Kubernetes pods, the per-cluster Citadel (acting as Certificate Authority) automates the key & certificate management process. It mainly performs four critical operations:
* Generate a [SPIFFE](https://spiffe.github.io/docs/svid) key and certificate pair for each service account
* Distribute a key and certificate pair to each pod according to the service account
* Rotate keys and certificates periodically
* Revoke a specific key and certificate pair when necessary
For services running on VM/bare-metal machines, the above four operations are performed by Citadel together with node agents.
## Workflow
The Istio Security workflow consists of two phases, deployment and runtime. For the deployment phase, we discuss the two
scenarios (i.e., in Kubernetes and VM/bare-metal machines) separately since they are different. Once the key and
certificate are deployed, the runtime phase is the same for the two scenarios. We briefly cover the workflow in this
section.
### Deployment phase (Kubernetes Scenario)
1. Citadel watches the Kubernetes API Server, creates a [SPIFFE](https://spiffe.github.io/docs/svid) key and certificate
pair for each of the existing and new service accounts, and sends them to the API Server.
1. When a pod is created, API Server mounts the key and certificate pair according to the service account using [Kubernetes secrets](https://kubernetes.io/docs/concepts/configuration/secret/).
1. [Pilot]({{home}}/docs/concepts/traffic-management/pilot.html) generates the config with proper key and certificate and secure naming information,
which defines what service account(s) can run a certain service, and passes it to Envoy.
### Deployment phase (VM/bare-metal Machines Scenario)
1. Citadel creates a gRPC service to take CSR requests.
1. Node agent creates the private key and CSR, and sends the CSR to Citadel for signing.
1. Citadel validates the credentials carried in the CSR, and signs the CSR to generate the certificate.
1. Node agent delivers the certificate received from Citadel, together with the private key, to Envoy.
1. The above CSR process repeats periodically for rotation.
### Runtime phase
1. The outbound traffic from a client service is rerouted to its local Envoy.
1. The client side Envoy starts a mutual TLS handshake with the server side Envoy. During the handshake, it also does a secure naming check to verify that the service account presented in the server certificate can run the server service.
1. The traffic is forwarded to the server side Envoy after the mutual TLS connection is established, and is then forwarded to the server service through local TCP connections.
## Best practices
In this section, we provide a few deployment guidelines and then discuss a real-world scenario.
### Deployment guidelines
* If there are multiple service operators (a.k.a. [SREs](https://en.wikipedia.org/wiki/Site_reliability_engineering)) deploying different services in a cluster (typically in a medium- or large-size cluster), we recommend creating a separate [namespace](https://kubernetes.io/docs/tasks/administer-cluster/namespaces-walkthrough/) for each SRE team to isolate their access. For example, you could create a "team1-ns" namespace for team1, and "team2-ns" namespace for team2, such that both teams won't be able to access each other's services.
* If Citadel is compromised, all its managed keys and certificates in the cluster may be exposed. We *strongly* recommend running Citadel
on a dedicated namespace (for example, istio-citadel-ns), which only cluster admins have access to.
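The dedicated namespace itself can be created with a standard Kubernetes manifest (the namespace name here follows the suggestion above):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: istio-citadel-ns
```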
### Example
Let's consider a 3-tier application with three services: photo-frontend, photo-backend, and datastore. Photo-frontend and photo-backend services are managed by the photo SRE team while the datastore service is managed by the datastore SRE team. Photo-frontend can access photo-backend, and photo-backend can access datastore. However, photo-frontend cannot access datastore.
In this scenario, a cluster admin creates 3 namespaces: istio-citadel-ns, photo-ns, and datastore-ns. Admin has access to all namespaces, and each team only has
access to its own namespace. The photo SRE team creates 2 service accounts to run photo-frontend and photo-backend respectively in namespace photo-ns. The
datastore SRE team creates 1 service account to run the datastore service in namespace datastore-ns. Moreover, we need to enforce the service access control
in [Istio Mixer]({{home}}/docs/concepts/policy-and-control/mixer.html) such that photo-frontend cannot access datastore.
In this setup, Citadel is able to provide key and certificate management for all namespaces, and isolate microservice deployments from each other.
## Future work
* Inter-cluster service-to-service authentication
* Powerful authorization mechanisms: ABAC, RBAC, etc.
* Per-service auth enablement support
* Secure Istio components (Mixer, Pilot)
* End-user to service authentication using JWT/OAuth2/OpenID_Connect.
* Support GCP service account
* Unix domain socket for local communication between service and Envoy
* Middle proxy support
* Pluggable key management component

---
title: Istio Role-Based Access Control (RBAC)
description: Describes Istio RBAC which provides access control for services in Istio Mesh.
weight: 20
layout: docs
type: markdown
---
{% include home.html %}
## Overview
Istio Role-Based Access Control (RBAC) provides namespace-level, service-level, and method-level access control for services in the Istio Mesh.
It features:
* Role-based semantics, which are simple and easy to use.
* Service-to-service and end-user-to-service authorization.
* Flexibility through custom properties support in roles and role-bindings.
## Architecture
The diagram below shows the Istio RBAC architecture. Operators specify Istio RBAC policies. The policies are saved in
the Istio config store.
{% include image.html width="80%" ratio="56.25%"
link="./img/IstioRBAC.svg"
alt="Istio RBAC"
caption="Istio RBAC Architecture"
%}
The Istio RBAC engine does two things:
* **Fetch RBAC policy.** The Istio RBAC engine watches for changes to the RBAC policy and fetches the updated policy if it sees any changes.
* **Authorize requests.** At runtime, when a request arrives, the request context is passed to the Istio RBAC engine. The RBAC engine evaluates the
request context against the RBAC policies, and returns the authorization result (ALLOW or DENY).
### Request context
In the current release, the Istio RBAC engine is implemented as a [Mixer adapter]({{home}}/docs/concepts/policy-and-control/mixer.html#adapters).
The request context is provided as an instance of the
[authorization template](https://github.com/istio/istio/blob/master/mixer/template/authorization/template.proto). The request context
contains all the information about the request and the environment that an authorization module needs to know. In particular, it has two parts:
or any additional properties about the subject such as namespace, service name,
and any additional properties about the action.
Below we show an example "requestcontext".
```yaml
apiVersion: "config.istio.io/v1alpha2"
kind: authorization
metadata:
  name: requestcontext
  namespace: istio-system
spec:
  subject:
    user: source.user | ""
    groups: ""
    properties:
      service: source.service | ""
      namespace: source.namespace | ""
  action:
    namespace: destination.namespace | ""
    service: destination.service | ""
    method: request.method | ""
    path: request.path | ""
    properties:
      version: request.headers["version"] | ""
```
## Istio RBAC policy
Istio RBAC introduces `ServiceRole` and `ServiceRoleBinding`, both of which are defined as Kubernetes CustomResourceDefinition (CRD) objects.
* **`ServiceRole`** defines a role for access to services in the mesh.
* **`ServiceRoleBinding`** grants a role to subjects (e.g., a user, a group, a service).
### `ServiceRole`
A `ServiceRole` specification includes a list of rules. Each rule has the following standard fields:
* **services**: A list of service names, which are matched against the `action.service` field of the "requestcontext".
* **methods**: A list of method names which are matched against the `action.method` field of the "requestcontext". In the above "requestcontext",
this is the HTTP or gRPC method. Note that gRPC methods are formatted in the form of "packageName.serviceName/methodName" (case sensitive).
* **paths**: A list of HTTP paths which are matched against the `action.path` field of the "requestcontext". It is ignored in the gRPC case.
A `ServiceRole` specification only applies to the **namespace** specified in `"metadata"` section. The "services" and "methods" are required
fields in a rule. "paths" is optional. If not specified or set to "*", it applies to "any" instance.
Here is an example of a simple role "service-admin", which has full access to all services in "default" namespace.
```yaml
apiVersion: "config.istio.io/v1alpha2"
kind: ServiceRole
metadata:
  name: service-admin
  namespace: default
spec:
  rules:
  - services: ["*"]
    methods: ["*"]
```
Here is another role "products-viewer", which has read ("GET" and "HEAD") access to service "products.default.svc.cluster.local"
in "default" namespace.
```yaml
apiVersion: "config.istio.io/v1alpha2"
kind: ServiceRole
metadata:
  name: products-viewer
  namespace: default
spec:
  rules:
  - services: ["products.default.svc.cluster.local"]
    methods: ["GET", "HEAD"]
```
In addition, we support **prefix matching** and **suffix matching** for all the fields in a rule. For example, you can define a "tester" role that
has the following permissions in "default" namespace:
* Full access to all services with prefix "test-" (e.g., "test-bookstore", "test-performance", "test-api.default.svc.cluster.local").
* Read ("GET") access to all paths with "/reviews" suffix (e.g., "/books/reviews", "/events/booksale/reviews", "/reviews")
in service "bookstore.default.svc.cluster.local".
```yaml
apiVersion: "config.istio.io/v1alpha2"
kind: ServiceRole
metadata:
  name: tester
  namespace: default
spec:
  rules:
  - services: ["test-*"]
    methods: ["*"]
  - services: ["bookstore.default.svc.cluster.local"]
    paths: ["*/reviews"]
    methods: ["GET"]
```
In `ServiceRole`, the combination of "namespace"+"services"+"paths"+"methods" defines "how a service (services) is allowed to be accessed".
In some situations, you may need to specify additional constraints that a rule applies to. For example, a rule may only apply to a
certain "version" of a service, or only apply to services that are labeled "foo". You can easily specify these constraints using
custom fields.
For example, the following `ServiceRole` definition extends the previous "products-viewer" role by adding a constraint on service "version"
being "v1" or "v2". Note that the "version" property is provided by `"action.properties.version"` in "requestcontext".
```yaml
apiVersion: "config.istio.io/v1alpha2"
kind: ServiceRole
metadata:
  name: products-viewer-version
  namespace: default
spec:
  rules:
  - services: ["products.default.svc.cluster.local"]
    methods: ["GET", "HEAD"]
    constraints:
    - key: "version"
      values: ["v1", "v2"]
```
### `ServiceRoleBinding`
A `ServiceRoleBinding` specification includes two parts:
* **roleRef** refers to a `ServiceRole` resource **in the same namespace**.
* A list of **subjects** that are assigned the role.
A subject can either be a "user", or a "group", or be represented with a set of "properties". Each entry ("user", "group", or an entry
in "properties") must match one of the fields ("user", "groups", or an entry in "properties") in the "subject" part of the "requestcontext"
instance.
Here is an example of `ServiceRoleBinding` resource "test-binding-products", which binds two subjects to `ServiceRole` "products-viewer":
* user "alice@yahoo.com".
* "reviews.abc.svc.cluster.local" service in "abc" namespace.
```yaml
apiVersion: "config.istio.io/v1alpha2"
kind: ServiceRoleBinding
metadata:
  name: test-binding-products
  namespace: default
spec:
  subjects:
  - user: "alice@yahoo.com"
  - properties:
      service: "reviews.abc.svc.cluster.local"
      namespace: "abc"
  roleRef:
    kind: ServiceRole
    name: "products-viewer"
```
If you want to make a service (or services) publicly accessible, you can set the subject to `user: "*"`. This assigns a `ServiceRole`
to all users and services.
```yaml
apiVersion: "config.istio.io/v1alpha2"
kind: ServiceRoleBinding
metadata:
  name: binding-products-allusers
  namespace: default
spec:
  subjects:
  - user: "*"
  roleRef:
    kind: ServiceRole
    name: "products-viewer"
```
## Enabling Istio RBAC
Istio RBAC can be enabled by adding the following Mixer adapter rule. The rule has two parts. The first part defines an RBAC handler.
It has two parameters, `"config_store_url"` and `"cache_duration"`.
* The `"config_store_url"` parameter specifies where RBAC engine fetches RBAC policies. The default value for `"config_store_url"` is
`"k8s://"`, which means Kubernetes API server. Alternatively, if you are testing RBAC policy locally, you may set it to a local directory
such as `"fs:///tmp/testdata/configroot"`.
* The `"cache_duration"` parameter specifies the duration for which the authorization results may be cached on Mixer client (i.e., Istio proxy).
The default value for `"cache_duration"` is 1 minute.
The second part defines a rule, which specifies that the RBAC handler should be invoked with the "requestcontext" instance [defined
earlier in the document](#request-context).
In the following example, Istio RBAC is enabled for the "default" namespace, and the cache duration is set to 30 seconds.
```yaml
apiVersion: "config.istio.io/v1alpha2"
kind: rbac
metadata:
  name: handler
  namespace: istio-system
spec:
  config_store_url: "k8s://"
  cache_duration: "30s"
---
apiVersion: "config.istio.io/v1alpha2"
kind: rule
metadata:
  name: rbaccheck
  namespace: istio-system
spec:
  match: destination.namespace == "default"
  actions:
  # handler and instance names default to the rule's namespace.
  - handler: handler.rbac
    instances:
    - requestcontext.authorization
```
## What's next
Try out the [Istio RBAC with Bookinfo]({{home}}/docs/tasks/security/role-based-access-control.html) sample.

---
title: Fault Injection
description: Introduces the idea of systematic fault injection that can be used to uncover conflicting failure recovery policies across services.
weight: 40
layout: docs
type: markdown
toc: false
---
While Envoy sidecar/proxy provides a host of
[failure recovery mechanisms](./handling-failures.html) to services running
on Istio, it is still
regardless of network level failures, and that more meaningful failures can
be injected at the application layer (e.g., HTTP error codes) to exercise
the resilience of an application.
Operators can configure faults to be injected into requests that match
specific criteria. Operators can further restrict the percentage of
requests that should be subjected to faults. Two types of faults can be
injected: delays and aborts. Delays are timing failures, mimicking
increased network latency, or an overloaded upstream service. Aborts are
crash failures that mimic failures in upstream services. Aborts usually
manifest in the form of HTTP error codes, or TCP connection failures.
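As a hedged illustration of the two fault types, a route rule along the following lines injects a fixed delay and an HTTP abort into a percentage of matching requests. The `ratings` service name and all values are hypothetical, and the exact schema depends on the Istio release in use:

```yaml
apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: ratings-fault
spec:
  destination:
    name: ratings
  route:
  - labels:
      version: v1
  httpFault:
    delay:
      percent: 10        # inject into 10% of requests
      fixedDelay: 7s     # timing failure: mimics network latency or an overloaded upstream
    abort:
      percent: 10
      httpStatus: 400    # crash failure: mimics an upstream error response
```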
Refer to [Istio's traffic management rules](./rules-configuration.html) for more details.

---
title: Handling Failures
description: An overview of failure recovery capabilities in Envoy that can be leveraged by unmodified applications to improve robustness and prevent cascading failures.
weight: 30
layout: docs
type: markdown
---
{% include home.html %}
that can be taken advantage of by the services in an application. Features
include:
1. Timeouts
1. Bounded retries with timeout budgets and variable jitter between retries
1. Limits on number of concurrent connections and requests to upstream services
1. Active (periodic) health checks on each member of the load balancing pool
1. Fine-grained circuit breakers (passive health checks) -- applied per
instance in the load balancing pool
These features can be dynamically configured at runtime through
[Istio's traffic management rules](./rules-configuration.html).
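For example, here is a hedged sketch of setting a default timeout and retry policy for a (hypothetical) `ratings` service via a route rule; the field names follow the v1alpha1 routing schema and may differ across releases:

```yaml
apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: ratings-default
spec:
  destination:
    name: ratings
  route:
  - labels:
      version: v1
  httpReqTimeout:
    simpleTimeout:
      timeout: 10s       # the calling service gets a response within 10s
  httpReqRetries:
    simpleRetry:
      attempts: 3        # bounded retries
      perTryTimeout: 2s  # per-attempt timeout budget
```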
The jitter between retries minimizes the impact of retries on an overloaded
upstream service, while timeout budgets ensure that the calling service
gets a response (success/failure) within a predictable time frame.
A combination of active and passive health checks (4 and 5 above)
minimizes the chances of accessing an unhealthy instance in the load
mesh, minimizing the request failures and impact on latency.
Together, these features enable the service mesh to tolerate failing nodes
and prevent localized failures from cascading instability to other nodes.
## Fine tuning
Istio's traffic management rules allow
operators to set global defaults for failure recovery per
service/version. However, consumers of a service can also override
and
[retry]({{home}}/docs/reference/config/istio.routing.v1alpha1.html#HTTPRetry)
defaults by providing request-level overrides through special HTTP headers.
With the Envoy proxy implementation, the headers are `x-envoy-upstream-rq-timeout-ms` and
`x-envoy-max-retries`, respectively.
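As a concrete illustration, a caller could attach these per-request overrides before issuing an HTTP call. This is a sketch only; the helper name and the default values below are invented for the example:

```python
def with_recovery_overrides(headers=None, timeout_ms=2000, max_retries=3):
    """Return a copy of `headers` with Envoy's per-request override headers added.

    x-envoy-upstream-rq-timeout-ms caps how long the proxy waits for a
    response; x-envoy-max-retries caps the retries for this one request.
    """
    merged = dict(headers or {})
    merged["x-envoy-upstream-rq-timeout-ms"] = str(timeout_ms)
    merged["x-envoy-max-retries"] = str(max_retries)
    return merged

# e.g. pass the result as the header set of an outbound request:
hdrs = with_recovery_overrides({"accept": "application/json"},
                               timeout_ms=500, max_retries=2)
```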
## FAQ
Q: *Do applications still handle failures when running in Istio?*
Yes. Istio improves the reliability and availability of services in the
mesh. However, **applications need to handle the failure (errors)
a load balancing pool have failed, Envoy will return HTTP 503. It is the
responsibility of the application to implement any fallback logic that is
needed to handle the HTTP 503 error code from an upstream service.
Q: *Will Envoy's failure recovery features break applications that already
use fault tolerance libraries (e.g. [Hystrix](https://github.com/Netflix/Hystrix))?*
No. Envoy is completely transparent to the application. A failure response
returned by Envoy would not be distinguishable from a failure response
returned by the upstream service to which the call was made.
Q: *How will failures be handled when using application-level libraries and
Envoy at the same time?*
Given two failure recovery policies for the same destination service (e.g.,
two timeouts -- one set in Envoy and another in application's library), **the

---
title: Traffic Management
description: Describes the various Istio features focused on traffic routing and control.
weight: 20
layout: docs
type: markdown
toc: false
---

---
title: Discovery & Load Balancing
description: Describes how traffic is load balanced across instances of a service in the mesh.
weight: 25
layout: docs
type: markdown
toc: false
---
applications.
**Service Discovery:** Pilot consumes information from the service
registry and provides a platform-agnostic service discovery
interface. Envoy instances in the mesh perform service discovery and
dynamically update their load balancing pools accordingly.
{% include image.html width="80%" ratio="74.79%"
link="./img/pilot/LoadBalancing.svg"
caption="Discovery and Load Balancing"
%}
As illustrated in the figure above, services in the mesh access each other
the load balancing pool. While Envoy supports several
Istio currently allows three load balancing modes:
round robin, random, and weighted least request.
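For intuition, round robin (the simplest of the three modes) amounts to cycling through the endpoints in the pool. This is an illustrative sketch only; the real selection happens inside Envoy:

```python
import itertools

class RoundRobinPool:
    """Minimal round-robin endpoint selector over a fixed pool."""

    def __init__(self, endpoints):
        # cycle() yields the endpoints in order, wrapping around forever
        self._cycle = itertools.cycle(endpoints)

    def pick(self):
        """Return the next endpoint in rotation."""
        return next(self._cycle)

# hypothetical pool of two instances of a service
pool = RoundRobinPool(["10.0.0.1:9080", "10.0.0.2:9080"])
```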
In addition to load balancing, Envoy periodically checks the health of each
instance in the pool. Envoy follows a circuit breaker style pattern to
classify instances as unhealthy or healthy based on their failure rates for

---
title: Overview
description: Provides a conceptual overview of traffic management in Istio and the features it enables.
weight: 1
layout: docs
type: markdown
---
{% include home.html %}
This page provides an overview of how traffic management works
in Istio, including the benefits of its traffic management
principles. It assumes that you've already read [What Is Istio?]({{home}}/docs/concepts/what-is-istio/overview.html)
of traffic for a particular service to go to a canary version irrespective
of the size of the canary deployment, or send traffic to a particular version
depending on the content of the request.
{% include image.html width="85%" ratio="69.52%"
link="./img/pilot/TrafficManagementOverview.svg"
caption="Traffic Management with Istio"
%}
Decoupling traffic flow from infrastructure scaling like this allows Istio

---
title: Pilot
description: Introduces Pilot, the component responsible for managing a distributed deployment of Envoy proxies in the service mesh.
weight: 10
layout: docs
type: markdown
toc: false
redirect_from: /docs/concepts/traffic-management/manager.html
---
{% include home.html %}
Pilot is responsible for the lifecycle of Envoy instances deployed
across the Istio service mesh.
{% include image.html width="60%" ratio="72.17%"
link="./img/pilot/PilotAdapters.svg"
alt="Pilot's overall architecture."
caption="Pilot Architecture"
%}
As illustrated in the figure above, Pilot maintains a canonical
and [routing tables](https://www.envoyproxy.io/docs/envoy/latest/configuration/h
These APIs decouple Envoy from platform-specific nuances, simplifying the
design and increasing portability across platforms.
Operators can specify high-level traffic management rules through
[Pilot's Rules API]({{home}}/docs/reference/config/istio.routing.v1alpha1.html). These rules are translated into low-level
configurations and distributed to Envoy instances via the discovery API.

---
title: Request Routing
description: Describes how requests are routed between services in an Istio service mesh.
weight: 20
layout: docs
type: markdown
---
{% include home.html %}
This page describes how requests are routed between services in an Istio service mesh.
etc.). Platform-specific adapters are responsible for populating the
internal model representation with various fields from the metadata found
in the platform.
Istio introduces the concept of a service version, which is a finer-grained
way to subdivide service instances by versions (`v1`, `v2`) or environment
(`staging`, `prod`). These variants are not necessarily different API
additional control over traffic between services.
## Communication between services
{% include image.html width="60%" ratio="100.42%"
link="./img/pilot/ServiceModel_Versions.svg"
alt="Showing how service versions are handled."
caption="Service Versions"
%}
As illustrated in the figure above, clients of a service have no knowledge
via the sidecar Envoy, operators can add failure recovery features such as
timeouts, retries, circuit breakers, etc., and obtain detailed metrics on
the connections to these services.
{% include image.html width="60%" ratio="28.88%"
link="./img/pilot/ServiceModel_RequestFlow.svg"
alt="Ingress and Egress through Envoy."
caption="Request Flow"
%}

---
title: Rules Configuration
description: Provides a high-level overview of the domain-specific language used by Istio to configure traffic management rules in the service mesh.
weight: 50
layout: docs
type: markdown
---
{% include home.html %}
The destination is the name of the service to which the traffic is being
routed. The route *labels* identify the specific service instances that will
receive traffic. For example, in a Kubernetes deployment of Istio, the route
*label* "version: v1" indicates that only pods containing the label "version: v1"
will receive traffic.
domain name (FQDN). It is used by Istio Pilot for matching rules to services.
Normally, the FQDN of the service is composed from three components: *name*,
*namespace*, and *domain*:
```plain
FQDN = name + "." + namespace + "." + domain
```
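In code form the composition is just a dotted join. A minimal sketch, assuming the stock Kubernetes `default` namespace and `svc.cluster.local` domain as defaults:

```python
def service_fqdn(name, namespace="default", domain="svc.cluster.local"):
    """Compose a service FQDN as name + "." + namespace + "." + domain."""
    return ".".join((name, namespace, domain))
```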
spec:
  match:
    request:
      headers:
        Foo:
          exact: bar
  route:
  - labels:
      version: v2
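The `exact` matcher above can be read as a simple predicate over the request headers. An illustrative sketch only, not Pilot's implementation:

```python
def headers_match(request_headers, header_rules):
    """True when every configured header has an exact-value match.

    `header_rules` mirrors the rule's shape, e.g. {"Foo": {"exact": "bar"}}.
    """
    return all(
        request_headers.get(name) == rule.get("exact")
        for name, rule in header_rules.items()
    )

rule = {"Foo": {"exact": "bar"}}
```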

---
title: Design Goals
description: Describes the core principles that Istio's design adheres to.
weight: 20
layout: docs
type: markdown
---
This page outlines the core principles that guide Istio's design.
Istio's architecture is informed by a few key design goals that are essential to making the system capable of dealing with services at scale and with high
performance.
- **Maximize Transparency**.
To adopt Istio, an operator or developer should be required to do the minimum amount of work possible to get real value from the system. To this end, Istio
can automatically inject itself into all the network paths between services. Istio uses sidecar proxies to capture traffic, and where possible, automatically
program the networking layer to route traffic through those proxies without any changes to the deployed application code. In Kubernetes, the proxies are
injected into pods and traffic is captured by programming iptables rules. Once the sidecar proxies are injected and traffic routing is programmed, Istio is
able to mediate all traffic. This principle also applies to performance. When applying Istio to a deployment, operators should see a minimal increase in
resource costs for the
functionality being provided. Components and APIs must all be designed with performance and scale in mind.
- **Incrementality**.
As operators and developers become more dependent on the functionality that Istio provides, the system must grow with their needs. While we expect to
continue adding new features ourselves, we expect the greatest need will be the ability to extend the policy system, to integrate with other sources of policy and control and to propagate signals about mesh behavior to other systems for analysis. The policy runtime supports a standard extension mechanism for plugging in other services. In addition, it allows for the extension of its vocabulary to allow policies to be enforced based on new signals that the mesh produces.
- **Portability**.
The ecosystem in which Istio will be used varies along many dimensions. Istio must run on any cloud or on-prem environment with minimal effort. The task of
porting Istio-based services to new environments should be trivial, and it should be possible to operate a single service deployed into multiple
environments (on multiple clouds for redundancy for example) using Istio.
- **Policy Uniformity**.
The application of policy to API calls between services provides a great deal of control over mesh behavior, but it can be equally important to apply
policies to resources which are not necessarily expressed at the API level. For example, applying quota to the amount of CPU consumed by an ML training task
is more useful than applying quota to the call which initiated the work. To this end, the policy system is maintained as a distinct service with its own API
rather than being baked into the proxy/sidecar, allowing services to directly integrate with it as needed.

---
title: What is Istio?
description: A broad overview of the Istio system.
weight: 10
layout: docs
type: markdown
toc: false
---

---
title: Overview
description: Provides a conceptual introduction to Istio, including the problems it solves and its high-level architecture.
weight: 15
layout: docs
type: markdown
---
{% include home.html %}
This document introduces Istio: an open platform to connect, manage, and secure microservices. Istio provides an easy way to create a network of deployed services with load balancing, service-to-service authentication, monitoring, and more, without requiring any changes in service code. You add Istio support to services by deploying a special sidecar proxy throughout your environment that intercepts all network communication between microservices, configured and managed using Istio's control plane functionality.
Istio currently supports service deployment on Kubernetes, as well as services registered with Consul or Eureka and services running on individual VMs.
For detailed conceptual information about Istio components see our other [Concepts]({{home}}/docs/concepts/) guides.
Istio addresses many of the challenges faced by developers and operators as monolithic applications transition towards a distributed microservice architecture. The term **service mesh** is often used to describe the network of
microservices that make up such applications and the interactions between them. As a service mesh grows in size and complexity, it can become harder to understand
and manage. Its requirements can include discovery, load balancing, failure recovery, metrics, and monitoring, and often more complex operational requirements
such as A/B testing, canary releases, rate limiting, access control, and end-to-end authentication.
Istio provides a complete solution to satisfy the diverse requirements of microservice applications by providing
network of services:
- **Traffic Management**. Control the flow of traffic and API calls between services, make calls more reliable, and make the network more robust in the face
of adverse conditions.
- **Observability**. Gain understanding of the dependencies between services and the nature and flow of traffic between them, providing the ability to quickly identify issues.
- **Policy Enforcement**. Apply organizational policy to the interaction between services, ensure access policies are enforced and resources are fairly
distributed among consumers. Policy changes are made by configuring the mesh, not by changing application code.
- **Service Identity and Security**. Provide services in the mesh with a verifiable identity and provide the ability to protect service traffic
as it flows over networks of varying degrees of trustability.
In addition to these behaviors, Istio is designed for extensibility to meet diverse deployment needs:
- **Platform Support**. Istio is designed to run in a variety of environments including ones that span Cloud, on-premise, Kubernetes, Mesos etc. We're
initially focused on Kubernetes but are working to support other environments soon.
- **Integration and Customization**. The policy enforcement component can be extended and customized to integrate with existing solutions for
ACLs, logging, monitoring, quotas, auditing and more.
These capabilities greatly decrease the coupling between application code, the underlying platform, and policy. This decreased coupling not only makes
services easier to implement, but also makes it simpler for operators to move application deployments between environments or to new policy schemes.
Applications become inherently more portable as a result.
## Architecture
An Istio service mesh is logically split into a **data plane** and a **control plane**:
- The **data plane** is composed of a set of intelligent
proxies (Envoy) deployed as sidecars that mediate and control all network communication between microservices.
- The **control plane** is responsible for managing and
configuring proxies to route traffic, as well as enforcing policies at runtime.
The following diagram shows the different components that make up each plane:
{% include image.html width="80%" ratio="56.25%"
link="./img/overview/arch.svg"
alt="The overall architecture of an Istio-based application."
caption="Istio Architecture"
%}
### Envoy
Istio uses an extended version of the [Envoy](https://envoyproxy.github.io/envoy/) proxy, a high-performance proxy developed in C++, to mediate all inbound and outbound traffic for all services in the service mesh.
Istio leverages Envoy's many built-in features such as dynamic service discovery, load balancing, TLS termination, HTTP/2 & gRPC proxying, circuit breakers,
health checks, staged rollouts with %-based traffic split, fault injection, and rich metrics.
Envoy is deployed as a **sidecar** to the relevant service in the same Kubernetes pod.
### Mixer
[Mixer]({{home}}/docs/concepts/policy-and-control/mixer.html) is a platform-independent component responsible for enforcing access control and usage policies across the service mesh and collecting telemetry data from the Envoy proxy and other
services. The proxy extracts request level [attributes]({{home}}/docs/concepts/policy-and-control/attributes.html), which are sent to Mixer for evaluation. More information on this attribute extraction and policy
evaluation can be found in [Mixer Configuration]({{home}}/docs/concepts/policy-and-control/mixer-config.html). Mixer includes a flexible plugin model enabling it to interface with a variety of host environments and infrastructure backends, abstracting the Envoy proxy and Istio-managed services from these details.
### Pilot
[Pilot]({{home}}/docs/concepts/traffic-management/pilot.html) provides
service discovery for the Envoy sidecars, traffic management capabilities
for intelligent routing (e.g., A/B tests, canary deployments, etc.),
and resiliency (timeouts, retries, circuit breakers, etc.). It converts
high level routing rules that control traffic behavior into Envoy-specific
configurations, and propagates them to the sidecars at runtime. Pilot
abstracts platform-specific service discovery mechanisms and synthesizes
them into a standard format consumable by any sidecar that conforms to the
[Envoy data plane APIs](https://github.com/envoyproxy/data-plane-api).
This loose coupling allows Istio to run on multiple environments
(e.g., Kubernetes, Consul/Nomad) while maintaining the same operator
interface for traffic management.
### Security
[Security]({{home}}/docs/concepts/security/) provides strong service-to-service and end-user authentication, with built-in identity and
credential management. It can be used to upgrade unencrypted traffic in the service mesh, and provides operators the ability to enforce
policy based on service identity rather than network controls. Starting from release 0.5, Istio supports
[role-based access control]({{home}}/docs/concepts/security/rbac.html) to control who can access your services. Future
releases of Istio will add a service auditing feature.
## What's next
- Learn about Istio's [design goals]({{home}}/docs/concepts/what-is-istio/goals.html).
- Explore our [Guides]({{home}}/docs/guides/).
- Read about Istio components in detail in our other [Concepts]({{home}}/docs/concepts/) guides.
- Learn how to deploy Istio with your own services using our [Tasks]({{home}}/docs/tasks/) guides.

---
title: Bookinfo Sample Application
description: This guide deploys a sample application composed of four separate microservices which will be used to demonstrate various features of the Istio service mesh.
weight: 10
layout: docs
type: markdown
---
{% include home.html %}
There are 3 versions of the reviews microservice:
The end-to-end architecture of the application is shown below.
{% include image.html width="80%" ratio="68.52%"
link="./img/bookinfo/noistio.svg"
caption="Bookinfo Application without Istio"
%}
This application is polyglot, i.e., the microservices are written in different languages.
It's worth noting that these services have no dependencies on Istio, but make an interesting
service mesh example, particularly because of the multitude of services, languages and versions
for the reviews service.
## Before you begin
Istio-enabled environment, with Envoy sidecars injected alongside each service.
The needed commands and configuration vary depending on the runtime environment
although in all cases the resulting deployment will look like this:
{% include image.html width="80%" ratio="59.08%"
link="./img/bookinfo/withistio.svg"
caption="Bookinfo Application"
%}
All of the microservices will be packaged with an Envoy sidecar that intercepts incoming
To start the application, follow the instructions below corresponding to your Istio runtime environment.
### Running on Kubernetes
> If you use GKE, please ensure your cluster has at least 4 standard GKE nodes. If you use Minikube, please ensure you have at least 4GB RAM.
1. Change directory to the root of the Istio installation directory.

1. Bring up the application containers:

   * If you are using [manual sidecar injection]({{home}}/docs/setup/kubernetes/sidecar-injection.html#manual-sidecar-injection),
     use the following command

     ```command
     $ kubectl apply -f <(istioctl kube-inject --debug -f samples/bookinfo/kube/bookinfo.yaml)
     ```

     The `istioctl kube-inject` command is used to manually modify the `bookinfo.yaml`
     file before creating the deployments as documented [here]({{home}}/docs/reference/commands/istioctl.html#istioctl kube-inject).

   * If you are using a cluster with
     [automatic sidecar injection]({{home}}/docs/setup/kubernetes/sidecar-injection.html#automatic-sidecar-injection)
     enabled, simply deploy the services using `kubectl`

     ```command
     $ kubectl apply -f samples/bookinfo/kube/bookinfo.yaml
     ```

   Either of the above commands launches all four microservices as illustrated in the above diagram.
   All 3 versions of the reviews service, v1, v2, and v3, are started.
> In a realistic deployment, new versions of a microservice are deployed
over time instead of deploying all versions simultaneously.
1. Define the ingress gateway for the application:
```command
$ istioctl create -f samples/bookinfo/routing/bookinfo-gateway.yaml
```
1. Confirm all services and pods are correctly defined and running:
```command
$ kubectl get services
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
details 10.0.0.31 <none> 9080/TCP 6m
kubernetes 10.0.0.1 <none> 443/TCP 7d
```

and
```command
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
details-v1-1520924117-48z17 2/2 Running 0 6m
productpage-v1-560495357-jk1lz 2/2 Running 0 6m
```
#### Determining the ingress IP and Port
Execute the following command to determine if your Kubernetes cluster is running in an environment that supports external load balancers
```command
$ kubectl get svc istio-ingressgateway -n istio-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
istio-ingressgateway LoadBalancer 172.21.109.129 130.211.10.121 80:31380/TCP,443:31390/TCP,31400:31400/TCP 17h
```
If the `EXTERNAL-IP` value is set, your environment has an external load balancer that you can use for the ingress gateway
```command
$ export GATEWAY_URL=130.211.10.121:80
```
If the `EXTERNAL-IP` value is `<none>` (or perpetually `<pending>`), your environment does not support external load balancers.
In this case, you can access the gateway using the service `nodePort`.
1. _GKE:_
```command
$ export GATEWAY_URL=<workerNodeAddress>:$(kubectl get svc istio-ingressgateway -n istio-system -o jsonpath='{.spec.ports[0].nodePort}')
$ gcloud compute firewall-rules create allow-book --allow tcp:$(kubectl get svc istio-ingressgateway -n istio-system -o jsonpath='{.spec.ports[0].nodePort}')
```
whose output should be similar to
1. _IBM Cloud Container Service Free Tier:_
```bash
NAME HOSTS ADDRESS PORTS AGE
gateway * 130.211.10.121 80 1d
```command
$ bx cs workers <cluster-name or id>
$ export GATEWAY_URL=<public IP of the worker node>:$(kubectl get svc istio-ingressgateway -n istio-system -o jsonpath='{.spec.ports[0].nodePort}')
```
The address of the ingress service would then be
```bash
export GATEWAY_URL=130.211.10.121:80
```
1. _Other environments (e.g., minikube):_
1. _GKE:_ Sometimes when the service is unable to obtain an external IP, `kubectl get ingress -o wide` may display a list of worker node addresses. In this case, you can use any of the addresses, along with the NodePort, to access the ingress. If the cluster has a firewall, you will also need to create a firewall rule to allow TCP traffic to the NodePort.
```bash
export GATEWAY_URL=<workerNodeAddress>:$(kubectl get svc istio-ingress -n istio-system -o jsonpath='{.spec.ports[0].nodePort}')
gcloud compute firewall-rules create allow-book --allow tcp:$(kubectl get svc istio-ingress -n istio-system -o jsonpath='{.spec.ports[0].nodePort}')
```
1. _IBM Cloud Container Service Free Tier:_ External load balancer is not available for kubernetes clusters in the free tier. You can use the public IP of the worker node, along with the NodePort, to access the ingress. The public IP of the worker node can be obtained from the output of the following command:
```bash
bx cs workers <cluster-name or id>
export GATEWAY_URL=<public IP of the worker node>:$(kubectl get svc istio-ingress -n istio-system -o jsonpath='{.spec.ports[0].nodePort}')
```
1. _IBM Cloud Private:_ External load balancers are not supported in IBM Cloud Private. You can use the host IP of the ingress service, along with the NodePort, to access the ingress.
```bash
export GATEWAY_URL=$(kubectl get po -l istio=ingress -n istio-system -o 'jsonpath={.items[0].status.hostIP}'):$(kubectl get svc istio-ingress -n istio-system -o 'jsonpath={.spec.ports[0].nodePort}')
```
1. _Minikube:_ External load balancers are not supported in Minikube. You can use the host IP of the ingress service, along with the NodePort, to access the ingress.
```bash
export GATEWAY_URL=$(kubectl get po -l istio=ingress -n istio-system -o 'jsonpath={.items[0].status.hostIP}'):$(kubectl get svc istio-ingress -n istio-system -o 'jsonpath={.spec.ports[0].nodePort}')
```command
$ export GATEWAY_URL=$(kubectl get po -l istio=ingressgateway -n istio-system -o 'jsonpath={.items[0].status.hostIP}'):$(kubectl get svc istio-ingressgateway -n istio-system -o 'jsonpath={.spec.ports[0].nodePort}')
```
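However it was obtained, `GATEWAY_URL` is simply a `host:port` pair. A minimal sketch of the composition, using made-up placeholder values rather than real cluster addresses:

```shell
# Placeholder values for illustration only; in a real cluster they come
# from the `kubectl get` commands shown above.
NODE_ADDRESS=10.0.0.5
NODE_PORT=31380
GATEWAY_URL="${NODE_ADDRESS}:${NODE_PORT}"
echo "$GATEWAY_URL"  # → 10.0.0.5:31380
```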
### Running on Docker with Consul or Eureka
1. Bring up the application containers.
To test with Consul, run the following commands:
```command
$ docker-compose -f samples/bookinfo/consul/bookinfo.yaml up -d
$ docker-compose -f samples/bookinfo/consul/bookinfo.sidecars.yaml up -d
```
To test with Eureka, run the following commands:
```command
$ docker-compose -f samples/bookinfo/eureka/bookinfo.yaml up -d
$ docker-compose -f samples/bookinfo/eureka/bookinfo.sidecars.yaml up -d
```
1. Confirm that all docker containers are running:
```command
$ docker ps -a
```
> If the Istio Pilot container terminates, re-run the command from the previous step.
1. Set the GATEWAY_URL:
```command
$ export GATEWAY_URL=localhost:9081
```
## What's next
To confirm that the Bookinfo application is running, run the following `curl` command:
```command
$ curl -o /dev/null -s -w "%{http_code}\n" http://${GATEWAY_URL}/productpage
200
```
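Right after the pods start, the application may briefly return non-200 codes. If you want to wait for readiness in a script, a small retry loop along these lines can be used; `wait_for_200` is a hypothetical helper, not part of Istio or the sample:

```shell
# Hypothetical retry helper: runs the given command up to $1 times,
# succeeding as soon as the command prints "200".
wait_for_200() {
  attempts=$1; shift
  i=0
  while [ "$i" -lt "$attempts" ]; do
    code=$("$@")
    if [ "$code" = "200" ]; then
      echo "ready"
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  echo "gave up after ${attempts} attempts"
  return 1
}

# Usage (assumes GATEWAY_URL is set as above):
# wait_for_200 10 curl -o /dev/null -s -w "%{http_code}" "http://${GATEWAY_URL}/productpage"
```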
stars, black stars, no stars), since we haven't yet used Istio to control the
version routing.
You can now use this sample to experiment with Istio's features for
traffic routing, fault injection, rate limiting, etc.
To proceed, refer to one or more of the [Istio Guides]({{home}}/docs/guides),
depending on your interest. [Intelligent Routing]({{home}}/docs/guides/intelligent-routing.html)
is a good place to start for beginners.
1. Delete the routing rules and terminate the application pods
```command
$ samples/bookinfo/kube/cleanup.sh
```
1. Confirm shutdown
```command
$ istioctl get virtualservices #-- there should be no more routing rules
$ kubectl get pods #-- the Bookinfo pods should be deleted
```
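If you script the cleanup check, the pattern is simply "this command should print nothing". A generic sketch under that assumption; the `expect_empty` helper below is hypothetical:

```shell
# Hypothetical helper: run a command and report whether it printed anything.
expect_empty() {
  out=$("$@")
  if [ -z "$out" ]; then
    echo "clean"
  else
    echo "leftover: $out"
  fi
}

# e.g. expect_empty istioctl get virtualservices
expect_empty true          # prints: clean
expect_empty echo leftover # prints: leftover: leftover
```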
### Uninstall from Docker environment
1. Delete the routing rules and application containers
In a Consul setup, run the following command:
```command
$ samples/bookinfo/consul/cleanup.sh
```
In a Eureka setup, run the following command:
```command
$ samples/bookinfo/eureka/cleanup.sh
```
1. Confirm cleanup
```command
$ istioctl get virtualservices #-- there should be no more routing rules
$ docker ps -a #-- the Bookinfo containers should be deleted
```
_docs/guides/endpoints.md
---
title: Install Istio for Google Cloud Endpoints Services
description: Explains how to manually integrate Google Cloud Endpoints services with Istio.
weight: 42
---
{% include home.html %}
This document shows how to manually integrate Istio with existing
Google Cloud Endpoints services.
## Before you begin
If you don't have an Endpoints service and want to try it out, you can follow
the [instructions](https://cloud.google.com/endpoints/docs/openapi/get-started-kubernetes-engine)
to set up an Endpoints service on GKE.
After setup, you should have an API key stored in the `ENDPOINTS_KEY` environment variable and the service's external IP address stored in `EXTERNAL_IP`.
You may test the service using the following command:
```command
$ curl --request POST --header "content-type:application/json" --data '{"message":"hello world"}' "http://${EXTERNAL_IP}:80/echo?key=${ENDPOINTS_KEY}"
```
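The request URL is assembled from the two environment variables. A sketch with placeholder values (the IP and key below are invented for illustration):

```shell
EXTERNAL_IP=203.0.113.10       # placeholder: your service's external IP
ENDPOINTS_KEY=example-api-key  # placeholder: your Endpoints API key
URL="http://${EXTERNAL_IP}:80/echo?key=${ENDPOINTS_KEY}"
echo "$URL"  # → http://203.0.113.10:80/echo?key=example-api-key
```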
Install Istio by following the [instructions]({{home}}/docs/setup/kubernetes/quick-start.html#google-kubernetes-engine).
## HTTP Endpoints service
1. Inject the service into the mesh using `--includeIPRanges` by following the
[instructions]({{home}}/docs/tasks/traffic-management/egress.html#calling-external-services-directly)
so that egress traffic is allowed to call external services directly.
Otherwise, ESP won't be able to access Google Cloud Service Control.
1. After injection, issue the same test command as above to ensure that calling ESP continues to work.
1. If you want to access the service through Ingress, create the following Ingress definition:
```bash
cat <<EOF | istioctl create -f -
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: simple-ingress
annotations:
kubernetes.io/ingress.class: istio
spec:
rules:
- http:
paths:
- path: /echo
backend:
serviceName: esp-echo
servicePort: 80
EOF
```
1. Get the Ingress IP through [instructions]({{home}}/docs/tasks/traffic-management/ingress.html#verifying-http-ingress).
You can verify accessing the Endpoints service through Ingress:
```command
$ curl --request POST --header "content-type:application/json" --data '{"message":"hello world"}' "http://${INGRESS_HOST}:80/echo?key=${ENDPOINTS_KEY}"
```
## HTTPS Endpoints service using secured Ingress
The recommended way to securely access a mesh Endpoints service is through an ingress configured with mutual TLS.
1. Expose the HTTP port in your mesh service.
Add `"--http_port=8081"` to the ESP deployment arguments and expose the HTTP port:
```yaml
- port: 80
targetPort: 8081
protocol: TCP
name: http
```
Update the mesh service deployment.
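For context, here is roughly what the full Service could look like with the HTTP port exposed. This is a sketch only; the `esp-echo` name and label selector are assumptions based on the GKE Endpoints getting-started sample:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: esp-echo      # assumption: service name from the Endpoints sample
spec:
  selector:
    app: esp-echo     # assumption: matching pod label
  ports:
  - port: 80
    targetPort: 8081  # the ESP --http_port value
    protocol: TCP
    name: http        # Istio requires named ports for HTTP traffic
```

Naming the port `http` matters: the Istio sidecar uses the port name to decide whether to treat the traffic as HTTP rather than opaque TCP.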
1. Turn on mTLS in Istio by using the following command:
```command
$ kubectl edit cm istio -n istio-system
```
And uncomment the line:
```yaml
authPolicy: MUTUAL_TLS
```
1. After this, you will find that access through `EXTERNAL_IP` no longer works because the Istio proxy only accepts secure mesh connections.
Accessing through Ingress still works because Ingress performs HTTP termination.
1. To secure the access at Ingress, follow the [instructions]({{home}}/docs/tasks/traffic-management/ingress.html#configuring-secure-ingress-https).
1. Verify access to the Endpoints service through secure Ingress:
```command
$ curl --request POST --header "content-type:application/json" --data '{"message":"hello world"}' "https://${INGRESS_HOST}/echo?key=${ENDPOINTS_KEY}" -k
```
## HTTPS Endpoints service using `LoadBalancer EXTERNAL_IP`
This solution uses the Istio proxy for TCP bypass. The traffic is secured by ESP. This approach is not recommended.
1. Modify the name of the HTTP port to `tcp`:
```yaml
- port: 80
targetPort: 8081
protocol: TCP
name: tcp
```
Update the mesh service deployment. See the port naming rules
[here]({{home}}/docs/setup/kubernetes/sidecar-injection.html#pod-spec-requirements).
1. Verify access to the Endpoints service through the load balancer `EXTERNAL_IP`:
```command
$ curl --request POST --header "content-type:application/json" --data '{"message":"hello world"}' "https://${EXTERNAL_IP}/echo?key=${ENDPOINTS_KEY}" -k
```
## What's next
Learn more about [GCP Endpoints](https://cloud.google.com/endpoints/docs/).
---
title: Guides
description: Guides include a variety of fully working example uses for Istio that you can experiment with.
weight: 30
toc: false
---
---
title: Integrating Virtual Machines
description: This sample deploys the Bookinfo services across Kubernetes and a set of virtual machines, and illustrates how to use the Istio service mesh to control this infrastructure as a single mesh.
weight: 60
---
{% include home.html %}
This sample deploys the Bookinfo services across Kubernetes and a set of
Virtual Machines, and illustrates how to use Istio service mesh to control
this infrastructure as a single mesh.
> This guide is still under development and only tested on Google Cloud Platform.
> On IBM Cloud or other platforms where the overlay network of Pods is isolated from the VM network,
> VMs cannot initiate any direct communication to Kubernetes Pods even when using Istio.
## Overview
{% include image.html width="80%" ratio="56.78%"
link="./img/mesh-expansion.svg"
caption="Bookinfo Application with Istio Mesh Expansion"
%}
<!-- source of the drawing https://docs.google.com/drawings/d/1gQp1OTusiccd-JUOHktQ9RFZaqREoQbwl2Vb-P3XlRQ/edit -->
## Before you begin
* Setup Istio by following the instructions in the
* Deploy the [Bookinfo]({{home}}/docs/guides/bookinfo.html) sample application (in the `bookinfo` namespace).
* Create a VM named 'vm-1' in the same project as Istio cluster, and [Join the Mesh]({{home}}/docs/setup/kubernetes/mesh-expansion.html).
## Running MySQL on the VM

We will first install MySQL on the VM, and configure it as a backend for the ratings service.
On the VM:
```command
$ sudo apt-get update && sudo apt-get install -y mariadb-server
$ sudo mysql
# Grant access to root
GRANT ALL PRIVILEGES ON *.* TO 'root'@'localhost' IDENTIFIED BY 'password' WITH GRANT OPTION;
quit;
```
```command
$ sudo systemctl restart mysql
```
You can find details of configuring MySQL in the [MariaDB documentation](https://mariadb.com/kb/en/library/download/).
On the VM, add the ratings database to MySQL:
```command
$ curl -q https://raw.githubusercontent.com/istio/istio/master/samples/bookinfo/src/mysql/mysqldb-init.sql | mysql -u root -ppassword
```
To make it easy to visually inspect the difference in the output of the Bookinfo application, you can change the ratings that are generated. First, inspect the current ratings:
```command
$ mysql -u root -ppassword test -e "select * from ratings;"
+----------+--------+
| ReviewID | Rating |
+----------+--------+
| 1 | 5 |
| 2 | 4 |
+----------+--------+
```
and to change the ratings:
```command
$ mysql -u root -ppassword test -e "update ratings set rating=1 where reviewid=1;select * from ratings;"
+----------+--------+
| ReviewID | Rating |
+----------+--------+
| 1 | 1 |
| 2 | 4 |
+----------+--------+
```
## Find out the IP address of the VM that will be used to add it to the mesh
On the VM:
```command
$ hostname -I
```
## Registering the mysql service with the mesh
On a host with access to `istioctl` commands, register the VM and the mysql db service:
```command
$ istioctl register -n vm mysqldb <ip-address-of-vm> 3306
I1108 20:17:54.256699 40419 register.go:43] Registering for service 'mysqldb' ip '10.150.0.5', ports list [{3306 mysql}]
I1108 20:17:54.256815 40419 register.go:48] 0 labels ([]) and 1 annotations ([alpha.istio.io/kubernetes-serviceaccounts=default])
W1108 20:17:54.573068 40419 register.go:123] Got 'services "mysqldb" not found' looking up svc 'mysqldb' in namespace 'vm', attempting to create it
Note that the 'mysqldb' virtual machine does not need and should not have special
## Using the mysql service
The ratings service in bookinfo will use the DB on the machine. To verify that it works, create version 2 of the ratings service that uses the mysql db on the VM. Then specify route rules that force the reviews service to use the ratings version 2.
Create the version of the ratings service that will use the mysql back end:
```command
$ istioctl kube-inject -n bookinfo -f samples/bookinfo/kube/bookinfo-ratings-v2-mysql-vm.yaml | kubectl apply -n bookinfo -f -
```
Create route rules that will force bookinfo to use the ratings back end:
```command
$ istioctl create -n bookinfo -f samples/bookinfo/kube/route-rule-ratings-mysql-vm.yaml
```
You can verify the output of the Bookinfo application is showing 1 star from Reviewer1 and 4 stars from Reviewer2 or change the ratings on your VM and see the
results.
You can also find troubleshooting and other information in the [RawVM MySQL](https://github.com/istio/istio/blob/master/samples/rawvm/README.md) document.
---
title: Intelligent Routing
description: This guide demonstrates how to use various traffic management capabilities of an Istio service mesh.
weight: 20
---
{% include home.html %}
---
title: Policy Enforcement
description: This sample uses the Bookinfo application to demonstrate policy enforcement using Istio Mixer.
weight: 40
draft: true
---
{% include home.html %}
features are important, and so on. This is not a task, but a feature of
Istio.
## Before you begin
* Describe installation options.
* Install Istio control plane in a Kubernetes cluster by following the quick start instructions in the
---
title: Security
description: This sample demonstrates how to obtain uniform metrics, logs, traces across different services using Istio Mixer and Istio sidecar.
weight: 30
draft: true
---
{% include home.html %}
Placeholder.
## Before you begin
* Describe installation options.
* Install Istio control plane in a Kubernetes cluster by following the quick start instructions in the
---
title: In-Depth Telemetry
description: This sample demonstrates how to obtain uniform metrics, logs, traces across different services using Istio Mixer and Istio sidecar.
weight: 30
---
{% include home.html %}
applications.
1. [Using the Istio Dashboard]({{home}}/docs/tasks/telemetry/using-istio-dashboard.html)
This task installs the Grafana add-on with a preconfigured dashboard
for monitoring mesh traffic.
## Cleanup
---
title: Upgrading Istio
overview: This guide demonstrates how to upgrade the Istio control plane and data plane independently.
order: 70
draft: true
layout: docs
type: markdown
---
{% include home.html %}
This guide demonstrates how to upgrade the Istio control plane and data plane independently.
## Overview
Placeholder.
## Application Setup
1. Steps
## Tasks
1. some tasks that will complete the goal of this sample.
---
title: Integrating with External Services
description: This sample integrates third party services with Bookinfo and demonstrates how to use Istio service mesh to provide metrics and routing functions for these services.
weight: 50
draft: true
---
{% include home.html %}
---
title: Welcome
description: Istio documentation home page.
weight: 1
toc: false
---
{% include home.html %}
is where you can learn about what Istio does and how it does it.
the Istio control plane in various environments, as well as instructions
for installing the sidecar in the application deployment. Quick start
instructions are available for
[Kubernetes]({{home}}/docs/setup/kubernetes/quick-start.html) and
[Docker Compose w/ Consul]({{home}}/docs/setup/consul/quick-start.html).
- [Tasks]({{home}}/docs/tasks/). Tasks show you how to do a single directed activity with Istio.
intended to highlight a particular set of Istio's features.
- [Reference]({{home}}/docs/reference/). Detailed exhaustive lists of
command-line options, configuration options, API definitions, and procedures.
In addition, you might find these links interesting:
- The latest Istio monthly release is {{site.data.istio.version}}: [download {{site.data.istio.version}}](https://github.com/istio/istio/releases),
[release notes]({{home}}/about/notes/{{site.data.istio.version}}.html).
- Nostalgic for days gone by? We keep an [archive of the earlier releases' documentation](https://archive.istio.io/).
- We're always looking for help improving our documentation, so please don't hesitate to
[file an issue](https://github.com/istio/istio.github.io/issues/new) if you see some problem.
Or better yet, submit your own [contributions]({{home}}/about/contribute/editing.html) to help
make our docs better.
---
title: API
description: Detailed information on API parameters.
weight: 10
toc: false
---
---
title: Mixer
description: API definitions to interact with Mixer
location: https://istio.io/docs/reference/api/istio.mixer.v1.html
layout: protoc-gen-docs
redirect_from: /docs/reference/api/mixer/mixer.html
<a href="{{site.baseurl}}/docs/reference/config/mixer/attribute-vocabulary.html">here</a>.</p>
<p>Attributes are strongly typed. The supported attribute types are defined by
<a href="https://github.com/istio/api/blob/master/policy/v1beta1/value_type.proto">ValueType</a>.
Each type of value is encoded into one of the so-called transport types present
in this message.</p>
<tbody>
<tr id="Attributes.attributes">
<td><code>attributes</code></td>
<td><code>map&lt;string,<a href="#Attributes.AttributeValue">Attributes.AttributeValue</a>&gt;</code></td>
<td><code>map&lt;string,&nbsp;<a href="#Attributes.AttributeValue">Attributes.AttributeValue</a>&gt;</code></td>
<td>
<p>A map of attribute name to its value.</p>
<tbody>
<tr id="Attributes.StringMap.entries">
<td><code>entries</code></td>
<td><code>map&lt;string,&nbsp;string&gt;</code></td>
<td>
<p>Holds a set of name/value pairs.</p>
</tr>
<tr id="CheckRequest.quotas">
<td><code>quotas</code></td>
<td><code>map&lt;string,<a href="#CheckRequest.QuotaParams">CheckRequest.QuotaParams</a>&gt;</code></td>
<td><code>map&lt;string,&nbsp;<a href="#CheckRequest.QuotaParams">CheckRequest.QuotaParams</a>&gt;</code></td>
<td>
<p>The individual quotas to allocate</p>
</tr>
<tr id="CheckResponse.quotas">
<td><code>quotas</code></td>
<td><code>map&lt;string,<a href="#CheckResponse.QuotaResult">CheckResponse.QuotaResult</a>&gt;</code></td>
<td><code>map&lt;string,&nbsp;<a href="#CheckResponse.QuotaResult">CheckResponse.QuotaResult</a>&gt;</code></td>
<td>
<p>The resulting quota, one entry per requested quota.</p>
</tr>
<tr id="CompressedAttributes.strings">
<td><code>strings</code></td>
<td><code>map&lt;int32,&nbsp;int32&gt;</code></td>
<td>
<p>Holds attributes of type STRING, DNS<em>NAME, EMAIL</em>ADDRESS, URI</p>
</tr>
<tr id="CompressedAttributes.int64s">
<td><code>int64s</code></td>
<td><code>map&lt;int32,&nbsp;int64&gt;</code></td>
<td>
<p>Holds attributes of type INT64</p>
</tr>
<tr id="CompressedAttributes.doubles">
<td><code>doubles</code></td>
<td><code>map&lt;int32,&nbsp;double&gt;</code></td>
<td>
<p>Holds attributes of type DOUBLE</p>
</tr>
<tr id="CompressedAttributes.bools">
<td><code>bools</code></td>
<td><code>map&lt;int32,&nbsp;bool&gt;</code></td>
<td>
<p>Holds attributes of type BOOL</p>
</tr>
<tr id="CompressedAttributes.timestamps">
<td><code>timestamps</code></td>
<td><code>map&lt;int32,<a href="https://developers.google.com/protocol-buffers/docs/reference/google.protobuf#timestamp">google.protobuf.Timestamp</a>&gt;</code></td>
<td><code>map&lt;int32,&nbsp;<a href="https://developers.google.com/protocol-buffers/docs/reference/google.protobuf#timestamp">google.protobuf.Timestamp</a>&gt;</code></td>
<td>
<p>Holds attributes of type TIMESTAMP</p>
</tr>
<tr id="CompressedAttributes.durations">
<td><code>durations</code></td>
<td><code>map&lt;int32,<a href="https://developers.google.com/protocol-buffers/docs/reference/google.protobuf#duration">google.protobuf.Duration</a>&gt;</code></td>
<td><code>map&lt;int32,&nbsp;<a href="https://developers.google.com/protocol-buffers/docs/reference/google.protobuf#duration">google.protobuf.Duration</a>&gt;</code></td>
<td>
<p>Holds attributes of type DURATION</p>
</tr>
<tr id="CompressedAttributes.bytes">
<td><code>bytes</code></td>
<td><code>map&lt;int32,&nbsp;bytes&gt;</code></td>
<td>
<p>Holds attributes of type BYTES</p>
</tr>
<tr id="CompressedAttributes.string_maps">
<td><code>stringMaps</code></td>
<td><code>map&lt;int32,<a href="#StringMap">StringMap</a>&gt;</code></td>
<td><code>map&lt;int32,&nbsp;<a href="#StringMap">StringMap</a>&gt;</code></td>
<td>
<p>Holds attributes of type STRING_MAP</p>
<tbody>
<tr id="StringMap.entries">
<td><code>entries</code></td>
<td><code>map&lt;int32,&nbsp;int32&gt;</code></td>
<td>
<p>Holds a set of name/value pairs.</p>
---
title: Commands
description: Describes usage and options of the Istio commands and utilities.
weight: 30
toc: false
---
---
title: istio_ca
description: Istio Certificate Authority (CA)
layout: pkg-collateral-docs
number_of_entries: 4
---
<table class="command-flags">
<thead>
<th>Flags</th>
<th>Shorthand</th>
<th>Description</th>
</thead>
<tbody>
<tr>
<td><code>--alsologtostderr</code></td>
<td></td>
<td>log to standard error as well as files </td>
<td><code>--append-dns-names</code></td>
<td>Append DNS names to the certificates for webhook services. </td>
</tr>
<tr>
<td><code>--cert-chain &lt;string&gt;</code></td>
<td></td>
<td>Path to the certificate chain file (default ``)</td>
</tr>
<tr>
<td><code>--citadel-storage-namespace &lt;string&gt;</code></td>
<td>Namespace where the Citadel pod is running. Will not be used if explicit file or other storage mechanism is specified. (default `istio-system`)</td>
</tr>
<tr>
<td><code>--custom-dns-names &lt;string&gt;</code></td>
<td>The list of account.namespace:customdns names, separated by comma. (default ``)</td>
</tr>
<tr>
<td><code>--enable-profiling</code></td>
<td>Enabling profiling when monitoring Citadel. </td>
</tr>
<tr>
<td><code>--grpc-host-identities &lt;string&gt;</code></td>
<td>The list of hostnames for istio ca server, separated by comma. (default `istio-ca,istio-citadel`)</td>
</tr>
<tr>
<td><code>--grpc-hostname &lt;string&gt;</code></td>
<td></td>
<td>The hostname for GRPC server. (default `localhost`)</td>
<td>DEPRECATED, use --grpc-host-identites. (default `istio-ca`)</td>
</tr>
<tr>
<td><code>--grpc-port &lt;int&gt;</code></td>
<td></td>
<td>The port number for GRPC server. If unspecified, Istio CA will not server GRPC request. (default `0`)</td>
<td>The port number for Citadel GRPC server. If unspecified, Citadel will not serve GRPC requests. (default `8060`)</td>
</tr>
<tr>
<td><code>--istio-ca-storage-namespace &lt;string&gt;</code></td>
<td></td>
<td>Namespace where the Istio CA pods is running. Will not be used if explicit file or other storage mechanism is specified. (default `istio-system`)</td>
<td><code>--key-size &lt;int&gt;</code></td>
<td>Size of generated private key (default `2048`)</td>
</tr>
<tr>
<td><code>--kube-config &lt;string&gt;</code></td>
<td></td>
<td>Specifies path to kubeconfig file. This must be specified when not running inside a Kubernetes pod. (default ``)</td>
</tr>
<tr>
<td><code>--listened-namespace &lt;string&gt;</code></td>
<td></td>
<td>Select a namespace for the CA to listen to. If unspecified, Citadel tries to use the ${NAMESPACE} environment variable. If neither is set, Citadel listens to all namespaces. (default ``)</td>
</tr>
<tr>
<td><code>--liveness-probe-interval &lt;duration&gt;</code></td>
<td></td>
<td>Interval of updating file for the liveness probe. (default `0s`)</td>
</tr>
<tr>
<td><code>--liveness-probe-path &lt;string&gt;</code></td>
<td></td>
<td>Path to the file for the liveness probe. (default ``)</td>
</tr>
<tr>
<td><code>--log_as_json</code></td>
<td></td>
<td>Whether to format output as JSON or in plain console-friendly format </td>
</tr>
<tr>
<td><code>--log_backtrace_at &lt;traceLocation&gt;</code></td>
<td></td>
<td>when logging hits line file:N, emit a stack trace (default `:0`)</td>
</tr>
<tr>
<td><code>--log_callers</code></td>
<td></td>
<td>Include caller information, useful for debugging </td>
</tr>
<tr>
<td><code>--log_dir &lt;string&gt;</code></td>
<td></td>
<td>If non-empty, write log files in this directory (default ``)</td>
</tr>
<tr>
<td><code>--log_caller &lt;string&gt;</code></td>
<td></td>
<td>Comma-separated list of scopes for which to include caller information, scopes can be any of [default] (default ``)</td>
</tr>
<tr>
<td><code>--log_output_level &lt;string&gt;</code></td>
<td></td>
<td>The minimum logging level of messages to output, can be one of [debug, info, warn, error, none] (default `default:info`)</td>
</tr>
<tr>
<td><code>--log_rotate &lt;string&gt;</code></td>
<td></td>
<td>The path for the optional rotating log file (default ``)</td>
</tr>
<tr>
<td><code>--log_rotate_max_age &lt;int&gt;</code></td>
<td></td>
<td>The maximum age in days of a log file beyond which the file is rotated (0 indicates no limit) (default `30`)</td>
</tr>
<tr>
<td><code>--log_rotate_max_backups &lt;int&gt;</code></td>
<td></td>
<td>The maximum number of log file backups to keep before older files are deleted (0 indicates no limit) (default `1000`)</td>
</tr>
<tr>
<td><code>--log_rotate_max_size &lt;int&gt;</code></td>
<td></td>
<td>The maximum size in megabytes of a log file beyond which the file is rotated (default `104857600`)</td>
</tr>
<tr>
<td><code>--log_stacktrace_level &lt;string&gt;</code></td>
<td></td>
<td>The minimum logging level at which stack traces are captured, can be one of [debug, info, warn, error, none] (default `default:none`)</td>
</tr>
<tr>
<td><code>--log_target &lt;stringArray&gt;</code></td>
<td></td>
<td>The set of paths where to output the log. This can be any path as well as the special values stdout and stderr (default `[stdout]`)</td>
</tr>
<tr>
<td><code>--logtostderr</code></td>
<td></td>
<td>log to standard error instead of files </td>
</tr>
<tr>
<td><code>--max-workload-cert-ttl &lt;duration&gt;</code></td>
<td></td>
<td>The max TTL of issued workload certificates (default `2160h0m0s`)</td>
</tr>
<tr>
<td><code>--monitoring-port &lt;int&gt;</code></td>
<td></td>
<td>The port number for monitoring Citadel. If unspecified, Citadel will disable monitoring. (default `9093`)</td>
</tr>
<tr>
<td><code>--org &lt;string&gt;</code></td>
<td></td>
<td>Organization for the cert (default ``)</td>
</tr>
<tr>
<td><code>--probe-check-interval &lt;duration&gt;</code></td>
<td></td>
<td>Interval of checking the liveness of the CA. (default `30s`)</td>
</tr>
<tr>
<td><code>--requested-ca-cert-ttl &lt;duration&gt;</code></td>
<td></td>
<td>The requested TTL for the workload (default `8760h0m0s`)</td>
</tr>
<tr>
<td><code>--root-cert &lt;string&gt;</code></td>
<td></td>
<td>Path to the root certificate file (default ``)</td>
</tr>
<tr>
<td><code>--self-signed-ca</code></td>
<td></td>
<td>Indicates whether to use auto-generated self-signed CA certificate. When set to true, the &#39;--signing-cert&#39; and &#39;--signing-key&#39; options are ignored. </td>
</tr>
<tr>
<td><code>--self-signed-ca-cert-ttl &lt;duration&gt;</code></td>
<td></td>
<td>The TTL of self-signed CA root certificate (default `8760h0m0s`)</td>
</tr>
<tr>
<td><code>--self-signed-ca-org &lt;string&gt;</code></td>
<td></td>
<td>The issuer organization used in self-signed CA certificate (default to k8s.cluster.local) (default `k8s.cluster.local`)</td>
</tr>
<tr>
<td><code>--sign-ca-certs</code></td>
<td></td>
<td>Whether Citadel signs certificates for other CAs </td>
</tr>
<tr>
<td><code>--signing-cert &lt;string&gt;</code></td>
<td></td>
<td>Path to the CA signing certificate file (default ``)</td>
</tr>
<tr>
<td><code>--signing-key &lt;string&gt;</code></td>
<td></td>
<td>Path to the CA signing key file (default ``)</td>
</tr>
<tr>
<td><code>--stderrthreshold &lt;severity&gt;</code></td>
<td></td>
<td>logs at or above this threshold go to stderr (default `2`)</td>
</tr>
<tr>
<td><code>--upstream-auth &lt;string&gt;</code></td>
<td></td>
<td>Specifies how the Istio CA is authenticated to the upstream CA. (default `mtls`)</td>
</tr>
<tr>
<td><code>--upstream-ca-address &lt;string&gt;</code></td>
<td></td>
<td>The IP:port address of the upstream CA. When set, the CA will rely on the upstream Citadel to provision its own certificate. (default ``)</td>
</tr>
<tr>
<td><code>--upstream-ca-cert-file &lt;string&gt;</code></td>
<td></td>
<td>Path to the certificate for authenticating upstream CA. (default ``)</td>
</tr>
<tr>
<td><code>--v &lt;Level&gt;</code></td>
<td><code>-v</code></td>
<td>log level for V logs (default `0`)</td>
</tr>
<tr>
<td><code>--vmodule &lt;moduleSpec&gt;</code></td>
<td></td>
<td>comma-separated list of pattern=N settings for file-filtered logging (default ``)</td>
</tr>
<tr>
<td><code>--workload-cert-grace-period-ratio &lt;float32&gt;</code></td>
<td></td>
<td>The workload certificate rotation grace period, as a ratio of the workload certificate TTL. (default `0.5`)</td>
</tr>
<tr>
<td><code>--workload-cert-min-grace-period &lt;duration&gt;</code></td>
<td></td>
<td>The minimum workload certificate rotation grace period. (default `10m0s`)</td>
</tr>
<tr>
<td><code>--workload-cert-ttl &lt;duration&gt;</code></td>
<td></td>
<td>The TTL of issued workload certificates (default `2160h0m0s`)</td>
</tr>
</tbody>
</table>
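<p>To see how the flags above fit together, here is a sketch of launching the CA in self-signed mode. The flag names come from the table, but the values (namespace, TTL, port) are illustrative only:</p>

```shell
# Run the Istio CA with an auto-generated self-signed root certificate;
# --signing-cert and --signing-key are ignored in this mode.
istio_ca --self-signed-ca \
  --self-signed-ca-org k8s.cluster.local \
  --citadel-storage-namespace istio-system \
  --workload-cert-ttl 2160h \
  --grpc-port 8060
```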
@ -213,100 +174,53 @@ number_of_entries: 4
<table class="command-flags">
<thead>
<th>Flags</th>
<th>Shorthand</th>
<th>Description</th>
</thead>
<tbody>
<tr>
<td><code>--alsologtostderr</code></td>
<td></td>
<td>log to standard error as well as files </td>
</tr>
<tr>
<td><code>--interval &lt;duration&gt;</code></td>
<td></td>
<td>Duration used for checking the target file&#39;s last modified time. (default `0s`)</td>
</tr>
<tr>
<td><code>--log_as_json</code></td>
<td></td>
<td>Whether to format output as JSON or in plain console-friendly format </td>
</tr>
<tr>
<td><code>--log_backtrace_at &lt;traceLocation&gt;</code></td>
<td></td>
<td>when logging hits line file:N, emit a stack trace (default `:0`)</td>
</tr>
<tr>
<td><code>--log_callers</code></td>
<td></td>
<td>Include caller information, useful for debugging </td>
</tr>
<tr>
<td><code>--log_dir &lt;string&gt;</code></td>
<td></td>
<td>If non-empty, write log files in this directory (default ``)</td>
</tr>
<tr>
<td><code>--log_caller &lt;string&gt;</code></td>
<td></td>
<td>Comma-separated list of scopes for which to include caller information, scopes can be any of [default] (default ``)</td>
</tr>
<tr>
<td><code>--log_output_level &lt;string&gt;</code></td>
<td></td>
<td>The minimum logging level of messages to output, can be one of [debug, info, warn, error, none] (default `default:info`)</td>
</tr>
<tr>
<td><code>--log_rotate &lt;string&gt;</code></td>
<td></td>
<td>The path for the optional rotating log file (default ``)</td>
</tr>
<tr>
<td><code>--log_rotate_max_age &lt;int&gt;</code></td>
<td></td>
<td>The maximum age in days of a log file beyond which the file is rotated (0 indicates no limit) (default `30`)</td>
</tr>
<tr>
<td><code>--log_rotate_max_backups &lt;int&gt;</code></td>
<td></td>
<td>The maximum number of log file backups to keep before older files are deleted (0 indicates no limit) (default `1000`)</td>
</tr>
<tr>
<td><code>--log_rotate_max_size &lt;int&gt;</code></td>
<td></td>
<td>The maximum size in megabytes of a log file beyond which the file is rotated (default `104857600`)</td>
</tr>
<tr>
<td><code>--log_stacktrace_level &lt;string&gt;</code></td>
<td></td>
<td>The minimum logging level at which stack traces are captured, can be one of [debug, info, warn, error, none] (default `default:none`)</td>
</tr>
<tr>
<td><code>--log_target &lt;stringArray&gt;</code></td>
<td></td>
<td>The set of paths where to output the log. This can be any path as well as the special values stdout and stderr (default `[stdout]`)</td>
</tr>
<tr>
<td><code>--logtostderr</code></td>
<td></td>
<td>log to standard error instead of files </td>
</tr>
<tr>
<td><code>--probe-path &lt;string&gt;</code></td>
<td></td>
<td>Path of the file for checking the availability. (default ``)</td>
</tr>
<tr>
<td><code>--stderrthreshold &lt;severity&gt;</code></td>
<td></td>
<td>logs at or above this threshold go to stderr (default `2`)</td>
</tr>
<tr>
<td><code>--v &lt;Level&gt;</code></td>
<td><code>-v</code></td>
<td>log level for V logs (default `0`)</td>
</tr>
<tr>
<td><code>--vmodule &lt;moduleSpec&gt;</code></td>
<td></td>
<td>comma-separated list of pattern=N settings for file-filtered logging (default ``)</td>
</tr>
</tbody>
</table>
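<p>The probe flags above pair naturally with a Kubernetes liveness check. A sketch, assuming this table belongs to the istio_ca probe subcommand and using illustrative paths:</p>

```shell
# Exits non-zero if the probe file's last-modified time is older than
# --interval, which lets Kubernetes restart an unhealthy CA.
istio_ca probe --probe-path /tmp/ca.liveness --interval 10m
```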
<h2 id="istio_ca version">istio_ca version</h2>
@ -321,34 +235,19 @@ number_of_entries: 4
</thead>
<tbody>
<tr>
<td><code>--alsologtostderr</code></td>
<td></td>
<td>log to standard error as well as files </td>
</tr>
<tr>
<td><code>--log_as_json</code></td>
<td></td>
<td>Whether to format output as JSON or in plain console-friendly format </td>
</tr>
<tr>
<td><code>--log_backtrace_at &lt;traceLocation&gt;</code></td>
<td></td>
<td>when logging hits line file:N, emit a stack trace (default `:0`)</td>
</tr>
<tr>
<td><code>--log_callers</code></td>
<td></td>
<td>Include caller information, useful for debugging </td>
</tr>
<tr>
<td><code>--log_dir &lt;string&gt;</code></td>
<td></td>
<td>If non-empty, write log files in this directory (default ``)</td>
</tr>
<tr>
<td><code>--log_caller &lt;string&gt;</code></td>
<td></td>
<td>Comma-separated list of scopes for which to include caller information, scopes can be any of [default] (default ``)</td>
</tr>
<tr>
<td><code>--log_output_level &lt;string&gt;</code></td>
<td></td>
<td>The minimum logging level of messages to output, can be one of [debug, info, warn, error, none] (default `default:info`)</td>
</tr>
<tr>
<td><code>--log_rotate &lt;string&gt;</code></td>
@ -373,7 +272,7 @@ number_of_entries: 4
<tr>
<td><code>--log_stacktrace_level &lt;string&gt;</code></td>
<td></td>
<td>The minimum logging level at which stack traces are captured, can be one of [debug, info, warn, error, none] (default `default:none`)</td>
</tr>
<tr>
<td><code>--log_target &lt;stringArray&gt;</code></td>
@ -381,29 +280,9 @@ number_of_entries: 4
<td>The set of paths where to output the log. This can be any path as well as the special values stdout and stderr (default `[stdout]`)</td>
</tr>
<tr>
<td><code>--logtostderr</code></td>
<td></td>
<td>log to standard error instead of files </td>
</tr>
<tr>
<td><code>--short</code></td>
<td><code>-s</code></td>
<td>Displays a short form of the version information </td>
</tr>
<tr>
<td><code>--stderrthreshold &lt;severity&gt;</code></td>
<td></td>
<td>logs at or above this threshold go to stderr (default `2`)</td>
</tr>
<tr>
<td><code>--v &lt;Level&gt;</code></td>
<td><code>-v</code></td>
<td>log level for V logs (default `0`)</td>
</tr>
<tr>
<td><code>--vmodule &lt;moduleSpec&gt;</code></td>
<td></td>
<td>comma-separated list of pattern=N settings for file-filtered logging (default ``)</td>
</tr>
</tbody>
</table>



@ -1,36 +1,12 @@
---
title: mixc
description: Utility to trigger direct calls to Mixer&#39;s API.
layout: pkg-collateral-docs
number_of_entries: 5
---
<p>This command lets you interact with a running instance of
Mixer. Note that you need a pretty good understanding of Mixer&#39;s
API in order to use this command.</p>
<table class="command-flags">
<thead>
<th>Flags</th>
<th>Shorthand</th>
<th>Description</th>
</thead>
<tbody>
<tr>
<td><code>--trace_jaeger_url &lt;string&gt;</code></td>
<td></td>
<td>URL of Jaeger HTTP collector (example: &#39;http://jaeger:14268/api/traces?format=jaeger.thrift&#39;). (default ``)</td>
</tr>
<tr>
<td><code>--trace_log_spans</code></td>
<td></td>
<td>Whether or not to log trace spans. </td>
</tr>
<tr>
<td><code>--trace_zipkin_url &lt;string&gt;</code></td>
<td></td>
<td>URL of Zipkin collector (example: &#39;http://zipkin:9411/api/v1/spans&#39;). (default ``)</td>
</tr>
</tbody>
</table>
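<p>As a concrete sketch of driving Mixer's API, a precondition check might look like the following. The <code>--mixer</code> address flag and the attribute flags belong to the <code>mixc check</code> subcommand and are assumed here; the attribute values are examples only:</p>

```shell
# Ask Mixer whether a request carrying these attributes passes
# its precondition checks.
mixc check --mixer localhost:9091 \
  --string_attributes destination.service=details.default.svc.cluster.local \
  --stringmap_attributes "request.headers=x-user:alice"
```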
<h2 id="mixc check">mixc check</h2>
<p>The Check method is used to perform precondition checks and quota allocations. Mixer
expects a set of attributes as input, which it uses, along with
@ -224,20 +200,5 @@ which parameters in order to output the telemetry.</p>
<td><code>-s</code></td>
<td>Displays a short form of the version information </td>
</tr>
<tr>
<td><code>--trace_jaeger_url &lt;string&gt;</code></td>
<td></td>
<td>URL of Jaeger HTTP collector (example: &#39;http://jaeger:14268/api/traces?format=jaeger.thrift&#39;). (default ``)</td>
</tr>
<tr>
<td><code>--trace_log_spans</code></td>
<td></td>
<td>Whether or not to log trace spans. </td>
</tr>
<tr>
<td><code>--trace_zipkin_url &lt;string&gt;</code></td>
<td></td>
<td>URL of Zipkin collector (example: &#39;http://zipkin:9411/api/v1/spans&#39;). (default ``)</td>
</tr>
</tbody>
</table>


@ -1,245 +1,25 @@
---
title: mixs
description: Mixer is Istio&#39;s abstraction on top of infrastructure backends.
layout: pkg-collateral-docs
number_of_entries: 9
---
<p>Mixer is Istio&#39;s point of integration with infrastructure backends and is the
nexus for policy evaluation and telemetry reporting.</p>
<table class="command-flags">
<thead>
<th>Flags</th>
<th>Shorthand</th>
<th>Description</th>
</thead>
<tbody>
<tr>
<td><code>--alsologtostderr</code></td>
<td></td>
<td>log to standard error as well as files </td>
</tr>
<tr>
<td><code>--log_backtrace_at &lt;traceLocation&gt;</code></td>
<td></td>
<td>when logging hits line file:N, emit a stack trace (default `:0`)</td>
</tr>
<tr>
<td><code>--log_dir &lt;string&gt;</code></td>
<td></td>
<td>If non-empty, write log files in this directory (default ``)</td>
</tr>
<tr>
<td><code>--logtostderr</code></td>
<td></td>
<td>log to standard error instead of files </td>
</tr>
<tr>
<td><code>--stderrthreshold &lt;severity&gt;</code></td>
<td></td>
<td>logs at or above this threshold go to stderr (default `2`)</td>
</tr>
<tr>
<td><code>--v &lt;Level&gt;</code></td>
<td><code>-v</code></td>
<td>log level for V logs (default `0`)</td>
</tr>
<tr>
<td><code>--vmodule &lt;moduleSpec&gt;</code></td>
<td></td>
<td>comma-separated list of pattern=N settings for file-filtered logging (default ``)</td>
</tr>
</tbody>
</table>
<h2 id="mixs crd">mixs crd</h2>
<p>CRDs (CustomResourceDefinition) available in Mixer</p>
<table class="command-flags">
<thead>
<th>Flags</th>
<th>Shorthand</th>
<th>Description</th>
</thead>
<tbody>
<tr>
<td><code>--alsologtostderr</code></td>
<td></td>
<td>log to standard error as well as files </td>
</tr>
<tr>
<td><code>--log_backtrace_at &lt;traceLocation&gt;</code></td>
<td></td>
<td>when logging hits line file:N, emit a stack trace (default `:0`)</td>
</tr>
<tr>
<td><code>--log_dir &lt;string&gt;</code></td>
<td></td>
<td>If non-empty, write log files in this directory (default ``)</td>
</tr>
<tr>
<td><code>--logtostderr</code></td>
<td></td>
<td>log to standard error instead of files </td>
</tr>
<tr>
<td><code>--stderrthreshold &lt;severity&gt;</code></td>
<td></td>
<td>logs at or above this threshold go to stderr (default `2`)</td>
</tr>
<tr>
<td><code>--v &lt;Level&gt;</code></td>
<td><code>-v</code></td>
<td>log level for V logs (default `0`)</td>
</tr>
<tr>
<td><code>--vmodule &lt;moduleSpec&gt;</code></td>
<td></td>
<td>comma-separated list of pattern=N settings for file-filtered logging (default ``)</td>
</tr>
</tbody>
</table>
<h2 id="mixs crd adapter">mixs crd adapter</h2>
<p>List CRDs for available adapters</p>
<pre class="language-bash"><code>mixs crd adapter [flags]
</code></pre>
<table class="command-flags">
<thead>
<th>Flags</th>
<th>Shorthand</th>
<th>Description</th>
</thead>
<tbody>
<tr>
<td><code>--alsologtostderr</code></td>
<td></td>
<td>log to standard error as well as files </td>
</tr>
<tr>
<td><code>--log_backtrace_at &lt;traceLocation&gt;</code></td>
<td></td>
<td>when logging hits line file:N, emit a stack trace (default `:0`)</td>
</tr>
<tr>
<td><code>--log_dir &lt;string&gt;</code></td>
<td></td>
<td>If non-empty, write log files in this directory (default ``)</td>
</tr>
<tr>
<td><code>--logtostderr</code></td>
<td></td>
<td>log to standard error instead of files </td>
</tr>
<tr>
<td><code>--stderrthreshold &lt;severity&gt;</code></td>
<td></td>
<td>logs at or above this threshold go to stderr (default `2`)</td>
</tr>
<tr>
<td><code>--v &lt;Level&gt;</code></td>
<td><code>-v</code></td>
<td>log level for V logs (default `0`)</td>
</tr>
<tr>
<td><code>--vmodule &lt;moduleSpec&gt;</code></td>
<td></td>
<td>comma-separated list of pattern=N settings for file-filtered logging (default ``)</td>
</tr>
</tbody>
</table>
<h2 id="mixs crd all">mixs crd all</h2>
<p>List all CRDs</p>
<pre class="language-bash"><code>mixs crd all [flags]
</code></pre>
<table class="command-flags">
<thead>
<th>Flags</th>
<th>Shorthand</th>
<th>Description</th>
</thead>
<tbody>
<tr>
<td><code>--alsologtostderr</code></td>
<td></td>
<td>log to standard error as well as files </td>
</tr>
<tr>
<td><code>--log_backtrace_at &lt;traceLocation&gt;</code></td>
<td></td>
<td>when logging hits line file:N, emit a stack trace (default `:0`)</td>
</tr>
<tr>
<td><code>--log_dir &lt;string&gt;</code></td>
<td></td>
<td>If non-empty, write log files in this directory (default ``)</td>
</tr>
<tr>
<td><code>--logtostderr</code></td>
<td></td>
<td>log to standard error instead of files </td>
</tr>
<tr>
<td><code>--stderrthreshold &lt;severity&gt;</code></td>
<td></td>
<td>logs at or above this threshold go to stderr (default `2`)</td>
</tr>
<tr>
<td><code>--v &lt;Level&gt;</code></td>
<td><code>-v</code></td>
<td>log level for V logs (default `0`)</td>
</tr>
<tr>
<td><code>--vmodule &lt;moduleSpec&gt;</code></td>
<td></td>
<td>comma-separated list of pattern=N settings for file-filtered logging (default ``)</td>
</tr>
</tbody>
</table>
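<p>A typical use of the crd subcommands is to emit Mixer's CRDs and register them with the cluster; the file name below is arbitrary:</p>

```shell
# Generate every Mixer CustomResourceDefinition, then apply them.
mixs crd all > mixer-crds.yaml
kubectl apply -f mixer-crds.yaml
```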
<h2 id="mixs crd instance">mixs crd instance</h2>
<p>List CRDs for available instance kinds (mesh functions)</p>
<pre class="language-bash"><code>mixs crd instance [flags]
</code></pre>
<table class="command-flags">
<thead>
<th>Flags</th>
<th>Shorthand</th>
<th>Description</th>
</thead>
<tbody>
<tr>
<td><code>--alsologtostderr</code></td>
<td></td>
<td>log to standard error as well as files </td>
</tr>
<tr>
<td><code>--log_backtrace_at &lt;traceLocation&gt;</code></td>
<td></td>
<td>when logging hits line file:N, emit a stack trace (default `:0`)</td>
</tr>
<tr>
<td><code>--log_dir &lt;string&gt;</code></td>
<td></td>
<td>If non-empty, write log files in this directory (default ``)</td>
</tr>
<tr>
<td><code>--logtostderr</code></td>
<td></td>
<td>log to standard error instead of files </td>
</tr>
<tr>
<td><code>--stderrthreshold &lt;severity&gt;</code></td>
<td></td>
<td>logs at or above this threshold go to stderr (default `2`)</td>
</tr>
<tr>
<td><code>--v &lt;Level&gt;</code></td>
<td><code>-v</code></td>
<td>log level for V logs (default `0`)</td>
</tr>
<tr>
<td><code>--vmodule &lt;moduleSpec&gt;</code></td>
<td></td>
<td>comma-separated list of pattern=N settings for file-filtered logging (default ``)</td>
</tr>
</tbody>
</table>
<h2 id="mixs probe">mixs probe</h2>
<p>Check the liveness or readiness of a locally-running server</p>
<pre class="language-bash"><code>mixs probe [flags]
@ -247,100 +27,53 @@ nexus for policy evaluation and telemetry reporting.</p>
<table class="command-flags">
<thead>
<th>Flags</th>
<th>Shorthand</th>
<th>Description</th>
</thead>
<tbody>
<tr>
<td><code>--alsologtostderr</code></td>
<td></td>
<td>log to standard error as well as files </td>
</tr>
<tr>
<td><code>--interval &lt;duration&gt;</code></td>
<td></td>
<td>Duration used for checking the target file&#39;s last modified time. (default `0s`)</td>
</tr>
<tr>
<td><code>--log_as_json</code></td>
<td></td>
<td>Whether to format output as JSON or in plain console-friendly format </td>
</tr>
<tr>
<td><code>--log_backtrace_at &lt;traceLocation&gt;</code></td>
<td></td>
<td>when logging hits line file:N, emit a stack trace (default `:0`)</td>
</tr>
<tr>
<td><code>--log_callers</code></td>
<td></td>
<td>Include caller information, useful for debugging </td>
</tr>
<tr>
<td><code>--log_dir &lt;string&gt;</code></td>
<td></td>
<td>If non-empty, write log files in this directory (default ``)</td>
</tr>
<tr>
<td><code>--log_caller &lt;string&gt;</code></td>
<td></td>
<td>Comma-separated list of scopes for which to include caller information, scopes can be any of [adapters, default, attributes] (default ``)</td>
</tr>
<tr>
<td><code>--log_output_level &lt;string&gt;</code></td>
<td></td>
<td>Comma-separated minimum per-scope logging level of messages to output, in the form of &lt;scope&gt;:&lt;level&gt;,&lt;scope&gt;:&lt;level&gt;,... where scope can be one of [adapters, default, attributes] and level can be one of [debug, info, warn, error, none] (default `default:info`)</td>
</tr>
<tr>
<td><code>--log_rotate &lt;string&gt;</code></td>
<td></td>
<td>The path for the optional rotating log file (default ``)</td>
</tr>
<tr>
<td><code>--log_rotate_max_age &lt;int&gt;</code></td>
<td></td>
<td>The maximum age in days of a log file beyond which the file is rotated (0 indicates no limit) (default `30`)</td>
</tr>
<tr>
<td><code>--log_rotate_max_backups &lt;int&gt;</code></td>
<td></td>
<td>The maximum number of log file backups to keep before older files are deleted (0 indicates no limit) (default `1000`)</td>
</tr>
<tr>
<td><code>--log_rotate_max_size &lt;int&gt;</code></td>
<td></td>
<td>The maximum size in megabytes of a log file beyond which the file is rotated (default `104857600`)</td>
</tr>
<tr>
<td><code>--log_stacktrace_level &lt;string&gt;</code></td>
<td></td>
<td>Comma-separated minimum per-scope logging level at which stack traces are captured, in the form of &lt;scope&gt;:&lt;level&gt;,&lt;scope&gt;:&lt;level&gt;,... where scope can be one of [adapters, default, attributes] and level can be one of [debug, info, warn, error, none] (default `default:none`)</td>
</tr>
<tr>
<td><code>--log_target &lt;stringArray&gt;</code></td>
<td></td>
<td>The set of paths where to output the log. This can be any path as well as the special values stdout and stderr (default `[stdout]`)</td>
</tr>
<tr>
<td><code>--logtostderr</code></td>
<td></td>
<td>log to standard error instead of files </td>
</tr>
<tr>
<td><code>--probe-path &lt;string&gt;</code></td>
<td></td>
<td>Path of the file for checking the availability. (default ``)</td>
</tr>
<tr>
<td><code>--stderrthreshold &lt;severity&gt;</code></td>
<td></td>
<td>logs at or above this threshold go to stderr (default `2`)</td>
</tr>
<tr>
<td><code>--v &lt;Level&gt;</code></td>
<td><code>-v</code></td>
<td>log level for V logs (default `0`)</td>
</tr>
<tr>
<td><code>--vmodule &lt;moduleSpec&gt;</code></td>
<td></td>
<td>comma-separated list of pattern=N settings for file-filtered logging (default ``)</td>
</tr>
</tbody>
</table>
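<p>Mirroring the CA probe, <code>mixs probe</code> checks a file's freshness; a liveness-style invocation with illustrative values:</p>

```shell
# Succeeds only if the probe file was modified within the last interval.
mixs probe --probe-path /tmp/mixer.liveness --interval 9s
```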
<h2 id="mixs server">mixs server</h2>
@ -360,11 +93,6 @@ nexus for policy evaluation and telemetry reporting.</p>
<td>Max number of goroutines in the adapter worker pool (default `1024`)</td>
</tr>
<tr>
<td><code>--alsologtostderr</code></td>
<td></td>
<td>log to standard error as well as files </td>
</tr>
<tr>
<td><code>--apiWorkerPoolSize &lt;int&gt;</code></td>
<td></td>
<td>Max number of goroutines in the API worker pool (default `1024`)</td>
@ -380,9 +108,9 @@ nexus for policy evaluation and telemetry reporting.</p>
<td>URL of the config store. Use k8s://path_to_kubeconfig or fs:// for file system. If path_to_kubeconfig is empty, in-cluster kubeconfig is used. (default ``)</td>
</tr>
<tr>
<td><code>--ctrlz_port &lt;uint16&gt;</code></td>
<td></td>
<td>The IP port to use for the ControlZ introspection facility (default `9876`)</td>
</tr>
<tr>
<td><code>--expressionEvalCacheSize &lt;int&gt;</code></td>
<td></td>
<td>Number of entries in the expression cache (default `1024`)</td>
</tr>
<tr>
<td><code>--livenessProbeInterval &lt;duration&gt;</code></td>
@ -400,24 +128,14 @@ nexus for policy evaluation and telemetry reporting.</p>
<td>Whether to format output as JSON or in plain console-friendly format </td>
</tr>
<tr>
<td><code>--log_backtrace_at &lt;traceLocation&gt;</code></td>
<td></td>
<td>when logging hits line file:N, emit a stack trace (default `:0`)</td>
</tr>
<tr>
<td><code>--log_callers</code></td>
<td></td>
<td>Include caller information, useful for debugging </td>
</tr>
<tr>
<td><code>--log_dir &lt;string&gt;</code></td>
<td></td>
<td>If non-empty, write log files in this directory (default ``)</td>
</tr>
<tr>
<td><code>--log_caller &lt;string&gt;</code></td>
<td></td>
<td>Comma-separated list of scopes for which to include caller information, scopes can be any of [adapters, default, attributes] (default ``)</td>
</tr>
<tr>
<td><code>--log_output_level &lt;string&gt;</code></td>
<td></td>
<td>Comma-separated minimum per-scope logging level of messages to output, in the form of &lt;scope&gt;:&lt;level&gt;,&lt;scope&gt;:&lt;level&gt;,... where scope can be one of [adapters, default, attributes] and level can be one of [debug, info, warn, error, none] (default `default:info`)</td>
</tr>
<tr>
<td><code>--log_rotate &lt;string&gt;</code></td>
@ -442,7 +160,7 @@ nexus for policy evaluation and telemetry reporting.</p>
<tr>
<td><code>--log_stacktrace_level &lt;string&gt;</code></td>
<td></td>
<td>Comma-separated minimum per-scope logging level at which stack traces are captured, in the form of &lt;scope&gt;:&lt;level&gt;,&lt;scope&gt;:&lt;level&gt;,... where scope can be one of [adapters, default, attributes] and level can be one of [debug, info, warn, error, none] (default `default:none`)</td>
</tr>
<tr>
<td><code>--log_target &lt;stringArray&gt;</code></td>
@ -450,11 +168,6 @@ nexus for policy evaluation and telemetry reporting.</p>
<td>The set of paths where to output the log. This can be any path as well as the special values stdout and stderr (default `[stdout]`)</td>
</tr>
<tr>
<td><code>--logtostderr</code></td>
<td></td>
<td>log to standard error instead of files </td>
</tr>
<tr>
<td><code>--maxConcurrentStreams &lt;uint&gt;</code></td>
<td></td>
<td>Maximum number of outstanding RPCs per connection (default `1024`)</td>
@ -475,6 +188,11 @@ nexus for policy evaluation and telemetry reporting.</p>
<td>TCP port to use for Mixer&#39;s gRPC API (default `9091`)</td>
</tr>
<tr>
<td><code>--profile</code></td>
<td></td>
<td>Enable profiling via web interface host:port/debug/pprof </td>
</tr>
<tr>
<td><code>--readinessProbeInterval &lt;duration&gt;</code></td>
<td></td>
<td>Interval of updating file for the readiness probe. (default `0s`)</td>
@ -490,11 +208,6 @@ nexus for policy evaluation and telemetry reporting.</p>
<td>If true, each request to Mixer will be executed in a single go routine (useful for debugging) </td>
</tr>
<tr>
<td><code>--stderrthreshold &lt;severity&gt;</code></td>
<td></td>
<td>logs at or above this threshold go to stderr (default `2`)</td>
</tr>
<tr>
<td><code>--trace_jaeger_url &lt;string&gt;</code></td>
<td></td>
<td>URL of Jaeger HTTP collector (example: &#39;http://jaeger:14268/api/traces?format=jaeger.thrift&#39;). (default ``)</td>
@ -509,109 +222,6 @@ nexus for policy evaluation and telemetry reporting.</p>
<td></td>
<td>URL of Zipkin collector (example: &#39;http://zipkin:9411/api/v1/spans&#39;). (default ``)</td>
</tr>
<tr>
<td><code>--useNewRuntime</code></td>
<td></td>
<td>Use the new runtime code for processing requests. </td>
</tr>
<tr>
<td><code>--v &lt;Level&gt;</code></td>
<td><code>-v</code></td>
<td>log level for V logs (default `0`)</td>
</tr>
<tr>
<td><code>--vmodule &lt;moduleSpec&gt;</code></td>
<td></td>
<td>comma-separated list of pattern=N settings for file-filtered logging (default ``)</td>
</tr>
</tbody>
</table>
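<p>Pulling the server flags together, a file-backed development run could be sketched as follows; the config path and log levels are examples:</p>

```shell
# Serve Mixer's gRPC API on the default port, reading config from the
# local file system, with per-scope logging as described by --log_output_level.
mixs server --configStoreURL fs:///etc/mixer/config \
  --log_output_level default:info,adapters:warn \
  --port 9091
```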
<h2 id="mixs validator">mixs validator</h2>
<p>Runs an https server for validations. Works as an external admission webhook for k8s</p>
<pre class="language-bash"><code>mixs validator [flags]
</code></pre>
<table class="command-flags">
<thead>
<th>Flags</th>
<th>Shorthand</th>
<th>Description</th>
</thead>
<tbody>
<tr>
<td><code>--alsologtostderr</code></td>
<td></td>
<td>log to standard error as well as files </td>
</tr>
<tr>
<td><code>--external-admission-webook-name &lt;string&gt;</code></td>
<td></td>
<td>the name of the external admission webhook registration. Needs to be a domain with at least three segments separated by dots. (default `mixer-webhook.istio.io`)</td>
</tr>
<tr>
<td><code>--kubeconfig &lt;string&gt;</code></td>
<td></td>
<td>Use a Kubernetes configuration file instead of in-cluster configuration (default ``)</td>
</tr>
<tr>
<td><code>--log_backtrace_at &lt;traceLocation&gt;</code></td>
<td></td>
<td>when logging hits line file:N, emit a stack trace (default `:0`)</td>
</tr>
<tr>
<td><code>--log_dir &lt;string&gt;</code></td>
<td></td>
<td>If non-empty, write log files in this directory (default ``)</td>
</tr>
<tr>
<td><code>--logtostderr</code></td>
<td></td>
<td>log to standard error instead of files </td>
</tr>
<tr>
<td><code>--namespace &lt;string&gt;</code></td>
<td></td>
<td>the namespace where this webhook is deployed (default `istio-system`)</td>
</tr>
<tr>
<td><code>--port &lt;int&gt;</code></td>
<td><code>-p</code></td>
<td>the port number of the webhook (default `9099`)</td>
</tr>
<tr>
<td><code>--registration-delay &lt;duration&gt;</code></td>
<td></td>
<td>Time to delay webhook registration after starting webhook server (default `5s`)</td>
</tr>
<tr>
<td><code>--secret-name &lt;string&gt;</code></td>
<td></td>
<td>The name of k8s secret where the certificates are stored (default ``)</td>
</tr>
<tr>
<td><code>--stderrthreshold &lt;severity&gt;</code></td>
<td></td>
<td>logs at or above this threshold go to stderr (default `2`)</td>
</tr>
<tr>
<td><code>--target-namespaces &lt;stringArray&gt;</code></td>
<td></td>
<td>the list of namespaces where changes should be validated. Empty means to validate everything. Used for test only. (default `[]`)</td>
</tr>
<tr>
<td><code>--v &lt;Level&gt;</code></td>
<td><code>-v</code></td>
<td>log level for V logs (default `0`)</td>
</tr>
<tr>
<td><code>--vmodule &lt;moduleSpec&gt;</code></td>
<td></td>
<td>comma-separated list of pattern=N settings for file-filtered logging (default ``)</td>
</tr>
<tr>
<td><code>--webhook-name &lt;string&gt;</code></td>
<td></td>
<td>the name of the webhook (default `istio-mixer-webhook`)</td>
</tr>
</tbody>
</table>
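<p>For illustration, the flags above might be combined into an invocation such as the following; the secret name and kubeconfig path are hypothetical values, and the remaining flags simply restate their documented defaults:</p>
<pre class="language-bash"><code>mixs validator --namespace istio-system \
    --port 9099 \
    --secret-name mixer-webhook-certs \
    --kubeconfig ~/.kube/config
</code></pre>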
<h2 id="mixs version">mixs version</h2>
@@ -626,44 +236,9 @@ nexus for policy evaluation and telemetry reporting.</p>
</thead>
<tbody>
<tr>
<td><code>--alsologtostderr</code></td>
<td></td>
<td>log to standard error as well as files </td>
</tr>
<tr>
<td><code>--log_backtrace_at &lt;traceLocation&gt;</code></td>
<td></td>
<td>when logging hits line file:N, emit a stack trace (default `:0`)</td>
</tr>
<tr>
<td><code>--log_dir &lt;string&gt;</code></td>
<td></td>
<td>If non-empty, write log files in this directory (default ``)</td>
</tr>
<tr>
<td><code>--logtostderr</code></td>
<td></td>
<td>log to standard error instead of files </td>
</tr>
<tr>
<td><code>--short</code></td>
<td><code>-s</code></td>
<td>Displays a short form of the version information </td>
</tr>
<tr>
<td><code>--stderrthreshold &lt;severity&gt;</code></td>
<td></td>
<td>logs at or above this threshold go to stderr (default `2`)</td>
</tr>
<tr>
<td><code>--v &lt;Level&gt;</code></td>
<td><code>-v</code></td>
<td>log level for V logs (default `0`)</td>
</tr>
<tr>
<td><code>--vmodule &lt;moduleSpec&gt;</code></td>
<td></td>
<td>comma-separated list of pattern=N settings for file-filtered logging (default ``)</td>
</tr>
</tbody>
</table>
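<p>For example, to print just the version string without the surrounding detail, the short form documented above can be used:</p>
<pre class="language-bash"><code>mixs version --short
</code></pre>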


@@ -1,6 +1,6 @@
---
title: node_agent
description: Istio security per-node agent
layout: pkg-collateral-docs
number_of_entries: 3
---
@@ -10,109 +10,80 @@ number_of_entries: 3
<table class="command-flags">
<thead>
<th>Flags</th>
<th>Shorthand</th>
<th>Description</th>
</thead>
<tbody>
<tr>
<td><code>--ca-address &lt;string&gt;</code></td>
<td></td>
<td>Istio CA address (default `istio-citadel:8060`)</td>
</tr>
<tr>
<td><code>--cert-chain &lt;string&gt;</code></td>
<td></td>
<td>Node Agent identity cert file (default `/etc/certs/cert-chain.pem`)</td>
</tr>
<tr>
<td><code>--env &lt;string&gt;</code></td>
<td></td>
<td>Node Environment : unspecified | onprem | gcp | aws (default `unspecified`)</td>
</tr>
<tr>
<td><code>--gcp-ca-address &lt;string&gt;</code></td>
<td></td>
<td>Istio CA address in GCP environment (default `istio-ca:8060`)</td>
</tr>
<tr>
<td><code>--key &lt;string&gt;</code></td>
<td></td>
<td>Node Agent private key file (default `/etc/certs/key.pem`)</td>
</tr>
<tr>
<td><code>--key-size &lt;int&gt;</code></td>
<td></td>
<td>Size of generated private key (default `2048`)</td>
</tr>
<tr>
<td><code>--log_as_json</code></td>
<td></td>
<td>Whether to format output as JSON or in plain console-friendly format </td>
</tr>
<tr>
<td><code>--log_caller &lt;string&gt;</code></td>
<td></td>
<td>Comma-separated list of scopes for which to include caller information, scopes can be any of [default] (default ``)</td>
</tr>
<tr>
<td><code>--log_output_level &lt;string&gt;</code></td>
<td></td>
<td>The minimum logging level of messages to output, can be one of [debug, info, warn, error, none] (default `default:info`)</td>
</tr>
<tr>
<td><code>--log_rotate &lt;string&gt;</code></td>
<td></td>
<td>The path for the optional rotating log file (default ``)</td>
</tr>
<tr>
<td><code>--log_rotate_max_age &lt;int&gt;</code></td>
<td></td>
<td>The maximum age in days of a log file beyond which the file is rotated (0 indicates no limit) (default `30`)</td>
</tr>
<tr>
<td><code>--log_rotate_max_backups &lt;int&gt;</code></td>
<td></td>
<td>The maximum number of log file backups to keep before older files are deleted (0 indicates no limit) (default `1000`)</td>
</tr>
<tr>
<td><code>--log_rotate_max_size &lt;int&gt;</code></td>
<td></td>
<td>The maximum size in megabytes of a log file beyond which the file is rotated (default `104857600`)</td>
</tr>
<tr>
<td><code>--log_stacktrace_level &lt;string&gt;</code></td>
<td></td>
<td>The minimum logging level at which stack traces are captured, can be one of [debug, info, warn, error, none] (default `default:none`)</td>
</tr>
<tr>
<td><code>--log_target &lt;stringArray&gt;</code></td>
<td></td>
<td>The set of paths where to output the log. This can be any path as well as the special values stdout and stderr (default `[stdout]`)</td>
</tr>
<tr>
<td><code>--onprem-cert-chain &lt;string&gt;</code></td>
<td></td>
<td>Node Agent identity cert file in on premise environment (default `/etc/certs/cert-chain.pem`)</td>
</tr>
<tr>
<td><code>--onprem-key &lt;string&gt;</code></td>
<td></td>
<td>Node identity private key file in on premise environment (default `/etc/certs/key.pem`)</td>
</tr>
<tr>
<td><code>--onprem-root-cert &lt;string&gt;</code></td>
<td></td>
<td>Root Certificate file in on premise environment (default `/etc/certs/root-cert.pem`)</td>
</tr>
<tr>
<td><code>--org &lt;string&gt;</code></td>
<td></td>
<td>Organization for the cert (default ``)</td>
</tr>
<tr>
<td><code>--platform &lt;string&gt;</code></td>
<td></td>
<td>The platform istio runs on: vm | k8s (default `vm`)</td>
</tr>
<tr>
<td><code>--root-cert &lt;string&gt;</code></td>
<td></td>
<td>Root Certificate file (default `/etc/certs/root-cert.pem`)</td>
</tr>
<tr>
<td><code>--workload-cert-ttl &lt;duration&gt;</code></td>
<td></td>
<td>The requested TTL for the workload (default `2160h0m0s`)</td>
</tr>
</tbody>
</table>
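<p>As an illustration, a node agent on a VM might point at a non-default CA address while keeping the standard certificate paths; the CA address shown here is a hypothetical value, and the certificate flags restate their documented defaults:</p>
<pre class="language-bash"><code>node_agent --env onprem \
    --ca-address istio-citadel.istio-system:8060 \
    --cert-chain /etc/certs/cert-chain.pem \
    --key /etc/certs/key.pem \
    --root-cert /etc/certs/root-cert.pem
</code></pre>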
@@ -133,14 +104,14 @@ number_of_entries: 3
<td>Whether to format output as JSON or in plain console-friendly format </td>
</tr>
<tr>
<td><code>--log_caller &lt;string&gt;</code></td>
<td></td>
<td>Comma-separated list of scopes for which to include caller information, scopes can be any of [default] (default ``)</td>
</tr>
<tr>
<td><code>--log_output_level &lt;string&gt;</code></td>
<td></td>
<td>The minimum logging level of messages to output, can be one of [debug, info, warn, error, none] (default `default:info`)</td>
</tr>
<tr>
<td><code>--log_rotate &lt;string&gt;</code></td>
@@ -165,7 +136,7 @@ number_of_entries: 3
<tr>
<td><code>--log_stacktrace_level &lt;string&gt;</code></td>
<td></td>
<td>The minimum logging level at which stack traces are captured, can be one of [debug, info, warn, error, none] (default `default:none`)</td>
</tr>
<tr>
<td><code>--log_target &lt;stringArray&gt;</code></td>

Some files were not shown because too many files have changed in this diff.