istio.io/archive/v1.1/feed.xml

<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Istio Blog</title><description>Connect, secure, control, and observe services.</description><link>/v1.1</link><image><url>/v1.1/favicons/android-192x192.png</url><link>/v1.1</link></image><category>Service mesh</category><item><title>Announcing Istio 1.1.9</title><description>&lt;p&gt;We&amp;rsquo;re pleased to announce the availability of Istio 1.1.9. Please see below for what&amp;rsquo;s changed.&lt;/p&gt;
&lt;div class=&#34;call-to-action&#34;&gt;
&lt;a class=&#34;btn&#34; href=&#34;https://github.com/istio/istio/releases/tag/1.1.9&#34;&gt;DOWNLOAD 1.1.9&lt;/a&gt;
&lt;a class=&#34;btn&#34; href=&#34;https://istio.io/docs&#34;&gt;1.1.9 DOCS&lt;/a&gt;
&lt;a class=&#34;btn&#34; href=&#34;https://github.com/istio/istio/compare/1.1.8...1.1.9&#34;&gt;CHANGES IN 1.1.9&lt;/a&gt;
&lt;/div&gt;
&lt;h2 id=&#34;bug-fixes&#34;&gt;Bug fixes&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Prevent overly large strings from being sent to Prometheus (&lt;a href=&#34;https://github.com/istio/istio/issues/14642&#34;&gt;Issue 14642&lt;/a&gt;).&lt;/li&gt;
&lt;li&gt;Reuse previously cached JWT public keys if transport errors are encountered during renewal (&lt;a href=&#34;https://github.com/istio/istio/issues/14638&#34;&gt;Issue 14638&lt;/a&gt;).&lt;/li&gt;
&lt;li&gt;Bypass JWT authentication for HTTP OPTIONS methods to support CORS requests (&lt;a href=&#34;https://github.com/istio/proxy/issues/2160&#34;&gt;Issue 2160&lt;/a&gt;).&lt;/li&gt;
&lt;li&gt;Fix Envoy crash caused by the Mixer filter (&lt;a href=&#34;https://github.com/istio/istio/issues/14707&#34;&gt;Issue 14707&lt;/a&gt;).&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;small-enhancements&#34;&gt;Small enhancements&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Expose cryptographic signature verification functions to &lt;code&gt;Lua&lt;/code&gt; Envoy filters (&lt;a href=&#34;https://github.com/envoyproxy/envoy/issues/7009&#34;&gt;Envoy Issue 7009&lt;/a&gt;).&lt;/li&gt;
&lt;/ul&gt;</description><pubDate>Mon, 17 Jun 2019 00:00:00 +0000</pubDate><link>/v1.1/blog/2019/announcing-1.1.9/</link><author>The Istio Team</author><guid isPermaLink="true">/v1.1/blog/2019/announcing-1.1.9/</guid></item><item><title>Extending Istio Self-Signed Root Certificate Lifetime</title><description>&lt;p&gt;Istio self-signed certificates have historically had a 1 year default lifetime.
If you are using Istio self-signed certificates,
you need to schedule regular root transitions before they expire.
An expired root certificate may lead to an unexpected cluster-wide outage.
The issue affects all versions up to 1.0.7 and 1.1.7.&lt;/p&gt;
&lt;p&gt;See &lt;a href=&#34;/v1.1/help/ops/security/root-transition/&#34;&gt;Extending Self-Signed Certificate Lifetime&lt;/a&gt; for
information on how to gauge the age of your certificates and how to perform rotation.&lt;/p&gt;
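&lt;p&gt;For example, you can inspect the expiration date of the self-signed root certificate with a command along these lines (this is an illustration, assuming Citadel&amp;rsquo;s default &lt;code&gt;istio-ca-secret&lt;/code&gt; in the &lt;code&gt;istio-system&lt;/code&gt; namespace; adjust the names for your installation):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ kubectl -n istio-system get secret istio-ca-secret -o jsonpath=&amp;#34;{.data.ca-cert\.pem}&amp;#34; | base64 --decode | openssl x509 -noout -enddate
&lt;/code&gt;&lt;/pre&gt;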
&lt;div&gt;
&lt;aside class=&#34;callout tip&#34;&gt;
&lt;div class=&#34;type&#34;&gt;&lt;svg class=&#34;large-icon&#34;&gt;&lt;use xlink:href=&#34;/v1.1/img/icons.svg#callout-tip&#34;/&gt;&lt;/svg&gt;&lt;/div&gt;
&lt;div class=&#34;content&#34;&gt;We strongly recommend you rotate root keys and root certificates annually as a security best practice.
We will send out instructions for root key/cert rotation as a follow-up.&lt;/div&gt;
&lt;/aside&gt;
&lt;/div&gt;</description><pubDate>Fri, 07 Jun 2019 00:00:00 +0000</pubDate><link>/v1.1/blog/2019/root-transition/</link><author>Oliver Liu</author><guid isPermaLink="true">/v1.1/blog/2019/root-transition/</guid><category>security</category><category>PKI</category><category>certificate</category><category>Citadel</category></item><item><title>Announcing Istio 1.0.8</title><description>&lt;p&gt;We&amp;rsquo;re pleased to announce the availability of Istio 1.0.8. Please see below for what&amp;rsquo;s changed.&lt;/p&gt;
&lt;div class=&#34;call-to-action&#34;&gt;
&lt;a class=&#34;btn&#34; href=&#34;https://github.com/istio/istio/releases/tag/1.0.8&#34;&gt;DOWNLOAD 1.0.8&lt;/a&gt;
&lt;a class=&#34;btn&#34; href=&#34;https://archive.istio.io/v1.0&#34;&gt;1.0.8 DOCS&lt;/a&gt;
&lt;a class=&#34;btn&#34; href=&#34;https://github.com/istio/istio/compare/1.0.7...1.0.8&#34;&gt;CHANGES IN 1.0.8&lt;/a&gt;
&lt;/div&gt;
&lt;h2 id=&#34;bug-fixes&#34;&gt;Bug fixes&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Fix issue where Citadel could generate a new root CA if it cannot contact the Kubernetes API server, causing mutual TLS verification to incorrectly fail (&lt;a href=&#34;https://github.com/istio/istio/issues/14512&#34;&gt;Issue 14512&lt;/a&gt;).&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;small-enhancements&#34;&gt;Small enhancements&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Update Citadel&amp;rsquo;s default root CA certificate TTL from 1 year to 10 years.&lt;/li&gt;
&lt;/ul&gt;</description><pubDate>Fri, 07 Jun 2019 00:00:00 +0000</pubDate><link>/v1.1/blog/2019/announcing-1.0.8/</link><author>The Istio Team</author><guid isPermaLink="true">/v1.1/blog/2019/announcing-1.0.8/</guid></item><item><title>Announcing Istio 1.1.8</title><description>&lt;p&gt;We&amp;rsquo;re pleased to announce the availability of Istio 1.1.8. Please see below for what&amp;rsquo;s changed.&lt;/p&gt;
&lt;div class=&#34;call-to-action&#34;&gt;
&lt;button class=&#34;btn update-notice&#34;
data-title=&#39;Update Notice&#39;
data-downloadhref=&#34;https://github.com/istio/istio/releases/tag/1.1.8&#34;
data-updateadvice=&#39;Before you download 1.1.8, you should know that there&amp;#39;s a newer patch release with the latest bug fixes and perf improvements.&#39;
data-updatebutton=&#39;LEARN ABOUT ISTIO 1.1.9&#39;
data-updatehref=&#34;/v1.1/about/notes/1.1.9&#34;&gt;
DOWNLOAD 1.1.8
&lt;/button&gt;
&lt;a class=&#34;btn&#34; href=&#34;https://istio.io/docs&#34;&gt;1.1.8 DOCS&lt;/a&gt;
&lt;a class=&#34;btn&#34; href=&#34;https://github.com/istio/istio/compare/1.1.7...1.1.8&#34;&gt;CHANGES IN 1.1.8&lt;/a&gt;
&lt;/div&gt;
&lt;h2 id=&#34;bug-fixes&#34;&gt;Bug fixes&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Fix &lt;code&gt;PASSTHROUGH&lt;/code&gt; &lt;code&gt;DestinationRules&lt;/code&gt; for CDS clusters (&lt;a href=&#34;https://github.com/istio/istio/issues/13744&#34;&gt;Issue 13744&lt;/a&gt;).&lt;/li&gt;
&lt;li&gt;Make the &lt;code&gt;appVersion&lt;/code&gt; and &lt;code&gt;version&lt;/code&gt; fields in the Helm charts display the correct Istio version (&lt;a href=&#34;https://github.com/istio/istio/issues/14290&#34;&gt;Issue 14290&lt;/a&gt;).&lt;/li&gt;
&lt;li&gt;Fix Mixer crash affecting both policy and telemetry servers (&lt;a href=&#34;https://github.com/istio/istio/issues/14235&#34;&gt;Issue 14235&lt;/a&gt;).&lt;/li&gt;
&lt;li&gt;Fix multicluster issue where two pods in different clusters could not share the same IP address (&lt;a href=&#34;https://github.com/istio/istio/issues/14066&#34;&gt;Issue 14066&lt;/a&gt;).&lt;/li&gt;
&lt;li&gt;Fix issue where Citadel could generate a new root CA if it cannot contact the Kubernetes API server, causing mutual TLS verification to incorrectly fail (&lt;a href=&#34;https://github.com/istio/istio/issues/14512&#34;&gt;Issue 14512&lt;/a&gt;).&lt;/li&gt;
&lt;li&gt;Improve Pilot validation to reject different &lt;code&gt;VirtualServices&lt;/code&gt; with the same domain since Envoy will not accept them (&lt;a href=&#34;https://github.com/istio/istio/issues/13267&#34;&gt;Issue 13267&lt;/a&gt;).&lt;/li&gt;
&lt;li&gt;Fix locality load balancing issue where only one replica in a locality would receive traffic (&lt;a href=&#34;https://github.com/istio/istio/issues/13994&#34;&gt;Issue 13994&lt;/a&gt;).&lt;/li&gt;
&lt;li&gt;Fix issue where Pilot Agent might not notice a TLS certificate rotation (&lt;a href=&#34;https://github.com/istio/istio/issues/14539&#34;&gt;Issue 14539&lt;/a&gt;).&lt;/li&gt;
&lt;li&gt;Fix a &lt;code&gt;LuaJIT&lt;/code&gt; panic in Envoy (&lt;a href=&#34;https://github.com/envoyproxy/envoy/pull/6994&#34;&gt;Envoy Issue 6994&lt;/a&gt;).&lt;/li&gt;
&lt;li&gt;Fix a race condition where Envoy might reuse an HTTP/1.1 connection after the downstream peer had already closed the TCP connection, causing 503 errors and retries (&lt;a href=&#34;https://github.com/istio/istio/issues/14037&#34;&gt;Issue 14037&lt;/a&gt;).&lt;/li&gt;
&lt;li&gt;Fix a tracing issue in Mixer&amp;rsquo;s Zipkin adapter causing missing spans (&lt;a href=&#34;https://github.com/istio/istio/issues/13391&#34;&gt;Issue 13391&lt;/a&gt;).&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;small-enhancements&#34;&gt;Small enhancements&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Reduce Pilot log spam by logging the &lt;code&gt;the endpoints within network ... will be ignored for no network configured&lt;/code&gt; message at &lt;code&gt;DEBUG&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Make it easier to roll back by making pilot-agent ignore unknown flags.&lt;/li&gt;
&lt;li&gt;Update Citadel&amp;rsquo;s default root CA certificate TTL from 1 year to 10 years.&lt;/li&gt;
&lt;/ul&gt;</description><pubDate>Thu, 06 Jun 2019 00:00:00 +0000</pubDate><link>/v1.1/blog/2019/announcing-1.1.8/</link><author>The Istio Team</author><guid isPermaLink="true">/v1.1/blog/2019/announcing-1.1.8/</guid></item><item><title>Security Update - CVE-2019-12243</title><description>
&lt;p&gt;During review of the &lt;a href=&#34;/v1.1/about/notes/1.1.7&#34;&gt;Istio 1.1.7&lt;/a&gt; release notes, we realized that &lt;a href=&#34;https://github.com/istio/istio/issues/13868&#34;&gt;issue 13868&lt;/a&gt;,
which is fixed in the release, actually represents a security vulnerability.&lt;/p&gt;
&lt;p&gt;Initially we thought the bug was impacting the &lt;a href=&#34;/v1.1/about/feature-stages/#security-and-policy-enforcement&#34;&gt;TCP Authorization&lt;/a&gt; feature advertised
as alpha stability, which would not have required invoking this security advisory process, but we later realized that the
&lt;a href=&#34;/v1.1/docs/reference/config/policy-and-telemetry/adapters/denier/&#34;&gt;Deny Checker&lt;/a&gt; and
&lt;a href=&#34;/v1.1/docs/reference/config/policy-and-telemetry/adapters/list/&#34;&gt;List Checker&lt;/a&gt; feature were affected and those are considered stable features.
We are revisiting our processes to flag vulnerabilities that are initially reported as bugs instead of through the
&lt;a href=&#34;/v1.1/about/security-vulnerabilities/&#34;&gt;private disclosure process&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;We tracked the bug to a code change introduced in Istio 1.1 and affecting all versions up to 1.1.6.&lt;/p&gt;
&lt;p&gt;This vulnerability is referred to as &lt;a href=&#34;https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-12243&#34;&gt;CVE 2019-12243&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id=&#34;affected-istio-releases&#34;&gt;Affected Istio releases&lt;/h2&gt;
&lt;p&gt;The following Istio releases are vulnerable:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;1.1, 1.1.1, 1.1.2, 1.1.3, 1.1.4, 1.1.5, 1.1.6&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;impact-score&#34;&gt;Impact score&lt;/h2&gt;
&lt;p&gt;Overall CVSS score: 8.9 &lt;a href=&#34;https://nvd.nist.gov/vuln-metrics/cvss/v3-calculator?vector=AV:A/AC:L/PR:N/UI:N/S:C/C:H/I:H/A:N/E:H/RL:O/RC:C&#34;&gt;AV:A/AC:L/PR:N/UI:N/S:C/C:H/I:H/A:N/E:H/RL:O/RC:C&lt;/a&gt;&lt;/p&gt;
&lt;h2 id=&#34;vulnerability-impact-and-detection&#34;&gt;Vulnerability impact and detection&lt;/h2&gt;
&lt;p&gt;Since Istio 1.1, policy enforcement is disabled by default in the default installation profile.&lt;/p&gt;
&lt;p&gt;You can check the status of policy enforcement for your mesh with the following command:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ kubectl -n istio-system get cm istio -o jsonpath=&amp;#34;{@.data.mesh}&amp;#34; | grep disablePolicyChecks
disablePolicyChecks: true
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You are not impacted by this vulnerability if &lt;code&gt;disablePolicyChecks&lt;/code&gt; is set to true.&lt;/p&gt;
&lt;p&gt;You are impacted by the vulnerability issue if the following conditions are all true:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;You are running one of the affected Istio releases.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;disablePolicyChecks&lt;/code&gt; is set to false (follow the steps above to check).&lt;/li&gt;
&lt;li&gt;Your workload is NOT using HTTP, HTTP/2, or gRPC protocols.&lt;/li&gt;
&lt;li&gt;A mixer adapter (e.g., Deny Checker, List Checker) is used to provide authorization for your backend TCP service.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;mitigation&#34;&gt;Mitigation&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Users of Istio 1.0.x are not affected&lt;/li&gt;
&lt;li&gt;For Istio 1.1.x deployments: update to a minimum version of &lt;a href=&#34;/v1.1/about/notes/1.1.7&#34;&gt;Istio 1.1.7&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;credit&#34;&gt;Credit&lt;/h2&gt;
&lt;p&gt;The Istio team would like to thank &lt;code&gt;Haim Helman&lt;/code&gt; for the original bug report.&lt;/p&gt;</description><pubDate>Tue, 28 May 2019 00:00:00 +0000</pubDate><link>/v1.1/blog/2019/cve-2019-12243/</link><author>The Istio Team</author><guid isPermaLink="true">/v1.1/blog/2019/cve-2019-12243/</guid></item><item><title>Support for Istio 1.0 ends on June 19th, 2019</title><description>&lt;p&gt;According to Istio&amp;rsquo;s &lt;a href=&#34;/v1.1/about/release-cadence/&#34;&gt;support policy&lt;/a&gt;, LTS releases like 1.0 are supported for three months after the next LTS release. Since &lt;a href=&#34;/v1.1/about/notes/1.1/&#34;&gt;1.1 was released on March 19th&lt;/a&gt;, support for 1.0 will end on June 19th, 2019.&lt;/p&gt;
&lt;p&gt;At that point we will stop back-porting fixes for security issues and critical bugs to 1.0, so we encourage you to upgrade to the latest version of Istio (1.1.9). If you don&amp;rsquo;t do this you may put yourself in the position of having to do a major upgrade on a short timeframe to pick up a critical fix.&lt;/p&gt;
&lt;p&gt;We care about you and your clusters, so please be kind to yourself and upgrade.&lt;/p&gt;</description><pubDate>Thu, 23 May 2019 00:00:00 +0000</pubDate><link>/v1.1/blog/2019/announcing-1.0-eol/</link><author>The Istio Team</author><guid isPermaLink="true">/v1.1/blog/2019/announcing-1.0-eol/</guid></item><item><title>Announcing Istio 1.1.7</title><description>&lt;p&gt;We&amp;rsquo;re pleased to announce the availability of Istio 1.1.7. Please see below for what&amp;rsquo;s changed.&lt;/p&gt;
&lt;div class=&#34;call-to-action&#34;&gt;
&lt;button class=&#34;btn update-notice&#34;
data-title=&#39;Update Notice&#39;
data-downloadhref=&#34;https://github.com/istio/istio/releases/tag/1.1.7&#34;
data-updateadvice=&#39;Before you download 1.1.7, you should know that there&amp;#39;s a newer patch release with the latest bug fixes and perf improvements.&#39;
data-updatebutton=&#39;LEARN ABOUT ISTIO 1.1.9&#39;
data-updatehref=&#34;/v1.1/about/notes/1.1.9&#34;&gt;
DOWNLOAD 1.1.7
&lt;/button&gt;
&lt;a class=&#34;btn&#34; href=&#34;https://istio.io/docs&#34;&gt;1.1.7 DOCS&lt;/a&gt;
&lt;a class=&#34;btn&#34; href=&#34;https://github.com/istio/istio/compare/1.1.6...1.1.7&#34;&gt;CHANGES IN 1.1.7&lt;/a&gt;
&lt;/div&gt;
&lt;h2 id=&#34;security-update&#34;&gt;Security update&lt;/h2&gt;
&lt;p&gt;This release fixes &lt;a href=&#34;/v1.1/blog/2019/cve-2019-12243&#34;&gt;CVE 2019-12243&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id=&#34;bug-fixes&#34;&gt;Bug fixes&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Fix issue where two gateways with overlapping hosts, created at the same second, could cause Pilot to fail to generate routes correctly, leaving Envoy listeners stuck indefinitely in a warming state at startup.&lt;/li&gt;
&lt;li&gt;Improve the robustness of the SDS node agent: if Envoy sends an SDS request with an empty &lt;code&gt;ResourceNames&lt;/code&gt;, ignore it and wait for the next request instead of closing the connection (&lt;a href=&#34;https://github.com/istio/istio/issues/13853&#34;&gt;Issue 13853&lt;/a&gt;).&lt;/li&gt;
&lt;li&gt;In prior releases Pilot automatically injected the experimental &lt;code&gt;envoy.filters.network.mysql_proxy&lt;/code&gt; filter into the outbound filter chain if the service port name is &lt;code&gt;mysql&lt;/code&gt;. This was surprising and caused issues for some operators, so Pilot will now automatically inject the &lt;code&gt;envoy.filters.network.mysql_proxy&lt;/code&gt; filter only if the &lt;code&gt;PILOT_ENABLE_MYSQL_FILTER&lt;/code&gt; environment variable is set to &lt;code&gt;1&lt;/code&gt; (&lt;a href=&#34;https://github.com/istio/istio/issues/13998&#34;&gt;Issue 13998&lt;/a&gt;).&lt;/li&gt;
&lt;li&gt;Fix issue where Mixer policy checks were incorrectly disabled for TCP (&lt;a href=&#34;https://github.com/istio/istio/issues/13868&#34;&gt;Issue 13868&lt;/a&gt;).&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;small-enhancements&#34;&gt;Small enhancements&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Add &lt;code&gt;--applicationPorts&lt;/code&gt; option to the &lt;code&gt;ingressgateway&lt;/code&gt; Helm charts. When set to a comma-delimited list of ports, readiness checks will fail until all the ports become active. When configured, traffic will not be sent to Envoys stuck in the warming state.&lt;/li&gt;
&lt;li&gt;Increase the memory limit in the &lt;code&gt;ingressgateway&lt;/code&gt; Helm chart to 1GB and add resource &lt;code&gt;requests&lt;/code&gt; and &lt;code&gt;limits&lt;/code&gt; to the SDS node agent container to support HPA autoscaling.&lt;/li&gt;
&lt;/ul&gt;</description><pubDate>Fri, 17 May 2019 00:00:00 +0000</pubDate><link>/v1.1/blog/2019/announcing-1.1.7/</link><author>The Istio Team</author><guid isPermaLink="true">/v1.1/blog/2019/announcing-1.1.7/</guid></item><item><title>Announcing Istio 1.1.6</title><description>&lt;p&gt;We&amp;rsquo;re pleased to announce the availability of Istio 1.1.6. Please see below for what&amp;rsquo;s changed.&lt;/p&gt;
&lt;div class=&#34;call-to-action&#34;&gt;
&lt;button class=&#34;btn update-notice&#34;
data-title=&#39;Update Notice&#39;
data-downloadhref=&#34;https://github.com/istio/istio/releases/tag/1.1.6&#34;
data-updateadvice=&#39;Before you download 1.1.6, you should know that there&amp;#39;s a newer patch release with the latest bug fixes and perf improvements.&#39;
data-updatebutton=&#39;LEARN ABOUT ISTIO 1.1.9&#39;
data-updatehref=&#34;/v1.1/about/notes/1.1.9&#34;&gt;
DOWNLOAD 1.1.6
&lt;/button&gt;
&lt;a class=&#34;btn&#34; href=&#34;https://istio.io/docs&#34;&gt;1.1.6 DOCS&lt;/a&gt;
&lt;a class=&#34;btn&#34; href=&#34;https://github.com/istio/istio/compare/1.1.5...1.1.6&#34;&gt;CHANGES IN 1.1.6&lt;/a&gt;
&lt;/div&gt;
&lt;h2 id=&#34;bug-fixes&#34;&gt;Bug fixes&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Fix Galley Helm charts so that the &lt;code&gt;validatingwebhookconfiguration&lt;/code&gt; object can now be deployed to a namespace other than &lt;code&gt;istio-system&lt;/code&gt; (&lt;a href=&#34;https://github.com/istio/istio/issues/13625&#34;&gt;Issue 13625&lt;/a&gt;).&lt;/li&gt;
&lt;li&gt;Additional Helm chart fixes for anti-affinity support: fix &lt;code&gt;gatewaypodAntiAffinityRequiredDuringScheduling&lt;/code&gt; and &lt;code&gt;podAntiAffinityLabelSelector&lt;/code&gt; match expressions and fix the default value for &lt;code&gt;podAntiAffinityLabelSelector&lt;/code&gt; (&lt;a href=&#34;https://github.com/istio/istio/issues/13892&#34;&gt;Issue 13892&lt;/a&gt;).&lt;/li&gt;
&lt;li&gt;Make Pilot handle a condition where Envoy continues to request routes for a deleted gateway while listeners are still draining (&lt;a href=&#34;https://github.com/istio/istio/issues/13739&#34;&gt;Issue 13739&lt;/a&gt;).&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;small-enhancements&#34;&gt;Small enhancements&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;If access logs are enabled, &lt;code&gt;passthrough&lt;/code&gt; listener requests will be logged.&lt;/li&gt;
&lt;li&gt;Make Pilot tolerate unknown JSON fields to make it easier to roll back to older versions during upgrade.&lt;/li&gt;
&lt;li&gt;Add support for fallback secrets to &lt;code&gt;SDS&lt;/code&gt; which Envoy can use instead of waiting indefinitely for late or non-existent secrets during startup (&lt;a href=&#34;https://github.com/istio/istio/issues/13853&#34;&gt;Issue 13853&lt;/a&gt;).&lt;/li&gt;
&lt;/ul&gt;</description><pubDate>Sat, 11 May 2019 00:00:00 +0000</pubDate><link>/v1.1/blog/2019/announcing-1.1.6/</link><author>The Istio Team</author><guid isPermaLink="true">/v1.1/blog/2019/announcing-1.1.6/</guid></item><item><title>Announcing Istio 1.1.5</title><description>&lt;p&gt;We&amp;rsquo;re pleased to announce the availability of Istio 1.1.5. Please see below for what&amp;rsquo;s changed.&lt;/p&gt;
&lt;div class=&#34;call-to-action&#34;&gt;
&lt;button class=&#34;btn update-notice&#34;
data-title=&#39;Update Notice&#39;
data-downloadhref=&#34;https://github.com/istio/istio/releases/tag/1.1.5&#34;
data-updateadvice=&#39;Before you download 1.1.5, you should know that there&amp;#39;s a newer patch release with the latest bug fixes and perf improvements.&#39;
data-updatebutton=&#39;LEARN ABOUT ISTIO 1.1.9&#39;
data-updatehref=&#34;/v1.1/about/notes/1.1.9&#34;&gt;
DOWNLOAD 1.1.5
&lt;/button&gt;
&lt;a class=&#34;btn&#34; href=&#34;https://istio.io/docs&#34;&gt;1.1.5 DOCS&lt;/a&gt;
&lt;a class=&#34;btn&#34; href=&#34;https://github.com/istio/istio/compare/1.1.4...1.1.5&#34;&gt;CHANGES IN 1.1.5&lt;/a&gt;
&lt;/div&gt;
&lt;h2 id=&#34;bug-fixes&#34;&gt;Bug fixes&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Add additional validation to Pilot to reject gateway configuration with overlapping hosts matches (&lt;a href=&#34;https://github.com/istio/istio/issues/13717&#34;&gt;Issue 13717&lt;/a&gt;).&lt;/li&gt;
&lt;li&gt;Build against the latest stable version of &lt;code&gt;istio-cni&lt;/code&gt; instead of the latest daily build (&lt;a href=&#34;https://github.com/istio/istio/issues/13171&#34;&gt;Issue 13171&lt;/a&gt;).&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;small-enhancements&#34;&gt;Small enhancements&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Add additional logging to help diagnose hostname resolution failures (&lt;a href=&#34;https://github.com/istio/istio/issues/13581&#34;&gt;Issue 13581&lt;/a&gt;).&lt;/li&gt;
&lt;li&gt;Improve ease of installing &lt;code&gt;prometheus&lt;/code&gt; by removing unnecessary use of the &lt;code&gt;busybox&lt;/code&gt; image (&lt;a href=&#34;https://github.com/istio/istio/issues/13501&#34;&gt;Issue 13501&lt;/a&gt;).&lt;/li&gt;
&lt;li&gt;Make Pilot Agent&amp;rsquo;s certificate paths configurable (&lt;a href=&#34;https://github.com/istio/istio/issues/11984&#34;&gt;Issue 11984&lt;/a&gt;).&lt;/li&gt;
&lt;/ul&gt;</description><pubDate>Fri, 03 May 2019 00:00:00 +0000</pubDate><link>/v1.1/blog/2019/announcing-1.1.5/</link><author>The Istio Team</author><guid isPermaLink="true">/v1.1/blog/2019/announcing-1.1.5/</guid></item><item><title>Announcing Istio 1.1.4</title><description>&lt;p&gt;We&amp;rsquo;re pleased to announce the availability of Istio 1.1.4. Please see below for what&amp;rsquo;s changed.&lt;/p&gt;
&lt;div class=&#34;call-to-action&#34;&gt;
&lt;button class=&#34;btn update-notice&#34;
data-title=&#39;Update Notice&#39;
data-downloadhref=&#34;https://github.com/istio/istio/releases/tag/1.1.4&#34;
data-updateadvice=&#39;Before you download 1.1.4, you should know that there&amp;#39;s a newer patch release with the latest bug fixes and perf improvements.&#39;
data-updatebutton=&#39;LEARN ABOUT ISTIO 1.1.9&#39;
data-updatehref=&#34;/v1.1/about/notes/1.1.9&#34;&gt;
DOWNLOAD 1.1.4
&lt;/button&gt;
&lt;a class=&#34;btn&#34; href=&#34;https://istio.io/docs&#34;&gt;1.1.4 DOCS&lt;/a&gt;
&lt;a class=&#34;btn&#34; href=&#34;https://github.com/istio/istio/compare/1.1.3...1.1.4&#34;&gt;CHANGES IN 1.1.4&lt;/a&gt;
&lt;/div&gt;
&lt;h2 id=&#34;behavior-change&#34;&gt;Behavior change&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Changed the default behavior for Pilot to allow traffic to destinations outside the mesh, even if it uses the same port as an internal service.
This behavior can be controlled by the &lt;code&gt;PILOT_ENABLE_FALLTHROUGH_ROUTE&lt;/code&gt; environment variable.&lt;/li&gt;
&lt;/ul&gt;
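&lt;p&gt;As a hypothetical illustration (not part of the release notes), the previous behavior could be restored by setting the environment variable on the Pilot deployment, assuming the control plane runs as the &lt;code&gt;istio-pilot&lt;/code&gt; deployment in &lt;code&gt;istio-system&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ kubectl -n istio-system set env deployment/istio-pilot PILOT_ENABLE_FALLTHROUGH_ROUTE=0
&lt;/code&gt;&lt;/pre&gt;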
&lt;h2 id=&#34;bug-fixes&#34;&gt;Bug fixes&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Fixed egress route generation for services of type &lt;code&gt;ExternalName&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Added support for configuring Envoy&amp;rsquo;s idle connection timeout, which prevents running out of
memory or IP ports over time (&lt;a href=&#34;https://github.com/istio/istio/issues/13355&#34;&gt;Issue 13355&lt;/a&gt;).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Fixed a crashing bug in Pilot in failover handling of locality-based load balancing.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Fixed a crashing bug in Pilot when it was given custom certificate paths.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Fixed a bug in Pilot where it was ignoring short names used as service entry hosts (&lt;a href=&#34;https://github.com/istio/istio/issues/13436&#34;&gt;Issue 13436&lt;/a&gt;).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Added missing &lt;code&gt;https_protocol_options&lt;/code&gt; to the envoy-metrics-service cluster configuration.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Fixed a bug in Pilot where it didn&amp;rsquo;t handle HTTPS traffic correctly in the fallthrough route case (&lt;a href=&#34;https://github.com/istio/istio/issues/13386&#34;&gt;Issue 13386&lt;/a&gt;).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Fixed a bug where Pilot didn&amp;rsquo;t remove endpoints from Envoy after they were removed from Kubernetes (&lt;a href=&#34;https://github.com/istio/istio/issues/13402&#34;&gt;Issue 13402&lt;/a&gt;).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Fixed a crashing bug in the node agent (&lt;a href=&#34;https://github.com/istio/istio/issues/13325&#34;&gt;Issue 13325&lt;/a&gt;).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Added missing validation to prevent gateway names from containing dots (&lt;a href=&#34;https://github.com/istio/istio/issues/13211&#34;&gt;Issue 13211&lt;/a&gt;).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Fixed bug where &lt;a href=&#34;/v1.1/docs/reference/config/networking/v1alpha3/destination-rule/#LoadBalancerSettings-ConsistentHashLB&#34;&gt;&lt;code&gt;ConsistentHashLB.minimumRingSize&lt;/code&gt;&lt;/a&gt;
was defaulting to 0 instead of the documented 1024 (&lt;a href=&#34;https://github.com/istio/istio/issues/13261&#34;&gt;Issue 13261&lt;/a&gt;).&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;small-enhancements&#34;&gt;Small enhancements&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Updated to the latest version of the &lt;a href=&#34;https://www.kiali.io&#34;&gt;Kiali&lt;/a&gt; add-on.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Updated to the latest version of &lt;a href=&#34;https://grafana.com&#34;&gt;Grafana&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Added validation to ensure Citadel is only deployed with a single replica (&lt;a href=&#34;https://github.com/istio/istio/issues/13383&#34;&gt;Issue 13383&lt;/a&gt;).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Added support to configure the logging level of the proxy and Istio control plane (&lt;a href=&#34;https://github.com/istio/istio/issues/11847&#34;&gt;Issue 11847&lt;/a&gt;).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Allow sidecars to bind to any loopback address and not just 127.0.0.1 (&lt;a href=&#34;https://github.com/istio/istio/issues/13201&#34;&gt;Issue 13201&lt;/a&gt;).&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;</description><pubDate>Fri, 26 Apr 2019 00:00:00 +0000</pubDate><link>/v1.1/blog/2019/announcing-1.1.4/</link><author>The Istio Team</author><guid isPermaLink="true">/v1.1/blog/2019/announcing-1.1.4/</guid></item><item><title>Announcing Istio 1.1.3</title><description>&lt;p&gt;We&amp;rsquo;re pleased to announce the availability of Istio 1.1.3. Please see below for what&amp;rsquo;s changed.&lt;/p&gt;
&lt;div class=&#34;call-to-action&#34;&gt;
&lt;button class=&#34;btn update-notice&#34;
data-title=&#39;Update Notice&#39;
data-downloadhref=&#34;https://github.com/istio/istio/releases/tag/1.1.3&#34;
data-updateadvice=&#39;Before you download 1.1.3, you should know that there&amp;#39;s a newer patch release with the latest bug fixes and perf improvements.&#39;
data-updatebutton=&#39;LEARN ABOUT ISTIO 1.1.9&#39;
data-updatehref=&#34;/v1.1/about/notes/1.1.9&#34;&gt;
DOWNLOAD 1.1.3
&lt;/button&gt;
&lt;a class=&#34;btn&#34; href=&#34;https://istio.io/docs&#34;&gt;1.1.3 DOCS&lt;/a&gt;
&lt;a class=&#34;btn&#34; href=&#34;https://github.com/istio/istio/compare/1.1.2...1.1.3&#34;&gt;CHANGES IN 1.1.3&lt;/a&gt;
&lt;/div&gt;
&lt;h2 id=&#34;known-issues-with-1-1-3&#34;&gt;Known issues with 1.1.3&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;A &lt;a href=&#34;https://github.com/istio/istio/issues/13325&#34;&gt;panic in the Node Agent&lt;/a&gt; was discovered late in the 1.1.3 qualification process. The panic only occurs in clusters with the alpha-quality SDS certificate rotation feature enabled. Since this is the first time we have included SDS certificate rotation in our long-running release tests, we don&amp;rsquo;t know whether this is a latent bug or a new regression. Considering SDS certificate rotation is in alpha, we have decided to release 1.1.3 with this issue and target a fix for the 1.1.4 release.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;bug-fixes&#34;&gt;Bug fixes&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Istio-specific back-ports of Envoy patches for &lt;a href=&#34;https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-9900&#34;&gt;&lt;code&gt;CVE-2019-9900&lt;/code&gt;&lt;/a&gt; and
&lt;a href=&#34;https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-9901&#34;&gt;&lt;code&gt;CVE-2019-9901&lt;/code&gt;&lt;/a&gt; included in Istio 1.1.2 have been dropped in favor of an
Envoy update which contains the final version of the patches.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Fix load balancer weight setting for split horizon &lt;code&gt;EDS&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Fix typo in the default Envoy &lt;code&gt;JSON&lt;/code&gt; log format (&lt;a href=&#34;https://github.com/istio/istio/issues/12232&#34;&gt;Issue 12232&lt;/a&gt;).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Correctly reload out-of-process adapter address upon configuration change (&lt;a href=&#34;https://github.com/istio/istio/issues/12488&#34;&gt;Issue 12488&lt;/a&gt;).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Restore Kiali settings that were accidentally deleted (&lt;a href=&#34;https://github.com/istio/istio/issues/3660&#34;&gt;Issue 3660&lt;/a&gt;).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Prevent services with the same target port from resulting in duplicate inbound listeners (&lt;a href=&#34;https://github.com/istio/istio/issues/9504&#34;&gt;Issue 9504&lt;/a&gt;).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Fix an issue where configuring &lt;code&gt;Sidecar&lt;/code&gt; &lt;code&gt;egress&lt;/code&gt; ports for namespaces other than &lt;code&gt;istio-system&lt;/code&gt; resulted in an &lt;code&gt;envoy.tcp_proxy&lt;/code&gt; filter pointing to &lt;code&gt;BlackHoleCluster&lt;/code&gt;, by auto binding
to services for &lt;code&gt;Sidecar&lt;/code&gt; listeners (&lt;a href=&#34;https://github.com/istio/istio/issues/12536&#34;&gt;Issue 12536&lt;/a&gt;).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Fix gateway &lt;code&gt;vhost&lt;/code&gt; configuration generation issue by favoring more specific host matches (&lt;a href=&#34;https://github.com/istio/istio/issues/12655&#34;&gt;Issue 12655&lt;/a&gt;).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Fix &lt;code&gt;ALLOW_ANY&lt;/code&gt; so it now allows external traffic if there is already an HTTP service present on a port.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Fix validation logic so that &lt;code&gt;port.name&lt;/code&gt; is no longer a valid &lt;code&gt;PortSelection&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Fix &lt;code&gt;istioctl proxy-config clusters&lt;/code&gt; cluster type column rendering (&lt;a href=&#34;https://github.com/istio/istio/issues/12455&#34;&gt;Issue 12455&lt;/a&gt;).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Fix SDS secret mount configuration.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Fix incorrect Istio version in the Helm charts.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Fix partial DNS failures in the presence of overlapping ports (&lt;a href=&#34;https://github.com/istio/istio/issues/11658&#34;&gt;Issue 11658&lt;/a&gt;).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Fix Helm &lt;code&gt;podAntiAffinity&lt;/code&gt; template error (&lt;a href=&#34;https://github.com/istio/istio/issues/12790&#34;&gt;Issue 12790&lt;/a&gt;).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Fix bug with the original destination service discovery not using the original destination load balancer.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Fix SDS memory leak in the presence of invalid or missing keying materials (&lt;a href=&#34;https://github.com/istio/istio/issues/13197&#34;&gt;Issue 13197&lt;/a&gt;).&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;small-enhancements&#34;&gt;Small enhancements&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Hide &lt;code&gt;ServiceAccounts&lt;/code&gt; from &lt;code&gt;PushContext&lt;/code&gt; log to reduce log volume.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Configure &lt;code&gt;localityLbSetting&lt;/code&gt; in &lt;code&gt;values.yaml&lt;/code&gt; by passing it through to the mesh configuration.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Remove the soon-to-be deprecated &lt;code&gt;critical-pod&lt;/code&gt; annotation from Helm charts (&lt;a href=&#34;https://github.com/istio/istio/issues/12650&#34;&gt;Issue 12650&lt;/a&gt;).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Support pod anti-affinity annotations to improve control plane availability (&lt;a href=&#34;https://github.com/istio/istio/issues/11333&#34;&gt;Issue 11333&lt;/a&gt;).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Pretty print &lt;code&gt;IP&lt;/code&gt; addresses in access logs.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Remove redundant write header to further reduce log volume.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Improve destination host validation in Pilot.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Explicitly configure &lt;code&gt;istio-init&lt;/code&gt; to run as root so use of pod-level &lt;code&gt;securityContext.runAsUser&lt;/code&gt; doesn&amp;rsquo;t break it (&lt;a href=&#34;https://github.com/istio/istio/issues/5453&#34;&gt;Issue 5453&lt;/a&gt;).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Add configuration samples for Vault integration.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Respect locality load balancing weight settings from &lt;code&gt;ServiceEntry&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Make the TLS certificate location watched by Pilot Agent configurable (&lt;a href=&#34;https://github.com/istio/istio/issues/11984&#34;&gt;Issue 11984&lt;/a&gt;).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Add support for Datadog tracing.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Add alias to &lt;code&gt;istioctl&lt;/code&gt; so &amp;lsquo;x&amp;rsquo; can be used instead of &amp;lsquo;experimental&amp;rsquo;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Provide improved distribution of sidecar certificates by adding jitter to their CSR requests.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Allow weighted load balancing registry locality to be configured.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Add support for standard CRDs for compiled-in Mixer adapters.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Reduce Pilot resource requirements for demo configuration.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Fully populate Galley dashboard by adding data source (&lt;a href=&#34;https://github.com/istio/istio/issues/13040&#34;&gt;Issue 13040&lt;/a&gt;).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Propagate 1.1.0 &lt;code&gt;sidecar&lt;/code&gt; performance tuning to the &lt;code&gt;istio-gateway&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Improve destination host validation by rejecting &lt;code&gt;*&lt;/code&gt; hosts (&lt;a href=&#34;https://github.com/istio/istio/issues/12794&#34;&gt;Issue 12794&lt;/a&gt;).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Expose upstream &lt;code&gt;idle_timeout&lt;/code&gt; in cluster definition so dead connections can sometimes be removed from connection pools before they are used
(&lt;a href=&#34;https://github.com/istio/istio/issues/9113&#34;&gt;Issue 9113&lt;/a&gt;).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;When registering a &lt;code&gt;Sidecar&lt;/code&gt; resource to restrict what a pod can see, the restrictions are now applied if the spec contains a
&lt;code&gt;workloadSelector&lt;/code&gt; (&lt;a href=&#34;https://github.com/istio/istio/issues/11818&#34;&gt;Issue 11818&lt;/a&gt;).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Update the Bookinfo example to use port 80 for TLS origination.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Add liveness probe for Citadel.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Improve AWS ELB interoperability by making 15020 the first port listed in the &lt;code&gt;ingressgateway&lt;/code&gt; service (&lt;a href=&#34;https://github.com/istio/istio/issues/12503&#34;&gt;Issue 12503&lt;/a&gt;).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Use outlier detection for failover mode but not for distribute mode for locality weighted load balancing (&lt;a href=&#34;https://github.com/istio/istio/issues/12961&#34;&gt;Issue 12961&lt;/a&gt;).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Replace generation of Envoy&amp;rsquo;s deprecated &lt;code&gt;enabled&lt;/code&gt; field in &lt;code&gt;CorsPolicy&lt;/code&gt; with the replacement &lt;code&gt;filter_enabled&lt;/code&gt; field for 1.1.0+ sidecars only.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Standardize labels on Mixer&amp;rsquo;s Helm charts.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;</description><pubDate>Mon, 15 Apr 2019 00:00:00 +0000</pubDate><link>/v1.1/blog/2019/announcing-1.1.3/</link><author>The Istio Team</author><guid isPermaLink="true">/v1.1/blog/2019/announcing-1.1.3/</guid></item><item><title>Announcing Istio 1.1.2 with Important Security Update</title><description>&lt;p&gt;We&amp;rsquo;re announcing immediate availability of Istio 1.1.2 which contains some important security updates. Please see below for details.&lt;/p&gt;
&lt;div class=&#34;call-to-action&#34;&gt;
&lt;button class=&#34;btn update-notice&#34;
data-title=&#39;Update Notice&#39;
data-downloadhref=&#34;https://github.com/istio/istio/releases/tag/1.1.2&#34;
data-updateadvice=&#39;Before you download 1.1.2, you should know that there&amp;#39;s a newer patch release with the latest bug fixes and perf improvements.&#39;
data-updatebutton=&#39;LEARN ABOUT ISTIO 1.1.9&#39;
data-updatehref=&#34;/v1.1/about/notes/1.1.9&#34;&gt;
DOWNLOAD 1.1.2
&lt;/button&gt;
&lt;a class=&#34;btn&#34; href=&#34;https://istio.io/docs&#34;&gt;1.1.2 DOCS&lt;/a&gt;
&lt;a class=&#34;btn&#34; href=&#34;https://github.com/istio/istio/compare/1.1.1...1.1.2&#34;&gt;CHANGES IN 1.1.2&lt;/a&gt;
&lt;/div&gt;
&lt;h2 id=&#34;security-update&#34;&gt;Security update&lt;/h2&gt;
&lt;p&gt;Two security vulnerabilities have recently been identified in the Envoy proxy
(&lt;a href=&#34;https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-9900&#34;&gt;CVE 2019-9900&lt;/a&gt; and &lt;a href=&#34;https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-9901&#34;&gt;CVE 2019-9901&lt;/a&gt;). The
vulnerabilities have now been patched in Envoy version 1.9.1, and correspondingly in the Envoy builds
embedded in Istio 1.1.2 and Istio 1.0.7. Since Envoy is an integral part of Istio, users are advised to update Istio
immediately to mitigate security risks arising from these vulnerabilities.&lt;/p&gt;
&lt;p&gt;The vulnerabilities are centered on the fact that Envoy did not normalize HTTP URI paths and did not fully validate HTTP/1.1 header values. These
vulnerabilities impact Istio features that rely on Envoy to enforce any of authorization, routing, or rate limiting.&lt;/p&gt;
&lt;h2 id=&#34;affected-istio-releases&#34;&gt;Affected Istio releases&lt;/h2&gt;
&lt;p&gt;The following Istio releases are vulnerable:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;1.1, 1.1.1&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;These releases can be patched to Istio 1.1.2.&lt;/li&gt;
&lt;li&gt;1.1.2 is built from the same source as 1.1.1 with the addition of Envoy patches minimally sufficient to address the CVEs.&lt;/li&gt;
&lt;/ul&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;1.0, 1.0.1, 1.0.2, 1.0.3, 1.0.4, 1.0.5, 1.0.6&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;These releases can be patched to Istio 1.0.7.&lt;/li&gt;
&lt;li&gt;1.0.7 is built from the same source as 1.0.6 with the addition of Envoy patches minimally sufficient to address the CVEs.&lt;/li&gt;
&lt;/ul&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;These releases are no longer supported and will not be patched. Please upgrade to a supported release with the necessary fixes.&lt;/li&gt;
&lt;/ul&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;vulnerability-impact&#34;&gt;Vulnerability impact&lt;/h2&gt;
&lt;p&gt;&lt;a href=&#34;https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-9900&#34;&gt;CVE 2019-9900&lt;/a&gt; and &lt;a href=&#34;https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-9901&#34;&gt;CVE 2019-9901&lt;/a&gt;
allow remote attackers access to unauthorized resources by using specially crafted request URI paths (9901) and NUL bytes in
HTTP/1.1 headers (9900), potentially circumventing DoS prevention systems such as rate limiting, or routing to an unexposed upstream system. Refer to
&lt;a href=&#34;https://github.com/envoyproxy/envoy/issues/6434&#34;&gt;issue 6434&lt;/a&gt;
and &lt;a href=&#34;https://github.com/envoyproxy/envoy/issues/6435&#34;&gt;issue 6435&lt;/a&gt; for more information.&lt;/p&gt;
&lt;p&gt;As Istio is based on Envoy, Istio customers can be affected by these vulnerabilities based on whether paths and request headers are used within Istio
policies or routing rules and how the backend HTTP implementation resolves them. If prefix path matching rules are used by Mixer or by Istio authorization
policies or the routing rules, an attacker could exploit these vulnerabilities to gain access to unauthorized paths on certain HTTP backends.&lt;/p&gt;
&lt;h2 id=&#34;mitigation&#34;&gt;Mitigation&lt;/h2&gt;
&lt;p&gt;Eliminating the vulnerabilities requires updating to a corrected version of Envoy. We&amp;rsquo;ve incorporated the necessary updates in the latest Istio patch releases.&lt;/p&gt;
&lt;p&gt;For Istio 1.1.x deployments: update to a minimum of &lt;a href=&#34;/v1.1/about/notes/1.1.2&#34;&gt;Istio 1.1.2&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;For Istio 1.0.x deployments: update to a minimum of &lt;a href=&#34;/v1.1/about/notes/1.0.7&#34;&gt;Istio 1.0.7&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;While Envoy 1.9.1 requires opting in to path normalization to address CVE 2019-9901, the version of Envoy embedded in Istio 1.1.2 and 1.0.7 enables path
normalization by default.&lt;/p&gt;
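For operators running standalone Envoy 1.9.1 (outside of Istio), opting in means setting the `normalize_path` field on the HTTP connection manager. The following fragment is an illustrative sketch only, with surrounding listener configuration abbreviated; Istio 1.1.2 and 1.0.7 already enable this by default, so no action is needed there:

```yaml
# Abbreviated HTTP connection manager filter config for standalone Envoy 1.9.1.
filters:
- name: envoy.http_connection_manager
  config:
    stat_prefix: ingress_http
    # Normalize URI paths before route matching (mitigation for CVE-2019-9901).
    normalize_path: true
    route_config: {}  # route configuration elided
```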
&lt;h2 id=&#34;detection-of-nul-header-exploit&#34;&gt;Detection of NUL header exploit&lt;/h2&gt;
&lt;p&gt;Based on current information, this only affects HTTP/1.1 traffic. If this is not structurally possible in your network or configuration, then it is unlikely
that this vulnerability applies.&lt;/p&gt;
&lt;p&gt;File-based access logging uses the &lt;code&gt;c_str()&lt;/code&gt; representation for header values, as does gRPC access logging, so there will be no trivial detection via
Envoy&amp;rsquo;s access logs by scanning for NUL. Instead, operators might look for inconsistencies in logs between the routing that Envoy performs and the logic
intended in the &lt;code&gt;RouteConfiguration&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;External authorization and rate limit services can check for NULs in headers. Backend servers might have sufficient logging to detect NULs or unintended
access; it&amp;rsquo;s likely that many will simply reject NULs in this scenario via 400 Bad Request, as per RFC 7230.&lt;/p&gt;
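The header scan described above is simple to implement in an external authorization service or backend. A minimal illustrative sketch in Go (not actual Envoy or Istio code; the header map shape and function name are this example's own):

```go
package main

import (
	"fmt"
	"strings"
)

// containsNUL reports whether any header value embeds a NUL byte,
// the kind of anomaly associated with CVE-2019-9900.
func containsNUL(headers map[string][]string) bool {
	for _, values := range headers {
		for _, v := range values {
			if strings.ContainsRune(v, '\x00') {
				return true
			}
		}
	}
	return false
}

func main() {
	h := map[string][]string{"x-user": {"alice\x00admin"}}
	fmt.Println(containsNUL(h)) // true
}
```

A service performing this check would typically reject the request with 400 Bad Request, matching the RFC 7230 behavior noted above.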
&lt;h2 id=&#34;detection-of-path-traversal-exploit&#34;&gt;Detection of path traversal exploit&lt;/h2&gt;
&lt;p&gt;Envoy&amp;rsquo;s access logs (whether file-based or gRPC) will contain the unnormalized path, so it is possible to examine these logs to detect suspicious patterns and
requests that are incongruous with the operator&amp;rsquo;s configuration intent. In addition, unnormalized paths are available at &lt;code&gt;ext_authz&lt;/code&gt;, rate limiting
and backend servers for log inspection.&lt;/p&gt;</description><pubDate>Fri, 05 Apr 2019 00:00:00 +0000</pubDate><link>/v1.1/blog/2019/announcing-1.1.2/</link><author>The Istio Team</author><guid isPermaLink="true">/v1.1/blog/2019/announcing-1.1.2/</guid></item><item><title>Announcing Istio 1.0.7 with Important Security Update</title><description>&lt;p&gt;We&amp;rsquo;re announcing immediate availability of Istio 1.0.7 which contains some important security updates. Please see below for details.&lt;/p&gt;
&lt;div class=&#34;call-to-action&#34;&gt;
&lt;button class=&#34;btn update-notice&#34;
data-title=&#39;Update Notice&#39;
data-downloadhref=&#34;https://github.com/istio/istio/releases/tag/1.0.7&#34;
data-updateadvice=&#39;Before you download 1.0.7, you should know that there&amp;#39;s a newer patch release with the latest bug fixes and perf improvements.&#39;
data-updatebutton=&#39;LEARN ABOUT ISTIO 1.0.8&#39;
data-updatehref=&#34;/v1.1/about/notes/1.0.8&#34;&gt;
DOWNLOAD 1.0.7
&lt;/button&gt;
&lt;a class=&#34;btn&#34; href=&#34;https://archive.istio.io/v1.0&#34;&gt;1.0.7 DOCS&lt;/a&gt;
&lt;a class=&#34;btn&#34; href=&#34;https://github.com/istio/istio/compare/1.0.6...1.0.7&#34;&gt;CHANGES IN 1.0.7&lt;/a&gt;
&lt;/div&gt;
&lt;h2 id=&#34;security-update&#34;&gt;Security update&lt;/h2&gt;
&lt;p&gt;Two security vulnerabilities have recently been identified in the Envoy proxy
(&lt;a href=&#34;https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-9900&#34;&gt;CVE 2019-9900&lt;/a&gt; and &lt;a href=&#34;https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-9901&#34;&gt;CVE 2019-9901&lt;/a&gt;). The
vulnerabilities have now been patched in Envoy version 1.9.1, and correspondingly in the Envoy builds
embedded in Istio 1.1.2 and Istio 1.0.7. Since Envoy is an integral part of Istio, users are advised to update Istio
immediately to mitigate security risks arising from these vulnerabilities.&lt;/p&gt;
&lt;p&gt;The vulnerabilities are centered on the fact that Envoy did not normalize HTTP URI paths and did not fully validate HTTP/1.1 header values. These
vulnerabilities impact Istio features that rely on Envoy to enforce any of authorization, routing, or rate limiting.&lt;/p&gt;
&lt;h2 id=&#34;affected-istio-releases&#34;&gt;Affected Istio releases&lt;/h2&gt;
&lt;p&gt;The following Istio releases are vulnerable:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;1.1, 1.1.1&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;These releases can be patched to Istio 1.1.2.&lt;/li&gt;
&lt;li&gt;1.1.2 is built from the same source as 1.1.1 with the addition of Envoy patches minimally sufficient to address the CVEs.&lt;/li&gt;
&lt;/ul&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;1.0, 1.0.1, 1.0.2, 1.0.3, 1.0.4, 1.0.5, 1.0.6&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;These releases can be patched to Istio 1.0.7.&lt;/li&gt;
&lt;li&gt;1.0.7 is built from the same source as 1.0.6 with the addition of Envoy patches minimally sufficient to address the CVEs.&lt;/li&gt;
&lt;/ul&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;These releases are no longer supported and will not be patched. Please upgrade to a supported release with the necessary fixes.&lt;/li&gt;
&lt;/ul&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;vulnerability-impact&#34;&gt;Vulnerability impact&lt;/h2&gt;
&lt;p&gt;&lt;a href=&#34;https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-9900&#34;&gt;CVE 2019-9900&lt;/a&gt; and &lt;a href=&#34;https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-9901&#34;&gt;CVE 2019-9901&lt;/a&gt;
allow remote attackers access to unauthorized resources by using specially crafted request URI paths (9901) and NUL bytes in
HTTP/1.1 headers (9900), potentially circumventing DoS prevention systems such as rate limiting, or routing to an unexposed upstream system. Refer to
&lt;a href=&#34;https://github.com/envoyproxy/envoy/issues/6434&#34;&gt;issue 6434&lt;/a&gt;
and &lt;a href=&#34;https://github.com/envoyproxy/envoy/issues/6435&#34;&gt;issue 6435&lt;/a&gt; for more information.&lt;/p&gt;
&lt;p&gt;As Istio is based on Envoy, Istio customers can be affected by these vulnerabilities based on whether paths and request headers are used within Istio
policies or routing rules and how the backend HTTP implementation resolves them. If prefix path matching rules are used by Mixer or by Istio authorization
policies or the routing rules, an attacker could exploit these vulnerabilities to gain access to unauthorized paths on certain HTTP backends.&lt;/p&gt;
&lt;h2 id=&#34;mitigation&#34;&gt;Mitigation&lt;/h2&gt;
&lt;p&gt;Eliminating the vulnerabilities requires updating to a corrected version of Envoy. We&amp;rsquo;ve incorporated the necessary updates in the latest Istio patch releases.&lt;/p&gt;
&lt;p&gt;For Istio 1.1.x deployments: update to a minimum of &lt;a href=&#34;/v1.1/about/notes/1.1.2&#34;&gt;Istio 1.1.2&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;For Istio 1.0.x deployments: update to a minimum of &lt;a href=&#34;/v1.1/about/notes/1.0.7&#34;&gt;Istio 1.0.7&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;While Envoy 1.9.1 requires opting in to path normalization to address CVE 2019-9901, the version of Envoy embedded in Istio 1.1.2 and 1.0.7 enables path
normalization by default.&lt;/p&gt;
&lt;h2 id=&#34;detection-of-nul-header-exploit&#34;&gt;Detection of NUL header exploit&lt;/h2&gt;
&lt;p&gt;Based on current information, this only affects HTTP/1.1 traffic. If this is not structurally possible in your network or configuration, then it is unlikely
that this vulnerability applies.&lt;/p&gt;
&lt;p&gt;File-based access logging uses the &lt;code&gt;c_str()&lt;/code&gt; representation for header values, as does gRPC access logging, so there will be no trivial detection via
Envoy&amp;rsquo;s access logs by scanning for NUL. Instead, operators might look for inconsistencies in logs between the routing that Envoy performs and the logic
intended in the &lt;code&gt;RouteConfiguration&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;External authorization and rate limit services can check for NULs in headers. Backend servers might have sufficient logging to detect NULs or unintended
access; it&amp;rsquo;s likely that many will simply reject NULs in this scenario via 400 Bad Request, as per RFC 7230.&lt;/p&gt;
&lt;h2 id=&#34;detection-of-path-traversal-exploit&#34;&gt;Detection of path traversal exploit&lt;/h2&gt;
&lt;p&gt;Envoy&amp;rsquo;s access logs (whether file-based or gRPC) will contain the unnormalized path, so it is possible to examine these logs to detect suspicious patterns and
requests that are incongruous with the operator&amp;rsquo;s configuration intent. In addition, unnormalized paths are available at &lt;code&gt;ext_authz&lt;/code&gt;, rate limiting
and backend servers for log inspection.&lt;/p&gt;</description><pubDate>Fri, 05 Apr 2019 00:00:00 +0000</pubDate><link>/v1.1/blog/2019/announcing-1.0.7/</link><author>The Istio Team</author><guid isPermaLink="true">/v1.1/blog/2019/announcing-1.0.7/</guid></item><item><title>Announcing Istio 1.1.1</title><description>&lt;p&gt;We&amp;rsquo;re pleased to announce the availability of Istio 1.1.1. Please see below for what&amp;rsquo;s changed.&lt;/p&gt;
&lt;div class=&#34;call-to-action&#34;&gt;
&lt;button class=&#34;btn update-notice&#34;
data-title=&#39;Update Notice&#39;
data-downloadhref=&#34;https://github.com/istio/istio/releases/tag/1.1.1&#34;
data-updateadvice=&#39;Before you download 1.1.1, you should know that there&amp;#39;s a newer patch release with the latest bug fixes and perf improvements.&#39;
data-updatebutton=&#39;LEARN ABOUT ISTIO 1.1.9&#39;
data-updatehref=&#34;/v1.1/about/notes/1.1.9&#34;&gt;
DOWNLOAD 1.1.1
&lt;/button&gt;
&lt;a class=&#34;btn&#34; href=&#34;https://istio.io/docs&#34;&gt;1.1.1 DOCS&lt;/a&gt;
&lt;a class=&#34;btn&#34; href=&#34;https://github.com/istio/istio/compare/1.1.0...1.1.1&#34;&gt;CHANGES IN 1.1.1&lt;/a&gt;
&lt;/div&gt;
&lt;h2 id=&#34;bug-fixes-and-minor-enhancements&#34;&gt;Bug fixes and minor enhancements&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Configure Prometheus to monitor Citadel (&lt;a href=&#34;https://github.com/istio/istio/pull/12175&#34;&gt;Issue 12175&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Improve output of &lt;a href=&#34;/v1.1/docs/reference/commands/istioctl/#istioctl-experimental-verify-install&#34;&gt;&lt;code&gt;istioctl experimental verify-install&lt;/code&gt;&lt;/a&gt; command (&lt;a href=&#34;https://github.com/istio/istio/pull/12174&#34;&gt;Issue 12174&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Reduce log level for missing service account messages for a SPIFFE URI (&lt;a href=&#34;https://github.com/istio/istio/issues/12108&#34;&gt;Issue 12108&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Fix broken path on the opt-in SDS feature&amp;rsquo;s Unix domain socket (&lt;a href=&#34;https://github.com/istio/istio/pull/12688&#34;&gt;Issue 12688&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Fix Envoy tracing that was preventing a child span from being created if the parent span was propagated with an empty string (&lt;a href=&#34;https://github.com/envoyproxy/envoy/pull/6263&#34;&gt;Envoy Issue 6263&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Add namespace scoping to the Gateway &amp;lsquo;port&amp;rsquo; names. This fixes two issues:
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;IngressGateway&lt;/code&gt; only respects first port 443 Gateway definition (&lt;a href=&#34;https://github.com/istio/istio/issues/11509&#34;&gt;Issue 11509&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Istio &lt;code&gt;IngressGateway&lt;/code&gt; routing broken with two different gateways with same port name (SDS) (&lt;a href=&#34;https://github.com/istio/istio/issues/12500&#34;&gt;Issue 12500&lt;/a&gt;)&lt;/li&gt;
&lt;/ul&gt;&lt;/li&gt;
&lt;li&gt;Five bug fixes for locality weighted load balancing:
&lt;ul&gt;
&lt;li&gt;Fix bug causing empty endpoints per locality (&lt;a href=&#34;https://github.com/istio/istio/issues/12610&#34;&gt;Issue 12610&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Apply locality weighted load balancing configuration correctly (&lt;a href=&#34;https://github.com/istio/istio/issues/12587&#34;&gt;Issue 12587&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Locality label &lt;code&gt;istio-locality&lt;/code&gt; in Kubernetes should not contain &lt;code&gt;/&lt;/code&gt;, use &lt;code&gt;.&lt;/code&gt; (&lt;a href=&#34;https://github.com/istio/istio/issues/12582&#34;&gt;Issue 12582&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Fix crash in locality load balancing (&lt;a href=&#34;https://github.com/istio/istio/pull/12649&#34;&gt;Issue 12649&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Fix bug in locality load balancing normalization (&lt;a href=&#34;https://github.com/istio/istio/pull/12579&#34;&gt;Issue 12579&lt;/a&gt;)&lt;/li&gt;
&lt;/ul&gt;&lt;/li&gt;
&lt;li&gt;Propagate Envoy Metrics Service configuration (&lt;a href=&#34;https://github.com/istio/istio/issues/12569&#34;&gt;Issue 12569&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Do not apply &lt;code&gt;VirtualService&lt;/code&gt; rule to the wrong gateway (&lt;a href=&#34;https://github.com/istio/istio/issues/10313&#34;&gt;Issue 10313&lt;/a&gt;)&lt;/li&gt;
&lt;/ul&gt;</description><pubDate>Mon, 25 Mar 2019 00:00:00 +0000</pubDate><link>/v1.1/blog/2019/announcing-1.1.1/</link><author>The Istio Team</author><guid isPermaLink="true">/v1.1/blog/2019/announcing-1.1.1/</guid></item><item><title>Architecting Istio 1.1 for Performance</title><description>
&lt;p&gt;Hyper-scale, microservice-based cloud environments have been exciting to build but challenging to manage. Along came Kubernetes (container orchestration) in 2014, followed by Istio (container service management) in 2017. Both open-source projects enable developers to scale container-based applications without spending too much time on administration tasks.&lt;/p&gt;
&lt;p&gt;Now, new enhancements in Istio 1.1 deliver scale-up with improved application performance and service management efficiency.
Simulations using our sample commercial airline reservation application show the following improvements, compared to Istio 1.0.&lt;/p&gt;
&lt;p&gt;We&amp;rsquo;ve seen substantial application performance gains:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;up to 30% reduction in application average latency&lt;/li&gt;
&lt;li&gt;up to 40% faster service startup times in a large mesh&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;As well as impressive improvements in service management efficiency:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;up to 90% reduction in Pilot CPU usage in a large mesh&lt;/li&gt;
&lt;li&gt;up to 50% reduction in Pilot memory usage in a large mesh&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;With Istio 1.1, organizations can be more confident in their ability to scale applications with consistency and control &amp;ndash; even in hyper-scale cloud environments.&lt;/p&gt;
&lt;p&gt;Congratulations to the Istio experts around the world who contributed to this release. We could not be more pleased with these results.&lt;/p&gt;
&lt;h2 id=&#34;istio-1-1-performance-enhancements&#34;&gt;Istio 1.1 performance enhancements&lt;/h2&gt;
&lt;p&gt;As members of the Istio Performance and Scalability workgroup, we have done extensive performance evaluations. We introduced many performance design features for Istio 1.1, in collaboration with other Istio contributors.
Some of the most visible performance enhancements in 1.1 include:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Significant reduction in default collection of Envoy-generated statistics&lt;/li&gt;
&lt;li&gt;Added load-shedding functionality to Mixer workloads&lt;/li&gt;
&lt;li&gt;Improved the protocol between Envoy and Mixer&lt;/li&gt;
&lt;li&gt;Namespace isolation, to reduce operational overhead&lt;/li&gt;
&lt;li&gt;Configurable concurrent worker threads, which can improve overall throughput&lt;/li&gt;
&lt;li&gt;Configurable filters that limit telemetry data&lt;/li&gt;
&lt;li&gt;Removal of synchronization bottlenecks&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;continuous-code-quality-and-performance-verification&#34;&gt;Continuous code quality and performance verification&lt;/h2&gt;
&lt;p&gt;Regression Patrol drives continuous improvement in Istio performance and quality. Behind the scenes, the Regression Patrol helps Istio developers to identify and fix code issues. Daily builds are checked using a customer-centric benchmark, &lt;a href=&#34;https://github.com/blueperf/&#34;&gt;BluePerf&lt;/a&gt;. The results are published to the &lt;a href=&#34;https://ibmcloud-perf.istio.io/regpatrol/&#34;&gt;Istio community web portal&lt;/a&gt;. Various application configurations are evaluated to help provide insights on Istio component performance.&lt;/p&gt;
&lt;p&gt;Another tool that is used to evaluate the performance of Istio&amp;rsquo;s builds is &lt;a href=&#34;https://fortio.org/&#34;&gt;Fortio&lt;/a&gt;, which provides a synthetic end-to-end load-testing benchmark.&lt;/p&gt;
&lt;h2 id=&#34;summary&#34;&gt;Summary&lt;/h2&gt;
&lt;p&gt;Istio 1.1 was designed for performance and scalability. The Istio Performance and Scalability workgroup measured significant performance improvements over 1.0.
Istio 1.1 introduces new features and optimizations to help harden the service mesh for enterprise microservice workloads. The Istio 1.1 Performance and Tuning Guide documents performance simulations, provides sizing and capacity planning guidance, and includes best practices for tuning custom use cases.&lt;/p&gt;
&lt;h2 id=&#34;useful-links&#34;&gt;Useful links&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;https://www.youtube.com/watch?time_continue=349&amp;amp;v=G4F5aRFEXnU&#34;&gt;Istio Service Mesh Performance (34:30)&lt;/a&gt;, by Surya Duggirala, Laurent Demailly and Fawad Khaliq at Kubecon Europe 2018&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://discuss.istio.io/c/performance-and-scalability&#34;&gt;Istio Performance and Scalability discussion forum&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;disclaimer&#34;&gt;Disclaimer&lt;/h2&gt;
&lt;p&gt;The performance data contained herein was obtained in a controlled, isolated environment. Actual results that may be obtained in other operating environments may vary significantly. There is no guarantee that the same or similar results will be obtained elsewhere.&lt;/p&gt;</description><pubDate>Tue, 19 Mar 2019 00:00:00 +0000</pubDate><link>/v1.1/blog/2019/istio1.1_perf/</link><author>Surya V Duggirala (IBM), Mandar Jog (Google), Jose Nativio (IBM)</author><guid isPermaLink="true">/v1.1/blog/2019/istio1.1_perf/</guid><category>performance</category><category>scalability</category><category>scale</category><category>benchmarks</category></item><item><title>Announcing Istio 1.1</title><description>&lt;p&gt;We are pleased to announce the release of Istio 1.1!&lt;/p&gt;
&lt;div class=&#34;call-to-action&#34;&gt;
&lt;button class=&#34;btn update-notice&#34;
data-title=&#39;Update Notice&#39;
data-downloadhref=&#34;https://github.com/istio/istio/releases/tag/1.1.0&#34;
data-updateadvice=&#39;Before you download 1.1, you should know that there&amp;#39;s a newer patch release with the latest bug fixes and perf improvements.&#39;
data-updatebutton=&#39;LEARN ABOUT ISTIO 1.1.9&#39;
data-updatehref=&#34;/v1.1/about/notes/1.1.9&#34;&gt;
DOWNLOAD 1.1
&lt;/button&gt;
&lt;a class=&#34;btn&#34; href=&#34;https://istio.io/docs&#34;&gt;1.1 DOCS&lt;/a&gt;
&lt;a class=&#34;btn&#34; href=&#34;/v1.1/about/notes/1.1/&#34;&gt;1.1 RELEASE NOTES&lt;/a&gt;
&lt;/div&gt;
&lt;p&gt;Since we released 1.0 back in July, we&amp;rsquo;ve done a lot of work to help people get
into production. Not surprisingly, we had to do some &lt;a href=&#34;/v1.1/about/notes&#34;&gt;patch releases&lt;/a&gt;
(6 so far!), but we&amp;rsquo;ve also been hard at work adding new features to the
product.&lt;/p&gt;
&lt;p&gt;The theme for 1.1 is Enterprise Ready. We&amp;rsquo;ve been very pleased to see more and
more companies using Istio in production, but as some larger companies tried to
adopt Istio they hit some limits.&lt;/p&gt;
&lt;p&gt;One of our prime areas of focus has been &lt;a href=&#34;/v1.1/docs/concepts/performance-and-scalability/&#34;&gt;performance and scalability&lt;/a&gt;.
As people moved into production with larger clusters running more services at
higher volume, they hit some scaling and performance issues. The
&lt;a href=&#34;/v1.1/docs/concepts/traffic-management/#sidecars&#34;&gt;sidecars&lt;/a&gt; took too many resources
and added too much latency. The control plane (especially
&lt;a href=&#34;/v1.1/docs/concepts/traffic-management/#pilot-and-envoy&#34;&gt;Pilot&lt;/a&gt;) was overly
resource hungry.&lt;/p&gt;
&lt;p&gt;We&amp;rsquo;ve done a lot of work to make both the data plane and the control plane more
efficient. You can find the details of our 1.1 performance testing and the
results in our updated &lt;a href=&#34;/v1.1/docs/concepts/performance-and-scalability/&#34;&gt;performance and scalability concept&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;We&amp;rsquo;ve done work around namespace isolation as well. This lets you use
Kubernetes namespaces to enforce boundaries of control, and ensures that your
teams cannot interfere with each other.&lt;/p&gt;
&lt;p&gt;We have also improved the &lt;a href=&#34;/v1.1/docs/concepts/multicluster-deployments/&#34;&gt;multicluster capabilities and usability&lt;/a&gt;.
We listened to the community and improved defaults for traffic control and
policy. We introduced a new component called
&lt;a href=&#34;/v1.1/docs/concepts/what-is-istio/#galley&#34;&gt;Galley&lt;/a&gt;. Galley validates that sweet,
sweet YAML, reducing the chance of configuration errors. Galley will also be
instrumental in &lt;a href=&#34;/v1.1/docs/setup/kubernetes/install/multicluster/&#34;&gt;multicluster setups&lt;/a&gt;,
gathering service discovery information from each Kubernetes cluster. We are
also supporting additional multicluster topologies including &lt;a href=&#34;/v1.1/docs/concepts/multicluster-deployments/#single-control-plane-topology&#34;&gt;single control plane&lt;/a&gt;
and &lt;a href=&#34;/v1.1/docs/concepts/multicluster-deployments/#multiple-control-plane-topology&#34;&gt;multiple synchronized control planes&lt;/a&gt;
without requiring a flat network.&lt;/p&gt;
&lt;p&gt;There is lots more &amp;ndash; see the &lt;a href=&#34;/v1.1/about/notes/1.1/&#34;&gt;release notes&lt;/a&gt; for complete
details.&lt;/p&gt;
&lt;p&gt;There is more going on in the project as well. We know that Istio has a lot of
moving parts and can be a lot to take on. To help address that, we recently
formed a &lt;a href=&#34;https://github.com/istio/community/blob/master/WORKING-GROUPS.md#working-group-meetings&#34;&gt;Usability Working Group&lt;/a&gt;
(feel free to join). There is also a lot happening in the &lt;a href=&#34;https://github.com/istio/community#community-meeting&#34;&gt;Community
Meeting&lt;/a&gt; (Thursdays at
&lt;code&gt;11 a.m.&lt;/code&gt;) and in the &lt;a href=&#34;https://github.com/istio/community/blob/master/WORKING-GROUPS.md&#34;&gt;Working
Groups&lt;/a&gt;. And
if you havent yet joined the conversation at
&lt;a href=&#34;https://discuss.istio.io&#34;&gt;discuss.istio.io&lt;/a&gt;, head over, log in with your
GitHub credentials and join us!&lt;/p&gt;
&lt;p&gt;We are grateful to everyone who has worked hard on Istio over the last few
months &amp;ndash; patching 1.0, adding features to 1.1, and, lately, doing tons of
testing on 1.1. Thanks especially to those companies and users who worked with
us installing and upgrading to the early builds and helping us catch problems
before the release.&lt;/p&gt;
&lt;p&gt;So: nows the time! Grab 1.1, check out &lt;a href=&#34;/v1.1/docs/&#34;&gt;the updated documentation&lt;/a&gt;,
&lt;a href=&#34;/v1.1/docs/setup/kubernetes/&#34;&gt;install it&lt;/a&gt; and&amp;hellip;happy meshing!&lt;/p&gt;</description><pubDate>Tue, 19 Mar 2019 00:00:00 +0000</pubDate><link>/v1.1/blog/2019/announcing-1.1/</link><author>The Istio Team</author><guid isPermaLink="true">/v1.1/blog/2019/announcing-1.1/</guid></item><item><title>Announcing Istio 1.0.6</title><description>&lt;p&gt;We&amp;rsquo;re pleased to announce the availability of Istio 1.0.6. Please see below for what&amp;rsquo;s changed.&lt;/p&gt;
&lt;div class=&#34;call-to-action&#34;&gt;
&lt;button class=&#34;btn update-notice&#34;
data-title=&#39;Update Notice&#39;
data-downloadhref=&#34;https://github.com/istio/istio/releases/tag/1.0.6&#34;
data-updateadvice=&#39;Before you download 1.0.6, you should know that there&amp;#39;s a newer patch release with the latest bug fixes and perf improvements.&#39;
data-updatebutton=&#39;LEARN ABOUT ISTIO 1.0.8&#39;
data-updatehref=&#34;/v1.1/about/notes/1.0.8&#34;&gt;
DOWNLOAD 1.0.6
&lt;/button&gt;
&lt;a class=&#34;btn&#34; href=&#34;https://archive.istio.io/v1.0&#34;&gt;1.0.6 DOCS&lt;/a&gt;
&lt;a class=&#34;btn&#34; href=&#34;https://github.com/istio/istio/compare/1.0.5...1.0.6&#34;&gt;CHANGES IN 1.0.6&lt;/a&gt;
&lt;/div&gt;
&lt;h2 id=&#34;security-vulnerability-fixes&#34;&gt;Security vulnerability fixes&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Updated Go &lt;code&gt;requests&lt;/code&gt; and &lt;code&gt;urllib3&lt;/code&gt; libraries in Bookinfo sample code per &lt;a href=&#34;https://nvd.nist.gov/vuln/detail/CVE-2018-18074&#34;&gt;&lt;code&gt;CVE-2018-18074&lt;/code&gt;&lt;/a&gt; and &lt;a href=&#34;https://nvd.nist.gov/vuln/detail/CVE-2018-20060&#34;&gt;&lt;code&gt;CVE-2018-20060&lt;/code&gt;&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Fixed username and password being exposed in &lt;code&gt;Grafana&lt;/code&gt; and &lt;code&gt;Kiali&lt;/code&gt; (&lt;a href=&#34;https://github.com/istio/istio/issues/7476&#34;&gt;Issue 7446&lt;/a&gt;, &lt;a href=&#34;https://github.com/istio/istio/issues/7447&#34;&gt;Issue 7447&lt;/a&gt;). If you have trouble starting the &lt;code&gt;Grafana&lt;/code&gt; pod after upgrading to 1.0.6, please follow &lt;a href=&#34;https://github.com/istio/istio/tree/release-1.1/install/kubernetes/helm/istio#installing-the-chart&#34;&gt;the steps&lt;/a&gt; to create the secret first.&lt;/li&gt;
&lt;li&gt;Removed the in-memory service registry in Pilot, which allowed endpoints to be added to proxy configurations from within the cluster through a Pilot debug API.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;robustness-improvements&#34;&gt;Robustness improvements&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Fixed Pilot failing to push configuration under load (&lt;a href=&#34;https://github.com/istio/istio/issues/10360&#34;&gt;Issue 10360&lt;/a&gt;).&lt;/li&gt;
&lt;li&gt;Fixed a race condition that would lead Pilot to crash and restart (&lt;a href=&#34;https://github.com/istio/istio/issues/10868&#34;&gt;Issue 10868&lt;/a&gt;).&lt;/li&gt;
&lt;li&gt;Fixed a memory leak in Pilot (&lt;a href=&#34;https://github.com/istio/istio/issues/10822&#34;&gt;Issue 10822&lt;/a&gt;).&lt;/li&gt;
&lt;li&gt;Fixed a memory leak in Mixer (&lt;a href=&#34;https://github.com/istio/istio/issues/10393&#34;&gt;Issue 10393&lt;/a&gt;).&lt;/li&gt;
&lt;/ul&gt;</description><pubDate>Tue, 12 Feb 2019 00:00:00 +0000</pubDate><link>/v1.1/blog/2019/announcing-1.0.6/</link><author>The Istio Team</author><guid isPermaLink="true">/v1.1/blog/2019/announcing-1.0.6/</guid></item><item><title>Version Routing in a Multicluster Service Mesh</title><description>
&lt;p&gt;If you&amp;rsquo;ve spent any time looking at Istio, you&amp;rsquo;ve probably noticed that it includes a lot of features that
can be demonstrated with simple &lt;a href=&#34;/v1.1/docs/tasks/&#34;&gt;tasks&lt;/a&gt; and &lt;a href=&#34;/v1.1/docs/examples/&#34;&gt;examples&lt;/a&gt;
running on a single Kubernetes cluster.
Because most, if not all, real-world cloud and microservices-based applications are not that simple
and will need to have the services distributed and running in more than one location, you may be
wondering if all these things will be just as simple in your real production environment.&lt;/p&gt;
&lt;p&gt;Fortunately, Istio provides several ways to configure a service mesh so that applications
can, more-or-less transparently, be part of a mesh where the services are running
in more than one cluster, i.e., in a
&lt;a href=&#34;/v1.1/docs/concepts/multicluster-deployments/#multicluster-service-mesh&#34;&gt;multicluster service mesh&lt;/a&gt;.
The simplest way to set up a multicluster mesh, because it has no special networking requirements,
is using a
&lt;a href=&#34;/v1.1/docs/concepts/multicluster-deployments/#multiple-control-plane-topology&#34;&gt;multiple control plane topology&lt;/a&gt;.
In this configuration, each Kubernetes cluster contributing to the mesh has its own control plane,
but each control plane is synchronized and running under a single administrative control.&lt;/p&gt;
&lt;p&gt;In this article we&amp;rsquo;ll look at how one of the features of Istio,
&lt;a href=&#34;/v1.1/docs/concepts/traffic-management/&#34;&gt;traffic management&lt;/a&gt;, works in a multicluster mesh with
a multiple control plane topology.
We&amp;rsquo;ll show how to configure Istio route rules to call remote services in a multicluster service mesh
by deploying the &lt;a href=&#34;https://github.com/istio/istio/tree/release-1.1/samples/bookinfo&#34;&gt;Bookinfo sample&lt;/a&gt; with version &lt;code&gt;v1&lt;/code&gt; of the &lt;code&gt;reviews&lt;/code&gt; service
running in one cluster, and versions &lt;code&gt;v2&lt;/code&gt; and &lt;code&gt;v3&lt;/code&gt; running in a second cluster.&lt;/p&gt;
&lt;h2 id=&#34;setup-clusters&#34;&gt;Setup clusters&lt;/h2&gt;
&lt;p&gt;To start, you&amp;rsquo;ll need two Kubernetes clusters, both running a slightly customized configuration of Istio.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Set up a multicluster environment with two Istio clusters by following the
&lt;a href=&#34;/v1.1/docs/setup/kubernetes/install/multicluster/gateways/&#34;&gt;multiple control planes with gateways&lt;/a&gt; instructions.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The &lt;code&gt;kubectl&lt;/code&gt; command is used to access both clusters with the &lt;code&gt;--context&lt;/code&gt; flag.
Use the following command to list your contexts:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ kubectl config get-contexts
CURRENT   NAME       CLUSTER    AUTHINFO       NAMESPACE
*         cluster1   cluster1   user@foo.com   default
          cluster2   cluster2   user@foo.com   default
&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Export the following environment variables with the context names of your configuration:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ export CTX_CLUSTER1=&amp;lt;cluster1 context name&amp;gt;
$ export CTX_CLUSTER2=&amp;lt;cluster2 context name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;deploy-version-v1-of-the-bookinfo-application-in-cluster1&#34;&gt;Deploy version v1 of the &lt;code&gt;bookinfo&lt;/code&gt; application in &lt;code&gt;cluster1&lt;/code&gt;&lt;/h2&gt;
&lt;p&gt;Run the &lt;code&gt;productpage&lt;/code&gt; and &lt;code&gt;details&lt;/code&gt; services and version &lt;code&gt;v1&lt;/code&gt; of the &lt;code&gt;reviews&lt;/code&gt; service in &lt;code&gt;cluster1&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ kubectl label --context=$CTX_CLUSTER1 namespace default istio-injection=enabled
$ kubectl apply --context=$CTX_CLUSTER1 -f - &amp;lt;&amp;lt;EOF
apiVersion: v1
kind: Service
metadata:
  name: productpage
  labels:
    app: productpage
spec:
  ports:
  - port: 9080
    name: http
  selector:
    app: productpage
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: productpage-v1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: productpage
        version: v1
    spec:
      containers:
      - name: productpage
        image: istio/examples-bookinfo-productpage-v1:1.10.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9080
---
apiVersion: v1
kind: Service
metadata:
  name: details
  labels:
    app: details
spec:
  ports:
  - port: 9080
    name: http
  selector:
    app: details
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: details-v1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: details
        version: v1
    spec:
      containers:
      - name: details
        image: istio/examples-bookinfo-details-v1:1.10.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9080
---
apiVersion: v1
kind: Service
metadata:
  name: reviews
  labels:
    app: reviews
spec:
  ports:
  - port: 9080
    name: http
  selector:
    app: reviews
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: reviews-v1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: reviews
        version: v1
    spec:
      containers:
      - name: reviews
        image: istio/examples-bookinfo-reviews-v1:1.10.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9080
EOF
&lt;/code&gt;&lt;/pre&gt;
&lt;h2 id=&#34;deploy-bookinfo-v2-and-v3-services-in-cluster2&#34;&gt;Deploy &lt;code&gt;bookinfo&lt;/code&gt; v2 and v3 services in &lt;code&gt;cluster2&lt;/code&gt;&lt;/h2&gt;
&lt;p&gt;Run the &lt;code&gt;ratings&lt;/code&gt; service and versions &lt;code&gt;v2&lt;/code&gt; and &lt;code&gt;v3&lt;/code&gt; of the &lt;code&gt;reviews&lt;/code&gt; service in &lt;code&gt;cluster2&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ kubectl label --context=$CTX_CLUSTER2 namespace default istio-injection=enabled
$ kubectl apply --context=$CTX_CLUSTER2 -f - &amp;lt;&amp;lt;EOF
apiVersion: v1
kind: Service
metadata:
  name: ratings
  labels:
    app: ratings
spec:
  ports:
  - port: 9080
    name: http
  selector:
    app: ratings
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: ratings-v1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: ratings
        version: v1
    spec:
      containers:
      - name: ratings
        image: istio/examples-bookinfo-ratings-v1:1.10.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9080
---
apiVersion: v1
kind: Service
metadata:
  name: reviews
  labels:
    app: reviews
spec:
  ports:
  - port: 9080
    name: http
  selector:
    app: reviews
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: reviews-v2
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: reviews
        version: v2
    spec:
      containers:
      - name: reviews
        image: istio/examples-bookinfo-reviews-v2:1.10.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9080
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: reviews-v3
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: reviews
        version: v3
    spec:
      containers:
      - name: reviews
        image: istio/examples-bookinfo-reviews-v3:1.10.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9080
EOF
&lt;/code&gt;&lt;/pre&gt;
&lt;h2 id=&#34;access-the-bookinfo-application&#34;&gt;Access the &lt;code&gt;bookinfo&lt;/code&gt; application&lt;/h2&gt;
&lt;p&gt;Just like any application, we&amp;rsquo;ll use an Istio gateway to access the &lt;code&gt;bookinfo&lt;/code&gt; application.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Create the &lt;code&gt;bookinfo&lt;/code&gt; gateway in &lt;code&gt;cluster1&lt;/code&gt;:&lt;/p&gt;
&lt;div&gt;&lt;a data-skipendnotes=&#39;true&#39; style=&#39;display:none&#39; href=&#39;https://raw.githubusercontent.com/istio/istio/release-1.1/samples/bookinfo/networking/bookinfo-gateway.yaml&#39;&gt;Zip&lt;/a&gt;&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ kubectl apply --context=$CTX_CLUSTER1 -f @samples/bookinfo/networking/bookinfo-gateway.yaml@
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Follow the &lt;a href=&#34;/v1.1/docs/examples/bookinfo/#determining-the-ingress-ip-and-port&#34;&gt;Bookinfo sample instructions&lt;/a&gt;
to determine the ingress IP and port and then point your browser to &lt;code&gt;http://$GATEWAY_URL/productpage&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;You should see the &lt;code&gt;productpage&lt;/code&gt; with reviews, but without ratings, because only &lt;code&gt;v1&lt;/code&gt; of the &lt;code&gt;reviews&lt;/code&gt; service
is running on &lt;code&gt;cluster1&lt;/code&gt; and we have not yet configured access to &lt;code&gt;cluster2&lt;/code&gt;.&lt;/p&gt;
&lt;h2 id=&#34;create-a-service-entry-and-destination-rule-on-cluster1-for-the-remote-reviews-service&#34;&gt;Create a service entry and destination rule on &lt;code&gt;cluster1&lt;/code&gt; for the remote reviews service&lt;/h2&gt;
&lt;p&gt;As described in the &lt;a href=&#34;/v1.1/docs/setup/kubernetes/install/multicluster/gateways/#setup-dns&#34;&gt;setup instructions&lt;/a&gt;,
remote services are accessed with a &lt;code&gt;.global&lt;/code&gt; DNS name. In our case, it&amp;rsquo;s &lt;code&gt;reviews.default.global&lt;/code&gt;,
so we need to create a service entry and destination rule for that host.
The service entry will use the &lt;code&gt;cluster2&lt;/code&gt; gateway as the endpoint address to access the service.
You can use the gateway&amp;rsquo;s DNS name, if it has one, or its public IP, like this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ export CLUSTER2_GW_ADDR=$(kubectl get --context=$CTX_CLUSTER2 svc --selector=app=istio-ingressgateway \
-n istio-system -o jsonpath=&amp;#34;{.items[0].status.loadBalancer.ingress[0].ip}&amp;#34;)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now create the service entry and destination rule using the following command:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ kubectl apply --context=$CTX_CLUSTER1 -f - &amp;lt;&amp;lt;EOF
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: reviews-default
spec:
  hosts:
  - reviews.default.global
  location: MESH_INTERNAL
  ports:
  - name: http1
    number: 9080
    protocol: http
  resolution: DNS
  addresses:
  - 127.255.0.3
  endpoints:
  - address: ${CLUSTER2_GW_ADDR}
    labels:
      cluster: cluster2
    ports:
      http1: 15443 # Do not change this port value
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews-global
spec:
  host: reviews.default.global
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL
  subsets:
  - name: v2
    labels:
      cluster: cluster2
  - name: v3
    labels:
      cluster: cluster2
EOF
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The address &lt;code&gt;127.255.0.3&lt;/code&gt; of the service entry can be any arbitrary unallocated IP.
Using an IP from the loopback range 127.0.0.0/8 is a good choice.
Check out the
&lt;a href=&#34;/v1.1/docs/examples/multicluster/gateways/#configure-the-example-services&#34;&gt;gateway-connected multicluster example&lt;/a&gt;
for more details.&lt;/p&gt;
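The choice of 127.255.0.3 can be sanity checked programmatically. The snippet below is purely illustrative (it is not part of Istio or its tooling); it only confirms that a candidate service entry address sits inside the loopback range:

```python
import ipaddress

# The service entry address used above, checked against the loopback range.
addr = ipaddress.ip_address("127.255.0.3")
loopback = ipaddress.ip_network("127.0.0.0/8")

# Addresses in 127.0.0.0/8 are never routable in the cluster,
# so they cannot collide with a real pod or external IP.
print(addr in loopback)  # True
```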
&lt;p&gt;Note that the labels of the subsets in the destination rule map to the service entry
endpoint label (&lt;code&gt;cluster: cluster2&lt;/code&gt;) corresponding to the &lt;code&gt;cluster2&lt;/code&gt; gateway.
Once the request reaches the destination cluster, a local destination rule will be used
to identify the actual pod labels (&lt;code&gt;version: v2&lt;/code&gt; or &lt;code&gt;version: v3&lt;/code&gt;) corresponding to the
requested subset.&lt;/p&gt;
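To make the two-level matching concrete, here is a small, hypothetical Python model of label-based endpoint selection. The data structures (endpoints as dicts carrying a labels map) are simplifications invented for illustration and do not reflect Istio's internal representation:

```python
def select_endpoints(endpoints, subset_labels):
    """Keep endpoints whose labels contain every label of the requested subset."""
    return [ep for ep in endpoints
            if all(ep["labels"].get(k) == v for k, v in subset_labels.items())]

# On cluster1, subsets v2 and v3 both match the cluster2 gateway endpoint.
cluster1_endpoints = [
    {"address": "CLUSTER2_GW_ADDR", "labels": {"cluster": "cluster2"}},
]
print(select_endpoints(cluster1_endpoints, {"cluster": "cluster2"}))

# On cluster2, the local destination rule matches the actual pod version labels.
# The pod addresses here are hypothetical.
cluster2_pods = [
    {"address": "10.0.0.2", "labels": {"app": "reviews", "version": "v2"}},
    {"address": "10.0.0.3", "labels": {"app": "reviews", "version": "v3"}},
]
print(select_endpoints(cluster2_pods, {"version": "v2"}))
```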
&lt;h2 id=&#34;create-a-destination-rule-on-both-clusters-for-the-local-reviews-service&#34;&gt;Create a destination rule on both clusters for the local reviews service&lt;/h2&gt;
&lt;p&gt;Technically, we only need to define the subsets of the local service that are being used
in each cluster (i.e., &lt;code&gt;v1&lt;/code&gt; in &lt;code&gt;cluster1&lt;/code&gt;, &lt;code&gt;v2&lt;/code&gt; and &lt;code&gt;v3&lt;/code&gt; in &lt;code&gt;cluster2&lt;/code&gt;), but for simplicity we&amp;rsquo;ll
just define all three subsets in both clusters, since there&amp;rsquo;s nothing wrong with defining subsets
for versions that are not actually deployed.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ kubectl apply --context=$CTX_CLUSTER1 -f - &amp;lt;&amp;lt;EOF
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews.default.svc.cluster.local
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
  - name: v3
    labels:
      version: v3
EOF
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ kubectl apply --context=$CTX_CLUSTER2 -f - &amp;lt;&amp;lt;EOF
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews.default.svc.cluster.local
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
  - name: v3
    labels:
      version: v3
EOF
&lt;/code&gt;&lt;/pre&gt;
&lt;h2 id=&#34;create-a-virtual-service-to-route-reviews-service-traffic&#34;&gt;Create a virtual service to route reviews service traffic&lt;/h2&gt;
&lt;p&gt;At this point, all calls to the &lt;code&gt;reviews&lt;/code&gt; service will go to the local &lt;code&gt;reviews&lt;/code&gt; pods (&lt;code&gt;v1&lt;/code&gt;) because
if you look at the source code you will see that the &lt;code&gt;productpage&lt;/code&gt; implementation is simply making
requests to &lt;code&gt;http://reviews:9080&lt;/code&gt; (which expands to host &lt;code&gt;reviews.default.svc.cluster.local&lt;/code&gt;), the
local version of the service.
The corresponding remote service is named &lt;code&gt;reviews.default.global&lt;/code&gt;, so route rules are needed to
redirect requests to the global host.&lt;/p&gt;
&lt;div&gt;
&lt;aside class=&#34;callout tip&#34;&gt;
&lt;div class=&#34;type&#34;&gt;&lt;svg class=&#34;large-icon&#34;&gt;&lt;use xlink:href=&#34;/v1.1/img/icons.svg#callout-tip&#34;/&gt;&lt;/svg&gt;&lt;/div&gt;
&lt;div class=&#34;content&#34;&gt;Note that if all of the versions of the &lt;code&gt;reviews&lt;/code&gt; service were remote, meaning there was no local &lt;code&gt;reviews&lt;/code&gt;
service defined, DNS would resolve &lt;code&gt;reviews&lt;/code&gt; directly to &lt;code&gt;reviews.default.global&lt;/code&gt;. In that case
we could call the remote &lt;code&gt;reviews&lt;/code&gt; service without any route rules.&lt;/div&gt;
&lt;/aside&gt;
&lt;/div&gt;
&lt;p&gt;Apply the following virtual service to direct traffic for user &lt;code&gt;jason&lt;/code&gt; to &lt;code&gt;reviews&lt;/code&gt; versions &lt;code&gt;v2&lt;/code&gt; and &lt;code&gt;v3&lt;/code&gt; (&lt;sup&gt;50&lt;/sup&gt;&amp;frasl;&lt;sub&gt;50&lt;/sub&gt;)
which are running on &lt;code&gt;cluster2&lt;/code&gt;. Traffic for any other user will go to &lt;code&gt;reviews&lt;/code&gt; version &lt;code&gt;v1&lt;/code&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ kubectl apply --context=$CTX_CLUSTER1 -f - &amp;lt;&amp;lt;EOF
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews.default.svc.cluster.local
  http:
  - match:
    - headers:
        end-user:
          exact: jason
    route:
    - destination:
        host: reviews.default.global
        subset: v2
      weight: 50
    - destination:
        host: reviews.default.global
        subset: v3
      weight: 50
  - route:
    - destination:
        host: reviews.default.svc.cluster.local
        subset: v1
EOF
&lt;/code&gt;&lt;/pre&gt;
&lt;div&gt;
&lt;aside class=&#34;callout tip&#34;&gt;
&lt;div class=&#34;type&#34;&gt;&lt;svg class=&#34;large-icon&#34;&gt;&lt;use xlink:href=&#34;/v1.1/img/icons.svg#callout-tip&#34;/&gt;&lt;/svg&gt;&lt;/div&gt;
&lt;div class=&#34;content&#34;&gt;This 50/50 rule isn&amp;rsquo;t a particularly realistic example. It&amp;rsquo;s just a convenient way to demonstrate
accessing multiple subsets of a remote service.&lt;/div&gt;
&lt;/aside&gt;
&lt;/div&gt;
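Per request, the proxy effectively performs a weighted random choice over the route's destinations. The sketch below models that selection in Python; it is an illustration of the 50/50 behavior, not Envoy's actual implementation:

```python
import random

def pick_destination(weighted_routes, rng=random):
    """Pick one subset from (subset, weight) pairs, proportionally to weight."""
    subsets = [subset for subset, _ in weighted_routes]
    weights = [weight for _, weight in weighted_routes]
    return rng.choices(subsets, weights=weights, k=1)[0]

# The rule above: requests from user jason split 50/50 between remote v2 and v3.
routes = [("v2", 50), ("v3", 50)]
rng = random.Random(42)  # seeded for reproducibility
counts = {"v2": 0, "v3": 0}
for _ in range(10_000):
    counts[pick_destination(routes, rng)] += 1
print(counts)  # each subset receives roughly half of the 10,000 requests
```

This also explains why refreshing the page alternates between black and red stars: each refresh is an independent draw.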
&lt;p&gt;Return to your browser and login as user &lt;code&gt;jason&lt;/code&gt;. If you refresh the page several times, you should see
the display alternating between black and red ratings stars (&lt;code&gt;v2&lt;/code&gt; and &lt;code&gt;v3&lt;/code&gt;). If you logout, you will
only see reviews without ratings (&lt;code&gt;v1&lt;/code&gt;).&lt;/p&gt;
&lt;h2 id=&#34;summary&#34;&gt;Summary&lt;/h2&gt;
&lt;p&gt;In this article, we&amp;rsquo;ve seen how to use Istio route rules to distribute the versions of a service
across clusters in a multicluster service mesh with a
&lt;a href=&#34;/v1.1/docs/concepts/multicluster-deployments/#multiple-control-plane-topology&#34;&gt;multiple control plane topology&lt;/a&gt;.
In this example, we manually configured the &lt;code&gt;.global&lt;/code&gt; service entry and destination rules needed to provide
connectivity to one remote service, &lt;code&gt;reviews&lt;/code&gt;. In general, however, if we wanted to enable any service
to run either locally or remotely, we would need to create &lt;code&gt;.global&lt;/code&gt; resources for every service.
Fortunately, this process could be automated and likely will be in a future Istio release.&lt;/p&gt;</description><pubDate>Thu, 07 Feb 2019 00:00:00 +0000</pubDate><link>/v1.1/blog/2019/multicluster-version-routing/</link><author>Frank Budinsky (IBM)</author><guid isPermaLink="true">/v1.1/blog/2019/multicluster-version-routing/</guid><category>traffic-management</category><category>multicluster</category></item><item><title>Sail the Blog!</title><description>&lt;p&gt;Welcome to the Istio blog!&lt;/p&gt;
&lt;p&gt;To make it easier to publish your content on our website, we
&lt;a href=&#34;/v1.1/about/contribute/creating-and-editing-pages/#choosing-a-page-type&#34;&gt;updated the content types guide&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The goal of the updated guide is to make sharing and finding content easier.&lt;/p&gt;
&lt;p&gt;We want to make sharing timely information on Istio easy and the Istio blog is
a good place to start.&lt;/p&gt;
&lt;p&gt;We welcome your posts to the blog if you think your content falls in one of the
following four categories:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Your post details your experience using and configuring Istio. Ideally, your
post shares a novel experience or perspective.&lt;/li&gt;
&lt;li&gt;Your post highlights or announces Istio features.&lt;/li&gt;
&lt;li&gt;Your post announces or recaps an Istio-related event.&lt;/li&gt;
&lt;li&gt;Your post details how to accomplish a task or fulfill a specific use case
using Istio.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Posting your blog is only &lt;a href=&#34;/v1.1/about/contribute/github/#how-to-contribute&#34;&gt;one PR away&lt;/a&gt;
and, if you wish, you can &lt;a href=&#34;/v1.1/about/contribute/github/#review&#34;&gt;request a review&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;We look forward to reading about your Istio experience on the blog soon!&lt;/p&gt;</description><pubDate>Tue, 05 Feb 2019 00:00:00 +0000</pubDate><link>/v1.1/blog/2019/sail-the-blog/</link><author>Rigs Caballero, Google</author><guid isPermaLink="true">/v1.1/blog/2019/sail-the-blog/</guid><category>community</category><category>blog</category><category>contribution</category><category>guide</category><category>guideline</category><category>event</category></item><item><title>Egress Gateway Performance Investigation</title><description>
&lt;p&gt;The main objective of this investigation was to determine the impact on performance and resource utilization when an egress gateway is added in the service mesh to access an external service (MongoDB, in this case). The steps to configure an egress gateway for an external MongoDB are described in the blog &lt;a href=&#34;/v1.1/blog/2018/egress-mongo/&#34;&gt;Consuming External MongoDB Services&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The application used for this investigation was the Java version of Acmeair, which simulates an airline reservation system. This application is used in the Performance Regression Patrol of Istio daily builds, but in that setup the microservices have been accessing the external MongoDB directly via their sidecars, without an egress gateway.&lt;/p&gt;
&lt;p&gt;The diagram below illustrates how regression patrol currently runs with Acmeair and Istio:&lt;/p&gt;
&lt;figure style=&#34;width:70%&#34;&gt;
&lt;div class=&#34;wrapper-with-intrinsic-ratio&#34; style=&#34;padding-bottom:62.69230769230769%&#34;&gt;
&lt;a data-skipendnotes=&#34;true&#34; href=&#34;/v1.1/blog/2019/egress-performance/./acmeair_regpatrol3.png&#34; title=&#34;Acmeair benchmark in the Istio performance regression patrol environment&#34;&gt;
&lt;img class=&#34;element-to-stretch&#34; src=&#34;/v1.1/blog/2019/egress-performance/./acmeair_regpatrol3.png&#34; alt=&#34;Acmeair benchmark in the Istio performance regression patrol environment&#34; /&gt;
&lt;/a&gt;
&lt;/div&gt;
&lt;figcaption&gt;Acmeair benchmark in the Istio performance regression patrol environment&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p&gt;Another difference is that in the regression patrol setup, the application communicates with the external DB using the plain MongoDB protocol. The first change made for this study was to establish TLS communication between the MongoDB and its clients running within the application, as this is a more realistic scenario.&lt;/p&gt;
&lt;p&gt;Several cases for accessing the external database from the mesh were tested and are described next.&lt;/p&gt;
&lt;h2 id=&#34;egress-traffic-cases&#34;&gt;Egress traffic cases&lt;/h2&gt;
&lt;h3 id=&#34;case-1-bypassing-the-sidecar&#34;&gt;Case 1: Bypassing the sidecar&lt;/h3&gt;
&lt;p&gt;In this case, the sidecar does not intercept the communication between the application and the external DB. This is accomplished by setting the init container argument &lt;code&gt;-x&lt;/code&gt; to the CIDR of the MongoDB, which makes the sidecar ignore messages to/from this IP address. For example:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt; - -x
- &amp;quot;169.47.232.211/32&amp;quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;figure style=&#34;width:70%&#34;&gt;
&lt;div class=&#34;wrapper-with-intrinsic-ratio&#34; style=&#34;padding-bottom:76.45536869340232%&#34;&gt;
&lt;a data-skipendnotes=&#34;true&#34; href=&#34;/v1.1/blog/2019/egress-performance/./case1_sidecar_bypass3.png&#34; title=&#34;Traffic to external MongoDB by-passing the sidecar&#34;&gt;
&lt;img class=&#34;element-to-stretch&#34; src=&#34;/v1.1/blog/2019/egress-performance/./case1_sidecar_bypass3.png&#34; alt=&#34;Traffic to external MongoDB by-passing the sidecar&#34; /&gt;
&lt;/a&gt;
&lt;/div&gt;
&lt;figcaption&gt;Traffic to external MongoDB by-passing the sidecar&lt;/figcaption&gt;
&lt;/figure&gt;
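At its core, the -x exclusion is a CIDR containment test on the destination address. The following Python sketch models that decision; it is illustrative only, not the actual iptables rules the init container installs:

```python
import ipaddress

def bypasses_sidecar(destination_ip, excluded_cidrs):
    """Return True if traffic to destination_ip is excluded from interception."""
    dst = ipaddress.ip_address(destination_ip)
    return any(dst in ipaddress.ip_network(cidr) for cidr in excluded_cidrs)

# Traffic to the MongoDB address is excluded; everything else is intercepted.
print(bypasses_sidecar("169.47.232.211", ["169.47.232.211/32"]))  # True
print(bypasses_sidecar("10.0.0.7", ["169.47.232.211/32"]))        # False
```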
&lt;h3 id=&#34;case-2-through-the-sidecar-with-service-entry&#34;&gt;Case 2: Through the sidecar, with service entry&lt;/h3&gt;
&lt;p&gt;This is the default configuration when the sidecar is injected into the application pod. All messages are intercepted by the sidecar and routed to the destination according to the configured rules, including the communication with external services. The MongoDB was defined as a &lt;code&gt;ServiceEntry&lt;/code&gt;.&lt;/p&gt;
&lt;figure style=&#34;width:70%&#34;&gt;
&lt;div class=&#34;wrapper-with-intrinsic-ratio&#34; style=&#34;padding-bottom:74.41253263707573%&#34;&gt;
&lt;a data-skipendnotes=&#34;true&#34; href=&#34;/v1.1/blog/2019/egress-performance/./case2_sidecar_passthru3.png&#34; title=&#34;Sidecar intercepting traffic to external MongoDB&#34;&gt;
&lt;img class=&#34;element-to-stretch&#34; src=&#34;/v1.1/blog/2019/egress-performance/./case2_sidecar_passthru3.png&#34; alt=&#34;Sidecar intercepting traffic to external MongoDB&#34; /&gt;
&lt;/a&gt;
&lt;/div&gt;
&lt;figcaption&gt;Sidecar intercepting traffic to external MongoDB&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;h3 id=&#34;case-3-egress-gateway&#34;&gt;Case 3: Egress gateway&lt;/h3&gt;
&lt;p&gt;The egress gateway and corresponding destination rule and virtual service resources are defined for accessing MongoDB. All traffic to and from the external DB goes through the egress gateway (Envoy).&lt;/p&gt;
&lt;figure style=&#34;width:70%&#34;&gt;
&lt;div class=&#34;wrapper-with-intrinsic-ratio&#34; style=&#34;padding-bottom:62.309368191721134%&#34;&gt;
&lt;a data-skipendnotes=&#34;true&#34; href=&#34;/v1.1/blog/2019/egress-performance/./case3_egressgw3.png&#34; title=&#34;Introduction of the egress gateway to access MongoDB&#34;&gt;
&lt;img class=&#34;element-to-stretch&#34; src=&#34;/v1.1/blog/2019/egress-performance/./case3_egressgw3.png&#34; alt=&#34;Introduction of the egress gateway to access MongoDB&#34; /&gt;
&lt;/a&gt;
&lt;/div&gt;
&lt;figcaption&gt;Introduction of the egress gateway to access MongoDB&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;h3 id=&#34;case-4-mutual-tls-between-sidecars-and-the-egress-gateway&#34;&gt;Case 4: Mutual TLS between sidecars and the egress gateway&lt;/h3&gt;
&lt;p&gt;In this case, there is an extra layer of security between the sidecars and the gateway, so some impact in performance is expected.&lt;/p&gt;
&lt;figure style=&#34;width:70%&#34;&gt;
&lt;div class=&#34;wrapper-with-intrinsic-ratio&#34; style=&#34;padding-bottom:63.968957871396896%&#34;&gt;
&lt;a data-skipendnotes=&#34;true&#34; href=&#34;/v1.1/blog/2019/egress-performance/./case4_egressgw_mtls3.png&#34; title=&#34;Enabling mutual TLS between sidecars and the egress gateway&#34;&gt;
&lt;img class=&#34;element-to-stretch&#34; src=&#34;/v1.1/blog/2019/egress-performance/./case4_egressgw_mtls3.png&#34; alt=&#34;Enabling mutual TLS between sidecars and the egress gateway&#34; /&gt;
&lt;/a&gt;
&lt;/div&gt;
&lt;figcaption&gt;Enabling mutual TLS between sidecars and the egress gateway&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;h3 id=&#34;case-5-egress-gateway-with-sni-proxy&#34;&gt;Case 5: Egress gateway with SNI proxy&lt;/h3&gt;
&lt;p&gt;This scenario is used to evaluate the case where another proxy is required to access wildcarded domains. This may be required due to current limitations of Envoy. An nginx proxy was created as a sidecar in the egress gateway pod.&lt;/p&gt;
&lt;figure style=&#34;width:70%&#34;&gt;
&lt;div class=&#34;wrapper-with-intrinsic-ratio&#34; style=&#34;padding-bottom:65.2762119503946%&#34;&gt;
&lt;a data-skipendnotes=&#34;true&#34; href=&#34;/v1.1/blog/2019/egress-performance/./case5_egressgw_sni_proxy3.png&#34; title=&#34;Egress gateway with additional SNI Proxy&#34;&gt;
&lt;img class=&#34;element-to-stretch&#34; src=&#34;/v1.1/blog/2019/egress-performance/./case5_egressgw_sni_proxy3.png&#34; alt=&#34;Egress gateway with additional SNI Proxy&#34; /&gt;
&lt;/a&gt;
&lt;/div&gt;
&lt;figcaption&gt;Egress gateway with additional SNI Proxy&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;h2 id=&#34;environment&#34;&gt;Environment&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Istio version: 1.0.2&lt;/li&gt;
&lt;li&gt;&lt;code&gt;K8s&lt;/code&gt; version: &lt;code&gt;1.10.5_1517&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;Acmeair App: 4 services (1 replica of each), inter-services transactions, external Mongo DB, avg payload: 620 bytes.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;results&#34;&gt;Results&lt;/h2&gt;
&lt;p&gt;&lt;code&gt;JMeter&lt;/code&gt; was used to generate the workload, which consisted of a sequence of 5-minute runs, each one using a growing number of clients making HTTP requests. The numbers of clients used were 1, 5, 10, 20, 30, 40, 50 and 60.&lt;/p&gt;
&lt;h3 id=&#34;throughput&#34;&gt;Throughput&lt;/h3&gt;
&lt;p&gt;The chart below shows the throughput obtained for the different cases:&lt;/p&gt;
&lt;figure style=&#34;width:75%&#34;&gt;
&lt;div class=&#34;wrapper-with-intrinsic-ratio&#34; style=&#34;padding-bottom:54.29638854296388%&#34;&gt;
&lt;a data-skipendnotes=&#34;true&#34; href=&#34;/v1.1/blog/2019/egress-performance/./throughput3.png&#34; title=&#34;Throughput obtained for the different cases&#34;&gt;
&lt;img class=&#34;element-to-stretch&#34; src=&#34;/v1.1/blog/2019/egress-performance/./throughput3.png&#34; alt=&#34;Throughput obtained for the different cases&#34; /&gt;
&lt;/a&gt;
&lt;/div&gt;
&lt;figcaption&gt;Throughput obtained for the different cases&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p&gt;As you can see, there is no major impact from having sidecars and the egress gateway between the application and the external MongoDB, but enabling mutual TLS and then adding the SNI proxy degraded the throughput by about 10% and 24%, respectively.&lt;/p&gt;
&lt;h3 id=&#34;response-time&#34;&gt;Response time&lt;/h3&gt;
&lt;p&gt;The average response times for the different requests were collected when traffic was being driven with 20 clients. The chart below shows the average, median, and 90th, 95th and 99th percentile values for each case:&lt;/p&gt;
&lt;figure style=&#34;width:75%&#34;&gt;
&lt;div class=&#34;wrapper-with-intrinsic-ratio&#34; style=&#34;padding-bottom:48.76783398184176%&#34;&gt;
&lt;a data-skipendnotes=&#34;true&#34; href=&#34;/v1.1/blog/2019/egress-performance/./response_times3.png&#34; title=&#34;Response times obtained for the different configurations&#34;&gt;
&lt;img class=&#34;element-to-stretch&#34; src=&#34;/v1.1/blog/2019/egress-performance/./response_times3.png&#34; alt=&#34;Response times obtained for the different configurations&#34; /&gt;
&lt;/a&gt;
&lt;/div&gt;
&lt;figcaption&gt;Response times obtained for the different configurations&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p&gt;Likewise, there is not much difference in the response times for the first three cases, but mutual TLS and the extra proxy add noticeable latency.&lt;/p&gt;
&lt;h3 id=&#34;cpu-utilization&#34;&gt;CPU utilization&lt;/h3&gt;
&lt;p&gt;The CPU usage was collected for all Istio components as well as for the sidecars during the runs. For a fair comparison, CPU used by Istio was normalized by the throughput obtained for a given run. The results are shown in the following graph:&lt;/p&gt;
&lt;figure style=&#34;width:75%&#34;&gt;
&lt;div class=&#34;wrapper-with-intrinsic-ratio&#34; style=&#34;padding-bottom:53.96174863387978%&#34;&gt;
&lt;a data-skipendnotes=&#34;true&#34; href=&#34;/v1.1/blog/2019/egress-performance/./cpu_usage3.png&#34; title=&#34;CPU usage normalized by TPS&#34;&gt;
&lt;img class=&#34;element-to-stretch&#34; src=&#34;/v1.1/blog/2019/egress-performance/./cpu_usage3.png&#34; alt=&#34;CPU usage normalized by TPS&#34; /&gt;
&lt;/a&gt;
&lt;/div&gt;
&lt;figcaption&gt;CPU usage normalized by TPS&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p&gt;In terms of CPU consumption per transaction, Istio has used significantly more CPU only in the egress gateway + SNI proxy case.&lt;/p&gt;
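The normalization used for that chart can be sketched as a small helper; the millicore and TPS figures below are made-up examples, not the measured values from this test.

```shell
# Per-transaction CPU cost: CPU consumed (millicores) divided by
# throughput (transactions per second) for the same run.
normalized_cpu() {
  awk -v cpu="$1" -v tps="$2" 'BEGIN { printf "%.2f\n", cpu / tps }'
}

normalized_cpu 500 250   # prints 2.00 (millicores per transaction)
```

Dividing by the throughput of the same run is what makes the cases comparable, since each configuration sustained a different TPS.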
&lt;h2 id=&#34;conclusion&#34;&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;In this investigation, we tried different options to access an external TLS-enabled MongoDB to compare their performance. The introduction of the egress gateway did not have a significant impact on the performance, nor did it cause meaningful additional CPU consumption. Only when enabling mutual TLS between the sidecars and the egress gateway, or when using an additional SNI proxy for wildcarded domains, could we observe some degradation.&lt;/p&gt;</description><pubDate>Thu, 31 Jan 2019 00:00:00 +0000</pubDate><link>/v1.1/blog/2019/egress-performance/</link><author>Jose Nativio, IBM</author><guid isPermaLink="true">/v1.1/blog/2019/egress-performance/</guid><category>performance</category><category>traffic-management</category><category>egress</category><category>mongo</category></item><item><title>Demystifying Istio&#39;s Sidecar Injection Model</title><description>
&lt;p&gt;A simple overview of an Istio service-mesh architecture always starts with describing the control-plane and data-plane.&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;/v1.1/docs/concepts/what-is-istio/#architecture&#34;&gt;From Istios documentation:&lt;/a&gt;&lt;/p&gt;
&lt;div&gt;
&lt;aside class=&#34;callout quote&#34;&gt;
&lt;div class=&#34;type&#34;&gt;
&lt;svg class=&#34;large-icon&#34;&gt;&lt;use xlink:href=&#34;/v1.1/img/icons.svg#callout-quote&#34;/&gt;&lt;/svg&gt;
&lt;/div&gt;
&lt;div class=&#34;content&#34;&gt;&lt;p&gt;An Istio service mesh is logically split into a data plane and a control plane.&lt;/p&gt;
&lt;p&gt;The data plane is composed of a set of intelligent proxies (Envoy) deployed as sidecars. These proxies mediate and control all network communication between microservices along with Mixer, a general-purpose policy and telemetry hub.&lt;/p&gt;
&lt;p&gt;The control plane manages and configures the proxies to route traffic. Additionally, the control plane configures Mixers to enforce policies and collect telemetry.&lt;/p&gt;
&lt;/div&gt;
&lt;/aside&gt;
&lt;/div&gt;
&lt;figure style=&#34;width:40%&#34;&gt;
&lt;div class=&#34;wrapper-with-intrinsic-ratio&#34; style=&#34;padding-bottom:80%&#34;&gt;
&lt;a data-skipendnotes=&#34;true&#34; href=&#34;/v1.1/blog/2019/data-plane-setup/./arch-2.svg&#34; title=&#34;Istio Architecture&#34;&gt;
&lt;img class=&#34;element-to-stretch&#34; src=&#34;/v1.1/blog/2019/data-plane-setup/./arch-2.svg&#34; alt=&#34;The overall architecture of an Istio-based application.&#34; /&gt;
&lt;/a&gt;
&lt;/div&gt;
&lt;figcaption&gt;Istio Architecture&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p&gt;It is important to understand that the sidecar injection into the application pods happens automatically, though manual injection is also possible. Traffic is directed from the application services to and from these sidecars without developers needing to worry about it. Once the applications are connected to the Istio service mesh, developers can start using and reaping the benefits of all that the service mesh has to offer. However, how does the data plane plumbing happen and what is really required to make it work seamlessly? In this post, we will deep-dive into the specifics of the sidecar injection models to gain a very clear understanding of how sidecar injection works.&lt;/p&gt;
&lt;h2 id=&#34;sidecar-injection&#34;&gt;Sidecar injection&lt;/h2&gt;
&lt;p&gt;In simple terms, sidecar injection is adding the configuration of additional containers to the pod template. The added containers needed for the Istio service mesh are:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;istio-init&lt;/code&gt;
This &lt;a href=&#34;https://kubernetes.io/docs/concepts/workloads/pods/init-containers/&#34;&gt;init container&lt;/a&gt; is used to set up the &lt;code&gt;iptables&lt;/code&gt; rules so that inbound/outbound traffic will go through the sidecar proxy. An init container is different from an app container in the following ways:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;It runs before an app container is started and it always runs to completion.&lt;/li&gt;
&lt;li&gt;If there are multiple init containers, each must complete successfully before the next one is started.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;So, you can see how this type of container is perfect for a set-up or initialization job which does not need to be a part of the actual application container. In this case, &lt;code&gt;istio-init&lt;/code&gt; does just that and sets up the &lt;code&gt;iptables&lt;/code&gt; rules.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;istio-proxy&lt;/code&gt;
This is the actual sidecar proxy (based on Envoy).&lt;/p&gt;
&lt;h3 id=&#34;manual-injection&#34;&gt;Manual injection&lt;/h3&gt;
&lt;p&gt;In the manual injection method, you can use &lt;code&gt;istioctl&lt;/code&gt; to modify the pod template and add the configuration of the two containers previously mentioned. For both manual as well as automatic injection, Istio takes the configuration from the &lt;code&gt;istio-sidecar-injector&lt;/code&gt; configuration map (configmap) and the mesh&amp;rsquo;s &lt;code&gt;istio&lt;/code&gt; configmap.&lt;/p&gt;
&lt;p&gt;Lets look at the configuration of the &lt;code&gt;istio-sidecar-injector&lt;/code&gt; configmap, to get an idea of what actually is going on.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; data-outputis=&#39;yaml&#39; &gt;$ kubectl -n istio-system get configmap istio-sidecar-injector -o=jsonpath=&amp;#39;{.data.config}&amp;#39;
SNIPPET from the output:
policy: enabled
template: |-
initContainers:
- name: istio-init
image: docker.io/istio/proxy_init:1.0.2
args:
- &amp;#34;-p&amp;#34;
- [[ .MeshConfig.ProxyListenPort ]]
- &amp;#34;-u&amp;#34;
- 1337
.....
imagePullPolicy: IfNotPresent
securityContext:
capabilities:
add:
- NET_ADMIN
restartPolicy: Always
containers:
- name: istio-proxy
image: [[ if (isset .ObjectMeta.Annotations &amp;#34;sidecar.istio.io/proxyImage&amp;#34;) -]]
&amp;#34;[[ index .ObjectMeta.Annotations &amp;#34;sidecar.istio.io/proxyImage&amp;#34; ]]&amp;#34;
[[ else -]]
docker.io/istio/proxyv2:1.0.2
[[ end -]]
args:
- proxy
- sidecar
.....
env:
.....
- name: ISTIO_META_INTERCEPTION_MODE
value: [[ or (index .ObjectMeta.Annotations &amp;#34;sidecar.istio.io/interceptionMode&amp;#34;) .ProxyConfig.InterceptionMode.String ]]
imagePullPolicy: IfNotPresent
securityContext:
readOnlyRootFilesystem: true
[[ if eq (or (index .ObjectMeta.Annotations &amp;#34;sidecar.istio.io/interceptionMode&amp;#34;) .ProxyConfig.InterceptionMode.String) &amp;#34;TPROXY&amp;#34; -]]
capabilities:
add:
- NET_ADMIN
restartPolicy: Always
.....
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;As you can see, the configmap contains the configuration for both the &lt;code&gt;istio-init&lt;/code&gt; init container and the &lt;code&gt;istio-proxy&lt;/code&gt; proxy container. The configuration includes the name of the container image and arguments like the interception mode, capabilities, etc.&lt;/p&gt;
&lt;p&gt;From a security point of view, it is important to note that &lt;code&gt;istio-init&lt;/code&gt; requires &lt;code&gt;NET_ADMIN&lt;/code&gt; capabilities to modify &lt;code&gt;iptables&lt;/code&gt; within the pod&amp;rsquo;s namespace, and so does &lt;code&gt;istio-proxy&lt;/code&gt; if configured in &lt;code&gt;TPROXY&lt;/code&gt; mode. As this is restricted to a pod&amp;rsquo;s namespace, there should be no problem. However, I have noticed that recent OpenShift versions may have some issues with it and a workaround is needed. One such option is mentioned at the end of this post.&lt;/p&gt;
&lt;p&gt;To modify the current pod template for sidecar injection, you can:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ istioctl kube-inject -f demo-red.yaml | kubectl apply -f -
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;OR&lt;/p&gt;
&lt;p&gt;To use modified configmaps or local configmaps:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Create &lt;code&gt;inject-config.yaml&lt;/code&gt; and &lt;code&gt;mesh-config.yaml&lt;/code&gt; from the configmaps&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ kubectl -n istio-system get configmap istio-sidecar-injector -o=jsonpath=&amp;#39;{.data.config}&amp;#39; &amp;gt; inject-config.yaml
$ kubectl -n istio-system get configmap istio -o=jsonpath=&amp;#39;{.data.mesh}&amp;#39; &amp;gt; mesh-config.yaml
&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Modify the existing pod template, in my case, &lt;code&gt;demo-red.yaml&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ istioctl kube-inject --injectConfigFile inject-config.yaml --meshConfigFile mesh-config.yaml --filename demo-red.yaml --output demo-red-injected.yaml
&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Apply the &lt;code&gt;demo-red-injected.yaml&lt;/code&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ kubectl apply -f demo-red-injected.yaml
&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;As seen above, we create a new template using the &lt;code&gt;sidecar-injector&lt;/code&gt; and the mesh configuration, and then apply that new template using &lt;code&gt;kubectl&lt;/code&gt;. If we look at the injected YAML file, it has the configuration of the Istio-specific containers, as we discussed above. Once we apply the injected YAML file, we see two containers running. One of them is the actual application container, and the other is the &lt;code&gt;istio-proxy&lt;/code&gt; sidecar.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ kubectl get pods | grep demo-red
demo-red-pod-8b5df99cc-pgnl7 2/2 Running 0 3d
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The count is not 3 because the &lt;code&gt;istio-init&lt;/code&gt; container is an init-type container that exits after doing what it is supposed to do, which is setting up the &lt;code&gt;iptables&lt;/code&gt; rules within the pod. To confirm that the init container exited, let&amp;rsquo;s look at the output of &lt;code&gt;kubectl describe&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; data-outputis=&#39;yaml&#39; &gt;$ kubectl describe pod demo-red-pod-8b5df99cc-pgnl7
SNIPPET from the output:
Name: demo-red-pod-8b5df99cc-pgnl7
Namespace: default
.....
Labels: app=demo-red
pod-template-hash=8b5df99cc
version=version-red
Annotations: sidecar.istio.io/status={&amp;#34;version&amp;#34;:&amp;#34;3c0b8d11844e85232bc77ad85365487638ee3134c91edda28def191c086dc23e&amp;#34;,&amp;#34;initContainers&amp;#34;:[&amp;#34;istio-init&amp;#34;],&amp;#34;containers&amp;#34;:[&amp;#34;istio-proxy&amp;#34;],&amp;#34;volumes&amp;#34;:[&amp;#34;istio-envoy&amp;#34;,&amp;#34;istio-certs...
Status: Running
IP: 10.32.0.6
Controlled By: ReplicaSet/demo-red-pod-8b5df99cc
Init Containers:
istio-init:
Container ID: docker://bef731eae1eb3b6c9d926cacb497bb39a7d9796db49cd14a63014fc1a177d95b
Image: docker.io/istio/proxy_init:1.0.2
Image ID: docker-pullable://docker.io/istio/proxy_init@sha256:e16a0746f46cd45a9f63c27b9e09daff5432e33a2d80c8cc0956d7d63e2f9185
.....
State: Terminated
Reason: Completed
.....
Ready: True
Containers:
demo-red:
Container ID: docker://8cd9957955ff7e534376eb6f28b56462099af6dfb8b9bc37aaf06e516175495e
Image: chugtum/blue-green-image:v3
Image ID: docker-pullable://docker.io/chugtum/blue-green-image@sha256:274756dbc215a6b2bd089c10de24fcece296f4c940067ac1a9b4aea67cf815db
State: Running
Started: Sun, 09 Dec 2018 18:12:31 -0800
Ready: True
istio-proxy:
Container ID: docker://ca5d690be8cd6557419cc19ec4e76163c14aed2336eaad7ebf17dd46ca188b4a
Image: docker.io/istio/proxyv2:1.0.2
Image ID: docker-pullable://docker.io/istio/proxyv2@sha256:54e206530ba6ca9b3820254454e01b7592e9f986d27a5640b6c03704b3b68332
Args:
proxy
sidecar
.....
State: Running
Started: Sun, 09 Dec 2018 18:12:31 -0800
Ready: True
.....
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;As seen in the output, the &lt;code&gt;State&lt;/code&gt; of the &lt;code&gt;istio-init&lt;/code&gt; container is &lt;code&gt;Terminated&lt;/code&gt; with the &lt;code&gt;Reason&lt;/code&gt; being &lt;code&gt;Completed&lt;/code&gt;. The only two containers running are the main application &lt;code&gt;demo-red&lt;/code&gt; container and the &lt;code&gt;istio-proxy&lt;/code&gt; container.&lt;/p&gt;
&lt;h3 id=&#34;automatic-injection&#34;&gt;Automatic injection&lt;/h3&gt;
&lt;p&gt;Most of the time, you don&amp;rsquo;t want to manually inject a sidecar with the &lt;code&gt;istioctl&lt;/code&gt; command every time you deploy an application, but would prefer that Istio automatically inject the sidecar into your pod. This is the recommended approach, and for it to work, all you need to do is label the namespace where you are deploying the app with &lt;code&gt;istio-injection=enabled&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;Once labeled, Istio injects the sidecar automatically for any pod you deploy in that namespace. In the following example, the sidecar gets automatically injected in the deployed pods in the &lt;code&gt;istio-dev&lt;/code&gt; namespace.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ kubectl get namespaces --show-labels
NAME STATUS AGE LABELS
default Active 40d &amp;lt;none&amp;gt;
istio-dev Active 19d istio-injection=enabled
istio-system Active 24d &amp;lt;none&amp;gt;
kube-public Active 40d &amp;lt;none&amp;gt;
kube-system Active 40d &amp;lt;none&amp;gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;But how does this work? To get to the bottom of this, we need to understand Kubernetes admission controllers.&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/&#34;&gt;From Kubernetes documentation:&lt;/a&gt;&lt;/p&gt;
&lt;div&gt;
&lt;aside class=&#34;callout tip&#34;&gt;
&lt;div class=&#34;type&#34;&gt;&lt;svg class=&#34;large-icon&#34;&gt;&lt;use xlink:href=&#34;/v1.1/img/icons.svg#callout-tip&#34;/&gt;&lt;/svg&gt;&lt;/div&gt;
&lt;div class=&#34;content&#34;&gt;An admission controller is a piece of code that intercepts requests to the Kubernetes API server prior to persistence of the object, but after the request is authenticated and authorized. You can define two types of admission webhooks, validating admission Webhook and mutating admission webhook. With validating admission Webhooks, you may reject requests to enforce custom admission policies. With mutating admission Webhooks, you may change requests to enforce custom defaults.&lt;/div&gt;
&lt;/aside&gt;
&lt;/div&gt;
&lt;p&gt;For automatic sidecar injection, Istio relies on &lt;code&gt;Mutating Admission Webhook&lt;/code&gt;. Lets look at the details of the &lt;code&gt;istio-sidecar-injector&lt;/code&gt; mutating webhook configuration.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; data-outputis=&#39;yaml&#39; &gt;$ kubectl get mutatingwebhookconfiguration istio-sidecar-injector -o yaml
SNIPPET from the output:
apiVersion: admissionregistration.k8s.io/v1beta1
kind: MutatingWebhookConfiguration
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{&amp;#34;apiVersion&amp;#34;:&amp;#34;admissionregistration.k8s.io/v1beta1&amp;#34;,&amp;#34;kind&amp;#34;:&amp;#34;MutatingWebhookConfiguration&amp;#34;,&amp;#34;metadata&amp;#34;:{&amp;#34;annotations&amp;#34;:{},&amp;#34;labels&amp;#34;:{&amp;#34;app&amp;#34;:&amp;#34;istio-sidecar-injector&amp;#34;,&amp;#34;chart&amp;#34;:&amp;#34;sidecarInjectorWebhook-1.0.1&amp;#34;,&amp;#34;heritage&amp;#34;:&amp;#34;Tiller&amp;#34;,&amp;#34;release&amp;#34;:&amp;#34;istio-remote&amp;#34;},&amp;#34;name&amp;#34;:&amp;#34;istio-sidecar-injector&amp;#34;,&amp;#34;namespace&amp;#34;:&amp;#34;&amp;#34;},&amp;#34;webhooks&amp;#34;:[{&amp;#34;clientConfig&amp;#34;:{&amp;#34;caBundle&amp;#34;:&amp;#34;&amp;#34;,&amp;#34;service&amp;#34;:{&amp;#34;name&amp;#34;:&amp;#34;istio-sidecar-injector&amp;#34;,&amp;#34;namespace&amp;#34;:&amp;#34;istio-system&amp;#34;,&amp;#34;path&amp;#34;:&amp;#34;/inject&amp;#34;}},&amp;#34;failurePolicy&amp;#34;:&amp;#34;Fail&amp;#34;,&amp;#34;name&amp;#34;:&amp;#34;sidecar-injector.istio.io&amp;#34;,&amp;#34;namespaceSelector&amp;#34;:{&amp;#34;matchLabels&amp;#34;:{&amp;#34;istio-injection&amp;#34;:&amp;#34;enabled&amp;#34;}},&amp;#34;rules&amp;#34;:[{&amp;#34;apiGroups&amp;#34;:[&amp;#34;&amp;#34;],&amp;#34;apiVersions&amp;#34;:[&amp;#34;v1&amp;#34;],&amp;#34;operations&amp;#34;:[&amp;#34;CREATE&amp;#34;],&amp;#34;resources&amp;#34;:[&amp;#34;pods&amp;#34;]}]}]}
creationTimestamp: 2018-12-10T08:40:15Z
generation: 2
labels:
app: istio-sidecar-injector
chart: sidecarInjectorWebhook-1.0.1
heritage: Tiller
release: istio-remote
name: istio-sidecar-injector
.....
webhooks:
- clientConfig:
service:
name: istio-sidecar-injector
namespace: istio-system
path: /inject
name: sidecar-injector.istio.io
namespaceSelector:
matchLabels:
istio-injection: enabled
rules:
- apiGroups:
- &amp;#34;&amp;#34;
apiVersions:
- v1
operations:
- CREATE
resources:
- pods
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This is where you can see the webhook &lt;code&gt;namespaceSelector&lt;/code&gt; label that is matched for sidecar injection with the label &lt;code&gt;istio-injection: enabled&lt;/code&gt;. In this case, you also see the operations and resources for which this is done when the pods are created. When an &lt;code&gt;apiserver&lt;/code&gt; receives a request that matches one of the rules, the &lt;code&gt;apiserver&lt;/code&gt; sends an admission review request to the webhook service as specified in the &lt;code&gt;clientConfig:&lt;/code&gt; configuration with the &lt;code&gt;name: istio-sidecar-injector&lt;/code&gt; key-value pair. We should be able to see that this service is running in the &lt;code&gt;istio-system&lt;/code&gt; namespace.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ kubectl get svc --namespace=istio-system | grep sidecar-injector
istio-sidecar-injector ClusterIP 10.102.70.184 &amp;lt;none&amp;gt; 443/TCP 24d
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This configuration ultimately does pretty much the same as we saw in manual injection, except that it happens automatically during pod creation, so you won&amp;rsquo;t see the change in the deployment. You need to use &lt;code&gt;kubectl describe&lt;/code&gt; to see the sidecar proxy and the init container. In case you want to change the default behavior, like the namespaces where Istio applies the injection, you can edit the &lt;code&gt;MutatingWebhookConfiguration&lt;/code&gt; and restart the sidecar injector pod.&lt;/p&gt;
&lt;p&gt;The automatic sidecar injection depends not only on the webhook&amp;rsquo;s &lt;code&gt;namespaceSelector&lt;/code&gt;, but also on the default injection policy and the per-pod override annotation.&lt;/p&gt;
&lt;p&gt;If you look at the &lt;code&gt;istio-sidecar-injector&lt;/code&gt; ConfigMap again, it has the default injection policy defined. In our case, it is enabled by default.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; data-outputis=&#39;yaml&#39; &gt;$ kubectl -n istio-system get configmap istio-sidecar-injector -o=jsonpath=&amp;#39;{.data.config}&amp;#39;
SNIPPET from the output:
policy: enabled
template: |-
initContainers:
- name: istio-init
image: &amp;#34;gcr.io/istio-release/proxy_init:1.0.2&amp;#34;
args:
- &amp;#34;-p&amp;#34;
- [[ .MeshConfig.ProxyListenPort ]]
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You can also use the annotation &lt;code&gt;sidecar.istio.io/inject&lt;/code&gt; in the pod template to override the default policy. The following example disables the automatic injection of the sidecar for the pods in a &lt;code&gt;Deployment&lt;/code&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-yaml&#39; data-expandlinks=&#39;true&#39; &gt;apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: ignored
spec:
template:
metadata:
annotations:
sidecar.istio.io/inject: &amp;#34;false&amp;#34;
spec:
containers:
- name: ignored
image: tutum/curl
command: [&amp;#34;/bin/sleep&amp;#34;,&amp;#34;infinity&amp;#34;]
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This example shows that automatic sidecar injection is controlled by several variables, set at the namespace, ConfigMap, or pod level:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;webhooks &lt;code&gt;namespaceSelector&lt;/code&gt; (&lt;code&gt;istio-injection: enabled&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;default policy (Configured in the ConfigMap &lt;code&gt;istio-sidecar-injector&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;per-pod override annotation (&lt;code&gt;sidecar.istio.io/inject&lt;/code&gt;)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The &lt;a href=&#34;../../../help/ops/setup/injection/&#34;&gt;injection status table&lt;/a&gt; shows a clear picture of the final injection status based on the value of the above variables.&lt;/p&gt;
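As a rough sketch, assuming the rules summarized in that table, the final decision can be expressed as a small function: the namespace label must match the webhook&#39;s `namespaceSelector` first, and then the per-pod annotation, if present, overrides the default policy from the ConfigMap.

```shell
# injected NS_LABEL POLICY ANNOTATION -> "true"/"false"
# NS_LABEL:   value of the istio-injection namespace label
# POLICY:     default policy from the istio-sidecar-injector ConfigMap
# ANNOTATION: per-pod sidecar.istio.io/inject annotation ("" if unset)
injected() {
  local ns_label="$1" policy="$2" annotation="$3"
  # No matching namespace label: the webhook is never invoked.
  [ "$ns_label" = "enabled" ] || { echo false; return; }
  case "$policy:$annotation" in
    enabled:false) echo false ;;   # pod opts out of an enabled policy
    enabled:*)     echo true  ;;   # injected by default
    disabled:true) echo true  ;;   # pod opts in despite a disabled policy
    *)             echo false ;;
  esac
}

injected enabled enabled ""       # prints true
injected enabled enabled false    # prints false
```

This is only a sketch of the decision logic; the linked injection status table is the authoritative reference.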
&lt;h2 id=&#34;traffic-flow-from-application-container-to-sidecar-proxy&#34;&gt;Traffic flow from application container to sidecar proxy&lt;/h2&gt;
&lt;p&gt;Now that we are clear about how a sidecar container and an init container are injected into an application manifest, how does the sidecar proxy grab the inbound and outbound traffic to and from the container? We did briefly mention that it is done by setting up the &lt;code&gt;iptables&lt;/code&gt; rules within the pod namespace, which in turn is done by the &lt;code&gt;istio-init&lt;/code&gt; container. Now, it is time to verify what actually gets updated within the namespace.&lt;/p&gt;
&lt;p&gt;Lets get into the application pod namespace we deployed in the previous section and look at the configured iptables. I am going to show an example using &lt;code&gt;nsenter&lt;/code&gt;. Alternatively, you can enter the container in a privileged mode to see the same information. For folks without access to the nodes, using &lt;code&gt;exec&lt;/code&gt; to get into the sidecar and running &lt;code&gt;iptables&lt;/code&gt; is more practical.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ docker inspect b8de099d3510 --format &amp;#39;{{ .State.Pid }}&amp;#39;
4125
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ nsenter -t 4125 -n iptables -t nat -S
-P PREROUTING ACCEPT
-P INPUT ACCEPT
-P OUTPUT ACCEPT
-P POSTROUTING ACCEPT
-N ISTIO_INBOUND
-N ISTIO_IN_REDIRECT
-N ISTIO_OUTPUT
-N ISTIO_REDIRECT
-A PREROUTING -p tcp -j ISTIO_INBOUND
-A OUTPUT -p tcp -j ISTIO_OUTPUT
-A ISTIO_INBOUND -p tcp -m tcp --dport 80 -j ISTIO_IN_REDIRECT
-A ISTIO_IN_REDIRECT -p tcp -j REDIRECT --to-ports 15001
-A ISTIO_OUTPUT ! -d 127.0.0.1/32 -o lo -j ISTIO_REDIRECT
-A ISTIO_OUTPUT -m owner --uid-owner 1337 -j RETURN
-A ISTIO_OUTPUT -m owner --gid-owner 1337 -j RETURN
-A ISTIO_OUTPUT -d 127.0.0.1/32 -j RETURN
-A ISTIO_OUTPUT -j ISTIO_REDIRECT
-A ISTIO_REDIRECT -p tcp -j REDIRECT --to-ports 15001
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The output above clearly shows that all the incoming traffic to port 80, the port on which our &lt;code&gt;demo-red&lt;/code&gt; application is listening, is now &lt;code&gt;REDIRECTED&lt;/code&gt; to port &lt;code&gt;15001&lt;/code&gt;, the port on which the &lt;code&gt;istio-proxy&lt;/code&gt;, an Envoy proxy, is listening. The same holds true for the outgoing traffic.&lt;/p&gt;
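The `ISTIO_OUTPUT` chain above can be read as a short decision procedure. The toy model below captures it (ignoring the loopback-interface special case in the first rule): traffic from Envoy&#39;s own uid/gid 1337 and traffic destined to 127.0.0.1 returns untouched, everything else is redirected to Envoy on port 15001.

```shell
# outbound_verdict UID DST_IP -> iptables verdict for an outbound packet,
# mirroring the ISTIO_OUTPUT rules shown above (loopback case omitted).
outbound_verdict() {
  local uid="$1" dst="$2"
  # --uid-owner/--gid-owner 1337 -j RETURN: Envoy's own traffic must not
  # loop back into Envoy.
  if [ "$uid" = "1337" ]; then echo RETURN; return; fi
  # -d 127.0.0.1/32 -j RETURN: local loopback traffic is left alone.
  if [ "$dst" = "127.0.0.1" ]; then echo RETURN; return; fi
  # Everything else falls through to ISTIO_REDIRECT.
  echo "REDIRECT --to-ports 15001"
}

outbound_verdict 1337 10.0.0.5   # prints RETURN (Envoy's own traffic)
outbound_verdict 1000 10.0.0.5   # prints REDIRECT --to-ports 15001
```

The uid 1337 exemption is what prevents an infinite redirect loop once Envoy itself opens the upstream connection.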
&lt;p&gt;This brings us to the end of this post. I hope it helped to de-mystify how Istio manages to inject the sidecar proxies into an existing deployment and how Istio routes the traffic to the proxy.&lt;/p&gt;
&lt;div&gt;
&lt;aside class=&#34;callout idea&#34;&gt;
&lt;div class=&#34;type&#34;&gt;
&lt;svg class=&#34;large-icon&#34;&gt;&lt;use xlink:href=&#34;/v1.1/img/icons.svg#callout-idea&#34;/&gt;&lt;/svg&gt;
&lt;/div&gt;
&lt;div class=&#34;content&#34;&gt;Update: In place of &lt;code&gt;istio-init&lt;/code&gt;, there now seems to be an option of using the new CNI, which removes the need for the init container and associated privileges. This &lt;a href=&#34;https://github.com/istio/cni&#34;&gt;&lt;code&gt;istio-cni&lt;/code&gt;&lt;/a&gt; plugin sets up the pods&amp;rsquo; networking to fulfill this requirement in place of the current Istio injected pod &lt;code&gt;istio-init&lt;/code&gt; approach.&lt;/div&gt;
&lt;/aside&gt;
&lt;/div&gt;</description><pubDate>Thu, 31 Jan 2019 00:00:00 +0000</pubDate><link>/v1.1/blog/2019/data-plane-setup/</link><author>Manish Chugtu</author><guid isPermaLink="true">/v1.1/blog/2019/data-plane-setup/</guid><category>kubernetes</category><category>sidecar-injection</category><category>traffic-management</category></item><item><title>Sidestepping Dependency Ordering with AppSwitch</title><description>
&lt;p&gt;We are going through an interesting cycle of application decomposition and recomposition. While the microservice paradigm is driving monolithic applications to be broken into separate individual services, the service mesh approach is helping them to be connected back together into well-structured applications. As such, microservices are logically separate but not independent. They are usually closely interdependent and taking them apart introduces many new concerns such as need for mutual authentication between services. Istio directly addresses most of those issues.&lt;/p&gt;
&lt;h2 id=&#34;dependency-ordering-problem&#34;&gt;Dependency ordering problem&lt;/h2&gt;
&lt;p&gt;An issue that arises due to application decomposition and one that Istio doesnt address is dependency ordering &amp;ndash; bringing up individual services of an application in an order that guarantees that the application as a whole comes up quickly and correctly. In a monolithic application, with all its components built-in, dependency ordering between the components is enforced by internal locking mechanisms. But with individual services potentially scattered across the cluster in a service mesh, starting a service first requires checking that the services it depends on are up and available.&lt;/p&gt;
&lt;p&gt;Dependency ordering is deceptively nuanced with a host of interrelated problems. Ordering individual services requires having the dependency graph of the services so that they can be brought up starting from leaf nodes back to the root nodes. It is not easy to construct such a graph and keep it updated over time as interdependencies evolve with the behavior of the application. Even if the dependency graph is somehow provided, enforcing the ordering itself is not easy. Simply starting the services in the specified order obviously won&amp;rsquo;t do. A service may have started but not be ready to accept connections yet. This is the problem with docker-compose&amp;rsquo;s &lt;code&gt;depends_on&lt;/code&gt; tag, for example.&lt;/p&gt;
&lt;p&gt;Apart from introducing sufficiently long sleeps between service startups, a common pattern is to check for readiness of dependencies before starting a service. In Kubernetes, this could be done with a wait script as part of the init container of the pod. However, that means that the entire application would be held up until all its dependencies come alive. Sometimes applications spend several minutes initializing themselves on startup before making their first outbound connection. Not allowing a service to start at all adds substantial overhead to the overall startup time of the application. Also, the strategy of waiting on the init container won&amp;rsquo;t work for the case of multiple interdependent services within the same pod.&lt;/p&gt;
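The wait-script pattern mentioned above typically boils down to a retry loop like the following minimal sketch. The probe command is a caller-supplied assumption (for the WebSphere case it might be something like `nc -z dmgr-service 9043`, not a detail from this post).

```shell
# wait_for TRIES PROBE... : retry PROBE up to TRIES times, one second
# apart, until it succeeds; returns non-zero if the dependency never
# becomes ready.
wait_for() {
  tries="$1"; shift
  i=1
  while [ "$i" -le "$tries" ]; do
    if "$@"; then return 0; fi
    sleep 1
    i=$((i + 1))
  done
  return 1
}

wait_for 3 true && echo ready    # prints ready
```

Note that this loop embodies exactly the drawback described above: the container running it makes no progress at all until the probe succeeds or the attempts run out.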
&lt;h3 id=&#34;example-scenario-ibm-websphere-nd&#34;&gt;Example scenario: IBM WebSphere ND&lt;/h3&gt;
&lt;p&gt;Let us consider IBM WebSphere ND &amp;ndash; a widely deployed application middleware &amp;ndash; to grok these problems more closely. It is a fairly complex framework in itself and consists of a central component called deployment manager (&lt;code&gt;dmgr&lt;/code&gt;) that manages a set of node instances. It uses UDP to negotiate cluster membership among the nodes and requires that deployment manager is up and operational before any of the node instances can come up and join the cluster.&lt;/p&gt;
&lt;p&gt;Why are we talking about a traditional application in the modern cloud-native context? It turns out that there are significant gains to be had by enabling them to run on the Kubernetes and Istio platforms. Essentially it&amp;rsquo;s a part of the modernization journey that allows running traditional apps alongside green-field apps on the same modern platform to facilitate interoperation between the two. In fact, WebSphere ND is a demanding application. It expects a consistent network environment with specific network interface attributes etc. AppSwitch is equipped to take care of those requirements. For the purpose of this blog however, I&amp;rsquo;ll focus on the dependency ordering requirement and how AppSwitch addresses it.&lt;/p&gt;
&lt;p&gt;Simply deploying &lt;code&gt;dmgr&lt;/code&gt; and node instances as pods on a Kubernetes cluster does not work. &lt;code&gt;dmgr&lt;/code&gt; and the node instances happen to have a lengthy initialization process that can take several minutes. If they are all co-scheduled, the application typically ends up in a funny state. When a node instance comes up and finds that &lt;code&gt;dmgr&lt;/code&gt; is missing, it would take an alternate startup path. Instead, if it had exited immediately, Kubernetes crash-loop would have taken over and perhaps the application would have come up. But even in that case, it turns out that a timely startup is not guaranteed.&lt;/p&gt;
&lt;p&gt;One &lt;code&gt;dmgr&lt;/code&gt; along with its node instances is a basic deployment configuration for WebSphere ND. Applications like IBM Business Process Manager that are built on top of WebSphere ND running in production environments include several other services. In those configurations, there could be a chain of interdependencies. Depending on the applications hosted by the node instances, there may be an ordering requirement among them as well. With long service initialization times and crash-loop restarts, there is little chance for the application to start in any reasonable length of time.&lt;/p&gt;
&lt;h3 id=&#34;sidecar-dependency-in-istio&#34;&gt;Sidecar dependency in Istio&lt;/h3&gt;
&lt;p&gt;Istio itself is affected by a version of the dependency ordering problem. Since connections into and out of a service running under Istio are redirected through its sidecar proxy, an implicit dependency is created between the application service and its sidecar. Unless the sidecar is fully operational, all requests from and to the service get dropped.&lt;/p&gt;
&lt;h2 id=&#34;dependency-ordering-with-appswitch&#34;&gt;Dependency ordering with AppSwitch&lt;/h2&gt;
&lt;p&gt;So how do we go about addressing these issues? One way is to defer it to the applications and say that they are supposed to be &amp;ldquo;well behaved&amp;rdquo; and implement appropriate logic to make themselves immune to startup order issues. However, many applications (especially traditional ones) either time out or deadlock if misordered. Even for new applications, implementing one-off logic for each service is a substantial additional burden that is best avoided. A service mesh needs to provide adequate support around these problems. After all, factoring out common patterns into an underlying framework is really the point of a service mesh.&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;http://appswitch.io&#34;&gt;AppSwitch&lt;/a&gt; explicitly addresses dependency ordering. It sits on the control path of the application&amp;rsquo;s network interactions between clients and services in a cluster and knows precisely when a service becomes a client by making the &lt;code&gt;connect&lt;/code&gt; call and when a particular service becomes ready to accept connections by making the &lt;code&gt;listen&lt;/code&gt; call. Its &lt;em&gt;service router&lt;/em&gt; component disseminates information about these events across the cluster and arbitrates interactions among clients and servers. That is how AppSwitch implements functionality such as load balancing and isolation in a simple and efficient manner. Leveraging the same strategic location on the application&amp;rsquo;s network control path, it is conceivable that the &lt;code&gt;connect&lt;/code&gt; and &lt;code&gt;listen&lt;/code&gt; calls made by those services can be lined up at a finer granularity rather than coarsely sequencing entire services as per a dependency graph. That would effectively solve the multilevel dependency problem and speed up application startup.&lt;/p&gt;
&lt;p&gt;But that still requires a dependency graph. A number of products and tools exist to help with discovering service dependencies. But they are typically based on passive monitoring of network traffic and cannot provide the information beforehand for any arbitrary application. Network level obfuscation due to encryption and tunneling also makes them unreliable. The burden of discovering and specifying the dependencies ultimately falls to the developer or the operator of the application. As it is, even consistency checking a dependency specification is itself quite complex and any way to avoid requiring a dependency graph would be most desirable.&lt;/p&gt;
&lt;p&gt;The point of a dependency graph is to know which clients depend on a particular service so that those clients can then be made to wait for the respective service to become live. But does it really matter which specific clients? Ultimately one tautology that always holds is that all clients of a service have an implicit dependency on the service. That&amp;rsquo;s what AppSwitch leverages to get around the requirement. In fact, that sidesteps dependency ordering altogether. All services of the application can be co-scheduled without regard to any startup order. Interdependencies among them automatically work themselves out at the granularity of individual requests and responses, resulting in quick and correct application startups.&lt;/p&gt;
&lt;h3 id=&#34;appswitch-model-and-constructs&#34;&gt;AppSwitch model and constructs&lt;/h3&gt;
&lt;p&gt;Now that we have a conceptual understanding of AppSwitch&amp;rsquo;s high-level approach, let&amp;rsquo;s look at the constructs involved. But first a quick summary of the usage model is in order. Even though it is written for a different context, reviewing my earlier &lt;a href=&#34;/v1.1/blog/2018/delayering-istio/&#34;&gt;blog&lt;/a&gt; on this topic would be useful as well. For completeness, let me also note AppSwitch doesn&amp;rsquo;t bother with non-network dependencies. For example, it may be possible for two services to interact using IPC mechanisms or through the shared file system. Processes with deep ties like that are typically part of the same service anyway and don&amp;rsquo;t require the framework&amp;rsquo;s intervention for ordering.&lt;/p&gt;
&lt;p&gt;At its core, AppSwitch is built on a mechanism that allows instrumenting the BSD socket API and other related calls like &lt;code&gt;fcntl&lt;/code&gt; and &lt;code&gt;ioctl&lt;/code&gt; that deal with sockets. As interesting as the details of its implementation are, it&amp;rsquo;s going to distract us from the main topic, so I&amp;rsquo;d just summarize the key properties that distinguish it from other implementations. (1) It&amp;rsquo;s fast. It uses a combination of &lt;code&gt;seccomp&lt;/code&gt; filtering and binary instrumentation to aggressively limit intervening with the application&amp;rsquo;s normal execution. AppSwitch is particularly suited for service mesh and application networking use cases given that it implements those features without ever having to actually touch the data. In contrast, network-level approaches incur per-packet cost. Take a look at this &lt;a href=&#34;/v1.1/blog/2018/delayering-istio/&#34;&gt;blog&lt;/a&gt; for some of the performance measurements. (2) It doesn&amp;rsquo;t require any kernel support, kernel module, or patch, and works on standard distro kernels. (3) It can run as a regular user (no root). In fact, the mechanism can even make it possible to run the &lt;a href=&#34;https://linuxpiter.com/en/materials/2478&#34;&gt;Docker daemon without root&lt;/a&gt; by removing the root requirement to network containers. (4) It doesn&amp;rsquo;t require any changes to the applications whatsoever and works for any type of application &amp;ndash; from WebSphere ND and SAP to custom C apps to statically linked Golang apps. The only requirement at this point is Linux/x86.&lt;/p&gt;
&lt;h3 id=&#34;decoupling-services-from-their-references&#34;&gt;Decoupling services from their references&lt;/h3&gt;
&lt;p&gt;AppSwitch is built on the fundamental premise that applications should be decoupled from their references. The identity of applications is traditionally derived from the identity of the host on which they run. However, applications and hosts are very different objects that need to be referenced independently. Detailed discussion around this topic along with a conceptual foundation of AppSwitch is presented in this &lt;a href=&#34;http://cn.arxiv.org/abs/1711.02294&#34;&gt;research paper&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The central AppSwitch construct that achieves the decoupling between service objects and their identities is the &lt;em&gt;service reference&lt;/em&gt; (&lt;em&gt;reference&lt;/em&gt;, for short). AppSwitch implements service references based on the API instrumentation mechanism outlined above. A service reference consists of an IP:port pair (and optionally a DNS name) and a label-selector that selects the service represented by the reference and the clients to which this reference applies. A reference supports a few key properties. (1) It can be named independently of the name of the object it refers to. That is, a service may be listening on an IP and port but a reference allows that service to be reached on any other IP and port chosen by the user. This is what allows AppSwitch to run traditional applications, captured from their source environments with static IP configurations, on Kubernetes by providing them with the necessary IP addresses and ports regardless of the target network environment. (2) It remains unchanged even if the location of the target service changes. A reference automatically redirects itself as its label-selector now resolves to the new instance of the service. (3) Most important for this discussion, a reference remains valid even as the target service is coming up.&lt;/p&gt;
&lt;p&gt;To facilitate discovering services that can be accessed through service references, AppSwitch provides an &lt;em&gt;auto-curated service registry&lt;/em&gt;. The registry is automatically kept up to date as services come and go across the cluster based on the network API that AppSwitch tracks. Each entry in the registry consists of the IP and port where the respective service is bound. Along with that, it includes a set of labels indicating the application to which this service belongs, the IP and port that the application passed through the socket API when creating the service, the IP and port where AppSwitch actually bound the service on the underlying host on behalf of the application, etc. In addition, applications created under AppSwitch carry a set of labels passed by the user that describe the application together with a few default system labels indicating the user that created the application and the host where the application is running, etc. These labels are all available to be expressed in the label-selector carried by a service reference. A service in the registry can be made accessible to clients by creating a service reference. A client would then be able to reach the service at the reference&amp;rsquo;s name (IP:port). Now let&amp;rsquo;s look at how AppSwitch guarantees that the reference remains valid even when the target service has not yet come up.&lt;/p&gt;
&lt;h3 id=&#34;non-blocking-requests&#34;&gt;Non-blocking requests&lt;/h3&gt;
&lt;p&gt;AppSwitch leverages the semantics of the BSD socket API to ensure that service references appear valid from the perspective of clients as corresponding services come up. When a client makes a blocking connect call to another service that has not yet come up, AppSwitch blocks the call for a certain time waiting for the target service to become live. Since it is known that the target service is a part of the application and is expected to come up shortly, making the client block rather than returning an error such as &lt;code&gt;ECONNREFUSED&lt;/code&gt; prevents the application from failing to start. If the service doesn&amp;rsquo;t come up in time, an error is returned to the application so that framework-level mechanisms like the Kubernetes crash-loop can kick in.&lt;/p&gt;
&lt;p&gt;If the client request is marked as non-blocking, AppSwitch handles that by returning &lt;code&gt;EAGAIN&lt;/code&gt; to inform the application to retry rather than give up. Once again, that is in-line with the semantics of socket API and prevents failures due to startup races. AppSwitch essentially enables the retry logic already built into applications in support of the BSD socket API to be transparently repurposed for dependency ordering.&lt;/p&gt;
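&lt;p&gt;To make these semantics concrete, here is a minimal sketch (in Python, not AppSwitch code) of the connect-and-retry behavior that the BSD socket API already encourages in clients &amp;ndash; the very logic that AppSwitch transparently repurposes for dependency ordering. The function name, error set, and timeouts are illustrative:&lt;/p&gt;

```python
import errno
import socket
import time

def connect_with_retry(addr, timeout=60.0, interval=2.0):
    """Retry a TCP connect until the dependency is live or we give up.

    Many applications carry logic like this around the socket API;
    AppSwitch's approach lets the platform absorb the startup race
    instead of each client reimplementing the loop.
    """
    deadline = time.monotonic() + timeout
    while True:
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        try:
            sock.connect(addr)
            return sock  # dependency is up and accepting connections
        except OSError as e:
            sock.close()
            # Not-ready-yet errors are retried; anything else is real.
            if e.errno not in (errno.ECONNREFUSED, errno.EAGAIN, errno.ETIMEDOUT):
                raise
            if time.monotonic() >= deadline:
                raise
            time.sleep(interval)
```

&lt;p&gt;A client using this helper keeps retrying while the server side is still initializing, which is exactly the window AppSwitch covers on the application&amp;rsquo;s behalf.&lt;/p&gt;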
&lt;h3 id=&#34;application-timeouts&#34;&gt;Application timeouts&lt;/h3&gt;
&lt;p&gt;What if the application times out based on its own internal timer? Truth be told, AppSwitch can also fake the application&amp;rsquo;s perception of time if wanted, but that would be overstepping and actually unnecessary. The application decides and knows best how long it should wait, and it&amp;rsquo;s not appropriate for AppSwitch to mess with that. Application timeouts are conservatively long, and if the target service still hasn&amp;rsquo;t come up in time, it is unlikely to be a dependency ordering issue. There must be something else going on that should not be masked.&lt;/p&gt;
&lt;h3 id=&#34;wildcard-service-references-for-sidecar-dependency&#34;&gt;Wildcard service references for sidecar dependency&lt;/h3&gt;
&lt;p&gt;Service references can be used to address the Istio sidecar dependency issue mentioned earlier. AppSwitch allows the IP:port specified as part of a service reference to be a wildcard. That is, the service reference IP address can be a netmask indicating the IP address range to be captured. If the label selector of the service reference points to the sidecar service, then all outgoing connections of any application for which this service reference is applied, will be transparently redirected to the sidecar. And of course, the service reference remains valid while sidecar is still coming up and the race is removed.&lt;/p&gt;
&lt;p&gt;Using service references for sidecar dependency ordering also implicitly redirects an application&amp;rsquo;s connections to the sidecar without requiring iptables and the attendant privilege issues. Essentially it works as if the application is directly making connections to the sidecar rather than the target destination, leaving the sidecar in charge of what to do. AppSwitch would interject metadata about the original destination etc. into the data stream of the connection using the proxy protocol that the sidecar could decode before passing the connection through to the application. Some of these details were discussed &lt;a href=&#34;/v1.1/blog/2018/delayering-istio/&#34;&gt;here&lt;/a&gt;. That takes care of outbound connections, but what about incoming connections? With all services and their sidecars running under AppSwitch, any incoming connections that would have come from remote nodes would be redirected to their respective remote sidecars. So nothing special needs to be done for incoming connections.&lt;/p&gt;
&lt;h2 id=&#34;summary&#34;&gt;Summary&lt;/h2&gt;
&lt;p&gt;Dependency ordering is a pesky problem. This is mostly due to a lack of access to fine-grained application-level events around inter-service interactions. Addressing this problem would have normally required applications to implement their own internal logic. But AppSwitch allows those internal application events to be instrumented without requiring application changes. AppSwitch then leverages the ubiquitous support for the BSD socket API to sidestep the requirement of ordering dependencies.&lt;/p&gt;
&lt;h2 id=&#34;acknowledgements&#34;&gt;Acknowledgements&lt;/h2&gt;
&lt;p&gt;Thanks to Eric Herness and team for their insights and support with IBM WebSphere and BPM products as we modernized them onto the Kubernetes platform and to Mandar Jog, Martin Taillefer and Shriram Rajagopalan for reviewing early drafts of this blog.&lt;/p&gt;</description><pubDate>Mon, 14 Jan 2019 00:00:00 +0000</pubDate><link>/v1.1/blog/2019/appswitch/</link><author>Dinesh Subhraveti (AppOrbit and Columbia University)</author><guid isPermaLink="true">/v1.1/blog/2019/appswitch/</guid><category>appswitch</category><category>performance</category></item><item><title>Deploy a Custom Ingress Gateway Using Cert-Manager</title><description>
&lt;p&gt;This post provides instructions to manually create a custom ingress &lt;a href=&#34;/v1.1/docs/reference/config/networking/v1alpha3/gateway/&#34;&gt;gateway&lt;/a&gt; with automatic provisioning of certificates based on cert-manager.&lt;/p&gt;
&lt;p&gt;A custom ingress gateway can be used, for example, to provision a separate &lt;code&gt;loadbalancer&lt;/code&gt; in order to isolate traffic.&lt;/p&gt;
&lt;h2 id=&#34;before-you-begin&#34;&gt;Before you begin&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Setup Istio by following the instructions in the
&lt;a href=&#34;/v1.1/docs/setup/&#34;&gt;Installation guide&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Setup &lt;code&gt;cert-manager&lt;/code&gt; with helm &lt;a href=&#34;https://github.com/helm/charts/tree/master/stable/cert-manager#installing-the-chart&#34;&gt;chart&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;We will use &lt;code&gt;demo.mydemo.com&lt;/code&gt; for our example;
it must be resolvable through your DNS&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;configuring-the-custom-ingress-gateway&#34;&gt;Configuring the custom ingress gateway&lt;/h2&gt;
&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Check if &lt;a href=&#34;https://github.com/helm/charts/tree/master/stable/cert-manager&#34;&gt;cert-manager&lt;/a&gt; was installed using Helm with the following command:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ helm ls
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The output should be similar to the example below and show cert-manager with a &lt;code&gt;STATUS&lt;/code&gt; of &lt;code&gt;DEPLOYED&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-plain&#39; data-expandlinks=&#39;true&#39; &gt;NAME   REVISION  UPDATED                   STATUS    CHART                      APP VERSION   NAMESPACE
istio  1         Thu Oct 11 13:34:24 2018  DEPLOYED  istio-1.0.X                1.0.X         istio-system
cert   1         Wed Oct 24 14:08:36 2018  DEPLOYED  cert-manager-v0.6.0-dev.2  v0.6.0-dev.2  istio-system
&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;To create the cluster&amp;rsquo;s issuer, apply the following configuration:&lt;/p&gt;
&lt;div&gt;
&lt;aside class=&#34;callout tip&#34;&gt;
&lt;div class=&#34;type&#34;&gt;&lt;svg class=&#34;large-icon&#34;&gt;&lt;use xlink:href=&#34;/v1.1/img/icons.svg#callout-tip&#34;/&gt;&lt;/svg&gt;&lt;/div&gt;
&lt;div class=&#34;content&#34;&gt;Change the cluster&amp;rsquo;s &lt;a href=&#34;https://cert-manager.readthedocs.io/en/latest/reference/issuers.html#issuers&#34;&gt;issuer&lt;/a&gt; provider with your own configuration values. The example uses the values under &lt;code&gt;route53&lt;/code&gt;.&lt;/div&gt;
&lt;/aside&gt;
&lt;/div&gt;
&lt;pre&gt;&lt;code class=&#39;language-yaml&#39; data-expandlinks=&#39;true&#39; &gt;apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
  name: letsencrypt-demo
  namespace: kube-system
spec:
  acme:
    # The ACME server URL
    server: https://acme-v02.api.letsencrypt.org/directory
    # Email address used for ACME registration
    email: &amp;lt;REDACTED&amp;gt;
    # Name of a secret used to store the ACME account private key
    privateKeySecretRef:
      name: letsencrypt-demo
    dns01:
      # Here we define a list of DNS-01 providers that can solve DNS challenges
      providers:
      - name: your-dns
        route53:
          accessKeyID: &amp;lt;REDACTED&amp;gt;
          region: eu-central-1
          secretAccessKeySecretRef:
            name: prod-route53-credentials-secret
            key: secret-access-key
&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If you use the &lt;code&gt;route53&lt;/code&gt; &lt;a href=&#34;https://cert-manager.readthedocs.io/en/latest/tasks/acme/configuring-dns01/route53.html&#34;&gt;provider&lt;/a&gt;, you must provide a secret to perform DNS ACME Validation. To create the secret, apply the following configuration file:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-yaml&#39; data-expandlinks=&#39;true&#39; &gt;apiVersion: v1
kind: Secret
metadata:
  name: prod-route53-credentials-secret
type: Opaque
data:
  secret-access-key: &amp;lt;REDACTED BASE64&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create your own certificate:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-yaml&#39; data-expandlinks=&#39;true&#39; &gt;apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
  name: demo-certificate
  namespace: istio-system
spec:
  acme:
    config:
    - dns01:
        provider: your-dns
      domains:
      - &amp;#39;*.mydemo.com&amp;#39;
  commonName: &amp;#39;*.mydemo.com&amp;#39;
  dnsNames:
  - &amp;#39;*.mydemo.com&amp;#39;
  issuerRef:
    kind: ClusterIssuer
    name: letsencrypt-demo
  secretName: istio-customingressgateway-certs
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Make a note of the value of &lt;code&gt;secretName&lt;/code&gt; since a future step requires it.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;To scale automatically, declare a new horizontal pod autoscaler with the following configuration:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-yaml&#39; data-expandlinks=&#39;true&#39; &gt;apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: my-ingressgateway
  namespace: istio-system
spec:
  maxReplicas: 5
  minReplicas: 1
  scaleTargetRef:
    apiVersion: apps/v1beta1
    kind: Deployment
    name: my-ingressgateway
  targetCPUUtilizationPercentage: 80
status:
  currentCPUUtilizationPercentage: 0
  currentReplicas: 1
  desiredReplicas: 1
&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Apply your deployment with the declaration provided in the &lt;a href=&#34;/v1.1/blog/2019/custom-ingress-gateway/deployment-custom-ingress.yaml&#34;&gt;yaml definition&lt;/a&gt;&lt;/p&gt;
&lt;div&gt;
&lt;aside class=&#34;callout tip&#34;&gt;
&lt;div class=&#34;type&#34;&gt;&lt;svg class=&#34;large-icon&#34;&gt;&lt;use xlink:href=&#34;/v1.1/img/icons.svg#callout-tip&#34;/&gt;&lt;/svg&gt;&lt;/div&gt;
&lt;div class=&#34;content&#34;&gt;The annotations used, for example &lt;code&gt;aws-load-balancer-type&lt;/code&gt;, only apply for AWS.&lt;/div&gt;
&lt;/aside&gt;
&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create your service:&lt;/p&gt;
&lt;div&gt;
&lt;aside class=&#34;callout warning&#34;&gt;
&lt;div class=&#34;type&#34;&gt;
&lt;svg class=&#34;large-icon&#34;&gt;&lt;use xlink:href=&#34;/v1.1/img/icons.svg#callout-warning&#34;/&gt;&lt;/svg&gt;
&lt;/div&gt;
&lt;div class=&#34;content&#34;&gt;The &lt;code&gt;NodePort&lt;/code&gt; used needs to be an available port.&lt;/div&gt;
&lt;/aside&gt;
&lt;/div&gt;
&lt;pre&gt;&lt;code class=&#39;language-yaml&#39; data-expandlinks=&#39;true&#39; &gt;apiVersion: v1
kind: Service
metadata:
  name: my-ingressgateway
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
  labels:
    app: my-ingressgateway
    istio: my-ingressgateway
spec:
  type: LoadBalancer
  selector:
    app: my-ingressgateway
    istio: my-ingressgateway
  ports:
  -
    name: http2
    nodePort: 32380
    port: 80
    targetPort: 80
  -
    name: https
    nodePort: 32390
    port: 443
  -
    name: tcp
    nodePort: 32400
    port: 31400
&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create your Istio custom gateway configuration object:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-yaml&#39; data-expandlinks=&#39;true&#39; &gt;apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  annotations:
  name: istio-custom-gateway
  namespace: default
spec:
  selector:
    istio: my-ingressgateway
  servers:
  - hosts:
    - &amp;#39;*.mydemo.com&amp;#39;
    port:
      name: http
      number: 80
      protocol: HTTP
    tls:
      httpsRedirect: true
  - hosts:
    - &amp;#39;*.mydemo.com&amp;#39;
    port:
      name: https
      number: 443
      protocol: HTTPS
    tls:
      mode: SIMPLE
      privateKey: /etc/istio/ingressgateway-certs/tls.key
      serverCertificate: /etc/istio/ingressgateway-certs/tls.crt
&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Link your &lt;code&gt;istio-custom-gateway&lt;/code&gt; with your &lt;code&gt;VirtualService&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-yaml&#39; data-expandlinks=&#39;true&#39; &gt;apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-virtualservice
spec:
  hosts:
  - &amp;#34;demo.mydemo.com&amp;#34;
  gateways:
  - istio-custom-gateway
  http:
  - route:
    - destination:
        host: my-demoapp
&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Verify that the correct certificate is returned by the server and that it is successfully verified (&lt;em&gt;SSL certificate verify ok&lt;/em&gt; is printed):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ curl -v https://demo.mydemo.com
Server certificate:
SSL certificate verify ok.
&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;strong&gt;Congratulations!&lt;/strong&gt; You can now use your custom &lt;code&gt;istio-custom-gateway&lt;/code&gt; &lt;a href=&#34;/v1.1/docs/reference/config/networking/v1alpha3/gateway/&#34;&gt;gateway&lt;/a&gt; configuration object.&lt;/p&gt;</description><pubDate>Thu, 10 Jan 2019 00:00:00 +0000</pubDate><link>/v1.1/blog/2019/custom-ingress-gateway/</link><author>Julien Senon</author><guid isPermaLink="true">/v1.1/blog/2019/custom-ingress-gateway/</guid><category>ingress</category><category>traffic-management</category></item><item><title>Announcing discuss.istio.io</title><description>&lt;p&gt;We in the Istio community have been working to find the right medium for users to engage with other members of the community &amp;ndash; to ask questions,
to get help from other users, and to engage with developers working on the project.&lt;/p&gt;
&lt;p&gt;We&amp;rsquo;ve tried several different avenues, but each has had some downsides. RocketChat was our most recent endeavor, but the lack of certain
features (for example, threading) meant it wasn&amp;rsquo;t ideal for any longer discussions around a single issue. It also led to a dilemma for
some users &amp;ndash; when should I email istio-users@googlegroups.com and when should I use RocketChat?&lt;/p&gt;
&lt;p&gt;We think we&amp;rsquo;ve found the right balance of features in a single platform, and we&amp;rsquo;re happy to announce
&lt;a href=&#34;https://discuss.istio.io&#34;&gt;discuss.istio.io&lt;/a&gt;. It&amp;rsquo;s a full-featured forum where we will have discussions about Istio from here on out.
It will allow you to ask a question and get threaded replies! As a real bonus, you can use your GitHub identity.&lt;/p&gt;
&lt;p&gt;If you prefer emails, you can configure it to send emails just like Google groups did.&lt;/p&gt;
&lt;p&gt;We will be marking our Google groups &amp;ldquo;read only&amp;rdquo; so that the content remains, but we ask you to send further questions over to
&lt;a href=&#34;https://discuss.istio.io&#34;&gt;discuss.istio.io&lt;/a&gt;. If you have any outstanding questions or discussions in the groups, please move the conversation over.&lt;/p&gt;
&lt;p&gt;Happy meshing!&lt;/p&gt;</description><pubDate>Thu, 10 Jan 2019 00:00:00 +0000</pubDate><link>/v1.1/blog/2019/announcing-discuss.istio.io/</link><author>The Istio Team</author><guid isPermaLink="true">/v1.1/blog/2019/announcing-discuss.istio.io/</guid></item><item><title>Announcing Istio 1.0.5</title><description>&lt;p&gt;We&amp;rsquo;re pleased to announce the availability of Istio 1.0.5. Please see below for what&amp;rsquo;s changed.&lt;/p&gt;
&lt;div class=&#34;call-to-action&#34;&gt;
&lt;button class=&#34;btn update-notice&#34;
data-title=&#39;Update Notice&#39;
data-downloadhref=&#34;https://github.com/istio/istio/releases/tag/1.0.5&#34;
data-updateadvice=&#39;Before you download 1.0.5, you should know that there&amp;#39;s a newer patch release with the latest bug fixes and perf improvements.&#39;
data-updatebutton=&#39;LEARN ABOUT ISTIO 1.0.8&#39;
data-updatehref=&#34;/v1.1/about/notes/1.0.8&#34;&gt;
DOWNLOAD 1.0.5
&lt;/button&gt;
&lt;a class=&#34;btn&#34; href=&#34;https://archive.istio.io/v1.0&#34;&gt;1.0.5 DOCS&lt;/a&gt;
&lt;a class=&#34;btn&#34; href=&#34;https://github.com/istio/istio/compare/1.0.4...1.0.5&#34;&gt;CHANGES IN 1.0.5&lt;/a&gt;
&lt;/div&gt;
&lt;h2 id=&#34;general&#34;&gt;General&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Disabled the precondition cache in the &lt;code&gt;istio-policy&lt;/code&gt; service as it led to invalid results. The
cache will be reintroduced in a later release.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Mixer now only generates spans when there is an enabled &lt;code&gt;tracespan&lt;/code&gt; adapter, resulting in lower CPU overhead in normal cases.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Fixed a problem that could lead Pilot to hang.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;</description><pubDate>Thu, 20 Dec 2018 00:00:00 +0000</pubDate><link>/v1.1/blog/2018/announcing-1.0.5/</link><author>The Istio Team</author><guid isPermaLink="true">/v1.1/blog/2018/announcing-1.0.5/</guid></item><item><title>Incremental Istio Part 1, Traffic Management</title><description>
&lt;p&gt;Traffic management is one of the critical benefits provided by Istio. At the heart of Istio&amp;rsquo;s traffic management is the ability to decouple traffic flow and infrastructure scaling. This lets you control your traffic in ways that aren&amp;rsquo;t possible without a service mesh like Istio.&lt;/p&gt;
&lt;p&gt;For example, let&amp;rsquo;s say you want to execute a &lt;a href=&#34;https://martinfowler.com/bliki/CanaryRelease.html&#34;&gt;canary deployment&lt;/a&gt;. With Istio, you can specify that &lt;strong&gt;v1&lt;/strong&gt; of a service receives 90% of incoming traffic, while &lt;strong&gt;v2&lt;/strong&gt; of that service only receives 10%. With standard Kubernetes deployments, the only way to achieve this is to manually control the number of available Pods for each version, for example 9 Pods running v1 and 1 Pod running v2. This type of manual control is hard to implement, and over time may have trouble scaling. For more information, check out &lt;a href=&#34;/v1.1/blog/2017/0.1-canary/&#34;&gt;Canary Deployments using Istio&lt;/a&gt;.&lt;/p&gt;
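&lt;p&gt;For reference, such a 90/10 split is expressed in Istio with a &lt;code&gt;VirtualService&lt;/code&gt; carrying weighted routes; the service and subset names below are placeholders, and the subsets are assumed to be defined in a corresponding &lt;code&gt;DestinationRule&lt;/code&gt;:&lt;/p&gt;

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-service
spec:
  hosts:
  - my-service
  http:
  - route:
    # 90% of requests go to the stable version...
    - destination:
        host: my-service
        subset: v1
      weight: 90
    # ...and 10% to the canary.
    - destination:
        host: my-service
        subset: v2
      weight: 10
```

&lt;p&gt;Because the weights live in configuration rather than in Pod counts, the split holds regardless of how many replicas back each version.&lt;/p&gt;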
&lt;p&gt;The same issue exists when deploying updates to existing services. While you can update deployments with Kubernetes, it requires replacing v1 Pods with v2 Pods. Using Istio, you can deploy v2 of your service and use built-in traffic management mechanisms to shift traffic to your updated services at a network level, then remove the v1 Pods.&lt;/p&gt;
&lt;p&gt;In addition to canary deployments and general traffic shifting, Istio also gives you the ability to implement dynamic request routing (based on HTTP headers), failure recovery, retries, circuit breakers, and fault injection. For more information, check out the &lt;a href=&#34;/v1.1/docs/concepts/traffic-management/&#34;&gt;Traffic Management documentation&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;This post walks through a technique that highlights a particularly useful way that you can implement Istio incrementally &amp;ndash; in this case, only the traffic management features &amp;ndash; without having to individually update each of your Pods.&lt;/p&gt;
&lt;h2 id=&#34;setup-why-implement-istio-traffic-management-features&#34;&gt;Setup: why implement Istio traffic management features?&lt;/h2&gt;
&lt;p&gt;Of course, the first question is: Why would you want to do this?&lt;/p&gt;
&lt;p&gt;If you&amp;rsquo;re part of one of the many organizations out there that have a large cluster with lots of teams deploying, the answer is pretty clear. Let&amp;rsquo;s say Team A is getting started with Istio and wants to start some canary deployments on Service A, but Team B hasn&amp;rsquo;t started using Istio, so they don&amp;rsquo;t have sidecars deployed.&lt;/p&gt;
&lt;p&gt;With Istio, Team A can still implement their canaries by having Service B call Service A through Istio&amp;rsquo;s ingress gateway.&lt;/p&gt;
&lt;h2 id=&#34;background-traffic-routing-in-an-istio-mesh&#34;&gt;Background: traffic routing in an Istio mesh&lt;/h2&gt;
&lt;p&gt;But how can you use Istios traffic management capabilities without updating each of your applications Pods to include the Istio sidecar? Before answering that question, lets take a quick high-level look at how traffic enters an Istio mesh and how its routed.&lt;/p&gt;
&lt;p&gt;Pods that are part of the Istio mesh contain a sidecar proxy that is responsible for mediating all inbound and outbound traffic to the Pod. Within an Istio mesh, Pilot is responsible for converting high-level routing rules into configurations and propagating them to the sidecar proxies. That means when services communicate with one another, their routing decisions are determined from the client side.&lt;/p&gt;
&lt;p&gt;Lets say you have two services that are part of the Istio mesh, Service A and Service B. When A wants to communicate with B, the sidecar proxy of Pod A is responsible for directing traffic to Service B. For example, if you wanted to split traffic &lt;sup&gt;50&lt;/sup&gt;&amp;frasl;&lt;sub&gt;50&lt;/sub&gt; across Service B v1 and v2, the traffic would flow as follows:&lt;/p&gt;
&lt;figure style=&#34;width:60%&#34;&gt;
&lt;div class=&#34;wrapper-with-intrinsic-ratio&#34; style=&#34;padding-bottom:42.66666666666667%&#34;&gt;
&lt;a data-skipendnotes=&#34;true&#34; href=&#34;/v1.1/blog/2018/incremental-traffic-management/./fifty-fifty.png&#34; title=&#34;50/50 Traffic Split&#34;&gt;
&lt;img class=&#34;element-to-stretch&#34; src=&#34;/v1.1/blog/2018/incremental-traffic-management/./fifty-fifty.png&#34; alt=&#34;50/50 Traffic Split&#34; /&gt;
&lt;/a&gt;
&lt;/div&gt;
&lt;figcaption&gt;50/50 Traffic Split&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p&gt;If Services A and B are not part of the Istio mesh, there is no sidecar proxy that knows how to route traffic to different versions of Service B. In that case, you need to use another approach to get traffic from Service A to Service B, following the &lt;sup&gt;50&lt;/sup&gt;&amp;frasl;&lt;sub&gt;50&lt;/sub&gt; rules you&amp;rsquo;ve set up.&lt;/p&gt;
&lt;p&gt;Fortunately, a standard Istio deployment already includes a &lt;a href=&#34;/v1.1/docs/concepts/traffic-management/#gateways&#34;&gt;Gateway&lt;/a&gt; that specifically deals with ingress traffic outside of the Istio mesh. This Gateway is used to allow ingress traffic from outside the cluster via an external load balancer, or to allow ingress traffic from within the Kubernetes cluster but outside the service mesh. It can be configured to proxy incoming ingress traffic to the appropriate Pods, even if they dont have a sidecar proxy. While this approach allows you to leverage Istios traffic management features, it does mean that traffic going through the ingress gateway will incur an extra hop.&lt;/p&gt;
&lt;figure style=&#34;width:60%&#34;&gt;
&lt;div class=&#34;wrapper-with-intrinsic-ratio&#34; style=&#34;padding-bottom:54.83870967741935%&#34;&gt;
&lt;a data-skipendnotes=&#34;true&#34; href=&#34;/v1.1/blog/2018/incremental-traffic-management/./fifty-fifty-ingress-gateway.png&#34; title=&#34;50/50 Traffic Split using Ingress Gateway&#34;&gt;
&lt;img class=&#34;element-to-stretch&#34; src=&#34;/v1.1/blog/2018/incremental-traffic-management/./fifty-fifty-ingress-gateway.png&#34; alt=&#34;50/50 Traffic Split using Ingress Gateway&#34; /&gt;
&lt;/a&gt;
&lt;/div&gt;
&lt;figcaption&gt;50/50 Traffic Split using Ingress Gateway&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;h2 id=&#34;in-action-traffic-routing-with-istio&#34;&gt;In action: traffic routing with Istio&lt;/h2&gt;
&lt;p&gt;A simple way to see this type of approach in action is to first set up your Kubernetes environment using the &lt;a href=&#34;/v1.1/docs/setup/kubernetes/prepare/platform-setup/&#34;&gt;Platform Setup&lt;/a&gt; instructions, and then install the &lt;strong&gt;minimal&lt;/strong&gt; Istio profile using &lt;a href=&#34;/v1.1/docs/setup/kubernetes/install/helm/&#34;&gt;Helm&lt;/a&gt;, including only the traffic management components (ingress gateway, egress gateway, Pilot). The following example uses &lt;a href=&#34;https://cloud.google.com/gke&#34;&gt;Google Kubernetes Engine&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;First, set up and configure &lt;a href=&#34;/v1.1/docs/setup/kubernetes/prepare/platform-setup/gke/&#34;&gt;GKE&lt;/a&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ gcloud container clusters create istio-inc --zone us-central1-f
$ gcloud container clusters get-credentials istio-inc
$ kubectl create clusterrolebinding cluster-admin-binding \
--clusterrole=cluster-admin \
--user=$(gcloud config get-value core/account)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Next, &lt;a href=&#34;https://helm.sh/docs/securing_installation/&#34;&gt;install Helm&lt;/a&gt; and &lt;a href=&#34;/v1.1/docs/setup/kubernetes/install/helm/&#34;&gt;generate a minimal Istio install&lt;/a&gt; &amp;ndash; only traffic management components:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ helm template install/kubernetes/helm/istio \
--name istio \
--namespace istio-system \
--set security.enabled=false \
--set galley.enabled=false \
--set sidecarInjectorWebhook.enabled=false \
--set mixer.enabled=false \
--set prometheus.enabled=false \
--set pilot.sidecar=false &amp;gt; istio-minimal.yaml
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Then create the &lt;code&gt;istio-system&lt;/code&gt; namespace and deploy Istio:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ kubectl create namespace istio-system
$ kubectl apply -f istio-minimal.yaml
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Next, deploy the Bookinfo sample without the Istio sidecar containers:&lt;/p&gt;
&lt;div&gt;&lt;a data-skipendnotes=&#39;true&#39; style=&#39;display:none&#39; href=&#39;https://raw.githubusercontent.com/istio/istio/release-1.1/samples/bookinfo/platform/kube/bookinfo.yaml&#39;&gt;Zip&lt;/a&gt;&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ kubectl apply -f @samples/bookinfo/platform/kube/bookinfo.yaml@
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;Now, configure a new Gateway that allows access to the reviews service from outside the Istio mesh, a new &lt;code&gt;VirtualService&lt;/code&gt; that splits traffic evenly between v1 and v2 of the reviews service, and a new &lt;code&gt;DestinationRule&lt;/code&gt; that maps destination subsets to service versions:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ cat &amp;lt;&amp;lt;EOF | kubectl apply -f -
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: reviews-gateway
spec:
selector:
istio: ingressgateway # use istio default controller
servers:
- port:
number: 80
name: http
protocol: HTTP
hosts:
- &amp;#34;*&amp;#34;
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: reviews
spec:
hosts:
- &amp;#34;*&amp;#34;
gateways:
- reviews-gateway
http:
- match:
- uri:
prefix: /reviews
route:
- destination:
host: reviews
subset: v1
weight: 50
- destination:
host: reviews
subset: v2
weight: 50
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: reviews
spec:
host: reviews
subsets:
- name: v1
labels:
version: v1
- name: v2
labels:
version: v2
- name: v3
labels:
version: v3
EOF
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Finally, deploy a pod that you can use for testing with &lt;code&gt;curl&lt;/code&gt; (and without the Istio sidecar container):&lt;/p&gt;
&lt;div&gt;&lt;a data-skipendnotes=&#39;true&#39; style=&#39;display:none&#39; href=&#39;https://raw.githubusercontent.com/istio/istio/release-1.1/samples/sleep/sleep.yaml&#39;&gt;Zip&lt;/a&gt;&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ kubectl apply -f @samples/sleep/sleep.yaml@
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;h2 id=&#34;testing-your-deployment&#34;&gt;Testing your deployment&lt;/h2&gt;
&lt;p&gt;Now, you can test different behaviors by running &lt;code&gt;curl&lt;/code&gt; commands from the sleep Pod.&lt;/p&gt;
&lt;p&gt;The first example is to issue requests to the reviews service using standard Kubernetes service DNS behavior (&lt;strong&gt;note&lt;/strong&gt;: &lt;a href=&#34;https://stedolan.github.io/jq/&#34;&gt;&lt;code&gt;jq&lt;/code&gt;&lt;/a&gt; is used in the examples below to filter the output from &lt;code&gt;curl&lt;/code&gt;):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ export SLEEP_POD=$(kubectl get pod -l app=sleep \
-o jsonpath={.items..metadata.name})
$ for i in `seq 3`; do \
kubectl exec -it $SLEEP_POD curl http://reviews:9080/reviews/0 | \
jq &amp;#39;.reviews|.[]|.rating?&amp;#39;; \
done
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code class=&#39;language-json&#39; data-expandlinks=&#39;true&#39; &gt;{
&amp;#34;stars&amp;#34;: 5,
&amp;#34;color&amp;#34;: &amp;#34;black&amp;#34;
}
{
&amp;#34;stars&amp;#34;: 4,
&amp;#34;color&amp;#34;: &amp;#34;black&amp;#34;
}
null
null
{
&amp;#34;stars&amp;#34;: 5,
&amp;#34;color&amp;#34;: &amp;#34;red&amp;#34;
}
{
&amp;#34;stars&amp;#34;: 4,
&amp;#34;color&amp;#34;: &amp;#34;red&amp;#34;
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Notice how you get responses from all three versions of the reviews service (&lt;code&gt;null&lt;/code&gt; is from reviews v1, which doesn&amp;rsquo;t have ratings), rather than the even split across v1 and v2. This is expected behavior because the &lt;code&gt;curl&lt;/code&gt; command is using Kubernetes service load balancing across all three versions of the reviews service. To get the &lt;sup&gt;50&lt;/sup&gt;&amp;frasl;&lt;sub&gt;50&lt;/sub&gt; split between v1 and v2, you need to access the service via the ingress Gateway:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ for i in `seq 4`; do \
kubectl exec -it $SLEEP_POD curl http://istio-ingressgateway.istio-system/reviews/0 | \
jq &amp;#39;.reviews|.[]|.rating?&amp;#39;; \
done
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code class=&#39;language-json&#39; data-expandlinks=&#39;true&#39; &gt;{
&amp;#34;stars&amp;#34;: 5,
&amp;#34;color&amp;#34;: &amp;#34;black&amp;#34;
}
{
&amp;#34;stars&amp;#34;: 4,
&amp;#34;color&amp;#34;: &amp;#34;black&amp;#34;
}
null
null
{
&amp;#34;stars&amp;#34;: 5,
&amp;#34;color&amp;#34;: &amp;#34;black&amp;#34;
}
{
&amp;#34;stars&amp;#34;: 4,
&amp;#34;color&amp;#34;: &amp;#34;black&amp;#34;
}
null
null
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Mission accomplished! This post showed how to deploy a minimal installation of Istio that only contains the traffic management components (Pilot, ingress Gateway), and then use those components to direct traffic to specific versions of the reviews service. And it wasn&amp;rsquo;t necessary to deploy the Istio sidecar proxy to gain these capabilities, so there was little to no interruption of existing workloads or applications.&lt;/p&gt;
&lt;p&gt;Using the built-in ingress gateway (along with some &lt;code&gt;VirtualService&lt;/code&gt; and &lt;code&gt;DestinationRule&lt;/code&gt; resources) this post showed how you can easily leverage Istios traffic management for cluster-external ingress traffic and cluster-internal service-to-service traffic. This technique is a great example of an incremental approach to adopting Istio, and can be especially useful in real-world cases where Pods are owned by different teams or deployed to different namespaces.&lt;/p&gt;</description><pubDate>Wed, 21 Nov 2018 00:00:00 +0000</pubDate><link>/v1.1/blog/2018/incremental-traffic-management/</link><author>Sandeep Parikh</author><guid isPermaLink="true">/v1.1/blog/2018/incremental-traffic-management/</guid><category>traffic-management</category><category>gateway</category></item><item><title>Announcing Istio 1.0.4</title><description>&lt;p&gt;We&amp;rsquo;re pleased to announce the availability of Istio 1.0.4. Please see below for what&amp;rsquo;s changed.&lt;/p&gt;
&lt;div class=&#34;call-to-action&#34;&gt;
&lt;button class=&#34;btn update-notice&#34;
data-title=&#39;Update Notice&#39;
data-downloadhref=&#34;https://github.com/istio/istio/releases/tag/1.0.4&#34;
data-updateadvice=&#39;Before you download 1.0.4, you should know that there&amp;#39;s a newer patch release with the latest bug fixes and perf improvements.&#39;
data-updatebutton=&#39;LEARN ABOUT ISTIO 1.0.8&#39;
data-updatehref=&#34;/v1.1/about/notes/1.0.8&#34;&gt;
DOWNLOAD 1.0.4
&lt;/button&gt;
&lt;a class=&#34;btn&#34; href=&#34;https://archive.istio.io/v1.0&#34;&gt;1.0.4 DOCS&lt;/a&gt;
&lt;a class=&#34;btn&#34; href=&#34;https://github.com/istio/istio/compare/1.0.3...1.0.4&#34;&gt;CHANGES IN 1.0.4&lt;/a&gt;
&lt;/div&gt;
&lt;h2 id=&#34;known-issues&#34;&gt;Known issues&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Pilot may deadlock when using &lt;a href=&#34;/v1.1/docs/reference/commands/istioctl/#istioctl-proxy-status&#34;&gt;&lt;code&gt;istioctl proxy-status&lt;/code&gt;&lt;/a&gt; to get proxy synchronization status.
The workaround is to &lt;em&gt;not use&lt;/em&gt; &lt;code&gt;istioctl proxy-status&lt;/code&gt;.
Once Pilot enters a deadlock, it exhibits continuous goroutine growth, eventually running out of memory.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;networking&#34;&gt;Networking&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Fixed the lack of removal of stale endpoints causing &lt;code&gt;503&lt;/code&gt; errors.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Fixed sidecar injection when a pod label contains a &lt;code&gt;/&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;policy-and-telemetry&#34;&gt;Policy and telemetry&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Fixed occasional data corruption problem with out-of-process Mixer adapters leading to incorrect behavior.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Fixed excessive CPU usage by Mixer when waiting for missing CRDs.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;</description><pubDate>Wed, 21 Nov 2018 00:00:00 +0000</pubDate><link>/v1.1/blog/2018/announcing-1.0.4/</link><author>The Istio Team</author><guid isPermaLink="true">/v1.1/blog/2018/announcing-1.0.4/</guid></item><item><title>Consuming External MongoDB Services</title><description>
&lt;p&gt;In the &lt;a href=&#34;/v1.1/blog/2018/egress-tcp/&#34;&gt;Consuming External TCP Services&lt;/a&gt; blog post, I described how external services
can be consumed by in-mesh Istio applications via TCP. In this post, I demonstrate consuming external MongoDB services.
You use the &lt;a href=&#34;/v1.1/docs/examples/bookinfo/&#34;&gt;Istio Bookinfo sample application&lt;/a&gt;, the version in which the book
ratings data is persisted in a MongoDB database. You deploy this database outside the cluster and configure the
&lt;em&gt;ratings&lt;/em&gt; microservice to use it. You will learn multiple options for controlling traffic to external MongoDB services, along with their
pros and cons.&lt;/p&gt;
&lt;h2 id=&#34;bookinfo-with-external-ratings-database&#34;&gt;Bookinfo with external ratings database&lt;/h2&gt;
&lt;p&gt;First, you set up a MongoDB database instance to hold book ratings data outside of your Kubernetes cluster. Then you
modify the &lt;a href=&#34;/v1.1/docs/examples/bookinfo/&#34;&gt;Bookinfo sample application&lt;/a&gt; to use your database.&lt;/p&gt;
&lt;h3 id=&#34;setting-up-the-ratings-database&#34;&gt;Setting up the ratings database&lt;/h3&gt;
&lt;p&gt;For this task you set up an instance of &lt;a href=&#34;https://www.mongodb.com&#34;&gt;MongoDB&lt;/a&gt;. You can use any MongoDB instance; I used
&lt;a href=&#34;https://www.ibm.com/cloud/compose/mongodb&#34;&gt;Compose for MongoDB&lt;/a&gt;.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Set an environment variable for the password of your &lt;code&gt;admin&lt;/code&gt; user. To prevent the password from being preserved in
the Bash history, remove the command from the history immediately after running it, using
&lt;a href=&#34;https://www.gnu.org/software/bash/manual/html_node/Bash-History-Builtins.html#Bash-History-Builtins&#34;&gt;history -d&lt;/a&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ export MONGO_ADMIN_PASSWORD=&amp;lt;your MongoDB admin password&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Set an environment variable for the password of the new user you will create, namely &lt;code&gt;bookinfo&lt;/code&gt;.
Remove the command from the history using
&lt;a href=&#34;https://www.gnu.org/software/bash/manual/html_node/Bash-History-Builtins.html#Bash-History-Builtins&#34;&gt;history -d&lt;/a&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ export BOOKINFO_PASSWORD=&amp;lt;password&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Set environment variables for your MongoDB service, &lt;code&gt;MONGODB_HOST&lt;/code&gt; and &lt;code&gt;MONGODB_PORT&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create the &lt;code&gt;bookinfo&lt;/code&gt; user:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ cat &amp;lt;&amp;lt;EOF | mongo --ssl --sslAllowInvalidCertificates $MONGODB_HOST:$MONGODB_PORT -u admin -p $MONGO_ADMIN_PASSWORD --authenticationDatabase admin
use test
db.createUser(
{
user: &amp;#34;bookinfo&amp;#34;,
pwd: &amp;#34;$BOOKINFO_PASSWORD&amp;#34;,
roles: [ &amp;#34;read&amp;#34;]
}
);
EOF
&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create a &lt;em&gt;collection&lt;/em&gt; to hold ratings. The following command sets both ratings to &lt;code&gt;1&lt;/code&gt; to provide a visual
clue when your database is used by the Bookinfo &lt;em&gt;ratings&lt;/em&gt; service (the default Bookinfo &lt;em&gt;ratings&lt;/em&gt; are &lt;code&gt;4&lt;/code&gt; and &lt;code&gt;5&lt;/code&gt;).&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ cat &amp;lt;&amp;lt;EOF | mongo --ssl --sslAllowInvalidCertificates $MONGODB_HOST:$MONGODB_PORT -u admin -p $MONGO_ADMIN_PASSWORD --authenticationDatabase admin
use test
db.createCollection(&amp;#34;ratings&amp;#34;);
db.ratings.insert(
[{rating: 1},
{rating: 1}]
);
EOF
&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Check that the &lt;code&gt;bookinfo&lt;/code&gt; user can get ratings:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ cat &amp;lt;&amp;lt;EOF | mongo --ssl --sslAllowInvalidCertificates $MONGODB_HOST:$MONGODB_PORT -u bookinfo -p $BOOKINFO_PASSWORD --authenticationDatabase test
use test
db.ratings.find({});
EOF
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The output should be similar to:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-plain&#39; data-expandlinks=&#39;true&#39; &gt;MongoDB server version: 3.4.10
switched to db test
{ &amp;#34;_id&amp;#34; : ObjectId(&amp;#34;5b7c29efd7596e65b6ed2572&amp;#34;), &amp;#34;rating&amp;#34; : 1 }
{ &amp;#34;_id&amp;#34; : ObjectId(&amp;#34;5b7c29efd7596e65b6ed2573&amp;#34;), &amp;#34;rating&amp;#34; : 1 }
bye
&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;h3 id=&#34;initial-setting-of-bookinfo-application&#34;&gt;Initial setting of Bookinfo application&lt;/h3&gt;
&lt;p&gt;To demonstrate the scenario of using an external database, you start with a Kubernetes cluster with &lt;a href=&#34;/v1.1/docs/setup/kubernetes/install/kubernetes/#installation-steps&#34;&gt;Istio installed&lt;/a&gt;. Then you deploy the
&lt;a href=&#34;/v1.1/docs/examples/bookinfo/&#34;&gt;Istio Bookinfo sample application&lt;/a&gt;, &lt;a href=&#34;/v1.1/docs/examples/bookinfo/#apply-default-destination-rules&#34;&gt;apply the default destination rules&lt;/a&gt;, and
&lt;a href=&#34;/v1.1/docs/tasks/traffic-management/egress/#change-to-the-blocking-by-default-policy&#34;&gt;change Istio to the blocking-egress-by-default policy&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;This application uses the &lt;code&gt;ratings&lt;/code&gt; microservice to fetch book ratings, a number between 1 and 5. The ratings are
displayed as stars for each review. There are several versions of the &lt;code&gt;ratings&lt;/code&gt; microservice. You will deploy the
version that uses &lt;a href=&#34;https://www.mongodb.com&#34;&gt;MongoDB&lt;/a&gt; as the ratings database in the next subsection.&lt;/p&gt;
&lt;p&gt;The example commands in this blog post work with Istio 1.0.&lt;/p&gt;
&lt;p&gt;As a reminder, here is the end-to-end architecture of the application from the
&lt;a href=&#34;/v1.1/docs/examples/bookinfo/&#34;&gt;Bookinfo sample application&lt;/a&gt;.&lt;/p&gt;
&lt;figure style=&#34;width:80%&#34;&gt;
&lt;div class=&#34;wrapper-with-intrinsic-ratio&#34; style=&#34;padding-bottom:59.086918235567985%&#34;&gt;
&lt;a data-skipendnotes=&#34;true&#34; href=&#34;/v1.1/docs/examples/bookinfo/withistio.svg&#34; title=&#34;The original Bookinfo application&#34;&gt;
&lt;img class=&#34;element-to-stretch&#34; src=&#34;/v1.1/docs/examples/bookinfo/withistio.svg&#34; alt=&#34;The original Bookinfo application&#34; /&gt;
&lt;/a&gt;
&lt;/div&gt;
&lt;figcaption&gt;The original Bookinfo application&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;h3 id=&#34;use-the-external-database-in-bookinfo-application&#34;&gt;Use the external database in Bookinfo application&lt;/h3&gt;
&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Deploy the spec of the &lt;em&gt;ratings&lt;/em&gt; microservice that uses a MongoDB database (&lt;em&gt;ratings v2&lt;/em&gt;), while setting the
&lt;code&gt;MONGO_DB_URL&lt;/code&gt; environment variable of the spec:&lt;/p&gt;
&lt;div&gt;&lt;a data-skipendnotes=&#39;true&#39; style=&#39;display:none&#39; href=&#39;https://raw.githubusercontent.com/istio/istio/release-1.1/samples/bookinfo/platform/kube/bookinfo-ratings-v2.yaml&#39;&gt;Zip&lt;/a&gt;&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ kubectl apply -f @samples/bookinfo/platform/kube/bookinfo-ratings-v2.yaml@ --dry-run -o yaml | kubectl set env --local -f - &amp;#34;MONGO_DB_URL=mongodb://bookinfo:$BOOKINFO_PASSWORD@$MONGODB_HOST:$MONGODB_PORT/test?authSource=test&amp;amp;ssl=true&amp;#34; -o yaml | kubectl apply -f -
deployment &amp;#34;ratings-v2&amp;#34; created
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Route all the traffic destined to the &lt;em&gt;reviews&lt;/em&gt; service to its &lt;em&gt;v3&lt;/em&gt; version. You do this to ensure that the
&lt;em&gt;reviews&lt;/em&gt; service always calls the &lt;em&gt;ratings&lt;/em&gt; service. In addition, route all the traffic destined to the &lt;em&gt;ratings&lt;/em&gt;
service to &lt;em&gt;ratings v2&lt;/em&gt; that uses your database.&lt;/p&gt;
&lt;p&gt;Specify the routing for both services above by adding two
&lt;a href=&#34;/v1.1/docs/reference/config/networking/v1alpha3/virtual-service/&#34;&gt;virtual services&lt;/a&gt;. These virtual services are
specified in &lt;code&gt;samples/bookinfo/networking/virtual-service-ratings-db.yaml&lt;/code&gt; of an Istio release archive.
&lt;strong&gt;&lt;em&gt;Important:&lt;/em&gt;&lt;/strong&gt; make sure you
&lt;a href=&#34;/v1.1/docs/examples/bookinfo/#apply-default-destination-rules&#34;&gt;applied the default destination rules&lt;/a&gt; before running the
following command.&lt;/p&gt;
&lt;div&gt;&lt;a data-skipendnotes=&#39;true&#39; style=&#39;display:none&#39; href=&#39;https://raw.githubusercontent.com/istio/istio/release-1.1/samples/bookinfo/networking/virtual-service-ratings-db.yaml&#39;&gt;Zip&lt;/a&gt;&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ kubectl apply -f @samples/bookinfo/networking/virtual-service-ratings-db.yaml@
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;The updated architecture appears below. Note that the blue arrows inside the mesh mark the traffic configured according
to the virtual services we added. According to the virtual services, the traffic is sent to &lt;em&gt;reviews v3&lt;/em&gt; and
&lt;em&gt;ratings v2&lt;/em&gt;.&lt;/p&gt;
&lt;figure style=&#34;width:80%&#34;&gt;
&lt;div class=&#34;wrapper-with-intrinsic-ratio&#34; style=&#34;padding-bottom:59.314858206480224%&#34;&gt;
&lt;a data-skipendnotes=&#34;true&#34; href=&#34;/v1.1/blog/2018/egress-mongo/./bookinfo-ratings-v2-mongodb-external.svg&#34; title=&#34;The Bookinfo application with ratings v2 and an external MongoDB database&#34;&gt;
&lt;img class=&#34;element-to-stretch&#34; src=&#34;/v1.1/blog/2018/egress-mongo/./bookinfo-ratings-v2-mongodb-external.svg&#34; alt=&#34;The Bookinfo application with ratings v2 and an external MongoDB database&#34; /&gt;
&lt;/a&gt;
&lt;/div&gt;
&lt;figcaption&gt;The Bookinfo application with ratings v2 and an external MongoDB database&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p&gt;Note that the MongoDB database is outside the Istio service mesh, or more precisely outside the Kubernetes cluster. The
boundary of the service mesh is marked by a dashed line.&lt;/p&gt;
&lt;h3 id=&#34;access-the-webpage&#34;&gt;Access the webpage&lt;/h3&gt;
&lt;p&gt;Access the webpage of the application, after
&lt;a href=&#34;/v1.1/docs/examples/bookinfo/#determining-the-ingress-ip-and-port&#34;&gt;determining the ingress IP and port&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Since you have not yet configured egress traffic control, access to the MongoDB service is blocked by Istio.
This is why the message &lt;em&gt;&amp;ldquo;Ratings service is currently unavailable&amp;rdquo;&lt;/em&gt; is displayed below each review
instead of the rating stars:&lt;/p&gt;
&lt;figure style=&#34;width:80%&#34;&gt;
&lt;div class=&#34;wrapper-with-intrinsic-ratio&#34; style=&#34;padding-bottom:36.18705035971223%&#34;&gt;
&lt;a data-skipendnotes=&#34;true&#34; href=&#34;/v1.1/blog/2018/egress-mongo/./errorFetchingBookRating.png&#34; title=&#34;The Ratings service error messages&#34;&gt;
&lt;img class=&#34;element-to-stretch&#34; src=&#34;/v1.1/blog/2018/egress-mongo/./errorFetchingBookRating.png&#34; alt=&#34;The Ratings service error messages&#34; /&gt;
&lt;/a&gt;
&lt;/div&gt;
&lt;figcaption&gt;The Ratings service error messages&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p&gt;In the following sections you will configure egress access to the external MongoDB service, using different options for
egress control in Istio.&lt;/p&gt;
&lt;h2 id=&#34;egress-control-for-tcp&#34;&gt;Egress control for TCP&lt;/h2&gt;
&lt;p&gt;Since the &lt;a href=&#34;https://docs.mongodb.com/manual/reference/mongodb-wire-protocol/&#34;&gt;MongoDB Wire Protocol&lt;/a&gt; runs on top of TCP, you
can control the egress traffic to your MongoDB instance the same way as traffic to any other &lt;a href=&#34;/v1.1/blog/2018/egress-tcp/&#34;&gt;external TCP service&lt;/a&gt;. To
control TCP traffic, you must specify a block of IPs in &lt;a href=&#34;https://tools.ietf.org/html/rfc2317&#34;&gt;CIDR&lt;/a&gt; notation that includes the IP
address of your MongoDB host. The caveat here is that sometimes the IP of the MongoDB host is not
stable or known in advance.&lt;/p&gt;
&lt;p&gt;In the cases when the IP of the MongoDB host is not stable, the egress traffic can either be
&lt;a href=&#34;#egress-control-for-tls&#34;&gt;controlled as TLS traffic&lt;/a&gt;, or the traffic can be routed
&lt;a href=&#34;/v1.1/docs/tasks/traffic-management/egress/#direct-access-to-external-services&#34;&gt;directly&lt;/a&gt;, bypassing the Istio sidecar
proxies.&lt;/p&gt;
&lt;p&gt;Get the IP address of your MongoDB database instance. As an option, you can use the
&lt;a href=&#34;https://linux.die.net/man/1/host&#34;&gt;host&lt;/a&gt; command:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ export MONGODB_IP=$(host $MONGODB_HOST | grep &amp;#34; has address &amp;#34; | cut -d&amp;#34; &amp;#34; -f4)
&lt;/code&gt;&lt;/pre&gt;
&lt;h3 id=&#34;control-tcp-egress-traffic-without-a-gateway&#34;&gt;Control TCP egress traffic without a gateway&lt;/h3&gt;
&lt;p&gt;In case you do not need to direct the traffic through an
&lt;a href=&#34;/v1.1/docs/examples/advanced-gateways/egress-gateway/#use-case&#34;&gt;egress gateway&lt;/a&gt;, for example if you do not have a
requirement that all the traffic that leaves your mesh must exit through the gateway, follow the
instructions in this section. Alternatively, if you do want to direct your traffic through an egress gateway, proceed to
&lt;a href=&#34;#direct-tcp-egress-traffic-through-an-egress-gateway&#34;&gt;Direct TCP egress traffic through an egress gateway&lt;/a&gt;.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Define a TCP mesh-external service entry:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ kubectl apply -f - &amp;lt;&amp;lt;EOF
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
name: mongo
spec:
hosts:
- my-mongo.tcp.svc
addresses:
- $MONGODB_IP/32
ports:
- number: $MONGODB_PORT
name: tcp
protocol: TCP
location: MESH_EXTERNAL
resolution: STATIC
endpoints:
- address: $MONGODB_IP
EOF
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Note that the protocol &lt;code&gt;TCP&lt;/code&gt; is specified instead of &lt;code&gt;MONGO&lt;/code&gt; because the traffic can be encrypted when
&lt;a href=&#34;https://docs.mongodb.com/manual/tutorial/configure-ssl/&#34;&gt;the MongoDB protocol runs on top of TLS&lt;/a&gt;.
If the traffic is encrypted, the Istio proxy cannot parse the encrypted MongoDB protocol.&lt;/p&gt;
&lt;p&gt;If you know that the plain MongoDB protocol is used, without encryption, you can specify the protocol as &lt;code&gt;MONGO&lt;/code&gt; and
let the Istio proxy produce
&lt;a href=&#34;https://www.envoyproxy.io/docs/envoy/latest/configuration/network_filters/mongo_proxy_filter#statistics&#34;&gt;MongoDB related statistics&lt;/a&gt;.
Also note that when the protocol &lt;code&gt;TCP&lt;/code&gt; is specified, the configuration is not specific to MongoDB; it is the same
for any other database whose protocol runs on top of TCP.&lt;/p&gt;
&lt;p&gt;Note that the host of your MongoDB is not used in TCP routing, so you can use any host, for example &lt;code&gt;my-mongo.tcp.svc&lt;/code&gt;. Notice the &lt;code&gt;STATIC&lt;/code&gt; resolution and the endpoint with the IP of your MongoDB service. Once you define such an endpoint, you can access MongoDB services that do not have a domain name.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Refresh the web page of the application. Now the application should display the ratings without error:&lt;/p&gt;
&lt;figure style=&#34;width:80%&#34;&gt;
&lt;div class=&#34;wrapper-with-intrinsic-ratio&#34; style=&#34;padding-bottom:36.69064748201439%&#34;&gt;
&lt;a data-skipendnotes=&#34;true&#34; href=&#34;/v1.1/blog/2018/egress-mongo/./externalDBRatings.png&#34; title=&#34;Book Ratings Displayed Correctly&#34;&gt;
&lt;img class=&#34;element-to-stretch&#34; src=&#34;/v1.1/blog/2018/egress-mongo/./externalDBRatings.png&#34; alt=&#34;Book Ratings Displayed Correctly&#34; /&gt;
&lt;/a&gt;
&lt;/div&gt;
&lt;figcaption&gt;Book Ratings Displayed Correctly&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p&gt;Note that you see a one-star rating for both displayed reviews, as expected. You set the ratings to be one star to
provide yourself with a visual clue that your external database is indeed being used.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If you want to direct the traffic through an egress gateway, proceed to the next section. Otherwise, perform
&lt;a href=&#34;#cleanup-of-tcp-egress-traffic-control&#34;&gt;cleanup&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
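&lt;p&gt;For reference, if you are certain your MongoDB traffic is not encrypted, the service entry above would differ only in its port definition. The following fragment is a sketch of the &lt;code&gt;MONGO&lt;/code&gt; variant, using the same environment variables as above:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-yaml&#39; data-expandlinks=&#39;true&#39; &gt;  ports:
  - number: $MONGODB_PORT
    name: mongo      # the MONGO protocol lets the proxy produce MongoDB statistics
    protocol: MONGO  # only valid for plain, unencrypted MongoDB traffic
&lt;/code&gt;&lt;/pre&gt;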
&lt;h3 id=&#34;direct-tcp-egress-traffic-through-an-egress-gateway&#34;&gt;Direct TCP egress traffic through an egress gateway&lt;/h3&gt;
&lt;p&gt;In this section you handle the case when you need to direct the traffic through an
&lt;a href=&#34;/v1.1/docs/examples/advanced-gateways/egress-gateway/#use-case&#34;&gt;egress gateway&lt;/a&gt;. The sidecar proxy routes TCP
connections from the MongoDB client to the egress gateway, by matching the IP of the MongoDB host (a CIDR block of
length 32). The egress gateway forwards the traffic to the MongoDB host, by its hostname.&lt;/p&gt;
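The /32 matching described above can be sanity-checked outside the mesh. The snippet below is a minimal illustration (not Istio code) using Python's `ipaddress` module, with a placeholder address standing in for `$MONGODB_IP`:

```python
import ipaddress

# Placeholder value; substitute the real IP held in $MONGODB_IP.
MONGODB_IP = "192.0.2.17"

# A /32 CIDR block contains exactly one address, so a destinationSubnets
# match on it applies to the MongoDB host and to nothing else.
subnet = ipaddress.ip_network(f"{MONGODB_IP}/32")

print(subnet.num_addresses)                          # 1
print(ipaddress.ip_address(MONGODB_IP) in subnet)    # True
print(ipaddress.ip_address("192.0.2.18") in subnet)  # False
```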
&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;a href=&#34;/v1.1/docs/examples/advanced-gateways/egress-gateway/#deploy-istio-egress-gateway&#34;&gt;Deploy Istio egress gateway&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If you did not perform the steps in &lt;a href=&#34;#control-tcp-egress-traffic-without-a-gateway&#34;&gt;the previous section&lt;/a&gt;, perform them now.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Proceed to the following section.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;h4 id=&#34;configure-tcp-traffic-from-sidecars-to-the-egress-gateway&#34;&gt;Configure TCP traffic from sidecars to the egress gateway&lt;/h4&gt;
&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Define the &lt;code&gt;EGRESS_GATEWAY_MONGODB_PORT&lt;/code&gt; environment variable to hold a port for directing traffic through
the egress gateway, for example &lt;code&gt;7777&lt;/code&gt;. You must select a port that is not used by any other service in the mesh.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ export EGRESS_GATEWAY_MONGODB_PORT=7777
&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Add the selected port to the &lt;code&gt;istio-egressgateway&lt;/code&gt; service. Use the same values you used for installing
Istio; in particular, you must specify all the ports of the &lt;code&gt;istio-egressgateway&lt;/code&gt; service that you previously
configured.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ helm template install/kubernetes/helm/istio/ --name istio-egressgateway --namespace istio-system -x charts/gateways/templates/service.yaml --set gateways.istio-ingressgateway.enabled=false --set gateways.istio-egressgateway.enabled=true --set gateways.istio-egressgateway.ports[0].port=80 --set gateways.istio-egressgateway.ports[0].name=http --set gateways.istio-egressgateway.ports[1].port=443 --set gateways.istio-egressgateway.ports[1].name=https --set gateways.istio-egressgateway.ports[2].port=$EGRESS_GATEWAY_MONGODB_PORT --set gateways.istio-egressgateway.ports[2].name=mongo | kubectl apply -f -
&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Check that the &lt;code&gt;istio-egressgateway&lt;/code&gt; service indeed has the selected port:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ kubectl get svc istio-egressgateway -n istio-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
istio-egressgateway ClusterIP 172.21.202.204 &amp;lt;none&amp;gt; 80/TCP,443/TCP,7777/TCP 34d
&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create an egress &lt;code&gt;Gateway&lt;/code&gt; for your MongoDB service, and destination rules and a virtual service to direct the
traffic through the egress gateway and from the egress gateway to the external service.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ kubectl apply -f - &amp;lt;&amp;lt;EOF
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: istio-egressgateway
spec:
selector:
istio: egressgateway
servers:
- port:
number: $EGRESS_GATEWAY_MONGODB_PORT
name: tcp
protocol: TCP
hosts:
- my-mongo.tcp.svc
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: egressgateway-for-mongo
spec:
host: istio-egressgateway.istio-system.svc.cluster.local
subsets:
- name: mongo
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: mongo
spec:
host: my-mongo.tcp.svc
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: direct-mongo-through-egress-gateway
spec:
hosts:
- my-mongo.tcp.svc
gateways:
- mesh
- istio-egressgateway
tcp:
- match:
- gateways:
- mesh
destinationSubnets:
- $MONGODB_IP/32
port: $MONGODB_PORT
route:
- destination:
host: istio-egressgateway.istio-system.svc.cluster.local
subset: mongo
port:
number: $EGRESS_GATEWAY_MONGODB_PORT
- match:
- gateways:
- istio-egressgateway
port: $EGRESS_GATEWAY_MONGODB_PORT
route:
- destination:
host: my-mongo.tcp.svc
port:
number: $MONGODB_PORT
weight: 100
EOF
&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href=&#34;#verify-that-egress-traffic-is-directed-through-the-egress-gateway&#34;&gt;Verify that egress traffic is directed through the egress gateway&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;h4 id=&#34;mutual-tls-between-the-sidecar-proxies-and-the-egress-gateway&#34;&gt;Mutual TLS between the sidecar proxies and the egress gateway&lt;/h4&gt;
&lt;p&gt;You may want to enable &lt;a href=&#34;/v1.1/docs/tasks/security/mutual-tls/&#34;&gt;mutual TLS Authentication&lt;/a&gt; between the sidecar proxies of
your MongoDB clients and the egress gateway to let the egress gateway monitor the identity of the source pods and to
enable Mixer policy enforcement based on that identity. By enabling mutual TLS you also encrypt the traffic.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Delete the configuration from the previous section:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ kubectl delete gateway istio-egressgateway --ignore-not-found=true
$ kubectl delete virtualservice direct-mongo-through-egress-gateway --ignore-not-found=true
$ kubectl delete destinationrule egressgateway-for-mongo mongo --ignore-not-found=true
&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create an egress &lt;code&gt;Gateway&lt;/code&gt; for your MongoDB service, and destination rules and a virtual service
to direct the traffic through the egress gateway and from the egress gateway to the external service.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ kubectl apply -f - &amp;lt;&amp;lt;EOF
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: istio-egressgateway
spec:
selector:
istio: egressgateway
servers:
- port:
number: 443
name: tls
protocol: TLS
hosts:
- my-mongo.tcp.svc
tls:
mode: MUTUAL
serverCertificate: /etc/certs/cert-chain.pem
privateKey: /etc/certs/key.pem
caCertificates: /etc/certs/root-cert.pem
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: egressgateway-for-mongo
spec:
host: istio-egressgateway.istio-system.svc.cluster.local
trafficPolicy:
loadBalancer:
simple: ROUND_ROBIN
portLevelSettings:
- port:
number: 443
tls:
mode: ISTIO_MUTUAL
subsets:
- name: mongo
trafficPolicy:
loadBalancer:
simple: ROUND_ROBIN
portLevelSettings:
- port:
number: 443
tls:
mode: ISTIO_MUTUAL
sni: my-mongo.tcp.svc
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: mongo
spec:
host: my-mongo.tcp.svc
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: direct-mongo-through-egress-gateway
spec:
hosts:
- my-mongo.tcp.svc
gateways:
- mesh
- istio-egressgateway
tcp:
- match:
- gateways:
- mesh
destinationSubnets:
- $MONGODB_IP/32
port: $MONGODB_PORT
route:
- destination:
host: istio-egressgateway.istio-system.svc.cluster.local
subset: mongo
port:
number: 443
- match:
- gateways:
- istio-egressgateway
port: 443
route:
- destination:
host: my-mongo.tcp.svc
port:
number: $MONGODB_PORT
weight: 100
EOF
&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Proceed to the next section.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;h4 id=&#34;verify-that-egress-traffic-is-directed-through-the-egress-gateway&#34;&gt;Verify that egress traffic is directed through the egress gateway&lt;/h4&gt;
&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Refresh the web page of the application again and verify that the ratings are still displayed correctly.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href=&#34;/v1.1/docs/tasks/telemetry/logs/access-log/#enable-envoy-s-access-logging&#34;&gt;Enable Envoys access logging&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Check the log of the egress gateway&amp;rsquo;s Envoy and look for a line that corresponds to your
requests to the MongoDB service. If Istio is deployed in the &lt;code&gt;istio-system&lt;/code&gt; namespace, the command to print the
log is:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ kubectl logs -l istio=egressgateway -n istio-system
[2019-04-14T06:12:07.636Z] &amp;#34;- - -&amp;#34; 0 - &amp;#34;-&amp;#34; 1591 4393 94 - &amp;#34;-&amp;#34; &amp;#34;-&amp;#34; &amp;#34;-&amp;#34; &amp;#34;-&amp;#34; &amp;#34;&amp;lt;Your MongoDB IP&amp;gt;:&amp;lt;your MongoDB port&amp;gt;&amp;#34; outbound|&amp;lt;your MongoDB port&amp;gt;||my-mongo.tcp.svc 172.30.146.119:59924 172.30.146.119:443 172.30.230.1:59206 -
&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;h3 id=&#34;cleanup-of-tcp-egress-traffic-control&#34;&gt;Cleanup of TCP egress traffic control&lt;/h3&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ kubectl delete serviceentry mongo
$ kubectl delete gateway istio-egressgateway --ignore-not-found=true
$ kubectl delete virtualservice direct-mongo-through-egress-gateway --ignore-not-found=true
$ kubectl delete destinationrule egressgateway-for-mongo mongo --ignore-not-found=true
&lt;/code&gt;&lt;/pre&gt;
&lt;h2 id=&#34;egress-control-for-tls&#34;&gt;Egress control for TLS&lt;/h2&gt;
&lt;p&gt;In real life, most communication to external services must be encrypted and
&lt;a href=&#34;https://docs.mongodb.com/manual/tutorial/configure-ssl/&#34;&gt;the MongoDB protocol runs on top of TLS&lt;/a&gt;.
Also, TLS clients usually send
&lt;a href=&#34;https://en.wikipedia.org/wiki/Server_Name_Indication&#34;&gt;Server Name Indication&lt;/a&gt; (SNI) as part of their handshake. If your
MongoDB server runs TLS and your MongoDB client sends SNI as part of the handshake, you can control your MongoDB egress
traffic as any other TLS-with-SNI traffic. With TLS and SNI, you do not need to specify the IP addresses of your MongoDB
servers. You specify their host names instead, which is more convenient since you do not have to rely on the stability of
the IP addresses. You can also specify wildcards as a prefix of the host names, for example allowing access to any
server from the &lt;code&gt;*.com&lt;/code&gt; domain.&lt;/p&gt;
&lt;p&gt;To check if your MongoDB server supports TLS, run:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ openssl s_client -connect $MONGODB_HOST:$MONGODB_PORT -servername $MONGODB_HOST
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;If the command above prints a certificate returned by the server, the server supports TLS. If not, you have to control
your MongoDB egress traffic at the TCP level, as described in the previous sections.&lt;/p&gt;
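As a rough programmatic equivalent of the `openssl` check above, the sketch below attempts a TLS handshake against a host and port and reports whether one succeeds. It is an illustration only; certificate verification is deliberately disabled because the question is merely whether the server speaks TLS:

```python
import socket
import ssl

def supports_tls(host, port, timeout=5.0):
    """Return True if the server completes a TLS handshake on host:port."""
    ctx = ssl.create_default_context()
    # Verification is disabled on purpose: we only want to know whether the
    # server speaks TLS at all, not whether its certificate chain is valid.
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                return tls.version() is not None
    except (OSError, ssl.SSLError):
        return False
```

Calling `supports_tls()` with the values of `$MONGODB_HOST` and `$MONGODB_PORT` should agree with the `openssl` check.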
&lt;h3 id=&#34;control-tls-egress-traffic-without-a-gateway&#34;&gt;Control TLS egress traffic without a gateway&lt;/h3&gt;
&lt;p&gt;In case you &lt;a href=&#34;/v1.1/docs/examples/advanced-gateways/egress-gateway/#use-case&#34;&gt;do not need an egress gateway&lt;/a&gt;, follow the
instructions in this section. If you want to direct your traffic through an egress gateway, proceed to
&lt;a href=&#34;#direct-tls-egress-traffic-through-an-egress-gateway&#34;&gt;Direct TLS Egress traffic through an egress gateway&lt;/a&gt;.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Create a &lt;code&gt;ServiceEntry&lt;/code&gt; for the MongoDB service:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ kubectl apply -f - &amp;lt;&amp;lt;EOF
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
name: mongo
spec:
hosts:
- $MONGODB_HOST
ports:
- number: $MONGODB_PORT
name: tls
protocol: TLS
resolution: DNS
EOF
&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Refresh the web page of the application. The application should display the ratings without error.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;h4 id=&#34;cleanup-of-the-egress-configuration-for-tls&#34;&gt;Cleanup of the egress configuration for TLS&lt;/h4&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ kubectl delete serviceentry mongo
&lt;/code&gt;&lt;/pre&gt;
&lt;h3 id=&#34;direct-tls-egress-traffic-through-an-egress-gateway&#34;&gt;Direct TLS Egress traffic through an egress gateway&lt;/h3&gt;
&lt;p&gt;In this section you handle the case when you need to direct the traffic through an
&lt;a href=&#34;/v1.1/docs/examples/advanced-gateways/egress-gateway/#use-case&#34;&gt;egress gateway&lt;/a&gt;. The sidecar proxy routes TLS
connections from the MongoDB client to the egress gateway, by matching the SNI of the MongoDB host.
The egress gateway forwards the traffic to the MongoDB host. Note that the sidecar proxy rewrites the destination port
to 443. The egress gateway accepts the MongoDB traffic on port 443, matches the MongoDB host by SNI, and rewrites the
port again to the port of the MongoDB server.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;a href=&#34;/v1.1/docs/examples/advanced-gateways/egress-gateway/#deploy-istio-egress-gateway&#34;&gt;Deploy Istio egress gateway&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create a &lt;code&gt;ServiceEntry&lt;/code&gt; for the MongoDB service:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ kubectl apply -f - &amp;lt;&amp;lt;EOF
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
name: mongo
spec:
hosts:
- $MONGODB_HOST
ports:
- number: $MONGODB_PORT
name: tls
protocol: TLS
- number: 443
name: tls-port-for-egress-gateway
protocol: TLS
resolution: DNS
location: MESH_EXTERNAL
EOF
&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Refresh the web page of the application and verify that the ratings are displayed correctly.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create an egress &lt;code&gt;Gateway&lt;/code&gt; for your MongoDB service, and destination rules and virtual services
to direct the traffic through the egress gateway and from the egress gateway to the external service.&lt;/p&gt;
&lt;p&gt;If you want to enable &lt;a href=&#34;/v1.1/docs/tasks/security/mutual-tls/&#34;&gt;mutual TLS Authentication&lt;/a&gt; between the sidecar proxies of
your application pods and the egress gateway, use the following command. (You may want to enable mutual TLS to let
the egress gateway monitor the identity of the source pods and to enable Mixer policy enforcement based on that
identity.)&lt;/p&gt;
&lt;div id=&#34;tabset-blog-2018-egress-mongo-2&#34; role=&#34;tablist&#34; class=&#34;tabset&#34;&gt;
&lt;div class=&#34;tab-strip&#34; data-cookie-name=&#34;mtls&#34;&gt;&lt;button aria-selected=&#34;true&#34; data-cookie-value=&#34;enabled&#34;
aria-controls=&#34;tabset-blog-2018-egress-mongo-2-0-panel&#34; id=&#34;tabset-blog-2018-egress-mongo-2-0-tab&#34; role=&#34;tab&#34;&gt;&lt;span&gt;mutual TLS enabled&lt;/span&gt;
&lt;/button&gt;&lt;button tabindex=&#34;-1&#34; data-cookie-value=&#34;disabled&#34;
aria-controls=&#34;tabset-blog-2018-egress-mongo-2-1-panel&#34; id=&#34;tabset-blog-2018-egress-mongo-2-1-tab&#34; role=&#34;tab&#34;&gt;&lt;span&gt;mutual TLS disabled&lt;/span&gt;
&lt;/button&gt;&lt;/div&gt;
&lt;div class=&#34;tab-content&#34;&gt;&lt;div id=&#34;tabset-blog-2018-egress-mongo-2-0-panel&#34; role=&#34;tabpanel&#34; tabindex=&#34;0&#34; aria-labelledby=&#34;tabset-blog-2018-egress-mongo-2-0-tab&#34;&gt;&lt;pre&gt;&lt;code class=&#34;language-bash&#34;&gt;$ kubectl apply -f - &amp;lt;&amp;lt;EOF
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: istio-egressgateway
spec:
selector:
istio: egressgateway
servers:
- port:
number: 443
name: tls
protocol: TLS
hosts:
- $MONGODB_HOST
tls:
mode: MUTUAL
serverCertificate: /etc/certs/cert-chain.pem
privateKey: /etc/certs/key.pem
caCertificates: /etc/certs/root-cert.pem
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: egressgateway-for-mongo
spec:
host: istio-egressgateway.istio-system.svc.cluster.local
trafficPolicy:
loadBalancer:
simple: ROUND_ROBIN
portLevelSettings:
- port:
number: 443
tls:
mode: ISTIO_MUTUAL
subsets:
- name: mongo
trafficPolicy:
loadBalancer:
simple: ROUND_ROBIN
portLevelSettings:
- port:
number: 443
tls:
mode: ISTIO_MUTUAL
sni: $MONGODB_HOST
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: direct-mongo-through-egress-gateway
spec:
hosts:
- $MONGODB_HOST
gateways:
- mesh
- istio-egressgateway
tls:
- match:
- gateways:
- mesh
port: $MONGODB_PORT
sni_hosts:
- $MONGODB_HOST
route:
- destination:
host: istio-egressgateway.istio-system.svc.cluster.local
subset: mongo
port:
number: 443
tcp:
- match:
- gateways:
- istio-egressgateway
port: 443
route:
- destination:
host: $MONGODB_HOST
port:
number: $MONGODB_PORT
weight: 100
EOF
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;div hidden id=&#34;tabset-blog-2018-egress-mongo-2-1-panel&#34; role=&#34;tabpanel&#34; tabindex=&#34;0&#34; aria-labelledby=&#34;tabset-blog-2018-egress-mongo-2-1-tab&#34;&gt;&lt;pre&gt;&lt;code class=&#34;language-bash&#34;&gt;$ kubectl apply -f - &amp;lt;&amp;lt;EOF
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: istio-egressgateway
spec:
selector:
istio: egressgateway
servers:
- port:
number: 443
name: tls
protocol: TLS
hosts:
- $MONGODB_HOST
tls:
mode: PASSTHROUGH
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: egressgateway-for-mongo
spec:
host: istio-egressgateway.istio-system.svc.cluster.local
subsets:
- name: mongo
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: direct-mongo-through-egress-gateway
spec:
hosts:
- $MONGODB_HOST
gateways:
- mesh
- istio-egressgateway
tls:
- match:
- gateways:
- mesh
port: $MONGODB_PORT
sni_hosts:
- $MONGODB_HOST
route:
- destination:
host: istio-egressgateway.istio-system.svc.cluster.local
subset: mongo
port:
number: 443
- match:
- gateways:
- istio-egressgateway
port: 443
sni_hosts:
- $MONGODB_HOST
route:
- destination:
host: $MONGODB_HOST
port:
number: $MONGODB_PORT
weight: 100
EOF
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href=&#34;#verify-that-egress-traffic-is-directed-through-the-egress-gateway&#34;&gt;Verify that the traffic is directed through the egress gateway&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;h4 id=&#34;cleanup-directing-tls-egress-traffic-through-an-egress-gateway&#34;&gt;Cleanup directing TLS Egress traffic through an egress gateway&lt;/h4&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ kubectl delete serviceentry mongo
$ kubectl delete gateway istio-egressgateway
$ kubectl delete virtualservice direct-mongo-through-egress-gateway
$ kubectl delete destinationrule egressgateway-for-mongo
&lt;/code&gt;&lt;/pre&gt;
&lt;h3 id=&#34;enable-mongodb-tls-egress-traffic-to-arbitrary-wildcarded-domains&#34;&gt;Enable MongoDB TLS egress traffic to arbitrary wildcarded domains&lt;/h3&gt;
&lt;p&gt;Sometimes you want to configure egress traffic to multiple hostnames from the same domain, for example traffic to all
MongoDB services from &lt;code&gt;*.&amp;lt;your company domain&amp;gt;.com&lt;/code&gt;. You do not want to create multiple configuration items, one for
each and every MongoDB service in your company. To configure access to all the external services from the same domain by
a single configuration, you use &lt;em&gt;wildcarded&lt;/em&gt; hosts.&lt;/p&gt;
&lt;p&gt;In this section you configure egress traffic for a wildcarded domain. I used a MongoDB instance at the &lt;code&gt;composedb.com&lt;/code&gt;
domain, so configuring egress traffic for &lt;code&gt;*.com&lt;/code&gt; worked for me (I could have used &lt;code&gt;*.composedb.com&lt;/code&gt; as well).
You can pick a wildcarded domain according to your MongoDB host.&lt;/p&gt;
&lt;p&gt;To configure egress gateway traffic for a wildcarded domain, you will first need to deploy a custom egress
gateway with
&lt;a href=&#34;/v1.1/docs/examples/advanced-gateways/wildcard-egress-hosts/#wildcard-configuration-for-arbitrary-domains&#34;&gt;an additional SNI proxy&lt;/a&gt;.
This is needed due to current limitations of Envoy, the proxy used by the standard Istio egress gateway.&lt;/p&gt;
&lt;h4 id=&#34;prepare-a-new-egress-gateway-with-an-sni-proxy&#34;&gt;Prepare a new egress gateway with an SNI proxy&lt;/h4&gt;
&lt;p&gt;In this subsection you deploy an egress gateway with an SNI proxy, in addition to the standard Istio Envoy proxy. You
can use any SNI proxy that is capable of routing traffic according to arbitrary, not-preconfigured SNI values; we used
&lt;a href=&#34;http://nginx.org&#34;&gt;Nginx&lt;/a&gt; to achieve this functionality.&lt;/p&gt;
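Nginx's `ssl_preread` works by peeking at the SNI field of the TLS ClientHello before proxying any bytes. The sketch below is an illustration of that idea (not Nginx's implementation): it generates a ClientHello with Python's `ssl` module and parses the `server_name` extension back out of the raw bytes:

```python
import ssl
import struct

def client_hello_bytes(server_name):
    """Produce the raw ClientHello a TLS client would send for server_name."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    incoming, outgoing = ssl.MemoryBIO(), ssl.MemoryBIO()
    tls = ctx.wrap_bio(incoming, outgoing, server_hostname=server_name)
    try:
        tls.do_handshake()        # stalls waiting for the server's reply...
    except ssl.SSLWantReadError:
        pass
    return outgoing.read()        # ...but the ClientHello is already written

def extract_sni(hello):
    """Parse the server_name extension out of a raw ClientHello record."""
    p = 5 + 4                     # TLS record header + handshake header
    p += 2 + 32                   # legacy_version + random
    p += 1 + hello[p]             # session_id
    (n,) = struct.unpack_from(">H", hello, p); p += 2 + n   # cipher_suites
    p += 1 + hello[p]             # compression_methods
    (total,) = struct.unpack_from(">H", hello, p); p += 2   # extensions length
    end = p + total
    while p < end:
        ext_type, ext_len = struct.unpack_from(">HH", hello, p); p += 4
        if ext_type == 0:         # server_name extension (RFC 6066)
            # skip list length (2) and name type (1), read name length (2)
            (name_len,) = struct.unpack_from(">H", hello, p + 3)
            return hello[p + 5:p + 5 + name_len].decode("ascii")
        p += ext_len
    return None

print(extract_sni(client_hello_bytes("my-mongo.tcp.svc")))
```

An SNI proxy makes its routing decision from exactly this cleartext field, which is why no TLS termination is needed at the gateway.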
&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Create a configuration file for the Nginx SNI proxy. You may want to edit the file to specify additional Nginx
settings, if required.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ cat &amp;lt;&amp;lt;EOF &amp;gt; ./sni-proxy.conf
user www-data;
events {
}
stream {
log_format log_stream &amp;#39;\$remote_addr [\$time_local] \$protocol [\$ssl_preread_server_name]&amp;#39;
&amp;#39;\$status \$bytes_sent \$bytes_received \$session_time&amp;#39;;
access_log /var/log/nginx/access.log log_stream;
error_log /var/log/nginx/error.log;
# tcp forward proxy by SNI
server {
resolver 8.8.8.8 ipv6=off;
listen 127.0.0.1:$MONGODB_PORT;
proxy_pass \$ssl_preread_server_name:$MONGODB_PORT;
ssl_preread on;
}
}
EOF
&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create a Kubernetes &lt;a href=&#34;https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/&#34;&gt;ConfigMap&lt;/a&gt;
to hold the configuration of the Nginx SNI proxy:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ kubectl create configmap egress-sni-proxy-configmap -n istio-system --from-file=nginx.conf=./sni-proxy.conf
&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The following command generates &lt;code&gt;istio-egressgateway-with-sni-proxy.yaml&lt;/code&gt;, which you can edit and deploy.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ cat &amp;lt;&amp;lt;EOF | helm template install/kubernetes/helm/istio/ --name istio-egressgateway-with-sni-proxy --namespace istio-system -x charts/gateways/templates/deployment.yaml -x charts/gateways/templates/service.yaml -x charts/gateways/templates/serviceaccount.yaml -x charts/gateways/templates/autoscale.yaml -x charts/gateways/templates/clusterrole.yaml -x charts/gateways/templates/clusterrolebindings.yaml --set global.mtls.enabled=true --set global.istioNamespace=istio-system -f - &amp;gt; ./istio-egressgateway-with-sni-proxy.yaml
gateways:
enabled: true
istio-ingressgateway:
enabled: false
istio-egressgateway:
enabled: false
istio-egressgateway-with-sni-proxy:
enabled: true
labels:
app: istio-egressgateway-with-sni-proxy
istio: egressgateway-with-sni-proxy
replicaCount: 1
autoscaleMin: 1
autoscaleMax: 5
cpu:
targetAverageUtilization: 80
serviceAnnotations: {}
type: ClusterIP
ports:
- port: 443
name: https
secretVolumes:
- name: egressgateway-certs
secretName: istio-egressgateway-certs
mountPath: /etc/istio/egressgateway-certs
- name: egressgateway-ca-certs
secretName: istio-egressgateway-ca-certs
mountPath: /etc/istio/egressgateway-ca-certs
configVolumes:
- name: sni-proxy-config
configMapName: egress-sni-proxy-configmap
additionalContainers:
- name: sni-proxy
image: nginx
volumeMounts:
- name: sni-proxy-config
mountPath: /etc/nginx
readOnly: true
EOF
&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Deploy the new egress gateway:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ kubectl apply -f ./istio-egressgateway-with-sni-proxy.yaml
serviceaccount &amp;#34;istio-egressgateway-with-sni-proxy-service-account&amp;#34; created
clusterrole &amp;#34;istio-egressgateway-with-sni-proxy-istio-system&amp;#34; created
clusterrolebinding &amp;#34;istio-egressgateway-with-sni-proxy-istio-system&amp;#34; created
service &amp;#34;istio-egressgateway-with-sni-proxy&amp;#34; created
deployment &amp;#34;istio-egressgateway-with-sni-proxy&amp;#34; created
horizontalpodautoscaler &amp;#34;istio-egressgateway-with-sni-proxy&amp;#34; created
&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Verify that the new egress gateway is running. Note that the pod has two containers (one is the Envoy proxy and the
second one is the SNI proxy).&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ kubectl get pod -l istio=egressgateway-with-sni-proxy -n istio-system
NAME READY STATUS RESTARTS AGE
istio-egressgateway-with-sni-proxy-79f6744569-pf9t2 2/2 Running 0 17s
&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create a service entry with a static address equal to 127.0.0.1 (&lt;code&gt;localhost&lt;/code&gt;), and disable mutual TLS on the traffic directed to the new
service entry:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ kubectl apply -f - &amp;lt;&amp;lt;EOF
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
name: sni-proxy
spec:
hosts:
- sni-proxy.local
location: MESH_EXTERNAL
ports:
- number: $MONGODB_PORT
name: tcp
protocol: TCP
resolution: STATIC
endpoints:
- address: 127.0.0.1
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: disable-mtls-for-sni-proxy
spec:
host: sni-proxy.local
trafficPolicy:
tls:
mode: DISABLE
EOF
&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;h4 id=&#34;configure-access-to-com-using-the-new-egress-gateway&#34;&gt;Configure access to &lt;code&gt;*.com&lt;/code&gt; using the new egress gateway&lt;/h4&gt;
&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Define a &lt;code&gt;ServiceEntry&lt;/code&gt; for &lt;code&gt;*.com&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ cat &amp;lt;&amp;lt;EOF | kubectl create -f -
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
name: mongo
spec:
hosts:
- &amp;#34;*.com&amp;#34;
ports:
- number: 443
name: tls
protocol: TLS
- number: $MONGODB_PORT
name: tls-mongodb
protocol: TLS
location: MESH_EXTERNAL
EOF
&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create an egress &lt;code&gt;Gateway&lt;/code&gt; for &lt;em&gt;*.com&lt;/em&gt;, port 443, protocol TLS, a destination rule to set the
&lt;a href=&#34;https://en.wikipedia.org/wiki/Server_Name_Indication&#34;&gt;SNI&lt;/a&gt; for the gateway, and Envoy filters to prevent tampering
with SNI by a malicious application (the filters verify that the SNI issued by the application is the SNI reported
to Mixer).&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ kubectl apply -f - &amp;lt;&amp;lt;EOF
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: istio-egressgateway-with-sni-proxy
spec:
selector:
istio: egressgateway-with-sni-proxy
servers:
- port:
number: 443
name: tls
protocol: TLS
hosts:
- &amp;#34;*.com&amp;#34;
tls:
mode: MUTUAL
serverCertificate: /etc/certs/cert-chain.pem
privateKey: /etc/certs/key.pem
caCertificates: /etc/certs/root-cert.pem
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: mtls-for-egress-gateway
spec:
host: istio-egressgateway-with-sni-proxy.istio-system.svc.cluster.local
trafficPolicy:
loadBalancer:
simple: ROUND_ROBIN
portLevelSettings:
- port:
number: 443
tls:
mode: ISTIO_MUTUAL
subsets:
- name: mongo
trafficPolicy:
loadBalancer:
simple: ROUND_ROBIN
portLevelSettings:
- port:
number: 443
tls:
mode: ISTIO_MUTUAL
---
# The following filter is used to forward the original SNI (sent by the application) as the SNI of the mutual TLS
# connection.
# The forwarded SNI will be reported to Mixer so that policies will be enforced based on the original SNI value.
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
name: forward-downstream-sni
spec:
filters:
- listenerMatch:
portNumber: $MONGODB_PORT
listenerType: SIDECAR_OUTBOUND
filterName: forward_downstream_sni
filterType: NETWORK
filterConfig: {}
---
# The following filter verifies that the SNI of the mutual TLS connection (the SNI reported to Mixer) is
# identical to the original SNI issued by the application (the SNI used for routing by the SNI proxy).
# The filter prevents Mixer from being deceived by a malicious application: routing to one SNI while
# reporting some other value of SNI. If the original SNI does not match the SNI of the mutual TLS connection, the
# filter will block the connection to the external service.
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
name: egress-gateway-sni-verifier
spec:
workloadLabels:
app: istio-egressgateway-with-sni-proxy
filters:
- listenerMatch:
portNumber: 443
listenerType: GATEWAY
filterName: sni_verifier
filterType: NETWORK
filterConfig: {}
EOF
&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Route the traffic destined for &lt;em&gt;*.com&lt;/em&gt; to the egress gateway and from the egress gateway to the SNI proxy.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ kubectl apply -f - &amp;lt;&amp;lt;EOF
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: direct-mongo-through-egress-gateway
spec:
hosts:
- &amp;#34;*.com&amp;#34;
gateways:
- mesh
- istio-egressgateway-with-sni-proxy
tls:
- match:
- gateways:
- mesh
port: $MONGODB_PORT
sni_hosts:
- &amp;#34;*.com&amp;#34;
route:
- destination:
host: istio-egressgateway-with-sni-proxy.istio-system.svc.cluster.local
subset: mongo
port:
number: 443
weight: 100
tcp:
- match:
- gateways:
- istio-egressgateway-with-sni-proxy
port: 443
route:
- destination:
host: sni-proxy.local
port:
number: $MONGODB_PORT
weight: 100
EOF
&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Refresh the web page of the application again and verify that the ratings are still displayed correctly.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href=&#34;/v1.1/docs/tasks/telemetry/logs/access-log/#enable-envoy-s-access-logging&#34;&gt;Enable Envoys access logging&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Check the log of the egress gateway&amp;rsquo;s Envoy proxy. If Istio is deployed in the &lt;code&gt;istio-system&lt;/code&gt; namespace, the command
to print the log is:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ kubectl logs -l istio=egressgateway-with-sni-proxy -c istio-proxy -n istio-system
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You should see lines similar to the following:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-plain&#39; data-expandlinks=&#39;true&#39; &gt;[2019-01-02T17:22:04.602Z] &amp;#34;- - -&amp;#34; 0 - 768 1863 88 - &amp;#34;-&amp;#34; &amp;#34;-&amp;#34; &amp;#34;-&amp;#34; &amp;#34;-&amp;#34; &amp;#34;127.0.0.1:28543&amp;#34; outbound|28543||sni-proxy.local 127.0.0.1:49976 172.30.146.115:443 172.30.146.118:58510 &amp;lt;your MongoDB host&amp;gt;
[2019-01-02T17:22:04.713Z] &amp;#34;- - -&amp;#34; 0 - 1534 2590 85 - &amp;#34;-&amp;#34; &amp;#34;-&amp;#34; &amp;#34;-&amp;#34; &amp;#34;-&amp;#34; &amp;#34;127.0.0.1:28543&amp;#34; outbound|28543||sni-proxy.local 127.0.0.1:49988 172.30.146.115:443 172.30.146.118:58522 &amp;lt;your MongoDB host&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Check the logs of the SNI proxy. If Istio is deployed in the &lt;code&gt;istio-system&lt;/code&gt; namespace, the command to print the
log is:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ kubectl logs -l istio=egressgateway-with-sni-proxy -n istio-system -c sni-proxy
127.0.0.1 [23/Aug/2018:03:28:18 +0000] TCP [&amp;lt;your MongoDB host&amp;gt;]200 1863 482 0.089
127.0.0.1 [23/Aug/2018:03:28:18 +0000] TCP [&amp;lt;your MongoDB host&amp;gt;]200 2590 1248 0.095
&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;h4 id=&#34;understanding-what-happened&#34;&gt;Understanding what happened&lt;/h4&gt;
&lt;p&gt;In this section you configured egress traffic to your MongoDB host using a wildcarded domain. For a single MongoDB
host there is no gain in using a wildcarded domain (an exact hostname can be specified instead), but it is beneficial
when the applications in the cluster access multiple MongoDB hosts that match some wildcarded domain. For example,
if the applications access &lt;code&gt;mongodb1.composedb.com&lt;/code&gt;, &lt;code&gt;mongodb2.composedb.com&lt;/code&gt; and &lt;code&gt;mongodb3.composedb.com&lt;/code&gt;, the egress
traffic can be configured by a single configuration for the wildcarded domain &lt;code&gt;*.composedb.com&lt;/code&gt;.&lt;/p&gt;
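&lt;p&gt;As a sketch (the hostnames reuse the example above; the port number and protocol shown are illustrative assumptions, not taken from this post&amp;rsquo;s earlier configuration), a single &lt;code&gt;ServiceEntry&lt;/code&gt; with a wildcarded host could cover all three MongoDB hosts:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-yaml&#39; data-expandlinks=&#39;true&#39; &gt;apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: mongo
spec:
  hosts:
  - &amp;#34;*.composedb.com&amp;#34;  # one entry covers mongodb1, mongodb2 and mongodb3
  ports:
  - number: 443  # illustrative; use your MongoDB TLS port ($MONGODB_PORT)
    name: tls
    protocol: TLS
  resolution: NONE
&lt;/code&gt;&lt;/pre&gt;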
&lt;p&gt;I will leave it as an exercise for the reader to verify that no additional Istio configuration is required when you
configure an app to use another instance of MongoDB with a hostname that matches the wildcarded domain used in this
section.&lt;/p&gt;
&lt;h4 id=&#34;cleanup-of-configuration-for-mongodb-tls-egress-traffic-to-arbitrary-wildcarded-domains&#34;&gt;Cleanup of configuration for MongoDB TLS egress traffic to arbitrary wildcarded domains&lt;/h4&gt;
&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Delete the configuration items for &lt;em&gt;*.com&lt;/em&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ kubectl delete serviceentry mongo
$ kubectl delete gateway istio-egressgateway-with-sni-proxy
$ kubectl delete virtualservice direct-mongo-through-egress-gateway
$ kubectl delete destinationrule mtls-for-egress-gateway
$ kubectl delete envoyfilter forward-downstream-sni egress-gateway-sni-verifier
&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Delete the configuration items for the &lt;code&gt;egressgateway-with-sni-proxy&lt;/code&gt; &lt;code&gt;Deployment&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ kubectl delete serviceentry sni-proxy
$ kubectl delete destinationrule disable-mtls-for-sni-proxy
$ kubectl delete -f ./istio-egressgateway-with-sni-proxy.yaml
$ kubectl delete configmap egress-sni-proxy-configmap -n istio-system
&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Remove the configuration files you created:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ rm ./istio-egressgateway-with-sni-proxy.yaml
$ rm ./nginx-sni-proxy.conf
&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;h2 id=&#34;cleanup&#34;&gt;Cleanup&lt;/h2&gt;
&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Drop the &lt;code&gt;bookinfo&lt;/code&gt; user:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ cat &amp;lt;&amp;lt;EOF | mongo --ssl --sslAllowInvalidCertificates $MONGODB_HOST:$MONGODB_PORT -u admin -p $MONGO_ADMIN_PASSWORD --authenticationDatabase admin
use test
db.dropUser(&amp;#34;bookinfo&amp;#34;);
EOF
&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Drop the &lt;em&gt;ratings&lt;/em&gt; collection:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ cat &amp;lt;&amp;lt;EOF | mongo --ssl --sslAllowInvalidCertificates $MONGODB_HOST:$MONGODB_PORT -u admin -p $MONGO_ADMIN_PASSWORD --authenticationDatabase admin
use test
db.ratings.drop();
EOF
&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Unset the environment variables you used:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ unset MONGO_ADMIN_PASSWORD BOOKINFO_PASSWORD MONGODB_HOST MONGODB_PORT MONGODB_IP
&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Remove the virtual services:&lt;/p&gt;
&lt;div&gt;&lt;a data-skipendnotes=&#39;true&#39; style=&#39;display:none&#39; href=&#39;https://raw.githubusercontent.com/istio/istio/release-1.1/samples/bookinfo/networking/virtual-service-ratings-db.yaml&#39;&gt;Zip&lt;/a&gt;&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ kubectl delete -f @samples/bookinfo/networking/virtual-service-ratings-db.yaml@
Deleted config: virtual-service/default/reviews
Deleted config: virtual-service/default/ratings
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Undeploy &lt;em&gt;ratings v2-mongodb&lt;/em&gt;:&lt;/p&gt;
&lt;div&gt;&lt;a data-skipendnotes=&#39;true&#39; style=&#39;display:none&#39; href=&#39;https://raw.githubusercontent.com/istio/istio/release-1.1/samples/bookinfo/platform/kube/bookinfo-ratings-v2.yaml&#39;&gt;Zip&lt;/a&gt;&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ kubectl delete -f @samples/bookinfo/platform/kube/bookinfo-ratings-v2.yaml@
deployment &amp;#34;ratings-v2&amp;#34; deleted
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;h2 id=&#34;conclusion&#34;&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;In this blog post I demonstrated various options for MongoDB egress traffic control. You can control the MongoDB egress
traffic on a TCP or TLS level where applicable. In both TCP and TLS cases, you can direct the traffic from the sidecar
proxies directly to the external MongoDB host, or direct the traffic through an egress gateway, according to your
organization&amp;rsquo;s security requirements. In the latter case, you can also decide to apply or disable mutual TLS
authentication between the sidecar proxies and the egress gateway. If you want to control MongoDB egress traffic on the
TLS level by specifying wildcarded domains like &lt;code&gt;*.com&lt;/code&gt; and you need to direct the traffic through the egress gateway,
you must deploy a custom egress gateway with an SNI proxy.&lt;/p&gt;
&lt;p&gt;Note that the configuration and considerations described in this blog post for MongoDB are much the same for other
non-HTTP protocols on top of TCP/TLS.&lt;/p&gt;</description><pubDate>Fri, 16 Nov 2018 00:00:00 +0000</pubDate><link>/v1.1/blog/2018/egress-mongo/</link><author>Vadim Eisenberg</author><guid isPermaLink="true">/v1.1/blog/2018/egress-mongo/</guid><category>traffic-management</category><category>egress</category><category>tcp</category><category>mongo</category></item><item><title>Announcing Istio 1.0.3</title><description>&lt;p&gt;We&amp;rsquo;re pleased to announce the availability of Istio 1.0.3. Please see below for what&amp;rsquo;s changed.&lt;/p&gt;
&lt;div class=&#34;call-to-action&#34;&gt;
&lt;button class=&#34;btn update-notice&#34;
data-title=&#39;Update Notice&#39;
data-downloadhref=&#34;https://github.com/istio/istio/releases/tag/1.0.3&#34;
data-updateadvice=&#39;Before you download 1.0.3, you should know that there&amp;#39;s a newer patch release with the latest bug fixes and perf improvements.&#39;
data-updatebutton=&#39;LEARN ABOUT ISTIO 1.0.8&#39;
data-updatehref=&#34;/v1.1/about/notes/1.0.8&#34;&gt;
DOWNLOAD 1.0.3
&lt;/button&gt;
&lt;a class=&#34;btn&#34; href=&#34;https://archive.istio.io/v1.0&#34;&gt;1.0.3 DOCS&lt;/a&gt;
&lt;a class=&#34;btn&#34; href=&#34;https://github.com/istio/istio/compare/1.0.2...1.0.3&#34;&gt;CHANGES IN 1.0.3&lt;/a&gt;
&lt;/div&gt;
&lt;h2 id=&#34;behavior-changes&#34;&gt;Behavior changes&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href=&#34;/v1.1/help/ops/setup/validation&#34;&gt;Validating webhook&lt;/a&gt; is now mandatory. Disabling it may result in Pilot crashes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href=&#34;/v1.1/docs/reference/config/networking/v1alpha3/service-entry/&#34;&gt;Service entry&lt;/a&gt; validation now rejects the wildcard hostname (&lt;code&gt;*&lt;/code&gt;) when configuring DNS resolution. The API has never allowed this, however &lt;code&gt;ServiceEntry&lt;/code&gt; was erroneously excluded from validation in the previous release. Use of wildcards as part of a hostname, e.g. &lt;code&gt;*.bar.com&lt;/code&gt;, remains unchanged.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The core dump path for &lt;code&gt;istio-proxy&lt;/code&gt; has changed to &lt;code&gt;/var/lib/istio&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;networking&#34;&gt;Networking&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href=&#34;/v1.1/docs/tasks/security/mutual-tls&#34;&gt;Mutual TLS&lt;/a&gt; Permissive mode is enabled by default.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Pilot performance and scalability have been greatly enhanced. Pilot now delivers endpoint updates to 500 sidecars in under 1 second.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Default &lt;a href=&#34;/v1.1/docs/tasks/telemetry/distributed-tracing/overview/#trace-sampling&#34;&gt;trace sampling&lt;/a&gt; is set to 1%.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;policy-and-telemetry&#34;&gt;Policy and telemetry&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Mixer (&lt;code&gt;istio-telemetry&lt;/code&gt;) now supports load shedding based on request rate and expected latency.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Mixer client (&lt;code&gt;istio-policy&lt;/code&gt;) now supports &lt;code&gt;FAIL_OPEN&lt;/code&gt; setting.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Istio Performance dashboard added to Grafana.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Reduced &lt;code&gt;istio-telemetry&lt;/code&gt; CPU usage by 10%.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Eliminated &lt;code&gt;statsd-to-prometheus&lt;/code&gt; deployment. Prometheus now directly scrapes from &lt;code&gt;istio-proxy&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;</description><pubDate>Tue, 30 Oct 2018 00:00:00 +0000</pubDate><link>/v1.1/blog/2018/announcing-1.0.3/</link><author>The Istio Team</author><guid isPermaLink="true">/v1.1/blog/2018/announcing-1.0.3/</guid></item><item><title>Announcing Istio 1.0.2</title><description>&lt;p&gt;We&amp;rsquo;re pleased to announce the availability of Istio 1.0.2. Please see below for what&amp;rsquo;s changed.&lt;/p&gt;
&lt;div class=&#34;call-to-action&#34;&gt;
&lt;button class=&#34;btn update-notice&#34;
data-title=&#39;Update Notice&#39;
data-downloadhref=&#34;https://github.com/istio/istio/releases/tag/1.0.2&#34;
data-updateadvice=&#39;Before you download 1.0.2, you should know that there&amp;#39;s a newer patch release with the latest bug fixes and perf improvements.&#39;
data-updatebutton=&#39;LEARN ABOUT ISTIO 1.0.8&#39;
data-updatehref=&#34;/v1.1/about/notes/1.0.8&#34;&gt;
DOWNLOAD 1.0.2
&lt;/button&gt;
&lt;a class=&#34;btn&#34; href=&#34;https://archive.istio.io/v1.0&#34;&gt;1.0.2 DOCS&lt;/a&gt;
&lt;a class=&#34;btn&#34; href=&#34;https://github.com/istio/istio/compare/1.0.1...1.0.2&#34;&gt;CHANGES IN 1.0.2&lt;/a&gt;
&lt;/div&gt;
&lt;h2 id=&#34;general&#34;&gt;General&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Fixed bug in Envoy where the sidecar would crash if receiving normal traffic on the mutual TLS port.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Fixed bug with Pilot propagating incomplete updates to Envoy in a multicluster environment.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Added a few more Helm options for Grafana.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Improved Kubernetes service registry queue performance.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Fixed bug where &lt;code&gt;istioctl proxy-status&lt;/code&gt; was not showing the patch version.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Added validation of virtual service SNI hosts.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;</description><pubDate>Thu, 06 Sep 2018 00:00:00 +0000</pubDate><link>/v1.1/blog/2018/announcing-1.0.2/</link><author>The Istio Team</author><guid isPermaLink="true">/v1.1/blog/2018/announcing-1.0.2/</guid></item><item><title>Announcing Istio 1.0.1</title><description>&lt;p&gt;We&amp;rsquo;re pleased to announce the availability of Istio 1.0.1. Please see below for what&amp;rsquo;s changed.&lt;/p&gt;
&lt;div class=&#34;call-to-action&#34;&gt;
&lt;button class=&#34;btn update-notice&#34;
data-title=&#39;Update Notice&#39;
data-downloadhref=&#34;https://github.com/istio/istio/releases/tag/1.0.1&#34;
data-updateadvice=&#39;Before you download 1.0.1, you should know that there&amp;#39;s a newer patch release with the latest bug fixes and perf improvements.&#39;
data-updatebutton=&#39;LEARN ABOUT ISTIO 1.0.8&#39;
data-updatehref=&#34;/v1.1/about/notes/1.0.8&#34;&gt;
DOWNLOAD 1.0.1
&lt;/button&gt;
&lt;a class=&#34;btn&#34; href=&#34;https://archive.istio.io/v1.0&#34;&gt;1.0.1 DOCS&lt;/a&gt;
&lt;a class=&#34;btn&#34; href=&#34;https://github.com/istio/istio/compare/1.0.0...1.0.1&#34;&gt;CHANGES IN 1.0.1&lt;/a&gt;
&lt;/div&gt;
&lt;h2 id=&#34;networking&#34;&gt;Networking&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Improved Pilot scalability and Envoy startup time.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Fixed virtual service host mismatch issue when adding a port.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Added limited support for &lt;a href=&#34;/v1.1/help/ops/traffic-management/deploy-guidelines/#multiple-virtual-services-and-destination-rules-for-the-same-host&#34;&gt;merging multiple virtual service or destination rule definitions&lt;/a&gt; for the same host.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Allow &lt;a href=&#34;https://www.envoyproxy.io/docs/envoy/latest/api-v2/api/v2/cluster/outlier_detection.proto&#34;&gt;outlier&lt;/a&gt; consecutive gateway failures when using HTTP.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;environment&#34;&gt;Environment&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Made it possible to use Pilot standalone, for those users who want to only leverage Istio&amp;rsquo;s traffic management functionality.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Introduced the convenient &lt;code&gt;values-istio-gateway.yaml&lt;/code&gt; configuration that enables users to run standalone gateways.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Fixed a variety of Helm installation issues, including an issue with the &lt;code&gt;istio-sidecar-injector&lt;/code&gt; configmap not being found.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Fixed the Istio installation error with Galley not being ready.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Fixed a variety of issues around mesh expansion.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;policy-and-telemetry&#34;&gt;Policy and Telemetry&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Added an experimental metrics expiration configuration to the Mixer Prometheus adapter.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Updated Grafana to 5.2.2.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id=&#34;adapters&#34;&gt;Adapters&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Ability to specify sink options for the Stackdriver adapter.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;galley&#34;&gt;Galley&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Improved configuration validation for health checks.&lt;/li&gt;
&lt;/ul&gt;</description><pubDate>Wed, 29 Aug 2018 00:00:00 +0000</pubDate><link>/v1.1/blog/2018/announcing-1.0.1/</link><author>The Istio Team</author><guid isPermaLink="true">/v1.1/blog/2018/announcing-1.0.1/</guid></item><item><title>All Day Istio Twitch Stream</title><description>
&lt;p&gt;To celebrate the 1.0 release and to promote the software to a wider audience, the Istio community is hosting an all day live stream on Twitch on August 17th.&lt;/p&gt;
&lt;h2 id=&#34;what-is-twitch&#34;&gt;What is Twitch?&lt;/h2&gt;
&lt;p&gt;&lt;a href=&#34;https://twitch.tv/&#34;&gt;Twitch&lt;/a&gt; is a popular video gaming live streaming platform and recently has seen a lot of coding content showing up. The IBM Advocates have been doing live coding and presentations there and it&amp;rsquo;s been fun. While mostly used for gaming content, there is a &lt;a href=&#34;https://www.twitch.tv/communities/programming&#34;&gt;growing community&lt;/a&gt; sharing and watching programming content on the site.&lt;/p&gt;
&lt;h2 id=&#34;what-does-this-have-to-do-with-istio&#34;&gt;What does this have to do with Istio?&lt;/h2&gt;
&lt;p&gt;The stream is going to be a full day of Istio content. Hopefully we&amp;rsquo;ll have a good mix of deep technical content, beginner content and line-of-business content for our audience. We&amp;rsquo;ll have developers, users, and evangelists on throughout the day to share their demos and stories. Expect live coding, Q&amp;amp;A, and some surprises. We have stellar guests lined up from IBM, Google, Datadog, Pivotal, and more!&lt;/p&gt;
&lt;h2 id=&#34;recordings&#34;&gt;Recordings&lt;/h2&gt;
&lt;p&gt;Recordings are available &lt;a href=&#34;https://www.youtube.com/playlist?list=PLzpeuWUENMK0V3dwpx5gPJun-SLG0USqU&#34;&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id=&#34;schedule&#34;&gt;Schedule&lt;/h2&gt;
&lt;p&gt;All times are &lt;code&gt;PDT&lt;/code&gt;.&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Time&lt;/th&gt;
&lt;th&gt;Speaker&lt;/th&gt;
&lt;th&gt;Affiliation&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;10:00 - 10:30&lt;/td&gt;
&lt;td&gt;&lt;code&gt;Spencer Krum + Lisa-Marie Namphy&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;IBM / Portworx&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;10:30 - 11:00&lt;/td&gt;
&lt;td&gt;&lt;code&gt;Lin Sun / Spencer Krum / Sven Mawson&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;IBM / Google&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;11:00 - 11:10&lt;/td&gt;
&lt;td&gt;&lt;code&gt;Lin Sun / Spencer Krum&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;IBM&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;11:10 - 11:30&lt;/td&gt;
&lt;td&gt;&lt;code&gt;Jason Yee / Ilan Rabinovich&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;Datadog&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;11:30 - 11:50&lt;/td&gt;
&lt;td&gt;&lt;code&gt;April Nassl&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;Google&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;11:50 - 12:10&lt;/td&gt;
&lt;td&gt;&lt;code&gt;Spike Curtis&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;Tigera&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;12:10 - 12:30&lt;/td&gt;
&lt;td&gt;&lt;code&gt;Shannon Coen&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;Pivotal&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;12:30 - 1:00&lt;/td&gt;
&lt;td&gt;&lt;code&gt;Matt Klein&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;Lyft&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;1:00 - 1:20&lt;/td&gt;
&lt;td&gt;&lt;code&gt;Zach Jory&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;F5/Aspen Mesh&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;1:20 - 1:40&lt;/td&gt;
&lt;td&gt;&lt;code&gt;Dan Ciruli&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;Google&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;1:40 - 2:00&lt;/td&gt;
&lt;td&gt;&lt;code&gt;Isaiah Snell-Feikema&lt;/code&gt; / &lt;code&gt;Greg Hanson&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;IBM&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;2:00 - 2:20&lt;/td&gt;
&lt;td&gt;&lt;code&gt;Zach Butcher&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;Tetrate&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;2:20 - 2:40&lt;/td&gt;
&lt;td&gt;&lt;code&gt;Ray Hudaihed&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;American Airlines&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;2:40 - 3:00&lt;/td&gt;
&lt;td&gt;&lt;code&gt;Christian Posta&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;Red Hat&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;3:00 - 3:20&lt;/td&gt;
&lt;td&gt;&lt;code&gt;Google/IBM China&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;Google / IBM&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;3:20 - 3:40&lt;/td&gt;
&lt;td&gt;&lt;code&gt;Colby Dyess&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;Tufin&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;3:40 - 4:00&lt;/td&gt;
&lt;td&gt;&lt;code&gt;Rohit Agarwalla&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;Cisco&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;</description><pubDate>Fri, 03 Aug 2018 00:00:00 +0000</pubDate><link>/v1.1/blog/2018/istio-twitch-stream/</link><author>Spencer Krum, IBM</author><guid isPermaLink="true">/v1.1/blog/2018/istio-twitch-stream/</guid></item><item><title>Istio a Game Changer for HP&#39;s FitStation Platform</title><description>&lt;p&gt;The FitStation team at HP strongly believes in the future of Kubernetes, BPF and service-mesh as the next standards in cloud infrastructure. We are also very happy to see Istio coming to its official Istio 1.0 release &amp;ndash; thanks to the joint collaboration that started at Google, IBM and Lyft beginning in May 2017.&lt;/p&gt;
&lt;p&gt;Throughout the development of FitStation&amp;rsquo;s large-scale and progressive cloud platform, Istio, Cilium and Kubernetes technologies have delivered a multitude of opportunities to make our systems more robust and scalable. Istio was a game changer in creating reliable and dynamic network communication.&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;http://www.fitstation.com&#34;&gt;FitStation powered by HP&lt;/a&gt; is a technology platform that captures 3D biometric data to design personalized footwear to perfectly fit individual foot size and shape as well as gait profile. It uses 3D scanning, pressure sensing, 3D printing and variable density injection molding to create unique footwear. Footwear brands such as Brooks, Steitz Secura or Superfeet are connecting to FitStation to build their next generation of high performance sports, professional and medical shoes.&lt;/p&gt;
&lt;p&gt;FitStation is built on the promise of ultimate security and privacy for users&amp;rsquo; biometric data. Istio is the cornerstone that makes this possible for data in flight within our cloud. By managing these aspects at the infrastructure level, we could focus on solving business problems instead of spending time on individual implementations of secure service communication. Using Istio allowed us to dramatically reduce the complexity of maintaining a multitude of libraries and services to provide secure service communication.&lt;/p&gt;
&lt;p&gt;As a bonus benefit of Istio 1.0, we gained network visibility, metrics and tracing out of the box. This radically improved decision-making and response quality for our development
and devops teams. The team got in-depth insight in the network communication across the entire platform, both for new as well as legacy applications. The integration of Cilium
with Envoy delivered a remarkable performance benefit on Istio service mesh communication, combined with a fine-grained kernel driven L7 network security layer. This was due to the powers of BPF brought to Istio by Cilium. We believe this will drive the future of Linux kernel security.&lt;/p&gt;
&lt;p&gt;It has been very exciting to follow Istio&amp;rsquo;s growth. We have been able to see clear improvements in performance and stability over the different development versions. The improvements between versions 0.7 and 0.8 made our teams feel comfortable with version 1.0; we can state that Istio is now ready for real production usage.&lt;/p&gt;
&lt;p&gt;We are looking forward to the promising roadmaps of Istio, Envoy, Cilium and CNCF.&lt;/p&gt;</description><pubDate>Tue, 31 Jul 2018 00:00:00 +0000</pubDate><link>/v1.1/blog/2018/hp/</link><author>Steven Ceuppens, Chief Software Architect @ HP FitStation, Open Source Advocate &amp; Contributor</author><guid isPermaLink="true">/v1.1/blog/2018/hp/</guid></item><item><title>Announcing Istio 1.0</title><description>
&lt;p&gt;Today, we&amp;rsquo;re excited to announce &lt;a href=&#34;/v1.1/about/notes/1.0&#34;&gt;Istio 1.0&lt;/a&gt;. It&amp;rsquo;s been a little over a year since our initial 0.1 release. Since then, Istio has evolved significantly with the help of a thriving and growing community of contributors and users. We&amp;rsquo;ve now reached the point where many companies have successfully adopted Istio in production and have gotten real value from the insight and control it provides over their deployments. We&amp;rsquo;ve helped large enterprises and fast-moving startups like &lt;a href=&#34;https://www.ebay.com/&#34;&gt;eBay&lt;/a&gt;, &lt;a href=&#34;https://www.autotrader.co.uk/&#34;&gt;Auto Trader UK&lt;/a&gt;, &lt;a href=&#34;http://www.descarteslabs.com/&#34;&gt;Descartes Labs&lt;/a&gt;, &lt;a href=&#34;https://www.fitstation.com/&#34;&gt;HP FitStation&lt;/a&gt;, &lt;a href=&#34;https://juspay.in&#34;&gt;JUSPAY&lt;/a&gt;, &lt;a href=&#34;https://www.namely.com/&#34;&gt;Namely&lt;/a&gt;, &lt;a href=&#34;https://www.pubnub.com/&#34;&gt;PubNub&lt;/a&gt; and &lt;a href=&#34;https://www.trulia.com/&#34;&gt;Trulia&lt;/a&gt; use Istio to connect, manage and secure their services from the ground up. Shipping this release as 1.0 is recognition that we&amp;rsquo;ve built a core set of functionality that our users can rely on for production use.&lt;/p&gt;
&lt;div class=&#34;call-to-action&#34;&gt;
&lt;button class=&#34;btn update-notice&#34;
data-title=&#39;Update Notice&#39;
data-downloadhref=&#34;https://github.com/istio/istio/releases/tag/1.0.0&#34;
data-updateadvice=&#39;Before you download 1.0, you should know that there&amp;#39;s a newer patch release with the latest bug fixes and perf improvements.&#39;
data-updatebutton=&#39;LEARN ABOUT ISTIO 1.0.8&#39;
data-updatehref=&#34;/v1.1/about/notes/1.0.8&#34;&gt;
DOWNLOAD 1.0
&lt;/button&gt;
&lt;a class=&#34;btn&#34; href=&#34;https://archive.istio.io/v1.0&#34;&gt;1.0 DOCS&lt;/a&gt;
&lt;a class=&#34;btn&#34; href=&#34;/v1.1/about/notes/1.0/&#34;&gt;1.0 RELEASE NOTES&lt;/a&gt;
&lt;/div&gt;
&lt;h2 id=&#34;ecosystem&#34;&gt;Ecosystem&lt;/h2&gt;
&lt;p&gt;We&amp;rsquo;ve seen substantial growth in Istio&amp;rsquo;s ecosystem in the last year. &lt;a href=&#34;https://www.envoyproxy.io/&#34;&gt;Envoy&lt;/a&gt; continues its impressive growth and added numerous
features that are crucial for a production quality service mesh. Observability providers like &lt;a href=&#34;https://www.datadoghq.com/&#34;&gt;Datadog&lt;/a&gt;,
&lt;a href=&#34;https://www.solarwinds.com/&#34;&gt;SolarWinds&lt;/a&gt;, &lt;a href=&#34;https://sysdig.com/blog/monitor-istio/&#34;&gt;Sysdig&lt;/a&gt;, &lt;a href=&#34;https://cloud.google.com/stackdriver/&#34;&gt;Google Stackdriver&lt;/a&gt; and
&lt;a href=&#34;https://aws.amazon.com/cloudwatch/&#34;&gt;Amazon CloudWatch&lt;/a&gt; have written plugins to integrate Istio with their products.
&lt;a href=&#34;https://www.tigera.io/resources/using-network-policy-concert-istio-2/&#34;&gt;Tigera&lt;/a&gt;, &lt;a href=&#34;https://www.aporeto.com/&#34;&gt;Aporeto&lt;/a&gt;, &lt;a href=&#34;https://cilium.io/&#34;&gt;Cilium&lt;/a&gt;
and &lt;a href=&#34;https://styra.com/&#34;&gt;Styra&lt;/a&gt; built extensions to our policy enforcement and networking capabilities. &lt;a href=&#34;https://www.redhat.com/en&#34;&gt;Red Hat&lt;/a&gt; built &lt;a href=&#34;https://www.kiali.io&#34;&gt;Kiali&lt;/a&gt; to wrap a nice user-experience around mesh management and observability. &lt;a href=&#34;https://www.cloudfoundry.org/&#34;&gt;Cloud Foundry&lt;/a&gt; is building on Istio for its next generation traffic routing stack, the recently announced &lt;a href=&#34;https://github.com/knative/docs&#34;&gt;Knative&lt;/a&gt; serverless project is doing the same and &lt;a href=&#34;https://apigee.com/&#34;&gt;Apigee&lt;/a&gt; announced that they plan to use it in their API management solution. These are just some of the integrations the community has added in the last year.&lt;/p&gt;
&lt;h2 id=&#34;features&#34;&gt;Features&lt;/h2&gt;
&lt;p&gt;Since the 0.8 release we&amp;rsquo;ve added some important new features and, more importantly, marked many of our existing features as Beta, signaling that they&amp;rsquo;re ready for production use. This is captured in more detail in the &lt;a href=&#34;/v1.1/about/notes/1.0/&#34;&gt;release notes&lt;/a&gt;, but it&amp;rsquo;s worth calling out some highlights:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Multiple Kubernetes clusters can now be &lt;a href=&#34;/v1.1/docs/setup/kubernetes/install/multicluster/&#34;&gt;added to a single mesh&lt;/a&gt;, enabling cross-cluster communication and consistent policy enforcement. Multi-cluster support is now Beta.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Networking APIs that enable fine grained control over the flow of traffic through a mesh are now Beta. Explicitly modeling ingress and egress concerns using Gateways allows operators to &lt;a href=&#34;/v1.1/blog/2018/v1alpha3-routing/&#34;&gt;control the network topology&lt;/a&gt; and meet access security requirements at the edge.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Mutual TLS can now be &lt;a href=&#34;/v1.1/docs/tasks/security/mtls-migration&#34;&gt;rolled out incrementally&lt;/a&gt; without requiring all clients of a service to be updated. This is a critical feature that unblocks adoption in-place by existing production deployments.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Mixer now has support for &lt;a href=&#34;https://github.com/istio/istio/wiki/Out-Of-Process-gRPC-Adapter-Dev-Guide&#34;&gt;developing out-of-process adapters&lt;/a&gt;. This will become the default way to extend Mixer over the coming releases and makes building adapters much simpler.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href=&#34;/v1.1/docs/concepts/security/#authorization&#34;&gt;Authorization policies&lt;/a&gt;, which control access to services, are now entirely evaluated locally in Envoy, increasing their performance and reliability.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href=&#34;/v1.1/docs/setup/kubernetes/install/helm/&#34;&gt;Helm chart installation&lt;/a&gt; is now the recommended install method offering rich customization options to adopt Istio on your terms.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;We&amp;rsquo;ve put a lot of effort into performance, including continuous regression testing, large-scale environment simulation and targeted fixes. We&amp;rsquo;re very happy with the results and will share more on this in detail in the coming weeks.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;what-s-next&#34;&gt;What&amp;rsquo;s next?&lt;/h2&gt;
&lt;p&gt;While this is a significant milestone for the project, there&amp;rsquo;s lots more to do. In working with adopters we&amp;rsquo;ve gotten a lot of great feedback about what to focus on next. We&amp;rsquo;ve heard consistent themes around support for hybrid-cloud, install modularity, richer networking features and scalability for massive deployments. We&amp;rsquo;ve already taken some of this feedback into account in the 1.0 release and we&amp;rsquo;ll continue to aggressively tackle this work in the coming months.&lt;/p&gt;
&lt;h2 id=&#34;getting-started&#34;&gt;Getting Started&lt;/h2&gt;
&lt;p&gt;If you&amp;rsquo;re new to Istio and looking to use it for your deployment, we&amp;rsquo;d love to hear from you. Take a look at &lt;a href=&#34;/v1.1/docs/&#34;&gt;our docs&lt;/a&gt; or stop by our
&lt;a href=&#34;https://discuss.istio.io&#34;&gt;chat forum&lt;/a&gt;. If you&amp;rsquo;d like
to go deeper and &lt;a href=&#34;/v1.1/about/community&#34;&gt;contribute to the project&lt;/a&gt;, come to one of our community meetings and say hello.&lt;/p&gt;
&lt;h2 id=&#34;finally&#34;&gt;Finally&lt;/h2&gt;
&lt;p&gt;The Istio team would like to give huge thanks to everyone who has made a contribution to the project. It wouldn&amp;rsquo;t be where it is today without your help. The last year has been pretty amazing and we look forward to the next one with excitement about what we can achieve together as a community.&lt;/p&gt;</description><pubDate>Tue, 31 Jul 2018 00:00:00 +0000</pubDate><link>/v1.1/blog/2018/announcing-1.0/</link><author>The Istio Team</author><guid isPermaLink="true">/v1.1/blog/2018/announcing-1.0/</guid></item><item><title>Delayering Istio with AppSwitch</title><description>
&lt;div&gt;
&lt;aside class=&#34;callout quote&#34;&gt;
&lt;div class=&#34;type&#34;&gt;
&lt;svg class=&#34;large-icon&#34;&gt;&lt;use xlink:href=&#34;/v1.1/img/icons.svg#callout-quote&#34;/&gt;&lt;/svg&gt;
&lt;/div&gt;
&lt;div class=&#34;content&#34;&gt;All problems in computer science can be solved with another layer, except of course the problem of too many layers. &amp;ndash; David Wheeler&lt;/div&gt;
&lt;/aside&gt;
&lt;/div&gt;
&lt;p&gt;The sidecar proxy approach enables a lot of awesomeness. Squarely in the datapath between microservices, the sidecar can precisely tell what the application is trying to do. It can monitor and instrument protocol traffic, not in the bowels of the networking layers but at the application level, to enable deep visibility, access controls and traffic management.&lt;/p&gt;
&lt;p&gt;If we look closely however, there are many intermediate layers that the data has to pass through before the high-value analysis of application-traffic can be performed. Most of those layers are part of the base plumbing infrastructure that are there just to push the data along. In doing so, they add latency to communication and complexity to the overall system.&lt;/p&gt;
&lt;p&gt;Over the years, there has been much collective effort in implementing aggressive fine-grained optimizations within the layers of the network datapath. Each iteration may shave off another few microseconds. But the true necessity of those layers themselves has rarely been questioned.&lt;/p&gt;
&lt;h2 id=&#34;don-t-optimize-layers-remove-them&#34;&gt;Dont optimize layers, remove them&lt;/h2&gt;
&lt;p&gt;In my view, optimizing something is a poor substitute for removing the need for it altogether. That was the goal of my &lt;a href=&#34;https://apporbit.com/a-brief-history-of-containers-from-reality-to-hype/&#34;&gt;initial work&lt;/a&gt; on OS-level virtualization that led to Linux containers, which effectively &lt;a href=&#34;https://www.oreilly.com/ideas/the-unwelcome-guest-why-vms-arent-the-solution-for-next-gen-applications&#34;&gt;removed virtual machines&lt;/a&gt; by running applications directly on the host operating system without requiring an intermediate guest. For a long time the industry fought the wrong battle, distracted by optimizing VMs rather than removing the additional layer altogether.&lt;/p&gt;
&lt;p&gt;I see the same pattern repeat itself with the connectivity of microservices, and networking in general. The network has been going through the changes that physical servers went through a decade earlier. A new set of layers and constructs is being introduced. They are being baked deep into the protocol stack and even silicon without adequately considering low-touch alternatives. Perhaps there is a way to remove those additional layers altogether.&lt;/p&gt;
&lt;p&gt;I have been thinking about these problems for some time and believe that an approach similar in concept to containers can be applied to the network stack, one that would fundamentally simplify how application endpoints are connected across the complexity of many intermediate layers. I have reapplied the same principles from the original work on containers to create &lt;a href=&#34;http://appswitch.io&#34;&gt;AppSwitch&lt;/a&gt;. Similar to the way containers provide an interface that applications can directly consume, AppSwitch plugs directly into the well-defined and ubiquitous network API that applications currently use and directly connects application clients to the appropriate servers, skipping all intermediate layers. In the end, that&amp;rsquo;s what networking is all about.&lt;/p&gt;
&lt;p&gt;Before going into the details of how AppSwitch promises to remove unnecessary layers from the Istio stack, let me give a very brief introduction to its architecture. Further details are available at the &lt;a href=&#34;https://appswitch.readthedocs.io/en/latest/&#34;&gt;documentation&lt;/a&gt; page.&lt;/p&gt;
&lt;h2 id=&#34;appswitch&#34;&gt;AppSwitch&lt;/h2&gt;
&lt;p&gt;Not unlike the container runtime, AppSwitch consists of a client and a daemon that speak over HTTP via a REST API. Both the client and the daemon are built as one self-contained binary, &lt;code&gt;ax&lt;/code&gt;. The client transparently plugs into the application and tracks its system calls related to network connectivity and notifies the daemon about their occurrences. As an example, lets say an application makes the &lt;code&gt;connect(2)&lt;/code&gt; system call to the service IP of a Kubernetes service. The AppSwitch client intercepts the connect call, nullifies it and notifies the daemon about its occurrence along with some context that includes the system call arguments. The daemon would then handle the system call, potentially by directly connecting to the Pod IP of the upstream server on behalf of the application.&lt;/p&gt;
&lt;p&gt;It is important to note that no data is forwarded between AppSwitch client and daemon. They are designed to exchange file descriptors (FDs) over a Unix domain socket to avoid having to copy data. Note also that client is not a separate process. Rather it directly runs in the context of the application itself. There is no data copy between the application and AppSwitch client either.&lt;/p&gt;
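&lt;p&gt;As a rough illustration of this FD-passing scheme, here is a minimal Python sketch (not AppSwitch code) using the &lt;code&gt;SCM_RIGHTS&lt;/code&gt; helpers added in Python 3.9; the &amp;ldquo;daemon&amp;rdquo; and &amp;ldquo;client&amp;rdquo; endpoints and the payload are illustrative stand-ins:&lt;/p&gt;

```python
import socket

# Stand-in for a connection the daemon established on the app's behalf
# (a real deployment would hold a connected TCP socket here).
upstream, remote_peer = socket.socketpair()
remote_peer.sendall(b"hello from upstream")

# Control channel between "daemon" and "client" (a Unix domain socket).
daemon_end, client_end = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)

# Daemon side: pass the connected FD itself via SCM_RIGHTS; no payload
# bytes are copied between daemon and client.
socket.send_fds(daemon_end, [b"ctx"], [upstream.fileno()])

# Client side: receive the FD and hand it to the application as a socket.
msg, fds, flags, addr = socket.recv_fds(client_end, 1024, 1)
app_sock = socket.socket(fileno=fds[0])
data = app_sock.recv(1024)  # the app now reads the upstream data directly
```

&lt;p&gt;Only the descriptor crosses the daemon/client boundary; the payload stays in the kernel socket buffer until the application itself reads it.&lt;/p&gt;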
&lt;h2 id=&#34;delayering-the-stack&#34;&gt;Delayering the stack&lt;/h2&gt;
&lt;p&gt;Now that we have an idea about what AppSwitch does, lets look at the layers that it optimizes away from a standard service mesh.&lt;/p&gt;
&lt;h3 id=&#34;network-devirtualization&#34;&gt;Network Devirtualization&lt;/h3&gt;
&lt;p&gt;Kubernetes offers simple and well-defined network constructs to the microservice applications it runs. In order to support them however, it imposes specific &lt;a href=&#34;https://kubernetes.io/docs/concepts/cluster-administration/networking/&#34;&gt;requirements&lt;/a&gt; on the underlying network. Meeting those requirements is often not easy. The go-to solution of adding another layer is typically adopted to satisfy the requirements. In most cases the additional layer consists of a network overlay that sits between Kubernetes and underlying network. Traffic produced by the applications is encapsulated at the source and decapsulated at the target, which not only costs network resources but also takes up compute cores.&lt;/p&gt;
&lt;p&gt;Because AppSwitch arbitrates what the application sees through its touchpoints with the platform, it projects a consistent virtual view of the underlying network to the application similar to an overlay but without introducing an additional layer of processing along the datapath. Just to draw a parallel to containers, the inside of a container looks and feels like a VM. However the underlying implementation does not intervene along the high-incidence control paths of low-level interrupts etc.&lt;/p&gt;
&lt;p&gt;AppSwitch can be injected into a standard Kubernetes manifest (similar to Istio injection) such that the applications network is directly handled by AppSwitch bypassing any network overlay underneath. More details to follow in just a bit.&lt;/p&gt;
&lt;h3 id=&#34;artifacts-of-container-networking&#34;&gt;Artifacts of Container Networking&lt;/h3&gt;
&lt;p&gt;Extending network connectivity from host into the container has been a &lt;a href=&#34;https://kubernetes.io/blog/2016/01/why-kubernetes-doesnt-use-libnetwork/&#34;&gt;major challenge&lt;/a&gt;. New layers of network plumbing were invented explicitly for that purpose. As such, an application running in a container is simply a process on the host. However due to a &lt;a href=&#34;http://appswitch.io/blog/kubernetes_istio_and_network_function_devirtualization_with_appswitch/&#34;&gt;fundamental misalignment&lt;/a&gt; between the network abstraction expected by the application and the abstraction exposed by container network namespace, the process cannot directly access the host network. Applications think of networking in terms of sockets or sessions whereas network namespaces expose a device abstraction. Once placed in a network namespace, the process suddenly loses all connectivity. The notion of veth-pair and corresponding tooling were invented just to close that gap. The data would now have to go from a host interface into a virtual switch and then through a veth-pair to the virtual network interface of the container network namespace.&lt;/p&gt;
&lt;p&gt;AppSwitch can effectively remove both the virtual switch and veth-pair layers on both ends of the connection. Since the connections are established by the daemon running on the host using the network thats already available on the host, there is no need for additional plumbing to bridge host network into the container. The socket FDs created on the host are passed to the application running within the pods network namespace. By the time the application receives the FD, all control path work (security checks, connection establishment) is already done and the FD is ready for actual IO.&lt;/p&gt;
&lt;h3 id=&#34;skip-tcp-ip-for-colocated-endpoints&#34;&gt;Skip TCP/IP for colocated endpoints&lt;/h3&gt;
&lt;p&gt;TCP/IP is the universal protocol medium over which pretty much all communication occurs. But if application endpoints happen to be on the same host, is TCP/IP really required? After all, it does do quite a bit of work and it is quite complex. Unix sockets are explicitly designed for intrahost communication and AppSwitch can transparently switch the communication to occur over a Unix socket for colocated endpoints.&lt;/p&gt;
&lt;p&gt;For each listening socket of an application, AppSwitch maintains two listening sockets, one each for TCP and Unix. When a client tries to connect to a server that happens to be colocated, the AppSwitch daemon would choose to connect to the Unix listening socket of the server. The resulting Unix sockets on each end are passed into the respective applications. Once a fully connected FD is returned, the application simply treats it as a bit pipe. The protocol doesn't really matter. The application may occasionally make protocol-specific calls such as &lt;code&gt;getsockname(2)&lt;/code&gt;, and AppSwitch would handle them in kind, presenting consistent responses such that the application continues to run unaffected.&lt;/p&gt;
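&lt;p&gt;The dual-listener idea can be sketched in a few lines of Python (a toy model, not AppSwitch itself); the socket path and addresses are made up for the example:&lt;/p&gt;

```python
import os
import socket
import tempfile
import threading

sock_path = os.path.join(tempfile.mkdtemp(), "svc.sock")

# One logical service, two listening sockets: TCP for remote peers,
# Unix for colocated ones (port and path here are illustrative).
tcp_l = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp_l.bind(("127.0.0.1", 0))
tcp_l.listen()
unix_l = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
unix_l.bind(sock_path)
unix_l.listen()

def serve_once(listener):
    # Accept one connection and send the same payload either way.
    conn, _ = listener.accept()
    conn.sendall(b"hello")
    conn.close()

for l in (tcp_l, unix_l):
    threading.Thread(target=serve_once, args=(l,), daemon=True).start()

def dial(colocated):
    # The daemon's routing decision: colocated peers get the Unix socket.
    if colocated:
        c = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        c.connect(sock_path)
    else:
        c = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        c.connect(tcp_l.getsockname())
    return c

fast = dial(True).recv(16)   # skipped TCP/IP entirely
slow = dial(False).recv(16)  # same payload over the loopback TCP path
```

&lt;p&gt;Either way the application sees the same bit pipe; only the transport underneath differs.&lt;/p&gt;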
&lt;h3 id=&#34;data-pushing-proxy&#34;&gt;Data Pushing Proxy&lt;/h3&gt;
&lt;p&gt;As we continue to look for layers to remove, let us also reconsider the requirement of the proxy layer itself. There are times when the role of the proxy may degenerate into a plain data pusher:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;There may not be a need for any protocol decoding&lt;/li&gt;
&lt;li&gt;The protocol may not be recognized by the proxy&lt;/li&gt;
&lt;li&gt;The communication may be encrypted and the proxy cannot access relevant headers&lt;/li&gt;
&lt;li&gt;The application (redis, memcached etc.) may be too latency-sensitive and cannot afford the cost of an intermediate proxy&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;In all these cases, the proxy is no different from any low-level plumbing layer. In fact, the latency introduced can be far higher because the same level of optimization is not available to a proxy.&lt;/p&gt;
&lt;p&gt;To illustrate this with an example, consider the application shown below. It consists of a Python app and a set of memcached servers behind it. An upstream memcached server is selected by routing performed at connection time. Speed is the primary concern here.&lt;/p&gt;
&lt;figure style=&#34;width:75%&#34;&gt;
&lt;div class=&#34;wrapper-with-intrinsic-ratio&#34; style=&#34;padding-bottom:38.63965267727931%&#34;&gt;
&lt;a data-skipendnotes=&#34;true&#34; href=&#34;/v1.1/blog/2018/delayering-istio/memcached.png&#34; title=&#34;Latency-sensitive application scenario&#34;&gt;
&lt;img class=&#34;element-to-stretch&#34; src=&#34;/v1.1/blog/2018/delayering-istio/memcached.png&#34; alt=&#34;Proxyless datapath&#34; /&gt;
&lt;/a&gt;
&lt;/div&gt;
&lt;figcaption&gt;Latency-sensitive application scenario&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p&gt;If we look at the data flow in this setup, the Python app makes a connection to the service IP of memcached. It is redirected to the client-side sidecar. The sidecar routes the connection to one of the memcached servers and copies the data between the two sockets &amp;ndash; one connected to the app and another connected to memcached. And the same also occurs on the server side between the server-side sidecar and memcached. The role of proxy at that point is just boring shoveling of bits between the two sockets. However, it ends up adding substantial latency to the end-to-end connection.&lt;/p&gt;
&lt;p&gt;Now let us imagine that the app is somehow made to connect directly to memcached, then the two intermediate proxies could be skipped. The data would flow directly between the app and memcached without any intermediate hops. AppSwitch can arrange for that by transparently tweaking the target address passed by the Python app when it makes the &lt;code&gt;connect(2)&lt;/code&gt; system call.&lt;/p&gt;
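&lt;p&gt;A user-space analogue of that address rewrite can be sketched by wrapping &lt;code&gt;connect&lt;/code&gt; in Python. AppSwitch does this at the system-call level rather than by patching a library, and the service IP and routing table below are purely illustrative:&lt;/p&gt;

```python
import socket
import threading

# A real backend standing in for one memcached replica.
backend = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
backend.bind(("127.0.0.1", 0))
backend.listen()

def serve():
    conn, _ = backend.accept()
    conn.sendall(b"STORED")
    conn.close()

threading.Thread(target=serve, daemon=True).start()

SERVICE_ADDR = ("10.96.0.10", 11211)  # virtual service IP the app dials
ROUTES = {SERVICE_ADDR: backend.getsockname()}

_real_connect = socket.socket.connect
def routed_connect(self, addr):
    # Swap the target before the kernel ever sees it.
    return _real_connect(self, ROUTES.get(addr, addr))
socket.socket.connect = routed_connect

app = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
app.connect(SERVICE_ADDR)  # the app thinks it dialed the service IP...
reply = app.recv(16)       # ...but talks directly to the backend
socket.socket.connect = _real_connect
```

&lt;p&gt;No proxy sits on the data path; only the destination of the connection request was rewritten.&lt;/p&gt;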
&lt;h3 id=&#34;proxyless-protocol-decoding&#34;&gt;Proxyless Protocol Decoding&lt;/h3&gt;
&lt;p&gt;Things are going to get a bit strange here. We have seen that the proxy can be bypassed for cases that dont involve looking into application traffic. But is there anything we can do even for those other cases? It turns out, yes.&lt;/p&gt;
&lt;p&gt;In a typical communication between microservices, much of the interesting information is exchanged in the initial headers. Headers are followed by the body or payload, which typically represents the bulk of the communication. Once again, the proxy degenerates into a data pusher for this part of the communication. AppSwitch provides a nifty mechanism to skip the proxy for these cases.&lt;/p&gt;
&lt;p&gt;Even though AppSwitch is not a proxy, it &lt;em&gt;does&lt;/em&gt; arbitrate connections between application endpoints and it &lt;em&gt;does&lt;/em&gt; have access to corresponding socket FDs. Normally, AppSwitch simply passes those FDs to the application. But it can also peek into the initial message received on the connection using the &lt;code&gt;MSG_PEEK&lt;/code&gt; option of the &lt;code&gt;recvfrom(2)&lt;/code&gt; system call on the socket. It allows AppSwitch to examine application traffic without actually removing it from the socket buffers. When AppSwitch returns the FD to the application and steps out of the datapath, the application would do an actual read on the connection. AppSwitch uses this technique to perform deeper analysis of application-level traffic and implement sophisticated network functions as discussed in the next section, all without getting into the datapath.&lt;/p&gt;
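&lt;p&gt;The &lt;code&gt;MSG_PEEK&lt;/code&gt; trick itself is easy to demonstrate in isolation; this small Python sketch (not AppSwitch code) peeks at the first bytes of a connection without consuming them:&lt;/p&gt;

```python
import socket

client, server_side = socket.socketpair()
client.sendall(b"GET /stats HTTP/1.1\r\n\r\n")

# "AppSwitch" inspects the initial bytes but leaves them in the
# socket buffer, thanks to MSG_PEEK.
peeked = server_side.recv(64, socket.MSG_PEEK)
is_http = peeked.startswith(b"GET ")

# ...so the application still reads the full, untouched stream.
consumed = server_side.recv(64)
```

&lt;p&gt;Because the peeked bytes remain in the buffer, the interposer can classify the protocol and then step entirely out of the datapath.&lt;/p&gt;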
&lt;h3 id=&#34;zero-cost-load-balancer-firewall-and-network-analyzer&#34;&gt;Zero-Cost Load Balancer, Firewall and Network Analyzer&lt;/h3&gt;
&lt;p&gt;Typical implementations of network functions such as load balancers and firewalls require an intermediate layer that needs to tap into data/packet stream. Kubernetes&amp;rsquo; implementation of load balancer (&lt;code&gt;kube-proxy&lt;/code&gt;) for example introduces a probe into the packet stream through iptables and Istio implements the same at the proxy layer. But if all that is required is to redirect or drop connections based on policy, it is not really necessary to stay in the datapath during the entire course of the connection. AppSwitch can take care of that much more efficiently by simply manipulating the control path at the API level. Given its intimate proximity to the application, AppSwitch also has easy access to various pieces of application level metrics such as dynamics of stack and heap usage, precisely when a service comes alive, attributes of active connections etc., all of which could potentially form a rich signal for monitoring and analytics.&lt;/p&gt;
&lt;p&gt;To go a step further, AppSwitch can also perform L7 load balancing and firewall functions based on the protocol data that it obtains from the socket buffers. It can synthesize the protocol data and various other signals with the policy information acquired from Pilot to implement a highly efficient form of routing and access control enforcement. It can essentially &amp;ldquo;influence&amp;rdquo; the application to connect to the right backend server without requiring any changes to the application or its configuration. It is as if the application itself is infused with policy and traffic-management intelligence. Except in this case, the application can&amp;rsquo;t escape the influence.&lt;/p&gt;
&lt;p&gt;There is some more black-magic possible that would actually allow modifying the application data stream without getting into the datapath but I am going to save that for a later post. Current implementation of AppSwitch uses a proxy if the use case requires application protocol traffic to be modified. For those cases, AppSwitch provides a highly optimal mechanism to attract traffic to the proxy as discussed in the next section.&lt;/p&gt;
&lt;h3 id=&#34;traffic-redirection&#34;&gt;Traffic Redirection&lt;/h3&gt;
&lt;p&gt;Before the sidecar proxy can look into application protocol traffic, it first needs to receive the connections. Redirection of connections coming into and going out of the application is currently done by a layer of packet filtering that rewrites packets such that they go to the respective sidecars. Creating the potentially large number of rules required to represent the redirection policy is tedious. And the process of applying and updating the rules, as the target subnets to be captured by the sidecar change, is expensive.&lt;/p&gt;
&lt;p&gt;While some of the performance concerns are being addressed by the Linux community, there is another concern related to privilege: iptables rules need to be updated whenever the policy changes. Given the current architecture, all privileged operations are performed in an init container that runs just once at the very beginning before privileges are dropped for the actual application. Since updating iptables rules requires root privileges, there is no way to do that without restarting the application.&lt;/p&gt;
&lt;p&gt;AppSwitch provides a way to redirect application connections without root privilege. As such, an unprivileged application is already able to connect to any host (modulo firewall rules etc.) and the owner of the application should be allowed to change the host address passed by its application via &lt;code&gt;connect(2)&lt;/code&gt; without requiring additional privilege.&lt;/p&gt;
&lt;h4 id=&#34;socket-delegation&#34;&gt;Socket Delegation&lt;/h4&gt;
&lt;p&gt;Let&amp;rsquo;s see how AppSwitch could help redirect connections without using iptables. Imagine that the application somehow voluntarily passes the socket FDs that it uses for its communication to the sidecar, then there would be no need for iptables. AppSwitch provides a feature called &lt;em&gt;socket delegation&lt;/em&gt; that does exactly that. It allows the sidecar to transparently gain access to copies of socket FDs that the application uses for its communication without any changes to the application itself.&lt;/p&gt;
&lt;p&gt;Here is the sequence of steps that would achieve this in the context of the Python application example.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;The application initiates a connection request to the service IP of memcached service.&lt;/li&gt;
&lt;li&gt;The connection request from the client is forwarded to the daemon.&lt;/li&gt;
&lt;li&gt;The daemon creates a pair of pre-connected Unix sockets (using &lt;code&gt;socketpair(2)&lt;/code&gt; system call).&lt;/li&gt;
&lt;li&gt;It passes one end of the socket pair into the application such that the application would use that socket FD for read/write. It also ensures that the application consistently sees it as a legitimate TCP socket as it expects by interposing all calls that query connection properties.&lt;/li&gt;
&lt;li&gt;The other end is passed to sidecar over a different Unix socket where the daemon exposes its API. Information such as the original destination that the application was connecting to is also conveyed over the same interface.&lt;/li&gt;
&lt;/ol&gt;
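&lt;p&gt;A toy rendition of these steps in Python may help make them concrete. The &amp;ldquo;daemon&amp;rdquo; splices the application and a stand-in &amp;ldquo;sidecar&amp;rdquo; together with &lt;code&gt;socketpair(2)&lt;/code&gt;; the service address and reply are illustrative:&lt;/p&gt;

```python
import socket
import threading

ORIGINAL_DST = ("10.96.0.10", 11211)  # service IP the app dialed
seen = {}

# Step 3: the daemon creates a pre-connected pair with socketpair(2).
app_end, sidecar_end = socket.socketpair()

# Step 5: the sidecar gets the other end plus metadata such as the
# original destination, then proxies as usual (a canned reply here).
def sidecar(sock, original_dst):
    seen["dst"] = original_dst
    seen["request"] = sock.recv(64)
    sock.sendall(b"reply-via-sidecar")
    sock.close()

threading.Thread(target=sidecar, args=(sidecar_end, ORIGINAL_DST),
                 daemon=True).start()

# Step 4: the app treats its end as the TCP socket it asked for.
app_end.sendall(b"get foo")
reply = app_end.recv(64)
```

&lt;p&gt;In the real system, step 4 additionally requires interposing calls like &lt;code&gt;getsockname(2)&lt;/code&gt; so the Unix socket consistently looks like the TCP socket the application expects.&lt;/p&gt;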
&lt;figure style=&#34;width:50%&#34;&gt;
&lt;div class=&#34;wrapper-with-intrinsic-ratio&#34; style=&#34;padding-bottom:22.442748091603054%&#34;&gt;
&lt;a data-skipendnotes=&#34;true&#34; href=&#34;/v1.1/blog/2018/delayering-istio/socket-delegation.png&#34; title=&#34;Socket delegation based connection redirection&#34;&gt;
&lt;img class=&#34;element-to-stretch&#34; src=&#34;/v1.1/blog/2018/delayering-istio/socket-delegation.png&#34; alt=&#34;Socket delegation protocol&#34; /&gt;
&lt;/a&gt;
&lt;/div&gt;
&lt;figcaption&gt;Socket delegation based connection redirection&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p&gt;Once the application and sidecar are connected, the rest happens as usual. The sidecar would initiate a connection to the upstream server and proxy data between the socket received from the daemon and the socket connected to the upstream server. The main difference here is that the sidecar gets the connection not through the &lt;code&gt;accept(2)&lt;/code&gt; system call, as in the normal case, but from the daemon over the Unix socket. In addition to listening for connections from applications through the normal &lt;code&gt;accept(2)&lt;/code&gt; channel, the sidecar proxy would connect to the AppSwitch daemon's REST endpoint and receive sockets that way.&lt;/p&gt;
&lt;p&gt;For completeness, here is the sequence of steps that would occur on the server side:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;The application receives a connection&lt;/li&gt;
&lt;li&gt;AppSwitch daemon accepts the connection on behalf of the application&lt;/li&gt;
&lt;li&gt;It creates a pair of pre-connected Unix sockets using &lt;code&gt;socketpair(2)&lt;/code&gt; system call&lt;/li&gt;
&lt;li&gt;One end of the socket pair is returned to the application through the &lt;code&gt;accept(2)&lt;/code&gt; system call&lt;/li&gt;
&lt;li&gt;The other end of the socket pair along with the socket originally accepted by the daemon on behalf of the application is sent to sidecar&lt;/li&gt;
&lt;li&gt;Sidecar would extract the two socket FDs &amp;ndash; a Unix socket FD connected to the application and a TCP socket FD connected to the remote client&lt;/li&gt;
&lt;li&gt;Sidecar would read the metadata supplied by the daemon about the remote client and perform its usual operations&lt;/li&gt;
&lt;/ol&gt;
&lt;h4 id=&#34;sidecar-aware-applications&#34;&gt;&amp;ldquo;Sidecar-Aware&amp;rdquo; Applications&lt;/h4&gt;
&lt;p&gt;The socket delegation feature can be very useful for applications that are explicitly aware of the sidecar and wish to take advantage of its features. They can voluntarily delegate their network interactions by passing their sockets to the sidecar using the same feature. In a way, AppSwitch transparently turns every application into a sidecar-aware application.&lt;/p&gt;
&lt;h2 id=&#34;how-does-it-all-come-together&#34;&gt;How does it all come together?&lt;/h2&gt;
&lt;p&gt;Just to step back, Istio offloads common connectivity concerns from applications to a sidecar proxy that performs those functions on behalf of the application. And AppSwitch simplifies and optimizes the service mesh by sidestepping intermediate layers and invoking the proxy only for cases where it is truly necessary.&lt;/p&gt;
&lt;p&gt;In the rest of this section, I outline how AppSwitch may be integrated with Istio based on a very cursory initial implementation. This is not intended to be anything like a design doc &amp;ndash; not every possible way of integration is explored and not every detail is worked out. The intent is to discuss high-level aspects of the implementation to present a rough idea of how the two systems may come together. The key is that AppSwitch would act as a cushion between Istio and a real proxy. It would serve as the &amp;ldquo;fast-path&amp;rdquo; for cases that can be performed more efficiently without invoking the sidecar proxy. And for the cases where the proxy is used, it would shorten the datapath by cutting through unnecessary layers. Look at this &lt;a href=&#34;http://appswitch.io/blog/kubernetes_istio_and_network_function_devirtualization_with_appswitch/&#34;&gt;blog&lt;/a&gt; for a more detailed walk through of the integration.&lt;/p&gt;
&lt;h3 id=&#34;appswitch-client-injection&#34;&gt;AppSwitch Client Injection&lt;/h3&gt;
&lt;p&gt;Similar to the Istio sidecar-injector, a simple tool called &lt;code&gt;ax-injector&lt;/code&gt; injects the AppSwitch client into a standard Kubernetes manifest. The injected client transparently monitors the application and notifies the AppSwitch daemon of the control-path network API events that the application produces.&lt;/p&gt;
&lt;p&gt;It is possible to avoid the injection and work with standard Kubernetes manifests if the AppSwitch CNI plugin is used. In that case, the CNI plugin would perform the necessary injection when it gets the initialization callback. Using the injector does have some advantages, however: (1) it works in tightly-controlled environments like GKE, (2) it can be easily extended to support other frameworks such as Mesos, and (3) the same cluster can run standard applications alongside &amp;ldquo;AppSwitch-enabled&amp;rdquo; applications.&lt;/p&gt;
&lt;h3 id=&#34;appswitch-daemonset&#34;&gt;AppSwitch &lt;code&gt;DaemonSet&lt;/code&gt;&lt;/h3&gt;
&lt;p&gt;AppSwitch daemon can be configured to run as a &lt;code&gt;DaemonSet&lt;/code&gt; or as an extension to the application that is directly injected into application manifest. In either case it handles network events coming in from the applications that it supports.&lt;/p&gt;
&lt;h3 id=&#34;agent-for-policy-acquisition&#34;&gt;Agent for policy acquisition&lt;/h3&gt;
&lt;p&gt;This is the component that conveys policy and configuration dictated by Istio to AppSwitch. It implements the xDS API to listen to Pilot and calls the appropriate AppSwitch APIs to program the daemon. For example, it allows the load balancing strategy, as specified by &lt;code&gt;istioctl&lt;/code&gt;, to be translated into the equivalent AppSwitch capability.&lt;/p&gt;
&lt;h3 id=&#34;platform-adapter-for-appswitch-auto-curated-service-registry&#34;&gt;Platform Adapter for AppSwitch &amp;ldquo;Auto-Curated&amp;rdquo; Service Registry&lt;/h3&gt;
&lt;p&gt;Given that AppSwitch is in the control path of applications network APIs, it has ready access to the topology of services across the cluster. AppSwitch exposes that information in the form of a service registry that is automatically and (almost) synchronously updated as applications and their services come and go. A new platform adapter for AppSwitch alongside Kubernetes, Eureka etc. would provide the details of upstream services to Istio. This is not strictly necessary but it does make it easier to correlate service endpoints received from Pilot by AppSwitch agent above.&lt;/p&gt;
&lt;h3 id=&#34;proxy-integration-and-chaining&#34;&gt;Proxy integration and chaining&lt;/h3&gt;
&lt;p&gt;Connections that do require deep scanning and mutation of application traffic are handed off to an external proxy through the socket delegation mechanism discussed earlier. It uses an extended version of &lt;a href=&#34;https://www.haproxy.org/download/1.8/doc/proxy-protocol.txt&#34;&gt;proxy protocol&lt;/a&gt;. In addition to the simple parameters supported by the proxy protocol, a variety of other metadata (including the initial protocol headers obtained from the socket buffers) and live socket FDs (representing application connections) are forwarded to the proxy.&lt;/p&gt;
&lt;p&gt;The proxy can look at the metadata and decide how to proceed. It could respond by accepting the connection to do the proxying or by directing AppSwitch to allow the connection and use the fast-path or to just drop the connection.&lt;/p&gt;
&lt;p&gt;One of the interesting aspects of the mechanism is that, when the proxy accepts a socket from AppSwitch, it can in turn delegate the socket to another proxy. In fact that is how AppSwitch currently works. It uses a simple built-in proxy to examine the metadata and decide whether to handle the connection internally or to hand it off to an external proxy (Envoy). The same mechanism can be potentially extended to allow for a chain of plugins, each looking for a specific signature, with the last one in the chain doing the real proxy work.&lt;/p&gt;
&lt;h2 id=&#34;it-s-not-just-about-performance&#34;&gt;It&amp;rsquo;s not just about performance&lt;/h2&gt;
&lt;p&gt;Removing intermediate layers along the datapath is not just about improving performance. Performance is a great side effect, but it &lt;em&gt;is&lt;/em&gt; a side effect. There are a number of important advantages to an API level approach.&lt;/p&gt;
&lt;h3 id=&#34;automatic-application-onboarding-and-policy-authoring&#34;&gt;Automatic application onboarding and policy authoring&lt;/h3&gt;
&lt;p&gt;Before microservices and service mesh, traffic management was done by load balancers and access controls were enforced by firewalls. Applications were identified by IP addresses and DNS names, which were relatively static. In fact, that&amp;rsquo;s still the status quo in most environments. Such environments stand to benefit immensely from service mesh. However, a practical and scalable bridge to the new world needs to be provided. The difficulty of the transformation is due not so much to a lack of features and functionality as to the investment required to rethink and reimplement the entire application infrastructure. Currently, most of the policy and configuration exists in the form of load balancer and firewall rules. Somehow, that existing context needs to be leveraged to provide a scalable path to adopting the service mesh model.&lt;/p&gt;
&lt;p&gt;AppSwitch can substantially ease the onboarding process. It can project the same network environment to the application at the target as its current source environment. Not having any assistance here is typically a non-starter in case of traditional applications which have complex configuration files with static IP addresses or specific DNS names hard-coded in them. AppSwitch could help capture those applications along with their existing configuration and connect them over a service mesh without requiring any changes.&lt;/p&gt;
&lt;h3 id=&#34;broader-application-and-protocol-support&#34;&gt;Broader application and protocol support&lt;/h3&gt;
&lt;p&gt;HTTP clearly dominates the modern application landscape, but once we look at traditional applications and environments, we encounter all kinds of protocols and transports. In particular, support for UDP becomes unavoidable. Traditional application servers such as IBM WebSphere rely extensively on UDP. Most multimedia applications use UDP media streams. Of course, DNS is probably the most widely used UDP &amp;ldquo;application&amp;rdquo;. AppSwitch supports UDP at the API level much the same way as TCP, and when it detects a UDP connection, it can transparently handle it in its &amp;ldquo;fast-path&amp;rdquo; rather than delegating it to the proxy.&lt;/p&gt;
&lt;h3 id=&#34;client-ip-preservation-and-end-to-end-principle&#34;&gt;Client IP preservation and end-to-end principle&lt;/h3&gt;
&lt;p&gt;The same mechanism that preserves the source network environment can also preserve client IP addresses as seen by the servers. With a sidecar proxy in place, connection requests come from the proxy rather than the client. As a result, the peer address (IP:port) of the connection as seen by the server would be that of the proxy rather than the client. AppSwitch ensures that the server sees correct address of the client, logs it correctly and any decisions made based on the client address remain valid. More generally, AppSwitch preserves the &lt;a href=&#34;https://en.wikipedia.org/wiki/End-to-end_principle&#34;&gt;end-to-end principle&lt;/a&gt; which is otherwise broken by intermediate layers that obfuscate the true underlying context.&lt;/p&gt;
&lt;h3 id=&#34;enhanced-application-signal-with-access-to-encrypted-headers&#34;&gt;Enhanced application signal with access to encrypted headers&lt;/h3&gt;
&lt;p&gt;Encrypted traffic completely undermines the ability of the service mesh to analyze application traffic. API level interposition could potentially offer a way around it. The current implementation of AppSwitch gains access to the application&amp;rsquo;s network API at the system call level. However, it is possible in principle to interpose on the application at an API boundary higher in the stack, where application data is not yet encrypted or already decrypted. Ultimately the data is always produced in the clear by the application and then encrypted at some point before it goes out. Since AppSwitch directly runs within the memory context of the application, it is possible to tap into the data higher on the stack where it is still held in the clear. The only requirement for this to work is that the API used for encryption be well-defined and amenable to interposition. In particular, it requires access to the symbol table of the application binaries. Just to be clear, AppSwitch doesn&amp;rsquo;t implement this today.&lt;/p&gt;
&lt;h2 id=&#34;so-what-s-the-net&#34;&gt;So what&amp;rsquo;s the net?&lt;/h2&gt;
&lt;p&gt;AppSwitch removes a number of layers and processing from the standard service mesh stack. What does all that translate to in terms of performance?&lt;/p&gt;
&lt;p&gt;We ran some initial experiments to characterize the extent of the opportunity for optimization based on the initial integration of AppSwitch discussed earlier. The experiments were run on GKE using &lt;code&gt;fortio-0.11.0&lt;/code&gt;, &lt;code&gt;istio-0.8.0&lt;/code&gt; and &lt;code&gt;appswitch-0.4.0-2&lt;/code&gt;. In the case of the proxyless test, the AppSwitch daemon was run as a &lt;code&gt;DaemonSet&lt;/code&gt; on the Kubernetes cluster and the Fortio pod spec was modified to inject the AppSwitch client. These were the only two changes made to the setup. The test was configured to measure the latency of gRPC requests across 100 concurrent connections.&lt;/p&gt;
&lt;figure style=&#34;width:100%&#34;&gt;
&lt;div class=&#34;wrapper-with-intrinsic-ratio&#34; style=&#34;padding-bottom:54.66034755134282%&#34;&gt;
&lt;a data-skipendnotes=&#34;true&#34; href=&#34;/v1.1/blog/2018/delayering-istio/perf.png&#34; title=&#34;Latency with and without AppSwitch&#34;&gt;
&lt;img class=&#34;element-to-stretch&#34; src=&#34;/v1.1/blog/2018/delayering-istio/perf.png&#34; alt=&#34;Performance comparison&#34; /&gt;
&lt;/a&gt;
&lt;/div&gt;
&lt;figcaption&gt;Latency with and without AppSwitch&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p&gt;Initial results indicate a difference of over 18x in p50 latency with and without AppSwitch (3.99ms vs 72.96ms). The difference was around 8x when Mixer and access logs were disabled. Clearly, the difference was due to sidestepping all those intermediate layers along the datapath. The Unix socket optimization wasn&amp;rsquo;t triggered in the case of AppSwitch because the client and server pods were scheduled to separate hosts. The end-to-end latency of the AppSwitch case would have been even lower if the client and server happened to be colocated. Essentially, the client and server running in their respective pods of the Kubernetes cluster are directly connected over a TCP socket going over the GKE network &amp;ndash; no tunneling, bridge or proxies.&lt;/p&gt;
&lt;h2 id=&#34;net-net&#34;&gt;Net Net&lt;/h2&gt;
&lt;p&gt;I started out with David Wheeler&amp;rsquo;s seemingly reasonable quote that says adding another layer is not a solution for the problem of too many layers. And I argued through most of the blog that the current network stack already has too many layers and that they should be removed. But isn&amp;rsquo;t AppSwitch itself a layer?&lt;/p&gt;
&lt;p&gt;Yes, AppSwitch is clearly another layer. However, it is one that can remove multiple other layers. In doing so, it seamlessly glues the new service mesh layer to the existing layers of traditional network environments. It offsets the cost of the sidecar proxy and, as Istio graduates to 1.0, provides a bridge for existing applications and their network environments to transition to the new world of service mesh.&lt;/p&gt;
&lt;p&gt;Perhaps Wheeler&amp;rsquo;s quote should read:&lt;/p&gt;
&lt;div&gt;
&lt;aside class=&#34;callout quote&#34;&gt;
&lt;div class=&#34;type&#34;&gt;
&lt;svg class=&#34;large-icon&#34;&gt;&lt;use xlink:href=&#34;/v1.1/img/icons.svg#callout-quote&#34;/&gt;&lt;/svg&gt;
&lt;/div&gt;
&lt;div class=&#34;content&#34;&gt;All problems in computer science can be solved with another layer, &lt;strong&gt;even&lt;/strong&gt; the problem of too many layers!&lt;/div&gt;
&lt;/aside&gt;
&lt;/div&gt;
&lt;h2 id=&#34;acknowledgements&#34;&gt;Acknowledgements&lt;/h2&gt;
&lt;p&gt;Thanks to Mandar Jog (Google) for several discussions about the value of AppSwitch for Istio and to the following individuals (in alphabetical order) for their review of early drafts of this blog.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Frank Budinsky (IBM)&lt;/li&gt;
&lt;li&gt;Lin Sun (IBM)&lt;/li&gt;
&lt;li&gt;Shriram Rajagopalan (VMware)&lt;/li&gt;
&lt;/ul&gt;</description><pubDate>Mon, 30 Jul 2018 00:00:00 +0000</pubDate><link>/v1.1/blog/2018/delayering-istio/</link><author>Dinesh Subhraveti (AppOrbit and Columbia University)</author><guid isPermaLink="true">/v1.1/blog/2018/delayering-istio/</guid><category>appswitch</category><category>performance</category></item><item><title>Micro-Segmentation with Istio Authorization</title><description>
&lt;p&gt;Micro-segmentation is a security technique that creates secure zones in cloud deployments and allows organizations to
isolate workloads from one another and secure them individually.
&lt;a href=&#34;/v1.1/docs/concepts/security/#authorization&#34;&gt;Istio&amp;rsquo;s authorization feature&lt;/a&gt;, also known as Istio Role Based Access Control,
provides micro-segmentation for services in an Istio mesh. It features:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Authorization at different levels of granularity, including namespace level, service level, and method level.&lt;/li&gt;
&lt;li&gt;Service-to-service and end-user-to-service authorization.&lt;/li&gt;
&lt;li&gt;High performance, as it is enforced natively on Envoy.&lt;/li&gt;
&lt;li&gt;Role-based semantics, which makes it easy to use.&lt;/li&gt;
&lt;li&gt;High flexibility as it allows users to define conditions using
&lt;a href=&#34;/v1.1/docs/reference/config/authorization/constraints-and-properties/&#34;&gt;combinations of attributes&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;In this blog post, you&amp;rsquo;ll learn about the main authorization features and how to use them in different situations.&lt;/p&gt;
&lt;h2 id=&#34;characteristics&#34;&gt;Characteristics&lt;/h2&gt;
&lt;h3 id=&#34;rpc-level-authorization&#34;&gt;RPC level authorization&lt;/h3&gt;
&lt;p&gt;Authorization is performed at the level of individual RPCs. Specifically, it controls &amp;ldquo;who can access my &lt;code&gt;bookstore&lt;/code&gt; service&amp;rdquo;,
or &amp;ldquo;who can access method &lt;code&gt;getBook&lt;/code&gt; in my &lt;code&gt;bookstore&lt;/code&gt; service&amp;rdquo;. It is not designed to control access to application-specific
resource instances, like access to &amp;ldquo;storage bucket X&amp;rdquo; or access to &amp;ldquo;3rd book on 2nd shelf&amp;rdquo;. Today, this kind of application-specific
access control logic needs to be handled by the application itself.&lt;/p&gt;
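&lt;p&gt;As a concrete sketch of what such an RPC-level rule looks like, the hypothetical &lt;code&gt;ServiceRole&lt;/code&gt; below grants access to only a &lt;code&gt;getBook&lt;/code&gt; method of a &lt;code&gt;bookstore&lt;/code&gt; service; the service, namespace and method names here are illustrative, not taken from a real deployment:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-yaml&#39; data-expandlinks=&#39;true&#39; &gt;apiVersion: &amp;#34;rbac.istio.io/v1alpha1&amp;#34;
kind: ServiceRole
metadata:
  name: book-getter
  namespace: default
spec:
  rules:
  # gRPC methods are invoked as HTTP POST requests on /&amp;lt;package&amp;gt;.&amp;lt;Service&amp;gt;/&amp;lt;Method&amp;gt; paths,
  # so a suffix wildcard on the path selects a single method.
  - services: [&amp;#34;bookstore.default.svc.cluster.local&amp;#34;]
    paths: [&amp;#34;*/getBook&amp;#34;]
    methods: [&amp;#34;POST&amp;#34;]
&lt;/code&gt;&lt;/pre&gt;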
&lt;h3 id=&#34;role-based-access-control-with-conditions&#34;&gt;Role-based access control with conditions&lt;/h3&gt;
&lt;p&gt;Authorization is a &lt;a href=&#34;https://en.wikipedia.org/wiki/Role-based_access_control&#34;&gt;role-based access control (RBAC)&lt;/a&gt; system,
contrast this to an &lt;a href=&#34;https://en.wikipedia.org/wiki/Attribute-based_access_control&#34;&gt;attribute-based access control (ABAC)&lt;/a&gt;
system. Compared to ABAC, RBAC has the following advantages:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Roles allow grouping of attributes.&lt;/strong&gt; Roles are groups of permissions, which specify the actions you are allowed
to perform on a system. Users are grouped based on the roles within an organization. You can define the roles and reuse
them for different cases.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;It is easier to understand and reason about who has access.&lt;/strong&gt; The RBAC concepts map naturally to business concepts.
For example, a DB admin may have all access to DB backend services, while a web client may only be able to view the
frontend service.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;It reduces unintentional errors.&lt;/strong&gt; RBAC policies make otherwise complex security changes easier. You won&amp;rsquo;t have
duplicate configurations in multiple places and later forget to update some of them when you need to make changes.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
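&lt;p&gt;To illustrate the reuse point, a single role definition can be bound to multiple subjects at once, so the permission set is written once and shared. A minimal hypothetical sketch follows; the role and service account names are made up for illustration:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-yaml&#39; data-expandlinks=&#39;true&#39; &gt;apiVersion: &amp;#34;rbac.istio.io/v1alpha1&amp;#34;
kind: ServiceRoleBinding
metadata:
  name: db-admins
  namespace: default
spec:
  subjects:
  # Both identities reuse the same role definition
  - user: &amp;#34;cluster.local/ns/default/sa/db-operator&amp;#34;
  - user: &amp;#34;cluster.local/ns/default/sa/db-backup&amp;#34;
  roleRef:
    kind: ServiceRole
    name: &amp;#34;db-full-access&amp;#34;
&lt;/code&gt;&lt;/pre&gt;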
&lt;p&gt;On the other hand, Istio&amp;rsquo;s authorization system is not a traditional RBAC system. It also allows users to define &lt;strong&gt;conditions&lt;/strong&gt; using
&lt;a href=&#34;/v1.1/docs/reference/config/authorization/constraints-and-properties/&#34;&gt;combinations of attributes&lt;/a&gt;. This gives Istio
flexibility to express complex access control policies. In fact, &lt;strong&gt;the &amp;ldquo;RBAC + conditions&amp;rdquo; model
that Istio authorization adopts, has all the benefits an RBAC system has, and supports the level of flexibility that
normally an ABAC system provides.&lt;/strong&gt; You&amp;rsquo;ll see some &lt;a href=&#34;#examples&#34;&gt;examples&lt;/a&gt; below.&lt;/p&gt;
&lt;h3 id=&#34;high-performance&#34;&gt;High performance&lt;/h3&gt;
&lt;p&gt;Because of its simple semantics, Istio authorization is enforced natively on Envoy. At runtime, the
authorization decision is made entirely locally inside an Envoy filter, with no dependency on any external module.
This allows Istio authorization to achieve high performance and availability.&lt;/p&gt;
&lt;h3 id=&#34;work-with-without-primary-identities&#34;&gt;Work with/without primary identities&lt;/h3&gt;
&lt;p&gt;Like any other RBAC system, Istio authorization is identity aware. In Istio authorization policy, there is a primary
identity called &lt;code&gt;user&lt;/code&gt;, which represents the principal of the client.&lt;/p&gt;
&lt;p&gt;In addition to the primary identity, you can also specify any conditions that define the identities. For example,
you can specify the client identity as &amp;ldquo;user Alice calling from Bookstore frontend service&amp;rdquo;, in which case,
you have a combined identity of the calling service (&lt;code&gt;Bookstore frontend&lt;/code&gt;) and the end user (&lt;code&gt;Alice&lt;/code&gt;).&lt;/p&gt;
&lt;p&gt;To improve security, you should enable &lt;a href=&#34;/v1.1/docs/concepts/security/#authentication&#34;&gt;authentication features&lt;/a&gt;,
and use authenticated identities in authorization policies. However, strongly authenticated identity is not required
for using authorization. Istio authorization works with or without identities. If you are working with a legacy system,
you may not have mutual TLS or JWT authentication setup for your mesh. In this case, the only way to identify the client is, for example,
through IP. You can still use Istio authorization to control which IP addresses or IP ranges are allowed to access your service.&lt;/p&gt;
&lt;h2 id=&#34;examples&#34;&gt;Examples&lt;/h2&gt;
&lt;p&gt;The &lt;a href=&#34;/v1.1/docs/tasks/security/authz-http/&#34;&gt;authorization task&lt;/a&gt; shows you how to
use Istio&amp;rsquo;s authorization feature to control namespace level and service level access using the
&lt;a href=&#34;/v1.1/docs/examples/bookinfo/&#34;&gt;Bookinfo application&lt;/a&gt;. In this section, you&amp;rsquo;ll see more examples on how to achieve
micro-segmentation with Istio authorization.&lt;/p&gt;
&lt;h3 id=&#34;namespace-level-segmentation-via-rbac-conditions&#34;&gt;Namespace level segmentation via RBAC + conditions&lt;/h3&gt;
&lt;p&gt;Suppose you have services in the &lt;code&gt;frontend&lt;/code&gt; and &lt;code&gt;backend&lt;/code&gt; namespaces. You would like to allow all your services
in the &lt;code&gt;frontend&lt;/code&gt; namespace to access all services that are marked &lt;code&gt;external&lt;/code&gt; in the &lt;code&gt;backend&lt;/code&gt; namespace.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-yaml&#39; data-expandlinks=&#39;true&#39; &gt;apiVersion: &amp;#34;rbac.istio.io/v1alpha1&amp;#34;
kind: ServiceRole
metadata:
name: external-api-caller
namespace: backend
spec:
rules:
- services: [&amp;#34;*&amp;#34;]
    methods: [&amp;#34;*&amp;#34;]
constraints:
    - key: &amp;#34;destination.labels[visibility]&amp;#34;
values: [&amp;#34;external&amp;#34;]
---
apiVersion: &amp;#34;rbac.istio.io/v1alpha1&amp;#34;
kind: ServiceRoleBinding
metadata:
name: external-api-caller
namespace: backend
spec:
subjects:
- properties:
      source.namespace: &amp;#34;frontend&amp;#34;
roleRef:
kind: ServiceRole
name: &amp;#34;external-api-caller&amp;#34;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The &lt;code&gt;ServiceRole&lt;/code&gt; and &lt;code&gt;ServiceRoleBinding&lt;/code&gt; above expressed &amp;ldquo;&lt;em&gt;who&lt;/em&gt; is allowed to do &lt;em&gt;what&lt;/em&gt; under &lt;em&gt;which conditions&lt;/em&gt;&amp;rdquo;
(RBAC + conditions). Specifically:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;&amp;ldquo;who&amp;rdquo;&lt;/strong&gt; are the services in the &lt;code&gt;frontend&lt;/code&gt; namespace.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;&amp;ldquo;what&amp;rdquo;&lt;/strong&gt; is to call services in the &lt;code&gt;backend&lt;/code&gt; namespace.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;&amp;ldquo;conditions&amp;rdquo;&lt;/strong&gt; is the &lt;code&gt;visibility&lt;/code&gt; label of the destination service having the value &lt;code&gt;external&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id=&#34;service-method-level-isolation-with-without-primary-identities&#34;&gt;Service/method level isolation with/without primary identities&lt;/h3&gt;
&lt;p&gt;Here is another example that demonstrates finer grained access control at service/method level. The first step
is to define a &lt;code&gt;book-reader&lt;/code&gt; &lt;code&gt;ServiceRole&lt;/code&gt; that allows READ access to &lt;code&gt;/books/*&lt;/code&gt; resource in &lt;code&gt;bookstore&lt;/code&gt; service.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-yaml&#39; data-expandlinks=&#39;true&#39; &gt;apiVersion: &amp;#34;rbac.istio.io/v1alpha1&amp;#34;
kind: ServiceRole
metadata:
name: book-reader
namespace: default
spec:
rules:
- services: [&amp;#34;bookstore.default.svc.cluster.local&amp;#34;]
    paths: [&amp;#34;/books/*&amp;#34;]
    methods: [&amp;#34;GET&amp;#34;]
&lt;/code&gt;&lt;/pre&gt;
&lt;h4 id=&#34;using-authenticated-client-identities&#34;&gt;Using authenticated client identities&lt;/h4&gt;
&lt;p&gt;Suppose you want to grant this &lt;code&gt;book-reader&lt;/code&gt; role to your &lt;code&gt;bookstore-frontend&lt;/code&gt; service. If you have enabled
&lt;a href=&#34;/v1.1/docs/concepts/security/#mutual-tls-authentication&#34;&gt;mutual TLS authentication&lt;/a&gt; for your mesh, you can use a
service account to identify your &lt;code&gt;bookstore-frontend&lt;/code&gt; service. Granting the &lt;code&gt;book-reader&lt;/code&gt; role to the &lt;code&gt;bookstore-frontend&lt;/code&gt;
service can be done by creating a &lt;code&gt;ServiceRoleBinding&lt;/code&gt; as shown below:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-yaml&#39; data-expandlinks=&#39;true&#39; &gt;apiVersion: &amp;#34;rbac.istio.io/v1alpha1&amp;#34;
kind: ServiceRoleBinding
metadata:
name: book-reader
namespace: default
spec:
subjects:
  - user: &amp;#34;cluster.local/ns/default/sa/bookstore-frontend&amp;#34;
roleRef:
kind: ServiceRole
name: &amp;#34;book-reader&amp;#34;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You may want to restrict this further by adding a condition that &amp;ldquo;only users who belong to the &lt;code&gt;qualified-reviewer&lt;/code&gt; group are
allowed to read books&amp;rdquo;. The &lt;code&gt;qualified-reviewer&lt;/code&gt; group is the end user identity that is authenticated by
&lt;a href=&#34;/v1.1/docs/concepts/security/#authentication&#34;&gt;JWT authentication&lt;/a&gt;. In this case, the combination of the client service identity
(&lt;code&gt;bookstore-frontend&lt;/code&gt;) and the end user identity (&lt;code&gt;qualified-reviewer&lt;/code&gt;) is used in the authorization policy.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-yaml&#39; data-expandlinks=&#39;true&#39; &gt;apiVersion: &amp;#34;rbac.istio.io/v1alpha1&amp;#34;
kind: ServiceRoleBinding
metadata:
name: book-reader
namespace: default
spec:
subjects:
  - user: &amp;#34;cluster.local/ns/default/sa/bookstore-frontend&amp;#34;
properties:
      request.auth.claims[group]: &amp;#34;qualified-reviewer&amp;#34;
roleRef:
kind: ServiceRole
name: &amp;#34;book-reader&amp;#34;
&lt;/code&gt;&lt;/pre&gt;
&lt;h4 id=&#34;client-does-not-have-identity&#34;&gt;Client does not have identity&lt;/h4&gt;
&lt;p&gt;Using authenticated identities in authorization policies is strongly recommended for security. However, if you have a
legacy system that does not support authentication, you may not have authenticated identities for your services.
You can still use Istio authorization to protect your services even without authenticated identities. The example below
shows that you can specify allowed source IP range in your authorization policy.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-yaml&#39; data-expandlinks=&#39;true&#39; &gt;apiVersion: &amp;#34;rbac.istio.io/v1alpha1&amp;#34;
kind: ServiceRoleBinding
metadata:
name: book-reader
namespace: default
spec:
subjects:
- properties:
source.ip: 10.20.0.0/9
roleRef:
kind: ServiceRole
name: &amp;#34;book-reader&amp;#34;
&lt;/code&gt;&lt;/pre&gt;
&lt;h2 id=&#34;summary&#34;&gt;Summary&lt;/h2&gt;
&lt;p&gt;Istio&amp;rsquo;s authorization feature provides authorization at namespace-level, service-level, and method-level granularity.
It adopts the &amp;ldquo;RBAC + conditions&amp;rdquo; model, which makes it easy to use and understand as an RBAC system, while providing the level of
flexibility that an ABAC system normally provides. Istio authorization achieves high performance as it is enforced
natively on Envoy. While it provides the best security by working together with
&lt;a href=&#34;/v1.1/docs/concepts/security/#authentication&#34;&gt;Istio authentication features&lt;/a&gt;, Istio authorization can also be used to
provide access control for legacy systems that do not have authentication.&lt;/p&gt;</description><pubDate>Fri, 20 Jul 2018 00:00:00 +0000</pubDate><link>/v1.1/blog/2018/istio-authorization/</link><author>Limin Wang</author><guid isPermaLink="true">/v1.1/blog/2018/istio-authorization/</guid><category>authorization</category><category>rbac</category><category>security</category></item><item><title>Exporting Logs to BigQuery, GCS, Pub/Sub through Stackdriver</title><description>
&lt;p&gt;This post shows how to direct Istio logs to &lt;a href=&#34;https://cloud.google.com/stackdriver/&#34;&gt;Stackdriver&lt;/a&gt;
and export those logs to various configured sinks such as
&lt;a href=&#34;https://cloud.google.com/bigquery/&#34;&gt;BigQuery&lt;/a&gt;, &lt;a href=&#34;https://cloud.google.com/storage/&#34;&gt;Google Cloud Storage&lt;/a&gt;
or &lt;a href=&#34;https://cloud.google.com/pubsub/&#34;&gt;Cloud Pub/Sub&lt;/a&gt;. At the end of this post you can perform
analytics on Istio data from your favorite places such as BigQuery, GCS or Cloud Pub/Sub.&lt;/p&gt;
&lt;p&gt;The &lt;a href=&#34;/v1.1/docs/examples/bookinfo/&#34;&gt;Bookinfo&lt;/a&gt; sample application is used as the example
application throughout this task.&lt;/p&gt;
&lt;h2 id=&#34;before-you-begin&#34;&gt;Before you begin&lt;/h2&gt;
&lt;p&gt;&lt;a href=&#34;/v1.1/docs/setup/&#34;&gt;Install Istio&lt;/a&gt; in your cluster and deploy an application.&lt;/p&gt;
&lt;h2 id=&#34;configuring-istio-to-export-logs&#34;&gt;Configuring Istio to export logs&lt;/h2&gt;
&lt;p&gt;Istio exports logs using the &lt;code&gt;logentry&lt;/code&gt; &lt;a href=&#34;/v1.1/docs/reference/config/policy-and-telemetry/templates/logentry&#34;&gt;template&lt;/a&gt;.
This specifies all the variables that are available for analysis. It
contains information such as source service, destination service, and auth
metrics (coming soon), among others. The following is a diagram of the pipeline:&lt;/p&gt;
&lt;figure style=&#34;width:75%&#34;&gt;
&lt;div class=&#34;wrapper-with-intrinsic-ratio&#34; style=&#34;padding-bottom:75%&#34;&gt;
&lt;a data-skipendnotes=&#34;true&#34; href=&#34;/v1.1/blog/2018/export-logs-through-stackdriver/./istio-analytics-using-stackdriver.png&#34; title=&#34;Exporting logs from Istio to Stackdriver for analysis&#34;&gt;
&lt;img class=&#34;element-to-stretch&#34; src=&#34;/v1.1/blog/2018/export-logs-through-stackdriver/./istio-analytics-using-stackdriver.png&#34; alt=&#34;Exporting logs from Istio to Stackdriver for analysis&#34; /&gt;
&lt;/a&gt;
&lt;/div&gt;
&lt;figcaption&gt;Exporting logs from Istio to Stackdriver for analysis&lt;/figcaption&gt;
&lt;/figure&gt;
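&lt;p&gt;For reference, the access logs that flow through this pipeline are produced by a &lt;code&gt;logentry&lt;/code&gt; instance. A trimmed sketch of such an instance is shown below; the variable set here is abbreviated and illustrative, so consult the template reference above for the full definition used by your installation:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-yaml&#39; data-expandlinks=&#39;true&#39; &gt;apiVersion: &amp;#34;config.istio.io/v1alpha2&amp;#34;
kind: logentry
metadata:
  name: accesslog
  namespace: istio-system
spec:
  severity: &amp;#39;&amp;#34;Info&amp;#34;&amp;#39;
  timestamp: request.time
  variables:
    # Each variable maps a Mixer attribute, with a default if the attribute is absent
    sourceIp: source.ip | ip(&amp;#34;0.0.0.0&amp;#34;)
    sourceUser: source.user | &amp;#34;&amp;#34;
    method: request.method | &amp;#34;&amp;#34;
    url: request.path | &amp;#34;&amp;#34;
    responseCode: response.code | 0
    responseSize: response.size | 0
  monitored_resource_type: &amp;#39;&amp;#34;UNSPECIFIED&amp;#34;&amp;#39;
&lt;/code&gt;&lt;/pre&gt;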
&lt;p&gt;Istio supports exporting logs to Stackdriver which can in turn be configured to export
logs to your favorite sink like BigQuery, Pub/Sub or GCS. Please follow the steps
below to setup your favorite sink for exporting logs first and then Stackdriver
in Istio.&lt;/p&gt;
&lt;h3 id=&#34;setting-up-various-log-sinks&#34;&gt;Setting up various log sinks&lt;/h3&gt;
&lt;p&gt;Common setup for all sinks:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Enable &lt;a href=&#34;https://cloud.google.com/monitoring/api/enable-api&#34;&gt;Stackdriver Monitoring API&lt;/a&gt; for the project.&lt;/li&gt;
&lt;li&gt;Make sure the &lt;code&gt;principalEmail&lt;/code&gt; that will be setting up the sink has write access to the project and has Logging Admin role permissions.&lt;/li&gt;
&lt;li&gt;Make sure the &lt;code&gt;GOOGLE_APPLICATION_CREDENTIALS&lt;/code&gt; environment variable is set. Please follow instructions &lt;a href=&#34;https://cloud.google.com/docs/authentication/getting-started&#34;&gt;here&lt;/a&gt; to set it up.&lt;/li&gt;
&lt;/ol&gt;
&lt;h4 id=&#34;bigquery&#34;&gt;BigQuery&lt;/h4&gt;
&lt;ol&gt;
&lt;li&gt;&lt;a href=&#34;https://cloud.google.com/bigquery/docs/datasets&#34;&gt;Create a BigQuery dataset&lt;/a&gt; as a destination for the logs export.&lt;/li&gt;
&lt;li&gt;Record the ID of the dataset. It will be needed to configure the Stackdriver handler.
It would be of the form &lt;code&gt;bigquery.googleapis.com/projects/[PROJECT_ID]/datasets/[DATASET_ID]&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;Give the &lt;a href=&#34;https://cloud.google.com/logging/docs/api/tasks/exporting-logs#writing_to_the_destination&#34;&gt;sink&amp;rsquo;s writer identity&lt;/a&gt;, &lt;code&gt;cloud-logs@system.gserviceaccount.com&lt;/code&gt;, the BigQuery Data Editor role in IAM.&lt;/li&gt;
&lt;li&gt;If using &lt;a href=&#34;/v1.1/docs/setup/kubernetes/prepare/platform-setup/gke/&#34;&gt;Google Kubernetes Engine&lt;/a&gt;, make sure &lt;code&gt;bigquery&lt;/code&gt; &lt;a href=&#34;https://cloud.google.com/sdk/gcloud/reference/container/clusters/create&#34;&gt;Scope&lt;/a&gt; is enabled on the cluster.&lt;/li&gt;
&lt;/ol&gt;
&lt;h4 id=&#34;google-cloud-storage-gcs&#34;&gt;Google Cloud Storage (GCS)&lt;/h4&gt;
&lt;ol&gt;
&lt;li&gt;&lt;a href=&#34;https://cloud.google.com/storage/docs/creating-buckets&#34;&gt;Create a GCS bucket&lt;/a&gt; where you would like logs to get exported in GCS.&lt;/li&gt;
&lt;li&gt;Record the ID of the bucket. It will be needed to configure Stackdriver.
It would be of the form &lt;code&gt;storage.googleapis.com/[BUCKET_ID]&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;Give the &lt;a href=&#34;https://cloud.google.com/logging/docs/api/tasks/exporting-logs#writing_to_the_destination&#34;&gt;sink&amp;rsquo;s writer identity&lt;/a&gt;, &lt;code&gt;cloud-logs@system.gserviceaccount.com&lt;/code&gt;, the Storage Object Creator role in IAM.&lt;/li&gt;
&lt;/ol&gt;
&lt;h4 id=&#34;google-cloud-pub-sub&#34;&gt;Google Cloud Pub/Sub&lt;/h4&gt;
&lt;ol&gt;
&lt;li&gt;&lt;a href=&#34;https://cloud.google.com/pubsub/docs/admin&#34;&gt;Create a topic&lt;/a&gt; where you would like logs to get exported in Google Cloud Pub/Sub.&lt;/li&gt;
&lt;li&gt;Record the ID of the topic. It will be needed to configure Stackdriver.
It would be of the form &lt;code&gt;pubsub.googleapis.com/projects/[PROJECT_ID]/topics/[TOPIC_ID]&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;Give the &lt;a href=&#34;https://cloud.google.com/logging/docs/api/tasks/exporting-logs#writing_to_the_destination&#34;&gt;sink&amp;rsquo;s writer identity&lt;/a&gt;, &lt;code&gt;cloud-logs@system.gserviceaccount.com&lt;/code&gt;, the Pub/Sub Publisher role in IAM.&lt;/li&gt;
&lt;li&gt;If using &lt;a href=&#34;/v1.1/docs/setup/kubernetes/prepare/platform-setup/gke/&#34;&gt;Google Kubernetes Engine&lt;/a&gt;, make sure &lt;code&gt;pubsub&lt;/code&gt; &lt;a href=&#34;https://cloud.google.com/sdk/gcloud/reference/container/clusters/create&#34;&gt;Scope&lt;/a&gt; is enabled on the cluster.&lt;/li&gt;
&lt;/ol&gt;
&lt;h3 id=&#34;setting-up-stackdriver&#34;&gt;Setting up Stackdriver&lt;/h3&gt;
&lt;p&gt;A Stackdriver handler must be created to export data to Stackdriver. The configuration for
a Stackdriver handler is described &lt;a href=&#34;/v1.1/docs/reference/config/policy-and-telemetry/adapters/stackdriver/&#34;&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Save the following yaml file as &lt;code&gt;stackdriver.yaml&lt;/code&gt;. Replace &lt;code&gt;&amp;lt;project_id&amp;gt;,
&amp;lt;sink_id&amp;gt;, &amp;lt;sink_destination&amp;gt;, &amp;lt;log_filter&amp;gt;&lt;/code&gt; with their specific values.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-yaml&#39; data-expandlinks=&#39;true&#39; &gt;apiVersion: &amp;#34;config.istio.io/v1alpha2&amp;#34;
kind: stackdriver
metadata:
name: handler
namespace: istio-system
spec:
# We&amp;#39;ll use the default value from the adapter, once per minute, so we don&amp;#39;t need to supply a value.
# pushInterval: 1m
# Must be supplied for the Stackdriver adapter to work
project_id: &amp;#34;&amp;lt;project_id&amp;gt;&amp;#34;
# One of the following must be set; the preferred method is `appCredentials`, which corresponds to
# Google Application Default Credentials.
# If none is provided we default to app credentials.
# appCredentials:
# apiKey:
# serviceAccountPath:
# Describes how to map Istio logs into Stackdriver.
logInfo:
accesslog.logentry.istio-system:
payloadTemplate: &amp;#39;{{or (.sourceIp) &amp;#34;-&amp;#34;}} - {{or (.sourceUser) &amp;#34;-&amp;#34;}} [{{or (.timestamp.Format &amp;#34;02/Jan/2006:15:04:05 -0700&amp;#34;) &amp;#34;-&amp;#34;}}] &amp;#34;{{or (.method) &amp;#34;-&amp;#34;}} {{or (.url) &amp;#34;-&amp;#34;}} {{or (.protocol) &amp;#34;-&amp;#34;}}&amp;#34; {{or (.responseCode) &amp;#34;-&amp;#34;}} {{or (.responseSize) &amp;#34;-&amp;#34;}}&amp;#39;
httpMapping:
url: url
status: responseCode
requestSize: requestSize
responseSize: responseSize
latency: latency
localIp: sourceIp
remoteIp: destinationIp
method: method
userAgent: userAgent
referer: referer
labelNames:
- sourceIp
- destinationIp
- sourceService
- sourceUser
- sourceNamespace
- destinationIp
- destinationService
- destinationNamespace
- apiClaims
- apiKey
- protocol
- method
- url
- responseCode
- responseSize
- requestSize
- latency
- connectionMtls
- userAgent
- responseTimestamp
- receivedBytes
- sentBytes
- referer
sinkInfo:
id: &amp;#39;&amp;lt;sink_id&amp;gt;&amp;#39;
destination: &amp;#39;&amp;lt;sink_destination&amp;gt;&amp;#39;
filter: &amp;#39;&amp;lt;log_filter&amp;gt;&amp;#39;
---
apiVersion: &amp;#34;config.istio.io/v1alpha2&amp;#34;
kind: rule
metadata:
name: stackdriver
namespace: istio-system
spec:
match: &amp;#34;true&amp;#34; # If omitted match is true.
actions:
- handler: handler.stackdriver
instances:
- accesslog.logentry
---
&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Push the configuration&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ kubectl apply -f stackdriver.yaml
stackdriver &amp;#34;handler&amp;#34; created
rule &amp;#34;stackdriver&amp;#34; created
logentry &amp;#34;stackdriverglobalmr&amp;#34; created
metric &amp;#34;stackdriverrequestcount&amp;#34; created
metric &amp;#34;stackdriverrequestduration&amp;#34; created
metric &amp;#34;stackdriverrequestsize&amp;#34; created
metric &amp;#34;stackdriverresponsesize&amp;#34; created
&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Send traffic to the sample application.&lt;/p&gt;
&lt;p&gt;For the Bookinfo sample, visit &lt;code&gt;http://$GATEWAY_URL/productpage&lt;/code&gt; in your web
browser or issue the following command:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ curl http://$GATEWAY_URL/productpage
&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Verify that logs are flowing through Stackdriver to the configured sink.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Stackdriver: Navigate to the &lt;a href=&#34;https://pantheon.corp.google.com/logs/viewer&#34;&gt;Stackdriver Logs
Viewer&lt;/a&gt; for your project
and look under &amp;ldquo;GKE Container&amp;rdquo; -&amp;gt; &amp;ldquo;Cluster Name&amp;rdquo; -&amp;gt; &amp;ldquo;Namespace Id&amp;rdquo; for
Istio Access logs.&lt;/li&gt;
&lt;li&gt;BigQuery: Navigate to the &lt;a href=&#34;https://bigquery.cloud.google.com/&#34;&gt;BigQuery
Interface&lt;/a&gt; for your project and you
should find a table with prefix &lt;code&gt;accesslog_logentry_istio&lt;/code&gt; in your sink
dataset.&lt;/li&gt;
&lt;li&gt;GCS: Navigate to the &lt;a href=&#34;https://pantheon.corp.google.com/storage/browser/&#34;&gt;Storage
Browser&lt;/a&gt; for your
project and you should find a bucket named
&lt;code&gt;accesslog.logentry.istio-system&lt;/code&gt; in your sink bucket.&lt;/li&gt;
&lt;li&gt;Pub/Sub: Navigate to the &lt;a href=&#34;https://pantheon.corp.google.com/cloudpubsub/topicList&#34;&gt;Pub/Sub
Topic List&lt;/a&gt; for
your project and you should find a topic for &lt;code&gt;accesslog&lt;/code&gt; in your sink
topic.&lt;/li&gt;
&lt;/ul&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;h2 id=&#34;understanding-what-happened&#34;&gt;Understanding what happened&lt;/h2&gt;
&lt;p&gt;The &lt;code&gt;stackdriver.yaml&lt;/code&gt; file above configured Istio to send access logs to
Stackdriver and then added a sink configuration through which these logs could be
exported. In detail, it did the following:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Added a handler of kind &lt;code&gt;stackdriver&lt;/code&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-yaml&#39; data-expandlinks=&#39;true&#39; &gt;apiVersion: &amp;#34;config.istio.io/v1alpha2&amp;#34;
kind: stackdriver
metadata:
name: handler
namespace: &amp;lt;your defined namespace&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Added &lt;code&gt;logInfo&lt;/code&gt; in the spec&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-yaml&#39; data-expandlinks=&#39;true&#39; &gt;spec:
logInfo: accesslog.logentry.istio-system:
labelNames:
- sourceIp
- destinationIp
...
...
sinkInfo:
id: &amp;#39;&amp;lt;sink_id&amp;gt;&amp;#39;
destination: &amp;#39;&amp;lt;sink_destination&amp;gt;&amp;#39;
filter: &amp;#39;&amp;lt;log_filter&amp;gt;&amp;#39;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;In the above configuration, &lt;code&gt;sinkInfo&lt;/code&gt; contains information about the sink to which you want
the logs exported. For more information on how this is filled in for different sinks, please refer
&lt;a href=&#34;https://cloud.google.com/logging/docs/export/#sink-terms&#34;&gt;here&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Added a rule for Stackdriver&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-yaml&#39; data-expandlinks=&#39;true&#39; &gt;apiVersion: &amp;#34;config.istio.io/v1alpha2&amp;#34;
kind: rule
metadata:
name: stackdriver
  namespace: istio-system
spec:
match: &amp;#34;true&amp;#34; # If omitted match is true
actions:
- handler: handler.stackdriver
instances:
- accesslog.logentry
&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;
&lt;/ol&gt;
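&lt;p&gt;As an example of how &lt;code&gt;sinkInfo&lt;/code&gt; might be filled in for the BigQuery case, a hypothetical configuration could look like the following; the sink ID, project, dataset and filter values are placeholders that you would replace with your own:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-yaml&#39; data-expandlinks=&#39;true&#39; &gt;sinkInfo:
  id: &amp;#39;istio-accesslog-sink&amp;#39;
  # Destination format from the BigQuery setup step above
  destination: &amp;#39;bigquery.googleapis.com/projects/my-project/datasets/istio_logs&amp;#39;
  # Export only the Istio access log entries
  filter: &amp;#39;logName:&amp;#34;accesslog.logentry.istio-system&amp;#34;&amp;#39;
&lt;/code&gt;&lt;/pre&gt;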
&lt;h2 id=&#34;cleanup&#34;&gt;Cleanup&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Remove the new Stackdriver configuration:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ kubectl delete -f stackdriver.yaml
&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If you are not planning to explore any follow-on tasks, refer to the
&lt;a href=&#34;/v1.1/docs/examples/bookinfo/#cleanup&#34;&gt;Bookinfo cleanup&lt;/a&gt; instructions to shutdown
the application.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;availability-of-logs-in-export-sinks&#34;&gt;Availability of logs in export sinks&lt;/h2&gt;
&lt;p&gt;Export to BigQuery happens within minutes (we see it to be almost instant), export to GCS can
have a delay of 2 to 12 hours, and export to Pub/Sub is almost immediate.&lt;/p&gt;
&lt;p&gt;While Istio&amp;rsquo;s main focus is management of traffic between microservices inside a service mesh, Istio can also manage
ingress (from outside into the mesh) and egress (from the mesh outwards) traffic. Istio can uniformly enforce access
policies and aggregate telemetry data for mesh-internal, ingress and egress traffic.&lt;/p&gt;
&lt;p&gt;In this blog post, we show how to apply monitoring and access policies to HTTP egress traffic with Istio.&lt;/p&gt;
&lt;h2 id=&#34;use-case&#34;&gt;Use case&lt;/h2&gt;
&lt;p&gt;Consider an organization that runs applications that process content from &lt;em&gt;cnn.com&lt;/em&gt;. The applications are decomposed
into microservices deployed in an Istio service mesh. The applications access pages of various topics from &lt;em&gt;cnn.com&lt;/em&gt;: &lt;a href=&#34;https://edition.cnn.com/politics&#34;&gt;edition.cnn.com/politics&lt;/a&gt;, &lt;a href=&#34;https://edition.cnn.com/sport&#34;&gt;edition.cnn.com/sport&lt;/a&gt; and &lt;a href=&#34;https://edition.cnn.com/health&#34;&gt;edition.cnn.com/health&lt;/a&gt;. The organization &lt;a href=&#34;/v1.1/docs/examples/advanced-gateways/egress-gateway-tls-origination/&#34;&gt;configures Istio to allow access to edition.cnn.com&lt;/a&gt; and everything works fine. However, at some
point in time, the organization decides to banish politics. Practically, it means blocking access to
&lt;a href=&#34;https://edition.cnn.com/politics&#34;&gt;edition.cnn.com/politics&lt;/a&gt; and allowing access to
&lt;a href=&#34;https://edition.cnn.com/sport&#34;&gt;edition.cnn.com/sport&lt;/a&gt; and &lt;a href=&#34;https://edition.cnn.com/health&#34;&gt;edition.cnn.com/health&lt;/a&gt;
only. The organization will grant permissions to individual applications and to particular users to access &lt;a href=&#34;https://edition.cnn.com/politics&#34;&gt;edition.cnn.com/politics&lt;/a&gt;, on a case-by-case basis.&lt;/p&gt;
&lt;p&gt;To achieve that goal, the organization&amp;rsquo;s operations people monitor access to the external services and
analyze Istio logs to verify that no unauthorized request was sent to
&lt;a href=&#34;https://edition.cnn.com/politics&#34;&gt;edition.cnn.com/politics&lt;/a&gt;. They also configure Istio to prevent access to
&lt;a href=&#34;https://edition.cnn.com/politics&#34;&gt;edition.cnn.com/politics&lt;/a&gt; automatically.&lt;/p&gt;
&lt;p&gt;The organization is resolved to prevent any tampering with the new policy. It decides to put mechanisms in place that
will prevent any possibility of a malicious application accessing the forbidden topic.&lt;/p&gt;
&lt;h2 id=&#34;related-tasks-and-examples&#34;&gt;Related tasks and examples&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;The &lt;a href=&#34;/v1.1/docs/tasks/traffic-management/egress/&#34;&gt;Control Egress Traffic&lt;/a&gt; task demonstrates how external (outside the
Kubernetes cluster) HTTP and HTTPS services can be accessed by applications inside the mesh.&lt;/li&gt;
&lt;li&gt;The &lt;a href=&#34;/v1.1/docs/examples/advanced-gateways/egress-gateway/&#34;&gt;Configure an Egress Gateway&lt;/a&gt; example describes how to configure
Istio to direct egress traffic through a dedicated gateway service called &lt;em&gt;egress gateway&lt;/em&gt;.&lt;/li&gt;
&lt;li&gt;The &lt;a href=&#34;/v1.1/docs/examples/advanced-gateways/egress-gateway-tls-origination/&#34;&gt;Egress Gateway with TLS Origination&lt;/a&gt; example
demonstrates how to allow applications to send HTTP requests to external servers that require HTTPS, while directing
traffic through an egress gateway.&lt;/li&gt;
&lt;li&gt;The &lt;a href=&#34;/v1.1/docs/tasks/telemetry/metrics/collecting-metrics/&#34;&gt;Collecting Metrics&lt;/a&gt; task describes how to configure metrics for services in a mesh.&lt;/li&gt;
&lt;li&gt;The &lt;a href=&#34;/v1.1/docs/tasks/telemetry/metrics/using-istio-dashboard/&#34;&gt;Visualizing Metrics with Grafana&lt;/a&gt; task
describes how to use the Istio Dashboard to monitor mesh traffic.&lt;/li&gt;
&lt;li&gt;The &lt;a href=&#34;/v1.1/docs/tasks/policy-enforcement/denial-and-list/&#34;&gt;Basic Access Control&lt;/a&gt; task shows how to control access to
in-mesh services.&lt;/li&gt;
&lt;li&gt;The &lt;a href=&#34;/v1.1/docs/tasks/policy-enforcement/denial-and-list/&#34;&gt;Denials and White/Black Listing&lt;/a&gt; task shows how to configure
access policies using black or white list checkers.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;As opposed to the telemetry and security tasks above, this blog post describes Istio&amp;rsquo;s monitoring and access policies
as applied exclusively to egress traffic.&lt;/p&gt;
&lt;h2 id=&#34;before-you-begin&#34;&gt;Before you begin&lt;/h2&gt;
&lt;p&gt;Follow the steps in the &lt;a href=&#34;/v1.1/docs/examples/advanced-gateways/egress-gateway-tls-origination/&#34;&gt;Egress Gateway with TLS Origination&lt;/a&gt; example, &lt;strong&gt;with mutual TLS authentication enabled&lt;/strong&gt;, without
the &lt;a href=&#34;/v1.1/docs/examples/advanced-gateways/egress-gateway-tls-origination/#cleanup&#34;&gt;Cleanup&lt;/a&gt; step.
After completing that example, you can access &lt;a href=&#34;https://edition.cnn.com/politics&#34;&gt;edition.cnn.com/politics&lt;/a&gt; from an in-mesh container with &lt;code&gt;curl&lt;/code&gt; installed. This blog post assumes that the &lt;code&gt;SOURCE_POD&lt;/code&gt; environment variable contains the source pod&amp;rsquo;s name and that the container&amp;rsquo;s name is &lt;code&gt;sleep&lt;/code&gt;.&lt;/p&gt;
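&lt;p&gt;If the &lt;code&gt;SOURCE_POD&lt;/code&gt; variable is not already set from that example, a command similar to the
following should work, assuming the default &lt;code&gt;sleep&lt;/code&gt; sample deployment:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ export SOURCE_POD=$(kubectl get pod -l app=sleep -o jsonpath={.items..metadata.name})
&lt;/code&gt;&lt;/pre&gt;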
&lt;h2 id=&#34;configure-monitoring-and-access-policies&#34;&gt;Configure monitoring and access policies&lt;/h2&gt;
&lt;p&gt;Since you want to accomplish your tasks in a &lt;em&gt;secure way&lt;/em&gt;, you should direct egress traffic through
an &lt;em&gt;egress gateway&lt;/em&gt;, as described in the &lt;a href=&#34;/v1.1/docs/examples/advanced-gateways/egress-gateway-tls-origination/&#34;&gt;Egress Gateway with TLS Origination&lt;/a&gt;
task. The &lt;em&gt;secure way&lt;/em&gt; here means that you want to prevent malicious applications from bypassing Istio monitoring and
policy enforcement.&lt;/p&gt;
&lt;p&gt;According to our scenario, the organization performed the instructions in the
&lt;a href=&#34;#before-you-begin&#34;&gt;Before you begin&lt;/a&gt; section, enabled HTTP traffic to &lt;em&gt;edition.cnn.com&lt;/em&gt;, and configured that traffic
to pass through the egress gateway. The egress gateway performs TLS origination to &lt;em&gt;edition.cnn.com&lt;/em&gt;, so the traffic
leaves the mesh encrypted. At this point, the organization is ready to configure Istio to monitor and apply access policies for
the traffic to &lt;em&gt;edition.cnn.com&lt;/em&gt;.&lt;/p&gt;
&lt;h3 id=&#34;logging&#34;&gt;Logging&lt;/h3&gt;
&lt;p&gt;Configure Istio to log access to &lt;em&gt;*.cnn.com&lt;/em&gt;. You create a &lt;code&gt;logentry&lt;/code&gt; and two
&lt;a href=&#34;/v1.1/docs/reference/config/policy-and-telemetry/adapters/stdio/&#34;&gt;stdio&lt;/a&gt; &lt;code&gt;handlers&lt;/code&gt;, one for logging forbidden access
(&lt;em&gt;error&lt;/em&gt; log level) and another one for logging all access to &lt;em&gt;*.cnn.com&lt;/em&gt; (&lt;em&gt;info&lt;/em&gt; log level). Then you create &lt;code&gt;rules&lt;/code&gt; to
direct your &lt;code&gt;logentry&lt;/code&gt; instances to your &lt;code&gt;handlers&lt;/code&gt;. One rule directs access to &lt;em&gt;*.cnn.com/politics&lt;/em&gt; to the handler for
logging forbidden access, another rule directs log entries to the handler that outputs each access to &lt;em&gt;*.cnn.com&lt;/em&gt; as an
&lt;em&gt;info&lt;/em&gt; log entry. To understand the Istio &lt;code&gt;logentries&lt;/code&gt;, &lt;code&gt;rules&lt;/code&gt;, and &lt;code&gt;handlers&lt;/code&gt;, see
&lt;a href=&#34;/v1.1/blog/2017/adapter-model/&#34;&gt;Istio Adapter Model&lt;/a&gt;. A diagram with the involved entities and dependencies between them
appears below:&lt;/p&gt;
&lt;figure style=&#34;width:80%&#34;&gt;
&lt;div class=&#34;wrapper-with-intrinsic-ratio&#34; style=&#34;padding-bottom:46.46700562636976%&#34;&gt;
&lt;a data-skipendnotes=&#34;true&#34; href=&#34;/v1.1/blog/2018/egress-monitoring-access-control/egress-adapters-monitoring.svg&#34; title=&#34;Instances, rules and handlers for egress monitoring&#34;&gt;
&lt;img class=&#34;element-to-stretch&#34; src=&#34;/v1.1/blog/2018/egress-monitoring-access-control/egress-adapters-monitoring.svg&#34; alt=&#34;Instances, rules and handlers for egress monitoring&#34; /&gt;
&lt;/a&gt;
&lt;/div&gt;
&lt;figcaption&gt;Instances, rules and handlers for egress monitoring&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Create the &lt;code&gt;logentry&lt;/code&gt;, &lt;code&gt;rules&lt;/code&gt; and &lt;code&gt;handlers&lt;/code&gt;. Note that you specify &lt;code&gt;context.reporter.uid&lt;/code&gt; as
&lt;code&gt;kubernetes://istio-egressgateway&lt;/code&gt; in the rules to get logs from the egress gateway only.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ cat &amp;lt;&amp;lt;EOF | kubectl apply -f -
# Log entry for egress access
apiVersion: &amp;#34;config.istio.io/v1alpha2&amp;#34;
kind: logentry
metadata:
name: egress-access
namespace: istio-system
spec:
severity: &amp;#39;&amp;#34;info&amp;#34;&amp;#39;
timestamp: request.time
variables:
destination: request.host | &amp;#34;unknown&amp;#34;
path: request.path | &amp;#34;unknown&amp;#34;
responseCode: response.code | 0
responseSize: response.size | 0
reporterUID: context.reporter.uid | &amp;#34;unknown&amp;#34;
sourcePrincipal: source.principal | &amp;#34;unknown&amp;#34;
monitored_resource_type: &amp;#39;&amp;#34;UNSPECIFIED&amp;#34;&amp;#39;
---
# Handler for error egress access entries
apiVersion: &amp;#34;config.istio.io/v1alpha2&amp;#34;
kind: stdio
metadata:
name: egress-error-logger
namespace: istio-system
spec:
severity_levels:
info: 2 # output log level as error
outputAsJson: true
---
# Rule to handle access to *.cnn.com/politics
apiVersion: &amp;#34;config.istio.io/v1alpha2&amp;#34;
kind: rule
metadata:
name: handle-politics
namespace: istio-system
spec:
match: request.host.endsWith(&amp;#34;cnn.com&amp;#34;) &amp;amp;&amp;amp; request.path.startsWith(&amp;#34;/politics&amp;#34;) &amp;amp;&amp;amp; context.reporter.uid.startsWith(&amp;#34;kubernetes://istio-egressgateway&amp;#34;)
actions:
- handler: egress-error-logger.stdio
instances:
- egress-access.logentry
---
# Handler for info egress access entries
apiVersion: &amp;#34;config.istio.io/v1alpha2&amp;#34;
kind: stdio
metadata:
name: egress-access-logger
namespace: istio-system
spec:
severity_levels:
info: 0 # output log level as info
outputAsJson: true
---
# Rule to handle access to *.cnn.com
apiVersion: &amp;#34;config.istio.io/v1alpha2&amp;#34;
kind: rule
metadata:
name: handle-cnn-access
namespace: istio-system
spec:
match: request.host.endsWith(&amp;#34;.cnn.com&amp;#34;) &amp;amp;&amp;amp; context.reporter.uid.startsWith(&amp;#34;kubernetes://istio-egressgateway&amp;#34;)
actions:
- handler: egress-access-logger.stdio
instances:
- egress-access.logentry
EOF
&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Send three HTTP requests to &lt;em&gt;cnn.com&lt;/em&gt;, to &lt;a href=&#34;https://edition.cnn.com/politics&#34;&gt;edition.cnn.com/politics&lt;/a&gt;, &lt;a href=&#34;https://edition.cnn.com/sport&#34;&gt;edition.cnn.com/sport&lt;/a&gt; and &lt;a href=&#34;https://edition.cnn.com/health&#34;&gt;edition.cnn.com/health&lt;/a&gt;.
All three should return &lt;em&gt;200 OK&lt;/em&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ kubectl exec -it $SOURCE_POD -c sleep -- sh -c &amp;#39;curl -sL -o /dev/null -w &amp;#34;%{http_code}\n&amp;#34; http://edition.cnn.com/politics; curl -sL -o /dev/null -w &amp;#34;%{http_code}\n&amp;#34; http://edition.cnn.com/sport; curl -sL -o /dev/null -w &amp;#34;%{http_code}\n&amp;#34; http://edition.cnn.com/health&amp;#39;
200
200
200
&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Query the Mixer log and see that the information about the requests appears in the log:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ kubectl -n istio-system logs -l istio-mixer-type=telemetry -c mixer | grep egress-access | grep cnn | tail -4
{&amp;#34;level&amp;#34;:&amp;#34;info&amp;#34;,&amp;#34;time&amp;#34;:&amp;#34;2019-01-29T07:43:24.611462Z&amp;#34;,&amp;#34;instance&amp;#34;:&amp;#34;egress-access.logentry.istio-system&amp;#34;,&amp;#34;destination&amp;#34;:&amp;#34;edition.cnn.com&amp;#34;,&amp;#34;path&amp;#34;:&amp;#34;/politics&amp;#34;,&amp;#34;reporterUID&amp;#34;:&amp;#34;kubernetes://istio-egressgateway-747b6764b8-44rrh.istio-system&amp;#34;,&amp;#34;responseCode&amp;#34;:200,&amp;#34;responseSize&amp;#34;:1883355,&amp;#34;sourcePrincipal&amp;#34;:&amp;#34;cluster.local/ns/default/sa/sleep&amp;#34;}
{&amp;#34;level&amp;#34;:&amp;#34;info&amp;#34;,&amp;#34;time&amp;#34;:&amp;#34;2019-01-29T07:43:24.886316Z&amp;#34;,&amp;#34;instance&amp;#34;:&amp;#34;egress-access.logentry.istio-system&amp;#34;,&amp;#34;destination&amp;#34;:&amp;#34;edition.cnn.com&amp;#34;,&amp;#34;path&amp;#34;:&amp;#34;/sport&amp;#34;,&amp;#34;reporterUID&amp;#34;:&amp;#34;kubernetes://istio-egressgateway-747b6764b8-44rrh.istio-system&amp;#34;,&amp;#34;responseCode&amp;#34;:200,&amp;#34;responseSize&amp;#34;:2094561,&amp;#34;sourcePrincipal&amp;#34;:&amp;#34;cluster.local/ns/default/sa/sleep&amp;#34;}
{&amp;#34;level&amp;#34;:&amp;#34;info&amp;#34;,&amp;#34;time&amp;#34;:&amp;#34;2019-01-29T07:43:25.369663Z&amp;#34;,&amp;#34;instance&amp;#34;:&amp;#34;egress-access.logentry.istio-system&amp;#34;,&amp;#34;destination&amp;#34;:&amp;#34;edition.cnn.com&amp;#34;,&amp;#34;path&amp;#34;:&amp;#34;/health&amp;#34;,&amp;#34;reporterUID&amp;#34;:&amp;#34;kubernetes://istio-egressgateway-747b6764b8-44rrh.istio-system&amp;#34;,&amp;#34;responseCode&amp;#34;:200,&amp;#34;responseSize&amp;#34;:2157009,&amp;#34;sourcePrincipal&amp;#34;:&amp;#34;cluster.local/ns/default/sa/sleep&amp;#34;}
{&amp;#34;level&amp;#34;:&amp;#34;error&amp;#34;,&amp;#34;time&amp;#34;:&amp;#34;2019-01-29T07:43:24.611462Z&amp;#34;,&amp;#34;instance&amp;#34;:&amp;#34;egress-access.logentry.istio-system&amp;#34;,&amp;#34;destination&amp;#34;:&amp;#34;edition.cnn.com&amp;#34;,&amp;#34;path&amp;#34;:&amp;#34;/politics&amp;#34;,&amp;#34;reporterUID&amp;#34;:&amp;#34;kubernetes://istio-egressgateway-747b6764b8-44rrh.istio-system&amp;#34;,&amp;#34;responseCode&amp;#34;:200,&amp;#34;responseSize&amp;#34;:1883355,&amp;#34;sourcePrincipal&amp;#34;:&amp;#34;cluster.local/ns/default/sa/sleep&amp;#34;}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You see four log entries related to your three requests: three &lt;em&gt;info&lt;/em&gt; entries about the access to &lt;em&gt;edition.cnn.com&lt;/em&gt;
and one &lt;em&gt;error&lt;/em&gt; entry about the access to &lt;em&gt;edition.cnn.com/politics&lt;/em&gt;. The service mesh operators can see all the
access instances, and can also search the log for &lt;em&gt;error&lt;/em&gt; log entries that represent forbidden accesses. This is the
first security measure the organization can apply before blocking the forbidden accesses automatically, namely
logging all the forbidden access instances as errors. In some settings this can be a sufficient security measure.&lt;/p&gt;
&lt;p&gt;Note the attributes:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;destination&lt;/code&gt;, &lt;code&gt;path&lt;/code&gt;, &lt;code&gt;responseCode&lt;/code&gt;, &lt;code&gt;responseSize&lt;/code&gt; are related to HTTP parameters of the requests&lt;/li&gt;
&lt;li&gt;&lt;code&gt;sourcePrincipal&lt;/code&gt;: &lt;code&gt;cluster.local/ns/default/sa/sleep&lt;/code&gt; - a string that represents the &lt;code&gt;sleep&lt;/code&gt; service account in
the &lt;code&gt;default&lt;/code&gt; namespace&lt;/li&gt;
&lt;li&gt;&lt;code&gt;reporterUID&lt;/code&gt;: &lt;code&gt;kubernetes://istio-egressgateway-747b6764b8-44rrh.istio-system&lt;/code&gt; - a UID of the reporting pod, in
this case &lt;code&gt;istio-egressgateway-747b6764b8-44rrh&lt;/code&gt; in the &lt;code&gt;istio-system&lt;/code&gt; namespace&lt;/li&gt;
&lt;/ul&gt;&lt;/li&gt;
&lt;/ol&gt;
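&lt;p&gt;As mentioned above, operators can search the Mixer log for the &lt;em&gt;error&lt;/em&gt; entries that represent
forbidden accesses. A sketch of such a search:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ kubectl -n istio-system logs -l istio-mixer-type=telemetry -c mixer | grep egress-access | grep &amp;#39;&amp;#34;level&amp;#34;:&amp;#34;error&amp;#34;&amp;#39;
&lt;/code&gt;&lt;/pre&gt;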
&lt;h3 id=&#34;access-control-by-routing&#34;&gt;Access control by routing&lt;/h3&gt;
&lt;p&gt;After enabling logging of access to &lt;em&gt;edition.cnn.com&lt;/em&gt;, automatically enforce an access policy, namely allowing
access to the &lt;em&gt;/health&lt;/em&gt; and &lt;em&gt;/sport&lt;/em&gt; URL paths only. Such a simple policy control can be implemented with Istio routing.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Redefine your &lt;code&gt;VirtualService&lt;/code&gt; for &lt;em&gt;edition.cnn.com&lt;/em&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ cat &amp;lt;&amp;lt;EOF | kubectl apply -f -
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: direct-cnn-through-egress-gateway
spec:
hosts:
- edition.cnn.com
gateways:
- istio-egressgateway
- mesh
http:
- match:
- gateways:
- mesh
port: 80
route:
- destination:
host: istio-egressgateway.istio-system.svc.cluster.local
subset: cnn
port:
number: 443
weight: 100
- match:
- gateways:
- istio-egressgateway
port: 443
uri:
regex: &amp;#34;/health|/sport&amp;#34;
route:
- destination:
host: edition.cnn.com
port:
number: 443
weight: 100
EOF
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Note that you added a &lt;code&gt;match&lt;/code&gt; by &lt;code&gt;uri&lt;/code&gt; condition that checks that the URL path is
either &lt;em&gt;/health&lt;/em&gt; or &lt;em&gt;/sport&lt;/em&gt;. Also note that this condition is added to the &lt;code&gt;istio-egressgateway&lt;/code&gt;
section of the &lt;code&gt;VirtualService&lt;/code&gt;, since the egress gateway is a hardened component in terms of security (see
&lt;a href=&#34;/v1.1/docs/examples/advanced-gateways/egress-gateway/#additional-security-considerations&#34;&gt;egress gateway security considerations&lt;/a&gt;). You don&amp;rsquo;t want any tampering
with your policies.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Send the previous three HTTP requests to &lt;em&gt;cnn.com&lt;/em&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ kubectl exec -it $SOURCE_POD -c sleep -- sh -c &amp;#39;curl -sL -o /dev/null -w &amp;#34;%{http_code}\n&amp;#34; http://edition.cnn.com/politics; curl -sL -o /dev/null -w &amp;#34;%{http_code}\n&amp;#34; http://edition.cnn.com/sport; curl -sL -o /dev/null -w &amp;#34;%{http_code}\n&amp;#34; http://edition.cnn.com/health&amp;#39;
404
200
200
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The request to &lt;a href=&#34;https://edition.cnn.com/politics&#34;&gt;edition.cnn.com/politics&lt;/a&gt; returned &lt;em&gt;404 Not Found&lt;/em&gt;, while requests
to &lt;a href=&#34;https://edition.cnn.com/sport&#34;&gt;edition.cnn.com/sport&lt;/a&gt; and
&lt;a href=&#34;https://edition.cnn.com/health&#34;&gt;edition.cnn.com/health&lt;/a&gt; returned &lt;em&gt;200 OK&lt;/em&gt;, as expected.&lt;/p&gt;
&lt;div&gt;
&lt;aside class=&#34;callout tip&#34;&gt;
&lt;div class=&#34;type&#34;&gt;&lt;svg class=&#34;large-icon&#34;&gt;&lt;use xlink:href=&#34;/v1.1/img/icons.svg#callout-tip&#34;/&gt;&lt;/svg&gt;&lt;/div&gt;
&lt;div class=&#34;content&#34;&gt;You may need to wait several seconds for the update of the &lt;code&gt;VirtualService&lt;/code&gt; to propagate to the egress
gateway.&lt;/div&gt;
&lt;/aside&gt;
&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Query the Mixer log and see that the information about the requests appears again in the log:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ kubectl -n istio-system logs -l istio-mixer-type=telemetry -c mixer | grep egress-access | grep cnn | tail -4
{&amp;#34;level&amp;#34;:&amp;#34;info&amp;#34;,&amp;#34;time&amp;#34;:&amp;#34;2019-01-29T07:55:59.686082Z&amp;#34;,&amp;#34;instance&amp;#34;:&amp;#34;egress-access.logentry.istio-system&amp;#34;,&amp;#34;destination&amp;#34;:&amp;#34;edition.cnn.com&amp;#34;,&amp;#34;path&amp;#34;:&amp;#34;/politics&amp;#34;,&amp;#34;reporterUID&amp;#34;:&amp;#34;kubernetes://istio-egressgateway-747b6764b8-44rrh.istio-system&amp;#34;,&amp;#34;responseCode&amp;#34;:404,&amp;#34;responseSize&amp;#34;:0,&amp;#34;sourcePrincipal&amp;#34;:&amp;#34;cluster.local/ns/default/sa/sleep&amp;#34;}
{&amp;#34;level&amp;#34;:&amp;#34;info&amp;#34;,&amp;#34;time&amp;#34;:&amp;#34;2019-01-29T07:55:59.697565Z&amp;#34;,&amp;#34;instance&amp;#34;:&amp;#34;egress-access.logentry.istio-system&amp;#34;,&amp;#34;destination&amp;#34;:&amp;#34;edition.cnn.com&amp;#34;,&amp;#34;path&amp;#34;:&amp;#34;/sport&amp;#34;,&amp;#34;reporterUID&amp;#34;:&amp;#34;kubernetes://istio-egressgateway-747b6764b8-44rrh.istio-system&amp;#34;,&amp;#34;responseCode&amp;#34;:200,&amp;#34;responseSize&amp;#34;:2094561,&amp;#34;sourcePrincipal&amp;#34;:&amp;#34;cluster.local/ns/default/sa/sleep&amp;#34;}
{&amp;#34;level&amp;#34;:&amp;#34;info&amp;#34;,&amp;#34;time&amp;#34;:&amp;#34;2019-01-29T07:56:00.264498Z&amp;#34;,&amp;#34;instance&amp;#34;:&amp;#34;egress-access.logentry.istio-system&amp;#34;,&amp;#34;destination&amp;#34;:&amp;#34;edition.cnn.com&amp;#34;,&amp;#34;path&amp;#34;:&amp;#34;/health&amp;#34;,&amp;#34;reporterUID&amp;#34;:&amp;#34;kubernetes://istio-egressgateway-747b6764b8-44rrh.istio-system&amp;#34;,&amp;#34;responseCode&amp;#34;:200,&amp;#34;responseSize&amp;#34;:2157009,&amp;#34;sourcePrincipal&amp;#34;:&amp;#34;cluster.local/ns/default/sa/sleep&amp;#34;}
{&amp;#34;level&amp;#34;:&amp;#34;error&amp;#34;,&amp;#34;time&amp;#34;:&amp;#34;2019-01-29T07:55:59.686082Z&amp;#34;,&amp;#34;instance&amp;#34;:&amp;#34;egress-access.logentry.istio-system&amp;#34;,&amp;#34;destination&amp;#34;:&amp;#34;edition.cnn.com&amp;#34;,&amp;#34;path&amp;#34;:&amp;#34;/politics&amp;#34;,&amp;#34;reporterUID&amp;#34;:&amp;#34;kubernetes://istio-egressgateway-747b6764b8-44rrh.istio-system&amp;#34;,&amp;#34;responseCode&amp;#34;:404,&amp;#34;responseSize&amp;#34;:0,&amp;#34;sourcePrincipal&amp;#34;:&amp;#34;cluster.local/ns/default/sa/sleep&amp;#34;}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You still get info and error messages regarding accesses to
&lt;a href=&#34;https://edition.cnn.com/politics&#34;&gt;edition.cnn.com/politics&lt;/a&gt;, however this time the &lt;code&gt;responseCode&lt;/code&gt; is &lt;code&gt;404&lt;/code&gt;, as
expected.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;While implementing access control using Istio routing worked for us in this simple case, it would not suffice for more
complex cases. For example, the organization may want to allow access to
&lt;a href=&#34;https://edition.cnn.com/politics&#34;&gt;edition.cnn.com/politics&lt;/a&gt; under certain conditions, so more complex policy logic than
just filtering by URL paths will be required. You may want to apply &lt;a href=&#34;/v1.1/blog/2017/adapter-model/&#34;&gt;Istio Mixer Adapters&lt;/a&gt;,
for example
&lt;a href=&#34;/v1.1/docs/tasks/policy-enforcement/denial-and-list/#attribute-based-whitelists-or-blacklists&#34;&gt;white lists or black lists&lt;/a&gt;
of allowed/forbidden URL paths, respectively.
&lt;a href=&#34;/v1.1/docs/reference/config/policy-and-telemetry/istio.policy.v1beta1/&#34;&gt;Policy Rules&lt;/a&gt; allow specifying complex conditions,
specified in a &lt;a href=&#34;/v1.1/docs/reference/config/policy-and-telemetry/expression-language/&#34;&gt;rich expression language&lt;/a&gt;, which
includes AND and OR logical operators. The rules can be reused for both logging and policy checks. More advanced users
may want to apply &lt;a href=&#34;/v1.1/docs/concepts/security/#authorization&#34;&gt;Istio Role-Based Access Control&lt;/a&gt;.&lt;/p&gt;
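&lt;p&gt;As a sketch of the expression language, a rule&amp;rsquo;s &lt;code&gt;match&lt;/code&gt; condition can combine AND and OR
operators; the &lt;em&gt;/opinion&lt;/em&gt; path below is hypothetical and only illustrates the syntax:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-yaml&#39; data-expandlinks=&#39;true&#39; &gt;match: request.host.endsWith(&amp;#34;.cnn.com&amp;#34;) &amp;amp;&amp;amp; (request.path.startsWith(&amp;#34;/politics&amp;#34;) || request.path.startsWith(&amp;#34;/opinion&amp;#34;))
&lt;/code&gt;&lt;/pre&gt;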
&lt;p&gt;An additional aspect is integration with remote access policy systems. If the organization in our use case operates some
&lt;a href=&#34;https://en.wikipedia.org/wiki/Identity_management&#34;&gt;Identity and Access Management&lt;/a&gt; system, you may want to configure
Istio to use access policy information from such a system. You implement this integration by applying
&lt;a href=&#34;/v1.1/blog/2017/adapter-model/&#34;&gt;Istio Mixer Adapters&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Cancel the access control by routing you used in this section and implement access control by Mixer policy checks
in the next section.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Replace the &lt;code&gt;VirtualService&lt;/code&gt; for &lt;em&gt;edition.cnn.com&lt;/em&gt; with your previous version from the &lt;a href=&#34;/v1.1/docs/examples/advanced-gateways/egress-gateway-tls-origination/#perform-tls-origination-with-an-egress-gateway&#34;&gt;Configure an Egress Gateway&lt;/a&gt; example:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ cat &amp;lt;&amp;lt;EOF | kubectl apply -f -
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: direct-cnn-through-egress-gateway
spec:
hosts:
- edition.cnn.com
gateways:
- istio-egressgateway
- mesh
http:
- match:
- gateways:
- mesh
port: 80
route:
- destination:
host: istio-egressgateway.istio-system.svc.cluster.local
subset: cnn
port:
number: 443
weight: 100
- match:
- gateways:
- istio-egressgateway
port: 443
route:
- destination:
host: edition.cnn.com
port:
number: 443
weight: 100
EOF
&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Send the previous three HTTP requests to &lt;em&gt;cnn.com&lt;/em&gt;. This time you should get three &lt;em&gt;200 OK&lt;/em&gt; responses, as
previously:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ kubectl exec -it $SOURCE_POD -c sleep -- sh -c &amp;#39;curl -sL -o /dev/null -w &amp;#34;%{http_code}\n&amp;#34; http://edition.cnn.com/politics; curl -sL -o /dev/null -w &amp;#34;%{http_code}\n&amp;#34; http://edition.cnn.com/sport; curl -sL -o /dev/null -w &amp;#34;%{http_code}\n&amp;#34; http://edition.cnn.com/health&amp;#39;
200
200
200
&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;div&gt;
&lt;aside class=&#34;callout tip&#34;&gt;
&lt;div class=&#34;type&#34;&gt;&lt;svg class=&#34;large-icon&#34;&gt;&lt;use xlink:href=&#34;/v1.1/img/icons.svg#callout-tip&#34;/&gt;&lt;/svg&gt;&lt;/div&gt;
&lt;div class=&#34;content&#34;&gt;You may need to wait several seconds for the update of the &lt;code&gt;VirtualService&lt;/code&gt; to propagate to the egress
gateway.&lt;/div&gt;
&lt;/aside&gt;
&lt;/div&gt;
&lt;h3 id=&#34;access-control-by-mixer-policy-checks&#34;&gt;Access control by Mixer policy checks&lt;/h3&gt;
&lt;p&gt;In this step you use the whitelist variety of the Mixer
&lt;a href=&#34;/v1.1/docs/reference/config/policy-and-telemetry/adapters/list/&#34;&gt;&lt;code&gt;Listchecker&lt;/code&gt; adapter&lt;/a&gt;.
You define a &lt;code&gt;listentry&lt;/code&gt; with the URL path of the request and a &lt;code&gt;listchecker&lt;/code&gt; to check the &lt;code&gt;listentry&lt;/code&gt; using a
static list of allowed URL paths, specified by the &lt;code&gt;overrides&lt;/code&gt; field. For an external &lt;a href=&#34;https://en.wikipedia.org/wiki/Identity_management&#34;&gt;Identity and Access Management&lt;/a&gt; system, use the &lt;code&gt;providerurl&lt;/code&gt; field instead. The updated
diagram of the instances, rules and handlers appears below. Note that you reuse the same policy rule, &lt;code&gt;handle-cnn-access&lt;/code&gt;,
both for logging and for access policy checks.&lt;/p&gt;
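&lt;p&gt;For illustration only, a &lt;code&gt;listchecker&lt;/code&gt; backed by an external list provider might look similar to
the following sketch; the provider URL is hypothetical and the refresh interval is an example value:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-yaml&#39; data-expandlinks=&#39;true&#39; &gt;apiVersion: &amp;#34;config.istio.io/v1alpha2&amp;#34;
kind: listchecker
metadata:
  name: iam-path-checker
  namespace: istio-system
spec:
  providerUrl: https://iam.example.com/allowed-paths # hypothetical provider endpoint
  refreshInterval: 60s
  blacklist: false
&lt;/code&gt;&lt;/pre&gt;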
&lt;figure style=&#34;width:80%&#34;&gt;
&lt;div class=&#34;wrapper-with-intrinsic-ratio&#34; style=&#34;padding-bottom:52.79420593027812%&#34;&gt;
&lt;a data-skipendnotes=&#34;true&#34; href=&#34;/v1.1/blog/2018/egress-monitoring-access-control/egress-adapters-monitoring-policy.svg&#34; title=&#34;Instances, rules and handlers for egress monitoring and access policies&#34;&gt;
&lt;img class=&#34;element-to-stretch&#34; src=&#34;/v1.1/blog/2018/egress-monitoring-access-control/egress-adapters-monitoring-policy.svg&#34; alt=&#34;Instances, rules and handlers for egress monitoring and access policies&#34; /&gt;
&lt;/a&gt;
&lt;/div&gt;
&lt;figcaption&gt;Instances, rules and handlers for egress monitoring and access policies&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Define &lt;code&gt;path-checker&lt;/code&gt; and &lt;code&gt;request-path&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ cat &amp;lt;&amp;lt;EOF | kubectl create -f -
apiVersion: &amp;#34;config.istio.io/v1alpha2&amp;#34;
kind: listchecker
metadata:
name: path-checker
namespace: istio-system
spec:
overrides: [&amp;#34;/health&amp;#34;, &amp;#34;/sport&amp;#34;] # overrides provide a static list
blacklist: false
---
apiVersion: &amp;#34;config.istio.io/v1alpha2&amp;#34;
kind: listentry
metadata:
name: request-path
namespace: istio-system
spec:
value: request.path
EOF
&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Modify the &lt;code&gt;handle-cnn-access&lt;/code&gt; policy rule to send &lt;code&gt;request-path&lt;/code&gt; instances to the &lt;code&gt;path-checker&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ cat &amp;lt;&amp;lt;EOF | kubectl apply -f -
# Rule handle egress access to cnn.com
apiVersion: &amp;#34;config.istio.io/v1alpha2&amp;#34;
kind: rule
metadata:
name: handle-cnn-access
namespace: istio-system
spec:
match: request.host.endsWith(&amp;#34;.cnn.com&amp;#34;) &amp;amp;&amp;amp; context.reporter.uid.startsWith(&amp;#34;kubernetes://istio-egressgateway&amp;#34;)
actions:
- handler: egress-access-logger.stdio
instances:
- egress-access.logentry
- handler: path-checker.listchecker
instances:
- request-path.listentry
EOF
&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Perform your usual test by sending HTTP requests to
&lt;a href=&#34;https://edition.cnn.com/politics&#34;&gt;edition.cnn.com/politics&lt;/a&gt;, &lt;a href=&#34;https://edition.cnn.com/sport&#34;&gt;edition.cnn.com/sport&lt;/a&gt;
and &lt;a href=&#34;https://edition.cnn.com/health&#34;&gt;edition.cnn.com/health&lt;/a&gt;. As expected, the request to
&lt;a href=&#34;https://edition.cnn.com/politics&#34;&gt;edition.cnn.com/politics&lt;/a&gt; returns &lt;em&gt;403&lt;/em&gt; (Forbidden).&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ kubectl exec -it $SOURCE_POD -c sleep -- sh -c &amp;#39;curl -sL -o /dev/null -w &amp;#34;%{http_code}\n&amp;#34; http://edition.cnn.com/politics; curl -sL -o /dev/null -w &amp;#34;%{http_code}\n&amp;#34; http://edition.cnn.com/sport; curl -sL -o /dev/null -w &amp;#34;%{http_code}\n&amp;#34; http://edition.cnn.com/health&amp;#39;
403
200
200
&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;h3 id=&#34;access-control-by-mixer-policy-checks-part-2&#34;&gt;Access control by Mixer policy checks, part 2&lt;/h3&gt;
&lt;p&gt;After the organization in our use case managed to configure logging and access control, it decided to extend its access
policy by allowing applications with a special
&lt;a href=&#34;https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/&#34;&gt;Service Account&lt;/a&gt; to access any topic of &lt;em&gt;cnn.com&lt;/em&gt; without being monitored. You&amp;rsquo;ll see how to configure this requirement in Istio.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Start the &lt;a href=&#34;https://github.com/istio/istio/tree/release-1.1/samples/sleep&#34;&gt;sleep&lt;/a&gt; sample with the &lt;code&gt;politics&lt;/code&gt; service account.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ sed &amp;#39;s/: sleep/: politics/g&amp;#39; samples/sleep/sleep.yaml | kubectl create -f -
serviceaccount &amp;#34;politics&amp;#34; created
service &amp;#34;politics&amp;#34; created
deployment &amp;#34;politics&amp;#34; created
&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Define the &lt;code&gt;SOURCE_POD_POLITICS&lt;/code&gt; shell variable to hold the name of the source pod with the &lt;code&gt;politics&lt;/code&gt; service
account, for sending requests to external services.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ export SOURCE_POD_POLITICS=$(kubectl get pod -l app=politics -o jsonpath={.items..metadata.name})
&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Perform your usual test of sending three HTTP requests, this time from &lt;code&gt;SOURCE_POD_POLITICS&lt;/code&gt;.
The request to &lt;a href=&#34;https://edition.cnn.com/politics&#34;&gt;edition.cnn.com/politics&lt;/a&gt; returns &lt;em&gt;403&lt;/em&gt;, since you have not yet configured
an exception for the &lt;em&gt;politics&lt;/em&gt; service account.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ kubectl exec -it $SOURCE_POD_POLITICS -c politics -- sh -c &amp;#39;curl -sL -o /dev/null -w &amp;#34;%{http_code}\n&amp;#34; http://edition.cnn.com/politics; curl -sL -o /dev/null -w &amp;#34;%{http_code}\n&amp;#34; http://edition.cnn.com/sport; curl -sL -o /dev/null -w &amp;#34;%{http_code}\n&amp;#34; http://edition.cnn.com/health&amp;#39;
403
200
200
&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Query the Mixer log and see that the information about the requests from the &lt;em&gt;politics&lt;/em&gt; service account appears in
the log:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ kubectl -n istio-system logs -l istio-mixer-type=telemetry -c mixer | grep egress-access | grep cnn | tail -4
{&amp;#34;level&amp;#34;:&amp;#34;info&amp;#34;,&amp;#34;time&amp;#34;:&amp;#34;2019-01-29T08:04:42.559812Z&amp;#34;,&amp;#34;instance&amp;#34;:&amp;#34;egress-access.logentry.istio-system&amp;#34;,&amp;#34;destination&amp;#34;:&amp;#34;edition.cnn.com&amp;#34;,&amp;#34;path&amp;#34;:&amp;#34;/politics&amp;#34;,&amp;#34;reporterUID&amp;#34;:&amp;#34;kubernetes://istio-egressgateway-747b6764b8-44rrh.istio-system&amp;#34;,&amp;#34;responseCode&amp;#34;:403,&amp;#34;responseSize&amp;#34;:84,&amp;#34;sourcePrincipal&amp;#34;:&amp;#34;cluster.local/ns/default/sa/politics&amp;#34;}
{&amp;#34;level&amp;#34;:&amp;#34;info&amp;#34;,&amp;#34;time&amp;#34;:&amp;#34;2019-01-29T08:04:42.568424Z&amp;#34;,&amp;#34;instance&amp;#34;:&amp;#34;egress-access.logentry.istio-system&amp;#34;,&amp;#34;destination&amp;#34;:&amp;#34;edition.cnn.com&amp;#34;,&amp;#34;path&amp;#34;:&amp;#34;/sport&amp;#34;,&amp;#34;reporterUID&amp;#34;:&amp;#34;kubernetes://istio-egressgateway-747b6764b8-44rrh.istio-system&amp;#34;,&amp;#34;responseCode&amp;#34;:200,&amp;#34;responseSize&amp;#34;:2094561,&amp;#34;sourcePrincipal&amp;#34;:&amp;#34;cluster.local/ns/default/sa/politics&amp;#34;}
{&amp;#34;level&amp;#34;:&amp;#34;error&amp;#34;,&amp;#34;time&amp;#34;:&amp;#34;2019-01-29T08:04:42.559812Z&amp;#34;,&amp;#34;instance&amp;#34;:&amp;#34;egress-access.logentry.istio-system&amp;#34;,&amp;#34;destination&amp;#34;:&amp;#34;edition.cnn.com&amp;#34;,&amp;#34;path&amp;#34;:&amp;#34;/politics&amp;#34;,&amp;#34;reporterUID&amp;#34;:&amp;#34;kubernetes://istio-egressgateway-747b6764b8-44rrh.istio-system&amp;#34;,&amp;#34;responseCode&amp;#34;:403,&amp;#34;responseSize&amp;#34;:84,&amp;#34;sourcePrincipal&amp;#34;:&amp;#34;cluster.local/ns/default/sa/politics&amp;#34;}
{&amp;#34;level&amp;#34;:&amp;#34;info&amp;#34;,&amp;#34;time&amp;#34;:&amp;#34;2019-01-29T08:04:42.615641Z&amp;#34;,&amp;#34;instance&amp;#34;:&amp;#34;egress-access.logentry.istio-system&amp;#34;,&amp;#34;destination&amp;#34;:&amp;#34;edition.cnn.com&amp;#34;,&amp;#34;path&amp;#34;:&amp;#34;/health&amp;#34;,&amp;#34;reporterUID&amp;#34;:&amp;#34;kubernetes://istio-egressgateway-747b6764b8-44rrh.istio-system&amp;#34;,&amp;#34;responseCode&amp;#34;:200,&amp;#34;responseSize&amp;#34;:2157009,&amp;#34;sourcePrincipal&amp;#34;:&amp;#34;cluster.local/ns/default/sa/politics&amp;#34;}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Note that &lt;code&gt;sourcePrincipal&lt;/code&gt; is &lt;code&gt;cluster.local/ns/default/sa/politics&lt;/code&gt; which represents the &lt;code&gt;politics&lt;/code&gt; service
account in the &lt;code&gt;default&lt;/code&gt; namespace.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Redefine the &lt;code&gt;handle-cnn-access&lt;/code&gt; and &lt;code&gt;handle-politics&lt;/code&gt; policy rules to exempt the applications with the &lt;em&gt;politics&lt;/em&gt;
service account from monitoring and policy enforcement.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ cat &amp;lt;&amp;lt;EOF | kubectl apply -f -
# Rule to handle access to *.cnn.com/politics
apiVersion: &amp;#34;config.istio.io/v1alpha2&amp;#34;
kind: rule
metadata:
name: handle-politics
namespace: istio-system
spec:
match: request.host.endsWith(&amp;#34;cnn.com&amp;#34;) &amp;amp;&amp;amp; context.reporter.uid.startsWith(&amp;#34;kubernetes://istio-egressgateway&amp;#34;) &amp;amp;&amp;amp; request.path.startsWith(&amp;#34;/politics&amp;#34;) &amp;amp;&amp;amp; source.principal != &amp;#34;cluster.local/ns/default/sa/politics&amp;#34;
actions:
- handler: egress-error-logger.stdio
instances:
- egress-access.logentry
---
# Rule to handle egress access to cnn.com
apiVersion: &amp;#34;config.istio.io/v1alpha2&amp;#34;
kind: rule
metadata:
name: handle-cnn-access
namespace: istio-system
spec:
match: request.host.endsWith(&amp;#34;.cnn.com&amp;#34;) &amp;amp;&amp;amp; context.reporter.uid.startsWith(&amp;#34;kubernetes://istio-egressgateway&amp;#34;) &amp;amp;&amp;amp; source.principal != &amp;#34;cluster.local/ns/default/sa/politics&amp;#34;
actions:
- handler: egress-access-logger.stdio
instances:
- egress-access.logentry
- handler: path-checker.listchecker
instances:
- request-path.listentry
EOF
&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Perform your usual test from &lt;code&gt;SOURCE_POD&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ kubectl exec -it $SOURCE_POD -c sleep -- sh -c &amp;#39;curl -sL -o /dev/null -w &amp;#34;%{http_code}\n&amp;#34; http://edition.cnn.com/politics; curl -sL -o /dev/null -w &amp;#34;%{http_code}\n&amp;#34; http://edition.cnn.com/sport; curl -sL -o /dev/null -w &amp;#34;%{http_code}\n&amp;#34; http://edition.cnn.com/health&amp;#39;
403
200
200
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Since &lt;code&gt;SOURCE_POD&lt;/code&gt; does not use the &lt;code&gt;politics&lt;/code&gt; service account, access to
&lt;a href=&#34;https://edition.cnn.com/politics&#34;&gt;edition.cnn.com/politics&lt;/a&gt; is forbidden, as before.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Perform the previous test from &lt;code&gt;SOURCE_POD_POLITICS&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ kubectl exec -it $SOURCE_POD_POLITICS -c politics -- sh -c &amp;#39;curl -sL -o /dev/null -w &amp;#34;%{http_code}\n&amp;#34; http://edition.cnn.com/politics; curl -sL -o /dev/null -w &amp;#34;%{http_code}\n&amp;#34; http://edition.cnn.com/sport; curl -sL -o /dev/null -w &amp;#34;%{http_code}\n&amp;#34; http://edition.cnn.com/health&amp;#39;
200
200
200
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Access to all the topics of &lt;em&gt;edition.cnn.com&lt;/em&gt; is allowed.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Examine the Mixer log and see that no more requests with &lt;code&gt;sourcePrincipal&lt;/code&gt; equal to
&lt;code&gt;cluster.local/ns/default/sa/politics&lt;/code&gt; appear in the log.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ kubectl -n istio-system logs -l istio-mixer-type=telemetry -c mixer | grep egress-access | grep cnn | tail -4
&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;h2 id=&#34;comparison-with-https-egress-traffic-control&#34;&gt;Comparison with HTTPS egress traffic control&lt;/h2&gt;
&lt;p&gt;In this use case, the applications use HTTP and the Istio egress gateway performs TLS origination for them. Alternatively,
the applications could originate TLS themselves by issuing HTTPS requests to &lt;em&gt;edition.cnn.com&lt;/em&gt;. In this section we
describe both approaches and their pros and cons.&lt;/p&gt;
&lt;p&gt;In the HTTP approach, the requests are sent unencrypted on the local host, intercepted by the Istio sidecar proxy and
forwarded to the egress gateway. Since you configure Istio to use mutual TLS between the sidecar proxy and the egress
gateway, the traffic leaves the pod encrypted. The egress gateway decrypts the traffic, inspects the URL path, the
HTTP method and headers, reports telemetry and performs policy checks. If the request is not blocked by some policy
check, the egress gateway performs TLS origination to the external destination (&lt;em&gt;cnn.com&lt;/em&gt; in our case), so the request
is encrypted again and sent encrypted to the external destination. The diagram below demonstrates the network flow of
this approach. The HTTP protocol inside the gateway designates the protocol as seen by the gateway after decryption.&lt;/p&gt;
&lt;figure style=&#34;width:80%&#34;&gt;
&lt;div class=&#34;wrapper-with-intrinsic-ratio&#34; style=&#34;padding-bottom:64.81718469808756%&#34;&gt;
&lt;a data-skipendnotes=&#34;true&#34; href=&#34;/v1.1/blog/2018/egress-monitoring-access-control/http-to-gateway.svg&#34; title=&#34;HTTP egress traffic through an egress gateway&#34;&gt;
&lt;img class=&#34;element-to-stretch&#34; src=&#34;/v1.1/blog/2018/egress-monitoring-access-control/http-to-gateway.svg&#34; alt=&#34;HTTP egress traffic through an egress gateway&#34; /&gt;
&lt;/a&gt;
&lt;/div&gt;
&lt;figcaption&gt;HTTP egress traffic through an egress gateway&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p&gt;The drawback of this approach is that the requests are sent unencrypted inside the pod, which may be against the security
policies of some organizations. Also, some SDKs have external service URLs hard-coded, including the protocol, so
sending HTTP requests could be impossible. The advantage of this approach is the ability to inspect HTTP methods,
headers and URL paths, and to apply policies based on them.&lt;/p&gt;
&lt;p&gt;In the HTTPS approach, the requests are encrypted end-to-end, from the application to the external destination. The
diagram below demonstrates the network flow of this approach. The HTTPS protocol inside the gateway designates the
protocol as seen by the gateway.&lt;/p&gt;
&lt;figure style=&#34;width:80%&#34;&gt;
&lt;div class=&#34;wrapper-with-intrinsic-ratio&#34; style=&#34;padding-bottom:64.81718469808756%&#34;&gt;
&lt;a data-skipendnotes=&#34;true&#34; href=&#34;/v1.1/blog/2018/egress-monitoring-access-control/https-to-gateway.svg&#34; title=&#34;HTTPS egress traffic through an egress gateway&#34;&gt;
&lt;img class=&#34;element-to-stretch&#34; src=&#34;/v1.1/blog/2018/egress-monitoring-access-control/https-to-gateway.svg&#34; alt=&#34;HTTPS egress traffic through an egress gateway&#34; /&gt;
&lt;/a&gt;
&lt;/div&gt;
&lt;figcaption&gt;HTTPS egress traffic through an egress gateway&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p&gt;End-to-end HTTPS is considered the better approach from the security point of view. However, since the traffic is
encrypted, the Istio proxies and the egress gateway can only see the source and destination IPs and the &lt;a href=&#34;https://en.wikipedia.org/wiki/Server_Name_Indication&#34;&gt;SNI&lt;/a&gt; of the destination. Since you configure Istio to use mutual TLS between the sidecar proxy
and the egress gateway, the &lt;a href=&#34;/v1.1/docs/concepts/security/#istio-identity&#34;&gt;identity of the source&lt;/a&gt; is also known.
The gateway is unable to inspect the URL path, the HTTP method and the headers of the requests, so no monitoring or
policy enforcement based on HTTP information is possible.
In our use case, the organization would be able to allow access to &lt;em&gt;edition.cnn.com&lt;/em&gt; and to specify which applications
are allowed to access it.
However, it would not be possible to allow or block access to specific URL paths of &lt;em&gt;edition.cnn.com&lt;/em&gt;:
neither blocking access to &lt;a href=&#34;https://edition.cnn.com/politics&#34;&gt;edition.cnn.com/politics&lt;/a&gt; nor monitoring such access is
possible with the HTTPS approach.&lt;/p&gt;
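&lt;p&gt;To make the host-level control of the HTTPS approach concrete, the following is a minimal sketch of a &lt;code&gt;ServiceEntry&lt;/code&gt; that makes only &lt;em&gt;edition.cnn.com&lt;/em&gt; on port 443 reachable from the mesh, matched by the SNI value. The entry name is an assumption for illustration, and the accompanying egress gateway configuration is omitted:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-yaml&#39; data-expandlinks=&#39;true&#39; &gt;apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: cnn
spec:
  hosts:
  - edition.cnn.com
  ports:
  - number: 443
    name: tls
    protocol: TLS
  resolution: DNS
&lt;/code&gt;&lt;/pre&gt;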
&lt;p&gt;Each organization will have to weigh the pros and cons of the two approaches and choose the one most
appropriate to its needs.&lt;/p&gt;
&lt;h2 id=&#34;summary&#34;&gt;Summary&lt;/h2&gt;
&lt;p&gt;In this blog post we showed how different monitoring and policy mechanisms of Istio can be applied to HTTP egress
traffic. Monitoring can be implemented by configuring a logging adapter. Access
policies can be implemented by configuring &lt;code&gt;VirtualServices&lt;/code&gt; or by configuring various policy check adapters. We
demonstrated a simple policy that allowed certain URL paths only. We also showed a more complex policy that extended the
simple policy by making an exemption to the applications with a certain service account. Finally, we compared
HTTP-with-TLS-origination egress traffic with HTTPS egress traffic, in terms of control possibilities by Istio.&lt;/p&gt;
&lt;h2 id=&#34;cleanup&#34;&gt;Cleanup&lt;/h2&gt;
&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Perform the instructions in &lt;a href=&#34;/v1.1/docs/examples/advanced-gateways/egress-gateway//#cleanup&#34;&gt;Cleanup&lt;/a&gt; section of the
&lt;a href=&#34;/v1.1/docs/examples/advanced-gateways/egress-gateway//&#34;&gt;Configure an Egress Gateway&lt;/a&gt; example.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Delete the logging and policy checks configuration:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ kubectl delete logentry egress-access -n istio-system
$ kubectl delete stdio egress-error-logger -n istio-system
$ kubectl delete stdio egress-access-logger -n istio-system
$ kubectl delete rule handle-politics -n istio-system
$ kubectl delete rule handle-cnn-access -n istio-system
$ kubectl delete -n istio-system listchecker path-checker
$ kubectl delete -n istio-system listentry request-path
&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Delete the &lt;em&gt;politics&lt;/em&gt; source pod:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ sed &amp;#39;s/: sleep/: politics/g&amp;#39; samples/sleep/sleep.yaml | kubectl delete -f -
serviceaccount &amp;#34;politics&amp;#34; deleted
service &amp;#34;politics&amp;#34; deleted
deployment &amp;#34;politics&amp;#34; deleted
&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;
&lt;/ol&gt;</description><pubDate>Fri, 22 Jun 2018 00:00:00 +0000</pubDate><link>/v1.1/blog/2018/egress-monitoring-access-control/</link><author>Vadim Eisenberg and Ronen Schaffer (IBM)</author><guid isPermaLink="true">/v1.1/blog/2018/egress-monitoring-access-control/</guid><category>egress</category><category>traffic-management</category><category>access-control</category><category>monitoring</category></item><item><title>Introducing the Istio v1alpha3 routing API</title><description>
&lt;p&gt;Up until now, Istio has provided a simple API for traffic management using four configuration resources:
&lt;code&gt;RouteRule&lt;/code&gt;, &lt;code&gt;DestinationPolicy&lt;/code&gt;, &lt;code&gt;EgressRule&lt;/code&gt;, and (Kubernetes) &lt;code&gt;Ingress&lt;/code&gt;.
With this API, users have been able to easily manage the flow of traffic in an Istio service mesh.
The API has allowed users to route requests to specific versions of services, inject delays and failures for resilience
testing, add timeouts and circuit breakers, and more, all without changing the application code itself.&lt;/p&gt;
&lt;p&gt;While this functionality has proven to be a very compelling part of Istio, user feedback has also shown that this API does
have some shortcomings, specifically when using it to manage very large applications containing thousands of services, and
when working with protocols other than HTTP. Furthermore, the use of Kubernetes &lt;code&gt;Ingress&lt;/code&gt; resources to configure external
traffic has proven to be woefully insufficient for our needs.&lt;/p&gt;
&lt;p&gt;To address these, and other concerns, a new traffic management API, a.k.a. &lt;code&gt;v1alpha3&lt;/code&gt;, is being introduced, which will
completely replace the previous API going forward. Although the &lt;code&gt;v1alpha3&lt;/code&gt; model is fundamentally the same, it is not
backward compatible and will require manual conversion from the old API.&lt;/p&gt;
&lt;p&gt;To justify this disruption, the &lt;code&gt;v1alpha3&lt;/code&gt; API has gone through a long and painstaking community
review process that has hopefully resulted in a greatly improved API that will stand the test of time. In this article,
we will introduce the new configuration model and attempt to explain some of the motivation and design principles that
influenced it.&lt;/p&gt;
&lt;h2 id=&#34;design-principles&#34;&gt;Design principles&lt;/h2&gt;
&lt;p&gt;A few key design principles played a role in the routing model redesign:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Explicitly model infrastructure as well as intent. For example, in addition to configuring an ingress gateway, the
component (controller) implementing it can also be specified.&lt;/li&gt;
&lt;li&gt;The authoring model should be &amp;ldquo;producer oriented&amp;rdquo; and &amp;ldquo;host centric&amp;rdquo; as opposed to compositional. For example, all
rules associated with a particular host are configured together, instead of individually.&lt;/li&gt;
&lt;li&gt;Clear separation of routing from post-routing behaviors.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;configuration-resources-in-v1alpha3&#34;&gt;Configuration resources in v1alpha3&lt;/h2&gt;
&lt;p&gt;A typical mesh will have one or more load balancers (we call them gateways)
that terminate TLS from external networks and allow traffic into the mesh.
Traffic then flows through internal services via sidecar proxies.
It is also common for applications to consume external
services (e.g., Google Maps API). These may be called directly or, in certain deployments, all traffic
exiting the mesh may be forced through dedicated egress gateways. The following diagram depicts
this mental model.&lt;/p&gt;
&lt;figure style=&#34;width:80%&#34;&gt;
&lt;div class=&#34;wrapper-with-intrinsic-ratio&#34; style=&#34;padding-bottom:35.204472660409245%&#34;&gt;
&lt;a data-skipendnotes=&#34;true&#34; href=&#34;/v1.1/blog/2018/v1alpha3-routing/./gateways.svg&#34; title=&#34;Gateways in an Istio service mesh&#34;&gt;
&lt;img class=&#34;element-to-stretch&#34; src=&#34;/v1.1/blog/2018/v1alpha3-routing/./gateways.svg&#34; alt=&#34;Role of gateways in the mesh&#34; /&gt;
&lt;/a&gt;
&lt;/div&gt;
&lt;figcaption&gt;Gateways in an Istio service mesh&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p&gt;With the above setup in mind, &lt;code&gt;v1alpha3&lt;/code&gt; introduces the following new
configuration resources to control traffic routing into, within, and out of the mesh.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;code&gt;Gateway&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;VirtualService&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;DestinationRule&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;ServiceEntry&lt;/code&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;code&gt;VirtualService&lt;/code&gt;, &lt;code&gt;DestinationRule&lt;/code&gt;, and &lt;code&gt;ServiceEntry&lt;/code&gt; replace &lt;code&gt;RouteRule&lt;/code&gt;,
&lt;code&gt;DestinationPolicy&lt;/code&gt;, and &lt;code&gt;EgressRule&lt;/code&gt; respectively. The &lt;code&gt;Gateway&lt;/code&gt; is a
platform independent abstraction to model the traffic flowing into
dedicated middleboxes.&lt;/p&gt;
&lt;p&gt;The figure below depicts the flow of control across configuration
resources.&lt;/p&gt;
&lt;figure style=&#34;width:80%&#34;&gt;
&lt;div class=&#34;wrapper-with-intrinsic-ratio&#34; style=&#34;padding-bottom:41.164966727369595%&#34;&gt;
&lt;a data-skipendnotes=&#34;true&#34; href=&#34;/v1.1/blog/2018/v1alpha3-routing/./virtualservices-destrules.svg&#34; title=&#34;Relationship between different v1alpha3 elements&#34;&gt;
&lt;img class=&#34;element-to-stretch&#34; src=&#34;/v1.1/blog/2018/v1alpha3-routing/./virtualservices-destrules.svg&#34; alt=&#34;Relationship between different v1alpha3 elements&#34; /&gt;
&lt;/a&gt;
&lt;/div&gt;
&lt;figcaption&gt;Relationship between different v1alpha3 elements&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;h3 id=&#34;gateway&#34;&gt;&lt;code&gt;Gateway&lt;/code&gt;&lt;/h3&gt;
&lt;p&gt;A &lt;a href=&#34;/v1.1/docs/reference/config/networking/v1alpha3/gateway/&#34;&gt;&lt;code&gt;Gateway&lt;/code&gt;&lt;/a&gt;
configures a load balancer for HTTP/TCP traffic, regardless of
where it will be running. Any number of gateways can exist within the mesh
and multiple different gateway implementations can co-exist. In fact, a
gateway configuration can be bound to a particular workload by specifying
the set of workload (pod) labels as part of the configuration, allowing
users to reuse off-the-shelf network appliances by writing a simple gateway
controller.&lt;/p&gt;
&lt;p&gt;For ingress traffic management, you might ask: &lt;em&gt;Why not reuse Kubernetes Ingress APIs&lt;/em&gt;?
The Ingress APIs proved to be incapable of expressing Istio&amp;rsquo;s routing needs.
By trying to draw a common denominator across different HTTP proxies, the
Ingress is only able to support the most basic HTTP routing and ends up
pushing every other feature of modern proxies into non-portable
annotations.&lt;/p&gt;
&lt;p&gt;Istio &lt;code&gt;Gateway&lt;/code&gt; overcomes the &lt;code&gt;Ingress&lt;/code&gt; shortcomings by separating the
L4-L6 spec from L7. It only configures the L4-L6 functions (e.g., ports to
expose, TLS configuration) that are uniformly implemented by all good L7
proxies. Users can then use standard Istio rules to control HTTP
requests as well as TCP traffic entering a &lt;code&gt;Gateway&lt;/code&gt; by binding a
&lt;code&gt;VirtualService&lt;/code&gt; to it.&lt;/p&gt;
&lt;p&gt;For example, the following simple &lt;code&gt;Gateway&lt;/code&gt; configures a load balancer
to allow external HTTPS traffic for host &lt;code&gt;bookinfo.com&lt;/code&gt; into the mesh:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-yaml&#39; data-expandlinks=&#39;true&#39; &gt;apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: bookinfo-gateway
spec:
servers:
- port:
number: 443
name: https
protocol: HTTPS
hosts:
- bookinfo.com
tls:
mode: SIMPLE
serverCertificate: /tmp/tls.crt
privateKey: /tmp/tls.key
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;To configure the corresponding routes, a &lt;code&gt;VirtualService&lt;/code&gt; (described in the &lt;a href=&#34;#virtualservice&#34;&gt;following section&lt;/a&gt;)
must be defined for the same host and bound to the &lt;code&gt;Gateway&lt;/code&gt; using
the &lt;code&gt;gateways&lt;/code&gt; field in the configuration:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-yaml&#39; data-expandlinks=&#39;true&#39; &gt;apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: bookinfo
spec:
hosts:
- bookinfo.com
gateways:
- bookinfo-gateway # &amp;lt;---- bind to gateway
http:
- match:
- uri:
prefix: /reviews
route:
...
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The &lt;code&gt;Gateway&lt;/code&gt; can be used to model an edge-proxy or a purely internal proxy
as shown in the first figure. Irrespective of the location, all gateways
can be configured and controlled in the same way.&lt;/p&gt;
&lt;h3 id=&#34;virtualservice&#34;&gt;&lt;code&gt;VirtualService&lt;/code&gt;&lt;/h3&gt;
&lt;p&gt;Replacing route rules with something called &amp;ldquo;virtual services&amp;rdquo; might seem peculiar at first, but in reality it&amp;rsquo;s
fundamentally a much better name for what is being configured, especially after redesigning the API to address the
scalability issues of the previous model.&lt;/p&gt;
&lt;p&gt;In effect, what has changed is that instead of configuring routing using a set of individual configuration resources
(rules) for a particular destination service, each containing a precedence field to control the order of evaluation, we
now configure the (virtual) destination itself, with all of its rules in an ordered list within a corresponding
&lt;a href=&#34;/v1.1/docs/reference/config/networking/v1alpha3/virtual-service/&#34;&gt;&lt;code&gt;VirtualService&lt;/code&gt;&lt;/a&gt; resource.
For example, where previously we had two &lt;code&gt;RouteRule&lt;/code&gt; resources for the
&lt;a href=&#34;/v1.1/docs/examples/bookinfo/&#34;&gt;Bookinfo&lt;/a&gt; applications &lt;code&gt;reviews&lt;/code&gt; service, like this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-yaml&#39; data-expandlinks=&#39;true&#39; &gt;apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
name: reviews-default
spec:
destination:
name: reviews
precedence: 1
route:
- labels:
version: v1
---
apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
name: reviews-test-v2
spec:
destination:
name: reviews
precedence: 2
match:
request:
headers:
cookie:
regex: &amp;#34;^(.*?;)?(user=jason)(;.*)?$&amp;#34;
route:
- labels:
version: v2
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;In &lt;code&gt;v1alpha3&lt;/code&gt;, we provide the same configuration in a single &lt;code&gt;VirtualService&lt;/code&gt; resource:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-yaml&#39; data-expandlinks=&#39;true&#39; &gt;apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: reviews
spec:
hosts:
- reviews
http:
- match:
- headers:
cookie:
regex: &amp;#34;^(.*?;)?(user=jason)(;.*)?$&amp;#34;
route:
- destination:
host: reviews
subset: v2
- route:
- destination:
host: reviews
subset: v1
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;As you can see, both of the rules for the &lt;code&gt;reviews&lt;/code&gt; service are consolidated in one place, which at first may or may not
seem preferable. However, if you look closer at this new model, you&amp;rsquo;ll see there are fundamental differences that make
&lt;code&gt;v1alpha3&lt;/code&gt; vastly more functional.&lt;/p&gt;
&lt;p&gt;First of all, notice that the destination service for the &lt;code&gt;VirtualService&lt;/code&gt; is specified using a &lt;code&gt;hosts&lt;/code&gt; field (repeated field, in fact) and is then again specified in a &lt;code&gt;destination&lt;/code&gt; field of each of the route specifications. This is a
very important difference from the previous model.&lt;/p&gt;
&lt;p&gt;A &lt;code&gt;VirtualService&lt;/code&gt; describes the mapping from one or more user-addressable destinations to the actual destination workloads inside the mesh. In our example, they are the same; however, the user-addressed hosts can be any DNS
names, with an optional wildcard prefix or CIDR prefix, that will be used to address the service. This can be particularly
useful when turning a monolith into a composite service built out of distinct microservices, without requiring the
consumers of the service to adapt to the transition.&lt;/p&gt;
&lt;p&gt;For example, the following rule allows users to address both the &lt;code&gt;reviews&lt;/code&gt; and &lt;code&gt;ratings&lt;/code&gt; services of the Bookinfo application
as if they are parts of a bigger (virtual) service at &lt;code&gt;http://bookinfo.com/&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-yaml&#39; data-expandlinks=&#39;true&#39; &gt;apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: bookinfo
spec:
hosts:
- bookinfo.com
http:
- match:
- uri:
prefix: /reviews
route:
- destination:
host: reviews
- match:
- uri:
prefix: /ratings
route:
- destination:
host: ratings
...
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The hosts of a &lt;code&gt;VirtualService&lt;/code&gt; do not actually have to be part of the service registry; they are simply virtual
destinations. This allows users to model traffic for virtual hosts that do not have routable entries inside the mesh.
These hosts can be exposed outside the mesh by binding the &lt;code&gt;VirtualService&lt;/code&gt; to a &lt;code&gt;Gateway&lt;/code&gt; configuration for the same host
(as described in the &lt;a href=&#34;#gateway&#34;&gt;previous section&lt;/a&gt;).&lt;/p&gt;
&lt;p&gt;In addition to this fundamental restructuring, &lt;code&gt;VirtualService&lt;/code&gt; includes several other important changes:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Multiple match conditions can be expressed inside the &lt;code&gt;VirtualService&lt;/code&gt; configuration, reducing the need for redundant
rules.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Each service version has a name (called a service subset). The set of pods/VMs belonging to a subset is defined in a
&lt;code&gt;DestinationRule&lt;/code&gt;, described in the following section.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;VirtualService&lt;/code&gt; hosts can be specified using wildcard DNS prefixes to create a single rule for all matching services.
For example, in Kubernetes, to apply the same rewrite rule for all services in the &lt;code&gt;foo&lt;/code&gt; namespace, the &lt;code&gt;VirtualService&lt;/code&gt;
would use &lt;code&gt;*.foo.svc.cluster.local&lt;/code&gt; as the host.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;h3 id=&#34;destinationrule&#34;&gt;&lt;code&gt;DestinationRule&lt;/code&gt;&lt;/h3&gt;
&lt;p&gt;A &lt;a href=&#34;/v1.1/docs/reference/config/networking/v1alpha3/destination-rule/&#34;&gt;&lt;code&gt;DestinationRule&lt;/code&gt;&lt;/a&gt;
configures the set of policies to be applied while forwarding traffic to a service. Destination rules are
intended to be authored by service owners and describe the circuit breakers, load balancer settings, TLS settings, and so on.
&lt;code&gt;DestinationRule&lt;/code&gt; is more or less the same as its predecessor, &lt;code&gt;DestinationPolicy&lt;/code&gt;, with the following exceptions:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;The &lt;code&gt;host&lt;/code&gt; of a &lt;code&gt;DestinationRule&lt;/code&gt; can include wildcard prefixes, allowing a single rule to be specified for many actual
services.&lt;/li&gt;
&lt;li&gt;A &lt;code&gt;DestinationRule&lt;/code&gt; defines addressable &lt;code&gt;subsets&lt;/code&gt; (i.e., named versions) of the corresponding destination host. These
subsets are used in &lt;code&gt;VirtualService&lt;/code&gt; route specifications when sending traffic to specific versions of the service.
Naming versions this way allows us to cleanly refer to them across different virtual services, to simplify the stats that
Istio proxies emit, and to encode subsets in SNI headers.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;A &lt;code&gt;DestinationRule&lt;/code&gt; that configures policies and subsets for the reviews service might look something like this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-yaml&#39; data-expandlinks=&#39;true&#39; &gt;apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: reviews
spec:
host: reviews
trafficPolicy:
loadBalancer:
simple: RANDOM
subsets:
- name: v1
labels:
version: v1
- name: v2
labels:
version: v2
trafficPolicy:
loadBalancer:
simple: ROUND_ROBIN
- name: v3
labels:
version: v3
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Notice that, unlike &lt;code&gt;DestinationPolicy&lt;/code&gt;, multiple policies (e.g., default and v2-specific) are specified in a single
&lt;code&gt;DestinationRule&lt;/code&gt; configuration.&lt;/p&gt;
&lt;h3 id=&#34;serviceentry&#34;&gt;&lt;code&gt;ServiceEntry&lt;/code&gt;&lt;/h3&gt;
&lt;p&gt;&lt;a href=&#34;/v1.1/docs/reference/config/networking/v1alpha3/service-entry/&#34;&gt;&lt;code&gt;ServiceEntry&lt;/code&gt;&lt;/a&gt;
is used to add additional entries into the service registry that Istio maintains internally.
It is most commonly used to model traffic to external dependencies of the mesh,
such as APIs consumed from the web or traffic to services in legacy infrastructure.&lt;/p&gt;
&lt;p&gt;Everything you could previously configure using an &lt;code&gt;EgressRule&lt;/code&gt; can just as easily be done with a &lt;code&gt;ServiceEntry&lt;/code&gt;.
For example, access to a simple external service from inside the mesh can be enabled using a configuration
something like this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-yaml&#39; data-expandlinks=&#39;true&#39; &gt;apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
name: foo-ext
spec:
hosts:
- foo.com
ports:
- number: 80
name: http
protocol: HTTP
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;That said, &lt;code&gt;ServiceEntry&lt;/code&gt; has significantly more functionality than its predecessor.
First of all, a &lt;code&gt;ServiceEntry&lt;/code&gt; is not limited to external service configuration;
it can be of two types: mesh-internal or mesh-external.
Mesh-internal entries are like all other internal services but are used to explicitly add services
to the mesh. They can be used to add services as part of expanding the service mesh to include unmanaged infrastructure
(e.g., VMs added to a Kubernetes-based service mesh).
Mesh-external entries represent services external to the mesh.
For them, mutual TLS authentication is disabled and policy enforcement is performed on the client-side,
instead of on the usual server-side for internal service requests.&lt;/p&gt;
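&lt;p&gt;As a sketch, a mesh-internal entry for a service running on VMs might look something like this (the host name and endpoint addresses are hypothetical):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-yaml&#39; data-expandlinks=&#39;true&#39; &gt;apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: vm-svc
spec:
  hosts:
  - vm-svc.internal
  location: MESH_INTERNAL
  ports:
  - number: 8080
    name: http
    protocol: HTTP
  resolution: STATIC
  endpoints:
  - address: 10.1.1.1
  - address: 10.1.1.2
&lt;/code&gt;&lt;/pre&gt;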
&lt;p&gt;Because a &lt;code&gt;ServiceEntry&lt;/code&gt; configuration simply adds a destination to the internal service registry, it can be
used in conjunction with a &lt;code&gt;VirtualService&lt;/code&gt; and/or &lt;code&gt;DestinationRule&lt;/code&gt;, just like any other service in the registry.
The following &lt;code&gt;DestinationRule&lt;/code&gt;, for example, can be used to initiate mutual TLS connections for an external service:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-yaml&#39; data-expandlinks=&#39;true&#39; &gt;apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: foo-ext
spec:
host: foo.com
trafficPolicy:
tls:
mode: MUTUAL
clientCertificate: /etc/certs/myclientcert.pem
privateKey: /etc/certs/client_private_key.pem
caCertificates: /etc/certs/rootcacerts.pem
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;In addition to its expanded generality, &lt;code&gt;ServiceEntry&lt;/code&gt; provides several other improvements over &lt;code&gt;EgressRule&lt;/code&gt;
including the following:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;A single &lt;code&gt;ServiceEntry&lt;/code&gt; can configure multiple service endpoints, which previously would have required multiple
&lt;code&gt;EgressRules&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;The resolution mode for the endpoints is now configurable (&lt;code&gt;NONE&lt;/code&gt;, &lt;code&gt;STATIC&lt;/code&gt;, or &lt;code&gt;DNS&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;Additionally, we are working on addressing another pain point: the need to access secure external services over plain
text ports (e.g., &lt;code&gt;http://google.com:443&lt;/code&gt;). This should be fixed in the coming weeks, allowing you to directly access
&lt;code&gt;https://google.com&lt;/code&gt; from your application. Stay tuned for an Istio patch release (0.8.x) that addresses this limitation.&lt;/li&gt;
&lt;/ol&gt;
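&lt;p&gt;To illustrate the first two improvements, a single entry can cover several external hosts and specify how their endpoints are resolved (the host names below are illustrative):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-yaml&#39; data-expandlinks=&#39;true&#39; &gt;apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: external-apis
spec:
  hosts:
  - api1.example.com
  - api2.example.com
  ports:
  - number: 80
    name: http
    protocol: HTTP
  resolution: DNS
&lt;/code&gt;&lt;/pre&gt;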
&lt;h2 id=&#34;creating-and-deleting-v1alpha3-route-rules&#34;&gt;Creating and deleting v1alpha3 route rules&lt;/h2&gt;
&lt;p&gt;Because all route rules for a given destination are now stored together as an ordered
list in a single &lt;code&gt;VirtualService&lt;/code&gt; resource, adding a second and subsequent rules for a particular destination
is no longer done by creating a new (&lt;code&gt;RouteRule&lt;/code&gt;) resource, but instead by updating the one-and-only &lt;code&gt;VirtualService&lt;/code&gt;
resource for the destination.&lt;/p&gt;
&lt;p&gt;Old routing rules:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ kubectl apply -f my-second-rule-for-destination-abc.yaml
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;code&gt;v1alpha3&lt;/code&gt; routing rules:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ kubectl apply -f my-updated-rules-for-destination-abc.yaml
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Deleting route rules other than the last one for a particular destination is also done by updating
the existing resource using &lt;code&gt;kubectl apply&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;When adding or removing routes that refer to service versions, the &lt;code&gt;subsets&lt;/code&gt; will need to be updated in
the service&amp;rsquo;s corresponding &lt;code&gt;DestinationRule&lt;/code&gt;.
As you might have guessed, this is also done using &lt;code&gt;kubectl apply&lt;/code&gt;.&lt;/p&gt;
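&lt;p&gt;Putting this together, the one-and-only &lt;code&gt;VirtualService&lt;/code&gt; for a destination holds its rules as an ordered list. A sketch for a hypothetical service (the service name, header value, and subsets here are purely illustrative) might look something like this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-yaml&#39; data-expandlinks=&#39;true&#39; &gt;apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: abc
spec:
  hosts:
  - abc
  http:
  - match:
    - headers:
        end-user:
          exact: jason
    route:
    - destination:
        host: abc
        subset: v2
  - route:
    - destination:
        host: abc
        subset: v1
&lt;/code&gt;&lt;/pre&gt;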
&lt;h2 id=&#34;summary&#34;&gt;Summary&lt;/h2&gt;
&lt;p&gt;The Istio &lt;code&gt;v1alpha3&lt;/code&gt; routing API has significantly more functionality than
its predecessor, but unfortunately is not backwards compatible, requiring a
one time manual conversion. The previous configuration resources,
&lt;code&gt;RouteRule&lt;/code&gt;, &lt;code&gt;DestinationPolicy&lt;/code&gt;, and &lt;code&gt;EgressRule&lt;/code&gt;, will not be supported
from Istio 0.9 onwards. Kubernetes users can continue to use &lt;code&gt;Ingress&lt;/code&gt; to
configure their edge load balancers for basic routing. However, advanced
routing features (e.g., traffic split across two versions) will require use
of &lt;code&gt;Gateway&lt;/code&gt;, a significantly more functional and highly
recommended &lt;code&gt;Ingress&lt;/code&gt; replacement.&lt;/p&gt;
&lt;h2 id=&#34;acknowledgments&#34;&gt;Acknowledgments&lt;/h2&gt;
&lt;p&gt;Credit for the routing model redesign and implementation work goes to the
following people (in alphabetical order):&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Frank Budinsky (IBM)&lt;/li&gt;
&lt;li&gt;Zack Butcher (Google)&lt;/li&gt;
&lt;li&gt;Greg Hanson (IBM)&lt;/li&gt;
&lt;li&gt;Costin Manolache (Google)&lt;/li&gt;
&lt;li&gt;Martin Ostrowski (Google)&lt;/li&gt;
&lt;li&gt;Shriram Rajagopalan (VMware)&lt;/li&gt;
&lt;li&gt;Louis Ryan (Google)&lt;/li&gt;
&lt;li&gt;Isaiah Snell-Feikema (IBM)&lt;/li&gt;
&lt;li&gt;Kuat Yessenov (Google)&lt;/li&gt;
&lt;/ul&gt;</description><pubDate>Wed, 25 Apr 2018 00:00:00 +0000</pubDate><link>/v1.1/blog/2018/v1alpha3-routing/</link><author>Frank Budinsky (IBM) and Shriram Rajagopalan (VMware)</author><guid isPermaLink="true">/v1.1/blog/2018/v1alpha3-routing/</guid><category>traffic-management</category></item><item><title>Configuring Istio Ingress with AWS NLB</title><description>
&lt;div&gt;
&lt;aside class=&#34;callout tip&#34;&gt;
&lt;div class=&#34;type&#34;&gt;&lt;svg class=&#34;large-icon&#34;&gt;&lt;use xlink:href=&#34;/v1.1/img/icons.svg#callout-tip&#34;/&gt;&lt;/svg&gt;&lt;/div&gt;
&lt;div class=&#34;content&#34;&gt;This post was updated on January 16, 2019 to include some usage warnings.&lt;/div&gt;
&lt;/aside&gt;
&lt;/div&gt;
&lt;p&gt;This post provides instructions for using and configuring Istio ingress with an &lt;a href=&#34;https://docs.aws.amazon.com/elasticloadbalancing/latest/network/introduction.html&#34;&gt;AWS Network Load Balancer&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;A Network Load Balancer (NLB) can be used instead of a Classic Load Balancer. See the &lt;a href=&#34;https://aws.amazon.com/elasticloadbalancing/details/#Product_comparisons&#34;&gt;comparison&lt;/a&gt; of the different AWS load balancers for more details.&lt;/p&gt;
&lt;h2 id=&#34;prerequisites&#34;&gt;Prerequisites&lt;/h2&gt;
&lt;p&gt;The following instructions require a Kubernetes &lt;strong&gt;1.9.0 or newer&lt;/strong&gt; cluster.&lt;/p&gt;
&lt;div&gt;
&lt;aside class=&#34;callout warning&#34;&gt;
&lt;div class=&#34;type&#34;&gt;
&lt;svg class=&#34;large-icon&#34;&gt;&lt;use xlink:href=&#34;/v1.1/img/icons.svg#callout-warning&#34;/&gt;&lt;/svg&gt;
&lt;/div&gt;
&lt;div class=&#34;content&#34;&gt;&lt;p&gt;Usage of AWS &lt;code&gt;nlb&lt;/code&gt; on Kubernetes is an Alpha feature and not recommended for production clusters.&lt;/p&gt;
&lt;p&gt;Usage of AWS &lt;code&gt;nlb&lt;/code&gt; does not support the creation of two or more Kubernetes clusters running Istio in the same zone as a result of &lt;a href=&#34;https://github.com/kubernetes/kubernetes/issues/69264&#34;&gt;Kubernetes Bug #69264&lt;/a&gt;.&lt;/p&gt;
&lt;/div&gt;
&lt;/aside&gt;
&lt;/div&gt;
&lt;h2 id=&#34;iam-policy&#34;&gt;IAM Policy&lt;/h2&gt;
&lt;p&gt;You need to attach a policy to the master role in order to be able to provision a network load balancer.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;p&gt;In the AWS &lt;code&gt;iam&lt;/code&gt; console, click on policies and create a new one:&lt;/p&gt;
&lt;figure style=&#34;width:80%&#34;&gt;
&lt;div class=&#34;wrapper-with-intrinsic-ratio&#34; style=&#34;padding-bottom:52.430278884462155%&#34;&gt;
&lt;a data-skipendnotes=&#34;true&#34; href=&#34;/v1.1/blog/2018/aws-nlb/./createpolicystart.png&#34; title=&#34;Create a new policy&#34;&gt;
&lt;img class=&#34;element-to-stretch&#34; src=&#34;/v1.1/blog/2018/aws-nlb/./createpolicystart.png&#34; alt=&#34;Create a new policy&#34; /&gt;
&lt;/a&gt;
&lt;/div&gt;
&lt;figcaption&gt;Create a new policy&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select &lt;code&gt;json&lt;/code&gt;:&lt;/p&gt;
&lt;figure style=&#34;width:80%&#34;&gt;
&lt;div class=&#34;wrapper-with-intrinsic-ratio&#34; style=&#34;padding-bottom:50.63492063492063%&#34;&gt;
&lt;a data-skipendnotes=&#34;true&#34; href=&#34;/v1.1/blog/2018/aws-nlb/./createpolicyjson.png&#34; title=&#34;Select json&#34;&gt;
&lt;img class=&#34;element-to-stretch&#34; src=&#34;/v1.1/blog/2018/aws-nlb/./createpolicyjson.png&#34; alt=&#34;Select json&#34; /&gt;
&lt;/a&gt;
&lt;/div&gt;
&lt;figcaption&gt;Select json&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Copy and paste the text below:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-json&#39; data-expandlinks=&#39;true&#39; &gt;{
&amp;#34;Version&amp;#34;: &amp;#34;2012-10-17&amp;#34;,
&amp;#34;Statement&amp;#34;: [
{
&amp;#34;Sid&amp;#34;: &amp;#34;kopsK8sNLBMasterPermsRestrictive&amp;#34;,
&amp;#34;Effect&amp;#34;: &amp;#34;Allow&amp;#34;,
&amp;#34;Action&amp;#34;: [
&amp;#34;ec2:DescribeVpcs&amp;#34;,
&amp;#34;elasticloadbalancing:AddTags&amp;#34;,
&amp;#34;elasticloadbalancing:CreateListener&amp;#34;,
&amp;#34;elasticloadbalancing:CreateTargetGroup&amp;#34;,
&amp;#34;elasticloadbalancing:DeleteListener&amp;#34;,
&amp;#34;elasticloadbalancing:DeleteTargetGroup&amp;#34;,
&amp;#34;elasticloadbalancing:DescribeListeners&amp;#34;,
&amp;#34;elasticloadbalancing:DescribeLoadBalancerPolicies&amp;#34;,
&amp;#34;elasticloadbalancing:DescribeTargetGroups&amp;#34;,
&amp;#34;elasticloadbalancing:DescribeTargetHealth&amp;#34;,
&amp;#34;elasticloadbalancing:ModifyListener&amp;#34;,
&amp;#34;elasticloadbalancing:ModifyTargetGroup&amp;#34;,
&amp;#34;elasticloadbalancing:RegisterTargets&amp;#34;,
&amp;#34;elasticloadbalancing:SetLoadBalancerPoliciesOfListener&amp;#34;
],
&amp;#34;Resource&amp;#34;: [
&amp;#34;*&amp;#34;
]
},
{
&amp;#34;Effect&amp;#34;: &amp;#34;Allow&amp;#34;,
&amp;#34;Action&amp;#34;: [
&amp;#34;ec2:DescribeVpcs&amp;#34;,
&amp;#34;ec2:DescribeRegions&amp;#34;
],
&amp;#34;Resource&amp;#34;: &amp;#34;*&amp;#34;
}
]
}
&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click review policy, fill in all fields, and click create policy:&lt;/p&gt;
&lt;figure style=&#34;width:80%&#34;&gt;
&lt;div class=&#34;wrapper-with-intrinsic-ratio&#34; style=&#34;padding-bottom:60.08097165991902%&#34;&gt;
&lt;a data-skipendnotes=&#34;true&#34; href=&#34;/v1.1/blog/2018/aws-nlb/./create_policy.png&#34; title=&#34;Validate policy&#34;&gt;
&lt;img class=&#34;element-to-stretch&#34; src=&#34;/v1.1/blog/2018/aws-nlb/./create_policy.png&#34; alt=&#34;Validate policy&#34; /&gt;
&lt;/a&gt;
&lt;/div&gt;
&lt;figcaption&gt;Validate policy&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click on roles, select your master node role, and click attach policy:&lt;/p&gt;
&lt;figure style=&#34;width:100%&#34;&gt;
&lt;div class=&#34;wrapper-with-intrinsic-ratio&#34; style=&#34;padding-bottom:30.328324986087924%&#34;&gt;
&lt;a data-skipendnotes=&#34;true&#34; href=&#34;/v1.1/blog/2018/aws-nlb/./roles_summary.png&#34; title=&#34;Attach policy&#34;&gt;
&lt;img class=&#34;element-to-stretch&#34; src=&#34;/v1.1/blog/2018/aws-nlb/./roles_summary.png&#34; alt=&#34;Attach policy&#34; /&gt;
&lt;/a&gt;
&lt;/div&gt;
&lt;figcaption&gt;Attach policy&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Your policy is now attached to your master node role.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;h2 id=&#34;generate-the-istio-manifest&#34;&gt;Generate the Istio manifest&lt;/h2&gt;
&lt;p&gt;To use an AWS &lt;code&gt;nlb&lt;/code&gt; load balancer, it is necessary to add an AWS-specific
annotation to the Istio installation. These instructions explain how to
add the annotation.&lt;/p&gt;
&lt;p&gt;Save this as the file &lt;code&gt;override.yaml&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-yaml&#39; data-expandlinks=&#39;true&#39; &gt;gateways:
istio-ingressgateway:
serviceAnnotations:
service.beta.kubernetes.io/aws-load-balancer-type: &amp;#34;nlb&amp;#34;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Generate a manifest with Helm:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ helm template install/kubernetes/helm/istio --namespace istio -f override.yaml &amp;gt; $HOME/istio.yaml
&lt;/code&gt;&lt;/pre&gt;</description><pubDate>Fri, 20 Apr 2018 00:00:00 +0000</pubDate><link>/v1.1/blog/2018/aws-nlb/</link><author>Julien SENON</author><guid isPermaLink="true">/v1.1/blog/2018/aws-nlb/</guid><category>ingress</category><category>traffic-management</category><category>aws</category></item><item><title>Istio Soft Multi-Tenancy Support</title><description>
&lt;p&gt;Multi-tenancy is commonly used in many environments across many different applications,
but the implementation details and functionality provided on a per tenant basis does not
follow one model in all environments. The &lt;a href=&#34;https://github.com/kubernetes/community/blob/master/wg-multitenancy/README.md&#34;&gt;Kubernetes multi-tenancy working group&lt;/a&gt;
is working to define the multi-tenant use cases and functionality that should be available
within Kubernetes. However, from their work so far it is clear that only &amp;ldquo;soft multi-tenancy&amp;rdquo;
is possible due to the inability to fully protect against malicious containers or workloads
gaining access to other tenant&amp;rsquo;s pods or kernel resources.&lt;/p&gt;
&lt;h2 id=&#34;soft-multi-tenancy&#34;&gt;Soft multi-tenancy&lt;/h2&gt;
&lt;p&gt;For this blog, &amp;ldquo;soft multi-tenancy&amp;rdquo; is defined as having a single Kubernetes control plane
with multiple Istio control planes and multiple meshes, one control plane and one mesh
per tenant. The cluster administrator gets control and visibility across all the Istio
control planes, while the tenant administrator only gets control of a specific Istio
instance. Separation between the tenants is provided by Kubernetes namespaces and RBAC.&lt;/p&gt;
&lt;p&gt;One use case for this deployment model is a shared corporate infrastructure where malicious
actions are not expected, but a clean separation of the tenants is still required.&lt;/p&gt;
&lt;p&gt;Potential future Istio multi-tenant deployment models are described at the bottom of this
blog.&lt;/p&gt;
&lt;div&gt;
&lt;aside class=&#34;callout tip&#34;&gt;
&lt;div class=&#34;type&#34;&gt;&lt;svg class=&#34;large-icon&#34;&gt;&lt;use xlink:href=&#34;/v1.1/img/icons.svg#callout-tip&#34;/&gt;&lt;/svg&gt;&lt;/div&gt;
&lt;div class=&#34;content&#34;&gt;This blog is a high-level description of how to deploy Istio in a
limited multi-tenancy environment. The &lt;a href=&#34;/v1.1/docs/&#34;&gt;docs&lt;/a&gt; section will be updated
when official multi-tenancy support is provided.&lt;/div&gt;
&lt;/aside&gt;
&lt;/div&gt;
&lt;h2 id=&#34;deployment&#34;&gt;Deployment&lt;/h2&gt;
&lt;h3 id=&#34;multiple-istio-control-planes&#34;&gt;Multiple Istio control planes&lt;/h3&gt;
&lt;p&gt;Deploying multiple Istio control planes starts by replacing all &lt;code&gt;namespace&lt;/code&gt; references
in a manifest file with the desired namespace. Using &lt;code&gt;istio.yaml&lt;/code&gt; as an example, if two tenant
level Istio control planes are required; the first can use the &lt;code&gt;istio.yaml&lt;/code&gt; default name of
&lt;code&gt;istio-system&lt;/code&gt; and a second control plane can be created by generating a new yaml file with
a different namespace. As an example, the following command creates a yaml file with
the Istio namespace of &lt;code&gt;istio-system1&lt;/code&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ cat istio.yaml | sed s/istio-system/istio-system1/g &amp;gt; istio-system1.yaml
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The &lt;code&gt;istio.yaml&lt;/code&gt; file contains the details of the Istio control plane deployment, including the
pods that make up the control plane (Mixer, Pilot, Ingress, Galley, CA). Deploying the two Istio
control plane yaml files:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ kubectl apply -f install/kubernetes/istio.yaml
$ kubectl apply -f install/kubernetes/istio-system1.yaml
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This results in two Istio control planes running in two namespaces.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
istio-system istio-ca-ffbb75c6f-98w6x 1/1 Running 0 15d
istio-system istio-ingress-68d65fc5c6-dnvfl 1/1 Running 0 15d
istio-system istio-mixer-5b9f8dffb5-8875r 3/3 Running 0 15d
istio-system istio-pilot-678fc976c8-b8tv6 2/2 Running 0 15d
istio-system1 istio-ca-5f496fdbcd-lqhlk 1/1 Running 0 15d
istio-system1 istio-ingress-68d65fc5c6-2vldg 1/1 Running 0 15d
istio-system1 istio-mixer-7d4f7b9968-66z44 3/3 Running 0 15d
istio-system1 istio-pilot-5bb6b7669c-779vb 2/2 Running 0 15d
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The Istio &lt;a href=&#34;/v1.1/docs/setup/kubernetes/additional-setup/sidecar-injection/&#34;&gt;sidecar&lt;/a&gt;
and &lt;a href=&#34;/v1.1/docs/tasks/telemetry/&#34;&gt;addons&lt;/a&gt; manifests, if required, must also be
deployed to match the configured &lt;code&gt;namespace&lt;/code&gt; in use by the tenant&amp;rsquo;s Istio
control plane.&lt;/p&gt;
&lt;p&gt;The execution of these two yaml files is the responsibility of the cluster
administrator, not the tenant level administrator. Additional RBAC restrictions will also
need to be configured and applied by the cluster administrator, limiting the tenant
administrator to only the assigned namespace.&lt;/p&gt;
&lt;h3 id=&#34;split-common-and-namespace-specific-resources&#34;&gt;Split common and namespace specific resources&lt;/h3&gt;
&lt;p&gt;The manifest files in the Istio repositories create both common resources that would
be used by all Istio control planes as well as resources that are replicated per control
plane. Although it is a simple matter to deploy multiple control planes by replacing the
&lt;code&gt;istio-system&lt;/code&gt; namespace references as described above, a better approach is to split the
manifests into a common part that is deployed once for all tenants and a tenant
specific part. For the &lt;a href=&#34;https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/#customresourcedefinitions&#34;&gt;Custom Resource Definitions&lt;/a&gt;, the roles and the role
bindings should be separated out from the provided Istio manifests. Additionally, the
roles and role bindings in the provided Istio manifests are probably unsuitable for a
multi-tenant environment and should be modified or augmented as described in the next
section.&lt;/p&gt;
&lt;h3 id=&#34;kubernetes-rbac-for-istio-control-plane-resources&#34;&gt;Kubernetes RBAC for Istio control plane resources&lt;/h3&gt;
&lt;p&gt;To restrict a tenant administrator to a single Istio namespace, the cluster
administrator would create a manifest containing, at a minimum, a &lt;code&gt;Role&lt;/code&gt; and &lt;code&gt;RoleBinding&lt;/code&gt;
similar to the one below. In this example, a tenant administrator named &lt;em&gt;sales-admin&lt;/em&gt;
is limited to the namespace &lt;code&gt;istio-system1&lt;/code&gt;. A completed manifest would contain many
more &lt;code&gt;apiGroups&lt;/code&gt; under the &lt;code&gt;Role&lt;/code&gt; providing resource access to the tenant administrator.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-yaml&#39; data-expandlinks=&#39;true&#39; &gt;kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
namespace: istio-system1
name: ns-access-for-sales-admin-istio-system1
rules:
- apiGroups: [&amp;#34;&amp;#34;] # &amp;#34;&amp;#34; indicates the core API group
resources: [&amp;#34;*&amp;#34;]
verbs: [&amp;#34;*&amp;#34;]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: access-all-istio-system1
namespace: istio-system1
subjects:
- kind: User
name: sales-admin
apiGroup: rbac.authorization.k8s.io
roleRef:
kind: Role
name: ns-access-for-sales-admin-istio-system1
apiGroup: rbac.authorization.k8s.io
&lt;/code&gt;&lt;/pre&gt;
&lt;h3 id=&#34;watching-specific-namespaces-for-service-discovery&#34;&gt;Watching specific namespaces for service discovery&lt;/h3&gt;
&lt;p&gt;In addition to creating RBAC rules limiting the tenant administrator&amp;rsquo;s access to a specific
Istio control plane, the Istio manifest must be updated to specify the application namespace
that Pilot should watch for creation of its xDS cache. This is done by starting the Pilot
component with the additional command line arguments &lt;code&gt;--appNamespace, ns-1&lt;/code&gt;, where &lt;em&gt;ns-1&lt;/em&gt;
is the namespace that the tenant&amp;rsquo;s application will be deployed in. An example snippet from
the &lt;code&gt;istio-system1.yaml&lt;/code&gt; file is shown below.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-yaml&#39; data-expandlinks=&#39;true&#39; &gt;apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: istio-pilot
namespace: istio-system1
annotations:
sidecar.istio.io/inject: &amp;#34;false&amp;#34;
spec:
replicas: 1
template:
metadata:
labels:
istio: pilot
spec:
serviceAccountName: istio-pilot-service-account
containers:
- name: discovery
image: docker.io/&amp;lt;user ID&amp;gt;/pilot:&amp;lt;tag&amp;gt;
imagePullPolicy: IfNotPresent
args: [&amp;#34;discovery&amp;#34;, &amp;#34;-v&amp;#34;, &amp;#34;2&amp;#34;, &amp;#34;--admission-service&amp;#34;, &amp;#34;istio-pilot&amp;#34;, &amp;#34;--appNamespace&amp;#34;, &amp;#34;ns-1&amp;#34;]
ports:
- containerPort: 8080
- containerPort: 443
&lt;/code&gt;&lt;/pre&gt;
&lt;h3 id=&#34;deploying-the-tenant-application-in-a-namespace&#34;&gt;Deploying the tenant application in a namespace&lt;/h3&gt;
&lt;p&gt;Now that the cluster administrator has created the tenant&amp;rsquo;s namespace (ex. &lt;code&gt;istio-system1&lt;/code&gt;) and
Pilot&amp;rsquo;s service discovery has been configured to watch for a specific application
namespace (ex. &lt;em&gt;ns-1&lt;/em&gt;), create the application manifests to deploy in that tenant&amp;rsquo;s specific
namespace. For example:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-yaml&#39; data-expandlinks=&#39;true&#39; &gt;apiVersion: v1
kind: Namespace
metadata:
name: ns-1
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;And add the namespace reference to each resource type included in the application&amp;rsquo;s manifest
file. For example:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-yaml&#39; data-expandlinks=&#39;true&#39; &gt;apiVersion: v1
kind: Service
metadata:
name: details
labels:
app: details
namespace: ns-1
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Although not shown, the application namespaces will also have RBAC settings limiting access
to certain resources. These RBAC settings could be set by the cluster administrator and/or
the tenant administrator.&lt;/p&gt;
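&lt;p&gt;As a minimal sketch (reusing the &lt;em&gt;sales-admin&lt;/em&gt; user and &lt;em&gt;ns-1&lt;/em&gt; namespace from the earlier examples, and using the built-in Kubernetes &lt;code&gt;admin&lt;/code&gt; cluster role purely for illustration), a tenant administrator could be granted access to the application namespace like this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-yaml&#39; data-expandlinks=&#39;true&#39; &gt;kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: access-all-ns-1
  namespace: ns-1
subjects:
- kind: User
  name: sales-admin
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: admin
  apiGroup: rbac.authorization.k8s.io
&lt;/code&gt;&lt;/pre&gt;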
&lt;h3 id=&#34;using-kubectl-in-a-multi-tenant-environment&#34;&gt;Using &lt;code&gt;kubectl&lt;/code&gt; in a multi-tenant environment&lt;/h3&gt;
&lt;p&gt;When defining &lt;a href=&#34;https://archive.istio.io/v0.7/docs/reference/config/istio.routing.v1alpha1/#RouteRule&#34;&gt;route rules&lt;/a&gt;
or &lt;a href=&#34;https://archive.istio.io/v0.7/docs/reference/config/istio.routing.v1alpha1/#DestinationPolicy&#34;&gt;destination policies&lt;/a&gt;,
it is necessary to ensure that the &lt;code&gt;kubectl&lt;/code&gt; command is scoped to
the namespace the Istio control plane is running in to ensure the resource is created
in the proper namespace. Additionally, the rule itself must be scoped to the tenant&amp;rsquo;s namespace
so that it will be applied properly to that tenant&amp;rsquo;s mesh. The &lt;em&gt;-i&lt;/em&gt; option is used to create
(or get or describe) the rule in the namespace that the Istio control plane is deployed in.
The &lt;em&gt;-n&lt;/em&gt; option will scope the rule to the tenant&amp;rsquo;s mesh and should be set to the namespace that
the tenant&amp;rsquo;s app is deployed in. Note that the &lt;em&gt;-n&lt;/em&gt; option can be skipped on the command line if
the .yaml file for the resource scopes it properly instead.&lt;/p&gt;
&lt;p&gt;For example, the following command would be required to add a route rule to the &lt;code&gt;istio-system1&lt;/code&gt;
namespace:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ kubectl -i istio-system1 apply -n ns-1 -f route_rule_v2.yaml
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;And can be displayed using the command:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ kubectl -i istio-system1 -n ns-1 get routerule
NAME KIND NAMESPACE
details-Default RouteRule.v1alpha2.config.istio.io ns-1
productpage-default RouteRule.v1alpha2.config.istio.io ns-1
ratings-default RouteRule.v1alpha2.config.istio.io ns-1
reviews-default RouteRule.v1alpha2.config.istio.io ns-1
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;See the &lt;a href=&#34;/v1.1/blog/2018/soft-multitenancy/#multiple-istio-control-planes&#34;&gt;Multiple Istio control planes&lt;/a&gt; section of this document for more details on &lt;code&gt;namespace&lt;/code&gt; requirements in a
multi-tenant environment.&lt;/p&gt;
&lt;h3 id=&#34;test-results&#34;&gt;Test results&lt;/h3&gt;
&lt;p&gt;Following the instructions above, a cluster administrator can create an environment limiting,
via RBAC and namespaces, what a tenant administrator can deploy.&lt;/p&gt;
&lt;p&gt;After deployment, accessing the Istio control plane pods assigned to a specific tenant
administrator is permitted:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ kubectl get pods -n istio-system
NAME READY STATUS RESTARTS AGE
grafana-78d649479f-8pqk9 1/1 Running 0 1d
istio-ca-ffbb75c6f-98w6x 1/1 Running 0 1d
istio-ingress-68d65fc5c6-dnvfl 1/1 Running 0 1d
istio-mixer-5b9f8dffb5-8875r 3/3 Running 0 1d
istio-pilot-678fc976c8-b8tv6 2/2 Running 0 1d
istio-sidecar-injector-7587bd559d-5tgk6 1/1 Running 0 1d
prometheus-cf8456855-hdcq7 1/1 Running 0 1d
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;However, accessing all the cluster&amp;rsquo;s pods is not permitted:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ kubectl get pods --all-namespaces
Error from server (Forbidden): pods is forbidden: User &amp;#34;dev-admin&amp;#34; cannot list pods at the cluster scope
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;And neither is accessing another tenant&amp;rsquo;s namespace:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ kubectl get pods -n istio-system1
Error from server (Forbidden): pods is forbidden: User &amp;#34;dev-admin&amp;#34; cannot list pods in the namespace &amp;#34;istio-system1&amp;#34;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The tenant administrator can deploy applications in the application namespace configured for
that tenant. As an example, after updating the &lt;a href=&#34;/v1.1/docs/examples/bookinfo/&#34;&gt;Bookinfo&lt;/a&gt;
manifests and deploying them under the tenant&amp;rsquo;s application namespace of &lt;em&gt;ns-0&lt;/em&gt;, listing the
pods in use by this tenant&amp;rsquo;s namespace is permitted:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ kubectl get pods -n ns-0
NAME READY STATUS RESTARTS AGE
details-v1-64b86cd49-b7rkr 2/2 Running 0 1d
productpage-v1-84f77f8747-rf2mt 2/2 Running 0 1d
ratings-v1-5f46655b57-5b4c5 2/2 Running 0 1d
reviews-v1-ff6bdb95b-pm5lb 2/2 Running 0 1d
reviews-v2-5799558d68-b989t 2/2 Running 0 1d
reviews-v3-58ff7d665b-lw5j9 2/2 Running 0 1d
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;But accessing another tenant&amp;rsquo;s application namespace is not:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ kubectl get pods -n ns-1
Error from server (Forbidden): pods is forbidden: User &amp;#34;dev-admin&amp;#34; cannot list pods in the namespace &amp;#34;ns-1&amp;#34;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;If the &lt;a href=&#34;/v1.1/docs/tasks/telemetry/&#34;&gt;add-on tools&lt;/a&gt;, for example
&lt;a href=&#34;/v1.1/docs/tasks/telemetry/metrics/querying-metrics/&#34;&gt;Prometheus&lt;/a&gt;, are deployed
(also limited by an Istio &lt;code&gt;namespace&lt;/code&gt;), the statistical results returned would represent only
the traffic seen from that tenant&amp;rsquo;s application namespace.&lt;/p&gt;
&lt;h2 id=&#34;conclusion&#34;&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;The evaluation performed indicates Istio has sufficient capabilities and security to meet a
small number of multi-tenant use cases. It also shows that Istio and Kubernetes &lt;strong&gt;cannot&lt;/strong&gt;
provide sufficient capabilities and security for other use cases, especially those use
cases that require complete security and isolation between untrusted tenants. The improvements
required to reach a stronger model of security and isolation require work in the container
technology, e.g. Kubernetes, rather than improvements in Istio capabilities.&lt;/p&gt;
&lt;h2 id=&#34;issues&#34;&gt;Issues&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;The CA (Certificate Authority) and Mixer pod logs from one tenant&amp;rsquo;s Istio control
plane (e.g. &lt;code&gt;istio-system&lt;/code&gt; namespace) contained &amp;lsquo;info&amp;rsquo; messages from a second tenant&amp;rsquo;s
Istio control plane (e.g. &lt;code&gt;istio-system1&lt;/code&gt; namespace).&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;challenges-with-other-multi-tenancy-models&#34;&gt;Challenges with other multi-tenancy models&lt;/h2&gt;
&lt;p&gt;Other multi-tenancy deployment models were considered:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;p&gt;A single mesh with multiple applications, one for each tenant on the mesh. The cluster
administrator gets control and visibility mesh wide and across all applications, while the
tenant administrator only gets control of a specific application.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A single Istio control plane with multiple meshes, one mesh per tenant. The cluster
administrator gets control and visibility across the entire Istio control plane and all
meshes, while the tenant administrator only gets control of a specific mesh.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A single cloud environment (cluster controlled), but multiple Kubernetes control planes
(tenant controlled).&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;These options either can&amp;rsquo;t be properly supported without code changes or don&amp;rsquo;t fully
address the use cases.&lt;/p&gt;
&lt;p&gt;Current Istio capabilities are poorly suited to support the first model as it lacks
sufficient RBAC capabilities to support cluster versus tenant operations. Additionally,
having multiple tenants under one mesh is too insecure with the current mesh model and the
way Istio drives configuration to the Envoy proxies.&lt;/p&gt;
&lt;p&gt;Regarding the second option, the current Istio paradigm assumes a single mesh per Istio control
plane. The needed changes to support this model are substantial. They would require
finer-grained scoping of resources and security domains based on namespaces, as well as
additional Istio RBAC changes. This model will likely be addressed by future work, but it is not
currently possible.&lt;/p&gt;
&lt;p&gt;The third model doesn&amp;rsquo;t satisfy most use cases, as most cluster administrators prefer
a common Kubernetes control plane which they provide as a
&lt;a href=&#34;https://en.wikipedia.org/wiki/Platform_as_a_service&#34;&gt;PaaS&lt;/a&gt; to their tenants.&lt;/p&gt;
&lt;h2 id=&#34;future-work&#34;&gt;Future work&lt;/h2&gt;
&lt;p&gt;Allowing a single Istio control plane to control multiple meshes would be an obvious next
feature. An additional improvement is to provide a single mesh that can host different
tenants with some level of isolation and security between the tenants. This could be done
by partitioning within a single control plane using the same logical notion of namespace as
Kubernetes. A &lt;a href=&#34;https://docs.google.com/document/d/14Hb07gSrfVt5KX9qNi7FzzGwB_6WBpAnDpPG6QEEd9Q&#34;&gt;document&lt;/a&gt;
has been started within the Istio community to define additional use cases and the
Istio functionality required to support those use cases.&lt;/p&gt;
&lt;h2 id=&#34;references&#34;&gt;References&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Video on Kubernetes multi-tenancy support, &lt;a href=&#34;https://www.youtube.com/watch?v=ahwCkJGItkU&#34;&gt;Multi-Tenancy Support &amp;amp; Security Modeling with RBAC and Namespaces&lt;/a&gt;, and the &lt;a href=&#34;https://schd.ws/hosted_files/kccncna17/21/Multi-tenancy%20Support%20%26%20Security%20Modeling%20with%20RBAC%20and%20Namespaces.pdf&#34;&gt;supporting slide deck&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Kubecon talk on security that discusses Kubernetes support for &amp;ldquo;Cooperative soft multi-tenancy&amp;rdquo;, &lt;a href=&#34;https://www.youtube.com/watch?v=YRR-kZub0cA&#34;&gt;Building for Trust: How to Secure Your Kubernetes&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Kubernetes documentation on &lt;a href=&#34;https://kubernetes.io/docs/reference/access-authn-authz/rbac/&#34;&gt;RBAC&lt;/a&gt; and &lt;a href=&#34;https://kubernetes.io/docs/tasks/administer-cluster/namespaces-walkthrough/&#34;&gt;namespaces&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Kubecon slide deck on &lt;a href=&#34;https://schd.ws/hosted_files/kccncna17/a9/kubecon-multitenancy.pdf&#34;&gt;Multi-tenancy Deep Dive&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Google document on &lt;a href=&#34;https://docs.google.com/document/d/15w1_fesSUZHv-vwjiYa9vN_uyc--PySRoLKTuDhimjc&#34;&gt;Multi-tenancy models for Kubernetes&lt;/a&gt;. (Requires permission)&lt;/li&gt;
&lt;li&gt;Cloud Foundry WIP document, &lt;a href=&#34;https://docs.google.com/document/d/14Hb07gSrfVt5KX9qNi7FzzGwB_6WBpAnDpPG6QEEd9Q&#34;&gt;Multi-cloud and Multi-tenancy&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://docs.google.com/document/d/12F183NIRAwj2hprx-a-51ByLeNqbJxK16X06vwH5OWE&#34;&gt;Istio Auto Multi-Tenancy 101&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;</description><pubDate>Thu, 19 Apr 2018 00:00:00 +0000</pubDate><link>/v1.1/blog/2018/soft-multitenancy/</link><author>John Joyce and Rich Curran</author><guid isPermaLink="true">/v1.1/blog/2018/soft-multitenancy/</guid><category>tenancy</category></item><item><title>Traffic Mirroring with Istio for Testing in Production</title><description>&lt;p&gt;Trying to enumerate all the possible combinations of test cases for testing services in non-production/test environments can be daunting. In some cases, you&amp;rsquo;ll find that all of the effort that goes into cataloging these use cases doesn&amp;rsquo;t match up to real production use cases. Ideally, we could use live production use cases and traffic to help illuminate all of the feature areas of the service under test that we might miss in more contrived testing environments.&lt;/p&gt;
&lt;p&gt;Istio can help here. With the release of &lt;a href=&#34;/v1.1/about/notes/older/0.5/&#34;&gt;Istio 0.5.0&lt;/a&gt;, Istio can mirror traffic to help test your services. You can write route rules similar to the following to enable traffic mirroring:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-yaml&#39; data-expandlinks=&#39;true&#39; &gt;apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
name: mirror-traffic-to-httpbin-v2
spec:
destination:
name: httpbin
precedence: 11
route:
- labels:
version: v1
weight: 100
- labels:
version: v2
weight: 0
mirror:
name: httpbin
labels:
version: v2
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;A few things to note here:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;When traffic gets mirrored to a different service, that happens outside the critical path of the request&lt;/li&gt;
&lt;li&gt;Responses to any mirrored traffic are ignored; traffic is mirrored as &amp;ldquo;fire-and-forget&amp;rdquo;&lt;/li&gt;
&lt;li&gt;You&amp;rsquo;ll need to have the 0-weighted route to hint to Istio to create the proper Envoy cluster under the covers; &lt;a href=&#34;https://github.com/istio/istio/issues/3270&#34;&gt;this should be ironed out in future releases&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;
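&lt;p&gt;For comparison, under the newer &lt;a href=&#34;/v1.1/blog/2018/v1alpha3-routing/&#34;&gt;v1alpha3 traffic management API&lt;/a&gt;, the same intent is expressed with a virtual service and a &lt;code&gt;mirror&lt;/code&gt; field, and the 0-weighted route workaround is no longer needed. A sketch, assuming &lt;code&gt;v1&lt;/code&gt; and &lt;code&gt;v2&lt;/code&gt; subsets are defined in a destination rule:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-yaml&#39; data-expandlinks=&#39;true&#39; &gt;apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: httpbin
spec:
  hosts:
  - httpbin
  http:
  - route:
    - destination:
        host: httpbin
        subset: v1
      weight: 100
    mirror:
      host: httpbin
      subset: v2
&lt;/code&gt;&lt;/pre&gt;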
&lt;p&gt;Learn more about mirroring by visiting the &lt;a href=&#34;/v1.1/docs/tasks/traffic-management/mirroring/&#34;&gt;Mirroring Task&lt;/a&gt; and see a more
&lt;a href=&#34;https://blog.christianposta.com/microservices/traffic-shadowing-with-istio-reduce-the-risk-of-code-release/&#34;&gt;comprehensive treatment of this scenario on my blog&lt;/a&gt;.&lt;/p&gt;</description><pubDate>Thu, 08 Feb 2018 00:00:00 +0000</pubDate><link>/v1.1/blog/2018/traffic-mirroring/</link><author>Christian Posta</author><guid isPermaLink="true">/v1.1/blog/2018/traffic-mirroring/</guid><category>traffic-management</category><category>mirroring</category></item><item><title>Consuming External TCP Services</title><description>
&lt;div&gt;
&lt;aside class=&#34;callout tip&#34;&gt;
&lt;div class=&#34;type&#34;&gt;&lt;svg class=&#34;large-icon&#34;&gt;&lt;use xlink:href=&#34;/v1.1/img/icons.svg#callout-tip&#34;/&gt;&lt;/svg&gt;&lt;/div&gt;
&lt;div class=&#34;content&#34;&gt;This blog post was updated on July 23, 2018 to use the new
&lt;a href=&#34;/v1.1/blog/2018/v1alpha3-routing/&#34;&gt;v1alpha3 traffic management API&lt;/a&gt;. If you need to use the old version, follow these &lt;a href=&#34;https://archive.istio.io/v0.7/blog/2018/egress-tcp.html&#34;&gt;docs&lt;/a&gt;.&lt;/div&gt;
&lt;/aside&gt;
&lt;/div&gt;
&lt;p&gt;In my previous blog post, &lt;a href=&#34;/v1.1/blog/2018/egress-https/&#34;&gt;Consuming External Web Services&lt;/a&gt;, I described how external services
can be consumed by in-mesh Istio applications via HTTPS. In this post, I demonstrate consuming external services
over TCP. You will use the &lt;a href=&#34;/v1.1/docs/examples/bookinfo/&#34;&gt;Istio Bookinfo sample application&lt;/a&gt;, the version in which the book
ratings data is persisted in a MySQL database. You deploy this database outside the cluster and configure the
&lt;em&gt;ratings&lt;/em&gt; microservice to use it. You define a
&lt;a href=&#34;/v1.1/docs/reference/config/networking/v1alpha3/service-entry/&#34;&gt;Service Entry&lt;/a&gt; to allow the in-mesh applications to
access the external database.&lt;/p&gt;
&lt;h2 id=&#34;bookinfo-sample-application-with-external-ratings-database&#34;&gt;Bookinfo sample application with external ratings database&lt;/h2&gt;
&lt;p&gt;First, you set up a MySQL database instance to hold book ratings data outside of your Kubernetes cluster. Then you
modify the &lt;a href=&#34;/v1.1/docs/examples/bookinfo/&#34;&gt;Bookinfo sample application&lt;/a&gt; to use your database.&lt;/p&gt;
&lt;h3 id=&#34;setting-up-the-database-for-ratings-data&#34;&gt;Setting up the database for ratings data&lt;/h3&gt;
&lt;p&gt;For this task you set up an instance of &lt;a href=&#34;https://www.mysql.com&#34;&gt;MySQL&lt;/a&gt;. You can use any MySQL instance; I used
&lt;a href=&#34;https://www.ibm.com/cloud/compose/mysql&#34;&gt;Compose for MySQL&lt;/a&gt;. I used &lt;code&gt;mysqlsh&lt;/code&gt;
(&lt;a href=&#34;https://dev.mysql.com/doc/mysql-shell/en/&#34;&gt;MySQL Shell&lt;/a&gt;) as a MySQL client to feed the ratings data.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Set the &lt;code&gt;MYSQL_DB_HOST&lt;/code&gt; and &lt;code&gt;MYSQL_DB_PORT&lt;/code&gt; environment variables:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ export MYSQL_DB_HOST=&amp;lt;your MySQL database host&amp;gt;
$ export MYSQL_DB_PORT=&amp;lt;your MySQL database port&amp;gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;In case of a local MySQL database with the default port, the values are &lt;code&gt;localhost&lt;/code&gt; and &lt;code&gt;3306&lt;/code&gt;, respectively.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;To initialize the database, run the following command, entering the password when prompted. The command is
performed with the credentials of the &lt;code&gt;admin&lt;/code&gt; user, created by default by
&lt;a href=&#34;https://www.ibm.com/cloud/compose/mysql&#34;&gt;Compose for MySQL&lt;/a&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ curl -s https://raw.githubusercontent.com/istio/istio/release-1.1/samples/bookinfo/src/mysql/mysqldb-init.sql | mysqlsh --sql --ssl-mode=REQUIRED -u admin -p --host $MYSQL_DB_HOST --port $MYSQL_DB_PORT
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;em&gt;&lt;strong&gt;OR&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;When using the &lt;code&gt;mysql&lt;/code&gt; client and a local MySQL database, run:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ curl -s https://raw.githubusercontent.com/istio/istio/release-1.1/samples/bookinfo/src/mysql/mysqldb-init.sql | mysql -u root -p --host $MYSQL_DB_HOST --port $MYSQL_DB_PORT
&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create a user with the name &lt;code&gt;bookinfo&lt;/code&gt; and grant it &lt;em&gt;SELECT&lt;/em&gt; privilege on the &lt;code&gt;test.ratings&lt;/code&gt; table:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ mysqlsh --sql --ssl-mode=REQUIRED -u admin -p --host $MYSQL_DB_HOST --port $MYSQL_DB_PORT -e &amp;#34;CREATE USER &amp;#39;bookinfo&amp;#39; IDENTIFIED BY &amp;#39;&amp;lt;password you choose&amp;gt;&amp;#39;; GRANT SELECT ON test.ratings to &amp;#39;bookinfo&amp;#39;;&amp;#34;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;em&gt;&lt;strong&gt;OR&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;For &lt;code&gt;mysql&lt;/code&gt; and the local database, the command is:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ mysql -u root -p --host $MYSQL_DB_HOST --port $MYSQL_DB_PORT -e &amp;#34;CREATE USER &amp;#39;bookinfo&amp;#39; IDENTIFIED BY &amp;#39;&amp;lt;password you choose&amp;gt;&amp;#39;; GRANT SELECT ON test.ratings to &amp;#39;bookinfo&amp;#39;;&amp;#34;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Here you apply the &lt;a href=&#34;https://en.wikipedia.org/wiki/Principle_of_least_privilege&#34;&gt;principle of least privilege&lt;/a&gt;. This
means that you do not use your &lt;code&gt;admin&lt;/code&gt; user in the Bookinfo application. Instead, you create a special user for the
Bookinfo application, &lt;code&gt;bookinfo&lt;/code&gt;, with minimal privileges. In this case, the &lt;em&gt;bookinfo&lt;/em&gt; user only has the &lt;code&gt;SELECT&lt;/code&gt;
privilege on a single table.&lt;/p&gt;
&lt;p&gt;After running the command to create the user, you may want to clean your bash history by checking the number of the last
command and running &lt;code&gt;history -d &amp;lt;the number of the command that created the user&amp;gt;&lt;/code&gt;. You don&amp;rsquo;t want the password of the
new user to be stored in the bash history. If you&amp;rsquo;re using &lt;code&gt;mysql&lt;/code&gt;, remove the last command from
&lt;code&gt;~/.mysql_history&lt;/code&gt; file as well. Read more about password protection of the newly created user in &lt;a href=&#34;https://dev.mysql.com/doc/refman/5.5/en/create-user.html&#34;&gt;MySQL documentation&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Inspect the created ratings to see that everything worked as expected:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ mysqlsh --sql --ssl-mode=REQUIRED -u bookinfo -p --host $MYSQL_DB_HOST --port $MYSQL_DB_PORT -e &amp;#34;select * from test.ratings;&amp;#34;
Enter password:
+----------+--------+
| ReviewID | Rating |
+----------+--------+
| 1 | 5 |
| 2 | 4 |
+----------+--------+
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;em&gt;&lt;strong&gt;OR&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;For &lt;code&gt;mysql&lt;/code&gt; and the local database:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ mysql -u bookinfo -p --host $MYSQL_DB_HOST --port $MYSQL_DB_PORT -e &amp;#34;select * from test.ratings;&amp;#34;
Enter password:
+----------+--------+
| ReviewID | Rating |
+----------+--------+
| 1 | 5 |
| 2 | 4 |
+----------+--------+
&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Set the ratings temporarily to &lt;code&gt;1&lt;/code&gt; to provide a visual clue when our database is used by the Bookinfo &lt;em&gt;ratings&lt;/em&gt;
service:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ mysqlsh --sql --ssl-mode=REQUIRED -u admin -p --host $MYSQL_DB_HOST --port $MYSQL_DB_PORT -e &amp;#34;update test.ratings set rating=1; select * from test.ratings;&amp;#34;
Enter password:
Rows matched: 2 Changed: 2 Warnings: 0
+----------+--------+
| ReviewID | Rating |
+----------+--------+
| 1 | 1 |
| 2 | 1 |
+----------+--------+
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;em&gt;&lt;strong&gt;OR&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;For &lt;code&gt;mysql&lt;/code&gt; and the local database:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ mysql -u root -p --host $MYSQL_DB_HOST --port $MYSQL_DB_PORT -e &amp;#34;update test.ratings set rating=1; select * from test.ratings;&amp;#34;
Enter password:
+----------+--------+
| ReviewID | Rating |
+----------+--------+
| 1 | 1 |
| 2 | 1 |
+----------+--------+
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You used the &lt;code&gt;admin&lt;/code&gt; user (and &lt;code&gt;root&lt;/code&gt; for the local database) in the last command since the &lt;code&gt;bookinfo&lt;/code&gt; user does not
have the &lt;code&gt;UPDATE&lt;/code&gt; privilege on the &lt;code&gt;test.ratings&lt;/code&gt; table.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Now you are ready to deploy a version of the Bookinfo application that will use your database.&lt;/p&gt;
&lt;h3 id=&#34;initial-setting-of-bookinfo-application&#34;&gt;Initial setting of Bookinfo application&lt;/h3&gt;
&lt;p&gt;To demonstrate the scenario of using an external database, you start with a Kubernetes cluster with &lt;a href=&#34;/v1.1/docs/setup/kubernetes/install/kubernetes/#installation-steps&#34;&gt;Istio installed&lt;/a&gt;. Then you deploy the
&lt;a href=&#34;/v1.1/docs/examples/bookinfo/&#34;&gt;Istio Bookinfo sample application&lt;/a&gt;, &lt;a href=&#34;/v1.1/docs/examples/bookinfo/#apply-default-destination-rules&#34;&gt;apply the default destination rules&lt;/a&gt;, and &lt;a href=&#34;/v1.1/docs/tasks/traffic-management/egress/#change-to-the-blocking-by-default-policy&#34;&gt;change Istio to the blocking-egress-by-default policy&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;This application uses the &lt;code&gt;ratings&lt;/code&gt; microservice to fetch
book ratings, a number between 1 and 5. The ratings are displayed as stars for each review. There are several versions
of the &lt;code&gt;ratings&lt;/code&gt; microservice. Some use &lt;a href=&#34;https://www.mongodb.com&#34;&gt;MongoDB&lt;/a&gt;, others use &lt;a href=&#34;https://www.mysql.com&#34;&gt;MySQL&lt;/a&gt;
as their database.&lt;/p&gt;
&lt;p&gt;The example commands in this blog post work with Istio 0.8+, with or without
&lt;a href=&#34;/v1.1/docs/concepts/security/#mutual-tls-authentication&#34;&gt;mutual TLS&lt;/a&gt; enabled.&lt;/p&gt;
&lt;p&gt;As a reminder, here is the end-to-end architecture of the application from the
&lt;a href=&#34;/v1.1/docs/examples/bookinfo/&#34;&gt;Bookinfo sample application&lt;/a&gt;.&lt;/p&gt;
&lt;figure style=&#34;width:80%&#34;&gt;
&lt;div class=&#34;wrapper-with-intrinsic-ratio&#34; style=&#34;padding-bottom:59.086918235567985%&#34;&gt;
&lt;a data-skipendnotes=&#34;true&#34; href=&#34;/v1.1/docs/examples/bookinfo/withistio.svg&#34; title=&#34;The original Bookinfo application&#34;&gt;
&lt;img class=&#34;element-to-stretch&#34; src=&#34;/v1.1/docs/examples/bookinfo/withistio.svg&#34; alt=&#34;The original Bookinfo application&#34; /&gt;
&lt;/a&gt;
&lt;/div&gt;
&lt;figcaption&gt;The original Bookinfo application&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;h3 id=&#34;use-the-database-for-ratings-data-in-bookinfo-application&#34;&gt;Use the database for ratings data in Bookinfo application&lt;/h3&gt;
&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Modify the deployment spec of a version of the &lt;em&gt;ratings&lt;/em&gt; microservice that uses a MySQL database, to use your
database instance. The spec is in &lt;a href=&#34;https://github.com/istio/istio/blob/release-1.1/samples/bookinfo/platform/kube/bookinfo-ratings-v2-mysql.yaml&#34;&gt;&lt;code&gt;samples/bookinfo/platform/kube/bookinfo-ratings-v2-mysql.yaml&lt;/code&gt;&lt;/a&gt;
of an Istio release archive. Edit the following lines:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-yaml&#39; data-expandlinks=&#39;true&#39; &gt;- name: MYSQL_DB_HOST
value: mysqldb
- name: MYSQL_DB_PORT
value: &amp;#34;3306&amp;#34;
- name: MYSQL_DB_USER
value: root
- name: MYSQL_DB_PASSWORD
value: password
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Replace the values in the snippet above, specifying the database host, port, user, and password. Note that the
correct way to work with passwords in container&amp;rsquo;s environment variables in Kubernetes is &lt;a href=&#34;https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets-as-environment-variables&#34;&gt;to use secrets&lt;/a&gt;. For this
example task only, you may want to write the password directly in the deployment spec. &lt;strong&gt;Do not do it&lt;/strong&gt; in a real
environment! I also assume everyone realizes that &lt;code&gt;&amp;quot;password&amp;quot;&lt;/code&gt; should not be used as a password&amp;hellip;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Apply the modified spec to deploy the version of the &lt;em&gt;ratings&lt;/em&gt; microservice, &lt;em&gt;v2-mysql&lt;/em&gt;, that will use your
database.&lt;/p&gt;
&lt;div&gt;&lt;a data-skipendnotes=&#39;true&#39; style=&#39;display:none&#39; href=&#39;https://raw.githubusercontent.com/istio/istio/release-1.1/samples/bookinfo/platform/kube/bookinfo-ratings-v2-mysql.yaml&#39;&gt;Zip&lt;/a&gt;&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ kubectl apply -f @samples/bookinfo/platform/kube/bookinfo-ratings-v2-mysql.yaml@
deployment &amp;#34;ratings-v2-mysql&amp;#34; created
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Route all the traffic destined to the &lt;em&gt;reviews&lt;/em&gt; service to its &lt;em&gt;v3&lt;/em&gt; version. You do this to ensure that the
&lt;em&gt;reviews&lt;/em&gt; service always calls the &lt;em&gt;ratings&lt;/em&gt; service. In addition, route all the traffic destined to the &lt;em&gt;ratings&lt;/em&gt;
service to &lt;em&gt;ratings v2-mysql&lt;/em&gt; that uses your database.&lt;/p&gt;
&lt;p&gt;Specify the routing for both services above by adding two
&lt;a href=&#34;/v1.1/docs/reference/config/networking/v1alpha3/virtual-service/&#34;&gt;virtual services&lt;/a&gt;. These virtual services are
specified in &lt;code&gt;samples/bookinfo/networking/virtual-service-ratings-mysql.yaml&lt;/code&gt; of an Istio release archive.
&lt;strong&gt;&lt;em&gt;Important:&lt;/em&gt;&lt;/strong&gt; make sure you
&lt;a href=&#34;/v1.1/docs/examples/bookinfo/#apply-default-destination-rules&#34;&gt;applied the default destination rules&lt;/a&gt; before running the
following command.&lt;/p&gt;
&lt;div&gt;&lt;a data-skipendnotes=&#39;true&#39; style=&#39;display:none&#39; href=&#39;https://raw.githubusercontent.com/istio/istio/release-1.1/samples/bookinfo/networking/virtual-service-ratings-mysql.yaml&#39;&gt;Zip&lt;/a&gt;&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ kubectl apply -f @samples/bookinfo/networking/virtual-service-ratings-mysql.yaml@
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/li&gt;
&lt;/ol&gt;
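&lt;p&gt;As an aside on the password handling above: in a real environment you would store the password in a Kubernetes secret and reference it from the deployment spec. A minimal sketch, with a hypothetical secret name:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-yaml&#39; data-expandlinks=&#39;true&#39; &gt;apiVersion: v1
kind: Secret
metadata:
  name: mysql-credentials  # hypothetical name
type: Opaque
stringData:
  password: &amp;lt;your MySQL password&amp;gt;
---
# then, in the container spec of the ratings deployment:
# - name: MYSQL_DB_PASSWORD
#   valueFrom:
#     secretKeyRef:
#       name: mysql-credentials
#       key: password
&lt;/code&gt;&lt;/pre&gt;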
&lt;p&gt;The updated architecture appears below. Note that the blue arrows inside the mesh mark the traffic configured according
to the virtual services we added. According to the virtual services, the traffic is sent to &lt;em&gt;reviews v3&lt;/em&gt; and
&lt;em&gt;ratings v2-mysql&lt;/em&gt;.&lt;/p&gt;
&lt;figure style=&#34;width:80%&#34;&gt;
&lt;div class=&#34;wrapper-with-intrinsic-ratio&#34; style=&#34;padding-bottom:59.314858206480224%&#34;&gt;
&lt;a data-skipendnotes=&#34;true&#34; href=&#34;/v1.1/blog/2018/egress-tcp/./bookinfo-ratings-v2-mysql-external.svg&#34; title=&#34;The Bookinfo application with ratings v2-mysql and an external MySQL database&#34;&gt;
&lt;img class=&#34;element-to-stretch&#34; src=&#34;/v1.1/blog/2018/egress-tcp/./bookinfo-ratings-v2-mysql-external.svg&#34; alt=&#34;The Bookinfo application with ratings v2-mysql and an external MySQL database&#34; /&gt;
&lt;/a&gt;
&lt;/div&gt;
&lt;figcaption&gt;The Bookinfo application with ratings v2-mysql and an external MySQL database&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p&gt;Note that the MySQL database is outside the Istio service mesh, or more precisely outside the Kubernetes cluster. The
boundary of the service mesh is marked by a dashed line.&lt;/p&gt;
&lt;h3 id=&#34;access-the-webpage&#34;&gt;Access the webpage&lt;/h3&gt;
&lt;p&gt;Access the webpage of the application, after
&lt;a href=&#34;/v1.1/docs/examples/bookinfo/#determining-the-ingress-ip-and-port&#34;&gt;determining the ingress IP and port&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;You have a problem&amp;hellip; Instead of the rating stars, the message &lt;em&gt;&amp;ldquo;Ratings service is currently unavailable&amp;rdquo;&lt;/em&gt; is
displayed below each review:&lt;/p&gt;
&lt;figure style=&#34;width:80%&#34;&gt;
&lt;div class=&#34;wrapper-with-intrinsic-ratio&#34; style=&#34;padding-bottom:36.18705035971223%&#34;&gt;
&lt;a data-skipendnotes=&#34;true&#34; href=&#34;/v1.1/blog/2018/egress-tcp/./errorFetchingBookRating.png&#34; title=&#34;The Ratings service error messages&#34;&gt;
&lt;img class=&#34;element-to-stretch&#34; src=&#34;/v1.1/blog/2018/egress-tcp/./errorFetchingBookRating.png&#34; alt=&#34;The Ratings service error messages&#34; /&gt;
&lt;/a&gt;
&lt;/div&gt;
&lt;figcaption&gt;The Ratings service error messages&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p&gt;As in &lt;a href=&#34;/v1.1/blog/2018/egress-https/&#34;&gt;Consuming External Web Services&lt;/a&gt;, you experience &lt;strong&gt;graceful service degradation&lt;/strong&gt;,
which is good. The application did not crash due to the error in the &lt;em&gt;ratings&lt;/em&gt; microservice. The webpage of the
application correctly displayed the book information, the details, and the reviews, just without the rating stars.&lt;/p&gt;
&lt;p&gt;You have the same problem as in &lt;a href=&#34;/v1.1/blog/2018/egress-https/&#34;&gt;Consuming External Web Services&lt;/a&gt;, namely all the traffic
outside the Kubernetes cluster, both TCP and HTTP, is blocked by default by the sidecar proxies. To enable such traffic
for TCP, a mesh-external service entry for TCP must be defined.&lt;/p&gt;
&lt;h3 id=&#34;mesh-external-service-entry-for-an-external-mysql-instance&#34;&gt;Mesh-external service entry for an external MySQL instance&lt;/h3&gt;
&lt;p&gt;TCP mesh-external service entries come to our rescue.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Get the IP address of your MySQL database instance. As an option, you can use the
&lt;a href=&#34;https://linux.die.net/man/1/host&#34;&gt;host&lt;/a&gt; command:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ export MYSQL_DB_IP=$(host $MYSQL_DB_HOST | grep &amp;#34; has address &amp;#34; | cut -d&amp;#34; &amp;#34; -f4)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;For a local database, set &lt;code&gt;MYSQL_DB_IP&lt;/code&gt; to contain the IP of your machine, accessible from your cluster.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Define a TCP mesh-external service entry:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ kubectl apply -f - &amp;lt;&amp;lt;EOF
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
name: mysql-external
spec:
hosts:
- $MYSQL_DB_HOST
addresses:
- $MYSQL_DB_IP/32
ports:
- name: tcp
number: $MYSQL_DB_PORT
protocol: tcp
location: MESH_EXTERNAL
EOF
&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Review the service entry you just created and check that it contains the correct values:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ kubectl get serviceentry mysql-external -o yaml
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
...
&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Note that for a TCP service entry, you specify &lt;code&gt;tcp&lt;/code&gt; as the protocol of a port of the entry. Also note that you have to
specify the IP of the external service in the list of addresses, as a &lt;a href=&#34;https://tools.ietf.org/html/rfc2317&#34;&gt;CIDR&lt;/a&gt; block
with suffix &lt;code&gt;32&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;I will talk more about TCP service entries
&lt;a href=&#34;#service-entries-for-tcp-traffic&#34;&gt;below&lt;/a&gt;. For now, verify that the service entry you added fixed the problem. Access the
webpage and see if the stars are back.&lt;/p&gt;
&lt;p&gt;It worked! Accessing the web page of the application displays the ratings without error:&lt;/p&gt;
&lt;figure style=&#34;width:80%&#34;&gt;
&lt;div class=&#34;wrapper-with-intrinsic-ratio&#34; style=&#34;padding-bottom:36.69064748201439%&#34;&gt;
&lt;a data-skipendnotes=&#34;true&#34; href=&#34;/v1.1/blog/2018/egress-tcp/./externalMySQLRatings.png&#34; title=&#34;Book Ratings Displayed Correctly&#34;&gt;
&lt;img class=&#34;element-to-stretch&#34; src=&#34;/v1.1/blog/2018/egress-tcp/./externalMySQLRatings.png&#34; alt=&#34;Book Ratings Displayed Correctly&#34; /&gt;
&lt;/a&gt;
&lt;/div&gt;
&lt;figcaption&gt;Book Ratings Displayed Correctly&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p&gt;Note that you see a one-star rating for both displayed reviews, as expected. You changed the ratings to be one star to
provide us with a visual clue that our external database is indeed being used.&lt;/p&gt;
&lt;p&gt;As with service entries for HTTP/HTTPS, you can delete and create service entries for TCP using &lt;code&gt;kubectl&lt;/code&gt;, dynamically.&lt;/p&gt;
&lt;h2 id=&#34;motivation-for-egress-tcp-traffic-control&#34;&gt;Motivation for egress TCP traffic control&lt;/h2&gt;
&lt;p&gt;Some in-mesh Istio applications must access external services, for example legacy systems. In many cases, the access is
not performed over HTTP or HTTPS protocols. Other TCP protocols are used, such as database-specific protocols like
&lt;a href=&#34;https://docs.mongodb.com/manual/reference/mongodb-wire-protocol/&#34;&gt;MongoDB Wire Protocol&lt;/a&gt; and &lt;a href=&#34;https://dev.mysql.com/doc/internals/en/client-server-protocol.html&#34;&gt;MySQL Client/Server Protocol&lt;/a&gt; to communicate with external databases.&lt;/p&gt;
&lt;p&gt;Next let me provide more details about the service entries for TCP traffic.&lt;/p&gt;
&lt;h2 id=&#34;service-entries-for-tcp-traffic&#34;&gt;Service entries for TCP traffic&lt;/h2&gt;
&lt;p&gt;The service entries for enabling TCP traffic to a specific port must specify &lt;code&gt;TCP&lt;/code&gt; as the protocol of the port.
Additionally, for the &lt;a href=&#34;https://docs.mongodb.com/manual/reference/mongodb-wire-protocol/&#34;&gt;MongoDB Wire Protocol&lt;/a&gt;, the
protocol can be specified as &lt;code&gt;MONGO&lt;/code&gt;, instead of &lt;code&gt;TCP&lt;/code&gt;.&lt;/p&gt;
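&lt;p&gt;For illustration, a mesh-external service entry for a MongoDB instance could look like the following sketch (the host and IP are hypothetical placeholders):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-yaml&#39; data-expandlinks=&#39;true&#39; &gt;apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: mongo-external
spec:
  hosts:
  - my.mongodb.example.com  # hypothetical host
  addresses:
  - 192.0.2.17/32  # the instance IP, as a /32 CIDR block
  ports:
  - number: 27017
    name: mongo
    protocol: MONGO
  location: MESH_EXTERNAL
&lt;/code&gt;&lt;/pre&gt;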
&lt;p&gt;For the &lt;code&gt;addresses&lt;/code&gt; field of the entry, a block of IPs in &lt;a href=&#34;https://tools.ietf.org/html/rfc2317&#34;&gt;CIDR&lt;/a&gt;
notation must be used. Note that the &lt;code&gt;hosts&lt;/code&gt; field is ignored for TCP service entries.&lt;/p&gt;
&lt;p&gt;To enable TCP traffic to an external service by its hostname, all the IPs of the hostname must be specified. Each IP
must be specified by a CIDR block.&lt;/p&gt;
&lt;p&gt;Note that not all the IPs of an external service are always known. To enable egress TCP traffic, you only need to
specify the IPs that are actually used by the applications.&lt;/p&gt;
&lt;p&gt;Also note that the IPs of an external service are not always static, for example in the case of
&lt;a href=&#34;https://en.wikipedia.org/wiki/Content_delivery_network&#34;&gt;CDNs&lt;/a&gt;. Sometimes the IPs are static most of the time, but can
be changed from time to time, for example due to infrastructure changes. In these cases, if the range of the possible
IPs is known, you should specify the range by CIDR blocks. If the range of the possible IPs is not known, service
entries for TCP cannot be used and
&lt;a href=&#34;/v1.1/docs/tasks/traffic-management/egress/#direct-access-to-external-services&#34;&gt;the external services must be called directly&lt;/a&gt;,
bypassing the sidecar proxies.&lt;/p&gt;
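&lt;p&gt;For example, if an external service is known to serve from a given IP range, the &lt;code&gt;addresses&lt;/code&gt; field can list that range as a single CIDR block instead of enumerating individual IPs. The host and range below are documentation-only examples:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-yaml&#39; data-expandlinks=&#39;true&#39; &gt;spec:
  hosts:
  - cdn.example.com  # hypothetical host
  addresses:
  - 192.0.2.0/24  # one CIDR block covering the whole known range
  ports:
  - number: 443
    name: tcp
    protocol: tcp
  location: MESH_EXTERNAL
&lt;/code&gt;&lt;/pre&gt;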
&lt;h2 id=&#34;relation-to-mesh-expansion&#34;&gt;Relation to mesh expansion&lt;/h2&gt;
&lt;p&gt;Note that the scenario described in this post is different from the mesh expansion scenario, described in the
&lt;a href=&#34;/v1.1/docs/examples/integrating-vms/&#34;&gt;Integrating Virtual Machines&lt;/a&gt; example. In that scenario, a MySQL instance runs on an
external
(outside the cluster) machine (a bare metal or a VM), integrated with the Istio service mesh. The MySQL service becomes
a first-class citizen of the mesh with all the beneficial features of Istio applicable. Among other things, the service
becomes addressable by a local cluster domain name, for example by &lt;code&gt;mysqldb.vm.svc.cluster.local&lt;/code&gt;, and the communication
to it can be secured by
&lt;a href=&#34;/v1.1/docs/concepts/security/#mutual-tls-authentication&#34;&gt;mutual TLS authentication&lt;/a&gt;. There is no need to create a service
entry to access this service; however, the service must be registered with Istio. To enable such integration, Istio
components (&lt;em&gt;Envoy proxy&lt;/em&gt;, &lt;em&gt;node-agent&lt;/em&gt;, &lt;em&gt;istio-agent&lt;/em&gt;) must be installed on the machine and the Istio control plane
(&lt;em&gt;Pilot&lt;/em&gt;, &lt;em&gt;Mixer&lt;/em&gt;, &lt;em&gt;Citadel&lt;/em&gt;) must be accessible from it. See the
&lt;a href=&#34;/v1.1/docs/setup/kubernetes/additional-setup/mesh-expansion/&#34;&gt;Istio Mesh Expansion&lt;/a&gt; instructions for more details.&lt;/p&gt;
&lt;p&gt;In our case, the MySQL instance can run on any machine or can be provisioned as a service by a cloud provider. There is
no requirement to integrate the machine with Istio. The Istio control plane does not have to be accessible from the
machine. In the case of MySQL as a service, the machine on which MySQL runs may not be accessible, and installing
the required components on it may be impossible. In our case, the MySQL instance is addressable by its global domain name,
which could be beneficial if the consuming applications expect to use that domain name. This is especially relevant when
that expected domain name cannot be changed in the deployment configuration of the consuming applications.&lt;/p&gt;
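&lt;p&gt;For reference, a TCP mesh-external service entry for such a MySQL instance has the following general shape. This is a sketch with placeholder values (matching the &lt;code&gt;mysql-external&lt;/code&gt; entry deleted in the cleanup below); substitute the host, IP, and port of your own instance:&lt;/p&gt;

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: mysql-external
spec:
  hosts:
  - $MYSQL_DB_HOST        # placeholder: the global domain name of the MySQL instance
  addresses:
  - $MYSQL_DB_IP/32       # placeholder: its IP address, as a /32 CIDR block
  ports:
  - number: $MYSQL_DB_PORT  # placeholder: typically 3306
    name: tcp
    protocol: TCP
  location: MESH_EXTERNAL
  resolution: STATIC
  endpoints:
  - address: $MYSQL_DB_IP   # placeholder: same IP as above
```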
&lt;h2 id=&#34;cleanup&#34;&gt;Cleanup&lt;/h2&gt;
&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Drop the &lt;code&gt;test&lt;/code&gt; database and the &lt;code&gt;bookinfo&lt;/code&gt; user:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ mysqlsh --sql --ssl-mode=REQUIRED -u admin -p --host $MYSQL_DB_HOST --port $MYSQL_DB_PORT -e &amp;#34;drop database test; drop user bookinfo;&amp;#34;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;em&gt;&lt;strong&gt;OR&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;For &lt;code&gt;mysql&lt;/code&gt; and the local database:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ mysql -u root -p --host $MYSQL_DB_HOST --port $MYSQL_DB_PORT -e &amp;#34;drop database test; drop user bookinfo;&amp;#34;
&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Remove the virtual services:&lt;/p&gt;
&lt;div&gt;&lt;a data-skipendnotes=&#39;true&#39; style=&#39;display:none&#39; href=&#39;https://raw.githubusercontent.com/istio/istio/release-1.1/samples/bookinfo/networking/virtual-service-ratings-mysql.yaml&#39;&gt;Zip&lt;/a&gt;&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ kubectl delete -f @samples/bookinfo/networking/virtual-service-ratings-mysql.yaml@
Deleted config: virtual-service/default/reviews
Deleted config: virtual-service/default/ratings
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Undeploy &lt;em&gt;ratings v2-mysql&lt;/em&gt;:&lt;/p&gt;
&lt;div&gt;&lt;a data-skipendnotes=&#39;true&#39; style=&#39;display:none&#39; href=&#39;https://raw.githubusercontent.com/istio/istio/release-1.1/samples/bookinfo/platform/kube/bookinfo-ratings-v2-mysql.yaml&#39;&gt;Zip&lt;/a&gt;&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ kubectl delete -f @samples/bookinfo/platform/kube/bookinfo-ratings-v2-mysql.yaml@
deployment &amp;#34;ratings-v2-mysql&amp;#34; deleted
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Delete the service entry:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ kubectl delete serviceentry mysql-external -n default
Deleted config: serviceentry mysql-external
&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;h2 id=&#34;conclusion&#34;&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;In this blog post, I demonstrated how the microservices in an Istio service mesh can consume external services via TCP.
By default, Istio blocks all traffic, both TCP and HTTP, to hosts outside the cluster. To enable such traffic for
TCP, TCP mesh-external service entries must be created for the service mesh.&lt;/p&gt;</description><pubDate>Tue, 06 Feb 2018 00:00:00 +0000</pubDate><link>/v1.1/blog/2018/egress-tcp/</link><author>Vadim Eisenberg</author><guid isPermaLink="true">/v1.1/blog/2018/egress-tcp/</guid><category>traffic-management</category><category>egress</category><category>tcp</category></item><item><title>Consuming External Web Services</title><description>
&lt;p&gt;In many cases, not all the parts of a microservices-based application reside in a &lt;em&gt;service mesh&lt;/em&gt;. Sometimes, the
microservices-based applications use functionality provided by legacy systems that reside outside the mesh. You may want
to migrate these systems to the service mesh gradually. Until these systems are migrated, they must be accessed by the
applications inside the mesh. In other cases, the applications use web services provided by third parties.&lt;/p&gt;
&lt;p&gt;In this blog post, I modify the &lt;a href=&#34;/v1.1/docs/examples/bookinfo/&#34;&gt;Istio Bookinfo Sample Application&lt;/a&gt; to fetch book details from
an external web service (&lt;a href=&#34;https://developers.google.com/books/docs/v1/getting_started&#34;&gt;Google Books APIs&lt;/a&gt;). I show how
to enable egress HTTPS traffic in Istio by using &lt;em&gt;mesh-external service entries&lt;/em&gt;. I provide two options for egress
HTTPS traffic and describe the pros and cons of each of the options.&lt;/p&gt;
&lt;h2 id=&#34;initial-setting&#34;&gt;Initial setting&lt;/h2&gt;
&lt;p&gt;To demonstrate the scenario of consuming an external web service, I start with a Kubernetes cluster with &lt;a href=&#34;/v1.1/docs/setup/kubernetes/install/kubernetes/#installation-steps&#34;&gt;Istio installed&lt;/a&gt;. Then I deploy the
&lt;a href=&#34;/v1.1/docs/examples/bookinfo/&#34;&gt;Istio Bookinfo Sample Application&lt;/a&gt;. This application uses the &lt;em&gt;details&lt;/em&gt; microservice to fetch
book details, such as the number of pages and the publisher. The original &lt;em&gt;details&lt;/em&gt; microservice provides the book
details without consulting any external service.&lt;/p&gt;
&lt;p&gt;The example commands in this blog post work with Istio 1.0+, with or without
&lt;a href=&#34;/v1.1/docs/concepts/security/#mutual-tls-authentication&#34;&gt;mutual TLS&lt;/a&gt; enabled. The Bookinfo configuration files reside in the
&lt;code&gt;samples/bookinfo&lt;/code&gt; directory of the Istio release archive.&lt;/p&gt;
&lt;p&gt;Here is a copy of the end-to-end architecture of the application from the original
&lt;a href=&#34;/v1.1/docs/examples/bookinfo/&#34;&gt;Bookinfo sample application&lt;/a&gt;.&lt;/p&gt;
&lt;figure style=&#34;width:80%&#34;&gt;
&lt;div class=&#34;wrapper-with-intrinsic-ratio&#34; style=&#34;padding-bottom:59.086918235567985%&#34;&gt;
&lt;a data-skipendnotes=&#34;true&#34; href=&#34;/v1.1/docs/examples/bookinfo/withistio.svg&#34; title=&#34;The Original Bookinfo Application&#34;&gt;
&lt;img class=&#34;element-to-stretch&#34; src=&#34;/v1.1/docs/examples/bookinfo/withistio.svg&#34; alt=&#34;The Original Bookinfo Application&#34; /&gt;
&lt;/a&gt;
&lt;/div&gt;
&lt;figcaption&gt;The Original Bookinfo Application&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p&gt;Perform the steps in the
&lt;a href=&#34;/v1.1/docs/examples/bookinfo/#deploying-the-application&#34;&gt;Deploying the application&lt;/a&gt;,
&lt;a href=&#34;/v1.1/docs/examples/bookinfo/#confirm-the-app-is-accessible-from-outside-the-cluster&#34;&gt;Confirm the app is running&lt;/a&gt;,
&lt;a href=&#34;/v1.1/docs/examples/bookinfo/#apply-default-destination-rules&#34;&gt;Apply default destination rules&lt;/a&gt;
sections, and
&lt;a href=&#34;/v1.1/docs/tasks/traffic-management/egress/#change-to-the-blocking-by-default-policy&#34;&gt;change Istio to the blocking-egress-by-default policy&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id=&#34;bookinfo-with-https-access-to-a-google-books-web-service&#34;&gt;Bookinfo with HTTPS access to a Google Books web service&lt;/h2&gt;
&lt;p&gt;Deploy a new version of the &lt;em&gt;details&lt;/em&gt; microservice, &lt;em&gt;v2&lt;/em&gt;, that fetches the book details from &lt;a href=&#34;https://developers.google.com/books/docs/v1/getting_started&#34;&gt;Google Books APIs&lt;/a&gt;. Run the following command; it sets the
&lt;code&gt;DO_NOT_ENCRYPT&lt;/code&gt; environment variable of the service&amp;rsquo;s container to &lt;code&gt;false&lt;/code&gt;. This setting will instruct the deployed
service to use HTTPS (instead of HTTP) to access the external service.&lt;/p&gt;
&lt;div&gt;&lt;a data-skipendnotes=&#39;true&#39; style=&#39;display:none&#39; href=&#39;https://raw.githubusercontent.com/istio/istio/release-1.1/samples/bookinfo/platform/kube/bookinfo-details-v2.yaml&#39;&gt;Zip&lt;/a&gt;&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ kubectl apply -f @samples/bookinfo/platform/kube/bookinfo-details-v2.yaml@ --dry-run -o yaml | kubectl set env --local -f - &amp;#39;DO_NOT_ENCRYPT=false&amp;#39; -o yaml | kubectl apply -f -
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;The updated architecture of the application now looks as follows:&lt;/p&gt;
&lt;figure style=&#34;width:80%&#34;&gt;
&lt;div class=&#34;wrapper-with-intrinsic-ratio&#34; style=&#34;padding-bottom:65.1654485092242%&#34;&gt;
&lt;a data-skipendnotes=&#34;true&#34; href=&#34;/v1.1/blog/2018/egress-https/./bookinfo-details-v2.svg&#34; title=&#34;The Bookinfo Application with details V2&#34;&gt;
&lt;img class=&#34;element-to-stretch&#34; src=&#34;/v1.1/blog/2018/egress-https/./bookinfo-details-v2.svg&#34; alt=&#34;The Bookinfo Application with details V2&#34; /&gt;
&lt;/a&gt;
&lt;/div&gt;
&lt;figcaption&gt;The Bookinfo Application with details V2&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p&gt;Note that the Google Books web service is outside the Istio service mesh, the boundary of which is marked by a dashed
line.&lt;/p&gt;
&lt;p&gt;Now direct all the traffic destined for the &lt;em&gt;details&lt;/em&gt; microservice to &lt;em&gt;details version v2&lt;/em&gt;.&lt;/p&gt;
&lt;div&gt;&lt;a data-skipendnotes=&#39;true&#39; style=&#39;display:none&#39; href=&#39;https://raw.githubusercontent.com/istio/istio/release-1.1/samples/bookinfo/networking/virtual-service-details-v2.yaml&#39;&gt;Zip&lt;/a&gt;&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ kubectl apply -f @samples/bookinfo/networking/virtual-service-details-v2.yaml@
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;Note that the virtual service relies on a destination rule that you created in the &lt;a href=&#34;/v1.1/docs/examples/bookinfo/#apply-default-destination-rules&#34;&gt;Apply default destination rules&lt;/a&gt; section.&lt;/p&gt;
&lt;p&gt;Access the web page of the application, after
&lt;a href=&#34;/v1.1/docs/examples/bookinfo/#determining-the-ingress-ip-and-port&#34;&gt;determining the ingress IP and port&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Oops&amp;hellip; Instead of the book details you have the &lt;em&gt;Error fetching product details&lt;/em&gt; message displayed:&lt;/p&gt;
&lt;figure style=&#34;width:80%&#34;&gt;
&lt;div class=&#34;wrapper-with-intrinsic-ratio&#34; style=&#34;padding-bottom:36.18649965205289%&#34;&gt;
&lt;a data-skipendnotes=&#34;true&#34; href=&#34;/v1.1/blog/2018/egress-https/./errorFetchingBookDetails.png&#34; title=&#34;The Error Fetching Product Details Message&#34;&gt;
&lt;img class=&#34;element-to-stretch&#34; src=&#34;/v1.1/blog/2018/egress-https/./errorFetchingBookDetails.png&#34; alt=&#34;The Error Fetching Product Details Message&#34; /&gt;
&lt;/a&gt;
&lt;/div&gt;
&lt;figcaption&gt;The Error Fetching Product Details Message&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p&gt;The good news is that your application did not crash. With a good microservice design, you do not have &lt;strong&gt;failure
propagation&lt;/strong&gt;. In your case, the failing &lt;em&gt;details&lt;/em&gt; microservice does not cause the &lt;code&gt;productpage&lt;/code&gt; microservice to fail.
Most of the functionality of the application is still provided, despite the failure in the &lt;em&gt;details&lt;/em&gt; microservice. You
have &lt;strong&gt;graceful service degradation&lt;/strong&gt;: as you can see, the reviews and the ratings are displayed correctly, and the
application is still useful.&lt;/p&gt;
&lt;p&gt;So what might have gone wrong? Ah&amp;hellip; The answer is that I forgot to tell you to enable traffic from inside the mesh to
an external service, in this case to the Google Books web service. By default, the Istio sidecar proxies
(&lt;a href=&#34;https://www.envoyproxy.io&#34;&gt;Envoy proxies&lt;/a&gt;) &lt;strong&gt;block all the traffic to destinations outside the cluster&lt;/strong&gt;. To enable
such traffic, you must define a
&lt;a href=&#34;/v1.1/docs/reference/config/networking/v1alpha3/service-entry/&#34;&gt;mesh-external service entry&lt;/a&gt;.&lt;/p&gt;
&lt;h3 id=&#34;enable-https-access-to-a-google-books-web-service&#34;&gt;Enable HTTPS access to a Google Books web service&lt;/h3&gt;
&lt;p&gt;No worries, define a &lt;strong&gt;mesh-external service entry&lt;/strong&gt; and fix your application. You must also define a &lt;em&gt;virtual
service&lt;/em&gt; to perform routing by &lt;a href=&#34;https://en.wikipedia.org/wiki/Server_Name_Indication&#34;&gt;SNI&lt;/a&gt; to the external service.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ kubectl apply -f - &amp;lt;&amp;lt;EOF
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: googleapis
spec:
  hosts:
  - www.googleapis.com
  ports:
  - number: 443
    name: https
    protocol: HTTPS
  location: MESH_EXTERNAL
  resolution: DNS
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: googleapis
spec:
  hosts:
  - www.googleapis.com
  tls:
  - match:
    - port: 443
      sni_hosts:
      - www.googleapis.com
    route:
    - destination:
        host: www.googleapis.com
        port:
          number: 443
      weight: 100
EOF
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now accessing the web page of the application displays the book details without error:&lt;/p&gt;
&lt;figure style=&#34;width:80%&#34;&gt;
&lt;div class=&#34;wrapper-with-intrinsic-ratio&#34; style=&#34;padding-bottom:34.82831114225648%&#34;&gt;
&lt;a data-skipendnotes=&#34;true&#34; href=&#34;/v1.1/blog/2018/egress-https/./externalBookDetails.png&#34; title=&#34;Book Details Displayed Correctly&#34;&gt;
&lt;img class=&#34;element-to-stretch&#34; src=&#34;/v1.1/blog/2018/egress-https/./externalBookDetails.png&#34; alt=&#34;Book Details Displayed Correctly&#34; /&gt;
&lt;/a&gt;
&lt;/div&gt;
&lt;figcaption&gt;Book Details Displayed Correctly&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p&gt;You can query your service entries:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ kubectl get serviceentries
NAME AGE
googleapis 8m
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You can delete your service entry:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ kubectl delete serviceentry googleapis
serviceentry &amp;#34;googleapis&amp;#34; deleted
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;and see in the output that the service entry is deleted.&lt;/p&gt;
&lt;p&gt;Accessing the web page after deleting the service entry produces the same error that you experienced before, namely
&lt;em&gt;Error fetching product details&lt;/em&gt;. As you can see, the service entries are defined &lt;strong&gt;dynamically&lt;/strong&gt;, as are many other
Istio configuration artifacts. The Istio operators can decide dynamically which domains they allow the microservices to
access. They can enable and disable traffic to the external domains on the fly, without redeploying the microservices.&lt;/p&gt;
&lt;h3 id=&#34;cleanup-of-https-access-to-a-google-books-web-service&#34;&gt;Cleanup of HTTPS access to a Google Books web service&lt;/h3&gt;
&lt;div&gt;&lt;a data-skipendnotes=&#39;true&#39; style=&#39;display:none&#39; href=&#39;https://raw.githubusercontent.com/istio/istio/release-1.1/samples/bookinfo/platform/kube/bookinfo-details-v2.yaml&#39;&gt;Zip&lt;/a&gt;&lt;a data-skipendnotes=&#39;true&#39; style=&#39;display:none&#39; href=&#39;https://raw.githubusercontent.com/istio/istio/release-1.1/samples/bookinfo/networking/virtual-service-details-v2.yaml&#39;&gt;Zip&lt;/a&gt;&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ kubectl delete serviceentry googleapis
$ kubectl delete virtualservice googleapis
$ kubectl delete -f @samples/bookinfo/networking/virtual-service-details-v2.yaml@
$ kubectl delete -f @samples/bookinfo/platform/kube/bookinfo-details-v2.yaml@
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;h2 id=&#34;tls-origination-by-istio&#34;&gt;TLS origination by Istio&lt;/h2&gt;
&lt;p&gt;There is a caveat to this story. Suppose you want to monitor which specific set of
&lt;a href=&#34;https://developers.google.com/apis-explorer/&#34;&gt;Google APIs&lt;/a&gt; your microservices use
(&lt;a href=&#34;https://developers.google.com/books/docs/v1/getting_started&#34;&gt;Books&lt;/a&gt;,
&lt;a href=&#34;https://developers.google.com/calendar/&#34;&gt;Calendar&lt;/a&gt;, &lt;a href=&#34;https://developers.google.com/tasks/&#34;&gt;Tasks&lt;/a&gt;, etc.).
Suppose you want to enforce a policy that allows using only the
&lt;a href=&#34;https://developers.google.com/books/docs/v1/getting_started&#34;&gt;Books APIs&lt;/a&gt;. Suppose you want to monitor the
book identifiers that your microservices access. For these monitoring and policy tasks you need to know the URL path.
Consider for example the URL
&lt;a href=&#34;https://www.googleapis.com/books/v1/volumes?q=isbn:0486424618&#34;&gt;&lt;code&gt;www.googleapis.com/books/v1/volumes?q=isbn:0486424618&lt;/code&gt;&lt;/a&gt;.
In that URL, &lt;a href=&#34;https://developers.google.com/books/docs/v1/getting_started&#34;&gt;Books APIs&lt;/a&gt; is specified by the path segment
&lt;code&gt;/books&lt;/code&gt;, and the &lt;a href=&#34;https://en.wikipedia.org/wiki/International_Standard_Book_Number&#34;&gt;ISBN&lt;/a&gt; number by the path segment
&lt;code&gt;/volumes?q=isbn:0486424618&lt;/code&gt;. However, in HTTPS, all the HTTP details (hostname, path, headers etc.) are encrypted and
such monitoring and policy enforcement by the sidecar proxies is not possible. Istio can only know the server name of
the encrypted requests by the &lt;a href=&#34;https://tools.ietf.org/html/rfc3546#section-3.1&#34;&gt;SNI&lt;/a&gt; (&lt;em&gt;Server Name Indication&lt;/em&gt;) field,
in this case &lt;code&gt;www.googleapis.com&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;To allow Istio to perform monitoring and policy enforcement of egress requests based on HTTP details, the microservices
must issue HTTP requests. Istio then opens an HTTPS connection to the destination (performs TLS origination). The code
of the microservices must be written differently or configured differently, according to whether the microservice runs
inside or outside an Istio service mesh. This contradicts the Istio design goal of &lt;a href=&#34;/v1.1/docs/concepts/what-is-istio/#design-goals&#34;&gt;maximizing transparency&lt;/a&gt;. Sometimes you need to compromise&amp;hellip;&lt;/p&gt;
&lt;p&gt;The diagram below shows two options for sending HTTPS traffic to external services. On the top, a microservice sends
regular HTTPS requests, encrypted end-to-end. On the bottom, the same microservice sends unencrypted HTTP requests
inside a pod, which are intercepted by the sidecar Envoy proxy. The sidecar proxy performs TLS origination, so the
traffic between the pod and the external service is encrypted.&lt;/p&gt;
&lt;figure style=&#34;width:60%&#34;&gt;
&lt;div class=&#34;wrapper-with-intrinsic-ratio&#34; style=&#34;padding-bottom:95.1355088590701%&#34;&gt;
&lt;a data-skipendnotes=&#34;true&#34; href=&#34;/v1.1/blog/2018/egress-https/./https_from_the_app.svg&#34; title=&#34;HTTPS traffic to external services, with TLS originated by the microservice vs. by the sidecar proxy&#34;&gt;
&lt;img class=&#34;element-to-stretch&#34; src=&#34;/v1.1/blog/2018/egress-https/./https_from_the_app.svg&#34; alt=&#34;HTTPS traffic to external services, with TLS originated by the microservice vs. by the sidecar proxy&#34; /&gt;
&lt;/a&gt;
&lt;/div&gt;
&lt;figcaption&gt;HTTPS traffic to external services, with TLS originated by the microservice vs. by the sidecar proxy&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p&gt;Here is how both patterns are supported in the
&lt;a href=&#34;https://raw.githubusercontent.com/istio/istio/release-1.1/samples/bookinfo/src/details/details.rb&#34;&gt;Bookinfo details microservice code&lt;/a&gt;, using the Ruby
&lt;a href=&#34;https://docs.ruby-lang.org/en/2.0.0/Net/HTTP.html&#34;&gt;net/http module&lt;/a&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-ruby&#39; data-expandlinks=&#39;true&#39; &gt;uri = URI.parse(&amp;#39;https://www.googleapis.com/books/v1/volumes?q=isbn:&amp;#39; + isbn)
http = Net::HTTP.new(uri.host, ENV[&amp;#39;DO_NOT_ENCRYPT&amp;#39;] === &amp;#39;true&amp;#39; ? 80:443)
...
unless ENV[&amp;#39;DO_NOT_ENCRYPT&amp;#39;] === &amp;#39;true&amp;#39; then
  http.use_ssl = true
end
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;When the &lt;code&gt;DO_NOT_ENCRYPT&lt;/code&gt; environment variable is set to &lt;code&gt;true&lt;/code&gt;, the request is performed without SSL (plain HTTP) to port 80.&lt;/p&gt;
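&lt;p&gt;As a minimal, self-contained sketch of this toggle (the helper name &lt;code&gt;request_target&lt;/code&gt; is hypothetical, not part of the Bookinfo code), the port and TLS selection can be expressed as a pure function:&lt;/p&gt;

```ruby
require 'uri'

# Hypothetical helper mirroring the snippet above: given the ISBN and the
# value of DO_NOT_ENCRYPT, compute the host, port and TLS setting that
# details v2 would use for the outgoing request.
def request_target(isbn, do_not_encrypt)
  uri = URI.parse('https://www.googleapis.com/books/v1/volumes?q=isbn:' + isbn)
  plain = do_not_encrypt == 'true'
  # Plain HTTP on port 80 lets the sidecar proxy intercept the request and
  # perform TLS origination; otherwise the service itself uses TLS on 443.
  { host: uri.host, port: plain ? 80 : 443, use_ssl: !plain }
end

puts request_target('0486424618', 'true')
puts request_target('0486424618', nil)
```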
&lt;p&gt;You can set the &lt;code&gt;DO_NOT_ENCRYPT&lt;/code&gt; environment variable to &lt;em&gt;&amp;ldquo;true&amp;rdquo;&lt;/em&gt; in the
&lt;a href=&#34;https://raw.githubusercontent.com/istio/istio/release-1.1/samples/bookinfo/platform/kube/bookinfo-details-v2.yaml&#34;&gt;Kubernetes deployment spec of details v2&lt;/a&gt;,
the &lt;code&gt;container&lt;/code&gt; section:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-yaml&#39; data-expandlinks=&#39;true&#39; &gt;env:
- name: DO_NOT_ENCRYPT
  value: &amp;#34;true&amp;#34;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;In the next section you will configure TLS origination for accessing an external web service.&lt;/p&gt;
&lt;h2 id=&#34;bookinfo-with-tls-origination-to-a-google-books-web-service&#34;&gt;Bookinfo with TLS origination to a Google Books web service&lt;/h2&gt;
&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Deploy a version of &lt;em&gt;details v2&lt;/em&gt; that sends an HTTP request to
&lt;a href=&#34;https://developers.google.com/books/docs/v1/getting_started&#34;&gt;Google Books APIs&lt;/a&gt;. The &lt;code&gt;DO_NOT_ENCRYPT&lt;/code&gt; variable
is set to true in
&lt;a href=&#34;https://raw.githubusercontent.com/istio/istio/release-1.1/samples/bookinfo/platform/kube/bookinfo-details-v2.yaml&#34;&gt;&lt;code&gt;bookinfo-details-v2.yaml&lt;/code&gt;&lt;/a&gt;.&lt;/p&gt;
&lt;div&gt;&lt;a data-skipendnotes=&#39;true&#39; style=&#39;display:none&#39; href=&#39;https://raw.githubusercontent.com/istio/istio/release-1.1/samples/bookinfo/platform/kube/bookinfo-details-v2.yaml&#39;&gt;Zip&lt;/a&gt;&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ kubectl apply -f @samples/bookinfo/platform/kube/bookinfo-details-v2.yaml@
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Direct the traffic destined for the &lt;em&gt;details&lt;/em&gt; microservice to &lt;em&gt;details version v2&lt;/em&gt;.&lt;/p&gt;
&lt;div&gt;&lt;a data-skipendnotes=&#39;true&#39; style=&#39;display:none&#39; href=&#39;https://raw.githubusercontent.com/istio/istio/release-1.1/samples/bookinfo/networking/virtual-service-details-v2.yaml&#39;&gt;Zip&lt;/a&gt;&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ kubectl apply -f @samples/bookinfo/networking/virtual-service-details-v2.yaml@
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create a mesh-external service entry for &lt;code&gt;www.googleapis.com&lt;/code&gt;, a virtual service to rewrite the destination port from
80 to 443, and a destination rule to perform TLS origination.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ kubectl apply -f - &amp;lt;&amp;lt;EOF
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: googleapis
spec:
  hosts:
  - www.googleapis.com
  ports:
  - number: 80
    name: http
    protocol: HTTP
  - number: 443
    name: https
    protocol: HTTPS
  resolution: DNS
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: rewrite-port-for-googleapis
spec:
  hosts:
  - www.googleapis.com
  http:
  - match:
    - port: 80
    route:
    - destination:
        host: www.googleapis.com
        port:
          number: 443
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: originate-tls-for-googleapis
spec:
  host: www.googleapis.com
  trafficPolicy:
    loadBalancer:
      simple: ROUND_ROBIN
    portLevelSettings:
    - port:
        number: 443
      tls:
        mode: SIMPLE # initiates HTTPS when accessing www.googleapis.com
EOF
&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Access the web page of the application and verify that the book details are displayed without errors.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href=&#34;/v1.1/docs/tasks/telemetry/logs/access-log/#enable-envoy-s-access-logging&#34;&gt;Enable Envoy&amp;rsquo;s access logging&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Check the log of the sidecar proxy of &lt;em&gt;details v2&lt;/em&gt; and see the HTTP request.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ kubectl logs $(kubectl get pods -l app=details -l version=v2 -o jsonpath=&amp;#39;{.items[0].metadata.name}&amp;#39;) istio-proxy | grep googleapis
[2018-08-09T11:32:58.171Z] &amp;#34;GET /books/v1/volumes?q=isbn:0486424618 HTTP/1.1&amp;#34; 200 - 0 1050 264 264 &amp;#34;-&amp;#34; &amp;#34;Ruby&amp;#34; &amp;#34;b993bae7-4288-9241-81a5-4cde93b2e3a6&amp;#34; &amp;#34;www.googleapis.com:80&amp;#34; &amp;#34;172.217.20.74:80&amp;#34;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Note the URL path in the log; the path can be monitored and access policies can be applied based on it. To read more
about monitoring and access policies for HTTP egress traffic, check out &lt;a href=&#34;https://archive.istio.io/v0.8/blog/2018/egress-monitoring-access-control/#logging&#34;&gt;this blog post&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
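&lt;p&gt;As an illustration of why the unencrypted hop matters for monitoring, here is a small hypothetical sketch (not an Istio tool) that extracts the HTTP method and URL path from an access log line like the one above; these are exactly the fields that path-based monitoring and policies key on:&lt;/p&gt;

```ruby
# Hypothetical sketch: pull the HTTP method and URL path out of an Envoy
# access log line. If the application originated HTTPS itself instead, the
# proxy would only see the SNI host, and no path would appear in the log.
def parse_request_line(log_line)
  m = log_line.match(/"(\w+) (\S+) HTTP/)
  m ? { method: m[1], path: m[2] } : nil
end

line = '[2018-08-09T11:32:58.171Z] "GET /books/v1/volumes?q=isbn:0486424618 HTTP/1.1" 200'
puts parse_request_line(line)
```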
&lt;h3 id=&#34;cleanup-of-tls-origination-to-a-google-books-web-service&#34;&gt;Cleanup of TLS origination to a Google Books web service&lt;/h3&gt;
&lt;div&gt;&lt;a data-skipendnotes=&#39;true&#39; style=&#39;display:none&#39; href=&#39;https://raw.githubusercontent.com/istio/istio/release-1.1/samples/bookinfo/platform/kube/bookinfo-details-v2.yaml&#39;&gt;Zip&lt;/a&gt;&lt;a data-skipendnotes=&#39;true&#39; style=&#39;display:none&#39; href=&#39;https://raw.githubusercontent.com/istio/istio/release-1.1/samples/bookinfo/networking/virtual-service-details-v2.yaml&#39;&gt;Zip&lt;/a&gt;&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ kubectl delete serviceentry googleapis
$ kubectl delete virtualservice rewrite-port-for-googleapis
$ kubectl delete destinationrule originate-tls-for-googleapis
$ kubectl delete -f @samples/bookinfo/networking/virtual-service-details-v2.yaml@
$ kubectl delete -f @samples/bookinfo/platform/kube/bookinfo-details-v2.yaml@
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;h3 id=&#34;relation-to-istio-mutual-tls&#34;&gt;Relation to Istio mutual TLS&lt;/h3&gt;
&lt;p&gt;Note that the TLS origination in this case is unrelated to
&lt;a href=&#34;/v1.1/docs/concepts/security/#mutual-tls-authentication&#34;&gt;the mutual TLS&lt;/a&gt; applied by Istio. The TLS origination for the
external services will work, whether the Istio mutual TLS is enabled or not. The &lt;strong&gt;mutual&lt;/strong&gt; TLS secures
service-to-service communication &lt;strong&gt;inside&lt;/strong&gt; the service mesh and provides each service with a strong identity. The
&lt;strong&gt;external services&lt;/strong&gt; in this blog post were accessed using &lt;strong&gt;one-way TLS&lt;/strong&gt;, the same mechanism used to secure communication between a
web browser and a web server. TLS is applied to the communication with external services to verify the identity of the
external server and to encrypt the traffic.&lt;/p&gt;
&lt;h2 id=&#34;conclusion&#34;&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;In this blog post I demonstrated how microservices in an Istio service mesh can consume external web services by
HTTPS. By default, Istio blocks all the traffic to the hosts outside the cluster. To enable such traffic, mesh-external
service entries must be created for the service mesh. It is possible to access the external sites either by
issuing HTTPS requests, or by issuing HTTP requests with Istio performing TLS origination. When the microservices issue
HTTPS requests, the traffic is encrypted end-to-end; however, Istio cannot monitor HTTP details like the URL paths of the
requests. When the microservices issue HTTP requests, Istio can monitor the HTTP details of the requests and enforce
HTTP-based access policies. However, in that case the traffic between the microservice and the sidecar proxy is unencrypted.
Having part of the traffic unencrypted can be forbidden in organizations with very strict security requirements.&lt;/p&gt;</description><pubDate>Wed, 31 Jan 2018 00:00:00 +0000</pubDate><link>/v1.1/blog/2018/egress-https/</link><author>Vadim Eisenberg</author><guid isPermaLink="true">/v1.1/blog/2018/egress-https/</guid><category>traffic-management</category><category>egress</category><category>https</category></item><item><title>Mixer and the SPOF Myth</title><description>
&lt;p&gt;As &lt;a href=&#34;/v1.1/docs/concepts/policies-and-telemetry/&#34;&gt;Mixer&lt;/a&gt; is in the request path, it is natural to question how it impacts
overall system availability and latency. A common refrain we hear when people first glance at Istio architecture diagrams is
&amp;ldquo;Isn&amp;rsquo;t this just introducing a single point of failure?&amp;rdquo;&lt;/p&gt;
&lt;p&gt;In this post, we&amp;rsquo;ll dig deeper and cover the design principles that underpin Mixer and the surprising fact that Mixer actually
increases overall mesh availability and reduces average request latency.&lt;/p&gt;
&lt;p&gt;Istio&amp;rsquo;s use of Mixer has two main benefits in terms of overall system availability and latency:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Increased SLO&lt;/strong&gt;. Mixer insulates proxies and services from infrastructure backend failures, enabling higher effective mesh availability. The mesh as a whole tends to experience a lower rate of failure when interacting with the infrastructure backends than if Mixer were not present.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Reduced Latency&lt;/strong&gt;. Through aggressive use of shared multi-level caches and sharding, Mixer reduces average observed latencies across the mesh.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;We&amp;rsquo;ll explain this in more detail below.&lt;/p&gt;
&lt;h2 id=&#34;how-we-got-here&#34;&gt;How we got here&lt;/h2&gt;
&lt;p&gt;For many years at Google, we&amp;rsquo;ve been using an internal API &amp;amp; service management system to handle the many APIs exposed by Google. This system has been fronting the world&amp;rsquo;s biggest services (Google Maps, YouTube, Gmail, etc.) and sustains a peak rate of hundreds of millions of QPS. Although this system has served us well, it had problems keeping up with Google&amp;rsquo;s rapid growth, and it became clear that a new architecture was needed in order to tamp down ballooning operational costs.&lt;/p&gt;
&lt;p&gt;In 2014, we started an initiative to create a replacement architecture that would scale better. The result has proven extremely successful and has been gradually deployed throughout Google, saving in the process millions of dollars a month in ops costs.&lt;/p&gt;
&lt;p&gt;The older system was built around a centralized fleet of fairly heavy proxies into which all incoming traffic would flow, before being forwarded to the services where the real work was done. The newer architecture jettisons the shared proxy design and instead consists of a very lean and efficient distributed sidecar proxy sitting next to service instances, along with a shared fleet of sharded control plane intermediaries:&lt;/p&gt;
&lt;figure style=&#34;width:75%&#34;&gt;
&lt;div class=&#34;wrapper-with-intrinsic-ratio&#34; style=&#34;padding-bottom:74.79295535770372%&#34;&gt;
&lt;a data-skipendnotes=&#34;true&#34; href=&#34;/v1.1/blog/2017/mixer-spof-myth/./mixer-spof-myth-1.svg&#34; title=&#34;Google System Topology&#34;&gt;
&lt;img class=&#34;element-to-stretch&#34; src=&#34;/v1.1/blog/2017/mixer-spof-myth/./mixer-spof-myth-1.svg&#34; alt=&#34;Google System Topology&#34; /&gt;
&lt;/a&gt;
&lt;/div&gt;
&lt;figcaption&gt;Google&amp;#39;s API &amp;amp; Service Management System&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p&gt;Look familiar? Of course: it&amp;rsquo;s just like Istio! Istio was conceived as a second generation of this distributed proxy architecture. We took the core lessons from this internal system, generalized many of the concepts by working with our partners, and created Istio.&lt;/p&gt;
&lt;h2 id=&#34;architecture-recap&#34;&gt;Architecture recap&lt;/h2&gt;
&lt;p&gt;As shown in the diagram below, Mixer sits between the mesh and the infrastructure backends that support it:&lt;/p&gt;
&lt;figure style=&#34;width:75%&#34;&gt;
&lt;div class=&#34;wrapper-with-intrinsic-ratio&#34; style=&#34;padding-bottom:65.89948886170049%&#34;&gt;
&lt;a data-skipendnotes=&#34;true&#34; href=&#34;/v1.1/blog/2017/mixer-spof-myth/./mixer-spof-myth-2.svg&#34; title=&#34;Istio Topology&#34;&gt;
&lt;img class=&#34;element-to-stretch&#34; src=&#34;/v1.1/blog/2017/mixer-spof-myth/./mixer-spof-myth-2.svg&#34; alt=&#34;Istio Topology&#34; /&gt;
&lt;/a&gt;
&lt;/div&gt;
&lt;figcaption&gt;Istio Topology&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p&gt;The Envoy sidecar logically calls Mixer before each request to perform precondition checks, and after each request to report telemetry.
The sidecar has local caching such that a relatively large percentage of precondition checks can be performed from cache. Additionally, the
sidecar buffers outgoing telemetry such that it only actually needs to call Mixer once for every several thousand requests. Whereas precondition
checks are synchronous to request processing, telemetry reports are done asynchronously with a fire-and-forget pattern.&lt;/p&gt;
&lt;p&gt;At a high level, Mixer provides:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Backend Abstraction&lt;/strong&gt;. Mixer insulates the Istio components and services within the mesh from the implementation details of individual infrastructure backends.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Intermediation&lt;/strong&gt;. Mixer allows operators to have fine-grained control over all interactions between the mesh and the infrastructure backends.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;However, even beyond these purely functional aspects, Mixer has other characteristics that provide the system with additional benefits.&lt;/p&gt;
&lt;h2 id=&#34;mixer-slo-booster&#34;&gt;Mixer: SLO booster&lt;/h2&gt;
&lt;p&gt;Contrary to the claim that Mixer is a SPOF and can therefore lead to mesh outages, we believe it in fact improves the effective availability of a mesh. How can that be? There are three basic characteristics at play:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Statelessness&lt;/strong&gt;. Mixer is stateless in that it doesn&amp;rsquo;t manage any persistent storage of its own.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Hardening&lt;/strong&gt;. Mixer proper is designed to be a highly reliable component. The design intent is to achieve &amp;gt; 99.999% uptime for any individual Mixer instance.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Caching and Buffering&lt;/strong&gt;. Mixer is designed to accumulate a large amount of transient ephemeral state.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The sidecar proxies that sit next to each service instance in the mesh must necessarily be frugal in terms of memory consumption, which constrains the possible amount of local caching and buffering. Mixer, however, lives independently and can use considerably larger caches and output buffers. Mixer thus acts as a highly-scaled and highly-available second-level cache for the sidecars.&lt;/p&gt;
&lt;p&gt;Mixer&amp;rsquo;s expected availability is considerably higher than most infrastructure backends (those often have availability of perhaps 99.9%). Its local caches and buffers help mask infrastructure backend failures by being able to continue operating even when a backend has become unresponsive.&lt;/p&gt;
&lt;h2 id=&#34;mixer-latency-slasher&#34;&gt;Mixer: Latency slasher&lt;/h2&gt;
&lt;p&gt;As we explained above, the Istio sidecars generally have fairly effective first-level caching. They can serve the majority of their traffic from cache. Mixer provides a much greater shared pool of second-level cache, which helps Mixer contribute to a lower average per-request latency.&lt;/p&gt;
&lt;p&gt;While it&amp;rsquo;s busy cutting down latency, Mixer is also inherently cutting down the number of calls your mesh makes to infrastructure backends. Depending on how you&amp;rsquo;re paying for these backends, this might end up saving you some cash by cutting down the effective QPS to the backends.&lt;/p&gt;
&lt;h2 id=&#34;work-ahead&#34;&gt;Work ahead&lt;/h2&gt;
&lt;p&gt;We have opportunities ahead to continue improving the system in many ways.&lt;/p&gt;
&lt;h3 id=&#34;configuration-canaries&#34;&gt;Configuration canaries&lt;/h3&gt;
&lt;p&gt;Mixer is highly scaled so it is generally resistant to individual instance failures. However, Mixer is still susceptible to cascading
failures in the case when a poison configuration is deployed which causes all Mixer instances to crash basically at the same time
(yeah, that would be a bad day). To prevent this from happening, configuration changes can be canaried to a small set of Mixer instances,
and then more broadly rolled out.&lt;/p&gt;
&lt;p&gt;Mixer doesn&amp;rsquo;t yet do canarying of configuration changes, but we expect this to come online as part of Istio&amp;rsquo;s ongoing work on reliable
configuration distribution.&lt;/p&gt;
&lt;h3 id=&#34;cache-tuning&#34;&gt;Cache tuning&lt;/h3&gt;
&lt;p&gt;We have yet to fine-tune the sizes of the sidecar and Mixer caches. This work will focus on achieving the highest performance possible using the least amount of resources.&lt;/p&gt;
&lt;h3 id=&#34;cache-sharing&#34;&gt;Cache sharing&lt;/h3&gt;
&lt;p&gt;At the moment, each Mixer instance operates independently of all other instances. A request handled by one Mixer instance will not leverage data cached in a different instance. We will eventually experiment with a distributed cache such as memcached or Redis in order to provide a much larger mesh-wide shared cache, and further reduce the number of calls to infrastructure backends.&lt;/p&gt;
&lt;h3 id=&#34;sharding&#34;&gt;Sharding&lt;/h3&gt;
&lt;p&gt;In very large meshes, the load on Mixer can be great. There can be a large number of Mixer instances, each straining to keep caches primed to
satisfy incoming traffic. We expect to eventually introduce intelligent sharding such that Mixer instances become slightly specialized in
handling particular data streams in order to increase the likelihood of cache hits. In other words, sharding helps improve cache
efficiency by routing related traffic to the same Mixer instance over time, rather than randomly dispatching to
any available Mixer instance.&lt;/p&gt;
&lt;h2 id=&#34;conclusion&#34;&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;Practical experience at Google showed that the model of a slim sidecar proxy and a large shared caching control plane intermediary hits a sweet
spot, delivering excellent perceived availability and latency. We&amp;rsquo;ve taken the lessons learned there and applied them to create more sophisticated and
effective caching, prefetching, and buffering strategies in Istio. We&amp;rsquo;ve also optimized the communication protocols to reduce overhead when a cache miss does occur.&lt;/p&gt;
&lt;p&gt;Mixer is still young. As of Istio 0.3, we haven&amp;rsquo;t really done significant performance work within Mixer itself. This means when a request misses the sidecar
cache, we spend more time in Mixer to respond to requests than we should. We&amp;rsquo;re doing a lot of work to improve this in coming months to reduce the overhead
that Mixer imparts in the synchronous precondition check case.&lt;/p&gt;
&lt;p&gt;We hope this post makes you appreciate the inherent benefits that Mixer brings to Istio.
Don&amp;rsquo;t hesitate to post comments or questions to &lt;a href=&#34;https://groups.google.com/forum/#!forum/istio-policies-and-telemetry&#34;&gt;istio-policies-and-telemetry@&lt;/a&gt;.&lt;/p&gt;</description><pubDate>Thu, 07 Dec 2017 00:00:00 +0000</pubDate><link>/v1.1/blog/2017/mixer-spof-myth/</link><author>Martin Taillefer</author><guid isPermaLink="true">/v1.1/blog/2017/mixer-spof-myth/</guid><category>adapters</category><category>mixer</category><category>policies</category><category>telemetry</category><category>availability</category><category>latency</category></item><item><title>Mixer Adapter Model</title><description>
&lt;p&gt;Istio 0.2 introduced a new Mixer adapter model which is intended to increase Mixer&amp;rsquo;s flexibility to address a varied set of infrastructure backends. This post intends to put the adapter model in context and explain how it works.&lt;/p&gt;
&lt;h2 id=&#34;why-adapters&#34;&gt;Why adapters?&lt;/h2&gt;
&lt;p&gt;Infrastructure backends provide support functionality used to build services. They include such things as access control systems, telemetry capturing systems, quota enforcement systems, billing systems, and so forth. Services traditionally directly integrate with these backend systems, creating a hard coupling and baking-in specific semantics and usage options.&lt;/p&gt;
&lt;p&gt;Mixer serves as an abstraction layer between Istio and an open-ended set of infrastructure backends. The Istio components and services that run within the mesh can interact with these backends, while not being coupled to the backends&amp;rsquo; specific interfaces.&lt;/p&gt;
&lt;p&gt;In addition to insulating application-level code from the details of infrastructure backends, Mixer provides an intermediation model that allows operators to inject and control policies between application code and backends. Operators can control which data is reported to which backend, which backend to consult for authorization, and much more.&lt;/p&gt;
&lt;p&gt;Given that individual infrastructure backends each have different interfaces and operational models, Mixer needs custom
code to deal with each, and we call these custom bundles of code &lt;a href=&#34;https://github.com/istio/istio/wiki/Mixer-Compiled-In-Adapter-Dev-Guide&#34;&gt;&lt;em&gt;adapters&lt;/em&gt;&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Adapters are Go packages that are directly linked into the Mixer binary. It&amp;rsquo;s fairly simple to create custom Mixer binaries linked with specialized sets of adapters, in case the default set of adapters is not sufficient for specific use cases.&lt;/p&gt;
&lt;h2 id=&#34;philosophy&#34;&gt;Philosophy&lt;/h2&gt;
&lt;p&gt;Mixer is essentially an attribute processing and routing machine. The proxy sends it &lt;a href=&#34;/v1.1/docs/concepts/policies-and-telemetry/#attributes&#34;&gt;attributes&lt;/a&gt; as part of doing precondition checks and telemetry reports, which it turns into a series of calls into adapters. The operator supplies configuration which describes how to map incoming attributes to inputs for the adapters.&lt;/p&gt;
&lt;figure style=&#34;width:60%&#34;&gt;
&lt;div class=&#34;wrapper-with-intrinsic-ratio&#34; style=&#34;padding-bottom:42.60859894197215%&#34;&gt;
&lt;a data-skipendnotes=&#34;true&#34; href=&#34;/v1.1/docs/concepts/policies-and-telemetry/machine.svg&#34; title=&#34;Attribute Machine&#34;&gt;
&lt;img class=&#34;element-to-stretch&#34; src=&#34;/v1.1/docs/concepts/policies-and-telemetry/machine.svg&#34; alt=&#34;Attribute Machine&#34; /&gt;
&lt;/a&gt;
&lt;/div&gt;
&lt;figcaption&gt;Attribute Machine&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p&gt;Configuration is a complex task. In fact, evidence shows that the overwhelming majority of service outages are caused by configuration errors. To help combat this, Mixer&amp;rsquo;s configuration model enforces a number of constraints designed to avoid errors. For example, the configuration model uses strong typing to ensure that only meaningful attributes or attribute expressions are used in any given context.&lt;/p&gt;
&lt;h2 id=&#34;handlers-configuring-adapters&#34;&gt;Handlers: configuring adapters&lt;/h2&gt;
&lt;p&gt;Each adapter that Mixer uses requires some configuration to operate. Typically, adapters need things like the URL to their backend, credentials, caching options, and so forth. Each adapter defines the exact configuration data it needs via a &lt;a href=&#34;https://developers.google.com/protocol-buffers/&#34;&gt;protobuf&lt;/a&gt; message.&lt;/p&gt;
&lt;p&gt;You configure each adapter by creating &lt;a href=&#34;/v1.1/docs/concepts/policies-and-telemetry/#handlers&#34;&gt;&lt;em&gt;handlers&lt;/em&gt;&lt;/a&gt; for them. A handler is a
configuration resource which represents a fully configured adapter ready for use. There can be any number of handlers for a single adapter, making it possible to reuse an adapter in different scenarios.&lt;/p&gt;
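&lt;p&gt;As a sketch, a handler for the Prometheus adapter might be configured like this (the metric name and labels below are illustrative, not required by the adapter):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-yaml&#39; data-expandlinks=&#39;true&#39; &gt;apiVersion: config.istio.io/v1alpha2
kind: prometheus
metadata:
  name: handler
  namespace: istio-system
spec:
  metrics:
  - name: request_count
    instance_name: requestcount.metric.istio-system
    kind: COUNTER
    label_names:
    - source
    - destination
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;A second handler for the same adapter, with different metrics or caching options, could coexist alongside this one.&lt;/p&gt;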
&lt;h2 id=&#34;templates-adapter-input-schema&#34;&gt;Templates: adapter input schema&lt;/h2&gt;
&lt;p&gt;Mixer is typically invoked twice for every incoming request to a mesh service, once for precondition checks and once for telemetry reporting. For every such call, Mixer invokes one or more adapters. Different adapters need different pieces of data as input in order to do their work. A logging adapter needs a log entry, a metric adapter needs a metric, an authorization adapter needs credentials, etc.
Mixer &lt;a href=&#34;/v1.1/docs/reference/config/policy-and-telemetry/templates/&#34;&gt;&lt;em&gt;templates&lt;/em&gt;&lt;/a&gt; are used to describe the exact data that an adapter consumes at request time.&lt;/p&gt;
&lt;p&gt;Each template is specified as a &lt;a href=&#34;https://developers.google.com/protocol-buffers/&#34;&gt;protobuf&lt;/a&gt; message. A single template describes a bundle of data that is delivered to one or more adapters at runtime. Any given adapter can be designed to support any number of templates; the specific templates an adapter supports are determined by the adapter developer.&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;/v1.1/docs/reference/config/policy-and-telemetry/templates/metric/&#34;&gt;&lt;code&gt;metric&lt;/code&gt;&lt;/a&gt; and &lt;a href=&#34;/v1.1/docs/reference/config/policy-and-telemetry/templates/logentry/&#34;&gt;&lt;code&gt;logentry&lt;/code&gt;&lt;/a&gt; are two of the most essential templates used within Istio. They represent respectively the payload to report a single metric and a single log entry to appropriate backends.&lt;/p&gt;
&lt;h2 id=&#34;instances-attribute-mapping&#34;&gt;Instances: attribute mapping&lt;/h2&gt;
&lt;p&gt;You control which data is delivered to individual adapters by creating
&lt;a href=&#34;/v1.1/docs/concepts/policies-and-telemetry/#instances&#34;&gt;&lt;em&gt;instances&lt;/em&gt;&lt;/a&gt;.
Instances control how Mixer uses the &lt;a href=&#34;/v1.1/docs/concepts/policies-and-telemetry/#attributes&#34;&gt;attributes&lt;/a&gt; delivered
by the proxy into individual bundles of data that can be routed to different adapters.&lt;/p&gt;
&lt;p&gt;Creating instances generally requires using &lt;a href=&#34;/v1.1/docs/concepts/policies-and-telemetry/#attribute-expressions&#34;&gt;attribute expressions&lt;/a&gt;. The point of these expressions is to use any attribute or literal value in order to produce a result that can be assigned to an instance&amp;rsquo;s field.&lt;/p&gt;
&lt;p&gt;Every instance field has a type, as defined in the template, every attribute has a
&lt;a href=&#34;https://github.com/istio/api/blob/master/policy/v1beta1/value_type.proto&#34;&gt;type&lt;/a&gt;, and every attribute expression has a type.
You can only assign type-compatible expressions to any given instance field. For example, you can&amp;rsquo;t assign an integer expression
to a string field. This kind of strong typing is designed to minimize the risk of creating bogus configurations.&lt;/p&gt;
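&lt;p&gt;For example, an instance of the &lt;code&gt;metric&lt;/code&gt; template could map attributes into dimensions like this, using the &lt;code&gt;|&lt;/code&gt; operator to fall back to a literal when an attribute is absent (the dimension names here are illustrative):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-yaml&#39; data-expandlinks=&#39;true&#39; &gt;apiVersion: config.istio.io/v1alpha2
kind: metric
metadata:
  name: requestcount
  namespace: istio-system
spec:
  value: &#34;1&#34;
  dimensions:
    source: source.service | &#34;unknown&#34;
    destination: destination.service | &#34;unknown&#34;
&lt;/code&gt;&lt;/pre&gt;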
&lt;h2 id=&#34;rules-delivering-data-to-adapters&#34;&gt;Rules: delivering data to adapters&lt;/h2&gt;
&lt;p&gt;The last piece to the puzzle is telling Mixer which instances to send to which handler and when. This is done by
creating &lt;a href=&#34;/v1.1/docs/concepts/policies-and-telemetry/#rules&#34;&gt;&lt;em&gt;rules&lt;/em&gt;&lt;/a&gt;. Each rule identifies a specific handler and the set of
instances to send to that handler. Whenever Mixer processes an incoming call, it invokes the indicated handler and gives it the specific set of instances for processing.&lt;/p&gt;
&lt;p&gt;Rules contain matching predicates. A predicate is an attribute expression which returns a true/false value. A rule only takes effect if its predicate expression returns true. Otherwise, it&amp;rsquo;s like the rule didn&amp;rsquo;t exist and the indicated handler isn&amp;rsquo;t invoked.&lt;/p&gt;
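&lt;p&gt;As a sketch, a rule could direct a metric instance to a Prometheus handler only for traffic destined to a particular service (the handler, instance, and service names here are illustrative):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-yaml&#39; data-expandlinks=&#39;true&#39; &gt;apiVersion: config.istio.io/v1alpha2
kind: rule
metadata:
  name: prom-requestcount
  namespace: istio-system
spec:
  match: destination.service == &#34;reviews.default.svc.cluster.local&#34;
  actions:
  - handler: handler.prometheus
    instances:
    - requestcount.metric
&lt;/code&gt;&lt;/pre&gt;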
&lt;h2 id=&#34;future&#34;&gt;Future&lt;/h2&gt;
&lt;p&gt;We are working to improve the end to end experience of using and developing adapters. For example, several new features are planned to make templates more expressive. Additionally, the expression language is being substantially enhanced to be more powerful and well-rounded.&lt;/p&gt;
&lt;p&gt;Longer term, we are evaluating ways to support adapters which arent directly linked into the main Mixer binary. This would simplify deployment and composition.&lt;/p&gt;
&lt;h2 id=&#34;conclusion&#34;&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;The refreshed Mixer adapter model is designed to provide a flexible framework to support an open-ended set of infrastructure backends.&lt;/p&gt;
&lt;p&gt;Handlers provide configuration data for individual adapters, templates determine exactly what kind of data different adapters want to consume at runtime, instances let operators prepare this data, rules direct the data to one or more handlers.&lt;/p&gt;
&lt;p&gt;You can learn more about Mixer&amp;rsquo;s overall architecture &lt;a href=&#34;/v1.1/docs/concepts/policies-and-telemetry/&#34;&gt;here&lt;/a&gt;, and learn the specifics of templates, handlers,
and rules &lt;a href=&#34;/v1.1/docs/reference/config/policy-and-telemetry&#34;&gt;here&lt;/a&gt;. You can find many examples of Mixer configuration resources in the Bookinfo sample
&lt;a href=&#34;https://github.com/istio/istio/tree/release-1.1/samples/bookinfo&#34;&gt;here&lt;/a&gt;.&lt;/p&gt;</description><pubDate>Fri, 03 Nov 2017 00:00:00 +0000</pubDate><link>/v1.1/blog/2017/adapter-model/</link><author>Martin Taillefer</author><guid isPermaLink="true">/v1.1/blog/2017/adapter-model/</guid><category>adapters</category><category>mixer</category><category>policies</category><category>telemetry</category></item><item><title>Announcing Istio 0.2</title><description>
&lt;p&gt;We launched Istio, an open platform to connect, manage, monitor, and secure microservices, on May 24, 2017. We have been humbled by the incredible interest, and
rapid community growth of developers, operators, and partners. Our 0.1 release was focused on showing all the concepts of Istio in Kubernetes.&lt;/p&gt;
&lt;p&gt;Today we are happy to announce the 0.2 release which improves stability and performance, allows for cluster wide deployment and automated injection of sidecars in Kubernetes, adds policy and authentication for TCP services, and enables expansion of the mesh to include services deployed in virtual machines. In addition, Istio can now run outside Kubernetes, leveraging Consul/Nomad or Eureka. Beyond core features, Istio is now ready for extensions to be written by third party companies and developers.&lt;/p&gt;
&lt;h2 id=&#34;highlights-for-the-0-2-release&#34;&gt;Highlights for the 0.2 release&lt;/h2&gt;
&lt;h3 id=&#34;usability-improvements&#34;&gt;Usability improvements&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;Multiple namespace support&lt;/em&gt;: Istio now works cluster-wide, across multiple namespaces; this was one of the top requests from the community following the 0.1 release.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;Policy and security for TCP services&lt;/em&gt;: In addition to HTTP, we have added transparent mutual TLS authentication and policy enforcement for TCP services as well. This will allow you to secure more of your
Kubernetes deployment, and get Istio features like telemetry, policy and security.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;Automated sidecar injection&lt;/em&gt;: By leveraging the alpha &lt;a href=&#34;https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/&#34;&gt;initializer&lt;/a&gt; feature provided by Kubernetes 1.7, Envoy sidecars can now be automatically injected into application deployments when your cluster has the initializer enabled. This enables you to deploy microservices using &lt;code&gt;kubectl&lt;/code&gt;, the exact same command that you normally use for deploying the microservices without Istio.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;Extending Istio&lt;/em&gt;: An improved Mixer design that lets vendors write Mixer adapters to implement support for their own systems, such as application
management or policy enforcement. The
&lt;a href=&#34;https://github.com/istio/istio/wiki/Mixer-Compiled-In-Adapter-Dev-Guide&#34;&gt;Mixer Adapter Developer&amp;rsquo;s Guide&lt;/a&gt; can help
you easily integrate your solution with Istio.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;Bring your own CA certificates&lt;/em&gt;: Allows users to provide their own key and certificate for Istio CA and persistent CA key/certificate Storage. Enables storing signing key/certificates in persistent storage to facilitate CA restarts.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;Improved routing &amp;amp; metrics&lt;/em&gt;: Support for WebSocket, MongoDB and Redis protocols. You can apply resilience features like circuit breakers on traffic to third party services. In addition to Mixer&amp;rsquo;s metrics, hundreds of metrics from Envoy are now visible inside Prometheus for all traffic entering, leaving, and within the Istio mesh.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id=&#34;cross-environment-support&#34;&gt;Cross environment support&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;Mesh expansion&lt;/em&gt;: Istio mesh can now span services running outside of Kubernetes - like those running in virtual machines while enjoying benefits such as automatic mutual TLS authentication, traffic management, telemetry, and policy enforcement across the mesh.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;Running outside Kubernetes&lt;/em&gt;: We know many customers use other service registry and orchestration solutions like &lt;a href=&#34;/v1.1/docs/setup/consul/quick-start/&#34;&gt;Consul/Nomad&lt;/a&gt; and Eureka. Istio Pilot can now run standalone outside Kubernetes, consuming information from these systems, and manage the Envoy fleet in VMs or containers.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;get-involved-in-shaping-the-future-of-istio&#34;&gt;Get involved in shaping the future of Istio&lt;/h2&gt;
&lt;p&gt;We have a growing &lt;a href=&#34;/v1.1/about/feature-stages/&#34;&gt;roadmap&lt;/a&gt; ahead of us, full of great features to implement. Our focus next release is going to be on stability, reliability, integration with third party tools and multicluster use cases.&lt;/p&gt;
&lt;p&gt;To learn how to get involved and contribute to Istio&amp;rsquo;s future, check out our &lt;a href=&#34;https://github.com/istio/community&#34;&gt;community&lt;/a&gt; GitHub repository which
will introduce you to our working groups, our mailing lists, our various community meetings, our general procedures and our guidelines.&lt;/p&gt;
&lt;p&gt;We want to thank our fantastic community for field testing new versions, filing bug reports, contributing code, helping out other community members, and shaping Istio by participating in countless productive discussions. This has enabled the project to accrue 3000 stars on GitHub since launch and hundreds of active community members on Istio mailing lists.&lt;/p&gt;
&lt;p&gt;Thank you&lt;/p&gt;</description><pubDate>Tue, 10 Oct 2017 00:00:00 +0000</pubDate><link>/v1.1/blog/2017/0.2-announcement/</link><author>The Istio Team</author><guid isPermaLink="true">/v1.1/blog/2017/0.2-announcement/</guid></item><item><title>Using Network Policy with Istio</title><description>
&lt;p&gt;The use of Network Policy to secure applications running on Kubernetes is a now a widely accepted industry best practice. Given that Istio also supports policy, we want to spend some time explaining how Istio policy and Kubernetes Network Policy interact and support each other to deliver your application securely.&lt;/p&gt;
&lt;p&gt;Let&amp;rsquo;s start with the basics: why might you want to use both Istio and Kubernetes Network Policy? The short answer is that they are good at different things. Consider the main differences between Istio and Network Policy (we will describe &amp;ldquo;typical&amp;rdquo; implementations, e.g. Calico, but implementation details can vary with different network providers):&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;Istio Policy&lt;/th&gt;
&lt;th&gt;Network Policy&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Layer&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&amp;ldquo;Service&amp;rdquo; &amp;mdash; L7&lt;/td&gt;
&lt;td&gt;&amp;ldquo;Network&amp;rdquo; &amp;mdash; L3-4&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Implementation&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;User space&lt;/td&gt;
&lt;td&gt;Kernel&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Enforcement Point&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Pod&lt;/td&gt;
&lt;td&gt;Node&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h2 id=&#34;layer&#34;&gt;Layer&lt;/h2&gt;
&lt;p&gt;Istio policy operates at the &amp;ldquo;service&amp;rdquo; layer of your network application. This is Layer 7 (Application) from the perspective of the OSI model, but the de facto model of cloud native applications is that Layer 7 actually consists of at least two layers: a service layer and a content layer. The service layer is typically HTTP, which encapsulates the actual application data (the content layer). It is at this service layer of HTTP that Istio&amp;rsquo;s Envoy proxy operates. In contrast, Network Policy operates at Layers 3 (Network) and 4 (Transport) in the OSI model.&lt;/p&gt;
&lt;p&gt;Operating at the service layer gives the Envoy proxy a rich set of attributes to base policy decisions on, for protocols it understands, which at present includes HTTP/1.1 &amp;amp; HTTP/2 (gRPC operates over HTTP/2). So, you can apply policy based on virtual host, URL, or other HTTP headers. In the future, Istio will support a wide range of Layer 7 protocols, as well as generic TCP and UDP transport.&lt;/p&gt;
&lt;p&gt;In contrast, operating at the network layer has the advantage of being universal, since all network applications use IP. At the network layer you can apply policy regardless of the layer 7 protocol: DNS, SQL databases, real-time streaming, and a plethora of other services that do not use HTTP can be secured. Network Policy isn&amp;rsquo;t limited to a classic firewall&amp;rsquo;s tuple of IP addresses, proto, and ports. Both Istio and Network Policy are aware of rich Kubernetes labels to describe pod endpoints.&lt;/p&gt;
&lt;h2 id=&#34;implementation&#34;&gt;Implementation&lt;/h2&gt;
&lt;p&gt;Istio&amp;rsquo;s proxy is based on &lt;a href=&#34;https://envoyproxy.github.io/envoy/&#34;&gt;Envoy&lt;/a&gt;, which is implemented as a user space daemon in the data plane that
interacts with the network layer using standard sockets. This gives it a large amount of flexibility in processing, and allows it to be
distributed (and upgraded!) in a container.&lt;/p&gt;
&lt;p&gt;The Network Policy data plane is typically implemented in kernel space (e.g. using iptables, eBPF filters, or even custom kernel modules). Running in kernel space
makes it extremely fast, but not as flexible as the Envoy proxy.&lt;/p&gt;
&lt;h2 id=&#34;enforcement-point&#34;&gt;Enforcement Point&lt;/h2&gt;
&lt;p&gt;Policy enforcement using the Envoy proxy is implemented inside the pod, as a sidecar container in the same network namespace. This allows a simple deployment model. Some containers are given permission to reconfigure the networking inside their pod (CAP_NET_ADMIN). If such a service instance is compromised, or misbehaves (as in a malicious tenant) the proxy can be bypassed.&lt;/p&gt;
&lt;p&gt;While this won&amp;rsquo;t let an attacker access other Istio-enabled pods, so long as they are correctly configured, it opens several attack vectors:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Attacking unprotected pods&lt;/li&gt;
&lt;li&gt;Attempting to deny service to protected pods by sending lots of traffic&lt;/li&gt;
&lt;li&gt;Exfiltrating data collected in the pod&lt;/li&gt;
&lt;li&gt;Attacking the cluster infrastructure (servers or Kubernetes services)&lt;/li&gt;
&lt;li&gt;Attacking services outside the mesh, like databases, storage arrays, or legacy systems.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Network Policy is typically enforced at the host node, outside the network namespace of the guest pods. This means that compromised or misbehaving pods must break into the root namespace to avoid enforcement. With the addition of egress policy due in Kubernetes 1.8, this difference makes Network Policy a key part of protecting your infrastructure from compromised workloads.&lt;/p&gt;
&lt;h2 id=&#34;examples&#34;&gt;Examples&lt;/h2&gt;
&lt;p&gt;Let&amp;rsquo;s walk through a few examples of what you might want to do with Kubernetes Network Policy for an Istio-enabled application. Consider the Bookinfo sample application. We&amp;rsquo;re going to cover the following use cases for Network Policy:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Reduce attack surface of the application ingress&lt;/li&gt;
&lt;li&gt;Enforce fine-grained isolation within the application&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id=&#34;reduce-attack-surface-of-the-application-ingress&#34;&gt;Reduce attack surface of the application ingress&lt;/h3&gt;
&lt;p&gt;Our application ingress controller is the main entry-point to our application from the outside world. A quick peek at &lt;code&gt;istio.yaml&lt;/code&gt; (used to install Istio) defines the Istio ingress like this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-yaml&#39; data-expandlinks=&#39;true&#39; &gt;apiVersion: v1
kind: Service
metadata:
  name: istio-ingress
  labels:
    istio: ingress
spec:
  type: LoadBalancer
  ports:
  - port: 80
    name: http
  - port: 443
    name: https
  selector:
    istio: ingress
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The &lt;code&gt;istio-ingress&lt;/code&gt; exposes ports 80 and 443. Let&amp;rsquo;s limit incoming traffic to just these two ports. Envoy has a &lt;a href=&#34;https://www.envoyproxy.io/docs/envoy/latest/operations/admin.html#operations-admin-interface&#34;&gt;built-in administrative interface&lt;/a&gt;, and we don&amp;rsquo;t want a misconfigured &lt;code&gt;istio-ingress&lt;/code&gt; image to accidentally expose our admin interface to the outside world. This is an example of defense in depth: a properly configured image should not expose the interface, and a properly configured Network Policy will prevent anyone from connecting to it. Either can fail or be misconfigured and we are still protected.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-yaml&#39; data-expandlinks=&#39;true&#39; &gt;apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: istio-ingress-lockdown
  namespace: default
spec:
  podSelector:
    matchLabels:
      istio: ingress
  ingress:
  - ports:
    - protocol: TCP
      port: 80
    - protocol: TCP
      port: 443
&lt;/code&gt;&lt;/pre&gt;
&lt;h3 id=&#34;enforce-fine-grained-isolation-within-the-application&#34;&gt;Enforce fine-grained isolation within the application&lt;/h3&gt;
&lt;p&gt;Here is the service graph for the Bookinfo application.&lt;/p&gt;
&lt;figure style=&#34;width:80%&#34;&gt;
&lt;div class=&#34;wrapper-with-intrinsic-ratio&#34; style=&#34;padding-bottom:59.086918235567985%&#34;&gt;
&lt;a data-skipendnotes=&#34;true&#34; href=&#34;/v1.1/docs/examples/bookinfo/withistio.svg&#34; title=&#34;Bookinfo Service Graph&#34;&gt;
&lt;img class=&#34;element-to-stretch&#34; src=&#34;/v1.1/docs/examples/bookinfo/withistio.svg&#34; alt=&#34;Bookinfo Service Graph&#34; /&gt;
&lt;/a&gt;
&lt;/div&gt;
&lt;figcaption&gt;Bookinfo Service Graph&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p&gt;This graph shows every connection that a correctly functioning application should be allowed to make. All other connections, say from the Istio Ingress directly to the Ratings service, are not part of the application. Let&amp;rsquo;s lock out those extraneous connections so they cannot be used by an attacker. Imagine, for example, that the Ingress pod is compromised by an exploit that allows an attacker to run arbitrary code. If we only allow connections to the Product Page pods using Network Policy, the attacker has gained no more access to our application backends &lt;em&gt;even though they have compromised a member of the service mesh&lt;/em&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-yaml&#39; data-expandlinks=&#39;true&#39; &gt;apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: product-page-ingress
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: productpage
  ingress:
  - ports:
    - protocol: TCP
      port: 9080
    from:
    - podSelector:
        matchLabels:
          istio: ingress
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You can and should write a similar policy for each service to enforce which other pods are allowed to access it.&lt;/p&gt;
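&lt;p&gt;As a sketch of what that looks like for one more service: the following policy would admit traffic to the &lt;code&gt;reviews&lt;/code&gt; pods only from the &lt;code&gt;productpage&lt;/code&gt; pods, matching the service graph above (the &lt;code&gt;app: reviews&lt;/code&gt; and &lt;code&gt;app: productpage&lt;/code&gt; labels are assumed to match those used by your Bookinfo deployment):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-yaml&#39; data-expandlinks=&#39;true&#39; &gt;apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: reviews-ingress
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: reviews
  ingress:
  - ports:
    - protocol: TCP
      port: 9080
    from:
    - podSelector:
        matchLabels:
          app: productpage
&lt;/code&gt;&lt;/pre&gt;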
&lt;h2 id=&#34;summary&#34;&gt;Summary&lt;/h2&gt;
&lt;p&gt;Our take is that Istio and Network Policy have different strengths in applying policy. Istio is application-protocol aware and highly flexible, making it ideal for applying policy in support of operational goals, like service routing, retries, circuit-breaking, etc., and for security that operates at the application layer, such as token validation. Network Policy is universal, highly efficient, and isolated from the pods, making it ideal for applying policy in support of network security goals. Furthermore, having policy that operates at different layers of the network stack is valuable: it gives each layer specific context without commingling state and allows a clean separation of responsibility.&lt;/p&gt;
&lt;p&gt;This post is based on the three part blog series by Spike Curtis, one of the Istio team members at Tigera. The full series can be found here: &lt;a href=&#34;https://www.projectcalico.org/using-network-policy-in-concert-with-istio/&#34;&gt;https://www.projectcalico.org/using-network-policy-in-concert-with-istio/&lt;/a&gt;&lt;/p&gt;</description><pubDate>Thu, 10 Aug 2017 00:00:00 +0000</pubDate><link>/v1.1/blog/2017/0.1-using-network-policy/</link><author>Spike Curtis</author><guid isPermaLink="true">/v1.1/blog/2017/0.1-using-network-policy/</guid></item><item><title>Canary Deployments using Istio</title><description>
&lt;div&gt;
&lt;aside class=&#34;callout tip&#34;&gt;
&lt;div class=&#34;type&#34;&gt;&lt;svg class=&#34;large-icon&#34;&gt;&lt;use xlink:href=&#34;/v1.1/img/icons.svg#callout-tip&#34;/&gt;&lt;/svg&gt;&lt;/div&gt;
&lt;div class=&#34;content&#34;&gt;This post was updated on May 16, 2018 to use the latest version of the traffic management model.&lt;/div&gt;
&lt;/aside&gt;
&lt;/div&gt;
&lt;p&gt;One of the benefits of the &lt;a href=&#34;/v1.1/&#34;&gt;Istio&lt;/a&gt; project is that it provides the control needed to deploy canary services. The idea behind
canary deployment (or rollout) is to introduce a new version of a service by first testing it using a small percentage of user
traffic, and then if all goes well, increase, possibly gradually in increments, the percentage while simultaneously phasing out
the old version. If anything goes wrong along the way, we abort and rollback to the previous version. In its simplest form,
the traffic sent to the canary version is a randomly selected percentage of requests, but in more sophisticated schemes it
can be based on the region, user, or other properties of the request.&lt;/p&gt;
&lt;p&gt;Depending on your level of expertise in this area, you may wonder why Istio&amp;rsquo;s support for canary deployment is even needed, given that platforms like Kubernetes already provide a way to do &lt;a href=&#34;https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#updating-a-deployment&#34;&gt;version rollout&lt;/a&gt; and &lt;a href=&#34;https://kubernetes.io/docs/concepts/cluster-administration/manage-deployment/#canary-deployments&#34;&gt;canary deployment&lt;/a&gt;. Problem solved, right? Well, not exactly. Although doing a rollout this way works in simple cases, it&amp;rsquo;s very limited, especially in large scale cloud environments receiving lots of (and especially varying amounts of) traffic, where autoscaling is needed.&lt;/p&gt;
&lt;h2 id=&#34;canary-deployment-in-kubernetes&#34;&gt;Canary deployment in Kubernetes&lt;/h2&gt;
&lt;p&gt;As an example, let&amp;rsquo;s say we have a deployed service, &lt;strong&gt;helloworld&lt;/strong&gt; version &lt;strong&gt;v1&lt;/strong&gt;, for which we would like to test (or simply roll out) a new version, &lt;strong&gt;v2&lt;/strong&gt;. Using Kubernetes, you can roll out a new version of the &lt;strong&gt;helloworld&lt;/strong&gt; service by simply updating the image in the service&amp;rsquo;s corresponding &lt;a href=&#34;https://kubernetes.io/docs/concepts/workloads/controllers/deployment/&#34;&gt;Deployment&lt;/a&gt; and letting the &lt;a href=&#34;https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#updating-a-deployment&#34;&gt;rollout&lt;/a&gt; happen automatically. If we take particular care to ensure that there are enough &lt;strong&gt;v1&lt;/strong&gt; replicas running when we start and &lt;a href=&#34;https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#pausing-and-resuming-a-deployment&#34;&gt;pause&lt;/a&gt; the rollout after only one or two &lt;strong&gt;v2&lt;/strong&gt; replicas have been started, we can keep the canary&amp;rsquo;s effect on the system very small. We can then observe the effect before deciding to proceed or, if necessary, &lt;a href=&#34;https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#rolling-back-a-deployment&#34;&gt;rollback&lt;/a&gt;. Best of all, we can even attach a &lt;a href=&#34;https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#scaling-a-deployment&#34;&gt;horizontal pod autoscaler&lt;/a&gt; to the Deployment and it will keep the replica ratios consistent if, during the rollout process, it also needs to scale replicas up or down to handle traffic load.&lt;/p&gt;
&lt;p&gt;Although fine for what it does, this approach is only useful when we have a properly tested version that we want to deploy, i.e., more of a blue/green, a.k.a. red/black, kind of upgrade than a &amp;ldquo;dip your feet in the water&amp;rdquo; kind of canary deployment. In fact, for the latter (for example, testing a canary version that may not even be ready or intended for wider exposure), the canary deployment in Kubernetes would be done using two Deployments with &lt;a href=&#34;https://kubernetes.io/docs/concepts/cluster-administration/manage-deployment/#using-labels-effectively&#34;&gt;common pod labels&lt;/a&gt;. In this case, we can&amp;rsquo;t use autoscaling anymore because it&amp;rsquo;s now being done by two independent autoscalers, one for each Deployment, so the replica ratios (percentages) may vary from the desired ratio, depending purely on load.&lt;/p&gt;
&lt;p&gt;Whether we use one deployment or two, canary management using deployment features of container orchestration platforms like Docker, Mesos/Marathon, or Kubernetes has a fundamental problem: the use of instance scaling to manage the traffic; traffic version distribution and replica deployment are not independent in these systems. All replica pods, regardless of version, are treated the same in the &lt;code&gt;kube-proxy&lt;/code&gt; round-robin pool, so the only way to manage the amount of traffic that a particular version receives is by controlling the replica ratio. Maintaining canary traffic at small percentages requires many replicas (e.g., 1% would require a minimum of 100 replicas). Even if we ignore this problem, the deployment approach is still very limited in that it only supports the simple (random percentage) canary approach. If, instead, we wanted to limit the visibility of the canary to requests based on some specific criteria, we still need another solution.&lt;/p&gt;
&lt;h2 id=&#34;enter-istio&#34;&gt;Enter Istio&lt;/h2&gt;
&lt;p&gt;With Istio, traffic routing and replica deployment are two completely independent functions. The number of pods implementing services are free to scale up and down based on traffic load, completely orthogonal to the control of version traffic routing. This makes managing a canary version in the presence of autoscaling a much simpler problem. Autoscalers may, in fact, respond to load variations resulting from traffic routing changes, but they are nevertheless functioning independently and no differently than when loads change for other reasons.&lt;/p&gt;
&lt;p&gt;Istio&amp;rsquo;s &lt;a href=&#34;/v1.1/docs/concepts/traffic-management/#rule-configuration&#34;&gt;routing rules&lt;/a&gt; also provide other important advantages; you can easily control
fine-grained traffic percentages (e.g., route 1% of traffic without requiring 100 pods) and you can control traffic using other criteria (e.g., route traffic for specific users to the canary version). To illustrate, let&amp;rsquo;s look at deploying the &lt;strong&gt;helloworld&lt;/strong&gt; service and see how simple the problem becomes.&lt;/p&gt;
&lt;p&gt;We begin by defining the &lt;strong&gt;helloworld&lt;/strong&gt; Service, just like any other Kubernetes service, something like this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-yaml&#39; data-expandlinks=&#39;true&#39; &gt;apiVersion: v1
kind: Service
metadata:
  name: helloworld
  labels:
    app: helloworld
spec:
  selector:
    app: helloworld
  ...
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;We then add 2 Deployments, one for each version (&lt;strong&gt;v1&lt;/strong&gt; and &lt;strong&gt;v2&lt;/strong&gt;), both of which include the service selector&amp;rsquo;s &lt;code&gt;app: helloworld&lt;/code&gt; label:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-yaml&#39; data-expandlinks=&#39;true&#39; &gt;apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: helloworld-v1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: helloworld
        version: v1
    spec:
      containers:
      - image: helloworld-v1
        ...
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: helloworld-v2
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: helloworld
        version: v2
    spec:
      containers:
      - image: helloworld-v2
        ...
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Note that this is exactly the same way we would do a &lt;a href=&#34;https://kubernetes.io/docs/concepts/cluster-administration/manage-deployment/#canary-deployments&#34;&gt;canary deployment&lt;/a&gt; using plain Kubernetes, but in that case we would need to adjust the number of replicas of each Deployment to control the distribution of traffic. For example, to send 10% of the traffic to the canary version (&lt;strong&gt;v2&lt;/strong&gt;), the replicas for &lt;strong&gt;v1&lt;/strong&gt; and &lt;strong&gt;v2&lt;/strong&gt; could be set to 9 and 1, respectively.&lt;/p&gt;
&lt;p&gt;However, since we are going to deploy the service in an &lt;a href=&#34;/v1.1/docs/setup/&#34;&gt;Istio enabled&lt;/a&gt; cluster, all we need to do is set a routing
rule to control the traffic distribution. For example if we want to send 10% of the traffic to the canary, we could use &lt;code&gt;kubectl&lt;/code&gt;
to set a routing rule something like this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ kubectl apply -f - &amp;lt;&amp;lt;EOF
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: helloworld
spec:
  hosts:
  - helloworld
  http:
  - route:
    - destination:
        host: helloworld
        subset: v1
      weight: 90
    - destination:
        host: helloworld
        subset: v2
      weight: 10
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: helloworld
spec:
  host: helloworld
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
EOF
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;After setting this rule, Istio will ensure that only one tenth of the requests will be sent to the canary version, regardless of how many replicas of each version are running.&lt;/p&gt;
&lt;h2 id=&#34;autoscaling-the-deployments&#34;&gt;Autoscaling the deployments&lt;/h2&gt;
&lt;p&gt;Because we don&amp;rsquo;t need to maintain replica ratios anymore, we can safely add Kubernetes &lt;a href=&#34;https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/&#34;&gt;horizontal pod autoscalers&lt;/a&gt; to manage the replicas for both version Deployments:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ kubectl autoscale deployment helloworld-v1 --cpu-percent=50 --min=1 --max=10
deployment &amp;#34;helloworld-v1&amp;#34; autoscaled
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ kubectl autoscale deployment helloworld-v2 --cpu-percent=50 --min=1 --max=10
deployment &amp;#34;helloworld-v2&amp;#34; autoscaled
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ kubectl get hpa
NAME REFERENCE TARGET CURRENT MINPODS MAXPODS AGE
Helloworld-v1 Deployment/helloworld-v1 50% 47% 1 10 17s
Helloworld-v2 Deployment/helloworld-v2 50% 40% 1 10 15s
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;If we now generate some load on the &lt;strong&gt;helloworld&lt;/strong&gt; service, we would notice that when scaling begins, the &lt;strong&gt;v1&lt;/strong&gt; autoscaler will scale up its replicas significantly higher than the &lt;strong&gt;v2&lt;/strong&gt; autoscaler will for its replicas because &lt;strong&gt;v1&lt;/strong&gt; pods are handling 90% of the load.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ kubectl get pods | grep helloworld
helloworld-v1-3523621687-3q5wh 0/2 Pending 0 15m
helloworld-v1-3523621687-73642 2/2 Running 0 11m
helloworld-v1-3523621687-7hs31 2/2 Running 0 19m
helloworld-v1-3523621687-dt7n7 2/2 Running 0 50m
helloworld-v1-3523621687-gdhq9 2/2 Running 0 11m
helloworld-v1-3523621687-jxs4t 0/2 Pending 0 15m
helloworld-v1-3523621687-l8rjn 2/2 Running 0 19m
helloworld-v1-3523621687-wwddw 2/2 Running 0 15m
helloworld-v1-3523621687-xlt26 0/2 Pending 0 19m
helloworld-v2-4095161145-963wt 2/2 Running 0 50m
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;If we then change the routing rule to send 50% of the traffic to &lt;strong&gt;v2&lt;/strong&gt;, we should, after a short delay, notice that the &lt;strong&gt;v1&lt;/strong&gt; autoscaler will scale down the replicas of &lt;strong&gt;v1&lt;/strong&gt; while the &lt;strong&gt;v2&lt;/strong&gt; autoscaler will perform a corresponding scale up.&lt;/p&gt;
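&lt;p&gt;The rule change itself is small; a minimal sketch of the updated &lt;code&gt;VirtualService&lt;/code&gt;, identical to the earlier one except for the two &lt;code&gt;weight&lt;/code&gt; values (the &lt;code&gt;DestinationRule&lt;/code&gt; does not change):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-yaml&#39; data-expandlinks=&#39;true&#39; &gt;apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: helloworld
spec:
  hosts:
  - helloworld
  http:
  - route:
    - destination:
        host: helloworld
        subset: v1
      weight: 50
    - destination:
        host: helloworld
        subset: v2
      weight: 50
&lt;/code&gt;&lt;/pre&gt;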
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ kubectl get pods | grep helloworld
helloworld-v1-3523621687-73642 2/2 Running 0 35m
helloworld-v1-3523621687-7hs31 2/2 Running 0 43m
helloworld-v1-3523621687-dt7n7 2/2 Running 0 1h
helloworld-v1-3523621687-gdhq9 2/2 Running 0 35m
helloworld-v1-3523621687-l8rjn 2/2 Running 0 43m
helloworld-v2-4095161145-57537 0/2 Pending 0 21m
helloworld-v2-4095161145-9322m 2/2 Running 0 21m
helloworld-v2-4095161145-963wt 2/2 Running 0 1h
helloworld-v2-4095161145-c3dpj 0/2 Pending 0 21m
helloworld-v2-4095161145-t2ccm 0/2 Pending 0 17m
helloworld-v2-4095161145-v3v9n 0/2 Pending 0 13m
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The end result is very similar to the simple Kubernetes Deployment rollout, only now the whole process is not being orchestrated and managed in one place. Instead, we&amp;rsquo;re seeing several components doing their jobs independently, albeit in a cause and effect manner.
What&amp;rsquo;s different, however, is that if we now stop generating load, the replicas of both versions will eventually scale down to their minimum (1), regardless of what routing rule we set.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ kubectl get pods | grep helloworld
helloworld-v1-3523621687-dt7n7 2/2 Running 0 1h
helloworld-v2-4095161145-963wt 2/2 Running 0 1h
&lt;/code&gt;&lt;/pre&gt;
&lt;h2 id=&#34;focused-canary-testing&#34;&gt;Focused canary testing&lt;/h2&gt;
&lt;p&gt;As mentioned above, the Istio routing rules can be used to route traffic based on specific criteria, allowing more sophisticated canary deployment scenarios. Say, for example, instead of exposing the canary to an arbitrary percentage of users, we want to try it out on internal users, maybe even just a percentage of them. The following command could be used to send 50% of traffic from users at &lt;em&gt;some-company-name.com&lt;/em&gt; to the canary version, leaving all other users unaffected:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#39;language-bash&#39; data-expandlinks=&#39;true&#39; &gt;$ kubectl apply -f - &amp;lt;&amp;lt;EOF
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: helloworld
spec:
  hosts:
  - helloworld
  http:
  - match:
    - headers:
        cookie:
          regex: &amp;#34;^(.*?;)?(email=[^;]*@some-company-name.com)(;.*)?$&amp;#34;
    route:
    - destination:
        host: helloworld
        subset: v1
      weight: 50
    - destination:
        host: helloworld
        subset: v2
      weight: 50
  - route:
    - destination:
        host: helloworld
        subset: v1
EOF
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;As before, the autoscalers bound to the two version Deployments will automatically scale the replicas accordingly, but that will have no effect on the traffic distribution.&lt;/p&gt;
&lt;h2 id=&#34;summary&#34;&gt;Summary&lt;/h2&gt;
&lt;p&gt;In this article we&amp;rsquo;ve seen how Istio supports general scalable canary deployments, and how this differs from the basic deployment support in Kubernetes. Istio&amp;rsquo;s service mesh provides the control necessary to manage traffic distribution with complete independence from deployment scaling. This allows for a simpler, yet significantly more functional, way to do canary testing and rollout.&lt;/p&gt;
&lt;p&gt;Intelligent routing in support of canary deployment is just one of the many features of Istio that will make the production deployment of large-scale microservices-based applications much simpler. Check out &lt;a href=&#34;/v1.1/&#34;&gt;istio.io&lt;/a&gt; for more information and to try it out.
The sample code used in this article can be found &lt;a href=&#34;https://github.com/istio/istio/tree/release-1.1/samples/helloworld&#34;&gt;here&lt;/a&gt;.&lt;/p&gt;</description><pubDate>Wed, 14 Jun 2017 00:00:00 +0000</pubDate><link>/v1.1/blog/2017/0.1-canary/</link><author>Frank Budinsky</author><guid isPermaLink="true">/v1.1/blog/2017/0.1-canary/</guid><category>traffic-management</category><category>canary</category></item><item><title>Using Istio to Improve End-to-End Security</title><description>
&lt;p&gt;Conventional network security approaches fail to address security threats to distributed applications deployed in dynamic production environments. Today, we describe how Istio Auth enables enterprises to transform their security posture from just protecting the edge to consistently securing all inter-service communications deep within their applications. With Istio Auth, developers and operators can protect services with sensitive data against unauthorized insider access and they can achieve this without any changes to the application code!&lt;/p&gt;
&lt;p&gt;Istio Auth is the security component of the broader &lt;a href=&#34;/v1.1/&#34;&gt;Istio platform&lt;/a&gt;. It incorporates the learnings of securing millions of microservice
endpoints in Google&amp;rsquo;s production environment.&lt;/p&gt;
&lt;h2 id=&#34;background&#34;&gt;Background&lt;/h2&gt;
&lt;p&gt;Modern application architectures are increasingly based on shared services that are deployed and scaled dynamically on cloud platforms. Traditional network edge security (e.g. firewall) is too coarse-grained and allows access from unintended clients. An example of a security risk is stolen authentication tokens that can be replayed from another client. This is a major risk for companies with sensitive data that are concerned about insider threats. Other network security approaches like IP whitelists have to be statically defined, are hard to manage at scale, and are unsuitable for dynamic production environments.&lt;/p&gt;
&lt;p&gt;Thus, security administrators need a tool that enables them to consistently, and by default, secure all communication between services across diverse production environments.&lt;/p&gt;
&lt;h2 id=&#34;solution-strong-service-identity-and-authentication&#34;&gt;Solution: strong service identity and authentication&lt;/h2&gt;
&lt;p&gt;Google has, over the years, developed architecture and technology to uniformly secure millions of microservice endpoints in its production environment against external attacks and insider threats. Key security principles include trusting the endpoints and not the network, strong mutual authentication based on service identity, and service level authorization. Istio Auth is based on the same principles.&lt;/p&gt;
&lt;p&gt;The version 0.1 release of Istio Auth runs on Kubernetes and provides the following features:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Strong identity assertion between services&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Access control to limit the identities that can access a service (and its data)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Automatic encryption of data in transit&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Management of keys and certificates at scale&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Istio Auth is based on industry standards like mutual TLS and X.509. Furthermore, Google is actively contributing to an open, community-driven service security framework called &lt;a href=&#34;https://spiffe.io/&#34;&gt;SPIFFE&lt;/a&gt;. As the &lt;a href=&#34;https://spiffe.io/&#34;&gt;SPIFFE&lt;/a&gt; specifications mature, we intend for Istio Auth to become a reference implementation of the same.&lt;/p&gt;
&lt;p&gt;The diagram below provides an overview of the Istio Auth service authentication architecture on Kubernetes.&lt;/p&gt;
&lt;figure style=&#34;width:100%&#34;&gt;
&lt;div class=&#34;wrapper-with-intrinsic-ratio&#34; style=&#34;padding-bottom:56.25%&#34;&gt;
&lt;a data-skipendnotes=&#34;true&#34; href=&#34;/v1.1/blog/2017/0.1-auth/./istio_auth_overview.svg&#34; title=&#34;Istio Auth Overview&#34;&gt;
&lt;img class=&#34;element-to-stretch&#34; src=&#34;/v1.1/blog/2017/0.1-auth/./istio_auth_overview.svg&#34; alt=&#34;Istio Auth Overview&#34; /&gt;
&lt;/a&gt;
&lt;/div&gt;
&lt;figcaption&gt;Istio Auth Overview&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p&gt;The above diagram illustrates three key security features:&lt;/p&gt;
&lt;h3 id=&#34;strong-identity&#34;&gt;Strong identity&lt;/h3&gt;
&lt;p&gt;Istio Auth uses &lt;a href=&#34;https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/&#34;&gt;Kubernetes service accounts&lt;/a&gt; to identify who the service runs as. The identity is used to establish trust and define service level access policies. The identity is assigned at service deployment time and encoded in the SAN (Subject Alternative Name) field of an X.509 certificate. Using a service account as the identity has the following advantages:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Administrators can configure who has access to a Service Account by using the &lt;a href=&#34;https://kubernetes.io/docs/reference/access-authn-authz/rbac/&#34;&gt;RBAC&lt;/a&gt; feature introduced in Kubernetes 1.6&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Flexibility to identify a human user, a service, or a group of services&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Stability of the service identity for dynamically placed and auto-scaled workloads&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
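&lt;p&gt;For illustration, an identity encoded in the certificate&amp;rsquo;s SAN field takes a SPIFFE-style URI form along these lines (the namespace and service account names here are hypothetical):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;spiffe://cluster.local/ns/default/sa/bookinfo-productpage
&lt;/code&gt;&lt;/pre&gt;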
&lt;h3 id=&#34;communication-security&#34;&gt;Communication security&lt;/h3&gt;
&lt;p&gt;Service-to-service communication is tunneled through high-performance client-side and server-side &lt;a href=&#34;https://envoyproxy.github.io/envoy/&#34;&gt;Envoy&lt;/a&gt; proxies. The communication between the proxies is secured using mutual TLS. The benefit of using mutual TLS is that the service identity is not expressed as a bearer token that can be stolen or replayed from another source. Istio Auth also introduces the concept of Secure Naming to protect from server spoofing attacks: the client-side proxy verifies that the authenticated server&amp;rsquo;s service account is allowed to run the named service.&lt;/p&gt;
&lt;h3 id=&#34;key-management-and-distribution&#34;&gt;Key management and distribution&lt;/h3&gt;
&lt;p&gt;Istio Auth provides a per-cluster CA (Certificate Authority) and automated key &amp;amp; certificate management. In this context, Istio Auth:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Generates a key and certificate pair for each service account.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Distributes keys and certificates to the appropriate pods using &lt;a href=&#34;https://kubernetes.io/docs/concepts/configuration/secret/&#34;&gt;Kubernetes Secrets&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Rotates keys and certificates periodically.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Revokes a specific key and certificate pair when necessary (future).&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The following diagram explains the end to end Istio Auth authentication workflow on Kubernetes:&lt;/p&gt;
&lt;figure style=&#34;width:100%&#34;&gt;
&lt;div class=&#34;wrapper-with-intrinsic-ratio&#34; style=&#34;padding-bottom:56.25%&#34;&gt;
&lt;a data-skipendnotes=&#34;true&#34; href=&#34;/v1.1/blog/2017/0.1-auth/./istio_auth_workflow.svg&#34; title=&#34;Istio Auth Workflow&#34;&gt;
&lt;img class=&#34;element-to-stretch&#34; src=&#34;/v1.1/blog/2017/0.1-auth/./istio_auth_workflow.svg&#34; alt=&#34;Istio Auth Workflow&#34; /&gt;
&lt;/a&gt;
&lt;/div&gt;
&lt;figcaption&gt;Istio Auth Workflow&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p&gt;Istio Auth is part of the broader security story for containers. Red Hat, a partner on the development of Kubernetes, has identified &lt;a href=&#34;https://www.redhat.com/en/resources/container-security-openshift-cloud-devops-whitepaper&#34;&gt;10 Layers&lt;/a&gt; of container security. Istio and Istio Auth address two of these layers: &amp;ldquo;Network Isolation&amp;rdquo; and &amp;ldquo;API and Service Endpoint Management&amp;rdquo;. As cluster federation evolves on Kubernetes and other platforms, our intent is for Istio to secure communications across services spanning multiple federated clusters.&lt;/p&gt;
&lt;h2 id=&#34;benefits-of-istio-auth&#34;&gt;Benefits of Istio Auth&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Defense in depth&lt;/strong&gt;: When used in conjunction with Kubernetes (or infrastructure) network policies, users achieve higher levels of confidence, knowing that pod-to-pod or service-to-service communication is secured both at network and application layers.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Secure by default&lt;/strong&gt;: When used with Istio&amp;rsquo;s proxy and centralized policy engine, Istio Auth can be configured during deployment with minimal or no application change. Administrators and operators can thus ensure that service communications are secured by default and that they can enforce these policies consistently across diverse protocols and runtimes.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Strong service authentication&lt;/strong&gt;: Istio Auth secures service communication using mutual TLS to ensure that the service identity is not expressed as a bearer token that can be stolen or replayed from another source. This ensures that services with sensitive data can only be accessed from strongly authenticated and authorized clients.&lt;/p&gt;
&lt;h2 id=&#34;join-us-in-this-journey&#34;&gt;Join us in this journey&lt;/h2&gt;
&lt;p&gt;Istio Auth is the first step towards providing a full stack of capabilities to protect services with sensitive data from external attacks and insider
threats. While the initial version runs on Kubernetes, our goal is to enable Istio Auth to secure services across diverse production environments. We encourage the
community to &lt;a href=&#34;https://github.com/istio/istio/tree/release-1.1/security&#34;&gt;join us&lt;/a&gt; in making robust service security easy and ubiquitous across different application
stacks and runtime platforms.&lt;/p&gt;</description><pubDate>Thu, 25 May 2017 00:00:00 +0000</pubDate><link>/v1.1/blog/2017/0.1-auth/</link><author>The Istio Team</author><guid isPermaLink="true">/v1.1/blog/2017/0.1-auth/</guid></item><item><title>Introducing Istio</title><description>
&lt;p&gt;Google, IBM, and Lyft are proud to announce the first public release of &lt;a href=&#34;/v1.1/&#34;&gt;Istio&lt;/a&gt;: an open source project that provides a uniform way to connect, secure, manage and monitor microservices. Our current release is targeted at the &lt;a href=&#34;https://kubernetes.io/&#34;&gt;Kubernetes&lt;/a&gt; environment; we intend to add support for other environments such as virtual machines and Cloud Foundry in the coming months.
Istio adds traffic management to microservices and creates a basis for value-add capabilities like security, monitoring, routing, connectivity management and policy. The software is built using the battle-tested &lt;a href=&#34;https://envoyproxy.github.io/envoy/&#34;&gt;Envoy&lt;/a&gt; proxy from Lyft, and gives visibility and control over traffic &lt;em&gt;without requiring any changes to application code&lt;/em&gt;. Istio gives CIOs a powerful tool to enforce security, policy and compliance requirements across the enterprise.&lt;/p&gt;
&lt;h2 id=&#34;background&#34;&gt;Background&lt;/h2&gt;
&lt;p&gt;Writing reliable, loosely coupled, production-grade applications based on microservices can be challenging. As monolithic applications are decomposed into microservices, software teams have to worry about the challenges inherent in integrating services in distributed systems: they must account for service discovery, load balancing, fault tolerance, end-to-end monitoring, dynamic routing for feature experimentation, and perhaps most important of all, compliance and security.&lt;/p&gt;
&lt;p&gt;Inconsistent attempts at solving these challenges, cobbled together from libraries, scripts and Stack Overflow snippets, lead to solutions that vary wildly across languages and runtimes, have poor observability characteristics and can often end up compromising security.&lt;/p&gt;
&lt;p&gt;One solution is to standardize implementations on a common RPC library like &lt;a href=&#34;https://grpc.io&#34;&gt;gRPC&lt;/a&gt;, but this can be costly for organizations to adopt wholesale
and leaves out brownfield applications which may be practically impossible to change. Operators need a flexible toolkit to make their microservices secure, compliant, trackable and highly available, and developers need the ability to experiment with different features in production, or deploy canary releases, without impacting the system as a whole.&lt;/p&gt;
&lt;h2 id=&#34;solution-service-mesh&#34;&gt;Solution: Service Mesh&lt;/h2&gt;
&lt;p&gt;Imagine if we could transparently inject a layer of infrastructure between a service and the network that gives operators the controls they need while freeing developers from having to bake solutions to distributed system problems into their code. This uniform layer of infrastructure combined with service deployments is commonly referred to as a &lt;strong&gt;&lt;em&gt;service mesh&lt;/em&gt;&lt;/strong&gt;. Just as microservices help to decouple feature teams from each other, a service mesh helps to decouple operators from application feature development and release processes. Istio turns disparate microservices into an integrated service mesh by systemically injecting a proxy into the network paths among them.&lt;/p&gt;
&lt;p&gt;Google, IBM and Lyft joined forces to create Istio from a desire to provide a reliable substrate for microservice development and maintenance, based on our common experiences building and operating massive scale microservices for internal and enterprise customers. Google and IBM have extensive experience with these large scale microservices in their own applications and with their enterprise customers in sensitive/regulated environments, while Lyft developed Envoy to address their internal operability challenges. &lt;a href=&#34;https://eng.lyft.com/announcing-envoy-c-l7-proxy-and-communication-bus-92520b6c8191&#34;&gt;Lyft open sourced Envoy&lt;/a&gt; after successfully using it in production for over a year to manage more than 100 services spanning 10,000 VMs, processing 2M requests/second.&lt;/p&gt;
&lt;h2 id=&#34;benefits-of-istio&#34;&gt;Benefits of Istio&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Fleet-wide Visibility&lt;/strong&gt;: Failures happen, and operators need tools to stay on top of the health of clusters and their graphs of microservices. Istio produces detailed monitoring data about application and network behaviors that is rendered using &lt;a href=&#34;https://prometheus.io/&#34;&gt;Prometheus&lt;/a&gt; &amp;amp; &lt;a href=&#34;https://github.com/grafana/grafana&#34;&gt;Grafana&lt;/a&gt;, and can be easily extended to send metrics and logs to any collection, aggregation and querying system. Istio enables analysis of performance hotspots and diagnosis of distributed failure modes with &lt;a href=&#34;https://github.com/openzipkin/zipkin&#34;&gt;Zipkin&lt;/a&gt; tracing.&lt;/p&gt;
&lt;figure style=&#34;width:100%&#34;&gt;
&lt;div class=&#34;wrapper-with-intrinsic-ratio&#34; style=&#34;padding-bottom:55.425531914893625%&#34;&gt;
&lt;a data-skipendnotes=&#34;true&#34; href=&#34;/v1.1/blog/2017/0.1-announcement/istio_grafana_dashboard-new.png&#34; title=&#34;Grafana Dashboard with Response Size&#34;&gt;
&lt;img class=&#34;element-to-stretch&#34; src=&#34;/v1.1/blog/2017/0.1-announcement/istio_grafana_dashboard-new.png&#34; alt=&#34;Grafana Dashboard with Response Size&#34; /&gt;
&lt;/a&gt;
&lt;/div&gt;
&lt;figcaption&gt;Grafana Dashboard with Response Size&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;figure style=&#34;width:100%&#34;&gt;
&lt;div class=&#34;wrapper-with-intrinsic-ratio&#34; style=&#34;padding-bottom:29.912663755458514%&#34;&gt;
&lt;a data-skipendnotes=&#34;true&#34; href=&#34;/v1.1/blog/2017/0.1-announcement/istio_zipkin_dashboard.png&#34; title=&#34;Zipkin Dashboard&#34;&gt;
&lt;img class=&#34;element-to-stretch&#34; src=&#34;/v1.1/blog/2017/0.1-announcement/istio_zipkin_dashboard.png&#34; alt=&#34;Zipkin Dashboard&#34; /&gt;
&lt;/a&gt;
&lt;/div&gt;
&lt;figcaption&gt;Zipkin Dashboard&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p&gt;&lt;strong&gt;Resiliency and efficiency&lt;/strong&gt;: When developing microservices, operators need to assume that the network will be unreliable. Operators can use retries, load balancing, flow-control (HTTP/2), and circuit-breaking to compensate for some of the common failure modes due to an unreliable network. Istio provides a uniform approach to configuring these features, making it easier to operate a highly resilient service mesh.&lt;/p&gt;
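&lt;p&gt;As a sketch of how such resiliency settings are expressed, the following &lt;code&gt;DestinationRule&lt;/code&gt; (using the &lt;code&gt;v1alpha3&lt;/code&gt; networking API from the Istio 1.1 docs, and a hypothetical &lt;code&gt;reviews&lt;/code&gt; service) caps connections and ejects hosts that keep failing:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-yaml&#34;&gt;apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews               # hypothetical service name
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 100   # cap concurrent TCP connections (circuit breaking)
    outlierDetection:
      consecutiveErrors: 5    # eject a host after 5 consecutive errors
      interval: 30s           # how often hosts are scanned
      baseEjectionTime: 30s   # minimum ejection duration
&lt;/code&gt;&lt;/pre&gt;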
&lt;p&gt;&lt;strong&gt;Developer productivity&lt;/strong&gt;: Istio provides a significant boost to developer productivity by letting them focus on building service features in their language of choice, while Istio handles resiliency and networking challenges in a uniform way. Developers are freed from having to bake solutions to distributed systems problems into their code. Istio further improves productivity by providing common functionality supporting A/B testing, canarying, and fault injection.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Policy Driven Ops&lt;/strong&gt;: Istio empowers teams with different areas of concern to operate independently. It decouples cluster operators from the feature development cycle, allowing improvements to security, monitoring, scaling, and service topology to be rolled out &lt;em&gt;without&lt;/em&gt; code changes. Operators can route a precise subset of production traffic to qualify a new service release. They can inject failures or delays into traffic to test the resilience of the service mesh, and set up rate limits to prevent services from being overloaded. Istio can also be used to enforce compliance rules, defining ACLs between services to allow only authorized services to talk to each other.&lt;/p&gt;
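&lt;p&gt;For instance, routing a precise subset of production traffic to a new release can be sketched as a weighted route (again assuming the &lt;code&gt;v1alpha3&lt;/code&gt; API and hypothetical &lt;code&gt;reviews&lt;/code&gt; subsets):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-yaml&#34;&gt;apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1        # current release
      weight: 95
    - destination:
        host: reviews
        subset: v2        # new release under qualification
      weight: 5
&lt;/code&gt;&lt;/pre&gt;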
&lt;p&gt;&lt;strong&gt;Secure by default&lt;/strong&gt;: It is a common fallacy of distributed computing that the network is secure. Istio enables operators to authenticate and secure all communication between services using a mutual TLS connection, without burdening the developer or the operator with cumbersome certificate management tasks. Our security framework is aligned with the emerging &lt;a href=&#34;https://spiffe.github.io/&#34;&gt;SPIFFE&lt;/a&gt; specification, and is based on similar systems that have been tested extensively inside Google.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Incremental Adoption&lt;/strong&gt;: We designed Istio to be completely transparent to the services running in the mesh, allowing teams to incrementally adopt features of Istio over time. Adopters can start with enabling fleet-wide visibility and, once they&amp;rsquo;re comfortable with Istio in their environment, they can switch on other features as needed.&lt;/p&gt;
&lt;h2 id=&#34;join-us-in-this-journey&#34;&gt;Join us in this journey&lt;/h2&gt;
&lt;p&gt;Istio is a completely open development project. Today we are releasing version 0.1, which works in a Kubernetes cluster, and we plan to have major new
releases every 3 months, including support for additional environments. Our goal is to enable developers and operators to roll out and operate microservices
with agility, complete visibility of the underlying network, and uniform control and security in all environments. We look forward to working with the Istio
community and our partners towards these goals, following our &lt;a href=&#34;/v1.1/about/feature-stages/&#34;&gt;roadmap&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Visit the &lt;a href=&#34;https://github.com/istio/istio/releases&#34;&gt;Istio releases page&lt;/a&gt; to get the latest released bits.&lt;/p&gt;
&lt;p&gt;View the &lt;a href=&#34;/v1.1/talks/istio_talk_gluecon_2017.pdf&#34;&gt;presentation&lt;/a&gt; from GlueCon 2017, where Istio was unveiled.&lt;/p&gt;
&lt;h2 id=&#34;community&#34;&gt;Community&lt;/h2&gt;
&lt;p&gt;We are excited to see early commitment to support the project from many companies in the community:
&lt;a href=&#34;https://blog.openshift.com/red-hat-istio-launch/&#34;&gt;Red Hat&lt;/a&gt; with Red Hat OpenShift and OpenShift Application Runtimes,
Pivotal with &lt;a href=&#34;https://content.pivotal.io/blog/pivotal-and-istio-advancing-the-ecosystem-for-microservices-in-the-enterprise&#34;&gt;Pivotal Cloud Foundry&lt;/a&gt;,
WeaveWorks with &lt;a href=&#34;https://www.weave.works/blog/istio-weave-cloud/&#34;&gt;Weave Cloud&lt;/a&gt; and Weave Net 2.0,
&lt;a href=&#34;https://www.projectcalico.org/welcoming-istio-to-the-kubernetes-networking-community&#34;&gt;Tigera&lt;/a&gt; with the Project Calico Network Policy Engine
and &lt;a href=&#34;https://www.datawire.io/istio-and-datawire-ecosystem/&#34;&gt;Datawire&lt;/a&gt; with the Ambassador project. We hope to see many more companies join us in
this journey.&lt;/p&gt;
&lt;p&gt;To get involved, connect with us via any of these channels:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href=&#34;https://istio.io&#34;&gt;istio.io&lt;/a&gt; for documentation and examples.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The &lt;a href=&#34;https://discuss.istio.io&#34;&gt;Istio discussion board&lt;/a&gt; for general discussions,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href=&#34;https://stackoverflow.com/questions/tagged/istio&#34;&gt;Stack Overflow&lt;/a&gt; for curated questions and answers&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href=&#34;https://github.com/istio/istio/issues&#34;&gt;GitHub&lt;/a&gt; for filing issues&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href=&#34;https://twitter.com/IstioMesh&#34;&gt;@IstioMesh&lt;/a&gt; on Twitter&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;From everyone working on Istio, welcome aboard!&lt;/p&gt;</description><pubDate>Wed, 24 May 2017 00:00:00 +0000</pubDate><link>/v1.1/blog/2017/0.1-announcement/</link><author>The Istio Team</author><guid isPermaLink="true">/v1.1/blog/2017/0.1-announcement/</guid></item></channel></rss>