Istio Blog and News: Connect, secure, control, and observe services.

Support for Istio 1.8 has ended<p>As <a href="/v1.9/news/support/announcing-1.8-eol/">previously announced</a>, support for Istio 1.8 has now officially ended.</p> <p>At this point we will no longer back-port fixes for security issues and critical bugs to 1.8, so we heartily encourage you to upgrade to the latest version of Istio (1.9.5) if you haven&rsquo;t already.</p>Wed, 12 May 2021 00:00:00 +0000 /v1.9/news/support/announcing-1.8-eol-final/

ISTIO-SECURITY-2021-006 <table> <thead> <tr> <th colspan="2">Disclosure Details</th> </tr> </thead> <tbody> <tr> <td>CVE(s)</td> <td> <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-31921">CVE-2021-31921</a><br> </td> </tr> <tr> <td>CVSS Impact Score</td> <td>10.0 <a href="https://nvd.nist.gov/vuln-metrics/cvss/v3-calculator?vector=AV%3aN%2fAC%3aL%2fPR%3aN%2fUI%3aN%2fS%3aC%2fC%3aH%2fI%3aH%2fA%3aH">AV:N/AC:L/PR:N/UI:N/S:C/C:H/I:H/A:H</a></td> </tr> <tr> <td>Affected Releases</td> <td> All releases prior to 1.8.6<br> 1.9.0 to 1.9.4<br> </td> </tr> </tbody> </table> <h2 id="issue">Issue</h2> <p>Istio contains a remotely exploitable vulnerability where an external client can access unexpected services in the cluster, bypassing authorization checks, when a gateway is configured with <code>AUTO_PASSTHROUGH</code> routing configuration.</p> <h2 id="am-i-impacted">Am I impacted?</h2> <p>This vulnerability impacts only usage of the <code>AUTO_PASSTHROUGH</code> Gateway type, which is typically only used in multi-network multi-cluster deployments.</p> <p>The TLS mode of all Gateways in the cluster can be detected with the following command:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl get gateways.networking.istio.io -A -o &#34;custom-columns=NAMESPACE:.metadata.namespace,NAME:.metadata.name,TLS_MODE:.spec.servers[*].tls.mode&#34;</code></pre> <p>If the output shows any <code>AUTO_PASSTHROUGH</code> Gateways, you may be impacted.</p> <h2 id="mitigation">Mitigation</h2> <p>Update your cluster to the latest supported version:</p> <ul> <li>Istio 1.8.6, if using 1.8.x</li> <li>Istio 1.9.5 or later</li> <li>The patch version specified by your cloud provider</li> </ul> <h2 id="credit">Credit</h2> <p>We would like to thank John Howard (Google) for reporting this issue.</p>Tue, 11 May 2021 00:00:00 +0000 /v1.9/news/security/istio-security-2021-006/

ISTIO-SECURITY-2021-005 <table> <thead> <tr> <th colspan="2">Disclosure Details</th> </tr> </thead> <tbody> <tr> <td>CVE(s)</td> <td> <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-31920">CVE-2021-31920</a><br> </td> </tr> <tr> <td>CVSS Impact Score</td> <td>8.1 <a href="https://nvd.nist.gov/vuln-metrics/cvss/v3-calculator?vector=AV%3aN%2fAC%3aL%2fPR%3aL%2fUI%3aN%2fS%3aU%2fC%3aH%2fI%3aH%2fA%3aN">AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:N</a></td> </tr> <tr> <td>Affected Releases</td> <td> All releases prior to 1.8.6<br> 1.9.0 to 1.9.4<br> </td> </tr> </tbody> </table> <h2 id="issue">Issue</h2> <p>Istio contains a remotely exploitable vulnerability where an HTTP request path with multiple slashes or escaped slash characters (<code>%2F</code> or <code>%5C</code>) could potentially bypass an Istio authorization policy when path based authorization rules are used. Related Envoy CVE: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-29492"><code>CVE-2021-29492</code></a>.</p> <p>For example, assume an Istio cluster administrator defines an authorization DENY policy to reject requests at path <code>/admin</code>. 
A request sent to the URL path <code>//admin</code> will NOT be rejected by the authorization policy.</p> <p>According to <a href="https://tools.ietf.org/html/rfc3986#section-6">RFC 3986</a>, the path <code>//admin</code> with multiple slashes should technically be treated as a different path from <code>/admin</code>. However, some backend services choose to normalize URL paths by merging multiple slashes into a single slash. This can result in a bypass of the authorization policy (<code>//admin</code> does not match <code>/admin</code>), allowing a user to access the resource at path <code>/admin</code> in the backend; this would represent a security incident.</p> <h2 id="am-i-impacted">Am I impacted?</h2> <p>Your cluster is <strong>impacted</strong> by this vulnerability if you have authorization policies using the <code>ALLOW action + notPaths field</code> or <code>DENY action + paths field</code> patterns. These patterns are vulnerable to unexpected policy bypasses and you should upgrade to fix the security issue as soon as possible.</p> <p>The following is an example of a vulnerable policy that uses the <code>DENY action + paths field</code> pattern:</p> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' >apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: deny-path-admin
spec:
  action: DENY
  rules:
  - to:
    - operation:
        paths: [&#34;/admin&#34;]
</code></pre> <p>The following is another example of a vulnerable policy that uses the <code>ALLOW action + notPaths field</code> pattern:</p> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' >apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: allow-path-not-admin
spec:
  action: ALLOW
  rules:
  - to:
    - operation:
        notPaths: [&#34;/admin&#34;]
</code></pre> <p>Your cluster is <strong>NOT impacted</strong> by this vulnerability if:</p> <ul> <li>You don’t have authorization policies</li> 
<li>Your authorization policies don’t define <code>paths</code> or <code>notPaths</code> fields.</li> <li>Your authorization policies use <code>ALLOW action + paths field</code> or <code>DENY action + notPaths field</code> patterns. These patterns could only cause unexpected rejection instead of policy bypasses. The upgrade is optional for these cases.</li> </ul> <h2 id="mitigation">Mitigation</h2> <ol> <li>Update your cluster to the latest supported version. These versions support configuring the Envoy proxies in the system with more normalization options: <ul> <li>Istio 1.8.6, if using 1.8.x</li> <li>Istio 1.9.5 or up</li> <li>The patch version specified by your cloud provider</li> </ul></li> <li>Follow the <a href="/v1.9/docs/ops/best-practices/security/#authorization-policies">security best practices</a> to configure your authorization policies.</li> </ol> <h2 id="credit">Credit</h2> <p>We would like to thank <a href="https://github.com/Ruil1n"><code>Ruilin</code></a> and <code>Test123</code> for discovering this issue.</p>Tue, 11 May 2021 00:00:00 +0000/v1.9/news/security/istio-security-2021-005//v1.9/news/security/istio-security-2021-005/CVEAnnouncing Istio 1.9.5 <p>This release fixes the security vulnerabilities described in our May 11th posts, <a href="/v1.9/news/security/istio-security-2021-005">ISTIO-SECURITY-2021-005</a> and <a href="/v1.9/news/security/istio-security-2021-006">ISTIO-SECURITY-2021-006</a>.</p> <div class="relnote-actions call-to-action"> <a class="entry" href="/v1.9/news/releases/1.9.x/announcing-1.9/upgrade-notes/"> <h5>BEFORE YOU UPGRADE</h5> <p>Things to know and prepare before upgrading.</p> </a> <a class="entry" href="https://github.com/istio/istio/releases/tag/1.9.5"> <h5>DOWNLOAD</h5> <p>Download and install this release.</p> </a> <a class="entry" href="https://istio.io/docs"> <h5>DOCS</h5> <p>Visit the documentation for this release.</p> </a> <a class="entry" href="https://github.com/istio/istio/compare/1.9.4...1.9.5"> <h5>SOURCE 
CHANGES</h5> <p>Inspect the full set of source code changes.</p> </a> </div> <h2 id="security-update">Security update</h2> <div> <aside class="callout tip"> <div class="type"><svg class="large-icon"><use xlink:href="/v1.9/img/icons.svg#callout-tip"/></svg></div> <div class="content">The first 2 CVEs are highly related.</div> </aside> </div> <ul> <li><strong><a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-31920">CVE-2021-31920</a></strong>: Istio contains a remotely exploitable vulnerability where an HTTP request path with multiple slashes or escaped slash characters (<code>%2F</code> or <code>%5C</code>) could potentially bypass an Istio authorization policy when path based authorization rules are used. See the <a href="/v1.9/news/security/istio-security-2021-005">ISTIO-SECURITY-2021-005 bulletin</a> for more details. <ul> <li><strong>CVSS Score</strong>: 8.1 <a href="https://nvd.nist.gov/vuln-metrics/cvss/v3-calculator?vector=AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:N">AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:N</a></li> </ul></li> <li><strong><a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-29492">CVE-2021-29492</a></strong>: Envoy contains a remotely exploitable vulnerability where an HTTP request with escaped slash characters can bypass Envoy&rsquo;s authorization mechanisms. <ul> <li><strong>CVSS Score</strong>: 8.3 <a href="https://nvd.nist.gov/vuln-metrics/cvss/v3-calculator?vector=AV:N/AC:L/PR:N/UI:N/S:C/C:L/I:L/A:L">AV:N/AC:L/PR:N/UI:N/S:C/C:L/I:L/A:L</a></li> </ul></li> <li><strong><a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-31921">CVE-2021-31921</a></strong>: Istio contains a remotely exploitable vulnerability where an external client can access unexpected services in the cluster, bypassing authorization checks, when a gateway is configured with <code>AUTO_PASSTHROUGH</code> routing configuration. See the <a href="/v1.9/news/security/istio-security-2021-006">ISTIO-SECURITY-2021-006 bulletin</a> for more details. 
<ul> <li><strong>CVSS Score</strong>: 10.0 <a href="https://nvd.nist.gov/vuln-metrics/cvss/v3-calculator?vector=AV:N/AC:L/PR:N/UI:N/S:C/C:H/I:H/A:H">AV:N/AC:L/PR:N/UI:N/S:C/C:H/I:H/A:H</a></li> </ul></li> </ul> <h2 id="changes">Changes</h2> <ul> <li><strong>Added</strong> <a href="/v1.9/docs/ops/best-practices/security/#authorization-policies">security best practice for authorization policies</a></li> </ul>Tue, 11 May 2021 00:00:00 +0000/v1.9/news/releases/1.9.x/announcing-1.9.5//v1.9/news/releases/1.9.x/announcing-1.9.5/Announcing Istio 1.8.6 <p>This release fixes the security vulnerabilities described in our May 11th posts, <a href="/v1.9/news/security/istio-security-2021-005">ISTIO-SECURITY-2021-005</a> and <a href="/v1.9/news/security/istio-security-2021-006">ISTIO-SECURITY-2021-006</a>.</p> <div class="relnote-actions call-to-action"> <a class="entry" href="/v1.9/news/releases/1.8.x/announcing-1.8/upgrade-notes/"> <h5>BEFORE YOU UPGRADE</h5> <p>Things to know and prepare before upgrading.</p> </a> <a class="entry" href="https://github.com/istio/istio/releases/tag/1.8.6"> <h5>DOWNLOAD</h5> <p>Download and install this release.</p> </a> <a class="entry" href="https://archive.istio.io/v1.8/docs"> <h5>DOCS</h5> <p>Visit the documentation for this release.</p> </a> <a class="entry" href="https://github.com/istio/istio/compare/1.8.5...1.8.6"> <h5>SOURCE CHANGES</h5> <p>Inspect the full set of source code changes.</p> </a> </div> <div> <aside class="callout tip"> <div class="type"><svg class="large-icon"><use xlink:href="/v1.9/img/icons.svg#callout-tip"/></svg></div> <div class="content">This is the final release of 1.8. 
Please upgrade your Istio installation to a supported version.</div> </aside> </div> <h2 id="security-update">Security update</h2> <div> <aside class="callout tip"> <div class="type"><svg class="large-icon"><use xlink:href="/v1.9/img/icons.svg#callout-tip"/></svg></div> <div class="content">The first 2 CVEs are highly related.</div> </aside> </div> <ul> <li><strong><a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-31920">CVE-2021-31920</a></strong>: Istio contains a remotely exploitable vulnerability where an HTTP request path with multiple slashes or escaped slash characters (<code>%2F</code> or <code>%5C</code>) could potentially bypass an Istio authorization policy when path based authorization rules are used. See the <a href="/v1.9/news/security/istio-security-2021-005">ISTIO-SECURITY-2021-005 bulletin</a> for more details. <ul> <li><strong>CVSS Score</strong>: 8.1 <a href="https://nvd.nist.gov/vuln-metrics/cvss/v3-calculator?vector=AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:N">AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:N</a></li> </ul></li> <li><strong><a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-29492">CVE-2021-29492</a></strong>: Envoy contains a remotely exploitable vulnerability where an HTTP request with escaped slash characters can bypass Envoy&rsquo;s authorization mechanisms. <ul> <li><strong>CVSS Score</strong>: 8.3 <a href="https://nvd.nist.gov/vuln-metrics/cvss/v3-calculator?vector=AV:N/AC:L/PR:N/UI:N/S:C/C:L/I:L/A:L">AV:N/AC:L/PR:N/UI:N/S:C/C:L/I:L/A:L</a></li> </ul></li> <li><strong><a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-31921">CVE-2021-31921</a></strong>: Istio contains a remotely exploitable vulnerability where an external client can access unexpected services in the cluster, bypassing authorization checks, when a gateway is configured with <code>AUTO_PASSTHROUGH</code> routing configuration. 
See the <a href="/v1.9/news/security/istio-security-2021-006">ISTIO-SECURITY-2021-006 bulletin</a> for more details. <ul> <li><strong>CVSS Score</strong>: 10.0 <a href="https://nvd.nist.gov/vuln-metrics/cvss/v3-calculator?vector=AV:N/AC:L/PR:N/UI:N/S:C/C:H/I:H/A:H">AV:N/AC:L/PR:N/UI:N/S:C/C:H/I:H/A:H</a></li> </ul></li> </ul> <h2 id="changes">Changes</h2> <ul> <li><p><strong>Added</strong> <a href="/v1.9/docs/ops/best-practices/security/#authorization-policies">security best practice for authorization policies</a></p></li> <li><p><strong>Fixed</strong> istiod so it will no longer generate listeners for privileged gateway ports (&lt;1024) if the gateway Pod does not have sufficient permissions. <a href="https://github.com/istio/istio/issues/27566">Issue 27566</a></p></li> <li><p><strong>Fixed</strong> an issue where transport socket parameters configured in <code>EnvoyFilter</code> were not taken into account. <a href="https://github.com/istio/istio/issues/28996">Issue 28996</a></p></li> <li><p><strong>Fixed</strong> an issue when using <code>PeerAuthentication</code> to turn off mTLS while using multi-network. Non-mTLS endpoints are now removed from cross-network load-balancing endpoints to prevent 500 errors. <a href="https://github.com/istio/istio/issues/28798">Issue 28798</a></p></li> <li><p><strong>Fixed</strong> a bug causing runaway logs in istiod after disabling the default ingress controller. <a href="https://github.com/istio/istio/issues/31336">Issue 31336</a></p></li> <li><p><strong>Fixed</strong> the Kubernetes API server so it is now considered cluster-local by default. This means that any pod attempting to reach <code>kubernetes.default.svc</code> will always be directed to the in-cluster server. <a href="https://github.com/istio/istio/issues/31340">Issue 31340</a></p></li> <li><p><strong>Fixed</strong> the Istio operator so it no longer prunes resources that do not belong to the specific Istio operator CR. 
<a href="https://github.com/istio/istio/issues/30833">Issue 30833</a></p></li> </ul>Tue, 11 May 2021 00:00:00 +0000 /v1.9/news/releases/1.8.x/announcing-1.8.6/

Announcing Istio 1.9.4 <p>This release contains bug fixes to improve robustness. This release note describes what’s different between Istio 1.9.3 and Istio 1.9.4.</p> <div class="relnote-actions call-to-action"> <a class="entry" href="/v1.9/news/releases/1.9.x/announcing-1.9/upgrade-notes/"> <h5>BEFORE YOU UPGRADE</h5> <p>Things to know and prepare before upgrading.</p> </a> <a class="update-notice entry" data-title='Update Notice' data-downloadhref="https://github.com/istio/istio/releases/tag/1.9.4" data-downloadbuttontext="DOWNLOAD 1.9.4" data-updateadvice='Before you download 1.9.4, you should know that there&#39;s a newer patch release with the latest bug fixes and perf improvements.' data-updatebutton='LEARN ABOUT ISTIO 1.9.5' data-updatehref="/v1.9/news/releases/1.9.x/announcing-1.9.5/"> <h5>DOWNLOAD</h5> <p>Download and install this release.</p> </a> <a class="entry" href="https://istio.io/docs"> <h5>DOCS</h5> <p>Visit the documentation for this release.</p> </a> <a class="entry" href="https://github.com/istio/istio/compare/1.9.3...1.9.4"> <h5>SOURCE CHANGES</h5> <p>Inspect the full set of source code changes.</p> </a> </div> <h2 id="changes">Changes</h2> <ul> <li><p><strong>Fixed</strong> an issue where the Istio operator pruned all resources created by the operator, including itself. Now the operator will only remove resources belonging to the custom resource. (<a href="https://github.com/istio/istio/issues/30833">Issue #30833</a>)</p></li> <li><p><strong>Fixed</strong> an issue by ensuring the lease duration is always greater than the user-configured <code>RENEW_DEADLINE</code> for the Istio operator manager. 
(<a href="https://github.com/istio/istio/issues/27509">Issue #27509</a>)</p></li> <li><p><strong>Fixed</strong> an issue where a certificate provisioned by the sidecar proxy could not be used by Prometheus. (<a href="https://github.com/istio/istio/issues/29919">Issue #29919</a>)</p></li> <li><p><strong>Fixed</strong> an issue where an IstioOperator (IOP) resource was created under <code>istio-system</code> when installing Istio in another namespace. (<a href="https://github.com/istio/istio/issues/31517">Issue #31517</a>)</p></li> <li><p><strong>Fixed</strong> an issue when using <code>PeerAuthentication</code> to turn off mTLS while using multi-network. Now non-mTLS endpoints will be removed from cross-network load-balancing endpoints to prevent 500 errors. (<a href="https://github.com/istio/istio/issues/28798">Issue #28798</a>)</p></li> <li><p><strong>Fixed</strong> <code>istiod</code> never becoming ready when it fails to read resources from clusters configured via remote secrets. After a timeout configured by <code>PILOT_REMOTE_CLUSTER_TIMEOUT</code> (default <code>30s</code>), <code>istiod</code> will become ready without syncing remote clusters. The stat <code>remote_cluster_sync_timeouts</code> will be incremented when this occurs. (<a href="https://github.com/istio/istio/issues/30838">Issue #30838</a>)</p></li> <li><p><strong>Fixed</strong> an issue where <code>istiod</code> would not create a self-signed root CA and the <code>istio-ca-root-cert</code> configmap when <code>values.global.pilotCertProvider</code> is <code>kubernetes</code>. (<a href="https://github.com/istio/istio/issues/32023">Issue #32023</a>)</p></li> <li><p><strong>Improved</strong> the <code>istioctl x workload</code> command to configure VMs to disable inbound <code>iptables</code> capture for admin ports, matching the behavior of Kubernetes Pods. 
(<a href="https://github.com/istio/istio/issues/29412">Issue #29412</a>)</p></li> <li><p><strong>Improved</strong> performance of <code>istiod</code> when running on clusters with thousands of namespaces. (<a href="https://github.com/istio/istio/pull/32269">Issue #32269</a>)</p></li> <li><p><strong>Improved</strong> detection of Server Side Apply in Kubernetes. (<a href="https://github.com/istio/istio/issues/32101">Issue #32101</a>)</p></li> </ul>Tue, 27 Apr 2021 00:00:00 +0000 /v1.9/news/releases/1.9.x/announcing-1.9.4/

ISTIO-SECURITY-2021-004 <table> <thead> <tr> <th colspan="2">Disclosure Details</th> </tr> </thead> <tbody> <tr> <td>CVE(s)</td> <td> N/A<br> </td> </tr> <tr> <td>CVSS Impact Score</td> <td>N/A</td> </tr> <tr> <td>Affected Releases</td> <td> All releases 1.5 and later<br> </td> </tr> </tbody> </table> <p>This is a security advisory reminding customers to check their authorization policies and make sure <a href="/v1.9/docs/tasks/security/authentication/authn-policy/#globally-enabling-istio-mutual-tls-in-strict-mode">mTLS (STRICT mode) is enabled</a> when using <a href="/v1.9/docs/concepts/security/#dependency-on-mutual-tls">mTLS-only fields</a> in an authorization policy.</p> <p>You can stop reading if:</p> <ul> <li><p>Your authorization policy does not use <a href="/v1.9/docs/concepts/security/#dependency-on-mutual-tls">mTLS-only fields</a>; or</p></li> <li><p>Your authorization policy uses mTLS-only fields and you have also enabled mTLS with STRICT mode, or your authorization policy is configured to reject plain text traffic explicitly.</p></li> </ul> <h2 id="issue">Issue</h2> <p>In authorization policy, the following are <a href="/v1.9/docs/concepts/security/#dependency-on-mutual-tls">mTLS-only fields</a>:</p> <ul> <li>the <code>principals</code> and <code>notPrincipals</code> 
fields under the <code>source</code> section</li> <li>the <code>namespaces</code> and <code>notNamespaces</code> fields under the <code>source</code> section</li> <li>the <code>source.principal</code> custom condition</li> <li>the <code>source.namespace</code> custom condition</li> </ul> <p>These mTLS-only fields will never match when the traffic is plain text (non-mTLS), and the request might be allowed unexpectedly.</p> <p>The following is an example ALLOW policy that uses mTLS-only fields to allow a request if it is not from the namespace <code>foo</code>:</p> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' >apiVersion: &#34;security.istio.io/v1beta1&#34;
kind: &#34;AuthorizationPolicy&#34;
metadata:
  name: allow-ns-not-foo
spec:
  action: ALLOW
  rules:
  - from:
    - source:
        notNamespaces: [&#34;foo&#34;]
</code></pre> <p>A <strong>plain text request</strong> from the namespace <code>foo</code> will actually be allowed. The mTLS-only field <code>notNamespaces</code> will be compared to an empty value when mTLS is not used, resulting in a policy that allows the <strong>plain text request</strong> even if the source namespace is <code>foo</code>.</p> <p>The following is an example DENY policy that uses mTLS-only fields to reject a request if it is from the namespace <code>foo</code>:</p> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' >apiVersion: &#34;security.istio.io/v1beta1&#34;
kind: &#34;AuthorizationPolicy&#34;
metadata:
  name: reject-ns-foo
spec:
  action: DENY
  rules:
  - from:
    - source:
        namespaces: [&#34;foo&#34;]
</code></pre> <p>A <strong>plain text request</strong> from the namespace <code>foo</code> will not be rejected. 
The mTLS-only field <code>namespaces</code> will be compared to an empty value when mTLS is not used, resulting in a policy that does not reject the <strong>plain text request</strong> even if the source namespace is <code>foo</code>.</p> <h2 id="solution">Solution</h2> <p>To solve this problem, it&rsquo;s recommended to always <a href="/v1.9/docs/tasks/security/authentication/authn-policy/#enable-mutual-tls-per-namespace-or-workload">enable mTLS with STRICT mode</a> on the workloads before using any mTLS-only fields in the authorization policy on the same workload.</p> <p>If you are unable to enable mTLS with STRICT mode for the workload, the alternative solution is to update the authorization policy to explicitly allow traffic with non-empty namespaces or reject traffic with empty namespaces, as the namespace can only be extracted when mTLS is STRICT (<code>*</code> implies non-empty namespaces and <code>not *</code> implies empty namespaces). The policies below therefore also reject any plain text traffic.</p> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' >apiVersion: &#34;security.istio.io/v1beta1&#34;
kind: &#34;AuthorizationPolicy&#34;
metadata:
  name: allow-ns-not-foo
spec:
  action: ALLOW
  rules:
  - from:
    - source:
        notNamespaces: [&#34;foo&#34;]
        # Add the following to explicitly only allow mTLS traffic.
        namespaces: [&#34;*&#34;]
</code></pre> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' >apiVersion: &#34;security.istio.io/v1beta1&#34;
kind: &#34;AuthorizationPolicy&#34;
metadata:
  name: reject-ns-foo
spec:
  action: DENY
  rules:
  - from:
    - source:
        namespaces: [&#34;foo&#34;]
  # Add the following rule to explicitly reject plain text traffic.
  - from:
    - source:
        notNamespaces: [&#34;*&#34;]
</code></pre> <p>Also check the <a href="/v1.9/docs/ops/configuration/security/security-policy-examples/#require-mtls-in-authorization-layer-defense-in-depth">security policy examples</a> for more details about this alternative solution.</p> <h2 id="credit">Credit</h2> <p>We&rsquo;d like to thank <a href="https://github.com/howardjohn/">John Howard</a> for reporting this issue.</p> <h2 id="reporting-vulnerabilities">Reporting vulnerabilities</h2> <p>We’d like to remind our community to follow the <a href="/v1.9/about/security-vulnerabilities/">vulnerability reporting process</a> to report any bug that can result in a security vulnerability.</p>Thu, 15 Apr 2021 00:00:00 +0000 /v1.9/news/security/istio-security-2021-004/

ISTIO-SECURITY-2021-003 <table> <thead> <tr> <th colspan="2">Disclosure Details</th> </tr> </thead> <tbody> <tr> <td>CVE(s)</td> <td> <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-28683">CVE-2021-28683</a><br> <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-28682">CVE-2021-28682</a><br> <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-29258">CVE-2021-29258</a><br> </td> </tr> <tr> <td>CVSS Impact Score</td> <td>7.5 <a href="https://nvd.nist.gov/vuln-metrics/cvss/v3-calculator?vector=AV%3aN%2fAC%3aL%2fPR%3aN%2fUI%3aN%2fS%3aU%2fC%3aN%2fI%3aN%2fA%3aH">AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H</a></td> </tr> <tr> <td>Affected Releases</td> <td> All releases prior to 1.8.5<br> 1.9.0 to 1.9.2<br> </td> </tr> </tbody> </table> <p>Envoy, and subsequently Istio, 
is vulnerable to several newly discovered vulnerabilities:</p> <ul> <li><strong><a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-28683">CVE-2021-28683</a></strong>: Envoy contains a remotely exploitable NULL pointer dereference and crash in TLS when an unknown TLS alert code is received. <ul> <li><strong>CVSS Score</strong>: 7.5 <a href="https://nvd.nist.gov/vuln-metrics/cvss/v3-calculator?vector=AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H&amp;version=3.1">AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H</a></li> </ul></li> <li><strong><a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-28682">CVE-2021-28682</a></strong>: Envoy contains a remotely exploitable integer overflow in which a very large grpc-timeout value leads to unexpected timeout calculations. <ul> <li><strong>CVSS Score</strong>: 7.5 <a href="https://nvd.nist.gov/vuln-metrics/cvss/v3-calculator?vector=AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H&amp;version=3.1">AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H</a></li> </ul></li> <li><strong><a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-29258">CVE-2021-29258</a></strong>: Envoy contains a remotely exploitable vulnerability where an HTTP2 request with an empty metadata map can cause Envoy to crash. 
<ul> <li><strong>CVSS Score</strong>: 7.5 <a href="https://nvd.nist.gov/vuln-metrics/cvss/v3-calculator?vector=AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H&amp;version=3.1">AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H</a></li> </ul></li> </ul> <h2 id="reporting-vulnerabilities">Reporting vulnerabilities</h2> <p>We’d like to remind our community to follow the <a href="/v1.9/about/security-vulnerabilities/">vulnerability reporting process</a> to report any bug that can result in a security vulnerability.</p>Thu, 15 Apr 2021 00:00:00 +0000 /v1.9/news/security/istio-security-2021-003/

Announcing Istio 1.9.3 <p>This release fixes the security vulnerabilities described in <a href="/v1.9/news/security/istio-security-2021-003">our April 15th post</a>.</p> <div class="relnote-actions call-to-action"> <a class="entry" href="/v1.9/news/releases/1.9.x/announcing-1.9/upgrade-notes/"> <h5>BEFORE YOU UPGRADE</h5> <p>Things to know and prepare before upgrading.</p> </a> <a class="update-notice entry" data-title='Update Notice' data-downloadhref="https://github.com/istio/istio/releases/tag/1.9.3" data-downloadbuttontext="DOWNLOAD 1.9.3" data-updateadvice='Before you download 1.9.3, you should know that there&#39;s a newer patch release with the latest bug fixes and perf improvements.' 
data-updatebutton='LEARN ABOUT ISTIO 1.9.5' data-updatehref="/v1.9/news/releases/1.9.x/announcing-1.9.5/"> <h5>DOWNLOAD</h5> <p>Download and install this release.</p> </a> <a class="entry" href="https://istio.io/docs"> <h5>DOCS</h5> <p>Visit the documentation for this release.</p> </a> <a class="entry" href="https://github.com/istio/istio/compare/1.9.2...1.9.3"> <h5>SOURCE CHANGES</h5> <p>Inspect the full set of source code changes.</p> </a> </div> <h2 id="security-update">Security update</h2> <ul> <li><strong><a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-28683">CVE-2021-28683</a></strong>: Envoy contains a remotely exploitable NULL pointer dereference and crash in TLS when an unknown TLS alert code is received. <ul> <li><strong>CVSS Score</strong>: 7.5 <a href="https://nvd.nist.gov/vuln-metrics/cvss/v3-calculator?vector=AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H&amp;version=3.1">AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H</a></li> </ul></li> <li><strong><a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-28682">CVE-2021-28682</a></strong>: Envoy contains a remotely exploitable integer overflow in which a very large grpc-timeout value leads to unexpected timeout calculations. <ul> <li><strong>CVSS Score</strong>: 7.5 <a href="https://nvd.nist.gov/vuln-metrics/cvss/v3-calculator?vector=AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H&amp;version=3.1">AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H</a></li> </ul></li> <li><strong><a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-29258">CVE-2021-29258</a></strong>: Envoy contains a remotely exploitable vulnerability where an HTTP2 request with an empty metadata map can cause Envoy to crash. 
<ul> <li><strong>CVSS Score</strong>: 7.5 <a href="https://nvd.nist.gov/vuln-metrics/cvss/v3-calculator?vector=AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H&amp;version=3.1">AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H</a></li> </ul></li> </ul>Thu, 15 Apr 2021 00:00:00 +0000 /v1.9/news/releases/1.9.x/announcing-1.9.3/

Announcing Istio 1.8.5 <p>This release fixes the security vulnerabilities described in <a href="/v1.9/news/security/istio-security-2021-003">our April 15th post</a>.</p> <div class="relnote-actions call-to-action"> <a class="entry" href="/v1.9/news/releases/1.8.x/announcing-1.8/upgrade-notes/"> <h5>BEFORE YOU UPGRADE</h5> <p>Things to know and prepare before upgrading.</p> </a> <a class="update-notice entry" data-title='Update Notice' data-downloadhref="https://github.com/istio/istio/releases/tag/1.8.5" data-downloadbuttontext="DOWNLOAD 1.8.5" data-updateadvice='Before you download 1.8.5, you should know that there&#39;s a newer patch release with the latest bug fixes and perf improvements.' data-updatebutton='LEARN ABOUT ISTIO 1.8.6' data-updatehref="/v1.9/news/releases/1.8.x/announcing-1.8.6/"> <h5>DOWNLOAD</h5> <p>Download and install this release.</p> </a> <a class="entry" href="https://archive.istio.io/v1.8/docs"> <h5>DOCS</h5> <p>Visit the documentation for this release.</p> </a> <a class="entry" href="https://github.com/istio/istio/compare/1.8.4...1.8.5"> <h5>SOURCE CHANGES</h5> <p>Inspect the full set of source code changes.</p> </a> </div> <h2 id="security-update">Security update</h2> <ul> <li><strong><a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-28683">CVE-2021-28683</a></strong>: Envoy contains a remotely exploitable NULL pointer dereference and crash in TLS when an unknown TLS alert code is received. 
<ul> <li><strong>CVSS Score</strong>: 7.5 <a href="https://nvd.nist.gov/vuln-metrics/cvss/v3-calculator?vector=AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H&amp;version=3.1">AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H</a></li> </ul></li> <li><strong><a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-28682">CVE-2021-28682</a></strong>: Envoy contains a remotely exploitable integer overflow in which a very large grpc-timeout value leads to unexpected timeout calculations. <ul> <li><strong>CVSS Score</strong>: 7.5 <a href="https://nvd.nist.gov/vuln-metrics/cvss/v3-calculator?vector=AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H&amp;version=3.1">AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H</a></li> </ul></li> <li><strong><a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-29258">CVE-2021-29258</a></strong>: Envoy contains a remotely exploitable vulnerability where an HTTP2 request with an empty metadata map can cause Envoy to crash. <ul> <li><strong>CVSS Score</strong>: 7.5 <a href="https://nvd.nist.gov/vuln-metrics/cvss/v3-calculator?vector=AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H&amp;version=3.1">AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H</a></li> </ul></li> </ul>Thu, 15 Apr 2021 00:00:00 +0000/v1.9/news/releases/1.8.x/announcing-1.8.5//v1.9/news/releases/1.8.x/announcing-1.8.5/Support for Istio 1.8 ends on May 12th, 2021<p>According to Istio&rsquo;s <a href="/v1.9/about/supported-releases#support-policy">support policy</a>, minor releases like 1.8 are supported for three months after the next minor release. Since <a href="/v1.9/news/releases/1.9.x/announcing-1.9/">1.9 was released on February 9th</a>, support for 1.8 will end on May 12th, 2021.</p> <p>At that point we will stop back-porting fixes for security issues and critical bugs to 1.8, so we encourage you to upgrade to the latest version of Istio (1.9.5). 
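</p> <p>For <code>istioctl</code>-based installations, an in-place upgrade can be a single command (a sketch only, assuming a default <code>istioctl</code> installation; always review the upgrade notes for your target version first):</p> <pre><code><pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ istioctl upgrade
</code></pre></code></pre> <p>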
If you don&rsquo;t do this you may put yourself in the position of having to do a major upgrade on a short timeframe to pick up a critical fix.</p> <p>We care about you and your clusters, so please be kind to yourself and upgrade.</p>Mon, 12 Apr 2021 00:00:00 +0000/v1.9/news/support/announcing-1.8-eol//v1.9/news/support/announcing-1.8-eol/ISTIO-SECURITY-2021-002 <table> <thead> <tr> <th colspan="2">Disclosure Details</th> </tr> </thead> <tbody> <tr> <td>CVE(s)</td> <td> <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=N%2fA">N/A</a><br> </td> </tr> <tr> <td>CVSS Impact Score</td> <td>N/A <a href="https://nvd.nist.gov/vuln-metrics/cvss/v3-calculator?vector="></a></td> </tr> <tr> <td>Affected Releases</td> <td> All releases 1.6 and later<br> </td> </tr> </tbody> </table> <p>Upgrading from Istio versions 1.5 and prior, to 1.6 and later, may result in access control bypass:</p> <ul> <li><strong>Incorrect gateway ports on authorization policies on upgrades</strong>: In Istio versions 1.6 and later, the default container ports for Istio ingress gateways are updated from port &ldquo;80&rdquo; to &ldquo;8080&rdquo; and &ldquo;443&rdquo; to &ldquo;8443&rdquo; to allow <a href="/v1.9/news/releases/1.7.x/announcing-1.7/upgrade-notes/#gateways-run-as-non-root">gateways to run as non-root</a> by default. With this change, any existing authorization policies targeting an Istio ingress gateway on ports <code>80</code> and <code>443</code> need to be migrated to use the new container ports <code>8080</code> and <code>8443</code>, before upgrading to the listed versions. 
Failure to migrate may result in traffic reaching ingress gateway service ports <code>80</code> and <code>443</code> being incorrectly allowed or blocked, thereby causing policy violations.</li> </ul> <p>Example of an authorization policy resource that needs to be updated:</p> <pre><code><pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' >apiVersion: &#34;security.istio.io/v1beta1&#34;
kind: &#34;AuthorizationPolicy&#34;
metadata:
  name: block-admin-access
  namespace: istio-system
spec:
  selector:
    matchLabels:
      istio: ingressgateway
  action: DENY
  rules:
  - to:
    - operation:
        paths: [&#34;/admin&#34;]
        ports: [&#34;80&#34;]
  - to:
    - operation:
        paths: [&#34;/admin&#34;]
        ports: [&#34;443&#34;]
</code></pre></code></pre> <p>The above policy in Istio versions 1.5 and prior will block all access to path <code>/admin</code> for traffic reaching an Istio ingress gateway on container ports <code>80</code> and <code>443</code>. On upgrading to Istio version 1.6 and later, this policy should be updated to the following to have the same effect:</p> <pre><code><pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' >apiVersion: &#34;security.istio.io/v1beta1&#34;
kind: &#34;AuthorizationPolicy&#34;
metadata:
  name: block-admin-access
  namespace: istio-system
spec:
  selector:
    matchLabels:
      istio: ingressgateway
  action: DENY
  rules:
  - to:
    - operation:
        paths: [&#34;/admin&#34;]
        ports: [&#34;8080&#34;]
  - to:
    - operation:
        paths: [&#34;/admin&#34;]
        ports: [&#34;8443&#34;]
</code></pre></code></pre> <h2 id="mitigation">Mitigation</h2> <ul> <li>Update your authorization policies before upgrading to the affected Istio versions. You can use this <a href="./check.sh">script</a> to check if any of the existing authorization policies attached to the default Istio ingress gateway in the <code>istio-system</code> namespace need to be updated.
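<p>As a rough manual check (a sketch, not the linked script; it assumes <code>kubectl</code> access and that gateway ports appear as the quoted strings <code>&#34;80&#34;</code> and <code>&#34;443&#34;</code> in the policy spec), you can search the policies in <code>istio-system</code> for the old port values:</p> <pre><code><pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl get authorizationpolicies.security.istio.io -n istio-system -o yaml | grep -nE '&#34;(80|443)&#34;'
</code></pre></code></pre> <p>Any match points at a rule that may still target the old container ports and should be reviewed.</p>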
If you’re using a custom gateway installation, you can customize the script to run with parameters applicable to your environment.</li> </ul> <p>It is recommended to create a copy of your existing authorization policies, update the copied version to use new gateway workload ports, and apply both existing and updated policies in your cluster, before initiating the upgrade process. You should only delete the old policies after a successful upgrade, to ensure no policy violations occur on upgrade failures or rollbacks.</p> <h2 id="credit">Credit</h2> <p>We&rsquo;d like to thank <a href="https://twitter.com/nrjpoddar">Neeraj Poddar</a> for reporting this issue.</p> <h2 id="reporting-vulnerabilities">Reporting vulnerabilities</h2> <p>We’d like to remind our community to follow the <a href="/v1.9/about/security-vulnerabilities/">vulnerability reporting process</a> to report any bug that can result in a security vulnerability.</p>Wed, 07 Apr 2021 00:00:00 +0000/v1.9/news/security/istio-security-2021-002//v1.9/news/security/istio-security-2021-002/CVEAnnouncing Istio 1.9.2 <p>This release note describes what’s different between Istio 1.9.1 and Istio 1.9.2.</p> <div class="relnote-actions call-to-action"> <a class="entry" href="/v1.9/news/releases/1.9.x/announcing-1.9/upgrade-notes/"> <h5>BEFORE YOU UPGRADE</h5> <p>Things to know and prepare before upgrading.</p> </a> <a class="update-notice entry" data-title='Update Notice' data-downloadhref="https://github.com/istio/istio/releases/tag/1.9.2" data-downloadbuttontext="DOWNLOAD 1.9.2" data-updateadvice='Before you download 1.9.2, you should know that there&#39;s a newer patch release with the latest bug fixes and perf improvements.'
data-updatebutton='LEARN ABOUT ISTIO 1.9.5' data-updatehref="/v1.9/news/releases/1.9.x/announcing-1.9.5/"> <h5>DOWNLOAD</h5> <p>Download and install this release.</p> </a> <a class="entry" href="https://istio.io/docs"> <h5>DOCS</h5> <p>Visit the documentation for this release.</p> </a> <a class="entry" href="https://github.com/istio/istio/compare/1.9.1...1.9.2"> <h5>SOURCE CHANGES</h5> <p>Inspect the full set of source code changes.</p> </a> </div> <h2 id="changes">Changes</h2> <ul> <li><p><strong>Fixed</strong> an issue so transport socket parameters are now taken into account when configured in <code>EnvoyFilter</code> (<a href="https://github.com/istio/istio/issues/28996">Issue #28996</a>)</p></li> <li><p><strong>Fixed</strong> a bug causing runaway logs in <code>istiod</code> after disabling the default ingress controller. (<a href="https://github.com/istio/istio/issues/31336">Issue #31336</a>)</p></li> <li><p><strong>Fixed</strong> an issue so the Kubernetes API server is now considered to be cluster-local by default. This means that any pod attempting to reach <code>kubernetes.default.svc</code> will always be directed to the in-cluster server. (<a href="https://github.com/istio/istio/issues/31340">Issue #31340</a>)</p></li> <li><p><strong>Fixed</strong> an issue with metadata handling for the Azure platform, allowing <code>tagsList</code> serialization of tags on instance metadata. (<a href="https://github.com/istio/istio/issues/31176">Issue #31176</a>)</p></li> <li><p><strong>Fixed</strong> an issue with DNS proxying causing <code>StatefulSets</code> addresses to not be load balanced. (<a href="https://github.com/istio/istio/issues/31064">Issue #31064</a>)</p></li> </ul>Thu, 25 Mar 2021 00:00:00 +0000/v1.9/news/releases/1.9.x/announcing-1.9.2//v1.9/news/releases/1.9.x/announcing-1.9.2/Announcing Istio 1.8.4 <p>This release contains bug fixes to improve stability. 
This release note describes what’s different between Istio 1.8.3 and Istio 1.8.4</p> <div class="relnote-actions call-to-action"> <a class="entry" href="/v1.9/news/releases/1.8.x/announcing-1.8/upgrade-notes/"> <h5>BEFORE YOU UPGRADE</h5> <p>Things to know and prepare before upgrading.</p> </a> <a class="update-notice entry" data-title='Update Notice' data-downloadhref="https://github.com/istio/istio/releases/tag/1.8.4" data-downloadbuttontext="DOWNLOAD 1.8.4" data-updateadvice='Before you download 1.8.4, you should know that there&#39;s a newer patch release with the latest bug fixes and perf improvements.' data-updatebutton='LEARN ABOUT ISTIO 1.8.6' data-updatehref="/v1.9/news/releases/1.8.x/announcing-1.8.6/"> <h5>DOWNLOAD</h5> <p>Download and install this release.</p> </a> <a class="entry" href="https://archive.istio.io/v1.8/docs"> <h5>DOCS</h5> <p>Visit the documentation for this release.</p> </a> <a class="entry" href="https://github.com/istio/istio/compare/1.8.3...1.8.4"> <h5>SOURCE CHANGES</h5> <p>Inspect the full set of source code changes.</p> </a> </div> <h2 id="changes">Changes</h2> <ul> <li><p><strong>Fixed</strong> issue with metadata handling for Azure platform. Support added for <code>tagsList</code> serialization of tags on instance metadata. (<a href="https://github.com/istio/istio/issues/31176">Issue #31176</a>)</p></li> <li><p><strong>Fixed</strong> an issue causing an alternative Envoy binary to be included in the docker image. The binaries are functionally equivalent. (<a href="https://github.com/istio/istio/issues/31038">Issue #31038</a>)</p></li> <li><p><strong>Fixed</strong> an issue causing HTTP headers to be duplicated when using Istio probe rewrite. 
(<a href="https://github.com/istio/istio/issues/28466">Issue #28466</a>)</p></li> </ul>Wed, 10 Mar 2021 00:00:00 +0000/v1.9/news/releases/1.8.x/announcing-1.8.4//v1.9/news/releases/1.8.x/announcing-1.8.4/ISTIO-SECURITY-2021-001 <table> <thead> <tr> <th colspan="2">Disclosure Details</th> </tr> </thead> <tbody> <tr> <td>CVE(s)</td> <td> <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-21378">CVE-2021-21378</a><br> </td> </tr> <tr> <td>CVSS Impact Score</td> <td>8.2 <a href="https://nvd.nist.gov/vuln-metrics/cvss/v3-calculator?vector=AV%3aN%2fAC%3aL%2fPR%3aN%2fUI%3aN%2fS%3aU%2fC%3aH%2fI%3aL%2fA%3aN">AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:L/A:N</a></td> </tr> <tr> <td>Affected Releases</td> <td> 1.9.0<br> </td> </tr> </tbody> </table> <p>Envoy, and subsequently Istio, is vulnerable to a newly discovered vulnerability:</p> <ul> <li><strong><a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-21378">CVE-2021-21378</a></strong>: JWT authentication bypass with unknown issuer token <ul> <li>CVSS Score: 8.2 <a href="https://www.first.org/cvss/calculator/3.0#CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:L/A:N">AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:L/A:N</a></li> </ul></li> </ul> <p>You are subject to the vulnerability if you are using <code>RequestAuthentication</code> alone for JWT validation.</p> <p>You are <strong>not</strong> subject to the vulnerability if you use <strong>both</strong> <code>RequestAuthentication</code> and <code>AuthorizationPolicy</code> for JWT validation.</p> <div> <aside class="callout warning"> <div class="type"> <svg class="large-icon"><use xlink:href="/v1.9/img/icons.svg#callout-warning"/></svg> </div> <div class="content">Please note that <code>RequestAuthentication</code> is used to define a list of issuers that should be accepted. 
It does not reject requests that carry no JWT token.</div> </aside> </div> <p>For Istio, this vulnerability only exists if your service:</p> <ul> <li>Accepts JWT tokens (with <code>RequestAuthentication</code>)</li> <li>Has some service paths without <code>AuthorizationPolicy</code> applied</li> </ul> <p>For service paths where both conditions are met, an incoming request carrying a JWT token whose issuer is not listed in <code>RequestAuthentication</code> will bypass JWT validation instead of being rejected.</p> <h2 id="mitigation">Mitigation</h2> <p>For proper JWT validation, you should always use the <code>AuthorizationPolicy</code> as documented on istio.io for <a href="/v1.9/docs/tasks/security/authentication/authn-policy/#require-a-valid-token">specifying a valid token</a>. To do this you will have to audit all of your <code>RequestAuthentication</code> and subsequent <code>AuthorizationPolicy</code> resources to make sure they align with the documented practice.</p> <h2 id="reporting-vulnerabilities">Reporting vulnerabilities</h2> <p>We’d like to remind our community to follow the <a href="/v1.9/about/security-vulnerabilities/">vulnerability reporting process</a> to report any bug that can result in a security vulnerability.</p>Mon, 01 Mar 2021 00:00:00 +0000/v1.9/news/security/istio-security-2021-001//v1.9/news/security/istio-security-2021-001/CVEAnnouncing Istio 1.9.1 <p>This release fixes the security vulnerability described in <a href="/v1.9/news/security/istio-security-2021-001">our March 1st, 2021 news post</a> and contains bug fixes to improve robustness.</p> <p>This release note describes what’s different between Istio 1.9.0 and Istio 1.9.1.</p> <div> <aside class="callout tip"> <div class="type"><svg class="large-icon"><use xlink:href="/v1.9/img/icons.svg#callout-tip"/></svg></div> <div class="content">Qualification testing for this release completed successfully on March 3rd, 2021.</div> </aside> </div> <div class="relnote-actions call-to-action"> <a class="entry"
href="/v1.9/news/releases/1.9.x/announcing-1.9/upgrade-notes/"> <h5>BEFORE YOU UPGRADE</h5> <p>Things to know and prepare before upgrading.</p> </a> <a class="update-notice entry" data-title='Update Notice' data-downloadhref="https://github.com/istio/istio/releases/tag/1.9.1" data-downloadbuttontext="DOWNLOAD 1.9.1" data-updateadvice='Before you download 1.9.1, you should know that there&#39;s a newer patch release with the latest bug fixes and perf improvements.' data-updatebutton='LEARN ABOUT ISTIO 1.9.5' data-updatehref="/v1.9/news/releases/1.9.x/announcing-1.9.5/"> <h5>DOWNLOAD</h5> <p>Download and install this release.</p> </a> <a class="entry" href="https://istio.io/docs"> <h5>DOCS</h5> <p>Visit the documentation for this release.</p> </a> <a class="entry" href="https://github.com/istio/istio/compare/1.9.0...1.9.1"> <h5>SOURCE CHANGES</h5> <p>Inspect the full set of source code changes.</p> </a> </div> <h2 id="security-update">Security update</h2> <p>A <a href="https://groups.google.com/g/envoy-security-announce/c/Hp16L27L00Q">zero-day security vulnerability</a> was fixed in the version of Envoy shipped with Istio 1.9.0. This vulnerability was fixed on February 26th, 2021. 1.9.0 is the only version of Istio that includes the vulnerable version of Envoy. This vulnerability can only be exploited on misconfigured systems.</p> <h2 id="changes">Changes</h2> <ul> <li><p><strong>Improved</strong> sidecar injection to automatically specify the <code>kubectl.kubernetes.io/default-logs-container</code>. This ensures <code>kubectl logs</code> defaults to reading the application container&rsquo;s logs, rather than requiring explicitly setting the container. (<a href="https://github.com/istio/istio/issues/26764">Issue #26764</a>)</p></li> <li><p><strong>Improved</strong> the sidecar injector to better utilize pod labels to determine if injection is required. 
This is not enabled by default in this release, but can be tested using <code>--set values.sidecarInjectorWebhook.useLegacySelectors=false</code>. (<a href="https://github.com/istio/istio/issues/30013">Issue #30013</a>)</p></li> <li><p><strong>Updated</strong> Prometheus metrics to include <code>source_cluster</code> and <code>destination_cluster</code> labels by default for all scenarios. Previously, this was only enabled for multi-cluster scenarios. (<a href="https://github.com/istio/istio/issues/30036">Issue #30036</a>)</p></li> <li><p><strong>Updated</strong> default access log to include <code>RESPONSE_CODE_DETAILS</code> and <code>CONNECTION_TERMINATION_DETAILS</code> for proxy version &gt;= 1.9. (<a href="https://github.com/istio/istio/issues/27903">Issue #27903</a>)</p></li> <li><p><strong>Updated</strong> Kiali addon to the latest version <code>v1.29</code>. (<a href="https://github.com/istio/istio/issues/30438">Issue #30438</a>)</p></li> <li><p><strong>Added</strong> <code>enableIstioConfigCRDs</code> to <code>base</code> to allow users to specify whether the Istio CRDs will be installed. (<a href="https://github.com/istio/istio/issues/28346">Issue #28346</a>)</p></li> <li><p><strong>Added</strong> support for <code>DestinationRule</code> inheritance for mesh/namespace level rules. Enable feature with the <code>PILOT_ENABLE_DESTINATION_RULE_INHERITANCE</code> environment variable. (<a href="https://github.com/istio/istio/issues/29525">Issue #29525</a>)</p></li> <li><p><strong>Added</strong> support for applications that bind to their pod IP address, rather than wildcard or localhost address, through the <code>Sidecar</code> API. (<a href="https://github.com/istio/istio/issues/28178">Issue #28178</a>)</p></li> <li><p><strong>Added</strong> flag to enable capture of DNS traffic to the <code>istio-iptables</code> script. 
(<a href="https://github.com/istio/istio/issues/29908">Issue #29908</a>)</p></li> <li><p><strong>Added</strong> canonical service tags to Envoy-generated trace spans. (<a href="https://github.com/istio/istio/issues/28801">Issue #28801</a>)</p></li> <li><p><strong>Fixed</strong> an issue causing the timeout header <code>x-envoy-upstream-rq-timeout-ms</code> to not be honored. (<a href="https://github.com/istio/istio/issues/30885">Issue #30885</a>)</p></li> <li><p><strong>Fixed</strong> an issue where the access log service caused the Istio proxy to reject configuration. (<a href="https://github.com/istio/istio/issues/30939">Issue #30939</a>)</p></li> <li><p><strong>Fixed</strong> an issue causing an alternative Envoy binary to be included in the Docker image. The binaries are functionally equivalent. (<a href="https://github.com/istio/istio/issues/31038">Issue #31038</a>)</p></li> <li><p><strong>Fixed</strong> an issue where the TLS v2 version was enforced only on HTTP ports. This option is now applied to all ports.</p></li> <li><p><strong>Fixed</strong> an issue where a Wasm plugin configuration update would cause requests to fail. (<a href="https://github.com/istio/istio/issues/29843">Issue #29843</a>)</p></li> <li><p><strong>Removed</strong> support for reading Istio configuration over the Mesh Configuration Protocol (MCP).
(<a href="https://github.com/istio/istio/issues/28634">Issue #28634</a>)</p></li> </ul>Mon, 01 Mar 2021 00:00:00 +0000/v1.9/news/releases/1.9.x/announcing-1.9.1//v1.9/news/releases/1.9.x/announcing-1.9.1/Support for Istio 1.7 has ended<p>As <a href="/v1.9/news/support/announcing-1.7-eol/">previously announced</a>, support for Istio 1.7 has now officially ended.</p> <p>At this point we will no longer back-port fixes for security issues and critical bugs to 1.7, so we heartily encourage you to upgrade to the latest version of Istio (1.9.5) if you haven&rsquo;t already.</p>Thu, 25 Feb 2021 00:00:00 +0000/v1.9/news/support/announcing-1.7-eol-final//v1.9/news/support/announcing-1.7-eol-final/Announcing Istio 1.7.8 <p>This release contains bug fixes to improve robustness. This release note describes what’s different between Istio 1.7.7 and Istio 1.7.8</p> <div class="relnote-actions call-to-action"> <a class="entry" href="/v1.9/news/releases/1.7.x/announcing-1.7/upgrade-notes/"> <h5>BEFORE YOU UPGRADE</h5> <p>Things to know and prepare before upgrading.</p> </a> <a class="entry" href="https://github.com/istio/istio/releases/tag/1.7.8"> <h5>DOWNLOAD</h5> <p>Download and install this release.</p> </a> <a class="entry" href="https://archive.istio.io/v1.7/docs"> <h5>DOCS</h5> <p>Visit the documentation for this release.</p> </a> <a class="entry" href="https://github.com/istio/istio/compare/1.7.7...1.7.8"> <h5>SOURCE CHANGES</h5> <p>Inspect the full set of source code changes.</p> </a> </div> <h2 id="changes">Changes</h2> <ul> <li><strong>Fixed</strong> an issue where dashboard <code>controlz</code> would not port forward to istiod pod. (<a href="https://github.com/istio/istio/issues/30208">Issue #30208</a>)</li> <li><strong>Fixed</strong> an issue where namespace isn’t resolved correctly in <code>VirtualService</code> delegation’s short destination host. 
(<a href="https://github.com/istio/istio/issues/30387">Issue #30387</a>)</li> <li><strong>Fixed</strong> an issue causing HTTP headers to be duplicated when using Istio probe rewrite. (<a href="https://github.com/istio/istio/issues/28466">Issue #28466</a>)</li> </ul>Thu, 25 Feb 2021 00:00:00 +0000/v1.9/news/releases/1.7.x/announcing-1.7.8//v1.9/news/releases/1.7.x/announcing-1.7.8/Announcing Istio 1.8.3 <p>This release contains bug fixes to improve stability. This release note describes what’s different between Istio 1.8.2 and Istio 1.8.3.</p> <div class="relnote-actions call-to-action"> <a class="entry" href="/v1.9/news/releases/1.8.x/announcing-1.8/upgrade-notes/"> <h5>BEFORE YOU UPGRADE</h5> <p>Things to know and prepare before upgrading.</p> </a> <a class="update-notice entry" data-title='Update Notice' data-downloadhref="https://github.com/istio/istio/releases/tag/1.8.3" data-downloadbuttontext="DOWNLOAD 1.8.3" data-updateadvice='Before you download 1.8.3, you should know that there&#39;s a newer patch release with the latest bug fixes and perf improvements.' data-updatebutton='LEARN ABOUT ISTIO 1.8.6' data-updatehref="/v1.9/news/releases/1.8.x/announcing-1.8.6/"> <h5>DOWNLOAD</h5> <p>Download and install this release.</p> </a> <a class="entry" href="https://archive.istio.io/v1.8/docs"> <h5>DOCS</h5> <p>Visit the documentation for this release.</p> </a> <a class="entry" href="https://github.com/istio/istio/compare/1.8.2...1.8.3"> <h5>SOURCE CHANGES</h5> <p>Inspect the full set of source code changes.</p> </a> </div> <h2 id="security">Security</h2> <p>Istio 1.8.3 does not contain the security fix previously announced on <a href="https://discuss.istio.io/t/upcoming-istio-1-7-8-and-1-8-3-security-release/9593">discuss.istio.io</a>, and there is currently no planned date for that fix. This remains a top priority for the Istio Product Security Working Group, but we cannot release more information at this time.
An announcement regarding the delay can be found <a href="https://discuss.istio.io/t/istio-1-7-8-and-1-8-3-cve-fixes-delayed/9663">here</a>.</p> <h2 id="changes">Changes</h2> <ul> <li><p><strong>Fixed</strong> an issue with aggregate cluster during TLS init in Envoy (<a href="https://github.com/istio/istio/issues/28620">Issue #28620</a>)</p></li> <li><p><strong>Fixed</strong> an issue causing Istio 1.8 to configure Istio 1.7 proxies incorrectly when using the <code>Sidecar</code> <code>ingress</code> configuration. (<a href="https://github.com/istio/istio/issues/30437">Issue #30437</a>)</p></li> <li><p><strong>Fixed</strong> a bug where DNS agent preview produces malformed DNS responses. (<a href="https://github.com/istio/istio/issues/28970">Issue #28970</a>)</p></li> <li><p><strong>Fixed</strong> a bug where the env K8S settings are overridden by the env settings in the helm values. (<a href="https://github.com/istio/istio/issues/30079">Issue #30079</a>)</p></li> <li><p><strong>Fixed</strong> a bug where <code>istioctl dashboard controlz</code> could not port forward to istiod pod. (<a href="https://github.com/istio/istio/issues/30208">Issue #30208</a>)</p></li> <li><p><strong>Fixed</strong> a bug that prevented <code>Ingress</code> resources created with <code>IngressClass</code> from having their status field updated (<a href="https://github.com/istio/istio/issues/25308">Issue #25308</a>)</p></li> <li><p><strong>Fixed</strong> an issue where the <code>TLSv2</code> version was enforced only on HTTP ports. This option is now applied to all ports. (<a href="https://github.com/istio/istio/pull/30590">PR #30590</a>)</p></li> <li><p><strong>Fixed</strong> issues resulting in missing routes when using <code>httpsRedirect</code> in a <code>Gateway</code>. 
(<a href="https://github.com/istio/istio/issues/27315">Issue #27315</a>),(<a href="https://github.com/istio/istio/issues/27157">Issue #27157</a>)</p></li> </ul>Mon, 08 Feb 2021 00:00:00 +0000/v1.9/news/releases/1.8.x/announcing-1.8.3//v1.9/news/releases/1.8.x/announcing-1.8.3/Announcing Istio 1.7.7 <p>This release contains bug fixes to improve robustness. This release note describes what’s different between Istio 1.7.6 and Istio 1.7.7</p> <div class="relnote-actions call-to-action"> <a class="entry" href="/v1.9/news/releases/1.7.x/announcing-1.7/upgrade-notes/"> <h5>BEFORE YOU UPGRADE</h5> <p>Things to know and prepare before upgrading.</p> </a> <a class="update-notice entry" data-title='Update Notice' data-downloadhref="https://github.com/istio/istio/releases/tag/1.7.7" data-downloadbuttontext="DOWNLOAD 1.7.7" data-updateadvice='Before you download 1.7.7, you should know that there&#39;s a newer patch release with the latest bug fixes and perf improvements.' data-updatebutton='LEARN ABOUT ISTIO 1.7.8' data-updatehref="/v1.9/news/releases/1.7.x/announcing-1.7.8/"> <h5>DOWNLOAD</h5> <p>Download and install this release.</p> </a> <a class="entry" href="https://archive.istio.io/v1.7/docs"> <h5>DOCS</h5> <p>Visit the documentation for this release.</p> </a> <a class="entry" href="https://github.com/istio/istio/compare/1.7.6...1.7.7"> <h5>SOURCE CHANGES</h5> <p>Inspect the full set of source code changes.</p> </a> </div> <h2 id="changes">Changes</h2> <ul> <li><strong>Fixed</strong> an issue of using explicitly empty revision flag on install. (<a href="https://github.com/istio/istio/issues/26940">Issue #26940</a>)</li> <li><strong>Fixed</strong> the CA’s certificate signature algorithm to be the default algorithm corresponding to the CA’s signing key type. (<a href="https://github.com/istio/istio/issues/27238">Issue #27238</a>)</li> <li><strong>Fixed</strong> an issue showing unnecessary warnings when downgrading to a lower version of Istio. 
(<a href="https://github.com/istio/istio/issues/29183">Issue #29183</a>)</li> <li><strong>Fixed</strong> an issue causing older control planes relying on the <code>rbac.istio.io</code> CRD group to hang on restart due to the fact that newer control plane installations remove those permissions from istiod. (<a href="https://github.com/istio/istio/issues/29364">Issue #29364</a>)</li> <li><strong>Fixed</strong> a memory leak in WASM <code>NullPlugin</code> <code>onNetworkNewConnection</code>. (<a href="https://github.com/istio/istio/issues/24720">Issue #24720</a>)</li> </ul>Fri, 29 Jan 2021 00:00:00 +0000/v1.9/news/releases/1.7.x/announcing-1.7.7//v1.9/news/releases/1.7.x/announcing-1.7.7/Support for Istio 1.7 ends on February 19th, 2021<p>According to Istio&rsquo;s <a href="/v1.9/about/supported-releases#supported-releases/">support policy</a>, LTS releases like 1.7 are supported for three months after the next LTS release. Since <a href="/v1.9/news/releases/1.8.x/announcing-1.8/">1.8 was released on November 19th</a>, support for 1.7 will end on February 19th, 2021.</p> <p>At that point we will stop back-porting fixes for security issues and critical bugs to 1.7, so we encourage you to upgrade to the latest version of Istio (1.9.5). If you don&rsquo;t do this you may put yourself in the position of having to do a major upgrade on a short timeframe to pick up a critical fix.</p> <p>We care about you and your clusters, so please be kind to yourself and upgrade.</p>Tue, 19 Jan 2021 00:00:00 +0000/v1.9/news/support/announcing-1.7-eol//v1.9/news/support/announcing-1.7-eol/Announcing Istio 1.8.2 <p>This release contains bug fixes to improve robustness. 
This release note describes what’s different between Istio 1.8.1 and Istio 1.8.2.</p> <div class="relnote-actions call-to-action"> <a class="entry" href="/v1.9/news/releases/1.8.x/announcing-1.8/upgrade-notes/"> <h5>BEFORE YOU UPGRADE</h5> <p>Things to know and prepare before upgrading.</p> </a> <a class="update-notice entry" data-title='Update Notice' data-downloadhref="https://github.com/istio/istio/releases/tag/1.8.2" data-downloadbuttontext="DOWNLOAD 1.8.2" data-updateadvice='Before you download 1.8.2, you should know that there&#39;s a newer patch release with the latest bug fixes and perf improvements.' data-updatebutton='LEARN ABOUT ISTIO 1.8.6' data-updatehref="/v1.9/news/releases/1.8.x/announcing-1.8.6/"> <h5>DOWNLOAD</h5> <p>Download and install this release.</p> </a> <a class="entry" href="https://archive.istio.io/v1.8/docs"> <h5>DOCS</h5> <p>Visit the documentation for this release.</p> </a> <a class="entry" href="https://github.com/istio/istio/compare/1.8.1...1.8.2"> <h5>SOURCE CHANGES</h5> <p>Inspect the full set of source code changes.</p> </a> </div> <h2 id="changes">Changes</h2> <ul> <li><p><strong>Improved</strong> <code>WorkloadEntry</code> auto-registration stability. (<a href="https://github.com/istio/istio/pull/29876">PR #29876</a>)</p></li> <li><p><strong>Fixed</strong> the CA&rsquo;s certificate signature algorithm to be the default algorithm corresponding to the CA&rsquo;s signing key type. (<a href="https://github.com/istio/istio/issues/27238">Issue #27238</a>)</p></li> <li><p><strong>Fixed</strong> an issue where newer control plane installations removed permissions for <code>rbac.istio.io</code> from <code>istiod</code>, causing older control planes relying on that CRD group to hang on restart. (<a href="https://github.com/istio/istio/issues/29364">Issue #29364</a>)</p></li> <li><p><strong>Fixed</strong> empty service ports for customized gateway.
(<a href="https://github.com/istio/istio/issues/29608">Issue #29608</a>)</p></li> <li><p><strong>Fixed</strong> an issue causing usage of deprecated filter names in <code>EnvoyFilter</code> to overwrite other <code>EnvoyFilter</code>s. (<a href="https://github.com/istio/istio/issues/29858">Issue #29858</a>) (<a href="https://github.com/istio/istio/issues/29909">Issue #29909</a>)</p></li> <li><p><strong>Fixed</strong> an issue causing <code>EnvoyFilter</code>s that match filter chains to fail to properly apply. (<a href="https://github.com/istio/istio/pull/29486">PR #29486</a>)</p></li> <li><p><strong>Fixed</strong> an issue causing a Secret named <code>&lt;secret&gt;-cacert</code> to have lower precedence than a Secret named <code>&lt;secret&gt;</code> for Gateway Mutual TLS. This behavior was accidentally inverted in Istio 1.8; this change restores the behavior to match Istio 1.7 and earlier. (<a href="https://github.com/istio/istio/issues/29856">Issue #29856</a>)</p></li> <li><p><strong>Fixed</strong> an issue causing only internal ALPN values to be set during external TLS origination. (<a href="https://github.com/istio/istio/issues/24619">Issue #24619</a>)</p></li> <li><p><strong>Fixed</strong> an issue causing client-side application TLS requests sent to a PERMISSIVE mode enabled server to fail. (<a href="https://github.com/istio/istio/issues/29538">Issue #29538</a>)</p></li> <li><p><strong>Fixed</strong> an issue causing the <code>targetPort</code> option to not take effect for <code>WorkloadEntry</code>s with multiple ports. (<a href="https://github.com/istio/istio/pull/29887">PR #29887</a>)</p></li> </ul>Thu, 14 Jan 2021 00:00:00 +0000/v1.9/news/releases/1.8.x/announcing-1.8.2//v1.9/news/releases/1.8.x/announcing-1.8.2/Announcing Istio 1.7.6 <p>This release contains bug fixes to improve robustness.
This release note describes what’s different between Istio 1.7.5 and Istio 1.7.6</p> <div class="relnote-actions call-to-action"> <a class="entry" href="/v1.9/news/releases/1.7.x/announcing-1.7/upgrade-notes/"> <h5>BEFORE YOU UPGRADE</h5> <p>Things to know and prepare before upgrading.</p> </a> <a class="update-notice entry" data-title='Update Notice' data-downloadhref="https://github.com/istio/istio/releases/tag/1.7.6" data-downloadbuttontext="DOWNLOAD 1.7.6" data-updateadvice='Before you download 1.7.6, you should know that there&#39;s a newer patch release with the latest bug fixes and perf improvements.' data-updatebutton='LEARN ABOUT ISTIO 1.7.8' data-updatehref="/v1.9/news/releases/1.7.x/announcing-1.7.8/"> <h5>DOWNLOAD</h5> <p>Download and install this release.</p> </a> <a class="entry" href="https://archive.istio.io/v1.7/docs"> <h5>DOCS</h5> <p>Visit the documentation for this release.</p> </a> <a class="entry" href="https://github.com/istio/istio/compare/1.7.5...1.7.6"> <h5>SOURCE CHANGES</h5> <p>Inspect the full set of source code changes.</p> </a> </div> <h2 id="changes">Changes</h2> <ul> <li><p><strong>Fixed</strong> an issue causing telemetry HPA settings to be overridden by the inline replicas. (<a href="https://github.com/istio/istio/issues/28916">Issue #28916</a>)</p></li> <li><p><strong>Fixed</strong> an issue where a delegate <code>VirtualService</code> change would not trigger an xDS push. (<a href="https://github.com/istio/istio/issues/29123">Issue #29123</a>)</p></li> <li><p><strong>Fixed</strong> an issue that caused very high memory usage with a large number of <code>ServiceEntry</code>s. (<a href="https://github.com/istio/istio/issues/25531">Issue #25531</a>)</p></li> </ul>Thu, 10 Dec 2020 00:00:00 +0000/v1.9/news/releases/1.7.x/announcing-1.7.6//v1.9/news/releases/1.7.x/announcing-1.7.6/Announcing Istio 1.8.1 <p>This release contains bug fixes to improve robustness. 
This release note describes what’s different between Istio 1.8.0 and Istio 1.8.1</p> <div class="relnote-actions call-to-action"> <a class="entry" href="/v1.9/news/releases/1.8.x/announcing-1.8/upgrade-notes/"> <h5>BEFORE YOU UPGRADE</h5> <p>Things to know and prepare before upgrading.</p> </a> <a class="update-notice entry" data-title='Update Notice' data-downloadhref="https://github.com/istio/istio/releases/tag/1.8.1" data-downloadbuttontext="DOWNLOAD 1.8.1" data-updateadvice='Before you download 1.8.1, you should know that there&#39;s a newer patch release with the latest bug fixes and perf improvements.' data-updatebutton='LEARN ABOUT ISTIO 1.8.6' data-updatehref="/v1.9/news/releases/1.8.x/announcing-1.8.6/"> <h5>DOWNLOAD</h5> <p>Download and install this release.</p> </a> <a class="entry" href="https://archive.istio.io/v1.8/docs"> <h5>DOCS</h5> <p>Visit the documentation for this release.</p> </a> <a class="entry" href="https://github.com/istio/istio/compare/1.8.0...1.8.1"> <h5>SOURCE CHANGES</h5> <p>Inspect the full set of source code changes.</p> </a> </div> <h2 id="changes">Changes</h2> <ul> <li><p><strong>Fixed</strong> an issue showing unnecessary warnings when downgrading to a lower version of Istio. (<a href="https://github.com/istio/istio/issues/29183">Issue #29183</a>)</p></li> <li><p><strong>Fixed</strong> an issue where a delegate <code>VirtualService</code> change would not trigger an xDS push. (<a href="https://github.com/istio/istio/issues/29123">Issue #29123</a>)</p></li> <li><p><strong>Fixed</strong> a regression in Istio 1.8.0 causing workloads with multiple Services with overlapping ports to send traffic to the wrong port. 
(<a href="https://github.com/istio/istio/issues/29199">Issue #29199</a>)</p></li> <li><p><strong>Fixed</strong> a bug causing Istio to attempt to validate resource types it no longer supports.</p></li> </ul>Tue, 08 Dec 2020 00:00:00 +0000/v1.9/news/releases/1.8.x/announcing-1.8.1//v1.9/news/releases/1.8.x/announcing-1.8.1/Support for Istio 1.6 has ended<p>As <a href="/v1.9/news/support/announcing-1.6-eol/">previously announced</a>, support for Istio 1.6 has now officially ended.</p> <p>At this point we will no longer back-port fixes for security issues and critical bugs to 1.6, so we heartily encourage you to upgrade to the latest version of Istio (1.9.5) if you haven&rsquo;t already.</p>Mon, 23 Nov 2020 00:00:00 +0000/v1.9/news/support/announcing-1.6-eol-final//v1.9/news/support/announcing-1.6-eol-final/Announcing Istio 1.6.14 <p>This release contains bug fixes to improve robustness. This release note describes what’s different between Istio 1.6.13 and Istio 1.6.14</p> <div class="relnote-actions call-to-action"> <a class="entry" href="/v1.9/news/releases/1.6.x/announcing-1.6/upgrade-notes/"> <h5>BEFORE YOU UPGRADE</h5> <p>Things to know and prepare before upgrading.</p> </a> <a class="entry" href="https://github.com/istio/istio/releases/tag/1.6.14"> <h5>DOWNLOAD</h5> <p>Download and install this release.</p> </a> <a class="entry" href="https://archive.istio.io/v1.6/docs"> <h5>DOCS</h5> <p>Visit the documentation for this release.</p> </a> <a class="entry" href="https://github.com/istio/istio/compare/1.6.13...1.6.14"> <h5>SOURCE CHANGES</h5> <p>Inspect the full set of source code changes.</p> </a> </div> <h2 id="changes">Changes</h2> <ul> <li><strong>Fixed</strong> HPA settings for telemetry being overridden by the inline replicas. (<a href="https://github.com/istio/istio/issues/28916">Issue #28916</a>)</li> <li><strong>Fixed</strong> an issue that caused very high memory usage with a large number of <code>ServiceEntries</code>. 
(<a href="https://github.com/istio/istio/issues/25531">Issue #25531</a>)</li> <li><strong>Fixed</strong> an issue that caused the <code>user agent</code> header to be missing in the Stackdriver access log. (<a href="https://github.com/istio/proxy/pull/3083">PR #3083</a>)</li> </ul>Mon, 23 Nov 2020 00:00:00 +0000/v1.9/news/releases/1.6.x/announcing-1.6.14//v1.9/news/releases/1.6.x/announcing-1.6.14/ISTIO-SECURITY-2020-011 <table> <thead> <tr> <th colspan="2">Disclosure Details</th> </tr> </thead> <tbody> <tr> <td>CVE(s)</td> <td> <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=N%2fA">N/A</a><br> </td> </tr> <tr> <td>CVSS Impact Score</td> <td>N/A <a href="https://nvd.nist.gov/vuln-metrics/cvss/v3-calculator?vector="></a></td> </tr> <tr> <td>Affected Releases</td> <td> 1.8.0<br> </td> </tr> </tbody> </table> <p>Envoy, and subsequently Istio, is vulnerable to a newly discovered vulnerability:</p> <ul> <li><a href="https://groups.google.com/g/envoy-security-announce/c/aqtBt5VUor0">Incorrect proxy protocol downstream address for non-HTTP connections</a>: Envoy incorrectly restores the proxy protocol downstream address for non-HTTP connections. Instead of restoring the address supplied by the proxy protocol filter, Envoy restores the address of the directly connected peer and passes it to subsequent filters. This will affect logging (<code>%DOWNSTREAM_REMOTE_ADDRESS%</code>) and authorization policy (<code>remoteIpBlocks</code> and <code>remote_ip</code>) for non-HTTP network connections because they will use the incorrect proxy protocol downstream address.</li> </ul> <p>This issue does not affect HTTP connections. The address from <code>X-Forwarded-For</code> is also not affected.</p> <p>Istio does not support proxy protocol, and the only way to enable it is to use a custom <code>EnvoyFilter</code> resource. 
It is not tested in Istio and should be used at your own risk.</p> <h2 id="mitigation">Mitigation</h2> <ul> <li>For Istio 1.8.0 deployments: do not use the proxy protocol for non-HTTP connections.</li> </ul> <h2 id="reporting-vulnerabilities">Reporting vulnerabilities</h2> <p>We’d like to remind our community to follow the <a href="/v1.9/about/security-vulnerabilities/">vulnerability reporting process</a> to report any bug that can result in a security vulnerability.Sat, 21 Nov 2020 00:00:00 +0000/v1.9/news/security/istio-security-2020-011//v1.9/news/security/istio-security-2020-011/CVEAnnouncing Istio 1.7.5 <p>This release contains bug fixes to improve robustness. This release note describes what’s different between Istio 1.7.4 and Istio 1.7.5</p> <div class="relnote-actions call-to-action"> <a class="entry" href="/v1.9/news/releases/1.7.x/announcing-1.7/upgrade-notes/"> <h5>BEFORE YOU UPGRADE</h5> <p>Things to know and prepare before upgrading.</p> </a> <a class="update-notice entry" data-title='Update Notice' data-downloadhref="https://github.com/istio/istio/releases/tag/1.7.5" data-downloadbuttontext="DOWNLOAD 1.7.5" data-updateadvice='Before you download 1.7.5, you should know that there&#39;s a newer patch release with the latest bug fixes and perf improvements.' data-updatebutton='LEARN ABOUT ISTIO 1.7.8' data-updatehref="/v1.9/news/releases/1.7.x/announcing-1.7.8/"> <h5>DOWNLOAD</h5> <p>Download and install this release.</p> </a> <a class="entry" href="https://archive.istio.io/v1.7/docs"> <h5>DOCS</h5> <p>Visit the documentation for this release.</p> </a> <a class="entry" href="https://github.com/istio/istio/compare/1.7.4...1.7.5"> <h5>SOURCE CHANGES</h5> <p>Inspect the full set of source code changes.</p> </a> </div> <h2 id="changes">Changes</h2> <ul> <li><p><strong>Fixed</strong> pilot agent app probe connection leak. 
(<a href="https://github.com/istio/istio/issues/27726">Issue #27726</a>)</p></li> <li><p><strong>Fixed</strong> how <code>install-cni</code> applies <code>istio-cni</code> plugin configuration. Previously, new configurations would be appended to the list. This has been changed to remove existing <code>istio-cni</code> plugins from the CNI config before inserting new plugins. (<a href="https://github.com/istio/istio/issues/27771">Issue #27771</a>)</p></li> <li><p><strong>Fixed</strong> inbound listener binding when a node has multiple IP addresses (e.g., a VM in the mesh expansion scenario). Istio Proxy will now bind inbound listeners to the first applicable address in the list rather than to the last one. (<a href="https://github.com/istio/istio/issues/28269">Issue #28269</a>)</p></li> <li><p><strong>Fixed</strong> Istio to not run the gateway secret fetcher when the proxy is configured with <code>FILE_MOUNTED_CERTS</code>.</p></li> <li><p><strong>Fixed</strong> multicluster <code>EnvoyFilter</code> to have valid configuration following the underlying changes in Envoy’s API. (<a href="https://github.com/istio/istio/issues/27909">Issue #27909</a>)</p></li> <li><p><strong>Fixed</strong> an issue causing a short spike in errors during in-place upgrades from Istio 1.6 to 1.7. Previously, the xDS version would be upgraded automatically from xDS v2 to xDS v3. This caused downtime with upgrades from Istio 1.6 to Istio 1.7. This has been fixed so that these upgrades no longer cause downtime. Note that, as a trade-off, upgrading from Istio 1.7.x to Istio 1.7.5 still causes downtime in any existing 1.6 proxies; if you are in this scenario, you may set the <code>PILOT_ENABLE_TLS_XDS_DYNAMIC_TYPES</code> environment variable to false in Istiod to retain the previous behavior. 
(<a href="https://github.com/istio/istio/issues/28120">Issue #28120</a>)</p></li> <li><p><strong>Fixed</strong> missing listeners on a VM when the VM sidecar is connected to <code>istiod</code> but a <code>WorkloadEntry</code> is registered later. (<a href="https://github.com/istio/istio/issues/28743">Issue #28743</a>)</p></li> </ul> <h3 id="upgrade-notice">Upgrade Notice</h3> <p>When upgrading your Istio data plane from 1.7.x (where x &lt; 5) to 1.7.5 or newer, you may observe connectivity issues between your gateway and your sidecars or among your sidecars with 503 errors in the log. This happens when 1.7.5+ proxies send HTTP 1xx or 204 response codes with headers that 1.7.x proxies reject. To fix this, upgrade all your proxies (gateways and sidecars) to 1.7.5+ as soon as possible. (<a href="https://github.com/istio/istio/issues/29427">Issue 29427</a>, <a href="https://github.com/istio/istio/pull/28450">More information</a>)</p>Thu, 19 Nov 2020 00:00:00 +0000/v1.9/news/releases/1.7.x/announcing-1.7.5//v1.9/news/releases/1.7.x/announcing-1.7.5/Announcing Istio 1.7.4 <p>This release contains bug fixes to improve robustness. This release note describes what’s different between Istio 1.7.3 and Istio 1.7.4</p> <div class="relnote-actions call-to-action"> <a class="entry" href="/v1.9/news/releases/1.7.x/announcing-1.7/upgrade-notes/"> <h5>BEFORE YOU UPGRADE</h5> <p>Things to know and prepare before upgrading.</p> </a> <a class="update-notice entry" data-title='Update Notice' data-downloadhref="https://github.com/istio/istio/releases/tag/1.7.4" data-downloadbuttontext="DOWNLOAD 1.7.4" data-updateadvice='Before you download 1.7.4, you should know that there&#39;s a newer patch release with the latest bug fixes and perf improvements.' 
data-updatebutton='LEARN ABOUT ISTIO 1.7.8' data-updatehref="/v1.9/news/releases/1.7.x/announcing-1.7.8/"> <h5>DOWNLOAD</h5> <p>Download and install this release.</p> </a> <a class="entry" href="https://archive.istio.io/v1.7/docs"> <h5>DOCS</h5> <p>Visit the documentation for this release.</p> </a> <a class="entry" href="https://github.com/istio/istio/compare/1.7.3...1.7.4"> <h5>SOURCE CHANGES</h5> <p>Inspect the full set of source code changes.</p> </a> </div> <h2 id="changes">Changes</h2> <ul> <li><p><strong>Improved</strong> TLS configuration on sidecar server-side inbound paths to enforce TLS v1.2 as the minimum version along with recommended cipher suites. This is disabled by default and can be enabled by setting the environment variable <code>PILOT_SIDECAR_ENABLE_INBOUND_TLS_V2</code> to true.</p></li> <li><p><strong>Added</strong> ability to configure domain suffix for multicluster installation. (<a href="https://github.com/istio/istio/issues/27300">Issue #27300</a>)</p></li> <li><p><strong>Added</strong> the ability for <code>istioctl proxy-status</code> and other commands to attempt to contact the control plane using both port-forwarding and exec before giving up, restoring functionality on clusters that do not offer port-forwarding to the control plane. (<a href="https://github.com/istio/istio/issues/27421">Issue #27421</a>)</p></li> <li><p><strong>Added</strong> support for <code>securityContext</code> in the Kubernetes settings for the operator API. (<a href="https://github.com/istio/istio/issues/26275">Issue #26275</a>)</p></li> <li><p><strong>Added</strong> support for revision-based istiod to <code>istioctl version</code>. (<a href="https://github.com/istio/istio/issues/27756">Issue #27756</a>)</p></li> <li><p><strong>Fixed</strong> an issue where deleting the remote-secret for a multicluster installation removed remote endpoints.</p></li> <li><p><strong>Fixed</strong> an issue where Istiod’s <code>cacert.pem</code> was under the <code>testdata</code> directory. 
(<a href="https://github.com/istio/istio/issues/27574">Issue #27574</a>)</p></li> <li><p><strong>Fixed</strong> an issue where the <code>PodDisruptionBudget</code> of <code>istio-egressgateway</code> did not match any pods. (<a href="https://github.com/istio/istio/issues/27730">Issue #27730</a>)</p></li> <li><p><strong>Fixed</strong> an issue preventing calls to wildcard (such as <code>*.example.com</code>) domains when a port is set in the <code>Host</code> header.</p></li> <li><p><strong>Fixed</strong> an issue periodically causing a deadlock in Pilot’s <code>syncz</code> debug endpoint.</p></li> <li><p><strong>Removed</strong> deprecated <code>outboundTrafficPolicy</code> from global values. (<a href="https://github.com/istio/istio/issues/27494">Issue #27494</a>)</p></li> </ul>Tue, 27 Oct 2020 00:00:00 +0000/v1.9/news/releases/1.7.x/announcing-1.7.4//v1.9/news/releases/1.7.x/announcing-1.7.4/Announcing Istio 1.6.13 <p>This release contains bug fixes to improve robustness. This release note describes what’s different between Istio 1.6.12 and Istio 1.6.13</p> <div class="relnote-actions call-to-action"> <a class="entry" href="/v1.9/news/releases/1.6.x/announcing-1.6/upgrade-notes/"> <h5>BEFORE YOU UPGRADE</h5> <p>Things to know and prepare before upgrading.</p> </a> <a class="update-notice entry" data-title='Update Notice' data-downloadhref="https://github.com/istio/istio/releases/tag/1.6.13" data-downloadbuttontext="DOWNLOAD 1.6.13" data-updateadvice='Before you download 1.6.13, you should know that there&#39;s a newer patch release with the latest bug fixes and perf improvements.' 
data-updatebutton='LEARN ABOUT ISTIO 1.6.14' data-updatehref="/v1.9/news/releases/1.6.x/announcing-1.6.14/"> <h5>DOWNLOAD</h5> <p>Download and install this release.</p> </a> <a class="entry" href="https://archive.istio.io/v1.6/docs"> <h5>DOCS</h5> <p>Visit the documentation for this release.</p> </a> <a class="entry" href="https://github.com/istio/istio/compare/1.6.12...1.6.13"> <h5>SOURCE CHANGES</h5> <p>Inspect the full set of source code changes.</p> </a> </div> <h2 id="changes">Changes</h2> <ul> <li><p><strong>Fixed</strong> an issue where Istiod&rsquo;s <code>cacert.pem</code> was under the <code>testdata</code> directory (<a href="https://github.com/istio/istio/issues/27574">Issue #27574</a>)</p></li> <li><p><strong>Fixed</strong> Pilot agent app probe connection leak. (<a href="https://github.com/istio/istio/issues/27726">Issue #27726</a>)</p></li> </ul>Tue, 27 Oct 2020 00:00:00 +0000/v1.9/news/releases/1.6.x/announcing-1.6.13//v1.9/news/releases/1.6.x/announcing-1.6.13/Support for Istio 1.6 ends on November 21st, 2020<p>According to Istio&rsquo;s <a href="/v1.9/about/supported-releases#supported-releases/">support policy</a>, LTS releases like 1.6 are supported for three months after the next LTS release. Since <a href="/v1.9/news/releases/1.7.x/announcing-1.7/">1.7 was released on August 21st</a>, support for 1.6 will end on November 21st, 2020.</p> <p>At that point we will stop back-porting fixes for security issues and critical bugs to 1.6, so we encourage you to upgrade to the latest version of Istio (1.9.5). If you don&rsquo;t do this you may put yourself in the position of having to do a major upgrade on a short timeframe to pick up a critical fix.</p> <p>We care about you and your clusters, so please be kind to yourself and upgrade.</p>Tue, 20 Oct 2020 00:00:00 +0000/v1.9/news/support/announcing-1.6-eol//v1.9/news/support/announcing-1.6-eol/Announcing Istio 1.6.12 <p>This release contains bug fixes to improve robustness. 
This release note describes what’s different between Istio 1.6.11 and Istio 1.6.12</p> <div class="relnote-actions call-to-action"> <a class="entry" href="/v1.9/news/releases/1.6.x/announcing-1.6/upgrade-notes/"> <h5>BEFORE YOU UPGRADE</h5> <p>Things to know and prepare before upgrading.</p> </a> <a class="update-notice entry" data-title='Update Notice' data-downloadhref="https://github.com/istio/istio/releases/tag/1.6.12" data-downloadbuttontext="DOWNLOAD 1.6.12" data-updateadvice='Before you download 1.6.12, you should know that there&#39;s a newer patch release with the latest bug fixes and perf improvements.' data-updatebutton='LEARN ABOUT ISTIO 1.6.14' data-updatehref="/v1.9/news/releases/1.6.x/announcing-1.6.14/"> <h5>DOWNLOAD</h5> <p>Download and install this release.</p> </a> <a class="entry" href="https://archive.istio.io/v1.6/docs"> <h5>DOCS</h5> <p>Visit the documentation for this release.</p> </a> <a class="entry" href="https://github.com/istio/istio/compare/1.6.11...1.6.12"> <h5>SOURCE CHANGES</h5> <p>Inspect the full set of source code changes.</p> </a> </div> <h2 id="changes">Changes</h2> <ul> <li><p><strong>Added</strong> ability to configure domain suffix for multicluster installation (<a href="https://github.com/istio/istio/issues/27300">Issue #27300</a>)</p></li> <li><p><strong>Added</strong> support for <code>securityContext</code> in the Kubernetes settings for the operator API. (<a href="https://github.com/istio/istio/issues/26275">Issue #26275</a>)</p></li> <li><p><strong>Fixed</strong> an issue preventing calls to wildcard (such as <code>*.example.com</code>) domains when a port is set in the <code>Host</code> header. 
(<a href="https://github.com/istio/istio/issues/25350">Issue #25350</a>)</p></li> </ul>Tue, 06 Oct 2020 00:00:00 +0000/v1.9/news/releases/1.6.x/announcing-1.6.12//v1.9/news/releases/1.6.x/announcing-1.6.12/ISTIO-SECURITY-2020-010 <table> <thead> <tr> <th colspan="2">Disclosure Details</th> </tr> </thead> <tbody> <tr> <td>CVE(s)</td> <td> <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-25017">CVE-2020-25017</a><br> </td> </tr> <tr> <td>CVSS Impact Score</td> <td>8.3 <a href="https://nvd.nist.gov/vuln-metrics/cvss/v3-calculator?vector=AV%3aN%2fAC%3aL%2fPR%3aN%2fUI%3aN%2fS%3aC%2fC%3aL%2fI%3aL%2fA%3aL">AV:N/AC:L/PR:N/UI:N/S:C/C:L/I:L/A:L</a></td> </tr> <tr> <td>Affected Releases</td> <td> 1.6 to 1.6.10<br> 1.7 to 1.7.2<br> </td> </tr> </tbody> </table> <p>Envoy, and subsequently Istio, is vulnerable to a newly discovered vulnerability:</p> <ul> <li><strong><a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-25017">CVE-2020-25017</a></strong>: In some cases, Envoy only considers the first value when multiple headers are present. Also, Envoy does not replace all existing occurrences of a non-inline header. 
<ul> <li><strong>CVSS Score</strong>: 8.3 <a href="https://nvd.nist.gov/vuln-metrics/cvss/v3-calculator?vector=AV:N/AC:L/PR:N/UI:N/S:C/C:L/I:L/A:L&amp;version=3.1">AV:N/AC:L/PR:N/UI:N/S:C/C:L/I:L/A:L</a></li> </ul></li> </ul> <h2 id="mitigation">Mitigation</h2> <ul> <li>For Istio 1.6.x deployments: update to <a href="/v1.9/news/releases/1.6.x/announcing-1.6.11">Istio 1.6.11</a> or later.</li> <li>For Istio 1.7.x deployments: update to <a href="/v1.9/news/releases/1.7.x/announcing-1.7.3">Istio 1.7.3</a> or later.</li> </ul> <h2 id="reporting-vulnerabilities">Reporting vulnerabilities</h2> <p>We’d like to remind our community to follow the <a href="/v1.9/about/security-vulnerabilities/">vulnerability reporting process</a> to report any bug that can result in a security vulnerability.Tue, 29 Sep 2020 00:00:00 +0000/v1.9/news/security/istio-security-2020-010//v1.9/news/security/istio-security-2020-010/CVEAnnouncing Istio 1.7.3 <p>This release fixes the security vulnerability described in <a href="/v1.9/news/security/istio-security-2020-010">our September 29 post</a>.</p> <div class="relnote-actions call-to-action"> <a class="entry" href="/v1.9/news/releases/1.7.x/announcing-1.7/upgrade-notes/"> <h5>BEFORE YOU UPGRADE</h5> <p>Things to know and prepare before upgrading.</p> </a> <a class="update-notice entry" data-title='Update Notice' data-downloadhref="https://github.com/istio/istio/releases/tag/1.7.3" data-downloadbuttontext="DOWNLOAD 1.7.3" data-updateadvice='Before you download 1.7.3, you should know that there&#39;s a newer patch release with the latest bug fixes and perf improvements.' 
data-updatebutton='LEARN ABOUT ISTIO 1.7.8' data-updatehref="/v1.9/news/releases/1.7.x/announcing-1.7.8/"> <h5>DOWNLOAD</h5> <p>Download and install this release.</p> </a> <a class="entry" href="https://archive.istio.io/v1.7/docs"> <h5>DOCS</h5> <p>Visit the documentation for this release.</p> </a> <a class="entry" href="https://github.com/istio/istio/compare/1.7.2...1.7.3"> <h5>SOURCE CHANGES</h5> <p>Inspect the full set of source code changes.</p> </a> </div> <h2 id="security-update">Security update</h2> <ul> <li><strong><a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-25017">CVE-2020-25017</a></strong>: In some cases, Envoy only considers the first value when multiple headers are present. Also, Envoy does not replace all existing occurrences of a non-inline header. <ul> <li><strong>CVSS Score</strong>: 8.3 <a href="https://nvd.nist.gov/vuln-metrics/cvss/v3-calculator?vector=AV:N/AC:L/PR:N/UI:N/S:C/C:L/I:L/A:L&amp;version=3.1">AV:N/AC:L/PR:N/UI:N/S:C/C:L/I:L/A:L</a></li> </ul></li> </ul>Tue, 29 Sep 2020 00:00:00 +0000/v1.9/news/releases/1.7.x/announcing-1.7.3//v1.9/news/releases/1.7.x/announcing-1.7.3/Announcing Istio 1.6.11 <p>This release fixes the security vulnerability described in <a href="/v1.9/news/security/istio-security-2020-010">our September 29 post</a>.</p> <div class="relnote-actions call-to-action"> <a class="entry" href="/v1.9/news/releases/1.6.x/announcing-1.6/upgrade-notes/"> <h5>BEFORE YOU UPGRADE</h5> <p>Things to know and prepare before upgrading.</p> </a> <a class="update-notice entry" data-title='Update Notice' data-downloadhref="https://github.com/istio/istio/releases/tag/1.6.11" data-downloadbuttontext="DOWNLOAD 1.6.11" data-updateadvice='Before you download 1.6.11, you should know that there&#39;s a newer patch release with the latest bug fixes and perf improvements.' 
data-updatebutton='LEARN ABOUT ISTIO 1.6.14' data-updatehref="/v1.9/news/releases/1.6.x/announcing-1.6.14/"> <h5>DOWNLOAD</h5> <p>Download and install this release.</p> </a> <a class="entry" href="https://archive.istio.io/v1.6/docs"> <h5>DOCS</h5> <p>Visit the documentation for this release.</p> </a> <a class="entry" href="https://github.com/istio/istio/compare/1.6.10...1.6.11"> <h5>SOURCE CHANGES</h5> <p>Inspect the full set of source code changes.</p> </a> </div> <h2 id="security-update">Security update</h2> <ul> <li><strong><a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-25017">CVE-2020-25017</a></strong>: In some cases, Envoy only considers the first value when multiple headers are present. Also, Envoy does not replace all existing occurrences of a non-inline header. <ul> <li><strong>CVSS Score</strong>: 8.3 <a href="https://nvd.nist.gov/vuln-metrics/cvss/v3-calculator?vector=AV:N/AC:L/PR:N/UI:N/S:C/C:L/I:L/A:L&amp;version=3.1">AV:N/AC:L/PR:N/UI:N/S:C/C:L/I:L/A:L</a></li> </ul></li> </ul>Tue, 29 Sep 2020 00:00:00 +0000/v1.9/news/releases/1.6.x/announcing-1.6.11//v1.9/news/releases/1.6.x/announcing-1.6.11/Announcing Istio 1.6.10 <p>This release contains bug fixes to improve robustness. This release note describes what’s different between Istio 1.6.9 and Istio 1.6.10.</p> <div class="relnote-actions call-to-action"> <a class="entry" href="/v1.9/news/releases/1.6.x/announcing-1.6/upgrade-notes/"> <h5>BEFORE YOU UPGRADE</h5> <p>Things to know and prepare before upgrading.</p> </a> <a class="update-notice entry" data-title='Update Notice' data-downloadhref="https://github.com/istio/istio/releases/tag/1.6.10" data-downloadbuttontext="DOWNLOAD 1.6.10" data-updateadvice='Before you download 1.6.10, you should know that there&#39;s a newer patch release with the latest bug fixes and perf improvements.' 
data-updatebutton='LEARN ABOUT ISTIO 1.6.14' data-updatehref="/v1.9/news/releases/1.6.x/announcing-1.6.14/"> <h5>DOWNLOAD</h5> <p>Download and install this release.</p> </a> <a class="entry" href="https://archive.istio.io/v1.6/docs"> <h5>DOCS</h5> <p>Visit the documentation for this release.</p> </a> <a class="entry" href="https://github.com/istio/istio/compare/1.6.9...1.6.10"> <h5>SOURCE CHANGES</h5> <p>Inspect the full set of source code changes.</p> </a> </div> <h2 id="changes">Changes</h2> <ul> <li><strong>Added</strong> quotes in log sampling config and Stackdriver test</li> <li><strong>Fixed</strong> gateways missing endpoint instances of headless services (<a href="https://github.com/istio/istio/issues/27041">Istio #27041</a>)</li> <li><strong>Fixed</strong> locality load balancer settings were applied to inbound clusters unnecessarily (<a href="https://github.com/istio/istio/issues/27293">Istio #27293</a>)</li> <li><strong>Fixed</strong> unbounded cardinality of Istio metrics for <code>CronJob</code> workloads (<a href="https://github.com/istio/istio/issues/24058">Istio #24058</a>)</li> <li><strong>Improved</strong> envoy to cache readiness value</li> <li><strong>Removed</strong> deprecated help message for <code>istioctl manifest migrate</code> (<a href="https://github.com/istio/istio/issues/26230">Istio #26230</a>)</li> </ul>Tue, 22 Sep 2020 00:00:00 +0000/v1.9/news/releases/1.6.x/announcing-1.6.10//v1.9/news/releases/1.6.x/announcing-1.6.10/Announcing Istio 1.7.2 <p>This release contains bug fixes to improve robustness. 
This release note describes what’s different between Istio 1.7.1 and Istio 1.7.2</p> <div class="relnote-actions call-to-action"> <a class="entry" href="/v1.9/news/releases/1.7.x/announcing-1.7/upgrade-notes/"> <h5>BEFORE YOU UPGRADE</h5> <p>Things to know and prepare before upgrading.</p> </a> <a class="update-notice entry" data-title='Update Notice' data-downloadhref="https://github.com/istio/istio/releases/tag/1.7.2" data-downloadbuttontext="DOWNLOAD 1.7.2" data-updateadvice='Before you download 1.7.2, you should know that there&#39;s a newer patch release with the latest bug fixes and perf improvements.' data-updatebutton='LEARN ABOUT ISTIO 1.7.8' data-updatehref="/v1.9/news/releases/1.7.x/announcing-1.7.8/"> <h5>DOWNLOAD</h5> <p>Download and install this release.</p> </a> <a class="entry" href="https://archive.istio.io/v1.7/docs"> <h5>DOCS</h5> <p>Visit the documentation for this release.</p> </a> <a class="entry" href="https://github.com/istio/istio/compare/1.7.1...1.7.2"> <h5>SOURCE CHANGES</h5> <p>Inspect the full set of source code changes.</p> </a> </div> <h2 id="changes">Changes</h2> <ul> <li><p><strong>Fixed</strong> locality load balancer settings being applied to inbound clusters unnecessarily. (<a href="https://github.com/istio/istio/issues/27293">Issue #27293</a>)</p></li> <li><p><strong>Fixed</strong> unbounded cardinality of Istio metrics for <code>CronJob</code> workloads. (<a href="https://github.com/istio/istio/issues/24058">Issue #24058</a>)</p></li> <li><p><strong>Fixed</strong> the <code>ISTIO_META_REQUESTED_NETWORK_VIEW</code> environment variable so that setting it for a proxy filters out endpoints that aren’t part of its comma-separated list of networks. This should be set to the local-network on the ingress-gateway used for cross-network traffic to prevent odd load balancing behavior. 
(<a href="https://github.com/istio/istio/issues/26293">Issue #26293</a>)</p></li> <li><p><strong>Fixed</strong> issues with <code>WorkloadEntry</code> when the Service or <code>WorkloadEntry</code> is updated after creation. (<a href="https://github.com/istio/istio/issues/27183">Issue #27183</a>),(<a href="https://github.com/istio/istio/issues/27151">Issue #27151</a>),(<a href="https://github.com/istio/istio/issues/27185">Issue #27185</a>)</p></li> </ul>Fri, 18 Sep 2020 00:00:00 +0000/v1.9/news/releases/1.7.x/announcing-1.7.2//v1.9/news/releases/1.7.x/announcing-1.7.2/Announcing Istio 1.7.1 <p>This release contains bug fixes to improve robustness. This release note describes what’s different between Istio 1.7.0 and Istio 1.7.1</p> <div class="relnote-actions call-to-action"> <a class="entry" href="/v1.9/news/releases/1.7.x/announcing-1.7/upgrade-notes/"> <h5>BEFORE YOU UPGRADE</h5> <p>Things to know and prepare before upgrading.</p> </a> <a class="update-notice entry" data-title='Update Notice' data-downloadhref="https://github.com/istio/istio/releases/tag/1.7.1" data-downloadbuttontext="DOWNLOAD 1.7.1" data-updateadvice='Before you download 1.7.1, you should know that there&#39;s a newer patch release with the latest bug fixes and perf improvements.' 
data-updatebutton='LEARN ABOUT ISTIO 1.7.8' data-updatehref="/v1.9/news/releases/1.7.x/announcing-1.7.8/"> <h5>DOWNLOAD</h5> <p>Download and install this release.</p> </a> <a class="entry" href="https://archive.istio.io/v1.7/docs"> <h5>DOCS</h5> <p>Visit the documentation for this release.</p> </a> <a class="entry" href="https://github.com/istio/istio/compare/1.7.0...1.7.1"> <h5>SOURCE CHANGES</h5> <p>Inspect the full set of source code changes.</p> </a> </div> <h2 id="changes">Changes</h2> <ul> <li><p><strong>Added</strong> Envoy <a href="https://github.com/istio/istio/wiki/Enabling-Envoy-Authorization-Service-and-gRPC-Access-Log-Service-With-Mixer">ext <code>authz</code> and gRPC access log API support</a> in Mixer, which keeps Mixer-based configuration and out-of-process adapters working after upgrading to future versions of Istio. (<a href="https://github.com/istio/istio/issues/23580">Issue #23580</a>)</p></li> <li><p><strong>Fixed</strong> the <code>istioctl x authz check</code> command to work properly with the v1beta1 AuthorizationPolicy. (<a href="https://github.com/istio/istio/pull/26625">PR #26625</a>)</p></li> <li><p><strong>Fixed</strong> unreachable endpoints for non-injected workloads across networks by removing them. (<a href="https://github.com/istio/istio/issues/26517">Issue #26517</a>)</p></li> <li><p><strong>Fixed</strong> an issue where enabling the hold-application-until-proxy-starts feature flag broke the application probe rewrite logic. (<a href="https://github.com/istio/istio/issues/26873">Issue #26873</a>)</p></li> <li><p><strong>Fixed</strong> an issue where deleting the remote-secret for a multicluster installation removed remote endpoints. 
(<a href="https://github.com/istio/istio/issues/27187">Issue #27187</a>)</p></li> <li><p><strong>Fixed</strong> missing endpoints when Service is populated later than Endpoints.</p></li> <li><p><strong>Fixed</strong> an issue causing headless Service updates to be missed (<a href="https://github.com/istio/istio/issues/26617">Issue #26617</a>)</p></li> <li><p><strong>Fixed</strong> an issue with Kiali RBAC permissions which prevented its deployment from working properly. (<a href="https://github.com/istio/istio/issues/27109">Issue #27109</a>)</p></li> <li><p><strong>Fixed</strong> an issue where <code>remove-from-mesh</code> did not remove the init containers when using Istio CNI (<a href="https://github.com/istio/istio/issues/26938">Issue #26938</a>)</p></li> <li><p><strong>Fixed</strong> Kiali to use anonymous authentication strategy since newer versions have removed the login authentication strategy.</p></li> </ul>Thu, 10 Sep 2020 00:00:00 +0000/v1.9/news/releases/1.7.x/announcing-1.7.1//v1.9/news/releases/1.7.x/announcing-1.7.1/Announcing Istio 1.6.9 <p>This release contains bug fixes to improve robustness. This release note describes what’s different between Istio 1.6.8 and Istio 1.6.9.</p> <div class="relnote-actions call-to-action"> <a class="entry" href="/v1.9/news/releases/1.6.x/announcing-1.6/upgrade-notes/"> <h5>BEFORE YOU UPGRADE</h5> <p>Things to know and prepare before upgrading.</p> </a> <a class="update-notice entry" data-title='Update Notice' data-downloadhref="https://github.com/istio/istio/releases/tag/1.6.9" data-downloadbuttontext="DOWNLOAD 1.6.9" data-updateadvice='Before you download 1.6.9, you should know that there&#39;s a newer patch release with the latest bug fixes and perf improvements.'
data-updatebutton='LEARN ABOUT ISTIO 1.6.14' data-updatehref="/v1.9/news/releases/1.6.x/announcing-1.6.14/"> <h5>DOWNLOAD</h5> <p>Download and install this release.</p> </a> <a class="entry" href="https://archive.istio.io/v1.6/docs"> <h5>DOCS</h5> <p>Visit the documentation for this release.</p> </a> <a class="entry" href="https://github.com/istio/istio/compare/1.6.8...1.6.9"> <h5>SOURCE CHANGES</h5> <p>Inspect the full set of source code changes.</p> </a> </div> <h2 id="changes">Changes</h2> <ul> <li><strong>Added</strong> istioctl analyzer to detect when Destination Rules do not specify <code>caCertificates</code> (<a href="https://github.com/istio/istio/issues/25652">Istio #25652</a>)</li> <li><strong>Added</strong> missing <code>telemetry.loadshedding.*</code> options to mixer container arguments</li> <li><strong>Fixed</strong> HTTP match request without headers conflict</li> <li><strong>Fixed</strong> Istio operator to watch multiple namespaces (<a href="https://github.com/istio/istio/issues/26317">Istio #26317</a>)</li> <li><strong>Fixed</strong> <code>EDS</code> cache when an endpoint appears after its service resource (<a href="https://github.com/istio/istio/issues/26983">Istio #26983</a>)</li> <li><strong>Fixed</strong> <code>istioctl remove-from-mesh</code> not removing init containers on CNI installations.</li> <li><strong>Fixed</strong> <code>istioctl</code> <code>add-to-mesh</code> and <code>remove-from-mesh</code> commands from affecting <code>OwnerReferences</code> (<a href="https://github.com/istio/istio/issues/26720">Istio #26720</a>)</li> <li><strong>Fixed</strong> cleaning up of service information when the cluster secret is deleted</li> <li><strong>Fixed</strong> egress gateway ports binding to <code>80/443</code> due to user permissions</li> <li><strong>Fixed</strong> gateway listeners created with traffic direction outbound to be drained properly on exit</li> <li><strong>Fixed</strong> headless services not updating listeners
(<a href="https://github.com/istio/istio/issues/26617">Istio #26617</a>)</li> <li><strong>Fixed</strong> inaccurate <code>endpointsPendingPodUpdate</code> metric</li> <li><strong>Fixed</strong> ingress SDS not getting secret updates (<a href="https://github.com/istio/istio/issues/18912">Istio #18912</a>)</li> <li><strong>Fixed</strong> ledger capacity size</li> <li><strong>Fixed</strong> operator to update service monitor due to invalid permissions (<a href="https://github.com/istio/istio/issues/26961">Istio #26961</a>)</li> <li><strong>Fixed</strong> regression in gateway name resolution (<a href="https://github.com/istio/istio/issues/26264">Istio #26264</a>)</li> <li><strong>Fixed</strong> rotated certificates not being stored to <code>/etc/istio-certs</code> <code>VolumeMount</code> (<a href="https://github.com/istio/istio/issues/26821">Istio #26821</a>)</li> <li><strong>Fixed</strong> trust domain validation at the transport socket level (<a href="https://github.com/istio/istio/issues/26435">Istio #26435</a>)</li> <li><strong>Improved</strong> specifying network for a cluster without <code>meshNetworks</code> also being configured</li> <li><strong>Improved</strong> the cache readiness state with TTL (<a href="https://github.com/istio/istio/issues/26418">Istio #26418</a>)</li> <li><strong>Updated</strong> SDS timeout to fetch workload certificates to <code>0s</code></li> <li><strong>Updated</strong> <code>app_containers</code> to use comma-separated values for container specification</li> <li><strong>Updated</strong> default protocol sniffing timeout to <code>5s</code> (<a href="https://github.com/istio/istio/issues/24379">Istio #24379</a>)</li> </ul>Wed, 09 Sep 2020 00:00:00 +0000/v1.9/news/releases/1.6.x/announcing-1.6.9//v1.9/news/releases/1.6.x/announcing-1.6.9/Support for Istio 1.5 has ended<p>As <a href="/v1.9/news/support/announcing-1.5-eol/">previously announced</a>, support for Istio 1.5 has now officially ended.</p> <p>At this point we will no longer
back-port fixes for security issues and critical bugs to 1.5, so we heartily encourage you to upgrade to the latest version of Istio (1.9.5) if you haven&rsquo;t already.</p>Mon, 24 Aug 2020 00:00:00 +0000/v1.9/news/support/announcing-1.5-eol-final//v1.9/news/support/announcing-1.5-eol-final/Announcing Istio 1.5.10 <p>This release includes bug fixes to improve robustness. These release notes describe what&rsquo;s different between Istio 1.5.9 and Istio 1.5.10.</p> <div class="relnote-actions call-to-action"> <a class="entry" href="/v1.9/news/releases/1.5.x/announcing-1.5/upgrade-notes/"> <h5>BEFORE YOU UPGRADE</h5> <p>Things to know and prepare before upgrading.</p> </a> <a class="entry" href="https://github.com/istio/istio/releases/tag/1.5.10"> <h5>DOWNLOAD</h5> <p>Download and install this release.</p> </a> <a class="entry" href="https://archive.istio.io/v1.5/docs"> <h5>DOCS</h5> <p>Visit the documentation for this release.</p> </a> <a class="entry" href="https://github.com/istio/istio/compare/1.5.9...1.5.10"> <h5>SOURCE CHANGES</h5> <p>Inspect the full set of source code changes.</p> </a> </div> <h2 id="bug-fixes">Bug fixes</h2> <ul> <li><strong>Fixed</strong> container name as <code>app_container</code> in telemetry v2.</li> <li><strong>Fixed</strong> ingress SDS not getting secret updates. 
(<a href="https://github.com/istio/istio/issues/23715">Issue 23715</a>).</li> </ul>Mon, 24 Aug 2020 00:00:00 +0000/v1.9/news/releases/1.5.x/announcing-1.5.10//v1.9/news/releases/1.5.x/announcing-1.5.10/ISTIO-SECURITY-2020-009 <table> <thead> <tr> <th colspan="2">Disclosure Details</th> </tr> </thead> <tbody> <tr> <td>CVE(s)</td> <td> <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-16844">CVE-2020-16844</a><br> </td> </tr> <tr> <td>CVSS Impact Score</td> <td>6.8 <a href="https://nvd.nist.gov/vuln-metrics/cvss/v3-calculator?vector=AV%3aN%2fAC%3aH%2fPR%3aL%2fUI%3aN%2fS%3aU%2fC%3aH%2fI%3aH%2fA%3aN">AV:N/AC:H/PR:L/UI:N/S:U/C:H/I:H/A:N</a></td> </tr> <tr> <td>Affected Releases</td> <td> 1.5 to 1.5.8<br> 1.6 to 1.6.7<br> </td> </tr> </tbody> </table> <p>Istio is vulnerable to a newly discovered vulnerability:</p> <ul> <li><strong><a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-16844"><code>CVE-2020-16844</code></a></strong>: Callers to TCP services that have Authorization Policies defined with <code>DENY</code> actions using wildcard suffixes (e.g. <code>*-some-suffix</code>) for source principals or namespace fields will never be denied access.
<ul> <li>CVSS Score: 6.8 <a href="https://nvd.nist.gov/vuln-metrics/cvss/v3-calculator?vector=AV:N/AC:H/PR:L/UI:N/S:U/C:H/I:H/A:N&amp;version=3.1">AV:N/AC:H/PR:L/UI:N/S:U/C:H/I:H/A:N</a></li> </ul></li> </ul> <p>Istio users are exposed to this vulnerability in the following ways:</p> <p>If the user has an authorization policy similar to</p> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' >apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: foo
  namespace: foo
spec:
  action: DENY
  rules:
  - from:
    - source:
        principals:
        - */ns/ns1/sa/foo # indicating any trust domain, ns1 namespace, foo svc account
</code></pre> <p>Istio translates the principal (and <code>source.principal</code>) field to an Envoy level string match</p> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' >stringMatch:
  suffix: spiffe:///ns/ns1/sa/foo
</code></pre> <p>which will not match any legitimate caller, as it incorrectly includes the <code>spiffe://</code> string. The correct string match should be</p> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' >stringMatch:
  regex: spiffe://.*/ns/ns1/sa/foo
</code></pre> <p>Prefix and exact matches in <code>AuthorizationPolicy</code> are unaffected, as are ALLOW actions; HTTP is also unaffected.</p> <h2 id="mitigation">Mitigation</h2> <ul> <li>For Istio 1.5.x deployments: update to <a href="/v1.9/news/releases/1.5.x/announcing-1.5.9">Istio 1.5.9</a> or later.</li> <li>For Istio 1.6.x deployments: update to <a href="/v1.9/news/releases/1.6.x/announcing-1.6.8">Istio 1.6.8</a> or later.</li> <li>Do not use suffix matching in DENY policies on the source principal or namespace fields for TCP services; use prefix and exact matching where applicable.
Where possible change TCP to HTTP for port name suffixes in your Services.</li> </ul> <h2 id="reporting-vulnerabilities">Reporting vulnerabilities</h2> <p>We’d like to remind our community to follow the <a href="/v1.9/about/security-vulnerabilities/">vulnerability reporting process</a> to report any bug that can result in a security vulnerability.Tue, 11 Aug 2020 00:00:00 +0000/v1.9/news/security/istio-security-2020-009//v1.9/news/security/istio-security-2020-009/CVEAnnouncing Istio 1.6.8 <p>This release fixes the security vulnerability described in <a href="/v1.9/news/security/istio-security-2020-009">our August 11th, 2020 news post</a>.</p> <p>This release contains bug fixes to improve robustness. These release notes describe what’s different between Istio 1.6.7 and Istio 1.6.8.</p> <div class="relnote-actions call-to-action"> <a class="entry" href="/v1.9/news/releases/1.6.x/announcing-1.6/upgrade-notes/"> <h5>BEFORE YOU UPGRADE</h5> <p>Things to know and prepare before upgrading.</p> </a> <a class="update-notice entry" data-title='Update Notice' data-downloadhref="https://github.com/istio/istio/releases/tag/1.6.8" data-downloadbuttontext="DOWNLOAD 1.6.8" data-updateadvice='Before you download 1.6.8, you should know that there&#39;s a newer patch release with the latest bug fixes and perf improvements.' 
data-updatebutton='LEARN ABOUT ISTIO 1.6.14' data-updatehref="/v1.9/news/releases/1.6.x/announcing-1.6.14/"> <h5>DOWNLOAD</h5> <p>Download and install this release.</p> </a> <a class="entry" href="https://archive.istio.io/v1.6/docs"> <h5>DOCS</h5> <p>Visit the documentation for this release.</p> </a> <a class="entry" href="https://github.com/istio/istio/compare/1.6.7...1.6.8"> <h5>SOURCE CHANGES</h5> <p>Inspect the full set of source code changes.</p> </a> </div> <h2 id="security-update">Security update</h2> <ul> <li><strong><a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-16844">CVE-2020-16844</a></strong>: Callers to TCP services that have Authorization Policies defined with <code>DENY</code> actions using wildcard suffixes (e.g. <code>*-some-suffix</code>) for source principals or namespace fields will never be denied access. <ul> <li>CVSS Score: 6.8 <a href="https://nvd.nist.gov/vuln-metrics/cvss/v3-calculator?vector=AV:N/AC:H/PR:L/UI:N/S:U/C:H/I:H/A:N&amp;version=3.1">AV:N/AC:H/PR:L/UI:N/S:U/C:H/I:H/A:N</a></li> </ul></li> </ul>Tue, 11 Aug 2020 00:00:00 +0000/v1.9/news/releases/1.6.x/announcing-1.6.8//v1.9/news/releases/1.6.x/announcing-1.6.8/Announcing Istio 1.5.9 <p>This release fixes the security vulnerability described in <a href="/v1.9/news/security/istio-security-2020-009">our August 11th, 2020 news post</a>.</p> <p>These release notes describe what&rsquo;s different between Istio 1.5.8 and Istio 1.5.9.</p> <div class="relnote-actions call-to-action"> <a class="entry" href="/v1.9/news/releases/1.5.x/announcing-1.5/upgrade-notes/"> <h5>BEFORE YOU UPGRADE</h5> <p>Things to know and prepare before upgrading.</p> </a> <a class="update-notice entry" data-title='Update Notice' data-downloadhref="https://github.com/istio/istio/releases/tag/1.5.9" data-downloadbuttontext="DOWNLOAD 1.5.9" data-updateadvice='Before you download 1.5.9, you should know that there&#39;s a newer patch release with the latest bug fixes and perf improvements.'
data-updatebutton='LEARN ABOUT ISTIO 1.5.10' data-updatehref="/v1.9/news/releases/1.5.x/announcing-1.5.10/"> <h5>DOWNLOAD</h5> <p>Download and install this release.</p> </a> <a class="entry" href="https://archive.istio.io/v1.5/docs"> <h5>DOCS</h5> <p>Visit the documentation for this release.</p> </a> <a class="entry" href="https://github.com/istio/istio/compare/1.5.8...1.5.9"> <h5>SOURCE CHANGES</h5> <p>Inspect the full set of source code changes.</p> </a> </div> <h2 id="security-update">Security update</h2> <ul> <li><strong><a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-16844">CVE-2020-16844</a></strong>: Callers to TCP services that have Authorization Policies defined with <code>DENY</code> actions using wildcard suffixes (e.g. <code>*-some-suffix</code>) for source principals or namespace fields will never be denied access. <ul> <li>CVSS Score: 6.8 <a href="https://nvd.nist.gov/vuln-metrics/cvss/v3-calculator?vector=AV:N/AC:H/PR:L/UI:N/S:U/C:H/I:H/A:N&amp;version=3.1">AV:N/AC:H/PR:L/UI:N/S:U/C:H/I:H/A:N</a></li> </ul></li> </ul>Tue, 11 Aug 2020 00:00:00 +0000/v1.9/news/releases/1.5.x/announcing-1.5.9//v1.9/news/releases/1.5.x/announcing-1.5.9/Announcing Istio 1.6.7 <p>This release contains bug fixes to improve robustness. This release note describes what’s different between Istio 1.6.6 and Istio 1.6.7.</p> <div class="relnote-actions call-to-action"> <a class="entry" href="/v1.9/news/releases/1.6.x/announcing-1.6/upgrade-notes/"> <h5>BEFORE YOU UPGRADE</h5> <p>Things to know and prepare before upgrading.</p> </a> <a class="update-notice entry" data-title='Update Notice' data-downloadhref="https://github.com/istio/istio/releases/tag/1.6.7" data-downloadbuttontext="DOWNLOAD 1.6.7" data-updateadvice='Before you download 1.6.7, you should know that there&#39;s a newer patch release with the latest bug fixes and perf improvements.'
data-updatebutton='LEARN ABOUT ISTIO 1.6.14' data-updatehref="/v1.9/news/releases/1.6.x/announcing-1.6.14/"> <h5>DOWNLOAD</h5> <p>Download and install this release.</p> </a> <a class="entry" href="https://archive.istio.io/v1.6/docs"> <h5>DOCS</h5> <p>Visit the documentation for this release.</p> </a> <a class="entry" href="https://github.com/istio/istio/compare/1.6.6...1.6.7"> <h5>SOURCE CHANGES</h5> <p>Inspect the full set of source code changes.</p> </a> </div> <h2 id="changes">Changes</h2> <ul> <li><strong>Fixed</strong> an issue which prevented endpoints not associated with pods from working. (<a href="https://github.com/istio/istio/issues/25974">Issue #25974</a>)</li> </ul>Thu, 30 Jul 2020 00:00:00 +0000/v1.9/news/releases/1.6.x/announcing-1.6.7//v1.9/news/releases/1.6.x/announcing-1.6.7/Announcing Istio 1.6.6 <div> <aside class="callout warning"> <div class="type"> <svg class="large-icon"><use xlink:href="/v1.9/img/icons.svg#callout-warning"/></svg> </div> <div class="content">This release contains a regression from 1.6.5 that prevents endpoints not associated with pods from working. Please upgrade to 1.6.7 when it is available.</div> </aside> </div> <p>This release contains bug fixes to improve robustness. This release note describes what’s different between Istio 1.6.5 and Istio 1.6.6.</p> <div class="relnote-actions call-to-action"> <a class="entry" href="/v1.9/news/releases/1.6.x/announcing-1.6/upgrade-notes/"> <h5>BEFORE YOU UPGRADE</h5> <p>Things to know and prepare before upgrading.</p> </a> <a class="update-notice entry" data-title='Update Notice' data-downloadhref="https://github.com/istio/istio/releases/tag/1.6.6" data-downloadbuttontext="DOWNLOAD 1.6.6" data-updateadvice='Before you download 1.6.6, you should know that there&#39;s a newer patch release with the latest bug fixes and perf improvements.' 
data-updatebutton='LEARN ABOUT ISTIO 1.6.14' data-updatehref="/v1.9/news/releases/1.6.x/announcing-1.6.14/"> <h5>DOWNLOAD</h5> <p>Download and install this release.</p> </a> <a class="entry" href="https://archive.istio.io/v1.6/docs"> <h5>DOCS</h5> <p>Visit the documentation for this release.</p> </a> <a class="entry" href="https://github.com/istio/istio/compare/1.6.5...1.6.6"> <h5>SOURCE CHANGES</h5> <p>Inspect the full set of source code changes.</p> </a> </div> <h2 id="changes">Changes</h2> <ul> <li><strong>Optimized</strong> performance in scenarios with large numbers of gateways. (<a href="https://github.com/istio/istio/issues/25116">Issue 25116</a>)</li> <li><strong>Fixed</strong> an issue where out of order events may cause the Istiod update queue to get stuck. This resulted in proxies with stale configuration.</li> <li><strong>Fixed</strong> <code>istioctl upgrade</code> so that it no longer checks remote component versions when using <code>--dry-run</code>. (<a href="https://github.com/istio/istio/issues/24865">Issue 24865</a>)</li> <li><strong>Fixed</strong> long log messages for clusters with many gateways.</li> <li><strong>Fixed</strong> outlier detection to only fire on user configured errors and not depend on success rate. (<a href="https://github.com/istio/istio/issues/25220">Issue 25220</a>)</li> <li><strong>Fixed</strong> demo profile to use port 15021 as the status port. (<a href="https://github.com/istio/istio/issues/25626">Issue #25626</a>)</li> <li><strong>Fixed</strong> Galley to properly handle errors from Kubernetes tombstones.</li> <li><strong>Fixed</strong> an issue where manually enabling TLS/mTLS for communication between a sidecar and an egress gateway did not work. 
(<a href="https://github.com/istio/istio/issues/23910">Issue 23910</a>)</li> <li><strong>Fixed</strong> Bookinfo demo application to verify if a specified namespace exists and if not, use the default namespace.</li> <li><strong>Added</strong> a label to the <code>pilot_xds</code> metric in order to give more information on data plane versions without scraping the data plane.</li> <li><strong>Added</strong> <code>CA_ADDR</code> field to allow configuring the certificate authority address on the egress gateway configuration and fixed the <code>istio-certs</code> mount secret name.</li> <li><strong>Updated</strong> Bookinfo demo application to latest versions of libraries.</li> <li><strong>Updated</strong> Istio to disable auto mTLS when sending traffic to headless services without a sidecar.</li> </ul>Wed, 29 Jul 2020 00:00:00 +0000/v1.9/news/releases/1.6.x/announcing-1.6.6//v1.9/news/releases/1.6.x/announcing-1.6.6/Support for Istio 1.5 ends on August 21st, 2020<p>According to Istio&rsquo;s <a href="/v1.9/about/supported-releases/">support policy</a>, LTS releases like 1.5 are supported for three months after the next LTS release. Since <a href="/v1.9/news/releases/1.6.x/announcing-1.6/">1.6 was released on May 21st</a>, support for 1.5 will end on August 21st, 2020.</p> <p>At that point we will stop back-porting fixes for security issues and critical bugs to 1.5, so we encourage you to upgrade to the latest version of Istio (1.9.5). 
If you don&rsquo;t do this you may put yourself in the position of having to do a major upgrade on a short timeframe to pick up a critical fix.</p> <p>We care about you and your clusters, so please be kind to yourself and upgrade.</p>Wed, 22 Jul 2020 00:00:00 +0000/v1.9/news/support/announcing-1.5-eol//v1.9/news/support/announcing-1.5-eol/ISTIO-SECURITY-2020-008 <table> <thead> <tr> <th colspan="2">Disclosure Details</th> </tr> </thead> <tbody> <tr> <td>CVE(s)</td> <td> <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-15104">CVE-2020-15104</a><br> </td> </tr> <tr> <td>CVSS Impact Score</td> <td>6.6 <a href="https://nvd.nist.gov/vuln-metrics/cvss/v3-calculator?vector=AV%3aN%2fAC%3aH%2fPR%3aH%2fUI%3aN%2fS%3aC%2fC%3aH%2fI%3aL%2fA%3aN%2fE%3aF%2fRL%3aO%2fRC%3aC">AV:N/AC:H/PR:H/UI:N/S:C/C:H/I:L/A:N/E:F/RL:O/RC:C</a></td> </tr> <tr> <td>Affected Releases</td> <td> 1.5 to 1.5.7<br> 1.6 to 1.6.4<br> All releases prior to 1.5<br> </td> </tr> </tbody> </table> <p>Istio is vulnerable to a newly discovered vulnerability:</p> <ul> <li><strong><a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-15104"><code>CVE-2020-15104</code></a></strong>: When validating TLS certificates, Envoy incorrectly allows a wildcard DNS Subject Alternative Name to apply to multiple subdomains. For example, with a SAN of <code>*.example.com</code>, Envoy incorrectly allows <code>nested.subdomain.example.com</code>, when it should only allow <code>subdomain.example.com</code>.
<ul> <li>CVSS Score: 6.6 <a href="https://nvd.nist.gov/vuln-metrics/cvss/v3-calculator?vector=AV:N/AC:H/PR:H/UI:N/S:C/C:H/I:L/A:N/E:F/RL:O/RC:C&amp;version=3.1">AV:N/AC:H/PR:H/UI:N/S:C/C:H/I:L/A:N/E:F/RL:O/RC:C</a></li> </ul></li> </ul> <p>Istio users are exposed to this vulnerability in the following ways:</p> <ul> <li><p>Direct use of Envoy&rsquo;s <code>verify_subject_alt_name</code> and <code>match_subject_alt_names</code> configuration via <a href="/v1.9/docs/reference/config/networking/envoy-filter/">Envoy Filter</a>.</p></li> <li><p>Use of Istio&rsquo;s <a href="/v1.9/docs/reference/config/networking/destination-rule/#ClientTLSSettings"><code>subjectAltNames</code> field in destination rules with client TLS settings</a>. A destination rule with a <code>subjectAltNames</code> field containing <code>nested.subdomain.example.com</code> incorrectly accepts a certificate from an upstream peer with a Subject Alternative Name (SAN) of <code>*.example.com</code>. Instead a SAN of <code>*.subdomain.example.com</code> or <code>nested.subdomain.example.com</code> should be present.</p></li> <li><p>Use of Istio&rsquo;s <a href="/v1.9/docs/reference/config/networking/service-entry/"><code>subjectAltNames</code> in service entries</a>. A service entry with a <code>subjectAltNames</code> field with a value similar to <code>nested.subdomain.example.com</code> incorrectly accepts a certificate from an upstream peer with a SAN of <code>*.example.com</code>.</p></li> </ul> <p>The Istio CA, which was formerly known as Citadel, does not issue certificates with DNS wildcard SANs. 
The vulnerability only impacts configurations that validate externally issued certificates.</p> <h2 id="mitigation">Mitigation</h2> <ul> <li>For Istio 1.5.x deployments: update to <a href="/v1.9/news/releases/1.5.x/announcing-1.5.8">Istio 1.5.8</a> or later.</li> <li>For Istio 1.6.x deployments: update to <a href="/v1.9/news/releases/1.6.x/announcing-1.6.5">Istio 1.6.5</a> or later.</li> </ul> <h2 id="reporting-vulnerabilities">Reporting vulnerabilities</h2> <p>We’d like to remind our community to follow the <a href="/v1.9/about/security-vulnerabilities/">vulnerability reporting process</a> to report any bug that can result in a security vulnerability.Thu, 09 Jul 2020 00:00:00 +0000/v1.9/news/security/istio-security-2020-008//v1.9/news/security/istio-security-2020-008/CVEAnnouncing Istio 1.6.5 <p>This release fixes the security vulnerability described in <a href="/v1.9/news/security/istio-security-2020-008">our July 9th, 2020 news post</a>.</p> <p>This release contains bug fixes to improve robustness. These release notes describe what’s different between Istio 1.6.5 and Istio 1.6.4.</p> <div class="relnote-actions call-to-action"> <a class="entry" href="/v1.9/news/releases/1.6.x/announcing-1.6/upgrade-notes/"> <h5>BEFORE YOU UPGRADE</h5> <p>Things to know and prepare before upgrading.</p> </a> <a class="update-notice entry" data-title='Update Notice' data-downloadhref="https://github.com/istio/istio/releases/tag/1.6.5" data-downloadbuttontext="DOWNLOAD 1.6.5" data-updateadvice='Before you download 1.6.5, you should know that there&#39;s a newer patch release with the latest bug fixes and perf improvements.' 
data-updatebutton='LEARN ABOUT ISTIO 1.6.14' data-updatehref="/v1.9/news/releases/1.6.x/announcing-1.6.14/"> <h5>DOWNLOAD</h5> <p>Download and install this release.</p> </a> <a class="entry" href="https://archive.istio.io/v1.6/docs"> <h5>DOCS</h5> <p>Visit the documentation for this release.</p> </a> <a class="entry" href="https://github.com/istio/istio/compare/1.6.4...1.6.5"> <h5>SOURCE CHANGES</h5> <p>Inspect the full set of source code changes.</p> </a> </div> <h2 id="security-update">Security update</h2> <ul> <li><strong><a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-15104">CVE-2020-15104</a></strong>: When validating TLS certificates, Envoy incorrectly allows a wildcard DNS Subject Alternative Name to apply to multiple subdomains. For example, with a SAN of <code>*.example.com</code>, Envoy incorrectly allows <code>nested.subdomain.example.com</code>, when it should only allow <code>subdomain.example.com</code>. <ul> <li>CVSS Score: 6.6 <a href="https://nvd.nist.gov/vuln-metrics/cvss/v3-calculator?vector=AV:N/AC:H/PR:H/UI:N/S:C/C:H/I:L/A:N/E:F/RL:O/RC:C&amp;version=3.1">AV:N/AC:H/PR:H/UI:N/S:C/C:H/I:L/A:N/E:F/RL:O/RC:C</a></li> </ul></li> </ul> <h2 id="changes">Changes</h2> <ul> <li><strong>Fixed</strong> Mixer to return the proper source name when a lookup by IP matches multiple pods with the same IP.</li> <li><strong>Improved</strong> the sidecar injection control based on revision at a per-pod level (<a href="https://github.com/istio/istio/issues/24801">Issue 24801</a>)</li> <li><strong>Improved</strong> <code>istioctl validate</code> to disallow unknown fields not included in the Open API specification (<a href="https://github.com/istio/istio/issues/24860">Issue 24860</a>)</li> <li><strong>Changed</strong> <code>stsPort</code> to <code>sts_port</code> in Envoy&rsquo;s bootstrap file.</li> <li><strong>Preserved</strong> existing WASM state schema for state objects to reference it later as needed.</li> <li><strong>Added</strong>
<code>targetUri</code> to <code>stackdriver_grpc_service</code>.</li> <li><strong>Updated</strong> WASM state to log for Access Log Service.</li> <li><strong>Increased</strong> default protocol detection timeout from 100 ms to 5 s (<a href="https://github.com/istio/istio/issues/24379">Issue 24379</a>)</li> <li><strong>Removed</strong> UDP port 53 from Istiod.</li> <li><strong>Allowed</strong> setting <code>status.sidecar.istio.io/port</code> to zero (<a href="https://github.com/istio/istio/issues/24722">Issue 24722</a>)</li> <li><strong>Fixed</strong> EDS endpoint selection for subsets with no or empty label selector. (<a href="https://github.com/istio/istio/issues/24969">Issue 24969</a>)</li> <li><strong>Allowed</strong> <code>k8s.overlays</code> on <code>BaseComponentSpec</code>. (<a href="https://github.com/istio/istio/issues/24476">Issue 24476</a>)</li> <li><strong>Fixed</strong> <code>istio-agent</code> to create elliptic curve CSRs when <code>ECC_SIGNATURE_ALGORITHM</code> is set.</li> <li><strong>Improved</strong> mapping of gRPC status codes into HTTP domain for telemetry.</li> <li><strong>Fixed</strong> <code>scaleTargetRef</code> naming in <code>HorizontalPodAutoscaler</code> for Istiod (<a href="https://github.com/istio/istio/issues/24809">Issue 24809</a>)</li> </ul>Thu, 09 Jul 2020 00:00:00 +0000/v1.9/news/releases/1.6.x/announcing-1.6.5//v1.9/news/releases/1.6.x/announcing-1.6.5/Announcing Istio 1.5.8 <p>This release fixes the security vulnerability described in <a href="/v1.9/news/security/istio-security-2020-008">our July 9th, 2020 news post</a>.</p> <p>These release notes describe what&rsquo;s different between Istio 1.5.7 and Istio 1.5.8.</p> <div class="relnote-actions call-to-action"> <a class="entry" href="/v1.9/news/releases/1.5.x/announcing-1.5/upgrade-notes/"> <h5>BEFORE YOU UPGRADE</h5> <p>Things to know and prepare before upgrading.</p> </a> <a class="update-notice entry" data-title='Update Notice'
data-downloadhref="https://github.com/istio/istio/releases/tag/1.5.8" data-downloadbuttontext="DOWNLOAD 1.5.8" data-updateadvice='Before you download 1.5.8, you should know that there&#39;s a newer patch release with the latest bug fixes and perf improvements.' data-updatebutton='LEARN ABOUT ISTIO 1.5.10' data-updatehref="/v1.9/news/releases/1.5.x/announcing-1.5.10/"> <h5>DOWNLOAD</h5> <p>Download and install this release.</p> </a> <a class="entry" href="https://archive.istio.io/v1.5/docs"> <h5>DOCS</h5> <p>Visit the documentation for this release.</p> </a> <a class="entry" href="https://github.com/istio/istio/compare/1.5.7...1.5.8"> <h5>SOURCE CHANGES</h5> <p>Inspect the full set of source code changes.</p> </a> </div> <h2 id="security-update">Security update</h2> <ul> <li><strong><a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-15104">CVE-2020-15104</a></strong>: When validating TLS certificates, Envoy incorrectly allows a wildcard DNS Subject Alternative Name (SAN) to apply to multiple subdomains. For example, with a SAN of <code>*.example.com</code>, Envoy incorrectly allows <code>nested.subdomain.example.com</code>, when it should only allow <code>subdomain.example.com</code>.
<ul> <li>CVSS Score: 6.6 <a href="https://nvd.nist.gov/vuln-metrics/cvss/v3-calculator?vector=AV:N/AC:H/PR:H/UI:N/S:C/C:H/I:L/A:N/E:F/RL:O/RC:C&amp;version=3.1">AV:N/AC:H/PR:H/UI:N/S:C/C:H/I:L/A:N/E:F/RL:O/RC:C</a></li> </ul></li> </ul> <h2 id="changes">Changes</h2> <ul> <li><strong>Allowed</strong> setting <code>status.sidecar.istio.io/port</code> to zero (<a href="https://github.com/istio/istio/issues/24722">Issue 24722</a>)</li> <li><strong>Improved</strong> <code>istioctl validate</code> to disallow unknown fields not included in the Open API specification (<a href="https://github.com/istio/istio/issues/24860">Issue 24860</a>)</li> <li><strong>Fixed</strong> a bug in Mixer where it would incorrectly return source names when it did lookup by IP.</li> </ul>Thu, 09 Jul 2020 00:00:00 +0000/v1.9/news/releases/1.5.x/announcing-1.5.8//v1.9/news/releases/1.5.x/announcing-1.5.8/ISTIO-SECURITY-2020-007 <table> <thead> <tr> <th colspan="2">Disclosure Details</th> </tr> </thead> <tbody> <tr> <td>CVE(s)</td> <td> <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-12603">CVE-2020-12603</a><br> <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-12605">CVE-2020-12605</a><br> <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-8663">CVE-2020-8663</a><br> <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-12604">CVE-2020-12604</a><br> </td> </tr> <tr> <td>CVSS Impact Score</td> <td>7.5 <a href="https://nvd.nist.gov/vuln-metrics/cvss/v3-calculator?vector=AV%3aN%2fAC%3aL%2fPR%3aN%2fUI%3aN%2fS%3aU%2fC%3aN%2fI%3aN%2fA%3aH">AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H</a></td> </tr> <tr> <td>Affected Releases</td> <td> 1.5 to 1.5.6<br> 1.6 to 1.6.3<br> </td> </tr> </tbody> </table> <p>Envoy, and subsequently Istio, are vulnerable to four newly discovered vulnerabilities:</p> <ul> <li><p><strong><a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-12603">CVE-2020-12603</a></strong>: By sending a specially crafted packet, an 
attacker could cause Envoy to consume excessive amounts of memory when proxying HTTP/2 requests or responses.</p> <ul> <li>CVSS Score: 7.0 <a href="https://nvd.nist.gov/vuln-metrics/cvss/v3-calculator?vector=AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H&amp;version=3.1">AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H</a></li> </ul></li> <li><p><strong><a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-12605">CVE-2020-12605</a></strong>: An attacker could cause Envoy to consume excessive amounts of memory when processing specially crafted HTTP/1.1 packets.</p> <ul> <li>CVSS Score: 7.0 <a href="https://nvd.nist.gov/vuln-metrics/cvss/v3-calculator?vector=AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H&amp;version=3.1">AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H</a></li> </ul></li> <li><p><strong><a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-8663">CVE-2020-8663</a></strong>: An attacker could cause Envoy to exhaust file descriptors when accepting too many connections.</p> <ul> <li>CVSS Score: 7.0 <a href="https://nvd.nist.gov/vuln-metrics/cvss/v3-calculator?vector=AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H&amp;version=3.1">AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H</a></li> </ul></li> <li><p><strong><a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-12604">CVE-2020-12604</a></strong>: An attacker could cause increased memory usage when processing specially crafted packets.</p> <ul> <li>CVSS Score: 5.3 <a href="https://nvd.nist.gov/vuln-metrics/cvss/v3-calculator?vector=AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:L&amp;version=3.1">AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:L</a></li> </ul></li> </ul> <h2 id="mitigation">Mitigation</h2> <ul> <li>For Istio 1.5.x deployments: update to <a href="/v1.9/news/releases/1.5.x/announcing-1.5.7">Istio 1.5.7</a> or later.</li> <li>For Istio 1.6.x deployments: update to <a href="/v1.9/news/releases/1.6.x/announcing-1.6.4">Istio 1.6.4</a> or later.</li> </ul> <div> <aside class="callout warning"> <div class="type"> <svg class="large-icon"><use
xlink:href="/v1.9/img/icons.svg#callout-warning"/></svg> </div> <div class="content">You must take the following additional steps to mitigate CVE-2020-8663.</div> </aside> </div> <p>CVE-2020-8663 is addressed in Envoy by adding a configurable limit on <a href="https://www.envoyproxy.io/docs/envoy/v1.14.3/configuration/operations/overload_manager/overload_manager#limiting-active-connections">downstream connections</a>. The limit must be configured to mitigate this vulnerability. Perform the following steps to configure limits at the ingress gateway.</p> <ol> <li><p>Create a config map by downloading <a href="/v1.9/news/security/istio-security-2020-007/custom-bootstrap-runtime.yaml">custom-bootstrap-runtime.yaml</a>. Update <code>global_downstream_max_connections</code> in the config map according to the number of concurrent connections needed by individual gateway instances in your deployment. Once the limit is reached, Envoy will start rejecting TCP connections.</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl -n istio-system apply -f custom-bootstrap-runtime.yaml </code></pre></li> <li><p>Patch the ingress gateway deployment to use the above configuration.
Download <a href="/v1.9/news/security/istio-security-2020-007/gateway-patch.yaml">gateway-patch.yaml</a> and apply it using the following command.</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl --namespace istio-system patch deployment istio-ingressgateway --patch &#34;$(cat gateway-patch.yaml)&#34; </code></pre></li> <li><p>Confirm that the new limits are in place.</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ ISTIO_INGRESS_PODNAME=$(kubectl get pods -l app=istio-ingressgateway -n istio-system -o jsonpath=&#34;{.items[0].metadata.name}&#34;)
$ kubectl --namespace istio-system exec -i -t ${ISTIO_INGRESS_PODNAME} -c istio-proxy -- curl -sS http://localhost:15000/runtime
{
  &#34;entries&#34;: {
    &#34;overload.global_downstream_max_connections&#34;: {
      &#34;layer_values&#34;: [
        &#34;&#34;,
        &#34;250000&#34;,
        &#34;&#34;
      ],
      &#34;final_value&#34;: &#34;250000&#34;
    }
  },
  &#34;layers&#34;: [
    &#34;static_layer_0&#34;,
    &#34;admin&#34;
  ]
}
</code></pre></li> </ol> <h2 id="reporting-vulnerabilities">Reporting vulnerabilities</h2> <p>We’d like to remind our community to follow the <a href="/v1.9/about/security-vulnerabilities/">vulnerability reporting process</a> to report any bug that can result in a security vulnerability.</p>Tue, 30 Jun 2020 00:00:00 +0000/v1.9/news/security/istio-security-2020-007//v1.9/news/security/istio-security-2020-007/CVEAnnouncing Istio 1.6.4 <p>This release fixes the security vulnerability described in <a href="/v1.9/news/security/istio-security-2020-007">our June 30th, 2020 news post</a>.</p> <p>This release note describes what&rsquo;s different between Istio 1.6.4 and Istio 1.6.3.</p> <div class="relnote-actions call-to-action"> <a class="entry" href="/v1.9/news/releases/1.6.x/announcing-1.6/upgrade-notes/"> <h5>BEFORE YOU UPGRADE</h5> <p>Things to know and prepare before upgrading.</p> </a> <a class="update-notice entry" data-title='Update Notice'
data-downloadhref="https://github.com/istio/istio/releases/tag/1.6.4" data-downloadbuttontext="DOWNLOAD 1.6.4" data-updateadvice='Before you download 1.6.4, you should know that there&#39;s a newer patch release with the latest bug fixes and perf improvements.' data-updatebutton='LEARN ABOUT ISTIO 1.6.14' data-updatehref="/v1.9/news/releases/1.6.x/announcing-1.6.14/"> <h5>DOWNLOAD</h5> <p>Download and install this release.</p> </a> <a class="entry" href="https://archive.istio.io/v1.6/docs"> <h5>DOCS</h5> <p>Visit the documentation for this release.</p> </a> <a class="entry" href="https://github.com/istio/istio/compare/1.6.3...1.6.4"> <h5>SOURCE CHANGES</h5> <p>Inspect the full set of source code changes.</p> </a> </div> <h2 id="security-update">Security update</h2> <ul> <li><p><strong><a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-12603">CVE-2020-12603</a></strong>: By sending a specially crafted packet, an attacker could cause Envoy to consume excessive amounts of memory when proxying HTTP/2 requests or responses.</p> <ul> <li>CVSS Score: 7.0 <a href="https://nvd.nist.gov/vuln-metrics/cvss/v3-calculator?vector=AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H&amp;version=3.1">AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H</a></li> </ul></li> <li><p><strong><a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-12605">CVE-2020-12605</a></strong>: An attacker could cause Envoy to consume excessive amounts of memory when processing specially crafted HTTP/1.1 packets.</p> <ul> <li>CVSS Score: 7.0 <a href="https://nvd.nist.gov/vuln-metrics/cvss/v3-calculator?vector=AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H&amp;version=3.1">AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H</a></li> </ul></li> <li><p><strong><a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-8663">CVE-2020-8663</a></strong>: An attacker could cause Envoy to exhaust file descriptors when accepting too many connections.</p> <ul> <li>CVSS Score: 7.0 <a 
href="https://nvd.nist.gov/vuln-metrics/cvss/v3-calculator?vector=AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H&amp;version=3.1">AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H</a></li> </ul></li> <li><p><strong><a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-12604">CVE-2020-12604</a></strong>: An attacker could cause increased memory usage when processing specially crafted packets.</p> <ul> <li>CVSS Score: 5.3 <a href="https://nvd.nist.gov/vuln-metrics/cvss/v3-calculator?vector=AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:L&amp;version=3.1">AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:L</a></li> </ul></li> </ul>Tue, 30 Jun 2020 00:00:00 +0000/v1.9/news/releases/1.6.x/announcing-1.6.4//v1.9/news/releases/1.6.x/announcing-1.6.4/Announcing Istio 1.5.7 <p>This release fixes the security vulnerability described in <a href="/v1.9/news/security/istio-security-2020-007">our June 30th, 2020 news post</a>.</p> <p>This release note describes what&rsquo;s different between Istio 1.5.7 and Istio 1.5.6.</p> <div class="relnote-actions call-to-action"> <a class="entry" href="/v1.9/news/releases/1.5.x/announcing-1.5/upgrade-notes/"> <h5>BEFORE YOU UPGRADE</h5> <p>Things to know and prepare before upgrading.</p> </a> <a class="update-notice entry" data-title='Update Notice' data-downloadhref="https://github.com/istio/istio/releases/tag/1.5.7" data-downloadbuttontext="DOWNLOAD 1.5.7" data-updateadvice='Before you download 1.5.7, you should know that there&#39;s a newer patch release with the latest bug fixes and perf improvements.'
data-updatebutton='LEARN ABOUT ISTIO 1.5.10' data-updatehref="/v1.9/news/releases/1.5.x/announcing-1.5.10/"> <h5>DOWNLOAD</h5> <p>Download and install this release.</p> </a> <a class="entry" href="https://archive.istio.io/v1.5/docs"> <h5>DOCS</h5> <p>Visit the documentation for this release.</p> </a> <a class="entry" href="https://github.com/istio/istio/compare/1.5.6...1.5.7"> <h5>SOURCE CHANGES</h5> <p>Inspect the full set of source code changes.</p> </a> </div> <h2 id="security-update">Security update</h2> <ul> <li><p><strong><a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-12603">CVE-2020-12603</a></strong>: By sending a specially crafted packet, an attacker could cause Envoy to consume excessive amounts of memory when proxying HTTP/2 requests or responses.</p> <ul> <li>CVSS Score: 7.0 <a href="https://nvd.nist.gov/vuln-metrics/cvss/v3-calculator?vector=AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H&amp;version=3.1">AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H</a></li> </ul></li> <li><p><strong><a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-12605">CVE-2020-12605</a></strong>: An attacker could cause Envoy to consume excessive amounts of memory when processing specially crafted HTTP/1.1 packets.</p> <ul> <li>CVSS Score: 7.0 <a href="https://nvd.nist.gov/vuln-metrics/cvss/v3-calculator?vector=AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H&amp;version=3.1">AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H</a></li> </ul></li> <li><p><strong><a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-8663">CVE-2020-8663</a></strong>: An attacker could cause Envoy to exhaust file descriptors when accepting too many connections.</p> <ul> <li>CVSS Score: 7.0 <a href="https://nvd.nist.gov/vuln-metrics/cvss/v3-calculator?vector=AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H&amp;version=3.1">AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H</a></li> </ul></li> <li><p><strong><a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-12604">CVE-2020-12604</a></strong>: An attacker could cause 
increased memory usage when processing specially crafted packets.</p> <ul> <li>CVSS Score: 5.3 <a href="https://nvd.nist.gov/vuln-metrics/cvss/v3-calculator?vector=AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:L&amp;version=3.1">AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:L</a></li> </ul></li> </ul>Tue, 30 Jun 2020 00:00:00 +0000/v1.9/news/releases/1.5.x/announcing-1.5.7//v1.9/news/releases/1.5.x/announcing-1.5.7/Announcing Istio 1.4.10 <p>This is the final release for Istio 1.4.</p> <p>This release fixes the security vulnerability described in <a href="/v1.9/news/security/istio-security-2020-006">our June 11th, 2020 news post</a> and includes bug fixes to improve robustness.</p> <p>This release note describes what&rsquo;s different between Istio 1.4.9 and Istio 1.4.10.</p> <div class="relnote-actions call-to-action"> <a class="entry" href="/v1.9/news/releases/1.4.x/announcing-1.4/upgrade-notes/"> <h5>BEFORE YOU UPGRADE</h5> <p>Things to know and prepare before upgrading.</p> </a> <a class="entry" href="https://github.com/istio/istio/releases/tag/1.4.10"> <h5>DOWNLOAD</h5> <p>Download and install this release.</p> </a> <a class="entry" href="https://archive.istio.io/v1.4/docs"> <h5>DOCS</h5> <p>Visit the documentation for this release.</p> </a> <a class="entry" href="https://github.com/istio/istio/compare/1.4.9...1.4.10"> <h5>SOURCE CHANGES</h5> <p>Inspect the full set of source code changes.</p> </a> </div> <h2 id="security-update">Security update</h2> <ul> <li><strong>ISTIO-SECURITY-2020-006</strong> Excessive CPU usage when processing HTTP/2 SETTINGS frames with too many parameters, potentially leading to a denial of service.</li> </ul> <p><strong><a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11080">CVE-2020-11080</a></strong>: By sending a specially crafted packet, an attacker could cause the CPU to spike at 100%.
This could be sent to the ingress gateway or a sidecar.</p> <h2 id="bug-fixes">Bug fixes</h2> <ul> <li><strong>Fixed</strong> <code>istio-cni-node</code> crash when <code>COS_CONTAINERD</code> and Istio CNI are enabled on Google Kubernetes Engine (<a href="https://github.com/istio/istio/issues/23643">Issue 23643</a>)</li> <li><strong>Fixed</strong> Istio CNI causing pod initialization to experience a 30-40 second delay on startup when DNS is unreachable (<a href="https://github.com/istio/istio/issues/23770">Issue 23770</a>)</li> </ul> <h2 id="bookinfo-sample-application-security-fixes">Bookinfo sample application security fixes</h2> <p>We&rsquo;ve updated the versions of Node.js and jQuery used in the Bookinfo sample application. Node.js has been upgraded from version 12.9 to 12.18. jQuery has been updated from version 2.1.4 to version 3.5.0. The highest rated vulnerability fixed: <em>HTTP request smuggling using malformed Transfer-Encoding header (Critical) (CVE-2019-15605)</em></p>Mon, 22 Jun 2020 00:00:00 +0000/v1.9/news/releases/1.4.x/announcing-1.4.10//v1.9/news/releases/1.4.x/announcing-1.4.10/Announcing Istio 1.6.3 <p>This release contains bug fixes to improve robustness. This release note describes what’s different between Istio 1.6.2 and Istio 1.6.3.</p> <div class="relnote-actions call-to-action"> <a class="entry" href="/v1.9/news/releases/1.6.x/announcing-1.6/upgrade-notes/"> <h5>BEFORE YOU UPGRADE</h5> <p>Things to know and prepare before upgrading.</p> </a> <a class="update-notice entry" data-title='Update Notice' data-downloadhref="https://github.com/istio/istio/releases/tag/1.6.3" data-downloadbuttontext="DOWNLOAD 1.6.3" data-updateadvice='Before you download 1.6.3, you should know that there&#39;s a newer patch release with the latest bug fixes and perf improvements.'
data-updatebutton='LEARN ABOUT ISTIO 1.6.14' data-updatehref="/v1.9/news/releases/1.6.x/announcing-1.6.14/"> <h5>DOWNLOAD</h5> <p>Download and install this release.</p> </a> <a class="entry" href="https://archive.istio.io/v1.6/docs"> <h5>DOCS</h5> <p>Visit the documentation for this release.</p> </a> <a class="entry" href="https://github.com/istio/istio/compare/1.6.2...1.6.3"> <h5>SOURCE CHANGES</h5> <p>Inspect the full set of source code changes.</p> </a> </div> <h2 id="changes">Changes</h2> <ul> <li><strong>Fixed</strong> an issue preventing the operator from recreating watched resources if they are deleted (<a href="https://github.com/istio/istio/issues/23238">Issue 23238</a>).</li> <li><strong>Fixed</strong> an issue where Istio crashed with the message: <code>proto.Message is *client.QuotaSpecBinding, not *client.QuotaSpecBinding</code> (<a href="https://github.com/istio/istio/issues/24264">Issue 24264</a>).</li> <li><strong>Fixed</strong> an issue preventing operator reconciliation due to improper labels on watched resources (<a href="https://github.com/istio/istio/issues/23603">Issue 23603</a>).</li> <li><strong>Added</strong> support for the <code>k8s.v1.cni.cncf.io/networks</code> annotation (<a href="https://github.com/istio/istio/issues/24425">Issue 24425</a>).</li> <li><strong>Updated</strong> the <code>SidecarInjectionSpec</code> CRD to read the <code>imagePullSecret</code> from <code>.Values.global</code> (<a href="https://github.com/istio/istio/pull/24365">Pull 24365</a>).</li> <li><strong>Updated</strong> split horizon to skip gateways that resolve hostnames.</li> <li><strong>Fixed</strong> <code>istioctl experimental metrics</code> to only flag error response codes as errors (<a href="https://github.com/istio/istio/issues/24322">Issue 24322</a>)</li> <li><strong>Updated</strong> <code>istioctl analyze</code> to sort output formats.</li> <li><strong>Updated</strong> gateways to use <code>proxyMetadata</code></li> <li><strong>Updated</strong> the
Prometheus sidecar to use <code>proxyMetadata</code> (<a href="https://github.com/istio/istio/pull/24415">Pull 24415</a>).</li> <li><strong>Removed</strong> invalid configuration from <code>PodSecurityContext</code> when <code>gateway.runAsRoot</code> is enabled (<a href="https://github.com/istio/istio/issues/24469">Issue 24469</a>).</li> </ul> <h2 id="grafana-addon-security-fixes">Grafana addon security fixes</h2> <p>We&rsquo;ve updated the version of Grafana shipped with Istio from 6.5.2 to 6.7.4. This addresses a Grafana security issue, rated high, that can allow access to internal cluster resources using the Grafana avatar feature. <a href="https://grafana.com/blog/2020/06/03/grafana-6.7.4-and-7.0.2-released-with-important-security-fix/">(CVE-2020-13379)</a></p>Thu, 18 Jun 2020 00:00:00 +0000/v1.9/news/releases/1.6.x/announcing-1.6.3//v1.9/news/releases/1.6.x/announcing-1.6.3/Announcing Istio 1.5.6 <p>This release contains bug fixes to improve robustness. This release note describes what’s different between Istio 1.5.5 and Istio 1.5.6.</p> <div class="relnote-actions call-to-action"> <a class="entry" href="/v1.9/news/releases/1.5.x/announcing-1.5/upgrade-notes/"> <h5>BEFORE YOU UPGRADE</h5> <p>Things to know and prepare before upgrading.</p> </a> <a class="update-notice entry" data-title='Update Notice' data-downloadhref="https://github.com/istio/istio/releases/tag/1.5.6" data-downloadbuttontext="DOWNLOAD 1.5.6" data-updateadvice='Before you download 1.5.6, you should know that there&#39;s a newer patch release with the latest bug fixes and perf improvements.'
data-updatebutton='LEARN ABOUT ISTIO 1.5.10' data-updatehref="/v1.9/news/releases/1.5.x/announcing-1.5.10/"> <h5>DOWNLOAD</h5> <p>Download and install this release.</p> </a> <a class="entry" href="https://archive.istio.io/v1.5/docs"> <h5>DOCS</h5> <p>Visit the documentation for this release.</p> </a> <a class="entry" href="https://github.com/istio/istio/compare/1.5.5...1.5.6"> <h5>SOURCE CHANGES</h5> <p>Inspect the full set of source code changes.</p> </a> </div> <h2 id="security">Security</h2> <ul> <li><strong>Updated</strong> Node.js and jQuery versions used in bookinfo.</li> </ul> <h2 id="changes">Changes</h2> <ul> <li><strong>Fixed</strong> Transfer-Encoding value case-sensitivity in Envoy (<a href="https://github.com/envoyproxy/envoy/issues/10041">Envoy&rsquo;s issue 10041</a>)</li> <li><strong>Fixed</strong> handling of user defined ingress gateway configuration (<a href="https://github.com/istio/istio/issues/23303">Issue 23303</a>)</li> <li><strong>Fixed</strong> adding <code>TCP MX ALPN</code> in <code>UpstreamTlsContext</code> for clusters that specify <code>http2_protocol_options</code> (<a href="https://github.com/istio/istio/issues/23907">Issue 23907</a>)</li> <li><strong>Fixed</strong> election lock for namespace configmap controller.</li> <li><strong>Fixed</strong> <code>istioctl validate -f</code> for <code>networking.istio.io/v1beta1</code> rules (<a href="https://github.com/istio/istio/issues/24064">Issue 24064</a>)</li> <li><strong>Fixed</strong> aggregate clusters configuration (<a href="https://github.com/istio/istio/issues/23909">Issue 23909</a>)</li> <li><strong>Fixed</strong> Prometheus mTLS pods scraping (<a href="https://github.com/istio/istio/issues/22391">Issue 22391</a>)</li> <li><strong>Fixed</strong> ingress crash for overlapping hosts without match (<a href="https://github.com/istio/istio/issues/22910">Issue 22910</a>)</li> <li><strong>Fixed</strong> Istio telemetry Pod crashes (<a
href="https://github.com/istio/istio/issues/23813">Issue 23813</a>)</li> <li><strong>Removed</strong> hard-coded operator namespace (<a href="https://github.com/istio/istio/issues/24073">Issue 24073</a>)</li> </ul>Wed, 17 Jun 2020 00:00:00 +0000/v1.9/news/releases/1.5.x/announcing-1.5.6//v1.9/news/releases/1.5.x/announcing-1.5.6/ISTIO-SECURITY-2020-006 <table> <thead> <tr> <th colspan="2">Disclosure Details</th> </tr> </thead> <tbody> <tr> <td>CVE(s)</td> <td> <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11080">CVE-2020-11080</a><br> </td> </tr> <tr> <td>CVSS Impact Score</td> <td>7.5 <a href="https://nvd.nist.gov/vuln-metrics/cvss/v3-calculator?vector=AV%3aN%2fAC%3aL%2fPR%3aN%2fUI%3aN%2fS%3aU%2fC%3aN%2fI%3aN%2fA%3aH">AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H</a></td> </tr> <tr> <td>Affected Releases</td> <td> 1.4 to 1.4.9<br> 1.5 to 1.5.4<br> 1.6 to 1.6.1<br> </td> </tr> </tbody> </table> <p>A vulnerability affecting the HTTP2 library used by Envoy has been fixed and publicly disclosed (c.f. <a href="https://github.com/nghttp2/nghttp2/security/advisories/GHSA-q5wr-xfw9-q7xr">Denial of service: Overly large SETTINGS frames</a> ). Unfortunately Istio did not benefit from a responsible disclosure process.</p> <ul> <li><strong><a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11080">CVE-2020-11080</a></strong>: By sending a specially crafted packet, an attacker could cause the CPU to spike at 100%. This could be sent to the ingress gateway or a sidecar. 
<ul> <li>CVSS Score: 7.5 <a href="https://nvd.nist.gov/vuln-metrics/cvss/v3-calculator?vector=AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H&amp;version=3.1">AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H</a></li> </ul></li> </ul> <h2 id="mitigation">Mitigation</h2> <p>As a temporary workaround, HTTP2 support can be disabled at the ingress gateway using a configuration such as the one below. (Note that disabling HTTP2 support at ingress is only an option if you are not exposing gRPC services through ingress.)</p> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' >apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: disable-ingress-h2
  namespace: istio-system
spec:
  workloadSelector:
    labels:
      istio: ingressgateway
  configPatches:
  - applyTo: NETWORK_FILTER # http connection manager is a filter in Envoy
    match:
      context: GATEWAY
      listener:
        filterChain:
          filter:
            name: &#34;envoy.http_connection_manager&#34;
    patch:
      operation: MERGE
      value:
        typed_config:
          &#34;@type&#34;: type.googleapis.com/envoy.config.filter.network.http_connection_manager.v2.HttpConnectionManager
          codec_type: HTTP1
</code></pre> <ul> <li>For Istio 1.4.x deployments: update to <a href="/v1.9/news/releases/1.4.x/announcing-1.4.10">Istio 1.4.10</a> or later.</li> <li>For Istio 1.5.x deployments: update to <a href="/v1.9/news/releases/1.5.x/announcing-1.5.5">Istio 1.5.5</a> or later.</li> <li>For Istio 1.6.x deployments: update to <a href="/v1.9/news/releases/1.6.x/announcing-1.6.2">Istio 1.6.2</a> or later.</li> </ul> <h2 id="credit">Credit</h2> <p>We&rsquo;d like to thank <code>Michael Barton</code> for bringing this publicly disclosed vulnerability to our attention.</p> <h2 id="reporting-vulnerabilities">Reporting vulnerabilities</h2> <p>We’d like to remind our community to follow the <a href="/v1.9/about/security-vulnerabilities/">vulnerability reporting process</a> to report any bug that can result in a security vulnerability.</p>Thu, 11 Jun 2020 00:00:00
+0000/v1.9/news/security/istio-security-2020-006//v1.9/news/security/istio-security-2020-006/CVEAnnouncing Istio 1.6.2 <p>This release fixes the security vulnerability described in <a href="/v1.9/news/security/istio-security-2020-006">our June 11th, 2020 news post</a>.</p> <p>This release note describes what&rsquo;s different between Istio 1.6.2 and Istio 1.6.1.</p> <div class="relnote-actions call-to-action"> <a class="entry" href="/v1.9/news/releases/1.6.x/announcing-1.6/upgrade-notes/"> <h5>BEFORE YOU UPGRADE</h5> <p>Things to know and prepare before upgrading.</p> </a> <a class="update-notice entry" data-title='Update Notice' data-downloadhref="https://github.com/istio/istio/releases/tag/1.6.2" data-downloadbuttontext="DOWNLOAD 1.6.2" data-updateadvice='Before you download 1.6.2, you should know that there&#39;s a newer patch release with the latest bug fixes and perf improvements.' data-updatebutton='LEARN ABOUT ISTIO 1.6.14' data-updatehref="/v1.9/news/releases/1.6.x/announcing-1.6.14/"> <h5>DOWNLOAD</h5> <p>Download and install this release.</p> </a> <a class="entry" href="https://archive.istio.io/v1.6/docs"> <h5>DOCS</h5> <p>Visit the documentation for this release.</p> </a> <a class="entry" href="https://github.com/istio/istio/compare/1.6.1...1.6.2"> <h5>SOURCE CHANGES</h5> <p>Inspect the full set of source code changes.</p> </a> </div> <h2 id="security-update">Security update</h2> <ul> <li><strong>ISTIO-SECURITY-2020-006</strong> Excessive CPU usage when processing HTTP/2 SETTINGS frames with too many parameters, potentially leading to a denial of service.</li> </ul> <p><strong><a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11080">CVE-2020-11080</a></strong>: By sending a specially crafted packet, an attacker could cause the CPU to spike at 100%. 
This could be sent to the ingress gateway or a sidecar.</p>Thu, 11 Jun 2020 00:00:00 +0000/v1.9/news/releases/1.6.x/announcing-1.6.2//v1.9/news/releases/1.6.x/announcing-1.6.2/Announcing Istio 1.5.5 <p>This release fixes the security vulnerability described in <a href="/v1.9/news/security/istio-security-2020-006">our June 11th, 2020 news post</a>.</p> <p>This release note describes what&rsquo;s different between Istio 1.5.5 and Istio 1.5.4.</p> <div class="relnote-actions call-to-action"> <a class="entry" href="/v1.9/news/releases/1.5.x/announcing-1.5/upgrade-notes/"> <h5>BEFORE YOU UPGRADE</h5> <p>Things to know and prepare before upgrading.</p> </a> <a class="update-notice entry" data-title='Update Notice' data-downloadhref="https://github.com/istio/istio/releases/tag/1.5.5" data-downloadbuttontext="DOWNLOAD 1.5.5" data-updateadvice='Before you download 1.5.5, you should know that there&#39;s a newer patch release with the latest bug fixes and perf improvements.' data-updatebutton='LEARN ABOUT ISTIO 1.5.10' data-updatehref="/v1.9/news/releases/1.5.x/announcing-1.5.10/"> <h5>DOWNLOAD</h5> <p>Download and install this release.</p> </a> <a class="entry" href="https://archive.istio.io/v1.5/docs"> <h5>DOCS</h5> <p>Visit the documentation for this release.</p> </a> <a class="entry" href="https://github.com/istio/istio/compare/1.5.4...1.5.5"> <h5>SOURCE CHANGES</h5> <p>Inspect the full set of source code changes.</p> </a> </div> <h2 id="security-update">Security update</h2> <ul> <li><strong>ISTIO-SECURITY-2020-006</strong> Excessive CPU usage when processing HTTP/2 SETTINGS frames with too many parameters, potentially leading to a denial of service.</li> </ul> <p><strong><a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11080">CVE-2020-11080</a></strong>: By sending a specially crafted packet, an attacker could cause the CPU to spike at 100%. 
This could be sent to the ingress gateway or a sidecar.</p>Thu, 11 Jun 2020 00:00:00 +0000/v1.9/news/releases/1.5.x/announcing-1.5.5//v1.9/news/releases/1.5.x/announcing-1.5.5/Support for Istio 1.4 has ended<p>As <a href="/v1.9/news/support/announcing-1.4-eol/">previously announced</a>, support for Istio 1.4 has now officially ended.</p> <p>At this point we will no longer back-port fixes for security issues and critical bugs to 1.4, so we heartily encourage you to upgrade to the latest version of Istio (1.9.5) if you haven&rsquo;t already.</p>Fri, 05 Jun 2020 00:00:00 +0000/v1.9/news/support/announcing-1.4-eol-final//v1.9/news/support/announcing-1.4-eol-final/Announcing Istio 1.6.1 <p>This release contains bug fixes to improve robustness. This release note describes what’s different between Istio 1.6.0 and Istio 1.6.1.</p> <div class="relnote-actions call-to-action"> <a class="entry" href="/v1.9/news/releases/1.6.x/announcing-1.6/upgrade-notes/"> <h5>BEFORE YOU UPGRADE</h5> <p>Things to know and prepare before upgrading.</p> </a> <a class="update-notice entry" data-title='Update Notice' data-downloadhref="https://github.com/istio/istio/releases/tag/1.6.1" data-downloadbuttontext="DOWNLOAD 1.6.1" data-updateadvice='Before you download 1.6.1, you should know that there&#39;s a newer patch release with the latest bug fixes and perf improvements.' 
data-updatebutton='LEARN ABOUT ISTIO 1.6.14' data-updatehref="/v1.9/news/releases/1.6.x/announcing-1.6.14/"> <h5>DOWNLOAD</h5> <p>Download and install this release.</p> </a> <a class="entry" href="https://archive.istio.io/v1.6/docs"> <h5>DOCS</h5> <p>Visit the documentation for this release.</p> </a> <a class="entry" href="https://github.com/istio/istio/compare/1.6.0...1.6.1"> <h5>SOURCE CHANGES</h5> <p>Inspect the full set of source code changes.</p> </a> </div> <h2 id="changes">Changes</h2> <ul> <li><strong>Fixed</strong> support for pod annotations to override mesh-wide proxy settings</li> <li><strong>Updated</strong> <code>EnvoyFilter</code> to register all filter types in order to support <code>typed_config</code> attributes (<a href="https://github.com/istio/istio/issues/23909">Issue 23909</a>)</li> <li><strong>Fixed</strong> handling of custom resource names for Gateways (<a href="https://github.com/istio/istio/issues/23303">Issue 23303</a>)</li> <li><strong>Fixed</strong> an issue where <code>istiod</code> fails to issue certificates to a remote cluster. 
<code>Istiod</code> now has support for the cluster name and certificate to generate the <code>injectionURL</code> (<a href="https://github.com/istio/istio/issues/23879">Issue 23879</a>)</li> <li><strong>Fixed</strong> remote cluster&rsquo;s validation controller to check <code>istiod</code>&rsquo;s ready status endpoint (<a href="https://github.com/istio/istio/issues/23945">Issue 23945</a>)</li> <li><strong>Improved</strong> <code>regexp</code> fields validation to match Envoy&rsquo;s validation (<a href="https://github.com/istio/istio/issues/23436">Issue 23436</a>)</li> <li><strong>Fixed</strong> <code>istioctl analyze</code> to validate <code>networking.istio.io/v1beta1</code> resources (<a href="https://github.com/istio/istio/issues/24064">Issue 24064</a>)</li> <li><strong>Fixed</strong> typo of <code>istio</code> in <code>ControlZ</code> dashboard log (<a href="https://github.com/istio/istio/issues/24039">Issue 24039</a>)</li> <li><strong>Fixed</strong> tar name to directory translation (<a href="https://github.com/istio/istio/issues/23635">Issue 23635</a>)</li> <li><strong>Improved</strong> certificate management for multi-cluster and virtual machine setups by moving certificates from the <code>samples/certs</code> directory to the <code>install/tools/certs</code> directory</li> <li><strong>Improved</strong> <code>pilot-agent</code>&rsquo;s handling of client certificates when only a CA client certificate is present</li> <li><strong>Improved</strong> <code>istioctl upgrade</code> to direct users to the <code>istio.io</code> website to migrate from <code>v1alpha1</code> security policies to <code>v1beta1</code> security policies</li> <li><strong>Fixed</strong> release URL name for <code>istioctl upgrade</code></li> <li><strong>Fixed</strong> <code>k8s.overlays</code> for cluster resources</li> <li><strong>Fixed</strong> <code>HTTP/HTTP2</code> conflict at Gateway (<a href="https://github.com/istio/istio/issues/24061">Issue 24061</a> and <a
href="https://github.com/istio/istio/issues/19690">Issue 19690</a>)</li> <li><strong>Fixed</strong> Istio operator to respect the <code>--operatorNamespace</code> argument (<a href="https://github.com/istio/istio/issues/24073">Issue 24073</a>)</li> <li><strong>Fixed</strong> Istio operator hanging when uninstalling Istio (<a href="https://github.com/istio/istio/issues/24038">Issue 24038</a>)</li> <li><strong>Fixed</strong> TCP metadata exchange for upstream clusters that specify <code>http2_protocol_options</code> (<a href="https://github.com/istio/istio/issues/23907">Issue 23907</a>)</li> <li><strong>Added</strong> <code>sideEffects</code> field to <code>MutatingWebhookConfiguration</code> for <code>istio-sidecar-injector</code> (<a href="https://github.com/istio/istio/issues/23485">Issue 23485</a>)</li> <li><strong>Improved</strong> installation for replicated control planes (<a href="https://github.com/istio/istio/issues/23871">Issue 23871</a>)</li> <li><strong>Fixed</strong> <code>istioctl experimental precheck</code> to report compatible versions of Kubernetes (1.14-1.18) (<a href="https://github.com/istio/istio/issues/24132">Issue 24132</a>)</li> <li><strong>Fixed</strong> Istio operator namespace mismatches that caused a resource leak when pruning resources (<a href="https://github.com/istio/istio/issues/24222">Issue 24222</a>)</li> <li><strong>Fixed</strong> SDS Agent failing to start when proxy uses file mounted certs for Gateways (<a href="https://github.com/istio/istio/issues/23646">Issue 23646</a>)</li> <li><strong>Fixed</strong> TCP over HTTP conflicts that caused invalid configuration to be generated (<a href="https://github.com/istio/istio/issues/24084">Issue 24084</a>)</li> <li><strong>Fixed</strong> the use of external name when remote Pilot address is a hostname (<a href="https://github.com/istio/istio/issues/24155">Issue 24155</a>)</li> <li><strong>Fixed</strong> Istio CNI node <code>DaemonSet</code> starting when Istio CNI and 
<code>cos_containerd</code> are enabled on Google Kubernetes Engine (GKE) (<a href="https://github.com/istio/istio/issues/23643">Issue 23643</a>)</li> <li><strong>Fixed</strong> Istio CNI causing pod initialization to experience a 30-40 second delay on startup when DNS is unreachable (<a href="https://github.com/istio/istio/issues/23770">Issue 23770</a>)</li> <li><strong>Improved</strong> Google Stackdriver telemetry use of UIDs with GCE VMs</li> <li><strong>Improved</strong> telemetry plugins to not crash due to invalid configuration (<a href="https://github.com/istio/istio/issues/23865">Issue 23865</a>)</li> <li><strong>Fixed</strong> a proxy sidecar segfault when the response to HTTP calls by WASM filters is empty (<a href="https://github.com/istio/istio/issues/23890">Issue 23890</a>)</li> <li><strong>Fixed</strong> a proxy sidecar segfault while parsing CEL expressions (<a href="https://github.com/envoyproxy/envoy-wasm/issues/497">Issue 497</a>)</li> </ul> <h2 id="bookinfo-sample-application-security-fixes">Bookinfo sample application security fixes</h2> <p>We&rsquo;ve updated the versions of Node.js and jQuery used in the Bookinfo sample application. Node.js has been upgraded from version 12.9 to 12.18. jQuery has been updated from version 2.1.4 to version 3.5.0. 
The highest rated vulnerability fixed: <em>HTTP request smuggling using malformed Transfer-Encoding header (Critical) (CVE-2019-15605)</em></p>Thu, 04 Jun 2020 00:00:00 +0000/v1.9/news/releases/1.6.x/announcing-1.6.1//v1.9/news/releases/1.6.x/announcing-1.6.1/Announcing Istio 1.5.4 <p>This release fixes the security vulnerability described in <a href="/v1.9/news/security/istio-security-2020-005">our May 12th, 2020 news post</a>.</p> <p>This release note describes what&rsquo;s different between Istio 1.5.4 and Istio 1.5.3.</p> <div class="relnote-actions call-to-action"> <a class="entry" href="/v1.9/news/releases/1.5.x/announcing-1.5/upgrade-notes/"> <h5>BEFORE YOU UPGRADE</h5> <p>Things to know and prepare before upgrading.</p> </a> <a class="update-notice entry" data-title='Update Notice' data-downloadhref="https://github.com/istio/istio/releases/tag/1.5.4" data-downloadbuttontext="DOWNLOAD 1.5.4" data-updateadvice='Before you download 1.5.4, you should know that there&#39;s a newer patch release with the latest bug fixes and perf improvements.' data-updatebutton='LEARN ABOUT ISTIO 1.5.10' data-updatehref="/v1.9/news/releases/1.5.x/announcing-1.5.10/"> <h5>DOWNLOAD</h5> <p>Download and install this release.</p> </a> <a class="entry" href="https://archive.istio.io/v1.5/docs"> <h5>DOCS</h5> <p>Visit the documentation for this release.</p> </a> <a class="entry" href="https://github.com/istio/istio/compare/1.5.3...1.5.4"> <h5>SOURCE CHANGES</h5> <p>Inspect the full set of source code changes.</p> </a> </div> <h2 id="security-update">Security update</h2> <ul> <li><strong>ISTIO-SECURITY-2020-005</strong> Denial of Service with Telemetry V2 enabled.</li> </ul> <p><strong><a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-10739">CVE-2020-10739</a></strong>: By sending a specially crafted packet, an attacker could trigger a Null Pointer Exception resulting in a Denial of Service. 
This could be sent to the ingress gateway or a sidecar.</p>Wed, 13 May 2020 00:00:00 +0000/v1.9/news/releases/1.5.x/announcing-1.5.4//v1.9/news/releases/1.5.x/announcing-1.5.4/ISTIO-SECURITY-2020-005 <table> <thead> <tr> <th colspan="2">Disclosure Details</th> </tr> </thead> <tbody> <tr> <td>CVE(s)</td> <td> <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-10739">CVE-2020-10739</a><br> </td> </tr> <tr> <td>CVSS Impact Score</td> <td>7.5 <a href="https://nvd.nist.gov/vuln-metrics/cvss/v3-calculator?vector=AV%3aN%2fAC%3aL%2fPR%3aN%2fUI%3aN%2fS%3aU%2fC%3aN%2fI%3aN%2fA%3aH">AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H</a></td> </tr> <tr> <td>Affected Releases</td> <td> 1.4 to 1.4.8<br> 1.5 to 1.5.3<br> </td> </tr> </tbody> </table> <p>Istio 1.4 with telemetry v2 enabled and Istio 1.5 contain the following vulnerability when telemetry v2 is enabled:</p> <ul> <li><strong><a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-10739">CVE-2020-10739</a></strong>: By sending a specially crafted packet, an attacker could trigger a Null Pointer Exception resulting in a Denial of Service. This could be sent to the ingress gateway or a sidecar. 
<ul> <li>CVSS Score: 7.5 <a href="https://nvd.nist.gov/vuln-metrics/cvss/v3-calculator?vector=AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H&amp;version=3.1">AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H</a></li> </ul></li> </ul> <h2 id="mitigation">Mitigation</h2> <ul> <li>For Istio 1.4.x deployments: update to <a href="/v1.9/news/releases/1.4.x/announcing-1.4.9">Istio 1.4.9</a> or later.</li> <li>For Istio 1.5.x deployments: update to <a href="/v1.9/news/releases/1.5.x/announcing-1.5.4">Istio 1.5.4</a> or later.</li> <li>Workaround: You can disable telemetry v2 by running the following:</li> </ul> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ istioctl manifest apply --set values.telemetry.v2.enabled=false </code></pre> <h2 id="credit">Credit</h2> <p>We&rsquo;d like to thank <code>Joren Zandstra</code> for the original bug report.</p> <h2 id="reporting-vulnerabilities">Reporting vulnerabilities</h2> <p>We&rsquo;d like to remind our community to follow the <a href="/v1.9/about/security-vulnerabilities/">vulnerability reporting process</a> to report any bug that can result in a security vulnerability.</p>Tue, 12 May 2020 00:00:00 +0000/v1.9/news/security/istio-security-2020-005//v1.9/news/security/istio-security-2020-005/CVEAnnouncing Istio 1.5.3 <div> <aside class="callout warning"> <div class="type"> <svg class="large-icon"><use xlink:href="/v1.9/img/icons.svg#callout-warning"/></svg> </div> <div class="content">DO NOT USE this release. USE release 1.5.4 instead.</div> </aside> </div> <p>Due to a publishing error, the 1.5.3 images do not contain the fix for CVE-2020-10739 as claimed in the original announcement.</p> <p>This release contains bug fixes to improve robustness. 
This release note describes what&rsquo;s different between Istio 1.5.3 and Istio 1.5.2.</p> <div class="relnote-actions call-to-action"> <a class="entry" href="/v1.9/news/releases/1.5.x/announcing-1.5/upgrade-notes/"> <h5>BEFORE YOU UPGRADE</h5> <p>Things to know and prepare before upgrading.</p> </a> <a class="update-notice entry" data-title='Update Notice' data-downloadhref="https://github.com/istio/istio/releases/tag/1.5.3" data-downloadbuttontext="DOWNLOAD 1.5.3" data-updateadvice='Before you download 1.5.3, you should know that there&#39;s a newer patch release with the latest bug fixes and perf improvements.' data-updatebutton='LEARN ABOUT ISTIO 1.5.10' data-updatehref="/v1.9/news/releases/1.5.x/announcing-1.5.10/"> <h5>DOWNLOAD</h5> <p>Download and install this release.</p> </a> <a class="entry" href="https://archive.istio.io/v1.5/docs"> <h5>DOCS</h5> <p>Visit the documentation for this release.</p> </a> <a class="entry" href="https://github.com/istio/istio/compare/1.5.2...1.5.3"> <h5>SOURCE CHANGES</h5> <p>Inspect the full set of source code changes.</p> </a> </div> <h2 id="changes">Changes</h2> <ul> <li><strong>Fixed</strong> the Helm installer to install Kiali using a dynamically generated signing key.</li> <li><strong>Fixed</strong> overlaying the generated Kubernetes resources for addon components with user-defined overlays <a href="https://github.com/istio/istio/issues/23048">(Issue 23048)</a></li> <li><strong>Fixed</strong> <code>istio-sidecar.deb</code> failing to start on Debian buster with <code>iptables</code> default <code>nftables</code> setting <a href="https://github.com/istio/istio/issues/23279">(Issue 23279)</a></li> <li><strong>Fixed</strong> the corresponding hash policy not being updated after the header name specified in <code>DestinationRule.trafficPolicy.loadBalancer.consistentHash.httpHeaderName</code> is changed <a href="https://github.com/istio/istio/issues/23434">(Issue 23434)</a></li> <li><strong>Fixed</strong> traffic routing 
when deployed in a namespace other than istio-system <a href="https://github.com/istio/istio/issues/23401">(Issue 23401)</a></li> </ul>Tue, 12 May 2020 00:00:00 +0000/v1.9/news/releases/1.5.x/announcing-1.5.3//v1.9/news/releases/1.5.x/announcing-1.5.3/Announcing Istio 1.4.9 <p>This release contains bug fixes to improve robustness and fixes for the security vulnerabilities described in <a href="/v1.9/news/security/istio-security-2020-005">our May 12th, 2020 news post</a>. This release note describes what&rsquo;s different between Istio 1.4.9 and Istio 1.4.8.</p> <div class="relnote-actions call-to-action"> <a class="entry" href="/v1.9/news/releases/1.4.x/announcing-1.4/upgrade-notes/"> <h5>BEFORE YOU UPGRADE</h5> <p>Things to know and prepare before upgrading.</p> </a> <a class="update-notice entry" data-title='Update Notice' data-downloadhref="https://github.com/istio/istio/releases/tag/1.4.9" data-downloadbuttontext="DOWNLOAD 1.4.9" data-updateadvice='Before you download 1.4.9, you should know that there&#39;s a newer patch release with the latest bug fixes and perf improvements.' data-updatebutton='LEARN ABOUT ISTIO 1.4.10' data-updatehref="/v1.9/news/releases/1.4.x/announcing-1.4.10/"> <h5>DOWNLOAD</h5> <p>Download and install this release.</p> </a> <a class="entry" href="https://archive.istio.io/v1.4/docs"> <h5>DOCS</h5> <p>Visit the documentation for this release.</p> </a> <a class="entry" href="https://github.com/istio/istio/compare/1.4.8...1.4.9"> <h5>SOURCE CHANGES</h5> <p>Inspect the full set of source code changes.</p> </a> </div> <h2 id="security-update">Security update</h2> <ul> <li><strong>ISTIO-SECURITY-2020-005</strong> Denial of Service with Telemetry V2 enabled.</li> </ul> <p><strong><a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-10739">CVE-2020-10739</a></strong>: By sending a specially crafted packet, an attacker could trigger a Null Pointer Exception resulting in a Denial of Service. 
This could be sent to the ingress gateway or a sidecar.</p> <h2 id="bug-fixes">Bug Fixes</h2> <ul> <li><strong>Fixed</strong> the Helm installer to install Kiali using a dynamically generated signing key.</li> <li><strong>Fixed</strong> Citadel to ignore namespaces that are not part of the mesh.</li> <li><strong>Fixed</strong> the Istio operator installer to print the name of any resources that are not ready when an installation timeout occurs.</li> </ul>Tue, 12 May 2020 00:00:00 +0000/v1.9/news/releases/1.4.x/announcing-1.4.9//v1.9/news/releases/1.4.x/announcing-1.4.9/Support for Istio 1.4 ends on June 5th, 2020<p>According to Istio&rsquo;s <a href="/v1.9/about/supported-releases#support-policy">support policy</a>, LTS releases like 1.4 are supported for three months after the next LTS release. Since <a href="/v1.9/news/releases/1.5.x/announcing-1.5/">1.5 was released on March 5th</a>, support for 1.4 will end on June 5th, 2020.</p> <p>At that point we will stop back-porting fixes for security issues and critical bugs to 1.4, so we encourage you to upgrade to the latest version of Istio (1.9.5). If you don&rsquo;t do this, you may put yourself in the position of having to do a major upgrade on a short timeframe to pick up a critical fix.</p> <p>We care about you and your clusters, so please be kind to yourself and upgrade.</p>Tue, 05 May 2020 00:00:00 +0000/v1.9/news/support/announcing-1.4-eol//v1.9/news/support/announcing-1.4-eol/Announcing Istio 1.5.2 <p>This release contains bug fixes to improve robustness. 
This release note describes what’s different between Istio 1.5.1 and Istio 1.5.2.</p> <div class="relnote-actions call-to-action"> <a class="entry" href="/v1.9/news/releases/1.5.x/announcing-1.5/upgrade-notes/"> <h5>BEFORE YOU UPGRADE</h5> <p>Things to know and prepare before upgrading.</p> </a> <a class="update-notice entry" data-title='Update Notice' data-downloadhref="https://github.com/istio/istio/releases/tag/1.5.2" data-downloadbuttontext="DOWNLOAD 1.5.2" data-updateadvice='Before you download 1.5.2, you should know that there&#39;s a newer patch release with the latest bug fixes and perf improvements.' data-updatebutton='LEARN ABOUT ISTIO 1.5.10' data-updatehref="/v1.9/news/releases/1.5.x/announcing-1.5.10/"> <h5>DOWNLOAD</h5> <p>Download and install this release.</p> </a> <a class="entry" href="https://archive.istio.io/v1.5/docs"> <h5>DOCS</h5> <p>Visit the documentation for this release.</p> </a> <a class="entry" href="https://github.com/istio/istio/compare/1.5.1...1.5.2"> <h5>SOURCE CHANGES</h5> <p>Inspect the full set of source code changes.</p> </a> </div> <h2 id="changes">Changes</h2> <ul> <li><strong>Fixed</strong> Istiod deployment lacking label used by the matching <code>PodDisruptionBudget</code> (<a href="https://github.com/istio/istio/issues/22267">Issue 22267</a>)</li> <li><strong>Fixed</strong> Custom Istio installation with istioctl not working using external charts (<a href="https://github.com/istio/istio/issues/22368">Issue 22368</a>)</li> <li><strong>Fixed</strong> Panic in <code>istio-init</code> with GKE+COS and <code>interceptionMode</code>: TPROXY (<a href="https://github.com/istio/istio/issues/22500">Issue 22500</a>)</li> <li><strong>Fixed</strong> Logging for validation by sending warnings to <code>stdErr</code> (<a href="https://github.com/istio/istio/issues/22496">Issue 22496</a>)</li> <li><strong>Fixed</strong> Kiali not working when external Prometheus link used for the IstioOperator API (<a 
href="https://github.com/istio/istio/issues/22510">Issue 22510</a>)</li> <li><strong>Fixed</strong> Istio agent should calculate grace period based on the cert TTL, not client-side settings (<a href="https://github.com/istio/istio/issues/22226">Issue 22226</a>)</li> <li><strong>Fixed</strong> Incorrect error message referring to incorrect CLI option for the <code>istioctl kube-inject</code> command (<a href="https://github.com/istio/istio/issues/22501">Issue 22501</a>)</li> <li><strong>Fixed</strong> IstioOperator validation of slice (<a href="https://github.com/istio/istio/issues/21915">Issue 21915</a>)</li> <li><strong>Fixed</strong> Race condition caused by read/write of <code>rootCert</code> and <code>rootCertExpireTime</code> not always being protected (<a href="https://github.com/istio/istio/issues/22627">Issue 22627</a>)</li> <li><strong>Fixed</strong> BlackHoleCluster HTTP metrics broken with Telemetry v2 (<a href="https://github.com/istio/istio/issues/21385">Issue 21385</a>)</li> <li><strong>Fixed</strong> <code>istio-init</code> container failing when Istio CNI is enabled (<a href="https://github.com/istio/istio/issues/22695">Issue 22695</a>)</li> <li><strong>Fixed</strong> istioctl does not set gateway name for multiple gateways (<a href="https://github.com/istio/istio/issues/22703">Issue 22703</a>)</li> <li><strong>Fixed</strong> Unstable inbound bind address when configuring a sidecar ingress listener without bind address (<a href="https://github.com/istio/istio/issues/22830">Issue 22830</a>)</li> <li><strong>Fixed</strong> Proxy pods for Istio 1.4 not showing up when upgrading from Istio 1.4 to 1.5 using default profile (<a href="https://github.com/istio/istio/issues/22841">Issue 22841</a>)</li> <li><strong>Fixed</strong> <code>PersistentVolumeClaim</code> for Grafana not being created in the namespace specified in the IstioOperator spec (<a href="https://github.com/istio/istio/issues/22835">Issue 22835</a>)</li> <li><strong>Fixed</strong> 
<code>istio-sidecar-injector</code> and istiod-related pods crashing when applying a new manifest through istioctl because <code>alwaysInjectSelector</code> and <code>neverInjectSelector</code> are not correctly indented in the <code>istio-sidecar-injector</code> config map (<a href="https://github.com/istio/istio/issues/23027">Issue 23027</a>)</li> <li><strong>Fixed</strong> Prometheus scraping failing in CNI injected pods because the default <code>excludeInboundPort</code> configuration does not include port 15090 (<a href="https://github.com/istio/istio/issues/23038">Issue 23038</a>)</li> <li><strong>Fixed</strong> <code>Lightstep</code> secret volume issue causing the bundled Prometheus to not install correctly with Istio operator (<a href="https://github.com/istio/istio/issues/23078">Issue 23078</a>)</li> <li><strong>Fixed</strong> Avoid using host header to extract destination service name at gateway in default Telemetry V2 configuration.</li> <li><strong>Fixed</strong> Zipkin: Fix wrongly rendered timestamp value (<a href="https://github.com/istio/istio/issues/22968">Issue 22968</a>)</li> <li><strong>Improved</strong> Add annotations for setting CPU/memory limits on sidecar (<a href="https://github.com/istio/istio/issues/16126">Issue 16126</a>)</li> <li><strong>Improved</strong> Enable <code>rewriteAppHTTPProbe</code> annotation for liveness probe rewrite by default (<a href="https://github.com/istio/istio/issues/10357">Issue 10357</a>)</li> </ul>Fri, 24 Apr 2020 00:00:00 +0000/v1.9/news/releases/1.5.x/announcing-1.5.2//v1.9/news/releases/1.5.x/announcing-1.5.2/Announcing Istio 1.4.8 <p>This release includes bug fixes to improve robustness. This release note describes what’s different between Istio 1.4.7 and Istio 1.4.8.</p> <p>The fixes below focus on various issues related to installing Istio on OpenShift with CNI. 
Instructions for installing Istio on OpenShift with CNI can be found <a href="/v1.9/docs/setup/additional-setup/cni/#instructions-for-istio-1-4-x-and-openshift">here</a>.</p> <div class="relnote-actions call-to-action"> <a class="entry" href="/v1.9/news/releases/1.4.x/announcing-1.4/upgrade-notes/"> <h5>BEFORE YOU UPGRADE</h5> <p>Things to know and prepare before upgrading.</p> </a> <a class="update-notice entry" data-title='Update Notice' data-downloadhref="https://github.com/istio/istio/releases/tag/1.4.8" data-downloadbuttontext="DOWNLOAD 1.4.8" data-updateadvice='Before you download 1.4.8, you should know that there&#39;s a newer patch release with the latest bug fixes and perf improvements.' data-updatebutton='LEARN ABOUT ISTIO 1.4.10' data-updatehref="/v1.9/news/releases/1.4.x/announcing-1.4.10/"> <h5>DOWNLOAD</h5> <p>Download and install this release.</p> </a> <a class="entry" href="https://archive.istio.io/v1.4/docs"> <h5>DOCS</h5> <p>Visit the documentation for this release.</p> </a> <a class="entry" href="https://github.com/istio/istio/compare/1.4.7...1.4.8"> <h5>SOURCE CHANGES</h5> <p>Inspect the full set of source code changes.</p> </a> </div> <h2 id="bug-fixes">Bug fixes</h2> <ul> <li><strong>Fixed</strong> CNI installation on OpenShift (<a href="https://github.com/istio/istio/pull/21421">Issue 21421</a>) (<a href="https://github.com/istio/istio/issues/22449">Issue 22449</a>).</li> <li><strong>Fixed</strong> Not all inbound ports are redirected when CNI is enabled (<a href="https://github.com/istio/istio/issues/22498">Issue 22448</a>).</li> <li><strong>Fixed</strong> Syntax errors in gateway templates with GoLang 1.14 (<a href="https://github.com/istio/istio/issues/22366">Issue 22366</a>).</li> <li><strong>Fixed</strong> Remove namespace from <code>clusterrole</code> and <code>clusterrolebinding</code> (<a href="https://github.com/istio/cni/pull/297">PR 297</a>).</li> </ul>Thu, 23 Apr 2020 00:00:00 
+0000/v1.9/news/releases/1.4.x/announcing-1.4.8//v1.9/news/releases/1.4.x/announcing-1.4.8/ISTIO-SECURITY-2020-004 <table> <thead> <tr> <th colspan="2">Disclosure Details</th> </tr> </thead> <tbody> <tr> <td>CVE(s)</td> <td> <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-1764">CVE-2020-1764</a><br> </td> </tr> <tr> <td>CVSS Impact Score</td> <td>8.7 <a href="https://nvd.nist.gov/vuln-metrics/cvss/v3-calculator?vector=AV%3aA%2fAC%3aL%2fPR%3aL%2fUI%3aN%2fS%3aC%2fC%3aH%2fI%3aH%2fA%3aN">AV:A/AC:L/PR:L/UI:N/S:C/C:H/I:H/A:N</a></td> </tr> <tr> <td>Affected Releases</td> <td> 1.4 to 1.4.6<br> 1.5<br> </td> </tr> </tbody> </table> <p>Istio 1.4 to 1.4.6 and Istio 1.5 contain the following vulnerability:</p> <ul> <li><strong><a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-1764"><code>CVE-2020-1764</code></a></strong>: Istio uses a default <code>signing_key</code> for Kiali. This can allow an attacker to view and modify the Istio configuration. <ul> <li>CVSS Score: 8.7 <a href="https://nvd.nist.gov/vuln-metrics/cvss/v3-calculator?vector=AV:A/AC:L/PR:L/UI:N/S:C/C:H/I:H/A:N&amp;version=3.1">AV:A/AC:L/PR:L/UI:N/S:C/C:H/I:H/A:N</a></li> </ul></li> </ul> <p>In addition, another CVE is fixed in this release, described by this <a href="https://kiali.io/news/security-bulletins/kiali-security-001/">Kiali security bulletin</a>.</p> <h2 id="detection">Detection</h2> <p>Your installation is vulnerable in the following configuration:</p> <ul> <li>The Kiali version is 1.15 or earlier.</li> <li>The Kiali login token and signing key are unset.</li> </ul> <p>To check your Kiali version, run this command:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl get pods -n istio-system -l app=kiali -o yaml | grep image: </code></pre> <p>To determine if your login token is unset, run this command and check for blank output:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl get deploy kiali 
-n istio-system -o yaml | grep LOGIN_TOKEN_SIGNING_KEY </code></pre> <p>To determine if your signing key is unset, run this command and check for blank output:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl get cm kiali -n istio-system -o yaml | grep signing_key </code></pre> <h2 id="mitigation">Mitigation</h2> <ul> <li>For Istio 1.4.x deployments: update to <a href="/v1.9/news/releases/1.4.x/announcing-1.4.7">Istio 1.4.7</a> or later.</li> <li>For Istio 1.5.x deployments: update to <a href="/v1.9/news/releases/1.5.x/announcing-1.5.1">Istio 1.5.1</a> or later.</li> <li><p>Workaround: You can manually update the signing key to a random token using the following command:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl get cm kiali -n istio-system -o yaml | sed &#34;s/server:/login_token:\\\n \ signing_key: $(tr -dc &#39;a-zA-Z0-9&#39; &lt; /dev/urandom | fold -w 20 | head -n 1)\\\nserver:/&#34; \ | kubectl apply -f - ; kubectl delete pod -l app=kiali -n istio-system </code></pre></li> </ul>Wed, 25 Mar 2020 00:00:00 +0000/v1.9/news/security/istio-security-2020-004//v1.9/news/security/istio-security-2020-004/CVEAnnouncing Istio 1.5.1 <p>This release contains bug fixes to improve robustness and fixes for the security vulnerabilities described in <a href="/v1.9/news/security/istio-security-2020-004">our March 25th, 2020 news post</a>. 
This release note describes what’s different between Istio 1.5.0 and Istio 1.5.1.</p> <div class="relnote-actions call-to-action"> <a class="entry" href="/v1.9/news/releases/1.5.x/announcing-1.5/upgrade-notes/"> <h5>BEFORE YOU UPGRADE</h5> <p>Things to know and prepare before upgrading.</p> </a> <a class="update-notice entry" data-title='Update Notice' data-downloadhref="https://github.com/istio/istio/releases/tag/1.5.1" data-downloadbuttontext="DOWNLOAD 1.5.1" data-updateadvice='Before you download 1.5.1, you should know that there&#39;s a newer patch release with the latest bug fixes and perf improvements.' data-updatebutton='LEARN ABOUT ISTIO 1.5.10' data-updatehref="/v1.9/news/releases/1.5.x/announcing-1.5.10/"> <h5>DOWNLOAD</h5> <p>Download and install this release.</p> </a> <a class="entry" href="https://archive.istio.io/v1.5/docs"> <h5>DOCS</h5> <p>Visit the documentation for this release.</p> </a> <a class="entry" href="https://github.com/istio/istio/compare/1.5.0...1.5.1"> <h5>SOURCE CHANGES</h5> <p>Inspect the full set of source code changes.</p> </a> </div> <h2 id="security-update">Security update</h2> <ul> <li><strong>ISTIO-SECURITY-2020-004</strong> Istio uses a hard coded <code>signing_key</code> for Kiali.</li> </ul> <p><strong><a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-1764">CVE-2020-1764</a></strong>: Istio uses a default <code>signing key</code> to install Kiali. This can allow an attacker with access to Kiali to bypass authentication and gain administrative privileges over Istio. 
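</p> <p>For reference, a remediated Kiali config map sets an explicit, randomly generated key under <code>login_token.signing_key</code> instead of relying on the shipped default. The snippet below is a sketch rather than a complete Kiali configuration, and the key value is a placeholder:</p> <pre><code class='language-yaml'>apiVersion: v1
kind: ConfigMap
metadata:
  name: kiali
  namespace: istio-system
data:
  config.yaml: |
    login_token:
      signing_key: REPLACE-WITH-RANDOM-KEY  # placeholder; generate a random value
    server:
      web_root: /kiali
</code></pre> <p>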
In addition, another CVE is fixed in this release, described in the Kiali 1.15.1 <a href="https://kiali.io/news/security-bulletins/kiali-security-001/">release</a>.</p> <h2 id="changes">Changes</h2> <ul> <li><strong>Fixed</strong> an issue where Istio Operator instance deletion hangs for in-cluster operator (<a href="https://github.com/istio/istio/issues/22280">Issue 22280</a>)</li> <li><strong>Fixed</strong> istioctl proxy-status should not list differences if just the order of the routes has changed (<a href="https://github.com/istio/istio/issues/21709">Issue 21709</a>)</li> <li><strong>Fixed</strong> Incomplete support for array notation in &ldquo;istioctl manifest apply --set&rdquo; (<a href="https://github.com/istio/istio/issues/20950">Issue 20950</a>)</li> <li><strong>Fixed</strong> Add possibility to add annotations to services in Kubernetes service spec (<a href="https://github.com/istio/istio/issues/21995">Issue 21995</a>)</li> <li><strong>Fixed</strong> Enable setting ILB Gateway using istioctl (<a href="https://github.com/istio/istio/issues/20033">Issue 20033</a>)</li> <li><strong>Fixed</strong> istioctl does not correctly set names on gateways (<a href="https://github.com/istio/istio/issues/21938">Issue 21938</a>)</li> <li><strong>Fixed</strong> OpenID discovery does not work with beta request authentication policy (<a href="https://github.com/istio/istio/issues/21954">Issue 21954</a>)</li> <li><strong>Fixed</strong> Issues related to shared control plane multicluster (<a href="https://github.com/istio/istio/pull/22173">Issue 22173</a>)</li> <li><strong>Fixed</strong> Ingress port displaying target port instead of actual port (<a href="https://github.com/istio/istio/issues/22125">Issue 22125</a>)</li> <li><strong>Fixed</strong> Issue where endpoints were being pruned automatically when installing the Istio Controller (<a href="https://github.com/istio/istio/issues/21495">Issue 21495</a>)</li> <li><strong>Fixed</strong> Add istiod port to gateways for 
mesh expansion (<a href="https://github.com/istio/istio/issues/22027">Issue 22027</a>)</li> <li><strong>Fixed</strong> Multicluster secret controller silently ignoring updates to secrets (<a href="https://github.com/istio/istio/issues/18708">Issue 18708</a>)</li> <li><strong>Fixed</strong> Autoscaler for mixer-telemetry always being generated when deploying with istioctl or Helm (<a href="https://github.com/istio/istio/issues/20935">Issue 20935</a>)</li> <li><strong>Fixed</strong> Prometheus certificate provisioning is broken (<a href="https://github.com/istio/istio/issues/21843">Issue 21843</a>)</li> <li><strong>Fixed</strong> Segmentation fault in Pilot with beta mutual TLS (<a href="https://github.com/istio/istio/issues/21816">Issue 21816</a>)</li> <li><strong>Fixed</strong> Operator status enumeration not being rendered as a string (<a href="https://github.com/istio/istio/issues/21554">Issue 21554</a>)</li> <li><strong>Fixed</strong> in-cluster operator fails to install control plane after having deleted a prior control plane (<a href="https://github.com/istio/istio/issues/21467">Issue 21467</a>)</li> <li><strong>Fixed</strong> TCP metrics for BlackHole clusters do not work with Telemetry v2 (<a href="https://github.com/istio/istio/issues/21566">Issue 21566</a>)</li> <li><strong>Improved</strong> Add option to enable V8 runtime for telemetry V2 (<a href="https://github.com/istio/istio/pull/21846">Issue 21846</a>)</li> <li><strong>Improved</strong> compatibility of Helm gateway chart (<a href="https://github.com/istio/istio/pull/22295">Issue 22295</a>)</li> <li><strong>Improved</strong> operator by adding a Helm installation chart (<a href="https://github.com/istio/istio/issues/21861">Issue 21861</a>)</li> <li><strong>Improved</strong> Support custom CA on istio-agent (<a href="https://github.com/istio/istio/pull/22113">Issue 22113</a>)</li> <li><strong>Improved</strong> Add a flag that supports passing GCP metadata to STS (<a 
href="https://github.com/istio/istio/issues/21904">Issue 21904</a>)</li> </ul>Wed, 25 Mar 2020 00:00:00 +0000/v1.9/news/releases/1.5.x/announcing-1.5.1//v1.9/news/releases/1.5.x/announcing-1.5.1/Announcing Istio 1.4.7 <p>This release contains fixes for the security vulnerabilities described in <a href="/v1.9/news/security/istio-security-2020-004">our March 25th, 2020 news post</a>. This release note describes what’s different between Istio 1.4.6 and Istio 1.4.7.</p> <div class="relnote-actions call-to-action"> <a class="entry" href="/v1.9/news/releases/1.4.x/announcing-1.4/upgrade-notes/"> <h5>BEFORE YOU UPGRADE</h5> <p>Things to know and prepare before upgrading.</p> </a> <a class="update-notice entry" data-title='Update Notice' data-downloadhref="https://github.com/istio/istio/releases/tag/1.4.7" data-downloadbuttontext="DOWNLOAD 1.4.7" data-updateadvice='Before you download 1.4.7, you should know that there&#39;s a newer patch release with the latest bug fixes and perf improvements.' data-updatebutton='LEARN ABOUT ISTIO 1.4.10' data-updatehref="/v1.9/news/releases/1.4.x/announcing-1.4.10/"> <h5>DOWNLOAD</h5> <p>Download and install this release.</p> </a> <a class="entry" href="https://archive.istio.io/v1.4/docs"> <h5>DOCS</h5> <p>Visit the documentation for this release.</p> </a> <a class="entry" href="https://github.com/istio/istio/compare/1.4.6...1.4.7"> <h5>SOURCE CHANGES</h5> <p>Inspect the full set of source code changes.</p> </a> </div> <h2 id="security-update">Security Update</h2> <ul> <li><strong>ISTIO-SECURITY-2020-004</strong> Istio uses a hard coded <code>signing_key</code> for Kiali.</li> </ul> <p><strong><a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-1764">CVE-2020-1764</a></strong>: Istio uses a default <code>signing key</code> to install Kiali. This can allow an attacker with access to Kiali to bypass authentication and gain administrative privileges over Istio. 
In addition, another CVE is fixed in this release, described in the Kiali 1.15.1 <a href="https://kiali.io/news/security-bulletins/kiali-security-001/">release</a>.</p> <h2 id="changes">Changes</h2> <ul> <li><strong>Fixed</strong> an issue causing protocol detection to break HTTP2 traffic to gateways (<a href="https://github.com/istio/istio/issues/21230">Issue 21230</a>).</li> </ul>Wed, 25 Mar 2020 00:00:00 +0000/v1.9/news/releases/1.4.x/announcing-1.4.7//v1.9/news/releases/1.4.x/announcing-1.4.7/ISTIO-SECURITY-2020-003 <table> <thead> <tr> <th colspan="2">Disclosure Details</th> </tr> </thead> <tbody> <tr> <td>CVE(s)</td> <td> <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-8659">CVE-2020-8659</a><br> <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-8660">CVE-2020-8660</a><br> <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-8661">CVE-2020-8661</a><br> <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-8664">CVE-2020-8664</a><br> </td> </tr> <tr> <td>CVSS Impact Score</td> <td>7.5 <a href="https://nvd.nist.gov/vuln-metrics/cvss/v3-calculator?vector="></a></td> </tr> <tr> <td>Affected Releases</td> <td> 1.4 to 1.4.5<br> </td> </tr> </tbody> </table> <p>Envoy, and subsequently Istio, are vulnerable to four newly discovered vulnerabilities:</p> <ul> <li><p><strong><a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-8659">CVE-2020-8659</a></strong>: The Envoy proxy may consume excessive memory when proxying HTTP/1.1 requests or responses with many small (i.e. 1 byte) chunks. Envoy allocates a separate buffer fragment for each incoming or outgoing chunk with the size rounded to the nearest 4Kb and does not release empty chunks after committing data. Processing requests or responses with a lot of small chunks may result in extremely high memory overhead if the peer is slow or unable to read proxied data. 
The memory overhead could be two to three orders of magnitude more than configured buffer limits.</p> <ul> <li>CVSS Score: 7.5 <a href="https://nvd.nist.gov/vuln-metrics/cvss/v3-calculator?vector=AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H/E:F/RL:X/RC:X">AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H/E:F</a></li> </ul></li> <li><p><strong><a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-8660">CVE-2020-8660</a></strong>: The Envoy proxy contains a TLS inspector that can be bypassed (not recognized as a TLS client) by a client using only TLS 1.3. Because TLS extensions (SNI, ALPN) are not inspected, those connections may be matched to a wrong filter chain, possibly bypassing some security restrictions.</p> <ul> <li>CVSS Score: 5.3 <a href="https://nvd.nist.gov/vuln-metrics/cvss/v3-calculator?vector=AV:N/AC:L/PR:N/UI:N/S:U/C:L/I:N/A:N">AV:N/AC:L/PR:N/UI:N/S:U/C:L/I:N/A:N</a></li> </ul></li> <li><p><strong><a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-8661">CVE-2020-8661</a></strong>: The Envoy proxy may consume excessive amounts of memory when responding to pipelined HTTP/1.1 requests. In the case of illegally formed requests, Envoy sends an internally generated 400 error, which is sent to the <code>Network::Connection</code> buffer. If the client reads these responses slowly, it is possible to build up a large number of responses, and consume functionally unlimited memory. 
This bypasses Envoy’s overload manager, which will itself send an internally generated response when Envoy approaches configured memory thresholds, exacerbating the problem.</p> <ul> <li>CVSS Score: 7.5 <a href="https://nvd.nist.gov/vuln-metrics/cvss/v3-calculator?vector=AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H/E:F/RL:X/RC:X">AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H/E:F</a></li> </ul></li> <li><p><strong><a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-8664">CVE-2020-8664</a></strong>: For the SDS TLS validation context in the Envoy proxy, the update callback is called only when the secret is received for the first time or when its value changes. This leads to a race condition where other resources referencing the same secret (e.g., trusted CA) remain unconfigured until the secret&rsquo;s value changes, creating a potentially sizable window where a complete bypass of security checks from the static (&ldquo;default&rdquo;) section can occur.</p> <ul> <li>CVSS Score: 5.3 <a href="https://nvd.nist.gov/vuln-metrics/cvss/v3-calculator?vector=AV:N/AC:L/PR:N/UI:N/S:U/C:L/I:N/A:N">AV:N/AC:L/PR:N/UI:N/S:U/C:L/I:N/A:N</a></li> </ul> <p>This vulnerability only affects the SDS implementation of Istio&rsquo;s certificate rotation mechanism in Istio 1.4.5 and earlier, and only when SDS and mutual TLS are enabled. SDS is off by default and must be explicitly enabled by the operator in all versions of Istio prior to Istio 1.5.
Istio&rsquo;s default secret distribution implementation based on Kubernetes secret mounts is not affected by this vulnerability.</p> <p><strong>Detection</strong></p> <p>To determine if SDS is enabled in your system, run:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl get pod -l app=pilot -o yaml | grep SDS_ENABLED -A 1 </code></pre> <p>If the output contains:</p> <pre><code class='language-plain' data-expandlinks='true' data-repo='istio' >- name: SDS_ENABLED
  value: &#34;true&#34;
</code></pre> <p>your system has SDS enabled.</p> <p>To determine if mutual TLS is enabled in your system, run:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl get destinationrule --all-namespaces -o yaml | grep trafficPolicy -A 2 </code></pre> <p>If the output contains:</p> <pre><code class='language-plain' data-expandlinks='true' data-repo='istio' >--
trafficPolicy:
  tls:
    mode: ISTIO_MUTUAL
</code></pre> <p>your system has mutual TLS enabled.</p></li> </ul> <h2 id="mitigation">Mitigation</h2> <ul> <li>For Istio 1.4.x deployments: update to <a href="/v1.9/news/releases/1.4.x/announcing-1.4.6">Istio 1.4.6</a> or later.</li> <li>For Istio 1.5.x deployments: Istio 1.5.0 will contain the equivalent security fixes.</li> </ul> <h2 id="reporting-vulnerabilities">Reporting vulnerabilities</h2> <p>We’d like to remind our community to follow the <a href="/v1.9/about/security-vulnerabilities/">vulnerability reporting process</a> to report any bug that can result in a security vulnerability.Tue, 03 Mar 2020 00:00:00 +0000/v1.9/news/security/istio-security-2020-003//v1.9/news/security/istio-security-2020-003/CVEAnnouncing Istio 1.4.6 <p>This release contains fixes for the security vulnerabilities described in <a href="/v1.9/news/security/istio-security-2020-003">our March 3rd, 2020 news post</a>.
This release note describes what’s different between Istio 1.4.5 and Istio 1.4.6.</p> <div class="relnote-actions call-to-action"> <a class="entry" href="/v1.9/news/releases/1.4.x/announcing-1.4/upgrade-notes/"> <h5>BEFORE YOU UPGRADE</h5> <p>Things to know and prepare before upgrading.</p> </a> <a class="update-notice entry" data-title='Update Notice' data-downloadhref="https://github.com/istio/istio/releases/tag/1.4.6" data-downloadbuttontext="DOWNLOAD 1.4.6" data-updateadvice='Before you download 1.4.6, you should know that there&#39;s a newer patch release with the latest bug fixes and perf improvements.' data-updatebutton='LEARN ABOUT ISTIO 1.4.10' data-updatehref="/v1.9/news/releases/1.4.x/announcing-1.4.10/"> <h5>DOWNLOAD</h5> <p>Download and install this release.</p> </a> <a class="entry" href="https://archive.istio.io/v1.4/docs"> <h5>DOCS</h5> <p>Visit the documentation for this release.</p> </a> <a class="entry" href="https://github.com/istio/istio/compare/1.4.5...1.4.6"> <h5>SOURCE CHANGES</h5> <p>Inspect the full set of source code changes.</p> </a> </div> <h2 id="security-update">Security update</h2> <ul> <li><strong>ISTIO-SECURITY-2020-003</strong> Two Uncontrolled Resource Consumption and Two Incorrect Access Control Vulnerabilities in Envoy.</li> </ul> <p><strong><a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-8659">CVE-2020-8659</a></strong>: The Envoy proxy may consume excessive memory when proxying HTTP/1.1 requests or responses with many small (i.e. 1 byte) chunks. Envoy allocates a separate buffer fragment for each incoming or outgoing chunk with the size rounded to the nearest 4Kb and does not release empty chunks after committing data. Processing requests or responses with a lot of small chunks may result in extremely high memory overhead if the peer is slow or unable to read proxied data. 
The memory overhead could be two to three orders of magnitude more than configured buffer limits.</p> <p><strong><a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-8660">CVE-2020-8660</a></strong>: The Envoy proxy contains a TLS inspector that can be bypassed (not recognized as a TLS client) by a client using only TLS 1.3. Because TLS extensions (SNI, ALPN) are not inspected, those connections may be matched to a wrong filter chain, possibly bypassing some security restrictions.</p> <p><strong><a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-8661">CVE-2020-8661</a></strong>: The Envoy proxy may consume excessive amounts of memory when responding to pipelined HTTP/1.1 requests. In the case of illegally formed requests, Envoy sends an internally generated 400 error, which is sent to the <code>Network::Connection</code> buffer. If the client reads these responses slowly, it is possible to build up a large number of responses, and consume functionally unlimited memory. This bypasses Envoy’s overload manager, which will itself send an internally generated response when Envoy approaches configured memory thresholds, exacerbating the problem.</p> <p><strong><a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-8664">CVE-2020-8664</a></strong>: For the SDS TLS validation context in the Envoy proxy, the update callback is called only when the secret is received for the first time or when its value changes. This leads to a race condition where other resources referencing the same secret (e.g., trusted CA) remain unconfigured until the secret&rsquo;s value changes, creating a potentially sizable window where a complete bypass of security checks from the static (&ldquo;default&rdquo;) section can occur.</p> <ul> <li>This vulnerability only affects the SDS implementation of Istio&rsquo;s certificate rotation mechanism in Istio 1.4.5 and earlier, and only when SDS and mutual TLS are enabled.
SDS is off by default and must be explicitly enabled by the operator in all versions of Istio prior to Istio 1.5. Istio&rsquo;s default secret distribution implementation based on Kubernetes secret mounts is not affected by this vulnerability.</li> </ul>Tue, 03 Mar 2020 00:00:00 +0000/v1.9/news/releases/1.4.x/announcing-1.4.6//v1.9/news/releases/1.4.x/announcing-1.4.6/Announcing Istio 1.4.5 <p>This release includes bug fixes to improve robustness. This release note describes what’s different between Istio 1.4.4 and Istio 1.4.5.</p> <p>The fixes below focus on various bugs occurring during node restarts. If you use Istio CNI, or have nodes that restart, you are highly encouraged to upgrade.</p> <div class="relnote-actions call-to-action"> <a class="entry" href="/v1.9/news/releases/1.4.x/announcing-1.4/upgrade-notes/"> <h5>BEFORE YOU UPGRADE</h5> <p>Things to know and prepare before upgrading.</p> </a> <a class="update-notice entry" data-title='Update Notice' data-downloadhref="https://github.com/istio/istio/releases/tag/1.4.5" data-downloadbuttontext="DOWNLOAD 1.4.5" data-updateadvice='Before you download 1.4.5, you should know that there&#39;s a newer patch release with the latest bug fixes and perf improvements.' 
data-updatebutton='LEARN ABOUT ISTIO 1.4.10' data-updatehref="/v1.9/news/releases/1.4.x/announcing-1.4.10/"> <h5>DOWNLOAD</h5> <p>Download and install this release.</p> </a> <a class="entry" href="https://archive.istio.io/v1.4/docs"> <h5>DOCS</h5> <p>Visit the documentation for this release.</p> </a> <a class="entry" href="https://github.com/istio/istio/compare/1.4.4...1.4.5"> <h5>SOURCE CHANGES</h5> <p>Inspect the full set of source code changes.</p> </a> </div> <h2 id="improvements">Improvements</h2> <ul> <li><strong>Fixed</strong> a bug triggered by node restart causing Pods to receive incorrect configuration (<a href="https://github.com/istio/istio/issues/20676">Issue 20676</a>).</li> <li><strong>Improved</strong> <a href="/v1.9/docs/setup/additional-setup/cni/">Istio CNI</a> robustness. Previously, when a node restarted, new pods may be created before the CNI was setup, causing pods to be created without <code>iptables</code> rules configured (<a href="https://github.com/istio/istio/issues/14327">Issue 14327</a>).</li> <li><strong>Fixed</strong> MCP metrics to include the size of the MCP responses, rather than just requests (<a href="https://github.com/istio/istio/issues/21049">Issue 21049</a>).</li> </ul>Tue, 18 Feb 2020 00:00:00 +0000/v1.9/news/releases/1.4.x/announcing-1.4.5//v1.9/news/releases/1.4.x/announcing-1.4.5/Support for Istio 1.3 has ended<p>As <a href="/v1.9/news/support/announcing-1.3-eol/">previously announced</a>, support for Istio 1.3 has now officially ended.</p> <p>At this point we will no longer back-port fixes for security issues and critical bugs to 1.3, so we heartily encourage you to upgrade to the latest version of Istio (1.9.5) if you haven&rsquo;t already.</p>Fri, 14 Feb 2020 00:00:00 +0000/v1.9/news/support/announcing-1.3-eol-final//v1.9/news/support/announcing-1.3-eol-final/ISTIO-SECURITY-2020-002 <table> <thead> <tr> <th colspan="2">Disclosure Details</th> </tr> </thead> <tbody> <tr> <td>CVE(s)</td> <td> <a 
href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-8843">CVE-2020-8843</a><br> </td> </tr> <tr> <td>CVSS Impact Score</td> <td>7.4 <a href="https://nvd.nist.gov/vuln-metrics/cvss/v3-calculator?vector=AV%3aN%2fAC%3aH%2fPR%3aN%2fUI%3aN%2fS%3aU%2fC%3aH%2fI%3aH%2fA%3aN">AV:N/AC:H/PR:N/UI:N/S:U/C:H/I:H/A:N</a></td> </tr> <tr> <td>Affected Releases</td> <td> 1.3 to 1.3.6<br> </td> </tr> </tbody> </table> <p>Istio 1.3 to 1.3.6 contain a vulnerability affecting Mixer policy checks.</p> <p>Note: We regret that the vulnerability was silently fixed in Istio 1.4.0 and Istio 1.3.7. An <a href="https://github.com/istio/istio/issues/12063">issue was raised</a> and <a href="https://github.com/istio/istio/pull/17692">fixed</a> in Istio 1.4.0 as a non-security issue. We reclassified the issue as a vulnerability in Dec 2019.</p> <ul> <li><strong><a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-8843">CVE-2020-8843</a></strong>: Under certain circumstances it is possible to bypass a specifically configured Mixer policy. Istio-proxy accepts <code>x-istio-attributes</code> header at ingress that can be used to affect policy decisions when Mixer policy selectively applies to source equal to ingress. To be vulnerable, Istio must have Mixer Policy enabled and used in the specified way. 
This feature is disabled by default in Istio 1.3 and 1.4.</li> </ul> <h2 id="mitigation">Mitigation</h2> <ul> <li>For Istio 1.3.x deployments: update to <a href="/v1.9/news/releases/1.3.x/announcing-1.3.7">Istio 1.3.7</a> or later.</li> </ul> <h2 id="credit">Credit</h2> <p>The Istio team would like to thank Krishnan Anantheswaran and Eric Zhang of <a href="https://www.splunk.com/">Splunk</a> for the private bug report.</p> <h2 id="reporting-vulnerabilities">Reporting vulnerabilities</h2> <p>We’d like to remind our community to follow the <a href="/v1.9/about/security-vulnerabilities/">vulnerability reporting process</a> to report any bug that can result in a security vulnerability.Tue, 11 Feb 2020 00:00:00 +0000/v1.9/news/security/istio-security-2020-002//v1.9/news/security/istio-security-2020-002/CVEISTIO-SECURITY-2020-001 <table> <thead> <tr> <th colspan="2">Disclosure Details</th> </tr> </thead> <tbody> <tr> <td>CVE(s)</td> <td> <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-8595">CVE-2020-8595</a><br> </td> </tr> <tr> <td>CVSS Impact Score</td> <td>9.0 <a href="https://nvd.nist.gov/vuln-metrics/cvss/v3-calculator?vector=AV%3aN%2fAC%3aH%2fPR%3aN%2fUI%3aN%2fS%3aC%2fC%3aH%2fI%3aH%2fA%3aH">AV:N/AC:H/PR:N/UI:N/S:C/C:H/I:H/A:H</a></td> </tr> <tr> <td>Affected Releases</td> <td> 1.3 to 1.3.7<br> 1.4 to 1.4.3<br> </td> </tr> </tbody> </table> <p>Istio 1.3 to 1.3.7 and 1.4 to 1.4.3 are vulnerable to a newly discovered vulnerability affecting <a href="https://archive.istio.io/1.4/docs/reference/config/security/istio.authentication.v1alpha1/#Policy">Authentication Policy</a>:</p> <ul> <li><strong><a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-8595">CVE-2020-8595</a></strong>: A bug in Istio&rsquo;s Authentication Policy exact path matching logic allows unauthorized access to resources without a valid JWT token. This bug affects all versions of Istio that support JWT Authentication Policy with path based trigger rules. 
The logic for the exact path match in the Istio JWT filter includes query strings or fragments instead of stripping them off before matching. This means attackers can bypass the JWT validation by appending <code>?</code> or <code>#</code> characters after the protected paths.</li> </ul> <h2 id="mitigation">Mitigation</h2> <ul> <li>For Istio 1.3.x deployments: update to <a href="/v1.9/news/releases/1.3.x/announcing-1.3.8">Istio 1.3.8</a> or later.</li> <li>For Istio 1.4.x deployments: update to <a href="/v1.9/news/releases/1.4.x/announcing-1.4.4">Istio 1.4.4</a> or later.</li> </ul> <h2 id="credit">Credit</h2> <p>The Istio team would like to thank <a href="https://aspenmesh.com/2H8qf3r">Aspen Mesh</a> for the original bug report and code fix of <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-8595">CVE-2020-8595</a>.</p> <h2 id="reporting-vulnerabilities">Reporting vulnerabilities</h2> <p>We’d like to remind our community to follow the <a href="/v1.9/about/security-vulnerabilities/">vulnerability reporting process</a> to report any bug that can result in a security vulnerability.Tue, 11 Feb 2020 00:00:00 +0000/v1.9/news/security/istio-security-2020-001//v1.9/news/security/istio-security-2020-001/CVEAnnouncing Istio 1.4.4 <p>This release includes bug fixes to improve robustness and user experience as well as a fix for the security vulnerability described in <a href="/v1.9/news/security/istio-security-2020-001">our February 11th, 2020 news post</a>. 
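</p> <p>The exact-path-matching pitfall behind that vulnerability can be sketched with a minimal, illustrative matcher (not Istio&rsquo;s actual implementation): comparing the raw request path lets a trailing <code>?</code> or <code>#</code> evade an exact rule for a protected path, while stripping the query string and fragment before matching closes the gap.</p>

```python
# Illustrative sketch (NOT Istio's JWT filter) of the CVE-2020-8595 flaw:
# exact matching on the raw path can be bypassed by appending "?" or "#".

def exact_match_vulnerable(request_path: str, protected: str) -> bool:
    # Flawed: compares the raw path, including query string and fragment,
    # so "/admin?" does not equal the protected path "/admin".
    return request_path == protected


def exact_match_fixed(request_path: str, protected: str) -> bool:
    # Strip the query string and fragment before the exact comparison.
    path = request_path.split("?", 1)[0].split("#", 1)[0]
    return path == protected


# "/admin?" should still be treated as the protected path "/admin":
assert not exact_match_vulnerable("/admin?", "/admin")   # rule fails to trigger
assert exact_match_fixed("/admin?", "/admin")            # rule triggers
assert exact_match_fixed("/admin#frag", "/admin")        # rule triggers
```

<p>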
This release note describes what’s different between Istio 1.4.3 and Istio 1.4.4.</p> <div class="relnote-actions call-to-action"> <a class="entry" href="/v1.9/news/releases/1.4.x/announcing-1.4/upgrade-notes/"> <h5>BEFORE YOU UPGRADE</h5> <p>Things to know and prepare before upgrading.</p> </a> <a class="update-notice entry" data-title='Update Notice' data-downloadhref="https://github.com/istio/istio/releases/tag/1.4.4" data-downloadbuttontext="DOWNLOAD 1.4.4" data-updateadvice='Before you download 1.4.4, you should know that there&#39;s a newer patch release with the latest bug fixes and perf improvements.' data-updatebutton='LEARN ABOUT ISTIO 1.4.10' data-updatehref="/v1.9/news/releases/1.4.x/announcing-1.4.10/"> <h5>DOWNLOAD</h5> <p>Download and install this release.</p> </a> <a class="entry" href="https://archive.istio.io/v1.4/docs"> <h5>DOCS</h5> <p>Visit the documentation for this release.</p> </a> <a class="entry" href="https://github.com/istio/istio/compare/1.4.3...1.4.4"> <h5>SOURCE CHANGES</h5> <p>Inspect the full set of source code changes.</p> </a> </div> <h2 id="security-update">Security update</h2> <ul> <li><strong>ISTIO-SECURITY-2020-001</strong> An improper input validation has been discovered in <code>AuthenticationPolicy</code>.</li> </ul> <p><strong><a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-8595">CVE-2020-8595</a></strong>: A bug in Istio&rsquo;s <a href="https://archive.istio.io/1.4/docs/reference/config/security/istio.authentication.v1alpha1/#Policy">Authentication Policy</a> exact path matching logic allows unauthorized access to resources without a valid JWT token.</p> <h2 id="bug-fixes">Bug fixes</h2> <ul> <li><strong>Fixed</strong> Debian packaging of <code>iptables</code> scripts (<a href="https://github.com/istio/istio/issues/19615">Issue 19615</a>).</li> <li><strong>Fixed</strong> an issue where Pilot generated a wrong Envoy configuration when the same port was used more than once (<a 
href="https://github.com/istio/istio/issues/19935">Issue 19935</a>).</li> <li><strong>Fixed</strong> an issue where running multiple instances of Pilot could lead to a crash (<a href="https://github.com/istio/istio/issues/20047">Issue 20047</a>).</li> <li><strong>Fixed</strong> a potential flood of configuration pushes from Pilot to Envoy when scaling the deployment to zero (<a href="https://github.com/istio/istio/issues/17957">Issue 17957</a>).</li> <li><strong>Fixed</strong> an issue where Mixer could not fetch the correct information from the request/response when a pod contains a dot in its name (<a href="https://github.com/istio/istio/issues/20028">Issue 20028</a>).</li> <li><strong>Fixed</strong> an issue where Pilot sometimes would not send a correct pod configuration to Envoy (<a href="https://github.com/istio/istio/issues/19025">Issue 19025</a>).</li> <li><strong>Fixed</strong> an issue where the sidecar injector with SDS enabled was overwriting the pod <code>securityContext</code> section, instead of just patching it (<a href="https://github.com/istio/istio/issues/20409">Issue 20409</a>).</li> </ul> <h2 id="improvements">Improvements</h2> <ul> <li><strong>Improved</strong> compatibility with Google CA (Issues <a href="https://github.com/istio/istio/issues/20530">20530</a>, <a href="https://github.com/istio/istio/issues/20560">20560</a>).</li> <li><strong>Added</strong> an analyzer error message when policies using JWT are not configured properly (Issues <a href="https://github.com/istio/istio/issues/20884">20884</a>, <a href="https://github.com/istio/istio/issues/20767">20767</a>).</li> </ul>Tue, 11 Feb 2020 00:00:00 +0000/v1.9/news/releases/1.4.x/announcing-1.4.4//v1.9/news/releases/1.4.x/announcing-1.4.4/Announcing Istio 1.3.8 <p>This release contains a fix for the security vulnerability described in <a href="/v1.9/news/security/istio-security-2020-001">our February 11th, 2020 news post</a>.
This release note describes what&rsquo;s different between Istio 1.3.7 and Istio 1.3.8.</p> <div class="relnote-actions call-to-action"> <a class="entry" href="/v1.9/news/releases/1.3.x/announcing-1.3/upgrade-notes/"> <h5>BEFORE YOU UPGRADE</h5> <p>Things to know and prepare before upgrading.</p> </a> <a class="entry" href="https://github.com/istio/istio/releases/tag/1.3.8"> <h5>DOWNLOAD</h5> <p>Download and install this release.</p> </a> <a class="entry" href="https://archive.istio.io/v1.3/docs"> <h5>DOCS</h5> <p>Visit the documentation for this release.</p> </a> <a class="entry" href="https://github.com/istio/istio/compare/1.3.7...1.3.8"> <h5>SOURCE CHANGES</h5> <p>Inspect the full set of source code changes.</p> </a> </div> <h2 id="security-update">Security update</h2> <ul> <li><strong>ISTIO-SECURITY-2020-001</strong> Improper input validation has been discovered in <code>AuthenticationPolicy</code>.</li> </ul> <p><strong><a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-8595">CVE-2020-8595</a></strong>: A bug in Istio&rsquo;s <a href="https://archive.istio.io/1.3/docs/reference/config/security/istio.authentication.v1alpha1/#Policy">Authentication Policy</a> exact path matching logic allows unauthorized access to resources without a valid JWT token.</p>Tue, 11 Feb 2020 00:00:00 +0000/v1.9/news/releases/1.3.x/announcing-1.3.8//v1.9/news/releases/1.3.x/announcing-1.3.8/Announcing Istio 1.3.7 <p>This release includes bug fixes to improve robustness.
This release note describes what&rsquo;s different between Istio 1.3.6 and Istio 1.3.7.</p> <div class="relnote-actions call-to-action"> <a class="entry" href="/v1.9/news/releases/1.3.x/announcing-1.3/upgrade-notes/"> <h5>BEFORE YOU UPGRADE</h5> <p>Things to know and prepare before upgrading.</p> </a> <a class="update-notice entry" data-title='Update Notice' data-downloadhref="https://github.com/istio/istio/releases/tag/1.3.7" data-downloadbuttontext="DOWNLOAD 1.3.7" data-updateadvice='Before you download 1.3.7, you should know that there&#39;s a newer patch release with the latest bug fixes and perf improvements.' data-updatebutton='LEARN ABOUT ISTIO 1.3.8' data-updatehref="/v1.9/news/releases/1.3.x/announcing-1.3.8/"> <h5>DOWNLOAD</h5> <p>Download and install this release.</p> </a> <a class="entry" href="https://archive.istio.io/v1.3/docs"> <h5>DOCS</h5> <p>Visit the documentation for this release.</p> </a> <a class="entry" href="https://github.com/istio/istio/compare/1.3.6...1.3.7"> <h5>SOURCE CHANGES</h5> <p>Inspect the full set of source code changes.</p> </a> </div> <h2 id="bug-fixes">Bug fixes</h2> <ul> <li><strong>Fixed</strong> root certificate rotation in Citadel to reuse values from the expiring root certificate into the new root certificate (<a href="https://github.com/istio/istio/issues/19644">Issue 19644</a>).</li> <li><strong>Fixed</strong> telemetry to ignore forwarded attributes at the gateway.</li> <li><strong>Fixed</strong> sidecar injection into pods with containers that export no port (<a href="https://github.com/istio/istio/issues/18594">Issue 18594</a>).</li> <li><strong>Added</strong> telemetry support for pod names containing periods (<a href="https://github.com/istio/istio/issues/19015">Issue 19015</a>).</li> <li><strong>Added</strong> support for generating <code>PKCS#8</code> private keys in Citadel agent (<a href="https://github.com/istio/istio/issues/19948">Issue 19948</a>).</li> </ul> <h2 id="minor-enhancements">Minor 
enhancements</h2> <ul> <li><strong>Improved</strong> injection template to fully specify <code>securityContext</code>, allowing <code>PodSecurityPolicies</code> to properly validate injected deployments (<a href="https://github.com/istio/istio/issues/17318">Issue 17318</a>).</li> <li><strong>Added</strong> support for setting the <code>lifecycle</code> for proxy containers.</li> <li><strong>Added</strong> support for setting the Mesh UID in the Stackdriver Mixer adapter (<a href="https://github.com/istio/istio/issues/17952">Issue 17952</a>).</li> </ul> <h2 id="security-update">Security update</h2> <ul> <li><a href="/v1.9/news/security/istio-security-2020-002"><strong>ISTIO-SECURITY-2020-002</strong></a> Mixer policy check bypass caused by improperly accepting certain request headers.</li> </ul> <p><strong><a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-8843">CVE-2020-8843</a></strong>: Under certain circumstances it is possible to bypass a specifically configured Mixer policy. Istio-proxy accepts <code>x-istio-attributes</code> header at ingress that can be used to affect policy decisions when Mixer policy selectively applies to source equal to ingress. Istio 1.3 to 1.3.6 is vulnerable.</p>Tue, 04 Feb 2020 00:00:00 +0000/v1.9/news/releases/1.3.x/announcing-1.3.7//v1.9/news/releases/1.3.x/announcing-1.3.7/Support for Istio 1.3 ends on February 14th, 2020<p>According to Istio&rsquo;s <a href="/v1.9/about/supported-releases#supported-releases">support policy</a>, LTS releases like 1.3 are supported for three months after the next LTS release. Since <a href="/v1.9/news/releases/1.4.x/announcing-1.4/">1.4 was released on November 14th</a>, support for 1.3 will end on February 14th, 2020.</p> <p>At that point we will stop back-porting fixes for security issues and critical bugs to 1.3, so we encourage you to upgrade to the latest version of Istio (1.9.5). 
If you don&rsquo;t do this you may put yourself in the position of having to do a major upgrade on a short timeframe to pick up a critical fix.</p> <p>We care about you and your clusters, so please be kind to yourself and upgrade.</p>Wed, 15 Jan 2020 00:00:00 +0000/v1.9/news/support/announcing-1.3-eol//v1.9/news/support/announcing-1.3-eol/Announcing Istio 1.4.3 <p>This release includes bug fixes to improve robustness and user experience. This release note describes what’s different between Istio 1.4.2 and Istio 1.4.3.</p> <div class="relnote-actions call-to-action"> <a class="entry" href="/v1.9/news/releases/1.4.x/announcing-1.4/upgrade-notes/"> <h5>BEFORE YOU UPGRADE</h5> <p>Things to know and prepare before upgrading.</p> </a> <a class="update-notice entry" data-title='Update Notice' data-downloadhref="https://github.com/istio/istio/releases/tag/1.4.3" data-downloadbuttontext="DOWNLOAD 1.4.3" data-updateadvice='Before you download 1.4.3, you should know that there&#39;s a newer patch release with the latest bug fixes and perf improvements.' 
data-updatebutton='LEARN ABOUT ISTIO 1.4.10' data-updatehref="/v1.9/news/releases/1.4.x/announcing-1.4.10/"> <h5>DOWNLOAD</h5> <p>Download and install this release.</p> </a> <a class="entry" href="https://archive.istio.io/v1.4/docs"> <h5>DOCS</h5> <p>Visit the documentation for this release.</p> </a> <a class="entry" href="https://github.com/istio/istio/compare/1.4.2...1.4.3"> <h5>SOURCE CHANGES</h5> <p>Inspect the full set of source code changes.</p> </a> </div> <h2 id="bug-fixes">Bug fixes</h2> <ul> <li><strong>Fixed</strong> an issue where Mixer creates too many watches, overloading <code>kube-apiserver</code> (<a href="https://github.com/istio/istio/issues/19481">Issue 19481</a>).</li> <li><strong>Fixed</strong> an issue with injection when a pod has multiple containers without exposed ports (<a href="https://github.com/istio/istio/issues/18594">Issue 18594</a>).</li> <li><strong>Fixed</strong> overly restrictive validation of the <code>regex</code> field (<a href="https://github.com/istio/istio/pull/19212">Issue 19212</a>).</li> <li><strong>Fixed</strong> an upgrade issue with the <code>regex</code> field (<a href="https://github.com/istio/istio/pull/19665">Issue 19665</a>).</li> <li><strong>Fixed</strong> <code>istioctl</code> install to properly send logs to <code>stderr</code> (<a href="https://github.com/istio/istio/issues/17743">Issue 17743</a>).</li> <li><strong>Fixed</strong> an issue where a file and profile could not be specified for <code>istioctl</code> installs (<a href="https://github.com/istio/istio/issues/19503">Issue 19503</a>).</li> <li><strong>Fixed</strong> an issue preventing certain objects from being installed for <code>istioctl</code> installs (<a href="https://github.com/istio/istio/issues/19371">Issue 19371</a>).</li> <li><strong>Fixed</strong> an issue preventing using certain JWKS with EC keys in JWT policy (<a href="https://github.com/istio/istio/issues/19424">Issue 19424</a>).</li> </ul> <h2 id="improvements">Improvements</h2> <ul> 
<li><strong>Improved</strong> injection template to fully specify <code>securityContext</code>, allowing <code>PodSecurityPolicies</code> to properly validate injected deployments (<a href="https://github.com/istio/istio/issues/17318">Issue 17318</a>).</li> <li><strong>Improved</strong> telemetry v2 configuration to support Stackdriver and forward compatibility (<a href="https://github.com/istio/installer/pull/591">Issue 591</a>).</li> <li><strong>Improved</strong> output of <code>istioctl</code> installation (<a href="https://github.com/istio/istio/issues/19451">Issue 19451</a>).</li> <li><strong>Improved</strong> <code>istioctl</code> installation to set exit code upon failure (<a href="https://github.com/istio/istio/issues/19747">Issue 19747</a>).</li> </ul>Wed, 08 Jan 2020 00:00:00 +0000/v1.9/news/releases/1.4.x/announcing-1.4.3//v1.9/news/releases/1.4.x/announcing-1.4.3/Support for Istio 1.2 has ended<p>As <a href="/v1.9/news/support/announcing-1.2-eol/">previously announced</a>, support for Istio 1.2 has now officially ended.</p> <p>At this point we will no longer back-port fixes for security issues and critical bugs to 1.2, so we heartily encourage you to upgrade to the latest version of Istio (1.9.5) if you haven&rsquo;t already.</p>Fri, 13 Dec 2019 00:00:00 +0000/v1.9/news/support/announcing-1.2-eol-final//v1.9/news/support/announcing-1.2-eol-final/ISTIO-SECURITY-2019-007 <table> <thead> <tr> <th colspan="2">Disclosure Details</th> </tr> </thead> <tbody> <tr> <td>CVE(s)</td> <td> <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-18801">CVE-2019-18801</a><br> <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-18802">CVE-2019-18802</a><br> <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-18838">CVE-2019-18838</a><br> </td> </tr> <tr> <td>CVSS Impact Score</td> <td>9.0 <a href="https://nvd.nist.gov/vuln-metrics/cvss/v3-calculator?vector=CVSS%3a3.0%2fAV%3aN%2fAC%3aH%2fPR%3aN%2fUI%3aN%2fS%3aC%2fC%3aH%2fI%3aH%2fA%3aH">CVSS:3.0/AV:N/AC:H/PR:N/UI:N/S:C/C:H/I:H/A:H</a></td> </tr> <tr> <td>Affected Releases</td> 
<td> 1.2 to 1.2.9<br> 1.3 to 1.3.5<br> 1.4 to 1.4.1<br> </td> </tr> </tbody> </table> <p>Envoy, and subsequently Istio, are vulnerable to three newly discovered vulnerabilities:</p> <ul> <li><p><strong><a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-18801">CVE-2019-18801</a></strong>: This vulnerability affects Envoy&rsquo;s HTTP/1 codec in the way it processes downstream requests with large HTTP/2 headers. A successful exploitation of this vulnerability could lead to a denial of service, escalation of privileges, or information disclosure.</p></li> <li><p><strong><a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-18802">CVE-2019-18802</a></strong>: Envoy&rsquo;s HTTP/1 codec fails to trim whitespace after header values. This could allow an attacker to bypass Istio&rsquo;s policy either for information disclosure or escalation of privileges.</p></li> <li><p><strong><a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-18838">CVE-2019-18838</a></strong>: Upon receipt of a malformed HTTP request without the &ldquo;Host&rdquo; header, an encoder filter invoking Envoy&rsquo;s route manager APIs that access the request&rsquo;s &ldquo;Host&rdquo; header will cause a NULL pointer to be dereferenced and result in abnormal termination of the Envoy process.</p></li> </ul> <h2 id="impact-and-detection">Impact and detection</h2> <p>Both Istio gateways and sidecars are vulnerable to this issue. If you are running one of the affected releases where downstream requests are HTTP/2 while upstream requests are HTTP/1, then your cluster is vulnerable. 
We expect this to be true of most clusters.</p> <h2 id="mitigation">Mitigation</h2> <ul> <li>For Istio 1.2.x deployments: update to <a href="/v1.9/news/releases/1.2.x/announcing-1.2.10">Istio 1.2.10</a> or later.</li> <li>For Istio 1.3.x deployments: update to <a href="/v1.9/news/releases/1.3.x/announcing-1.3.6">Istio 1.3.6</a> or later.</li> <li>For Istio 1.4.x deployments: update to <a href="/v1.9/news/releases/1.4.x/announcing-1.4.2">Istio 1.4.2</a> or later.</li> </ul> <h2 id="reporting-vulnerabilities">Reporting vulnerabilities</h2> <p>We’d like to remind our community to follow the <a href="/v1.9/about/security-vulnerabilities/">vulnerability reporting process</a> to report any bug that can result in a security vulnerability.Tue, 10 Dec 2019 00:00:00 +0000/v1.9/news/security/istio-security-2019-007//v1.9/news/security/istio-security-2019-007/CVEAnnouncing Istio 1.4.2 <p>This release contains fixes for the security vulnerability described in <a href="/v1.9/news/security/istio-security-2019-007">our December 10th, 2019 news post</a>. This release note describes what’s different between Istio 1.4.1 and Istio 1.4.2.</p> <div class="relnote-actions call-to-action"> <a class="entry" href="/v1.9/news/releases/1.4.x/announcing-1.4/upgrade-notes/"> <h5>BEFORE YOU UPGRADE</h5> <p>Things to know and prepare before upgrading.</p> </a> <a class="update-notice entry" data-title='Update Notice' data-downloadhref="https://github.com/istio/istio/releases/tag/1.4.2" data-downloadbuttontext="DOWNLOAD 1.4.2" data-updateadvice='Before you download 1.4.2, you should know that there&#39;s a newer patch release with the latest bug fixes and perf improvements.' 
data-updatebutton='LEARN ABOUT ISTIO 1.4.10' data-updatehref="/v1.9/news/releases/1.4.x/announcing-1.4.10/"> <h5>DOWNLOAD</h5> <p>Download and install this release.</p> </a> <a class="entry" href="https://archive.istio.io/v1.4/docs"> <h5>DOCS</h5> <p>Visit the documentation for this release.</p> </a> <a class="entry" href="https://github.com/istio/istio/compare/1.4.1...1.4.2"> <h5>SOURCE CHANGES</h5> <p>Inspect the full set of source code changes.</p> </a> </div> <h2 id="security-update">Security update</h2> <ul> <li><strong>ISTIO-SECURITY-2019-007</strong> A heap overflow and improper input validation have been discovered in Envoy.</li> </ul> <p><strong><a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-18801">CVE-2019-18801</a></strong>: Fix a vulnerability affecting Envoy&rsquo;s processing of large HTTP/2 request headers. Successful exploitation of this vulnerability could lead to denial of service, escalation of privileges, or information disclosure. <strong><a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-18802">CVE-2019-18802</a></strong>: Fix a vulnerability resulting from whitespace after HTTP/1 header values, which could allow an attacker to bypass Istio&rsquo;s policy checks, potentially resulting in information disclosure or escalation of privileges. <strong><a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-18838">CVE-2019-18838</a></strong>: Fix a vulnerability resulting from a malformed HTTP request missing the &ldquo;Host&rdquo; header.
An encoder filter that invokes Envoy&rsquo;s route manager APIs to access the request&rsquo;s &ldquo;Host&rdquo; header will dereference a NULL pointer, resulting in abnormal termination of the Envoy process.</p>Tue, 10 Dec 2019 00:00:00 +0000/v1.9/news/releases/1.4.x/announcing-1.4.2//v1.9/news/releases/1.4.x/announcing-1.4.2/Announcing Istio 1.3.6 <p>This release contains fixes for the security vulnerability described in <a href="/v1.9/news/security/istio-security-2019-007">our December 10th, 2019 news post</a> as well as bug fixes to improve robustness. This release note describes what&rsquo;s different between Istio 1.3.5 and Istio 1.3.6.</p> <div class="relnote-actions call-to-action"> <a class="entry" href="/v1.9/news/releases/1.3.x/announcing-1.3/upgrade-notes/"> <h5>BEFORE YOU UPGRADE</h5> <p>Things to know and prepare before upgrading.</p> </a> <a class="update-notice entry" data-title='Update Notice' data-downloadhref="https://github.com/istio/istio/releases/tag/1.3.6" data-downloadbuttontext="DOWNLOAD 1.3.6" data-updateadvice='Before you download 1.3.6, you should know that there&#39;s a newer patch release with the latest bug fixes and perf improvements.'
data-updatebutton='LEARN ABOUT ISTIO 1.3.8' data-updatehref="/v1.9/news/releases/1.3.x/announcing-1.3.8/"> <h5>DOWNLOAD</h5> <p>Download and install this release.</p> </a> <a class="entry" href="https://archive.istio.io/v1.3/docs"> <h5>DOCS</h5> <p>Visit the documentation for this release.</p> </a> <a class="entry" href="https://github.com/istio/istio/compare/1.3.5...1.3.6"> <h5>SOURCE CHANGES</h5> <p>Inspect the full set of source code changes.</p> </a> </div> <h2 id="security-update">Security update</h2> <ul> <li><strong>ISTIO-SECURITY-2019-007</strong> A heap overflow and improper input validation have been discovered in Envoy.</li> </ul> <p><strong><a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-18801">CVE-2019-18801</a></strong>: Fix a vulnerability affecting Envoy&rsquo;s processing of large HTTP/2 request headers. Successful exploitation of this vulnerability could lead to denial of service, escalation of privileges, or information disclosure. <strong><a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-18802">CVE-2019-18802</a></strong>: Fix a vulnerability resulting from whitespace after HTTP/1 header values, which could allow an attacker to bypass Istio&rsquo;s policy checks, potentially resulting in information disclosure or escalation of privileges. <strong><a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-18838">CVE-2019-18838</a></strong>: Fix a vulnerability resulting from a malformed HTTP request missing the &ldquo;Host&rdquo; header. An encoder filter that invokes Envoy&rsquo;s route manager APIs to access the request&rsquo;s &ldquo;Host&rdquo; header will dereference a NULL pointer, resulting in abnormal termination of the Envoy process.</p> <h2 id="bug-fixes">Bug fixes</h2> <ul> <li><strong>Fixed</strong> an issue where a duplicate listener was generated for a proxy&rsquo;s IP address when using a headless <code>TCP</code> service.
(<a href="https://github.com/istio/istio/issues/17748">Issue 17748</a>)</li> <li><strong>Fixed</strong> an issue with the <code>destination_service</code> label in HTTP related metrics incorrectly falling back to <code>request.host</code> which can cause a metric cardinality explosion for ingress traffic. (<a href="https://github.com/istio/istio/issues/18818">Issue 18818</a>)</li> </ul> <h2 id="minor-enhancements">Minor enhancements</h2> <ul> <li><strong>Improved</strong> load-shedding options for Mixer. Added support for a <code>requests-per-second</code> threshold for load-shedding enforcement. This allows operators to turn off load-shedding for Mixer in low traffic scenarios.</li> </ul>Tue, 10 Dec 2019 00:00:00 +0000/v1.9/news/releases/1.3.x/announcing-1.3.6//v1.9/news/releases/1.3.x/announcing-1.3.6/Announcing Istio 1.2.10 <p>This release contains fixes for the security vulnerability described in <a href="/v1.9/news/security/istio-security-2019-007">our December 10th, 2019 news post</a>. 
This release note describes what’s different between Istio 1.2.9 and Istio 1.2.10.</p> <div class="relnote-actions call-to-action"> <a class="entry" href="/v1.9/news/releases/1.2.x/announcing-1.2/upgrade-notes/"> <h5>BEFORE YOU UPGRADE</h5> <p>Things to know and prepare before upgrading.</p> </a> <a class="entry" href="https://github.com/istio/istio/releases/tag/1.2.10"> <h5>DOWNLOAD</h5> <p>Download and install this release.</p> </a> <a class="entry" href="https://archive.istio.io/v1.2/docs"> <h5>DOCS</h5> <p>Visit the documentation for this release.</p> </a> <a class="entry" href="https://github.com/istio/istio/compare/1.2.9...1.2.10"> <h5>SOURCE CHANGES</h5> <p>Inspect the full set of source code changes.</p> </a> </div> <h2 id="security-update">Security update</h2> <ul> <li><strong>ISTIO-SECURITY-2019-007</strong> A heap overflow and improper input validation have been discovered in Envoy.</li> </ul> <p><strong><a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-18801">CVE-2019-18801</a></strong>: Fix a vulnerability affecting Envoy&rsquo;s processing of large HTTP/2 request headers. Successful exploitation of this vulnerability could lead to denial of service, escalation of privileges, or information disclosure. <strong><a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-18802">CVE-2019-18802</a></strong>: Fix a vulnerability resulting from whitespace after HTTP/1 header values, which could allow an attacker to bypass Istio&rsquo;s policy checks, potentially resulting in information disclosure or escalation of privileges. <strong><a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-18838">CVE-2019-18838</a></strong>: Fix a vulnerability resulting from a malformed HTTP request missing the &ldquo;Host&rdquo; header.
An encoder filter that invokes Envoy&rsquo;s route manager APIs to access the request&rsquo;s &ldquo;Host&rdquo; header will dereference a NULL pointer, resulting in abnormal termination of the Envoy process.</p> <h2 id="bug-fix">Bug fix</h2> <ul> <li>Add support for Citadel to automatically rotate root cert. (<a href="https://github.com/istio/istio/issues/17059">Issue 17059</a>)</li> </ul>Tue, 10 Dec 2019 00:00:00 +0000/v1.9/news/releases/1.2.x/announcing-1.2.10//v1.9/news/releases/1.2.x/announcing-1.2.10/Announcing Istio 1.4.1 <p>This release includes bug fixes to improve robustness. This release note describes what’s different between Istio 1.4.0 and Istio 1.4.1.</p> <div class="relnote-actions call-to-action"> <a class="entry" href="/v1.9/news/releases/1.4.x/announcing-1.4/upgrade-notes/"> <h5>BEFORE YOU UPGRADE</h5> <p>Things to know and prepare before upgrading.</p> </a> <a class="update-notice entry" data-title='Update Notice' data-downloadhref="https://github.com/istio/istio/releases/tag/1.4.1" data-downloadbuttontext="DOWNLOAD 1.4.1" data-updateadvice='Before you download 1.4.1, you should know that there&#39;s a newer patch release with the latest bug fixes and perf improvements.'
data-updatebutton='LEARN ABOUT ISTIO 1.4.10' data-updatehref="/v1.9/news/releases/1.4.x/announcing-1.4.10/"> <h5>DOWNLOAD</h5> <p>Download and install this release.</p> </a> <a class="entry" href="https://archive.istio.io/v1.4/docs"> <h5>DOCS</h5> <p>Visit the documentation for this release.</p> </a> <a class="entry" href="https://github.com/istio/istio/compare/1.4.0...1.4.1"> <h5>SOURCE CHANGES</h5> <p>Inspect the full set of source code changes.</p> </a> </div> <h2 id="bug-fixes">Bug fixes</h2> <ul> <li><strong>Fixed</strong> <code>istioctl</code> installation on Windows (<a href="https://github.com/istio/istio/pull/19020">Issue 19020</a>).</li> <li><strong>Fixed</strong> an issue with route matching order when using cert-manager with Kubernetes Ingress (<a href="https://github.com/istio/istio/pull/19000">Issue 19000</a>).</li> <li><strong>Fixed</strong> Mixer source namespace attribute when the pod name contains a period (<a href="https://github.com/istio/istio/issues/19015">Issue 19015</a>).</li> <li><strong>Fixed</strong> excessive metrics generated by Galley (<a href="https://github.com/istio/istio/issues/19165">Issue 19165</a>).</li> <li><strong>Fixed</strong> tracing Service port to correctly listen on port 80 (<a href="https://github.com/istio/istio/issues/19227">Issue 19227</a>).</li> <li><strong>Fixed</strong> missing <code>istioctl</code> auto-completion files (<a href="https://github.com/istio/istio/issues/19297">Issue 19297</a>).</li> </ul>Thu, 05 Dec 2019 00:00:00 +0000/v1.9/news/releases/1.4.x/announcing-1.4.1//v1.9/news/releases/1.4.x/announcing-1.4.1/Support for Istio 1.2 ends on December 13th, 2019<p>According to Istio&rsquo;s <a href="/v1.9/about/supported-releases#supported-releases">support policy</a>, LTS releases like 1.2 are supported for three months after the next LTS release. 
Since <a href="/v1.9/news/releases/1.3.x/announcing-1.3/">1.3 was released on September 12th</a>, support for 1.2 will end on December 13th, 2019.</p> <p>At that point we will stop back-porting fixes for security issues and critical bugs to 1.2, so we encourage you to upgrade to the latest version of Istio (1.9.5). If you don&rsquo;t do this you may put yourself in the position of having to do a major upgrade on a short timeframe to pick up a critical fix.</p> <p>We care about you and your clusters, so please be kind to yourself and upgrade.</p>Mon, 11 Nov 2019 00:00:00 +0000/v1.9/news/support/announcing-1.2-eol//v1.9/news/support/announcing-1.2-eol/Announcing Istio 1.3.5 <p>This release contains fixes for the security vulnerability described in <a href="/v1.9/news/security/istio-security-2019-006">our November 11, 2019 news post</a> as well as bug fixes to improve robustness. This release note describes what&rsquo;s different between Istio 1.3.4 and Istio 1.3.5.</p> <div class="relnote-actions call-to-action"> <a class="entry" href="/v1.9/news/releases/1.3.x/announcing-1.3/upgrade-notes/"> <h5>BEFORE YOU UPGRADE</h5> <p>Things to know and prepare before upgrading.</p> </a> <a class="update-notice entry" data-title='Update Notice' data-downloadhref="https://github.com/istio/istio/releases/tag/1.3.5" data-downloadbuttontext="DOWNLOAD 1.3.5" data-updateadvice='Before you download 1.3.5, you should know that there&#39;s a newer patch release with the latest bug fixes and perf improvements.' 
data-updatebutton='LEARN ABOUT ISTIO 1.3.8' data-updatehref="/v1.9/news/releases/1.3.x/announcing-1.3.8/"> <h5>DOWNLOAD</h5> <p>Download and install this release.</p> </a> <a class="entry" href="https://archive.istio.io/v1.3/docs"> <h5>DOCS</h5> <p>Visit the documentation for this release.</p> </a> <a class="entry" href="https://github.com/istio/istio/compare/1.3.4...1.3.5"> <h5>SOURCE CHANGES</h5> <p>Inspect the full set of source code changes.</p> </a> </div> <h2 id="security-update">Security update</h2> <ul> <li><strong>ISTIO-SECURITY-2019-006</strong> A DoS vulnerability has been discovered in Envoy.</li> </ul> <p><strong><a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-18817">CVE-2019-18817</a></strong>: An infinite loop can be triggered in Envoy if the option <code>continue_on_listener_filters_timeout</code> is set to <code>True</code>, which is the case in Istio. This vulnerability could be leveraged for a DoS attack. If you applied the mitigation mentioned in <a href="/v1.9/news/security/istio-security-2019-006">our November 11, 2019 news post</a>, you can remove the mitigation once you upgrade to Istio 1.3.5 or newer.</p> <h2 id="bug-fixes">Bug fixes</h2> <ul> <li><strong>Fixed</strong> Envoy listener configuration for TCP headless services. (<a href="https://github.com/istio/istio/issues/17748">Issue #17748</a>)</li> <li><strong>Fixed</strong> an issue which caused stale endpoints to remain even when a deployment was scaled to 0 replicas. (<a href="https://github.com/istio/istio/issues/14336">Issue #14336</a>)</li> <li><strong>Fixed</strong> Pilot to no longer crash when an invalid Envoy configuration is generated. (<a href="https://github.com/istio/istio/issues/17266">Issue 17266</a>)</li> <li><strong>Fixed</strong> an issue with the <code>destination_service_name</code> label not getting populated for TCP metrics related to BlackHole/Passthrough clusters.
(<a href="https://github.com/istio/istio/issues/17271">Issue 17271</a>)</li> <li><strong>Fixed</strong> an issue with telemetry not reporting metrics for BlackHole/Passthrough clusters when fall through filter chains were invoked. This occurred when explicit <code>ServiceEntries</code> were configured for external services. (<a href="https://github.com/istio/istio/issues/17759">Issue 17759</a>)</li> </ul> <h2 id="minor-enhancements">Minor enhancements</h2> <ul> <li><strong>Added</strong> support for Citadel to periodically check the root certificate remaining lifetime and rotate expiring root certificates. (<a href="https://github.com/istio/istio/issues/17059">Issue 17059</a>)</li> <li><strong>Added</strong> <code>PILOT_BLOCK_HTTP_ON_443</code> boolean environment variable to Pilot. If enabled, this flag prevents HTTP services from running on port 443 in order to prevent conflicts with external HTTP services. This is disabled by default. (<a href="https://github.com/istio/istio/issues/16458">Issue 16458</a>)</li> </ul>Mon, 11 Nov 2019 00:00:00 +0000/v1.9/news/releases/1.3.x/announcing-1.3.5//v1.9/news/releases/1.3.x/announcing-1.3.5/ISTIO-SECURITY-2019-006 <table> <thead> <tr> <th colspan="2">Disclosure Details</th> </tr> </thead> <tbody> <tr> <td>CVE(s)</td> <td> <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-18817">CVE-2019-18817</a><br> </td> </tr> <tr> <td>CVSS Impact Score</td> <td>7.5 <a href="https://nvd.nist.gov/vuln-metrics/cvss/v3-calculator?vector=CVSS%3a3.1%2fAV%3aN%2fAC%3aL%2fPR%3aN%2fUI%3aN%2fS%3aU%2fC%3aN%2fI%3aN%2fA%3aH%2fE%3aH%2fRL%3aO%2fRC%3aC">CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H/E:H/RL:O/RC:C</a></td> </tr> <tr> <td>Affected Releases</td> <td> 1.3 to 1.3.4<br> </td> </tr> </tbody> </table> <p>Envoy, and subsequently Istio, are vulnerable to the following DoS attack. An infinite loop can be triggered in Envoy if the option <code>continue_on_listener_filters_timeout</code> is set to <code>True</code>. 
This has been the case for Istio since the introduction of the Protocol Detection feature in Istio 1.3. A remote attacker may trivially trigger this vulnerability, effectively exhausting Envoy&rsquo;s CPU resources and causing a denial of service.</p> <h2 id="impact-and-detection">Impact and detection</h2> <p>Both Istio gateways and sidecars are vulnerable to this issue. If you are running one of the affected releases, your cluster is vulnerable.</p> <h2 id="mitigation">Mitigation</h2> <ul> <li><p>Workaround: Exploitation of this vulnerability can be prevented by customizing the Istio installation (as described in <a href="https://archive.istio.io/v1.3/docs/reference/config/installation-options/#pilot-options">installation options</a>), using Helm to override the following options:</p> <pre><code class='language-plain' data-expandlinks='true' data-repo='istio' >--set pilot.env.PILOT_INBOUND_PROTOCOL_DETECTION_TIMEOUT=0s --set global.proxy.protocolDetectionTimeout=0s </code></pre></li> <li><p>For Istio 1.3.x deployments: update to <a href="/v1.9/news/releases/1.3.x/announcing-1.3.5">Istio 1.3.5</a> or later.</p></li> </ul> <h2 id="reporting-vulnerabilities">Reporting vulnerabilities</h2> <p>We’d like to remind our community to follow the <a href="/v1.9/about/security-vulnerabilities/">vulnerability reporting process</a> to report any bug that can result in a security vulnerability.Thu, 07 Nov 2019 00:00:00 +0000/v1.9/news/security/istio-security-2019-006//v1.9/news/security/istio-security-2019-006/CVEAnnouncing Istio 1.2.9 <p>We&rsquo;re pleased to announce the availability of Istio 1.2.9.
Please see below for what&rsquo;s changed.</p> <div class="relnote-actions call-to-action"> <a class="entry" href="/v1.9/news/releases/1.2.x/announcing-1.2/upgrade-notes/"> <h5>BEFORE YOU UPGRADE</h5> <p>Things to know and prepare before upgrading.</p> </a> <a class="update-notice entry" data-title='Update Notice' data-downloadhref="https://github.com/istio/istio/releases/tag/1.2.9" data-downloadbuttontext="DOWNLOAD 1.2.9" data-updateadvice='Before you download 1.2.9, you should know that there&#39;s a newer patch release with the latest bug fixes and perf improvements.' data-updatebutton='LEARN ABOUT ISTIO 1.2.10' data-updatehref="/v1.9/news/releases/1.2.x/announcing-1.2.10/"> <h5>DOWNLOAD</h5> <p>Download and install this release.</p> </a> <a class="entry" href="https://archive.istio.io/v1.2/docs"> <h5>DOCS</h5> <p>Visit the documentation for this release.</p> </a> <a class="entry" href="https://github.com/istio/istio/compare/1.2.8...1.2.9"> <h5>SOURCE CHANGES</h5> <p>Inspect the full set of source code changes.</p> </a> </div> <h2 id="bug-fixes">Bug fixes</h2> <ul> <li>Fix a proxy startup race condition.</li> </ul> <h2 id="features">Features</h2> <ul> <li>Adding support for Citadel automatic root certificate rotation (<a href="https://github.com/istio/istio/issues/17059">Issue 17059</a>).</li> </ul>Wed, 06 Nov 2019 00:00:00 +0000/v1.9/news/releases/1.2.x/announcing-1.2.9//v1.9/news/releases/1.2.x/announcing-1.2.9/Announcing Istio 1.3.4 <p>This release includes bug fixes to improve robustness. 
This release note describes what&rsquo;s different between Istio 1.3.3 and Istio 1.3.4.</p> <div class="relnote-actions call-to-action"> <a class="entry" href="/v1.9/news/releases/1.3.x/announcing-1.3/upgrade-notes/"> <h5>BEFORE YOU UPGRADE</h5> <p>Things to know and prepare before upgrading.</p> </a> <a class="update-notice entry" data-title='Update Notice' data-downloadhref="https://github.com/istio/istio/releases/tag/1.3.4" data-downloadbuttontext="DOWNLOAD 1.3.4" data-updateadvice='Before you download 1.3.4, you should know that there&#39;s a newer patch release with the latest bug fixes and perf improvements.' data-updatebutton='LEARN ABOUT ISTIO 1.3.8' data-updatehref="/v1.9/news/releases/1.3.x/announcing-1.3.8/"> <h5>DOWNLOAD</h5> <p>Download and install this release.</p> </a> <a class="entry" href="https://archive.istio.io/v1.3/docs"> <h5>DOCS</h5> <p>Visit the documentation for this release.</p> </a> <a class="entry" href="https://github.com/istio/istio/compare/1.3.3...1.3.4"> <h5>SOURCE CHANGES</h5> <p>Inspect the full set of source code changes.</p> </a> </div> <h2 id="bug-fixes">Bug fixes</h2> <ul> <li><strong>Fixed</strong> a crashing bug in the Google node agent provider. (<a href="https://github.com/istio/istio/pull/18260">Pull Request #18260</a>)</li> <li><strong>Fixed</strong> Prometheus annotations and updated Jaeger to 1.14. (<a href="https://github.com/istio/istio/pull/18274">Pull Request #18274</a>)</li> <li><strong>Fixed</strong> inbound listener reloads that occur at 5-minute intervals. (<a href="https://github.com/istio/istio/issues/18088">Issue #18088</a>)</li> <li><strong>Fixed</strong> validation of key and certificate rotation. (<a href="https://github.com/istio/istio/issues/17718">Issue #17718</a>)</li> <li><strong>Fixed</strong> invalid internal resource garbage collection. (<a href="https://github.com/istio/istio/issues/16818">Issue #16818</a>)</li> <li><strong>Fixed</strong> webhooks that were not updated on a failure.
(<a href="https://github.com/istio/istio/pull/17820">Pull Request #17820</a>)</li> <li><strong>Improved</strong> performance of OpenCensus tracing adapter. (<a href="https://github.com/istio/istio/issues/18042">Issue #18042</a>)</li> </ul> <h2 id="minor-enhancements">Minor enhancements</h2> <ul> <li><strong>Improved</strong> reliability of the SDS service. (<a href="https://github.com/istio/istio/issues/17409">Issue #17409</a>, <a href="https://github.com/istio/istio/issues/17905">Issue #17905</a>)</li> <li><strong>Added</strong> stable versions of failure domain labels. (<a href="https://github.com/istio/istio/pull/17755">Pull Request #17755</a>)</li> <li><strong>Added</strong> update of the global mesh policy on upgrades. (<a href="https://github.com/istio/istio/pull/17033">Pull Request #17033</a>)</li> </ul>Fri, 01 Nov 2019 00:00:00 +0000/v1.9/news/releases/1.3.x/announcing-1.3.4//v1.9/news/releases/1.3.x/announcing-1.3.4/Announcing Istio 1.2.8 <p>We&rsquo;re pleased to announce the availability of Istio 1.2.8. Please see below for what&rsquo;s changed.</p> <div class="relnote-actions call-to-action"> <a class="entry" href="/v1.9/news/releases/1.2.x/announcing-1.2/upgrade-notes/"> <h5>BEFORE YOU UPGRADE</h5> <p>Things to know and prepare before upgrading.</p> </a> <a class="update-notice entry" data-title='Update Notice' data-downloadhref="https://github.com/istio/istio/releases/tag/1.2.8" data-downloadbuttontext="DOWNLOAD 1.2.8" data-updateadvice='Before you download 1.2.8, you should know that there&#39;s a newer patch release with the latest bug fixes and perf improvements.'
data-updatebutton='LEARN ABOUT ISTIO 1.2.10' data-updatehref="/v1.9/news/releases/1.2.x/announcing-1.2.10/"> <h5>DOWNLOAD</h5> <p>Download and install this release.</p> </a> <a class="entry" href="https://archive.istio.io/v1.2/docs"> <h5>DOCS</h5> <p>Visit the documentation for this release.</p> </a> <a class="entry" href="https://github.com/istio/istio/compare/1.2.7...1.2.8"> <h5>SOURCE CHANGES</h5> <p>Inspect the full set of source code changes.</p> </a> </div> <h2 id="bug-fixes">Bug fixes</h2> <ul> <li><p>Fix a bug introduced by <a href="/v1.9/news/security/istio-security-2019-005">our October 8th security release</a> which incorrectly calculated HTTP header and body sizes (<a href="https://github.com/istio/istio/issues/17735">Issue 17735</a>).</p></li> <li><p>Fix a minor bug where endpoints still remained in /clusters while scaling a deployment to 0 replica (<a href="https://github.com/istio/istio/issues/14336">Issue 14336</a>).</p></li> <li><p>Fix Helm upgrade process to correctly update mesh policy for mutual TLS (<a href="https://github.com/istio/istio/issues/16170">Issue 16170</a>).</p></li> <li><p>Fix inconsistencies in the destination service label for TCP connection opened/closed metrics (<a href="https://github.com/istio/istio/issues/17234">Issue 17234</a>).</p></li> <li><p>Fix the Istio secret cleanup mechanism (<a href="https://github.com/istio/istio/issues/17122">Issue 17122</a>).</p></li> <li><p>Fix the Mixer Stackdriver adapter encoding process to handle invalid UTF-8 (<a href="https://github.com/istio/istio/issues/16966">Issue 16966</a>).</p></li> </ul> <h2 id="features">Features</h2> <ul> <li>Add <code>pilot</code> support for the new failure domain labels: <code>zone</code> and <code>region</code>.</li> </ul>Wed, 23 Oct 2019 00:00:00 +0000/v1.9/news/releases/1.2.x/announcing-1.2.8//v1.9/news/releases/1.2.x/announcing-1.2.8/Support for Istio 1.1 has ended<p>As <a href="/v1.9/news/support/announcing-1.1-eol/">previously announced</a>, support for 
Istio 1.1 has now officially ended.</p> <p>Since we learned of the security vulnerability <a href="/v1.9/news/security/istio-security-2019-005">behind our October 8th security release</a> while still barely within the 1.1 support period, we decided to extend the 1.1 support period beyond the original announcement and release <a href="/v1.9/news/releases/1.1.x/announcing-1.1.16">1.1.16</a>. Then we discovered that a <a href="https://github.com/istio/istio/issues/17735">bug in HTTP header size calculation</a> had been introduced by the security release, so we decided to release a fix in one last <a href="/v1.9/news/releases/1.1.x/announcing-1.1.17">1.1.17</a> release before closing out the 1.1 series for good.</p> <p>At this point we will no longer back-port fixes for security issues and critical bugs to 1.1, so we heartily encourage you to upgrade to the latest version of Istio (1.9.5) if you haven&rsquo;t already.</p>Mon, 21 Oct 2019 00:00:00 +0000/v1.9/news/support/announcing-1.1-eol-final//v1.9/news/support/announcing-1.1-eol-final/Announcing Istio 1.1.17 <p>We&rsquo;re pleased to announce the availability of Istio 1.1.17. This will be the last 1.1.x patch release.
Please see below for what&rsquo;s changed.</p> <div class="relnote-actions call-to-action"> <a class="entry" href="/v1.9/news/releases/1.1.x/announcing-1.1/upgrade-notes/"> <h5>BEFORE YOU UPGRADE</h5> <p>Things to know and prepare before upgrading.</p> </a> <a class="entry" href="https://github.com/istio/istio/releases/tag/1.1.17"> <h5>DOWNLOAD</h5> <p>Download and install this release.</p> </a> <a class="entry" href="https://archive.istio.io/v1.1/docs"> <h5>DOCS</h5> <p>Visit the documentation for this release.</p> </a> <a class="entry" href="https://github.com/istio/istio/compare/1.1.16...1.1.17"> <h5>SOURCE CHANGES</h5> <p>Inspect the full set of source code changes.</p> </a> </div> <h2 id="bug-fixes">Bug fixes</h2> <ul> <li>Fix a bug introduced by <a href="/v1.9/news/security/istio-security-2019-005">our October 8th security release</a> which incorrectly calculated HTTP header and body sizes (<a href="https://github.com/istio/istio/issues/17735">Issue 17735</a>).</li> </ul>Mon, 21 Oct 2019 00:00:00 +0000/v1.9/news/releases/1.1.x/announcing-1.1.17//v1.9/news/releases/1.1.x/announcing-1.1.17/Announcing Istio 1.3.3 <p>This release includes bug fixes to improve robustness. This release note describes what&rsquo;s different between Istio 1.3.2 and Istio 1.3.3.</p> <div class="relnote-actions call-to-action"> <a class="entry" href="/v1.9/news/releases/1.3.x/announcing-1.3/upgrade-notes/"> <h5>BEFORE YOU UPGRADE</h5> <p>Things to know and prepare before upgrading.</p> </a> <a class="update-notice entry" data-title='Update Notice' data-downloadhref="https://github.com/istio/istio/releases/tag/1.3.3" data-downloadbuttontext="DOWNLOAD 1.3.3" data-updateadvice='Before you download 1.3.3, you should know that there&#39;s a newer patch release with the latest bug fixes and perf improvements.'
data-updatebutton='LEARN ABOUT ISTIO 1.3.8' data-updatehref="/v1.9/news/releases/1.3.x/announcing-1.3.8/"> <h5>DOWNLOAD</h5> <p>Download and install this release.</p> </a> <a class="entry" href="https://archive.istio.io/v1.3/docs"> <h5>DOCS</h5> <p>Visit the documentation for this release.</p> </a> <a class="entry" href="https://github.com/istio/istio/compare/1.3.2...1.3.3"> <h5>SOURCE CHANGES</h5> <p>Inspect the full set of source code changes.</p> </a> </div> <h2 id="bug-fixes">Bug fixes</h2> <ul> <li><strong>Fixed</strong> an issue which caused Prometheus to install improperly when using <code>istioctl x manifest apply</code>. (<a href="https://github.com/istio/istio/issues/16970">Issue 16970</a>)</li> <li><strong>Fixed</strong> a bug where locality load balancing can not read locality information from the node. (<a href="https://github.com/istio/istio/issues/17337">Issue 17337</a>)</li> <li><strong>Fixed</strong> a bug where long-lived connections were getting dropped by the Envoy proxy as the listeners were getting reconfigured without any user configuration changes. (<a href="https://github.com/istio/istio/issues/17383">Issue 17383</a>, <a href="https://github.com/istio/istio/issues/17139">Issue 17139</a>)</li> <li><strong>Fixed</strong> a crash in <code>istioctl x analyze</code> command. (<a href="https://github.com/istio/istio/issues/17449">Issue 17449</a>)</li> <li><strong>Fixed</strong> <code>istioctl x manifest diff</code> to diff text blocks in ConfigMaps. (<a href="https://github.com/istio/istio/issues/16828">Issue 16828</a>)</li> <li><strong>Fixed</strong> a segmentation fault crash in the Envoy proxy. 
(<a href="https://github.com/istio/istio/issues/17699">Issue 17699</a>)</li> </ul>Mon, 14 Oct 2019 00:00:00 +0000/v1.9/news/releases/1.3.x/announcing-1.3.3//v1.9/news/releases/1.3.x/announcing-1.3.3/ISTIO-SECURITY-2019-005 <table> <thead> <tr> <th colspan="2">Disclosure Details</th> </tr> </thead> <tbody> <tr> <td>CVE(s)</td> <td> <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-15226">CVE-2019-15226</a><br> </td> </tr> <tr> <td>CVSS Impact Score</td> <td>7.5 <a href="https://nvd.nist.gov/vuln-metrics/cvss/v3-calculator?vector=CVSS%3a3.0%2fAV%3aN%2fAC%3aL%2fPR%3aN%2fUI%3aN%2fS%3aU%2fC%3aN%2fI%3aN%2fA%3aH">CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H</a></td> </tr> <tr> <td>Affected Releases</td> <td> 1.1 to 1.1.15<br> 1.2 to 1.2.6<br> 1.3 to 1.3.1<br> </td> </tr> </tbody> </table> <p>Envoy, and subsequently Istio, are vulnerable to the following DoS attack. Upon receiving each incoming request, Envoy will iterate over the request headers to verify that the total size of the headers stays below a maximum limit. A remote attacker may craft a request that stays below the maximum request header size but consists of many thousands of small headers to consume CPU and result in a denial-of-service attack.</p> <h2 id="impact-and-detection">Impact and detection</h2> <p>Both Istio gateways and sidecars are vulnerable to this issue. 
If you are running one of the affected releases, your cluster is vulnerable.</p> <h2 id="mitigation">Mitigation</h2> <ul> <li>For Istio 1.1.x deployments: update all control plane components (Pilot, Mixer, Citadel, and Galley) and then <a href="https://archive.istio.io/1.1/docs/setup/upgrade/cni-helm-upgrade/#sidecar-upgrade">upgrade the data plane</a> to <a href="/v1.9/news/releases/1.1.x/announcing-1.1.16">Istio 1.1.16</a> or later.</li> <li>For Istio 1.2.x deployments: update all control plane components (Pilot, Mixer, Citadel, and Galley) and then <a href="https://archive.istio.io/1.2/docs/setup/upgrade/cni-helm-upgrade/#sidecar-upgrade">upgrade the data plane</a> to <a href="/v1.9/news/releases/1.2.x/announcing-1.2.7">Istio 1.2.7</a> or later.</li> <li>For Istio 1.3.x deployments: update all control plane components (Pilot, Mixer, Citadel, and Galley) and then <a href="https://archive.istio.io/1.3/docs/setup/upgrade/cni-helm-upgrade/#sidecar-upgrade">upgrade the data plane</a> to <a href="/v1.9/news/releases/1.3.x/announcing-1.3.2">Istio 1.3.2</a> or later.</li> </ul> <h2 id="reporting-vulnerabilities">Reporting vulnerabilities</h2> <p>We’d like to remind our community to follow the <a href="/v1.9/about/security-vulnerabilities/">vulnerability reporting process</a> to report any bug that can result in a security vulnerability.Tue, 08 Oct 2019 00:00:00 +0000/v1.9/news/security/istio-security-2019-005//v1.9/news/security/istio-security-2019-005/CVEAnnouncing Istio 1.3.2 <p>We&rsquo;re pleased to announce the availability of Istio 1.3.2. 
Please see below for what&rsquo;s changed.</p> <div class="relnote-actions call-to-action"> <a class="entry" href="/v1.9/news/releases/1.3.x/announcing-1.3/upgrade-notes/"> <h5>BEFORE YOU UPGRADE</h5> <p>Things to know and prepare before upgrading.</p> </a> <a class="update-notice entry" data-title='Update Notice' data-downloadhref="https://github.com/istio/istio/releases/tag/1.3.2" data-downloadbuttontext="DOWNLOAD 1.3.2" data-updateadvice='Before you download 1.3.2, you should know that there&#39;s a newer patch release with the latest bug fixes and perf improvements.' data-updatebutton='LEARN ABOUT ISTIO 1.3.8' data-updatehref="/v1.9/news/releases/1.3.x/announcing-1.3.8/"> <h5>DOWNLOAD</h5> <p>Download and install this release.</p> </a> <a class="entry" href="https://archive.istio.io/v1.3/docs"> <h5>DOCS</h5> <p>Visit the documentation for this release.</p> </a> <a class="entry" href="https://github.com/istio/istio/compare/1.3.1...1.3.2"> <h5>SOURCE CHANGES</h5> <p>Inspect the full set of source code changes.</p> </a> </div> <h2 id="security-update">Security update</h2> <p>This release contains fixes for the security vulnerability described in <a href="/v1.9/news/security/istio-security-2019-005">our October 8th, 2019 news post</a>. Specifically:</p> <p><strong>ISTIO-SECURITY-2019-005</strong>: A DoS vulnerability has been discovered by the Envoy community. * <strong><a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-15226">CVE-2019-15226</a></strong>: After investigation, the Istio team has found that this issue could be leveraged for a DoS attack in Istio if an attacker uses a high quantity of very small headers.</p> <p>Nothing else is included in this release except for the above security fix. 
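For context on how the fix works: patched proxies bound the number of headers accepted per request. In stock Envoy the corresponding knob is <code>max_headers_count</code> in the HTTP connection manager's common protocol options; the fragment below is illustrative only, and the exact limit Istio's patched images apply is something you should verify for your version.

```yaml
# Illustrative fragment of an Envoy HttpConnectionManager config
# (not a complete configuration):
common_http_protocol_options:
  max_headers_count: 100  # requests carrying more headers are rejected
```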
Distroless images will be available in a few days.</p>Tue, 08 Oct 2019 00:00:00 +0000/v1.9/news/releases/1.3.x/announcing-1.3.2//v1.9/news/releases/1.3.x/announcing-1.3.2/Announcing Istio 1.2.7 <p>We&rsquo;re pleased to announce the availability of Istio 1.2.7. Please see below for what&rsquo;s changed.</p> <div class="relnote-actions call-to-action"> <a class="entry" href="/v1.9/news/releases/1.2.x/announcing-1.2/upgrade-notes/"> <h5>BEFORE YOU UPGRADE</h5> <p>Things to know and prepare before upgrading.</p> </a> <a class="update-notice entry" data-title='Update Notice' data-downloadhref="https://github.com/istio/istio/releases/tag/1.2.7" data-downloadbuttontext="DOWNLOAD 1.2.7" data-updateadvice='Before you download 1.2.7, you should know that there&#39;s a newer patch release with the latest bug fixes and perf improvements.' data-updatebutton='LEARN ABOUT ISTIO 1.2.10' data-updatehref="/v1.9/news/releases/1.2.x/announcing-1.2.10/"> <h5>DOWNLOAD</h5> <p>Download and install this release.</p> </a> <a class="entry" href="https://archive.istio.io/v1.2/docs"> <h5>DOCS</h5> <p>Visit the documentation for this release.</p> </a> <a class="entry" href="https://github.com/istio/istio/compare/1.2.6...1.2.7"> <h5>SOURCE CHANGES</h5> <p>Inspect the full set of source code changes.</p> </a> </div> <h2 id="security-update">Security update</h2> <p>This release contains fixes for the security vulnerability described in <a href="/v1.9/news/security/istio-security-2019-005">our October 8th, 2019 news post</a>. Specifically:</p> <p><strong>ISTIO-SECURITY-2019-005</strong>: A DoS vulnerability has been discovered by the Envoy community. 
* <strong><a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-15226">CVE-2019-15226</a></strong>: After investigation, the Istio team has found that this issue could be leveraged for a DoS attack in Istio if an attacker uses a high quantity of very small headers.</p> <h2 id="bug-fix">Bug fix</h2> <ul> <li>Fix a bug where <code>nodeagent</code> failed to start when using Citadel (<a href="https://github.com/istio/istio/issues/17108">Issue 17108</a>)</li> </ul>Tue, 08 Oct 2019 00:00:00 +0000/v1.9/news/releases/1.2.x/announcing-1.2.7//v1.9/news/releases/1.2.x/announcing-1.2.7/Announcing Istio 1.1.16 <p>We&rsquo;re pleased to announce the availability of Istio 1.1.16. Please see below for what&rsquo;s changed.</p> <div class="relnote-actions call-to-action"> <a class="entry" href="/v1.9/news/releases/1.1.x/announcing-1.1/upgrade-notes/"> <h5>BEFORE YOU UPGRADE</h5> <p>Things to know and prepare before upgrading.</p> </a> <a class="update-notice entry" data-title='Update Notice' data-downloadhref="https://github.com/istio/istio/releases/tag/1.1.16" data-downloadbuttontext="DOWNLOAD 1.1.16" data-updateadvice='Before you download 1.1.16, you should know that there&#39;s a newer patch release with the latest bug fixes and perf improvements.' data-updatebutton='LEARN ABOUT ISTIO 1.1.17' data-updatehref="/v1.9/news/releases/1.1.x/announcing-1.1.17/"> <h5>DOWNLOAD</h5> <p>Download and install this release.</p> </a> <a class="entry" href="https://archive.istio.io/v1.1/docs"> <h5>DOCS</h5> <p>Visit the documentation for this release.</p> </a> <a class="entry" href="https://github.com/istio/istio/compare/1.1.15...1.1.16"> <h5>SOURCE CHANGES</h5> <p>Inspect the full set of source code changes.</p> </a> </div> <h2 id="security-update">Security update</h2> <p>This release contains fixes for the security vulnerability described in <a href="/v1.9/news/security/istio-security-2019-005">our October 8th, 2019 news post</a>.
Specifically:</p> <p><strong>ISTIO-SECURITY-2019-005</strong>: A DoS vulnerability has been discovered by the Envoy community. * <strong><a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-15226">CVE-2019-15226</a></strong>: After investigation, the Istio team has found that this issue could be leveraged for a DoS attack in Istio if an attacker uses a high quantity of very small headers.</p> <p>Nothing else is included in this release except for the above security fix.</p>Tue, 08 Oct 2019 00:00:00 +0000/v1.9/news/releases/1.1.x/announcing-1.1.16//v1.9/news/releases/1.1.x/announcing-1.1.16/Announcing Istio 1.3.1 <p>This release includes bug fixes to improve robustness. This release note describes what’s different between Istio 1.3.0 and Istio 1.3.1.</p> <div class="relnote-actions call-to-action"> <a class="entry" href="/v1.9/news/releases/1.3.x/announcing-1.3/upgrade-notes/"> <h5>BEFORE YOU UPGRADE</h5> <p>Things to know and prepare before upgrading.</p> </a> <a class="update-notice entry" data-title='Update Notice' data-downloadhref="https://github.com/istio/istio/releases/tag/1.3.1" data-downloadbuttontext="DOWNLOAD 1.3.1" data-updateadvice='Before you download 1.3.1, you should know that there&#39;s a newer patch release with the latest bug fixes and perf improvements.' 
data-updatebutton='LEARN ABOUT ISTIO 1.3.8' data-updatehref="/v1.9/news/releases/1.3.x/announcing-1.3.8/"> <h5>DOWNLOAD</h5> <p>Download and install this release.</p> </a> <a class="entry" href="https://archive.istio.io/v1.3/docs"> <h5>DOCS</h5> <p>Visit the documentation for this release.</p> </a> <a class="entry" href="https://github.com/istio/istio/compare/1.3.0...1.3.1"> <h5>SOURCE CHANGES</h5> <p>Inspect the full set of source code changes.</p> </a> </div> <h2 id="bug-fixes">Bug fixes</h2> <ul> <li><strong>Fixed</strong> an issue which caused the secret cleanup job to erroneously run during upgrades (<a href="https://github.com/istio/istio/issues/16873">Issue 16873</a>).</li> <li><strong>Fixed</strong> an issue where the default configuration disabled Kubernetes Ingress support (<a href="https://github.com/istio/istio/issues/17148">Issue 17148</a>)</li> <li><strong>Fixed</strong> an issue with handling invalid <code>UTF-8</code> characters in the Stackdriver logging adapter (<a href="https://github.com/istio/istio/issues/16966">Issue 16966</a>).</li> <li><strong>Fixed</strong> an issue which caused the <code>destination_service</code> label in HTTP metrics not to be set for <code>BlackHoleCluster</code> and <code>PassThroughCluster</code> (<a href="https://github.com/istio/istio/issues/16629">Issue 16629</a>).</li> <li><strong>Fixed</strong> an issue with the <code>destination_service</code> label in the <code>istio_tcp_connections_closed_total</code> and <code>istio_tcp_connections_opened_total</code> metrics which caused them to not be set correctly (<a href="https://github.com/istio/istio/issues/17234">Issue 17234</a>).</li> <li><strong>Fixed</strong> an Envoy crash introduced in Istio 1.2.4 (<a href="https://github.com/istio/istio/issues/16357">Issue 16357</a>).</li> <li><strong>Fixed</strong> Istio CNI sidecar initialization when IPv6 is disabled on the node (<a href="https://github.com/istio/istio/issues/15895">Issue 15895</a>).</li> 
<li><strong>Fixed</strong> a regression affecting support of RS384 and RS512 algorithms in JWTs (<a href="https://github.com/istio/istio/issues/15380">Issue 15380</a>).</li> </ul> <h2 id="minor-enhancements">Minor enhancements</h2> <ul> <li><strong>Added</strong> support for <code>.Values.global.priorityClassName</code> to the telemetry deployment.</li> <li><strong>Added</strong> annotations for Datadog tracing that controls extra features in sidecars.</li> <li><strong>Added</strong> the <code>pilot_xds_push_time</code> metric to report Pilot xDS push time.</li> <li><strong>Added</strong> <code>istioctl experimental analyze</code> to support multi-resource analysis and validation.</li> <li><strong>Added</strong> support for running metadata exchange and stats extensions in a WebAssembly sandbox.</li> <li><strong>Removed</strong> time diff info in the proxy-status command.</li> </ul>Fri, 27 Sep 2019 00:00:00 +0000/v1.9/news/releases/1.3.x/announcing-1.3.1//v1.9/news/releases/1.3.x/announcing-1.3.1/Announcing Istio 1.2.6 <p>We&rsquo;re pleased to announce the availability of Istio 1.2.6. Please see below for what&rsquo;s changed.</p> <div class="relnote-actions call-to-action"> <a class="entry" href="/v1.9/news/releases/1.2.x/announcing-1.2/upgrade-notes/"> <h5>BEFORE YOU UPGRADE</h5> <p>Things to know and prepare before upgrading.</p> </a> <a class="update-notice entry" data-title='Update Notice' data-downloadhref="https://github.com/istio/istio/releases/tag/1.2.6" data-downloadbuttontext="DOWNLOAD 1.2.6" data-updateadvice='Before you download 1.2.6, you should know that there&#39;s a newer patch release with the latest bug fixes and perf improvements.' 
data-updatebutton='LEARN ABOUT ISTIO 1.2.10' data-updatehref="/v1.9/news/releases/1.2.x/announcing-1.2.10/"> <h5>DOWNLOAD</h5> <p>Download and install this release.</p> </a> <a class="entry" href="https://archive.istio.io/v1.2/docs"> <h5>DOCS</h5> <p>Visit the documentation for this release.</p> </a> <a class="entry" href="https://github.com/istio/istio/compare/1.2.5...1.2.6"> <h5>SOURCE CHANGES</h5> <p>Inspect the full set of source code changes.</p> </a> </div> <h2 id="bug-fixes">Bug fixes</h2> <ul> <li>Fix <code>redisquota</code> inconsistency in regards to <code>memquota</code> counting (<a href="https://github.com/istio/istio/issues/15543">Issue 15543</a>).</li> <li>Fix an Envoy crash introduced in Istio 1.2.5 (<a href="https://github.com/istio/istio/issues/16357">Issue 16357</a>).</li> <li>Fix Citadel health check broken in the context of plugin certs (with intermediate certs) (<a href="https://github.com/istio/istio/issues/16593">Issue 16593</a>).</li> <li>Fix Stackdriver Mixer Adapter error log verbosity (<a href="https://github.com/istio/istio/issues/16782">Issue 16782</a>).</li> <li>Fix a bug where the service account map would be erased for service hostnames with more than one port.</li> <li>Fix incorrect <code>filterChainMatch</code> wildcard hosts duplication produced by Pilot (<a href="https://github.com/istio/istio/issues/16573">Issue 16573</a>).</li> </ul> <h2 id="small-enhancements">Small enhancements</h2> <ul> <li>Expose <code>sidecarToTelemetrySessionAffinity</code> (required for Mixer V1) when it talks to services like Stackdriver. 
(<a href="https://github.com/istio/istio/issues/16862">Issue 16862</a>).</li> <li>Expose <code>HTTP/2</code> window size settings as Pilot environment variables (<a href="https://github.com/istio/istio/issues/17117">Issue 17117</a>).</li> </ul>Tue, 17 Sep 2019 00:00:00 +0000/v1.9/news/releases/1.2.x/announcing-1.2.6//v1.9/news/releases/1.2.x/announcing-1.2.6/Announcing Istio 1.1.15 <p>We&rsquo;re pleased to announce the availability of Istio 1.1.15. Please see below for what&rsquo;s changed.</p> <div class="relnote-actions call-to-action"> <a class="entry" href="/v1.9/news/releases/1.1.x/announcing-1.1/upgrade-notes/"> <h5>BEFORE YOU UPGRADE</h5> <p>Things to know and prepare before upgrading.</p> </a> <a class="update-notice entry" data-title='Update Notice' data-downloadhref="https://github.com/istio/istio/releases/tag/1.1.15" data-downloadbuttontext="DOWNLOAD 1.1.15" data-updateadvice='Before you download 1.1.15, you should know that there&#39;s a newer patch release with the latest bug fixes and perf improvements.' 
data-updatebutton='LEARN ABOUT ISTIO 1.1.17' data-updatehref="/v1.9/news/releases/1.1.x/announcing-1.1.17/"> <h5>DOWNLOAD</h5> <p>Download and install this release.</p> </a> <a class="entry" href="https://archive.istio.io/v1.1/docs"> <h5>DOCS</h5> <p>Visit the documentation for this release.</p> </a> <a class="entry" href="https://github.com/istio/istio/compare/1.1.14...1.1.15"> <h5>SOURCE CHANGES</h5> <p>Inspect the full set of source code changes.</p> </a> </div> <h2 id="bug-fixes">Bug fixes</h2> <ul> <li>Fix an Envoy crash introduced in Istio 1.1.14 (<a href="https://github.com/istio/istio/issues/16357">Issue 16357</a>).</li> </ul> <h2 id="small-enhancements">Small enhancements</h2> <ul> <li>Expose <code>HTTP/2</code> window size settings as Pilot environment variables (<a href="https://github.com/istio/istio/issues/17117">Issue 17117</a>).</li> </ul>Mon, 16 Sep 2019 00:00:00 +0000/v1.9/news/releases/1.1.x/announcing-1.1.15//v1.9/news/releases/1.1.x/announcing-1.1.15/Istio 1.2.4 sidecar image vulnerability <p>To Istio&rsquo;s user community,</p> <p>For the period between Aug 23rd 2019 09:16PM PST and Sep 6th 2019 09:26AM PST, a Docker image shipped as Istio <code>proxyv2</code> 1.2.4 (c.f. <a href="https://hub.docker.com/r/istio/proxyv2">https://hub.docker.com/r/istio/proxyv2</a> ) contained a version of the proxy that was still vulnerable to <a href="/v1.9/news/security/istio-security-2019-003/">ISTIO-SECURITY-2019-003</a> and <a href="/v1.9/news/security/istio-security-2019-004/">ISTIO-SECURITY-2019-004</a>.</p> <p>If you installed Istio 1.2.4 during that window, please consider upgrading to Istio 1.2.5, which also contains additional security fixes.</p> <h2 id="detailed-explanation">Detailed explanation</h2> <p>Because of the communication embargo we observed while fixing the recent HTTP/2 DoS vulnerabilities, as is usual for this type of release, we built a fixed sidecar image privately, in advance.
At the moment of the public disclosure, we pushed that image manually to Docker Hub.</p> <p>For any release that isn&rsquo;t fixing a privately disclosed security vulnerability, this Docker image is pushed through our release pipeline job, entirely automatically.</p> <p>Our automated release process does not work correctly with the manual interactions required by the vulnerability disclosure embargo: the release pipeline code kept a reference to an outdated version of the Istio repository.</p> <p>For a problem to occur, an automated build had to be launched from an old version, and that is exactly what happened during the release of Istio 1.2.5: we experienced a problem that required a <a href="https://github.com/istio-releases/pipeline/commit/635d276ad7eac01bef9c3f195520a0f722626c0f">revert commit</a>, which triggered a rebuild of 1.2.4 against an outdated version of Istio&rsquo;s code.</p> <p>This revert commit happened on Aug 23rd 2019 09:16PM PST. We noticed the problem and re-pushed the fixed image on Sep 6th 2019 09:26AM PST.</p> <p>We are sorry for any inconvenience you may have experienced due to this incident, and <a href="https://github.com/istio/istio/issues/16887">are working towards a better release system</a>, as well as a more efficient way to deal with vulnerability reports.</p> <ul> <li>The release managers for 1.2</li> </ul>Tue, 10 Sep 2019 00:00:00 +0000/v1.9/news/security/incorrect-sidecar-image-1.2.4//v1.9/news/security/incorrect-sidecar-image-1.2.4/communityblogsecurityAnnouncing Istio 1.2.5 <p>We&rsquo;re pleased to announce the availability of Istio 1.2.5.
Please see below for what&rsquo;s changed.</p> <div class="relnote-actions call-to-action"> <a class="entry" href="/v1.9/news/releases/1.2.x/announcing-1.2/upgrade-notes/"> <h5>BEFORE YOU UPGRADE</h5> <p>Things to know and prepare before upgrading.</p> </a> <a class="update-notice entry" data-title='Update Notice' data-downloadhref="https://github.com/istio/istio/releases/tag/1.2.5" data-downloadbuttontext="DOWNLOAD 1.2.5" data-updateadvice='Before you download 1.2.5, you should know that there&#39;s a newer patch release with the latest bug fixes and perf improvements.' data-updatebutton='LEARN ABOUT ISTIO 1.2.10' data-updatehref="/v1.9/news/releases/1.2.x/announcing-1.2.10/"> <h5>DOWNLOAD</h5> <p>Download and install this release.</p> </a> <a class="entry" href="https://archive.istio.io/v1.2/docs"> <h5>DOCS</h5> <p>Visit the documentation for this release.</p> </a> <a class="entry" href="https://github.com/istio/istio/compare/1.2.4...1.2.5"> <h5>SOURCE CHANGES</h5> <p>Inspect the full set of source code changes.</p> </a> </div> <h2 id="security-update">Security update</h2> <p>Following the previous fixes for the security vulnerabilities described in <a href="/v1.9/news/security/istio-security-2019-003/">ISTIO-SECURITY-2019-003</a> and <a href="/v1.9/news/security/istio-security-2019-004">ISTIO-SECURITY-2019-004</a>, we are now addressing the internal control plane communication surface. These security fixes were not available at the time of our previous security release, and we considered the control plane gRPC surface to be harder to exploit.</p> <p>You can find the gRPC vulnerability fix description on their mailing list (c.f. 
<a href="https://groups.google.com/forum/#!topic/grpc-io/w5jPamxdda4">HTTP/2 Security Vulnerabilities</a>).</p> <h2 id="bug-fixes">Bug fixes</h2> <ul> <li>Fix an Envoy bug that breaks <code>java.net.http.HttpClient</code> and other clients that attempt to upgrade from <code>HTTP/1.1</code> to <code>HTTP/2</code> using the <code>Upgrade: h2c</code> header (<a href="https://github.com/istio/istio/issues/16391">Issue 16391</a>).</li> <li>Fix a memory leak on send timeout (<a href="https://github.com/istio/istio/issues/15876">Issue 15876</a>).</li> </ul>Mon, 26 Aug 2019 00:00:00 +0000/v1.9/news/releases/1.2.x/announcing-1.2.5//v1.9/news/releases/1.2.x/announcing-1.2.5/Announcing Istio 1.1.14 <p>We&rsquo;re pleased to announce the availability of Istio 1.1.14. Please see below for what&rsquo;s changed.</p> <div class="relnote-actions call-to-action"> <a class="entry" href="/v1.9/news/releases/1.1.x/announcing-1.1/upgrade-notes/"> <h5>BEFORE YOU UPGRADE</h5> <p>Things to know and prepare before upgrading.</p> </a> <a class="update-notice entry" data-title='Update Notice' data-downloadhref="https://github.com/istio/istio/releases/tag/1.1.14" data-downloadbuttontext="DOWNLOAD 1.1.14" data-updateadvice='Before you download 1.1.14, you should know that there&#39;s a newer patch release with the latest bug fixes and perf improvements.' 
data-updatebutton='LEARN ABOUT ISTIO 1.1.17' data-updatehref="/v1.9/news/releases/1.1.x/announcing-1.1.17/"> <h5>DOWNLOAD</h5> <p>Download and install this release.</p> </a> <a class="entry" href="https://archive.istio.io/v1.1/docs"> <h5>DOCS</h5> <p>Visit the documentation for this release.</p> </a> <a class="entry" href="https://github.com/istio/istio/compare/1.1.13...1.1.14"> <h5>SOURCE CHANGES</h5> <p>Inspect the full set of source code changes.</p> </a> </div> <h2 id="security-update">Security update</h2> <p>Following the previous fixes for the security vulnerabilities described in <a href="/v1.9/news/security/istio-security-2019-003/">ISTIO-SECURITY-2019-003</a> and <a href="/v1.9/news/security/istio-security-2019-004/">ISTIO-SECURITY-2019-004</a>, we are now addressing the internal control plane communication surface. These security fixes were not available at the time of our previous security release, and we considered the control plane gRPC surface to be harder to exploit.</p> <p>You can find the gRPC vulnerability fix description on their mailing list (c.f. <a href="https://groups.google.com/forum/#!topic/grpc-io/w5jPamxdda4">HTTP/2 Security Vulnerabilities</a>).</p> <h2 id="bug-fix">Bug fix</h2> <ul> <li>Fix an Envoy bug that breaks <code>java.net.http.HttpClient</code> and other clients that attempt to upgrade from <code>HTTP/1.1</code> to <code>HTTP/2</code> using the <code>Upgrade: h2c</code> header (<a href="https://github.com/istio/istio/issues/16391">Issue 16391</a>).</li> </ul>Mon, 26 Aug 2019 00:00:00 +0000/v1.9/news/releases/1.1.x/announcing-1.1.14//v1.9/news/releases/1.1.x/announcing-1.1.14/Support for Istio 1.1 ends on September 19th, 2019<p>According to Istio&rsquo;s <a href="/v1.9/about/supported-releases#supported-releases/">support policy</a>, LTS releases like 1.1 are supported for three months after the next LTS release. 
Since <a href="/v1.9/news/releases/1.2.x/announcing-1.2/">1.2 was released on June 18th</a>, support for 1.1 will end on September 19th, 2019.</p> <p>At that point we will stop back-porting fixes for security issues and critical bugs to 1.1, so we encourage you to upgrade to the latest version of Istio (1.9.5). If you don&rsquo;t do this you may put yourself in the position of having to do a major upgrade on a short timeframe to pick up a critical fix.</p> <p>We care about you and your clusters, so please be kind to yourself and upgrade.</p>Thu, 15 Aug 2019 00:00:00 +0000/v1.9/news/support/announcing-1.1-eol//v1.9/news/support/announcing-1.1-eol/ISTIO-SECURITY-2019-004 <table> <thead> <tr> <th colspan="2">Disclosure Details</th> </tr> </thead> <tbody> <tr> <td>CVE(s)</td> <td> <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-9512">CVE-2019-9512</a><br> <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-9513">CVE-2019-9513</a><br> <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-9514">CVE-2019-9514</a><br> <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-9515">CVE-2019-9515</a><br> <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-9518">CVE-2019-9518</a><br> </td> </tr> <tr> <td>CVSS Impact Score</td> <td>7.5 <a href="https://nvd.nist.gov/vuln-metrics/cvss/v3-calculator?vector=CVSS%3a3.0%2fAV%3aN%2fAC%3aL%2fPR%3aN%2fUI%3aN%2fS%3aU%2fC%3aN%2fI%3aN%2fA%3aH">CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H</a></td> </tr> <tr> <td>Affected Releases</td> <td> 1.1 to 1.1.12<br> 1.2 to 1.2.3<br> </td> </tr> </tbody> </table> <p>Envoy, and subsequently Istio are vulnerable to a series of trivial HTTP/2-based DoS attacks:</p> <ul> <li>HTTP/2 flood using PING frames and queuing of response PING ACK frames that results in unbounded memory growth (which can lead to out of memory conditions).</li> <li>HTTP/2 flood using PRIORITY frames that results in excessive CPU usage and starvation of other 
clients.</li> <li>HTTP/2 flood using HEADERS frames with invalid HTTP headers and queuing of response <code>RST_STREAM</code> frames that results in unbounded memory growth (which can lead to out of memory conditions).</li> <li>HTTP/2 flood using SETTINGS frames and queuing of SETTINGS ACK frames that results in unbounded memory growth (which can lead to out of memory conditions).</li> <li>HTTP/2 flood using frames with an empty payload that results in excessive CPU usage and starvation of other clients.</li> </ul> <p>Those vulnerabilities were reported externally and affect multiple proxy implementations. See <a href="https://github.com/Netflix/security-bulletins/blob/master/advisories/third-party/2019-002.md">this security bulletin</a> for more information.</p> <h2 id="impact-and-detection">Impact and detection</h2> <p>If Istio terminates externally originated HTTP then it is vulnerable. If Istio is instead fronted by an intermediary that terminates HTTP (e.g., an HTTP load balancer), then that intermediary would protect Istio, assuming the intermediary is not itself vulnerable to the same HTTP/2 exploits.</p> <h2 id="mitigation">Mitigation</h2> <ul> <li>For Istio 1.1.x deployments: update to <a href="/v1.9/news/releases/1.1.x/announcing-1.1.13">Istio 1.1.13</a> or later.</li> <li>For Istio 1.2.x deployments: update to <a href="/v1.9/news/releases/1.2.x/announcing-1.2.4">Istio 1.2.4</a> or later.</li> </ul> <h2 id="reporting-vulnerabilities">Reporting vulnerabilities</h2> <p>We’d like to remind our community to follow the <a href="/v1.9/about/security-vulnerabilities/">vulnerability reporting process</a> to report any bug that can result in a security vulnerability.Tue, 13 Aug 2019 00:00:00 +0000/v1.9/news/security/istio-security-2019-004//v1.9/news/security/istio-security-2019-004/CVEISTIO-SECURITY-2019-003 <table> <thead> <tr> <th colspan="2">Disclosure Details</th> </tr> </thead> <tbody> <tr> <td>CVE(s)</td> <td> <a
href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-14993">CVE-2019-14993</a><br> </td> </tr> <tr> <td>CVSS Impact Score</td> <td>7.5 <a href="https://nvd.nist.gov/vuln-metrics/cvss/v3-calculator?vector=CVSS%3a3.0%2fAV%3aN%2fAC%3aL%2fPR%3aN%2fUI%3aN%2fS%3aU%2fC%3aN%2fI%3aN%2fA%3aH">CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H</a></td> </tr> <tr> <td>Affected Releases</td> <td> 1.1 to 1.1.12<br> 1.2 to 1.2.3<br> </td> </tr> </tbody> </table> <p>An Envoy user publicly reported an issue (c.f. <a href="https://github.com/envoyproxy/envoy/issues/7728">Envoy Issue 7728</a>) about regular expression (or regex) matching that crashes Envoy with very large URIs. After investigation, the Istio team has found that this issue could be leveraged for a DoS attack in Istio, if users are employing regular expressions in some of the Istio APIs: <code>JWT</code>, <code>VirtualService</code>, <code>HTTPAPISpecBinding</code>, <code>QuotaSpecBinding</code>.</p> <h2 id="impact-and-detection">Impact and detection</h2> <p>To detect whether any regular expressions are used in Istio APIs in your cluster, run the following command, which prints one of the following outputs:</p> <ul> <li>YOU ARE AFFECTED: found regex used in <code>AuthenticationPolicy</code> or <code>VirtualService</code></li> <li>YOU ARE NOT AFFECTED: did not find regex usage</li> </ul> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ cat &lt;&lt;&#39;EOF&#39; | bash -
set -e
set -u
set -o pipefail
red=`tput setaf 1`
green=`tput setaf 2`
reset=`tput sgr0`
echo &#34;Checking regex usage in Istio API ...&#34;
AFFECTED=()
JWT_REGEX=()
JWT_REGEX+=($(kubectl get Policy --all-namespaces -o jsonpath=&#39;{..regex}&#39;))
JWT_REGEX+=($(kubectl get MeshPolicy --all-namespaces -o jsonpath=&#39;{..regex}&#39;))
if [ &#34;${#JWT_REGEX[@]}&#34; != 0 ]; then
  AFFECTED+=(&#34;AuthenticationPolicy&#34;)
fi
VS_REGEX=()
VS_REGEX+=($(kubectl get VirtualService --all-namespaces -o jsonpath=&#39;{..regex}&#39;))
if [ &#34;${#VS_REGEX[@]}&#34; != 0 ]; then
  AFFECTED+=(&#34;VirtualService&#34;)
fi
HTTPAPI_REGEX=()
HTTPAPI_REGEX+=($(kubectl get HTTPAPISpec --all-namespaces -o jsonpath=&#39;{..regex}&#39;))
if [ &#34;${#HTTPAPI_REGEX[@]}&#34; != 0 ]; then
  AFFECTED+=(&#34;HTTPAPISpec&#34;)
fi
QUOTA_REGEX=()
QUOTA_REGEX+=($(kubectl get QuotaSpec --all-namespaces -o jsonpath=&#39;{..regex}&#39;))
if [ &#34;${#QUOTA_REGEX[@]}&#34; != 0 ]; then
  AFFECTED+=(&#34;QuotaSpec&#34;)
fi
if [ &#34;${#AFFECTED[@]}&#34; != 0 ]; then
  echo &#34;${red}YOU ARE AFFECTED: found regex used in ${AFFECTED[@]}${reset}&#34;
  exit 1
fi
echo &#34;${green}YOU ARE NOT AFFECTED: did not find regex usage${reset}&#34;
EOF
</code></pre> <h2 id="mitigation">Mitigation</h2> <ul> <li>For Istio 1.1.x deployments: update to <a href="/v1.9/news/releases/1.1.x/announcing-1.1.13">Istio 1.1.13</a> or later.</li> <li>For Istio 1.2.x deployments: update to <a href="/v1.9/news/releases/1.2.x/announcing-1.2.4">Istio 1.2.4</a> or later.</li> </ul> <h2 id="reporting-vulnerabilities">Reporting vulnerabilities</h2> <p>We’d like to remind our community to follow the <a href="/v1.9/about/security-vulnerabilities/">vulnerability reporting process</a> to report any bug that can result in a security vulnerability.Tue, 13 Aug 2019 00:00:00 +0000/v1.9/news/security/istio-security-2019-003//v1.9/news/security/istio-security-2019-003/CVEAnnouncing Istio 1.2.4 <p>We&rsquo;re pleased to announce the availability of Istio 1.2.4.
Please see below for what&rsquo;s changed.</p> <div class="relnote-actions call-to-action"> <a class="entry" href="/v1.9/news/releases/1.2.x/announcing-1.2/upgrade-notes/"> <h5>BEFORE YOU UPGRADE</h5> <p>Things to know and prepare before upgrading.</p> </a> <a class="update-notice entry" data-title='Update Notice' data-downloadhref="https://github.com/istio/istio/releases/tag/1.2.4" data-downloadbuttontext="DOWNLOAD 1.2.4" data-updateadvice='Before you download 1.2.4, you should know that there&#39;s a newer patch release with the latest bug fixes and perf improvements.' data-updatebutton='LEARN ABOUT ISTIO 1.2.10' data-updatehref="/v1.9/news/releases/1.2.x/announcing-1.2.10/"> <h5>DOWNLOAD</h5> <p>Download and install this release.</p> </a> <a class="entry" href="https://archive.istio.io/v1.2/docs"> <h5>DOCS</h5> <p>Visit the documentation for this release.</p> </a> <a class="entry" href="https://github.com/istio/istio/compare/1.2.3...1.2.4"> <h5>SOURCE CHANGES</h5> <p>Inspect the full set of source code changes.</p> </a> </div> <h2 id="security-update">Security update</h2> <p>This release contains fixes for the security vulnerabilities described in <a href="/v1.9/news/security/istio-security-2019-003/">ISTIO-SECURITY-2019-003</a> and <a href="/v1.9/news/security/istio-security-2019-004/">ISTIO-SECURITY-2019-004</a>. Specifically:</p> <p><strong>ISTIO-SECURITY-2019-003</strong>: An Envoy user publicly reported an issue (c.f. <a href="https://github.com/envoyproxy/envoy/issues/7728">Envoy Issue 7728</a>) about regular expression matching that crashes Envoy with very large URIs.
</p> <ul> <li><strong><a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-14993">CVE-2019-14993</a></strong>: After investigation, the Istio team has found that this issue could be leveraged for a DoS attack in Istio, if users are employing regular expressions in some of the Istio APIs: <code>JWT</code>, <code>VirtualService</code>, <code>HTTPAPISpecBinding</code>, <code>QuotaSpecBinding</code>.</li> </ul> <p><strong>ISTIO-SECURITY-2019-004</strong>: Envoy, and subsequently Istio, are vulnerable to a series of trivial HTTP/2-based DoS attacks:</p> <ul> <li><strong><a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-9512">CVE-2019-9512</a></strong>: HTTP/2 flood using <code>PING</code> frames and queuing of response <code>PING</code> ACK frames that results in unbounded memory growth (which can lead to out of memory conditions).</li> <li><strong><a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-9513">CVE-2019-9513</a></strong>: HTTP/2 flood using <code>PRIORITY</code> frames that results in excessive CPU usage and starvation of other clients.</li> <li><strong><a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-9514">CVE-2019-9514</a></strong>: HTTP/2 flood using <code>HEADERS</code> frames with invalid HTTP headers and queuing of response <code>RST_STREAM</code> frames that results in unbounded memory growth (which can lead to out of memory conditions).</li> <li><strong><a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-9515">CVE-2019-9515</a></strong>: HTTP/2 flood using <code>SETTINGS</code> frames and queuing of <code>SETTINGS</code> ACK frames that results in unbounded memory growth (which can lead to out of memory conditions).</li> <li><strong><a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-9518">CVE-2019-9518</a></strong>: HTTP/2 flood using frames with an empty payload that results in excessive CPU usage and starvation of other clients.</li> </ul> <p>Nothing else is included in this release except for the above security fixes.</p>Tue, 13 Aug 2019 00:00:00 +0000/v1.9/news/releases/1.2.x/announcing-1.2.4//v1.9/news/releases/1.2.x/announcing-1.2.4/Announcing Istio 1.1.13 <p>We&rsquo;re pleased to announce the availability of Istio 1.1.13. Please see below for what&rsquo;s changed.</p> <div class="relnote-actions call-to-action"> <a class="entry" href="/v1.9/news/releases/1.1.x/announcing-1.1/upgrade-notes/"> <h5>BEFORE YOU UPGRADE</h5> <p>Things to know and prepare before upgrading.</p> </a> <a class="update-notice entry" data-title='Update Notice' data-downloadhref="https://github.com/istio/istio/releases/tag/1.1.13" data-downloadbuttontext="DOWNLOAD 1.1.13" data-updateadvice='Before you download 1.1.13, you should know that there&#39;s a newer patch release with the latest bug fixes and perf improvements.' data-updatebutton='LEARN ABOUT ISTIO 1.1.17' data-updatehref="/v1.9/news/releases/1.1.x/announcing-1.1.17/"> <h5>DOWNLOAD</h5> <p>Download and install this release.</p> </a> <a class="entry" href="https://archive.istio.io/v1.1/docs"> <h5>DOCS</h5> <p>Visit the documentation for this release.</p> </a> <a class="entry" href="https://github.com/istio/istio/compare/1.1.12...1.1.13"> <h5>SOURCE CHANGES</h5> <p>Inspect the full set of source code changes.</p> </a> </div> <h2 id="security-update">Security update</h2> <p>This release contains fixes for the security vulnerabilities described in <a href="/v1.9/news/security/istio-security-2019-003/">ISTIO-SECURITY-2019-003</a> and <a href="/v1.9/news/security/istio-security-2019-004/">ISTIO-SECURITY-2019-004</a>. Specifically:</p> <p><strong>ISTIO-SECURITY-2019-003</strong>: An Envoy user publicly reported an issue (c.f.
<a href="https://github.com/envoyproxy/envoy/issues/7728">Envoy Issue 7728</a>) about regular expression matching that crashes Envoy with very large URIs.</p> <ul> <li><strong><a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-14993">CVE-2019-14993</a></strong>: After investigation, the Istio team has found that this issue could be leveraged for a DoS attack in Istio, if users are employing regular expressions in some of the Istio APIs: <code>JWT</code>, <code>VirtualService</code>, <code>HTTPAPISpecBinding</code>, <code>QuotaSpecBinding</code>.</li> </ul> <p><strong>ISTIO-SECURITY-2019-004</strong>: Envoy, and subsequently Istio, are vulnerable to a series of trivial HTTP/2-based DoS attacks:</p> <ul> <li><strong><a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-9512">CVE-2019-9512</a></strong>: HTTP/2 flood using <code>PING</code> frames and queuing of response <code>PING</code> ACK frames that results in unbounded memory growth (which can lead to out of memory conditions).</li> <li><strong><a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-9513">CVE-2019-9513</a></strong>: HTTP/2 flood using <code>PRIORITY</code> frames that results in excessive CPU usage and starvation of other clients.</li> <li><strong><a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-9514">CVE-2019-9514</a></strong>: HTTP/2 flood using <code>HEADERS</code> frames with invalid HTTP headers and queuing of response <code>RST_STREAM</code> frames that results in unbounded memory growth (which can lead to out of memory conditions).</li> <li><strong><a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-9515">CVE-2019-9515</a></strong>: HTTP/2 flood using <code>SETTINGS</code> frames and queuing of <code>SETTINGS</code> ACK frames that results in unbounded memory growth (which can lead to out of memory conditions).</li> <li><strong><a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-9518">CVE-2019-9518</a></strong>: HTTP/2 flood using frames with an empty payload that results in excessive CPU usage and starvation of other clients.</li> </ul> <p>Nothing else is included in this release except for the above security fixes.</p>Tue, 13 Aug 2019 00:00:00 +0000/v1.9/news/releases/1.1.x/announcing-1.1.13//v1.9/news/releases/1.1.x/announcing-1.1.13/Announcing Istio 1.2.3 <p>We&rsquo;re pleased to announce the availability of Istio 1.2.3. Please see below for what&rsquo;s changed.</p> <div class="relnote-actions call-to-action"> <a class="entry" href="/v1.9/news/releases/1.2.x/announcing-1.2/upgrade-notes/"> <h5>BEFORE YOU UPGRADE</h5> <p>Things to know and prepare before upgrading.</p> </a> <a class="update-notice entry" data-title='Update Notice' data-downloadhref="https://github.com/istio/istio/releases/tag/1.2.3" data-downloadbuttontext="DOWNLOAD 1.2.3" data-updateadvice='Before you download 1.2.3, you should know that there&#39;s a newer patch release with the latest bug fixes and perf improvements.'
data-updatebutton='LEARN ABOUT ISTIO 1.2.10' data-updatehref="/v1.9/news/releases/1.2.x/announcing-1.2.10/"> <h5>DOWNLOAD</h5> <p>Download and install this release.</p> </a> <a class="entry" href="https://archive.istio.io/v1.2/docs"> <h5>DOCS</h5> <p>Visit the documentation for this release.</p> </a> <a class="entry" href="https://github.com/istio/istio/compare/1.2.2...1.2.3"> <h5>SOURCE CHANGES</h5> <p>Inspect the full set of source code changes.</p> </a> </div> <h2 id="bug-fixes">Bug fixes</h2> <ul> <li>Fix a bug where the sidecar could infinitely forward requests to itself when a pod defines a port that isn&rsquo;t defined for a service (<a href="https://github.com/istio/istio/issues/14443">Issue 14443</a>) and (<a href="https://github.com/istio/istio/issues/14242">Issue 14242</a>)</li> <li>Fix a bug where the Stackdriver adapter shuts down after telemetry is started.</li> <li>Fix Redis connectivity issues.</li> <li>Fix case-sensitivity in regex-based HTTP URI matching for Virtual Service (<a href="https://github.com/istio/istio/issues/14983">Issue 14983</a>)</li> <li>Fix HPA and CPU settings for demo profile (<a href="https://github.com/istio/istio/issues/15338">Issue 15338</a>)</li> <li>Relax Keep-Alive enforcement policy to avoid dropping connections under load (<a href="https://github.com/istio/istio/issues/15088">Issue 15088</a>)</li> <li>When SDS is not used, skip Kubernetes JWT authentication to mitigate the risk of compromised (untrustworthy) JWTs being used.</li> </ul> <h2 id="tests-upgrade">Tests upgrade</h2> <ul> <li>Update base image version for Bookinfo reviews sample app (<a href="https://github.com/istio/istio/issues/15477">Issue 15477</a>)</li> <li>Bookinfo samples image qualification (<a href="https://github.com/istio/istio/issues/14237">Issue 14237</a>)</li> </ul>Fri, 02 Aug 2019 00:00:00 +0000/v1.9/news/releases/1.2.x/announcing-1.2.3//v1.9/news/releases/1.2.x/announcing-1.2.3/Announcing Istio 1.1.12 <p>We&rsquo;re pleased to announce the availability of Istio
1.1.12. Please see below for what&rsquo;s changed.</p> <div class="relnote-actions call-to-action"> <a class="entry" href="/v1.9/news/releases/1.1.x/announcing-1.1/upgrade-notes/"> <h5>BEFORE YOU UPGRADE</h5> <p>Things to know and prepare before upgrading.</p> </a> <a class="update-notice entry" data-title='Update Notice' data-downloadhref="https://github.com/istio/istio/releases/tag/1.1.12" data-downloadbuttontext="DOWNLOAD 1.1.12" data-updateadvice='Before you download 1.1.12, you should know that there&#39;s a newer patch release with the latest bug fixes and perf improvements.' data-updatebutton='LEARN ABOUT ISTIO 1.1.17' data-updatehref="/v1.9/news/releases/1.1.x/announcing-1.1.17/"> <h5>DOWNLOAD</h5> <p>Download and install this release.</p> </a> <a class="entry" href="https://archive.istio.io/v1.1/docs"> <h5>DOCS</h5> <p>Visit the documentation for this release.</p> </a> <a class="entry" href="https://github.com/istio/istio/compare/1.1.11...1.1.12"> <h5>SOURCE CHANGES</h5> <p>Inspect the full set of source code changes.</p> </a> </div> <h2 id="bug-fixes">Bug fixes</h2> <ul> <li>Fix a bug where the sidecar could infinitely forward requests to itself when a <code>Pod</code> resource defines a port that isn&rsquo;t defined for a service (<a href="https://github.com/istio/istio/issues/14443">Issue 14443</a>) and (<a href="https://github.com/istio/istio/issues/14242">Issue 14242</a>)</li> </ul>Fri, 02 Aug 2019 00:00:00 +0000/v1.9/news/releases/1.1.x/announcing-1.1.12//v1.9/news/releases/1.1.x/announcing-1.1.12/Announcing Istio 1.1.11 <p>We&rsquo;re pleased to announce the availability of Istio 1.1.11. 
Please see below for what&rsquo;s changed.</p> <div class="relnote-actions call-to-action"> <a class="entry" href="/v1.9/news/releases/1.1.x/announcing-1.1/upgrade-notes/"> <h5>BEFORE YOU UPGRADE</h5> <p>Things to know and prepare before upgrading.</p> </a> <a class="update-notice entry" data-title='Update Notice' data-downloadhref="https://github.com/istio/istio/releases/tag/1.1.11" data-downloadbuttontext="DOWNLOAD 1.1.11" data-updateadvice='Before you download 1.1.11, you should know that there&#39;s a newer patch release with the latest bug fixes and perf improvements.' data-updatebutton='LEARN ABOUT ISTIO 1.1.17' data-updatehref="/v1.9/news/releases/1.1.x/announcing-1.1.17/"> <h5>DOWNLOAD</h5> <p>Download and install this release.</p> </a> <a class="entry" href="https://archive.istio.io/v1.1/docs"> <h5>DOCS</h5> <p>Visit the documentation for this release.</p> </a> <a class="entry" href="https://github.com/istio/istio/compare/1.1.10...1.1.11"> <h5>SOURCE CHANGES</h5> <p>Inspect the full set of source code changes.</p> </a> </div> <h2 id="small-enhancements">Small enhancements</h2> <ul> <li>Add ability to enable <code>HTTP/1.0</code> support in ingress gateway (<a href="https://github.com/istio/istio/issues/13085">Issue 13085</a>).</li> </ul>Wed, 03 Jul 2019 00:00:00 +0000/v1.9/news/releases/1.1.x/announcing-1.1.11//v1.9/news/releases/1.1.x/announcing-1.1.11/ISTIO-SECURITY-2019-002 <table> <thead> <tr> <th colspan="2">Disclosure Details</th> </tr> </thead> <tbody> <tr> <td>CVE(s)</td> <td> <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-12995">CVE-2019-12995</a><br> </td> </tr> <tr> <td>CVSS Impact Score</td> <td>7.5 <a href="https://nvd.nist.gov/vuln-metrics/cvss/v3-calculator?vector=CVSS%3a3.0%2fAV%3aN%2fAC%3aL%2fPR%3aN%2fUI%3aN%2fS%3aU%2fC%3aN%2fI%3aN%2fA%3aH%2fE%3aF%2fRL%3aO%2fRC%3aC">CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H/E:F/RL:O/RC:C</a></td> </tr> <tr> <td>Affected Releases</td> <td> 1.0 to 1.0.8<br> 1.1 to 1.1.9<br> 1.2 to 
1.2.1<br> </td> </tr> </tbody> </table> <p>A bug in Istio’s JWT validation filter causes Envoy to crash in certain cases when the request contains a malformed JWT token. The bug was discovered and reported by a user <a href="https://github.com/istio/istio/issues/15084">on GitHub</a> on June 23, 2019.</p> <p>This bug affects all versions of Istio that are using the JWT authentication policy.</p> <p>The symptoms of the bug are an HTTP 503 error seen by the client, and</p> <pre><code class='language-plain' data-expandlinks='true' data-repo='istio' >Epoch 0 terminated with an error: signal: segmentation fault (core dumped) </code></pre> <p>in the Envoy logs.</p> <p>The Envoy crash can be triggered using a malformed JWT without a valid signature, on any URI being accessed, regardless of the <code>trigger_rules</code> in the JWT specification. Thus, this bug makes Envoy vulnerable to a potential DoS attack.</p> <h2 id="impact-and-detection">Impact and detection</h2> <p>Envoy is vulnerable if the following two conditions are satisfied:</p> <ul> <li>A JWT authentication policy is applied to it.</li> <li>The JWT issuer (specified by <code>jwksUri</code>) uses the RSA algorithm for signature verification.</li> </ul> <div> <aside class="callout tip"> <div class="type"><svg class="large-icon"><use xlink:href="/v1.9/img/icons.svg#callout-tip"/></svg></div> <div class="content">The RSA algorithm used for signature verification does not contain any known security vulnerability. This CVE is triggered only when that algorithm is used, but it is unrelated to the security of the algorithm itself.</div> </aside> </div> <p>If a JWT policy is applied to the Istio ingress gateway, please be aware that any external user who has access to the ingress gateway could crash it with a single HTTP request.</p> <p>If a JWT policy is applied to the sidecar only, please keep in mind that it might still be vulnerable.
For example, the Istio ingress gateway might forward a malformed JWT token to the sidecar, which would crash the sidecar.</p> <p>A vulnerable Envoy will crash on an HTTP request with a malformed JWT token. When Envoy crashes, all existing connections are disconnected immediately. The <code>pilot-agent</code> will restart the crashed Envoy automatically, and the restart may take a few seconds to a few minutes. The <code>pilot-agent</code> will stop restarting Envoy after it has crashed more than ten times; in that case, Kubernetes will redeploy the pod, including the workload behind Envoy.</p> <p>To detect whether any JWT authentication policy is applied in your cluster, run the following command, which prints one of the following outputs:</p> <ul> <li>Found JWT in authentication policy, <strong>YOU ARE AFFECTED</strong></li> <li>Did NOT find JWT in authentication policy, <em>YOU ARE NOT AFFECTED</em></li> </ul> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ cat &lt;&lt;&#39;EOF&#39; | bash -
set -e
set -u
set -o pipefail
red=`tput setaf 1`
green=`tput setaf 2`
reset=`tput sgr0`
echo &#34;Checking authentication policy...&#34;
JWKS_URI=()
JWKS_URI+=($(kubectl get policy --all-namespaces -o jsonpath=&#39;{range .items[*]}{.spec.origins[*].jwt.jwksUri}{&#34; &#34;}{end}&#39;))
JWKS_URI+=($(kubectl get meshpolicy --all-namespaces -o jsonpath=&#39;{range .items[*]}{.spec.origins[*].jwt.jwksUri}{&#34; &#34;}{end}&#39;))
if [ &#34;${#JWKS_URI[@]}&#34; != 0 ]; then
  echo &#34;${red}Found JWT in authentication policy, YOU ARE AFFECTED${reset}&#34;
  exit 1
fi
echo &#34;${green}Did NOT find JWT in authentication policy, YOU ARE NOT AFFECTED${reset}&#34;
EOF
</code></pre> <h2 id="mitigation">Mitigation</h2> <p>This bug is fixed in the following Istio releases:</p> <ul> <li>For Istio 1.0.x deployments: update to <a href="/v1.9/news/releases/1.0.x/announcing-1.0.9">Istio 1.0.9</a> or later.</li> <li>For Istio 1.1.x deployments: update to
<a href="/v1.9/news/releases/1.1.x/announcing-1.1.10">Istio 1.1.10</a> or later.</li> <li>For Istio 1.2.x deployments: update to <a href="/v1.9/news/releases/1.2.x/announcing-1.2.2">Istio 1.2.2</a> or later.</li> </ul> <p>If you cannot immediately upgrade to one of these releases, you have the additional option of injecting a <a href="https://github.com/istio/tools/tree/master/examples/luacheck">Lua filter</a> into older releases of Istio. This filter has been verified to work with Istio 1.1.9, 1.0.8, 1.0.6, and 1.1.3.</p> <p>The Lua filter is injected <em>before</em> the Istio <code>jwt-auth</code> filter. If a JWT token is presented on an HTTP request, the <code>Lua</code> filter will check if the JWT token header contains <code>alg:ES256</code>. If the filter finds such a JWT token, the request is rejected.</p> <p>To install the Lua filter, please invoke the following commands:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ git clone git@github.com:istio/tools.git
$ cd tools/examples/luacheck/
$ ./setup.sh
</code></pre> <p>The setup script uses <code>helm template</code> to produce an <code>envoyFilter</code> resource that deploys to gateways. You may change the listener type to <code>ANY</code> to also apply it to sidecars. You should only do this if you enforce JWT policies on sidecars <em>and</em> sidecars receive direct traffic from the outside.</p> <h2 id="credit">Credit</h2> <p>The Istio team would like to thank Divya Raj for the original bug report.</p> <h2 id="reporting-vulnerabilities">Reporting vulnerabilities</h2> <p>We’d like to remind our community to follow the <a href="/v1.9/about/security-vulnerabilities/">vulnerability reporting process</a> to report any bug that can result in a security vulnerability.</p>Fri, 28 Jun 2019 00:00:00 +0000/v1.9/news/security/istio-security-2019-002//v1.9/news/security/istio-security-2019-002/CVEAnnouncing Istio 1.2.2 <p>We&rsquo;re pleased to announce the availability of Istio 1.2.2.
Please see below for what&rsquo;s changed.</p> <div class="relnote-actions call-to-action"> <a class="entry" href="/v1.9/news/releases/1.2.x/announcing-1.2/upgrade-notes/"> <h5>BEFORE YOU UPGRADE</h5> <p>Things to know and prepare before upgrading.</p> </a> <a class="update-notice entry" data-title='Update Notice' data-downloadhref="https://github.com/istio/istio/releases/tag/1.2.2" data-downloadbuttontext="DOWNLOAD 1.2.2" data-updateadvice='Before you download 1.2.2, you should know that there&#39;s a newer patch release with the latest bug fixes and perf improvements.' data-updatebutton='LEARN ABOUT ISTIO 1.2.10' data-updatehref="/v1.9/news/releases/1.2.x/announcing-1.2.10/"> <h5>DOWNLOAD</h5> <p>Download and install this release.</p> </a> <a class="entry" href="https://archive.istio.io/v1.2/docs"> <h5>DOCS</h5> <p>Visit the documentation for this release.</p> </a> <a class="entry" href="https://github.com/istio/istio/compare/1.2.1...1.2.2"> <h5>SOURCE CHANGES</h5> <p>Inspect the full set of source code changes.</p> </a> </div> <h2 id="bug-fixes">Bug fixes</h2> <ul> <li>Fix crash in Istio&rsquo;s JWT Envoy filter caused by malformed JWT (<a href="https://github.com/istio/istio/issues/15084">Issue 15084</a>)</li> <li>Fix incorrect overwrite of x-forwarded-proto header (<a href="https://github.com/istio/istio/issues/15124">Issue 15124</a>)</li> </ul>Fri, 28 Jun 2019 00:00:00 +0000/v1.9/news/releases/1.2.x/announcing-1.2.2//v1.9/news/releases/1.2.x/announcing-1.2.2/Announcing Istio 1.1.10 <p>We&rsquo;re pleased to announce the availability of Istio 1.1.10. 
Please see below for what&rsquo;s changed.</p> <div class="relnote-actions call-to-action"> <a class="entry" href="/v1.9/news/releases/1.1.x/announcing-1.1/upgrade-notes/"> <h5>BEFORE YOU UPGRADE</h5> <p>Things to know and prepare before upgrading.</p> </a> <a class="update-notice entry" data-title='Update Notice' data-downloadhref="https://github.com/istio/istio/releases/tag/1.1.10" data-downloadbuttontext="DOWNLOAD 1.1.10" data-updateadvice='Before you download 1.1.10, you should know that there&#39;s a newer patch release with the latest bug fixes and perf improvements.' data-updatebutton='LEARN ABOUT ISTIO 1.1.17' data-updatehref="/v1.9/news/releases/1.1.x/announcing-1.1.17/"> <h5>DOWNLOAD</h5> <p>Download and install this release.</p> </a> <a class="entry" href="https://archive.istio.io/v1.1/docs"> <h5>DOCS</h5> <p>Visit the documentation for this release.</p> </a> <a class="entry" href="https://github.com/istio/istio/compare/1.1.9...1.1.10"> <h5>SOURCE CHANGES</h5> <p>Inspect the full set of source code changes.</p> </a> </div> <h2 id="bug-fixes">Bug fixes</h2> <ul> <li>Eliminate 503 errors caused by Envoy not being able to talk to the SDS Node Agent after a restart (<a href="https://github.com/istio/istio/issues/14853">Issue 14853</a>).</li> <li>Fix cause of &lsquo;TLS error: Secret is not supplied by SDS&rsquo; errors during upgrade (<a href="https://github.com/istio/istio/issues/15020">Issue 15020</a>).</li> <li>Fix crash in Istio&rsquo;s JWT Envoy filter caused by malformed JWT (<a href="https://github.com/istio/istio/issues/15084">Issue 15084</a>).</li> </ul>Fri, 28 Jun 2019 00:00:00 +0000/v1.9/news/releases/1.1.x/announcing-1.1.10//v1.9/news/releases/1.1.x/announcing-1.1.10/Announcing Istio 1.0.9 <p>We&rsquo;re pleased to announce the availability of Istio 1.0.9. 
Please see below for what&rsquo;s changed.</p> <div class="relnote-actions call-to-action"> <a class="entry" href="https://github.com/istio/istio/releases/tag/1.0.9"> <h5>DOWNLOAD</h5> <p>Download and install this release.</p> </a> <a class="entry" href="https://archive.istio.io/v1.0/docs"> <h5>DOCS</h5> <p>Visit the documentation for this release.</p> </a> <a class="entry" href="https://github.com/istio/istio/compare/1.0.8...1.0.9"> <h5>SOURCE CHANGES</h5> <p>Inspect the full set of source code changes.</p> </a> </div> <h2 id="bug-fixes">Bug fixes</h2> <ul> <li>Fix crash in Istio&rsquo;s JWT Envoy filter caused by malformed JWT (<a href="https://github.com/istio/istio/issues/15084">Issue 15084</a>).</li> </ul>Fri, 28 Jun 2019 00:00:00 +0000/v1.9/news/releases/1.0.x/announcing-1.0.9//v1.9/news/releases/1.0.x/announcing-1.0.9/Announcing Istio 1.2.1 <p>We&rsquo;re pleased to announce the availability of Istio 1.2.1. Please see below for what&rsquo;s changed.</p> <div class="relnote-actions call-to-action"> <a class="entry" href="/v1.9/news/releases/1.2.x/announcing-1.2/upgrade-notes/"> <h5>BEFORE YOU UPGRADE</h5> <p>Things to know and prepare before upgrading.</p> </a> <a class="update-notice entry" data-title='Update Notice' data-downloadhref="https://github.com/istio/istio/releases/tag/1.2.1" data-downloadbuttontext="DOWNLOAD 1.2.1" data-updateadvice='Before you download 1.2.1, you should know that there&#39;s a newer patch release with the latest bug fixes and perf improvements.' 
data-updatebutton='LEARN ABOUT ISTIO 1.2.10' data-updatehref="/v1.9/news/releases/1.2.x/announcing-1.2.10/"> <h5>DOWNLOAD</h5> <p>Download and install this release.</p> </a> <a class="entry" href="https://archive.istio.io/v1.2/docs"> <h5>DOCS</h5> <p>Visit the documentation for this release.</p> </a> <a class="entry" href="https://github.com/istio/istio/compare/1.2.0...1.2.1"> <h5>SOURCE CHANGES</h5> <p>Inspect the full set of source code changes.</p> </a> </div> <h2 id="bug-fixes">Bug fixes</h2> <ul> <li>Fix duplicate CRD being generated in the install (<a href="https://github.com/istio/istio/issues/14976">Issue 14976</a>)</li> <li>Fix Mixer being unable to start when Galley is disabled (<a href="https://github.com/istio/istio/issues/14841">Issue 14841</a>)</li> <li>Fix environment variable shadowing, where <code>NAMESPACE</code> is used for the namespaces to listen on and overwrites the Citadel storage namespace (<code>istio-system</code>)</li> <li>Fix cause of &lsquo;TLS error: Secret is not supplied by SDS&rsquo; errors during upgrade (<a href="https://github.com/istio/istio/issues/15020">Issue 15020</a>)</li> </ul> <h2 id="minor-enhancements">Minor enhancements</h2> <ul> <li>Allow users to disable Istio default retries by setting retries to 0 (<a href="https://github.com/istio/istio/issues/14900">Issue 14900</a>)</li> <li>Introduction of a Redis filter (this feature is guarded with the environment feature flag <code>PILOT_ENABLE_REDIS_FILTER</code>, disabled by default)</li> <li>Add HTTP/1.0 support to gateway configuration generation (<a href="https://github.com/istio/istio/issues/13085">Issue 13085</a>)</li> <li>Add <a href="https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/">toleration</a> for Istio components (<a href="https://github.com/istio/istio/pull/15081">Pull Request 15081</a>)</li> </ul>Thu, 27 Jun 2019 00:00:00 +0000/v1.9/news/releases/1.2.x/announcing-1.2.1//v1.9/news/releases/1.2.x/announcing-1.2.1/Support for Istio 1.0 has ended<p>As <a
href="/v1.9/news/support/announcing-1.0-eol/">previously announced</a>, support for Istio 1.0 has now officially ended.</p> <p>We will no longer back-port fixes for security issues and critical bugs to 1.0, so we encourage you to upgrade to the latest version of Istio (1.9.5) if you haven&rsquo;t already.</p>Wed, 19 Jun 2019 00:00:00 +0000/v1.9/news/support/announcing-1.0-eol-final//v1.9/news/support/announcing-1.0-eol-final/Announcing Istio 1.1.9 <p>We&rsquo;re pleased to announce the availability of Istio 1.1.9. Please see below for what&rsquo;s changed.</p> <div class="relnote-actions call-to-action"> <a class="entry" href="/v1.9/news/releases/1.1.x/announcing-1.1/upgrade-notes/"> <h5>BEFORE YOU UPGRADE</h5> <p>Things to know and prepare before upgrading.</p> </a> <a class="update-notice entry" data-title='Update Notice' data-downloadhref="https://github.com/istio/istio/releases/tag/1.1.9" data-downloadbuttontext="DOWNLOAD 1.1.9" data-updateadvice='Before you download 1.1.9, you should know that there&#39;s a newer patch release with the latest bug fixes and perf improvements.' 
data-updatebutton='LEARN ABOUT ISTIO 1.1.17' data-updatehref="/v1.9/news/releases/1.1.x/announcing-1.1.17/"> <h5>DOWNLOAD</h5> <p>Download and install this release.</p> </a> <a class="entry" href="https://archive.istio.io/v1.1/docs"> <h5>DOCS</h5> <p>Visit the documentation for this release.</p> </a> <a class="entry" href="https://github.com/istio/istio/compare/1.1.8...1.1.9"> <h5>SOURCE CHANGES</h5> <p>Inspect the full set of source code changes.</p> </a> </div> <h2 id="bug-fixes">Bug fixes</h2> <ul> <li>Prevent overly large strings from being sent to Prometheus (<a href="https://github.com/istio/istio/issues/14642">Issue 14642</a>).</li> <li>Reuse previously cached JWT public keys if transport errors are encountered during renewal (<a href="https://github.com/istio/istio/issues/14638">Issue 14638</a>).</li> <li>Bypass JWT authentication for HTTP OPTIONS methods to support CORS requests.</li> <li>Fix Envoy crash caused by the Mixer filter (<a href="https://github.com/istio/istio/issues/14707">Issue 14707</a>).</li> </ul> <h2 id="small-enhancements">Small enhancements</h2> <ul> <li>Expose cryptographic signature verification functions to <code>Lua</code> Envoy filters (<a href="https://github.com/envoyproxy/envoy/issues/7009">Envoy Issue 7009</a>).</li> </ul>Mon, 17 Jun 2019 00:00:00 +0000/v1.9/news/releases/1.1.x/announcing-1.1.9//v1.9/news/releases/1.1.x/announcing-1.1.9/Announcing Istio 1.0.8 <p>We&rsquo;re pleased to announce the availability of Istio 1.0.8. Please see below for what&rsquo;s changed.</p> <div class="relnote-actions call-to-action"> <a class="update-notice entry" data-title='Update Notice' data-downloadhref="https://github.com/istio/istio/releases/tag/1.0.8" data-downloadbuttontext="DOWNLOAD 1.0.8" data-updateadvice='Before you download 1.0.8, you should know that there&#39;s a newer patch release with the latest bug fixes and perf improvements.' 
data-updatebutton='LEARN ABOUT ISTIO 1.0.9' data-updatehref="/v1.9/news/releases/1.0.x/announcing-1.0.9/"> <h5>DOWNLOAD</h5> <p>Download and install this release.</p> </a> <a class="entry" href="https://archive.istio.io/v1.0/docs"> <h5>DOCS</h5> <p>Visit the documentation for this release.</p> </a> <a class="entry" href="https://github.com/istio/istio/compare/1.0.7...1.0.8"> <h5>SOURCE CHANGES</h5> <p>Inspect the full set of source code changes.</p> </a> </div> <h2 id="bug-fixes">Bug fixes</h2> <ul> <li>Fix issue where Citadel could generate a new root CA if it cannot contact the Kubernetes API server, causing mutual TLS verification to incorrectly fail (<a href="https://github.com/istio/istio/issues/14512">Issue 14512</a>).</li> </ul> <h2 id="small-enhancements">Small enhancements</h2> <ul> <li>Update Citadel&rsquo;s default root CA certificate TTL from 1 year to 10 years.</li> </ul>Fri, 07 Jun 2019 00:00:00 +0000/v1.9/news/releases/1.0.x/announcing-1.0.8//v1.9/news/releases/1.0.x/announcing-1.0.8/Announcing Istio 1.1.8 <p>We&rsquo;re pleased to announce the availability of Istio 1.1.8. Please see below for what&rsquo;s changed.</p> <div class="relnote-actions call-to-action"> <a class="entry" href="/v1.9/news/releases/1.1.x/announcing-1.1/upgrade-notes/"> <h5>BEFORE YOU UPGRADE</h5> <p>Things to know and prepare before upgrading.</p> </a> <a class="update-notice entry" data-title='Update Notice' data-downloadhref="https://github.com/istio/istio/releases/tag/1.1.8" data-downloadbuttontext="DOWNLOAD 1.1.8" data-updateadvice='Before you download 1.1.8, you should know that there&#39;s a newer patch release with the latest bug fixes and perf improvements.' 
data-updatebutton='LEARN ABOUT ISTIO 1.1.17' data-updatehref="/v1.9/news/releases/1.1.x/announcing-1.1.17/"> <h5>DOWNLOAD</h5> <p>Download and install this release.</p> </a> <a class="entry" href="https://archive.istio.io/v1.1/docs"> <h5>DOCS</h5> <p>Visit the documentation for this release.</p> </a> <a class="entry" href="https://github.com/istio/istio/compare/1.1.7...1.1.8"> <h5>SOURCE CHANGES</h5> <p>Inspect the full set of source code changes.</p> </a> </div> <h2 id="bug-fixes">Bug fixes</h2> <ul> <li>Fix <code>PASSTHROUGH DestinationRules</code> for CDS clusters (<a href="https://github.com/istio/istio/issues/13744">Issue 13744</a>).</li> <li>Make the <code>appVersion</code> and <code>version</code> fields in the Helm charts display the correct Istio version (<a href="https://github.com/istio/istio/issues/14290">Issue 14290</a>).</li> <li>Fix Mixer crash affecting both policy and telemetry servers (<a href="https://github.com/istio/istio/issues/14235">Issue 14235</a>).</li> <li>Fix multicluster issue where two pods in different clusters could not share the same IP address (<a href="https://github.com/istio/istio/issues/14066">Issue 14066</a>).</li> <li>Fix issue where Citadel could generate a new root CA if it cannot contact the Kubernetes API server, causing mutual TLS verification to incorrectly fail (<a href="https://github.com/istio/istio/issues/14512">Issue 14512</a>).</li> <li>Improve Pilot validation to reject different <code>VirtualServices</code> with the same domain since Envoy will not accept them (<a href="https://github.com/istio/istio/issues/13267">Issue 13267</a>).</li> <li>Fix locality load balancing issue where only one replica in a locality would receive traffic (<a href="https://github.com/istio/istio/issues/13994">13994</a>).</li> <li>Fix issue where Pilot Agent might not notice a TLS certificate rotation (<a href="https://github.com/istio/istio/issues/14539">Issue 14539</a>).</li> <li>Fix a <code>LuaJIT</code> panic in Envoy (<a 
href="https://github.com/envoyproxy/envoy/pull/6994">Envoy Issue 6994</a>).</li> <li>Fix a race condition where Envoy might reuse an HTTP/1.1 connection after the downstream peer had already closed the TCP connection, causing 503 errors and retries (<a href="https://github.com/istio/istio/issues/14037">Issue 14037</a>).</li> <li>Fix a tracing issue in Mixer&rsquo;s Zipkin adapter causing missing spans (<a href="https://github.com/istio/istio/issues/13391">Issue 13391</a>).</li> </ul> <h2 id="small-enhancements">Small enhancements</h2> <ul> <li>Reduce Pilot log spam by logging the <code>the endpoints within network ... will be ignored for no network configured</code> message at <code>DEBUG</code>.</li> <li>Make it easier to roll back by making <code>pilot-agent</code> ignore unknown flags.</li> <li>Update Citadel&rsquo;s default root CA certificate TTL from 1 year to 10 years.</li> </ul>Thu, 06 Jun 2019 00:00:00 +0000/v1.9/news/releases/1.1.x/announcing-1.1.8//v1.9/news/releases/1.1.x/announcing-1.1.8/ISTIO-SECURITY-2019-001 <table> <thead> <tr> <th colspan="2">Disclosure Details</th> </tr> </thead> <tbody> <tr> <td>CVE(s)</td> <td> <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-12243">CVE-2019-12243</a><br> </td> </tr> <tr> <td>CVSS Impact Score</td> <td>8.9 <a href="https://nvd.nist.gov/vuln-metrics/cvss/v3-calculator?vector=CVSS%3a3.0%2fAV%3aA%2fAC%3aL%2fPR%3aN%2fUI%3aN%2fS%3aC%2fC%3aH%2fI%3aH%2fA%3aN%2fE%3aH%2fRL%3aO%2fRC%3aC">CVSS:3.0/AV:A/AC:L/PR:N/UI:N/S:C/C:H/I:H/A:N/E:H/RL:O/RC:C</a></td> </tr> <tr> <td>Affected Releases</td> <td> 1.1 to 1.1.6<br> </td> </tr> </tbody> </table> <p>During review of the <a href="/v1.9/news/releases/1.1.x/announcing-1.1.7">Istio 1.1.7</a> release notes, we realized that <a href="https://github.com/istio/istio/issues/13868">issue 13868</a>, which is fixed in the release, actually represents a security vulnerability.</p> <p>Initially we thought the bug was impacting the <a
href="/v1.9/about/feature-stages/#security-and-policy-enforcement">TCP Authorization</a> feature, advertised as alpha stability, which would not have required invoking this security advisory process, but we later realized that the <a href="https://istio.io/v1.6/docs/reference/config/policy-and-telemetry/adapters/denier/">Deny Checker</a> and <a href="https://istio.io/v1.6/docs/reference/config/policy-and-telemetry/adapters/list/">List Checker</a> features were affected, and those are considered stable features. We are revisiting our processes to flag vulnerabilities that are initially reported as bugs instead of through the <a href="/v1.9/about/security-vulnerabilities/">private disclosure process</a>.</p> <p>We tracked the bug to a code change introduced in Istio 1.1 and affecting all releases up to 1.1.6.</p> <h2 id="impact-and-detection">Impact and detection</h2> <p>Since Istio 1.1, policy enforcement has been disabled by default in the default Istio installation profile.</p> <p>You can check the status of policy enforcement for your mesh with the following command:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl -n istio-system get cm istio -o jsonpath=&#34;{@.data.mesh}&#34; | grep disablePolicyChecks
disablePolicyChecks: true
</code></pre> <p>You are not impacted by this vulnerability if <code>disablePolicyChecks</code> is set to true.</p> <p>You are impacted by this vulnerability if the following conditions are all true:</p> <ul> <li>You are running one of the affected Istio releases.</li> <li><code>disablePolicyChecks</code> is set to false (follow the steps mentioned above to check).</li> <li>Your workload is NOT using HTTP, HTTP/2, or gRPC protocols.</li> <li>A Mixer adapter (e.g., Deny Checker, List Checker) is used to provide authorization for your backend TCP service.</li> </ul> <h2 id="mitigation">Mitigation</h2> <ul> <li>Users of Istio 1.0.x are not affected.</li> <li>For Istio 1.1.x deployments: update to <a
href="/v1.9/news/releases/1.1.x/announcing-1.1.7">Istio 1.1.7</a> or later.</li> </ul> <h2 id="credit">Credit</h2> <p>The Istio team would like to thank <code>Haim Helman</code> for the original bug report.</p> <h2 id="reporting-vulnerabilities">Reporting vulnerabilities</h2> <p>We’d like to remind our community to follow the <a href="/v1.9/about/security-vulnerabilities/">vulnerability reporting process</a> to report any bug that can result in a security vulnerability.</p>Tue, 28 May 2019 00:00:00 +0000/v1.9/news/security/istio-security-2019-001//v1.9/news/security/istio-security-2019-001/CVESupport for Istio 1.0 ends on June 19th, 2019<p>According to Istio&rsquo;s <a href="/v1.9/about/supported-releases#supported-releases">support policy</a>, LTS releases like 1.0 are supported for three months after the next LTS release. Since <a href="/v1.9/news/releases/1.1.x/announcing-1.1/">1.1 was released on March 19th</a>, support for 1.0 will end on June 19th, 2019.</p> <p>At that point we will stop back-porting fixes for security issues and critical bugs to 1.0, so we encourage you to upgrade to the latest version of Istio (1.9.5). If you don&rsquo;t do this you may put yourself in the position of having to do a major upgrade on a short timeframe to pick up a critical fix.</p> <p>We care about you and your clusters, so please be kind to yourself and upgrade.</p>Thu, 23 May 2019 00:00:00 +0000/v1.9/news/support/announcing-1.0-eol//v1.9/news/support/announcing-1.0-eol/Announcing Istio 1.1.7 <p>We&rsquo;re pleased to announce the availability of Istio 1.1.7. 
Please see below for what&rsquo;s changed.</p> <div class="relnote-actions call-to-action"> <a class="entry" href="/v1.9/news/releases/1.1.x/announcing-1.1/upgrade-notes/"> <h5>BEFORE YOU UPGRADE</h5> <p>Things to know and prepare before upgrading.</p> </a> <a class="update-notice entry" data-title='Update Notice' data-downloadhref="https://github.com/istio/istio/releases/tag/1.1.7" data-downloadbuttontext="DOWNLOAD 1.1.7" data-updateadvice='Before you download 1.1.7, you should know that there&#39;s a newer patch release with the latest bug fixes and perf improvements.' data-updatebutton='LEARN ABOUT ISTIO 1.1.17' data-updatehref="/v1.9/news/releases/1.1.x/announcing-1.1.17/"> <h5>DOWNLOAD</h5> <p>Download and install this release.</p> </a> <a class="entry" href="https://archive.istio.io/v1.1/docs"> <h5>DOCS</h5> <p>Visit the documentation for this release.</p> </a> <a class="entry" href="https://github.com/istio/istio/compare/1.1.6...1.1.7"> <h5>SOURCE CHANGES</h5> <p>Inspect the full set of source code changes.</p> </a> </div> <h2 id="security-update">Security update</h2> <p>This release fixes <a href="/v1.9/news/security/istio-security-2019-001">CVE 2019-12243</a>.</p> <h2 id="bug-fixes">Bug fixes</h2> <ul> <li>Fix issue where two gateways with overlapping hosts, created at the same second, can cause Pilot to fail to generate routes correctly and lead to Envoy listeners stuck indefinitely at startup in a warming state.</li> <li>Improve the robustness of the SDS node agent: if Envoy sends a SDS request with an empty <code>ResourceNames</code>, ignore it and wait for the next request instead of closing the connection (<a href="https://github.com/istio/istio/issues/13853">Issue 13853</a>).</li> <li>In prior releases Pilot automatically injected the experimental <code>envoy.filters.network.mysql_proxy</code> filter into the outbound filter chain if the service port name is <code>mysql</code>. 
This was surprising and caused issues for some operators, so Pilot will now automatically inject the <code>envoy.filters.network.mysql_proxy</code> filter only if the <code>PILOT_ENABLE_MYSQL_FILTER</code> environment variable is set to <code>1</code> (<a href="https://github.com/istio/istio/issues/13998">Issue 13998</a>).</li> <li>Fix issue where Mixer policy checks were incorrectly disabled for TCP (<a href="https://github.com/istio/istio/issues/13868">Issue 13868</a>).</li> </ul> <h2 id="small-enhancements">Small enhancements</h2> <ul> <li>Add <code>--applicationPorts</code> option to the <code>ingressgateway</code> Helm charts. When set to a comma-delimited list of ports, readiness checks will fail until all the ports become active. When configured, traffic will not be sent to Envoys stuck in the warming state.</li> <li>Increase memory limit in the <code>ingressgateway</code> Helm chart to 1GB and add resource <code>request</code> and <code>limits</code> to the SDS node agent container to support HPA autoscaling.</li> </ul>Fri, 17 May 2019 00:00:00 +0000/v1.9/news/releases/1.1.x/announcing-1.1.7//v1.9/news/releases/1.1.x/announcing-1.1.7/Announcing Istio 1.1.6 <p>We&rsquo;re pleased to announce the availability of Istio 1.1.6. Please see below for what&rsquo;s changed.</p> <div class="relnote-actions call-to-action"> <a class="entry" href="/v1.9/news/releases/1.1.x/announcing-1.1/upgrade-notes/"> <h5>BEFORE YOU UPGRADE</h5> <p>Things to know and prepare before upgrading.</p> </a> <a class="update-notice entry" data-title='Update Notice' data-downloadhref="https://github.com/istio/istio/releases/tag/1.1.6" data-downloadbuttontext="DOWNLOAD 1.1.6" data-updateadvice='Before you download 1.1.6, you should know that there&#39;s a newer patch release with the latest bug fixes and perf improvements.' 
data-updatebutton='LEARN ABOUT ISTIO 1.1.17' data-updatehref="/v1.9/news/releases/1.1.x/announcing-1.1.17/"> <h5>DOWNLOAD</h5> <p>Download and install this release.</p> </a> <a class="entry" href="https://archive.istio.io/v1.1/docs"> <h5>DOCS</h5> <p>Visit the documentation for this release.</p> </a> <a class="entry" href="https://github.com/istio/istio/compare/1.1.5...1.1.6"> <h5>SOURCE CHANGES</h5> <p>Inspect the full set of source code changes.</p> </a> </div> <h2 id="bug-fixes">Bug fixes</h2> <ul> <li>Fix Galley Helm charts so that the <code>validatingwebhookconfiguration</code> object can now be deployed to a namespace other than <code>istio-system</code> (<a href="https://github.com/istio/istio/issues/13625">Issue 13625</a>).</li> <li>Additional Helm chart fixes for anti-affinity support: fix <code>gatewaypodAntiAffinityRequiredDuringScheduling</code> and <code>podAntiAffinityLabelSelector</code> match expressions and fix the default value for <code>podAntiAffinityLabelSelector</code> (<a href="https://github.com/istio/istio/issues/13892">Issue 13892</a>).</li> <li>Make Pilot handle a condition where Envoy continues to request routes for a deleted gateway while listeners are still draining (<a href="https://github.com/istio/istio/issues/13739">Issue 13739</a>).</li> </ul> <h2 id="small-enhancements">Small enhancements</h2> <ul> <li>If access logs are enabled, <code>passthrough</code> listener requests will be logged.</li> <li>Make Pilot tolerate unknown JSON fields to make it easier to roll back to older versions during upgrade.</li> <li>Add support for fallback secrets to <code>SDS</code>, which Envoy can use instead of waiting indefinitely for late or non-existent secrets during startup (<a href="https://github.com/istio/istio/issues/13853">Issue 13853</a>).</li> </ul>Sat, 11 May 2019 00:00:00 +0000/v1.9/news/releases/1.1.x/announcing-1.1.6//v1.9/news/releases/1.1.x/announcing-1.1.6/Announcing Istio 1.1.5 <p>We&rsquo;re pleased to announce the availability of 
Istio 1.1.5. Please see below for what&rsquo;s changed.</p> <div class="relnote-actions call-to-action"> <a class="entry" href="/v1.9/news/releases/1.1.x/announcing-1.1/upgrade-notes/"> <h5>BEFORE YOU UPGRADE</h5> <p>Things to know and prepare before upgrading.</p> </a> <a class="update-notice entry" data-title='Update Notice' data-downloadhref="https://github.com/istio/istio/releases/tag/1.1.5" data-downloadbuttontext="DOWNLOAD 1.1.5" data-updateadvice='Before you download 1.1.5, you should know that there&#39;s a newer patch release with the latest bug fixes and perf improvements.' data-updatebutton='LEARN ABOUT ISTIO 1.1.17' data-updatehref="/v1.9/news/releases/1.1.x/announcing-1.1.17/"> <h5>DOWNLOAD</h5> <p>Download and install this release.</p> </a> <a class="entry" href="https://archive.istio.io/v1.1/docs"> <h5>DOCS</h5> <p>Visit the documentation for this release.</p> </a> <a class="entry" href="https://github.com/istio/istio/compare/1.1.4...1.1.5"> <h5>SOURCE CHANGES</h5> <p>Inspect the full set of source code changes.</p> </a> </div> <h2 id="bug-fixes">Bug fixes</h2> <ul> <li>Add additional validation to Pilot to reject gateway configuration with overlapping hosts matches (<a href="https://github.com/istio/istio/issues/13717">Issue 13717</a>).</li> <li>Build against the latest stable version of <code>istio-cni</code> instead of the latest daily build (<a href="https://github.com/istio/istio/issues/13171">Issue 13171</a>).</li> </ul> <h2 id="small-enhancements">Small enhancements</h2> <ul> <li>Add additional logging to help diagnose hostname resolution failures (<a href="https://github.com/istio/istio/issues/13581">Issue 13581</a>).</li> <li>Improve ease of installing <code>prometheus</code> by removing unnecessary use of <code>busybox</code> image (<a href="https://github.com/istio/istio/issues/13501">Issue 13501</a>).</li> <li>Make Pilot Agent&rsquo;s certificate paths configurable (<a href="https://github.com/istio/istio/issues/11984">Issue 
11984</a>).</li> </ul>Fri, 03 May 2019 00:00:00 +0000/v1.9/news/releases/1.1.x/announcing-1.1.5//v1.9/news/releases/1.1.x/announcing-1.1.5/Announcing Istio 1.1.4 <p>We&rsquo;re pleased to announce the availability of Istio 1.1.4. Please see below for what&rsquo;s changed.</p> <div class="relnote-actions call-to-action"> <a class="entry" href="/v1.9/news/releases/1.1.x/announcing-1.1/upgrade-notes/"> <h5>BEFORE YOU UPGRADE</h5> <p>Things to know and prepare before upgrading.</p> </a> <a class="update-notice entry" data-title='Update Notice' data-downloadhref="https://github.com/istio/istio/releases/tag/1.1.4" data-downloadbuttontext="DOWNLOAD 1.1.4" data-updateadvice='Before you download 1.1.4, you should know that there&#39;s a newer patch release with the latest bug fixes and perf improvements.' data-updatebutton='LEARN ABOUT ISTIO 1.1.17' data-updatehref="/v1.9/news/releases/1.1.x/announcing-1.1.17/"> <h5>DOWNLOAD</h5> <p>Download and install this release.</p> </a> <a class="entry" href="https://archive.istio.io/v1.1/docs"> <h5>DOCS</h5> <p>Visit the documentation for this release.</p> </a> <a class="entry" href="https://github.com/istio/istio/compare/1.1.3...1.1.4"> <h5>SOURCE CHANGES</h5> <p>Inspect the full set of source code changes.</p> </a> </div> <h2 id="behavior-change">Behavior change</h2> <ul> <li>Changed the default behavior for Pilot to allow traffic to outside the mesh, even if it is on the same port as an internal service. 
This behavior can be controlled by the <code>PILOT_ENABLE_FALLTHROUGH_ROUTE</code> environment variable.</li> </ul> <h2 id="bug-fixes">Bug fixes</h2> <ul> <li><p>Fixed egress route generation for services of type <code>ExternalName</code>.</p></li> <li><p>Added support for configuring Envoy&rsquo;s idle connection timeout, which prevents running out of memory or IP ports over time (<a href="https://github.com/istio/istio/issues/13355">Issue 13355</a>).</p></li> <li><p>Fixed a crashing bug in Pilot in failover handling of locality-based load balancing.</p></li> <li><p>Fixed a crashing bug in Pilot when it was given custom certificate paths.</p></li> <li><p>Fixed a bug in Pilot where it was ignoring short names used as service entry hosts (<a href="https://github.com/istio/istio/issues/13436">Issue 13436</a>).</p></li> <li><p>Added missing <code>https_protocol_options</code> to the envoy-metrics-service cluster configuration.</p></li> <li><p>Fixed a bug in Pilot where it didn&rsquo;t handle https traffic correctly in the fall through route case (<a href="https://github.com/istio/istio/issues/13386">Issue 13386</a>).</p></li> <li><p>Fixed a bug where Pilot didn&rsquo;t remove endpoints from Envoy after they were removed from Kubernetes (<a href="https://github.com/istio/istio/issues/13402">Issue 13402</a>).</p></li> <li><p>Fixed a crashing bug in the node agent (<a href="https://github.com/istio/istio/issues/13325">Issue 13325</a>).</p></li> <li><p>Added missing validation to prevent gateway names from containing dots (<a href="https://github.com/istio/istio/issues/13211">Issue 13211</a>).</p></li> <li><p>Fixed bug where <a href="/v1.9/docs/reference/config/networking/destination-rule#LoadBalancerSettings-ConsistentHashLB"><code>ConsistentHashLB.minimumRingSize</code></a> was defaulting to 0 instead of the documented 1024 (<a href="https://github.com/istio/istio/issues/13261">Issue 13261</a>).</p></li> </ul> <h2 id="small-enhancements">Small enhancements</h2> <ul> 
<li><p>Updated to the latest version of the <a href="https://www.kiali.io">Kiali</a> add-on.</p></li> <li><p>Updated to the latest version of <a href="https://grafana.com">Grafana</a>.</p></li> <li><p>Added validation to ensure Citadel is only deployed with a single replica (<a href="https://github.com/istio/istio/issues/13383">Issue 13383</a>).</p></li> <li><p>Added support to configure the logging level of the proxy and Istio control plane (<a href="https://github.com/istio/istio/issues/11847">Issue 11847</a>).</p></li> <li><p>Allow sidecars to bind to any loopback address and not just 127.0.0.1 (<a href="https://github.com/istio/istio/issues/13201">Issue 13201</a>).</p></li> </ul>Wed, 24 Apr 2019 00:00:00 +0000/v1.9/news/releases/1.1.x/announcing-1.1.4//v1.9/news/releases/1.1.x/announcing-1.1.4/Announcing Istio 1.1.3 <p>We&rsquo;re pleased to announce the availability of Istio 1.1.3. Please see below for what&rsquo;s changed.</p> <div class="relnote-actions call-to-action"> <a class="entry" href="/v1.9/news/releases/1.1.x/announcing-1.1/upgrade-notes/"> <h5>BEFORE YOU UPGRADE</h5> <p>Things to know and prepare before upgrading.</p> </a> <a class="update-notice entry" data-title='Update Notice' data-downloadhref="https://github.com/istio/istio/releases/tag/1.1.3" data-downloadbuttontext="DOWNLOAD 1.1.3" data-updateadvice='Before you download 1.1.3, you should know that there&#39;s a newer patch release with the latest bug fixes and perf improvements.' 
data-updatebutton='LEARN ABOUT ISTIO 1.1.17' data-updatehref="/v1.9/news/releases/1.1.x/announcing-1.1.17/"> <h5>DOWNLOAD</h5> <p>Download and install this release.</p> </a> <a class="entry" href="https://archive.istio.io/v1.1/docs"> <h5>DOCS</h5> <p>Visit the documentation for this release.</p> </a> <a class="entry" href="https://github.com/istio/istio/compare/1.1.2...1.1.3"> <h5>SOURCE CHANGES</h5> <p>Inspect the full set of source code changes.</p> </a> </div> <h2 id="known-issues-with-1-1-3">Known issues with 1.1.3</h2> <ul> <li>A <a href="https://github.com/istio/istio/issues/13325">panic in the Node Agent</a> was discovered late in the 1.1.3 qualification process. The panic only occurs in clusters with the alpha-quality SDS certificate rotation feature enabled. Since this is the first time we have included SDS certificate rotation in our long-running release tests, we don&rsquo;t know whether this is a latent bug or a new regression. Considering SDS certificate rotation is in alpha, we have decided to release 1.1.3 with this issue and target a fix for the 1.1.4 release.</li> </ul> <h2 id="bug-fixes">Bug fixes</h2> <ul> <li><p>Istio-specific back-ports of Envoy patches for <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-9900"><code>CVE-2019-9900</code></a> and <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-9901"><code>CVE-2019-9901</code></a> included in Istio 1.1.2 have been dropped in favor of an Envoy update which contains the final version of the patches.</p></li> <li><p>Fix load balancer weight setting for split horizon <code>EDS</code>.</p></li> <li><p>Fix typo in the default Envoy <code>JSON</code> log format (<a href="https://github.com/istio/istio/issues/12232">Issue 12232</a>).</p></li> <li><p>Correctly reload out-of-process adapter address upon configuration change (<a href="https://github.com/istio/istio/issues/12488">Issue 12488</a>).</p></li> <li><p>Restore Kiali settings that were accidentally deleted (<a 
href="https://github.com/istio/istio/issues/3660">Issue 3660</a>).</p></li> <li><p>Prevent services with the same target port from resulting in duplicate inbound listeners (<a href="https://github.com/istio/istio/issues/9504">Issue 9504</a>).</p></li> <li><p>Fix an issue where configuring <code>Sidecar</code> egress ports for namespaces other than <code>istio-system</code> resulted in an <code>envoy.tcp_proxy</code> filter pointing to <code>BlackHoleCluster</code>, by auto-binding to services for <code>Sidecar</code> listeners (<a href="https://github.com/istio/istio/issues/12536">Issue 12536</a>).</p></li> <li><p>Fix gateway <code>vhost</code> configuration generation issue by favoring more specific host matches (<a href="https://github.com/istio/istio/issues/12655">Issue 12655</a>).</p></li> <li><p>Fix <code>ALLOW_ANY</code> so it now allows external traffic if there is already an HTTP service present on a port.</p></li> <li><p>Fix validation logic so that <code>port.name</code> is no longer a valid <code>PortSelection</code>.</p></li> <li><p>Fix <a href="/v1.9/docs/reference/commands/istioctl/#istioctl-proxy-config-cluster"><code>istioctl proxy-config cluster</code></a> cluster type column rendering (<a href="https://github.com/istio/istio/issues/12455">Issue 12455</a>).</p></li> <li><p>Fix SDS secret mount configuration.</p></li> <li><p>Fix incorrect Istio version in the Helm charts.</p></li> <li><p>Fix partial DNS failures in the presence of overlapping ports (<a href="https://github.com/istio/istio/issues/11658">Issue 11658</a>).</p></li> <li><p>Fix Helm <code>podAntiAffinity</code> template error (<a href="https://github.com/istio/istio/issues/12790">Issue 12790</a>).</p></li> <li><p>Fix bug with the original destination service discovery not using the original destination load balancer.</p></li> <li><p>Fix SDS memory leak in the presence of invalid or missing keying materials (<a href="https://github.com/istio/istio/issues/13197">Issue 13197</a>).</p></li> </ul> <h2 
id="small-enhancements">Small enhancements</h2> <ul> <li><p>Hide <code>ServiceAccounts</code> from <code>PushContext</code> log to reduce log volume.</p></li> <li><p>Configure <code>localityLbSetting</code> in <code>values.yaml</code> by passing it through to the mesh configuration.</p></li> <li><p>Remove the soon-to-be deprecated <code>critical-pod</code> annotation from Helm charts (<a href="https://github.com/istio/istio/issues/12650">Issue 12650</a>).</p></li> <li><p>Support pod anti-affinity annotations to improve control plane availability (<a href="https://github.com/istio/istio/issues/11333">Issue 11333</a>).</p></li> <li><p>Pretty print <code>IP</code> addresses in access logs.</p></li> <li><p>Remove redundant write header to further reduce log volume.</p></li> <li><p>Improve destination host validation in Pilot.</p></li> <li><p>Explicitly configure <code>istio-init</code> to run as root so use of pod-level <code>securityContext.runAsUser</code> doesn&rsquo;t break it (<a href="https://github.com/istio/istio/issues/5453">Issue 5453</a>).</p></li> <li><p>Add configuration samples for Vault integration.</p></li> <li><p>Respect locality load balancing weight settings from <code>ServiceEntry</code>.</p></li> <li><p>Make the TLS certificate location watched by Pilot Agent configurable (<a href="https://github.com/istio/istio/issues/11984">Issue 11984</a>).</p></li> <li><p>Add support for Datadog tracing.</p></li> <li><p>Add alias to <code>istioctl</code> so &lsquo;x&rsquo; can be used instead of &lsquo;experimental&rsquo;.</p></li> <li><p>Provide improved distribution of sidecar certificate by adding jitter to their CSR requests.</p></li> <li><p>Allow weighted load balancing registry locality to be configured.</p></li> <li><p>Add support for standard CRDs for compiled-in Mixer adapters.</p></li> <li><p>Reduce Pilot resource requirements for demo configuration.</p></li> <li><p>Fully populate Galley dashboard by adding data source (<a 
href="https://github.com/istio/istio/issues/13040">Issue 13040</a>).</p></li> <li><p>Propagate Istio 1.1 <code>sidecar</code> performance tuning to the <code>istio-gateway</code>.</p></li> <li><p>Improve destination host validation by rejecting <code>*</code> hosts (<a href="https://github.com/istio/istio/issues/12794">Issue 12794</a>).</p></li> <li><p>Expose upstream <code>idle_timeout</code> in cluster definition so dead connections can sometimes be removed from connection pools before they are used (<a href="https://github.com/istio/istio/issues/9113">Issue 9113</a>).</p></li> <li><p>When registering a <code>Sidecar</code> resource to restrict what a pod can see, the restrictions are now applied if the spec contains a <code>workloadSelector</code> (<a href="https://github.com/istio/istio/issues/11818">Issue 11818</a>).</p></li> <li><p>Update the Bookinfo example to use port 80 for TLS origination.</p></li> <li><p>Add liveness probe for Citadel.</p></li> <li><p>Improve AWS ELB interoperability by making 15020 the first port listed in the <code>ingressgateway</code> service (<a href="https://github.com/istio/istio/issues/12503">Issue 12503</a>).</p></li> <li><p>Use outlier detection for failover mode but not for distribute mode for locality weighted load balancing (<a href="https://github.com/istio/istio/issues/12961">Issue 12961</a>).</p></li> <li><p>Replace generation of Envoy&rsquo;s deprecated <code>enabled</code> field in <code>CorsPolicy</code> with the replacement <code>filter_enabled</code> field for 1.1.0+ sidecars only.</p></li> <li><p>Standardize labels on Mixer&rsquo;s Helm charts.</p></li> </ul>Mon, 15 Apr 2019 00:00:00 +0000/v1.9/news/releases/1.1.x/announcing-1.1.3//v1.9/news/releases/1.1.x/announcing-1.1.3/Announcing Istio 1.1.2 with Important Security Update <p>We&rsquo;re announcing immediate availability of Istio 1.1.2, which contains some important security updates. 
Please see below for details.</p> <div class="relnote-actions call-to-action"> <a class="entry" href="/v1.9/news/releases/1.1.x/announcing-1.1/upgrade-notes/"> <h5>BEFORE YOU UPGRADE</h5> <p>Things to know and prepare before upgrading.</p> </a> <a class="update-notice entry" data-title='Update Notice' data-downloadhref="https://github.com/istio/istio/releases/tag/1.1.2" data-downloadbuttontext="DOWNLOAD 1.1.2" data-updateadvice='Before you download 1.1.2, you should know that there&#39;s a newer patch release with the latest bug fixes and perf improvements.' data-updatebutton='LEARN ABOUT ISTIO 1.1.17' data-updatehref="/v1.9/news/releases/1.1.x/announcing-1.1.17/"> <h5>DOWNLOAD</h5> <p>Download and install this release.</p> </a> <a class="entry" href="https://archive.istio.io/v1.1/docs"> <h5>DOCS</h5> <p>Visit the documentation for this release.</p> </a> <a class="entry" href="https://github.com/istio/istio/compare/1.1.1...1.1.2"> <h5>SOURCE CHANGES</h5> <p>Inspect the full set of source code changes.</p> </a> </div> <h2 id="security-update">Security update</h2> <p>Two security vulnerabilities have recently been identified in the Envoy proxy (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-9900">CVE 2019-9900</a> and <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-9901">CVE 2019-9901</a>). The vulnerabilities have now been patched in Envoy version 1.9.1, and correspondingly in the Envoy builds embedded in Istio 1.1.2 and Istio 1.0.7. Since Envoy is an integral part of Istio, users are advised to update Istio immediately to mitigate security risks arising from these vulnerabilities.</p> <p>The vulnerabilities are centered on the fact that Envoy did not normalize HTTP URI paths and did not fully validate HTTP/1.1 header values. 
These vulnerabilities impact Istio features that rely on Envoy to enforce any of authorization, routing, or rate limiting.</p> <h2 id="affected-istio-releases">Affected Istio releases</h2> <p>The following Istio releases are vulnerable:</p> <ul> <li><p>1.1, 1.1.1</p> <ul> <li>These releases can be patched to Istio 1.1.2.</li> <li>1.1.2 is built from the same source as 1.1.1 with the addition of Envoy patches minimally sufficient to address the CVEs.</li> </ul></li> <li><p>1.0, 1.0.1, 1.0.2, 1.0.3, 1.0.4, 1.0.5, 1.0.6</p> <ul> <li>These releases can be patched to Istio 1.0.7.</li> <li>1.0.7 is built from the same source as 1.0.6 with the addition of Envoy patches minimally sufficient to address the CVEs.</li> </ul></li> <li><p>0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8</p> <ul> <li>These releases are no longer supported and will not be patched. Please upgrade to a supported release with the necessary fixes.</li> </ul></li> </ul> <h2 id="vulnerability-impact">Vulnerability impact</h2> <p><a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-9900">CVE 2019-9900</a> and <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-9901">CVE 2019-9901</a> allow remote attackers access to unauthorized resources by using specially crafted request URI paths (9901) and NUL bytes in HTTP/1.1 headers (9900), potentially circumventing DoS prevention systems such as rate limiting, or routing to an unexposed upstream system. Refer to <a href="https://github.com/envoyproxy/envoy/issues/6434">issue 6434</a> and <a href="https://github.com/envoyproxy/envoy/issues/6435">issue 6435</a> for more information.</p> <p>As Istio is based on Envoy, Istio customers can be affected by these vulnerabilities based on whether paths and request headers are used within Istio policies or routing rules and how the backend HTTP implementation resolves them. 
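</p> <p>As a concrete illustration of the path issue, consider a deny rule keyed on a raw path prefix. The sketch below is illustrative only (a hypothetical <code>/admin</code> rule and helper names, not Istio&rsquo;s or Envoy&rsquo;s actual matching code); it shows how a dot-segment path that a backend later resolves can slip past an unnormalized prefix match:</p>

```python
from posixpath import normpath

# Hypothetical deny rule: block paths under /admin (illustrative only).
DENIED_PREFIXES = ["/admin"]

def naive_denied(path: str) -> bool:
    # Match against the raw path as received, the way a matcher that
    # skips normalization would behave.
    return any(path.startswith(p) for p in DENIED_PREFIXES)

def normalized_denied(path: str) -> bool:
    # Resolve dot-segments first (roughly what Envoy's opt-in path
    # normalization does), then apply the same prefix rule.
    return any(normpath(path).startswith(p) for p in DENIED_PREFIXES)

# "/static/../admin/users" resolves to "/admin/users", but as written it
# does not start with "/admin", so the naive check lets it through.
evasive = "/static/../admin/users"
print(naive_denied(evasive))       # False: bypasses the raw prefix match
print(normalized_denied(evasive))  # True: caught once normalized
```

<p>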
If prefix path matching rules are used by Mixer or by Istio authorization policies or the routing rules, an attacker could exploit these vulnerabilities to gain access to unauthorized paths on certain HTTP backends.</p> <h2 id="mitigation">Mitigation</h2> <p>Eliminating the vulnerabilities requires updating to a corrected version of Envoy. We’ve incorporated the necessary updates in the latest Istio patch releases.</p> <p>For Istio 1.1.x deployments: update to a minimum of <a href="/v1.9/news/releases/1.1.x/announcing-1.1.2">Istio 1.1.2</a></p> <p>For Istio 1.0.x deployments: update to a minimum of <a href="/v1.9/news/releases/1.0.x/announcing-1.0.7">Istio 1.0.7</a></p> <p>While Envoy 1.9.1 requires opting in to path normalization to address CVE 2019-9901, the version of Envoy embedded in Istio 1.1.2 and 1.0.7 enables path normalization by default.</p> <h2 id="detection-of-nul-header-exploit">Detection of NUL header exploit</h2> <p>Based on current information, this only affects HTTP/1.1 traffic. If this is not structurally possible in your network or configuration, then it is unlikely that this vulnerability applies.</p> <p>File-based access logging uses the <code>c_str()</code> representation for header values, as does gRPC access logging, so there will be no trivial detection via Envoy’s access logs by scanning for NUL. Instead, operators might look for inconsistencies in logs between the routing that Envoy performs and the logic intended in the <code>RouteConfiguration</code>.</p> <p>External authorization and rate limit services can check for NULs in headers. 
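</p> <p>A check of that kind might look like the following sketch (a hypothetical helper, not an Envoy or Istio API): it accepts the octets RFC 7230 permits in a header field-value and rejects NUL and other control bytes.</p>

```python
def header_value_is_clean(value: bytes) -> bool:
    # RFC 7230 field-values allow HTAB, SP, visible ASCII, and obs-text
    # (0x80-0xFF); NUL and other control octets, including DEL, are out.
    return all(b == 0x09 or (0x20 <= b <= 0xFF and b != 0x7F) for b in value)

print(header_value_is_clean(b"text/html; charset=utf-8"))  # True
print(header_value_is_clean(b"evil\x00host.example"))      # False: embedded NUL
```

<p>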
Backend servers might have sufficient logging to detect NULs or unintended access; it’s likely that many will simply reject NULs in this scenario via 400 Bad Request, as per RFC 7230.</p> <h2 id="detection-of-path-traversal-exploit">Detection of path traversal exploit</h2> <p>Envoy’s access logs (whether file-based or gRPC) will contain the unnormalized path, so it is possible to examine these logs to detect suspicious patterns and requests that are incongruous with the operator’s intended configuration. In addition, unnormalized paths are available at <code>ext_authz</code>, rate limiting, and backend servers for log inspection.</p>Fri, 05 Apr 2019 00:00:00 +0000/v1.9/news/releases/1.1.x/announcing-1.1.2//v1.9/news/releases/1.1.x/announcing-1.1.2/Announcing Istio 1.0.7 with Important Security Update <p>We&rsquo;re announcing immediate availability of Istio 1.0.7, which contains some important security updates. Please see below for details.</p> <div class="relnote-actions call-to-action"> <a class="update-notice entry" data-title='Update Notice' data-downloadhref="https://github.com/istio/istio/releases/tag/1.0.7" data-downloadbuttontext="DOWNLOAD 1.0.7" data-updateadvice='Before you download 1.0.7, you should know that there&#39;s a newer patch release with the latest bug fixes and perf improvements.' 
data-updatebutton='LEARN ABOUT ISTIO 1.0.9' data-updatehref="/v1.9/news/releases/1.0.x/announcing-1.0.9/"> <h5>DOWNLOAD</h5> <p>Download and install this release.</p> </a> <a class="entry" href="https://archive.istio.io/v1.0/docs"> <h5>DOCS</h5> <p>Visit the documentation for this release.</p> </a> <a class="entry" href="https://github.com/istio/istio/compare/1.0.6...1.0.7"> <h5>SOURCE CHANGES</h5> <p>Inspect the full set of source code changes.</p> </a> </div> <h2 id="security-update">Security update</h2> <p>Two security vulnerabilities have recently been identified in the Envoy proxy (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-9900">CVE 2019-9900</a> and <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-9901">CVE 2019-9901</a>). The vulnerabilities have now been patched in Envoy version 1.9.1, and correspondingly in the Envoy builds embedded in Istio 1.1.2 and Istio 1.0.7. Since Envoy is an integral part of Istio, users are advised to update Istio immediately to mitigate security risks arising from these vulnerabilities.</p> <p>The vulnerabilities are centered on the fact that Envoy did not normalize HTTP URI paths and did not fully validate HTTP/1.1 header values. 
These vulnerabilities impact Istio features that rely on Envoy to enforce any of authorization, routing, or rate limiting.</p> <h2 id="affected-istio-releases">Affected Istio releases</h2> <p>The following Istio releases are vulnerable:</p> <ul> <li><p>1.1, 1.1.1</p> <ul> <li>These releases can be patched to Istio 1.1.2.</li> <li>1.1.2 is built from the same source as 1.1.1 with the addition of Envoy patches minimally sufficient to address the CVEs.</li> </ul></li> <li><p>1.0, 1.0.1, 1.0.2, 1.0.3, 1.0.4, 1.0.5, 1.0.6</p> <ul> <li>These releases can be patched to Istio 1.0.7.</li> <li>1.0.7 is built from the same source as 1.0.6 with the addition of Envoy patches minimally sufficient to address the CVEs.</li> </ul></li> <li><p>0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8</p> <ul> <li>These releases are no longer supported and will not be patched. Please upgrade to a supported release with the necessary fixes.</li> </ul></li> </ul> <h2 id="vulnerability-impact">Vulnerability impact</h2> <p><a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-9900">CVE 2019-9900</a> and <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-9901">CVE 2019-9901</a> allow remote attackers access to unauthorized resources by using specially crafted request URI paths (9901) and NUL bytes in HTTP/1.1 headers (9900), potentially circumventing DoS prevention systems such as rate limiting, or routing to an unexposed upstream system. Refer to <a href="https://github.com/envoyproxy/envoy/issues/6434">issue 6434</a> and <a href="https://github.com/envoyproxy/envoy/issues/6435">issue 6435</a> for more information.</p> <p>As Istio is based on Envoy, Istio customers can be affected by these vulnerabilities based on whether paths and request headers are used within Istio policies or routing rules and how the backend HTTP implementation resolves them. 
If prefix path matching rules are used by Mixer or by Istio authorization policies or the routing rules, an attacker could exploit these vulnerabilities to gain access to unauthorized paths on certain HTTP backends.</p> <h2 id="mitigation">Mitigation</h2> <p>Eliminating the vulnerabilities requires updating to a corrected version of Envoy. We’ve incorporated the necessary updates in the latest Istio patch releases.</p> <p>For Istio 1.1.x deployments: update to a minimum of <a href="/v1.9/news/releases/1.1.x/announcing-1.1.2">Istio 1.1.2</a></p> <p>For Istio 1.0.x deployments: update to a minimum of <a href="/v1.9/news/releases/1.0.x/announcing-1.0.7">Istio 1.0.7</a></p> <p>While Envoy 1.9.1 requires opting in to path normalization to address CVE 2019-9901, the version of Envoy embedded in Istio 1.1.2 and 1.0.7 enables path normalization by default.</p> <h2 id="detection-of-nul-header-exploit">Detection of NUL header exploit</h2> <p>Based on current information, this only affects HTTP/1.1 traffic. If this is not structurally possible in your network or configuration, then it is unlikely that this vulnerability applies.</p> <p>File-based access logging uses the <code>c_str()</code> representation for header values, as does gRPC access logging, so there will be no trivial detection via Envoy’s access logs by scanning for NUL. Instead, operators might look for inconsistencies in logs between the routing that Envoy performs and the logic intended in the <code>RouteConfiguration</code>.</p> <p>External authorization and rate limit services can check for NULs in headers. 
Backend servers might have sufficient logging to detect NULs or unintended access; it’s likely that many will simply reject NULs in this scenario via 400 Bad Request, as per RFC 7230.</p> <h2 id="detection-of-path-traversal-exploit">Detection of path traversal exploit</h2> <p>Envoy’s access logs (whether file-based or gRPC) will contain the unnormalized path, so it is possible to examine these logs to detect suspicious patterns and requests that are incongruous with the operator’s intended configuration. In addition, unnormalized paths are available at <code>ext_authz</code>, rate limiting, and backend servers for log inspection.</p>Fri, 05 Apr 2019 00:00:00 +0000/v1.9/news/releases/1.0.x/announcing-1.0.7//v1.9/news/releases/1.0.x/announcing-1.0.7/Announcing Istio 1.1.1 <p>We&rsquo;re pleased to announce the availability of Istio 1.1.1. Please see below for what&rsquo;s changed.</p> <div class="relnote-actions call-to-action"> <a class="entry" href="/v1.9/news/releases/1.1.x/announcing-1.1/upgrade-notes/"> <h5>BEFORE YOU UPGRADE</h5> <p>Things to know and prepare before upgrading.</p> </a> <a class="update-notice entry" data-title='Update Notice' data-downloadhref="https://github.com/istio/istio/releases/tag/1.1.1" data-downloadbuttontext="DOWNLOAD 1.1.1" data-updateadvice='Before you download 1.1.1, you should know that there&#39;s a newer patch release with the latest bug fixes and perf improvements.'
data-updatebutton='LEARN ABOUT ISTIO 1.1.17' data-updatehref="/v1.9/news/releases/1.1.x/announcing-1.1.17/"> <h5>DOWNLOAD</h5> <p>Download and install this release.</p> </a> <a class="entry" href="https://archive.istio.io/v1.1/docs"> <h5>DOCS</h5> <p>Visit the documentation for this release.</p> </a> <a class="entry" href="https://github.com/istio/istio/compare/1.1.0...1.1.1"> <h5>SOURCE CHANGES</h5> <p>Inspect the full set of source code changes.</p> </a> </div> <h2 id="bug-fixes-and-minor-enhancements">Bug fixes and minor enhancements</h2> <ul> <li>Configure Prometheus to monitor Citadel (<a href="https://github.com/istio/istio/pull/12175">Issue 12175</a>)</li> <li>Improve output of <a href="/v1.9/docs/reference/commands/istioctl/#istioctl-verify-install"><code>istioctl verify-install</code></a> command (<a href="https://github.com/istio/istio/pull/12174">Issue 12174</a>)</li> <li>Reduce log level for missing service account messages for a SPIFFE URI (<a href="https://github.com/istio/istio/issues/12108">Issue 12108</a>)</li> <li>Fix broken path on the opt-in SDS feature&rsquo;s Unix domain socket (<a href="https://github.com/istio/istio/pull/12688">Issue 12688</a>)</li> <li>Fix Envoy tracing that was preventing a child span from being created if the parent span was propagated with an empty string (<a href="https://github.com/envoyproxy/envoy/pull/6263">Envoy Issue 6263</a>)</li> <li>Add namespace scoping to the Gateway &lsquo;port&rsquo; names. 
This fixes two issues: <ul> <li><code>IngressGateway</code> only respects first port 443 Gateway definition (<a href="https://github.com/istio/istio/issues/11509">Issue 11509</a>)</li> <li>Istio <code>IngressGateway</code> routing broken with two different gateways with same port name (SDS) (<a href="https://github.com/istio/istio/issues/12500">Issue 12500</a>)</li> </ul></li> <li>Five bug fixes for locality weighted load balancing: <ul> <li>Fix bug causing empty endpoints per locality (<a href="https://github.com/istio/istio/issues/12610">Issue 12610</a>)</li> <li>Apply locality weighted load balancing configuration correctly (<a href="https://github.com/istio/istio/issues/12587">Issue 12587</a>)</li> <li>Locality label <code>istio-locality</code> in Kubernetes should not contain <code>/</code>, use <code>.</code> (<a href="https://github.com/istio/istio/issues/12582">Issue 12582</a>)</li> <li>Fix crash in locality load balancing (<a href="https://github.com/istio/istio/pull/12649">Issue 12649</a>)</li> <li>Fix bug in locality load balancing normalization (<a href="https://github.com/istio/istio/pull/12579">Issue 12579</a>)</li> </ul></li> <li>Propagate Envoy Metrics Service configuration (<a href="https://github.com/istio/istio/issues/12569">Issue 12569</a>)</li> <li>Do not apply <code>VirtualService</code> rule to the wrong gateway (<a href="https://github.com/istio/istio/issues/10313">Issue 10313</a>)</li> </ul>Mon, 25 Mar 2019 00:00:00 +0000/v1.9/news/releases/1.1.x/announcing-1.1.1//v1.9/news/releases/1.1.x/announcing-1.1.1/Announcing Istio 1.0.6 <p>We&rsquo;re pleased to announce the availability of Istio 1.0.6. 
Please see below for what&rsquo;s changed.</p> <div class="relnote-actions call-to-action"> <a class="update-notice entry" data-title='Update Notice' data-downloadhref="https://github.com/istio/istio/releases/tag/1.0.6" data-downloadbuttontext="DOWNLOAD 1.0.6" data-updateadvice='Before you download 1.0.6, you should know that there&#39;s a newer patch release with the latest bug fixes and perf improvements.' data-updatebutton='LEARN ABOUT ISTIO 1.0.9' data-updatehref="/v1.9/news/releases/1.0.x/announcing-1.0.9/"> <h5>DOWNLOAD</h5> <p>Download and install this release.</p> </a> <a class="entry" href="https://archive.istio.io/v1.0/docs"> <h5>DOCS</h5> <p>Visit the documentation for this release.</p> </a> <a class="entry" href="https://github.com/istio/istio/compare/1.0.5...1.0.6"> <h5>SOURCE CHANGES</h5> <p>Inspect the full set of source code changes.</p> </a> </div> <h2 id="bug-fixes">Bug fixes</h2> <ul> <li>Fix Galley Helm charts so that the <code>validatingwebhookconfiguration</code> object can now be deployed to a namespace other than <code>istio-system</code> (<a href="https://github.com/istio/istio/issues/13625">Issue 13625</a>).</li> <li>Additional Helm chart fixes for anti-affinity support: fix <code>gatewaypodAntiAffinityRequiredDuringScheduling</code> and <code>podAntiAffinityLabelSelector</code> match expressions and fix the default value for <code>podAntiAffinityLabelSelector</code> (<a href="https://github.com/istio/istio/issues/13892">Issue 13892</a>).</li> <li>Make Pilot handle a condition where Envoy continues to request routes for a deleted gateway while listeners are still draining (<a href="https://github.com/istio/istio/issues/13739">Issue 13739</a>).</li> </ul> <h2 id="small-enhancements">Small enhancements</h2> <ul> <li>If access logs are enabled, <code>passthrough</code> listener requests will be logged.</li> <li>Make Pilot tolerate unknown JSON fields to make it easier to roll back to older versions during upgrade.</li> <li>Add support for
fallback secrets to <code>SDS</code> which Envoy can use instead of waiting indefinitely for late or non-existent secrets during startup (<a href="https://github.com/istio/istio/issues/13853">Issue 13853</a>).</li> </ul>Tue, 12 Feb 2019 00:00:00 +0000/v1.9/news/releases/1.0.x/announcing-1.0.6//v1.9/news/releases/1.0.x/announcing-1.0.6/Announcing Istio 1.0.5 <p>We&rsquo;re pleased to announce the availability of Istio 1.0.5. Please see below for what&rsquo;s changed.</p> <div class="relnote-actions call-to-action"> <a class="update-notice entry" data-title='Update Notice' data-downloadhref="https://github.com/istio/istio/releases/tag/1.0.5" data-downloadbuttontext="DOWNLOAD 1.0.5" data-updateadvice='Before you download 1.0.5, you should know that there&#39;s a newer patch release with the latest bug fixes and perf improvements.' data-updatebutton='LEARN ABOUT ISTIO 1.0.9' data-updatehref="/v1.9/news/releases/1.0.x/announcing-1.0.9/"> <h5>DOWNLOAD</h5> <p>Download and install this release.</p> </a> <a class="entry" href="https://archive.istio.io/v1.0/docs"> <h5>DOCS</h5> <p>Visit the documentation for this release.</p> </a> <a class="entry" href="https://github.com/istio/istio/compare/1.0.4...1.0.5"> <h5>SOURCE CHANGES</h5> <p>Inspect the full set of source code changes.</p> </a> </div> <h2 id="general">General</h2> <ul> <li><p>Disabled the precondition cache in the <code>istio-policy</code> service as it led to invalid results. The cache will be reintroduced in a later release.</p></li> <li><p>Mixer now only generates spans when there is an enabled <code>tracespan</code> adapter, resulting in lower CPU overhead in normal cases.</p></li> <li><p>Fixed a problem that could lead Pilot to hang.</p></li> </ul>Thu, 20 Dec 2018 00:00:00 +0000/v1.9/news/releases/1.0.x/announcing-1.0.5//v1.9/news/releases/1.0.x/announcing-1.0.5/Announcing Istio 1.0.4 <p>We&rsquo;re pleased to announce the availability of Istio 1.0.4.
Please see below for what&rsquo;s changed.</p> <div class="relnote-actions call-to-action"> <a class="update-notice entry" data-title='Update Notice' data-downloadhref="https://github.com/istio/istio/releases/tag/1.0.4" data-downloadbuttontext="DOWNLOAD 1.0.4" data-updateadvice='Before you download 1.0.4, you should know that there&#39;s a newer patch release with the latest bug fixes and perf improvements.' data-updatebutton='LEARN ABOUT ISTIO 1.0.9' data-updatehref="/v1.9/news/releases/1.0.x/announcing-1.0.9/"> <h5>DOWNLOAD</h5> <p>Download and install this release.</p> </a> <a class="entry" href="https://archive.istio.io/v1.0/docs"> <h5>DOCS</h5> <p>Visit the documentation for this release.</p> </a> <a class="entry" href="https://github.com/istio/istio/compare/1.0.3...1.0.4"> <h5>SOURCE CHANGES</h5> <p>Inspect the full set of source code changes.</p> </a> </div> <h2 id="known-issues">Known issues</h2> <ul> <li>Pilot may deadlock when using <a href="/v1.9/docs/reference/commands/istioctl/#istioctl-proxy-status"><code>istioctl proxy-status</code></a> to get proxy synchronization status. The workaround is to <em>not use</em> <code>istioctl proxy-status</code>. Once Pilot enters a deadlock, it exhibits continuous memory growth, eventually running out of memory.</li> </ul> <h2 id="networking">Networking</h2> <ul> <li><p>Fixed the lack of removal of stale endpoints causing <code>503</code> errors.</p></li> <li><p>Fixed sidecar injection when a pod label contains a <code>/</code>.</p></li> </ul> <h2 id="policy-and-telemetry">Policy and telemetry</h2> <ul> <li><p>Fixed occasional data corruption problem with out-of-process Mixer adapters leading to incorrect behavior.</p></li> <li><p>Fixed excessive CPU usage by Mixer when waiting for missing CRDs.</p></li> </ul>Wed, 21 Nov 2018 00:00:00 +0000/v1.9/news/releases/1.0.x/announcing-1.0.4//v1.9/news/releases/1.0.x/announcing-1.0.4/Announcing Istio 1.0.3 <p>We&rsquo;re pleased to announce the availability of Istio 1.0.3.
Please see below for what&rsquo;s changed.</p> <div class="relnote-actions call-to-action"> <a class="update-notice entry" data-title='Update Notice' data-downloadhref="https://github.com/istio/istio/releases/tag/1.0.3" data-downloadbuttontext="DOWNLOAD 1.0.3" data-updateadvice='Before you download 1.0.3, you should know that there&#39;s a newer patch release with the latest bug fixes and perf improvements.' data-updatebutton='LEARN ABOUT ISTIO 1.0.9' data-updatehref="/v1.9/news/releases/1.0.x/announcing-1.0.9/"> <h5>DOWNLOAD</h5> <p>Download and install this release.</p> </a> <a class="entry" href="https://archive.istio.io/v1.0/docs"> <h5>DOCS</h5> <p>Visit the documentation for this release.</p> </a> <a class="entry" href="https://github.com/istio/istio/compare/1.0.2...1.0.3"> <h5>SOURCE CHANGES</h5> <p>Inspect the full set of source code changes.</p> </a> </div> <h2 id="behavior-changes">Behavior changes</h2> <ul> <li><p><a href="/v1.9/docs/ops/common-problems/validation">Validating webhook</a> is now mandatory. Disabling it may result in Pilot crashes.</p></li> <li><p><a href="/v1.9/docs/reference/config/networking/service-entry/">Service entry</a> validation now rejects the wildcard hostname (<code>*</code>) when configuring DNS resolution. The API has never allowed this; however, <code>ServiceEntry</code> was erroneously excluded from validation in the previous release. Use of wildcards as part of a hostname, e.g. <code>*.bar.com</code>, remains unchanged.</p></li> <li><p>The core dump path for <code>istio-proxy</code> has changed to <code>/var/lib/istio</code>.</p></li> </ul> <h2 id="networking">Networking</h2> <ul> <li><p>Mutual TLS Permissive mode is enabled by default.</p></li> <li><p>Pilot performance and scalability have been greatly enhanced.
Pilot now delivers endpoint updates to 500 sidecars in under 1 second.</p></li> <li><p>Default <a href="/v1.9/docs/tasks/observability/distributed-tracing/configurability/#trace-sampling">trace sampling</a> is set to 1%.</p></li> </ul> <h2 id="policy-and-telemetry">Policy and telemetry</h2> <ul> <li><p>Mixer (<code>istio-telemetry</code>) now supports load shedding based on request rate and expected latency.</p></li> <li><p>Mixer client (<code>istio-policy</code>) now supports <code>FAIL_OPEN</code> setting.</p></li> <li><p>Istio Performance dashboard added to Grafana.</p></li> <li><p>Reduced <code>istio-telemetry</code> CPU usage by 10%.</p></li> <li><p>Eliminated <code>statsd-to-prometheus</code> deployment. Prometheus now directly scrapes from <code>istio-proxy</code>.</p></li> </ul>Tue, 30 Oct 2018 00:00:00 +0000/v1.9/news/releases/1.0.x/announcing-1.0.3//v1.9/news/releases/1.0.x/announcing-1.0.3/Announcing Istio 1.0.2 <p>We&rsquo;re pleased to announce the availability of Istio 1.0.2. Please see below for what&rsquo;s changed.</p> <div class="relnote-actions call-to-action"> <a class="update-notice entry" data-title='Update Notice' data-downloadhref="https://github.com/istio/istio/releases/tag/1.0.2" data-downloadbuttontext="DOWNLOAD 1.0.2" data-updateadvice='Before you download 1.0.2, you should know that there&#39;s a newer patch release with the latest bug fixes and perf improvements.' 
data-updatebutton='LEARN ABOUT ISTIO 1.0.9' data-updatehref="/v1.9/news/releases/1.0.x/announcing-1.0.9/"> <h5>DOWNLOAD</h5> <p>Download and install this release.</p> </a> <a class="entry" href="https://archive.istio.io/v1.0/docs"> <h5>DOCS</h5> <p>Visit the documentation for this release.</p> </a> <a class="entry" href="https://github.com/istio/istio/compare/1.0.1...1.0.2"> <h5>SOURCE CHANGES</h5> <p>Inspect the full set of source code changes.</p> </a> </div> <h2 id="general">General</h2> <ul> <li><p>Fixed bug in Envoy where the sidecar would crash if receiving normal traffic on the mutual TLS port.</p></li> <li><p>Fixed bug with Pilot propagating incomplete updates to Envoy in a multicluster environment.</p></li> <li><p>Added a few more Helm options for Grafana.</p></li> <li><p>Improved Kubernetes service registry queue performance.</p></li> <li><p>Fixed bug where <code>istioctl proxy-status</code> was not showing the patch version.</p></li> <li><p>Added validation of virtual service SNI hosts.</p></li> </ul>Thu, 06 Sep 2018 00:00:00 +0000/v1.9/news/releases/1.0.x/announcing-1.0.2//v1.9/news/releases/1.0.x/announcing-1.0.2/Announcing Istio 1.0.1 <p>We&rsquo;re pleased to announce the availability of Istio 1.0.1. Please see below for what&rsquo;s changed.</p> <div class="relnote-actions call-to-action"> <a class="update-notice entry" data-title='Update Notice' data-downloadhref="https://github.com/istio/istio/releases/tag/1.0.1" data-downloadbuttontext="DOWNLOAD 1.0.1" data-updateadvice='Before you download 1.0.1, you should know that there&#39;s a newer patch release with the latest bug fixes and perf improvements.'
data-updatebutton='LEARN ABOUT ISTIO 1.0.9' data-updatehref="/v1.9/news/releases/1.0.x/announcing-1.0.9/"> <h5>DOWNLOAD</h5> <p>Download and install this release.</p> </a> <a class="entry" href="https://archive.istio.io/v1.0/docs"> <h5>DOCS</h5> <p>Visit the documentation for this release.</p> </a> <a class="entry" href="https://github.com/istio/istio/compare/1.0.0...1.0.1"> <h5>SOURCE CHANGES</h5> <p>Inspect the full set of source code changes.</p> </a> </div> <h2 id="networking">Networking</h2> <ul> <li><p>Improved Pilot scalability and Envoy startup time.</p></li> <li><p>Fixed virtual service host mismatch issue when adding a port.</p></li> <li><p>Added limited support for <a href="/v1.9/docs/ops/best-practices/traffic-management/#split-virtual-services">merging multiple virtual service or destination rule definitions</a> for the same host.</p></li> <li><p>Allow <a href="https://www.envoyproxy.io/docs/envoy/latest/api-v2/api/v2/cluster/outlier_detection.proto">outlier</a> consecutive gateway failures when using HTTP.</p></li> </ul> <h2 id="environment">Environment</h2> <ul> <li><p>Made it possible to use Pilot standalone, for those users who want to only leverage Istio&rsquo;s traffic management functionality.</p></li> <li><p>Introduced the convenient <code>values-istio-gateway.yaml</code> configuration that enables users to run standalone gateways.</p></li> <li><p>Fixed a variety of Helm installation issues, including an issue with the <code>istio-sidecar-injector</code> configmap not being found.</p></li> <li><p>Fixed the Istio installation error with Galley not being ready.</p></li> <li><p>Fixed a variety of issues around mesh expansion.</p></li> </ul> <h2 id="policy-and-telemetry">Policy and telemetry</h2> <ul> <li><p>Added an experimental metrics expiration configuration to the Mixer Prometheus adapter.</p></li> <li><p>Updated Grafana to 5.2.2.</p></li> </ul> <h3 id="adapters">Adapters</h3> <ul> <li>Ability to specify sink options for the Stackdriver 
adapter.</li> </ul> <h2 id="galley">Galley</h2> <ul> <li>Improved configuration validation for health checks.</li> </ul>Wed, 29 Aug 2018 00:00:00 +0000/v1.9/news/releases/1.0.x/announcing-1.0.1//v1.9/news/releases/1.0.x/announcing-1.0.1/Announcing Istio 1.0 <p>Today, we’re excited to announce Istio 1.0! It’s been a little over a year since our initial 0.1 release. Since then, Istio has evolved significantly with the help of a thriving and growing community of contributors and users. We’ve now reached the point where many companies have successfully adopted Istio in production and have gotten real value from the insight and control it provides over their deployments. We’ve helped large enterprises and fast-moving startups like <a href="https://www.ebay.com/">eBay</a>, <a href="https://www.autotrader.co.uk/">Auto Trader UK</a>, <a href="http://www.descarteslabs.com/">Descartes Labs</a>, <a href="https://www.fitstation.com/">HP FitStation</a>, <a href="https://juspay.in">JUSPAY</a>, <a href="https://www.namely.com/">Namely</a>, <a href="https://www.pubnub.com/">PubNub</a> and <a href="https://www.trulia.com/">Trulia</a> use Istio to connect, manage and secure their services from the ground up. Shipping this release as 1.0 is recognition that we’ve built a core set of functionality that our users can rely on for production use.</p> <div class="relnote-actions call-to-action"> <a class="update-notice entry" data-title='Update Notice' data-downloadhref="https://github.com/istio/istio/releases/tag/1.0.0" data-downloadbuttontext="DOWNLOAD 1.0.0" data-updateadvice='Before you download 1.0, you should know that there&#39;s a newer patch release with the latest bug fixes and perf improvements.' 
data-updatebutton='LEARN ABOUT ISTIO 1.0.9' data-updatehref="/v1.9/news/releases/1.0.x/announcing-1.0.9/"> <h5>DOWNLOAD</h5> <p>Download and install this release.</p> </a> <a class="entry" href="https://archive.istio.io/v1.0/docs"> <h5>DOCS</h5> <p>Visit the documentation for this release.</p> </a> </div> <h2 id="ecosystem">Ecosystem</h2> <p>We’ve seen substantial growth in Istio&rsquo;s ecosystem in the last year. <a href="https://www.envoyproxy.io/">Envoy</a> continues its impressive growth and added numerous features that are crucial for a production-quality service mesh. Observability providers like <a href="https://www.datadoghq.com/">Datadog</a>, <a href="https://www.solarwinds.com/">SolarWinds</a>, <a href="https://sysdig.com/blog/monitor-istio/">Sysdig</a>, <a href="https://cloud.google.com/stackdriver/">Google Stackdriver</a> and <a href="https://aws.amazon.com/cloudwatch/">Amazon CloudWatch</a> have written plugins to integrate Istio with their products. <a href="https://www.tigera.io/resources/using-network-policy-concert-istio-2/">Tigera</a>, <a href="https://www.aporeto.com/">Aporeto</a>, <a href="https://cilium.io/">Cilium</a> and <a href="https://styra.com/">Styra</a> built extensions to our policy enforcement and networking capabilities. <a href="https://www.redhat.com/en">Red Hat</a> built <a href="https://www.kiali.io">Kiali</a> to wrap a nice user experience around mesh management and observability. <a href="https://www.cloudfoundry.org/">Cloud Foundry</a> is building on Istio for its next-generation traffic routing stack, the recently announced <a href="https://github.com/knative/docs">Knative</a> serverless project is doing the same, and <a href="https://apigee.com/">Apigee</a> announced that they plan to use it in their API management solution.
These are just some of the integrations the community has added in the last year.</p> <h2 id="features">Features</h2> <p>Since the 0.8 release, we’ve added some important new features and, more importantly, marked many of our existing features as Beta, signaling that they’re ready for production use. Here are some highlights:</p> <ul> <li><p>Multiple Kubernetes clusters can now be <a href="/v1.9/docs/setup/install/multicluster/">added to a single mesh</a>, enabling cross-cluster communication and consistent policy enforcement. Multicluster support is now Beta.</p></li> <li><p>Networking APIs that enable fine-grained control over the flow of traffic through a mesh are now Beta. Explicitly modeling ingress and egress concerns using Gateways allows operators to <a href="/v1.9/blog/2018/v1alpha3-routing/">control the network topology</a> and meet access security requirements at the edge.</p></li> <li><p>Mutual TLS can now be <a href="/v1.9/docs/tasks/security/authentication/mtls-migration">rolled out incrementally</a> without requiring all clients of a service to be updated. This is a critical feature that unblocks adoption in-place by existing production deployments.</p></li> <li><p>Mixer now has support for <a href="https://github.com/istio/istio/wiki/Out-Of-Process-gRPC-Adapter-Dev-Guide">developing out-of-process adapters</a>.
This will become the default way to extend Mixer over the coming releases and makes building adapters much simpler.</p></li> <li><p><a href="/v1.9/docs/concepts/security/#authorization">Authorization policies</a> which control access to services are now entirely evaluated locally in Envoy, increasing their performance and reliability.</p></li> <li><p><a href="https://archive.istio.io/1.0/docs/setup/install/helm/">Helm chart installation</a> is now the recommended install method, offering rich customization options to adopt Istio on your terms.</p></li> <li><p>We’ve put a lot of effort into performance, including continuous regression testing, large scale environment simulation and targeted fixes. We’re very happy with the results and will share more on this in detail in the coming weeks.</p></li> </ul> <h2 id="what-s-next">What’s next?</h2> <p>While this is a significant milestone for the project, there’s lots more to do. In working with adopters, we’ve gotten a lot of great feedback about what to focus on next. We’ve heard consistent themes around support for hybrid-cloud, install modularity, richer networking features and scalability for massive deployments. We’ve already taken some of this feedback into account in the 1.0 release and we’ll continue to aggressively tackle this work in the coming months.</p> <h2 id="getting-started">Getting started</h2> <p>If you’re new to Istio and looking to use it for your deployment, we’d love to hear from you. Take a look at <a href="/v1.9/docs/">our docs</a> or stop by our <a href="https://discuss.istio.io">chat forum</a>. If you’d like to go deeper and <a href="/v1.9/about/community">contribute to the project</a>, come to one of our community meetings and say hello.</p> <h2 id="thanks">Thanks</h2> <p>The Istio team would like to give huge thanks to everyone who has made a contribution to the project. It wouldn’t be where it is today without your help.
The last year has been pretty amazing and we look forward to the next one with excitement about what we can achieve together as a community.</p> <h2 id="release-notes">Release notes</h2> <h3 id="networking">Networking</h3> <ul> <li><p><strong>SNI Routing using Virtual Services</strong>. Newly introduced <code>TLS</code> sections in <a href="/v1.9/docs/reference/config/networking/virtual-service/"><code>VirtualService</code></a> can be used to route TLS traffic based on SNI values. Service ports named as TLS/HTTPS can be used in conjunction with virtual service TLS routes. TLS/HTTPS ports without an accompanying virtual service will be treated as opaque TCP.</p></li> <li><p><strong>Streaming gRPC Restored</strong>. Istio 0.8 caused periodic termination of long-running streaming gRPC connections. This has been fixed in 1.0.</p></li> <li><p><strong>Old (v1alpha1) Networking APIs Removed</strong>. Support for the old <code>v1alpha1</code> traffic management model has been removed.</p></li> <li><p><strong>Istio Ingress Deprecated</strong>. The old Istio ingress is deprecated and disabled by default. We encourage users to use <a href="/v1.9/docs/concepts/traffic-management/#gateways">gateways</a> instead.</p></li> </ul> <h3 id="policy-and-telemetry">Policy and telemetry</h3> <ul> <li><p><strong>Updated Attributes</strong>. The set of <a href="https://istio.io/v1.6/docs/reference/config/policy-and-telemetry/attribute-vocabulary/">attributes</a> used to describe the source and destination of traffic has been completely revamped in order to be more precise and comprehensive.</p></li> <li><p><strong>Policy Check Cache</strong>. Mixer now features a large level 2 cache for policy checks, complementing the level 1 cache present in the sidecar proxy. This further reduces the average latency of externally-enforced policy checks.</p></li> <li><p><strong>Telemetry Buffering</strong>.
Mixer now buffers report calls before dispatching to adapters, which gives an opportunity for adapters to process telemetry data in bigger chunks, reducing overall computational overhead in Mixer and its adapters.</p></li> <li><p><strong>Out of Process Adapters</strong>. Mixer now includes initial support for out-of-process adapters. This will be the recommended approach moving forward for integrating with Mixer. Initial documentation on how to build an out-of-process adapter is provided by the <a href="https://github.com/istio/istio/wiki/Mixer-Out-Of-Process-Adapter-Dev-Guide">Out Of Process Adapter Dev Guide</a> and the <a href="https://github.com/istio/istio/wiki/Mixer-Out-Of-Process-Adapter-Walkthrough">Out Of Process Adapter Walk-through</a>.</p></li> <li><p><strong>Client-Side Telemetry</strong>. It&rsquo;s now possible to collect telemetry from the client of an interaction, in addition to the server-side telemetry.</p></li> </ul> <h4 id="adapters">Adapters</h4> <ul> <li><p><strong>SignalFX</strong>. There is a new <code>signalfx</code> adapter.</p></li> <li><p><strong>Stackdriver</strong>. The <a href="https://istio.io/v1.6/docs/reference/config/policy-and-telemetry/adapters/stackdriver/"><code>stackdriver</code></a> adapter has been substantially enhanced in this release to add new features and improve performance.</p></li> </ul> <h3 id="security">Security</h3> <ul> <li><p><strong>Authorization</strong>. We&rsquo;ve reimplemented our <a href="/v1.9/docs/concepts/security/#authorization">authorization functionality</a>. RPC-level authorization policies can now be implemented without the need for Mixer and Mixer adapters.</p></li> <li><p><strong>Improved Mutual TLS Authentication Control</strong>. It&rsquo;s now easier to <a href="/v1.9/docs/concepts/security/#authentication">control mutual TLS authentication</a> between services. 
We provide &lsquo;PERMISSIVE&rsquo; mode so that you can <a href="/v1.9/docs/tasks/security/authentication/mtls-migration/">incrementally turn on mutual TLS</a> for your services. We removed service annotations and have a <a href="/v1.9/docs/tasks/security/authentication/authn-policy/">unique approach to turn on mutual TLS</a>, coupled with client-side <a href="/v1.9/docs/concepts/traffic-management/#destination-rules">destination rules</a>.</p></li> <li><p><strong>JWT Authentication</strong>. We now support <a href="/v1.9/docs/concepts/security/#authentication">JWT authentication</a> which can be configured using <a href="/v1.9/docs/concepts/security/#authentication-policies">authentication policies</a>.</p></li> </ul> <h3 id="istioctl"><code>istioctl</code></h3> <ul> <li><p>Added the <a href="https://archive.istio.io/v1.0/docs/reference/commands/istioctl/#istioctl-authn-tls-check"><code>istioctl authn tls-check</code></a> command.</p></li> <li><p>Added the <a href="/v1.9/docs/reference/commands/istioctl/#istioctl-proxy-status"><code>istioctl proxy-status</code></a> command.</p></li> <li><p>Added the <code>istioctl experimental convert-ingress</code> command.</p></li> <li><p>Removed the <code>istioctl experimental convert-networking-config</code> command.</p></li> <li><p>Enhancements and bug fixes:</p> <ul> <li><p>Align <code>kubeconfig</code> handling with <code>kubectl</code></p></li> <li><p><code>istioctl get all</code> returns all types of networking and authentication configuration.</p></li> <li><p>Added the <code>--all-namespaces</code> flag to <code>istioctl get</code> to retrieve resources across all namespaces.</p></li> </ul></li> </ul> <h3 id="known-issues-with-1-0">Known issues with 1.0</h3> <ul> <li><p>Amazon&rsquo;s EKS service does not implement automatic sidecar injection. 
Istio can be used in Amazon&rsquo;s EKS by using <a href="/v1.9/docs/setup/additional-setup/sidecar-injection/#manual-sidecar-injection">manual injection</a> for sidecars and turning off Galley using the <a href="https://archive.istio.io/1.0/docs/setup/install/helm">Helm parameter</a> <code>--set galley.enabled=false</code>.</p></li> <li><p>In a <a href="/v1.9/docs/setup/install/multicluster">multicluster deployment</a> the mixer-telemetry and mixer-policy components do not connect to the Kubernetes API endpoints of any of the remote clusters. This results in a loss of telemetry fidelity as some of the metadata associated with workloads on remote clusters is incomplete.</p></li> <li><p>There are Kubernetes manifests available for using Citadel standalone or with Citadel health checking enabled. There is not a Helm implementation of these modes. See <a href="https://github.com/istio/istio/issues/6922">Issue 6922</a> for more details.</p></li> <li><p>Mesh expansion functionality, which lets you add raw VMs to a mesh, is broken in 1.0. We&rsquo;re expecting to produce a patch that fixes this problem within a few days.</p></li> </ul>Tue, 31 Jul 2018 00:00:00 +0000/v1.9/news/releases/1.0.x/announcing-1.0//v1.9/news/releases/1.0.x/announcing-1.0/Announcing Istio 0.8 <p>This is a major release for Istio on the road to 1.0. There are a great many new features and architectural improvements in addition to the usual pile of bug fixes and performance improvements.</p> <div class="relnote-actions call-to-action"> <a class="entry" href="https://github.com/istio/istio/releases/tag/0.8.0"> <h5>DOWNLOAD</h5> <p>Download and install this release.</p> </a> <a class="entry" href="https://archive.istio.io/v0.8/docs"> <h5>DOCS</h5> <p>Visit the documentation for this release.</p> </a> </div> <h2 id="networking">Networking</h2> <ul> <li><p><strong>Revamped Traffic Management Model</strong>.
We&rsquo;re finally ready to take the wraps off our <a href="/v1.9/blog/2018/v1alpha3-routing/">new traffic management APIs</a>. We believe this new model is easier to understand while covering more real-world deployment <a href="/v1.9/docs/tasks/traffic-management/">use-cases</a>. For folks upgrading from earlier releases, there is a <a href="/v1.9/docs/setup/upgrade/">migration guide</a> and a conversion tool built into <code>istioctl</code> to help convert your configuration from the old model.</p></li> <li><p><strong>Streaming Envoy configuration</strong>. By default, Pilot now streams configuration to Envoy using its <a href="https://github.com/envoyproxy/data-plane-api/blob/master/xds_protocol.rst">ADS API</a>. This new approach increases effective scalability, reduces rollout delay and should eliminate spurious 404 errors.</p></li> <li><p><strong>Gateway for Ingress/Egress</strong>. We no longer support combining Kubernetes Ingress specs with Istio routing rules as it has led to several bugs and reliability issues. Istio now supports a platform-independent <a href="/v1.9/docs/concepts/traffic-management/#gateways">Gateway</a> model for ingress &amp; egress proxies that works across Kubernetes and Cloud Foundry and works seamlessly with routing. The Gateway supports <a href="https://en.wikipedia.org/wiki/Server_Name_Indication">Server Name Indication</a> based routing, as well as serving a certificate based on the server name presented by the client.</p></li> <li><p><strong>Constrained Inbound Ports</strong>. We now restrict the inbound ports in a pod to the ones declared by the apps running inside that pod.</p></li> </ul> <h2 id="security">Security</h2> <ul> <li><p><strong>Introducing Citadel</strong>. We&rsquo;ve finally given a name to our security component. What was formerly known as Istio-Auth or Istio-CA is now called Citadel.</p></li> <li><p><strong>Multicluster Support</strong>.
We support per-cluster Citadel in multicluster deployments such that all Citadels share the same root certificate and workloads can authenticate each other across the mesh.</p></li> <li><p><strong>Authentication Policy</strong>. We&rsquo;ve created a unified API for <a href="/v1.9/docs/tasks/security/authentication/authn-policy/">authentication policy</a> that controls whether service-to-service communication uses mutual TLS as well as end user authentication. This is now the recommended way to control these behaviors.</p></li> </ul> <h2 id="telemetry">Telemetry</h2> <ul> <li><strong>Self-Reporting</strong>. Mixer and Pilot now produce telemetry that flows through the normal Istio telemetry pipeline, just like services in the mesh.</li> </ul> <h2 id="setup">Setup</h2> <ul> <li><strong>A la Carte Istio</strong>. Istio has a rich set of features, however you don&rsquo;t need to install or consume them all together. By using Helm or <code>istioctl gen-deploy</code>, users can install only the features they want. For example, users can install Pilot only and enjoy traffic management functionality without dealing with Mixer or Citadel.</li> </ul> <h2 id="mixer-adapters">Mixer adapters</h2> <ul> <li><strong>CloudWatch</strong>. Mixer can now report metrics to AWS CloudWatch. <a href="https://istio.io/v0.8/docs/reference/config/policy-and-telemetry/adapters/cloudwatch/">Learn more</a></li> </ul> <h2 id="known-issues-with-0-8">Known issues with 0.8</h2> <ul> <li><p>A gateway with virtual services pointing to a headless service won&rsquo;t work (<a href="https://github.com/istio/istio/issues/5005">Issue #5005</a>).</p></li> <li><p>There is a <a href="https://github.com/istio/istio/issues/5723">problem with Google Kubernetes Engine 1.10.2</a>. The workaround is to use Kubernetes 1.9 or switch the node image to Ubuntu. 
A fix is expected in GKE 1.10.4.</p></li> <li><p>There is a known namespace issue with the <code>istioctl experimental convert-networking-config</code> tool, where the desired namespace may be changed to the <code>istio-system</code> namespace; please manually adjust it to use the desired namespace after running the conversion tool. <a href="https://github.com/istio/istio/issues/5817">Learn more</a></p></li> </ul>Fri, 01 Jun 2018 00:00:00 +0000/v1.9/news/releases/0.x/announcing-0.8//v1.9/news/releases/0.x/announcing-0.8/Announcing Istio 0.7<p>For this release, we focused on improving our build and test infrastructures and increasing the quality of our tests. As a result, there are no new features for this month.</p> <div class="relnote-actions call-to-action"> <a class="entry" href="https://github.com/istio/istio/releases/tag/0.7.0"> <h5>DOWNLOAD</h5> <p>Download and install this release.</p> </a> <a class="entry" href="https://archive.istio.io/v0.7/docs"> <h5>DOCS</h5> <p>Visit the documentation for this release.</p> </a> </div> <p>Please note that this release includes preliminary support for the new v1alpha3 traffic management functionality. This functionality is still in a great deal of flux and there may be some breaking changes in 0.8.
So if you feel like exploring, please go right ahead, but expect that this may change in 0.8 and beyond.</p> <p>Known Issues:</p> <p>Our <a href="https://archive.istio.io/0.7/docs/setup/install/helm">Helm chart</a> currently requires a workaround to apply the chart correctly; see issue <a href="https://github.com/istio/istio/issues/4701">4701</a> for details.</p>Wed, 28 Mar 2018 00:00:00 +0000/v1.9/news/releases/0.x/announcing-0.7//v1.9/news/releases/0.x/announcing-0.7/Announcing Istio 0.6 <p>In addition to the usual pile of bug fixes and performance improvements, this release includes the new or updated features detailed below.</p> <div class="relnote-actions call-to-action"> <a class="entry" href="https://github.com/istio/istio/releases/tag/0.6.0"> <h5>DOWNLOAD</h5> <p>Download and install this release.</p> </a> <a class="entry" href="https://archive.istio.io/v0.6/docs"> <h5>DOCS</h5> <p>Visit the documentation for this release.</p> </a> </div> <h2 id="networking">Networking</h2> <ul> <li><strong>Custom Envoy Configuration</strong>. Pilot now supports ferrying custom Envoy configuration to the proxy. <a href="https://github.com/mandarjog/istioluawebhook">Learn more</a></li> </ul> <h2 id="mixer-adapters">Mixer adapters</h2> <ul> <li><p><strong>SolarWinds</strong>. Mixer can now interface with AppOptics and Papertrail. <a href="https://istio.io/v0.6/docs/reference/config/policy-and-telemetry/adapters/solarwinds/">Learn more</a></p></li> <li><p><strong>Redis Quota</strong>. Mixer now supports a Redis-based adapter for rate limit tracking. <a href="https://istio.io/v0.6/docs/reference/config/policy-and-telemetry/adapters/redisquota/">Learn more</a></p></li> <li><p><strong>Datadog</strong>. Mixer now provides an adapter to deliver metric data to a Datadog agent.
<a href="https://istio.io/v0.6/docs/reference/config/policy-and-telemetry/adapters/datadog/">Learn more</a></p></li> </ul> <h2 id="other">Other</h2> <ul> <li><p><strong>Separate Check &amp; Report Clusters</strong>. When configuring Envoy, it&rsquo;s now possible to use different clusters for the Mixer instances that serve Mixer&rsquo;s Check functionality and those that serve its Report functionality. This may be useful in large deployments for better scaling of Mixer instances.</p></li> <li><p><strong>Monitoring Dashboards</strong>. There are now preliminary Mixer &amp; Pilot monitoring dashboards in Grafana.</p></li> <li><p><strong>Liveness and Readiness Probes</strong>. Istio components now provide canonical liveness and readiness probe support to help ensure mesh infrastructure health.</p></li> <li><p><strong>Egress Policy and Telemetry</strong>. Istio can monitor traffic to external services defined by <code>EgressRule</code> or External Service. Istio can also apply Mixer policies on this traffic.</p></li> </ul>Thu, 08 Mar 2018 00:00:00 +0000/v1.9/news/releases/0.x/announcing-0.6//v1.9/news/releases/0.x/announcing-0.6/Announcing Istio 0.5 <p>In addition to the usual pile of bug fixes and performance improvements, this release includes the new or updated features detailed below.</p> <div class="relnote-actions call-to-action"> <a class="entry" href="https://github.com/istio/istio/releases/tag/0.5.0"> <h5>DOWNLOAD</h5> <p>Download and install this release.</p> </a> <a class="entry" href="https://archive.istio.io/v0.5/docs"> <h5>DOCS</h5> <p>Visit the documentation for this release.</p> </a> </div> <h2 id="networking">Networking</h2> <ul> <li><p><strong>Incremental Istio Deployment</strong>. (Preview) You can now adopt Istio incrementally, more easily than before, by installing only the components you want (e.g., Pilot+Ingress only as the minimal Istio install).
Refer to the <code>istioctl</code> CLI tool for information on generating customized Istio deployments.</p></li> <li><p><strong>Automatic Proxy Injection</strong>. We leverage Kubernetes 1.9&rsquo;s new <a href="https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.9.md#api-machinery">mutating webhook feature</a> to provide automatic pod-level proxy injection. Automatic injection requires Kubernetes 1.9 or later and therefore doesn&rsquo;t work on older versions. The alpha initializer mechanism is no longer supported. <a href="/v1.9/docs/setup/additional-setup/sidecar-injection/#automatic-sidecar-injection">Learn more</a></p></li> <li><p><strong>Revised Traffic Rules</strong>. Based on user feedback, we have made significant changes to Istio&rsquo;s traffic management (routing rules, destination rules, etc.). We would love your continuing feedback while we polish this in the coming weeks.</p></li> </ul> <h2 id="mixer-adapters">Mixer adapters</h2> <ul> <li><p><strong>Open Policy Agent</strong>. Mixer now has an authorization adapter implementing the <a href="https://www.openpolicyagent.org">open policy agent</a> model, providing a flexible fine-grained access control mechanism. <a href="https://docs.google.com/document/d/1U2XFmah7tYdmC5lWkk3D43VMAAQ0xkBatKmohf90ICA">Learn more</a></p></li> <li><p><strong>Istio RBAC</strong>. Mixer now has a role-based access control adapter. <a href="/v1.9/docs/concepts/security/#authorization">Learn more</a></p></li> <li><p><strong>Fluentd</strong>. Mixer now has an adapter for log collection through <a href="https://www.fluentd.org">Fluentd</a>.</p></li> <li><p><strong>Stdio</strong>. The stdio adapter now lets you log to files with support for log rotation &amp; backup, along with a host of controls.</p></li> </ul> <h2 id="security">Security</h2> <ul> <li><p><strong>Bring Your Own CA</strong>. There have been many enhancements to the &lsquo;bring your own CA&rsquo; feature.
<a href="/v1.9/docs/tasks/security/cert-management/plugin-ca-cert/">Learn more</a></p></li> <li><p><strong>PKCS8</strong>. Added support for PKCS8 keys to the Istio PKI.</p></li> <li><p><strong>Istio RBAC</strong>. Istio RBAC provides access control for services in the Istio mesh. <a href="/v1.9/docs/concepts/security/#authorization">Learn more</a>.</p></li> </ul> <h2 id="other">Other</h2> <ul> <li><p><strong>Release-Mode Binaries</strong>. We switched the release and installation defaults to release-mode binaries for improved performance and security.</p></li> <li><p><strong>Component Logging</strong>. Istio components now offer a rich set of command-line options to control local logging, including common support for log rotation.</p></li> <li><p><strong>Consistent Version Reporting</strong>. Istio components now offer a consistent command-line interface to report their version information.</p></li> <li><p><strong>Optional Instance Fields</strong>. Within configuration, definitions of Mixer instances no longer need to include every field of the associated template. Omitted fields get a zero or empty value.</p></li> </ul> <h2 id="known-issues">Known issues</h2> <ul> <li><p>Installing with Helm charts is currently broken.</p></li> <li><p>Automatic sidecar injection only works with Kubernetes 1.9 or later.</p></li> </ul>Fri, 02 Feb 2018 00:00:00 +0000/v1.9/news/releases/0.x/announcing-0.5//v1.9/news/releases/0.x/announcing-0.5/Announcing Istio 0.4 <p>This release contains only a few weeks&rsquo; worth of changes, as we stabilize our monthly release process.
In addition to the usual pile of bug fixes and performance improvements, this release includes the items below.</p> <div class="relnote-actions call-to-action"> <a class="entry" href="https://github.com/istio/istio/releases/tag/0.4.0"> <h5>DOWNLOAD</h5> <p>Download and install this release.</p> </a> <a class="entry" href="https://archive.istio.io/v0.4/docs"> <h5>DOCS</h5> <p>Visit the documentation for this release.</p> </a> </div> <h2 id="general">General</h2> <ul> <li><p><strong>Cloud Foundry</strong>. Added minimal Pilot support for the <a href="https://www.cloudfoundry.org">Cloud Foundry</a> platform, making it possible for Pilot to discover CF services and service instances.</p></li> <li><p><strong>Circonus</strong>. Mixer now includes an adapter for the <a href="https://www.circonus.com">Circonus</a> analytics and monitoring platform.</p></li> <li><p><strong>Pilot Metrics</strong>. Pilot now collects metrics for diagnostics.</p></li> <li><p><strong>Helm Charts</strong>. We now provide Helm charts to install Istio.</p></li> <li><p><strong>Enhanced Attribute Expressions</strong>. Mixer&rsquo;s expression language gained a few new functions to make it easier to write policy rules. <a href="https://istio.io/v1.6/docs/reference/config/policy-and-telemetry/expression-language/">Learn more</a></p></li> </ul> <p>If you&rsquo;re into the nitty-gritty details, you can see our more detailed low-level release notes <a href="https://github.com/istio/istio/wiki/v0.4.0">here</a>.</p>Mon, 18 Dec 2017 00:00:00 +0000/v1.9/news/releases/0.x/announcing-0.4//v1.9/news/releases/0.x/announcing-0.4/Announcing Istio 0.3 <p>We&rsquo;re pleased to announce the availability of Istio 0.3.
Please see below for what&rsquo;s changed.</p> <div class="relnote-actions call-to-action"> <a class="entry" href="https://github.com/istio/istio/releases/tag/0.3.0"> <h5>DOWNLOAD</h5> <p>Download and install this release.</p> </a> <a class="entry" href="https://archive.istio.io/v0.3/docs"> <h5>DOCS</h5> <p>Visit the documentation for this release.</p> </a> </div> <h2 id="general">General</h2> <p>Starting with 0.3, Istio is switching to a monthly release cadence. We hope this will help accelerate our ability to deliver timely improvements. See <a href="/v1.9/about/feature-stages/">here</a> for information on the state of individual features for this release.</p> <p>This is a fairly modest release in terms of new features as the team put emphasis on internal infrastructure work to improve our velocity. Many bugs and smaller issues have been addressed and overall performance has been improved in a number of areas.</p> <h2 id="security">Security</h2> <ul> <li><p><strong>Secure Control Plane Communication</strong>. Mixer and Pilot are now secured with mutual TLS, just like all services in a mesh.</p></li> <li><p><strong>Selective Authentication</strong>. You can now control authentication on a per-service basis via service annotations, which helps with incremental migration to Istio.</p></li> </ul> <h2 id="networking">Networking</h2> <ul> <li><strong>Egress rules for TCP</strong>. You can now specify egress rules that affect TCP-level traffic.</li> </ul> <h2 id="policy-enforcement-and-telemetry">Policy enforcement and telemetry</h2> <ul> <li><p><strong>Improved Caching</strong>. Caching between Envoy and Mixer has gotten substantially better, resulting in a significant drop in average latency for authorization checks.</p></li> <li><p><strong>Improved list Adapter</strong>. The Mixer &lsquo;list&rsquo; adapter now supports regular expression matching. 
See the adapter&rsquo;s <a href="https://istio.io/v0.3/docs/reference/config/policy-and-telemetry/adapters/list/">configuration options</a> for details.</p></li> <li><p><strong>Configuration Validation</strong>. Mixer does more extensive validation of configuration state in order to catch problems earlier. We expect to invest more in this area in coming releases.</p></li> </ul> <p>If you&rsquo;re into the nitty-gritty details, you can see our more detailed low-level release notes <a href="https://github.com/istio/istio/wiki/v0.3.0">here</a>.</p>Wed, 29 Nov 2017 00:00:00 +0000/v1.9/news/releases/0.x/announcing-0.3//v1.9/news/releases/0.x/announcing-0.3/Announcing Istio 0.2 <p>We launched Istio, an open platform to connect, manage, monitor, and secure microservices, on May 24, 2017. We have been humbled by the incredible interest and rapid community growth of developers, operators, and partners. Our 0.1 release was focused on showing all the concepts of Istio in Kubernetes.</p> <p>Today we are happy to announce the 0.2 release, which improves stability and performance, allows for cluster-wide deployment and automated injection of sidecars in Kubernetes, adds policy and authentication for TCP services, and enables expansion of the mesh to include services deployed in virtual machines. In addition, Istio can now run outside Kubernetes, leveraging Consul/Nomad or Eureka.
Beyond core features, Istio is now ready for extensions to be written by third-party companies and developers.</p> <h2 id="highlights-for-the-0-2-release">Highlights for the 0.2 release</h2> <h3 id="usability-improvements">Usability improvements</h3> <ul> <li><p><em>Multiple namespace support</em>: Istio now works cluster-wide, across multiple namespaces, and this was one of the top requests from the community following the 0.1 release.</p></li> <li><p><em>Policy and security for TCP services</em>: In addition to HTTP, we have added transparent mutual TLS authentication and policy enforcement for TCP services as well. This will allow you to secure more of your Kubernetes deployment, and get Istio features like telemetry, policy and security.</p></li> <li><p><em>Automated sidecar injection</em>: By leveraging the alpha <a href="https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/">initializer</a> feature provided by Kubernetes 1.7, Envoy sidecars can now be automatically injected into application deployments when your cluster has the initializer enabled. This enables you to deploy microservices using <code>kubectl</code>, the exact same command that you normally use for deploying the microservices without Istio.</p></li> <li><p><em>Extending Istio</em>: An improved Mixer design lets vendors write Mixer adapters to implement support for their own systems, such as application management or policy enforcement. The <a href="https://github.com/istio/istio/wiki/Mixer-Compiled-In-Adapter-Dev-Guide">Mixer Adapter Developer&rsquo;s Guide</a> can help you easily integrate your solution with Istio.</p></li> <li><p><em>Bring your own CA certificates</em>: Allows users to provide their own key and certificate for the Istio CA and persistent CA key/certificate storage.
Enables storing signing key/certificates in persistent storage to facilitate CA restarts.</p></li> <li><p><em>Improved routing &amp; metrics</em>: Support for WebSocket, MongoDB and Redis protocols. You can apply resilience features like circuit breakers on traffic to third-party services. In addition to Mixer’s metrics, hundreds of metrics from Envoy are now visible inside Prometheus for all traffic entering, leaving, and within the Istio mesh.</p></li> </ul> <h3 id="cross-environment-support">Cross environment support</h3> <ul> <li><p><em>Mesh expansion</em>: The Istio mesh can now span services running outside of Kubernetes, like those running in virtual machines, while enjoying benefits such as automatic mutual TLS authentication, traffic management, telemetry, and policy enforcement across the mesh.</p></li> <li><p><em>Running outside Kubernetes</em>: We know many customers use other service registry and orchestration solutions like Consul/Nomad and Eureka. Istio Pilot can now run standalone outside Kubernetes, consuming information from these systems, and manage the Envoy fleet in VMs or containers.</p></li> </ul> <h2 id="get-involved-in-shaping-the-future-of-istio">Get involved in shaping the future of Istio</h2> <p>We have a growing <a href="/v1.9/about/feature-stages/">roadmap</a> ahead of us, full of great features to implement.
Our focus for the next release will be on stability, reliability, integration with third-party tools, and multicluster use cases.</p> <p>To learn how to get involved and contribute to Istio&rsquo;s future, check out our <a href="https://github.com/istio/community">community</a> GitHub repository, which will introduce you to our working groups, our mailing lists, our various community meetings, our general procedures and our guidelines.</p> <p>We want to thank our fantastic community for field testing new versions, filing bug reports, contributing code, helping out other community members, and shaping Istio by participating in countless productive discussions. This has enabled the project to accrue 3000 stars on GitHub since launch and hundreds of active community members on Istio mailing lists.</p> <p>Thank you</p> <h2 id="release-notes">Release notes</h2> <h3 id="general">General</h3> <ul> <li><p><strong>Updated Configuration Model</strong>. Istio now uses the Kubernetes <a href="https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/">Custom Resource</a> model to describe and store its configuration. When running in Kubernetes, configuration can now be optionally managed using the <code>kubectl</code> command.</p></li> <li><p><strong>Multiple Namespace Support</strong>. Istio control plane components are now in the dedicated <code>istio-system</code> namespace. Istio can manage services in other non-system namespaces.</p></li> <li><p><strong>Mesh Expansion</strong>. Initial support for adding non-Kubernetes services (in the form of VMs and/or physical machines) to a mesh. This is an early version of this feature and has some limitations (such as requiring a flat network across containers and VMs).</p></li> <li><p><strong>Multi-Environment Support</strong>. Initial support for using Istio in conjunction with other service registries including Consul and Eureka.</p></li> <li><p><strong>Automatic injection of sidecars</strong>.
Istio sidecar can automatically be injected into a pod upon deployment using the <a href="https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/">Initializers</a> alpha feature in Kubernetes.</p></li> </ul> <h3 id="performance-and-quality">Performance and quality</h3> <p>There have been many performance and reliability improvements throughout the system. We don’t consider Istio 0.2 ready for production yet, but we’ve made excellent progress in that direction. Here are a few items of note:</p> <ul> <li><p><strong>Caching Client</strong>. The Mixer client library used by Envoy now provides caching for Check calls and batching for Report calls, considerably reducing end-to-end overhead.</p></li> <li><p><strong>Avoid Hot Restarts</strong>. The need to hot-restart Envoy has been mostly eliminated through effective use of LDS/RDS/CDS/EDS.</p></li> <li><p><strong>Reduced Memory Use</strong>. Significantly reduced the size of the sidecar helper agent, from 50Mb to 7Mb.</p></li> <li><p><strong>Improved Mixer Latency</strong>. Mixer now clearly delineates configuration-time vs. request-time computations, which avoids doing extra setup work at request-time for initial requests and thus delivers a smoother average latency. Better resource caching also contributes to better end-to-end performance.</p></li> <li><p><strong>Reduced Latency for Egress Traffic</strong>. We now forward traffic to external services directly from the sidecar.</p></li> </ul> <h3 id="traffic-management">Traffic management</h3> <ul> <li><p><strong>Egress Rules</strong>. It’s now possible to specify routing rules for egress traffic.</p></li> <li><p><strong>New Protocols</strong>. Mesh-wide support for WebSocket connections, MongoDB proxying, and Kubernetes <a href="https://kubernetes.io/docs/concepts/services-networking/service/#headless-services">headless services</a>.</p></li> <li><p><strong>Other Improvements</strong>. 
Ingress properly supports gRPC services, better support for health checks, and Jaeger tracing.</p></li> </ul> <h3 id="policy-enforcement-telemetry">Policy enforcement &amp; telemetry</h3> <ul> <li><p><strong>Ingress Policies</strong>. In addition to the east-west traffic supported in 0.1, policies can now be applied to north-south traffic.</p></li> <li><p><strong>Support for TCP Services</strong>. In addition to the HTTP-level policy controls available in 0.1, 0.2 introduces policy controls for TCP services.</p></li> <li><p><strong>New Mixer API</strong>. The API that Envoy uses to interact with Mixer has been completely redesigned for increased robustness, flexibility, and to support rich proxy-side caching and batching for increased performance.</p></li> <li><p><strong>New Mixer Adapter Model</strong>. A new adapter composition model makes it easier to extend Mixer by adding whole new classes of adapters via templates. This new model will serve as the foundational building block for many features in the future. See the <a href="https://github.com/istio/istio/wiki/Mixer-Compiled-In-Adapter-Dev-Guide">Adapter Developer&rsquo;s Guide</a> to learn how to write adapters.</p></li> <li><p><strong>Improved Mixer Build Model</strong>. It’s now easier to build a Mixer binary that includes custom adapters.</p></li> <li><p><strong>Mixer Adapter Updates</strong>. The built-in adapters have all been rewritten to fit into the new adapter model. The <code>stackdriver</code> adapter has been added for this release. The experimental <code>redisquota</code> adapter has been removed in the 0.2 release, but is expected to come back in production quality for the 0.3 release.</p></li> <li><p><strong>Mixer Call Tracing</strong>. Calls between Envoy and Mixer can now be traced and analyzed in the Zipkin dashboard.</p></li> </ul> <h3 id="security">Security</h3> <ul> <li><p><strong>Mutual TLS for TCP Traffic</strong>.
In addition to HTTP traffic, mutual TLS is now supported for TCP traffic as well.</p></li> <li><p><strong>Identity Provisioning for VMs and Physical Machines</strong>. Auth supports a new mechanism using a per-node agent for identity provisioning. This agent runs on each node (VM / physical machine) and is responsible for generating and sending out the CSR (Certificate Signing Request) to get certificates from Istio CA.</p></li> <li><p><strong>Bring Your Own CA Certificates</strong>. Allows users to provide their own key and certificate for Istio CA.</p></li> <li><p><strong>Persistent CA Key/Certificate Storage</strong>. Istio CA now stores signing key/certificates in persistent storage to facilitate CA restarts.</p></li> </ul> <h2 id="known-issues">Known issues</h2> <ul> <li><p><strong>User may get periodical 404 when accessing the application</strong>: We have noticed that Envoy doesn&rsquo;t get routes properly occasionally thus a 404 is returned to the user. We are actively working on this <a href="https://github.com/istio/istio/issues/1038">issue</a>.</p></li> <li><p><strong>Istio Ingress or Egress reports ready before Pilot is actually ready</strong>: You can check the <code>istio-ingress</code> and <code>istio-egress</code> pods status in the <code>istio-system</code> namespace and wait a few seconds after all the Istio pods reach ready status. We are actively working on this <a href="https://github.com/istio/istio/pull/1055">issue</a>.</p></li> <li><p><strong>A service with Istio Auth enabled can&rsquo;t communicate with a service without Istio</strong>: This limitation will be removed in the near future.</p></li> </ul>Tue, 10 Oct 2017 00:00:00 +0000/v1.9/news/releases/0.x/announcing-0.2//v1.9/news/releases/0.x/announcing-0.2/Introducing Istio <p>Google, IBM, and Lyft are proud to announce the first public release of <a href="/v1.9/">Istio</a>: an open source project that provides a uniform way to connect, secure, manage and monitor microservices. 
Our current release is targeted at the <a href="https://kubernetes.io/">Kubernetes</a> environment; we intend to add support for other environments such as virtual machines and Cloud Foundry in the coming months. Istio adds traffic management to microservices and creates a basis for value-add capabilities like security, monitoring, routing, connectivity management and policy. The software is built using the battle-tested <a href="https://envoyproxy.github.io/envoy/">Envoy</a> proxy from Lyft, and gives visibility and control over traffic <em>without requiring any changes to application code</em>. Istio gives CIOs a powerful tool to enforce security, policy and compliance requirements across the enterprise.</p> <h2 id="background">Background</h2> <p>Writing reliable, loosely coupled, production-grade applications based on microservices can be challenging. As monolithic applications are decomposed into microservices, software teams have to worry about the challenges inherent in integrating services in distributed systems: they must account for service discovery, load balancing, fault tolerance, end-to-end monitoring, dynamic routing for feature experimentation, and perhaps most important of all, compliance and security.</p> <p>Inconsistent attempts at solving these challenges, cobbled together from libraries, scripts and Stack Overflow snippets, lead to solutions that vary wildly across languages and runtimes, have poor observability characteristics and can often end up compromising security.</p> <p>One solution is to standardize implementations on a common RPC library like <a href="https://grpc.io">gRPC</a>, but this can be costly for organizations to adopt wholesale and leaves out brownfield applications, which may be practically impossible to change.
Operators need a flexible toolkit to make their microservices secure, compliant, trackable and highly available, and developers need the ability to experiment with different features in production, or deploy canary releases, without impacting the system as a whole.</p> <h2 id="solution-service-mesh">Solution: service mesh</h2> <p>Imagine if we could transparently inject a layer of infrastructure between a service and the network that gives operators the controls they need while freeing developers from having to bake solutions to distributed system problems into their code. This uniform layer of infrastructure combined with service deployments is commonly referred to as a <strong><em>service mesh</em></strong>. Just as microservices help to decouple feature teams from each other, a service mesh helps to decouple operators from application feature development and release processes. Istio turns disparate microservices into an integrated service mesh by systematically injecting a proxy into the network paths among them.</p> <p>Google, IBM and Lyft joined forces to create Istio from a desire to provide a reliable substrate for microservice development and maintenance, based on our common experiences building and operating massive-scale microservices for internal and enterprise customers. Google and IBM have extensive experience with these large-scale microservices in their own applications and with their enterprise customers in sensitive/regulated environments, while Lyft developed Envoy to address their internal operability challenges.
<a href="https://eng.lyft.com/announcing-envoy-c-l7-proxy-and-communication-bus-92520b6c8191">Lyft open sourced Envoy</a> after successfully using it in production for over a year to manage more than 100 services spanning 10,000 VMs, processing 2M requests/second.</p> <h2 id="benefits-of-istio">Benefits of Istio</h2> <p><strong>Fleet-wide Visibility</strong>: Failures happen, and operators need tools to stay on top of the health of clusters and their graphs of microservices. Istio produces detailed monitoring data about application and network behaviors that is rendered using <a href="https://prometheus.io/">Prometheus</a> &amp; <a href="https://github.com/grafana/grafana">Grafana</a>, and can be easily extended to send metrics and logs to any collection, aggregation and querying system. Istio enables analysis of performance hotspots and diagnosis of distributed failure modes with <a href="https://github.com/openzipkin/zipkin">Zipkin</a> tracing.</p> <figure style="width:100%"> <div class="wrapper-with-intrinsic-ratio" style="padding-bottom:55.425531914893625%"> <a data-skipendnotes="true" href="/v1.9/news/releases/0.x/announcing-0.1/istio_grafana_dashboard-new.png" title="Grafana Dashboard with Response Size"> <img class="element-to-stretch" src="/v1.9/news/releases/0.x/announcing-0.1/istio_grafana_dashboard-new.png" alt="Grafana Dashboard with Response Size" /> </a> </div> <figcaption>Grafana Dashboard with Response Size</figcaption> </figure> <figure style="width:100%"> <div class="wrapper-with-intrinsic-ratio" style="padding-bottom:29.912663755458514%"> <a data-skipendnotes="true" href="/v1.9/news/releases/0.x/announcing-0.1/istio_zipkin_dashboard.png" title="Zipkin Dashboard"> <img class="element-to-stretch" src="/v1.9/news/releases/0.x/announcing-0.1/istio_zipkin_dashboard.png" alt="Zipkin Dashboard" /> </a> </div> <figcaption>Zipkin Dashboard</figcaption> </figure> <p><strong>Resiliency and efficiency</strong>: When developing microservices, operators need 
to assume that the network will be unreliable. Operators can use retries, load balancing, flow-control (HTTP/2), and circuit-breaking to compensate for some of the common failure modes due to an unreliable network. Istio provides a uniform approach to configuring these features, making it easier to operate a highly resilient service mesh.</p> <p><strong>Developer productivity</strong>: Istio provides a significant boost to developer productivity by letting them focus on building service features in their language of choice, while Istio handles resiliency and networking challenges in a uniform way. Developers are freed from having to bake solutions to distributed systems problems into their code. Istio further improves productivity by providing common functionality supporting A/B testing, canarying, and fault injection.</p> <p><strong>Policy Driven Ops</strong>: Istio empowers teams with different areas of concern to operate independently. It decouples cluster operators from the feature development cycle, allowing improvements to security, monitoring, scaling, and service topology to be rolled out <em>without</em> code changes. Operators can route a precise subset of production traffic to qualify a new service release. They can inject failures or delays into traffic to test the resilience of the service mesh, and set up rate limits to prevent services from being overloaded. Istio can also be used to enforce compliance rules, defining ACLs between services to allow only authorized services to talk to each other.</p> <p><strong>Secure by default</strong>: It is a common fallacy of distributed computing that the network is secure. Istio enables operators to authenticate and secure all communication between services using a mutual TLS connection, without burdening the developer or the operator with cumbersome certificate management tasks. 
Our security framework is aligned with the emerging <a href="https://spiffe.io/">SPIFFE</a> specification, and is based on similar systems that have been tested extensively inside Google.</p> <p><strong>Incremental Adoption</strong>: We designed Istio to be completely transparent to the services running in the mesh, allowing teams to incrementally adopt features of Istio over time. Adopters can start by enabling fleet-wide visibility and, once they’re comfortable with Istio in their environment, switch on other features as needed.</p> <h2 id="join-us-in-this-journey">Join us in this journey</h2> <p>Istio is a completely open development project. Today we are releasing version 0.1, which works in a Kubernetes cluster, and we plan to have major new releases every 3 months, including support for additional environments. Our goal is to enable developers and operators to roll out and operate microservices with agility, complete visibility of the underlying network, and uniform control and security in all environments. 
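</p> <p>As a sketch of the fine-grained traffic routing described above, a canary rollout might shift a small slice of traffic to a new version. This example uses the <code>VirtualService</code> API from later Istio releases rather than the v1alpha1 route rules that shipped with 0.1, and a hypothetical <code>reviews</code> service:</p> <pre><code class='language-yaml'># Illustrative only: weighted canary routing for a hypothetical "reviews" service.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: 95
    - destination:
        host: reviews
        subset: v2    # canary: 5% of production traffic
      weight: 5
</code></pre> <p>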
We look forward to working with the Istio community and our partners towards these goals, following our <a href="/v1.9/about/feature-stages/">roadmap</a>.</p> <p>Visit <a href="https://github.com/istio/istio/releases">here</a> to get the latest released bits.</p> <p>View the <a href="/v1.9/talks/istio_talk_gluecon_2017.pdf">presentation</a> from GlueCon 2017, where Istio was unveiled.</p> <h2 id="community">Community</h2> <p>We are excited to see early commitment to support the project from many companies in the community: <a href="https://blog.openshift.com/red-hat-istio-launch/">Red Hat</a> with Red Hat OpenShift and OpenShift Application Runtimes, Pivotal with <a href="https://content.pivotal.io/blog/pivotal-and-istio-advancing-the-ecosystem-for-microservices-in-the-enterprise">Pivotal Cloud Foundry</a>, WeaveWorks with <a href="https://www.weave.works/blog/istio-weave-cloud/">Weave Cloud</a> and Weave Net 2.0, <a href="https://www.projectcalico.org/welcoming-istio-to-the-kubernetes-networking-community">Tigera</a> with the Project Calico Network Policy Engine and <a href="https://www.datawire.io/istio-and-datawire-ecosystem/">Datawire</a> with the Ambassador project. 
We hope to see many more companies join us in this journey.</p> <p>To get involved, connect with us via any of these channels:</p> <ul> <li><p><a href="https://istio.io/">istio.io</a> for documentation and examples.</p></li> <li><p>The <a href="https://discuss.istio.io">Istio discussion board</a> for general discussions</p></li> <li><p><a href="https://stackoverflow.com/questions/tagged/istio">Stack Overflow</a> for curated questions and answers</p></li> <li><p><a href="https://github.com/istio/istio/issues">GitHub</a> for filing issues</p></li> <li><p><a href="https://twitter.com/IstioMesh">@IstioMesh</a> on Twitter</p></li> </ul> <p>From everyone working on Istio, welcome aboard!</p> <h2 id="release-notes">Release notes</h2> <ul> <li>Installation of Istio into a Kubernetes namespace with a single command.</li> <li>Semi-automated injection of Envoy proxies into Kubernetes pods.</li> <li>Automatic traffic capture for Kubernetes pods using iptables.</li> <li>In-cluster load balancing for HTTP, gRPC, and TCP traffic.</li> <li>Support for timeouts, retries with budgets, and circuit breakers.</li> <li>Istio-integrated Kubernetes Ingress support (Istio acts as an Ingress Controller).</li> <li>Fine-grained traffic routing controls, including A/B testing, canarying, red/black deployments.</li> <li>Flexible in-memory rate limiting.</li> <li>L7 telemetry and logging for HTTP and gRPC using Prometheus.</li> <li>Grafana dashboards showing per-service L7 metrics.</li> <li>Request tracing through Envoy with Zipkin.</li> <li>Service-to-service authentication using mutual TLS.</li> <li>Simple service-to-service authorization using deny expressions.</li> </ul>Wed, 24 May 2017 00:00:00 +0000/v1.9/news/releases/0.x/announcing-0.1//v1.9/news/releases/0.x/announcing-0.1/Helm Changes <p>The tables below show changes made to the installation options used to customize an Istio install using Helm between Istio 1.2 and Istio 1.3. 
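</p> <p>Any default listed in the tables below can be overridden when rendering the charts. For example, to pin the egress gateway memory limit to its pre-1.3 value, a sketch assuming the Helm 2 CLI and the chart path shipped in the Istio release archive:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ helm template install/kubernetes/helm/istio \
    --name istio --namespace istio-system \
    --set gateways.istio-egressgateway.resources.limits.memory=256Mi
</code></pre> <p>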
The tables are grouped into three different categories:</p> <ul> <li>The installation options already in the previous release but whose values have been modified in the new release.</li> <li>The new installation options added in the new release.</li> <li>The installation options removed from the new release.</li> </ul> <!-- Run python scripts/tablegen.py to generate this table --> <!-- AUTO-GENERATED-START --> <h2 id="modified-configuration-options">Modified configuration options</h2> <h3 id="modified-kiali-key-value-pairs">Modified <code>kiali</code> key/value pairs</h3> <table> <thead> <tr> <th>Key</th> <th>Old Default Value</th> <th>New Default Value</th> <th>Old Description</th> <th>New Description</th> </tr> </thead> <tbody> <tr> <td><code>kiali.tag</code></td> <td><code>v0.20</code></td> <td><code>v1.1.0</code></td> <td></td> <td></td> </tr> </tbody> </table> <h3 id="modified-global-key-value-pairs">Modified <code>global</code> key/value pairs</h3> <table> <thead> <tr> <th>Key</th> <th>Old Default Value</th> <th>New Default Value</th> <th>Old Description</th> <th>New Description</th> </tr> </thead> <tbody> <tr> <td><code>global.tag</code></td> <td><code>1.2.0-rc.3</code></td> <td><code>release-1.3-latest-daily</code></td> <td><code>Default tag for Istio images.</code></td> <td><code>Default tag for Istio images.</code></td> </tr> </tbody> </table> <h3 id="modified-gateways-key-value-pairs">Modified <code>gateways</code> key/value pairs</h3> <table> <thead> <tr> <th>Key</th> <th>Old Default Value</th> <th>New Default Value</th> <th>Old Description</th> <th>New Description</th> </tr> </thead> <tbody> <tr> <td><code>gateways.istio-egressgateway.resources.limits.memory</code></td> <td><code>256Mi</code></td> <td><code>1024Mi</code></td> <td></td> <td></td> </tr> </tbody> </table> <h3 id="modified-tracing-key-value-pairs">Modified <code>tracing</code> key/value pairs</h3> <table> <thead> <tr> <th>Key</th> <th>Old Default Value</th> <th>New Default Value</th> 
<th>Old Description</th> <th>New Description</th> </tr> </thead> <tbody> <tr> <td><code>tracing.jaeger.tag</code></td> <td><code>1.9</code></td> <td><code>1.12</code></td> <td></td> <td></td> </tr> <tr> <td><code>tracing.zipkin.tag</code></td> <td><code>2</code></td> <td><code>2.14.2</code></td> <td></td> <td></td> </tr> </tbody> </table> <h2 id="new-configuration-options">New configuration options</h2> <h3 id="new-tracing-key-value-pairs">New <code>tracing</code> key/value pairs</h3> <table> <thead> <tr> <th>Key</th> <th>Default Value</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td><code>tracing.tolerations</code></td> <td><code>[]</code></td> <td></td> </tr> <tr> <td><code>tracing.jaeger.image</code></td> <td><code>all-in-one</code></td> <td></td> </tr> <tr> <td><code>tracing.jaeger.spanStorageType</code></td> <td><code>badger</code></td> <td><code>spanStorageType value can be &quot;memory&quot; and &quot;badger&quot; for all-in-one image</code></td> </tr> <tr> <td><code>tracing.jaeger.persist</code></td> <td><code>false</code></td> <td></td> </tr> <tr> <td><code>tracing.jaeger.storageClassName</code></td> <td><code>&quot;&quot;</code></td> <td></td> </tr> <tr> <td><code>tracing.jaeger.accessMode</code></td> <td><code>ReadWriteMany</code></td> <td></td> </tr> <tr> <td><code>tracing.zipkin.image</code></td> <td><code>zipkin</code></td> <td></td> </tr> </tbody> </table> <h3 id="new-sidecarinjectorwebhook-key-value-pairs">New <code>sidecarInjectorWebhook</code> key/value pairs</h3> <table> <thead> <tr> <th>Key</th> <th>Default Value</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td><code>sidecarInjectorWebhook.rollingMaxSurge</code></td> <td><code>100%</code></td> <td></td> </tr> <tr> <td><code>sidecarInjectorWebhook.rollingMaxUnavailable</code></td> <td><code>25%</code></td> <td></td> </tr> <tr> <td><code>sidecarInjectorWebhook.tolerations</code></td> <td><code>[]</code></td> <td></td> </tr> </tbody> </table> <h3 
id="new-global-key-value-pairs">New <code>global</code> key/value pairs</h3> <table> <thead> <tr> <th>Key</th> <th>Default Value</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td><code>global.proxy.init.resources.limits.cpu</code></td> <td><code>100m</code></td> <td></td> </tr> <tr> <td><code>global.proxy.init.resources.limits.memory</code></td> <td><code>50Mi</code></td> <td></td> </tr> <tr> <td><code>global.proxy.init.resources.requests.cpu</code></td> <td><code>10m</code></td> <td></td> </tr> <tr> <td><code>global.proxy.init.resources.requests.memory</code></td> <td><code>10Mi</code></td> <td></td> </tr> <tr> <td><code>global.proxy.envoyAccessLogService.enabled</code></td> <td><code>false</code></td> <td></td> </tr> <tr> <td><code>global.proxy.envoyAccessLogService.host</code></td> <td>``</td> <td><code>example: accesslog-service.istio-system</code></td> </tr> <tr> <td><code>global.proxy.envoyAccessLogService.port</code></td> <td>``</td> <td><code>example: 15000</code></td> </tr> <tr> <td><code>global.proxy.envoyAccessLogService.tlsSettings.mode</code></td> <td><code>DISABLE</code></td> <td><code>DISABLE, SIMPLE, MUTUAL, ISTIO_MUTUAL</code></td> </tr> <tr> <td><code>global.proxy.envoyAccessLogService.tlsSettings.clientCertificate</code></td> <td>``</td> <td><code>example: /etc/istio/als/cert-chain.pem</code></td> </tr> <tr> <td><code>global.proxy.envoyAccessLogService.tlsSettings.privateKey</code></td> <td>``</td> <td><code>example: /etc/istio/als/key.pem</code></td> </tr> <tr> <td><code>global.proxy.envoyAccessLogService.tlsSettings.caCertificates</code></td> <td>``</td> <td><code>example: /etc/istio/als/root-cert.pem</code></td> </tr> <tr> <td><code>global.proxy.envoyAccessLogService.tlsSettings.sni</code></td> <td>``</td> <td><code>example: als.somedomain</code></td> </tr> <tr> <td><code>global.proxy.envoyAccessLogService.tlsSettings.subjectAltNames</code></td> <td><code>[]</code></td> <td></td> </tr> <tr> 
<td><code>global.proxy.envoyAccessLogService.tcpKeepalive.probes</code></td> <td><code>3</code></td> <td></td> </tr> <tr> <td><code>global.proxy.envoyAccessLogService.tcpKeepalive.time</code></td> <td><code>10s</code></td> <td></td> </tr> <tr> <td><code>global.proxy.envoyAccessLogService.tcpKeepalive.interval</code></td> <td><code>10s</code></td> <td></td> </tr> <tr> <td><code>global.proxy.protocolDetectionTimeout</code></td> <td><code>10ms</code></td> <td><code>Automatic protocol detection uses a set of heuristics to determine whether the connection is using TLS or not (on the server side), as well as the application protocol being used (e.g., http vs tcp). These heuristics rely on the client sending the first bits of data. For server first protocols like MySQL, MongoDB, etc., Envoy will timeout on the protocol detection after the specified period, defaulting to non mTLS plain TCP traffic. Set this field to tweak the period that Envoy will wait for the client to send the first bits of data. (MUST BE &gt;=1ms)</code></td> </tr> <tr> <td><code>global.proxy.enableCoreDumpImage</code></td> <td><code>ubuntu:xenial</code></td> <td><code>Image used to enable core dumps. This is only used, when &quot;enableCoreDump&quot; is set to true.</code></td> </tr> <tr> <td><code>global.defaultTolerations</code></td> <td><code>[]</code></td> <td><code>Default node tolerations to be applied to all deployments so that all pods can be scheduled to a particular nodes with matching taints. Each component can overwrite these default values by adding its tolerations block in the relevant section below and setting the desired values. Configure this field in case that all pods of Istio control plane are expected to be scheduled to particular nodes with specified taints.</code></td> </tr> <tr> <td><code>global.meshID</code></td> <td><code>&quot;&quot;</code></td> <td><code>Mesh ID means Mesh Identifier. 
It should be unique within the scope where meshes will interact with each other, but it is not required to be globally/universally unique. For example, if any of the following are true, then two meshes must have different Mesh IDs: - Meshes will have their telemetry aggregated in one place - Meshes will be federated together - Policy will be written referencing one mesh from the other If an administrator expects that any of these conditions may become true in the future, they should ensure their meshes have different Mesh IDs assigned. Within a multicluster mesh, each cluster must be (manually or auto) configured to have the same Mesh ID value. If an existing cluster 'joins' a multicluster mesh, it will need to be migrated to the new mesh ID. Details of migration TBD, and it may be a disruptive operation to change the Mesh ID post-install. If the mesh admin does not specify a value, Istio will use the value of the mesh's Trust Domain. The best practice is to select a proper Trust Domain value.</code></td> </tr> <tr> <td><code>global.localityLbSetting.enabled</code></td> <td><code>true</code></td> <td></td> </tr> </tbody> </table> <h3 id="new-galley-key-value-pairs">New <code>galley</code> key/value pairs</h3> <table> <thead> <tr> <th>Key</th> <th>Default Value</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td><code>galley.rollingMaxSurge</code></td> <td><code>100%</code></td> <td></td> </tr> <tr> <td><code>galley.rollingMaxUnavailable</code></td> <td><code>25%</code></td> <td></td> </tr> </tbody> </table> <h3 id="new-mixer-key-value-pairs">New <code>mixer</code> key/value pairs</h3> <table> <thead> <tr> <th>Key</th> <th>Default Value</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td><code>mixer.policy.rollingMaxSurge</code></td> <td><code>100%</code></td> <td></td> </tr> <tr> <td><code>mixer.policy.rollingMaxUnavailable</code></td> <td><code>25%</code></td> <td></td> </tr> <tr> <td><code>mixer.telemetry.rollingMaxSurge</code></td> 
<td><code>100%</code></td> <td></td> </tr> <tr> <td><code>mixer.telemetry.rollingMaxUnavailable</code></td> <td><code>25%</code></td> <td></td> </tr> <tr> <td><code>mixer.telemetry.reportBatchMaxEntries</code></td> <td><code>100</code></td> <td><code>Set reportBatchMaxEntries to 0 to use the default batching behavior (i.e., every 100 requests). A positive value indicates the number of requests that are batched before telemetry data is sent to the mixer server</code></td> </tr> <tr> <td><code>mixer.telemetry.reportBatchMaxTime</code></td> <td><code>1s</code></td> <td><code>Set reportBatchMaxTime to 0 to use the default batching behavior (i.e., every 1 second). A positive time value indicates the maximum wait time since the last request before batched telemetry data is sent to the mixer server</code></td> </tr> </tbody> </table> <h3 id="new-grafana-key-value-pairs">New <code>grafana</code> key/value pairs</h3> <table> <thead> <tr> <th>Key</th> <th>Default Value</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td><code>grafana.env</code></td> <td><code>{}</code></td> <td></td> </tr> <tr> <td><code>grafana.envSecrets</code></td> <td><code>{}</code></td> <td></td> </tr> <tr> <td><code>grafana.datasources.datasources.datasources.type.orgId</code></td> <td><code>1</code></td> <td></td> </tr> <tr> <td><code>grafana.datasources.datasources.datasources.type.url</code></td> <td><code>http://prometheus:9090</code></td> <td></td> </tr> <tr> <td><code>grafana.datasources.datasources.datasources.type.access</code></td> <td><code>proxy</code></td> <td></td> </tr> <tr> <td><code>grafana.datasources.datasources.datasources.type.isDefault</code></td> <td><code>true</code></td> <td></td> </tr> <tr> <td><code>grafana.datasources.datasources.datasources.type.jsonData.timeInterval</code></td> <td><code>5s</code></td> <td></td> </tr> <tr> <td><code>grafana.datasources.datasources.datasources.type.editable</code></td> <td><code>true</code></td> <td></td> </tr> <tr> 
<td><code>grafana.dashboardProviders.dashboardproviders.providers.orgId.folder</code></td> <td><code>'istio'</code></td> <td></td> </tr> <tr> <td><code>grafana.dashboardProviders.dashboardproviders.providers.orgId.type</code></td> <td><code>file</code></td> <td></td> </tr> <tr> <td><code>grafana.dashboardProviders.dashboardproviders.providers.orgId.disableDeletion</code></td> <td><code>false</code></td> <td></td> </tr> <tr> <td><code>grafana.dashboardProviders.dashboardproviders.providers.orgId.options.path</code></td> <td><code>/var/lib/grafana/dashboards/istio</code></td> <td></td> </tr> </tbody> </table> <h3 id="new-prometheus-key-value-pairs">New <code>prometheus</code> key/value pairs</h3> <table> <thead> <tr> <th>Key</th> <th>Default Value</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td><code>prometheus.image</code></td> <td><code>prometheus</code></td> <td></td> </tr> </tbody> </table> <h3 id="new-gateways-key-value-pairs">New <code>gateways</code> key/value pairs</h3> <table> <thead> <tr> <th>Key</th> <th>Default Value</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td><code>gateways.istio-ingressgateway.rollingMaxSurge</code></td> <td><code>100%</code></td> <td></td> </tr> <tr> <td><code>gateways.istio-ingressgateway.rollingMaxUnavailable</code></td> <td><code>25%</code></td> <td></td> </tr> <tr> <td><code>gateways.istio-egressgateway.rollingMaxSurge</code></td> <td><code>100%</code></td> <td></td> </tr> <tr> <td><code>gateways.istio-egressgateway.rollingMaxUnavailable</code></td> <td><code>25%</code></td> <td></td> </tr> <tr> <td><code>gateways.istio-ilbgateway.rollingMaxSurge</code></td> <td><code>100%</code></td> <td></td> </tr> <tr> <td><code>gateways.istio-ilbgateway.rollingMaxUnavailable</code></td> <td><code>25%</code></td> <td></td> </tr> </tbody> </table> <h3 id="new-certmanager-key-value-pairs">New <code>certmanager</code> key/value pairs</h3> <table> <thead> <tr> <th>Key</th> <th>Default Value</th> <th>Description</th> </tr> 
</thead> <tbody> <tr> <td><code>certmanager.image</code></td> <td><code>cert-manager-controller</code></td> <td></td> </tr> </tbody> </table> <h3 id="new-kiali-key-value-pairs">New <code>kiali</code> key/value pairs</h3> <table> <thead> <tr> <th>Key</th> <th>Default Value</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td><code>kiali.image</code></td> <td><code>kiali</code></td> <td></td> </tr> <tr> <td><code>kiali.tolerations</code></td> <td><code>[]</code></td> <td></td> </tr> <tr> <td><code>kiali.dashboard.auth.strategy</code></td> <td><code>login</code></td> <td><code>Can be anonymous, login, or openshift</code></td> </tr> <tr> <td><code>kiali.security.enabled</code></td> <td><code>true</code></td> <td></td> </tr> <tr> <td><code>kiali.security.cert_file</code></td> <td><code>/kiali-cert/cert-chain.pem</code></td> <td></td> </tr> <tr> <td><code>kiali.security.private_key_file</code></td> <td><code>/kiali-cert/key.pem</code></td> <td></td> </tr> </tbody> </table> <h3 id="new-istiocoredns-key-value-pairs">New <code>istiocoredns</code> key/value pairs</h3> <table> <thead> <tr> <th>Key</th> <th>Default Value</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td><code>istiocoredns.rollingMaxSurge</code></td> <td><code>100%</code></td> <td></td> </tr> <tr> <td><code>istiocoredns.rollingMaxUnavailable</code></td> <td><code>25%</code></td> <td></td> </tr> </tbody> </table> <h3 id="new-security-key-value-pairs">New <code>security</code> key/value pairs</h3> <table> <thead> <tr> <th>Key</th> <th>Default Value</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td><code>security.replicaCount</code></td> <td><code>1</code></td> <td></td> </tr> <tr> <td><code>security.rollingMaxSurge</code></td> <td><code>100%</code></td> <td></td> </tr> <tr> <td><code>security.rollingMaxUnavailable</code></td> <td><code>25%</code></td> <td></td> </tr> <tr> <td><code>security.workloadCertTtl</code></td> <td><code>2160h</code></td> <td><code>90*24hour = 2160h</code></td> </tr> 
<tr> <td><code>security.enableNamespacesByDefault</code></td> <td><code>true</code></td> <td><code>Determines Citadel default behavior if the ca.istio.io/env or ca.istio.io/override labels are not found on a given namespace. For example: consider a namespace called &quot;target&quot;, which has neither the &quot;ca.istio.io/env&quot; nor the &quot;ca.istio.io/override&quot; namespace labels. To decide whether or not to generate secrets for service accounts created in this &quot;target&quot; namespace, Citadel will defer to this option. If the value of this option is &quot;true&quot; in this case, secrets will be generated for the &quot;target&quot; namespace. If the value of this option is &quot;false&quot; Citadel will not generate secrets upon service account creation.</code></td> </tr> </tbody> </table> <h3 id="new-pilot-key-value-pairs">New <code>pilot</code> key/value pairs</h3> <table> <thead> <tr> <th>Key</th> <th>Default Value</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td><code>pilot.rollingMaxSurge</code></td> <td><code>100%</code></td> <td></td> </tr> <tr> <td><code>pilot.rollingMaxUnavailable</code></td> <td><code>25%</code></td> <td></td> </tr> <tr> <td><code>pilot.enableProtocolSniffing</code></td> <td><code>false</code></td> <td><code>if protocol sniffing is enabled. 
Default to false.</code></td> </tr> </tbody> </table> <h2 id="removed-configuration-options">Removed configuration options</h2> <h3 id="removed-global-key-value-pairs">Removed <code>global</code> key/value pairs</h3> <table> <thead> <tr> <th>Key</th> <th>Default Value</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td><code>global.sds.useTrustworthyJwt</code></td> <td><code>false</code></td> <td></td> </tr> <tr> <td><code>global.sds.useNormalJwt</code></td> <td><code>false</code></td> <td></td> </tr> <tr> <td><code>global.localityLbSetting</code></td> <td><code>{}</code></td> <td></td> </tr> </tbody> </table> <h3 id="removed-mixer-key-value-pairs">Removed <code>mixer</code> key/value pairs</h3> <table> <thead> <tr> <th>Key</th> <th>Default Value</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td><code>mixer.templates.useTemplateCRDs</code></td> <td><code>false</code></td> <td></td> </tr> </tbody> </table> <h3 id="removed-grafana-key-value-pairs">Removed <code>grafana</code> key/value pairs</h3> <table> <thead> <tr> <th>Key</th> <th>Default Value</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td><code>grafana.dashboardProviders.dashboardproviders.providers.disableDeletion</code></td> <td><code>false</code></td> <td></td> </tr> <tr> <td><code>grafana.dashboardProviders.dashboardproviders.providers.type</code></td> <td><code>file</code></td> <td></td> </tr> <tr> <td><code>grafana.dashboardProviders.dashboardproviders.providers.folder</code></td> <td><code>'istio'</code></td> <td></td> </tr> <tr> <td><code>grafana.datasources.datasources.datasources.isDefault</code></td> <td><code>true</code></td> <td></td> </tr> <tr> <td><code>grafana.datasources.datasources.datasources.url</code></td> <td><code>http://prometheus:9090</code></td> <td></td> </tr> <tr> <td><code>grafana.datasources.datasources.datasources.access</code></td> <td><code>proxy</code></td> <td></td> </tr> <tr> 
<td><code>grafana.datasources.datasources.datasources.jsonData.timeInterval</code></td> <td><code>5s</code></td> <td></td> </tr> <tr> <td><code>grafana.dashboardProviders.dashboardproviders.providers.options.path</code></td> <td><code>/var/lib/grafana/dashboards/istio</code></td> <td></td> </tr> <tr> <td><code>grafana.datasources.datasources.datasources.editable</code></td> <td><code>true</code></td> <td></td> </tr> <tr> <td><code>grafana.datasources.datasources.datasources.orgId</code></td> <td><code>1</code></td> <td></td> </tr> </tbody> </table> <!-- AUTO-GENERATED-END -->Mon, 01 Jan 0001 00:00:00 +0000/v1.9/news/releases/1.3.x/announcing-1.3/helm-changes//v1.9/news/releases/1.3.x/announcing-1.3/helm-changes/Helm Changes <p>The tables below show changes made to the installation options used to customize an Istio install using Helm between Istio 1.1 and Istio 1.2. The tables are grouped into three different categories:</p> <ul> <li>The installation options already in the previous release but whose values have been modified in the new release.</li> <li>The new installation options added in the new release.</li> <li>The installation options removed from the new release.</li> </ul> <!-- Run python scripts/tablegen.py to generate this table --> <!-- AUTO-GENERATED-START --> <h2 id="modified-configuration-options">Modified configuration options</h2> <h3 id="modified-kiali-key-value-pairs">Modified <code>kiali</code> key/value pairs</h3> <table> <thead> <tr> <th>Key</th> <th>Old Default Value</th> <th>New Default Value</th> <th>Old Description</th> <th>New Description</th> </tr> </thead> <tbody> <tr> <td><code>kiali.hub</code></td> <td><code>docker.io/kiali</code></td> <td><code>quay.io/kiali</code></td> <td></td> <td></td> </tr> <tr> <td><code>kiali.tag</code></td> <td><code>v0.14</code></td> <td><code>v0.20</code></td> <td></td> <td></td> </tr> </tbody> </table> <h3 id="modified-prometheus-key-value-pairs">Modified <code>prometheus</code> key/value pairs</h3> <table> 
<thead> <tr> <th>Key</th> <th>Old Default Value</th> <th>New Default Value</th> <th>Old Description</th> <th>New Description</th> </tr> </thead> <tbody> <tr> <td><code>prometheus.tag</code></td> <td><code>v2.3.1</code></td> <td><code>v2.8.0</code></td> <td></td> <td></td> </tr> </tbody> </table> <h3 id="modified-global-key-value-pairs">Modified <code>global</code> key/value pairs</h3> <table> <thead> <tr> <th>Key</th> <th>Old Default Value</th> <th>New Default Value</th> <th>Old Description</th> <th>New Description</th> </tr> </thead> <tbody> <tr> <td><code>global.tag</code></td> <td><code>release-1.1-latest-daily</code></td> <td><code>1.2.0-rc.3</code></td> <td><code>Default tag for Istio images.</code></td> <td><code>Default tag for Istio images.</code></td> </tr> <tr> <td><code>global.proxy.resources.limits.memory</code></td> <td><code>128Mi</code></td> <td><code>1024Mi</code></td> <td></td> <td></td> </tr> <tr> <td><code>global.proxy.dnsRefreshRate</code></td> <td><code>5s</code></td> <td><code>300s</code></td> <td><code>Configure the DNS refresh rate for Envoy cluster of type STRICT_DNS 5 seconds is the default refresh rate used by Envoy</code></td> <td><code>Configure the DNS refresh rate for Envoy cluster of type STRICT_DNS This must be given in terms of seconds. 
For example, 300s is valid but 5m is invalid.</code></td> </tr> </tbody> </table> <h3 id="modified-mixer-key-value-pairs">Modified <code>mixer</code> key/value pairs</h3> <table> <thead> <tr> <th>Key</th> <th>Old Default Value</th> <th>New Default Value</th> <th>Old Description</th> <th>New Description</th> </tr> </thead> <tbody> <tr> <td><code>mixer.adapters.useAdapterCRDs</code></td> <td><code>true</code></td> <td><code>false</code></td> <td><code>Setting this to false sets the useAdapterCRDs mixer startup argument to false</code></td> <td><code>Setting this to false sets the useAdapterCRDs mixer startup argument to false</code></td> </tr> </tbody> </table> <h3 id="modified-grafana-key-value-pairs">Modified <code>grafana</code> key/value pairs</h3> <table> <thead> <tr> <th>Key</th> <th>Old Default Value</th> <th>New Default Value</th> <th>Old Description</th> <th>New Description</th> </tr> </thead> <tbody> <tr> <td><code>grafana.image.tag</code></td> <td><code>5.4.0</code></td> <td><code>6.1.6</code></td> <td></td> <td></td> </tr> </tbody> </table> <h2 id="new-configuration-options">New configuration options</h2> <h3 id="new-tracing-key-value-pairs">New <code>tracing</code> key/value pairs</h3> <table> <thead> <tr> <th>Key</th> <th>Default Value</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td><code>tracing.podAntiAffinityLabelSelector</code></td> <td><code>[]</code></td> <td></td> </tr> <tr> <td><code>tracing.podAntiAffinityTermLabelSelector</code></td> <td><code>[]</code></td> <td></td> </tr> </tbody> </table> <h3 id="new-sidecarinjectorwebhook-key-value-pairs">New <code>sidecarInjectorWebhook</code> key/value pairs</h3> <table> <thead> <tr> <th>Key</th> <th>Default Value</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td><code>sidecarInjectorWebhook.podAntiAffinityLabelSelector</code></td> <td><code>[]</code></td> <td></td> </tr> <tr> <td><code>sidecarInjectorWebhook.podAntiAffinityTermLabelSelector</code></td> <td><code>[]</code></td> 
<td></td> </tr> <tr> <td><code>sidecarInjectorWebhook.neverInjectSelector</code></td> <td><code>[]</code></td> <td><code>You can use the field called alwaysInjectSelector and neverInjectSelector which will always inject the sidecar or always skip the injection on pods that match that label selector, regardless of the global policy. See https://istio.io/docs/setup/kubernetes/additional-setup/sidecar-injection/more-control-adding-exceptions</code></td> </tr> <tr> <td><code>sidecarInjectorWebhook.alwaysInjectSelector</code></td> <td><code>[]</code></td> <td></td> </tr> </tbody> </table> <h3 id="new-global-key-value-pairs">New <code>global</code> key/value pairs</h3> <table> <thead> <tr> <th>Key</th> <th>Default Value</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td><code>global.logging.level</code></td> <td><code>&quot;default:info&quot;</code></td> <td></td> </tr> <tr> <td><code>global.proxy.logLevel</code></td> <td><code>&quot;&quot;</code></td> <td><code>Log level for proxy, applies to gateways and sidecars. If left empty, &quot;warning&quot; is used. Expected values are: trace\|debug\|info\|warning\|error\|critical\|off</code></td> </tr> <tr> <td><code>global.proxy.componentLogLevel</code></td> <td><code>&quot;&quot;</code></td> <td><code>Per Component log level for proxy, applies to gateways and sidecars. If a component level is not set, then the global &quot;logLevel&quot; will be used. 
If left empty, &quot;misc:error&quot; is used.</code></td> </tr> <tr> <td><code>global.proxy.excludeOutboundPorts</code></td> <td><code>&quot;&quot;</code></td> <td></td> </tr> <tr> <td><code>global.tracer.datadog.address</code></td> <td><code>&quot;$(HOST_IP):8126&quot;</code></td> <td></td> </tr> <tr> <td><code>global.imagePullSecrets</code></td> <td><code>[]</code></td> <td><code>Lists the secrets you need to use to pull Istio images from a secure registry.</code></td> </tr> <tr> <td><code>global.localityLbSetting</code></td> <td><code>{}</code></td> <td></td> </tr> </tbody> </table> <h3 id="new-galley-key-value-pairs">New <code>galley</code> key/value pairs</h3> <table> <thead> <tr> <th>Key</th> <th>Default Value</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td><code>galley.nodeSelector</code></td> <td><code>{}</code></td> <td></td> </tr> <tr> <td><code>galley.tolerations</code></td> <td><code>[]</code></td> <td></td> </tr> <tr> <td><code>galley.podAntiAffinityLabelSelector</code></td> <td><code>[]</code></td> <td></td> </tr> <tr> <td><code>galley.podAntiAffinityTermLabelSelector</code></td> <td><code>[]</code></td> <td></td> </tr> </tbody> </table> <h3 id="new-mixer-key-value-pairs">New <code>mixer</code> key/value pairs</h3> <table> <thead> <tr> <th>Key</th> <th>Default Value</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td><code>mixer.tolerations</code></td> <td><code>[]</code></td> <td></td> </tr> <tr> <td><code>mixer.podAntiAffinityLabelSelector</code></td> <td><code>[]</code></td> <td></td> </tr> <tr> <td><code>mixer.podAntiAffinityTermLabelSelector</code></td> <td><code>[]</code></td> <td></td> </tr> <tr> <td><code>mixer.templates.useTemplateCRDs</code></td> <td><code>false</code></td> <td></td> </tr> </tbody> </table> <h3 id="new-grafana-key-value-pairs">New <code>grafana</code> key/value pairs</h3> <table> <thead> <tr> <th>Key</th> <th>Default Value</th> <th>Description</th> </tr> </thead> <tbody> <tr> 
<td><code>grafana.tolerations</code></td> <td><code>[]</code></td> <td></td> </tr> <tr> <td><code>grafana.podAntiAffinityLabelSelector</code></td> <td><code>[]</code></td> <td></td> </tr> <tr> <td><code>grafana.podAntiAffinityTermLabelSelector</code></td> <td><code>[]</code></td> <td></td> </tr> </tbody> </table> <h3 id="new-prometheus-key-value-pairs">New <code>prometheus</code> key/value pairs</h3> <table> <thead> <tr> <th>Key</th> <th>Default Value</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td><code>prometheus.tolerations</code></td> <td><code>[]</code></td> <td></td> </tr> <tr> <td><code>prometheus.podAntiAffinityLabelSelector</code></td> <td><code>[]</code></td> <td></td> </tr> <tr> <td><code>prometheus.podAntiAffinityTermLabelSelector</code></td> <td><code>[]</code></td> <td></td> </tr> </tbody> </table> <h3 id="new-gateways-key-value-pairs">New <code>gateways</code> key/value pairs</h3> <table> <thead> <tr> <th>Key</th> <th>Default Value</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td><code>gateways.istio-ingressgateway.sds.resources.requests.cpu</code></td> <td><code>100m</code></td> <td></td> </tr> <tr> <td><code>gateways.istio-ingressgateway.sds.resources.requests.memory</code></td> <td><code>128Mi</code></td> <td></td> </tr> <tr> <td><code>gateways.istio-ingressgateway.sds.resources.limits.cpu</code></td> <td><code>2000m</code></td> <td></td> </tr> <tr> <td><code>gateways.istio-ingressgateway.sds.resources.limits.memory</code></td> <td><code>1024Mi</code></td> <td></td> </tr> <tr> <td><code>gateways.istio-ingressgateway.resources.requests.cpu</code></td> <td><code>100m</code></td> <td></td> </tr> <tr> <td><code>gateways.istio-ingressgateway.resources.requests.memory</code></td> <td><code>128Mi</code></td> <td></td> </tr> <tr> <td><code>gateways.istio-ingressgateway.resources.limits.cpu</code></td> <td><code>2000m</code></td> <td></td> </tr> <tr> <td><code>gateways.istio-ingressgateway.resources.limits.memory</code></td> 
<td><code>1024Mi</code></td> <td></td> </tr> <tr> <td><code>gateways.istio-ingressgateway.applicationPorts</code></td> <td><code>&quot;&quot;</code></td> <td></td> </tr> <tr> <td><code>gateways.istio-ingressgateway.tolerations</code></td> <td><code>[]</code></td> <td></td> </tr> <tr> <td><code>gateways.istio-ingressgateway.podAntiAffinityLabelSelector</code></td> <td><code>[]</code></td> <td></td> </tr> <tr> <td><code>gateways.istio-ingressgateway.podAntiAffinityTermLabelSelector</code></td> <td><code>[]</code></td> <td></td> </tr> <tr> <td><code>gateways.istio-egressgateway.resources.requests.cpu</code></td> <td><code>100m</code></td> <td></td> </tr> <tr> <td><code>gateways.istio-egressgateway.resources.requests.memory</code></td> <td><code>128Mi</code></td> <td></td> </tr> <tr> <td><code>gateways.istio-egressgateway.resources.limits.cpu</code></td> <td><code>2000m</code></td> <td></td> </tr> <tr> <td><code>gateways.istio-egressgateway.resources.limits.memory</code></td> <td><code>256Mi</code></td> <td></td> </tr> <tr> <td><code>gateways.istio-egressgateway.tolerations</code></td> <td><code>[]</code></td> <td></td> </tr> <tr> <td><code>gateways.istio-egressgateway.podAntiAffinityLabelSelector</code></td> <td><code>[]</code></td> <td></td> </tr> <tr> <td><code>gateways.istio-egressgateway.podAntiAffinityTermLabelSelector</code></td> <td><code>[]</code></td> <td></td> </tr> <tr> <td><code>gateways.istio-ilbgateway.tolerations</code></td> <td><code>[]</code></td> <td></td> </tr> </tbody> </table> <h3 id="new-certmanager-key-value-pairs">New <code>certmanager</code> key/value pairs</h3> <table> <thead> <tr> <th>Key</th> <th>Default Value</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td><code>certmanager.replicaCount</code></td> <td><code>1</code></td> <td></td> </tr> <tr> <td><code>certmanager.nodeSelector</code></td> <td><code>{}</code></td> <td></td> </tr> <tr> <td><code>certmanager.tolerations</code></td> <td><code>[]</code></td> <td></td> </tr> <tr> 
<td><code>certmanager.podAntiAffinityLabelSelector</code></td> <td><code>[]</code></td> <td></td> </tr> <tr> <td><code>certmanager.podAntiAffinityTermLabelSelector</code></td> <td><code>[]</code></td> <td></td> </tr> </tbody> </table> <h3 id="new-kiali-key-value-pairs">New <code>kiali</code> key/value pairs</h3> <table> <thead> <tr> <th>Key</th> <th>Default Value</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td><code>kiali.podAntiAffinityLabelSelector</code></td> <td><code>[]</code></td> <td></td> </tr> <tr> <td><code>kiali.podAntiAffinityTermLabelSelector</code></td> <td><code>[]</code></td> <td></td> </tr> <tr> <td><code>kiali.dashboard.viewOnlyMode</code></td> <td><code>false</code></td> <td><code>Bind the service account to a role with only read access</code></td> </tr> </tbody> </table> <h3 id="new-istiocoredns-key-value-pairs">New <code>istiocoredns</code> key/value pairs</h3> <table> <thead> <tr> <th>Key</th> <th>Default Value</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td><code>istiocoredns.tolerations</code></td> <td><code>[]</code></td> <td></td> </tr> <tr> <td><code>istiocoredns.podAntiAffinityLabelSelector</code></td> <td><code>[]</code></td> <td></td> </tr> <tr> <td><code>istiocoredns.podAntiAffinityTermLabelSelector</code></td> <td><code>[]</code></td> <td></td> </tr> </tbody> </table> <h3 id="new-security-key-value-pairs">New <code>security</code> key/value pairs</h3> <table> <thead> <tr> <th>Key</th> <th>Default Value</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td><code>security.tolerations</code></td> <td><code>[]</code></td> <td></td> </tr> <tr> <td><code>security.citadelHealthCheck</code></td> <td><code>false</code></td> <td></td> </tr> <tr> <td><code>security.podAntiAffinityLabelSelector</code></td> <td><code>[]</code></td> <td></td> </tr> <tr> <td><code>security.podAntiAffinityTermLabelSelector</code></td> <td><code>[]</code></td> <td></td> </tr> </tbody> </table> <h3 id="new-nodeagent-key-value-pairs">New 
<code>nodeagent</code> key/value pairs</h3> <table> <thead> <tr> <th>Key</th> <th>Default Value</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td><code>nodeagent.tolerations</code></td> <td><code>[]</code></td> <td></td> </tr> <tr> <td><code>nodeagent.podAntiAffinityLabelSelector</code></td> <td><code>[]</code></td> <td></td> </tr> <tr> <td><code>nodeagent.podAntiAffinityTermLabelSelector</code></td> <td><code>[]</code></td> <td></td> </tr> </tbody> </table> <h3 id="new-pilot-key-value-pairs">New <code>pilot</code> key/value pairs</h3> <table> <thead> <tr> <th>Key</th> <th>Default Value</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td><code>pilot.tolerations</code></td> <td><code>[]</code></td> <td></td> </tr> <tr> <td><code>pilot.podAntiAffinityLabelSelector</code></td> <td><code>[]</code></td> <td></td> </tr> <tr> <td><code>pilot.podAntiAffinityTermLabelSelector</code></td> <td><code>[]</code></td> <td></td> </tr> </tbody> </table> <h2 id="removed-configuration-options">Removed configuration options</h2> <h3 id="removed-kiali-key-value-pairs">Removed <code>kiali</code> key/value pairs</h3> <table> <thead> <tr> <th>Key</th> <th>Default Value</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td><code>kiali.dashboard.usernameKey</code></td> <td><code>username</code></td> <td><code>This is the key name within the secret whose value is the actual username.</code></td> </tr> <tr> <td><code>kiali.dashboard.passphraseKey</code></td> <td><code>passphrase</code></td> <td><code>This is the key name within the secret whose value is the actual passphrase.</code></td> </tr> </tbody> </table> <h3 id="removed-security-key-value-pairs">Removed <code>security</code> key/value pairs</h3> <table> <thead> <tr> <th>Key</th> <th>Default Value</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td><code>security.replicaCount</code></td> <td><code>1</code></td> <td></td> </tr> </tbody> </table> <h3 id="removed-gateways-key-value-pairs">Removed 
<code>gateways</code> key/value pairs</h3> <table> <thead> <tr> <th>Key</th> <th>Default Value</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td><code>gateways.istio-ingressgateway.resources</code></td> <td><code>{}</code></td> <td></td> </tr> </tbody> </table> <h3 id="removed-mixer-key-value-pairs">Removed <code>mixer</code> key/value pairs</h3> <table> <thead> <tr> <th>Key</th> <th>Default Value</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td><code>mixer.enabled</code></td> <td><code>true</code></td> <td></td> </tr> </tbody> </table> <h3 id="removed-servicegraph-key-value-pairs">Removed <code>servicegraph</code> key/value pairs</h3> <table> <thead> <tr> <th>Key</th> <th>Default Value</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td><code>servicegraph.ingress.enabled</code></td> <td><code>false</code></td> <td></td> </tr> <tr> <td><code>servicegraph.service.name</code></td> <td><code>http</code></td> <td></td> </tr> <tr> <td><code>servicegraph.replicaCount</code></td> <td><code>1</code></td> <td></td> </tr> <tr> <td><code>servicegraph.service.type</code></td> <td><code>ClusterIP</code></td> <td></td> </tr> <tr> <td><code>servicegraph.service.annotations</code></td> <td><code>{}</code></td> <td></td> </tr> <tr> <td><code>servicegraph.enabled</code></td> <td><code>false</code></td> <td></td> </tr> <tr> <td><code>servicegraph.image</code></td> <td><code>servicegraph</code></td> <td></td> </tr> <tr> <td><code>servicegraph.service.externalPort</code></td> <td><code>8088</code></td> <td></td> </tr> <tr> <td><code>servicegraph.ingress.hosts</code></td> <td><code>servicegraph.local</code></td> <td><code>Used to create an Ingress record.</code></td> </tr> <tr> <td><code>servicegraph.nodeSelector</code></td> <td><code>{}</code></td> <td></td> </tr> <tr> <td><code>servicegraph.prometheusAddr</code></td> <td><code>http://prometheus:9090</code></td> <td></td> </tr> </tbody> </table> <!-- AUTO-GENERATED-END -->
/v1.9/news/releases/1.2.x/announcing-1.2/helm-changes/Helm Changes <p>The tables below show changes made to the installation options used to customize an Istio install using Helm between Istio 1.0 and Istio 1.1. The tables are grouped into three different categories:</p> <ul> <li>The installation options already in the previous release but whose values or descriptions have been modified in the new release.</li> <li>The new installation options added in the new release.</li> <li>The installation options removed from the new release.</li> </ul> <!-- Run python scripts/tablegen.py to generate this table --> <!-- AUTO-GENERATED-START --> <h2 id="modified-configuration-options">Modified configuration options</h2> <h3 id="modified-servicegraph-key-value-pairs">Modified <code>servicegraph</code> key/value pairs</h3> <table> <thead> <tr> <th>Key</th> <th>Old Default Value</th> <th>New Default Value</th> <th>Old Description</th> <th>New Description</th> </tr> </thead> <tbody> <tr> <td><code>servicegraph.ingress.hosts</code></td> <td><code>servicegraph.local</code></td> <td><code>servicegraph.local</code></td> <td></td> <td><code>Used to create an Ingress record.</code></td> </tr> </tbody> </table> <h3 id="modified-tracing-key-value-pairs">Modified <code>tracing</code> key/value pairs</h3> <table> <thead> <tr> <th>Key</th> <th>Old Default Value</th> <th>New Default Value</th> <th>Old Description</th> <th>New Description</th> </tr> </thead> <tbody> <tr> <td><code>tracing.jaeger.tag</code></td> <td><code>1.5</code></td> <td><code>1.9</code></td> <td></td> <td></td> </tr> </tbody> </table> <h3 id="modified-global-key-value-pairs">Modified <code>global</code> key/value pairs</h3> <table> <thead> <tr> <th>Key</th> <th>Old Default Value</th> <th>New Default Value</th> <th>Old Description</th> <th>New Description</th> </tr> </thead> <tbody> <tr> <td><code>global.hub</code></td> <td><code>gcr.io/istio-release</code></td> 
<td><code>gcr.io/istio-release</code></td> <td></td> <td><code>Default hub for Istio images. Releases are published to docker hub under 'istio' project. Daily builds from prow are on gcr.io, and nightly builds from circle on docker.io/istionightly</code></td> </tr> <tr> <td><code>global.tag</code></td> <td><code>release-1.0-latest-daily</code></td> <td><code>release-1.1-latest-daily</code></td> <td></td> <td><code>Default tag for Istio images.</code></td> </tr> <tr> <td><code>global.proxy.resources.requests.cpu</code></td> <td><code>10m</code></td> <td><code>100m</code></td> <td></td> <td></td> </tr> <tr> <td><code>global.proxy.accessLogFile</code></td> <td><code>&quot;/dev/stdout&quot;</code></td> <td><code>&quot;&quot;</code></td> <td></td> <td></td> </tr> <tr> <td><code>global.proxy.enableCoreDump</code></td> <td><code>false</code></td> <td><code>false</code></td> <td></td> <td><code>If set, newly injected sidecars will have core dumps enabled.</code></td> </tr> <tr> <td><code>global.proxy.autoInject</code></td> <td><code>enabled</code></td> <td><code>enabled</code></td> <td></td> <td><code>This controls the 'policy' in the sidecar injector.</code></td> </tr> <tr> <td><code>global.proxy.envoyStatsd.enabled</code></td> <td><code>true</code></td> <td><code>false</code></td> <td></td> <td><code>If enabled is set to true, host and port must also be provided. 
Istio no longer provides a statsd collector.</code></td> </tr> <tr> <td><code>global.proxy.envoyStatsd.host</code></td> <td><code>istio-statsd-prom-bridge</code></td> <td>``</td> <td></td> <td><code>example: statsd-svc.istio-system</code></td> </tr> <tr> <td><code>global.proxy.envoyStatsd.port</code></td> <td><code>9125</code></td> <td>``</td> <td></td> <td><code>example: 9125</code></td> </tr> <tr> <td><code>global.proxy_init.image</code></td> <td><code>proxy_init</code></td> <td><code>proxy_init</code></td> <td></td> <td><code>Base name for the proxy_init container, used to configure iptables.</code></td> </tr> <tr> <td><code>global.controlPlaneSecurityEnabled</code></td> <td><code>false</code></td> <td><code>false</code></td> <td></td> <td><code>controlPlaneMtls enabled. Will result in delays starting the pods while secrets are propagated, not recommended for tests.</code></td> </tr> <tr> <td><code>global.disablePolicyChecks</code></td> <td><code>false</code></td> <td><code>true</code></td> <td></td> <td><code>disablePolicyChecks disables mixer policy checks. If mixer.policy.enabled==true then disablePolicyChecks has effect. Will set the value with the same name in the istio config map - pilot needs to be restarted to take effect.</code></td> </tr> <tr> <td><code>global.enableTracing</code></td> <td><code>true</code></td> <td><code>true</code></td> <td></td> <td><code>EnableTracing sets the value with the same name in the istio config map, requires pilot restart to take effect.</code></td> </tr> <tr> <td><code>global.mtls.enabled</code></td> <td><code>false</code></td> <td><code>false</code></td> <td></td> <td><code>Default setting for service-to-service mtls. 
Can be set explicitly using destination rules or service annotations.</code></td> </tr> <tr> <td><code>global.oneNamespace</code></td> <td><code>false</code></td> <td><code>false</code></td> <td></td> <td><code>Whether to restrict the applications namespace the controller manages; if not set, controller watches all namespaces</code></td> </tr> <tr> <td><code>global.configValidation</code></td> <td><code>true</code></td> <td><code>true</code></td> <td></td> <td><code>Whether to perform server-side validation of configuration.</code></td> </tr> </tbody> </table> <h3 id="modified-gateways-key-value-pairs">Modified <code>gateways</code> key/value pairs</h3> <table> <thead> <tr> <th>Key</th> <th>Old Default Value</th> <th>New Default Value</th> <th>Old Description</th> <th>New Description</th> </tr> </thead> <tbody> <tr> <td><code>gateways.istio-ingressgateway.type</code></td> <td><code>LoadBalancer #change to NodePort, ClusterIP or LoadBalancer if need be</code></td> <td><code>LoadBalancer</code></td> <td></td> <td><code>change to NodePort, ClusterIP or LoadBalancer if need be</code></td> </tr> <tr> <td><code>gateways.istio-egressgateway.enabled</code></td> <td><code>true</code></td> <td><code>false</code></td> <td></td> <td></td> </tr> <tr> <td><code>gateways.istio-egressgateway.type</code></td> <td><code>ClusterIP #change to NodePort or LoadBalancer if need be</code></td> <td><code>ClusterIP</code></td> <td></td> <td><code>change to NodePort or LoadBalancer if need be</code></td> </tr> </tbody> </table> <h3 id="modified-certmanager-key-value-pairs">Modified <code>certmanager</code> key/value pairs</h3> <table> <thead> <tr> <th>Key</th> <th>Old Default Value</th> <th>New Default Value</th> <th>Old Description</th> <th>New Description</th> </tr> </thead> <tbody> <tr> <td><code>certmanager.tag</code></td> <td><code>v0.3.1</code></td> <td><code>v0.6.2</code></td> <td></td> <td></td> </tr> </tbody> </table> <h3 id="modified-kiali-key-value-pairs">Modified <code>kiali</code> 
key/value pairs</h3> <table> <thead> <tr> <th>Key</th> <th>Old Default Value</th> <th>New Default Value</th> <th>Old Description</th> <th>New Description</th> </tr> </thead> <tbody> <tr> <td><code>kiali.tag</code></td> <td><code>istio-release-1.0</code></td> <td><code>v0.14</code></td> <td></td> <td></td> </tr> </tbody> </table> <h3 id="modified-security-key-value-pairs">Modified <code>security</code> key/value pairs</h3> <table> <thead> <tr> <th>Key</th> <th>Old Default Value</th> <th>New Default Value</th> <th>Old Description</th> <th>New Description</th> </tr> </thead> <tbody> <tr> <td><code>security.selfSigned</code></td> <td><code>true # indicate if self-signed CA is used.</code></td> <td><code>true</code></td> <td></td> <td><code>indicate if self-signed CA is used.</code></td> </tr> </tbody> </table> <h3 id="modified-pilot-key-value-pairs">Modified <code>pilot</code> key/value pairs</h3> <table> <thead> <tr> <th>Key</th> <th>Old Default Value</th> <th>New Default Value</th> <th>Old Description</th> <th>New Description</th> </tr> </thead> <tbody> <tr> <td><code>pilot.autoscaleMax</code></td> <td><code>1</code></td> <td><code>5</code></td> <td></td> <td></td> </tr> <tr> <td><code>pilot.traceSampling</code></td> <td><code>100.0</code></td> <td><code>1.0</code></td> <td></td> <td></td> </tr> </tbody> </table> <h2 id="new-configuration-options">New configuration options</h2> <h3 id="new-istio-cni-key-value-pairs">New <code>istio_cni</code> key/value pairs</h3> <table> <thead> <tr> <th>Key</th> <th>Default Value</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td><code>istio_cni.enabled</code></td> <td><code>false</code></td> <td></td> </tr> </tbody> </table> <h3 id="new-servicegraph-key-value-pairs">New <code>servicegraph</code> key/value pairs</h3> <table> <thead> <tr> <th>Key</th> <th>Default Value</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td><code>servicegraph.nodeSelector</code></td> <td><code>{}</code></td> <td></td> </tr> </tbody> </table> 
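<p>Several of the keys introduced in this release, such as <code>servicegraph.nodeSelector</code> above and <code>global.defaultNodeSelector</code> below, take standard Kubernetes label maps. As a minimal sketch (the file name and the <code>kubernetes.io/os: linux</code> label are illustrative examples, not chart defaults), such keys can be supplied through a Helm values override file:</p>

```shell
# Write an illustrative override file for the new nodeSelector-style keys.
# The label value below is an example, not a default shipped with the chart.
cat > nodeselector-values.yaml <<'EOF'
global:
  defaultNodeSelector:
    kubernetes.io/os: linux
servicegraph:
  nodeSelector:
    kubernetes.io/os: linux
EOF

# The file can then be passed to helm, for example:
#   helm template install/kubernetes/helm/istio -f nodeselector-values.yaml
# Sanity-check that both selector blocks are present (case-insensitive count):
grep -ci 'nodeselector' nodeselector-values.yaml   # prints 2
```

<p>Per-component keys override <code>global.defaultNodeSelector</code>, so the global map acts as a fallback for components that do not set their own selector block.</p>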
<h3 id="new-tracing-key-value-pairs">New <code>tracing</code> key/value pairs</h3> <table> <thead> <tr> <th>Key</th> <th>Default Value</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td><code>tracing.nodeSelector</code></td> <td><code>{}</code></td> <td></td> </tr> <tr> <td><code>tracing.zipkin.hub</code></td> <td><code>docker.io/openzipkin</code></td> <td></td> </tr> <tr> <td><code>tracing.zipkin.tag</code></td> <td><code>2</code></td> <td></td> </tr> <tr> <td><code>tracing.zipkin.probeStartupDelay</code></td> <td><code>200</code></td> <td></td> </tr> <tr> <td><code>tracing.zipkin.queryPort</code></td> <td><code>9411</code></td> <td></td> </tr> <tr> <td><code>tracing.zipkin.resources.limits.cpu</code></td> <td><code>300m</code></td> <td></td> </tr> <tr> <td><code>tracing.zipkin.resources.limits.memory</code></td> <td><code>900Mi</code></td> <td></td> </tr> <tr> <td><code>tracing.zipkin.resources.requests.cpu</code></td> <td><code>150m</code></td> <td></td> </tr> <tr> <td><code>tracing.zipkin.resources.requests.memory</code></td> <td><code>900Mi</code></td> <td></td> </tr> <tr> <td><code>tracing.zipkin.javaOptsHeap</code></td> <td><code>700</code></td> <td></td> </tr> <tr> <td><code>tracing.zipkin.maxSpans</code></td> <td><code>500000</code></td> <td></td> </tr> <tr> <td><code>tracing.zipkin.node.cpus</code></td> <td><code>2</code></td> <td></td> </tr> </tbody> </table> <h3 id="new-sidecarinjectorwebhook-key-value-pairs">New <code>sidecarInjectorWebhook</code> key/value pairs</h3> <table> <thead> <tr> <th>Key</th> <th>Default Value</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td><code>sidecarInjectorWebhook.nodeSelector</code></td> <td><code>{}</code></td> <td></td> </tr> <tr> <td><code>sidecarInjectorWebhook.rewriteAppHTTPProbe</code></td> <td><code>false</code></td> <td><code>If true, webhook or istioctl injector will rewrite PodSpec for liveness health check to redirect request to sidecar. 
This makes liveness check work even when mTLS is enabled.</code></td> </tr> </tbody> </table> <h3 id="new-global-key-value-pairs">New <code>global</code> key/value pairs</h3> <table> <thead> <tr> <th>Key</th> <th>Default Value</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td><code>global.monitoringPort</code></td> <td><code>15014</code></td> <td><code>monitoring port used by mixer, pilot, galley</code></td> </tr> <tr> <td><code>global.k8sIngress.enabled</code></td> <td><code>false</code></td> <td></td> </tr> <tr> <td><code>global.k8sIngress.gatewayName</code></td> <td><code>ingressgateway</code></td> <td><code>Gateway used for k8s Ingress resources. By default it is using 'istio:ingressgateway' that will be installed by setting 'gateways.enabled' and 'gateways.istio-ingressgateway.enabled' flags to true.</code></td> </tr> <tr> <td><code>global.k8sIngress.enableHttps</code></td> <td><code>false</code></td> <td><code>enableHttps will add port 443 on the ingress. It REQUIRES that the certificates are installed in the expected secrets - enabling this option without certificates will result in LDS rejection and the ingress will not work.</code></td> </tr> <tr> <td><code>global.proxy.clusterDomain</code></td> <td><code>&quot;cluster.local&quot;</code></td> <td><code>cluster domain. 
Default value is &quot;cluster.local&quot;.</code></td> </tr> <tr> <td><code>global.proxy.resources.requests.memory</code></td> <td><code>128Mi</code></td> <td></td> </tr> <tr> <td><code>global.proxy.resources.limits.cpu</code></td> <td><code>2000m</code></td> <td></td> </tr> <tr> <td><code>global.proxy.resources.limits.memory</code></td> <td><code>128Mi</code></td> <td></td> </tr> <tr> <td><code>global.proxy.concurrency</code></td> <td><code>2</code></td> <td><code>Controls number of Proxy worker threads. If set to 0 (default), then start worker thread for each CPU thread/core.</code></td> </tr> <tr> <td><code>global.proxy.accessLogFormat</code></td> <td><code>&quot;&quot;</code></td> <td><code>Configure how and what fields are displayed in sidecar access log. Setting to empty string will result in default log format</code></td> </tr> <tr> <td><code>global.proxy.accessLogEncoding</code></td> <td><code>TEXT</code></td> <td><code>Configure the access log for sidecar to JSON or TEXT.</code></td> </tr> <tr> <td><code>global.proxy.dnsRefreshRate</code></td> <td><code>5s</code></td> <td><code>Configure the DNS refresh rate for Envoy cluster of type STRICT_DNS. 5 seconds is the default refresh rate used by Envoy</code></td> </tr> <tr> <td><code>global.proxy.privileged</code></td> <td><code>false</code></td> <td><code>If set to true, istio-proxy container will have privileged securityContext</code></td> </tr> <tr> <td><code>global.proxy.statusPort</code></td> <td><code>15020</code></td> <td><code>Default port for Pilot agent health checks. 
A value of 0 will disable health checking.</code></td> </tr> <tr> <td><code>global.proxy.readinessInitialDelaySeconds</code></td> <td><code>1</code></td> <td><code>The initial delay for readiness probes in seconds.</code></td> </tr> <tr> <td><code>global.proxy.readinessPeriodSeconds</code></td> <td><code>2</code></td> <td><code>The period between readiness probes.</code></td> </tr> <tr> <td><code>global.proxy.readinessFailureThreshold</code></td> <td><code>30</code></td> <td><code>The number of successive failed probes before indicating readiness failure.</code></td> </tr> <tr> <td><code>global.proxy.kubevirtInterfaces</code></td> <td><code>&quot;&quot;</code></td> <td><code>pod internal interfaces</code></td> </tr> <tr> <td><code>global.proxy.envoyMetricsService.enabled</code></td> <td><code>false</code></td> <td></td> </tr> <tr> <td><code>global.proxy.envoyMetricsService.host</code></td> <td>``</td> <td><code>example: metrics-service.istio-system</code></td> </tr> <tr> <td><code>global.proxy.envoyMetricsService.port</code></td> <td>``</td> <td><code>example: 15000</code></td> </tr> <tr> <td><code>global.proxy.tracer</code></td> <td><code>&quot;zipkin&quot;</code></td> <td><code>Specify which tracer to use. 
One of: lightstep, zipkin</code></td> </tr> <tr> <td><code>global.policyCheckFailOpen</code></td> <td><code>false</code></td> <td><code>policyCheckFailOpen allows traffic in cases when the mixer policy service cannot be reached. Default is false which means the traffic is denied when the client is unable to connect to Mixer.</code></td> </tr> <tr> <td><code>global.tracer.lightstep.address</code></td> <td><code>&quot;&quot;</code></td> <td><code>example: lightstep-satellite:443</code></td> </tr> <tr> <td><code>global.tracer.lightstep.accessToken</code></td> <td><code>&quot;&quot;</code></td> <td><code>example: abcdefg1234567</code></td> </tr> <tr> <td><code>global.tracer.lightstep.secure</code></td> <td><code>true</code></td> <td><code>example: true\|false</code></td> </tr> <tr> <td><code>global.tracer.lightstep.cacertPath</code></td> <td><code>&quot;&quot;</code></td> <td><code>example: /etc/lightstep/cacert.pem</code></td> </tr> <tr> <td><code>global.tracer.zipkin.address</code></td> <td><code>&quot;&quot;</code></td> <td></td> </tr> <tr> <td><code>global.defaultNodeSelector</code></td> <td><code>{}</code></td> <td><code>Default node selector to be applied to all deployments so that all pods can be constrained to run on particular nodes. Each component can overwrite these default values by adding its node selector block in the relevant section below and setting the desired values.</code></td> </tr> <tr> <td><code>global.meshExpansion.enabled</code></td> <td><code>false</code></td> <td></td> </tr> <tr> <td><code>global.meshExpansion.useILB</code></td> <td><code>false</code></td> <td><code>If set to true, the pilot and citadel mtls and the plain text pilot ports will be exposed on an internal gateway</code></td> </tr> <tr> <td><code>global.multiCluster.enabled</code></td> <td><code>false</code></td> <td><code>Set to true to connect two kubernetes clusters via their respective ingressgateway services when pods in each cluster cannot directly talk to one another. 
All clusters should be using Istio mTLS and must have a shared root CA for this model to work.</code></td> </tr> <tr> <td><code>global.defaultPodDisruptionBudget.enabled</code></td> <td><code>true</code></td> <td></td> </tr> <tr> <td><code>global.useMCP</code></td> <td><code>true</code></td> <td><code>Use the Mesh Control Protocol (MCP) for configuring Mixer and Pilot. Requires galley (--set galley.enabled=true).</code></td> </tr> <tr> <td><code>global.trustDomain</code></td> <td><code>&quot;&quot;</code></td> <td></td> </tr> <tr> <td><code>global.outboundTrafficPolicy.mode</code></td> <td><code>ALLOW_ANY</code></td> <td></td> </tr> <tr> <td><code>global.sds.enabled</code></td> <td><code>false</code></td> <td><code>SDS enabled. If set to true, mTLS certificates for the sidecars will be distributed through the SecretDiscoveryService instead of using K8S secrets to mount the certificates.</code></td> </tr> <tr> <td><code>global.sds.udsPath</code></td> <td><code>&quot;&quot;</code></td> <td></td> </tr> <tr> <td><code>global.sds.useTrustworthyJwt</code></td> <td><code>false</code></td> <td></td> </tr> <tr> <td><code>global.sds.useNormalJwt</code></td> <td><code>false</code></td> <td></td> </tr> <tr> <td><code>global.meshNetworks</code></td> <td><code>{}</code></td> <td></td> </tr> <tr> <td><code>global.enableHelmTest</code></td> <td><code>false</code></td> <td><code>Specifies whether helm test is enabled or not. This field is set to false by default, so 'helm template ...' will ignore the helm test yaml files when generating the template</code></td> </tr> </tbody> </table> <h3 id="new-mixer-key-value-pairs">New <code>mixer</code> key/value pairs</h3> <table> <thead> <tr> <th>Key</th> <th>Default Value</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td><code>mixer.env.GODEBUG</code></td> <td><code>gctrace=1</code></td> <td></td> </tr> <tr> <td><code>mixer.env.GOMAXPROCS</code></td> <td><code>&quot;6&quot;</code></td> <td><code>max procs should be ceil(cpu limit + 
1)</code></td> </tr> <tr> <td><code>mixer.policy.enabled</code></td> <td><code>false</code></td> <td><code>if policy is enabled, global.disablePolicyChecks has effect.</code></td> </tr> <tr> <td><code>mixer.policy.replicaCount</code></td> <td><code>1</code></td> <td></td> </tr> <tr> <td><code>mixer.policy.autoscaleEnabled</code></td> <td><code>true</code></td> <td></td> </tr> <tr> <td><code>mixer.policy.autoscaleMin</code></td> <td><code>1</code></td> <td></td> </tr> <tr> <td><code>mixer.policy.autoscaleMax</code></td> <td><code>5</code></td> <td></td> </tr> <tr> <td><code>mixer.policy.cpu.targetAverageUtilization</code></td> <td><code>80</code></td> <td></td> </tr> <tr> <td><code>mixer.telemetry.enabled</code></td> <td><code>true</code></td> <td></td> </tr> <tr> <td><code>mixer.telemetry.replicaCount</code></td> <td><code>1</code></td> <td></td> </tr> <tr> <td><code>mixer.telemetry.autoscaleEnabled</code></td> <td><code>true</code></td> <td></td> </tr> <tr> <td><code>mixer.telemetry.autoscaleMin</code></td> <td><code>1</code></td> <td></td> </tr> <tr> <td><code>mixer.telemetry.autoscaleMax</code></td> <td><code>5</code></td> <td></td> </tr> <tr> <td><code>mixer.telemetry.cpu.targetAverageUtilization</code></td> <td><code>80</code></td> <td></td> </tr> <tr> <td><code>mixer.telemetry.sessionAffinityEnabled</code></td> <td><code>false</code></td> <td></td> </tr> <tr> <td><code>mixer.telemetry.loadshedding.mode</code></td> <td><code>enforce</code></td> <td><code>disabled, logonly or enforce</code></td> </tr> <tr> <td><code>mixer.telemetry.loadshedding.latencyThreshold</code></td> <td><code>100ms</code></td> <td><code>based on measurements 100ms p50 translates to p99 of under 1s. 
This is ok for telemetry which is inherently async.</code></td> </tr> <tr> <td><code>mixer.telemetry.resources.requests.cpu</code></td> <td><code>1000m</code></td> <td></td> </tr> <tr> <td><code>mixer.telemetry.resources.requests.memory</code></td> <td><code>1G</code></td> <td></td> </tr> <tr> <td><code>mixer.telemetry.resources.limits.cpu</code></td> <td><code>4800m</code></td> <td><code>It is best to do horizontal scaling of mixer using moderate cpu allocation. We have experimentally found that these values work well.</code></td> </tr> <tr> <td><code>mixer.telemetry.resources.limits.memory</code></td> <td><code>4G</code></td> <td></td> </tr> <tr> <td><code>mixer.podAnnotations</code></td> <td><code>{}</code></td> <td></td> </tr> <tr> <td><code>mixer.nodeSelector</code></td> <td><code>{}</code></td> <td></td> </tr> <tr> <td><code>mixer.adapters.kubernetesenv.enabled</code></td> <td><code>true</code></td> <td></td> </tr> <tr> <td><code>mixer.adapters.stdio.enabled</code></td> <td><code>false</code></td> <td></td> </tr> <tr> <td><code>mixer.adapters.stdio.outputAsJson</code></td> <td><code>true</code></td> <td></td> </tr> <tr> <td><code>mixer.adapters.prometheus.enabled</code></td> <td><code>true</code></td> <td></td> </tr> <tr> <td><code>mixer.adapters.prometheus.metricsExpiryDuration</code></td> <td><code>10m</code></td> <td></td> </tr> <tr> <td><code>mixer.adapters.useAdapterCRDs</code></td> <td><code>true</code></td> <td><code>Setting this to false sets the useAdapterCRDs mixer startup argument to false</code></td> </tr> </tbody> </table> <h3 id="new-grafana-key-value-pairs">New <code>grafana</code> key/value pairs</h3> <table> <thead> <tr> <th>Key</th> <th>Default Value</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td><code>grafana.image.repository</code></td> <td><code>grafana/grafana</code></td> <td></td> </tr> <tr> <td><code>grafana.image.tag</code></td> <td><code>5.4.0</code></td> <td></td> </tr> <tr> <td><code>grafana.ingress.enabled</code></td> 
<td><code>false</code></td> <td></td> </tr> <tr> <td><code>grafana.ingress.hosts</code></td> <td><code>grafana.local</code></td> <td><code>Used to create an Ingress record.</code></td> </tr> <tr> <td><code>grafana.persist</code></td> <td><code>false</code></td> <td></td> </tr> <tr> <td><code>grafana.storageClassName</code></td> <td><code>&quot;&quot;</code></td> <td></td> </tr> <tr> <td><code>grafana.accessMode</code></td> <td><code>ReadWriteMany</code></td> <td></td> </tr> <tr> <td><code>grafana.security.secretName</code></td> <td><code>grafana</code></td> <td></td> </tr> <tr> <td><code>grafana.security.usernameKey</code></td> <td><code>username</code></td> <td></td> </tr> <tr> <td><code>grafana.security.passphraseKey</code></td> <td><code>passphrase</code></td> <td></td> </tr> <tr> <td><code>grafana.nodeSelector</code></td> <td><code>{}</code></td> <td></td> </tr> <tr> <td><code>grafana.contextPath</code></td> <td><code>/grafana</code></td> <td></td> </tr> <tr> <td><code>grafana.datasources.datasources.apiVersion</code></td> <td><code>1</code></td> <td></td> </tr> <tr> <td><code>grafana.datasources.datasources.datasources.type</code></td> <td><code>prometheus</code></td> <td></td> </tr> <tr> <td><code>grafana.datasources.datasources.datasources.orgId</code></td> <td><code>1</code></td> <td></td> </tr> <tr> <td><code>grafana.datasources.datasources.datasources.url</code></td> <td><code>http://prometheus:9090</code></td> <td></td> </tr> <tr> <td><code>grafana.datasources.datasources.datasources.access</code></td> <td><code>proxy</code></td> <td></td> </tr> <tr> <td><code>grafana.datasources.datasources.datasources.isDefault</code></td> <td><code>true</code></td> <td></td> </tr> <tr> <td><code>grafana.datasources.datasources.datasources.jsonData.timeInterval</code></td> <td><code>5s</code></td> <td></td> </tr> <tr> <td><code>grafana.datasources.datasources.datasources.editable</code></td> <td><code>true</code></td> <td></td> </tr> <tr> 
<td><code>grafana.dashboardProviders.dashboardproviders.apiVersion</code></td> <td><code>1</code></td> <td></td> </tr> <tr> <td><code>grafana.dashboardProviders.dashboardproviders.providers.orgId</code></td> <td><code>1</code></td> <td></td> </tr> <tr> <td><code>grafana.dashboardProviders.dashboardproviders.providers.folder</code></td> <td><code>'istio'</code></td> <td></td> </tr> <tr> <td><code>grafana.dashboardProviders.dashboardproviders.providers.type</code></td> <td><code>file</code></td> <td></td> </tr> <tr> <td><code>grafana.dashboardProviders.dashboardproviders.providers.disableDeletion</code></td> <td><code>false</code></td> <td></td> </tr> <tr> <td><code>grafana.dashboardProviders.dashboardproviders.providers.options.path</code></td> <td><code>/var/lib/grafana/dashboards/istio</code></td> <td></td> </tr> </tbody> </table> <h3 id="new-prometheus-key-value-pairs">New <code>prometheus</code> key/value pairs</h3> <table> <thead> <tr> <th>Key</th> <th>Default Value</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td><code>prometheus.retention</code></td> <td><code>6h</code></td> <td></td> </tr> <tr> <td><code>prometheus.nodeSelector</code></td> <td><code>{}</code></td> <td></td> </tr> <tr> <td><code>prometheus.scrapeInterval</code></td> <td><code>15s</code></td> <td><code>Controls the frequency of prometheus scraping</code></td> </tr> <tr> <td><code>prometheus.contextPath</code></td> <td><code>/prometheus</code></td> <td></td> </tr> <tr> <td><code>prometheus.ingress.enabled</code></td> <td><code>false</code></td> <td></td> </tr> <tr> <td><code>prometheus.ingress.hosts</code></td> <td><code>prometheus.local</code></td> <td><code>Used to create an Ingress record.</code></td> </tr> <tr> <td><code>prometheus.security.enabled</code></td> <td><code>true</code></td> <td></td> </tr> </tbody> </table> <h3 id="new-gateways-key-value-pairs">New <code>gateways</code> key/value pairs</h3> <table> <thead> <tr> <th>Key</th> <th>Default Value</th> <th>Description</th> 
</tr> </thead> <tbody> <tr> <td><code>gateways.istio-ingressgateway.sds.enabled</code></td> <td><code>false</code></td> <td><code>If true, ingress gateway fetches credentials from SDS server to handle TLS connections.</code></td> </tr> <tr> <td><code>gateways.istio-ingressgateway.sds.image</code></td> <td><code>node-agent-k8s</code></td> <td><code>SDS server that watches kubernetes secrets and provisions credentials to ingress gateway. This server runs in the same pod as ingress gateway.</code></td> </tr> <tr> <td><code>gateways.istio-ingressgateway.autoscaleEnabled</code></td> <td><code>true</code></td> <td></td> </tr> <tr> <td><code>gateways.istio-ingressgateway.cpu.targetAverageUtilization</code></td> <td><code>80</code></td> <td></td> </tr> <tr> <td><code>gateways.istio-ingressgateway.loadBalancerSourceRanges</code></td> <td><code>[]</code></td> <td></td> </tr> <tr> <td><code>gateways.istio-ingressgateway.externalIPs</code></td> <td><code>[]</code></td> <td></td> </tr> <tr> <td><code>gateways.istio-ingressgateway.podAnnotations</code></td> <td><code>{}</code></td> <td></td> </tr> <tr> <td><code>gateways.istio-ingressgateway.ports.targetPort</code></td> <td><code>15029</code></td> <td></td> </tr> <tr> <td><code>gateways.istio-ingressgateway.ports.name</code></td> <td><code>https-kiali</code></td> <td></td> </tr> <tr> <td><code>gateways.istio-ingressgateway.ports.name</code></td> <td><code>https-prometheus</code></td> <td></td> </tr> <tr> <td><code>gateways.istio-ingressgateway.ports.name</code></td> <td><code>https-grafana</code></td> <td></td> </tr> <tr> <td><code>gateways.istio-ingressgateway.ports.targetPort</code></td> <td><code>15032</code></td> <td></td> </tr> <tr> <td><code>gateways.istio-ingressgateway.ports.name</code></td> <td><code>https-tracing</code></td> <td></td> </tr> <tr> <td><code>gateways.istio-ingressgateway.ports.targetPort</code></td> <td><code>15443</code></td> <td></td> </tr> <tr>
<td><code>gateways.istio-ingressgateway.ports.name</code></td> <td><code>tls</code></td> <td></td> </tr> <tr> <td><code>gateways.istio-ingressgateway.ports.targetPort</code></td> <td><code>15020</code></td> <td></td> </tr> <tr> <td><code>gateways.istio-ingressgateway.ports.name</code></td> <td><code>status-port</code></td> <td></td> </tr> <tr> <td><code>gateways.istio-ingressgateway.meshExpansionPorts.targetPort</code></td> <td><code>15011</code></td> <td></td> </tr> <tr> <td><code>gateways.istio-ingressgateway.meshExpansionPorts.name</code></td> <td><code>tcp-pilot-grpc-tls</code></td> <td></td> </tr> <tr> <td><code>gateways.istio-ingressgateway.meshExpansionPorts.targetPort</code></td> <td><code>15004</code></td> <td></td> </tr> <tr> <td><code>gateways.istio-ingressgateway.meshExpansionPorts.name</code></td> <td><code>tcp-mixer-grpc-tls</code></td> <td></td> </tr> <tr> <td><code>gateways.istio-ingressgateway.meshExpansionPorts.targetPort</code></td> <td><code>8060</code></td> <td></td> </tr> <tr> <td><code>gateways.istio-ingressgateway.meshExpansionPorts.name</code></td> <td><code>tcp-citadel-grpc-tls</code></td> <td></td> </tr> <tr> <td><code>gateways.istio-ingressgateway.meshExpansionPorts.targetPort</code></td> <td><code>853</code></td> <td></td> </tr> <tr> <td><code>gateways.istio-ingressgateway.meshExpansionPorts.name</code></td> <td><code>tcp-dns-tls</code></td> <td></td> </tr> <tr> <td><code>gateways.istio-ingressgateway.env.ISTIO_META_ROUTER_MODE</code></td> <td><code>&quot;sni-dnat&quot;</code></td> <td><code>A gateway with this mode ensures that pilot generates an additional set of clusters for internal services but without Istio mTLS, to enable cross cluster routing.</code></td> </tr> <tr> <td><code>gateways.istio-ingressgateway.nodeSelector</code></td> <td><code>{}</code></td> <td></td> </tr> <tr> <td><code>gateways.istio-egressgateway.autoscaleEnabled</code></td> <td><code>true</code></td> <td></td> </tr> <tr>
<td><code>gateways.istio-egressgateway.cpu.targetAverageUtilization</code></td> <td><code>80</code></td> <td></td> </tr> <tr> <td><code>gateways.istio-egressgateway.podAnnotations</code></td> <td><code>{}</code></td> <td></td> </tr> <tr> <td><code>gateways.istio-egressgateway.ports.targetPort</code></td> <td><code>15443</code></td> <td></td> </tr> <tr> <td><code>gateways.istio-egressgateway.ports.name</code></td> <td><code>tls</code></td> <td></td> </tr> <tr> <td><code>gateways.istio-egressgateway.env.ISTIO_META_ROUTER_MODE</code></td> <td><code>&quot;sni-dnat&quot;</code></td> <td></td> </tr> <tr> <td><code>gateways.istio-egressgateway.nodeSelector</code></td> <td><code>{}</code></td> <td></td> </tr> <tr> <td><code>gateways.istio-ilbgateway.autoscaleEnabled</code></td> <td><code>true</code></td> <td></td> </tr> <tr> <td><code>gateways.istio-ilbgateway.cpu.targetAverageUtilization</code></td> <td><code>80</code></td> <td></td> </tr> <tr> <td><code>gateways.istio-ilbgateway.podAnnotations</code></td> <td><code>{}</code></td> <td></td> </tr> <tr> <td><code>gateways.istio-ilbgateway.nodeSelector</code></td> <td><code>{}</code></td> <td></td> </tr> </tbody> </table> <h3 id="new-kiali-key-value-pairs">New <code>kiali</code> key/value pairs</h3> <table> <thead> <tr> <th>Key</th> <th>Default Value</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td><code>kiali.contextPath</code></td> <td><code>/kiali</code></td> <td></td> </tr> <tr> <td><code>kiali.nodeSelector</code></td> <td><code>{}</code></td> <td></td> </tr> <tr> <td><code>kiali.ingress.hosts</code></td> <td><code>kiali.local</code></td> <td><code>Used to create an Ingress record.</code></td> </tr> <tr> <td><code>kiali.dashboard.secretName</code></td> <td><code>kiali</code></td> <td></td> </tr> <tr> <td><code>kiali.dashboard.usernameKey</code></td> <td><code>username</code></td> <td></td> </tr> <tr> <td><code>kiali.dashboard.passphraseKey</code></td> <td><code>passphrase</code></td> <td></td> </tr> <tr> 
<td><code>kiali.prometheusAddr</code></td> <td><code>http://prometheus:9090</code></td> <td></td> </tr> <tr> <td><code>kiali.createDemoSecret</code></td> <td><code>false</code></td> <td><code>When true, a secret will be created with a default username and password. Useful for demos.</code></td> </tr> </tbody> </table> <h3 id="new-istiocoredns-key-value-pairs">New <code>istiocoredns</code> key/value pairs</h3> <table> <thead> <tr> <th>Key</th> <th>Default Value</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td><code>istiocoredns.enabled</code></td> <td><code>false</code></td> <td></td> </tr> <tr> <td><code>istiocoredns.replicaCount</code></td> <td><code>1</code></td> <td></td> </tr> <tr> <td><code>istiocoredns.coreDNSImage</code></td> <td><code>coredns/coredns:1.1.2</code></td> <td></td> </tr> <tr> <td><code>istiocoredns.coreDNSPluginImage</code></td> <td><code>istio/coredns-plugin:0.2-istio-1.1</code></td> <td></td> </tr> <tr> <td><code>istiocoredns.nodeSelector</code></td> <td><code>{}</code></td> <td></td> </tr> </tbody> </table> <h3 id="new-security-key-value-pairs">New <code>security</code> key/value pairs</h3> <table> <thead> <tr> <th>Key</th> <th>Default Value</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td><code>security.enabled</code></td> <td><code>true</code></td> <td></td> </tr> <tr> <td><code>security.createMeshPolicy</code></td> <td><code>true</code></td> <td></td> </tr> <tr> <td><code>security.nodeSelector</code></td> <td><code>{}</code></td> <td></td> </tr> </tbody> </table> <h3 id="new-nodeagent-key-value-pairs">New <code>nodeagent</code> key/value pairs</h3> <table> <thead> <tr> <th>Key</th> <th>Default Value</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td><code>nodeagent.enabled</code></td> <td><code>false</code></td> <td></td> </tr> <tr> <td><code>nodeagent.image</code></td> <td><code>node-agent-k8s</code></td> <td></td> </tr> <tr> <td><code>nodeagent.env.CA_PROVIDER</code></td> <td><code>&quot;&quot;</code></td> 
<td><code>name of authentication provider.</code></td> </tr> <tr> <td><code>nodeagent.env.CA_ADDR</code></td> <td><code>&quot;&quot;</code></td> <td><code>CA endpoint.</code></td> </tr> <tr> <td><code>nodeagent.env.Plugins</code></td> <td><code>&quot;&quot;</code></td> <td><code>names of authentication provider&#39;s plugins.</code></td> </tr> <tr> <td><code>nodeagent.nodeSelector</code></td> <td><code>{}</code></td> <td></td> </tr> </tbody> </table> <h3 id="new-pilot-key-value-pairs">New <code>pilot</code> key/value pairs</h3> <table> <thead> <tr> <th>Key</th> <th>Default Value</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td><code>pilot.autoscaleEnabled</code></td> <td><code>true</code></td> <td></td> </tr> <tr> <td><code>pilot.env.PILOT_PUSH_THROTTLE</code></td> <td><code>100</code></td> <td></td> </tr> <tr> <td><code>pilot.env.GODEBUG</code></td> <td><code>gctrace=1</code></td> <td></td> </tr> <tr> <td><code>pilot.cpu.targetAverageUtilization</code></td> <td><code>80</code></td> <td></td> </tr> <tr> <td><code>pilot.nodeSelector</code></td> <td><code>{}</code></td> <td></td> </tr> <tr> <td><code>pilot.keepaliveMaxServerConnectionAge</code></td> <td><code>30m</code></td> <td><code>The following is used to limit how long a sidecar can be connected to a pilot.
It balances out load across pilot instances at the cost of increasing system churn.</code></td> </tr> </tbody> </table> <h2 id="removed-configuration-options">Removed configuration options</h2> <h3 id="removed-ingress-key-value-pairs">Removed <code>ingress</code> key/value pairs</h3> <table> <thead> <tr> <th>Key</th> <th>Default Value</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td><code>ingress.service.ports.nodePort</code></td> <td><code>32000</code></td> <td></td> </tr> <tr> <td><code>ingress.service.selector.istio</code></td> <td><code>ingress</code></td> <td></td> </tr> <tr> <td><code>ingress.autoscaleMin</code></td> <td><code>1</code></td> <td></td> </tr> <tr> <td><code>ingress.service.loadBalancerIP</code></td> <td><code>&quot;&quot;</code></td> <td></td> </tr> <tr> <td><code>ingress.enabled</code></td> <td><code>false</code></td> <td></td> </tr> <tr> <td><code>ingress.service.annotations</code></td> <td><code>{}</code></td> <td></td> </tr> <tr> <td><code>ingress.service.ports.name</code></td> <td><code>http</code></td> <td></td> </tr> <tr> <td><code>ingress.service.ports.name</code></td> <td><code>https</code></td> <td></td> </tr> <tr> <td><code>ingress.autoscaleMax</code></td> <td><code>5</code></td> <td></td> </tr> <tr> <td><code>ingress.replicaCount</code></td> <td><code>1</code></td> <td></td> </tr> <tr> <td><code>ingress.service.type</code></td> <td><code>LoadBalancer #change to NodePort, ClusterIP or LoadBalancer if need be</code></td> <td></td> </tr> </tbody> </table> <h3 id="removed-servicegraph-key-value-pairs">Removed <code>servicegraph</code> key/value pairs</h3> <table> <thead> <tr> <th>Key</th> <th>Default Value</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td><code>servicegraph</code></td> <td><code>servicegraph.local</code></td> <td></td> </tr> <tr> <td><code>servicegraph.ingress</code></td> <td><code>servicegraph.local</code></td> <td></td> </tr> <tr> <td><code>servicegraph.service.internalPort</code></td>
<td><code>8088</code></td> <td></td> </tr> </tbody> </table> <h3 id="removed-telemetry-gateway-key-value-pairs">Removed <code>telemetry-gateway</code> key/value pairs</h3> <table> <thead> <tr> <th>Key</th> <th>Default Value</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td><code>telemetry-gateway.prometheusEnabled</code></td> <td><code>false</code></td> <td></td> </tr> <tr> <td><code>telemetry-gateway.gatewayName</code></td> <td><code>ingressgateway</code></td> <td></td> </tr> <tr> <td><code>telemetry-gateway.grafanaEnabled</code></td> <td><code>false</code></td> <td></td> </tr> </tbody> </table> <h3 id="removed-global-key-value-pairs">Removed <code>global</code> key/value pairs</h3> <table> <thead> <tr> <th>Key</th> <th>Default Value</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td><code>global.hyperkube.tag</code></td> <td><code>v1.7.6_coreos.0</code></td> <td></td> </tr> <tr> <td><code>global.k8sIngressHttps</code></td> <td><code>false</code></td> <td></td> </tr> <tr> <td><code>global.crds</code></td> <td><code>true</code></td> <td></td> </tr> <tr> <td><code>global.hyperkube.hub</code></td> <td><code>quay.io/coreos</code></td> <td></td> </tr> <tr> <td><code>global.meshExpansion</code></td> <td><code>false</code></td> <td></td> </tr> <tr> <td><code>global.k8sIngressSelector</code></td> <td><code>ingress</code></td> <td></td> </tr> <tr> <td><code>global.meshExpansionILB</code></td> <td><code>false</code></td> <td></td> </tr> </tbody> </table> <h3 id="removed-mixer-key-value-pairs">Removed <code>mixer</code> key/value pairs</h3> <table> <thead> <tr> <th>Key</th> <th>Default Value</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td><code>mixer.autoscaleMin</code></td> <td><code>1</code></td> <td></td> </tr> <tr> <td><code>mixer.istio-policy.cpu.targetAverageUtilization</code></td> <td><code>80</code></td> <td></td> </tr> <tr> <td><code>mixer.autoscaleMax</code></td> <td><code>5</code></td> <td></td> </tr> <tr> 
<td><code>mixer.istio-telemetry.autoscaleMin</code></td> <td><code>1</code></td> <td></td> </tr> <tr> <td><code>mixer.prometheusStatsdExporter.tag</code></td> <td><code>v0.6.0</code></td> <td></td> </tr> <tr> <td><code>mixer.istio-telemetry.autoscaleMax</code></td> <td><code>5</code></td> <td></td> </tr> <tr> <td><code>mixer.istio-telemetry.cpu.targetAverageUtilization</code></td> <td><code>80</code></td> <td></td> </tr> <tr> <td><code>mixer.istio-policy.autoscaleEnabled</code></td> <td><code>true</code></td> <td></td> </tr> <tr> <td><code>mixer.istio-telemetry.autoscaleEnabled</code></td> <td><code>true</code></td> <td></td> </tr> <tr> <td><code>mixer.replicaCount</code></td> <td><code>1</code></td> <td></td> </tr> <tr> <td><code>mixer.prometheusStatsdExporter.hub</code></td> <td><code>docker.io/prom</code></td> <td></td> </tr> <tr> <td><code>mixer.istio-policy.autoscaleMin</code></td> <td><code>1</code></td> <td></td> </tr> <tr> <td><code>mixer.istio-policy.autoscaleMax</code></td> <td><code>5</code></td> <td></td> </tr> </tbody> </table> <h3 id="removed-grafana-key-value-pairs">Removed <code>grafana</code> key/value pairs</h3> <table> <thead> <tr> <th>Key</th> <th>Default Value</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td><code>grafana.image</code></td> <td><code>grafana</code></td> <td></td> </tr> <tr> <td><code>grafana.service.internalPort</code></td> <td><code>3000</code></td> <td></td> </tr> <tr> <td><code>grafana.security.adminPassword</code></td> <td><code>admin</code></td> <td></td> </tr> <tr> <td><code>grafana.security.adminUser</code></td> <td><code>admin</code></td> <td></td> </tr> </tbody> </table> <h3 id="removed-gateways-key-value-pairs">Removed <code>gateways</code> key/value pairs</h3> <table> <thead> <tr> <th>Key</th> <th>Default Value</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td><code>gateways.istio-ilbgateway.replicaCount</code></td> <td><code>1</code></td> <td></td> </tr> <tr> 
<td><code>gateways.istio-egressgateway.replicaCount</code></td> <td><code>1</code></td> <td></td> </tr> <tr> <td><code>gateways.istio-ingressgateway.replicaCount</code></td> <td><code>1</code></td> <td></td> </tr> <tr> <td><code>gateways.istio-ingressgateway.ports.name</code></td> <td><code>tcp-pilot-grpc-tls</code></td> <td></td> </tr> <tr> <td><code>gateways.istio-ingressgateway.ports.name</code></td> <td><code>tcp-citadel-grpc-tls</code></td> <td></td> </tr> <tr> <td><code>gateways.istio-ingressgateway.ports.name</code></td> <td><code>http2-prometheus</code></td> <td></td> </tr> <tr> <td><code>gateways.istio-ingressgateway.ports.name</code></td> <td><code>http2-grafana</code></td> <td></td> </tr> <tr> <td><code>gateways.istio-ingressgateway.ports.targetPort</code></td> <td><code>15011</code></td> <td></td> </tr> <tr> <td><code>gateways.istio-ingressgateway.ports.targetPort</code></td> <td><code>8060</code></td> <td></td> </tr> </tbody> </table> <h3 id="removed-tracing-key-value-pairs">Removed <code>tracing</code> key/value pairs</h3> <table> <thead> <tr> <th>Key</th> <th>Default Value</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td><code>tracing.service.internalPort</code></td> <td><code>9411</code></td> <td></td> </tr> <tr> <td><code>tracing.replicaCount</code></td> <td><code>1</code></td> <td></td> </tr> <tr> <td><code>tracing.jaeger.ingress</code></td> <td><code>jaeger.local</code></td> <td></td> </tr> <tr> <td><code>tracing.ingress</code></td> <td><code>tracing.local</code></td> <td></td> </tr> <tr> <td><code>tracing.jaeger</code></td> <td><code>jaeger.local</code></td> <td></td> </tr> <tr> <td><code>tracing</code></td> <td><code>jaeger.local tracing.local</code></td> <td></td> </tr> <tr> <td><code>tracing.jaeger.ingress.hosts</code></td> <td><code>jaeger.local</code></td> <td></td> </tr> <tr> <td><code>tracing.jaeger.ingress.enabled</code></td> <td><code>false</code></td> <td></td> </tr> <tr> <td><code>tracing.ingress.hosts</code></td> 
<td><code>tracing.local</code></td> <td></td> </tr> <tr> <td><code>tracing.jaeger.ui.port</code></td> <td><code>16686</code></td> <td></td> </tr> </tbody> </table> <h3 id="removed-kiali-key-value-pairs">Removed <code>kiali</code> key/value pairs</h3> <table> <thead> <tr> <th>Key</th> <th>Default Value</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td><code>kiali.dashboard.username</code></td> <td><code>admin</code></td> <td></td> </tr> <tr> <td><code>kiali.dashboard.passphrase</code></td> <td><code>admin</code></td> <td></td> </tr> </tbody> </table> <h3 id="removed-pilot-key-value-pairs">Removed <code>pilot</code> key/value pairs</h3> <table> <thead> <tr> <th>Key</th> <th>Default Value</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td><code>pilot.replicaCount</code></td> <td><code>1</code></td> <td></td> </tr> </tbody> </table> <!-- AUTO-GENERATED-END -->Mon, 01 Jan 0001 00:00:00 +0000/v1.9/news/releases/1.1.x/announcing-1.1/helm-changes//v1.9/news/releases/1.1.x/announcing-1.1/helm-changes/Upgrade Notes <p>When you upgrade from Istio 1.8 to Istio 1.9.x, you need to consider the changes on this page. These notes detail the changes which purposefully break backwards compatibility with Istio 1.8. The notes also mention changes which preserve backwards compatibility while introducing new behavior. Changes are only included if the new behavior would be unexpected to a user of Istio 1.8.</p> <h2 id="peerauthentication-per-port-level-configuration-will-now-also-apply-to-pass-through-filter-chains">PeerAuthentication per-port-level configuration will now also apply to pass through filter chains</h2> <p>Previously, the PeerAuthentication per-port-level configuration was ignored if the port number was not defined in a service, and the traffic was handled by a pass through filter chain.
Now the per-port-level setting is supported even if the port number is not defined in a service: a special pass through filter chain is added to respect the corresponding per-port-level mTLS specification. Please check your PeerAuthentication resources to make sure you are not using the per-port-level configuration on pass through filter chains; this was never a supported feature, and if you are currently relying on the unsupported behavior you should update your PeerAuthentication accordingly before the upgrade. You don&rsquo;t need to do anything if you are not using per-port-level PeerAuthentication on pass through filter chains.</p> <h2 id="service-tags-added-to-trace-spans">Service Tags added to trace spans</h2> <p>Istio now configures Envoy to include tags identifying the canonical service for a workload in generated trace spans.</p> <p>This will lead to a small increase in storage per span for tracing backends.</p> <p>To disable these additional tags, modify the <code>istiod</code> deployment to set the environment variable <code>PILOT_ENABLE_ISTIO_TAGS=false</code>.</p> <h2 id="envoyfilter-xds-v2-removal"><code>EnvoyFilter</code> XDS v2 removal</h2> <p>Envoy has removed support for the XDS v2 API.
<code>EnvoyFilter</code>s depending on these APIs must be updated before upgrading.</p> <p>For example:</p> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' >apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: add-header
spec:
  configPatches:
  - applyTo: HTTP_FILTER
    match:
      context: SIDECAR_OUTBOUND
      listener:
        filterChain:
          filter:
            name: envoy.http_connection_manager
            subFilter:
              name: envoy.router
    patch:
      operation: INSERT_BEFORE
      value:
        name: envoy.lua
        typed_config:
          &#34;@type&#34;: type.googleapis.com/envoy.config.filter.http.lua.v2.Lua
          inlineCode: |
            function envoy_on_request(handle)
              handle:headers():add(&#34;foo&#34;, &#34;bar&#34;)
            end
</code></pre> <p>Should be updated to:</p> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' >apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: add-header
spec:
  configPatches:
  - applyTo: HTTP_FILTER
    match:
      context: SIDECAR_OUTBOUND
      listener:
        filterChain:
          filter:
            name: envoy.filters.network.http_connection_manager
            subFilter:
              name: envoy.filters.http.router
    patch:
      operation: INSERT_BEFORE
      value:
        name: envoy.lua
        typed_config:
          &#34;@type&#34;: type.googleapis.com/envoy.extensions.filters.http.lua.v3.Lua
          inlineCode: |
            function envoy_on_request(handle)
              handle:headers():add(&#34;foo&#34;, &#34;bar&#34;)
            end
</code></pre> <p>Both <code>istioctl analyze</code> and the validating webhook (run during <code>kubectl apply</code>) will warn about deprecated usage:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl apply -f envoyfilter.yaml
Warning: using deprecated filter name &#34;envoy.http_connection_manager&#34;; use &#34;envoy.filters.network.http_connection_manager&#34; instead
Warning: using deprecated filter name &#34;envoy.router&#34;; use &#34;envoy.filters.http.router&#34; instead
Warning: using deprecated type_url(s); type.googleapis.com/envoy.config.filter.http.lua.v2.Lua
envoyfilter.networking.istio.io/add-header configured
</code></pre> <p>If these filters are applied, the Envoy proxy will reject the configuration (<code>The v2 xDS major version is deprecated and disabled by default.</code>) and be unable to receive updated configurations.</p> <p>In general, we recommend that <code>EnvoyFilter</code>s be applied to a specific version to ensure Envoy changes do not break them during upgrade. This can be done with a <code>match</code> clause:</p> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' >match:
  proxy:
    proxyVersion: ^1\.9.*
</code></pre> <p>However, since Istio 1.8 supports both v2 and v3 XDS versions, your <code>EnvoyFilter</code>s can also be updated before upgrading Istio.</p>Mon, 01 Jan 0001 00:00:00 +0000/v1.9/news/releases/1.9.x/announcing-1.9/upgrade-notes//v1.9/news/releases/1.9.x/announcing-1.9/upgrade-notes/Upgrade Notes <p>When you upgrade from Istio 1.7.x to Istio 1.8.x, you need to consider the changes on this page. These notes detail the changes which purposefully break backwards compatibility with Istio 1.7.x. The notes also mention changes which preserve backwards compatibility while introducing new behavior. Changes are only included if the new behavior would be unexpected to a user of Istio 1.7.x.</p> <h2 id="multicluster-global-stub-domain-deprecation">Multicluster <code>.global</code> Stub Domain Deprecation</h2> <p>As part of this release, Istio has switched to a new configuration for multi-primary (formerly &ldquo;replicated control planes&rdquo;). The new configuration is simpler, has fewer limitations, and has been thoroughly tested in a variety of environments.
As a result, the <code>.global</code> stub domain is now deprecated and no longer guaranteed to work going forward.</p> <h2 id="mixer-is-no-longer-supported-in-istio">Mixer is no longer supported in Istio</h2> <p>If you are using the <code>istio-policy</code> or <code>istio-telemetry</code> services, or any related Mixer configuration, you will not be able to upgrade without taking action to either (a) convert your existing configuration and code to the new extension model for Istio or (b) use the gRPC shim developed to bridge the transition to the new model. For more details, please refer to the <a href="https://github.com/istio/istio/wiki/Enabling-Envoy-Authorization-Service-and-gRPC-Access-Log-Service-With-Mixer">developer wiki</a>.</p> <h2 id="the-semantics-of-revision-for-gateways-in-istiooperator-has-changed-from-1-7-to-1-8">The semantics of revision for gateways in <code>IstioOperator</code> has changed from 1.7 to 1.8</h2> <p>In 1.7, <code>revision</code> meant you were creating a new gateway with a different revision so that it would not conflict with the default gateway. In 1.8, it means the revision of istiod that configures the gateway. If you are using revision for gateways in <code>IstioOperator</code> in 1.7, before moving to 1.8, you must update it to the revision of istiod (or delete the revision if you don’t use revisions). See <a href="https://github.com/istio/istio/issues/28849">Issue #28849</a>.</p> <h2 id="istio-coredns-plugin-deprecation">Istio CoreDNS Plugin Deprecation</h2> <p>The Istio sidecar now provides native support for DNS resolution with <code>ServiceEntries</code> using <code>meshConfig.defaultConfig.proxyMetadata.ISTIO_META_DNS_CAPTURE=&quot;true&quot;</code>. Previously, this support was provided by the third party <a href="https://github.com/istio-ecosystem/istio-coredns-plugin">Istio CoreDNS plugin</a>.
As a result, the <code>istio-coredns-plugin</code> is now deprecated and will be removed in a future release.</p> <h2 id="use-the-new-filter-names-for-envoyfilter">Use the new filter names for <code>EnvoyFilter</code></h2> <p>If you are using the <code>EnvoyFilter</code> API, it is recommended to change to the new filter names as described in Envoy&rsquo;s <a href="https://www.envoyproxy.io/docs/envoy/latest/version_history/v1.14.0#deprecated">deprecation notice</a>. The deprecated filter names will be supported in this release for backward compatibility but will be removed in future releases.</p> <h2 id="inbound-cluster-name-format">Inbound Cluster Name Format</h2> <p>The format of inbound Envoy cluster names has changed. Previously, they included the Service hostname and port name, such as <code>inbound|80|http|httpbin.default.svc.cluster.local</code>. This led to issues when multiple Services select the same pod. As a result, we have removed the port name and hostname; the new format will instead resemble <code>inbound|80||</code>.</p> <p>For most users, this is an implementation detail, and will only impact debugging or tooling that directly interacts with Envoy configuration.</p> <h2 id="avoid-use-of-mesh-expansion-installation-flags">Avoid use of mesh expansion installation flags</h2> <p>To ease setup for multicluster and virtual machines while giving more control to users, the <code>meshExpansion</code> and <code>meshExpansionPorts</code> installation flags have been deprecated, and port 15012 has been added to the default list of ports for the <code>istio-ingressgateway</code> Service.</p> <p>For users with <code>values.global.meshExpansion.enabled=true</code>, perform the following steps before upgrading Istio:</p> <ol> <li>Apply the code sample for exposing Istiod through ingress.</li> </ol> <div><a data-skipendnotes='true' style='display:none'
href='https://raw.githubusercontent.com/istio/istio/release-1.9/samples/multicluster/expose-istiod.yaml'>Zip</a><pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl apply -f @samples/multicluster/expose-istiod.yaml@ </code></pre></div> <p>This removes <code>operator.istio.io/managed</code> labels from the associated Istio networking resources so that the Istio installer won&rsquo;t delete them. After this step, you can modify these resources freely.</p> <ol start="2"> <li>If <code>components.ingressGateways[name=istio-ingressgateway].k8s.service.ports</code> is overridden, add port 15012 to the list of ports:</li> </ol> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' >- port: 15012
  targetPort: 15012
  name: tcp-istiod
</code></pre> <ol start="3"> <li><p>If <code>values.gateways.istio-ingressgateway.meshExpansionPorts</code> is set, move all ports to <code>components.ingressGateways[name=istio-ingressgateway].k8s.service.ports</code> if they&rsquo;re not already present. Then, unset this value.</p></li> <li><p>Unset <code>values.global.meshExpansion.enabled</code>.</p></li> </ol> <h2 id="protocol-detection-timeout-changes">Protocol Detection Timeout Changes</h2> <p>In order to support permissive mTLS traffic as well as <a href="/v1.9/docs/ops/configuration/traffic-management/protocol-selection/#automatic-protocol-selection">automatic protocol detection</a>, the proxy will sniff the first few bytes of traffic to determine the protocol used. For certain &ldquo;server first&rdquo; protocols, such as the protocol used by <code>MySQL</code>, there will be no initial bytes to sniff. To mitigate this issue, Istio previously introduced a detection timeout. However, we found this caused frequent telemetry and traffic failures during slow connections, while increasing latency for misconfigured server first protocols rather than failing fast.</p> <p>This timeout has been disabled by default.
This has the following impacts:</p> <ul> <li>Non &ldquo;server first&rdquo; protocols will no longer have a risk of telemetry or traffic failures during slow connections</li> <li>Properly configured &ldquo;server first&rdquo; protocols will no longer have an extra 5 seconds of latency on each connection</li> <li>Improperly configured &ldquo;server first&rdquo; protocols will experience connection timeouts. Please ensure you follow the steps listed in <a href="/v1.9/docs/ops/configuration/traffic-management/protocol-selection/#server-first-protocols">Server First Protocols</a> to ensure you do not run into traffic issues.</li> </ul> <h2 id="update-authorizationpolicy-resources-to-use-remoteipblocks-notremoteipblocks-instead-of-ipblocks-notipblocks-if-using-the-proxy-protocol">Update AuthorizationPolicy resources to use <code>remoteIpBlocks</code>/<code>notRemoteIpBlocks</code> instead of <code>ipBlocks</code>/<code>notIpBlocks</code> if using the Proxy Protocol</h2> <div> <aside class="callout warning"> <div class="type"> <svg class="large-icon"><use xlink:href="/v1.9/img/icons.svg#callout-warning"/></svg> </div> <div class="content"><p>A critical <a href="https://groups.google.com/g/envoy-security-announce/c/aqtBt5VUor0">bug</a> has been identified in Envoy whereby the proxy protocol downstream address is restored incorrectly for non-HTTP connections.</p> <p>Please DO NOT USE the <code>remoteIpBlocks</code> field and <code>remote_ip</code> attribute with the proxy protocol on non-HTTP connections until a newer version of Istio is released with a proper fix.</p> <p>Note that Istio doesn&rsquo;t support the proxy protocol; it can be enabled only with the <code>EnvoyFilter</code> API and should be used at your own risk.</p> </div> </aside> </div> <p>If using the Proxy Protocol on a load balancer in front of an ingress gateway in conjunction with <code>ipBlocks</code>/<code>notIpBlocks</code> on an AuthorizationPolicy to perform IP-based access control, then please update the
AuthorizationPolicy to use <code>remoteIpBlocks</code>/<code>notRemoteIpBlocks</code> instead after upgrading. The <code>ipBlocks</code>/<code>notIpBlocks</code> fields now strictly refer to the source IP address of the packet that arrives at the sidecar.</p> <h2 id="auto-passthrough-gateway-mode"><code>AUTO_PASSTHROUGH</code> Gateway mode</h2> <p>Previously, gateways were configured with multiple Envoy <code>cluster</code> configurations for each Service in the cluster, even those not referenced by any <code>Gateway</code> or <code>VirtualService</code>. This was added to support the <code>AUTO_PASSTHROUGH</code> mode on Gateway, generally used for exposing Services across networks.</p> <p>However, this came at an increased CPU and memory cost in the gateway and Istiod. As a result, we have disabled these by default on the <code>istio-ingressgateway</code> and <code>istio-egressgateway</code>.</p> <p>If you are relying on this feature for multi-network support, please apply one of the following changes:</p> <ol> <li>Follow our new <a href="/v1.9/docs/setup/install/multicluster/">Multicluster Installation</a> documentation.</li> </ol> <p>This documentation will guide you through running a dedicated gateway deployment for this type of traffic (generally referred to as the <code>eastwest-gateway</code>). This <code>eastwest-gateway</code> will automatically be configured to support <code>AUTO_PASSTHROUGH</code>.</p> <ol start="2"> <li>Modify your installation of the gateway deployment to include this configuration. This is controlled by the <code>ISTIO_META_ROUTER_MODE</code> environment variable.
Setting this to <code>sni-dnat</code> enables these clusters, while <code>standard</code> (the new default) disables them.</li> </ol> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' >ingressGateways:
- name: istio-ingressgateway
  enabled: true
  k8s:
    env:
    - name: ISTIO_META_ROUTER_MODE
      value: &#34;sni-dnat&#34;
</code></pre> <h2 id="connectivity-issues-among-your-proxies-when-updating-from-1-7-x-where-x-5">Connectivity issues among your proxies when updating from 1.7.x (where x &lt; 5)</h2> <p>When upgrading your Istio data plane from 1.7.x (where x &lt; 5) to 1.8, you may observe connectivity issues between your gateway and your sidecars or among your sidecars with 503 errors in the log. This happens when 1.7.5+ proxies send HTTP 1xx or 204 response codes with headers that 1.7.x proxies reject. To fix this, upgrade all your proxies (gateways and sidecars) to 1.7.5+ as soon as possible. (<a href="https://github.com/istio/istio/issues/29427">Issue 29427</a>, <a href="https://github.com/istio/istio/pull/28450">More information</a>)</p>Mon, 01 Jan 0001 00:00:00 +0000/v1.9/news/releases/1.8.x/announcing-1.8/upgrade-notes//v1.9/news/releases/1.8.x/announcing-1.8/upgrade-notes/Upgrade Notes <p>When you upgrade from Istio 1.6.x to Istio 1.7.x, you need to consider the changes on this page. These notes detail the changes which purposefully break backwards compatibility with Istio 1.6.x. The notes also mention changes which preserve backwards compatibility while introducing new behavior.
Changes are only included if the new behavior would be unexpected to a user of Istio 1.6.x.</p> <h2 id="require-kubernetes-1-16">Require Kubernetes 1.16+</h2> <p>Kubernetes 1.16+ is now required for installation.</p> <h2 id="installation">Installation</h2> <ul> <li><code>istioctl manifest apply</code> is removed; please use <code>istioctl install</code> instead.</li> <li>Installation of telemetry addons by istioctl is deprecated; please use these <a href="/v1.9/docs/ops/integrations/">addons integration instructions</a>.</li> </ul> <h2 id="gateways-run-as-non-root">Gateways run as non-root</h2> <p>Gateways will now run without root permissions by default. As a result, they will no longer be able to bind to ports below 1024. By default, we will bind to valid ports. However, if you are explicitly declaring ports on the gateway, you may need to modify your installation. For example, if you previously had the following configuration:</p> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' >ingressGateways:
- name: istio-ingressgateway
  enabled: true
  k8s:
    service:
      ports:
      - port: 15021
        targetPort: 15021
        name: status-port
      - port: 80
        name: http2
      - port: 443
        name: https
</code></pre> <p>It should be changed to specify a valid <code>targetPort</code> that can be bound to:</p> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' >ingressGateways:
- name: istio-ingressgateway
  enabled: true
  k8s:
    service:
      ports:
      - port: 15021
        targetPort: 15021
        name: status-port
      - port: 80
        name: http2
        targetPort: 8080
      - port: 443
        name: https
        targetPort: 8443
</code></pre> <p>Note: the <code>targetPort</code> only modifies which port the gateway binds to.
Clients will still connect to the port defined by <code>port</code> (generally 80 and 443), so this change should be transparent.</p> <p>If you need to run as root, this option can be enabled with <code>--set values.gateways.istio-ingressgateway.runAsRoot=true</code>.</p> <h2 id="envoyfilter-syntax-change"><code>EnvoyFilter</code> syntax change</h2> <p><code>EnvoyFilter</code>s using the legacy <code>config</code> syntax will need to be migrated to the new <code>typed_config</code>. This is due to <a href="https://github.com/istio/istio/issues/19885">underlying changes</a> in Envoy&rsquo;s API.</p> <p>As <code>EnvoyFilter</code> is a <a href="/v1.9/docs/reference/config/networking/envoy-filter/">break glass API</a> without backwards compatibility guarantees, we recommend users explicitly bind <code>EnvoyFilter</code>s to specific versions and appropriately test them prior to upgrading.</p> <p>For example, a configuration for Istio 1.6, using the legacy <code>config</code> syntax:</p> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' >apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: lua-1.6
spec:
  configPatches:
  - applyTo: HTTP_FILTER
    match:
      context: ANY
      listener:
        filterChain:
          filter:
            name: envoy.http_connection_manager
      proxy:
        proxyVersion: ^1\.6.*
    patch:
      operation: INSERT_BEFORE
      value:
        name: envoy.lua
        config:
          inlineCode: |
            function envoy_on_request(request_handle)
              request_handle:headers():add(&#34;foo&#34;, &#34;bar&#34;)
            end
</code></pre> <p>When upgrading to Istio 1.7, a new filter should be added:</p> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' >apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: lua-1.7
spec:
  configPatches:
  - applyTo: HTTP_FILTER
    match:
      context: ANY
      listener:
        filterChain:
          filter:
            name: envoy.http_connection_manager
      proxy:
        proxyVersion: ^1\.7.*
    patch:
      operation: INSERT_BEFORE
      value:
        name: envoy.filters.http.lua
        typed_config:
          &#39;@type&#39;: type.googleapis.com/envoy.extensions.filters.http.lua.v3.Lua
          inlineCode: |
            function envoy_on_request(request_handle)
              request_handle:headers():add(&#34;foo&#34;, &#34;bar&#34;)
            end
</code></pre>Mon, 01 Jan 0001 00:00:00 +0000/v1.9/news/releases/1.7.x/announcing-1.7/upgrade-notes//v1.9/news/releases/1.7.x/announcing-1.7/upgrade-notes/Upgrade Notes <p>When you upgrade from Istio 1.5.x to Istio 1.6.x, you need to consider the changes on this page. These notes detail the changes which purposefully break backwards compatibility with Istio 1.5.x. The notes also mention changes which preserve backwards compatibility while introducing new behavior. Changes are only included if the new behavior would be unexpected to a user of Istio 1.5.x.</p> <p>Currently, Istio doesn&rsquo;t support skip-level upgrades. If you are using Istio 1.4, you must upgrade to Istio 1.5 first, and then upgrade to Istio 1.6. If you upgrade from versions earlier than Istio 1.4, you should first disable Galley&rsquo;s configuration validation.</p> <p>Update the Galley deployment using the following steps:</p> <ol> <li><p>To edit the Galley deployment configuration, run the following command:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl edit deployment -n istio-system istio-galley </code></pre></li> <li><p>Add the <code>--enable-validation=false</code> option to the <code>command:</code> section as shown below:</p> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' >apiVersion: extensions/v1beta1
kind: Deployment
...
spec:
  ...
  template:
    ...
    spec:
      ...
      containers:
        - command:
            ...
            - --log_output_level=default:info
            - --enable-validation=false
</code></pre></li> <li><p>Save and quit the editor to update the deployment configuration in the cluster.</p></li> </ol> <p>Remove the <code>ValidatingWebhookConfiguration</code> resource with the following command:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl delete ValidatingWebhookConfiguration istio-galley -n istio-system </code></pre> <h2 id="change-the-readiness-port-of-gateways">Change the readiness port of gateways</h2> <p>If you are using the <code>15020</code> port to check the health of your Istio ingress gateway with your Kubernetes network load balancer, change the port from <code>15020</code> to <code>15021</code>.</p> <h2 id="removal-of-legacy-helm-charts">Removal of legacy Helm charts</h2> <p>Istio 1.4 introduced a <a href="/v1.9/blog/2019/introducing-istio-operator/">new way to install Istio</a> using the in-cluster Operator or the <code>istioctl install</code> command. Part of this change meant deprecating the old Helm charts in 1.5. Many new Istio features rely on the new installation method. As a result, Istio 1.6 doesn&rsquo;t include the old Helm installation charts.</p> <p>Go to the <a href="/v1.9/news/releases/1.5.x/announcing-1.5/upgrade-notes/#control-plane-restructuring">Istio 1.5 Upgrade Notes</a> before you continue because Istio 1.5 introduced several changes not present in the legacy installation method, such as Istiod and telemetry v2.</p> <p>To safely upgrade from the legacy installation method that uses Helm charts, perform a <a href="/v1.9/blog/2020/multiple-control-planes/">control plane revision</a>. Upgrading in-place is not supported.
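</p> <p>For reference, a revision-based install of the kind used by canary upgrades can be sketched as follows (the revision name <code>1-6</code> and the <code>default</code> namespace are illustrative; see the canary upgrade documentation for the full procedure):</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' ># install a second control plane under a revision name
$ istioctl install --set revision=1-6
# switch a namespace to the new revision, then restart its workloads
$ kubectl label namespace default istio-injection- istio.io/rev=1-6
$ kubectl rollout restart deployment -n default
</code></pre> <p>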
Upgrading could result in downtime unless you perform a <a href="/v1.9/docs/setup/upgrade/#canary-upgrades">canary upgrade</a>.</p> <h2 id="support-ended-for-v1alpha1-security-policy">Support ended for <code>v1alpha1</code> security policy</h2> <p>Istio 1.6 no longer supports the following security policy APIs:</p> <ul> <li><a href="https://archive.istio.io/v1.4/docs/reference/config/security/istio.authentication.v1alpha1/"><code>v1alpha1</code> authentication policy</a></li> <li><a href="https://archive.istio.io/v1.4/docs/reference/config/security/istio.rbac.v1alpha1/"><code>v1alpha1</code> RBAC policy</a></li> </ul> <p>Starting in Istio 1.6, Istio ignores these <code>v1alpha1</code> security policy APIs.</p> <p>Istio 1.6 replaces the <code>v1alpha1</code> authentication policy with the following APIs:</p> <ul> <li>The <a href="/v1.9/docs/reference/config/security/request_authentication"><code>v1beta1</code> request authentication policy</a></li> <li>The <a href="/v1.9/docs/reference/config/security/peer_authentication"><code>v1beta1</code> peer authentication policy</a></li> </ul> <p>Istio 1.6 replaces the <code>v1alpha1</code> RBAC policy APIs with the <a href="/v1.9/docs/reference/config/security/authorization-policy/"><code>v1beta1</code> authorization policy APIs</a>.</p> <p>Verify that there are no <code>v1alpha1</code> security policies in your clusters with the following commands:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl get policies.authentication.istio.io --all-namespaces
$ kubectl get meshpolicies.authentication.istio.io --all-namespaces
$ kubectl get rbacconfigs.rbac.istio.io --all-namespaces
$ kubectl get clusterrbacconfigs.rbac.istio.io --all-namespaces
$ kubectl get serviceroles.rbac.istio.io --all-namespaces
$ kubectl get servicerolebindings.rbac.istio.io --all-namespaces
</code></pre> <p>If there are any <code>v1alpha1</code> security policies in your clusters, migrate to the new APIs before
upgrading.</p> <h2 id="istio-configuration-during-installation">Istio configuration during installation</h2> <p>Past Istio releases deployed configuration objects during installation. The presence of those objects caused the following issues:</p> <ul> <li>Problems with upgrades</li> <li>A confusing user experience</li> <li>A less flexible installation</li> </ul> <p>To address these issues, Istio 1.6 minimized the configuration objects deployed during installation.</p> <p>The following configurations are impacted:</p> <ul> <li><code>global.mtls.enabled</code>: Configuration removed to avoid confusion. Configure a peer authentication policy to enable <a href="/v1.9/docs/tasks/security/authentication/authn-policy/#globally-enabling-istio-mutual-tls-in-strict-mode">strict mTLS</a> instead.</li> <li>No default <code>Gateway</code> and associated <code>Certificate</code> custom resources are deployed during installation. Go to the <a href="/v1.9/docs/tasks/traffic-management/ingress/">Ingress task</a> to configure a gateway for your mesh.</li> <li>Istio no longer creates <code>Ingress</code> custom resources for telemetry addons. Visit <a href="/v1.9/docs/tasks/observability/gateways/">remotely accessing telemetry addons</a> to learn how to reach the addons externally.</li> <li>The default sidecar configuration is no longer defined through the automatically generated <code>Sidecar</code> custom resource. The default configuration is implemented internally and the change should have no impact on deployments.</li> </ul> <h2 id="reach-istiod-through-external-workloads">Reach Istiod through external workloads</h2> <p>In Istio 1.6, Istiod is configured to be <code>cluster-local</code> by default. With <code>cluster-local</code> enabled, only workloads running on the same cluster can reach Istiod. Workloads on another cluster can only access the Istiod instance through the Istio gateway. 
This configuration prevents the ingress gateway of the master cluster from incorrectly forwarding service discovery requests to Istiod instances in remote clusters. The Istio team is actively investigating alternatives that would no longer require <code>cluster-local</code>.</p> <p>To override the default <code>cluster-local</code> behavior, modify the configuration in the <code>MeshConfig</code> section as shown below:</p> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' >values:
  meshConfig:
    serviceSettings:
    - settings:
        clusterLocal: false
      hosts:
      - &#34;istiod.istio-system.svc.cluster.local&#34;
</code></pre>Mon, 01 Jan 0001 00:00:00 +0000/v1.9/news/releases/1.6.x/announcing-1.6/upgrade-notes//v1.9/news/releases/1.6.x/announcing-1.6/upgrade-notes/Upgrade Notes <p>This page describes changes you need to be aware of when upgrading from Istio 1.4.x to 1.5.x. Here, we detail cases where we intentionally broke backwards compatibility. We also mention cases where backwards compatibility was preserved but new behavior was introduced that would be surprising to someone familiar with the use and operation of Istio 1.4.</p> <h2 id="control-plane-restructuring">Control Plane Restructuring</h2> <p>In Istio 1.5, we have moved towards a new deployment model for the control plane, with many components consolidated. The following describes where various functionality has been moved to.</p> <h3 id="istiod">Istiod</h3> <p>In Istio 1.5, there will be a new deployment, <code>istiod</code>. This component is the core of the control plane, and will handle configuration and certificate distribution, sidecar injection, and more.</p> <h3 id="sidecar-injection">Sidecar injection</h3> <p>Previously, sidecar injection was handled by a mutating webhook that was processed by a deployment named <code>istio-sidecar-injector</code>. In Istio 1.5, the same mutating webhook remains, but it will now point to the <code>istiod</code> deployment.
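</p> <p>One way to confirm where the webhook now points (assuming the default webhook name <code>istio-sidecar-injector</code>):</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' ># prints the Service the injection webhook targets (expected: istiod)
$ kubectl get mutatingwebhookconfiguration istio-sidecar-injector -o jsonpath=&#39;{.webhooks[0].clientConfig.service.name}&#39;
</code></pre> <p>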
All injection logic remains the same.</p> <h3 id="galley">Galley</h3> <ul> <li>Configuration Validation - this functionality remains the same, but is now handled by the <code>istiod</code> deployment.</li> <li>MCP Server - the MCP server has been disabled by default. For most users, this is an implementation detail. If you depend on this functionality, you will need to run the <code>istio-galley</code> deployment.</li> <li>Experimental features (such as configuration analysis) - These features will require the <code>istio-galley</code> deployment.</li> </ul> <h3 id="citadel">Citadel</h3> <p>Previously, Citadel served two functions: writing certificates to secrets in each namespace, and serving secrets to the <code>nodeagent</code> over <code>gRPC</code> when SDS is used. In Istio 1.5, secrets are no longer written to each namespace. Instead, they are only served over gRPC. This functionality has been moved to the <code>istiod</code> deployment.</p> <h3 id="sds-node-agent">SDS Node Agent</h3> <p>The <code>nodeagent</code> deployment has been removed. This functionality now exists in the Envoy sidecar.</p> <h3 id="sidecar">Sidecar</h3> <p>Previously, the sidecar was able to access certificates in two ways: through secrets mounted as files, or over SDS (through the <code>nodeagent</code> deployment). In Istio 1.5, this has been simplified. All secrets will be served over a locally run SDS server. For most users, these secrets will be fetched from the <code>istiod</code> deployment. For users with a custom CA, file mounted secrets can still be used, however, these will still be served by the local SDS server. 
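</p> <p>You can inspect the certificates a given proxy has received over SDS with istioctl (the pod name shown is illustrative):</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ istioctl proxy-config secret httpbin-66c464b4d7-zwrmc.default
</code></pre> <p>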
This means that certificate rotations will no longer require Envoy to restart.</p> <h3 id="cni">CNI</h3> <p>There have been no changes to the deployment of <code>istio-cni</code>.</p> <h3 id="pilot">Pilot</h3> <p>The <code>istio-pilot</code> deployment has been removed in favor of the <code>istiod</code> deployment, which contains all functionality that Pilot once had. For backwards compatibility, there are still some references to Pilot.</p> <h2 id="mixer-deprecation">Mixer deprecation</h2> <p>Mixer, the process behind the <code>istio-telemetry</code> and <code>istio-policy</code> deployments, has been deprecated with the 1.5 release. <code>istio-policy</code> was disabled by default since Istio 1.3 and <code>istio-telemetry</code> is disabled by default in Istio 1.5.</p> <p>Telemetry is collected using an in-proxy extension mechanism (Telemetry V2) that does not require Mixer.</p> <p>If you depend on specific Mixer features like out of process adapters, you may re-enable Mixer. Mixer will continue receiving bug fixes and security fixes until Istio 1.7. Many features supported by Mixer have alternatives as specified in the <a href="https://tinyurl.com/mixer-deprecation">Mixer Deprecation</a> document including the <a href="https://github.com/istio/proxy/tree/master/extensions">in-proxy extensions</a> based on the WebAssembly sandbox API.</p> <p>If you rely on a Mixer feature that does not have an equivalent, we encourage you to open issues and discuss in the community.</p> <p>Please check <a href="https://tinyurl.com/mixer-deprecation">Mixer Deprecation</a> notice for details.</p> <h3 id="feature-gaps-between-telemetry-v2-and-mixer-telemetry">Feature gaps between Telemetry V2 and Mixer Telemetry</h3> <ul> <li>Out of mesh telemetry is not supported. 
Some telemetry is missing if the traffic source or destination is not sidecar injected.</li> <li>Egress gateway telemetry is <a href="https://github.com/istio/istio/issues/19385">not supported</a>.</li> <li>TCP telemetry is only supported with <code>mtls</code>.</li> <li>Black Hole telemetry for TCP and HTTP protocols is not supported.</li> <li>Histogram buckets are <a href="https://github.com/istio/istio/issues/20483">significantly different</a> than Mixer Telemetry and cannot be changed.</li> </ul> <h2 id="traffic-management-resource-visibility-changes">Traffic management resource visibility changes</h2> <p>In Istio 1.5 proxy configuration for hosts is determined by <a href="/v1.9/docs/reference/config/networking/virtual-service"><code>VirtualService</code></a> visibility in addition to that of any relevant <a href="/v1.9/docs/reference/config/networking/service-entry/"><code>ServiceEntry</code></a>.</p> <p>If in previous versions you relied on <a href="/v1.9/docs/reference/config/networking/sidecar/"><code>Sidecar</code></a> resources to restrict the visibility of hosts (mesh internal or external) to a target set of sidecar proxies, you now also need to consider the hosts implied by any <a href="/v1.9/docs/reference/config/networking/virtual-service"><code>VirtualService</code></a>.</p> <p>Depending on your use of <a href="/v1.9/docs/reference/config/networking/sidecar/"><code>Sidecar</code></a> resources in your mesh, this may require you to review the namespaces that your <a href="/v1.9/docs/reference/config/networking/virtual-service"><code>VirtualService</code></a>s are in to ensure only the intended workloads can see them.</p> <p>More details on this change can be found at <a href="https://github.com/istio/istio/issues/24251">24251</a> and <a href="https://github.com/istio/istio/pull/20408">20408</a>.</p> <h2 id="authentication-policy">Authentication policy</h2> <p>Istio 1.5 introduces <a 
href="/v1.9/docs/reference/config/security/peer_authentication/"><code>PeerAuthentication</code></a> and <a href="/v1.9/docs/reference/config/security/request_authentication/"><code>RequestAuthentication</code></a>, which replace the alpha version of the Authentication API. For more information about how to use the new API, see the <a href="/v1.9/docs/tasks/security/authentication/authn-policy">authentication policy</a> tutorial.</p> <ul> <li>After you upgrade Istio, your alpha authentication policies remain in place and continue to be used. You can gradually replace them with the equivalent <code>PeerAuthentication</code> and <code>RequestAuthentication</code>. The new policy takes precedence over the old policy within the scope where it is defined. We recommend starting with workload-specific (the most specific scope), then namespace-wide, and finally mesh-wide.</li> <li>After you replace policies for workload, namespace, and mesh, you can safely remove the alpha authentication policies. To delete the alpha policies, use these commands:</li> </ul> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl delete policies.authentication.istio.io --all-namespaces --all
$ kubectl delete meshpolicies.authentication.istio.io --all
</code></pre> <h2 id="istio-workload-key-and-certificate-provisioning">Istio workload key and certificate provisioning</h2> <ul> <li>We have stabilized the SDS certificate and key provisioning flow. Istio workloads now use SDS to provision certificates. The secret volume mount approach is deprecated.</li> <li>Please note that when mutual TLS is enabled, the Prometheus deployment needs to be manually modified to monitor the workloads. The details are described in this <a href="https://github.com/istio/istio/issues/21843">issue</a>. This is not required in 1.5.1.</li> </ul> <h2 id="automatic-mutual-tls">Automatic mutual TLS</h2> <p>Automatic mutual TLS is now enabled by default.
Traffic between sidecars is automatically configured as mutual TLS. If you are concerned about the encryption overhead, you can disable this explicitly by adding the option <code>--set values.global.mtls.auto=false</code> during install. For more details, refer to <a href="/v1.9/docs/tasks/security/authentication/authn-policy/#auto-mutual-tls">automatic mutual TLS</a>.</p> <h2 id="control-plane-security">Control plane security</h2> <p>As part of the Istiod effort, we have changed how proxies securely communicate with the control plane. In previous versions, proxies would connect to the control plane securely when the setting <code>values.global.controlPlaneSecurityEnabled=true</code> was configured, which was the default for Istio 1.4. Each control plane component ran a sidecar with Citadel certificates, and proxies connected to Pilot over port 15011.</p> <p>In Istio 1.5, this is no longer the recommended or default way to connect the proxies with the control plane; instead, DNS certificates, which can be signed by Kubernetes or Istiod, will be used to connect to Istiod over port 15012.</p> <p>Note: despite the naming, in Istio 1.5 when <code>controlPlaneSecurityEnabled</code> is set to <code>false</code>, communication between the proxies and the control plane will still be secure by default.</p> <h2 id="multicluster-setup">Multicluster setup</h2> <div> <aside class="callout warning"> <div class="type"> <svg class="large-icon"><use xlink:href="/v1.9/img/icons.svg#callout-warning"/></svg> </div> <div class="content"><p>We recommend that you <strong>do not upgrade</strong> to Istio 1.5.0 if you are using a multicluster setup.</p> <p>Istio 1.5.0 multicluster setup has several known issues (<a href="https://github.com/istio/istio/issues/21702">21702</a>, <a href="https://github.com/istio/istio/issues/21676">21676</a>) that make it unusable in both shared control plane and replicated control plane deployments.
These issues will be resolved in Istio 1.5.1.</p> </div> </aside> </div> <h2 id="helm-upgrade">Helm upgrade</h2> <p>If you used <code>helm upgrade</code> to update your cluster to newer Istio versions, we recommend switching to <a href="https://archive.istio.io/v1.5/docs/setup/upgrade/istioctl-upgrade/"><code>istioctl upgrade</code></a> or following the <a href="https://istio.io/v1.4/docs/setup/upgrade/cni-helm-upgrade/">helm template</a> steps.</p>Mon, 01 Jan 0001 00:00:00 +0000/v1.9/news/releases/1.5.x/announcing-1.5/upgrade-notes//v1.9/news/releases/1.5.x/announcing-1.5/upgrade-notes/Upgrade Notes <p>This page describes changes you need to be aware of when upgrading from Istio 1.3.x to 1.4.x. Here, we detail cases where we intentionally broke backwards compatibility. We also mention cases where backwards compatibility was preserved but new behavior was introduced that would be surprising to someone familiar with the use and operation of Istio 1.3.</p> <h2 id="traffic-management">Traffic management</h2> <h3 id="http-services-on-port-443">HTTP services on port 443</h3> <p>Services of type <code>http</code> are no longer allowed on port 443. This change was made to prevent protocol conflicts with external HTTPS services.</p> <p>If you depend on this behavior, there are a few options:</p> <ul> <li>Move the application to another port.</li> <li>Change the protocol from type <code>http</code> to type <code>tcp</code>.</li> <li>Set the environment variable <code>PILOT_BLOCK_HTTP_ON_443=false</code> on the Pilot deployment.
Note: this may be removed in future releases.</li> </ul> <p>See <a href="/v1.9/docs/ops/configuration/traffic-management/protocol-selection/">Protocol Selection</a> for more information about specifying the protocol of a port</p> <h3 id="regex-engine-changes">Regex Engine Changes</h3> <p>To prevent excessive resource consumption from large regular expressions, Envoy has moved to a new regular expression engine based on <a href="https://github.com/google/re2"><code>re2</code></a>. Previously, <code>std::regex</code> was used. These two engines may have slightly different syntax; in particular, the regex fields are now limited to 100 bytes.</p> <p>If you depend on specific behavior of the old regex engine, you can opt out of this change by adding the environment variable <code>PILOT_ENABLE_UNSAFE_REGEX=true</code> to the Pilot deployment. Note: this will be removed in future releases.</p> <h2 id="configuration-management">Configuration management</h2> <p>We introduced OpenAPI v3 schemas in the Kubernetes <a href="https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/#customresourcedefinitions">Custom Resource Definitions (CRD)</a> of Istio resources. The schemas describe the Istio resources and help ensure the Istio resources you create and modify are structurally correct.</p> <p>If one or more fields in your configurations are unknown or have wrong types, they will be rejected by the Kubernetes API server when you create or modify Istio resources. This feature, <code>CustomResourceValidation</code>, is on by default for Kubernetes 1.9+ clusters. 
Please note that existing configurations already in Kubernetes are <strong>NOT</strong> affected if they stay unchanged.</p> <p>To help with your upgrade, here are some steps you could take:</p> <ul> <li>After upgrading Istio, run your Istio configurations with <code>kubectl apply --dry-run</code> so that you can check whether the configurations will be accepted by the API server, and see any unknown or invalid fields. (The <code>DryRun</code> feature is on by default for Kubernetes 1.13+ clusters.)</li> <li>Use the <a href="/v1.9/docs/reference/config/">reference documentation</a> to confirm and correct the field names and data types.</li> <li>In addition to structural validation, you can also use <code>istioctl x analyze</code> to help you detect other potential issues with your Istio configurations. Refer to the <a href="/v1.9/docs/ops/diagnostic-tools/istioctl-analyze/">documentation</a> for more details.</li> </ul> <p>If you choose to ignore the validation errors, add <code>--validate=false</code> to your <code>kubectl</code> command when you create or modify Istio resources. However, we strongly discourage doing so, since it knowingly introduces incorrect configuration.</p> <h2 id="leftover-crd">Leftover CRD</h2> <p>Istio 1.4 introduces a new CRD <code>authorizationpolicies.security.istio.io</code> for the <a href="/v1.9/docs/reference/config/security/authorization-policy/">authorization policy</a>.
Your cluster may have an interim leftover CRD <code>authorizationpolicies.rbac.istio.io</code> due to an internal implementation detail before Istio 1.4.</p> <p>The leftover CRD is unused and you can safely remove it from the cluster using this command:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl delete crd authorizationpolicies.rbac.istio.io --ignore-not-found=true </code></pre>Mon, 01 Jan 0001 00:00:00 +0000/v1.9/news/releases/1.4.x/announcing-1.4/upgrade-notes//v1.9/news/releases/1.4.x/announcing-1.4/upgrade-notes/Upgrade Notes <p>This page describes changes you need to be aware of when upgrading from Istio 1.2.x to 1.3.x. Here, we detail cases where we intentionally broke backwards compatibility. We also mention cases where backwards compatibility was preserved but new behavior was introduced that would be surprising to someone familiar with the use and operation of Istio 1.2.</p> <h2 id="installation-and-upgrade">Installation and upgrade</h2> <p>We simplified the configuration model for Mixer and removed support for adapter-specific and template-specific Custom Resource Definitions (CRDs) entirely in 1.3. Please move to the new configuration model.</p> <p>We removed the Mixer CRDs from the system to simplify the configuration model, improve Mixer&rsquo;s performance in Kubernetes deployments, and improve reliability in various Kubernetes environments.</p> <h2 id="traffic-management">Traffic management</h2> <p>Istio now captures all ports by default. If you don&rsquo;t specify container ports to intentionally bypass Envoy, you must opt out of port capturing with the <code>traffic.sidecar.istio.io/excludeInboundPorts</code> option.</p> <p>Protocol sniffing is now enabled by default. Disable protocol sniffing with the <code>--set pilot.enableProtocolSniffing=false</code> option when you upgrade to get the previous behavior. 
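</p> <p>Rather than disabling sniffing globally, you can also declare the protocol explicitly in the Service port name, which bypasses sniffing for that port (a hypothetical example):</p> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' >apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  ports:
  - port: 8080
    name: http-web # the &#34;http&#34; prefix declares the protocol explicitly
</code></pre> <p>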
To learn more, see our <a href="/v1.9/docs/ops/configuration/traffic-management/protocol-selection/">protocol selection page</a>.</p> <p>To specify a hostname in multiple namespaces, you must select a single host using a <a href="/v1.9/docs/reference/config/networking/sidecar/"><code>Sidecar</code> resource</a>.</p> <h2 id="trust-domain-validation">Trust domain validation</h2> <p>Trust domain validation is new in Istio 1.3. If you only have one trust domain or you don&rsquo;t enable mutual TLS through authentication policies, there is nothing you must do.</p> <p>To opt out of trust domain validation, include the following flag in your Helm template before upgrading to Istio 1.3: <code>--set pilot.env.PILOT_SKIP_VALIDATE_TRUST_DOMAIN=true</code></p> <h2 id="secret-discovery-service">Secret discovery service</h2> <p>In Istio 1.3, we are taking advantage of improvements in Kubernetes to issue certificates for workload instances more securely.</p> <p>Kubernetes 1.12 introduced <code>trustworthy</code> JWTs to solve these issues. <a href="https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.13.md">Kubernetes 1.13</a> introduced the ability to change the value of the <code>aud</code> field to a value other than the API server. The <code>aud</code> field represents the audience in Kubernetes. To better secure the mesh, Istio 1.3 only supports <code>trustworthy</code> JWTs and requires the audience, the value of the <code>aud</code> field, to be <code>istio-ca</code> when you enable SDS.</p> <p>Before upgrading to Istio 1.3 with SDS enabled, see our blog post on <a href="/v1.9/blog/2019/trustworthy-jwt-sds/">trustworthy JWTs and SDS</a>.</p>Mon, 01 Jan 0001 00:00:00 +0000/v1.9/news/releases/1.3.x/announcing-1.3/upgrade-notes//v1.9/news/releases/1.3.x/announcing-1.3/upgrade-notes/Upgrade Notes <p>This page describes changes you need to be aware of when upgrading from Istio 1.1.x to 1.2.x.
Here, we detail cases where we intentionally broke backwards compatibility. We also mention cases where backwards compatibility was preserved but new behavior was introduced that would be surprising to someone familiar with the use and operation of Istio 1.1.</p> <h2 id="installation-and-upgrade">Installation and upgrade</h2> <div> <aside class="callout tip"> <div class="type"><svg class="large-icon"><use xlink:href="/v1.9/img/icons.svg#callout-tip"/></svg></div> <div class="content">The configuration model for Mixer has been simplified. Support for adapter-specific and template-specific Custom Resources has been removed by default in 1.2 and will be removed entirely in 1.3. Please move to the new configuration model.</div> </aside> </div> <p>Most Mixer CRDs were removed from the system to simplify the configuration model, improve performance of Mixer when used with Kubernetes, and improve reliability in a variety of Kubernetes environments.</p> <p>The following CRDs remain:</p> <table> <thead> <tr> <th>Custom Resource Definition name</th> <th>Purpose</th> </tr> </thead> <tbody> <tr> <td><code>adapter</code></td> <td>Specification of Istio extension declarations</td> </tr> <tr> <td><code>attributemanifest</code></td> <td>Specification of Istio extension declarations</td> </tr> <tr> <td><code>template</code></td> <td>Specification of Istio extension declarations</td> </tr> <tr> <td><code>handler</code></td> <td>Specification of extension invocations</td> </tr> <tr> <td><code>rule</code></td> <td>Specification of extension invocations</td> </tr> <tr> <td><code>instance</code></td> <td>Specification of extension invocations</td> </tr> </tbody> </table> <p>In the event you are using the removed mixer configuration schemas, set the following Helm flags during upgrade of the main Helm chart: <code>--set mixer.templates.useTemplateCRDs=true --set mixer.adapters.useAdapterCRDs=true</code></p>Mon, 01 Jan 0001 00:00:00 
+0000/v1.9/news/releases/1.2.x/announcing-1.2/upgrade-notes//v1.9/news/releases/1.2.x/announcing-1.2/upgrade-notes/Upgrade Notes <p>This page describes changes you need to be aware of when upgrading from Istio 1.0.x to 1.1.x. Here we detail cases where we intentionally broke backwards compatibility. We also mention cases where backwards compatibility was preserved but new behavior was introduced that would be surprising to someone familiar with the use and operation of Istio 1.0.</p> <p>For an overview of new features introduced with Istio 1.1, please refer to the <a href="/v1.9/news/releases/1.1.x/announcing-1.1/change-notes/">1.1 change notes</a>.</p> <h2 id="installation">Installation</h2> <ul> <li><p>We have increased the required CPU and memory for the control plane and Envoy sidecars. It is critical to ensure your cluster has enough resources before proceeding with the upgrade.</p></li> <li><p>Istio’s CRDs have been placed into their own Helm chart <code>istio-init</code>. This prevents loss of custom resource data, facilitates the upgrade process, and enables Istio to evolve beyond a Helm-based installation. The <a href="/v1.9/docs/setup/upgrade/">upgrade documentation</a> provides the proper procedures for upgrading from Istio 1.0.6 to Istio 1.1. Please follow these instructions carefully when upgrading. If <code>certmanager</code> is desired, use the <code>--set certmanager=true</code> flag when installing both <code>istio-init</code> and Istio charts with either <code>template</code> or <code>tiller</code> installation modes.</p></li> <li><p>Many installation options have been added, removed, or changed.
Refer to <a href="/v1.9/news/releases/1.1.x/announcing-1.1/helm-changes/">Installation Options Changes</a> for a detailed summary of the changes.</p></li> <li><p>The 1.0 <code>istio-remote</code> chart used for <a href="https://archive.istio.io/v1.1/docs/setup/kubernetes/install/multicluster/vpn/">multicluster VPN</a> and <a href="https://archive.istio.io/v1.1/docs/examples/multicluster/split-horizon-eds/">multicluster split horizon</a> remote cluster installation has been consolidated into the Istio chart. To generate an equivalent <code>istio-remote</code> chart, use the <code>--set global.istioRemote=true</code> flag.</p></li> <li><p>Addons are no longer exposed via separate load balancers. Instead addons can now be optionally exposed via the Ingress Gateway. To expose an addon via the Ingress Gateway, please follow the <a href="/v1.9/docs/tasks/observability/gateways/">Remotely Accessing Telemetry Addons</a> guide.</p></li> <li><p>The built-in Istio Statsd collector has been removed. Istio retains the capability of integrating with your own Statsd collector, using the <code>--set global.envoyStatsd.enabled=true</code> flag.</p></li> <li><p>The <code>ingress</code> series of options for configuring a Kubernetes Ingress have been removed. Kubernetes Ingress is still functional and can be enabled using the <code>--set global.k8sIngress.enabled=true</code> flag. Check out <a href="/v1.9/docs/ops/integrations/certmanager/">Securing Kubernetes Ingress with Cert-Manager</a> to learn how to secure your Kubernetes ingress resources.</p></li> </ul> <h2 id="traffic-management">Traffic management</h2> <ul> <li><p>Outbound traffic policy now defaults to <code>ALLOW_ANY</code>. Traffic to unknown ports will be forwarded as-is. 
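</p> <p>If you prefer the previous restrictive default, the outbound policy can be switched back to <code>REGISTRY_ONLY</code>; a minimal sketch of the corresponding Helm values (assuming the 1.1-era <code>values.yaml</code> layout):</p> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' >global:
  outboundTrafficPolicy:
    # block outbound traffic to hosts outside the service registry
    mode: REGISTRY_ONLY
</code></pre> <p>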
Traffic to known ports (e.g., port 80) will be matched with one of the services in the system and forwarded accordingly.</p></li> <li><p>During sidecar routing to a service, destination rules for the target service in the same namespace as the sidecar will take precedence, followed by destination rules in the service’s namespace, and finally followed by destination rules in other namespaces if applicable.</p></li> <li><p>We recommend storing gateway resources in the same namespace as the gateway workload (e.g., <code>istio-system</code> in case of <code>istio-ingressgateway</code>). When referring to gateway resources in virtual services, use the namespace/name format instead of using <code>name.namespace.svc.cluster.local</code>.</p></li> <li><p>The optional egress gateway is now disabled by default. It is enabled in the demo profile for users to explore but disabled in all other profiles by default. If you need to control and secure your outbound traffic through the egress gateway, you will need to enable <code>gateways.istio-egressgateway.enabled=true</code> manually in any of the non-demo profiles.</p></li> </ul> <h2 id="policy-telemetry">Policy &amp; telemetry</h2> <ul> <li><p><code>istio-policy</code> check is now disabled by default. It is enabled in the demo profile for users to explore but disabled in all other profiles. This change is only for <code>istio-policy</code> and not for <code>istio-telemetry</code>. In order to re-enable policy checking, run <code>helm template</code> with <code>--set global.disablePolicyChecks=false</code> and re-apply the configuration.</p></li> <li><p>The Service Graph component has now been deprecated in favor of <a href="https://www.kiali.io/">Kiali</a>.</p></li> </ul> <h2 id="security">Security</h2> <ul> <li>RBAC configuration has been modified to implement cluster scoping. The <code>RbacConfig</code> resource has been replaced with the <code>ClusterRbacConfig</code> resource. 
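<p>A minimal sketch of the new cluster-scoped resource (the mode and namespace list are illustrative; per the Istio 1.1 docs the resource name must be <code>default</code>):</p> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' >apiVersion: rbac.istio.io/v1alpha1
kind: ClusterRbacConfig
metadata:
  name: default
spec:
  # enable RBAC only for the listed namespaces
  mode: ON_WITH_INCLUSION
  inclusion:
    namespaces: [&#34;default&#34;]
</code></pre>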
Refer to <a href="https://archive.istio.io/v1.1/docs/setup/kubernetes/upgrade/steps/#migrating-from-rbacconfig-to-clusterrbacconfig">Migrating <code>RbacConfig</code> to <code>ClusterRbacConfig</code></a> for migration instructions.</li> </ul>Mon, 01 Jan 0001 00:00:00 +0000/v1.9/news/releases/1.1.x/announcing-1.1/upgrade-notes//v1.9/news/releases/1.1.x/announcing-1.1/upgrade-notes/Change Notes <h2 id="known-issues">Known Issues</h2> <ul> <li>Wasm extension configuration updates can be disruptive (see <a href="https://github.com/istio/istio/issues/29843">Issue #29843</a>).</li> </ul> <h2 id="traffic-management">Traffic Management</h2> <ul> <li><p><strong>Added</strong> a <a href="https://github.com/google/pprof">pprof</a> endpoint to pilot-agent. (<a href="https://github.com/istio/istio/issues/28040">Issue #28040</a>)</p></li> <li><p><strong>Added</strong> support for enabling gRPC logging with <code>--log_output_level</code> for pilot. (<a href="https://github.com/istio/istio/issues/28482">Issue #28482</a>)</p></li> <li><p><strong>Added</strong> a new experimental proxy option <a href="/v1.9/docs/ops/configuration/traffic-management/dns-proxy">DNS_AUTO_ALLOCATE</a>, to control auto allocation of ServiceEntry addresses. Previously, this option was tied to <code>DNS_CAPTURE</code>. Now, <code>DNS_CAPTURE</code> can be enabled without auto allocation. See <a href="/v1.9/blog/2020/dns-proxy/">Smart DNS Proxying</a> for more info. (<a href="https://github.com/istio/istio/issues/29324">Issue #29324</a>)</p></li> <li><p><strong>Fixed</strong> istiod will no longer generate listeners for privileged gateway ports (&lt;1024) if the gateway Pod does not have sufficient permissions. (<a href="https://github.com/istio/istio/issues/27566">Issue #27566</a>)</p></li> <li><p><strong>Fixed</strong> an issue that caused very high memory usage with a large number of <code>ServiceEntries</code>.
(<a href="https://github.com/istio/istio/issues/25531">Issue #25531</a>)</p></li> <li><p><strong>Removed</strong> support for reading Istio configuration over the Mesh Configuration Protocol (MCP). (<a href="https://github.com/istio/istio/pull/28634">Pull Request #28634</a>)</p></li> </ul> <h2 id="security">Security</h2> <ul> <li><p><strong>Added</strong> option to allow users to enable token exchange for their XDS flows, which exchanges a k8s token for a token that can be authenticated by their XDS servers. (<a href="https://github.com/istio/istio/issues/29943">Issue #29943</a>)</p></li> <li><p><strong>Added</strong> OIDC JWT authenticator that supports both JWKS-URI and OIDC discovery. The OIDC JWT authenticator will be used when configured through the JWT_RULE env variable. (<a href="https://github.com/istio/istio/issues/30295">Issue #30295</a>)</p></li> <li><p><strong>Added</strong> support of PeerAuthentication per-port-level configuration on pass through filter chains. (<a href="https://github.com/istio/istio/issues/27994">Issue #27994</a>)</p></li> <li><p><strong>Added</strong> an experimental <a href="/v1.9/docs/reference/config/security/authorization-policy/#AuthorizationPolicy-Action"><code>CUSTOM</code> action</a> in AuthorizationPolicy for integration with external authorization systems like OPA, OAuth2 and more. See <a href="/v1.9/blog/2021/better-external-authz/">the blog on this feature</a> for more info. (<a href="https://github.com/istio/istio/issues/27790">Issue #27790</a>)</p></li> </ul> <h2 id="telemetry">Telemetry</h2> <ul> <li><p><strong>Added</strong> Istio Grafana Dashboards Query Reporter Dropdown. (<a href="https://github.com/istio/istio/issues/27595">Issue #27595</a>)</p></li> <li><p><strong>Added</strong> canonical service tags to Envoy-generated trace spans. 
(<a href="https://github.com/istio/istio/pull/28801">Pull Request #28801</a>)</p></li> <li><p><strong>Fixed</strong> an issue to allow nested JSON structure in <code>meshConfig.accessLogFormat</code>. (<a href="https://github.com/istio/istio/issues/28597">Issue #28597</a>)</p></li> <li><p><strong>Updated</strong> Prometheus metrics to include <code>source_cluster</code> and <code>destination_cluster</code> labels by default for all scenarios. Previously, this was only enabled for multi-cluster scenarios. (<a href="https://github.com/istio/istio/pull/30036">Pull Request #30036</a>)</p></li> <li><p><strong>Updated</strong> default access log to include <code>RESPONSE_CODE_DETAILS</code> and <code>CONNECTION_TERMINATION_DETAILS</code> for proxy version &gt;= 1.9. (<a href="https://github.com/istio/istio/pull/27903">Pull Request #27903</a>)</p></li> </ul> <h2 id="extensibility">Extensibility</h2> <ul> <li><strong>Added</strong> <a href="/v1.9/docs/ops/configuration/extensibility/wasm-module-distribution">Reliable Wasm module remote load</a> with Istio agent. (<a href="https://github.com/istio/istio/issues/29989">Issue #29989</a>)</li> </ul> <h2 id="networking">Networking</h2> <ul> <li><p><strong>Added</strong> correct iptables rules and listener filter settings to preserve the original source IP in TPROXY mode within a cluster. (<a href="https://github.com/istio/istio/issues/23369">Issue #23369</a>)</p></li> <li><p><strong>Fixed</strong> a bug where locality weights were only applied when outlier detection was enabled. (<a href="https://github.com/istio/istio/issues/28942">Issue #28942</a>)</p></li> </ul> <h2 id="installation">Installation</h2> <ul> <li><p><strong>Added</strong> post-install/in-place upgrade verification of control plane health. Use the <code>--verify</code> flag with <code>istioctl install</code> or <code>istioctl upgrade</code>.
(<a href="https://github.com/istio/istio/issues/21715">Issue #21715</a>)</p></li> <li><p><strong>Added</strong> a <a href="https://github.com/google/pprof">pprof</a> endpoint to pilot-agent. (<a href="https://github.com/istio/istio/issues/28040">Issue #28040</a>)</p></li> <li><p><strong>Added</strong> <code>enableIstioConfigCRDs</code> to <code>base</code> to allow users to specify whether the Istio CRDs will be installed. (<a href="https://github.com/istio/istio/pull/28346">Pull Request #28346</a>)</p></li> <li><p><strong>Added</strong> support in Istio 1.9 for Kubernetes versions 1.17 to 1.20. (<a href="https://github.com/istio/istio/issues/30176">Issue #30176</a>)</p></li> <li><p><strong>Added</strong> support for applications that bind to their pod IP address, rather than wildcard or localhost address, through the <code>Sidecar</code> API. (<a href="https://github.com/istio/istio/pull/28178">Pull Request #28178</a>)</p></li> <li><p><strong>Fixed</strong> an issue where the revision was not applied to the scale target reference of <code>HorizontalPodAutoscaler</code> when helm values for <code>hpa</code> were specified explicitly. (<a href="https://github.com/istio/istio/issues/30203">Issue #30203</a>)</p></li> <li><p><strong>Improved</strong> the sidecar injector to better utilize pod labels to determine if injection is required. This is not enabled by default in this release, but can be tested using <code>--set values.sidecarInjectorWebhook.useLegacySelectors=false</code>. (<a href="https://github.com/istio/istio/pull/30013">Pull Request #30013</a>)</p></li> <li><p><strong>Updated</strong> Kiali addon to the latest version v1.29. (<a href="https://github.com/istio/istio/pull/30438">Pull Request #30438</a>)</p></li> </ul> <h2 id="istioctl">istioctl</h2> <ul> <li><p><strong>Added</strong> <code>istioctl install</code> will detect when a different Istio version is installed (istioctl vs. control plane version) and display a warning.
(<a href="https://github.com/istio/istio/issues/18487">Issue #18487</a>)</p></li> <li><p><strong>Added</strong> <code>istioctl apply</code> as an alias for <code>istioctl install</code>. (<a href="https://github.com/istio/istio/issues/28753">Issue #28753</a>)</p></li> <li><p><strong>Added</strong> <code>--browser</code> flag to <code>istioctl dashboard</code>, which controls whether you want to open a browser to view the dashboard. (<a href="https://github.com/istio/istio/issues/29022">Issue #29022</a>)</p></li> <li><p><strong>Added</strong> <code>istioctl verify-install</code> will indicate errors in red and expected configuration in green. (<a href="https://github.com/istio/istio/issues/29336">Issue #29336</a>)</p></li> <li><p><strong>Added</strong> the severity level for each analysis message in the <code>validationMessages</code> field within the <code>status</code> field. (<a href="https://github.com/istio/istio/issues/29445">Issue #29445</a>)</p></li> <li><p><strong>Added</strong> <code>WorkloadEntry</code> resources will be read from all clusters in multi-cluster installations and do not need to be duplicated. Makes Virtual Machine auto-registration compatible with multi-primary multi-cluster. This feature is disabled by default and can be enabled by setting the <code>PILOT_ENABLE_CROSS_CLUSTER_WORKLOAD_ENTRY</code> environment variable in istiod. (<a href="https://github.com/istio/istio/issues/29026">Issue #29026</a>)</p></li> <li><p><strong>Added</strong> <code>istioctl analyze</code> now informs if deprecated or alpha-level annotations are present. (These checks can be disabled using <code>--suppress &quot;IST0135=*&quot;</code> and <code>--suppress &quot;IST0136=*&quot;</code> respectively.) 
(<a href="https://github.com/istio/istio/issues/29154">Issue #29154</a>)</p></li> <li><p><strong>Added</strong> <code>istioctl x injector list</code> command to show which namespaces have Istio sidecar injection and, for control plane canaries, show all Istio injectors and the namespaces they control. (<a href="https://github.com/istio/istio/issues/23892">Issue #23892</a>)</p></li> <li><p><strong>Fixed</strong> <code>istioctl</code> wait now tracks resource&rsquo;s <code>metadata.generation</code> field, rather than <code>metadata.resourceVersion</code>. Command line arguments have been updated to reflect this. (<a href="https://github.com/istio/istio/issues/28797">Issue #28797</a>)</p></li> <li><p><strong>Fixed</strong> namespace shorthand flag missing in dashboard subcommand. (<a href="https://github.com/istio/istio/issues/28970">Issue #28970</a>)</p></li> <li><p><strong>Fixed</strong> <code>istioctl dashboard controlz</code> could not port forward to istiod pod. (<a href="https://github.com/istio/istio/issues/30208">Issue #30208</a>)</p></li> <li><p><strong>Fixed</strong> installation issue in which <code>--readiness-timeout</code> flag is not honored. (<a href="https://github.com/istio/istio/issues/30221">Issue #30221</a>)</p></li> <li><p><strong>Improved</strong> <code>verify-install</code> detects Istio injector without control plane. (<a href="https://github.com/istio/istio/issues/29607">Issue #29607</a>)</p></li> <li><p><strong>Removed</strong> <code>istioctl convert-ingress</code> command. (<a href="https://github.com/istio/istio/issues/29153">Issue #29153</a>)</p></li> <li><p><strong>Removed</strong> <code>istioctl experimental multicluster</code> command. (<a href="https://github.com/istio/istio/issues/29153">Issue #29153</a>)</p></li> <li><p><strong>Removed</strong> <code>istioctl experimental post-install</code> webhook command. 
(<a href="https://github.com/istio/istio/issues/29153">Issue #29153</a>)</p></li> <li><p><strong>Removed</strong> <code>istioctl register</code> and <code>deregister</code> commands. (<a href="https://github.com/istio/istio/issues/29153">Issue #29153</a>)</p></li> <li><p><strong>Updated</strong> <code>istioctl proxy-config log</code> to allow filtering logs based on label. (<a href="https://github.com/istio/istio/issues/27490">Issue #27490</a>)</p></li> </ul> <h2 id="documentation">Documentation</h2> <ul> <li><strong>Added</strong> The locality load balancing docs have been re-written into a formal traffic management task. The new docs describe in more detail how locality load balancing works as well as how to configure both failover and weighted distribution. In addition, the new docs are now automatically verified for correctness. (<a href="https://github.com/istio/istio/pull/29651">Pull Request #29651</a>)</li> </ul>Mon, 01 Jan 0001 00:00:00 +0000/v1.9/news/releases/1.9.x/announcing-1.9/change-notes//v1.9/news/releases/1.9.x/announcing-1.9/change-notes/Change Notes <h2 id="known-issues">Known Issues</h2> <ul> <li><p>Memory leak in TCP Wasm extensions affecting TCP telemetry (see <a href="https://github.com/istio/istio/issues/24720">Issue #24720</a>). The leak occurs when upstream connections are interrupted mid-stream.</p></li> <li><p>Wasm extension configuration updates are disruptive (see <a href="https://github.com/envoyproxy/envoy/issues/13690">Issue #13690</a>). The configuration is immediately applied for existing requests and connections, and is not reverted if the outer xDS is rejected.</p></li> <li><p>Race condition with Envoy aggregate cluster when creating an <code>EnvoyFilter</code> and <code>ServiceEntry</code> for the same service. Istio-injected pods are unable to start up due to <code>istio-proxy</code> crashing with a segfault. 
See <a href="https://github.com/istio/istio/issues/28620">Issue #28620</a> for more information.</p></li> </ul> <h2 id="traffic-management">Traffic Management</h2> <ul> <li><p><strong>Deprecated</strong> the use of the <code>.global</code> stub domain for multi-primary (formerly &ldquo;replicated control planes&rdquo;) multicluster configurations. The new onboarding flow uses a simpler configuration which allows services across the mesh to be accessed via <code>*.cluster.local</code>. There were several limitations with <code>.global</code>, such as poor load balancing when using gateways, which are no longer an issue with the new configuration.</p></li> <li><p><strong>Added</strong> DNS capture in istio-agent by default for VMs installed using <code>istioctl x workload entry configure</code>.</p></li> <li><p><strong>Added</strong> <code>holdApplicationUntilProxyStarts</code> field to <code>ProxyConfig</code>, allowing it to be configured at the pod level. Should not be used in conjunction with the deprecated <code>values.global.proxy.holdApplicationUntilProxyStarts</code> value. (<a href="https://github.com/istio/istio/issues/27696">Issue #27696</a>)</p></li> </ul> <!-- - **Added** support for injecting `istio-cni` into `k8s.v1.cni.cncf.io/networks` annotation with preexisting value that uses JSON notation. ([Issue #25744](https://github.com/istio/istio/issues/25744)) --> <ul> <li><p><strong>Added</strong> support for <code>INSERT_FIRST</code>, <code>INSERT_BEFORE</code>, <code>INSERT_AFTER</code> insert operations for <code>HTTP_ROUTE</code> in <code>EnvoyFilter</code> (<a href="https://github.com/istio/istio/issues/26692">Issue #26692</a>)</p></li> <li><p><strong>Added</strong> <code>REPLACE</code> operation for <code>EnvoyFilter</code>. <code>REPLACE</code> operation can replace the contents of a named filter with new contents. It is only valid for <code>HTTP_FILTER</code> and <code>NETWORK_FILTER</code>. 
(<a href="https://github.com/istio/istio/issues/27425">Issue #27425</a>)</p></li> <li><p><strong>Added</strong> Istio resource status now includes observed generation. (<a href="https://github.com/istio/istio/issues/28003">Issue #28003</a>)</p></li> <li><p><strong>Fixed</strong> removal of endpoints when the new labels in <code>WorkloadEntry</code> do not match the <code>workloadSelector</code> in <code>ServiceEntry</code>. (<a href="https://github.com/istio/istio/issues/25678">Issue #25678</a>)</p></li> <li><p><strong>Fixed</strong> when a node has multiple IP addresses (e.g., a VM in the mesh expansion scenario), Istio Proxy will now bind <code>inbound</code> listeners to the first applicable address in the list (new behavior) rather than to the last one (former behavior). (<a href="https://github.com/istio/istio/issues/28269">Issue #28269</a>)</p></li> </ul> <h2 id="security">Security</h2> <ul> <li><p><strong>Improved</strong> Gateway certificates to be read and distributed from Istiod, rather than in the gateway pods. This reduces the permissions required in the gateways, improves performance, and makes certificate reading more extensible. This change is fully backwards compatible with the old mechanism, and requires no changes to your cluster. If required, it can be disabled by setting the <code>ISTIOD_ENABLE_SDS_SERVER=false</code> environment variable in Istiod. (<a href="https://github.com/istio/istio/pull/27744">Pull Request #27744</a>)</p></li> <li><p><strong>Improved</strong> TLS configuration on sidecar server-side inbound paths to enforce <code>TLSv2</code> version along with recommended cipher suites. If this is not needed or creates problems with non-Envoy clients, it can be disabled by setting the Istiod environment variable <code>PILOT_SIDECAR_ENABLE_INBOUND_TLS_V2</code> to false.
(<a href="https://github.com/istio/istio/pull/27500">Pull Request #27500</a>)</p></li> <li><p><strong>Updated</strong> The <code>ipBlocks</code>/<code>notIpBlocks</code> fields of an <code>AuthorizationPolicy</code> now strictly refer to the source IP address of the IP packet as it arrives at the sidecar. Prior to this release, if using the Proxy Protocol, then the <code>ipBlocks</code>/<code>notIpBlocks</code> would refer to the IP address determined by the Proxy Protocol. Now the <code>remoteIpBlocks</code>/<code>notRemoteIpBlocks</code> fields must be used to refer to the client IP address from the Proxy Protocol. (<a href="/v1.9/docs/reference/config/security/authorization-policy/">reference</a>)(<a href="/v1.9/docs/ops/configuration/traffic-management/network-topologies/">usage</a>)(<a href="/v1.9/docs/tasks/security/authorization/authz-ingress/">usage</a>) (<a href="https://github.com/istio/istio/issues/22341">Issue #22341</a>)</p></li> <li><p><strong>Added</strong> <code>AuthorizationPolicy</code> now supports nested JWT claims. (<a href="https://github.com/istio/istio/issues/21340">Issue #21340</a>)</p></li> <li><p><strong>Added</strong> support for client side Envoy secure naming config when trust domain alias is used. This fixes the multi-cluster service discovery client SAN generation to use all endpoints&rsquo; service accounts rather than the first found service registry. (<a href="https://github.com/istio/istio/pull/26185">Pull Request #26185</a>)</p></li> <li><p><strong>Added</strong> Experimental feature support allowing Istiod to integrate with external certificate authorities using Kubernetes CSR API (&gt;=1.18 only). 
(<a href="https://github.com/istio/istio/issues/27606">Issue #27606</a>)(<a href="/v1.9/docs/tasks/security/cert-management/custom-ca-k8s/">usage</a>)</p></li> <li><p><strong>Added</strong> the ability for users to set a custom VM identity provider for credential authentication. (<a href="https://github.com/istio/istio/issues/27947">Issue #27947</a>)</p></li> <li><p><strong>Added</strong> an &lsquo;AUDIT&rsquo; action to Authorization Policy that can be used to determine which requests should be audited. (<a href="https://github.com/istio/istio/issues/25591">Issue #25591</a>)</p></li> <li><p><strong>Added</strong> support for migration and concurrent use of regular K8S tokens as well as new K8S tokens with audience. This feature is enabled by default and can be disabled with the <code>REQUIRE_3P_TOKEN</code> environment variable in Istiod, which will require new tokens with an audience. The <code>TOKEN_AUDIENCES</code> environment variable allows customizing the checked audience; the default remains <code>istio-ca</code>. (<a href="https://github.com/istio/istio/pull/26482">Pull Request #26482</a>)</p></li> <li><p><strong>Added</strong> <code>AuthorizationPolicy</code> now supports a <code>Source</code> of type <code>remoteIpBlocks</code>/<code>notRemoteIpBlocks</code> that maps to a new <code>Condition</code> attribute called <code>remote.ip</code> that can also be used in the &ldquo;when&rdquo; clause. If using an HTTP/HTTPS load balancer in front of the ingress gateway, the <code>remote.ip</code> attribute is set to the original client IP address determined by the <code>X-Forwarded-For</code> HTTP header from the trusted proxy, configured through the <code>numTrustedProxies</code> field of the <code>gatewayTopology</code> under the <code>meshConfig</code> when you install Istio, or set via an annotation on the ingress gateway. See the documentation here: <a href="/v1.9/docs/ops/configuration/traffic-management/network-topologies/">Configuring Gateway Network Topology</a>.
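</p> <p>A minimal sketch of a policy using the new fields (the policy name and CIDR are illustrative):</p> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' >apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: ingress-allowlist # hypothetical name
  namespace: istio-system
spec:
  selector:
    matchLabels:
      app: istio-ingressgateway
  action: ALLOW
  rules:
  - from:
    - source:
        # matches the original client address, e.g. from X-Forwarded-For
        remoteIpBlocks: [&#34;203.0.113.0/24&#34;]
</code></pre> <p>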
If using a TCP load balancer with the Proxy Protocol in front of the ingress gateway, the <code>remote.ip</code> is set to the original client IP address as given by the Proxy Protocol. (<a href="/v1.9/docs/reference/config/security/authorization-policy/">reference</a>)(<a href="/v1.9/docs/ops/configuration/traffic-management/network-topologies/">usage</a>)(<a href="/v1.9/docs/tasks/security/authorization/authz-ingress/">usage</a>) (<a href="https://github.com/istio/istio/issues/22341">Issue #22341</a>)</p></li> </ul> <div> <aside class="callout warning"> <div class="type"> <svg class="large-icon"><use xlink:href="/v1.9/img/icons.svg#callout-warning"/></svg> </div> <div class="content"><p>A critical <a href="https://groups.google.com/g/envoy-security-announce/c/aqtBt5VUor0">bug</a> has been identified in Envoy where the proxy protocol downstream address is restored incorrectly for non-HTTP connections.</p> <p>Please DO NOT USE the <code>remoteIpBlocks</code> field and <code>remote_ip</code> attribute with the proxy protocol on non-HTTP connections until a newer version of Istio is released with a proper fix.</p> <p>Note that Istio doesn&rsquo;t support the proxy protocol; it can be enabled only with the <code>EnvoyFilter</code> API and should be used at your own risk.</p> </div> </aside> </div> <h2 id="telemetry">Telemetry</h2> <ul> <li><p><strong>Updated</strong> the &ldquo;Control Plane Dashboard&rdquo; and the &ldquo;Performance Dashboard&rdquo; to use the <code>container_memory_working_set_bytes</code> metric to display memory. This metric only counts memory that <em>cannot be reclaimed</em> by the kernel even under memory pressure, and is therefore more relevant for tracking. It is also consistent with <code>kubectl top</code>. The reported values are lower than the previous values.</p></li> <li><p><strong>Updated</strong> the Istio Workload and Istio Service dashboards, resulting in faster load times.
(<a href="https://github.com/istio/istio/issues/22408">Issue #22408</a>)</p></li> <li><p><strong>Added</strong> <code>datasource</code> parameter to Grafana dashboards. (<a href="https://github.com/istio/istio/issues/22408">Issue #22408</a>)</p></li> <li><p><strong>Added</strong> Listener Access Logs when <code>ResponseFlag</code> from Envoy is set. (<a href="https://github.com/istio/istio/issues/26851">Issue #26851</a>)</p></li> <li><p><strong>Added</strong> support for <code>OpenCensusAgent</code> formatted trace export with configurable trace context headers.</p></li> <li><p><strong>Added</strong> Proxy config to control Envoy native stats generation. (<a href="https://github.com/istio/istio/issues/26546">Issue #26546</a>)</p></li> <li><p><strong>Added</strong> Istio Wasm Extension Grafana Dashboard. (<a href="https://github.com/istio/istio/issues/25843">Issue #25843</a>)</p></li> <li><p><strong>Added</strong> gRPC streaming message count proxy Prometheus metrics <code>istio_request_messages_total</code> and <code>istio_response_messages_total</code>. (<a href="https://github.com/istio/proxy/pull/3048">Pull Request #3048</a>)</p></li> <li><p><strong>Added</strong> support for properly labeling traffic in client metrics for cases when the destination is not reached or is not behind a proxy. (<a href="https://github.com/istio/istio/issues/20538">Issue #20538</a>)</p></li> <li><p><strong>Fixed</strong> interpretation of <code>$(HOST_IP)</code> in Zipkin and Datadog tracer address. (<a href="https://github.com/istio/istio/issues/27911">Issue #27911</a>)</p></li> <li><p><strong>Removed</strong> all Mixer-related features and functionality. This is a scheduled removal of the deprecated Istio services and deployments, as well as Mixer-focused CRDs, components, and related functionality.
(<a href="https://github.com/istio/istio/issues/25333">Issue #25333</a>),(<a href="https://github.com/istio/istio/issues/24300">Issue #24300</a>)</p></li> </ul> <h2 id="installation">Installation</h2> <ul> <li><p><strong>Promoted</strong> <a href="/v1.9/docs/setup/additional-setup/external-controlplane/">external control plane</a> to alpha. (<a href="https://github.com/istio/enhancements/issues/11">Issue #11</a>)</p></li> <li><p><strong>Updated</strong> Kiali addon to version 1.26.</p></li> <li><p><strong>Added</strong> support for <a href="/v1.9/docs/setup/install/helm/">installing and upgrading Istio</a> using <a href="https://helm.sh/docs/">Helm 3</a></p></li> <li><p><strong>Improved</strong> multi-network configuration so that labeling a service with <code>topology.istio.io/network=network-name</code> can configure cross-network gateways without using <a href="/v1.9/docs/reference/config/istio.mesh.v1alpha1/#MeshNetworks">mesh networks</a>.</p></li> <li><p><strong>Improved</strong> sidecar injection to not modify the pod <code>securityPolicy.fsGroup</code> which could conflict with existing settings and secret mounts. This option is enabled automatically on Kubernetes 1.19+ and is not supported on older versions. (<a href="https://github.com/istio/istio/issues/26882">Issue #26882</a>)</p></li> <li><p><strong>Improved</strong> Generated operator manifests for use with <code>kustomize</code> are available in the <a href="https://github.com/istio/istio/tree/release-1.9/manifests/charts/istio-operator/files">manifests</a> directory. (<a href="https://github.com/istio/istio/issues/27139">Issue #27139</a>)</p></li> <li><p><strong>Updated</strong> install script to bypass GitHub API Rate Limiting. (<a href="https://github.com/istio/istio/pull/23469">Pull Request #23469</a>)</p></li> <li><p><strong>Added</strong> port <code>15012</code> to the default list of ports for the <code>istio-ingressgateway</code> Service. 
(<a href="https://github.com/istio/istio/issues/25933">Issue #25933</a>)</p></li> <li><p><strong>Added</strong> support for Kubernetes versions 1.16 to 1.19 to Istio 1.8. (<a href="https://github.com/istio/istio/issues/28814">Issue #28814</a>)</p></li> <li><p><strong>Added</strong> the ability to specify the network for a Pod using the label <code>topology.istio.io/network</code>. This overrides the setting for the cluster&rsquo;s installation values (<code>values.global.network</code>). If the label isn&rsquo;t set, it is injected based on the global value for the cluster. (<a href="https://github.com/istio/istio/issues/25500">Issue #25500</a>)</p></li> <li><p><strong>Deprecated</strong> the installation flags <code>values.global.meshExpansion.enabled</code> in favor of user-managed config and <code>values.gateways.istio-ingressgateway.meshExpansionPorts</code> in favor of <code>components.ingressGateways[name=istio-ingressgateway].k8s.service.ports</code>. (<a href="https://github.com/istio/istio/issues/25933">Issue #25933</a>)</p></li> <li><p><strong>Fixed</strong> the Istio operator manager to allow configuring <code>RENEW_DEADLINE</code>. (<a href="https://github.com/istio/istio/issues/27509">Issue #27509</a>)</p></li> <li><p><strong>Fixed</strong> an issue preventing <code>NodePort</code> services from being used as the <code>registryServiceName</code> in <code>meshNetworks</code>.</p></li> <li><p><strong>Removed</strong> support for installing third-party telemetry applications with <code>istioctl</code>. These applications (Prometheus, Grafana, Zipkin, Jaeger, and Kiali), often referred to as the Istio addons, must now be installed separately. This does not impact Istio&rsquo;s ability to produce the telemetry used by those addons. See <a href="/v1.9/blog/2020/addon-rework/">Reworking our Addon Integrations</a> for more info.
(<a href="https://github.com/istio/istio/issues/23868">Issue #23868</a>),(<a href="https://github.com/istio/istio/issues/23583">Issue #23583</a>)</p></li> <li><p><strong>Removed</strong> <code>istio-telemetry</code> and <code>istio-policy</code> services and deployments from installation by <code>istioctl</code>. (<a href="https://github.com/istio/istio/issues/23868">Issue #23868</a>),(<a href="https://github.com/istio/istio/issues/23583">Issue #23583</a>)</p></li> <li><p><strong>Fixed</strong> Istio Grafana dashboard queries that use the <code>reporter</code> field. (<a href="https://github.com/istio/istio/issues/27595">Issue #27595</a>)</p></li> </ul> <h2 id="istioctl">istioctl</h2> <ul> <li><p><strong>Improved</strong> <code>istioctl analyze</code> to find the exact line number of configuration errors when analyzing YAML files. Before, it would return the first line of the resource with the error. (<a href="https://github.com/istio/istio/issues/22872">Issue #22872</a>)</p></li> <li><p><strong>Updated</strong> <code>istioctl experimental version</code> and <code>proxy-status</code> to use token security. A new option, <code>--plaintext</code>, has been created for testing without tokens. (<a href="https://github.com/istio/istio/issues/24905">Issue #24905</a>)</p></li> <li><p><strong>Added</strong> the ability for istioctl commands to refer to pods indirectly, for example <code>istioctl dashboard envoy deployment/httpbin</code>. (<a href="https://github.com/istio/istio/issues/26080">Issue #26080</a>)</p></li> <li><p><strong>Added</strong> <code>io</code> as a short name for Istio Operator resources in addition to <code>iop</code>.
(<a href="https://github.com/istio/istio/issues/27159">Issue #27159</a>)</p></li> <li><p><strong>Added</strong> <code>--type</code> for <code>istioctl experimental create-remote-secret</code> to allow the user to specify the type of the created secret.</p></li> <li><p><strong>Added</strong> an experimental OpenShift Kubernetes platform profile to <code>istioctl</code>. To install with the OpenShift profile, use <code>istioctl install --set profile=openshift</code>. (<a href="/v1.9/docs/setup/platform-setup/openshift/">OpenShift Platform Setup</a>)(<a href="/v1.9/docs/setup/install/istioctl/#install-a-different-profile">Install OpenShift using <code>istioctl</code></a>)</p></li> <li><p><strong>Added</strong> the <code>istioctl bug-report</code> command to generate an archive of Istio and cluster information to assist with debugging. (<a href="https://github.com/istio/istio/issues/26045">Issue #26045</a>)</p></li> <li><p><strong>Added</strong> a new command, <code>istioctl experimental istiod log</code>, to enable managing logging levels of <code>istiod</code> components. (<a href="https://github.com/istio/istio/issues/25276">Issue #25276</a>),(<a href="https://github.com/istio/istio/issues/27797">Issue #27797</a>)</p></li> <li><p><strong>Deprecated</strong> the <code>centralIstiod</code> flag in favor of <code>externalIstiod</code> to better support the external control plane model. (<a href="https://github.com/istio/istio/issues/24471">Issue #24471</a>)</p></li> <li><p><strong>Fixed</strong> an issue that allowed an empty revision flag on install. (<a href="https://github.com/istio/istio/issues/26940">Issue #26940</a>)</p></li> </ul> <h2 id="documentation">Documentation</h2> <ul> <li><strong>Improved</strong> the multicluster install docs to include current best practices, incorporating recent updates to onboarding tooling.
In particular, the multi-primary configuration (formerly known as &ldquo;replicated control planes&rdquo;) no longer relies on manually configuring the <code>.global</code> stub domain, preferring instead to use <code>*.svc.cluster.local</code> for accessing services throughout the mesh.</li> </ul>Mon, 01 Jan 0001 00:00:00 +0000/v1.9/news/releases/1.8.x/announcing-1.8/change-notes//v1.9/news/releases/1.8.x/announcing-1.8/change-notes/Change Notes <h2 id="traffic-management">Traffic Management</h2> <ul> <li><strong>Added</strong> the config option <code>values.global.proxy.holdApplicationUntilProxyStarts</code>, which causes the sidecar injector to inject the sidecar at the start of the pod&rsquo;s container list and configures it to block the start of all other containers until the proxy is ready. This option is disabled by default. (<a href="https://github.com/istio/istio/issues/11130">Issue #11130</a>)</li> <li><strong>Added</strong> SDS support for the client certificate and CA certificate used for <a href="/v1.9/docs/tasks/traffic-management/egress/egress-gateway-tls-origination-sds/">TLS/mTLS Origination from Egress Gateway</a> using <code>DestinationRule</code>. (<a href="https://github.com/istio/istio/issues/14039">Issue #14039</a>)</li> </ul> <h2 id="security">Security</h2> <ul> <li><strong>Improved</strong> Trust Domain Validation to validate TCP traffic as well; previously, only HTTP traffic was validated. (<a href="https://github.com/istio/istio/issues/26224">Issue #26224</a>)</li> <li><strong>Improved</strong> Istio Gateways to allow the use of source-principal-based authorization when the Server&rsquo;s TLS mode is <code>ISTIO_MUTUAL</code>. (<a href="https://github.com/istio/istio/issues/25818">Issue #25818</a>)</li> <li><strong>Improved</strong> VM security. VM identity is now bootstrapped from a short-lived Kubernetes service account token, and the VM&rsquo;s workload certificate is automatically rotated.
(<a href="https://github.com/istio/istio/issues/24554">Issue #24554</a>)</li> </ul> <h2 id="telemetry">Telemetry</h2> <ul> <li><strong>Added</strong> Prometheus metrics to istio-agent. (<a href="https://github.com/istio/istio/issues/22825">Issue #22825</a>)</li> <li><strong>Added</strong> metric customization with <code>istioctl</code>. (<a href="https://github.com/istio/istio/issues/25963">Issue #25963</a>)</li> <li><strong>Added</strong> TCP metrics and access logs to Stackdriver. (<a href="https://github.com/istio/istio/issues/23134">Issue #23134</a>)</li> <li><strong>Deprecated</strong> installation of telemetry addons by <code>istioctl</code>. These will be disabled by default, and in a future release removed entirely. More information on installing these addons can be found on the <a href="/v1.9/docs/ops/integrations/">Integrations</a> page. (<a href="https://github.com/istio/istio/issues/22762">Issue #22762</a>)</li> <li><strong>Enabled</strong> Prometheus <a href="/v1.9/docs/ops/integrations/prometheus/#option-1-metrics-merging">metrics merging</a> by default. (<a href="https://github.com/istio/istio/issues/21366">Issue #21366</a>)</li> <li><strong>Fixed</strong> Prometheus <a href="/v1.9/docs/ops/integrations/prometheus/#option-1-metrics-merging">metrics merging</a> to not drop Envoy metrics during application failures. (<a href="https://github.com/istio/istio/issues/22825">Issue #22825</a>)</li> <li><strong>Fixed</strong> unexplained telemetry that affects the Kiali graph. This fix increases the default outbound protocol sniffing timeout to <code>5s</code>, which has an impact on server-first protocols like <code>mysql</code>. (<a href="https://github.com/istio/istio/issues/24379">Issue #24379</a>)</li> <li><strong>Removed</strong> the <code>pilot_xds_eds_instances</code> and <code>pilot_xds_eds_all_locality_endpoints</code> Istiod metrics, which were not accurate.
(<a href="https://github.com/istio/istio/issues/25154">Issue #25154</a>)</li> </ul> <h2 id="installation">Installation</h2> <ul> <li><strong>Added</strong> RPM packages for running the Istio sidecar on a VM to the release. (<a href="https://github.com/istio/istio/issues/9117">Issue #9117</a>)</li> <li><strong>Added</strong> experimental <a href="/v1.9/blog/2020/new-deployment-model/">external Istiod</a> support.</li> <li><strong>Fixed</strong> an issue preventing <code>NodePort</code> services from being used as the <code>registryServiceName</code> in <code>meshNetworks</code>.</li> <li><strong>Improved</strong> gateway deployments to run as non-root by default. (<a href="https://github.com/istio/istio/issues/23379">Issue #23379</a>)</li> <li><strong>Improved</strong> the operator to run as non-root by default. (<a href="https://github.com/istio/istio/issues/24960">Issue #24960</a>)</li> <li><strong>Improved</strong> the operator by specifying a rigorous security context. (<a href="https://github.com/istio/istio/issues/24963">Issue #24963</a>)</li> <li><strong>Improved</strong> Istiod to run as non-root by default. (<a href="https://github.com/istio/istio/issues/24961">Issue #24961</a>)</li> <li><strong>Improved</strong> IstioOperator user file overlays to use Kubernetes strategic merge, which improves how list items are handled. (<a href="https://github.com/istio/istio/issues/24432">Issue #24432</a>)</li> <li><strong>Upgraded</strong> the CRD and Webhook versions to <code>v1</code>.
(<a href="https://github.com/istio/istio/issues/18771">Issue #18771</a>),(<a href="https://github.com/istio/istio/issues/18838">Issue #18838</a>)</li> </ul> <h2 id="istioctl">istioctl</h2> <ul> <li><strong>Added</strong> support for the <a href="/v1.9/docs/reference/commands/istioctl/#istioctl-proxy-status"><code>proxy-status &lt;pod&gt;</code> command</a> for non-Kubernetes workloads, with proxy config passed in from the <code>--file</code> parameter.</li> <li><strong>Added</strong> a configuration file to hold istioctl default flags. Its default location (<code>$HOME/.istioctl/config.yaml</code>) can be changed using the environment variable <code>ISTIOCONFIG</code>. The new command <code>istioctl experimental config list</code> shows the default flags. (<a href="https://github.com/istio/istio/issues/23868">Issue #23868</a>)</li> <li><strong>Added</strong> the <code>--revision</code> flag to the <code>istioctl operator init</code> and <code>istioctl operator remove</code> commands to support multiple control plane upgrades. (<a href="https://github.com/istio/istio/issues/23479">Issue #23479</a>)</li> <li><strong>Added</strong> the <code>istioctl x uninstall</code> command to uninstall the Istio control plane. (<a href="https://github.com/istio/istio/issues/24360">Issue #24360</a>)</li> <li><strong>Improved</strong> <code>istioctl analyze</code> to warn if deprecated Mixer resources are present. (<a href="https://github.com/istio/istio/issues/24471">Issue #24471</a>)</li> <li><strong>Improved</strong> <code>istioctl analyze</code> to warn if <code>DestinationRule</code> is not using <code>CaCertificates</code> to validate server identity.</li> <li><strong>Improved</strong> <code>istioctl validate</code> to check for unknown fields in resources. (<a href="https://github.com/istio/istio/issues/24861">Issue #24861</a>)</li> <li><strong>Improved</strong> <code>istioctl install</code> to emit a warning when attempting to install Istio in an old, unsupported Kubernetes version.
(<a href="https://github.com/istio/istio/issues/26141">Issue #26141</a>)</li> <li><strong>Removed</strong> <code>istioctl manifest apply</code>. The simpler <code>install</code> command replaces <code>manifest apply</code>. (<a href="https://github.com/istio/istio/issues/25737">Issue #25737</a>)</li> </ul> <h2 id="documentation-changes">Documentation changes</h2> <ul> <li><strong>Added</strong> a visual indication of whether an istio.io page has been tested by istio.io&rsquo;s automated tests. (<a href="https://github.com/istio/istio.io/issues/7672">Issue #7672</a>)</li> </ul>Mon, 01 Jan 0001 00:00:00 +0000/v1.9/news/releases/1.7.x/announcing-1.7/change-notes//v1.9/news/releases/1.7.x/announcing-1.7/change-notes/Change Notes <h2 id="traffic-management">Traffic Management</h2> <ul> <li><strong><em>Added</em></strong> <a href="https://github.com/istio/istio/pull/22118"><code>VirtualService</code> delegation</a>. This allows mesh routing configurations to be specified in multiple composable <code>VirtualServices</code>.</li> <li><strong><em>Added</em></strong> the new <a href="/v1.9/docs/reference/config/networking/workload-entry/">Workload Entry</a> resource. This allows easier configuration for non-Kubernetes workloads to join the mesh.</li> <li><strong><em>Added</em></strong> configuration for gateway topology. This provides correct <a href="https://github.com/istio/istio/issues/7679">X-Forwarded-For</a> and X-Forwarded-Client-Cert headers based on the gateway deployment topology.</li> <li><strong><em>Added</em></strong> experimental support for the <a href="https://github.com/kubernetes-sigs/service-apis/">Kubernetes Service APIs</a>.</li> <li><strong><em>Added</em></strong> support for using <code>appProtocol</code> to select the <a href="/v1.9/docs/ops/configuration/traffic-management/protocol-selection/">protocol for a port</a>, introduced in Kubernetes 1.18.</li> <li><strong><em>Changed</em></strong> Gateway SDS to be enabled by default.
File-mounted gateways continue to be available to help users transition to secure gateway SDS.</li> <li><strong><em>Added</em></strong> support for reading certificates from Secrets, <code>pathType</code>, and <code>IngressClass</code>, which provides better support for <a href="/v1.9/docs/tasks/traffic-management/ingress/kubernetes-ingress/">Kubernetes ingress</a>.</li> <li><strong><em>Added</em></strong> a new <code>proxy.istio.io/config</code> annotation to override proxy configuration per pod.</li> <li><strong><em>Removed</em></strong> most configuration flags and environment variables for the proxy. These now read directly from the mesh configuration.</li> <li><strong><em>Changed</em></strong> the proxy readiness probe to port 15021.</li> <li><strong><em>Fixed</em></strong> a <a href="https://github.com/istio/istio/issues/16458">bug</a> that blocked external HTTPS/TCP traffic in some cases.</li> </ul> <h2 id="security">Security</h2> <ul> <li><strong><em>Added</em></strong> <a href="https://github.com/istio/istio/pull/22789">JSON Web Token (JWT) caching</a> to the Istio agent, which provides better Istio Agent SDS performance.</li> <li><strong><em>Fixed</em></strong> the Istio Agent certificate provisioning <a href="https://github.com/istio/istio/pull/22617">grace period calculation</a>.</li> <li><strong><em>Removed</em></strong> the Security alpha API. The Security beta API, which was introduced in Istio 1.5, is the only supported security API in Istio 1.6.</li> </ul> <h2 id="telemetry">Telemetry</h2> <ul> <li><strong><em>Added</em></strong> experimental support for <a href="/v1.9/docs/tasks/observability/metrics/classify-metrics/">request classification</a> filters. This enables operators to configure new attributes for use in telemetry, based on request information.
A primary use case for this feature is labeling of traffic by API method.</li> <li><strong><em>Added</em></strong> an experimental <a href="/v1.9/docs/tasks/observability/distributed-tracing/configurability/">mesh-wide tracing configuration API</a>. This API provides control of trace sampling rates, the <a href="https://github.com/istio/istio/issues/14563">maximum tag lengths</a> for URL tags, and <a href="https://github.com/istio/istio/issues/13018">custom tags extraction</a> for all traces within the mesh.</li> <li><strong><em>Added</em></strong> standard Prometheus scrape annotations to proxies and the control plane workloads, which improves the Prometheus integration experience. This removes the need for specialized configuration to discover and consume Istio metrics. More details are available in the <a href="/v1.9/docs/ops/integrations/prometheus#option-2-metrics-merging/">operations guide for Prometheus</a>.</li> <li><strong><em>Added</em></strong> the ability for mesh operators to add and remove labels used in Istio metrics, based on expressions over the set of available request and response attributes. This improves Istio&rsquo;s support for <a href="/v1.9/docs/tasks/observability/metrics/customize-metrics/">customizing v2 metrics generation</a>.</li> <li><strong><em>Updated</em></strong> the default telemetry v2 configuration to avoid using the host header to extract the destination service name at the gateway. This prevents unbounded cardinality due to an untrusted host header, and means that destination service labels will be omitted for requests that hit <code>blackhole</code> and <code>passthrough</code> at the gateway.</li> <li><strong><em>Added</em></strong> automated publishing of Grafana dashboards to <code>grafana.com</code> as part of the Istio release process.
Please see the <a href="https://grafana.com/orgs/istio">Istio org page</a> for more information.</li> <li><strong><em>Updated</em></strong> Grafana dashboards to adapt to the new Istiod deployment model.</li> </ul> <h2 id="installation">Installation</h2> <ul> <li><strong><em>Added</em></strong> support for Istio canary upgrades. See the <a href="/v1.9/docs/setup/upgrade/">Upgrade guide</a> for more information.</li> <li><strong><em>Removed</em></strong> the legacy Helm charts. For migration from them, please see the <a href="/v1.9/docs/setup/upgrade/">Upgrade guide</a>.</li> <li><strong><em>Added</em></strong> the ability for users to add a custom hostname for istiod.</li> <li><strong><em>Changed</em></strong> the gateway readiness port from 15020 to 15021. If you check health on your Istio <code>ingressgateway</code> from your Kubernetes network load balancer, you will need to update the port.</li> <li><strong><em>Added</em></strong> functionality to save installation state in a <code>CustomResource</code> in the cluster.</li> <li><strong><em>Changed</em></strong> the Istio installation to no longer manage the installation namespace, allowing more flexibility.</li> <li><strong><em>Removed</em></strong> the separate Citadel, Sidecar Injector, and Galley deployments. These were disabled by default in 1.5, and all functionality has moved into Istiod.</li> <li><strong><em>Removed</em></strong> the legacy <code>istio-pilot</code> configurations, such as the Service.</li> <li><strong><em>Removed</em></strong> ports 15029-15032 from the default <code>ingressgateway</code>.
It is recommended to expose telemetry addons by <a href="/v1.9/docs/tasks/observability/gateways/">host routing</a> instead.</li> <li><strong><em>Removed</em></strong> built-in Istio configurations from the installation, including the Gateway, <code>VirtualServices</code>, and mTLS settings.</li> <li><strong><em>Added</em></strong> a new profile, called <code>preview</code>, allowing users to try out new experimental features that include WASM-enabled telemetry v2.</li> <li><strong><em>Added</em></strong> the <code>istioctl install</code> command as a replacement for <code>istioctl manifest apply</code>.</li> <li><strong><em>Added</em></strong> the istiod-remote chart to allow users to <a href="https://github.com/istio/istio/wiki/Central-Istiod-manages-remote-data-plane">experiment with a central Istiod managing a remote data plane</a>.</li> </ul> <h2 id="istioctl">istioctl</h2> <ul> <li><strong><em>Added</em></strong> better display characteristics for the istioctl command.</li> <li><strong><em>Added</em></strong> support for key:value list selection when using <code>--set</code> flag paths.</li> <li><strong><em>Added</em></strong> support for deletes and setting non-scalar values when using the Kubernetes overlays patching mechanism.</li> </ul> <h2 id="documentation-changes">Documentation changes</h2> <ul> <li><strong><em>Added</em></strong> new and improved Istio documentation.
For more information, see <a href="/v1.9/about/log/">Website content changes</a>.</li> </ul>Mon, 01 Jan 0001 00:00:00 +0000/v1.9/news/releases/1.6.x/announcing-1.6/change-notes//v1.9/news/releases/1.6.x/announcing-1.6/change-notes/Change Notes <h2 id="traffic-management">Traffic management</h2> <ul> <li><strong>Improved</strong> performance of the <code>ServiceEntry</code> resource by avoiding unnecessary full pushes <a href="https://github.com/istio/istio/pull/19305">#19305</a>.</li> <li><strong>Improved</strong> the Envoy sidecar readiness probe to more accurately determine readiness <a href="https://github.com/istio/istio/pull/18164">#18164</a>.</li> <li><strong>Improved</strong> performance of Envoy proxy configuration updates via xDS by sending partial updates where possible <a href="https://github.com/istio/istio/pull/18354">#18354</a>.</li> <li><strong>Added</strong> an option to configure locality load balancing settings for each targeted service via destination rule <a href="https://github.com/istio/istio/pull/18406">#18406</a>.</li> <li><strong>Fixed</strong> an issue where pods crashing would trigger excessive Envoy proxy configuration pushes <a href="https://github.com/istio/istio/pull/18574">#18574</a>.</li> <li><strong>Fixed</strong> issues that prevented applications such as headless services from calling themselves directly without going through the Envoy proxy <a href="https://github.com/istio/istio/pull/19308">#19308</a>.</li> <li><strong>Added</strong> detection of <code>iptables</code> failure when using Istio CNI <a href="https://github.com/istio/istio/pull/19534">#19534</a>.</li> <li><strong>Added</strong> <code>consecutiveGatewayErrors</code> and <code>consecutive5xxErrors</code> as outlier detection options within destination rule <a href="https://github.com/istio/istio/pull/19771">#19771</a>.</li> <li><strong>Improved</strong> <code>EnvoyFilter</code> matching performance <a href="https://github.com/istio/istio/pull/19786">#19786</a>.</li> <li><strong>Added</strong>
support for the <code>HTTP_PROXY</code> protocol <a href="https://github.com/istio/istio/pull/19919">#19919</a>.</li> <li><strong>Improved</strong> <code>iptables</code> setup to use <code>iptables-restore</code> by default <a href="https://github.com/istio/istio/pull/18847">#18847</a>.</li> <li><strong>Improved</strong> Gateway performance by filtering unused clusters. This setting is disabled by default <a href="https://github.com/istio/istio/pull/20124">#20124</a>.</li> </ul> <h2 id="security">Security</h2> <ul> <li><strong>Graduated</strong> SDS to stable; it is enabled by default and provides identity provisioning for Istio Envoy proxies.</li> <li><strong>Added</strong> the beta authentication API. The new API separates peer (i.e., mutual TLS) and origin (JWT) authentication into <a href="https://github.com/istio/api/blob/master/security/v1beta1/peer_authentication.proto"><code>PeerAuthentication</code></a> and <a href="https://github.com/istio/api/blob/master/security/v1beta1/request_authentication.proto"><code>RequestAuthentication</code></a>, respectively. Both new APIs are workload-oriented, as opposed to the service-oriented alpha <code>AuthenticationPolicy</code>.</li> <li><strong>Added</strong> <a href="/v1.9/docs/tasks/security/authorization/authz-deny">deny semantics</a> and <a href="/v1.9/docs/concepts/security/#exclusion-matching">exclusion matching</a> to Authorization Policy.</li> <li><strong>Graduated</strong> <a href="/v1.9/docs/tasks/security/authentication/authn-policy/#auto-mutual-tls">auto mutual TLS</a> from alpha to beta.
This feature is now enabled by default.</li> <li><strong>Improved</strong> <a href="https://www.envoyproxy.io/docs/envoy/latest/configuration/security/secret">SDS security</a> by merging the Node Agent with the Pilot Agent into the Istio Agent and removing cross-pod UDS, so users no longer need to deploy Kubernetes pod security policies for UDS connections.</li> <li><strong>Improved</strong> Istio by including certificate provisioning functionality within Istiod.</li> <li><strong>Added</strong> support for Kubernetes <a href="https://kubernetes.io/docs/reference/access-authn-authz/authentication/#service-account-tokens"><code>first-party-jwt</code></a> as a fallback token for CSR authentication in clusters where <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#service-account-token-volume-projection"><code>third-party-jwt</code></a> is not supported.</li> <li><strong>Added</strong> support for the Istio CA and Kubernetes CA to provision certificates for the control plane, configurable via <code>values.global.pilotCertProvider</code>.</li> <li><strong>Added</strong> support for the Istio Agent to provision a key and certificates for Prometheus.</li> </ul> <h2 id="telemetry">Telemetry</h2> <ul> <li><strong>Added</strong> TCP protocol support for v2 telemetry.</li> <li><strong>Added</strong> gRPC response status code support in metrics/logs.</li> <li><strong>Added</strong> support for Istio Canonical Service.</li> <li><strong>Improved</strong> the stability of the v2 telemetry pipeline.</li> <li><strong>Added</strong> alpha-level support for configurability in v2 telemetry.</li> <li><strong>Added</strong> support for populating AWS platform metadata in Envoy node metadata.</li> <li><strong>Improved</strong> the Stackdriver adapter for Mixer to support configurable flush intervals for tracing data.</li> <li><strong>Added</strong> support for a headless collector service to the Jaeger addon.</li> <li><strong>Fixed</strong> the <code>kubernetesenv</code> adapter to provide proper
support for pods that contain a dot in their name.</li> <li><strong>Improved</strong> the Fluentd adapter for Mixer to provide millisecond resolution in exported timestamps.</li> </ul> <h2 id="configuration-management">Configuration management</h2> <h2 id="operator">Operator</h2> <ul> <li><strong>Replaced</strong> the alpha <code>IstioControlPlane</code> API with the new <a href="/v1.9/docs/reference/config/istio.operator.v1alpha1/"><code>IstioOperator</code></a> API to align with the existing <code>MeshConfig</code> API.</li> <li><strong>Added</strong> <code>istioctl operator init</code> and <code>istioctl operator remove</code> commands.</li> <li><strong>Improved</strong> reconciliation speed with caching <a href="https://github.com/istio/operator/pull/667"><code>operator#667</code></a>.</li> </ul> <h2 id="istioctl"><code>istioctl</code></h2> <ul> <li><strong>Graduated</strong> <a href="/v1.9/docs/ops/diagnostic-tools/istioctl-analyze/"><code>istioctl analyze</code></a> out of experimental.</li> <li><strong>Added</strong> various analyzers: mutual TLS, JWT, <code>ServiceAssociation</code>, Secret, sidecar image, port name, and deprecated policy analyzers.</li> <li><strong>Updated</strong> more validation rules for <code>RequestAuthentication</code>.</li> <li><strong>Added</strong> a new flag <code>-A|--all-namespaces</code> to <a href="/v1.9/docs/ops/diagnostic-tools/istioctl-analyze/"><code>istioctl analyze</code></a> to analyze the entire cluster.</li> <li><strong>Added</strong> support for analyzing content passed via <code>stdin</code> to <a href="/v1.9/docs/ops/diagnostic-tools/istioctl-analyze/"><code>istioctl analyze</code></a>.</li> <li><strong>Added</strong> <a href="/v1.9/docs/ops/diagnostic-tools/istioctl-analyze/"><code>istioctl analyze -L</code></a> to show a list of all analyzers available.</li> <li><strong>Added</strong> the ability to suppress messages from <a href="/v1.9/docs/ops/diagnostic-tools/istioctl-analyze/"><code>istioctl
analyze</code></a>.</li> <li><strong>Added</strong> structured format options to <a href="/v1.9/docs/ops/diagnostic-tools/istioctl-analyze/"><code>istioctl analyze</code></a>.</li> <li><strong>Added</strong> links to relevant documentation to <a href="/v1.9/docs/ops/diagnostic-tools/istioctl-analyze/"><code>istioctl analyze</code></a> output.</li> <li><strong>Updated</strong> annotation methods provided by the Istio API in <a href="/v1.9/docs/ops/diagnostic-tools/istioctl-analyze/"><code>istioctl analyze</code></a>.</li> <li><strong>Updated</strong> <a href="/v1.9/docs/ops/diagnostic-tools/istioctl-analyze/"><code>istioctl analyze</code></a> to load files from a directory.</li> <li><strong>Updated</strong> <a href="/v1.9/docs/ops/diagnostic-tools/istioctl-analyze/"><code>istioctl analyze</code></a> to try to associate messages with their source filename.</li> <li><strong>Updated</strong> <a href="/v1.9/docs/ops/diagnostic-tools/istioctl-analyze/"><code>istioctl analyze</code></a> to print the namespace that is being analyzed.</li> <li><strong>Updated</strong> <a href="/v1.9/docs/ops/diagnostic-tools/istioctl-analyze/"><code>istioctl analyze</code></a> to analyze in-cluster resources by default.</li> <li><strong>Fixed</strong> a bug where <a href="/v1.9/docs/ops/diagnostic-tools/istioctl-analyze/"><code>istioctl analyze</code></a> suppressed cluster-level resource messages.</li> <li><strong>Added</strong> support for multiple input files to <code>istioctl manifest</code>.</li> <li><strong>Replaced</strong> the <code>IstioControlPlane</code> API with the <code>IstioOperator</code> API.</li> <li><strong>Added</strong> a selector for <a href="/v1.9/docs/reference/commands/istioctl/#istioctl-dashboard"><code>istioctl dashboard</code></a>.</li> <li><strong>Added</strong> support for slices and lists in the <a href="/v1.9/docs/reference/commands/istioctl/#istioctl-manifest"><code>istioctl manifest --set</code></a> flag.</li> <li><strong>Added</strong> support for <a
href="/v1.9/docs/reference/commands/istioctl/#istioctl-manifest"><code>istioctl manifest</code></a> to read profiles from <code>stdin</code>.</li> <li><strong>Added</strong> a <code>docker/istioctl</code> image #19079.</li> </ul>Mon, 01 Jan 0001 00:00:00 +0000/v1.9/news/releases/1.5.x/announcing-1.5/change-notes//v1.9/news/releases/1.5.x/announcing-1.5/change-notes/Change Notes <h2 id="traffic-management">Traffic management</h2> <ul> <li><strong>Added</strong> support for <a href="/v1.9/docs/tasks/traffic-management/mirroring/">mirroring</a> a percentage of traffic.</li> <li><strong>Improved</strong> the Envoy sidecar. The Envoy sidecar now exits when it crashes. This change makes it easier to see whether or not the Envoy sidecar is healthy.</li> <li><strong>Improved</strong> Pilot to skip sending redundant configuration to Envoy when no changes are required.</li> <li><strong>Improved</strong> headless services to avoid conflicts with different services on the same port.</li> <li><strong>Disabled</strong> default <a href="/v1.9/docs/tasks/traffic-management/circuit-breaking/">circuit breakers</a>.</li> <li><strong>Updated</strong> the default regex engine to <code>re2</code>. Please see the <a href="/v1.9/news/releases/1.4.x/announcing-1.4/upgrade-notes">Upgrade Notes</a> for details.</li> </ul> <h2 id="security">Security</h2> <ul> <li><strong>Added</strong> the <a href="/v1.9/blog/2019/v1beta1-authorization-policy/"><code>v1beta1</code> authorization policy model</a> for enforcing access control. 
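A minimal sketch of a policy under the new model (the workload labels, namespace, and service account principal below are purely illustrative):

```yaml
# Illustrative v1beta1 AuthorizationPolicy: allow only GET requests
# from the "sleep" service account to workloads labeled app=httpbin.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: httpbin-viewer
  namespace: demo
spec:
  selector:
    matchLabels:
      app: httpbin
  action: ALLOW
  rules:
  - from:
    - source:
        principals: ["cluster.local/ns/demo/sa/sleep"]
    to:
    - operation:
        methods: ["GET"]
```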
This will eventually replace the <a href="https://archive.istio.io/1.4/docs/reference/config/security/istio.rbac.v1alpha1/"><code>v1alpha1</code> RBAC policy</a>.</li> <li><strong>Added</strong> experimental support for automatic mutual TLS to enable mutual TLS without destination rule configuration.</li> <li><strong>Added</strong> experimental support for <a href="/v1.9/docs/tasks/security/authorization/authz-td-migration/">authorization policy trust domain migration</a>.</li> <li><strong>Added</strong> experimental <a href="/v1.9/blog/2019/dns-cert/">DNS certificate management</a> to securely provision and manage DNS certificates signed by the Kubernetes CA.</li> <li><strong>Improved</strong> Citadel to periodically check and rotate the expired root certificate when running in self-sign CA mode.</li> <li><strong>Updated</strong> JWT authentication to treat <a href="https://github.com/istio/istio/issues/13565">space-delimited claim</a> as a list of claims.</li> </ul> <h2 id="telemetry">Telemetry</h2> <ul> <li><strong>Added</strong> experimental in-proxy telemetry reporting to <a href="https://github.com/istio/proxy/blob/release-1.9/extensions/stackdriver/README.md">Stackdriver</a>.</li> <li><strong>Improved</strong> support for in-proxy Prometheus generation of HTTP service metrics (from experimental to alpha).</li> <li><strong>Improved</strong> telemetry collection for <a href="/v1.9/blog/2019/monitoring-external-service-traffic/">blocked and passthrough external service traffic</a>.</li> <li><strong>Added</strong> the option to configure <a href="/v1.9/docs/reference/config/istio.mesh.v1alpha1/#MeshConfig">stat patterns</a> for Envoy stats.</li> <li><strong>Added</strong> the <code>inbound</code> and <code>outbound</code> prefixes to the Envoy HTTP stats to specify traffic direction.</li> <li><strong>Improved</strong> reporting of telemetry for traffic that goes through an egress gateway.</li> </ul> <h2 id="configuration-management">Configuration management</h2> 
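<p>As a quick illustration of the analyzer workflow referenced below, <code>istioctl analyze</code> can be pointed at a live cluster or at local manifests. The invocations are a sketch: they assume <code>istioctl</code> is installed with a configured kubeconfig, and the file path shown is illustrative:</p> <pre><code class='language-bash'>$ istioctl analyze --all-namespaces
$ istioctl analyze my-manifests/service.yaml</code></pre>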
<ul> <li><strong>Added</strong> multiple validation checks to the <a href="/v1.9/docs/ops/diagnostic-tools/istioctl-analyze/"><code>istioctl analyze</code></a> sub-command.</li> <li><strong>Added</strong> the experimental option to enable validation messages for Istio <a href="/v1.9/docs/ops/diagnostic-tools/istioctl-analyze/#enabling-validation-messages-for-resource-status">resource statuses</a>.</li> <li><strong>Added</strong> OpenAPI v3 schema validation of Custom Resource Definitions (CRDs). Please see the <a href="/v1.9/news/releases/1.4.x/announcing-1.4/upgrade-notes">Upgrade Notes</a> for details.</li> <li><strong>Added</strong> <a href="https://github.com/istio/client-go">client-go</a> libraries to access Istio APIs.</li> </ul> <h2 id="installation">Installation</h2> <ul> <li><strong>Added</strong> the experimental <a href="/v1.9/docs/setup/install/operator/">operator controller</a> for dynamic updates to an Istio installation.</li> <li><strong>Removed</strong> the <code>proxy_init</code> Docker image. 
Instead, the <code>istio-init</code> container reuses the <code>proxyv2</code> image.</li> <li><strong>Updated</strong> the base image to <code>ubuntu:bionic</code>.</li> </ul> <h2 id="istioctl"><code>istioctl</code></h2> <ul> <li><strong>Added</strong> the <a href="/v1.9/docs/reference/commands/istioctl/#istioctl-proxy-config-log"><code>istioctl proxy-config logs</code></a> sub-command to retrieve and update Envoy logging levels.</li> <li><strong>Updated</strong> the <a href="https://archive.istio.io/v1.4/docs/reference/commands/istioctl/#istioctl-authn-tls-check"><code>istioctl authn tls-check</code></a> sub-command to display which policy is in use.</li> <li><strong>Added</strong> the experimental <a href="/v1.9/docs/reference/commands/istioctl/#istioctl-experimental-wait"><code>istioctl experimental wait</code></a> sub-command to have Istio wait until it has pushed a configuration to all Envoy sidecars.</li> <li><strong>Added</strong> the experimental <a href="/v1.9/docs/reference/commands/istioctl/#istioctl-experimental-multicluster"><code>istioctl experimental multicluster</code></a> sub-command to help manage Istio across multiple clusters.</li> <li><strong>Added</strong> the experimental <a href="/v1.9/docs/reference/commands/istioctl/#istioctl-experimental-post-install-webhook"><code>istioctl experimental post-install webhook</code></a> sub-command to <a href="/v1.9/blog/2019/webhook/">securely manage webhook configurations</a>.</li> <li><strong>Added</strong> the experimental <a href="https://archive.istio.io/v1.4/docs/setup/upgrade/istioctl-upgrade/"><code>istioctl experimental upgrade</code></a> sub-command to perform upgrades of Istio.</li> <li><strong>Improved</strong> the <a href="/v1.9/docs/reference/commands/istioctl/#istioctl-version"><code>istioctl version</code></a> sub-command. 
It now shows the Envoy proxy versions.</li> </ul> /v1.9/news/releases/1.4.x/announcing-1.4/change-notes/ Change Notes <h2 id="installation">Installation</h2> <ul> <li><strong>Added</strong> experimental <a href="/v1.9/docs/setup/install/istioctl/">manifest and profile commands</a> to install and manage the Istio control plane for evaluation.</li> </ul> <h2 id="traffic-management">Traffic management</h2> <ul> <li><strong>Added</strong> <a href="/v1.9/docs/ops/configuration/traffic-management/protocol-selection/">automatic protocol determination</a> of HTTP or TCP for outbound traffic when ports are not named according to Istio’s <a href="/v1.9/docs/ops/deployment/requirements/">conventions</a>.</li> <li><strong>Added</strong> a mode to the Gateway API for mutual TLS operation.</li> <li><strong>Fixed</strong> issues with server-first protocols, such as MySQL and MongoDB, where the service communicates over the network first in permissive mutual TLS mode.</li> <li><strong>Improved</strong> Envoy proxy readiness checks. They now check Envoy&rsquo;s readiness status.</li> <li><strong>Improved</strong> the pod spec requirements: container ports are no longer required. All ports are <a href="/v1.9/faq/traffic-management/#controlling-inbound-ports">captured by default</a>.</li> <li><strong>Improved</strong> the <code>EnvoyFilter</code> API. You can now add or update all configurations.</li> <li><strong>Improved</strong> the Redis load balancer to default to <a href="https://www.envoyproxy.io/docs/envoy/v1.6.0/intro/arch_overview/load_balancing#maglev"><code>MAGLEV</code></a> when using the Redis proxy.</li> <li><strong>Improved</strong> load balancing to direct traffic to the <a href="/v1.9/faq/traffic-management/#controlling-inbound-ports">same region and zone</a> by default.</li> <li><strong>Improved</strong> Pilot by reducing CPU utilization. 
The reduction approaches 90% depending on the specific deployment.</li> <li><strong>Improved</strong> the <code>ServiceEntry</code> API to allow for the same hostname in different namespaces.</li> <li><strong>Improved</strong> the <a href="/v1.9/docs/reference/config/networking/sidecar/#OutboundTrafficPolicy">Sidecar API</a> to customize the <code>OutboundTrafficPolicy</code> policy.</li> </ul> <h2 id="security">Security</h2> <ul> <li><strong>Added</strong> trust domain validation for services using mutual TLS. By default, the server only authenticates requests from the same trust domain.</li> <li><strong>Added</strong> <a href="/v1.9/docs/ops/configuration/mesh/secret-creation/">labels</a> to control service account secret generation by namespace.</li> <li><strong>Added</strong> SDS support to deliver the private key and certificates to each Istio control plane service.</li> <li><strong>Added</strong> support for <a href="/v1.9/docs/ops/diagnostic-tools/controlz/">introspection</a> to Citadel.</li> <li><strong>Added</strong> metrics to the <code>/metrics</code> endpoint of Citadel Agent on port 15014 to monitor the SDS service.</li> <li><strong>Added</strong> diagnostics to the Citadel Agent using the <code>/debug/sds/workload</code> and <code>/debug/sds/gateway</code> endpoints on port 8080.</li> <li><strong>Improved</strong> the ingress gateway to <a href="https://archive.istio.io/v1.3/docs/tasks/traffic-management/ingress/secure-ingress-sds/#configure-a-mutual-tls-ingress-gateway">load the trusted CA certificate from a separate secret</a> when using SDS.</li> <li><strong>Improved</strong> SDS security by enforcing the usage of <a href="/v1.9/blog/2019/trustworthy-jwt-sds">Kubernetes Trustworthy JWTs</a>.</li> <li><strong>Improved</strong> Citadel Agent logs by unifying the logging pattern.</li> <li><strong>Removed</strong> 
integration with Vault CA temporarily. SDS requirements caused the temporary removal but we will reintroduce Vault CA integration in a future release.</li> <li><strong>Enabled</strong> the Envoy JWT filter by default to improve security and reliability.</li> </ul> <h2 id="telemetry">Telemetry</h2> <ul> <li><strong>Added</strong> Access Log Service <a href="https://www.envoyproxy.io/docs/envoy/latest/api-v2/service/accesslog/v2/als.proto#grpc-access-log-service-als">ALS</a> support for Envoy gRPC.</li> <li><strong>Added</strong> a Grafana dashboard for Citadel monitoring.</li> <li><strong>Added</strong> <a href="https://archive.istio.io/v1.3/docs/reference/commands/sidecar-injector/#metrics">metrics</a> for monitoring the sidecar injector webhook.</li> <li><strong>Added</strong> control plane metrics to monitor Istio&rsquo;s configuration state.</li> <li><strong>Added</strong> telemetry reporting for traffic destined to the <code>Passthrough</code> and <code>BlackHole</code> clusters.</li> <li><strong>Added</strong> alpha support for in-proxy generation of service metrics using Prometheus.</li> <li><strong>Added</strong> alpha support for environmental metadata in Envoy node metadata.</li> <li><strong>Added</strong> alpha support for Proxy Metadata Exchange.</li> <li><strong>Added</strong> alpha support for the OpenCensus trace driver.</li> <li><strong>Improved</strong> reporting for external services by removing requirements to add a service entry.</li> <li><strong>Improved</strong> the mesh dashboard to provide monitoring of Istio&rsquo;s configuration state.</li> <li><strong>Improved</strong> the Pilot dashboard to expose additional key metrics to more clearly identify errors.</li> <li><strong>Removed</strong> deprecated <code>Adapter</code> and <code>Template</code> custom resource definitions (CRDs).</li> <li><strong>Deprecated</strong> the HTTP API spec used to produce API attributes. 
We will remove support for producing API attributes in Istio 1.4.</li> </ul> <h2 id="policy">Policy</h2> <ul> <li><strong>Improved</strong> rate limit enforcement to allow communication when the quota backend is unavailable.</li> </ul> <h2 id="configuration-management">Configuration management</h2> <ul> <li><strong>Fixed</strong> Galley to stop too many gRPC pings from closing connections.</li> <li><strong>Improved</strong> Galley to avoid control plane upgrade failures.</li> </ul> <h2 id="istioctl"><code>istioctl</code></h2> <ul> <li><strong>Added</strong> <a href="/v1.9/docs/reference/commands/istioctl/#istioctl-manifest"><code>istioctl experimental manifest</code></a> to manage the new experimental install manifests.</li> <li><strong>Added</strong> <a href="/v1.9/docs/reference/commands/istioctl/#istioctl-profile"><code>istioctl experimental profile</code></a> to manage the new experimental install profiles.</li> <li><strong>Added</strong> <a href="/v1.9/docs/reference/commands/istioctl/#istioctl-experimental-metrics"><code>istioctl experimental metrics</code></a></li> <li><strong>Added</strong> <a href="/v1.9/docs/reference/commands/istioctl/#istioctl-experimental-describe-pod"><code>istioctl experimental describe pod</code></a> to describe an Istio pod&rsquo;s configuration.</li> <li><strong>Added</strong> <a href="/v1.9/docs/reference/commands/istioctl/#istioctl-experimental-add-to-mesh"><code>istioctl experimental add-to-mesh</code></a> to add Kubernetes services or virtual machines to an existing Istio service mesh.</li> <li><strong>Added</strong> <a href="/v1.9/docs/reference/commands/istioctl/#istioctl-experimental-remove-from-mesh"><code>istioctl experimental remove-from-mesh</code></a> to remove Kubernetes services or virtual machines from an existing Istio service mesh.</li> <li><strong>Promoted</strong> the <a href="/v1.9/docs/reference/commands/istioctl/#istioctl-convert-ingress"><code>istioctl experimental convert-ingress</code></a> command to 
<code>istioctl convert-ingress</code>.</li> <li><strong>Promoted</strong> the <a href="/v1.9/docs/reference/commands/istioctl/#istioctl-dashboard"><code>istioctl experimental dashboard</code></a> command to <code>istioctl dashboard</code>.</li> </ul> <h2 id="miscellaneous">Miscellaneous</h2> <ul> <li><strong>Added</strong> new images based on <a href="/v1.9/docs/ops/configuration/security/harden-docker-images/">distroless</a> base images.</li> <li><strong>Improved</strong> the Istio CNI Helm chart to have consistent versions with Istio.</li> <li><strong>Improved</strong> Kubernetes Jobs behavior. Kubernetes Jobs now exit correctly when the job manually calls the <code>/quitquitquit</code> endpoint.</li> </ul> /v1.9/news/releases/1.3.x/announcing-1.3/change-notes/ Change Notes <h2 id="general">General</h2> <ul> <li><strong>Added</strong> the <code>traffic.sidecar.istio.io/includeInboundPorts</code> annotation to eliminate the need for service owners to declare <code>containerPort</code> in the deployment YAML file. This will become the default in a future release.</li> <li><strong>Added</strong> experimental IPv6 support for Kubernetes clusters.</li> </ul> <h2 id="traffic-management">Traffic management</h2> <ul> <li><strong>Improved</strong> <a href="/v1.9/docs/tasks/traffic-management/locality-load-balancing/">locality-based routing</a> in multicluster environments.</li> <li><strong>Improved</strong> outbound traffic policy in <a href="https://archive.istio.io/v1.2/docs/reference/config/installation-options/#global-options"><code>ALLOW_ANY</code> mode</a>. Traffic for unknown HTTP/HTTPS hosts on an existing port will be <a href="/v1.9/docs/tasks/traffic-management/egress/egress-control/#envoy-passthrough-to-external-services">forwarded as is</a>. 
Unknown traffic will be logged in Envoy access logs.</li> <li><strong>Added</strong> support for setting HTTP idle timeouts to upstream services.</li> <li><strong>Improved</strong> Sidecar support for <a href="/v1.9/docs/reference/config/networking/sidecar/#CaptureMode">NONE mode</a> (without iptables).</li> <li><strong>Added</strong> the ability to configure the <a href="https://archive.istio.io/v1.2/docs/reference/config/installation-options/#global-options">DNS refresh rate</a> for sidecar Envoys, to reduce the load on the DNS servers.</li> <li><strong>Graduated</strong> the <a href="/v1.9/docs/reference/config/networking/sidecar/">Sidecar API</a> from Alpha to Alpha API and Beta runtime.</li> </ul> <h2 id="security">Security</h2> <ul> <li><strong>Improved</strong> security by extending the default lifetime of self-signed Citadel root certificates to 10 years.</li> <li><strong>Added</strong> Kubernetes health check prober rewrite per deployment via the <code>sidecar.istio.io/rewriteAppHTTPProbers: &quot;true&quot;</code> <a href="/v1.9/docs/ops/configuration/mesh/app-health-check/#use-annotations-on-pod">annotation</a> in the <code>PodSpec</code>.</li> <li><strong>Added</strong> support for configuring the secret paths for Istio mutual TLS certificates. Refer to <a href="https://github.com/istio/istio/issues/11984">this issue</a> for more details.</li> <li><strong>Added</strong> support for <a href="https://en.wikipedia.org/wiki/PKCS_8">PKCS 8</a> private keys for workloads, enabled by the <code>pkcs8-keys</code> flag on Citadel.</li> <li><strong>Improved</strong> JWT public key fetching logic to be more resilient to network failures.</li> <li><strong>Fixed</strong> the <a href="https://tools.ietf.org/html/rfc5280#section-4.2.1.6">SAN</a> field in workload certificates to be marked <code>critical</code>. 
This fixes an issue where some custom certificate verifiers could not verify Istio certificates.</li> <li><strong>Fixed</strong> mutual TLS probe rewrite for HTTPS probes.</li> <li><strong>Graduated</strong> <a href="/v1.9/docs/reference/config/networking/gateway/">SNI with multiple certificates support at ingress gateway</a> from Alpha to Stable.</li> <li><strong>Graduated</strong> <a href="https://archive.istio.io/v1.2/docs/tasks/traffic-management/ingress/secure-ingress-sds/">certification management on Ingress Gateway</a> from Alpha to Beta.</li> </ul> <h2 id="telemetry">Telemetry</h2> <ul> <li><strong>Added</strong> full support for control over Envoy stats generation, based on stats prefixes, suffixes, and regular expressions through the use of annotations.</li> <li><strong>Changed</strong> metrics collection so that Prometheus-generated traffic is excluded from metrics.</li> <li><strong>Added</strong> support for sending traces to Datadog.</li> <li><strong>Graduated</strong> <a href="/v1.9/docs/tasks/observability/distributed-tracing/">distributed tracing</a> from Beta to Stable.</li> </ul> <h2 id="policy">Policy</h2> <ul> <li><strong>Fixed</strong> <a href="https://github.com/istio/istio/issues/13868">Mixer-based</a> TCP policy enforcement.</li> <li><strong>Graduated</strong> <a href="https://archive.istio.io/1.2/docs/reference/config/security/istio.rbac.v1alpha1/">Authorization (RBAC)</a> from Alpha to Alpha API and Beta runtime.</li> </ul> <h2 id="configuration-management">Configuration management</h2> <ul> <li><strong>Improved</strong> validation of Policy &amp; Telemetry CRDs.</li> <li><strong>Graduated</strong> basic configuration resource validation from Alpha to Beta.</li> </ul> <h2 id="installation-and-upgrade">Installation and upgrade</h2> <ul> <li><strong>Updated</strong> the default proxy memory limit (<code>global.proxy.resources.limits.memory</code>) from <code>128Mi</code> to <code>1024Mi</code> to ensure the proxy has sufficient memory.</li> <li><strong>Added</strong> pod <a 
href="https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity">anti-affinity</a> and <a href="https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/">toleration</a> support to all of our control plane components.</li> <li><strong>Added</strong> <code>sidecarInjectorWebhook.neverInjectSelector</code> and <code>sidecarInjectorWebhook.alwaysInjectSelector</code> to allow users to further refine whether workloads should have a sidecar automatically injected or not, based on label selectors.</li> <li><strong>Added</strong> <code>global.logging.level</code> and <code>global.proxy.logLevel</code> to allow users to easily configure logs for control plane and data plane components globally.</li> <li><strong>Added</strong> support to configure the Datadog location via <a href="https://archive.istio.io/v1.2/docs/reference/config/installation-options/#global-options"><code>global.tracer.datadog.address</code></a>.</li> <li><strong>Disabled</strong> the previously <a href="https://discuss.istio.io/t/deprecation-notice-custom-mixer-adapter-crds/2055">deprecated</a> Adapter and Template CRDs by default. 
Use the <code>mixer.templates.useTemplateCRDs=true</code> and <code>mixer.adapters.useAdapterCRDs=true</code> install options to re-enable them.</li> </ul> <p>Refer to the <a href="/v1.9/news/releases/1.2.x/announcing-1.2/helm-changes/">installation option change page</a> to view the complete list of changes.</p> <h2 id="istioctl-and-kubectl"><code>istioctl</code> and <code>kubectl</code></h2> <ul> <li><strong>Graduated</strong> <code>istioctl verify-install</code> out of experimental.</li> <li><strong>Improved</strong> <code>istioctl verify-install</code> to validate if a given Kubernetes environment meets Istio&rsquo;s prerequisites.</li> <li><strong>Added</strong> auto-completion support to <code>istioctl</code>.</li> <li><strong>Added</strong> <code>istioctl experimental dashboard</code> to allow users to easily open the web UI of any Istio addon.</li> <li><strong>Added</strong> the <code>istioctl x</code> alias to conveniently run <code>istioctl experimental</code> commands.</li> <li><strong>Improved</strong> <code>istioctl version</code> to report both Istio control plane and <code>istioctl</code> version info by default.</li> <li><strong>Improved</strong> <code>istioctl validate</code> to validate Mixer configuration and support deep validation with referential integrity.</li> </ul> <h2 id="miscellaneous">Miscellaneous</h2> <ul> <li><strong>Added</strong> <a href="/v1.9/docs/setup/additional-setup/cni/">Istio CNI support</a> to set up sidecar network redirection and remove the use of <code>istio-init</code> containers, which require the <code>NET_ADMIN</code> capability.</li> <li><strong>Added</strong> a new experimental <a href="https://github.com/istio/installer/wiki">&lsquo;a-la-carte&rsquo; Istio installer</a> to enable users to install and upgrade Istio with desired isolation and security.</li> <li><strong>Added</strong> <a href="https://docs.google.com/document/d/1M-qqBMNbhbAxl3S_8qQfaeOLAiRqSBpSgfWebFBRuu8/edit">environment variable and configuration file support</a> 
for configuring Galley, in addition to command-line flags.</li> <li><strong>Added</strong> <a href="/v1.9/docs/ops/diagnostic-tools/controlz/">ControlZ</a> support to visualize the state of the MCP Server in Galley.</li> <li><strong>Added</strong> the <a href="https://archive.istio.io/v1.2/docs/reference/commands/galley/#galley-server"><code>enableServiceDiscovery</code> command-line flag</a> to control the service discovery module in Galley.</li> <li><strong>Added</strong> <code>InitialWindowSize</code> and <code>InitialConnWindowSize</code> parameters to Galley and Pilot to allow fine-tuning of MCP (gRPC) connection settings.</li> <li><strong>Graduated</strong> configuration processing with Galley from Alpha to Beta.</li> </ul> /v1.9/news/releases/1.2.x/announcing-1.2/change-notes/ Change Notes <h2 id="incompatible-changes-from-1-0">Incompatible changes from 1.0</h2> <p>In addition to the new features and improvements listed below, Istio 1.1 has introduced a number of significant changes from 1.0 that can alter the behavior of applications. A concise list of these changes can be found in the <a href="/v1.9/news/releases/1.1.x/announcing-1.1/upgrade-notes">upgrade notice</a>.</p> <h2 id="upgrades">Upgrades</h2> <p>We recommend a manual upgrade of the control plane and data plane to 1.1. See the <a href="/v1.9/docs/setup/upgrade/">upgrade documents</a> for more information.</p> <div> <aside class="callout warning"> <div class="type"> <svg class="large-icon"><use xlink:href="/v1.9/img/icons.svg#callout-warning"/></svg> </div> <div class="content">Be sure to check out the <a href="/v1.9/news/releases/1.1.x/announcing-1.1/upgrade-notes">upgrade notice</a> for a concise list of things you should know before upgrading your deployment to Istio 1.1.</div> </aside> </div> <h2 id="installation">Installation</h2> <ul> <li><p><strong>CRD Install Separated from Istio Install</strong>. 
Placed Istio’s Custom Resource Definitions (CRDs) into the <code>istio-init</code> Helm chart. Placing the CRDs in their own Helm chart preserves the data continuity of the custom resource content during the upgrade process and further enables Istio to evolve beyond a Helm-based installation.</p></li> <li><p><strong>Installation Configuration Profiles</strong>. Added several installation configuration profiles to simplify the installation process using well-known and well-tested patterns. Learn more about the better user experience afforded by the <a href="/v1.9/docs/setup/additional-setup/config-profiles/">installation profile feature</a>.</p></li> <li><p><strong>Improved Multicluster Integration</strong>. Consolidated the 1.0 <code>istio-remote</code> chart previously used for <a href="https://archive.istio.io/v1.1/docs/setup/kubernetes/install/multicluster/vpn/">multicluster VPN</a> and <a href="https://archive.istio.io/v1.1/docs/examples/multicluster/split-horizon-eds/">multicluster split horizon</a> remote cluster installation into the Istio Helm chart simplifying the operational experience.</p></li> </ul> <h2 id="traffic-management">Traffic management</h2> <ul> <li><p><strong>New <code>Sidecar</code> Resource</strong>. The new <a href="/v1.9/docs/concepts/traffic-management/#sidecars">sidecar</a> resource enables more fine-grained control over the behavior of the sidecar proxies attached to workloads within a namespace. In particular it adds support to limit the set of services a sidecar will send traffic to. This reduces the amount of configuration computed and transmitted to the proxy, improving startup time, resource consumption and control-plane scalability. For large deployments, we recommend adding a sidecar resource per namespace. Controls are also provided for ports, protocols and traffic capture for advanced use-cases.</p></li> <li><p><strong>Restrict Visibility of Services</strong>. 
Added the new <code>exportTo</code> feature which allows service owners to control which namespaces can reference their services. This feature is added to <code>ServiceEntry</code>, <code>VirtualService</code> and is also supported on a Kubernetes Service via the <code>networking.istio.io/exportTo</code> annotation.</p></li> <li><p><strong>Namespace Scoping</strong>. When referring to a <code>VirtualService</code> in a Gateway we use DNS based name matching in our configuration model. This can be ambiguous when more than one namespace defines a virtual service for the same host name. To resolve ambiguity it is now possible to explicitly scope these references by namespace using a syntax of the form <strong><code>[{namespace-name}]/{hostname-match}</code></strong> in the <code>hosts</code> field. The equivalent capability is also available in <code>Sidecar</code> for egress.</p></li> <li><p><strong>Updates to <code>ServiceEntry</code> Resources</strong>. Added support to specify the locality of a service and the associated SAN to use with mutual TLS. Service entries with HTTPS ports no longer need an additional virtual service to enable SNI-based routing.</p></li> <li><p><strong>Locality-Aware Routing</strong>. Added full support for routing to services in the same locality before picking services in other localities. See <a href="/v1.9/docs/reference/config/networking/destination-rule#LocalityLoadBalancerSetting">Locality Load Balancer Settings</a></p></li> <li><p><strong>Refined Multicluster Routing</strong>. Simplified the multicluster setup and enabled additional deployment modes. You can now connect multiple clusters simply using their ingress gateways without needing pod-level VPNs, deploy control planes in each cluster for high-availability cases, and span a namespace across several clusters to create global namespaces. 
Locality-aware routing is enabled by default in the high-availability control plane solution.</p></li> <li><p><strong>Istio Ingress Deprecated</strong>. Removed the previously deprecated Istio ingress. Refer to the <a href="/v1.9/docs/ops/integrations/certmanager/">Securing Kubernetes Ingress with Cert-Manager</a> example for more details on how to use Kubernetes Ingress resources with <a href="/v1.9/docs/concepts/traffic-management/#gateways">gateways</a>.</p></li> <li><p><strong>Performance and Scalability Improvements</strong>. Tuned the performance and scalability of Istio and Envoy. Read more about <a href="/v1.9/docs/ops/deployment/performance-and-scalability/">Performance and Scalability</a> enhancements.</p></li> <li><p><strong>Access Logging Off by Default</strong>. Disabled the access logs for all Envoy sidecars by default to improve performance.</p></li> </ul> <h3 id="security">Security</h3> <ul> <li><p><strong>Readiness and Liveness Probes</strong>. Added support for Kubernetes&rsquo; HTTP <a href="/v1.9/faq/security/#k8s-health-checks">readiness and liveness probes</a> when mutual TLS is enabled.</p></li> <li><p><strong>Cluster RBAC Configuration</strong>. Replaced the <code>RbacConfig</code> resource with the <code>ClusterRbacConfig</code> resource to implement the correct cluster scope. See <a href="https://archive.istio.io/v1.1/docs/setup/kubernetes/upgrade/steps/#migrating-from-rbacconfig-to-clusterrbacconfig">Migrating <code>RbacConfig</code> to <code>ClusterRbacConfig</code></a> for migration instructions.</p></li> <li><p><strong>Identity Provisioning Through SDS</strong>. Added SDS support to provide stronger security with on-node key generation and dynamic certificate rotation without restarting Envoy.</p></li> <li><p><strong>Authorization for TCP Services</strong>. Added support for authorization of TCP services in addition to HTTP and gRPC services. 
See <a href="/v1.9/docs/tasks/security/authorization/authz-tcp">Authorization for TCP Services</a> for more information.</p></li> <li><p><strong>Authorization for End-User Groups</strong>. Added authorization based on <code>groups</code> claim or any list-typed claims in JWT. See <a href="/v1.9/docs/tasks/security/authorization/authz-jwt/">Authorization for JWT</a> for more information.</p></li> <li><p><strong>External Certificate Management on Ingress Gateway Controller</strong>. Added a controller to dynamically load and rotate external certificates.</p></li> <li><p><strong>Custom PKI Integration</strong>. Added Vault PKI integration with support for Vault-protected signing keys and ability to integrate with existing Vault PKIs.</p></li> <li><p><strong>Customized (non <code>cluster.local</code>) Trust Domains</strong>. Added support for organization- or cluster-specific trust domains in the identities.</p></li> </ul> <h2 id="policies-and-telemetry">Policies and telemetry</h2> <ul> <li><p><strong>Policy Checks Off By Default</strong>. Changed policy checks to be turned off by default to improve performance for most customer scenarios. <a href="https://istio.io/v1.6/docs/tasks/policy-enforcement/enabling-policy/">Enabling Policy Enforcement</a> details how to turn on Istio policy checks, if needed.</p></li> <li><p><strong>Kiali</strong>. Replaced the <a href="https://github.com/istio/istio/issues/9066">Service Graph addon</a> with <a href="https://www.kiali.io">Kiali</a> to provide a richer visualization experience. See the <a href="/v1.9/docs/tasks/observability/kiali/">Kiali task</a> for more details.</p></li> <li><p><strong>Reduced Overhead</strong>. 
Added several performance and scale improvements, including:</p> <ul> <li><p>Significant reduction in default collection of Envoy-generated statistics.</p></li> <li><p>Added load-shedding functionality to Mixer workloads.</p></li> <li><p>Improved the protocol between Envoy and Mixer.</p></li> </ul></li> <li><p><strong>Control Headers and Routing</strong>. Added the option to create adapters to influence the headers and routing of an incoming request. See the <a href="https://istio.io/v1.6/docs/tasks/policy-enforcement/control-headers">Control Headers and Routing</a> task for more information.</p></li> <li><p><strong>Out of Process Adapters</strong>. Added the out-of-process adapter functionality for production use. As a result, we deprecated the in-process adapter model in this release. All new adapter development should use the out-of-process model moving forward.</p></li> <li><p><strong>Tracing Improvements</strong>. Performed many improvements in our overall tracing story:</p> <ul> <li><p>Trace IDs are now 128 bits wide.</p></li> <li><p>Added support for sending trace data to <a href="/v1.9/docs/tasks/observability/distributed-tracing/lightstep/">Lightstep</a>.</p></li> <li><p>Added the option to disable tracing for Mixer-backed services entirely.</p></li> <li><p>Added policy decision-aware tracing.</p></li> </ul></li> <li><p><strong>Default TCP Metrics</strong>. Added default metrics for tracking TCP connections.</p></li> <li><p><strong>Reduced Load Balancer Requirements for Addons</strong>. Stopped exposing addons via separate load balancers. Instead, addons are exposed via the Istio gateway. To expose addons externally using either HTTP or HTTPS protocols, please use the <a href="/v1.9/docs/tasks/observability/gateways/">Addon Gateway documentation</a>.</p></li> <li><p><strong>Secure Addon Credentials</strong>. Changed storage of the addon credentials. 
Grafana, Kiali, and Jaeger usernames and passwords are now stored in <a href="https://kubernetes.io/docs/concepts/configuration/secret/">Kubernetes secrets</a> for improved security and compliance.</p></li> <li><p><strong>More Flexibility with <code>statsd</code> Collector</strong>. Removed the built-in <code>statsd</code> collector. Istio now supports a bring-your-own <code>statsd</code> model for improved flexibility with existing Kubernetes deployments.</p></li> </ul> <h3 id="configuration-management">Configuration management</h3> <ul> <li><p><strong>Galley</strong>. Added <a href="https://archive.istio.io/v1.1/docs/concepts/what-is-istio/#galley">Galley</a> as the primary configuration ingestion and distribution mechanism within Istio. It provides a robust model to validate, transform, and distribute configuration states to Istio components, insulating them from Kubernetes details. Galley uses the <a href="https://github.com/istio/api/tree/release-1.9/mcp">Mesh Configuration Protocol</a> to interact with components.</p></li> <li><p><strong>Monitoring Port</strong>. Changed Galley&rsquo;s default monitoring port from 9093 to 15014.</p></li> </ul> <h2 id="istioctl-and-kubectl"><code>istioctl</code> and <code>kubectl</code></h2> <ul> <li><p><strong>Validate Command</strong>. Added the <a href="/v1.9/docs/reference/commands/istioctl/#istioctl-validate"><code>istioctl validate</code></a> command for offline validation of Istio Kubernetes resources.</p></li> <li><p><strong>Verify-Install Command</strong>. Added the <a href="/v1.9/docs/reference/commands/istioctl/#istioctl-verify-install"><code>istioctl verify-install</code></a> command to verify the status of an Istio installation given a specified installation YAML file.</p></li> <li><p><strong>Deprecated Commands</strong>. Deprecated the <code>istioctl create</code>, <code>istioctl replace</code>, <code>istioctl get</code>, and <code>istioctl delete</code> commands.
Use the <a href="https://kubernetes.io/docs/tasks/tools/install-kubectl"><code>kubectl</code></a> equivalents instead. Deprecated the <code>istioctl gen-deploy</code> command too. Use <a href="https://archive.istio.io/v1.1/docs/setup/kubernetes/install/helm/#option-1-install-with-helm-via-helm-template"><code>helm template</code></a> instead. Release 1.2 will remove these commands.</p></li> <li><p><strong>Short Commands</strong>. Included short commands in <code>kubectl</code> for gateways, virtual services, destination rules, and service entries.</p></li> </ul>Mon, 01 Jan 0001 00:00:00 +0000/v1.9/news/releases/1.1.x/announcing-1.1/change-notes//v1.9/news/releases/1.1.x/announcing-1.1/change-notes/Updates to how Istio security releases are handled: Patch Tuesday, embargoes, and 0-days <p>While most of the work in the Istio Product Security Working Group is done behind the scenes, we are listening to the community in setting expectations for security releases. We understand that it is difficult for mesh administrators, operators, and vendors to be aware of security bulletins and security releases.</p> <p>We currently disclose vulnerabilities and security releases via numerous channels:</p> <ul> <li><a href="https://istio.io">istio.io</a> via our <a href="/v1.9/news/releases/">Release Announcements</a> and <a href="/v1.9/news/security/">Security Bulletins</a></li> <li><a href="https://discuss.istio.io/c/announcements/5">Discuss</a></li> <li>the announcements channel on <a href="https://istio.slack.com">Slack</a></li> <li><a href="https://twitter.com/IstioMesh">Twitter</a></li> <li><a href="/v1.9/news/feed.xml">RSS</a></li> </ul> <p>When operating any software, it is preferable to plan for possible downtime when upgrading. Given the work that the Istio community is doing around Day 2 operations in 2021, the Environments working group has done a good job of streamlining many of the upgrade issues users have seen.
The Product Security Working Group intends to help Day 2 operations by having routine security release days so that upgrade operations can be planned in advance for our users.</p> <h2 id="patch-tuesdays">Patch Tuesdays</h2> <p>The Product Security working group intends to ship a security release on the second Tuesday of each month. These security releases may contain fixes for multiple CVEs. The intent is for these security releases to contain no other fixes, although that may not always be possible.</p> <p>When the Product Security working group intends to ship an upcoming security patch, an announcement will be made on <a href="https://discuss.istio.io/c/announcements/5">the Istio discussion board</a> two weeks prior to release. If you&rsquo;re running Istio in production, we suggest you watch the Announcements category to be notified of such a release. If no such announcement is made, there will not be a security release for that month, barring the exceptions listed below.</p> <h3 id="first-patch-tuesday">First Patch Tuesday</h3> <p>We are pleased to announce that <a href="/v1.9/news/releases/1.9.x/announcing-1.9.5/">Istio 1.9.5</a>, and the final release of Istio 1.8, <a href="/v1.9/news/releases/1.8.x/announcing-1.8.6/">1.8.6</a>, are the first security releases to fit this pattern. As Istio 1.10 will be shipping soon, we intend to continue this new tradition in June.</p> <p>These releases fix 3 CVEs. Please see the release pages for information regarding the specific CVEs fixed.</p> <h2 id="unscheduled-security-releases">Unscheduled security releases</h2> <h3 id="0-day-vulnerabilities">0-day vulnerabilities</h3> <p>Unfortunately, 0-day vulnerabilities cannot be planned for. Upon disclosure, the Product Security Working Group will need to issue an out-of-band security release.
The above channels will be used to disclose such issues, so please subscribe to at least one of them to be notified of such disclosures.</p> <h3 id="third-party-embargoes">Third-party embargoes</h3> <p>Similar to 0-day vulnerabilities, security releases can be dictated by third-party embargoes, most notably Envoy&rsquo;s. When this occurs, Istio will release a same-day patch once the embargo is lifted.</p> <h2 id="security-best-practices">Security Best Practices</h2> <p>The <a href="/v1.9/docs/ops/best-practices/security/">Istio Security Best Practices</a> page has seen many improvements over the past few months. We recommend you check it regularly, as many of our recent security bulletins can be mitigated using the methods it discusses.</p> <h2 id="early-disclosure-list">Early Disclosure List</h2> <p>If you meet <a href="https://github.com/istio/community/blob/master/EARLY-DISCLOSURE.md#membership-criteria">the criteria</a> to be a part of the <a href="https://github.com/istio/community/blob/master/EARLY-DISCLOSURE.md">Istio Early Disclosure</a> list, please apply for membership. Patches for upcoming security releases will be made available to the early disclosure list ~2 weeks prior to Istio&rsquo;s Patch Tuesday.</p> <p>There will be times when an upcoming Istio security release will also need patches from Envoy. We cannot redistribute Envoy patches due to their embargo. <a href="https://github.com/envoyproxy/envoy/security/policy">Please refer to Envoy&rsquo;s guidance</a> on how to join their early disclosure list.</p> <h2 id="security-feedback">Security Feedback</h2> <p>The Product Security Working Group holds bi-weekly meetings on Tuesdays from 9:00-9:30 Pacific. For more information, see the <a href="https://calendar.google.com/calendar/embed?src=4uhe8fi8sf1e3tvmvh6vrq2dog%40group.calendar.google.com&amp;ctz=America%2FLos_Angeles">Istio Working Group Calendar</a>.</p> <p>Our next public meeting will be held on May 25, 2021.
Please join us!</p>Tue, 11 May 2021 00:00:00 +0000/v1.9/blog/2021/patch-tuesdays/Jacob Delgado (Aspen Mesh)/v1.9/blog/2021/patch-tuesdays/cveproduct securityUse discovery selectors to configure namespaces for your Istio service mesh <p>As users move their services to run in the Istio service mesh, they are often surprised that the control plane watches and processes all of the Kubernetes resources, from all namespaces in the cluster, by default. This can be an issue for very large clusters with lots of namespaces and deployments, or even for a moderately sized cluster with rapidly churning resources (for example, Spark jobs).</p> <p>Both <a href="https://github.com/istio/istio/issues/26679">in the community</a> and for our large-scale customers at <a href="https://solo.io">Solo.io</a>, we need a way to dynamically restrict the set of namespaces that are part of the mesh so that the Istio control plane only processes resources in those namespaces. The ability to restrict the namespaces enables Istiod to watch and push fewer resources and associated changes to the sidecars, thus improving the overall performance of the control plane and data plane.</p> <h2 id="background">Background</h2> <p>By default, Istio watches all Namespaces, Services, Endpoints, and Pods in a cluster. For example, in my Kubernetes cluster, I deployed the <code>sleep</code> service in the <code>default</code> namespace, and the <code>httpbin</code> service in the <code>ns-x</code> namespace.
I’ve added the <code>sleep</code> service to the mesh, but I have no plans to add the <code>httpbin</code> service to the mesh, or to have any service in the mesh interact with it.</p> <p>Use the <code>istioctl proxy-config endpoint</code> command to display all the endpoints for the <code>sleep</code> deployment:</p> <figure style="width:100%"> <div class="wrapper-with-intrinsic-ratio" style="padding-bottom:35.128205128205124%"> <a data-skipendnotes="true" href="/v1.9/blog/2021/discovery-selectors/endpoints-default.png" title="Endpoints for Sleep Deployment"> <img class="element-to-stretch" src="/v1.9/blog/2021/discovery-selectors/endpoints-default.png" alt="Endpoints for Sleep Deployment" /> </a> </div> <figcaption>Endpoints for Sleep Deployment</figcaption> </figure> <p>Note that the <code>httpbin</code> service endpoint in the <code>ns-x</code> namespace is in the list of discovered endpoints. This may not be an issue when you only have a few services. However, when you have hundreds of services that don&rsquo;t interact with any of the services running in the Istio service mesh, you probably don&rsquo;t want your Istio control plane to watch these services and send their information to the sidecars of your services in the mesh.</p> <h2 id="introducing-discovery-selectors">Introducing Discovery Selectors</h2> <p>Starting with Istio 1.10, we are introducing the new <code>discoverySelectors</code> option to <a href="/v1.9/docs/reference/config/istio.mesh.v1alpha1/#MeshConfig">MeshConfig</a>, which is an array of Kubernetes <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#resources-that-support-set-based-requirements">selectors</a>. The exact type is <code>[]LabelSelector</code>, as defined <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#resources-that-support-set-based-requirements">here</a>, allowing both simple selectors and set-based selectors.
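</p> <p>For illustration, a <code>MeshConfig</code> fragment combining both selector styles might look like this (the label keys and values here are hypothetical, not special names):</p> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' >meshConfig:
  discoverySelectors:
  # simple equality-based selector
  - matchLabels:
      istio-discovery: enabled
  # set-based selector
  - matchExpressions:
    - key: region
      operator: In
      values:
      - us-east1
      - us-west1
</code></pre> <p>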
These selectors apply to labels on namespaces.</p> <p>You can configure each label selector to express a variety of use cases, including but not limited to:</p> <ul> <li>Arbitrary label names/values, for example, all namespaces with label <code>istio-discovery=enabled</code></li> <li>A list of namespace labels using set-based selectors, which carry OR semantics, for example, all namespaces with label <code>istio-discovery=enabled</code> OR <code>region=us-east1</code></li> <li>Inclusion and/or exclusion of namespaces, for example, all namespaces with label <code>istio-discovery=enabled</code> AND label key <code>app</code> equal to <code>helloworld</code></li> </ul> <p>Note: <code>discoverySelectors</code> is not a security boundary. Istiod will continue to have access to all namespaces even when you have configured your <code>discoverySelectors</code>.</p> <h2 id="discovery-selectors-in-action">Discovery Selectors in Action</h2> <p>Assuming you know which namespaces to include as part of the service mesh, as a mesh administrator, you can configure <code>discoverySelectors</code> at installation time or post-installation by adding your desired discovery selectors to Istio’s MeshConfig resource. For example, you can configure Istio to discover only the namespaces that have the label <code>istio-discovery=enabled</code>.</p> <ol> <li><p>Using our earlier examples, let’s label the <code>default</code> namespace with the label <code>istio-discovery=enabled</code>.</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl label namespace default istio-discovery=enabled </code></pre></li> <li><p>Use <code>istioctl</code> to apply the YAML with <code>discoverySelectors</code> to update your Istio installation.
Note: to avoid any impact to your stable environment, we recommend that you use a different revision for your Istio installation:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ istioctl install --skip-confirmation -f - &lt;&lt;EOF
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  namespace: istio-system
spec:
  meshConfig:
    discoverySelectors:
    - matchLabels:
        istio-discovery: enabled
EOF
</code></pre></li> <li><p>Display the endpoint configuration for the <code>sleep</code> deployment:</p> <figure style="width:100%"> <div class="wrapper-with-intrinsic-ratio" style="padding-bottom:16.08212147134303%"> <a data-skipendnotes="true" href="/v1.9/blog/2021/discovery-selectors/endpoints-with-discovery-selectors.png" title="Endpoints for Sleep Deployment With Discovery Selectors"> <img class="element-to-stretch" src="/v1.9/blog/2021/discovery-selectors/endpoints-with-discovery-selectors.png" alt="Endpoints for Sleep Deployment With Discovery Selectors" /> </a> </div> <figcaption>Endpoints for Sleep Deployment With Discovery Selectors</figcaption> </figure> <p>Note that this time the <code>httpbin</code> service in the <code>ns-x</code> namespace is NOT in the list of discovered endpoints; neither are the many other services outside the <code>default</code> namespace.
If you display route (or cluster, or listener) information for the <code>sleep</code> deployment, you will also notice that much less configuration is returned:</p> <figure style="width:100%"> <div class="wrapper-with-intrinsic-ratio" style="padding-bottom:26.88356164383562%"> <a data-skipendnotes="true" href="/v1.9/blog/2021/discovery-selectors/routes-with-discovery-selectors.png" title="Routes for Sleep Deployment With Discovery Selectors"> <img class="element-to-stretch" src="/v1.9/blog/2021/discovery-selectors/routes-with-discovery-selectors.png" alt="Routes for Sleep Deployment With Discovery Selectors" /> </a> </div> <figcaption>Routes for Sleep Deployment With Discovery Selectors</figcaption> </figure></li> </ol> <p>You can use <code>matchLabels</code> to configure multiple labels with AND semantics, or use multiple <code>matchLabels</code> sets to configure OR semantics among them. Whether you deploy services or pods to namespaces with different sets of labels, or multiple application teams in your organization use different labeling conventions, <code>discoverySelectors</code> provides the flexibility you need. Furthermore, you can use <code>matchLabels</code> and <code>matchExpressions</code> together, per our <a href="https://github.com/istio/api/blob/master/mesh/v1alpha1/config.proto#L792">documentation</a>. Refer to the <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#label-selectors">Kubernetes selector docs</a> for additional detail on selector semantics.</p> <h2 id="discovery-selectors-vs-sidecar-resource">Discovery Selectors vs Sidecar Resource</h2> <p>The <code>discoverySelectors</code> configuration enables users to dynamically restrict the set of namespaces that are part of the mesh. A <a href="/v1.9/docs/reference/config/networking/sidecar/">Sidecar</a> resource also controls the visibility of sidecar configurations and what gets pushed to the sidecar proxy.
What are the differences between them?</p> <ul> <li>The <code>discoverySelectors</code> configuration declares what the Istio control plane watches and processes. Without a <code>discoverySelectors</code> configuration, the Istio control plane watches and processes all namespaces/services/endpoints/pods in the cluster, regardless of the Sidecar resources you have.</li> <li><code>discoverySelectors</code> is configured globally for the mesh by the mesh administrators. While Sidecar resources can also be configured globally for the mesh by the mesh administrators in the MeshConfig root namespace, they are commonly configured by service owners for their own namespaces.</li> </ul> <p>You can use <code>discoverySelectors</code> with Sidecar resources. You can use <code>discoverySelectors</code> to configure, at the mesh-wide level, which namespaces the Istio control plane should watch and process. For these namespaces in the Istio service mesh, you can create Sidecar resources globally or per namespace to further control what gets pushed to the sidecar proxies. Let us add the <code>Bookinfo</code> services to the <code>ns-y</code> namespace in the mesh as shown in the diagram below. <code>discoverySelectors</code> enables us to declare that the <code>default</code> and <code>ns-y</code> namespaces are part of the mesh. How can we configure the <code>sleep</code> service not to see anything other than the <code>default</code> namespace?
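</p> <p>One way, sketched below, is a Sidecar resource in the <code>default</code> namespace that limits egress visibility to the current namespace plus the control plane namespace (the exact set of hosts your workloads need may differ):</p> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' >apiVersion: networking.istio.io/v1beta1
kind: Sidecar
metadata:
  name: default
  namespace: default
spec:
  egress:
  - hosts:
    - "./*"            # services in the current namespace
    - "istio-system/*" # the control plane namespace
</code></pre> <p>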
Adding a Sidecar resource for the default namespace, we can effectively configure the <code>sleep</code> sidecar to only have visibility to the clusters/routes/listeners/endpoints associated with its current namespace plus any other required namespaces.</p> <figure style="width:100%"> <div class="wrapper-with-intrinsic-ratio" style="padding-bottom:83.06010928961749%"> <a data-skipendnotes="true" href="/v1.9/blog/2021/discovery-selectors/discovery-selectors-vs-sidecar.png" title="Discovery Selectors vs Sidecar Resource"> <img class="element-to-stretch" src="/v1.9/blog/2021/discovery-selectors/discovery-selectors-vs-sidecar.png" alt="Discovery Selectors vs Sidecar Resource" /> </a> </div> <figcaption>Discovery Selectors vs Sidecar Resource</figcaption> </figure> <h2 id="wrapping-up">Wrapping up</h2> <p>Discovery selectors are powerful configurations to tune the Istio control plane to only watch and process specific namespaces. If you don&rsquo;t want all namespaces in your Kubernetes cluster to be part of the service mesh or you have multiple Istio service meshes within your Kubernetes cluster, we highly recommend that you explore this configuration and reach out to us for feedback on our <a href="https://istio.slack.com">Istio slack</a> or GitHub.</p>Fri, 30 Apr 2021 00:00:00 +0000/v1.9/blog/2021/discovery-selectors/Lin Sun (Solo.io), Christian Posta (Solo.io), Harvey Xia (Solo.io)/v1.9/blog/2021/discovery-selectors/discoveryselectorsIstionamespacessidecarUpcoming networking changes in Istio 1.10 <h2 id="background">Background</h2> <p>While Kubernetes networking is customizable, a typical pod&rsquo;s network will look like this:</p> <figure style="width:75%"> <div class="wrapper-with-intrinsic-ratio" style="padding-bottom:56.25%"> <a data-skipendnotes="true" href="/v1.9/blog/2021/upcoming-networking-changes/pod.svg" title="A pod&#39;s network"> <img class="element-to-stretch" src="/v1.9/blog/2021/upcoming-networking-changes/pod.svg" alt="A pod&#39;s network" /> </a> 
</div> <figcaption>A pod&#39;s network</figcaption> </figure> <p>An application may choose to bind to either the loopback interface <code>lo</code> (typically binding to <code>127.0.0.1</code>), or the pod&rsquo;s network interface <code>eth0</code> (typically to the pod&rsquo;s IP), or both (typically binding to <code>0.0.0.0</code>).</p> <p>Binding to <code>lo</code> allows calls such as <code>curl localhost</code> to work from within the pod. Binding to <code>eth0</code> allows calls to the pod from other pods.</p> <p>Typically, an application will bind to both. However, applications with internal-only functionality, such as an admin interface, may choose to bind only to <code>lo</code> to avoid access from other pods. Additionally, some applications, typically stateful applications, choose to bind only to <code>eth0</code>.</p> <h2 id="current-behavior">Current behavior</h2> <p>In Istio prior to release 1.10, the Envoy proxy, running in the same pod as the application, binds to the <code>eth0</code> interface and redirects all inbound traffic to the <code>lo</code> interface.</p> <figure style="width:75%"> <div class="wrapper-with-intrinsic-ratio" style="padding-bottom:56.25%"> <a data-skipendnotes="true" href="/v1.9/blog/2021/upcoming-networking-changes/current.svg" title="A pod&#39;s network with Istio today"> <img class="element-to-stretch" src="/v1.9/blog/2021/upcoming-networking-changes/current.svg" alt="A pod&#39;s network with Istio today" /> </a> </div> <figcaption>A pod&#39;s network with Istio today</figcaption> </figure> <p>This has two important side effects that cause the behavior to differ from standard Kubernetes:</p> <ul> <li>Applications binding only to <code>lo</code> will receive traffic from other pods, when this would otherwise not be allowed.</li> <li>Applications binding only to <code>eth0</code> will not receive traffic.</li> </ul> <p>Applications that bind to both interfaces (which is typical) will not be impacted.</p> <h2 id="future-behavior">Future
behavior</h2> <p>Starting with Istio 1.10, the networking behavior is changed to align with the standard behavior present in Kubernetes.</p> <figure style="width:75%"> <div class="wrapper-with-intrinsic-ratio" style="padding-bottom:56.25%"> <a data-skipendnotes="true" href="/v1.9/blog/2021/upcoming-networking-changes/planned.svg" title="A pod&#39;s network with Istio in the future"> <img class="element-to-stretch" src="/v1.9/blog/2021/upcoming-networking-changes/planned.svg" alt="A pod&#39;s network with Istio in the future" /> </a> </div> <figcaption>A pod&#39;s network with Istio in the future</figcaption> </figure> <p>Here we can see that the proxy no longer redirects the traffic to the <code>lo</code> interface, but instead forwards it to the application on <code>eth0</code>. As a result, the standard behavior of Kubernetes is retained, but we still get all the benefits of Istio. This change allows Istio to get closer to its goal of being a drop-in transparent proxy that works with existing workloads with <a href="/v1.9/blog/2021/zero-config-istio/">zero configuration</a>. Additionally, it avoids unintended exposure of applications binding only to <code>lo</code>.</p> <h2 id="am-i-impacted">Am I impacted?</h2> <p>For new users, this change should only be an improvement. However, if you are an existing user, you may have come to depend on the old behavior, intentionally or accidentally.</p> <p>To help detect these situations, we have added a check to find pods that will be impacted. You can run the <code>istioctl experimental precheck</code> command to get a report of any pods binding to <code>lo</code> on a port exposed in a <code>Service</code>. This command is available in Istio 1.10+. 
<strong>Without action, these ports will no longer be accessible upon upgrade.</strong></p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ istioctl experimental precheck
Error [IST0143] (Pod echo-local-849647c5bd-g9wxf.default) Port 443 is exposed in a Service but listens on localhost. It will not be exposed to other pods.
Error [IST0143] (Pod echo-local-849647c5bd-g9wxf.default) Port 7070 is exposed in a Service but listens on localhost. It will not be exposed to other pods.
Error: Issues found when checking the cluster. Istio may not be safe to install or upgrade.
See https://istio.io/latest/docs/reference/config/analysis for more information about causes and resolutions.
</code></pre> <h3 id="migration">Migration</h3> <p>If you are currently binding to <code>lo</code>, you have a few options:</p> <ul> <li>Switch your application to bind to all interfaces (<code>0.0.0.0</code> or <code>::</code>).</li> <li><p>Explicitly configure the port using the <a href="/v1.9/docs/reference/config/networking/sidecar/#IstioIngressListener"><code>Sidecar</code> ingress configuration</a> to send traffic to <code>lo</code>, preserving the old behavior.</p> <p>For example, to configure requests to be sent to <code>localhost</code> for the <code>ratings</code> application:</p> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' >apiVersion: networking.istio.io/v1beta1
kind: Sidecar
metadata:
  name: ratings
spec:
  workloadSelector:
    labels:
      app: ratings
  ingress:
  - port:
      number: 8080
      protocol: HTTP
      name: http
    defaultEndpoint: 127.0.0.1:8080
</code></pre></li> <li><p>Disable the change entirely with the <code>PILOT_ENABLE_INBOUND_PASSTHROUGH=false</code> environment variable in Istiod, to enable the same behavior as prior to Istio 1.10.
This option will be removed in the future.</p></li> </ul>Thu, 15 Apr 2021 00:00:00 +0000/v1.9/blog/2021/upcoming-networking-changes/John Howard (Google)/v1.9/blog/2021/upcoming-networking-changes/Istio and Envoy WebAssembly Extensibility, One Year On <p>One year ago today, in the 1.5 release, we introduced <a href="/v1.9/blog/2020/wasm-announce/">WebAssembly-based extensibility</a> to Istio. Over the course of the year, the Istio, Envoy, and Proxy-Wasm communities have continued our joint efforts to make WebAssembly (Wasm) extensibility stable, reliable, and easy to adopt. Let&rsquo;s walk through the updates to Wasm support through the Istio 1.9 release, and our plans for the future.</p> <h2 id="webassembly-support-merged-in-upstream-envoy">WebAssembly support merged in upstream Envoy</h2> <p>After adding experimental support for Wasm and the WebAssembly for Proxies (Proxy-Wasm) ABI to Istio&rsquo;s fork of Envoy, we collected some great feedback from our community of early adopters. This, combined with the experience gained from developing core Istio Wasm extensions, helped us mature and stabilize the runtime. These improvements unblocked merging Wasm support directly into Envoy upstream in October 2020, allowing it to become part of all official Envoy releases. This was a significant milestone, since it indicates that:</p> <ul> <li>The runtime is ready for wider adoption.</li> <li>The programming ABI/API, extension configuration API, and runtime behavior, are becoming stable.</li> <li>You can expect a larger community of adoption and support moving forward.</li> </ul> <h2 id="wasm-extensions-ecosystem-repository"><code>wasm-extensions</code> Ecosystem Repository</h2> <p>As an early adopter of the Envoy Wasm runtime, the Istio Extensions and Telemetry working group gained a lot of experience in developing extensions. 
We built several first-class extensions, including <a href="/v1.9/docs/reference/config/proxy_extensions/metadata_exchange/">metadata exchange</a>, <a href="/v1.9/docs/reference/config/proxy_extensions/stats/">Prometheus stats</a>, and <a href="/v1.9/docs/reference/config/proxy_extensions/attributegen/">attribute generation</a>. In order to share our learning more broadly, we created a <a href="https://github.com/istio-ecosystem/wasm-extensions"><code>wasm-extensions</code> repository</a> in the <code>istio-ecosystem</code> organization. This repository serves two purposes:</p> <ul> <li>It provides canonical example extensions, covering several highly demanded features (such as <a href="https://github.com/istio-ecosystem/wasm-extensions/tree/master/extensions/basic_auth">basic authentication</a>).</li> <li>It provides a guide for Wasm extension development, testing, and release. The guide is based on the same build tool chains and test frameworks that are used, maintained and tested by the Istio extensibility team.</li> </ul> <p>The guide currently covers <a href="https://github.com/istio-ecosystem/wasm-extensions/blob/master/doc/write-a-wasm-extension-with-cpp.md">WebAssembly extension development</a> and <a href="https://github.com/istio-ecosystem/wasm-extensions/blob/master/doc/write-cpp-unit-test.md">unit testing</a> with C++, as well as <a href="https://github.com/istio-ecosystem/wasm-extensions/blob/master/doc/write-integration-test.md">integration testing</a> with a Go test framework, which simulates a real runtime by running a Wasm module with the Istio proxy binary. 
In the future, we will also add several more canonical extensions, such as an integration with Open Policy Agent, and header manipulation based on JWT tokens.</p> <h2 id="wasm-module-distribution-via-the-istio-agent">Wasm module distribution via the Istio Agent</h2> <p>Prior to Istio 1.9, <a href="https://www.envoyproxy.io/docs/envoy/latest/api-v3/config/core/v3/base.proto#config-core-v3-remotedatasource">Envoy remote data sources</a> were needed to distribute remote Wasm modules to the proxy. <a href="https://gist.github.com/bianpengyuan/8377898190e8052ffa36e88a16911910">In this example</a>, you can see two <code>EnvoyFilter</code> resources are defined: one to add a remote fetch Envoy cluster, and the other to inject a Wasm filter into the HTTP filter chain. This method has a drawback: if a remote fetch fails, either due to bad configuration or a transient error, Envoy will be stuck with the bad configuration. If a Wasm extension is configured as <a href="https://www.envoyproxy.io/docs/envoy/latest/api-v3/extensions/wasm/v3/wasm.proto#extensions-wasm-v3-pluginconfig">fail closed</a>, a bad remote fetch will stop Envoy from serving. To fix this issue, <a href="https://github.com/envoyproxy/envoy/issues/9447">a fundamental change</a> is needed to the Envoy xDS protocol to allow asynchronous xDS responses.</p> <p>Istio 1.9 provides a reliable distribution mechanism out of the box by leveraging the xDS proxy inside istio-agent and Envoy&rsquo;s <a href="https://www.envoyproxy.io/docs/envoy/latest/configuration/overview/extension">Extension Configuration Discovery Service</a> (ECDS).</p> <p>istio-agent intercepts the extension config resource update from istiod, reads the remote fetch hint from it, downloads the Wasm module, and rewrites the ECDS configuration with the path of the downloaded Wasm module. If the download fails, istio-agent will reject the ECDS update and prevent a bad configuration from reaching Envoy.
For more detail, please see <a href="/v1.9/docs/ops/configuration/extensibility/wasm-module-distribution/">our docs on Wasm module distribution</a>.</p> <figure style="width:75%"> <div class="wrapper-with-intrinsic-ratio" style="padding-bottom:75%"> <a data-skipendnotes="true" href="/v1.9/blog/2021/wasm-progress/architecture-istio-agent-downloading-wasm-module.svg" title="Remote Wasm module fetch flow"> <img class="element-to-stretch" src="/v1.9/blog/2021/wasm-progress/architecture-istio-agent-downloading-wasm-module.svg" alt="Remote Wasm module fetch flow" /> </a> </div> <figcaption>Remote Wasm module fetch flow</figcaption> </figure> <h2 id="istio-wasm-sig-and-future-work">Istio Wasm SIG and Future Work</h2> <p>Although we have made a lot of progress on Wasm extensibility, many aspects of the project remain to be completed. In order to consolidate the efforts from various parties and better tackle the challenges ahead, we have formed an <a href="https://discuss.istio.io/t/introducing-wasm-sig/9930">Istio WebAssembly SIG</a>, with the aim of providing a standard and reliable way for Istio to consume Wasm extensions. Here are some of the things we are working on:</p> <ul> <li><strong>A first-class extension API</strong>: Currently, Wasm extensions need to be injected via Istio&rsquo;s <code>EnvoyFilter</code> API.
A first-class extension API will make using Wasm with Istio easier, and we expect this to be introduced in Istio 1.10.</li> <li><strong>Distribution artifacts interoperability</strong>: Built on top of Solo.io’s <a href="https://www.solo.io/blog/announcing-the-webassembly-wasm-oci-image-spec/">WebAssembly OCI image spec effort</a>, a standard Wasm artifact format will make modules easy to build, pull, publish, and execute.</li> <li><strong>Container Storage Interface (CSI) based artifacts distribution</strong>: Using istio-agent to distribute modules is easy to adopt, but may not be efficient, as each proxy will keep a copy of the Wasm module. As a more efficient solution, with <a href="https://kubernetes-csi.github.io/docs/ephemeral-local-volumes.html">Ephemeral CSI</a>, a DaemonSet will be provided that can configure storage for pods. Working similarly to a CNI plugin, a CSI driver would fetch the Wasm module out-of-band from the xDS flow and mount it inside the <code>rootfs</code> when the pod starts up.</li> </ul> <p>If you would like to join us, the group meets every other Tuesday at 2 PM PT. You can find the meeting on the <a href="https://github.com/istio/community/blob/master/WORKING-GROUPS.md#working-group-meetings">Istio working group calendar</a>.</p> <p>We look forward to seeing how you will use Wasm to extend Istio!</p>Fri, 05 Mar 2021 00:00:00 +0000/v1.9/blog/2021/wasm-progress/Pengyuan Bian (Google)/v1.9/blog/2021/wasm-progress/wasmextensibilityWebAssemblyMigrate pre-Istio 1.4 Alpha security policy to the current APIs <p>In versions of Istio prior to 1.4, security policy was configured using <code>v1alpha1</code> APIs (<code>MeshPolicy</code>, <code>Policy</code>, <code>ClusterRbacConfig</code>, <code>ServiceRole</code> and <code>ServiceRoleBinding</code>).
After consulting with our early adopters, we made <a href="/v1.9/blog/2019/v1beta1-authorization-policy/">major improvements to the policy system</a> and released <code>v1beta1</code> APIs along with Istio 1.4. These refreshed APIs (<code>PeerAuthentication</code>, <code>RequestAuthentication</code> and <code>AuthorizationPolicy</code>) helped standardize how we define policy targets in Istio, helped users understand where policies were applied, and cut the number of configuration objects required.</p> <p>The old APIs were deprecated in Istio 1.4. Two releases after the <code>v1beta1</code> APIs were introduced, Istio 1.6 removed support for the <code>v1alpha1</code> APIs.</p> <p>If you are using a version of Istio prior to 1.6 and you want to upgrade, you will have to migrate your alpha security policy objects to the beta API. This tutorial will help you make that move.</p> <div> <aside class="callout tip"> <div class="type"><svg class="large-icon"><use xlink:href="/v1.9/img/icons.svg#callout-tip"/></svg></div> <div class="content">If you adopted Istio after version 1.6, or you&rsquo;re not using <code>v1alpha1</code> security APIs, you can stop reading.</div> </aside> </div> <h2 id="overview">Overview</h2> <p>Your control plane must first be upgraded to a version that supports the <code>v1beta1</code> security policy.</p> <p>It is recommended to first upgrade to Istio 1.5 as a transitional version, because it is the only version that supports both <code>v1alpha1</code> and <code>v1beta1</code> security policies. You will complete the security policy migration in Istio 1.5, remove the <code>v1alpha1</code> security policy, and then continue to upgrade to later Istio versions.
For a given workload, the <code>v1beta1</code> version will take precedence over the <code>v1alpha1</code> version.</p> <p>Alternatively, if you want to do a skip-level upgrade directly from Istio 1.4 to 1.6 or later, you should use the <a href="/v1.9/docs/setup/upgrade/canary/">canary upgrade</a> method to install a new Istio version as a separate control plane, and gradually migrate your workloads to the new control plane completing the security policy migration at the same time.</p> <div> <aside class="callout warning"> <div class="type"> <svg class="large-icon"><use xlink:href="/v1.9/img/icons.svg#callout-warning"/></svg> </div> <div class="content">Skip-level upgrades are not supported by Istio and there might be other issues in this process. Istio 1.6 does not support the <code>v1alpha1</code> security policy, and if you do not migrate your old policies before the upgrade, you are essentially removing all your security policies.</div> </aside> </div> <p>In either case, it is recommended to migrate using namespace granularity: for each namespace, find all the <code>v1alpha1</code> policies that have an effect on workloads in the namespace and migrate all the policies to <code>v1beta1</code> at the same time. 
This allows a safer migration as you can make sure everything is working as expected, and then move forward to the next namespace.</p> <h2 id="major-differences">Major differences</h2> <p>Before starting the migration, read through the <code>v1beta1</code> <a href="/v1.9/docs/concepts/security/#authentication">authentication</a> and <a href="/v1.9/docs/concepts/security/#authorization">authorization</a> documentation to understand the <code>v1beta1</code> policy.</p> <p>You should examine all of your existing <code>v1alpha1</code> security policies, find out what fields are used and which policies need migration, compare the findings with the major differences listed below and confirm there are no blocking issues (e.g., using an alpha feature that is no longer supported in beta):</p> <table> <thead> <tr> <th>Major Differences</th> <th><code>v1alpha1</code></th> <th><code>v1beta1</code></th> </tr> </thead> <tbody> <tr> <td>API stability</td> <td>not backward compatible</td> <td>backward compatible</td> </tr> <tr> <td>mTLS</td> <td><code>MeshPolicy</code> and <code>Policy</code></td> <td><code>PeerAuthentication</code></td> </tr> <tr> <td>JWT</td> <td><code>MeshPolicy</code> and <code>Policy</code></td> <td><code>RequestAuthentication</code></td> </tr> <tr> <td>Authorization</td> <td><code>ClusterRbacConfig</code>, <code>ServiceRole</code> and <code>ServiceRoleBinding</code></td> <td><code>AuthorizationPolicy</code></td> </tr> <tr> <td>Policy target</td> <td>service name based</td> <td>workload selector based</td> </tr> <tr> <td>Port number</td> <td>service ports</td> <td>workload ports</td> </tr> </tbody> </table> <p>Although <code>RequestAuthentication</code> in <code>v1beta1</code> security policy is similar to the <code>v1alpha1</code> JWT policy, there is a notable semantics change. The <code>v1alpha1</code> JWT policy needs to be migrated to two <code>v1beta1</code> resources: <code>RequestAuthentication</code> and <code>AuthorizationPolicy</code>. 
This will change the JWT deny message due to the use of <code>AuthorizationPolicy</code>. In the alpha version, the HTTP code 401 is returned with the body <code>Origin authentication failed</code>. In the beta version, the HTTP code 403 is returned with the body <code>RBAC: access denied</code>.</p> <p>The <code>v1alpha1</code> JWT policy <a href="https://istio.io/v1.4/docs/reference/config/security/istio.authentication.v1alpha1/#Jwt-TriggerRule"><code>triggerRule</code> field</a> is replaced by the <code>AuthorizationPolicy</code> with the exception that the <a href="https://istio.io/v1.4/docs/reference/config/security/istio.authentication.v1alpha1/#StringMatch"><code>regex</code> field</a> is no longer supported.</p> <h2 id="migration-flow">Migration flow</h2> <p>This section describes in detail how to migrate a <code>v1alpha1</code> security policy.</p> <h3 id="step-1-find-related-policies">Step 1: Find related policies</h3> <p>For each namespace, find all <code>v1alpha1</code> security policies that have an effect on workloads in the namespace. 
The result could include:</p> <ul> <li>a single <code>MeshPolicy</code> that applies to all services in the mesh;</li> <li>a single namespace-level <code>Policy</code> that applies to all workloads in the namespace;</li> <li>multiple service-level <code>Policy</code> objects that apply to the selected services in the namespace;</li> <li>a single <code>ClusterRbacConfig</code> that enables the RBAC on the whole namespace or some services in the namespace;</li> <li>multiple namespace-level <code>ServiceRole</code> and <code>ServiceRoleBinding</code> objects that apply to all services in the namespace;</li> <li>multiple service-level <code>ServiceRole</code> and <code>ServiceRoleBinding</code> objects that apply to the selected services in the namespace;</li> </ul> <h3 id="step-2-convert-service-name-to-workload-selector">Step 2: Convert service name to workload selector</h3> <p>The <code>v1alpha1</code> policy selects targets using their service name. You should refer to the corresponding service definition to decide the workload selector that should be used in the <code>v1beta1</code> policy.</p> <p>A single <code>v1alpha1</code> policy may include multiple services. It will need to be migrated to multiple <code>v1beta1</code> policies because the <code>v1beta1</code> policy currently only supports at most one workload selector per policy.</p> <p>Also note the <code>v1alpha1</code> policy uses service port but the <code>v1beta1</code> policy uses the workload port. 
This means the port number might be different in the migrated <code>v1beta1</code> policy.</p> <h3 id="step-3-migrate-authentication-policy">Step 3: Migrate authentication policy</h3> <p>For each <code>v1alpha1</code> authentication policy, migrate with the following rules:</p> <ol> <li><p>If the whole namespace is enabled with mTLS or JWT, create the <code>PeerAuthentication</code>, <code>RequestAuthentication</code> and <code>AuthorizationPolicy</code> without a workload selector for the whole namespace. Fill out the policy based on the semantics of the corresponding <code>MeshPolicy</code> or <code>Policy</code> for the namespace.</p></li> <li><p>If a workload is enabled with mTLS or JWT, create the <code>PeerAuthentication</code>, <code>RequestAuthentication</code> and <code>AuthorizationPolicy</code> with a corresponding workload selector for the workload. Fill out the policy based on the semantics of the corresponding <code>MeshPolicy</code> or <code>Policy</code> for the workload.</p></li> <li><p>For mTLS related configuration, use <code>STRICT</code> mode if the alpha policy is using <code>STRICT</code>, or use <code>PERMISSIVE</code> in all other cases.</p></li> <li><p>For JWT related configuration, refer to the <a href="/v1.9/docs/tasks/security/authentication/authn-policy/#end-user-authentication"><code>end-user authentication</code> documentation</a> to learn how to migrate to <code>RequestAuthentication</code> and <code>AuthorizationPolicy</code>.</p></li> </ol> <p>A <a href="https://github.com/istio-ecosystem/security-policy-migrate">security policy migration tool</a> is provided to automatically migrate authentication policies.
Please refer to the tool&rsquo;s README for its usage.</p> <h3 id="step-4-migrate-rbac-policy">Step 4: Migrate RBAC policy</h3> <p>For each <code>v1alpha1</code> RBAC policy, migrate with the following rules:</p> <ol> <li><p>If the whole namespace is enabled with RBAC, create an <code>AuthorizationPolicy</code> without a workload selector for the whole namespace. Add an empty rule so that it will deny all requests to the namespace by default.</p></li> <li><p>If a workload is enabled with RBAC, create an <code>AuthorizationPolicy</code> with a corresponding workload selector for the workload. Add rules based on the semantics of the corresponding <code>ServiceRole</code> and <code>ServiceRoleBinding</code> for the workload.</p></li> </ol> <h3 id="step-5-verify-migrated-policy">Step 5: Verify migrated policy</h3> <ol> <li><p>Double check the migrated <code>v1beta1</code> policies: make sure there are no policies with duplicate names, the namespace is specified correctly and all <code>v1alpha1</code> policies for the given namespace are migrated.</p></li> <li><p>Dry-run the <code>v1beta1</code> policy with the command <code>kubectl apply --dry-run=server -f beta-policy.yaml</code> to make sure it is valid.</p></li> <li><p>Apply the <code>v1beta1</code> policy to the given namespace and closely monitor the effect. Make sure to test both allow and deny scenarios if JWT or authorization are used.</p></li> <li><p>Migrate the next namespace. Only remove the <code>v1alpha1</code> policy after completing migration for all namespaces successfully.</p></li> </ol> <h2 id="example">Example</h2> <h3 id="v1alpha1-policy"><code>v1alpha1</code> policy</h3> <p>This section gives a full example showing the migration for namespace <code>foo</code>. 
Assume the namespace <code>foo</code> has the following <code>v1alpha1</code> policies that affect the workloads in it:</p> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' ># A MeshPolicy that enables mTLS globally, including the whole foo namespace apiVersion: &#34;authentication.istio.io/v1alpha1&#34; kind: &#34;MeshPolicy&#34; metadata: name: &#34;default&#34; spec: peers: - mtls: {} --- # A Policy that enables mTLS permissive mode and enables JWT for the httpbin service on port 8000 apiVersion: authentication.istio.io/v1alpha1 kind: Policy metadata: name: httpbin namespace: foo spec: targets: - name: httpbin ports: - number: 8000 peers: - mtls: mode: PERMISSIVE origins: - jwt: issuer: testing@example.com jwksUri: https://www.example.com/jwks.json triggerRules: - includedPaths: - prefix: /admin/ excludedPaths: - exact: /admin/status principalBinding: USE_ORIGIN --- # A ClusterRbacConfig that enables RBAC globally, including the foo namespace apiVersion: &#34;rbac.istio.io/v1alpha1&#34; kind: ClusterRbacConfig metadata: name: default spec: mode: &#39;ON&#39; --- # A ServiceRole that enables RBAC for the httpbin service apiVersion: &#34;rbac.istio.io/v1alpha1&#34; kind: ServiceRole metadata: name: httpbin namespace: foo spec: rules: - services: [&#34;httpbin.foo.svc.cluster.local&#34;] methods: [&#34;GET&#34;] --- # A ServiceRoleBinding for the above ServiceRole apiVersion: &#34;rbac.istio.io/v1alpha1&#34; kind: ServiceRoleBinding metadata: name: httpbin namespace: foo spec: subjects: - user: cluster.local/ns/foo/sa/sleep roleRef: kind: ServiceRole name: httpbin </code></pre> <h3 id="httpbin-service"><code>httpbin</code> service</h3> <p>The <code>httpbin</code> service has the following definition:</p> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' >apiVersion: v1 kind: Service metadata: name: httpbin namespace: foo spec: ports: - name: http port: 8000 targetPort: 80 selector: app: httpbin </code></pre> 
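<p>Before writing the migrated policies, you can confirm this mapping mechanically. The following is a minimal sketch using only standard shell tools on a saved copy of the service definition above (on a live cluster, the same fields are available via <code>kubectl get service httpbin -n foo -o yaml</code>):</p>

```shell
# Save the httpbin Service definition shown above to a file.
cat > /tmp/httpbin-svc.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: httpbin
  namespace: foo
spec:
  ports:
  - name: http
    port: 8000
    targetPort: 80
  selector:
    app: httpbin
EOF
# The selector labels become the v1beta1 workload selector.
grep -A1 'selector:' /tmp/httpbin-svc.yaml
# The targetPort, not the service port, is the port number to use in the
# migrated v1beta1 policy (e.g. in portLevelMtls and authorization rules).
grep 'targetPort:' /tmp/httpbin-svc.yaml
```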
<p>This means the service name <code>httpbin</code> should be replaced by the workload selector <code>app: httpbin</code>, and the service port 8000 should be replaced by the workload port 80.</p> <h3 id="v1beta1-authentication-policy"><code>v1beta1</code> authentication policy</h3> <p>The migrated <code>v1beta1</code> policies for the <code>v1alpha1</code> authentication policies in <code>foo</code> namespace are listed below:</p> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' ># A PeerAuthentication that enables mTLS for the foo namespace, migrated from the MeshPolicy # Alternatively the MeshPolicy could also be migrated to a PeerAuthentication at mesh level apiVersion: security.istio.io/v1beta1 kind: PeerAuthentication metadata: name: default namespace: foo spec: mtls: mode: STRICT --- # A PeerAuthentication that enables mTLS for the httpbin workload, migrated from the Policy apiVersion: security.istio.io/v1beta1 kind: PeerAuthentication metadata: name: httpbin namespace: foo spec: selector: matchLabels: app: httpbin # port level mtls set for the workload port 80 corresponding to the service port 8000 portLevelMtls: 80: mode: PERMISSIVE --- # A RequestAuthentication that enables JWT for the httpbin workload, migrated from the Policy apiVersion: security.istio.io/v1beta1 kind: RequestAuthentication metadata: name: httpbin namespace: foo spec: selector: matchLabels: app: httpbin jwtRules: - issuer: testing@example.com jwksUri: https://www.example.com/jwks.json --- # An AuthorizationPolicy that requires JWT validation for the httpbin workload, migrated from the Policy apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: httpbin-jwt namespace: foo spec: # Use DENY action to explicitly deny requests without a JWT token action: DENY selector: matchLabels: app: httpbin rules: - from: - source: # This makes sure requests without a JWT token will be denied notRequestPrincipals: [&#34;*&#34;] to: - operation: # 
This should be the workload port 80, not the service port 8000 ports: [&#34;80&#34;] # The path and notPath are converted from the trigger rule in the Policy paths: [&#34;/admin/*&#34;] notPaths: [&#34;/admin/status&#34;] </code></pre> <h3 id="v1beta1-authorization-policy"><code>v1beta1</code> authorization policy</h3> <p>The migrated <code>v1beta1</code> policies for the <code>v1alpha1</code> RBAC policies in <code>foo</code> namespace are listed below:</p> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' ># An AuthorizationPolicy that denies by default, migrated from the ClusterRbacConfig apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: default namespace: foo spec: # An empty rule that allows nothing {} --- # An AuthorizationPolicy that enforces authorization for the httpbin workload, migrated from the ServiceRole and ServiceRoleBinding apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: httpbin namespace: foo spec: selector: matchLabels: app: httpbin version: v1 action: ALLOW rules: - from: - source: principals: [&#34;cluster.local/ns/foo/sa/sleep&#34;] to: - operation: methods: [&#34;GET&#34;] </code></pre> <h2 id="finish-the-upgrade">Finish the upgrade</h2> <p>Congratulations; having reached this point, you should only have <code>v1beta1</code> policy objects, and you will be able to continue upgrading Istio to 1.6 and beyond.</p>Wed, 03 Mar 2021 00:00:00 +0000/v1.9/blog/2021/migrate-alpha-policy/Yangmin Zhu (Google), Craig Box (Google)/v1.9/blog/2021/migrate-alpha-policy/securitypolicymigratealphabetadeprecatepeerjwtauthorizationZero Configuration Istio <p>When a new user encounters Istio for the first time, they are sometimes overwhelmed by the vast feature set it exposes.
Unfortunately, this can give the impression that Istio is needlessly complex and not fit for small teams or clusters.</p> <p>One great part about Istio, however, is that it aims to bring as much value to users out of the box without any configuration at all. This enables users to get most of the benefits of Istio with minimal effort. For some users with simple requirements, custom configurations may never be required at all. Others will be able to incrementally add Istio configurations once they are more comfortable and as they need them, such as to add ingress routing, fine-tune networking settings, or lock down security policies.</p> <h2 id="getting-started">Getting started</h2> <p>To get started, check out our <a href="/v1.9/docs/setup/getting-started/">getting started</a> documentation, where you will learn how to install Istio. If you are already familiar, you can simply run <code>istioctl install</code>.</p> <p>Next, we will explore all the benefits Istio provides us, without any configuration or changes to application code.</p> <h2 id="security">Security</h2> <p>Istio automatically enables <a href="/v1.9/docs/concepts/security/#mutual-tls-authentication">mutual TLS</a> for traffic between pods in the mesh. This enables applications to forgo complex TLS configuration and certificate management, and offload all transport layer security to the sidecar.</p> <p>Once comfortable with automatic TLS, you may choose to <a href="/v1.9/docs/tasks/security/authentication/mtls-migration/">allow only mTLS traffic</a>, or configure custom <a href="/v1.9/docs/tasks/security/authorization/">authorization policies</a> for your needs.</p> <h2 id="observability">Observability</h2> <p>Istio automatically generates detailed telemetry for all service communications within a mesh. This telemetry provides observability of service behavior, empowering operators to troubleshoot, maintain, and optimize their applications – without imposing any additional burdens on service developers.
Through Istio, operators gain a thorough understanding of how monitored services are interacting, both with other services and with the Istio components themselves.</p> <p>All of this functionality is added by Istio without any configuration. <a href="/v1.9/docs/ops/integrations/">Integrations</a> with tools such as Prometheus, Grafana, Jaeger, Zipkin, and Kiali are also available.</p> <p>For more information about the observability Istio provides, check out the <a href="/v1.9/docs/concepts/observability/">observability overview</a>.</p> <h2 id="traffic-management">Traffic Management</h2> <p>While Kubernetes provides a lot of networking functionality, such as service discovery and DNS, this is done at Layer 4, which can have unintended inefficiencies. For example, in a simple HTTP application sending traffic to a service with 3 replicas, we can see unbalanced load:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ curl http://echo/{0..5} -s | grep Hostname Hostname=echo-cb96f8d94-2ssll Hostname=echo-cb96f8d94-2ssll Hostname=echo-cb96f8d94-2ssll Hostname=echo-cb96f8d94-2ssll Hostname=echo-cb96f8d94-2ssll Hostname=echo-cb96f8d94-2ssll $ curl http://echo/{0..5} -s | grep Hostname Hostname=echo-cb96f8d94-879sn Hostname=echo-cb96f8d94-879sn Hostname=echo-cb96f8d94-879sn Hostname=echo-cb96f8d94-879sn Hostname=echo-cb96f8d94-879sn Hostname=echo-cb96f8d94-879sn </code></pre> <p>The problem here is that Kubernetes determines the backend to send to when the connection is established, and all future requests on the same connection will be sent to the same backend. In our example here, our first six requests are all sent to <code>echo-cb96f8d94-2ssll</code>, while our next set (using a new connection) are all sent to <code>echo-cb96f8d94-879sn</code>.
Our third instance never receives any requests.</p> <p>With Istio, HTTP traffic (including HTTP/2 and gRPC) is automatically detected, and our services will automatically be load balanced per <em>request</em>, rather than per <em>connection</em>:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ curl http://echo/{0..5} -s | grep Hostname Hostname=echo-cb96f8d94-wf4xk Hostname=echo-cb96f8d94-rpfqz Hostname=echo-cb96f8d94-cgmxr Hostname=echo-cb96f8d94-wf4xk Hostname=echo-cb96f8d94-rpfqz Hostname=echo-cb96f8d94-cgmxr </code></pre> <p>Here we can see our requests are <a href="/v1.9/docs/concepts/traffic-management/#load-balancing-options">round-robin</a> load balanced between all backends.</p> <p>In addition to these better defaults, Istio offers customization of a <a href="/v1.9/docs/concepts/traffic-management/">variety of traffic management settings</a>, including timeouts, retries, and much more.</p>Thu, 25 Feb 2021 00:00:00 +0000/v1.9/blog/2021/zero-config-istio/John Howard (Google)/v1.9/blog/2021/zero-config-istio/IstioCon 2021: Schedule Is Live!<p><a href="https://events.istio.io/istiocon-2021/">IstioCon 2021</a> is a week-long, community-led, virtual conference starting on February 22. 
This event provides an opportunity to hear the lessons learned from companies like Atlassian, Airbnb, FICO, eBay, T-Mobile and Salesforce running Istio in production, hands-on experiences from the Istio community, and will feature maintainers from across the Istio ecosystem.</p> <p>You can now find the <a href="https://events.istio.io/istiocon-2021/schedule/">full schedule of events</a> which includes a series of <a href="https://events.istio.io/istiocon-2021/schedule/english/">English</a> sessions and <a href="https://events.istio.io/istiocon-2021/schedule/chinese/">Chinese</a> sessions.</p> <figure style="width:75%"> <div class="wrapper-with-intrinsic-ratio" style="padding-bottom:56.25%"> <a data-skipendnotes="true" href="/v1.9/blog/2021/istiocon-2021-program/istiocon-program.png" title=""> <img class="element-to-stretch" src="/v1.9/blog/2021/istiocon-2021-program/istiocon-program.png" alt="IstioCon logo" /> </a> </div> <figcaption></figcaption> </figure> <p>By attending the conference, you’ll connect with community members from across the globe. Each day you will find keynotes, technical talks, lightning talks, panel discussions, workshops and roadmap sessions led by diverse speakers representing the Istio community. You can also connect with other Istio and Open Source ecosystem community members through social hour events that include activities on the social platform <a href="https://events.istio.io/istiocon-2021/networking/">Gather.town</a>, a live cartoonist, virtual swag bags, raffles, live music and games.</p> <p>Don’t miss it! <a href="https://events.istio.io/istiocon-2021/">Registration</a> is free. 
We look forward to seeing you at the first IstioCon!</p>Tue, 16 Feb 2021 00:00:00 +0000/v1.9/blog/2021/istiocon-2021-program/Istio Steering Committee/v1.9/blog/2021/istiocon-2021-program/IstioConIstioconferenceBetter External Authorization <h2 id="background">Background</h2> <p>Istio&rsquo;s authorization policy provides access control for services in the mesh. It is fast, powerful and a widely used feature. We have made continuous improvements to make policy more flexible since its first release in Istio 1.4, including the <a href="/v1.9/docs/tasks/security/authorization/authz-deny/"><code>DENY</code> action</a>, <a href="/v1.9/docs/tasks/security/authorization/authz-deny/">exclusion semantics</a>, <a href="/v1.9/docs/tasks/security/authorization/authz-ingress/"><code>X-Forwarded-For</code> header support</a>, <a href="/v1.9/docs/tasks/security/authorization/authz-jwt/">nested JWT claim support</a> and more. These features improve the flexibility of the authorization policy, but there are still many use cases that cannot be supported with this model, for example:</p> <ul> <li><p>You have your own in-house authorization system that cannot be easily migrated to, or cannot be easily replaced by, the authorization policy.</p></li> <li><p>You want to integrate with a 3rd-party solution (e.g. 
<a href="https://www.openpolicyagent.org/docs/latest/envoy-authorization/">Open Policy Agent</a> or <a href="https://github.com/oauth2-proxy/oauth2-proxy"><code>oauth2</code> proxy</a>) which may require use of the <a href="/v1.9/docs/reference/config/networking/envoy-filter/">low-level Envoy configuration APIs</a> in Istio, or may not be possible at all.</p></li> <li><p>Authorization policy lacks necessary semantics for your use case.</p></li> </ul> <h2 id="solution">Solution</h2> <p>In Istio 1.9, we have implemented extensibility into authorization policy by introducing a <a href="/v1.9/docs/reference/config/security/authorization-policy/#AuthorizationPolicy-Action"><code>CUSTOM</code> action</a>, which allows you to delegate the access control decision to an external authorization service.</p> <p>The <code>CUSTOM</code> action allows you to integrate Istio with an external authorization system that implements its own custom authorization logic. The following diagram shows the high level architecture of this integration:</p> <figure style="width:100%"> <div class="wrapper-with-intrinsic-ratio" style="padding-bottom:48.57376397295984%"> <a data-skipendnotes="true" href="/v1.9/blog/2021/better-external-authz/external_authz.svg" title="External Authorization Architecture"> <img class="element-to-stretch" src="/v1.9/blog/2021/better-external-authz/external_authz.svg" alt="External Authorization Architecture" /> </a> </div> <figcaption>External Authorization Architecture</figcaption> </figure> <p>At configuration time, the mesh admin configures an authorization policy with a <code>CUSTOM</code> action to enable the external authorization on a proxy (either gateway or sidecar). 
The admin should verify the external auth service is up and running.</p> <p>At runtime,</p> <ol> <li><p>A request is intercepted by the proxy, and the proxy will send check requests to the external auth service, as configured by the user in the authorization policy.</p></li> <li><p>The external auth service will make the decision whether to allow it or not.</p></li> <li><p>If allowed, the request will continue and will be enforced by any local authorization defined by <code>ALLOW</code>/<code>DENY</code> action.</p></li> <li><p>If denied, the request will be rejected immediately.</p></li> </ol> <p>Let&rsquo;s look at an example authorization policy with the <code>CUSTOM</code> action:</p> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' >apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: ext-authz namespace: istio-system spec: # The selector applies to the ingress gateway in the istio-system namespace. selector: matchLabels: app: istio-ingressgateway # The action &#34;CUSTOM&#34; delegates the access control to an external authorizer, this is different from # the ALLOW/DENY action that enforces the access control right inside the proxy. action: CUSTOM # The provider specifies the name of the external authorizer defined in the meshconfig, which tells where and how to # talk to the external auth service. We will cover this more later. provider: name: &#34;my-ext-authz-service&#34; # The rule specifies that the access control is triggered only if the request path has the prefix &#34;/admin/&#34;. # This allows you to easily enable or disable the external authorization based on the requests, avoiding the external # check request if it is not needed. 
rules: - to: - operation: paths: [&#34;/admin/*&#34;] </code></pre> <p>It refers to a provider called <code>my-ext-authz-service</code>, which is defined in the mesh config:</p> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' >extensionProviders: # The name &#34;my-ext-authz-service&#34; is referred to by the authorization policy in its provider field. - name: &#34;my-ext-authz-service&#34; # The &#34;envoyExtAuthzGrpc&#34; field specifies that the external authorization service is implemented by the Envoy # ext-authz filter gRPC API. The other supported type is the Envoy ext-authz filter HTTP API. # See more in https://www.envoyproxy.io/docs/envoy/v1.16.2/intro/arch_overview/security/ext_authz_filter. envoyExtAuthzGrpc: # The service and port specify the address of the external auth service; &#34;ext-authz.istio-system.svc.cluster.local&#34; # means the service is deployed in the mesh. It can also be defined out of the mesh or even inside the pod as a separate # container. service: &#34;ext-authz.istio-system.svc.cluster.local&#34; port: 9000 </code></pre> <p>An authorization policy with the <a href="/v1.9/docs/reference/config/security/authorization-policy/#AuthorizationPolicy-Action"><code>CUSTOM</code> action</a> enables external authorization at runtime. It can be configured to trigger the external authorization check conditionally, based on the request, using the same rules that you have already been using with other actions.</p> <p>The external authorization service is currently defined in the <a href="/v1.9/docs/reference/config/istio.mesh.v1alpha1/#MeshConfig-ExtensionProvider"><code>meshconfig</code> API</a> and referred to by its name. It can be deployed in the mesh with or without a proxy.
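</p> <p>For reference, the other supported provider type mentioned in the comments above, the Envoy ext-authz filter HTTP API, is configured in the same way. The sketch below is illustrative only: the provider name and service address are assumptions, and with the HTTP API the external service allows a request by returning HTTP 200 and denies it with any other status:</p>

```yaml
extensionProviders:
# Hypothetical provider name, referenced from the CUSTOM policy's provider field.
- name: "my-ext-authz-http-service"
  # The "envoyExtAuthzHttp" field selects the Envoy ext-authz filter HTTP API.
  envoyExtAuthzHttp:
    # Assumed address of an HTTP service implementing the check API.
    service: "ext-authz-http.istio-system.svc.cluster.local"
    port: 8000
```

<p>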
If deployed with a proxy, you can additionally use <code>PeerAuthentication</code> to enable mTLS between the proxy and your external authorization service.</p> <p>The <code>CUSTOM</code> action is currently in the <strong>experimental stage</strong>; the API might change in a non-backward compatible way based on user feedback. The rule currently does not support authentication related fields (e.g. source principal or JWT claim) and only one provider is allowed for a given workload, but you can still use different providers on different workloads.</p> <p>For more information, please see the <a href="https://docs.google.com/document/d/1V4mCQCw7mlGp0zSQQXYoBdbKMDnkPOjeyUb85U07iSI/edit#">Better External Authorization design doc</a>.</p> <h2 id="example-with-opa">Example with OPA</h2> <p>In this section, we will demonstrate using the <code>CUSTOM</code> action with the Open Policy Agent as the external authorizer on the ingress gateway. We will conditionally enable the external authorization on all paths except <code>/ip</code>.</p> <p>You can also refer to the <a href="/v1.9/docs/tasks/security/authorization/authz-custom/">external authorization task</a> for a more basic introduction that uses a sample <code>ext-authz</code> server.</p> <h3 id="create-the-example-opa-policy">Create the example OPA policy</h3> <p>Run the following command to create an OPA policy that allows the request if the prefix of the path is matched with the claim &ldquo;path&rdquo; (base64 encoded) in the JWT token:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ cat &gt; policy.rego &lt;&lt;EOF package envoy.authz import input.attributes.request.http as http_request default allow = false token = {&#34;valid&#34;: valid, &#34;payload&#34;: payload} { [_, encoded] := split(http_request.headers.authorization, &#34; &#34;) [valid, _, payload] := io.jwt.decode_verify(encoded, {&#34;secret&#34;: &#34;secret&#34;}) } allow { is_token_valid action_allowed } is_token_valid {
token.valid now := time.now_ns() / 1000000000 token.payload.nbf &lt;= now now &lt; token.payload.exp } action_allowed { startswith(http_request.path, base64url.decode(token.payload.path)) } EOF $ kubectl create secret generic opa-policy --from-file policy.rego </code></pre> <h3 id="deploy-httpbin-and-opa">Deploy httpbin and OPA</h3> <p>Enable the sidecar injection:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl label ns default istio-injection=enabled </code></pre> <p>Run the following command to deploy the example application httpbin and OPA. The OPA could be deployed either as a separate container in the httpbin pod or completely in a separate pod:</p> <div id="tabset-blog-2021-better-external-authz-1" role="tablist" class="tabset"> <div class="tab-strip" data-category-name="opa-deploy"><button aria-selected="true" data-category-value="opa-same" aria-controls="tabset-blog-2021-better-external-authz-1-0-panel" id="tabset-blog-2021-better-external-authz-1-0-tab" role="tab"><span>Deploy OPA in the same pod</span> </button><button tabindex="-1" data-category-value="opa-standalone" aria-controls="tabset-blog-2021-better-external-authz-1-1-panel" id="tabset-blog-2021-better-external-authz-1-1-tab" role="tab"><span>Deploy OPA in a separate pod</span> </button></div> <div class="tab-content"><div id="tabset-blog-2021-better-external-authz-1-0-panel" role="tabpanel" tabindex="0" aria-labelledby="tabset-blog-2021-better-external-authz-1-0-tab"><pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl apply -f - &lt;&lt;EOF apiVersion: v1 kind: Service metadata: name: httpbin-with-opa labels: app: httpbin-with-opa service: httpbin-with-opa spec: ports: - name: http port: 8000 targetPort: 80 selector: app: httpbin-with-opa --- # Define the service entry for the local OPA service on port 9191. 
apiVersion: networking.istio.io/v1alpha3 kind: ServiceEntry metadata: name: local-opa-grpc spec: hosts: - &#34;local-opa-grpc.local&#34; endpoints: - address: &#34;127.0.0.1&#34; ports: - name: grpc number: 9191 protocol: GRPC resolution: STATIC --- kind: Deployment apiVersion: apps/v1 metadata: name: httpbin-with-opa labels: app: httpbin-with-opa spec: replicas: 1 selector: matchLabels: app: httpbin-with-opa template: metadata: labels: app: httpbin-with-opa spec: containers: - image: docker.io/kennethreitz/httpbin imagePullPolicy: IfNotPresent name: httpbin ports: - containerPort: 80 - name: opa image: openpolicyagent/opa:latest-envoy securityContext: runAsUser: 1111 volumeMounts: - readOnly: true mountPath: /policy name: opa-policy args: - &#34;run&#34; - &#34;--server&#34; - &#34;--addr=localhost:8181&#34; - &#34;--diagnostic-addr=0.0.0.0:8282&#34; - &#34;--set=plugins.envoy_ext_authz_grpc.addr=:9191&#34; - &#34;--set=plugins.envoy_ext_authz_grpc.query=data.envoy.authz.allow&#34; - &#34;--set=decision_logs.console=true&#34; - &#34;--ignore=.*&#34; - &#34;/policy/policy.rego&#34; livenessProbe: httpGet: path: /health?plugins scheme: HTTP port: 8282 initialDelaySeconds: 5 periodSeconds: 5 readinessProbe: httpGet: path: /health?plugins scheme: HTTP port: 8282 initialDelaySeconds: 5 periodSeconds: 5 volumes: - name: proxy-config configMap: name: proxy-config - name: opa-policy secret: secretName: opa-policy EOF </code></pre> </div><div hidden id="tabset-blog-2021-better-external-authz-1-1-panel" role="tabpanel" tabindex="0" aria-labelledby="tabset-blog-2021-better-external-authz-1-1-tab"><pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl apply -f - &lt;&lt;EOF apiVersion: v1 kind: Service metadata: name: opa labels: app: opa spec: ports: - name: grpc port: 9191 targetPort: 9191 selector: app: opa --- kind: Deployment apiVersion: apps/v1 metadata: name: opa labels: app: opa spec: replicas: 1 selector: matchLabels: app: opa template: 
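# Same OPA container, arguments and policy volume as the sidecar variant,
# but exposed to the mesh through the opa service on gRPC port 9191.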
metadata: labels: app: opa spec: containers: - name: opa image: openpolicyagent/opa:latest-envoy securityContext: runAsUser: 1111 volumeMounts: - readOnly: true mountPath: /policy name: opa-policy args: - &#34;run&#34; - &#34;--server&#34; - &#34;--addr=localhost:8181&#34; - &#34;--diagnostic-addr=0.0.0.0:8282&#34; - &#34;--set=plugins.envoy_ext_authz_grpc.addr=:9191&#34; - &#34;--set=plugins.envoy_ext_authz_grpc.query=data.envoy.authz.allow&#34; - &#34;--set=decision_logs.console=true&#34; - &#34;--ignore=.*&#34; - &#34;/policy/policy.rego&#34; ports: - containerPort: 9191 livenessProbe: httpGet: path: /health?plugins scheme: HTTP port: 8282 initialDelaySeconds: 5 periodSeconds: 5 readinessProbe: httpGet: path: /health?plugins scheme: HTTP port: 8282 initialDelaySeconds: 5 periodSeconds: 5 volumes: - name: proxy-config configMap: name: proxy-config - name: opa-policy secret: secretName: opa-policy EOF </code></pre> <p>Deploy the httpbin as well:</p> <div><a data-skipendnotes='true' style='display:none' href='https://raw.githubusercontent.com/istio/istio/release-1.9/samples/httpbin/httpbin.yaml'>Zip</a><pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl apply -f @samples/httpbin/httpbin.yaml@ </code></pre></div> </div></div> </div> <h3 id="define-external-authorizer">Define external authorizer</h3> <p>Run the following command to edit the <code>meshconfig</code>:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl edit configmap istio -n istio-system </code></pre> <p>Add the following <code>extensionProviders</code> to the <code>meshconfig</code>:</p> <div id="tabset-blog-2021-better-external-authz-2" role="tablist" class="tabset"> <div class="tab-strip" data-category-name="opa-deploy"><button aria-selected="true" data-category-value="opa-same" aria-controls="tabset-blog-2021-better-external-authz-2-0-panel" id="tabset-blog-2021-better-external-authz-2-0-tab" role="tab"><span>Deploy OPA in the 
same pod</span> </button><button tabindex="-1" data-category-value="opa-standalone" aria-controls="tabset-blog-2021-better-external-authz-2-1-panel" id="tabset-blog-2021-better-external-authz-2-1-tab" role="tab"><span>Deploy OPA in a separate pod</span> </button></div> <div class="tab-content"><div id="tabset-blog-2021-better-external-authz-2-0-panel" role="tabpanel" tabindex="0" aria-labelledby="tabset-blog-2021-better-external-authz-2-0-tab"><pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' >apiVersion: v1 data: mesh: |- # Add the following contents: extensionProviders: - name: &#34;opa.local&#34; envoyExtAuthzGrpc: service: &#34;local-opa-grpc.local&#34; port: &#34;9191&#34; </code></pre> </div><div hidden id="tabset-blog-2021-better-external-authz-2-1-panel" role="tabpanel" tabindex="0" aria-labelledby="tabset-blog-2021-better-external-authz-2-1-tab"><pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' >apiVersion: v1 data: mesh: |- # Add the following contents: extensionProviders: - name: &#34;opa.default&#34; envoyExtAuthzGrpc: service: &#34;opa.default.svc.cluster.local&#34; port: &#34;9191&#34; </code></pre> </div></div> </div> <h3 id="create-an-authorizationpolicy-with-a-custom-action">Create an AuthorizationPolicy with a CUSTOM action</h3> <p>Run the following command to create the authorization policy that enables the external authorization on all paths except <code>/ip</code>:</p> <div id="tabset-blog-2021-better-external-authz-3" role="tablist" class="tabset"> <div class="tab-strip" data-category-name="opa-deploy"><button aria-selected="true" data-category-value="opa-same" aria-controls="tabset-blog-2021-better-external-authz-3-0-panel" id="tabset-blog-2021-better-external-authz-3-0-tab" role="tab"><span>Deploy OPA in the same pod</span> </button><button tabindex="-1" data-category-value="opa-standalone" aria-controls="tabset-blog-2021-better-external-authz-3-1-panel" 
id="tabset-blog-2021-better-external-authz-3-1-tab" role="tab"><span>Deploy OPA in a separate pod</span> </button></div> <div class="tab-content"><div id="tabset-blog-2021-better-external-authz-3-0-panel" role="tabpanel" tabindex="0" aria-labelledby="tabset-blog-2021-better-external-authz-3-0-tab"><pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl apply -f - &lt;&lt;EOF apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: httpbin-opa spec: selector: matchLabels: app: httpbin-with-opa action: CUSTOM provider: name: &#34;opa.local&#34; rules: - to: - operation: notPaths: [&#34;/ip&#34;] EOF </code></pre> </div><div hidden id="tabset-blog-2021-better-external-authz-3-1-panel" role="tabpanel" tabindex="0" aria-labelledby="tabset-blog-2021-better-external-authz-3-1-tab"><pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl apply -f - &lt;&lt;EOF apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: httpbin-opa spec: selector: matchLabels: app: httpbin action: CUSTOM provider: name: &#34;opa.default&#34; rules: - to: - operation: notPaths: [&#34;/ip&#34;] EOF </code></pre> </div></div> </div> <h3 id="test-the-opa-policy">Test the OPA policy</h3> <ol> <li><p>Create a client pod to send the request:</p> <div><a data-skipendnotes='true' style='display:none' href='https://raw.githubusercontent.com/istio/istio/release-1.9/samples/sleep/sleep.yaml'>Zip</a><pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl apply -f @samples/sleep/sleep.yaml@ $ export SLEEP_POD=$(kubectl get pod -l app=sleep -o jsonpath={.items..metadata.name}) </code></pre></div></li> <li><p>Use a test JWT token signed by the OPA:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ export 
TOKEN_PATH_HEADERS=&#34;eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJwYXRoIjoiTDJobFlXUmxjbk09IiwibmJmIjoxNTAwMDAwMDAwLCJleHAiOjE5MDAwMDAwMDB9.9yl8LcZdq-5UpNLm0Hn0nnoBHXXAnK4e8RSl9vn6l98&#34; </code></pre> <p>The test JWT token has the following claims:</p> <pre><code class='language-json' data-expandlinks='true' data-repo='istio' >{ &#34;path&#34;: &#34;L2hlYWRlcnM=&#34;, &#34;nbf&#34;: 1500000000, &#34;exp&#34;: 1900000000 } </code></pre> <p>The <code>path</code> claim has value <code>L2hlYWRlcnM=</code>, which is the base64 encoding of <code>/headers</code>.</p></li> <li><p>Send a request to path <code>/headers</code> without a token. This should be rejected with 403 because there is no JWT token:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl exec ${SLEEP_POD} -c sleep -- curl http://httpbin-with-opa:8000/headers -s -o /dev/null -w &#34;%{http_code}\n&#34; 403 </code></pre></li> <li><p>Send a request to path <code>/get</code> with a valid token. This should be rejected with 403 because the path <code>/get</code> does not match the token&rsquo;s path <code>/headers</code>:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl exec ${SLEEP_POD} -c sleep -- curl http://httpbin-with-opa:8000/get -H &#34;Authorization: Bearer $TOKEN_PATH_HEADERS&#34; -s -o /dev/null -w &#34;%{http_code}\n&#34; 403 </code></pre></li> <li><p>Send a request to path <code>/headers</code> with a valid token. This should be allowed with 200 because the path matches the token:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl exec ${SLEEP_POD} -c sleep -- curl http://httpbin-with-opa:8000/headers -H &#34;Authorization: Bearer $TOKEN_PATH_HEADERS&#34; -s -o /dev/null -w &#34;%{http_code}\n&#34; 200 </code></pre></li> <li><p>Send a request to path <code>/ip</code> without a token. 
This should be allowed with 200 because the path <code>/ip</code> is excluded from authorization:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl exec ${SLEEP_POD} -c sleep -- curl http://httpbin-with-opa:8000/ip -s -o /dev/null -w &#34;%{http_code}\n&#34; 200 </code></pre></li> <li><p>Check the proxy and OPA logs to confirm the result.</p></li> </ol> <h2 id="summary">Summary</h2> <p>In Istio 1.9, the <code>CUSTOM</code> action in the authorization policy allows you to easily integrate Istio with any external authorization system with the following benefits:</p> <ul> <li><p>First-class support in the authorization policy API</p></li> <li><p>Ease of use: define the external authorizer with just a URL and enable it with the authorization policy; no more hassle with the <code>EnvoyFilter</code> API</p></li> <li><p>Conditional triggering, allowing improved performance</p></li> <li><p>Support for various deployment types of the external authorizer:</p> <ul> <li><p>A normal service and pod with or without a proxy</p></li> <li><p>Inside the workload pod as a separate container</p></li> <li><p>Outside the mesh</p></li> </ul></li> </ul> <p>We&rsquo;re working to promote this feature to a more stable stage in upcoming versions and welcome your feedback at <a href="https://discuss.istio.io/c/security/">discuss.istio.io</a>.</p> <h2 id="acknowledgements">Acknowledgements</h2> <p>Thanks to <code>Craig Box</code>, <code>Christian Posta</code> and <code>Limin Wang</code> for reviewing drafts of this blog.</p>Tue, 09 Feb 2021 00:00:00 +0000/v1.9/blog/2021/better-external-authz/Yangmin Zhu (Google)/v1.9/blog/2021/better-external-authz/authorizationaccess controlopaoauth2Proxying legacy services using Istio egress gateways <p>At <a href="https://pan-net.cloud/aboutus">Deutsche Telekom Pan-Net</a>, we have embraced Istio as the umbrella to cover our services. 
Unfortunately, there are services which have not yet been migrated to Kubernetes, or cannot be.</p> <p>We can set Istio up as a proxy service for these upstream services. This allows us to benefit from capabilities like authorization/authentication, traceability and observability, even while legacy services stand as they are.</p> <p>At the end of this article there is a hands-on exercise where you can simulate the scenario. In the exercise, an upstream service hosted at <a href="https://httpbin.org">https://httpbin.org</a> will be proxied by an Istio egress gateway.</p> <p>If you are familiar with Istio, one of the methods offered to connect to upstream services is through an <a href="/v1.9/docs/tasks/traffic-management/egress/egress-gateway/">egress gateway</a>.</p> <p>You can deploy one to control all the upstream traffic or you can deploy multiple in order to have fine-grained control and satisfy the <a href="https://en.wikipedia.org/wiki/Single-responsibility_principle">single-responsibility principle</a> as this picture shows:</p> <figure style="width:75%"> <div class="wrapper-with-intrinsic-ratio" style="padding-bottom:45.34%"> <a data-skipendnotes="true" href="/v1.9/blog/2020/proxying-legacy-services-using-egress-gateways/proxying-legacy-services-using-egress-gateways-overview.svg" title="Overview multiple Egress Gateways"> <img class="element-to-stretch" src="/v1.9/blog/2020/proxying-legacy-services-using-egress-gateways/proxying-legacy-services-using-egress-gateways-overview.svg" alt="Overview multiple Egress Gateways" /> </a> </div> <figcaption>Overview multiple Egress Gateways</figcaption> </figure> <p>With this model, one egress gateway is in charge of exactly one upstream service.</p> <p>Although the Operator spec allows you to deploy multiple egress gateways, the manifest can become unmanageable:</p> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' >apiVersion: install.istio.io/v1alpha1 kind: IstioOperator [...] 
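# One entry per upstream service; the list below grows linearly
# and quickly becomes hard to maintain.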
spec: egressGateways: - name: egressgateway-1 enabled: true - name: egressgateway-2 enabled: true [egressgateway-3, egressgateway-4, ...] - name: egressgateway-N enabled: true [...] </code></pre> <p>A further benefit of decoupling egress gateways from the Operator manifest is that you can set up custom readiness probes to keep both services (the gateway and the upstream service) aligned.</p> <p>You can also inject OPA as a sidecar into the pod to perform authorization with complex rules (<a href="https://github.com/open-policy-agent/opa-envoy-plugin">OPA envoy plugin</a>).</p> <figure style="width:75%"> <div class="wrapper-with-intrinsic-ratio" style="padding-bottom:45.34%"> <a data-skipendnotes="true" href="/v1.9/blog/2020/proxying-legacy-services-using-egress-gateways/proxying-legacy-services-using-egress-gateways-authz.svg" title="Authorization with OPA and `healthcheck` to external"> <img class="element-to-stretch" src="/v1.9/blog/2020/proxying-legacy-services-using-egress-gateways/proxying-legacy-services-using-egress-gateways-authz.svg" alt="Authorization with OPA and `healthcheck` to upstream service" /> </a> </div> <figcaption>Authorization with OPA and `healthcheck` to external</figcaption> </figure> <p>As you can see, your possibilities increase and Istio becomes very extensible.</p> <p>Let&rsquo;s look at how you can implement this pattern.</p> <h2 id="solution">Solution</h2> <p>There are several ways to perform this task, but here you will find how to define multiple Operators and deploy the generated resources.</p> <div> <aside class="callout quote"> <div class="type"> <svg class="large-icon"><use xlink:href="/v1.9/img/icons.svg#callout-quote"/></svg> </div> <div class="content">Yes! <code>Istio 1.8.0</code> introduced the possibility to have fine-grained control over the objects that Operator deploys. This gives you the opportunity to patch them as you wish. 
Exactly what you need to proxy legacy services using Istio egress gateways.</div> </aside> </div> <p>In the following section you will deploy an egress gateway to connect to an upstream service: <code>httpbin</code> (<a href="https://httpbin.org/">https://httpbin.org/</a>)</p> <p>At the end, you will have:</p> <figure style="width:75%"> <div class="wrapper-with-intrinsic-ratio" style="padding-bottom:45.34%"> <a data-skipendnotes="true" href="/v1.9/blog/2020/proxying-legacy-services-using-egress-gateways/proxying-legacy-services-using-egress-gateways-communication.svg" title="Communication"> <img class="element-to-stretch" src="/v1.9/blog/2020/proxying-legacy-services-using-egress-gateways/proxying-legacy-services-using-egress-gateways-communication.svg" alt="Communication" /> </a> </div> <figcaption>Communication</figcaption> </figure> <h2 id="hands-on">Hands on</h2> <h3 id="prerequisites">Prerequisites</h3> <ul> <li><a href="https://kind.sigs.k8s.io/docs/user/quick-start/">kind</a> (Kubernetes-in-Docker - perfect for local development)</li> <li><a href="/v1.9/docs/setup/getting-started/#download">istioctl</a></li> </ul> <h4 id="kind">Kind</h4> <div> <aside class="callout warning"> <div class="type"> <svg class="large-icon"><use xlink:href="/v1.9/img/icons.svg#callout-warning"/></svg> </div> <div class="content">If you use <code>kind</code>, do not forget to set up <code>service-account-issuer</code> and <code>service-account-signing-key-file</code> as described below. 
Otherwise, Istio may not install correctly.</div> </aside> </div> <p>Save this as <code>config.yaml</code>.</p> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' >kind: Cluster apiVersion: kind.x-k8s.io/v1alpha4 kubeadmConfigPatches: - | apiVersion: kubeadm.k8s.io/v1beta2 kind: ClusterConfiguration metadata: name: config apiServer: extraArgs: &#34;service-account-issuer&#34;: &#34;kubernetes.default.svc&#34; &#34;service-account-signing-key-file&#34;: &#34;/etc/kubernetes/pki/sa.key&#34; </code></pre> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kind create cluster --name &lt;my-cluster-name&gt; --config config.yaml </code></pre> <p>Where <code>&lt;my-cluster-name&gt;</code> is the name for the cluster.</p> <h4 id="istio-operator-with-istioctl">Istio Operator with Istioctl</h4> <p>Install the Operator</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ istioctl operator init --watchedNamespaces=istio-operator </code></pre> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl create ns istio-system </code></pre> <p>Save this as <code>operator.yaml</code>:</p> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' >apiVersion: install.istio.io/v1alpha1 kind: IstioOperator metadata: name: istio-operator namespace: istio-operator spec: profile: default tag: 1.8.0 meshConfig: accessLogFile: /dev/stdout outboundTrafficPolicy: mode: REGISTRY_ONLY </code></pre> <div> <aside class="callout tip"> <div class="type"><svg class="large-icon"><use xlink:href="/v1.9/img/icons.svg#callout-tip"/></svg></div> <div class="content"><code>outboundTrafficPolicy.mode: REGISTRY_ONLY</code> is used to block all external communications which are not specified by a <code>ServiceEntry</code> resource.</div> </aside> </div> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl apply -f operator.yaml </code></pre> <h3 
id="deploy-egress-gateway">Deploy Egress Gateway</h3> <p>The steps for this task assume:</p> <ul> <li>The service is installed under the namespace: <code>httpbin</code>.</li> <li>The service name is: <code>http-egress</code>.</li> </ul> <p>Istio 1.8 introduced the ability to apply overlay configuration, giving fine-grained control over the created resources.</p> <p>Save this as <code>egress.yaml</code>:</p> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' >apiVersion: install.istio.io/v1alpha1 kind: IstioOperator spec: profile: empty tag: 1.8.0 namespace: httpbin components: egressGateways: - name: httpbin-egress enabled: true label: app: istio-egressgateway istio: egressgateway custom-egress: httpbin-egress k8s: overlays: - kind: Deployment name: httpbin-egress patches: - path: spec.template.spec.containers[0].readinessProbe value: failureThreshold: 30 exec: command: - /bin/sh - -c - curl http://localhost:15021/healthz/ready &amp;&amp; curl https://httpbin.org/status/200 initialDelaySeconds: 1 periodSeconds: 2 successThreshold: 1 timeoutSeconds: 1 values: gateways: istio-egressgateway: runAsRoot: true </code></pre> <div> <aside class="callout tip"> <div class="type"><svg class="large-icon"><use xlink:href="/v1.9/img/icons.svg#callout-tip"/></svg></div> <div class="content">Notice the block under <code>overlays</code>. You are patching the default <code>egressgateway</code> to deploy only that component with the new <code>readinessProbe</code>.</div> </aside> </div> <p>Create the namespace where you will install the egress gateway:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl create ns httpbin </code></pre> <p>As described in the <a href="/v1.9/docs/setup/install/istioctl/#customize-kubernetes-settings">documentation</a>, you can deploy several Operator resources. 
However, they have to be pre-parsed and then applied to the cluster.</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ istioctl manifest generate -f egress.yaml | kubectl apply -f - </code></pre> <h3 id="istio-configuration">Istio configuration</h3> <p>Now you will configure Istio to allow connections to the upstream service at <a href="https://httpbin.org">https://httpbin.org</a>.</p> <h4 id="certificate-for-tls">Certificate for TLS</h4> <p>You need a certificate to make a secure connection from outside the cluster to your egress service.</p> <p>How to generate a certificate is explained in the <a href="/v1.9/docs/tasks/traffic-management/ingress/secure-ingress/#generate-client-and-server-certificates-and-keys">Istio ingress documentation</a>.</p> <p>Create and apply one to be used at the end of this article to access the service from outside the cluster (<code>&lt;my-proxied-service-hostname&gt;</code>):</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl create -n istio-system secret tls &lt;my-secret-name&gt; --key=&lt;key&gt; --cert=&lt;cert&gt; </code></pre> <p>Where <code>&lt;my-secret-name&gt;</code> is the name used later for the <code>Gateway</code> resource. <code>&lt;key&gt;</code> and <code>&lt;cert&gt;</code> are the files for the certificate. 
</p> <div> <aside class="callout tip"> <div class="type"><svg class="large-icon"><use xlink:href="/v1.9/img/icons.svg#callout-tip"/></svg></div> <div class="content">You need to remember <code>&lt;my-proxied-service-hostname&gt;</code>, <code>&lt;cert&gt;</code> and <code>&lt;my-secret-name&gt;</code> because you will use them later in the article.</div> </aside> </div> <h4 id="ingress-gateway">Ingress Gateway</h4> <p>Create a <code>Gateway</code> resource to configure the ingress gateway to accept requests.</p> <div> <aside class="callout warning"> <div class="type"> <svg class="large-icon"><use xlink:href="/v1.9/img/icons.svg#callout-warning"/></svg> </div> <div class="content">Make sure that only one Gateway spec matches the hostname. Istio gets confused when there are multiple Gateway definitions covering the same hostname.</div> </aside> </div> <p>An example:</p> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' >apiVersion: networking.istio.io/v1beta1 kind: Gateway metadata: name: my-ingressgateway namespace: istio-system spec: selector: istio: ingressgateway servers: - hosts: - &#34;&lt;my-proxied-service-hostname&gt;&#34; port: name: http number: 80 protocol: HTTP tls: httpsRedirect: true - port: number: 443 name: https protocol: https hosts: - &#34;&lt;my-proxied-service-hostname&gt;&#34; tls: mode: SIMPLE credentialName: &lt;my-secret-name&gt; </code></pre> <p>Where <code>&lt;my-proxied-service-hostname&gt;</code> is the hostname to access the service through the <code>my-ingressgateway</code> and <code>&lt;my-secret-name&gt;</code> is the secret which contains the certificate.</p> <h4 id="egress-gateway">Egress Gateway</h4> <p>Create another Gateway object, but this time to operate the egress gateway you have already installed:</p> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' >apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: &#34;httpbin-egress&#34; 
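# This Gateway attaches to the dedicated egress deployment through the
# selector below, not to the default ingress gateway.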
namespace: &#34;httpbin&#34; spec: selector: istio: egressgateway service.istio.io/canonical-name: &#34;httpbin-egress&#34; servers: - hosts: - &#34;&lt;my-proxied-service-hostname&gt;&#34; port: number: 80 name: http protocol: HTTP </code></pre> <p>Where <code>&lt;my-proxied-service-hostname&gt;</code> is the hostname to access through the <code>my-ingressgateway</code>.</p> <h4 id="virtual-service">Virtual Service</h4> <p>Create a <code>VirtualService</code> for three use cases:</p> <ul> <li><strong>Mesh</strong> gateway for service-to-service communications within the mesh</li> <li><strong>Ingress Gateway</strong> for the communication from outside the mesh</li> <li><strong>Egress Gateway</strong> for the communication to the upstream service</li> </ul> <div> <aside class="callout tip"> <div class="type"><svg class="large-icon"><use xlink:href="/v1.9/img/icons.svg#callout-tip"/></svg></div> <div class="content">Mesh and Ingress Gateway will share the same specification. It will redirect the traffic to your egress gateway service.</div> </aside> </div> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' >apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: &#34;httpbin-egress&#34; namespace: &#34;httpbin&#34; spec: hosts: - &#34;&lt;my-proxied-service-hostname&gt;&#34; gateways: - mesh - &#34;istio-system/my-ingressgateway&#34; - &#34;httpbin/httpbin-egress&#34; http: - match: - gateways: - &#34;istio-system/my-ingressgateway&#34; - mesh uri: prefix: &#34;/&#34; route: - destination: host: &#34;httpbin-egress.httpbin.svc.cluster.local&#34; port: number: 80 - match: - gateways: - &#34;httpbin/httpbin-egress&#34; uri: prefix: &#34;/&#34; route: - destination: host: &#34;httpbin.org&#34; subset: &#34;http-egress-subset&#34; port: number: 443 </code></pre> <p>Where <code>&lt;my-proxied-service-hostname&gt;</code> is the hostname to access through the <code>my-ingressgateway</code>.</p> <h4 id="service-entry">Service 
Entry</h4> <p>Create a <code>ServiceEntry</code> to allow communication to the upstream service:</p> <div> <aside class="callout tip"> <div class="type"><svg class="large-icon"><use xlink:href="/v1.9/img/icons.svg#callout-tip"/></svg></div> <div class="content">Notice that the port is configured for the TLS protocol.</div> </aside> </div> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' >apiVersion: networking.istio.io/v1alpha3 kind: ServiceEntry metadata: name: &#34;httpbin-egress&#34; namespace: &#34;httpbin&#34; spec: hosts: - &#34;httpbin.org&#34; location: MESH_EXTERNAL ports: - number: 443 name: https protocol: TLS resolution: DNS </code></pre> <h4 id="destination-rule">Destination Rule</h4> <p>Create a <code>DestinationRule</code> to allow TLS origination for egress traffic as explained in the <a href="/v1.9/docs/tasks/traffic-management/egress/egress-tls-origination/#tls-origination-for-egress-traffic">documentation</a>.</p> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' >apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: &#34;httpbin-egress&#34; namespace: &#34;httpbin&#34; spec: host: &#34;httpbin.org&#34; subsets: - name: &#34;http-egress-subset&#34; trafficPolicy: loadBalancer: simple: ROUND_ROBIN portLevelSettings: - port: number: 443 tls: mode: SIMPLE </code></pre> <h4 id="peer-authentication">Peer Authentication</h4> <p>To secure service-to-service communication, you need to enforce mTLS:</p> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' >apiVersion: &#34;security.istio.io/v1beta1&#34; kind: &#34;PeerAuthentication&#34; metadata: name: &#34;httpbin-egress&#34; namespace: &#34;httpbin&#34; spec: mtls: mode: STRICT </code></pre> <h3 id="test">Test</h3> <p>Verify that your objects were all specified correctly:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ istioctl analyze --all-namespaces </code></pre> <h4 
id="external-access">External access</h4> <p>Test the egress gateway from outside the cluster by forwarding the <code>ingressgateway</code> service&rsquo;s port and calling the service:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl -n istio-system port-forward svc/istio-ingressgateway 15443:443 </code></pre> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ curl -vvv -k -HHost:&lt;my-proxied-service-hostname&gt; --resolve &#34;&lt;my-proxied-service-hostname&gt;:15443:127.0.0.1&#34; --cacert &lt;cert&gt; &#34;https://&lt;my-proxied-service-hostname&gt;:15443/status/200&#34; </code></pre> <p>Where <code>&lt;my-proxied-service-hostname&gt;</code> is the hostname to access through the <code>my-ingressgateway</code> and <code>&lt;cert&gt;</code> is the certificate defined for the <code>ingressgateway</code> object. The certificate is needed because <code>tls.mode: SIMPLE</code> <a href="/v1.9/docs/tasks/traffic-management/ingress/secure-ingress/">terminates TLS</a> at the gateway using that certificate.</p> <h4 id="service-to-service-access">Service-to-service access</h4> <p>Test the egress gateway from inside the cluster by deploying the sleep service. 
This is useful when you design failover.</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl label namespace httpbin istio-injection=enabled --overwrite </code></pre> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl apply -n httpbin -f https://raw.githubusercontent.com/istio/istio/release-1.9/samples/sleep/sleep.yaml </code></pre> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl exec -n httpbin &#34;$(kubectl get pod -n httpbin -l app=sleep -o jsonpath={.items..metadata.name})&#34; -- curl -vvv http://&lt;my-proxied-service-hostname&gt;/status/200 </code></pre> <p>Where <code>&lt;my-proxied-service-hostname&gt;</code> is the hostname to access through the <code>my-ingressgateway</code>.</p> <div> <aside class="callout tip"> <div class="type"><svg class="large-icon"><use xlink:href="/v1.9/img/icons.svg#callout-tip"/></svg></div> <div class="content">Notice that <code>http</code> (and not <code>https</code>) is the protocol used for service-to-service communication. This is because Istio handles <code>TLS</code> itself. Developers no longer need to worry about certificate management. <strong>Fancy!</strong></div> </aside> </div> <div> <aside class="callout quote"> <div class="type"> <svg class="large-icon"><use xlink:href="/v1.9/img/icons.svg#callout-quote"/></svg> </div> <div class="content">Eat, Sleep, Rave, <strong>REPEAT!</strong></div> </aside> </div> <p>Now it is time to create a second, third and fourth egress gateway pointing to other upstream services.</p> <h2 id="final-thoughts">Final thoughts</h2> <div> <aside class="callout quote"> <div class="type"> <svg class="large-icon"><use xlink:href="/v1.9/img/icons.svg#callout-quote"/></svg> </div> <div class="content">Is the juice worth the squeeze?</div> </aside> </div> <p>Istio might seem complex to configure. 
But it is definitely worthwhile, due to the huge set of benefits it brings to your services (with an extra <strong>Olé!</strong> for Kiali).</p> <p>The way Istio is developed allows us, with minimal effort, to satisfy uncommon requirements like the one presented in this article.</p> <p>To finish, I just wanted to point out that Istio, as a good cloud native technology, does not require a large team to maintain. For example, our current team is composed of 3 engineers.</p> <p>To discuss more about Istio and its possibilities, please contact one of us:</p> <ul> <li><a href="https://twitter.com/antonio_berben">Antonio Berben</a></li> <li><a href="https://www.linkedin.com/in/piotr-ciazynski">Piotr Ciążyński</a></li> <li><a href="https://www.linkedin.com/in/patlevic">Kristián Patlevič</a></li> </ul>Wed, 16 Dec 2020 00:00:00 +0000/v1.9/blog/2020/proxying-legacy-services-using-egress-gateways/Antonio Berben (Deutsche Telekom - PAN-NET)/v1.9/blog/2020/proxying-legacy-services-using-egress-gateways/configurationegressgatewayexternalserviceProxy protocol on AWS NLB and Istio ingress gateway <p>This blog presents my recent experience configuring and enabling the proxy protocol with a stack consisting of an AWS NLB and the Istio ingress gateway. The <a href="https://www.haproxy.com/blog/haproxy/proxy-protocol/">Proxy Protocol</a> was designed to chain proxies and reverse-proxies without losing the client information. The proxy protocol removes the need for infrastructure changes or <code>NATing</code> firewalls, and offers the benefits of being protocol agnostic and providing good scalability. Additionally, we also enable the <code>X-Forwarded-For</code> HTTP header in the deployment to make the client IP address easy to read. In this blog, traffic management of Istio ingress is shown with an httpbin service on ports 80 and 443 to demonstrate the use of proxy protocol. 
Note that both v1 and v2 of the proxy protocol work for the purpose of this example, but because the AWS NLB currently only supports v2, proxy protocol v2 is used in the rest of this blog by default. The following image shows the use of proxy protocol v2 with an AWS NLB.</p> <div> <aside class="callout tip"> <div class="type"><svg class="large-icon"><use xlink:href="/v1.9/img/icons.svg#callout-tip"/></svg></div> <div class="content"><p>A receiver may be configured to support both version 1 and version 2 of the protocol. Identifying the protocol version is easy:</p> <ul> <li><p>If the incoming byte count is 16 or more and the first 13 bytes match the protocol signature block <code>\x0D\x0A\x0D\x0A\x00\x0D\x0A\x51\x55\x49\x54\x0A\x02</code>, the protocol is version 2.</p></li> <li><p>Otherwise, if the incoming byte count is 8 or more, and the 5 first characters match the <code>US-ASCII</code> representation of &ldquo;PROXY&rdquo;(<code>\x50\x52\x4F\x58\x59</code>), then the protocol must be parsed as version 1.</p></li> <li><p>Otherwise the protocol is not covered by this specification and the connection must be dropped.</p></li> </ul> </div> </aside> </div> <figure style="width:100%"> <div class="wrapper-with-intrinsic-ratio" style="padding-bottom:80.81149619611158%"> <a data-skipendnotes="true" href="/v1.9/blog/2020/show-source-ip/aws-proxy-protocol.png" title="AWS NLB portal to enable proxy protocol"> <img class="element-to-stretch" src="/v1.9/blog/2020/show-source-ip/aws-proxy-protocol.png" alt="AWS NLB portal to enable proxy protocol" /> </a> </div> <figcaption>AWS NLB portal to enable proxy protocol</figcaption> </figure> <h2 id="separate-setups-for-80-and-443">Separate setups for 80 and 443</h2> <p>Before going through the following steps, an AWS environment that is configured with the proper VPC, IAM, and Kubernetes setup is assumed.</p> <h3 id="step-1-install-istio-with-aws-nlb">Step 1: Install Istio with AWS NLB</h3> <p>The blog <a 
href="/v1.9/blog/2018/aws-nlb/">Configuring Istio Ingress with AWS NLB</a> provides detailed steps to set up AWS IAM roles and enable the usage of AWS NLB by Helm. You can also use other automation tools, such as Terraform, to achieve the same goal. In the following example, more complete configurations are shown in order to enable proxy protocol and <code>X-Forwarded-For</code> at the same time.</p> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' >apiVersion: v1 kind: Service metadata: annotations: service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: &#34;*&#34; service.beta.kubernetes.io/aws-load-balancer-type: &#34;nlb&#34; proxy.istio.io/config: &#39;{&#34;gatewayTopology&#34; : { &#34;numTrustedProxies&#34;: 2 } }&#39; labels: app: istio-ingressgateway istio: ingressgateway release: istio name: istio-ingressgateway </code></pre> <h3 id="step-2-create-proxy-protocol-envoy-filter">Step 2: Create proxy-protocol Envoy Filter</h3> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' >apiVersion: networking.istio.io/v1alpha3 kind: EnvoyFilter metadata: name: proxy-protocol namespace: istio-system spec: workloadSelector: labels: istio: ingressgateway configPatches: - applyTo: LISTENER patch: operation: MERGE value: listener_filters: - name: envoy.filters.listener.proxy_protocol - name: envoy.filters.listener.tls_inspector </code></pre> <h3 id="step-3-enable-x-forwarded-for-header">Step 3: Enable <code>X-Forwarded-For</code> header</h3> <p>This <a href="/v1.9/docs/ops/configuration/traffic-management/network-topologies/">blog</a> includes several samples of configuring Gateway Network Topology. 
In the following example, the configurations are tuned to enable <code>X-Forwarded-For</code> without any middle proxy.</p> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' >apiVersion: networking.istio.io/v1alpha3 kind: EnvoyFilter metadata: name: ingressgateway-settings namespace: istio-system spec: configPatches: - applyTo: NETWORK_FILTER match: listener: filterChain: filter: name: envoy.http_connection_manager patch: operation: MERGE value: name: envoy.http_connection_manager typed_config: &#34;@type&#34;: type.googleapis.com/envoy.config.filter.network.http_connection_manager.v2.HttpConnectionManager skip_xff_append: false use_remote_address: true xff_num_trusted_hops: 1 </code></pre> <h3 id="step-4-deploy-ingress-gateway-for-httpbin-on-port-80-and-443">Step 4: Deploy ingress gateway for httpbin on port 80 and 443</h3> <div> <aside class="callout warning"> <div class="type"> <svg class="large-icon"><use xlink:href="/v1.9/img/icons.svg#callout-warning"/></svg> </div> <div class="content">When following the <a href="/v1.9/docs/tasks/traffic-management/ingress/secure-ingress/">secure ingress setup</a>, macOS users must add an additional patch to generate certificates for TLS.</div> </aside> </div> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' >apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: httpbin-gateway spec: selector: istio: ingressgateway # use Istio default gateway implementation servers: - port: number: 80 name: http protocol: HTTP hosts: - &#34;a25fa0b4835b.elb.us-west-2.amazonaws.com&#34; --- apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: httpbin spec: hosts: - &#34;a25fa0b4835b.elb.us-west-2.amazonaws.com&#34; gateways: - httpbin-gateway http: - match: - uri: prefix: /headers route: - destination: port: number: 8000 host: httpbin </code></pre> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' >apiVersion: 
networking.istio.io/v1alpha3 kind: Gateway metadata: name: mygateway2 spec: selector: istio: ingressgateway # use istio default ingress gateway servers: - port: number: 443 name: https protocol: HTTPS tls: mode: SIMPLE credentialName: httpbin-credential # must be the same as secret hosts: - &#34;a25fa0b4835b.elb.us-west-2.amazonaws.com&#34; --- apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: httpbin spec: hosts: - &#34;a25fa0b4835b.elb.us-west-2.amazonaws.com&#34; gateways: - mygateway2 http: - match: - uri: prefix: /headers route: - destination: port: number: 8000 host: httpbin </code></pre> <h3 id="step-5-check-header-output-of-httpbin">Step 5: Check header output of httpbin</h3> <p>Check port 443 (port 80 is similar) and compare the cases with and without the proxy protocol.</p> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' >//////with proxy_protocol enabled in the stack * Trying YY.XXX.141.26... * TCP_NODELAY set * Connection failed * connect to YY.XXX.141.26 port 443 failed: Operation timed out * Trying YY.XXX.205.117...
* TCP_NODELAY set * Connected to a25fa0b4835b.elb.us-west-2.amazonaws.com (XX.YYY.205.117) port 443 (#0) * ALPN, offering h2 * ALPN, offering http/1.1 * successfully set certificate verify locations: * CAfile: new_certificates/example.com.crt CApath: none * TLSv1.2 (OUT), TLS handshake, Client hello (1): * TLSv1.2 (IN), TLS handshake, Server hello (2): * TLSv1.2 (IN), TLS handshake, Certificate (11): * TLSv1.2 (IN), TLS handshake, Server key exchange (12): * TLSv1.2 (IN), TLS handshake, Server finished (14): * TLSv1.2 (OUT), TLS handshake, Client key exchange (16): * TLSv1.2 (OUT), TLS change cipher, Change cipher spec (1): * TLSv1.2 (OUT), TLS handshake, Finished (20): * TLSv1.2 (IN), TLS change cipher, Change cipher spec (1): * TLSv1.2 (IN), TLS handshake, Finished (20): * SSL connection using TLSv1.2 / ECDHE-RSA-CHACHA20-POLY1305 * ALPN, server accepted to use h2 * Server certificate: * subject: CN=a25fa0b4835b.elb.us-west-2.amazonaws.com; O=httpbin organization * start date: Oct 29 20:39:12 2020 GMT * expire date: Oct 29 20:39:12 2021 GMT * common name: a25fa0b4835b.elb.us-west-2.amazonaws.com (matched) * issuer: O=example Inc.; CN=example.com * SSL certificate verify ok. * Using HTTP2, server supports multi-use * Connection state changed (HTTP/2 confirmed) * Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0 * Using Stream ID: 1 (easy handle 0x7fc6c8810800) &gt; GET /headers?show_env=1 HTTP/2 &gt; Host: a25fa0b4835b.elb.us-west-2.amazonaws.com &gt; User-Agent: curl/7.64.1 &gt; Accept: */* &gt; * Connection state changed (MAX_CONCURRENT_STREAMS == 2147483647)! 
&lt; HTTP/2 200 &lt; server: istio-envoy &lt; date: Thu, 29 Oct 2020 21:39:46 GMT &lt; content-type: application/json &lt; content-length: 629 &lt; access-control-allow-origin: * &lt; access-control-allow-credentials: true &lt; x-envoy-upstream-service-time: 2 &lt; { &#34;headers&#34;: { &#34;Accept&#34;: &#34;*/*&#34;, &#34;Content-Length&#34;: &#34;0&#34;, &#34;Host&#34;: &#34;a25fa0b4835b.elb.us-west-2.amazonaws.com&#34;, &#34;User-Agent&#34;: &#34;curl/7.64.1&#34;, &#34;X-B3-Sampled&#34;: &#34;0&#34;, &#34;X-B3-Spanid&#34;: &#34;74f99a1c6fc29975&#34;, &#34;X-B3-Traceid&#34;: &#34;85db86fe6aa322a074f99a1c6fc29975&#34;, &#34;X-Envoy-Attempt-Count&#34;: &#34;1&#34;, &#34;X-Envoy-Decorator-Operation&#34;: &#34;httpbin.default.svc.cluster.local:8000/headers*&#34;, &#34;X-Envoy-External-Address&#34;: &#34;XX.110.54.41&#34;, &#34;X-Forwarded-For&#34;: &#34;XX.110.54.41&#34;, &#34;X-Forwarded-Proto&#34;: &#34;https&#34;, &#34;X-Request-Id&#34;: &#34;5c3bc236-0c49-4401-b2fd-2dbfbce506fc&#34; } } * Connection #0 to host a25fa0b4835b.elb.us-west-2.amazonaws.com left intact * Closing connection 0 </code></pre> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' >//////////without proxy_protocol * Trying YY.XXX.141.26... * TCP_NODELAY set * Connection failed * connect to YY.XXX.141.26 port 443 failed: Operation timed out * Trying YY.XXX.205.117...
* TCP_NODELAY set * Connected to a25fa0b4835b.elb.us-west-2.amazonaws.com (YY.XXX.205.117) port 443 (#0) * ALPN, offering h2 * ALPN, offering http/1.1 * successfully set certificate verify locations: * CAfile: new_certificates/example.com.crt CApath: none * TLSv1.2 (OUT), TLS handshake, Client hello (1): * TLSv1.2 (IN), TLS handshake, Server hello (2): * TLSv1.2 (IN), TLS handshake, Certificate (11): * TLSv1.2 (IN), TLS handshake, Server key exchange (12): * TLSv1.2 (IN), TLS handshake, Server finished (14): * TLSv1.2 (OUT), TLS handshake, Client key exchange (16): * TLSv1.2 (OUT), TLS change cipher, Change cipher spec (1): * TLSv1.2 (OUT), TLS handshake, Finished (20): * TLSv1.2 (IN), TLS change cipher, Change cipher spec (1): * TLSv1.2 (IN), TLS handshake, Finished (20): * SSL connection using TLSv1.2 / ECDHE-RSA-CHACHA20-POLY1305 * ALPN, server accepted to use h2 * Server certificate: * subject: CN=a25fa0b4835b.elb.us-west-2.amazonaws.com; O=httpbin organization * start date: Oct 29 20:39:12 2020 GMT * expire date: Oct 29 20:39:12 2021 GMT * common name: a25fa0b4835b.elb.us-west-2.amazonaws.com (matched) * issuer: O=example Inc.; CN=example.com * SSL certificate verify ok. * Using HTTP2, server supports multi-use * Connection state changed (HTTP/2 confirmed) * Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0 * Using Stream ID: 1 (easy handle 0x7fbf8c808200) &gt; GET /headers?show_env=1 HTTP/2 &gt; Host: a25fa0b4835b.elb.us-west-2.amazonaws.com &gt; User-Agent: curl/7.64.1 &gt; Accept: */* &gt; * Connection state changed (MAX_CONCURRENT_STREAMS == 2147483647)! 
&lt; HTTP/2 200 &lt; server: istio-envoy &lt; date: Thu, 29 Oct 2020 20:44:01 GMT &lt; content-type: application/json &lt; content-length: 612 &lt; access-control-allow-origin: * &lt; access-control-allow-credentials: true &lt; x-envoy-upstream-service-time: 1 &lt; { &#34;headers&#34;: { &#34;Accept&#34;: &#34;*/*&#34;, &#34;Content-Length&#34;: &#34;0&#34;, &#34;Host&#34;: &#34;a25fa0b4835b.elb.us-west-2.amazonaws.com&#34;, &#34;User-Agent&#34;: &#34;curl/7.64.1&#34;, &#34;X-B3-Sampled&#34;: &#34;0&#34;, &#34;X-B3-Spanid&#34;: &#34;69913a6e6e949334&#34;, &#34;X-B3-Traceid&#34;: &#34;729d5da3618545da69913a6e6e949334&#34;, &#34;X-Envoy-Attempt-Count&#34;: &#34;1&#34;, &#34;X-Envoy-Decorator-Operation&#34;: &#34;httpbin.default.svc.cluster.local:8000/headers*&#34;, &#34;X-Envoy-Internal&#34;: &#34;true&#34;, &#34;X-Forwarded-For&#34;: &#34;172.16.5.30&#34;, &#34;X-Forwarded-Proto&#34;: &#34;https&#34;, &#34;X-Request-Id&#34;: &#34;299c7f8a-5f89-480a-82c9-028c76d45d84&#34; } } * Connection #0 to host a25fa0b4835b.elb.us-west-2.amazonaws.com left intact * Closing connection 0 </code></pre> <h2 id="conclusion">Conclusion</h2> <p>This blog presented the deployment of a stack consisting of an AWS NLB and an Istio ingress gateway with the proxy protocol enabled. We hope this informal, experience-based walkthrough is useful if you are interested in enabling the proxy protocol yourself.
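To make the header comparison above concrete, the following Python sketch models how a front proxy configured with <code>use_remote_address</code> and a number of trusted hops (as in the <code>numTrustedProxies</code> and <code>xff_num_trusted_hops</code> settings used earlier) picks a client address out of <code>X-Forwarded-For</code>. This is a simplified model for illustration, not Envoy source code, and the addresses are made up:

```python
# Simplified model of trusted-client-address selection from
# X-Forwarded-For. Not Envoy code; real Envoy behavior has more cases.

def trusted_client_address(remote_addr: str, xff: str, num_trusted_hops: int) -> str:
    """With use_remote_address enabled and N trusted hops, treat the
    Nth address from the right of X-Forwarded-For as the client
    address; with no trusted hops (or no XFF), use the peer address."""
    hops = [h.strip() for h in xff.split(",") if h.strip()]
    if num_trusted_hops == 0 or not hops:
        return remote_addr
    index = max(len(hops) - num_trusted_hops, 0)
    return hops[index]
```

For example, with one trusted hop, an `X-Forwarded-For` of `198.51.100.7, 172.16.5.30` yields `172.16.5.30` as the client address.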
However, note that the <code>X-Forwarded-For</code> header should be relied on only for convenient inspection of the client IP during testing, as defending against spoofed <code>X-Forwarded-For</code> headers is beyond the scope of this blog.</p> <h2 id="references">References</h2> <ul> <li><p><a href="https://docs.nginx.com/nginx/admin-guide/load-balancer/using-proxy-protocol/">NGINX proxy protocol settings</a></p></li> <li><p><a href="https://www.haproxy.com/blog/haproxy/proxy-protocol/">HAProxy proxy protocol introduction</a></p></li> </ul>Fri, 11 Dec 2020 00:00:00 +0000/v1.9/blog/2020/show-source-ip/Xinhui Li (Salesforce)/v1.9/blog/2020/show-source-ip/trafficManagementprotocol extendingJoin us for the first IstioCon in 2021!<p>IstioCon 2021 will be the inaugural conference for Istio, the industry&rsquo;s <a href="https://www.cncf.io/wp-content/uploads/2020/11/CNCF_Survey_Report_2020.pdf">most popular service mesh</a>. In its first year, IstioCon will be 100% virtual, connecting community members across the globe with Istio&rsquo;s ecosystem. This conference will take place at the end of February.</p> <figure style="width:75%"> <div class="wrapper-with-intrinsic-ratio" style="padding-bottom:40.3125%"> <a data-skipendnotes="true" href="/v1.9/blog/2020/istiocon-2021/istioconlogo.jpg" title=""> <img class="element-to-stretch" src="/v1.9/blog/2020/istiocon-2021/istioconlogo.jpg" alt="IstioCon logo" /> </a> </div> <figcaption></figcaption> </figure> <p>All the information related to IstioCon will be published on the <a href="https://events.istio.io/">conference website</a>. IstioCon provides an opportunity to showcase the lessons learned from running Istio in production, hands-on experiences from the Istio community, and will feature maintainers from across the Istio ecosystem. At this time, we encourage Istio users, developers, partners, and advocates to <a href="https://sessionize.com/istiocon-2021/">submit a session proposal through the conference&rsquo;s CFP portal</a>.
The conference offers a mix of keynotes, technical talks, lightning talks, workshops, and roadmap sessions. Choose from the following formats to submit a session proposal for IstioCon:</p> <ul> <li><strong>Presentation:</strong> 40 minute presentation, maximum of 2 speakers</li> <li><strong>Panel:</strong> 40 minutes of discussion among 3 to 5 speakers</li> <li><strong>Workshop:</strong> 160 minute (2h 40m), in-depth, hands-on presentation with 1–4 speakers</li> <li><strong>Lightning Talk:</strong> 10 minute presentation, limited to 1 speaker</li> </ul> <p>This community-led event also has in store two social hours to take the load off and mesh with the Istio community, vendors, and maintainers. Participation in the event is free of charge, and will only require participants to register in order to join.</p> <p>Stay tuned to hear more about this conference, and we hope you can join us at the first IstioCon in 2021!</p>Tue, 08 Dec 2020 00:00:00 +0000/v1.9/blog/2020/istiocon-2021/Istio Steering Committee/v1.9/blog/2020/istiocon-2021/IstioConIstioconferenceHandling Docker Hub rate limiting <p>Since November 20th, 2020, Docker Hub has introduced <a href="https://www.docker.com/increase-rate-limits">rate limits</a> on image pulls.</p> <p>Because Istio uses <a href="https://hub.docker.com/u/istio">Docker Hub</a> as the default registry, usage on a large cluster may lead to pods failing to start up due to exceeding rate limits. This can be especially problematic for Istio, as there is typically an Istio sidecar image alongside most pods in the cluster.</p> <h2 id="mitigations">Mitigations</h2> <p>Istio allows you to specify a custom Docker registry, which you can use to have container images fetched from your private registry. This can be configured by passing <code>--set hub=&lt;some-custom-registry&gt;</code> at installation time.</p> <p>Istio provides official mirrors on <a href="https://gcr.io/istio-release">Google Container Registry</a>.
This can be configured with <code>--set hub=gcr.io/istio-release</code>. This is available for Istio 1.5+.</p> <p>Alternatively, you can copy the official Istio images to your own registry. This is especially useful if your cluster runs in an environment with a registry tailored for your use case (for example, on AWS you may want to mirror images to Amazon ECR) or you have air-gapped security requirements where access to public registries is restricted. This can be done with the following script:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ SOURCE_HUB=istio $ DEST_HUB=my-registry # Replace this with the destination hub $ IMAGES=( install-cni operator pilot proxyv2 ) # Images to mirror. $ VERSIONS=( 1.7.5 1.8.0 ) # Versions to copy $ for image in &#34;${IMAGES[@]}&#34;; do $ for version in &#34;${VERSIONS[@]}&#34;; do $ for variant in &#34;&#34; &#34;-distroless&#34;; do # Variants to copy $ name=$image:$version$variant $ docker pull $SOURCE_HUB/$name $ docker tag $SOURCE_HUB/$name $DEST_HUB/$name $ docker push $DEST_HUB/$name $ docker rmi $SOURCE_HUB/$name $ docker rmi $DEST_HUB/$name $ done $ done $ done </code></pre>Mon, 07 Dec 2020 00:00:00 +0000/v1.9/blog/2020/docker-rate-limit/John Howard (Google)/v1.9/blog/2020/docker-rate-limit/dockerExpanding into New Frontiers - Smart DNS Proxying in Istio <p>DNS resolution is a vital component of any application infrastructure on Kubernetes. When your application code attempts to access another service in the Kubernetes cluster or even a service on the internet, it has to first look up the IP address corresponding to the hostname of the service, before initiating a connection to the service. This name lookup process is often referred to as <strong>service discovery</strong>. In Kubernetes, the cluster DNS server, be it <code>kube-dns</code> or CoreDNS, resolves the service&rsquo;s hostname to a unique non-routable virtual IP (VIP), if it is a service of type <code>clusterIP</code>.
The <code>kube-proxy</code> on each node maps this VIP to a set of pods of the service, and forwards the traffic to one of them selected at random. When using a service mesh, the sidecar works similarly to the <code>kube-proxy</code> as far as traffic forwarding is concerned.</p> <p>The following diagram depicts the role of DNS today:</p> <figure style="width:75%"> <div class="wrapper-with-intrinsic-ratio" style="padding-bottom:57.00636942675159%"> <a data-skipendnotes="true" href="/v1.9/blog/2020/dns-proxy/role-of-dns-today.png" title="Role of DNS in Istio, today"> <img class="element-to-stretch" src="/v1.9/blog/2020/dns-proxy/role-of-dns-today.png" alt="Role of DNS in Istio, today" /> </a> </div> <figcaption>Role of DNS in Istio, today</figcaption> </figure> <h2 id="problems-posed-by-dns">Problems posed by DNS</h2> <p>While the role of DNS within the service mesh may seem insignificant, it has consistently stood in the way of expanding the mesh to VMs and enabling seamless multicluster access.</p> <h3 id="vm-access-to-kubernetes-services">VM access to Kubernetes services</h3> <p>Consider the case of a VM with a sidecar. 
As shown in the illustration below, applications on the VM cannot look up the IP addresses of services inside the Kubernetes cluster, as they typically have no access to the cluster&rsquo;s DNS server.</p> <figure style="width:75%"> <div class="wrapper-with-intrinsic-ratio" style="padding-bottom:42.37837837837838%"> <a data-skipendnotes="true" href="/v1.9/blog/2020/dns-proxy/vm-dns-resolution-issues.png" title="DNS resolution issues on VMs accessing Kubernetes services"> <img class="element-to-stretch" src="/v1.9/blog/2020/dns-proxy/vm-dns-resolution-issues.png" alt="DNS resolution issues on VMs accessing Kubernetes services" /> </a> </div> <figcaption>DNS resolution issues on VMs accessing Kubernetes services</figcaption> </figure> <p>It is technically possible to use <code>kube-dns</code> as a name server on the VM if one is willing to engage in some convoluted workarounds involving <code>dnsmasq</code> and external exposure of <code>kube-dns</code> using <code>NodePort</code> services, assuming you manage to convince your cluster administrator to do so. Even so, you are opening the door to a host of <a href="https://blog.aquasec.com/dns-spoofing-kubernetes-clusters">security issues</a>. At the end of the day, these are point solutions that are typically out of scope for those with limited organizational capability and domain expertise.</p> <h3 id="external-tcp-services-without-vips">External TCP services without VIPs</h3> <p>It is not just the VMs in the mesh that suffer from the DNS issue. For the sidecar to accurately distinguish traffic between two different TCP services that are outside the mesh, the services must be on different ports or they need to have a globally unique VIP, much like the <code>clusterIP</code> assigned to Kubernetes services. But what if there is no VIP? Cloud-hosted services, like hosted databases, typically do not have a VIP.
Instead, the provider&rsquo;s DNS server returns one of the instance IPs that can then be directly accessed by the application. For example, consider the two service entries below, pointing to two different AWS RDS services:</p> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' >apiVersion: networking.istio.io/v1alpha3 kind: ServiceEntry metadata: name: db1 namespace: ns1 spec: hosts: - mysql-instance1.us-east-1.rds.amazonaws.com ports: - name: mysql number: 3306 protocol: TCP resolution: DNS --- apiVersion: networking.istio.io/v1alpha3 kind: ServiceEntry metadata: name: db2 namespace: ns1 spec: hosts: - mysql-instance2.us-east-1.rds.amazonaws.com ports: - name: mysql number: 3306 protocol: TCP resolution: DNS </code></pre> <p>The sidecar has a single listener on <code>0.0.0.0:3306</code> that looks up the IP address of <code>mysql-instance1.us-east1.rds.amazonaws.com</code> from public DNS servers and forwards traffic to it. It cannot route traffic to <code>db2</code> as it has no way of distinguishing whether traffic arriving at <code>0.0.0.0:3306</code> is bound for <code>db1</code> or <code>db2</code>. The only way to accomplish this is to set the resolution to <code>NONE</code> causing the sidecar to <em>blindly forward any traffic</em> on port <code>3306</code> to the original IP requested by the application. This is akin to punching a hole in the firewall allowing all traffic to port <code>3306</code> irrespective of the destination IP. To get traffic flowing, you are now forced to compromise on the security posture of your system.</p> <h3 id="resolving-dns-for-services-in-remote-clusters">Resolving DNS for services in remote clusters</h3> <p>The DNS limitations of a multicluster mesh are well known. 
Services in one cluster cannot look up the IP addresses of services in other clusters, without clunky workarounds such as creating stub services in the caller namespace.</p> <h2 id="taking-control-of-dns">Taking control of DNS</h2> <p>All in all, DNS has been a thorny issue in Istio for a while. It was time to slay the beast. We (the Istio networking team) decided to tackle the problem once and for all in a way that is completely transparent to you, the end user. Our first attempt involved utilizing Envoy&rsquo;s DNS proxy. It turned out to be very unreliable, and disappointing overall, due to the general lack of sophistication in the c-ares DNS library used by Envoy. Determined to solve the problem, we decided to implement the DNS proxy in the Istio sidecar agent, written in Go. We were able to optimize the implementation to handle all the scenarios that we wanted to tackle without compromising on scale and stability. The Go DNS library we use is the same one used by scalable DNS implementations such as CoreDNS, Consul, and Mesos, and it has been battle tested in production.</p> <p>Starting with Istio 1.8, the Istio agent on the sidecar will ship with a caching DNS proxy, programmed dynamically by Istiod. Istiod pushes the hostname-to-IP-address mappings for all the services that the application may access, based on the Kubernetes services and service entries in the cluster. DNS lookup queries from the application are transparently intercepted and served by the Istio agent in the pod or VM. If the query is for a service within the mesh, <em>irrespective of the cluster that the service is in</em>, the agent responds directly to the application. If not, it forwards the query to the upstream name servers defined in <code>/etc/resolv.conf</code>.
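The agent's decision logic can be sketched in a few lines of Python (a conceptual illustration, not the actual Go implementation; the table entries and IP addresses below are hypothetical examples of what Istiod might push):

```python
# Conceptual model of the sidecar agent's caching DNS proxy:
# answer mesh hostnames from the table pushed by Istiod,
# forward everything else to the upstream resolvers.
# The table below is hypothetical example data.
MESH_TABLE = {
    "productpage.ns1.svc.cluster.local": "10.8.1.5",
    # A ServiceEntry host with an auto-allocated Class E VIP:
    "mysql-instance1.us-east-1.rds.amazonaws.com": "240.240.0.1",
}

def resolve(hostname: str, forward_upstream) -> str:
    """Serve mesh names locally; delegate unknown names upstream."""
    ip = MESH_TABLE.get(hostname)
    if ip is not None:
        # Served directly by the agent: no round trip to kube-dns.
        return ip
    # Fall through to the name servers from /etc/resolv.conf.
    return forward_upstream(hostname)
```

Any name not in the pushed table, such as a public internet hostname, simply takes the upstream path, so applications see no behavioral difference.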
The following diagram depicts the interactions that occur when an application tries to access a service using its hostname.</p> <figure style="width:75%"> <div class="wrapper-with-intrinsic-ratio" style="padding-bottom:41.07929515418502%"> <a data-skipendnotes="true" href="/v1.9/blog/2020/dns-proxy/dns-interception-in-istio.png" title="Smart DNS proxying in Istio sidecar agent"> <img class="element-to-stretch" src="/v1.9/blog/2020/dns-proxy/dns-interception-in-istio.png" alt="Smart DNS proxying in Istio sidecar agent" /> </a> </div> <figcaption>Smart DNS proxying in Istio sidecar agent</figcaption> </figure> <p>As you will see in the following sections, <em>the DNS proxying feature has had an enormous impact across many aspects of Istio.</em></p> <h3 id="reduced-load-on-your-dns-servers-w-faster-resolution">Reduced load on your DNS servers w/ faster resolution</h3> <p>The load on your cluster’s Kubernetes DNS server drops drastically, as almost all DNS queries are resolved within the pod by Istio. The bigger the footprint of the mesh on a cluster, the lower the load on your DNS servers. Implementing our own DNS proxy in the Istio agent has allowed us to implement cool optimizations such as <a href="https://coredns.io/plugins/autopath/">CoreDNS auto-path</a> without the correctness issues that CoreDNS currently faces.</p> <p>To understand the impact of this optimization, let&rsquo;s take a simple DNS lookup scenario in a standard Kubernetes cluster without any custom DNS setup for pods, i.e., with the default setting of <code>ndots:5</code> in <code>/etc/resolv.conf</code>. When your application starts a DNS lookup for <code>productpage.ns1.svc.cluster.local</code>, it appends the DNS search namespaces in <code>/etc/resolv.conf</code> (e.g., <code>ns1.svc.cluster.local</code>) as part of the DNS query, before querying the host as-is.
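This search-list expansion can be sketched as follows (an illustrative model of glibc-style resolver behavior with <code>ndots:5</code>, not resolver source code; the last two search domains are hypothetical placeholders to bring the list to five entries):

```python
# Illustrative model of glibc-style search-list expansion with ndots:5.
# The first three suffixes are the standard Kubernetes ones for a pod
# in namespace ns1; the last two are hypothetical extra suffixes.
SEARCH_DOMAINS = [
    "ns1.svc.cluster.local",
    "svc.cluster.local",
    "cluster.local",
    "internal.example.com",
    "example.com",
]

def dns_queries(name: str, ndots: int = 5) -> list:
    """Return the (name, record type) queries sent for a lookup."""
    candidates = []
    # Names with fewer dots than ndots are tried against every
    # search domain before being tried as-is.
    if not name.endswith(".") and name.count(".") < ndots:
        candidates = [f"{name}.{domain}" for domain in SEARCH_DOMAINS]
    candidates.append(name)
    # Each candidate is queried twice: IPv4 A and IPv6 AAAA records.
    return [(c, rtype) for c in candidates for rtype in ("A", "AAAA")]
```

With this five-entry search list, `productpage.ns1.svc.cluster.local` (four dots) produces 12 queries, the first being for `productpage.ns1.svc.cluster.local.ns1.svc.cluster.local`.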
As a result, the first DNS query that is actually sent out will look like <code>productpage.ns1.svc.cluster.local.ns1.svc.cluster.local</code>, which will inevitably fail DNS resolution when Istio is not involved. If your <code>/etc/resolv.conf</code> has 5 search namespaces, the application will send two DNS queries for each search namespace, one for the IPv4 <code>A</code> record and another for the IPv6 <code>AAAA</code> record, and then a final pair of queries with the exact hostname used in the code. <em>Before establishing the connection, the application performs 12 DNS lookup queries for each host!</em></p> <p>With Istio&rsquo;s implementation of the CoreDNS style auto-path technique, the sidecar agent will detect the real hostname being queried within the first query and return a <code>cname</code> record to <code>productpage.ns1.svc.cluster.local</code> as part of this DNS response, as well as the <code>A/AAAA</code> record for <code>productpage.ns1.svc.cluster.local</code>. The application receiving this response can now extract the IP address immediately and proceed to establishing a TCP connection to that IP. <em>The smart DNS proxy in the Istio agent dramatically cuts down the number of DNS queries from 12 to just 2!</em></p> <h3 id="vms-to-kubernetes-integration">VMs to Kubernetes integration</h3> <p>Since the Istio agent performs local DNS resolution for services within the mesh, DNS lookup queries for Kubernetes services from VMs will now succeed without requiring clunky workarounds for exposing <code>kube-dns</code> outside the cluster. 
The ability to seamlessly resolve internal services in a cluster will now simplify your monolith to microservice journey, as the monolith on VMs can now access microservices on Kubernetes without additional levels of indirection via API gateways.</p> <h3 id="automatic-vip-allocation-where-possible">Automatic VIP allocation where possible</h3> <p>You may ask, how does this DNS functionality in the agent solve the problem of distinguishing between multiple external TCP services without VIPs on the same port?</p> <p>Taking inspiration from Kubernetes, Istio will now automatically allocate non-routable VIPs (from the Class E subnet) to such services as long as they do not use a wildcard host. The Istio agent on the sidecar will use the VIPs as responses to the DNS lookup queries from the application. Envoy can now clearly distinguish traffic bound for each external TCP service and forward it to the right target. With the introduction of the DNS proxying, you will no longer need to use <code>resolution: NONE</code> for non-wildcard TCP services, improving your overall security posture. Istio cannot help much with wildcard external services (e.g., <code>*.us-east1.rds.amazonaws.com</code>). You will have to resort to NONE resolution mode to handle such services.</p> <h3 id="multicluster-dns-lookup">Multicluster DNS lookup</h3> <p>For the adventurous lot, attempting to weave a multicluster mesh where applications directly call internal services of a namespace in a remote cluster, the DNS proxy functionality comes in quite handy. Your applications can <em>resolve Kubernetes services on any cluster in any namespace</em>, without the need to create stub Kubernetes services in every cluster.</p> <p>The benefits of the DNS proxy extend beyond the multicluster models that are currently described in Istio today. 
At Tetrate, we use this mechanism extensively in our customers&rsquo; multicluster deployments to enable sidecars to resolve DNS for hosts exposed at ingress gateways of all the clusters in a mesh, and access them over mutual TLS.</p> <h2 id="concluding-thoughts">Concluding thoughts</h2> <p>The problems caused by lack of control over DNS have often been overlooked or ignored entirely when it comes to weaving a mesh across many clusters and environments, and integrating external services. The introduction of a caching DNS proxy in the Istio sidecar agent solves these issues. Exercising control over the application’s DNS resolution allows Istio to accurately identify the target service to which traffic is bound, and enhance the overall security, routing, and telemetry posture in Istio within and across clusters.</p> <p>Smart DNS proxying is enabled in the <code>preview</code> profile in Istio 1.8. Please try it out!</p>Thu, 12 Nov 2020 00:00:00 +0000/v1.9/blog/2020/dns-proxy/Shriram Rajagopalan (Tetrate.io) on behalf of Istio Networking WG/v1.9/blog/2020/dns-proxy/dnssidecarmulticlustervmexternal services2020 Steering Committee Election Results<p>Last month, we <a href="../steering-changes/">announced a revision to our Steering Committee charter</a>, opening up governance roles to more contributors and community members.
The Steering Committee now consists of 9 proportionally-allocated Contribution Seats, and 4 elected Community Seats.</p> <p>We have now concluded our <a href="https://github.com/istio/community/tree/master/steering/elections/2020">inaugural election</a> for the Community Seats, and we&rsquo;re excited to welcome the following new members to the Committee:</p> <ul> <li><a href="https://github.com/istio/community/blob/master/steering/elections/2020/nrjpoddar.md">Neeraj Poddar</a> (Aspen Mesh)</li> <li><a href="https://github.com/istio/community/blob/master/steering/elections/2020/zackbutcher.md">Zack Butcher</a> (Tetrate)</li> <li><a href="https://github.com/istio/community/blob/master/steering/elections/2020/ceposta.md">Christian Posta</a> (Solo.io)</li> <li><a href="https://github.com/istio/community/blob/master/steering/elections/2020/hzxuzhonghu.md">Zhonghu Xu</a> (Huawei)</li> </ul> <p>They join Contribution Seat holders from Google, IBM/Red Hat and Salesforce. We now have representation from 7 organizations on Steering, reflecting the breadth of our contributor ecosystem.</p> <p>Thank you to everyone who participated in the election process. The next election will be in July 2021.</p>Tue, 29 Sep 2020 00:00:00 +0000/v1.9/blog/2020/steering-election-results/Istio Steering Committee/v1.9/blog/2020/steering-election-results/istiosteeringgovernancecommunityelectionLarge Scale Security Policy Performance Tests <h2 id="overview">Overview</h2> <p>Istio has a wide range of security policies which can be easily configured into systems of services. 
As the number of applied policies increases, it is important to understand how the latency, memory usage, and CPU usage of the system are affected.</p> <p>This blog post goes over common security policy use cases and how the number of security policies, or the number of specific rules in a security policy, can affect the overall latency of requests.</p> <h2 id="setup">Setup</h2> <p>There is a wide range of security policies and many more combinations of those policies. We will go over 6 of the most commonly used test cases.</p> <p>The following test cases are run in an environment which consists of a <a href="https://fortio.org/">Fortio</a> client sending requests to a Fortio server, with a baseline of no Envoy sidecars deployed. The following data was gathered by using the <a href="https://github.com/istio/tools/tree/master/perf/benchmark">Istio performance benchmarking tool</a>. <figure style="width:55%"> <div class="wrapper-with-intrinsic-ratio" style="padding-bottom:45.34%"> <a data-skipendnotes="true" href="/v1.9/blog/2020/large-scale-security-policy-performance-tests/istio_setup.svg" title=""> <img class="element-to-stretch" src="/v1.9/blog/2020/large-scale-security-policy-performance-tests/istio_setup.svg" alt="Environment setup" /> </a> </div> <figcaption></figcaption> </figure></p> <p>In these test cases, requests either do not match any rules or match only the very last rule in the security policies. This ensures that the RBAC filter is applied to all policy rules, and never matches a policy rule before evaluating all the policies. Even though this is not necessarily what will happen in your own system, this policy setup provides data for the worst possible performance of each test case.</p> <h2 id="test-cases">Test cases</h2> <ol> <li><p>Mutual TLS STRICT vs plaintext.</p></li> <li><p>A single authorization policy with a variable number of principal rules as well as a <code>PeerAuthentication</code> policy.
The principal rule is dependent on the <code>PeerAuthentication</code> policy being applied to the system.</p></li> <li><p>A single authorization policy with a variable number of <code>requestPrincipal</code> rules as well as a <code>RequestAuthentication</code> policy. The <code>requestPrincipal</code> is dependent on the <code>RequestAuthentication</code> policy being applied to the system.</p></li> <li><p>A single authorization policy with a variable number of <code>paths</code> vs <code>sourceIP</code> rules.</p></li> <li><p>A variable number of authorization policies consisting of a single path or <code>sourceIP</code> rule.</p></li> <li><p>A single <code>RequestAuthentication</code> policy with variable number of <code>JWTRules</code> rules.</p></li> </ol> <h2 id="data">Data</h2> <p>The y-axis of each test is the latency in milliseconds, and the x-axis is the number of concurrent connections. The x-axis of each graph consists of 3 data points that represent a small load (qps=100, conn=8), medium load (qps=500, conn=32), and large load (qps=1000, conn=64).</p> <div id="tabset-blog-2020-large-scale-security-policy-performance-tests-1" role="tablist" class="tabset"> <div class="tab-strip" data-category-name="platform"><button aria-selected="true" data-category-value="one" aria-controls="tabset-blog-2020-large-scale-security-policy-performance-tests-1-0-panel" id="tabset-blog-2020-large-scale-security-policy-performance-tests-1-0-tab" role="tab"><span>MTLS vs plainText</span> </button><button tabindex="-1" data-category-value="two" aria-controls="tabset-blog-2020-large-scale-security-policy-performance-tests-1-1-panel" id="tabset-blog-2020-large-scale-security-policy-performance-tests-1-1-tab" role="tab"><span>AuthZ mTLS SourcePrincipals</span> </button><button tabindex="-1" data-category-value="three" aria-controls="tabset-blog-2020-large-scale-security-policy-performance-tests-1-2-panel" 
id="tabset-blog-2020-large-scale-security-policy-performance-tests-1-2-tab" role="tab"><span>AuthZ JWT RequestPrincipal</span> </button><button tabindex="-1" data-category-value="four" aria-controls="tabset-blog-2020-large-scale-security-policy-performance-tests-1-3-panel" id="tabset-blog-2020-large-scale-security-policy-performance-tests-1-3-tab" role="tab"><span>AuthZ sourceIP</span> </button><button tabindex="-1" data-category-value="five" aria-controls="tabset-blog-2020-large-scale-security-policy-performance-tests-1-4-panel" id="tabset-blog-2020-large-scale-security-policy-performance-tests-1-4-tab" role="tab"><span>AuthZ paths</span> </button><button tabindex="-1" data-category-value="six" aria-controls="tabset-blog-2020-large-scale-security-policy-performance-tests-1-5-panel" id="tabset-blog-2020-large-scale-security-policy-performance-tests-1-5-tab" role="tab"><span>RequestAuthN JWT Issuer</span> </button><button tabindex="-1" data-category-value="seven" aria-controls="tabset-blog-2020-large-scale-security-policy-performance-tests-1-6-panel" id="tabset-blog-2020-large-scale-security-policy-performance-tests-1-6-tab" role="tab"><span>Variable AuthZ</span> </button></div> <div class="tab-content"><div id="tabset-blog-2020-large-scale-security-policy-performance-tests-1-0-panel" role="tabpanel" tabindex="0" aria-labelledby="tabset-blog-2020-large-scale-security-policy-performance-tests-1-0-tab"><figure style="width:90%"> <div class="wrapper-with-intrinsic-ratio" style="padding-bottom:45.34%"> <a data-skipendnotes="true" href="/v1.9/blog/2020/large-scale-security-policy-performance-tests/mtls_plaintext.svg" title=""> <img class="element-to-stretch" src="/v1.9/blog/2020/large-scale-security-policy-performance-tests/mtls_plaintext.svg" alt="MTLS vs plaintext" /> </a> </div> <figcaption></figcaption> </figure> The difference of latency between MTLS mode STRICT and plaintext is very small in lower loads. 
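 For reference, the mesh-wide mTLS STRICT mode exercised in this test case can be enabled with a <code>PeerAuthentication</code> policy along these lines (an illustrative sketch, not the exact benchmark configuration): <pre><code class='language-yaml'># Illustrative mesh-wide policy: require mutual TLS for all workloads.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system
spec:
  mtls:
    mode: STRICT
</code></pre>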
As the <code>qps</code> and <code>conn</code> increase, the latency of requests with MTLS STRICT increases. The additional latency at larger loads is minimal compared to the increase from having no sidecars to having sidecars with plaintext.</div><div hidden id="tabset-blog-2020-large-scale-security-policy-performance-tests-1-1-panel" role="tabpanel" tabindex="0" aria-labelledby="tabset-blog-2020-large-scale-security-policy-performance-tests-1-1-tab"><figure style="width:90%"> <div class="wrapper-with-intrinsic-ratio" style="padding-bottom:45.34%"> <a data-skipendnotes="true" href="/v1.9/blog/2020/large-scale-security-policy-performance-tests/AuthZ_var_principals.svg" title=""> <img class="element-to-stretch" src="/v1.9/blog/2020/large-scale-security-policy-performance-tests/AuthZ_var_principals.svg" alt="Authorization policy variable number of principals" /> </a> </div> <figcaption></figcaption> </figure> <p>For Authorization policies with 10 vs 1000 principal rules, the latency increase of 10 principal rules compared to no policies is greater than the latency increase of 1000 principals compared to 10 principals.</div><div hidden id="tabset-blog-2020-large-scale-security-policy-performance-tests-1-2-panel" role="tabpanel" tabindex="0" aria-labelledby="tabset-blog-2020-large-scale-security-policy-performance-tests-1-2-tab"><figure style="width:90%"> <div class="wrapper-with-intrinsic-ratio" style="padding-bottom:45.34%"> <a data-skipendnotes="true" href="/v1.9/blog/2020/large-scale-security-policy-performance-tests/AuthZ_var_requestPrincipals.svg" title=""> <img class="element-to-stretch" src="/v1.9/blog/2020/large-scale-security-policy-performance-tests/AuthZ_var_requestPrincipals.svg" alt="Authorization policy with variable principals" /> </a> </div> <figcaption></figcaption> </figure> For Authorization policies with a variable number of <code>requestPrincipal</code> rules, the latency increase of 10 <code>requestPrincipal</code> rules
compared to no policies is nearly the same as the latency increase of 1000 <code>requestPrincipal</code> rules compared to 10 <code>requestPrincipal</code> rules.</div><div hidden id="tabset-blog-2020-large-scale-security-policy-performance-tests-1-3-panel" role="tabpanel" tabindex="0" aria-labelledby="tabset-blog-2020-large-scale-security-policy-performance-tests-1-3-tab"><p><figure style="width:90%"> <div class="wrapper-with-intrinsic-ratio" style="padding-bottom:45.34%"> <a data-skipendnotes="true" href="/v1.9/blog/2020/large-scale-security-policy-performance-tests/AuthZ_var_sourceIP.svg" title=""> <img class="element-to-stretch" src="/v1.9/blog/2020/large-scale-security-policy-performance-tests/AuthZ_var_sourceIP.svg" alt="Authorization policy with variable `sourceIP` rules" /> </a> </div> <figcaption></figcaption> </figure> The latency increase of a single <code>AuthZ</code> policy with 10 <code>sourceIP</code> rules is not proportional to the latency increase of a single <code>AuthZ</code> policy with 1000 <code>sourceIP</code> rules compared to the system with sidecar and no policies.</p> <p><figure style="width:90%"> <div class="wrapper-with-intrinsic-ratio" style="padding-bottom:45.34%"> <a data-skipendnotes="true" href="/v1.9/blog/2020/large-scale-security-policy-performance-tests/AuthZ_paths_vs_sourceIP.svg" title=""> <img class="element-to-stretch" src="/v1.9/blog/2020/large-scale-security-policy-performance-tests/AuthZ_paths_vs_sourceIP.svg" alt="Authorization policy with both paths and `sourceIP`" /> </a> </div> <figcaption></figcaption> </figure> The latency increase of a variable number of <code>sourceIP</code> rules is marginally greater than that of path rules.</p> </div><div hidden id="tabset-blog-2020-large-scale-security-policy-performance-tests-1-4-panel" role="tabpanel" tabindex="0" aria-labelledby="tabset-blog-2020-large-scale-security-policy-performance-tests-1-4-tab"><figure style="width:90%"> <div class="wrapper-with-intrinsic-ratio" 
style="padding-bottom:45.34%"> <a data-skipendnotes="true" href="/v1.9/blog/2020/large-scale-security-policy-performance-tests/AuthZ_var_paths.svg" title=""> <img class="element-to-stretch" src="/v1.9/blog/2020/large-scale-security-policy-performance-tests/AuthZ_var_paths.svg" alt="Authorization policy with variable number of paths" /> </a> </div> <figcaption></figcaption> </figure> The latency increase of a single <code>AuthZ</code> policy with 10 path rules is not proportional to the latency increase of a single <code>AuthZ</code> policy with 1000 path rules compared to the system with sidecar and no policies. This trend is similar to that of <code>sourceIP</code> rules. <figure style="width:90%"> <div class="wrapper-with-intrinsic-ratio" style="padding-bottom:45.34%"> <a data-skipendnotes="true" href="/v1.9/blog/2020/large-scale-security-policy-performance-tests/AuthZ_paths_vs_sourceIP.svg" title=""> <img class="element-to-stretch" src="/v1.9/blog/2020/large-scale-security-policy-performance-tests/AuthZ_paths_vs_sourceIP.svg" alt="Authorization policy with both paths and `sourceIP`" /> </a> </div> <figcaption></figcaption> </figure> The latency of a variable number of path rules is marginally less than that of <code>sourceIP</code> rules.</div><div hidden id="tabset-blog-2020-large-scale-security-policy-performance-tests-1-5-panel" role="tabpanel" tabindex="0" aria-labelledby="tabset-blog-2020-large-scale-security-policy-performance-tests-1-5-tab"><figure style="width:90%"> <div class="wrapper-with-intrinsic-ratio" style="padding-bottom:45.34%"> <a data-skipendnotes="true" href="/v1.9/blog/2020/large-scale-security-policy-performance-tests/RequestAuthN_jwks.svg" title=""> <img class="element-to-stretch" src="/v1.9/blog/2020/large-scale-security-policy-performance-tests/RequestAuthN_jwks.svg" alt="Request Authentication with variable number of JWT issuers" /> </a> </div> <figcaption></figcaption> </figure> The latency of a single JWT issuer is comparable to
that of no policies, but as the number of JWT issuers increases, the latency increases disproportionately.</div><div hidden id="tabset-blog-2020-large-scale-security-policy-performance-tests-1-6-panel" role="tabpanel" tabindex="0" aria-labelledby="tabset-blog-2020-large-scale-security-policy-performance-tests-1-6-tab"><p>To test how the number of Authorization policies affects runtime, the tests can be broken into two cases:</p> <ol> <li><p>Every Authorization policy has a single <code>sourceIP</code> rule.</p></li> <li><p>Every Authorization policy has a single path rule.</p></li> </ol> <p><figure style="width:90%"> <div class="wrapper-with-intrinsic-ratio" style="padding-bottom:45.34%"> <a data-skipendnotes="true" href="/v1.9/blog/2020/large-scale-security-policy-performance-tests/AuthZ_var_policies_sourceIP.svg" title=""> <img class="element-to-stretch" src="/v1.9/blog/2020/large-scale-security-policy-performance-tests/AuthZ_var_policies_sourceIP.svg" alt="Authorization policy with variable number of policies, with `sourceIP` rule" /> </a> </div> <figcaption></figcaption> </figure> <figure style="width:90%"> <div class="wrapper-with-intrinsic-ratio" style="padding-bottom:45.34%"> <a data-skipendnotes="true" href="/v1.9/blog/2020/large-scale-security-policy-performance-tests/AuthZ_var_policies_paths.svg" title=""> <img class="element-to-stretch" src="/v1.9/blog/2020/large-scale-security-policy-performance-tests/AuthZ_var_policies_paths.svg" alt="Authorization policy with variable number of policies, with path rule" /> </a> </div> <figcaption></figcaption> </figure> The overall trends of both graphs are similar. This is consistent with the paths vs <code>sourceIP</code> data, which showed that the latency is marginally greater for <code>sourceIP</code> rules than for path rules.</p> </div></div> </div> <h2 id="conclusion">Conclusion</h2> <ul> <li><p>In general, adding security policies does not add significant overhead to the system.
The policies that add the most latency include:</p> <ol> <li><p><code>RequestAuthentication</code> policy with <code>JWTRules</code> rules.</p></li> <li><p>Authorization policy with <code>requestPrincipal</code> rules.</p></li> <li><p>Authorization policy with principal rules.</p></li> </ol></li> <li><p>At lower loads (lower qps and conn), the difference in latency for most policies is minimal.</p></li> <li><p>Envoy proxy sidecars increase latency more than most policies, even if the policies are large.</p></li> <li><p>The latency increase from extremely large policies is comparable to the latency increase from adding Envoy proxy sidecars to a system without them.</p></li> <li><p>Two different tests determined that the <code>sourceIP</code> rule is marginally slower than a path rule.</p></li> </ul> <p>If you are interested in creating your own large scale security policies and running performance tests with them, see the <a href="https://github.com/istio/tools/tree/master/perf/benchmark/security/generate_policies">performance benchmarking tool README</a>.</p> <p>If you are interested in reading more about the security policy tests, see <a href="https://docs.google.com/document/d/1ZP9eQ_2EJEG12xnfsoo7125FDN38r62iqY1PUn9Dz-0/edit?usp=sharing">our design doc</a>. 
If you don&rsquo;t already have access, you can <a href="/v1.9/about/community/join/">join the Istio team drive</a>.</p>Tue, 15 Sep 2020 00:00:00 +0000/v1.9/blog/2020/large-scale-security-policy-performance-tests/Michael Eizaguirre (Google), Yangmin Zhu (Google), Carolyn Hu (Google)/v1.9/blog/2020/large-scale-security-policy-performance-tests/testsecurity policyperformanceDeploying Istio Control Planes Outside the Mesh <h2 id="overview">Overview</h2> <p>From experience working with various service mesh users and vendors, we believe there are 3 key personas for a typical service mesh:</p> <ul> <li><p>Mesh Operator, who manages the service mesh control plane installation and upgrade.</p></li> <li><p>Mesh Admin, often referred as Platform Owner, who owns the service mesh platform and defines the overall strategy and implementation for service owners to adopt service mesh.</p></li> <li><p>Mesh User, often referred as Service Owner, who owns one or more services in the mesh.</p></li> </ul> <p>Prior to version 1.7, Istio required the control plane to run in one of the <span class="term" data-title="Primary Cluster" data-body="&lt;p&gt;A primary cluster is a &lt;a href=&#34;/docs/reference/glossary/#cluster&#34;&gt;cluster&lt;/a&gt; with a &lt;a href=&#34;/docs/reference/glossary/#control-plane&#34;&gt;control plane&lt;/a&gt;. A single &lt;a href=&#34;/docs/reference/glossary/#service-mesh&#34;&gt;mesh&lt;/a&gt; can have more than one primary cluster for HA or to reduce latency. Primary clusters can act as the control plane for &lt;a href=&#34;/docs/reference/glossary/#remote-cluster&#34;&gt;remote clusters&lt;/a&gt;.&lt;/p&gt; ">primary clusters</span> in the mesh, leading to a lack of separation between the mesh operator and the mesh admin. 
Istio 1.7 introduces a new <span class="term" data-title="External Control Plane" data-body="&lt;p&gt;An external control plane is a &lt;a href=&#34;/docs/reference/glossary/#control-plane&#34;&gt;control plane&lt;/a&gt; that externally manages mesh workloads running in their own &lt;a href=&#34;/docs/reference/glossary/#cluster&#34;&gt;clusters&lt;/a&gt; or other infrastructure. The control plane may, itself, be deployed in a cluster, although not in one of the clusters that is part of the mesh it&amp;rsquo;s controlling. Its purpose is to cleanly separate the control plane from the data plane of a mesh.&lt;/p&gt; ">external control plane</span> deployment model which enables mesh operators to install and manage mesh control planes on separate external clusters. This deployment model allows a clear separation between mesh operators and mesh admins. Istio mesh operators can now run Istio control planes for mesh admins while mesh admins can still control the configuration of the control plane without worrying about installing or managing the control plane. 
This model is transparent to mesh users.</p> <h2 id="external-control-plane-deployment-model">External control plane deployment model</h2> <p>After installing Istio using the <a href="/v1.9/docs/setup/install/istioctl/#install-istio-using-the-default-profile">default installation profile</a>, you will have an Istiod control plane installed in a single cluster like the diagram below:</p> <figure style="width:100%"> <div class="wrapper-with-intrinsic-ratio" style="padding-bottom:40.78842240615207%"> <a data-skipendnotes="true" href="/v1.9/blog/2020/new-deployment-model/single-cluster.svg" title="Single cluster Istio mesh"> <img class="element-to-stretch" src="/v1.9/blog/2020/new-deployment-model/single-cluster.svg" alt="Istio mesh in a single cluster" /> </a> </div> <figcaption>Istio mesh in a single cluster</figcaption> </figure> <p>With the new deployment model in Istio 1.7, it&rsquo;s possible to run Istiod on an external cluster, separate from the mesh services as shown in the diagram below. The external control plane cluster is owned by the mesh operator while the mesh admin owns the cluster running services deployed in the mesh. The mesh admin has no access to the external control plane cluster. Mesh operators can follow the <a href="https://github.com/istio/istio/wiki/External-Istiod-single-cluster-steps">external istiod single cluster step by step guide</a> to explore more on this. 
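</p> <p>As a rough sketch of what this looks like in practice (the field values below are illustrative placeholders; refer to the guide above for the exact configuration), the cluster that runs the mesh workloads is installed without a local Istiod, pointing instead at the external control plane:</p> <pre><code class='language-yaml'># Illustrative remote-cluster installation pointing at an external Istiod.
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  profile: external
  values:
    global:
      istioNamespace: external-istiod   # namespace used by the external control plane
      remotePilotAddress: 240.0.0.10    # placeholder: reachable address of the external Istiod
</code></pre> <p>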
(Note: In some internal discussions among Istio maintainers, this model was previously referred to as &ldquo;central istiod&rdquo;.)</p> <figure style="width:100%"> <div class="wrapper-with-intrinsic-ratio" style="padding-bottom:47.926329500847174%"> <a data-skipendnotes="true" href="/v1.9/blog/2020/new-deployment-model/single-cluster-external-Istiod.svg" title="Single cluster Istio mesh with Istiod outside"> <img class="element-to-stretch" src="/v1.9/blog/2020/new-deployment-model/single-cluster-external-Istiod.svg" alt="Istio mesh in a single cluster with Istiod outside" /> </a> </div> <figcaption>Single cluster Istio mesh with Istiod in an external control plane cluster</figcaption> </figure> <p>Mesh admins can expand the service mesh to multiple clusters, which are managed by the same Istiod running in the external cluster. None of the mesh clusters are <span class="term" data-title="Primary Cluster" data-body="&lt;p&gt;A primary cluster is a &lt;a href=&#34;/docs/reference/glossary/#cluster&#34;&gt;cluster&lt;/a&gt; with a &lt;a href=&#34;/docs/reference/glossary/#control-plane&#34;&gt;control plane&lt;/a&gt;. A single &lt;a href=&#34;/docs/reference/glossary/#service-mesh&#34;&gt;mesh&lt;/a&gt; can have more than one primary cluster for HA or to reduce latency. Primary clusters can act as the control plane for &lt;a href=&#34;/docs/reference/glossary/#remote-cluster&#34;&gt;remote clusters&lt;/a&gt;.&lt;/p&gt; ">primary clusters</span>, in this case. They are all <span class="term" data-title="Remote Cluster" data-body="&lt;p&gt;A remote cluster is a &lt;a href=&#34;/docs/reference/glossary/#cluster&#34;&gt;cluster&lt;/a&gt; that connects to a &lt;a href=&#34;/docs/reference/glossary/#control-plane&#34;&gt;control plane&lt;/a&gt; residing outside of the cluster. 
A remote cluster can connect to a control plane running in a &lt;a href=&#34;/docs/reference/glossary/#primary-cluster&#34;&gt;primary cluster&lt;/a&gt; or to an &lt;a href=&#34;/docs/reference/glossary/#external-control-plane&#34;&gt;external control plane&lt;/a&gt;.&lt;/p&gt; ">remote clusters</span>. However, one of them also serves as the Istio configuration cluster, in addition to running services. The external control plane reads Istio configurations from the <code>config cluster</code> and Istiod pushes configuration to the data plane running in both the config cluster and other remote clusters as shown in the diagram below.</p> <figure style="width:100%"> <div class="wrapper-with-intrinsic-ratio" style="padding-bottom:44.93126790115233%"> <a data-skipendnotes="true" href="/v1.9/blog/2020/new-deployment-model/multiple-clusters-external-Istiod.svg" title="Multicluster Istio mesh with Istiod outside"> <img class="element-to-stretch" src="/v1.9/blog/2020/new-deployment-model/multiple-clusters-external-Istiod.svg" alt="Multicluster Istio mesh with Istiod outside" /> </a> </div> <figcaption>Multicluster Istio mesh with Istiod in an external control plane cluster</figcaption> </figure> <p>Mesh operators can further expand this deployment model to manage multiple Istio control planes from an external cluster running multiple Istiod control planes:</p> <figure style="width:100%"> <div class="wrapper-with-intrinsic-ratio" style="padding-bottom:45.55366354432676%"> <a data-skipendnotes="true" href="/v1.9/blog/2020/new-deployment-model/multiple-external-Istiods.svg" title="Multiple single clusters Istio meshes with Istiod outside"> <img class="element-to-stretch" src="/v1.9/blog/2020/new-deployment-model/multiple-external-Istiods.svg" alt="Istio meshes in single clusters with Istiod outside" /> </a> </div> <figcaption>Multiple single clusters with multiple Istiod control planes in an external control plane cluster</figcaption> </figure> <p>In this case, each Istiod 
manages its own remote cluster(s). Mesh operators can even install their own Istio mesh in the external control plane cluster and configure its <code>istio-ingress</code> gateway to route traffic from remote clusters to their corresponding Istiod control planes. To learn more about this, check out <a href="https://github.com/istio/istio/wiki/External-Istiod-single-cluster-steps#deploy-istio-mesh-on-external-control-plane-cluster-to-manage-traffic-to-istiod-deployments">these steps</a>.</p> <h2 id="conclusion">Conclusion</h2> <p>The external control plane deployment model enables the Istio control plane to be run and managed by mesh operators who have operational expertise in Istio, and provides a clean separation between service mesh control and data planes. Mesh operators can run the control plane in their own clusters or other environments, providing the control plane as a service to mesh admins. Mesh operators can run multiple Istiod control planes in a single cluster, deploying their own Istio mesh and using <code>istio-ingress</code> gateways to control access to these Istiod control planes. Through the examples provided here, mesh operators can explore different implementation choices and choose what works best for them.</p> <p>This new model reduces complexity for mesh admins by allowing them to focus on mesh configurations without operating the control plane themselves. Mesh admins can continue to configure mesh-wide settings and Istio resources without any access to external control plane clusters. Mesh users can continue to interact with the service mesh without any changes.</p>Thu, 27 Aug 2020 00:00:00 +0000/v1.9/blog/2020/new-deployment-model/Lin Sun (IBM), Iris Ding (IBM)/v1.9/blog/2020/new-deployment-model/istioddeployment modelinstalldeploy1.7Introducing the new Istio steering committee <p>Today, the Istio project is pleased to announce a new revision to its steering charter, which opens up governance roles to more contributors and community members. 
This revision solidifies our commitment to open governance, ensuring that the community around the project will always be able to steer its direction, and that no one company has majority voting control over the project.</p> <p>The Istio Steering Committee oversees the administrative aspects of the project and sets the marketing direction. From the earliest days of the project, it was bootstrapped with members from Google and IBM, the two founders and largest contributors, with the explicit intention that other seats would be added. We are very happy to deliver on that promise today, with a new charter designed to reward contribution and community.</p> <p>The new Steering Committee consists of 13 seats: 9 proportionally allocated <strong>Contribution Seats</strong>, and 4 elected <strong>Community Seats</strong>.</p> <h2 id="contribution-seats">Contribution Seats</h2> <p>The direction of a project is set by the people who contribute to it. We&rsquo;ve designed our committee to reflect that, with 9 seats to be attributed in proportion to contributions made to Istio in the previous 12 months. In Kubernetes, the mantra was &ldquo;chop wood, carry water,&rdquo; and we similarly want to reward companies who are fueling the growth of the project with contributions.</p> <p>This year, we&rsquo;ve chosen to use <strong>merged pull requests</strong> as our <a href="https://github.com/istio/community/blob/master/steering/CONTRIBUTION-FORMULA.md">proxy for proportional contribution</a>. We know that no measure of contribution is perfect, and as such we will explicitly reconsider the formula every year. 
(Other measures we considered, including commits, comments, and actions, gave the same results for this period.)</p> <p>In order to ensure corporate diversity, there will always be a minimum of three companies represented in Contribution Seats.</p> <h2 id="community-seats">Community Seats</h2> <p>There are many wonderful contributors to the Istio community, including developers, SREs and mesh admins, working for companies large and small. We wanted to ensure that their voices were included, both in terms of representation and selection.</p> <p>We have added 4 seats for representatives from 4 different organizations, who are not represented in the Contribution Seat allocation. These seats will be voted on by the Istio community in an <a href="https://github.com/istio/community/tree/master/steering/elections">annual election</a>.</p> <p>Any <a href="https://github.com/istio/community/blob/master/ROLES.md#member">project member</a> can stand for election; all Istio members who have been active in the last 12 months are eligible to vote.</p> <h2 id="corporate-diversification-is-the-goal">Corporate diversification is the goal</h2> <p>Our goal is that the governance of Istio reflects the diverse set of contributors. Both Google and IBM/Red Hat will have fewer seats than previously, and the new model is designed to ensure representation from at least 7 different organizations.</p> <p>We also want to make it clear that no single vendor, no matter how large their contribution, has majority voting control over the Istio project. 
We&rsquo;ve implemented a cap on the number of seats a company can hold, such that they can neither unilaterally win a vote nor veto a decision of the rest of the committee.</p> <h2 id="the-2020-committee-and-election">The 2020 committee and election</h2> <p>According to our <a href="https://docs.google.com/spreadsheets/d/1Dt-h9s8G7Wyt4r16ZVqcmdWXDuCaPC0kPS21BuAfCL8/edit#gid=0">seat allocation process</a>, this year Google will be allocated 5 seats and IBM/Red Hat will be allocated 3. We are pleased to announce that Salesforce, the third largest contributor to Istio in the last 12 months, has earned a Contribution Seat.</p> <p>The first <a href="https://github.com/istio/community/tree/master/steering/elections/2020">election for Community Seats</a> begins today. Members have two weeks to nominate themselves, and voting will run from 14 to 27 September. You can learn all about the election in the <code>istio/community</code> repository on GitHub. We&rsquo;re also hosting a special <a href="http://bit.ly/istiocommunitymeet">community meeting</a> this Thursday at 10:00 Pacific to discuss the changes and the election process. We&rsquo;d love to see you there!</p>Mon, 24 Aug 2020 00:00:00 +0000/v1.9/blog/2020/steering-changes/Istio Steering Committee/v1.9/blog/2020/steering-changes/istiosteeringgovernancecommunityelectionUsing MOSN with Istio: an alternative data plane <p><a href="https://github.com/mosn/mosn">MOSN</a> (Modular Open Smart Network) is a network proxy server written in GoLang. It was built at <a href="https://www.antfin.com">Ant Group</a> as a sidecar, API gateway, cloud-native ingress, and Layer 4/Layer 7 load balancer, among other roles. Over time, we&rsquo;ve added extra features, like a multi-protocol framework, a multi-process plug-in mechanism, a DSL, and support for the <a href="https://www.envoyproxy.io/docs/envoy/latest/api-docs/xds_protocol">xDS APIs</a>. Supporting xDS means we are now able to use MOSN as the network proxy for Istio.
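</p> <p>Concretely, an alternative proxy can be substituted at installation time roughly as follows (the image name, tag, and binary path here are illustrative assumptions, not official values):</p> <pre><code class='language-bash'>$ istioctl install \
    --set values.global.proxy.image="mosnio/proxyv2:v1.0.0" \
    --set meshConfig.defaultConfig.binaryPath="/usr/local/bin/mosn"
</code></pre> <p>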
This configuration is not supported by the Istio project; for help, please see <a href="#learn-more">Learn More</a> below.</p> <h2 id="background">Background</h2> <p>In the service mesh world, using Istio as the control plane has become mainstream. Because Istio was built on Envoy, it uses Envoy&rsquo;s data plane <a href="https://blog.envoyproxy.io/the-universal-data-plane-api-d15cec7a">APIs</a> (collectively known as the xDS APIs). These APIs have been standardized separately from Envoy, and so by implementing them in MOSN, we are able to drop in MOSN as a replacement for Envoy. Istio&rsquo;s integration of third-party data planes can be implemented in three steps, as follows.</p> <ul> <li>Implement the xDS protocols to provide the required data plane capabilities.</li> <li>Build <code>proxyv2</code> images using Istio&rsquo;s script and set the relevant <code>SIDECAR</code> and other parameters.</li> <li>Specify the data plane via the <code>istioctl</code> tool and set the proxy-related configuration.</li> </ul> <h2 id="architecture">Architecture</h2> <p>MOSN has a layered architecture with four layers, NET/IO, Protocol, Stream, and Proxy, as shown in the following figure.</p> <figure style="width:80%"> <div class="wrapper-with-intrinsic-ratio" style="padding-bottom:45.77056778679027%"> <a data-skipendnotes="true" href="/v1.9/blog/2020/mosn-proxy/mosn-arch.png" title="The architecture of MOSN"> <img class="element-to-stretch" src="/v1.9/blog/2020/mosn-proxy/mosn-arch.png" alt="The architecture of MOSN" /> </a> </div> <figcaption>The architecture of MOSN</figcaption> </figure> <ul> <li>NET/IO acts as the network layer, monitoring connections and incoming packets, and serving as a mount point for the listener filter and network filter.</li> <li>Protocol is the multi-protocol engine layer that examines packets and uses the corresponding protocol for decode/encode processing.</li> <li>Stream performs a secondary encapsulation of decoded packets into streams,
which acts as a mount point for the stream filter.</li> <li>Proxy acts as a forwarding framework for MOSN, and does proxy processing on the encapsulated streams.</li> </ul> <h2 id="why-use-mosn">Why use MOSN?</h2> <p>Before starting our service mesh transformation, we expected that, as the next generation of Ant Group&rsquo;s infrastructure, a service mesh would inevitably bring revolutionary changes as well as evolution costs. We had a very ambitious blueprint: to consolidate and polish the capabilities of our existing network and middleware stacks into a low-level platform for our next-generation architecture, one that would carry the responsibility for all service communication.</p> <p>This is a long-term project that takes many years to build, has to meet the needs of the next five or even ten years, and requires a team that spans business, SRE, middleware, and infrastructure departments. We therefore needed a network proxy forwarding plane with flexible extensibility, high performance, and room for long-term evolution. Nginx and Envoy have long-accumulated capabilities and active communities in the field of network proxies, and we borrowed from these and other excellent open source proxies. At the same time, we had to consider development efficiency, ease of extension, and the cost of cross-team cooperation, since the mesh transformation involves a large number of departments and engineers. Therefore, we developed MOSN, a new network proxy for cloud-native scenarios, in GoLang. We also thoroughly investigated and tested GoLang&rsquo;s performance in the early stages to confirm that it could meet the performance requirements of Ant Group&rsquo;s services.</p> <p>At the same time, we received a lot of feedback and requests from the end user community, and found that many users share the same needs.
So we combined the community&rsquo;s situation with our own, and developed MOSN with the goal of satisfying both the community and our users. We believe that competition in open source is mainly competition between standards and specifications, so we aim to make the most suitable implementation choices based on open standards.</p> <h2 id="what-is-the-difference-between-mosn-and-istio-s-default-proxy">What is the difference between MOSN and Istio&rsquo;s default proxy?</h2> <h3 id="differences-in-language-stacks">Differences in language stacks</h3> <p>MOSN is written in GoLang. GoLang offers strong guarantees for developer productivity and memory safety, and has an extensive library ecosystem in the cloud-native era. Its performance is acceptable and usable in the service mesh scenario. As a result, MOSN has a lower learning cost for companies and individuals already using languages such as GoLang and Java.</p> <h3 id="differentiation-of-core-competence">Differentiation of core capabilities</h3> <ul> <li>MOSN provides a multi-protocol framework, so users can easily plug in private protocols under a unified routing framework.</li> <li>MOSN offers a multi-process plug-in mechanism: plug-ins running as independent MOSN processes can easily be added through the plug-in framework, enabling management, bypass, and other functional extensions.</li> <li>MOSN supports Chinese national cryptographic (SM) algorithms at the transport layer, for compliance with Chinese encryption regulations.</li> </ul> <h3 id="what-are-the-drawbacks-of-mosn">What are the drawbacks of MOSN?</h3> <ul> <li>Because MOSN is written in GoLang, its performance does not match that of Istio&rsquo;s default proxy, but it is acceptable and usable in the service mesh scenario.</li> <li>Compared with Istio&rsquo;s default proxy, some features are not yet fully supported, such as WASM, HTTP3, and Lua.
However, these are all in the <a href="https://docs.google.com/document/d/12lgyCW-GmlErr_ihvAO7tMmRe87i70bv2xqe4h2LUz4/edit?usp=sharing">roadmap</a> of MOSN, and the goal is to be fully compatible with Istio.</li> </ul> <h2 id="mosn-with-istio">MOSN with Istio</h2> <p>The following describes how to set up MOSN as the data plane for Istio.</p> <h2 id="setup-istio">Setup Istio</h2> <p>You can download a zip file for your operating system from the <a href="https://github.com/istio/istio/releases/tag/1.5.2">Istio release</a> page. This file contains the installation files, examples, and the <code>istioctl</code> command line tool. To download Istio (this example uses Istio 1.5.2), use the following command.</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ export ISTIO_VERSION=1.5.2
$ curl -L https://istio.io/downloadIstio | sh -
</code></pre> <p>The downloaded Istio package is named <code>istio-1.5.2</code> and contains:</p> <ul> <li><code>install/kubernetes</code>: YAML installation files related to Kubernetes.</li> <li><code>examples/</code>: Example applications.</li> <li><code>bin/</code>: The <code>istioctl</code> client binary.</li> </ul> <p>Switch to the folder where Istio is located.</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ cd istio-$ISTIO_VERSION/
</code></pre> <p>Add the <code>istioctl</code> client path to <code>$PATH</code> with the following command.</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ export PATH=$PATH:$(pwd)/bin
</code></pre> <h2 id="setting-mosn-as-the-data-plane">Setting MOSN as the Data Plane</h2> <p>It is possible to flexibly customize the Istio control plane and data plane configuration parameters using the <code>istioctl</code> command line tool.
MOSN can be specified as the data plane for Istio using the following command.</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ istioctl manifest apply --set values.global.proxy.image=&#34;mosnio/proxyv2:1.5.2-mosn&#34; --set meshConfig.defaultConfig.binaryPath=&#34;/usr/local/bin/mosn&#34;
</code></pre> <p>Check that the Istio-related pods and services are deployed successfully.</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl get pods,svc -n istio-system
</code></pre> <p>If every pod&rsquo;s <code>STATUS</code> is <code>Running</code>, then Istio has been successfully installed using MOSN and you can now deploy the Bookinfo sample.</p> <h2 id="bookinfo-examples">Bookinfo Example</h2> <p>You can run the Bookinfo sample by following the <a href="https://katacoda.com/mosn/courses/istio/mosn-with-istio">MOSN with Istio tutorial</a>, where you can find instructions for using MOSN and Istio. You can install MOSN and get to the same point you would have reached using the default Istio instructions with Envoy.</p> <h2 id="moving-forward">Moving forward</h2> <p>Next, MOSN will not only stay compatible with the features of the latest version of Istio, but also evolve in the following aspects.</p> <ul> <li><em>As a microservices runtime</em>: MOSN-oriented programming makes services lighter, smaller, and faster.</li> <li><em>Programmable</em>: WASM support.</li> <li><em>More scenario support</em>: Cache Mesh, Message Mesh, Blockchain Mesh, etc.</li> </ul> <p>MOSN is an open source project that anyone in the community can use, improve, and enjoy. We&rsquo;d love you to join us!
<a href="https://github.com/mosn/community">Here</a> are a few ways to find out what&rsquo;s happening and get involved.</p> <h2 id="learn-more">Learn More</h2> <ul> <li><a href="https://mosn.io/en">MOSN website</a></li> <li><a href="https://mosn.io/en/docs/community/">MOSN community</a></li> <li><a href="https://katacoda.com/mosn">MOSN tutorials</a></li> </ul>Tue, 28 Jul 2020 00:00:00 +0000/v1.9/blog/2020/mosn-proxy/Wang Fakang (mosn.io)/v1.9/blog/2020/mosn-proxy/mosnsidecarproxyOpen and neutral: transferring our trademarks to the Open Usage Commons <p>Since <a href="/v1.9/news/releases/0.x/announcing-0.1/">day one</a>, the Istio project has believed in the importance of being contributor-run, open, transparent and available to all. In that spirit, Google is pleased to announce that it will be transferring ownership of the project’s trademarks to the new Open Usage Commons.</p> <p>Istio is an open source project, released under the Apache 2.0 license. That means people can copy, modify, distribute, make, use and sell the source code. The only freedom people don&rsquo;t have under the Apache 2.0 license is to use the name Istio, or its logo, in a way that would confuse consumers.</p> <p>As one of the founders of the project, Google is the current owner of the Istio trademark. While anyone who is using the software in accordance with the license can use the trademarks, the historic ownership has caused some confusion and uncertainty about who can use the name and how, and at times this confusion has been a barrier to community growth. 
So today, as part of Istio’s continued commitment to openness, Google is announcing that the Istio trademarks will be transferred to a new organization, the Open Usage Commons, to provide neutral, independent oversight of the marks.</p> <h2 id="a-neutral-home-for-istio-s-trademarks">A neutral home for Istio’s trademarks</h2> <p>The Open Usage Commons is a new organization that is focused solely on providing management and guidance of open source project trademarks in a way that is aligned with the <a href="https://opensource.org/osd">Open Source Definition</a>. For projects, particularly projects with robust ecosystems like Istio, ensuring that the trademark is available to anyone who is using the software in accordance with the license is important. The trademark allows maintainers to grow a community and use the name to do so. It also lets ecosystem partners create services on top of the project, and it enables developers to create tooling and integrations that reference the project. Maintainers, ecosystem partners, and developers alike must feel confident in their investments in Istio - for the long term. Google thinks having the Istio trademarks in the Open Usage Commons is the right way to give that clarity and provide that confidence.</p> <p>The Open Usage Commons will work with the Istio Steering Committee to generate trademark usage guidelines. There will be no immediate changes to the Istio usage guidelines, and if you are currently using the Istio marks in a way that follows the existing brand guide, you can continue to do so.</p> <p>You can learn more about <a href="https://openusage.org/faq">open source project IP and the Open Usage Commons</a> at <a href="https://openusage.org">openusage.org</a>.</p> <h2 id="a-continued-commitment-to-open">A continued commitment to open</h2> <p>The Open Usage Commons is focused on project trademarks; it does not address other facets of an open project, like rules around who gets decision-making votes. 
Similar to many projects in their early days, Istio’s committees started as small groups that stemmed from the founding companies. But Istio has grown and matured (last year Istio was <a href="https://octoverse.github.com/#fastest-growing-oss-projects-by-contributors">#4 on GitHub&rsquo;s list of fastest growing open source projects!</a>), and it is time for the next evolution of Istio’s governance.</p> <p>Recently, <a href="https://aspenmesh.io/helping-istio-sail/">we were proud to appoint Neeraj Poddar, Co-founder &amp; Chief Architect of Aspen Mesh</a>, to the Technical Oversight Committee — the group responsible for all technical decision-making in the project. Neeraj is a long-time contributor to the project and served as a Working Group lead. The <a href="https://github.com/istio/community/blob/master/TECH-OVERSIGHT-COMMITTEE.md#committee-members">TOC is now made up of</a> 7 members from 4 different companies - Tetrate, IBM, Google &amp; now Aspen Mesh.</p> <p>Our community is currently discussing how the Steering Committee, which oversees marketing and community activities, should be governed, to reflect the expanding community and ecosystem. If you have ideas for this new governance, visit the <a href="https://github.com/istio/community/pull/361">pull request on GitHub</a> where an active discussion is taking place.</p> <p>In the last 12 months, Istio has had commits from <a href="https://istio.teststats.cncf.io/d/5/companies-table?var-period_name=Last%20year&amp;var-metric=commits">more than 100 organizations</a> and currently has <a href="http://eng.istio.io/maintainers">70 maintainers from 14 different companies</a>. This trend is the kind of contributor diversity the project’s founders intended, and nurturing it remains a priority. 
Google is excited about what the future holds for Istio, and hopes you’ll be a part of it.</p>Wed, 08 Jul 2020 00:00:00 +0000/v1.9/blog/2020/open-usage/Sean Suchter (Google)/v1.9/blog/2020/open-usage/trademarkgovernancesteeringReworking our Addon Integrations <p>Starting with Istio 1.6, we are introducing a new method for integration with telemetry addons, such as Grafana, Prometheus, Zipkin, Jaeger, and Kiali.</p> <p>In previous releases, these addons were bundled as part of the Istio installation. This allowed users to quickly get started with Istio without any complicated configurations to install and integrate these addons. However, it came with some issues:</p> <ul> <li>The Istio addon installations were not as up to date or feature rich as upstream installation methods. Users were left missing out on some of the great features provided by these applications, such as: <ul> <li>Persistent storage</li> <li>Features like <code>Alertmanager</code> for Prometheus</li> <li>Advanced security settings</li> </ul></li> <li>Integration with existing deployments that were using these features was more challenging than it should be.</li> </ul> <h2 id="changes">Changes</h2> <p>In order to address these gaps, we have made a number of changes:</p> <ul> <li><p>Added a new <a href="/v1.9/docs/ops/integrations/">Integrations</a> documentation section to explain which applications Istio can integrate with, how to use them, and best practices.</p></li> <li><p>Reduced the amount of configuration required to set up telemetry addons</p> <ul> <li><p>Grafana dashboards are now <a href="/v1.9/docs/ops/integrations/grafana/#import-from-grafana-com">published to <code>grafana.com</code></a>.</p></li> <li><p>Prometheus can now scrape all Istio pods <a href="/v1.9/docs/ops/integrations/prometheus/#option-2-metrics-merging">using standard <code>prometheus.io</code> annotations</a>. 
This allows most Prometheus deployments to work with Istio without any special configuration.</p></li> </ul></li> <li><p>Removed the bundled addon installations from <code>istioctl</code> and the operator. Istio does not install components that are not delivered by the Istio project. As a result, Istio will stop shipping installation artifacts related to addons. However, Istio will guarantee version compatibility where necessary. It is the user&rsquo;s responsibility to install these components by using the official <a href="/v1.9/docs/ops/integrations/">Integrations</a> documentation and artifacts provided by the respective projects. For demos, users can deploy simple YAML files from the <a href="https://github.com/istio/istio/tree/release-1.9/samples/addons"><code>samples/addons/</code> directory</a>.</p></li> </ul> <p>We hope these changes allow users to make the most of these addons so as to fully experience what Istio can offer.</p> <h2 id="timeline">Timeline</h2> <ul> <li>Istio 1.6: The new demo deployments for telemetry addons are available under the <code>samples/addons/</code> directory.</li> <li>Istio 1.7: Upstream installation methods or the new samples deployment are the recommended installation methods. Installation by <code>istioctl</code> is deprecated.</li> <li>Istio 1.8: Installation of addons by <code>istioctl</code> is removed.</li> </ul>Thu, 04 Jun 2020 00:00:00 +0000/v1.9/blog/2020/addon-rework/John Howard (Google)/v1.9/blog/2020/addon-rework/telemetryaddonsintegrationsgrafanaprometheusIntroducing Workload Entries <h2 id="introducing-workload-entries-bridging-kubernetes-and-vms">Introducing Workload Entries: Bridging Kubernetes and VMs</h2> <p>Historically, Istio has provided a great experience for workloads that run on Kubernetes, but it has been less smooth for other types of workloads, such as Virtual Machines (VMs) and bare metal.
The gaps included, to name a few, the inability to declaratively specify the properties of a sidecar on a VM, the inability to properly respond to the lifecycle changes of the workload (e.g., booting, transitioning from not ready to ready, or health checks), and cumbersome DNS workarounds as the workloads are migrated into Kubernetes.</p> <p>Istio 1.6 has introduced a few changes in how you manage non-Kubernetes workloads, driven by a desire to make it easier to gain Istio&rsquo;s benefits for use cases beyond containers, such as running traditional databases on a platform outside of Kubernetes, or adopting Istio&rsquo;s features for existing applications without rewriting them.</p> <h3 id="background">Background</h3> <p>Prior to Istio 1.6, non-containerized workloads were configurable simply as an IP address in a <code>ServiceEntry</code>, which meant that they only existed as part of a service. Istio lacked a first-class abstraction for these non-containerized workloads, something similar to how Kubernetes treats Pods as the fundamental unit of compute - a named object that serves as the collection point for all things related to a workload - name, labels, security properties, lifecycle status events, etc. Enter <code>WorkloadEntry</code>.</p> <p>Consider the following <code>ServiceEntry</code> describing a service implemented by a few tens of VMs with IP addresses:</p> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' >apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: svc1
spec:
  hosts:
  - svc1.internal.com
  ports:
  - number: 80
    name: http
    protocol: HTTP
  resolution: STATIC
  endpoints:
  - address: 1.1.1.1
  - address: 2.2.2.2
  ....
</code></pre> <p>If you wanted to migrate this service into Kubernetes in an active-active manner - i.e. launch a bunch of Pods, send a portion of the traffic to the Pods over Istio mutual TLS (mTLS) and send the rest to the VMs without sidecars - how would you do it?
You would have needed to use a combination of a Kubernetes service, a virtual service, and a destination rule to achieve the behavior. Now, let&rsquo;s say you decided to add sidecars to these VMs, one by one, so that only the traffic to the VMs with sidecars uses Istio mTLS. If any other Service Entry happens to include the same VM in its addresses, things start to get very complicated and error prone.</p> <p>The primary source of these complications is that Istio lacked a first-class definition of a non-containerized workload, whose properties can be described independently of the service(s) it is part of.</p> <figure style="width:100%"> <div class="wrapper-with-intrinsic-ratio" style="padding-bottom:75%"> <a data-skipendnotes="true" href="/v1.9/blog/2020/workload-entry/workload-entry-first-example.svg" title="The Internal of Service Entries Pointing to Workload Entries"> <img class="element-to-stretch" src="/v1.9/blog/2020/workload-entry/workload-entry-first-example.svg" alt="Service Entries Pointing to Workload Entries" /> </a> </div> <figcaption>The Internal of Service Entries Pointing to Workload Entries</figcaption> </figure> <h3 id="workload-entry-a-non-kubernetes-endpoint">Workload Entry: A Non-Kubernetes Endpoint</h3> <p><code>WorkloadEntry</code> was created specifically to solve this problem. <code>WorkloadEntry</code> allows you to describe non-Pod endpoints that should still be part of the mesh, and treat them the same as a Pod.
From here everything becomes easier, like enabling <code>MUTUAL_TLS</code> between workloads, whether they are containerized or not.</p> <p>To create a <a href="/v1.9/docs/reference/config/networking/workload-entry/"><code>WorkloadEntry</code></a> and attach it to a <a href="/v1.9/docs/reference/config/networking/service-entry/"><code>ServiceEntry</code></a>, you can do something like this:</p> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' >---
apiVersion: networking.istio.io/v1alpha3
kind: WorkloadEntry
metadata:
  name: vm1
  namespace: ns1
spec:
  address: 1.1.1.1
  labels:
    app: foo
    instance-id: vm-78ad2
    class: vm
---
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: svc1
  namespace: ns1
spec:
  hosts:
  - svc1.internal.com
  ports:
  - number: 80
    name: http
    protocol: HTTP
  resolution: STATIC
  workloadSelector:
    labels:
      app: foo
</code></pre> <p>This creates a new <code>WorkloadEntry</code> with a set of labels and an address, and a <code>ServiceEntry</code> that uses a <code>WorkloadSelector</code> to select all endpoints with the desired labels, in this case including the <code>WorkloadEntry</code> created for the VM.</p> <figure style="width:75%"> <div class="wrapper-with-intrinsic-ratio" style="padding-bottom:75%"> <a data-skipendnotes="true" href="/v1.9/blog/2020/workload-entry/workload-entry-final.svg" title="The Internal of Service Entries Pointing to Workload Entries"> <img class="element-to-stretch" src="/v1.9/blog/2020/workload-entry/workload-entry-final.svg" alt="Service Entries Pointing to Workload Entries" /> </a> </div> <figcaption>The Internal of Service Entries Pointing to Workload Entries</figcaption> </figure> <p>Notice that the <code>ServiceEntry</code> can reference both Pods and <code>WorkloadEntries</code>, using the same selector.
VMs and Pods can now be treated identically by Istio, rather than being kept separate.</p> <p>If you were to migrate some of your workloads to Kubernetes, and chose to keep a substantial number of your VMs, the <code>WorkloadSelector</code> can select both Pods and VMs, and Istio will automatically load balance between them. The 1.6 changes also mean that <code>WorkloadSelector</code> syncs configurations between the Pods and VMs and removes the manual requirement to target both infrastructures with duplicate policies like mTLS and authorization. The Istio 1.6 release provides a great starting point for what will be possible for the future of Istio. The ability to describe what exists outside of the mesh the same way you do with a Pod leads to added benefits like an improved bootstrapping experience. However, these benefits are merely side effects. The core benefit is that VMs and Pods can now co-exist without any configuration needed to bridge the two together.</p>Thu, 21 May 2020 00:00:00 +0000/v1.9/blog/2020/workload-entry/Cynthia Coan (Tetrate), Shriram Rajagopalan (Tetrate), Tia Louden (Tetrate), John Howard (Google), Sven Mawson (Google)/v1.9/blog/2020/workload-entry/vmworkloadentrymigration1.6baremetalserviceentrydiscoverySafely Upgrade Istio using a Canary Control Plane Deployment <p>Canary deployments are a core feature of Istio. Users rely on Istio&rsquo;s traffic management features to safely control the rollout of new versions of their applications, while making use of Istio&rsquo;s rich telemetry to compare the performance of canaries. However, when it came to upgrading Istio, there was no easy way to canary the upgrade, and due to the in-place nature of the upgrade, any issues or changes found affected the entire mesh at once.</p> <p>Istio 1.6 will support a new upgrade model to safely canary-deploy new versions of Istio. In this new model, proxies will associate with a specific control plane that they use.
This allows a new version to be deployed to the cluster with less risk - no proxies connect to the new version until the user explicitly chooses to. Workloads can then be migrated gradually to the new control plane, while monitoring changes using Istio telemetry to investigate any issues, just as you would canary a new version of a workload with <code>VirtualService</code>. Each independent control plane is referred to as a &ldquo;revision&rdquo; and has an <code>istio.io/rev</code> label.</p> <h2 id="understanding-upgrades">Understanding upgrades</h2> <p>Upgrading Istio is a complicated process. During the transition period between two versions, which might take a long time for large clusters, there are version differences between proxies and the control plane. In the old model, the old and new control planes use the same Service, so traffic is randomly distributed between the two, offering no control to the user. However, in the new model, there is no cross-version communication. Here is how the upgrade changes:</p> <iframe src="https://docs.google.com/presentation/d/e/2PACX-1vR2R_Nd1XsjriBfwbqmcBc8KtdP4McDqNpp8S5v6woq28FnsW-kATBrKtLEG9k61DuBwTgFKLWyAxuK/embed?start=false&loop=true&delayms=3000" frameborder="0" width="960" height="569" allowfullscreen="true" mozallowfullscreen="true" webkitallowfullscreen="true"></iframe> <h2 id="configuring">Configuring</h2> <p>Control plane selection is done based on the sidecar injection webhook. Each control plane is configured to select objects with a matching <code>istio.io/rev</code> label on the namespace. Then, the upgrade process configures the pods to connect to a control plane specific to that revision. Unlike in the old model, this means that a given proxy connects to the same revision during its lifetime. This avoids subtle issues that might arise when a proxy switches which control plane it is connected to.</p> <p>The new <code>istio.io/rev</code> label will replace the <code>istio-injection=enabled</code> label when using revisions.
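</p> <p>As a sketch (the namespace name below is hypothetical), a namespace that opts in to a revision carries the revision label in its metadata:</p> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' >apiVersion: v1
kind: Namespace
metadata:
  name: test-ns # hypothetical namespace name
  labels:
    # Selects the control plane revision that injects sidecars here,
    # in place of the istio-injection=enabled label.
    istio.io/rev: canary
</code></pre> <p>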
For example, if we had a revision named <code>canary</code>, we would label the namespaces that should use this revision with <code>istio.io/rev=canary</code>. See the <a href="/v1.9/docs/setup/upgrade">upgrade guide</a> for more information.</p>Tue, 19 May 2020 00:00:00 +0000/v1.9/blog/2020/multiple-control-planes/John Howard (Google)/v1.9/blog/2020/multiple-control-planes/installupgraderevisioncontrol planeDirect encrypted traffic from IBM Cloud Kubernetes Service Ingress to Istio Ingress Gateway <p>In this blog post I show how to configure the <a href="https://cloud.ibm.com/docs/containers?topic=containers-ingress-about">Ingress Application Load Balancer (ALB)</a> on <a href="https://www.ibm.com/cloud/kubernetes-service/">IBM Cloud Kubernetes Service (IKS)</a> to direct traffic to the Istio ingress gateway, while securing the traffic between them using <span class="term" data-title="Mutual TLS Authentication" data-body="&lt;p&gt;Mutual TLS provides strong service-to-service authentication with built-in identity and credential management. &lt;a href=&#34;/docs/concepts/security/#mutual-tls-authentication&#34;&gt;Learn more about mutual TLS authentication&lt;/a&gt;.&lt;/p&gt; ">mutual TLS authentication</span>.</p> <p>When you use IKS without Istio, you may control your ingress traffic using the provided ALB. This ingress-traffic routing is configured using a Kubernetes <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/">Ingress</a> resource with <a href="https://cloud.ibm.com/docs/containers?topic=containers-ingress_annotation">ALB-specific annotations</a>. IKS provides a DNS domain name, a TLS certificate that matches the domain, and a private key for the certificate.
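</p> <p>To give a rough idea, such an Ingress resource might look like the following sketch; the host, secret, and service names are hypothetical placeholders, and the available ALB annotations are described in the documentation linked above rather than shown here:</p> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' >apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: myingress
  # ALB-specific behavior is configured through annotations here
spec:
  tls:
  - hosts:
    - mysvc.&lt;your ALB ingress domain&gt;
    secretName: &lt;your ALB secret&gt; # the IKS-provided secret
  rules:
  - host: mysvc.&lt;your ALB ingress domain&gt;
    http:
      paths:
      - path: /
        backend:
          serviceName: mysvc
          servicePort: 8080
</code></pre> <p>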
IKS stores the certificates and the private key in a <a href="https://kubernetes.io/docs/concepts/configuration/secret/">Kubernetes secret</a>.</p> <p>When you start using Istio in your IKS cluster, the recommended method to send traffic to your Istio-enabled workloads is the <a href="/v1.9/docs/tasks/traffic-management/ingress/ingress-control/">Istio Ingress Gateway</a>, instead of the <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/">Kubernetes Ingress</a>. One of the main reasons to use the Istio ingress gateway is the fact that the ALB provided by IKS will not be able to communicate directly with the services inside the mesh when you enable STRICT mutual TLS. During your transition to having only the Istio ingress gateway as your main entry point, you can continue to use the traditional Ingress for non-Istio services while using the Istio ingress gateway for services that are part of the mesh.</p> <p>IKS provides a convenient way for clients to access the Istio ingress gateway by letting you <a href="https://cloud.ibm.com/docs/containers?topic=containers-loadbalancer_hostname">register a new DNS subdomain</a> for the Istio gateway&rsquo;s IP with an IKS command. The domain is in the following <a href="https://cloud.ibm.com/docs/containers?topic=containers-loadbalancer_hostname#loadbalancer_hostname_format">format</a>: <code>&lt;cluster_name&gt;-&lt;globally_unique_account_HASH&gt;-0001.&lt;region&gt;.containers.appdomain.cloud</code>, for example <code>mycluster-a1b2cdef345678g9hi012j3kl4567890-0001.us-south.containers.appdomain.cloud</code>. In the same way as for the ALB domain, IKS provides a certificate and a private key, storing them in another Kubernetes secret.</p> <p>This blog describes how you can chain together the IKS Ingress ALB and the Istio ingress gateway to send traffic to your Istio-enabled workloads while continuing to use the ALB-specific features and the ALB subdomain name.
You configure the IKS Ingress ALB to direct traffic to the services inside an Istio service mesh through the Istio ingress gateway, while using mutual TLS authentication between the ALB and the gateway. For the mutual TLS authentication, you will configure the ALB and the Istio ingress gateway to use the certificates and keys provided by IKS for the ALB and NLB subdomains. Using certificates provided by IKS saves you the overhead of managing your own certificates for the connection between the ALB and the Istio ingress gateway.</p> <p>You will use the NLB subdomain certificate as the server certificate for the Istio ingress gateway, which is its intended use. The NLB subdomain certificate represents the identity of the server that serves a particular NLB subdomain, in this case, the ingress gateway.</p> <p>You will use the ALB subdomain certificate as the client certificate in mutual TLS authentication between the ALB and the Istio Ingress. When the ALB acts as a server, it presents the ALB certificate to the clients so the clients can authenticate the ALB. When the ALB acts as a client of the Istio ingress gateway, it presents the same certificate to the Istio ingress gateway, so the Istio ingress gateway can authenticate the ALB.</p> <div> <aside class="callout warning"> <div class="type"> <svg class="large-icon"><use xlink:href="/v1.9/img/icons.svg#callout-warning"/></svg> </div> <div class="content">Note that the instructions in this blog post only configure the ALB and the Istio ingress gateway to encrypt the traffic between them and to verify that they receive valid certificates issued by <a href="https://letsencrypt.org">Let&rsquo;s Encrypt</a>. In order to specify that only the ALB is allowed to talk to the Istio ingress gateway, an additional Istio security policy must be defined. In order to verify that the ALB indeed talks to the Istio ingress gateway, additional configuration must be added to the ALB.
The additional configuration of the Istio ingress gateway and the ALB is out of scope for this blog.</div> </aside> </div> <p>Traffic to the services without an Istio sidecar can continue to flow as before directly from the ALB.</p> <p>The diagram below exemplifies the described setting. It shows two services in the cluster, <code>service A</code> and <code>service B</code>. <code>service A</code> has an Istio sidecar injected and requires mutual TLS. <code>service B</code> has no Istio sidecar. <code>service B</code> can be accessed by clients through the ALB, which directly communicates with <code>service B</code>. <code>service A</code> can also be accessed by clients through the ALB, but in this case the traffic must pass through the Istio ingress gateway. Mutual TLS authentication between the ALB and the gateway is based on the certificates provided by IKS. The clients can also access the Istio ingress gateway directly. IKS registers different DNS domains for the ALB and for the ingress gateway.</p> <figure style="width:100%"> <div class="wrapper-with-intrinsic-ratio" style="padding-bottom:63.32720606343596%"> <a data-skipendnotes="true" href="/v1.9/blog/2020/alb-ingress-gateway-iks/alb-ingress-gateway.svg" title="A cluster with the ALB and the Istio ingress gateway"> <img class="element-to-stretch" src="/v1.9/blog/2020/alb-ingress-gateway-iks/alb-ingress-gateway.svg" alt="A cluster with the ALB and the Istio ingress gateway" /> </a> </div> <figcaption>A cluster with the ALB and the Istio ingress gateway</figcaption> </figure> <h2 id="initial-setting">Initial setting</h2> <ol> <li><p>Create the <code>httptools</code> namespace and enable Istio sidecar injection:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl create namespace httptools
$ kubectl label namespace httptools istio-injection=enabled
namespace/httptools created
namespace/httptools labeled
</code></pre></li> <li><p>Deploy the <code>httpbin</code> sample to
<code>httptools</code>:</p> <div><a data-skipendnotes='true' style='display:none' href='https://raw.githubusercontent.com/istio/istio/release-1.9/samples/httpbin/httpbin.yaml'>Zip</a><pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl apply -f @samples/httpbin/httpbin.yaml@ -n httptools
service/httpbin created
deployment.apps/httpbin created
</code></pre></div></li> </ol> <h2 id="create-secrets-for-the-alb-and-the-istio-ingress-gateway">Create secrets for the ALB and the Istio ingress gateway</h2> <p>IKS generates a TLS certificate and a private key and stores them as a secret in the default namespace when you register a DNS domain for an external IP by using the <code>ibmcloud ks nlb-dns-create</code> command. IKS also stores the ALB&rsquo;s certificate and private key as a secret in the default namespace. You need these credentials to establish the identities that the ALB and the Istio ingress gateway will present during the mutual TLS authentication between them.
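The extraction step later in this walkthrough pulls base64-encoded values out of the secret&rsquo;s YAML and decodes them. As a self-contained illustration of that decode (no cluster needed; the certificate value here is a placeholder, and <code>awk</code> is used instead of <code>cut</code> for whitespace tolerance):

```bash
# Stand-in for one line of `kubectl get secret ... -o yaml` output,
# with a placeholder value instead of a real PEM certificate.
secret_line="tls.crt: $(printf 'FAKE-CERT' | base64)"

# Pull out the base64 field and decode it, as done later for client.crt.
decoded=$(printf '%s\n' "$secret_line" | awk '/tls.crt:/ {print $2}' | base64 --decode)
echo "$decoded"
```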
You will configure the ALB and the Istio ingress gateway to exchange these certificates, to trust each other&rsquo;s certificates, and to use their private keys to encrypt and sign the traffic.</p> <ol> <li><p>Store the name of your cluster in the <code>CLUSTER_NAME</code> environment variable:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ export CLUSTER_NAME=&lt;your cluster name&gt;
</code></pre></li> <li><p>Store the domain name and the secret name of your ALB in the <code>ALB_INGRESS_DOMAIN</code> and <code>ALB_SECRET</code> environment variables:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ ibmcloud ks cluster get --cluster $CLUSTER_NAME | grep Ingress
Ingress Subdomain:      &lt;your ALB ingress domain&gt;
Ingress Secret:         &lt;your ALB secret&gt;
</code></pre> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ export ALB_INGRESS_DOMAIN=&lt;your ALB ingress domain&gt;
$ export ALB_SECRET=&lt;your ALB secret&gt;
</code></pre></li> <li><p>Store the external IP of your <code>istio-ingressgateway</code> service in an environment variable:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ export INGRESS_GATEWAY_IP=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath=&#39;{.status.loadBalancer.ingress[0].ip}&#39;)
$ echo INGRESS_GATEWAY_IP = $INGRESS_GATEWAY_IP
</code></pre></li> <li><p>Create a DNS domain and certificates for the IP of the Istio ingress gateway service:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ ibmcloud ks nlb-dns create classic --cluster $CLUSTER_NAME --ip $INGRESS_GATEWAY_IP --secret-namespace istio-system
Host name subdomain is created as &lt;some domain&gt;
</code></pre></li> <li><p>Store the domain name from the previous command in an environment variable:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ export INGRESS_GATEWAY_DOMAIN=&lt;the domain from the previous command&gt;
</code></pre></li> <li><p>List the registered domain names:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ ibmcloud ks nlb-dnss --cluster $CLUSTER_NAME
Retrieving host names, certificates, IPs, and health check monitors for network load balancer (NLB) pods in cluster &lt;your cluster&gt;...
OK
Hostname                          IP(s)                       Health Monitor   SSL Cert Status   SSL Cert Secret Name         Secret Namespace
&lt;your ingress gateway hostname&gt;   &lt;your ingress gateway IP&gt;   None             created           &lt;the matching secret name&gt;   istio-system
...
</code></pre> <p>Wait until the status of the certificate (the fourth field) of the new domain name becomes <code>enabled</code> (initially it is <code>pending</code>).</p></li> <li><p>Store the name of the secret of the new domain name:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ export INGRESS_GATEWAY_SECRET=&lt;the secret&#39;s name as shown in the SSL Cert Secret Name column&gt;
</code></pre></li> <li><p>Extract the certificate and the key from the secret provided for the ALB:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ mkdir alb_certs
$ kubectl get secret $ALB_SECRET --namespace=default -o yaml | grep &#39;tls.key:&#39; | cut -f2 -d: | base64 --decode &gt; alb_certs/client.key
$ kubectl get secret $ALB_SECRET --namespace=default -o yaml | grep &#39;tls.crt:&#39; | cut -f2 -d: | base64 --decode &gt; alb_certs/client.crt
$ ls -al alb_certs
-rw-r--r--  1 user  staff  3738 Sep 11 07:57 client.crt
-rw-r--r--  1 user  staff  1675 Sep 11 07:57 client.key
</code></pre></li> <li><p>Download the issuer certificate of the <a href="https://letsencrypt.org">Let&rsquo;s Encrypt</a> certificate, which is the issuer of the certificates provided by IKS.
You specify this certificate as the certificate of a certificate authority to trust, for both the ALB and the Istio ingress gateway.</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ curl https://letsencrypt.org/certs/trustid-x3-root.pem --output trusted.crt
</code></pre></li> <li><p>Create a Kubernetes secret to be used by the ALB to establish a mutual TLS connection.</p> <div> <aside class="callout warning"> <div class="type"> <svg class="large-icon"><use xlink:href="/v1.9/img/icons.svg#callout-warning"/></svg> </div> <div class="content">The certificates provided by IKS expire every 90 days and are automatically renewed by IKS 37 days before they expire. You will have to recreate the secrets by rerunning the instructions of this section every time the secrets provided by IKS are updated. You may want to use scripts or operators to automate this and keep the secrets in sync.</div> </aside> </div> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl create secret generic alb-certs -n istio-system --from-file=trusted.crt --from-file=alb_certs/client.crt --from-file=alb_certs/client.key
secret &#34;alb-certs&#34; created
</code></pre></li> <li><p>For mutual TLS, a separate Secret named <code>&lt;tls-cert-secret&gt;-cacert</code>, with a <code>cacert</code> key, is needed for the ingress gateway.</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl create -n istio-system secret generic $INGRESS_GATEWAY_SECRET-cacert --from-file=ca.crt=trusted.crt
secret/cluster_name-hash-XXXX-cacert created
</code></pre></li> </ol> <h2 id="configure-a-mutual-tls-ingress-gateway">Configure a mutual TLS ingress gateway</h2> <p>In this section you configure the Istio ingress gateway to perform mutual TLS between external clients and the gateway.
You use the certificates and the keys provided to you for the ingress gateway and the ALB.</p> <ol> <li><p>Define a <code>Gateway</code> to allow access on port 443 only, with mutual TLS:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl apply -n httptools -f - &lt;&lt;EOF
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: default-ingress-gateway
spec:
  selector:
    istio: ingressgateway # use istio default ingress gateway
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: MUTUAL
      credentialName: $INGRESS_GATEWAY_SECRET
    hosts:
    - &#34;$INGRESS_GATEWAY_DOMAIN&#34;
    - &#34;httpbin.$ALB_INGRESS_DOMAIN&#34;
EOF
</code></pre></li> <li><p>Configure routes for traffic entering via the <code>Gateway</code>:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl apply -n httptools -f - &lt;&lt;EOF
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: default-ingress
spec:
  hosts:
  - &#34;$INGRESS_GATEWAY_DOMAIN&#34;
  - &#34;httpbin.$ALB_INGRESS_DOMAIN&#34;
  gateways:
  - default-ingress-gateway
  http:
  - match:
    - uri:
        prefix: /status
    route:
    - destination:
        port:
          number: 8000
        host: httpbin.httptools.svc.cluster.local
EOF
</code></pre></li> <li><p>Send a request to <code>httpbin</code> with <em>curl</em>, passing as parameters the client certificate (the <code>--cert</code> option) and the private key (the <code>--key</code> option):</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ curl https://$INGRESS_GATEWAY_DOMAIN/status/418 --cert alb_certs/client.crt --key alb_certs/client.key

-=[ teapot ]=-

   _...._
 .&#39;  _ _ `.
| .&#34;` ^ `&#34;. _,
\_;`&#34;---&#34;`|//
  |       ;/
  \_     _/
    `&#34;&#34;&#34;`
</code></pre></li> <li><p>Remove the local directory and file that hold the ALB certificates and the trusted issuer certificate:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ rm -r alb_certs trusted.crt
</code></pre></li> </ol> <h2 id="configure-the-alb">Configure the ALB</h2> <p>You need to configure your Ingress resource to direct traffic to the Istio ingress gateway while using the certificate stored in the <code>alb-certs</code> secret. Normally, the ALB decrypts HTTPS requests before forwarding traffic to your apps. You can configure the ALB to re-encrypt the traffic before it is forwarded to the Istio ingress gateway by using the <code>ssl-services</code> annotation on the Ingress resource. This annotation also allows you to specify the certificate stored in the <code>alb-certs</code> secret, required for mutual TLS.</p> <ol> <li><p>Configure the <code>Ingress</code> resource for the ALB. You must create the <code>Ingress</code> resource in the <code>istio-system</code> namespace in order to forward the traffic to the Istio ingress gateway.</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl apply -f - &lt;&lt;EOF
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: alb-ingress
  namespace: istio-system
  annotations:
    ingress.bluemix.net/ssl-services: &#34;ssl-service=istio-ingressgateway ssl-secret=alb-certs proxy-ssl-name=$INGRESS_GATEWAY_DOMAIN&#34;
spec:
  tls:
  - hosts:
    - httpbin.$ALB_INGRESS_DOMAIN
    secretName: $ALB_SECRET
  rules:
  - host: httpbin.$ALB_INGRESS_DOMAIN
    http:
      paths:
      - path: /status
        backend:
          serviceName: istio-ingressgateway
          servicePort: 443
EOF
</code></pre></li> <li><p>Test the ALB ingress:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ curl https://httpbin.$ALB_INGRESS_DOMAIN/status/418

-=[ teapot ]=-

   _...._
 .&#39;  _ _ `.
| .&#34;` ^ `&#34;. _,
\_;`&#34;---&#34;`|//
  |       ;/
  \_     _/
    `&#34;&#34;&#34;`
</code></pre></li> </ol> <p>Congratulations! You configured the IKS Ingress ALB to send encrypted traffic to the Istio ingress gateway. You allocated a host name and certificate for your Istio ingress gateway and used that certificate as the server certificate for the Istio ingress gateway. As the client certificate of the ALB you used the certificate provided by IKS for the ALB. Once you had the certificates deployed as Kubernetes secrets, you directed the ingress traffic from the ALB to the Istio ingress gateway for some specific paths and used the certificates for mutual TLS authentication between the ALB and the Istio ingress gateway.</p> <h2 id="cleanup">Cleanup</h2> <ol> <li><p>Delete the <code>Gateway</code> configuration, the <code>VirtualService</code>, and the secrets:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl delete ingress alb-ingress -n istio-system
$ kubectl delete virtualservice default-ingress -n httptools
$ kubectl delete gateway default-ingress-gateway -n httptools
$ kubectl delete secrets alb-certs -n istio-system
$ rm -rf alb_certs trusted.crt
$ unset CLUSTER_NAME ALB_INGRESS_DOMAIN ALB_SECRET INGRESS_GATEWAY_DOMAIN INGRESS_GATEWAY_SECRET
</code></pre></li> <li><p>Shut down the <code>httpbin</code> service:</p> <div><a data-skipendnotes='true' style='display:none' href='https://raw.githubusercontent.com/istio/istio/release-1.9/samples/httpbin/httpbin.yaml'>Zip</a><pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl delete -f @samples/httpbin/httpbin.yaml@ -n httptools
</code></pre></div></li> <li><p>Delete the <code>httptools</code> namespace:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl delete namespace httptools
</code></pre></li> </ol>Fri, 15 May 2020 00:00:00 +0000/v1.9/blog/2020/alb-ingress-gateway-iks/Vadim Eisenberg
(IBM)/v1.9/blog/2020/alb-ingress-gateway-iks/traffic-managementingresssds-credentialsiksmutual-tlsProvision a certificate and key for an application without sidecars<div> <aside class="callout warning"> <div class="type"> <svg class="large-icon"><use xlink:href="/v1.9/img/icons.svg#callout-warning"/></svg> </div> <div class="content">The following information describes an experimental feature, which is intended for evaluation purposes only.</div> </aside> </div> <p>Istio sidecars obtain their certificates using the secret discovery service. A service in the service mesh may not need (or want) an Envoy sidecar to handle its traffic. In this case, the service will need to obtain a certificate itself if it wants to connect to other TLS or mutual TLS secured services.</p> <p>For a service that does not need a sidecar to manage its traffic, a sidecar can nevertheless be deployed solely to provision the private key and certificates, through the CSR flow with the CA, and to then share the certificate with the service through a file mounted in <code>tmpfs</code>. We have used Prometheus as our example application for provisioning a certificate using this mechanism.</p> <p>In the example application (i.e., Prometheus), a sidecar is added to the Prometheus deployment by setting the flag <code>.Values.prometheus.provisionPrometheusCert</code> to <code>true</code> (this flag is set to <code>true</code> by default in an Istio installation). This deployed sidecar will then request and share a certificate with Prometheus.</p> <p>The key and certificate provisioned for the example application are mounted in the directory <code>/etc/istio-certs/</code>.
We can list the key and certificate provisioned for the application by running the following command:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl exec -it `kubectl get pod -l app=prometheus -n istio-system -o jsonpath=&#39;{.items[0].metadata.name}&#39;` -c prometheus -n istio-system -- ls -la /etc/istio-certs/
</code></pre> <p>The output from the above command should include non-empty key and certificate files, similar to the following:</p> <pre><code class='language-plain' data-expandlinks='true' data-repo='istio' >-rwxr-xr-x 1 root root 2209 Feb 25 13:06 cert-chain.pem
-rwxr-xr-x 1 root root 1679 Feb 25 13:06 key.pem
-rwxr-xr-x 1 root root 1054 Feb 25 13:06 root-cert.pem
</code></pre> <p>If you want to use this mechanism to provision a certificate for your own application, take a look at our <a href="https://github.com/istio/istio/blob/release-1.9/manifests/charts/istio-telemetry/prometheus/templates/deployment.yaml">Prometheus example application</a> and simply follow the same pattern.</p>Wed, 25 Mar 2020 00:00:00 +0000/v1.9/blog/2020/proxy-cert/Lei Tang (Google)/v1.9/blog/2020/proxy-cert/certificatesidecarExtended and Improved WebAssembly Hub to Bring the Power of WebAssembly to Envoy and Istio <p><a href="https://www.solo.io/blog/an-extended-and-improved-webassembly-hub-to-helps-bring-the-power-of-webassembly-to-envoy-and-istio/"><em>Originally posted on the Solo.io blog</em></a></p> <p>As organizations adopt Envoy-based infrastructure like Istio to help solve challenges with microservices communication, they inevitably find themselves needing to customize some part of that infrastructure to fit within their organization&rsquo;s constraints.
<a href="https://webassembly.org/">WebAssembly (Wasm)</a> has emerged as a safe, secure, and dynamic environment for platform extension.</p> <p>In the recent <a href="/v1.9/blog/2020/wasm-announce/">announcement of Istio 1.5</a>, the Istio project lays the foundation for bringing WebAssembly to the popular Envoy proxy. <a href="https://solo.io">Solo.io</a> is collaborating with Google and the Istio community to simplify the overall experience of creating, sharing, and deploying WebAssembly extensions to Envoy and Istio. It wasn&rsquo;t that long ago that Google and others laid the foundation for containers, and Docker built a great user experience to make it consumable. Similarly, this effort makes Wasm consumable by building the best user experience for WebAssembly on Istio.</p> <p>Back in December 2019, Solo.io began an effort to provide a great developer experience for WebAssembly with the announcement of WebAssembly Hub. The WebAssembly Hub allows developers to very quickly spin up a new WebAssembly project in C++ (we&rsquo;re expanding this language choice, see below), build it using Bazel in Docker, and push it to an OCI-compliant registry. From there, operators had to pull the module and configure Envoy proxies themselves to load it from disk. Beta support in <a href="https://docs.solo.io/gloo/latest/">Gloo, an API Gateway built on Envoy</a>, allows you to declaratively and dynamically load the module, and the Solo.io team wanted to bring the same effortless and secure experience to other Envoy-based frameworks as well, like Istio.</p> <p>There has been a lot of interest in the innovation in this area, and the Solo.io team has been working hard to further the capabilities of WebAssembly Hub and the workflows it supports.
In conjunction with Istio 1.5, Solo.io is thrilled to announce new enhancements to WebAssembly Hub that evolve the viability of WebAssembly with Envoy for production, improve the developer experience, and streamline using Wasm with Envoy in Istio.</p> <h2 id="evolving-toward-production">Evolving toward production</h2> <p>The Envoy community is working hard to bring Wasm support into the upstream project (right now it lives on a working development fork), with Istio declaring Wasm support an Alpha feature. In <a href="https://www.solo.io/blog/announcing-gloo-1-0-a-production-ready-envoy-based-api-gateway/">Gloo 1.0, we also announced</a> early, non-production support for Wasm. What is Gloo? Gloo is a modern API Gateway and Ingress Controller (built on Envoy Proxy) that supports routing and securing incoming traffic to legacy monoliths, microservices / Kubernetes and serverless functions. Dev and ops teams are able to shape and control traffic patterns from external end users/clients to backend application services. Gloo is a Kubernetes and Istio native ingress gateway.</p> <p>Although it&rsquo;s still maturing in each individual project, there are things that we, as a community, can do to improve the foundation for production support.</p> <p>The first area is standardizing what a WebAssembly extension for Envoy looks like. Solo.io, Google, and the Istio community have defined an open specification for bundling and distributing WebAssembly modules as OCI images. This specification provides a powerful model for distributing any type of Wasm module including Envoy extensions.</p> <p>This is open to the community - <a href="https://github.com/solo-io/wasm-image-spec">Join in the effort</a></p> <p>The next area is improving the experience of deploying Wasm extensions into an Envoy-based framework running in production. In the Kubernetes ecosystem, it is considered a best practice in production to use declarative CRD-based configuration to manage cluster configuration. 
The new <a href="https://docs.solo.io/web-assembly-hub/latest/tutorial_code/wasme_operator/">WebAssembly Hub Operator</a> adds a single, declarative CRD which automatically deploys and configures Wasm filters to Envoy proxies running inside of a Kubernetes cluster. This operator enables GitOps workflows and cluster automation to manage Wasm filters without human intervention or imperative workflows. We will provide more information about the Operator in an upcoming blog post.</p> <p>Lastly, the interactions between developers of Wasm extensions and the teams that deploy them need some kind of role-based access, organization management, and facilities to share, discover, and consume these extensions. The WebAssembly Hub adds team management features like permissions, organizations, user management, sharing, and more.</p> <h2 id="improving-the-developer-experience">Improving the developer experience</h2> <p>As developers want to target more languages and runtimes, the experience must be kept as simple and as productive as possible. Multi-language support and runtime ABI (Application Binary Interface) targets should be handled automatically in tooling.</p> <p>One of the benefits of Wasm is the ability to write modules in many languages. The collaboration between Solo.io and Google provides out-of-the-box support for Envoy filters written in C++, Rust, and AssemblyScript. We will continue to add support for more languages.</p> <p>Wasm extensions use the Application Binary Interface (ABI) within the Envoy proxy to which they are deployed. The WebAssembly Hub provides strong ABI versioning guarantees between Envoy, Istio, and Gloo to prevent unpredictable behavior and bugs. All you have to worry about is writing your extension code.</p> <p>Lastly, like Docker, the WebAssembly Hub stores and distributes Wasm extensions as OCI images. This makes pushing, pulling, and running extensions as easy as Docker containers. 
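That Docker-like workflow is the heart of the developer experience. As a rough sketch only (the image name and account are hypothetical, and the exact subcommands and flags vary between <code>wasme</code> versions, so check the WebAssembly Hub docs):

```bash
# Hypothetical end-to-end flow with the wasme CLI:
wasme init ./add-header --language cpp                  # scaffold a new extension project
wasme build cpp ./add-header -t webassemblyhub.io/you/add-header:v0.1
wasme push webassemblyhub.io/you/add-header:v0.1        # publish it like a docker image
wasme pull webassemblyhub.io/you/add-header:v0.1        # fetch it anywhere else
```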
Wasm extension images are versioned and cryptographically secure, making it safe to run extensions locally the same way you would in production. This allows developers to build and push extensions, and operators to trust the source when they pull down and deploy images.</p> <h2 id="webassembly-hub-with-istio">WebAssembly Hub with Istio</h2> <p>The WebAssembly Hub now fully automates the process of deploying Wasm extensions to Istio (as well as other Envoy-based frameworks like <a href="https://docs.solo.io/gloo/latest/">Gloo API Gateway</a>) installed in Kubernetes. With this deployment feature, the WebAssembly Hub relieves the operator or end user of having to manually configure the Envoy proxy in their Istio service mesh to use their WebAssembly modules.</p> <p>Take a look at the following videos to see just how easy it is to get started with WebAssembly and Istio:</p> <ul> <li><a href="https://www.youtube.com/watch?v=-XPTGXEpUp8">Part 1</a></li> <li><a href="https://youtu.be/vuJKRnjh1b8">Part 2</a></li> </ul> <h2 id="get-started">Get Started</h2> <p>We hope that the WebAssembly Hub will become a meeting place for the community to share, discover, and distribute Wasm extensions. By providing a great user experience, we hope to make developing, installing, and running Wasm easier and more rewarding. Join us at the <a href="https://webassemblyhub.io">WebAssembly Hub</a>, share your extensions and <a href="https://slack.solo.io">ideas</a>, and join an <a href="https://solo.zoom.us/webinar/register/WN_i8MiDTIpRxqX-BjnXbj9Xw">upcoming webinar</a>.</p>Wed, 25 Mar 2020 00:00:00 +0000/v1.9/blog/2020/wasmhub-istio/Idit Levine (Solo.io)/v1.9/blog/2020/wasmhub-istio/wasmextensibilityalphaperformanceoperatorIntroducing istiod: simplifying the control plane <p>Microservices are a great pattern when they map services to disparate teams that deliver them, or when the value of independent rollout and the value of independent scale are greater than the cost of orchestration.
We regularly talk to customers and teams running Istio in the real world, and they told us that none of these were the case for the Istio control plane. So, in Istio 1.5, we&rsquo;ve changed how Istio is packaged, consolidating the control plane functionality into a single binary called <strong>istiod</strong>.</p> <h2 id="history-of-the-istio-control-plane">History of the Istio control plane</h2> <p>Istio implements a pattern that has been in use at both Google and IBM for many years, which later became known as &ldquo;service mesh&rdquo;. By pairing client and server processes with proxy servers, they act as an application-aware <em>data plane</em> that’s not simply moving packets around hosts, or pulses over wires.</p> <p>This pattern helps the world come to terms with <em>microservices</em>: fine-grained, loosely-coupled services connected via lightweight protocols. The common cross-platform and cross-language standards like HTTP and gRPC that replace proprietary transports, and the widespread presence of the needed libraries, empower different teams to write different parts of an overall architecture in whatever language makes the most sense. Furthermore, each service can scale independently as needed. A desire to implement security, observability and traffic control for such a network powers Istio’s popularity.</p> <p>Istio&rsquo;s <em>control plane</em> is, itself, a modern, cloud-native application. Thus, it was built from the start as a set of microservices. Individual Istio components like service discovery (Pilot), configuration (Galley), certificate generation (Citadel) and extensibility (Mixer) were all written and deployed as separate microservices. 
The need for these components to communicate securely and be observable provided opportunities for Istio to eat its own dogfood (or &ldquo;drink its own champagne&rdquo;, to use a more French version of the metaphor!).</p> <h2 id="the-cost-of-complexity">The cost of complexity</h2> <p>Good teams look back upon their choices and, with the benefit of hindsight, revisit them. Generally, when a team adopts microservices and their inherent complexity, they look for improvements in other areas to justify the tradeoffs. Let&rsquo;s look at the Istio control plane through that lens.</p> <ul> <li><p><strong>Microservices empower you to write in different languages.</strong> The data plane (the Envoy proxy) is written in C++, and this boundary benefits from a clean separation in terms of the xDS APIs. However, all of the Istio control plane components are written in Go. We were able to choose the appropriate language for the appropriate job: highly performant C++ for the proxy, and accessible, speedy development for everything else.</p></li> <li><p><strong>Microservices empower you to allow different teams to manage services individually.</strong> In the vast majority of Istio installations, all the components are installed and operated by a single team or individual. The componentization done within Istio is aligned along the boundaries of the development teams who build it. This would make sense if the Istio components were delivered as a managed service by the people who wrote them, but this is not the case! Making life simpler for the development teams had an outsized impact on the usability for the orders-of-magnitude more users.</p></li> <li><p><strong>Microservices empower you to decouple versions, and release different components at different times.</strong> All the components of the control plane have always been released at the same version, at the same time.
We have never tested or supported running different versions of (for example) Citadel and Pilot.</p></li> <li><p><strong>Microservices empower you to scale components independently.</strong> In Istio 1.5, control plane costs are dominated by a single feature: serving the Envoy xDS APIs that program the data plane. Every other feature has a marginal cost, which means there is very little value to having those features in separately-scalable microservices.</p></li> <li><p><strong>Microservices empower you to maintain security boundaries.</strong> Another good reason to separate an application into different microservices is if they have different security roles. Multiple Istio microservices like the sidecar injector, the Envoy bootstrap, Citadel, and Pilot hold nearly equivalent permissions to change the proxy configuration. Therefore, exploiting any of these services would cause near equivalent damage. When you deploy Istio, all the components are installed by default into the same Kubernetes namespace, offering limited security isolation.</p></li> </ul> <h2 id="the-benefit-of-consolidation-introducing-istiod">The benefit of consolidation: introducing istiod</h2> <p>Having established that many of the common benefits of microservices didn&rsquo;t apply to the Istio control plane, we decided to unify them into a single binary: <strong>istiod</strong> (the &rsquo;d&rsquo; is for <a href="https://en.wikipedia.org/wiki/Daemon_%28computing%29">daemon</a>).</p> <p>Let&rsquo;s look at the benefits of the new packaging:</p> <ul> <li><p><strong>Installation becomes easier.</strong> Fewer Kubernetes deployments and associated configurations are required, so the set of configuration options and flags for Istio is reduced significantly. 
In the simplest case, <strong><em>you can start the Istio control plane, with all features enabled, by starting a single Pod.</em></strong></p></li> <li><p><strong>Configuration becomes easier.</strong> Many of the configuration options that Istio has today are ways to orchestrate the control plane components, and so are no longer needed. You also no longer need to change cluster-wide <code>PodSecurityPolicy</code> to deploy Istio.</p></li> <li><p><strong>Using VMs becomes easier.</strong> To add a workload to a mesh, you now just need to install one agent and the generated certificates. That agent connects back to only a single service.</p></li> <li><p><strong>Maintenance becomes easier.</strong> Installing, upgrading, and removing Istio no longer require a complicated dance of version dependencies and startup orders. For example: To upgrade, you only need to start a new istiod version alongside your existing control plane, canary it, and then move all traffic over to it.</p></li> <li><p><strong>Scalability becomes easier.</strong> There is now only one component to scale.</p></li> <li><p><strong>Debugging becomes easier.</strong> Fewer components means less cross-component environmental debugging.</p></li> <li><p><strong>Startup time goes down.</strong> Components no longer need to wait for each other to start in a defined order.</p></li> <li><p><strong>Resource usage goes down and responsiveness goes up.</strong> Communication between components becomes guaranteed, and not subject to gRPC size limits. Caches can be shared safely, which decreases the resource footprint as a result.</p></li> </ul> <p>istiod unifies functionality that Pilot, Galley, Citadel and the sidecar injector previously performed, into a single binary.</p> <p>A separate component, the istio-agent, helps each sidecar connect to the mesh by securely passing configuration and secrets to the Envoy proxies. 
While the agent, strictly speaking, is still part of the control plane, it runs on a per-pod basis. We’ve further simplified by rolling per-node functionality that used to run as a DaemonSet into that per-pod agent.</p> <h2 id="extra-for-experts">Extra for experts</h2> <p>There will still be some cases where you might want to run Istio components independently, or replace certain components.</p> <p>Some users might want to use a Certificate Authority (CA) outside the mesh, and we have <a href="/v1.9/docs/tasks/security/cert-management/plugin-ca-cert/">documentation on how to do that</a>. If you do your certificate provisioning using a different tool, we can use that instead of the built-in CA.</p> <h2 id="moving-forward">Moving forward</h2> <p>At its heart, istiod is just a packaging and optimization change. It&rsquo;s built on the same code and API contracts as the separate components, and remains covered by our comprehensive test suite. This gives us confidence in making it the default in Istio 1.5. The service is now called <code>istiod</code> - you’ll see an <code>istio-pilot</code> service remain for existing proxies as the upgrade process completes.</p> <p>While the move to istiod may seem like a big change, and is a huge improvement for the people who <em>administer</em> and <em>maintain</em> the mesh, it won’t make the day-to-day life of <em>using</em> Istio any different. istiod is not changing any of the APIs used to configure your mesh, so your existing processes will all stay the same.</p> <p>Does this change imply that microservices are a mistake for <em>all</em> workloads and architectures? Of course not. They are a tool in a toolbelt, and they work best when they are reflected in your organizational reality. Instead, this change shows a willingness in the project to change based on user feedback, and a continued focus on simplification for all users.
Microservices have to be right sized, and we believe we have found the right size for Istio.</p>Thu, 19 Mar 2020 00:00:00 +0000/v1.9/blog/2020/istiod/Craig Box (Google)/v1.9/blog/2020/istiod/istiodcontrol planeoperatorDeclarative WebAssembly deployment for Istio <p>As outlined in the <a href="/v1.9/blog/2020/tradewinds-2020/">Istio 2020 trade winds blog</a> and more recently <a href="/v1.9/news/releases/1.5.x/announcing-1.5/">announced with Istio 1.5</a>, WebAssembly (Wasm) is now an (alpha) option for extending the functionality of the Istio service proxy (Envoy proxy). With Wasm, users can build support for new protocols, custom metrics, loggers, and other filters. Working closely with Google, we in the community (<a href="https://solo.io">Solo.io</a>) have focused on the user experience of building, socializing, and deploying Wasm extensions to Istio. We&rsquo;ve announced <a href="https://webassemblyhub.io">WebAssembly Hub</a> and <a href="https://docs.solo.io/web-assembly-hub/latest/installation/">associated tooling</a> to build a &ldquo;docker-like&rdquo; experience for working with Wasm.</p> <h2 id="background">Background</h2> <p>With the WebAssembly Hub tooling, we can use the <code>wasme</code> CLI to easily bootstrap a Wasm project for Envoy, push it to a repository, and then pull/deploy it to Istio. For example, to deploy a Wasm extension to Istio with <code>wasme</code> we can run the following:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ wasme deploy istio webassemblyhub.io/ceposta/demo-add-header:v0.2 \
  --id=myfilter \
  --namespace=bookinfo \
  --config &#39;tomorrow&#39;
</code></pre> <p>This will add the <code>demo-add-header</code> extension to all workloads running in the <code>bookinfo</code> namespace.
We can get more fine-grained control over which workloads get the extension by using the <code>--labels</code> parameter:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ wasme deploy istio webassemblyhub.io/ceposta/demo-add-header:v0.2 \
  --id=myfilter \
  --namespace=bookinfo \
  --config &#39;tomorrow&#39; \
  --labels app=details
</code></pre> <p>This is a much easier experience than manually creating <code>EnvoyFilter</code> resources and trying to get the Wasm module to each of the pods that are part of the workload you&rsquo;re trying to target. However, this is a very imperative approach to interacting with Istio. Just like users typically don&rsquo;t use <code>kubectl</code> directly in production and prefer a declarative, resource-based workflow, we want the same for making customizations to our Istio proxies.</p> <h2 id="a-declarative-approach">A declarative approach</h2> <p>The WebAssembly Hub tooling also includes <a href="https://docs.solo.io/web-assembly-hub/latest/tutorial_code/wasme_operator/">an operator for deploying Wasm extensions to Istio workloads</a>. The <a href="https://kubernetes.io/docs/concepts/extend-kubernetes/operator/">operator</a> allows users to define their WebAssembly extensions using a declarative format and leave it to the operator to rectify the deployment. For example, we use a <code>FilterDeployment</code> resource to define what image and workloads need the extension:</p> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' >apiVersion: wasme.io/v1
kind: FilterDeployment
metadata:
  name: bookinfo-custom-filter
  namespace: bookinfo
spec:
  deployment:
    istio:
      kind: Deployment
      labels:
        app: details
  filter:
    config: &#39;world&#39;
    image: webassemblyhub.io/ceposta/demo-add-header:v0.2
</code></pre> <p>We could then take this <code>FilterDeployment</code> document and version it with the rest of our Istio resources.
You may be wondering why we need this Custom Resource to configure Istio&rsquo;s service proxy to use a Wasm extension when Istio already has the <code>EnvoyFilter</code> resource.</p> <p>Let&rsquo;s take a look at exactly how all of this works under the covers.</p> <h2 id="how-it-works">How it works</h2> <p>Under the covers the operator is doing a few things that aid in deploying and configuring a Wasm extension into the Istio service proxy (Envoy Proxy).</p> <ul> <li>Set up a local cache of Wasm extensions</li> <li>Pull the desired Wasm extension into the local cache</li> <li>Mount the <code>wasm-cache</code> into the appropriate workloads</li> <li>Configure Envoy with the <code>EnvoyFilter</code> CRD to use the Wasm filter</li> </ul> <figure style="width:75%"> <div class="wrapper-with-intrinsic-ratio" style="padding-bottom:42.663891779396465%"> <a data-skipendnotes="true" href="/v1.9/blog/2020/deploy-wasm-declarative/how-it-works.png" title="Understanding how wasme operator works"> <img class="element-to-stretch" src="/v1.9/blog/2020/deploy-wasm-declarative/how-it-works.png" alt="How the wasme operator works" /> </a> </div> <figcaption>Understanding how wasme operator works</figcaption> </figure> <p>At the moment, the Wasm image needs to be published into a registry for the operator to correctly cache it. The cache pods run as a DaemonSet on each node so that the cache can be mounted into the Envoy container. This is being improved, as it&rsquo;s not the ideal mechanism. Ideally we wouldn&rsquo;t have to deal with mounting anything and could stream the module to the proxy directly over HTTP, so stay tuned for updates (this should land within the next few days). The mount is established by using the <code>sidecar.istio.io/userVolume</code> and <code>sidecar.istio.io/userVolumeMount</code> annotations.
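</p> <p>For illustration, the resulting pod-template annotations might look like the following sketch. This is a hypothetical fragment, not actual operator output: the volume name and cache path are assumptions, and the annotation values are JSON-encoded Kubernetes <code>Volume</code> and <code>VolumeMount</code> definitions.</p>

```yaml
# Hypothetical pod-template fragment: mounts the node-local wasme cache
# into the sidecar proxy. Volume name and path are illustrative assumptions.
spec:
  template:
    metadata:
      annotations:
        sidecar.istio.io/userVolume: '[{"name":"wasm-cache","hostPath":{"path":"/var/local/lib/wasme-cache"}}]'
        sidecar.istio.io/userVolumeMount: '[{"mountPath":"/var/local/lib/wasme-cache","name":"wasm-cache"}]'
```

<p>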
See <a href="/v1.9/docs/reference/config/annotations/">the docs on Istio Resource Annotations</a> for more about how that works.</p> <p>Once the Wasm module is cached correctly and mounted into the workload&rsquo;s service proxy, the operator then configures the <code>EnvoyFilter</code> resources.</p> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' >apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: details-v1-myfilter
  namespace: bookinfo
spec:
  configPatches:
  - applyTo: HTTP_FILTER
    match:
      context: SIDECAR_INBOUND
      listener:
        filterChain:
          filter:
            name: envoy.http_connection_manager
            subFilter:
              name: envoy.router
    patch:
      operation: INSERT_BEFORE
      value:
        config:
          config:
            configuration: tomorrow
            name: myfilter
            rootId: add_header
            vmConfig:
              code:
                local:
                  filename: /var/local/lib/wasme-cache/44bf95b368e78fafb663020b43cf099b23fc6032814653f2f47e4d20643e7267
              runtime: envoy.wasm.runtime.v8
              vmId: myfilter
        name: envoy.filters.http.wasm
  workloadSelector:
    labels:
      app: details
      version: v1
</code></pre> <p>You can see the <code>EnvoyFilter</code> resource configures the proxy to add the <code>envoy.filters.http.wasm</code> filter and load the Wasm module from the <code>wasme-cache</code>.</p> <p>Once the Wasm extension is loaded into the Istio service proxy, it will extend the capabilities of the proxy with whatever custom code you introduced.</p> <h2 id="next-steps">Next Steps</h2> <p>In this blog we explored options for installing Wasm extensions into Istio workloads. The easiest way to get started with WebAssembly on Istio is to use the <code>wasme</code> tool <a href="https://docs.solo.io/web-assembly-hub/latest/tutorial_code/getting_started/">to bootstrap a new Wasm project</a> with C++, AssemblyScript [or Rust coming really soon!].
For example, to set up a C++ Wasm module, you can run:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ wasme init ./filter --language cpp --platform istio --platform-version 1.5.x </code></pre> <p>If we didn&rsquo;t have the extra flags, <code>wasme init</code> would enter an interactive mode walking you through the correct values to choose.</p> <p>Take a look at the <a href="https://docs.solo.io/web-assembly-hub/latest/tutorial_code/getting_started/">WebAssembly Hub wasme tooling</a> to get started with Wasm on Istio.</p> <h2 id="learn-more">Learn more</h2> <ul> <li><p>Redefine Extensibility <a href="/v1.9/blog/2020/wasm-announce/">with WebAssembly on Envoy and Istio</a></p></li> <li><p>WebAssembly SF talk (video): <a href="https://www.youtube.com/watch?v=OIUPf8m7CGA">Extensions for network proxies</a>, by John Plevyak</p></li> <li><p><a href="https://www.solo.io/blog/an-extended-and-improved-webassembly-hub-to-helps-bring-the-power-of-webassembly-to-envoy-and-istio/">Solo blog</a></p></li> <li><p><a href="https://github.com/proxy-wasm/spec">Proxy-Wasm ABI specification</a></p></li> <li><p><a href="https://github.com/proxy-wasm/proxy-wasm-cpp-sdk/blob/master/docs/wasm_filter.md">Proxy-Wasm C++ SDK</a> and its <a href="https://github.com/proxy-wasm/proxy-wasm-cpp-sdk/blob/master/docs/wasm_filter.md">developer documentation</a></p></li> <li><p><a href="https://github.com/proxy-wasm/proxy-wasm-rust-sdk">Proxy-Wasm Rust SDK</a></p></li> <li><p><a href="https://github.com/solo-io/proxy-runtime">Proxy-Wasm AssemblyScript SDK</a></p></li> <li><p><a href="https://docs.solo.io/web-assembly-hub/latest/tutorial_code/">Tutorials</a></p></li> <li><p>Videos on the <a href="https://www.youtube.com/channel/UCuketWAG3WqYjjxtQ9Q8ApQ">Solo.io Youtube Channel</a></p></li> </ul>Mon, 16 Mar 2020 00:00:00 +0000/v1.9/blog/2020/deploy-wasm-declarative/Christian Posta (Solo.io)/v1.9/blog/2020/deploy-wasm-declarative/wasmextensibilityalphaoperatorRedefining 
extensibility in proxies - introducing WebAssembly to Envoy and Istio <p>Since adopting <a href="https://www.envoyproxy.io/">Envoy</a> in 2016, the Istio project has always wanted to provide a platform on top of which a rich set of extensions could be built, to meet the diverse needs of our users. There are many reasons to add capability to the data plane of a service mesh &mdash; to support newer protocols, integrate with proprietary security controls, or enhance observability with custom metrics, to name a few.</p> <p>Over the last year and a half our team here at Google has been working on adding dynamic extensibility to the Envoy proxy using <a href="https://webassembly.org/">WebAssembly</a>. We are delighted to share that work with the world today, as well as unveiling <a href="https://github.com/proxy-wasm/spec">WebAssembly (Wasm) for Proxies</a> (Proxy-Wasm): an ABI, which we intend to standardize; SDKs; and its first major implementation, the new, lower-latency <a href="/v1.9/docs/reference/config/proxy_extensions/wasm_telemetry/">Istio telemetry system</a>.</p> <p>We have also worked closely with the community to ensure that there is a great developer experience for users to get started quickly. The Google team has been working closely with the team at <a href="https://solo.io">Solo.io</a> who have built the <a href="https://webassemblyhub.io/">WebAssembly Hub,</a> a service for building, sharing, discovering and deploying Wasm extensions. 
With the WebAssembly Hub, Wasm extensions are as easy to manage, install and run as containers.</p> <p>This work is being released today in Alpha and there is still lots of <a href="#next-steps">work to be done</a>, but we are excited to get this into the hands of developers so they can start experimenting with the tremendous possibilities this opens up.</p> <h2 id="background">Background</h2> <p>The need for extensibility has been a founding tenet of both the Istio and Envoy projects, but the two projects took different approaches. The Istio project focused on enabling a generic out-of-process extension model called <a href="https://istio.io/v1.6/docs/reference/config/policy-and-telemetry/mixer-overview/">Mixer</a> with a lightweight developer experience, while Envoy focused on in-proxy <a href="https://www.envoyproxy.io/docs/envoy/latest/extending/extending">extensions</a>.</p> <p>Each approach has its share of pros and cons. The Istio model led to significant resource inefficiencies that impacted tail latencies and resource utilization. This model was also intrinsically limited - for example, it was never going to provide support for implementing <a href="https://blog.envoyproxy.io/how-to-write-envoy-filters-like-a-ninja-part-1-d166e5abec09">custom protocol handling</a>.</p> <p>The Envoy model imposed a monolithic build process, and required extensions to be written in C++, limiting the developer ecosystem. Rolling out a new extension to the fleet required pushing new binaries and rolling restarts, which can be difficult to coordinate, and risk downtime.
This also incentivized developers to upstream extensions into Envoy that were used by only a small percentage of deployments, just to piggyback on its release mechanisms.</p> <p>Over time some of the most performance-sensitive features of Istio have been upstreamed into Envoy - <a href="https://www.envoyproxy.io/docs/envoy/latest/intro/arch_overview/security/rbac_filter">policy checks on traffic</a>, and <a href="https://www.envoyproxy.io/docs/envoy/latest/intro/arch_overview/security/jwt_authn_filter">JWT authentication</a>, for example. Still, we have always wanted to converge on a single stack for extensibility that imposes fewer tradeoffs: something that decouples Envoy releases from its extension ecosystem, enables developers to work in their languages of choice, and enables Istio to reliably roll out new capability without downtime risk. Enter WebAssembly.</p> <h2 id="what-is-webassembly">What is WebAssembly?</h2> <p><a href="https://webassembly.org/">WebAssembly</a> (Wasm) is a portable bytecode format for executing code written in <a href="https://github.com/appcypher/awesome-wasm-langs">multiple languages</a> at near-native speed. Its initial <a href="https://webassembly.org/docs/high-level-goals/">design goals</a> align well with the challenges outlined above, and it has sizable industry support behind it. Wasm is the fourth standard language (following HTML, CSS and JavaScript) to run natively in all the major browsers, having become a <a href="https://www.w3.org/TR/wasm-core-1/">W3C Recommendation</a> in December 2019. That gives us confidence in making a strategic bet on it.</p> <p>While WebAssembly started life as a client-side technology, there are a number of advantages to using it on the server. The runtime is memory-safe and sandboxed for security. There is a large tooling ecosystem for compiling and debugging Wasm in its textual or binary format. 
The <a href="https://www.w3.org/">W3C</a> and <a href="https://bytecodealliance.org/">BytecodeAlliance</a> have become active hubs for other server-side efforts. For example, the Wasm community is standardizing a <a href="https://hacks.mozilla.org/2019/03/standardizing-wasi-a-webassembly-system-interface/">&ldquo;WebAssembly System Interface&rdquo; (WASI)</a> at the W3C, with a sample implementation, which provides an OS-like abstraction to Wasm &lsquo;programs&rsquo;.</p> <h2 id="bringing-webassembly-to-envoy">Bringing WebAssembly to Envoy</h2> <p><a href="https://github.com/envoyproxy/envoy/issues/4272">Over the past 18 months</a>, we have been working with the Envoy community to build Wasm extensibility into Envoy and contribute it upstream. We&rsquo;re pleased to announce it is available as Alpha in the Envoy build shipped with <a href="/v1.9/news/releases/1.5.x/announcing-1.5/">Istio 1.5</a>, with source in the <a href="https://github.com/envoyproxy/envoy-wasm/"><code>envoy-wasm</code></a> development fork and work ongoing to merge it into the main Envoy tree. 
The implementation uses the WebAssembly runtime built into Google&rsquo;s high performance <a href="https://v8.dev/">V8 engine</a>.</p> <p>In addition to the underlying runtime, we have also built:</p> <ul> <li><p>A generic Application Binary Interface (ABI) for embedding Wasm in proxies, which means compiled extensions will work across different versions of Envoy - or even other proxies, should they choose to implement the ABI</p></li> <li><p>SDKs for easy extension development in <a href="https://github.com/proxy-wasm/proxy-wasm-cpp-sdk">C++</a>, <a href="https://github.com/proxy-wasm/proxy-wasm-rust-sdk">Rust</a> and <a href="https://github.com/solo-io/proxy-runtime">AssemblyScript</a>, with more to follow</p></li> <li><p>Comprehensive <a href="https://docs.solo.io/web-assembly-hub/latest/tutorial_code/">samples and instructions</a> on how to deploy in Istio and standalone Envoy</p></li> <li><p>Abstractions to allow for other Wasm runtimes to be used, including a &lsquo;null&rsquo; runtime which simply compiles the extension natively into Envoy &mdash; very useful for testing and debugging</p></li> </ul> <p>Using Wasm for extending Envoy brings us several key benefits:</p> <ul> <li><p>Agility: Extensions can be delivered and reloaded at runtime using the Istio control plane. This enables a fast develop → test → release cycle for extensions without requiring Envoy rollouts.</p></li> <li><p>Stock releases: Once merging into the main tree is complete, Istio and others will be able to use stock releases of Envoy, instead of custom builds. This will also free the Envoy community to move some of the built-in extensions to this model, thereby reducing their supported footprint.</p></li> <li><p>Reliability and isolation: Extensions are deployed inside a sandbox with resource constraints, which means they can now crash, or leak memory, without bringing the whole Envoy process down. 
CPU and memory usage can also be constrained.</p></li> <li><p>Security: The sandbox has a clearly defined API for communicating with Envoy, so extensions only have access to, and can modify, a limited number of properties of a connection or request. Furthermore, because Envoy mediates this interaction, it can hide or sanitize sensitive information from the extension (e.g. &ldquo;Authorization&rdquo; and &ldquo;Cookie&rdquo; HTTP headers, or the client&rsquo;s IP address).</p></li> <li><p>Flexibility: <a href="https://github.com/appcypher/awesome-wasm-langs">over 30 programming languages can be compiled to WebAssembly</a>, allowing developers from all backgrounds - C++, Go, Rust, Java, TypeScript, etc. - to write Envoy extensions in their language of choice.</p></li> </ul> <p>&ldquo;I am extremely excited to see WASM support land in Envoy; this is the future of Envoy extensibility, full stop. Envoy&rsquo;s WASM support coupled with a community driven hub will unlock an incredible amount of innovation in the networking space across both service mesh and API gateway use cases. I can&rsquo;t wait to see what the community builds moving forward.&rdquo; &ndash; Matt Klein, Envoy creator.</p> <p>For technical details of the implementation, look out for an upcoming post to <a href="https://blog.envoyproxy.io/">the Envoy blog</a>.</p> <p>The <a href="https://github.com/proxy-wasm">Proxy-Wasm</a> interface between host environment and extensions is deliberately proxy agnostic. We&rsquo;ve built it into Envoy, but it was designed to be adopted by other proxy vendors. We want to see a world where you can take an extension written for Istio and Envoy and run it in other infrastructure; you&rsquo;ll hear more about that soon.</p> <h2 id="building-on-webassembly-in-istio">Building on WebAssembly in Istio</h2> <p>Istio moved several of its extensions into its build of Envoy as part of the 1.5 release, in order to significantly improve performance. 
While doing that work we have been testing to ensure those same extensions can compile and run as Proxy-Wasm modules with no variation in behavior. We&rsquo;re not quite ready to make this setup the default, given that we consider Wasm support to be Alpha; however, this has given us a lot of confidence in our general approach and in the host environment, ABI and SDKs that have been developed.</p> <p>We have also been careful to ensure that the Istio control plane and its <a href="/v1.9/docs/reference/config/networking/envoy-filter/">Envoy configuration APIs</a> are Wasm-ready. We have samples showing how several customizations that users commonly ask for, such as custom header decoding or programmatic routing, can be performed. As we move this support to Beta, you will see documentation showing best practices for using Wasm with Istio.</p> <p>Finally, we are working with the many vendors who have written <a href="https://istio.io/v1.6/docs/reference/config/policy-and-telemetry/adapters/">Mixer adapters</a>, to help them with a migration to Wasm &mdash; if that is the best path forward. Mixer will move to a community project in a future release, where it will remain available for legacy use cases.</p> <h2 id="developer-experience">Developer Experience</h2> <p>Powerful tooling is nothing without a great developer experience. Solo.io <a href="https://www.solo.io/blog/an-extended-and-improved-webassembly-hub-to-helps-bring-the-power-of-webassembly-to-envoy-and-istio/">recently announced</a> the release of <a href="https://webassemblyhub.io/">WebAssembly Hub</a>, a set of tools and a repository for building, deploying, sharing and discovering Envoy Proxy Wasm extensions for Envoy and Istio.</p> <p>The WebAssembly Hub fully automates many of the steps required for developing and deploying Wasm extensions. Using WebAssembly Hub tooling, users can easily compile their code - in any supported language - into Wasm extensions.
The extensions can then be uploaded to the Hub registry, and be deployed and undeployed to Istio with a single command.</p> <p>Behind the scenes the Hub takes care of much of the nitty-gritty, such as pulling in the correct toolchain, ABI version verification, permission control, and more. The workflow also eliminates toil with configuration changes across Istio service proxies by automating the deployment of your extensions. This tooling helps users and operators avoid unexpected behaviors due to misconfiguration or version mismatches.</p> <p>The WebAssembly Hub tools provide a powerful CLI as well as an elegant and easy-to-use graphical user interface. An important goal of the WebAssembly Hub is to simplify the experience around building Wasm modules and provide a place of collaboration for developers to share and discover useful extensions.</p> <p>Check out the <a href="https://docs.solo.io/web-assembly-hub/latest/tutorial_code/">getting started guide</a> to create your first Proxy-Wasm extension.</p> <h2 id="next-steps">Next Steps</h2> <p>In addition to working towards a beta release, we are committed to making sure that there is a durable community around Proxy-Wasm. The ABI needs to be finalized, and turning it into a standard will be done with broader feedback within the appropriate standards body. Completing upstreaming support into the Envoy mainline is still in progress. 
We are also seeking an appropriate community home for the tooling and the WebAssembly Hub.</p> <h2 id="learn-more">Learn more</h2> <ul> <li><p>WebAssembly SF talk (video): <a href="https://www.youtube.com/watch?v=OIUPf8m7CGA">Extensions for network proxies</a>, by John Plevyak</p></li> <li><p><a href="https://www.solo.io/blog/an-extended-and-improved-webassembly-hub-to-helps-bring-the-power-of-webassembly-to-envoy-and-istio/">Solo blog</a></p></li> <li><p><a href="https://github.com/proxy-wasm/spec">Proxy-Wasm ABI specification</a></p></li> <li><p><a href="https://github.com/proxy-wasm/proxy-wasm-cpp-sdk/blob/master/docs/wasm_filter.md">Proxy-Wasm C++ SDK</a> and its <a href="https://github.com/proxy-wasm/proxy-wasm-cpp-sdk/blob/master/docs/wasm_filter.md">developer documentation</a></p></li> <li><p><a href="https://github.com/proxy-wasm/proxy-wasm-rust-sdk">Proxy-Wasm Rust SDK</a></p></li> <li><p><a href="https://github.com/solo-io/proxy-runtime">Proxy-Wasm AssemblyScript SDK</a></p></li> <li><p><a href="https://docs.solo.io/web-assembly-hub/latest/tutorial_code/">Tutorials</a></p></li> <li><p>Videos on the <a href="https://www.youtube.com/channel/UCuketWAG3WqYjjxtQ9Q8ApQ">Solo.io Youtube Channel</a></p></li> </ul>Thu, 05 Mar 2020 00:00:00 +0000/v1.9/blog/2020/wasm-announce/Craig Box, Mandar Jog, John Plevyak, Louis Ryan, Piotr Sikora (Google), Yuval Kohavi, Scott Weiss (Solo.io)/v1.9/blog/2020/wasm-announce/wasmextensibilityalphaperformanceoperatorIstio in 2020 - Following the Trade Winds <p>Istio solves real problems that people encounter running microservices. Even <a href="https://kubernetespodcast.com/episode/016-descartes-labs/">very early pre-release versions</a> helped users debug the latency in their architecture, increase the reliability of services, and transparently secure traffic behind the firewall.</p> <p>Last year, the Istio project experienced major growth.
After a 9-month gestation before the 1.1 release in Q1, we set a goal of having a quarterly release cadence. We knew it was important to deliver value consistently and predictably. With three releases landing in the successive quarters as planned, we are proud to have reached that goal.</p> <p>During that time, we improved our build and test infrastructure, resulting in higher quality and easier release cycles. We doubled down on user experience, adding many commands to make operating and debugging the mesh easier. We also saw tremendous growth in the number of developers and companies contributing to the product - culminating in us being <a href="https://octoverse.github.com/#fastest-growing-oss-projects-by-contributors">#4 on GitHub&rsquo;s top ten list of fastest growing projects</a>!</p> <p>We have ambitious goals for Istio in 2020 and there are many major efforts underway, but at the same time we strongly believe that good infrastructure should be &ldquo;boring.&rdquo; Using Istio in production should be a seamless experience; performance should not be a concern, upgrades should be a non-event and complex tasks should be automated away. With our investment in a more powerful extensibility story we think the pace of innovation in the service mesh space can increase while Istio focuses on being gloriously dull. More details on our major efforts in 2020 below.</p> <h2 id="sleeker-smoother-and-faster">Sleeker, smoother and faster</h2> <p>Istio provided for extensibility from day one, implemented by a component called Mixer. Mixer is a platform that allows custom <a href="https://istio.io/v1.6/docs/reference/config/policy-and-telemetry/mixer-overview/#adapters">adapters</a> to act as an intermediary between the data plane and the backends you use for policy or telemetry. Mixer necessarily added overhead to requests because it required extensions to be out-of-process. 
So, we&rsquo;re moving to a model that enables extension directly in the proxies instead.</p> <p>Most of Mixer’s use cases for policy enforcement are already addressed with Istio&rsquo;s <a href="/v1.9/docs/concepts/security/#authentication-policies">authentication</a> and <a href="/v1.9/docs/concepts/security/#authorization">authorization</a> policies, which allow you to control workload-to-workload and end-user-to-workload authorization directly in the proxy. Common monitoring use cases have already moved into the proxy too - we have <a href="/v1.9/docs/reference/config/metrics">introduced in-proxy support</a> for sending telemetry to Prometheus and Stackdriver.</p> <p>Our benchmarking shows that the new telemetry model reduces our latency dramatically and gives us industry-leading performance, with 50% reductions in both latency and CPU consumption.</p> <h2 id="a-new-model-for-istio-extensibility">A new model for Istio extensibility</h2> <p>The model that replaces Mixer uses extensions in Envoy to provide even more capability. The Istio community is leading the implementation of a <a href="https://webassembly.org/">WebAssembly</a> (Wasm) runtime in Envoy, which lets us implement extensions that are modular, sandboxed, and developed in one of <a href="https://github.com/appcypher/awesome-wasm-langs">over 20 languages</a>. Extensions can be dynamically loaded and reloaded while the proxy continues serving traffic. Wasm extensions will also be able to extend the platform in ways that Mixer simply couldn’t. They can act as custom protocol handlers and transform payloads as they pass through Envoy — in short they can do the same things as modules built into Envoy.</p> <p>We&rsquo;re working with the Envoy community on ways to discover and distribute these extensions. We want to make WebAssembly extensions as easy to install and run as containers. Many of our partners have written Mixer adapters, and together we are getting them ported to Wasm. 
We are also developing guides and codelabs on how to write your own extensions for custom integrations.</p> <p>By changing the extension model, we were also able to remove dozens of CRDs. You no longer need a unique CRD for every piece of software you integrate with Istio.</p> <p>Installing Istio 1.5 with the &lsquo;preview&rsquo; configuration profile won&rsquo;t install Mixer. If you upgrade from a previous release, or install the &lsquo;default&rsquo; profile, we still keep Mixer around, to be safe. When using Prometheus or Stackdriver for metrics, we recommend you try out the new mode and see how much your performance improves.</p> <p>You can keep Mixer installed and enabled if you need it. Eventually Mixer will become a separately released add-on to Istio that is part of the <a href="https://github.com/istio-ecosystem/">istio-ecosystem</a>.</p> <h2 id="fewer-moving-parts">Fewer moving parts</h2> <p>We are also simplifying the deployment of the rest of the control plane. To that end, we combined several of the control plane components into a single component: Istiod. This binary includes the features of Pilot, Citadel, Galley, and the sidecar injector. This approach improves many aspects of installing and managing Istio &ndash; reducing installation and configuration complexity, maintenance effort, and issue diagnosis time while increasing responsiveness. Read more about Istiod in <a href="https://blog.christianposta.com/microservices/istio-as-an-example-of-when-not-to-do-microservices/">this post from Christian Posta</a>.</p> <p>We are shipping Istiod as the default for all profiles in 1.5.</p> <p>To reduce the per-node footprint, we are getting rid of the node-agent, used to distribute certificates, and moving its functionality to the istio-agent, which already runs in each Pod. 
For those of you who like pictures, we are moving from this &hellip;</p> <figure style="width:75%"> <div class="wrapper-with-intrinsic-ratio" style="padding-bottom:75%"> <a data-skipendnotes="true" href="/v1.9/blog/2020/tradewinds-2020/architecture-pre-istiod.svg" title="The Istio architecture today"> <img class="element-to-stretch" src="/v1.9/blog/2020/tradewinds-2020/architecture-pre-istiod.svg" alt="Istio architecture with Pilot, Mixer, Citadel, Sidecar injector" /> </a> </div> <figcaption>The Istio architecture today</figcaption> </figure> <p>to this&hellip;</p> <figure style="width:75%"> <div class="wrapper-with-intrinsic-ratio" style="padding-bottom:75%"> <a data-skipendnotes="true" href="/v1.9/blog/2020/tradewinds-2020/architecture-post-istiod.svg" title="The Istio architecture in 2020"> <img class="element-to-stretch" src="/v1.9/blog/2020/tradewinds-2020/architecture-post-istiod.svg" alt="Istio architecture with Istiod" /> </a> </div> <figcaption>The Istio architecture in 2020</figcaption> </figure> <p>In 2020, we will continue to invest in onboarding to achieve our goal of a &ldquo;zero config&rdquo; default that doesn’t require you to change any of your application&rsquo;s configuration to take advantage of most Istio features.</p> <h2 id="improved-lifecycle-management">Improved lifecycle management</h2> <p>To improve Istio’s life-cycle management, we moved to an <a href="https://kubernetes.io/docs/concepts/extend-kubernetes/operator/">operator</a>-based installation. We introduced the <strong><a href="/v1.9/docs/setup/install/istioctl/">IstioOperator CRD and two installation modes</a></strong>:</p> <ul> <li>Human-triggered: use <code>istioctl</code> to apply the settings to the cluster.</li> <li>Machine-triggered: use a controller that is continually watching for changes in that CRD and effecting those changes in real time.</li> </ul> <p>In 2020, upgrades will get easier too.
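</p> <p>To make the machine-triggered mode concrete, here is a minimal sketch of an <code>IstioOperator</code> resource that such a controller would watch. The resource name is arbitrary, and the <code>meshConfig</code> override shown is just an illustrative example layered on top of the chosen profile:</p>

```yaml
# Minimal illustrative IstioOperator resource (install.istio.io API).
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  name: example-istiocontrolplane
  namespace: istio-system
spec:
  # Start from one of the built-in configuration profiles...
  profile: default
  # ...and apply overrides on top of it.
  meshConfig:
    enableTracing: true
```

<p>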
We will add support for &ldquo;canarying&rdquo; new versions of the Istio control plane, which allows you to run a new version alongside the existing version and gradually switch your data plane over to use the new one.</p> <h2 id="secure-by-default">Secure By Default</h2> <p>Istio already provides the fundamentals for strong service security: reliable workload identity, robust access policies and comprehensive audit logging. We’re stabilizing APIs for these features; many Alpha APIs are moving to Beta in 1.5, and we expect them all to be v1 by the end of 2020. To learn more about the status of our APIs, see our <a href="/v1.9/about/feature-stages/#istio-features">features page</a>.</p> <p>Network traffic is also becoming more secure by default. After many users enabled it in preview, <a href="/v1.9/docs/tasks/security/authentication/authn-policy/#auto-mutual-tls">automated rollout of mutual TLS</a> is becoming the recommended practice in Istio 1.5.</p> <p>In addition, we will make Istio require fewer privileges and simplify its dependencies, which in turn makes it a more robust system. Historically, you had to mount certificates into Envoy using Kubernetes Secrets, which were mounted as files into each proxy. By leveraging the <a href="https://www.envoyproxy.io/docs/envoy/latest/configuration/security/secret">Secret Discovery Service</a>, we can distribute these certificates securely without concern of them being intercepted by other workloads on the machine.
This mode will become the default in 1.5.</p> <p>Getting rid of the node-agent not only simplifies the deployment, but also removes the requirement for a cluster-wide <code>PodSecurityPolicy</code>, further improving the security posture of your cluster.</p> <h2 id="other-features">Other features</h2> <p>Here&rsquo;s a snapshot of some more exciting things you can expect from Istio in 2020:</p> <ul> <li>Integration with more hosted Kubernetes environments - service meshes powered by Istio are currently available from 15 vendors, including Google, IBM, Red Hat, VMware, Alibaba and Huawei</li> <li>More investment in <code>istioctl</code> and its ability to help diagnose problems</li> <li>Better integration of VM-based workloads into meshes</li> <li>Continued work towards making multi-cluster and multi-network meshes easier to configure, maintain, and run</li> <li>Integration with more service discovery systems, including Functions-as-a-Service</li> <li>Implementation of the new <a href="https://kubernetes-sigs.github.io/service-apis/">Kubernetes service APIs</a>, which are currently in development</li> <li>An <a href="https://github.com/istio/enhancements/">enhancement repository</a>, to track feature development</li> <li>Making it easier to run Istio without needing Kubernetes!</li> </ul> <p>From the seas to <a href="https://www.youtube.com/watch?v=YjZ4AZ7hRM0">the skies</a>, we&rsquo;re excited to see where you take Istio next.</p>Tue, 03 Mar 2020 00:00:00 +0000/v1.9/blog/2020/tradewinds-2020/Istio Team/v1.9/blog/2020/tradewinds-2020/roadmapsecurityperformanceoperatorRemove cross-pod unix domain sockets<p>In Istio versions before 1.5, during secret discovery service (SDS) execution, the SDS client and the SDS server communicate through a cross-pod Unix domain socket (UDS), which needs to be protected by Kubernetes pod security policies.</p> <p>With Istio 1.5, Pilot Agent, Envoy, and Citadel Agent will be running in the same container (the architecture is shown in 
the following diagram). To defend against attackers eavesdropping on the cross-pod UDS between Envoy (SDS client) and Citadel Agent (SDS server), Istio 1.5 merges Pilot Agent and Citadel Agent into a single Istio Agent and makes the UDS between Envoy and Citadel Agent private to the Istio Agent container. The Istio Agent container is deployed as the sidecar of the application service container.</p> <figure style="width:70%"> <div class="wrapper-with-intrinsic-ratio" style="padding-bottom:80.39622513132818%"> <a data-skipendnotes="true" href="/v1.9/blog/2020/istio-agent/istio_agent.svg" title="The architecture of Istio Agent"> <img class="element-to-stretch" src="/v1.9/blog/2020/istio-agent/istio_agent.svg" alt="The architecture of Istio Agent" /> </a> </div> <figcaption>The architecture of Istio Agent</figcaption> </figure>Thu, 20 Feb 2020 00:00:00 +0000/v1.9/blog/2020/istio-agent/Lei Tang (Google)/v1.9/blog/2020/istio-agent/securitysecret discovery serviceunix domain socketMulticluster Istio configuration and service discovery using Admiral <p>At Intuit, we read the blog post <a href="/v1.9/blog/2019/isolated-clusters/">Multi-Mesh Deployments for Isolation and Boundary Protection</a> and immediately related to some of the problems mentioned. We realized that even though we wanted to configure a single multi-cluster mesh, instead of a federation of multiple meshes as described in the blog post, the same non-uniform naming issues also applied in our environment. This blog post explains how we solved these problems using <a href="https://github.com/istio-ecosystem/admiral">Admiral</a>, an open source project under <code>istio-ecosystem</code> in GitHub.</p> <h2 id="background">Background</h2> <p>Using Istio, we realized the configuration for multi-cluster was complex and challenging to maintain over time. 
As a result, we chose the model described in <a href="https://istio.io/v1.6/docs/setup/install/multicluster/gateways/#deploy-the-istio-control-plane-in-each-cluster">Multi-Cluster Istio Service Mesh with replicated control planes</a> for scalability and other operational reasons. Following this model, we had to solve these key requirements before widely adopting an Istio service mesh:</p> <ul> <li>Creation of service DNS entries decoupled from the namespace, as described in <a href="/v1.9/blog/2019/isolated-clusters/#features-of-multi-mesh-deployments">Features of multi-mesh deployments</a>.</li> <li>Service discovery across many clusters.</li> <li>Supporting active-active &amp; HA/DR deployments. We also had to support these crucial resiliency patterns with services being deployed in globally unique namespaces across discrete clusters.</li> </ul> <p>We have over 160 Kubernetes clusters with a globally unique namespace name across all clusters. In this configuration, we can have the same service workload deployed in different regions running in namespaces with different names. As a result, following the routing strategy mentioned in <a href="/v1.9/blog/2019/multicluster-version-routing">Multicluster version routing</a>, the example name <code>foo.namespace.global</code> wouldn&rsquo;t work across clusters. We needed a globally unique and discoverable service DNS that resolves service instances in multiple clusters, each instance running/addressable with its own unique Kubernetes FQDN. For example, <code>foo.global</code> should resolve to both <code>foo.uswest2.svc.cluster.local</code> &amp; <code>foo.useast2.svc.cluster.local</code> if <code>foo</code> is running in two Kubernetes clusters with different names. Also, our services need additional DNS names with different resolution and global routing properties. 
For example, <code>foo.global</code> should resolve locally first, then route to a remote instance using topology routing, while <code>foo-west.global</code> and <code>foo-east.global</code> (names used for testing) should always resolve to the respective regions.</p> <h2 id="contextual-configuration">Contextual Configuration</h2> <p>After further investigation, it was apparent that configuration needed to be contextual: each cluster needs a configuration specifically tailored for its view of the world.</p> <p>For example, we have a payments service consumed by orders and reports. The payments service has an HA/DR deployment across <code>us-east</code> (cluster 3) and <code>us-west</code> (cluster 2). The payments service is deployed in namespaces with different names in each region. The orders service is deployed in a different cluster from payments in <code>us-west</code> (cluster 1). The reports service is deployed in the same cluster as payments in <code>us-west</code> (cluster 2).</p> <figure style="width:75%"> <div class="wrapper-with-intrinsic-ratio" style="padding-bottom:60.81558869142712%"> <a data-skipendnotes="true" href="/v1.9/blog/2020/multi-cluster-mesh-automation/Istio_mesh_example.svg" title="Cross cluster workload communication with Istio"> <img class="element-to-stretch" src="/v1.9/blog/2020/multi-cluster-mesh-automation/Istio_mesh_example.svg" alt="Example of calling a workload in Istio multicluster" /> </a> </div> <figcaption>Cross cluster workload communication with Istio</figcaption> </figure> <p>The Istio <code>ServiceEntry</code> YAML for the payments service in Cluster 1 and Cluster 2 below illustrates the contextual configuration that other services need in order to use the payments service:</p> <p>Cluster 1 Service Entry</p> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' >apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: payments.global-se
spec:
  addresses:
  - 240.0.0.10
  endpoints:
  - address: ef394f...us-east-2.elb.amazonaws.com
    locality: us-east-2
    ports:
      http: 15443
  - address: ad38bc...us-west-2.elb.amazonaws.com
    locality: us-west-2
    ports:
      http: 15443
  hosts:
  - payments.global
  location: MESH_INTERNAL
  ports:
  - name: http
    number: 80
    protocol: http
  resolution: DNS
</code></pre> <p>Cluster 2 Service Entry</p> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' >apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: payments.global-se
spec:
  addresses:
  - 240.0.0.10
  endpoints:
  - address: ef39xf...us-east-2.elb.amazonaws.com
    locality: us-east-2
    ports:
      http: 15443
  - address: payments.default.svc.cluster.local
    locality: us-west-2
    ports:
      http: 80
  hosts:
  - payments.global
  location: MESH_INTERNAL
  ports:
  - name: http
    number: 80
    protocol: http
  resolution: DNS
</code></pre> <p>The payments <code>ServiceEntry</code> (Istio CRD) from the point of view of the reports service in Cluster 2 would set the locality <code>us-west</code> pointing to the local Kubernetes FQDN and locality <code>us-east</code> pointing to the <code>istio-ingressgateway</code> (load balancer) for Cluster 3. The payments <code>ServiceEntry</code> from the point of view of the orders service in Cluster 1 would set the locality <code>us-west</code> pointing to the Cluster 2 <code>istio-ingressgateway</code> and locality <code>us-east</code> pointing to the <code>istio-ingressgateway</code> for Cluster 3.</p> <p>But wait, there&rsquo;s even more complexity: What if the payments service wants to move traffic to the <code>us-east</code> region for a planned maintenance in <code>us-west</code>? This would require the payments service to change the Istio configuration in all of its clients&rsquo; clusters.
This would be nearly impossible to do without automation.</p> <h2 id="admiral-to-the-rescue-admiral-is-that-automation">Admiral to the Rescue: Admiral is that Automation</h2> <p><em>Admiral is a controller of Istio control planes.</em></p> <figure style="width:75%"> <div class="wrapper-with-intrinsic-ratio" style="padding-bottom:69.43866943866944%"> <a data-skipendnotes="true" href="/v1.9/blog/2020/multi-cluster-mesh-automation/Istio_mesh_example_with_admiral.svg" title="Cross cluster workload communication with Istio and Admiral"> <img class="element-to-stretch" src="/v1.9/blog/2020/multi-cluster-mesh-automation/Istio_mesh_example_with_admiral.svg" alt="Example of calling a workload in Istio multicluster with Admiral" /> </a> </div> <figcaption>Cross cluster workload communication with Istio and Admiral</figcaption> </figure> <p>Admiral provides automatic configuration for an Istio mesh spanning multiple clusters to work as a single mesh, based on a unique service identifier that associates workloads running on multiple clusters to a service. It also provides automatic provisioning and syncing of Istio configuration across clusters. This removes the burden on developers and mesh operators, which helps scale beyond a few clusters.</p> <h2 id="admiral-crds">Admiral CRDs</h2> <h3 id="global-traffic-routing">Global Traffic Routing</h3> <p>With Admiral’s global traffic policy CRD, the payments service can update regional traffic weights and Admiral updates the Istio configuration in all clusters that consume the payments service.</p> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' >apiVersion: admiral.io/v1alpha1
kind: GlobalTrafficPolicy
metadata:
  name: payments-gtp
spec:
  selector:
    identity: payments
  policy:
  - dns: default.payments.global
    lbType: 1
    target:
    - region: us-west-2/*
      weight: 10
    - region: us-east-2/*
      weight: 90
</code></pre> <p>In the example above, 90% of the payments service traffic is routed to the <code>us-east</code> region.
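Such a weight split builds on Istio's locality-weighted load balancing. Purely as an illustration (this is not Admiral's actual generated output; the resource name and host are assumptions), a 10/90 split corresponds to a `DestinationRule` of roughly this shape:

```yaml
# Illustrative sketch of the locality-weighted DestinationRule that a
# 10/90 regional split maps onto in Istio's API. Name and host are
# hypothetical; Admiral's real output may differ.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: payments-global-dr
spec:
  host: payments.global
  trafficPolicy:
    loadBalancer:
      localityLbSetting:
        distribute:
        - from: "us-west-2/*"
          to:
            "us-west-2/*": 10
            "us-east-2/*": 90
```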
This Global Traffic Configuration is automatically converted into Istio configuration and contextually mapped into Kubernetes clusters to enable multi-cluster global routing for the payments service for its clients within the Mesh.</p> <p>This Global Traffic Routing feature relies on Istio&rsquo;s locality load-balancing per service available in Istio 1.5 or later.</p> <h3 id="dependency">Dependency</h3> <p>The Admiral <code>Dependency</code> CRD allows us to specify a service&rsquo;s dependencies based on a service identifier. This optimizes the delivery of Admiral-generated configuration only to the required clusters where the dependent clients of a service are running (instead of writing it to all clusters). Admiral also configures and/or updates the Istio <code>Sidecar</code> CRD in the client&rsquo;s workload namespace to limit the Istio configuration to only its dependencies. We use service-to-service authorization information recorded elsewhere to generate these <code>dependency</code> records for Admiral to use.</p> <p>An example <code>dependency</code> for the <code>orders</code> service:</p> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' >apiVersion: admiral.io/v1alpha1
kind: Dependency
metadata:
  name: dependency
  namespace: admiral
spec:
  source: orders
  identityLabel: identity
  destinations:
  - payments
</code></pre> <p><code>Dependency</code> is optional, and a missing dependency for a service will result in the Istio configuration for that service being pushed to all clusters.</p> <h2 id="summary">Summary</h2> <p>Admiral provides new Global Traffic Routing and unique service naming functionality to address some challenges posed by the Istio model described in <a href="https://istio.io/v1.6/docs/setup/install/multicluster/gateways/#deploy-the-istio-control-plane-in-each-cluster">multi-cluster deployment with replicated control planes</a>.
It removes the need for manual configuration synchronization between clusters and generates contextual configuration for each cluster. This makes it possible to operate a Service Mesh composed of many Kubernetes clusters.</p> <p>We think the Istio/Service Mesh community would benefit from this approach, so we <a href="https://github.com/istio-ecosystem/admiral">open sourced Admiral</a> and would love your feedback and support!</p>Sun, 05 Jan 2020 00:00:00 +0000/v1.9/blog/2020/multi-cluster-mesh-automation/Anil Attuluri (Intuit), Jason Webb (Intuit)/v1.9/blog/2020/multi-cluster-mesh-automation/traffic-managementautomationconfigurationmulticlustermulti-meshgatewayfederatedglobalidentiferSecure Webhook Management<p>Istio has two webhooks: Galley and the sidecar injector. Galley validates Kubernetes resources and the sidecar injector injects sidecar containers into pods.</p> <p>By default, Galley and the sidecar injector manage their own webhook configurations. This can pose a security risk if they are compromised, for example, through buffer overflow attacks.
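To see why this matters, here is the rough shape of a Kubernetes `ValidatingWebhookConfiguration`; the names, path and rules below are illustrative, not Istio's actual manifest. The `clientConfig` block is the part an attacker would redirect:

```yaml
# Illustrative sketch of a validating webhook configuration. All names,
# paths and rules are hypothetical. An attacker who can write this
# resource could point clientConfig.service at a service they control.
apiVersion: admissionregistration.k8s.io/v1beta1
kind: ValidatingWebhookConfiguration
metadata:
  name: istio-galley
webhooks:
- name: validation.istio.io
  clientConfig:
    service:
      name: istio-galley       # the webhook server receiving admission requests
      namespace: istio-system
      path: "/admit"           # hypothetical path
    caBundle: ""               # CA bundle used to verify the webhook server
  rules:
  - apiGroups: ["config.istio.io"]
    apiVersions: ["*"]
    operations: ["CREATE", "UPDATE"]
    resources: ["*"]
```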
Configuring a webhook is a highly privileged operation, as a webhook may monitor and mutate all Kubernetes resources.</p> <p>In the following example, the attacker compromises Galley and modifies the webhook configuration of Galley to eavesdrop on all Kubernetes secrets (the <code>clientConfig</code> is modified by the attacker to direct the <code>secrets</code> resources to a service owned by the attacker).</p> <figure style="width:70%"> <div class="wrapper-with-intrinsic-ratio" style="padding-bottom:44.37367303609342%"> <a data-skipendnotes="true" href="/v1.9/blog/2019/webhook/example_attack.png" title="An example attack"> <img class="element-to-stretch" src="/v1.9/blog/2019/webhook/example_attack.png" alt="An example attack" /> </a> </div> <figcaption>An example attack</figcaption> </figure> <p>To protect against this kind of attack, Istio 1.4 introduces a new feature to securely manage webhooks using <code>istioctl</code>:</p> <ol> <li><p><code>istioctl</code>, instead of Galley and the sidecar injector, manages the webhook configurations. Galley and the sidecar injector are de-privileged, so even if they are compromised, they will not be able to alter the webhook configurations.</p></li> <li><p>Before configuring a webhook, <code>istioctl</code> will verify that the webhook server is up and that the certificate chain used by the webhook server is valid.
This prevents the errors that can occur when a webhook is configured before its server is ready, or when the server has an invalid certificate.</p></li> </ol> <p>To try this new feature, refer to the <a href="https://archive.istio.io/v1.4/docs/tasks/security/webhook">Istio webhook management task</a>.</p>Thu, 14 Nov 2019 00:00:00 +0000/v1.9/blog/2019/webhook/Lei Tang (Google)/v1.9/blog/2019/webhook/securitykuberneteswebhookIntroducing the Istio v1beta1 Authorization Policy <p>Istio 1.4 introduces the <a href="/v1.9/docs/reference/config/security/authorization-policy/"><code>v1beta1</code> authorization policy</a>, which is a major update to the previous <code>v1alpha1</code> role-based access control (RBAC) policy. The new policy provides these improvements:</p> <ul> <li>Aligns with the Istio configuration model.</li> <li>Improves the user experience by simplifying the API.</li> <li>Supports more use cases (e.g. Ingress/Egress gateway support) without added complexity.</li> </ul> <p>The <code>v1beta1</code> policy is not backward compatible and requires a one-time conversion. A tool is provided to automate this process. The previous configuration resources <code>ClusterRbacConfig</code>, <code>ServiceRole</code>, and <code>ServiceRoleBinding</code> will not be supported from Istio 1.6 onwards.</p> <p>This post describes the new <code>v1beta1</code> authorization policy model, its design goals and the migration from <code>v1alpha1</code> RBAC policies.
See the <a href="/v1.9/docs/concepts/security/#authorization">authorization concept page</a> for an in-depth explanation of the <code>v1beta1</code> authorization policy.</p> <p>We welcome your feedback about the <code>v1beta1</code> authorization policy at <a href="https://discuss.istio.io/c/security">discuss.istio.io</a>.</p> <h2 id="background">Background</h2> <p>Until now, Istio has provided RBAC policies to enforce access control on <span class="term" data-title="Service" data-body="&lt;p&gt;A delineated group of related behaviors within a &lt;a href=&#34;/docs/reference/glossary/#service-mesh&#34;&gt;service mesh&lt;/a&gt;. Services are identified using a &lt;a href=&#34;/docs/reference/glossary/#service-name&#34;&gt;service name&lt;/a&gt;, and Istio policies such as load balancing and routing are applied using these names. A service is typically materialized by one or more &lt;a href=&#34;/docs/reference/glossary/#service-endpoint&#34;&gt;service endpoints&lt;/a&gt;, and may consist of multiple &lt;a href=&#34;/docs/reference/glossary/#service-version&#34;&gt;service versions&lt;/a&gt;.&lt;/p&gt; ">services</span> using three configuration resources: <code>ClusterRbacConfig</code>, <code>ServiceRole</code> and <code>ServiceRoleBinding</code>. With this API, users have been able to enforce access control at mesh-level, namespace-level and service-level.
Like other RBAC policies, Istio RBAC uses the same concept of role and binding for granting permissions to identities.</p> <p>Although Istio RBAC has been working reliably, we&rsquo;ve found that many improvements were possible.</p> <p>For example, users have mistakenly assumed that access control enforcement happens at service-level because <code>ServiceRole</code> uses service to specify where to apply the policy. However, the policy is actually applied on <span class="term" data-title="Workload" data-body="&lt;p&gt;A binary deployed by &lt;a href=&#34;/docs/reference/glossary/#operator&#34;&gt;operators&lt;/a&gt; to deliver some function of a service mesh application. Workloads have names, namespaces, and unique ids. These properties are available in policy and telemetry configuration using the following &lt;a href=&#34;/docs/reference/glossary/#attribute&#34;&gt;attributes&lt;/a&gt;:&lt;/p&gt; &lt;ul&gt; &lt;li&gt;&lt;code&gt;source.workload.name&lt;/code&gt;, &lt;code&gt;source.workload.namespace&lt;/code&gt;, &lt;code&gt;source.workload.uid&lt;/code&gt;&lt;/li&gt; &lt;li&gt;&lt;code&gt;destination.workload.name&lt;/code&gt;, &lt;code&gt;destination.workload.namespace&lt;/code&gt;, &lt;code&gt;destination.workload.uid&lt;/code&gt;&lt;/li&gt; &lt;/ul&gt; &lt;p&gt;In Kubernetes, a workload typically corresponds to a Kubernetes deployment, while a &lt;a href=&#34;/docs/reference/glossary/#workload-instance&#34;&gt;workload instance&lt;/a&gt; corresponds to an individual &lt;a href=&#34;/docs/reference/glossary/#pod&#34;&gt;pod&lt;/a&gt; managed by the deployment.&lt;/p&gt; ">workloads</span>; the service is only used to find the corresponding workload. This nuance is significant when multiple services are referring to the same workload.
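For instance, nothing prevents two Services from selecting the same pods; a hypothetical example (all names here are made up):

```yaml
# Two hypothetical Services that select the same pods. A policy written
# "for" one of them actually lands on the shared workload behind both.
apiVersion: v1
kind: Service
metadata:
  name: service-a
spec:
  selector:
    app: httpbin        # same pods as service-b
  ports:
  - port: 80
---
apiVersion: v1
kind: Service
metadata:
  name: service-b
spec:
  selector:
    app: httpbin        # same pods as service-a
  ports:
  - port: 8080
    targetPort: 80
```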
A <code>ServiceRole</code> for service A will also affect service B if the two services are referring to the same workload, which can cause confusion and incorrect configuration.</p> <p>Another example: it has proven difficult for users to maintain and manage Istio RBAC configurations because of the need to deeply understand three related resources.</p> <h2 id="design-goals">Design goals</h2> <p>The new <code>v1beta1</code> authorization policy had several design goals:</p> <ul> <li><p>Align with the <a href="https://goo.gl/x3STjD">Istio Configuration Model</a> for better clarity on the policy target. The configuration model provides a unified configuration hierarchy, resolution and target selection.</p></li> <li><p>Improve the user experience by simplifying the API. It&rsquo;s easier to manage one custom resource definition (CRD) that includes all access control specifications, instead of multiple CRDs.</p></li> <li><p>Support more use cases without added complexity. For example, allow the policy to be applied on an Ingress/Egress gateway to enforce access control for traffic entering/exiting the mesh.</p></li> </ul> <h2 id="authorizationpolicy"><code>AuthorizationPolicy</code></h2> <p>An <a href="/v1.9/docs/reference/config/security/authorization-policy/"><code>AuthorizationPolicy</code> custom resource</a> enables access control on workloads. This section gives an overview of the changes in the <code>v1beta1</code> authorization policy.</p> <p>An <code>AuthorizationPolicy</code> includes a <code>selector</code> and a list of <code>rule</code>. The <code>selector</code> specifies on which workload to apply the policy, and the list of <code>rule</code> specifies the detailed access control rules for the workload.</p> <p>The <code>rule</code> is additive, which means a request is allowed if any <code>rule</code> allows the request.
Each <code>rule</code> includes a list of <code>from</code>, <code>to</code> and <code>when</code>, which specifies <strong>who</strong> is allowed to do <strong>what</strong> under which <strong>conditions</strong>.</p> <p>The <code>selector</code> replaces the functionality provided by <code>ClusterRbacConfig</code> and the <code>services</code> field in <code>ServiceRole</code>. The <code>rule</code> replaces the other fields in the <code>ServiceRole</code> and <code>ServiceRoleBinding</code>.</p> <h3 id="example">Example</h3> <p>The following authorization policy applies to workloads with the <code>app: httpbin</code> and <code>version: v1</code> labels in the <code>foo</code> namespace:</p> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' >apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: httpbin
  namespace: foo
spec:
  selector:
    matchLabels:
      app: httpbin
      version: v1
  rules:
  - from:
    - source:
        principals: [&#34;cluster.local/ns/default/sa/sleep&#34;]
    to:
    - operation:
        methods: [&#34;GET&#34;]
    when:
    - key: request.headers[version]
      values: [&#34;v1&#34;, &#34;v2&#34;]
</code></pre> <p>The policy allows principal <code>cluster.local/ns/default/sa/sleep</code> to access the workload using the <code>GET</code> method when the request includes a <code>version</code> header of value <code>v1</code> or <code>v2</code>.
Any requests not matched with the policy will be denied by default.</p> <p>Assuming the <code>httpbin</code> service is defined as:</p> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' >apiVersion: v1
kind: Service
metadata:
  name: httpbin
  namespace: foo
spec:
  selector:
    app: httpbin
    version: v1
  ports:
  # omitted
</code></pre> <p>You would need to configure three resources to achieve the same result in <code>v1alpha1</code>:</p> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' >apiVersion: &#34;rbac.istio.io/v1alpha1&#34;
kind: ClusterRbacConfig
metadata:
  name: default
spec:
  mode: &#39;ON_WITH_INCLUSION&#39;
  inclusion:
    services: [&#34;httpbin.foo.svc.cluster.local&#34;]
---
apiVersion: &#34;rbac.istio.io/v1alpha1&#34;
kind: ServiceRole
metadata:
  name: httpbin
  namespace: foo
spec:
  rules:
  - services: [&#34;httpbin.foo.svc.cluster.local&#34;]
    methods: [&#34;GET&#34;]
    constraints:
    - key: request.headers[version]
      values: [&#34;v1&#34;, &#34;v2&#34;]
---
apiVersion: &#34;rbac.istio.io/v1alpha1&#34;
kind: ServiceRoleBinding
metadata:
  name: httpbin
  namespace: foo
spec:
  subjects:
  - user: &#34;cluster.local/ns/default/sa/sleep&#34;
  roleRef:
    kind: ServiceRole
    name: &#34;httpbin&#34;
</code></pre> <h3 id="workload-selector">Workload selector</h3> <p>A major change in the <code>v1beta1</code> authorization policy is that it now uses a workload selector to specify where to apply the policy. This is the same workload selector used in the <code>Gateway</code>, <code>Sidecar</code> and <code>EnvoyFilter</code> configurations.</p> <p>The workload selector makes it clear that the policy is applied and enforced on workloads instead of services. If a policy applies to a workload that is used by multiple different services, the same policy will affect the traffic to all the different services.</p> <p>You can simply leave the <code>selector</code> empty to apply the policy to all workloads in a namespace.
The following policy applies to all workloads in the namespace <code>bar</code>:</p> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' >apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: policy
  namespace: bar
spec:
  rules:
  # omitted
</code></pre> <h3 id="root-namespace">Root namespace</h3> <p>A policy in the root namespace applies to all workloads in the mesh, in every namespace. The root namespace is configurable in the <a href="/v1.9/docs/reference/config/istio.mesh.v1alpha1/#MeshConfig"><code>MeshConfig</code></a> and has the default value of <code>istio-system</code>.</p> <p>For example, suppose you installed Istio in the <code>istio-system</code> namespace, deployed workloads in the <code>default</code> and <code>bookinfo</code> namespaces, and changed the root namespace from its default value to <code>istio-config</code>. The following policy will apply to workloads in every namespace, including <code>default</code>, <code>bookinfo</code> and <code>istio-system</code>:</p> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' >apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: policy
  namespace: istio-config
spec:
  rules:
  # omitted
</code></pre> <h3 id="ingress-egress-gateway-support">Ingress/Egress Gateway support</h3> <p>The <code>v1beta1</code> authorization policy can also be applied on an ingress/egress gateway to enforce access control on traffic entering/leaving the mesh; you only need to change the <code>selector</code> to select the ingress/egress gateway workload.</p> <p>The following policy applies to workloads with the <code>app: istio-ingressgateway</code> label:</p> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' >apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: ingress
  namespace: istio-system
spec:
  selector:
    matchLabels:
      app: istio-ingressgateway
  rules:
  # omitted
</code></pre> <p>Remember the authorization
policy only applies to workloads in the same namespace as the policy, unless the policy is applied in the root namespace:</p> <ul> <li><p>If you don&rsquo;t change the default root namespace value (i.e. <code>istio-system</code>), the above policy will apply to workloads with the <code>app: istio-ingressgateway</code> label in <strong>every</strong> namespace.</p></li> <li><p>If you have changed the root namespace to a different value, the above policy will apply to workloads with the <code>app: istio-ingressgateway</code> label <strong>only</strong> in the <code>istio-system</code> namespace.</p></li> </ul> <h3 id="comparison">Comparison</h3> <p>The following table highlights the key differences between the old <code>v1alpha1</code> RBAC policies and the new <code>v1beta1</code> authorization policy.</p> <h4 id="feature">Feature</h4> <table> <thead> <tr> <th>Feature</th> <th><code>v1alpha1</code> RBAC policy</th> <th><code>v1beta1</code> Authorization Policy</th> </tr> </thead> <tbody> <tr> <td>API stability</td> <td><code>alpha</code>: <strong>not</strong> backward compatible</td> <td><code>beta</code>: backward compatibility <strong>guaranteed</strong></td> </tr> <tr> <td>Number of CRDs</td> <td>Three: <code>ClusterRbacConfig</code>, <code>ServiceRole</code> and <code>ServiceRoleBinding</code></td> <td>Only one: <code>AuthorizationPolicy</code></td> </tr> <tr> <td>Policy target</td> <td><strong>service</strong></td> <td><strong>workload</strong></td> </tr> <tr> <td>Deny-by-default behavior</td> <td>Enabled <strong>explicitly</strong> by configuring <code>ClusterRbacConfig</code></td> <td>Enabled <strong>implicitly</strong> with <code>AuthorizationPolicy</code></td> </tr> <tr> <td>Ingress/Egress gateway support</td> <td>Not supported</td> <td>Supported</td> </tr> <tr> <td>The <code>&quot;*&quot;</code> value in policy</td> <td>Match all contents (empty and non-empty)</td> <td>Match non-empty contents only</td> </tr> </tbody> </table> <p>The following tables show
the relationship between the <code>v1alpha1</code> and <code>v1beta1</code> APIs.</p> <h4 id="clusterrbacconfig"><code>ClusterRbacConfig</code></h4> <table> <thead> <tr> <th><code>ClusterRbacConfig.Mode</code></th> <th><code>AuthorizationPolicy</code></th> </tr> </thead> <tbody> <tr> <td><code>OFF</code></td> <td>No policy applied</td> </tr> <tr> <td><code>ON</code></td> <td>A deny-all policy applied in the root namespace</td> </tr> <tr> <td><code>ON_WITH_INCLUSION</code></td> <td>Policies should be applied to namespaces or workloads included by <code>ClusterRbacConfig</code></td> </tr> <tr> <td><code>ON_WITH_EXCLUSION</code></td> <td>Policies should be applied to namespaces or workloads excluded by <code>ClusterRbacConfig</code></td> </tr> </tbody> </table> <h4 id="servicerole"><code>ServiceRole</code></h4> <table> <thead> <tr> <th><code>ServiceRole</code></th> <th><code>AuthorizationPolicy</code></th> </tr> </thead> <tbody> <tr> <td><code>services</code></td> <td><code>selector</code></td> </tr> <tr> <td><code>paths</code></td> <td><code>paths</code> in <code>to</code></td> </tr> <tr> <td><code>methods</code></td> <td><code>methods</code> in <code>to</code></td> </tr> <tr> <td><code>destination.ip</code> in constraint</td> <td>Not supported</td> </tr> <tr> <td><code>destination.port</code> in constraint</td> <td><code>ports</code> in <code>to</code></td> </tr> <tr> <td><code>destination.labels</code> in constraint</td> <td><code>selector</code></td> </tr> <tr> <td><code>destination.namespace</code> in constraint</td> <td>Replaced by the namespace of the policy, i.e.
the <code>namespace</code> in metadata</td> </tr> <tr> <td><code>destination.user</code> in constraint</td> <td>Not supported</td> </tr> <tr> <td><code>experimental.envoy.filters</code> in constraint</td> <td><code>experimental.envoy.filters</code> in <code>when</code></td> </tr> <tr> <td><code>request.headers</code> in constraint</td> <td><code>request.headers</code> in <code>when</code></td> </tr> </tbody> </table> <h4 id="servicerolebinding"><code>ServiceRoleBinding</code></h4> <table> <thead> <tr> <th><code>ServiceRoleBinding</code></th> <th><code>AuthorizationPolicy</code></th> </tr> </thead> <tbody> <tr> <td><code>user</code></td> <td><code>principals</code> in <code>from</code></td> </tr> <tr> <td><code>group</code></td> <td><code>request.auth.claims[group]</code> in <code>when</code></td> </tr> <tr> <td><code>source.ip</code> in property</td> <td><code>ipBlocks</code> in <code>from</code></td> </tr> <tr> <td><code>source.namespace</code> in property</td> <td><code>namespaces</code> in <code>from</code></td> </tr> <tr> <td><code>source.principal</code> in property</td> <td><code>principals</code> in <code>from</code></td> </tr> <tr> <td><code>request.headers</code> in property</td> <td><code>request.headers</code> in <code>when</code></td> </tr> <tr> <td><code>request.auth.principal</code> in property</td> <td><code>requestPrincipals</code> in <code>from</code> or <code>request.auth.principal</code> in <code>when</code></td> </tr> <tr> <td><code>request.auth.audiences</code> in property</td> <td><code>request.auth.audiences</code> in <code>when</code></td> </tr> <tr> <td><code>request.auth.presenter</code> in property</td> <td><code>request.auth.presenter</code> in <code>when</code></td> </tr> <tr> <td><code>request.auth.claims</code> in property</td> <td><code>request.auth.claims</code> in <code>when</code></td> </tr> </tbody> </table> <p>Beyond all the differences, the <code>v1beta1</code> policy is enforced by the same engine in Envoy and supports the 
same authenticated identities (mutual TLS or JWT), conditions, and other primitives (e.g. IP and port) as the <code>v1alpha1</code> policy.</p> <h2 id="future-of-the-v1alpha1-policy">Future of the <code>v1alpha1</code> policy</h2> <p>The <code>v1alpha1</code> RBAC policy (<code>ClusterRbacConfig</code>, <code>ServiceRole</code>, and <code>ServiceRoleBinding</code>) is deprecated in favor of the <code>v1beta1</code> authorization policy.</p> <p>Istio 1.4 continues to support the <code>v1alpha1</code> RBAC policy to give you enough time to move away from the alpha policies.</p> <h2 id="migration-from-the-v1alpha1-policy">Migration from the <code>v1alpha1</code> policy</h2> <p>Istio only supports one of the two versions for a given workload:</p> <ul> <li>If there is only a <code>v1beta1</code> policy for a workload, the <code>v1beta1</code> policy will be used.</li> <li>If there is only a <code>v1alpha1</code> policy for a workload, the <code>v1alpha1</code> policy will be used.</li> <li>If there are both <code>v1beta1</code> and <code>v1alpha1</code> policies for a workload, only the <code>v1beta1</code> policy will be used and the <code>v1alpha1</code> policy will be ignored.</li> </ul> <h3 id="general-guideline">General Guideline</h3> <div> <aside class="callout warning"> <div class="type"> <svg class="large-icon"><use xlink:href="/v1.9/img/icons.svg#callout-warning"/></svg> </div> <div class="content">When migrating a given workload to the <code>v1beta1</code> policy, make sure the new <code>v1beta1</code> policy covers all the existing <code>v1alpha1</code> policies applied to the workload, because those <code>v1alpha1</code> policies will be ignored once you apply the <code>v1beta1</code> policy.</div> </aside> </div> <p>The typical flow of migrating to the <code>v1beta1</code> policy is to start by checking the <code>ClusterRbacConfig</code> to decide which namespaces or services have RBAC enabled.</p> <p>For each service enabled with 
RBAC:</p> <ol> <li>Get the workload selector from the service definition.</li> <li>Create a <code>v1beta1</code> policy with the workload selector.</li> <li>Update the <code>v1beta1</code> policy for each <code>ServiceRole</code> and <code>ServiceRoleBinding</code> applied to the service.</li> <li>Apply the <code>v1beta1</code> policy and monitor the traffic to make sure the policy is working as expected.</li> <li>Repeat the process for the next service enabled with RBAC.</li> </ol> <p>For each namespace enabled with RBAC:</p> <ol> <li>Apply a <code>v1beta1</code> policy that denies all traffic to the given namespace.</li> </ol> <h3 id="migration-example">Migration Example</h3> <p>Assume you have the following <code>v1alpha1</code> policies for the <code>httpbin</code> service in the <code>foo</code> namespace:</p> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' >apiVersion: &#34;rbac.istio.io/v1alpha1&#34;
kind: ClusterRbacConfig
metadata:
  name: default
spec:
  mode: &#39;ON_WITH_INCLUSION&#39;
  inclusion:
    namespaces: [&#34;foo&#34;]
---
apiVersion: &#34;rbac.istio.io/v1alpha1&#34;
kind: ServiceRole
metadata:
  name: httpbin
  namespace: foo
spec:
  rules:
  - services: [&#34;httpbin.foo.svc.cluster.local&#34;]
    methods: [&#34;GET&#34;]
---
apiVersion: &#34;rbac.istio.io/v1alpha1&#34;
kind: ServiceRoleBinding
metadata:
  name: httpbin
  namespace: foo
spec:
  subjects:
  - user: &#34;cluster.local/ns/default/sa/sleep&#34;
  roleRef:
    kind: ServiceRole
    name: &#34;httpbin&#34;
</code></pre> <p>Migrate the above policies to <code>v1beta1</code> as follows:</p> <ol> <li><p>Assume the <code>httpbin</code> service has the following workload selector:</p> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' >selector:
  app: httpbin
  version: v1
</code></pre></li> <li><p>Create a <code>v1beta1</code> policy with the workload selector:</p> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' >apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: httpbin
  namespace: foo
spec:
  selector:
    matchLabels:
      app: httpbin
      version: v1
</code></pre></li> <li><p>Update the <code>v1beta1</code> policy with each <code>ServiceRole</code> and <code>ServiceRoleBinding</code> applied to the service:</p> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' >apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: httpbin
  namespace: foo
spec:
  selector:
    matchLabels:
      app: httpbin
      version: v1
  rules:
  - from:
    - source:
        principals: [&#34;cluster.local/ns/default/sa/sleep&#34;]
    to:
    - operation:
        methods: [&#34;GET&#34;]
</code></pre></li> <li><p>Apply the <code>v1beta1</code> policy and monitor the traffic to make sure it works as expected.</p></li> <li><p>Apply the following <code>v1beta1</code> policy that denies all traffic to the <code>foo</code> namespace because the <code>foo</code> namespace is enabled with RBAC:</p> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' >apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: deny-all
  namespace: foo
spec: {}
</code></pre></li> </ol> <p>Make sure the <code>v1beta1</code> policy is working as expected and then you can delete the <code>v1alpha1</code> policies from the cluster.</p> <h3 id="automation-of-the-migration">Automation of the Migration</h3> <p>To help ease the migration, the <code>istioctl experimental authz convert</code> command is provided to automatically convert the <code>v1alpha1</code> policies to the <code>v1beta1</code> policy.</p> <p>You can evaluate the command, but it is experimental in Istio 1.4 and doesn&rsquo;t support the full <code>v1alpha1</code> semantics as of the date of this blog post.</p> <p>The command to support the full <code>v1alpha1</code> semantics is expected in a patch release following Istio 1.4.</p>Thu, 14 Nov 2019 00:00:00 +0000/v1.9/blog/2019/v1beta1-authorization-policy/Yangmin Zhu 
(Google)/v1.9/blog/2019/v1beta1-authorization-policy/securityRBACaccess controlauthorizationIntroducing the Istio Operator <p>Kubernetes <a href="https://kubernetes.io/docs/concepts/extend-kubernetes/operator/">operators</a> provide a pattern for encoding human operational knowledge in software and are a popular way to simplify the administration of software infrastructure components. Istio is a natural candidate for an automated operator as it is challenging to administer.</p> <p>Up until now, <a href="https://github.com/helm/helm">Helm</a> has been the primary tool to install and upgrade Istio. Istio 1.4 introduces a new method of <a href="/v1.9/docs/setup/install/istioctl/">installation using istioctl</a>. This new installation method builds on the strengths of Helm with the addition of the following:</p> <ul> <li>Users only need to install one tool: <code>istioctl</code></li> <li>All API fields are validated</li> <li>Small customizations not in the API don&rsquo;t require chart or API changes</li> <li>Version-specific upgrade hooks can be easily and robustly implemented</li> </ul> <p>The <a href="https://archive.istio.io/1.4/docs/setup/install/helm/">Helm installation</a> method is being deprecated. For installations not initially performed with Helm, upgrading from Istio 1.4 is likewise handled by a new <a href="https://archive.istio.io/v1.4/docs/setup/upgrade/istioctl-upgrade/">istioctl upgrade feature</a>.</p> <p>The new <code>istioctl</code> installation commands use a <a href="https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/">custom resource</a> to configure the installation. The custom resource is part of a new Istio operator implementation intended to simplify the common administrative tasks of installation, upgrade, and complex configuration changes for Istio. 
Validation and checking for installation and upgrade are tightly integrated with the tools to prevent common errors and simplify troubleshooting.</p> <h2 id="the-operator-api">The Operator API</h2> <p>Every operator implementation requires a <a href="https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/#customresourcedefinitions">custom resource definition (CRD)</a> to define its custom resource, that is, its API. Istio&rsquo;s operator API is defined by the <a href="https://archive.istio.io/v1.4/docs/reference/config/istio.operator.v1alpha12.pb/"><code>IstioControlPlane</code> CRD</a>, which is generated from an <a href="https://github.com/istio/operator/blob/release-1.4/pkg/apis/istio/v1alpha2/istiocontrolplane_types.proto"><code>IstioControlPlane</code> proto</a>. The API supports all of Istio&rsquo;s current <a href="/v1.9/docs/setup/additional-setup/config-profiles/">configuration profiles</a> using a single field to select the profile. For example, the following <code>IstioControlPlane</code> resource configures Istio using the <code>demo</code> profile:</p> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' >apiVersion: install.istio.io/v1alpha2
kind: IstioControlPlane
metadata:
  namespace: istio-operator
  name: example-istiocontrolplane
spec:
  profile: demo
</code></pre> <p>You can then customize the configuration with additional settings. For example, to disable telemetry:</p> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' >apiVersion: install.istio.io/v1alpha2
kind: IstioControlPlane
metadata:
  namespace: istio-operator
  name: example-istiocontrolplane
spec:
  profile: demo
  telemetry:
    enabled: false
</code></pre> <h2 id="installing-with-hahahugoshortcode-s4-hbhb">Installing with istioctl</h2> <p>The recommended way to use the Istio operator API is through a new set of <code>istioctl</code> commands. 
For example, to install Istio into a cluster:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ istioctl manifest apply -f &lt;your-istiocontrolplane-customresource&gt; </code></pre> <p>Make changes to the installation configuration by editing the configuration file and executing <code>istioctl manifest apply</code> again.</p> <p>To upgrade to a new version of Istio:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ istioctl x upgrade -f &lt;your-istiocontrolplane-config-changes&gt; </code></pre> <p>In addition to specifying the complete configuration in an <code>IstioControlPlane</code> resource, the <code>istioctl</code> commands can also be passed individual settings using a <code>--set</code> flag:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ istioctl manifest apply --set telemetry.enabled=false </code></pre> <p>There are also a number of other <code>istioctl</code> commands that, for example, help you list, display, and compare configuration profiles and manifests.</p> <p>Refer to the Istio <a href="/v1.9/docs/setup/install/istioctl">install instructions</a> for more details.</p> <h2 id="istio-controller-alpha">Istio Controller (alpha)</h2> <p>Operator implementations use a Kubernetes controller to continuously monitor their custom resource and apply the corresponding configuration changes. The Istio controller monitors an <code>IstioControlPlane</code> resource and reacts to changes by updating the Istio installation configuration in the corresponding cluster.</p> <p>In the 1.4 release, the Istio controller is in the alpha phase of development and not fully integrated with <code>istioctl</code>. It is, however, <a href="/v1.9/docs/setup/install/operator/">available for experimentation</a> using <code>kubectl</code> commands. 
For example, to install the controller and a default version of Istio into your cluster, run the following commands:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl apply -f https://&lt;repo URL&gt;/operator.yaml
$ kubectl apply -f https://&lt;repo URL&gt;/default-cr.yaml
</code></pre> <p>You can then make changes to the Istio installation configuration:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl edit istiocontrolplane example-istiocontrolplane -n istio-system
</code></pre> <p>As soon as the resource is updated, the controller will detect the changes and respond by updating the Istio installation correspondingly.</p> <p>Both the operator controller and <code>istioctl</code> commands share the same implementation. The significant difference is the execution context. In the <code>istioctl</code> case, the operation runs in the admin user’s command execution and security context. In the controller case, a pod in the cluster runs the code in its security context. In both cases, configuration is validated against a schema and the same correctness checks are performed.</p> <h2 id="migration-from-helm">Migration from Helm</h2> <p>To help ease the transition from previous configurations using Helm, <code>istioctl</code> and the controller support pass-through access for the full Helm installation API.</p> <p>You can pass Helm configuration options using <code>istioctl --set</code> by prepending the string <code>values.</code> to the option name. For example, instead of this Helm command:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ helm template ... --set global.mtls.enabled=true
</code></pre> <p>You can use this <code>istioctl</code> command:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ istioctl manifest generate ... 
--set values.global.mtls.enabled=true </code></pre> <p>You can also set Helm configuration values in an <code>IstioControlPlane</code> custom resource. See <a href="/v1.9/docs/setup/install/istioctl/#customize-istio-settings-using-the-helm-api">Customize Istio settings using Helm</a> for details.</p> <p>Another feature to help with the transition from Helm is the alpha <a href="/v1.9/docs/reference/commands/istioctl/#istioctl-manifest-migrate">istioctl manifest migrate</a> command. This command can be used to automatically convert a Helm <code>values.yaml</code> file to a corresponding <code>IstioControlPlane</code> configuration.</p> <h2 id="implementation">Implementation</h2> <p>Several frameworks have been created to help implement operators by generating stubs for some or all of the components. The Istio operator was created with the help of a combination of <a href="https://github.com/kubernetes-sigs/kubebuilder">kubebuilder</a> and <a href="https://github.com/operator-framework">operator framework</a>. Istio&rsquo;s installation now uses a proto to describe the API such that runtime validation can be executed against a schema.</p> <p>More information about the implementation can be found in the README and ARCHITECTURE documents in the <a href="https://github.com/istio/operator">Istio operator repository</a>.</p> <h2 id="summary">Summary</h2> <p>Starting in Istio 1.4, Helm installation is being replaced by new <code>istioctl</code> commands using a new operator custom resource definition, <code>IstioControlPlane</code>, for the configuration API. An alpha controller is also available for early experimentation with the operator.</p> <p>The new <code>istioctl</code> commands and operator controller both validate configuration schemas and perform a range of checks for installation change or upgrade. 
These checks are tightly integrated with the tools to prevent common errors and simplify troubleshooting.</p> <p>The Istio maintainers expect that this new approach will improve the user experience during Istio installation and upgrade, better stabilize the installation API, and help users better manage and monitor their Istio installations.</p> <p>We welcome your feedback about the new installation approach at <a href="https://discuss.istio.io/">discuss.istio.io</a>.</p>Thu, 14 Nov 2019 00:00:00 +0000/v1.9/blog/2019/introducing-istio-operator/Martin Ostrowski (Google), Frank Budinsky (IBM)/v1.9/blog/2019/introducing-istio-operator/installconfigurationistioctloperatorIntroducing istioctl analyze <p>Istio 1.4 introduces an experimental new tool to help you analyze and debug your clusters running Istio.</p> <p><a href="/v1.9/docs/reference/commands/istioctl/#istioctl-experimental-analyze"><code>istioctl analyze</code></a> is a diagnostic tool that detects potential issues with your Istio configuration, as well as gives general insights to improve your configuration. It can run against a live cluster or a set of local configuration files. It can also run against a combination of the two, allowing you to catch problems before you apply changes to a cluster.</p> <p>To get started with it in just minutes, head over to the <a href="/v1.9/docs/ops/diagnostic-tools/istioctl-analyze/">documentation</a>.</p> <h2 id="designed-to-be-approachable-for-novice-users">Designed to be approachable for novice users</h2> <p>One of the key design goals that we followed for this feature is to make it extremely approachable. 
This is achieved by making the command useful without having to pass any required complex parameters.</p> <p>In practice, here are some of the scenarios that it goes after:</p> <ul> <li><em>&ldquo;There is some problem with my cluster, but I have no idea where to start&rdquo;</em></li> <li><em>&ldquo;Things are generally working, but I&rsquo;m wondering if there is anything I could improve&rdquo;</em></li> </ul> <p>In that sense, it is very different from some of the more advanced diagnostic tools, which go after scenarios along the lines of (taking <code>istioctl proxy-config</code> as an example):</p> <ul> <li><em>&ldquo;Show me the Envoy configuration for this specific pod so I can see if anything looks wrong&rdquo;</em></li> </ul> <p>This can be very useful for advanced debugging, but it requires a lot of expertise before you can figure out that you need to run this specific command, and which pod to run it on.</p> <p>So really, the one-line pitch for <code>analyze</code> is: just run it! It&rsquo;s completely safe, it takes no thinking, it might help you, and at worst, you&rsquo;ll have wasted a minute!</p> <h2 id="improving-this-tool-over-time">Improving this tool over time</h2> <p>In Istio 1.4, <code>analyze</code> comes with a nice set of analyzers that can detect a number of common issues. But this is just the beginning, and we are planning to keep growing and fine-tuning the analyzers with each release.</p> <p>In fact, we would welcome suggestions from Istio users. Specifically, if you encounter a situation where you think an issue could be detected via configuration analysis, but is not currently flagged by <code>analyze</code>, please do let us know. 
The best way to do this is to <a href="https://github.com/istio/istio/issues">open an issue on GitHub</a>.</p>Thu, 14 Nov 2019 00:00:00 +0000/v1.9/blog/2019/introducing-istioctl-analyze/David Ebbo (Google)/v1.9/blog/2019/introducing-istioctl-analyze/debuggingistioctlconfigurationDNS Certificate Management<p>By default, Citadel manages the DNS certificates of the Istio control plane. Citadel is a large component that maintains its own private signing key, and acts as a Certificate Authority (CA).</p> <p>New in Istio 1.4, we introduce a feature to securely provision and manage DNS certificates signed by the Kubernetes CA, which has the following advantages.</p> <ul> <li><p>Lighter weight DNS certificate management with no dependency on Citadel.</p></li> <li><p>Unlike Citadel, this feature doesn&rsquo;t maintain a private signing key, which enhances security.</p></li> <li><p>Simplified root certificate distribution to TLS clients. Clients no longer need to wait for Citadel to generate and distribute its CA certificate.</p></li> </ul> <p>The following diagram shows the architecture of provisioning and managing DNS certificates in Istio. 
Chiron is the component that provisions and manages DNS certificates in Istio.</p> <figure style="width:50%"> <div class="wrapper-with-intrinsic-ratio" style="padding-bottom:82.13367609254499%"> <a data-skipendnotes="true" href="/v1.9/blog/2019/dns-cert/architecture.png" title="The architecture of provisioning and managing DNS certificates in Istio"> <img class="element-to-stretch" src="/v1.9/blog/2019/dns-cert/architecture.png" alt="The architecture of provisioning and managing DNS certificates in Istio" /> </a> </div> <figcaption>The architecture of provisioning and managing DNS certificates in Istio</figcaption> </figure> <p>To try this new feature, refer to the <a href="/v1.9/docs/tasks/security/cert-management/dns-cert">DNS certificate management task</a>.</p>Thu, 14 Nov 2019 00:00:00 +0000/v1.9/blog/2019/dns-cert/Lei Tang (Google)/v1.9/blog/2019/dns-cert/securitykubernetescertificatesDNSAnnouncing Istio client-go <p>We are pleased to announce the initial release of the Istio <a href="https://github.com/istio/client-go">client go</a> repository, which enables developers to gain programmatic access to Istio APIs in a Kubernetes environment. The generated Kubernetes informers and client set in this repository make it easy for developers to create controllers and perform Create, Read, Update and Delete (CRUD) operations for all Istio Custom Resource Definitions (CRDs).</p> <p>This functionality was highly requested by many Istio users, as is evident from the feature requests on the clients generated by <a href="https://github.com/aspenmesh/istio-client-go">Aspen Mesh</a> and the <a href="https://github.com/knative/pkg">Knative project</a>. If you&rsquo;re currently using one of the above-mentioned clients, you can easily switch to using <a href="https://github.com/istio/client-go">Istio client go</a> like this:</p> <pre><code class='language-go' data-expandlinks='true' data-repo='istio' >import (
...
- versionedclient &#34;github.com/aspenmesh/istio-client-go/pkg/client/clientset/versioned&#34;
+ versionedclient &#34;istio.io/client-go/pkg/clientset/versioned&#34;
)
</code></pre> <p>As the generated client sets are functionally equivalent, switching the imported client libraries should be sufficient in order to consume the newly generated library.</p> <h2 id="how-to-use-client-go">How to use client-go</h2> <p>The Istio <a href="https://github.com/istio/client-go">client go</a> repository follows the same branching strategy as the <a href="https://github.com/istio/api">Istio API</a> repository, as the client repository depends on the API definitions. If you want to use a stable client set, you can use the release branches or tagged versions in the <a href="https://github.com/istio/client-go">client go</a> repository. Using the client set is very similar to using the <a href="https://github.com/kubernetes/client-go">Kubernetes client go</a>. Here&rsquo;s a quick example of using the client to list all <a href="/v1.9/docs/reference/config/networking/virtual-service">Istio virtual services</a> in the passed namespace:</p> <pre><code class='language-go' data-expandlinks='true' data-repo='istio' >package main

import (
    &#34;log&#34;
    &#34;os&#34;

    metav1 &#34;k8s.io/apimachinery/pkg/apis/meta/v1&#34;
    &#34;k8s.io/client-go/tools/clientcmd&#34;

    versionedclient &#34;istio.io/client-go/pkg/clientset/versioned&#34;
)

func main() {
    kubeconfig := os.Getenv(&#34;KUBECONFIG&#34;)
    namespace := os.Getenv(&#34;NAMESPACE&#34;)
    if len(kubeconfig) == 0 || len(namespace) == 0 {
        log.Fatalf(&#34;Environment variables KUBECONFIG and NAMESPACE need to be set&#34;)
    }
    restConfig, err := clientcmd.BuildConfigFromFlags(&#34;&#34;, kubeconfig)
    if err != nil {
        log.Fatalf(&#34;Failed to create k8s rest client: %s&#34;, err)
    }
    ic, err := versionedclient.NewForConfig(restConfig)
    if err != nil {
        log.Fatalf(&#34;Failed to create istio client: %s&#34;, err)
    }
    // Print all VirtualServices
    vsList, err := ic.NetworkingV1alpha3().VirtualServices(namespace).List(metav1.ListOptions{})
    if err != nil {
        log.Fatalf(&#34;Failed to get VirtualService in %s namespace: %s&#34;, namespace, err)
    }
    for i := range vsList.Items {
        vs := vsList.Items[i]
        log.Printf(&#34;Index: %d VirtualService Hosts: %+v\n&#34;, i, vs.Spec.GetHosts())
    }
}
</code></pre> <p>You can find a more in-depth example <a href="https://github.com/istio/client-go/blob/release-1.9/cmd/example/client.go">here</a>.</p> <h2 id="useful-tools-created-for-generating-istio-client-go">Useful tools created for generating Istio client-go</h2> <p>If you&rsquo;re wondering why it took so long or why it was difficult to generate this client set, this section is for you. In Istio, we use <a href="https://developers.google.com/protocol-buffers">protobuf</a> specifications to write APIs, which are then converted to Go definitions using the protobuf tool chain. There are three major challenges you might face if you&rsquo;re trying to generate a Kubernetes client set from a protobuf-generated API:</p> <ul> <li><p><strong>Creating Kubernetes Wrapper Types</strong> - The Kubernetes <a href="https://github.com/kubernetes/code-generator/tree/master/cmd/client-gen">client generation</a> library only works for Go objects which follow the Kubernetes object specification, e.g. the <a href="https://github.com/istio/client-go/blob/release-1.9/pkg/apis/authentication/v1alpha1/types.gen.go">Authentication Policy Kubernetes Wrappers</a>. This means for every API which needs programmatic access, you need to create these wrappers. Additionally, there is a fair amount of boilerplate needed for every <code>CRD</code> group, version and kind that needs client code generation. To automate this process, we created a <a href="https://github.com/istio/tools/tree/master/cmd/kubetype-gen">Kubernetes type generator</a> tool which can automatically create the Kubernetes types based on annotations. 
The annotations parsed by this tool and the various available options are explained in the <a href="https://github.com/istio/tools/blob/master/cmd/kubetype-gen/README.md">README</a>. Note that if you&rsquo;re using protobuf tools to generate Go types, you would need to add these annotations as comments in the proto files, so that the comments are present in the generated Go files which are then used by this tool.</p></li> <li><p><strong>Generating deep copy methods</strong> - In the Kubernetes client machinery, if you want to mutate any object returned from the client set, you are required to make a copy of the object to prevent modifying the object in-place in the cache store. The canonical way to do this is to create a <code>deepcopy</code> method on all nested types. We created a tool, the <a href="https://github.com/istio/tools/tree/master/cmd/protoc-gen-deepcopy">protoc deep copy generator</a>, which is a <code>protoc</code> plugin and can automatically create <code>deepcopy</code> methods based on annotations using the Proto library utility <a href="https://godoc.org/github.com/golang/protobuf/proto#Clone">Proto Clone</a>. Here&rsquo;s an <a href="https://github.com/istio/api/blob/release-1.9/authentication/v1alpha1/policy_deepcopy.gen.go">example</a> of the generated <code>deepcopy</code> method.</p></li> <li><p><strong>Marshaling and Unmarshaling types to/from JSON</strong> - For the types generated from proto definitions, it is often problematic to use the default Go JSON encoder/decoder as there are various fields, like protobuf&rsquo;s <code>oneof</code>, which require special handling. Additionally, any Proto fields with underscores in their name might serialize/deserialize to different field names depending on the encoder/decoder, as the Go struct tags are <a href="https://github.com/istio/istio/issues/17600">generated differently</a>. It is always recommended to use protobuf primitives for serializing/deserializing to JSON instead of relying on the default Go library. 
We created a tool <a href="https://github.com/istio/tools/tree/master/cmd/protoc-gen-jsonshim">protoc JSON shim</a> which is a <code>protoc</code> plugin and can automatically create Marshalers/Unmarshalers for all Go type generated from Proto definitions. Here&rsquo;s an <a href="https://github.com/istio/api/blob/release-1.9/authentication/v1alpha1/policy_json.gen.go">example</a> of the code generated by this tool.</p></li> </ul> <p>I&rsquo;m hoping that the newly released client library enables users to create more integrations and controllers for the Istio APIs, and the tools mentioned above can be used by developers to generate Kubernetes client set from Proto APIs.</p>Thu, 14 Nov 2019 00:00:00 +0000/v1.9/blog/2019/announcing-istio-client-go/Neeraj Poddar (Aspen Mesh)/v1.9/blog/2019/announcing-istio-client-go/client-gotoolscrdIstio as a Proxy for External Services <p>The <a href="/v1.9/docs/tasks/traffic-management/ingress/ingress-control/">Control Ingress Traffic</a> and the <a href="/v1.9/docs/tasks/traffic-management/ingress/ingress-sni-passthrough/">Ingress Gateway without TLS Termination</a> tasks describe how to configure an ingress gateway to expose services inside the mesh to external traffic. The services can be HTTP or HTTPS. In the case of HTTPS, the gateway passes the traffic through, without terminating TLS.</p> <p>This blog post describes how to use the same ingress gateway mechanism of Istio to enable access to external services and not to applications inside the mesh. 
This way Istio as a whole can serve just as a proxy server, with the added value of observability, traffic management and policy enforcement.</p> <p>The blog post shows configuring access to an HTTP and an HTTPS external service, namely <code>httpbin.org</code> and <code>edition.cnn.com</code>.</p> <h2 id="configure-an-ingress-gateway">Configure an ingress gateway</h2> <ol> <li><p>Define an ingress gateway with a <code>servers:</code> section configuring ports <code>80</code> and <code>443</code>. Ensure <code>mode:</code> is set to <code>PASSTHROUGH</code> for <code>tls:</code> on port <code>443</code>, which instructs the gateway to pass the ingress traffic AS IS, without terminating TLS.</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl apply -f - &lt;&lt;EOF
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: proxy
spec:
  selector:
    istio: ingressgateway # use istio default ingress gateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - httpbin.org
  - port:
      number: 443
      name: tls
      protocol: TLS
    tls:
      mode: PASSTHROUGH
    hosts:
    - edition.cnn.com
EOF
</code></pre></li> <li><p>Create service entries for the <code>httpbin.org</code> and <code>edition.cnn.com</code> services to make them accessible from the ingress gateway:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl apply -f - &lt;&lt;EOF
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: httpbin-ext
spec:
  hosts:
  - httpbin.org
  ports:
  - number: 80
    name: http
    protocol: HTTP
  resolution: DNS
  location: MESH_EXTERNAL
---
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: cnn
spec:
  hosts:
  - edition.cnn.com
  ports:
  - number: 443
    name: tls
    protocol: TLS
  resolution: DNS
  location: MESH_EXTERNAL
EOF
</code></pre></li> <li><p>Create a service entry and configure a destination rule for the <code>localhost</code> service. 
You will use this service entry in the next step as the destination for traffic sent to the external services from applications inside the mesh, in order to block that traffic. In this example you use Istio as a proxy between external applications and external services.</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl apply -f - &lt;&lt;EOF apiVersion: networking.istio.io/v1alpha3 kind: ServiceEntry metadata: name: localhost spec: hosts: - localhost.local location: MESH_EXTERNAL ports: - number: 80 name: http protocol: HTTP - number: 443 name: tls protocol: TLS resolution: STATIC endpoints: - address: 127.0.0.1 --- apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: localhost spec: host: localhost.local trafficPolicy: tls: mode: DISABLE sni: localhost.local EOF </code></pre></li> <li><p>Create a virtual service for each external service to configure routing to it. Both virtual services include the <code>proxy</code> gateway in the <code>gateways:</code> section and in the <code>match:</code> section for HTTP and HTTPS traffic accordingly.</p> <p>Notice the <code>route:</code> section for the <code>mesh</code> gateway, the gateway that represents the applications inside the mesh.
The <code>route:</code> for the <code>mesh</code> gateway shows how the traffic is directed to the <code>localhost.local</code> service, effectively blocking the traffic.</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl apply -f - &lt;&lt;EOF apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: httpbin spec: hosts: - httpbin.org gateways: - proxy - mesh http: - match: - gateways: - proxy port: 80 uri: prefix: /status route: - destination: host: httpbin.org port: number: 80 - match: - gateways: - mesh port: 80 route: - destination: host: localhost.local port: number: 80 --- apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: cnn spec: hosts: - edition.cnn.com gateways: - proxy - mesh tls: - match: - gateways: - proxy port: 443 sni_hosts: - edition.cnn.com route: - destination: host: edition.cnn.com port: number: 443 - match: - gateways: - mesh port: 443 sni_hosts: - edition.cnn.com route: - destination: host: localhost.local port: number: 443 EOF </code></pre></li> <li><p><a href="/v1.9/docs/tasks/observability/logs/access-log/#enable-envoy-s-access-logging">Enable Envoy&rsquo;s access logging</a>.</p></li> <li><p>Follow the instructions in <a href="/v1.9/docs/tasks/traffic-management/ingress/ingress-control/#determining-the-ingress-ip-and-ports">Determining the ingress IP and ports</a> to define the <code>SECURE_INGRESS_PORT</code> and <code>INGRESS_HOST</code> environment variables.</p></li> <li><p>Access the <code>httpbin.org</code> service through your ingress IP and port which you stored in the <code>$INGRESS_HOST</code> and <code>$INGRESS_PORT</code> environment variables, respectively, during the previous step.
Access the <code>/status/418</code> path of the <code>httpbin.org</code> service that returns the HTTP status <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/418">418 I&rsquo;m a teapot</a>.</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ curl $INGRESS_HOST:$INGRESS_PORT/status/418 -Hhost:httpbin.org -=[ teapot ]=- _...._ .&#39; _ _ `. | .&#34;` ^ `&#34;. _, \_;`&#34;---&#34;`|// | ;/ \_ _/ `&#34;&#34;&#34;` </code></pre></li> <li><p>If the Istio ingress gateway is deployed in the <code>istio-system</code> namespace, print the gateway&rsquo;s log with the following command:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl logs -l istio=ingressgateway -c istio-proxy -n istio-system | grep &#39;httpbin.org&#39; </code></pre></li> <li><p>Search the log for an entry similar to:</p> <pre><code class='language-plain' data-expandlinks='true' data-repo='istio' >[2019-01-31T14:40:18.645Z] &#34;GET /status/418 HTTP/1.1&#34; 418 - 0 135 187 186 &#34;10.127.220.75&#34; &#34;curl/7.54.0&#34; &#34;28255618-6ca5-9d91-9634-c562694a3625&#34; &#34;httpbin.org&#34; &#34;34.232.181.106:80&#34; outbound|80||httpbin.org - 172.30.230.33:80 10.127.220.75:52077 - </code></pre></li> <li><p>Access the <code>edition.cnn.com</code> service through your ingress gateway:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ curl -s --resolve edition.cnn.com:$SECURE_INGRESS_PORT:$INGRESS_HOST https://edition.cnn.com:$SECURE_INGRESS_PORT | grep -o &#34;&lt;title&gt;.*&lt;/title&gt;&#34; &lt;title&gt;CNN International - Breaking News, US News, World News and Video&lt;/title&gt; </code></pre></li> <li><p>If the Istio ingress gateway is deployed in the <code>istio-system</code> namespace, print the gateway&rsquo;s log with the following command:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl logs -l istio=ingressgateway -c istio-proxy -n 
istio-system | grep &#39;edition.cnn.com&#39; </code></pre></li> <li><p>Search the log for an entry similar to:</p> <pre><code class='language-plain' data-expandlinks='true' data-repo='istio' >[2019-01-31T13:40:11.076Z] &#34;- - -&#34; 0 - 589 17798 1644 - &#34;-&#34; &#34;-&#34; &#34;-&#34; &#34;-&#34; &#34;172.217.31.132:443&#34; outbound|443||edition.cnn.com 172.30.230.33:54508 172.30.230.33:443 10.127.220.75:49467 edition.cnn.com </code></pre></li> </ol> <h2 id="cleanup">Cleanup</h2> <p>Remove the gateway, the virtual services and the service entries:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl delete gateway proxy $ kubectl delete virtualservice cnn httpbin $ kubectl delete serviceentry cnn httpbin-ext localhost $ kubectl delete destinationrule localhost </code></pre>Tue, 15 Oct 2019 00:00:00 +0000/v1.9/blog/2019/proxy/Vadim Eisenberg (IBM)/v1.9/blog/2019/proxy/traffic-managementingresshttpshttpMulti-Mesh Deployments for Isolation and Boundary Protection <p>Various compliance standards require protection of sensitive data environments. Some of the important standards and the types of sensitive data they protect appear in the following table:</p> <table> <thead> <tr> <th>Standard</th> <th>Sensitive data</th> </tr> </thead> <tbody> <tr> <td><a href="https://www.pcisecuritystandards.org/pci_security">PCI DSS</a></td> <td>payment card data</td> </tr> <tr> <td><a href="https://www.fedramp.gov">FedRAMP</a></td> <td>federal information, data and metadata</td> </tr> <tr> <td><a href="http://www.gpo.gov/fdsys/search/pagedetails.action?granuleId=CRPT-104hrpt736&amp;packageId=CRPT-104hrpt736">HIPAA</a></td> <td>personal health data</td> </tr> <tr> <td><a href="https://gdpr-info.eu">GDPR</a></td> <td>personal data</td> </tr> </tbody> </table> <p><a href="https://www.pcisecuritystandards.org/pci_security">PCI DSS</a>, for example, recommends putting cardholder data environment on a network, separate from the rest of the system. 
It also requires using a <a href="https://en.wikipedia.org/wiki/DMZ_(computing)">DMZ</a>, and setting firewalls between the public Internet and the DMZ, and between the DMZ and the internal network.</p> <p>Isolation of sensitive data environments from other information systems can reduce the scope of the compliance checks and improve the security of the sensitive data. Reducing the scope reduces the risks of failing a compliance check and reduces the costs of compliance since there are fewer components to check and secure, according to compliance requirements.</p> <p>You can achieve isolation of sensitive data by separating the parts of the application that process that data into a separate service mesh, preferably on a separate network, and then connecting the meshes with different compliance requirements together in a <span class="term" data-title="Multi-Mesh" data-body="&lt;p&gt;Multi-mesh is a deployment model that consists of two or more &lt;a href=&#34;/docs/reference/glossary/#service-mesh&#34;&gt;service meshes&lt;/a&gt;. Each mesh has independent administration for naming and identities but you can expose services between meshes through &lt;a href=&#34;/docs/reference/glossary/#mesh-federation&#34;&gt;mesh federation&lt;/a&gt;. The resulting deployment is a multi-mesh deployment.&lt;/p&gt; ">multi-mesh</span> deployment. The process of connecting inter-mesh applications is called <span class="term" data-title="Mesh Federation" data-body="&lt;p&gt;Mesh federation is the act of exposing services between meshes and enabling communication across mesh boundaries. Each mesh may expose a subset of its services to enable one or more other meshes to consume the exposed services.
You can use mesh federation to enable communication between meshes in a &lt;a href=&#34;/docs/ops/deployment/deployment-models/#multiple-meshes&#34;&gt;multi-mesh deployment&lt;/a&gt;.&lt;/p&gt; ">mesh federation</span>.</p> <p>Note that using mesh federation to create a multi-mesh deployment is very different than creating a <span class="term" data-title="Multicluster" data-body="&lt;p&gt;Multicluster is a deployment model that consists of a &lt;a href=&#34;/docs/reference/glossary/#service-mesh&#34;&gt;mesh&lt;/a&gt; with multiple &lt;a href=&#34;/docs/reference/glossary/#cluster&#34;&gt;clusters&lt;/a&gt;.&lt;/p&gt; ">multicluster</span> deployment, which defines a single service mesh composed from services spanning more than one cluster. Unlike multi-mesh, a multicluster deployment is not suitable for applications that require isolation and boundary protection.</p> <p>In this blog post I describe the requirements for isolation and boundary protection, and outline the principles of multi-mesh deployments. Finally, I touch on the current state of mesh-federation support and automation work under way for Istio.</p> <h2 id="isolation-and-boundary-protection">Isolation and boundary protection</h2> <p>Isolation and boundary protection mechanisms are explained in the <a href="http://dx.doi.org/10.6028/NIST.SP.800-53r4">NIST Special Publication 800-53, Revision 4, Security and Privacy Controls for Federal Information Systems and Organizations</a>, <em>Appendix F, Security Control Catalog, SC-7 Boundary Protection</em>.</p> <p>In particular, the <em>Boundary protection, isolation of information system components</em> control enhancement:</p> <div> <aside class="callout quote"> <div class="type"> <svg class="large-icon"><use xlink:href="/v1.9/img/icons.svg#callout-quote"/></svg> </div> <div class="content">Organizations can isolate information system components performing different missions and/or business functions. 
Such isolation limits unauthorized information flows among system components and also provides the opportunity to deploy greater levels of protection for selected components. Separating system components with boundary protection mechanisms provides the capability for increased protection of individual components and to more effectively control information flows between those components. This type of enhanced protection limits the potential harm from cyber attacks and errors. The degree of separation provided varies depending upon the mechanisms chosen. Boundary protection mechanisms include, for example, routers, gateways, and firewalls separating system components into physically separate networks or subnetworks, cross-domain devices separating subnetworks, virtualization techniques, and encrypting information flows among system components using distinct encryption keys.</div> </aside> </div> <p>Various compliance standards recommend isolating environments that process sensitive data from the rest of the organization. The <a href="https://www.pcisecuritystandards.org/pci_security/">Payment Card Industry (PCI) Data Security Standard</a> recommends implementing network isolation for <em>cardholder data</em> environment and requires isolating this environment from the <a href="https://en.wikipedia.org/wiki/DMZ_(computing)">DMZ</a>. 
<a href="https://www.fedramp.gov/assets/resources/documents/CSP_A_FedRAMP_Authorization_Boundary_Guidance.pdf">FedRAMP Authorization Boundary Guidance</a> describes <em>authorization boundary</em> for federal information and data, while <a href="https://doi.org/10.6028/NIST.SP.800-37r2">NIST Special Publication 800-37, Revision 2, Risk Management Framework for Information Systems and Organizations: A System Life Cycle Approach for Security and Privacy</a> recommends protecting of such a boundary in <em>Appendix G, Authorization Boundary Considerations</em>:</p> <div> <aside class="callout quote"> <div class="type"> <svg class="large-icon"><use xlink:href="/v1.9/img/icons.svg#callout-quote"/></svg> </div> <div class="content">Dividing a system into subsystems (i.e., divide and conquer) facilitates a targeted application of controls to achieve adequate security, protection of individual privacy, and a cost-effective risk management process. Dividing complex systems into subsystems also supports the important security concepts of domain separation and network segmentation, which can be significant when dealing with high value assets. When systems are divided into subsystems, organizations may choose to develop individual subsystem security and privacy plans or address the system and subsystems in the same security and privacy plans. Information security and privacy architectures play a key part in the process of dividing complex systems into subsystems. 
This includes monitoring and controlling communications at internal boundaries among subsystems and selecting, allocating, and implementing controls that meet or exceed the security and privacy requirements of the constituent subsystems.</div> </aside> </div> <p>Boundary protection, in particular, means:</p> <ul> <li>put an access control mechanism at the boundary (firewall, gateway, etc.)</li> <li>monitor the incoming/outgoing traffic at the boundary</li> <li>all the access control mechanisms must be <em>deny-all</em> by default</li> <li>do not expose private IP addresses from the boundary</li> <li>do not let components from outside the boundary impact security inside the boundary</li> </ul> <p>Multi-mesh deployments facilitate division of a system into subsystems with different security and compliance requirements, and facilitate boundary protection. You put each subsystem into a separate service mesh, preferably on a separate network. You connect the Istio meshes using gateways. The gateways monitor and control cross-mesh traffic at the boundary of each mesh.</p> <h2 id="features-of-multi-mesh-deployments">Features of multi-mesh deployments</h2> <ul> <li><strong>non-uniform naming</strong>. The <code>withdraw</code> service in the <code>accounts</code> namespace in one mesh might have different functionality and API than the <code>withdraw</code> services in the <code>accounts</code> namespace in other meshes. Such a situation could happen in an organization where there is no uniform policy on naming of namespaces and services, or when the meshes belong to different organizations.</li> <li><strong>expose-nothing by default</strong>. None of the services in a mesh are exposed by default; the mesh owners must explicitly specify which services are exposed.</li> <li><strong>boundary protection</strong>. The access control of the traffic must be enforced at the ingress gateway, which stops forbidden traffic from entering the mesh.
This requirement implements <a href="https://en.wikipedia.org/wiki/Defense_in_depth_(computing)">Defense-in-depth principle</a> and is part of some compliance standards, such as the <a href="https://www.pcisecuritystandards.org/pci_security/">Payment Card Industry (PCI) Data Security Standard</a>.</li> <li><strong>common trust may not exist</strong>. The Istio sidecars in one mesh may not trust the Citadel certificates in other meshes, due to some security requirement or due to the fact that the mesh owners did not initially plan to federate the meshes.</li> </ul> <p>While <strong>expose-nothing by default</strong> and <strong>boundary protection</strong> are required to facilitate compliance and improve security, <strong>non-uniform naming</strong> and <strong>common trust may not exist</strong> are required when connecting meshes of different organizations, or of an organization that cannot enforce uniform naming or cannot or may not establish common trust between the meshes.</p> <p>An optional feature that you may want to use is <strong>service location transparency</strong>: consuming services send requests to the exposed services in remote meshes using local service names. The consuming services are oblivious to the fact that some of the destinations are in remote meshes and some are local services. The access is uniform, using the local service names, for example, in Kubernetes, <code>reviews.default.svc.cluster.local</code>. <strong>Service location transparency</strong> is useful in the cases when you want to be able to change the location of the consumed services, for example when some service is migrated from private cloud to public cloud, without changing the code of your applications.</p> <h2 id="the-current-mesh-federation-work">The current mesh-federation work</h2> <p>While you can perform mesh federation using standard Istio configurations already today, it requires writing a lot of boilerplate YAML files and is error-prone. 
There is an effort under way to automate the mesh federation process. In the meantime, you can look at these <a href="https://github.com/istio-ecosystem/multi-mesh-examples">multi-mesh deployment examples</a> to get an idea of what a generated federation might include.</p> <h2 id="summary">Summary</h2> <p>In this blog post I described the requirements for isolation and boundary protection of sensitive data environments by using Istio multi-mesh deployments. I outlined the principles of Istio multi-mesh deployments and reported the current work on mesh federation in Istio.</p> <p>I will be happy to hear your opinion about <span class="term" data-title="Multi-Mesh" data-body="&lt;p&gt;Multi-mesh is a deployment model that consists of two or more &lt;a href=&#34;/docs/reference/glossary/#service-mesh&#34;&gt;service meshes&lt;/a&gt;. Each mesh has independent administration for naming and identities but you can expose services between meshes through &lt;a href=&#34;/docs/reference/glossary/#mesh-federation&#34;&gt;mesh federation&lt;/a&gt;. The resulting deployment is a multi-mesh deployment.&lt;/p&gt; ">multi-mesh</span> and <span class="term" data-title="Multicluster" data-body="&lt;p&gt;Multicluster is a deployment model that consists of a &lt;a href=&#34;/docs/reference/glossary/#service-mesh&#34;&gt;mesh&lt;/a&gt; with multiple &lt;a href=&#34;/docs/reference/glossary/#cluster&#34;&gt;clusters&lt;/a&gt;.&lt;/p&gt; ">multicluster</span> at <a href="https://discuss.istio.io">discuss.istio.io</a>.</p>Wed, 02 Oct 2019 00:00:00 +0000/v1.9/blog/2019/isolated-clusters/Vadim Eisenberg (IBM)/v1.9/blog/2019/isolated-clusters/traffic-managementmulticlustersecuritygatewaytlsMonitoring Blocked and Passthrough External Service Traffic <p>Understanding, controlling and securing your external service access is one of the key benefits that you get from a service mesh like Istio. 
From a security and operations point of view, it is critical to monitor what external service traffic is getting blocked, as it might surface possible misconfigurations or a security vulnerability if an application is attempting to communicate with a service that it should not be allowed to reach. Similarly, if you currently have a policy of allowing any external service access, it is beneficial to monitor the traffic so you can incrementally add explicit Istio configuration to allow access and better secure your cluster. In either case, having visibility into this traffic via telemetry is quite helpful as it enables you to create alerts and dashboards, and better reason about your security posture. This was a highly requested feature by production users of Istio and we are excited that the support for this was added in release 1.3.</p> <p>To implement this, the Istio <a href="https://istio.io/v1.6/docs/reference/config/policy-and-telemetry/metrics">default metrics</a> are augmented with explicit labels to capture blocked and passthrough external service traffic. This blog will cover how you can use these augmented metrics to monitor all external service traffic.</p> <p>The Istio control plane configures the sidecar proxy with predefined clusters called BlackHoleCluster and Passthrough, which block or allow all traffic respectively. To understand these clusters, let&rsquo;s start with what external and internal services mean in the context of an Istio service mesh.</p> <h2 id="external-and-internal-services">External and internal services</h2> <p>Internal services are defined as services which are part of your platform and are considered to be in the mesh. For internal services, the Istio control plane provides all the required configuration to the sidecars by default.
For example, in Kubernetes clusters, Istio configures the sidecars for all Kubernetes services to preserve the default Kubernetes behavior of all services being able to communicate with each other.</p> <p>External services are services which are not part of your platform, i.e., services which are outside of the mesh. For external services, Istio provides two options: first, to block all external service access (enabled by setting <code>global.outboundTrafficPolicy.mode</code> to <code>REGISTRY_ONLY</code>) and second, to allow all access to external services (enabled by setting <code>global.outboundTrafficPolicy.mode</code> to <code>ALLOW_ANY</code>). The default option for this setting (as of Istio 1.3) is to allow all external service access. This option can be configured via <a href="/v1.9/docs/reference/config/istio.mesh.v1alpha1/#MeshConfig-OutboundTrafficPolicy-Mode">mesh configuration</a>.</p> <p>This is where the BlackHole and Passthrough clusters are used.</p> <h2 id="what-are-blackhole-and-passthrough-clusters">What are BlackHole and Passthrough clusters?</h2> <ul> <li><strong>BlackHoleCluster</strong> - The BlackHoleCluster is a virtual cluster created in the Envoy configuration when <code>global.outboundTrafficPolicy.mode</code> is set to <code>REGISTRY_ONLY</code>. In this mode, all traffic to external services is blocked unless <a href="/v1.9/docs/reference/config/networking/service-entry">service entries</a> are explicitly added for each service. To implement this, the default virtual outbound listener at <code>0.0.0.0:15001</code>, which uses <a href="https://www.envoyproxy.io/docs/envoy/latest/intro/arch_overview/upstream/service_discovery#original-destination">original destination</a>, is set up as a TCP Proxy with the BlackHoleCluster as the static cluster.
The configuration for the BlackHoleCluster looks like this:</li> </ul> <pre><code class='language-json' data-expandlinks='true' data-repo='istio' >{ &#34;name&#34;: &#34;BlackHoleCluster&#34;, &#34;type&#34;: &#34;STATIC&#34;, &#34;connectTimeout&#34;: &#34;10s&#34; } </code></pre> <p>As you can see, this cluster is static with no endpoints, so all the traffic will be dropped. Additionally, Istio creates unique listeners for every port/protocol combination of platform services, which get hit instead of the virtual listener if the request is made to an external service on the same port. In that case, the route configuration of every virtual route in Envoy is augmented to add the BlackHoleCluster like this:</p> <pre><code class='language-json' data-expandlinks='true' data-repo='istio' >{ &#34;name&#34;: &#34;block_all&#34;, &#34;domains&#34;: [ &#34;*&#34; ], &#34;routes&#34;: [ { &#34;match&#34;: { &#34;prefix&#34;: &#34;/&#34; }, &#34;directResponse&#34;: { &#34;status&#34;: 502 } } ] } </code></pre> <p>The route is set up as a <a href="https://www.envoyproxy.io/docs/envoy/latest/api-v2/api/v2/route/route_components.proto#envoy-api-field-route-route-direct-response">direct response</a> with a <code>502</code> response code, which means that if no other routes match, the Envoy proxy will directly return a <code>502</code> HTTP status code.</p> <ul> <li><strong>PassthroughCluster</strong> - The PassthroughCluster is a virtual cluster created in the Envoy configuration when <code>global.outboundTrafficPolicy.mode</code> is set to <code>ALLOW_ANY</code>. In this mode, all traffic to any external service is allowed. To implement this, the default virtual outbound listener at <code>0.0.0.0:15001</code>, which uses <code>SO_ORIGINAL_DST</code>, is set up as a TCP Proxy with the PassthroughCluster as the static cluster.
The configuration for the PassthroughCluster looks like this:</li> </ul> <pre><code class='language-json' data-expandlinks='true' data-repo='istio' >{ &#34;name&#34;: &#34;PassthroughCluster&#34;, &#34;type&#34;: &#34;ORIGINAL_DST&#34;, &#34;connectTimeout&#34;: &#34;10s&#34;, &#34;lbPolicy&#34;: &#34;ORIGINAL_DST_LB&#34;, &#34;circuitBreakers&#34;: { &#34;thresholds&#34;: [ { &#34;maxConnections&#34;: 102400, &#34;maxRetries&#34;: 1024 } ] } } </code></pre> <p>This cluster uses the <a href="https://www.envoyproxy.io/docs/envoy/latest/intro/arch_overview/upstream/service_discovery#original-destination">original destination load balancing</a> policy, which configures Envoy to send the traffic to the original destination, i.e., passthrough.</p> <p>Similar to the BlackHoleCluster, for every port/protocol based listener the virtual route configuration is augmented to add the PassthroughCluster as the default route:</p> <pre><code class='language-json' data-expandlinks='true' data-repo='istio' >{ &#34;name&#34;: &#34;allow_any&#34;, &#34;domains&#34;: [ &#34;*&#34; ], &#34;routes&#34;: [ { &#34;match&#34;: { &#34;prefix&#34;: &#34;/&#34; }, &#34;route&#34;: { &#34;cluster&#34;: &#34;PassthroughCluster&#34; } } ] } </code></pre> <p>Prior to Istio 1.3, either no metrics were reported when traffic hit these clusters, or the metrics were reported without explicit labels, resulting in a lack of visibility into traffic flowing through the mesh.</p> <p>The next section covers how to take advantage of this enhancement as the metrics and labels emitted are conditional on whether the virtual outbound or explicit port/protocol listener is being hit.</p> <h2 id="using-the-augmented-metrics">Using the augmented metrics</h2> <p>To capture all external service traffic in either of the cases (BlackHole or Passthrough), you will need to monitor the <code>istio_requests_total</code> and <code>istio_tcp_connections_closed_total</code> metrics. Depending upon the Envoy listener type, i.e.
TCP proxy or HTTP proxy, that gets invoked, one of these metrics will be incremented.</p> <p>Additionally, in the case of a TCP proxy listener, in order to see the IP address of the external service that is blocked or allowed via the BlackHole or Passthrough cluster, you will need to add the <code>destination_ip</code> label to the <code>istio_tcp_connections_closed_total</code> metric. In this scenario, the host name of the external service is not captured. This label is not added by default and can be easily added by augmenting the Istio configuration for attribute generation and the Prometheus handler. You should be careful about cardinality explosion in time series if you have many services with non-stable IP addresses.</p> <h3 id="passthroughcluster-metrics">PassthroughCluster metrics</h3> <p>This section explains the metrics and the labels emitted based on the listener type invoked in Envoy.</p> <ul> <li>HTTP proxy listener: This happens when the port of the external service is the same as one of the service ports defined in the cluster.
In this scenario, when the PassthroughCluster is hit, <code>istio_requests_total</code> will get increased like this:</li> </ul> <pre><code class='language-json' data-expandlinks='true' data-repo='istio' >{ &#34;metric&#34;: { &#34;__name__&#34;: &#34;istio_requests_total&#34;, &#34;connection_security_policy&#34;: &#34;unknown&#34;, &#34;destination_app&#34;: &#34;unknown&#34;, &#34;destination_principal&#34;: &#34;unknown&#34;, &#34;destination_service&#34;: &#34;httpbin.org&#34;, &#34;destination_service_name&#34;: &#34;PassthroughCluster&#34;, &#34;destination_service_namespace&#34;: &#34;unknown&#34;, &#34;destination_version&#34;: &#34;unknown&#34;, &#34;destination_workload&#34;: &#34;unknown&#34;, &#34;destination_workload_namespace&#34;: &#34;unknown&#34;, &#34;instance&#34;: &#34;100.96.2.183:42422&#34;, &#34;job&#34;: &#34;istio-mesh&#34;, &#34;permissive_response_code&#34;: &#34;none&#34;, &#34;permissive_response_policyid&#34;: &#34;none&#34;, &#34;reporter&#34;: &#34;source&#34;, &#34;request_protocol&#34;: &#34;http&#34;, &#34;response_code&#34;: &#34;200&#34;, &#34;response_flags&#34;: &#34;-&#34;, &#34;source_app&#34;: &#34;sleep&#34;, &#34;source_principal&#34;: &#34;unknown&#34;, &#34;source_version&#34;: &#34;unknown&#34;, &#34;source_workload&#34;: &#34;sleep&#34;, &#34;source_workload_namespace&#34;: &#34;default&#34; }, &#34;value&#34;: [ 1567033080.282, &#34;1&#34; ] } </code></pre> <p>Note that the <code>destination_service_name</code> label is set to PassthroughCluster to indicate that this cluster was hit and the <code>destination_service</code> is set to the host of the external service.</p> <ul> <li>TCP proxy virtual listener - If the external service port doesn&rsquo;t map to any HTTP based service ports within the cluster, this listener is invoked and <code>istio_tcp_connections_closed_total</code> is the metric that will be increased:</li> </ul> <pre><code class='language-json' data-expandlinks='true' data-repo='istio' >{ 
&#34;status&#34;: &#34;success&#34;, &#34;data&#34;: { &#34;resultType&#34;: &#34;vector&#34;, &#34;result&#34;: [ { &#34;metric&#34;: { &#34;__name__&#34;: &#34;istio_tcp_connections_closed_total&#34;, &#34;connection_security_policy&#34;: &#34;unknown&#34;, &#34;destination_app&#34;: &#34;unknown&#34;, &#34;destination_ip&#34;: &#34;52.22.188.80&#34;, &#34;destination_principal&#34;: &#34;unknown&#34;, &#34;destination_service&#34;: &#34;unknown&#34;, &#34;destination_service_name&#34;: &#34;PassthroughCluster&#34;, &#34;destination_service_namespace&#34;: &#34;unknown&#34;, &#34;destination_version&#34;: &#34;unknown&#34;, &#34;destination_workload&#34;: &#34;unknown&#34;, &#34;destination_workload_namespace&#34;: &#34;unknown&#34;, &#34;instance&#34;: &#34;100.96.2.183:42422&#34;, &#34;job&#34;: &#34;istio-mesh&#34;, &#34;reporter&#34;: &#34;source&#34;, &#34;response_flags&#34;: &#34;-&#34;, &#34;source_app&#34;: &#34;sleep&#34;, &#34;source_principal&#34;: &#34;unknown&#34;, &#34;source_version&#34;: &#34;unknown&#34;, &#34;source_workload&#34;: &#34;sleep&#34;, &#34;source_workload_namespace&#34;: &#34;default&#34; }, &#34;value&#34;: [ 1567033761.879, &#34;1&#34; ] } ] } } </code></pre> <p>In this case, <code>destination_service_name</code> is set to PassthroughCluster and the <code>destination_ip</code> is set to the IP address of the external service. The <code>destination_ip</code> label can be used to do a reverse DNS lookup and get the host name of the external service. 
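</p> <p>The reverse lookup itself is a one-liner in most languages. The following Python sketch resolves a <code>destination_ip</code> label value back to a host name; the helper name is illustrative, and it falls back to the raw IP when no PTR record exists:</p>

```python
import socket

def resolve_destination_ip(destination_ip: str) -> str:
    """Best-effort reverse DNS lookup for a destination_ip label value."""
    try:
        # gethostbyaddr returns (hostname, alias_list, ip_address_list)
        host, _aliases, _addrs = socket.gethostbyaddr(destination_ip)
        return host
    except OSError:
        # No PTR record (common for CDN or cloud IPs): keep the raw IP
        return destination_ip
```

<p>Keep in mind that many external services sit behind shared IP ranges, so the reverse lookup may return a generic provider host name rather than the name of the service itself.</p> <p>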
As this cluster is passthrough, other TCP related metrics like <code>istio_tcp_connections_opened_total</code>, <code>istio_tcp_received_bytes_total</code> and <code>istio_tcp_sent_bytes_total</code> are also updated.</p> <h3 id="blackholecluster-metrics">BlackHoleCluster metrics</h3> <p>Similar to the PassthroughCluster, this section explains the metrics and the labels emitted based on the listener type invoked in Envoy.</p> <ul> <li>HTTP proxy listener: This happens when the port of the external service is the same as one of the service ports defined in the cluster. In this scenario, when the BlackHoleCluster is hit, <code>istio_requests_total</code> will get increased like this:</li> </ul> <pre><code class='language-json' data-expandlinks='true' data-repo='istio' >{ &#34;metric&#34;: { &#34;__name__&#34;: &#34;istio_requests_total&#34;, &#34;connection_security_policy&#34;: &#34;unknown&#34;, &#34;destination_app&#34;: &#34;unknown&#34;, &#34;destination_principal&#34;: &#34;unknown&#34;, &#34;destination_service&#34;: &#34;httpbin.org&#34;, &#34;destination_service_name&#34;: &#34;BlackHoleCluster&#34;, &#34;destination_service_namespace&#34;: &#34;unknown&#34;, &#34;destination_version&#34;: &#34;unknown&#34;, &#34;destination_workload&#34;: &#34;unknown&#34;, &#34;destination_workload_namespace&#34;: &#34;unknown&#34;, &#34;instance&#34;: &#34;100.96.2.183:42422&#34;, &#34;job&#34;: &#34;istio-mesh&#34;, &#34;permissive_response_code&#34;: &#34;none&#34;, &#34;permissive_response_policyid&#34;: &#34;none&#34;, &#34;reporter&#34;: &#34;source&#34;, &#34;request_protocol&#34;: &#34;http&#34;, &#34;response_code&#34;: &#34;502&#34;, &#34;response_flags&#34;: &#34;-&#34;, &#34;source_app&#34;: &#34;sleep&#34;, &#34;source_principal&#34;: &#34;unknown&#34;, &#34;source_version&#34;: &#34;unknown&#34;, &#34;source_workload&#34;: &#34;sleep&#34;, &#34;source_workload_namespace&#34;: &#34;default&#34; }, &#34;value&#34;: [ 1567034251.717, &#34;1&#34; ] } </code></pre>
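<p>Because the <code>destination_service_name</code> label distinguishes these clusters, the augmented metrics can drive dashboards and alerts directly. The following Prometheus queries are a sketch, assuming the default metric and label names shown in this post; adjust them to your own Prometheus setup:</p>

```promql
# Rate of HTTP requests blocked by the mesh, per source workload and target host
sum by (source_workload, destination_service) (
  rate(istio_requests_total{destination_service_name="BlackHoleCluster"}[5m])
)

# Rate of blocked TCP connections, per source workload
sum by (source_workload) (
  rate(istio_tcp_connections_closed_total{destination_service_name="BlackHoleCluster"}[5m])
)
```

<p>In a <code>REGISTRY_ONLY</code> cluster where all external access should go through service entries, a sustained non-zero value for either query is a signal of a misconfiguration or an unexpected external call.</p>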
<p>Note the <code>destination_service_name</code> label is set to BlackHoleCluster and the <code>destination_service</code> to the host name of the external service. The response code should always be <code>502</code> in this case.</p> <ul> <li>TCP proxy virtual listener: If the external service port doesn&rsquo;t map to any HTTP-based service ports within the cluster, this listener is invoked and the <code>istio_tcp_connections_closed_total</code> metric is incremented:</li> </ul> <pre><code class='language-json' data-expandlinks='true' data-repo='istio' >{
  &#34;metric&#34;: {
    &#34;__name__&#34;: &#34;istio_tcp_connections_closed_total&#34;,
    &#34;connection_security_policy&#34;: &#34;unknown&#34;,
    &#34;destination_app&#34;: &#34;unknown&#34;,
    &#34;destination_ip&#34;: &#34;52.22.188.80&#34;,
    &#34;destination_principal&#34;: &#34;unknown&#34;,
    &#34;destination_service&#34;: &#34;unknown&#34;,
    &#34;destination_service_name&#34;: &#34;BlackHoleCluster&#34;,
    &#34;destination_service_namespace&#34;: &#34;unknown&#34;,
    &#34;destination_version&#34;: &#34;unknown&#34;,
    &#34;destination_workload&#34;: &#34;unknown&#34;,
    &#34;destination_workload_namespace&#34;: &#34;unknown&#34;,
    &#34;instance&#34;: &#34;100.96.2.183:42422&#34;,
    &#34;job&#34;: &#34;istio-mesh&#34;,
    &#34;reporter&#34;: &#34;source&#34;,
    &#34;response_flags&#34;: &#34;-&#34;,
    &#34;source_app&#34;: &#34;sleep&#34;,
    &#34;source_principal&#34;: &#34;unknown&#34;,
    &#34;source_version&#34;: &#34;unknown&#34;,
    &#34;source_workload&#34;: &#34;sleep&#34;,
    &#34;source_workload_namespace&#34;: &#34;default&#34;
  },
  &#34;value&#34;: [ 1567034481.03, &#34;1&#34; ]
}
</code></pre> <p>Note the <code>destination_ip</code> label represents the IP address of the external service and the <code>destination_service_name</code> is set to BlackHoleCluster to indicate that this traffic was blocked by the mesh.
It is interesting to note that in the BlackHoleCluster case, other TCP-related metrics like <code>istio_tcp_connections_opened_total</code> are not increased, since no connection is ever established.</p> <p>Monitoring these metrics can help operators easily understand all the external services consumed by the applications in their cluster.</p>Sat, 28 Sep 2019 00:00:00 +0000/v1.9/blog/2019/monitoring-external-service-traffic/Neeraj Poddar (Aspen Mesh)/v1.9/blog/2019/monitoring-external-service-traffic/monitoringblackholepassthroughMixer Adapter for Knative <p>This post demonstrates how you can use Mixer to push application logic into Istio. It describes a Mixer adapter which implements the <a href="https://knative.dev/">Knative</a> scale-from-zero logic with simple code and similar performance to the original implementation.</p> <h2 id="knative-serving">Knative serving</h2> <p><a href="https://knative.dev/docs/serving/">Knative Serving</a> builds on <a href="https://kubernetes.io/">Kubernetes</a> to support deploying and serving of serverless applications. A core capability of serverless platforms is scale-to-zero functionality, which reduces resource usage and cost of inactive workloads.
A new mechanism is required to scale from zero when an idle application receives a new request.</p> <p>The following diagram represents the current Knative architecture for scale-from-zero.</p> <figure style="width:60%"> <div class="wrapper-with-intrinsic-ratio" style="padding-bottom:76.29350893697084%"> <a data-skipendnotes="true" href="/v1.9/blog/2019/knative-activator-adapter/knative-activator.png" title="Knative scale-from-zero"> <img class="element-to-stretch" src="/v1.9/blog/2019/knative-activator-adapter/knative-activator.png" alt="Knative scale-from-zero" /> </a> </div> <figcaption>Knative scale-from-zero</figcaption> </figure> <p>The traffic for an idle application is redirected to <strong>Activator</strong> component by programming Istio with <code>VirtualServices</code> and <code>DestinationRules</code>. When <strong>Activator</strong> receives a new request, it:</p> <ol> <li>buffers incoming requests</li> <li>triggers the <strong>Autoscaler</strong></li> <li>redirects requests to the application after it has been scaled up, including retries and load-balancing (if needed)</li> </ol> <p>Once the application is up and running again, Knative restores the routing from <strong>Activator</strong> to the running application.</p> <h2 id="mixer-adapter">Mixer adapter</h2> <p>Mixer provides a rich intermediation layer between the Istio components and infrastructure backends. It is designed as a stand-alone component, separate from <a href="https://www.envoyproxy.io/">Envoy</a>, and has a simple extensibility model to enable Istio to interoperate with a wide breadth of backends. Mixer is inherently easier to extend than Envoy is.</p> <p>Mixer is an attribute processing engine that uses operator-supplied configuration to map request attributes from the Istio proxy into calls to the infrastructure backends systems via a pluggable set of adapters. 
Adapters enable <strong>Mixer</strong> to expose a single consistent API, independent of the infrastructure backends in use. The exact set of adapters used at runtime is determined through operator configuration and can easily be extended to target new or custom infrastructure backends.</p> <p>In order to achieve Knative scale-from-zero, we use a Mixer <a href="https://github.com/istio/istio/wiki/Mixer-Out-Of-Process-Adapter-Dev-Guide">out-of-process adapter</a> to call the Autoscaler. Out-of-process adapters for Mixer allow developers to use any programming language and to build and maintain their extensions as stand-alone programs without the need to build the Istio proxy.</p> <p>The following diagram represents the Knative design using the <strong>Mixer</strong> adapter.</p> <figure style="width:60%"> <div class="wrapper-with-intrinsic-ratio" style="padding-bottom:76.29350893697084%"> <a data-skipendnotes="true" href="/v1.9/blog/2019/knative-activator-adapter/knative-mixer-adapter.png" title="Knative scale-from-zero"> <img class="element-to-stretch" src="/v1.9/blog/2019/knative-activator-adapter/knative-mixer-adapter.png" alt="Knative scale-from-zero" /> </a> </div> <figcaption>Knative scale-from-zero</figcaption> </figure> <p>In this design, there is no need to change the routing from/to <strong>Activator</strong> for an idle application as in the original Knative setup. When the Istio proxy represented by the ingress gateway component receives a new request for an idle application, it informs <strong>Mixer</strong>, including all the relevant metadata information.
<strong>Mixer</strong> then calls your adapter, which triggers the Knative <strong>Autoscaler</strong> using the original Knative protocol.</p> <div> <aside class="callout idea"> <div class="type"> <svg class="large-icon"><use xlink:href="/v1.9/img/icons.svg#callout-idea"/></svg> </div> <div class="content">By using this design, you do not need to deal with buffering, retries and load-balancing, because these are already handled by the Istio proxy.</div> </aside> </div> <p>Istio&rsquo;s use of Mixer adapters makes it possible to replace otherwise complex networking-based application logic with a more straightforward implementation, as demonstrated in the <a href="https://github.com/zachidan/istio-kactivator">Knative adapter</a>.</p> <p>When the adapter receives a message from <strong>Mixer</strong>, it sends a <code>StatMessage</code> directly to the <strong>Autoscaler</strong> component using the Knative protocol. The metadata information (<code>namespace</code> and <code>service name</code>) required by the <strong>Autoscaler</strong> is transferred by the Istio proxy to <strong>Mixer</strong> and from there to the adapter.</p> <h2 id="summary">Summary</h2> <p>I compared the cold-start time of the original Knative reference architecture to the new Istio Mixer adapter reference architecture. The results show similar cold-start times, while the implementation using the Mixer adapter is simpler: it is not necessary to handle low-level network-based mechanisms, as these are handled by Envoy.</p> <p>The next step is converting this Mixer adapter into an Envoy-specific filter running inside an ingress gateway.
This will further reduce the latency overhead (no more calls to <strong>Mixer</strong> and the adapter) and remove the dependency on the Istio Mixer.</p>Wed, 18 Sep 2019 00:00:00 +0000/v1.9/blog/2019/knative-activator-adapter/Idan Zach (IBM)/v1.9/blog/2019/knative-activator-adapter/mixeradapterknativescale-from-zeroApp Identity and Access Adapter <p>If you are running your containerized applications on Kubernetes, you can benefit from using the App Identity and Access Adapter for an abstracted level of security with zero code changes or redeploys.</p> <p>Whether your computing environment is based on a single cloud provider, a combination of multiple cloud providers, or following a hybrid cloud approach, having centralized identity management can help you to preserve existing infrastructure and avoid vendor lock-in.</p> <p>With the <a href="https://github.com/ibm-cloud-security/app-identity-and-access-adapter">App Identity and Access Adapter</a>, you can use any OAuth2/OIDC provider: IBM Cloud App ID, Auth0, Okta, Ping Identity, AWS Cognito, Azure AD B2C and more. Authentication and authorization policies can be applied in a streamlined way in all environments — including frontend and backend applications — all without code changes or redeploys.</p> <h2 id="understanding-istio-and-the-adapter">Understanding Istio and the adapter</h2> <p><a href="/v1.9/docs/concepts/what-is-istio/">Istio</a> is an open source service mesh that transparently layers onto distributed applications and seamlessly integrates with Kubernetes. To reduce the complexity of deployments, Istio provides behavioral insights and operational control over the service mesh as a whole. See the <a href="/v1.9/docs/ops/deployment/architecture/">Istio Architecture</a> for more details.</p> <p>Istio uses <a href="/v1.9/blog/2019/data-plane-setup/">Envoy proxy sidecars</a> to mediate inbound and outbound traffic for all pods in the service mesh.
Istio extracts telemetry from the Envoy sidecars and sends it to Mixer, the Istio component responsible for collecting telemetry and enforcing policy.</p> <p>The App Identity and Access adapter extends the Mixer functionality by analyzing the telemetry (attributes) against various access control policies across the service mesh. The access control policies can be linked to particular Kubernetes services and can be finely tuned to specific service endpoints. For more information about policies and telemetry, see the Istio documentation.</p> <p>When the <a href="https://github.com/ibm-cloud-security/app-identity-and-access-adapter">App Identity and Access Adapter</a> is combined with Istio, it provides a scalable, integrated identity and access solution for multicloud architectures that does not require any custom application code changes.</p> <h2 id="installation">Installation</h2> <p>The App Identity and Access adapter can be installed using Helm directly from the <code>github.com</code> repository:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ helm repo add appidentityandaccessadapter https://raw.githubusercontent.com/ibm-cloud-security/app-identity-and-access-adapter/master/helm/appidentityandaccessadapter
$ helm install --name appidentityandaccessadapter appidentityandaccessadapter/appidentityandaccessadapter
</code></pre> <p>Alternatively, you can clone the repository and install the Helm chart locally:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ git clone git@github.com:ibm-cloud-security/app-identity-and-access-adapter.git
$ helm install ./helm/appidentityandaccessadapter --name appidentityandaccessadapter
</code></pre> <h2 id="protecting-web-applications">Protecting web applications</h2> <p>Web applications are most commonly protected by the OpenID Connect (OIDC) workflow called <code>authorization_code</code>.
When an unauthenticated/unauthorized user is detected, they are automatically redirected to the identity service of your choice and presented with the authentication page. When authentication completes, the browser is redirected back to an implicit <code>/oidc/callback</code> endpoint intercepted by the adapter. At this point, the adapter obtains access and identity tokens from the identity service and then redirects users back to their originally requested URL in the web app.</p> <p>Authentication state and tokens are maintained by the adapter. Each request processed by the adapter will include the Authorization header bearing both access and identity tokens in the following format: <code>Authorization: Bearer &lt;access_token&gt; &lt;id_token&gt;</code>.</p> <p>Developers can leverage the tokens for application experience adjustments, e.g., displaying the user name or adjusting the UI based on the user role.</p> <p>To terminate the authenticated session and wipe the tokens (that is, to log the user out), simply redirect the browser to the <code>/oidc/logout</code> endpoint under the protected service. For example, if you&rsquo;re serving your app from <code>https://example.com/myapp</code>, redirect users to <code>https://example.com/myapp/oidc/logout</code>.</p> <p>Whenever the access token expires, a refresh token is used to automatically acquire new access and identity tokens without your users needing to re-authenticate.
If the configured identity provider returns a refresh token, it is persisted by the adapter and used to retrieve new access and identity tokens when the old ones expire.</p> <h3 id="applying-web-application-protection">Applying web application protection</h3> <p>Protecting web applications requires creating two types of resources: <code>OidcConfig</code> resources to define the various OIDC providers, and <code>Policy</code> resources to define the web app protection policies.</p> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' >apiVersion: &#34;security.cloud.ibm.com/v1&#34;
kind: OidcConfig
metadata:
  name: my-oidc-provider-config
  namespace: sample-namespace
spec:
  discoveryUrl: &lt;discovery-url-from-oidc-provider&gt;
  clientId: &lt;client-id-from-oidc-provider&gt;
  clientSecretRef:
    name: &lt;kubernetes-secret-name&gt;
    key: &lt;kubernetes-secret-key&gt;
</code></pre> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' >apiVersion: &#34;security.cloud.ibm.com/v1&#34;
kind: Policy
metadata:
  name: my-sample-web-policy
  namespace: sample-namespace
spec:
  targets:
    - serviceName: &lt;kubernetes-service-name-to-protect&gt;
      paths:
        - prefix: /webapp
          method: ALL
          policies:
            - policyType: oidc
              config: my-oidc-provider-config
              rules: # optional
                - claim: iss
                  match: ALL
                  source: access_token
                  values:
                    - &lt;expected-issuer-id&gt;
                - claim: scope
                  match: ALL
                  source: access_token
                  values:
                    - openid
</code></pre> <p><a href="https://github.com/ibm-cloud-security/app-identity-and-access-adapter">Read more about protecting web applications</a></p> <h2 id="protecting-backend-application-and-apis">Protecting backend applications and APIs</h2> <p>Backend applications and APIs are protected using the Bearer Token flow, where an incoming token is validated against a particular policy. The Bearer Token authorization flow expects a request to contain the <code>Authorization</code> header with a valid access token in JWT format.
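</p>
<p>To make the shape of that check concrete, here is a minimal sketch (illustrative only, standard library only, and not the adapter&rsquo;s actual implementation) of extracting a bearer token and inspecting its claims. A real validator must also verify the token signature against the keys published at the configured <code>jwksUrl</code>:</p>

```python
import base64
import json
import time

def bearer_token(auth_header):
    """Extract the access token from an 'Authorization: Bearer ...' header value."""
    scheme, _, token = auth_header.partition(" ")
    if scheme != "Bearer" or not token:
        raise ValueError("expected 'Bearer {access_token}'")
    return token.split(" ")[0]  # web flows may append an identity token

def jwt_payload(token):
    """Decode the claims segment of a JWT. Does NOT verify the signature."""
    segment = token.split(".")[1]
    segment += "=" * (-len(segment) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(segment))

# A toy token (header.payload.signature), for demonstration only.
claims_in = {"iss": "https://issuer.example.com", "scope": "files.read",
             "exp": int(time.time()) + 3600}
toy = ".".join([
    base64.urlsafe_b64encode(b'{"alg":"none"}').decode().rstrip("="),
    base64.urlsafe_b64encode(json.dumps(claims_in).encode()).decode().rstrip("="),
    "sig",
])

claims = jwt_payload(bearer_token("Bearer " + toy))
print(claims["scope"], claims["exp"] > time.time())
```

<p>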
The expected header structure is <code>Authorization: Bearer {access_token}</code>. If the token is successfully validated, the request is forwarded to the requested service. If token validation fails, an HTTP 401 response is returned to the client with a list of scopes that are required to access the API.</p> <h3 id="applying-backend-application-and-apis-protection">Applying backend application and APIs protection</h3> <p>Protecting backend applications and APIs requires creating two types of resources: <code>JwtConfig</code> resources to define the various JWT providers, and <code>Policy</code> resources to define the backend app protection policies.</p> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' >apiVersion: &#34;security.cloud.ibm.com/v1&#34;
kind: JwtConfig
metadata:
  name: my-jwt-config
  namespace: sample-namespace
spec:
  jwksUrl: &lt;the-jwks-url&gt;
</code></pre> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' >apiVersion: &#34;security.cloud.ibm.com/v1&#34;
kind: Policy
metadata:
  name: my-sample-backend-policy
  namespace: sample-namespace
spec:
  targets:
    - serviceName: &lt;kubernetes-service-name-to-protect&gt;
      paths:
        - prefix: /api/files
          method: ALL
          policies:
            - policyType: jwt
              config: my-jwt-config
              rules: # optional
                - claim: iss
                  match: ALL
                  source: access_token
                  values:
                    - &lt;expected-issuer-id&gt;
                - claim: scope
                  match: ALL
                  source: access_token
                  values:
                    - files.read
                    - files.write
</code></pre> <p><a href="https://github.com/ibm-cloud-security/app-identity-and-access-adapter">Read more about protecting backend applications</a></p> <h2 id="known-limitations">Known limitations</h2> <p>At the time of writing this blog, there are two known limitations of the App Identity and Access adapter:</p> <ul> <li><p>If you use the App Identity and Access adapter for Web Applications, you should not create more than a single replica of the adapter.
Due to the way Envoy Proxy handled HTTP headers, it was impossible to return multiple <code>Set-Cookie</code> headers from Mixer back to Envoy. Therefore, we couldn&rsquo;t set all the cookies required for handling Web Application scenarios. The issue was recently addressed in Envoy and Mixer, and we&rsquo;re planning to address this in future versions of our adapter. <strong>Note that this only affects Web Applications, and doesn&rsquo;t affect Backend Apps and APIs in any way</strong>.</p></li> <li><p>As a general best practice, you should always consider using mutual TLS for any in-cluster communication. At the moment, the communication channel between Mixer and the App Identity and Access adapter does not use mutual TLS. In the future, we plan to address this by implementing the approach described in the <a href="https://github.com/istio/istio/wiki/Mixer-Out-of-Process-Adapter-Walkthrough#step-7-encrypt-connection-between-mixer-and-grpc-adapter">Mixer Adapter developer guide</a>.</p></li> </ul> <h2 id="summary">Summary</h2> <p>When a multicloud strategy is in place, security can become complicated as the environment grows and diversifies. While cloud providers supply protocols and tools to ensure their offerings are safe, the development teams are still responsible for the application-level security, such as API access control with OAuth2, defending against man-in-the-middle attacks with traffic encryption, and providing mutual TLS for service access control. However, this becomes complex in a multicloud environment since you might need to define those security details for each service separately.
With proper security protocols in place, those external and internal threats can be mitigated.</p> <p>Development teams have spent time making their services portable to different cloud providers, and in the same regard, the security in place should be flexible and not infrastructure-dependent.</p> <p>Istio and the App Identity and Access Adapter allow you to secure your Kubernetes apps with absolutely zero code changes or redeployments regardless of which programming language and which frameworks you use. Following this approach ensures maximum portability of your apps, and the ability to easily enforce the same security policies across multiple environments.</p> <p>You can read more about the App Identity and Access Adapter in the <a href="https://www.ibm.com/cloud/blog/using-istio-to-secure-your-multicloud-kubernetes-applications-with-zero-code-change">release blog</a>.</p>Wed, 18 Sep 2019 00:00:00 +0000/v1.9/blog/2019/app-identity-and-access-adapter/Anton Aleksandrov (IBM)/v1.9/blog/2019/app-identity-and-access-adapter/securityoidcjwtpoliciesChange in Secret Discovery Service in Istio 1.3<p>In Istio 1.3, we are taking advantage of improvements in Kubernetes to issue certificates for workload instances more securely.</p> <p>When a Citadel Agent sends a certificate signing request to Citadel to get a certificate for a workload instance, it includes the JWT that the Kubernetes API server issued representing the service account of the workload instance. If Citadel can authenticate the JWT, it extracts the service account name needed to issue the certificate for the workload instance.</p> <p>Before Kubernetes 1.12, the Kubernetes API server issued JWTs with the following problems:</p> <ol> <li>The tokens don&rsquo;t have important fields to limit their scope of usage, such as <code>aud</code> or <code>exp</code>.
See <a href="https://github.com/kubernetes/community/blob/master/contributors/design-proposals/auth/bound-service-account-tokens.md">Bound Service Tokens</a> for more info.</li> <li>The tokens are mounted onto all the pods without a way to opt-out. See <a href="https://github.com/kubernetes/community/blob/master/contributors/design-proposals/storage/svcacct-token-volume-source.md">Service Account Token Volumes</a> for motivation.</li> </ol> <p>Kubernetes 1.12 introduces <code>trustworthy</code> JWTs to solve these issues. However, support for the <code>aud</code> field to have a different value than the API server audience didn&rsquo;t become available until <a href="https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.13.md">Kubernetes 1.13</a>. To better secure the mesh, Istio 1.3 only supports <code>trustworthy</code> JWTs and requires the value of the <code>aud</code> field to be <code>istio-ca</code> when you enable SDS. Before upgrading your Istio deployment to 1.3 with SDS enabled, verify that you use Kubernetes 1.13 or later.</p> <p>Make the following considerations based on your platform of choice:</p> <ul> <li><strong>GKE:</strong> Upgrade your cluster version to at least 1.13.</li> <li><strong>On-prem Kubernetes</strong> and <strong>GKE on-prem:</strong> Add <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#service-account-token-volume-projection">extra configurations</a> to your Kubernetes. You may also want to refer to the <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kube-apiserver/">api-server page</a> for the most up-to-date flag names.</li> <li>For other platforms, check with your provider. 
If your vendor does not support trustworthy JWTs, you will need to fall back to the file-mount approach to propagate the workload keys and certificates in Istio 1.3.</li> </ul>Tue, 10 Sep 2019 00:00:00 +0000/v1.9/blog/2019/trustworthy-jwt-sds/Phillip Quy Le (Google)/v1.9/blog/2019/trustworthy-jwt-sds/securityPKIcertificatenodeagentsdsThe Evolution of Istio's APIs <p>One of Istio’s main goals has always been, and continues to be, enabling teams to develop abstractions that work best for their specific organization and workloads. Istio provides robust and powerful building blocks for service-to-service networking. Since <a href="/v1.9/news/releases/0.x/announcing-0.1">Istio 0.1</a>, the Istio team has been learning from production users about how they map their own architectures, workloads, and constraints to Istio’s capabilities, and we’ve been evolving Istio’s APIs to make them work better for you.</p> <h2 id="evolving-istio-s-apis">Evolving Istio’s APIs</h2> <p>The next step in Istio’s evolution is to sharpen our focus and align with the roles of Istio’s users. A security admin should be able to interact with an API that logically groups and simplifies security operations within an Istio mesh; the same goes for service operators and traffic management operations.</p> <p>Taking it a step further, there’s an opportunity to provide improved experiences for beginning, intermediate, and advanced use cases for each role. There are many common use cases that can be addressed with obvious default settings and a better defined initial experience that requires little to no configuration. For intermediate use cases, the Istio team wants to leverage contextual cues from the environment and provide you with a simpler configuration experience. 
Finally, for advanced scenarios, our goal is to make <a href="https://www.quora.com/What-is-the-origin-of-the-phrase-make-the-easy-things-easy-and-the-hard-things-possible">easy things easy and hard things possible</a>.</p> <p>To provide these sorts of role-centric abstractions, however, the APIs underneath them must be able to describe all of Istio’s power and capabilities. Historically, Istio’s approach to API design followed paths similar to those of other infrastructure APIs. Istio follows these design principles:</p> <ol> <li>The Istio APIs should seek to: <ul> <li>Properly represent the underlying resources to which they are mapped</li> <li>Not hide any of the underlying resource’s useful capabilities</li> </ul></li> <li>The Istio APIs should also be <a href="https://en.wikipedia.org/wiki/Composability">composable</a>, so end users can combine infrastructure APIs in a way that makes sense for their own needs.</li> <li>The Istio APIs should be flexible: Within an organization, it should be possible to have different representations of the underlying resources and surface the ones that make sense for each individual team.</li> </ol> <p>Over the course of the next several releases, we will share our progress as we strengthen the alignment between Istio’s APIs and the roles of Istio users.</p> <h2 id="composability-and-abstractions">Composability and abstractions</h2> <p>Istio and Kubernetes often go together, but Istio is much more than an add-on to Kubernetes – it is as much a <em>platform</em> as Kubernetes is. Istio aims to provide infrastructure, and surface the capabilities you need in a powerful service mesh. For example, there are platform-as-a-service offerings that use Kubernetes as their foundation, and build on Kubernetes’ composability to provide a subset of APIs to application developers.</p> <p>The number of objects that must be configured to deploy applications is a concrete example of Kubernetes’ composability.
By our count, at least 10 objects need to be configured: <code>Namespace</code>, <code>Service</code>, <code>Ingress</code>, <code>Deployment</code>, <code>HorizontalPodAutoscaler</code>, <code>Secret</code>, <code>ConfigMap</code>, <code>RBAC</code>, <code>PodDisruptionBudget</code>, and <code>NetworkPolicy</code>.</p> <p>It sounds complicated, but not everyone needs to interact with those concepts. Some are the responsibility of different teams like the cluster, network, or security admin teams, and many provide sensible defaults. A great benefit of cloud native platforms and deployment tools is that they can hide that complexity by taking in a small amount of information and configuring those objects for you.</p> <p>Another example of composability in the networking space can be found in the <a href="https://cloud.google.com/load-balancing/docs/https/">Google Cloud HTTP(S) Load Balancer</a> (GCLB). To correctly use an instance of the GCLB, six different infrastructure objects need to be created and configured. This design is the result of our 20 years of experience in operating distributed systems and <a href="https://www.youtube.com/watch?v=J5HJ1y6PeyE">there is a reason why each one is separate from the others</a>. But the steps are simplified when you’re creating an instance via the Google Cloud console. We provide the more common end-user/role-specific configurations, and you can configure less common settings later. Ultimately, the goals of infrastructure APIs are to offer the most flexibility without sacrificing functionality.</p> <p><a href="http://knative.dev">Knative</a> is a platform for building, running, and operating serverless workloads that provides a great real-world example of role-centric, higher-level APIs. 
<a href="https://knative.dev/docs/serving/">Knative Serving</a>, a component of Knative that builds on Kubernetes and Istio to support deploying and serving serverless applications and functions, provides an opinionated workflow for application developers to manage routes and revisions of their services. Thanks to that opinionated approach, Knative Serving exposes a subset of Istio’s networking APIs that are most relevant to application developers via a simplified <a href="https://github.com/knative/docs/blob/master/docs/serving/spec/knative-api-specification-1.0.md#route">Routes</a> object that supports revisions and traffic routing, abstracting Istio’s <a href="/v1.9/docs/reference/config/networking/virtual-service/"><code>VirtualService</code></a> and <a href="/v1.9/docs/reference/config/networking/destination-rule/"><code>DestinationRule</code></a> resources.</p> <p>As Istio has matured, we’ve also seen production users develop workload- and organization-specific abstractions on top of Istio’s infrastructure APIs.</p> <p>AutoTrader UK has one of our favorite examples of a custom platform built on Istio. In <a href="https://kubernetespodcast.com/episode/052-autotrader/">an interview with the Kubernetes Podcast from Google</a>, Russel Warman and Karl Stoney describe their Kubernetes-based delivery platform, with <a href="https://karlstoney.com/2018/07/07/managing-your-costs-on-kubernetes/">cost dashboards using Prometheus and Grafana</a>. With minimal effort, they added configuration options to determine what their developers want configured on the network, and it now manages the Istio objects required to make that happen. There are countless other platforms being built in enterprise and cloud-native companies: some designed to replace a web of company-specific custom scripts, and some aimed to be a general-purpose public tool. 
As more companies start to talk about their tooling publicly, we&rsquo;ll bring their stories to this blog.</p> <h2 id="what-s-coming-next">What’s coming next</h2> <p>Some areas of improvement that we’re working on for upcoming releases include:</p> <ul> <li>Installation profiles to setup standard patterns for ingress and egress, with the Istio operator</li> <li>Automatic inference of container ports and protocols for telemetry</li> <li>Support for routing all traffic by default to constrain routing incrementally</li> <li>Add a single global flag to enable mutual TLS and encrypt all inter-pod traffic</li> </ul> <p>Oh, and if for some reason you judge a toolbox by the list of CRDs it installs, in Istio 1.2 we cut the number from 54 down to 23. Why? It turns out that if you have a bunch of features, you need to have a way to configure them all. With the improvements we’ve made to our installer, you can now install Istio using a <a href="/v1.9/docs/setup/additional-setup/config-profiles/">configuration</a> that works with your adapters.</p> <p>All service meshes and, by extension, Istio seeks to automate complex infrastructure operations, like networking and security. That means there will always be complexity in its APIs, but Istio will always aim to solve the needs of operators, while continuing to evolve the API to provide robust building blocks and prioritize flexibility through role-centric abstractions.</p> <p>We can&rsquo;t wait for you to join our <a href="/v1.9/about/community/join/">community</a> to see what you build with Istio next!</p>Mon, 05 Aug 2019 00:00:00 +0000/v1.9/blog/2019/evolving-istios-apis/Louis Ryan (Google), Sandeep Parikh (Google)/v1.9/blog/2019/evolving-istios-apis/apiscomposabilityevolutionSecure Control of Egress Traffic in Istio, part 3 <p>Welcome to part 3 in our series about secure control of egress traffic in Istio. 
In <a href="/v1.9/blog/2019/egress-traffic-control-in-istio-part-1/">the first part in the series</a>, I presented the attacks involving egress traffic and the requirements we collected for a secure control system for egress traffic. In <a href="/v1.9/blog/2019/egress-traffic-control-in-istio-part-2/">the second part in the series</a>, I presented the Istio way of securing egress traffic and showed how you can prevent the attacks using Istio.</p> <p>In this installment, I compare secure control of egress traffic in Istio with alternative solutions such as using Kubernetes network policies and legacy egress proxies and firewalls. Finally, I describe the performance considerations regarding the secure control of egress traffic in Istio.</p> <h2 id="alternative-solutions-for-egress-traffic-control">Alternative solutions for egress traffic control</h2> <p>First, let&rsquo;s remember the <a href="/v1.9/blog/2019/egress-traffic-control-in-istio-part-1/#requirements-for-egress-traffic-control">requirements for egress traffic control</a> we previously collected:</p> <ol> <li>Support of <a href="https://en.wikipedia.org/wiki/Transport_Layer_Security">TLS</a> with <a href="https://en.wikipedia.org/wiki/Server_Name_Indication">SNI</a> or of <a href="/v1.9/docs/reference/glossary/#tls-origination">TLS origination</a>.</li> <li><strong>Monitor</strong> SNI and the source workload of every egress access.</li> <li>Define and enforce <strong>policies per cluster</strong>.</li> <li>Define and enforce <strong>policies per source</strong>, <em>Kubernetes-aware</em>.</li> <li><strong>Prevent tampering</strong>.</li> <li>Traffic control is <strong>transparent</strong> to the applications.</li> </ol> <p>Next, I&rsquo;m going to cover two alternative solutions for egress traffic control: the Kubernetes network policies and egress proxies and firewalls. 
I show the requirements they satisfy, and, more importantly, the requirements they can&rsquo;t satisfy.</p> <p>Kubernetes provides a native solution for traffic control, and in particular, for control of egress traffic, through the <a href="https://kubernetes.io/docs/concepts/services-networking/network-policies/">network policies</a>. Using these network policies, cluster operators can configure which pods can access specific external services. Cluster operators can identify pods by pod labels, namespace labels, or by IP ranges. To specify the external services, cluster operators can use IP ranges, but cannot use domain names like <code>cnn.com</code>. This is because <strong>Kubernetes network policies are not DNS-aware</strong>. Network policies satisfy the first requirement since they can control any TCP traffic. Network policies only partially satisfy the third and the fourth requirements because cluster operators can specify policies per cluster or per pod but operators can&rsquo;t identify external services by domain names. Network policies only satisfy the fifth requirement if the attackers are not able to break from a malicious container into the Kubernetes node and interfere with the implementation of the policies inside said node. Lastly, network policies do satisfy the sixth requirement: Operators don&rsquo;t need to change the code or the container environment. In summary, we can say that Kubernetes Network Policies provide transparent, Kubernetes-aware egress traffic control, which is not DNS-aware.</p> <p>The second alternative predates the Kubernetes network policies. Using a <strong>DNS-aware egress proxy or firewall</strong> lets you configure applications to direct the traffic to the proxy and use some proxy protocol, for example, <a href="https://en.wikipedia.org/wiki/SOCKS">SOCKS</a>. Since operators must configure the applications, this solution is not transparent. 
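</p> <p>To make the transparency problem concrete, here is the kind of per-application configuration a legacy egress proxy typically requires; the proxy address below is purely illustrative:</p>

```shell
# Legacy egress proxies are not transparent: every application or
# container must be explicitly configured to use the proxy, for
# example via well-known environment variables.
export HTTPS_PROXY=http://egress-proxy.corp.example:3128
# Clients such as curl honor this variable and tunnel via the proxy:
#   curl https://edition.cnn.com/
echo "egress proxy in use: $HTTPS_PROXY"
```

<p>Istio, by contrast, requires no such per-application configuration.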
Moreover, operators can&rsquo;t use pod labels or pod service accounts to configure the proxies because the egress proxies don&rsquo;t know about them. Therefore, <strong>the egress proxies are not Kubernetes-aware</strong> and can&rsquo;t fulfill the fourth requirement because egress proxies cannot enforce policies by source if a Kubernetes artifact specifies the source. In summary, egress proxies can fulfill the first, second, third and fifth requirements, but can&rsquo;t satisfy the fourth and the sixth requirements because they are not transparent and not Kubernetes-aware.</p> <h2 id="advantages-of-istio-egress-traffic-control">Advantages of Istio egress traffic control</h2> <p>Istio egress traffic control is <strong>DNS-aware</strong>: you can define policies based on URLs or on wildcard domains like <code>*.ibm.com</code>. In this sense, it is better than Kubernetes network policies, which are not DNS-aware.</p> <p>Istio egress traffic control is <strong>transparent</strong> with regard to TLS traffic, since Istio is transparent: you don&rsquo;t need to change the applications or configure their containers. For HTTP traffic with TLS origination, you must configure the applications in the mesh to use HTTP instead of HTTPS.</p> <p>Istio egress traffic control is <strong>Kubernetes-aware</strong>: the identity of the source of egress traffic is based on Kubernetes service accounts.
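</p> <p>As a concrete sketch of these two properties: in Istio, an external service is declared by host name rather than by IP range (DNS-aware), and policies can then be tied to workload service accounts (Kubernetes-aware). A minimal, illustrative declaration of the <code>*.ibm.com</code> hosts might look like this (the resource name is hypothetical):</p>

```yaml
# Declares external hosts matching *.ibm.com to the mesh. Combined with
# an outbound traffic policy of REGISTRY_ONLY, only declared hosts are
# reachable from inside the mesh.
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: ibm-wildcard
spec:
  hosts:
  - "*.ibm.com"
  location: MESH_EXTERNAL
  ports:
  - number: 443
    name: tls
    protocol: TLS
  resolution: NONE
```

<p>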
Istio egress traffic control is better than the legacy DNS-aware proxies or firewalls which are not transparent and not Kubernetes-aware.</p> <p>Istio egress traffic control is <strong>secure</strong>: it is based on the strong identity of Istio and, when you apply <a href="/v1.9/docs/tasks/traffic-management/egress/egress-gateway/#additional-security-considerations">additional security measures</a>, Istio&rsquo;s traffic control is resilient to tampering.</p> <p>Additionally, Istio&rsquo;s egress traffic control provides the following advantages:</p> <ul> <li>Define access policies in the same language for ingress, egress, and in-cluster traffic. You need to learn a single policy and configuration language for all types of traffic.</li> <li>Out-of-the-Box integration of Istio&rsquo;s egress traffic control with Istio&rsquo;s policy and observability adapters.</li> <li>Write the adapters to use external monitoring or access control systems with Istio only once and apply them for all types of traffic: ingress, egress, and in-cluster.</li> <li>Use Istio&rsquo;s <a href="/v1.9/docs/concepts/traffic-management/">traffic management features</a> for egress traffic: load balancing, passive and active health checking, circuit breaker, timeouts, retries, fault injection, and others.</li> </ul> <p>We refer to a system with the advantages above as <strong>Istio-aware</strong>.</p> <p>The following table summarizes the egress traffic control features that Istio and the alternative solutions provide:</p> <table> <thead> <tr> <th></th> <th>Istio Egress Traffic Control</th> <th>Kubernetes Network Policies</th> <th>Legacy Egress Proxy or Firewall</th> </tr> </thead> <tbody> <tr> <td>DNS-aware</td> <td><svg class="large-icon"><use xlink:href="/v1.9/img/icons.svg#checkmark"/></svg></td> <td><svg class="large-icon"><use xlink:href="/v1.9/img/icons.svg#cancel"/></svg></td> <td><svg class="large-icon"><use xlink:href="/v1.9/img/icons.svg#checkmark"/></svg></td> </tr> <tr> 
<td>Kubernetes-aware</td> <td><svg class="large-icon"><use xlink:href="/v1.9/img/icons.svg#checkmark"/></svg></td> <td><svg class="large-icon"><use xlink:href="/v1.9/img/icons.svg#checkmark"/></svg></td> <td><svg class="large-icon"><use xlink:href="/v1.9/img/icons.svg#cancel"/></svg></td> </tr> <tr> <td>Transparent</td> <td><svg class="large-icon"><use xlink:href="/v1.9/img/icons.svg#checkmark"/></svg></td> <td><svg class="large-icon"><use xlink:href="/v1.9/img/icons.svg#checkmark"/></svg></td> <td><svg class="large-icon"><use xlink:href="/v1.9/img/icons.svg#cancel"/></svg></td> </tr> <tr> <td>Istio-aware</td> <td><svg class="large-icon"><use xlink:href="/v1.9/img/icons.svg#checkmark"/></svg></td> <td><svg class="large-icon"><use xlink:href="/v1.9/img/icons.svg#cancel"/></svg></td> <td><svg class="large-icon"><use xlink:href="/v1.9/img/icons.svg#cancel"/></svg></td> </tr> </tbody> </table> <h2 id="performance-considerations">Performance considerations</h2> <p>Controlling egress traffic using Istio has a price: increased latency of calls to external services and increased CPU usage by the cluster&rsquo;s pods. Traffic passes through two proxies:</p> <ul> <li>The application&rsquo;s sidecar proxy</li> <li>The egress gateway&rsquo;s proxy</li> </ul> <p>If you use <a href="/v1.9/docs/tasks/traffic-management/egress/wildcard-egress-hosts/">TLS egress traffic to wildcard domains</a>, you must add <a href="/v1.9/docs/tasks/traffic-management/egress/wildcard-egress-hosts/#wildcard-configuration-for-arbitrary-domains">an additional proxy</a> between the application and the external service. 
Since the traffic between the egress gateway&rsquo;s proxy and the proxy needed for the configuration of arbitrary domains using wildcards is on the pod&rsquo;s local network, that traffic shouldn&rsquo;t have a significant impact on latency.</p> <p>See a <a href="/v1.9/blog/2019/egress-performance/">performance evaluation</a> of different Istio configurations set to control egress traffic. I would encourage you to carefully measure different configurations with your own applications and your own external services, before you decide whether you can afford the performance overhead for your use cases. You should weigh the required level of security against your performance requirements and compare the performance overhead of all alternative solutions.</p> <p>Let me share my thoughts on the performance overhead that controlling egress traffic using Istio adds: accessing external services can already involve high latency, so the overhead added by two or three proxies inside the cluster is likely not very significant by comparison. After all, applications with a microservice architecture can have chains of dozens of calls between microservices. Therefore, an additional hop with one or two proxies in the egress gateway should not have a large impact.</p> <p>Moreover, we continue to work towards reducing Istio&rsquo;s performance overhead. Possible optimizations include:</p> <ul> <li>Extending Envoy to handle wildcard domains: this would eliminate the need for a third proxy between the application and the external services for that use case.</li> <li>Using mutual TLS for authentication only, without re-encrypting the traffic, since the traffic is already TLS-encrypted.</li> </ul> <h2 id="summary">Summary</h2> <p>I hope that after reading this series you are convinced that controlling egress traffic is very important for the security of your cluster.
Hopefully, I also managed to convince you that Istio is an effective tool to control egress traffic securely, and that Istio has multiple advantages over the alternative solutions. Istio is the only solution I&rsquo;m aware of that lets you:</p> <ul> <li>Control egress traffic in a secure and transparent way</li> <li>Specify external services as domain names</li> <li>Use Kubernetes artifacts to specify the traffic source</li> </ul> <p>In my opinion, secure control of egress traffic is a great choice if you are looking for your first Istio use case. In this case, Istio already provides you some benefits even before you start using all other Istio features: <a href="/v1.9/docs/tasks/traffic-management/">traffic management</a>, <a href="/v1.9/docs/tasks/security/">security</a>, <a href="https://istio.io/v1.6/docs/tasks/policy-enforcement/">policies</a> and <a href="/v1.9/docs/tasks/observability/">observability</a>, applied to traffic between microservices inside the cluster.</p> <p>So, if you haven&rsquo;t had the chance to work with Istio yet, <a href="/v1.9/docs/setup/install/">install Istio</a> on your cluster and check our <a href="/v1.9/docs/tasks/traffic-management/egress/">egress traffic control tasks</a> and the tasks for the other <a href="/v1.9/docs/tasks/">Istio features</a>. We also want to hear from you, please join us at <a href="https://discuss.istio.io">discuss.istio.io</a>.</p>Mon, 22 Jul 2019 00:00:00 +0000/v1.9/blog/2019/egress-traffic-control-in-istio-part-3/Vadim Eisenberg (IBM)/v1.9/blog/2019/egress-traffic-control-in-istio-part-3/traffic-managementegresssecuritygatewaytlsSecure Control of Egress Traffic in Istio, part 2 <p>Welcome to part 2 in our new series about secure control of egress traffic in Istio. In <a href="/v1.9/blog/2019/egress-traffic-control-in-istio-part-1/">the first part in the series</a>, I presented the attacks involving egress traffic and the requirements we collected for a secure control system for egress traffic. 
In this installment, I describe the Istio way to securely control the egress traffic, and show how Istio can help you prevent the attacks.</p> <h2 id="secure-control-of-egress-traffic-in-istio">Secure control of egress traffic in Istio</h2> <p>To implement secure control of egress traffic in Istio, you must <a href="/v1.9/docs/tasks/traffic-management/egress/egress-gateway/#egress-gateway-for-https-traffic">direct TLS traffic to external services through an egress gateway</a>. Alternatively, you can <a href="/v1.9/docs/tasks/traffic-management/egress/egress-gateway/#egress-gateway-for-http-traffic">direct HTTP traffic through an egress gateway</a> and <a href="/v1.9/docs/tasks/traffic-management/egress/egress-gateway-tls-origination/#perform-tls-origination-with-an-egress-gateway">let the egress gateway perform TLS origination</a>.</p> <p>Both alternatives have their pros and cons; you should choose between them according to your circumstances. The choice mainly depends on whether your application can send unencrypted HTTP requests and whether your organization&rsquo;s security policies allow sending unencrypted HTTP requests. For example, if your application uses a client library that encrypts the traffic and offers no way to disable the encryption, you cannot use the option of sending unencrypted HTTP traffic. The same applies if your organization&rsquo;s security policies do not allow sending unencrypted HTTP requests <strong>inside the pod</strong> (outside the pod the traffic is encrypted by Istio).</p> <p>If the application sends HTTP requests and the egress gateway performs TLS origination, you can monitor HTTP information like HTTP methods, headers, and URL paths. You can also <a href="/v1.9/blog/2018/egress-monitoring-access-control">define policies</a> based on said HTTP information.
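</p> <p>As an illustrative sketch of the TLS origination option (the external host below is just an example), a destination rule instructs the egress gateway to upgrade the plain HTTP connection to TLS toward the external service:</p>

```yaml
# Originate TLS at the egress gateway: the application sends plain HTTP
# inside the mesh, and the gateway opens a TLS connection on port 443.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: originate-tls-for-external-service
spec:
  host: edition.cnn.com
  trafficPolicy:
    portLevelSettings:
    - port:
        number: 443
      tls:
        mode: SIMPLE # plain TLS origination toward the external service
```

<p>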
If the application performs TLS origination, you can <a href="https://istio.io/v1.6/docs/tasks/traffic-management/egress/egress_sni_monitoring_and_policies/">monitor SNI and the service account</a> of the source pod&rsquo;s TLS traffic, and define policies based on SNI and service accounts.</p> <p>You must ensure that traffic from your cluster to the outside cannot bypass the egress gateway. Istio cannot enforce it for you, so you must apply some <a href="/v1.9/docs/tasks/traffic-management/egress/egress-gateway/#additional-security-considerations">additional security mechanisms</a>, for example, the <a href="https://kubernetes.io/docs/concepts/services-networking/network-policies/">Kubernetes network policies</a> or an L3 firewall. See an example of the <a href="/v1.9/docs/tasks/traffic-management/egress/egress-gateway/#apply-kubernetes-network-policies">Kubernetes network policies configuration</a>. According to the <a href="https://en.wikipedia.org/wiki/Defense_in_depth_(computing)">Defense in depth</a> concept, the more security mechanisms you apply for the same goal, the better.</p> <p>You must also ensure that the Istio control plane and the egress gateway cannot be compromised. While you may have hundreds or thousands of application pods in your cluster, there are only a dozen or so Istio control plane and gateway pods. You can and should focus on protecting the control plane pods and the gateways, since it is easy (there is only a small number of pods to protect) and most crucial for the security of your cluster. If attackers compromise the control plane or the egress gateway, they could violate any policy.</p> <p>You might have multiple tools to protect the control plane pods, depending on your environment.
The reasonable security measures are:</p> <ul> <li>Run the control plane pods on nodes separate from the application nodes.</li> <li>Run the control plane pods in their own separate namespace.</li> <li>Apply the Kubernetes RBAC and network policies to protect the control plane pods.</li> <li>Monitor the control plane pods more closely than you do the application pods.</li> </ul> <p>Once you direct egress traffic through an egress gateway and apply the additional security mechanisms, you can securely monitor and enforce security policies for the traffic.</p> <p>The following diagram shows Istio&rsquo;s security architecture, augmented with an L3 firewall which is part of the <a href="/v1.9/docs/tasks/traffic-management/egress/egress-gateway/#additional-security-considerations">additional security mechanisms</a> that should be provided outside of Istio.</p> <figure style="width:80%"> <div class="wrapper-with-intrinsic-ratio" style="padding-bottom:54.89557965057057%"> <a data-skipendnotes="true" href="/v1.9/blog/2019/egress-traffic-control-in-istio-part-2/SecurityArchitectureWithL3Firewalls.svg" title="Istio Security Architecture with Egress Gateway and L3 Firewall"> <img class="element-to-stretch" src="/v1.9/blog/2019/egress-traffic-control-in-istio-part-2/SecurityArchitectureWithL3Firewalls.svg" alt="Istio Security Architecture with Egress Gateway and L3 Firewall" /> </a> </div> <figcaption>Istio Security Architecture with Egress Gateway and L3 Firewall</figcaption> </figure> <p>You can configure the L3 firewall trivially to only allow incoming traffic through the Istio ingress gateway and only allow outgoing traffic through the Istio egress gateway. 
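</p> <p>A sketch of such a restriction expressed as a Kubernetes network policy (namespace and label values are illustrative and must match your installation; DNS and control plane traffic, which also need to be allowed, are omitted for brevity):</p>

```yaml
# Deny all other egress from application pods; only traffic to the
# istio-egressgateway pods is allowed to leave them.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-egress-only-to-egress-gateway
  namespace: default
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          istio-system: "true" # assumes you labeled the istio-system namespace
      podSelector:
        matchLabels:
          istio: egressgateway
```

<p>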
The Istio proxies of the gateways enforce policies and report telemetry just as all other proxies in the mesh do.</p> <p>Now let&rsquo;s examine possible attacks and see how the secure control of egress traffic in Istio prevents them.</p> <h2 id="preventing-possible-attacks">Preventing possible attacks</h2> <p>Consider the following security policies for egress traffic:</p> <ul> <li>Application <strong>A</strong> is allowed to access <code>*.ibm.com</code>, which includes all the external services with URLs matching <code>*.ibm.com</code>.</li> <li>Application <strong>B</strong> is allowed to access <code>mongo1.composedb.com</code>.</li> <li>All egress traffic is monitored.</li> </ul> <p>Suppose the attackers have the following goals:</p> <ul> <li>Access <code>*.ibm.com</code> from your cluster.</li> <li>Access <code>*.ibm.com</code> from your cluster, unmonitored. The attackers want their traffic to be unmonitored to reduce the chance that you detect the forbidden access.</li> <li>Access <code>mongo1.composedb.com</code> from your cluster.</li> </ul> <p>Now suppose that the attackers manage to break into one of the pods of application <strong>A</strong>, and try to use the compromised pod to perform the forbidden access. The attackers may try their luck and access the external services in a straightforward way. You will react to the straightforward attempts as follows:</p> <ul> <li>Initially, there is no way to prevent a compromised application <strong>A</strong> from accessing <code>*.ibm.com</code>, because the compromised pod is indistinguishable from the original pod.</li> <li>Fortunately, you can monitor all access to external services, detect suspicious traffic, and thwart attackers from gaining unmonitored access to <code>*.ibm.com</code>.
For example, you could apply anomaly detection tools on the egress traffic logs.</li> <li>To stop attackers from accessing <code>mongo1.composedb.com</code> from your cluster, Istio will correctly detect the source of the traffic, application <strong>A</strong> in this case, and verify that it is not allowed to access <code>mongo1.composedb.com</code> according to the security policies mentioned above.</li> </ul> <p>Having failed to achieve their goals in a straightforward way, the malicious actors may resort to advanced attacks:</p> <ul> <li><strong>Bypass the container&rsquo;s sidecar proxy</strong> to be able to access any external service directly, without the sidecar&rsquo;s policy enforcement and reporting. This attack is prevented by a Kubernetes Network Policy or by an L3 firewall that allows egress traffic to exit the mesh only from the egress gateway.</li> <li><strong>Compromise the egress gateway</strong> to be able to force it to send fake information to the monitoring system or to disable enforcement of the security policies. This attack is prevented by applying special security measures to the egress gateway pods.</li> <li><strong>Impersonate application B</strong> since application <strong>B</strong> is allowed to access <code>mongo1.composedb.com</code>. This attack, fortunately, is prevented by Istio&rsquo;s <a href="/v1.9/docs/concepts/security/#istio-identity">strong identity support</a>.</li> </ul> <p>As far as we can see, all the forbidden access is prevented, or at least is monitored and can be prevented later. If you see other attacks that involve egress traffic or security holes in the current design, we would be happy <a href="https://discuss.istio.io">to hear about them</a>.</p> <h2 id="summary">Summary</h2> <p>Hopefully, I managed to convince you that Istio is an effective tool to prevent attacks involving egress traffic.
In <a href="/v1.9/blog/2019/egress-traffic-control-in-istio-part-3/">the next part of this series</a>, I compare secure control of egress traffic in Istio with alternative solutions such as <a href="https://kubernetes.io/docs/concepts/services-networking/network-policies/">Kubernetes Network Policies</a> and legacy egress proxies/firewalls.</p>Wed, 10 Jul 2019 00:00:00 +0000/v1.9/blog/2019/egress-traffic-control-in-istio-part-2/Vadim Eisenberg (IBM)/v1.9/blog/2019/egress-traffic-control-in-istio-part-2/traffic-managementegresssecuritygatewaytlsBest Practices: Benchmarking Service Mesh Performance <p>Service meshes add a lot of functionality to application deployments, including <a href="/v1.9/docs/concepts/what-is-istio/#traffic-management">traffic policies</a>, <a href="/v1.9/docs/concepts/what-is-istio/#observability">observability</a>, and <a href="/v1.9/docs/concepts/what-is-istio/#security">secure communication</a>. But adding a service mesh to your environment comes at a cost, whether that&rsquo;s time (added latency) or resources (CPU cycles). To make an informed decision on whether a service mesh is right for your use case, it&rsquo;s important to evaluate how your application performs when deployed with a service mesh.</p> <p>Earlier this year, we published a <a href="/v1.9/blog/2019/istio1.1_perf/">blog post</a> on Istio&rsquo;s performance improvements in version 1.1. Following the release of <a href="/v1.9/news/releases/1.2.x/announcing-1.2/">Istio 1.2</a>, we want to provide guidance and tools to help you benchmark Istio&rsquo;s data plane performance in a production-ready Kubernetes environment.</p> <p>Overall, we found that Istio&rsquo;s <a href="/v1.9/docs/ops/deployment/architecture/#envoy">sidecar proxy</a> latency scales with the number of concurrent connections. 
At 1000 requests per second (RPS), across 16 connections, Istio adds <strong>3 milliseconds</strong> per request in the 50th percentile, and <strong>10 milliseconds</strong> in the 99th percentile.</p> <p>In the <a href="https://github.com/istio/tools/tree/3ac7ab40db8a0d595b71f47b8ba246763ecd6213/perf/benchmark">Istio Tools repository</a>, you’ll find scripts and instructions for measuring Istio&rsquo;s data plane performance, with additional instructions on how to run the scripts with <a href="https://linkerd.io">Linkerd</a>, another service mesh implementation. <a href="https://github.com/istio/tools/tree/3ac7ab40db8a0d595b71f47b8ba246763ecd6213/perf/benchmark#setup">Follow along</a> as we detail some best practices for each step of the performance test framework.</p> <h2 id="1-use-a-production-ready-istio-installation">1. Use a production-ready Istio installation</h2> <p>To accurately measure the performance of a service mesh at scale, it&rsquo;s important to use an <a href="https://github.com/istio/tools/tree/3ac7ab40db8a0d595b71f47b8ba246763ecd6213/perf/istio-install#istio-setup">adequately-sized</a> Kubernetes cluster. We test using three worker nodes, each with at least 4 vCPUs and 15 GB of memory.</p> <p>Then, it&rsquo;s important to use a production-ready Istio <strong>installation profile</strong> on that cluster. This lets us achieve performance-oriented settings such as control plane pod autoscaling, and ensures that resource limits are appropriate for heavy traffic load. The <a href="https://archive.istio.io/1.4/docs/setup/install/helm/#installation-steps">default</a> Istio installation is suitable for most benchmarking use cases. 
For extensive performance benchmarking, with thousands of proxy-injected services, we also provide <a href="https://github.com/istio/tools/blob/3ac7ab40db8a0d595b71f47b8ba246763ecd6213/perf/istio-install/values.yaml">a tuned Istio install</a> that allocates extra memory and CPU to the Istio control plane.</p> <p><svg class="large-icon"><use xlink:href="/v1.9/img/icons.svg#exclamation-mark"/></svg> Istio&rsquo;s <a href="/v1.9/docs/setup/getting-started/">demo installation</a> is not suitable for performance testing, because it is designed to be deployed on a small trial cluster, and has full tracing and access logs enabled to showcase Istio&rsquo;s features.</p> <h2 id="2-focus-on-the-data-plane">2. Focus on the data plane</h2> <p>Our benchmarking scripts focus on evaluating the Istio data plane: the <span class="term" data-title="Envoy" data-body="&lt;p&gt;The high-performance proxy that Istio uses to mediate inbound and outbound traffic for all &lt;a href=&#34;/docs/reference/glossary/#service&#34;&gt;services&lt;/a&gt; in the &lt;a href=&#34;/docs/reference/glossary/#service-mesh&#34;&gt;service mesh&lt;/a&gt;. &lt;a href=&#34;https://envoyproxy.github.io/envoy/&#34;&gt;Learn more about Envoy&lt;/a&gt;.&lt;/p&gt; ">Envoy</span> proxies that mediate traffic between application containers. Why focus on the data plane? Because at scale, with lots of application containers, the data plane’s <strong>memory</strong> and <strong>CPU</strong> usage quickly eclipses that of the Istio control plane. Let&rsquo;s look at an example of how this happens:</p> <p>Say you run 2,000 Envoy-injected pods, each handling 1,000 requests per second. Each proxy is using 50 MB of memory, and to configure all these proxies, Pilot is using 1 vCPU and 1.5 GB of memory. 
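</p> <p>These numbers multiply out as follows:</p>

```shell
# Back-of-the-envelope check of the sizing example above.
PODS=2000           # Envoy-injected pods
PROXY_MEM_MB=50     # memory per sidecar proxy, in MB
DATA_PLANE_GB=$(( PODS * PROXY_MEM_MB / 1000 ))
echo "data plane memory: ${DATA_PLANE_GB} GB (Pilot: 1.5 GB)"
```

<p>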
All together, the Istio data plane (the sum of all the Envoy proxies) is using 100 GB of memory, compared to Pilot&rsquo;s 1.5 GB.</p> <p>It is also important to focus on data plane performance for <strong>latency</strong> reasons. This is because most application requests move through the Istio data plane, not the control plane. There are two exceptions:</p> <ol> <li><strong>Telemetry reporting:</strong> Each proxy sends raw telemetry data to Mixer, which Mixer processes into metrics, traces, and other telemetry. The raw telemetry data is similar to access logs, and therefore comes at a cost. Access log processing consumes CPU and keeps a worker thread from picking up the next unit of work. At higher throughput, it is more likely that the next unit of work is waiting in the queue to be picked up by the worker. This can lead to long-tail (99th percentile) latency for Envoy.</li> <li><strong>Custom policy checks:</strong> When using <a href="/v1.9/docs/concepts/observability/">custom Istio policy adapters</a>, policy checks are on the request path. This means that request headers and metadata on the data path will be sent to the control plane (Mixer), resulting in higher request latency. 
<strong>Note:</strong> These policy checks are <a href="https://archive.istio.io/v1.4/docs/reference/config/installation-options/">disabled by default</a>, as the most common policy use case (<a href="https://archive.istio.io/v1.4/docs/reference/config/security/istio.rbac.v1alpha1">RBAC</a>) is performed entirely by the Envoy proxies.</li> </ol> <p>Both of these exceptions will go away in a future Istio release, when <a href="https://docs.google.com/document/d/1QKmtem5jU_2F3Lh5SqLp0IuPb80_70J7aJEYu4_gS-s">Mixer V2</a> moves all policy and telemetry features directly into the proxies.</p> <p>Next, when testing Istio&rsquo;s data plane performance at scale, it&rsquo;s important to test not only at increasing requests per second, but also against an increasing number of <strong>concurrent</strong> connections. This is because real-world, high-throughput traffic comes from multiple clients. The <a href="https://github.com/istio/tools/tree/3ac7ab40db8a0d595b71f47b8ba246763ecd6213/perf/benchmark#run-performance-tests">provided scripts</a> allow you to perform the same load test with any number of concurrent connections, at increasing RPS.</p> <p>Lastly, our test environment measures requests between two pods, not many. The client pod is <a href="http://fortio.org/">Fortio</a>, which sends traffic to the server pod.</p> <p>Why test with only two pods? Because scaling up throughput (RPS) and connections (threads) has a greater effect on Envoy&rsquo;s performance than increasing the total size of the service registry — or, the total number of pods and services in the Kubernetes cluster. When the size of the service registry grows, Envoy does have to keep track of more endpoints, and lookup time per request does increase, but by a tiny constant. 
If you have many services, and this constant becomes a latency concern, Istio provides a <a href="/v1.9/docs/reference/config/networking/sidecar/">Sidecar resource</a>, which allows you to limit which services each Envoy knows about.</p> <h2 id="3-measure-with-and-without-proxies">3. Measure with and without proxies</h2> <p>While many Istio features, such as <a href="/v1.9/docs/concepts/security/#mutual-tls-authentication">mutual TLS authentication</a>, rely on an Envoy proxy next to an application pod, you can <a href="https://archive.istio.io/1.4/docs/setup/additional-setup/sidecar-injection/#disabling-or-updating-the-webhook">selectively disable</a> sidecar proxy injection for some of your mesh services. As you scale up Istio for production, you may want to incrementally add the sidecar proxy to your workloads.</p> <p>To that end, the test scripts provide <a href="https://github.com/istio/tools/tree/3ac7ab40db8a0d595b71f47b8ba246763ecd6213/perf/benchmark#run-performance-tests">three different modes</a>. These modes analyze Istio&rsquo;s performance when a request goes through both the client and server proxies (<code>both</code>), just the server proxy (<code>serveronly</code>), and neither proxy (<code>baseline</code>).</p> <p>You can also disable <a href="/v1.9/docs/concepts/observability/">Mixer</a> to stop Istio&rsquo;s telemetry during the performance tests, which provides results in line with the performance we expect when the Mixer V2 work is completed. Istio also supports <a href="https://github.com/istio/istio/wiki/Envoy-native-telemetry">Envoy native telemetry</a>, which performs similarly to having Istio&rsquo;s telemetry disabled.</p> <h2 id="istio-1-2-performance">Istio 1.2 Performance</h2> <p>Let&rsquo;s see how to use this test environment to analyze the data plane performance of Istio 1.2. 
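</p> <p>The load pattern used throughout these tests corresponds to a Fortio invocation along the following lines (the target URL is illustrative, and <code>fortio</code> must be installed to actually run the command):</p>

```shell
# fortio load drives a fixed request rate over a fixed number of
# concurrent connections and reports latency percentiles (p50/p90/p99).
# -qps: request throughput, -c: concurrent connections, -t: duration.
FORTIO_CMD='fortio load -qps 1000 -c 16 -t 60s http://fortioserver:8080/echo'
echo "$FORTIO_CMD"
```

<p>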
We also provide instructions to run the <a href="https://github.com/istio/tools/tree/3ac7ab40db8a0d595b71f47b8ba246763ecd6213/perf/benchmark/linkerd">same performance tests for the Linkerd data plane</a>. Currently, only latency benchmarking is supported for Linkerd.</p> <p>For measuring Istio&rsquo;s sidecar proxy latency, we look at the 50th, 90th, and 99th percentiles for an increasing number of concurrent connections, keeping request throughput (RPS) constant.</p> <p>We found that with 16 concurrent connections and 1000 RPS, Istio adds <strong>3ms</strong> over the baseline (P50) when a request travels through both a client and server proxy. (Subtract the pink line, <code>base</code>, from the green line, <code>both</code>.) At 64 concurrent connections, Istio adds <strong>12ms</strong> over the baseline, but with Mixer disabled (<code>nomixer_both</code>), Istio only adds <strong>7ms</strong>.</p> <figure style="width:75%"> <div class="wrapper-with-intrinsic-ratio" style="padding-bottom:60%"> <a data-skipendnotes="true" href="/v1.9/blog/2019/performance-best-practices/latency_p50.png" title="Istio sidecar proxy, 50th percentile latency"> <img class="element-to-stretch" src="/v1.9/blog/2019/performance-best-practices/latency_p50.png" alt="Istio sidecar proxy, 50th percentile latency" /> </a> </div> <figcaption></figcaption> </figure> <p>In the 90th percentile, with 16 concurrent connections, Istio adds <strong>6ms</strong>; with 64 connections, Istio adds <strong>20ms</strong>.</p> <figure style="width:75%"> <div class="wrapper-with-intrinsic-ratio" style="padding-bottom:60%"> <a data-skipendnotes="true" href="/v1.9/blog/2019/performance-best-practices/latency_p90.png" title="Istio sidecar proxy, 90th percentile latency"> <img class="element-to-stretch" src="/v1.9/blog/2019/performance-best-practices/latency_p90.png" alt="Istio sidecar proxy, 90th percentile latency" /> </a> </div> <figcaption></figcaption> </figure> <p>Finally, in the 99th percentile, with 16
connections, Istio adds <strong>10ms</strong> over the baseline. At 64 connections, Istio adds <strong>25ms</strong> with Mixer, or <strong>10ms</strong> without Mixer.</p> <figure style="width:75%"> <div class="wrapper-with-intrinsic-ratio" style="padding-bottom:60%"> <a data-skipendnotes="true" href="/v1.9/blog/2019/performance-best-practices/latency_p99.png" title="Istio sidecar proxy, 99th percentile latency"> <img class="element-to-stretch" src="/v1.9/blog/2019/performance-best-practices/latency_p99.png" alt="Istio sidecar proxy, 99th percentile latency" /> </a> </div> <figcaption></figcaption> </figure> <p>For CPU usage, we measured with an increasing request throughput (RPS), and a constant number of concurrent connections. We found that Envoy&rsquo;s maximum CPU usage at 3000 RPS, with Mixer enabled, was <strong>1.2 vCPUs</strong>. At 1000 RPS, one Envoy uses approximately half of a CPU.</p> <figure style="width:75%"> <div class="wrapper-with-intrinsic-ratio" style="padding-bottom:60%"> <a data-skipendnotes="true" href="/v1.9/blog/2019/performance-best-practices/cpu_max.png" title="Istio sidecar proxy, max CPU usage"> <img class="element-to-stretch" src="/v1.9/blog/2019/performance-best-practices/cpu_max.png" alt="Istio sidecar proxy, max CPU usage" /> </a> </div> <figcaption></figcaption> </figure> <h2 id="summary">Summary</h2> <p>In the process of benchmarking Istio&rsquo;s performance, we learned several key lessons:</p> <ul> <li>Use an environment that mimics production.</li> <li>Focus on data plane traffic.</li> <li>Measure against a baseline.</li> <li>Increase concurrent connections as well as total throughput.</li> </ul> <p>For a mesh with 1000 RPS across 16 connections, Istio 1.2 adds just <strong>3 milliseconds</strong> of latency over the baseline, in the 50th percentile.</p> <div> <aside class="callout tip"> <div class="type"><svg class="large-icon"><use xlink:href="/v1.9/img/icons.svg#callout-tip"/></svg></div> <div class="content">Istio&rsquo;s 
performance depends on your specific setup and traffic load. Because of this variance, make sure your test setup accurately reflects your production workloads. To try out the benchmarking scripts, head over to the <a href="https://github.com/istio/tools/tree/3ac7ab40db8a0d595b71f47b8ba246763ecd6213/perf/benchmark">Istio Tools repository</a>.</div> </aside> </div> <p>Also check out the <a href="/v1.9/docs/ops/deployment/performance-and-scalability">Istio Performance and Scalability guide</a> for the most up-to-date performance data.</p> <p>Thank you for reading, and happy benchmarking!</p>Tue, 09 Jul 2019 00:00:00 +0000/v1.9/blog/2019/performance-best-practices/Megan O'Keefe (Google), John Howard (Google), Mandar Jog (Google)/v1.9/blog/2019/performance-best-practices/performancescalabilityscalebenchmarksExtending Istio Self-Signed Root Certificate Lifetime<p>Istio self-signed certificates have historically had a 1-year default lifetime. If you are using Istio self-signed certificates, you need to schedule regular root transitions before they expire. Expiration of a root certificate may lead to an unexpected cluster-wide outage. The issue affects new clusters created with versions up to 1.0.7 and 1.1.7.</p> <p>See <a href="https://istio.io/v1.7/docs/ops/configuration/security/root-transition/">Extending Self-Signed Certificate Lifetime</a> for information on how to gauge the age of your certificates and how to perform rotation.</p> <div> <aside class="callout tip"> <div class="type"><svg class="large-icon"><use xlink:href="/v1.9/img/icons.svg#callout-tip"/></svg></div> <div class="content">We strongly recommend you rotate root keys and root certificates annually as a security best practice.
We will send out instructions for root key/cert rotation soon.</div> </aside> </div>Fri, 07 Jun 2019 00:00:00 +0000/v1.9/blog/2019/root-transition/Oliver Liu/v1.9/blog/2019/root-transition/securityPKIcertificateCitadelSecure Control of Egress Traffic in Istio, part 1 <p>This is part 1 in a new series about secure control of egress traffic in Istio. In this installment, I explain why you should apply egress traffic control to your cluster, the attacks involving egress traffic you want to prevent, and the requirements a system for egress traffic control must meet to prevent them. Once you agree that you should control the egress traffic coming from your cluster, the following questions arise: What is required from a system for secure control of egress traffic? Which is the best solution to fulfill these requirements? (spoiler: Istio, in my opinion) Future installments will describe <a href="/v1.9/blog/2019/egress-traffic-control-in-istio-part-2/">the implementation of the secure control of egress traffic in Istio</a> and compare it with other solutions.</p> <p>The most important security aspect for a service mesh is probably ingress traffic. You definitely must prevent attackers from penetrating the cluster through ingress APIs. Having said that, securing the traffic leaving the mesh is also very important. Once your cluster is compromised, and you must be prepared for that scenario, you want to reduce the damage as much as possible and prevent the attackers from using the cluster for further attacks on external services and legacy systems outside of the cluster. To achieve that goal, you need secure control of egress traffic.</p> <p>Compliance requirements are another reason to implement secure control of egress traffic.
For example, the <a href="https://www.pcisecuritystandards.org/pci_security/">Payment Card Industry (PCI) Data Security Standard</a> requires that inbound and outbound traffic be restricted to that which is necessary:</p> <div> <aside class="callout quote"> <div class="type"> <svg class="large-icon"><use xlink:href="/v1.9/img/icons.svg#callout-quote"/></svg> </div> <div class="content"><em>1.2.1 Restrict inbound and outbound traffic to that which is necessary for the cardholder data environment, and specifically deny all other traffic.</em></div> </aside> </div> <p>And specifically regarding outbound traffic:</p> <div> <aside class="callout quote"> <div class="type"> <svg class="large-icon"><use xlink:href="/v1.9/img/icons.svg#callout-quote"/></svg> </div> <div class="content"><em>1.3.4 Do not allow unauthorized outbound traffic from the cardholder data environment to the Internet&hellip; All traffic outbound from the cardholder data environment should be evaluated to ensure that it follows established, authorized rules. Connections should be inspected to restrict traffic to only authorized communications (for example by restricting source/destination addresses/ports, and/or blocking of content).</em></div> </aside> </div> <p>Let&rsquo;s start with the attacks that involve egress traffic.</p> <h2 id="the-attacks">The attacks</h2> <p>An IT organization must assume it will be attacked if it hasn&rsquo;t been attacked already, and that part of its infrastructure could already be compromised or become compromised in the future. Once attackers are able to penetrate an application in a cluster, they can proceed to attack external services: legacy systems, external web services and databases. The attackers may want to steal the data of the application and transfer it to their external servers. Attackers&rsquo; malware may require access to attackers&rsquo; servers to download updates.
The attackers may use pods in the cluster to perform DDoS attacks or to break into external systems. Even though you <a href="https://en.wikipedia.org/wiki/There_are_known_knowns">cannot know</a> all the possible types of attacks, you want to reduce the possibility of any attack, known or unknown.</p> <p>The external attackers gain access to the application’s container from outside the mesh through a bug in the application, but attackers can also be internal, for example, malicious DevOps people inside the organization.</p> <p>To prevent the attacks described above, some form of egress traffic control must be applied. Let me present egress traffic control in the following section.</p> <h2 id="the-solution-secure-control-of-egress-traffic">The solution: secure control of egress traffic</h2> <p>Secure control of egress traffic means monitoring the egress traffic and enforcing all the security policies regarding the egress traffic. Monitoring the egress traffic enables you to analyze it, possibly offline, and detect the attacks even if you were unable to prevent them in real time.
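</p> <p>As one illustrative way to perform such monitoring (assuming Istio&rsquo;s standard Prometheus telemetry is enabled; the host pattern below is an example only), egress traffic to a set of external hosts can be broken down by source workload with a query like:</p> <pre><code class='language-plain' data-expandlinks='true' data-repo='istio' >sum(rate(istio_tcp_sent_bytes_total{destination_service=~&#34;.*\\.bar\\.com&#34;}[5m])) by (source_workload)
</code></pre> <p>The metric and label names above are the ones Istio&rsquo;s default telemetry reports for TCP traffic; an unexpected <code>source_workload</code> in the result is exactly the kind of suspicious access you want to detect.</p> <p>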
Another good practice to reduce the possibility of attacks is to specify policies that limit access following the <a href="https://en.wikipedia.org/wiki/Need_to_know#In_computer_technology">Need to know</a> principle: only the applications that need external services should be allowed to access the external services they need.</p> <p>Let me now turn to the requirements for egress traffic control we collected.</p> <h2 id="requirements-for-egress-traffic-control">Requirements for egress traffic control</h2> <p>My colleagues at IBM and I collected requirements for secure control of egress traffic from several customers, and combined them with the <a href="https://docs.google.com/document/d/1-Cq_Y-yuyNklvdnaZF9Qngl3xe0NnArT7Xt_Wno9beg">egress traffic control requirements from the Kubernetes Network Special Interest Group</a>.</p> <p>Istio 1.1 satisfies all gathered requirements:</p> <ol> <li><p>Support for <a href="https://en.wikipedia.org/wiki/Transport_Layer_Security">TLS</a> with <a href="https://en.wikipedia.org/wiki/Server_Name_Indication">SNI</a> or for <a href="/v1.9/docs/reference/glossary/#tls-origination">TLS origination</a> by Istio.</p></li> <li><p><strong>Monitor</strong> SNI and the source workload of every egress access.</p></li> <li><p>Define and enforce <strong>policies per cluster</strong>, e.g.:</p> <ul> <li><p>all applications in the cluster may access <code>service1.foo.com</code> (a specific host)</p></li> <li><p>all applications in the cluster may access any host of the form <code>*.bar.com</code> (a wildcarded domain)</p></li> </ul> <p>All unspecified access must be blocked.</p></li> <li><p>Define and enforce <strong>policies per source</strong>, <em>Kubernetes-aware</em>:</p> <ul> <li><p>application <code>A</code> may access <code>*.foo.com</code>.</p></li> <li><p>application <code>B</code> may access <code>*.bar.com</code>.</p></li> </ul> <p>All other access must be blocked, in particular access of application <code>A</code> to
<code>service1.bar.com</code>.</p></li> <li><p><strong>Prevent tampering</strong>. If an application pod is compromised, prevent the compromised pod from escaping monitoring, from sending fake information to the monitoring system, and from breaking the egress policies.</p></li> <li><p>Nice to have: traffic control is <strong>transparent</strong> to the applications.</p></li> </ol> <p>Let me explain each requirement in more detail. The first requirement states that only TLS traffic to the external services must be supported. The requirement emerged from the observation that all the traffic that leaves the cluster must be encrypted. This means that either the applications perform TLS origination or Istio must perform TLS origination for them. Note that in the case where an application performs TLS origination, the Istio proxies cannot see the original traffic, only the encrypted one; the proxies see the TLS protocol only. For the proxies it does not matter if the original protocol is HTTP or MongoDB; all the Istio proxies can see is TLS traffic.</p> <p>The second requirement states that SNI and the source of the traffic must be monitored. Monitoring is the first step to prevent attacks. Even if attackers are able to access external services from the cluster, if the access is monitored, there is a chance to discover the suspicious traffic and take corrective action.</p> <p>Note that in the case of TLS originated by an application, the Istio sidecar proxies can only see TCP traffic and a TLS handshake that includes SNI. A label of the source pod could identify the source of the traffic, but a service account of the pod or some other source identifier could be used instead. We call this property of an egress control system <em>being Kubernetes-aware</em>: the system must understand Kubernetes artifacts like pods and service accounts.
If the system is not Kubernetes-aware, it can only monitor the IP address as the identifier of the source.</p> <p>The third requirement states that Istio operators must be able to define policies for egress traffic for the entire cluster. The policies state which external services may be accessed by any pod in the cluster. The external services can be identified either by a <a href="https://en.wikipedia.org/wiki/Fully_qualified_domain_name">Fully qualified domain name</a> of the service, e.g. <code>www.ibm.com</code>, or by a wildcarded domain, e.g. <code>*.ibm.com</code>. Only the specified external services may be accessed; all other egress traffic is blocked.</p> <p>This requirement originates from the need to prevent attackers from accessing malicious sites, for example for downloading updates/instructions for their malware. You also want to limit the number of external sites that the attackers can access and attack. You want to allow access only to the external services that the applications in the cluster need to access and to block access to all the other services; this way you reduce the <a href="https://en.wikipedia.org/wiki/Attack_surface">attack surface</a>. While the external services can have their own security mechanisms, you want to exercise <a href="https://en.wikipedia.org/wiki/Defense_in_depth_(computing)">Defense in depth</a> and to have multiple security layers: a security layer in your cluster in addition to the security layers in the external systems.</p> <p>This requirement means that the external services must be identifiable by domain names. We call this property of an egress control system <em>being DNS-aware</em>. If the system is not DNS-aware, the external services must be specified by IP addresses. Using IP addresses is not convenient and often is not feasible, since the IP addresses of a service can change.
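</p> <p>To make the per-cluster, DNS-aware policies above concrete, here is a sketch (the host and resource names are examples, not from the requirements document) of how such a policy can be expressed in Istio: a <code>ServiceEntry</code> registers a wildcarded domain, while the mesh is configured to block all unregistered hosts (<code>outboundTrafficPolicy</code> mode <code>REGISTRY_ONLY</code>):</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl apply -f - &lt;&lt;EOF
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: external-bar
spec:
  hosts:
  - &#34;*.bar.com&#34; # example wildcarded domain
  ports:
  - number: 443
    name: tls
    protocol: TLS
  resolution: NONE
EOF
</code></pre> <p>With such a configuration, egress traffic to any <code>*.bar.com</code> host is allowed, while traffic to hosts not registered in the mesh is blocked.</p> <p>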
Sometimes not all the IP addresses of a service are even known, for example in the case of <a href="https://en.wikipedia.org/wiki/Content_delivery_network">CDNs</a>.</p> <p>The fourth requirement states that the source of the egress traffic must be added to the policies, effectively extending the third requirement. Policies can specify which source can access which external service, and the source must be identified just as in the second requirement, for example, by a label of the source pod or by the service account of the pod. It means that policy enforcement must also be <em>Kubernetes-aware</em>. If policy enforcement is not Kubernetes-aware, the policies must identify the source of traffic by the IP of the pod, which is not convenient, especially since pods can come and go, so their IPs are not static.</p> <p>The fifth requirement states that even if the cluster is compromised and the attackers control some of the pods, they must not be able to cheat the monitoring or to violate policies of the egress control system. We say that such a system provides <em>secure</em> control of egress traffic.</p> <p>The sixth requirement states that the traffic control should be provided without changing the application containers, in particular without changing the code of the applications and without changing the environment of the containers. We call such a control of egress traffic <em>transparent</em>.</p> <p>In the next posts I will show that Istio can function as an example of an egress traffic control system that satisfies all of these requirements; in particular, it is transparent, DNS-aware, and Kubernetes-aware.</p> <h2 id="summary">Summary</h2> <p>I hope that you are convinced that controlling egress traffic is important for the security of your cluster. In <a href="/v1.9/blog/2019/egress-traffic-control-in-istio-part-2/">part 2 of this series</a> I describe the Istio way to perform secure control of egress traffic.
In <a href="/v1.9/blog/2019/egress-traffic-control-in-istio-part-3/">part 3 of this series</a> I compare it with alternative solutions such as <a href="https://kubernetes.io/docs/concepts/services-networking/network-policies/">Kubernetes Network Policies</a> and legacy egress proxies/firewalls.</p>Wed, 22 May 2019 00:00:00 +0000/v1.9/blog/2019/egress-traffic-control-in-istio-part-1/Vadim Eisenberg (IBM)/v1.9/blog/2019/egress-traffic-control-in-istio-part-1/traffic-managementegresssecurityArchitecting Istio 1.1 for Performance <p>Hyper-scale, microservice-based cloud environments have been exciting to build but challenging to manage. Along came Kubernetes (container orchestration) in 2014, followed by Istio (container service management) in 2017. Both open-source projects enable developers to scale container-based applications without spending too much time on administration tasks.</p> <p>Now, new enhancements in Istio 1.1 deliver scale-up with improved application performance and service management efficiency. Simulations using our sample commercial airline reservation application show the following improvements, compared to Istio 1.0.</p> <p>We&rsquo;ve seen substantial application performance gains:</p> <ul> <li>up to 30% reduction in application average latency</li> <li>up to 40% faster service startup times in a large mesh</li> </ul> <p>As well as impressive improvements in service management efficiency:</p> <ul> <li>up to 90% reduction in Pilot CPU usage in a large mesh</li> <li>up to 50% reduction in Pilot memory usage in a large mesh</li> </ul> <p>With Istio 1.1, organizations can be more confident in their ability to scale applications with consistency and control &ndash; even in hyper-scale cloud environments.</p> <p>Congratulations to the Istio experts around the world who contributed to this release.
We could not be more pleased with these results.</p> <h2 id="istio-1-1-performance-enhancements">Istio 1.1 performance enhancements</h2> <p>As members of the Istio Performance and Scalability workgroup, we have done extensive performance evaluations. We introduced many performance design features for Istio 1.1, in collaboration with other Istio contributors. Some of the most visible performance enhancements in 1.1 include:</p> <ul> <li>Significant reduction in default collection of Envoy-generated statistics</li> <li>Added load-shedding functionality to Mixer workloads</li> <li>Improved the protocol between Envoy and Mixer</li> <li>Namespace isolation, to reduce operational overhead</li> <li>Configurable concurrent worker threads, which can improve overall throughput</li> <li>Configurable filters that limit telemetry data</li> <li>Removal of synchronization bottlenecks</li> </ul> <h2 id="continuous-code-quality-and-performance-verification">Continuous code quality and performance verification</h2> <p>Regression Patrol drives continuous improvement in Istio performance and quality. Behind the scenes, the Regression Patrol helps Istio developers identify and fix code issues. Daily builds are checked using a customer-centric benchmark, <a href="https://github.com/blueperf/">BluePerf</a>. The results are published to the <a href="https://ibmcloud-perf.istio.io/regpatrol/">Istio community web portal</a>. Various application configurations are evaluated to help provide insights on Istio component performance.</p> <p>Another tool that is used to evaluate the performance of Istio’s builds is <a href="https://fortio.org/">Fortio</a>, which provides a synthetic end-to-end load testing benchmark.</p> <h2 id="summary">Summary</h2> <p>Istio 1.1 was designed for performance and scalability. The Istio Performance and Scalability workgroup measured significant performance improvements over 1.0.
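</p> <p>As a point of reference, a Fortio load test of the kind mentioned above can be launched with a single command; the target URL and parameters below are illustrative only:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ fortio load -qps 1000 -c 64 -t 60s http://fortio-server:8080/
</code></pre> <p>Here <code>-qps</code> sets the target queries per second, <code>-c</code> the number of concurrent connections, and <code>-t</code> the test duration.</p> <p>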
Istio 1.1 introduces new features and optimizations to help harden the service mesh for enterprise microservice workloads. The Istio 1.1 Performance and Tuning Guide documents performance simulations, provides sizing and capacity planning guidance, and includes best practices for tuning custom use cases.</p> <h2 id="useful-links">Useful links</h2> <ul> <li><a href="https://www.youtube.com/watch?time_continue=349&amp;v=G4F5aRFEXnU">Istio Service Mesh Performance (34:30)</a>, by Surya Duggirala, Laurent Demailly and Fawad Khaliq at KubeCon Europe 2018</li> <li><a href="https://discuss.istio.io/c/performance-and-scalability">Istio Performance and Scalability discussion forum</a></li> </ul> <h2 id="disclaimer">Disclaimer</h2> <p>The performance data contained herein was obtained in a controlled, isolated environment. Actual results that may be obtained in other operating environments may vary significantly. There is no guarantee that the same or similar results will be obtained elsewhere.</p>Tue, 19 Mar 2019 00:00:00 +0000/v1.9/blog/2019/istio1.1_perf/Surya V Duggirala (IBM), Mandar Jog (Google), Jose Nativio (IBM)/v1.9/blog/2019/istio1.1_perf/performancescalabilityscalebenchmarksVersion Routing in a Multicluster Service Mesh <p>If you&rsquo;ve spent any time looking at Istio, you&rsquo;ve probably noticed that it includes a lot of features that can be demonstrated with simple <a href="/v1.9/docs/tasks/">tasks</a> and <a href="/v1.9/docs/examples/">examples</a> running on a single Kubernetes cluster. 
Because most, if not all, real-world cloud and microservices-based applications are not that simple and will need to have the services distributed and running in more than one location, you may be wondering if all these things will be just as simple in your real production environment.</p> <p>Fortunately, Istio provides several ways to configure a service mesh so that applications can, more-or-less transparently, be part of a mesh where the services are running in more than one cluster, i.e., in a <a href="/v1.9/docs/ops/deployment/deployment-models/#multiple-clusters">multicluster deployment</a>. The simplest way to set up a multicluster mesh, because it has no special networking requirements, is using a replicated <a href="/v1.9/docs/ops/deployment/deployment-models/#control-plane-models">control plane model</a>. In this configuration, each Kubernetes cluster contributing to the mesh has its own control plane, but each control plane is synchronized and running under a single administrative control.</p> <p>In this article we&rsquo;ll look at how one of the features of Istio, <a href="/v1.9/docs/concepts/traffic-management/">traffic management</a>, works in a multicluster mesh with a dedicated control plane topology. 
We&rsquo;ll show how to configure Istio route rules to call remote services in a multicluster service mesh by deploying the <a href="https://github.com/istio/istio/tree/release-1.9/samples/bookinfo">Bookinfo sample</a> with version <code>v1</code> of the <code>reviews</code> service running in one cluster, versions <code>v2</code> and <code>v3</code> running in a second cluster.</p> <h2 id="set-up-clusters">Set up clusters</h2> <p>To start, you&rsquo;ll need two Kubernetes clusters, both running a slightly customized configuration of Istio.</p> <ul> <li><p>Set up a multicluster environment with two Istio clusters by following the <a href="/v1.9/docs/setup/install/multicluster">replicated control planes</a> instructions.</p></li> <li><p>The <code>kubectl</code> command is used to access both clusters with the <code>--context</code> flag. Use the following command to list your contexts:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl config get-contexts
CURRENT   NAME       CLUSTER    AUTHINFO       NAMESPACE
*         cluster1   cluster1   user@foo.com   default
          cluster2   cluster2   user@foo.com   default
</code></pre></li> <li><p>Export the following environment variables with the context names of your configuration:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ export CTX_CLUSTER1=&lt;cluster1 context name&gt;
$ export CTX_CLUSTER2=&lt;cluster2 context name&gt;
</code></pre></li> </ul> <h2 id="deploy-version-v1-of-the-bookinfo-application-in-cluster1">Deploy version v1 of the <code>bookinfo</code> application in <code>cluster1</code></h2> <p>Run the <code>productpage</code> and <code>details</code> services and version <code>v1</code> of the <code>reviews</code> service in <code>cluster1</code>:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl label --context=$CTX_CLUSTER1 namespace default istio-injection=enabled
$ kubectl apply --context=$CTX_CLUSTER1 -f - &lt;&lt;EOF
apiVersion: v1
kind: Service
metadata:
  name: productpage
  labels:
    app: productpage
spec:
  ports:
  - port: 9080
    name: http
  selector:
    app: productpage
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: productpage-v1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: productpage
        version: v1
    spec:
      containers:
      - name: productpage
        image: istio/examples-bookinfo-productpage-v1:1.10.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9080
---
apiVersion: v1
kind: Service
metadata:
  name: details
  labels:
    app: details
spec:
  ports:
  - port: 9080
    name: http
  selector:
    app: details
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: details-v1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: details
        version: v1
    spec:
      containers:
      - name: details
        image: istio/examples-bookinfo-details-v1:1.10.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9080
---
apiVersion: v1
kind: Service
metadata:
  name: reviews
  labels:
    app: reviews
spec:
  ports:
  - port: 9080
    name: http
  selector:
    app: reviews
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: reviews-v1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: reviews
        version: v1
    spec:
      containers:
      - name: reviews
        image: istio/examples-bookinfo-reviews-v1:1.10.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9080
EOF
</code></pre> <h2 id="deploy-bookinfo-v2-and-v3-services-in-cluster2">Deploy <code>bookinfo</code> v2 and v3 services in <code>cluster2</code></h2> <p>Run the <code>ratings</code> service and version <code>v2</code> and <code>v3</code> of the <code>reviews</code> service in <code>cluster2</code>:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl label --context=$CTX_CLUSTER2 namespace default istio-injection=enabled
$ kubectl apply --context=$CTX_CLUSTER2 -f - &lt;&lt;EOF
apiVersion: v1
kind: Service
metadata:
  name: ratings
  labels:
    app: ratings
spec:
  ports:
  - port: 9080
    name: http
  selector:
    app: ratings
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: ratings-v1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: ratings
        version: v1
    spec:
      containers:
      - name: ratings
        image: istio/examples-bookinfo-ratings-v1:1.10.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9080
---
apiVersion: v1
kind: Service
metadata:
  name: reviews
  labels:
    app: reviews
spec:
  ports:
  - port: 9080
    name: http
  selector:
    app: reviews
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: reviews-v2
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: reviews
        version: v2
    spec:
      containers:
      - name: reviews
        image: istio/examples-bookinfo-reviews-v2:1.10.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9080
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: reviews-v3
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: reviews
        version: v3
    spec:
      containers:
      - name: reviews
        image: istio/examples-bookinfo-reviews-v3:1.10.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9080
EOF
</code></pre> <h2 id="access-the-bookinfo-application">Access the <code>bookinfo</code> application</h2> <p>Just like any application, we&rsquo;ll use an Istio gateway to access the <code>bookinfo</code> application.</p> <ul> <li><p>Create the <code>bookinfo</code> gateway in <code>cluster1</code>:</p> <div><a data-skipendnotes='true' style='display:none' href='https://raw.githubusercontent.com/istio/istio/release-1.9/samples/bookinfo/networking/bookinfo-gateway.yaml'>Zip</a><pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl apply --context=$CTX_CLUSTER1 -f @samples/bookinfo/networking/bookinfo-gateway.yaml@
</code></pre></div></li> <li><p>Follow the <a href="/v1.9/docs/examples/bookinfo/#determine-the-ingress-ip-and-port">Bookinfo sample instructions</a> to determine the ingress IP and port and then point your browser to <code>http://$GATEWAY_URL/productpage</code>.</p></li> </ul> <p>You should see the <code>productpage</code> with reviews, but without ratings, because
only <code>v1</code> of the <code>reviews</code> service is running on <code>cluster1</code> and we have not yet configured access to <code>cluster2</code>.</p> <h2 id="create-a-service-entry-and-destination-rule-on-cluster1-for-the-remote-reviews-service">Create a service entry and destination rule on <code>cluster1</code> for the remote reviews service</h2> <p>As described in the <a href="https://istio.io/v1.6/docs/docs/setup/install/multicluster/gateways/#setup-dns">setup instructions</a>, remote services are accessed with a <code>.global</code> DNS name. In our case, it&rsquo;s <code>reviews.default.global</code>, so we need to create a service entry and destination rule for that host. The service entry will use the <code>cluster2</code> gateway as the endpoint address to access the service. You can use the gateway&rsquo;s DNS name, if it has one, or its public IP, like this:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ export CLUSTER2_GW_ADDR=$(kubectl get --context=$CTX_CLUSTER2 svc --selector=app=istio-ingressgateway \ -n istio-system -o jsonpath=&#34;{.items[0].status.loadBalancer.ingress[0].ip}&#34;) </code></pre> <p>Now create the service entry and destination rule using the following command:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl apply --context=$CTX_CLUSTER1 -f - &lt;&lt;EOF apiVersion: networking.istio.io/v1alpha3 kind: ServiceEntry metadata: name: reviews-default spec: hosts: - reviews.default.global location: MESH_INTERNAL ports: - name: http1 number: 9080 protocol: http resolution: DNS addresses: - 240.0.0.3 endpoints: - address: ${CLUSTER2_GW_ADDR} labels: cluster: cluster2 ports: http1: 15443 # Do not change this port value --- apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: reviews-global spec: host: reviews.default.global trafficPolicy: tls: mode: ISTIO_MUTUAL subsets: - name: v2 labels: cluster: cluster2 - name: v3 labels: 
cluster: cluster2 EOF </code></pre> <p>The address <code>240.0.0.3</code> of the service entry can be any arbitrary unallocated IP. Using an IP from the class E addresses range 240.0.0.0/4 is a good choice. Check out the <a href="/v1.9/docs/setup/install/multicluster">gateway-connected multicluster example</a> for more details.</p> <p>Note that the labels of the subsets in the destination rule map to the service entry endpoint label (<code>cluster: cluster2</code>) corresponding to the <code>cluster2</code> gateway. Once the request reaches the destination cluster, a local destination rule will be used to identify the actual pod labels (<code>version: v1</code> or <code>version: v2</code>) corresponding to the requested subset.</p> <h2 id="create-a-destination-rule-on-both-clusters-for-the-local-reviews-service">Create a destination rule on both clusters for the local reviews service</h2> <p>Technically, we only need to define the subsets of the local service that are being used in each cluster (i.e., <code>v1</code> in <code>cluster1</code>, <code>v2</code> and <code>v3</code> in <code>cluster2</code>), but for simplicity we&rsquo;ll just define all three subsets in both clusters, since there&rsquo;s nothing wrong with defining subsets for versions that are not actually deployed.</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl apply --context=$CTX_CLUSTER1 -f - &lt;&lt;EOF apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: reviews spec: host: reviews.default.svc.cluster.local trafficPolicy: tls: mode: ISTIO_MUTUAL subsets: - name: v1 labels: version: v1 - name: v2 labels: version: v2 - name: v3 labels: version: v3 EOF </code></pre> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl apply --context=$CTX_CLUSTER2 -f - &lt;&lt;EOF apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: reviews spec: host: reviews.default.svc.cluster.local 
trafficPolicy: tls: mode: ISTIO_MUTUAL subsets: - name: v1 labels: version: v1 - name: v2 labels: version: v2 - name: v3 labels: version: v3 EOF </code></pre> <h2 id="create-a-virtual-service-to-route-reviews-service-traffic">Create a virtual service to route reviews service traffic</h2> <p>At this point, all calls to the <code>reviews</code> service will go to the local <code>reviews</code> pods (<code>v1</code>) because if you look at the source code you will see that the <code>productpage</code> implementation is simply making requests to <code>http://reviews:9080</code> (which expands to host <code>reviews.default.svc.cluster.local</code>), the local version of the service. The corresponding remote service is named <code>reviews.default.global</code>, so route rules are needed to redirect requests to the global host.</p> <div> <aside class="callout tip"> <div class="type"><svg class="large-icon"><use xlink:href="/v1.9/img/icons.svg#callout-tip"/></svg></div> <div class="content">Note that if all of the versions of the <code>reviews</code> service were remote, so there is no local <code>reviews</code> service defined, the DNS would resolve <code>reviews</code> directly to <code>reviews.default.global</code>. In that case we could call the remote <code>reviews</code> service without any route rules.</div> </aside> </div> <p>Apply the following virtual service to direct traffic for user <code>jason</code> to <code>reviews</code> versions <code>v2</code> and <code>v3</code> (<sup>50</sup>&frasl;<sub>50</sub>) which are running on <code>cluster2</code>. 
Traffic for any other user will go to <code>reviews</code> version <code>v1</code>.</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl apply --context=$CTX_CLUSTER1 -f - &lt;&lt;EOF apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: reviews spec: hosts: - reviews.default.svc.cluster.local http: - match: - headers: end-user: exact: jason route: - destination: host: reviews.default.global subset: v2 weight: 50 - destination: host: reviews.default.global subset: v3 weight: 50 - route: - destination: host: reviews.default.svc.cluster.local subset: v1 EOF </code></pre> <div> <aside class="callout tip"> <div class="type"><svg class="large-icon"><use xlink:href="/v1.9/img/icons.svg#callout-tip"/></svg></div> <div class="content">This 50/50 rule isn&rsquo;t a particularly realistic example. It&rsquo;s just a convenient way to demonstrate accessing multiple subsets of a remote service.</div> </aside> </div> <p>Return to your browser and log in as user <code>jason</code>. If you refresh the page several times, you should see the display alternating between black and red ratings stars (<code>v2</code> and <code>v3</code>). If you log out, you will only see reviews without ratings (<code>v1</code>).</p> <h2 id="summary">Summary</h2> <p>In this article, we&rsquo;ve seen how to use Istio route rules to distribute the versions of a service across clusters in a multicluster service mesh with a replicated control plane model. In this example, we manually configured the <code>.global</code> service entry and destination rules needed to provide connectivity to one remote service, <code>reviews</code>. In general, however, if we wanted to enable any service to run either locally or remotely, we would need to create <code>.global</code> resources for every service. 
Fortunately, this process could be automated and likely will be in a future Istio release.</p>Thu, 07 Feb 2019 00:00:00 +0000/v1.9/blog/2019/multicluster-version-routing/Frank Budinsky (IBM)/v1.9/blog/2019/multicluster-version-routing/traffic-managementmulticlusterSail the Blog!<p>Welcome to the Istio blog!</p> <p>To make it easier to publish your content on our website, we <a href="/v1.9/about/contribute/add-content/#content-types">updated the content types guide</a>.</p> <p>The goal of the updated guide is to make sharing and finding content easier.</p> <p>We want to make sharing timely information on Istio easy, and the <a href="/v1.9/blog">Istio blog</a> is a good place to start.</p> <p>We welcome your posts to the blog if you think your content falls into one of the following categories:</p> <ul> <li>Your post details your experience using and configuring Istio. Ideally, your post shares a novel experience or perspective.</li> <li>Your post highlights Istio features.</li> <li>Your post details how to accomplish a task or fulfill a specific use case using Istio.</li> </ul> <p>Posting your blog is only <a href="/v1.9/about/contribute/github/">one PR away</a> and, if you wish, you can <a href="/v1.9/about/contribute/review">request a review</a>.</p> <p>We look forward to reading about your Istio experience on the blog soon!</p>Tue, 05 Feb 2019 00:00:00 +0000/v1.9/blog/2019/sail-the-blog/Rigs Caballero, Google/v1.9/blog/2019/sail-the-blog/communityblogcontributionguideguidelineeventEgress Gateway Performance Investigation <p>The main objective of this investigation was to determine the impact on performance and resource utilization when an egress gateway is added to the service mesh to access an external service (MongoDB, in this case). 
The steps to configure an egress gateway for an external MongoDB are described in the blog <a href="/v1.9/blog/2018/egress-mongo/">Consuming External MongoDB Services</a>.</p> <p>The application used for this investigation was the Java version of Acmeair, which simulates an airline reservation system. This application is used in the Performance Regression Patrol of Istio daily builds, but in that setup the microservices have been accessing the external MongoDB directly via their sidecars, without an egress gateway.</p> <p>The diagram below illustrates how regression patrol currently runs with Acmeair and Istio:</p> <figure style="width:70%"> <div class="wrapper-with-intrinsic-ratio" style="padding-bottom:62.69230769230769%"> <a data-skipendnotes="true" href="/v1.9/blog/2019/egress-performance/acmeair_regpatrol3.png" title="Acmeair benchmark in the Istio performance regression patrol environment"> <img class="element-to-stretch" src="/v1.9/blog/2019/egress-performance/acmeair_regpatrol3.png" alt="Acmeair benchmark in the Istio performance regression patrol environment" /> </a> </div> <figcaption>Acmeair benchmark in the Istio performance regression patrol environment</figcaption> </figure> <p>Another difference is that the application communicates with the external DB using the plain MongoDB protocol. The first change made for this study was to establish TLS communication between the MongoDB and its clients running within the application, as this is a more realistic scenario.</p> <p>Several cases for accessing the external database from the mesh were tested; they are described next.</p> <h2 id="egress-traffic-cases">Egress traffic cases</h2> <h3 id="case-1-bypassing-the-sidecar">Case 1: Bypassing the sidecar</h3> <p>In this case, the sidecar does not intercept the communication between the application and the external DB. 
This is accomplished by setting the init container argument -x with the CIDR of the MongoDB, which makes the sidecar ignore messages to/from this IP address. For example:</p> <pre><code> - -x - &#34;169.47.232.211/32&#34;</code></pre> <figure style="width:70%"> <div class="wrapper-with-intrinsic-ratio" style="padding-bottom:76.45536869340232%"> <a data-skipendnotes="true" href="/v1.9/blog/2019/egress-performance/case1_sidecar_bypass3.png" title="Traffic to external MongoDB by-passing the sidecar"> <img class="element-to-stretch" src="/v1.9/blog/2019/egress-performance/case1_sidecar_bypass3.png" alt="Traffic to external MongoDB by-passing the sidecar" /> </a> </div> <figcaption>Traffic to external MongoDB by-passing the sidecar</figcaption> </figure> <h3 id="case-2-through-the-sidecar-with-service-entry">Case 2: Through the sidecar, with service entry</h3> <p>This is the default configuration when the sidecar is injected into the application pod. All messages are intercepted by the sidecar and routed to the destination according to the configured rules, including the communication with external services. The MongoDB was defined as a <code>ServiceEntry</code>.</p> <figure style="width:70%"> <div class="wrapper-with-intrinsic-ratio" style="padding-bottom:74.41253263707573%"> <a data-skipendnotes="true" href="/v1.9/blog/2019/egress-performance/case2_sidecar_passthru3.png" title="Sidecar intercepting traffic to external MongoDB"> <img class="element-to-stretch" src="/v1.9/blog/2019/egress-performance/case2_sidecar_passthru3.png" alt="Sidecar intercepting traffic to external MongoDB" /> </a> </div> <figcaption>Sidecar intercepting traffic to external MongoDB</figcaption> </figure> <h3 id="case-3-egress-gateway">Case 3: Egress gateway</h3> <p>The egress gateway and corresponding destination rule and virtual service resources are defined for accessing MongoDB. 
All traffic to and from the external DB goes through the egress gateway (Envoy).</p> <figure style="width:70%"> <div class="wrapper-with-intrinsic-ratio" style="padding-bottom:62.309368191721134%"> <a data-skipendnotes="true" href="/v1.9/blog/2019/egress-performance/case3_egressgw3.png" title="Introduction of the egress gateway to access MongoDB"> <img class="element-to-stretch" src="/v1.9/blog/2019/egress-performance/case3_egressgw3.png" alt="Introduction of the egress gateway to access MongoDB" /> </a> </div> <figcaption>Introduction of the egress gateway to access MongoDB</figcaption> </figure> <h3 id="case-4-mutual-tls-between-sidecars-and-the-egress-gateway">Case 4: Mutual TLS between sidecars and the egress gateway</h3> <p>In this case, there is an extra layer of security between the sidecars and the gateway, so some impact on performance is expected.</p> <figure style="width:70%"> <div class="wrapper-with-intrinsic-ratio" style="padding-bottom:63.968957871396896%"> <a data-skipendnotes="true" href="/v1.9/blog/2019/egress-performance/case4_egressgw_mtls3.png" title="Enabling mutual TLS between sidecars and the egress gateway"> <img class="element-to-stretch" src="/v1.9/blog/2019/egress-performance/case4_egressgw_mtls3.png" alt="Enabling mutual TLS between sidecars and the egress gateway" /> </a> </div> <figcaption>Enabling mutual TLS between sidecars and the egress gateway</figcaption> </figure> <h3 id="case-5-egress-gateway-with-sni-proxy">Case 5: Egress gateway with SNI proxy</h3> <p>This scenario is used to evaluate the case where another proxy is required to access wildcarded domains. This may be required due to current limitations of Envoy. 
An nginx proxy was created as a sidecar in the egress gateway pod.</p> <figure style="width:70%"> <div class="wrapper-with-intrinsic-ratio" style="padding-bottom:65.2762119503946%"> <a data-skipendnotes="true" href="/v1.9/blog/2019/egress-performance/case5_egressgw_sni_proxy3.png" title="Egress gateway with additional SNI Proxy"> <img class="element-to-stretch" src="/v1.9/blog/2019/egress-performance/case5_egressgw_sni_proxy3.png" alt="Egress gateway with additional SNI Proxy" /> </a> </div> <figcaption>Egress gateway with additional SNI Proxy</figcaption> </figure> <h2 id="environment">Environment</h2> <ul> <li>Istio version: 1.0.2</li> <li><code>K8s</code> version: <code>1.10.5_1517</code></li> <li>Acmeair App: 4 services (1 replica of each), inter-service transactions, external Mongo DB, avg payload: 620 bytes.</li> </ul> <h2 id="results">Results</h2> <p><code>Jmeter</code> was used to generate the workload, which consisted of a sequence of 5-minute runs, each one using a growing number of clients making HTTP requests. 
The numbers of clients used were 1, 5, 10, 20, 30, 40, 50 and 60.</p> <h3 id="throughput">Throughput</h3> <p>The chart below shows the throughput obtained for the different cases:</p> <figure style="width:75%"> <div class="wrapper-with-intrinsic-ratio" style="padding-bottom:54.29638854296388%"> <a data-skipendnotes="true" href="/v1.9/blog/2019/egress-performance/throughput3.png" title="Throughput obtained for the different cases"> <img class="element-to-stretch" src="/v1.9/blog/2019/egress-performance/throughput3.png" alt="Throughput obtained for the different cases" /> </a> </div> <figcaption>Throughput obtained for the different cases</figcaption> </figure> <p>As you can see, there is no major impact from having sidecars and the egress gateway between the application and the external MongoDB, but enabling mutual TLS and then adding the SNI proxy caused a degradation in the throughput of about 10% and 24%, respectively.</p> <h3 id="response-time">Response time</h3> <p>The average response times for the different requests were collected when traffic was being driven with 20 clients. 
The chart below shows the average, median, and 90th, 95th and 99th percentile values for each case:</p> <figure style="width:75%"> <div class="wrapper-with-intrinsic-ratio" style="padding-bottom:48.76783398184176%"> <a data-skipendnotes="true" href="/v1.9/blog/2019/egress-performance/response_times3.png" title="Response times obtained for the different configurations"> <img class="element-to-stretch" src="/v1.9/blog/2019/egress-performance/response_times3.png" alt="Response times obtained for the different configurations" /> </a> </div> <figcaption>Response times obtained for the different configurations</figcaption> </figure> <p>Likewise, there is not much difference in the response times for the first three cases, but mutual TLS and the extra proxy add noticeable latency.</p> <h3 id="cpu-utilization">CPU utilization</h3> <p>The CPU usage was collected for all Istio components as well as for the sidecars during the runs. For a fair comparison, CPU used by Istio was normalized by the throughput obtained for a given run. The results are shown in the following graph:</p> <figure style="width:75%"> <div class="wrapper-with-intrinsic-ratio" style="padding-bottom:53.96174863387978%"> <a data-skipendnotes="true" href="/v1.9/blog/2019/egress-performance/cpu_usage3.png" title="CPU usage normalized by TPS"> <img class="element-to-stretch" src="/v1.9/blog/2019/egress-performance/cpu_usage3.png" alt="CPU usage normalized by TPS" /> </a> </div> <figcaption>CPU usage normalized by TPS</figcaption> </figure> <p>In terms of CPU consumption per transaction, Istio used significantly more CPU only in the egress gateway + SNI proxy case.</p> <h2 id="conclusion">Conclusion</h2> <p>In this investigation, we tried different options to access an external TLS-enabled MongoDB to compare their performance. The introduction of the Egress Gateway did not have a significant impact on performance, nor did it add meaningful CPU consumption. 
Only when enabling mutual TLS between the sidecars and the egress gateway, or when using an additional SNI proxy for wildcarded domains, did we observe some degradation.</p>Thu, 31 Jan 2019 00:00:00 +0000/v1.9/blog/2019/egress-performance/Jose Nativio, IBM/v1.9/blog/2019/egress-performance/performancetraffic-managementegressmongoDemystifying Istio's Sidecar Injection Model <p>A simple overview of an Istio service-mesh architecture always starts with describing the control plane and data plane.</p> <p><a href="/v1.9/docs/ops/deployment/architecture/">From Istio’s documentation</a>:</p> <div> <aside class="callout quote"> <div class="type"> <svg class="large-icon"><use xlink:href="/v1.9/img/icons.svg#callout-quote"/></svg> </div> <div class="content"><p>An Istio service mesh is logically split into a data plane and a control plane.</p> <p>The data plane is composed of a set of intelligent proxies (Envoy) deployed as sidecars. These proxies mediate and control all network communication between microservices along with Mixer, a general-purpose policy and telemetry hub.</p> <p>The control plane manages and configures the proxies to route traffic. Additionally, the control plane configures Mixers to enforce policies and collect telemetry.</p> </div> </aside> </div> <figure style="width:40%"> <div class="wrapper-with-intrinsic-ratio" style="padding-bottom:80%"> <a data-skipendnotes="true" href="/v1.9/blog/2019/data-plane-setup/arch-2.svg" title="Istio Architecture"> <img class="element-to-stretch" src="/v1.9/blog/2019/data-plane-setup/arch-2.svg" alt="The overall architecture of an Istio-based application." /> </a> </div> <figcaption>Istio Architecture</figcaption> </figure> <p>It is important to understand that the sidecar injection into the application pods happens automatically, though manual injection is also possible. Traffic is directed from the application services to and from these sidecars without developers needing to worry about it. 
Once the applications are connected to the Istio service mesh, developers can start using and reaping the benefits of all that the service mesh has to offer. However, how does the data plane plumbing happen and what is really required to make it work seamlessly? In this post, we will deep-dive into the specifics of the sidecar injection models to gain a clear understanding of how sidecar injection works.</p> <h2 id="sidecar-injection">Sidecar injection</h2> <p>In simple terms, sidecar injection is adding the configuration of additional containers to the pod template. The added containers needed for the Istio service mesh are:</p> <p><code>istio-init</code> This <a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/">init container</a> is used to set up the <code>iptables</code> rules so that inbound/outbound traffic will go through the sidecar proxy. An init container is different from an app container in the following ways:</p> <ul> <li>It runs before an app container is started and it always runs to completion.</li> <li>If there are multiple init containers, each must complete successfully before the next one is started.</li> </ul> <p>So, you can see how this type of container is perfect for a set-up or initialization job which does not need to be a part of the actual application container. In this case, <code>istio-init</code> does just that and sets up the <code>iptables</code> rules.</p> <p><code>istio-proxy</code> This is the actual sidecar proxy (based on Envoy).</p> <h3 id="manual-injection">Manual injection</h3> <p>In the manual injection method, you can use <a href="/v1.9/docs/reference/commands/istioctl"><code>istioctl</code></a> to modify the pod template and add the configuration of the two containers previously mentioned. 
For both manual as well as automatic injection, Istio takes the configuration from the <code>istio-sidecar-injector</code> configuration map (configmap) and the mesh&rsquo;s <code>istio</code> configmap.</p> <p>Let’s look at the configuration of the <code>istio-sidecar-injector</code> configmap, to get an idea of what actually is going on.</p> <pre><code class='language-bash' data-expandlinks='true' data-outputis='yaml' data-repo='istio' >$ kubectl -n istio-system get configmap istio-sidecar-injector -o=jsonpath=&#39;{.data.config}&#39; SNIPPET from the output: policy: enabled template: |- initContainers: - name: istio-init image: docker.io/istio/proxy_init:1.0.2 args: - &#34;-p&#34; - [[ .MeshConfig.ProxyListenPort ]] - &#34;-u&#34; - 1337 ..... imagePullPolicy: IfNotPresent securityContext: capabilities: add: - NET_ADMIN restartPolicy: Always containers: - name: istio-proxy image: [[ if (isset .ObjectMeta.Annotations &#34;sidecar.istio.io/proxyImage&#34;) -]] &#34;[[ index .ObjectMeta.Annotations &#34;sidecar.istio.io/proxyImage&#34; ]]&#34; [[ else -]] docker.io/istio/proxyv2:1.0.2 [[ end -]] args: - proxy - sidecar ..... env: ..... - name: ISTIO_META_INTERCEPTION_MODE value: [[ or (index .ObjectMeta.Annotations &#34;sidecar.istio.io/interceptionMode&#34;) .ProxyConfig.InterceptionMode.String ]] imagePullPolicy: IfNotPresent securityContext: readOnlyRootFilesystem: true [[ if eq (or (index .ObjectMeta.Annotations &#34;sidecar.istio.io/interceptionMode&#34;) .ProxyConfig.InterceptionMode.String) &#34;TPROXY&#34; -]] capabilities: add: - NET_ADMIN restartPolicy: Always ..... </code></pre> <p>As you can see, the configmap contains the configuration for both, the <code>istio-init</code> init container and the <code>istio-proxy</code> proxy container. 
The configuration includes the name of the container image and arguments like interception mode, capabilities, etc.</p> <p>From a security point of view, it is important to note that <code>istio-init</code> requires the <code>NET_ADMIN</code> capability to modify <code>iptables</code> within the pod&rsquo;s namespace and so does <code>istio-proxy</code> if configured in <code>TPROXY</code> mode. As this is restricted to a pod&rsquo;s namespace, there should be no problem. However, I have noticed that recent OpenShift versions may have some issues with it and a workaround is needed. One such option is mentioned at the end of this post.</p> <p>To modify the current pod template for sidecar injection, you can:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ istioctl kube-inject -f demo-red.yaml | kubectl apply -f - </code></pre> <p>OR</p> <p>To use modified configmaps or local configmaps:</p> <ul> <li><p>Create <code>inject-config.yaml</code> and <code>mesh-config.yaml</code> from the configmaps</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl -n istio-system get configmap istio-sidecar-injector -o=jsonpath=&#39;{.data.config}&#39; &gt; inject-config.yaml $ kubectl -n istio-system get configmap istio -o=jsonpath=&#39;{.data.mesh}&#39; &gt; mesh-config.yaml </code></pre></li> <li><p>Modify the existing pod template, in my case, <code>demo-red.yaml</code>:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ istioctl kube-inject --injectConfigFile inject-config.yaml --meshConfigFile mesh-config.yaml --filename demo-red.yaml --output demo-red-injected.yaml </code></pre></li> <li><p>Apply the <code>demo-red-injected.yaml</code></p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl apply -f demo-red-injected.yaml </code></pre></li> </ul> <p>As seen above, we create a new template using the <code>sidecar-injector</code> and the mesh 
configuration to then apply that new template using <code>kubectl</code>. If we look at the injected YAML file, it has the configuration of the Istio-specific containers, as we discussed above. Once we apply the injected YAML file, we see two containers running. One of them is the actual application container, and the other is the <code>istio-proxy</code> sidecar.</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl get pods | grep demo-red demo-red-pod-8b5df99cc-pgnl7 2/2 Running 0 3d </code></pre> <p>The count is not 3 because the <code>istio-init</code> container is an init-type container that exits after doing what it is supposed to do, which is setting up the <code>iptables</code> rules within the pod. To confirm the init container exit, let’s look at the output of <code>kubectl describe</code>:</p> <pre><code class='language-bash' data-expandlinks='true' data-outputis='yaml' data-repo='istio' >$ kubectl describe pod demo-red-pod-8b5df99cc-pgnl7 SNIPPET from the output: Name: demo-red-pod-8b5df99cc-pgnl7 Namespace: default ..... Labels: app=demo-red pod-template-hash=8b5df99cc version=version-red Annotations: sidecar.istio.io/status={&#34;version&#34;:&#34;3c0b8d11844e85232bc77ad85365487638ee3134c91edda28def191c086dc23e&#34;,&#34;initContainers&#34;:[&#34;istio-init&#34;],&#34;containers&#34;:[&#34;istio-proxy&#34;],&#34;volumes&#34;:[&#34;istio-envoy&#34;,&#34;istio-certs... Status: Running IP: 10.32.0.6 Controlled By: ReplicaSet/demo-red-pod-8b5df99cc Init Containers: istio-init: Container ID: docker://bef731eae1eb3b6c9d926cacb497bb39a7d9796db49cd14a63014fc1a177d95b Image: docker.io/istio/proxy_init:1.0.2 Image ID: docker-pullable://docker.io/istio/proxy_init@sha256:e16a0746f46cd45a9f63c27b9e09daff5432e33a2d80c8cc0956d7d63e2f9185 ..... State: Terminated Reason: Completed ..... 
Ready: True Containers: demo-red: Container ID: docker://8cd9957955ff7e534376eb6f28b56462099af6dfb8b9bc37aaf06e516175495e Image: chugtum/blue-green-image:v3 Image ID: docker-pullable://docker.io/chugtum/blue-green-image@sha256:274756dbc215a6b2bd089c10de24fcece296f4c940067ac1a9b4aea67cf815db State: Running Started: Sun, 09 Dec 2018 18:12:31 -0800 Ready: True istio-proxy: Container ID: docker://ca5d690be8cd6557419cc19ec4e76163c14aed2336eaad7ebf17dd46ca188b4a Image: docker.io/istio/proxyv2:1.0.2 Image ID: docker-pullable://docker.io/istio/proxyv2@sha256:54e206530ba6ca9b3820254454e01b7592e9f986d27a5640b6c03704b3b68332 Args: proxy sidecar ..... State: Running Started: Sun, 09 Dec 2018 18:12:31 -0800 Ready: True ..... </code></pre> <p>As seen in the output, the <code>State</code> of the <code>istio-init</code> container is <code>Terminated</code> with the <code>Reason</code> being <code>Completed</code>. The only two containers running are the main application <code>demo-red</code> container and the <code>istio-proxy</code> container.</p> <h3 id="automatic-injection">Automatic injection</h3> <p>Most of the time, you don’t want to manually inject a sidecar with the <a href="/v1.9/docs/reference/commands/istioctl"><code>istioctl</code></a> command every time you deploy an application, but would prefer that Istio automatically inject the sidecar into your pod. This is the recommended approach, and for it to work, all you need to do is label the namespace where you are deploying the app with <code>istio-injection=enabled</code>.</p> <p>Once labeled, Istio injects the sidecar automatically for any pod you deploy in that namespace. 
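</p> <p>Labeling can be done with a single command; for example, to enable injection for a namespace named <code>istio-dev</code>:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl label namespace istio-dev istio-injection=enabled </code></pre> <p>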
In the following example, the sidecar gets automatically injected in the deployed pods in the <code>istio-dev</code> namespace.</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl get namespaces --show-labels NAME STATUS AGE LABELS default Active 40d &lt;none&gt; istio-dev Active 19d istio-injection=enabled istio-system Active 24d &lt;none&gt; kube-public Active 40d &lt;none&gt; kube-system Active 40d &lt;none&gt; </code></pre> <p>But how does this work? To get to the bottom of this, we need to understand Kubernetes admission controllers.</p> <p><a href="https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/">From Kubernetes documentation:</a></p> <div> <aside class="callout tip"> <div class="type"><svg class="large-icon"><use xlink:href="/v1.9/img/icons.svg#callout-tip"/></svg></div> <div class="content">An admission controller is a piece of code that intercepts requests to the Kubernetes API server prior to persistence of the object, but after the request is authenticated and authorized. You can define two types of admission webhooks, validating admission Webhook and mutating admission webhook. With validating admission Webhooks, you may reject requests to enforce custom admission policies. With mutating admission Webhooks, you may change requests to enforce custom defaults.</div> </aside> </div> <p>For automatic sidecar injection, Istio relies on <code>Mutating Admission Webhook</code>. 
Let’s look at the details of the <code>istio-sidecar-injector</code> mutating webhook configuration.</p> <pre><code class='language-bash' data-expandlinks='true' data-outputis='yaml' data-repo='istio' >$ kubectl get mutatingwebhookconfiguration istio-sidecar-injector -o yaml SNIPPET from the output: apiVersion: admissionregistration.k8s.io/v1beta1 kind: MutatingWebhookConfiguration metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {&#34;apiVersion&#34;:&#34;admissionregistration.k8s.io/v1beta1&#34;,&#34;kind&#34;:&#34;MutatingWebhookConfiguration&#34;,&#34;metadata&#34;:{&#34;annotations&#34;:{},&#34;labels&#34;:{&#34;app&#34;:&#34;istio-sidecar-injector&#34;,&#34;chart&#34;:&#34;sidecarInjectorWebhook-1.0.1&#34;,&#34;heritage&#34;:&#34;Tiller&#34;,&#34;release&#34;:&#34;istio-remote&#34;},&#34;name&#34;:&#34;istio-sidecar-injector&#34;,&#34;namespace&#34;:&#34;&#34;},&#34;webhooks&#34;:[{&#34;clientConfig&#34;:{&#34;caBundle&#34;:&#34;&#34;,&#34;service&#34;:{&#34;name&#34;:&#34;istio-sidecar-injector&#34;,&#34;namespace&#34;:&#34;istio-system&#34;,&#34;path&#34;:&#34;/inject&#34;}},&#34;failurePolicy&#34;:&#34;Fail&#34;,&#34;name&#34;:&#34;sidecar-injector.istio.io&#34;,&#34;namespaceSelector&#34;:{&#34;matchLabels&#34;:{&#34;istio-injection&#34;:&#34;enabled&#34;}},&#34;rules&#34;:[{&#34;apiGroups&#34;:[&#34;&#34;],&#34;apiVersions&#34;:[&#34;v1&#34;],&#34;operations&#34;:[&#34;CREATE&#34;],&#34;resources&#34;:[&#34;pods&#34;]}]}]} creationTimestamp: 2018-12-10T08:40:15Z generation: 2 labels: app: istio-sidecar-injector chart: sidecarInjectorWebhook-1.0.1 heritage: Tiller release: istio-remote name: istio-sidecar-injector ..... 
webhooks: - clientConfig: service: name: istio-sidecar-injector namespace: istio-system path: /inject name: sidecar-injector.istio.io namespaceSelector: matchLabels: istio-injection: enabled rules: - apiGroups: - &#34;&#34; apiVersions: - v1 operations: - CREATE resources: - pods </code></pre> <p>This is where you can see the webhook <code>namespaceSelector</code> label that is matched for sidecar injection with the label <code>istio-injection: enabled</code>. In this case, you also see the operations and resources for which this is done when the pods are created. When an <code>apiserver</code> receives a request that matches one of the rules, the <code>apiserver</code> sends an admission review request to the webhook service as specified in the <code>clientConfig</code> configuration, with the <code>name: istio-sidecar-injector</code> key-value pair. We should be able to see that this service is running in the <code>istio-system</code> namespace.</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl get svc --namespace=istio-system | grep sidecar-injector istio-sidecar-injector ClusterIP 10.102.70.184 &lt;none&gt; 443/TCP 24d </code></pre> <p>This configuration ultimately does pretty much the same thing as manual injection, except that it happens automatically during pod creation, so you won’t see the change in the deployment. You need to use <code>kubectl describe</code> to see the injected sidecar proxy and init container.</p> <p>The automatic sidecar injection not only depends on the <code>namespaceSelector</code> mechanism of the webhook, but also on the default injection policy and the per-pod override annotation.</p> <p>If you look at the <code>istio-sidecar-injector</code> ConfigMap again, it has the default injection policy defined. 
In our case, it is enabled by default.</p> <pre><code class='language-bash' data-expandlinks='true' data-outputis='yaml' data-repo='istio' >$ kubectl -n istio-system get configmap istio-sidecar-injector -o=jsonpath=&#39;{.data.config}&#39; SNIPPET from the output: policy: enabled template: |- initContainers: - name: istio-init image: &#34;gcr.io/istio-release/proxy_init:1.0.2&#34; args: - &#34;-p&#34; - [[ .MeshConfig.ProxyListenPort ]] </code></pre> <p>You can also use the annotation <code>sidecar.istio.io/inject</code> in the pod template to override the default policy. The following example disables the automatic injection of the sidecar for the pods in a <code>Deployment</code>.</p> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' >apiVersion: extensions/v1beta1 kind: Deployment metadata: name: ignored spec: template: metadata: annotations: sidecar.istio.io/inject: &#34;false&#34; spec: containers: - name: ignored image: tutum/curl command: [&#34;/bin/sleep&#34;,&#34;infinity&#34;] </code></pre> <p>This example shows that automatic sidecar injection is controlled by several variables, configured at the namespace, ConfigMap, or pod level:</p> <ul> <li>webhooks <code>namespaceSelector</code> (<code>istio-injection: enabled</code>)</li> <li>default policy (configured in the ConfigMap <code>istio-sidecar-injector</code>)</li> <li>per-pod override annotation (<code>sidecar.istio.io/inject</code>)</li> </ul> <p>The <a href="/v1.9/docs/ops/common-problems/injection/">injection status table</a> shows a clear picture of the final injection status based on the value of the above variables.</p> <h2 id="traffic-flow-from-application-container-to-sidecar-proxy">Traffic flow from application container to sidecar proxy</h2> <p>Now that we are clear about how a sidecar container and an init container are injected into an application manifest, how does the sidecar proxy grab the inbound and outbound traffic to and from the container? 
We did briefly mention that it is done by setting up the <code>iptables</code> rules within the pod namespace, which in turn is done by the <code>istio-init</code> container. Now, it is time to verify what actually gets updated within the namespace.</p> <p>Let’s get into the application pod namespace we deployed in the previous section and look at the configured iptables. I am going to show an example using <code>nsenter</code>. Alternatively, you can enter the container in a privileged mode to see the same information. For folks without access to the nodes, using <code>exec</code> to get into the sidecar and running <code>iptables</code> is more practical.</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ docker inspect b8de099d3510 --format &#39;{{ .State.Pid }}&#39; 4125 </code></pre> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ nsenter -t 4125 -n iptables -t nat -S -P PREROUTING ACCEPT -P INPUT ACCEPT -P OUTPUT ACCEPT -P POSTROUTING ACCEPT -N ISTIO_INBOUND -N ISTIO_IN_REDIRECT -N ISTIO_OUTPUT -N ISTIO_REDIRECT -A PREROUTING -p tcp -j ISTIO_INBOUND -A OUTPUT -p tcp -j ISTIO_OUTPUT -A ISTIO_INBOUND -p tcp -m tcp --dport 80 -j ISTIO_IN_REDIRECT -A ISTIO_IN_REDIRECT -p tcp -j REDIRECT --to-ports 15001 -A ISTIO_OUTPUT ! -d 127.0.0.1/32 -o lo -j ISTIO_REDIRECT -A ISTIO_OUTPUT -m owner --uid-owner 1337 -j RETURN -A ISTIO_OUTPUT -m owner --gid-owner 1337 -j RETURN -A ISTIO_OUTPUT -d 127.0.0.1/32 -j RETURN -A ISTIO_OUTPUT -j ISTIO_REDIRECT -A ISTIO_REDIRECT -p tcp -j REDIRECT --to-ports 15001 </code></pre> <p>The output above clearly shows that all the incoming traffic to port 80, which is the port our <code>demo-red</code> application is listening on, is now <code>REDIRECTED</code> to port <code>15001</code>, which is the port that the <code>istio-proxy</code>, an Envoy proxy, is listening on. The same holds true for the outgoing traffic.</p> <p>This brings us to the end of this post. 
I hope it helped to de-mystify how Istio manages to inject the sidecar proxies into an existing deployment and how Istio routes the traffic to the proxy.</p> <div> <aside class="callout idea"> <div class="type"> <svg class="large-icon"><use xlink:href="/v1.9/img/icons.svg#callout-idea"/></svg> </div> <div class="content">Update: In place of <code>istio-init</code>, there now seems to be an option of using the new CNI, which removes the need for the init container and associated privileges. This <a href="https://github.com/istio/cni"><code>istio-cni</code></a> plugin sets up the pods&rsquo; networking to fulfill this requirement in place of the current Istio injected pod <code>istio-init</code> approach.</div> </aside> </div>Thu, 31 Jan 2019 00:00:00 +0000/v1.9/blog/2019/data-plane-setup/Manish Chugtu/v1.9/blog/2019/data-plane-setup/kubernetessidecar-injectiontraffic-managementSidestepping Dependency Ordering with AppSwitch <p>We are going through an interesting cycle of application decomposition and recomposition. While the microservice paradigm is driving monolithic applications to be broken into separate individual services, the service mesh approach is helping them to be connected back together into well-structured applications. As such, microservices are logically separate but not independent. They are usually closely interdependent and taking them apart introduces many new concerns such as need for mutual authentication between services. Istio directly addresses most of those issues.</p> <h2 id="dependency-ordering-problem">Dependency ordering problem</h2> <p>An issue that arises due to application decomposition and one that Istio doesn’t address is dependency ordering &ndash; bringing up individual services of an application in an order that guarantees that the application as a whole comes up quickly and correctly. 
In a monolithic application, with all its components built-in, dependency ordering between the components is enforced by internal locking mechanisms. But with individual services potentially scattered across the cluster in a service mesh, starting a service first requires checking that the services it depends on are up and available.</p> <p>Dependency ordering is deceptively nuanced with a host of interrelated problems. Ordering individual services requires having the dependency graph of the services so that they can be brought up starting from leaf nodes back to the root nodes. It is not easy to construct such a graph and keep it updated over time as interdependencies evolve with the behavior of the application. Even if the dependency graph is somehow provided, enforcing the ordering itself is not easy. Simply starting the services in the specified order obviously won’t do. A service may have started but not be ready to accept connections yet. This is the problem with docker-compose&rsquo;s <code>depends_on</code> option, for example.</p> <p>Apart from introducing sufficiently long sleeps between service startups, a common pattern that is often used is to check for readiness of dependencies before starting a service. In Kubernetes, this could be done with a wait script as part of the init container of the pod. However, that means that the entire application would be held up until all its dependencies come alive. Sometimes applications spend several minutes initializing themselves on startup before making their first outbound connection. Not allowing a service to start at all adds substantial overhead to the overall startup time of the application.
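As an illustration, such a readiness wait might be sketched as an init container along the following lines. This is a minimal sketch only; the dependency name <code>dmgr-service</code>, its port, and the image names are hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: node-instance
spec:
  initContainers:
  # Blocks pod startup until the dependency accepts TCP connections.
  - name: wait-for-dmgr
    image: busybox
    command: ['sh', '-c',
      'until nc -z dmgr-service 9043; do echo "waiting for dmgr"; sleep 2; done']
  containers:
  - name: node-instance
    image: websphere-node:latest  # hypothetical application image
```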
Also, the strategy of waiting on the init container won&rsquo;t work for the case of multiple interdependent services within the same pod.</p> <h3 id="example-scenario-ibm-websphere-nd">Example scenario: IBM WebSphere ND</h3> <p>Let us consider IBM WebSphere ND &ndash; a widely deployed application middleware &ndash; to grok these problems more closely. It is a fairly complex framework in itself and consists of a central component called deployment manager (<code>dmgr</code>) that manages a set of node instances. It uses UDP to negotiate cluster membership among the nodes and requires that deployment manager is up and operational before any of the node instances can come up and join the cluster.</p> <p>Why are we talking about a traditional application in the modern cloud-native context? It turns out that there are significant gains to be had by enabling them to run on the Kubernetes and Istio platforms. Essentially it&rsquo;s a part of the modernization journey that allows running traditional apps alongside green-field apps on the same modern platform to facilitate interoperation between the two. In fact, WebSphere ND is a demanding application. It expects a consistent network environment with specific network interface attributes etc. AppSwitch is equipped to take care of those requirements. For the purpose of this blog however, I&rsquo;ll focus on the dependency ordering requirement and how AppSwitch addresses it.</p> <p>Simply deploying <code>dmgr</code> and node instances as pods on a Kubernetes cluster does not work. <code>dmgr</code> and the node instances happen to have a lengthy initialization process that can take several minutes. If they are all co-scheduled, the application typically ends up in a funny state. When a node instance comes up and finds that <code>dmgr</code> is missing, it would take an alternate startup path. Instead, if it had exited immediately, Kubernetes crash-loop would have taken over and perhaps the application would have come up. 
But even in that case, it turns out that a timely startup is not guaranteed.</p> <p>One <code>dmgr</code> along with its node instances is a basic deployment configuration for WebSphere ND. Production deployments of applications built on top of WebSphere ND, such as IBM Business Process Manager, include several other services. In those configurations, there could be a chain of interdependencies. Depending on the applications hosted by the node instances, there may be an ordering requirement among them as well. With long service initialization times and crash-loop restarts, there is little chance for the application to start in any reasonable length of time.</p> <h3 id="sidecar-dependency-in-istio">Sidecar dependency in Istio</h3> <p>Istio itself is affected by a version of the dependency ordering problem. Since connections into and out of a service running under Istio are redirected through its sidecar proxy, an implicit dependency is created between the application service and its sidecar. Unless the sidecar is fully operational, all requests from and to the service get dropped.</p> <h2 id="dependency-ordering-with-appswitch">Dependency ordering with AppSwitch</h2> <p>So how do we go about addressing these issues? One way is to defer it to the applications and say that they are supposed to be &ldquo;well behaved&rdquo; and implement appropriate logic to make themselves immune to startup order issues. However, many applications (especially traditional ones) either time out or deadlock if misordered. Even for new applications, implementing one-off logic for each service is a substantial additional burden that is best avoided. Service mesh needs to provide adequate support around these problems. After all, factoring out common patterns into an underlying framework is really the point of service mesh.</p> <p><a href="http://appswitch.io">AppSwitch</a> explicitly addresses dependency ordering.
It sits on the control path of the application’s network interactions between clients and services in a cluster and knows precisely when a service becomes a client by making the <code>connect</code> call and when a particular service becomes ready to accept connections by making the <code>listen</code> call. Its <em>service router</em> component disseminates information about these events across the cluster and arbitrates interactions among clients and servers. That is how AppSwitch implements functionality such as load balancing and isolation in a simple and efficient manner. Leveraging the same strategic location on the application&rsquo;s network control path, it is conceivable that the <code>connect</code> and <code>listen</code> calls made by those services can be lined up at a finer granularity rather than coarsely sequencing entire services as per a dependency graph. That would effectively solve the multilevel dependency problem and speed up application startup.</p> <p>But that still requires a dependency graph. A number of products and tools exist to help with discovering service dependencies. But they are typically based on passive monitoring of network traffic and cannot provide the information beforehand for any arbitrary application. Network level obfuscation due to encryption and tunneling also makes them unreliable. The burden of discovering and specifying the dependencies ultimately falls to the developer or the operator of the application. As it is, even consistency checking a dependency specification is itself quite complex and any way to avoid requiring a dependency graph would be most desirable.</p> <p>The point of a dependency graph is to know which clients depend on a particular service so that those clients can then be made to wait for the respective service to become live. But does it really matter which specific clients?
Ultimately one tautology that always holds is that all clients of a service have an implicit dependency on the service. That’s what AppSwitch leverages to get around the requirement. In fact, that sidesteps dependency ordering altogether. All services of the application can be co-scheduled without regard to any startup order. Interdependencies among them automatically work themselves out at the granularity of individual requests and responses, resulting in quick and correct application startups.</p> <h3 id="appswitch-model-and-constructs">AppSwitch model and constructs</h3> <p>Now that we have a conceptual understanding of AppSwitch’s high-level approach, let’s look at the constructs involved. But first a quick summary of the usage model is in order. Even though it is written for a different context, reviewing my earlier <a href="/v1.9/blog/2018/delayering-istio/">blog</a> on this topic would be useful as well. For completeness, let me also note AppSwitch doesn’t bother with non-network dependencies. For example it may be possible for two services to interact using IPC mechanisms or through the shared file system. Processes with deep ties like that are typically part of the same service anyway and don’t require framework’s intervention for ordering.</p> <p>At its core, AppSwitch is built on a mechanism that allows instrumenting the BSD socket API and other related calls like <code>fcntl</code> and <code>ioctl</code> that deal with sockets. As interesting as the details of its implementation are, it’s going to distract us from the main topic, so I’d just summarize the key properties that distinguish it from other implementations. (1) It’s fast. It uses a combination of <code>seccomp</code> filtering and binary instrumentation to aggressively limit intervening with application’s normal execution. 
AppSwitch is particularly suited for service mesh and application networking use cases given that it implements those features without ever having to actually touch the data. In contrast, network level approaches incur per-packet cost. Take a look at this <a href="/v1.9/blog/2018/delayering-istio/">blog</a> for some of the performance measurements. (2) It doesn’t require any kernel support, kernel module, or patch, and works on standard distro kernels. (3) It can run as a regular user (no root). In fact, the mechanism can even make it possible to run the <a href="https://linuxpiter.com/en/materials/2478">Docker daemon without root</a> by removing the root requirement to network containers. (4) It doesn’t require any changes to the applications whatsoever and works for any type of application &ndash; from WebSphere ND and SAP to custom C apps to statically linked Go apps. The only requirement at this point is Linux/x86.</p> <h3 id="decoupling-services-from-their-references">Decoupling services from their references</h3> <p>AppSwitch is built on the fundamental premise that applications should be decoupled from their references. The identity of applications is traditionally derived from the identity of the host on which they run. However, applications and hosts are very different objects that need to be referenced independently. Detailed discussion around this topic along with a conceptual foundation of AppSwitch is presented in this <a href="https://arxiv.org/abs/1711.02294">research paper</a>.</p> <p>The central AppSwitch construct that achieves the decoupling between service objects and their identities is the <em>service reference</em> (<em>reference</em>, for short). AppSwitch implements service references based on the API instrumentation mechanism outlined above. A service reference consists of an IP:port pair (and optionally a DNS name) and a label-selector that selects the service represented by the reference and the clients to which this reference applies.
A reference supports a few key properties. (1) It can be named independently of the name of the object it refers to. That is, a service may be listening on an IP and port but a reference allows that service to be reached on any other IP and port chosen by the user. This is what allows AppSwitch to run traditional applications, captured from their source environments with static IP configurations, on Kubernetes by providing them with the necessary IP addresses and ports regardless of the target network environment. (2) It remains unchanged even if the location of the target service changes. A reference automatically redirects itself as its label-selector now resolves to the new instance of the service. (3) Most important for this discussion, a reference remains valid even as the target service is coming up.</p> <p>To facilitate discovering services that can be accessed through service references, AppSwitch provides an <em>auto-curated service registry</em>. The registry is automatically kept up to date as services come and go across the cluster based on the network API that AppSwitch tracks. Each entry in the registry consists of the IP and port where the respective service is bound. Along with that, it includes a set of labels indicating the application to which this service belongs, the IP and port that the application passed through the socket API when creating the service, the IP and port where AppSwitch actually bound the service on the underlying host on behalf of the application, etc. In addition, applications created under AppSwitch carry a set of labels passed by the user that describe the application, together with a few default system labels indicating the user that created the application, the host where the application is running, etc. These labels are all available to be expressed in the label-selector carried by a service reference. A service in the registry can be made accessible to clients by creating a service reference.
A client would then be able to reach the service at the reference’s name (IP:port). Now let’s look at how AppSwitch guarantees that the reference remains valid even when the target service has not yet come up.</p> <h3 id="non-blocking-requests">Non-blocking requests</h3> <p>AppSwitch leverages the semantics of the BSD socket API to ensure that service references appear valid from the perspective of clients as corresponding services come up. When a client makes a blocking connect call to another service that has not yet come up, AppSwitch blocks the call for a certain time waiting for the target service to become live. Since it is known that the target service is a part of the application and is expected to come up shortly, making the client block rather than returning an error such as <code>ECONNREFUSED</code> prevents the application from failing to start. If the service doesn’t come up within time, an error is returned to the application so that framework-level mechanisms like Kubernetes crash-loop can kick in.</p> <p>If the client request is marked as non-blocking, AppSwitch handles that by returning <code>EAGAIN</code> to inform the application to retry rather than give up. Once again, that is in-line with the semantics of socket API and prevents failures due to startup races. AppSwitch essentially enables the retry logic already built into applications in support of the BSD socket API to be transparently repurposed for dependency ordering.</p> <h3 id="application-timeouts">Application timeouts</h3> <p>What if the application times out based on its own internal timer? Truth be told, AppSwitch can also fake application’s perception of time if wanted but that would be overstepping and actually unnecessary. Application decides and knows best how long it should wait and it’s not appropriate for AppSwitch to mess with that. 
Application timeouts are conservatively long and if the target service still hasn’t come up in time, it is unlikely to be a dependency ordering issue. There must be something else going on that should not be masked.</p> <h3 id="wildcard-service-references-for-sidecar-dependency">Wildcard service references for sidecar dependency</h3> <p>Service references can be used to address the Istio sidecar dependency issue mentioned earlier. AppSwitch allows the IP:port specified as part of a service reference to be a wildcard. That is, the service reference IP address can be a netmask indicating the IP address range to be captured. If the label selector of the service reference points to the sidecar service, then all outgoing connections of any application for which this service reference is applied, will be transparently redirected to the sidecar. And of course, the service reference remains valid while sidecar is still coming up and the race is removed.</p> <p>Using service references for sidecar dependency ordering also implicitly redirects application’s connections to the sidecar without requiring iptables and attendant privilege issues. Essentially it works as if the application is directly making connections to the sidecar rather than the target destination, leaving the sidecar in charge of what to do. AppSwitch would interject metadata about the original destination etc. into the data stream of the connection using the proxy protocol that the sidecar could decode before passing the connection through to the application. Some of these details were discussed <a href="/v1.9/blog/2018/delayering-istio/">here</a>. That takes care of outbound connections but what about incoming connections? With all services and their sidecars running under AppSwitch, any incoming connections that would have come from remote nodes would be redirected to their respective remote sidecars. 
So there is nothing special to do about incoming connections.</p> <h2 id="summary">Summary</h2> <p>Dependency ordering is a pesky problem. This is mostly due to the lack of access to fine-grain application-level events around inter-service interactions. Addressing this problem would normally have required applications to implement their own internal logic. But AppSwitch allows those internal application events to be instrumented without requiring application changes. AppSwitch then leverages the ubiquitous support for the BSD socket API to sidestep the requirement of ordering dependencies.</p> <h2 id="acknowledgements">Acknowledgements</h2> <p>Thanks to Eric Herness and team for their insights and support with IBM WebSphere and BPM products as we modernized them onto the Kubernetes platform, and to Mandar Jog, Martin Taillefer, and Shriram Rajagopalan for reviewing early drafts of this blog.</p>Mon, 14 Jan 2019 00:00:00 +0000/v1.9/blog/2019/appswitch/Dinesh Subhraveti (AppOrbit and Columbia University)/v1.9/blog/2019/appswitch/appswitchperformanceDeploy a Custom Ingress Gateway Using Cert-Manager <p>This post provides instructions to manually create a custom ingress <a href="/v1.9/docs/reference/config/networking/gateway/">gateway</a> with automatic provisioning of certificates based on cert-manager.</p> <p>A custom ingress gateway can be used to provision a separate <code>loadbalancer</code> in order to isolate traffic.</p> <h2 id="before-you-begin">Before you begin</h2> <ul> <li>Set up Istio by following the instructions in the <a href="/v1.9/docs/setup/">Installation guide</a>.</li> <li>Set up <code>cert-manager</code> with its Helm <a href="https://github.com/helm/charts/tree/master/stable/cert-manager#installing-the-chart">chart</a>.</li> <li>We will use <code>demo.mydemo.com</code> for our example; it must be resolvable with your DNS.</li> </ul> <h2 id="configuring-the-custom-ingress-gateway">Configuring the custom ingress gateway</h2> <ol> <li><p>Check if <a
href="https://github.com/helm/charts/tree/master/stable/cert-manager">cert-manager</a> was installed using Helm with the following command:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ helm ls </code></pre> <p>The output should be similar to the example below and show cert-manager with a <code>STATUS</code> of <code>DEPLOYED</code>:</p> <pre><code class='language-plain' data-expandlinks='true' data-repo='istio' >NAME   REVISION  UPDATED                   STATUS    CHART                      APP VERSION   NAMESPACE
istio  1         Thu Oct 11 13:34:24 2018  DEPLOYED  istio-1.0.X                1.0.X         istio-system
cert   1         Wed Oct 24 14:08:36 2018  DEPLOYED  cert-manager-v0.6.0-dev.2  v0.6.0-dev.2  istio-system
</code></pre></li> <li><p>To create the cluster&rsquo;s issuer, apply the following configuration:</p> <div> <aside class="callout tip"> <div class="type"><svg class="large-icon"><use xlink:href="/v1.9/img/icons.svg#callout-tip"/></svg></div> <div class="content">Change the cluster&rsquo;s <a href="https://cert-manager.readthedocs.io/en/latest/reference/issuers.html">issuer</a> provider with your own configuration values.
The example uses the values under <code>route53</code>.</div> </aside> </div> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' >apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
  name: letsencrypt-demo
  namespace: kube-system
spec:
  acme:
    # The ACME server URL
    server: https://acme-v02.api.letsencrypt.org/directory
    # Email address used for ACME registration
    email: &lt;REDACTED&gt;
    # Name of a secret used to store the ACME account private key
    privateKeySecretRef:
      name: letsencrypt-demo
    dns01:
      # Here we define a list of DNS-01 providers that can solve DNS challenges
      providers:
      - name: your-dns
        route53:
          accessKeyID: &lt;REDACTED&gt;
          region: eu-central-1
          secretAccessKeySecretRef:
            name: prod-route53-credentials-secret
            key: secret-access-key
</code></pre></li> <li><p>If you use the <code>route53</code> <a href="https://cert-manager.readthedocs.io/en/latest/tasks/acme/configuring-dns01/route53.html">provider</a>, you must provide a secret to perform DNS ACME Validation.
To create the secret, apply the following configuration file:</p> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' >apiVersion: v1
kind: Secret
metadata:
  name: prod-route53-credentials-secret
type: Opaque
data:
  secret-access-key: &lt;REDACTED BASE64&gt;
</code></pre></li> <li><p>Create your own certificate:</p> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' >apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
  name: demo-certificate
  namespace: istio-system
spec:
  acme:
    config:
    - dns01:
        provider: your-dns
      domains:
      - &#39;*.mydemo.com&#39;
  commonName: &#39;*.mydemo.com&#39;
  dnsNames:
  - &#39;*.mydemo.com&#39;
  issuerRef:
    kind: ClusterIssuer
    name: letsencrypt-demo
  secretName: istio-customingressgateway-certs
</code></pre> <p>Make a note of the value of <code>secretName</code> since a future step requires it.</p></li> <li><p>To scale automatically, declare a new horizontal pod autoscaler with the following configuration:</p> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' >apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: my-ingressgateway
  namespace: istio-system
spec:
  maxReplicas: 5
  minReplicas: 1
  scaleTargetRef:
    apiVersion: apps/v1beta1
    kind: Deployment
    name: my-ingressgateway
  targetCPUUtilizationPercentage: 80
status:
  currentCPUUtilizationPercentage: 0
  currentReplicas: 1
  desiredReplicas: 1
</code></pre></li> <li><p>Apply your deployment with the declaration provided in the <a href="/v1.9/blog/2019/custom-ingress-gateway/deployment-custom-ingress.yaml">yaml definition</a>.</p> <div> <aside class="callout tip"> <div class="type"><svg class="large-icon"><use xlink:href="/v1.9/img/icons.svg#callout-tip"/></svg></div> <div class="content">The annotations used, for example <code>aws-load-balancer-type</code>, only apply for AWS.</div> </aside> </div> </li> <li><p>Create your service:</p> <div> <aside class="callout warning"> <div class="type"> <svg
class="large-icon"><use xlink:href="/v1.9/img/icons.svg#callout-warning"/></svg> </div> <div class="content">The <code>NodePort</code> used needs to be an available port.</div> </aside> </div> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' >apiVersion: v1
kind: Service
metadata:
  name: my-ingressgateway
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
  labels:
    app: my-ingressgateway
    istio: my-ingressgateway
spec:
  type: LoadBalancer
  selector:
    app: my-ingressgateway
    istio: my-ingressgateway
  ports:
  - name: http2
    nodePort: 32380
    port: 80
    targetPort: 80
  - name: https
    nodePort: 32390
    port: 443
  - name: tcp
    nodePort: 32400
    port: 31400
</code></pre></li> <li><p>Create your Istio custom gateway configuration object:</p> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' >apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  annotations:
  name: istio-custom-gateway
  namespace: default
spec:
  selector:
    istio: my-ingressgateway
  servers:
  - hosts:
    - &#39;*.mydemo.com&#39;
    port:
      name: http
      number: 80
      protocol: HTTP
    tls:
      httpsRedirect: true
  - hosts:
    - &#39;*.mydemo.com&#39;
    port:
      name: https
      number: 443
      protocol: HTTPS
    tls:
      mode: SIMPLE
      privateKey: /etc/istio/ingressgateway-certs/tls.key
      serverCertificate: /etc/istio/ingressgateway-certs/tls.crt
</code></pre></li> <li><p>Link your <code>istio-custom-gateway</code> with your <code>VirtualService</code>:</p> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' >apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-virtualservice
spec:
  hosts:
  - &#34;demo.mydemo.com&#34;
  gateways:
  - istio-custom-gateway
  http:
  - route:
    - destination:
        host: my-demoapp
</code></pre></li> <li><p>Verify that the correct certificate is returned by the server and that it is successfully verified (<em>SSL certificate verify ok</em> is printed):</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ curl -v
https://demo.mydemo.com
Server certificate:
SSL certificate verify ok.
</code></pre></li> </ol> <p><strong>Congratulations!</strong> You can now use your custom <code>istio-custom-gateway</code> <a href="/v1.9/docs/reference/config/networking/gateway/">gateway</a> configuration object.</p>Thu, 10 Jan 2019 00:00:00 +0000/v1.9/blog/2019/custom-ingress-gateway/Julien Senon/v1.9/blog/2019/custom-ingress-gateway/ingresstraffic-managementAnnouncing discuss.istio.io<p>We in the Istio community have been working to find the right medium for users to engage with other members of the community &ndash; to ask questions, to get help from other users, and to engage with developers working on the project.</p> <p>We’ve tried several different avenues, but each has had some downsides. RocketChat was our most recent endeavor, but the lack of certain features (for example, threading) meant it wasn’t ideal for any longer discussions around a single issue. It also led to a dilemma for some users &ndash; when should I email istio-users@googlegroups.com and when should I use RocketChat?</p> <p>We think we’ve found the right balance of features in a single platform, and we’re happy to announce <a href="https://discuss.istio.io">discuss.istio.io</a>. It’s a full-featured forum where we will have discussions about Istio from here on out. It will allow you to ask a question and get threaded replies! As a real bonus, you can use your GitHub identity.</p> <p>If you prefer emails, you can configure it to send emails just like Google groups did.</p> <p>We will be marking our Google groups &ldquo;read only&rdquo; so that the content remains, but we ask you to send further questions over to <a href="https://discuss.istio.io">discuss.istio.io</a>.
If you have any outstanding questions or discussions in the groups, please move the conversation over.</p> <p>Happy meshing!</p>Thu, 10 Jan 2019 00:00:00 +0000/v1.9/blog/2019/announcing-discuss.istio.io//v1.9/blog/2019/announcing-discuss.istio.io/Incremental Istio Part 1, Traffic Management <p>Traffic management is one of the critical benefits provided by Istio. At the heart of Istio’s traffic management is the ability to decouple traffic flow and infrastructure scaling. This lets you control your traffic in ways that aren’t possible without a service mesh like Istio.</p> <p>For example, let’s say you want to execute a <a href="https://martinfowler.com/bliki/CanaryRelease.html">canary deployment</a>. With Istio, you can specify that <strong>v1</strong> of a service receives 90% of incoming traffic, while <strong>v2</strong> of that service only receives 10%. With standard Kubernetes deployments, the only way to achieve this is to manually control the number of available Pods for each version, for example 9 Pods running v1 and 1 Pod running v2. This type of manual control is hard to implement, and over time may have trouble scaling. For more information, check out <a href="/v1.9/blog/2017/0.1-canary/">Canary Deployments using Istio</a>.</p> <p>The same issue exists when deploying updates to existing services. While you can update deployments with Kubernetes, it requires replacing v1 Pods with v2 Pods. Using Istio, you can deploy v2 of your service and use built-in traffic management mechanisms to shift traffic to your updated services at a network level, then remove the v1 Pods.</p> <p>In addition to canary deployments and general traffic shifting, Istio also gives you the ability to implement dynamic request routing (based on HTTP headers), failure recovery, retries, circuit breakers, and fault injection. 
For more information, check out the <a href="/v1.9/docs/concepts/traffic-management/">Traffic Management documentation</a>.</p> <p>This post walks through a technique that highlights a particularly useful way that you can implement Istio incrementally &ndash; in this case, only the traffic management features &ndash; without having to individually update each of your Pods.</p> <h2 id="setup-why-implement-istio-traffic-management-features">Setup: why implement Istio traffic management features?</h2> <p>Of course, the first question is: Why would you want to do this?</p> <p>If you’re part of one of the many organizations out there that have a large cluster with lots of teams deploying, the answer is pretty clear. Let’s say Team A is getting started with Istio and wants to start some canary deployments on Service A, but Team B hasn’t started using Istio, so they don’t have sidecars deployed.</p> <p>With Istio, Team A can still implement their canaries by having Service B call Service A through Istio’s ingress gateway.</p> <h2 id="background-traffic-routing-in-an-istio-mesh">Background: traffic routing in an Istio mesh</h2> <p>But how can you use Istio’s traffic management capabilities without updating each of your applications’ Pods to include the Istio sidecar? Before answering that question, let’s take a quick high-level look at how traffic enters an Istio mesh and how it’s routed.</p> <p>Pods that are part of the Istio mesh contain a sidecar proxy that is responsible for mediating all inbound and outbound traffic to the Pod. Within an Istio mesh, Pilot is responsible for converting high-level routing rules into configurations and propagating them to the sidecar proxies. That means when services communicate with one another, their routing decisions are determined from the client side.</p> <p>Let’s say you have two services that are part of the Istio mesh, Service A and Service B. 
When A wants to communicate with B, the sidecar proxy of Pod A is responsible for directing traffic to Service B. For example, if you wanted to split traffic <sup>50</sup>&frasl;<sub>50</sub> across Service B v1 and v2, the traffic would flow as follows:</p> <figure style="width:60%"> <div class="wrapper-with-intrinsic-ratio" style="padding-bottom:42.66666666666667%"> <a data-skipendnotes="true" href="/v1.9/blog/2018/incremental-traffic-management/fifty-fifty.png" title="50/50 Traffic Split"> <img class="element-to-stretch" src="/v1.9/blog/2018/incremental-traffic-management/fifty-fifty.png" alt="50/50 Traffic Split" /> </a> </div> <figcaption>50/50 Traffic Split</figcaption> </figure> <p>If Services A and B are not part of the Istio mesh, there is no sidecar proxy that knows how to route traffic to different versions of Service B. In that case you need to use another approach to get traffic from Service A to Service B, following the <sup>50</sup>&frasl;<sub>50</sub> rules you’ve set up.</p> <p>Fortunately, a standard Istio deployment already includes a <a href="/v1.9/docs/concepts/traffic-management/#gateways">Gateway</a> that specifically deals with ingress traffic outside of the Istio mesh. This Gateway is used to allow ingress traffic from outside the cluster via an external load balancer, or to allow ingress traffic from within the Kubernetes cluster but outside the service mesh. It can be configured to proxy incoming ingress traffic to the appropriate Pods, even if they don’t have a sidecar proxy. 
While this approach allows you to leverage Istio’s traffic management features, it does mean that traffic going through the ingress gateway will incur an extra hop.</p> <figure style="width:60%"> <div class="wrapper-with-intrinsic-ratio" style="padding-bottom:54.83870967741935%"> <a data-skipendnotes="true" href="/v1.9/blog/2018/incremental-traffic-management/fifty-fifty-ingress-gateway.png" title="50/50 Traffic Split using Ingress Gateway"> <img class="element-to-stretch" src="/v1.9/blog/2018/incremental-traffic-management/fifty-fifty-ingress-gateway.png" alt="50/50 Traffic Split using Ingress Gateway" /> </a> </div> <figcaption>50/50 Traffic Split using Ingress Gateway</figcaption> </figure> <h2 id="in-action-traffic-routing-with-istio">In action: traffic routing with Istio</h2> <p>A simple way to see this type of approach in action is to first set up your Kubernetes environment using the <a href="/v1.9/docs/setup/platform-setup/">Platform Setup</a> instructions, and then install the <strong>minimal</strong> Istio profile using <a href="https://archive.istio.io/1.4/docs/setup/install/helm/">Helm</a>, including only the traffic management components (ingress gateway, egress gateway, Pilot). 
The following example uses <a href="https://cloud.google.com/gke">Google Kubernetes Engine</a>.</p> <p>First, set up and configure <a href="/v1.9/docs/setup/platform-setup/gke/">GKE</a>:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ gcloud container clusters create istio-inc --zone us-central1-f $ gcloud container clusters get-credentials istio-inc $ kubectl create clusterrolebinding cluster-admin-binding \ --clusterrole=cluster-admin \ --user=$(gcloud config get-value core/account) </code></pre> <p>Next, <a href="https://helm.sh/docs/intro/install/">install Helm</a> and <a href="https://archive.istio.io/1.4/docs/setup/install/helm/">generate a minimal Istio install</a> &ndash; only traffic management components:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ helm template install/kubernetes/helm/istio \ --name istio \ --namespace istio-system \ --set security.enabled=false \ --set galley.enabled=false \ --set sidecarInjectorWebhook.enabled=false \ --set mixer.enabled=false \ --set prometheus.enabled=false \ --set pilot.sidecar=false &gt; istio-minimal.yaml </code></pre> <p>Then create the <code>istio-system</code> namespace and deploy Istio:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl create namespace istio-system $ kubectl apply -f istio-minimal.yaml </code></pre> <p>Next, deploy the Bookinfo sample without the Istio sidecar containers:</p> <div><a data-skipendnotes='true' style='display:none' href='https://raw.githubusercontent.com/istio/istio/release-1.9/samples/bookinfo/platform/kube/bookinfo.yaml'>Zip</a><pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl apply -f @samples/bookinfo/platform/kube/bookinfo.yaml@ </code></pre></div> <p>Now, configure a new Gateway that allows access to the reviews service from outside the Istio mesh, a new <code>VirtualService</code> that splits traffic evenly between v1 and v2 of 
the reviews service, and a set of new <code>DestinationRule</code> resources that match destination subsets to service versions:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ cat &lt;&lt;EOF | kubectl apply -f - apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: reviews-gateway spec: selector: istio: ingressgateway # use istio default controller servers: - port: number: 80 name: http protocol: HTTP hosts: - &#34;*&#34; --- apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: reviews spec: hosts: - &#34;*&#34; gateways: - reviews-gateway http: - match: - uri: prefix: /reviews route: - destination: host: reviews subset: v1 weight: 50 - destination: host: reviews subset: v2 weight: 50 --- apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: reviews spec: host: reviews subsets: - name: v1 labels: version: v1 - name: v2 labels: version: v2 - name: v3 labels: version: v3 EOF </code></pre> <p>Finally, deploy a pod that you can use for testing with <code>curl</code> (and without the Istio sidecar container):</p> <div><a data-skipendnotes='true' style='display:none' href='https://raw.githubusercontent.com/istio/istio/release-1.9/samples/sleep/sleep.yaml'>Zip</a><pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl apply -f @samples/sleep/sleep.yaml@ </code></pre></div> <h2 id="testing-your-deployment">Testing your deployment</h2> <p>Now, you can test different behaviors using the <code>curl</code> commands via the sleep Pod.</p> <p>The first example is to issue requests to the reviews service using standard Kubernetes service DNS behavior (<strong>note</strong>: <a href="https://stedolan.github.io/jq/"><code>jq</code></a> is used in the examples below to filter the output from <code>curl</code>):</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ export SLEEP_POD=$(kubectl get pod -l app=sleep \ -o 
jsonpath={.items..metadata.name}) $ for i in `seq 3`; do \ kubectl exec -it $SLEEP_POD curl http://reviews:9080/reviews/0 | \ jq &#39;.reviews|.[]|.rating?&#39;; \ done </code></pre> <pre><code class='language-json' data-expandlinks='true' data-repo='istio' >{ &#34;stars&#34;: 5, &#34;color&#34;: &#34;black&#34; } { &#34;stars&#34;: 4, &#34;color&#34;: &#34;black&#34; } null null { &#34;stars&#34;: 5, &#34;color&#34;: &#34;red&#34; } { &#34;stars&#34;: 4, &#34;color&#34;: &#34;red&#34; } </code></pre> <p>Notice how we’re getting responses from all three versions of the reviews service (<code>null</code> is from reviews v1, which doesn’t have ratings) and not getting the even split across v1 and v2. This is expected behavior because the <code>curl</code> command is using Kubernetes service load balancing across all three versions of the reviews service. To get the <sup>50</sup>&frasl;<sub>50</sub> split between v1 and v2, we need to send requests through the ingress Gateway:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ for i in `seq 4`; do \ kubectl exec -it $SLEEP_POD curl http://istio-ingressgateway.istio-system/reviews/0 | \ jq &#39;.reviews|.[]|.rating?&#39;; \ done </code></pre> <pre><code class='language-json' data-expandlinks='true' data-repo='istio' >{ &#34;stars&#34;: 5, &#34;color&#34;: &#34;black&#34; } { &#34;stars&#34;: 4, &#34;color&#34;: &#34;black&#34; } null null { &#34;stars&#34;: 5, &#34;color&#34;: &#34;black&#34; } { &#34;stars&#34;: 4, &#34;color&#34;: &#34;black&#34; } null null </code></pre> <p>Mission accomplished! This post showed how to deploy a minimal installation of Istio that only contains the traffic management components (Pilot, ingress Gateway), and then use those components to direct traffic to specific versions of the reviews service. 
And it wasn&rsquo;t necessary to deploy the Istio sidecar proxy to gain these capabilities, so there was little to no interruption of existing workloads or applications.</p> <p>Using the built-in ingress gateway (along with some <code>VirtualService</code> and <code>DestinationRule</code> resources) this post showed how you can easily leverage Istio’s traffic management for cluster-external ingress traffic and cluster-internal service-to-service traffic. This technique is a great example of an incremental approach to adopting Istio, and can be especially useful in real-world cases where Pods are owned by different teams or deployed to different namespaces.</p>Wed, 21 Nov 2018 00:00:00 +0000/v1.9/blog/2018/incremental-traffic-management/Sandeep Parikh/v1.9/blog/2018/incremental-traffic-management/traffic-managementgatewayConsuming External MongoDB Services <p>In the <a href="/v1.9/blog/2018/egress-tcp/">Consuming External TCP Services</a> blog post, I described how external services can be consumed by in-mesh Istio applications via TCP. In this post, I demonstrate consuming external MongoDB services. You use the <a href="/v1.9/docs/examples/bookinfo/">Istio Bookinfo sample application</a>, the version in which the book ratings data is persisted in a MongoDB database. You deploy this database outside the cluster and configure the <em>ratings</em> microservice to use it. You will learn multiple options of controlling traffic to external MongoDB services and their pros and cons.</p> <h2 id="bookinfo-with-external-ratings-database">Bookinfo with external ratings database</h2> <p>First, you set up a MongoDB database instance to hold book ratings data outside of your Kubernetes cluster. Then you modify the <a href="/v1.9/docs/examples/bookinfo/">Bookinfo sample application</a> to use your database.</p> <h3 id="setting-up-the-ratings-database">Setting up the ratings database</h3> <p>For this task you set up an instance of <a href="https://www.mongodb.com">MongoDB</a>. 
You can use any MongoDB instance; I used <a href="https://www.ibm.com/cloud/compose/mongodb">Compose for MongoDB</a>.</p> <ol> <li><p>Set an environment variable for the password of your <code>admin</code> user. To prevent the password from being preserved in the Bash history, remove the command from the history immediately after running the command, using <a href="https://www.gnu.org/software/bash/manual/html_node/Bash-History-Builtins.html#Bash-History-Builtins">history -d</a>.</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ export MONGO_ADMIN_PASSWORD=&lt;your MongoDB admin password&gt; </code></pre></li> <li><p>Set an environment variable for the password of the new user you will create, namely <code>bookinfo</code>. Remove the command from the history using <a href="https://www.gnu.org/software/bash/manual/html_node/Bash-History-Builtins.html#Bash-History-Builtins">history -d</a>.</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ export BOOKINFO_PASSWORD=&lt;password&gt; </code></pre></li> <li><p>Set environment variables for your MongoDB service, <code>MONGODB_HOST</code> and <code>MONGODB_PORT</code>.</p></li> <li><p>Create the <code>bookinfo</code> user:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ cat &lt;&lt;EOF | mongo --ssl --sslAllowInvalidCertificates $MONGODB_HOST:$MONGODB_PORT -u admin -p $MONGO_ADMIN_PASSWORD --authenticationDatabase admin use test db.createUser( { user: &#34;bookinfo&#34;, pwd: &#34;$BOOKINFO_PASSWORD&#34;, roles: [ &#34;read&#34;] } ); EOF </code></pre></li> <li><p>Create a <em>collection</em> to hold ratings. 
The following command sets both ratings to <code>1</code> to provide a visual clue when your database is used by the Bookinfo <em>ratings</em> service (the default Bookinfo <em>ratings</em> are <code>4</code> and <code>5</code>).</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ cat &lt;&lt;EOF | mongo --ssl --sslAllowInvalidCertificates $MONGODB_HOST:$MONGODB_PORT -u admin -p $MONGO_ADMIN_PASSWORD --authenticationDatabase admin use test db.createCollection(&#34;ratings&#34;); db.ratings.insert( [{rating: 1}, {rating: 1}] ); EOF </code></pre></li> <li><p>Check that the <code>bookinfo</code> user can get ratings:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ cat &lt;&lt;EOF | mongo --ssl --sslAllowInvalidCertificates $MONGODB_HOST:$MONGODB_PORT -u bookinfo -p $BOOKINFO_PASSWORD --authenticationDatabase test use test db.ratings.find({}); EOF </code></pre> <p>The output should be similar to:</p> <pre><code class='language-plain' data-expandlinks='true' data-repo='istio' >MongoDB server version: 3.4.10 switched to db test { &#34;_id&#34; : ObjectId(&#34;5b7c29efd7596e65b6ed2572&#34;), &#34;rating&#34; : 1 } { &#34;_id&#34; : ObjectId(&#34;5b7c29efd7596e65b6ed2573&#34;), &#34;rating&#34; : 1 } bye </code></pre></li> </ol> <h3 id="initial-setting-of-bookinfo-application">Initial setup of the Bookinfo application</h3> <p>To demonstrate the scenario of using an external database, you start with a Kubernetes cluster with <a href="/v1.9/docs/setup/getting-started/">Istio installed</a>. 
Then you deploy the <a href="/v1.9/docs/examples/bookinfo/">Istio Bookinfo sample application</a>, <a href="/v1.9/docs/examples/bookinfo/#apply-default-destination-rules">apply the default destination rules</a>, and <a href="/v1.9/docs/tasks/traffic-management/egress/egress-control/#change-to-the-blocking-by-default-policy">change Istio to the blocking-egress-by-default policy</a>.</p> <p>This application uses the <code>ratings</code> microservice to fetch book ratings, a number between 1 and 5. The ratings are displayed as stars for each review. There are several versions of the <code>ratings</code> microservice. You will deploy the version that uses <a href="https://www.mongodb.com">MongoDB</a> as the ratings database in the next subsection.</p> <p>The example commands in this blog post work with Istio 1.0.</p> <p>As a reminder, here is the end-to-end architecture of the application from the <a href="/v1.9/docs/examples/bookinfo/">Bookinfo sample application</a>.</p> <figure style="width:80%"> <div class="wrapper-with-intrinsic-ratio" style="padding-bottom:59.086918235567985%"> <a data-skipendnotes="true" href="/v1.9/docs/examples/bookinfo/withistio.svg" title="The original Bookinfo application"> <img class="element-to-stretch" src="/v1.9/docs/examples/bookinfo/withistio.svg" alt="The original Bookinfo application" /> </a> </div> <figcaption>The original Bookinfo application</figcaption> </figure> <h3 id="use-the-external-database-in-bookinfo-application">Use the external database in Bookinfo application</h3> <ol> <li><p>Deploy the spec of the <em>ratings</em> microservice that uses a MongoDB database (<em>ratings v2</em>):</p> <div><a data-skipendnotes='true' style='display:none' href='https://raw.githubusercontent.com/istio/istio/release-1.9/samples/bookinfo/platform/kube/bookinfo-ratings-v2.yaml'>Zip</a><pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl apply -f @samples/bookinfo/platform/kube/bookinfo-ratings-v2.yaml@ 
serviceaccount &#34;bookinfo-ratings-v2&#34; created deployment &#34;ratings-v2&#34; created </code></pre></div></li> <li><p>Update the <code>MONGO_DB_URL</code> environment variable to point to your MongoDB instance:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl set env deployment/ratings-v2 &#34;MONGO_DB_URL=mongodb://bookinfo:$BOOKINFO_PASSWORD@$MONGODB_HOST:$MONGODB_PORT/test?authSource=test&amp;ssl=true&#34; deployment.extensions/ratings-v2 env updated </code></pre></li> <li><p>Route all the traffic destined to the <em>reviews</em> service to its <em>v3</em> version. You do this to ensure that the <em>reviews</em> service always calls the <em>ratings</em> service. In addition, route all the traffic destined to the <em>ratings</em> service to <em>ratings v2</em>, which uses your database.</p> <p>Specify the routing for both services above by adding two <a href="/v1.9/docs/reference/config/networking/virtual-service/">virtual services</a>. These virtual services are specified in <code>samples/bookinfo/networking/virtual-service-ratings-db.yaml</code> of an Istio release archive. <strong><em>Important:</em></strong> make sure you <a href="/v1.9/docs/examples/bookinfo/#apply-default-destination-rules">applied the default destination rules</a> before running the following command.</p> <div><a data-skipendnotes='true' style='display:none' href='https://raw.githubusercontent.com/istio/istio/release-1.9/samples/bookinfo/networking/virtual-service-ratings-db.yaml'>Zip</a><pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl apply -f @samples/bookinfo/networking/virtual-service-ratings-db.yaml@ </code></pre></div></li> </ol> <p>The updated architecture appears below. Note that the blue arrows inside the mesh mark the traffic configured according to the virtual services we added. 
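</p> <p>For reference, the routing rules in that file are roughly equivalent to the following sketch (consult the file in your Istio release archive for the authoritative version):</p> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' >apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v3
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ratings
spec:
  hosts:
  - ratings
  http:
  - route:
    - destination:
        host: ratings
        subset: v2
</code></pre> <p>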
According to the virtual services, the traffic is sent to <em>reviews v3</em> and <em>ratings v2</em>.</p> <figure style="width:80%"> <div class="wrapper-with-intrinsic-ratio" style="padding-bottom:59.314858206480224%"> <a data-skipendnotes="true" href="/v1.9/blog/2018/egress-mongo/bookinfo-ratings-v2-mongodb-external.svg" title="The Bookinfo application with ratings v2 and an external MongoDB database"> <img class="element-to-stretch" src="/v1.9/blog/2018/egress-mongo/bookinfo-ratings-v2-mongodb-external.svg" alt="The Bookinfo application with ratings v2 and an external MongoDB database" /> </a> </div> <figcaption>The Bookinfo application with ratings v2 and an external MongoDB database</figcaption> </figure> <p>Note that the MongoDB database is outside the Istio service mesh, or more precisely outside the Kubernetes cluster. The boundary of the service mesh is marked by a dashed line.</p> <h3 id="access-the-webpage">Access the webpage</h3> <p>Access the webpage of the application, after <a href="/v1.9/docs/examples/bookinfo/#determine-the-ingress-ip-and-port">determining the ingress IP and port</a>.</p> <p>Since you have not yet configured egress traffic control, access to the MongoDB service is blocked by Istio. 
This is why the message <em>&ldquo;Ratings service is currently unavailable&rdquo;</em> is displayed below each review instead of the rating stars:</p> <figure style="width:80%"> <div class="wrapper-with-intrinsic-ratio" style="padding-bottom:36.18705035971223%"> <a data-skipendnotes="true" href="/v1.9/blog/2018/egress-mongo/errorFetchingBookRating.png" title="The Ratings service error messages"> <img class="element-to-stretch" src="/v1.9/blog/2018/egress-mongo/errorFetchingBookRating.png" alt="The Ratings service error messages" /> </a> </div> <figcaption>The Ratings service error messages</figcaption> </figure> <p>In the following sections you will configure egress access to the external MongoDB service, using different options for egress control in Istio.</p> <h2 id="egress-control-for-tcp">Egress control for TCP</h2> <p>Since <a href="https://docs.mongodb.com/manual/reference/mongodb-wire-protocol/">MongoDB Wire Protocol</a> runs on top of TCP, you can control the egress traffic to your MongoDB as traffic to any other <a href="/v1.9/blog/2018/egress-tcp/">external TCP service</a>. To control TCP traffic, you must specify a block of IPs in <a href="https://tools.ietf.org/html/rfc2317">CIDR</a> notation that includes the IP address of your MongoDB host. The caveat here is that sometimes the IP of the MongoDB host is not stable or known in advance.</p> <p>In cases where the IP of the MongoDB host is not stable, the egress traffic can either be <a href="#egress-control-for-tls">controlled as TLS traffic</a>, or the traffic can be routed <a href="/v1.9/docs/tasks/traffic-management/egress/egress-control/#direct-access-to-external-services">directly</a>, bypassing the Istio sidecar proxies.</p> <p>Get the IP address of your MongoDB database instance. 
As an option, you can use the <a href="https://linux.die.net/man/1/host">host</a> command:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ export MONGODB_IP=$(host $MONGODB_HOST | grep &#34; has address &#34; | cut -d&#34; &#34; -f4) </code></pre> <h3 id="control-tcp-egress-traffic-without-a-gateway">Control TCP egress traffic without a gateway</h3> <p>In case you do not need to direct the traffic through an <a href="/v1.9/docs/tasks/traffic-management/egress/egress-gateway/#use-case">egress gateway</a>, for example if you do not have a requirement that all the traffic that exits your mesh must exit through the gateway, follow the instructions in this section. Alternatively, if you do want to direct your traffic through an egress gateway, proceed to <a href="#direct-tcp-egress-traffic-through-an-egress-gateway">Direct TCP egress traffic through an egress gateway</a>.</p> <ol> <li><p>Define a TCP mesh-external service entry:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl apply -f - &lt;&lt;EOF apiVersion: networking.istio.io/v1alpha3 kind: ServiceEntry metadata: name: mongo spec: hosts: - my-mongo.tcp.svc addresses: - $MONGODB_IP/32 ports: - number: $MONGODB_PORT name: tcp protocol: TCP location: MESH_EXTERNAL resolution: STATIC endpoints: - address: $MONGODB_IP EOF </code></pre> <p>Note that the protocol <code>TCP</code> is specified instead of <code>MONGO</code> because the traffic may be encrypted if <a href="https://docs.mongodb.com/manual/tutorial/configure-ssl/">the MongoDB protocol runs on top of TLS</a>. 
If the traffic is encrypted, the encrypted MongoDB protocol cannot be parsed by the Istio proxy.</p> <p>If you know that the plain MongoDB protocol is used, without encryption, you can specify the protocol as <code>MONGO</code> and let the Istio proxy produce <a href="https://www.envoyproxy.io/docs/envoy/latest/configuration/listeners/network_filters/mongo_proxy_filter#statistics">MongoDB-related statistics</a>. Also note that when the protocol <code>TCP</code> is specified, the configuration is not specific to MongoDB, but is the same for any other database whose protocol runs on top of TCP.</p> <p>Note that the host of your MongoDB is not used in TCP routing, so you can use any host, for example <code>my-mongo.tcp.svc</code>. Notice the <code>STATIC</code> resolution and the endpoint with the IP of your MongoDB service. Once you define such an endpoint, you can access MongoDB services that do not have a domain name.</p></li> <li><p>Refresh the web page of the application. Now the application should display the ratings without error:</p> <figure style="width:80%"> <div class="wrapper-with-intrinsic-ratio" style="padding-bottom:36.69064748201439%"> <a data-skipendnotes="true" href="/v1.9/blog/2018/egress-mongo/externalDBRatings.png" title="Book Ratings Displayed Correctly"> <img class="element-to-stretch" src="/v1.9/blog/2018/egress-mongo/externalDBRatings.png" alt="Book Ratings Displayed Correctly" /> </a> </div> <figcaption>Book Ratings Displayed Correctly</figcaption> </figure> <p>Note that you see a one-star rating for both displayed reviews, as expected. You set the ratings to be one star to provide yourself with a visual clue that your external database is indeed being used.</p></li> <li><p>If you want to direct the traffic through an egress gateway, proceed to the next section. 
Otherwise, perform <a href="#cleanup-of-tcp-egress-traffic-control">cleanup</a>.</p></li> </ol> <h3 id="direct-tcp-egress-traffic-through-an-egress-gateway">Direct TCP Egress traffic through an egress gateway</h3> <p>In this section you handle the case when you need to direct the traffic through an <a href="/v1.9/docs/tasks/traffic-management/egress/egress-gateway/#use-case">egress gateway</a>. The sidecar proxy routes TCP connections from the MongoDB client to the egress gateway, by matching the IP of the MongoDB host (a CIDR block of length 32). The egress gateway forwards the traffic to the MongoDB host, by its hostname.</p> <ol> <li><p><a href="/v1.9/docs/tasks/traffic-management/egress/egress-gateway/#deploy-istio-egress-gateway">Deploy Istio egress gateway</a>.</p></li> <li><p>If you did not perform the steps in <a href="#control-tcp-egress-traffic-without-a-gateway">the previous section</a>, perform them now.</p></li> <li><p>You may want to enable <span class="term" data-title="Mutual TLS Authentication" data-body="&lt;p&gt;Mutual TLS provides strong service-to-service authentication with built-in identity and credential management. &lt;a href=&#34;/docs/concepts/security/#mutual-tls-authentication&#34;&gt;Learn more about mutual TLS authentication&lt;/a&gt;.&lt;/p&gt; ">mutual TLS Authentication</span> between the sidecar proxies of your MongoDB clients and the egress gateway to let the egress gateway monitor the identity of the source pods and to enable Mixer policy enforcement based on that identity. By enabling mutual TLS you also encrypt the traffic. If you do not want to enable mutual TLS, proceed to the <a href="/v1.9/blog/2018/egress-mongo/#mutual-tls-between-the-sidecar-proxies-and-the-egress-gateway">Mutual TLS between the sidecar proxies and the egress gateway</a> section. 
Otherwise, proceed to the following section.</p></li> </ol> <h4 id="configure-tcp-traffic-from-sidecars-to-the-egress-gateway">Configure TCP traffic from sidecars to the egress gateway</h4> <ol> <li><p>Define the <code>EGRESS_GATEWAY_MONGODB_PORT</code> environment variable to hold some port for directing traffic through the egress gateway, e.g. <code>7777</code>. You must select a port that is not used for any other service in the mesh.</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ export EGRESS_GATEWAY_MONGODB_PORT=7777 </code></pre></li> <li><p>Add the selected port to the <code>istio-egressgateway</code> service. You should use the same values you used for installing Istio, in particular you have to specify all the ports of the <code>istio-egressgateway</code> service that you previously configured.</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ helm template install/kubernetes/helm/istio/ --name istio-egressgateway --namespace istio-system -x charts/gateways/templates/deployment.yaml -x charts/gateways/templates/service.yaml --set gateways.istio-ingressgateway.enabled=false --set gateways.istio-egressgateway.enabled=true --set gateways.istio-egressgateway.ports[0].port=80 --set gateways.istio-egressgateway.ports[0].name=http --set gateways.istio-egressgateway.ports[1].port=443 --set gateways.istio-egressgateway.ports[1].name=https --set gateways.istio-egressgateway.ports[2].port=$EGRESS_GATEWAY_MONGODB_PORT --set gateways.istio-egressgateway.ports[2].name=mongo | kubectl apply -f - </code></pre></li> <li><p>Check that the <code>istio-egressgateway</code> service indeed has the selected port:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl get svc istio-egressgateway -n istio-system NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE istio-egressgateway ClusterIP 172.21.202.204 &lt;none&gt; 80/TCP,443/TCP,7777/TCP 34d </code></pre></li> <li><p>Disable 
mutual TLS authentication for the <code>istio-egressgateway</code> service:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl apply -f - &lt;&lt;EOF apiVersion: authentication.istio.io/v1alpha1 kind: Policy metadata: name: istio-egressgateway namespace: istio-system spec: targets: - name: istio-egressgateway EOF </code></pre></li> <li><p>Create an egress <code>Gateway</code> for your MongoDB service, and destination rules and a virtual service to direct the traffic through the egress gateway and from the egress gateway to the external service.</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl apply -f - &lt;&lt;EOF apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: istio-egressgateway spec: selector: istio: egressgateway servers: - port: number: $EGRESS_GATEWAY_MONGODB_PORT name: tcp protocol: TCP hosts: - my-mongo.tcp.svc --- apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: egressgateway-for-mongo spec: host: istio-egressgateway.istio-system.svc.cluster.local subsets: - name: mongo --- apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: mongo spec: host: my-mongo.tcp.svc --- apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: direct-mongo-through-egress-gateway spec: hosts: - my-mongo.tcp.svc gateways: - mesh - istio-egressgateway tcp: - match: - gateways: - mesh destinationSubnets: - $MONGODB_IP/32 port: $MONGODB_PORT route: - destination: host: istio-egressgateway.istio-system.svc.cluster.local subset: mongo port: number: $EGRESS_GATEWAY_MONGODB_PORT - match: - gateways: - istio-egressgateway port: $EGRESS_GATEWAY_MONGODB_PORT route: - destination: host: my-mongo.tcp.svc port: number: $MONGODB_PORT weight: 100 EOF </code></pre></li> <li><p><a href="#verify-that-egress-traffic-is-directed-through-the-egress-gateway">Verify that egress traffic is directed through the egress 
gateway</a>.</p></li> </ol> <h4 id="mutual-tls-between-the-sidecar-proxies-and-the-egress-gateway">Mutual TLS between the sidecar proxies and the egress gateway</h4> <ol> <li><p>Delete the previous configuration:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl delete gateway istio-egressgateway --ignore-not-found=true $ kubectl delete virtualservice direct-mongo-through-egress-gateway --ignore-not-found=true $ kubectl delete destinationrule egressgateway-for-mongo mongo --ignore-not-found=true $ kubectl delete policy istio-egressgateway -n istio-system --ignore-not-found=true </code></pre></li> <li><p>Enforce mutual TLS authentication for the <code>istio-egressgateway</code> service:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl apply -f - &lt;&lt;EOF apiVersion: authentication.istio.io/v1alpha1 kind: Policy metadata: name: istio-egressgateway namespace: istio-system spec: targets: - name: istio-egressgateway peers: - mtls: {} EOF </code></pre></li> <li><p>Create an egress <code>Gateway</code> for your MongoDB service, and destination rules and a virtual service to direct the traffic through the egress gateway and from the egress gateway to the external service.</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl apply -f - &lt;&lt;EOF apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: istio-egressgateway spec: selector: istio: egressgateway servers: - port: number: 443 name: tls protocol: TLS hosts: - my-mongo.tcp.svc tls: mode: MUTUAL serverCertificate: /etc/certs/cert-chain.pem privateKey: /etc/certs/key.pem caCertificates: /etc/certs/root-cert.pem --- apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: egressgateway-for-mongo spec: host: istio-egressgateway.istio-system.svc.cluster.local subsets: - name: mongo trafficPolicy: loadBalancer: simple: ROUND_ROBIN portLevelSettings: - port: number: 
443 tls: mode: ISTIO_MUTUAL sni: my-mongo.tcp.svc --- apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: mongo spec: host: my-mongo.tcp.svc --- apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: direct-mongo-through-egress-gateway spec: hosts: - my-mongo.tcp.svc gateways: - mesh - istio-egressgateway tcp: - match: - gateways: - mesh destinationSubnets: - $MONGODB_IP/32 port: $MONGODB_PORT route: - destination: host: istio-egressgateway.istio-system.svc.cluster.local subset: mongo port: number: 443 - match: - gateways: - istio-egressgateway port: 443 route: - destination: host: my-mongo.tcp.svc port: number: $MONGODB_PORT weight: 100 EOF </code></pre></li> <li><p>Proceed to the next section.</p></li> </ol> <h4 id="verify-that-egress-traffic-is-directed-through-the-egress-gateway">Verify that egress traffic is directed through the egress gateway</h4> <ol> <li><p>Refresh the web page of the application again and verify that the ratings are still displayed correctly.</p></li> <li><p><a href="/v1.9/docs/tasks/observability/logs/access-log/#enable-envoy-s-access-logging">Enable Envoy’s access logging</a></p></li> <li><p>Check the log of the egress gateway&rsquo;s Envoy and see a line that corresponds to your requests to the MongoDB service. 
If Istio is deployed in the <code>istio-system</code> namespace, the command to print the log is:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl logs -l istio=egressgateway -n istio-system [2019-04-14T06:12:07.636Z] &#34;- - -&#34; 0 - &#34;-&#34; 1591 4393 94 - &#34;-&#34; &#34;-&#34; &#34;-&#34; &#34;-&#34; &#34;&lt;Your MongoDB IP&gt;:&lt;your MongoDB port&gt;&#34; outbound|&lt;your MongoDB port&gt;||my-mongo.tcp.svc 172.30.146.119:59924 172.30.146.119:443 172.30.230.1:59206 - </code></pre></li> </ol> <h3 id="cleanup-of-tcp-egress-traffic-control">Cleanup of TCP egress traffic control</h3> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl delete serviceentry mongo $ kubectl delete gateway istio-egressgateway --ignore-not-found=true $ kubectl delete virtualservice direct-mongo-through-egress-gateway --ignore-not-found=true $ kubectl delete destinationrule egressgateway-for-mongo mongo --ignore-not-found=true $ kubectl delete policy istio-egressgateway -n istio-system --ignore-not-found=true </code></pre> <h2 id="egress-control-for-tls">Egress control for TLS</h2> <p>In real life, most communication to external services must be encrypted, and <a href="https://docs.mongodb.com/manual/tutorial/configure-ssl/">the MongoDB protocol runs on top of TLS</a>. Also, TLS clients usually send <a href="https://en.wikipedia.org/wiki/Server_Name_Indication">Server Name Indication</a> (SNI) as part of their handshake. If your MongoDB server runs TLS and your MongoDB client sends SNI as part of the handshake, you can control your MongoDB egress traffic as you would any other TLS-with-SNI traffic. With TLS and SNI, you do not need to specify the IP addresses of your MongoDB servers. You specify their host names instead, which is more convenient since you do not have to rely on the stability of the IP addresses. 
You can also specify wildcards as a prefix of the host names, for example allowing access to any server from the <code>*.com</code> domain.</p> <p>To check if your MongoDB server supports TLS, run:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ openssl s_client -connect $MONGODB_HOST:$MONGODB_PORT -servername $MONGODB_HOST </code></pre> <p>If the command above prints a certificate returned by the server, the server supports TLS. If not, you have to control your MongoDB egress traffic on the TCP level, as described in the previous sections.</p> <h3 id="control-tls-egress-traffic-without-a-gateway">Control TLS egress traffic without a gateway</h3> <p>If you <a href="/v1.9/docs/tasks/traffic-management/egress/egress-gateway/#use-case">do not need an egress gateway</a>, follow the instructions in this section. If you want to direct your traffic through an egress gateway, proceed to <a href="#direct-tls-egress-traffic-through-an-egress-gateway">Direct TLS egress traffic through an egress gateway</a>.</p> <ol> <li><p>Create a <code>ServiceEntry</code> for the MongoDB service:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl apply -f - &lt;&lt;EOF apiVersion: networking.istio.io/v1alpha3 kind: ServiceEntry metadata: name: mongo spec: hosts: - $MONGODB_HOST ports: - number: $MONGODB_PORT name: tls protocol: TLS resolution: DNS EOF </code></pre></li> <li><p>Refresh the web page of the application. 
The application should display the ratings without error.</p></li> </ol> <h4 id="cleanup-of-the-egress-configuration-for-tls">Cleanup of the egress configuration for TLS</h4> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl delete serviceentry mongo </code></pre> <h3 id="direct-tls-egress-traffic-through-an-egress-gateway">Direct TLS Egress traffic through an egress gateway</h3> <p>In this section you handle the case when you need to direct the traffic through an <a href="/v1.9/docs/tasks/traffic-management/egress/egress-gateway/#use-case">egress gateway</a>. The sidecar proxy routes TLS connections from the MongoDB client to the egress gateway, by matching the SNI of the MongoDB host. The egress gateway forwards the traffic to the MongoDB host. Note that the sidecar proxy rewrites the destination port to be 443. The egress gateway accepts the MongoDB traffic on the port 443, matches the MongoDB host by SNI, and rewrites the port again to be the port of the MongoDB server.</p> <ol> <li><p><a href="/v1.9/docs/tasks/traffic-management/egress/egress-gateway/#deploy-istio-egress-gateway">Deploy Istio egress gateway</a>.</p></li> <li><p>Create a <code>ServiceEntry</code> for the MongoDB service:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl apply -f - &lt;&lt;EOF apiVersion: networking.istio.io/v1alpha3 kind: ServiceEntry metadata: name: mongo spec: hosts: - $MONGODB_HOST ports: - number: $MONGODB_PORT name: tls protocol: TLS - number: 443 name: tls-port-for-egress-gateway protocol: TLS resolution: DNS location: MESH_EXTERNAL EOF </code></pre></li> <li><p>Refresh the web page of the application and verify that the ratings are displayed correctly.</p></li> <li><p>Create an egress <code>Gateway</code> for your MongoDB service, and destination rules and virtual services to direct the traffic through the egress gateway and from the egress gateway to the external service.</p> <p>If you want to 
enable <a href="/v1.9/docs/tasks/security/authentication/authn-policy/">mutual TLS Authentication</a> between the sidecar proxies of your application pods and the egress gateway, use the following command. (You may want to enable mutual TLS to let the egress gateway monitor the identity of the source pods and to enable Mixer policy enforcement based on that identity.)</p> <div id="tabset-blog-2018-egress-mongo-1" role="tablist" class="tabset"> <div class="tab-strip" data-category-name="mtls"><button aria-selected="true" data-category-value="enabled" aria-controls="tabset-blog-2018-egress-mongo-1-0-panel" id="tabset-blog-2018-egress-mongo-1-0-tab" role="tab"><span>mutual TLS enabled</span> </button><button tabindex="-1" data-category-value="disabled" aria-controls="tabset-blog-2018-egress-mongo-1-1-panel" id="tabset-blog-2018-egress-mongo-1-1-tab" role="tab"><span>mutual TLS disabled</span> </button></div> <div class="tab-content"><div id="tabset-blog-2018-egress-mongo-1-0-panel" role="tabpanel" tabindex="0" aria-labelledby="tabset-blog-2018-egress-mongo-1-0-tab"><pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl apply -f - &lt;&lt;EOF apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: istio-egressgateway spec: selector: istio: egressgateway servers: - port: number: 443 name: tls protocol: TLS hosts: - $MONGODB_HOST tls: mode: MUTUAL serverCertificate: /etc/certs/cert-chain.pem privateKey: /etc/certs/key.pem caCertificates: /etc/certs/root-cert.pem --- apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: egressgateway-for-mongo spec: host: istio-egressgateway.istio-system.svc.cluster.local subsets: - name: mongo trafficPolicy: loadBalancer: simple: ROUND_ROBIN portLevelSettings: - port: number: 443 tls: mode: ISTIO_MUTUAL sni: $MONGODB_HOST --- apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: direct-mongo-through-egress-gateway spec: hosts: - $MONGODB_HOST 
gateways: - mesh - istio-egressgateway tls: - match: - gateways: - mesh port: $MONGODB_PORT sni_hosts: - $MONGODB_HOST route: - destination: host: istio-egressgateway.istio-system.svc.cluster.local subset: mongo port: number: 443 tcp: - match: - gateways: - istio-egressgateway port: 443 route: - destination: host: $MONGODB_HOST port: number: $MONGODB_PORT weight: 100 EOF </code></pre> </div><div hidden id="tabset-blog-2018-egress-mongo-1-1-panel" role="tabpanel" tabindex="0" aria-labelledby="tabset-blog-2018-egress-mongo-1-1-tab"><pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl apply -f - &lt;&lt;EOF apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: istio-egressgateway spec: selector: istio: egressgateway servers: - port: number: 443 name: tls protocol: TLS hosts: - $MONGODB_HOST tls: mode: PASSTHROUGH --- apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: egressgateway-for-mongo spec: host: istio-egressgateway.istio-system.svc.cluster.local subsets: - name: mongo --- apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: direct-mongo-through-egress-gateway spec: hosts: - $MONGODB_HOST gateways: - mesh - istio-egressgateway tls: - match: - gateways: - mesh port: $MONGODB_PORT sni_hosts: - $MONGODB_HOST route: - destination: host: istio-egressgateway.istio-system.svc.cluster.local subset: mongo port: number: 443 - match: - gateways: - istio-egressgateway port: 443 sni_hosts: - $MONGODB_HOST route: - destination: host: $MONGODB_HOST port: number: $MONGODB_PORT weight: 100 EOF </code></pre> </div></div> </div> </li> <li><p><a href="#verify-that-egress-traffic-is-directed-through-the-egress-gateway">Verify that the traffic is directed through the egress gateway</a></p></li> </ol> <h4 id="cleanup-directing-tls-egress-traffic-through-an-egress-gateway">Cleanup directing TLS egress traffic through an egress gateway</h4> <pre><code class='language-bash' 
data-expandlinks='true' data-repo='istio' >$ kubectl delete serviceentry mongo $ kubectl delete gateway istio-egressgateway $ kubectl delete virtualservice direct-mongo-through-egress-gateway $ kubectl delete destinationrule egressgateway-for-mongo </code></pre> <h3 id="enable-mongodb-tls-egress-traffic-to-arbitrary-wildcarded-domains">Enable MongoDB TLS egress traffic to arbitrary wildcarded domains</h3> <p>Sometimes you want to configure egress traffic to multiple hostnames from the same domain, for example traffic to all MongoDB services from <code>*.&lt;your company domain&gt;.com</code>. You do not want to create multiple configuration items, one for each and every MongoDB service in your company. To configure access to all the external services from the same domain with a single configuration, you use <em>wildcarded</em> hosts.</p> <p>In this section you configure egress traffic for a wildcarded domain. I used a MongoDB instance in the <code>composedb.com</code> domain, so configuring egress traffic for <code>*.com</code> worked for me (I could have used <code>*.composedb.com</code> as well). You can pick a wildcarded domain according to your MongoDB host.</p> <p>To configure egress gateway traffic for a wildcarded domain, you will first need to deploy a custom egress gateway with <a href="/v1.9/docs/tasks/traffic-management/egress/wildcard-egress-hosts/#wildcard-configuration-for-arbitrary-domains">an additional SNI proxy</a>. This is needed due to current limitations of Envoy, the proxy used by the standard Istio egress gateway.</p> <h4 id="prepare-a-new-egress-gateway-with-an-sni-proxy">Prepare a new egress gateway with an SNI proxy</h4> <p>In this subsection you deploy an egress gateway with an SNI proxy, in addition to the standard Istio Envoy proxy. 
You can use any SNI proxy that is capable of routing traffic according to arbitrary, not-preconfigured SNI values; we used <a href="http://nginx.org">Nginx</a> to achieve this functionality.</p> <ol> <li><p>Create a configuration file for the Nginx SNI proxy. You may want to edit the file to specify additional Nginx settings, if required.</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ cat &lt;&lt;EOF &gt; ./sni-proxy.conf user www-data; events { } stream { log_format log_stream &#39;\$remote_addr [\$time_local] \$protocol [\$ssl_preread_server_name]&#39; &#39;\$status \$bytes_sent \$bytes_received \$session_time&#39;; access_log /var/log/nginx/access.log log_stream; error_log /var/log/nginx/error.log; # tcp forward proxy by SNI server { resolver 8.8.8.8 ipv6=off; listen 127.0.0.1:$MONGODB_PORT; proxy_pass \$ssl_preread_server_name:$MONGODB_PORT; ssl_preread on; } } EOF </code></pre></li> <li><p>Create a Kubernetes <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/">ConfigMap</a> to hold the configuration of the Nginx SNI proxy:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl create configmap egress-sni-proxy-configmap -n istio-system --from-file=nginx.conf=./sni-proxy.conf </code></pre></li> <li><p>The following command will generate <code>istio-egressgateway-with-sni-proxy.yaml</code> to edit and deploy.</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ cat &lt;&lt;EOF | helm template install/kubernetes/helm/istio/ --name istio-egressgateway-with-sni-proxy --namespace istio-system -x charts/gateways/templates/deployment.yaml -x charts/gateways/templates/service.yaml -x charts/gateways/templates/serviceaccount.yaml -x charts/gateways/templates/autoscale.yaml -x charts/gateways/templates/role.yaml -x charts/gateways/templates/rolebindings.yaml --set global.mtls.enabled=true --set global.istioNamespace=istio-system -f - 
&gt; ./istio-egressgateway-with-sni-proxy.yaml gateways: enabled: true istio-ingressgateway: enabled: false istio-egressgateway: enabled: false istio-egressgateway-with-sni-proxy: enabled: true labels: app: istio-egressgateway-with-sni-proxy istio: egressgateway-with-sni-proxy replicaCount: 1 autoscaleMin: 1 autoscaleMax: 5 cpu: targetAverageUtilization: 80 serviceAnnotations: {} type: ClusterIP ports: - port: 443 name: https secretVolumes: - name: egressgateway-certs secretName: istio-egressgateway-certs mountPath: /etc/istio/egressgateway-certs - name: egressgateway-ca-certs secretName: istio-egressgateway-ca-certs mountPath: /etc/istio/egressgateway-ca-certs configVolumes: - name: sni-proxy-config configMapName: egress-sni-proxy-configmap additionalContainers: - name: sni-proxy image: nginx volumeMounts: - name: sni-proxy-config mountPath: /etc/nginx readOnly: true EOF </code></pre></li> <li><p>Deploy the new egress gateway:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl apply -f ./istio-egressgateway-with-sni-proxy.yaml serviceaccount &#34;istio-egressgateway-with-sni-proxy-service-account&#34; created role &#34;istio-egressgateway-with-sni-proxy-istio-system&#34; created rolebinding &#34;istio-egressgateway-with-sni-proxy-istio-system&#34; created service &#34;istio-egressgateway-with-sni-proxy&#34; created deployment &#34;istio-egressgateway-with-sni-proxy&#34; created horizontalpodautoscaler &#34;istio-egressgateway-with-sni-proxy&#34; created </code></pre></li> <li><p>Verify that the new egress gateway is running. 
Note that the pod has two containers (one is the Envoy proxy and the second one is the SNI proxy).</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl get pod -l istio=egressgateway-with-sni-proxy -n istio-system NAME READY STATUS RESTARTS AGE istio-egressgateway-with-sni-proxy-79f6744569-pf9t2 2/2 Running 0 17s </code></pre></li> <li><p>Create a service entry with a static address equal to 127.0.0.1 (<code>localhost</code>), and disable mutual TLS on the traffic directed to the new service entry:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl apply -f - &lt;&lt;EOF apiVersion: networking.istio.io/v1alpha3 kind: ServiceEntry metadata: name: sni-proxy spec: hosts: - sni-proxy.local location: MESH_EXTERNAL ports: - number: $MONGODB_PORT name: tcp protocol: TCP resolution: STATIC endpoints: - address: 127.0.0.1 --- apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: disable-mtls-for-sni-proxy spec: host: sni-proxy.local trafficPolicy: tls: mode: DISABLE EOF </code></pre></li> </ol> <h4 id="configure-access-to-com-using-the-new-egress-gateway">Configure access to <code>*.com</code> using the new egress gateway</h4> <ol> <li><p>Define a <code>ServiceEntry</code> for <code>*.com</code>:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ cat &lt;&lt;EOF | kubectl create -f - apiVersion: networking.istio.io/v1alpha3 kind: ServiceEntry metadata: name: mongo spec: hosts: - &#34;*.com&#34; ports: - number: 443 name: tls protocol: TLS - number: $MONGODB_PORT name: tls-mongodb protocol: TLS location: MESH_EXTERNAL EOF </code></pre></li> <li><p>Create an egress <code>Gateway</code> for <em>*.com</em>, port 443, protocol TLS, a destination rule to set the <a href="https://en.wikipedia.org/wiki/Server_Name_Indication">SNI</a> for the gateway, and Envoy filters to prevent tampering with SNI by a malicious application (the filters verify that the 
SNI issued by the application is the SNI reported to Mixer).</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl apply -f - &lt;&lt;EOF apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: istio-egressgateway-with-sni-proxy spec: selector: istio: egressgateway-with-sni-proxy servers: - port: number: 443 name: tls protocol: TLS hosts: - &#34;*.com&#34; tls: mode: MUTUAL serverCertificate: /etc/certs/cert-chain.pem privateKey: /etc/certs/key.pem caCertificates: /etc/certs/root-cert.pem --- apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: mtls-for-egress-gateway spec: host: istio-egressgateway-with-sni-proxy.istio-system.svc.cluster.local subsets: - name: mongo trafficPolicy: loadBalancer: simple: ROUND_ROBIN portLevelSettings: - port: number: 443 tls: mode: ISTIO_MUTUAL --- # The following filter is used to forward the original SNI (sent by the application) as the SNI of the mutual TLS # connection. # The forwarded SNI will be reported to Mixer so that policies will be enforced based on the original SNI value. apiVersion: networking.istio.io/v1alpha3 kind: EnvoyFilter metadata: name: forward-downstream-sni spec: filters: - listenerMatch: portNumber: $MONGODB_PORT listenerType: SIDECAR_OUTBOUND filterName: forward_downstream_sni filterType: NETWORK filterConfig: {} --- # The following filter verifies that the SNI of the mutual TLS connection (the SNI reported to Mixer) is # identical to the original SNI issued by the application (the SNI used for routing by the SNI proxy). # The filter prevents Mixer from being deceived by a malicious application: routing to one SNI while # reporting some other value of SNI. If the original SNI does not match the SNI of the mutual TLS connection, the # filter will block the connection to the external service. 
apiVersion: networking.istio.io/v1alpha3 kind: EnvoyFilter metadata: name: egress-gateway-sni-verifier spec: workloadLabels: app: istio-egressgateway-with-sni-proxy filters: - listenerMatch: portNumber: 443 listenerType: GATEWAY filterName: sni_verifier filterType: NETWORK filterConfig: {} EOF </code></pre></li> <li><p>Route the traffic destined for <em>*.com</em> to the egress gateway and from the egress gateway to the SNI proxy.</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl apply -f - &lt;&lt;EOF apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: direct-mongo-through-egress-gateway spec: hosts: - &#34;*.com&#34; gateways: - mesh - istio-egressgateway-with-sni-proxy tls: - match: - gateways: - mesh port: $MONGODB_PORT sni_hosts: - &#34;*.com&#34; route: - destination: host: istio-egressgateway-with-sni-proxy.istio-system.svc.cluster.local subset: mongo port: number: 443 weight: 100 tcp: - match: - gateways: - istio-egressgateway-with-sni-proxy port: 443 route: - destination: host: sni-proxy.local port: number: $MONGODB_PORT weight: 100 EOF </code></pre></li> <li><p>Refresh the web page of the application again and verify that the ratings are still displayed correctly.</p></li> <li><p><a href="/v1.9/docs/tasks/observability/logs/access-log/#enable-envoy-s-access-logging">Enable Envoy’s access logging</a></p></li> <li><p>Check the log of the egress gateway&rsquo;s Envoy proxy. 
If Istio is deployed in the <code>istio-system</code> namespace, the command to print the log is:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl logs -l istio=egressgateway-with-sni-proxy -c istio-proxy -n istio-system </code></pre> <p>You should see lines similar to the following:</p> <pre><code class='language-plain' data-expandlinks='true' data-repo='istio' >[2019-01-02T17:22:04.602Z] &#34;- - -&#34; 0 - 768 1863 88 - &#34;-&#34; &#34;-&#34; &#34;-&#34; &#34;-&#34; &#34;127.0.0.1:28543&#34; outbound|28543||sni-proxy.local 127.0.0.1:49976 172.30.146.115:443 172.30.146.118:58510 &lt;your MongoDB host&gt; [2019-01-02T17:22:04.713Z] &#34;- - -&#34; 0 - 1534 2590 85 - &#34;-&#34; &#34;-&#34; &#34;-&#34; &#34;-&#34; &#34;127.0.0.1:28543&#34; outbound|28543||sni-proxy.local 127.0.0.1:49988 172.30.146.115:443 172.30.146.118:58522 &lt;your MongoDB host&gt; </code></pre></li> <li><p>Check the logs of the SNI proxy. If Istio is deployed in the <code>istio-system</code> namespace, the command to print the log is:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl logs -l istio=egressgateway-with-sni-proxy -n istio-system -c sni-proxy 127.0.0.1 [23/Aug/2018:03:28:18 +0000] TCP [&lt;your MongoDB host&gt;]200 1863 482 0.089 127.0.0.1 [23/Aug/2018:03:28:18 +0000] TCP [&lt;your MongoDB host&gt;]200 2590 1248 0.095 </code></pre></li> </ol> <h4 id="understanding-what-happened">Understanding what happened</h4> <p>In this section you configured egress traffic to your MongoDB host using a wildcarded domain. While for a single MongoDB host there is no gain in using wildcarded domains (an exact hostname can be specified), it could be beneficial for cases when the applications in the cluster access multiple MongoDB hosts that match some wildcarded domain. 
For example, if the applications access <code>mongodb1.composedb.com</code>, <code>mongodb2.composedb.com</code> and <code>mongodb3.composedb.com</code>, the egress traffic can be configured with a single configuration for the wildcarded domain <code>*.composedb.com</code>.</p> <p>I will leave it as an exercise for the reader to verify that no additional Istio configuration is required when you configure an app to use another instance of MongoDB with a hostname that matches the wildcarded domain used in this section.</p> <h4 id="cleanup-of-configuration-for-mongodb-tls-egress-traffic-to-arbitrary-wildcarded-domains">Cleanup of configuration for MongoDB TLS egress traffic to arbitrary wildcarded domains</h4> <ol> <li><p>Delete the configuration items for <em>*.com</em>:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl delete serviceentry mongo $ kubectl delete gateway istio-egressgateway-with-sni-proxy $ kubectl delete virtualservice direct-mongo-through-egress-gateway $ kubectl delete destinationrule mtls-for-egress-gateway $ kubectl delete envoyfilter forward-downstream-sni egress-gateway-sni-verifier </code></pre></li> <li><p>Delete the configuration items for the <code>egressgateway-with-sni-proxy</code> deployment:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl delete serviceentry sni-proxy $ kubectl delete destinationrule disable-mtls-for-sni-proxy $ kubectl delete -f ./istio-egressgateway-with-sni-proxy.yaml $ kubectl delete configmap egress-sni-proxy-configmap -n istio-system </code></pre></li> <li><p>Remove the configuration files you created:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ rm ./istio-egressgateway-with-sni-proxy.yaml $ rm ./sni-proxy.conf </code></pre></li> </ol> <h2 id="cleanup">Cleanup</h2> <ol> <li><p>Drop the <code>bookinfo</code> user:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' 
>$ cat &lt;&lt;EOF | mongo --ssl --sslAllowInvalidCertificates $MONGODB_HOST:$MONGODB_PORT -u admin -p $MONGO_ADMIN_PASSWORD --authenticationDatabase admin use test db.dropUser(&#34;bookinfo&#34;); EOF </code></pre></li> <li><p>Drop the <em>ratings</em> collection:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ cat &lt;&lt;EOF | mongo --ssl --sslAllowInvalidCertificates $MONGODB_HOST:$MONGODB_PORT -u admin -p $MONGO_ADMIN_PASSWORD --authenticationDatabase admin use test db.ratings.drop(); EOF </code></pre></li> <li><p>Unset the environment variables you used:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ unset MONGO_ADMIN_PASSWORD BOOKINFO_PASSWORD MONGODB_HOST MONGODB_PORT MONGODB_IP </code></pre></li> <li><p>Remove the virtual services:</p> <div><a data-skipendnotes='true' style='display:none' href='https://raw.githubusercontent.com/istio/istio/release-1.9/samples/bookinfo/networking/virtual-service-ratings-db.yaml'>Zip</a><pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl delete -f @samples/bookinfo/networking/virtual-service-ratings-db.yaml@ Deleted config: virtual-service/default/reviews Deleted config: virtual-service/default/ratings </code></pre></div></li> <li><p>Undeploy <em>ratings v2-mongodb</em>:</p> <div><a data-skipendnotes='true' style='display:none' href='https://raw.githubusercontent.com/istio/istio/release-1.9/samples/bookinfo/platform/kube/bookinfo-ratings-v2.yaml'>Zip</a><pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl delete -f @samples/bookinfo/platform/kube/bookinfo-ratings-v2.yaml@ deployment &#34;ratings-v2&#34; deleted </code></pre></div></li> </ol> <h2 id="conclusion">Conclusion</h2> <p>In this blog post I demonstrated various options for MongoDB egress traffic control. You can control the MongoDB egress traffic on a TCP or TLS level where applicable. 
In both TCP and TLS cases, you can direct the traffic from the sidecar proxies directly to the external MongoDB host, or direct the traffic through an egress gateway, according to your organization&rsquo;s security requirements. In the latter case, you can also decide to apply or disable mutual TLS authentication between the sidecar proxies and the egress gateway. If you want to control MongoDB egress traffic on the TLS level by specifying wildcarded domains like <code>*.com</code> and you need to direct the traffic through the egress gateway, you must deploy a custom egress gateway with an SNI proxy.</p> <p>Note that the configuration and considerations described in this blog post for MongoDB are much the same for other non-HTTP protocols on top of TCP/TLS.</p>Fri, 16 Nov 2018 00:00:00 +0000/v1.9/blog/2018/egress-mongo/Vadim Eisenberg/v1.9/blog/2018/egress-mongo/traffic-managementegresstcpmongoAll Day Istio Twitch Stream <p>To celebrate the 1.0 release and to promote the software to a wider audience, the Istio community is hosting an all-day live stream on Twitch on August 17th.</p> <h2 id="what-is-twitch">What is Twitch?</h2> <p><a href="https://twitch.tv/">Twitch</a> is a popular video game live streaming platform that has recently seen a lot of coding content. The IBM Advocates have been doing live coding and presentations there and it&rsquo;s been fun. While mostly used for gaming content, there is a <a href="https://www.twitch.tv/communities/programming">growing community</a> sharing and watching programming content on the site.</p> <h2 id="what-does-this-have-to-do-with-istio">What does this have to do with Istio?</h2> <p>The stream is going to be a full day of Istio content. Hopefully we&rsquo;ll have a good mix of deep technical content, beginner content and line-of-business content for our audience. We&rsquo;ll have developers, users, and evangelists on throughout the day to share their demos and stories. 
Expect live coding, Q&amp;A, and some surprises. We have stellar guests lined up from IBM, Google, Datadog, Pivotal, and more!</p> <h2 id="recordings">Recordings</h2> <p>Recordings are available <a href="https://www.youtube.com/playlist?list=PLzpeuWUENMK0V3dwpx5gPJun-SLG0USqU">here</a>.</p> <h2 id="schedule">Schedule</h2> <p>All times are <code>PDT</code>.</p> <table> <thead> <tr> <th>Time</th> <th>Speaker</th> <th>Affiliation</th> </tr> </thead> <tbody> <tr> <td>10:00 - 10:30</td> <td><code>Spencer Krum + Lisa-Marie Namphy</code></td> <td><code>IBM / Portworx</code></td> </tr> <tr> <td>10:30 - 11:00</td> <td><code>Lin Sun / Spencer Krum / Sven Mawson</code></td> <td><code>IBM / Google</code></td> </tr> <tr> <td>11:00 - 11:10</td> <td><code>Lin Sun / Spencer Krum</code></td> <td><code>IBM</code></td> </tr> <tr> <td>11:10 - 11:30</td> <td><code>Jason Yee / Ilan Rabinovich</code></td> <td><code>Datadog</code></td> </tr> <tr> <td>11:30 - 11:50</td> <td><code>April Nassl</code></td> <td><code>Google</code></td> </tr> <tr> <td>11:50 - 12:10</td> <td><code>Spike Curtis</code></td> <td><code>Tigera</code></td> </tr> <tr> <td>12:10 - 12:30</td> <td><code>Shannon Coen</code></td> <td><code>Pivotal</code></td> </tr> <tr> <td>12:30 - 1:00</td> <td><code>Matt Klein</code></td> <td><code>Lyft</code></td> </tr> <tr> <td>1:00 - 1:20</td> <td><code>Zach Jory</code></td> <td><code>F5/Aspen Mesh</code></td> </tr> <tr> <td>1:20 - 1:40</td> <td><code>Dan Ciruli</code></td> <td><code>Google</code></td> </tr> <tr> <td>1:40 - 2:00</td> <td><code>Isaiah Snell-Feikema</code> / <code>Greg Hanson</code></td> <td><code>IBM</code></td> </tr> <tr> <td>2:00 - 2:20</td> <td><code>Zach Butcher</code></td> <td><code>Tetrate</code></td> </tr> <tr> <td>2:20 - 2:40</td> <td><code>Ray Hudaihed</code></td> <td><code>American Airlines</code></td> </tr> <tr> <td>2:40 - 3:00</td> <td><code>Christian Posta</code></td> <td><code>Red Hat</code></td> </tr> <tr> <td>3:00 - 3:20</td> <td><code>Google/IBM 
China</code></td> <td><code>Google / IBM</code></td> </tr> <tr> <td>3:20 - 3:40</td> <td><code>Colby Dyess</code></td> <td><code>Tufin</code></td> </tr> <tr> <td>3:40 - 4:00</td> <td><code>Rohit Agarwalla</code></td> <td><code>Cisco</code></td> </tr> </tbody> </table>Fri, 03 Aug 2018 00:00:00 +0000/v1.9/blog/2018/istio-twitch-stream/Spencer Krum, IBM/v1.9/blog/2018/istio-twitch-stream/Istio a Game Changer for HP's FitStation Platform<p>The FitStation team at HP strongly believes in the future of Kubernetes, BPF and service mesh as the next standards in cloud infrastructure. We are also very happy to see Istio coming to its official 1.0 release &ndash; thanks to the joint collaboration that started at Google, IBM and Lyft beginning in May 2017.</p> <p>Throughout the development of FitStation’s large scale and progressive cloud platform, Istio, Cilium and Kubernetes technologies have delivered a multitude of opportunities to make our systems more robust and scalable. Istio was a game changer in creating reliable and dynamic network communication.</p> <p><a href="http://www.fitstation.com">FitStation powered by HP</a> is a technology platform that captures 3D biometric data to design personalized footwear to perfectly fit individual foot size and shape as well as gait profile. It uses 3D scanning, pressure sensing, 3D printing and variable density injection molding to create unique footwear. Footwear brands such as Brooks, Steitz Secura or Superfeet are connecting to FitStation to build their next generation of high performance sports, professional and medical shoes.</p> <p>FitStation is built on the promise of ultimate security and privacy for users&rsquo; biometric data. Istio is the cornerstone to make that possible for data in flight within our cloud. By managing these aspects at the infrastructure level, we focused on solving business problems instead of spending time on individual implementations of secure service communication. 
Using Istio allowed us to dramatically reduce the complexity of maintaining a multitude of libraries and services to provide secure service communication.</p> <p>As a bonus benefit of Istio 1.0, we gained network visibility, metrics and tracing out of the box. This radically improved decision-making and response quality for our development and DevOps teams. The team got in-depth insight into the network communication across the entire platform, both for new as well as legacy applications. The integration of Cilium with Envoy delivered a remarkable performance benefit for Istio service mesh communication, combined with a fine-grained, kernel-driven L7 network security layer. This was due to the powers of BPF brought to Istio by Cilium. We believe this will drive the future of Linux kernel security.</p> <p>It has been very exciting to follow Istio’s growth. We have been able to see clear improvements in performance and stability over the different development versions. The improvements between version 0.7 and 0.8 made our teams feel comfortable with version 1.0, and we can state that Istio is now ready for real production usage.</p> <p>We are looking forward to the promising roadmaps of Istio, Envoy, Cilium and CNCF.</p>Tue, 31 Jul 2018 00:00:00 +0000/v1.9/blog/2018/hp/Steven Ceuppens, Chief Software Architect @ HP FitStation, Open Source Advocate & Contributor/v1.9/blog/2018/hp/Delayering Istio with AppSwitch <div> <aside class="callout quote"> <div class="type"> <svg class="large-icon"><use xlink:href="/v1.9/img/icons.svg#callout-quote"/></svg> </div> <div class="content">All problems in computer science can be solved with another layer, except of course the problem of too many layers. &ndash; David Wheeler</div> </aside> </div> <p>The sidecar proxy approach enables a lot of awesomeness. Squarely in the datapath between microservices, the sidecar can precisely tell what the application is trying to do. 
It can monitor and instrument protocol traffic, not in the bowels of the networking layers but at the application level, to enable deep visibility, access controls and traffic management.</p> <p>If we look closely, however, there are many intermediate layers that the data has to pass through before the high-value analysis of application traffic can be performed. Most of those layers are part of the base plumbing infrastructure and are there just to push the data along. In doing so, they add latency to communication and complexity to the overall system.</p> <p>Over the years, there has been much collective effort in implementing aggressive fine-grained optimizations within the layers of the network datapath. Each iteration may shave off another few microseconds. But the true necessity of those layers themselves has not been questioned.</p> <h2 id="don-t-optimize-layers-remove-them">Don’t optimize layers, remove them</h2> <p>In my view, optimizing something is a poor fallback to removing its requirement altogether. That was the goal of my <a href="https://apporbit.com/a-brief-history-of-containers-from-reality-to-hype/">initial work</a> on OS-level virtualization that led to Linux containers, which effectively <a href="https://www.oreilly.com/ideas/the-unwelcome-guest-why-vms-arent-the-solution-for-next-gen-applications">removed virtual machines</a> by running applications directly on the host operating system without requiring an intermediate guest. For a long time the industry was fighting the wrong battle, distracted by optimizing VMs rather than removing the additional layer altogether.</p> <p>I see the same pattern repeating itself with the connectivity of microservices, and networking in general. The network is going through the changes that physical servers went through a decade earlier. A new set of layers and constructs is being introduced.
They are being baked deep into the protocol stack and even into silicon without adequately considering low-touch alternatives. Perhaps there is a way to remove those additional layers altogether.</p> <p>I have been thinking about these problems for some time and believe that an approach similar in concept to containers can be applied to the network stack to fundamentally simplify how application endpoints are connected across the complexity of many intermediate layers. I have reapplied the same principles from the original work on containers to create <a href="http://appswitch.io">AppSwitch</a>. Similar to the way containers provide an interface that applications can directly consume, AppSwitch plugs directly into the well-defined and ubiquitous network API that applications currently use and directly connects application clients to the appropriate servers, skipping all intermediate layers. In the end, that&rsquo;s what networking is all about.</p> <p>Before going into the details of how AppSwitch promises to remove unnecessary layers from the Istio stack, let me give a very brief introduction to its architecture. Further details are available on the <a href="https://appswitch.readthedocs.io/en/latest/">documentation</a> page.</p> <h2 id="appswitch">AppSwitch</h2> <p>Not unlike the container runtime, AppSwitch consists of a client and a daemon that speak over HTTP via a REST API. Both the client and the daemon are built as one self-contained binary, <code>ax</code>. The client transparently plugs into the application, tracks its system calls related to network connectivity and notifies the daemon about their occurrences. As an example, let’s say an application makes the <code>connect(2)</code> system call to the service IP of a Kubernetes service. The AppSwitch client intercepts the connect call, nullifies it and notifies the daemon about its occurrence along with some context that includes the system call arguments.
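</p>
<p>As a loose user-space analogy (not AppSwitch&rsquo;s actual system-call interposition), the effect of rewriting a <code>connect(2)</code> target can be pictured in a few lines of Python; the virtual &ldquo;service address&rdquo; and its mapping to a backend are invented purely for illustration:</p>

```python
import socket
import threading

# Hypothetical table mapping a virtual "service address" to a real
# backend, standing in for the daemon's knowledge of upstream endpoints.
SERVICE_TABLE = {}

def start_backend():
    # A stand-in "Pod" server on an ephemeral loopback port.
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)
    def serve():
        conn, _ = srv.accept()
        conn.sendall(b"hello from backend")
        conn.close()
    threading.Thread(target=serve, daemon=True).start()
    return srv.getsockname()

class RedirectingSocket(socket.socket):
    # Rewrite the destination before the real connect(), the way the
    # daemon connects to the upstream address on the app's behalf.
    def connect(self, addr):
        return super().connect(SERVICE_TABLE.get(addr, addr))

SERVICE_TABLE[("10.96.0.10", 9000)] = start_backend()

client = RedirectingSocket()
client.connect(("10.96.0.10", 9000))  # virtual address, rewritten underneath
chunks = []
while True:
    chunk = client.recv(64)
    if not chunk:
        break
    chunks.append(chunk)
client.close()
data = b"".join(chunks)
```

<p>The client code above never learns that its connection was steered elsewhere, which is the essence of the approach.</p>
<p>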
The daemon would then handle the system call, potentially by directly connecting to the Pod IP of the upstream server on behalf of the application.</p> <p>It is important to note that no data is forwarded between the AppSwitch client and daemon. They are designed to exchange file descriptors (FDs) over a Unix domain socket to avoid having to copy data. Note also that the client is not a separate process. Rather, it runs directly in the context of the application itself. There is no data copy between the application and the AppSwitch client either.</p> <h2 id="delayering-the-stack">Delayering the stack</h2> <p>Now that we have an idea of what AppSwitch does, let’s look at the layers that it optimizes away from a standard service mesh.</p> <h3 id="network-devirtualization">Network devirtualization</h3> <p>Kubernetes offers simple and well-defined network constructs to the microservice applications it runs. In order to support them, however, it imposes specific <a href="https://kubernetes.io/docs/concepts/cluster-administration/networking/">requirements</a> on the underlying network. Meeting those requirements is often not easy. The go-to solution of adding another layer is typically adopted to satisfy them. In most cases the additional layer consists of a network overlay that sits between Kubernetes and the underlying network. Traffic produced by the applications is encapsulated at the source and decapsulated at the target, which not only costs network resources but also takes up compute cores.</p> <p>Because AppSwitch arbitrates what the application sees through its touchpoints with the platform, it projects a consistent virtual view of the underlying network to the application, similar to an overlay but without introducing an additional layer of processing along the datapath. Just to draw a parallel to containers, the inside of a container looks and feels like a VM.
However, the underlying implementation does not intervene along the high-incidence control paths of low-level interrupts etc.</p> <p>AppSwitch can be injected into a standard Kubernetes manifest (similar to Istio injection) such that the application’s network is directly handled by AppSwitch, bypassing any network overlay underneath. More details follow in just a bit.</p> <h3 id="artifacts-of-container-networking">Artifacts of container networking</h3> <p>Extending network connectivity from the host into the container has been a <a href="https://kubernetes.io/blog/2016/01/why-kubernetes-doesnt-use-libnetwork/">major challenge</a>. New layers of network plumbing were invented explicitly for that purpose. After all, an application running in a container is simply a process on the host. However, due to a <a href="http://appswitch.io/blog/kubernetes_istio_and_network_function_devirtualization_with_appswitch/">fundamental misalignment</a> between the network abstraction expected by the application and the abstraction exposed by the container network namespace, the process cannot directly access the host network. Applications think of networking in terms of sockets or sessions, whereas network namespaces expose a device abstraction. Once placed in a network namespace, the process suddenly loses all connectivity. The notion of the veth-pair and the corresponding tooling were invented just to close that gap. The data now has to go from a host interface into a virtual switch and then through a veth-pair to the virtual network interface of the container network namespace.</p> <p>AppSwitch can effectively remove both the virtual switch and veth-pair layers on both ends of the connection. Since the connections are established by the daemon running on the host, using the network that’s already available on the host, there is no need for additional plumbing to bridge the host network into the container.
The socket FDs created on the host are passed to the application running within the pod’s network namespace. By the time the application receives the FD, all control-path work (security checks, connection establishment) is already done and the FD is ready for actual IO.</p> <h3 id="skip-tcp-ip-for-colocated-endpoints">Skip TCP/IP for colocated endpoints</h3> <p>TCP/IP is the universal protocol medium over which pretty much all communication occurs. But if application endpoints happen to be on the same host, is TCP/IP really required? After all, it does quite a bit of work and it is quite complex. Unix sockets are explicitly designed for intra-host communication, and AppSwitch can transparently switch the communication to occur over a Unix socket for colocated endpoints.</p> <p>For each listening socket of an application, AppSwitch maintains two listening sockets, one each for TCP and Unix. When a client tries to connect to a server that happens to be colocated, the AppSwitch daemon would choose to connect to the Unix listening socket of the server. The resulting Unix sockets on each end are passed into the respective applications. Once a fully connected FD is returned, the application would simply treat it as a bit pipe. The protocol doesn’t really matter. The application may occasionally make protocol-specific calls such as <code>getsockname(2)</code>, and AppSwitch would handle them in kind. It would present consistent responses such that the application can continue to run undisturbed.</p> <h3 id="data-pushing-proxy">Data pushing proxy</h3> <p>As we continue to look for layers to remove, let us also reconsider the requirement of the proxy layer itself.
There are times when the role of the proxy may degenerate into that of a plain data pusher:</p> <ul> <li>There may not be a need for any protocol decoding</li> <li>The protocol may not be recognized by the proxy</li> <li>The communication may be encrypted and the proxy cannot access relevant headers</li> <li>The application (redis, memcached etc.) may be too latency-sensitive and cannot afford the cost of an intermediate proxy</li> </ul> <p>In all these cases, the proxy is no different from any low-level plumbing layer. In fact, the latency introduced can be far higher because the same level of optimizations won’t be available to a proxy.</p> <p>To illustrate this with an example, consider the application shown below. It consists of a Python app and a set of memcached servers behind it. An upstream memcached server is selected based on connection-time routing. Speed is the primary concern here.</p> <figure style="width:75%"> <div class="wrapper-with-intrinsic-ratio" style="padding-bottom:38.63965267727931%"> <a data-skipendnotes="true" href="/v1.9/blog/2018/delayering-istio/memcached.png" title="Latency-sensitive application scenario"> <img class="element-to-stretch" src="/v1.9/blog/2018/delayering-istio/memcached.png" alt="Proxyless datapath" /> </a> </div> <figcaption>Latency-sensitive application scenario</figcaption> </figure> <p>If we look at the data flow in this setup, the Python app makes a connection to the service IP of memcached. It is redirected to the client-side sidecar. The sidecar routes the connection to one of the memcached servers and copies the data between the two sockets &ndash; one connected to the app and another connected to memcached. The same also occurs on the server side between the server-side sidecar and memcached. The role of the proxy at that point is just the boring shoveling of bits between the two sockets.
However, it ends up adding substantial latency to the end-to-end connection.</p> <p>Now, if the app could somehow be made to connect directly to memcached, the two intermediate proxies could be skipped. The data would flow directly between the app and memcached without any intermediate hops. AppSwitch can arrange for that by transparently tweaking the target address passed by the Python app when it makes the <code>connect(2)</code> system call.</p> <h3 id="proxyless-protocol-decoding">Proxyless protocol decoding</h3> <p>Things are going to get a bit strange here. We have seen that the proxy can be bypassed for cases that don’t involve looking into application traffic. But is there anything we can do even for those other cases? It turns out, yes.</p> <p>In a typical communication between microservices, much of the interesting information is exchanged in the initial headers. The headers are followed by the body or payload, which typically represents the bulk of the communication. And once again the proxy degenerates into a data pusher for this part of the communication. AppSwitch provides a nifty mechanism to skip the proxy for these cases.</p> <p>Even though AppSwitch is not a proxy, it <em>does</em> arbitrate connections between application endpoints and it <em>does</em> have access to the corresponding socket FDs. Normally, AppSwitch simply passes those FDs to the application. But it can also peek into the initial message received on the connection using the <code>MSG_PEEK</code> option of the <code>recvfrom(2)</code> system call on the socket. This allows AppSwitch to examine application traffic without actually removing it from the socket buffers. When AppSwitch returns the FD to the application and steps out of the datapath, the application does an actual read on the connection.
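</p>
<p>The <code>MSG_PEEK</code> behavior relied upon here is standard socket semantics and easy to verify. The sketch below uses a local socket pair, with an invented HTTP-style first line as the peeked data:</p>

```python
import socket

# A connected pair stands in for an application connection held by AppSwitch.
app_side, peer_side = socket.socketpair()
peer_side.sendall(b"GET /admin HTTP/1.1\r\n")

# Peek at the initial bytes without draining the socket buffer...
peeked = app_side.recv(64, socket.MSG_PEEK)

# ...then the application's subsequent "actual read" sees the same data.
consumed = app_side.recv(64)

app_side.close()
peer_side.close()
assert peeked == consumed == b"GET /admin HTTP/1.1\r\n"
```

<p>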
AppSwitch uses this technique to perform deeper analysis of application-level traffic and implement sophisticated network functions as discussed in the next section, all without getting into the datapath.</p> <h3 id="zero-cost-load-balancer-firewall-and-network-analyzer">Zero-cost load balancer, firewall and network analyzer</h3> <p>Typical implementations of network functions such as load balancers and firewalls require an intermediate layer that taps into the data or packet stream. Kubernetes&rsquo; load balancer implementation (<code>kube-proxy</code>), for example, introduces a probe into the packet stream through iptables, and Istio implements the same at the proxy layer. But if all that is required is to redirect or drop connections based on policy, it is not really necessary to stay in the datapath during the entire course of the connection. AppSwitch can take care of that much more efficiently by simply manipulating the control path at the API level. Given its intimate proximity to the application, AppSwitch also has easy access to various pieces of application-level metrics such as the dynamics of stack and heap usage, precisely when a service comes alive, attributes of active connections etc., all of which could potentially form a rich signal for monitoring and analytics.</p> <p>To go a step further, AppSwitch can also perform L7 load balancing and firewall functions based on the protocol data that it obtains from the socket buffers. It can synthesize the protocol data and various other signals with the policy information acquired from Pilot to implement a highly efficient form of routing and access control enforcement. It can essentially &ldquo;influence&rdquo; the application to connect to the right backend server without requiring any changes to the application or its configuration. It is as if the application itself were infused with policy and traffic-management intelligence.
Except in this case, the application can&rsquo;t escape the influence.</p> <p>There is some more black magic possible that would allow modifying the application data stream without getting into the datapath, but I am going to save that for a later post. The current implementation of AppSwitch uses a proxy if the use case requires application protocol traffic to be modified. For those cases, AppSwitch provides a highly efficient mechanism to attract traffic to the proxy, as discussed in the next section.</p> <h3 id="traffic-redirection">Traffic redirection</h3> <p>Before the sidecar proxy can look into application protocol traffic, it first needs to receive the connections. Redirection of connections coming into and going out of the application is currently done by a layer of packet filtering that rewrites packets such that they go to the respective sidecars. Creating the potentially large number of rules required to represent the redirection policy is tedious. And the process of applying the rules and updating them, as the target subnets to be captured by the sidecar change, is expensive.</p> <p>While some of the performance concerns are being addressed by the Linux community, there is another concern related to privilege: iptables rules need to be updated whenever the policy changes. Given the current architecture, all privileged operations are performed in an init container that runs just once at the very beginning, before privileges are dropped for the actual application. Since updating iptables rules requires root privileges, there is no way to do that without restarting the application.</p> <p>AppSwitch provides a way to redirect application connections without root privilege. After all, an unprivileged application is already able to connect to any host (modulo firewall rules etc.)
and the owner of the application should be allowed to change the host address passed by its application via <code>connect(2)</code> without requiring additional privilege.</p> <h4 id="socket-delegation">Socket delegation</h4> <p>Let&rsquo;s see how AppSwitch could help redirect connections without using iptables. If the application somehow voluntarily passed the socket FDs that it uses for its communication to the sidecar, there would be no need for iptables. AppSwitch provides a feature called <em>socket delegation</em> that does exactly that. It allows the sidecar to transparently gain access to copies of the socket FDs that the application uses for its communication, without any changes to the application itself.</p> <p>Here is the sequence of steps that would achieve this in the context of the Python application example.</p> <ol> <li>The application initiates a connection request to the service IP of the memcached service.</li> <li>The connection request from the client is forwarded to the daemon.</li> <li>The daemon creates a pair of pre-connected Unix sockets (using the <code>socketpair(2)</code> system call).</li> <li>It passes one end of the socket pair into the application such that the application would use that socket FD for read/write. It also ensures that the application consistently sees it as the legitimate TCP socket it expects, by interposing all calls that query connection properties.</li> <li>The other end is passed to the sidecar over a different Unix socket where the daemon exposes its API.
Information such as the original destination that the application was connecting to is also conveyed over the same interface.</li> </ol> <figure style="width:50%"> <div class="wrapper-with-intrinsic-ratio" style="padding-bottom:22.442748091603054%"> <a data-skipendnotes="true" href="/v1.9/blog/2018/delayering-istio/socket-delegation.png" title="Socket delegation based connection redirection"> <img class="element-to-stretch" src="/v1.9/blog/2018/delayering-istio/socket-delegation.png" alt="Socket delegation protocol" /> </a> </div> <figcaption>Socket delegation based connection redirection</figcaption> </figure> <p>Once the application and the sidecar are connected, the rest happens as usual. The sidecar would initiate a connection to the upstream server and proxy data between the socket received from the daemon and the socket connected to the upstream server. The main difference here is that the sidecar would get the connection, not through the <code>accept(2)</code> system call as in the normal case, but from the daemon over the Unix socket.
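</p>
<p>The socket-delegation handoff in the steps above can be sketched with Python&rsquo;s FD-passing helpers (<code>socket.send_fds</code>/<code>socket.recv_fds</code>, available on Unix in Python 3.9+). The sketch runs the &ldquo;daemon&rdquo; and &ldquo;sidecar&rdquo; roles in one process for simplicity, and the metadata string is an invented placeholder:</p>

```python
import socket

# The "daemon" creates a pre-connected pair, as with socketpair(2):
# one end for the application, the other destined for the sidecar.
app_end, sidecar_end = socket.socketpair()

# A separate Unix-domain channel stands in for the daemon's API socket
# over which the sidecar receives delegated sockets.
daemon_api, sidecar_api = socket.socketpair()

# The daemon ships the socket's FD, plus some metadata, to the sidecar
# via SCM_RIGHTS; no payload data is copied through the daemon.
socket.send_fds(daemon_api, [b"orig-dst=10.0.0.7:11211"], [sidecar_end.fileno()])

# The sidecar receives the metadata and rebuilds a usable socket object
# from the delegated FD.
msg, fds, _, _ = socket.recv_fds(sidecar_api, 1024, 1)
delegated = socket.socket(fileno=fds[0])

# The application writes on its end; the sidecar reads it through the
# delegated socket -- no iptables redirection involved.
app_end.sendall(b"get mykey\r\n")
data = delegated.recv(64)
```

<p>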
In addition to listening for connections from applications through the normal <code>accept(2)</code> channel, the sidecar proxy would connect to the AppSwitch daemon’s REST endpoint and receive sockets that way.</p> <p>For completeness, here is the sequence of steps that would occur on the server side:</p> <ol> <li>The application receives a connection</li> <li>The AppSwitch daemon accepts the connection on behalf of the application</li> <li>It creates a pair of pre-connected Unix sockets using the <code>socketpair(2)</code> system call</li> <li>One end of the socket pair is returned to the application through the <code>accept(2)</code> system call</li> <li>The other end of the socket pair, along with the socket originally accepted by the daemon on behalf of the application, is sent to the sidecar</li> <li>The sidecar would extract the two socket FDs &ndash; a Unix socket FD connected to the application and a TCP socket FD connected to the remote client</li> <li>The sidecar would read the metadata supplied by the daemon about the remote client and perform its usual operations</li> </ol> <h4 id="sidecar-aware-applications">&ldquo;Sidecar-aware&rdquo; applications</h4> <p>The socket delegation feature can be very useful for applications that are explicitly aware of the sidecar and wish to take advantage of its features. They can voluntarily delegate their network interactions by passing their sockets to the sidecar using the same feature. In a way, AppSwitch transparently turns every application into a sidecar-aware application.</p> <h2 id="how-does-it-all-come-together">How does it all come together?</h2> <p>Just to step back, Istio offloads common connectivity concerns from applications to a sidecar proxy that performs those functions on behalf of the application.
And AppSwitch simplifies and optimizes the service mesh by sidestepping intermediate layers and invoking the proxy only for cases where it is truly necessary.</p> <p>In the rest of this section, I outline how AppSwitch may be integrated with Istio, based on a very cursory initial implementation. This is not intended to be anything like a design doc &ndash; not every possible way of integration is explored and not every detail is worked out. The intent is to discuss high-level aspects of the implementation to present a rough idea of how the two systems may come together. The key is that AppSwitch would act as a cushion between Istio and a real proxy. It would serve as the &ldquo;fast-path&rdquo; for cases that can be handled more efficiently without invoking the sidecar proxy. And for the cases where the proxy is used, it would shorten the datapath by cutting through unnecessary layers. Look at this <a href="http://appswitch.io/blog/kubernetes_istio_and_network_function_devirtualization_with_appswitch/">blog</a> for a more detailed walkthrough of the integration.</p> <h3 id="appswitch-client-injection">AppSwitch client injection</h3> <p>Similar to the Istio sidecar injector, a simple tool called <code>ax-injector</code> injects the AppSwitch client into a standard Kubernetes manifest. The injected client transparently monitors the application and notifies the AppSwitch daemon of the control-path network API events that the application produces.</p> <p>It is possible to avoid the injection and work with standard Kubernetes manifests if the AppSwitch CNI plugin is used. In that case, the CNI plugin would perform the necessary injection when it gets the initialization callback.
Using the injector does have some advantages, however: (1) it works in tightly controlled environments like GKE; (2) it can be easily extended to support other frameworks such as Mesos; and (3) the same cluster can run standard applications alongside &ldquo;AppSwitch-enabled&rdquo; applications.</p> <h3 id="appswitch-daemonset">AppSwitch <code>DaemonSet</code></h3> <p>The AppSwitch daemon can be configured to run as a <code>DaemonSet</code> or as an extension to the application that is directly injected into the application manifest. In either case, it handles network events coming in from the applications that it supports.</p> <h3 id="agent-for-policy-acquisition">Agent for policy acquisition</h3> <p>This is the component that conveys the policy and configuration dictated by Istio to AppSwitch. It implements the xDS API to listen to Pilot and calls the appropriate AppSwitch APIs to program the daemon. For example, it allows the load balancing strategy, as specified by <code>istioctl</code>, to be translated into the equivalent AppSwitch capability.</p> <h3 id="platform-adapter-for-appswitch-auto-curated-service-registry">Platform adapter for AppSwitch &ldquo;Auto-Curated&rdquo; service registry</h3> <p>Given that AppSwitch is in the control path of applications’ network APIs, it has ready access to the topology of services across the cluster. AppSwitch exposes that information in the form of a service registry that is automatically and (almost) synchronously updated as applications and their services come and go. A new platform adapter for AppSwitch, alongside those for Kubernetes, Eureka etc., would provide the details of upstream services to Istio.
This is not strictly necessary, but it does make it easier to correlate the service endpoints received from Pilot by the AppSwitch agent above.</p> <h3 id="proxy-integration-and-chaining">Proxy integration and chaining</h3> <p>Connections that do require deep scanning and mutation of application traffic are handed off to an external proxy through the socket delegation mechanism discussed earlier. It uses an extended version of the <a href="https://www.haproxy.org/download/1.8/doc/proxy-protocol.txt">proxy protocol</a>. In addition to the simple parameters supported by the proxy protocol, a variety of other metadata (including the initial protocol headers obtained from the socket buffers) and live socket FDs (representing application connections) are forwarded to the proxy.</p> <p>The proxy can look at the metadata and decide how to proceed. It could respond by accepting the connection to do the proxying, by directing AppSwitch to allow the connection and use the fast-path, or by just dropping the connection.</p> <p>One of the interesting aspects of the mechanism is that, when the proxy accepts a socket from AppSwitch, it can in turn delegate the socket to another proxy. In fact, that is how AppSwitch currently works. It uses a simple built-in proxy to examine the metadata and decide whether to handle the connection internally or to hand it off to an external proxy (Envoy). The same mechanism can potentially be extended to allow for a chain of plugins, each looking for a specific signature, with the last one in the chain doing the real proxy work.</p> <h2 id="it-s-not-just-about-performance">It&rsquo;s not just about performance</h2> <p>Removing intermediate layers along the datapath is not just about improving performance. Performance is a great side effect, but it <em>is</em> a side effect.
There are a number of important advantages to an API-level approach.</p> <h3 id="automatic-application-onboarding-and-policy-authoring">Automatic application onboarding and policy authoring</h3> <p>Before microservices and service mesh, traffic management was done by load balancers and access controls were enforced by firewalls. Applications were identified by IP addresses and DNS names, which were relatively static. In fact, that&rsquo;s still the status quo in most environments. Such environments stand to benefit immensely from a service mesh. However, a practical and scalable bridge to the new world needs to be provided. The difficulty of the transformation is due not so much to a lack of features and functionality as to the investment required to rethink and reimplement the entire application infrastructure. Currently, most of the policy and configuration exists in the form of load balancer and firewall rules. Somehow that existing context needs to be leveraged in providing a scalable path to adopting the service mesh model.</p> <p>AppSwitch can substantially ease the onboarding process. It can project the same network environment to the application at the target as its current source environment. Not having any assistance here is typically a non-starter in the case of traditional applications, which have complex configuration files with static IP addresses or specific DNS names hard-coded in them. AppSwitch could help capture those applications along with their existing configuration and connect them over a service mesh without requiring any changes.</p> <h3 id="broader-application-and-protocol-support">Broader application and protocol support</h3> <p>HTTP clearly dominates the modern application landscape, but once we talk about traditional applications and environments, we&rsquo;d encounter all kinds of protocols and transports. In particular, support for UDP becomes unavoidable. Traditional application servers such as IBM WebSphere rely extensively on UDP.
Most multimedia applications use UDP media streams. Of course, DNS is probably the most widely used UDP &ldquo;application&rdquo;. AppSwitch supports UDP at the API level much the same way as TCP, and when it detects a UDP connection, it can transparently handle it in its &ldquo;fast-path&rdquo; rather than delegating it to the proxy.</p> <h3 id="client-ip-preservation-and-end-to-end-principle">Client IP preservation and end-to-end principle</h3> <p>The same mechanism that preserves the source network environment can also preserve client IP addresses as seen by the servers. With a sidecar proxy in place, connection requests come from the proxy rather than the client. As a result, the peer address (IP:port) of the connection as seen by the server would be that of the proxy rather than the client. AppSwitch ensures that the server sees the correct address of the client and logs it correctly, and that any decisions made based on the client address remain valid. More generally, AppSwitch preserves the <a href="https://en.wikipedia.org/wiki/End-to-end_principle">end-to-end principle</a>, which is otherwise broken by intermediate layers that obfuscate the true underlying context.</p> <h3 id="enhanced-application-signal-with-access-to-encrypted-headers">Enhanced application signal with access to encrypted headers</h3> <p>Encrypted traffic completely undermines the ability of the service mesh to analyze application traffic. API-level interposition could potentially offer a way around it. The current implementation of AppSwitch gains access to the application&rsquo;s network API at the system-call level. However, it is possible in principle to influence the application at an API boundary higher in the stack, where the application data is not yet encrypted or is already decrypted. Ultimately, the data is always produced in the clear by the application and then encrypted at some point before it goes out.
Since AppSwitch runs directly within the memory context of the application, it is possible to tap into the data higher in the stack where it is still held in the clear. The only requirement for this to work is that the API used for encryption be well-defined and amenable to interposition. In particular, it requires access to the symbol table of the application binaries. Just to be clear, AppSwitch doesn&rsquo;t implement this today.</p> <h2 id="so-what-s-the-net">So what’s the net?</h2> <p>AppSwitch removes a number of layers and a lot of processing from the standard service mesh stack. What does all that translate to in terms of performance?</p> <p>We ran some initial experiments to characterize the extent of the opportunity for optimization, based on the initial integration of AppSwitch discussed earlier. The experiments were run on GKE using <code>fortio-0.11.0</code>, <code>istio-0.8.0</code> and <code>appswitch-0.4.0-2</code>. In the case of the proxyless test, the AppSwitch daemon was run as a <code>DaemonSet</code> on the Kubernetes cluster and the Fortio pod spec was modified to inject the AppSwitch client. These were the only two changes made to the setup. The test was configured to measure the latency of gRPC requests across 100 concurrent connections.</p> <figure style="width:100%"> <div class="wrapper-with-intrinsic-ratio" style="padding-bottom:54.66034755134282%"> <a data-skipendnotes="true" href="/v1.9/blog/2018/delayering-istio/perf.png" title="Latency with and without AppSwitch"> <img class="element-to-stretch" src="/v1.9/blog/2018/delayering-istio/perf.png" alt="Performance comparison" /> </a> </div> <figcaption>Latency with and without AppSwitch</figcaption> </figure> <p>Initial results indicate a difference of over 18x in p50 latency with and without AppSwitch (3.99ms vs 72.96ms). The difference was around 8x when Mixer and access logs were disabled. Clearly, the difference was due to sidestepping all those intermediate layers along the datapath.
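</p>
<p>As a quick sanity check on the headline number:</p>

```python
# Reported p50 latencies from the Fortio run above.
with_sidecar_ms = 72.96
with_appswitch_ms = 3.99

ratio = with_sidecar_ms / with_appswitch_ms  # about 18.3, i.e. "over 18x"
```

<p>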
The Unix socket optimization wasn&rsquo;t triggered in the case of AppSwitch because the client and server pods were scheduled to separate hosts. The end-to-end latency of the AppSwitch case would have been even lower if the client and server happened to be colocated. Essentially the client and server running in their respective pods of the Kubernetes cluster are directly connected over a TCP socket going over the GKE network &ndash; no tunneling, bridge or proxies.</p> <h2 id="net-net">Net Net</h2> <p>I started out with David Wheeler&rsquo;s seemingly reasonable quote that says adding another layer is not a solution for the problem of too many layers. And I argued through most of the blog that the current network stack already has too many layers and that they should be removed. But isn&rsquo;t AppSwitch itself a layer?</p> <p>Yes, AppSwitch is clearly another layer. However, it is one that can remove multiple other layers. In doing so, it seamlessly glues the new service mesh layer to the existing layers of traditional network environments.
It offsets the cost of the sidecar proxy and, as Istio graduates to 1.0, provides a bridge for existing applications and their network environments to transition to the new world of service mesh.</p> <p>Perhaps Wheeler’s quote should read:</p> <div> <aside class="callout quote"> <div class="type"> <svg class="large-icon"><use xlink:href="/v1.9/img/icons.svg#callout-quote"/></svg> </div> <div class="content">All problems in computer science can be solved with another layer, <strong>even</strong> the problem of too many layers!</div> </aside> </div> <h2 id="acknowledgements">Acknowledgements</h2> <p>Thanks to Mandar Jog (Google) for several discussions about the value of AppSwitch for Istio, and to the following individuals (in alphabetical order) for their review of early drafts of this blog.</p> <ul> <li>Frank Budinsky (IBM)</li> <li>Lin Sun (IBM)</li> <li>Shriram Rajagopalan (VMware)</li> </ul>Mon, 30 Jul 2018 00:00:00 +0000/v1.9/blog/2018/delayering-istio/Dinesh Subhraveti (AppOrbit and Columbia University)/v1.9/blog/2018/delayering-istio/appswitchperformanceMicro-Segmentation with Istio Authorization <p>Micro-segmentation is a security technique that creates secure zones in cloud deployments and allows organizations to isolate workloads from one another and secure them individually. <a href="/v1.9/docs/concepts/security/#authorization">Istio&rsquo;s authorization feature</a>, also known as Istio Role Based Access Control, provides micro-segmentation for services in an Istio mesh.
It features:</p> <ul> <li>Authorization at different levels of granularity, including namespace level, service level, and method level.</li> <li>Service-to-service and end-user-to-service authorization.</li> <li>High performance, as it is enforced natively on Envoy.</li> <li>Role-based semantics, which makes it easy to use.</li> <li>High flexibility, as it allows users to define conditions using <a href="/v1.9/docs/reference/config/security/conditions/">combinations of attributes</a>.</li> </ul> <p>In this blog post, you&rsquo;ll learn about the main authorization features and how to use them in different situations.</p> <h2 id="characteristics">Characteristics</h2> <h3 id="rpc-level-authorization">RPC level authorization</h3> <p>Authorization is performed at the level of individual RPCs. Specifically, it controls &ldquo;who can access my <code>bookstore</code> service”, or &ldquo;who can access method <code>getBook</code> in my <code>bookstore</code> service”. It is not designed to control access to application-specific resource instances, like access to &ldquo;storage bucket X” or access to &ldquo;3rd book on 2nd shelf”. Today this kind of application-specific access control logic needs to be handled by the application itself.</p> <h3 id="role-based-access-control-with-conditions">Role-based access control with conditions</h3> <p>Authorization is a <a href="https://en.wikipedia.org/wiki/Role-based_access_control">role-based access control (RBAC)</a> system, in contrast to an <a href="https://en.wikipedia.org/wiki/Attribute-based_access_control">attribute-based access control (ABAC)</a> system. Compared to ABAC, RBAC has the following advantages:</p> <ul> <li><p><strong>Roles allow grouping of attributes.</strong> Roles are groups of permissions, which specify the actions you are allowed to perform on a system. Users are grouped based on the roles within an organization.
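</p> <p>For example, a role that grants read-only access to a <code>bookstore</code> service could be defined once and then bound to any number of subjects. The fragment below is only an illustrative sketch using the <code>v1alpha1</code> RBAC API shown later in this post; the role name is made up:</p> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' >apiVersion: &#34;rbac.istio.io/v1alpha1&#34;
kind: ServiceRole
metadata:
  name: bookstore-viewer
  namespace: default
spec:
  rules:
  - services: [&#34;bookstore.default.svc.cluster.local&#34;]
    methods: [&#34;GET&#34;]
</code></pre> <p>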
You can define the roles and reuse them for different cases.</p></li> <li><p><strong>It is easier to understand and reason about who has access.</strong> The RBAC concepts map naturally to business concepts. For example, a DB admin may have all access to DB backend services, while a web client may only be able to view the frontend service.</p></li> <li><p><strong>It reduces unintentional errors.</strong> RBAC policies make otherwise complex security changes easier. You won&rsquo;t have duplicate configurations in multiple places and later forget to update some of them when you need to make changes.</p></li> </ul> <p>On the other hand, Istio&rsquo;s authorization system is not a traditional RBAC system. It also allows users to define <strong>conditions</strong> using <a href="/v1.9/docs/reference/config/security/conditions/">combinations of attributes</a>. This gives Istio the flexibility to express complex access control policies. In fact, <strong>the &ldquo;RBAC + conditions” model that Istio authorization adopts has all the benefits of an RBAC system, and supports the level of flexibility that an ABAC system normally provides.</strong> You&rsquo;ll see some <a href="#examples">examples</a> below.</p> <h3 id="high-performance">High performance</h3> <p>Because of its simple semantics, Istio authorization is enforced natively on Envoy. At runtime, the authorization decision is made entirely locally, inside an Envoy filter, without any dependency on an external module. This allows Istio authorization to achieve high performance and availability.</p> <h3 id="work-with-without-primary-identities">Work with/without primary identities</h3> <p>Like any other RBAC system, Istio authorization is identity aware. In an Istio authorization policy, there is a primary identity called <code>user</code>, which represents the principal of the client.</p> <p>In addition to the primary identity, you can also specify any conditions that define the identities.
For example, you can specify the client identity as &ldquo;user Alice calling from the Bookstore frontend service”, in which case you have a combined identity of the calling service (<code>Bookstore frontend</code>) and the end user (<code>Alice</code>).</p> <p>To improve security, you should enable <a href="/v1.9/docs/concepts/security/#authentication">authentication features</a>, and use authenticated identities in authorization policies. However, a strongly authenticated identity is not required for using authorization. Istio authorization works with or without identities. If you are working with a legacy system, you may not have mutual TLS or JWT authentication set up for your mesh. In this case, the only way to identify the client is, for example, through its IP address. You can still use Istio authorization to control which IP addresses or IP ranges are allowed to access your service.</p> <h2 id="examples">Examples</h2> <p>The <a href="/v1.9/docs/tasks/security/authorization/authz-http/">authorization task</a> shows you how to use Istio&rsquo;s authorization feature to control namespace level and service level access using the <a href="/v1.9/docs/examples/bookinfo/">Bookinfo application</a>. In this section, you&rsquo;ll see more examples on how to achieve micro-segmentation with Istio authorization.</p> <h3 id="namespace-level-segmentation-via-rbac-conditions">Namespace level segmentation via RBAC + conditions</h3> <p>Suppose you have services in the <code>frontend</code> and <code>backend</code> namespaces.
You would like to allow all your services in the <code>frontend</code> namespace to access all services that are marked <code>external</code> in the <code>backend</code> namespace.</p> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' >apiVersion: &#34;rbac.istio.io/v1alpha1&#34;
kind: ServiceRole
metadata:
  name: external-api-caller
  namespace: backend
spec:
  rules:
  - services: [&#34;*&#34;]
    methods: [&#34;*&#34;]
    constraints:
    - key: &#34;destination.labels[visibility]&#34;
      values: [&#34;external&#34;]
---
apiVersion: &#34;rbac.istio.io/v1alpha1&#34;
kind: ServiceRoleBinding
metadata:
  name: external-api-caller
  namespace: backend
spec:
  subjects:
  - properties:
      source.namespace: &#34;frontend&#34;
  roleRef:
    kind: ServiceRole
    name: &#34;external-api-caller&#34;
</code></pre> <p>The <code>ServiceRole</code> and <code>ServiceRoleBinding</code> above express &ldquo;<em>who</em> is allowed to do <em>what</em> under <em>which conditions</em>” (RBAC + conditions). Specifically:</p> <ul> <li><strong>&ldquo;who”</strong> are the services in the <code>frontend</code> namespace.</li> <li><strong>&ldquo;what”</strong> is to call services in the <code>backend</code> namespace.</li> <li><strong>&ldquo;conditions”</strong> is the <code>visibility</code> label of the destination service having the value <code>external</code>.</li> </ul> <h3 id="service-method-level-isolation-with-without-primary-identities">Service/method level isolation with/without primary identities</h3> <p>Here is another example that demonstrates finer-grained access control at the service/method level.
The first step is to define a <code>book-reader</code> service role that allows READ access to the <code>/books/*</code> resources in the <code>bookstore</code> service.</p> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' >apiVersion: &#34;rbac.istio.io/v1alpha1&#34;
kind: ServiceRole
metadata:
  name: book-reader
  namespace: default
spec:
  rules:
  - services: [&#34;bookstore.default.svc.cluster.local&#34;]
    paths: [&#34;/books/*&#34;]
    methods: [&#34;GET&#34;]
</code></pre> <h4 id="using-authenticated-client-identities">Using authenticated client identities</h4> <p>Suppose you want to grant this <code>book-reader</code> role to your <code>bookstore-frontend</code> service. If you have enabled <a href="/v1.9/docs/concepts/security/#mutual-tls-authentication">mutual TLS authentication</a> for your mesh, you can use a service account to identify your <code>bookstore-frontend</code> service. Granting the <code>book-reader</code> role to the <code>bookstore-frontend</code> service can be done by creating a <code>ServiceRoleBinding</code> as shown below:</p> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' >apiVersion: &#34;rbac.istio.io/v1alpha1&#34;
kind: ServiceRoleBinding
metadata:
  name: book-reader
  namespace: default
spec:
  subjects:
  - user: &#34;cluster.local/ns/default/sa/bookstore-frontend&#34;
  roleRef:
    kind: ServiceRole
    name: &#34;book-reader&#34;
</code></pre> <p>You may want to restrict this further by adding a condition that &ldquo;only users who belong to the <code>qualified-reviewer</code> group are allowed to read books”. The <code>qualified-reviewer</code> group is the end user identity that is authenticated by <a href="/v1.9/docs/concepts/security/#authentication">JWT authentication</a>.
In this case, the combination of the client service identity (<code>bookstore-frontend</code>) and the end user identity (<code>qualified-reviewer</code>) is used in the authorization policy.</p> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' >apiVersion: &#34;rbac.istio.io/v1alpha1&#34;
kind: ServiceRoleBinding
metadata:
  name: book-reader
  namespace: default
spec:
  subjects:
  - user: &#34;cluster.local/ns/default/sa/bookstore-frontend&#34;
    properties:
      request.auth.claims[group]: &#34;qualified-reviewer&#34;
  roleRef:
    kind: ServiceRole
    name: &#34;book-reader&#34;
</code></pre> <h4 id="client-does-not-have-identity">Client does not have identity</h4> <p>Using authenticated identities in authorization policies is strongly recommended for security. However, if you have a legacy system that does not support authentication, you may not have authenticated identities for your services. You can still use Istio authorization to protect your services even without authenticated identities. The example below shows that you can specify an allowed source IP range in your authorization policy.</p> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' >apiVersion: &#34;rbac.istio.io/v1alpha1&#34;
kind: ServiceRoleBinding
metadata:
  name: book-reader
  namespace: default
spec:
  subjects:
  - properties:
      source.ip: 10.20.0.0/9
  roleRef:
    kind: ServiceRole
    name: &#34;book-reader&#34;
</code></pre> <h2 id="summary">Summary</h2> <p>Istio’s authorization feature provides authorization at namespace-level, service-level, and method-level granularity. It adopts the &ldquo;RBAC + conditions” model, which makes it as easy to use and understand as an RBAC system, while providing the level of flexibility that an ABAC system normally provides. Istio authorization achieves high performance as it is enforced natively on Envoy.
While it provides the best security by working together with <a href="/v1.9/docs/concepts/security/#authentication">Istio authentication features</a>, Istio authorization can also be used to provide access control for legacy systems that do not have authentication.</p>Fri, 20 Jul 2018 00:00:00 +0000/v1.9/blog/2018/istio-authorization/Limin Wang/v1.9/blog/2018/istio-authorization/authorizationrbacsecurityExporting Logs to BigQuery, GCS, Pub/Sub through Stackdriver <p>This post shows how to direct Istio logs to <a href="https://cloud.google.com/stackdriver/">Stackdriver</a> and export those logs to various configured sinks such as <a href="https://cloud.google.com/bigquery/">BigQuery</a>, <a href="https://cloud.google.com/storage/">Google Cloud Storage</a> or <a href="https://cloud.google.com/pubsub/">Cloud Pub/Sub</a>. At the end of this post you can perform analytics on Istio data from your favorite places such as BigQuery, GCS or Cloud Pub/Sub.</p> <p>The <a href="/v1.9/docs/examples/bookinfo/">Bookinfo</a> sample application is used as the example application throughout this task.</p> <h2 id="before-you-begin">Before you begin</h2> <p><a href="/v1.9/docs/setup/">Install Istio</a> in your cluster and deploy an application.</p> <h2 id="configuring-istio-to-export-logs">Configuring Istio to export logs</h2> <p>Istio exports logs using the <code>logentry</code> <a href="https://istio.io/v1.6/docs/reference/config/policy-and-telemetry/templates/logentry">template</a>. This specifies all the variables that are available for analysis. It contains information like source service, destination service, auth metrics (coming..) among others.
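</p> <p>Concretely, a <code>logentry</code> instance selects attributes from the request and maps them to named variables. The fragment below is only an illustrative sketch (the full handler and instance configuration used by this post appears later):</p> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' >apiVersion: &#34;config.istio.io/v1alpha2&#34;
kind: logentry
metadata:
  name: accesslog-sample
  namespace: istio-system
spec:
  severity: &#39;&#34;info&#34;&#39;
  timestamp: request.time
  variables:
    sourceService: source.service | &#34;unknown&#34;
    destinationService: destination.service | &#34;unknown&#34;
    responseCode: response.code | 0
  monitored_resource_type: &#39;&#34;UNSPECIFIED&#34;&#39;
</code></pre> <p>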
The following diagram shows the pipeline:</p> <figure style="width:75%"> <div class="wrapper-with-intrinsic-ratio" style="padding-bottom:75%"> <a data-skipendnotes="true" href="/v1.9/blog/2018/export-logs-through-stackdriver/istio-analytics-using-stackdriver.png" title="Exporting logs from Istio to Stackdriver for analysis"> <img class="element-to-stretch" src="/v1.9/blog/2018/export-logs-through-stackdriver/istio-analytics-using-stackdriver.png" alt="Exporting logs from Istio to Stackdriver for analysis" /> </a> </div> <figcaption>Exporting logs from Istio to Stackdriver for analysis</figcaption> </figure> <p>Istio supports exporting logs to Stackdriver, which can in turn be configured to export logs to your favorite sink like BigQuery, Pub/Sub or GCS. Please follow the steps below to set up your favorite sink for exporting logs first, and then Stackdriver in Istio.</p> <h3 id="setting-up-various-log-sinks">Setting up various log sinks</h3> <p>Common setup for all sinks:</p> <ol> <li>Enable the <a href="https://cloud.google.com/monitoring/api/enable-api">Stackdriver Monitoring API</a> for the project.</li> <li>Make sure the <code>principalEmail</code> that will be setting up the sink has write access to the project and Logging Admin role permissions.</li> <li>Make sure the <code>GOOGLE_APPLICATION_CREDENTIALS</code> environment variable is set. Please follow the instructions <a href="https://cloud.google.com/docs/authentication/getting-started">here</a> to set it up.</li> </ol> <h4 id="bigquery">BigQuery</h4> <ol> <li><a href="https://cloud.google.com/bigquery/docs/datasets">Create a BigQuery dataset</a> as a destination for the logs export.</li> <li>Record the ID of the dataset. It will be needed to configure the Stackdriver handler.
It would be of the form <code>bigquery.googleapis.com/projects/[PROJECT_ID]/datasets/[DATASET_ID]</code></li> <li>Give the <a href="https://cloud.google.com/logging/docs/api/tasks/exporting-logs#writing_to_the_destination">sink’s writer identity</a>: <code>cloud-logs@system.gserviceaccount.com</code> the BigQuery Data Editor role in IAM.</li> <li>If using <a href="/v1.9/docs/setup/platform-setup/gke/">Google Kubernetes Engine</a>, make sure the <code>bigquery</code> <a href="https://cloud.google.com/sdk/gcloud/reference/container/clusters/create">Scope</a> is enabled on the cluster.</li> </ol> <h4 id="google-cloud-storage-gcs">Google Cloud Storage (GCS)</h4> <ol> <li><a href="https://cloud.google.com/storage/docs/creating-buckets">Create a GCS bucket</a> where you would like logs to get exported in GCS.</li> <li>Record the ID of the bucket. It will be needed to configure Stackdriver. It would be of the form <code>storage.googleapis.com/[BUCKET_ID]</code></li> <li>Give the <a href="https://cloud.google.com/logging/docs/api/tasks/exporting-logs#writing_to_the_destination">sink’s writer identity</a>: <code>cloud-logs@system.gserviceaccount.com</code> the Storage Object Creator role in IAM.</li> </ol> <h4 id="google-cloud-pub-sub">Google Cloud Pub/Sub</h4> <ol> <li><a href="https://cloud.google.com/pubsub/docs/admin">Create a topic</a> where you would like logs to get exported in Google Cloud Pub/Sub.</li> <li>Record the ID of the topic. It will be needed to configure Stackdriver.
It would be of the form <code>pubsub.googleapis.com/projects/[PROJECT_ID]/topics/[TOPIC_ID]</code></li> <li>Give <a href="https://cloud.google.com/logging/docs/api/tasks/exporting-logs#writing_to_the_destination">sink’s writer identity</a>: <code>cloud-logs@system.gserviceaccount.com</code> Pub/Sub Publisher role in IAM.</li> <li>If using <a href="/v1.9/docs/setup/platform-setup/gke/">Google Kubernetes Engine</a>, make sure <code>pubsub</code> <a href="https://cloud.google.com/sdk/gcloud/reference/container/clusters/create">Scope</a> is enabled on the cluster.</li> </ol> <h3 id="setting-up-stackdriver">Setting up Stackdriver</h3> <p>A Stackdriver handler must be created to export data to Stackdriver. The configuration for a Stackdriver handler is described <a href="https://istio.io/v1.6/docs/reference/config/policy-and-telemetry/adapters/stackdriver/">here</a>.</p> <ol> <li><p>Save the following yaml file as <code>stackdriver.yaml</code>. Replace <code>&lt;project_id&gt;, &lt;sink_id&gt;, &lt;sink_destination&gt;, &lt;log_filter&gt;</code> with their specific values.</p> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' >apiVersion: &#34;config.istio.io/v1alpha2&#34; kind: stackdriver metadata: name: handler namespace: istio-system spec: # We&#39;ll use the default value from the adapter, once per minute, so we don&#39;t need to supply a value. # pushInterval: 1m # Must be supplied for the Stackdriver adapter to work project_id: &#34;&lt;project_id&gt;&#34; # One of the following must be set; the preferred method is `appCredentials`, which corresponds to # Google Application Default Credentials. # If none is provided we default to app credentials. # appCredentials: # apiKey: # serviceAccountPath: # Describes how to map Istio logs into Stackdriver. 
logInfo: accesslog.logentry.istio-system: payloadTemplate: &#39;{{or (.sourceIp) &#34;-&#34;}} - {{or (.sourceUser) &#34;-&#34;}} [{{or (.timestamp.Format &#34;02/Jan/2006:15:04:05 -0700&#34;) &#34;-&#34;}}] &#34;{{or (.method) &#34;-&#34;}} {{or (.url) &#34;-&#34;}} {{or (.protocol) &#34;-&#34;}}&#34; {{or (.responseCode) &#34;-&#34;}} {{or (.responseSize) &#34;-&#34;}}&#39; httpMapping: url: url status: responseCode requestSize: requestSize responseSize: responseSize latency: latency localIp: sourceIp remoteIp: destinationIp method: method userAgent: userAgent referer: referer labelNames: - sourceIp - destinationIp - sourceService - sourceUser - sourceNamespace - destinationIp - destinationService - destinationNamespace - apiClaims - apiKey - protocol - method - url - responseCode - responseSize - requestSize - latency - connectionMtls - userAgent - responseTimestamp - receivedBytes - sentBytes - referer sinkInfo: id: &#39;&lt;sink_id&gt;&#39; destination: &#39;&lt;sink_destination&gt;&#39; filter: &#39;&lt;log_filter&gt;&#39; --- apiVersion: &#34;config.istio.io/v1alpha2&#34; kind: rule metadata: name: stackdriver namespace: istio-system spec: match: &#34;true&#34; # If omitted match is true. 
actions: - handler: handler.stackdriver instances: - accesslog.logentry --- </code></pre></li> <li><p>Push the configuration</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl apply -f stackdriver.yaml stackdriver &#34;handler&#34; created rule &#34;stackdriver&#34; created logentry &#34;stackdriverglobalmr&#34; created metric &#34;stackdriverrequestcount&#34; created metric &#34;stackdriverrequestduration&#34; created metric &#34;stackdriverrequestsize&#34; created metric &#34;stackdriverresponsesize&#34; created </code></pre></li> <li><p>Send traffic to the sample application.</p> <p>For the Bookinfo sample, visit <code>http://$GATEWAY_URL/productpage</code> in your web browser or issue the following command:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ curl http://$GATEWAY_URL/productpage </code></pre></li> <li><p>Verify that logs are flowing through Stackdriver to the configured sink.</p> <ul> <li>Stackdriver: Navigate to the <a href="https://pantheon.corp.google.com/logs/viewer">Stackdriver Logs Viewer</a> for your project and look under &ldquo;GKE Container&rdquo; -&gt; &ldquo;Cluster Name&rdquo; -&gt; &ldquo;Namespace Id&rdquo; for Istio Access logs.</li> <li>BigQuery: Navigate to the <a href="https://bigquery.cloud.google.com/">BigQuery Interface</a> for your project and you should find a table with prefix <code>accesslog_logentry_istio</code> in your sink dataset.</li> <li>GCS: Navigate to the <a href="https://pantheon.corp.google.com/storage/browser/">Storage Browser</a> for your project and you should find a bucket named <code>accesslog.logentry.istio-system</code> in your sink bucket.</li> <li>Pub/Sub: Navigate to the <a href="https://pantheon.corp.google.com/cloudpubsub/topicList">Pub/Sub Topic List</a> for your project and you should find a topic for <code>accesslog</code> in your sink topic.</li> </ul></li> </ol> <h2 id="understanding-what-happened">Understanding what 
happened</h2> <p>The <code>stackdriver.yaml</code> file above configured Istio to send access logs to Stackdriver and then added a sink configuration where these logs could be exported. In detail, as follows:</p> <ol> <li><p>Added a handler of kind <code>stackdriver</code></p> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' >apiVersion: &#34;config.istio.io/v1alpha2&#34;
kind: stackdriver
metadata:
  name: handler
  namespace: &lt;your defined namespace&gt;
</code></pre></li> <li><p>Added <code>logInfo</code> in spec</p> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' >spec:
  logInfo:
    accesslog.logentry.istio-system:
      labelNames:
      - sourceIp
      - destinationIp
      ...
      ...
      sinkInfo:
        id: &#39;&lt;sink_id&gt;&#39;
        destination: &#39;&lt;sink_destination&gt;&#39;
        filter: &#39;&lt;log_filter&gt;&#39;
</code></pre> <p>In the above configuration, <code>sinkInfo</code> contains information about the sink where you want the logs to get exported to. For more information on how this gets filled in for different sinks, please refer <a href="https://cloud.google.com/logging/docs/export/#sink-terms">here</a>.</p></li> <li><p>Added a rule for Stackdriver</p> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' >apiVersion: &#34;config.istio.io/v1alpha2&#34;
kind: rule
metadata:
  name: stackdriver
  namespace: istio-system
spec:
  match: &#34;true&#34; # If omitted match is true
  actions:
  - handler: handler.stackdriver
    instances:
    - accesslog.logentry
</code></pre></li> </ol> <h2 id="cleanup">Cleanup</h2> <ul> <li><p>Remove the new Stackdriver configuration:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl delete -f stackdriver.yaml
</code></pre></li> <li><p>If you are not planning to explore any follow-on tasks, refer to the <a href="/v1.9/docs/examples/bookinfo/#cleanup">Bookinfo cleanup</a> instructions to shut down the application.</p></li> </ul> <h2 id="availability-of-logs-in-export-sinks">Availability of
logs in export sinks</h2> <p>Export to BigQuery happens within minutes (we see it to be almost instant), export to GCS can have a delay of 2 to 12 hours, and export to Pub/Sub is almost immediate.</p>Mon, 09 Jul 2018 00:00:00 +0000/v1.9/blog/2018/export-logs-through-stackdriver/Nupur Garg and Douglas Reid/v1.9/blog/2018/export-logs-through-stackdriver/Monitoring and Access Policies for HTTP Egress Traffic <p>While Istio&rsquo;s main focus is management of traffic between microservices inside a service mesh, Istio can also manage ingress (from outside into the mesh) and egress (from the mesh outwards) traffic. Istio can uniformly enforce access policies and aggregate telemetry data for mesh-internal, ingress and egress traffic.</p> <p>In this blog post, we show how to apply monitoring and access policies to HTTP egress traffic with Istio.</p> <h2 id="use-case">Use case</h2> <p>Consider an organization that runs applications that process content from <em>cnn.com</em>. The applications are decomposed into microservices deployed in an Istio service mesh. The applications access pages of various topics from <em>cnn.com</em>: <a href="https://edition.cnn.com/politics">edition.cnn.com/politics</a>, <a href="https://edition.cnn.com/sport">edition.cnn.com/sport</a> and <a href="https://edition.cnn.com/health">edition.cnn.com/health</a>. The organization <a href="/v1.9/docs/tasks/traffic-management/egress/egress-gateway-tls-origination/">configures Istio to allow access to edition.cnn.com</a> and everything works fine. However, at some point in time, the organization decides to banish politics. Practically, it means blocking access to <a href="https://edition.cnn.com/politics">edition.cnn.com/politics</a> and allowing access to <a href="https://edition.cnn.com/sport">edition.cnn.com/sport</a> and <a href="https://edition.cnn.com/health">edition.cnn.com/health</a> only.
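</p> <p>Such a ban can later be enforced with a Mixer <code>denier</code> handler attached to a <code>rule</code> that matches the forbidden path. The fragment below is a hedged sketch with illustrative names, not the exact configuration developed in this post:</p> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' >apiVersion: &#34;config.istio.io/v1alpha2&#34;
kind: denier
metadata:
  name: deny-politics
  namespace: istio-system
spec:
  status:
    code: 7 # gRPC PERMISSION_DENIED
    message: Access to politics is forbidden
---
apiVersion: &#34;config.istio.io/v1alpha2&#34;
kind: checknothing
metadata:
  name: deny-request
  namespace: istio-system
---
apiVersion: &#34;config.istio.io/v1alpha2&#34;
kind: rule
metadata:
  name: deny-politics-rule
  namespace: istio-system
spec:
  match: request.host.endsWith(&#34;cnn.com&#34;) &amp;&amp; request.path.startsWith(&#34;/politics&#34;)
  actions:
  - handler: deny-politics.denier
    instances: [ deny-request.checknothing ]
</code></pre> <p>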
The organization will grant permissions to individual applications and to particular users to access <a href="https://edition.cnn.com/politics">edition.cnn.com/politics</a> on a case-by-case basis.</p> <p>To achieve that goal, the organization&rsquo;s operations people monitor access to the external services and analyze Istio logs to verify that no unauthorized request was sent to <a href="https://edition.cnn.com/politics">edition.cnn.com/politics</a>. They also configure Istio to prevent access to <a href="https://edition.cnn.com/politics">edition.cnn.com/politics</a> automatically.</p> <p>The organization is resolved to prevent any tampering with the new policy. It decides to put mechanisms in place that prevent any possibility of a malicious application accessing the forbidden topic.</p> <h2 id="related-tasks-and-examples">Related tasks and examples</h2> <ul> <li>The <a href="/v1.9/docs/tasks/traffic-management/egress/">Control Egress Traffic</a> task demonstrates how external (outside the Kubernetes cluster) HTTP and HTTPS services can be accessed by applications inside the mesh.</li> <li>The <a href="/v1.9/docs/tasks/traffic-management/egress/egress-gateway/">Configure an Egress Gateway</a> example describes how to configure Istio to direct egress traffic through a dedicated gateway service called an <em>egress gateway</em>.</li> <li>The <a href="/v1.9/docs/tasks/traffic-management/egress/egress-gateway-tls-origination/">Egress Gateway with TLS Origination</a> example demonstrates how to allow applications to send HTTP requests to external servers that require HTTPS, while directing traffic through the egress gateway.</li> <li>The <a href="/v1.9/docs/tasks/observability/metrics/using-istio-dashboard/">Visualizing Metrics with Grafana</a> task describes the Istio Dashboard used to monitor mesh traffic.</li> <li>The <a href="https://istio.io/v1.6/docs/tasks/policy-enforcement/denial-and-list/">Basic Access Control</a> task shows how to control access to in-mesh
services.</li> <li>The <a href="https://istio.io/v1.6/docs/tasks/policy-enforcement/denial-and-list/">Denials and White/Black Listing</a> task shows how to configure access policies using black or white list checkers.</li> </ul> <p>As opposed to the observability and security tasks above, this blog post describes Istio&rsquo;s monitoring and access policies applied exclusively to the egress traffic.</p> <h2 id="before-you-begin">Before you begin</h2> <p>Follow the steps in the <a href="/v1.9/docs/tasks/traffic-management/egress/egress-gateway-tls-origination/">Egress Gateway with TLS Origination</a> example, <strong>with mutual TLS authentication enabled</strong>, without the <a href="/v1.9/docs/tasks/traffic-management/egress/egress-gateway-tls-origination//#cleanup">Cleanup</a> step. After completing that example, you can access <a href="https://edition.cnn.com/politics">edition.cnn.com/politics</a> from an in-mesh container with <code>curl</code> installed. This blog post assumes that the <code>SOURCE_POD</code> environment variable contains the source pod&rsquo;s name and that the container&rsquo;s name is <code>sleep</code>.</p> <h2 id="configure-monitoring-and-access-policies">Configure monitoring and access policies</h2> <p>Since you want to accomplish your tasks in a <em>secure way</em>, you should direct egress traffic through <em>egress gateway</em>, as described in the <a href="/v1.9/docs/tasks/traffic-management/egress/egress-gateway-tls-origination/">Egress Gateway with TLS Origination</a> task. The <em>secure way</em> here means that you want to prevent malicious applications from bypassing Istio monitoring and policy enforcement.</p> <p>According to our scenario, the organization performed the instructions in the <a href="#before-you-begin">Before you begin</a> section, enabled HTTP traffic to <em>edition.cnn.com</em>, and configured that traffic to pass through the egress gateway. 
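</p> <p>That configuration rests on a <code>ServiceEntry</code> that admits the external host into the mesh&rsquo;s service registry, roughly of the following shape (a sketch; the <a href="/v1.9/docs/tasks/traffic-management/egress/egress-gateway-tls-origination/">referenced example</a> contains the authoritative version):</p> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' >apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: cnn
spec:
  hosts:
  - edition.cnn.com
  ports:
  - number: 80
    name: http
    protocol: HTTP
  - number: 443
    name: https
    protocol: HTTPS
  resolution: DNS
</code></pre> <p>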
The egress gateway performs TLS origination to <em>edition.cnn.com</em>, so the traffic leaves the mesh encrypted. At this point, the organization is ready to configure Istio to monitor and apply access policies for the traffic to <em>edition.cnn.com</em>.</p> <h3 id="logging">Logging</h3> <p>Configure Istio to log access to <em>*.cnn.com</em>. You create a <code>logentry</code> and two <a href="https://istio.io/v1.6/docs/reference/config/policy-and-telemetry/adapters/stdio/">stdio</a> <code>handlers</code>, one for logging forbidden access (<em>error</em> log level) and another one for logging all access to <em>*.cnn.com</em> (<em>info</em> log level). Then you create <code>rules</code> to direct your <code>logentry</code> instances to your <code>handlers</code>. One rule directs access to <em>*.cnn.com/politics</em> to the handler for logging forbidden access, another rule directs log entries to the handler that outputs each access to <em>*.cnn.com</em> as an <em>info</em> log entry. To understand the Istio <code>logentries</code>, <code>rules</code>, and <code>handlers</code>, see <a href="/v1.9/blog/2017/adapter-model/">Istio Adapter Model</a>. A diagram with the involved entities and dependencies between them appears below:</p> <figure style="width:80%"> <div class="wrapper-with-intrinsic-ratio" style="padding-bottom:46.46700562636976%"> <a data-skipendnotes="true" href="/v1.9/blog/2018/egress-monitoring-access-control/egress-adapters-monitoring.svg" title="Instances, rules and handlers for egress monitoring"> <img class="element-to-stretch" src="/v1.9/blog/2018/egress-monitoring-access-control/egress-adapters-monitoring.svg" alt="Instances, rules and handlers for egress monitoring" /> </a> </div> <figcaption>Instances, rules and handlers for egress monitoring</figcaption> </figure> <ol> <li><p>Create the <code>logentry</code>, <code>rules</code> and <code>handlers</code>. 
Note that you specify <code>context.reporter.uid</code> as <code>kubernetes://istio-egressgateway</code> in the rules to get logs from the egress gateway only.</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ cat &lt;&lt;EOF | kubectl apply -f - # Log entry for egress access apiVersion: &#34;config.istio.io/v1alpha2&#34; kind: logentry metadata: name: egress-access namespace: istio-system spec: severity: &#39;&#34;info&#34;&#39; timestamp: request.time variables: destination: request.host | &#34;unknown&#34; path: request.path | &#34;unknown&#34; responseCode: response.code | 0 responseSize: response.size | 0 reporterUID: context.reporter.uid | &#34;unknown&#34; sourcePrincipal: source.principal | &#34;unknown&#34; monitored_resource_type: &#39;&#34;UNSPECIFIED&#34;&#39; --- # Handler for error egress access entries apiVersion: &#34;config.istio.io/v1alpha2&#34; kind: stdio metadata: name: egress-error-logger namespace: istio-system spec: severity_levels: info: 2 # output log level as error outputAsJson: true --- # Rule to handle access to *.cnn.com/politics apiVersion: &#34;config.istio.io/v1alpha2&#34; kind: rule metadata: name: handle-politics namespace: istio-system spec: match: request.host.endsWith(&#34;cnn.com&#34;) &amp;&amp; request.path.startsWith(&#34;/politics&#34;) &amp;&amp; context.reporter.uid.startsWith(&#34;kubernetes://istio-egressgateway&#34;) actions: - handler: egress-error-logger.stdio instances: - egress-access.logentry --- # Handler for info egress access entries apiVersion: &#34;config.istio.io/v1alpha2&#34; kind: stdio metadata: name: egress-access-logger namespace: istio-system spec: severity_levels: info: 0 # output log level as info outputAsJson: true --- # Rule to handle access to *.cnn.com apiVersion: &#34;config.istio.io/v1alpha2&#34; kind: rule metadata: name: handle-cnn-access namespace: istio-system spec: match: request.host.endsWith(&#34;.cnn.com&#34;) &amp;&amp; 
context.reporter.uid.startsWith(&#34;kubernetes://istio-egressgateway&#34;) actions: - handler: egress-access-logger.stdio instances: - egress-access.logentry EOF </code></pre></li> <li><p>Send three HTTP requests to <em>cnn.com</em>, to <a href="https://edition.cnn.com/politics">edition.cnn.com/politics</a>, <a href="https://edition.cnn.com/sport">edition.cnn.com/sport</a> and <a href="https://edition.cnn.com/health">edition.cnn.com/health</a>. All three should return <em>200 OK</em>.</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl exec -it $SOURCE_POD -c sleep -- sh -c &#39;curl -sL -o /dev/null -w &#34;%{http_code}\n&#34; http://edition.cnn.com/politics; curl -sL -o /dev/null -w &#34;%{http_code}\n&#34; http://edition.cnn.com/sport; curl -sL -o /dev/null -w &#34;%{http_code}\n&#34; http://edition.cnn.com/health&#39; 200 200 200 </code></pre></li> <li><p>Query the Mixer log and see that the information about the requests appears in the log:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl -n istio-system logs -l istio-mixer-type=telemetry -c mixer | grep egress-access | grep cnn | tail -4 {&#34;level&#34;:&#34;info&#34;,&#34;time&#34;:&#34;2019-01-29T07:43:24.611462Z&#34;,&#34;instance&#34;:&#34;egress-access.logentry.istio-system&#34;,&#34;destination&#34;:&#34;edition.cnn.com&#34;,&#34;path&#34;:&#34;/politics&#34;,&#34;reporterUID&#34;:&#34;kubernetes://istio-egressgateway-747b6764b8-44rrh.istio-system&#34;,&#34;responseCode&#34;:200,&#34;responseSize&#34;:1883355,&#34;sourcePrincipal&#34;:&#34;cluster.local/ns/default/sa/sleep&#34;} 
{&#34;level&#34;:&#34;info&#34;,&#34;time&#34;:&#34;2019-01-29T07:43:24.886316Z&#34;,&#34;instance&#34;:&#34;egress-access.logentry.istio-system&#34;,&#34;destination&#34;:&#34;edition.cnn.com&#34;,&#34;path&#34;:&#34;/sport&#34;,&#34;reporterUID&#34;:&#34;kubernetes://istio-egressgateway-747b6764b8-44rrh.istio-system&#34;,&#34;responseCode&#34;:200,&#34;responseSize&#34;:2094561,&#34;sourcePrincipal&#34;:&#34;cluster.local/ns/default/sa/sleep&#34;} {&#34;level&#34;:&#34;info&#34;,&#34;time&#34;:&#34;2019-01-29T07:43:25.369663Z&#34;,&#34;instance&#34;:&#34;egress-access.logentry.istio-system&#34;,&#34;destination&#34;:&#34;edition.cnn.com&#34;,&#34;path&#34;:&#34;/health&#34;,&#34;reporterUID&#34;:&#34;kubernetes://istio-egressgateway-747b6764b8-44rrh.istio-system&#34;,&#34;responseCode&#34;:200,&#34;responseSize&#34;:2157009,&#34;sourcePrincipal&#34;:&#34;cluster.local/ns/default/sa/sleep&#34;} {&#34;level&#34;:&#34;error&#34;,&#34;time&#34;:&#34;2019-01-29T07:43:24.611462Z&#34;,&#34;instance&#34;:&#34;egress-access.logentry.istio-system&#34;,&#34;destination&#34;:&#34;edition.cnn.com&#34;,&#34;path&#34;:&#34;/politics&#34;,&#34;reporterUID&#34;:&#34;kubernetes://istio-egressgateway-747b6764b8-44rrh.istio-system&#34;,&#34;responseCode&#34;:200,&#34;responseSize&#34;:1883355,&#34;sourcePrincipal&#34;:&#34;cluster.local/ns/default/sa/sleep&#34;} </code></pre> <p>You see four log entries related to your three requests. Three <em>info</em> entries about the access to <em>edition.cnn.com</em> and one <em>error</em> entry about the access to <em>edition.cnn.com/politics</em>. The service mesh operators can see all the access instances, and can also search the log for <em>error</em> log entries that represent forbidden accesses. This is the first security measure the organization can apply before blocking the forbidden accesses automatically, namely logging all the forbidden access instances as errors. 
In some settings this can be a sufficient security measure.</p> <p>Note the attributes:</p> <ul> <li><code>destination</code>, <code>path</code>, <code>responseCode</code>, <code>responseSize</code> are related to HTTP parameters of the requests</li> <li><code>sourcePrincipal</code>:<code>cluster.local/ns/default/sa/sleep</code> - a string that represents the <code>sleep</code> service account in the <code>default</code> namespace</li> <li><code>reporterUID</code>: <code>kubernetes://istio-egressgateway-747b6764b8-44rrh.istio-system</code> - a UID of the reporting pod, in this case <code>istio-egressgateway-747b6764b8-44rrh</code> in the <code>istio-system</code> namespace</li> </ul></li> </ol> <h3 id="access-control-by-routing">Access control by routing</h3> <p>After enabling logging of access to <em>edition.cnn.com</em>, automatically enforce an access policy, namely allow accessing <em>/health</em> and <em>/sport</em> URL paths only. Such a simple policy control can be implemented with Istio routing.</p> <ol> <li><p>Redefine your <code>VirtualService</code> for <em>edition.cnn.com</em>:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ cat &lt;&lt;EOF | kubectl apply -f - apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: direct-cnn-through-egress-gateway spec: hosts: - edition.cnn.com gateways: - istio-egressgateway - mesh http: - match: - gateways: - mesh port: 80 route: - destination: host: istio-egressgateway.istio-system.svc.cluster.local subset: cnn port: number: 443 weight: 100 - match: - gateways: - istio-egressgateway port: 443 uri: regex: &#34;/health|/sport&#34; route: - destination: host: edition.cnn.com port: number: 443 weight: 100 EOF </code></pre> <p>Note that you added a <code>match</code> by <code>uri</code> condition that checks that the URL path is either <em>/health</em> or <em>/sport</em>. 
Also note that this condition is added to the <code>istio-egressgateway</code> section of the <code>VirtualService</code>, since the egress gateway is a hardened component in terms of security (see <a href="/v1.9/docs/tasks/traffic-management/egress/egress-gateway/#additional-security-considerations">egress gateway security considerations</a>). You don&rsquo;t want any tampering with your policies.</p></li> <li><p>Send the previous three HTTP requests to <em>cnn.com</em>:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl exec -it $SOURCE_POD -c sleep -- sh -c &#39;curl -sL -o /dev/null -w &#34;%{http_code}\n&#34; http://edition.cnn.com/politics; curl -sL -o /dev/null -w &#34;%{http_code}\n&#34; http://edition.cnn.com/sport; curl -sL -o /dev/null -w &#34;%{http_code}\n&#34; http://edition.cnn.com/health&#39; 404 200 200 </code></pre> <p>The request to <a href="https://edition.cnn.com/politics">edition.cnn.com/politics</a> returned <em>404 Not Found</em>, while requests to <a href="https://edition.cnn.com/sport">edition.cnn.com/sport</a> and <a href="https://edition.cnn.com/health">edition.cnn.com/health</a> returned <em>200 OK</em>, as expected.</p> <div> <aside class="callout tip"> <div class="type"><svg class="large-icon"><use xlink:href="/v1.9/img/icons.svg#callout-tip"/></svg></div> <div class="content">You may need to wait several seconds for the update of the <code>VirtualService</code> to propagate to the egress gateway.</div> </aside> </div> </li> <li><p>Query the Mixer log and see that the information about the requests appears again in the log:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl -n istio-system logs -l istio-mixer-type=telemetry -c mixer | grep egress-access | grep cnn | tail -4 
{&#34;level&#34;:&#34;info&#34;,&#34;time&#34;:&#34;2019-01-29T07:55:59.686082Z&#34;,&#34;instance&#34;:&#34;egress-access.logentry.istio-system&#34;,&#34;destination&#34;:&#34;edition.cnn.com&#34;,&#34;path&#34;:&#34;/politics&#34;,&#34;reporterUID&#34;:&#34;kubernetes://istio-egressgateway-747b6764b8-44rrh.istio-system&#34;,&#34;responseCode&#34;:404,&#34;responseSize&#34;:0,&#34;sourcePrincipal&#34;:&#34;cluster.local/ns/default/sa/sleep&#34;} {&#34;level&#34;:&#34;info&#34;,&#34;time&#34;:&#34;2019-01-29T07:55:59.697565Z&#34;,&#34;instance&#34;:&#34;egress-access.logentry.istio-system&#34;,&#34;destination&#34;:&#34;edition.cnn.com&#34;,&#34;path&#34;:&#34;/sport&#34;,&#34;reporterUID&#34;:&#34;kubernetes://istio-egressgateway-747b6764b8-44rrh.istio-system&#34;,&#34;responseCode&#34;:200,&#34;responseSize&#34;:2094561,&#34;sourcePrincipal&#34;:&#34;cluster.local/ns/default/sa/sleep&#34;} {&#34;level&#34;:&#34;info&#34;,&#34;time&#34;:&#34;2019-01-29T07:56:00.264498Z&#34;,&#34;instance&#34;:&#34;egress-access.logentry.istio-system&#34;,&#34;destination&#34;:&#34;edition.cnn.com&#34;,&#34;path&#34;:&#34;/health&#34;,&#34;reporterUID&#34;:&#34;kubernetes://istio-egressgateway-747b6764b8-44rrh.istio-system&#34;,&#34;responseCode&#34;:200,&#34;responseSize&#34;:2157009,&#34;sourcePrincipal&#34;:&#34;cluster.local/ns/default/sa/sleep&#34;} {&#34;level&#34;:&#34;error&#34;,&#34;time&#34;:&#34;2019-01-29T07:55:59.686082Z&#34;,&#34;instance&#34;:&#34;egress-access.logentry.istio-system&#34;,&#34;destination&#34;:&#34;edition.cnn.com&#34;,&#34;path&#34;:&#34;/politics&#34;,&#34;reporterUID&#34;:&#34;kubernetes://istio-egressgateway-747b6764b8-44rrh.istio-system&#34;,&#34;responseCode&#34;:404,&#34;responseSize&#34;:0,&#34;sourcePrincipal&#34;:&#34;cluster.local/ns/default/sa/sleep&#34;} </code></pre> <p>You still get info and error messages regarding accesses to <a href="https://edition.cnn.com/politics">edition.cnn.com/politics</a>, however this time the 
<code>responseCode</code> is <code>404</code>, as expected.</p></li> </ol> <p>While implementing access control using Istio routing worked for us in this simple case, it would not suffice for more complex cases. For example, the organization may want to allow access to <a href="https://edition.cnn.com/politics">edition.cnn.com/politics</a> under certain conditions, so more complex policy logic than just filtering by URL paths will be required. You may want to apply <a href="/v1.9/blog/2017/adapter-model/">Istio Mixer Adapters</a>, for example <a href="https://istio.io/v1.6/docs/tasks/policy-enforcement/denial-and-list/#attribute-based-whitelists-or-blacklists">white lists or black lists</a> of allowed/forbidden URL paths, respectively. <a href="https://istio.io/v1.6/docs/reference/config/policy-and-telemetry/istio.policy.v1beta1/">Policy Rules</a> allow specifying complex conditions, specified in a <a href="https://istio.io/v1.6/docs/reference/config/policy-and-telemetry/expression-language/">rich expression language</a>, which includes AND and OR logical operators. The rules can be reused for both logging and policy checks. More advanced users may want to apply <a href="/v1.9/docs/concepts/security/#authorization">Istio Role-Based Access Control</a>.</p> <p>An additional aspect is integration with remote access policy systems. If the organization in our use case operates some <a href="https://en.wikipedia.org/wiki/Identity_management">Identity and Access Management</a> system, you may want to configure Istio to use access policy information from such a system. 
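</p> <p>For instance, the <code>listchecker</code> adapter used later in this post can fetch its list from an external service instead of a static <code>overrides</code> list. The following is a hedged sketch only: the field names follow the Mixer <code>listchecker</code> adapter reference, while the provider endpoint is hypothetical:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ cat &lt;&lt;EOF | kubectl apply -f -
apiVersion: &#34;config.istio.io/v1alpha2&#34;
kind: listchecker
metadata:
  name: path-checker
  namespace: istio-system
spec:
  # fetch the list of allowed paths from an external (hypothetical) endpoint
  providerUrl: https://iam.example.com/allowed-paths
  refreshInterval: 60s
  blacklist: false
EOF
</code></pre> <p>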
You implement this integration by applying <a href="/v1.9/blog/2017/adapter-model/">Istio Mixer Adapters</a>.</p> <p>Cancel the access control by routing you used in this section and implement access control by Mixer policy checks in the next section.</p> <ol> <li><p>Replace the <code>VirtualService</code> for <em>edition.cnn.com</em> with your previous version from the <a href="/v1.9/docs/tasks/traffic-management/egress/egress-gateway-tls-origination/#perform-tls-origination-with-an-egress-gateway">Configure an Egress Gateway</a> example:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ cat &lt;&lt;EOF | kubectl apply -f - apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: direct-cnn-through-egress-gateway spec: hosts: - edition.cnn.com gateways: - istio-egressgateway - mesh http: - match: - gateways: - mesh port: 80 route: - destination: host: istio-egressgateway.istio-system.svc.cluster.local subset: cnn port: number: 443 weight: 100 - match: - gateways: - istio-egressgateway port: 443 route: - destination: host: edition.cnn.com port: number: 443 weight: 100 EOF </code></pre></li> <li><p>Send the previous three HTTP requests to <em>cnn.com</em>, this time you should get three <em>200 OK</em> responses as previously:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl exec -it $SOURCE_POD -c sleep -- sh -c &#39;curl -sL -o /dev/null -w &#34;%{http_code}\n&#34; http://edition.cnn.com/politics; curl -sL -o /dev/null -w &#34;%{http_code}\n&#34; http://edition.cnn.com/sport; curl -sL -o /dev/null -w &#34;%{http_code}\n&#34; http://edition.cnn.com/health&#39; 200 200 200 </code></pre></li> </ol> <div> <aside class="callout tip"> <div class="type"><svg class="large-icon"><use xlink:href="/v1.9/img/icons.svg#callout-tip"/></svg></div> <div class="content">You may need to wait several seconds for the update of the <code>VirtualService</code> to propagate to the egress 
gateway.</div> </aside> </div> <h3 id="access-control-by-mixer-policy-checks">Access control by Mixer policy checks</h3> <p>In this step you use a Mixer <a href="https://istio.io/v1.6/docs/reference/config/policy-and-telemetry/adapters/list/"><code>Listchecker</code> adapter</a>, its whitelist variety. You define a <code>listentry</code> with the URL path of the request and a <code>listchecker</code> to check the <code>listentry</code> using a static list of allowed URL paths, specified by the <code>overrides</code> field. For an external <a href="https://en.wikipedia.org/wiki/Identity_management">Identity and Access Management</a> system, use the <code>providerurl</code> field instead. The updated diagram of the instances, rules and handlers appears below. Note that you reuse the same policy rule, <code>handle-cnn-access</code> both for logging and for access policy checks.</p> <figure style="width:80%"> <div class="wrapper-with-intrinsic-ratio" style="padding-bottom:52.79420593027812%"> <a data-skipendnotes="true" href="/v1.9/blog/2018/egress-monitoring-access-control/egress-adapters-monitoring-policy.svg" title="Instances, rules and handlers for egress monitoring and access policies"> <img class="element-to-stretch" src="/v1.9/blog/2018/egress-monitoring-access-control/egress-adapters-monitoring-policy.svg" alt="Instances, rules and handlers for egress monitoring and access policies" /> </a> </div> <figcaption>Instances, rules and handlers for egress monitoring and access policies</figcaption> </figure> <ol> <li><p>Define <code>path-checker</code> and <code>request-path</code>:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ cat &lt;&lt;EOF | kubectl create -f - apiVersion: &#34;config.istio.io/v1alpha2&#34; kind: listchecker metadata: name: path-checker namespace: istio-system spec: overrides: [&#34;/health&#34;, &#34;/sport&#34;] # overrides provide a static list blacklist: false --- apiVersion: 
&#34;config.istio.io/v1alpha2&#34; kind: listentry metadata: name: request-path namespace: istio-system spec: value: request.path EOF </code></pre></li> <li><p>Modify the <code>handle-cnn-access</code> policy rule to send <code>request-path</code> instances to the <code>path-checker</code>:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ cat &lt;&lt;EOF | kubectl apply -f - # Rule handle egress access to cnn.com apiVersion: &#34;config.istio.io/v1alpha2&#34; kind: rule metadata: name: handle-cnn-access namespace: istio-system spec: match: request.host.endsWith(&#34;.cnn.com&#34;) &amp;&amp; context.reporter.uid.startsWith(&#34;kubernetes://istio-egressgateway&#34;) actions: - handler: egress-access-logger.stdio instances: - egress-access.logentry - handler: path-checker.listchecker instances: - request-path.listentry EOF </code></pre></li> <li><p>Perform your usual test by sending HTTP requests to <a href="https://edition.cnn.com/politics">edition.cnn.com/politics</a>, <a href="https://edition.cnn.com/sport">edition.cnn.com/sport</a> and <a href="https://edition.cnn.com/health">edition.cnn.com/health</a>. 
As expected, the request to <a href="https://edition.cnn.com/politics">edition.cnn.com/politics</a> returns <em>403</em> (Forbidden).</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl exec -it $SOURCE_POD -c sleep -- sh -c &#39;curl -sL -o /dev/null -w &#34;%{http_code}\n&#34; http://edition.cnn.com/politics; curl -sL -o /dev/null -w &#34;%{http_code}\n&#34; http://edition.cnn.com/sport; curl -sL -o /dev/null -w &#34;%{http_code}\n&#34; http://edition.cnn.com/health&#39; 403 200 200 </code></pre></li> </ol> <h3 id="access-control-by-mixer-policy-checks-part-2">Access control by Mixer policy checks, part 2</h3> <p>After the organization in our use case managed to configure logging and access control, it decided to extend its access policy by allowing the applications with a special <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/">Service Account</a> to access any topic of <em>cnn.com</em>, without being monitored. 
You&rsquo;ll see how this requirement can be configured in Istio.</p> <ol> <li><p>Start the <a href="https://github.com/istio/istio/tree/release-1.9/samples/sleep">sleep</a> sample with the <code>politics</code> service account.</p> <div><a data-skipendnotes='true' style='display:none' href='https://raw.githubusercontent.com/istio/istio/release-1.9/samples/sleep/sleep.yaml'>Zip</a><pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ sed &#39;s/: sleep/: politics/g&#39; @samples/sleep/sleep.yaml@ | kubectl create -f - serviceaccount &#34;politics&#34; created service &#34;politics&#34; created deployment &#34;politics&#34; created </code></pre></div></li> <li><p>Define the <code>SOURCE_POD_POLITICS</code> shell variable to hold the name of the source pod with the <code>politics</code> service account, for sending requests to external services.</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ export SOURCE_POD_POLITICS=$(kubectl get pod -l app=politics -o jsonpath={.items..metadata.name}) </code></pre></li> <li><p>Perform your usual test of sending three HTTP requests this time from <code>SOURCE_POD_POLITICS</code>. 
The request to <a href="https://edition.cnn.com/politics">edition.cnn.com/politics</a> returns <em>403</em>, since you did not configure the exception for the <em>politics</em> service account.</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl exec -it $SOURCE_POD_POLITICS -c politics -- sh -c &#39;curl -sL -o /dev/null -w &#34;%{http_code}\n&#34; http://edition.cnn.com/politics; curl -sL -o /dev/null -w &#34;%{http_code}\n&#34; http://edition.cnn.com/sport; curl -sL -o /dev/null -w &#34;%{http_code}\n&#34; http://edition.cnn.com/health&#39; 403 200 200 </code></pre></li> <li><p>Query the Mixer log and see that the information about the requests from the <em>politics</em> service account appears in the log:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl -n istio-system logs -l istio-mixer-type=telemetry -c mixer | grep egress-access | grep cnn | tail -4 {&#34;level&#34;:&#34;info&#34;,&#34;time&#34;:&#34;2019-01-29T08:04:42.559812Z&#34;,&#34;instance&#34;:&#34;egress-access.logentry.istio-system&#34;,&#34;destination&#34;:&#34;edition.cnn.com&#34;,&#34;path&#34;:&#34;/politics&#34;,&#34;reporterUID&#34;:&#34;kubernetes://istio-egressgateway-747b6764b8-44rrh.istio-system&#34;,&#34;responseCode&#34;:403,&#34;responseSize&#34;:84,&#34;sourcePrincipal&#34;:&#34;cluster.local/ns/default/sa/politics&#34;} {&#34;level&#34;:&#34;info&#34;,&#34;time&#34;:&#34;2019-01-29T08:04:42.568424Z&#34;,&#34;instance&#34;:&#34;egress-access.logentry.istio-system&#34;,&#34;destination&#34;:&#34;edition.cnn.com&#34;,&#34;path&#34;:&#34;/sport&#34;,&#34;reporterUID&#34;:&#34;kubernetes://istio-egressgateway-747b6764b8-44rrh.istio-system&#34;,&#34;responseCode&#34;:200,&#34;responseSize&#34;:2094561,&#34;sourcePrincipal&#34;:&#34;cluster.local/ns/default/sa/politics&#34;}
{&#34;level&#34;:&#34;error&#34;,&#34;time&#34;:&#34;2019-01-29T08:04:42.559812Z&#34;,&#34;instance&#34;:&#34;egress-access.logentry.istio-system&#34;,&#34;destination&#34;:&#34;edition.cnn.com&#34;,&#34;path&#34;:&#34;/politics&#34;,&#34;reporterUID&#34;:&#34;kubernetes://istio-egressgateway-747b6764b8-44rrh.istio-system&#34;,&#34;responseCode&#34;:403,&#34;responseSize&#34;:84,&#34;sourcePrincipal&#34;:&#34;cluster.local/ns/default/sa/politics&#34;} {&#34;level&#34;:&#34;info&#34;,&#34;time&#34;:&#34;2019-01-29T08:04:42.615641Z&#34;,&#34;instance&#34;:&#34;egress-access.logentry.istio-system&#34;,&#34;destination&#34;:&#34;edition.cnn.com&#34;,&#34;path&#34;:&#34;/health&#34;,&#34;reporterUID&#34;:&#34;kubernetes://istio-egressgateway-747b6764b8-44rrh.istio-system&#34;,&#34;responseCode&#34;:200,&#34;responseSize&#34;:2157009,&#34;sourcePrincipal&#34;:&#34;cluster.local/ns/default/sa/politics&#34;} </code></pre> <p>Note that <code>sourcePrincipal</code> is <code>cluster.local/ns/default/sa/politics</code>, which represents the <code>politics</code> service account in the <code>default</code> namespace.</p></li> <li><p>Redefine the <code>handle-cnn-access</code> and <code>handle-politics</code> policy rules to make the applications using the <em>politics</em> service account exempt from monitoring and policy enforcement.</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ cat &lt;&lt;EOF | kubectl apply -f - # Rule to handle access to *.cnn.com/politics apiVersion: &#34;config.istio.io/v1alpha2&#34; kind: rule metadata: name: handle-politics namespace: istio-system spec: match: request.host.endsWith(&#34;cnn.com&#34;) &amp;&amp; context.reporter.uid.startsWith(&#34;kubernetes://istio-egressgateway&#34;) &amp;&amp; request.path.startsWith(&#34;/politics&#34;) &amp;&amp; source.principal != &#34;cluster.local/ns/default/sa/politics&#34; actions: - handler: egress-error-logger.stdio instances: - egress-access.logentry --- # Rule handle egress access
to cnn.com apiVersion: &#34;config.istio.io/v1alpha2&#34; kind: rule metadata: name: handle-cnn-access namespace: istio-system spec: match: request.host.endsWith(&#34;.cnn.com&#34;) &amp;&amp; context.reporter.uid.startsWith(&#34;kubernetes://istio-egressgateway&#34;) &amp;&amp; source.principal != &#34;cluster.local/ns/default/sa/politics&#34; actions: - handler: egress-access-logger.stdio instances: - egress-access.logentry - handler: path-checker.listchecker instances: - request-path.listentry EOF </code></pre></li> <li><p>Perform your usual test from <code>SOURCE_POD</code>:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl exec -it $SOURCE_POD -c sleep -- sh -c &#39;curl -sL -o /dev/null -w &#34;%{http_code}\n&#34; http://edition.cnn.com/politics; curl -sL -o /dev/null -w &#34;%{http_code}\n&#34; http://edition.cnn.com/sport; curl -sL -o /dev/null -w &#34;%{http_code}\n&#34; http://edition.cnn.com/health&#39; 403 200 200 </code></pre> <p>Since <code>SOURCE_POD</code> does not have <code>politics</code> service account, access to <a href="https://edition.cnn.com/politics">edition.cnn.com/politics</a> is forbidden, as previously.</p></li> <li><p>Perform the previous test from <code>SOURCE_POD_POLITICS</code>:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl exec -it $SOURCE_POD_POLITICS -c politics -- sh -c &#39;curl -sL -o /dev/null -w &#34;%{http_code}\n&#34; http://edition.cnn.com/politics; curl -sL -o /dev/null -w &#34;%{http_code}\n&#34; http://edition.cnn.com/sport; curl -sL -o /dev/null -w &#34;%{http_code}\n&#34; http://edition.cnn.com/health&#39; 200 200 200 </code></pre> <p>Access to all the topics of <em>edition.cnn.com</em> is allowed.</p></li> <li><p>Examine the Mixer log and see that no more requests with <code>sourcePrincipal</code> equal <code>cluster.local/ns/default/sa/politics</code> appear in the log.</p> <pre><code class='language-bash' data-expandlinks='true' 
data-repo='istio' >$ kubectl -n istio-system logs -l istio-mixer-type=telemetry -c mixer | grep egress-access | grep cnn | tail -4 </code></pre></li> </ol> <h2 id="comparison-with-https-egress-traffic-control">Comparison with HTTPS egress traffic control</h2> <p>In this use case the applications use HTTP and Istio Egress Gateway performs TLS origination for them. Alternatively, the applications could originate TLS themselves by issuing HTTPS requests to <em>edition.cnn.com</em>. In this section we describe both approaches and their pros and cons.</p> <p>In the HTTP approach, the requests are sent unencrypted on the local host, intercepted by the Istio sidecar proxy and forwarded to the egress gateway. Since you configure Istio to use mutual TLS between the sidecar proxy and the egress gateway, the traffic leaves the pod encrypted. The egress gateway decrypts the traffic, inspects the URL path, the HTTP method and headers, reports telemetry and performs policy checks. If the request is not blocked by some policy check, the egress gateway performs TLS origination to the external destination (<em>cnn.com</em> in our case), so the request is encrypted again and sent encrypted to the external destination. The diagram below demonstrates the network flow of this approach. 
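</p> <p>For reference, the TLS origination at the gateway is configured in the prerequisite task by a <code>DestinationRule</code>. The sketch below shows only the relevant part and assumes the resource name used in that task; details may differ in your setup:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ cat &lt;&lt;EOF | kubectl apply -f -
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: originate-tls-for-edition-cnn-com
spec:
  host: edition.cnn.com
  trafficPolicy:
    portLevelSettings:
    - port:
        number: 443
      tls:
        mode: SIMPLE # the gateway originates TLS to edition.cnn.com
EOF
</code></pre> <p>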
The HTTP protocol inside the gateway designates the protocol as seen by the gateway after decryption.</p> <figure style="width:80%"> <div class="wrapper-with-intrinsic-ratio" style="padding-bottom:64.81718469808756%"> <a data-skipendnotes="true" href="/v1.9/blog/2018/egress-monitoring-access-control/http-to-gateway.svg" title="HTTP egress traffic through an egress gateway"> <img class="element-to-stretch" src="/v1.9/blog/2018/egress-monitoring-access-control/http-to-gateway.svg" alt="HTTP egress traffic through an egress gateway" /> </a> </div> <figcaption>HTTP egress traffic through an egress gateway</figcaption> </figure> <p>The drawback of this approach is that the requests are sent unencrypted inside the pod, which may be against security policies in some organizations. Also some SDKs have external service URLs hard-coded, including the protocol, so sending HTTP requests could be impossible. The advantage of this approach is the ability to inspect HTTP methods, headers and URL paths, and to apply policies based on them.</p> <p>In the HTTPS approach, the requests are encrypted end-to-end, from the application to the external destination. The diagram below demonstrates the network flow of this approach. The HTTPS protocol inside the gateway designates the protocol as seen by the gateway.</p> <figure style="width:80%"> <div class="wrapper-with-intrinsic-ratio" style="padding-bottom:64.81718469808756%"> <a data-skipendnotes="true" href="/v1.9/blog/2018/egress-monitoring-access-control/https-to-gateway.svg" title="HTTPS egress traffic through an egress gateway"> <img class="element-to-stretch" src="/v1.9/blog/2018/egress-monitoring-access-control/https-to-gateway.svg" alt="HTTPS egress traffic through an egress gateway" /> </a> </div> <figcaption>HTTPS egress traffic through an egress gateway</figcaption> </figure> <p>The end-to-end HTTPS is considered a better approach from the security point of view. 
However, since the traffic is encrypted, the Istio proxies and the egress gateway can only see the source and destination IPs and the <a href="https://en.wikipedia.org/wiki/Server_Name_Indication">SNI</a> of the destination. Since you configure Istio to use mutual TLS between the sidecar proxy and the egress gateway, the <a href="/v1.9/docs/concepts/security/#istio-identity">identity of the source</a> is also known. The gateway is unable to inspect the URL path, the HTTP method and the headers of the requests, so monitoring and policies based on HTTP information are not possible. In our use case, the organization would be able to allow access to <em>edition.cnn.com</em> and to specify which applications are allowed to access <em>edition.cnn.com</em>. However, it will not be possible to allow or block access to specific URL paths of <em>edition.cnn.com</em>. Neither blocking access to <a href="https://edition.cnn.com/politics">edition.cnn.com/politics</a> nor monitoring such access is possible with the HTTPS approach.</p> <p>We expect that each organization will weigh the pros and cons of the two approaches and choose the one most appropriate to its needs.</p> <h2 id="summary">Summary</h2> <p>In this blog post we showed how different monitoring and policy mechanisms of Istio can be applied to HTTP egress traffic. Monitoring can be implemented by configuring a logging adapter. Access policies can be implemented by configuring <code>VirtualServices</code> or by configuring various policy check adapters. We demonstrated a simple policy that allowed certain URL paths only. We also showed a more complex policy that extended the simple policy by making an exemption for applications with a certain service account.
Finally, we compared HTTP-with-TLS-origination egress traffic with HTTPS egress traffic, in terms of control possibilities by Istio.</p> <h2 id="cleanup">Cleanup</h2> <ol> <li><p>Perform the instructions in <a href="/v1.9/docs/tasks/traffic-management/egress/egress-gateway//#cleanup">Cleanup</a> section of the <a href="/v1.9/docs/tasks/traffic-management/egress/egress-gateway//">Configure an Egress Gateway</a> example.</p></li> <li><p>Delete the logging and policy checks configuration:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl delete logentry egress-access -n istio-system $ kubectl delete stdio egress-error-logger -n istio-system $ kubectl delete stdio egress-access-logger -n istio-system $ kubectl delete rule handle-politics -n istio-system $ kubectl delete rule handle-cnn-access -n istio-system $ kubectl delete -n istio-system listchecker path-checker $ kubectl delete -n istio-system listentry request-path </code></pre></li> <li><p>Delete the <em>politics</em> source pod:</p> <div><a data-skipendnotes='true' style='display:none' href='https://raw.githubusercontent.com/istio/istio/release-1.9/samples/sleep/sleep.yaml'>Zip</a><pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ sed &#39;s/: sleep/: politics/g&#39; @samples/sleep/sleep.yaml@ | kubectl delete -f - serviceaccount &#34;politics&#34; deleted service &#34;politics&#34; deleted deployment &#34;politics&#34; deleted </code></pre></div></li> </ol>Fri, 22 Jun 2018 00:00:00 +0000/v1.9/blog/2018/egress-monitoring-access-control/Vadim Eisenberg and Ronen Schaffer (IBM)/v1.9/blog/2018/egress-monitoring-access-control/egresstraffic-managementaccess-controlmonitoringIntroducing the Istio v1alpha3 routing API <p>Up until now, Istio has provided a simple API for traffic management using four configuration resources: <code>RouteRule</code>, <code>DestinationPolicy</code>, <code>EgressRule</code>, and (Kubernetes) <code>Ingress</code>. 
With this API, users have been able to easily manage the flow of traffic in an Istio service mesh. The API has allowed users to route requests to specific versions of services, inject delays and failures for resilience testing, add timeouts and circuit breakers, and more, all without changing the application code itself.</p> <p>While this functionality has proven to be a very compelling part of Istio, user feedback has also shown that this API does have some shortcomings, specifically when using it to manage very large applications containing thousands of services, and when working with protocols other than HTTP. Furthermore, the use of Kubernetes <code>Ingress</code> resources to configure external traffic has proven to be woefully insufficient for our needs.</p> <p>To address these, and other concerns, a new traffic management API, a.k.a. <code>v1alpha3</code>, is being introduced, which will completely replace the previous API going forward. Although the <code>v1alpha3</code> model is fundamentally the same, it is not backward compatible and will require manual conversion from the old API.</p> <p>To justify this disruption, the <code>v1alpha3</code> API has gone through a long and painstaking community review process that has hopefully resulted in a greatly improved API that will stand the test of time. In this article, we will introduce the new configuration model and attempt to explain some of the motivation and design principles that influenced it.</p> <h2 id="design-principles">Design principles</h2> <p>A few key design principles played a role in the routing model redesign:</p> <ul> <li>Explicitly model infrastructure as well as intent. For example, in addition to configuring an ingress gateway, the component (controller) implementing it can also be specified.</li> <li>The authoring model should be &ldquo;producer oriented&rdquo; and &ldquo;host centric&rdquo; as opposed to compositional. 
For example, all rules associated with a particular host are configured together, instead of individually.</li> <li>Clear separation of routing from post-routing behaviors.</li> </ul> <h2 id="configuration-resources-in-v1alpha3">Configuration resources in v1alpha3</h2> <p>A typical mesh will have one or more load balancers (we call them gateways) that terminate TLS from external networks and allow traffic into the mesh. Traffic then flows through internal services via sidecar proxies. It is also common for applications to consume external services (e.g., Google Maps API). These may be called directly or, in certain deployments, all traffic exiting the mesh may be forced through dedicated egress gateways. The following diagram depicts this mental model.</p> <figure style="width:80%"> <div class="wrapper-with-intrinsic-ratio" style="padding-bottom:35.204472660409245%"> <a data-skipendnotes="true" href="/v1.9/blog/2018/v1alpha3-routing/gateways.svg" title="Gateways in an Istio service mesh"> <img class="element-to-stretch" src="/v1.9/blog/2018/v1alpha3-routing/gateways.svg" alt="Role of gateways in the mesh" /> </a> </div> <figcaption>Gateways in an Istio service mesh</figcaption> </figure> <p>With the above setup in mind, <code>v1alpha3</code> introduces the following new configuration resources to control traffic routing into, within, and out of the mesh.</p> <ol> <li><code>Gateway</code></li> <li><code>VirtualService</code></li> <li><code>DestinationRule</code></li> <li><code>ServiceEntry</code></li> </ol> <p><code>VirtualService</code>, <code>DestinationRule</code>, and <code>ServiceEntry</code> replace <code>RouteRule</code>, <code>DestinationPolicy</code>, and <code>EgressRule</code> respectively.
The <code>Gateway</code> is a platform-independent abstraction to model the traffic flowing into dedicated middleboxes.</p> <p>The figure below depicts the flow of control across configuration resources.</p> <figure style="width:80%"> <div class="wrapper-with-intrinsic-ratio" style="padding-bottom:41.164966727369595%"> <a data-skipendnotes="true" href="/v1.9/blog/2018/v1alpha3-routing/virtualservices-destrules.svg" title="Relationship between different v1alpha3 elements"> <img class="element-to-stretch" src="/v1.9/blog/2018/v1alpha3-routing/virtualservices-destrules.svg" alt="Relationship between different v1alpha3 elements" /> </a> </div> <figcaption>Relationship between different v1alpha3 elements</figcaption> </figure> <h3 id="gateway"><code>Gateway</code></h3> <p>A <a href="/v1.9/docs/reference/config/networking/gateway/"><code>Gateway</code></a> configures a load balancer for HTTP/TCP traffic, regardless of where it will be running. Any number of gateways can exist within the mesh and multiple different gateway implementations can co-exist. In fact, a gateway configuration can be bound to a particular workload by specifying the set of workload (pod) labels as part of the configuration, allowing users to reuse off-the-shelf network appliances by writing a simple gateway controller.</p> <p>For ingress traffic management, you might ask: <em>Why not reuse Kubernetes Ingress APIs</em>? The Ingress APIs proved to be incapable of expressing Istio&rsquo;s routing needs. By trying to draw a common denominator across different HTTP proxies, the Ingress is only able to support the most basic HTTP routing and ends up pushing every other feature of modern proxies into non-portable annotations.</p> <p>Istio <code>Gateway</code> overcomes the <code>Ingress</code> shortcomings by separating the L4-L6 spec from L7. It only configures the L4-L6 functions (e.g., ports to expose, TLS configuration) that are uniformly implemented by all good L7 proxies.
Users can then use standard Istio rules to control HTTP requests as well as TCP traffic entering a <code>Gateway</code> by binding a <code>VirtualService</code> to it.</p> <p>For example, the following simple <code>Gateway</code> configures a load balancer to allow external https traffic for host <code>bookinfo.com</code> into the mesh:</p> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' >apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: bookinfo-gateway
spec:
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    hosts:
    - bookinfo.com
    tls:
      mode: SIMPLE
      serverCertificate: /tmp/tls.crt
      privateKey: /tmp/tls.key
</code></pre> <p>To configure the corresponding routes, a <code>VirtualService</code> (described in the <a href="#virtualservice">following section</a>) must be defined for the same host and bound to the <code>Gateway</code> using the <code>gateways</code> field in the configuration:</p> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' >apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: bookinfo
spec:
  hosts:
  - bookinfo.com
  gateways:
  - bookinfo-gateway # &lt;---- bind to gateway
  http:
  - match:
    - uri:
        prefix: /reviews
    route:
    ...
</code></pre> <p>The <code>Gateway</code> can be used to model an edge-proxy or a purely internal proxy as shown in the first figure.
Irrespective of the location, all gateways can be configured and controlled in the same way.</p> <h3 id="virtualservice"><code>VirtualService</code></h3> <p>Replacing route rules with something called &ldquo;virtual services&rdquo; might seem peculiar at first, but in reality it’s fundamentally a much better name for what is being configured, especially after redesigning the API to address the scalability issues with the previous model.</p> <p>In effect, what has changed is that instead of configuring routing using a set of individual configuration resources (rules) for a particular destination service, each containing a precedence field to control the order of evaluation, we now configure the (virtual) destination itself, with all of its rules in an ordered list within a corresponding <a href="/v1.9/docs/reference/config/networking/virtual-service/"><code>VirtualService</code></a> resource. For example, where previously we had two <code>RouteRule</code> resources for the <a href="/v1.9/docs/examples/bookinfo/">Bookinfo</a> application’s <code>reviews</code> service, like this:</p> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' >apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: reviews-default
spec:
  destination:
    name: reviews
  precedence: 1
  route:
  - labels:
      version: v1
---
apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: reviews-test-v2
spec:
  destination:
    name: reviews
  precedence: 2
  match:
    request:
      headers:
        cookie:
          regex: &#34;^(.*?;)?(user=jason)(;.*)?$&#34;
  route:
  - labels:
      version: v2
</code></pre> <p>In <code>v1alpha3</code>, we provide the same configuration in a single <code>VirtualService</code> resource:</p> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' >apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - match:
    - headers:
        cookie:
          regex: &#34;^(.*?;)?(user=jason)(;.*)?$&#34;
    route:
    - destination:
        host: reviews
        subset: v2
  - route:
    - destination:
        host: reviews
        subset: v1
</code></pre> <p>As you can see, both of the rules for the <code>reviews</code> service are consolidated in one place, which at first may or may not seem preferable. However, if you look closer at this new model, you’ll see there are fundamental differences that make <code>v1alpha3</code> vastly more functional.</p> <p>First of all, notice that the destination service for the <code>VirtualService</code> is specified using a <code>hosts</code> field (repeated field, in fact) and is then again specified in a <code>destination</code> field of each of the route specifications. This is a very important difference from the previous model.</p> <p>A <code>VirtualService</code> describes the mapping from one or more user-addressable destinations to the actual destination workloads inside the mesh. In our example they are the same; however, the user-addressed hosts can be any DNS names, with an optional wildcard prefix or CIDR prefix, that will be used to address the service. This can be particularly useful when turning monoliths into a composite service built out of distinct microservices, without requiring the consumers of the service to adapt to the transition.</p> <p>For example, the following rule allows users to address both the <code>reviews</code> and <code>ratings</code> services of the Bookinfo application as if they are parts of a bigger (virtual) service at <code>http://bookinfo.com/</code>:</p> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' >apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: bookinfo
spec:
  hosts:
  - bookinfo.com
  http:
  - match:
    - uri:
        prefix: /reviews
    route:
    - destination:
        host: reviews
  - match:
    - uri:
        prefix: /ratings
    route:
    - destination:
        host: ratings
...
</code></pre> <p>The hosts of a <code>VirtualService</code> do not actually have to be part of the service registry; they are simply virtual destinations.
This allows users to model traffic for virtual hosts that do not have routable entries inside the mesh. These hosts can be exposed outside the mesh by binding the <code>VirtualService</code> to a <code>Gateway</code> configuration for the same host (as described in the <a href="#gateway">previous section</a>).</p> <p>In addition to this fundamental restructuring, <code>VirtualService</code> includes several other important changes:</p> <ol> <li><p>Multiple match conditions can be expressed inside the <code>VirtualService</code> configuration, reducing the need for redundant rules.</p></li> <li><p>Each service version has a name (called a service subset). The set of pods/VMs belonging to a subset is defined in a <code>DestinationRule</code>, described in the following section.</p></li> <li><p><code>VirtualService</code> hosts can be specified using wildcard DNS prefixes to create a single rule for all matching services. For example, in Kubernetes, to apply the same rewrite rule for all services in the <code>foo</code> namespace, the <code>VirtualService</code> would use <code>*.foo.svc.cluster.local</code> as the host.</p></li> </ol> <h3 id="destinationrule"><code>DestinationRule</code></h3> <p>A <a href="/v1.9/docs/reference/config/networking/destination-rule/"><code>DestinationRule</code></a> configures the set of policies to be applied while forwarding traffic to a service. They are intended to be authored by service owners, describing the circuit breakers, load balancer settings, TLS settings, etc. <code>DestinationRule</code> is more or less the same as its predecessor, <code>DestinationPolicy</code>, with the following exceptions:</p> <ol> <li>The <code>host</code> of a <code>DestinationRule</code> can include wildcard prefixes, allowing a single rule to be specified for many actual services.</li> <li>A <code>DestinationRule</code> defines addressable <code>subsets</code> (i.e., named versions) of the corresponding destination host.
These subsets are used in <code>VirtualService</code> route specifications when sending traffic to specific versions of the service. Naming versions this way allows us to cleanly refer to them across different virtual services, to simplify the stats that Istio proxies emit, and to encode subsets in SNI headers.</li> </ol> <p>A <code>DestinationRule</code> that configures policies and subsets for the reviews service might look something like this:</p> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' >apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  trafficPolicy:
    loadBalancer:
      simple: RANDOM
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
    trafficPolicy:
      loadBalancer:
        simple: ROUND_ROBIN
  - name: v3
    labels:
      version: v3
</code></pre> <p>Notice that, unlike <code>DestinationPolicy</code>, multiple policies (e.g., default and v2-specific) are specified in a single <code>DestinationRule</code> configuration.</p> <h3 id="serviceentry"><code>ServiceEntry</code></h3> <p><a href="/v1.9/docs/reference/config/networking/service-entry/"><code>ServiceEntry</code></a> is used to add additional entries into the service registry that Istio maintains internally. It is most commonly used to model traffic to external dependencies of the mesh, such as APIs consumed from the web or traffic to services in legacy infrastructure.</p> <p>Everything you could previously configure using an <code>EgressRule</code> can just as easily be done with a <code>ServiceEntry</code>.
For example, access to a simple external service from inside the mesh can be enabled using a configuration something like this:</p> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' >apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: foo-ext
spec:
  hosts:
  - foo.com
  ports:
  - number: 80
    name: http
    protocol: HTTP
</code></pre> <p>That said, <code>ServiceEntry</code> has significantly more functionality than its predecessor. First of all, a <code>ServiceEntry</code> is not limited to external service configuration; it can be of two types: mesh-internal or mesh-external. Mesh-internal entries are like all other internal services but are used to explicitly add services to the mesh. They can be used to add services as part of expanding the service mesh to include unmanaged infrastructure (e.g., VMs added to a Kubernetes-based service mesh). Mesh-external entries represent services external to the mesh. For them, mutual TLS authentication is disabled and policy enforcement is performed on the client side, instead of on the server side as for internal service requests.</p> <p>Because a <code>ServiceEntry</code> configuration simply adds a destination to the internal service registry, it can be used in conjunction with a <code>VirtualService</code> and/or <code>DestinationRule</code>, just like any other service in the registry.
The following <code>DestinationRule</code>, for example, can be used to initiate mutual TLS connections for an external service:</p> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' >apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: foo-ext
spec:
  host: foo.com
  trafficPolicy:
    tls:
      mode: MUTUAL
      clientCertificate: /etc/certs/myclientcert.pem
      privateKey: /etc/certs/client_private_key.pem
      caCertificates: /etc/certs/rootcacerts.pem
</code></pre> <p>In addition to its expanded generality, <code>ServiceEntry</code> provides several other improvements over <code>EgressRule</code> including the following:</p> <ol> <li>A single <code>ServiceEntry</code> can configure multiple service endpoints, which previously would have required multiple <code>EgressRules</code>.</li> <li>The resolution mode for the endpoints is now configurable (<code>NONE</code>, <code>STATIC</code>, or <code>DNS</code>).</li> <li>Additionally, we are working on addressing another pain point: the need to access secure external services over plain text ports (e.g., <code>http://google.com:443</code>). This should be fixed in the coming weeks, allowing you to directly access <code>https://google.com</code> from your application.
Stay tuned for an Istio patch release (0.8.x) that addresses this limitation.</li> </ol> <h2 id="creating-and-deleting-v1alpha3-route-rules">Creating and deleting v1alpha3 route rules</h2> <p>Because all route rules for a given destination are now stored together as an ordered list in a single <code>VirtualService</code> resource, adding a second and subsequent rules for a particular destination is no longer done by creating a new (<code>RouteRule</code>) resource, but instead by updating the one-and-only <code>VirtualService</code> resource for the destination.</p> <p>Old routing rules:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl apply -f my-second-rule-for-destination-abc.yaml </code></pre> <p><code>v1alpha3</code> routing rules:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl apply -f my-updated-rules-for-destination-abc.yaml </code></pre> <p>Deleting route rules other than the last one for a particular destination is also done by updating the existing resource using <code>kubectl apply</code>.</p> <p>When adding or removing routes that refer to service versions, the <code>subsets</code> will need to be updated in the service&rsquo;s corresponding <code>DestinationRule</code>. As you might have guessed, this is also done using <code>kubectl apply</code>.</p> <h2 id="summary">Summary</h2> <p>The Istio <code>v1alpha3</code> routing API has significantly more functionality than its predecessor, but unfortunately is not backwards compatible, requiring a one-time manual conversion. The previous configuration resources, <code>RouteRule</code>, <code>DestinationPolicy</code>, and <code>EgressRule</code>, will not be supported from Istio 0.9 onwards. Kubernetes users can continue to use <code>Ingress</code> to configure their edge load balancers for basic routing.
However, advanced routing features (e.g., traffic split across two versions) will require use of <code>Gateway</code>, a significantly more functional and highly recommended <code>Ingress</code> replacement.</p> <h2 id="acknowledgments">Acknowledgments</h2> <p>Credit for the routing model redesign and implementation work goes to the following people (in alphabetical order):</p> <ul> <li>Frank Budinsky (IBM)</li> <li>Zack Butcher (Google)</li> <li>Greg Hanson (IBM)</li> <li>Costin Manolache (Google)</li> <li>Martin Ostrowski (Google)</li> <li>Shriram Rajagopalan (VMware)</li> <li>Louis Ryan (Google)</li> <li>Isaiah Snell-Feikema (IBM)</li> <li>Kuat Yessenov (Google)</li> </ul>Wed, 25 Apr 2018 00:00:00 +0000/v1.9/blog/2018/v1alpha3-routing/Frank Budinsky (IBM) and Shriram Rajagopalan (VMware)/v1.9/blog/2018/v1alpha3-routing/traffic-managementConfiguring Istio Ingress with AWS NLB <div> <aside class="callout tip"> <div class="type"><svg class="large-icon"><use xlink:href="/v1.9/img/icons.svg#callout-tip"/></svg></div> <div class="content">This post was updated on January 16, 2019 to include some usage warnings.</div> </aside> </div> <p>This post provides instructions to use and configure Istio ingress with an <a href="https://docs.aws.amazon.com/elasticloadbalancing/latest/network/introduction.html">AWS Network Load Balancer</a>.</p> <p>A Network Load Balancer (NLB) can be used instead of a Classic Load Balancer.
See the <a href="https://aws.amazon.com/elasticloadbalancing/details/#Product_comparisons">comparison</a> between the different AWS load balancer types for more details.</p> <h2 id="prerequisites">Prerequisites</h2> <p>The following instructions require a Kubernetes <strong>1.9.0 or newer</strong> cluster.</p> <div> <aside class="callout warning"> <div class="type"> <svg class="large-icon"><use xlink:href="/v1.9/img/icons.svg#callout-warning"/></svg> </div> <div class="content"><p>Usage of AWS <code>nlb</code> on Kubernetes is an Alpha feature and not recommended for production clusters.</p> <p>Usage of AWS <code>nlb</code> does not support the creation of two or more Kubernetes clusters running Istio in the same zone as a result of <a href="https://github.com/kubernetes/kubernetes/issues/69264">Kubernetes Bug #69264</a>.</p> </div> </aside> </div> <h2 id="iam-policy">IAM policy</h2> <p>You need to apply a policy to the master role in order to be able to provision network load balancers.</p> <ol> <li><p>In the AWS IAM console, click on policies and then click on create a new one:</p> <figure style="width:80%"> <div class="wrapper-with-intrinsic-ratio" style="padding-bottom:52.430278884462155%"> <a data-skipendnotes="true" href="/v1.9/blog/2018/aws-nlb/createpolicystart.png" title="Create a new policy"> <img class="element-to-stretch" src="/v1.9/blog/2018/aws-nlb/createpolicystart.png" alt="Create a new policy" /> </a> </div> <figcaption>Create a new policy</figcaption> </figure></li> <li><p>Select <code>json</code>:</p> <figure style="width:80%"> <div class="wrapper-with-intrinsic-ratio" style="padding-bottom:50.63492063492063%"> <a data-skipendnotes="true" href="/v1.9/blog/2018/aws-nlb/createpolicyjson.png" title="Select json"> <img class="element-to-stretch" src="/v1.9/blog/2018/aws-nlb/createpolicyjson.png" alt="Select json" /> </a> </div> <figcaption>Select json</figcaption> </figure></li> <li><p>Copy and paste the text below:</p> <pre><code
class='language-json' data-expandlinks='true' data-repo='istio' >{
  &#34;Version&#34;: &#34;2012-10-17&#34;,
  &#34;Statement&#34;: [
    {
      &#34;Sid&#34;: &#34;kopsK8sNLBMasterPermsRestrictive&#34;,
      &#34;Effect&#34;: &#34;Allow&#34;,
      &#34;Action&#34;: [
        &#34;ec2:DescribeVpcs&#34;,
        &#34;elasticloadbalancing:AddTags&#34;,
        &#34;elasticloadbalancing:CreateListener&#34;,
        &#34;elasticloadbalancing:CreateTargetGroup&#34;,
        &#34;elasticloadbalancing:DeleteListener&#34;,
        &#34;elasticloadbalancing:DeleteTargetGroup&#34;,
        &#34;elasticloadbalancing:DescribeListeners&#34;,
        &#34;elasticloadbalancing:DescribeLoadBalancerPolicies&#34;,
        &#34;elasticloadbalancing:DescribeTargetGroups&#34;,
        &#34;elasticloadbalancing:DescribeTargetHealth&#34;,
        &#34;elasticloadbalancing:ModifyListener&#34;,
        &#34;elasticloadbalancing:ModifyTargetGroup&#34;,
        &#34;elasticloadbalancing:RegisterTargets&#34;,
        &#34;elasticloadbalancing:SetLoadBalancerPoliciesOfListener&#34;
      ],
      &#34;Resource&#34;: [
        &#34;*&#34;
      ]
    },
    {
      &#34;Effect&#34;: &#34;Allow&#34;,
      &#34;Action&#34;: [
        &#34;ec2:DescribeVpcs&#34;,
        &#34;ec2:DescribeRegions&#34;
      ],
      &#34;Resource&#34;: &#34;*&#34;
    }
  ]
}
</code></pre></li> <li><p>Click review policy, fill in all the fields, and click create policy:</p> <figure style="width:80%"> <div class="wrapper-with-intrinsic-ratio" style="padding-bottom:60.08097165991902%"> <a data-skipendnotes="true" href="/v1.9/blog/2018/aws-nlb/create_policy.png" title="Validate policy"> <img class="element-to-stretch" src="/v1.9/blog/2018/aws-nlb/create_policy.png" alt="Validate policy" /> </a> </div> <figcaption>Validate policy</figcaption> </figure></li> <li><p>Click on roles, select your master role nodes, and click attach policy:</p> <figure style="width:100%"> <div class="wrapper-with-intrinsic-ratio" style="padding-bottom:30.328324986087924%"> <a data-skipendnotes="true" href="/v1.9/blog/2018/aws-nlb/roles_summary.png" title="Attach policy"> <img class="element-to-stretch" src="/v1.9/blog/2018/aws-nlb/roles_summary.png"
alt="Attach policy" /> </a> </div> <figcaption>Attach policy</figcaption> </figure></li> <li><p>Your policy is now attached to your master role.</p></li> </ol> <h2 id="generate-the-istio-manifest">Generate the Istio manifest</h2> <p>To use an AWS <code>nlb</code> load balancer, it is necessary to add an AWS-specific annotation to the Istio installation. These instructions explain how to add the annotation.</p> <p>Save this as the file <code>override.yaml</code>:</p> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' >gateways:
  istio-ingressgateway:
    serviceAnnotations:
      service.beta.kubernetes.io/aws-load-balancer-type: &#34;nlb&#34;
</code></pre> <p>Generate a manifest with Helm:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ helm template install/kubernetes/helm/istio --namespace istio -f override.yaml &gt; $HOME/istio.yaml
</code></pre>Fri, 20 Apr 2018 00:00:00 +0000/v1.9/blog/2018/aws-nlb/Julien SENON/v1.9/blog/2018/aws-nlb/ingresstraffic-managementawsIstio Soft Multi-Tenancy Support <p>Multi-tenancy is commonly used in many environments across many different applications, but the implementation details and functionality provided on a per tenant basis do not follow one model in all environments. The <a href="https://github.com/kubernetes/community/blob/master/wg-multitenancy/README.md">Kubernetes multi-tenancy working group</a> is working to define the multi-tenant use cases and functionality that should be available within Kubernetes.
However, from their work so far it is clear that only &ldquo;soft multi-tenancy&rdquo; is possible due to the inability to fully protect against malicious containers or workloads gaining access to other tenants&rsquo; pods or kernel resources.</p> <h2 id="soft-multi-tenancy">Soft multi-tenancy</h2> <p>For this blog, &ldquo;soft multi-tenancy&rdquo; is defined as having a single Kubernetes control plane with multiple Istio control planes and multiple meshes, one control plane and one mesh per tenant. The cluster administrator gets control and visibility across all the Istio control planes, while the tenant administrator only gets control of a specific Istio instance. Separation between the tenants is provided by Kubernetes namespaces and RBAC.</p> <p>One use case for this deployment model is a shared corporate infrastructure where malicious actions are not expected, but a clean separation of the tenants is still required.</p> <p>Potential future Istio multi-tenant deployment models are described at the bottom of this blog.</p> <div> <aside class="callout tip"> <div class="type"><svg class="large-icon"><use xlink:href="/v1.9/img/icons.svg#callout-tip"/></svg></div> <div class="content">This blog is a high-level description of how to deploy Istio in a limited multi-tenancy environment. The <a href="/v1.9/docs/">docs</a> section will be updated when official multi-tenancy support is provided.</div> </aside> </div> <h2 id="deployment">Deployment</h2> <h3 id="multiple-istio-control-planes">Multiple Istio control planes</h3> <p>Deploying multiple Istio control planes starts by replacing all <code>namespace</code> references in a manifest file with the desired namespace. Using <code>istio.yaml</code> as an example, if two tenant-level Istio control planes are required, the first can use the <code>istio.yaml</code> default name of <code>istio-system</code> and a second control plane can be created by generating a new yaml file with a different namespace.
As an example, the following command creates a yaml file with the Istio namespace of <code>istio-system1</code>.</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ cat istio.yaml | sed s/istio-system/istio-system1/g &gt; istio-system1.yaml
</code></pre> <p>The <code>istio.yaml</code> file contains the details of the Istio control plane deployment, including the pods that make up the control plane (Mixer, Pilot, Ingress, Galley, CA). Deploying the two Istio control plane yaml files:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl apply -f install/kubernetes/istio.yaml
$ kubectl apply -f install/kubernetes/istio-system1.yaml
</code></pre> <p>Results in two Istio control planes running in two namespaces.</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl get pods --all-namespaces
NAMESPACE       NAME                             READY     STATUS    RESTARTS   AGE
istio-system    istio-ca-ffbb75c6f-98w6x         1/1       Running   0          15d
istio-system    istio-ingress-68d65fc5c6-dnvfl   1/1       Running   0          15d
istio-system    istio-mixer-5b9f8dffb5-8875r     3/3       Running   0          15d
istio-system    istio-pilot-678fc976c8-b8tv6     2/2       Running   0          15d
istio-system1   istio-ca-5f496fdbcd-lqhlk        1/1       Running   0          15d
istio-system1   istio-ingress-68d65fc5c6-2vldg   1/1       Running   0          15d
istio-system1   istio-mixer-7d4f7b9968-66z44     3/3       Running   0          15d
istio-system1   istio-pilot-5bb6b7669c-779vb     2/2       Running   0          15d
</code></pre> <p>The Istio <a href="/v1.9/docs/setup/additional-setup/sidecar-injection/">sidecar</a> and <a href="/v1.9/docs/tasks/observability/">addons</a> manifests, if required, must also be deployed to match the configured <code>namespace</code> in use by the tenant&rsquo;s Istio control plane.</p> <p>The execution of these two yaml files is the responsibility of the cluster administrator, not the tenant level administrator.
Additional RBAC restrictions will also need to be configured and applied by the cluster administrator, limiting the tenant administrator to only the assigned namespace.</p> <h3 id="split-common-and-namespace-specific-resources">Split common and namespace specific resources</h3> <p>The manifest files in the Istio repositories create both common resources that would be used by all Istio control planes and resources that are replicated per control plane. Although it is a simple matter to deploy multiple control planes by replacing the <code>istio-system</code> namespace references as described above, a better approach is to split the manifests into a common part that is deployed once for all tenants and a tenant-specific part. For the <a href="https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/#customresourcedefinitions">Custom Resource Definitions</a>, the roles and the role bindings should be separated out from the provided Istio manifests. Additionally, the roles and role bindings in the provided Istio manifests are probably unsuitable for a multi-tenant environment and should be modified or augmented as described in the next section.</p> <h3 id="kubernetes-rbac-for-istio-control-plane-resources">Kubernetes RBAC for Istio control plane resources</h3> <p>To restrict a tenant administrator to a single Istio namespace, the cluster administrator would create a manifest containing, at a minimum, a <code>Role</code> and <code>RoleBinding</code> similar to the one below. In this example, a tenant administrator named <em>sales-admin</em> is limited to the namespace <code>istio-system1</code>.
A completed manifest would contain many more <code>apiGroups</code> under the <code>Role</code> providing resource access to the tenant administrator.</p> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' >kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: istio-system1
  name: ns-access-for-sales-admin-istio-system1
rules:
- apiGroups: [&#34;&#34;] # &#34;&#34; indicates the core API group
  resources: [&#34;*&#34;]
  verbs: [&#34;*&#34;]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: access-all-istio-system1
  namespace: istio-system1
subjects:
- kind: User
  name: sales-admin
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: ns-access-for-sales-admin-istio-system1
  apiGroup: rbac.authorization.k8s.io
</code></pre> <h3 id="watching-specific-namespaces-for-service-discovery">Watching specific namespaces for service discovery</h3> <p>In addition to creating RBAC rules limiting the tenant administrator&rsquo;s access to a specific Istio control plane, the Istio manifest must be updated to specify the application namespace that Pilot should watch for creation of its xDS cache. This is done by starting the Pilot component with the additional command line arguments <code>--appNamespace, ns-1</code>, where <em>ns-1</em> is the namespace that the tenant’s application will be deployed in.
An example snippet from the <code>istio-system1.yaml</code> file is shown below.</p> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' >apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: istio-pilot
  namespace: istio-system1
  annotations:
    sidecar.istio.io/inject: &#34;false&#34;
spec:
  replicas: 1
  template:
    metadata:
      labels:
        istio: pilot
    spec:
      serviceAccountName: istio-pilot-service-account
      containers:
      - name: discovery
        image: docker.io/&lt;user ID&gt;/pilot:&lt;tag&gt;
        imagePullPolicy: IfNotPresent
        args: [&#34;discovery&#34;, &#34;-v&#34;, &#34;2&#34;, &#34;--admission-service&#34;, &#34;istio-pilot&#34;, &#34;--appNamespace&#34;, &#34;ns-1&#34;]
        ports:
        - containerPort: 8080
        - containerPort: 443
</code></pre> <h3 id="deploying-the-tenant-application-in-a-namespace">Deploying the tenant application in a namespace</h3> <p>Now that the cluster administrator has created the tenant&rsquo;s namespace (e.g. <code>istio-system1</code>) and Pilot&rsquo;s service discovery has been configured to watch for a specific application namespace (e.g. <em>ns-1</em>), create the application manifests to deploy in that tenant&rsquo;s specific namespace. For example:</p> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' >apiVersion: v1
kind: Namespace
metadata:
  name: ns-1
</code></pre> <p>And add the namespace reference to each resource type included in the application&rsquo;s manifest file. For example:</p> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' >apiVersion: v1
kind: Service
metadata:
  name: details
  labels:
    app: details
  namespace: ns-1
</code></pre> <p>Although not shown, the application namespaces will also have RBAC settings limiting access to certain resources.
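</p> <p>As an illustration, a minimal sketch of such settings might grant a hypothetical tenant user <em>dev-admin</em> read access to pods and services in <em>ns-1</em>. The user name, role name, and verb list below are assumptions for the example; a real deployment would grant whatever access the tenant&rsquo;s workflow actually needs:</p> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' ># Hypothetical example: read-only access for a tenant user in the ns-1 application namespace
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: ns-1
  name: app-access-for-dev-admin-ns-1
rules:
- apiGroups: [&#34;&#34;] # &#34;&#34; indicates the core API group
  resources: [&#34;pods&#34;, &#34;services&#34;]
  verbs: [&#34;get&#34;, &#34;list&#34;, &#34;watch&#34;]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: app-access-ns-1
  namespace: ns-1
subjects:
- kind: User
  name: dev-admin
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: app-access-for-dev-admin-ns-1
  apiGroup: rbac.authorization.k8s.io
</code></pre> <p>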
These RBAC settings could be set by the cluster administrator and/or the tenant administrator.</p> <h3 id="using-kubectl-in-a-multi-tenant-environment">Using <code>kubectl</code> in a multi-tenant environment</h3> <p>When defining <a href="https://archive.istio.io/v0.7/docs/reference/config/istio.routing.v1alpha1/#RouteRule">route rules</a> or <a href="https://archive.istio.io/v0.7/docs/reference/config/istio.routing.v1alpha1/#DestinationPolicy">destination policies</a>, it is necessary to scope the <code>kubectl</code> command to the namespace the Istio control plane is running in, so that the resource is created in the proper namespace. Additionally, the rule itself must be scoped to the tenant&rsquo;s namespace so that it will be applied properly to that tenant&rsquo;s mesh. The <em>-i</em> option is used to create (or get or describe) the rule in the namespace that the Istio control plane is deployed in. The <em>-n</em> option will scope the rule to the tenant&rsquo;s mesh and should be set to the namespace that the tenant&rsquo;s app is deployed in.
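</p> <p>For example, a route rule scoped to the tenant&rsquo;s namespace in its own metadata might look like the following sketch. The rule name, destination service, and version label are illustrative assumptions, not taken from the deployment above:</p> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' ># Hypothetical route rule scoped to the tenant namespace via metadata.namespace
apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: details-default
  namespace: ns-1
spec:
  destination:
    name: details
  route:
  - labels:
      version: v1
</code></pre> <p>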
Note that the <em>-n</em> option can be skipped on the command line if the .yaml file for the resource scopes it properly instead.</p> <p>For example, the following command would be required to add a route rule to the <code>istio-system1</code> namespace:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl -i istio-system1 apply -n ns-1 -f route_rule_v2.yaml
</code></pre> <p>And can be displayed using the command:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl -i istio-system1 -n ns-1 get routerule
NAME                  KIND                                 NAMESPACE
details-Default       RouteRule.v1alpha2.config.istio.io   ns-1
productpage-default   RouteRule.v1alpha2.config.istio.io   ns-1
ratings-default       RouteRule.v1alpha2.config.istio.io   ns-1
reviews-default       RouteRule.v1alpha2.config.istio.io   ns-1
</code></pre> <p>See the <a href="/v1.9/blog/2018/soft-multitenancy/#multiple-istio-control-planes">Multiple Istio control planes</a> section of this document for more details on <code>namespace</code> requirements in a multi-tenant environment.</p> <h3 id="test-results">Test results</h3> <p>Following the instructions above, a cluster administrator can create an environment limiting, via RBAC and namespaces, what a tenant administrator can deploy.</p> <p>After deployment, accessing the Istio control plane pods assigned to a specific tenant administrator is permitted:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl get pods -n istio-system
NAME                                      READY     STATUS    RESTARTS   AGE
grafana-78d649479f-8pqk9                  1/1       Running   0          1d
istio-ca-ffbb75c6f-98w6x                  1/1       Running   0          1d
istio-ingress-68d65fc5c6-dnvfl            1/1       Running   0          1d
istio-mixer-5b9f8dffb5-8875r              3/3       Running   0          1d
istio-pilot-678fc976c8-b8tv6              2/2       Running   0          1d
istio-sidecar-injector-7587bd559d-5tgk6   1/1       Running   0          1d
prometheus-cf8456855-hdcq7                1/1       Running   0          1d
</code></pre> <p>However, accessing all the cluster&rsquo;s pods is not permitted:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl get pods --all-namespaces
Error from server (Forbidden): pods is forbidden: User &#34;dev-admin&#34; cannot list pods at the cluster scope
</code></pre> <p>And neither is accessing another tenant&rsquo;s namespace:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl get pods -n istio-system1
Error from server (Forbidden): pods is forbidden: User &#34;dev-admin&#34; cannot list pods in the namespace &#34;istio-system1&#34;
</code></pre> <p>The tenant administrator can deploy applications in the application namespace configured for that tenant. As an example, updating the <a href="/v1.9/docs/examples/bookinfo/">Bookinfo</a> manifests and then deploying under the tenant&rsquo;s application namespace of <em>ns-0</em>, listing the pods in use by this tenant&rsquo;s namespace is permitted:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl get pods -n ns-0
NAME                              READY     STATUS    RESTARTS   AGE
details-v1-64b86cd49-b7rkr        2/2       Running   0          1d
productpage-v1-84f77f8747-rf2mt   2/2       Running   0          1d
ratings-v1-5f46655b57-5b4c5       2/2       Running   0          1d
reviews-v1-ff6bdb95b-pm5lb        2/2       Running   0          1d
reviews-v2-5799558d68-b989t       2/2       Running   0          1d
reviews-v3-58ff7d665b-lw5j9       2/2       Running   0          1d
</code></pre> <p>But accessing another tenant&rsquo;s application namespace is not:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl get pods -n ns-1
Error from server (Forbidden): pods is forbidden: User &#34;dev-admin&#34; cannot list pods in the namespace &#34;ns-1&#34;
</code></pre> <p>If the <a href="/v1.9/docs/tasks/observability/">add-on tools</a>, for example <a href="/v1.9/docs/tasks/observability/metrics/querying-metrics/">Prometheus</a>, are deployed (also limited by an Istio <code>namespace</code>), the statistical results returned would represent only the traffic seen from that tenant&rsquo;s application namespace.</p> <h2
id="conclusion">Conclusion</h2> <p>The evaluation performed indicates Istio has sufficient capabilities and security to meet a small number of multi-tenant use cases. It also shows that Istio and Kubernetes <strong>cannot</strong> provide sufficient capabilities and security for other use cases, especially those use cases that require complete security and isolation between untrusted tenants. The improvements required to reach a stronger model of security and isolation require work in the container technology, e.g. Kubernetes, rather than improvements in Istio capabilities.</p> <h2 id="issues">Issues</h2> <ul> <li>The CA (Certificate Authority) and Mixer pod logs from one tenant&rsquo;s Istio control plane (e.g. <code>istio-system</code> namespace) contained &lsquo;info&rsquo; messages from a second tenant&rsquo;s Istio control plane (e.g. <code>istio-system1</code> namespace).</li> </ul> <h2 id="challenges-with-other-multi-tenancy-models">Challenges with other multi-tenancy models</h2> <p>Other multi-tenancy deployment models were considered:</p> <ol> <li><p>A single mesh with multiple applications, one for each tenant on the mesh. The cluster administrator gets control and visibility mesh-wide and across all applications, while the tenant administrator only gets control of a specific application.</p></li> <li><p>A single Istio control plane with multiple meshes, one mesh per tenant.
The cluster administrator gets control and visibility across the entire Istio control plane and all meshes, while the tenant administrator only gets control of a specific mesh.</p></li> <li><p>A single cloud environment (cluster controlled), but multiple Kubernetes control planes (tenant controlled).</p></li> </ol> <p>These options either can&rsquo;t be properly supported without code changes or don&rsquo;t fully address the use cases.</p> <p>Current Istio capabilities are poorly suited to support the first model, as Istio lacks sufficient RBAC capabilities to support cluster versus tenant operations. Additionally, having multiple tenants under one mesh is too insecure with the current mesh model and the way Istio drives configuration to the Envoy proxies.</p> <p>Regarding the second option, the current Istio paradigm assumes a single mesh per Istio control plane. The needed changes to support this model are substantial. They would require finer-grained scoping of resources and security domains based on namespaces, as well as additional Istio RBAC changes. This model will likely be addressed by future work, but is not currently possible.</p> <p>The third model doesn&rsquo;t satisfy most use cases, as most cluster administrators prefer a common Kubernetes control plane which they provide as a <a href="https://en.wikipedia.org/wiki/Platform_as_a_service">PaaS</a> to their tenants.</p> <h2 id="future-work">Future work</h2> <p>Allowing a single Istio control plane to control multiple meshes would be an obvious next feature. An additional improvement is to provide a single mesh that can host different tenants with some level of isolation and security between the tenants. This could be done by partitioning within a single control plane using the same logical notion of namespace as Kubernetes.
A <a href="https://docs.google.com/document/d/14Hb07gSrfVt5KX9qNi7FzzGwB_6WBpAnDpPG6QEEd9Q">document</a> has been started within the Istio community to define additional use cases and the Istio functionality required to support those use cases.</p> <h2 id="references">References</h2> <ul> <li>Video on Kubernetes multi-tenancy support, <a href="https://www.youtube.com/watch?v=ahwCkJGItkU">Multi-Tenancy Support &amp; Security Modeling with RBAC and Namespaces</a>, and the <a href="https://schd.ws/hosted_files/kccncna17/21/Multi-tenancy%20Support%20%26%20Security%20Modeling%20with%20RBAC%20and%20Namespaces.pdf">supporting slide deck</a>.</li> <li>KubeCon talk on security that discusses Kubernetes support for &ldquo;Cooperative soft multi-tenancy&rdquo;, <a href="https://www.youtube.com/watch?v=YRR-kZub0cA">Building for Trust: How to Secure Your Kubernetes</a>.</li> <li>Kubernetes documentation on <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/">RBAC</a> and <a href="https://kubernetes.io/docs/tasks/administer-cluster/namespaces-walkthrough/">namespaces</a>.</li> <li>KubeCon slide deck on <a href="https://schd.ws/hosted_files/kccncna17/a9/kubecon-multitenancy.pdf">Multi-tenancy Deep Dive</a>.</li> <li>Google document on <a href="https://docs.google.com/document/d/15w1_fesSUZHv-vwjiYa9vN_uyc--PySRoLKTuDhimjc">Multi-tenancy models for Kubernetes</a>. 
(Requires permission)</li> <li>Cloud Foundry WIP document, <a href="https://docs.google.com/document/d/14Hb07gSrfVt5KX9qNi7FzzGwB_6WBpAnDpPG6QEEd9Q">Multi-cloud and Multi-tenancy</a></li> <li><a href="https://docs.google.com/document/d/12F183NIRAwj2hprx-a-51ByLeNqbJxK16X06vwH5OWE">Istio Auto Multi-Tenancy 101</a></li> </ul>Thu, 19 Apr 2018 00:00:00 +0000/v1.9/blog/2018/soft-multitenancy/John Joyce and Rich Curran/v1.9/blog/2018/soft-multitenancy/tenancyTraffic Mirroring with Istio for Testing in Production<p>Trying to enumerate all the possible combinations of test cases for testing services in non-production/test environments can be daunting. In some cases, you&rsquo;ll find that all of the effort that goes into cataloging these use cases doesn&rsquo;t match up to real production use cases. Ideally, we could use live production use cases and traffic to help illuminate all of the feature areas of the service under test that we might miss in more contrived testing environments.</p> <p>Istio can help here. With the release of <a href="/v1.9/news/releases/0.x/announcing-0.5">Istio 0.5</a>, Istio can mirror traffic to help test your services. 
You can write route rules similar to the following to enable traffic mirroring:</p> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' >apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: mirror-traffic-to-httpbin-v2
spec:
  destination:
    name: httpbin
  precedence: 11
  route:
  - labels:
      version: v1
    weight: 100
  - labels:
      version: v2
    weight: 0
  mirror:
    name: httpbin
    labels:
      version: v2
</code></pre> <p>A few things to note here:</p> <ul> <li>When traffic gets mirrored to a different service, that happens outside the critical path of the request</li> <li>Responses to any mirrored traffic are ignored; traffic is mirrored as &ldquo;fire-and-forget&rdquo;</li> <li>You&rsquo;ll need to have the 0-weighted route to hint to Istio to create the proper Envoy cluster under the covers; <a href="https://github.com/istio/istio/issues/3270">this should be ironed out in future releases</a>.</li> </ul> <p>Learn more about mirroring by visiting the <a href="/v1.9/docs/tasks/traffic-management/mirroring/">Mirroring Task</a> and see a more <a href="https://dzone.com/articles/traffic-shadowing-with-istio-reducing-the-risk-of">comprehensive treatment of this scenario on my blog</a>.</p>Thu, 08 Feb 2018 00:00:00 +0000/v1.9/blog/2018/traffic-mirroring/Christian Posta/v1.9/blog/2018/traffic-mirroring/traffic-managementmirroringConsuming External TCP Services <div> <aside class="callout tip"> <div class="type"><svg class="large-icon"><use xlink:href="/v1.9/img/icons.svg#callout-tip"/></svg></div> <div class="content">This blog post was updated on July 23, 2018 to use the new <a href="/v1.9/blog/2018/v1alpha3-routing/">v1alpha3 traffic management API</a>.
If you need to use the old version, follow these <a href="https://archive.istio.io/v0.7/blog/2018/egress-tcp.html">docs</a>.</div> </aside> </div> <p>In my previous blog post, <a href="/v1.9/blog/2018/egress-https/">Consuming External Web Services</a>, I described how external services can be consumed by in-mesh Istio applications via HTTPS. In this post, I demonstrate consuming external services over TCP. You will use the <a href="/v1.9/docs/examples/bookinfo/">Istio Bookinfo sample application</a>, the version in which the book ratings data is persisted in a MySQL database. You deploy this database outside the cluster and configure the <em>ratings</em> microservice to use it. You define a <a href="/v1.9/docs/reference/config/networking/service-entry/">Service Entry</a> to allow the in-mesh applications to access the external database.</p> <h2 id="bookinfo-sample-application-with-external-ratings-database">Bookinfo sample application with external ratings database</h2> <p>First, you set up a MySQL database instance to hold book ratings data outside of your Kubernetes cluster. Then you modify the <a href="/v1.9/docs/examples/bookinfo/">Bookinfo sample application</a> to use your database.</p> <h3 id="setting-up-the-database-for-ratings-data">Setting up the database for ratings data</h3> <p>For this task you set up an instance of <a href="https://www.mysql.com">MySQL</a>. You can use any MySQL instance; I used <a href="https://www.ibm.com/cloud/compose/mysql">Compose for MySQL</a>. 
I used <code>mysqlsh</code> (<a href="https://dev.mysql.com/doc/mysql-shell/en/">MySQL Shell</a>) as a MySQL client to feed the ratings data.</p> <ol> <li><p>Set the <code>MYSQL_DB_HOST</code> and <code>MYSQL_DB_PORT</code> environment variables:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ export MYSQL_DB_HOST=&lt;your MySQL database host&gt;
$ export MYSQL_DB_PORT=&lt;your MySQL database port&gt;
</code></pre> <p>In case of a local MySQL database with the default port, the values are <code>localhost</code> and <code>3306</code>, respectively.</p></li> <li><p>To initialize the database, run the following command, entering the password when prompted. The command is performed with the credentials of the <code>admin</code> user, created by default by <a href="https://www.ibm.com/cloud/compose/mysql">Compose for MySQL</a>.</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ curl -s https://raw.githubusercontent.com/istio/istio/release-1.9/samples/bookinfo/src/mysql/mysqldb-init.sql | mysqlsh --sql --ssl-mode=REQUIRED -u admin -p --host $MYSQL_DB_HOST --port $MYSQL_DB_PORT
</code></pre> <p><em><strong>OR</strong></em></p> <p>When using the <code>mysql</code> client and a local MySQL database, run:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ curl -s https://raw.githubusercontent.com/istio/istio/release-1.9/samples/bookinfo/src/mysql/mysqldb-init.sql | mysql -u root -p --host $MYSQL_DB_HOST --port $MYSQL_DB_PORT
</code></pre></li> <li><p>Create a user with the name <code>bookinfo</code> and grant it <em>SELECT</em> privilege on the <code>test.ratings</code> table:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ mysqlsh --sql --ssl-mode=REQUIRED -u admin -p --host $MYSQL_DB_HOST --port $MYSQL_DB_PORT -e &#34;CREATE USER &#39;bookinfo&#39; IDENTIFIED BY &#39;&lt;password you choose&gt;&#39;; GRANT SELECT ON test.ratings to
&#39;bookinfo&#39;;&#34; </code></pre> <p><em><strong>OR</strong></em></p> <p>For <code>mysql</code> and the local database, the command is:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ mysql -u root -p --host $MYSQL_DB_HOST --port $MYSQL_DB_PORT -e &#34;CREATE USER &#39;bookinfo&#39; IDENTIFIED BY &#39;&lt;password you choose&gt;&#39;; GRANT SELECT ON test.ratings to &#39;bookinfo&#39;;&#34; </code></pre> <p>Here you apply the <a href="https://en.wikipedia.org/wiki/Principle_of_least_privilege">principle of least privilege</a>. This means that you do not use your <code>admin</code> user in the Bookinfo application. Instead, you create a special user for the Bookinfo application, <code>bookinfo</code>, with minimal privileges. In this case, the <em>bookinfo</em> user only has the <code>SELECT</code> privilege on a single table.</p> <p>After running the command to create the user, you may want to clean your bash history by checking the number of the last command and running <code>history -d &lt;the number of the command that created the user&gt;</code>. You don&rsquo;t want the password of the new user to be stored in the bash history. If you&rsquo;re using <code>mysql</code>, remove the last command from the <code>~/.mysql_history</code> file as well.
Read more about password protection of the newly created user in <a href="https://dev.mysql.com/doc/refman/5.5/en/create-user.html">MySQL documentation</a>.</p></li> <li><p>Inspect the created ratings to see that everything worked as expected:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ mysqlsh --sql --ssl-mode=REQUIRED -u bookinfo -p --host $MYSQL_DB_HOST --port $MYSQL_DB_PORT -e &#34;select * from test.ratings;&#34;
Enter password:
+----------+--------+
| ReviewID | Rating |
+----------+--------+
|        1 |      5 |
|        2 |      4 |
+----------+--------+
</code></pre> <p><em><strong>OR</strong></em></p> <p>For <code>mysql</code> and the local database:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ mysql -u bookinfo -p --host $MYSQL_DB_HOST --port $MYSQL_DB_PORT -e &#34;select * from test.ratings;&#34;
Enter password:
+----------+--------+
| ReviewID | Rating |
+----------+--------+
|        1 |      5 |
|        2 |      4 |
+----------+--------+
</code></pre></li> <li><p>Set the ratings temporarily to <code>1</code> to provide a visual clue when our database is used by the Bookinfo <em>ratings</em> service:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ mysqlsh --sql --ssl-mode=REQUIRED -u admin -p --host $MYSQL_DB_HOST --port $MYSQL_DB_PORT -e &#34;update test.ratings set rating=1; select * from test.ratings;&#34;
Enter password:
Rows matched: 2  Changed: 2  Warnings: 0
+----------+--------+
| ReviewID | Rating |
+----------+--------+
|        1 |      1 |
|        2 |      1 |
+----------+--------+
</code></pre> <p><em><strong>OR</strong></em></p> <p>For <code>mysql</code> and the local database:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ mysql -u root -p --host $MYSQL_DB_HOST --port $MYSQL_DB_PORT -e &#34;update test.ratings set rating=1; select * from test.ratings;&#34;
Enter password:
+----------+--------+
| ReviewID | Rating |
+----------+--------+
|        1 |      1 |
|        2 |      1 |
+----------+--------+ </code></pre> <p>You used the <code>admin</code> user (and <code>root</code> for the local database) in the last command since the <code>bookinfo</code> user does not have the <code>UPDATE</code> privilege on the <code>test.ratings</code> table.</p></li> </ol> <p>Now you are ready to deploy a version of the Bookinfo application that will use your database.</p> <h3 id="initial-setting-of-bookinfo-application">Initial setting of Bookinfo application</h3> <p>To demonstrate the scenario of using an external database, you start with a Kubernetes cluster with <a href="/v1.9/docs/setup/getting-started/">Istio installed</a>. Then you deploy the <a href="/v1.9/docs/examples/bookinfo/">Istio Bookinfo sample application</a>, <a href="/v1.9/docs/examples/bookinfo/#apply-default-destination-rules">apply the default destination rules</a>, and <a href="/v1.9/docs/tasks/traffic-management/egress/egress-control/#change-to-the-blocking-by-default-policy">change Istio to the blocking-egress-by-default policy</a>.</p> <p>This application uses the <code>ratings</code> microservice to fetch book ratings, a number between 1 and 5. The ratings are displayed as stars for each review. There are several versions of the <code>ratings</code> microservice. 
Some use <a href="https://www.mongodb.com">MongoDB</a>, others use <a href="https://www.mysql.com">MySQL</a> as their database.</p> <p>The example commands in this blog post work with Istio 0.8+, with or without <a href="/v1.9/docs/concepts/security/#mutual-tls-authentication">mutual TLS</a> enabled.</p> <p>As a reminder, here is the end-to-end architecture of the application from the <a href="/v1.9/docs/examples/bookinfo/">Bookinfo sample application</a>.</p> <figure style="width:80%"> <div class="wrapper-with-intrinsic-ratio" style="padding-bottom:59.086918235567985%"> <a data-skipendnotes="true" href="/v1.9/docs/examples/bookinfo/withistio.svg" title="The original Bookinfo application"> <img class="element-to-stretch" src="/v1.9/docs/examples/bookinfo/withistio.svg" alt="The original Bookinfo application" /> </a> </div> <figcaption>The original Bookinfo application</figcaption> </figure> <h3 id="use-the-database-for-ratings-data-in-bookinfo-application">Use the database for ratings data in Bookinfo application</h3> <ol> <li><p>Modify the deployment spec of a version of the <em>ratings</em> microservice that uses a MySQL database, to use your database instance. The spec is in <a href="https://github.com/istio/istio/blob/release-1.9/samples/bookinfo/platform/kube/bookinfo-ratings-v2-mysql.yaml"><code>samples/bookinfo/platform/kube/bookinfo-ratings-v2-mysql.yaml</code></a> of an Istio release archive. Edit the following lines:</p> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' >- name: MYSQL_DB_HOST
  value: mysqldb
- name: MYSQL_DB_PORT
  value: &#34;3306&#34;
- name: MYSQL_DB_USER
  value: root
- name: MYSQL_DB_PASSWORD
  value: password
</code></pre> <p>Replace the values in the snippet above, specifying the database host, port, user, and password.
Note that the correct way to work with passwords in container&rsquo;s environment variables in Kubernetes is <a href="https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets-as-environment-variables">to use secrets</a>. For this example task only, you may want to write the password directly in the deployment spec. <strong>Do not do it</strong> in a real environment! I also assume everyone realizes that <code>&quot;password&quot;</code> should not be used as a password&hellip;</p></li> <li><p>Apply the modified spec to deploy the version of the <em>ratings</em> microservice, <em>v2-mysql</em>, that will use your database.</p> <div><a data-skipendnotes='true' style='display:none' href='https://raw.githubusercontent.com/istio/istio/release-1.9/samples/bookinfo/platform/kube/bookinfo-ratings-v2-mysql.yaml'>Zip</a><pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl apply -f @samples/bookinfo/platform/kube/bookinfo-ratings-v2-mysql.yaml@ deployment &#34;ratings-v2-mysql&#34; created </code></pre></div></li> <li><p>Route all the traffic destined to the <em>reviews</em> service to its <em>v3</em> version. You do this to ensure that the <em>reviews</em> service always calls the <em>ratings</em> service. In addition, route all the traffic destined to the <em>ratings</em> service to <em>ratings v2-mysql</em> that uses your database.</p> <p>Specify the routing for both services above by adding two <a href="/v1.9/docs/reference/config/networking/virtual-service/">virtual services</a>. These virtual services are specified in <code>samples/bookinfo/networking/virtual-service-ratings-mysql.yaml</code> of an Istio release archive. 
<strong><em>Important:</em></strong> make sure you <a href="/v1.9/docs/examples/bookinfo/#apply-default-destination-rules">applied the default destination rules</a> before running the following command.</p> <div><a data-skipendnotes='true' style='display:none' href='https://raw.githubusercontent.com/istio/istio/release-1.9/samples/bookinfo/networking/virtual-service-ratings-mysql.yaml'>Zip</a><pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl apply -f @samples/bookinfo/networking/virtual-service-ratings-mysql.yaml@ </code></pre></div></li> </ol> <p>The updated architecture appears below. Note that the blue arrows inside the mesh mark the traffic configured according to the virtual services we added. According to the virtual services, the traffic is sent to <em>reviews v3</em> and <em>ratings v2-mysql</em>.</p> <figure style="width:80%"> <div class="wrapper-with-intrinsic-ratio" style="padding-bottom:59.314858206480224%"> <a data-skipendnotes="true" href="/v1.9/blog/2018/egress-tcp/bookinfo-ratings-v2-mysql-external.svg" title="The Bookinfo application with ratings v2-mysql and an external MySQL database"> <img class="element-to-stretch" src="/v1.9/blog/2018/egress-tcp/bookinfo-ratings-v2-mysql-external.svg" alt="The Bookinfo application with ratings v2-mysql and an external MySQL database" /> </a> </div> <figcaption>The Bookinfo application with ratings v2-mysql and an external MySQL database</figcaption> </figure> <p>Note that the MySQL database is outside the Istio service mesh, or more precisely outside the Kubernetes cluster. 
The boundary of the service mesh is marked by a dashed line.</p> <h3 id="access-the-webpage">Access the webpage</h3> <p>Access the webpage of the application, after <a href="/v1.9/docs/examples/bookinfo/#determine-the-ingress-ip-and-port">determining the ingress IP and port</a>.</p> <p>You have a problem&hellip; Instead of the rating stars, the message <em>&ldquo;Ratings service is currently unavailable&rdquo;</em> is currently displayed below each review:</p> <figure style="width:80%"> <div class="wrapper-with-intrinsic-ratio" style="padding-bottom:36.18705035971223%"> <a data-skipendnotes="true" href="/v1.9/blog/2018/egress-tcp/errorFetchingBookRating.png" title="The Ratings service error messages"> <img class="element-to-stretch" src="/v1.9/blog/2018/egress-tcp/errorFetchingBookRating.png" alt="The Ratings service error messages" /> </a> </div> <figcaption>The Ratings service error messages</figcaption> </figure> <p>As in <a href="/v1.9/blog/2018/egress-https/">Consuming External Web Services</a>, you experience <strong>graceful service degradation</strong>, which is good. The application did not crash due to the error in the <em>ratings</em> microservice. The webpage of the application correctly displayed the book information, the details, and the reviews, just without the rating stars.</p> <p>You have the same problem as in <a href="/v1.9/blog/2018/egress-https/">Consuming External Web Services</a>, namely all the traffic outside the Kubernetes cluster, both TCP and HTTP, is blocked by default by the sidecar proxies. To enable such traffic for TCP, a mesh-external service entry for TCP must be defined.</p> <h3 id="mesh-external-service-entry-for-an-external-mysql-instance">Mesh-external service entry for an external MySQL instance</h3> <p>TCP mesh-external service entries come to our rescue.</p> <ol> <li><p>Get the IP address of your MySQL database instance. 
As an option, you can use the <a href="https://linux.die.net/man/1/host">host</a> command:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ export MYSQL_DB_IP=$(host $MYSQL_DB_HOST | grep &#34; has address &#34; | cut -d&#34; &#34; -f4)
</code></pre> <p>For a local database, set <code>MYSQL_DB_IP</code> to contain the IP of your machine, accessible from your cluster.</p></li> <li><p>Define a TCP mesh-external service entry:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl apply -f - &lt;&lt;EOF
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: mysql-external
spec:
  hosts:
  - $MYSQL_DB_HOST
  addresses:
  - $MYSQL_DB_IP/32
  ports:
  - name: tcp
    number: $MYSQL_DB_PORT
    protocol: tcp
  location: MESH_EXTERNAL
EOF
</code></pre></li> <li><p>Review the service entry you just created and check that it contains the correct values:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl get serviceentry mysql-external -o yaml
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
...
</code></pre></li> </ol> <p>Note that for a TCP service entry, you specify <code>tcp</code> as the protocol of a port of the entry. Also note that you have to specify the IP of the external service in the list of addresses, as a <a href="https://tools.ietf.org/html/rfc2317">CIDR</a> block with suffix <code>32</code>.</p> <p>I will talk more about TCP service entries <a href="#service-entries-for-tcp-traffic">below</a>. For now, verify that the service entry we added fixed the problem. Access the webpage and see if the stars are back.</p> <p>It worked!
Accessing the web page of the application displays the ratings without error:</p> <figure style="width:80%"> <div class="wrapper-with-intrinsic-ratio" style="padding-bottom:36.69064748201439%"> <a data-skipendnotes="true" href="/v1.9/blog/2018/egress-tcp/externalMySQLRatings.png" title="Book Ratings Displayed Correctly"> <img class="element-to-stretch" src="/v1.9/blog/2018/egress-tcp/externalMySQLRatings.png" alt="Book Ratings Displayed Correctly" /> </a> </div> <figcaption>Book Ratings Displayed Correctly</figcaption> </figure> <p>Note that you see a one-star rating for both displayed reviews, as expected. You changed the ratings to be one star to provide us with a visual clue that our external database is indeed being used.</p> <p>As with service entries for HTTP/HTTPS, you can dynamically create and delete service entries for TCP using <code>kubectl</code>.</p> <h2 id="motivation-for-egress-tcp-traffic-control">Motivation for egress TCP traffic control</h2> <p>Some in-mesh Istio applications must access external services, for example legacy systems. In many cases, the access is not performed over HTTP or HTTPS protocols. Other TCP protocols are used, such as database-specific protocols like <a href="https://docs.mongodb.com/manual/reference/mongodb-wire-protocol/">MongoDB Wire Protocol</a> and <a href="https://dev.mysql.com/doc/internals/en/client-server-protocol.html">MySQL Client/Server Protocol</a> to communicate with external databases.</p> <p>Next let me provide more details about the service entries for TCP traffic.</p> <h2 id="service-entries-for-tcp-traffic">Service entries for TCP traffic</h2> <p>The service entries for enabling TCP traffic to a specific port must specify <code>TCP</code> as the protocol of the port.
Additionally, for the <a href="https://docs.mongodb.com/manual/reference/mongodb-wire-protocol/">MongoDB Wire Protocol</a>, the protocol can be specified as <code>MONGO</code>, instead of <code>TCP</code>.</p> <p>For the <code>addresses</code> field of the entry, a block of IPs in <a href="https://tools.ietf.org/html/rfc2317">CIDR</a> notation must be used. Note that the <code>hosts</code> field is ignored for TCP service entries.</p> <p>To enable TCP traffic to an external service by its hostname, all the IPs of the hostname must be specified. Each IP must be specified by a CIDR block.</p> <p>Note that the full set of IPs of an external service is not always known. To enable egress TCP traffic, only the IPs actually used by the applications must be specified.</p> <p>Also note that the IPs of an external service are not always static, for example in the case of <a href="https://en.wikipedia.org/wiki/Content_delivery_network">CDNs</a>. Sometimes the IPs are mostly static, but can change from time to time, for example due to infrastructure changes. In these cases, if the range of the possible IPs is known, you should specify the range by CIDR blocks. If the range of the possible IPs is not known, service entries for TCP cannot be used and <a href="/v1.9/docs/tasks/traffic-management/egress/egress-control/#direct-access-to-external-services">the external services must be called directly</a>, bypassing the sidecar proxies.</p> <h2 id="relation-to-virtual-machines-support">Relation to virtual machines support</h2> <p>Note that the scenario described in this post is different from the <a href="/v1.9/docs/examples/virtual-machines/">Bookinfo with Virtual Machines</a> example. In that scenario, a MySQL instance runs on an external (outside the cluster) machine (bare metal or a VM), integrated with the Istio service mesh. The MySQL service becomes a first-class citizen of the mesh with all the beneficial features of Istio applicable.
Among other things, the service becomes addressable by a local cluster domain name, for example by <code>mysqldb.vm.svc.cluster.local</code>, and communication with it can be secured by <a href="/v1.9/docs/concepts/security/#mutual-tls-authentication">mutual TLS authentication</a>. There is no need to create a service entry to access this service; however, the service must be registered with Istio. To enable such integration, Istio components (<em>Envoy proxy</em>, <em>node-agent</em>, <em>istio-agent</em>) must be installed on the machine and the Istio control plane (<em>Pilot</em>, <em>Mixer</em>, <em>Citadel</em>) must be accessible from it. See the <a href="/v1.9/docs/examples/virtual-machines/">Istio VM-related</a> tasks for more details.</p> <p>In our case, the MySQL instance can run on any machine or can be provisioned as a service by a cloud provider. There is no requirement to integrate the machine with Istio. The Istio control plane does not have to be accessible from the machine. In the case of MySQL as a service, the machine on which MySQL runs may not be accessible, and installing the required components on it may be impossible. In our case, the MySQL instance is addressable by its global domain name, which could be beneficial if the consuming applications expect to use that domain name.
This is especially relevant when that expected domain name cannot be changed in the deployment configuration of the consuming applications.</p> <h2 id="cleanup">Cleanup</h2> <ol> <li><p>Drop the <code>test</code> database and the <code>bookinfo</code> user:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ mysqlsh --sql --ssl-mode=REQUIRED -u admin -p --host $MYSQL_DB_HOST --port $MYSQL_DB_PORT -e &#34;drop database test; drop user bookinfo;&#34;
</code></pre> <p><em><strong>OR</strong></em></p> <p>For <code>mysql</code> and the local database:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ mysql -u root -p --host $MYSQL_DB_HOST --port $MYSQL_DB_PORT -e &#34;drop database test; drop user bookinfo;&#34;
</code></pre></li> <li><p>Remove the virtual services:</p> <div><a data-skipendnotes='true' style='display:none' href='https://raw.githubusercontent.com/istio/istio/release-1.9/samples/bookinfo/networking/virtual-service-ratings-mysql.yaml'>Zip</a><pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl delete -f @samples/bookinfo/networking/virtual-service-ratings-mysql.yaml@
Deleted config: virtual-service/default/reviews
Deleted config: virtual-service/default/ratings
</code></pre></div></li> <li><p>Undeploy <em>ratings v2-mysql</em>:</p> <div><a data-skipendnotes='true' style='display:none' href='https://raw.githubusercontent.com/istio/istio/release-1.9/samples/bookinfo/platform/kube/bookinfo-ratings-v2-mysql.yaml'>Zip</a><pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl delete -f @samples/bookinfo/platform/kube/bookinfo-ratings-v2-mysql.yaml@
deployment &#34;ratings-v2-mysql&#34; deleted
</code></pre></div></li> <li><p>Delete the service entry:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl delete serviceentry mysql-external -n default
Deleted config: serviceentry mysql-external
</code></pre></li> </ol> <h2 id="conclusion">Conclusion</h2> <p>In this blog post, I demonstrated how the microservices in an Istio service mesh can consume external services via TCP. By default, Istio blocks all the traffic, TCP and HTTP, to the hosts outside the cluster. To enable such traffic for TCP, TCP mesh-external service entries must be created for the service mesh.</p>Tue, 06 Feb 2018 00:00:00 +0000/v1.9/blog/2018/egress-tcp/Vadim Eisenberg/v1.9/blog/2018/egress-tcp/traffic-managementegresstcpConsuming External Web Services <p>In many cases, not all the parts of a microservices-based application reside in a <em>service mesh</em>. Sometimes, the microservices-based applications use functionality provided by legacy systems that reside outside the mesh. You may want to migrate these systems to the service mesh gradually. Until these systems are migrated, they must be accessed by the applications inside the mesh. In other cases, the applications use web services provided by third parties.</p> <p>In this blog post, I modify the <a href="/v1.9/docs/examples/bookinfo/">Istio Bookinfo Sample Application</a> to fetch book details from an external web service (<a href="https://developers.google.com/books/docs/v1/getting_started">Google Books APIs</a>). I show how to enable egress HTTPS traffic in Istio by using <em>mesh-external service entries</em>. I provide two options for egress HTTPS traffic and describe the pros and cons of each of the options.</p> <h2 id="initial-setting">Initial setting</h2> <p>To demonstrate the scenario of consuming an external web service, I start with a Kubernetes cluster with <a href="/v1.9/docs/setup/getting-started/">Istio installed</a>. Then I deploy <a href="/v1.9/docs/examples/bookinfo/">Istio Bookinfo Sample Application</a>. This application uses the <em>details</em> microservice to fetch book details, such as the number of pages and the publisher. 
The original <em>details</em> microservice provides the book details without consulting any external service.</p> <p>The example commands in this blog post work with Istio 1.0+, with or without <a href="/v1.9/docs/concepts/security/#mutual-tls-authentication">mutual TLS</a> enabled. The Bookinfo configuration files reside in the <code>samples/bookinfo</code> directory of the Istio release archive.</p> <p>Here is a copy of the end-to-end architecture of the application from the original <a href="/v1.9/docs/examples/bookinfo/">Bookinfo sample application</a>.</p> <figure style="width:80%"> <div class="wrapper-with-intrinsic-ratio" style="padding-bottom:59.086918235567985%"> <a data-skipendnotes="true" href="/v1.9/docs/examples/bookinfo/withistio.svg" title="The Original Bookinfo Application"> <img class="element-to-stretch" src="/v1.9/docs/examples/bookinfo/withistio.svg" alt="The Original Bookinfo Application" /> </a> </div> <figcaption>The Original Bookinfo Application</figcaption> </figure> <p>Perform the steps in the <a href="/v1.9/docs/examples/bookinfo/#deploying-the-application">Deploying the application</a>, <a href="/v1.9/docs/examples/bookinfo/#confirm-the-app-is-accessible-from-outside-the-cluster">Confirm the app is running</a>, <a href="/v1.9/docs/examples/bookinfo/#apply-default-destination-rules">Apply default destination rules</a> sections, and <a href="/v1.9/docs/tasks/traffic-management/egress/egress-control/#change-to-the-blocking-by-default-policy">change Istio to the blocking-egress-by-default policy</a>.</p> <h2 id="bookinfo-with-https-access-to-a-google-books-web-service">Bookinfo with HTTPS access to a Google Books web service</h2> <p>Deploy a new version of the <em>details</em> microservice, <em>v2</em>, that fetches the book details from <a href="https://developers.google.com/books/docs/v1/getting_started">Google Books APIs</a>. 
Run the following command; it sets the <code>DO_NOT_ENCRYPT</code> environment variable of the service&rsquo;s container to <code>false</code>. This setting will instruct the deployed service to use HTTPS (instead of HTTP) to access the external service.</p> <div><a data-skipendnotes='true' style='display:none' href='https://raw.githubusercontent.com/istio/istio/release-1.9/samples/bookinfo/platform/kube/bookinfo-details-v2.yaml'>Zip</a><pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl apply -f @samples/bookinfo/platform/kube/bookinfo-details-v2.yaml@ --dry-run -o yaml | kubectl set env --local -f - &#39;DO_NOT_ENCRYPT=false&#39; -o yaml | kubectl apply -f - </code></pre></div> <p>The updated architecture of the application now looks as follows:</p> <figure style="width:80%"> <div class="wrapper-with-intrinsic-ratio" style="padding-bottom:65.1654485092242%"> <a data-skipendnotes="true" href="/v1.9/blog/2018/egress-https/bookinfo-details-v2.svg" title="The Bookinfo Application with details V2"> <img class="element-to-stretch" src="/v1.9/blog/2018/egress-https/bookinfo-details-v2.svg" alt="The Bookinfo Application with details V2" /> </a> </div> <figcaption>The Bookinfo Application with details V2</figcaption> </figure> <p>Note that the Google Books web service is outside the Istio service mesh, the boundary of which is marked by a dashed line.</p> <p>Now direct all the traffic destined to the <em>details</em> microservice to <em>details version v2</em>.</p> <div><a data-skipendnotes='true' style='display:none' href='https://raw.githubusercontent.com/istio/istio/release-1.9/samples/bookinfo/networking/virtual-service-details-v2.yaml'>Zip</a><pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl apply -f @samples/bookinfo/networking/virtual-service-details-v2.yaml@ </code></pre></div> <p>Note that the virtual service relies on a destination rule that you created in the <a
href="/v1.9/docs/examples/bookinfo/#apply-default-destination-rules">Apply default destination rules</a> section.</p> <p>Access the web page of the application, after <a href="/v1.9/docs/examples/bookinfo/#determine-the-ingress-ip-and-port">determining the ingress IP and port</a>.</p> <p>Oops&hellip; Instead of the book details you have the <em>Error fetching product details</em> message displayed:</p> <figure style="width:80%"> <div class="wrapper-with-intrinsic-ratio" style="padding-bottom:36.18649965205289%"> <a data-skipendnotes="true" href="/v1.9/blog/2018/egress-https/errorFetchingBookDetails.png" title="The Error Fetching Product Details Message"> <img class="element-to-stretch" src="/v1.9/blog/2018/egress-https/errorFetchingBookDetails.png" alt="The Error Fetching Product Details Message" /> </a> </div> <figcaption>The Error Fetching Product Details Message</figcaption> </figure> <p>The good news is that your application did not crash. With a good microservice design, you do not have <strong>failure propagation</strong>. In your case, the failing <em>details</em> microservice does not cause the <code>productpage</code> microservice to fail. Most of the functionality of the application is still provided, despite the failure in the <em>details</em> microservice. You have <strong>graceful service degradation</strong>: as you can see, the reviews and the ratings are displayed correctly, and the application is still useful.</p> <p>So what might have gone wrong? Ah&hellip; The answer is that I forgot to tell you to enable traffic from inside the mesh to an external service, in this case to the Google Books web service. By default, the Istio sidecar proxies (<a href="https://www.envoyproxy.io">Envoy proxies</a>) <strong>block all the traffic to destinations outside the cluster</strong>. 
To enable such traffic, you must define a <a href="/v1.9/docs/reference/config/networking/service-entry/">mesh-external service entry</a>.</p> <h3 id="enable-https-access-to-a-google-books-web-service">Enable HTTPS access to a Google Books web service</h3> <p>No worries, define a <strong>mesh-external service entry</strong> and fix your application. You must also define a <em>virtual service</em> to perform routing by <a href="https://en.wikipedia.org/wiki/Server_Name_Indication">SNI</a> to the external service.</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl apply -f - &lt;&lt;EOF
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: googleapis
spec:
  hosts:
  - www.googleapis.com
  ports:
  - number: 443
    name: https
    protocol: HTTPS
  location: MESH_EXTERNAL
  resolution: DNS
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: googleapis
spec:
  hosts:
  - www.googleapis.com
  tls:
  - match:
    - port: 443
      sni_hosts:
      - www.googleapis.com
    route:
    - destination:
        host: www.googleapis.com
        port:
          number: 443
      weight: 100
EOF
</code></pre> <p>Now accessing the web page of the application displays the book details without error:</p> <figure style="width:80%"> <div class="wrapper-with-intrinsic-ratio" style="padding-bottom:34.82831114225648%"> <a data-skipendnotes="true" href="/v1.9/blog/2018/egress-https/externalBookDetails.png" title="Book Details Displayed Correctly"> <img class="element-to-stretch" src="/v1.9/blog/2018/egress-https/externalBookDetails.png" alt="Book Details Displayed Correctly" /> </a> </div> <figcaption>Book Details Displayed Correctly</figcaption> </figure> <p>You can query your service entries:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl get serviceentries
NAME         AGE
googleapis   8m
</code></pre> <p>You can delete your service entry:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl delete
serviceentry googleapis
serviceentry &#34;googleapis&#34; deleted
</code></pre> <p>and see in the output that the service entry is deleted.</p> <p>Accessing the web page after deleting the service entry produces the same error that you experienced before, namely <em>Error fetching product details</em>. As you can see, the service entries are defined <strong>dynamically</strong>, as are many other Istio configuration artifacts. The Istio operators can decide which domains they allow the microservices to access. They can enable and disable traffic to the external domains on the fly, without redeploying the microservices.</p> <h3 id="cleanup-of-https-access-to-a-google-books-web-service">Cleanup of HTTPS access to a Google Books web service</h3> <div><a data-skipendnotes='true' style='display:none' href='https://raw.githubusercontent.com/istio/istio/release-1.9/samples/bookinfo/platform/kube/bookinfo-details-v2.yaml'>Zip</a><a data-skipendnotes='true' style='display:none' href='https://raw.githubusercontent.com/istio/istio/release-1.9/samples/bookinfo/networking/virtual-service-details-v2.yaml'>Zip</a><pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl delete serviceentry googleapis
$ kubectl delete virtualservice googleapis
$ kubectl delete -f @samples/bookinfo/networking/virtual-service-details-v2.yaml@
$ kubectl delete -f @samples/bookinfo/platform/kube/bookinfo-details-v2.yaml@
</code></pre></div> <h2 id="tls-origination-by-istio">TLS origination by Istio</h2> <p>There is a caveat to this story. Suppose you want to monitor which specific set of <a href="https://developers.google.com/apis-explorer/">Google APIs</a> your microservices use (<a href="https://developers.google.com/books/docs/v1/getting_started">Books</a>, <a href="https://developers.google.com/calendar/">Calendar</a>, <a href="https://developers.google.com/tasks/">Tasks</a> etc.)
Suppose you want to enforce a policy that allows using only the <a href="https://developers.google.com/books/docs/v1/getting_started">Books APIs</a>. Suppose you want to monitor the book identifiers that your microservices access. For these monitoring and policy tasks you need to know the URL path. Consider, for example, the URL <a href="https://www.googleapis.com/books/v1/volumes?q=isbn:0486424618"><code>www.googleapis.com/books/v1/volumes?q=isbn:0486424618</code></a>. In that URL, the <a href="https://developers.google.com/books/docs/v1/getting_started">Books APIs</a> are specified by the path segment <code>/books</code>, and the <a href="https://en.wikipedia.org/wiki/International_Standard_Book_Number">ISBN</a> number by the path segment <code>/volumes?q=isbn:0486424618</code>. However, in HTTPS, all the HTTP details (hostname, path, headers etc.) are encrypted and such monitoring and policy enforcement by the sidecar proxies is not possible. Istio can only know the server name of the encrypted requests from the <a href="https://tools.ietf.org/html/rfc3546#section-3.1">SNI</a> (<em>Server Name Indication</em>) field, in this case <code>www.googleapis.com</code>.</p> <p>To allow Istio to perform monitoring and policy enforcement of egress requests based on HTTP details, the microservices must issue HTTP requests. Istio then opens an HTTPS connection to the destination (performs TLS origination). The code of the microservices must be written or configured differently, according to whether the microservice runs inside or outside an Istio service mesh. This contradicts the Istio design goal of <a href="/v1.9/docs/ops/deployment/architecture/#design-goals">maximizing transparency</a>. Sometimes you need to compromise&hellip;</p> <p>The diagram below shows two options for sending HTTPS traffic to external services. On the top, a microservice sends regular HTTPS requests, encrypted end-to-end.
On the bottom, the same microservice sends unencrypted HTTP requests inside a pod, which are intercepted by the sidecar Envoy proxy. The sidecar proxy performs TLS origination, so the traffic between the pod and the external service is encrypted.</p> <figure style="width:60%"> <div class="wrapper-with-intrinsic-ratio" style="padding-bottom:95.1355088590701%"> <a data-skipendnotes="true" href="/v1.9/blog/2018/egress-https/https_from_the_app.svg" title="HTTPS traffic to external services, with TLS originated by the microservice vs. by the sidecar proxy"> <img class="element-to-stretch" src="/v1.9/blog/2018/egress-https/https_from_the_app.svg" alt="HTTPS traffic to external services, with TLS originated by the microservice vs. by the sidecar proxy" /> </a> </div> <figcaption>HTTPS traffic to external services, with TLS originated by the microservice vs. by the sidecar proxy</figcaption> </figure> <p>Here is how both patterns are supported in the <a href="https://raw.githubusercontent.com/istio/istio/release-1.9/samples/bookinfo/src/details/details.rb">Bookinfo details microservice code</a>, using the Ruby <a href="https://docs.ruby-lang.org/en/2.0.0/Net/HTTP.html">net/http module</a>:</p> <pre><code class='language-ruby' data-expandlinks='true' data-repo='istio' >uri = URI.parse(&#39;https://www.googleapis.com/books/v1/volumes?q=isbn:&#39; + isbn)
http = Net::HTTP.new(uri.host, ENV[&#39;DO_NOT_ENCRYPT&#39;] === &#39;true&#39; ? 80:443)
...
unless ENV[&#39;DO_NOT_ENCRYPT&#39;] === &#39;true&#39; then
  http.use_ssl = true
end
</code></pre> <p>When the <code>DO_NOT_ENCRYPT</code> environment variable is defined, the request is performed without SSL (plain HTTP) to port 80.</p> <p>You can set the <code>DO_NOT_ENCRYPT</code> environment variable to <em>&ldquo;true&rdquo;</em> in the <a href="https://raw.githubusercontent.com/istio/istio/release-1.9/samples/bookinfo/platform/kube/bookinfo-details-v2.yaml">Kubernetes deployment spec of details v2</a>, the <code>container</code> section:</p> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' >env:
- name: DO_NOT_ENCRYPT
  value: &#34;true&#34;
</code></pre> <p>In the next section you will configure TLS origination for accessing an external web service.</p> <h2 id="bookinfo-with-tls-origination-to-a-google-books-web-service">Bookinfo with TLS origination to a Google Books web service</h2> <ol> <li><p>Deploy a version of <em>details v2</em> that sends an HTTP request to <a href="https://developers.google.com/books/docs/v1/getting_started">Google Books APIs</a>.
The <code>DO_NOT_ENCRYPT</code> variable is set to true in <a href="https://raw.githubusercontent.com/istio/istio/release-1.9/samples/bookinfo/platform/kube/bookinfo-details-v2.yaml"><code>bookinfo-details-v2.yaml</code></a>.</p> <div><a data-skipendnotes='true' style='display:none' href='https://raw.githubusercontent.com/istio/istio/release-1.9/samples/bookinfo/platform/kube/bookinfo-details-v2.yaml'>Zip</a><pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl apply -f @samples/bookinfo/platform/kube/bookinfo-details-v2.yaml@ </code></pre></div></li> <li><p>Direct the traffic destined to the <em>details</em> microservice to <em>details version v2</em>.</p> <div><a data-skipendnotes='true' style='display:none' href='https://raw.githubusercontent.com/istio/istio/release-1.9/samples/bookinfo/networking/virtual-service-details-v2.yaml'>Zip</a><pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl apply -f @samples/bookinfo/networking/virtual-service-details-v2.yaml@ </code></pre></div></li> <li><p>Create a mesh-external service entry for <code>www.googleapis.com</code>, a virtual service to rewrite the destination port from 80 to 443, and a destination rule to perform TLS origination.</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl apply -f - &lt;&lt;EOF
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: googleapis
spec:
  hosts:
  - www.googleapis.com
  ports:
  - number: 80
    name: http
    protocol: HTTP
  - number: 443
    name: https
    protocol: HTTPS
  resolution: DNS
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: rewrite-port-for-googleapis
spec:
  hosts:
  - www.googleapis.com
  http:
  - match:
    - port: 80
    route:
    - destination:
        host: www.googleapis.com
        port:
          number: 443
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: originate-tls-for-googleapis
spec:
  host: www.googleapis.com
  trafficPolicy:
    loadBalancer:
      simple: ROUND_ROBIN
    portLevelSettings:
    - port:
        number: 443
      tls:
        mode: SIMPLE # initiates HTTPS when accessing www.googleapis.com
EOF
</code></pre></li> <li><p>Access the web page of the application and verify that the book details are displayed without errors.</p></li> <li><p><a href="/v1.9/docs/tasks/observability/logs/access-log/#enable-envoy-s-access-logging">Enable Envoy’s access logging</a></p></li> <li><p>Check the log of the sidecar proxy of <em>details v2</em> and see the HTTP request.</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl logs $(kubectl get pods -l app=details -l version=v2 -o jsonpath=&#39;{.items[0].metadata.name}&#39;) istio-proxy | grep googleapis
[2018-08-09T11:32:58.171Z] &#34;GET /books/v1/volumes?q=isbn:0486424618 HTTP/1.1&#34; 200 - 0 1050 264 264 &#34;-&#34; &#34;Ruby&#34; &#34;b993bae7-4288-9241-81a5-4cde93b2e3a6&#34; &#34;www.googleapis.com:80&#34; &#34;172.217.20.74:80&#34;
</code></pre> <p>Note the URL path in the log; the path can be monitored and access policies can be applied based on it.
To read more about monitoring and access policies for HTTP egress traffic, check out <a href="https://archive.istio.io/v0.8/blog/2018/egress-monitoring-access-control/#logging">this blog post</a>.</p></li> </ol> <h3 id="cleanup-of-tls-origination-to-a-google-books-web-service">Cleanup of TLS origination to a Google Books web service</h3> <div><a data-skipendnotes='true' style='display:none' href='https://raw.githubusercontent.com/istio/istio/release-1.9/samples/bookinfo/platform/kube/bookinfo-details-v2.yaml'>Zip</a><a data-skipendnotes='true' style='display:none' href='https://raw.githubusercontent.com/istio/istio/release-1.9/samples/bookinfo/networking/virtual-service-details-v2.yaml'>Zip</a><pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl delete serviceentry googleapis $ kubectl delete virtualservice rewrite-port-for-googleapis $ kubectl delete destinationrule originate-tls-for-googleapis $ kubectl delete -f @samples/bookinfo/networking/virtual-service-details-v2.yaml@ $ kubectl delete -f @samples/bookinfo/platform/kube/bookinfo-details-v2.yaml@ </code></pre></div> <h3 id="relation-to-istio-mutual-tls">Relation to Istio mutual TLS</h3> <p>Note that the TLS origination in this case is unrelated to <a href="/v1.9/docs/concepts/security/#mutual-tls-authentication">the mutual TLS</a> applied by Istio. The TLS origination for the external services will work, whether the Istio mutual TLS is enabled or not. The <strong>mutual</strong> TLS secures service-to-service communication <strong>inside</strong> the service mesh and provides each service with a strong identity. The <strong>external services</strong> in this blog post were accessed using <strong>one-way TLS</strong>, the same mechanism used to secure communication between a web browser and a web server. 
TLS is applied to the communication with external services to verify the identity of the external server and to encrypt the traffic.</p> <h2 id="conclusion">Conclusion</h2> <p>In this blog post I demonstrated how microservices in an Istio service mesh can consume external web services by HTTPS. By default, Istio blocks all the traffic to the hosts outside the cluster. To enable such traffic, mesh-external service entries must be created for the service mesh. It is possible to access the external sites either by issuing HTTPS requests, or by issuing HTTP requests with Istio performing TLS origination. When the microservices issue HTTPS requests, the traffic is encrypted end-to-end; however, Istio cannot monitor HTTP details like the URL paths of the requests. When the microservices issue HTTP requests, Istio can monitor the HTTP details of the requests and enforce HTTP-based access policies. However, in that case the traffic between the microservice and the sidecar proxy is unencrypted. Having part of the traffic unencrypted can be forbidden in organizations with very strict security requirements.</p>Wed, 31 Jan 2018 00:00:00 +0000/v1.9/blog/2018/egress-https/Vadim Eisenberg/v1.9/blog/2018/egress-https/traffic-managementegresshttpsMixer and the SPOF Myth <p>As <a href="https://istio.io/v1.6/docs/reference/config/policy-and-telemetry/">Mixer</a> is in the request path, it is natural to question how it impacts overall system availability and latency. A common refrain we hear when people first glance at Istio architecture diagrams is &ldquo;Isn&rsquo;t this just introducing a single point of failure?&rdquo;</p> <p>In this post, we’ll dig deeper and cover the design principles that underpin Mixer and the surprising fact that Mixer actually increases overall mesh availability and reduces average request latency.</p> <p>Istio&rsquo;s use of Mixer has two main benefits in terms of overall system availability and latency:</p> <ul> <li><p><strong>Increased SLO</strong>.
Mixer insulates proxies and services from infrastructure backend failures, enabling higher effective mesh availability. The mesh as a whole tends to experience a lower rate of failure when interacting with the infrastructure backends than if Mixer were not present.</p></li> <li><p><strong>Reduced Latency</strong>. Through aggressive use of shared multi-level caches and sharding, Mixer reduces average observed latencies across the mesh.</p></li> </ul> <p>We&rsquo;ll explain this in more detail below.</p> <h2 id="how-we-got-here">How we got here</h2> <p>For many years at Google, we’ve been using an internal API &amp; service management system to handle the many APIs exposed by Google. This system has been fronting the world’s biggest services (Google Maps, YouTube, Gmail, etc) and sustains a peak rate of hundreds of millions of QPS. Although this system has served us well, it had problems keeping up with Google’s rapid growth, and it became clear that a new architecture was needed in order to tamp down ballooning operational costs.</p> <p>In 2014, we started an initiative to create a replacement architecture that would scale better. The result has proven extremely successful and has been gradually deployed throughout Google, saving in the process millions of dollars a month in ops costs.</p> <p>The older system was built around a centralized fleet of fairly heavy proxies into which all incoming traffic would flow, before being forwarded to the services where the real work was done. 
The newer architecture jettisons the shared proxy design and instead consists of a very lean and efficient distributed sidecar proxy sitting next to service instances, along with a shared fleet of sharded control plane intermediaries:</p> <figure style="width:75%"> <div class="wrapper-with-intrinsic-ratio" style="padding-bottom:74.79295535770372%"> <a data-skipendnotes="true" href="/v1.9/blog/2017/mixer-spof-myth/mixer-spof-myth-1.svg" title="Google System Topology"> <img class="element-to-stretch" src="/v1.9/blog/2017/mixer-spof-myth/mixer-spof-myth-1.svg" alt="Google System Topology" /> </a> </div> <figcaption>Google&#39;s API &amp; Service Management System</figcaption> </figure> <p>Look familiar? Of course: it’s just like Istio! Istio was conceived as a second generation of this distributed proxy architecture. We took the core lessons from this internal system, generalized many of the concepts by working with our partners, and created Istio.</p> <h2 id="architecture-recap">Architecture recap</h2> <p>As shown in the diagram below, Mixer sits between the mesh and the infrastructure backends that support it:</p> <figure style="width:75%"> <div class="wrapper-with-intrinsic-ratio" style="padding-bottom:65.89948886170049%"> <a data-skipendnotes="true" href="/v1.9/blog/2017/mixer-spof-myth/mixer-spof-myth-2.svg" title="Istio Topology"> <img class="element-to-stretch" src="/v1.9/blog/2017/mixer-spof-myth/mixer-spof-myth-2.svg" alt="Istio Topology" /> </a> </div> <figcaption>Istio Topology</figcaption> </figure> <p>The Envoy sidecar logically calls Mixer before each request to perform precondition checks, and after each request to report telemetry. The sidecar has local caching such that a relatively large percentage of precondition checks can be performed from cache. Additionally, the sidecar buffers outgoing telemetry such that it only actually needs to call Mixer once for every several thousand requests.
Whereas precondition checks are synchronous to request processing, telemetry reports are done asynchronously with a fire-and-forget pattern.</p> <p>At a high level, Mixer provides:</p> <ul> <li><p><strong>Backend Abstraction</strong>. Mixer insulates the Istio components and services within the mesh from the implementation details of individual infrastructure backends.</p></li> <li><p><strong>Intermediation</strong>. Mixer allows operators to have fine-grained control over all interactions between the mesh and the infrastructure backends.</p></li> </ul> <p>However, even beyond these purely functional aspects, Mixer has other characteristics that provide the system with additional benefits.</p> <h2 id="mixer-slo-booster">Mixer: SLO booster</h2> <p>Contrary to the claim that Mixer is a SPOF and can therefore lead to mesh outages, we believe it in fact improves the effective availability of a mesh. How can that be? There are three basic characteristics at play:</p> <ul> <li><p><strong>Statelessness</strong>. Mixer is stateless in that it doesn’t manage any persistent storage of its own.</p></li> <li><p><strong>Hardening</strong>. Mixer proper is designed to be a highly reliable component. The design intent is to achieve &gt; 99.999% uptime for any individual Mixer instance.</p></li> <li><p><strong>Caching and Buffering</strong>. Mixer is designed to accumulate a large amount of transient ephemeral state.</p></li> </ul> <p>The sidecar proxies that sit next to each service instance in the mesh must necessarily be frugal in terms of memory consumption, which constrains the possible amount of local caching and buffering. Mixer, however, lives independently and can use considerably larger caches and output buffers. Mixer thus acts as a highly-scaled and highly-available second-level cache for the sidecars.</p> <p>Mixer’s expected availability is considerably higher than most infrastructure backends (those often have availability of perhaps 99.9%). 
Its local caches and buffers help mask infrastructure backend failures by being able to continue operating even when a backend has become unresponsive.</p> <h2 id="mixer-latency-slasher">Mixer: Latency slasher</h2> <p>As we explained above, the Istio sidecars generally have fairly effective first-level caching. They can serve the majority of their traffic from cache. Mixer provides a much greater shared pool of second-level cache, which helps Mixer contribute to a lower average per-request latency.</p> <p>While it’s busy cutting down latency, Mixer is also inherently cutting down the number of calls your mesh makes to infrastructure backends. Depending on how you’re paying for these backends, this might end up saving you some cash by cutting down the effective QPS to the backends.</p> <h2 id="work-ahead">Work ahead</h2> <p>We have opportunities ahead to continue improving the system in many ways.</p> <h3 id="configuration-canaries">Configuration canaries</h3> <p>Mixer is highly scaled so it is generally resistant to individual instance failures. However, Mixer is still susceptible to cascading failures in the case when a poison configuration is deployed which causes all Mixer instances to crash basically at the same time (yeah, that would be a bad day). To prevent this from happening, configuration changes can be canaried to a small set of Mixer instances, and then more broadly rolled out.</p> <p>Mixer doesn’t yet do canarying of configuration changes, but we expect this to come online as part of Istio’s ongoing work on reliable configuration distribution.</p> <h3 id="cache-tuning">Cache tuning</h3> <p>We have yet to fine-tune the sizes of the sidecar and Mixer caches. This work will focus on achieving the highest performance possible using the least amount of resources.</p> <h3 id="cache-sharing">Cache sharing</h3> <p>At the moment, each Mixer instance operates independently of all other instances. 
A request handled by one Mixer instance will not leverage data cached in a different instance. We will eventually experiment with a distributed cache such as memcached or Redis in order to provide a much larger mesh-wide shared cache, and further reduce the number of calls to infrastructure backends.</p> <h3 id="sharding">Sharding</h3> <p>In very large meshes, the load on Mixer can be great. There can be a large number of Mixer instances, each straining to keep caches primed to satisfy incoming traffic. We expect to eventually introduce intelligent sharding such that Mixer instances become slightly specialized in handling particular data streams in order to increase the likelihood of cache hits. In other words, sharding helps improve cache efficiency by routing related traffic to the same Mixer instance over time, rather than randomly dispatching to any available Mixer instance.</p> <h2 id="conclusion">Conclusion</h2> <p>Practical experience at Google showed that the model of a slim sidecar proxy and a large shared caching control plane intermediary hits a sweet spot, delivering excellent perceived availability and latency. We’ve taken the lessons learned there and applied them to create more sophisticated and effective caching, prefetching, and buffering strategies in Istio. We’ve also optimized the communication protocols to reduce overhead when a cache miss does occur.</p> <p>Mixer is still young. As of Istio 0.3, we haven’t really done significant performance work within Mixer itself. This means when a request misses the sidecar cache, we spend more time in Mixer to respond to requests than we should. We’re doing a lot of work to improve this in coming months to reduce the overhead that Mixer imparts in the synchronous precondition check case.</p> <p>We hope this post makes you appreciate the inherent benefits that Mixer brings to Istio. 
Don’t hesitate to post comments or questions to <a href="https://groups.google.com/forum/#!forum/istio-policies-and-telemetry">istio-policies-and-telemetry@</a>.</p>Thu, 07 Dec 2017 00:00:00 +0000/v1.9/blog/2017/mixer-spof-myth/Martin Taillefer/v1.9/blog/2017/mixer-spof-myth/adaptersmixerpoliciestelemetryavailabilitylatencyMixer Adapter Model <p>Istio 0.2 introduced a new Mixer adapter model which is intended to increase Mixer’s flexibility to address a varied set of infrastructure backends. This post intends to put the adapter model in context and explain how it works.</p> <h2 id="why-adapters">Why adapters?</h2> <p>Infrastructure backends provide support functionality used to build services. They include such things as access control systems, telemetry capturing systems, quota enforcement systems, billing systems, and so forth. Services traditionally directly integrate with these backend systems, creating a hard coupling and baking-in specific semantics and usage options.</p> <p>Mixer serves as an abstraction layer between Istio and an open-ended set of infrastructure backends. The Istio components and services that run within the mesh can interact with these backends, while not being coupled to the backends’ specific interfaces.</p> <p>In addition to insulating application-level code from the details of infrastructure backends, Mixer provides an intermediation model that allows operators to inject and control policies between application code and backends. Operators can control which data is reported to which backend, which backend to consult for authorization, and much more.</p> <p>Given that individual infrastructure backends each have different interfaces and operational models, Mixer needs custom code to deal with each and we call these custom bundles of code <a href="https://github.com/istio/istio/wiki/Mixer-Compiled-In-Adapter-Dev-Guide"><em>adapters</em></a>.</p> <p>Adapters are Go packages that are directly linked into the Mixer binary. 
It’s fairly simple to create custom Mixer binaries linked with specialized sets of adapters, in case the default set of adapters is not sufficient for specific use cases.</p> <h2 id="philosophy">Philosophy</h2> <p>Mixer is essentially an attribute processing and routing machine. The proxy sends it <a href="https://istio.io/v1.6/docs/reference/config/policy-and-telemetry/mixer-overview/#attributes">attributes</a> as part of doing precondition checks and telemetry reports, which it turns into a series of calls into adapters. The operator supplies configuration which describes how to map incoming attributes to inputs for the adapters.</p> <figure style="width:60%"> <div class="wrapper-with-intrinsic-ratio" style="padding-bottom:42.60859894197215%"> <a data-skipendnotes="true" href="/v1.9/blog/2017/adapter-model/machine.svg" title="Attribute Machine"> <img class="element-to-stretch" src="/v1.9/blog/2017/adapter-model/machine.svg" alt="Attribute Machine" /> </a> </div> <figcaption>Attribute Machine</figcaption> </figure> <p>Configuration is a complex task. In fact, evidence shows that the overwhelming majority of service outages are caused by configuration errors. To help combat this, Mixer’s configuration model enforces a number of constraints designed to avoid errors. For example, the configuration model uses strong typing to ensure that only meaningful attributes or attribute expressions are used in any given context.</p> <h2 id="handlers-configuring-adapters">Handlers: configuring adapters</h2> <p>Each adapter that Mixer uses requires some configuration to operate. Typically, adapters need things like the URL to their backend, credentials, caching options, and so forth. 
Each adapter defines the exact configuration data it needs via a <a href="https://developers.google.com/protocol-buffers/">protobuf</a> message.</p> <p>You configure each adapter by creating <a href="https://istio.io/v1.6/docs/reference/config/policy-and-telemetry/mixer-overview/#handlers"><em>handlers</em></a> for them. A handler is a configuration resource which represents a fully configured adapter ready for use. There can be any number of handlers for a single adapter, making it possible to reuse an adapter in different scenarios.</p> <h2 id="templates-adapter-input-schema">Templates: adapter input schema</h2> <p>Mixer is typically invoked twice for every incoming request to a mesh service, once for precondition checks and once for telemetry reporting. For every such call, Mixer invokes one or more adapters. Different adapters need different pieces of data as input in order to do their work. A logging adapter needs a log entry, a metric adapter needs a metric, an authorization adapter needs credentials, etc. Mixer <a href="https://istio.io/v1.6/docs/reference/config/policy-and-telemetry/templates/"><em>templates</em></a> are used to describe the exact data that an adapter consumes at request time.</p> <p>Each template is specified as a <a href="https://developers.google.com/protocol-buffers/">protobuf</a> message. A single template describes a bundle of data that is delivered to one or more adapters at runtime. Any given adapter can be designed to support any number of templates; which templates an adapter supports is determined by the adapter developer.</p> <p><a href="https://istio.io/v1.6/docs/reference/config/policy-and-telemetry/templates/metric/"><code>metric</code></a> and <a href="https://istio.io/v1.6/docs/reference/config/policy-and-telemetry/templates/logentry/"><code>logentry</code></a> are two of the most essential templates used within Istio.
They represent respectively the payload to report a single metric and a single log entry to appropriate backends.</p> <h2 id="instances-attribute-mapping">Instances: attribute mapping</h2> <p>You control which data is delivered to individual adapters by creating <a href="https://istio.io/v1.6/docs/reference/config/policy-and-telemetry/mixer-overview/#instances"><em>instances</em></a>. Instances control how Mixer maps the <a href="https://istio.io/v1.6/docs/reference/config/policy-and-telemetry/mixer-overview/#attributes">attributes</a> delivered by the proxy into individual bundles of data that can be routed to different adapters.</p> <p>Creating instances generally requires using <a href="https://istio.io/v1.6/docs/reference/config/policy-and-telemetry/expression-language/">attribute expressions</a>. The point of these expressions is to use any attribute or literal value in order to produce a result that can be assigned to an instance’s field.</p> <p>Every instance field has a type, as defined in the template; every attribute has a <a href="https://github.com/istio/api/blob/release-1.9/policy/v1beta1/value_type.proto">type</a>; and every attribute expression has a type. You can only assign type-compatible expressions to any given instance field. For example, you can’t assign an integer expression to a string field. This kind of strong typing is designed to minimize the risk of creating bogus configurations.</p> <h2 id="rules-delivering-data-to-adapters">Rules: delivering data to adapters</h2> <p>The last piece of the puzzle is telling Mixer which instances to send to which handler and when. This is done by creating <a href="https://istio.io/v1.6/docs/reference/config/policy-and-telemetry/mixer-overview/#rules"><em>rules</em></a>. Each rule identifies a specific handler and the set of instances to send to that handler.
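</p> <p>To make this concrete, here is a sketch of how the three resource kinds fit together, modeled on the compiled-in Prometheus adapter; the names and attribute expressions below are illustrative, not part of any default configuration:</p> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' ># A handler: a configured instance of the prometheus adapter
apiVersion: config.istio.io/v1alpha2
kind: prometheus
metadata:
  name: requestcounthandler
  namespace: istio-system
spec:
  metrics:
  - name: request_count
    instance_name: requestcount.metric.istio-system
    kind: COUNTER
    label_names:
    - source
    - destination
---
# An instance: maps attributes into the metric template&#39;s fields
apiVersion: config.istio.io/v1alpha2
kind: metric
metadata:
  name: requestcount
  namespace: istio-system
spec:
  value: &#34;1&#34;
  dimensions:
    source: source.labels[&#34;app&#34;] | &#34;unknown&#34;
    destination: destination.labels[&#34;app&#34;] | &#34;unknown&#34;
---
# A rule: when the predicate holds, route the instance to the handler
apiVersion: config.istio.io/v1alpha2
kind: rule
metadata:
  name: promrequestcount
  namespace: istio-system
spec:
  match: context.protocol == &#34;http&#34;
  actions:
  - handler: requestcounthandler.prometheus
    instances:
    - requestcount.metric
</code></pre> <p>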
Whenever Mixer processes an incoming call, it invokes the indicated handler and gives it the specific set of instances for processing.</p> <p>Rules contain matching predicates. A predicate is an attribute expression which returns a true/false value. A rule only takes effect if its predicate expression returns true. Otherwise, it’s like the rule didn’t exist and the indicated handler isn’t invoked.</p> <h2 id="future">Future</h2> <p>We are working to improve the end-to-end experience of using and developing adapters. For example, several new features are planned to make templates more expressive. Additionally, the expression language is being substantially enhanced to be more powerful and well-rounded.</p> <p>Longer term, we are evaluating ways to support adapters which aren’t directly linked into the main Mixer binary. This would simplify deployment and composition.</p> <h2 id="conclusion">Conclusion</h2> <p>The refreshed Mixer adapter model is designed to provide a flexible framework to support an open-ended set of infrastructure backends.</p> <p>Handlers provide configuration data for individual adapters; templates determine exactly what kind of data different adapters consume at runtime; instances let operators prepare this data; and rules direct the data to one or more handlers.</p> <p>You can learn more about Mixer&rsquo;s overall architecture <a href="https://istio.io/v1.6/docs/reference/config/policy-and-telemetry/mixer-overview/">here</a>, and learn the specifics of templates, handlers, and rules <a href="https://istio.io/v1.6/docs/reference/config/policy-and-telemetry">here</a>.
You can find many examples of Mixer configuration resources in the Bookinfo sample <a href="https://github.com/istio/istio/tree/release-1.9/samples/bookinfo">here</a>.</p>Fri, 03 Nov 2017 00:00:00 +0000/v1.9/blog/2017/adapter-model/Martin Taillefer/v1.9/blog/2017/adapter-model/adaptersmixerpoliciestelemetryUsing Network Policy with Istio <p>The use of Network Policy to secure applications running on Kubernetes is now a widely accepted industry best practice. Given that Istio also supports policy, we want to spend some time explaining how Istio policy and Kubernetes Network Policy interact and support each other to deliver your application securely.</p> <p>Let’s start with the basics: why might you want to use both Istio and Kubernetes Network Policy? The short answer is that they are good at different things. Consider the main differences between Istio and Network Policy (we will describe &ldquo;typical” implementations, e.g. Calico, but implementation details can vary with different network providers):</p> <table> <thead> <tr> <th></th> <th>Istio Policy</th> <th>Network Policy</th> </tr> </thead> <tbody> <tr> <td><strong>Layer</strong></td> <td>&ldquo;Service&rdquo; &mdash; L7</td> <td>&ldquo;Network&rdquo; &mdash; L3-4</td> </tr> <tr> <td><strong>Implementation</strong></td> <td>User space</td> <td>Kernel</td> </tr> <tr> <td><strong>Enforcement Point</strong></td> <td>Pod</td> <td>Node</td> </tr> </tbody> </table> <h2 id="layer">Layer</h2> <p>Istio policy operates at the &ldquo;service” layer of your network application. This is Layer 7 (Application) from the perspective of the OSI model, but the de facto model of cloud native applications is that Layer 7 actually consists of at least two layers: a service layer and a content layer. The service layer is typically HTTP, which encapsulates the actual application data (the content layer). It is at this service layer of HTTP that Istio’s Envoy proxy operates.
In contrast, Network Policy operates at Layers 3 (Network) and 4 (Transport) in the OSI model.</p> <p>Operating at the service layer gives the Envoy proxy a rich set of attributes to base policy decisions on, for protocols it understands, which at present includes HTTP/1.1 &amp; HTTP/2 (gRPC operates over HTTP/2). So, you can apply policy based on virtual host, URL, or other HTTP headers. In the future, Istio will support a wide range of Layer 7 protocols, as well as generic TCP and UDP transport.</p> <p>In contrast, operating at the network layer has the advantage of being universal, since all network applications use IP. At the network layer you can apply policy regardless of the layer 7 protocol: DNS, SQL databases, real-time streaming, and a plethora of other services that do not use HTTP can be secured. Network Policy isn’t limited to a classic firewall’s tuple of IP addresses, protocol, and ports. Both Istio and Network Policy are aware of rich Kubernetes labels to describe pod endpoints.</p> <h2 id="implementation">Implementation</h2> <p>Istio’s proxy is based on <a href="https://envoyproxy.github.io/envoy/">Envoy</a>, which is implemented as a user space daemon in the data plane that interacts with the network layer using standard sockets. This gives it a large amount of flexibility in processing, and allows it to be distributed (and upgraded!) in a container.</p> <p>The Network Policy data plane is typically implemented in kernel space (e.g. using iptables, eBPF filters, or even custom kernel modules). Being in kernel space allows these implementations to be extremely fast, though not as flexible as the Envoy proxy.</p> <h2 id="enforcement-point">Enforcement point</h2> <p>Policy enforcement using the Envoy proxy is implemented inside the pod, as a sidecar container in the same network namespace. This allows a simple deployment model. Some containers are given permission to reconfigure the networking inside their pod (<code>CAP_NET_ADMIN</code>).
If such a service instance is compromised, or misbehaves (as in a malicious tenant) the proxy can be bypassed.</p> <p>While this won’t let an attacker access other Istio-enabled pods, so long as they are correctly configured, it opens several attack vectors:</p> <ul> <li>Attacking unprotected pods</li> <li>Attempting to deny service to protected pods by sending lots of traffic</li> <li>Exfiltrating data collected in the pod</li> <li>Attacking the cluster infrastructure (servers or Kubernetes services)</li> <li>Attacking services outside the mesh, like databases, storage arrays, or legacy systems.</li> </ul> <p>Network Policy is typically enforced at the host node, outside the network namespace of the guest pods. This means that compromised or misbehaving pods must break into the root namespace to avoid enforcement. With the addition of egress policy due in Kubernetes 1.8, this difference makes Network Policy a key part of protecting your infrastructure from compromised workloads.</p> <h2 id="examples">Examples</h2> <p>Let’s walk through a few examples of what you might want to do with Kubernetes Network Policy for an Istio-enabled application. Consider the Bookinfo sample application. We’re going to cover the following use cases for Network Policy:</p> <ul> <li>Reduce attack surface of the application ingress</li> <li>Enforce fine-grained isolation within the application</li> </ul> <h3 id="reduce-attack-surface-of-the-application-ingress">Reduce attack surface of the application ingress</h3> <p>Our application ingress controller is the main entry-point to our application from the outside world. 
A quick peek at <code>istio.yaml</code> (used to install Istio) defines the Istio ingress like this:</p> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' >apiVersion: v1
kind: Service
metadata:
  name: istio-ingress
  labels:
    istio: ingress
spec:
  type: LoadBalancer
  ports:
  - port: 80
    name: http
  - port: 443
    name: https
  selector:
    istio: ingress
</code></pre> <p>The <code>istio-ingress</code> exposes ports 80 and 443. Let’s limit incoming traffic to just these two ports. Envoy has a <a href="https://www.envoyproxy.io/docs/envoy/latest/operations/admin.html#operations-admin-interface">built-in administrative interface</a>, and we don’t want a misconfigured <code>istio-ingress</code> image to accidentally expose our admin interface to the outside world. This is an example of defense in depth: a properly configured image should not expose the interface, and a properly configured Network Policy will prevent anyone from connecting to it. Either can fail or be misconfigured and we are still protected.</p> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' >apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: istio-ingress-lockdown
  namespace: default
spec:
  podSelector:
    matchLabels:
      istio: ingress
  ingress:
  - ports:
    - protocol: TCP
      port: 80
    - protocol: TCP
      port: 443
</code></pre> <h3 id="enforce-fine-grained-isolation-within-the-application">Enforce fine-grained isolation within the application</h3> <p>Here is the service graph for the Bookinfo application.</p> <figure style="width:80%"> <div class="wrapper-with-intrinsic-ratio" style="padding-bottom:59.086918235567985%"> <a data-skipendnotes="true" href="/v1.9/docs/examples/bookinfo/withistio.svg" title="Bookinfo Service Graph"> <img class="element-to-stretch" src="/v1.9/docs/examples/bookinfo/withistio.svg" alt="Bookinfo Service Graph" /> </a> </div> <figcaption>Bookinfo Service Graph</figcaption> </figure> <p>This graph shows every connection that a correctly
functioning application should be allowed to make. All other connections, say from the Istio Ingress directly to the Rating service, are not part of the application. Let’s lock out those extraneous connections so they cannot be used by an attacker. Imagine, for example, that the Ingress pod is compromised by an exploit that allows an attacker to run arbitrary code. If we only allow connections to the Product Page pods using Network Policy, the attacker has gained no more access to my application backends <em>even though they have compromised a member of the service mesh</em>.</p> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' >apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: product-page-ingress
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: productpage
  ingress:
  - ports:
    - protocol: TCP
      port: 9080
    from:
    - podSelector:
        matchLabels:
          istio: ingress
</code></pre> <p>You can and should write a similar policy for each service to enforce which other pods are allowed to access it.</p> <h2 id="summary">Summary</h2> <p>Our take is that Istio and Network Policy have different strengths in applying policy. Istio is application-protocol aware and highly flexible, making it ideal for applying policy in support of operational goals, like service routing, retries, circuit-breaking, etc, and for security that operates at the application layer, such as token validation. Network Policy is universal, highly efficient, and isolated from the pods, making it ideal for applying policy in support of network security goals. Furthermore, having policy that operates at different layers of the network stack is a really good thing as it gives each layer specific context without commingling of state and allows separation of responsibility.</p> <p>This post is based on the three-part blog series by Spike Curtis, one of the Istio team members at Tigera.
The full series can be found here: <a href="https://www.projectcalico.org/using-network-policy-in-concert-with-istio/">https://www.projectcalico.org/using-network-policy-in-concert-with-istio/</a></p>Thu, 10 Aug 2017 00:00:00 +0000/v1.9/blog/2017/0.1-using-network-policy/Spike Curtis/v1.9/blog/2017/0.1-using-network-policy/Canary Deployments using Istio <div> <aside class="callout tip"> <div class="type"><svg class="large-icon"><use xlink:href="/v1.9/img/icons.svg#callout-tip"/></svg></div> <div class="content">This post was updated on May 16, 2018 to use the latest version of the traffic management model.</div> </aside> </div> <p>One of the benefits of the <a href="/v1.9/">Istio</a> project is that it provides the control needed to deploy canary services. The idea behind canary deployment (or rollout) is to introduce a new version of a service by first testing it using a small percentage of user traffic, and then if all goes well, increase, possibly gradually in increments, the percentage while simultaneously phasing out the old version. If anything goes wrong along the way, we abort and roll back to the previous version. In its simplest form, the traffic sent to the canary version is a randomly selected percentage of requests, but in more sophisticated schemes it can be based on the region, user, or other properties of the request.</p> <p>Depending on your level of expertise in this area, you may wonder why Istio&rsquo;s support for canary deployment is even needed, given that platforms like Kubernetes already provide a way to do <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#updating-a-deployment">version rollout</a> and <a href="https://kubernetes.io/docs/concepts/cluster-administration/manage-deployment/#canary-deployments">canary deployment</a>. Problem solved, right? Well, not exactly. 
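</p> <p>For reference, the built-in Kubernetes flow just mentioned amounts to something like the following sketch, assuming a Deployment and container hypothetically both named <code>helloworld</code>:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' ># update the image, triggering a rolling update
$ kubectl set image deployment/helloworld helloworld=helloworld-v2
# hold the rollout while only a few new replicas are live, and observe
$ kubectl rollout pause deployment/helloworld
# proceed if all goes well...
$ kubectl rollout resume deployment/helloworld
# ...or roll back to the previous version instead
$ kubectl rollout undo deployment/helloworld
</code></pre> <p>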
Although doing a rollout this way works in simple cases, it’s very limited, especially in large scale cloud environments receiving lots of (and especially varying amounts of) traffic, where autoscaling is needed.</p> <h2 id="canary-deployment-in-kubernetes">Canary deployment in Kubernetes</h2> <p>As an example, let&rsquo;s say we have a deployed service, <strong>helloworld</strong> version <strong>v1</strong>, for which we would like to test (or simply roll out) a new version, <strong>v2</strong>. Using Kubernetes, you can roll out a new version of the <strong>helloworld</strong> service by simply updating the image in the service’s corresponding <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/">Deployment</a> and letting the <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#updating-a-deployment">rollout</a> happen automatically. If we take particular care to ensure that there are enough <strong>v1</strong> replicas running when we start and <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#pausing-and-resuming-a-deployment">pause</a> the rollout after only one or two <strong>v2</strong> replicas have been started, we can keep the canary’s effect on the system very small. We can then observe the effect before deciding to proceed or, if necessary, <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#rolling-back-a-deployment">roll back</a>. Best of all, we can even attach a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#scaling-a-deployment">horizontal pod autoscaler</a> to the Deployment and it will keep the replica ratios consistent if, during the rollout process, it also needs to scale replicas up or down to handle traffic load.</p> <p>Although fine for what it does, this approach is only useful when we have a properly tested version that we want to deploy, i.e., more of a blue/green, a.k.a. 
red/black, kind of upgrade than a &ldquo;dip your feet in the water&rdquo; kind of canary deployment. In fact, for the latter (for example, testing a canary version that may not even be ready or intended for wider exposure), the canary deployment in Kubernetes would be done using two Deployments with <a href="https://kubernetes.io/docs/concepts/cluster-administration/manage-deployment/#using-labels-effectively">common pod labels</a>. In this case, we can’t use autoscaling anymore because it’s now being done by two independent autoscalers, one for each Deployment, so the replica ratios (percentages) may vary from the desired ratio, depending purely on load.</p> <p>Whether we use one deployment or two, canary management using deployment features of container orchestration platforms like Docker, Mesos/Marathon, or Kubernetes has a fundamental problem: the use of instance scaling to manage the traffic; traffic version distribution and replica deployment are not independent in these systems. All replica pods, regardless of version, are treated the same in the <code>kube-proxy</code> round-robin pool, so the only way to manage the amount of traffic that a particular version receives is by controlling the replica ratio. Maintaining canary traffic at small percentages requires many replicas (e.g., 1% would require a minimum of 100 replicas). Even if we ignore this problem, the deployment approach is still very limited in that it only supports the simple (random percentage) canary approach. If, instead, we wanted to limit the visibility of the canary to requests based on some specific criteria, we still need another solution.</p> <h2 id="enter-istio">Enter Istio</h2> <p>With Istio, traffic routing and replica deployment are two completely independent functions. The pods implementing services are free to scale up and down based on traffic load, completely orthogonal to the control of version traffic routing.
This makes managing a canary version in the presence of autoscaling a much simpler problem. Autoscalers may, in fact, respond to load variations resulting from traffic routing changes, but they are nevertheless functioning independently and no differently than when loads change for other reasons.</p> <p>Istio’s <a href="/v1.9/docs/concepts/traffic-management/#routing-rules">routing rules</a> also provide other important advantages; you can easily control fine-grained traffic percentages (e.g., route 1% of traffic without requiring 100 pods) and you can control traffic using other criteria (e.g., route traffic for specific users to the canary version). To illustrate, let’s look at deploying the <strong>helloworld</strong> service and see how simple the problem becomes.</p> <p>We begin by defining the <strong>helloworld</strong> Service, just like any other Kubernetes service, something like this:</p> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' >apiVersion: v1
kind: Service
metadata:
  name: helloworld
  labels:
    app: helloworld
spec:
  selector:
    app: helloworld
  ...
</code></pre> <p>We then add 2 Deployments, one for each version (<strong>v1</strong> and <strong>v2</strong>), both of which include the service selector’s <code>app: helloworld</code> label:</p> <pre><code class='language-yaml' data-expandlinks='true' data-repo='istio' >apiVersion: apps/v1
kind: Deployment
metadata:
  name: helloworld-v1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: helloworld
        version: v1
    spec:
      containers:
      - image: helloworld-v1
        ...
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: helloworld-v2
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: helloworld
        version: v2
    spec:
      containers:
      - image: helloworld-v2
        ...
</code></pre> <p>Note that this is exactly the same way we would do a <a href="https://kubernetes.io/docs/concepts/cluster-administration/manage-deployment/#canary-deployments">canary deployment</a> using plain Kubernetes, but in that case we would need to adjust the number of replicas of each Deployment to control the distribution of traffic. For example, to send 10% of the traffic to the canary version (<strong>v2</strong>), the replicas for <strong>v1</strong> and <strong>v2</strong> could be set to 9 and 1, respectively.</p> <p>However, since we are going to deploy the service in an <a href="/v1.9/docs/setup/">Istio enabled</a> cluster, all we need to do is set a routing rule to control the traffic distribution. For example if we want to send 10% of the traffic to the canary, we could use <code>kubectl</code> to set a routing rule something like this:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl apply -f - &lt;&lt;EOF
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: helloworld
spec:
  hosts:
  - helloworld
  http:
  - route:
    - destination:
        host: helloworld
        subset: v1
      weight: 90
    - destination:
        host: helloworld
        subset: v2
      weight: 10
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: helloworld
spec:
  host: helloworld
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
EOF
</code></pre> <p>After setting this rule, Istio will ensure that only one tenth of the requests will be sent to the canary version, regardless of how many replicas of each version are running.</p> <h2 id="autoscaling-the-deployments">Autoscaling the deployments</h2> <p>Because we don’t need to maintain replica ratios anymore, we can safely add Kubernetes <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/">horizontal pod autoscalers</a> to manage the replicas for both version Deployments:</p> <pre><code class='language-bash' data-expandlinks='true'
data-repo='istio' >$ kubectl autoscale deployment helloworld-v1 --cpu-percent=50 --min=1 --max=10
deployment &#34;helloworld-v1&#34; autoscaled
</code></pre> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl autoscale deployment helloworld-v2 --cpu-percent=50 --min=1 --max=10
deployment &#34;helloworld-v2&#34; autoscaled
</code></pre> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl get hpa
NAME            REFERENCE                  TARGET  CURRENT  MINPODS  MAXPODS  AGE
Helloworld-v1   Deployment/helloworld-v1   50%     47%      1        10       17s
Helloworld-v2   Deployment/helloworld-v2   50%     40%      1        10       15s
</code></pre> <p>If we now generate some load on the <strong>helloworld</strong> service, we would notice that when scaling begins, the <strong>v1</strong> autoscaler will scale up its replicas significantly higher than the <strong>v2</strong> autoscaler will for its replicas because <strong>v1</strong> pods are handling 90% of the load.</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl get pods | grep helloworld
helloworld-v1-3523621687-3q5wh   0/2       Pending   0          15m
helloworld-v1-3523621687-73642   2/2       Running   0          11m
helloworld-v1-3523621687-7hs31   2/2       Running   0          19m
helloworld-v1-3523621687-dt7n7   2/2       Running   0          50m
helloworld-v1-3523621687-gdhq9   2/2       Running   0          11m
helloworld-v1-3523621687-jxs4t   0/2       Pending   0          15m
helloworld-v1-3523621687-l8rjn   2/2       Running   0          19m
helloworld-v1-3523621687-wwddw   2/2       Running   0          15m
helloworld-v1-3523621687-xlt26   0/2       Pending   0          19m
helloworld-v2-4095161145-963wt   2/2       Running   0          50m
</code></pre> <p>If we then change the routing rule to send 50% of the traffic to <strong>v2</strong>, we should, after a short delay, notice that the <strong>v1</strong> autoscaler will scale down the replicas of <strong>v1</strong> while the <strong>v2</strong> autoscaler will perform a corresponding scale up.</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl get pods | grep helloworld
helloworld-v1-3523621687-73642   2/2       Running   0          35m
helloworld-v1-3523621687-7hs31   2/2       Running   0          43m
helloworld-v1-3523621687-dt7n7   2/2       Running   0          1h
helloworld-v1-3523621687-gdhq9   2/2       Running   0          35m
helloworld-v1-3523621687-l8rjn   2/2       Running   0          43m
helloworld-v2-4095161145-57537   0/2       Pending   0          21m
helloworld-v2-4095161145-9322m   2/2       Running   0          21m
helloworld-v2-4095161145-963wt   2/2       Running   0          1h
helloworld-v2-4095161145-c3dpj   0/2       Pending   0          21m
helloworld-v2-4095161145-t2ccm   0/2       Pending   0          17m
helloworld-v2-4095161145-v3v9n   0/2       Pending   0          13m
</code></pre> <p>The end result is very similar to the simple Kubernetes Deployment rollout, only now the whole process is not being orchestrated and managed in one place. Instead, we’re seeing several components doing their jobs independently, albeit in a cause and effect manner. What&rsquo;s different, however, is that if we now stop generating load, the replicas of both versions will eventually scale down to their minimum (1), regardless of what routing rule we set.</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl get pods | grep helloworld
helloworld-v1-3523621687-dt7n7   2/2       Running   0          1h
helloworld-v2-4095161145-963wt   2/2       Running   0          1h
</code></pre> <h2 id="focused-canary-testing">Focused canary testing</h2> <p>As mentioned above, the Istio routing rules can be used to route traffic based on specific criteria, allowing more sophisticated canary deployment scenarios. Say, for example, instead of exposing the canary to an arbitrary percentage of users, we want to try it out on internal users, maybe even just a percentage of them.
The following command could be used to send 50% of traffic from users at <em>some-company-name.com</em> to the canary version, leaving all other users unaffected:</p> <pre><code class='language-bash' data-expandlinks='true' data-repo='istio' >$ kubectl apply -f - &lt;&lt;EOF apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: helloworld spec: hosts: - helloworld http: - match: - headers: cookie: regex: &#34;^(.*?;)?(email=[^;]*@some-company-name.com)(;.*)?$&#34; route: - destination: host: helloworld subset: v1 weight: 50 - destination: host: helloworld subset: v2 weight: 50 - route: - destination: host: helloworld subset: v1 EOF </code></pre> <p>As before, the autoscalers bound to the two version Deployments will automatically scale the replicas accordingly, but that will have no effect on the traffic distribution.</p> <h2 id="summary">Summary</h2> <p>In this article we’ve seen how Istio supports general scalable canary deployments, and how this differs from the basic deployment support in Kubernetes. Istio’s service mesh provides the control necessary to manage traffic distribution with complete independence from deployment scaling. This allows for a simpler, yet significantly more functional, way to do canary testing and rollout.</p> <p>Intelligent routing in support of canary deployment is just one of the many features of Istio that will make the production deployment of large-scale microservices-based applications much simpler. Check out <a href="/v1.9/">istio.io</a> for more information and to try it out. 
The sample code used in this article can be found <a href="https://github.com/istio/istio/tree/release-1.9/samples/helloworld">here</a>.</p>Wed, 14 Jun 2017 00:00:00 +0000/v1.9/blog/2017/0.1-canary/Frank Budinsky/v1.9/blog/2017/0.1-canary/traffic-managementcanaryUsing Istio to Improve End-to-End Security <p>Conventional network security approaches fail to address security threats to distributed applications deployed in dynamic production environments. Today, we describe how Istio authentication enables enterprises to transform their security posture from just protecting the edge to consistently securing all inter-service communications deep within their applications. With Istio authentication, developers and operators can protect services with sensitive data against unauthorized insider access and they can achieve this without any changes to the application code!</p> <p>Istio authentication is the security component of the broader Istio platform. It incorporates the learnings of securing millions of microservice endpoints in Google’s production environment.</p> <h2 id="background">Background</h2> <p>Modern application architectures are increasingly based on shared services that are deployed and scaled dynamically on cloud platforms. Traditional network edge security (e.g. firewall) is too coarse-grained and allows access from unintended clients. An example of a security risk is stolen authentication tokens that can be replayed from another client. This is a major risk for companies with sensitive data that are concerned about insider threats. 
Other network security approaches like IP whitelists have to be statically defined, are hard to manage at scale, and are unsuitable for dynamic production environments.</p> <p>Thus, security administrators need a tool that enables them to consistently, and by default, secure all communication between services across diverse production environments.</p> <h2 id="solution-strong-service-identity-and-authentication">Solution: strong service identity and authentication</h2> <p>Google has, over the years, developed architecture and technology to uniformly secure millions of microservice endpoints in its production environment against external attacks and insider threats. Key security principles include trusting the endpoints and not the network, strong mutual authentication based on service identity, and service-level authorization. Istio authentication is based on the same principles.</p> <p>The version 0.1 release of Istio authentication runs on Kubernetes and provides the following features:</p> <ul> <li><p>Strong identity assertion between services</p></li> <li><p>Access control to limit the identities that can access a service (and its data)</p></li> <li><p>Automatic encryption of data in transit</p></li> <li><p>Management of keys and certificates at scale</p></li> </ul> <p>Istio authentication is based on industry standards like mutual TLS and X.509. Furthermore, Google is actively contributing to an open, community-driven service security framework called <a href="https://spiffe.io/">SPIFFE</a>. 
As the <a href="https://spiffe.io/">SPIFFE</a> specifications mature, we intend for Istio authentication to become a reference implementation of them.</p> <p>The diagram below provides an overview of Istio&rsquo;s service authentication architecture on Kubernetes.</p> <figure style="width:100%"> <div class="wrapper-with-intrinsic-ratio" style="padding-bottom:56.25%"> <a data-skipendnotes="true" href="/v1.9/blog/2017/0.1-auth/istio_auth_overview.svg" title="Istio Authentication Overview"> <img class="element-to-stretch" src="/v1.9/blog/2017/0.1-auth/istio_auth_overview.svg" alt="Istio Authentication Overview" /> </a> </div> <figcaption>Istio Authentication Overview</figcaption> </figure> <p>The above diagram illustrates three key security features:</p> <h3 id="strong-identity">Strong identity</h3> <p>Istio authentication uses <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/">Kubernetes service accounts</a> to identify who the service runs as. The identity is used to establish trust and define service-level access policies. The identity is assigned at service deployment time and encoded in the SAN (Subject Alternative Name) field of an X.509 certificate. Using a service account as the identity has the following advantages:</p> <ul> <li><p>Administrators can configure who has access to a service account by using the <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/">RBAC</a> feature introduced in Kubernetes 1.6</p></li> <li><p>Flexibility to identify a human user, a service, or a group of services</p></li> <li><p>Stability of the service identity for dynamically placed and auto-scaled workloads</p></li> </ul> <h3 id="communication-security">Communication security</h3> <p>Service-to-service communication is tunneled through high-performance client-side and server-side <a href="https://envoyproxy.github.io/envoy/">Envoy</a> proxies. The communication between the proxies is secured using mutual TLS. 
The benefit of using mutual TLS is that the service identity is not expressed as a bearer token that can be stolen or replayed from another source. Istio authentication also introduces the concept of Secure Naming to protect from server spoofing attacks - the client-side proxy verifies that the authenticated server&rsquo;s service account is allowed to run the named service.</p> <h3 id="key-management-and-distribution">Key management and distribution</h3> <p>Istio authentication provides a per-cluster CA (Certificate Authority) and automated key &amp; certificate management. In this context, Istio authentication:</p> <ul> <li><p>Generates a key and certificate pair for each service account.</p></li> <li><p>Distributes keys and certificates to the appropriate pods using <a href="https://kubernetes.io/docs/concepts/configuration/secret/">Kubernetes Secrets</a>.</p></li> <li><p>Rotates keys and certificates periodically.</p></li> <li><p>Revokes a specific key and certificate pair when necessary (future).</p></li> </ul> <p>The following diagram explains the end-to-end Istio authentication workflow on Kubernetes:</p> <figure style="width:100%"> <div class="wrapper-with-intrinsic-ratio" style="padding-bottom:56.25%"> <a data-skipendnotes="true" href="/v1.9/blog/2017/0.1-auth/istio_auth_workflow.svg" title="Istio Authentication Workflow"> <img class="element-to-stretch" src="/v1.9/blog/2017/0.1-auth/istio_auth_workflow.svg" alt="Istio Authentication Workflow" /> </a> </div> <figcaption>Istio Authentication Workflow</figcaption> </figure> <p>Istio authentication is part of the broader security story for containers. Red Hat, a partner on the development of Kubernetes, has identified <a href="https://www.redhat.com/en/resources/container-security-openshift-cloud-devops-whitepaper">10 Layers</a> of container security. Istio addresses two of these layers: &ldquo;Network Isolation&rdquo; and &ldquo;API and Service Endpoint Management&rdquo;. 
As cluster federation evolves on Kubernetes and other platforms, our intent is for Istio to secure communications across services spanning multiple federated clusters.</p> <h2 id="benefits-of-istio-authentication">Benefits of Istio authentication</h2> <p><strong>Defense in depth</strong>: When used in conjunction with Kubernetes (or infrastructure) network policies, users achieve higher levels of confidence, knowing that pod-to-pod or service-to-service communication is secured both at network and application layers.</p> <p><strong>Secure by default</strong>: When used with Istio’s proxy and centralized policy engine, Istio authentication can be configured during deployment with minimal or no application change. Administrators and operators can thus ensure that service communications are secured by default and that they can enforce these policies consistently across diverse protocols and runtimes.</p> <p><strong>Strong service authentication</strong>: Istio authentication secures service communication using mutual TLS to ensure that the service identity is not expressed as a bearer token that can be stolen or replayed from another source. This ensures that services with sensitive data can only be accessed from strongly authenticated and authorized clients.</p> <h2 id="join-us-in-this-journey">Join us in this journey</h2> <p>Istio authentication is the first step towards providing a full stack of capabilities to protect services with sensitive data from external attacks and insider threats. While the initial version runs on Kubernetes, our goal is to enable Istio authentication to secure services across diverse production environments. We encourage the community to <a href="https://github.com/istio/istio/tree/release-1.9/security">join us</a> in making robust service security easy and ubiquitous across different application stacks and runtime platforms.</p>Thu, 25 May 2017 00:00:00 +0000/v1.9/blog/2017/0.1-auth/The Istio Team/v1.9/blog/2017/0.1-auth/
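<p>To make the Secure Naming idea described above concrete, here is a minimal sketch, in plain Python rather than in terms of Istio&rsquo;s actual proxy code, of the check a client-side proxy performs: the authenticated server identity (the SPIFFE-style URI carried in the X.509 SAN) must be among the identities allowed to run the named service. The service name, identities, and mapping below are illustrative examples, not Istio&rsquo;s real data structures.</p>

```python
# Illustrative sketch of the Secure Naming check (not Istio's implementation).
# Hypothetical mapping: named service -> service-account identities allowed to run it.
SECURE_NAMING = {
    "helloworld.default.svc.cluster.local": {
        "spiffe://cluster.local/ns/default/sa/helloworld",
    },
}

def secure_naming_check(service_name: str, peer_identity: str) -> bool:
    """Accept the connection only if the authenticated peer identity
    is allowed to run the named service; reject everything else."""
    return peer_identity in SECURE_NAMING.get(service_name, set())
```

<p>Because the identity is bound to the mutual TLS handshake rather than carried as a bearer token, a stolen credential replayed from elsewhere presents an identity that simply fails this lookup.</p>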