diff --git a/assets/scss/_custom.scss b/assets/scss/_custom.scss
index 1b172d3c95..134e022505 100644
--- a/assets/scss/_custom.scss
+++ b/assets/scss/_custom.scss
@@ -997,6 +997,16 @@ div.alert > em.javascript-required {
#bing-results-container {
padding: 1em;
}
+.bing-result {
+ margin-bottom: 1em;
+}
+.bing-result-url {
+ font-size: 14px;
+}
+.bing-result-snippet {
+ color: #666666;
+ font-size: 14px;
+}
#bing-pagination-container {
padding: 1em;
margin-bottom: 1em;
diff --git a/content/de/_index.html b/content/de/_index.html
index ab7427938f..ea84bd7f01 100644
--- a/content/de/_index.html
+++ b/content/de/_index.html
@@ -4,6 +4,7 @@ abstract: "Automatisierte Bereitstellung, Skalierung und Verwaltung von Containe
cid: home
---
+{{< site-searchbar >}}
{{< blocks/section id="oceanNodes" >}}
{{% blocks/feature image="flower" %}}
diff --git a/content/en/blog/_posts/2021-07-15-SIG-Usability-Spotlight.md b/content/en/blog/_posts/2021-07-15-SIG-Usability-Spotlight.md
index 43488fc11f..722d26372d 100644
--- a/content/en/blog/_posts/2021-07-15-SIG-Usability-Spotlight.md
+++ b/content/en/blog/_posts/2021-07-15-SIG-Usability-Spotlight.md
@@ -5,7 +5,18 @@ date: 2021-07-15
slug: sig-usability-spotlight-2021
---
-**Author:** Kunal Kushwaha, Civo
+**Author:** Kunal Kushwaha (Civo)
+
+{{< note >}}
+SIG Usability, which is featured in this Spotlight blog, has been deprecated and is no longer active.
+As a result, the links and information provided in this blog post may no longer be valid or relevant.
+Should there be renewed interest and increased participation in the future, the SIG may be revived.
+However, as of August 2023, the SIG is inactive per the Kubernetes community policy.
+The Kubernetes project encourages you to explore other
+[SIGs](https://github.com/kubernetes/community/blob/master/sig-list.md#special-interest-groups)
+and resources available on the Kubernetes website to stay up-to-date with the latest developments
+and enhancements in Kubernetes.
+{{< /note >}}
## Introduction
diff --git a/content/en/blog/_posts/2023-10-23-pv-last-phase-transtition-time.md b/content/en/blog/_posts/2023-10-23-pv-last-phase-transtition-time.md
index 819e8f7635..7efeae7f3f 100644
--- a/content/en/blog/_posts/2023-10-23-pv-last-phase-transtition-time.md
+++ b/content/en/blog/_posts/2023-10-23-pv-last-phase-transtition-time.md
@@ -32,7 +32,6 @@ including manual cleanup based on the time a volume was last used or producing a
Provided you've enabled the feature gate (see [How to use it](#how-to-use-it)), the new `.status.lastPhaseTransitionTime` field of a PersistentVolume (PV)
is updated every time that PV transitions from one phase to another.
-``
Whether it's transitioning from `Pending` to `Bound`, `Bound` to `Released`, or any other phase transition, the `lastPhaseTransitionTime` will be recorded.
For newly created PVs the phase will be set to `Pending` and the `lastPhaseTransitionTime` will be recorded as well.
diff --git a/content/en/blog/_posts/2023-10-25-introducing-ingress2gateway/gateway-api-resources.svg b/content/en/blog/_posts/2023-10-25-introducing-ingress2gateway/gateway-api-resources.svg
new file mode 100644
index 0000000000..3484bb01e6
--- /dev/null
+++ b/content/en/blog/_posts/2023-10-25-introducing-ingress2gateway/gateway-api-resources.svg
@@ -0,0 +1,1539 @@
+
+
diff --git a/content/en/blog/_posts/2023-10-25-introducing-ingress2gateway/index.md b/content/en/blog/_posts/2023-10-25-introducing-ingress2gateway/index.md
new file mode 100644
index 0000000000..3e726fd30e
--- /dev/null
+++ b/content/en/blog/_posts/2023-10-25-introducing-ingress2gateway/index.md
@@ -0,0 +1,203 @@
+---
+layout: blog
+title: "Introducing ingress2gateway: Simplifying Upgrades to Gateway API"
+date: 2023-10-25T10:00:00-08:00
+slug: introducing-ingress2gateway
+---
+
+**Authors:** Lior Lieberman (Google), Kobi Levi (independent)
+
+Today we are releasing [ingress2gateway](https://github.com/kubernetes-sigs/ingress2gateway), a tool
+that can help you migrate from [Ingress](/docs/concepts/services-networking/ingress/) to [Gateway
+API](https://gateway-api.sigs.k8s.io). Gateway API is just weeks away from graduating to GA; if you
+haven't upgraded yet, now's the time to think about it!
+
+
+## Background
+
+In the ever-evolving world of Kubernetes, networking plays a pivotal role. As more applications are
+deployed in Kubernetes clusters, effective exposure of these services to clients becomes a critical
+concern. If you've been working with Kubernetes, you're likely familiar with the [Ingress API],
+which has been the go-to solution for managing external access to services.
+
+[Ingress API]:/docs/concepts/services-networking/ingress/
+
+The Ingress API provides a way to route external traffic to your applications within the cluster,
+making it an indispensable tool for many Kubernetes users. Ingress has its limitations however, and
+as applications become more complex and the demands on your Kubernetes clusters increase, these
+limitations can become bottlenecks.
+
+Some of the limitations are:
+
+- **Insufficient common denominator** - by attempting to establish a common denominator for various
+ HTTP proxies, Ingress can only accommodate basic HTTP routing, forcing more features of
+ contemporary proxies like traffic splitting and header matching into provider-specific,
+ non-transferable annotations.
+- **Inadequate permission model** - Ingress spec configures both infrastructure and application
+ configuration in one object. With Ingress, the cluster operator and application developer operate
+  on the same Ingress object without being aware of each other’s roles. This results in an
+  insufficient role-based access control model and a high potential for setup errors.
+- **Lack of protocol diversity** - Ingress primarily focuses on HTTP(S) routing and does not provide
+ native support for other protocols, such as TCP, UDP and gRPC. This limitation makes it less
+ suitable for handling non-HTTP workloads.
+
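+As an illustration of the first limitation, here's a sketch of canary traffic splitting with
+Ingress. It relies on annotations specific to one controller (the community NGINX ingress
+controller's canary annotations are used here; resource and service names are illustrative), so the
+same intent has to be re-expressed for every other proxy:
+
+```yaml
+apiVersion: networking.k8s.io/v1
+kind: Ingress
+metadata:
+  name: demo-canary
+  annotations:
+    # Only understood by ingress-nginx; other controllers need
+    # entirely different annotations for the same traffic split.
+    nginx.ingress.kubernetes.io/canary: "true"
+    nginx.ingress.kubernetes.io/canary-weight: "20"
+spec:
+  rules:
+  - host: demo.example.com
+    http:
+      paths:
+      - path: /
+        pathType: Prefix
+        backend:
+          service:
+            name: demo-v2
+            port:
+              number: 80
+```
+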
+## Gateway API
+
+To overcome these limitations, Gateway API is designed to provide a more flexible, extensible, and
+powerful way to manage traffic to your services.
+
+Gateway API is just weeks away from a GA (General Availability) release. It provides a standard
+Kubernetes API for ingress traffic control. It offers extended functionality, improved
+customization, and greater flexibility. By focusing on modular and expressive API resources, Gateway
+API makes it possible to describe a wider array of routing configurations and models.
+
+The transition from Ingress API to Gateway API in Kubernetes is driven by advantages and advanced
+functionalities that Gateway API offers, with its foundation built on four core principles: a
+role-oriented approach, portability, expressiveness and extensibility.
+
+### A role-oriented approach
+
+Gateway API employs a role-oriented approach that aligns with the conventional roles within
+organizations involved in configuring Kubernetes service networking. This approach enables
+infrastructure engineers, cluster operators, and application developers to collectively address
+different aspects of Gateway API.
+
+For instance, infrastructure engineers play a pivotal role in deploying GatewayClasses,
+cluster-scoped resources that act as templates to explicitly define behavior for Gateways derived
+from them, laying the groundwork for robust service networking.
+
+Subsequently, cluster operators utilize these GatewayClasses to deploy gateways. A Gateway in
+Kubernetes' Gateway API defines how external traffic can be directed to Services within the cluster,
+essentially bridging non-Kubernetes sources to Kubernetes-aware destinations. It represents a
+request for a load balancer configuration aligned with a GatewayClass’ specification. The Gateway
+spec may not be exhaustive as some details can be supplied by the GatewayClass controller, ensuring
+portability. Additionally, a Gateway can be linked to multiple Route references to channel specific
+traffic subsets to designated services.
+
+Lastly, application developers configure route resources (such as HTTPRoutes) to manage
+configuration (e.g. timeouts, request matching/filters) and Service composition (e.g. path routing
+to backends). Route resources define protocol-specific rules for mapping requests from a Gateway to
+Kubernetes Services. HTTPRoute is for multiplexing HTTP or terminated HTTPS connections. It's
+intended for use in cases where you want to inspect the HTTP stream and use HTTP request data for
+either routing or modification, for example using HTTP Headers for routing, or modifying them
+in-flight.
+
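+The resources described above can be sketched as follows (a minimal, illustrative example; the
+class, gateway, route, and Service names are assumptions, not a specific implementation):
+
+```yaml
+# Deployed by a cluster operator, referencing an infrastructure-provided class.
+apiVersion: gateway.networking.k8s.io/v1beta1
+kind: Gateway
+metadata:
+  name: example-gateway
+spec:
+  gatewayClassName: example-class
+  listeners:
+  - name: http
+    protocol: HTTP
+    port: 80
+---
+# Deployed by an application developer: attaches to the Gateway and
+# maps a path prefix to a backend Service.
+apiVersion: gateway.networking.k8s.io/v1beta1
+kind: HTTPRoute
+metadata:
+  name: example-route
+spec:
+  parentRefs:
+  - name: example-gateway
+  rules:
+  - matches:
+    - path:
+        type: PathPrefix
+        value: /app
+    backendRefs:
+    - name: example-svc
+      port: 80
+```
+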
+{{< figure src="gateway-api-resources.svg" alt="Diagram showing the key resources that make up Gateway API and how they relate to each other. The resources shown are GatewayClass, Gateway, and HTTPRoute; the Service API is also shown" class="diagram-medium" >}}
+
+### Portability
+
+With more than 20 [API
+implementations](https://gateway-api.sigs.k8s.io/implementations/#implementations), Gateway API is
+designed to be more portable across different implementations, clusters and environments. It helps
+reduce Ingress' reliance on non-portable, provider-specific annotations, making your configurations
+more consistent and easier to manage across multiple clusters.
+
+Gateway API commits to supporting the 5 latest Kubernetes minor versions. That means that Gateway
+API currently supports Kubernetes 1.24+.
+
+### Expressiveness
+
+Gateway API provides standard, Kubernetes-backed support for a wide range of features, such as
+header-based matching, traffic splitting, weight-based routing, request mirroring and more. With
+Ingress, these features need custom provider-specific annotations.
+
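+For instance, a weighted traffic split that would need provider-specific annotations with Ingress
+is a first-class part of the HTTPRoute spec (a sketch with illustrative names):
+
+```yaml
+apiVersion: gateway.networking.k8s.io/v1beta1
+kind: HTTPRoute
+metadata:
+  name: demo-split
+spec:
+  parentRefs:
+  - name: example-gateway
+  rules:
+  - backendRefs:
+    # Traffic is distributed in proportion to the weights: 80% / 20%.
+    - name: demo-v1
+      port: 80
+      weight: 80
+    - name: demo-v2
+      port: 80
+      weight: 20
+```
+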
+### Extensibility
+
+Gateway API is designed with extensibility as a core feature. Rather than enforcing a
+one-size-fits-all model, it offers the flexibility to link custom resources at multiple layers
+within the API's framework. This layered approach to customization ensures that users can tailor
+configurations to their specific needs without overwhelming the main structure. By doing so, Gateway
+API facilitates more granular and context-sensitive adjustments, allowing for a fine-tuned balance
+between standardization and adaptability. This becomes particularly valuable in complex cloud-native
+environments where specific use cases require nuanced configurations. A critical difference is that
+Gateway API has a much broader base set of features and a standard pattern for extensions that can
+be more expressive than annotations were on Ingress.
+
+
+## Upgrading to Gateway
+
+Migrating from Ingress to Gateway API may seem intimidating, but luckily Kubernetes just released a
+tool to simplify the process. [ingress2gateway](https://github.com/kubernetes-sigs/ingress2gateway)
+assists in the migration by converting your existing Ingress resources into Gateway API resources.
+Here is how you can get started with Gateway API and using ingress2gateway:
+
+1. [Install a Gateway
+ controller](https://gateway-api.sigs.k8s.io/guides/#installing-a-gateway-controller) OR [install
+   the Gateway API CRDs manually](https://gateway-api.sigs.k8s.io/guides/#installing-gateway-api).
+
+2. Install [ingress2gateway](https://github.com/kubernetes-sigs/ingress2gateway).
+
+ If you have a Go development environment locally, you can install `ingress2gateway` with:
+
+ ```
+ go install github.com/kubernetes-sigs/ingress2gateway@v0.1.0
+ ```
+
+ This installs `ingress2gateway` to `$(go env GOPATH)/bin/ingress2gateway`.
+
+ Alternatively, follow the installation guide
+ [here](https://github.com/kubernetes-sigs/ingress2gateway#installation).
+
+3. Once the tool is installed, you can use it to convert the ingress resources in your cluster to
+ Gateway API resources.
+
+ ```
+ ingress2gateway print
+ ```
+
+   The above command will:
+
+ 1. Load your current Kubernetes client config including the active context, namespace and
+ authentication details.
+ 2. Search for ingresses and provider-specific resources in that namespace.
+   3. Convert them to Gateway API resources (currently only Gateways and HTTPRoutes). For other
+ options you can run the tool with `-h`, or refer to
+ [https://github.com/kubernetes-sigs/ingress2gateway#options](https://github.com/kubernetes-sigs/ingress2gateway#options).
+
+4. Review the converted Gateway API resources, validate them, and then apply them to your cluster.
+
+5. Send test requests to your Gateway to check that it is working. You could get your gateway
+   address using `kubectl get gateway <gateway-name> -n <namespace> -o
+   jsonpath='{.status.addresses}{"\n"}'`.
+
+6. Update your DNS to point to the new Gateway.
+
+7. Once you've confirmed that no more traffic is going through your Ingress configuration, you can
+ safely delete it.
+
+## Wrapping up
+
+Achieving reliable, scalable and extensible networking has always been a challenging objective. The
+Gateway API is designed to improve the current Kubernetes networking standards like Ingress and
+reduce the need for implementation-specific annotations and CRDs.
+
+It is a Kubernetes standard API, consistent across different platforms and implementations and most
+importantly it is future proof. Gateway API is the next generation of the Ingress API, but has a
+larger scope than that, expanding to tackle mesh and layer 4 routing as well. Gateway API and
+ingress2gateway are supported by a dedicated team under SIG Network that actively works on them and
+manages the ecosystem. They are also likely to receive more updates and community support.
+
+### The Road Ahead
+
+ingress2gateway is just getting started. We're planning to onboard more providers, introduce support
+for more types of Gateway API routes, and make sure everything syncs up smoothly with the ongoing
+development of Gateway API.
+
+Excitingly, Gateway API is also making significant strides. While v1.0 is about to launch,
+there's still a lot of work ahead. This release incorporates many new experimental features, with
+additional functionalities currently in the early stages of planning and development.
+
+If you're interested in helping to contribute, we would love to have you! Please check out the
+[community page](https://gateway-api.sigs.k8s.io/contributing/community/) which includes links to
+the Slack channel and community meetings. We look forward to seeing you!
+
+### Useful Links
+
+- Get involved with the Ingress2Gateway project on
+ [GitHub](https://github.com/kubernetes-sigs/ingress2gateway)
+- Open a new issue -
+ [ingress2gateway](https://github.com/kubernetes-sigs/ingress2gateway/issues/new/choose), [Gateway
+ API](https://github.com/kubernetes-sigs/gateway-api/issues/new/choose).
+- Join our [discussions](https://github.com/kubernetes-sigs/gateway-api/discussions).
+- [Gateway API Getting Started](https://gateway-api.sigs.k8s.io/guides/)
+- [Gateway API Implementations](https://gateway-api.sigs.k8s.io/implementations/#gateways)
diff --git a/content/en/blog/_posts/2023-10-31-Gateway-API-GA/gateway-api-logo.png b/content/en/blog/_posts/2023-10-31-Gateway-API-GA/gateway-api-logo.png
new file mode 100644
index 0000000000..5a2215397f
Binary files /dev/null and b/content/en/blog/_posts/2023-10-31-Gateway-API-GA/gateway-api-logo.png differ
diff --git a/content/en/blog/_posts/2023-10-31-Gateway-API-GA/index.md b/content/en/blog/_posts/2023-10-31-Gateway-API-GA/index.md
new file mode 100644
index 0000000000..2575379f77
--- /dev/null
+++ b/content/en/blog/_posts/2023-10-31-Gateway-API-GA/index.md
@@ -0,0 +1,153 @@
+---
+layout: blog
+title: "Gateway API v1.0: GA Release"
+date: 2023-10-31T10:00:00-08:00
+slug: gateway-api-ga
+---
+
+**Authors:** Shane Utt (Kong), Nick Young (Isovalent), Rob Scott (Google)
+
+On behalf of Kubernetes SIG Network, we are pleased to announce the v1.0 release of [Gateway
+API](https://gateway-api.sigs.k8s.io/)! This release marks a huge milestone for
+this project. Several key APIs are graduating to GA (generally available), while
+other significant features have been added to the Experimental channel.
+
+## What's new
+
+### Graduation to v1
+This release includes the graduation of
+[Gateway](https://gateway-api.sigs.k8s.io/api-types/gateway/),
+[GatewayClass](https://gateway-api.sigs.k8s.io/api-types/gatewayclass/), and
+[HTTPRoute](https://gateway-api.sigs.k8s.io/api-types/httproute/) to v1, which
+means they are now generally available (GA). This API version denotes a high
+level of confidence in the API surface and provides guarantees of backwards
+compatibility. Note that although the versions of these APIs included in the
+Standard channel are now considered stable, that does not mean that they are
+complete. These APIs will continue to receive new features via the Experimental
+channel as they meet graduation criteria. For more information on how all of
+this works, refer to the [Gateway API Versioning
+Policy](https://gateway-api.sigs.k8s.io/concepts/versioning/).
+
+### Logo
+Gateway API now has a logo! This logo was designed through a collaborative
+process, and is intended to represent the idea that this is a set of Kubernetes
+APIs for routing traffic both north-south and east-west:
+
+
+
+### CEL Validation
+Historically, Gateway API has bundled a validating webhook as part of installing
+the API. Starting in v1.0, webhook installation is optional and only recommended
+for Kubernetes 1.24. Gateway API now includes
+[CEL](/docs/reference/using-api/cel/) validation rules as
+part of the
+[CRDs](/docs/concepts/extend-kubernetes/api-extension/custom-resources/).
+This new form of validation is supported in Kubernetes 1.25+, and thus the
+validating webhook is no longer required in most installations.
+
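+As a sketch of what this looks like, CEL rules live in the CRD's OpenAPI schema under the
+`x-kubernetes-validations` field, and the API server enforces them directly. This fragment is
+illustrative, not an excerpt of the actual Gateway API CRDs:
+
+```yaml
+openAPIV3Schema:
+  type: object
+  properties:
+    spec:
+      type: object
+      properties:
+        port:
+          type: integer
+      # Rejected at admission time by the API server itself; no webhook involved.
+      x-kubernetes-validations:
+      - rule: "!has(self.port) || (self.port >= 1 && self.port <= 65535)"
+        message: "port must be between 1 and 65535"
+```
+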
+### Standard channel
+This release was primarily focused on ensuring that the existing beta APIs were
+well defined and sufficiently stable to graduate to GA. That led to a variety of
+spec clarifications, as well as some improvements to status to improve the
+overall UX when interacting with Gateway API.
+
+### Experimental channel
+Most of the changes included in this release were limited to the experimental
+channel. These include HTTPRoute timeouts, TLS config from Gateways to backends,
+WebSocket support, Gateway infrastructure labels, and more. Stay tuned for a
+follow up blog post that will cover each of these new features in detail.
+
+### Everything else
+For a full list of the changes included in this release, please refer to the
+[v1.0.0 release
+notes](https://github.com/kubernetes-sigs/gateway-api/releases/tag/v1.0.0).
+
+## How we got here
+
+The idea of Gateway API was initially [proposed](https://youtu.be/Ne9UJL6irXY?si=wgtC9w8PMB5ZHil2)
+4 years ago at KubeCon San Diego as the next generation
+of the Ingress API. Since then, an incredible community has formed to develop what
+has likely become the most collaborative API in Kubernetes history. Over 170
+people have contributed to this API so far, and that number continues to grow.
+
+A special thank you to the 20+ [community members who agreed to take on an
+official role in the
+project](https://github.com/kubernetes-sigs/gateway-api/blob/main/OWNERS_ALIASES),
+providing some time for reviews and sharing the load of maintaining the project!
+
+We especially want to highlight the emeritus maintainers that played a pivotal
+role in the early development of this project:
+
+* [Bowei Du](https://github.com/bowei)
+* [Daneyon Hansen](https://github.com/danehans)
+* [Harry Bagdi](https://github.com/hbagdi)
+
+## Try it out
+
+Unlike other Kubernetes APIs, you don't need to upgrade to the latest version of
+Kubernetes to get the latest version of Gateway API. As long as you're running
+one of the 5 most recent minor versions of Kubernetes (1.24+), you'll be able to
+get up and running with the latest version of Gateway API.
+
+To try out the API, follow our [Getting Started
+guide](https://gateway-api.sigs.k8s.io/guides/).
+
+## What's next
+
+This release is just the beginning of a much larger journey for Gateway API, and
+there are still plenty of new features and new ideas in flight for future
+releases of the API.
+
+One of our key goals going forward is to work to stabilize and graduate other
+experimental features of the API. These include [support for service
+mesh](https://gateway-api.sigs.k8s.io/concepts/gamma/), additional route types
+([GRPCRoute](https://gateway-api.sigs.k8s.io/references/spec/#gateway.networking.k8s.io/v1alpha2.GRPCRoute),
+[TCPRoute](https://gateway-api.sigs.k8s.io/references/spec/#gateway.networking.k8s.io/v1alpha2.TCPRoute),
+[TLSRoute](https://gateway-api.sigs.k8s.io/references/spec/#gateway.networking.k8s.io/v1alpha2.TLSRoute),
+[UDPRoute](https://gateway-api.sigs.k8s.io/references/spec/#gateway.networking.k8s.io/v1alpha2.UDPRoute)),
+and a variety of experimental features.
+
+We've also been working towards moving
+[ReferenceGrant](https://gateway-api.sigs.k8s.io/api-types/referencegrant/) into
+a built-in Kubernetes API that can be used for more than just Gateway API.
+Within Gateway API, we've used this resource to safely enable cross-namespace
+references, and that concept is now being adopted by other SIGs. The new version
+of this API will be owned by SIG Auth and will likely include at least some
+modifications as it migrates to a built-in Kubernetes API.
+
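+As a sketch, a ReferenceGrant is created in the namespace being referenced and explicitly allows
+references coming from another namespace (the names and namespaces here are illustrative):
+
+```yaml
+# Placed in the "backend" namespace: allows HTTPRoutes in the "frontend"
+# namespace to reference Services here. Without it, the cross-namespace
+# reference is rejected.
+apiVersion: gateway.networking.k8s.io/v1beta1
+kind: ReferenceGrant
+metadata:
+  name: allow-frontend-routes
+  namespace: backend
+spec:
+  from:
+  - group: gateway.networking.k8s.io
+    kind: HTTPRoute
+    namespace: frontend
+  to:
+  - group: ""
+    kind: Service
+```
+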
+### Gateway API at KubeCon + CloudNativeCon
+
+At [KubeCon North America
+(Chicago)](https://events.linuxfoundation.org/kubecon-cloudnativecon-north-america/)
+and the adjacent [Contributor
+Summit](https://www.kubernetes.dev/events/2023/kcsna/) there are several talks
+related to Gateway API that will go into more detail on these topics. If you're
+attending either of these events this year, consider adding these to your
+schedule.
+
+**Contributor Summit:**
+
+- [Lessons Learned Building a GA API with CRDs](https://sched.co/1Sp9u)
+- [Conformance Profiles: Building a generic conformance test reporting framework](https://sched.co/1Sp9l)
+- [Gateway API: Beyond GA](https://sched.co/1SpA9)
+
+**KubeCon Main Event:**
+
+- [Gateway API: The Most Collaborative API in Kubernetes History Is GA](https://sched.co/1R2qM)
+
+**KubeCon Office Hours:**
+
+Gateway API maintainers will be holding office hours sessions at KubeCon if
+you'd like to discuss or brainstorm any related topics. To get the latest
+updates on these sessions, join the `#sig-network-gateway-api` channel on
+[Kubernetes Slack](https://slack.kubernetes.io/).
+
+## Get involved
+
+We've only barely scratched the surface of what's in flight with Gateway API.
+There are lots of opportunities to get involved and help define the future of
+Kubernetes routing APIs for both Ingress and Mesh.
+
+If this is interesting to you, please [join us in the
+community](https://gateway-api.sigs.k8s.io/contributing/) and help us build the
+future of Gateway API together!
diff --git a/content/en/blog/_posts/2023-11-02-kcseu2023-spotlight/index.md b/content/en/blog/_posts/2023-11-02-kcseu2023-spotlight/index.md
new file mode 100644
index 0000000000..ff6d2cad50
--- /dev/null
+++ b/content/en/blog/_posts/2023-11-02-kcseu2023-spotlight/index.md
@@ -0,0 +1,176 @@
+---
+layout: blog
+title: "Kubernetes Contributor Summit: Behind-the-scenes"
+slug: k8s-contributor-summit-behind-the-scenes
+date: 2023-11-03
+canonicalUrl: https://www.k8s.dev/blog/2023/11/03/k8s-contributor-summit-behind-the-scenes/
+---
+
+**Author:** Frederico Muñoz (SAS Institute)
+
+Every year, just before the official start of KubeCon+CloudNativeCon, there's a special event that
+has a very special place in the hearts of those organizing and participating in it: the Kubernetes
+Contributor Summit. To find out why, and to provide a behind-the-scenes perspective, we interview
+Noah Abrahams, whom amongst other roles was the co-lead for the Kubernetes Contributor Summit in
+2023.
+
+
+**Frederico Muñoz (FSM)**: Hello Noah, and welcome. Could you start by introducing yourself and
+telling us how you got involved in Kubernetes?
+
+**Noah Abrahams (NA)**: I’ve been in this space for quite a while. I got started in IT in the mid
+90's, and I’ve been working in the "Cloud" space for about 15 years. It was, frankly, through a
+combination of sheer luck (being in the right place at the right time) and having good mentors to
+pull me into those places (thanks, Tim!), that I ended up at a startup called Apprenda in 2016.
+While I was there, they pivoted into Kubernetes, and it was the best thing that could have happened
+to my career. It was around v1.2 and someone asked me if I could give a presentation on Kubernetes
+concepts at "my local meetup" in Las Vegas. The meetup didn’t exist yet, so I created it, and got
+involved in the wider community. One thing led to another, and soon I was involved in ContribEx,
+joined the release team, was doing booth duty for the CNCF, became an ambassador, and here we are
+today.
+
+## The Contributor Summit
+
+
+
+**FM**: Before leading the organisation of the KCSEU 2023, how many other Contributor Summits were
+you a part of?
+
+**NA**: I was involved in four or five before taking the lead. If I'm recalling correctly, I
+attended the summit in Copenhagen, then sometime in 2018 I joined the wrong meeting, because the
+summit staff meeting was listed on the ContribEx calendar. Instead of dropping out of the call, I
+listened a bit, then volunteered to take on some work that didn't look like it had anybody yet
+dedicated to it. I ended up running Ops in Seattle and helping run the New Contributor Workshop in
+Shanghai, that year. Since then, I’ve been involved in all but two, since I missed both Barcelona
+and Valencia.
+
+**FM**: Have you noticed any major changes in terms of how the conference is organized throughout
+the years? Namely in terms of number of participants, venues, speakers, themes...
+
+**NA**: The summit changes over the years with the ebb and flow of the desires of the contributors
+that attend. While we can typically expect about the same number of attendees, depending on the
+region that the event is held in, we adapt the style and content greatly based on the feedback that
+we receive at the end of each event. Some years, contributors ask for more free-style or
+unconference type sessions, and we plan on having more of those, but some years, people ask for more
+planned sessions or workshops, so that's what we facilitate. We also have to continually adapt to
+the venue that we have, the number of rooms we're allotted, how we're going to share the space with
+other events and so forth. That all goes into the planning ahead of time, from how many talk tracks
+we’ll have, to what types of tables and how many microphones we want in a room.
+
+There has been one very significant change over the years, though, and that is that we no longer run
+the New Contributor Workshop. While the content was valuable, running the session during the summit
+never led to any people who weren’t already contributing to the project becoming dedicated
+contributors to the project, so we removed it from the schedule. We'll deliver that content another
+way, while we’ll keep the summit focused on existing contributors.
+
+## What makes it special
+
+**FM**: Going back to the introduction I made, I’ve heard several participants saying that KubeCon
+is great, but that the Contributor Summit is for them the main event. In your opinion, why do you
+think that is?
+
+**NA**: I think part of it ties into what I mentioned a moment ago, the flexibility in our content
+types. For many contributors, I think the summit is basically "How Kubecon used to be", back when
+it was primarily a gathering of the contributors to talk about the health of the project and the
+work that needed to be done. So, in that context, if the contributors want to discuss, say, a new
+Working Group, then they have dedicated space to do so in the summit. They also have the space to
+sit down and hack on a tough problem, discuss architectural philosophy, bring potential problems to
+more people’s attention, refine our methods, and so forth. Plus, the unconference aspect allows for
+some malleability on the day-of, for whatever is most important right then and there. Whatever
+folks want to get out of this environment is what we’ll provide, and having a space and time
+specifically to address your particular needs is always going to be well received.
+
+Let's not forget the social aspect, too. Despite the fact that we're a global community and work
+together remotely and asynchronously, it's still easier to work together when you have a personal
+connection, and can put a face to a GitHub handle. Zoom meetings are a good start, but even a
+single instance of in-person time makes a big difference in how people work together. So, getting
+folks together a couple times a year makes the project run more smoothly.
+
+## Organizing the Summit
+
+**FM**: In terms of the organization team itself, could you share with us a general overview of the
+staffing process? Who are the people that make it happen? How many different teams are involved?
+
+**NA**: There's a bit of the "usual suspects" involved in making this happen, many of whom you'll
+find in the ContribEx meetings, but really it comes down to whoever is going to step up and do the
+work. We start with a general call out for volunteers from the org. There's a GitHub issue where
+we'll track the staffing, and that will get shouted out to all the usual comms channels: Slack,
+k-dev, etc.
+
+From there, there's a handful of different teams, overseeing content/program committee,
+registration, communications, day-of operations, the awards the SIGs present to their members, the
+after-summit social event, and so on. The leads for each team/role are generally picked from folks
+who have stepped up and worked the event before, either as a shadow, or a previous lead, so we know
+we can rely on them, which is a recurring theme. The leads pick their shadows from whoever pipes up
+on the issue, and the teams move forward, operating according to their role books, which we try to
+update at the end of each summit, with what we've learned over the past few months. It's expected
+that a shadow will be in line to lead that role at some point in a future summit, so we always have
+a good bench of folks available to make this event happen. A couple of the roles also have some
+non-shadow volunteers where people can step in to help a bit, like as an on-site room monitor, and
+get a feel for how things are put together without having to give a serious up-front commitment, but
+most of the folks working the event are dedicated to both making the summit successful, and coming
+back to do so in the future. Of course, the roster can change over time, or even suddenly, as
+people gain or lose travel budget, get new jobs, only attend Europe or North America or Asia, etc.
+It's a constant dance, relying 100% on the people who want to make this project successful.
+
+Last, but not least, is the Summit lead. They have to keep the entire process moving forward, be
+willing to step in to keep bike-shedding from derailing our deadlines, make sure the right people
+are talking to one another, lead all our meetings to make sure everyone gets a voice, etc. In some
+cases, the lead has to even be willing to take over an entirely separate role, in case someone gets
+sick or has any other extenuating circumstances, to make sure absolutely nothing falls through the
+cracks. The lead is only allowed to volunteer after they’ve been through this a few times and know
+what the event entails. Event planning is not for the faint of heart.
+
+
+**FM**: The participation of volunteers is essential, but there's also the topic of CNCF support:
+how does this dynamic play out in practice?
+
+**NA**: This event would not happen in its current form without our CNCF liaison. They provide us
+with space, make sure we are fed and caffeinated and cared for, bring us outside spaces to evaluate,
+so we have somewhere to hold the social gathering, get us the budget so we have t-shirts and patches
+and the like, and generally make it possible for us to put this event together. They're even
+responsible for the signage and arrows, so the attendees know where to go. They're the ones sitting
+at the front desk, keeping an eye on everything and answering people's questions. At the same time,
+they're along to facilitate, and try to avoid influencing our planning.
+
+There's a ton of work that goes into making the summit happen that is easy to overlook as an
+attendee, because people tend to expect things to just work. It is no exaggeration to say this
+event would not have happened like it has over the years without the help from our liaisons, like
+Brienne and Deb. They are an integral part of the team.
+
+## A look ahead
+
+**FM**: Currently, we’re preparing the NA 2023 summit; how is it going? Any changes in format
+compared with previous ones?
+
+**NA**: I would say it's going great, though I'm sort of an emeritus lead for this event, mostly
+picking up the things that I see need to be done and don't have someone assigned to them. We're
+always learning from our past experiences and making small changes to continually get better, from
+how many people need to be on a particular rotation to how far in advance we open and close the CFP.
+There are no major changes right now, just continually providing the content that the contributors
+want.
+
+**FM**: For our readers who might be interested in joining the Kubernetes Contributor Summit, is
+there anything they should know?
+
+**NA**: First of all, the summit is an event by and for org members. If you're not already an org
+member, you should get involved before trying to attend the summit, as the content is curated
+specifically for the contributors and maintainers of the project. That applies to the staff as
+well, since all the decisions should be made with the interests and health of Kubernetes
+contributors as the end goal. We get a lot of people who show interest in helping out, but then
+aren't ready to make any sort of commitment, and that just makes more work for us. If you're not
+already a proven and committed member of this community, it’s difficult for us to place you in a
+position that requires reliability. We have made some rare exceptions when we need someone local to
+help us out, but those are few and far between.
+
+If you are, however, already a member, we'd love to have you. The more people that are involved,
+the better the event becomes. That applies both to dedicated staff and to those in attendance
+bringing CFPs, unconference topics, and just contributing to the discussions. If you're part of
+this community and you're going to be at KubeCon, I would highly urge you to attend, and if you're
+not yet an org member, let's make that happen!
+
+**FM**: Indeed! Any final comments you would like to share?
+
+**NA**: Just that the Contributor Summit is, for me, the ultimate manifestation of the Hallway
+Track. By being here, you're part of the conversations that move this project forward. It's good
+for you, and it's good for Kubernetes. I hope to see you all in Chicago!
diff --git a/content/en/blog/_posts/2023-11-02-kcseu2023-spotlight/kcseu2023-group.jpg b/content/en/blog/_posts/2023-11-02-kcseu2023-spotlight/kcseu2023-group.jpg
new file mode 100644
index 0000000000..88f6abdaf3
Binary files /dev/null and b/content/en/blog/_posts/2023-11-02-kcseu2023-spotlight/kcseu2023-group.jpg differ
diff --git a/content/en/blog/_posts/2023-11-02-sig-architecture-prod-readiness-spotlight.md b/content/en/blog/_posts/2023-11-02-sig-architecture-prod-readiness-spotlight.md
new file mode 100644
index 0000000000..94515e46b2
--- /dev/null
+++ b/content/en/blog/_posts/2023-11-02-sig-architecture-prod-readiness-spotlight.md
@@ -0,0 +1,139 @@
+---
+layout: blog
+title: "Spotlight on SIG Architecture: Production Readiness"
+slug: sig-architecture-production-readiness-spotlight-2023
+date: 2023-11-02
+canonicalUrl: https://www.k8s.dev/blog/2023/11/02/sig-architecture-production-readiness-spotlight-2023/
+---
+
+**Author**: Frederico Muñoz (SAS Institute)
+
+_This is the second interview of a SIG Architecture Spotlight series that will cover the different
+subprojects. In this blog, we will cover the [SIG Architecture: Production Readiness
+subproject](https://github.com/kubernetes/community/blob/master/sig-architecture/README.md#production-readiness-1)_.
+
+In this SIG Architecture spotlight, we talked with [Wojciech Tyczynski](https://github.com/wojtek-t)
+(Google), lead of the Production Readiness subproject.
+
+## About SIG Architecture and the Production Readiness subproject
+
+**Frederico (FSM)**: Hello Wojciech, could you tell us a bit about yourself, your role and how you
+got involved in Kubernetes?
+
+**Wojciech Tyczynski (WT)**: I started contributing to Kubernetes in January 2015. At that time,
+Google (where I was and still am working) decided to start a Kubernetes team in the Warsaw office
+(in addition to already existing teams in California and Seattle). I was lucky enough to be one of
+the seeding engineers for that team.
+
+After two months of onboarding and helping with different tasks across the project towards the 1.0
+launch, I took ownership of the scalability area and led the effort to make Kubernetes support
+clusters with 5000 nodes. I’m still involved in [SIG Scalability](https://github.com/kubernetes/community/blob/master/sig-scalability/README.md)
+as its Technical Lead. That was the start of a journey: since scalability is such a cross-cutting
+topic, I started contributing to many other areas including, over time, SIG Architecture.
+
+**FSM**: In SIG Architecture, why specifically the Production Readiness subproject? Was it something
+you had in mind from the start, or was it an unexpected consequence of your initial involvement in
+scalability?
+
+**WT**: After reaching that milestone of [Kubernetes supporting 5000-node clusters](https://kubernetes.io/blog/2017/03/scalability-updates-in-kubernetes-1-6/),
+one of the goals was to ensure that Kubernetes would not degrade its scalability properties over
+time. While a non-scalable implementation is always fixable, designing non-scalable APIs or
+contracts is problematic. I was looking for a way to ensure that people think about
+scalability when they create new features and capabilities, without introducing too much overhead.
+
+This is when I joined forces with [John Belamaric](https://github.com/johnbelamaric) and
+[David Eads](https://github.com/deads2k) and created a Production Readiness subproject within SIG
+Architecture. While setting the bar for scalability was only one of a few motivations for it, it
+ended up fitting quite well. At the same time, I was already involved in the overall reliability of
+the system internally, so other goals of Production Readiness were also close to my heart.
+
+**FSM**: To anyone new to how SIG Architecture works, how would you describe the main goals and
+areas of intervention of the Production Readiness subproject?
+
+**WT**: The goal of the Production Readiness subproject is to ensure that any feature added
+to Kubernetes can be reliably used in production clusters. This primarily means that those features
+are observable, scalable, and supportable, that they can always be safely enabled, and that in
+case of production issues they can also be disabled.
+
+## Production readiness and the Kubernetes project
+
+**FSM**: Architectural consistency being one of the goals of the SIG, is this made more challenging
+by the [distributed and open nature of Kubernetes](https://www.cncf.io/reports/kubernetes-project-journey-report/)?
+Do you feel this impacts the approach that Production Readiness has to take?
+
+**WT**: The distributed nature of Kubernetes certainly impacts Production Readiness, because it
+makes thinking about aspects like enablement/disablement or scalability more challenging. To be more
+precise, when enabling or disabling features that span multiple components you need to think about
+version skew between them and design for it. For scalability, changes in one component may actually
+result in problems for a completely different one, so it requires a good understanding of the whole
+system, not just individual components. But it’s also what makes this project so interesting.
+
+**FSM**: Those running Kubernetes in production will have their own perspective on things; how do
+you capture this feedback?
+
+**WT**: Fortunately, we aren’t talking about _"them"_ here, we’re talking about _"us"_: all of us are
+working for companies that are managing large fleets of Kubernetes clusters and we’re involved in
+that too, so we suffer from those problems ourselves.
+
+So while we’re trying to get feedback (our annual PRR survey is very important for us), it rarely
+reveals completely new problems; rather, it shows their scale. And we try to react to it:
+changes like "Beta APIs off by default" happen in reaction to the data that we observe.
+
+**FSM**: On the topic of reaction, that made me think of how the [Kubernetes Enhancement Proposal (KEP)](https://github.com/kubernetes/enhancements/blob/master/keps/NNNN-kep-template/README.md)
+template has a Production Readiness Review (PRR) section, which is tied to the graduation
+process. Was this something born out of identified insufficiencies? How would you describe the
+results?
+
+**WT**: As mentioned above, the overall goal of the Production Readiness subproject is to ensure
+that every newly added feature can be reliably used in production. It’s not possible to enforce that
+by a central team - we need to make it everyone's problem.
+
+To achieve it, we wanted to ensure that everyone designing a new feature thinks about safe
+enablement, scalability, observability, supportability, etc. from the very beginning; that means
+not when the implementation starts, but during the design. Given that KEPs are effectively
+Kubernetes design docs, making it part of the KEP template was the way to achieve that goal.
+
+**FSM**: So, in a way making sure that feature owners have thought about the implications of their
+proposal.
+
+**WT**: Exactly. We have already observed that just by forcing feature owners to think through the
+PRR aspects (by requiring them to fill in the PRR questionnaire), many of the original issues go
+away. Sure, as PRR approvers we’re still catching gaps, but even the initial versions of KEPs are
+better now than they were a couple of years ago when it comes to thinking about productionisation
+aspects. That is exactly what we wanted to achieve: spreading the culture of thinking about
+reliability in its widest possible meaning.
+
+**FSM**: We've been talking about the PRR process, could you describe it for our readers?
+
+**WT**: The [PRR process](https://github.com/kubernetes/community/blob/master/sig-architecture/production-readiness.md)
+is fairly simple - we just want to ensure that you think through the productionisation aspects of
+your feature early enough. If you do your job, it’s just a matter of answering some questions in the
+KEP template and getting approval from a PRR approver (in addition to regular SIG approval). If you
+didn’t think about those aspects earlier, it may require spending more time and potentially revising
+some decisions, but that’s exactly what we need to make the Kubernetes project reliable.
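+
+For illustration, the PRR sign-off itself is recorded in the enhancements repository alongside the
+KEP. A rough sketch of what such an approval file can look like (the exact path and fields may
+vary over time, and the KEP number and approver handle here are hypothetical):
+
+```yaml
+# keps/prod-readiness/<sig>/1234.yaml (illustrative path)
+kep-number: 1234
+alpha:
+  approver: "@a-prr-approver"
+```
+
+The questionnaire answers themselves live in the KEP's README, under its Production Readiness
+Review Questionnaire section.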
+
+## Helping with Production Readiness
+
+**FSM**: Production Readiness seems to be one area where a good deal of prior exposure is required
+in order to be an effective contributor. Are there also ways for someone newer to the project to
+contribute?
+
+**WT**: PRR approvers have to have a deep understanding of the whole Kubernetes project to catch
+potential issues. Kubernetes is such a large project now with so many nuances that people who are
+new to the project can simply miss the context, no matter how senior they are.
+
+That said, there are many ways that you may implicitly help. Increasing the reliability of
+particular areas of the project by improving its observability and debuggability, increasing test
+coverage, and building new kinds of tests (upgrade, downgrade, chaos, etc.) will help us a lot. Note
+that the PRR subproject is focused on keeping the bar at the design level, but we should also care
+equally about the implementation. For that, we’re relying on individual SIGs and code approvers, so
+having people there who are aware of productionisation aspects, and who deeply care about it, will
+help the project a lot.
+
+**FSM**: Thank you! Any final comments you would like to share with our readers?
+
+**WT**: I would like to highlight and thank all contributors for their cooperation. While the PRR
+adds some additional work for them, we see that people care about it, and what’s even more
+encouraging is that with every release the quality of the answers improves, and questions like "do I
+really need a metric reflecting whether my feature works?" or "is downgrade really that important?"
+don’t really appear anymore.
diff --git a/content/en/docs/reference/access-authn-authz/service-accounts-admin.md b/content/en/docs/reference/access-authn-authz/service-accounts-admin.md
index 4c2e2671d0..ca6f831da8 100644
--- a/content/en/docs/reference/access-authn-authz/service-accounts-admin.md
+++ b/content/en/docs/reference/access-authn-authz/service-accounts-admin.md
@@ -340,30 +340,6 @@ Then, delete the Secret you now know the name of:
kubectl -n examplens delete secret/example-automated-thing-token-zyxwv
```
-The control plane spots that the ServiceAccount is missing its Secret,
-and creates a replacement:
-
-```shell
-kubectl -n examplens get serviceaccount/example-automated-thing -o yaml
-```
-
-```yaml
-apiVersion: v1
-kind: ServiceAccount
-metadata:
- annotations:
- kubectl.kubernetes.io/last-applied-configuration: |
- {"apiVersion":"v1","kind":"ServiceAccount","metadata":{"annotations":{},"name":"example-automated-thing","namespace":"examplens"}}
- creationTimestamp: "2019-07-21T07:07:07Z"
- name: example-automated-thing
- namespace: examplens
- resourceVersion: "1026"
- selfLink: /api/v1/namespaces/examplens/serviceaccounts/example-automated-thing
- uid: f23fd170-66f2-4697-b049-e1e266b7f835
-secrets:
- - name: example-automated-thing-token-4rdrh
-```
-
## Clean up
If you created a namespace `examplens` to experiment with, you can remove it:
diff --git a/content/en/docs/reference/access-authn-authz/validating-admission-policy.md b/content/en/docs/reference/access-authn-authz/validating-admission-policy.md
index 269300c0d5..a18d4489b9 100644
--- a/content/en/docs/reference/access-authn-authz/validating-admission-policy.md
+++ b/content/en/docs/reference/access-authn-authz/validating-admission-policy.md
@@ -291,33 +291,6 @@ variables as well as some other useful variables:
The `apiVersion`, `kind`, `metadata.name` and `metadata.generateName` are always accessible from
the root of the object. No other metadata properties are accessible.
-Only property names of the form `[a-zA-Z_.-/][a-zA-Z0-9_.-/]*` are accessible.
-Accessible property names are escaped according to the following rules when accessed in the
-expression:
-
-| escape sequence | property name equivalent |
-| ----------------------- | -----------------------|
-| `__underscores__` | `__` |
-| `__dot__` | `.` |
-|`__dash__` | `-` |
-| `__slash__` | `/` |
-| `__{keyword}__` | [CEL RESERVED keyword](https://github.com/google/cel-spec/blob/v0.6.0/doc/langdef.md#syntax) |
-
-{{< note >}}
-A **CEL reserved** keyword only needs to be escaped if the token is an exact match
-for the reserved keyword.
-For example, `int` in the word “sprint” would not be escaped.
-{{< /note >}}
-
-Examples on escaping:
-
-|property name | rule with escaped property name |
-| ----------------|-----------------------------------|
-| namespace | `object.__namespace__ > 0` |
-| x-prop | `object.x__dash__prop > 0` |
-| redact__d | `object.redact__underscores__d > 0` |
-| string | `object.startsWith('kube')` |
-
Equality on arrays with list type of 'set' or 'map' ignores element order, i.e. [1, 2] == [2, 1].
Concatenation on arrays with x-kubernetes-list-type use the semantics of the list type:
diff --git a/content/en/docs/reference/using-api/deprecation-guide.md b/content/en/docs/reference/using-api/deprecation-guide.md
index 69f32f9eeb..0133cbbb57 100644
--- a/content/en/docs/reference/using-api/deprecation-guide.md
+++ b/content/en/docs/reference/using-api/deprecation-guide.md
@@ -35,7 +35,7 @@ The **flowcontrol.apiserver.k8s.io/v1beta2** API version of FlowSchema and Prior
### v1.27
-The **v1.27** release will stop serving the following deprecated API versions:
+The **v1.27** release stopped serving the following deprecated API versions:
#### CSIStorageCapacity {#csistoragecapacity-v127}
diff --git a/content/en/docs/tasks/configure-pod-container/configure-service-account.md b/content/en/docs/tasks/configure-pod-container/configure-service-account.md
index 8bea409cf2..e5530ec2a7 100644
--- a/content/en/docs/tasks/configure-pod-container/configure-service-account.md
+++ b/content/en/docs/tasks/configure-pod-container/configure-service-account.md
@@ -262,6 +262,16 @@ Secret somewhere that your terminal / computer screen could be seen by an onlook
When you delete a ServiceAccount that has an associated Secret, the Kubernetes
control plane automatically cleans up the long-lived token from that Secret.
+{{< note >}}
+If you view the ServiceAccount using:
+
+`kubectl get serviceaccount build-robot -o yaml`
+
+you can't see the `build-robot-secret` Secret in the ServiceAccount API object's
+[`.secrets`](/docs/reference/kubernetes-api/authentication-resources/service-account-v1/) field
+because that field is only populated with auto-generated Secrets.
+{{< /note >}}
+
## Add ImagePullSecrets to a service account
First, [create an imagePullSecret](/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod).
diff --git a/content/en/docs/tutorials/security/seccomp.md b/content/en/docs/tutorials/security/seccomp.md
index 2d77cf52d9..08e6b73d30 100644
--- a/content/en/docs/tutorials/security/seccomp.md
+++ b/content/en/docs/tutorials/security/seccomp.md
@@ -482,7 +482,7 @@ kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- image: kindest/node:v1.23.0@sha256:49824ab1727c04e56a21a5d8372a402fcd32ea51ac96a2706a12af38934f81ac
+ image: kindest/node:v1.28.0@sha256:9f3ff58f19dcf1a0611d11e8ac989fdb30a28f40f236f59f0bea31fb956ccf5c
kubeadmConfigPatches:
- |
kind: JoinConfiguration
@@ -490,7 +490,7 @@ nodes:
kubeletExtraArgs:
seccomp-default: "true"
- role: worker
- image: kindest/node:v1.23.0@sha256:49824ab1727c04e56a21a5d8372a402fcd32ea51ac96a2706a12af38934f81ac
+ image: kindest/node:v1.28.0@sha256:9f3ff58f19dcf1a0611d11e8ac989fdb30a28f40f236f59f0bea31fb956ccf5c
kubeadmConfigPatches:
- |
kind: JoinConfiguration
diff --git a/content/en/examples/pods/security/seccomp/ga/audit-pod.yaml b/content/en/examples/pods/security/seccomp/ga/audit-pod.yaml
index 409d4b923c..34aacd7d95 100644
--- a/content/en/examples/pods/security/seccomp/ga/audit-pod.yaml
+++ b/content/en/examples/pods/security/seccomp/ga/audit-pod.yaml
@@ -11,7 +11,7 @@ spec:
localhostProfile: profiles/audit.json
containers:
- name: test-container
- image: hashicorp/http-echo:0.2.3
+ image: hashicorp/http-echo:1.0
args:
- "-text=just made some syscalls!"
securityContext:
diff --git a/content/en/examples/pods/security/seccomp/ga/default-pod.yaml b/content/en/examples/pods/security/seccomp/ga/default-pod.yaml
index b884ec5924..153031fc9d 100644
--- a/content/en/examples/pods/security/seccomp/ga/default-pod.yaml
+++ b/content/en/examples/pods/security/seccomp/ga/default-pod.yaml
@@ -10,7 +10,7 @@ spec:
type: RuntimeDefault
containers:
- name: test-container
- image: hashicorp/http-echo:0.2.3
+ image: hashicorp/http-echo:1.0
args:
- "-text=just made some more syscalls!"
securityContext:
diff --git a/content/en/examples/pods/security/seccomp/ga/fine-pod.yaml b/content/en/examples/pods/security/seccomp/ga/fine-pod.yaml
index 692b828151..dd7622fe15 100644
--- a/content/en/examples/pods/security/seccomp/ga/fine-pod.yaml
+++ b/content/en/examples/pods/security/seccomp/ga/fine-pod.yaml
@@ -11,7 +11,7 @@ spec:
localhostProfile: profiles/fine-grained.json
containers:
- name: test-container
- image: hashicorp/http-echo:0.2.3
+ image: hashicorp/http-echo:1.0
args:
- "-text=just made some syscalls!"
securityContext:
diff --git a/content/en/examples/pods/security/seccomp/ga/violation-pod.yaml b/content/en/examples/pods/security/seccomp/ga/violation-pod.yaml
index 70deadf4b2..c4844df37c 100644
--- a/content/en/examples/pods/security/seccomp/ga/violation-pod.yaml
+++ b/content/en/examples/pods/security/seccomp/ga/violation-pod.yaml
@@ -11,7 +11,7 @@ spec:
localhostProfile: profiles/violation.json
containers:
- name: test-container
- image: hashicorp/http-echo:0.2.3
+ image: hashicorp/http-echo:1.0
args:
- "-text=just made some syscalls!"
securityContext:
diff --git a/content/es/docs/concepts/storage/ephemeral-volumes.md b/content/es/docs/concepts/storage/ephemeral-volumes.md
new file mode 100644
index 0000000000..743d773ba8
--- /dev/null
+++ b/content/es/docs/concepts/storage/ephemeral-volumes.md
@@ -0,0 +1,189 @@
+---
+reviewers:
+ - ramrodo
+ - krol3
+ - electrocucaracha
+title: Volúmenes efímeros
+content_type: concept
+weight: 30
+---
+
+
+
+Este documento describe _volúmenes efímeros_ en Kubernetes. Se sugiere tener conocimiento previo sobre [volúmenes](/docs/concepts/storage/volumes/), en particular PersistentVolumeClaim y PersistentVolume.
+
+
+
+Algunas aplicaciones requieren almacenamiento adicional, pero no les preocupa si esos datos se almacenan de manera persistente entre reinicios. Por ejemplo, los servicios de caché a menudo tienen limitaciones de tamaño de memoria y pueden trasladar datos poco utilizados a un almacenamiento más lento que la memoria, con un impacto mínimo en el rendimiento general.
+
+Otras aplicaciones esperan que algunos datos de entrada de solo lectura estén presentes en archivos, como datos de configuración o claves secretas.
+
+Los _volúmenes efímeros_ están diseñados para estos casos de uso. Debido a que los volúmenes siguen el ciclo de vida del Pod y se crean y eliminan junto con el Pod, los Pods pueden detenerse y reiniciarse sin estar limitados a la disponibilidad de algún volumen persistente.
+
+Los volúmenes efímeros se especifican _en línea_ en la especificación del Pod, lo que simplifica la implementación y gestión de aplicaciones.
+
+### Tipos de volúmenes efímeros
+
+Kubernetes admite varios tipos diferentes de volúmenes efímeros para diversos propósitos:
+
+- [emptyDir](/docs/concepts/storage/volumes/#emptydir): vacíos al inicio del Pod, con el almacenamiento proveniente localmente del directorio base de kubelet (generalmente el disco raíz) o la RAM.
+- [configMap](/docs/concepts/storage/volumes/#configmap),
+ [downwardAPI](/docs/concepts/storage/volumes/#downwardapi),
+ [secret](/docs/concepts/storage/volumes/#secret): inyectar diferentes tipos de datos de Kubernetes en un Pod.
+
+- [CSI volúmenes efímeros](#csi-ephemeral-volumes):
+ Similar a los tipos de volumen anteriores, pero proporcionados por controladores especiales {{< glossary_tooltip text="CSI" term_id="csi" >}} que [soportan específicamente esta característica](https://kubernetes-csi.github.io/docs/ephemeral-local-volumes.html)
+- [volúmenes efímeros genéricos](#generic-ephemeral-volumes), que pueden proporcionar todos los controladores de almacenamiento que también admiten volúmenes persistentes
+
+`emptyDir`, `configMap`, `downwardAPI` y `secret` se proporcionan como [almacenamiento efímero local](/docs/concepts/configuration/manage-resources-containers/#local-ephemeral-storage).
+Son administrados por kubelet en cada nodo.
+
+Los volúmenes efímeros CSI _deben_ ser proporcionados por controladores de almacenamiento CSI de terceros.
+
+Los volúmenes efímeros genéricos _pueden_ ser proporcionados por controladores de almacenamiento CSI de terceros, pero también por cualquier otro controlador de almacenamiento que admita la provisión dinámica. Algunos controladores CSI están escritos específicamente para volúmenes efímeros CSI y no admiten la provisión dinámica; por lo tanto, no se pueden utilizar para volúmenes efímeros genéricos.
+
+La ventaja de utilizar controladores de terceros es que pueden ofrecer funcionalidades que Kubernetes en sí mismo no admite, como el almacenamiento con características de rendimiento diferentes al disco gestionado por kubelet o la inyección de datos diversos.
+
+### Volúmenes efímeros de CSI
+
+{{< feature-state for_k8s_version="v1.25" state="stable" >}}
+
+{{< note >}}
+Los volúmenes efímeros CSI solo son compatibles con un subconjunto de controladores CSI.
+La [lista de controladores](https://kubernetes-csi.github.io/docs/drivers.html) CSI de Kubernetes muestra cuáles controladores admiten volúmenes efímeros.
+{{< /note >}}
+
+Conceptualmente, los volúmenes efímeros CSI son similares a los tipos de volumen `configMap`,
+`downwardAPI` y `secret`: el almacenamiento se gestiona localmente en cada nodo y se crea junto con otros recursos locales después de que un Pod ha sido programado en un nodo. Kubernetes ya no tiene ningún concepto de reprogramación de Pods en esta etapa. La creación de volúmenes debe ser poco propensa a fallos,
+de lo contrario, el inicio del Pod queda atascado. En particular, [la programación de Pods con conciencia de la capacidad de almacenamiento](/docs/concepts/storage/storage-capacity/) _no_ está admitida para estos volúmenes. Actualmente, tampoco están cubiertos por los límites de uso de recursos de almacenamiento de un Pod, porque eso es algo que kubelet solo puede aplicar para el almacenamiento que administra él mismo.
+
+Aquí tienes un ejemplo de manifiesto para un Pod que utiliza almacenamiento efímero CSI:
+
+```yaml
+kind: Pod
+apiVersion: v1
+metadata:
+ name: my-csi-app
+spec:
+ containers:
+ - name: my-frontend
+ image: busybox:1.28
+ volumeMounts:
+ - mountPath: "/data"
+ name: my-csi-inline-vol
+ command: ["sleep", "1000000"]
+ volumes:
+ - name: my-csi-inline-vol
+ csi:
+ driver: inline.storage.kubernetes.io
+ volumeAttributes:
+ foo: bar
+```
+
+Los `volumeAttributes` determinan qué volumen es preparado por el controlador. Estos atributos son específicos de cada controlador y no están estandarizados. Consulta la documentación de cada controlador CSI para obtener instrucciones adicionales.
+
+### Restricciones de controladores CSI
+
+Los volúmenes efímeros CSI permiten a los usuarios proporcionar `volumeAttributes` directamente al controlador CSI como parte de la especificación del Pod. Un controlador CSI que permite `volumeAttributes` que normalmente están restringidos a administradores NO es adecuado para su uso en un volumen efímero en línea. Por ejemplo, los parámetros que normalmente se definen en la clase de almacenamiento no deben estar expuestos a los usuarios a través del uso de volúmenes efímeros en línea.
+
+Los administradores del clúster que necesiten restringir los controladores CSI que se pueden utilizar como volúmenes en línea dentro de una especificación de Pod pueden hacerlo mediante:
+
+- Eliminar `Ephemeral` de `volumeLifecycleModes` en la especificación de CSIDriver, lo que evita que los controladores CSI admitan volúmenes efímeros en línea.
+
+- Usar un [webhook de admisión](/docs/reference/access-authn-authz/extensible-admission-controllers/)
+  para restringir el uso de este controlador.
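+
+Por ejemplo, la especificación de un CSIDriver que solo admite volúmenes persistentes (y que, por tanto, no puede usarse como volumen efímero en línea) podría verse así (esquema ilustrativo; el nombre del controlador es hipotético):
+
+```yaml
+apiVersion: storage.k8s.io/v1
+kind: CSIDriver
+metadata:
+  name: example.csi.vendor.io
+spec:
+  # Al omitir "Ephemeral" de esta lista, el controlador no puede
+  # usarse como volumen efímero en línea en la especificación de un Pod.
+  volumeLifecycleModes:
+    - Persistent
+```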
+
+### Volúmenes efímeros genéricos
+
+{{< feature-state for_k8s_version="v1.23" state="stable" >}}
+
+Los volúmenes efímeros genéricos son similares a los volúmenes `emptyDir` en el sentido de que proporcionan un directorio por Pod para datos temporales que generalmente está vacío después de la provisión. Pero también pueden tener características adicionales:
+
+- El almacenamiento puede ser local o conectado a la red.
+- Los volúmenes pueden tener un tamaño fijo que los Pods no pueden exceder.
+- Los volúmenes pueden tener algunos datos iniciales, dependiendo del controlador y los parámetros.
+- Se admiten operaciones típicas en los volúmenes, siempre que el controlador las soporte, incluyendo
+ [instantáneas](/docs/concepts/storage/volume-snapshots/),
+ [clonación](/docs/concepts/storage/volume-pvc-datasource/),
+ [cambiar el tamaño](/docs/concepts/storage/persistent-volumes/#expanding-persistent-volumes-claims),
+ y [seguimiento de la capacidad de almacenamiento](/docs/concepts/storage/storage-capacity/).
+
+Ejemplo:
+
+```yaml
+kind: Pod
+apiVersion: v1
+metadata:
+ name: my-app
+spec:
+ containers:
+ - name: my-frontend
+ image: busybox:1.28
+ volumeMounts:
+ - mountPath: "/scratch"
+ name: scratch-volume
+ command: ["sleep", "1000000"]
+ volumes:
+ - name: scratch-volume
+ ephemeral:
+ volumeClaimTemplate:
+ metadata:
+ labels:
+ type: my-frontend-volume
+ spec:
+ accessModes: ["ReadWriteOnce"]
+ storageClassName: "scratch-storage-class"
+ resources:
+ requests:
+ storage: 1Gi
+```
+
+### Ciclo de vida y reclamo de volumen persistente
+
+La idea clave de diseño es que los [parámetros para una solicitud de volumen](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#ephemeralvolumesource-v1-core)
+se permiten dentro de una fuente de volumen del Pod. Se admiten etiquetas, anotaciones y
+todo el conjunto de campos para una PersistentVolumeClaim. Cuando se crea un Pod de este tipo, el controlador de volúmenes efímeros crea entonces un objeto PersistentVolumeClaim real en el mismo espacio de nombres que el Pod y asegura que la PersistentVolumeClaim
+se elimine cuando se elimina el Pod.
+
+Eso desencadena la vinculación y/o aprovisionamiento de volúmenes, ya sea de inmediato si el {{< glossary_tooltip text="StorageClass" term_id="storage-class" >}} utiliza la vinculación inmediata de volúmenes o cuando el Pod está programado provisionalmente en un nodo (modo de vinculación de volumen `WaitForFirstConsumer`). Este último se recomienda para volúmenes efímeros genéricos, ya que permite al planificador elegir libremente un nodo adecuado para el Pod. Con la vinculación inmediata, el planificador está obligado a seleccionar un nodo que tenga acceso al volumen una vez que esté disponible.
+
+En términos de [propiedad de recursos](/docs/concepts/architecture/garbage-collection/#owners-dependents),
+un Pod que tiene almacenamiento efímero genérico es el propietario de la PersistentVolumeClaim(s) que proporciona ese almacenamiento efímero. Cuando se elimina el Pod, el recolector de basura de Kubernetes elimina la PVC, lo que suele desencadenar la eliminación del volumen, ya que la política de recuperación predeterminada de las clases de almacenamiento es eliminar los volúmenes.
+Puedes crear almacenamiento local cuasi-efímero utilizando una StorageClass con una política de recuperación de `Retain`: el almacenamiento sobrevive al Pod y, en este caso, debes asegurarte de que la limpieza del volumen se realice por separado.
+
+Mientras estas PVC existen, pueden usarse como cualquier otra PVC. En particular, pueden ser referenciadas como fuente de datos en la clonación o creación de instantáneas de volúmenes. El objeto PVC también contiene el estado actual del volumen.
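+
+Como ejemplo, una StorageClass con política de recuperación `Retain` (esquema ilustrativo; el aprovisionador es hipotético) hace que el volumen sobreviva a la eliminación de la PVC y del Pod:
+
+```yaml
+apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+  name: scratch-storage-class
+provisioner: example.csi.vendor.io
+reclaimPolicy: Retain
+volumeBindingMode: WaitForFirstConsumer
+```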
+
+### Nomenclatura de PersistentVolumeClaim
+
+La nomenclatura de las PVC creadas automáticamente es determinista: el nombre es una combinación del nombre del Pod y el nombre del volumen, con un guion (`-`) entre ambos. En el ejemplo anterior, el nombre de la PVC será `my-app-scratch-volume`. Esta nomenclatura determinista facilita la interacción con la PVC, ya que no es necesario buscarla una vez que se conocen el nombre del Pod y el nombre del volumen.
+
+La nomenclatura determinista también introduce un posible conflicto entre diferentes Pods (un Pod "pod-a" con el volumen "scratch" y otro Pod con nombre "pod" y volumen "a-scratch" terminan teniendo el mismo nombre de PVC "pod-a-scratch") y entre Pods y PVCs creadas manualmente.
+
+Estos conflictos se detectan: una PVC solo se utiliza para un volumen efímero si se creó para el Pod. Esta comprobación se basa en la relación de propiedad. Una PVC existente no se sobrescribe ni se modifica. Pero esto no resuelve el conflicto, ya que sin la PVC adecuada, el Pod no puede iniciarse.
+
+{{< caution >}}
+Ten cuidado al nombrar Pods y volúmenes dentro del mismo espacio de nombres para evitar que se produzcan estos conflictos.
+{{< /caution >}}
+
+### Seguridad
+
+El uso de volúmenes efímeros genéricos permite a los usuarios crear PVC de forma indirecta si pueden crear Pods, incluso si no tienen permiso para crear PVC directamente. Los administradores del clúster deben ser conscientes de esto. Si esto no encaja en su modelo de seguridad, deberían utilizar un [webhook de admisión](/docs/reference/access-authn-authz/extensible-admission-controllers/) que rechace objetos como Pods que tienen un volumen efímero genérico.
+
+La cuota normal del [espacio de nombres para PVC](/docs/concepts/policy/resource-quotas/#storage-resource-quota) sigue aplicándose, por lo que incluso si a los usuarios se les permite utilizar este nuevo mecanismo, no pueden utilizarlo para eludir otras políticas.
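+
+Por ejemplo, una ResourceQuota como la siguiente (nombre de objeto y de namespace hipotéticos) también limita las PVCs creadas mediante volúmenes efímeros genéricos:
+
+```yaml
+apiVersion: v1
+kind: ResourceQuota
+metadata:
+  name: pvc-quota # nombre hipotético
+  namespace: equipo-a # namespace hipotético
+spec:
+  hard:
+    persistentvolumeclaims: "10" # número máximo de PVCs en el namespace
+    requests.storage: 50Gi # capacidad total de almacenamiento solicitada
+```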
+
+## {{% heading "whatsnext" %}}
+
+### Volúmenes efímeros gestionados por kubelet
+
+Consulta [almacenamiento efímero local](/docs/concepts/configuration/manage-resources-containers/#local-ephemeral-storage).
+
+### Volúmenes efímeros de CSI
+
+- Para obtener más información sobre el diseño, consulta el
+ [KEP de Volúmenes efímeros en línea de CSI](https://github.com/kubernetes/enhancements/blob/ad6021b3d61a49040a3f835e12c8bb5424db2bbb/keps/sig-storage/20190122-csi-inline-volumes.md).
+- Para obtener más información sobre el desarrollo futuro de esta función, consulta el
+ [problema de seguimiento de mejoras #596](https://github.com/kubernetes/enhancements/issues/596).
+
+### Volúmenes efímeros genéricos
+
+- Para obtener más información sobre el diseño, consulta el
+ [KEP de Volúmenes efímeros genéricos en línea](https://github.com/kubernetes/enhancements/blob/master/keps/sig-storage/1698-generic-ephemeral-volumes/README.md).
diff --git a/content/id/_index.html b/content/id/_index.html
index 29c6646e6a..170788cdcb 100644
--- a/content/id/_index.html
+++ b/content/id/_index.html
@@ -4,6 +4,7 @@ abstract: "Otomatisasi Kontainer deployment, scaling, dan management"
cid: home
---
+{{< site-searchbar >}}
{{< blocks/section id="oceanNodes" >}}
{{% blocks/feature image="flower" %}}
diff --git a/content/it/_index.html b/content/it/_index.html
index 23f7b881eb..177bfa5c09 100644
--- a/content/it/_index.html
+++ b/content/it/_index.html
@@ -4,6 +4,8 @@ abstract: Deployment, scalabilità, e gestione di container automatizzata
cid: home
---
+{{< site-searchbar >}}
+
{{< blocks/section id="oceanNodes" >}}
{{% blocks/feature image="flower" %}}
### [Kubernetes (K8s)]({{< relref "/docs/concepts/overview/what-is-kubernetes" >}}) è un software open-source per l'automazione del deployment, scalabilità, e gestione di applicativi in containers.
diff --git a/content/ko/docs/tasks/administer-cluster/network-policy-provider/cilium-network-policy.md b/content/ko/docs/tasks/administer-cluster/network-policy-provider/cilium-network-policy.md
index a112f71f25..38c2ac1bb6 100644
--- a/content/ko/docs/tasks/administer-cluster/network-policy-provider/cilium-network-policy.md
+++ b/content/ko/docs/tasks/administer-cluster/network-policy-provider/cilium-network-policy.md
@@ -10,7 +10,7 @@ weight: 20
이 페이지는 어떻게 네트워크 폴리시(NetworkPolicy)로 실리움(Cilium)를 사용하는지 살펴본다.
-실리움의 배경에 대해서는 [실리움 소개](https://docs.cilium.io/en/stable/intro)를 읽어보자.
+실리움의 배경에 대해서는 [실리움 소개](https://docs.cilium.io/en/stable/overview/intro)를 읽어보자.
## {{% heading "prerequisites" %}}
diff --git a/content/zh-cn/community/_index.html b/content/zh-cn/community/_index.html
index 506e2a0fc7..6b201e838c 100644
--- a/content/zh-cn/community/_index.html
+++ b/content/zh-cn/community/_index.html
@@ -2,259 +2,241 @@
title: 社区
layout: basic
cid: community
+community_styles_migrated: true
---
-
+
-
diff --git a/content/zh-cn/docs/reference/config-api/kubelet-credentialprovider.v1alpha1.md b/content/zh-cn/docs/reference/config-api/kubelet-credentialprovider.v1alpha1.md
index ae25ddbf08..419dd58f9b 100644
--- a/content/zh-cn/docs/reference/config-api/kubelet-credentialprovider.v1alpha1.md
+++ b/content/zh-cn/docs/reference/config-api/kubelet-credentialprovider.v1alpha1.md
@@ -127,16 +127,16 @@ this field to null if no valid credentials can be returned for the requested ima