Add adopter interviews and summary

Signed-off-by: Jeremy Rickard <jeremyrrickard@gmail.com>
Jeremy Rickard 2025-08-12 16:41:40 -06:00
parent 6fc220cb99
commit 4c2a24bf4c
GPG Key ID: ED2461F1AD4DD0B7
6 changed files with 644 additions and 13 deletions

View File

@ -0,0 +1,103 @@
---
title: Knative Adopter Interview - Adopter 5
---
# Knative Adopter Interview - Adopter 5
## Organization Intro
### Can you give us an overview of your organization and what it does?
Adopter 5 provides a platform for AI workloads.
## Motivation
### Compared with other products in this space (proprietary and open), what drew you to the project?
Knative was the leader in the space and provided the right set of features and abstractions. Adopter 5 did not want to build something from scratch themselves.
## Usage Scenario
### How long has your organization used the project?
For about four years.
### What were the main motivations to adopt the project and which key features do you use today?
* Autoscaling (the Knative Pod Autoscaler, which takes advantage of user metrics)
* Throughput and latency metrics
* Scale to zero is particularly useful.
* Revision management and gradual rollout (see the sketch after this list).
* Handling complex network configuration.
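As a hedged illustration of how these features surface in configuration (the service name, image, and target values below are hypothetical, not Adopter 5's actual setup), a single Knative Service manifest can combine KPA-based autoscaling, scale to zero, and a gradual rollout between revisions:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: model-server                           # hypothetical service name
spec:
  template:
    metadata:
      annotations:
        # Knative Pod Autoscaler (KPA), scaling on request metrics
        autoscaling.knative.dev/class: kpa.autoscaling.knative.dev
        autoscaling.knative.dev/metric: "rps"
        autoscaling.knative.dev/target: "50"
        autoscaling.knative.dev/min-scale: "0"  # scale to zero when idle
        autoscaling.knative.dev/max-scale: "5"
    spec:
      containers:
        - image: registry.example.com/model-server:latest  # hypothetical image
  traffic:
    # gradual rollout: keep most traffic on the previous revision
    - revisionName: model-server-00001
      percent: 90
    - latestRevision: true
      percent: 10
```

Each change to the revision template creates a new revision, and the traffic block controls how quickly the newest revision receives load.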
### What is the current level of usage (pre-production, production) and scale?
Knative is used in production for Adopter 5's platform.
Scale varies a lot by customer. Customers use it for both development and production, as well as with both medium and big models, so scale is not that big per customer. There are generally dozens of workloads per customer, but only several replicas each.
### What version is used and what is your update cadence with the project?
Adopter 5 is focusing on 1.18. There are different flavors of the platform. Some flavors are more managed by Adopter 5, and in some the customer is responsible for installing. Generally, they follow a twice-a-year “validation” of the latest versions.
### Can you walk me through what your experience was in either adopting it outright or integrating it with your existing services and applications? What challenges did you experience with the project?
Adoption was pretty straightforward, and it works really well most or all of the time. The issues they did encounter were either easy to spot or related to something that Adopter 5 did wrong.
### Did you find the information in the repo valuable to your implementation? What specifically?
The docs are generally good, and in some areas excellent. In some areas, they are less organized or missing. For example, there are different ways to configure a service and revision, including per-revision and global settings. Most of those settings are well organized, but many of them are scattered across different sections, which makes it hard to get the full picture or to know where to look for a particular feature. For some more advanced topics (for example, the different replica load-balancing strategies), the formal docs don't cover everything, and the information can only be found by reading code or, sometimes, markdown files in the repo. Reading code is a common way to address advanced topics.
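As a hedged sketch of the per-revision versus global split described above (the service name, image, and values are hypothetical), the same autoscaling target can be set as an annotation on one revision template or as a cluster-wide default in the `config-autoscaler` ConfigMap in the `knative-serving` namespace:

```yaml
# Per-revision setting: annotation on the revision template
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: example-svc                          # hypothetical service
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/target: "100"
    spec:
      containers:
        - image: registry.example.com/app:latest  # hypothetical image
---
# Global default for the same knob, applying to revisions without an override
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-autoscaler
  namespace: knative-serving
data:
  container-concurrency-target-default: "100"
```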
### Has your implementation of the project provided measurable value?
Yes. Knative provides relevant features, and it's easy to measure cost savings from autoscaling and scale to zero. Customers can easily measure this. Adopter 5 also realized cost savings by avoiding the development of a custom solution.
### Do you have any future plans regarding the project? More involvement, feature requests, expansion, etc.
Not planning to be involved in the community, but will keep using Knative.
Inference is changing a lot with GenAI, LLMs, and distributed inference, and Knative is becoming a bit outdated. You either can't use it or you need to spend a lot of time and effort to make it work. It might not be the right approach for emerging workloads.
## Perception
### What is your perception in terms of the project's:
#### Community openness
#### Governance
#### Community growth potential
#### Maintainer diversity and ladder
#### Maintainer response
### How are you participating in the project community?
### Did you need to engage with the community members or maintainers? If so, what was the context of the engagement and did it reach an acceptable outcome?
Adopter 5 is generally not familiar with the community. They feel that the project is very mature for what it does and rarely need to engage with the community. They do leverage GitHub, but generally find the answers in documentation or in old GitHub issues.
## Project Strengths
### In your opinion, what are the overall strengths of the project?
Autoscaling and scale to zero are particularly useful.
## Project Improvements
### Is there something you feel that holds the project back from reaching its ultimate potential?
There are some conflicts or clashes between Knative and KServe. It might be beneficial for those projects to merge; there is some perception that KServe is more mature or better than Knative. The project could provide better documentation to clarify the differences between the projects and highlight use cases where it excels.
### In your opinion, what can the project do better?
The docs could be better, and Knative could do more to stay relevant with the latest developments in inference serving. Overall, the project is really great for what it does, but it needs to take a step forward to support evolving workloads.

View File

@ -0,0 +1,115 @@
---
title: Knative Adopter Interview - Gojek
---
# Knative Adopter Interview - Gojek
Interviewee: Roman Wozniak, Head of Engineering, Gojek
## Organization Intro
### Can you give us an overview of your organization and what it does?
The merger of Gojek and Tokopedia created the largest technology company in Indonesia, offering ride-hailing, e-commerce, and delivery services to millions of users across Indonesia and Singapore.
## Motivation
### Compared with other products in this space (proprietary and open), what drew you to the project?
In 2019, my team and I worked closely with data scientists at Gojek to help them productize ML deployments and integrate them with the rest of the engineering systems. This approach worked well for a while, but it didn't scale as the company grew (hiring more ML engineers to embed them into product streams was expensive and time-consuming). That's when we decided to build a scalable self-serve platform that would cater to data scientists' needs and offer them easy-to-use interfaces. We had a feature store (hist) and looked at the missing pieces of the lifecycle. We started examining existing platforms and did not find anything that matched our needs, was open source, and was easy to use.
## Usage Scenario
### How long has your organization used the project?
We have used Knative since 2020, first as a dependency of KF Serving (now, KServe) and later as an independent component of our DS Platform.
### What were the main motivations to adopt the project and which key features do you use today?
We currently use Knative Serving. We are looking at eventing, but largely using Serving for HTTP requests.
### What is the current level of usage (pre-production, production) and scale?
Knative has been used in production since very early in our adoption, and at very large scale, serving millions of our users with over 100,000 RPS during peak times.
### What version is used and what is your update cadence with the project?
The update cadence follows latest minus one. Basically, every time a new Knative or KServe release happens, we begin adopting N-1.
### Can you walk me through what your experience was in either adopting it outright or integrating it with your existing services and applications? What challenges did you experience with the project?
We started building an ML model-serving component, but decided to go one level deeper. We looked at open-source components that could allow us to put together various elements and create a product to expose to end users. We first adopted KServe and built easy-to-use tools and user interfaces on top to develop the platform. Later, we built an integration for ML experimentation based purely on Knative. It provides an API that converts to Knative resources and orchestrates those.
We did not encounter any major challenges while adopting. Post-COVID, cost optimization was a significant focus to understand how to use Knative more efficiently, specifically regarding right-sizing and tuning. We aimed to explore vertical autoscaling, but Knative primarily supported horizontal scaling at the time. We proposed a feature request to integrate VPA into Knative, but it wasn't finalized. This was not due to the project's unwillingness; rather, the complexity of integrating it into Knative was very high. Additionally, this was early in the Knative project, and Kubernetes also lacked maturity in this area (for instance, in-place Pod vertical scaling only reached beta in Kubernetes v1.33).
### Did you find the information in the repo valuable to your implementation? What specifically?
The high-level direction of the project and the project's roadmap were very useful for planning and understanding where the project was going.
GitHub in general was very useful, specifically GitHub issues and release notes. Following the discussion on issues and pull requests helped us understand the state of the project better.
### Has your implementation of the project provided measurable value?
Yes.
### Do you have any future plans regarding the project? More involvement, feature requests, expansion, etc.
We are looking at possible agentic orchestration
## Perception
### What is your perception in terms of the project's:
#### Community openness
The community feels open. At KubeCon North America we were able to talk to Knative maintainers.
#### Governance
We have a positive opinion of the governance.
#### Community growth potential
The community feels pretty mature; we don't worry about the project's health.
#### Maintainer diversity and ladder
We have a positive impression of the maintainer pool.
#### Maintainer response
Response time from the maintainers is good.
### How are you participating in the project community?
### Did you need to engage with the community members or maintainers? If so, what was the context of the engagement and did it reach an acceptable outcome?
We have interacted with the community via community calls, Slack, and GitHub. GitHub is the primary mechanism, followed by community calls, and then Slack. Most of the time, we have had an acceptable outcome. The VPA scenario was less acceptable.
## Project Strengths
### In your opinion, what are the overall strengths of the project?
Scalability is the primary strength. The abstractions are well defined, provide lots of flexibility, and allowed us to build a fairly stable API on top so we could focus on the problems we wanted to solve. Another strength is the stable API; there are not frequent breaking changes.
## Project Improvements
### Is there something you feel that holds the project back from reaching its ultimate potential?
Nothing comes to mind.
### In your opinion, what can the project do better?
Nothing is ideal, but the project is good and nothing actionable comes to mind.

View File

@ -0,0 +1,112 @@
---
title: Knative Adopter Interview - IBM
---
# Knative Adopter Interview - IBM
Interviewees:
* Simon Moser, Distinguished Engineer, IBM Cloud Container Services
* Sascha Schwarze, IBM Cloud Code Engine
## Organization Intro
### Can you give us an overview of your organization and what it does?
IBM Cloud Code Engine is a cloud service, partially based on Knative, that aims to provide a single serverless platform. It supports four types of workloads under one platform, all containerized, with the same set of APIs.
## Motivation
### Compared with other products in this space (proprietary and open), what drew you to the project?
At the time of adopting, Knative was the most promising project in the ecosystem. It supported scale to zero and functions, and it was natively based on Kubernetes. It was not very mature at the time, but it covered most of the use cases we had in mind.
## Usage Scenario
### How long has your organization used the project?
5+ years. Platform development started at the end of 2019 or beginning of 2020.
### What were the main motivations to adopt the project and which key features do you use today?
We primarily use Knative serving. We were mainly motivated to adopt it because it was built natively on Kubernetes. Scale to zero is a key feature we use today. We use some Knative eventing, but mostly serving.
### What is the current level of usage (pre-production, production) and scale?
Knative is used as part of the production cloud service.
Knative is used on clusters running a large number of services. There is a four-digit number of Knative services per cluster/region. Each region is an isolated Knative deployment.
### What version is used and what is your update cadence with the project?
We are using 1.18. We have automation that automatically opens PRs when a new version comes out. We sometimes run into issues with the minimum Kubernetes versions, but we try not to fall behind too far. We do not really encounter issues with Knative upgrades.
### Can you walk me through what your experience was in either adopting it outright or integrating it with your existing services and applications? What challenges did you experience with the project?
We don't recall any major obstacles while adopting. Some of our challenges have been around integrating Istio due to architectural decisions, which led to larger mesh sizes than anticipated. There were also operational hiccups, but most were related to Istio. The build uses ko and results in a large YAML that cannot all be applied at once. The Knative controller also tries to update all deployments when changing versions, which results in a lot of extra traffic to the API server. Many of these issues were scale related. Upstream Knative doesn't handle this scale, but IBM carries some custom patches to alleviate some of it.
### Did you find the information in the repo valuable to your implementation? What specifically?
The Knative documentation is quite good. Release notes are what we use the most now. We are not expanding our feature set. We look at issues on GitHub and review code as needed.
### Has your implementation of the project provided measurable value?
IBM Code Engine absolutely benefited from adopting Knative. Knative was a huge help in building the platform. We would not have been able to deliver the platform as quickly without Knative, as we would have had to build features ourselves or spend effort stitching together other OSS projects to enable this.
### Do you have any future plans regarding the project? More involvement, feature requests, expansion, etc.
We are in a steady state for the moment. For Code Engine, we are focused more on batch workloads, which is outside of Knative's capabilities, so investment is focused on areas outside Knative. We are working with the community on bug fixes.
## Perception
### What is your perception in terms of the project's:
#### Community openness
Community openness is good.
#### Governance
The project has good governance. Meetings are held and records are kept, but meetings are perhaps indecisive.
#### Community growth potential
The project doesn't have any obvious features missing within Serving.
#### Maintainer diversity and ladder
Contributions to Knative serving seem to have gone down over the last year, and sometimes issues may take longer than expected.
#### Maintainer response
People are generally responsive on Slack. Sometimes maintainers don't respond as quickly as we want, but this is normal.
### How are you participating in the project community?
We are working on bug fixes and general PRs. IBM previously served on the Knative steering committee. We are not pushing any major changes.
### Did you need to engage with the community members or maintainers? If so, what was the context of the engagement and did it reach an acceptable outcome?
We engage with maintainers mostly through GitHub issues. Outcomes are generally acceptable.
## Project Strengths
### In your opinion, what are the overall strengths of the project?
The scale to zero feature and being natively built on Kubernetes.
## Project Improvements
### Is there something you feel that holds the project back from reaching its ultimate potential?
For functions, container startup is an issue for real use; low-latency startup remains a challenge. Eventing is less desirable due to a lack of upstream integrations.
### In your opinion, what can the project do better?
The project could support more Kubernetes versions and possibly be more responsive to issues.

View File

@ -0,0 +1,137 @@
# Knative Adopter Interview - SVA
Interviewee: Norris Sam Osarenkhoe, Principal Solutions Architect, SVA System Vertrieb Alexander GmbH.
Interview date: June 18, 2025
## Organization Intro
### Can you give us an overview of your organization and what it does?
SVA System Vertrieb Alexander GmbH is one of the leading system integrators in Germany in the fields of holistic IT with more than 3,200 employees at 27 branch offices. SVA focusses on the combination of high quality IT products with the project know-how and flexibility of SVA to achieve optimum solutions. Core subjects are Digital Process Solutions, Datacenter Infrastructure, IT Security, Business Continuity, SAP, Big Data and Analytics, End User Computing and Mainframe. Furthermore, SVA offers professional services around topics such as DevOps, Cloud-Native Software Development, Microsoft, IoT, SAM and many more topics.
## Motivation
### Compared with other products in this space (proprietary and open), what drew you to the project?
Open standards are a big factor, as well as digital sovereignty. The CloudEvents spec was important. In 2022 we started evaluating different options and realized that hyperscalers were widely adopting the CloudEvents spec. On top of that, Knative's architecture seems well thought out in terms of separation of concerns between its components. For example, in Eventing the architecture is cleanly cut into roughly three distinct areas:
* eventing-core - Implements the general, non-specific Knative Eventing API (Ingress, Routing, Egress)
* eventing-messaging - Implements a generic Knative Eventing messaging model API, like Broker/Trigger or Channel/Subscription
* eventing-messaging-bindings - Integrates a specific messaging technology on top of the messaging model API (like Kafka)
We knew that if something were to happen at each level we could fix it, or at least we had the flexibility to swap in another solution. Additionally, Eventing offered integration with well-known messaging solutions such as Kafka, NATS, and RabbitMQ, all of which were relevant for us. Cloud providers using these solutions as well meant we would easily find specialists if needed.
Knative has a very plug-and-play architecture, which attracted us. It is very easy to onboard users. Eventing is very easy to explain and allows building pretty complex use cases.
We did look at some other projects, but their governance wasn't clear. Knative's governance stood out to us. It feels like a project that will be around for some time and will have maintainers for some time.
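As a hedged sketch of the Broker/Trigger model referenced above (the broker class, event type, and service name are illustrative assumptions, not SVA's actual configuration), a Kafka-backed Broker receives CloudEvents, and a Trigger filters and routes them to a consumer:

```yaml
apiVersion: eventing.knative.dev/v1
kind: Broker
metadata:
  name: default
  annotations:
    # select the Kafka broker implementation (eventing-kafka-broker extension)
    eventing.knative.dev/broker.class: Kafka
---
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: order-created
spec:
  broker: default
  filter:
    attributes:
      type: com.example.order.created        # hypothetical CloudEvent type
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: order-processor                  # hypothetical consumer service
```

Swapping the broker class (or the underlying channel implementation) is how a different messaging technology is plugged in underneath the same API.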
## Usage Scenario
### How long has your organization used the project?
We started using it in 2022. It appeared to be the right tool for our problem. In order to validate our assumption we started prototyping and did lots of performance testing to better understand failure modes, performance KPIs and the inner workings.
In the summer of 2022, we started said testing and prototyping. In parallel, we started to build up other environments. As of March 2023, we have successfully released our customized Knative-backed event mesh on our customer project infrastructure.
2 FTEs were involved in this.
### What were the main motivations to adopt the project and which key features do you use today?
The main motivations were:
* License
* Open Standards
* Clean, Extensible architecture via plugins.
* Clean abstraction layer.
HTTP for serving and eventing was very easy to integrate and adopt.
Eventing was integrated first; the event-driven architecture was used to help decouple legacy architectures. Kafka integration was a pretty crucial one.
Serving is used for more cross-cutting things within the platform, not exposed directly to end users. This is more of an enablement problem.
### What is the current level of usage (pre-production, production) and scale?
It is used in production and handles over a million events daily, with an availability of 99.9%, supporting approximately 11 org units.
### What version is used, and what is your update cadence with the project?
We update quarterly, using OpenShift Serverless. OpenShift Serverless aligns with the upstream Knative project, but we use the downstream release for compliance reasons.
### Can you walk me through what your experience was in either adopting it outright or integrating it with your existing services and applications? What challenges did you experience with the project?
Originally, in 2022, the documentation was a bit lacking on the administration side. We needed to look at source code to help understand some behavior.
The operator had some issues around order of deletion of custom resources, and we had to do things like clean up finalizers.
Another challenge was dependencies on control-plane webhooks (of the Knative Operator). The Knative Operator invokes a mutating webhook as part of the reconciler loop check when certain custom resources are to be deleted. If there is a CrashLoopBackOff or these webhooks are unreachable, operator cleanup can become difficult if you are not deeply knowledgeable about the control-plane dependencies.
Adoption on the end-user side was pretty easy, technically speaking. The mind shift for customers adopting an asynchronous event-driven architecture was more challenging. The Knative team doesn't have experience in highly regulated industries, and some testing is lacking there in my opinion. This is apparent when looking at it from an admin perspective working in a usually air-gapped environment, with challenges such as access to container images, a private certificate authority, or custom certificates. Most challenges in my view really are to be found on the administration side of things. However, it needs to be said that Knative has done a lot to alleviate some admin issues we had.
### Did you find the information in the repo valuable to your implementation? What specifically?
### Has your implementation of the project provided measurable value?
The implementation has allowed us to build a solution that is entirely on-prem, but with different environments. The ease of use and faster integrations have provided value for modernizing legacy applications at our customer site. It has cut down on onboarding quite a bit: we went from two weeks to onboard legacy services to one hour. It has also allowed us to add compliance checking and enforce more compliance in said regulated environments.
### Do you have any future plans regarding the project? More involvement, feature requests, expansion, etc.
We have built some observability tooling around Knative Eventing, and would like to contribute that back. We would also like to create a blueprint for other federal agencies in Germany.
## Perception
### What is your perception in terms of the project's:
#### Community openness
The community is very open. For the number of people involved, the Knative community achieves a lot. Lots of subprojects under the umbrella (e.g. knative-extensions).
#### Governance
The project has great governance.
#### Community growth potential
There is a lot of interaction for some use cases. The project probably needs more contributions.
#### Maintainer diversity and ladder
The maintainer diversity could be better. It feels like mostly Red Hat is maintaining the project, based on who is at the booth (at KubeCon) and on Slack. However, despite that, the number of available maintainers appears to be stable.
#### Maintainer response
The maintainers are pretty responsive. We feel very safe with Knative.
### How are you participating in the project community?
Via Slack and GitHub.
### Did you need to engage with the community members or maintainers? If so, what was the context of the engagement and did it reach an acceptable outcome?
We have had good experience providing feedback via Red Hat.
## Project Strengths
### In your opinion, what are the overall strengths of the project?
Knative has good open standards and is easy to adopt. It is also very stable and has very good performance; we have not been able to overwhelm it or bring it to its knees, and we feel very confident in it covering future demand.
## Project Improvements
### Is there something you feel that holds the project back from reaching its ultimate potential?
The project could use more diversity of companies contributing to the Knative Serving side.
### In your opinion, what can the project do better?
Better docs and guidance for admin scenarios would help. For example, how to measure event congestion; nothing talks about how to fine-tune Kafka for Knative. More specific docs like that would be useful. The project could also have better observability to support administration of deployments.

View File

@ -0,0 +1,103 @@
# Knative Adopter Interview - Y Meadows
Interviewee: Adam Rich, VP and co-founder of Y Meadows
Interview Date: June 4, 2025
## Organization Intro
### Can you give us an overview of your organization and what it does?
Y Meadows provides AI-Driven Automation for Business Operations, such as order operations, customer inquiries, and account management.
## Motivation
### Compared with other products in this space (proprietary and open), what drew you to the project?
We use Knative Serving. The main motivation was the scale-to-zero functionality. We wanted to use that to help control costs, so that pods only use resources while they are running.
We started in 2020, and at the time there were only a few ways to scale to zero. We think Knative provides a clean way to do it. There are no restrictions on service or framework, and it is open source. The project has a strong community presence and is used by GCP for their Cloud Run product, so it is battle-tested. It only has a few moving parts and minimal dependencies.
## Usage Scenario
### How long has your organization used the project?
About 5 years.
### What were the main motivations to adopt the project and which key features do you use today?
We use Knative serving for the scale to zero feature.
### What is the current level of usage (pre-production, production) and scale?
Knative is used in the real production flow for all of our clients, including some big-name customers. It is a core part of our platform. We have used it in production, at scale, for years.
### What version is used and what is your update cadence with the project?
We upgraded recently to 1.18. We upgrade about twice a year on average.
### Can you walk me through what your experience was in either adopting it outright or integrating it with your existing services and applications? What challenges did you experience with the project?
The install guide has decent docs. We had to make some adjustments for high availability and minor config changes for how we want to handle timeouts.
We ran into some scaling issues later on. Per the recommendation of maintainers, we switched the network plugin to Contour, which helped. We also switched to GKE Dataplane V2 (which is Cilium-based). We use a lot of IPs when we scale to a large number of services with Knative, and Cilium handles that well. We can also run on spot instances in GKE to cut cost, which works well with Knative.
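For illustration, a hedged sketch of how such an ingress switch is typically configured (assuming the net-contour plugin is already installed; this is not Y Meadows' actual configuration): the default ingress class is set in the `config-network` ConfigMap.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-network
  namespace: knative-serving
data:
  # route Knative Services through the Contour-based ingress implementation
  ingress-class: contour.ingress.networking.knative.dev
```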
### Did you find the information in the repo valuable to your implementation? What specifically?
The project docs are critical. They are decent. They have nice reference docs, change docs, and release notes. They also have a high-availability section, which is important.
### Has your implementation of the project provided measurable value?
Yes. The main value is that it reduces costs for us.
### Do you have any future plans regarding the project? More involvement, feature requests, expansion, etc.
We are happy with Knative and plan to continue using it. We plan to explore the internal TLS feature and upgrade to use the operator. We haven't used Eventing yet, but we could use it in the future.
## Perception
### What is your perception in terms of the project's:
#### Community openness
The community has been helpful to us when we interact on Slack and GitHub.
#### Governance
The maintainers work together in the open on Slack and GitHub and seem to be maintaining the project well.
#### Community growth potential
I believe that to cost-effectively use the cloud you need to be able to autoscale, and the ideal of autoscaling is to scale to zero. Knative is an excellent way of achieving that. I think that there is a lot of potential to grow usage and the community.
#### Maintainer diversity and ladder
I am not familiar enough with the maintainers and their backgrounds to comment on this.
#### Maintainer response
The maintainers are responsive and have been helpful to us. They are active on Slack and GitHub.
### How are you participating in the project community?
We have contributed to discussions on Slack and GitHub, providing our perspective as an end user.
### Did you need to engage with the community members or maintainers? If so, what was the context of the engagement and did it reach an acceptable outcome?
We had communications with Dave Protasowski on the CNCF Slack; he was helpful and jumped on a call with us, really going above and beyond. We have also looked up GitHub issues for guidance. Most communications were in Slack.
## Project Strengths
### In your opinion, what are the overall strengths of the project?
The project has a solid architecture, scales well, and has solid documentation. It is transparent to workloads, doesn't require clients to treat it differently, and is designed to fit the Kubernetes philosophy.
## Project Improvements
### Is there something you feel that holds the project back from reaching its ultimate potential?
For Knative Serving, I think it is really reaching its potential. We would like Kourier to be considered as the default networking plugin instead of an extension. We would love to see internal TLS become the default as well. We would also like to see an operator upgrade path from Helm.
### In your opinion, what can the project do better?
Same answer as above

View File

@ -319,7 +319,7 @@ Knative has completed a [security self-assessment](https://github.com/cncf/tag-s
- [ ] Moderate and low findings from the Third Party Security Review are planned/tracked for resolution, as well as overall thematic findings, such as: improving the project contribution guide, providing a PR review guide to look for memory leaks and other vulnerabilities the project may be susceptible to by design or language choice, and ensuring adequate test coverage on all PRs.
Knative underwent a [Third Party Security Audit](https://github.com/knative/docs/blob/main/reports/ADA-knative-security-audit-2023.pdf) by Ada Logics in 2023. The audit found 16 security issues, 15 of which were fixed with upstream patches.
Knative underwent a [Third Party Security Audit](https://github.com/knative/docs/blob/main/reports/ADA-knative-security-audit-2023.pdf) by Ada Logics in 2023. The audit found 16 security issues, 15 of which were fixed with upstream patches. The remaining issue was not addressed, because it was only exploitable if fundamental security assumptions of the cluster were already broken.
- [X] **Achieve the Open Source Security Foundation (OpenSSF) Best Practices passing badge.**
@ -340,11 +340,11 @@ A list of public adopters can be found in the [ADOPTERS.md](https://github.com/k
- [x] **Used in appropriate capacity by at least 3 independent + indirect/direct adopters, (these are not required to be in the publicly documented list of adopters)**
The project provided the TOC with a list of adopters for verification of use of the project in production. The project is used by a many global organizations and the TOC was able to verify confirm production use with all interviwed adopters.
The project provided the TOC with a list of adopters for verification of use of the project in production. The project is used by many global organizations, and the TOC was able to verify production use with all interviewed adopters.
- [x] **TOC verification of adopters.**
The Knative maintainers provided the TOC with a list of 20 adopters from different geographic regions and business segments who agreed to be interviewed for the Graduation Due Diligence process. 6 of these adopters were interviewed. The adoption portion of this document contains interview summaries from adopters who approved public attribution.
The Knative maintainers provided the TOC with a list of 20 adopters from different geographic regions and business segments who agreed to be interviewed for the Graduation Due Diligence process. 5 of these adopters were interviewed. The adoption portion of this document contains interview summaries from adopters who approved public attribution. All adopters recommended Knative for graduation and commented on project maturity. Scalability and stability were common strengths identified by adopters. The Knative Serving project especially was highlighted as providing a great deal of value to adopters.
Refer to the Adoption portion of this document.
@ -372,24 +372,85 @@ Several of these integrations were mentioned during adoption interviewes as well
##### Adopter 1 - YMeadows
June 2025
Y Meadows provides AI-Driven Automation for Business Operations, such as order operations, customer inquiries, and account management.
YMeadows was interviewed as a Knative adopter on June 4, 2025. They have been using Knative in production for about five years, supporting all of their customers.
YMeadows originally adopted Knative for its "scale to zero" features in order to control costs. The open source nature of the project, along with its minimal dependencies and active community presence, were also strong factors in adopting the project.
When adopting Knative, YMeadows encountered some scaling issues, which were resolved by replacing the ingress component they were using. They generally found the documentation for the project to be adequate; however, they had some issues related to supporting high availability and addressing timeouts due to the size of the workloads being used.
YMeadows has a positive opinion of the Knative community and views the project as having open and transparent governance, as well as a helpful community with responsive maintainers.
YMeadows would like to see internal TLS as a default configuration, more support for Kourier as a default networking plugin, and a smoother operator upgrade experience.
Overall, YMeadows recommended Knative for graduation, highlighted the cost reduction benefits the project helped deliver, and called out stability, scalability, documentation, and alignment with the general Kubernetes philosophy as overall strengths of the project.
The entire adopter interview can be found here: [YMeadows Adopter Interview](knative-adopter-interview-ymeadows.md)
##### Adopter 2 - SVA System Vertrieb Alexander GmbH
June 2025
Note: This adopter uses Knative via the Red Hat OpenShift Serverless product for compliance reasons.
##### Adopter 3 - gojek
June 2025
##### Adopter 4 - CoreWeave
June 2025
##### Adopter 5 - Cloud Service Provider
July 2025
##### Adopter 6 - IBM Cloud
July 2025
SVA System Vertrieb Alexander GmbH is one of the leading system integrators in Germany.
SVA System Vertrieb Alexander GmbH was interviewed as a Knative adopter on June 18, 2025. They adopted Knative in 2022, and are currently running the project in production, handling over a million events daily with 99.9% availability across 11 organizational units.
SVA System Vertrieb Alexander GmbH chose Knative for its open standards, clean abstraction layers, extensible architecture, and strong governance. The plug-and-play nature and integration with messaging systems like Kafka were key factors.
SVA System Vertrieb Alexander GmbH found Knative easy to adopt, saw dramatically reduced onboarding times for customers, and was able to implement improved compliance controls in regulated environments. When adopting Knative, SVA System Vertrieb Alexander GmbH faced some challenges relating to documentation, particularly around administrative topics.
SVA System Vertrieb Alexander GmbH indicated they have a positive perception of the governance of Knative, although they would like to see increased contributor diversity.
Overall, they recommended the project for graduation in part due to its stability, performance, and ease of adoption.
The entire adopter interview can be found here: [SVA System Vertrieb Alexander GmbH Adopter Interview](knative-adopter-interview-sva.md)
##### Adopter 3 - Gojek
Gojek is a technology company in Indonesia, offering ride-hailing, e-commerce, and delivery services to millions of users across Indonesia and Singapore.
Gojek initially started using Knative in 2020 via the KServe project, before migrating directly to Knative. It is currently used in production at a very large scale, supporting millions of users and 100,000+ requests per second at peak times.
Gojek adopted Knative while building a self-serve ML model-serving and experimentation platform for data scientists. They chose Knative for its ease of use, flexibility, and because it was an active open-source project.
While adopting, they had a positive experience overall. They saw immediate value and were able to deliver a platform that was scalable, flexible, and had stable APIs. After the COVID-19 pandemic, cost optimization became a focus and they wanted to explore vertical autoscaling, but Knative primarily supported horizontal scaling at the time. They proposed a feature request to integrate VPA into Knative, but it wasn't finalized. This was not due to the project's unwillingness; rather, the complexity of integrating it into Knative was very high.
Gojek views the project as mature and effective, and did not offer any areas for improvement.
The entire adopter interview can be found here: [Gojek Adopter Interview](knative-adopter-interview-gojek.md)
##### Adopter 4 - Cloud Service Provider
Adopter 4 provides a platform for AI workloads. They elected to provide this adopter interview anonymously.
Adopter 4 has run Knative in production for about four years, as part of their hosted platform. When building their platform, they wanted to provide the following features:
* Autoscaling
* Throughput and latency metrics
* Scale to zero
Knative provided these capabilities, which meant that they did not have to build a solution on their own. Adoption for them was straightforward, with very few issues. Problems they did encounter were easy to identify and resolve using project documentation and by reviewing GitHub issues.
Adopter 4 does not actively participate in the community, finding they rarely need to engage because the project is mature. Adopter 4 did comment that while the documentation is generally good, and in some areas excellent, some advanced topics were harder to find in the documentation and required reviewing code and markdown files in the repo to find answers to some questions.
Overall, Adopter 4 feels that Knative is very mature for the problems that it solves, but commented that Knative might not be the right solution for emerging workloads and the project should consider how to evolve to support changes in the space.
The entire adopter interview can be found here: [Adopter 4 Adopter Interview](knative-adopter-interview-adopter-4.md)
##### Adopter 5 - IBM Cloud
IBM Cloud offers a serverless platform called IBM Cloud Code Engine, which supports multiple containerized workload types and is partially based on Knative.
Within IBM Cloud Code Engine, Knative has been used in production for over five years, and supports thousands of Knative services per cluster and region. The platform primarily uses Knative Serving, but also some features from Knative Eventing.
IBM Cloud chose Knative because of its Kubernetes-native design, as well as its support for scaling workloads to zero. At the time of adoption, Knative was the most promising option available.
IBM Cloud encountered no major obstacles with Knative itself; most challenges were related to Istio integration and scaling. IBM uses custom patches for scale-related issues. Container startup latency, especially for larger workloads, was another issue faced while providing support for function-based workloads.
IBM Cloud views the Knative community as very open and feels that the project generally has good governance. Maintainer response is generally good, but they have had some instances where issues took longer to resolve than anticipated.
Generally, they view Knative as very mature and support graduation of the project.
The entire adopter interview can be found here: [IBM Cloud Adopter Interview](knative-adopter-interview-adopter-ibm.md)