mirror of https://github.com/knative/docs.git

Remove ClusterChannelProvisioner from docs due to deprecation (#1629)

* Change link to InMemoryChannel
* First changes
* Migrate debug example from eventing.knative.dev to messaging.knative.dev
* Convert KubernetesEventSource to ApiServerSource
* Migrate kamel source to messaging channels
* Remove default channels with CCP
* Remove CCP channels and add GCP PubSub Channel CRD
* Migrate debugging example to messaging channels and ApiServerSource
* Remove istio injection from example
* Remove istio debugging section
* Remove ccp files and provide channel implementation links
* Custom install changes
* Use dispatcher container in kubectl command
* Remove CRD prefix from channel list
* Apply suggestions from code review: incorporate review suggestions (Co-Authored-By: Ville Aikas <vaikas-google@users.noreply.github.com>)
* Point to ChannelSpec of messaging API instead of eventing API
* Use InMemoryChannel directly in debugging example

This commit is contained in:
parent 27c08f0a32
commit 4386162b75
@@ -71,16 +71,17 @@ To learn how to use the registry, see the

 Knative Eventing also defines an event forwarding and persistence layer, called
 a
-[**Channel**](https://github.com/knative/eventing/blob/master/pkg/apis/eventing/v1alpha1/channel_types.go#L36).
-Messaging implementations may provide implementations of Channels via the
-[ClusterChannelProvisioner](https://github.com/knative/eventing/blob/master/pkg/apis/eventing/v1alpha1/cluster_channel_provisioner_types.go#L35)
-object. Events are delivered to Services or forwarded to other channels
+[**Channel**](https://github.com/knative/eventing/blob/master/pkg/apis/messaging/v1alpha1/channel_types.go#L57).
+Each channel is a separate Kubernetes [Custom Resource](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/).
+Events are delivered to Services or forwarded to other channels
 (possibly of a different type) using
 [Subscriptions](https://github.com/knative/eventing/blob/master/pkg/apis/eventing/v1alpha1/subscription_types.go#L35).
 This allows message delivery in a cluster to vary based on requirements, so that
 some events might be handled by an in-memory implementation while others would
 be persisted using Apache Kafka or NATS Streaming.

+See the [List of Channel implementations](../eventing/channels/channels.yaml).
+
 ### Future design goals

 The focus for the next Eventing release will be to enable easy implementation of
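To make the Channel-plus-Subscription wiring described above concrete, here is a minimal sketch using the messaging API this commit migrates to. The resource names, and the choice of a Knative Service as the subscriber, are hypothetical and not part of the diff:

```yaml
# A channel instance (backed by the InMemoryChannel CRD) and a Subscription
# that forwards its events to a subscriber. All names are hypothetical.
apiVersion: messaging.knative.dev/v1alpha1
kind: InMemoryChannel
metadata:
  name: my-channel
---
apiVersion: eventing.knative.dev/v1alpha1
kind: Subscription
metadata:
  name: my-subscription
spec:
  channel:
    apiVersion: messaging.knative.dev/v1alpha1
    kind: InMemoryChannel
    name: my-channel
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1alpha1  # assumed: any Addressable works
      kind: Service
      name: my-service
```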
@@ -111,7 +112,7 @@ The eventing infrastructure supports two forms of event delivery at the moment:
 Service is not available.
 1. Fan-out delivery from a source or Service response to multiple endpoints
 using
-[Channels](https://github.com/knative/eventing/blob/master/pkg/apis/eventing/v1alpha1/channel_types.go#L36)
+[Channels](https://github.com/knative/eventing/blob/master/pkg/apis/messaging/v1alpha1/channel_types.go#L57)
 and
 [Subscriptions](https://github.com/knative/eventing/blob/master/pkg/apis/eventing/v1alpha1/subscription_types.go#L35).
 In this case, the Channel implementation ensures that messages are delivered
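Fan-out itself is expressed purely through Subscriptions: several Subscriptions on the same channel each receive every event independently. A minimal sketch under the same assumptions as above (all names hypothetical):

```yaml
# Two Subscriptions on one channel: each event on shared-channel is
# dispatched to both subscribers. Names are hypothetical.
apiVersion: eventing.knative.dev/v1alpha1
kind: Subscription
metadata:
  name: sub-a
spec:
  channel:
    apiVersion: messaging.knative.dev/v1alpha1
    kind: InMemoryChannel
    name: shared-channel
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1alpha1  # assumed subscriber type
      kind: Service
      name: service-a
---
apiVersion: eventing.knative.dev/v1alpha1
kind: Subscription
metadata:
  name: sub-b
spec:
  channel:
    apiVersion: messaging.knative.dev/v1alpha1
    kind: InMemoryChannel
    name: shared-channel
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1alpha1  # assumed subscriber type
      kind: Service
      name: service-b
```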
@@ -23,11 +23,9 @@ kind: Broker
 metadata:
   name: default
 spec:
-  channelTemplate:
-    provisioner:
-      apiVersion: eventing.knative.dev/v1alpha1
-      kind: ClusterChannelProvisioner
-      name: gcp-pubsub
+  channelTemplateSpec:
+    apiVersion: messaging.knative.dev/v1alpha1
+    kind: InMemoryChannel
 ```

 ## Trigger
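Read together with the `kind: Broker` context in the hunk header, the complete resource after this change would look roughly as follows. This is a sketch assembled from the diff, with the Broker apiVersion assumed:

```yaml
apiVersion: eventing.knative.dev/v1alpha1  # assumed Broker API version for this release
kind: Broker
metadata:
  name: default
spec:
  channelTemplateSpec:
    # Any Channel CRD can be named here; the diff switches the example
    # from a ClusterChannelProvisioner reference to InMemoryChannel.
    apiVersion: messaging.knative.dev/v1alpha1
    kind: InMemoryChannel
```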
@@ -1,9 +1,4 @@
-Channels are Kubernetes Custom Resources that define a single event forwarding
-and persistence layer in Knative. Messaging implementations provide
-implementations of Channels via the
-[ClusterChannelProvisioner](https://github.com/knative/eventing/blob/master/pkg/apis/eventing/v1alpha1/cluster_channel_provisioner_types.go#L35)
-object, and support different technologies, such as Apache Kafka or NATS
+Channels are Kubernetes [Custom Resources](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/) which define a single event forwarding
+and persistence layer. Messaging implementations may provide implementations of
+Channels via a Kubernetes Custom Resource, supporting different technologies, such as Apache Kafka or NATS
 Streaming.
-
-Note: Cluster Channel Provisioner (CCP) has been deprecated and will be
-unsupported in v0.9. You should now use the Channels CRDs.
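To illustrate the point about different technologies behind one Channel interface: switching implementations is a change of `kind` plus implementation-specific `spec` fields. A hypothetical sketch; the KafkaChannel spec fields shown are assumptions, so check the CRD shipped with your release:

```yaml
# Hypothetical: an InMemoryChannel and a Kafka-backed channel expose the
# same Channel contract; only kind and implementation-specific spec differ.
apiVersion: messaging.knative.dev/v1alpha1
kind: KafkaChannel
metadata:
  name: durable-channel
spec:
  numPartitions: 1       # assumed KafkaChannel-specific field
  replicationFactor: 1   # assumed KafkaChannel-specific field
```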
@@ -21,18 +21,14 @@ This is a non-exhaustive list of the available Channels for Knative Eventing.

 Notes:

-- Inclusion in this list is not an endorsement, nor does it imply any level of
+* Inclusion in this list is not an endorsement, nor does it imply any level of
 support.

-- Cluster Channel Provisioner (CCP) has been deprecated and will be unsupported
-in v0.9. You should now use the Channels CRDs.
-
-| Name | Status | Support | Description |
-| --- | --- | --- | --- |
-| [CCP - Apache Kafka](https://github.com/knative/eventing-contrib/tree/master/kafka/channel/config/provisioner) | Proof of Concept | None | Deprecated: Channels are backed by [Apache Kafka](http://kafka.apache.org/) topics. |
-| [CCP - GCP PubSub](https://github.com/knative/eventing/tree/master/contrib/gcppubsub/config) | Proof of Concept | None | Deprecated: Channels are backed by [GCP PubSub](https://cloud.google.com/pubsub/). |
-| [CCP - In-Memory](https://github.com/knative/eventing/tree/v0.8.0/config/provisioners/in-memory-channel) | Proof of Concept | None | Deprecated: In-memory channels are a best effort Channel. They should NOT be used in Production. They are useful for development. |
-| [CCP - Natss](https://github.com/knative/eventing/tree/master/contrib/natss/config/provisioner) | Proof of Concept | None | Deprecated: Channels are backed by [NATS Streaming](https://github.com/nats-io/nats-streaming-server#configuring). |
-| [CRD - InMemoryChannel](https://github.com/knative/eventing/tree/master/config/channels/in-memory-channel) | Proof of Concept | None | In-memory channels are a best effort Channel. They should NOT be used in Production. They are useful for development. |
-| [CRD - KafkaChannel](https://github.com/knative/eventing-contrib/tree/master/kafka/channel/config) | Proof of Concept | None | Channels are backed by [Apache Kafka](http://kafka.apache.org/) topics. |
-| [CRD - NatssChannel](https://github.com/knative/eventing/tree/master/contrib/natss/config) | Proof of Concept | None | Channels are backed by [NATS Streaming](https://github.com/nats-io/nats-streaming-server#configuring). |
+Name | Status | Support | Description
+--- | --- | --- | ---
+[GCP PubSub](https://github.com/google/knative-gcp) | Proof of Concept | None | Channels are backed by [GCP PubSub](https://cloud.google.com/pubsub/).
+[InMemoryChannel](https://github.com/knative/eventing/tree/master/config/channels/in-memory-channel) | Proof of Concept | None | In-memory channels are a best effort Channel. They should NOT be used in Production. They are useful for development.
+[KafkaChannel](https://github.com/knative/eventing-contrib/tree/master/kafka/channel/config) | Proof of Concept | None | Channels are backed by [Apache Kafka](http://kafka.apache.org/) topics.
+[NatssChannel](https://github.com/knative/eventing/tree/master/contrib/natss/config) | Proof of Concept | None | Channels are backed by [NATS Streaming](https://github.com/nats-io/nats-streaming-server#configuring).
@@ -1,44 +1,26 @@
 # List of available Channel implementation for persistence of the events associated with a given channel
 channels:
-  - name: In-Memory
-    url: https://github.com/knative/eventing/tree/v0.8.0/config/provisioners/in-memory-channel
-    status: Proof of Concept
-    support: None
-    description: >
-      Deprecated: In-memory channels are a best effort Channel. They should NOT be used in Production. They are useful for development.
-  - name: CCP - Apache Kafka
-    url: https://github.com/knative/eventing-contrib/tree/master/kafka/channel/config/provisioner
-    status: Proof of Concept
-    support: None
-    description: >
-      Deprecated: Channels are backed by [Apache Kafka](http://kafka.apache.org/) topics.
-  - name: CCP - GCP PubSub
-    url: https://github.com/knative/eventing/tree/master/contrib/gcppubsub/config
-    status: Proof of Concept
-    support: None
-    description: >
-      Deprecated: Channels are backed by [GCP PubSub](https://cloud.google.com/pubsub/).
-  - name: CCP - Natss
-    url: https://github.com/knative/eventing/tree/master/contrib/natss/config/provisioner
-    status: Proof of Concept
-    support: None
-    description: >
-      Deprecated: Channels are backed by [NATS Streaming](https://github.com/nats-io/nats-streaming-server#configuring).
-  - name: CRD - InMemoryChannel
+  - name: InMemoryChannel
     url: https://github.com/knative/eventing/tree/master/config/channels/in-memory-channel
     status: Proof of Concept
     support: None
     description: >
       In-memory channels are a best effort Channel. They should NOT be used in Production. They are useful for development.
-  - name: CRD - KafkaChannel
+  - name: KafkaChannel
     url: https://github.com/knative/eventing-contrib/tree/master/kafka/channel/config
     status: Proof of Concept
     support: None
     description: >
       Channels are backed by [Apache Kafka](http://kafka.apache.org/) topics.
-  - name: CRD - NatssChannel
+  - name: NatssChannel
     url: https://github.com/knative/eventing/tree/master/contrib/natss/config
     status: Proof of Concept
     support: None
     description: >
       Channels are backed by [NATS Streaming](https://github.com/nats-io/nats-streaming-server#configuring).
+  - name: GCP PubSub
+    url: https://github.com/google/knative-gcp
+    status: Proof of Concept
+    support: None
+    description: >
+      Channels are backed by [GCP PubSub](https://cloud.google.com/pubsub/).
@@ -29,9 +29,6 @@ Notes:
 * Inclusion in this list is not an endorsement, nor does it imply any level of
 support.

-* Cluster Channel Provisioner (CCP) has been deprecated and will be
-unsupported in v0.9. You should now use the Channels CRDs.
-
 Name | Status | Support | Description
 --- | --- | --- | ---
 {{ range .Channels -}}
@@ -16,7 +16,7 @@ know roughly how things fit together.

 ## Version

-This Debugging content supports version v0.3.0 or later of
+This Debugging content supports version v0.8.0 or later of
 [Knative Eventing](https://github.com/knative/eventing/releases/) and the
 [Eventing-contrib resources](https://github.com/knative/eventing-contrib/releases/).
@@ -90,7 +90,7 @@ We will attempt to determine why from the most basic pieces out:

 1. `fn` - The `Deployment` has no dependencies inside Knative.
 1. `svc` - The `Service` has no dependencies inside Knative.
-1. `chan` - The `Channel` depends on its backing `ClusterChannelProvisioner` and
+1. `chan` - The `Channel` depends on its backing `channel implementation` and
    somewhat depends on `sub`.
 1. `src` - The `Source` depends on `chan`.
 1. `sub` - The `Subscription` depends on both `chan` and `svc`.
@@ -144,18 +144,18 @@ This should return a single Pod, which if you inspect is the one generated by
 ##### `chan`

 `chan` uses the
-[`in-memory-channel`](https://github.com/knative/eventing/tree/v0.8.0/config/provisioners/in-memory-channel)
-as its `ClusterChannelProvisioner`. This is a very basic provisioner and has few
+[`in-memory-channel`](https://github.com/knative/eventing/tree/master/config/channels/in-memory-channel).
+This is a very basic channel and has few
 failure modes that will be exhibited in `chan`'s `status`.

 ```shell
-kubectl --namespace knative-debug get channel chan -o jsonpath='{.status.conditions[?(@.type == "Ready")].status}'
+kubectl --namespace knative-debug get channel.messaging.knative.dev chan -o jsonpath='{.status.conditions[?(@.type == "Ready")].status}'
 ```

 This should return `True`. If it doesn't, get the full resource:

 ```shell
-kubectl --namespace knative-debug get channel chan --output yaml
+kubectl --namespace knative-debug get channel.messaging.knative.dev chan --output yaml
 ```

 If `status` is completely missing, it implies that something is wrong with the
@@ -164,7 +164,7 @@ If `status` is completely missing, it implies that something is wrong with the
 Next verify that `chan` is addressable:

 ```shell
-kubectl --namespace knative-debug get channel chan -o jsonpath='{.status.address.hostname}'
+kubectl --namespace knative-debug get channel.messaging.knative.dev chan -o jsonpath='{.status.address.hostname}'
 ```

 This should return a URI, likely ending in '.cluster.local'. If it doesn't, then
@@ -179,7 +179,7 @@ We will verify that the two resources that the `chan` creates exist and are
 `chan` creates a K8s `Service`.

 ```shell
-kubectl --namespace knative-debug get service -l provisioner=in-memory-channel,channel=chan
+kubectl --namespace knative-debug get service -l messaging.knative.dev/role=in-memory-channel
 ```

 Its spec is completely unimportant, as Istio will ignore it. It just needs to
@@ -187,40 +187,19 @@ exist so that `src` can send events to it. If it doesn't exist, it implies that
 something went wrong during `chan` reconciliation. See
 [Channel Controller](#channel-controller).

-###### `VirtualService`
-
-`chan` creates a `VirtualService` which redirects its hostname to the
-`in-memory-channel` dispatcher.
-
-```shell
-kubectl --namespace knative-debug get virtualservice -l provisioner=in-memory-channel,channel=chan -o custom-columns='HOST:.spec.hosts[0],DESTINATION:.spec.http[0].route[0].destination.host'
-```
-
-Verify that
-
-1. `HOST` is the same as the hostname returned by in `chan`'s
-   `status.address.hostname`.
-1. `DESTINATION` is
-   `in-memory-channel-dispatcher.knative-eventing.svc.cluster.local`.
-
-If either of those is not accurate, then it implies that something went wrong
-during `chan` reconciliation. See [Channel Controller](#channel-controller).
-
 ##### `src`

 `src` is a
-[`KubernetesEventSource`](https://github.com/knative/eventing-contrib/blob/master/pkg/apis/sources/v1alpha1/kuberneteseventsource_types.go),
-which creates an underlying
-[`ContainerSource`](https://github.com/knative/eventing/blob/master/pkg/apis/sources/v1alpha1/containersource_types.go).
+[`ApiServerSource`](https://github.com/knative/eventing/blob/master/pkg/apis/sources/v1alpha1/apiserver_types.go).

 First we will verify that `src` is writing to `chan`.

 ```shell
-kubectl --namespace knative-debug get kuberneteseventsource src -o jsonpath='{.spec.sink}'
+kubectl --namespace knative-debug get apiserversource src -o jsonpath='{.spec.sink}'
 ```

 Which should return
-`map[apiVersion:eventing.knative.dev/v1alpha1 kind:Channel name:chan]`. If it
+`map[apiVersion:messaging.knative.dev/v1alpha1 kind:Channel name:chan]`. If it
 doesn't, then `src` was set up incorrectly and its `spec` needs to be fixed.
 Fixing should be as simple as updating its `spec` to have the correct `sink`
 (see [example.yaml](example.yaml)).
@@ -228,63 +207,9 @@ Fixing should be as simple as updating its `spec` to have the correct `sink`
 Now that we know `src` is sending to `chan`, let's verify that it is `Ready`.

 ```shell
-kubectl --namespace knative-debug get kuberneteseventsource src -o jsonpath='{.status.conditions[?(.type == "Ready")].status}'
+kubectl --namespace knative-debug get apiserversource src -o jsonpath='{.status.conditions[?(.type == "Ready")].status}'
 ```

-This should return `True`. If it doesn't, then we need to investigate why. First
-we will look at the owned `ContainerSource` that underlies `src`, and if that is
-not fruitful, look at the [Source Controller](#source-controller).
-
-##### ContainerSource
-
-`src` is backed by a `ContainerSource` resource.
-
-Is the `ContainerSource` `Ready`?
-
-```shell
-srcUID=$(kubectl --namespace knative-debug get kuberneteseventsource src -o jsonpath='{.metadata.uid}')
-kubectl --namespace knative-debug get containersource -o jsonpath="{.items[?(.metadata.ownerReferences[0].uid == '$srcUID')].status.conditions[?(.type == 'Ready')].status}"
-```
-
-That should be `True`. If it is, but `src` is not `Ready`, then that implies the
-problem is in the [Source Controller](#source-controller).
-
-If `ContainerSource` is not `Ready`, then we need to look at its entire
-`status`:
-
-```shell
-srcUID=$(kubectl --namespace knative-debug get kuberneteseventsource src -o jsonpath='{.metadata.uid}')
-containerSourceName=$(kubectl --namespace knative-debug get containersource -o jsonpath="{.items[?(.metadata.ownerReferences[*].uid == '$srcUID')].metadata.name}")
-kubectl --namespace knative-debug get containersource $containerSourceName --output yaml
-```
-
-The most useful condition (when `Ready` is not `True`), is `Deployed`.
-
-```shell
-srcUID=$(kubectl --namespace knative-debug get kuberneteseventsource src -o jsonpath='{.metadata.uid}')
-containerSourceName=$(kubectl --namespace knative-debug get containersource -o jsonpath="{.items[?(.metadata.ownerReferences[*].uid == '$srcUID')].metadata.name}")
-kubectl --namespace knative-debug get containersource $containerSourceName -o jsonpath='{.status.conditions[?(.type == "Deployed")].message}'
-```
-
-You should see something like `Updated deployment src-xz59f-hmtkp`. Let's see
-the health of the `Deployment` that `ContainerSource` created (named in the
-message, but we will get it directly in the following command):
-
-```shell
-srcUID=$(kubectl --namespace knative-debug get kuberneteseventsource src -o jsonpath='{.metadata.uid}')
-containerSourceUID=$(kubectl --namespace knative-debug get containersource -o jsonpath="{.items[?(.metadata.ownerReferences[*].uid == '$srcUID')].metadata.uid}")
-deploymentName=$(kubectl --namespace knative-debug get deployment -o jsonpath="{.items[?(.metadata.ownerReferences[*].uid == '$containerSourceUID')].metadata.name}")
-kubectl --namespace knative-debug get deployment $deploymentName --output yaml
-```
-
-If this is unhealthy, then it should tell you why. E.g.
-`'pods "src-xz59f-hmtkp-7bd4bc6964-" is forbidden: error looking up service account knative-debug/events-sa: serviceaccount "events-sa" not found'`.
-Fix any errors so that the `Deployment` is healthy.
-
-If the `Deployment` is healthy, but the `ContainerSource` isn't, that implies
-something went wrong in
-[ContainerSource Controller](#containersource-controller).
-
 #### `sub`

 `sub` is a `Subscription` from `chan` to `fn`.
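For reference, a sketch of what `sub` looks like after this migration, assembled from the example.yaml hunks later in this diff. The `subscriber.ref` block is truncated in the diff, so the fields shown here, pointing at the plain Kubernetes `Service` `svc` from the dependency list above, are an assumption:

```yaml
apiVersion: eventing.knative.dev/v1alpha1
kind: Subscription
metadata:
  name: sub
  namespace: knative-debug
spec:
  channel:
    apiVersion: messaging.knative.dev/v1alpha1
    kind: InMemoryChannel
    name: chan
  subscriber:
    ref:
      apiVersion: v1   # assumed: the core Kubernetes Service `svc` in this example
      kind: Service
      name: svc
```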
@@ -319,18 +244,18 @@ document.

 ##### Channel Controller

-There is not a single `Channel` Controller. Instead, there is a single
-Controller for each `ClusterChannelProvisioner`. `chan` uses the
-`in-memory-channel` `ClusterChannelProvisioner`, whose Controller is:
+There is not a single `Channel` Controller. Instead, there is one
+Controller for each Channel CRD. `chan` uses the
+`InMemoryChannel` `Channel CRD`, whose Controller is:

 ```shell
-kubectl --namespace knative-eventing get pod -l clusterChannelProvisioner=in-memory-channel,role=controller --output yaml
+kubectl --namespace knative-eventing get pod -l messaging.knative.dev/channel=in-memory-channel,messaging.knative.dev/role=controller --output yaml
 ```

 See its logs with:

 ```shell
-kubectl --namespace knative-eventing logs -l clusterChannelProvisioner=in-memory-channel,role=controller
+kubectl --namespace knative-eventing logs -l messaging.knative.dev/channel=in-memory-channel,messaging.knative.dev/role=controller
 ```

 Pay particular attention to any lines that have a logging level of `warning` or
@@ -338,39 +263,29 @@ Pay particular attention to any lines that have a logging level of `warning` or

 ##### Source Controller

-Each Source will have its own Controller. `src` is a `KubernetesEventSource`, so
+Each Source will have its own Controller. `src` is an `ApiServerSource`, so
 its Controller is:

-```shell
-kubectl --namespace knative-sources get pod -l control-plane=controller-manager
-```
-
-This is actually a single binary that runs multiple Source Controllers,
-importantly including [ContainerSource Controller](#containersource-controller).
-
-The `KubernetesEventSource` is fairly simple, as it delegates all functionality
-to an underlying [ContainerSource](#containersource), so there is likely no
-useful information in its logs. Instead more useful information is likely in the
-[ContainerSource Controller](#containersource-controller)'s logs. If you want to
-look at `KubernetesEventSource` Controller's logs anyway, they can be see with:
-
-```shell
-kubectl --namespace knative-sources logs -l control-plane=controller-manager
-```
-
-###### ContainerSource Controller
-
-The `ContainerSource` Controller is run in the same binary as some other Source
-Controllers from Eventing. It is:
-
 ```shell
 kubectl --namespace knative-eventing get pod -l app=sources-controller
 ```

+This is actually a single binary that runs multiple Source Controllers,
+importantly including [ApiServerSource Controller](#apiserversource-controller).
+
+###### ApiServerSource Controller
+
+The `ApiServerSource` Controller is run in the same binary as some other Source
+Controllers from Eventing. It is:
+
+```shell
+kubectl --namespace knative-debug get pod -l eventing.knative.dev/sourceName=src,eventing.knative.dev/source=apiserver-source-controller
+```
+
 View its logs with:

 ```shell
-kubectl --namespace knative-eventing logs -l app=sources-controller
+kubectl --namespace knative-debug logs -l eventing.knative.dev/sourceName=src,eventing.knative.dev/source=apiserver-source-controller
 ```

 Pay particular attention to any lines that have a logging level of `warning` or
@@ -409,13 +324,7 @@ The Knative event takes the following path:
 (from nothing).

 1. `src` is POSTing the event to `chan`'s address,
-   `chan-channel-45k5h.knative-debug.svc.cluster.local`.
+   `http://chan-kn-channel.knative-debug.svc.cluster.local`.

-1. `src`'s Istio proxy is intercepting the request, seeing that the Host matches
-   a `VirtualService`. The request's Host is rewritten to
-   `chan.knative-debug.channels.cluster.local` and sent to the
-   [Channel Dispatcher](#channel-dispatcher),
-   `in-memory-channel-dispatcher.knative-eventing.svc.cluster.local`.
-
 1. The Channel Dispatcher receives the request and introspects the Host header
    to determine which `Channel` it corresponds to. It sees that it corresponds
@@ -426,106 +335,6 @@ The Knative event takes the following path:

 We will investigate components in the order in which events should travel.

-#### `src`
-
-Events should be generated at `src`. First let's look at the `Pod`s logs:
-
-```shell
-srcUID=$(kubectl --namespace knative-debug get kuberneteseventsource src -o jsonpath='{.metadata.uid}')
-containerSourceName=$(kubectl --namespace knative-debug get containersource -o jsonpath="{.items[?(.metadata.ownerReferences[*].uid == '$srcUID')].metadata.name}")
-kubectl --namespace knative-debug logs -l source=$containerSourceName -c source
-```
-
-Note that a few log lines within the first ~15 seconds of the `Pod` starting
-like the following are fine. They represent the time waiting for the Istio proxy
-to start. If you see these more than a few seconds after the `Pod` starts, then
-something is wrong.
-
-```shell
-E0116 23:59:40.033667 1 reflector.go:205] github.com/knative/eventing-contrib/pkg/adapter/kubernetesevents/adapter.go:73: Failed to list *v1.Event: Get https://10.51.240.1:443/api/v1/namespaces/knative-debug/events?limit=500&resourceVersion=0: dial tcp 10.51.240.1:443: connect: connection refused
-E0116 23:59:41.034572 1 reflector.go:205] github.com/knative/eventing-contrib/pkg/adapter/kubernetesevents/adapter.go:73: Failed to list *v1.Event: Get https://10.51.240.1:443/api/v1/namespaces/knative-debug/events?limit=500&resourceVersion=0: dial tcp 10.51.240.1:443: connect: connection refused
-```
-
-The success message is `debug` level, so we don't expect to see anything. If you
-see lines with a logging level of `error`, look at their `msg`. For example:
-
-```shell
-"msg":"[404] unexpected response \"\""
-```
-
-Which means that `src` correctly got the Kubernetes `Event` and tried to send it
-to `chan`, but failed to do so. In this case, the response code was a 404. We
-will look at the Istio proxy's logs to see if we can get any further
-information:
-
-```shell
-srcUID=$(kubectl --namespace knative-debug get kuberneteseventsource src -o jsonpath='{.metadata.uid}')
-containerSourceName=$(kubectl --namespace knative-debug get containersource -o jsonpath="{.items[?(.metadata.ownerReferences[*].uid == '$srcUID')].metadata.name}")
-kubectl --namespace knative-debug logs -l source=$containerSourceName -c istio-proxy
-```
-
-We see lines like:
-
-```shell
-[2019-01-17T17:16:11.898Z] "POST / HTTP/1.1" 404 NR 0 0 0 - "-" "Go-http-client/1.1" "4702a818-11e3-9e15-b523-277b94598101" "chan-channel-45k5h.knative-debug.svc.cluster.local" "-"
-```
-
-These are lines emitted by [Envoy](https://www.envoyproxy.io). The line is
-documented as Envoy's
-[Access Logging](https://www.envoyproxy.io/docs/envoy/latest/configuration/access_log).
-That's odd, we already verified that there is a
-[`VirtualService`](#virtualservice) for `chan`. In fact, we don't expect to see
-`chan-channel-45k5h.knative-debug.svc.cluster.local` at all, it should be
-replaced with `chan.knative-debug.channels.cluster.local`. We keep looking in
-the same Istio proxy logs and see:
-
-```shell
-[2019-01-16 23:59:41.408][23][warning][config] bazel-out/k8-opt/bin/external/envoy/source/common/config/_virtual_includes/grpc_mux_subscription_lib/common/config/grpc_mux_subscription_impl.h:70] gRPC config for type.googleapis.com/envoy.api.v2.RouteConfiguration rejected: Only unique values for domains are permitted. Duplicate entry of domain chan.knative-debug.channels.cluster.local
-```
-
-This shows that the [`VirtualService`](#virtualservice) created for `chan`,
-which tries to map two hosts,
-`chan-channel-45k5h.knative-debug.svc.cluster.local` and
-`chan.knative-debug.channels.cluster.local`, is not working. The most likely
-cause is duplicate `VirtualService`s that all try to rewrite those hosts. Look
-at all the `VirtualService`s in the namespace and see what hosts they rewrite:
-
-```shell
-kubectl --namespace knative-debug get virtualservice -o custom-columns='NAME:.metadata.name,HOST:.spec.hosts[*]'
-```
-
-In this case, we see:
-
-```shell
-NAME                 HOST
-chan-channel-38x5a   chan-channel-45k5h.knative-debug.svc.cluster.local,chan.knative-debug.channels.cluster.local
-chan-channel-8dc2x   chan-channel-45k5h.knative-debug.svc.cluster.local,chan.knative-debug.channels.cluster.local
-```
-
-```
-Note: This shouldn't happen normally. It only happened here because I had local edits to the Channel controller and created a bug. If you see this with any released Channel Controllers, open a bug with all relevant information (Channel Controller info and YAML of all the VirtualServices).
-```
-
-Both are owned by `chan`. Deleting both causes the
-[Channel Controller](#channel-controller) to recreate the correct one. After
-deleting both, a single new one is created (same command as above):
-
-```shell
-NAME                 HOST
-chan-channel-9kbr8   chan-channel-45k5h.knative-debug.svc.cluster.local,chan.knative-debug.channels.cluster.local
-```
-
-After [forcing a Kubernetes event to occur](#triggering-events), the Istio proxy
-logs now have:
-
-```shell
-[2019-01-17T18:04:07.571Z] "POST / HTTP/1.1" 202 - 795 0 1 1 "-" "Go-http-client/1.1" "ba36be7e-4fc4-9f26-83bd-b1438db730b0" "chan.knative-debug.channels.cluster.local" "10.48.1.94:8080"
-```
-
-Which looks correct. Most importantly, the return code is now 202 Accepted. In
-addition, the request's Host is being correctly rewritten to
-`chan.knative-debug.channels.cluster.local`.
-
 #### Channel Dispatcher

 The Channel Dispatcher is the component that receives POSTs pushing events into
@@ -537,14 +346,15 @@ binary that handles both the receiving and dispatching sides for all
 First we will inspect the Dispatcher's logs to see if there is anything obvious:

 ```shell
-kubectl --namespace knative-eventing logs -l clusterChannelProvisioner=in-memory-channel,role=dispatcher -c dispatcher
+kubectl --namespace knative-eventing logs -l messaging.knative.dev/channel=in-memory-channel,messaging.knative.dev/role=dispatcher -c dispatcher
 ```

 Ideally we will see lines like:

 ```shell
-{"level":"info","ts":1547752472.9581263,"caller":"provisioners/message_receiver.go:116","msg":"Received request for chan.knative-debug.channels.cluster.local"}
-{"level":"info","ts":1547752472.9582398,"caller":"provisioners/message_dispatcher.go:106","msg":"Dispatching message to http://svc.knative-debug.svc.cluster.local/"}
+{"level":"info","ts":"2019-08-16T13:50:55.424Z","logger":"inmemorychannel-dispatcher.in-memory-channel-dispatcher","caller":"provisioners/message_receiver.go:147","msg":"Request mapped to channel: knative-debug/chan-kn-channel","knative.dev/controller":"in-memory-channel-dispatcher"}
+{"level":"info","ts":"2019-08-16T13:50:55.425Z","logger":"inmemorychannel-dispatcher.in-memory-channel-dispatcher","caller":"provisioners/message_dispatcher.go:112","msg":"Dispatching message to http://svc.knative-debug.svc.cluster.local/","knative.dev/controller":"in-memory-channel-dispatcher"}
+{"level":"info","ts":"2019-08-16T13:50:55.981Z","logger":"inmemorychannel-dispatcher.in-memory-channel-dispatcher","caller":"provisioners/message_receiver.go:140","msg":"Received request for chan-kn-channel.knative-debug.svc.cluster.local","knative.dev/controller":"in-memory-channel-dispatcher"}
 ```

 Which shows that the request is being received and then sent to `svc`, which is
@@ -552,10 +362,15 @@ returning a 2XX response code (likely 200, 202, or 204).

 However if we see something like:

+<!--
+NOTE: This error has been produced by setting spec.ports[0].port to 8081:
+kubectl patch -n knative-debug svc svc -p '{"spec":{"ports": [{"port": 8081, "targetPort":8080}]}}' --type='merge'
+-->
 ```shell
-{"level":"info","ts":1547752478.5898774,"caller":"provisioners/message_receiver.go:116","msg":"Received request for chan.knative-debug.channels.cluster.local"}
-{"level":"info","ts":1547752478.58999,"caller":"provisioners/message_dispatcher.go:106","msg":"Dispatching message to http://svc.knative-debug.svc.cluster.local/"}
-{"level":"error","ts":1547752478.6035335,"caller":"fanout/fanout_handler.go:108","msg":"Fanout had an error","error":"Unable to complete request Post http://svc.knative-debug.svc.cluster.local/: EOF","stacktrace":"github.com/knative/eventing/pkg/sidecar/fanout.(*Handler).dispatch\n\t/usr/local/google/home/harwayne/go/src/github.com/knative/eventing/pkg/sidecar/fanout/fanout_handler.go:108\ngithub.com/knative/eventing/pkg/sidecar/fanout.createReceiverFunction.func1\n\t/usr/local/google/home/harwayne/go/src/github.com/knative/eventing/pkg/sidecar/fanout/fanout_handler.go:86\ngithub.com/knative/eventing/pkg/provisioners.(*MessageReceiver).HandleRequest\n\t/usr/local/google/home/harwayne/go/src/github.com/knative/eventing/pkg/provisioners/message_receiver.go:132\ngithub.com/knative/eventing/pkg/sidecar/fanout.(*Handler).ServeHTTP\n\t/usr/local/google/home/harwayne/go/src/github.com/knative/eventing/pkg/sidecar/fanout/fanout_handler.go:91\ngithub.com/knative/eventing/pkg/sidecar/multichannelfanout.(*Handler).ServeHTTP\n\t/usr/local/google/home/harwayne/go/src/github.com/knative/eventing/pkg/sidecar/multichannelfanout/multi_channel_fanout_handler.go:128\ngithub.com/knative/eventing/pkg/sidecar/swappable.(*Handler).ServeHTTP\n\t/usr/local/google/home/harwayne/go/src/github.com/knative/eventing/pkg/sidecar/swappable/swappable.go:105\nnet/http.serverHandler.ServeHTTP\n\t/usr/lib/google-golang/src/net/http/server.go:2740\nnet/http.(*conn).serve\n\t/usr/lib/google-golang/src/net/http/server.go:1846"}
+{"level":"info","ts":"2019-08-16T16:10:16.859Z","logger":"inmemorychannel-dispatcher.in-memory-channel-dispatcher","caller":"provisioners/message_receiver.go:140","msg":"Received request for chan-kn-channel.knative-debug.svc.cluster.local","knative.dev/controller":"in-memory-channel-dispatcher"}
+{"level":"info","ts":"2019-08-16T16:10:16.859Z","logger":"inmemorychannel-dispatcher.in-memory-channel-dispatcher","caller":"provisioners/message_receiver.go:147","msg":"Request mapped to channel: knative-debug/chan-kn-channel","knative.dev/controller":"in-memory-channel-dispatcher"}
+{"level":"info","ts":"2019-08-16T16:10:16.859Z","logger":"inmemorychannel-dispatcher.in-memory-channel-dispatcher","caller":"provisioners/message_dispatcher.go:112","msg":"Dispatching message to http://svc.knative-debug.svc.cluster.local/","knative.dev/controller":"in-memory-channel-dispatcher"}
+{"level":"error","ts":"2019-08-16T16:10:38.169Z","logger":"inmemorychannel-dispatcher.in-memory-channel-dispatcher","caller":"fanout/fanout_handler.go:121","msg":"Fanout had an error","knative.dev/controller":"in-memory-channel-dispatcher","error":"Unable to complete request Post http://svc.knative-debug.svc.cluster.local/: dial tcp 10.4.44.156:80: i/o timeout","stacktrace":"knative.dev/eventing/pkg/provisioners/fanout.(*Handler).dispatch\n\t/Users/xxxxxx/go/src/knative.dev/eventing/pkg/provisioners/fanout/fanout_handler.go:121\nknative.dev/eventing/pkg/provisioners/fanout.createReceiverFunction.func1.1\n\t/Users/i512777/go/src/knative.dev/eventing/pkg/provisioners/fanout/fanout_handler.go:95"}
 ```

 Then we know there was a problem posting to
@@ -568,10 +383,4 @@ events about failures.

 TODO Fill in this section.

-See `fn`'s Istio proxy logs:
-
-```shell
-kubectl --namespace knative-debug logs -l app=fn -c istio-proxy
-```
-
 # TODO Finish the guide.
@@ -12,36 +12,33 @@ metadata:
 # The event source.

 apiVersion: sources.eventing.knative.dev/v1alpha1
-kind: KubernetesEventSource
+kind: ApiServerSource
 metadata:
   name: src
   namespace: knative-debug
 spec:
-  namespace: knative-debug
   serviceAccountName: service-account
+  mode: Resource
+  resources:
+    - apiVersion: v1
+      kind: Event
   sink:
-    apiVersion: eventing.knative.dev/v1alpha1
-    kind: Channel
+    apiVersion: messaging.knative.dev/v1alpha1
+    kind: InMemoryChannel
     name: chan

 ---

 # The Channel events are sent to.

-apiVersion: eventing.knative.dev/v1alpha1
-kind: Channel
+apiVersion: messaging.knative.dev/v1alpha1
+kind: InMemoryChannel
 metadata:
   name: chan
   namespace: knative-debug
-spec:
-  provisioner:
-    apiVersion: eventing.knative.dev/v1alpha1
-    kind: ClusterChannelProvisioner
-    name: in-memory-channel

 ---

-# The Subscription to the Channel.
+# The Subscription to the InMemoryChannel.

 apiVersion: eventing.knative.dev/v1alpha1
 kind: Subscription
@@ -50,8 +47,8 @@ metadata:
   namespace: knative-debug
 spec:
   channel:
-    apiVersion: eventing.knative.dev/v1alpha1
-    kind: Channel
+    apiVersion: messaging.knative.dev/v1alpha1
+    kind: InMemoryChannel
     name: chan
   subscriber:
     ref:
@@ -92,8 +89,6 @@ spec:
       app: fn
   template:
     metadata:
-      annotations:
-        sidecar.istio.io/inject: "true"
       labels:
         app: fn
     spec:
@@ -118,7 +118,7 @@ kubectl delete camelsource camel-timer-source
 Install the [telegram CamelSource](source_telegram.yaml) from source:

 ```shell
-kunbectl apply -f source_telegram.yaml
+kubectl apply -f source_telegram.yaml
 ```

 Start `kail` again and keep it open on the event display:
@@ -1,14 +1,13 @@
 # Channel for testing events.

-apiVersion: eventing.knative.dev/v1alpha1
+apiVersion: messaging.knative.dev/v1alpha1
 kind: Channel
 metadata:
   name: camel-test
 spec:
-  provisioner:
-    apiVersion: eventing.knative.dev/v1alpha1
-    kind: ClusterChannelProvisioner
-    name: in-memory-channel
+  channelTemplate:
+    apiVersion: messaging.knative.dev/v1alpha1
+    kind: InMemoryChannel

 ---
@@ -20,7 +19,7 @@ metadata:
   name: camel-source-display
 spec:
   channel:
-    apiVersion: eventing.knative.dev/v1alpha1
+    apiVersion: messaging.knative.dev/v1alpha1
     kind: Channel
     name: camel-test
   subscriber:
@@ -19,6 +19,6 @@ spec:
       .setHeader("Content-Type").constant("application/json")
       .to("knative:endpoint/sink?cloudEventsType=org.apache.camel.quote")
   sink:
-    apiVersion: eventing.knative.dev/v1alpha1
+    apiVersion: messaging.knative.dev/v1alpha1
     kind: Channel
     name: camel-test
@@ -19,6 +19,6 @@ spec:
     # Camel K option to enable serialization of the component output
     camel.component.knative.jsonSerializationEnabled: "true"
   sink:
-    apiVersion: eventing.knative.dev/v1alpha1
+    apiVersion: messaging.knative.dev/v1alpha1
     kind: Channel
     name: camel-test
@@ -14,6 +14,6 @@ spec:
   # Using 'period' URI option (see component documentation)
   uri: timer:tick?period=3s
   sink:
-    apiVersion: eventing.knative.dev/v1alpha1
+    apiVersion: messaging.knative.dev/v1alpha1
     kind: Channel
     name: camel-test
@@ -115,17 +115,17 @@ files from the Knative repositories:
 | **knative/eventing** | | |
 | [`release.yaml`][4.1]† | Installs the Eventing component. Includes [ContainerSource](../eventing#containersource), [CronJobSource][6.2], InMemoryChannel. | |
 | [`eventing.yaml`][4.2] | Installs the Eventing component. Includes [ContainerSource](../eventing#containersource) and [CronJobSource][6.2]. Does not include any Channel. | |
 | [`in-memory-channel-crd.yaml`][4.3] | Installs only the InMemoryChannel. | Eventing component |
-| [`natss.yaml`][4.5] | Installs only the NATSS channel provisioner. | Eventing component |
-| [`gcp-pubsub.yaml`][4.6] | Installs only the GCP PubSub channel provisioner. | Eventing component |
 | **knative/eventing-contrib** | | |
 | [`github.yaml`][5.10]† | Installs the [GitHub][6.10] source. | Eventing component |
 | [`camel.yaml`][5.40] | Installs the Apache Camel source. | Eventing component |
-| [`gcppubsub.yaml`][5.20] | Installs the [GCP PubSub source][6.30] | Eventing component |
-| [`kafka.yaml`][5.50] | Installs the Apache Kafka source. | Eventing component |
-| [`kafka-channel.yaml`][5.60] | Installs the KafkaChannel. | Eventing component |
+| [`kafka-importer.yaml`][5.50] | Installs the Apache Kafka source. | Eventing component |
+| [`kafka-channel.yaml`][5.60] | Installs the KafkaChannel. | Eventing component |
 | [`awssqs.yaml`][5.70] | Installs the AWS SQS source. | Eventing component |
 | [`event-display.yaml`][5.30] | Installs a Knative Service that logs events received for use in samples and debugging. | Serving component, Eventing component |
+| [`natss-channel.yaml`][5.80] | Installs the NATS streaming channel implementation. | Eventing component |
+| **knative/google/knative-gcp** | | |
+| [`cloud-run-events.yaml`][7.10] | Installs the GCP PubSub channel implementation | |

 _\*_ See
 [Installing logging, metrics, and traces](../serving/installing-logging-metrics-traces.md)
@@ -165,28 +165,26 @@ for details about installing the various supported observability plugins.
 [4.30]:
   https://github.com/knative/eventing/releases/download/{{< version >}}/in-memory-channel-crd.yaml
 [4.40]: https://github.com/knative/eventing/releases/download/{{< version >}}/kafka.yaml
-[4.50]: https://github.com/knative/eventing/releases/download/{{< version >}}/natss.yaml
-[4.60]:
-  https://github.com/knative/eventing/releases/download/{{< version >}}/gcp-pubsub.yaml
 [5.0]: https://github.com/knative/eventing-contrib/releases/tag/{{< version >}}
 [5.10]:
   https://github.com/knative/eventing-contrib/releases/download/{{< version >}}/github.yaml
-[5.20]:
-  https://github.com/knative/eventing-contrib/releases/download/{{< version >}}/gcppubsub.yaml
 [5.30]:
   https://github.com/knative/eventing-contrib/releases/download/{{< version >}}/event-display.yaml
 [5.40]:
   https://github.com/knative/eventing-contrib/releases/download/{{< version >}}/camel.yaml
 [5.50]:
-  https://github.com/knative/eventing-contrib/releases/download/{{< version >}}/kafka.yaml
+  https://github.com/knative/eventing-contrib/releases/download/{{< version >}}/kafka-importer.yaml
 [5.60]:
   https://github.com/knative/eventing-contrib/releases/download/{{< version >}}/kafka-channel.yaml
 [5.70]:
   https://github.com/knative/eventing-contrib/releases/download/{{< version >}}/awssqs.yaml
+[5.80]:
+  https://github.com/knative/eventing-contrib/releases/download/{{< version >}}/natss.yaml
 [6.10]: https://developer.github.com/v3/activity/events/types/
 [6.20]:
   https://github.com/knative/eventing-contrib/blob/master/samples/cronjob-source/README.md
-[6.30]: https://cloud.google.com/pubsub/
+[7.0]: https://github.com/google/knative-gcp/releases/tag/{{< version >}}
+[7.10]: https://github.com/google/knative-gcp/releases/download/{{< version >}}/cloud-run-events.yaml

 ### Installing Knative