mirror of https://github.com/knative/docs.git
fix the broken links found by blc tool (#3734)
parent 5bfd6db68d
commit 10a2995452
@@ -24,7 +24,7 @@ also possible to combine the components in novel ways.
 configuration from your application.

 1. **I just want to consume events like X, I don't care how they are
-published.** Use a [trigger](./triggers) to consume events from a Broker based
+published.** Use a [trigger](./broker/triggers) to consume events from a Broker based
 on CloudEvents attributes. Your application will receive the events as an
 HTTP POST.
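For context on the trigger being linked above: a trigger subscribes a consumer to a Broker and filters on CloudEvents attributes, delivering matching events as an HTTP POST. A minimal sketch, with the broker name, filter attribute, and subscriber service all chosen for illustration:

```yaml
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: my-trigger                  # illustrative name
spec:
  broker: default                   # Broker to consume events from
  filter:
    attributes:
      type: dev.example.events      # match on the CloudEvents "type" attribute (placeholder value)
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: event-display           # hypothetical consumer; receives matching events as HTTP POST
```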
@@ -13,7 +13,7 @@ visualize and trace your requests.

 ## Before you begin

-You must have a Knative cluster running with the Eventing component installed. [Learn more](../install/)
+You must have a Knative cluster running with the Eventing component installed. [Learn more](../../admin/install/)

 ## Configuring tracing
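As a hedged illustration of what the "Configuring tracing" section covers: Eventing tracing is typically controlled through the `config-tracing` ConfigMap in the `knative-eventing` namespace. The endpoint and sample rate below are placeholders, not values taken from the page being patched:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-tracing
  namespace: knative-eventing
data:
  backend: zipkin            # tracing backend; "none" disables tracing
  zipkin-endpoint: "http://zipkin.istio-system.svc.cluster.local:9411/api/v2/spans"   # placeholder collector URL
  sample-rate: "0.1"         # fraction of requests to sample
```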
@@ -11,7 +11,7 @@ aliases:

 Brokers are Kubernetes [custom resources](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/) that define an event mesh for collecting a pool of [CloudEvents](https://cloudevents.io/). Brokers provide a discoverable endpoint, `status.address`, for event ingress, and triggers for event delivery. Event producers can send events to a broker by POSTing the event to the `status.address.url` of the broker.

-Event delivery mechanics are an implementation detail that depend on the configured [broker class](./configmaps/broker-configmaps/#broker-class-options). Using brokers and triggers abstracts the details of event routing from the event producer and event consumer.
+Event delivery mechanics are an implementation detail that depend on the configured [broker class](./configmaps#broker-class-options). Using brokers and triggers abstracts the details of event routing from the event producer and event consumer.

 <img src="images/broker-workflow.svg" width="70%">
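To make the `status.address.url` ingress endpoint mentioned above concrete, here is a minimal Broker manifest with the kind of status the controller fills in. The status block is shown only for illustration (users never write it), and the URL pattern is an assumption based on the multi-tenant channel-based broker:

```yaml
apiVersion: eventing.knative.dev/v1
kind: Broker
metadata:
  name: default
  namespace: default
status:
  # Populated by the broker controller, not by the user.
  # Event producers POST CloudEvents to this URL.
  address:
    url: http://broker-ingress.knative-eventing.svc.cluster.local/default/default
```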
@@ -56,5 +56,5 @@ The RabbitMQ Broker uses [RabbitMQ](https://www.rabbitmq.com/) for its underlyin
 ## Next steps

 - Create a [MT channel-based broker](./create-mtbroker).
-- Configure [default broker ConfigMap settings](./configmaps/broker-configmaps).
+- Configure [default broker ConfigMap settings](./configmaps/).
 - View the [broker specifications](https://github.com/knative/specs/blob/main/specs/eventing/broker.md).
@@ -77,7 +77,7 @@ data:

 Now every broker created in the cluster that does not have a `spec.config` will be configured to use the `kafka-channel` ConfigMap.

-For more information about creating a `kafka-channel` ConfigMap to use with your broker, see the [Kafka Channel ConfigMap](../broker/kafka-broker/kafka-configmap/) documentation.
+For more information about creating a `kafka-channel` ConfigMap to use with your broker, see the [Kafka Channel ConfigMap](../kafka-broker/kafka-configmap/) documentation.

 ### Changing the default channel implementation for a namespace
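For context, the `kafka-channel` ConfigMap referred to above carries a channel template that brokers use for their backing channel. A hedged sketch, with partition and replication values chosen only for illustration:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: kafka-channel
  namespace: knative-eventing
data:
  channel-template-spec: |
    apiVersion: messaging.knative.dev/v1beta1
    kind: KafkaChannel
    spec:
      numPartitions: 3        # illustrative values; tune for your cluster
      replicationFactor: 1
```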
@@ -37,7 +37,7 @@ You can create a broker by using the `kn` CLI or by applying YAML files using `k
 === "kubectl"

-The YAML in the following example creates a broker named `default` in the current namespace. For more information about configuring broker options using YAML, see the full [broker configuration example](./example-mtbroker).
+The YAML in the following example creates a broker named `default` in the current namespace. For more information about configuring broker options using YAML, see the full [broker configuration example](../example-mtbroker).

 1. Create a broker in the current namespace:
@@ -35,8 +35,9 @@ metadata:
 backoffPolicy: exponential
 backoffDelay: "2007-03-01T13:00:00Z/P1Y2M10DT2H30M"
 ```

 - You can specify any valid `name` for your broker. Using `default` will create a broker named `default`.
 - The `namespace` must be an existing namespace in your cluster. Using `default` will create the broker in the current namespace.
-- You can set the `eventing.knative.dev/broker.class` annotation to change the class of the broker. The default broker class is `MTChannelBasedBroker`, but Knative also supports use of the `KafkaBroker` class. For more information about Kafka brokers, see the [Apache Kafka Broker](./kafka-broker) documentation.
-- `spec.config` is used to specify the default backing channel configuration for MT channel-based broker implementations. For more information on configuring the default channel type, see the documentation on [ConfigMaps](./configmaps/broker-configmaps).
-- `spec.delivery` is used to configure event delivery options. Event delivery options specify what happens to an event that fails to be delivered to an event sink. For more information, see the documentation on [broker event delivery](./broker-event-delivery).
+- You can set the `eventing.knative.dev/broker.class` annotation to change the class of the broker. The default broker class is `MTChannelBasedBroker`, but Knative also supports use of the `KafkaBroker` class. For more information about Kafka brokers, see the [Apache Kafka Broker](../kafka-broker) documentation.
+- `spec.config` is used to specify the default backing channel configuration for MT channel-based broker implementations. For more information on configuring the default channel type, see the documentation on [ConfigMaps](../configmaps).
+- `spec.delivery` is used to configure event delivery options. Event delivery options specify what happens to an event that fails to be delivered to an event sink. For more information, see the documentation on [broker event delivery](../broker-event-delivery).
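The bullets above describe `spec.config` and `spec.delivery`. A hedged sketch of a broker manifest that pulls those pieces together; the dead-letter sink name and retry values are illustrative, not taken from the docs being patched:

```yaml
apiVersion: eventing.knative.dev/v1
kind: Broker
metadata:
  annotations:
    eventing.knative.dev/broker.class: MTChannelBasedBroker
  name: default
  namespace: default
spec:
  config:
    apiVersion: v1
    kind: ConfigMap
    name: config-br-default-channel   # backing channel configuration (assumed name)
    namespace: knative-eventing
  delivery:
    deadLetterSink:
      ref:
        apiVersion: serving.knative.dev/v1
        kind: Service
        name: dead-letter-handler     # hypothetical sink for events that cannot be delivered
    retry: 3
    backoffPolicy: exponential
    backoffDelay: PT0.5S              # ISO 8601 duration; illustrative
```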
@@ -22,7 +22,7 @@ Notable features are:

 ## Prerequisites

-1. [Installing Eventing using YAML files](../../../install/install-eventing-with-yaml.md).
+1. [Installing Eventing using YAML files](../../../admin/install/install-eventing-with-yaml).
 2. An Apache Kafka cluster (if you're just getting started you can follow [Strimzi Quickstart page](https://strimzi.io/quickstarts/)).

 ## Installation
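This hunk comes from the Apache Kafka Broker page. Once that broker is installed, a broker of the Kafka class is requested through the broker-class annotation; a minimal sketch, assuming the install provides a `kafka-broker-config` ConfigMap in `knative-eventing`:

```yaml
apiVersion: eventing.knative.dev/v1
kind: Broker
metadata:
  annotations:
    eventing.knative.dev/broker.class: Kafka   # selects the Kafka broker class
  name: default
  namespace: default
spec:
  config:
    apiVersion: v1
    kind: ConfigMap
    name: kafka-broker-config        # assumed to exist from the Kafka broker installation
    namespace: knative-eventing
```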
@@ -12,7 +12,7 @@ Knative provides the InMemoryChannel channel implementation by default. This def
 **NOTE:** InMemoryChannel channels should not be used in production environments.

 The default channel implementation is specified in the `default-ch-webhook` ConfigMap in the `knative-eventing` namespace.
-For more information about modifying ConfigMaps, see [Configuring the Eventing Operator custom resource](../../../docs/install/operator/configuring-eventing-cr).
+For more information about modifying ConfigMaps, see [Configuring the Eventing Operator custom resource](../../../install/operator/configuring-eventing-cr).

 In the following example, the cluster default channel implementation is InMemoryChannel, while the namespace default channel implementation for the `example-namespace` is KafkaChannel.
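A hedged sketch of what that `default-ch-webhook` ConfigMap looks like for the cluster/namespace split described above; the KafkaChannel spec values are illustrative:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: default-ch-webhook
  namespace: knative-eventing
data:
  default-ch-config: |
    clusterDefault:
      apiVersion: messaging.knative.dev/v1
      kind: InMemoryChannel
    namespaceDefaults:
      example-namespace:
        apiVersion: messaging.knative.dev/v1beta1
        kind: KafkaChannel
        spec:
          numPartitions: 2        # illustrative values
          replicationFactor: 1
```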
@@ -157,7 +157,7 @@ kubectl --namespace knative-debug get channel.messaging.knative.dev chan --outpu
 ```

 If `status` is completely missing, it implies that something is wrong with the
-`in-memory-channel` controller. See [Channel Controller](#channel-controller).
+`in-memory-channel` controller. See [Channel Controller](./#channel-controller).

 Next verify that `chan` is addressable:
@@ -187,7 +187,7 @@ something went wrong during `chan` reconciliation. See

 ##### `src`

-`src` is a [`ApiServerSource`](../sources/apiserversource.md).
+`src` is a [`ApiServerSource`](../sources/apiserversource).

 First we will verify that `src` is writing to `chan`.
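In the debugging walkthrough above, `src` is an ApiServerSource whose sink is the channel `chan`. A sketch of roughly what such a source looks like; the service account and the watched resource are assumptions, not taken from the guide:

```yaml
apiVersion: sources.knative.dev/v1
kind: ApiServerSource
metadata:
  name: src
  namespace: knative-debug
spec:
  serviceAccountName: events-sa        # assumed service account with RBAC to watch Events
  mode: Resource
  resources:
    - apiVersion: v1
      kind: Event
  sink:
    ref:
      apiVersion: messaging.knative.dev/v1
      kind: Channel
      name: chan                        # the channel being debugged
```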
@@ -39,7 +39,7 @@ Where

 ## Broker event delivery

-See the [broker](./broker/broker-event-delivery) documentation.
+See the [broker](../broker/broker-event-delivery) documentation.

 ## Channel event delivery
@@ -20,7 +20,7 @@ type information in the cluster data store.
 particularly the
 [Context Attributes](https://github.com/cloudevents/spec/blob/master/spec.md#context-attributes)
 section.
-1. Be familiar with [event sources](./sources).
+1. Be familiar with [event sources](../sources).

 ## Discovering events with the registry
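As a hedged illustration of what the registry section that follows discovers: the registry is a collection of EventType resources, each recording the CloudEvents `type` and `source` available on a broker. The name, type, and source values below are placeholders:

```yaml
apiVersion: eventing.knative.dev/v1beta1
kind: EventType
metadata:
  name: github-push-example          # placeholder; registry entries are often auto-generated
  namespace: default
spec:
  type: dev.knative.source.github.push     # CloudEvents "type" attribute (placeholder)
  source: https://github.com/knative/docs  # CloudEvents "source" attribute (placeholder)
  broker: default                           # broker on which these events are available
```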
@@ -268,7 +268,7 @@ the next topic: How do we actually populate the registry in the first place?

 If you are interested in more information regarding configuration options of a
 KafkaSource, please refer to the
-[KafKaSource sample](./samples/kafka/).
+[KafKaSource sample](../samples/kafka/).

 For this discussion, the relevant information from the yaml above are the
 `sink` and the `topics`. We observe that the `sink` is of kind `Broker`. We
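The `sink` and `topics` discussed above belong to a KafkaSource. A hedged sketch with a placeholder bootstrap server, topic, and consumer group:

```yaml
apiVersion: sources.knative.dev/v1beta1
kind: KafkaSource
metadata:
  name: kafka-source
spec:
  consumerGroup: knative-group                     # illustrative consumer group
  bootstrapServers:
    - my-cluster-kafka-bootstrap.kafka:9092        # placeholder Kafka bootstrap address
  topics:
    - knative-demo-topic                           # placeholder topic
  sink:
    ref:
      apiVersion: eventing.knative.dev/v1
      kind: Broker
      name: default                                # events land on the broker, registering EventTypes
```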
@@ -283,7 +283,7 @@ the next topic: How do we actually populate the registry in the first place?

 ## Next steps

-1. [Installing Knative](../install/).
-1. [Knative code samples](./samples/) is a useful resource to better understand
+1. [Installing Knative](../../getting-started/install-knative-quickstart/).
+1. [Knative code samples](../samples/) is a useful resource to better understand
 some of the Event Sources (remember to point them to a Broker if you want
 automatic registration of EventTypes in the registry).
@@ -11,7 +11,7 @@ For more details about the process, the feature phases, quality requirements and

 ## Before you begin

-You must have a Knative cluster running with the Eventing component installed. [Learn more](../install/)
+You must have a Knative cluster running with the Eventing component installed. [Learn more](../../admin/install/)

 ## Experimental features configuration
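As a hedged sketch only of the configuration section that follows: Knative Eventing toggles experimental features through a `config-features` ConfigMap in `knative-eventing`, with individual flags switched to `enabled`. The flag name below is purely illustrative; the real flag names are listed in the experimental-features docs:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-features
  namespace: knative-eventing
data:
  some-experimental-feature: enabled   # illustrative flag name, not a real feature
```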
@@ -46,4 +46,4 @@ Parallel has three parts for the Status:

 ## Examples

-Learn how to use Parallel by following the [examples](../samples/parallel/)
+Learn how to use Parallel by following the [examples](../../samples/parallel/)
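For context on the examples being linked: a Parallel fans an incoming event out to branches, each with an optional filter and a subscriber. A minimal hedged sketch; the referenced services are hypothetical:

```yaml
apiVersion: flows.knative.dev/v1
kind: Parallel
metadata:
  name: demo-parallel
spec:
  channelTemplate:
    apiVersion: messaging.knative.dev/v1
    kind: InMemoryChannel
  branches:
    - filter:
        ref:
          apiVersion: serving.knative.dev/v1
          kind: Service
          name: even-filter            # hypothetical filter service
      subscriber:
        ref:
          apiVersion: serving.knative.dev/v1
          kind: Service
          name: even-handler           # hypothetical subscriber for events that pass the filter
    - subscriber:
        ref:
          apiVersion: serving.knative.dev/v1
          kind: Service
          name: catch-all-handler      # branch without a filter; receives events unconditionally
```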
@@ -49,7 +49,7 @@ Sequence has four parts for the Status:

 ## Examples

-For each of these examples below, You will use a [`PingSource`](../../samples/ping-source/) as the source of events.
+For each of these examples below, You will use a [`PingSource`](../../sources/ping-source/) as the source of events.

 We also use a very simple [transformer](https://github.com/knative/eventing/blob/main/cmd/appender/main.go) which performs very trivial transformation of the incoming events to demonstrate they have passed through each stage.
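To give shape to the Sequence examples being linked, a hedged sketch of a three-step Sequence; the step and reply service names are placeholders:

```yaml
apiVersion: flows.knative.dev/v1
kind: Sequence
metadata:
  name: sequence
spec:
  channelTemplate:
    apiVersion: messaging.knative.dev/v1
    kind: InMemoryChannel
  steps:
    - ref:
        apiVersion: serving.knative.dev/v1
        kind: Service
        name: first                    # hypothetical step services, called in order
    - ref:
        apiVersion: serving.knative.dev/v1
        kind: Service
        name: second
    - ref:
        apiVersion: serving.knative.dev/v1
        kind: Service
        name: third
  reply:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: event-display              # where the final output is sent
```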
@@ -10,7 +10,7 @@ aliases:
 # Sequence wired to event-display

 We are going to create the following logical configuration. We create a
-PingSource, feeding events to a [`Sequence`](../../../flows/sequence.md), then
+PingSource, feeding events to a [`Sequence`](../), then
 taking the output of that `Sequence` and displaying the resulting output.

 
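For context on the PingSource that feeds the Sequence in this and the following examples, a hedged sketch with an illustrative schedule, payload, and Sequence name:

```yaml
apiVersion: sources.knative.dev/v1beta2
kind: PingSource
metadata:
  name: ping-source
spec:
  schedule: "*/2 * * * *"                 # cron schedule; fires every two minutes (illustrative)
  contentType: "application/json"
  data: '{"message": "Hello world!"}'     # fixed payload carried by each event
  sink:
    ref:
      apiVersion: flows.knative.dev/v1
      kind: Sequence
      name: sequence                      # assumed name of the Sequence being fed
```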
@@ -10,7 +10,7 @@ aliases:
 # Sequence wired to another Sequence

 We are going to create the following logical configuration. We create a
-PingSource, feeding events to a [`Sequence`](../../../flows/sequence.md), then
+PingSource, feeding events to a [`Sequence`](../), then
 taking the output of that `Sequence` and sending it to a second `Sequence` and
 finally displaying the resulting output.
@@ -10,7 +10,7 @@ aliases:
 # Sequence terminal

 We are going to create the following logical configuration. We create a
-PingSource, feeding events to a [`Sequence`](../../../flows/sequence.md).
+PingSource, feeding events to a [`Sequence`](../).
 Sequence can then do either external work, or out of band create additional
 events.
@@ -11,7 +11,7 @@ aliases:

 We are going to create the following logical configuration. We create a
 PingSource, feeding events into the Broker, then we create a `Filter` that wires
-those events into a [`Sequence`](../../../flows/sequence.md) consisting of 3
+those events into a [`Sequence`](../) consisting of 3
 steps. Then we take the end of the Sequence and feed newly minted events back
 into the Broker and create another Trigger which will then display those events.
@@ -26,7 +26,7 @@ kubectl create namespace event-example

 ## Adding a broker to the namespace

-The [broker](./broker) allows you to route events to different event sinks or consumers.
+The [broker](../broker) allows you to route events to different event sinks or consumers.

 1. Add a broker named `default` to your namespace by entering the following command:
@@ -54,7 +54,7 @@ The [broker](./broker) allows you to route events to different event sinks or co
 ```

 If `READY` is `False`, wait a few moments and then run the command again.
-If you continue to receive the `False` status, see the [Debugging Guide](./debugging/) to troubleshoot the issue.
+If you continue to receive the `False` status, see the [Debugging Guide](../debugging/) to troubleshoot the issue.

 ## Creating event consumers
@@ -150,11 +150,11 @@ demonstrate how you can configure your event producers to target a specific cons
 goodbye-display 1/1 1 1 16s
 ```
 The number of replicas in the **READY** column should match the number of replicas in the **AVAILABLE** column.
-If the numbers do not match, see the [Debugging Guide](./debugging/) to troubleshoot the issue.
+If the numbers do not match, see the [Debugging Guide](../debugging/) to troubleshoot the issue.

 ## Creating triggers

-A [trigger](./broker/triggers) defines the events that each event consumer receives.
+A [trigger](../broker/triggers) defines the events that each event consumer receives.
 Brokers use triggers to forward events to the correct consumers.
 Each trigger can specify a filter that enables selection of relevant events based on the Cloud Event context attributes.
@@ -218,7 +218,7 @@ Each trigger can specify a filter that enables selection of relevant events base

 The `SUBSCRIBER_URI` has a value similar to `triggerName.namespaceName.svc.cluster.local`.
 The exact value depends on the broker implementation.
-If this value looks incorrect, see the [Debugging Guide](./debugging/) to troubleshoot the issue.
+If this value looks incorrect, see the [Debugging Guide](../debugging/) to troubleshoot the issue.

 ## Creating a pod as an event producer
@@ -99,6 +99,6 @@ $ ./kafka_setup.sh
 A number of different examples, showing the `KafkaSource`, `KafkaChannel` and
 `KafkaBinding` can be found here:

-- [`KafkaSource` to `Service`](./source/)
+- [`KafkaSource` to `Service`](../../sources/kafka-source)
 - [`KafkaChannel` and Broker](./channel/)
 - [`KafkaBinding`](./binding/)
@@ -39,16 +39,16 @@ All Sources are part of the `sources` category.
 | -- | -- | -- | -- |
 | [APIServerSource](./apiserversource) | v1 | Knative | Brings Kubernetes API server events into Knative. The APIServerSource fires a new event each time a Kubernetes resource is created, updated or deleted. |
 | [AWS SQS](https://github.com/knative-sandbox/eventing-awssqs/tree/main/samples) | v1alpha1 | Knative | Brings [AWS Simple Queue Service](https://aws.amazon.com/sqs/) messages into Knative. The AwsSqsSource fires a new event each time an event is published on an [AWS SQS topic](https://aws.amazon.com/sqs/). |
-| [Apache Camel](../samples/apache-camel-source) | v1alpha1 | Knative | Enables use of [Apache Camel](https://github.com/apache/camel) components for pushing events into Knative. A CamelSource is an event source that can represent any existing [Apache Camel component](https://github.com/apache/camel/tree/master/components), that provides a consumer side, and enables publishing events to an addressable endpoint. Each Camel endpoint has the form of a URI where the scheme is the ID of the component to use. CamelSource requires [Camel-K](https://github.com/apache/camel-k#installation) to be installed into the current namespace. See the [CamelSource](https://github.com/knative-sandbox/eventing-camel/tree/main/samples) example. |
+| [Apache Camel](./apache-camel-source) | v1alpha1 | Knative | Enables use of [Apache Camel](https://github.com/apache/camel) components for pushing events into Knative. A CamelSource is an event source that can represent any existing [Apache Camel component](https://github.com/apache/camel/tree/master/components), that provides a consumer side, and enables publishing events to an addressable endpoint. Each Camel endpoint has the form of a URI where the scheme is the ID of the component to use. CamelSource requires [Camel-K](https://github.com/apache/camel-k#installation) to be installed into the current namespace. See the [CamelSource](https://github.com/knative-sandbox/eventing-camel/tree/main/samples) example. |
 | [Apache CouchDB](https://github.com/knative-sandbox/eventing-couchdb/blob/main/source) | v1alpha1 | Knative | Brings [Apache CouchDB](https://couchdb.apache.org/) messages into Knative. |
 | [Apache Kafka](../samples/kafka) | v1beta1 | Knative | Brings [Apache Kafka](https://kafka.apache.org/) messages into Knative. The KafkaSource reads events from an Apache Kafka Cluster, and passes these events to a sink so that they can be consumed. See the [Kafka Source](https://github.com/knative-sandbox/eventing-kafka/blob/main/pkg/source) example for more details. |
-| [Container Source](./containersource.md) | v1 | Knative | The ContainerSource will instantiate container image(s) that can generate events until the ContainerSource is deleted. This may be used, for example, to poll an FTP server for new files or generate events at a set time interval. Given a `spec.template` with at least a container image specified, ContainerSource will keep a `Pod` running with the specified image(s). `K_SINK` (destination address) and `KE_CE_OVERRIDES` (JSON CloudEvents attributes) environment variables are injected into the running image(s). It is used by multiple other Sources as underlying infrastructure. Refer to the [Container Source](../samples/container-source) example for more details. |
-| [GitHub](../samples/github-source) | v1alpha1 | Knative | Registers for events of the specified types on the specified GitHub organization or repository, and brings those events into Knative. The GitHubSource fires a new event for selected [GitHub event types](https://developer.github.com/v3/activity/events/types/). See the [GitHub Source](../samples/github-source) example for more details. |
-| [GitLab](../samples/gitlab-source) | v1alpha1 | Knative | Registers for events of the specified types on the specified GitLab repository, and brings those events into Knative. The GitLabSource creates a webhooks for specified [event types](https://docs.gitlab.com/ee/user/project/integrations/webhooks.html#events), listens for incoming events, and passes them to a consumer. See the [GitLab Source](../samples/gitlab-source) example for more details. |
+| [Container Source](./containersource.md) | v1 | Knative | The ContainerSource will instantiate container image(s) that can generate events until the ContainerSource is deleted. This may be used, for example, to poll an FTP server for new files or generate events at a set time interval. Given a `spec.template` with at least a container image specified, ContainerSource will keep a `Pod` running with the specified image(s). `K_SINK` (destination address) and `KE_CE_OVERRIDES` (JSON CloudEvents attributes) environment variables are injected into the running image(s). It is used by multiple other Sources as underlying infrastructure. Refer to the [Container Source](./container-source) example for more details. |
+| [GitHub](./github-source) | v1alpha1 | Knative | Registers for events of the specified types on the specified GitHub organization or repository, and brings those events into Knative. The GitHubSource fires a new event for selected [GitHub event types](https://developer.github.com/v3/activity/events/types/). See the [GitHub Source](./github-source) example for more details. |
+| [GitLab](./gitlab-source) | v1alpha1 | Knative | Registers for events of the specified types on the specified GitLab repository, and brings those events into Knative. The GitLabSource creates a webhooks for specified [event types](https://docs.gitlab.com/ee/user/project/integrations/webhooks.html#events), listens for incoming events, and passes them to a consumer. See the [GitLab Source](./gitlab-source) example for more details. |
 | [Heartbeats](https://github.com/knative/eventing/tree/main/cmd/heartbeats) | N/A | Knative | Uses an in-memory timer to produce events at the specified interval. |
-| [PingSource](./pingsource) | v1beta2 | Knative | Produces events with a fixed payload on a specified [Cron](https://en.wikipedia.org/wiki/Cron) schedule. See the [Ping Source](../samples/ping-source) example for more details. |
+| [PingSource](./ping-source) | v1beta2 | Knative | Produces events with a fixed payload on a specified [Cron](https://en.wikipedia.org/wiki/Cron) schedule. See the [Ping Source](./ping-source) example for more details. |
 | [RabbitMQ](https://github.com/knative-sandbox/eventing-rabbitmq) | Active development | None | Brings [RabbitMQ](https://www.rabbitmq.com/) messages into Knative.
-| [SinkBinding](./sinkbinding/) | v1 | Knative | The SinkBinding can be used to author new event sources using any of the familiar compute abstractions that Kubernetes makes available (e.g. Deployment, Job, DaemonSet, StatefulSet), or Knative abstractions (e.g. Service, Configuration). SinkBinding provides a framework for injecting `K_SINK` (destination address) and `K_CE_OVERRIDES` (JSON cloudevents attributes) environment variables into any Kubernetes resource which has a `spec.template` that looks like a Pod (aka PodSpecable). See the [SinkBinding](../samples/container-source) example for more details. |
+| [SinkBinding](./sinkbinding/) | v1 | Knative | The SinkBinding can be used to author new event sources using any of the familiar compute abstractions that Kubernetes makes available (e.g. Deployment, Job, DaemonSet, StatefulSet), or Knative abstractions (e.g. Service, Configuration). SinkBinding provides a framework for injecting `K_SINK` (destination address) and `K_CE_OVERRIDES` (JSON cloudevents attributes) environment variables into any Kubernetes resource which has a `spec.template` that looks like a Pod (aka PodSpecable). See the [SinkBinding](./container-source) example for more details. |
 | [WebSocket](https://github.com/knative/eventing/tree/main/cmd/websocketsource) | N/A | Knative | Opens a WebSocket to the specified source and packages each received message as a Knative event. |

 ## Third-Party Sources
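The table above describes SinkBinding injecting `K_SINK` into any PodSpecable resource. A hedged sketch, binding a hypothetical Deployment to a Broker sink:

```yaml
apiVersion: sources.knative.dev/v1
kind: SinkBinding
metadata:
  name: bind-heartbeat
spec:
  subject:
    apiVersion: apps/v1
    kind: Deployment               # any resource with a Pod-shaped spec.template can be the subject
    selector:
      matchLabels:
        app: heartbeat-cronjob     # hypothetical label selector
  sink:
    ref:
      apiVersion: eventing.knative.dev/v1
      kind: Broker
      name: default                # K_SINK is injected with this Broker's address
```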
@@ -66,10 +66,10 @@ All Sources are part of the `sources` category.
 [Amazon SNS](https://github.com/triggermesh/aws-event-sources/tree/master/cmd/awssnssource/README.md) | Supported | TriggerMesh | Subscribes to messages from an [Amazon SNS](https://aws.amazon.com/sns/) topic.
 [Amazon SQS](https://github.com/triggermesh/aws-event-sources/tree/master/cmd/awssqssource/README.md) | Supported | TriggerMesh | Consumes messages from an [Amazon SQS](https://aws.amazon.com/sqs/) queue.
 [BitBucket](https://github.com/nachocano/bitbucket-source) | Proof of Concept | None | Registers for events of the specified types on the specified BitBucket organization/repository. Brings those events into Knative.
-[CloudAuditLogsSource](https://github.com/google/knative-gcp/blob/main/docs/examples/cloudauditlogssource/README.md) | v1 | Google | Registers for events of the specified types on the specified [Google Cloud Audit Logs](https://cloud.google.com/logging/docs/audit/). Brings those events into Knative. Refer to the [CloudAuditLogsSource](../samples/cloud-audit-logs-source) example for more details.
-[CloudPubSubSource](https://github.com/google/knative-gcp/blob/main/docs/examples/cloudpubsubsource/README.md) | v1 | Google | Brings [Cloud Pub/Sub](https://cloud.google.com/pubsub/) messages into Knative. The CloudPubSubSource fires a new event each time a message is published on a [Google Cloud Platform PubSub topic](https://cloud.google.com/pubsub/). See the [CloudPubSubSource](../samples/cloud-pubsub-source) example for more details.
-[CloudSchedulerSource](https://github.com/google/knative-gcp/blob/main/docs/examples/cloudschedulersource/README.md) | v1 | Google | Create, update, and delete [Google Cloud Scheduler](https://cloud.google.com/scheduler/) Jobs. When those jobs are triggered, receive the event inside Knative. See the [CloudSchedulerSource](../samples/cloud-scheduler-source) example for further details.
-[CloudStorageSource](https://github.com/google/knative-gcp/blob/main/docs/examples/cloudstoragesource/README.md) | v1 | Google | Registers for events of the specified types on the specified [Google Cloud Storage](https://cloud.google.com/storage/) bucket and optional object prefix. Brings those events into Knative. See the [CloudStorageSource](../samples/cloud-storage-source) example.
+[CloudAuditLogsSource](https://github.com/google/knative-gcp/blob/main/docs/examples/cloudauditlogssource/README.md) | v1 | Google | Registers for events of the specified types on the specified [Google Cloud Audit Logs](https://cloud.google.com/logging/docs/audit/). Brings those events into Knative. Refer to the [CloudAuditLogsSource](./cloud-audit-logs-source) example for more details.
+[CloudPubSubSource](https://github.com/google/knative-gcp/blob/main/docs/examples/cloudpubsubsource/README.md) | v1 | Google | Brings [Cloud Pub/Sub](https://cloud.google.com/pubsub/) messages into Knative. The CloudPubSubSource fires a new event each time a message is published on a [Google Cloud Platform PubSub topic](https://cloud.google.com/pubsub/). See the [CloudPubSubSource](./cloud-pubsub-source) example for more details.
+[CloudSchedulerSource](https://github.com/google/knative-gcp/blob/main/docs/examples/cloudschedulersource/README.md) | v1 | Google | Create, update, and delete [Google Cloud Scheduler](https://cloud.google.com/scheduler/) Jobs. When those jobs are triggered, receive the event inside Knative. See the [CloudSchedulerSource](./cloud-scheduler-source) example for further details.
+[CloudStorageSource](https://github.com/google/knative-gcp/blob/main/docs/examples/cloudstoragesource/README.md) | v1 | Google | Registers for events of the specified types on the specified [Google Cloud Storage](https://cloud.google.com/storage/) bucket and optional object prefix. Brings those events into Knative. See the [CloudStorageSource](./cloud-storage-source) example.
 [DockerHubSource](https://github.com/tom24d/eventing-dockerhub) | v1alpha1 | None | Retrieves events from [Docker Hub Webhooks](https://docs.docker.com/docker-hub/webhooks/) and transforms them into CloudEvents for consumption in Knative.
 [FTP / SFTP](https://github.com/vaikas-google/ftp) | Proof of concept | None | Watches for files being uploaded into a FTP/SFTP and generates events for those.
 [GitHub Issue Comments](https://github.com/BrianMMcClain/github-issue-comment-source)| Proof of Concept | None | Polls a specific GitHub issue for new comments.
@@ -83,6 +83,6 @@ All Sources are part of the `sources` category.

 ## Additional resources

-- For information about creating your own Source type, see the [tutorial on writing a Source with a Receive Adapter](../samples/writing-event-source).
+- For information about creating your own Source type, see the [tutorial on writing a Source with a Receive Adapter](./creating-event-sources/writing-event-source).
 - If your code needs to send events as part of its business logic and doesn't fit the model of a Source, consider [feeding events directly to a Broker](https://knative.dev/docs/eventing/broker/).
 - For more information about using `kn` Source related commands, see the [`kn source` reference documentation](https://github.com/knative/client/blob/main/docs/cmd/kn_source.md).
@@ -11,7 +11,7 @@ aliases:
 # Creating an event source by using the sample event source

 This guide explains how to create your own event source for Knative
-Eventing by using a [sample repository](https://github.com/knative-sandbox/sample-source), and explains the key concepts behind each required component. Documentation for the default [Knative event sources](../../sources/) can be used as an additional reference.
+Eventing by using a [sample repository](https://github.com/knative-sandbox/sample-source), and explains the key concepts behind each required component. Documentation for the default [Knative event sources](../../../sources) can be used as an additional reference.

 After completing the provided tutorial, you will have created a basic event source controller and a receive adapter. Events can be viewed by using the `event_display` Knative service.
 <!--TODO: Provide links to docs about what the event source controller and receiver adapter are-->