Format markdown (#626)

Produced via: `prettier --write --prose-wrap=always $(find -name '*.md' | grep -v vendor | grep -v .github)`
This commit is contained in:
mattmoor-sockpuppet 2018-12-04 14:34:25 -08:00 committed by Knative Prow Robot
parent 1e779dca20
commit c64c63a160
80 changed files with 1681 additions and 1330 deletions


@ -29,28 +29,29 @@ see a documentation issue, submit an issue using the following steps:
1. Check the [Knative docs issues list](https://github.com/knative/docs/issues)
as you might find out the issue is a duplicate.
2. Use the
[included template for every new issue](https://github.com/knative/docs/issues/new).
When you create a bug report, include as many details as possible and include
suggested fixes to the issue.
Note that code issues should be filed against the individual Knative
repositories, while documentation issues should go in the `docs` repository.
### Put your docs in the right place
Knative uses the [docs repository](https://github.com/knative/docs) for all
general documentation for Knative components. However, formal specifications or
documentation most relevant to contributors of a component should be placed in
the `docs` folder within a given component's repository. An example of this is
the [spec](https://github.com/knative/serving/tree/master/docs/spec) folder
within the Serving component.
Code samples follow a similar strategy, where most code samples should be
located in the `docs` repository. If there are code examples or samples used for
testing that are not expected to be used by non-contributors, those samples can
be put in a `samples` folder within the component repo.
### Submitting Documentation Pull Requests
If you're fixing an issue in the existing documentation, you should submit a PR
against the master branch.


@ -1,13 +1,14 @@
# Welcome, Knative
Knative (pronounced kay-nay-tiv) extends Kubernetes to provide a set of
middleware components that are essential to build modern, source-centric, and
container-based applications that can run anywhere: on premises, in the cloud,
or even in a third-party data center.
Each of the components under the Knative project attempts to identify common
patterns and codify the best practices that are shared by successful real-world
Kubernetes-based frameworks and applications. Knative components focus on
solving many mundane but difficult tasks such as:
- [Deploying a container](./install/getting-started-knative-app.md)
- [Orchestrating source-to-URL workflows on Kubernetes](./serving/samples/source-to-url-go/)
@ -15,16 +16,19 @@ applications. Knative components focus on solving many mundane but difficult tas
- [Automatic scaling and sizing workloads based on demand](./serving/samples/autoscale-go)
- [Binding running services to eventing ecosystems](./eventing/samples/kubernetes-event-source)
Developers on Knative can use familiar idioms, languages, and frameworks to
deploy any workload: functions, applications, or containers.
## Components
The following Knative components are currently available:
- [Build](https://github.com/knative/build) - Source-to-container build
orchestration
- [Eventing](https://github.com/knative/eventing) - Management and delivery of
events
- [Serving](https://github.com/knative/serving) - Request-driven compute that
can scale to zero
## Audience
@ -43,27 +47,27 @@ To join the conversation, head over to the
### Operators
Knative components are intended to be integrated into more polished products
that cloud service providers or in-house teams in large enterprises can then
operate.
Any enterprise or cloud provider can adopt Knative components into their own
systems and pass the benefits along to their customers.
### Contributors
With a clear project scope, lightweight governance model, and clean lines of
separation between pluggable components, the Knative project establishes an
efficient contributor workflow.
Knative is a diverse, open, and inclusive community. To get involved, see
[CONTRIBUTING.md](./community/CONTRIBUTING.md) and join the
[Knative community](./community/README.md).
Your own path to becoming a Knative contributor can
[begin anywhere](https://github.com/knative/serving/issues?q=is%3Aissue+is%3Aopen+sort%3Aupdated-desc+label%3Akind%2Fgood-first-issue).
[Bug reports](https://github.com/knative/serving/issues/new) and friction logs
from new developers are especially welcome.
## Documentation


@ -8,8 +8,8 @@ process succeeds.
A Knative `Build` runs on-cluster and is implemented by a
[Kubernetes Custom Resource Definition (CRD)](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/).
Given a `Builder`, or container image that you have created to perform a task or
action, you can define a Knative `Build` through a single configuration file.
Also consider using a Knative `Build` to build the source code of your apps into
container images, which you can then run on
@ -24,8 +24,8 @@ More information about this use case is demonstrated in
task, whether that's a single step in a process, or the whole process itself.
- The `steps` in a `Build` can push to a repository.
- A `BuildTemplate` can be used to define reusable templates.
- The `source` in a `Build` can be defined to mount data to a Kubernetes Volume,
and supports:
- `git` repositories
- Google Cloud Storage
- An arbitrary container image
@ -46,8 +46,8 @@ components:
Because all Knative components stand alone, you can decide which components to
install. Knative Serving is not required to create and run builds.
Before you can run a Knative Build, you must install the Knative Build component
in your Kubernetes cluster:
- For details about installing a new instance of Knative in your Kubernetes
cluster, see [Installing Knative](../install/README.md).
@ -91,7 +91,8 @@ spec:
Use the following samples to learn how to configure your Knative Builds to
perform simple tasks.
Tip: Review and reference multiple samples to piece together more complex
builds.
#### Simple build samples


@ -1,7 +1,7 @@
# Authentication
This document defines how authentication is provided during execution of a
build.
The build system supports two types of authentication, using Kubernetes'
first-class `Secret` types:
@ -9,16 +9,16 @@ first-class `Secret` types:
- `kubernetes.io/basic-auth`
- `kubernetes.io/ssh-auth`
Secrets of these types can be made available to the `Build` by attaching them to
the `ServiceAccount` as which it runs.
### Exposing credentials to the build
In their native form, these secrets are unsuitable for consumption by Git and
Docker. For Git, they need to be turned into (some form of) `.gitconfig`. For
Docker, they need to be turned into a `~/.docker/config.json` file. Also, while
each of these supports multiple credentials for multiple domains, those
credentials typically need to be blended into a single canonical keyring.
To solve this, before the `Source` step, all builds execute a credential
initialization process that accesses each of its secrets and aggregates them
@ -45,7 +45,8 @@ into their respective files in `$HOME`.
1. Generate the value of `ssh-privatekey` by copying the value of (for example)
`cat id_rsa | base64`.
1. Copy the value of `cat ~/.ssh/known_hosts | base64` to the `known_hosts`
field.
1. Next, direct a `ServiceAccount` to use this `Secret`:
@ -78,8 +79,8 @@ into their respective files in `$HOME`.
```
When the build executes, before steps execute, a `~/.ssh/config` will be
generated containing the key configured in the `Secret`. This key is then used
to authenticate with the Git service.
## Basic authentication (Git)
@ -206,9 +207,9 @@ stringData:
password: <cleartext non-encoded>
```
This describes a "Basic Auth" (username and password) secret that should be used
to access Git repos at github.com and gitlab.com, as well as Docker repositories
at gcr.io.
Similarly, for SSH:
@ -230,8 +231,8 @@ This describes an SSH key secret that should be used to access Git repos at
github.com only.
Credential annotation keys must begin with `build.knative.dev/docker-` or
`build.knative.dev/git-`, and the value describes the URL of the host with which
to use the credential.
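
Putting the annotation convention together, a single `Secret` covering both a
Git host and a Docker registry might look like the following sketch (the secret
name and hosts are illustrative):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: basic-user-pass
  annotations:
    # Use this credential for Git operations against github.com ...
    build.knative.dev/git-0: https://github.com
    # ... and for Docker pushes to gcr.io.
    build.knative.dev/docker-0: https://gcr.io
type: kubernetes.io/basic-auth
stringData:
  username: <username>
  password: <cleartext non-encoded>
```

Attaching this `Secret` to the `ServiceAccount` the `Build` runs as makes the
credential available for both hosts.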
## Implementation detail
@ -306,16 +307,16 @@ Host url2.com
```
Note: Because `known_hosts` is a non-standard extension of
`kubernetes.io/ssh-auth`, when it is not present this will be generated through
`ssh-keygen url{n}.com` instead.
### Least privilege
The secrets as outlined here will be stored into `$HOME` (by convention the
volume: `/builder/home`), and will be available to `Source` and all `Steps`.
For sensitive credentials that should not be made available to some steps, do
not use the mechanisms outlined here. Instead, the user should declare an
explicit `Volume` from the `Secret` and manually `VolumeMount` it into the
`Step`.


@ -7,8 +7,8 @@ A set of curated and supported build templates is available in the
## What is a Build Template?
A `BuildTemplate` encapsulates a shareable [build](./builds.md) process with
some limited parameterization capabilities.
### Example template


@ -1,7 +1,7 @@
# Builders
This document defines `Builder` images and the conventions to which they are
expected to adhere.
## What is a Builder?
@ -50,15 +50,16 @@ steps:
### Specialized Builders
It is also possible for advanced users to create purpose-built builders.
One example of this are the ["FTL" builders](https://github.com/GoogleCloudPlatform/runtimes-common/tree/master/ftl#ftl).
It is also possible for advanced users to create purpose-built builders. One
example of this are the
["FTL" builders](https://github.com/GoogleCloudPlatform/runtimes-common/tree/master/ftl#ftl).
## What are the Builder conventions?
Builders should expect a Build to implement the following conventions:
- `/workspace`: The default working directory will be `/workspace`, which is a
volume that is filled by the `source:` step and shared across build `steps:`.
- `/builder/home`: This volume is exposed to steps via `$HOME`.
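
A step that relies on these conventions might look like the following sketch
(the step name is illustrative):

```yaml
steps:
  - name: list-sources
    image: busybox
    # /workspace is the default working directory, populated by the source:
    # step and shared across all steps; /builder/home is exposed via $HOME.
    args: ["ls", "-la", "/workspace"]
```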


@ -4,8 +4,8 @@ Use the `Build` resource object to create and run on-cluster processes to
completion.
To create a build in Knative, you must define a configuration file that
specifies one or more container images that you have implemented to perform and
complete a task.
A build runs until all `steps` have completed or until a failure occurs.
@ -50,7 +50,8 @@ following fields:
available to your build.
- [`timeout`](#timeout) - Specifies timeout after which the build will fail.
[kubernetes-overview]:
https://kubernetes.io/docs/concepts/overview/working-with-objects/kubernetes-objects/#required-fields
The following example is a non-working sample where most of the possible
configuration fields are used:
@ -87,12 +88,12 @@ spec:
#### Steps
The `steps` field is required if the `template` field is not defined. You define
one or more `steps` fields to define the body of a build.
Each `steps` in a build must specify a `Builder`, or type of container image
that adheres to the [Knative builder contract](./builder-contract.md). For each
of the `steps` fields, or container images that you define:
- The `Builder`-type container images are run and evaluated in order, starting
from the top of the configuration file.
@ -121,10 +122,10 @@ to all `steps` of your build.
The currently supported types of sources include:
- `git` - A Git based repository. Specify the `url` field to define the location
of the container image. Specify a `revision` field to define a branch name,
tag name, commit SHA, or any ref.
[Learn more about revisions in Git](https://git-scm.com/docs/gitrevisions#_specifying_revisions).
- `gcs` - An archive that is located in Google Cloud Storage.
@ -134,21 +135,21 @@ The currently supported types of sources include:
Optional. Specifies the `name` of a `ServiceAccount` resource object. Use the
`serviceAccountName` field to run your build with the privileges of the
specified service account. If no `serviceAccountName` field is specified, your
build runs using the
[`default` service account](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#use-the-default-service-account-to-access-the-api-server)
that is in the
[namespace](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/)
of the `Build` resource object.
For examples and more information about specifying service accounts, see the
[`ServiceAccount`](./auth.md) reference topic.
#### Volumes
Optional. Specifies one or more
[volumes](https://kubernetes.io/docs/concepts/storage/volumes/) that you want to
make available to your build, including all the build steps. Add volumes to
complement the volumes that are implicitly
[created during a build step](./builder-contract.md).
@ -159,23 +160,26 @@ For example, use volumes to accomplish one of the following common tasks:
- Create an `emptyDir` volume to act as a cache for use across multiple build
steps. Consider using a persistent volume for inter-build caching.
- Mount a host's Docker socket to use a `Dockerfile` for container image builds.
#### Timeout
Optional. Specifies a timeout for the build, including the time required for
allocating resources and executing the build.
- Defaults to 10 minutes.
- Refer to
[Go's ParseDuration documentation](https://golang.org/pkg/time/#ParseDuration)
for expected format.
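
Pulling the fields above together, a minimal build sketch (the repository URL,
service account, and names are illustrative, not a working sample) could look
like:

```yaml
apiVersion: build.knative.dev/v1alpha1
kind: Build
metadata:
  name: example-build
spec:
  serviceAccountName: build-bot   # optional; defaults to the `default` service account
  timeout: 20m                    # Go ParseDuration format; defaults to 10 minutes
  source:
    git:
      url: https://github.com/example/repo.git   # illustrative
      revision: master                           # branch, tag, SHA, or any ref
  steps:
    - name: say-done
      image: busybox
      args: ["echo", "done"]
```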
### Examples
Use these code snippets to help you understand how to define your Knative
builds.
Tip: See the collection of simple
[test builds](https://github.com/knative/build/tree/master/test) for additional
code samples, including working copies of the following snippets:
- [`git` as `source`](#using-git)
- [`gcs` as `source`](#using-gcs)


@ -4,14 +4,14 @@ Use this page to learn how to create and then run a simple build in Knative. In
this topic, you create a Knative Build configuration file for a simple app,
deploy that build to Knative, and then test that the build completes.
The following demonstrates the process of deploying and then testing that the
build completed successfully. This sample build uses a hello-world-type app that
uses [busybox](https://docs.docker.com/samples/library/busybox/) to simply print
"_hello build_".
Tip: See the
[build code samples](builds.md#get-started-with-knative-build-samples) for
examples of more complex builds, including code samples that use container
images, authentication, and include multiple steps.
## Before you begin
@ -22,8 +22,8 @@ Kubernetes cluster, and it must include the Knative Build component:
- For details about installing a new instance of Knative in your Kubernetes
cluster, see [Installing Knative](../install/README.md).
- If you have a component of Knative installed and running, you must
[ensure that the Knative Build component is also installed](installing-build-component.md).
## Creating and running a build
@ -45,9 +45,9 @@ Kubernetes cluster, and it must include the Knative Build component:
args: ["echo", "hello", "build"]
```
Notice that this definition specifies `kind` as a `Build`, and that the name
of this `Build` resource is `hello-build`. For more information about
defining build configuration files, see the
[`Build` reference topic](builds.md).
1. Deploy the `build.yaml` configuration file and run the `hello-build` build on
@ -112,8 +112,8 @@ Kubernetes cluster, and it must include the Knative Build component:
reason: Completed
```
Notice that the values of `completed` indicate that the build was successful,
and that `hello-build-jx4ql` is the pod where the build ran.
Tip: You can also retrieve the `podName` by running the following command:
@ -123,26 +123,25 @@ Kubernetes cluster, and it must include the Knative Build component:
1. Optional: Run the following
[`kubectl get`](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get)
command to retrieve details about the `hello-build-[ID]` pod, including the
name of the
[Init container](https://kubernetes.io/docs/concepts/workloads/pods/init-containers/):
```shell
kubectl get pod hello-build-[ID] --output yaml
```
where `[ID]` is the suffix of your pod name, for example `hello-build-jx4ql`.
The response of this command includes a lot of detail, as well as the
`build-step-hello` name of the Init container.
Tip: The name of the Init container is determined by the `name` that is
specified in the `steps` field of the build configuration file, for example
`build-step-[ID]`.
1. To verify that your build performed the single task of printing "_hello
build_", you can run the
[`kubectl logs`](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#logs)
command to retrieve the log files from the `build-step-hello` Init container
in the `hello-build-[ID]` pod:
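
A sketch of that command (replace `[ID]` with your pod's suffix, for example
`hello-build-jx4ql`):

```shell
# Retrieve logs from the build-step-hello Init container of the build pod.
kubectl logs hello-build-[ID] --container build-step-hello
```

If the build succeeded, the log output includes the echoed "hello build" text.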


@ -1,8 +1,8 @@
# Installing the Knative Build component
Before you can run a Knative Build, you must install the Knative Build component
in your Kubernetes cluster. Use this page to add the Knative Build component to
an existing Knative installation.
You have the option to install and use only the components of Knative that you
want; for example, Knative Serving is not required to create and run builds.
@ -19,8 +19,8 @@ To add only the Knative Build component to an existing installation:
1. Run the
[`kubectl apply`](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply)
command to install [Knative Build](https://github.com/knative/build) and its
dependencies:
```bash
kubectl apply --filename https://storage.googleapis.com/knative-releases/build/latest/release.yaml
```


@ -1,21 +1,20 @@
# Knative Personas
When discussing user actions, it is often helpful to
[define specific user roles](<https://en.wikipedia.org/wiki/Persona_(user_experience)>)
who might want to do the action.
## Knative Build
We expect the build components of Knative to be useful on their own, as well as
in conjunction with the compute components.
### Developer
The developer personas for build are broader than the serverless workloads that
the knative compute product focuses on. Developers expect to have build tools
that integrate with their native language tooling for managing dependencies and
even detecting language and runtime dependencies.
User stories:
@ -24,10 +23,9 @@ User stories:
### Language operator / contributor
The language operators perform the work of integrating language tooling into the
knative build system. This role can work either within a particular
organization, or on behalf of a particular language runtime.
User stories:
@ -36,9 +34,9 @@ User stories:
## Contributors
Contributors are an important part of the knative project. We always consider
how infrastructure changes encourage and enable contributors to the project, as
well as the impact on users.
Types of users:


@ -4,8 +4,8 @@ So, you want to hack on Knative? Yay!
The following sections outline the process all changes to the Knative
repositories go through. All changes, regardless of whether they are from
newcomers to the community or from the core team, follow the same process and
are given the same level of review.
- [Working groups](#working-groups)
- [Code of conduct](#code-of-conduct)
@ -19,15 +19,15 @@ are given the same level of review.
## Working groups
The Knative community is organized into a set of
[working groups](WORKING-GROUPS.md). Any contribution to Knative should be
started by first engaging with the appropriate working group.
## Code of conduct
All members of the Knative community must abide by the
[Code of Conduct](CODE-OF-CONDUCT.md). Only by respecting each other can we
develop a productive, collaborative community.
## Team values
@ -47,13 +47,13 @@ permission to use and redistribute your contributions as part of the project.
## Design documents
Any substantial design deserves a design document. Design documents are written
with Google Docs and should be shared with the community by adding the doc to
our
[Team Drive](https://drive.google.com/corp/drive/folders/0APnJ_hRs30R2Uk9PVA)
and sending an email to the appropriate working group's mailing list to let
people know the doc is there. To get write access to the drive, you'll need to
be a [member](ROLES.md#member) of the Knative organization.
We do not yet have a common design document template (TODO).
@ -74,32 +74,32 @@ later join knative-dev if you want immediate access).
In order to contribute a feature to Knative you'll need to go through the
following steps:
- Discuss your idea with the appropriate [working groups](WORKING-GROUPS.md) on
the working group's mailing list.
- Once there is general agreement that the feature is useful,
[create a GitHub issue](https://github.com/knative/docs/issues/new) to track
the discussion. The issue should include information about the requirements
and use cases that it is trying to address. Include a discussion of the
proposed design and technical details of the implementation in the issue.
- If the feature is substantial enough:
- Working group leads will ask for a design document as outlined in
[design documents](#design-documents). Create the design document and add a
link to it in the GitHub issue. Don't forget to send a note to the working
group to let everyone know your document is ready for review.
- Depending on the breadth of the design and how contentious it is, the
working group leads may decide the feature needs to be discussed in one or
more working group meetings before being approved.
- Once the major technical issues are resolved and agreed upon, post a note
with the design decision and the general execution plan to the working
group's mailing list and on the feature's issue.
- Submit PRs to [knative/serving](https://github.com/knative/serving/pulls) with
your code changes.
- Submit PRs to knative/serving with user documentation for your feature,
including usage examples when possible. Add documentation to
@ -120,8 +120,8 @@ so the choice is up to you.
Check out this
[README](https://github.com/knative/serving/blob/master/README.md) to learn
about the Knative source base and setting up your
[development environment](https://github.com/knative/serving/blob/master/DEVELOPMENT.md).
## Pull requests
@ -150,14 +150,17 @@ significant exception is work-in-progress PRs. These should be indicated by a
`[WIP]` prefix in the PR title. WIP PRs should not be merged as long as they are
marked WIP.
When ready, if you have not already done so, sign a
[contributor license agreement](#contributor-license-agreements) and submit the
PR.
This project uses
[Prow](https://github.com/kubernetes/test-infra/tree/master/prow) to assign
reviewers to the PR, set labels, run tests automatically, and so forth.
See [Reviewing and Merging Pull Requests](REVIEWING.md) for the PR review and
merge process used for Knative and for more information about
[Prow](./REVIEWING.md#prow).
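As an illustration, day-to-day interaction with Prow happens through PR
comments; a few commonly used commands (the exact set available depends on the
repo's Prow plugin configuration):

```
/assign @reviewer   # request review from a specific person
/lgtm               # reviewer: the change looks good
/approve            # approver: approve the PR for merge
/retest             # re-run failed test jobs
```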
## Issues
@ -168,8 +171,8 @@ When reporting a bug please include the following key pieces of information:
- The version of the project you were using (version number, git commit, etc)
- Operating system you are using
- The exact, minimal, steps needed to reproduce the issue. Submitting a 5 line
script will get a much faster response from the team than one that's hundreds
of lines long.
## Third-party code
@ -180,36 +183,40 @@ When reporting a bug please include the following key pieces of information:
- Other third-party code belongs in `third_party/` folder.
- Third-party code must include licenses.
A non-exclusive list of code that must be placed in `vendor/` and
`third_party/`:
- Open source, free software, or commercially-licensed code.
- Tools or libraries or protocols that are open source, free software, or
commercially licensed.
- Derivative works of third-party code.
- Excerpts from third-party code.
### Adding a new third-party dependency to `third_party/` folder
- Create a sub-folder under `third_party/` for each component.
- In each sub-folder, make sure there is a file called LICENSE which contains
the appropriate license text for the dependency. If one doesn't exist then
create it. More details on this below.
- Check in a pristine copy of the code with LICENSE and METADATA files. You do
not have to include unused files, and you can move or rename files if
necessary, but do not modify the contents of any files yet.
- Once the pristine copy is merged into master, you may modify the code.
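Under illustrative assumptions (a dependency named `foo`, scratch paths under
`/tmp` standing in for the upstream source and the repo checkout), the steps
above can be sketched as:

```shell
set -e
# Hypothetical upstream snapshot of a dependency called "foo" (names and
# paths here are illustrative, not from the Knative repos).
mkdir -p /tmp/knative-example/foo-upstream
echo "Apache License 2.0 (full license text goes here)" \
  > /tmp/knative-example/foo-upstream/LICENSE.txt
echo "package foo" > /tmp/knative-example/foo-upstream/foo.go

# Create a sub-folder under third_party/ and check in a pristine copy.
mkdir -p /tmp/knative-example/repo/third_party/foo
cp -R /tmp/knative-example/foo-upstream/. /tmp/knative-example/repo/third_party/foo/

# If the license shipped under another name (LICENSE.txt, COPYING), rename
# it to LICENSE; do not modify the contents of any other files yet.
cd /tmp/knative-example/repo/third_party/foo
[ -f LICENSE ] || mv LICENSE.txt LICENSE

# Every sub-folder must contain a LICENSE file before the pristine copy merges.
test -f LICENSE && echo "LICENSE present"
```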
### LICENSE
The license for the code must be in a file named LICENSE. If it was distributed
like that, you're good. If not, you need to make LICENSE be a file containing
the full text of the license. If there's another file in the distribution with
the license in it, rename it to LICENSE (e.g., rename a LICENSE.txt or COPYING
file to LICENSE). If the license is only available in the comments or at a URL,
extract and copy the text of the license into LICENSE.
You may optionally document the generation of the LICENSE file in the
local_modifications field of the METADATA file.
If there are multiple licenses for the code, put the text of all the licenses
into LICENSE along with separators and comments as to the applications.
---
@ -1,7 +1,7 @@
# Knative Community
_Important_. Before proceeding, please review the Knative community
[Code of Conduct](CODE-OF-CONDUCT.md).
If you have any questions or concerns, please contact the authors at
knative-code-of-conduct@googlegroups.com.
@ -23,15 +23,14 @@ Other Documents
- [Code of Conduct](CODE-OF-CONDUCT.md) - all contributors must abide by the
code of conduct
- [Contributing to Knative](CONTRIBUTING.md) - guidelines and advice on becoming
a contributor
- [Working Groups](WORKING-GROUPS.md) - describes our various working groups
- [Working Group Processes](WORKING-GROUP-PROCESSES.md) - describes how working
groups operate
- [Technical Oversight Committee](TECH-OVERSIGHT-COMMITTEE.md) - describes our
technical oversight committee
- [Steering Committee](STEERING-COMMITTEE.md) - describes our steering committee
- [Community Roles](ROLES.md) - describes the roles individuals can assume
within the Knative community
- [Reviewing and Merging Pull Requests](REVIEWING.md) - how we manage pull
@ -39,22 +38,25 @@ Other Documents
## Introduction
Knative is a Kubernetes-based platform to build, deploy, and manage modern
serverless workloads. See [Knative docs](https://github.com/knative/docs) for
in-depth information about using Knative.
## Knative authors
Knative is an open source project with an active development community. The
project was started by Google but has contributions from a growing number of
industry-leading companies. For a current list of the authors, see
[Authors](https://github.com/knative/serving/blob/master/AUTHORS).
## Meetings and work groups
Knative has public and recorded bi-weekly community meetings.
Each project has one or more [working groups](WORKING-GROUPS.md) driving the
project, and Knative has a single
[technical oversight committee](TECH-OVERSIGHT-COMMITTEE.md) monitoring the
overall project.
## How can I help
@ -64,10 +66,10 @@ look for GitHub issues marked with the Help Wanted label:
- [Serving issues](https://github.com/knative/serving/issues?q=is%3Aopen+is%3Aissue+label%3A%22community%2Fhelp+wanted%22)
- [Documentation repo](https://github.com/knative/docs/issues?q=is%3Aopen+is%3Aissue+label%3A%22community%2Fhelp+wanted%22)
Even if there's not an issue opened for it, we can always use more testing
throughout the platform. Similarly, we can always use more docs, richer docs,
insightful docs. Or maybe a cool blog post? And if you're a web developer, we
could use your help in spiffing up our public-facing web site.
## Questions and issues
@ -24,16 +24,16 @@ Please do not ever hesitate to ask a question or submit a PR.
## Code of Conduct
Reviewers are often the first points of contact between new members of the
community and are important in shaping the community. We encourage reviewers to
read the [code of conduct](community/CODE-OF-CONDUCT.md) and to go above and
beyond the code of conduct to promote a collaborative and respectful community.
## Code reviewers
The code review process can introduce latency for contributors and additional
work for reviewers, frustrating both parties. Consequently, as a community we
expect all active participants to also be active reviewers. Participate in the
code review process in areas where you have expertise.
## Reviewing changes
@ -49,10 +49,10 @@ During a review, PR authors are expected to respond to comments and questions
made within the PR - updating the proposed change as appropriate.
After a review of the proposed changes, reviewers can either approve or reject
the PR. To approve, they add an "approved" review to the PR. To reject, they add
a "request changes" review along with a full justification for why they are not
in favor of the change. If a PR gets a "request changes" vote, the group
discusses the issue to resolve their differences.
Reviewers are expected to respond in a timely fashion to PRs that are assigned
to them. Reviewers are expected to respond to _active_ PRs with reasonable
@ -65,18 +65,17 @@ require a rebase are not considered active PRs.
### Holds
Any [Approver](ROLES.md#approver) who wants to review a PR but does not have
time immediately can put a hold on a PR. If you need more time, say so on the PR
discussion and offer an ETA measured in single-digit days at most. Any PR that
has a hold will not be merged until the person who requested the hold acks the
review, withdraws their hold, or is overruled by a majority of approvers.
## Approvers
Merging of PRs is done by [Approvers](ROLES.md#approver).
As is the case with many open source projects, becoming an Approver is based on
contributions to the project. See our [community roles](ROLES.md) document for
information on how this is done.
## Merging PRs
@ -106,9 +106,9 @@ table describes:
Individuals may be added as an outside collaborator (with READ access) to a repo
in the Knative GitHub organization without becoming a member. This role allows
them to be assigned issues and PRs until they become a member, but will not
allow tests to be run against their PRs automatically nor allow them to interact
with the PR bot.
### Requirements
@ -139,8 +139,8 @@ this is not a requirement.
### Requirements
- Has made multiple contributions to the project or community. Contributions may
include, but are not limited to:
- Authoring or reviewing PRs on GitHub
@ -212,12 +212,11 @@ approver in an OWNERS file:
- Responsible for project quality control via [code reviews](REVIEWING.md)
- Focus on holistic acceptance of contribution such as dependencies with other
features, backward / forward compatibility, API and flag definitions, etc
- Expected to be responsive to review requests as per
[community expectations](REVIEWING.md)
- Mentor members and contributors
@ -255,8 +254,8 @@ Additional requirements for leads of a new working group:
The following apply to the area / component for which one would be an owner.
- Run their working group as explained in the
[Working Group Processes](WORKING-GROUP-PROCESSES.md).
- Design/proposal approval authority over the area / component, though
escalation to the technical oversight committee is possible.
@ -270,8 +269,8 @@ The following apply to the area / component for which one would be an owner.
- Capable of directly applying lgtm + approve labels for any PR
- Expected to respect OWNERS files approvals and use
[standard procedure for merging code](REVIEWING.md#merging-prs).
- Expected to work to holistically maintain the health of the project through:
@ -291,8 +290,8 @@ Administrators are responsible for the bureaucratic aspects of the project.
### Responsibilities and privileges
- Manage the Knative GitHub repo, including granting membership and controlling
repo read/write permissions
- Manage the Knative Slack team
@ -9,8 +9,8 @@ Chat is searchable and public. Do not make comments that you would not say on a
video recording or in another public space. Please be courteous to others.
`@here` and `@channel` should be used rarely. Members will receive notifications
from these commands and we are a global project - please be kind. Note: `@all`
is only to be used by admins.
You can join the [Knative Slack](https://slack.knative.dev) instance at
https://slack.knative.dev.
@ -29,9 +29,9 @@ project, and includes all communication mediums.
Slack admins should make sure to mention this in the “What I do” section of
their Slack profile, as well as for which time zone.
To connect: please reach out in the #slack-admins channel, mention one of us in
the specific channel where you have a question, or DM (Direct Message) one of us
privately.
### Admin Expectations and Guidelines
@ -76,14 +76,14 @@ documented, please take a screenshot to include in your message.
### What if you have a problem with an admin
Send a DM to another listed Admin and describe the situation, or if it's a code
of conduct issue, please send an email to
knative-code-of-conduct@googlegroups.com and describe the situation.
## Bots, Tokens, and Webhooks
Bots, tokens, and webhooks are reviewed on a case-by-case basis. Expect most
requests will be rejected due to security, privacy, and usability concerns. Bots
and the like tend to make a lot of noise in channels.
Please join #slack-admins and have a discussion about your request before
requesting the access.
@ -101,14 +101,14 @@ chat. Please document these interactions for other Slack admins to review.
Content will be automatically removed if it violates code of conduct or is a
sales pitch. Admins will take a screenshot of such behavior in order to document
the situation. Google takes such violations extremely seriously, and they will
be handled swiftly.
## Inactivating Accounts
For reasons listed below, admins may inactivate individual Slack accounts. Due
to Slack's framework, it does not allow for an account to be banned or suspended
in the traditional sense.
[Read Slack's policy on this.](https://get.Slack.help/hc/en-us/articles/204475027-Deactivate-a-member-s-account)
- Spreading spam content in DMs and/or channels
@ -122,13 +122,13 @@ in the purpose or pinned docs of that channel.
## DM (Direct Message) Conversations
Please do not engage in proprietary company specific conversations in the
Knative Slack instance. This is meant for conversations related to Knative open
source topics and community.
Proprietary conversations should occur in your company communication platforms.
As with all communication, please be mindful of appropriateness,
professionalism, and applicability to the Knative community.
---
@ -28,11 +28,11 @@ work-in-progress._
- Guided by the TOC for normal business.
- Control and delegate access to and establish processes regarding other
project resources/assets not covered by the above, including web sites and
their domains, blogs, social-media accounts, etc.
- Manage the Knative brand to decide which things can be called “Knative” and
how that mark can be used in relation to other efforts or vendors.
## Committee Mechanics
@ -14,8 +14,8 @@ product and design decisions.
- Set the overall technical direction and roadmap of the project.
- Resolve technical issues, technical disagreements and escalations within the
project.
- Set the priorities of individual releases to ensure coherency and proper
sequencing.
@ -33,37 +33,37 @@ product and design decisions.
- Happy Healthy Community
- Establish and maintain the overall technical governance guidelines for the
project.
- Decide which sub-projects are part of the Knative project, including
accepting new sub-projects and pruning existing sub-projects to maintain
community focus
- Ensure the team adheres to our
[code of conduct](CONTRIBUTING.md#code-of-conduct) and respects our
[values](VALUES.md).
- Foster an environment for a healthy and happy community of developers and
contributors.
## Committee Mechanics
The TOC's work includes:
- Regular committee meetings to discuss hot topics, resulting in a set of
published
[meeting notes](https://docs.google.com/document/d/1hR5ijJQjz65QkLrgEhWjv3Q86tWVxYj_9xdhQ6Y5D8Q/edit#).
- Create, review, approve and publish technical project governance documents.
- Create proposals for consideration by individual working groups to help steer
their work towards a common project-wide objective.
- Review/address/comment on project issues.
- Act as a high-level sounding board for technical questions or designs bubbled
up by the working groups.
## Committee Meeting
@ -5,29 +5,29 @@ values we hold as a team:
- Optimize for the **overall project**, not your own area or feature
- A shortcut for one individual can mean a lot of extra work or disruption for
the rest of the team.
- Our repos should always be in release shape: **Always Green**
- This lets us move faster in the mid and long term.
- This implies investments in build/test infrastructure to have fast, reliable
tests to ensure that we can release at any time.
- Extra discipline may require more work by individuals to keep the build in
good state, but less work overall for the team.
- Be **specific**, **respectful** and **courteous**
- Disagreements are welcome and encouraged, but don't use broad
generalizations, exaggerations, or judgment words that can be taken
personally. Consider other people's perspective (including the wide range of
applicability of Knative). Empathize with our users. Focus on the specific
issue at hand, and remember that we all care about the project, first and
foremost.
- Emails to the [mailing lists](CONTRIBUTING.md#contributing-a-feature),
document comments, or meetings are often better and higher bandwidth ways to
communicate complex and nuanced design issues, as opposed to protracted
heated live chats.
- Be mindful of the terminology you are using, it may not be the same as
someone else and cause misunderstanding. To promote clear and precise
communication, define the terms you are using in context.
@ -36,14 +36,14 @@ values we hold as a team:
- Raising issues is great, suggesting solutions is even better
- Think of a proposed alternative and improvement rather than just what you
perceive as wrong.
- If you have no immediate solution even after thinking about it - if
something does seem significant, raise it to someone who might be able to
also think of solutions or to the group (don't stay frustrated! Feel safe in
bringing up issues.)
- Avoid rehashing old issues that have been resolved/decided (unless you have
new insights or information).
- Be productive and **happy**, and most importantly, have _fun_ :-)
@ -48,8 +48,8 @@ new working group. To do so, you need to:
- The scope of the working group (topics, subsystems, code repos, areas of
responsibility)
- **Nominate an initial set of leads**. The leads set the agenda for the working
group and serve as final arbiters on any technical decision. See
[below](#leads) for information on the responsibilities of leads and
requirements for nominating them.
@ -58,11 +58,11 @@ new working group. To do so, you need to:
- **Send an Email**. Write up an email with your charter, nominated leads, and
roadmap, and send it to
[knative-tech-oversight@](mailto:knative-tech-oversight@googlegroups.com). The
technical oversight committee will evaluate the request and decide whether the
working group should be formed, whether it should be merely a subgroup of an
existing working group, or whether it should be subsumed by an existing
working group.
## Setting up a working group
@ -70,22 +70,21 @@ Once approval has been granted by the technical oversight committee to form a
working group, the working group leads need to take a few steps to establish the
working group:
- **Create a Google Drive Folder**. Create a folder to hold your working group
documents within this parent
[folder](https://drive.google.com/corp/drive/folders/0APnJ_hRs30R2Uk9PVA).
Call your folder "GROUP_NAME".
- **Create a Meeting Notes Document**. Create a blank document in the above
folder and call it "GROUP_NAME Group Meeting Notes".
- **Create a Roadmap Document**. Create a document in the above folder and call
it "GROUP_NAME Group Roadmap". Put your initial roadmap in the document.
- **Create a Wiki**. Create a wiki page on
[GitHub](https://github.com/knative/serving) titled "GROUP_NAME Design
Decisions". This page will be used to track important design decisions made by
the working group.
- **Create a Public Google Group**. Call the group "knative-_group_name_" (all
in lowercase, dashes for spaces). This mailing list must be open to all.
@ -93,8 +92,8 @@ working group:
- **Schedule a Recurring Meeting**. Create a recurring meeting (weekly or
bi-weekly, 30 or 60 minutes) and call the meeting "GROUP_NAME Group Sync-Up".
Attach the meeting notes document to the calendar event. Generally schedule
these meetings between 9:00AM to 2:59PM Pacific Time. Invite the public Google
group to the meeting.
- **Register the Working Group**. Go to
[WORKING-GROUPS.md](https://github.com/knative/serving/blob/master/community/WORKING-GROUPS.md)
@ -104,8 +103,8 @@ working group:
- **Announce your Working Group**. Send a note to
[knative-dev@](mailto:knative-dev@googlegroups.com) and
[knative-tech-oversight@](mailto:knative-tech-oversight@googlegroups.com) to
announce your new working group. Include your charter in the email and provide
links to the meeting invitation.
Congratulations, you now have a fully formed working group!
@ -144,16 +143,15 @@ few activities:
the recorded meeting in the notes. The lead may delegate note-taking duties.
- **Wiki**. Ensure that significant design decisions are captured in the Wiki.
In the Wiki, include links to useful design documents, any interesting GitHub
issues or PRs, posts to the mailing lists, etc. The wiki should provide a good
feel for where the mind of the working group is at and where things are
headed.
- **Roadmap**. Establish **and maintain** a roadmap for the working group
outlining the areas of focus for the working group over the next 3 months.
- **Report**. Report current status to the main community meeting every 6 weeks.
### Be open
@ -185,10 +183,10 @@ Sometimes, different working groups can have conflicting goals or requirements.
Leads from all affected working groups generally work together and come to an
agreeable conclusion.
In all cases, remaining blocking issues can be raised to the
[technical oversight committee](TECH-OVERSIGHT-COMMITTEE.md) to help resolve the
situation. To trigger an escalation, create an issue in the `knative/serving`
repo and assign it to the **@knative/tech-oversight-committee** team.
---
@ -6,21 +6,22 @@ Working groups follow the [contributing](CONTRIBUTING.md) guidelines although
each of these groups may operate a little differently depending on their needs
and workflow.
When the need arises, a new working group can be created. See the [working group
processes](WORKING-GROUP-PROCESSES.md) for working group proposal and creation
procedures.
When the need arises, a new working group can be created. See the
[working group processes](WORKING-GROUP-PROCESSES.md) for working group proposal
and creation procedures.
The working groups generate design docs which are kept in a
[shared drive](https://drive.google.com/corp/drive/folders/0APnJ_hRs30R2Uk9PVA)
and are available for anyone to read and comment on. The shared drive
currently grants read access to
and are available for anyone to read and comment on. The shared drive currently
grants read access to
[knative-users@](https://groups.google.com/forum/#!forum/knative-users) and edit
and comment access to the
[knative-dev@](https://groups.google.com/forum/#!forum/knative-dev) Google group.
[knative-dev@](https://groups.google.com/forum/#!forum/knative-dev) Google
group.
Additionally, all working groups should hold regular meetings, which should be
added to the [shared knative
calendar](https://calendar.google.com/calendar/embed?src=google.com_18un4fuh6rokqf8hmfftm5oqq4%40group.calendar.google.com)
added to the
[shared knative calendar](https://calendar.google.com/calendar/embed?src=google.com_18un4fuh6rokqf8hmfftm5oqq4%40group.calendar.google.com)
WG leads should have access to be able to create and update events on this
calendar, and should invite knative-dev@googlegroups.com to working group
meetings.
@ -39,7 +40,8 @@ The current working groups are:
## API Core
API [resources](../pkg/apis/serving), [validation](../pkg/webhook), and [semantics](../pkg/controller).
API [resources](../pkg/apis/serving), [validation](../pkg/webhook), and
[semantics](../pkg/controller).
| Artifact | Link |
| -------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------- |
@ -74,7 +76,8 @@ API [resources](../pkg/apis/serving), [validation](../pkg/webhook), and [semanti
## Documentation
Knative documentation, especially the [Docs](https://github.com/knative/docs) repo.
Knative documentation, especially the [Docs](https://github.com/knative/docs)
repo.
| Artifact | Link |
| -------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
@ -108,8 +111,9 @@ Event sources, bindings, FaaS framework, and orchestration
## Networking
Inbound and outbound network connectivity for [serving](https://github.com/knative/serving) workloads.
Specific areas of interest include: load balancing, routing, DNS configuration and TLS support.
Inbound and outbound network connectivity for
[serving](https://github.com/knative/serving) workloads. Specific areas of
interest include: load balancing, routing, DNS configuration and TLS support.
| Artifact | Link |
| -------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------ |
@ -161,7 +165,8 @@ Autoscaling
## Productivity
Project health, test framework, continuous integration & deployment, release, performance/scale/load testing infrastructure
Project health, test framework, continuous integration & deployment, release,
performance/scale/load testing infrastructure
| Artifact | Link |
| -------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------ |
View File
@ -1,23 +1,30 @@
# Knative Eventing
Knative Eventing is a system that is designed to address a common need for cloud native development and
provides composable primitives to enable late-binding event sources and event consumers.
Knative Eventing is a system that is designed to address a common need for cloud
native development and provides composable primitives to enable late-binding
event sources and event consumers.
## Design overview
Knative Eventing is designed around the following goals:
1. Knative Eventing services are loosely coupled. These services can be developed and deployed independently on,
and across a variety of platforms (for example Kubernetes, VMs, SaaS or FaaS).
1. Event producers and event sources are independent. Any producer (or source), can generate events
before there are active event consumers that are listening. Any event consumer can express interest in an
event or class of events, before there are producers that are creating those events.
1. Other services can be connected to the Eventing system. These services can perform the following functions:
- Create new applications without modifying the event producer or event consumer.
1. Knative Eventing services are loosely coupled. These services can be
developed and deployed independently on, and across a variety of platforms
(for example Kubernetes, VMs, SaaS or FaaS).
1. Event producers and event sources are independent. Any producer (or source)
can generate events before there are active event consumers that are
listening. Any event consumer can express interest in an event or class of
events before there are producers that are creating those events.
1. Other services can be connected to the Eventing system. These services can
perform the following functions:
- Create new applications without modifying the event producer or event
consumer.
- Select and target specific subsets of the events from their producers.
1. Ensure cross-service interoperability. Knative Eventing is consistent with the
1. Ensure cross-service interoperability. Knative Eventing is consistent with
the
[CloudEvents](https://github.com/cloudevents/spec/blob/master/spec.md#design-goals)
specification that is developed by the [CNCF Serverless WG](https://lists.cncf.io/g/cncf-wg-serverless).
specification that is developed by the
[CNCF Serverless WG](https://lists.cncf.io/g/cncf-wg-serverless).
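
As a concrete illustration of the interoperability goal, events crossing the
system carry CloudEvents context attributes. A minimal event might look like the
following (attribute names are from the CloudEvents v0.1 draft; the event type
and payload are hypothetical):

```yaml
# Hypothetical event, shown as YAML for readability; on the wire this would
# typically be JSON, or HTTP headers plus a body.
cloudEventsVersion: "0.1"
eventType: dev.example.greeting # hypothetical event type
eventID: "1234-5678"
eventTime: "2018-12-04T20:00:00Z"
source: /example/producer
contentType: application/json
data:
  message: Hello, Knative
```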
### Event consumers
@ -54,7 +61,8 @@ The focus for the next Eventing release will be to enable easy implementation of
event sources. Sources manage registration and delivery of events from external
systems using Kubernetes
[Custom Resources](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/).
Learn more about Eventing development in the [Eventing work group](https://github.com/knative/docs/blob/master/community/WORKING-GROUPS.md#events).
Learn more about Eventing development in the
[Eventing work group](https://github.com/knative/docs/blob/master/community/WORKING-GROUPS.md#events).
## Installation
View File
@ -74,8 +74,8 @@ export IOTCORE_TOPIC_DEVICE="iot-demo-device-pubsub-topic"
[in-memory `ClusterChannelProvisioner`](https://github.com/knative/eventing/tree/master/config/provisioners/in-memory-channel).
- Note that you can skip this if you choose to use a different type of
`Channel`. If so, you will need to modify `channel.yaml` before
deploying it.
`Channel`. If so, you will need to modify `channel.yaml` before deploying
it.
#### GCP PubSub Source
View File
@ -8,11 +8,15 @@ consumption by a function that has been implemented as a Knative Service.
### Prerequisites
1. Set up [Knative Serving](https://github.com/knative/docs/tree/master/serving).
1. Setup [Knative Eventing](https://github.com/knative/docs/tree/master/eventing).
1. Set up
[Knative Eventing](https://github.com/knative/docs/tree/master/eventing).
### Channel
1. Create a `Channel`. You can use your own `Channel` or use the provided sample, which creates a channel called `testchannel`. If you use your own `Channel` with a different name, then you will need to alter other commands later.
1. Create a `Channel`. You can use your own `Channel` or use the provided
sample, which creates a channel called `testchannel`. If you use your own
`Channel` with a different name, then you will need to alter other commands
later.
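
The provided sample's `channel.yaml` is, in essence, a `Channel` backed by a
provisioner. A sketch of such a definition (the API version and provisioner name
reflect the in-memory provisioner of this release and may differ in yours):

```yaml
apiVersion: eventing.knative.dev/v1alpha1
kind: Channel
metadata:
  name: testchannel
spec:
  provisioner:
    apiVersion: eventing.knative.dev/v1alpha1
    kind: ClusterChannelProvisioner
    name: in-memory-channel
```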
```shell
kubectl -n default apply -f eventing/samples/kubernetes-event-source/channel.yaml
@ -20,7 +24,10 @@ kubectl -n default apply -f eventing/samples/kubernetes-event-source/channel.yam
### Service Account
1. Create a Service Account that the `Receive Adapter` runs as. The `Receive Adapater` watches for Kubernetes events and forwards them to the Knative Eventing Framework. If you want to re-use an existing Service Account with the appropriate permissions, you need to modify the
1. Create a Service Account that the `Receive Adapter` runs as. The
`Receive Adapter` watches for Kubernetes events and forwards them to the
Knative Eventing Framework. If you want to re-use an existing Service Account
with the appropriate permissions, you need to modify the
```shell
kubectl apply -f eventing/samples/kubernetes-event-source/serviceaccount.yaml
@ -28,7 +35,10 @@ kubectl apply -f eventing/samples/kubernetes-event-source/serviceaccount.yaml
### Create Event Source for Kubernetes Events
1. In order to receive events, you have to create a concrete Event Source for a specific namespace. If you are wanting to consume events from a differenet namespace or using a different `Service Account`, you need to modify the yaml accordingly.
1. In order to receive events, you have to create a concrete Event Source for a
specific namespace. If you want to consume events from a different namespace
or use a different `Service Account`, you need to modify the yaml accordingly.
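
The yaml being applied defines a `KubernetesEventSource` along these lines;
treat the field names as an illustrative sketch of the v1alpha1 API:

```yaml
apiVersion: sources.eventing.knative.dev/v1alpha1
kind: KubernetesEventSource
metadata:
  name: testevents
spec:
  namespace: default # the namespace whose events are consumed
  serviceAccountName: events-sa # the Service Account created earlier
  sink:
    apiVersion: eventing.knative.dev/v1alpha1
    kind: Channel
    name: testchannel
```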
```shell
kubectl apply -f eventing/samples/kubernetes-event-source/k8s-events.yaml
@ -36,9 +46,13 @@ kubectl apply -f eventing/samples/kubernetes-event-source/k8s-events.yaml
### Subscriber
In order to check the `KubernetesEventSource` is fully working, we will create a simple Knative Service that dumps incoming messages to its log and create a `Subscription` from the `Channel` to that Knative Service.
In order to check that the `KubernetesEventSource` is fully working, we will
create a simple Knative Service that dumps incoming messages to its log and
create a `Subscription` from the `Channel` to that Knative Service.
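
Under these assumptions (a channel named `testchannel` and a message-dumper
Service), the `Subscription` can be sketched as follows; the names are
illustrative:

```yaml
apiVersion: eventing.knative.dev/v1alpha1
kind: Subscription
metadata:
  name: testevents-subscription
  namespace: default
spec:
  channel:
    apiVersion: eventing.knative.dev/v1alpha1
    kind: Channel
    name: testchannel
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1alpha1
      kind: Service
      name: message-dumper
```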
1. If the deployed `KubernetesEventSource` is pointing at a `Channel` other than `testchannel`, modify `subscription.yaml` by replacing `testchannel` with that `Channel`'s name.
1. If the deployed `KubernetesEventSource` is pointing at a `Channel` other than
`testchannel`, modify `subscription.yaml` by replacing `testchannel` with
that `Channel`'s name.
1. Deploy `subscription.yaml`.
```shell
@ -47,7 +61,8 @@ kubectl apply -f eventing/samples/kubernetes-event-source/subscription.yaml
### Create Events
Create events by launching a pod in the default namespace. Create a busybox container
Create events by launching a pod in the default namespace. Create a busybox
container:
```shell
kubectl run -i --tty busybox --image=busybox --restart=Never -- sh
@ -61,7 +76,10 @@ kubectl delete pod busybox
### Verify
We will verify that the kubernetes events were sent into the Knative eventing system by looking at our message dumper function logsIf you deployed the [Subscriber](#subscriber), then continue using this section. If not, then you will need to look downstream yourself.
We will verify that the Kubernetes events were sent into the Knative eventing
system by looking at our message dumper function logs. If you deployed the
[Subscriber](#subscriber), then continue using this section. If not, then you
will need to look downstream yourself.
```shell
kubectl get pods
View File
@ -1,7 +1,7 @@
# Knative Install on Azure Kubernetes Service (AKS)
This guide walks you through the installation of the latest version of
Knative using pre-built images.
This guide walks you through the installation of the latest version of Knative
using pre-built images.
You can find [guides for other platforms here](README.md).
@ -16,10 +16,12 @@ commands will need to be adjusted for use in a Windows environment.
### Installing the Azure CLI
1. If you already have `azure cli` version `2.0.41` or later installed, you can skip to the next section and install `kubectl`
1. If you already have `azure cli` version `2.0.41` or later installed, you can
skip to the next section and install `kubectl`
Install `az` by following the instructions for your operating system.
See the [full installation instructions](https://docs.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest) if yours isn't listed below. You will need az cli version 2.0.37 or greater.
Install `az` by following the instructions for your operating system. See the
[full installation instructions](https://docs.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest)
if yours isn't listed below. You will need az cli version 2.0.37 or greater.
#### MacOS
@ -43,7 +45,9 @@ brew install azure-cli
### Installing kubectl
1. If you already have `kubectl`, run `kubectl version` to check your client version. If you have `kubectl` v1.10 installed, you can skip to the next section and create an AKS cluster
1. If you already have `kubectl`, run `kubectl version` to check your client
version. If you have `kubectl` v1.10 installed, you can skip to the next
section and create an AKS cluster
```bash
az aks install-cli
@ -57,7 +61,8 @@ Now that we have all the tools, we need a Kubernetes cluster to install Knative.
First let's identify your Azure subscription and save it for use later.
1. Run `az login` and follow the instructions in the command output to authorize `az` to use your account
1. Run `az login` and follow the instructions in the command output to authorize
`az` to use your account
1. List your Azure subscriptions:
```bash
az account list -o table
@ -66,7 +71,8 @@ First let's identify your Azure subscription and save it for use later.
### Create a Resource Group for AKS
To simplify the command lines for this walkthrough, we need to define a few
environment variables. First determine which region you'd like to run AKS in, along with the resource group you'd like to use.
environment variables. First determine which region you'd like to run AKS in,
along with the resource group you'd like to use.
1. Set `RESOURCE_GROUP` and `LOCATION` variables:
@ -76,14 +82,17 @@ environment variables. First determine which region you'd like to run AKS in, al
export CLUSTER_NAME=knative-cluster
```
2. Create a resource group with the az cli using the following command if you are using a new resource group.
2. Create a resource group with the az cli using the following command if you
are using a new resource group.
```bash
az group create --name $RESOURCE_GROUP --location $LOCATION
```
### Create a Kubernetes cluster using AKS
Next we will create a managed Kubernetes cluster using AKS. To make sure the cluster is large enough to host all the Knative and Istio components, the recommended configuration for a cluster is:
Next we will create a managed Kubernetes cluster using AKS. To make sure the
cluster is large enough to host all the Knative and Istio components, the
recommended configuration for a cluster is:
- Kubernetes version 1.10 or later
- Three or more nodes
@ -91,8 +100,9 @@ Next we will create a managed Kubernetes cluster using AKS. To make sure the clu
- RBAC enabled
1. Enable AKS in your subscription, use the following command with the az cli:
`bash az provider register -n Microsoft.ContainerService`
You should also ensure that the `Microsoft.Compute` and `Microsoft.Network` providers are registered in your subscription. If you need to enable them:
```bash
az provider register -n Microsoft.ContainerService
```

You should also ensure that the `Microsoft.Compute` and `Microsoft.Network`
providers are registered in your subscription. If you need to enable them:

```bash
az provider register -n Microsoft.Compute
az provider register -n Microsoft.Network
```
1. Create the AKS cluster!
@ -131,18 +141,19 @@ Knative depends on Istio.
```
1. Monitor the Istio components until all of the components show a `STATUS` of
`Running` or `Completed`:
`bash kubectl get pods --namespace istio-system`
`Running` or `Completed`:

```bash
kubectl get pods --namespace istio-system
```
It will take a few minutes for all the components to be up and running; you can
rerun the command to see the current status.
> Note: Instead of rerunning the command, you can add `--watch` to the above
> command to view the component's status updates in real time. Use CTRL + C to exit watch mode.
> command to view the component's status updates in real time. Use CTRL + C to
> exit watch mode.
## Installing Knative components
You can install the Knative Serving and Build components together, or Build on its own.
You can install the Knative Serving and Build components together, or Build on
its own.
### Installing Knative Serving and Build components
@ -150,8 +161,8 @@ You can install the Knative Serving and Build components together, or Build on i
```bash
kubectl apply --filename https://github.com/knative/serving/releases/download/v0.2.1/release.yaml
```
1. Monitor the Knative components until all of the components show a
`STATUS` of `Running`:
1. Monitor the Knative components until all of the components show a `STATUS` of
`Running`:
```bash
kubectl get pods --namespace knative-serving
kubectl get pods --namespace knative-build
@ -196,9 +207,9 @@ You have two options for deploying your first app:
## Cleaning up
Running a cluster costs money, so you might want to delete the cluster when you're done if
you're not using it. Deleting the cluster will also remove Knative, Istio,
and any apps you've deployed.
Running a cluster costs money, so you might want to delete the cluster when
you're done if you're not using it. Deleting the cluster will also remove
Knative, Istio, and any apps you've deployed.
To delete the cluster, enter the following command:

View File
# Knative Install on Google Kubernetes Engine
This guide walks you through the installation of the latest version of
Knative using pre-built images.
This guide walks you through the installation of the latest version of Knative
using pre-built images.
You can find [guides for other platforms here](README.md).
@ -57,7 +57,8 @@ export CLUSTER_ZONE=us-west1-c
### Setting up a Google Cloud Platform project
You need a Google Cloud Platform (GCP) project to create a Google Kubernetes Engine cluster.
You need a Google Cloud Platform (GCP) project to create a Google Kubernetes
Engine cluster.
1. Set `PROJECT` environment variable, you can replace `my-knative-project` with
the desired name of your GCP project. If you don't have one, we'll create one
@ -71,16 +72,19 @@ You need a Google Cloud Platform (GCP) project to create a Google Kubernetes Eng
gcloud projects create $PROJECT --set-as-default
```
You also need to [enable billing](https://cloud.google.com/billing/docs/how-to/manage-billing-account)
You also need to
[enable billing](https://cloud.google.com/billing/docs/how-to/manage-billing-account)
for your new project.
1. If you already have a GCP project, make sure your project is set as your `gcloud` default:
1. If you already have a GCP project, make sure your project is set as your
`gcloud` default:
```bash
gcloud config set core/project $PROJECT
```
> Tip: Enter `gcloud config get-value project` to view the ID of your default GCP project.
> Tip: Enter `gcloud config get-value project` to view the ID of your default
> GCP project.
1. Enable the necessary APIs:
```bash
@ -92,8 +96,8 @@ You need a Google Cloud Platform (GCP) project to create a Google Kubernetes Eng
## Creating a Kubernetes cluster
To make sure the cluster is large enough to host all the Knative and
Istio components, the recommended configuration for a cluster is:
To make sure the cluster is large enough to host all the Knative and Istio
components, the recommended configuration for a cluster is:
- Kubernetes version 1.10 or later
- 4 vCPU nodes (`n1-standard-4`)
@ -135,18 +139,19 @@ Knative depends on Istio.
kubectl label namespace default istio-injection=enabled
```
1. Monitor the Istio components until all of the components show a `STATUS` of
`Running` or `Completed`:
`bash kubectl get pods --namespace istio-system`
`Running` or `Completed`:

```bash
kubectl get pods --namespace istio-system
```
It will take a few minutes for all the components to be up and running; you can
rerun the command to see the current status.
> Note: Instead of rerunning the command, you can add `--watch` to the above
> command to view the component's status updates in real time. Use CTRL + C to exit watch mode.
> command to view the component's status updates in real time. Use CTRL + C to
> exit watch mode.
## Installing Knative components
You can install the Knative Serving and Build components together, or Build on its own.
You can install the Knative Serving and Build components together, or Build on
its own.
### Installing Knative Serving and Build components
@ -154,8 +159,8 @@ You can install the Knative Serving and Build components together, or Build on i
```bash
kubectl apply --filename https://github.com/knative/serving/releases/download/v0.2.1/release.yaml
```
1. Monitor the Knative components until all of the components show a
`STATUS` of `Running`:
1. Monitor the Knative components until all of the components show a `STATUS` of
`Running`:
```bash
kubectl get pods --namespace knative-serving
kubectl get pods --namespace knative-build
View File
@ -89,7 +89,8 @@ rerun the command to see the current status.
## Installing Knative components
You can install the Knative Serving and Build components together, or Build on its own.
You can install the Knative Serving and Build components together, or Build on
its own.
### Installing Knative Serving and Build components
@ -97,8 +98,8 @@ You can install the Knative Serving and Build components together, or Build on i
```bash
kubectl apply --filename https://github.com/knative/serving/releases/download/v0.2.1/release.yaml
```
1. Monitor the Knative components until all of the components show a
`STATUS` of `Running`:
1. Monitor the Knative components until all of the components show a `STATUS` of
`Running`:
```bash
kubectl get pods --namespace knative-serving
kubectl get pods --namespace knative-build
View File
@ -6,7 +6,8 @@ using pre-built images.
You may also have it all installed for you by clicking the button below:
[![Deploy to IBM Cloud](https://bluemix.net/deploy/button_x2.png)](https://console.bluemix.net/devops/setup/deploy?repository=https://git.ng.bluemix.net/start-with-knative/toolchain.git)
More [instructions on the deploy button here](https://git.ng.bluemix.net/start-with-knative/toolchain/blob/master/README.md).
More
[instructions on the deploy button here](https://git.ng.bluemix.net/start-with-knative/toolchain/blob/master/README.md).
You can find [guides for other platforms here](README.md).
@ -89,8 +90,8 @@ components, the recommended configuration for a cluster is:
```
If you're starting in a fresh account with no public and private VLANs, they
are created automatically for you. If you already have VLANs configured
in your account, get them via `ibmcloud cs vlans --zone $CLUSTER_ZONE` and
are created automatically for you. If you already have VLANs configured in
your account, get them via `ibmcloud cs vlans --zone $CLUSTER_ZONE` and
include the public/private VLAN in the `cluster-create` command:
```bash
@ -118,8 +119,8 @@ components, the recommended configuration for a cluster is:
ibmcloud cs cluster-config $CLUSTER_NAME
```
Follow the instructions on the screen to `EXPORT` the correct `KUBECONFIG` value
to point to the created cluster.
Follow the instructions on the screen to `EXPORT` the correct `KUBECONFIG`
value to point to the created cluster.
1. Make sure all nodes are up:
@ -127,8 +128,8 @@ components, the recommended configuration for a cluster is:
kubectl get nodes
```
Make sure all the nodes are in `Ready` state. You are now ready to install Istio
into your cluster.
Make sure all the nodes are in `Ready` state. You are now ready to install
Istio into your cluster.
## Installing Istio
@ -157,7 +158,8 @@ rerun the command to see the current status.
## Installing Knative components
You can install the Knative Serving and Build components together, or Build on its own.
You can install the Knative Serving and Build components together, or Build on
its own.
### Installing Knative Serving and Build components
@ -165,8 +167,8 @@ You can install the Knative Serving and Build components together, or Build on i
```bash
kubectl apply --filename https://github.com/knative/serving/releases/download/v0.2.1/release.yaml
```
1. Monitor the Knative components until all of the components show a
`STATUS` of `Running`:
1. Monitor the Knative components until all of the components show a `STATUS` of
`Running`:
```bash
kubectl get pods --namespace knative-serving
kubectl get pods --namespace knative-build
@ -211,8 +213,8 @@ You have two options for deploying your first app:
## Cleaning up
Running a cluster in IKS costs money, so if you're not using it, you might
want to delete the cluster when you're done. Deleting the cluster also removes
Running a cluster in IKS costs money, so if you're not using it, you might want
to delete the cluster when you're done. Deleting the cluster also removes
Knative, Istio, and any apps you've deployed.
To delete the cluster, enter the following command:
View File
@ -9,20 +9,21 @@ You can find [guides for other platforms here](README.md).
## Before you begin
Knative requires a Kubernetes cluster v1.10 or newer. If you don't have one,
you can create one using [Minikube](https://github.com/kubernetes/minikube).
Knative requires a Kubernetes cluster v1.10 or newer. If you don't have one, you
can create one using [Minikube](https://github.com/kubernetes/minikube).
### Install kubectl and Minikube
1. If you already have `kubectl` CLI, run `kubectl version` to check the
version. You need v1.10 or newer. If your `kubectl` is older, follow
the next step to install a newer version.
version. You need v1.10 or newer. If your `kubectl` is older, follow the next
step to install a newer version.
1. [Install the kubectl CLI](https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl).
1. [Install and configure minikube](https://github.com/kubernetes/minikube#installation)
version v0.28.1 or later with a [VM driver](https://github.com/kubernetes/minikube#requirements),
e.g. `kvm2` on Linux or `hyperkit` on macOS.
version v0.28.1 or later with a
[VM driver](https://github.com/kubernetes/minikube#requirements), e.g. `kvm2`
on Linux or `hyperkit` on macOS.
## Creating a Kubernetes cluster
@ -74,7 +75,8 @@ It will take a few minutes for all the components to be up and running; you can
rerun the command to see the current status.
> Note: Instead of rerunning the command, you can add `--watch` to the above
> command to view the component's status updates in real time. Use CTRL+C to exit watch mode.
> command to view the component's status updates in real time. Use CTRL+C to
> exit watch mode.
## Installing Knative Serving
@ -99,10 +101,12 @@ kubectl get pods --namespace knative-serving
```
Just as with the Istio components, it will take a few seconds for the Knative
components to be up and running; you can rerun the command to see the current status.
components to be up and running; you can rerun the command to see the current
status.
> Note: Instead of rerunning the command, you can add `--watch` to the above
> command to view the component's status updates in real time. Use CTRL+C to exit watch mode.
> command to view the component's status updates in real time. Use CTRL+C to
> exit watch mode.
Now you can deploy an app to your newly created Knative cluster.
@ -118,10 +122,10 @@ guide.
If you'd like to view the available sample apps and deploy one of your choosing,
head to the [sample apps](../serving/samples/README.md) repo.
> Note: When looking up the IP address to use for accessing your app, you need to look up
> the NodePort for the `knative-ingressgateway` as well as the IP address used for Minikube.
> You can use the following command to look up the value to use for the {IP_ADDRESS} placeholder
> used in the samples:
> Note: When looking up the IP address to use for accessing your app, you need
> to look up the NodePort for the `knative-ingressgateway` as well as the IP
> address used for Minikube. You can use the following command to look up the
> value to use for the {IP_ADDRESS} placeholder used in the samples:
```shell
echo $(minikube ip):$(kubectl get svc knative-ingressgateway --namespace istio-system --output 'jsonpath={.spec.ports[?(@.port==80)].nodePort}')
View File
@ -1,7 +1,7 @@
# Knative Install on OpenShift
This guide walks you through the installation of the latest version of [Knative
Serving](https://github.com/knative/serving) on an
This guide walks you through the installation of the latest version of
[Knative Serving](https://github.com/knative/serving) on an
[OpenShift](https://github.com/openshift/origin) using pre-built images and
demonstrates creating and deploying an image of a sample "hello world" app onto
the newly created Knative cluster.
@ -10,7 +10,8 @@ You can find [guides for other platforms here](README.md).
## Minishift setup
- Setup minishift based instructions from https://docs.okd.io/latest/minishift/getting-started/index.html
- Set up minishift based on the instructions at
https://docs.okd.io/latest/minishift/getting-started/index.html
- Ensure `minishift` is set up correctly by running the command:
@ -21,7 +22,8 @@ minishift version
## Configure and start minishift
The following details the bare minimum configuration required to setup minishift for running Knative:
The following details the bare minimum configuration required to set up
minishift for running Knative:
```shell
@ -53,15 +55,25 @@ minishift addons enable anyuid
minishift start
```
- The above configuration ensures that Knative gets created in its own [minishift profile](https://docs.okd.io/latest/minishift/using/profiles.html) called `knative` with 8GB of RAM, 4 vCpus and 50GB of hard disk. The image-caching helps in re-starting up the cluster faster every time.
- The [addon](https://docs.okd.io/latest/minishift/using/addons.html) **admin-user** creates a user called `admin` with password `admin` having the role of cluster-admin. The user gets created only after the addon is applied, that is usually after successful start of Minishift
- The [addon](https://docs.okd.io/latest/minishift/using/addons.html) **anyuid** allows the `default` service account to run the application with uid `0`
- The above configuration ensures that Knative gets created in its own
[minishift profile](https://docs.okd.io/latest/minishift/using/profiles.html)
called `knative` with 8GB of RAM, 4 vCPUs, and 50GB of hard disk. The image
caching helps restart the cluster faster every time.
- The [addon](https://docs.okd.io/latest/minishift/using/addons.html)
**admin-user** creates a user called `admin` with password `admin` that has the
cluster-admin role. The user gets created only after the addon is applied,
usually after a successful start of Minishift.
- The [addon](https://docs.okd.io/latest/minishift/using/addons.html) **anyuid**
allows the `default` service account to run the application with uid `0`.
- The command `minishift profile set knative` is required every time you start and stop minishift to make sure that you are on right `knative` minishift profile that was configured above.
- The command `minishift profile set knative` is required every time you start
and stop minishift, to make sure that you are on the right `knative` minishift
profile that was configured above.
## Configuring `oc` (openshift cli)
Running the following command make sure that you have right version of `oc` and have configured the DOCKER daemon to be connected to minishift docker.
Running the following command makes sure that you have the right version of
`oc` and have configured the Docker daemon to connect to the minishift Docker
daemon.
```shell
# configures the host to talk to the Docker daemon of minishift
@ -74,9 +86,13 @@ minishift oc-env
### Enable Admission Controller Webhook
To deploy and run serverless Knative applications, you must enable the
[Admission Controller Webhook](https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/).
Run the following command to configure OpenShift (run via Minishift) for the
[Admission Controller Webhook](https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/):
```shell
# Enable admission controller webhooks
@ -117,15 +133,29 @@ until oc login -u admin -p admin; do sleep 5; done;
oc label namespace myproject istio-injection=enabled
```
The `oc adm policy` command adds the **privileged**
[Security Context Constraints (SCCs)](https://docs.okd.io/3.10/admin_guide/manage_scc.html)
to the **default** service account. The SCCs are the precursor to the PSP (Pod
Security Policy) mechanism in Kubernetes; because Istio sidecars must run with
**privileged** permissions, you need to grant that here.
The **myproject** project is also labeled for automatic Istio sidecar
injection; with the `istio-injection=enabled` label on **myproject**, each
Knative application deployed there will have its Istio sidecar injected
automatically.
> **IMPORTANT:** Avoid using the `default` project in OpenShift for deploying
> Knative applications. Because OpenShift deploys a few of its mission-critical
> applications in the `default` project, it's safer not to touch it, to avoid
> destabilizing OpenShift.
### Installing Istio
Knative depends on Istio. The
[istio-openshift-policies.sh](scripts/istio-openshift-policies.sh) script runs
the commands required to configure the necessary
[privileges](https://istio.io/docs/setup/kubernetes/platform-setup/openshift/)
for the service accounts used by Istio.
```shell
curl -s https://raw.githubusercontent.com/knative/docs/master/install/scripts/istio-openshift-policies.sh | bash
@ -137,21 +167,29 @@ curl -s https://raw.githubusercontent.com/knative/docs/master/install/scripts/is
oc apply -f https://storage.googleapis.com/knative-releases/serving/latest/istio.yaml
```
> **NOTE:** If you get errors after running the above command, such as **unable
> to recognize "STDIN": no matches for kind "Gateway" in version
> "networking.istio.io/v1alpha3"**, just run the command again; it is
> idempotent, so objects will be created only once.
2. Ensure the istio-sidecar-injector pods run as privileged:
```shell
oc get cm istio-sidecar-injector -n istio-system -oyaml | sed -e 's/securityContext:/securityContext:\\n privileged: true/' | oc replace -f -
```
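The `sed` expression above simply injects a `privileged: true` flag beneath the
`securityContext:` key of the injector's ConfigMap. Its effect on a minimal
one-line stand-in (GNU `sed`; BSD `sed` treats `\n` in the replacement
differently):

```shell
# A one-line stand-in for the ConfigMap content being edited above
printf 'securityContext:\n' \
  | sed -e 's/securityContext:/securityContext:\n  privileged: true/'
```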
3. Monitor the Istio components until all of the components show a `STATUS` of
`Running` or `Completed`:
```shell
while oc get pods -n istio-system | grep -v -E "(Running|Completed|STATUS)"; do sleep 5; done
```
> **NOTE:** It will take a few minutes for all the components to be up and
> running.
## Install Knative Serving
The following section details how to deploy
[Knative Serving](https://github.com/knative/serving) to OpenShift.
The [knative-openshift-policies.sh](scripts/knative-openshift-policies.sh)
script runs the commands required to configure the necessary privileges for the
service accounts used by Knative.
```shell
curl -s https://raw.githubusercontent.com/knative/docs/master/install/scripts/knative-openshift-policies.sh | bash
@ -159,8 +197,10 @@ curl -s https://raw.githubusercontent.com/knative/docs/master/install/scripts/kn
> You can safely ignore the warnings:
- Warning: ServiceAccount 'build-controller' not found cluster role
"cluster-admin" added: "build-controller"
- Warning: ServiceAccount 'controller' not found cluster role "cluster-admin"
added: "controller"
1. Install Knative serving:
@ -168,18 +208,22 @@ curl -s https://raw.githubusercontent.com/knative/docs/master/install/scripts/kn
oc apply -f https://storage.googleapis.com/knative-releases/serving/latest/release-no-mon.yaml
```
2. Monitor the Knative components until all of the components show a `STATUS` of
`Running` or `Completed`:
```shell
while oc get pods -n knative-build | grep -v -E "(Running|Completed|STATUS)"; do sleep 5; done
while oc get pods -n knative-serving | grep -v -E "(Running|Completed|STATUS)"; do sleep 5; done
```
The first command watches all pod statuses in `knative-build`, and the second
watches all pod statuses in `knative-serving`.
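The loop idiom exits once `grep -v` finds nothing left to print, that is, once
every line is either the header or a `Running`/`Completed` pod. A quick
illustration with canned output:

```shell
# Simulated `oc get pods` output: the filter drops the header line and the
# healthy pod, leaving only the pod that is still starting up.
printf 'NAME READY STATUS\nweb-1 1/1 Running\nbuild-1 0/1 ContainerCreating\n' \
  | grep -v -E "(Running|Completed|STATUS)"
# -> build-1 0/1 ContainerCreating
```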
> **NOTE:** It will take a few minutes for all the components to be up and
> running.
3. Set a route to the OpenShift ingress CIDR so that services can be accessed
   via the LoadBalancer IP:
```shell
# Only for macOS
sudo route -n add -net $(minishift openshift config view | grep ingressIPNetworkCIDR | awk '{print $NF}') $(minishift ip)
@ -224,7 +268,8 @@ head to the [sample apps](../serving/samples/README.md) repository.
## Cleaning up
There are two ways to clean up, either deleting the entire minishift profile or
only the respective projects.
1. Delete just Istio and Knative projects and applications:
@ -1,7 +1,7 @@
# Knative Install on OpenShift
This guide walks you through the installation of the latest version of
[Knative Serving](https://github.com/knative/serving) on an
[OpenShift](https://github.com/openshift/origin) using pre-built images and
demonstrates creating and deploying an image of a sample "hello world" app onto
the newly created Knative cluster.
@ -11,7 +11,8 @@ You can find [guides for other platforms here](README.md).
## Before you begin
These instructions will run an OpenShift 3.10 (Kubernetes 1.10) cluster on your
local machine using
[`oc cluster up`](https://docs.openshift.org/latest/getting_started/administrators.html#running-in-a-docker-container)
to test-drive Knative.
## Install `oc` (openshift cli)
@ -131,7 +132,8 @@ It will take a few minutes for all the components to be up and running; you can
rerun the command to see the current status.
> Note: Instead of rerunning the command, you can add `--watch` to the above
> command to view the component's status updates in real time. Use CTRL+C to
> exit watch mode.
Set `privileged` to `true` for the `istio-sidecar-injector`:
@ -187,10 +189,12 @@ oc get pods -n knative-serving
```
Just as with the Istio components, it will take a few seconds for the Knative
components to be up and running; you can rerun the command to see the current
status.
> Note: Instead of rerunning the command, you can add `--watch` to the above
> command to view the component's status updates in real time. Use CTRL+C to
> exit watch mode.
Now you can deploy an app to your newly created Knative cluster.
@ -206,10 +210,10 @@ guide.
If you'd like to view the available sample apps and deploy one of your choosing,
head to the [sample apps](../serving/samples/README.md) repo.
> Note: When looking up the IP address to use for accessing your app, you need
> to look up the NodePort for the `knative-ingressgateway` as well as the IP
> address used for OpenShift. You can use the following command to look up the
> value to use for the {IP_ADDRESS} placeholder used in the samples:
```shell
export IP_ADDRESS=$(oc get node -o 'jsonpath={.items[0].status.addresses[0].address}'):$(oc get svc knative-ingressgateway -n istio-system -o 'jsonpath={.spec.ports[?(@.port==80)].nodePort}')
@ -1,7 +1,7 @@
# Knative Install on Pivotal Container Service
This guide walks you through the installation of the latest version of Knative
using pre-built images.
You can find [guides for other platforms here](README.md).
@ -16,17 +16,20 @@ commands will need to be adjusted for use in a Windows environment.
### Installing Pivotal Container Service
To install Pivotal Container Service (PKS), follow the documentation at
https://docs.pivotal.io/runtimes/pks/1-1/installing-pks.html.
## Creating a Kubernetes cluster
> NOTE: Knative uses Istio sidecar injection and requires privileged mode for
> your init containers.
To enable privileged mode and create a cluster:
1. Enable privileged mode:
1. Open the Pivotal Container Service tile in PCF Ops Manager.
1. In the plan configuration that you want to use, enable both of the
following:
- Enable Privileged Containers - Use with caution
- Disable DenyEscalatingExec
1. Save your changes.
@ -35,11 +38,13 @@ To enable privileged mode and create a cluster:
## Access the cluster
To retrieve your cluster credentials, follow the documentation at
https://docs.pivotal.io/runtimes/pks/1-1/cluster-credentials.html.
## Installing Istio
Knative depends on Istio. Istio workloads require privileged mode for init
containers.
1. Install Istio:
```bash
@ -50,18 +55,19 @@ Knative depends on Istio. Istio workloads require privileged mode for Init Conta
kubectl label namespace default istio-injection=enabled
```
1. Monitor the Istio components until all of the components show a `STATUS` of
   `Running` or `Completed`:

   ```bash
   kubectl get pods --namespace istio-system
   ```
It will take a few minutes for all the components to be up and running; you can
rerun the command to see the current status.
> Note: Instead of rerunning the command, you can add `--watch` to the above
> command to view the component's status updates in real time. Use CTRL + C to
> exit watch mode.
## Installing Knative components
You can install the Knative Serving and Build components together, or Build on
its own.
### Installing Knative Serving and Build components
@ -69,8 +75,8 @@ You can install the Knative Serving and Build components together, or Build on i
```bash
kubectl apply --filename https://github.com/knative/serving/releases/download/v0.2.1/release.yaml
```
1. Monitor the Knative components until all of the components show a `STATUS` of
`Running`:
```bash
kubectl get pods --namespace knative-serving
kubectl get pods --namespace knative-build
@ -115,4 +121,5 @@ You have two options for deploying your first app:
## Cleaning up
To delete the cluster, follow the documentation at
https://docs.pivotal.io/runtimes/pks/1-1/delete-cluster.html.
@ -6,9 +6,10 @@ using pre-built images.
## Before you begin
Knative requires a Kubernetes cluster v1.10 or newer with the
[MutatingAdmissionWebhook admission controller](https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#how-do-i-turn-on-an-admission-controller)
enabled. `kubectl` v1.10 is also required. This guide assumes that you've
already created a Kubernetes cluster which you're comfortable installing _alpha_
software on.
This guide assumes you are using bash in a Mac or Linux environment; some
commands will need to be adjusted for use in a Windows environment.
@ -41,7 +42,8 @@ rerun the command to see the current status.
## Installing Knative components
You can install the Knative Serving and Build components together, or Build on
its own.
### Installing Knative Serving and Build components
@ -49,8 +51,8 @@ You can install the Knative Serving and Build components together, or Build on i
```bash
kubectl apply --filename https://github.com/knative/serving/releases/download/v0.2.2/release.yaml
```
1. Monitor the Knative components until all of the components show a `STATUS` of
`Running`:
```bash
kubectl get pods --namespace knative-serving
kubectl get pods --namespace knative-build
@ -4,12 +4,18 @@ Follow this guide to install Knative components on a platform of your choice.
## Choosing a Kubernetes cluster
To get started with Knative, you need a Kubernetes cluster. If you aren't sure
which Kubernetes platform is right for you, see
[Picking the Right Solution](https://kubernetes.io/docs/setup/pick-right-solution/).
We provide information for installing Knative on
[Google Kubernetes Engine](https://cloud.google.com/kubernetes-engine/docs/),
[IBM Cloud Kubernetes Service](https://www.ibm.com/cloud/container-service),
[Azure Kubernetes Service](https://docs.microsoft.com/en-us/azure/aks/),
[Minikube](https://kubernetes.io/docs/setup/minikube/),
[OpenShift](https://github.com/openshift/origin) and
[Pivotal Container Service](https://pivotal.io/platform/pivotal-container-service)
clusters.
## Installing Knative
@ -25,8 +31,8 @@ Knative components on the following platforms:
- [Knative Install on Minishift](Knative-with-Minishift.md)
- [Knative Install on Pivotal Container Service](Knative-with-PKS.md)
If you already have a Kubernetes cluster you're comfortable installing _alpha_
software on, use the following instructions:
- [Knative Install on any Kubernetes](Knative-with-any-k8s.md)
@ -1,7 +1,7 @@
# Checking the Version of Your Knative Serving Installation
If you want to check what version of Knative serving you have installed, enter
the following command:
```bash
kubectl describe deploy controller --namespace knative-serving
@ -11,7 +11,6 @@ This will return the description for the `knative-serving` controller; this
information contains the link to the container that was used to install Knative:
```yaml
Pod Template:
Labels: app=controller
@ -23,15 +22,15 @@ Pod Template:
Image: gcr.io/knative-releases/github.com/knative/serving/cmd/controller@sha256:59abc8765d4396a3fc7cac27a932a9cc151ee66343fa5338fb7146b607c6e306
```
Copy the full `gcr.io` link to the container and paste it into your browser. If
you are already signed in to a Google account, you'll be taken to the Google
Container Registry page for that container in the Google Cloud Platform console.
If you aren't already signed in, you'll need to sign in to a Google account
before you can view the container details.
On the container details page, you'll see a section titled "Container
classification," and in that section is a list of tags. The versions of Knative
you have installed will appear in the list as `v0.1.1`, or whatever version you
have installed:
![Shows list of tags on container details page; v0.1.1 is the Knative version and is the first tag.](../images/knative-version.png)
@ -8,16 +8,19 @@ using cURL requests.
You need:
- A Kubernetes cluster with [Knative installed](./README.md).
- An image of the app that you'd like to deploy available on a container
registry. The image of the sample app used in this guide is available on
Google Container Registry.
## Sample application
This guide uses the
[Hello World sample app in Go](../serving/samples/helloworld-go) to demonstrate
the basic workflow for deploying an app, but these steps can be adapted for your
own application if you have an image of it available on
[Docker Hub](https://docs.docker.com/docker-hub/repos/),
[Google Container Registry](https://cloud.google.com/container-registry/docs/pushing-and-pulling),
or another container image registry.
The Hello World sample app reads in an `env` variable, `TARGET`, from the
configuration `.yaml` file, then prints "Hello World: \${TARGET}!". If `TARGET`
@ -25,8 +28,8 @@ isn't defined, it will print "NOT SPECIFIED".
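The sample's fallback behavior can be sketched in a couple of lines of shell
(an illustration of the logic only, not the actual Go source):

```shell
# Mirror the sample app's behavior: fall back to NOT SPECIFIED when TARGET
# is unset or empty.
greet() {
  echo "Hello World: ${TARGET:-NOT SPECIFIED}!"
}

TARGET="Go Sample v1"
greet # -> Hello World: Go Sample v1!
unset TARGET
greet # -> Hello World: NOT SPECIFIED!
```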
## Configuring your deployment
To deploy an app using Knative, you need a configuration `.yaml` file that
defines a Service. For more information about the Service object, see the
[Resource Types documentation](https://github.com/knative/serving/blob/master/docs/spec/overview.md#service).
This configuration file specifies metadata about the application, points to the
@ -35,7 +38,8 @@ configured. For more information about what configuration options are available,
see the
[Serving spec documentation](https://github.com/knative/serving/blob/master/docs/spec/spec.md).
Create a new file named `service.yaml`, then copy and paste the following
content into it:
```yaml
apiVersion: serving.knative.dev/v1alpha1 # Current version of Knative
@ -61,7 +65,8 @@ the image accordingly.
## Deploying your app
From the directory where the new `service.yaml` file was created, apply the
configuration:
```bash
kubectl apply --filename service.yaml
@ -72,13 +77,13 @@ Now that your service is created, Knative will perform the following steps:
- Create a new immutable revision for this version of the app.
- Perform network programming to create a route, ingress, service, and load
balancer for your app.
- Automatically scale your pods up and down based on traffic, including to zero
active pods.
### Interacting with your app
To see if your app has been deployed successfully, you need the host URL and IP
address created by Knative.
Note: If your cluster is new, it can take some time before the service is
assigned an external IP address.
@ -102,9 +107,10 @@ asssigned an external IP address.
```
> Note: if you use Minikube or a bare-metal cluster that has no external load
> balancer, the `EXTERNAL-IP` field is shown as `<pending>`. You need to use
> the `NodeIP` and `NodePort` to interact with your app instead. To get your
> app's `NodeIP` and `NodePort`, enter the following command:
```shell
export IP_ADDRESS=$(kubectl get node --output 'jsonpath={.items[0].status.addresses[0].address}'):$(kubectl get svc knative-ingressgateway --namespace istio-system --output 'jsonpath={.spec.ports[?(@.port==80)].nodePort}')
@ -125,8 +131,8 @@ asssigned an external IP address.
```
If you changed the name from `helloworld-go` to something else when creating
the `.yaml` file, replace `helloworld-go` in the above commands with the name
you entered.
1. Now you can make a request to your app and see the results. Replace
`IP_ADDRESS` with the `EXTERNAL-IP` you wrote down, and replace
@ -138,16 +144,16 @@ asssigned an external IP address.
Hello World: Go Sample v1!
```
If you exported the host URL and IP address as variables in the previous
steps, you can use those variables to simplify your cURL request:
```shell
curl -H "Host: ${HOST_URL}" http://${IP_ADDRESS}
Hello World: Go Sample v1!
```
If you deployed your own app, you might want to customize this cURL request
to interact with your application.
It can take a few seconds for Knative to scale up your application and return
a response.
@ -1,12 +1,12 @@
# Resources
This page contains information about various tools and technologies that are
useful to anyone developing on Knative.
## Community Resources
This section contains tools and technologies developed by members of the Knative
community specifically for use with Knative.
### [`knctl`](https://github.com/cppforlife/knctl)
@ -14,8 +14,8 @@ Knative community specifically for use with Knative.
## Other Resources
This section contains other tools and technologies that are useful when working
with Knative.
### [`go-containerregistry`](https://github.com/google/go-containerregistry/)
@ -36,8 +36,8 @@ which can be used to interact with and inspect images in a registry.
efficiently builds container images from Java source, without a Dockerfile,
without requiring access to the Docker daemon.
Like `ko`, when `jib` is invoked, it builds your Java source and pushes an image
with that built source atop a
[distroless](https://github.com/GoogleContainerTools/distroless) base image to
produce small images that support fast incremental image builds.
@ -48,10 +48,10 @@ The build templates take no parameters.
### [`kaniko`](https://github.com/GoogleContainerTools/kaniko)
`kaniko` is a tool that enables building a container image from source using the
Dockerfile format, without requiring access to a Docker daemon. Removing this
requirement means that `kaniko` is
[safe to run on a Kubernetes cluster](https://github.com/kubernetes/kubernetes/issues/1806).
By contrast, building an image using `docker build` necessarily requires the
Docker daemon, which would give the build complete access to your entire
@ -78,8 +78,10 @@ packages by their [import paths](https://golang.org/doc/code.html#ImportPaths)
(e.g., `github.com/knative/serving/cmd/controller`)
The typical usage is `ko apply -f config.yaml`, which reads in the config YAML,
and looks for Go import paths representing runnable commands (i.e.,
`package main`). When it finds a matching import path, `ko` builds the package
using `go build` then pushes a container image containing that binary on top of
a base image (by default, `gcr.io/distroless/base`) to
`$KO_DOCKER_REPO/unique-string`. After pushing those images, `ko` replaces
instances of matched import paths with fully-qualified references to the images
it pushed.
@ -87,7 +89,6 @@ it pushed.
So if `ko apply` was passed this config:
```yaml
image: github.com/my/repo/cmd/foo
```
@ -95,7 +96,6 @@ image: github.com/my/repo/cmd/foo
...it would produce YAML like:
```yaml
image: gcr.io/my-docker-repo/foo-zyxwvut@sha256:abcdef # image by digest
```
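Conceptually, the rewrite `ko` performs is just that textual mapping from
import path to pushed image reference; a toy stand-in for the substitution
(both the repo path and the digest are fabricated, as in the example above):

```shell
# Replace the import path with the (made-up) image reference ko would push
printf 'image: github.com/my/repo/cmd/foo\n' \
  | sed 's|github.com/my/repo/cmd/foo|gcr.io/my-docker-repo/foo-zyxwvut@sha256:abcdef|'
# -> image: gcr.io/my-docker-repo/foo-zyxwvut@sha256:abcdef
```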
@ -119,10 +119,11 @@ intended to be required for _users_ of Knative -- they should only need to
### [`skaffold`](https://github.com/GoogleContainerTools/skaffold)
`skaffold` is a CLI tool to aid in iterative development for Kubernetes.
Typically, you would write a
[YAML config](https://github.com/GoogleContainerTools/skaffold/blob/master/examples/annotated-skaffold.yaml)
describing to Skaffold how to build and deploy your app, then run
`skaffold dev`, which watches your local source tree for changes and
continuously builds and deploys based on your config when changes are detected.
Skaffold supports many pluggable implementations for building and deploying.
Skaffold contributors are working on support for Knative Build as a build
@ -13,40 +13,41 @@ The Knative Serving project provides middleware primitives that enable:
## Serving resources
Knative Serving defines a set of objects as Kubernetes Custom Resource
Definitions (CRDs). These objects are used to define and control how your
serverless workload behaves on the cluster:
- [Service](https://github.com/knative/serving/blob/master/docs/spec/spec.md#service):
The `service.serving.knative.dev` resource automatically manages the whole
lifecycle of your workload. It controls the creation of other objects to
ensure that your app has a route, a configuration, and a new revision for each
update of the service. Service can be defined to always route traffic to the
latest revision or to a pinned revision.
- [Route](https://github.com/knative/serving/blob/master/docs/spec/spec.md#route):
The `route.serving.knative.dev` resource maps a network endpoint to a one or
more revisions. You can manage the traffic in several ways, including fractional
traffic and named routes.
more revisions. You can manage the traffic in several ways, including
fractional traffic and named routes.
- [Configuration](https://github.com/knative/serving/blob/master/docs/spec/spec.md#configuration):
The `configuration.serving.knative.dev` resource maintains
the desired state for your deployment. It provides a clean separation between
code and configuration and follows the Twelve-Factor App methodology. Modifying a configuration
creates a new revision.
The `configuration.serving.knative.dev` resource maintains the desired state
for your deployment. It provides a clean separation between code and
configuration and follows the Twelve-Factor App methodology. Modifying a
configuration creates a new revision.
- [Revision](https://github.com/knative/serving/blob/master/docs/spec/spec.md#revision):
The `revision.serving.knative.dev` resource is a point-in-time snapshot
of the code and configuration for each modification made to the workload. Revisions
The `revision.serving.knative.dev` resource is a point-in-time snapshot of the
code and configuration for each modification made to the workload. Revisions
are immutable objects and can be retained for as long as useful.
![Diagram that displays how the Serving resources coordinate with each other.](https://github.com/knative/serving/raw/master/docs/spec/images/object_model.png)
## Getting Started
To get started with Serving, check out one of the [hello world](samples/) sample projects.
These projects use the `Service` resource, which manages all of the details for you.
To get started with Serving, check out one of the [hello world](samples/) sample
projects. These projects use the `Service` resource, which manages all of the
details for you.
With the `Service` resource, a deployed service will automatically have a matching route
and configuration created. Each time the `Service` is updated, a new revision is
created.
With the `Service` resource, a deployed service will automatically have a
matching route and configuration created. Each time the `Service` is updated, a
new revision is created.
For more information on the resources and their interactions, see the
[Resource Types Overview](https://github.com/knative/serving/blob/master/docs/spec/overview.md)
@ -80,8 +81,8 @@ in the Knative Serving repository.
## Known Issues
See the [Knative Serving Issues](https://github.com/knative/serving/issues) page for a full list of
known issues.
See the [Knative Serving Issues](https://github.com/knative/serving/issues) page
for a full list of known issues.
---

View File

@ -1,20 +1,21 @@
# Accessing logs
If you have not yet installed the logging and monitoring components, go through the
[installation instructions](./installing-logging-metrics-traces.md) to set up the
necessary components first.
If you have not yet installed the logging and monitoring components, go through
the [installation instructions](./installing-logging-metrics-traces.md) to set
up the necessary components first.
## Kibana and Elasticsearch
- To open the Kibana UI (the visualization tool for [Elasticsearch](https://info.elastic.co)),
start a local proxy with the following command:
- To open the Kibana UI (the visualization tool for
[Elasticsearch](https://info.elastic.co)), start a local proxy with the
following command:
```shell
kubectl proxy
```
This command starts a local proxy of Kibana on port 8001. For security reasons,
the Kibana UI is exposed only within the cluster.
This command starts a local proxy of Kibana on port 8001. For security
reasons, the Kibana UI is exposed only within the cluster.
- Navigate to the
[Kibana UI](http://localhost:8001/api/v1/namespaces/knative-monitoring/services/kibana-logging/proxy/app/kibana).
@ -24,12 +25,13 @@ necessary components first.
![Kibana UI Discover tab](./images/kibana-discover-tab-annotated.png)
You can change the time frame of logs Kibana displays in the upper right corner
of the screen. The main search bar is across the top of the Discover page.
You can change the time frame of logs Kibana displays in the upper right
corner of the screen. The main search bar is across the top of the Discover
page.
- As more logs are ingested, new fields will be discovered. To have them indexed,
go to "Management" > "Index Patterns" > Refresh button (on top right) > "Refresh
fields".
- As more logs are ingested, new fields will be discovered. To have them
indexed, go to "Management" > "Index Patterns" > Refresh button (on top
right) > "Refresh fields".
<!-- TODO: create a video walkthrough of the Kibana UI -->
@ -96,8 +98,8 @@ To access the request logs, enter the following search in Kibana:
tag: "requestlog.logentry.istio-system"
```
Request logs contain details about requests served by the revision. Below is
a sample request log:
Request logs contain details about requests served by the revision. Below is a
sample request log:
```text
@timestamp July 10th 2018, 10:09:28.000
@ -128,7 +130,8 @@ See [Accessing Traces](./accessing-traces.md) page for details.
## Stackdriver
Go to the [GCP Console logging page](https://console.cloud.google.com/logs/viewer) for
Go to the
[GCP Console logging page](https://console.cloud.google.com/logs/viewer) for
your GCP project, which stores your logs via Stackdriver.
---

View File

@ -9,25 +9,32 @@ the visualization tool for [Prometheus](https://prometheus.io/).
kubectl port-forward --namespace knative-monitoring $(kubectl get pods --namespace knative-monitoring --selector=app=grafana --output=jsonpath="{.items..metadata.name}") 3000
```
- This starts a local proxy of Grafana on port 3000. For security reasons, the Grafana UI is exposed only within the cluster.
- This starts a local proxy of Grafana on port 3000. For security reasons, the
Grafana UI is exposed only within the cluster.
2. Navigate to the Grafana UI at [http://localhost:3000](http://localhost:3000).
3. Select the **Home** button on the top of the page to see the list of pre-installed dashboards (screenshot below):
3. Select the **Home** button on the top of the page to see the list of
pre-installed dashboards (screenshot below):
![Knative Dashboards](./images/grafana1.png)
The following dashboards are pre-installed with Knative Serving:
- **Revision HTTP Requests:** HTTP request count, latency, and size metrics per revision and per configuration
- **Revision HTTP Requests:** HTTP request count, latency, and size metrics per
revision and per configuration
- **Nodes:** CPU, memory, network, and disk metrics at node level
- **Pods:** CPU, memory, and network metrics at pod level
- **Deployment:** CPU, memory, and network metrics aggregated at deployment level
- **Deployment:** CPU, memory, and network metrics aggregated at deployment
level
- **Istio, Mixer and Pilot:** Detailed Istio mesh, Mixer, and Pilot metrics
- **Kubernetes:** Dashboards giving insights into cluster health, deployments, and capacity usage
- **Kubernetes:** Dashboards giving insights into cluster health, deployments,
and capacity usage
4. Set up an administrator account to modify or add dashboards by signing in with username: `admin` and password: `admin`.
4. Set up an administrator account to modify or add dashboards by signing in
with username: `admin` and password: `admin`.
- Before you expose the Grafana UI outside the cluster, make sure to change the password.
- Before you expose the Grafana UI outside the cluster, make sure to change the
password.
---

View File

@ -1,8 +1,8 @@
# Accessing request traces
If you have not yet installed the logging and monitoring components, go through the
[installation instructions](./installing-logging-metrics-traces.md) to set up the
necessary components.
If you have not yet installed the logging and monitoring components, go through
the [installation instructions](./installing-logging-metrics-traces.md) to set
up the necessary components.
In order to access request traces, you use the Zipkin visualization tool.
@ -12,14 +12,15 @@ In order to access request traces, you use the Zipkin visualization tool.
kubectl proxy
```
This command starts a local proxy of Zipkin on port 8001. For security reasons, the
Zipkin UI is exposed only within the cluster.
This command starts a local proxy of Zipkin on port 8001. For security
reasons, the Zipkin UI is exposed only within the cluster.
1. Navigate to the [Zipkin UI](http://localhost:8001/api/v1/namespaces/istio-system/services/zipkin:9411/proxy/zipkin/).
1. Navigate to the
[Zipkin UI](http://localhost:8001/api/v1/namespaces/istio-system/services/zipkin:9411/proxy/zipkin/).
1. Click "Find Traces" to see the latest traces. You can search for a trace ID
or look at traces of a specific application. Click on a trace to see a detailed
view of a specific call.
or look at traces of a specific application. Click on a trace to see a
detailed view of a specific call.
<!--TODO: Consider adding a video here. -->

View File

@ -1,13 +1,13 @@
# Debugging Issues with Your Application
You deployed your app to Knative Serving, but it isn't working as expected.
Go through this step-by-step guide to understand what failed.
You deployed your app to Knative Serving, but it isn't working as expected. Go
through this step-by-step guide to understand what failed.
## Check command-line output
Check your deploy command output to see whether it succeeded or not. If your
deployment process was terminated, you should see an error message
in the output that describes the reason why the deployment failed.
deployment process was terminated, you should see an error message in the output
that describes the reason why the deployment failed.
This kind of failure is most likely due to either a misconfigured manifest or
wrong command. For example, the following output says that you must configure
@ -38,14 +38,14 @@ kubectl get route <route-name> --output yaml
The `conditions` in `status` provide the reason if there is any failure. For
details, see Knative
[Error Conditions and Reporting](https://github.com/knative/serving/blob/master/docs/spec/errors.md)(currently some of them
are not implemented yet).
[Error Conditions and Reporting](https://github.com/knative/serving/blob/master/docs/spec/errors.md)(currently
some of them are not implemented yet).
## Check Revision status
If you configure your `Route` with `Configuration`, run the following
command to get the name of the `Revision` created for you deployment
(look up the configuration name in the `Route` .yaml file):
If you configure your `Route` with `Configuration`, run the following command to
get the name of the `Revision` created for you deployment (look up the
configuration name in the `Route` .yaml file):
```shell
kubectl get configuration <configuration-name> --output jsonpath="{.status.latestCreatedRevisionName}"
@ -77,8 +77,8 @@ If you see this condition, check the following to continue debugging:
If you see other conditions, to debug further:
- Look up the meaning of the conditions in Knative
[Error Conditions and Reporting](https://github.com/knative/serving/blob/master/docs/spec/errors.md). Note: some of them
are not implemented yet. An alternative is to
[Error Conditions and Reporting](https://github.com/knative/serving/blob/master/docs/spec/errors.md).
Note: some of them are not implemented yet. An alternative is to
[check Pod status](#check-pod-status).
- If you are using `BUILD` to deploy and the `BuildComplete` condition is not
`True`, [check BUILD status](#check-build-status).
@ -107,7 +107,8 @@ kubectl get pod <pod-name> --output yaml
```
If you see issues with "user-container" container in the containerStatuses, check your application logs as described below.
If you see issues with "user-container" container in the containerStatuses,
check your application logs as described below.
## Check Build status
@ -119,14 +120,16 @@ kubectl get build $(kubectl get revision <revision-name> --output jsonpath="{.sp
```
If there is any failure, the `conditions` in `status` provide the reason. To
access build logs, first execute `kubectl proxy` and then open [Kibana UI](http://localhost:8001/api/v1/namespaces/knative-monitoring/services/kibana-logging/proxy/app/kibana).
Use any of the following filters within Kibana UI to
see build logs. _(See [telemetry guide](../telemetry.md) for more information on
logging and monitoring features of Knative Serving.)_
access build logs, first execute `kubectl proxy` and then open
[Kibana UI](http://localhost:8001/api/v1/namespaces/knative-monitoring/services/kibana-logging/proxy/app/kibana).
Use any of the following filters within Kibana UI to see build logs. _(See
[telemetry guide](../telemetry.md) for more information on logging and
monitoring features of Knative Serving.)_
- All build logs: `_exists_:"kubernetes.labels.build-name"`
- Build logs for a specific build: `kubernetes.labels.build-name:"<BUILD NAME>"`
- Build logs for a specific build and step: `kubernetes.labels.build-name:"<BUILD NAME>" AND kubernetes.container_name:"build-step-<BUILD STEP NAME>"`
- Build logs for a specific build and step:
`kubernetes.labels.build-name:"<BUILD NAME>" AND kubernetes.container_name:"build-step-<BUILD STEP NAME>"`
---

View File

@ -1,19 +1,20 @@
# Investigating Performance Issues
You deployed your application or function to Knative Serving but its performance
doesn't meet your expectations. Knative Serving provides various dashboards and tools to
help investigate such issues. This document reviews these dashboards and tools.
doesn't meet your expectations. Knative Serving provides various dashboards and
tools to help investigate such issues. This document reviews these dashboards
and tools.
## Request metrics
Start your investigation with the "Revision - HTTP Requests" dashboard.
1. To open this dashboard, open the Grafana UI as described in
[Accessing Metrics](./accessing-metrics.md) and navigate to
"Knative Serving - Revision HTTP Requests".
[Accessing Metrics](./accessing-metrics.md) and navigate to "Knative
Serving - Revision HTTP Requests".
1. Select your configuration and revision from the menu on top left of the page.
You will see a page like this:
1. Select your configuration and revision from the menu on top left of the
page. You will see a page like this:
![Knative Serving - Revision HTTP Requests](./images/request_dash1.png)
@ -25,18 +26,21 @@ Start your investigation with the "Revision - HTTP Requests" dashboard.
- Response time per HTTP response code
- Request and response sizes
This dashboard can show traffic volume or latency discrepancies between different revisions.
If, for example, a revision's latency is higher than others revisions, then
focus your investigation on the offending revision through the rest of this guide.
This dashboard can show traffic volume or latency discrepancies between
different revisions. If, for example, a revision's latency is higher than others
revisions, then focus your investigation on the offending revision through the
rest of this guide.
## Request traces
Next, look into request traces to find out where the time is spent for a single request.
Next, look into request traces to find out where the time is spent for a single
request.
1. To access request traces, open the Zipkin UI as described in [Accessing Traces](./accessing-traces.md).
1. To access request traces, open the Zipkin UI as described in
[Accessing Traces](./accessing-traces.md).
1. Select your revision from the "Service Name" dropdown, and then click the "Find Traces" button. You'll
get a view that looks like this:
1. Select your revision from the "Service Name" dropdown, and then click the
"Find Traces" button. You'll get a view that looks like this:
![Zipkin - Trace Overview](./images/zipkin1.png)
@ -49,17 +53,18 @@ Next, look into request traces to find out where the time is spent for a single
![Zipkin - Span Details](./images/zipkin2.png)
This view shows detailed information about the specific span, such as the
micro service or external URL that was called. In this example, the call to a
Grafana URL is taking the most time. Focus your investigation on why that URL
is taking that long.
micro service or external URL that was called. In this example, the call to
a Grafana URL is taking the most time. Focus your investigation on why that
URL is taking that long.
## Autoscaler metrics
If request metrics or traces do not show any obvious hot spots, or if they show
that most of the time is spent in your own code, look at autoscaler metrics next.
that most of the time is spent in your own code, look at autoscaler metrics
next.
1. To open the autoscaler dashboard, open Grafana UI and select
"Knative Serving - Autoscaler" dashboard, which looks like this:
1. To open the autoscaler dashboard, open Grafana UI and select "Knative
Serving - Autoscaler" dashboard, which looks like this:
![Knative Serving - Autoscaler](./images/autoscaler_dash1.png)
@ -68,37 +73,42 @@ This view shows 4 key metrics from the Knative Serving autoscaler:
- Actual pod count: # of pods that are running a given revision
- Desired pod count: # of pods that autoscaler thinks should serve the revision
- Requested pod count: # of pods that the autoscaler requested from Kubernetes
- Panic mode: If 0, the autoscaler is operating in [stable mode](https://github.com/knative/serving/blob/master/docs/scaling/DEVELOPMENT.md#stable-mode).
If 1, the autoscaler is operating in [panic mode](https://github.com/knative/serving/blob/master/docs/scaling/DEVELOPMENT.md#panic-mode).
- Panic mode: If 0, the autoscaler is operating in
[stable mode](https://github.com/knative/serving/blob/master/docs/scaling/DEVELOPMENT.md#stable-mode).
If 1, the autoscaler is operating in
[panic mode](https://github.com/knative/serving/blob/master/docs/scaling/DEVELOPMENT.md#panic-mode).
A large gap between the actual pod count and the requested pod count
indicates that the Kubernetes cluster is unable to keep up allocating new
resources fast enough, or that the Kubernetes cluster is out of requested
resources.
A large gap between the actual pod count and the requested pod count indicates
that the Kubernetes cluster is unable to keep up allocating new resources fast
enough, or that the Kubernetes cluster is out of requested resources.
A large gap between the requested pod count and the desired pod count indicates that
the Knative Serving autoscaler is unable to communicate with the Kubernetes master to make
the request.
A large gap between the requested pod count and the desired pod count indicates
that the Knative Serving autoscaler is unable to communicate with the Kubernetes
master to make the request.
In the preceding example, the autoscaler requested 18 pods to optimally serve the traffic
but was only granted 8 pods because the cluster is out of resources.
In the preceding example, the autoscaler requested 18 pods to optimally serve
the traffic but was only granted 8 pods because the cluster is out of resources.
## CPU and memory usage
You can access total CPU and memory usage of your revision from
the "Knative Serving - Revision CPU and Memory Usage" dashboard, which looks like this:
You can access total CPU and memory usage of your revision from the "Knative
Serving - Revision CPU and Memory Usage" dashboard, which looks like this:
![Knative Serving - Revision CPU and Memory Usage](./images/cpu_dash1.png)
The first chart shows rate of the CPU usage across all pods serving the revision.
The second chart shows total memory consumed across all pods serving the revision.
Both of these metrics are further divided into per container usage.
The first chart shows rate of the CPU usage across all pods serving the
revision. The second chart shows total memory consumed across all pods serving
the revision. Both of these metrics are further divided into per container
usage.
- user-container: This container runs the user code (application, function, or container).
- user-container: This container runs the user code (application, function, or
container).
- [istio-proxy](https://github.com/istio/proxy): Sidecar container to form an
[Istio](https://istio.io/docs/concepts/what-is-istio/overview.html) mesh.
- queue-proxy: Knative Serving owned sidecar container to enforce request concurrency limits.
- autoscaler: Knative Serving owned sidecar container to provide autoscaling for the revision.
- queue-proxy: Knative Serving owned sidecar container to enforce request
concurrency limits.
- autoscaler: Knative Serving owned sidecar container to provide autoscaling for
the revision.
- fluentd-proxy: Sidecar container to collect logs from /var/log.
## Profiling

View File

@ -39,9 +39,10 @@ It additionally adds one more plugin -
which allows sending logs to Stackdriver.
Operators can build this image and push it to a container registry which their
Kubernetes cluster has access to. See [Setting Up A Logging Plugin](/serving/setting-up-a-logging-plugin.md)
for details. **NOTE**: Operators need to add credentials
file the stackdriver agent needs to the docker image if their Knative Serving is
not built on a GCP based cluster or they want to send logs to another GCP
project. See [here](https://cloud.google.com/logging/docs/agent/authorization)
for more information.
Kubernetes cluster has access to. See
[Setting Up A Logging Plugin](/serving/setting-up-a-logging-plugin.md) for
details. **NOTE**: Operators need to add credentials file the stackdriver agent
needs to the docker image if their Knative Serving is not built on a GCP based
cluster or they want to send logs to another GCP project. See
[here](https://cloud.google.com/logging/docs/agent/authorization) for more
information.

View File

@ -1,16 +1,17 @@
# Assigning a static IP address for Knative on Kubernetes Engine
If you are running Knative on Google Kubernetes Engine and want to use a
[custom domain](./using-a-custom-domain.md) with your apps, you need to configure a
static IP address to ensure that your custom domain mapping doesn't break.
[custom domain](./using-a-custom-domain.md) with your apps, you need to
configure a static IP address to ensure that your custom domain mapping doesn't
break.
Knative uses the shared `knative-shared-gateway` Gateway under the
`knative-serving` namespace to serve all incoming traffic within the
Knative service mesh. The IP address to access the gateway is the
external IP address of the "knative-ingressgateway" service under the
`istio-system` namespace. Therefore, in order to set a static IP for the
Knative shared gateway `knative-shared-gateway`, you must to set the
external IP address of the `knative-ingressgateway` service to a static IP.
`knative-serving` namespace to serve all incoming traffic within the Knative
service mesh. The IP address to access the gateway is the external IP address of
the "knative-ingressgateway" service under the `istio-system` namespace.
Therefore, in order to set a static IP for the Knative shared gateway
`knative-shared-gateway`, you must to set the external IP address of the
`knative-ingressgateway` service to a static IP.
## Step 1: Reserve a static IP address
@ -34,16 +35,20 @@ Using the Google Cloud SDK:
gcloud beta compute addresses list
```
In the [GCP console](https://console.cloud.google.com/networking/addresses/add?_ga=2.97521754.-475089713.1523374982):
In the
[GCP console](https://console.cloud.google.com/networking/addresses/add?_ga=2.97521754.-475089713.1523374982):
1. Enter a name for your static address.
1. For **IP version**, choose IPv4.
1. For **Type**, choose **Regional**.
1. From the **Region** drop-down, choose the region where your Knative cluster is running.
1. From the **Region** drop-down, choose the region where your Knative cluster
is running.
For example, select the `us-west1` region if you deployed your cluster to the `us-west1-c` zone.
For example, select the `us-west1` region if you deployed your cluster to
the `us-west1-c` zone.
1. Leave the **Attached To** field set to `None` since we'll attach the IP address through a config-map later.
1. Leave the **Attached To** field set to `None` since we'll attach the IP
address through a config-map later.
1. Copy the **External Address** of the static IP you created.
## Step 2: Update the external IP of the `knative-ingressgateway` service
@ -57,13 +62,15 @@ kubectl patch svc knative-ingressgateway --namespace istio-system --patch '{"spe
## Step 3: Verify the static IP address of `knative-ingressgateway` service
Run the following command to ensure that the external IP of the "knative-ingressgateway" service has been updated:
Run the following command to ensure that the external IP of the
"knative-ingressgateway" service has been updated:
```shell
kubectl get svc knative-ingressgateway --namespace istio-system
```
The output should show the assigned static IP address under the EXTERNAL-IP column:
The output should show the assigned static IP address under the EXTERNAL-IP
column:
```
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE

View File

@ -2,15 +2,14 @@
Knative Serving offers two different monitoring setups:
[Elasticsearch, Kibana, Prometheus and Grafana](#elasticsearch-kibana-prometheus--grafana-setup)
or
[Stackdriver, Prometheus and Grafana](#stackdriver-prometheus--grafana-setup)
or [Stackdriver, Prometheus and Grafana](#stackdriver-prometheus--grafana-setup)
You can install only one of these two setups and side-by-side installation of
these two are not supported.
## Before you begin
The following instructions assume that you cloned the Knative Serving repository.
To clone the repository, run the following commands:
The following instructions assume that you cloned the Knative Serving
repository. To clone the repository, run the following commands:
```shell
git clone https://github.com/knative/serving knative-serving
@ -21,22 +20,24 @@ git checkout v0.2.1
## Elasticsearch, Kibana, Prometheus & Grafana Setup
If you installed the
[full Knative release](../install/README.md#installing-knative),
the monitoring component is already installed and you can skip down to the
[full Knative release](../install/README.md#installing-knative), the monitoring
component is already installed and you can skip down to the
[Create Elasticsearch Indices](#create-elasticsearch-indices) section.
To configure and setup monitoring:
1. Choose a container image that meets the
[Fluentd image requirements](fluentd/README.md#requirements). For example, you can use the
public image [k8s.gcr.io/fluentd-elasticsearch:v2.0.4](https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/fluentd-elasticsearch/fluentd-es-image).
[Fluentd image requirements](fluentd/README.md#requirements). For example,
you can use the public image
[k8s.gcr.io/fluentd-elasticsearch:v2.0.4](https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/fluentd-elasticsearch/fluentd-es-image).
Or you can create a custom one and upload the image to a container registry
which your cluster has read access to.
1. Follow the instructions in
["Setting up a logging plugin"](setting-up-a-logging-plugin.md#Configuring)
to configure the Elasticsearch components settings.
1. Install Knative monitoring components by running the following command from the root directory of
[knative/serving](https://github.com/knative/serving) repository:
1. Install Knative monitoring components by running the following command from
the root directory of [knative/serving](https://github.com/knative/serving)
repository:
```shell
kubectl apply --recursive --filename config/monitoring/100-common \
@ -73,7 +74,8 @@ To configure and setup monitoring:
CTRL+C to exit watch.
1. Verify that each of your nodes have the `beta.kubernetes.io/fluentd-ds-ready=true` label:
1. Verify that each of your nodes have the
`beta.kubernetes.io/fluentd-ds-ready=true` label:
```shell
kubectl get nodes --selector beta.kubernetes.io/fluentd-ds-ready=true
@ -81,13 +83,15 @@ To configure and setup monitoring:
1. If you receive the `No Resources Found` response:
1. Run the following command to ensure that the Fluentd DaemonSet runs on all your nodes:
1. Run the following command to ensure that the Fluentd DaemonSet runs on all
your nodes:
```shell
kubectl label nodes --all beta.kubernetes.io/fluentd-ds-ready="true"
```
1. Run the following command to ensure that the `fluentd-ds` daemonset is ready on at least one node:
1. Run the following command to ensure that the `fluentd-ds` daemonset is
ready on at least one node:
```shell
kubectl get daemonset fluentd-ds --namespace knative-monitoring
@ -95,12 +99,13 @@ To configure and setup monitoring:
### Create Elasticsearch Indices
To visualize logs with Kibana, you need to set which Elasticsearch indices to explore. We will
create two indices in Elasticsearch using `Logstash` for application logs and `Zipkin`
for request traces.
To visualize logs with Kibana, you need to set which Elasticsearch indices to
explore. We will create two indices in Elasticsearch using `Logstash` for
application logs and `Zipkin` for request traces.
- To open the Kibana UI (the visualization tool for [Elasticsearch](https://info.elastic.co)),
you must start a local proxy by running the following command:
- To open the Kibana UI (the visualization tool for
[Elasticsearch](https://info.elastic.co)), you must start a local proxy by
running the following command:
```shell
kubectl proxy
@ -125,22 +130,25 @@ for request traces.
## Stackdriver, Prometheus & Grafana Setup
You must configure and build your own Fluentd image if either of the following are true:
You must configure and build your own Fluentd image if either of the following
are true:
- Your Knative Serving component is not hosted on a Google Cloud Platform (GCP) based cluster.
- Your Knative Serving component is not hosted on a Google Cloud Platform (GCP)
based cluster.
- You want to send logs to another GCP project.
To configure and setup monitoring:
1. Choose a container image that meets the
[Fluentd image requirements](fluentd/README.md#requirements). For example, you can use a
public image. Or you can create a custom one and upload the image to a
container registry which your cluster has read access to.
[Fluentd image requirements](fluentd/README.md#requirements). For example,
you can use a public image. Or you can create a custom one and upload the
image to a container registry which your cluster has read access to.
1. Follow the instructions in
["Setting up a logging plugin"](setting-up-a-logging-plugin.md#Configuring)
to configure the stackdriver components settings.
1. Install Knative monitoring components by running the following command from the root directory of
[knative/serving](https://github.com/knative/serving) repository:
1. Install Knative monitoring components by running the following command from
the root directory of [knative/serving](https://github.com/knative/serving)
repository:
```shell
kubectl apply --recursive --filename config/monitoring/100-common \
@ -173,7 +181,8 @@ To configure and setup monitoring:
CTRL+C to exit watch.
1. Verify that each of your nodes have the `beta.kubernetes.io/fluentd-ds-ready=true` label:
1. Verify that each of your nodes have the
`beta.kubernetes.io/fluentd-ds-ready=true` label:
```shell
kubectl get nodes --selector beta.kubernetes.io/fluentd-ds-ready=true
@ -181,13 +190,15 @@ To configure and setup monitoring:
1. If you receive the `No Resources Found` response:
1. Run the following command to ensure that the Fluentd DaemonSet runs on all your nodes:
1. Run the following command to ensure that the Fluentd DaemonSet runs on all
your nodes:
```shell
kubectl label nodes --all beta.kubernetes.io/fluentd-ds-ready="true"
```
1. Run the following command to ensure that the `fluentd-ds` DaemonSet is ready on at least one node:
1. Run the following command to ensure that the `fluentd-ds` DaemonSet is
ready on at least one node:
```shell
kubectl get daemonset fluentd-ds --namespace knative-monitoring
@@ -1,18 +1,19 @@
# Configuring outbound network access
This guide walks you through enabling outbound network access for a Knative app.
This guide walks you through enabling outbound network access for a Knative
app.
Knative blocks all outbound traffic by default. To enable outbound access (when you want to connect
to the Cloud Storage API, for example), you need to change the scope of the proxy IP range by editing
the `config-network` map.
Knative blocks all outbound traffic by default. To enable outbound access (when
you want to connect to the Cloud Storage API, for example), you need to change
the scope of the proxy IP range by editing the `config-network` map.
## Determining the IP scope of your cluster
To set the correct scope, you need to determine the IP ranges of your cluster. The scope varies
depending on your platform:
To set the correct scope, you need to determine the IP ranges of your cluster.
The scope varies depending on your platform:
- For Google Kubernetes Engine (GKE) run the following command to determine the scope. Make sure
to replace the variables or export these values first.
- For Google Kubernetes Engine (GKE) run the following command to determine the
scope. Make sure to replace the variables or export these values first.
```shell
gcloud container clusters describe ${CLUSTER_ID} \
--zone=${GCP_ZONE} | grep -e clusterIpv4Cidr -e servicesIpv4Cidr
@@ -21,14 +22,16 @@ depending on your platform:
```shell
cat cluster/config.yaml | grep service_cluster_ip_range
```
- For IBM Cloud Kubernetes Service use `172.30.0.0/16,172.20.0.0/16,10.10.10.0/24`
- For IBM Cloud Kubernetes Service use
`172.30.0.0/16,172.20.0.0/16,10.10.10.0/24`
- For Azure Container Service (ACS) use `10.244.0.0/16,10.240.0.0/16`
- For Minikube use `10.0.0.1/24`
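Whichever platform you are on, it can help to capture the result in a single comma-separated variable for the next step. A minimal sketch (the CIDRs below are placeholders; substitute the ranges you determined above):

```shell
# Placeholder CIDRs; use the ranges reported for your own cluster.
CLUSTER_CIDR="10.16.0.0/14"
SERVICES_CIDR="10.19.240.0/20"
IP_RANGES="${CLUSTER_CIDR},${SERVICES_CIDR}"
echo "${IP_RANGES}"
```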
## Setting the IP scope
The `istio.sidecar.includeOutboundIPRanges` parameter in the `config-network` map specifies
the IP ranges that the Istio sidecar intercepts. To allow outbound access, replace the default parameter
The `istio.sidecar.includeOutboundIPRanges` parameter in the `config-network`
map specifies the IP ranges that the Istio sidecar intercepts. To allow outbound
access, replace the default parameter
value with the IP ranges of your cluster.
Run the following command to edit the `config-network` map:
@@ -37,8 +40,9 @@ Run the following command to edit the `config-network` map:
kubectl edit configmap config-network --namespace knative-serving
```
Then, use an editor of your choice to change the `istio.sidecar.includeOutboundIPRanges` parameter value
from `*` to the IP range you need. Separate multiple IP entries with a comma. For example:
Then, use an editor of your choice to change the
`istio.sidecar.includeOutboundIPRanges` parameter value from `*` to the IP range
you need. Separate multiple IP entries with a comma. For example:
```
# Please edit the object below. Lines beginning with a '#' will be ignored,
@@ -54,24 +58,25 @@ metadata:
```
By default, the `istio.sidecar.includeOutboundIPRanges` parameter is set to `*`,
which means that Istio intercepts all traffic within the cluster as well as all traffic that is going
outside the cluster. Istio blocks all traffic that is going outside the cluster unless
you create the necessary egress rules.
which means that Istio intercepts all traffic within the cluster as well as all
traffic that is going outside the cluster. Istio blocks all traffic that is
going outside the cluster unless you create the necessary egress rules.
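With Istio 1.0-era APIs, such an egress rule takes roughly the form of a `ServiceEntry` (a sketch only; the host and port here are illustrative, not part of this guide's configuration):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: googleapis-ext
spec:
  hosts:
    - "*.googleapis.com"
  ports:
    - number: 443
      name: https
      protocol: HTTPS
  location: MESH_EXTERNAL
```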
When you set the parameter to a valid set of IP address ranges, Istio will no longer intercept
traffic that is going to the IP addresses outside the provided ranges, and you don't need to specify
any egress rules.
When you set the parameter to a valid set of IP address ranges, Istio will no
longer intercept traffic that is going to the IP addresses outside the provided
ranges, and you don't need to specify any egress rules.
If you omit the parameter or set it to `''`, Knative uses the value of the `global.proxy.includeIPRanges`
parameter that is provided at Istio deployment time. In the default Knative Serving
deployment, `global.proxy.includeIPRanges` value is set to `*`.
If you omit the parameter or set it to `''`, Knative uses the value of the
`global.proxy.includeIPRanges` parameter that is provided at Istio deployment
time. In the default Knative Serving deployment, `global.proxy.includeIPRanges`
value is set to `*`.
If an invalid value is passed, `''` is used instead.
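As an alternative to `kubectl edit`, the same value can be set non-interactively with a merge patch. A sketch, assuming the CIDRs are placeholders for your cluster's ranges:

```shell
# Placeholder ranges; substitute the CIDRs you determined for your cluster.
PATCH='{"data":{"istio.sidecar.includeOutboundIPRanges":"10.16.0.0/14,10.19.240.0/20"}}'
# Apply only when kubectl is available and pointed at your cluster.
if command -v kubectl >/dev/null 2>&1; then
  kubectl patch configmap config-network --namespace knative-serving \
    --type merge --patch "${PATCH}" || true
fi
```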
If you are still having trouble making off-cluster calls, you can verify that the policy was
applied to the pod running your service by checking the metadata on the pod.
Verify that the `traffic.sidecar.istio.io/includeOutboundIPRanges` annotation matches the
expected value from the config-map.
If you are still having trouble making off-cluster calls, you can verify that
the policy was applied to the pod running your service by checking the metadata
on the pod. Verify that the `traffic.sidecar.istio.io/includeOutboundIPRanges`
annotation matches the expected value from the config-map.
```shell
$ kubectl get pod ${POD_NAME} --output yaml
@@ -1,8 +1,9 @@
# Knative serving sample applications
This directory contains sample applications, developed on Knative, to illustrate
different use-cases and resources. See [Knative serving](https://github.com/knative/docs/tree/master/serving)
to learn more about Knative Serving resources.
different use-cases and resources. See
[Knative serving](https://github.com/knative/docs/tree/master/serving) to learn
more about Knative Serving resources.
| Name | Description | Languages |
| -------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | :---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: |
@@ -4,9 +4,14 @@ A demonstration of the autoscaling capabilities of a Knative Serving Revision.
## Prerequisites
1. A Kubernetes cluster with [Knative Serving](https://github.com/knative/docs/blob/master/install/README.md) installed.
1. A [metrics installation](https://github.com/knative/docs/blob/master/serving/installing-logging-metrics-traces.md) for viewing scaling graphs (optional).
1. Install [Docker](https://docs.docker.com/get-started/#prepare-your-docker-environment).
1. A Kubernetes cluster with
[Knative Serving](https://github.com/knative/docs/blob/master/install/README.md)
installed.
1. A
[metrics installation](https://github.com/knative/docs/blob/master/serving/installing-logging-metrics-traces.md)
for viewing scaling graphs (optional).
1. Install
[Docker](https://docs.docker.com/get-started/#prepare-your-docker-environment).
1. Check out the code:
```
@@ -29,8 +34,8 @@ Build the application container and publish it to a container registry:
export REPO="gcr.io/<YOUR_PROJECT_ID>"
```
- This example shows how to use Google Container Registry (GCR). You will need a
Google Cloud Project and to enable the
- This example shows how to use Google Container Registry (GCR). You will
need a Google Cloud Project and to enable the
[Google Container Registry API](https://console.cloud.google.com/apis/library/containerregistry.googleapis.com).
1. Use Docker to build your application container:
@@ -109,9 +114,13 @@ Build the application container and publish it to a container registry:
### Algorithm
Knative Serving autoscaling is based on the average number of in-flight requests per pod (concurrency). The system has a default [target concurrency of 100.0](https://github.com/knative/serving/blob/3f00c39e289ed4bfb84019131651c2e4ea660ab5/config/config-autoscaler.yaml#L35).
Knative Serving autoscaling is based on the average number of in-flight requests
per pod (concurrency). The system has a default
[target concurrency of 100.0](https://github.com/knative/serving/blob/3f00c39e289ed4bfb84019131651c2e4ea660ab5/config/config-autoscaler.yaml#L35).
For example, if a Revision is receiving 350 requests per second, each of which takes about .5 seconds, Knative Serving will determine the Revision needs about 2 pods
For example, if a Revision is receiving 350 requests per second, each of which
takes about .5 seconds, Knative Serving will determine the Revision needs about
2 pods
```
350 * .5 = 175
@@ -121,7 +130,11 @@ ceil(1.75) = 2 pods
#### Tuning
By default Knative Serving does not limit concurrency in Revision containers. A limit can be set per-Configuration using the [`ContainerConcurrency`](https://github.com/knative/serving/blob/3f00c39e289ed4bfb84019131651c2e4ea660ab5/pkg/apis/serving/v1alpha1/revision_types.go#L149) field. The autoscaler will target a percentage of `ContainerConcurrency` instead of the default `100.0`.
By default Knative Serving does not limit concurrency in Revision containers. A
limit can be set per-Configuration using the
[`ContainerConcurrency`](https://github.com/knative/serving/blob/3f00c39e289ed4bfb84019131651c2e4ea660ab5/pkg/apis/serving/v1alpha1/revision_types.go#L149)
field. The autoscaler will target a percentage of `ContainerConcurrency` instead
of the default `100.0`.
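The sizing arithmetic above can be sketched in shell, using integer math to reproduce the ceiling step (the numbers are the example's, not values read from a live autoscaler):

```shell
RPS=350          # requests per second arriving at the Revision
LATENCY_MS=500   # ~0.5 seconds per request
TARGET=100       # default target concurrency per pod
IN_FLIGHT=$(( RPS * LATENCY_MS / 1000 ))       # 350 * 0.5 = 175 in-flight requests
PODS=$(( (IN_FLIGHT + TARGET - 1) / TARGET ))  # ceil(175 / 100) = 2
echo "${PODS} pods"
```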
### Dashboards
@@ -10,14 +10,17 @@ configuration.
You need:
- A Kubernetes cluster with [Knative installed](../../install/README.md).
- (Optional) [A custom domain configured](../../serving/using-a-custom-domain.md) for use with Knative.
- (Optional)
[A custom domain configured](../../serving/using-a-custom-domain.md) for use
with Knative.
## Deploying Revision 1 (Blue)
We'll be deploying an image of a sample application that displays the text
"App v1" on a blue background.
We'll be deploying an image of a sample application that displays the text "App
v1" on a blue background.
First, create a new file called `blue-green-demo-config.yaml` and copy this into it:
First, create a new file called `blue-green-demo-config.yaml` and copy this into
it:
```yaml
apiVersion: serving.knative.dev/v1alpha1
@@ -72,25 +75,25 @@ route "blue-green-demo" configured
```
You'll now be able to view the sample app at
http://blue-green-demo.default.YOUR_CUSTOM_DOMAIN.com (replace `YOUR_CUSTOM_DOMAIN`)
with the [custom domain](../../serving/using-a-custom-domain.md) you configured for
use with Knative.
http://blue-green-demo.default.YOUR_CUSTOM_DOMAIN.com (replace
`YOUR_CUSTOM_DOMAIN`) with the
[custom domain](../../serving/using-a-custom-domain.md) you configured for use
with Knative.
> Note: If you don't have a custom domain configured for use with Knative, you can interact
> with your app using cURL requests if you have the host URL and IP address:
> Note: If you don't have a custom domain configured for use with Knative, you
> can interact with your app using cURL requests if you have the host URL and IP
> address:
> `curl -H "Host: blue-green-demo.default.example.com" http://IP_ADDRESS`
> Knative creates the host URL by combining the name of your Route object,
> the namespace, and `example.com`, if you haven't configured a custom domain.
> For example, `[route-name].[namespace].example.com`.
> Knative creates the host URL by combining the name of your Route object, the namespace,
> and `example.com`, if you haven't configured a custom domain. For example, `[route-name].[namespace].example.com`.
> You can get the IP address by entering `kubectl get svc knative-ingressgateway --namespace istio-system`
> and copying the `EXTERNAL-IP` returned by that command.
> See [Interacting with your app](../../install/getting-started-knative-app.md#interacting-with-your-app)
> and copying the `EXTERNAL-IP` returned by that command. See [Interacting with your app](../../install/getting-started-knative-app.md#interacting-with-your-app)
> for more information.
## Deploying Revision 2 (Green)
Revision 2 of the sample application will display the text "App v2" on a green background.
To create the new revision, we'll edit our existing configuration in
Revision 2 of the sample application will display the text "App v2" on a green
background. To create the new revision, we'll edit our existing configuration in
`blue-green-demo-config.yaml` with an updated image and environment variables:
```yaml
@@ -152,11 +155,13 @@ route "blue-green-demo" configured
Revision 2 of the app is staged at this point. That means:
- No traffic will be routed to revision 2 at the main URL, http://blue-green-demo.default.YOUR_CUSTOM_DOMAIN.com
- Knative creates a new route named v2 for testing the newly deployed version at http://v2.blue-green-demo.default.YOUR_CUSTOM_DOMAIN.com
- No traffic will be routed to revision 2 at the main URL,
http://blue-green-demo.default.YOUR_CUSTOM_DOMAIN.com
- Knative creates a new route named v2 for testing the newly deployed version at
http://v2.blue-green-demo.default.YOUR_CUSTOM_DOMAIN.com
This allows you to validate that the new version of the app is behaving as expected before switching
any traffic over to it.
This allows you to validate that the new version of the app is behaving as
expected before switching any traffic over to it.
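At this staged point, the Route's `traffic` block takes roughly the following shape (a sketch; the generated revision names are illustrative, not copied from this sample's actual manifests):

```yaml
apiVersion: serving.knative.dev/v1alpha1
kind: Route
metadata:
  name: blue-green-demo
spec:
  traffic:
    - revisionName: blue-green-demo-00001
      percent: 100      # all traffic still goes to revision 1
    - revisionName: blue-green-demo-00002
      percent: 0
      name: v2          # exposed for testing at the v2. subdomain
```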
## Migrating traffic to the new revision
@@ -186,11 +191,13 @@ kubectl apply --filename blue-green-demo-route.yaml
route "blue-green-demo" configured
```
Refresh the original route (http://blue-green-demo.default.YOUR_CUSTOM_DOMAIN.com) a
few times to see that some traffic now goes to version 2 of the app.
Refresh the original route
(http://blue-green-demo.default.YOUR_CUSTOM_DOMAIN.com) a few times to see that
some traffic now goes to version 2 of the app.
> Note: This sample shows a 50/50 split to ensure you don't have to refresh too much,
> but it's recommended to start with 1-2% of traffic in a production environment.
> Note: This sample shows a 50/50 split to ensure you don't have to refresh too
> much, but it's recommended to start with 1-2% of traffic in a production
> environment.
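One way to eyeball the split from the command line is to sample the route repeatedly and tally which revision answered. The `printf` below simulates four sampled response bodies; in practice you would replace it with repeated `curl -H "Host: ..."` calls against the route:

```shell
# Simulated response bodies; substitute repeated curl calls against the route.
printf 'App v1\nApp v2\nApp v1\nApp v2\n' | sort | uniq -c
```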
## Rerouting all traffic to the new version
@@ -221,8 +228,9 @@ kubectl apply --filename blue-green-demo-route.yaml
route "blue-green-demo" configured
```
Refresh the original route (http://blue-green-demo.default.YOUR_CUSTOM_DOMAIN.com) a
few times to verify that no traffic is being routed to v1 of the app.
Refresh the original route
(http://blue-green-demo.default.YOUR_CUSTOM_DOMAIN.com) a few times to verify
that no traffic is being routed to v1 of the app.
We added a named route to v1 of the app, so you can now access it at
http://v1.blue-green-demo.default.YOUR_CUSTOM_DOMAIN.com.
@@ -3,20 +3,23 @@
This sample demonstrates:
- Pulling source code from a private Github repository using a deploy-key
- Pushing a Docker container to a private DockerHub repository using a username / password
- Pushing a Docker container to a private DockerHub repository using a username
/ password
- Deploying to Knative Serving using image pull secrets
## Before you begin
- [Install Knative Serving](../../../install/README.md)
- Create a local folder for this sample and download the files in this directory into it.
- Create a local folder for this sample and download the files in this directory
into it.
## Setup
### 1. Setting up the default service account
Knative Serving will run pods as the default service account in the namespace where
you created your resources. You can see its body by entering the following command:
Knative Serving will run pods as the default service account in the namespace
where you created your resources. You can see its body by entering the following
command:
```shell
$ kubectl get serviceaccount default --output yaml
@@ -32,7 +35,8 @@ secrets:
We are going to add to this an image pull Secret.
1. Create your image pull Secret with the following command, replacing values as necessary:
1. Create your image pull Secret with the following command, replacing values as
necessary:
```shell
kubectl create secret docker-registry dockerhub-pull-secret \
@@ -43,13 +47,15 @@ We are going to add to this an image pull Secret.
To learn more about Kubernetes pull Secrets, see
[Creating a Secret in the cluster that holds your authorization token](https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/#create-a-secret-in-the-cluster-that-holds-your-authorization-token).
2. Add the newly created `imagePullSecret` to your default service account by entering:
2. Add the newly created `imagePullSecret` to your default service account by
entering:
```shell
kubectl edit serviceaccount default
```
This will open the resource in your default text editor. Under `secrets:`, add:
This will open the resource in your default text editor. Under `secrets:`,
add:
```yaml
secrets:
@@ -61,17 +67,17 @@ We are going to add to this an image pull Secret.
### 2. Configuring the build
The objects in this section are all defined in `build-bot.yaml`, and the fields that
need to be changed say `REPLACE_ME`. Open the `build-bot.yaml` file and make the
necessary replacements.
The objects in this section are all defined in `build-bot.yaml`, and the fields
that need to be changed say `REPLACE_ME`. Open the `build-bot.yaml` file and
make the necessary replacements.
The following sections explain the different configurations in the `build-bot.yaml` file,
as well as the necessary changes for each section.
The following sections explain the different configurations in the
`build-bot.yaml` file, as well as the necessary changes for each section.
#### Setting up our Build service account
To separate our Build's credentials from our application's credentials, the
Build runs as its own service account:
To separate our Build's credentials from our application's credentials, the Build
runs as its own service account:
```yaml
apiVersion: v1
@@ -87,8 +93,8 @@ secrets:
You can set up a deploy key for a private Github repository following
[these](https://developer.github.com/v3/guides/managing-deploy-keys/)
instructions. The deploy key in the `build-bot.yaml` file in this folder is _real_;
you do not need to change it for the sample to work.
instructions. The deploy key in the `build-bot.yaml` file in this folder is
_real_; you do not need to change it for the sample to work.
```yaml
apiVersion: v1
@@ -112,7 +118,8 @@ data:
#### Creating a DockerHub push credential
Create a new Secret for your DockerHub credentials. Replace the necessary values:
Create a new Secret for your DockerHub credentials. Replace the necessary
values:
```yaml
apiVersion: v1
@@ -129,7 +136,8 @@ stringData:
#### Creating the build bot
When finished with the replacements, create the build bot by entering the following command:
When finished with the replacements, create the build bot by entering the
following command:
```shell
kubectl create --filename build-bot.yaml
@@ -145,8 +153,8 @@ kubectl create --filename build-bot.yaml
kubectl apply --filename https://raw.githubusercontent.com/knative/build-templates/master/kaniko/kaniko.yaml
```
1. Open `manifest.yaml` and substitute your private DockerHub repository name for
`REPLACE_ME`.
1. Open `manifest.yaml` and substitute your private DockerHub repository name
for `REPLACE_ME`.
## Deploying your application
@@ -156,8 +164,8 @@ At this point, you're ready to deploy your application:
kubectl create --filename manifest.yaml
```
To make sure everything works, capture the host URL and the IP of the ingress endpoint
in environment variables:
To make sure everything works, capture the host URL and the IP of the ingress
endpoint in environment variables:
```shell
# Put the Host URL into an environment variable.
@@ -171,9 +179,9 @@ export SERVICE_IP=$(kubectl get svc knative-ingressgateway --namespace istio-sys
--output jsonpath="{.status.loadBalancer.ingress[*].ip}")
```
> Note: If your cluster is running outside a cloud provider (for example, on Minikube),
> your services will never get an external IP address. In that case, use the Istio
> `hostIP` and `nodePort` as the service IP:
> Note: If your cluster is running outside a cloud provider (for example, on
> Minikube), your services will never get an external IP address. In that case,
> use the Istio `hostIP` and `nodePort` as the service IP:
```shell
export SERVICE_IP=$(kubectl get po --selector knative=ingressgateway --namespace istio-system \
@@ -1,9 +1,11 @@
# Buildpack Sample App
A sample app that demonstrates using [Cloud Foundry](https://www.cloudfoundry.org/)
buildpacks on Knative Serving, using the [packs Docker images](https://github.com/sclevine/packs).
A sample app that demonstrates using
[Cloud Foundry](https://www.cloudfoundry.org/) buildpacks on Knative Serving,
using the [packs Docker images](https://github.com/sclevine/packs).
This deploys the [.NET Core Hello World](https://github.com/cloudfoundry-samples/dotnet-core-hello-world)
This deploys the
[.NET Core Hello World](https://github.com/cloudfoundry-samples/dotnet-core-hello-world)
sample app for Cloud Foundry.
## Prerequisites
@@ -12,17 +14,17 @@ sample app for Cloud Foundry.
## Running
This sample uses the [Buildpack build
template](https://github.com/knative/build-templates/blob/master/buildpack/buildpack.yaml)
in the [build-templates](https://github.com/knative/build-templates/) repo.
Save a copy of `buildpack.yaml`, then install it:
This sample uses the
[Buildpack build template](https://github.com/knative/build-templates/blob/master/buildpack/buildpack.yaml)
in the [build-templates](https://github.com/knative/build-templates/) repo. Save
a copy of `buildpack.yaml`, then install it:
```shell
kubectl apply --filename https://raw.githubusercontent.com/knative/build-templates/master/buildpack/buildpack.yaml
```
Then you can deploy this to Knative Serving from the root directory
by entering the following commands:
Then you can deploy this to Knative Serving from the root directory by entering
the following commands:
```shell
# Replace <your-project-here> with your own registry
@@ -53,7 +55,8 @@ items:
Once the `BuildComplete` status is `True`, resource creation begins.
To access this service using `curl`, we first need to determine its ingress address:
To access this service using `curl`, we first need to determine its ingress
address:
```shell
$ watch kubectl get svc knative-ingressgateway --namespace istio-system
@@ -61,8 +64,9 @@ NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S)
knative-ingressgateway LoadBalancer 10.23.247.74 35.203.155.229 80:32380/TCP,443:32390/TCP,32400:32400/TCP 2d
```
Once the `EXTERNAL-IP` gets assigned to the cluster, enter the following commands to capture
the host URL and the IP of the ingress endpoint in environment variables:
Once the `EXTERNAL-IP` gets assigned to the cluster, enter the following
commands to capture the host URL and the IP of the ingress endpoint in
environment variables:
```shell
# Put the Host name into an environment variable.
@@ -1,7 +1,8 @@
# Buildpack Sample Function
A sample function that demonstrates using [Cloud Foundry](https://www.cloudfoundry.org/)
buildpacks on Knative Serving, using the [packs Docker images](https://github.com/sclevine/packs).
A sample function that demonstrates using
[Cloud Foundry](https://www.cloudfoundry.org/) buildpacks on Knative Serving,
using the [packs Docker images](https://github.com/sclevine/packs).
This deploys the [riff square](https://github.com/scothis/riff-square-buildpack)
sample function for riff.
@@ -12,8 +13,8 @@ sample function for riff.
## Running
This sample uses the [Buildpack build
template](https://github.com/knative/build-templates/blob/master/buildpack/buildpack.yaml)
This sample uses the
[Buildpack build template](https://github.com/knative/build-templates/blob/master/buildpack/buildpack.yaml)
from the [build-templates](https://github.com/knative/build-templates/) repo.
Save a copy of `buildpack.yaml`, then install it:
@@ -52,7 +53,8 @@ items:
Once the `BuildComplete` status is `True`, resource creation begins.
To access this service using `curl`, we first need to determine its ingress address:
To access this service using `curl`, we first need to determine its ingress
address:
```shell
watch kubectl get svc knative-ingressgateway --namespace istio-system
@@ -60,8 +62,9 @@ NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S)
knative-ingressgateway LoadBalancer 10.23.247.74 35.203.155.229 80:32380/TCP,443:32390/TCP,32400:32400/TCP 2d
```
Once the `EXTERNAL-IP` gets assigned to the cluster, enter the following commands to capture
the host URL and the IP of the ingress endpoint in environment variables:
Once the `EXTERNAL-IP` gets assigned to the cluster, enter the following
commands to capture the host URL and the IP of the ingress endpoint in
environment variables:
```shell
# Put the Host name into an environment variable.
@@ -1,7 +1,7 @@
# GitHub Webhook - Go sample
A handler written in Go that demonstrates interacting with GitHub
through a webhook.
A handler written in Go that demonstrates interacting with GitHub through a
webhook.
## Prerequisites
@@ -15,8 +15,8 @@ through a webhook.
## Build the sample code
1. Use Docker to build a container image for this service. Replace
`username` with your Docker Hub username in the following commands.
1. Use Docker to build a container image for this service. Replace `username`
with your Docker Hub username in the following commands.
```shell
export DOCKER_HUB_USERNAME=username
@@ -32,7 +32,8 @@ docker push ${DOCKER_HUB_USERNAME}/gitwebhook-go
used to make API requests to GitHub, and a webhook secret, used to validate
incoming requests.
1. Follow the GitHub instructions to [create a personal access token](https://help.github.com/articles/creating-a-personal-access-token-for-the-command-line/).
1. Follow the GitHub instructions to
[create a personal access token](https://help.github.com/articles/creating-a-personal-access-token-for-the-command-line/).
Be sure to grant the `repo` permission to give `read/write` access to the
personal access token.
1. Base64 encode the access token:
@@ -42,7 +43,8 @@ docker push ${DOCKER_HUB_USERNAME}/gitwebhook-go
NDVkMzgyZDRhOWE5M2M0NTNmYjdjOGFkYzEwOTEyMWU3YzI5ZmEzY2E=
```
1. Copy the encoded access token into `github-secret.yaml` next to `personalAccessToken:`.
1. Copy the encoded access token into `github-secret.yaml` next to
`personalAccessToken:`.
1. Create a webhook secret value unique to this sample, base64 encode it, and
copy it into `github-secret.yaml` next to `webhookSecret:`:
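A quick way to produce such a value is to generate random bytes and base64 encode the result (a sketch; any sufficiently random string works, and `openssl` is assumed to be installed):

```shell
# Generate a random webhook secret and its base64 encoding for github-secret.yaml.
WEBHOOK_SECRET=$(openssl rand -hex 20)
WEBHOOK_SECRET_B64=$(printf '%s' "${WEBHOOK_SECRET}" | base64)
echo "plain:  ${WEBHOOK_SECRET}"
echo "base64: ${WEBHOOK_SECRET_B64}"
```

Keep the plain value for the GitHub webhook form later; only the base64 value goes into the Secret manifest.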
@@ -94,10 +96,11 @@ $ kubectl apply --filename service.yaml
service "gitwebhook" created
```
1. Finally, once the service is running, create the webhook from your GitHub repo
to the URL for this service. For this to work properly you will
need to [configure a custom domain](https://github.com/knative/docs/blob/master/serving/using-a-custom-domain.md)
and [assign a static IP address](https://github.com/knative/docs/blob/master/serving/gke-assigning-static-ip-address.md).
1. Finally, once the service is running, create the webhook from your GitHub
repo to the URL for this service. For this to work properly you will need to
[configure a custom domain](https://github.com/knative/docs/blob/master/serving/using-a-custom-domain.md)
and
[assign a static IP address](https://github.com/knative/docs/blob/master/serving/gke-assigning-static-ip-address.md).
1. Retrieve the hostname for this service, using the following command:
@@ -110,11 +113,13 @@ service "gitwebhook" created
1. Browse on GitHub to the repository where you want to create a webhook.
1. Click **Settings**, then **Webhooks**, then **Add webhook**.
1. Enter the **Payload URL** as `http://{DOMAIN}`, with the value of DOMAIN listed above.
1. Enter the **Payload URL** as `http://{DOMAIN}`, with the value of DOMAIN
listed above.
1. Set the **Content type** to `application/json`.
1. Enter the **Secret** value to be the same as the original base used for
`webhookSecret` above (the original value, not the base64 encoded value).
1. Select **Disable** under SSL Validation, unless you've [enabled SSL](https://github.com/knative/docs/blob/master/serving/using-an-ssl-cert.md).
1. Select **Disable** under SSL Validation, unless you've
[enabled SSL](https://github.com/knative/docs/blob/master/serving/using-an-ssl-cert.md).
1. Click **Add webhook** to create the webhook.
## Exploring
@@ -2,7 +2,8 @@
A simple gRPC server written in Go that you can use for testing.
This sample is dependent on [this issue](https://github.com/knative/serving/issues/1047) to be complete.
This sample is dependent on
[this issue](https://github.com/knative/serving/issues/1047) to be complete.
## Prerequisites
@@ -11,7 +12,8 @@ This sample is dependent on [this issue](https://github.com/knative/serving/issu
## Build and run the gRPC server
Build and run the gRPC server. This command will build the server and use `kubectl` to apply the configuration.
Build and run the gRPC server. This command will build the server and use
`kubectl` to apply the configuration.
```shell
REPO="gcr.io/<your-project-here>"
@@ -1,25 +1,25 @@
# Hello World - Clojure sample
A simple web app written in Clojure that you can use for testing.
It reads in an env variable `TARGET` and prints "Hello \${TARGET}!". If
TARGET is not specified, it will use "World" as the TARGET.
A simple web app written in Clojure that you can use for testing. It reads in an
env variable `TARGET` and prints "Hello \${TARGET}!". If `TARGET` is not
specified, it will use "World" as the `TARGET`.
## Prerequisites
- A Kubernetes cluster with Knative installed. Follow the
[installation instructions](https://github.com/knative/docs/blob/master/install/README.md) if you need
to create one.
[installation instructions](https://github.com/knative/docs/blob/master/install/README.md)
if you need to create one.
- [Docker](https://www.docker.com) installed and running on your local machine,
and a Docker Hub account configured (we'll use it for a container registry).
## Recreating the sample code
While you can clone all of the code from this directory, hello world
apps are generally more useful if you build them step-by-step. The
following instructions recreate the source files from this folder.
While you can clone all of the code from this directory, hello world apps are
generally more useful if you build them step-by-step. The following instructions
recreate the source files from this folder.
1. Create a new file named `src/helloworld/core.clj` and paste the following code. This
code creates a basic web server which listens on port 8080:
1. Create a new file named `src/helloworld/core.clj` and paste the following
code. This code creates a basic web server which listens on port 8080:
```clojure
(ns helloworld.core
@@ -41,8 +41,9 @@ following instructions recreate the source files from this folder.
8080)}))
```
1. In your project directory, create a file named `project.clj` and copy the code
below into it. This code defines the project's dependencies and entrypoint.
1. In your project directory, create a file named `project.clj` and copy the
code below into it. This code defines the project's dependencies and
entrypoint.
```clojure
(defproject helloworld "1.0.0-SNAPSHOT"
@@ -54,7 +55,8 @@ following instructions recreate the source files from this folder.
```
1. In your project directory, create a file named `Dockerfile` and copy the code
block below into it. For detailed instructions on dockerizing a Clojure app, see
block below into it. For detailed instructions on dockerizing a Clojure app,
see
[the clojure image documentation](https://github.com/docker-library/docs/tree/master/clojure).
```docker
@@ -82,7 +84,8 @@ following instructions recreate the source files from this folder.
```
1. Create a new file, `service.yaml` and copy the following service definition
into the file. Make sure to replace `{username}` with your Docker Hub username.
into the file. Make sure to replace `{username}` with your Docker Hub
username.
```yaml
apiVersion: serving.knative.dev/v1alpha1
@ -108,8 +111,8 @@ Once you have recreated the sample code files (or used the files in the sample
folder) you're ready to build and deploy the sample app.
1. Use Docker to build the sample code into a container. To build and push with
Docker Hub, run these commands replacing `{username}` with your
Docker Hub username:
Docker Hub, run these commands replacing `{username}` with your Docker Hub
username:
```shell
# Build the container on your local machine
@ -121,8 +124,8 @@ folder) you're ready to build and deploy the sample app.
1. After the build has completed and the container is pushed to Docker Hub, you
can deploy the app into your cluster. Ensure that the container image value
in `service.yaml` matches the container you built in
the previous step. Apply the configuration using `kubectl`:
in `service.yaml` matches the container you built in the previous step. Apply
the configuration using `kubectl`:
```shell
kubectl apply --filename service.yaml
@ -131,13 +134,14 @@ folder) you're ready to build and deploy the sample app.
1. Now that your service is created, Knative will perform the following steps:
- Create a new immutable revision for this version of the app.
- Network programming to create a route, ingress, service, and load balancer for your app.
- Network programming to create a route, ingress, service, and load balancer
for your app.
- Automatically scale your pods up and down (including to zero active pods).
1. To find the IP address for your service, use
`kubectl get svc knative-ingressgateway --namespace istio-system` to get the ingress IP for your
cluster. If your cluster is new, it may take some time for the service to get assigned
an external IP address.
`kubectl get svc knative-ingressgateway --namespace istio-system` to get the
ingress IP for your cluster. If your cluster is new, it may take some time for
the service to get assigned an external IP address.
```shell
kubectl get svc knative-ingressgateway --namespace istio-system
@ -1,14 +1,14 @@
# Hello World - .NET Core sample
A simple web app written in C# using .NET Core 2.1 that you can use for testing.
It reads in an env variable `TARGET` and prints "Hello \${TARGET}!". If
TARGET is not specified, it will use "World" as the TARGET.
It reads in an env variable `TARGET` and prints "Hello \${TARGET}!". If TARGET
is not specified, it will use "World" as the TARGET.
## Prerequisites
- A Kubernetes cluster with Knative installed. Follow the
[installation instructions](https://github.com/knative/docs/blob/master/install/README.md) if you need
to create one.
[installation instructions](https://github.com/knative/docs/blob/master/install/README.md)
if you need to create one.
- [Docker](https://www.docker.com) installed and running on your local machine,
and a Docker Hub account configured (we'll use it for a container registry).
- You have installed [.NET Core SDK 2.1](https://www.microsoft.com/net/core).
@ -51,8 +51,9 @@ recreate the source files from this folder.
```
1. In your project directory, create a file named `Dockerfile` and copy the code
block below into it. For detailed instructions on dockerizing a .NET core app,
see [dockerizing a .NET core app](https://docs.microsoft.com/en-us/dotnet/core/docker/docker-basics-dotnet-core#dockerize-the-net-core-application).
block below into it. For detailed instructions on dockerizing a .NET core
app, see
[dockerizing a .NET core app](https://docs.microsoft.com/en-us/dotnet/core/docker/docker-basics-dotnet-core#dockerize-the-net-core-application).
```docker
# Use Microsoft's official .NET image.
@ -80,7 +81,8 @@ recreate the source files from this folder.
```
1. Create a new file, `service.yaml` and copy the following service definition
into the file. Make sure to replace `{username}` with your Docker Hub username.
into the file. Make sure to replace `{username}` with your Docker Hub
username.
```yaml
apiVersion: serving.knative.dev/v1alpha1
@ -106,8 +108,8 @@ Once you have recreated the sample code files (or used the files in the sample
folder) you're ready to build and deploy the sample app.
1. Use Docker to build the sample code into a container. To build and push with
Docker Hub, run these commands replacing `{username}` with your
Docker Hub username:
Docker Hub, run these commands replacing `{username}` with your Docker Hub
username:
```shell
# Build the container on your local machine
@ -119,8 +121,8 @@ folder) you're ready to build and deploy the sample app.
1. After the build has completed and the container is pushed to Docker Hub, you
can deploy the app into your cluster. Ensure that the container image value
in `service.yaml` matches the container you built in
the previous step. Apply the configuration using `kubectl`:
in `service.yaml` matches the container you built in the previous step. Apply
the configuration using `kubectl`:
```shell
kubectl apply --filename service.yaml
@ -129,13 +131,14 @@ folder) you're ready to build and deploy the sample app.
1. Now that your service is created, Knative will perform the following steps:
- Create a new immutable revision for this version of the app.
- Network programming to create a route, ingress, service, and load balancer for your app.
- Network programming to create a route, ingress, service, and load balancer
for your app.
- Automatically scale your pods up and down (including to zero active pods).
1. To find the IP address for your service, use
`kubectl get svc knative-ingressgateway --namespace istio-system` to get the ingress IP for your
cluster. If your cluster is new, it may take some time for the service to get assigned
an external IP address.
`kubectl get svc knative-ingressgateway --namespace istio-system` to get the
ingress IP for your cluster. If your cluster is new, it may take some time for
the service to get assigned an external IP address.
```shell
kubectl get svc knative-ingressgateway --namespace istio-system
@ -2,7 +2,8 @@
A simple web app written in the [Dart](https://www.dartlang.org) programming language
that you can use for testing. It reads in the env variable `TARGET` and prints
`"Hello $TARGET"`. If `TARGET` is not specified, it will use `"World"` as `TARGET`.
`"Hello $TARGET"`. If `TARGET` is not specified, it will use `"World"` as
`TARGET`.
## Prerequisites
@ -11,14 +12,14 @@ that you can use for testing. It reads in the env variable `TARGET` and prints
if you need to create one.
- [Docker](https://www.docker.com) installed and running on your local machine,
and a Docker Hub account configured (we'll use it for a container registry).
- [dart-sdk](https://www.dartlang.org/tools/sdk#install) installed and configured
if you want to run the program locally.
- [dart-sdk](https://www.dartlang.org/tools/sdk#install) installed and
configured if you want to run the program locally.
## Recreating the sample code
While you can clone all of the code from this directory, it is useful to know how
to build a hello world Dart application step-by-step. This application can be
created using the following instructions.
While you can clone all of the code from this directory, it is useful to know
how to build a hello world Dart application step-by-step. This application can
be created using the following instructions.
1. Create a new directory and write `pubspec.yaml` as follows:
@ -81,7 +82,8 @@ created using the following instructions.
```
5. Create a new file, `service.yaml` and copy the following service definition
into the file. Make sure to replace `{username}` with your Docker Hub username.
into the file. Make sure to replace `{username}` with your Docker Hub
username.
```yaml
apiVersion: serving.knative.dev/v1alpha1
@ -107,8 +109,8 @@ Once you have recreated the sample code files (or used the files in the sample
folder) you're ready to build and deploy the sample app.
1. Use Docker to build the sample code into a container. To build and push with
Docker Hub, run these commands replacing `{username}` with your
Docker Hub username:
Docker Hub, run these commands replacing `{username}` with your Docker Hub
username:
```shell
# Build the container on your local machine
@ -120,8 +122,8 @@ folder) you're ready to build and deploy the sample app.
1. After the build has completed and the container is pushed to Docker Hub, you
can deploy the app into your cluster. Ensure that the container image value
in `service.yaml` matches the container you built in
the previous step. Apply the configuration using `kubectl`:
in `service.yaml` matches the container you built in the previous step. Apply
the configuration using `kubectl`:
```shell
kubectl apply --filename service.yaml
@ -130,13 +132,14 @@ folder) you're ready to build and deploy the sample app.
1. Now that your service is created, Knative will perform the following steps:
- Create a new immutable revision for this version of the app.
- Network programming to create a route, ingress, service, and load balancer for your app.
- Network programming to create a route, ingress, service, and load balancer
for your app.
- Automatically scale your pods up and down (including to zero active pods).
1. To find the IP address for your service, use
`kubectl get svc knative-ingressgateway --namespace istio-system` to get the ingress IP for your
cluster. If your cluster is new, it may take some time for the service to get assigned
an external IP address.
`kubectl get svc knative-ingressgateway --namespace istio-system` to get the
ingress IP for your cluster. If your cluster is new, it may take some time for
the service to get assigned an external IP address.
```shell
kubectl get svc knative-ingressgateway --namespace istio-system
@ -1,14 +1,15 @@
# Hello World - Elixir Sample
A simple web application written in [Elixir](https://elixir-lang.org/) using the
[Phoenix Framework](https://phoenixframework.org/).
The application prints all environment variables to the main page.
[Phoenix Framework](https://phoenixframework.org/). The application prints all
environment variables to the main page.
# Set up Elixir and Phoenix Locally
Following the [Phoenix Installation Guide](https://hexdocs.pm/phoenix/installation.html)
is the best way to get your computer set up for developing,
building, running, and packaging Elixir Web applications.
Following the
[Phoenix Installation Guide](https://hexdocs.pm/phoenix/installation.html) is
the best way to get your computer set up for developing, building, running, and
packaging Elixir Web applications.
# Running Locally
@ -30,11 +31,11 @@ mix phoenix.new helloelixir
When asked if you want to `Fetch and install dependencies? [Yn]`, select `y`
1. Follow the directions in the output to change into the new directory and start
your local server with `mix phoenix.server`
1. Follow the directions in the output to change into the new directory and
start your local server with `mix phoenix.server`
1. In the new directory, create a new Dockerfile for packaging
your application for deployment
1. In the new directory, create a new Dockerfile for packaging your application
for deployment
```docker
# Start from a base image for elixir
@ -91,9 +92,9 @@ When asked, if you want to `Fetch and install dependencies? [Yn]` select `y`
CMD ["/opt/app/bin/start_server", "foreground", "boot_var=/tmp"]
```
1. Create a new file, `service.yaml` and copy the following Service
definition into the file. Make sure to replace `{username}` with
your Docker Hub username.
1. Create a new file, `service.yaml` and copy the following Service definition
into the file. Make sure to replace `{username}` with your Docker Hub
username.
```yaml
apiVersion: serving.knative.dev/v1alpha1
@ -115,9 +116,9 @@ When asked, if you want to `Fetch and install dependencies? [Yn]` select `y`
# Building and deploying the sample
The sample in this directory is ready to build and deploy without changes.
You can deploy the sample as is, or use the version you created by following
the directions above.
The sample in this directory is ready to build and deploy without changes. You
can deploy the sample as is, or use the version you created by following the
directions above.
1. Generate a new `secret_key_base` in the `config/prod.secret.exs` file.
Phoenix applications use a secrets file on production deployments and, by
@ -130,9 +131,9 @@ directions above.
sed "s|SECRET+KEY+BASE|$SECRET_KEY_BASE|" config/prod.secret.exs.sample >config/prod.secret.exs
```
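The `SECRET_KEY_BASE` substituted above is just a long random string. If you
would rather generate one from a script than a shell pipeline, this sketch
produces a 64-character value (Python, illustrative; matching the length of
Phoenix's own generated secrets is an assumption here):

```python
import base64
import secrets

def gen_secret_key_base():
    # 48 cryptographically random bytes base64-encode to exactly
    # 64 characters with no padding: plenty of entropy for a secret.
    return base64.b64encode(secrets.token_bytes(48)).decode("ascii")

print(gen_secret_key_base())
```

Use the printed value wherever the `sed` command above expects
`$SECRET_KEY_BASE`.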
1. Use Docker to build the sample code into a container. To build and push
with Docker Hub, run these commands replacing `{username}` with your Docker
Hub username:
1. Use Docker to build the sample code into a container. To build and push with
Docker Hub, run these commands replacing `{username}` with your Docker Hub
username:
```shell
# Build the container on your local machine
@ -144,8 +145,8 @@ directions above.
1. After the build has completed and the container is pushed to Docker Hub, you
can deploy the app into your cluster. Ensure that the container image value
in `service.yaml` matches the container you built in
the previous step. Apply the configuration using `kubectl`:
in `service.yaml` matches the container you built in the previous step.
Apply the configuration using `kubectl`:
```shell
kubectl apply --filename service.yaml
@ -154,20 +155,25 @@ directions above.
1. Now that your service is created, Knative will perform the following steps:
- Create a new immutable revision for this version of the app.
- Network programming to create a route, ingress, service, and load balancer for your app.
- Network programming to create a route, ingress, service, and load balancer
for your app.
- Automatically scale your pods up and down (including to zero active pods).
1. To find the IP address for your service, use
`kubectl get svc knative-ingressgateway --namespace istio-system` to get the ingress IP for your
cluster. If your cluster is new, it may take some time for the service to get assigned
an external IP address.
`kubectl get svc knative-ingressgateway --namespace istio-system` to get the
ingress IP for your cluster. If your cluster is new, it may take some time
for the service to get assigned an external IP address.
```
kubectl get svc knative-ingressgateway --namespace istio-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
knative-ingressgateway LoadBalancer 10.35.254.218 35.225.171.32 80:32380/TCP,443:32390/TCP,32400:32400/TCP 1h
knative-ingressgateway LoadBalancer 10.35.254.218 35.225.171.32
80:32380/TCP,443:32390/TCP,32400:32400/TCP 1h
```
```
1. To find the URL for your service, use
@ -1,22 +1,22 @@
# Hello World - Go sample
A simple web app written in Go that you can use for testing.
It reads in an env variable `TARGET` and prints `Hello ${TARGET}!`. If
`TARGET` is not specified, it will use `World` as the `TARGET`.
A simple web app written in Go that you can use for testing. It reads in an env
variable `TARGET` and prints `Hello ${TARGET}!`. If `TARGET` is not specified,
it will use `World` as the `TARGET`.
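The `TARGET` fallback described here reduces to a one-line rule, sketched below
in Python as neutral pseudocode (the sample itself is Go):

```python
import os

def greeting():
    # TARGET when the variable is set, the literal "World" otherwise.
    return f"Hello {os.environ.get('TARGET', 'World')}!"

print(greeting())
```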
## Prerequisites
- A Kubernetes cluster with Knative installed. Follow the
[installation instructions](https://github.com/knative/docs/blob/master/install/README.md) if you need
to create one.
[installation instructions](https://github.com/knative/docs/blob/master/install/README.md)
if you need to create one.
- [Docker](https://www.docker.com) installed and running on your local machine,
and a Docker Hub account configured (we'll use it for a container registry).
## Recreating the sample code
While you can clone all of the code from this directory, hello world
apps are generally more useful if you build them step-by-step. The
following instructions recreate the source files from this folder.
While you can clone all of the code from this directory, hello world apps are
generally more useful if you build them step-by-step. The following instructions
recreate the source files from this folder.
1. Create a new file named `helloworld.go` and paste the following code. This
code creates a basic web server which listens on port 8080:
@ -88,7 +88,8 @@ following instructions recreate the source files from this folder.
```
1. Create a new file, `service.yaml` and copy the following service definition
into the file. Make sure to replace `{username}` with your Docker Hub username.
into the file. Make sure to replace `{username}` with your Docker Hub
username.
```yaml
apiVersion: serving.knative.dev/v1alpha1
@ -114,8 +115,8 @@ Once you have recreated the sample code files (or used the files in the sample
folder) you're ready to build and deploy the sample app.
1. Use Docker to build the sample code into a container. To build and push with
Docker Hub, run these commands replacing `{username}` with your
Docker Hub username:
Docker Hub, run these commands replacing `{username}` with your Docker Hub
username:
```shell
# Build the container on your local machine
@ -127,8 +128,8 @@ folder) you're ready to build and deploy the sample app.
1. After the build has completed and the container is pushed to Docker Hub, you
can deploy the app into your cluster. Ensure that the container image value
in `service.yaml` matches the container you built in
the previous step. Apply the configuration using `kubectl`:
in `service.yaml` matches the container you built in the previous step. Apply
the configuration using `kubectl`:
```shell
kubectl apply --filename service.yaml
@ -137,12 +138,14 @@ folder) you're ready to build and deploy the sample app.
1. Now that your service is created, Knative will perform the following steps:
- Create a new immutable revision for this version of the app.
- Network programming to create a route, ingress, service, and load balancer for your app.
- Network programming to create a route, ingress, service, and load balancer
for your app.
- Automatically scale your pods up and down (including to zero active pods).
1. Run the following command to find the external IP address for your service. The ingress IP for your
cluster is returned. If you just created your cluster, you might need to wait and rerun the command until
your service gets assigned an external IP address.
1. Run the following command to find the external IP address for your service.
The ingress IP for your cluster is returned. If you just created your
cluster, you might need to wait and rerun the command until your service gets
assigned an external IP address.
```shell
kubectl get svc knative-ingressgateway --namespace istio-system
@ -169,9 +172,9 @@ folder) you're ready to build and deploy the sample app.
helloworld-go helloworld-go.default.example.com
```
1. Test your app by sending it a request. Use the following
`curl` command with the domain URL `helloworld-go.default.example.com` and `EXTERNAL-IP` address that you retrieved
in the previous steps:
1. Test your app by sending it a request. Use the following `curl` command with
the domain URL `helloworld-go.default.example.com` and `EXTERNAL-IP` address
that you retrieved in the previous steps:
```shell
curl -H "Host: helloworld-go.default.example.com" http://{EXTERNAL_IP_ADDRESS}
@ -1,22 +1,22 @@
# Hello World - Haskell sample
A simple web app written in Haskell that you can use for testing.
It reads in an env variable `TARGET` and prints "Hello \${TARGET}!". If
TARGET is not specified, it will use "World" as the TARGET.
A simple web app written in Haskell that you can use for testing. It reads in an
env variable `TARGET` and prints "Hello \${TARGET}!". If TARGET is not
specified, it will use "World" as the TARGET.
## Prerequisites
- A Kubernetes cluster with Knative installed. Follow the
[installation instructions](https://github.com/knative/docs/blob/master/install/README.md) if you need
to create one.
[installation instructions](https://github.com/knative/docs/blob/master/install/README.md)
if you need to create one.
- [Docker](https://www.docker.com) installed and running on your local machine,
and a Docker Hub account configured (we'll use it for a container registry).
## Recreating the sample code
While you can clone all of the code from this directory, hello world
apps are generally more useful if you build them step-by-step. The
following instructions recreate the source files from this folder.
While you can clone all of the code from this directory, hello world apps are
generally more useful if you build them step-by-step. The following instructions
recreate the source files from this folder.
1. Create a new file named `stack.yaml` and paste the following code:
@ -110,7 +110,8 @@ following instructions recreate the source files from this folder.
```
1. Create a new file, `service.yaml` and copy the following service definition
into the file. Make sure to replace `{username}` with your Docker Hub username.
into the file. Make sure to replace `{username}` with your Docker Hub
username.
```yaml
apiVersion: serving.knative.dev/v1alpha1
@ -136,8 +137,8 @@ Once you have recreated the sample code files (or used the files in the sample
folder) you're ready to build and deploy the sample app.
1. Use Docker to build the sample code into a container. To build and push with
Docker Hub, enter these commands replacing `{username}` with your
Docker Hub username:
Docker Hub, enter these commands replacing `{username}` with your Docker Hub
username:
```shell
# Build the container on your local machine
@ -149,8 +150,8 @@ folder) you're ready to build and deploy the sample app.
1. After the build has completed and the container is pushed to Docker Hub, you
can deploy the app into your cluster. Ensure that the container image value
in `service.yaml` matches the container you built in
the previous step. Apply the configuration using `kubectl`:
in `service.yaml` matches the container you built in the previous step. Apply
the configuration using `kubectl`:
```shell
kubectl apply --filename service.yaml
@ -159,13 +160,14 @@ folder) you're ready to build and deploy the sample app.
1. Now that your service is created, Knative will perform the following steps:
- Create a new immutable revision for this version of the app.
- Network programming to create a route, ingress, service, and load balancer for your app.
- Network programming to create a route, ingress, service, and load balancer
for your app.
- Automatically scale your pods up and down (including to zero active pods).
1. To find the IP address for your service, enter
`kubectl get svc knative-ingressgateway --namespace istio-system` to get the ingress IP for your
cluster. If your cluster is new, it may take some time for the service to get assigned
an external IP address.
`kubectl get svc knative-ingressgateway --namespace istio-system` to get the
ingress IP for your cluster. If your cluster is new, it may take some time
for the service to get assigned an external IP address.
```shell
kubectl get svc knative-ingressgateway --namespace istio-system
@ -1,17 +1,18 @@
# Hello World - Spring Boot Java sample
A simple web app written in Java using Spring Boot 2.0 that you can use for testing.
It reads in an env variable `TARGET` and prints "Hello \${TARGET}!". If
A simple web app written in Java using Spring Boot 2.0 that you can use for
testing. It reads in an env variable `TARGET` and prints "Hello \${TARGET}!". If
TARGET is not specified, it will use "World" as the TARGET.
## Prerequisites
- A Kubernetes cluster with Knative installed. Follow the
[installation instructions](https://github.com/knative/docs/blob/master/install/README.md) if you need
to create one.
[installation instructions](https://github.com/knative/docs/blob/master/install/README.md)
if you need to create one.
- [Docker](https://www.docker.com) installed and running on your local machine,
and a Docker Hub account configured (we'll use it for a container registry).
- You have installed [Java SE 8 or later JDK](http://www.oracle.com/technetwork/java/javase/downloads/index.html).
- You have installed
[Java SE 8 or later JDK](http://www.oracle.com/technetwork/java/javase/downloads/index.html).
## Recreating the sample code
@ -19,7 +20,8 @@ While you can clone all of the code from this directory, hello world apps are
generally more useful if you build them step-by-step. The following instructions
recreate the source files from this folder.
1. From the console, create a new empty web project using the curl and unzip commands:
1. From the console, create a new empty web project using the curl and unzip
commands:
```shell
curl https://start.spring.io/starter.zip \
@ -31,13 +33,13 @@ recreate the source files from this folder.
```
If you don't have curl installed, you can accomplish the same by visiting the
[Spring Initializr](https://start.spring.io/) page. Specify Artifact as `helloworld`
and add the `Web` dependency. Then click `Generate Project`, download and unzip the
sample archive.
[Spring Initializr](https://start.spring.io/) page. Specify Artifact as
`helloworld` and add the `Web` dependency. Then click `Generate Project`,
download and unzip the sample archive.
1. Update the `SpringBootApplication` class in
`src/main/java/com/example/helloworld/HelloworldApplication.java` by adding
a `@RestController` to handle the "/" mapping and also add a `@Value` field to
`src/main/java/com/example/helloworld/HelloworldApplication.java` by adding a
`@RestController` to handle the "/" mapping and also add a `@Value` field to
provide the TARGET environment variable:
```java
@ -78,8 +80,9 @@ recreate the source files from this folder.
Go to `http://localhost:8080/` to see your `Hello World!` message.
1. In your project directory, create a file named `Dockerfile` and copy the code
block below into it. For detailed instructions on dockerizing a Spring Boot app,
see [Spring Boot with Docker](https://spring.io/guides/gs/spring-boot-docker/).
block below into it. For detailed instructions on dockerizing a Spring Boot
app, see
[Spring Boot with Docker](https://spring.io/guides/gs/spring-boot-docker/).
For additional information on multi-stage docker builds for Java see
[Creating Smaller Java Image using Docker Multi-stage Build](http://blog.arungupta.me/smaller-java-image-docker-multi-stage-build/).
@ -113,7 +116,8 @@ recreate the source files from this folder.
```
1. Create a new file, `service.yaml` and copy the following service definition
into the file. Make sure to replace `{username}` with your Docker Hub username.
into the file. Make sure to replace `{username}` with your Docker Hub
username.
```yaml
apiVersion: serving.knative.dev/v1alpha1
@ -139,8 +143,8 @@ Once you have recreated the sample code files (or used the files in the sample
folder) you're ready to build and deploy the sample app.
1. Use Docker to build the sample code into a container. To build and push with
Docker Hub, run these commands replacing `{username}` with your
Docker Hub username:
Docker Hub, run these commands replacing `{username}` with your Docker Hub
username:
```shell
# Build the container on your local machine
@ -152,8 +156,8 @@ folder) you're ready to build and deploy the sample app.
1. After the build has completed and the container is pushed to Docker Hub, you
can deploy the app into your cluster. Ensure that the container image value
in `service.yaml` matches the container you built in
the previous step. Apply the configuration using `kubectl`:
in `service.yaml` matches the container you built in the previous step. Apply
the configuration using `kubectl`:
```shell
kubectl apply --filename service.yaml
@ -162,11 +166,12 @@ folder) you're ready to build and deploy the sample app.
1. Now that your service is created, Knative will perform the following steps:
- Create a new immutable revision for this version of the app.
- Network programming to create a route, ingress, service, and load balancer for your app.
- Network programming to create a route, ingress, service, and load balancer
for your app.
- Automatically scale your pods up and down (including to zero active pods).
1. To find the IP address for your service, use the following command. If your cluster is new, it may take some time for the service to get assigned
an external IP address.
1. To find the IP address for your service, use the following command. If your
cluster is new, it may take some time for the service to get assigned an
external IP address.
```shell
kubectl get svc knative-ingressgateway --namespace istio-system
@ -1,22 +1,22 @@
# Hello World - Kotlin sample
A simple web app written in Kotlin using [Ktor](https://ktor.io/) that you can use for testing.
It reads in an env variable `TARGET` and prints "Hello \${TARGET}". If
TARGET is not specified, it will use "World" as the TARGET.
A simple web app written in Kotlin using [Ktor](https://ktor.io/) that you can
use for testing. It reads in an env variable `TARGET` and prints "Hello
\${TARGET}". If TARGET is not specified, it will use "World" as the TARGET.
## Prerequisites
- A Kubernetes cluster with Knative installed. Follow the
[installation instructions](https://github.com/knative/docs/blob/master/install/README.md) if you need
to create one.
[installation instructions](https://github.com/knative/docs/blob/master/install/README.md)
if you need to create one.
- [Docker](https://www.docker.com) installed and running on your local machine,
and a Docker Hub account configured (we'll use it for a container registry).
## Steps to recreate the sample code
While you can clone all of the code from this directory, hello world apps are
generally more useful if you build them step-by-step.
The following instructions recreate the source files from this folder.
generally more useful if you build them step-by-step. The following instructions
recreate the source files from this folder.
1. Create a new directory and cd into it:
@ -25,7 +25,8 @@ The following instructions recreate the source files from this folder.
cd hello
```
2. Create a file named `Main.kt` at `src/main/kotlin/com/example/hello` and copy the code block below into it:
2. Create a file named `Main.kt` at `src/main/kotlin/com/example/hello` and copy
the code block below into it:
```shell
mkdir -p src/main/kotlin/com/example/hello
@ -136,7 +137,8 @@ The following instructions recreate the source files from this folder.
```
6. Create a new file, `service.yaml` and copy the following service definition
into the file. Make sure to replace `{username}` with your Docker Hub username.
into the file. Make sure to replace `{username}` with your Docker Hub
username.
```yaml
apiVersion: serving.knative.dev/v1alpha1
@ -162,8 +164,8 @@ Once you have recreated the sample code files (or used the files in the sample
folder) you're ready to build and deploy the sample app.
1. Use Docker to build the sample code into a container. To build and push with
Docker Hub, run these commands replacing `{username}` with your
Docker Hub username:
Docker Hub, run these commands replacing `{username}` with your Docker Hub
username:
```shell
# Build the container on your local machine
@ -175,8 +177,8 @@ folder) you're ready to build and deploy the sample app.
2. After the build has completed and the container is pushed to Docker Hub, you
can deploy the app into your cluster. Ensure that the container image value
in `service.yaml` matches the container you built in
the previous step. Apply the configuration using `kubectl`:
in `service.yaml` matches the container you built in the previous step. Apply
the configuration using `kubectl`:
```shell
kubectl apply --filename service.yaml
@ -185,13 +187,14 @@ folder) you're ready to build and deploy the sample app.
3. Now that your service is created, Knative will perform the following steps:
- Create a new immutable revision for this version of the app.
- Network programming to create a route, ingress, service, and load balancer for your app.
- Network programming to create a route, ingress, service, and load balancer
for your app.
- Automatically scale your pods up and down (including to zero active pods).
4. To find the IP address for your service, use
`kubectl get service knative-ingressgateway --namespace istio-system` to get the ingress IP for your
cluster. If your cluster is new, it may take some time for the service to get assigned
an external IP address.
`kubectl get service knative-ingressgateway --namespace istio-system` to get
the ingress IP for your cluster. If your cluster is new, it may take some time
for the service to get assigned an external IP address.
```shell
kubectl get service knative-ingressgateway --namespace istio-system
@ -213,8 +216,8 @@ folder) you're ready to build and deploy the sample app.
helloworld-kotlin helloworld-kotlin.default.example.com
```
6. Now you can make a request to your app to see the result. Replace `{IP_ADDRESS}`
with the address you see returned in the previous step.
6. Now you can make a request to your app to see the result. Replace
`{IP_ADDRESS}` with the address you see returned in the previous step.
```shell
curl -H "Host: helloworld-kotlin.default.example.com" http://{IP_ADDRESS}

View File

@ -1,14 +1,14 @@
# Hello World - Node.js sample
A simple web app written in Node.js that you can use for testing.
It reads in an env variable `TARGET` and prints "Hello \${TARGET}!". If
TARGET is not specified, it will use "World" as the TARGET.
A simple web app written in Node.js that you can use for testing. It reads in an
env variable `TARGET` and prints "Hello \${TARGET}!". If TARGET is not
specified, it will use "World" as the TARGET.
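The `TARGET` fallback that all of these hello world samples share can be
sketched in a few lines of shell (a minimal illustration of the pattern only,
not part of the sample app itself):

```shell
# Default to "World" when TARGET is unset or empty, mirroring the samples.
TARGET="${TARGET:-World}"
echo "Hello ${TARGET}!"
```

Running this without `TARGET` set prints `Hello World!`; with
`TARGET=Node.js` it prints `Hello Node.js!`.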
## Prerequisites
- A Kubernetes cluster with Knative installed. Follow the
[installation instructions](https://github.com/knative/docs/blob/master/install/README.md) if you need
to create one.
[installation instructions](https://github.com/knative/docs/blob/master/install/README.md)
if you need to create one.
- [Docker](https://www.docker.com) installed and running on your local machine,
and a Docker Hub account configured (we'll use it for a container registry).
- [Node.js](https://nodejs.org/en/) installed and configured.
@ -19,9 +19,9 @@ While you can clone all of the code from this directory, hello world apps are
generally more useful if you build them step-by-step. The following instructions
recreate the source files from this folder.
1. Create a new directory and initialize `npm`. You can accept the defaults,
but change the entry point to `app.js` to be consistent with the sample
code here.
1. Create a new directory and initialize `npm`. You can accept the defaults, but
change the entry point to `app.js` to be consistent with the sample code
here.
```shell
npm init
@ -80,7 +80,8 @@ recreate the source files from this folder.
1. In your project directory, create a file named `Dockerfile` and copy the code
block below into it. For detailed instructions on dockerizing a Node.js app,
see [Dockerizing a Node.js web app](https://nodejs.org/en/docs/guides/nodejs-docker-webapp/).
see
[Dockerizing a Node.js web app](https://nodejs.org/en/docs/guides/nodejs-docker-webapp/).
```Dockerfile
# Use the official Node 8 image.
@ -110,7 +111,8 @@ recreate the source files from this folder.
```
1. Create a new file, `service.yaml` and copy the following service definition
into the file. Make sure to replace `{username}` with your Docker Hub username.
into the file. Make sure to replace `{username}` with your Docker Hub
username.
```yaml
apiVersion: serving.knative.dev/v1alpha1
@ -136,8 +138,8 @@ Once you have recreated the sample code files (or used the files in the sample
folder) you're ready to build and deploy the sample app.
1. Use Docker to build the sample code into a container. To build and push with
Docker Hub, run these commands replacing `{username}` with your
Docker Hub username:
Docker Hub, run these commands replacing `{username}` with your Docker Hub
username:
```shell
# Build the container on your local machine
@ -149,8 +151,8 @@ folder) you're ready to build and deploy the sample app.
1. After the build has completed and the container is pushed to Docker Hub, you
can deploy the app into your cluster. Ensure that the container image value
in `service.yaml` matches the container you built in
the previous step. Apply the configuration using `kubectl`:
in `service.yaml` matches the container you built in the previous step. Apply
the configuration using `kubectl`:
```shell
kubectl apply --filename service.yaml
@ -159,13 +161,14 @@ folder) you're ready to build and deploy the sample app.
1. Now that your service is created, Knative will perform the following steps:
- Create a new immutable revision for this version of the app.
- Network programming to create a route, ingress, service, and load balancer for your app.
- Network programming to create a route, ingress, service, and load balancer
for your app.
- Automatically scale your pods up and down (including to zero active pods).
1. To find the IP address for your service, use
`kubectl get svc knative-ingressgateway --namespace istio-system` to get the ingress IP for your
cluster. If your cluster is new, it may take some time for the service to get assigned
an external IP address.
`kubectl get svc knative-ingressgateway --namespace istio-system` to get the
ingress IP for your cluster. If your cluster is new, it may take some time for
the service to get assigned an external IP address.
```shell
kubectl get svc knative-ingressgateway --namespace istio-system

View File

@ -1,22 +1,22 @@
# Hello World - PHP sample
A simple web app written in PHP that you can use for testing.
It reads in an env variable `TARGET` and prints "Hello \${TARGET}!". If
TARGET is not specified, it will use "World" as the TARGET.
A simple web app written in PHP that you can use for testing. It reads in an env
variable `TARGET` and prints "Hello \${TARGET}!". If TARGET is not specified, it
will use "World" as the TARGET.
## Prerequisites
- A Kubernetes cluster with Knative installed. Follow the
[installation instructions](https://github.com/knative/docs/blob/master/install/README.md) if you need
to create one.
[installation instructions](https://github.com/knative/docs/blob/master/install/README.md)
if you need to create one.
- [Docker](https://www.docker.com) installed and running on your local machine,
and a Docker Hub account configured (we'll use it for a container registry).
## Recreating the sample code
While you can clone all of the code from this directory, hello world
apps are generally more useful if you build them step-by-step. The
following instructions recreate the source files from this folder.
While you can clone all of the code from this directory, hello world apps are
generally more useful if you build them step-by-step. The following instructions
recreate the source files from this folder.
1. Create a new directory and cd into it:
@ -33,8 +33,8 @@ following instructions recreate the source files from this folder.
echo sprintf("Hello %s!\n", $target);
```
1. Create a file named `Dockerfile` and copy the code block below into it.
See [official PHP docker image](https://hub.docker.com/_/php/) for more details.
1. Create a file named `Dockerfile` and copy the code block below into it. See
[official PHP docker image](https://hub.docker.com/_/php/) for more details.
```docker
# Use the official PHP 7.2 image.
@ -53,7 +53,8 @@ following instructions recreate the source files from this folder.
```
1. Create a new file, `service.yaml` and copy the following service definition
into the file. Make sure to replace `{username}` with your Docker Hub username.
into the file. Make sure to replace `{username}` with your Docker Hub
username.
```yaml
apiVersion: serving.knative.dev/v1alpha1
@ -75,12 +76,12 @@ following instructions recreate the source files from this folder.
## Building and deploying the sample
Once you have recreated the sample code files (or used the files in the sample folder)
you're ready to build and deploy the sample app.
Once you have recreated the sample code files (or used the files in the sample
folder) you're ready to build and deploy the sample app.
1. Use Docker to build the sample code into a container. To build and push with
Docker Hub, run these commands replacing `{username}` with your
Docker Hub username:
Docker Hub, run these commands replacing `{username}` with your Docker Hub
username:
```shell
# Build the container on your local machine
@ -92,8 +93,8 @@ you're ready to build and deploy the sample app.
1. After the build has completed and the container is pushed to Docker Hub, you
can deploy the app into your cluster. Ensure that the container image value
in `service.yaml` matches the container you built in
the previous step. Apply the configuration using `kubectl`:
in `service.yaml` matches the container you built in the previous step. Apply
the configuration using `kubectl`:
```shell
kubectl apply --filename service.yaml
@ -102,13 +103,14 @@ you're ready to build and deploy the sample app.
1. Now that your service is created, Knative will perform the following steps:
- Create a new immutable revision for this version of the app.
- Network programming to create a route, ingress, service, and load balancer for your app.
- Network programming to create a route, ingress, service, and load balancer
for your app.
- Automatically scale your pods up and down (including to zero active pods).
1. To find the IP address for your service, use
`kubectl get svc knative-ingressgateway --namespace istio-system` to get the ingress IP for your
cluster. If your cluster is new, it may take some time for the service to get assigned
an external IP address.
`kubectl get svc knative-ingressgateway --namespace istio-system` to get the
ingress IP for your cluster. If your cluster is new, it may take some time for
the service to get assigned an external IP address.
```shell
kubectl get svc knative-ingressgateway --namespace istio-system

View File

@ -1,22 +1,22 @@
# Hello World - Python sample
A simple web app written in Python that you can use for testing.
It reads in an env variable `TARGET` and prints "Hello \${TARGET}!". If
TARGET is not specified, it will use "World" as the TARGET.
A simple web app written in Python that you can use for testing. It reads in an
env variable `TARGET` and prints "Hello \${TARGET}!". If TARGET is not
specified, it will use "World" as the TARGET.
## Prerequisites
- A Kubernetes cluster with Knative installed. Follow the
[installation instructions](https://github.com/knative/docs/blob/master/install/README.md) if you need
to create one.
[installation instructions](https://github.com/knative/docs/blob/master/install/README.md)
if you need to create one.
- [Docker](https://www.docker.com) installed and running on your local machine,
and a Docker Hub account configured (we'll use it for a container registry).
## Steps to recreate the sample code
While you can clone all of the code from this directory, hello world apps are
generally more useful if you build them step-by-step.
The following instructions recreate the source files from this folder.
generally more useful if you build them step-by-step. The following instructions
recreate the source files from this folder.
1. Create a new directory and cd into it:
@ -43,8 +43,9 @@ The following instructions recreate the source files from this folder.
app.run(debug=True,host='0.0.0.0',port=int(os.environ.get('PORT', 8080)))
```
1. Create a file named `Dockerfile` and copy the code block below into it.
See [official Python docker image](https://hub.docker.com/_/python/) for more details.
1. Create a file named `Dockerfile` and copy the code block below into it. See
[official Python docker image](https://hub.docker.com/_/python/) for more
details.
```docker
# Use the official Python image.
@ -68,7 +69,8 @@ The following instructions recreate the source files from this folder.
```
1. Create a new file, `service.yaml` and copy the following service definition
into the file. Make sure to replace `{username}` with your Docker Hub username.
into the file. Make sure to replace `{username}` with your Docker Hub
username.
```yaml
apiVersion: serving.knative.dev/v1alpha1
@ -94,8 +96,8 @@ Once you have recreated the sample code files (or used the files in the sample
folder) you're ready to build and deploy the sample app.
1. Use Docker to build the sample code into a container. To build and push with
Docker Hub, run these commands replacing `{username}` with your
Docker Hub username:
Docker Hub, run these commands replacing `{username}` with your Docker Hub
username:
```shell
# Build the container on your local machine
@ -107,8 +109,8 @@ folder) you're ready to build and deploy the sample app.
1. After the build has completed and the container is pushed to Docker Hub, you
can deploy the app into your cluster. Ensure that the container image value
in `service.yaml` matches the container you built in
the previous step. Apply the configuration using `kubectl`:
in `service.yaml` matches the container you built in the previous step. Apply
the configuration using `kubectl`:
```shell
kubectl apply --filename service.yaml
@ -117,13 +119,14 @@ folder) you're ready to build and deploy the sample app.
1. Now that your service is created, Knative will perform the following steps:
- Create a new immutable revision for this version of the app.
- Network programming to create a route, ingress, service, and load balancer for your app.
- Network programming to create a route, ingress, service, and load balancer
for your app.
- Automatically scale your pods up and down (including to zero active pods).
1. To find the IP address for your service, use
`kubectl get svc knative-ingressgateway --namespace istio-system` to get the ingress IP for your
cluster. If your cluster is new, it may take some time for the service to get assigned
an external IP address.
`kubectl get svc knative-ingressgateway --namespace istio-system` to get the
ingress IP for your cluster. If your cluster is new, it may take some time for
the service to get assigned an external IP address.
```shell
kubectl get svc knative-ingressgateway --namespace istio-system
@ -141,8 +144,8 @@ folder) you're ready to build and deploy the sample app.
helloworld-python helloworld-python.default.example.com
```
1. Now you can make a request to your app to see the result. Replace `{IP_ADDRESS}`
with the address you see returned in the previous step.
1. Now you can make a request to your app to see the result. Replace
`{IP_ADDRESS}` with the address you see returned in the previous step.
```shell
curl -H "Host: helloworld-python.default.example.com" http://{IP_ADDRESS}

View File

@ -1,22 +1,22 @@
# Hello World - Ruby sample
A simple web app written in Ruby that you can use for testing.
It reads in an env variable `TARGET` and prints "Hello \${TARGET}!". If
TARGET is not specified, it will use "World" as the TARGET.
A simple web app written in Ruby that you can use for testing. It reads in an
env variable `TARGET` and prints "Hello \${TARGET}!". If TARGET is not
specified, it will use "World" as the TARGET.
## Prerequisites
- A Kubernetes cluster with Knative installed. Follow the
[installation instructions](https://github.com/knative/docs/blob/master/install/README.md) if you need
to create one.
[installation instructions](https://github.com/knative/docs/blob/master/install/README.md)
if you need to create one.
- [Docker](https://www.docker.com) installed and running on your local machine,
and a Docker Hub account configured (we'll use it for a container registry).
## Steps to recreate the sample code
While you can clone all of the code from this directory, hello world apps are
generally more useful if you build them step-by-step.
The following instructions recreate the source files from this folder.
generally more useful if you build them step-by-step. The following instructions
recreate the source files from this folder.
1. Create a new directory and cd into it:
@ -38,8 +38,9 @@ The following instructions recreate the source files from this folder.
end
```
1. Create a file named `Dockerfile` and copy the code block below into it.
See [official Ruby docker image](https://hub.docker.com/_/ruby/) for more details.
1. Create a file named `Dockerfile` and copy the code block below into it. See
[official Ruby docker image](https://hub.docker.com/_/ruby/) for more
details.
```docker
# Use the official Ruby image.
@ -79,7 +80,8 @@ The following instructions recreate the source files from this folder.
```
1. Create a new file, `service.yaml` and copy the following service definition
into the file. Make sure to replace `{username}` with your Docker Hub username.
into the file. Make sure to replace `{username}` with your Docker Hub
username.
```yaml
apiVersion: serving.knative.dev/v1alpha1
@ -101,12 +103,12 @@ The following instructions recreate the source files from this folder.
## Build and deploy this sample
Once you have recreated the sample code files (or used the files in the sample folder)
you're ready to build and deploy the sample app.
Once you have recreated the sample code files (or used the files in the sample
folder) you're ready to build and deploy the sample app.
1. Use Docker to build the sample code into a container. To build and push with
Docker Hub, run these commands replacing `{username}` with your
Docker Hub username:
Docker Hub, run these commands replacing `{username}` with your Docker Hub
username:
```shell
# Build the container on your local machine
@ -118,8 +120,8 @@ you're ready to build and deploy the sample app.
1. After the build has completed and the container is pushed to Docker Hub, you
can deploy the app into your cluster. Ensure that the container image value
in `service.yaml` matches the container you built in
the previous step. Apply the configuration using `kubectl`:
in `service.yaml` matches the container you built in the previous step. Apply
the configuration using `kubectl`:
```shell
kubectl apply --filename service.yaml
@ -128,13 +130,14 @@ you're ready to build and deploy the sample app.
1. Now that your service is created, Knative will perform the following steps:
- Create a new immutable revision for this version of the app.
- Network programming to create a route, ingress, service, and load balancer for your app.
- Network programming to create a route, ingress, service, and load balancer
for your app.
- Automatically scale your pods up and down (including to zero active pods).
1. To find the IP address for your service, use
`kubectl get svc knative-ingressgateway --namespace istio-system` to get the ingress IP for your
cluster. If your cluster is new, it may take some time for the service to get assigned
an external IP address.
`kubectl get svc knative-ingressgateway --namespace istio-system` to get the
ingress IP for your cluster. If your cluster is new, it may take some time for
the service to get assigned an external IP address.
```shell
kubectl get svc knative-ingressgateway --namespace istio-system
@ -152,8 +155,8 @@ you're ready to build and deploy the sample app.
helloworld-ruby helloworld-ruby.default.example.com
```
1. Now you can make a request to your app to see the result. Replace `{IP_ADDRESS}`
with the address you see returned in the previous step.
1. Now you can make a request to your app to see the result. Replace
`{IP_ADDRESS}` with the address you see returned in the previous step.
```shell
curl -H "Host: helloworld-ruby.default.example.com" http://{IP_ADDRESS}

View File

@ -1,23 +1,23 @@
# Hello World - Rust sample
A simple web app written in Rust that you can use for testing.
It reads in an env variable `TARGET` and prints "Hello \${TARGET}!". If
A simple web app written in Rust that you can use for testing. It reads in an
env variable `TARGET` and prints "Hello \${TARGET}!". If
TARGET is not specified, it will use "World" as the TARGET.
## Prerequisites
- A Kubernetes cluster with Knative installed. Follow the
[installation instructions](https://github.com/knative/docs/blob/master/install/README.md) if you need
to create one.
[installation instructions](https://github.com/knative/docs/blob/master/install/README.md)
if you need to create one.
- [Docker](https://www.docker.com) installed and running on your local machine,
and a Docker Hub account configured (we'll use it for a container registry).
## Steps to recreate the sample code
While you can clone all of the code from this directory, hello world
apps are generally more useful if you build them step-by-step. The
following instructions recreate the source files from this folder.
While you can clone all of the code from this directory, hello world apps are
generally more useful if you build them step-by-step. The following instructions
recreate the source files from this folder.
1. Create a new file named `Cargo.toml` and paste the following code:
@ -108,7 +108,8 @@ following instructions recreate the source files from this folder.
```
1. Create a new file, `service.yaml` and copy the following service definition
into the file. Make sure to replace `{username}` with your Docker Hub username.
into the file. Make sure to replace `{username}` with your Docker Hub
username.
```yaml
apiVersion: serving.knative.dev/v1alpha1
@ -134,8 +135,8 @@ Once you have recreated the sample code files (or used the files in the sample
folder) you're ready to build and deploy the sample app.
1. Use Docker to build the sample code into a container. To build and push with
Docker Hub, enter these commands replacing `{username}` with your
Docker Hub username:
Docker Hub, enter these commands replacing `{username}` with your Docker Hub
username:
```shell
# Build the container on your local machine
@ -147,8 +148,8 @@ folder) you're ready to build and deploy the sample app.
1. After the build has completed and the container is pushed to Docker Hub, you
can deploy the app into your cluster. Ensure that the container image value
in `service.yaml` matches the container you built in
the previous step. Apply the configuration using `kubectl`:
in `service.yaml` matches the container you built in the previous step. Apply
the configuration using `kubectl`:
```shell
kubectl apply --filename service.yaml
@ -157,13 +158,14 @@ folder) you're ready to build and deploy the sample app.
1. Now that your service is created, Knative will perform the following steps:
- Create a new immutable revision for this version of the app.
- Network programming to create a route, ingress, service, and load balancer for your app.
- Network programming to create a route, ingress, service, and load balancer
for your app.
- Automatically scale your pods up and down (including to zero active pods).
1. To find the IP address for your service, enter
`kubectl get svc knative-ingressgateway --namespace istio-system` to get the ingress IP for your
cluster. If your cluster is new, it may take some time for the service to get assigned
an external IP address.
`kubectl get svc knative-ingressgateway --namespace istio-system` to get the
ingress IP for your cluster. If your cluster is new, it may take some time for
the service to get assigned an external IP address.
```shell
kubectl get svc knative-ingressgateway --namespace istio-system

View File

@ -1,29 +1,34 @@
# Hello World - Eclipse Vert.x sample
Learn how to deploy a simple web app that is written in Java and uses Eclipse Vert.x.
This sample uses Docker to build locally. The app reads in a `TARGET` env variable and then
prints "Hello World: \${TARGET}!". If a value for `TARGET` is not specified,
the "NOT SPECIFIED" default value is used.
Learn how to deploy a simple web app that is written in Java and uses Eclipse
Vert.x. This sample uses Docker to build locally. The app reads in a `TARGET`
env variable and then prints "Hello World: \${TARGET}!". If a value for `TARGET`
is not specified, the "NOT SPECIFIED" default value is used.
Use this sample to walk you through the steps of creating and modifying the sample app, building and pushing your
container image to a registry, and then deploying your app to your Knative cluster.
Use this sample to walk you through the steps of creating and modifying the
sample app, building and pushing your container image to a registry, and then
deploying your app to your Knative cluster.
## Before you begin
You must meet the following requirements to complete this sample:
- A version of the Knative Serving component installed and running on your Kubernetes cluster. Follow the
[Knative installation instructions](https://github.com/knative/docs/blob/master/install/README.md) if you need
to create a Knative cluster.
- A version of the Knative Serving component installed and running on your
Kubernetes cluster. Follow the
[Knative installation instructions](https://github.com/knative/docs/blob/master/install/README.md)
if you need to create a Knative cluster.
- The following software downloaded and installed on your local machine:
- [Java SE 8 or later JDK](http://www.oracle.com/technetwork/java/javase/downloads/index.html).
- [Eclipse Vert.x v3.5.4](https://vertx.io/).
- [Docker](https://www.docker.com) for building and pushing your container image.
- [Docker](https://www.docker.com) for building and pushing your container
image.
- [curl](https://curl.haxx.se/) to test the sample app after deployment.
- A [Docker Hub](https://hub.docker.com/) account where you can push your container image.
- A [Docker Hub](https://hub.docker.com/) account where you can push your
container image.
**Tip**: You can clone the [Knative/docs repo](https://github.com/knative/docs) and then modify the source files.
Alternatively, learn more by manually creating the files yourself.
**Tip**: You can clone the [Knative/docs repo](https://github.com/knative/docs)
and then modify the source files. Alternatively, learn more by manually creating
the files yourself.
## Creating and configuring the sample code
@ -98,8 +103,10 @@ To create and configure the source files in the root of your working directory:
</project>
```
1. Create the `HelloWorld.java` file in the `src/main/java/com/example/helloworld` directory. The
`[ROOT]/src/main/java/com/example/helloworld/HelloWorld.java` file creates a basic web server that listens on port `8080`.
1. Create the `HelloWorld.java` file in the
`src/main/java/com/example/helloworld` directory. The
`[ROOT]/src/main/java/com/example/helloworld/HelloWorld.java` file creates a
basic web server that listens on port `8080`.
```java
package com.example.helloworld;
@ -143,8 +150,9 @@ To create and configure the source files in the root of your working directory:
COPY target/helloworld-1.0.0-SNAPSHOT.jar /deployments/
```
1. Create the `service.yaml` file. You must specify your Docker Hub username in `{username}`. You can also
configure the `TARGET`, for example you can modify the `Eclipse Vert.x Sample v1` value.
1. Create the `service.yaml` file. You must specify your Docker Hub username in
`{username}`. You can also configure the `TARGET`, for example you can modify
the `Eclipse Vert.x Sample v1` value.
```yaml
apiVersion: serving.knative.dev/v1alpha1
@ -166,10 +174,12 @@ To create and configure the source files in the root of your working directory:
## Building and deploying the sample
To build a container image, push your image to the registry, and then deploy your sample app to your cluster:
To build a container image, push your image to the registry, and then deploy
your sample app to your cluster:
1. Use Docker to build your container image and then push that image to your Docker Hub registry.
You must replace the `{username}` variables in the following commands with your Docker Hub username.
1. Use Docker to build your container image and then push that image to your
Docker Hub registry. You must replace the `{username}` variables in the
following commands with your Docker Hub username.
```shell
# Build the container on your local machine
@ -179,14 +189,15 @@ To build a container image, push your image to the registry, and then deploy you
docker push {username}/helloworld-vertx
```
1. Now that your container image is in the registry, you can deploy it to your Knative cluster by
running the `kubectl apply` command:
1. Now that your container image is in the registry, you can deploy it to your
Knative cluster by running the `kubectl apply` command:
```shell
kubectl apply --filename service.yaml
```
Result: A service named `helloworld-vertx` is created in your cluster along with the following resources:
Result: A service named `helloworld-vertx` is created in your cluster along
with the following resources:
- A new immutable revision for the version of the app that you just deployed.
- The following networking resources are created for your app:
@ -194,15 +205,17 @@ To build a container image, push your image to the registry, and then deploy you
- ingress
- service
- load balancer
- Auto scaling is enabled to allow your pods to scale up to meet traffic, and also back down to zero when there is no traffic.
- Auto scaling is enabled to allow your pods to scale up to meet traffic, and
also back down to zero when there is no traffic.
## Testing the sample app
To verify that your sample app has been successfully deployed:
1. View the ingress IP address of your service by running the following
`kubectl get` command. Note that it may take some time for the new service to get assigned
an external IP address, especially if your cluster was newly created.
`kubectl get` command. Note that it may take some time for the new service to
get assigned an external IP address, especially if your cluster was newly
created.
```shell
kubectl get svc knative-ingressgateway --namespace istio-system
@ -215,7 +228,8 @@ To verify that your sample app has been successfully deployed:
knative-ingressgateway LoadBalancer 10.23.247.74 35.203.155.229 80:32380/TCP,443:32390/TCP,32400:32400/TCP 2d
```
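If you prefer to script this lookup, you can extract the `EXTERNAL-IP` column
from output shaped like the example above (a sketch that assumes the same
column layout; the output is inlined here so the snippet runs anywhere):

```shell
# Pull field 4 (EXTERNAL-IP) from the data row; the inlined text mirrors
# the kubectl output shown above.
output='NAME                     TYPE           CLUSTER-IP    EXTERNAL-IP
knative-ingressgateway   LoadBalancer   10.23.247.74  35.203.155.229'
IP_ADDRESS=$(echo "$output" | awk 'NR==2 {print $4}')
echo "$IP_ADDRESS"
```

In practice you would pipe
`kubectl get svc knative-ingressgateway --namespace istio-system` directly
into the `awk` filter instead of the inlined text.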
1. Retrieve the URL for your service by running the following `kubectl get` command:
1. Retrieve the URL for your service by running the following `kubectl get`
command:
```shell
kubectl get services.serving.knative.dev helloworld-vertx --output=custom-columns=NAME:.metadata.name,DOMAIN:.status.domain
@ -228,8 +242,9 @@ To verify that your sample app has been successfully deployed:
helloworld-vertx helloworld-vertx.default.example.com
```
1. Run the following `curl` command to test your deployed sample app. You must replace the
`{IP_ADDRESS}` variable with the URL that you retrieved in the previous step.
1. Run the following `curl` command to test your deployed sample app. You must
replace the `{IP_ADDRESS}` variable with the URL that you retrieved in the
previous step.
```shell
curl -H "Host: helloworld-vertx.default.example.com" http://{IP_ADDRESS}
@ -245,7 +260,8 @@ Congtratualations on deploying your sample Java app to Knative!
## Removing the sample app deployment
To remove the sample app from your cluster, run the following `kubectl delete` command:
To remove the sample app from your cluster, run the following `kubectl delete`
command:
```shell
kubectl delete --filename service.yaml

View File

@ -1,25 +1,28 @@
# Routing across Knative Services
This example shows how to map multiple Knative services to different paths
under a single domain name using the Istio VirtualService concept.
Istio is a general-purpose reverse proxy; therefore, these directions can also be
used to configure routing based on other request data such as headers, or even
to map Knative and external resources under the same domain name.
This example shows how to map multiple Knative services to different paths under
a single domain name using the Istio VirtualService concept. Istio is a
general-purpose reverse proxy; therefore, these directions can also be used to
configure routing based on other request data such as headers, or even to map
Knative and external resources under the same domain name.
In this sample, we set up two web services: `Search` service and `Login`
service, which simply read in an env variable `SERVICE_NAME` and print
`"${SERVICE_NAME} is called"`. We'll then create a VirtualService with host
`example.com`, and define routing rules in the VirtualService so that
`example.com/search` maps to the Search service, and `example.com/login` maps
to the Login service.
`example.com/search` maps to the Search service, and `example.com/login` maps to
the Login service.
## Prerequisites
1. A Kubernetes cluster with [Knative Serving](https://github.com/knative/docs/blob/master/install/README.md) installed.
2. Install [Docker](https://docs.docker.com/get-started/#prepare-your-docker-environment).
1. A Kubernetes cluster with
[Knative Serving](https://github.com/knative/docs/blob/master/install/README.md)
installed.
2. Install
[Docker](https://docs.docker.com/get-started/#prepare-your-docker-environment).
3. Acquire a domain name.
- In this example, we use `example.com`. If you don't have a domain name,
you can modify your hosts file (on Mac or Linux) to map `example.com` to your
- In this example, we use `example.com`. If you don't have a domain name, you
can modify your hosts file (on Mac or Linux) to map `example.com` to your
cluster's ingress IP.
4. Check out the code:
@ -43,8 +46,9 @@ cd $GOPATH/src/github.com/knative/docs
export REPO="gcr.io/<YOUR_PROJECT_ID>"
```
This example shows how to use Google Container Registry (GCR). You will need a Google Cloud Project and to enable the [Google Container Registry
API](https://console.cloud.google.com/apis/library/containerregistry.googleapis.com).
This example shows how to use Google Container Registry (GCR). You will need a
Google Cloud Project and to enable the
[Google Container Registry API](https://console.cloud.google.com/apis/library/containerregistry.googleapis.com).
3. Use Docker to build your application container:
@ -60,10 +64,12 @@ docker build \
docker push "${REPO}/serving/samples/knative-routing-go"
```
5. Replace the image reference path with our published image path in the configuration file `serving/samples/knative-routing-go/sample.yaml`:
5. Replace the image reference path with our published image path in the
configuration file `serving/samples/knative-routing-go/sample.yaml`:
- Manually replace:
`image: github.com/knative/docs/serving/samples/knative-routing-go` with `image: <YOUR_CONTAINER_REGISTRY>/serving/samples/knative-routing-go`
`image: github.com/knative/docs/serving/samples/knative-routing-go` with
`image: <YOUR_CONTAINER_REGISTRY>/serving/samples/knative-routing-go`
Or
@ -84,8 +90,8 @@ kubectl apply --filename serving/samples/knative-routing-go/sample.yaml
## Exploring the Routes
A shared Gateway "knative-shared-gateway" is used within Knative service mesh
for serving all incoming traffic. You can inspect it and its corresponding Kubernetes
service with:
for serving all incoming traffic. You can inspect it and its corresponding
Kubernetes service with:
- Check the shared Gateway:
@ -158,11 +164,13 @@ kubectl get VirtualService entry-route --output yaml
```
3. Send a request to the `Search` service and the `Login` service by using
corresponding URIs. You should get the same results as directly accessing these services.
_ Get the ingress IP:
```shell
corresponding URIs. You should get the same results as directly accessing
these services.
* Get the ingress IP:
```shell
export GATEWAY_IP=`kubectl get svc knative-ingressgateway --namespace istio-system \
--output jsonpath="{.status.loadBalancer.ingress[_]['ip']}"`
  --output jsonpath="{.status.loadBalancer.ingress[0]['ip']}"`
```
* Send a request to the Search service:
@ -178,13 +186,13 @@ kubectl get VirtualService entry-route --output yaml
## How It Works
When an external request with host `example.com` reaches
`knative-shared-gateway` Gateway, the `entry-route` VirtualService will check
if it has `/search` or `/login` URI. If the URI matches, then the host of
request will be rewritten into the host of `Search` service or `Login` service
correspondingly. This resets the final destination of the request.
The request with updated host will be forwarded to `knative-shared-gateway`
Gateway again. The Gateway proxy checks the updated host, and forwards it to
`Search` or `Login` service according to its host setting.
`knative-shared-gateway` Gateway, the `entry-route` VirtualService will check if
it has `/search` or `/login` URI. If the URI matches, then the host of request
will be rewritten into the host of `Search` service or `Login` service
correspondingly. This resets the final destination of the request. The request
with updated host will be forwarded to `knative-shared-gateway` Gateway again.
The Gateway proxy checks the updated host, and forwards it to `Search` or
`Login` service according to its host setting.
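The rewrite step described above can be sketched as a simple host-selection
function. This is only an illustration of the routing rule, not code from the
sample; the internal service hostnames used here are assumptions:

```shell
# Hedged sketch of the entry-route VirtualService logic: the request URI
# selects the internal host the request is rewritten to before it is
# forwarded back through the shared Gateway.
rewrite_host() {
  case "$1" in
    /search*) printf 'search-service.default.svc.cluster.local\n' ;;
    /login*)  printf 'login-service.default.svc.cluster.local\n' ;;
    *)        printf 'example.com\n' ;;  # no rule matched: host unchanged
  esac
}

rewrite_host /search
```

In the real VirtualService, the same decision is expressed declaratively with
`match.uri` and `rewrite.authority` fields rather than imperative code.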
![Object model](images/knative-routing-sample-flow.png)

View File

@ -1,12 +1,20 @@
# Creating a RESTful Service
This sample demonstrates creating a simple RESTful service. The exposed endpoint takes a stock ticker (i.e. stock symbol), then outputs the stock price. The endpoint resource name is defined by an environment variable set in the configuration file.
This sample demonstrates creating a simple RESTful service. The exposed endpoint
takes a stock ticker (i.e. stock symbol), then outputs the stock price. The
endpoint resource name is defined by an environment variable set in the
configuration file.
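That pattern — reading the endpoint resource name from the environment — can be
sketched as follows. The variable name `RESOURCE` is a hypothetical stand-in for
illustration, not necessarily the name the sample uses:

```shell
# Sketch: derive the endpoint path from an environment variable set in the
# configuration file, falling back to a default when it is unset.
RESOURCE="${RESOURCE:-stock}"
echo "Serving requests at /${RESOURCE}"
```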
## Prerequisites
1. A Kubernetes cluster with [Knative Serving](https://github.com/knative/docs/blob/master/install/README.md) installed.
2. Install [Docker](https://docs.docker.com/get-started/#prepare-your-docker-environment).
3. You need to [configure outbound network access](https://github.com/knative/docs/blob/master/serving/outbound-network-access.md) because this application makes an external API request.
1. A Kubernetes cluster with
[Knative Serving](https://github.com/knative/docs/blob/master/install/README.md)
installed.
2. Install
[Docker](https://docs.docker.com/get-started/#prepare-your-docker-environment).
3. You need to
[configure outbound network access](https://github.com/knative/docs/blob/master/serving/outbound-network-access.md)
because this application makes an external API request.
4. Check out the code:
```
@ -29,8 +37,9 @@ cd $GOPATH/src/github.com/knative/docs
export REPO="gcr.io/<YOUR_PROJECT_ID>"
```
To run the sample, you need to have a Google Cloud Platform project, and you also need to enable the [Google Container Registry
API](https://console.cloud.google.com/apis/library/containerregistry.googleapis.com).
To run the sample, you need to have a Google Cloud Platform project, and you
also need to enable the
[Google Container Registry API](https://console.cloud.google.com/apis/library/containerregistry.googleapis.com).
3. Use Docker to build your application container:
@ -46,10 +55,12 @@ docker build \
docker push "${REPO}/serving/samples/rest-api-go"
```
5. Replace the image reference path with our published image path in the configuration files (`serving/samples/rest-api-go/sample.yaml`:
5. Replace the image reference path with our published image path in the
   configuration file (`serving/samples/rest-api-go/sample.yaml`):
- Manually replace:
`image: github.com/knative/docs/serving/samples/rest-api-go` with `image: <YOUR_CONTAINER_REGISTRY>/serving/samples/rest-api-go`
`image: github.com/knative/docs/serving/samples/rest-api-go` with
`image: <YOUR_CONTAINER_REGISTRY>/serving/samples/rest-api-go`
Or
@ -106,7 +117,8 @@ NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S)
knative-ingressgateway LoadBalancer 10.23.247.74 35.203.155.229 80:32380/TCP,443:32390/TCP,32400:32400/TCP 2d
```
2. When the service is ready, export the ingress hostname and IP as environment variables:
2. When the service is ready, export the ingress hostname and IP as environment
variables:
```
export SERVICE_HOST=`kubectl get route stock-route-example --output jsonpath="{.status.domain}"`
@ -115,7 +127,8 @@ export SERVICE_IP=`kubectl get svc knative-ingressgateway --namespace istio-syst
```
- If your cluster is running outside a cloud provider (for example on Minikube),
your services will never get an external IP address. In that case, use the istio `hostIP` and `nodePort` as the service IP:
your services will never get an external IP address. In that case, use the
Istio `hostIP` and `nodePort` as the service IP:
```
export SERVICE_IP=$(kubectl get po --selector knative=ingressgateway --namespace istio-system \

View File

@ -3,26 +3,29 @@
A Go sample that shows how to use Knative to go from source code in a git
repository to a running application with a URL.
This sample uses the [Build](../../../build/README.md) and [Serving](../../README.md)
components of Knative to orchestrate an end-to-end deployment.
This sample uses the [Build](../../../build/README.md) and
[Serving](../../README.md) components of Knative to orchestrate an end-to-end
deployment.
## Prerequisites
You need:
- A Kubernetes cluster with Knative installed. Follow the
[installation instructions](https://github.com/knative/docs/blob/master/install/README.md) if you need
to create one.
- Go installed and configured. This is optional, and only required if you want to run the sample app
locally.
[installation instructions](https://github.com/knative/docs/blob/master/install/README.md)
if you need to create one.
- Go installed and configured. This is optional, and only required if you want
to run the sample app locally.
## Configuring Knative
To use this sample, you need to install a build template and register a secret for Docker Hub.
To use this sample, you need to install a build template and register a secret
for Docker Hub.
### Install the kaniko build template
This sample leverages the [kaniko build template](https://github.com/knative/build-templates/tree/master/kaniko)
This sample leverages the
[kaniko build template](https://github.com/knative/build-templates/tree/master/kaniko)
to perform a source-to-container build on your Kubernetes cluster.
Use kubectl to install the kaniko manifest:
@ -33,10 +36,11 @@ kubectl apply --filename https://raw.githubusercontent.com/knative/build-templat
### Register secrets for Docker Hub
In order to push the container that is built from source to Docker Hub, register a secret in
Kubernetes for authentication with Docker Hub.
In order to push the container that is built from source to Docker Hub, register
a secret in Kubernetes for authentication with Docker Hub.
There are [detailed instructions](https://github.com/knative/docs/blob/master/build/auth.md#basic-authentication-docker)
There are
[detailed instructions](https://github.com/knative/docs/blob/master/build/auth.md#basic-authentication-docker)
available, but these are the key steps:
1. Create a new `Secret` manifest, which is used to store your Docker Hub
@ -68,11 +72,11 @@ available, but these are the key steps:
cGFzc3dvcmQ=
```
> **Note:** If you receive the "invalid option -w" error on macOS,
> try using the `base64 -b 0` command.
> **Note:** If you receive the "invalid option -w" error on macOS, try using
> the `base64 -b 0` command.
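As a quick sketch of that encoding step, using a throwaway value rather than a
real credential (`tr -d '\n'` keeps the output on one line portably across GNU
and BSD `base64`):

```shell
# Encode a value for the data fields of the Secret manifest.
# Output matches the `cGFzc3dvcmQ=` example shown in the manifest above.
secret=$(printf '%s' 'password' | base64 | tr -d '\n')
echo "$secret"  # cGFzc3dvcmQ=
```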
1. Create a new `Service Account` manifest which is used to link the build process to the secret.
Save this file as `service-account.yaml`:
1. Create a new `Service Account` manifest which is used to link the build
process to the secret. Save this file as `service-account.yaml`:
```yaml
apiVersion: v1
@ -83,7 +87,8 @@ secrets:
- name: basic-user-pass
```
1. After you have created the manifest files, apply them to your cluster with `kubectl`:
1. After you have created the manifest files, apply them to your cluster with
`kubectl`:
```shell
$ kubectl apply --filename docker-secret.yaml
@ -102,10 +107,10 @@ you could replace this GitHub repo with your own. The only requirements are that
the repo must contain a `Dockerfile` with the instructions for how to build a
container for the application.
1. You need to create a service manifest which defines the service to deploy, including where
the source code is and which build-template to use. Create a file named
`service.yaml` and copy the following definition. Make sure to replace
`{DOCKER_USERNAME}` with your own Docker Hub username:
1. You need to create a service manifest which defines the service to deploy,
including where the source code is and which build-template to use. Create a
file named `service.yaml` and copy the following definition. Make sure to
replace `{DOCKER_USERNAME}` with your own Docker Hub username:
```yaml
apiVersion: serving.knative.dev/v1alpha1
@ -198,11 +203,13 @@ container for the application.
- Fetch the revision specified from GitHub and build it into a container
- Push the container to Docker Hub
- Create a new immutable revision for this version of the app.
- Network programming to create a route, ingress, service, and load balance for your app.
- Network programming to create a route, ingress, service, and load balancer
  for your app.
- Automatically scale your pods up and down (including to zero active pods).
1. To get the ingress IP for your cluster, use the following command. If your cluster is new,
it can take some time for the service to get an external IP address:
1. To get the ingress IP for your cluster, use the following command. If your
cluster is new, it can take some time for the service to get an external IP
address:
```shell
$ kubectl get svc knative-ingressgateway --namespace istio-system

View File

@ -1,15 +1,16 @@
# Telemetry Sample
This sample runs a simple web server that makes calls to other in-cluster services
and responds to requests with "Hello World!".
The purpose of this sample is to show generating [metrics](../../accessing-metrics.md),
[logs](../../accessing-logs.md) and distributed [traces](../../accessing-traces.md).
This sample also shows how to create a dedicated Prometheus instance rather than
using the default installation.
This sample runs a simple web server that makes calls to other in-cluster
services and responds to requests with "Hello World!". The purpose of this
sample is to show generating [metrics](../../accessing-metrics.md),
[logs](../../accessing-logs.md) and distributed
[traces](../../accessing-traces.md). This sample also shows how to create a
dedicated Prometheus instance rather than using the default installation.
## Prerequisites
1. A Kubernetes cluster with [Knative Serving](https://github.com/knative/docs/blob/master/install/README.md)
1. A Kubernetes cluster with
[Knative Serving](https://github.com/knative/docs/blob/master/install/README.md)
installed.
2. Check if Knative monitoring components are installed:
@ -17,9 +18,11 @@ using the default installation.
kubectl get pods --namespace knative-monitoring
```
- If pods aren't found, install [Knative monitoring component](../../installing-logging-metrics-traces.md).
- If pods aren't found, install
[Knative monitoring component](../../installing-logging-metrics-traces.md).
3. Install [Docker](https://docs.docker.com/get-started/#prepare-your-docker-environment).
3. Install
[Docker](https://docs.docker.com/get-started/#prepare-your-docker-environment).
4. Check out the code:
```
@ -42,9 +45,9 @@ cd $GOPATH/src/github.com/knative/docs
export REPO="gcr.io/<YOUR_PROJECT_ID>"
```
This example shows how to use Google Container Registry (GCR). You will need
a Google Cloud Project and to enable the [Google Container Registry
API](https://console.cloud.google.com/apis/library/containerregistry.googleapis.com).
This example shows how to use Google Container Registry (GCR). You will need a
Google Cloud Project and to enable the
[Google Container Registry API](https://console.cloud.google.com/apis/library/containerregistry.googleapis.com).
3. Use Docker to build your application container:
@ -144,8 +147,7 @@ status:
domain: telemetrysample-route.default.example.com
```
2. Export the ingress hostname and IP as environment
variables:
2. Export the ingress hostname and IP as environment variables:
```
export SERVICE_HOST=`kubectl get route telemetrysample-route --output jsonpath="{.status.domain}"`
@ -172,8 +174,8 @@ for more information.
## Access per Request Traces
You can access to per request traces from Zipkin UI - see [Traces](../../accessing-traces.md)
for more information.
You can access per-request traces from the Zipkin UI - see
[Traces](../../accessing-traces.md) for more information.
## Accessing Custom Metrics

View File

@ -17,8 +17,8 @@ If you want to test and run the app locally:
## Sample code
In this demo we are going to use a simple `golang` REST app called
[rester-tester](https://github.com/mchmarny/rester-tester). It's important
to point out that this application doesn't use any special Knative Serving
[rester-tester](https://github.com/mchmarny/rester-tester). It's important to
point out that this application doesn't use any special Knative Serving
components, nor does it have any Knative Serving SDK dependencies.
### Cloning the sample code
@ -30,8 +30,8 @@ git clone git@github.com:mchmarny/rester-tester.git
cd rester-tester
```
The `rester-tester` application uses [godep](https://github.com/tools/godep)
to manage its own dependencies. Download `godep` and restore the app dependencies:
The `rester-tester` application uses [godep](https://github.com/tools/godep) to
manage its own dependencies. Download `godep` and restore the app dependencies:
```
go get github.com/tools/godep
@ -61,8 +61,8 @@ go build
**Docker**
When running the application locally using Docker, you do not need to install `ffmpeg`;
Docker will install it for you 'inside' of the Docker image.
When running the application locally using Docker, you do not need to install
`ffmpeg`; Docker will install it for you inside the Docker image.
To run the app:
@ -82,13 +82,13 @@ curl -X POST -H "Content-Type: application/json" http://localhost:8080/image \
## Deploying the app to Knative
From this point, you can either deploy a prebuilt image of the app, or build
the app locally and then deploy it.
From this point, you can either deploy a prebuilt image of the app, or build the
app locally and then deploy it.
### Deploying a prebuilt image
You can deploy a prebuilt image of the `rester-tester` app to Knative Serving using
`kubectl` and the included `sample-prebuilt.yaml` file:
You can deploy a prebuilt image of the `rester-tester` app to Knative Serving
using `kubectl` and the included `sample-prebuilt.yaml` file:
```
# From inside the thumbnailer-go directory
@ -97,9 +97,9 @@ kubectl apply --filename sample-prebuilt.yaml
### Building and deploying a version of the app
If you want to build the image yourself, follow these instructions. This sample uses the
[Kaniko build
template](https://github.com/knative/build-templates/blob/master/kaniko/kaniko.yaml)
If you want to build the image yourself, follow these instructions. This sample
uses the
[Kaniko build template](https://github.com/knative/build-templates/blob/master/kaniko/kaniko.yaml)
from the [build-templates](https://github.com/knative/build-templates/) repo.
```shell
@ -115,7 +115,8 @@ kubectl apply --filename https://raw.githubusercontent.com/knative/build-templat
kubectl apply --filename sample.yaml
```
Now, if you look at the `status` of the revision, you will see that a build is in progress:
Now, if you look at the `status` of the revision, you will see that a build is
in progress:
```shell
$ kubectl get revisions --output yaml
@ -136,8 +137,9 @@ Once `BuildComplete` has a `status: "True"`, the revision will be deployed.
## Using the app
To confirm that the app deployed, you can check for the Knative Serving service using `kubectl`.
First, is there an ingress service, and does it have an `EXTERNAL-IP`:
To confirm that the app deployed, you can check for the Knative Serving service
using `kubectl`. First, is there an ingress service, and does it have an
`EXTERNAL-IP`:
```
kubectl get svc knative-ingressgateway --namespace istio-system
@ -147,16 +149,16 @@ knative-ingressgateway LoadBalancer 10.23.247.74 35.203.155.229 80:32380
> Note: It can take a few seconds for the service to show an `EXTERNAL-IP`.
The newly deployed app may take few seconds to initialize. You can check its status
by entering the following command:
The newly deployed app may take a few seconds to initialize. You can check its
status by entering the following command:
```
kubectl --namespace default get pods
```
The Knative Serving ingress service will automatically be assigned an external IP,
so let's capture the IP and Host URL in variables so that we can use them
in `curl` commands:
The Knative Serving ingress service will automatically be assigned an external
IP, so let's capture the IP and Host URL in variables so that we can use them in
`curl` commands:
```
# Put the Host URL into an environment variable.
@ -203,5 +205,5 @@ curl -H "Host: $SERVICE_HOST" \
## Final Thoughts
Although this demo uses an external application, the Knative Serving deployment
steps would be similar for any 'dockerized' app you may already have.
Just copy the `sample.yaml` and change a few variables.
steps would be similar for any 'dockerized' app you may already have. Just copy
the `sample.yaml` and change a few variables.

View File

@ -1,7 +1,8 @@
# Simple Traffic Splitting Between Revisions
This samples builds off the [Creating a RESTful Service](../rest-api-go) sample
to illustrate applying a revision, then using that revision for manual traffic splitting.
to illustrate applying a revision, then using that revision for manual traffic
splitting.
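The shape of a manual traffic split is a `traffic` block listing revisions with
percentages. The following is a hedged sketch only; the revision names are
assumptions based on the sample's naming, and the sample's own YAML files are
the authoritative version:

```yaml
# Sketch: a Route spec splitting traffic 50/50 between two revisions.
apiVersion: serving.knative.dev/v1alpha1
kind: Route
metadata:
  name: stock-route-example
spec:
  traffic:
    - revisionName: stock-configuration-example-00001
      percent: 50
    - revisionName: stock-configuration-example-00002
      percent: 50
```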
## Prerequisites
@ -9,12 +10,16 @@ to illustrate applying a revision, then using that revision for manual traffic s
## Updating the Service
This section describes how to create an revision by deploying a new configuration.
This section describes how to create a revision by deploying a new
configuration.
1. Replace the image reference path with our published image path in the configuration files (`serving/samples/traffic-splitting/updated_configuration.yaml`:
1. Replace the image reference path with our published image path in the
   configuration file
   (`serving/samples/traffic-splitting/updated_configuration.yaml`):
- Manually replace:
`image: github.com/knative/docs/serving/samples/rest-api-go` with `image: <YOUR_CONTAINER_REGISTRY>/serving/samples/rest-api-go`
`image: github.com/knative/docs/serving/samples/rest-api-go` with
`image: <YOUR_CONTAINER_REGISTRY>/serving/samples/rest-api-go`
Or
@ -31,14 +36,16 @@ This section describes how to create an revision by deploying a new configuratio
kubectl apply --filename serving/samples/traffic-splitting/updated_configuration.yaml
```
3. Once deployed, traffic will shift to the new revision automatically. Verify the deployment by checking the route status:
3. Once deployed, traffic will shift to the new revision automatically. Verify
the deployment by checking the route status:
```
kubectl get route --output yaml
```
4. When the new route is ready, you can access the new endpoints:
The hostname and IP address can be found in the same manner as the [Creating a RESTful Service](../rest-api-go) sample:
The hostname and IP address can be found in the same manner as the
[Creating a RESTful Service](../rest-api-go) sample:
```
export SERVICE_HOST=`kubectl get route stock-route-example --output jsonpath="{.status.domain}"`
@ -108,8 +115,8 @@ kubectl apply --filename serving/samples/rest-api-go/sample.yaml
kubectl get route --output yaml
```
Once updated, you can make `curl` requests to the API using either `stock` or `share`
endpoints.
Once updated, you can make `curl` requests to the API using either `stock` or
`share` endpoints.
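Those requests follow the same Host-header pattern used throughout the sample.
The values below are placeholders standing in for the `SERVICE_HOST` and
`SERVICE_IP` exports made earlier:

```shell
# Hypothetical values; in practice these come from the earlier exports.
SERVICE_HOST="stock-route-example.default.example.com"
SERVICE_IP="1.2.3.4"

# After the route update, both endpoint names reach the same service.
stock_url="http://${SERVICE_IP}/stock"
share_url="http://${SERVICE_IP}/share"
echo "curl --header \"Host: ${SERVICE_HOST}\" ${stock_url}"
echo "curl --header \"Host: ${SERVICE_HOST}\" ${share_url}"
```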
## Clean Up

View File

@ -23,33 +23,33 @@ Operators can do the following steps to configure the Fluentd DaemonSet for
collecting `stdout/stderr` logs from the containers:
1. Replace `900.output.conf` part in
[100-fluentd-configmap.yaml](https://https://github.com/knative/serving/blob/master/config/monitoring/150-elasticsearch/100-fluentd-configmap.yaml) with the
desired output configuration. Knative provides a sample for sending logs to
Elasticsearch or Stackdriver. Developers can simply use `100-fluentd-configmap.yaml`
or override any with other configuration.
2. Replace the `image` field of `fluentd-ds` container
in [fluentd-ds.yaml](https://github.com/knative/serving/blob/master/third_party/config/monitoring/common/kubernetes/fluentd/fluentd-ds.yaml)
with the Fluentd image including the desired Fluentd output plugin.
See [here](image/fluentd/README.md) for the requirements of Flunetd image
on Knative.
   [100-fluentd-configmap.yaml](https://github.com/knative/serving/blob/master/config/monitoring/150-elasticsearch/100-fluentd-configmap.yaml)
   with the desired output configuration. Knative provides a sample for sending
   logs to Elasticsearch or Stackdriver. Developers can simply use
   `100-fluentd-configmap.yaml` or override it with other configuration.
2. Replace the `image` field of `fluentd-ds` container in
[fluentd-ds.yaml](https://github.com/knative/serving/blob/master/third_party/config/monitoring/common/kubernetes/fluentd/fluentd-ds.yaml)
   with the Fluentd image including the desired Fluentd output plugin. See
   [here](image/fluentd/README.md) for the requirements of the Fluentd image on
   Knative.
### Configure the Sidecar for log files under /var/log
Currently operators have to configure the Fluentd Sidecar separately for
collecting log files under `/var/log`. An
[effort](https://github.com/knative/serving/issues/818)
is in process to get rid of the sidecar. The steps to configure are:
[effort](https://github.com/knative/serving/issues/818) is in process to get rid
of the sidecar. The steps to configure are:
1. Replace `logging.fluentd-sidecar-output-config` flag in
[config-observability](https://github.com/knative/serving/blob/master/config/config-observability.yaml) with the
desired output configuration. **NOTE**: The Fluentd DaemonSet is in
`monitoring` namespace while the Fluentd sidecar is in the namespace same with
the app. There may be small differences between the configuration for DaemonSet
and sidecar even though the desired backends are the same.
[config-observability](https://github.com/knative/serving/blob/master/config/config-observability.yaml)
   with the desired output configuration. **NOTE**: The Fluentd DaemonSet is in
   the `monitoring` namespace while the Fluentd sidecar is in the same
   namespace as the app. There may be small differences between the
   configuration for the DaemonSet and sidecar even though the desired
   backends are the same.
1. Replace `logging.fluentd-sidecar-image` flag in
[config-observability](https://github.com/knative/serving/blob/master/config/config-observability.yaml)
with the Fluentd image including the desired Fluentd output plugin. In theory,
this is the same with the one for Fluentd DaemonSet.
with the Fluentd image including the desired Fluentd output plugin. In
theory, this is the same with the one for Fluentd DaemonSet.
## Deploying
@ -81,8 +81,8 @@ the Elasticsearch and Kibana services. Knative provides this sample:
kubectl apply --recursive --filename third_party/config/monitoring/elasticsearch
```
See [here](/serving/installing-logging-metrics-traces.md) for deploying the whole Knative
monitoring components.
See [here](/serving/installing-logging-metrics-traces.md) for deploying the
whole Knative monitoring components.
## Uninstalling

View File

@ -1,20 +1,22 @@
# Setting up a custom domain
By default, Knative Serving routes use `example.com` as the default domain.
The fully qualified domain name for a route by default is `{route}.{namespace}.{default-domain}`.
By default, Knative Serving routes use `example.com` as the default domain. The
fully qualified domain name for a route by default is
`{route}.{namespace}.{default-domain}`.
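That naming scheme can be sketched directly; the values here are examples
matching the defaults described above:

```shell
# Compose a route's fully qualified domain name:
# {route}.{namespace}.{default-domain}
route="helloworld-go"
namespace="default"
default_domain="example.com"
fqdn="${route}.${namespace}.${default_domain}"
echo "$fqdn"  # helloworld-go.default.example.com
```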
To change the `{default-domain}` value there are a few steps involved:
## Edit using kubectl
1. Edit the domain configuration config-map to replace `example.com`
with your own domain, for example `mydomain.com`:
1. Edit the domain configuration config-map to replace `example.com` with your
own domain, for example `mydomain.com`:
```shell
kubectl edit cm config-domain --namespace knative-serving
```
This command opens your default text editor and allows you to edit the config map.
This command opens your default text editor and allows you to edit the config
map.
```yaml
apiVersion: v1
@ -24,8 +26,9 @@ To change the {default-domain} value there are a few steps involved:
[...]
```
1. Edit the file to replace `example.com` with the domain you'd like to use and save your changes.
In this example, we configure `mydomain.com` for all routes:
1. Edit the file to replace `example.com` with the domain you'd like to use and
save your changes. In this example, we configure `mydomain.com` for all
routes:
```yaml
apiVersion: v1
@ -40,8 +43,8 @@ To change the {default-domain} value there are a few steps involved:
You can also apply an updated domain configuration:
1. Create a new file, `config-domain.yaml` and paste the following text,
replacing the `example.org` and `example.com` values with the new
domain you want to use:
replacing the `example.org` and `example.com` values with the new domain you
want to use:
```yaml
apiVersion: v1
@ -70,12 +73,13 @@ You can also apply an updated domain configuration:
## Deploy an application
> If you have an existing deployment, Knative will reconcile the change made to
> the configuration map and automatically update the host name for all of the deployed
> services and routes.
> the configuration map and automatically update the host name for all of the
> deployed services and routes.
Deploy an app (for example, [`helloworld-go`](./samples/helloworld-go/README.md)), to
your cluster as normal. You can check the customized domain in Knative Route "helloworld-go" with
the following command:
Deploy an app (for example,
[`helloworld-go`](./samples/helloworld-go/README.md)) to your cluster as
normal. You can check the customized domain of the Knative Route
"helloworld-go" with the following command:
```shell
kubectl get route helloworld-go --output jsonpath="{.status.domain}"
@ -106,7 +110,8 @@ echo -e "$GATEWAY_IP\t$DOMAIN_NAME" | sudo tee -a /etc/hosts
```
You can now access your domain from the browser in your machine and do some quick checks.
You can now access your domain from the browser in your machine and do some
quick checks.
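The `/etc/hosts` entry appended by the command above has this shape; the IP and
domain below are example values only:

```shell
# Build a hosts-file entry mapping the custom domain to the gateway IP.
GATEWAY_IP="35.237.28.44"                          # example gateway IP
DOMAIN_NAME="helloworld-go.default.mydomain.com"   # example route domain
entry="$(printf '%s\t%s' "$GATEWAY_IP" "$DOMAIN_NAME")"
echo "$entry"
```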
## Publish your Domain
@ -114,36 +119,37 @@ Follow these steps to make your domain publicly accessible:
### Set static IP for Knative Gateway
You might want to [set a static IP for your Knative gateway](gke-assigning-static-ip-address.md),
You might want to
[set a static IP for your Knative gateway](gke-assigning-static-ip-address.md),
so that the gateway IP does not change each time your cluster is restarted.
### Update your DNS records
To publish your domain, you need to update your DNS provider to point to the
IP address for your service ingress.
To publish your domain, you need to update your DNS provider to point to the IP
address for your service ingress.
- Create a [wildcard record](https://support.google.com/domains/answer/4633759)
for the namespace and custom domain to the ingress IP Address, which would enable
hostnames for multiple services in the same namespace to work without creating
additional DNS entries.
  for the namespace and custom domain to the ingress IP address, which would
  enable hostnames for multiple services in the same namespace to work without
  creating additional DNS entries.
```dns
*.default.mydomain.com 59 IN A 35.237.28.44
```
- Create an A record to point from the fully qualified domain name to the IP
address of your Knative gateway. This step needs to be done for each Knative Service or
Route created.
address of your Knative gateway. This step needs to be done for each Knative
Service or Route created.
```dns
helloworld-go.default.mydomain.com 59 IN A 35.237.28.44
```
If you are using Google Cloud DNS, you can find step-by-step instructions
in the [Cloud DNS quickstart](https://cloud.google.com/dns/quickstart).
If you are using Google Cloud DNS, you can find step-by-step instructions in the
[Cloud DNS quickstart](https://cloud.google.com/dns/quickstart).
Once the domain update has propagated, you can access your app using
the fully qualified domain name of the deployed route, for example
Once the domain update has propagated, you can access your app using the fully
qualified domain name of the deployed route, for example
`http://helloworld-go.default.mydomain.com`
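One way to sanity-check the setup (the domain and IP are the examples used
above; DNS propagation can take a while):

```shell
# Confirm the record resolves to your ingress IP.
host helloworld-go.default.mydomain.com

# Until DNS propagates, you can still exercise the route by sending the
# Host header directly to the gateway IP.
curl -H "Host: helloworld-go.default.mydomain.com" http://35.237.28.44

# After propagation, the fully qualified name works on its own.
curl http://helloworld-go.default.mydomain.com
```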
---


# Configuring HTTPS with a custom certificate
If you already have an SSL/TLS certificate for your domain you can follow the
steps below to configure Knative to use your certificate and enable HTTPS
connections.
Before you begin, you will need to
[configure Knative to use your custom domain](./using-a-custom-domain.md).
**Note:** due to limitations in Istio, Knative only supports a single
certificate per cluster. If you will serve multiple domains in the same cluster,
make sure the certificate is signed for all the domains.
## Add the Certificate and Private Key into a secret
> Note, if you don't have a certificate, you can find instructions on obtaining
> an SSL/TLS certificate using LetsEncrypt at the bottom of this page.
Assuming you have two files, `cert.pk` which contains your certificate private
key, and `cert.pem` which contains the public certificate, you can use the
following command to create a secret that stores the certificate. Note that the
name of the secret, `istio-ingressgateway-certs`, is required.
```shell
kubectl create --namespace istio-system secret tls istio-ingressgateway-certs \
  --key cert.pk \
  --cert cert.pem
```
## Configure the Knative shared Gateway to use the new secret
Once you have created a secret that contains the certificate, you need to update
the Gateway spec to use HTTPS.
To edit the shared gateway, run:
```shell
kubectl edit gateway knative-shared-gateway --namespace knative-serving
```
Change the Gateway spec to include the `tls:` section as shown below, then save
the changes.
```yaml
# Please edit the object below. Lines beginning with a '#' will be ignored.
spec:
  servers:
    - hosts:
        - "*"
      port:
        name: https
        number: 443
        protocol: HTTPS
      tls:
        mode: SIMPLE
        privateKey: /etc/istio/ingressgateway-certs/tls.key
        serverCertificate: /etc/istio/ingressgateway-certs/tls.crt
```
Once the change has been made, you can now use the HTTPS protocol to access your
deployed services.
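For example, you could verify the TLS configuration with curl (the domain is
the example from the custom-domain guide; add `-k` only if you are testing with
a self-signed certificate):

```shell
# Inspect the TLS handshake and certificate details.
curl -v https://helloworld-go.default.mydomain.com
```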
## Obtaining an SSL/TLS certificate using LetsEncrypt through CertBot
You can use [Let's Encrypt][le] to obtain a certificate manually.

[le]: https://letsencrypt.org/
1. Install the `certbot-auto` script from the
[Certbot website](https://certbot.eff.org/docs/install.html#certbot-auto).
1. Use the certbot to request a certificate, using DNS validation. The certbot
tool will walk you through validating your domain ownership by creating TXT
records in your domain.
```shell
./certbot-auto certonly --manual --preferred-challenges dns -d '*.default.yourdomain.com'
```
1. When certbot is complete, you will have two output files, `privkey.pem` and
`fullchain.pem`. These files map to the `cert.pk` and `cert.pem` files used
above.
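For instance, assuming certbot's default output location, you could copy the
files to the names used earlier in this guide (the `live/` directory name
varies with your domain):

```shell
# Certbot writes its output under /etc/letsencrypt/live/<domain>/ by default.
sudo cp /etc/letsencrypt/live/default.yourdomain.com/privkey.pem cert.pk
sudo cp /etc/letsencrypt/live/default.yourdomain.com/fullchain.pem cert.pem
```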
## Obtaining an SSL/TLS certificate using LetsEncrypt with cert-manager
You can also use [cert-manager](https://github.com/jetstack/cert-manager) to
automate the steps required to generate a TLS certificate using LetsEncrypt.
### Install cert-manager
To install cert-manager into your cluster, use kubectl to apply the cert-manager
manifest:
```shell
kubectl apply --filename https://raw.githubusercontent.com/jetstack/cert-manager/release-0.5/contrib/manifests/cert-manager/with-rbac.yaml
```
or see the
[cert-manager docs](https://cert-manager.readthedocs.io/en/latest/getting-started/)
for more ways to install and customize.
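To confirm the installation succeeded, one quick check (the `cert-manager`
namespace is an assumption based on the manifest's defaults; adjust if you
customized it):

```shell
# All cert-manager pods should report Running.
kubectl get pods --namespace cert-manager
```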
### Configure cert-manager for your DNS provider
Once you have installed cert-manager, you'll need to configure it for your DNS
hosting provider.
Knative currently only works with the `DNS01` challenge type for LetsEncrypt,
which is only supported by a
[small number of DNS providers through cert-manager](http://docs.cert-manager.io/en/latest/reference/issuers/acme/dns01.html?highlight=DNS#supported-dns01-providers).
Instructions for configuring cert-manager are provided for the following DNS
hosts:
- [Google Cloud DNS](using-cert-manager-on-gcp.md)

---

# Configuring Knative and CertManager for Google Cloud DNS
These instructions assume you have already set up a Knative cluster and installed
cert-manager into your cluster. For more information, see
[using an SSL certificate](using-an-ssl-cert.md#install-cert-manager). They also
assume you have already set up your managed zone with Cloud DNS as part of
configuring the domain to map to your IP address.
To automate the generation of a certificate with cert-manager and LetsEncrypt,
we will use a `DNS01` challenge type, which requires the domain owner to add a
TXT record to their zone to prove ownership. Other challenge types are not
currently supported by Knative.
## Creating a Cloud DNS service account
To add the TXT record, configure Knative with a service account that can be used
by cert-manager to create and update the DNS record.
To begin, create a new service account with the project role `dns.admin`:
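A sketch of that step (the service-account name `cert-manager-dns` and
`$PROJECT_ID` value are placeholders for illustration):

```shell
# Placeholder project ID and account name; substitute your own.
export PROJECT_ID=my-gcp-project

# Create the service account.
gcloud iam service-accounts create cert-manager-dns \
  --display-name "cert-manager DNS01 solver"

# Grant it the dns.admin role on the project.
export CLOUD_DNS_SA=cert-manager-dns@$PROJECT_ID.iam.gserviceaccount.com
gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member "serviceAccount:$CLOUD_DNS_SA" \
  --role roles/dns.admin
```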
```shell
gcloud iam service-accounts keys create ~/key.json \
--iam-account=$CLOUD_DNS_SA
```
After obtaining the service account secret, publish it to your cluster. This
command uses the secret name `cloud-dns-key`, but you can choose a different
name.
```shell
# Upload that as a secret in your Kubernetes cluster.
# ...
rm ~/key.json
```
## Configuring CertManager to use your DNS admin service account
Next, configure cert-manager to request new certificates and verify the
challenges using DNS.
### Specifying a certificate issuer
This example configures cert-manager to use LetsEncrypt, but you can use any
certificate provider that supports the ACME protocol.
This example uses the `dns01` challenge type, which will enable certificate
generation and wildcard certificates.
```shell
kubectl apply --filename - <<EOF
# ...
EOF
```
### Specifying the certificate
Next, configure which certificate issuer to use and which secret you will
publish the certificate into. Use the Secret `istio-ingressgateway-certs`. The
following steps will overwrite this Secret if it already exists.
```shell
# Change this value to the domain you want to use.
export DOMAIN=mydomain.com

kubectl apply --filename - <<EOF
# ...
EOF
```
Now you can access your services via HTTPS; cert-manager will keep your
certificates up-to-date, replacing them before the certificate expires.
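To keep an eye on the issued certificate, you can inspect the secret and the
cert-manager logs (the secret name is the one used in this guide; the
`app=cert-manager` label selector is an assumption about the deployment's
labels):

```shell
# The gateway certificate lands in this secret.
kubectl get secret istio-ingressgateway-certs --namespace istio-system

# Watch cert-manager's issuance and renewal activity.
kubectl logs --namespace cert-manager --selector app=cert-manager --tail 20
```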

---

[ExternalDNS](https://github.com/kubernetes-incubator/external-dns) is a tool
that synchronizes exposed Kubernetes Services and Ingresses with DNS providers.
This doc explains how to set up ExternalDNS within a Knative cluster using
[Google Cloud DNS](https://cloud.google.com/dns/) to automate the process of
publishing the Knative domain.
## Prerequisites
1. A Google Kubernetes Engine cluster with Cloud DNS scope. You can create a GKE
cluster with Cloud DNS scope by entering the following command:
```shell
gcloud container clusters create "external-dns" \
--scopes "https://www.googleapis.com/auth/ndev.clouddns.readwrite"
```
1. [Knative Serving](https://github.com/knative/docs/blob/master/install/README.md)
installed on your cluster.
1. A public domain that will be used in Knative.
1. Knative configured to use your custom domain.
You can find detailed instructions for other cloud providers in the ExternalDNS
documentation.
Skip this step if you already have a DNS provider for your domain.
Here is a
[list](https://github.com/kubernetes-incubator/external-dns#the-latest-release-v05)
of DNS providers supported by ExternalDNS. Choose a DNS provider from the list.
### Create a DNS zone for managing DNS records
A DNS zone which will contain the managed DNS records needs to be created.
Assume your custom domain is `external-dns-test.my-org.do`.
Use the following command to create a DNS zone with
[Google Cloud DNS](https://cloud.google.com/dns/):
```shell
gcloud dns managed-zones create "external-dns-zone" \
  --dns-name "external-dns-test.my-org.do." \
  --description "Automatically managed zone by kubernetes.io/external-dns"
```

In this case, the DNS nameservers are `ns-cloud-{e1-e4}.googledomains.com`.
Yours could differ slightly, e.g. {a1-a4}, {b1-b4} etc.
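If you need to look the nameservers up again later, one way to do so (using the
zone name created above) is:

```shell
# Print the nameservers assigned to the managed zone.
gcloud dns managed-zones describe "external-dns-zone" \
  --format "value(nameServers)"
```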
If this zone has a parent zone, you need to add NS records of this zone into
the parent zone so that this zone can be found from the parent. Assuming the
parent zone is `my-org-do` and the parent domain is `my-org.do`, and the parent
zone is also hosted at Google Cloud DNS, you can follow these steps to add the
NS records of this zone into the parent zone:
```shell
gcloud dns record-sets transaction start --zone "my-org-do"
# Add the NS records of the new zone here, then commit the transaction:
gcloud dns record-sets transaction execute --zone "my-org-do"
```
### Deploy ExternalDNS
Use the following command to apply the
[manifest](https://github.com/kubernetes-incubator/external-dns/blob/master/docs/tutorials/gke.md#manifest-for-clusters-without-rbac-enabled)
to install ExternalDNS:
```shell
cat <<EOF | kubectl apply --filename -
# ...
EOF
```

An annotation that tells ExternalDNS which hostname to publish needs to be added
to the Knative gateway service:

```shell
kubectl edit svc knative-ingressgateway --namespace istio-system
```
This command opens your default text editor and allows you to add the annotation
to `knative-ingressgateway` service. After you've added your annotation, your
file may look similar to this:
```yaml
apiVersion: v1
kind: Service
# ...
```
### Verify domain has been published
You can check if the domain has been published to the Internet by entering the
following command:
```shell
host test.external-dns-test.my-org.do

```

---

This directory contains tests and testing docs.
- [Unit tests](#running-unit-tests) currently reside in the codebase alongside
the code they test
- [End-to-end tests](#running-end-to-end-tests)
## Running unit tests
TODO(#66): Write real unit tests.

## Running end-to-end tests
### Dependencies
You might need to install `kubetest` in order to run the end-to-end tests
locally:
```shell
go get -u k8s.io/test-infra/kubetest
```
Simply run the `e2e-tests.sh` script, setting `$PROJECT_ID` first to your GCP
project. The script will create a GKE cluster, install Knative, run the
end-to-end tests and delete the cluster.
If you already have a cluster set up, ensure that `$PROJECT_ID` is empty and call
the script with the `--run-tests` argument. Note that this requires Knative Build
to be installed and configured for your particular setup.
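For example (the project ID is a placeholder):

```shell
# Full run: creates a GKE cluster, installs Knative, runs the tests, and
# deletes the cluster.
PROJECT_ID=my-gcp-project ./e2e-tests.sh

# Run against an existing cluster (uses the current kubectl context).
PROJECT_ID="" ./e2e-tests.sh --run-tests
```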
---