Format markdown (#620)

Produced via: `prettier --write --proseWrap=always $(find -name '*.md' | grep -v vendor)`
This commit is contained in:
mattmoor-sockpuppet 2018-12-03 13:54:25 -08:00 committed by Knative Prow Robot
parent b83769e72d
commit 41462c69e1
83 changed files with 3472 additions and 3252 deletions


## Expected Behavior
## Actual Behavior
## Steps to Reproduce the Problem
1.


Fixes #(issue-number)
## Proposed Changes
-
-
-


Each of the components under the Knative project attempts to identify common patterns and
codify the best practices that are shared by successful real-world Kubernetes-based frameworks and
applications. Knative components focus on solving many mundane but difficult tasks such as:
- [Deploying a container](./install/getting-started-knative-app.md)
- [Orchestrating source-to-URL workflows on Kubernetes](./serving/samples/source-to-url-go/)
- [Routing and managing traffic with blue/green deployment](./serving/samples/blue-green-deployment.md)
- [Automatic scaling and sizing workloads based on demand](./serving/samples/autoscale-go)
- [Binding running services to eventing ecosystems](./eventing/samples/kubernetes-event-source)
Developers on Knative can use familiar idioms, languages, and frameworks to deploy any workload:
functions, applications, or containers.
The following Knative components are currently available:
- [Build](https://github.com/knative/build) - Source-to-container build orchestration
- [Eventing](https://github.com/knative/eventing) - Management and delivery of events
- [Serving](https://github.com/knative/serving) - Request-driven compute that can scale to zero
## Audience
Follow the links in this section to learn more about Knative.
### Getting started
- [Installing Knative](./install/README.md)
- [Getting started with app deployment](./install/getting-started-knative-app.md)
- [Getting started with serving](./serving)
- [Getting started with builds](./build)
- [Getting started with eventing](./eventing)
### Configuration and networking
- [Configuring outbound network access](./serving/outbound-network-access.md)
- [Using a custom domain](./serving/using-a-custom-domain.md)
- [Assigning a static IP address for Knative on Google Kubernetes Engine](./serving/gke-assigning-static-ip-address.md)
- [Configuring HTTPS with a custom certificate](./serving/using-an-ssl-cert.md)
### Samples and demos
- [Autoscaling](./serving/samples/autoscale-go/README.md)
- [Source-to-URL deployment](./serving/samples/source-to-url-go/README.md)
- [Binding running services to eventing ecosystems](./eventing/samples/event-flow/README.md)
- [Telemetry](./serving/samples/telemetry-go/README.md)
- [REST API sample](./serving/samples/rest-api-go/README.md)
- [All samples for serving](./serving/samples/)
- [All samples for eventing](./eventing/samples/)
### Logging and metrics
- [Installing logging, metrics and traces](./serving/installing-logging-metrics-traces.md)
- [Accessing logs](./serving/accessing-logs.md)
- [Accessing metrics](./serving/accessing-metrics.md)
- [Accessing traces](./serving/accessing-traces.md)
- [Setting up a logging plugin](./serving/setting-up-a-logging-plugin.md)
### Debugging
- [Debugging application issues](./serving/debugging-application-issues.md)
- [Debugging performance issues](./serving/debugging-performance-issues.md)
---


## Key features of Knative Builds
- A `Build` can include multiple `steps` where each step specifies a `Builder`.
- A `Builder` is a type of container image that you create to accomplish any
task, whether that's a single step in a process, or the whole process itself.
- The `steps` in a `Build` can push to a repository.
- A `BuildTemplate` can be used to define reusable templates.
- The `source` in a `Build` can be defined to mount data to a Kubernetes
Volume, and supports:
- `git` repositories
- Google Cloud Storage
- An arbitrary container image
- Authenticate with `ServiceAccount` using Kubernetes Secrets.
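
Taken together, these features surface as fields of a single `Build` resource. A minimal sketch, with the repository URL, Builder image, and step arguments as placeholders rather than working values:

```yaml
apiVersion: build.knative.dev/v1alpha1
kind: Build
metadata:
  name: example-build
spec:
  serviceAccountName: build-bot # authenticate using attached Kubernetes Secrets
  source:
    git:
      url: https://github.com/example/repo.git # placeholder repository
      revision: master
  steps:
    - name: build
      image: gcr.io/example/builder # placeholder Builder image
      args: ["build", "."]
```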
### Learn more
See the following reference topics for information about each of the build
components:
- [`Build`](https://github.com/knative/docs/blob/master/build/builds.md)
- [`BuildTemplate`](https://github.com/knative/docs/blob/master/build/build-templates.md)
- [`Builder`](https://github.com/knative/docs/blob/master/build/builder-contract.md)
- [`ServiceAccount`](https://github.com/knative/docs/blob/master/build/auth.md)
## Install the Knative Build component
Knative Serving is not required to create and run builds.
Before you can run a Knative Build, you must install the Knative Build
component in your Kubernetes cluster:
- For details about installing a new instance of Knative in your Kubernetes
cluster, see [Installing Knative](../install/README.md).
- If you have a component of Knative installed and running, you can
[add and install the Knative Build component](installing-build-component.md).
## Configuration syntax example
```yaml
spec:
  # ... (remaining fields elided in this diff)
  args: ["echo", "hello-example", "build"]
```
## Get started with Knative Build samples
Use the following samples to learn how to configure your Knative Builds to
Tip: Review and reference multiple samples to piece together more complex builds.
#### Simple build samples
- [Collection of simple test builds](https://github.com/knative/build/tree/master/test).
#### Build templates
- [Repository of sample build templates](https://github.com/knative/build-templates).
#### Complex samples
- [Use Knative to build apps from source code and then run those containers](https://github.com/knative/docs/blob/master/serving/samples/source-to-url-go).
## Related info
If you are interested in contributing to the Knative Build project, see the
[Knative Build code repository](https://github.com/knative/build).
---


The build system supports two types of authentication, using Kubernetes'
first-class `Secret` types:
- `kubernetes.io/basic-auth`
- `kubernetes.io/ssh-auth`
Secrets of these types can be made available to the `Build` by attaching them
to the `ServiceAccount` under which it runs.
```yaml
metadata:
  name: ssh-key
  annotations:
    build.knative.dev/git-0: https://github.com # Described below
type: kubernetes.io/ssh-auth
data:
  ssh-privatekey: <base64 encoded>
```
```yaml
metadata:
  name: build-bot
secrets:
  - name: ssh-key
```
1. Then use that `ServiceAccount` in your `Build`:
```yaml
metadata:
  name: basic-user-pass
  annotations:
    build.knative.dev/git-0: https://github.com # Described below
type: kubernetes.io/basic-auth
stringData:
  username: <username>
  password: <password>
```
```yaml
metadata:
  name: build-bot
secrets:
  - name: basic-user-pass
```
1. Use that `ServiceAccount` in your `Build`:
```yaml
metadata:
  name: basic-user-pass
  annotations:
    build.knative.dev/docker-0: https://gcr.io # Described below
type: kubernetes.io/basic-auth
stringData:
  username: <username>
  password: <password>
```
```yaml
metadata:
  name: build-bot
secrets:
  - name: basic-user-pass
```
1. Use that `ServiceAccount` in your `Build`:
Given URLs, usernames, and passwords of the form: `https://url{n}.com`,
`user{n}`, and `pass{n}`, generate the following for Git:
```
=== ~/.gitconfig ===
[credential]
...
```
Given hostnames, private keys, and `known_hosts` of the form: `url{n}.com`,
`key{n}`, and `known_hosts{n}`, generate the following for Git:
```
=== ~/.ssh/id_key1 ===
{contents of key1}
...
```
Note: Because `known_hosts` is a non-standard extension of
`kubernetes.io/ssh-auth`, when it is not present this will be generated
through `ssh-keygen url{n}.com` instead.
### Least privilege


This is used only for the purposes of demonstration.
```yaml
spec:
  parameters:
    # This has no default, and is therefore required.
    - name: IMAGE
      description: Where to publish the resulting image.

    # These may be overridden, but provide sensible defaults.
    - name: DIRECTORY
      description: The directory containing the build context.
      default: /workspace
    - name: DOCKERFILE_NAME
      description: The name of the Dockerfile
      default: Dockerfile

  steps:
    - name: dockerfile-build
      image: gcr.io/cloud-builders/docker
      workingDir: "${DIRECTORY}"
      args:
        [
          "build",
          "--no-cache",
          "--tag",
          "${IMAGE}",
          "--file",
          "${DOCKERFILE_NAME}",
          ".",
        ]
      volumeMounts:
        - name: docker-socket
          mountPath: /var/run/docker.sock

    - name: dockerfile-push
      image: gcr.io/cloud-builders/docker
      args: ["push", "${IMAGE}"]
      volumeMounts:
        - name: docker-socket
          mountPath: /var/run/docker.sock

  # As an implementation detail, this template mounts the host's daemon socket.
  volumes:
    - name: docker-socket
      hostPath:
        path: /var/run/docker.sock
        type: Socket
```
In this example, `parameters` describes the formal arguments for the template.
For the sake of illustrating re-use, here are several example Builds
instantiating the `BuildTemplate` above (`dockerfile-build-and-push`).
Build `mchmarny/rester-tester`:
```yaml
spec:
  source:
    # ... (source details elided in this diff)
  template:
    name: dockerfile-build-and-push
    arguments:
      - name: IMAGE
        value: gcr.io/my-project/rester-tester
```
Build `googlecloudplatform/cloud-builder`'s `wget` builder:
```yaml
spec:
  source:
    # ... (source details elided in this diff)
  template:
    name: dockerfile-build-and-push
    arguments:
      - name: IMAGE
        value: gcr.io/my-project/wget
      # Optional override to specify the subdirectory containing the Dockerfile
      - name: DIRECTORY
        value: /workspace/wget
```
Build `googlecloudplatform/cloud-builder`'s `docker` builder with `17.06.1`:
```yaml
spec:
  source:
    # ... (source details elided in this diff)
  template:
    name: dockerfile-build-and-push
    arguments:
      - name: IMAGE
        value: gcr.io/my-project/docker
      # Optional overrides
      - name: DIRECTORY
        value: /workspace/docker
      - name: DOCKERFILE_NAME
        value: Dockerfile-17.06.1
```
---


Overriding `command:` and `args:`, for example:
```yaml
steps:
  - image: ubuntu
    command: ["/bin/bash"]
    args: ["-c", "echo hello $FOO"]
    env:
      - name: "FOO"
        value: "world"
```
### Specialized Builders
It is also possible for advanced users to create purpose-built builders.
One example is the ["FTL" builders](https://github.com/GoogleCloudPlatform/runtimes-common/tree/master/ftl#ftl).
## What are the Builder conventions?
Builders should expect a Build to implement the following conventions:
- `/workspace`: The default working directory will be `/workspace`, which is
a volume that is filled by the `source:` step and shared across build `steps:`.
- `/builder/home`: This volume is exposed to steps via `$HOME`.
- Credentials attached to the Build's service account may be exposed as Git or
Docker credentials as outlined [here](./auth.md).
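
As a sketch of these conventions, a step can rely on `/workspace` and `$HOME` without declaring either volume; the `ubuntu` image and commands here are only illustrative:

```yaml
steps:
  - name: list-sources
    image: ubuntu # illustrative image
    # /workspace is implicitly the working directory, filled by `source:`
    args: ["ls", "/workspace"]
  - name: show-home
    image: ubuntu
    # $HOME points at the shared /builder/home volume
    command: ["/bin/bash"]
    args: ["-c", "ls $HOME"]
```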
---


A build runs until all `steps` have completed or until a failure occurs.
---
- [Syntax](#syntax)
- [Steps](#steps)
- [Template](#template)
- [Source](#source)
- [Service Account](#service-account)
- [Volumes](#volumes)
- [Timeout](#timeout)
- [Examples](#examples)
---
To define a configuration file for a `Build` resource, you can specify the
following fields:
- Required:
- [`apiVersion`][kubernetes-overview] - Specifies the API version, for example
`build.knative.dev/v1alpha1`.
- [`kind`][kubernetes-overview] - Specify the `Build` resource object.
- [`metadata`][kubernetes-overview] - Specifies data to uniquely identify the
`Build` resource object, for example a `name`.
- [`spec`][kubernetes-overview] - Specifies the configuration information for
your `Build` resource object. Build steps must be defined through either of
the following fields:
- [`steps`](#steps) - Specifies one or more container images that you want
to run in your build.
- [`template`](#template) - Specifies a reusable build template that
includes one or more `steps`.
- Optional:
- [`source`](#source) - Specifies a container image that provides information
to your build.
- [`serviceAccountName`](#service-account) - Specifies a `ServiceAccount`
resource object that enables your build to run with the defined
authentication information.
- [`volumes`](#volumes) - Specifies one or more volumes that you want to make
available to your build.
- [`timeout`](#timeout) - Specifies timeout after which the build will fail.
[kubernetes-overview]: https://kubernetes.io/docs/concepts/overview/working-with-objects/kubernetes-objects/#required-fields
Each of the `steps` in a build must specify a `Builder`, a type of container image that
adheres to the [Knative builder contract](./builder-contract.md). For each of
the `steps` fields, or container images that you define:
- The `Builder`-type container images are run and evaluated in order, starting
from the top of the configuration file.
- Each container image runs until completion or until the first failure is
detected.
For details about how to ensure that you implement each step to align with the
"builder contract", see the [`Builder`](./builder-contract.md) reference topic.
#### Template
The `template` field is required if no `steps` are defined. Specifies a
repeatable or sharable build `steps`.
For examples and more information about build templates, see the
[`BuildTemplate`](./build-templates.md) reference topic.
#### Source
Optional. Specifies a container image. Use the `source` field to provide your
to all `steps` of your build.
The currently supported types of sources include:
- `git` - A Git based repository. Specify the `url` field to define the
  location of the repository. Specify a `revision` field to define a
branch name, tag name, commit SHA, or any ref. [Learn more about revisions in
Git](https://git-scm.com/docs/gitrevisions#_specifying_revisions).
- `gcs` - An archive that is located in Google Cloud Storage.
- `custom` - An arbitrary container image.
#### Service Account
For examples and more information about specifying service accounts,
see the [`ServiceAccount`](./auth.md) reference topic.
#### Volumes
Optional. Specifies one or more
complement the volumes that are implicitly
For example, use volumes to accomplish one of the following common tasks:
- [Mount a Kubernetes secret](./auth.md).
- Create an `emptyDir` volume to act as a cache for use across multiple build
steps. Consider using a persistent volume for inter-build caching.
- Mount a host's Docker socket to use a `Dockerfile` for container image
builds.
#### Timeout
Optional. Specifies the timeout for the build, including the time required to allocate resources and execute the build.
- Defaults to 10 minutes.
- Refer to [Go's ParseDuration documentation](https://golang.org/pkg/time/#ParseDuration) for expected format.
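
A sketch of a `spec` that fails the build if it has not completed within 20 minutes; the step shown is purely illustrative:

```yaml
spec:
  timeout: 20m # any Go ParseDuration string, e.g. "300s", "1h"
  steps:
    - image: ubuntu
      args: ["sleep", "2"]
```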
### Examples
Tip: See the collection of simple
[test builds](https://github.com/knative/build/tree/master/test) for
additional code samples, including working copies of the following snippets:
- [`git` as `source`](#using-git)
- [`gcs` as `source`](#using-gcs)
- [`custom` as `source`](#using-custom)
- [Mounting extra volumes](#using-an-extra-volume)
- [Pushing an image](#using-steps-to-push-images)
- [Authenticating with `ServiceAccount`](#using-a-serviceaccount)
- [Timeout](#using-timeout)
#### Using `git`
```yaml
spec:
  source:
    git:
      url: https://github.com/knative/build.git
      revision: master
  steps:
    - image: ubuntu
      args: ["cat", "README.md"]
```
#### Using `gcs`
```yaml
spec:
  source:
    gcs:
      type: Archive
      location: gs://build-crd-tests/rules_docker-master.zip
  steps:
    - name: list-files
      image: ubuntu:latest
      args: ["ls"]
```
#### Using `custom`
```yaml
spec:
  source:
    custom:
      image: gcr.io/cloud-builders/gsutil
      args: ["rsync", "gs://some-bucket", "."]
  steps:
    - image: ubuntu
      args: ["cat", "README.md"]
```
#### Using an extra volume
Mounting multiple volumes:
```yaml
spec:
  steps:
    - image: ubuntu
      entrypoint: ["bash"]
      args: ["-c", "curl https://foo.com > /var/my-volume"]
      volumeMounts:
        - name: my-volume
          mountPath: /var/my-volume

    - image: ubuntu
      args: ["cat", "/etc/my-volume"]
      volumeMounts:
        - name: my-volume
          mountPath: /etc/my-volume
  volumes:
    - name: my-volume
      emptyDir: {}
```
#### Using `steps` to push images
Defining `steps` to push a container image to a repository.
```yaml
spec:
  parameters:
    - name: IMAGE
      description: The name of the image to push
    - name: DOCKERFILE
      description: Path to the Dockerfile to build.
      default: /workspace/Dockerfile
  steps:
    - name: build-and-push
      image: gcr.io/kaniko-project/executor
      args:
        - --dockerfile=${DOCKERFILE}
        - --destination=${IMAGE}
```
#### Using a `ServiceAccount`
```yaml
spec:
  serviceAccountName: test-build-robot-git-ssh
  source:
    git:
      # ... (repository URL elided in this diff)
      revision: master
  steps:
    - name: config
      image: ubuntu
      command: ["/bin/bash"]
      args: ["-c", "cat README.md"]
```
Where `serviceAccountName: test-build-robot-git-ssh` references the following
```yaml
kind: ServiceAccount
metadata:
  name: test-build-robot-git-ssh
secrets:
  - name: test-git-ssh
```
And `name: test-git-ssh` references the following `Secret`:
```yaml
spec:
  # ... (timeout field elided in this diff)
  source:
    git:
      url: https://github.com/knative/build.git
      revision: master
  steps:
    - image: ubuntu
      args: ["cat", "README.md"]
```
---


The following demonstrates the process of deploying and then testing that
the build completed successfully. This sample build uses a hello-world-type app
that uses [busybox](https://docs.docker.com/samples/library/busybox/) to simply
print "*hello build*".
print "_hello build_".
Tip: See the
[build code samples](builds.md#get-started-with-knative-build-samples)
Before you can run a Knative Build, you must have Knative installed in your
Kubernetes cluster, and it must include the Knative Build component:
- For details about installing a new instance of Knative in your Kubernetes
cluster, see [Installing Knative](../install/README.md).
- If you have a component of Knative installed and running, you must [ensure
that the Knative Build component is also installed](installing-build-component.md).
## Creating and running a build
code.
This `Build` resource definition includes a single "[step](builds.md#steps)"
that performs the task of simply printing "_hello build_":
```yaml
apiVersion: build.knative.dev/v1alpha1
kind: Build
metadata:
  name: hello-build
spec:
  steps:
    - name: hello
      image: busybox
      args: ["echo", "hello", "build"]
```
Notice that this definition specifies `kind` as a `Build`, and that
the name of this `Build` resource is `hello-build`.

For more information about defining build configuration files, see the
[`Build` reference topic](builds.md).
1. Deploy the `build.yaml` configuration file and run the `hello-build` build on
Knative by running the
[`kubectl apply`](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply)
command:
```shell
kubectl apply --filename build.yaml
```
Response:
```shell
build "hello-build" created
```
1. Verify that the `hello-build` build resource has been created by running the
[`kubectl get`](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get)
command:
```shell
kubectl get builds
```
Response:
```shell
NAME AGE
hello-build 4s
```
1. After the build is created, you can run the following
[`kubectl get`](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get)
command to retrieve details about the `hello-build` build, specifically, in
which cluster and pod the build is running:
```shell
kubectl get build hello-build --output yaml
```
Response:

```shell
apiVersion: build.knative.dev/v1alpha1
kind: Build
...
status:
  builder: Cluster
  cluster:
    namespace: default
    podName: hello-build-jx4ql
  conditions:
    - state: Complete
      status: "True"
  stepStates:
    - terminated:
        reason: Completed
    - terminated:
        reason: Completed
```

Notice that the values of `completed` indicate that the build was
successful, and that `hello-build-jx4ql` is the pod where the build ran.

Tip: You can also retrieve the `podName` by running the following command:

```shell
kubectl get build hello-build --output jsonpath={.status.cluster.podName}
```
1. Optional: Run the following
[`kubectl get`](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get)
the name of the
[Init container](https://kubernetes.io/docs/concepts/workloads/pods/init-containers/):
```shell
kubectl get pod hello-build-[ID] --output yaml
```
where `[ID]` is the suffix of your pod name, for example
`hello-build-jx4ql`.
The response of this command includes a lot of detail, as well as
the `build-step-hello` name of the Init container.
Tip: The name of the Init container is determined by the `name` that is
specified in the `steps` field of the build configuration file, for
example `build-step-[ID]`.
The response of this command includes a lot of detail, as well as
the `build-step-hello` name of the Init container.
Tip: The name of the Init container is determined by the `name` that is
specified in the `steps` field of the build configuration file, for
example `build-step-[ID]`.
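For reference, a minimal `Build` of the shape used in this guide might look
like the following sketch. The `busybox` image and the step arguments are
illustrative assumptions, but the step `name: hello` is what yields the
`build-step-hello` Init container name:

```yaml
apiVersion: build.knative.dev/v1alpha1
kind: Build
metadata:
  name: hello-build
spec:
  steps:
  - name: hello     # becomes the Init container "build-step-hello"
    image: busybox  # illustrative image choice
    args: ["echo", "hello", "build"]
```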
1. To verify that your build performed the single task of printing
   "_hello build_", you can run the
   [`kubectl logs`](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#logs)
   command to retrieve the log files from the `build-step-hello` Init container
   in the `hello-build-[ID]` pod:

   ```shell
   kubectl logs $(kubectl get build hello-build --output jsonpath={.status.cluster.podName}) --container build-step-hello
   ```

   Response:

   ```shell
   hello build
   ```
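The naming convention above can be captured in a tiny helper. This sketch is
our own, not from the Knative docs; it just assembles the same `kubectl logs`
invocation from a build pod name and a step name:

```python
def logs_command(build_pod: str, step_name: str) -> str:
    # Each build step "X" runs in an Init container named "build-step-X"
    # inside the build pod, so the logs invocation follows directly.
    return f"kubectl logs {build_pod} --container build-step-{step_name}"


print(logs_command("hello-build-jx4ql", "hello"))
# kubectl logs hello-build-jx4ql --container build-step-hello
```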
### Learn more
To learn more about the objects and commands used in this topic, see:
- [Knative `Build` resources](builds.md)
- [Kubernetes Init containers](https://kubernetes.io/docs/concepts/workloads/pods/init-containers/)
- [Kubernetes kubectl CLI](https://kubernetes.io/docs/reference/kubectl/kubectl/)
For information about contributing to the Knative Build project, see the
[Knative Build code repo](https://github.com/knative/build/).
View File
@ -21,20 +21,21 @@ To add only the Knative Build component to an existing installation:
[`kubectl apply`](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply)
command to install
[Knative Build](https://github.com/knative/build) and its dependencies:
```bash
kubectl apply --filename https://storage.googleapis.com/knative-releases/build/latest/release.yaml
```
1. Run the
[`kubectl get`](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get)
command to monitor the Knative Build components until all of the components
show a `STATUS` of `Running`:
```bash
kubectl get pods --namespace knative-build
```
Tip: Instead of running the `kubectl get` command multiple times, you can
append the `--watch` flag to view the component's status updates in real
time. Use CTRL + C to exit watch mode.
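The "all pods `Running`" check can also be scripted against the command's
tabular output. This helper is our own illustration, assuming the default
`kubectl get pods` column layout (NAME, READY, STATUS, ...):

```python
def all_running(kubectl_get_pods_output: str) -> bool:
    """Check that every pod row in `kubectl get pods` output has STATUS Running."""
    lines = kubectl_get_pods_output.strip().splitlines()
    # The first line is the column header; the third column of each
    # remaining row is the pod's STATUS.
    return all(line.split()[2] == "Running" for line in lines[1:])


# Sample output; pod names and hashes are illustrative.
sample = """\
NAME                                READY   STATUS    RESTARTS   AGE
build-controller-5cb4f5cb67-abcde   1/1     Running   0          1m
build-webhook-6b4c65546b-fghij      1/1     Running   0          1m
"""
print(all_running(sample))  # True
```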
You are now ready to create and run Knative Builds, see
[Creating a simple Knative Build](../build/creating-builds.md) to get started.
View File
@ -1,10 +1,9 @@
# Knative Personas
When discussing user actions, it is often helpful to [define specific
user roles](<https://en.wikipedia.org/wiki/Persona_(user_experience)>) who
might want to do the action.
## Knative Build
We expect the build components of Knative to be useful on their own,
@ -19,8 +18,9 @@ tooling for managing dependencies and even detecting language and
runtime dependencies.
User stories:
- Start a build
- Read build logs
### Language operator / contributor
@ -30,9 +30,9 @@ within a particular organization, or on behalf of a particular
language runtime.
User stories:
- Create a build image / build pack
- Enable build signing / provenance
## Contributors
@ -41,17 +41,19 @@ always consider how infrastructure changes encourage and enable
contributors to the project, as well as the impact on users.
Types of users:
- Hobbyist or newcomer
- Motivated user
- Corporate (employed) maintainer
- Consultant
User stories:
- Check out the code
- Build and run the code
- Run tests
- View test status
- Run performance tests
---
View File
@ -14,22 +14,22 @@ race, religion, or sexual identity and orientation.
Examples of behavior that contributes to creating a positive environment
include:
- Using welcoming and inclusive language
- Being respectful of differing viewpoints and experiences
- Gracefully accepting constructive criticism
- Focusing on what is best for the community
- Showing empathy towards other community members
Examples of unacceptable behavior by participants include:
- The use of sexualized language or imagery and unwelcome sexual attention or
advances
- Trolling, insulting/derogatory comments, and personal or political attacks
- Public or private harassment
- Publishing others' private information, such as a physical or electronic
address, without explicit permission
- Other conduct which could reasonably be considered inappropriate in a
professional setting
## Our Responsibilities
View File
@ -7,15 +7,15 @@ repositories go through. All changes, regardless of whether they are from
newcomers to the community or from the core team follow the same process and
are given the same level of review.
- [Working groups](#working-groups)
- [Code of conduct](#code-of-conduct)
- [Team values](#team-values)
- [Contributor license agreements](#contributor-license-agreements)
- [Design documents](#design-documents)
- [Contributing a feature](#contributing-a-feature)
- [Setting up to contribute to Knative](#setting-up-to-contribute-to-knative)
- [Pull requests](#pull-requests)
- [Issues](#issues)
## Working groups
@ -74,40 +74,40 @@ later join knative-dev if you want immediate access).
In order to contribute a feature to Knative you'll need to go through the
following steps:
- Discuss your idea with the appropriate [working groups](WORKING-GROUPS.md)
on the working group's mailing list.
- Once there is general agreement that the feature is useful, [create a GitHub
issue](https://github.com/knative/docs/issues/new) to track the discussion.
The issue should include information about the requirements and use cases
that it is trying to address. Include a discussion of the proposed design
and technical details of the implementation in the issue.
- If the feature is substantial enough:
- Working group leads will ask for a design document as outlined in
[design documents](#design-documents). Create the design document and
add a link to it in the GitHub issue. Don't forget to send a note to the
working group to let everyone know your document is ready for review.
- Depending on the breadth of the design and how contentious it is, the
working group leads may decide the feature needs to be discussed in one
or more working group meetings before being approved.
- Once the major technical issues are resolved and agreed upon, post a
note with the design decision and the general execution plan to the
working group's mailing list and on the feature's issue.
- Submit PRs to [knative/serving](https://github.com/knative/serving/pulls)
with your code changes.
- Submit PRs to knative/serving with user documentation for your feature,
including usage examples when possible. Add documentation to
[knative/docs/serving](https://github.com/knative/docs/tree/master/serving).
_Note that we prefer bite-sized PRs instead of giant monster PRs. It's therefore
preferable if you can introduce large features in small, individually-reviewable
PRs that build on top of one another._
If you would like to skip the process of submitting an issue and instead would
prefer to just submit a pull request with your desired code changes then that's
@ -135,14 +135,14 @@ active, and hopefully prevents duplicated efforts.
To submit a proposed change:
- Fork the affected repository.
- Create a new branch for your changes.
- Develop the code/fix.
- Add new test cases. In the case of a bug fix, the tests should fail without
your code changes. For new features try to cover as many variants as
reasonably possible.
- Modify the documentation as necessary.
- Verify all CI status checks pass, and work to make them pass if failing.
The general rule is that all PRs should be 100% complete - meaning they should
include all test cases and documentation changes related to the change. A
@ -165,36 +165,40 @@ GitHub issues can be used to report bugs or submit feature requests.
When reporting a bug please include the following key pieces of information:
- The version of the project you were using (version number, git commit, etc)
- Operating system you are using
- The exact, minimal, steps needed to reproduce the issue. Submitting a 5 line
script will get a much faster response from the team than one that's
hundreds of lines long.
## Third-party code
- All third-party code must be placed in `vendor/` or `third_party/` folders.
- `vendor/` folder is managed by [dep](https://github.com/golang/dep) and stores
the source code of third-party Go dependencies. `vendor/` folder should not be
modified manually.
- Other third-party code belongs in `third_party/` folder.
- Third-party code must include licenses.
A non-exclusive list of code that must be placed in `vendor/` and `third_party/`:
- Open source, free software, or commercially-licensed code.
- Tools or libraries or protocols that are open source, free software, or commercially licensed.
- Derivative works of third-party code.
- Excerpts from third-party code.
### Adding a new third-party dependency to `third_party/` folder
- Create a sub-folder under `third_party/` for each component.
- In each sub-folder, make sure there is a file called LICENSE which contains the appropriate
license text for the dependency. If one doesn't exist then create it. More details on this below.
- Check in a pristine copy of the code with LICENSE and METADATA files.
You do not have to include unused files, and you can move or rename files if necessary,
but do not modify the contents of any files yet.
- Once the pristine copy is merged into master, you may modify the code.
### LICENSE
The license for the code must be in a file named LICENSE. If it was distributed like that,
you're good. If not, you need to make LICENSE be a file containing the full text of the license.
If there's another file in the distribution with the license in it, rename it to LICENSE
View File
@ -1,6 +1,6 @@
# Knative Community
_Important_. Before proceeding, please review the Knative community [Code of
Conduct](CODE-OF-CONDUCT.md).
If you have any questions or concerns, please contact the authors at
@ -13,29 +13,29 @@ Welcome to the Knative community!
This is the starting point for becoming a contributor - improving code,
improving docs, giving talks, etc.
- [Introduction](#introduction)
- [Knative authors](#knative-authors)
- [Meetings and work groups](#meetings-and-work-groups)
- [How can I help?](#how-can-i-help)
- [Questions and issues](#questions-and-issues)
Other Documents
- [Code of Conduct](CODE-OF-CONDUCT.md) - all contributors must abide by the
code of conduct
- [Contributing to Knative](CONTRIBUTING.md) - guidelines and advice on
becoming a contributor
- [Working Groups](WORKING-GROUPS.md) - describes our various working groups
- [Working Group Processes](WORKING-GROUP-PROCESSES.md) - describes how
working groups operate
- [Technical Oversight Committee](TECH-OVERSIGHT-COMMITTEE.md) - describes our
technical oversight committee
- [Steering Committee](STEERING-COMMITTEE.md) - describes our steering
committee
- [Community Roles](ROLES.md) - describes the roles individuals can assume
within the Knative community
- [Reviewing and Merging Pull Requests](REVIEWING.md) - how we manage pull
requests
## Introduction
@ -61,8 +61,8 @@ monitoring the overall project.
If you're looking for something to do to get your feet wet working on Knative,
look for GitHub issues marked with the Help Wanted label:
- [Serving issues](https://github.com/knative/serving/issues?q=is%3Aopen+is%3Aissue+label%3A%22community%2Fhelp+wanted%22)
- [Documentation repo](https://github.com/knative/docs/issues?q=is%3Aopen+is%3Aissue+label%3A%22community%2Fhelp+wanted%22)
Even if there's not an issue opened for it, we can always use more
testing throughout the platform. Similarly, we can always use more docs, richer
@ -74,8 +74,8 @@ we could use your help in spiffing up our public-facing web site.
If you're a developer, operator, or contributor trying to use Knative, the
following resources are available for you:
- [Knative Users](https://groups.google.com/forum/#!forum/knative-users)
- [Knative Developers](https://groups.google.com/forum/#!forum/knative-dev)
For contributors to Knative, we also have [Knative Slack](SLACK-GUIDELINES.md).
View File
@ -7,13 +7,13 @@ in turn produces high quality software.
This document provides guidelines for how the project's
[Members](ROLES.md#member) review issues and merge pull requests (PRs).
- [Pull requests welcome](#pull-requests-welcome)
- [Code of Conduct](#code-of-conduct)
- [Code reviewers](#code-reviewers)
- [Reviewing changes](#reviewing-changes)
- [Holds](#holds)
- [Approvers](#approvers)
- [Merging PRs](#merging-prs)
## Pull requests welcome
@ -55,9 +55,9 @@ are not in favor of the change. If a PR gets a "request changes" vote, the
group discusses the issue to resolve their differences.
Reviewers are expected to respond in a timely fashion to PRs that are assigned
to them. Reviewers are expected to respond to _active_ PRs with reasonable
latency. If reviewers fail to respond, those PRs may be assigned to other
reviewers. _Active_ PRs are those that have a proper CLA (`cla:yes`) label, are
not works in progress (WIP), are passing tests, and do not need rebase to be
merged. PRs that do not have a proper CLA, are WIP, do not pass tests, or
require a rebase are not considered active PRs.
View File
@ -4,21 +4,21 @@ This document describes the set of roles individuals may have within the Knative
community, the requirements of each role, and the privileges that each role
grants.
- [Role Summary](#role-summary)
- [Collaborator](#collaborator)
- [Member](#member)
- [Approver](#approver)
- [Lead](#lead)
- [Administrator](#administrator)
## Role Summary
The following table lists the roles we use within the Knative community. The
table describes:
- General responsibilities expected by individuals in each role
- Requirements necessary to join or stay in a given role
- How the role manifests in terms of permissions and privileges.
<table>
<thead>
@ -112,12 +112,12 @@ the PR bot.
### Requirements
- Working on some contribution to the project that would benefit from the
ability to have PRs or Issues to be assigned to the contributor
- Join [knative-users@](https://groups.google.com/forum/#!forum/knative-users)
unrestricted join permissions; this grants read access to documents in the
Team Drive
## Member
@ -139,36 +139,36 @@ this is not a requirement.
### Requirements
- Has made multiple contributions to the project or community. Contributions
  may include, but are not limited to:
  - Authoring or reviewing PRs on GitHub
  - Filing or commenting on issues on GitHub
  - Contributing to working group or community discussions
- Subscribed to
  [knative-dev@googlegroups.com](https://groups.google.com/forum/#!forum/knative-dev)
- Actively contributing to 1 or more areas.
- Sponsored by 1 approver.
  - Done by adding GitHub user to Knative organization
### Responsibilities and privileges
- Responsive to issues and PRs assigned to them
- Active owner of code they have contributed (unless ownership is explicitly
  transferred)
  - Code is well tested
  - Tests consistently pass
  - Addresses bugs or issues discovered after code is accepted
Members who frequently contribute code are expected to proactively perform code
reviews and work towards becoming an approver for the area that they are active
@ -188,40 +188,40 @@ status is scoped to a part of the codebase.
The following apply to the part of the codebase for which one would be an
approver in an OWNERS file:
- Reviewer of the codebase for at least 3 months or 50% of project lifetime,
  whichever is shorter
- Primary reviewer for at least 10 substantial PRs to the codebase
- Reviewed or merged at least 30 PRs to the codebase
- Nominated by an area lead
  - With no objections from other leads
  - Done through PR to update an OWNERS file
### Responsibilities and privileges
The following apply to the part of the codebase for which one would be an
approver in an OWNERS file:
- Approver status may be a precondition to accepting large code contributions
- Demonstrate sound technical judgement
- Responsible for project quality control via [code reviews](REVIEWING.md)
  - Focus on holistic acceptance of contribution such as dependencies with
    other features, backward / forward compatibility, API and flag
    definitions, etc
- Expected to be responsive to review requests as per [community
  expectations](REVIEWING.md)
- Mentor members and contributors
- May approve code contributions for acceptance
## Lead
@ -233,53 +233,53 @@ and approve design decisions for their area of ownership.
Getting to be a lead of an existing working group:
- Recognized as having expertise in the group's subject matter
- Approver for some part of the codebase for at least 3 months
- Member for at least 1 year or 50% of project lifetime, whichever is shorter
- Primary reviewer for 20 substantial PRs
- Reviewed or merged at least 50 PRs
- Sponsored by the technical oversight committee
Additional requirements for leads of a new working group:
- Originally authored or contributed major functionality to the group's area
- An approver in the OWNERS file for the group's code
### Responsibilities and privileges
The following apply to the area / component for which one would be an owner.
- Run their working group as explained in the [Working Group
  Processes](WORKING-GROUP-PROCESSES.md).
- Design/proposal approval authority over the area / component, though
  escalation to the technical oversight committee is possible.
- Perform issue triage on GitHub.
- Apply/remove/create/delete GitHub labels and milestones
- Write access to repo (assign issues/PRs, add/remove labels and milestones,
  edit issues and PRs, edit wiki, create/delete labels and milestones)
- Capable of directly applying lgtm + approve labels for any PR
- Expected to respect OWNERS files approvals and use [standard
  procedure for merging code](REVIEWING.md#merging-prs).
- Expected to work to holistically maintain the health of the project through:
  - Reviewing PRs
  - Fixing bugs
  - Mentoring and guiding approvers, members, and contributors
## Administrator
@ -287,21 +287,21 @@ Administrators are responsible for the bureaucratic aspects of the project.
### Requirements
- Assigned by technical oversight committee.
### Responsibilities and privileges
- Manage the Knative GitHub repo, including granting membership and
  controlling repo read/write permissions
- Manage the Knative Slack team
- Manage the Knative Google group forum
- Manage any additional Knative technical collaboration assets
- Expected to be responsive to membership and permission change requests
  <!-- TODO SLA for admin response -->
<!-- * TODO Manage the Google Search Console settings for knative.dev -->
View File
@ -16,14 +16,15 @@ You can join the [Knative Slack](https://slack.knative.dev) instance at
https://slack.knative.dev.
## Code of Conduct
The Knative [Code of Conduct](./CODE-OF-CONDUCT.md) applies throughout the
project, and includes all communication mediums.
## Admins
- @mchmarny
- @isdal
- @dewitt
Slack admins should make sure to mention this in the “What I do” section of
their Slack profile, as well as for which time zone.
@ -34,16 +35,16 @@ of us privately.
### Admin Expectations and Guidelines
- Adhere to Code of Conduct
- Take care of spam as soon as possible, which may mean taking action by making
  members inactive
- Moderating and fostering a safe environment for conversations
- Bring Code of Conduct issues to the Steering Committee
- Create relevant channels and list Code of Conduct in new channel welcome
  message
- Help troubleshoot Slack issues
- Review bot, token, and webhook requests
- Be helpful!
## Creating Channels
@ -55,10 +56,10 @@ community topics, and related programs/projects.
Channels are not:
- company specific; e.g. a channel named for a cloud provider must be used for
conversation about Knative-related topics on that cloud, and not proprietary
information of the provider.
- private unless there is an exception: code of conduct matters, mentoring,
security/vulnerabilities, or steering committee.
All channels need a documented purpose. Use this space to welcome the targeted
@ -100,7 +101,7 @@ chat. Please document these interactions for other Slack admins to review.
Content will be automatically removed if it violates code of conduct or is a
sales pitch. Admins will take a screenshot of such behavior in order to document
the situation. Google takes such violations extremely seriously, and
they will be handled swiftly.
## Inactivating Accounts
@@ -110,9 +111,9 @@ Due to Slack's framework, it does not allow for an account to be banned or
suspended in the traditional sense.
[Read Slack's policy on this.](https://get.Slack.help/hc/en-us/articles/204475027-Deactivate-a-member-s-account)
* Spreading spam content in DMs and/or channels
* Not adhering to the code of conduct set forth in DMs and/or channels
* Overtly selling products, related or unrelated to Knative
- Spreading spam content in DMs and/or channels
- Not adhering to the code of conduct set forth in DMs and/or channels
- Overtly selling products, related or unrelated to Knative
## Specific Channel Rules
@@ -1,38 +1,38 @@
# Knative Steering Committee
The Knative Steering Committee (SC) defines, evolves, and defends the vision,
values, mission, and scope of the project. *The Steering Committee is a
work-in-progress.*
values, mission, and scope of the project. _The Steering Committee is a
work-in-progress._
* [Charter](#charter)
* [Committee Mechanics](#committee-mechanics)
* [Committee Members](#committee-members)
- [Charter](#charter)
- [Committee Mechanics](#committee-mechanics)
- [Committee Members](#committee-members)
## Charter
* Non-technical project oversight
- Non-technical project oversight
* Define policy for the creation and administration of community groups,
including [Working Groups](WORKING-GROUPS.md) and Committees.
- Define policy for the creation and administration of community groups,
including [Working Groups](WORKING-GROUPS.md) and Committees.
* Define and evolve project governance structures and policies, including
project role assignment and contributor promotion.
- Define and evolve project governance structures and policies, including
project role assignment and contributor promotion.
* Approve members of the Tech Oversight Committee.
- Approve members of the Tech Oversight Committee.
* Management of project assets
- Management of project assets
* Control access to, establish processes regarding, and provide a final
escalation path for any Knative repository.
- Control access to, establish processes regarding, and provide a final
escalation path for any Knative repository.
* Guided by the TOC for normal business.
- Guided by the TOC for normal business.
* Control and delegate access to and establish processes regarding other
project resources/assets not covered by the above, including web sites
and their domains, blogs, social-media accounts, etc.
- Control and delegate access to and establish processes regarding other
project resources/assets not covered by the above, including web sites
and their domains, blogs, social-media accounts, etc.
* Manage the Knative brand to decide which things can be called “Knative”
and how that mark can be used in relation to other efforts or vendors.
- Manage the Knative brand to decide which things can be called “Knative”
and how that mark can be used in relation to other efforts or vendors.
## Committee Mechanics
@@ -51,11 +51,11 @@ The members of the Steering Committee are shown below. Membership in the SC is
determined by current level of contribution to the project. Contribution is
periodically reviewed to ensure proper recognition.
&nbsp; | Member | Company | Profile
-------------------------------------------------------- | -------------- | ------- | -------
<img width="30px" src="https://github.com/dewitt.png"> | DeWitt Clinton | Google | [@dewitt](https://github.com/dewitt)
<img width="30px" src="https://github.com/mchmarny.png"> | Mark Chmarny | Google | [@mchmarny](https://github.com/mchmarny)
<img width="30px" src="https://github.com/isdal.png"> | Tomas Isdal | Google | [@isdal](https://github.com/isdal)
| &nbsp; | Member | Company | Profile |
| -------------------------------------------------------- | -------------- | ------- | ---------------------------------------- |
| <img width="30px" src="https://github.com/dewitt.png"> | DeWitt Clinton | Google | [@dewitt](https://github.com/dewitt) |
| <img width="30px" src="https://github.com/mchmarny.png"> | Mark Chmarny | Google | [@mchmarny](https://github.com/mchmarny) |
| <img width="30px" src="https://github.com/isdal.png"> | Tomas Isdal | Google | [@isdal](https://github.com/isdal) |
---
@@ -3,67 +3,67 @@
The Knative Technical Oversight Committee (TOC) is responsible for cross-cutting
product and design decisions.
* [Charter](#charter)
* [Committee Mechanics](#committee-mechanics)
* [Committee Meeting](#committee-meeting)
* [Committee Members](#committee-members)
- [Charter](#charter)
- [Committee Mechanics](#committee-mechanics)
- [Committee Meeting](#committee-meeting)
- [Committee Members](#committee-members)
## Charter
* Technical Project Oversight, Direction & Delivery
- Technical Project Oversight, Direction & Delivery
* Set the overall technical direction and roadmap of the project.
- Set the overall technical direction and roadmap of the project.
* Resolve technical issues, technical disagreements and escalations within
the project.
- Resolve technical issues, technical disagreements and escalations within
the project.
* Set the priorities of individual releases to ensure coherency and proper
sequencing.
- Set the priorities of individual releases to ensure coherency and proper
sequencing.
* Approve declaring a new long-term supported (LTS) Knative release.
- Approve declaring a new long-term supported (LTS) Knative release.
* Approve the creation and dissolution of working groups and approve
leadership changes of working groups.
- Approve the creation and dissolution of working groups and approve
leadership changes of working groups.
* Create proposals based on TOC discussions and bring them to the relevant
working groups for discussion.
- Create proposals based on TOC discussions and bring them to the relevant
working groups for discussion.
* Approve the creation/deletion of GitHub repositories, along with other
high-level administrative issues around GitHub and our other tools.
- Approve the creation/deletion of GitHub repositories, along with other
high-level administrative issues around GitHub and our other tools.
* Happy Healthy Community
- Happy Healthy Community
* Establish and maintain the overall technical governance guidelines for
the project.
- Establish and maintain the overall technical governance guidelines for
the project.
* Decide which sub-projects are part of the Knative project, including
accepting new sub-projects and pruning existing sub-projects to maintain
community focus
- Decide which sub-projects are part of the Knative project, including
accepting new sub-projects and pruning existing sub-projects to maintain
community focus
* Ensure the team adheres to our [code of
conduct](CONTRIBUTING.md#code-of-conduct) and respects our
[values](VALUES.md).
- Ensure the team adheres to our [code of
conduct](CONTRIBUTING.md#code-of-conduct) and respects our
[values](VALUES.md).
* Foster an environment for a healthy and happy community of developers
and contributors.
- Foster an environment for a healthy and happy community of developers
and contributors.
## Committee Mechanics
The TOC's work includes:
* Regular committee meetings to discuss hot topics, resulting in a set of
published [meeting
notes](https://docs.google.com/document/d/1hR5ijJQjz65QkLrgEhWjv3Q86tWVxYj_9xdhQ6Y5D8Q/edit#).
- Regular committee meetings to discuss hot topics, resulting in a set of
published [meeting
notes](https://docs.google.com/document/d/1hR5ijJQjz65QkLrgEhWjv3Q86tWVxYj_9xdhQ6Y5D8Q/edit#).
* Create, review, approve and publish technical project governance documents.
- Create, review, approve and publish technical project governance documents.
* Create proposals for consideration by individual working groups to help
steer their work towards a common project-wide objective.
- Create proposals for consideration by individual working groups to help
steer their work towards a common project-wide objective.
* Review/address/comment on project issues.
- Review/address/comment on project issues.
* Act as a high-level sounding board for technical questions or designs
bubbled up by the working groups.
- Act as a high-level sounding board for technical questions or designs
bubbled up by the working groups.
## Committee Meeting
@@ -71,24 +71,24 @@ Community members are encouraged to suggest topics for discussion ahead of the
TOC meetings, and are invited to observe these meetings and engage with the TOC
during the community feedback period at the end of each meeting.
Artifact | Link
-------------------------- | ----
Google Group | [knative-tech-oversight@googlegroups.com](https://groups.google.com/forum/#!forum/knative-tech-oversight)
Community Meeting VC | [meet.google.com/ffc-rypd-kih](https://meet.google.com/ffc-rypd-kih) <br>or dial in:<br>(US) +1 240-630-1102 PIN: 316262#
Community Meeting Calendar | Thursdays at 11:30a-12p <br>[Calendar](https://calendar.google.com/calendar/embed?src=google.com_18un4fuh6rokqf8hmfftm5oqq4%40group.calendar.google.com)
Meeting Notes | [Notes](https://docs.google.com/document/d/1hR5ijJQjz65QkLrgEhWjv3Q86tWVxYj_9xdhQ6Y5D8Q/edit#heading=h.g47ptr8u5cov)
Document Folder | [Folder](https://drive.google.com/drive/folders/1_OHttsYLCVtX202aXNmJJrAHJ7BaXcu6)
| Artifact | Link |
| -------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Google Group | [knative-tech-oversight@googlegroups.com](https://groups.google.com/forum/#!forum/knative-tech-oversight) |
| Community Meeting VC | [meet.google.com/ffc-rypd-kih](https://meet.google.com/ffc-rypd-kih) <br>or dial in:<br>(US) +1 240-630-1102 PIN: 316262# |
| Community Meeting Calendar | Thursdays at 11:30a-12p <br>[Calendar](https://calendar.google.com/calendar/embed?src=google.com_18un4fuh6rokqf8hmfftm5oqq4%40group.calendar.google.com) |
| Meeting Notes | [Notes](https://docs.google.com/document/d/1hR5ijJQjz65QkLrgEhWjv3Q86tWVxYj_9xdhQ6Y5D8Q/edit#heading=h.g47ptr8u5cov) |
| Document Folder | [Folder](https://drive.google.com/drive/folders/1_OHttsYLCVtX202aXNmJJrAHJ7BaXcu6) |
## Committee Members
The members of the TOC are shown below. Membership in the TOC is determined by
the [Steering committee](STEERING-COMMITTEE.md).
&nbsp; | Member | Company | Profile
------------------------------------------------------------- | ------------- | ------- | -------
<img width="30px" src="https://github.com/evankanderson.png"> | Evan Anderson | Google | [@evankanderson](https://github.com/evankanderson)
<img width="30px" src="https://github.com/mattmoor.png"> | Matt Moore | Google | [@mattmoor](https://github.com/mattmoor)
<img width="30px" src="https://github.com/vaikas-google.png"> | Ville Aikas | Google | [@vaikas-google](https://github.com/vaikas-google)
| &nbsp; | Member | Company | Profile |
| ------------------------------------------------------------- | ------------- | ------- | -------------------------------------------------- |
| <img width="30px" src="https://github.com/evankanderson.png"> | Evan Anderson | Google | [@evankanderson](https://github.com/evankanderson) |
| <img width="30px" src="https://github.com/mattmoor.png"> | Matt Moore | Google | [@mattmoor](https://github.com/mattmoor) |
| <img width="30px" src="https://github.com/vaikas-google.png"> | Ville Aikas | Google | [@vaikas-google](https://github.com/vaikas-google) |
---
@@ -3,49 +3,49 @@
We want to make sure every member has a shared understanding of the goals and
values we hold as a team:
* Optimize for the **overall project**, not your own area or feature
- Optimize for the **overall project**, not your own area or feature
* A shortcut for one individual can mean a lot of extra work or disruption
for the rest of the team.
- A shortcut for one individual can mean a lot of extra work or disruption
for the rest of the team.
* Our repos should always be in release shape: **Always Green**
- Our repos should always be in release shape: **Always Green**
* This lets us move faster in the mid and long term.
* This implies investments in build/test infrastructure to have fast,
reliable tests to ensure that we can release at any time.
* Extra discipline may require more work by individuals to keep the build
in good state, but less work overall for the team.
- This lets us move faster in the mid and long term.
- This implies investments in build/test infrastructure to have fast,
reliable tests to ensure that we can release at any time.
- Extra discipline may require more work by individuals to keep the build
in good state, but less work overall for the team.
* Be **specific**, **respectful** and **courteous**
- Be **specific**, **respectful** and **courteous**
* Disagreements are welcome and encouraged, but don't use broad
generalizations, exaggerations, or judgment words that can be taken
personally. Consider other people's perspective (including the wide
range of applicability of Knative). Empathize with our users. Focus on
the specific issue at hand, and remember that we all care about the
project, first and foremost.
* Emails to the [mailing lists](CONTRIBUTING.md#contributing-a-feature),
document comments, or meetings are often better and higher bandwidth
ways to communicate complex and nuanced design issues, as opposed to
protracted heated live chats.
* Be mindful of the terminology you are using, it may not be the same as
someone else and cause misunderstanding. To promote clear and precise
communication, define the terms you are using in context.
* See also the [Code of Conduct](CODE-OF-CONDUCT.md), which everyone must
abide by.
- Disagreements are welcome and encouraged, but don't use broad
generalizations, exaggerations, or judgment words that can be taken
personally. Consider other people's perspective (including the wide
range of applicability of Knative). Empathize with our users. Focus on
the specific issue at hand, and remember that we all care about the
project, first and foremost.
- Emails to the [mailing lists](CONTRIBUTING.md#contributing-a-feature),
document comments, or meetings are often better and higher bandwidth
ways to communicate complex and nuanced design issues, as opposed to
protracted heated live chats.
- Be mindful of the terminology you are using, it may not be the same as
someone else and cause misunderstanding. To promote clear and precise
communication, define the terms you are using in context.
- See also the [Code of Conduct](CODE-OF-CONDUCT.md), which everyone must
abide by.
* Raising issues is great, suggesting solutions is even better
- Raising issues is great, suggesting solutions is even better
* Think of a proposed alternative and improvement rather than just what
you perceive as wrong.
* If you have no immediate solution even after thinking about it - if
something does seem significant, raise it to someone who might be able
to also think of solutions or to the group (don't stay frustrated! Feel
safe in bringing up issues).
* Avoid rehashing old issues that have been resolved/decided
(unless you have new insights or information).
- Think of a proposed alternative and improvement rather than just what
you perceive as wrong.
- If you have no immediate solution even after thinking about it - if
something does seem significant, raise it to someone who might be able
to also think of solutions or to the group (don't stay frustrated! Feel
safe in bringing up issues).
- Avoid rehashing old issues that have been resolved/decided
(unless you have new insights or information).
* Be productive and **happy**, and most importantly, have *fun* :-)
- Be productive and **happy**, and most importantly, have _fun_ :-)
---
@@ -4,15 +4,15 @@ This document describes the processes we use to manage the Knative working
groups. This includes how they are formed, how leads are established, how they
are run, etc.
* [Why working groups?](#why-working-groups)
* [Proposing a new working group](#proposing-a-new-working-group)
* [Setting up a working group](#setting-up-a-working-group)
* [Dissolving a working group](#dissolving-a-working-group)
* [Running a working group](#running-a-working-group)
* [Be open](#be-open)
* [Making decisions](#making-decisions)
* [Subgroups](#subgroups)
* [Escalations](#escalations)
- [Why working groups?](#why-working-groups)
- [Proposing a new working group](#proposing-a-new-working-group)
- [Setting up a working group](#setting-up-a-working-group)
- [Dissolving a working group](#dissolving-a-working-group)
- [Running a working group](#running-a-working-group)
- [Be open](#be-open)
- [Making decisions](#making-decisions)
- [Subgroups](#subgroups)
- [Escalations](#escalations)
## Why working groups?
@@ -39,30 +39,30 @@ If you've identified a substantial architectural area which would benefit from
long-lived, concerted and focused design, then you should consider creating a
new working group. To do so, you need to:
* **Create a charter**. This should be a few paragraphs explaining:
- **Create a charter**. This should be a few paragraphs explaining:
* The mission of the working group
- The mission of the working group
* The goals of the working group (problems being solved)
- The goals of the working group (problems being solved)
* The scope of the working group (topics, subsystems, code repos, areas of
responsibility)
- The scope of the working group (topics, subsystems, code repos, areas of
responsibility)
* **Nominate an initial set of leads**. The leads set the agenda for the
working group and serve as final arbiters on any technical decision. See
[below](#leads) for information on the responsibilities of leads and
requirements for nominating them.
- **Nominate an initial set of leads**. The leads set the agenda for the
working group and serve as final arbiters on any technical decision. See
[below](#leads) for information on the responsibilities of leads and
requirements for nominating them.
* **Prepare a Roadmap**. Create a preliminary 3 month roadmap for what the
working group would focus on.
- **Prepare a Roadmap**. Create a preliminary 3 month roadmap for what the
working group would focus on.
* **Send an Email**. Write up an email with your charter, nominated leads, and
roadmap, and send it to
[knative-tech-oversight@](mailto:knative-tech-oversight@googlegroups.com).
The technical oversight committee will evaluate the request and decide
whether the working group should be formed, whether it should be merely a
subgroup of an existing working group, or whether it should be subsumed by
an existing working group.
- **Send an Email**. Write up an email with your charter, nominated leads, and
roadmap, and send it to
[knative-tech-oversight@](mailto:knative-tech-oversight@googlegroups.com).
The technical oversight committee will evaluate the request and decide
whether the working group should be formed, whether it should be merely a
subgroup of an existing working group, or whether it should be subsumed by
an existing working group.
## Setting up a working group
@@ -70,42 +70,42 @@ Once approval has been granted by the technical oversight committee to form a
working group, the working group leads need to take a few steps to establish the
working group:
* **Create a Google Drive Folder**. Create a folder to hold your
working group documents within this parent
[folder](https://drive.google.com/corp/drive/folders/0APnJ_hRs30R2Uk9PVA). Call
your folder "GROUP_NAME".
- **Create a Google Drive Folder**. Create a folder to hold your
working group documents within this parent
[folder](https://drive.google.com/corp/drive/folders/0APnJ_hRs30R2Uk9PVA). Call
your folder "GROUP_NAME".
* **Create a Meeting Notes Document**. Create a blank document in the above
folder and call it "GROUP_NAME Group Meeting Notes".
- **Create a Meeting Notes Document**. Create a blank document in the above
folder and call it "GROUP_NAME Group Meeting Notes".
* **Create a Roadmap Document**. Create a document in the above folder and
call it "GROUP_NAME Group Roadmap". Put your initial roadmap in the
document.
- **Create a Roadmap Document**. Create a document in the above folder and
call it "GROUP_NAME Group Roadmap". Put your initial roadmap in the
document.
* **Create a Wiki**. Create a wiki page on
[GitHub](https://github.com/knative/serving) titled "GROUP_NAME Design
Decisions". This page will be used to track important design decisions made
by the working group.
- **Create a Wiki**. Create a wiki page on
[GitHub](https://github.com/knative/serving) titled "GROUP_NAME Design
Decisions". This page will be used to track important design decisions made
by the working group.
* **Create a Public Google Group**. Call the group "knative-*group_name*" (all
in lowercase, dashes for spaces). This mailing list must be open to all.
- **Create a Public Google Group**. Call the group "knative-_group_name_" (all
in lowercase, dashes for spaces). This mailing list must be open to all.
* **Schedule a Recurring Meeting**. Create a recurring meeting (weekly or
bi-weekly, 30 or 60 minutes) and call the meeting "GROUP_NAME Group Sync-Up".
Attach the meeting notes document to the calendar event. Generally schedule
these meetings between 9:00AM to 2:59PM Pacific Time. Invite the public
Google group to the meeting.
- **Schedule a Recurring Meeting**. Create a recurring meeting (weekly or
bi-weekly, 30 or 60 minutes) and call the meeting "GROUP_NAME Group Sync-Up".
Attach the meeting notes document to the calendar event. Generally schedule
these meetings between 9:00AM to 2:59PM Pacific Time. Invite the public
Google group to the meeting.
* **Register the Working Group**. Go to
[WORKING-GROUPS.md](https://github.com/knative/serving/blob/master/community/WORKING-GROUPS.md)
and add your working group name, the names of the leads, the working group
charter, and a link to the meeting you created.
- **Register the Working Group**. Go to
[WORKING-GROUPS.md](https://github.com/knative/serving/blob/master/community/WORKING-GROUPS.md)
and add your working group name, the names of the leads, the working group
charter, and a link to the meeting you created.
* **Announce your Working Group**. Send a note to
[knative-dev@](mailto:knative-dev@googlegroups.com) and
[knative-tech-oversight@](mailto:knative-tech-oversight@googlegroups.com) to
announce your new working group. Include your charter in the email and
provide links to the meeting invitation.
- **Announce your Working Group**. Send a note to
[knative-dev@](mailto:knative-dev@googlegroups.com) and
[knative-tech-oversight@](mailto:knative-tech-oversight@googlegroups.com) to
announce your new working group. Include your charter in the email and
provide links to the meeting invitation.
Congratulations, you now have a fully formed working group!
@@ -137,23 +137,23 @@ leads' role and requirements.
Leads are responsible for running a working group. Running the group involves a
few activities:
* **Meetings**. Prepare the agenda and run the regular working group meetings.
Ensure the meetings are recorded, and properly archived.
- **Meetings**. Prepare the agenda and run the regular working group meetings.
Ensure the meetings are recorded, and properly archived.
* **Notes**. Ensure that meeting notes are kept up to date. Provide a link to
the recorded meeting in the notes. The lead may delegate note-taking duties.
- **Notes**. Ensure that meeting notes are kept up to date. Provide a link to
the recorded meeting in the notes. The lead may delegate note-taking duties.
* **Wiki**. Ensure that significant design decisions are captured in the Wiki.
In the Wiki, include links to useful design documents, any interesting
GitHub issues or PRs, posts to the mailing lists, etc. The wiki should
provide a good feel for where the mind of the working group is at and where
things are headed.
- **Wiki**. Ensure that significant design decisions are captured in the Wiki.
In the Wiki, include links to useful design documents, any interesting
GitHub issues or PRs, posts to the mailing lists, etc. The wiki should
provide a good feel for where the mind of the working group is at and where
things are headed.
* **Roadmap**. Establish **and maintain** a roadmap for the working group
outlining the areas of focus for the working group over the next 3 months.
- **Roadmap**. Establish **and maintain** a roadmap for the working group
outlining the areas of focus for the working group over the next 3 months.
* **Report**. Report current status to the main community meeting every 6
weeks.
- **Report**. Report current status to the main community meeting every 6
weeks.
### Be open
@@ -1,6 +1,6 @@
# Knative Working Groups
Most community activity is organized into *working groups*.
Most community activity is organized into _working groups_.
Working groups follow the [contributing](CONTRIBUTING.md) guidelines although
each of these groups may operate a little differently depending on their needs
@@ -27,155 +27,155 @@ meetings.
The current working groups are:
* [API Core](#api-core)
* [Build](#build)
* [Documentation](#documentation)
* [Eventing](#eventing)
* [Networking](#networking)
* [Observability](#observability)
* [Productivity](#productivity)
* [Scaling](#scaling)
<!-- TODO add charters for each group -->
- [API Core](#api-core)
- [Build](#build)
- [Documentation](#documentation)
- [Eventing](#eventing)
- [Networking](#networking)
- [Observability](#observability)
- [Productivity](#productivity)
- [Scaling](#scaling)
<!-- TODO add charters for each group -->
## API Core
API [resources](../pkg/apis/serving), [validation](../pkg/webhook), and [semantics](../pkg/controller).
Artifact | Link
-------------------------- | ----
Forum | [knative-dev@](https://groups.google.com/forum/#!forum/knative-dev)
Community Meeting VC | [https://meet.google.com/bzx-bjqa-rha](https://meet.google.com/bzx-bjqa-rha) <br>Or dial in:<br>(US) +1 262-448-6367<br>PIN: 923 539#
Community Meeting Calendar | Wednesdays 10:30a-11:00a PST <br>[Calendar](https://calendar.google.com/calendar/embed?src=google.com_18un4fuh6rokqf8hmfftm5oqq4%40group.calendar.google.com)
Meeting Notes | [Notes](https://docs.google.com/document/d/1NC4klOdNaU-N-PsKLyXBqDKgNSHtxCDep29Ta2b5FK0/edit)
Document Folder | [Folder](https://drive.google.com/corp/drive/folders/1fpBW7VyiBISsKuVdgn1MrgFdtx_JGoC5)
Slack Channel | [#api](https://knative.slack.com/messages/api)
| Artifact | Link |
| -------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Forum | [knative-dev@](https://groups.google.com/forum/#!forum/knative-dev) |
| Community Meeting VC | [https://meet.google.com/bzx-bjqa-rha](https://meet.google.com/bzx-bjqa-rha) <br>Or dial in:<br>(US) +1 262-448-6367<br>PIN: 923 539# |
| Community Meeting Calendar | Wednesdays 10:30a-11:00a PST <br>[Calendar](https://calendar.google.com/calendar/embed?src=google.com_18un4fuh6rokqf8hmfftm5oqq4%40group.calendar.google.com) |
| Meeting Notes | [Notes](https://docs.google.com/document/d/1NC4klOdNaU-N-PsKLyXBqDKgNSHtxCDep29Ta2b5FK0/edit) |
| Document Folder | [Folder](https://drive.google.com/corp/drive/folders/1fpBW7VyiBISsKuVdgn1MrgFdtx_JGoC5) |
| Slack Channel | [#api](https://knative.slack.com/messages/api) |
&nbsp; | Leads | Company | Profile
-------------------------------------------------------- | ---------- | ------- | -------
<img width="30px" src="https://github.com/mattmoor.png"> | Matt Moore | Google | [mattmoor](https://github.com/mattmoor)
| &nbsp; | Leads | Company | Profile |
| -------------------------------------------------------- | ---------- | ------- | --------------------------------------- |
| <img width="30px" src="https://github.com/mattmoor.png"> | Matt Moore | Google | [mattmoor](https://github.com/mattmoor) |
## Build
[Build](https://github.com/knative/build), Builders, and Build templates
Artifact | Link
-------------------------- | ----
Forum | [knative-dev@](https://groups.google.com/forum/#!forum/knative-dev)
Community Meeting VC | [meet.google.com/hau-nwak-tgm](https://meet.google.com/hau-nwak-tgm) <br>Or dial in:<br>(US) +1 219-778-6103 PIN: 573 000#
Community Meeting Calendar | Wednesdays 10:00a-10:30a PST <br>[Calendar](https://calendar.google.com/event?action=TEMPLATE&tmeid=MTBkb3MwYnVrbDd0djE0a2kzcmpmbjZndm9fMjAxODA5MTJUMTcwMDAwWiBqYXNvbmhhbGxAZ29vZ2xlLmNvbQ&tmsrc=jasonhall%40google.com&scp=ALL)
Meeting Notes | [Notes](https://docs.google.com/document/d/1e7gMVFlJfkFdTcaWj2qETeRD9kSBG2Vh8mASPmQMYC0/edit)
Document Folder | [Folder](https://drive.google.com/corp/drive/folders/1ov16HvPam-v_FXAGEaUdHok6_hUAoIoe)
Slack Channel | [#build-crd](https://knative.slack.com/messages/build-crd)
| Artifact | Link |
| -------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Forum | [knative-dev@](https://groups.google.com/forum/#!forum/knative-dev) |
| Community Meeting VC | [meet.google.com/hau-nwak-tgm](https://meet.google.com/hau-nwak-tgm) <br>Or dial in:<br>(US) +1 219-778-6103 PIN: 573 000# |
| Community Meeting Calendar | Wednesdays 10:00a-10:30a PST <br>[Calendar](https://calendar.google.com/event?action=TEMPLATE&tmeid=MTBkb3MwYnVrbDd0djE0a2kzcmpmbjZndm9fMjAxODA5MTJUMTcwMDAwWiBqYXNvbmhhbGxAZ29vZ2xlLmNvbQ&tmsrc=jasonhall%40google.com&scp=ALL) |
| Meeting Notes | [Notes](https://docs.google.com/document/d/1e7gMVFlJfkFdTcaWj2qETeRD9kSBG2Vh8mASPmQMYC0/edit) |
| Document Folder | [Folder](https://drive.google.com/corp/drive/folders/1ov16HvPam-v_FXAGEaUdHok6_hUAoIoe) |
| Slack Channel | [#build-crd](https://knative.slack.com/messages/build-crd) |
&nbsp; | Leads | Company | Profile
-------------------------------------------------------- | ---------- | ------- | -------
<img width="30px" src="https://github.com/ImJasonH.png"> | Jason Hall | Google | [ImJasonH](https://github.com/ImJasonH)
<img width="30px" src="https://github.com/mattmoor.png"> | Matt Moore | Google | [mattmoor](https://github.com/mattmoor)
| &nbsp; | Leads | Company | Profile |
| -------------------------------------------------------- | ---------- | ------- | --------------------------------------- |
| <img width="30px" src="https://github.com/ImJasonH.png"> | Jason Hall | Google | [ImJasonH](https://github.com/ImJasonH) |
| <img width="30px" src="https://github.com/mattmoor.png"> | Matt Moore | Google | [mattmoor](https://github.com/mattmoor) |
## Documentation
Knative documentation, especially the [Docs](https://github.com/knative/docs) repo.
Artifact | Link
-------------------------- | ----
Forum | [knative-docs@](https://groups.google.com/forum/#!forum/knative-docs)
Community Meeting VC | [meet.google.com/mku-npuv-cjs](https://meet.google.com/mku-npuv-cjs) <br>Or dial in:<br>(US) +1 260-277-0211<br>PIN: 956 724#
Community Meeting Calendar | Every other Tuesday, 9:00a-9:30a PST<br>[Calendar](https://calendar.google.com/calendar/embed?src=google.com_18un4fuh6rokqf8hmfftm5oqq4%40group.calendar.google.com)
Meeting Notes | [Notes](https://docs.google.com/document/d/1Y7rug0XshcQPdKzptdWbQLQjcjgpFdLeEgP1nfkDAe4/edit)
Document Folder | [Folder](https://drive.google.com/corp/drive/folders/1K5cM9m-b93ySI5WGKalJKbBq_cfjyi-y)
Slack Channel | [#docs](https://knative.slack.com/messages/docs)
| Artifact | Link |
| -------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Forum | [knative-docs@](https://groups.google.com/forum/#!forum/knative-docs) |
| Community Meeting VC | [meet.google.com/mku-npuv-cjs](https://meet.google.com/mku-npuv-cjs) <br>Or dial in:<br>(US) +1 260-277-0211<br>PIN: 956 724# |
| Community Meeting Calendar | Every other Tuesday, 9:00a-9:30a PST<br>[Calendar](https://calendar.google.com/calendar/embed?src=google.com_18un4fuh6rokqf8hmfftm5oqq4%40group.calendar.google.com) |
| Meeting Notes | [Notes](https://docs.google.com/document/d/1Y7rug0XshcQPdKzptdWbQLQjcjgpFdLeEgP1nfkDAe4/edit) |
| Document Folder | [Folder](https://drive.google.com/corp/drive/folders/1K5cM9m-b93ySI5WGKalJKbBq_cfjyi-y) |
| Slack Channel | [#docs](https://knative.slack.com/messages/docs) |
&nbsp; | Leads | Company | Profile
------------------------------------------------------------- | ----------- | ------- | -------
<img width="30px" src="https://github.com/samodell.png"> | Sam O'Dell | Google | [samodell](https://github.com/samodell)
| &nbsp; | Leads | Company | Profile |
| -------------------------------------------------------- | ---------- | ------- | --------------------------------------- |
| <img width="30px" src="https://github.com/samodell.png"> | Sam O'Dell | Google | [samodell](https://github.com/samodell) |
## Eventing
Event sources, bindings, FaaS framework, and orchestration
Artifact | Link
-------------------------- | ----
Forum | [knative-dev@](https://groups.google.com/forum/#!forum/knative-dev)
Community Meeting VC | [meet.google.com/uea-zcwt-drt](https://meet.google.com/uea-zcwt-drt) <br>Or dial in:<br>(US) +1 919 525 1825<br>PIN: 356 842#
Community Meeting Calendar | Wednesdays 9:00a-9:30a PST<br>[Calendar](https://calendar.google.com/calendar/embed?src=google.com_5pce19kpifu8avnj0eo74sg84c%40group.calendar.google.com)
Meeting Notes | [Notes](https://docs.google.com/document/d/1uGDehQu493N_XCAT5H4XEw5T9IWlPN1o19ULOWKuPnY/edit)
Document Folder | [Folder](https://drive.google.com/corp/drive/folders/1S22YmGl6B1ppYApwa1j5j9Nc6rEChlPo)
Slack Channel | [#eventing](https://knative.slack.com/messages/eventing)
| Artifact | Link |
| -------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Forum | [knative-dev@](https://groups.google.com/forum/#!forum/knative-dev) |
| Community Meeting VC | [meet.google.com/uea-zcwt-drt](https://meet.google.com/uea-zcwt-drt) <br>Or dial in:<br>(US) +1 919 525 1825<br>PIN: 356 842# |
| Community Meeting Calendar | Wednesdays 9:00a-9:30a PST<br>[Calendar](https://calendar.google.com/calendar/embed?src=google.com_5pce19kpifu8avnj0eo74sg84c%40group.calendar.google.com) |
| Meeting Notes | [Notes](https://docs.google.com/document/d/1uGDehQu493N_XCAT5H4XEw5T9IWlPN1o19ULOWKuPnY/edit) |
| Document Folder | [Folder](https://drive.google.com/corp/drive/folders/1S22YmGl6B1ppYApwa1j5j9Nc6rEChlPo) |
| Slack Channel | [#eventing](https://knative.slack.com/messages/eventing) |
&nbsp; | Leads | Company | Profile
------------------------------------------------------------- | ----------- | ------- | -------
<img width="30px" src="https://github.com/vaikas-google.png"> | Ville Aikas | Google | [vaikas-google](https://github.com/vaikas-google)
| &nbsp; | Leads | Company | Profile |
| ------------------------------------------------------------- | ----------- | ------- | ------------------------------------------------- |
| <img width="30px" src="https://github.com/vaikas-google.png"> | Ville Aikas | Google | [vaikas-google](https://github.com/vaikas-google) |
## Networking
Inbound and outbound network connectivity for [serving](https://github.com/knative/serving) workloads.
Specific areas of interest include: load balancing, routing, DNS configuration and TLS support.
Artifact | Link
-------------------------- | ----
Forum | [knative-dev@](https://groups.google.com/forum/#!forum/knative-dev)
Community Meeting VC | [meet.google.com/cet-jepr-gtx](https://meet.google.com/cet-jepr-gtx) <br>Or dial in:<br>(US) +1 570-865-1288<br>PIN: 741 211#
Community Meeting Calendar | Thursdays at 9:00a-9:30a PST<br>[Calendar](https://calendar.google.com/calendar/embed?src=google.com_18un4fuh6rokqf8hmfftm5oqq4%40group.calendar.google.com)
Meeting Notes | [Notes](https://drive.google.com/open?id=1EE1t5mTfnTir2lEasdTMRNtuPEYuPqQCZbU3NC9mHOI)
Document Folder | [Folder](https://drive.google.com/corp/drive/folders/1oVDYbcEDdQ9EpUmkK6gE4C7aZ8u6ujsN)
Slack Channel | [#networking](https://knative.slack.com/messages/networking)
| Artifact | Link |
| -------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| Forum | [knative-dev@](https://groups.google.com/forum/#!forum/knative-dev) |
| Community Meeting VC | [meet.google.com/cet-jepr-gtx](https://meet.google.com/cet-jepr-gtx) <br>Or dial in:<br>(US) +1 570-865-1288<br>PIN: 741 211# |
| Community Meeting Calendar | Thursdays at 9:00a-9:30a PST<br>[Calendar](https://calendar.google.com/calendar/embed?src=google.com_18un4fuh6rokqf8hmfftm5oqq4%40group.calendar.google.com) |
| Meeting Notes | [Notes](https://drive.google.com/open?id=1EE1t5mTfnTir2lEasdTMRNtuPEYuPqQCZbU3NC9mHOI) |
| Document Folder | [Folder](https://drive.google.com/corp/drive/folders/1oVDYbcEDdQ9EpUmkK6gE4C7aZ8u6ujsN) |
| Slack Channel | [#networking](https://knative.slack.com/messages/networking) |
&nbsp; | Leads | Company | Profile
--------------------------------------------------------- | ---------------- | ------- | -------
<img width="30px" src="https://github.com/tcnghia.png"> | Nghia Tran | Google | [tcnghia](https://github.com/tcnghia)
<img width="30px" src="https://github.com/mdemirhan.png"> | Mustafa Demirhan | Google | [mdemirhan](https://github.com/mdemirhan)
| &nbsp; | Leads | Company | Profile |
| --------------------------------------------------------- | ---------------- | ------- | ----------------------------------------- |
| <img width="30px" src="https://github.com/tcnghia.png"> | Nghia Tran | Google | [tcnghia](https://github.com/tcnghia) |
| <img width="30px" src="https://github.com/mdemirhan.png"> | Mustafa Demirhan | Google | [mdemirhan](https://github.com/mdemirhan) |
## Observability
Logging, monitoring & tracing infrastructure
Artifact | Link
-------------------------- | ----
Forum | [knative-dev@](https://groups.google.com/forum/#!forum/knative-dev)
Community Meeting VC | meet.google.com/kps-noeu-uzz <br> Or dial in: <br> (US) +1 413-301-9135 <br>PIN: 602 561#
Community Meeting Calendar | Every other Thursday, 10:30a-11a PST<br>[Calendar](https://calendar.google.com/calendar/embed?src=google.com_18un4fuh6rokqf8hmfftm5oqq4%40group.calendar.google.com)
Meeting Notes | [Notes](https://drive.google.com/open?id=1vWEpjf093Jsih3mKkpIvmWWbUQPxFkcyDxzNH15rQgE)
Document Folder | [Folder](https://drive.google.com/corp/drive/folders/10HcpZlI1PbFyzinO6HjfHbzCkBXrqXMy)
Slack Channel | [#observability](https://knative.slack.com/messages/observability)
| Artifact | Link |
| -------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Forum | [knative-dev@](https://groups.google.com/forum/#!forum/knative-dev) |
| Community Meeting VC | meet.google.com/kps-noeu-uzz <br> Or dial in: <br> (US) +1 413-301-9135 <br>PIN: 602 561# |
| Community Meeting Calendar | Every other Thursday, 10:30a-11a PST<br>[Calendar](https://calendar.google.com/calendar/embed?src=google.com_18un4fuh6rokqf8hmfftm5oqq4%40group.calendar.google.com) |
| Meeting Notes | [Notes](https://drive.google.com/open?id=1vWEpjf093Jsih3mKkpIvmWWbUQPxFkcyDxzNH15rQgE) |
| Document Folder | [Folder](https://drive.google.com/corp/drive/folders/10HcpZlI1PbFyzinO6HjfHbzCkBXrqXMy) |
| Slack Channel | [#observability](https://knative.slack.com/messages/observability) |
&nbsp; | Leads | Company | Profile
--------------------------------------------------------- | ---------------- | ------- | -------
<img width="30px" src="https://github.com/mdemirhan.png"> | Mustafa Demirhan | Google | [mdemirhan](https://github.com/mdemirhan)
| &nbsp; | Leads | Company | Profile |
| --------------------------------------------------------- | ---------------- | ------- | ----------------------------------------- |
| <img width="30px" src="https://github.com/mdemirhan.png"> | Mustafa Demirhan | Google | [mdemirhan](https://github.com/mdemirhan) |
## Scaling
Autoscaling
Artifact | Link
-------------------------- | ----
Forum | [knative-dev@](https://groups.google.com/forum/#!forum/knative-dev)
Community Meeting VC | [Hangouts](https://meet.google.com/ick-mumc-mjv?hs=122)
Community Meeting Calendar | Wednesdays at 9:30am PST <br>[Calendar](https://calendar.google.com/calendar/embed?src=google.com_18un4fuh6rokqf8hmfftm5oqq4%40group.calendar.google.com)
Meeting Notes | [Notes](https://docs.google.com/document/d/1FoLJqbDJM8_tw7CON-CJZsO2mlF8Ia1cWzCjWX8HDAI/edit#heading=h.c0ufqy5rucfa)
Document Folder | [Folder](https://drive.google.com/corp/drive/folders/1qpGIPXVGoMm6IXb74gPrrHkudV_bjIZ9)
Slack Channel | [#autoscaling](https://knative.slack.com/messages/autoscaling)
| Artifact | Link |
| -------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Forum | [knative-dev@](https://groups.google.com/forum/#!forum/knative-dev) |
| Community Meeting VC | [Hangouts](https://meet.google.com/ick-mumc-mjv?hs=122) |
| Community Meeting Calendar | Wednesdays at 9:30am PST <br>[Calendar](https://calendar.google.com/calendar/embed?src=google.com_18un4fuh6rokqf8hmfftm5oqq4%40group.calendar.google.com) |
| Meeting Notes | [Notes](https://docs.google.com/document/d/1FoLJqbDJM8_tw7CON-CJZsO2mlF8Ia1cWzCjWX8HDAI/edit#heading=h.c0ufqy5rucfa) |
| Document Folder | [Folder](https://drive.google.com/corp/drive/folders/1qpGIPXVGoMm6IXb74gPrrHkudV_bjIZ9) |
| Slack Channel | [#autoscaling](https://knative.slack.com/messages/autoscaling) |
&nbsp; | Leads | Company | Profile
------------------------------------------------------------- | -------------- | ------- | -------
<img width="30px" src="https://github.com/josephburnett.png"> | Joseph Burnett | Google | [josephburnett](https://github.com/josephburnett)
| &nbsp; | Leads | Company | Profile |
| ------------------------------------------------------------- | -------------- | ------- | ------------------------------------------------- |
| <img width="30px" src="https://github.com/josephburnett.png"> | Joseph Burnett | Google | [josephburnett](https://github.com/josephburnett) |
## Productivity
Project health, test framework, continuous integration & deployment, release, performance/scale/load testing infrastructure
Artifact | Link
-------------------------- | ----
Forum | [knative-dev@](https://groups.google.com/forum/#!forum/knative-dev)
Community Meeting VC | [Hangouts](https://meet.google.com/sps-vbhg-rfx)
Community Meeting Calendar | Every other Thursday, 2p PST<br>[Calendar](https://calendar.google.com/calendar/embed?src=google.com_18un4fuh6rokqf8hmfftm5oqq4%40group.calendar.google.com)
Meeting Notes | [Notes](https://docs.google.com/document/d/1aPRwYGD4XscRIqlBzbNsSB886PJ0G-vZYUAAUjoydko)
Document Folder | [Folder](https://drive.google.com/corp/drive/folders/1oMYB4LQHjySuMChmcWYCyhH7-CSkz2r_)
Slack Channel | [#productivity](https://knative.slack.com/messages/productivity)
| Artifact | Link |
| -------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| Forum | [knative-dev@](https://groups.google.com/forum/#!forum/knative-dev) |
| Community Meeting VC | [Hangouts](https://meet.google.com/sps-vbhg-rfx) |
| Community Meeting Calendar | Every other Thursday, 2p PST<br>[Calendar](https://calendar.google.com/calendar/embed?src=google.com_18un4fuh6rokqf8hmfftm5oqq4%40group.calendar.google.com) |
| Meeting Notes | [Notes](https://docs.google.com/document/d/1aPRwYGD4XscRIqlBzbNsSB886PJ0G-vZYUAAUjoydko) |
| Document Folder | [Folder](https://drive.google.com/corp/drive/folders/1oMYB4LQHjySuMChmcWYCyhH7-CSkz2r_) |
| Slack Channel | [#productivity](https://knative.slack.com/messages/productivity) |
&nbsp; | Leads | Company | Profile
--------------------------------------------------------- | -------------- | ------- | -------
<img width="30px" src="https://github.com/jessiezcc.png"> | Jessie Zhu | Google | [jessiezcc](https://github.com/jessiezcc)
<img width="30px" src="https://github.com/adrcunha.png"> | Adriano Cunha | Google | [adrcunha](https://github.com/adrcunha)
| &nbsp; | Leads | Company | Profile |
| --------------------------------------------------------- | ------------- | ------- | ----------------------------------------- |
| <img width="30px" src="https://github.com/jessiezcc.png"> | Jessie Zhu | Google | [jessiezcc](https://github.com/jessiezcc) |
| <img width="30px" src="https://github.com/adrcunha.png"> | Adriano Cunha | Google | [adrcunha](https://github.com/adrcunha) |
---


@@ -155,6 +155,10 @@ not, then you will need to look downstream yourself.
You should see log lines similar to:
```json
{"ID":"284375451531353","Data":"SGVsbG8gV29ybGQh","Attributes":null,"PublishTime":"2018-10-31T00:00:00.00Z"}
{
"ID": "284375451531353",
"Data": "SGVsbG8gV29ybGQh",
"Attributes": null,
"PublishTime": "2018-10-31T00:00:00.00Z"
}
```
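The `Data` field in each message is base64-encoded. A quick sketch of recovering the original payload from the line above, assuming a standard `base64` utility is available:

```shell
# Decode the base64-encoded Data field from the Pub/Sub message.
echo "SGVsbG8gV29ybGQh" | base64 --decode
# → Hello World!
```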


@ -10,22 +10,22 @@ by a Knative Service.
You will need:
1. An internet-accessible Kubernetes cluster with Knative Serving
installed. Follow the [installation
instructions](https://github.com/knative/docs/blob/master/install/README.md)
if you need to create one.
installed. Follow the [installation
instructions](https://github.com/knative/docs/blob/master/install/README.md)
if you need to create one.
1. Ensure Knative Serving is [configured with a domain
name](https://github.com/knative/docs/blob/master/serving/using-a-custom-domain.md)
that allows GitHub to call into the cluster.
name](https://github.com/knative/docs/blob/master/serving/using-a-custom-domain.md)
that allows GitHub to call into the cluster.
1. If you're using GKE, you'll also want to [assign a static IP address](https://github.com/knative/docs/blob/master/serving/gke-assigning-static-ip-address.md).
1. Install [Knative
Eventing](https://github.com/knative/docs/tree/master/eventing). Those
instructions also install the default eventing sources, including
the `GitHubSource` we'll use.
Eventing](https://github.com/knative/docs/tree/master/eventing). Those
instructions also install the default eventing sources, including
the `GitHubSource` we'll use.
### Create a Knative Service
To verify the `GitHubSource` is working, we will create a simple Knative
`Service` that dumps incoming messages to its log. The `service.yaml` file
`Service` that dumps incoming messages to its log. The `service.yaml` file
defines this basic service.
```yaml
@@ -44,7 +44,6 @@ spec:
Enter the following command to create the service from `service.yaml`:
```shell
kubectl --namespace default apply --filename eventing/samples/github-source/service.yaml
```
@@ -54,7 +53,7 @@ kubectl --namespace default apply --filename eventing/samples/github-source/serv
Create a [personal access token](https://github.com/settings/tokens)
for GitHub that the GitHub source can use to register webhooks with
the GitHub API. Also decide on a secret token that your code will use
to authenticate the incoming webhooks from GitHub (*secretToken*).
to authenticate the incoming webhooks from GitHub (_secretToken_).
The token can be named anything you find convenient. The Source
requires `repo:public_repo` and `admin:repo_hook`, to let it fire
@@ -68,7 +67,7 @@ recommended scopes:
![GitHub UI](personal_access_token.png "GitHub personal access token screenshot")
Update `githubsecret.yaml` with those values. If your generated access
token is `'personal_access_token_value'` and you choose your *secretToken*
token is `'personal_access_token_value'` and you choose your _secretToken_
as `'asdfasfdsaf'`, you'd modify `githubsecret.yaml` like so:
```yaml
@@ -82,7 +81,7 @@ stringData:
secretToken: asdfasfdsaf
```
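GitHub signs each webhook delivery with the *secretToken*, sending an HMAC-SHA1 hex digest of the request body in the `X-Hub-Signature` header, which the receiving side can recompute and compare. A rough sketch of that computation with `openssl`, using a made-up payload (the payload and header line here are illustrative, not taken from the sample):

```shell
# Hypothetical webhook body; the secret matches the example above.
payload='{"action":"opened"}'
secret='asdfasfdsaf'

# Recompute the digest GitHub would send as "X-Hub-Signature: sha1=<hex>".
sig=$(printf '%s' "$payload" | openssl dgst -sha1 -hmac "$secret" | awk '{print $NF}')
echo "X-Hub-Signature: sha1=$sig"
```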
Hint: you can make up a random *secretToken* with:
Hint: you can make up a random _secretToken_ with:
```shell
head -c 8 /dev/urandom | base64
@@ -108,7 +107,7 @@ metadata:
name: githubsourcesample
spec:
eventTypes:
- pull_request
- pull_request
ownerAndRepository: <YOUR USER>/<YOUR REPO>
accessToken:
secretKeyRef:
@@ -122,7 +121,6 @@ spec:
apiVersion: serving.knative.dev/v1alpha1
kind: Service
name: github-message-dumper
```
Then, apply that yaml using `kubectl`:
@@ -146,7 +144,6 @@ Create a pull request in your GitHub repository. We will verify
that the GitHub events were sent into the Knative eventing system
by looking at our message dumper function logs.
```shell
kubectl --namespace default get pods
kubectl --namespace default logs github-message-dumper-XXXX user-container


@@ -73,9 +73,9 @@ export IOTCORE_TOPIC_DEVICE="iot-demo-device-pubsub-topic"
1. Install the
[in-memory `ClusterChannelProvisioner`](https://github.com/knative/eventing/tree/master/config/provisioners/in-memory-channel).
- Note that you can skip this if you choose to use a different type of
`Channel`. If so, you will need to modify `channel.yaml` before
deploying it.
- Note that you can skip this if you choose to use a different type of
`Channel`. If so, you will need to modify `channel.yaml` before
deploying it.
#### GCP PubSub Source
@@ -138,8 +138,8 @@ Even though the `Source` isn't completely ready yet, we can setup the
ko apply -f -
```
- This uses a very simple Knative Service to see that events are flowing.
Feel free to replace it.
- This uses a very simple Knative Service to see that events are flowing.
Feel free to replace it.
#### IoT Core


@@ -10,7 +10,6 @@ consumption by a function that has been implemented as a Knative Service.
1. Setup [Knative Serving](https://github.com/knative/docs/tree/master/serving).
1. Setup [Knative Eventing](https://github.com/knative/docs/tree/master/eventing).
### Channel
1. Create a `Channel`. You can use your own `Channel` or use the provided sample, which creates a channel called `testchannel`. If you use your own `Channel` with a different name, then you will need to alter other commands later.
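A minimal sketch of what such a `Channel` resource might look like with the in-memory provisioner; this is an illustration from the v1alpha1 API of that era, and the sample's own `channel.yaml` is the authoritative version:

```yaml
apiVersion: eventing.knative.dev/v1alpha1
kind: Channel
metadata:
  name: testchannel
spec:
  provisioner:
    apiVersion: eventing.knative.dev/v1alpha1
    kind: ClusterChannelProvisioner
    name: in-memory-channel
```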
@@ -46,7 +45,6 @@ In order to check the `KubernetesEventSource` is fully working, we will create a
kubectl apply -f eventing/samples/kubernetes-event-source/subscription.yaml
```
### Create Events
Create events by launching a pod in the default namespace. Create a busybox container
@@ -61,7 +59,6 @@ Once the shell comes up, just exit it and kill the pod.
kubectl delete pod busybox
```
### Verify
We will verify that the Kubernetes events were sent into the Knative eventing system by looking at our message dumper function logs. If you deployed the [Subscriber](#subscriber), then continue using this section. If not, then you will need to look downstream yourself.
@@ -72,11 +69,10 @@ kubectl logs -l serving.knative.dev/service=message-dumper -c user-container
```
You should see log lines similar to:
```
{"metadata":{"name":"busybox.15644359eaa4d8e7","namespace":"default","selfLink":"/api/v1/namespaces/default/events/busybox.15644359eaa4d8e7","uid":"daf8d3ca-e10d-11e8-bf3c-42010a8a017d","resourceVersion":"7840","creationTimestamp":"2018-11-05T15:17:05Z"},"involvedObject":{"kind":"Pod","namespace":"default","name":"busybox","uid":"daf645df-e10d-11e8-bf3c-42010a8a017d","apiVersion":"v1","resourceVersion":"681388"},"reason":"Scheduled","message":"Successfully assigned busybox to gke-knative-eventing-e2e-default-pool-575bcad9-vz55","source":{"component":"default-scheduler"},"firstTimestamp":"2018-11-05T15:17:05Z","lastTimestamp":"2018-11-05T15:17:05Z","count":1,"type":"Normal","eventTime":null,"reportingComponent":"","reportingInstance":""}
Ce-Source: /apis/v1/namespaces/default/pods/busybox
{"metadata":{"name":"busybox.15644359f59f72f2","namespace":"default","selfLink":"/api/v1/namespaces/default/events/busybox.15644359f59f72f2","uid":"db14ff23-e10d-11e8-bf3c-42010a8a017d","resourceVersion":"7841","creationTimestamp":"2018-11-05T15:17:06Z"},"involvedObject":{"kind":"Pod","namespace":"default","name":"busybox","uid":"daf645df-e10d-11e8-bf3c-42010a8a017d","apiVersion":"v1","resourceVersion":"681389"},"reason":"SuccessfulMountVolume","message":"MountVolume.SetUp succeeded for volume \"default-token-pzr6x\" ","source":{"component":"kubelet","host":"gke-knative-eventing-e2e-default-pool-575bcad9-vz55"},"firstTimestamp":"2018-11-05T15:17:06Z","lastTimestamp":"2018-11-05T15:17:06Z","count":1,"type":"Normal","eventTime":null,"reportingComponent":"","reportingInstance":""}
Ce-Source: /apis/v1/namespaces/default/pods/busybox
```
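Each log line above is one Kubernetes `Event` serialized as JSON. Plain shell tools are enough to skim the interesting fields; for example, pulling out the `reason` (shown here on a shortened, hypothetical event line standing in for the real output):

```shell
# A trimmed-down event line, standing in for the real kubectl output.
line='{"reason":"Scheduled","message":"Successfully assigned busybox","type":"Normal"}'
printf '%s\n' "$line" | grep -o '"reason":"[^"]*"'
# → "reason":"Scheduled"
```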


@@ -8,7 +8,7 @@ You can find [guides for other platforms here](README.md).
## Before you begin
Knative requires a Kubernetes cluster v1.10 or newer. `kubectl` v1.10 is also
required. This guide walks you through creating a cluster with the correct
required. This guide walks you through creating a cluster with the correct
specifications for Knative on Azure Kubernetes Service (AKS).
This guide assumes you are using bash in a Mac or Linux environment; some
@@ -30,16 +30,16 @@ brew install azure-cli
#### Ubuntu 64-bit
1. Add the azure-cli repo to your sources:
```console
echo "deb [arch=amd64] https://packages.microsoft.com/repos/azure-cli/ wheezy main" | \
sudo tee /etc/apt/sources.list.d/azure-cli.list
```
```console
echo "deb [arch=amd64] https://packages.microsoft.com/repos/azure-cli/ wheezy main" | \
sudo tee /etc/apt/sources.list.d/azure-cli.list
```
1. Run the following commands to install the Azure CLI and its dependencies:
```console
sudo apt-key adv --keyserver packages.microsoft.com --recv-keys 52E16F86FEE04B979B07E28DB02C46DF417A0893
sudo apt-get install apt-transport-https
sudo apt-get update && sudo apt-get install azure-cli
```
```console
sudo apt-key adv --keyserver packages.microsoft.com --recv-keys 52E16F86FEE04B979B07E28DB02C46DF417A0893
sudo apt-get install apt-transport-https
sudo apt-get update && sudo apt-get install azure-cli
```
### Installing kubectl
@@ -59,15 +59,17 @@ First let's identify your Azure subscription and save it for use later.
1. Run `az login` and follow the instructions in the command output to authorize `az` to use your account
1. List your Azure subscriptions:
```bash
az account list -o table
```
```bash
az account list -o table
```
### Create a Resource Group for AKS
To simplify the command lines for this walkthrough, we need to define a few
To simplify the command lines for this walkthrough, we need to define a few
environment variables. First determine which region you'd like to run AKS in, along with the resource group you'd like to use.
1. Set `RESOURCE_GROUP` and `LOCATION` variables:
```bash
export LOCATION=eastus
export RESOURCE_GROUP=knative-group
@@ -83,64 +85,60 @@ environment variables. First determine which region you'd like to run AKS in, al
Next we will create a managed Kubernetes cluster using AKS. To make sure the cluster is large enough to host all the Knative and Istio components, the recommended configuration for a cluster is:
* Kubernetes version 1.10 or later
* Three or more nodes
* Standard_DS3_v2 nodes
* RBAC enabled
- Kubernetes version 1.10 or later
- Three or more nodes
- Standard_DS3_v2 nodes
- RBAC enabled
1. To enable AKS in your subscription, use the following command with the `az` CLI:
```bash
az provider register -n Microsoft.ContainerService
```
You should also ensure that the `Microsoft.Compute` and `Microsoft.Network` providers are registered in your subscription. If you need to enable them:
```bash
az provider register -n Microsoft.Compute
az provider register -n Microsoft.Network
```
```bash
az provider register -n Microsoft.ContainerService
```
You should also ensure that the `Microsoft.Compute` and `Microsoft.Network` providers are registered in your subscription. If you need to enable them:
```bash
az provider register -n Microsoft.Compute
az provider register -n Microsoft.Network
```
1. Create the AKS cluster!
```bash
az aks create --resource-group $RESOURCE_GROUP \
--name $CLUSTER_NAME \
--generate-ssh-keys \
--kubernetes-version 1.10.5 \
--enable-rbac \
--node-vm-size Standard_DS3_v2
```
```bash
az aks create --resource-group $RESOURCE_GROUP \
--name $CLUSTER_NAME \
--generate-ssh-keys \
--kubernetes-version 1.10.5 \
--enable-rbac \
--node-vm-size Standard_DS3_v2
```
1. Configure kubectl to use the new cluster.
```bash
az aks get-credentials --resource-group $RESOURCE_GROUP --name $CLUSTER_NAME --admin
```
```bash
az aks get-credentials --resource-group $RESOURCE_GROUP --name $CLUSTER_NAME --admin
```
1. Verify your cluster is up and running
```bash
kubectl get nodes
```
```bash
kubectl get nodes
```
## Installing Istio
Knative depends on Istio.
1. Install Istio:
```bash
kubectl apply --filename https://raw.githubusercontent.com/knative/serving/v0.2.1/third_party/istio-1.0.2/istio.yaml
```
```bash
kubectl apply --filename https://raw.githubusercontent.com/knative/serving/v0.2.1/third_party/istio-1.0.2/istio.yaml
```
1. Label the default namespace with `istio-injection=enabled`:
```bash
kubectl label namespace default istio-injection=enabled
```
```bash
kubectl label namespace default istio-injection=enabled
```
1. Monitor the Istio components until all of the components show a `STATUS` of
`Running` or `Completed`:
```bash
kubectl get pods --namespace istio-system
```
`Running` or `Completed`:
```bash
kubectl get pods --namespace istio-system
```
It will take a few minutes for all the components to be up and running; you can
rerun the command to see the current status.
> Note: Instead of rerunning the command, you can add `--watch` to the above
command to view the component's status updates in real time. Use CTRL + C to exit watch mode.
> command to view the component's status updates in real time. Use CTRL + C to exit watch mode.
## Installing Knative components
@@ -149,35 +147,36 @@ You can install the Knative Serving and Build components together, or Build on i
### Installing Knative Serving and Build components
1. Run the `kubectl apply` command to install Knative and its dependencies:
```bash
kubectl apply --filename https://github.com/knative/serving/releases/download/v0.2.1/release.yaml
```
```bash
kubectl apply --filename https://github.com/knative/serving/releases/download/v0.2.1/release.yaml
```
1. Monitor the Knative components until all of the components show a
`STATUS` of `Running`:
```bash
kubectl get pods --namespace knative-serving
kubectl get pods --namespace knative-build
```
```bash
kubectl get pods --namespace knative-serving
kubectl get pods --namespace knative-build
```
### Installing Knative Build only
1. Run the `kubectl apply` command to install
[Knative Build](https://github.com/knative/build) and its dependencies:
```bash
kubectl apply --filename https://raw.githubusercontent.com/knative/serving/v0.2.1/third_party/config/build/release.yaml
```
```bash
kubectl apply --filename https://raw.githubusercontent.com/knative/serving/v0.2.1/third_party/config/build/release.yaml
```
1. Monitor the Knative Build components until all of the components show a
`STATUS` of `Running`:
```bash
kubectl get pods --namespace knative-build
```bash
kubectl get pods --namespace knative-build
```
Just as with the Istio components, it will take a few seconds for the Knative
components to be up and running; you can rerun the `kubectl get` command to see
the current status.
> Note: Instead of rerunning the command, you can add `--watch` to the above
command to view the component's status updates in real time. Use CTRL + C to
exit watch mode.
> command to view the component's status updates in real time. Use CTRL + C to
> exit watch mode.
You are now ready to deploy an app or create a build in your new Knative
cluster.
@@ -188,11 +187,11 @@ Now that your cluster has Knative installed, you're ready to deploy an app.
You have two options for deploying your first app:
* You can follow the step-by-step
- You can follow the step-by-step
[Getting Started with Knative App Deployment](getting-started-knative-app.md)
guide.
* You can view the available [sample apps](../serving/samples/README.md) and
- You can view the available [sample apps](../serving/samples/README.md) and
deploy one of your choosing.
## Cleaning up
@@ -202,6 +201,7 @@ you're not using it. Deleting the cluster will also remove Knative, Istio,
and any apps you've deployed.
To delete the cluster, enter the following command:
```bash
az aks delete --resource-group $RESOURCE_GROUP --name $CLUSTER_NAME --yes --no-wait
```


@@ -8,7 +8,7 @@ You can find [guides for other platforms here](README.md).
## Before you begin
Knative requires a Kubernetes cluster v1.10 or newer. `kubectl` v1.10 is also
required. This guide walks you through creating a cluster with the correct
required. This guide walks you through creating a cluster with the correct
specifications for Knative on Google Cloud Platform (GCP).
This guide assumes you are using `bash` in a Mac or Linux environment; some
@@ -18,22 +18,24 @@ commands will need to be adjusted for use in a Windows environment.
1. If you already have `gcloud` installed with `kubectl` version 1.10 or newer,
you can skip these steps.
> Tip: To check which version of `kubectl` you have installed, enter:
```
kubectl version
```
```
kubectl version
```
1. Download and install the `gcloud` command line tool:
https://cloud.google.com/sdk/install
1. Install the `kubectl` component:
```
gcloud components install kubectl
```
```
gcloud components install kubectl
```
1. Authorize `gcloud`:
```
gcloud auth login
```
```
gcloud auth login
```
### Setting environment variables
@@ -55,7 +57,7 @@ export CLUSTER_ZONE=us-west1-c
### Setting up a Google Cloud Platform project
You need a Google Cloud Platform (GCP) project to create a Google Kubernetes Engine cluster.
You need a Google Cloud Platform (GCP) project to create a Google Kubernetes Engine cluster.
1. Set the `PROJECT` environment variable. You can replace `my-knative-project` with
the desired name of your GCP project. If you don't have one, we'll create one
@@ -64,6 +66,7 @@ You need a Google Cloud Platform (GCP) project to create a Google Kubernetes En
export PROJECT=my-knative-project
```
1. If you don't have a GCP project, create and set it as your `gcloud` default:
```bash
gcloud projects create $PROJECT --set-as-default
```
@@ -72,11 +75,13 @@ You need a Google Cloud Platform (GCP) project to create a Google Kubernetes En
for your new project.
1. If you already have a GCP project, make sure your project is set as your `gcloud` default:
```bash
gcloud config set core/project $PROJECT
```
> Tip: Enter `gcloud config get-value project` to view the ID of your default GCP project.
1. Enable the necessary APIs:
```bash
gcloud services enable \
@@ -90,29 +95,29 @@ To make sure the cluster is large enough to host all the Knative and
To make sure the cluster is large enough to host all the Knative and
Istio components, the recommended configuration for a cluster is:
* Kubernetes version 1.10 or later
* 4 vCPU nodes (`n1-standard-4`)
* Node autoscaling, up to 10 nodes
* API scopes for `cloud-platform`, `logging-write`, `monitoring-write`, and
- Kubernetes version 1.10 or later
- 4 vCPU nodes (`n1-standard-4`)
- Node autoscaling, up to 10 nodes
- API scopes for `cloud-platform`, `logging-write`, `monitoring-write`, and
`pubsub` (if those features will be used)
1. Create a Kubernetes cluster on GKE with the required specifications:
```bash
gcloud container clusters create $CLUSTER_NAME \
--zone=$CLUSTER_ZONE \
--cluster-version=latest \
--machine-type=n1-standard-4 \
--enable-autoscaling --min-nodes=1 --max-nodes=10 \
--enable-autorepair \
--scopes=service-control,service-management,compute-rw,storage-ro,cloud-platform,logging-write,monitoring-write,pubsub,datastore \
--num-nodes=3
```
```bash
gcloud container clusters create $CLUSTER_NAME \
--zone=$CLUSTER_ZONE \
--cluster-version=latest \
--machine-type=n1-standard-4 \
--enable-autoscaling --min-nodes=1 --max-nodes=10 \
--enable-autorepair \
--scopes=service-control,service-management,compute-rw,storage-ro,cloud-platform,logging-write,monitoring-write,pubsub,datastore \
--num-nodes=3
```
1. Grant cluster-admin permissions to the current user:
```bash
kubectl create clusterrolebinding cluster-admin-binding \
--clusterrole=cluster-admin \
--user=$(gcloud config get-value core/account)
```
```bash
kubectl create clusterrolebinding cluster-admin-binding \
--clusterrole=cluster-admin \
--user=$(gcloud config get-value core/account)
```
Admin permissions are required to create the necessary
[RBAC rules for Istio](https://istio.io/docs/concepts/security/rbac/).
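As a small, purely illustrative sanity check (the scope string is copied from the `gcloud container clusters create` command above; the loop itself is not part of the guide), you can confirm the recommended API scopes are all present before creating the cluster:

```shell
# Scope string as passed to --scopes above; the required list below
# reflects the guide's "API scopes" recommendation.
SCOPES="service-control,service-management,compute-rw,storage-ro,cloud-platform,logging-write,monitoring-write,pubsub,datastore"

for required in cloud-platform logging-write monitoring-write pubsub; do
  case ",$SCOPES," in
    *",$required,"*) echo "$required: present" ;;
    *)               echo "$required: MISSING" ;;
  esac
done
```

If any scope prints `MISSING`, add it to `--scopes` before running the create command.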
@@ -122,24 +127,22 @@ Admin permissions are required to create the necessary
Knative depends on Istio.
1. Install Istio:
```bash
kubectl apply --filename https://raw.githubusercontent.com/knative/serving/v0.2.1/third_party/istio-1.0.2/istio.yaml
```
```bash
kubectl apply --filename https://raw.githubusercontent.com/knative/serving/v0.2.1/third_party/istio-1.0.2/istio.yaml
```
1. Label the default namespace with `istio-injection=enabled`:
```bash
kubectl label namespace default istio-injection=enabled
```
```bash
kubectl label namespace default istio-injection=enabled
```
1. Monitor the Istio components until all of the components show a `STATUS` of
`Running` or `Completed`:
```bash
kubectl get pods --namespace istio-system
```
`Running` or `Completed`:
```bash
kubectl get pods --namespace istio-system
```
It will take a few minutes for all the components to be up and running; you can
rerun the command to see the current status.
> Note: Instead of rerunning the command, you can add `--watch` to the above
command to view the component's status updates in real time. Use CTRL + C to exit watch mode.
> command to view the component's status updates in real time. Use CTRL + C to exit watch mode.
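The polling described above can be sketched as a small helper; the function name and the simulated pod listing below are illustrative, not part of the guide:

```shell
# Succeeds only when every non-header line of a `kubectl get pods`-style
# listing (read from stdin) shows STATUS Running or Completed.
all_components_ready() {
  ! grep -v -E '(Running|Completed|STATUS)' >/dev/null
}

# Simulated listing while one component is still starting:
listing='NAME            READY   STATUS              RESTARTS
istio-pilot-0   1/1     Running             0
istio-mixer-0   0/1     ContainerCreating   0'

if printf '%s\n' "$listing" | all_components_ready; then
  echo "all ready"
else
  echo "still waiting"
fi
```

In practice you might poll with something like `until kubectl get pods --namespace istio-system | all_components_ready; do sleep 5; done`, the same idea as the wait loops used in the OpenShift instructions.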
## Installing Knative components
@@ -148,35 +151,36 @@ You can install the Knative Serving and Build components together, or Build on i
### Installing Knative Serving and Build components
1. Run the `kubectl apply` command to install Knative and its dependencies:
```bash
kubectl apply --filename https://github.com/knative/serving/releases/download/v0.2.1/release.yaml
```
```bash
kubectl apply --filename https://github.com/knative/serving/releases/download/v0.2.1/release.yaml
```
1. Monitor the Knative components until all of the components show a
`STATUS` of `Running`:
```bash
kubectl get pods --namespace knative-serving
kubectl get pods --namespace knative-build
```
```bash
kubectl get pods --namespace knative-serving
kubectl get pods --namespace knative-build
```
### Installing Knative Build only
1. Run the `kubectl apply` command to install
[Knative Build](https://github.com/knative/build) and its dependencies:
```bash
kubectl apply --filename https://raw.githubusercontent.com/knative/serving/v0.2.1/third_party/config/build/release.yaml
```
```bash
kubectl apply --filename https://raw.githubusercontent.com/knative/serving/v0.2.1/third_party/config/build/release.yaml
```
1. Monitor the Knative Build components until all of the components show a
`STATUS` of `Running`:
```bash
kubectl get pods --namespace knative-build
```bash
kubectl get pods --namespace knative-build
```
Just as with the Istio components, it will take a few seconds for the Knative
components to be up and running; you can rerun the `kubectl get` command to see
the current status.
> Note: Instead of rerunning the command, you can add `--watch` to the above
command to view the component's status updates in real time. Use CTRL + C to
exit watch mode.
> command to view the component's status updates in real time. Use CTRL + C to
> exit watch mode.
You are now ready to deploy an app or create a build in your new Knative
cluster.
@@ -189,14 +193,14 @@ create a build.
Depending on which Knative component you have installed, there are a few options
for getting started:
* You can follow the step-by-step
- You can follow the step-by-step
[Getting Started with Knative App Deployment](getting-started-knative-app.md)
guide.
* You can view the available [sample apps](../serving/samples/README.md) and
- You can view the available [sample apps](../serving/samples/README.md) and
deploy one of your choosing.
* You can follow the step-by-step
- You can follow the step-by-step
[Creating a simple Knative Build](../build/creating-builds.md) guide.
## Cleaning up
@@ -217,4 +221,3 @@ Except as otherwise noted, the content of this page is licensed under the
[Creative Commons Attribution 4.0 License](https://creativecommons.org/licenses/by/4.0/),
and code samples are licensed under the
[Apache 2.0 License](https://www.apache.org/licenses/LICENSE-2.0).
@@ -94,35 +94,36 @@ You can install the Knative Serving and Build components together, or Build on i
### Installing Knative Serving and Build components
1. Run the `kubectl apply` command to install Knative and its dependencies:
```bash
kubectl apply --filename https://github.com/knative/serving/releases/download/v0.2.1/release.yaml
```
```bash
kubectl apply --filename https://github.com/knative/serving/releases/download/v0.2.1/release.yaml
```
1. Monitor the Knative components until all of the components show a
`STATUS` of `Running`:
```bash
kubectl get pods --namespace knative-serving
kubectl get pods --namespace knative-build
```
```bash
kubectl get pods --namespace knative-serving
kubectl get pods --namespace knative-build
```
### Installing Knative Build only
1. Run the `kubectl apply` command to install
[Knative Build](https://github.com/knative/build) and its dependencies:
```bash
kubectl apply --filename https://raw.githubusercontent.com/knative/serving/v0.2.1/third_party/config/build/release.yaml
```
```bash
kubectl apply --filename https://raw.githubusercontent.com/knative/serving/v0.2.1/third_party/config/build/release.yaml
```
1. Monitor the Knative Build components until all of the components show a
`STATUS` of `Running`:
```bash
kubectl get pods --namespace knative-build
```bash
kubectl get pods --namespace knative-build
```
Just as with the Istio components, it will take a few seconds for the Knative
components to be up and running; you can rerun the `kubectl get` command to see
the current status.
> Note: Instead of rerunning the command, you can add `--watch` to the above
command to view the component's status updates in real time. Use CTRL + C to
exit watch mode.
> command to view the component's status updates in real time. Use CTRL + C to
> exit watch mode.
You are now ready to deploy an app or create a build in your new Knative
cluster.
@@ -162,35 +162,36 @@ You can install the Knative Serving and Build components together, or Build on i
### Installing Knative Serving and Build components
1. Run the `kubectl apply` command to install Knative and its dependencies:
```bash
kubectl apply --filename https://github.com/knative/serving/releases/download/v0.2.1/release.yaml
```
```bash
kubectl apply --filename https://github.com/knative/serving/releases/download/v0.2.1/release.yaml
```
1. Monitor the Knative components until all of the components show a
`STATUS` of `Running`:
```bash
kubectl get pods --namespace knative-serving
kubectl get pods --namespace knative-build
```
```bash
kubectl get pods --namespace knative-serving
kubectl get pods --namespace knative-build
```
### Installing Knative Build only
1. Run the `kubectl apply` command to install
[Knative Build](https://github.com/knative/build) and its dependencies:
```bash
kubectl apply --filename https://raw.githubusercontent.com/knative/serving/v0.2.1/third_party/config/build/release.yaml
```
```bash
kubectl apply --filename https://raw.githubusercontent.com/knative/serving/v0.2.1/third_party/config/build/release.yaml
```
1. Monitor the Knative Build components until all of the components show a
`STATUS` of `Running`:
```bash
kubectl get pods --namespace knative-build
```bash
kubectl get pods --namespace knative-build
```
Just as with the Istio components, it will take a few seconds for the Knative
components to be up and running; you can rerun the `kubectl get` command to see
the current status.
> Note: Instead of rerunning the command, you can add `--watch` to the above
command to view the component's status updates in real time. Use CTRL + C to
exit watch mode.
> command to view the component's status updates in real time. Use CTRL + C to
> exit watch mode.
You are now ready to deploy an app or create a build in your new Knative
cluster.
@@ -15,14 +15,14 @@ you can create one using [Minikube](https://github.com/kubernetes/minikube).
### Install kubectl and Minikube
1. If you already have `kubectl` CLI, run `kubectl version` to check the
version. You need v1.10 or newer. If your `kubectl` is older, follow
version. You need v1.10 or newer. If your `kubectl` is older, follow
the next step to install a newer version.
1. [Install the kubectl CLI](https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl).
1. [Install and configure minikube](https://github.com/kubernetes/minikube#installation)
version v0.28.1 or later with a [VM driver](https://github.com/kubernetes/minikube#requirements),
e.g. `kvm2` on Linux or `hyperkit` on macOS.
version v0.28.1 or later with a [VM driver](https://github.com/kubernetes/minikube#requirements),
e.g. `kvm2` on Linux or `hyperkit` on macOS.
## Creating a Kubernetes cluster
@@ -74,7 +74,7 @@ It will take a few minutes for all the components to be up and running; you can
rerun the command to see the current status.
> Note: Instead of rerunning the command, you can add `--watch` to the above
command to view the component's status updates in real time. Use CTRL+C to exit watch mode.
> command to view the component's status updates in real time. Use CTRL+C to exit watch mode.
## Installing Knative Serving
@@ -102,7 +102,7 @@ Just as with the Istio components, it will take a few seconds for the Knative
components to be up and running; you can rerun the command to see the current status.
> Note: Instead of rerunning the command, you can add `--watch` to the above
command to view the component's status updates in real time. Use CTRL+C to exit watch mode.
> command to view the component's status updates in real time. Use CTRL+C to exit watch mode.
Now you can deploy an app to your newly created Knative cluster.
@@ -119,12 +119,13 @@ If you'd like to view the available sample apps and deploy one of your choosing,
head to the [sample apps](../serving/samples/README.md) repo.
> Note: When looking up the IP address to use for accessing your app, you need to look up
the NodePort for the `knative-ingressgateway` as well as the IP address used for Minikube.
You can use the following command to look up the value to use for the {IP_ADDRESS} placeholder
used in the samples:
```shell
echo $(minikube ip):$(kubectl get svc knative-ingressgateway --namespace istio-system --output 'jsonpath={.spec.ports[?(@.port==80)].nodePort}')
```
> the NodePort for the `knative-ingressgateway` as well as the IP address used for Minikube.
> You can use the following command to look up the value to use for the {IP_ADDRESS} placeholder
> used in the samples:
```shell
echo $(minikube ip):$(kubectl get svc knative-ingressgateway --namespace istio-system --output 'jsonpath={.spec.ports[?(@.port==80)].nodePort}')
```
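For illustration, the command above simply joins the Minikube IP with the gateway's NodePort; with made-up values it composes like this:

```shell
# Hypothetical stand-ins for $(minikube ip) and the jsonpath NodePort lookup.
MINIKUBE_IP="192.168.99.100"
NODE_PORT="32380"

IP_ADDRESS="${MINIKUBE_IP}:${NODE_PORT}"
echo "$IP_ADDRESS"   # prints 192.168.99.100:32380
# The result is what you substitute for {IP_ADDRESS} in the samples,
# e.g. curl -H "Host: <app>.default.example.com" http://$IP_ADDRESS
```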
## Cleaning up
@@ -54,8 +54,8 @@ minishift start
```
- The above configuration ensures that Knative gets created in its own [minishift profile](https://docs.okd.io/latest/minishift/using/profiles.html) called `knative` with 8GB of RAM, 4 vCPUs, and 50GB of disk. Image caching helps the cluster restart faster each time.
- The [addon](https://docs.okd.io/latest/minishift/using/addons.html) **admin-user** creates a user called `admin` with password `admin` having the role of cluster-admin. The user gets created only after the addon is applied, that is usually after successful start of Minishift
- The [addon](https://docs.okd.io/latest/minishift/using/addons.html) **anyuid** allows the `default` service account to run the application with uid `0`
- The [addon](https://docs.okd.io/latest/minishift/using/addons.html) **admin-user** creates a user called `admin` with password `admin` and the cluster-admin role. The user is created only after the addon is applied, usually after Minishift starts successfully.
- The [addon](https://docs.okd.io/latest/minishift/using/addons.html) **anyuid** allows the `default` service account to run applications with uid `0`.
- The command `minishift profile set knative` is required every time you start and stop Minishift, to make sure that you are on the `knative` profile configured above.
@@ -73,6 +73,7 @@ minishift oc-env
## Preparing Knative Deployment
### Enable Admission Controller Webhook
To deploy and run serverless Knative applications, you must enable the [Admission Controller Webhook](https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/).
Run the following command to configure OpenShift (run via Minishift) for the [Admission Controller Webhook](https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/):
@@ -110,16 +111,17 @@ until oc login -u admin -p admin; do sleep 5; done;
1. Set up the project **myproject** for use with Knative applications.
```shell
oc project myproject
oc adm policy add-scc-to-user privileged -z default
oc label namespace myproject istio-injection=enabled
```
The `oc adm policy` adds the **privileged** [Security Context Constraints(SCCs)](https://docs.okd.io/3.10/admin_guide/manage_scc.html) to the **default** Service Account. The SCCs are the precursor to the PSP (Pod Security Policy) mechanism in Kubernetes, as isito-sidecars required to be run with **privileged** permissions you need set that here.
```shell
oc project myproject
oc adm policy add-scc-to-user privileged -z default
oc label namespace myproject istio-injection=enabled
```
Its is also ensured that the project myproject is labelled for Istio automatic sidecar injection, with this `istio-injection=enabled` label to **myproject** each of the Knative applications that will be deployed in **myproject** will have Istio sidecars injected automatically.
The `oc adm policy` command adds the **privileged** [Security Context Constraints (SCCs)](https://docs.okd.io/3.10/admin_guide/manage_scc.html) to the **default** Service Account. SCCs are the precursor to the PSP (Pod Security Policy) mechanism in Kubernetes; because Istio sidecars must run with **privileged** permissions, you need to set that here.
> **IMPORTANT:** Avoid using `default` project in OpenShift for deploying Knative applications. As OpenShift deploys few of its mission critical applications in `default` project, it's safer not to touch it to avoid any instabilities in OpenShift.
The project **myproject** is also labeled for Istio automatic sidecar injection: with the `istio-injection=enabled` label on **myproject**, each Knative application deployed in **myproject** will have Istio sidecars injected automatically.
> **IMPORTANT:** Avoid using `default` project in OpenShift for deploying Knative applications. As OpenShift deploys few of its mission critical applications in `default` project, it's safer not to touch it to avoid any instabilities in OpenShift.
### Installing Istio
@@ -131,20 +133,19 @@ curl -s https://raw.githubusercontent.com/knative/docs/master/install/scripts/is
1. Run the following to install Istio:
```shell
oc apply -f https://storage.googleapis.com/knative-releases/serving/latest/istio.yaml
```
> **NOTE:** If you get a lot of errors after running the above command like: __unable to recognize "STDIN": no matches for kind "Gateway" in version "networking.istio.io/v1alpha3"__, just run the command above again, it's idempotent and hence objects will be created only once.
```shell
oc apply -f https://storage.googleapis.com/knative-releases/serving/latest/istio.yaml
```
> **NOTE:** If you see many errors after running the above command, such as **unable to recognize "STDIN": no matches for kind "Gateway" in version "networking.istio.io/v1alpha3"**, just run the command again; it is idempotent, so objects are created only once.
2. Ensure the istio-sidecar-injector pods run as privileged:
```shell
oc get cm istio-sidecar-injector -n istio-system -oyaml | sed -e 's/securityContext:/securityContext:\\n privileged: true/' | oc replace -f -
```
```shell
oc get cm istio-sidecar-injector -n istio-system -oyaml | sed -e 's/securityContext:/securityContext:\\n privileged: true/' | oc replace -f -
```
3. Monitor the Istio components until all of the components show a `STATUS` of `Running` or `Completed`:
```shell
while oc get pods -n istio-system | grep -v -E "(Running|Completed|STATUS)"; do sleep 5; done
```
> **NOTE:** It will take a few minutes for all the components to be up and running.
```shell
while oc get pods -n istio-system | grep -v -E "(Running|Completed|STATUS)"; do sleep 5; done
```
> **NOTE:** It will take a few minutes for all the components to be up and running.
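The `sed` substitution from step 2 above can be tried offline on a YAML fragment; the fragment and its indentation below are illustrative, not the real ConfigMap contents, and this sketch uses GNU sed's `\n` escape in the replacement:

```shell
fragment='      securityContext:
        runAsUser: 1337'

# Insert a `privileged: true` line right after `securityContext:`,
# mirroring what the `oc get cm ... | sed ... | oc replace` pipeline does.
printf '%s\n' "$fragment" |
  sed 's/securityContext:/securityContext:\n        privileged: true/'
```

The output keeps the original keys and gains `privileged: true` directly under `securityContext:`.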
## Install Knative Serving
@@ -156,33 +157,35 @@ The [knative-openshift-policies.sh](scripts/knative-openshift-policies.sh) runs
curl -s https://raw.githubusercontent.com/knative/docs/master/install/scripts/knative-openshift-policies.sh | bash
```
>You can safely ignore the warnings:
> You can safely ignore the warnings:
- Warning: ServiceAccount 'build-controller' not found cluster role "cluster-admin" added: "build-controller"
- Warning: ServiceAccount 'controller' not found cluster role "cluster-admin" added: "controller"
1. Install Knative serving:
```shell
oc apply -f https://storage.googleapis.com/knative-releases/serving/latest/release-no-mon.yaml
```
```shell
oc apply -f https://storage.googleapis.com/knative-releases/serving/latest/release-no-mon.yaml
```
2. Monitor the Knative components until all of the components show a `STATUS` of `Running` or `Completed`:
```shell
while oc get pods -n knative-build | grep -v -E "(Running|Completed|STATUS)"; do sleep 5; done
while oc get pods -n knative-serving | grep -v -E "(Running|Completed|STATUS)"; do sleep 5; done
```
The first command watches for all pod status in `knative-build` and the second command will watch for all pod status in `knative-serving`.
```shell
while oc get pods -n knative-build | grep -v -E "(Running|Completed|STATUS)"; do sleep 5; done
while oc get pods -n knative-serving | grep -v -E "(Running|Completed|STATUS)"; do sleep 5; done
```
> **NOTE:** It will take a few minutes for all the components to be up and running.
The first command watches the pod status in `knative-build`, and the second watches the pod status in `knative-serving`.
3. Set route to access the OpenShift ingress CIDR, so that services can be accessed via LoadBalancerIP
```shell
# Only for macOS
sudo route -n add -net $(minishift openshift config view | grep ingressIPNetworkCIDR | awk '{print $NF}') $(minishift ip)
# Only for Linux
sudo ip route add $(minishift openshift config view | grep ingressIPNetworkCIDR | sed 's/\r$//' | awk '{print $NF}') via $(minishift ip)
```
> **NOTE:** It will take a few minutes for all the components to be up and running.
3. Set a route to the OpenShift ingress CIDR so that services can be accessed via the LoadBalancer IP:
```shell
# Only for macOS
sudo route -n add -net $(minishift openshift config view | grep ingressIPNetworkCIDR | awk '{print $NF}') $(minishift ip)
# Only for Linux
sudo ip route add $(minishift openshift config view | grep ingressIPNetworkCIDR | sed 's/\r$//' | awk '{print $NF}') via $(minishift ip)
```
## Deploying an app
@@ -237,10 +240,11 @@ There are two ways to clean up, either deleting the entire minishift profile or
2. Delete your test cluster by running:
```shell
minishift stop
minishift profile delete knative
```
```shell
minishift stop
minishift profile delete knative
```
---
Except as otherwise noted, the content of this page is licensed under the
@@ -131,7 +131,7 @@ It will take a few minutes for all the components to be up and running; you can
rerun the command to see the current status.
> Note: Instead of rerunning the command, you can add `--watch` to the above
command to view the component's status updates in real time. Use CTRL+C to exit watch mode.
> command to view the component's status updates in real time. Use CTRL+C to exit watch mode.
Set `privileged` to `true` for the `istio-sidecar-injector`:
@@ -190,7 +190,7 @@ Just as with the Istio components, it will take a few seconds for the Knative
components to be up and running; you can rerun the command to see the current status.
> Note: Instead of rerunning the command, you can add `--watch` to the above
command to view the component's status updates in real time. Use CTRL+C to exit watch mode.
> command to view the component's status updates in real time. Use CTRL+C to exit watch mode.
Now you can deploy an app to your newly created Knative cluster.
@@ -207,13 +207,13 @@ If you'd like to view the available sample apps and deploy one of your choosing,
head to the [sample apps](../serving/samples/README.md) repo.
> Note: When looking up the IP address to use for accessing your app, you need to look up
the NodePort for the `knative-ingressgateway` as well as the IP address used for OpenShift.
You can use the following command to look up the value to use for the {IP_ADDRESS} placeholder
used in the samples:
> the NodePort for the `knative-ingressgateway` as well as the IP address used for OpenShift.
> You can use the following command to look up the value to use for the {IP_ADDRESS} placeholder
> used in the samples:
```shell
export IP_ADDRESS=$(oc get node -o 'jsonpath={.items[0].status.addresses[0].address}'):$(oc get svc knative-ingressgateway -n istio-system -o 'jsonpath={.spec.ports[?(@.port==80)].nodePort}')
```
```shell
export IP_ADDRESS=$(oc get node -o 'jsonpath={.items[0].status.addresses[0].address}'):$(oc get svc knative-ingressgateway -n istio-system -o 'jsonpath={.spec.ports[?(@.port==80)].nodePort}')
```
## Cleaning up
@@ -8,7 +8,7 @@ You can find [guides for other platforms here](README.md).
## Before you begin
Knative requires a Kubernetes cluster v1.10 or newer. `kubectl` v1.10 is also
required. This guide walks you through creating a cluster with the correct
required. This guide walks you through creating a cluster with the correct
specifications for Knative on Pivotal Container Service.
This guide assumes you are using bash in a Mac or Linux environment; some
@@ -27,8 +27,8 @@ To enable privileged mode and create a cluster:
1. Enable privileged mode:
1. Open the Pivotal Container Service tile in PCF Ops Manager.
1. In the plan configuration that you want to use, enable both of the following:
* Enable Privileged Containers - Use with caution
* Disable DenyEscalatingExec
- Enable Privileged Containers - Use with caution
- Disable DenyEscalatingExec
1. Save your changes.
1. In the PCF Ops Manager, review and then apply your changes.
1. [Create a cluster](https://docs.pivotal.io/runtimes/pks/1-1/create-cluster.html).
@@ -42,24 +42,22 @@ To retrieve your cluster credentials, follow the documentation at https://docs.p
Knative depends on Istio. Istio workloads require privileged mode for Init Containers.
1. Install Istio:
```bash
kubectl apply --filename https://raw.githubusercontent.com/knative/serving/v0.2.1/third_party/istio-1.0.2/istio.yaml
```
```bash
kubectl apply --filename https://raw.githubusercontent.com/knative/serving/v0.2.1/third_party/istio-1.0.2/istio.yaml
```
1. Label the default namespace with `istio-injection=enabled`:
```bash
kubectl label namespace default istio-injection=enabled
```
```bash
kubectl label namespace default istio-injection=enabled
```
1. Monitor the Istio components until all of the components show a `STATUS` of
`Running` or `Completed`:
```bash
kubectl get pods --namespace istio-system
```
`Running` or `Completed`:
```bash
kubectl get pods --namespace istio-system
```
It will take a few minutes for all the components to be up and running; you can
rerun the command to see the current status.
> Note: Instead of rerunning the command, you can add `--watch` to the above
command to view the component's status updates in real time. Use CTRL + C to exit watch mode.
> command to view the component's status updates in real time. Use CTRL + C to exit watch mode.
## Installing Knative components
@@ -68,35 +66,36 @@ You can install the Knative Serving and Build components together, or Build on i
### Installing Knative Serving and Build components
1. Run the `kubectl apply` command to install Knative and its dependencies:
```bash
kubectl apply --filename https://github.com/knative/serving/releases/download/v0.2.1/release.yaml
```
```bash
kubectl apply --filename https://github.com/knative/serving/releases/download/v0.2.1/release.yaml
```
1. Monitor the Knative components until all of the components show a
`STATUS` of `Running`:
```bash
kubectl get pods --namespace knative-serving
kubectl get pods --namespace knative-build
```
```bash
kubectl get pods --namespace knative-serving
kubectl get pods --namespace knative-build
```
### Installing Knative Build only
1. Run the `kubectl apply` command to install
[Knative Build](https://github.com/knative/build) and its dependencies:
```bash
kubectl apply --filename https://raw.githubusercontent.com/knative/serving/v0.2.1/third_party/config/build/release.yaml
```
```bash
kubectl apply --filename https://raw.githubusercontent.com/knative/serving/v0.2.1/third_party/config/build/release.yaml
```
1. Monitor the Knative Build components until all of the components show a
`STATUS` of `Running`:
```bash
kubectl get pods --namespace knative-build
```bash
kubectl get pods --namespace knative-build
```
Just as with the Istio components, it will take a few seconds for the Knative
components to be up and running; you can rerun the `kubectl get` command to see
the current status.
> Note: Instead of rerunning the command, you can add `--watch` to the above
command to view the component's status updates in real time. Use CTRL + C to
exit watch mode.
> command to view the component's status updates in real time. Use CTRL + C to
> exit watch mode.
You are now ready to deploy an app or create a build in your new Knative
cluster.
@@ -107,11 +106,11 @@ Now that your cluster has Knative installed, you're ready to deploy an app.
You have two options for deploying your first app:
* You can follow the step-by-step
- You can follow the step-by-step
[Getting Started with Knative App Deployment](getting-started-knative-app.md)
guide.
* You can view the available [sample apps](../serving/samples/README.md) and
- You can view the available [sample apps](../serving/samples/README.md) and
deploy one of your choosing.
## Cleaning up
@@ -46,35 +46,36 @@ You can install the Knative Serving and Build components together, or Build on i
### Installing Knative Serving and Build components
1. Run the `kubectl apply` command to install Knative and its dependencies:
```bash
kubectl apply --filename https://github.com/knative/serving/releases/download/v0.2.2/release.yaml
```
```bash
kubectl apply --filename https://github.com/knative/serving/releases/download/v0.2.2/release.yaml
```
1. Monitor the Knative components until all of the components show a
`STATUS` of `Running`:
```bash
kubectl get pods --namespace knative-serving
kubectl get pods --namespace knative-build
```
```bash
kubectl get pods --namespace knative-serving
kubectl get pods --namespace knative-build
```
### Installing Knative Build only
1. Run the `kubectl apply` command to install
[Knative Build](https://github.com/knative/build) and its dependencies:
```bash
kubectl apply --filename https://raw.githubusercontent.com/knative/serving/v0.2.2/third_party/config/build/release.yaml
```
```bash
kubectl apply --filename https://raw.githubusercontent.com/knative/serving/v0.2.2/third_party/config/build/release.yaml
```
1. Monitor the Knative Build components until all of the components show a
`STATUS` of `Running`:
```bash
kubectl get pods --namespace knative-build
```bash
kubectl get pods --namespace knative-build
```
Just as with the Istio components, it will take a few seconds for the Knative
components to be up and running; you can rerun the `kubectl get` command to see
the current status.
> Note: Instead of rerunning the command, you can add `--watch` to the above
command to view the component's status updates in real time. Use CTRL + C to
exit watch mode.
> command to view the component's status updates in real time. Use CTRL + C to
> exit watch mode.
You are now ready to deploy an app or create a build in your new Knative
cluster.
@@ -16,29 +16,29 @@ We provide information for installing Knative on
Follow these step-by-step guides for setting up Kubernetes and installing
Knative components on the following platforms:
* [Knative Install on Azure Kubernetes Service](Knative-with-AKS.md)
* [Knative Install on Gardener](Knative-with-Gardener.md)
* [Knative Install on Google Kubernetes Engine](Knative-with-GKE.md)
* [Knative Install on IBM Cloud Kubernetes Service](Knative-with-IKS.md)
* [Knative Install on Minikube](Knative-with-Minikube.md)
* [Knative Install on OpenShift](Knative-with-OpenShift.md)
* [Knative Install on Minishift](Knative-with-Minishift.md)
* [Knative Install on Pivotal Container Service](Knative-with-PKS.md)
- [Knative Install on Azure Kubernetes Service](Knative-with-AKS.md)
- [Knative Install on Gardener](Knative-with-Gardener.md)
- [Knative Install on Google Kubernetes Engine](Knative-with-GKE.md)
- [Knative Install on IBM Cloud Kubernetes Service](Knative-with-IKS.md)
- [Knative Install on Minikube](Knative-with-Minikube.md)
- [Knative Install on OpenShift](Knative-with-OpenShift.md)
- [Knative Install on Minishift](Knative-with-Minishift.md)
- [Knative Install on Pivotal Container Service](Knative-with-PKS.md)
If you already have a Kubernetes cluster you're comfortable installing
*alpha* software on, use the following instructions:
_alpha_ software on, use the following instructions:
* [Knative Install on any Kubernetes](Knative-with-any-k8s.md)
- [Knative Install on any Kubernetes](Knative-with-any-k8s.md)
## Deploying an app
Now you're ready to deploy an app:
* Follow the step-by-step
- Follow the step-by-step
[Getting Started with Knative App Deployment](getting-started-knative-app.md)
guide.
* View the available [sample apps](../serving/samples) and deploy one of your
- View the available [sample apps](../serving/samples) and deploy one of your
choosing.
## Configuring Knative Serving
@@ -47,14 +47,14 @@ After your Knative installation is running, you can set up a custom domain with
a static IP address to be able to use Knative for publicly available services
and set up an Istio IP range for outbound network access:
* [Assign a static IP address](../serving/gke-assigning-static-ip-address.md)
* [Configure a custom domain](../serving/using-a-custom-domain.md)
* [Configure outbound network access](../serving/outbound-network-access.md)
* [Configuring HTTPS with a custom certificate](../serving/using-an-ssl-cert.md)
- [Assign a static IP address](../serving/gke-assigning-static-ip-address.md)
- [Configure a custom domain](../serving/using-a-custom-domain.md)
- [Configure outbound network access](../serving/outbound-network-access.md)
- [Configuring HTTPS with a custom certificate](../serving/using-an-ssl-cert.md)
## Checking the version of your Knative Serving installation
* [Checking the version of your Knative Serving installation](check-install-version.md)
- [Checking the version of your Knative Serving installation](check-install-version.md)
---
@@ -11,16 +11,16 @@ This will return the description for the `knative-serving` controller; this
information contains the link to the container that was used to install Knative:
```yaml
...
---
Pod Template:
Labels: app=controller
Annotations: sidecar.istio.io/inject=false
Service Account: controller
Labels: app=controller
Annotations: sidecar.istio.io/inject=false
Service Account: controller
Containers:
controller:
# Link to container used for Knative install
Image: gcr.io/knative-releases/github.com/knative/serving/cmd/controller@sha256:59abc8765d4396a3fc7cac27a932a9cc151ee66343fa5338fb7146b607c6e306
...
controller:
# Link to container used for Knative install
Image: gcr.io/knative-releases/github.com/knative/serving/cmd/controller@sha256:59abc8765d4396a3fc7cac27a932a9cc151ee66343fa5338fb7146b607c6e306
```
Copy the full `gcr.io` link to the container and paste it into your browser.
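The image reference can also be pulled out directly, without scanning the full `describe` output — a sketch, assuming the controller Deployment is named `controller` in the `knative-serving` namespace:

```shell
# Print only the controller's image reference (jsonpath one-liner;
# Deployment name and namespace are assumptions based on a default install)
kubectl get deployment controller --namespace knative-serving \
  --output jsonpath='{.spec.template.spec.containers[0].image}'
```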
@ -6,8 +6,9 @@ using cURL requests.
## Before you begin
You need:
* A Kubernetes cluster with [Knative installed](./README.md).
* An image of the app that you'd like to deploy available on a
- A Kubernetes cluster with [Knative installed](./README.md).
- An image of the app that you'd like to deploy available on a
container registry. The image of the sample app used in
this guide is available on Google Container Registry.
@ -19,7 +20,7 @@ the basic workflow for deploying an app, but these steps can be adapted for your
own application if you have an image of it available on [Docker Hub](https://docs.docker.com/docker-hub/repos/), [Google Container Registry](https://cloud.google.com/container-registry/docs/pushing-and-pulling), or another container image registry.
The Hello World sample app reads in an `env` variable, `TARGET`, from the
configuration `.yaml` file, then prints "Hello World: ${TARGET}!". If `TARGET`
configuration `.yaml` file, then prints "Hello World: \${TARGET}!". If `TARGET`
isn't defined, it will print "NOT SPECIFIED".
## Configuring your deployment
@ -50,8 +51,8 @@ spec:
container:
image: gcr.io/knative-samples/helloworld-go # The URL to the image of the app
env:
- name: TARGET # The environment variable printed out by the sample app
value: "Go Sample v1"
- name: TARGET # The environment variable printed out by the sample app
value: "Go Sample v1"
```
If you want to deploy the sample app, leave the config file as-is. If you're
@ -61,16 +62,18 @@ the image accordingly.
## Deploying your app
From the directory where the new `service.yaml` file was created, apply the configuration:
```bash
kubectl apply --filename service.yaml
```
Now that your service is created, Knative will perform the following steps:
* Create a new immutable revision for this version of the app.
* Perform network programming to create a route, ingress, service, and load
balancer for your app.
* Automatically scale your pods up and down based on traffic, including to
zero active pods.
- Create a new immutable revision for this version of the app.
- Perform network programming to create a route, ingress, service, and load
balancer for your app.
- Automatically scale your pods up and down based on traffic, including to
zero active pods.
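The objects created by those steps can be inspected once the apply completes — a sketch, not part of the original guide; the resource kinds are the Knative Serving CRDs described elsewhere in these docs:

```shell
# List the Knative objects created for the new service
# (kind names assume the Knative Serving CRDs are installed)
kubectl get ksvc,configuration,revision,route
```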
### Interacting with your app
@ -88,65 +91,68 @@ assigned an external IP address.
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
knative-ingressgateway LoadBalancer 10.23.247.74 35.203.155.229 80:32380/TCP,443:32390/TCP,32400:32400/TCP 2d
```
Take note of the `EXTERNAL-IP` address.
```
You can also export the IP address as a variable with the following command:
Take note of the `EXTERNAL-IP` address.
```shell
export IP_ADDRESS=$(kubectl get svc knative-ingressgateway --namespace istio-system --output 'jsonpath={.status.loadBalancer.ingress[0].ip}')
You can also export the IP address as a variable with the following command:
```shell
export IP_ADDRESS=$(kubectl get svc knative-ingressgateway --namespace istio-system --output 'jsonpath={.status.loadBalancer.ingress[0].ip}')
```
```
> Note: If you use minikube or a bare-metal cluster that has no external load balancer, the
`EXTERNAL-IP` field is shown as `<pending>`. You need to use `NodeIP` and `NodePort` to
interact with your app instead. To get your app's `NodeIP` and `NodePort`, enter the following command:
```shell
export IP_ADDRESS=$(kubectl get node --output 'jsonpath={.items[0].status.addresses[0].address}'):$(kubectl get svc knative-ingressgateway --namespace istio-system --output 'jsonpath={.spec.ports[?(@.port==80)].nodePort}')
```
> `EXTERNAL-IP` field is shown as `<pending>`. You need to use `NodeIP` and `NodePort` to
> interact with your app instead. To get your app's `NodeIP` and `NodePort`, enter the following command:
```shell
export IP_ADDRESS=$(kubectl get node --output 'jsonpath={.items[0].status.addresses[0].address}'):$(kubectl get svc knative-ingressgateway --namespace istio-system --output 'jsonpath={.spec.ports[?(@.port==80)].nodePort}')
```
1. To find the host URL for your service, enter:
```shell
kubectl get ksvc helloworld-go --output=custom-columns=NAME:.metadata.name,DOMAIN:.status.domain
NAME DOMAIN
helloworld-go helloworld-go.default.example.com
```
```shell
kubectl get ksvc helloworld-go --output=custom-columns=NAME:.metadata.name,DOMAIN:.status.domain
NAME DOMAIN
helloworld-go helloworld-go.default.example.com
```
You can also export the host URL as a variable using the following command:
You can also export the host URL as a variable using the following command:
```shell
export HOST_URL=$(kubectl get ksvc helloworld-go --output jsonpath='{.status.domain}')
```
```shell
export HOST_URL=$(kubectl get ksvc helloworld-go --output jsonpath='{.status.domain}')
```
If you changed the name from `helloworld-go` to something else when creating
the `.yaml` file, replace `helloworld-go` in the above commands with the
name you entered.
If you changed the name from `helloworld-go` to something else when creating
the `.yaml` file, replace `helloworld-go` in the above commands with the
name you entered.
1. Now you can make a request to your app and see the results. Replace
`IP_ADDRESS` with the `EXTERNAL-IP` you wrote down, and replace
`helloworld-go.default.example.com` with the domain returned in the previous
step.
```shell
curl -H "Host: helloworld-go.default.example.com" http://${IP_ADDRESS}
Hello World: Go Sample v1!
```
```shell
curl -H "Host: helloworld-go.default.example.com" http://${IP_ADDRESS}
Hello World: Go Sample v1!
```
If you exported the host URL And IP address as variables in the previous steps, you
can use those variables to simplify your cURL request:
```shell
curl -H "Host: ${HOST_URL}" http://${IP_ADDRESS}
Hello World: Go Sample v1!
```
```shell
curl -H "Host: ${HOST_URL}" http://${IP_ADDRESS}
Hello World: Go Sample v1!
```
If you deployed your own app, you might want to customize this cURL
request to interact with your application.
If you deployed your own app, you might want to customize this cURL
request to interact with your application.
It can take a few seconds for Knative to scale up your application and return
a response.
It can take a few seconds for Knative to scale up your application and return
a response.
> Note: Add the `-v` option to get more detail if the `curl` command fails.
> Note: Add the `-v` option to get more detail if the `curl` command fails.
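The lookup and request steps above can be combined into one short script — a sketch that assumes the load-balancer setup (not the minikube/NodePort variant) and the sample's default service name:

```shell
# Resolve the ingress IP and service domain, then issue the request in one go
# (service name "helloworld-go" is the sample default; replace with yours)
IP_ADDRESS=$(kubectl get svc knative-ingressgateway --namespace istio-system \
  --output 'jsonpath={.status.loadBalancer.ingress[0].ip}')
HOST_URL=$(kubectl get ksvc helloworld-go --output jsonpath='{.status.domain}')
curl -H "Host: ${HOST_URL}" "http://${IP_ADDRESS}"
```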
You've successfully deployed your first application using Knative!
@ -78,9 +78,7 @@ packages by their [import paths](https://golang.org/doc/code.html#ImportPaths)
(e.g., `github.com/knative/serving/cmd/controller`)
The typical usage is `ko apply -f config.yaml`, which reads in the config YAML,
and looks for Go import paths representing runnable commands (i.e., `package
main`). When it finds a matching import path, `ko` builds the package using `go
build` then pushes a container image containing that binary on top of a base
and looks for Go import paths representing runnable commands (i.e., `package main`). When it finds a matching import path, `ko` builds the package using `go build` then pushes a container image containing that binary on top of a base
image (by default, `gcr.io/distroless/base`) to
`$KO_DOCKER_REPO/unique-string`. After pushing those images, `ko` replaces
instances of matched import paths with fully-qualified references to the images
@ -89,17 +87,17 @@ it pushed.
So if `ko apply` was passed this config:
```yaml
...
---
image: github.com/my/repo/cmd/foo
...
```
...it would produce YAML like:
```yaml
...
---
image: gcr.io/my-docker-repo/foo-zyxwvut@sha256:abcdef # image by digest
...
```
(This assumes that you have set the environment variable
@ -108,10 +106,11 @@ image: gcr.io/my-docker-repo/foo-zyxwvut@sha256:abcdef # image by digest
`ko apply` then passes this generated YAML config to `kubectl apply`.
`ko` also supports:
* `ko publish` to simply push images and not produce configs.
* `ko resolve` to push images and output the generated configs, but not
`kubectl apply` them.
* `ko delete` to simply pass through to `kubectl delete` for convenience.
- `ko publish` to simply push images and not produce configs.
- `ko resolve` to push images and output the generated configs, but not
`kubectl apply` them.
- `ko delete` to simply pass through to `kubectl delete` for convenience.
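A typical flow using those subcommands might look like the following — a sketch; the registry name and config directory are assumptions, not values from this repository:

```shell
# Tell ko where to push images (assumed registry; replace with your own)
export KO_DOCKER_REPO=gcr.io/my-docker-repo

# Push images and capture the resolved YAML without applying it
ko resolve -f config/ > release.yaml

# Or build, push, and kubectl apply in a single step
ko apply -f config/
```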
`ko` is used during development and release of Knative components, but is not
intended to be required for _users_ of Knative -- they should only need to
@ -122,8 +121,7 @@ intended to be required for _users_ of Knative -- they should only need to
`skaffold` is a CLI tool to aid in iterative development for Kubernetes.
Typically, you would write a [YAML
config](https://github.com/GoogleContainerTools/skaffold/blob/master/examples/annotated-skaffold.yaml)
describing to Skaffold how to build and deploy your app, then run `skaffold
dev`, which will watch your local source tree for changes and continuously
describing to Skaffold how to build and deploy your app, then run `skaffold dev`, which will watch your local source tree for changes and continuously
build and deploy based on your config when changes are detected.
Skaffold supports many pluggable implementations for building and deploying.
@ -1,4 +1,3 @@
# Knative Serving
Knative Serving builds on Kubernetes and Istio to support deploying and serving
@ -7,10 +6,10 @@ and scales to support advanced scenarios.
The Knative Serving project provides middleware primitives that enable:
* Rapid deployment of serverless containers
* Automatic scaling up and down to zero
* Routing and network programming for Istio components
* Point-in-time snapshots of deployed code and configurations
- Rapid deployment of serverless containers
- Automatic scaling up and down to zero
- Routing and network programming for Istio components
- Point-in-time snapshots of deployed code and configurations
## Serving resources
@ -18,22 +17,22 @@ Knative Serving defines a set of objects as Kubernetes
Custom Resource Definitions (CRDs). These objects are used to define and control
how your serverless workload behaves on the cluster:
* [Service](https://github.com/knative/serving/blob/master/docs/spec/spec.md#service):
- [Service](https://github.com/knative/serving/blob/master/docs/spec/spec.md#service):
The `service.serving.knative.dev` resource automatically manages the whole
lifecycle of your workload. It controls the creation of other
objects to ensure that your app has a route, a configuration, and a new revision
for each update of the service. Service can be defined to always route traffic to the
latest revision or to a pinned revision.
* [Route](https://github.com/knative/serving/blob/master/docs/spec/spec.md#route):
- [Route](https://github.com/knative/serving/blob/master/docs/spec/spec.md#route):
The `route.serving.knative.dev` resource maps a network endpoint to one or
more revisions. You can manage the traffic in several ways, including fractional
traffic and named routes.
* [Configuration](https://github.com/knative/serving/blob/master/docs/spec/spec.md#configuration):
- [Configuration](https://github.com/knative/serving/blob/master/docs/spec/spec.md#configuration):
The `configuration.serving.knative.dev` resource maintains
the desired state for your deployment. It provides a clean separation between
code and configuration and follows the Twelve-Factor App methodology. Modifying a configuration
creates a new revision.
* [Revision](https://github.com/knative/serving/blob/master/docs/spec/spec.md#revision):
- [Revision](https://github.com/knative/serving/blob/master/docs/spec/spec.md#revision):
The `revision.serving.knative.dev` resource is a point-in-time snapshot
of the code and configuration for each modification made to the workload. Revisions
are immutable objects and can be retained for as long as useful.
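A minimal manifest exercising these resources — a hypothetical sketch using the `Service` object, which creates the Route, Configuration, and Revision on your behalf; the API version and field layout assume the v0.2-era `v1alpha1` schema:

```shell
# Create a Service; Knative derives the Configuration, Revision, and Route
# (name and image are examples from the Hello World sample)
cat <<EOF | kubectl apply --filename -
apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: helloworld-go
spec:
  runLatest:                  # always route traffic to the latest Revision
    configuration:
      revisionTemplate:
        spec:
          container:
            image: gcr.io/knative-samples/helloworld-go
EOF
```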
@ -55,29 +54,29 @@ in the Knative Serving repository.
## More samples and demos
* [Autoscaling with Knative Serving](./samples/autoscale-go/README.md)
* [Source-to-URL with Knative Serving](./samples/source-to-url-go/README.md)
* [Telemetry with Knative Serving](./samples/telemetry-go/README.md)
* [REST API sample](./samples/rest-api-go/README.md)
- [Autoscaling with Knative Serving](./samples/autoscale-go/README.md)
- [Source-to-URL with Knative Serving](./samples/source-to-url-go/README.md)
- [Telemetry with Knative Serving](./samples/telemetry-go/README.md)
- [REST API sample](./samples/rest-api-go/README.md)
## Setting up Logging and Metrics
* [Installing Logging, Metrics and Traces](./installing-logging-metrics-traces.md)
* [Accessing Logs](./accessing-logs.md)
* [Accessing Metrics](./accessing-metrics.md)
* [Accessing Traces](./accessing-traces.md)
* [Setting up a logging plugin](./setting-up-a-logging-plugin.md)
- [Installing Logging, Metrics and Traces](./installing-logging-metrics-traces.md)
- [Accessing Logs](./accessing-logs.md)
- [Accessing Metrics](./accessing-metrics.md)
- [Accessing Traces](./accessing-traces.md)
- [Setting up a logging plugin](./setting-up-a-logging-plugin.md)
## Debugging Knative Serving issues
* [Debugging Application Issues](./debugging-application-issues.md)
* [Debugging Performance Issues](./debugging-performance-issues.md)
- [Debugging Application Issues](./debugging-application-issues.md)
- [Debugging Performance Issues](./debugging-performance-issues.md)
## Configuration and Networking
* [Configuring outbound network access](./outbound-network-access.md)
* [Using a custom domain](./using-a-custom-domain.md)
* [Assigning a static IP address for Knative on Google Kubernetes Engine](./gke-assigning-static-ip-address.md)
- [Configuring outbound network access](./outbound-network-access.md)
- [Using a custom domain](./using-a-custom-domain.md)
- [Assigning a static IP address for Knative on Google Kubernetes Engine](./gke-assigning-static-ip-address.md)
## Known Issues
@ -6,8 +6,9 @@ necessary components first.
## Kibana and Elasticsearch
* To open the Kibana UI (the visualization tool for [Elasticsearch](https://info.elastic.co)),
start a local proxy with the following command:
- To open the Kibana UI (the visualization tool for [Elasticsearch](https://info.elastic.co)),
start a local proxy with the following command:
```shell
kubectl proxy
```
@ -15,9 +16,9 @@ start a local proxy with the following command:
This command starts a local proxy of Kibana on port 8001. For security reasons,
the Kibana UI is exposed only within the cluster.
* Navigate to the
[Kibana UI](http://localhost:8001/api/v1/namespaces/knative-monitoring/services/kibana-logging/proxy/app/kibana).
*It might take a couple of minutes for the proxy to work*.
- Navigate to the
[Kibana UI](http://localhost:8001/api/v1/namespaces/knative-monitoring/services/kibana-logging/proxy/app/kibana).
_It might take a couple of minutes for the proxy to work_.
The Discover tab of the Kibana UI looks like this:
@ -26,9 +27,9 @@ start a local proxy with the following command:
You can change the time frame of logs Kibana displays in the upper right corner
of the screen. The main search bar is across the top of the Discover page.
* As more logs are ingested, new fields will be discovered. To have them indexed,
go to "Management" > "Index Patterns" > Refresh button (on top right) > "Refresh
fields".
- As more logs are ingested, new fields will be discovered. To have them indexed,
go to "Management" > "Index Patterns" > Refresh button (on top right) > "Refresh
fields".
<!-- TODO: create a video walkthrough of the Kibana UI -->
@ -36,23 +37,28 @@ fields".
To access the logs for a configuration:
* Find the configuration's name with the following command:
- Find the configuration's name with the following command:
```
kubectl get configurations
```
* Replace `<CONFIGURATION_NAME>` and enter the following search query in Kibana:
- Replace `<CONFIGURATION_NAME>` and enter the following search query in Kibana:
```
kubernetes.labels.serving_knative_dev\/configuration: <CONFIGURATION_NAME>
```
To access logs for a revision:
* Find the revision's name with the following command:
- Find the revision's name with the following command:
```
kubectl get revisions
```
* Replace `<REVISION_NAME>` and enter the following search query in Kibana:
- Replace `<REVISION_NAME>` and enter the following search query in Kibana:
```
kubernetes.labels.serving_knative_dev\/revision: <REVISION_NAME>
```
@ -61,19 +67,23 @@ kubernetes.labels.serving_knative_dev\/revision: <REVISION_NAME>
To access logs for a [Knative Build](../build/README.md):
* Find the build's name specified in the `.yaml` file:
- Find the build's name specified in the `.yaml` file:
```yaml
apiVersion: build.knative.dev/v1alpha1
kind: Build
metadata:
name: <BUILD_NAME>
```
Or find build names with the following command:
```
kubectl get builds
```
* Replace `<BUILD_NAME>` and enter the following search query in Kibana:
- Replace `<BUILD_NAME>` and enter the following search query in Kibana:
```
kubernetes.labels.build\-name: <BUILD_NAME>
```
@ -86,31 +96,31 @@ To access the request logs, enter the following search in Kibana:
tag: "requestlog.logentry.istio-system"
```
Request logs contain details about requests served by the revision. Below is
a sample request log:
Request logs contain details about requests served by the revision. Below is
a sample request log:
```text
@timestamp July 10th 2018, 10:09:28.000
destinationConfiguration configuration-example
destinationNamespace default
destinationRevision configuration-example-00001
destinationService configuration-example-00001-service.default.svc.cluster.local
latency 1.232902ms
method GET
protocol http
referer unknown
requestHost route-example.default.example.com
requestSize 0
responseCode 200
responseSize 36
severity Info
sourceNamespace istio-system
sourceService unknown
tag requestlog.logentry.istio-system
traceId 986d6faa02d49533
url /
userAgent curl/7.60.0
```
```text
@timestamp July 10th 2018, 10:09:28.000
destinationConfiguration configuration-example
destinationNamespace default
destinationRevision configuration-example-00001
destinationService configuration-example-00001-service.default.svc.cluster.local
latency 1.232902ms
method GET
protocol http
referer unknown
requestHost route-example.default.example.com
requestSize 0
responseCode 200
responseSize 36
severity Info
sourceNamespace istio-system
sourceService unknown
tag requestlog.logentry.istio-system
traceId 986d6faa02d49533
url /
userAgent curl/7.60.0
```
### Accessing end to end request traces
@ -4,28 +4,30 @@ You access metrics through the [Grafana](https://grafana.com/) UI. Grafana is
the visualization tool for [Prometheus](https://prometheus.io/).
1. To open Grafana, enter the following command:
```
kubectl port-forward --namespace knative-monitoring $(kubectl get pods --namespace knative-monitoring --selector=app=grafana --output=jsonpath="{.items..metadata.name}") 3000
```
* This starts a local proxy of Grafana on port 3000. For security reasons, the Grafana UI is exposed only within the cluster.
- This starts a local proxy of Grafana on port 3000. For security reasons, the Grafana UI is exposed only within the cluster.
2. Navigate to the Grafana UI at [http://localhost:3000](http://localhost:3000).
3. Select the **Home** button on the top of the page to see the list of pre-installed dashboards (screenshot below):
![Knative Dashboards](./images/grafana1.png)
![Knative Dashboards](./images/grafana1.png)
The following dashboards are pre-installed with Knative Serving:
The following dashboards are pre-installed with Knative Serving:
* **Revision HTTP Requests:** HTTP request count, latency, and size metrics per revision and per configuration
* **Nodes:** CPU, memory, network, and disk metrics at node level
* **Pods:** CPU, memory, and network metrics at pod level
* **Deployment:** CPU, memory, and network metrics aggregated at deployment level
* **Istio, Mixer and Pilot:** Detailed Istio mesh, Mixer, and Pilot metrics
* **Kubernetes:** Dashboards giving insights into cluster health, deployments, and capacity usage
- **Revision HTTP Requests:** HTTP request count, latency, and size metrics per revision and per configuration
- **Nodes:** CPU, memory, network, and disk metrics at node level
- **Pods:** CPU, memory, and network metrics at pod level
- **Deployment:** CPU, memory, and network metrics aggregated at deployment level
- **Istio, Mixer and Pilot:** Detailed Istio mesh, Mixer, and Pilot metrics
- **Kubernetes:** Dashboards giving insights into cluster health, deployments, and capacity usage
4. Set up an administrator account to modify or add dashboards by signing in with username: `admin` and password: `admin`.
* Before you expose the Grafana UI outside the cluster, make sure to change the password.
- Before you expose the Grafana UI outside the cluster, make sure to change the password.
---
@ -71,17 +71,17 @@ conditions:
If you see this condition, check the following to continue debugging:
* [Check Pod status](#check-pod-status)
* [Check application logs](#check-application-logs)
- [Check Pod status](#check-pod-status)
- [Check application logs](#check-application-logs)
If you see other conditions, to debug further:
* Look up the meaning of the conditions in Knative
[Error Conditions and Reporting](https://github.com/knative/serving/blob/master/docs/spec/errors.md). Note: some of them
are not implemented yet. An alternative is to
[check Pod status](#check-pod-status).
* If you are using `BUILD` to deploy and the `BuildComplete` condition is not
`True`, [check BUILD status](#check-build-status).
- Look up the meaning of the conditions in Knative
[Error Conditions and Reporting](https://github.com/knative/serving/blob/master/docs/spec/errors.md). Note: some of them
are not implemented yet. An alternative is to
[check Pod status](#check-pod-status).
- If you are using `BUILD` to deploy and the `BuildComplete` condition is not
`True`, [check BUILD status](#check-build-status).
## Check Pod status
@ -124,9 +124,9 @@ Use any of the following filters within Kibana UI to
see build logs. _(See [telemetry guide](../telemetry.md) for more information on
logging and monitoring features of Knative Serving.)_
* All build logs: `_exists_:"kubernetes.labels.build-name"`
* Build logs for a specific build: `kubernetes.labels.build-name:"<BUILD NAME>"`
* Build logs for a specific build and step: `kubernetes.labels.build-name:"<BUILD NAME>" AND kubernetes.container_name:"build-step-<BUILD STEP NAME>"`
- All build logs: `_exists_:"kubernetes.labels.build-name"`
- Build logs for a specific build: `kubernetes.labels.build-name:"<BUILD NAME>"`
- Build logs for a specific build and step: `kubernetes.labels.build-name:"<BUILD NAME>" AND kubernetes.container_name:"build-step-<BUILD STEP NAME>"`
---
@ -19,11 +19,11 @@ Start your investigation with the "Revision - HTTP Requests" dashboard.
This dashboard gives visibility into the following for each revision:
* Request volume
* Request volume per HTTP response code
* Response time
* Response time per HTTP response code
* Request and response sizes
- Request volume
- Request volume per HTTP response code
- Response time
- Response time per HTTP response code
- Request and response sizes
This dashboard can show traffic volume or latency discrepancies between different revisions.
If, for example, a revision's latency is higher than other revisions, then
@ -65,11 +65,11 @@ that most of the time is spent in your own code, look at autoscaler metrics next
This view shows 4 key metrics from the Knative Serving autoscaler:
* Actual pod count: # of pods that are running a given revision
* Desired pod count: # of pods that the autoscaler thinks should serve the revision
* Requested pod count: # of pods that the autoscaler requested from Kubernetes
* Panic mode: If 0, the autoscaler is operating in [stable mode](https://github.com/knative/serving/blob/master/docs/scaling/DEVELOPMENT.md#stable-mode).
If 1, the autoscaler is operating in [panic mode](https://github.com/knative/serving/blob/master/docs/scaling/DEVELOPMENT.md#panic-mode).
- Actual pod count: # of pods that are running a given revision
- Desired pod count: # of pods that the autoscaler thinks should serve the revision
- Requested pod count: # of pods that the autoscaler requested from Kubernetes
- Panic mode: If 0, the autoscaler is operating in [stable mode](https://github.com/knative/serving/blob/master/docs/scaling/DEVELOPMENT.md#stable-mode).
If 1, the autoscaler is operating in [panic mode](https://github.com/knative/serving/blob/master/docs/scaling/DEVELOPMENT.md#panic-mode).
A large gap between the actual pod count and the requested pod count
indicates that the Kubernetes cluster is unable to keep up allocating new
@ -94,12 +94,12 @@ The first chart shows rate of the CPU usage across all pods serving the revision
The second chart shows total memory consumed across all pods serving the revision.
Both of these metrics are further divided into per container usage.
* user-container: This container runs the user code (application, function, or container).
* [istio-proxy](https://github.com/istio/proxy): Sidecar container to form an
[Istio](https://istio.io/docs/concepts/what-is-istio/overview.html) mesh.
* queue-proxy: Knative Serving owned sidecar container to enforce request concurrency limits.
* autoscaler: Knative Serving owned sidecar container to provide autoscaling for the revision.
* fluentd-proxy: Sidecar container to collect logs from /var/log.
- user-container: This container runs the user code (application, function, or container).
- [istio-proxy](https://github.com/istio/proxy): Sidecar container to form an
[Istio](https://istio.io/docs/concepts/what-is-istio/overview.html) mesh.
- queue-proxy: Knative Serving owned sidecar container to enforce request concurrency limits.
- autoscaler: Knative Serving owned sidecar container to provide autoscaling for the revision.
- fluentd-proxy: Sidecar container to collect logs from /var/log.
## Profiling
@ -9,13 +9,13 @@ configuration to define logging output.
Knative requires a customized Fluentd Docker image with the following plugins
installed:
* [fluentd](https://github.com/fluent/fluentd) >= v0.14.0
* [fluent-plugin-kubernetes_metadata_filter](https://github.com/fabric8io/fluent-plugin-kubernetes_metadata_filter) >=
1.0.0 AND < 2.1.0: To enrich log entries with Kubernetes metadata.
* [fluent-plugin-detect-exceptions](https://github.com/GoogleCloudPlatform/fluent-plugin-detect-exceptions) >=
0.0.9: To combine multi-line exception stack traces logs into one log entry.
* [fluent-plugin-multi-format-parser](https://github.com/repeatedly/fluent-plugin-multi-format-parser) >=
1.0.0: To detect whether the log format is JSON or plain text.
- [fluentd](https://github.com/fluent/fluentd) >= v0.14.0
- [fluent-plugin-kubernetes_metadata_filter](https://github.com/fabric8io/fluent-plugin-kubernetes_metadata_filter) >=
1.0.0 AND < 2.1.0: To enrich log entries with Kubernetes metadata.
- [fluent-plugin-detect-exceptions](https://github.com/GoogleCloudPlatform/fluent-plugin-detect-exceptions) >=
0.0.9: To combine multi-line exception stack traces logs into one log entry.
- [fluent-plugin-multi-format-parser](https://github.com/repeatedly/fluent-plugin-multi-format-parser) >=
1.0.0: To detect whether the log format is JSON or plain text.
## Sample images
@ -18,36 +18,39 @@ You can reserve a regional static IP address using the Google Cloud SDK or the
Google Cloud Platform console.
Using the Google Cloud SDK:
1. Enter the following command, replacing IP_NAME and REGION with appropriate
values. For example, select the `us-west1` region if you deployed your
cluster to the `us-west1-c` zone:
```shell
gcloud beta compute addresses create IP_NAME --region=REGION
```
For example:
```shell
gcloud beta compute addresses create knative-ip --region=us-west1
```
1. Enter the following command to get the newly created static IP address:
```shell
gcloud beta compute addresses list
```
1. Enter the following command, replacing IP_NAME and REGION with appropriate
values. For example, select the `us-west1` region if you deployed your
cluster to the `us-west1-c` zone:
```shell
gcloud beta compute addresses create IP_NAME --region=REGION
```
For example:
```shell
gcloud beta compute addresses create knative-ip --region=us-west1
```
1. Enter the following command to get the newly created static IP address:
```shell
gcloud beta compute addresses list
```
In the [GCP console](https://console.cloud.google.com/networking/addresses/add?_ga=2.97521754.-475089713.1523374982):
1. Enter a name for your static address.
1. For **IP version**, choose IPv4.
1. For **Type**, choose **Regional**.
1. From the **Region** drop-down, choose the region where your Knative cluster is running.
For example, select the `us-west1` region if you deployed your cluster to the `us-west1-c` zone.
1. Leave the **Attached To** field set to `None` since we'll attach the IP address through a config-map later.
1. Copy the **External Address** of the static IP you created.
1. Enter a name for your static address.
1. For **IP version**, choose IPv4.
1. For **Type**, choose **Regional**.
1. From the **Region** drop-down, choose the region where your Knative cluster is running.
For example, select the `us-west1` region if you deployed your cluster to the `us-west1-c` zone.
1. Leave the **Attached To** field set to `None` since we'll attach the IP address through a config-map later.
1. Copy the **External Address** of the static IP you created.
## Step 2: Update the external IP of the `knative-ingressgateway` service
Run following command to configure the external IP of the
`knative-ingressgateway` service to the static IP that you reserved:
```shell
kubectl patch svc knative-ingressgateway --namespace istio-system --patch '{"spec": { "loadBalancerIP": "<your-reserved-static-ip>" }}'
```
@ -55,14 +58,18 @@ kubectl patch svc knative-ingressgateway --namespace istio-system --patch '{"spe
## Step 3: Verify the static IP address of `knative-ingressgateway` service
Run the following command to ensure that the external IP of the "knative-ingressgateway" service has been updated:
```shell
kubectl get svc knative-ingressgateway --namespace istio-system
```
The output should show the assigned static IP address under the EXTERNAL-IP column:
```
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
knative-ingressgateway LoadBalancer 12.34.567.890 98.765.43.210 80:32380/TCP,443:32390/TCP,32400:32400/TCP 5m
```
> Note: Updating the external IP address can take several minutes.
---
@ -12,11 +12,11 @@ these two are not supported.
The following instructions assume that you cloned the Knative Serving repository.
To clone the repository, run the following commands:
```shell
git clone https://github.com/knative/serving knative-serving
cd knative-serving
git checkout v0.2.1
```
```shell
git clone https://github.com/knative/serving knative-serving
cd knative-serving
git checkout v0.2.1
```
## Elasticsearch, Kibana, Prometheus & Grafana Setup
@ -50,28 +50,28 @@ To configure and setup monitoring:
The installation is complete when logging & monitoring components are all
reported `Running` or `Completed`:
```shell
kubectl get pods --namespace monitoring --watch
```
```
NAME READY STATUS RESTARTS AGE
elasticsearch-logging-0 1/1 Running 0 2d
elasticsearch-logging-1 1/1 Running 0 2d
fluentd-ds-5kc85 1/1 Running 0 2d
fluentd-ds-vhrcq 1/1 Running 0 2d
fluentd-ds-xghk9 1/1 Running 0 2d
grafana-798cf569ff-v4q74 1/1 Running 0 2d
kibana-logging-7d474fbb45-6qb8x 1/1 Running 0 2d
kube-state-metrics-75bd4f5b8b-8t2h2 4/4 Running 0 2d
node-exporter-cr6bh 2/2 Running 0 2d
node-exporter-mf6k7 2/2 Running 0 2d
node-exporter-rhzr7 2/2 Running 0 2d
prometheus-system-0 1/1 Running 0 2d
prometheus-system-1 1/1 Running 0 2d
```
CTRL+C to exit watch.
1. Verify that each of your nodes has the `beta.kubernetes.io/fluentd-ds-ready=true` label:
@@ -81,17 +81,17 @@ To configure and setup monitoring:
1. If you receive the `No Resources Found` response:
1. Run the following command to ensure that the Fluentd DaemonSet runs on all your nodes:
```shell
kubectl label nodes --all beta.kubernetes.io/fluentd-ds-ready="true"
```
1. Run the following command to ensure that the `fluentd-ds` daemonset is ready on at least one node:
```shell
kubectl get daemonset fluentd-ds --namespace knative-monitoring
```
### Create Elasticsearch Indices
@@ -102,9 +102,9 @@ for request traces.
- To open the Kibana UI (the visualization tool for [Elasticsearch](https://info.elastic.co)),
you must start a local proxy by running the following command:
```shell
kubectl proxy
```
This command starts a local proxy of Kibana on port 8001. For security
reasons, the Kibana UI is exposed only within the cluster.
@@ -123,13 +123,12 @@ for request traces.
of the page. Enter `zipkin*` in the `Index pattern` field, select `timestamp_millis`
from `Time Filter field name`, and click the `Create` button.
## Stackdriver, Prometheus & Grafana Setup
You must configure and build your own Fluentd image if either of the following is true:
- Your Knative Serving component is not hosted on a Google Cloud Platform (GCP) based cluster.
- You want to send logs to another GCP project.
To configure and setup monitoring:
@@ -151,28 +150,28 @@ To configure and setup monitoring:
--filename config/monitoring/200-common/100-istio.yaml
```
The installation is complete when logging & monitoring components are all
reported `Running` or `Completed`:
```shell
kubectl get pods --namespace monitoring --watch
```
```
NAME READY STATUS RESTARTS AGE
fluentd-ds-5kc85 1/1 Running 0 2d
fluentd-ds-vhrcq 1/1 Running 0 2d
fluentd-ds-xghk9 1/1 Running 0 2d
grafana-798cf569ff-v4q74 1/1 Running 0 2d
kube-state-metrics-75bd4f5b8b-8t2h2 4/4 Running 0 2d
node-exporter-cr6bh 2/2 Running 0 2d
node-exporter-mf6k7 2/2 Running 0 2d
node-exporter-rhzr7 2/2 Running 0 2d
prometheus-system-0 1/1 Running 0 2d
prometheus-system-1 1/1 Running 0 2d
```
CTRL+C to exit watch.
1. Verify that each of your nodes has the `beta.kubernetes.io/fluentd-ds-ready=true` label:
@@ -182,17 +181,17 @@ To configure and setup monitoring:
1. If you receive the `No Resources Found` response:
1. Run the following command to ensure that the Fluentd DaemonSet runs on all your nodes:
```shell
kubectl label nodes --all beta.kubernetes.io/fluentd-ds-ready="true"
```
1. Run the following command to ensure that the `fluentd-ds` daemonset is ready on at least one node:
```shell
kubectl get daemonset fluentd-ds --namespace knative-monitoring
```
## Learn More
View File
@@ -11,19 +11,19 @@ the `config-network` map.
To set the correct scope, you need to determine the IP ranges of your cluster. The scope varies
depending on your platform:
- For Google Kubernetes Engine (GKE) run the following command to determine the scope. Make sure
to replace the variables or export these values first.
```shell
gcloud container clusters describe ${CLUSTER_ID} \
--zone=${GCP_ZONE} | grep -e clusterIpv4Cidr -e servicesIpv4Cidr
```
- For IBM Cloud Private run the following command:
```shell
cat cluster/config.yaml | grep service_cluster_ip_range
```
- For IBM Cloud Kubernetes Service use `172.30.0.0/16,172.20.0.0/16,10.10.10.0/24`
- For Azure Container Service (ACS) use `10.244.0.0/16,10.240.0.0/16`
- For Minikube use `10.0.0.1/24`
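Whichever platform you are on, the discovered ranges end up as a single comma-separated string for the `config-network` map — a minimal sketch, using the ACS example ranges listed above:

```shell
# CIDR ranges discovered for the cluster (ACS example values from above)
cluster_cidr="10.244.0.0/16"
services_cidr="10.240.0.0/16"

# The scope value is simply the ranges joined with commas
scope="${cluster_cidr},${services_cidr}"
echo "$scope"
```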
## Setting the IP scope
View File
@@ -4,19 +4,19 @@ This directory contains sample applications, developed on Knative, to illustrate
different use-cases and resources. See [Knative serving](https://github.com/knative/docs/tree/master/serving)
to learn more about Knative Serving resources.
| Name | Description | Languages |
| -------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | :---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: |
| Hello World | A quick introduction that highlights how to deploy an app using Knative Serving. | [C#](helloworld-csharp/README.md), [Clojure](helloworld-clojure/README.md), [Eclipse Vert.x](helloworld-vertx/README.md), [Go](helloworld-go/README.md), [Java](helloworld-java/README.md), [Kotlin](helloworld-kotlin/README.md), [Node.js](helloworld-nodejs/README.md), [PHP](helloworld-php/README.md), [Python](helloworld-python/README.md), [Ruby](helloworld-ruby/README.md), [Rust](helloworld-rust/README.md) |
| Advanced Deployment | Simple blue/green-like application deployment pattern illustrating the process of updating a live application without dropping any traffic. | [YAML](blue-green-deployment.md) |
| Autoscale | A demonstration of the autoscaling capabilities of Knative. | [Go](autoscale-go/README.md) |
| Private Repo Build | An example of deploying a Knative Serving Service using a Github deploy-key and a DockerHub image pull secret. | [Go](build-private-repo-go/README.md) |
| Buildpack for Applications | A sample app that demonstrates using Cloud Foundry buildpacks on Knative Serving. | [.NET](buildpack-app-dotnet/README.md) |
| Buildpack for Functions | A sample function that demonstrates using Cloud Foundry buildpacks on Knative Serving. | [Node.js](buildpack-function-nodejs/README.md) |
| Github Webhook | A simple webhook handler that demonstrates interacting with Github. | [Go](gitwebhook-go/README.md) |
| gRPC | A simple gRPC server. | [Go](grpc-ping-go/README.md) |
| Knative Routing | An example of mapping multiple Knative services to different paths under a single domain name using the Istio VirtualService concept. | [Go](knative-routing-go/README.md) |
| REST API                   | A simple RESTful service that exposes an endpoint defined by an environment variable described in the Knative Configuration. | [Go](rest-api-go/README.md) |
| Source to URL | A sample that shows how to use Knative to go from source code in a git repository to a running application with a URL. | [Go](source-to-url-go/README.md) |
| Telemetry | This sample runs a simple web server that makes calls to other in-cluster services and responds to requests with "Hello World!". The purpose of this sample is to show generating metrics, logs, and distributed traces. | [Go](telemetry-go/README.md) |
| Thumbnailer | An example of deploying a "dockerized" application to Knative Serving which takes video URL as an input and generates its thumbnail image. | [Go](thumbnailer-go/README.md) |
| Traffic Splitting          | This sample builds off the [Creating a RESTful Service](./rest-api-go) sample to illustrate applying a revision, then using that revision for manual traffic splitting. | [YAML](traffic-splitting/README.md) |
View File
@@ -8,6 +8,7 @@ A demonstration of the autoscaling capabilities of a Knative Serving Revision.
1. A [metrics installation](https://github.com/knative/docs/blob/master/serving/installing-logging-metrics-traces.md) for viewing scaling graphs (optional).
1. Install [Docker](https://docs.docker.com/get-started/#prepare-your-docker-environment).
1. Check out the code:
```
go get -d github.com/knative/docs/serving/samples/autoscale-go
```
@@ -17,19 +18,23 @@ go get -d github.com/knative/docs/serving/samples/autoscale-go
Build the application container and publish it to a container registry:
1. Move into the sample directory:
```
cd $GOPATH/src/github.com/knative/docs
```
1. Set your preferred container registry:
```
export REPO="gcr.io/<YOUR_PROJECT_ID>"
```
- This example shows how to use Google Container Registry (GCR). You will need a
Google Cloud Project and to enable the
[Google Container Registry API](https://console.cloud.google.com/apis/library/containerregistry.googleapis.com).
1. Use Docker to build your application container:
```
docker build \
--tag "${REPO}/serving/samples/autoscale-go" \
@@ -37,6 +42,7 @@ Build the application container and publish it to a container registry:
```
1. Push your container to a container registry:
```
docker push "${REPO}/serving/samples/autoscale-go"
```
@@ -51,6 +57,7 @@ Build the application container and publish it to a container registry:
## Deploy the Service
1. Deploy the Knative Serving sample:
```
kubectl apply --filename serving/samples/autoscale-go/service.yaml
```
@@ -63,9 +70,11 @@ Build the application container and publish it to a container registry:
## View the Autoscaling Capabilities
1. Make a request to the autoscale app to see it consume some resources.
```
curl --header "Host: autoscale-go.default.example.com" "http://${IP_ADDRESS?}?sleep=100&prime=10000&bloat=5"
```
```
Allocated 5 Mb of memory.
The largest prime less than 10000 is 9973.
@@ -77,6 +86,7 @@ Build the application container and publish it to a container registry:
```
go run serving/samples/autoscale-go/test/test.go -sleep 100 -prime 10000 -bloat 5 -qps 9999 -concurrency 300
```
```
REQUEST STATS:
Total: 439 Inflight: 299 Done: 439 Success Rate: 100.00% Avg Latency: 0.4655 sec
@@ -86,6 +96,7 @@ Build the application container and publish it to a container registry:
Total: 2911 Inflight: 300 Done: 577 Success Rate: 100.00% Avg Latency: 0.4401 sec
...
```
> Note: Use CTRL+C to exit the load test.
1. Watch the Knative Serving deployment pod count increase.
@@ -110,7 +121,7 @@ ceil(1.75) = 2 pods
#### Tuning
By default Knative Serving does not limit concurrency in Revision containers. A limit can be set per-Configuration using the [`ContainerConcurrency`](https://github.com/knative/serving/blob/3f00c39e289ed4bfb84019131651c2e4ea660ab5/pkg/apis/serving/v1alpha1/revision_types.go#L149) field. The autoscaler will target a percentage of `ContainerConcurrency` instead of the default `100.0`.
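The pod-count arithmetic the autoscaler performs can be sketched as an integer ceiling — a simplification that ignores panic mode and scaling windows; the observed concurrency value below is an assumption:

```shell
# desired pods = ceil(observed concurrency / target concurrency per pod)
concurrency=175  # total concurrent requests observed (assumed)
target=100       # default concurrency target per pod
desired=$(( (concurrency + target - 1) / target ))  # integer ceiling
echo "ceil(175/100) = ${desired} pods"
```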
### Dashboards
@@ -127,21 +138,25 @@ kubectl port-forward --namespace knative-monitoring $(kubectl get pods --namespa
### Other Experiments
1. Maintain 1000 concurrent requests.
```
go run serving/samples/autoscale-go/test/test.go -qps 9999 -concurrency 1000
```
1. Maintain 100 qps with fast requests.
```
go run serving/samples/autoscale-go/test/test.go -qps 100 -concurrency 9999
```
1. Maintain 100 qps with slow requests.
```
go run serving/samples/autoscale-go/test/test.go -qps 100 -concurrency 9999 -sleep 500
```
1. Heavy CPU usage.
```
go run serving/samples/autoscale-go/test/test.go -qps 9999 -concurrency 10 -prime 40000000
```
View File
@@ -8,8 +8,9 @@ configuration.
## Before you begin
You need:
- A Kubernetes cluster with [Knative installed](../../install/README.md).
- (Optional) [A custom domain configured](../../serving/using-a-custom-domain.md) for use with Knative.
## Deploying Revision 1 (Blue)
@@ -39,6 +40,7 @@ spec:
```
Save the file, then deploy the configuration to your cluster:
```bash
kubectl apply --filename blue-green-demo-config.yaml
@@ -57,11 +59,12 @@ metadata:
namespace: default # The namespace we're working in; also appears in the URL to access the app
spec:
traffic:
- revisionName: blue-green-demo-00001
percent: 100 # All traffic goes to this revision
```
Save the file, then apply the route to your cluster:
```bash
kubectl apply --filename blue-green-demo-route.yaml
@@ -74,15 +77,15 @@ with the [custom domain](../../serving/using-a-custom-domain.md) you configured
use with Knative.
> Note: If you don't have a custom domain configured for use with Knative, you can interact
> with your app using cURL requests if you have the host URL and IP address:
> `curl -H "Host: blue-green-demo.default.example.com" http://IP_ADDRESS`
> Knative creates the host URL by combining the name of your Route object,
> the namespace, and `example.com`, if you haven't configured a custom domain.
> For example, `[route-name].[namespace].example.com`.
> You can get the IP address by entering `kubectl get svc knative-ingressgateway --namespace istio-system`
> and copying the `EXTERNAL-IP` returned by that command.
> See [Interacting with your app](../../install/getting-started-knative-app.md#interacting-with-your-app)
> for more information.
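The host URL construction described in the note can be sketched directly — the names below match this sample, and `example.com` is the default domain:

```shell
# Default host URL: [route-name].[namespace].[domain]
route_name="blue-green-demo"
namespace="default"
domain="example.com"

host="${route_name}.${namespace}.${domain}"
echo "$host"
```

The resulting value is what goes in the `Host:` header of the cURL request above.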
## Deploying Revision 2 (Green)
@@ -111,6 +114,7 @@ spec:
```
Save the file, then apply the updated configuration to your cluster:
```bash
kubectl apply --filename blue-green-demo-config.yaml
@@ -131,14 +135,15 @@ metadata:
namespace: default
spec:
traffic:
- revisionName: blue-green-demo-00001
percent: 100 # All traffic still going to the first revision
- revisionName: blue-green-demo-00002
percent: 0 # 0% of traffic routed to the second revision
name: v2 # A named route
```
Save the file, then apply the updated route to your cluster:
```bash
kubectl apply --filename blue-green-demo-route.yaml
@@ -147,8 +152,8 @@ route "blue-green-demo" configured
Revision 2 of the app is staged at this point. That means:
- No traffic will be routed to revision 2 at the main URL, http://blue-green-demo.default.YOUR_CUSTOM_DOMAIN.com
- Knative creates a new route named v2 for testing the newly deployed version at http://v2.blue-green-demo.default.YOUR_CUSTOM_DOMAIN.com
This allows you to validate that the new version of the app is behaving as expected before switching
any traffic over to it.
@@ -166,14 +171,15 @@ metadata:
namespace: default
spec:
traffic:
- revisionName: blue-green-demo-00001
percent: 50 # Updating the percentage from 100 to 50
- revisionName: blue-green-demo-00002
percent: 50 # Updating the percentage from 0 to 50
name: v2
```
Save the file, then apply the updated route to your cluster:
```bash
kubectl apply --filename blue-green-demo-route.yaml
@@ -184,8 +190,7 @@ Refresh the original route (http://blue-green-demo.default.YOUR_CUSTOM_DOMAIN.co
few times to see that some traffic now goes to version 2 of the app.
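One property worth checking whenever you edit a Route by hand: the revision percentages must total exactly 100. A quick sketch mirroring the 50/50 split:

```shell
# Percentages from the route manifest above
v1_percent=50
v2_percent=50

total=$((v1_percent + v2_percent))
echo "total traffic: ${total}%"
```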
> Note: This sample shows a 50/50 split to ensure you don't have to refresh too much,
> but it's recommended to start with 1-2% of traffic in a production environment.
## Rerouting all traffic to the new version
@@ -200,15 +205,16 @@ metadata:
namespace: default
spec:
traffic:
- revisionName: blue-green-demo-00001
percent: 0
name: v1 # Adding a new named route for v1
- revisionName: blue-green-demo-00002
percent: 100
# Named route for v2 has been removed, since we don't need it anymore
```
Save the file, then apply the updated route to your cluster:
```bash
kubectl apply --filename blue-green-demo-route.yaml
View File
@@ -1,21 +1,22 @@
# Deploying to Knative from a Private GitHub Repo
This sample demonstrates:
- Pulling source code from a private Github repository using a deploy-key
- Pushing a Docker container to a private DockerHub repository using a username / password
- Deploying to Knative Serving using image pull secrets
## Before you begin
- [Install Knative Serving](../../../install/README.md)
- Create a local folder for this sample and download the files in this directory into it.
## Setup
### 1. Setting up the default service account
Knative Serving will run pods as the default service account in the namespace where
you created your resources. You can see its body by entering the following command:
```shell
$ kubectl get serviceaccount default --output yaml
@@ -32,6 +33,7 @@ secrets:
We are going to add an image pull Secret to this service account.
1. Create your image pull Secret with the following command, replacing values as necessary:
```shell
kubectl create secret docker-registry dockerhub-pull-secret \
--docker-server=https://index.docker.io/v1/ --docker-email=not@val.id \
@@ -42,6 +44,7 @@ We are going to add to this an image pull Secret.
[Creating a Secret in the cluster that holds your authorization token](https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/#create-a-secret-in-the-cluster-that-holds-your-authorization-token).
2. Add the newly created `imagePullSecret` to your default service account by entering:
```shell
kubectl edit serviceaccount default
```
@@ -50,13 +53,12 @@ We are going to add to this an image pull Secret.
```yaml
secrets:
- name: default-token-zd84v
# This is the secret we just created:
imagePullSecrets:
- name: dockerhub-pull-secret
```
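If you prefer not to open an editor, the same change can be applied non-interactively — a sketch that builds the patch for `kubectl patch serviceaccount` (the secret name comes from the previous step):

```shell
# Build the JSON patch that attaches the image pull secret
secret_name="dockerhub-pull-secret"
patch="{\"imagePullSecrets\": [{\"name\": \"${secret_name}\"}]}"
echo "$patch"

# Then apply it with:
#   kubectl patch serviceaccount default --patch "$patch"
```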
### 2. Configuring the build
The objects in this section are all defined in `build-bot.yaml`, and the fields that
@@ -67,6 +69,7 @@ The following sections explain the different configurations in the `build-bot.ya
as well as the necessary changes for each section.
#### Setting up our Build service account
To separate our Build's credentials from our application's credentials, the
Build runs as its own service account:
@@ -76,15 +79,15 @@ kind: ServiceAccount
metadata:
name: build-bot
secrets:
- name: deploy-key
- name: dockerhub-push-secrets
```
#### Creating a deploy key
You can set up a deploy key for a private Github repository following
[these](https://developer.github.com/v3/guides/managing-deploy-keys/)
instructions. The deploy key in the `build-bot.yaml` file in this folder is _real_;
you do not need to change it for the sample to work.
```yaml
@@ -133,6 +136,7 @@ kubectl create --filename build-bot.yaml
```
### 3. Installing a Build template and updating `manifest.yaml`
1. Install the
[Kaniko build template](https://github.com/knative/build-templates/blob/master/kaniko/kaniko.yaml)
by entering the following command:
@@ -168,14 +172,14 @@ export SERVICE_IP=$(kubectl get svc knative-ingressgateway --namespace istio-sys
```
> Note: If your cluster is running outside a cloud provider (for example, on Minikube),
> your services will never get an external IP address. In that case, use the Istio
> `hostIP` and `nodePort` as the service IP:
```shell
export SERVICE_IP=$(kubectl get po --selector knative=ingressgateway --namespace istio-system \
  --output 'jsonpath={.items[0].status.hostIP}'):$(kubectl get svc knative-ingressgateway \
  --namespace istio-system --output 'jsonpath={.spec.ports[?(@.port==80)].nodePort}')
```
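The fallback simply joins the node's host IP and the gateway's node port with a colon — a sketch with assumed sample values:

```shell
# Compose host_ip:node_port (sample values are assumptions)
host_ip="192.168.99.100"
node_port="32380"
SERVICE_IP="${host_ip}:${node_port}"
echo "$SERVICE_IP"
```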
Now curl the service IP to make sure the deployment succeeded:
@@ -183,12 +187,12 @@ Now curl the service IP to make sure the deployment succeeded:
curl -H "Host: $SERVICE_HOST" http://$SERVICE_IP
```
## Appendix: Sample Code
The sample code is in a private Github repository consisting of two files.
1. `Dockerfile`
```Dockerfile
# Use golang:alpine to optimize the image size.
# See https://hub.docker.com/_/golang/ for more information
@@ -210,8 +214,8 @@ The sample code is in a private Github repository consisting of two files.
package main
import (
"fmt"
"net/http"
)
const (
@@ -219,11 +223,11 @@ The sample code is in a private Github repository consisting of two files.
)
func helloWorld(w http.ResponseWriter, r *http.Request) {
	fmt.Fprintf(w, "Hello World.")
}

func main() {
	http.HandleFunc("/", helloWorld)
	http.ListenAndServe(port, nil)
}
```
View File
@@ -8,7 +8,7 @@ sample app for Cloud Foundry.
## Prerequisites
- [Install Knative Serving](../../../install/README.md)
## Running
View File
@@ -8,7 +8,7 @@ sample function for riff.
## Prerequisites
- [Install Knative Serving](../../../install/README.md)
## Running
@@ -53,6 +53,7 @@ items:
Once the `BuildComplete` status is `True`, resource creation begins.
To access this service using `curl`, we first need to determine its ingress address:
```shell
watch kubectl get svc knative-ingressgateway --namespace istio-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
View File
@@ -5,28 +5,28 @@ through a webhook.
## Prerequisites
- A Kubernetes cluster with Knative installed. Follow the
[installation instructions](https://github.com/knative/docs/blob/master/install/README.md)
if you need to create one.
- [Docker](https://www.docker.com) installed and running on your local machine,
and a Docker Hub account configured (we'll use it for a container registry).
- An account on [GitHub](https://github.com) with read/write access to a
repository.
## Build the sample code
1. Use Docker to build a container image for this service. Replace
`username` with your Docker Hub username in the following commands.
```shell
export DOCKER_HUB_USERNAME=username

# Build the container, run from the project folder
docker build -t ${DOCKER_HUB_USERNAME}/gitwebhook-go .

# Push the container to the registry
docker push ${DOCKER_HUB_USERNAME}/gitwebhook-go
```
1. Create a secret that holds two values from GitHub, a personal access token
used to make API requests to GitHub, and a webhook secret, used to validate
@@ -53,28 +53,28 @@ through a webhook.
1. Apply the secret to your cluster:
```shell
kubectl apply --filename github-secret.yaml
```
1. Next, update the `service.yaml` file in the project to reference the tagged
image from step 1.
```yaml
apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
name: gitwebhook
namespace: default
spec:
runLatest:
configuration:
revisionTemplate:
spec:
container:
# Replace {DOCKER_HUB_USERNAME} with your actual docker hub username
image: docker.io/{DOCKER_HUB_USERNAME}/gitwebhook-go
env:
- name: SECRET_TOKEN
valueFrom:
secretKeyRef:
@@ -82,10 +82,10 @@ through a webhook.
key: secretToken
- name: ACCESS_TOKEN
valueFrom:
secretKeyRef:
name: githubsecret
key: accessToken
```
1. Use `kubectl` to apply the `service.yaml` file.
@@ -99,23 +99,23 @@ service "gitwebhook" created
need to [configure a custom domain](https://github.com/knative/docs/blob/master/serving/using-a-custom-domain.md)
and [assign a static IP address](https://github.com/knative/docs/blob/master/serving/gke-assigning-static-ip-address.md).
1. Retrieve the hostname for this service, using the following command:
```shell
$ kubectl get ksvc gitwebhook \
    --output=custom-columns=NAME:.metadata.name,DOMAIN:.status.domain
NAME         DOMAIN
gitwebhook   gitwebhook.default.example.com
```
1. Browse on GitHub to the repository where you want to create a webhook.
1. Click **Settings**, then **Webhooks**, then **Add webhook**.
1. Enter the **Payload URL** as `http://{DOMAIN}`, with the value of DOMAIN listed above.
1. Set the **Content type** to `application/json`.
1. Enter the **Secret** value to be the same as the original base used for
`webhookSecret` above (the original value, not the base64 encoded value).
1. Select **Disable** under SSL Validation, unless you've [enabled SSL](https://github.com/knative/docs/blob/master/serving/using-an-ssl-cert.md).
1. Click **Add webhook** to create the webhook.
## Exploring
```shell
kubectl delete --filename service.yaml
```
# Hello World - Clojure sample
A simple web app written in Clojure that you can use for testing.
It reads in an env variable `TARGET` and prints "Hello \${TARGET}!". If
TARGET is not specified, it will use "World" as the TARGET.
## Prerequisites
- A Kubernetes cluster with Knative installed. Follow the
[installation instructions](https://github.com/knative/docs/blob/master/install/README.md) if you need
to create one.
- [Docker](https://www.docker.com) installed and running on your local machine,
and a Docker Hub account configured (we'll use it for a container registry).
## Recreating the sample code
1. Create a new file named `src/helloworld/core.clj` and paste the following code. This
code creates a basic web server which listens on port 8080:
```clojure
(ns helloworld.core
  (:use ring.adapter.jetty)
  (:gen-class))

(defn handler [request]
  {:status 200
   :headers {"Content-Type" "text/html"}
   :body (str "Hello "
              (if-let [target (System/getenv "TARGET")]
                target
                "World")
              "!\n")})

(defn -main [& args]
  (run-jetty handler {:port (if-let [port (System/getenv "PORT")]
                              (Integer/parseInt port)
                              8080)}))
```
1. In your project directory, create a file named `project.clj` and copy the code
below into it. This code defines the project's dependencies and entrypoint.
block below into it. For detailed instructions on dockerizing a Clojure app, see
[the clojure image documentation](https://github.com/docker-library/docs/tree/master/clojure).
```docker
# Use the official Clojure image.
# https://hub.docker.com/_/clojure
FROM clojure

# Create the project and download dependencies.
WORKDIR /usr/src/app
COPY project.clj .
RUN lein deps

# Copy local code to the container image.
COPY . .

# Build an uberjar release artifact.
RUN mv "$(lein uberjar | sed -n 's/^Created \(.*standalone\.jar\)/\1/p')" app-standalone.jar

# Configure and document the service HTTP port.
ENV PORT 8080
EXPOSE $PORT

# Run the web service on container startup.
CMD ["java", "-jar", "app-standalone.jar"]
```
1. Create a new file, `service.yaml` and copy the following service definition
into the file. Make sure to replace `{username}` with your Docker Hub username.
```yaml
apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: helloworld-clojure
  namespace: default
spec:
  runLatest:
    configuration:
      revisionTemplate:
        spec:
          container:
            image: docker.io/{username}/helloworld-clojure
            env:
              - name: TARGET
                value: "Clojure Sample v1"
```
## Building and deploying the sample
Docker Hub, run these commands replacing `{username}` with your
Docker Hub username:
```shell
# Build the container on your local machine
docker build -t {username}/helloworld-clojure .

# Push the container to docker registry
docker push {username}/helloworld-clojure
```
1. After the build has completed and the container is pushed to docker hub, you
can deploy the app into your cluster. Ensure that the container image value
in `service.yaml` matches the container you built in
the previous step. Apply the configuration using `kubectl`:
```shell
kubectl apply --filename service.yaml
```
1. Now that your service is created, Knative will perform the following steps:
- Create a new immutable revision for this version of the app.
- Program the network to create a route, ingress, service, and load balancer for your app.
- Automatically scale your pods up and down (including to zero active pods).
1. To find the IP address for your service, use
`kubectl get svc knative-ingressgateway --namespace istio-system` to get the ingress IP for your
cluster. If your cluster is new, it may take some time for the service to get assigned
an external IP address.
```shell
kubectl get svc knative-ingressgateway --namespace istio-system

NAME                     TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)                                      AGE
knative-ingressgateway   LoadBalancer   10.23.247.74   35.203.155.229   80:32380/TCP,443:32390/TCP,32400:32400/TCP   2d
```
1. To find the URL for your service, use
```
kubectl get ksvc helloworld-clojure --output=custom-columns=NAME:.metadata.name,DOMAIN:.status.domain
NAME                 DOMAIN
helloworld-clojure   helloworld-clojure.default.example.com
```
1. Now you can make a request to your app to see the results. Replace
`{IP_ADDRESS}` with the address you see returned in the previous step.
```shell
curl -H "Host: helloworld-clojure.default.example.com" http://{IP_ADDRESS}
Hello Clojure Sample v1!
```
## Removing the sample app deployment
# Hello World - .NET Core sample
A simple web app written in C# using .NET Core 2.1 that you can use for testing.
It reads in an env variable `TARGET` and prints "Hello \${TARGET}!". If
TARGET is not specified, it will use "World" as the TARGET.
## Prerequisites
- A Kubernetes cluster with Knative installed. Follow the
[installation instructions](https://github.com/knative/docs/blob/master/install/README.md) if you need
to create one.
- [Docker](https://www.docker.com) installed and running on your local machine,
and a Docker Hub account configured (we'll use it for a container registry).
- You have installed [.NET Core SDK 2.1](https://www.microsoft.com/net/core).
## Recreating the sample code
1. From the console, create a new empty web project using the dotnet command:
```shell
dotnet new web -o helloworld-csharp
```
1. Update the `CreateWebHostBuilder` definition in `Program.cs` by adding
`.UseUrls()` to define the serving port:
```csharp
public static IWebHostBuilder CreateWebHostBuilder(string[] args)
{
    string port = Environment.GetEnvironmentVariable("PORT") ?? "8080";
    string url = String.Concat("http://0.0.0.0:", port);

    return WebHost.CreateDefaultBuilder(args)
        .UseStartup<Startup>().UseUrls(url);
}
```
1. Update the `app.Run(...)` statement in `Startup.cs` to read and return the
TARGET environment variable:
```csharp
app.Run(async (context) =>
{
    var target = Environment.GetEnvironmentVariable("TARGET") ?? "World";
    await context.Response.WriteAsync($"Hello {target}\n");
});
```
1. In your project directory, create a file named `Dockerfile` and copy the code
block below into it. For detailed instructions on dockerizing a .NET core app,
see [dockerizing a .NET core app](https://docs.microsoft.com/en-us/dotnet/core/docker/docker-basics-dotnet-core#dockerize-the-net-core-application).
```docker
# Use Microsoft's official .NET image.
# https://hub.docker.com/r/microsoft/dotnet
FROM microsoft/dotnet:2.1-sdk

# Install production dependencies.
# Copy csproj and restore as distinct layers.
WORKDIR /app
COPY *.csproj .
RUN dotnet restore

# Copy local code to the container image.
COPY . .

# Build a release artifact.
RUN dotnet publish -c Release -o out

# Configure and document the service HTTP port.
ENV PORT 8080
EXPOSE $PORT

# Run the web service on container startup.
CMD ["dotnet", "out/helloworld-csharp.dll"]
```
1. Create a new file, `service.yaml` and copy the following service definition
into the file. Make sure to replace `{username}` with your Docker Hub username.
```yaml
apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: helloworld-csharp
  namespace: default
spec:
  runLatest:
    configuration:
      revisionTemplate:
        spec:
          container:
            image: docker.io/{username}/helloworld-csharp
            env:
              - name: TARGET
                value: "C# Sample v1"
```
## Building and deploying the sample
Docker Hub, run these commands replacing `{username}` with your
Docker Hub username:
```shell
# Build the container on your local machine
docker build -t {username}/helloworld-csharp .

# Push the container to docker registry
docker push {username}/helloworld-csharp
```
1. After the build has completed and the container is pushed to docker hub, you
can deploy the app into your cluster. Ensure that the container image value
in `service.yaml` matches the container you built in
the previous step. Apply the configuration using `kubectl`:
```shell
kubectl apply --filename service.yaml
```
1. Now that your service is created, Knative will perform the following steps:
- Create a new immutable revision for this version of the app.
- Program the network to create a route, ingress, service, and load balancer for your app.
- Automatically scale your pods up and down (including to zero active pods).
1. To find the IP address for your service, use
`kubectl get svc knative-ingressgateway --namespace istio-system` to get the ingress IP for your
cluster. If your cluster is new, it may take some time for the service to get assigned
an external IP address.
```shell
kubectl get svc knative-ingressgateway --namespace istio-system

NAME                     TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)                                      AGE
knative-ingressgateway   LoadBalancer   10.23.247.74   35.203.155.229   80:32380/TCP,443:32390/TCP,32400:32400/TCP   2d
```
1. To find the URL for your service, use
```
kubectl get ksvc helloworld-csharp --output=custom-columns=NAME:.metadata.name,DOMAIN:.status.domain
NAME                DOMAIN
helloworld-csharp   helloworld-csharp.default.example.com
```
1. Now you can make a request to your app to see the result. Replace
`{IP_ADDRESS}` with the address you see returned in the previous step.
```shell
curl -H "Host: helloworld-csharp.default.example.com" http://{IP_ADDRESS}
Hello C# Sample v1
```
## Removing the sample app deployment
## Prerequisites
- A Kubernetes cluster with Knative installed. Follow the
[installation instructions](https://github.com/knative/docs/blob/master/install/README.md)
if you need to create one.
- [Docker](https://www.docker.com) installed and running on your local machine,
and a Docker Hub account configured (we'll use it for a container registry).
- [dart-sdk](https://www.dartlang.org/tools/sdk#install) installed and configured
if you want to run the program locally.
## Recreating the sample code
1. Create a new directory and write `pubspec.yaml` as follows:
```yaml
name: hello_world_dart
private: True # let's not accidentally publish this to pub.dartlang.org
description: >-
  Hello world server example in dart.
dependencies:
  shelf: ^0.7.3
environment:
  sdk: ">=2.0.0 <3.0.0"
```
2. If you want to run locally, install dependencies. If you only want to run in
Docker or Knative, you can skip this step.
```shell
pub get
```
3. Create a new file `bin/main.dart` and write the following code:
```dart
import 'dart:io';

import 'package:shelf/shelf.dart';
import 'package:shelf/shelf_io.dart';

void main() {
  // Find port to listen on from environment variable, defaulting to 8080
  // when PORT is unset or not a number.
  var port = int.tryParse(Platform.environment['PORT'] ?? '') ?? 8080;

  // Read $TARGET from environment variable.
  var target = Platform.environment['TARGET'] ?? 'World';

  // Create handler.
  var handler = Pipeline().addMiddleware(logRequests()).addHandler((request) {
    return Response.ok('Hello $target');
  });

  // Serve handler on given port.
  serve(handler, InternetAddress.anyIPv4, port).then((server) {
    print('Serving at http://${server.address.host}:${server.port}');
  });
}
```
4. Create a new file named `Dockerfile`. This file defines the instructions for
dockerizing your application; for Dart apps this can be done as follows:
```Dockerfile
# Use Google's official Dart image.
# https://hub.docker.com/r/google/dart-runtime/
FROM google/dart-runtime

# Configure and document the service HTTP port.
ENV PORT 8080
EXPOSE $PORT
```
5. Create a new file, `service.yaml` and copy the following service definition
into the file. Make sure to replace `{username}` with your Docker Hub username.
```yaml
apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: helloworld-dart
  namespace: default
spec:
  runLatest:
    configuration:
      revisionTemplate:
        spec:
          container:
            image: docker.io/{username}/helloworld-dart
            env:
              - name: TARGET
                value: "Dart Sample v1"
```
## Building and deploying the sample
Docker Hub, run these commands replacing `{username}` with your
Docker Hub username:
```shell
# Build the container on your local machine
docker build -t {username}/helloworld-dart .

# Push the container to docker registry
docker push {username}/helloworld-dart
```
1. After the build has completed and the container is pushed to docker hub, you
can deploy the app into your cluster. Ensure that the container image value
in `service.yaml` matches the container you built in
the previous step. Apply the configuration using `kubectl`:
```shell
kubectl apply --filename service.yaml
```
1. Now that your service is created, Knative will perform the following steps:
- Create a new immutable revision for this version of the app.
- Program the network to create a route, ingress, service, and load balancer for your app.
- Automatically scale your pods up and down (including to zero active pods).
1. To find the IP address for your service, use
`kubectl get svc knative-ingressgateway --namespace istio-system` to get the ingress IP for your
cluster. If your cluster is new, it may take some time for the service to get assigned
an external IP address.
```shell
kubectl get svc knative-ingressgateway --namespace istio-system

NAME                     TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)                                      AGE
knative-ingressgateway   LoadBalancer   10.23.247.74   35.203.155.229   80:32380/TCP,443:32390/TCP,32400:32400/TCP   2d
```
1. To find the URL for your service, use
```
kubectl get ksvc helloworld-dart --output=custom-columns=NAME:.metadata.name,DOMAIN:.status.domain
NAME              DOMAIN
helloworld-dart   helloworld-dart.default.example.com
```
1. Now you can make a request to your app to see the result. Replace
`{IP_ADDRESS}` with the address you see returned in the previous step.
```shell
curl -H "Host: helloworld-dart.default.example.com" http://{IP_ADDRESS}
Hello Dart Sample v1
```
## Removing the sample app deployment
```shell
kubectl delete --filename service.yaml
```
To start your Phoenix server:
- Install dependencies with `mix deps.get`
- Install Node.js dependencies with `cd assets && npm install`
- Start Phoenix endpoint with `mix phx.server`
Now you can visit [`localhost:4000`](http://localhost:4000) from your browser.
1. Generate a new project.
```shell
mix phoenix.new helloelixir
```
When asked if you want to `Fetch and install dependencies? [Yn]`, select `y`.
1. Follow the directions in the output to change into the project directory and
start your local server with `mix phoenix.server`
          container:
            image: docker.io/{username}/helloworld-elixir
            env:
              - name: TARGET
                value: "elixir Sample v1"
```
# Building and deploying the sample
You can deploy the sample as is, or use your own version by following the
directions above.
1. Generate a new `secret_key_base` in the `config/prod.secret.exs` file.
Phoenix applications use a secrets file on production deployments and, by
default, that file is not checked into source control. An example shell of
this file is provided in `config/prod.secret.exs.sample`, and you can use the
following command to generate a new prod secrets file.
```shell
SECRET_KEY_BASE=$(elixir -e ":crypto.strong_rand_bytes(48) |> Base.encode64 |> IO.puts")
sed "s|SECRET+KEY+BASE|$SECRET_KEY_BASE|" config/prod.secret.exs.sample >config/prod.secret.exs
```

1. Use Docker to build the sample code into a container. To build and push
with Docker Hub, run these commands replacing `{username}` with your Docker
Hub username:

```shell
# Build the container on your local machine
docker build -t {username}/helloworld-elixir .

# Push the container to docker registry
docker push {username}/helloworld-elixir
```

1. After the build has completed and the container is pushed to docker hub, you
can deploy the app into your cluster. Ensure that the container image value
in `service.yaml` matches the container you built in
the previous step. Apply the configuration using `kubectl`:
```shell
kubectl apply --filename service.yaml
```
1. Now that your service is created, Knative will perform the following steps:

- Create a new immutable revision for this version of the app.
- Program the network to create a route, ingress, service, and load balancer for your app.
- Automatically scale your pods up and down (including to zero active pods).

1. To find the IP address for your service, use
`kubectl get svc knative-ingressgateway --namespace istio-system` to get the ingress IP for your
cluster. If your cluster is new, it may take some time for the service to get assigned
an external IP address.

```
kubectl get svc knative-ingressgateway --namespace istio-system

NAME                     TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)                                      AGE
knative-ingressgateway   LoadBalancer   10.35.254.218   35.225.171.32   80:32380/TCP,443:32390/TCP,32400:32400/TCP   1h
```

1. To find the URL for your service, use
```
kubectl get ksvc helloworld-elixir --output=custom-columns=NAME:.metadata.name,DOMAIN:.status.domain
helloworld-elixir helloworld-elixir.default.example.com
```
1. Now you can make a request to your app to see the results. Replace
`{IP_ADDRESS}` with the address you see returned in the previous step.

```shell
curl -H "Host: helloworld-elixir.default.example.com" http://{IP_ADDRESS}

...
# HTML from your application is returned.
...
```
Here is the HTML returned from our deployed sample application:

```HTML
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1">
<meta name="description" content="">
<meta name="author" content="">
<title>Hello Knative</title>
<link rel="stylesheet" type="text/css" href="/css/app-833cc7e8eeed7a7953c5a02e28130dbd.css?vsn=d">
</head>
<body>
<div class="container">
<header class="header">
<p class="alert alert-danger" role="alert"></p>
<main role="main">
<div class="jumbotron">
<h2>Welcome to Knative and Elixir</h2>
</div> <!-- /container -->
<script src="/js/app-930ab1950e10d7b5ab5083423c28f06e.js?vsn=d"></script>
</body>
</html>
```
## Prerequisites
- A Kubernetes cluster with Knative installed. Follow the
[installation instructions](https://github.com/knative/docs/blob/master/install/README.md) if you need
to create one.
- [Docker](https://www.docker.com) installed and running on your local machine,
and a Docker Hub account configured (we'll use it for a container registry).
## Recreating the sample code
1. Create a new file named `helloworld.go` and paste the following code. This
code creates a basic web server which listens on port 8080:
```go
package main

import (
	"fmt"
	"log"
	"net/http"
	"os"
)

func handler(w http.ResponseWriter, r *http.Request) {
	log.Print("Hello world received a request.")
	target := os.Getenv("TARGET")
	if target == "" {
		target = "World"
	}
	fmt.Fprintf(w, "Hello %s!\n", target)
}

func main() {
	log.Print("Hello world sample started.")

	http.HandleFunc("/", handler)

	port := os.Getenv("PORT")
	if port == "" {
		port = "8080"
	}

	log.Fatal(http.ListenAndServe(fmt.Sprintf(":%s", port), nil))
}
```
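Before building the container, you can sanity-check the handler's env-var fallback in-process with Go's `net/http/httptest` package. This is a standalone sketch, not part of the sample; the `greeting` helper is our own refactoring of the handler's logic:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
	"os"
)

// greeting builds the response body for a given TARGET value,
// falling back to "World" when it is empty -- the same logic
// the sample's handler applies.
func greeting(target string) string {
	if target == "" {
		target = "World"
	}
	return fmt.Sprintf("Hello %s!\n", target)
}

func handler(w http.ResponseWriter, r *http.Request) {
	io.WriteString(w, greeting(os.Getenv("TARGET")))
}

func main() {
	// Serve the handler on an ephemeral in-process test server.
	srv := httptest.NewServer(http.HandlerFunc(handler))
	defer srv.Close()

	resp, err := http.Get(srv.URL)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Print(string(body)) // "Hello World!" when TARGET is unset
}
```

With `TARGET` unset this prints `Hello World!`, matching what the deployed service returns by default.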
1. In your project directory, create a file named `Dockerfile` and copy the code
block below into it. For detailed instructions on dockerizing a Go app, see
[Deploying Go servers with Docker](https://blog.golang.org/docker).
```docker
# Use the official Golang image to create a build artifact.
# This is based on Debian and sets the GOPATH to /go.
FROM golang as builder

# Copy local code to the container image.
WORKDIR /go/src/github.com/knative/docs/helloworld
COPY . .

# Build the helloworld command inside the container.
# (You may fetch or manage dependencies here,
# either manually or with a tool like "godep".)
RUN CGO_ENABLED=0 GOOS=linux go build -v -o helloworld

# Use a Docker multi-stage build to create a lean production image.
# https://docs.docker.com/develop/develop-images/multistage-build/#use-multi-stage-builds
FROM alpine

# Copy the binary to the production image from the builder stage.
COPY --from=builder /go/src/github.com/knative/docs/helloworld/helloworld /helloworld

# Configure and document the service HTTP port.
ENV PORT 8080
EXPOSE $PORT

# Run the web service on container startup.
CMD ["/helloworld"]
```
1. Create a new file, `service.yaml` and copy the following service definition
into the file. Make sure to replace `{username}` with your Docker Hub username.
```yaml
apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: helloworld-go
  namespace: default
spec:
  runLatest:
    configuration:
      revisionTemplate:
        spec:
          container:
            image: docker.io/{username}/helloworld-go
            env:
              - name: TARGET
                value: "Go Sample v1"
```
## Building and deploying the sample
Docker Hub, run these commands replacing `{username}` with your
Docker Hub username:
```shell
# Build the container on your local machine
docker build -t {username}/helloworld-go .

# Push the container to docker registry
docker push {username}/helloworld-go
```
1. After the build has completed and the container is pushed to docker hub, you
can deploy the app into your cluster. Ensure that the container image value
in `service.yaml` matches the container you built in
the previous step. Apply the configuration using `kubectl`:
```shell
kubectl apply --filename service.yaml
```
1. Now that your service is created, Knative will perform the following steps:
- Create a new immutable revision for this version of the app.
- Network programming to create a route, ingress, service, and load balancer for your app.
- Automatically scale your pods up and down (including to zero active pods).
1. Run the following command to find the external IP address for your service. The ingress IP for your
cluster is returned. If you just created your cluster, you might need to wait and rerun the command until
your service gets assigned an external IP address.
```shell
kubectl get svc knative-ingressgateway --namespace istio-system
```
Example:

```shell
NAME                     TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)                                      AGE
knative-ingressgateway   LoadBalancer   10.23.247.74   35.203.155.229   80:32380/TCP,443:32390/TCP,32400:32400/TCP   2d
```
1. Run the following command to find the domain URL for your service:
```shell
kubectl get ksvc helloworld-go --output=custom-columns=NAME:.metadata.name,DOMAIN:.status.domain
```

Example:

```shell
NAME            DOMAIN
helloworld-go   helloworld-go.default.example.com
```
1. Test your app by sending it a request. Use the following
`curl` command with the domain URL `helloworld-go.default.example.com` and `EXTERNAL-IP` address that you retrieved
in the previous steps:
```shell
curl -H "Host: helloworld-go.default.example.com" http://{EXTERNAL_IP_ADDRESS}
```
Example:

```shell
curl -H "Host: helloworld-go.default.example.com" http://35.203.155.229
Hello World: Go Sample v1!
```

> Note: Add the `-v` option to get more detail if the `curl` command fails.
## Removing the sample app deployment


# Hello World - Haskell sample
A simple web app written in Haskell that you can use for testing.
It reads in an env variable `TARGET` and prints "Hello \${TARGET}!". If
TARGET is not specified, it will use "World" as the TARGET.
## Prerequisites
- A Kubernetes cluster with Knative installed. Follow the
[installation instructions](https://github.com/knative/docs/blob/master/install/README.md) if you need
to create one.
- [Docker](https://www.docker.com) installed and running on your local machine,
and a Docker Hub account configured (we'll use it for a container registry).
## Recreating the sample code
1. Create a new file named `stack.yaml` and paste the following code:
```yaml
flags: {}
packages:
  - .
extra-deps: []
resolver: lts-10.7
```
1. Create a new file named `package.yaml` and paste the following code
```yaml
name: helloworld-haskell
version: 0.1.0.0
dependencies:
  - base >= 4.7 && < 5
  - scotty
  - text

executables:
  helloworld-haskell-exe:
    main: Main.hs
    source-dirs: app
    ghc-options:
      - -threaded
      - -rtsopts
      - -with-rtsopts=-N
```
1. Create an `app` folder, then create a new file named `Main.hs` in that folder
and paste the following code. This code creates a basic web server which
listens on port 8080:
```haskell
{-# LANGUAGE OverloadedStrings #-}

import Data.Maybe
import Data.Monoid ((<>))
import Data.Text.Lazy (Text)
import Data.Text.Lazy
import System.Environment (lookupEnv)
import Web.Scotty (ActionM, ScottyM, scotty)
import Web.Scotty.Trans

main :: IO ()
main = do
  t <- fromMaybe "World" <$> lookupEnv "TARGET"
  pStr <- fromMaybe "8080" <$> lookupEnv "PORT"
  let p = read pStr :: Int
  scotty p (route t)

route :: String -> ScottyM()
route t = get "/" $ hello t

hello :: String -> ActionM()
hello t = text $ pack ("Hello " ++ t)
```
1. In your project directory, create a file named `Dockerfile` and copy the code
block below into it.
```docker
# Use the official Haskell image to create a build artifact.
# https://hub.docker.com/_/haskell/
FROM haskell:8.2.2 as builder

# Copy local code to the container image.
WORKDIR /app
COPY . .

# Build and test our code, then build the “helloworld-haskell-exe” executable.
RUN stack setup
RUN stack build --copy-bins

# Use a Docker multi-stage build to create a lean production image.
# https://docs.docker.com/develop/develop-images/multistage-build/#use-multi-stage-builds
FROM fpco/haskell-scratch:integer-gmp

# Copy the "helloworld-haskell-exe" executable from the builder stage to the production image.
WORKDIR /root/
COPY --from=builder /root/.local/bin/helloworld-haskell-exe .

# Configure and document the service HTTP port.
ENV PORT 8080
EXPOSE $PORT

# Run the web service on container startup.
CMD ["./helloworld-haskell-exe"]
```
1. Create a new file, `service.yaml` and copy the following service definition
into the file. Make sure to replace `{username}` with your Docker Hub username.
```yaml
apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: helloworld-haskell
  namespace: default
spec:
  runLatest:
    configuration:
      revisionTemplate:
        spec:
          container:
            image: docker.io/{username}/helloworld-haskell
            env:
              - name: TARGET
                value: "Haskell Sample v1"
```
## Build and deploy this sample
Docker Hub, enter these commands replacing `{username}` with your
Docker Hub username:
```shell
# Build the container on your local machine
docker build -t {username}/helloworld-haskell .

# Push the container to docker registry
docker push {username}/helloworld-haskell
```
1. After the build has completed and the container is pushed to Docker Hub, you
can deploy the app into your cluster. Ensure that the container image value
in `service.yaml` matches the container you built in
the previous step. Apply the configuration using `kubectl`:
```shell
kubectl apply --filename service.yaml
```
1. Now that your service is created, Knative will perform the following steps:
- Create a new immutable revision for this version of the app.
- Network programming to create a route, ingress, service, and load balancer for your app.
- Automatically scale your pods up and down (including to zero active pods).
1. To find the IP address for your service, enter
`kubectl get svc knative-ingressgateway --namespace istio-system` to get the ingress IP for your
cluster. If your cluster is new, it may take some time for the service to get assigned
an external IP address.
```shell
kubectl get svc knative-ingressgateway --namespace istio-system
NAME                     TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)                                      AGE
knative-ingressgateway   LoadBalancer   10.23.247.74   35.203.155.229   80:32380/TCP,443:32390/TCP,32400:32400/TCP   2d
```
For minikube or bare-metal, get IP_ADDRESS by running the following command:
```shell
echo $(kubectl get node --output 'jsonpath={.items[0].status.addresses[0].address}'):$(kubectl get svc knative-ingressgateway --namespace istio-system --output 'jsonpath={.spec.ports[?(@.port==80)].nodePort}')
```
1. To find the URL for your service, enter:
```
kubectl get ksvc helloworld-haskell --output=custom-columns=NAME:.metadata.name,DOMAIN:.status.domain
NAME                 DOMAIN
helloworld-haskell   helloworld-haskell.default.example.com
```
1. Now you can make a request to your app and see the result. Replace
`{IP_ADDRESS}` with the address you see returned in the previous step.
```shell
curl -H "Host: helloworld-haskell.default.example.com" http://{IP_ADDRESS}
Hello world: Haskell Sample v1
```
## Removing the sample app deployment
To remove the sample app from your cluster, delete the service record:
```shell
kubectl delete --filename service.yaml
```


# Hello World - Spring Boot Java sample
A simple web app written in Java using Spring Boot 2.0 that you can use for testing.
It reads in an env variable `TARGET` and prints "Hello \${TARGET}!". If
TARGET is not specified, it will use "World" as the TARGET.
## Prerequisites
- A Kubernetes cluster with Knative installed. Follow the
[installation instructions](https://github.com/knative/docs/blob/master/install/README.md) if you need
to create one.
- [Docker](https://www.docker.com) installed and running on your local machine,
and a Docker Hub account configured (we'll use it for a container registry).
- You have installed [Java SE 8 or later JDK](http://www.oracle.com/technetwork/java/javase/downloads/index.html).
## Recreating the sample code
1. From the console, create a new empty web project using the curl and unzip commands:
```shell
curl https://start.spring.io/starter.zip \
    -d dependencies=web \
    -d name=helloworld \
    -d artifactId=helloworld \
    -o helloworld.zip
unzip helloworld.zip
```
If you don't have curl installed, you can accomplish the same by visiting the
[Spring Initializr](https://start.spring.io/) page. Specify Artifact as `helloworld`
and add the `Web` dependency. Then click `Generate Project`, download and unzip the
sample archive.
1. Update the `SpringBootApplication` class in
`src/main/java/com/example/helloworld/HelloworldApplication.java` by adding
a `@RestController` to handle the "/" mapping and also add a `@Value` field to
provide the TARGET environment variable:
```java
package com.example.helloworld;
```java
package com.example.helloworld;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;
@SpringBootApplication
public class HelloworldApplication {
@SpringBootApplication
public class HelloworldApplication {
@Value("${TARGET:World}")
String target;
@Value("${TARGET:World}")
String target;
@RestController
class HelloworldController {
@GetMapping("/")
String hello() {
return "Hello " + target + "!";
}
}
@RestController
class HelloworldController {
@GetMapping("/")
String hello() {
return "Hello " + target + "!";
}
}
public static void main(String[] args) {
SpringApplication.run(HelloworldApplication.class, args);
}
}
```
public static void main(String[] args) {
SpringApplication.run(HelloworldApplication.class, args);
}
}
```
1. Run the application locally:
```shell
For additional information on multi-stage docker builds for Java see
[Creating Smaller Java Image using Docker Multi-stage Build](http://blog.arungupta.me/smaller-java-image-docker-multi-stage-build/).
```docker
# Use the official maven/Java 8 image to create a build artifact.
# https://hub.docker.com/_/maven
FROM maven:3.5-jdk-8-alpine as builder

# Copy local code to the container image.
WORKDIR /app
COPY pom.xml .
COPY src ./src

# Build a release artifact.
RUN mvn package -DskipTests

# Use the Official OpenJDK image for a lean production stage of our multi-stage build.
# https://hub.docker.com/_/openjdk
# https://docs.docker.com/develop/develop-images/multistage-build/#use-multi-stage-builds
FROM openjdk:8-jre-alpine

# Copy the jar to the production image from the builder stage.
COPY --from=builder /app/target/helloworld-*.jar /helloworld.jar

# Configure and document the service HTTP port.
ENV PORT 8080
EXPOSE $PORT

# Run the web service on container startup.
CMD ["java","-Djava.security.egd=file:/dev/./urandom","-Dserver.port=${PORT}","-jar","/helloworld.jar"]
```
1. Create a new file, `service.yaml` and copy the following service definition
into the file. Make sure to replace `{username}` with your Docker Hub username.
```yaml
apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: helloworld-java
  namespace: default
spec:
  runLatest:
    configuration:
      revisionTemplate:
        spec:
          container:
            image: docker.io/{username}/helloworld-java
            env:
              - name: TARGET
                value: "Spring Boot Sample v1"
```
## Building and deploying the sample
Docker Hub, run these commands replacing `{username}` with your
Docker Hub username:
```shell
# Build the container on your local machine
docker build -t {username}/helloworld-java .

# Push the container to docker registry
docker push {username}/helloworld-java
```
1. After the build has completed and the container is pushed to docker hub, you
can deploy the app into your cluster. Ensure that the container image value
in `service.yaml` matches the container you built in
the previous step. Apply the configuration using `kubectl`:
```shell
kubectl apply --filename service.yaml
```
1. Now that your service is created, Knative will perform the following steps:
- Create a new immutable revision for this version of the app.
- Network programming to create a route, ingress, service, and load balancer for your app.
- Automatically scale your pods up and down (including to zero active pods).
1. To find the IP address for your service, run the following command. If your cluster is new, it may take some time for the service to get assigned an external IP address.
```shell
kubectl get svc knative-ingressgateway --namespace istio-system
NAME                     TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)                                      AGE
knative-ingressgateway   LoadBalancer   10.23.247.74   35.203.155.229   80:32380/TCP,443:32390/TCP,32400:32400/TCP   2d
```
1. To find the URL for your service, use
```shell
kubectl get ksvc helloworld-java \
  --output=custom-columns=NAME:.metadata.name,DOMAIN:.status.domain
NAME              DOMAIN
helloworld-java   helloworld-java.default.example.com
```
1. Now you can make a request to your app to see the result. Replace
`{IP_ADDRESS}` with the address you see returned in the previous step.
```shell
curl -H "Host: helloworld-java.default.example.com" http://{IP_ADDRESS}
Hello World: Spring Boot Sample v1
```
## Removing the sample app deployment


# Hello World - Kotlin sample
A simple web app written in Kotlin using [Ktor](https://ktor.io/) that you can use for testing.
It reads in an env variable `TARGET` and prints "Hello \${TARGET}". If
TARGET is not specified, it will use "World" as the TARGET.
## Prerequisites
- A Kubernetes cluster with Knative installed. Follow the
[installation instructions](https://github.com/knative/docs/blob/master/install/README.md) if you need
to create one.
- [Docker](https://www.docker.com) installed and running on your local machine,
and a Docker Hub account configured (we'll use it for a container registry).
## Steps to recreate the sample code
1. Create a new directory and cd into it:
```shell
mkdir hello
cd hello
```
2. Create a file named `Main.kt` at `src/main/kotlin/com/example/hello` and copy the code block below into it:
```shell
mkdir -p src/main/kotlin/com/example/hello
```
```kotlin
package com.example.hello

import io.ktor.application.*
import io.ktor.http.*
import io.ktor.response.*
import io.ktor.routing.*
import io.ktor.server.engine.*
import io.ktor.server.netty.*

fun main(args: Array<String>) {
    val target = System.getenv("TARGET") ?: "World"
    val port = System.getenv("PORT") ?: "8080"
    embeddedServer(Netty, port.toInt()) {
        routing {
            get("/") {
                call.respondText("Hello $target", ContentType.Text.Html)
            }
        }
    }.start(wait = true)
}
```
3. Switch back to the `hello` directory
4. Create a new file, `build.gradle` and copy the following setting
5. Create a file named `Dockerfile` and copy the code block below into it.
```docker
# Use the official gradle image to create a build artifact.
# https://hub.docker.com/_/gradle
FROM gradle as builder

# Copy local code to the container image.
COPY build.gradle .
COPY src ./src

# Build a release artifact.
RUN gradle clean build --no-daemon

# Use the Official OpenJDK image for a lean production stage of our multi-stage build.
# https://hub.docker.com/_/openjdk
# https://docs.docker.com/develop/develop-images/multistage-build/#use-multi-stage-builds
FROM openjdk:8-jre-alpine

# Copy the jar to the production image from the builder stage.
COPY --from=builder /home/gradle/build/libs/gradle.jar /helloworld.jar

# Configure and document the service HTTP port.
ENV PORT 8080
EXPOSE $PORT

# Run the web service on container startup.
CMD [ "java", "-jar", "-Djava.security.egd=file:/dev/./urandom", "/helloworld.jar" ]
```
6. Create a new file, `service.yaml` and copy the following service definition
into the file. Make sure to replace `{username}` with your Docker Hub username.
```yaml
apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: helloworld-kotlin
  namespace: default
spec:
  runLatest:
    configuration:
      revisionTemplate:
        spec:
          container:
            image: docker.io/{username}/helloworld-kotlin
            env:
              - name: TARGET
                value: "Kotlin Sample v1"
```
## Build and deploy this sample
Docker Hub, run these commands replacing `{username}` with your
Docker Hub username:
```shell
# Build the container on your local machine
docker build -t {username}/helloworld-kotlin .

# Push the container to docker registry
docker push {username}/helloworld-kotlin
```
2. After the build has completed and the container is pushed to docker hub, you
can deploy the app into your cluster. Ensure that the container image value
in `service.yaml` matches the container you built in
the previous step. Apply the configuration using `kubectl`:
```shell
kubectl apply --filename service.yaml
```
3. Now that your service is created, Knative will perform the following steps:
- Create a new immutable revision for this version of the app.
- Network programming to create a route, ingress, service, and load balancer for your app.
- Automatically scale your pods up and down (including to zero active pods).
4. To find the IP address for your service, use
`kubectl get service knative-ingressgateway --namespace istio-system` to get the ingress IP for your
cluster. If your cluster is new, it may take some time for the service to be assigned
an external IP address.
```shell
kubectl get service knative-ingressgateway --namespace istio-system
```

```shell
NAME                     TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)                                      AGE
knative-ingressgateway   LoadBalancer   10.23.247.74   35.203.155.229   80:32380/TCP,443:32390/TCP,32400:32400/TCP   2d
```
5. To find the URL for your service, use
```shell
kubectl get ksvc helloworld-kotlin --output=custom-columns=NAME:.metadata.name,DOMAIN:.status.domain
```

```shell
NAME                DOMAIN
helloworld-kotlin   helloworld-kotlin.default.example.com
```
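The `DOMAIN` value above follows a predictable pattern: `<service name>.<namespace>.<domain suffix>`, where `example.com` is the default suffix Knative ships with. A minimal sketch of that composition (the `knative_domain` helper is ours for illustration, not part of any Knative API):

```python
def knative_domain(service, namespace="default", suffix="example.com"):
    """Compose the routable host name that Knative assigns to a service."""
    return "{}.{}.{}".format(service, namespace, suffix)

# The helloworld-kotlin service in the default namespace:
print(knative_domain("helloworld-kotlin"))  # helloworld-kotlin.default.example.com
```

This is why the `curl` command in the next step sets the `Host` header to that composed name.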
6. Now you can make a request to your app to see the result. Replace `{IP_ADDRESS}`
with the address you see returned in the previous step.
```shell
curl -H "Host: helloworld-kotlin.default.example.com" http://{IP_ADDRESS}
```

```shell
Hello Kotlin Sample v1
```
## Remove the sample app deployment

# Hello World - Node.js sample
A simple web app written in Node.js that you can use for testing.
It reads in an env variable `TARGET` and prints "Hello \${TARGET}!". If
TARGET is not specified, it will use "World" as the TARGET.
## Prerequisites
- A Kubernetes cluster with Knative installed. Follow the
[installation instructions](https://github.com/knative/docs/blob/master/install/README.md) if you need
to create one.
- [Docker](https://www.docker.com) installed and running on your local machine,
and a Docker Hub account configured (we'll use it for a container registry).
- [Node.js](https://nodejs.org/en/) installed and configured.
## Recreating the sample code
but change the entry point to `app.js` to be consistent with the sample
code here.
```shell
npm init

package name: (helloworld-nodejs)
version: (1.0.0)
description:
entry point: (index.js) app.js
test command:
git repository:
keywords:
author:
license: (ISC) Apache-2.0
```
1. Install the `express` package:
```shell
npm install express --save
```
1. Create a new file named `app.js` and paste the following code:
```js
const express = require("express");
const app = express();

app.get("/", function(req, res) {
  console.log("Hello world received a request.");

  const target = process.env.TARGET || "World";
  res.send("Hello " + target + "!");
});

const port = process.env.PORT || 8080;
app.listen(port, function() {
  console.log("Hello world listening on port", port);
});
```
1. Modify the `package.json` file to add a start command to the scripts section:
```json
{
  "name": "knative-serving-helloworld",
  "version": "1.0.0",
  "description": "",
  "main": "app.js",
  "scripts": {
    "start": "node app.js"
  },
  "author": "",
  "license": "Apache-2.0"
}
```
1. In your project directory, create a file named `Dockerfile` and copy the code
block below into it. For detailed instructions on dockerizing a Node.js app,
see [Dockerizing a Node.js web app](https://nodejs.org/en/docs/guides/nodejs-docker-webapp/).
```Dockerfile
# Use the official Node 8 image.
# https://hub.docker.com/_/node
FROM node:8

# Create and change to the app directory.
WORKDIR /usr/src/app

# Copy application dependency manifests to the container image.
# A wildcard is used to ensure both package.json AND package-lock.json are copied.
# Copying this separately prevents re-running npm install on every code change.
COPY package*.json ./

# Install production dependencies.
RUN npm install --only=production

# Copy local code to the container image.
COPY . .

# Configure and document the service HTTP port.
ENV PORT 8080
EXPOSE $PORT

# Run the web service on container startup.
CMD [ "npm", "start" ]
```
1. Create a new file, `service.yaml` and copy the following service definition
into the file. Make sure to replace `{username}` with your Docker Hub username.
```yaml
apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: helloworld-nodejs
  namespace: default
spec:
  runLatest:
    configuration:
      revisionTemplate:
        spec:
          container:
            image: docker.io/{username}/helloworld-nodejs
            env:
              - name: TARGET
                value: "Node.js Sample v1"
```
## Building and deploying the sample
Docker Hub, run these commands replacing `{username}` with your
Docker Hub username:
```shell
# Build the container on your local machine
docker build -t {username}/helloworld-nodejs .

# Push the container to docker registry
docker push {username}/helloworld-nodejs
```
1. After the build has completed and the container is pushed to Docker Hub, you
can deploy the app into your cluster. Ensure that the container image value
in `service.yaml` matches the container you built in
the previous step. Apply the configuration using `kubectl`:
```shell
kubectl apply --filename service.yaml
```
1. Now that your service is created, Knative will perform the following steps:
- Create a new immutable revision for this version of the app.
- Network programming to create a route, ingress, service, and load balancer for your app.
- Automatically scale your pods up and down (including to zero active pods).
1. To find the IP address for your service, use
`kubectl get svc knative-ingressgateway --namespace istio-system` to get the ingress IP for your
cluster. If your cluster is new, it may take some time for the service to be assigned
an external IP address.
```shell
kubectl get svc knative-ingressgateway --namespace istio-system

NAME                     TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)                                      AGE
knative-ingressgateway   LoadBalancer   10.23.247.74   35.203.155.229   80:32380/TCP,443:32390/TCP,32400:32400/TCP   2d
```
1. To find the URL for your service, use
```
kubectl get ksvc helloworld-nodejs --output=custom-columns=NAME:.metadata.name,DOMAIN:.status.domain

NAME                DOMAIN
helloworld-nodejs   helloworld-nodejs.default.example.com
```
1. Now you can make a request to your app to see the result. Replace
`{IP_ADDRESS}` with the address you see returned in the previous step.
```shell
curl -H "Host: helloworld-nodejs.default.example.com" http://{IP_ADDRESS}

Hello Node.js Sample v1!
```
## Removing the sample app deployment

# Hello World - PHP sample
A simple web app written in PHP that you can use for testing.
It reads in an env variable `TARGET` and prints "Hello \${TARGET}!". If
TARGET is not specified, it will use "World" as the TARGET.
## Prerequisites
- A Kubernetes cluster with Knative installed. Follow the
[installation instructions](https://github.com/knative/docs/blob/master/install/README.md) if you need
to create one.
- [Docker](https://www.docker.com) installed and running on your local machine,
and a Docker Hub account configured (we'll use it for a container registry).
## Recreating the sample code
1. Create a new directory and cd into it:
```shell
mkdir app
cd app
```
1. Create a file named `index.php` and copy the code block below into it:
```php
<?php
$target = getenv('TARGET', true) ?: "World";
echo sprintf("Hello %s!\n", $target);
```
1. Create a file named `Dockerfile` and copy the code block below into it.
See [official PHP docker image](https://hub.docker.com/_/php/) for more details.
```docker
# Use the official PHP 7.2 image.
# https://hub.docker.com/_/php
FROM php:7.2.6-apache

# Copy local code to the container image.
COPY index.php /var/www/html/

# Use the PORT environment variable in Apache configuration files.
RUN sed -i 's/80/${PORT}/g' /etc/apache2/sites-available/000-default.conf /etc/apache2/ports.conf

# Configure and document the service HTTP port.
ENV PORT 8080
EXPOSE $PORT
```
1. Create a new file, `service.yaml` and copy the following service definition
into the file. Make sure to replace `{username}` with your Docker Hub username.
```yaml
apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: helloworld-php
  namespace: default
spec:
  runLatest:
    configuration:
      revisionTemplate:
        spec:
          container:
            image: docker.io/{username}/helloworld-php
            env:
              - name: TARGET
                value: "PHP Sample v1"
```
## Building and deploying the sample
Docker Hub, run these commands replacing `{username}` with your
Docker Hub username:
```shell
# Build the container on your local machine
docker build -t {username}/helloworld-php .

# Push the container to docker registry
docker push {username}/helloworld-php
```
1. After the build has completed and the container is pushed to Docker Hub, you
can deploy the app into your cluster. Ensure that the container image value
in `service.yaml` matches the container you built in
the previous step. Apply the configuration using `kubectl`:
```shell
kubectl apply --filename service.yaml
```
1. Now that your service is created, Knative will perform the following steps:
- Create a new immutable revision for this version of the app.
- Network programming to create a route, ingress, service, and load balancer for your app.
- Automatically scale your pods up and down (including to zero active pods).
1. To find the IP address for your service, use
`kubectl get svc knative-ingressgateway --namespace istio-system` to get the ingress IP for your
cluster. If your cluster is new, it may take some time for the service to be assigned
an external IP address.
```shell
kubectl get svc knative-ingressgateway --namespace istio-system

NAME                     TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)                                      AGE
knative-ingressgateway   LoadBalancer   10.23.247.74   35.203.155.229   80:32380/TCP,443:32390/TCP,32400:32400/TCP   2d
```
1. To find the URL for your service, use
```
kubectl get ksvc helloworld-php --output=custom-columns=NAME:.metadata.name,DOMAIN:.status.domain

NAME             DOMAIN
helloworld-php   helloworld-php.default.example.com
```
1. Now you can make a request to your app to see the result. Replace
`{IP_ADDRESS}` with the address you see returned in the previous step.
```shell
curl -H "Host: helloworld-php.default.example.com" http://{IP_ADDRESS}

Hello PHP Sample v1!
```
## Removing the sample app deployment

# Hello World - Python sample
A simple web app written in Python that you can use for testing.
It reads in an env variable `TARGET` and prints "Hello \${TARGET}!". If
TARGET is not specified, it will use "World" as the TARGET.
## Prerequisites
- A Kubernetes cluster with Knative installed. Follow the
[installation instructions](https://github.com/knative/docs/blob/master/install/README.md) if you need
to create one.
- [Docker](https://www.docker.com) installed and running on your local machine,
and a Docker Hub account configured (we'll use it for a container registry).
## Steps to recreate the sample code
1. Create a new directory and cd into it:
```shell
mkdir app
cd app
```
1. Create a file named `app.py` and copy the code block below into it:
```python
import os

from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello_world():
    target = os.environ.get('TARGET', 'World')
    return 'Hello {}!\n'.format(target)

if __name__ == "__main__":
    app.run(debug=True,host='0.0.0.0',port=int(os.environ.get('PORT', 8080)))
```
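The `TARGET` fallback that `hello_world()` relies on is just `os.environ.get` with a default, so it can be exercised on its own outside Flask:

```python
import os

os.environ.pop("TARGET", None)  # simulate an unset variable
print("Hello {}!".format(os.environ.get("TARGET", "World")))  # Hello World!

os.environ["TARGET"] = "Python Sample v1"  # what service.yaml sets
print("Hello {}!".format(os.environ.get("TARGET", "World")))  # Hello Python Sample v1!
```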
1. Create a file named `Dockerfile` and copy the code block below into it.
See [official Python docker image](https://hub.docker.com/_/python/) for more details.
```docker
# Use the official Python image.
# https://hub.docker.com/_/python
FROM python

# Copy local code to the container image.
ENV APP_HOME /app
WORKDIR $APP_HOME
COPY . .

# Install production dependencies.
RUN pip install Flask

# Configure and document the service HTTP port.
ENV PORT 8080
EXPOSE $PORT

# Run the web service on container startup.
CMD ["python", "app.py"]
```
1. Create a new file, `service.yaml` and copy the following service definition
into the file. Make sure to replace `{username}` with your Docker Hub username.
```yaml
apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: helloworld-python
  namespace: default
spec:
  runLatest:
    configuration:
      revisionTemplate:
        spec:
          container:
            image: docker.io/{username}/helloworld-python
            env:
              - name: TARGET
                value: "Python Sample v1"
```
## Build and deploy this sample
Docker Hub, run these commands replacing `{username}` with your
Docker Hub username:
```shell
# Build the container on your local machine
docker build -t {username}/helloworld-python .

# Push the container to docker registry
docker push {username}/helloworld-python
```
1. After the build has completed and the container is pushed to Docker Hub, you
can deploy the app into your cluster. Ensure that the container image value
in `service.yaml` matches the container you built in
the previous step. Apply the configuration using `kubectl`:
```shell
kubectl apply --filename service.yaml
```
1. Now that your service is created, Knative will perform the following steps:
- Create a new immutable revision for this version of the app.
- Network programming to create a route, ingress, service, and load balancer for your app.
- Automatically scale your pods up and down (including to zero active pods).
1. To find the IP address for your service, use
`kubectl get svc knative-ingressgateway --namespace istio-system` to get the ingress IP for your
cluster. If your cluster is new, it may take some time for the service to be assigned
an external IP address.
```shell
kubectl get svc knative-ingressgateway --namespace istio-system

NAME                     TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)                                      AGE
knative-ingressgateway   LoadBalancer   10.23.247.74   35.203.155.229   80:32380/TCP,443:32390/TCP,32400:32400/TCP   2d
```
1. To find the URL for your service, use
```
kubectl get ksvc helloworld-python --output=custom-columns=NAME:.metadata.name,DOMAIN:.status.domain

NAME                DOMAIN
helloworld-python   helloworld-python.default.example.com
```
1. Now you can make a request to your app to see the result. Replace `{IP_ADDRESS}`
with the address you see returned in the previous step.
```shell
curl -H "Host: helloworld-python.default.example.com" http://{IP_ADDRESS}

Hello Python Sample v1!
```
## Remove the sample app deployment

# Hello World - Ruby sample
A simple web app written in Ruby that you can use for testing.
It reads in an env variable `TARGET` and prints "Hello \${TARGET}!". If
TARGET is not specified, it will use "World" as the TARGET.
## Prerequisites
- A Kubernetes cluster with Knative installed. Follow the
[installation instructions](https://github.com/knative/docs/blob/master/install/README.md) if you need
to create one.
- [Docker](https://www.docker.com) installed and running on your local machine,
and a Docker Hub account configured (we'll use it for a container registry).
## Steps to recreate the sample code
1. Create a new directory and cd into it:
```shell
mkdir app
cd app
```
1. Create a file named `app.rb` and copy the code block below into it:
```ruby
require 'sinatra'

set :bind, '0.0.0.0'

get '/' do
  target = ENV['TARGET'] || 'World'
  "Hello #{target}!\n"
end
```
1. Create a file named `Dockerfile` and copy the code block below into it.
See [official Ruby docker image](https://hub.docker.com/_/ruby/) for more details.
```docker
# Use the official Ruby image.
# https://hub.docker.com/_/ruby
FROM ruby:2.5

# Install production dependencies.
WORKDIR /usr/src/app
COPY Gemfile Gemfile.lock ./
ENV BUNDLE_FROZEN=true
RUN bundle install

# Copy local code to the container image.
COPY . .

# Configure and document the service HTTP port.
ENV PORT 8080
EXPOSE $PORT

# Run the web service on container startup.
CMD ["ruby", "./app.rb"]
```
1. Create a file named `Gemfile` and copy the text block below into it.
```gem
source 'https://rubygems.org'

gem 'sinatra'
```
1. Run bundle. If you don't have bundler installed, copy the
[Gemfile.lock](./Gemfile.lock) to your working directory.
```shell
bundle install
```
1. Create a new file, `service.yaml` and copy the following service definition
into the file. Make sure to replace `{username}` with your Docker Hub username.
```yaml
apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: helloworld-ruby
  namespace: default
spec:
  runLatest:
    configuration:
      revisionTemplate:
        spec:
          container:
            image: docker.io/{username}/helloworld-ruby
            env:
              - name: TARGET
                value: "Ruby Sample v1"
```
## Build and deploy this sample
Docker Hub, run these commands replacing `{username}` with your
Docker Hub username:
```shell
# Build the container on your local machine
docker build -t {username}/helloworld-ruby .

# Push the container to docker registry
docker push {username}/helloworld-ruby
```
1. After the build has completed and the container is pushed to Docker Hub, you
can deploy the app into your cluster. Ensure that the container image value
in `service.yaml` matches the container you built in
the previous step. Apply the configuration using `kubectl`:
```shell
kubectl apply --filename service.yaml
```
1. Now that your service is created, Knative will perform the following steps:
- Create a new immutable revision for this version of the app.
- Network programming to create a route, ingress, service, and load balancer for your app.
- Automatically scale your pods up and down (including to zero active pods).
1. To find the IP address for your service, use
`kubectl get svc knative-ingressgateway --namespace istio-system` to get the ingress IP for your
cluster. If your cluster is new, it may take some time for the service to be assigned
an external IP address.
```shell
kubectl get svc knative-ingressgateway --namespace istio-system

NAME                     TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)                                      AGE
knative-ingressgateway   LoadBalancer   10.23.247.74   35.203.155.229   80:32380/TCP,443:32390/TCP,32400:32400/TCP   2d
```
1. To find the URL for your service, use
```
kubectl get ksvc helloworld-ruby --output=custom-columns=NAME:.metadata.name,DOMAIN:.status.domain

NAME              DOMAIN
helloworld-ruby   helloworld-ruby.default.example.com
```
1. Now you can make a request to your app to see the result. Replace `{IP_ADDRESS}`
with the address you see returned in the previous step.
```shell
curl -H "Host: helloworld-ruby.default.example.com" http://{IP_ADDRESS}

Hello Ruby Sample v1!
```
## Remove the sample app deployment

# Hello World - Rust sample
A simple web app written in Rust that you can use for testing.
It reads in an env variable `TARGET` and prints "Hello \${TARGET}!". If
TARGET is not specified, it will use "World" as the TARGET.
## Prerequisites
- A Kubernetes cluster with Knative installed. Follow the
[installation instructions](https://github.com/knative/docs/blob/master/install/README.md) if you need
to create one.
- [Docker](https://www.docker.com) installed and running on your local machine,
and a Docker Hub account configured (we'll use it for a container registry).
## Steps to recreate the sample code
1. Create a new file named `Cargo.toml` and paste the following code:
```toml
[package]
name = "hellorust"
version = "0.0.0"
publish = false

[dependencies]
hyper = "0.12.3"
pretty_env_logger = "0.2.3"
```
1. Create a `src` folder, then create a new file named `main.rs` in that folder
and paste the following code. This code creates a basic web server which
listens on port 8080:
```rust
#![deny(warnings)]
extern crate hyper;
extern crate pretty_env_logger;

use hyper::{Body, Response, Server};
use hyper::service::service_fn_ok;
use hyper::rt::{self, Future};
use std::env;

fn main() {
    pretty_env_logger::init();

    let mut port: u16 = 8080;
    match env::var("PORT") {
        Ok(p) => {
            match p.parse::<u16>() {
                Ok(n) => {port = n;},
                Err(_e) => {},
            };
        }
        Err(_e) => {},
    };
    let addr = ([0, 0, 0, 0], port).into();

    let new_service = || {
        service_fn_ok(|_| {
            let mut hello = "Hello ".to_string();
            match env::var("TARGET") {
                Ok(target) => {hello.push_str(&target);},
                Err(_e) => {hello.push_str("World")},
            };

            Response::new(Body::from(hello))
        })
    };

    let server = Server::bind(&addr)
        .serve(new_service)
        .map_err(|e| eprintln!("server error: {}", e));

    println!("Listening on http://{}", addr);

    rt::run(server);
}
```
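The nested `match` on `PORT` above reduces to: parse the variable as a `u16` when it is present and valid, otherwise keep the default of 8080. For comparison, the same fallback sketched in Python (the `resolve_port` helper name is ours, not from the sample):

```python
import os

def resolve_port(default=8080):
    """Parse PORT from the environment; fall back on absence or bad input."""
    raw = os.environ.get("PORT")
    if raw is None:
        return default
    try:
        port = int(raw)
    except ValueError:
        return default
    # Range check mirroring the Rust parse::<u16>()
    return port if 0 <= port <= 65535 else default

os.environ["PORT"] = "3000"
print(resolve_port())  # 3000
```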
1. In your project directory, create a file named `Dockerfile` and copy the code
block below into it.
```docker
# Use the official Rust image.
# https://hub.docker.com/_/rust
FROM rust:1.27.0

# Copy local code to the container image.
WORKDIR /usr/src/app
COPY . .

# Install production dependencies and build a release artifact.
RUN cargo install

# Configure and document the service HTTP port.
ENV PORT 8080
EXPOSE $PORT

# Run the web service on container startup.
CMD ["hellorust"]
```
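   Note: the bare `RUN cargo install` above works with the pinned `rust:1.27.0` image, but newer Rust toolchains reject `cargo install` without an explicit package path. If you bump the base image, the adjusted instruction would likely be (an assumption to verify against your toolchain):

   ```docker
   # Newer cargo versions require the package path explicitly.
   RUN cargo install --path .
   ```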
1. Create a new file, `service.yaml` and copy the following service definition
into the file. Make sure to replace `{username}` with your Docker Hub username.
```yaml
apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: helloworld-rust
  namespace: default
spec:
  runLatest:
    configuration:
      revisionTemplate:
        spec:
          container:
            image: docker.io/{username}/helloworld-rust
            env:
              - name: TARGET
                value: "Rust Sample v1"
```
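The `TARGET` fallback in the code above (use the variable when set, otherwise greet "World") behaves like shell default expansion. As a quick local illustration of the responses you should expect from the deployed service:

```shell
# Mimic the app's greeting logic: use TARGET if set, else "World".
unset TARGET
echo "Hello ${TARGET:-World}"
# prints: Hello World

# With the value set in service.yaml:
TARGET="Rust Sample v1"
echo "Hello ${TARGET:-World}"
# prints: Hello Rust Sample v1
```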
## Build and deploy this sample
@ -137,55 +137,57 @@ folder) you're ready to build and deploy the sample app.
Docker Hub, enter these commands replacing `{username}` with your
Docker Hub username:
```shell
# Build the container on your local machine
docker build -t {username}/helloworld-rust .

# Push the container to docker registry
docker push {username}/helloworld-rust
```
1. After the build has completed and the container is pushed to Docker Hub, you
can deploy the app into your cluster. Ensure that the container image value
in `service.yaml` matches the container you built in
the previous step. Apply the configuration using `kubectl`:
```shell
kubectl apply --filename service.yaml
```
1. Now that your service is created, Knative will perform the following steps:
   - Create a new immutable revision for this version of the app.
   - Network programming to create a route, ingress, service, and load balancer for your app.
   - Automatically scale your pods up and down (including to zero active pods).
1. To find the IP address for your service, enter
   `kubectl get svc knative-ingressgateway --namespace istio-system` to get the ingress IP for your
   cluster. If your cluster is new, it may take some time for the service to be assigned
   an external IP address.
```shell
kubectl get svc knative-ingressgateway --namespace istio-system

NAME                     TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)                                      AGE
knative-ingressgateway   LoadBalancer   10.23.247.74   35.203.155.229   80:32380/TCP,443:32390/TCP,32400:32400/TCP   2d
```
1. To find the URL for your service, enter:
```
kubectl get ksvc helloworld-rust --output=custom-columns=NAME:.metadata.name,DOMAIN:.status.domain
NAME                DOMAIN
helloworld-rust     helloworld-rust.default.example.com
```
1. Now you can make a request to your app and see the result. Replace
`{IP_ADDRESS}` with the address you see returned in the previous step.
```shell
curl -H "Host: helloworld-rust.default.example.com" http://{IP_ADDRESS}
Hello World!
```
## Removing the sample app deployment
@ -2,7 +2,7 @@
Learn how to deploy a simple web app that is written in Java and uses Eclipse Vert.x.
This samples uses Docker to build locally. The app reads in a `TARGET` env variable and then
prints "Hello World: \${TARGET}!". If a value for `TARGET` is not specified,
the "NOT SPECIFIED" default value is used.
Use this sample to walk you through the steps of creating and modifying the sample app, building and pushing your
@ -12,15 +12,15 @@ container image to a registry, and then deploying your app to your Knative clust
You must meet the following requirements to complete this sample:
- A version of the Knative Serving component installed and running on your Kubernetes cluster. Follow the
[Knative installation instructions](https://github.com/knative/docs/blob/master/install/README.md) if you need
to create a Knative cluster.
- The following software downloaded and installed on your local machine:
  - [Java SE 8 or later JDK](http://www.oracle.com/technetwork/java/javase/downloads/index.html).
  - [Eclipse Vert.x v3.5.4](https://vertx.io/).
  - [Docker](https://www.docker.com) for building and pushing your container image.
  - [curl](https://curl.haxx.se/) to test the sample app after deployment.
- A [Docker Hub](https://hub.docker.com/) account where you can push your container image.
**Tip**: You can clone the [Knative/docs repo](https://github.com/knative/docs) and then modify the source files.
Alternatively, learn more by manually creating the files yourself.
@ -31,72 +31,72 @@ To create and configure the source files in the root of your working directory:
1. Create the `pom.xml` file:
```xml
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
                             http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.example.vertx</groupId>
  <artifactId>helloworld</artifactId>
  <version>1.0.0-SNAPSHOT</version>

  <dependencies>
    <dependency>
      <groupId>io.vertx</groupId>
      <artifactId>vertx-core</artifactId>
      <version>${version.vertx}</version>
    </dependency>
    <dependency>
      <groupId>io.vertx</groupId>
      <artifactId>vertx-rx-java2</artifactId>
      <version>${version.vertx}</version>
    </dependency>
  </dependencies>

  <build>
    <plugins>
      <plugin>
        <artifactId>maven-compiler-plugin</artifactId>
        <version>3.8.0</version>
        <configuration>
          <source>1.8</source>
          <target>1.8</target>
        </configuration>
      </plugin>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-shade-plugin</artifactId>
        <version>3.2.0</version>
        <executions>
          <execution>
            <phase>package</phase>
            <goals>
              <goal>shade</goal>
            </goals>
            <configuration>
              <transformers>
                <transformer
                  implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
                  <manifestEntries>
                    <Main-Class>io.vertx.core.Launcher</Main-Class>
                    <Main-Verticle>com.example.helloworld.HelloWorld</Main-Verticle>
                  </manifestEntries>
                </transformer>
              </transformers>
              <artifactSet/>
            </configuration>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>

  <properties>
    <version.vertx>3.5.4</version.vertx>
  </properties>
</project>
```
1. Create the `HelloWorld.java` file in the `src/main/java/com/example/helloworld` directory. The
`[ROOT]/src/main/java/com/example/helloworld/HelloWorld.java` file creates a basic web server that listens on port `8080`.
@ -136,33 +136,33 @@ To create and configure the source files in the root of your working directory:
1. Create the `Dockerfile` file:
```docker
FROM fabric8/s2i-java:2.0
ENV JAVA_APP_DIR=/deployments
EXPOSE 8080
COPY target/helloworld-1.0.0-SNAPSHOT.jar /deployments/
```
1. Create the `service.yaml` file. You must specify your Docker Hub username in `{username}`. You can also
configure the `TARGET`, for example you can modify the `Eclipse Vert.x Sample v1` value.
```yaml
apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: helloworld-vertx
  namespace: default
spec:
  runLatest:
    configuration:
      revisionTemplate:
        spec:
          container:
            image: docker.io/{username}/helloworld-vertx
            env:
              - name: TARGET
                value: "Eclipse Vert.x Sample v1"
```
## Building and deploying the sample
@ -171,37 +171,38 @@ To build a container image, push your image to the registry, and then deploy you
1. Use Docker to build your container image and then push that image to your Docker Hub registry.
You must replace the `{username}` variables in the following commands with your Docker Hub username.
```shell
# Build the container on your local machine
docker build -t {username}/helloworld-vertx .

# Push the container to docker registry
docker push {username}/helloworld-vertx
```
1. Now that your container image is in the registry, you can deploy it to your Knative cluster by
running the `kubectl apply` command:
```shell
kubectl apply --filename service.yaml
```
Result: A service named `helloworld-vertx` is created in your cluster along with the following resources:

- A new immutable revision for the version of the app that you just deployed.
- The following networking resources are created for your app:
  - route
  - ingress
  - service
  - load balancer
- Auto scaling is enabled to allow your pods to scale up to meet traffic, and also back down to zero when there is no traffic.
## Testing the sample app
To verify that your sample app has been successfully deployed:
1. View the ingress IP address of your service by running the following
   `kubectl get` command. Note that it may take some time for the new service to be assigned
   an external IP address, especially if your cluster was newly created.
```shell
kubectl get svc knative-ingressgateway --namespace istio-system
@ -215,6 +216,7 @@ an external IP address, especially if your cluster was newly created.
```
1. Retrieve the URL for your service, by running the following `kubectl get` command:
```shell
kubectl get services.serving.knative.dev helloworld-vertx --output=custom-columns=NAME:.metadata.name,DOMAIN:.status.domain
```
@ -229,15 +231,15 @@ an external IP address, especially if your cluster was newly created.
1. Run the following `curl` command to test your deployed sample app. You must replace the
   `{IP_ADDRESS}` variable with the URL that you retrieved in the previous step.
```shell
curl -H "Host: helloworld-vertx.default.example.com" http://{IP_ADDRESS}
```
Example result:
```shell
Hello World: Eclipse Vert.x Sample v1
```
Congratulations on deploying your sample Java app to Knative!
@ -19,9 +19,10 @@ to the Login service.
2. Install [Docker](https://docs.docker.com/get-started/#prepare-your-docker-environment).
3. Acquire a domain name.
- In this example, we use `example.com`. If you don't have a domain name,
  you can modify your hosts file (on Mac or Linux) to map `example.com` to your
  cluster's ingress IP.
4. Check out the code:
```
go get -d github.com/knative/docs/serving/samples/knative-routing-go
```
@ -31,18 +32,22 @@ go get -d github.com/knative/docs/serving/samples/knative-routing-go
Build the application container and publish it to a container registry:
1. Move into the sample directory:
```shell
cd $GOPATH/src/github.com/knative/docs
```
2. Set your preferred container registry:
```shell
export REPO="gcr.io/<YOUR_PROJECT_ID>"
```
This example shows how to use Google Container Registry (GCR). You will need a Google Cloud Project and to enable the [Google Container Registry
API](https://console.cloud.google.com/apis/library/containerregistry.googleapis.com).
3. Use Docker to build your application container:
```
docker build \
--tag "${REPO}/serving/samples/knative-routing-go" \
@ -50,24 +55,28 @@ docker build \
```
4. Push your container to a container registry:
```
docker push "${REPO}/serving/samples/knative-routing-go"
```
5. Replace the image reference path with our published image path in the configuration file `serving/samples/knative-routing-go/sample.yaml`:

   - Manually replace:
     `image: github.com/knative/docs/serving/samples/knative-routing-go` with `image: <YOUR_CONTAINER_REGISTRY>/serving/samples/knative-routing-go`

   Or

   - Run this command:

     ```
     perl -pi -e "s@github.com/knative/docs@${REPO}@g" serving/samples/knative-routing-go/sample.yaml
     ```
## Deploy the Service
Deploy the Knative Serving sample:
```
kubectl apply --filename serving/samples/knative-routing-go/sample.yaml
```
@ -78,36 +87,43 @@ A shared Gateway "knative-shared-gateway" is used within Knative service mesh
for serving all incoming traffic. You can inspect it and its corresponding Kubernetes
service with:
- Check the shared Gateway:
```
kubectl get Gateway --namespace knative-serving --output yaml
```
- Check the corresponding Kubernetes service for the shared Gateway:
```
kubectl get svc knative-ingressgateway --namespace istio-system --output yaml
```
- Inspect the deployed Knative services with:
```
kubectl get ksvc
```
You should see 2 Knative services: `search-service` and `login-service`.
### Access the Services
1. Find the shared Gateway IP and export as an environment variable:
```shell
export GATEWAY_IP=`kubectl get svc knative-ingressgateway --namespace istio-system \
--output jsonpath="{.status.loadBalancer.ingress[*]['ip']}"`
```
2. Find the `Search` service route and export as an environment variable:
```shell
export SERVICE_HOST=`kubectl get route search-service --output jsonpath="{.status.domain}"`
```
3. Make a curl request to the service:
```shell
curl http://${GATEWAY_IP} --header "Host:${SERVICE_HOST}"
```
@ -115,6 +131,7 @@ curl http://${GATEWAY_IP} --header "Host:${SERVICE_HOST}"
You should see: `Search Service is called !`
4. Similarly, you can also directly access "Login" service with:
```shell
export SERVICE_HOST=`kubectl get route login-service --output jsonpath="{.status.domain}"`
```
@ -128,33 +145,35 @@ You should see: `Login Service is called !`
## Apply Custom Routing Rule
1. Apply the custom routing rules defined in `routing.yaml` file with:
```
kubectl apply --filename serving/samples/knative-routing-go/routing.yaml
```
2. The `routing.yaml` file will generate a new VirtualService `entry-route` for
   domain `example.com`. View the VirtualService:
```
kubectl get VirtualService entry-route --output yaml
```
3. Send a request to the `Search` service and the `Login` service by using
   corresponding URIs. You should get the same results as directly accessing these services.

   - Get the ingress IP:

     ```shell
     export GATEWAY_IP=`kubectl get svc knative-ingressgateway --namespace istio-system \
       --output jsonpath="{.status.loadBalancer.ingress[*]['ip']}"`
     ```

   - Send a request to the Search service:

     ```shell
     curl http://${GATEWAY_IP}/search --header "Host:example.com"
     ```

   - Send a request to the Login service:

     ```shell
     curl http://${GATEWAY_IP}/login --header "Host:example.com"
     ```
## How It Works
@ -169,10 +188,10 @@ Gateway again. The Gateway proxy checks the updated host, and forwards it to
![Object model](images/knative-routing-sample-flow.png)
## Clean Up
To clean up the sample resources:
```
kubectl delete --filename serving/samples/knative-routing-go/sample.yaml
kubectl delete --filename serving/samples/knative-routing-go/routing.yaml
@ -8,6 +8,7 @@ This sample demonstrates creating a simple RESTful service. The exposed endpoint
2. Install [Docker](https://docs.docker.com/get-started/#prepare-your-docker-environment).
3. You need to [configure outbound network access](https://github.com/knative/docs/blob/master/serving/outbound-network-access.md) because this application makes an external API request.
4. Check out the code:
```
go get -d github.com/knative/docs/serving/samples/rest-api-go
```
@ -17,18 +18,22 @@ go get -d github.com/knative/docs/serving/samples/rest-api-go
Build the application container and publish it to a container registry:
1. Move into the sample directory:
```
cd $GOPATH/src/github.com/knative/docs
```
2. Set your preferred container registry:
```
export REPO="gcr.io/<YOUR_PROJECT_ID>"
```
To run the sample, you need to have a Google Cloud Platform project, and you also need to enable the [Google Container Registry
API](https://console.cloud.google.com/apis/library/containerregistry.googleapis.com).
3. Use Docker to build your application container:
```
docker build \
--tag "${REPO}/serving/samples/rest-api-go" \
@ -36,24 +41,28 @@ docker build \
```
4. Push your container to a container registry:
```
docker push "${REPO}/serving/samples/rest-api-go"
```
5. Replace the image reference path with our published image path in the configuration file (`serving/samples/rest-api-go/sample.yaml`):

   - Manually replace:
     `image: github.com/knative/docs/serving/samples/rest-api-go` with `image: <YOUR_CONTAINER_REGISTRY>/serving/samples/rest-api-go`

   Or

   - Run this command:

     ```
     perl -pi -e "s@github.com/knative/docs@${REPO}@g" serving/samples/rest-api-go/sample.yaml
     ```
## Deploy the Configuration
Deploy the Knative Serving sample:
```
kubectl apply --filename serving/samples/rest-api-go/sample.yaml
```
@ -62,17 +71,20 @@ kubectl apply --filename serving/samples/rest-api-go/sample.yaml
Inspect the created resources with the `kubectl` commands:
- View the created Route resource:
```
kubectl get route --output yaml
```
- View the created Configuration resource:
```
kubectl get configurations --output yaml
```
- View the Revision that was created by our Configuration:
```
kubectl get revisions --output yaml
```
@ -82,55 +94,65 @@ kubectl get revisions --output yaml
To access this service via `curl`, you need to determine its ingress address.
1. To determine if your service is ready:
```
kubectl get svc knative-ingressgateway --namespace istio-system --watch
```

When the service is ready, you'll see an IP address in the `EXTERNAL-IP` field:

```
NAME                     TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)                                      AGE
knative-ingressgateway   LoadBalancer   10.23.247.74   35.203.155.229   80:32380/TCP,443:32390/TCP,32400:32400/TCP   2d
```
2. When the service is ready, export the ingress hostname and IP as environment variables:
```
export SERVICE_HOST=`kubectl get route stock-route-example --output jsonpath="{.status.domain}"`
export SERVICE_IP=`kubectl get svc knative-ingressgateway --namespace istio-system \
  --output jsonpath="{.status.loadBalancer.ingress[*].ip}"`
```

- If your cluster is running outside a cloud provider (for example on Minikube),
your services will never get an external IP address. In that case, use the istio `hostIP` and `nodePort` as the service IP:
```
export SERVICE_IP=$(kubectl get po --selector knative=ingressgateway --namespace istio-system \
  --output 'jsonpath={.items[0].status.hostIP}'):$(kubectl get svc knative-ingressgateway --namespace istio-system \
  --output 'jsonpath={.spec.ports[?(@.port==80)].nodePort}')
```
3. Now use `curl` to make a request to the service:
   - Make a request to the index endpoint:

     ```
     curl --header "Host:$SERVICE_HOST" http://${SERVICE_IP}
     ```

     Response body: `Welcome to the stock app!`

   - Make a request to the `/stock` endpoint:

     ```
     curl --header "Host:$SERVICE_HOST" http://${SERVICE_IP}/stock
     ```

     Response body: `stock ticker not found!, require /stock/{ticker}`

   - Make a request to the `/stock` endpoint with a `ticker` parameter:

     ```
     curl --header "Host:$SERVICE_HOST" http://${SERVICE_IP}/stock/<ticker>
     ```

     Response body: `stock price for ticker <ticker> is <price>`
## Clean Up
To clean up the sample service:
```
kubectl delete --filename serving/samples/rest-api-go/sample.yaml
```
@ -10,10 +10,10 @@ components of Knative to orchestrate an end-to-end deployment.
You need:
- A Kubernetes cluster with Knative installed. Follow the
[installation instructions](https://github.com/knative/docs/blob/master/install/README.md) if you need
to create one.
- Go installed and configured. This is optional, and only required if you want to run the sample app
locally.
## Configuring Knative
@ -74,15 +74,14 @@ available, but these are the key steps:
1. Create a new `Service Account` manifest which is used to link the build process to the secret.
Save this file as `service-account.yaml`:
   ```yaml
   apiVersion: v1
   kind: ServiceAccount
   metadata:
     name: build-bot
   secrets:
     - name: basic-user-pass
   ```
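   For context, the `basic-user-pass` secret that this service account references is created in an earlier step of the walkthrough. A minimal sketch of such a manifest (placeholder credentials; adjust the registry annotation to your setup):

   ```yaml
   apiVersion: v1
   kind: Secret
   metadata:
     name: basic-user-pass
     annotations:
       build.knative.dev/docker-0: https://index.docker.io/v1/
   type: kubernetes.io/basic-auth
   stringData:
     username: {DOCKER_USERNAME}
     password: {DOCKER_PASSWORD}
   ```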
1. After you have created the manifest files, apply them to your cluster with `kubectl`:
@ -93,7 +92,6 @@ available, but these are the key steps:
serviceaccount "build-bot" created
```
## Deploying the sample
Now that you've configured your cluster accordingly, you are ready to deploy the
@ -130,16 +128,16 @@ container for the application.
template:
name: kaniko
arguments:
- name: IMAGE
  value: docker.io/{DOCKER_USERNAME}/app-from-source:latest
revisionTemplate:
spec:
container:
image: docker.io/{DOCKER_USERNAME}/app-from-source:latest
imagePullPolicy: Always
env:
- name: SIMPLE_MSG
  value: "Hello from the sample app!"
```
1. Apply this manifest using `kubectl`, and watch the results:
@ -196,38 +194,39 @@ container for the application.
```
1. Now that your service is created, Knative will perform the following steps:
   - Fetch the revision specified from GitHub and build it into a container
   - Push the container to Docker Hub
   - Create a new immutable revision for this version of the app.
   - Network programming to create a route, ingress, service, and load balancer for your app.
   - Automatically scale your pods up and down (including to zero active pods).
1. To get the ingress IP for your cluster, use the following command. If your cluster is new,
it can take some time for the service to get an external IP address:
```shell
$ kubectl get svc knative-ingressgateway --namespace istio-system
NAME                     TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)                                      AGE
knative-ingressgateway   LoadBalancer   10.23.247.74   35.203.155.229   80:32380/TCP,443:32390/TCP,32400:32400/TCP   2d
```
1. To find the URL for your service, type:
```shell
$ kubectl get ksvc app-from-source --output=custom-columns=NAME:.metadata.name,DOMAIN:.status.domain
NAME              DOMAIN
app-from-source   app-from-source.default.example.com
```
1. Now you can make a request to your app to see the result. Replace
`{IP_ADDRESS}` with the address that you got in the previous step:
```shell
curl -H "Host: app-from-source.default.example.com" http://{IP_ADDRESS}
Hello from the sample app!
```
## Removing the sample app deployment
@ -10,14 +10,18 @@ using the default installation.
## Prerequisites
1. A Kubernetes cluster with [Knative Serving](https://github.com/knative/docs/blob/master/install/README.md)
   installed.
2. Check if Knative monitoring components are installed:
```
kubectl get pods --namespace knative-monitoring
```
   - If pods aren't found, install [Knative monitoring component](../../installing-logging-metrics-traces.md).
3. Install [Docker](https://docs.docker.com/get-started/#prepare-your-docker-environment).
4. Check out the code:
```
go get -d github.com/knative/docs/serving/samples/telemetry-go
```
@ -27,19 +31,23 @@ go get -d github.com/knative/docs/serving/samples/telemetry-go
Build the application container and publish it to a container registry:
1. Move into the sample directory:
```
cd $GOPATH/src/github.com/knative/docs
```
2. Set your preferred container registry:
```
export REPO="gcr.io/<YOUR_PROJECT_ID>"
```
This example shows how to use Google Container Registry (GCR). You will need
a Google Cloud Project and to enable the [Google Container Registry
API](https://console.cloud.google.com/apis/library/containerregistry.googleapis.com).
3. Use Docker to build your application container:
```
docker build \
--tag "${REPO}/serving/samples/telemetry-go" \
@ -47,26 +55,32 @@ docker build \
```
4. Push your container to a container registry:
```
docker push "${REPO}/serving/samples/telemetry-go"
```
5. Replace the image reference path with our published image path in the
configuration file (`serving/samples/telemetry-go/sample.yaml`):
* Manually replace:
`image: github.com/knative/docs/serving/samples/telemetry-go` with
`image: <YOUR_CONTAINER_REGISTRY>/serving/samples/telemetry-go`
5. Replace the image reference path with your published image path in the
configuration file (`serving/samples/telemetry-go/sample.yaml`):
Or
- Manually replace:
`image: github.com/knative/docs/serving/samples/telemetry-go` with
`image: <YOUR_CONTAINER_REGISTRY>/serving/samples/telemetry-go`
* Run this command:
```
perl -pi -e "s@github.com/knative/docs@${REPO}@g" serving/samples/telemetry-go/sample.yaml
```
Or
- Run this command:
```
perl -pi -e "s@github.com/knative/docs@${REPO}@g" serving/samples/telemetry-go/sample.yaml
```
## Deploy the Service
Deploy this application to Knative Serving:
```
kubectl apply --filename serving/samples/telemetry-go/
```
@ -75,81 +89,97 @@ kubectl apply --filename serving/samples/telemetry-go/
Inspect the created resources with the `kubectl` commands:
* View the created Route resource:
```
kubectl get route --output yaml
```
- View the created Route resource:
* View the created Configuration resource:
```
kubectl get configurations --output yaml
```
```
kubectl get route --output yaml
```
* View the Revision that was created by the Configuration:
```
kubectl get revisions --output yaml
```
- View the created Configuration resource:
```
kubectl get configurations --output yaml
```
- View the Revision that was created by the Configuration:
```
kubectl get revisions --output yaml
```
## Access the Service
To access this service via `curl`, you need to determine its ingress address.
1. To determine if your service is ready:
Check the status of your Knative gateway:
```
kubectl get svc knative-ingressgateway --namespace istio-system --watch
```
Check the status of your Knative gateway:
When the service is ready, you'll see an IP address in the `EXTERNAL-IP` field:
```
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
knative-ingressgateway LoadBalancer 10.23.247.74 35.203.155.229 80:32380/TCP,443:32390/TCP,32400:32400/TCP 2d
```
Press CTRL+C to end the watch.
```
kubectl get svc knative-ingressgateway --namespace istio-system --watch
```
Check the status of your route:
```
kubectl get route --output yaml
```
When the route is ready, you'll see the following fields reported as:
```YAML
status:
conditions:
...
status: "True"
type: Ready
domain: telemetrysample-route.default.example.com
```
When the service is ready, you'll see an IP address in the `EXTERNAL-IP` field:
```
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
knative-ingressgateway LoadBalancer 10.23.247.74 35.203.155.229 80:32380/TCP,443:32390/TCP,32400:32400/TCP 2d
```
Press CTRL+C to end the watch.
Check the status of your route:
```
kubectl get route --output yaml
```
When the route is ready, you'll see the following fields reported as:
```YAML
status:
conditions:
...
status: "True"
type: Ready
domain: telemetrysample-route.default.example.com
```
2. Export the ingress hostname and IP as environment
variables:
variables:
```
export SERVICE_HOST=`kubectl get route telemetrysample-route --output jsonpath="{.status.domain}"`
export SERVICE_IP=`kubectl get svc knative-ingressgateway --namespace istio-system --output jsonpath="{.status.loadBalancer.ingress[*].ip}"`
```
3. Make a request to the service to see the `Hello World!` message:
```
curl --header "Host:$SERVICE_HOST" http://${SERVICE_IP}
```
4. Make a request to the `/log` endpoint to generate logs to the `stdout` file
and generate files under `/var/log` in both `JSON` and plain text formats:
and generate files under `/var/log` in both `JSON` and plain text formats:
```
curl --header "Host:$SERVICE_HOST" http://${SERVICE_IP}/log
```
## Access Logs
You can access the logs from the Kibana UI; see [Logs](../../accessing-logs.md)
for more information.
## Access per Request Traces
You can access per-request traces from the Zipkin UI; see [Traces](../../accessing-traces.md)
for more information.
## Accessing Custom Metrics
You can see published metrics using the Prometheus UI. To access the UI, forward
the Prometheus server to your machine:
```
kubectl port-forward $(kubectl get pods --selector=app=prometheus,prometheus=test --output=jsonpath="{.items[0].metadata.name}") 9090
```
@ -159,6 +189,7 @@ Then browse to http://localhost:9090.
## Clean up
To clean up the sample service:
```
kubectl delete --filename serving/samples/telemetry-go/
```

View File

@ -7,11 +7,12 @@ generates its thumbnail image using the `ffmpeg` framework.
## Before you begin
* [Install Knative Serving](../../../install/README.md)
- [Install Knative Serving](../../../install/README.md)
If you want to test and run the app locally:
* [Install Go](https://golang.org/doc/install)
* [Download `ffmpeg`](https://www.ffmpeg.org/download.html)
- [Install Go](https://golang.org/doc/install)
- [Download `ffmpeg`](https://www.ffmpeg.org/download.html)
## Sample code
@ -114,7 +115,6 @@ kubectl apply --filename https://raw.githubusercontent.com/knative/build-templat
kubectl apply --filename sample.yaml
```
Now, if you look at the `status` of the revision, you will see that a build is in progress:
```shell
@ -134,7 +134,6 @@ items:
Once `BuildComplete` has a `status: "True"`, the revision will be deployed.
## Using the app
To confirm that the app deployed, you can check for the Knative Serving service using `kubectl`.

View File

@ -12,61 +12,74 @@ to illustrate applying a revision, then using that revision for manual traffic s
This section describes how to create a new revision by deploying a new configuration.
1. Replace the image reference path with your published image path in the configuration file (`serving/samples/traffic-splitting/updated_configuration.yaml`):
* Manually replace:
- Manually replace:
`image: github.com/knative/docs/serving/samples/rest-api-go` with `image: <YOUR_CONTAINER_REGISTRY>/serving/samples/rest-api-go`
Or
Or
* Run this command:
```
perl -pi -e "s@github.com/knative/docs@${REPO}@g" serving/samples/rest-api-go/updated_configuration.yaml
```
- Run this command:
```
perl -pi -e "s@github.com/knative/docs@${REPO}@g" serving/samples/rest-api-go/updated_configuration.yaml
```
2. Deploy the new configuration to update the `RESOURCE` environment variable
from `stock` to `share`:
from `stock` to `share`:
```
kubectl apply --filename serving/samples/traffic-splitting/updated_configuration.yaml
```
3. Once deployed, traffic will shift to the new revision automatically. Verify the deployment by checking the route status:
```
kubectl get route --output yaml
```
4. When the new route is ready, you can access the new endpoints:
The hostname and IP address can be found in the same manner as the [Creating a RESTful Service](../rest-api-go) sample:
```
export SERVICE_HOST=`kubectl get route stock-route-example --output jsonpath="{.status.domain}"`
export SERVICE_IP=`kubectl get svc knative-ingressgateway --namespace istio-system \
--output jsonpath="{.status.loadBalancer.ingress[*].ip}"`
```
The hostname and IP address can be found in the same manner as the [Creating a RESTful Service](../rest-api-go) sample:
* Make a request to the index endpoint:
```
curl --header "Host:$SERVICE_HOST" http://${SERVICE_IP}
```
Response body: `Welcome to the share app!`
```
export SERVICE_HOST=`kubectl get route stock-route-example --output jsonpath="{.status.domain}"`
export SERVICE_IP=`kubectl get svc knative-ingressgateway --namespace istio-system \
--output jsonpath="{.status.loadBalancer.ingress[*].ip}"`
```
* Make a request to the `/share` endpoint:
```
curl --header "Host:$SERVICE_HOST" http://${SERVICE_IP}/share
```
Response body: `share ticker not found!, require /share/{ticker}`
- Make a request to the index endpoint:
* Make a request to the `/share` endpoint with a `ticker` parameter:
```
curl --header "Host:$SERVICE_HOST" http://${SERVICE_IP}/share/<ticker>
```
Response body: `share price for ticker <ticker> is <price>`
```
curl --header "Host:$SERVICE_HOST" http://${SERVICE_IP}
```
Response body: `Welcome to the share app!`
- Make a request to the `/share` endpoint:
```
curl --header "Host:$SERVICE_HOST" http://${SERVICE_IP}/share
```
Response body: `share ticker not found!, require /share/{ticker}`
- Make a request to the `/share` endpoint with a `ticker` parameter:
```
curl --header "Host:$SERVICE_HOST" http://${SERVICE_IP}/share/<ticker>
```
Response body: `share price for ticker <ticker> is <price>`
## Manual Traffic Splitting
This section describes how to manually split traffic to specific revisions.
1. Get your revision names via:
```
kubectl get revisions
```
```
NAME AGE
stock-configuration-example-00001 11m
@ -74,6 +87,7 @@ stock-configuration-example-00002 4m
```
2. Update the `traffic` list in `serving/samples/rest-api-go/sample.yaml` as:
```yaml
traffic:
- revisionName: <YOUR_FIRST_REVISION_NAME>
@ -83,14 +97,17 @@ traffic:
```
3. Deploy your traffic revision:
```
kubectl apply --filename serving/samples/rest-api-go/sample.yaml
```
4. Verify the deployment by checking the route status:
```
kubectl get route --output yaml
```
Once updated, you can make `curl` requests to the API using either the `stock` or
`share` endpoint.
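As a quick sanity check on a traffic list like the one in step 2: Knative requires the `percent` values across the targets to total 100 before it will accept the Route. A minimal sketch, using hypothetical revision names and percentages:

```shell
# Write a hypothetical traffic stanza and verify the percents sum to 100.
cat > /tmp/traffic.yaml <<'EOF'
traffic:
- revisionName: stock-configuration-example-00001
  percent: 50
- revisionName: stock-configuration-example-00002
  percent: 50
EOF
awk '/percent:/ { sum += $2 } END { print sum }' /tmp/traffic.yaml
# 100
```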

View File

@ -41,7 +41,7 @@ collecting log files under `/var/log`. An
is in process to get rid of the sidecar. The steps to configure are:
1. Replace `logging.fluentd-sidecar-output-config` flag in
[config-observability](https://github.com/knative/serving/blob/master/config/config-observability.yaml) with the
[config-observability](https://github.com/knative/serving/blob/master/config/config-observability.yaml) with the
desired output configuration. **NOTE**: The Fluentd DaemonSet is in the
`monitoring` namespace while the Fluentd sidecar is in the same namespace as
the app. There may be small differences between the configuration for DaemonSet

View File

@ -43,29 +43,29 @@ You can also apply an updated domain configuration:
replacing the `example.org` and `example.com` values with the new
domain you want to use:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: config-domain
namespace: knative-serving
data:
# These are example settings of domain.
# example.org will be used for routes having app=prod.
example.org: |
selector:
app: prod
# Default value for domain, for routes that do not have app=prod labels.
# Although it will match all routes, it is the least-specific rule so it
# will only be used if no other domain matches.
example.com: ""
```
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: config-domain
namespace: knative-serving
data:
# These are example settings of domain.
# example.org will be used for routes having app=prod.
example.org: |
selector:
app: prod
# Default value for domain, for routes that do not have app=prod labels.
# Although it will match all routes, it is the least-specific rule so it
# will only be used if no other domain matches.
example.com: ""
```
1. Apply updated domain configuration to your cluster:
```shell
kubectl apply --filename config-domain.yaml
```
```shell
kubectl apply --filename config-domain.yaml
```
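The selection behavior described by the config map above can be sketched in shell: routes carrying the `app=prod` label resolve to `example.org`, and everything else falls back to the least-specific `example.com` entry. The label values here are hypothetical:

```shell
# Sketch of the domain-selection behavior described by the config map.
domain_for() {
  case "$1" in
    app=prod) echo "example.org" ;;   # matches the app=prod selector
    *)        echo "example.com" ;;   # default, least-specific rule
  esac
}
domain_for "app=prod"     # example.org
domain_for "app=staging"  # example.com
```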
## Deploy an application
@ -73,16 +73,18 @@ You can also apply an updated domain configuration:
> the configuration map and automatically update the host name for all of the deployed
> services and routes.
Deploy an app (for example, [`helloworld-go`](./samples/helloworld-go/README.md)), to
your cluster as normal. You can check the customized domain in Knative Route "helloworld-go" with
your cluster as normal. You can check the customized domain in Knative Route "helloworld-go" with
the following command:
```shell
kubectl get route helloworld-go --output jsonpath="{.status.domain}"
```
You should see the full customized domain: `helloworld-go.default.mydomain.com`.
And you can check the IP address of your Knative gateway by running:
```shell
kubectl get svc knative-ingressgateway --namespace istio-system --output jsonpath="{.status.loadBalancer.ingress[*]['ip']}"
```
@ -103,6 +105,7 @@ export DOMAIN_NAME=`kubectl get route helloworld-go --output jsonpath="{.status.
echo -e "$GATEWAY_IP\t$DOMAIN_NAME" | sudo tee -a /etc/hosts
```
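The `/etc/hosts` entry assembled above is just a tab-separated `IP<TAB>hostname` pair. With hypothetical values it looks like this (`printf` sidesteps the portability quirks of `echo -e`):

```shell
# Hypothetical values; on a real cluster these come from the kubectl
# jsonpath queries shown above.
GATEWAY_IP="35.203.155.229"
DOMAIN_NAME="helloworld-go.default.mydomain.com"
printf '%s\t%s\n' "$GATEWAY_IP" "$DOMAIN_NAME"
```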
You can now access your domain from the browser on your machine and do some quick checks.
## Publish your Domain
@ -119,27 +122,26 @@ so that the gateway IP does not change each time your cluster is restarted.
To publish your domain, you need to update your DNS provider to point to the
IP address for your service ingress.
* Create a [wildcard record](https://support.google.com/domains/answer/4633759)
- Create a [wildcard record](https://support.google.com/domains/answer/4633759)
for the namespace and custom domain to the ingress IP address, which enables
hostnames for multiple services in the same namespace to work without creating
additional DNS entries.
```dns
*.default.mydomain.com 59 IN A 35.237.28.44
```
```dns
*.default.mydomain.com 59 IN A 35.237.28.44
```
* Create an A record to point from the fully qualified domain name to the IP
- Create an A record to point from the fully qualified domain name to the IP
address of your Knative gateway. This step needs to be done for each Knative Service or
Route created.
```dns
helloworld-go.default.mydomain.com 59 IN A 35.237.28.44
```
```dns
helloworld-go.default.mydomain.com 59 IN A 35.237.28.44
```
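The difference between the two record types above can be approximated with shell glob matching: the single wildcard record covers every Route hostname in the `default` namespace, while a plain A record covers exactly one fully qualified name. The hostnames below are hypothetical, and glob matching is only a rough stand-in for DNS wildcard semantics:

```shell
# Which hostnames would the wildcard record *.default.mydomain.com cover?
covered() {
  case "$1" in
    *.default.mydomain.com) echo "covered by wildcard" ;;
    *)                      echo "needs its own record" ;;
  esac
}
covered helloworld-go.default.mydomain.com   # covered by wildcard
covered helloworld-go.other.mydomain.com     # needs its own record
```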
If you are using Google Cloud DNS, you can find step-by-step instructions
in the [Cloud DNS quickstart](https://cloud.google.com/dns/quickstart).
Once the domain update has propagated, you can access your app using
the fully qualified domain name of the deployed route, for example
`http://helloworld-go.default.mydomain.com`

View File

@ -52,22 +52,22 @@ spec:
selector:
knative: ingressgateway
servers:
- hosts:
- '*'
port:
name: http
number: 80
protocol: HTTP
- hosts:
- '*'
port:
name: https
number: 443
protocol: HTTPS
tls:
mode: SIMPLE
privateKey: /etc/istio/ingressgateway-certs/tls.key
serverCertificate: /etc/istio/ingressgateway-certs/tls.crt
- hosts:
- "*"
port:
name: http
number: 80
protocol: HTTP
- hosts:
- "*"
port:
name: https
number: 443
protocol: HTTPS
tls:
mode: SIMPLE
privateKey: /etc/istio/ingressgateway-certs/tls.key
serverCertificate: /etc/istio/ingressgateway-certs/tls.crt
```
Once the change has been made, you can now use the HTTPS protocol to access
@ -88,9 +88,9 @@ Encrypt][le] to obtain a certificate manually.
1. Use certbot to request a certificate, using DNS validation. The certbot tool will walk
you through validating your domain ownership by creating TXT records in your domain.
```shell
./certbot-auto certonly --manual --preferred-challenges dns -d '*.default.yourdomain.com'
```
```shell
./certbot-auto certonly --manual --preferred-challenges dns -d '*.default.yourdomain.com'
```
1. When certbot is complete, you will have two output files, `privkey.pem` and `fullchain.pem`. These files
map to the `cert.pk` and `cert.pem` files used above.
@ -107,6 +107,7 @@ To install cert-manager into your cluster, use kubectl to apply the cert-manager
```
kubectl apply --filename https://raw.githubusercontent.com/jetstack/cert-manager/release-0.5/contrib/manifests/cert-manager/with-rbac.yaml
```
or see the [cert-manager docs](https://cert-manager.readthedocs.io/en/latest/getting-started/) for more ways to install and customize.
### Configure cert-manager for your DNS provider
@ -119,8 +120,7 @@ is only supported by a [small number of DNS providers through cert-manager](http
Instructions for configuring cert-manager are provided for the following DNS hosts:
* [Google Cloud DNS](using-cert-manager-on-gcp.md)
- [Google Cloud DNS](using-cert-manager-on-gcp.md)
---

View File

@ -12,6 +12,7 @@ to their zone to prove ownership. Other challenge types are not currently suppor
Knative.
## Creating a Cloud DNS service account
To add the TXT record, configure Knative with a service account
that can be used by cert-manager to create and update the DNS record.
@ -105,24 +106,24 @@ To check if your ClusterIssuer is valid, enter:
kubectl get clusterissuer --namespace cert-manager letsencrypt-issuer --output yaml
```
Then confirm that its conditions have `Ready=True`. For example:
Then confirm that its conditions have `Ready=True`. For example:
```yaml
status:
acme:
uri: https://acme-v02.api.letsencrypt.org/acme/acct/40759665
conditions:
- lastTransitionTime: 2018-08-23T01:44:54Z
message: The ACME account was registered with the ACME server
reason: ACMEAccountRegistered
status: "True"
type: Ready
- lastTransitionTime: 2018-08-23T01:44:54Z
message: The ACME account was registered with the ACME server
reason: ACMEAccountRegistered
status: "True"
type: Ready
```
### Specifying the certificate
Next, configure which certificate issuer to use
and which secret you will publish the certificate into. Use the Secret `istio-ingressgateway-certs`.
and which secret you will publish the certificate into. Use the Secret `istio-ingressgateway-certs`.
The following steps will overwrite this Secret if it already exists.
```shell
@ -173,19 +174,21 @@ To check that your certificate setting is valid, enter:
kubectl get certificate --namespace istio-system my-certificate --output yaml
```
Verify that its `Status.Conditions` have `Ready=True`. For example:
Verify that its `Status.Conditions` have `Ready=True`. For example:
```yaml
status:
acme:
order:
url: https://acme-v02.api.letsencrypt.org/acme/order/40759665/45358362
conditions:
- lastTransitionTime: 2018-08-23T02:28:44Z
message: Certificate issued successfully
reason: CertIssued
status: "True"
type: Ready
- lastTransitionTime: 2018-08-23T02:28:44Z
message: Certificate issued successfully
reason: CertIssued
status: "True"
type: Ready
```
A condition with `Ready=False` indicates a failure to obtain the certificate; such a
condition usually includes an error message indicating the reason for the failure.
@ -230,4 +233,3 @@ EOF
Now you can access your services via HTTPS; cert-manager will keep your certificates
up-to-date, replacing them before the certificate expires.

View File

@ -18,11 +18,14 @@ of publishing the Knative domain.
1. [Knative Serving](https://github.com/knative/docs/blob/master/install/README.md) installed on your cluster.
1. A public domain that will be used in Knative.
1. Knative configured to use your custom domain.
```shell
kubectl edit cm config-domain --namespace knative-serving
```
This command opens your default text editor and allows you to edit the config
map.
```
apiVersion: v1
data:
@ -30,9 +33,11 @@ data:
kind: ConfigMap
[...]
```
Edit the file to replace `example.com` with the domain you'd like to use and
save your changes. In this example, we use the domain `external-dns-test.my-org.do`
for all routes:
for all routes:
```
apiVersion: v1
data:
Create a DNS zone to contain the managed DNS records.
Assume your custom domain is `external-dns-test.my-org.do`.
Use the following command to create a DNS zone with [Google Cloud DNS](https://cloud.google.com/dns/):
```shell
gcloud dns managed-zones create "external-dns-zone" \
--dns-name "external-dns-test.my-org.do." \
--description "Automatically managed zone by kubernetes.io/external-dns"
```
Make a note of the nameservers that were assigned to your new zone.
```shell
gcloud dns record-sets list \
--zone "external-dns-zone" \
--name "external-dns-test.my-org.do." \
--type NS
```
You should see output similar to the following:
```
NAME TYPE TTL DATA
external-dns-test.my-org.do. NS 21600 ns-cloud-e1.googledomains.com.,ns-cloud-e2.googledomains.com.,ns-cloud-e3.googledomains.com.,ns-cloud-e4.googledomains.com.
```
In this case, the DNS nameservers are `ns-cloud-{e1-e4}.googledomains.com`.
Yours could differ slightly, for example {a1-a4} or {b1-b4}.
@ -88,6 +99,7 @@ the parent zone so that this zone can be found from the parent.
Assuming the parent zone is `my-org-do` and the parent domain is `my-org.do`,
and the parent zone is also hosted at Google Cloud DNS, you can follow these
steps to add the NS records of this zone into the parent zone:
```shell
gcloud dns record-sets transaction start --zone "my-org-do"
gcloud dns record-sets transaction add ns-cloud-e{1..4}.googledomains.com. \
@ -98,14 +110,17 @@ gcloud dns record-sets transaction execute --zone "my-org-do"
### Deploy ExternalDNS
Use the following command to apply the [manifest](https://github.com/kubernetes-incubator/external-dns/blob/master/docs/tutorials/gke.md#manifest-for-clusters-without-rbac-enabled) to install ExternalDNS:
```shell
cat <<EOF | kubectl apply --filename -
<the-content-of-manifest-with-custom-domain-filter>
EOF
```
Note that you need to set the argument `domain-filter` to your custom domain.
You can verify that ExternalDNS is installed by running:
```shell
kubectl get deployment external-dns
```
@ -115,12 +130,15 @@ kubectl get deployment external-dns
In order to publish the Knative Gateway service, the annotation
`external-dns.alpha.kubernetes.io/hostname: '*.external-dns-test.my-org.do'`
needs to be added to the Knative gateway service:
```shell
kubectl edit svc knative-ingressgateway --namespace istio-system
```
This command opens your default text editor and allows you to add the
annotation to the `knative-ingressgateway` service. After you've added your
annotation, your file may look similar to this:
```
apiVersion: v1
kind: Service
@ -138,6 +156,7 @@ service was created.
```shell
gcloud dns record-sets list --zone "external-dns-zone" --name "*.external-dns-test.my-org.do."
```
You should see output similar to:
```
@ -150,12 +169,16 @@ NAME TYPE TTL DATA
You can check if the domain has been published to the Internet by entering
the following command:
```shell
host test.external-dns-test.my-org.do
```
You should see the following result after the domain is published:
```
test.external-dns-test.my-org.do has address 35.231.248.30
```
> Note: The process of publishing the domain to the Internet can take several
minutes.
> minutes.

View File

@ -2,9 +2,8 @@
This directory contains tests and testing docs.
* [Unit tests](#running-unit-tests) currently reside in the codebase alongside the code they test
* [End-to-end tests](#running-end-to-end-tests)
- [Unit tests](#running-unit-tests) currently reside in the codebase alongside the code they test
- [End-to-end tests](#running-end-to-end-tests)
## Running unit tests