Merge branch 'master' of https://github.com/knative/docs into message-in-a-bottle

This commit is contained in:
Evan Anderson 2018-08-31 11:03:18 -07:00
commit a1434f7560
111 changed files with 4373 additions and 368 deletions

8
Gopkg.lock generated
View File

@ -70,6 +70,14 @@
packages = ["pkg/event"]
revision = "2b0383b8e4d67ffac446b17a7922bf7e5d9f5362"
[[projects]]
branch = "master"
digest = "1:d6415e6b744ec877c21fe734067636b9ee149af77276b08a3d33dd8698abf947"
name = "github.com/knative/test-infra"
packages = ["."]
pruneopts = "T"
revision = "4a4a682ee1fd31f33e450406393c3553b9ec5c2a"
[[projects]]
name = "github.com/matttproud/golang_protobuf_extensions"
packages = ["pbutil"]

View File

@ -1,6 +1,10 @@
# Refer to https://github.com/golang/dep/blob/master/docs/Gopkg.toml.md
# for detailed Gopkg.toml documentation.
required = [
"github.com/knative/test-infra",
]
ignored = [
"github.com/knative/docs/serving/samples/grpc-ping-go*",
]
@ -9,3 +13,8 @@ ignored = [
go-tests = true
unused-packages = true
non-go = true
[[prune.project]]
name = "github.com/knative/test-infra"
unused-packages = false
non-go = false

View File

@ -17,6 +17,7 @@ A build runs until all `steps` have completed or until a failure occurs.
* [Source](#source)
* [Service Account](#service-account)
* [Volumes](#volumes)
* [Timeout](#timeout)
* [Examples](#examples)
---
@ -47,6 +48,7 @@ following fields:
authentication information.
* [`volumes`](#volumes) - Specifies one or more volumes that you want to make
available to your build.
* [`timeout`](#timeout) - Specifies the timeout after which the build fails.
[kubernetes-overview]: https://kubernetes.io/docs/concepts/overview/working-with-objects/kubernetes-objects/#required-fields
@ -156,7 +158,7 @@ complement the volumes that are implicitly
For example, use volumes to accomplish one of the following common tasks:
* [Mount a Kubernetes secrets(./auth.md).
* [Mount a Kubernetes secret](./auth.md).
* Create an `emptyDir` volume to act as a cache for use across multiple build
steps. Consider using a persistent volume for inter-build caching.
@ -164,6 +166,12 @@ For example, use volumes to accomplish one of the following common tasks:
* Mount a host's Docker socket to use a `Dockerfile` for container image
builds.
#### Timeout
Optional. Specifies the timeout for the build, including the time required to allocate resources and execute the build.
* Defaults to 10 minutes.
* Refer to [Go's ParseDuration documentation](https://golang.org/pkg/time/#ParseDuration) for the expected format.
### Examples
@ -179,6 +187,7 @@ additional code samples, including working copies of the following snippets:
* [Mounting extra volumes](#using-an-extra-volume)
* [Pushing an image](#using-steps-to-push-images)
* [Authenticating with `ServiceAccount`](#using-a-serviceaccount)
* [Timeout](#using-timeout)
#### Using `git`
@ -331,6 +340,22 @@ Note: For a working copy of this `ServiceAccount` example, see the
[build/test/git-ssh](https://github.com/knative/build/tree/master/test/git-ssh)
code sample.
#### Using `timeout`
Specifying `timeout` for your `build`:
```yaml
spec:
  timeout: 20m
  source:
    git:
      url: https://github.com/knative/build.git
      revision: master
  steps:
  - image: ubuntu
    args: ["cat", "README.md"]
```
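You can confirm that the timeout was applied by inspecting the build's status once it runs; a quick sketch (the build name `example-build` is hypothetical):
```shell
# Print the status conditions of a build; a timed-out build surfaces a
# failure condition here. Replace "example-build" with your build's name.
kubectl get build example-build -o jsonpath='{.status.conditions}'
```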
---
Except as otherwise noted, the content of this page is licensed under the

View File

@ -92,34 +92,13 @@ A PR can be merged only after the following criteria are met:
This project uses
[Prow](https://github.com/kubernetes/test-infra/tree/master/prow) to
automatically run tests for every PR. PRs with failing tests might not b
e merged. If necessary, you can rerun the tests by simply adding the comment
automatically run tests for every PR. PRs with failing tests might not be
merged. If necessary, you can rerun the tests by simply adding the comment
`/retest` to your PR.
Prow has several other features that make PR management easier, like running the
go linter or assigning labels. For a full list of commands understood by Prow,
see the [command help
page](https://prow-internal.gcpnode.com/command-help?repo=knative%2Fknative).
### Viewing test logs
The Prow instance is internal to Google, which means that only Google
employees can access the "Details" link of the test job (provided by
Prow in the PR thread). However, if you're a Knative team member outside
Google, and you are a member of the
[knative-dev@](https://groups.google.com/forum/#!forum/knative-dev)
Google group, you can see the test logs by following these instructions:
1. Wait for Prow to finish the test execution. Note the PR number.
2. Open the URL http://gcsweb.k8s.io/gcs/knative-prow/pr-logs/pull/knative_serving/###/pull-knative-serving-@@@-tests/
where ### is the PR number and @@@ is the test type (_build_, _unit_ or _integration_).
3. You'll see one or more numbered directories. The highest number is the latest
test execution (called "build" by Prow).
4. The raw test log is the text file named `build-log.txt` inside each numbered
directory.
see the [command help page](https://prow.knative.dev/command-help).
---

View File

@ -46,7 +46,7 @@ table describes:
<td>Member</td>
<td>Regular active contributor in the community</td>
<td>
<p>Sponsored by 2 reviewers</p>
<p>Sponsored by two members</p>
<p>Has made multiple contributions to the project</p>
</td>
<td>

View File

@ -12,6 +12,9 @@ video recording or in another public space. Please be courteous to others.
from these commands and we are a global project - please be kind.
Note: `@all` is only to be used by admins.
You can join the [Knative Slack](https://slack.knative.dev) instance.
## Code of Conduct
The Knative [Code of Conduct](./CODE-OF-CONDUCT.md) applies throughout the
project, and includes all communication mediums.

View File

@ -23,8 +23,10 @@ The current working groups are:
* [API Core](#api-core)
* [Build](#build)
* [Events](#events)
* [Networking](#networking)
* [Observability](#observability)
* [Productivity](#productivity)
* [Scaling](#scaling)
* [Serving](#serving)
<!-- TODO add charters for each group -->
## API Core
@ -51,8 +53,8 @@ Slack Channel | [#api](https://knative.slack.com)
Artifact | Link
-------------------------- | ----
Forum | [knative-dev@](https://groups.google.com/forum/#!forum/knative-dev)
Community Meeting VC | [build-crd](https://hangouts.google.com/hangouts/_/google.com/build-crd)
Community Meeting Calendar | Wednesdays 10:00a-10:30a PST <br>[Calendar Invitation](https://calendar.google.com/event?action=TEMPLATE&tmeid=MTBkb3MwYnVrbDd0djE0a2kzcmpmbjZndm9fMjAxODA3MTFUMTcwMDAwWiBtYXR0bW9vckBnb29nbGUuY29t&tmsrc=mattmoor%40google.com)
Community Meeting VC | [meet.google.com/hau-nwak-tgm](https://meet.google.com/hau-nwak-tgm) <br>Or dial in:<br>(US) +1 219-778-6103 PIN: 573 000#
Community Meeting Calendar | Wednesdays 10:00a-10:30a PST <br>[Calendar Invitation](https://calendar.google.com/event?action=TEMPLATE&tmeid=MTBkb3MwYnVrbDd0djE0a2kzcmpmbjZndm9fMjAxODA4MTVUMTcwMDAwWiBqYXNvbmhhbGxAZ29vZ2xlLmNvbQ&tmsrc=jasonhall%40google.com&scp=ALL)
Meeting Notes | [Notes](https://docs.google.com/document/d/1e7gMVFlJfkFdTcaWj2qETeRD9kSBG2Vh8mASPmQMYC0/edit)
Document Folder | [Folder](https://drive.google.com/corp/drive/folders/1ov16HvPam-v_FXAGEaUdHok6_hUAoIoe)
Slack Channel | [#build-crd](https://knative.slack.com)
@ -79,6 +81,42 @@ Slack Channel | [#eventing](https://knative.slack.com/messages/C9JP
------------------------------------------------------------- | ----------- | ------- | -------
<img width="30px" src="https://github.com/vaikas-google.png"> | Ville Aikas | Google | [vaikas-google](https://github.com/vaikas-google)
## Networking
Inbound and outbound network connectivity for [serving](https://github.com/knative/serving) workloads.
Specific areas of interest include: load balancing, routing, DNS configuration and TLS support.
Artifact | Link
-------------------------- | ----
Forum | [knative-dev@](https://groups.google.com/forum/#!forum/knative-dev)
Community Meeting VC | Coming soon
Community Meeting Calendar | Coming soon
Meeting Notes | [Notes](https://drive.google.com/open?id=1EE1t5mTfnTir2lEasdTMRNtuPEYuPqQCZbU3NC9mHOI)
Document Folder | [Folder](https://drive.google.com/corp/drive/folders/1oVDYbcEDdQ9EpUmkK6gE4C7aZ8u6ujsN)
Slack Channel | [#networking](https://knative.slack.com)
&nbsp; | Leads | Company | Profile
--------------------------------------------------------- | ---------------- | ------- | -------
<img width="30px" src="https://github.com/tcnghia.png"> | Nghia Tran | Google | [tcnghia](https://github.com/tcnghia)
<img width="30px" src="https://github.com/mdemirhan.png"> | Mustafa Demirhan | Google | [mdemirhan](https://github.com/mdemirhan)
## Observability
Logging, monitoring & tracing infrastructure
Artifact | Link
-------------------------- | ----
Forum | [knative-dev@](https://groups.google.com/forum/#!forum/knative-dev)
Community Meeting VC | https://meet.google.com/kik-btis-sqz <br> Or dial in: <br> (US) +1 515-705-3725 <br>PIN: 704 774#
Community Meeting Calendar | [Calendar Invitation](https://calendar.google.com/event?action=TEMPLATE&tmeid=MDc4ZnRkZjFtbzZhZzBmdDMxYXBrM3B1YTVfMjAxODA4MDJUMTczMDAwWiBtZGVtaXJoYW5AZ29vZ2xlLmNvbQ&tmsrc=mdemirhan%40google.com&scp=ALL)
Meeting Notes | [Notes](https://drive.google.com/open?id=1vWEpjf093Jsih3mKkpIvmWWbUQPxFkcyDxzNH15rQgE)
Document Folder | [Folder](https://drive.google.com/corp/drive/folders/10HcpZlI1PbFyzinO6HjfHbzCkBXrqXMy)
Slack Channel | [#observability](https://knative.slack.com)
&nbsp; | Leads | Company | Profile
--------------------------------------------------------- | ---------------- | ------- | -------
<img width="30px" src="https://github.com/mdemirhan.png"> | Mustafa Demirhan | Google | [mdemirhan](https://github.com/mdemirhan)
## Scaling
Autoscaling
@ -96,23 +134,23 @@ Slack Channel | [#autoscaling](https://knative.slack.com)
------------------------------------------------------------- | -------------- | ------- | -------
<img width="30px" src="https://github.com/josephburnett.png"> | Joseph Burnett | Google | [josephburnett](https://github.com/josephburnett)
## Serving
## Productivity
Logging infra, Monitoring infra, Trace, Load balancing/Istio, Domain names,
Routing, Observability
Project health, test framework, continuous integration & deployment, release, performance/scale/load testing infrastructure
Artifact | Link
-------------------------- | ----
Forum | [knative-dev@](https://groups.google.com/forum/#!forum/knative-dev)
Community Meeting VC | [TODO](TODO)
Community Meeting Calendar | [Calendar Invitation](TODO)
Meeting Notes | [Notes](TODO)
Document Folder | [Folder](https://drive.google.com/corp/drive/folders/1pfcc6z8oQl6S7bOl1MnfZJ2o32FtgvRB)
Slack Channel | [#metrics](https://knative.slack.com)
Community Meeting VC | [Hangouts](https://meet.google.com/sps-vbhg-rfx)
Community Meeting Calendar | [Calendar Invitation](https://calendar.google.com/event?action=TEMPLATE&tmeid=NW5zM21rbHVwZWgyNHFoMGpyY2JhMjB2bHRfMjAxODA4MzBUMjEwMDAwWiBnb29nbGUuY29tXzE4dW40ZnVoNnJva3FmOGhtZmZ0bTVvcXE0QGc&tmsrc=google.com_18un4fuh6rokqf8hmfftm5oqq4%40group.calendar.google.com&scp=ALL)
Meeting Notes | [Notes](https://docs.google.com/document/d/1aPRwYGD4XscRIqlBzbNsSB886PJ0G-vZYUAAUjoydko)
Document Folder | [Folder](https://drive.google.com/corp/drive/folders/1oMYB4LQHjySuMChmcWYCyhH7-CSkz2r_)
Slack Channel | [#productivity](https://knative.slack.com)
&nbsp; | Leads | Company | Profile
--------------------------------------------------------- | ---------------- | ------- | -------
<img width="30px" src="https://github.com/mdemirhan.png"> | Mustafa Demirhan | Google | [mdemirhan](https://github.com/mdemirhan)
--------------------------------------------------------- | -------------- | ------- | -------
<img width="30px" src="https://github.com/jessiezcc.png"> | Jessie Zhu | Google | [jessiezcc](https://github.com/jessiezcc)
<img width="30px" src="https://github.com/adrcunha.png">  | Adriano Cunha | Google | [adrcunha](https://github.com/adrcunha)
---

View File

@ -88,8 +88,8 @@ is a random number 1-10.
Now we want to consume these IoT events, so let's create the function to handle the events:
```shell
kubectl apply -f event-flow/route.yaml
kubectl apply -f event-flow/configuration.yaml
kubectl apply -f route.yaml
kubectl apply -f configuration.yaml
```
## Create an event source
@ -103,10 +103,10 @@ in Pull mode to poll for the events from this topic.
Then let's create a GCP PubSub as an event source that we can bind to.
```shell
kubectl apply -f event-flow/serviceaccount.yaml
kubectl apply -f event-flow/serviceaccountbinding.yaml
kubectl apply -f event-flow/eventsource.yaml
kubectl apply -f event-flow/eventtype.yaml
kubectl apply -f serviceaccount.yaml
kubectl apply -f serviceaccountbinding.yaml
kubectl apply -f eventsource.yaml
kubectl apply -f eventtype.yaml
```
## Bind IoT events to our function
@ -115,5 +115,5 @@ We have now created a function that we want to consume our IoT events, and we ha
source that's sending events via GCP PubSub, so let's wire the two together:
```shell
kubectl apply -f event-flow/flow.yaml
kubectl apply -f flow.yaml
```
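To confirm the wiring, you can list the flow that was just created; a minimal check, assuming the `Flow` CRD registers the plural name `flows`:
```shell
# List Flow resources in the current namespace
kubectl get flows
```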

View File

@ -0,0 +1,27 @@
# Copyright 2018 The Knative Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Build the sample binary in a full Go image.
FROM golang AS builder
WORKDIR /go/src/github.com/knative/docs/
ADD . /go/src/github.com/knative/docs/
# Static build so the binary runs in a minimal base image.
RUN CGO_ENABLED=0 go build ./eventing/samples/github-events

# Copy the binary into a minimal distroless image for deployment.
FROM gcr.io/distroless/base
COPY --from=builder /go/src/github.com/knative/docs/github-events /sample
ENTRYPOINT ["/sample"]
EXPOSE 8080

View File

@ -0,0 +1,180 @@
# Reacting to GitHub Events
In response to a pull request event, the sample app's _legit_ `Service` will add
`(looks pretty legit)` to the PR title.
A GitHub webhook will be created on a repository and a Knative `Service` will be
deployed to receive the webhook's event deliveries and forward them into a
`Channel`, through a `Bus`, and out to the consumer via a `Subscription`. The
`Flow` resource takes care of provisioning the webhook, the `Service`, the
`Channel`, and the `Subscription`.
## Prerequisites
You will need:
- A Kubernetes cluster with Knative serving installed. Follow the
[installation instructions](https://github.com/knative/docs/blob/master/install/README.md)
if you need to create one.
- [Docker](https://www.docker.com/) installed and running on your local machine,
and a Docker Hub account configured (you'll use it for a container registry).
- Knative eventing core installed on your Kubernetes cluster. You can install
with:
```shell
kubectl apply -f https://storage.googleapis.com/knative-releases/eventing/latest/release.yaml
```
- A domain name that allows GitHub to call into the cluster: Follow the
[assign a static IP address](https://github.com/knative/docs/blob/master/serving/gke-assigning-static-ip-address.md)
and
[configure a custom domain](https://github.com/knative/docs/blob/master/serving/using-a-custom-domain.md)
instructions.
## Configuring Knative
To use this sample, you'll need to install the `stub` ClusterBus and the
`github` EventSource:
```shell
# Installs ClusterBus
kubectl apply -f https://storage.googleapis.com/knative-releases/eventing/latest/release-clusterbus-stub.yaml
# Installs EventSource
kubectl apply -f https://storage.googleapis.com/knative-releases/eventing/latest/release-source-github.yaml
```
## Granting permissions
Because the `github` EventSource needs to create a Knative Service, you'll need
to provision a special ServiceAccount with the necessary permissions.
The `auth.yaml` file provisions a service account and creates a role that can
create a Knative Service in the `default` namespace. In a production
environment, you might want to limit the access of this service account to
only specific namespaces.
```shell
kubectl apply -f auth.yaml
```
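As a quick sanity check, you can verify that the objects defined in `auth.yaml` now exist:
```shell
# Confirm the service account and role from auth.yaml were created
kubectl get serviceaccount feed-sa -n default
kubectl get role create-knative-service -n default
```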
## Building and deploying the sample
1. Use Docker to build the sample code into a container. To build and push with
Docker Hub, run the following commands, replacing `{username}` with your
Docker Hub username:
```shell
# Build the container on your local machine
# Note: The relative path points to the _root_ of the `knative/docs` repo
docker build -t {username}/github-events --file=Dockerfile ../../../
# Push the container to docker registry
docker push {username}/github-events
```
1. After the build has completed and the container is pushed to Docker Hub, you
can deploy the function into your cluster. **Ensure that the container image
value in `function.yaml` matches the container you built in the previous
step.** Apply the configuration using `kubectl`:
```shell
kubectl apply -f function.yaml
```
1. Check that your service is running using:
```shell
kubectl get ksvc -o "custom-columns=NAME:.metadata.name,READY:.status.conditions[2].status,REASON:.status.conditions[2].message"
NAME READY REASON
legit True <none>
```
> Note: `ksvc` is an alias for `services.serving.knative.dev`. If you have
an older version (version 0.1.0) of Knative installed, you'll need to use
the long name until you upgrade to version 0.1.1 or higher. See
[Checking Knative Installation Version](../../../install/check-install-version.md)
to learn how to see what version you have installed.
1. Create a [personal access token](https://github.com/settings/tokens) for the
GitHub repo that the GitHub source can use to register webhooks with the
GitHub API. Also decide on a token that your code will use to authenticate
the incoming webhooks from GitHub (*secretToken*).
The token can be named anything you find convenient. This sample requires
full `repo` control to be able to update the title of the _Pull Request_.
The Source also requires the `admin:repo_hook` scope, which allows it to
create webhooks in any repository your account can administer. Copy and save
this token; GitHub will force you to regenerate it if misplaced.
Here, the token is named "EventingSample" and the recommended scopes are
selected:
![GitHub UI](personal_access_token.png "GitHub personal access token screenshot")
Update `githubsecret.yaml` with these values. For example, if your generated
access token is `'asdfasfdsaf'` and you chose `'personal_access_token_value'`
as your *secretToken*, you would modify `githubsecret.yaml` like so:
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: githubsecret
type: Opaque
stringData:
  githubCredentials: >
    {
      "accessToken": "asdfasfdsaf",
      "secretToken": "personal_access_token_value"
    }
```
Hint: you can make up a random *secretToken* with:
```shell
head -c 8 /dev/urandom | base64
```
Then, apply the githubsecret using `kubectl`:
```shell
kubectl apply -f githubsecret.yaml
```
1. Update the `resource` field inside `flow.yaml` to the org/repo of your
choosing. Note that the personal access token must be valid for the chosen
org/repo.
Then create the flow sending GitHub Events to the service:
```shell
kubectl apply -f flow.yaml
```
1. Create a PR in the repo you configured the webhook for, and you'll see that
its title is modified with the suffix `(looks pretty legit)`.
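If the title doesn't change, the service's logs are a good first place to look. A debugging sketch, assuming the default Knative pod labels and user container name:
```shell
# Tail the logs of the pods backing the 'legit' service; the label selector
# and container name assume Knative Serving defaults.
kubectl logs -l serving.knative.dev/service=legit -c user-container
```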
## Understanding what happened
`TODO: similar to k8s-events.`
<!--TODO:
explain the resources and communication channels, as well as where the secret
is used. In particular include a note to look at
https://github.com/<owner>/<repo>/settings/hooks to see the webhook registered
and then deleted.
-->
## Cleaning up
To clean up the function, `Flow`, auth, and secret:
```shell
kubectl delete -f function.yaml
kubectl delete -f flow.yaml
kubectl delete -f auth.yaml
kubectl delete -f githubsecret.yaml
```
And then delete the [personal access token](https://github.com/settings/tokens)
created from GitHub.

View File

@ -0,0 +1,43 @@
# Copyright 2018 The Knative Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: feed-sa
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: create-knative-service
  namespace: default
rules:
- apiGroups: ["serving.knative.dev"]
  resources: ["services"]
  verbs: ["get", "list", "watch", "create", "update", "delete", "patch"]
---
# This enables the feed-sa to deploy the receive adapter.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: feed-sa-deploy
  namespace: default
subjects:
- kind: ServiceAccount
  name: feed-sa
  namespace: default
roleRef:
  kind: Role
  name: create-knative-service
  apiGroup: rbac.authorization.k8s.io

View File

@ -0,0 +1,37 @@
# Copyright 2018 The Knative Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
apiVersion: flows.knative.dev/v1alpha1
kind: Flow
metadata:
  name: github-flow
  namespace: default
spec:
  serviceAccountName: feed-sa
  trigger:
    eventType: dev.knative.github.pullrequest
    resource: <org>/<repo> # TODO: update this
    service: github
    parameters:
      secretName: githubsecret
      secretKey: githubCredentials
    parametersFrom:
    - secretKeyRef:
        name: githubsecret
        key: githubCredentials
  action:
    target:
      kind: Service
      apiVersion: serving.knative.dev/v1alpha1
      name: legit

View File

@ -0,0 +1,106 @@
/*
Copyright 2018 The Knative Authors
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package main

import (
	"context"
	"encoding/json"
	"flag"
	"fmt"
	"log"
	"net/http"
	"os"
	"strings"

	ghclient "github.com/google/go-github/github"
	"github.com/knative/eventing/pkg/event"
	"golang.org/x/oauth2"
	"gopkg.in/go-playground/webhooks.v3/github"
)

const (
	// Environment variable containing json credentials
	envSecret = "GITHUB_SECRET"
	// this is what we tack onto each PR title if not there already
	titleSuffix = "looks pretty legit"
)

// GithubHandler holds necessary objects for communicating with the Github.
type GithubHandler struct {
	client *ghclient.Client
	ctx    context.Context
}

// GithubSecrets holds the access and secret tokens parsed from the
// GITHUB_SECRET environment variable.
type GithubSecrets struct {
	AccessToken string `json:"accessToken"`
	SecretToken string `json:"secretToken"`
}

func (h *GithubHandler) newPullRequestPayload(ctx context.Context, pl *github.PullRequestPayload) {
	title := pl.PullRequest.Title
	log.Printf("GOT PR with Title: %q", title)

	// Check the title and if it contains 'looks pretty legit' leave it alone
	if strings.Contains(title, titleSuffix) {
		// already modified, leave it alone.
		return
	}

	newTitle := fmt.Sprintf("%s (%s)", title, titleSuffix)
	updatedPR := ghclient.PullRequest{
		Title: &newTitle,
	}
	newPR, response, err := h.client.PullRequests.Edit(h.ctx,
		pl.Repository.Owner.Login, pl.Repository.Name, int(pl.Number), &updatedPR)
	if err != nil {
		log.Printf("Failed to update PR: %s\n%s", err, response)
		return
	}
	if newPR.Title != nil {
		log.Printf("New PR Title: %q", *newPR.Title)
	} else {
		log.Printf("New PR title is nil")
	}
}

func main() {
	flag.Parse()

	githubSecrets := os.Getenv(envSecret)

	var credentials GithubSecrets
	err := json.Unmarshal([]byte(githubSecrets), &credentials)
	if err != nil {
		log.Fatalf("Failed to unmarshal credentials: %s", err)
		return
	}

	// Set up the auth for being able to talk to Github.
	ctx := context.Background()
	ts := oauth2.StaticTokenSource(
		&oauth2.Token{AccessToken: credentials.AccessToken},
	)
	tc := oauth2.NewClient(ctx, ts)

	client := ghclient.NewClient(tc)

	h := &GithubHandler{
		client: client,
		ctx:    ctx,
	}

	log.Fatal(http.ListenAndServe(":8080", event.Handler(h.newPullRequestPayload)))
}

View File

@ -0,0 +1,34 @@
# Copyright 2018 The Knative Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: legit
spec:
  runLatest:
    configuration:
      revisionTemplate:
        metadata:
          labels:
            knative.dev/type: function
        spec:
          container:
            image: docker.io/{username}/github-events # TODO: fill username out
            env:
            - name: GITHUB_SECRET
              valueFrom:
                secretKeyRef:
                  key: githubCredentials
                  name: githubsecret

View File

@ -0,0 +1,25 @@
# Copyright 2018 The Knative Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
apiVersion: v1
kind: Secret
metadata:
  name: githubsecret
type: Opaque
stringData:
  githubCredentials: >
    {
      "accessToken": "<YOUR PERSONAL TOKEN FROM GITHUB>",
      "secretToken": "<YOUR RANDOM STRING>"
    }

Binary file not shown. (New image; 240 KiB)

View File

@ -46,11 +46,12 @@ kubectl apply -f serviceaccount.yaml
1. Use Docker to build the sample code into a container. To build and push with
Docker Hub, run these commands replacing `{username}` with your Docker Hub
username. Run the following from the _root_ of the `knative/docs` repo:
username:
```shell
# Build the container on your local machine
docker build -t {username}/k8s-events --file=eventing/samples/k8s-events/Dockerfile .
# Note: The relative path points to the _root_ of the `knative/docs` repo
docker build -t {username}/k8s-events --file Dockerfile ../../../
# Push the container to docker registry
docker push {username}/k8s-events
@ -62,21 +63,26 @@ kubectl apply -f serviceaccount.yaml
step.** Apply the configuration using `kubectl`:
```shell
kubectl apply -f eventing/samples/k8s-events/function.yaml
kubectl apply -f function.yaml
```
1. Check that your service is running using:
```shell
kubectl get services.serving.knative.dev -o "custom-columns=NAME:.metadata.name,READY:.status.conditions[2].status,REASON:.status.conditions[2].message"
kubectl get ksvc -o "custom-columns=NAME:.metadata.name,READY:.status.conditions[2].status,REASON:.status.conditions[2].message"
NAME READY REASON
read-k8s-events True <none>
```
> Note: `ksvc` is an alias for `services.serving.knative.dev`. If you have
an older version (version 0.1.0) of Knative installed, you'll need to use
the long name until you upgrade to version 0.1.1 or higher. See
[Checking Knative Installation Version](../../../install/check-install-version.md)
to learn how to see what version you have installed.
1. Create the flow sending Kubernetes Events to the service:
```shell
kubectl apply -f eventing/samples/k8s-events/flow.yaml
kubectl apply -f flow.yaml
```
1. If you have the full knative install, you can read the function logs using

View File

@ -24,4 +24,4 @@ spec:
spec:
container:
# Replace this image with your {username}/k8s-events container
image: github.com/knative/docs/eventing/sample/k8s-events
image: github.com/knative/docs/eventing/samples/k8s-events

29
hack/update-deps.sh Normal file
View File

@ -0,0 +1,29 @@
#!/bin/bash
# Copyright 2018 The Knative Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
set -o errexit
set -o nounset
set -o pipefail
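# Usage: ./hack/update-deps.sh (from anywhere inside the repo).
# Assumes the `dep` dependency tool is installed and on your PATH.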
source $(dirname $0)/../vendor/github.com/knative/test-infra/scripts/library.sh
cd ${REPO_ROOT_DIR}
# Ensure we have everything we need under vendor/
dep ensure
# Keep the only dir in knative/test-infra we're interested in
find vendor/github.com/knative/test-infra -mindepth 1 -maxdepth 1 ! -name scripts -exec rm -fr {} \;

BIN
images/knative-version.png Normal file

Binary file not shown. (New image; 50 KiB)

View File

@ -69,7 +69,7 @@ environment variables. First determine which region you'd like to run AKS in, al
1. Set `RESOURCE_GROUP` and `LOCATION` variables:
```bash
export LOCATION=east-us
export LOCATION=eastus
export RESOURCE_GROUP=knative-group
export CLUSTER_NAME=knative-cluster
```
@ -123,7 +123,7 @@ Knative depends on Istio.
1. Install Istio:
```bash
kubectl apply -f https://raw.githubusercontent.com/knative/serving/v0.1.0/third_party/istio-0.8.0/istio.yaml
kubectl apply -f https://raw.githubusercontent.com/knative/serving/v0.1.1/third_party/istio-0.8.0/istio.yaml
```
1. Label the default namespace with `istio-injection=enabled`:
```bash
@ -142,26 +142,45 @@ rerun the command to see the current status.
> Note: Instead of rerunning the command, you can add `--watch` to the above
command to view the component's status updates in real time. Use CTRL + C to exit watch mode.
## Installing Knative Serving
## Installing Knative components
1. Next, we will install [Knative Serving](https://github.com/knative/serving)
and its dependencies:
You can install the Knative Serving and Build components together, or Build on its own.
### Installing Knative Serving and Build components
1. Run the `kubectl apply` command to install Knative and its dependencies:
```bash
kubectl apply -f https://github.com/knative/serving/releases/download/v0.1.0/release.yaml
kubectl apply -f https://github.com/knative/serving/releases/download/v0.1.1/release.yaml
```
1. Monitor the Knative components, until all of the components show a `STATUS` of
`Running`:
1. Monitor the Knative components until all of the components show a
`STATUS` of `Running`:
```bash
kubectl get pods -n knative-serving
kubectl get pods -n knative-build
```
### Installing Knative Build only
1. Run the `kubectl apply` command to install
[Knative Build](https://github.com/knative/build) and its dependencies:
```bash
kubectl apply -f https://raw.githubusercontent.com/knative/serving/v0.1.1/third_party/config/build/release.yaml
```
1. Monitor the Knative Build components until all of the components show a
`STATUS` of `Running`:
```bash
kubectl get pods -n knative-build
Just as with the Istio components, it will take a few seconds for the Knative
components to be up and running; you can rerun the command to see the current status.
components to be up and running; you can rerun the `kubectl get` command to see
the current status.
> Note: Instead of rerunning the command, you can add `--watch` to the above
command to view the component's status updates in real time. Use CTRL + C to exit watch mode.
command to view the component's status updates in real time. Use CTRL + C to
exit watch mode.
You are now ready to deploy an app to your new Knative cluster.
You are now ready to deploy an app or create a build in your new Knative
cluster.
## Deploying an app

View File

@ -14,12 +14,14 @@ specifications for Knative on Google Cloud Platform.
This guide assumes you are using bash in a Mac or Linux environment; some
commands will need to be adjusted for use in a Windows environment.
### Installing the Google Cloud SDK
### Installing the Google Cloud SDK and `kubectl`
1. If you already have `kubectl`, run `kubectl version` to check your client version.
1. If you already have `gcloud` installed with the `kubectl` component later than
v1.10, you can skip these steps.
1. If you already have `gcloud` installed with `kubectl` version 1.10 or newer,
you can skip these steps.
> Tip: To check which version of `kubectl` you have installed, enter:
```
kubectl version
```
1. Download and install the `gcloud` command line tool:
https://cloud.google.com/sdk/install
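If `kubectl` is missing after the SDK install, it can be added as a `gcloud` component; a minimal sketch:
```bash
# Install the kubectl component distributed with the Google Cloud SDK
gcloud components install kubectl
```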
@ -117,7 +119,7 @@ Knative depends on Istio.
1. Install Istio:
```bash
kubectl apply -f https://raw.githubusercontent.com/knative/serving/v0.1.0/third_party/istio-0.8.0/istio.yaml
kubectl apply -f https://raw.githubusercontent.com/knative/serving/v0.1.1/third_party/istio-0.8.0/istio.yaml
```
1. Label the default namespace with `istio-injection=enabled`:
```bash
@ -137,30 +139,27 @@ rerun the command to see the current status.
## Installing Knative components
You have the option to install and use only the Knative components that you
want. You can install only the component of Knative if you need that
functionality, for example Knative serving is not required to create and run
builds.
You can install the Knative Serving and Build components together, or Build on its own.
### Installing Knative Serving
### Installing Knative Serving and Build components
1. Run the `kubectl apply` command to install
[Knative Serving](https://github.com/knative/serving) and its dependencies:
1. Run the `kubectl apply` command to install Knative and its dependencies:
```bash
kubectl apply -f https://github.com/knative/serving/releases/download/v0.1.0/release.yaml
kubectl apply -f https://github.com/knative/serving/releases/download/v0.1.1/release.yaml
```
1. Monitor the Knative serving components until all of the components show a
1. Monitor the Knative components until all of the components show a
`STATUS` of `Running`:
```bash
kubectl get pods -n knative-serving
kubectl get pods -n knative-build
```
### Installing Knative Build
### Installing Knative Build only
1. Run the `kubectl apply` command to install
[Knative Build](https://github.com/knative/build) and its dependencies:
```bash
kubectl apply -f https://raw.githubusercontent.com/knative/serving/v0.1.0/third_party/config/build/release.yaml
kubectl apply -f https://raw.githubusercontent.com/knative/serving/v0.1.1/third_party/config/build/release.yaml
```
1. Monitor the Knative Build components until all of the components show a
`STATUS` of `Running`:
@ -178,7 +177,7 @@ the current status.
You are now ready to deploy an app or create a build in your new Knative
cluster.
## Deploying apps or builds
## What's next
Now that your cluster has Knative installed, you're ready to deploy an app or
create a build.
@ -193,8 +192,8 @@ for getting started:
* You can view the available [sample apps](../serving/samples/README.md) and
deploy one of your choosing.
* To get started by creating a build, see
[Creating a simple Knative Build](../build/creating-builds.md)
* You can follow the step-by-step
[Creating a simple Knative Build](../build/creating-builds.md) guide.
## Cleaning up

View File

@ -71,7 +71,7 @@ Knative depends on Istio.
1. Install Istio:
```bash
kubectl apply -f https://raw.githubusercontent.com/knative/serving/v0.1.0/third_party/istio-0.8.0/istio.yaml
kubectl apply -f https://raw.githubusercontent.com/knative/serving/v0.1.1/third_party/istio-0.8.0/istio.yaml
```
2. Label the default namespace with `istio-injection=enabled`:
```bash
@ -87,23 +87,45 @@ rerun the command to see the current status.
> command to view the component's status updates in real time. Use CTRL + C to
> exit watch mode.
## Installing Knative Serving
## Installing Knative components
1. Next, we will install [Knative Serving](https://github.com/knative/serving)
and its dependencies:
`bash kubectl apply -f https://github.com/knative/serving/releases/download/v0.1.0/release.yaml`
1. Monitor the Knative components, until all of the components show a `STATUS`
of `Running`: `bash kubectl get pods -n knative-serving`
You can install the Knative Serving and Build components together, or Build on its own.
### Installing Knative Serving and Build components
1. Run the `kubectl apply` command to install Knative and its dependencies:
```bash
kubectl apply -f https://github.com/knative/serving/releases/download/v0.1.1/release.yaml
```
1. Monitor the Knative components until all of the components show a
`STATUS` of `Running`:
```bash
kubectl get pods -n knative-serving
kubectl get pods -n knative-build
```
### Installing Knative Build only
1. Run the `kubectl apply` command to install
[Knative Build](https://github.com/knative/build) and its dependencies:
```bash
kubectl apply -f https://raw.githubusercontent.com/knative/serving/v0.1.1/third_party/config/build/release.yaml
```
1. Monitor the Knative Build components until all of the components show a
`STATUS` of `Running`:
```bash
kubectl get pods -n knative-build
Just as with the Istio components, it will take a few seconds for the Knative
components to be up and running; you can rerun the command to see the current
status.
components to be up and running; you can rerun the `kubectl get` command to see
the current status.
> Note: Instead of rerunning the command, you can add `--watch` to the above
> command to view the component's status updates in real time. Use CTRL + C to
> exit watch mode.
command to view the component's status updates in real time. Use CTRL + C to
exit watch mode.
You are now ready to deploy an app to your new Knative cluster.
You are now ready to deploy an app or create a build in your new Knative
cluster.
## Alternative way to enable Knative with Gardener
@ -137,10 +159,10 @@ spec:
And of course create the respective `ConfigMaps`:
```
curl https://raw.githubusercontent.com/knative/serving/v0.1.0/third_party/istio-0.8.0/istio.yaml
curl -L -o istio.yaml https://raw.githubusercontent.com/knative/serving/v0.1.1/third_party/istio-0.8.0/istio.yaml
kubectl create configmap istio-chart-080 --from-file=istio.yaml
curl https://github.com/knative/serving/releases/download/v0.1.0/release.yaml
curl -L -o release.yaml https://github.com/knative/serving/releases/download/v0.1.1/release.yaml
kubectl create configmap knative-chart-001 --from-file=release.yaml
```

View File

@ -126,7 +126,7 @@ Knative depends on Istio.
1. Install Istio:
```bash
kubectl apply -f https://raw.githubusercontent.com/knative/serving/v0.1.0/third_party/istio-0.8.0/istio.yaml
kubectl apply -f https://raw.githubusercontent.com/knative/serving/v0.1.1/third_party/istio-0.8.0/istio.yaml
```
1. Label the default namespace with `istio-injection=enabled`:
```bash
@ -145,28 +145,45 @@ rerun the command to see the current status.
> command to view the component's status updates in real time. Use CTRL+C to
> exit watch mode.
## Installing Knative Serving
## Installing Knative components
1. Next, we will install [Knative Serving](https://github.com/knative/serving)
and its dependencies:
You can install the Knative Serving and Build components together, or Build on its own.
### Installing Knative Serving and Build components
1. Run the `kubectl apply` command to install Knative and its dependencies:
```bash
kubectl apply -f https://github.com/knative/serving/releases/download/v0.1.0/release.yaml
kubectl apply -f https://github.com/knative/serving/releases/download/v0.1.1/release.yaml
```
1. Monitor the Knative components until all of the components show a `STATUS`
of `Running`:
1. Monitor the Knative components until all of the components show a
`STATUS` of `Running`:
```bash
kubectl get pods -n knative-serving
kubectl get pods -n knative-build
```
### Installing Knative Build only
1. Run the `kubectl apply` command to install
[Knative Build](https://github.com/knative/build) and its dependencies:
```bash
kubectl apply -f https://raw.githubusercontent.com/knative/serving/v0.1.1/third_party/config/build/release.yaml
```
1. Monitor the Knative Build components until all of the components show a
`STATUS` of `Running`:
```bash
kubectl get pods -n knative-build
Just as with the Istio components, it will take a few seconds for the Knative
components to be up and running; you can rerun the command to see the current
status.
components to be up and running; you can rerun the `kubectl get` command to see
the current status.
> Note: Instead of re-running the command, you can add `--watch` to the above
> command to view the component's status updates in real time. Use CTRL+C to
> exit watch mode.
> Note: Instead of rerunning the command, you can add `--watch` to the above
command to view the component's status updates in real time. Use CTRL + C to
exit watch mode.
You are now ready to deploy an app to your new Knative cluster.
You are now ready to deploy an app or create a build in your new Knative
cluster.
## Deploying an app

View File

@ -59,7 +59,7 @@ Knative depends on Istio. Run the following to install Istio. (We are changing
`LoadBalancer` to `NodePort` for the `istio-ingress` service).
```shell
curl -L https://raw.githubusercontent.com/knative/serving/v0.1.0/third_party/istio-0.8.0/istio.yaml \
curl -L https://raw.githubusercontent.com/knative/serving/v0.1.1/third_party/istio-0.8.0/istio.yaml \
| sed 's/LoadBalancer/NodePort/' \
| kubectl apply -f -
@ -85,12 +85,12 @@ rerun the command to see the current status.
Next, install [Knative Serving](https://github.com/knative/serving):
Because you have limited resources available, use the
`https://github.com/knative/serving/releases/download/v0.1.0/release-lite.yaml`
`https://github.com/knative/serving/releases/download/v0.1.1/release-lite.yaml`
file, which omits some of the monitoring components to reduce the memory used by
the Knative components. To use the provided `release-lite.yaml` release, run:
```shell
curl -L https://github.com/knative/serving/releases/download/v0.1.0/release-lite.yaml \
curl -L https://github.com/knative/serving/releases/download/v0.1.1/release-lite.yaml \
| sed 's/LoadBalancer/NodePort/' \
| kubectl apply -f -
```

View File

@ -0,0 +1,212 @@
# Knative Install on OpenShift
This guide walks you through the installation of the latest version of [Knative
Serving](https://github.com/knative/serving) on an
[OpenShift](https://github.com/openshift/origin) cluster using pre-built images and
demonstrates creating and deploying an image of a sample "hello world" app onto
the newly created Knative cluster.
You can find [guides for other platforms here](README.md).
## Before you begin
These instructions will run an OpenShift 3.10 (Kubernetes 1.10) cluster on your
local machine using [`oc cluster up`](https://docs.openshift.org/latest/getting_started/administrators.html#running-in-a-docker-container)
to test-drive Knative.
## Install `oc` (the OpenShift CLI)
You can install the latest version of `oc`, the OpenShift CLI, into your local
directory by downloading the right release tarball for your OS from the
[releases page](https://github.com/openshift/origin/releases/tag/v3.10.0).
```shell
export OS=<your OS here>
curl -L https://github.com/openshift/origin/releases/download/v3.10.0/openshift-origin-client-tools-v3.10.0-dd10d17-$OS-64bit.tar.gz -o oc.tar.gz
tar zvf oc.tar.gz -x openshift-origin-client-tools-v3.10.0-dd10d17-$OS-64bit/oc --strip=1
# You will now have the oc binary in your local directory
```
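To use this binary for the steps that follow, one option is to put it on your `PATH` for the current session:
```shell
# Prepend the current directory so this oc is found first, then verify
export PATH="$PWD:$PATH"
oc version
```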
## Scripted cluster setup and installation
For Linux and Mac, you can optionally run a
[script](scripts/knative-with-openshift.sh) that automates the steps on this
page.
Once you have `oc` present on your machine and in your `PATH`, you can simply
run [this script](scripts/knative-with-openshift.sh); it will:
- Create a new OpenShift cluster on your local machine with `oc cluster up`
- Install Istio and Knative serving
- Log you in as the cluster administrator
- Set up the default namespace for istio autoinjection
Once the script completes, you'll be ready to test out Knative!
## Creating a new OpenShift cluster
Create a new OpenShift cluster on your local machine using `oc cluster up`:
```shell
oc cluster up --write-config
# Enable admission webhooks
sed -i -e 's/"admissionConfig":{"pluginConfig":null}/"admissionConfig": {\
"pluginConfig": {\
"ValidatingAdmissionWebhook": {\
"configuration": {\
"apiVersion": "v1",\
"kind": "DefaultAdmissionConfig",\
"disable": false\
}\
},\
"MutatingAdmissionWebhook": {\
"configuration": {\
"apiVersion": "v1",\
"kind": "DefaultAdmissionConfig",\
"disable": false\
}\
}\
}\
}/' openshift.local.clusterup/kube-apiserver/master-config.yaml
oc cluster up --server-loglevel=5
```
Once the cluster is up, login as the cluster administrator:
```shell
oc login -u system:admin
```
Now, we'll set up the default project for use with Knative.
```shell
oc project default
# SCCs (Security Context Constraints) are the precursor to the PSP (Pod
# Security Policy) mechanism in Kubernetes.
oc adm policy add-scc-to-user privileged -z default -n default
oc label namespace default istio-injection=enabled
```
## Installing Istio
Knative depends on Istio. First, run the following to grant the necessary
privileges to the service accounts Istio will use:
```shell
oc adm policy add-scc-to-user anyuid -z istio-ingress-service-account -n istio-system
oc adm policy add-scc-to-user anyuid -z default -n istio-system
oc adm policy add-scc-to-user anyuid -z prometheus -n istio-system
oc adm policy add-scc-to-user anyuid -z istio-egressgateway-service-account -n istio-system
oc adm policy add-scc-to-user anyuid -z istio-citadel-service-account -n istio-system
oc adm policy add-scc-to-user anyuid -z istio-ingressgateway-service-account -n istio-system
oc adm policy add-scc-to-user anyuid -z istio-cleanup-old-ca-service-account -n istio-system
oc adm policy add-scc-to-user anyuid -z istio-mixer-post-install-account -n istio-system
oc adm policy add-scc-to-user anyuid -z istio-mixer-service-account -n istio-system
oc adm policy add-scc-to-user anyuid -z istio-pilot-service-account -n istio-system
oc adm policy add-scc-to-user anyuid -z istio-sidecar-injector-service-account -n istio-system
oc adm policy add-cluster-role-to-user cluster-admin -z istio-galley-service-account -n istio-system
```
Run the following to install Istio:
```shell
curl -L https://storage.googleapis.com/knative-releases/serving/latest/istio.yaml \
| sed 's/LoadBalancer/NodePort/' \
| oc apply -f -
```
Monitor the Istio components until all of the components show a `STATUS` of
`Running` or `Completed`:
```shell
oc get pods -n istio-system
```
It will take a few minutes for all the components to be up and running; you can
rerun the command to see the current status.
> Note: Instead of rerunning the command, you can add `--watch` to the above
command to view the component's status updates in real time. Use CTRL+C to exit watch mode.
## Installing Knative Serving
Next, we'll install [Knative Serving](https://github.com/knative/serving).
First, run the following to grant the necessary privileges to the service
accounts that the Knative components will use:
```shell
oc adm policy add-scc-to-user anyuid -z build-controller -n knative-build
oc adm policy add-scc-to-user anyuid -z controller -n knative-serving
oc adm policy add-scc-to-user anyuid -z autoscaler -n knative-serving
oc adm policy add-scc-to-user anyuid -z kube-state-metrics -n monitoring
oc adm policy add-scc-to-user anyuid -z node-exporter -n monitoring
oc adm policy add-scc-to-user anyuid -z prometheus-system -n monitoring
oc adm policy add-cluster-role-to-user cluster-admin -z build-controller -n knative-build
oc adm policy add-cluster-role-to-user cluster-admin -z controller -n knative-serving
```
Next, install Knative:
```shell
curl -L https://storage.googleapis.com/knative-releases/serving/latest/release-lite.yaml \
| sed 's/LoadBalancer/NodePort/' \
| oc apply -f -
```
Monitor the Knative components until all of the components show a `STATUS` of
`Running`:
```shell
oc get pods -n knative-serving
```
Just as with the Istio components, it will take a few seconds for the Knative
components to be up and running; you can rerun the command to see the current status.
> Note: Instead of rerunning the command, you can add `--watch` to the above
command to view the component's status updates in real time. Use CTRL+C to exit watch mode.
Now you can deploy an app to your newly created Knative cluster.
## Deploying an app
Now that your cluster has Knative installed, you're ready to deploy an app.
If you'd like to follow a step-by-step guide for deploying your first app on
Knative, check out the
[Getting Started with Knative App Deployment](getting-started-knative-app.md)
guide.
If you'd like to view the available sample apps and deploy one of your choosing,
head to the [sample apps](../serving/samples/README.md) repo.
> Note: When looking up the IP address to use for accessing your app, you need to look up
the NodePort for the `knative-ingressgateway` as well as the IP address used for OpenShift.
You can use the following command to look up the value to use for the {IP_ADDRESS} placeholder
used in the samples:
```shell
export IP_ADDRESS=$(oc get node -o 'jsonpath={.items[0].status.addresses[0].address}'):$(oc get svc knative-ingressgateway -n istio-system -o 'jsonpath={.spec.ports[?(@.port==80)].nodePort}')
```
## Cleaning up
Delete your test cluster by running:
```shell
oc cluster down
rm -rf openshift.local.clusterup
```
---
Except as otherwise noted, the content of this page is licensed under the
[Creative Commons Attribution 4.0 License](https://creativecommons.org/licenses/by/4.0/),
and code samples are licensed under the
[Apache 2.0 License](https://www.apache.org/licenses/LICENSE-2.0).

View File

@ -32,7 +32,7 @@ Knative depends on Istio. Istio workloads require privileged mode for Init Conta
1. Install Istio:
```bash
kubectl apply -f https://raw.githubusercontent.com/knative/serving/v0.1.0/third_party/istio-0.8.0/istio.yaml
kubectl apply -f https://raw.githubusercontent.com/knative/serving/v0.1.1/third_party/istio-0.8.0/istio.yaml
```
1. Label the default namespace with `istio-injection=enabled`:
```bash
@ -50,26 +50,45 @@ rerun the command to see the current status.
> Note: Instead of rerunning the command, you can add `--watch` to the above
command to view the component's status updates in real time. Use CTRL + C to exit watch mode.
## Installing Knative Serving
## Installing Knative components
1. Next, we will install [Knative Serving](https://github.com/knative/serving)
and its dependencies:
You can install the Knative Serving and Build components together, or Build on its own.
### Installing Knative Serving and Build components
1. Run the `kubectl apply` command to install Knative and its dependencies:
```bash
kubectl apply -f https://github.com/knative/serving/releases/download/v0.1.0/release.yaml
kubectl apply -f https://github.com/knative/serving/releases/download/v0.1.1/release.yaml
```
1. Monitor the Knative components, until all of the components show a `STATUS` of
`Running`:
1. Monitor the Knative components until all of the components show a
`STATUS` of `Running`:
```bash
kubectl get pods -n knative-serving
kubectl get pods -n knative-build
```
### Installing Knative Build only
1. Run the `kubectl apply` command to install
[Knative Build](https://github.com/knative/build) and its dependencies:
```bash
kubectl apply -f https://raw.githubusercontent.com/knative/serving/v0.1.1/third_party/config/build/release.yaml
```
1. Monitor the Knative Build components until all of the components show a
`STATUS` of `Running`:
```bash
kubectl get pods -n knative-build
Just as with the Istio components, it will take a few seconds for the Knative
components to be up and running; you can rerun the command to see the current status.
components to be up and running; you can rerun the `kubectl get` command to see
the current status.
> Note: Instead of rerunning the command, you can add `--watch` to the above
command to view the component's status updates in real time. Use CTRL + C to exit watch mode.
command to view the component's status updates in real time. Use CTRL + C to
exit watch mode.
You are now ready to deploy an app to your new Knative cluster.
You are now ready to deploy an app or create a build in your new Knative
cluster.
## Deploying an app

View File

@ -19,7 +19,7 @@ Containers
1. Install Istio:
```bash
kubectl apply -f https://raw.githubusercontent.com/knative/serving/v0.1.0/third_party/istio-0.8.0/istio.yaml
kubectl apply -f https://raw.githubusercontent.com/knative/serving/v0.1.1/third_party/istio-0.8.0/istio.yaml
```
1. Label the default namespace with `istio-injection=enabled`:
```bash
@ -35,28 +35,45 @@ rerun the command to see the current status.
> command to view the component's status updates in real time. Use CTRL + C to
> exit watch mode.
## Installing Knative Serving
## Installing Knative components
1. Next, we will install [Knative Serving](https://github.com/knative/serving)
and its dependencies:
You can install the Knative Serving and Build components together, or Build on its own.
### Installing Knative Serving and Build components
1. Run the `kubectl apply` command to install Knative and its dependencies:
```bash
kubectl apply -f https://github.com/knative/serving/releases/download/v0.1.0/release.yaml
kubectl apply -f https://github.com/knative/serving/releases/download/v0.1.1/release.yaml
```
1. Monitor the Knative components, until all of the components show a `STATUS`
of `Running`:
1. Monitor the Knative components until all of the components show a
`STATUS` of `Running`:
```bash
kubectl get pods -n knative-serving
kubectl get pods -n knative-build
```
### Installing Knative Build only
1. Run the `kubectl apply` command to install
[Knative Build](https://github.com/knative/build) and its dependencies:
```bash
kubectl apply -f https://raw.githubusercontent.com/knative/serving/v0.1.1/third_party/config/build/release.yaml
```
1. Monitor the Knative Build components until all of the components show a
`STATUS` of `Running`:
```bash
kubectl get pods -n knative-build
Just as with the Istio components, it will take a few seconds for the Knative
components to be up and running; you can rerun the command to see the current
status.
components to be up and running; you can rerun the `kubectl get` command to see
the current status.
> Note: Instead of rerunning the command, you can add `--watch` to the above
> command to view the component's status updates in real time. Use CTRL + C to
> exit watch mode.
command to view the component's status updates in real time. Use CTRL + C to
exit watch mode.
You are now ready to deploy an app to your new Knative cluster.
You are now ready to deploy an app or create a build in your new Knative
cluster.
## Deploying an app

View File

@ -9,7 +9,7 @@ sure which Kubernetes platform is right for you, see
[Picking the Right Solution](https://kubernetes.io/docs/setup/pick-right-solution/).
We provide information for installing Knative on
[Google Kubernetes Engine](https://cloud.google.com/kubernetes-engine/docs/), [IBM Cloud Kubernetes Service](https://www.ibm.com/cloud/container-service), [Azure Kubernetes Service](https://docs.microsoft.com/en-us/azure/aks/), [Minikube](https://kubernetes.io/docs/setup/minikube/) and [Pivotal Container Service](https://pivotal.io/platform/pivotal-container-service) clusters.
[Google Kubernetes Engine](https://cloud.google.com/kubernetes-engine/docs/), [IBM Cloud Kubernetes Service](https://www.ibm.com/cloud/container-service), [Azure Kubernetes Service](https://docs.microsoft.com/en-us/azure/aks/), [Minikube](https://kubernetes.io/docs/setup/minikube/), [OpenShift](https://github.com/openshift/origin) and [Pivotal Container Service](https://pivotal.io/platform/pivotal-container-service) clusters.
## Installing Knative
@ -21,6 +21,7 @@ Knative components on the following platforms:
* [Knative Install on Google Kubernetes Engine](Knative-with-GKE.md)
* [Knative Install on IBM Cloud Kubernetes Service](Knative-with-IKS.md)
* [Knative Install on Minikube](Knative-with-Minikube.md)
* [Knative Install on OpenShift](Knative-with-OpenShift.md)
* [Knative Install on Pivotal Container Service](Knative-with-PKS.md)
If you already have a Kubernetes cluster you're comfortable installing
@ -50,6 +51,10 @@ and set up an Istio IP range for outbound network access:
* [Configure outbound network access](../serving/outbound-network-access.md)
* [Configuring HTTPS with a custom certificate](../serving/using-an-ssl-cert.md)
## Checking the Version of Your Knative Serving Installation
* [Checking the version of your Knative Serving installation](check-install-version.md)
---
Except as otherwise noted, the content of this page is licensed under the

View File

@ -0,0 +1,37 @@
# Checking the Version of Your Knative Serving Installation
If you want to check which version of Knative Serving you have installed,
enter the following command:
```bash
kubectl describe deploy controller -n knative-serving
```
This will return the description for the `knative-serving` controller; this
information contains the link to the container that was used to install Knative:
```yaml
...
Pod Template:
Labels: app=controller
Annotations: sidecar.istio.io/inject=false
Service Account: controller
Containers:
controller:
# Link to container used for Knative install
Image: gcr.io/knative-releases/github.com/knative/serving/cmd/controller@sha256:59abc8765d4396a3fc7cac27a932a9cc151ee66343fa5338fb7146b607c6e306
...
```
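If you only need the image reference itself, a `jsonpath` query (a small
sketch using standard `kubectl` output options) prints it directly:
```bash
# Print only the controller's container image reference
kubectl get deploy controller -n knative-serving \
  -o jsonpath='{.spec.template.spec.containers[0].image}'
```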
Copy the full `gcr.io` link to the container and paste it into your browser.
If you are already signed in to a Google account, you'll be taken to the Google
Container Registry page for that container in the Google Cloud Platform console.
If you aren't already signed in, you'll need to sign in to a Google account
before you can view the container details.
On the container details page, you'll see a section titled
"Container classification," and in that section is a list of tags. The versions
of Knative you have installed will appear in the list as `v0.1.1`, or whatever
version you have installed:
![Shows list of tags on container details page; v0.1.1 is the Knative version and is the first tag.](../images/knative-version.png)

View File

@ -77,16 +77,26 @@ Now that your service is created, Knative will perform the following steps:
To see if your app has been deployed successfully, you need the host URL and
IP address created by Knative.
Note: If your cluster is new, it can take some time before the service is
assigned an external IP address.
1. To find the IP address for your service, enter:
```shell
kubectl get svc knative-ingressgateway -n istio-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
knative-ingressgateway LoadBalancer 10.23.247.74 35.203.155.229 80:32380/TCP,443:32390/TCP,32400:32400/TCP 2d
```
Take note of the `EXTERNAL-IP` address.
You can also export the IP address as a variable with the following command:
```shell
export IP_ADDRESS=$(kubectl get svc knative-ingressgateway -n istio-system -o 'jsonpath={.status.loadBalancer.ingress[0].ip}')
```
> Note: If you use Minikube or a bare-metal cluster that has no external load balancer, the
`EXTERNAL-IP` field is shown as `<pending>`. You need to use `NodeIP` and `NodePort` to
interact with your app instead. To get your app's `NodeIP` and `NodePort`, enter the following command:
@ -97,26 +107,48 @@ IP address created by Knative.
1. To find the host URL for your service, enter:
```shell
kubectl get ksvc helloworld-go -o=custom-columns=NAME:.metadata.name,DOMAIN:.status.domain
NAME DOMAIN
helloworld-go helloworld-go.default.example.com
```
> Note: `ksvc` is an alias for `services.serving.knative.dev`. If you have
an older version (version 0.1.0) of Knative installed, you'll need to use
the long name until you upgrade to version 0.1.1 or higher. See
[Checking Knative Installation Version](check-install-version.md)
to learn how to see what version you have installed.
You can also export the host URL as a variable using the following command:
```shell
export HOST_URL=$(kubectl get ksvc helloworld-go -o jsonpath='{.status.domain}')
```
If you changed the name from `helloworld-go` to something else when creating
the `.yaml` file, replace `helloworld-go` in the above commands with the
name you entered.
1. Now you can make a request to your app and see the results. Replace
`IP_ADDRESS` with the `EXTERNAL-IP` you wrote down, and replace
`helloworld-go.default.example.com` with the domain returned in the previous
step.
```shell
curl -H "Host: helloworld-go.default.example.com" http://IP_ADDRESS
Hello World: Go Sample v1!
```
If you exported the host URL and IP address as variables in the previous steps, you
can use those variables to simplify your cURL request:
```shell
curl -H "Host: ${HOST_URL}" http://${IP_ADDRESS}
Hello World: Go Sample v1!
```
If you deployed your own app, you might want to customize this cURL
request to interact with your application.
It can take a few seconds for Knative to scale up your application and return
a response.

View File

@ -0,0 +1,94 @@
#!/usr/bin/env bash
# Turn colors in this script off by setting the NO_COLOR variable in your
# environment to any value:
#
# $ NO_COLOR=1 test.sh
NO_COLOR=${NO_COLOR:-""}
if [ -z "$NO_COLOR" ]; then
header=$'\e[1;33m'
reset=$'\e[0m'
else
header=''
reset=''
fi
function header_text {
echo "$header$*$reset"
}
header_text "Starting Knative test-drive on OpenShift!"
echo "Using oc version:"
oc version
header_text "Writing config"
oc cluster up --write-config
sed -i -e 's/"admissionConfig":{"pluginConfig":null}/"admissionConfig": {\
"pluginConfig": {\
"ValidatingAdmissionWebhook": {\
"configuration": {\
"apiVersion": "v1",\
"kind": "DefaultAdmissionConfig",\
"disable": false\
}\
},\
"MutatingAdmissionWebhook": {\
"configuration": {\
"apiVersion": "v1",\
"kind": "DefaultAdmissionConfig",\
"disable": false\
}\
}\
}\
}/' openshift.local.clusterup/kube-apiserver/master-config.yaml
header_text "Starting OpenShift with 'oc cluster up'"
oc cluster up --server-loglevel=5
header_text "Logging in as system:admin and setting up default namespace"
oc login -u system:admin
oc project default
oc adm policy add-scc-to-user privileged -z default -n default
oc label namespace default istio-injection=enabled
header_text "Setting up security policy for istio"
oc adm policy add-scc-to-user anyuid -z istio-ingress-service-account -n istio-system
oc adm policy add-scc-to-user anyuid -z default -n istio-system
oc adm policy add-scc-to-user anyuid -z prometheus -n istio-system
oc adm policy add-scc-to-user anyuid -z istio-egressgateway-service-account -n istio-system
oc adm policy add-scc-to-user anyuid -z istio-citadel-service-account -n istio-system
oc adm policy add-scc-to-user anyuid -z istio-ingressgateway-service-account -n istio-system
oc adm policy add-scc-to-user anyuid -z istio-cleanup-old-ca-service-account -n istio-system
oc adm policy add-scc-to-user anyuid -z istio-mixer-post-install-account -n istio-system
oc adm policy add-scc-to-user anyuid -z istio-mixer-service-account -n istio-system
oc adm policy add-scc-to-user anyuid -z istio-pilot-service-account -n istio-system
oc adm policy add-scc-to-user anyuid -z istio-sidecar-injector-service-account -n istio-system
oc adm policy add-cluster-role-to-user cluster-admin -z istio-galley-service-account -n istio-system
header_text "Installing istio"
curl -L https://storage.googleapis.com/knative-releases/serving/latest/istio.yaml \
| sed 's/LoadBalancer/NodePort/' \
| oc apply -f -
header_text "Waiting for istio to become ready"
sleep 5; while echo && oc get pods -n istio-system | grep -v -E "(Running|Completed|STATUS)"; do sleep 5; done
header_text "Setting up security policy for knative"
oc adm policy add-scc-to-user anyuid -z build-controller -n knative-build
oc adm policy add-scc-to-user anyuid -z controller -n knative-serving
oc adm policy add-scc-to-user anyuid -z autoscaler -n knative-serving
oc adm policy add-scc-to-user anyuid -z kube-state-metrics -n monitoring
oc adm policy add-scc-to-user anyuid -z node-exporter -n monitoring
oc adm policy add-scc-to-user anyuid -z prometheus-system -n monitoring
oc adm policy add-cluster-role-to-user cluster-admin -z build-controller -n knative-build
oc adm policy add-cluster-role-to-user cluster-admin -z controller -n knative-serving
header_text "Installing Knative"
curl -L https://storage.googleapis.com/knative-releases/serving/latest/release-lite.yaml \
| sed 's/LoadBalancer/NodePort/' \
| oc apply -f -
header_text "Waiting for Knative to become ready"
sleep 5; while echo && oc get pods -n knative-serving | grep -v -E "(Running|Completed|STATUS)"; do sleep 5; done

131
resources.md Normal file
View File

@ -0,0 +1,131 @@
# Resources
This page contains information about various tools and technologies
that are useful to anyone developing on Knative.
## Community Resources
This section contains tools and technologies developed by members of the
Knative community specifically for use with Knative.
### [`knctl`](https://github.com/cppforlife/knctl)
`knctl` is a CLI for working with Knative that is still under development.
## Other Resources
This section contains other tools and technologies that are useful when
working with Knative.
### [`go-containerregistry`](https://github.com/google/go-containerregistry/)
`go-containerregistry` is a Go library used by `ko`, `kaniko`, `skaffold`, and
others, which supports pushing, pulling, and managing images in a
container image registry without requiring Docker to be installed.
It also provides packages to interact with images in a local Docker daemon,
which does require that Docker be installed.
This library also provides a CLI tool called
[`crane`](https://github.com/google/go-containerregistry/blob/master/cmd/crane/doc/crane.md),
which can be used to interact with and inspect images in a registry.
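For example, assuming `crane` is installed, you can inspect a repository
without a Docker daemon (the image below is just an illustration):
```shell
# Resolve an image's digest, then list the tags in its repository
crane digest gcr.io/distroless/base
crane ls gcr.io/distroless/base
```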
### [`jib`](https://github.com/GoogleContainerTools/jib)
`jib` is a tool, packaged as a Maven plugin and a Gradle plugin, that
efficiently builds container images from Java source, without a Dockerfile,
without requiring access to the Docker daemon.
Like `ko`, when `jib` is invoked, it builds your Java source and pushes an
image with that built source atop a
[distroless](https://github.com/GoogleContainerTools/distroless) base image to
produce small images that support fast incremental image builds.
There are `BuildTemplate`s that wrap `jib` for use with Maven and Gradle, at
https://github.com/knative/build-templates/blob/master/jib/. They expect that
your `pom.xml` or `build.gradle` describes to `jib` where to push your image.
The build templates take no parameters.
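Outside of a Knative build, the Maven flavor of `jib` is typically invoked
like this; the image name is a hypothetical placeholder:
```shell
# Build the Java app and push the image directly to a registry,
# with no Dockerfile and no Docker daemon
mvn compile jib:build -Dimage=gcr.io/my-project/my-java-app
```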
### [`kaniko`](https://github.com/GoogleContainerTools/kaniko)
`kaniko` is a tool that enables building a container image from source using
the Dockerfile format, without requiring access to a Docker daemon. Removing
this requirement means that `kaniko` is [safe to run on a Kubernetes
cluster](https://github.com/kubernetes/kubernetes/issues/1806).
By contrast, building an image using `docker build` necessarily requires the
Docker daemon, which would give the build complete access to your entire
cluster. So that's a very bad idea.
`kaniko` expects to run inside a container, so it's a natural fit for the Build
CRD [builder contract](...). `kaniko` is available as a builder at
`gcr.io/kaniko-project/executor:latest`, and there's a `BuildTemplate` that
wraps it at
https://github.com/knative/build-templates/blob/master/kaniko/kaniko.yaml. It
exposes one required parameter, `IMAGE`, which describes the name of the image
to push to.
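As a rough sketch (the build name and repository below are hypothetical), a
`Build` that uses the kaniko template might look like:
```yaml
apiVersion: build.knative.dev/v1alpha1
kind: Build
metadata:
  name: kaniko-example        # hypothetical build name
spec:
  source:
    git:
      url: https://github.com/my/repo.git   # hypothetical source repository
      revision: master
  template:
    name: kaniko
    arguments:
    - name: IMAGE
      value: gcr.io/my-project/my-app       # registry path kaniko pushes to
```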
More information here:
https://github.com/knative/build-templates/tree/master/kaniko
`kaniko` is unrelated to `ko`.
### [`ko`](https://github.com/google/go-containerregistry/tree/master/cmd/ko)
`ko` is a tool designed to make development of Go apps on Kubernetes easier, by
abstracting away the container image being used, and instead referring to Go
packages by their [import paths](https://golang.org/doc/code.html#ImportPaths)
(e.g., `github.com/knative/serving/cmd/controller`).
The typical usage is `ko apply -f config.yaml`, which reads in the config YAML,
and looks for Go import paths representing runnable commands (i.e., `package
main`). When it finds a matching import path, `ko` builds the package using `go
build` then pushes a container image containing that binary on top of a base
image (by default, `gcr.io/distroless/base`) to
`$KO_DOCKER_REPO/unique-string`. After pushing those images, `ko` replaces
instances of matched import paths with fully-qualified references to the images
it pushed.
So if `ko apply` was passed this config:
```yaml
...
image: github.com/my/repo/cmd/foo
...
```
...it would produce YAML like:
```yaml
...
image: gcr.io/my-docker-repo/foo-zyxwvut@sha256:abcdef # image by digest
...
```
(This assumes that you have set the environment variable
`KO_DOCKER_REPO=gcr.io/my-docker-repo`)
`ko apply` then passes this generated YAML config to `kubectl apply`.
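In practice, that flow is just two commands; the repository name below is a
placeholder you would replace with your own:
```shell
# Tell ko where to push images, then resolve, build, push, and deploy
export KO_DOCKER_REPO=gcr.io/my-docker-repo
ko apply -f config/
```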
`ko` also supports:
* `ko publish` to simply push images and not produce configs.
* `ko resolve` to push images and output the generated configs, but not
`kubectl apply` them.
* `ko delete` to simply passthrough to `kubectl delete` for convenience.
`ko` is used during development and release of Knative components, but is not
intended to be required for _users_ of Knative -- they should only need to
`kubectl apply` released configs generated by `ko`.
### [`skaffold`](https://github.com/GoogleContainerTools/skaffold)
`skaffold` is a CLI tool to aid in iterative development for Kubernetes.
Typically, you would write a [YAML
config](https://github.com/GoogleContainerTools/skaffold/blob/master/examples/annotated-skaffold.yaml)
describing to Skaffold how to build and deploy your app, then run `skaffold
dev`, which watches your local source tree for changes and continuously
builds and deploys based on your config when changes are detected.
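Assuming a `skaffold.yaml` already exists at the repository root, the whole
loop is a single command:
```shell
# Watch sources, rebuild images, and redeploy on every change
skaffold dev
```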
Skaffold supports many pluggable implementations for building and deploying.
Skaffold contributors are working on support for Knative Build as a build
plugin, and could support Knative Serving as a deployment plugin.

View File

@ -6,105 +6,111 @@ necessary components first.
## Kibana and Elasticsearch
* To open the Kibana UI (the visualization tool for [Elasticsearch](https://info.elastic.co)),
start a local proxy with the following command:
```shell
kubectl proxy
```
This command starts a local proxy of Kibana on port 8001. For security reasons,
the Kibana UI is exposed only within the cluster.
* Navigate to the
[Kibana UI](http://localhost:8001/api/v1/namespaces/monitoring/services/kibana-logging/proxy/app/kibana).
*It might take a couple of minutes for the proxy to work*.
The Discover tab of the Kibana UI looks like this:
![Kibana UI Discover tab](./images/kibana-discover-tab-annotated.png)
You can change the time frame of logs Kibana displays in the upper right corner
of the screen. The main search bar is across the top of the Discover page.
* As more logs are ingested, new fields will be discovered. To have them indexed,
go to "Management" > "Index Patterns" > Refresh button (on top right) > "Refresh
fields".
<!-- TODO: create a video walkthrough of the Kibana UI -->
### Accessing configuration and revision logs
To access the logs for a configuration:
* Find the configuration's name with the following command:
```shell
kubectl get configurations
```
* Replace `<CONFIGURATION_NAME>` and enter the following search query in Kibana:
```text
kubernetes.labels.serving_knative_dev\/configuration: <CONFIGURATION_NAME>
```
To access logs for a revision:
* Find the revision's name with the following command:
```shell
kubectl get revisions
```
* Replace `<REVISION_NAME>` and enter the following search query in Kibana:
```text
kubernetes.labels.serving_knative_dev\/revision: <REVISION_NAME>
```
### Accessing build logs
To access logs for a [Knative Build](../build/README.md):
* Find the build's name, specified in the `.yaml` file:
```yaml
apiVersion: build.knative.dev/v1alpha1
kind: Build
metadata:
  name: <BUILD_NAME>
```
Or find build names with the following command:
```shell
kubectl get builds
```
* Replace `<BUILD_NAME>` and enter the following search query in Kibana:
```text
kubernetes.labels.build\-name: <BUILD_NAME>
```
### Accessing request logs
To access the request logs, enter the following search in Kibana:
```text
tag: "requestlog.logentry.istio-system"
```
Request logs contain details about requests served by the revision. Below is
a sample request log:
```text
@timestamp July 10th 2018, 10:09:28.000
destinationConfiguration configuration-example
destinationNamespace default
destinationRevision configuration-example-00001
destinationService configuration-example-00001-service.default.svc.cluster.local
latency 1.232902ms
method GET
protocol http
referer unknown
requestHost route-example.default.example.com
requestSize 0
responseCode 200
responseSize 36
severity Info
sourceNamespace istio-system
sourceService unknown
tag requestlog.logentry.istio-system
traceId 986d6faa02d49533
url /
userAgent curl/7.60.0
```
### Accessing end to end request traces

View File

@ -1,30 +1,31 @@
# Accessing metrics
You access metrics through the [Grafana](https://grafana.com/) UI. Grafana is
the visualization tool for [Prometheus](https://prometheus.io/).
1. To open Grafana, enter the following command:
```shell
kubectl port-forward -n monitoring $(kubectl get pods -n monitoring --selector=app=grafana --output=jsonpath="{.items..metadata.name}") 3000
```
* This starts a local proxy of Grafana on port 3000. For security reasons, the Grafana UI is exposed only within the cluster.
2. Navigate to the Grafana UI at [http://localhost:3000](http://localhost:3000).
3. Select the **Home** button on the top of the page to see the list of pre-installed dashboards (screenshot below):
![Knative Dashboards](./images/grafana1.png)
The following dashboards are pre-installed with Knative Serving:
* **Revision HTTP Requests:** HTTP request count, latency, and size metrics per revision and per configuration
* **Nodes:** CPU, memory, network, and disk metrics at node level
* **Pods:** CPU, memory, and network metrics at pod level
* **Deployment:** CPU, memory, and network metrics aggregated at deployment level
* **Istio, Mixer and Pilot:** Detailed Istio mesh, Mixer, and Pilot metrics
* **Kubernetes:** Dashboards giving insights into cluster health, deployments, and capacity usage
4. Set up an administrator account to modify or add dashboards by signing in with username: `admin` and password: `admin`.
* Before you expose the Grafana UI outside the cluster, make sure to change the password.
---

View File

@ -38,7 +38,7 @@ kubectl get route <route-name> -o yaml
The `conditions` in `status` provide the reason if there is any failure. For
details, see Knative
[Error Conditions and Reporting](https://github.com/knative/serving/blob/master/docs/spec/errors.md) (currently some of them
are not implemented yet).
## Check Revision status
@ -77,7 +77,7 @@ If you see this condition, check the following to continue debugging:
If you see other conditions, to debug further:
* Look up the meaning of the conditions in Knative
[Error Conditions and Reporting](https://github.com/knative/serving/blob/master/docs/spec/errors.md). Note: some of them
are not implemented yet. An alternative is to
[check Pod status](#check-pod-status).
* If you are using `BUILD` to deploy and the `BuildComplete` condition is not

View File

@ -1,72 +1,112 @@
# Monitoring, Logging and Tracing Installation
Knative Serving offers two different monitoring setups:
[Elasticsearch, Kibana, Prometheus and Grafana](#Elasticsearch,-Kibana,-Prometheus-&-Grafana-Setup)
or
[Stackdriver, Prometheus and Grafana](#Stackdriver,-Prometheus-&-Grafana-Setup).
You can install only one of these two setups; side-by-side installation of
the two is not supported.
## Elasticsearch, Kibana, Prometheus & Grafana Setup
If you installed the
[full Knative release](../install/README.md#Installing-Knative),
skip this step and continue to
[Create Elasticsearch Indices](#Create-Elasticsearch-Indices).
- Install Knative monitoring components from the root of the [Serving repository](https://github.com/knative/serving):
```shell
kubectl apply -R -f config/monitoring/100-common \
    -f config/monitoring/150-elasticsearch \
    -f third_party/config/monitoring/common \
    -f third_party/config/monitoring/elasticsearch \
    -f config/monitoring/200-common \
    -f config/monitoring/200-common/100-istio.yaml
```
- The installation is complete when the logging and monitoring components are all
reported `Running` or `Completed`:
```shell
kubectl get pods -n monitoring --watch
```
```
NAME READY STATUS RESTARTS AGE
elasticsearch-logging-0 1/1 Running 0 2d
elasticsearch-logging-1 1/1 Running 0 2d
fluentd-ds-5kc85 1/1 Running 0 2d
fluentd-ds-vhrcq 1/1 Running 0 2d
fluentd-ds-xghk9 1/1 Running 0 2d
grafana-798cf569ff-v4q74 1/1 Running 0 2d
kibana-logging-7d474fbb45-6qb8x 1/1 Running 0 2d
kube-state-metrics-75bd4f5b8b-8t2h2 4/4 Running 0 2d
node-exporter-cr6bh 2/2 Running 0 2d
node-exporter-mf6k7 2/2 Running 0 2d
node-exporter-rhzr7 2/2 Running 0 2d
prometheus-system-0 1/1 Running 0 2d
prometheus-system-1 1/1 Running 0 2d
```
Use CTRL + C to exit watch mode.
### Create Elasticsearch Indices
To visualize logs with Kibana, you need to set which Elasticsearch indices to
explore. We will create two indices in Elasticsearch: `Logstash` for
application logs and `Zipkin` for request traces.
- To open the Kibana UI (the visualization tool for
[Elasticsearch](https://info.elastic.co)), start a local proxy with the
following command:
```shell
kubectl proxy
```
This command starts a local proxy of Kibana on port 8001. For security
reasons, the Kibana UI is exposed only within the cluster.
- Navigate to the
[Kibana UI](http://localhost:8001/api/v1/namespaces/monitoring/services/kibana-logging/proxy/app/kibana).
_It might take a couple of minutes for the proxy to work_.
- Within the "Configure an index pattern" page, enter `logstash-*` to
`Index pattern` and select `@timestamp` from `Time Filter field name` and
click on `Create` button.
![Create logstash-* index](images/kibana-landing-page-configure-index.png)
- To create the second index, select the `Create Index Pattern` button on the top left
of the page. Enter `zipkin*` in the `Index pattern` field, select `timestamp_millis`
from `Time Filter field name`, and click the `Create` button.
## Stackdriver (logs), Prometheus & Grafana Setup
If your Knative Serving is not built on a Google Cloud Platform (GCP) based
cluster or you want to send logs to another GCP project, you need to build your
own Fluentd image and modify the configuration first:
1. Install
[Fluentd image on Knative Serving](https://github.com/knative/serving/blob/master/image/fluentd/README.md).
2. [Set up a logging plugin](setting-up-a-logging-plugin.md).
3. Install Knative monitoring components:
```shell
kubectl apply -R -f config/monitoring/100-common \
-f config/monitoring/150-stackdriver-prod \
-f third_party/config/monitoring/common \
-f config/monitoring/200-common \
-f config/monitoring/200-common/100-istio.yaml
```
## Learn More
- Learn more about accessing logs, metrics, and traces:
- [Accessing Logs](./accessing-logs.md)
- [Accessing Metrics](./accessing-metrics.md)
- [Accessing Traces](./accessing-traces.md)
---

View File

@ -102,13 +102,19 @@ service "gitwebhook" created
1. Retrieve the hostname for this service, using the following command:
```shell
$ kubectl get ksvc gitwebhook \
-o=custom-columns=NAME:.metadata.name,DOMAIN:.status.domain
NAME DOMAIN
gitwebhook gitwebhook.default.example.com
```
> Note: `ksvc` is an alias for `services.serving.knative.dev`. If you have
an older version (version 0.1.0) of Knative installed, you'll need to use
the long name until you upgrade to version 0.1.1 or higher. See
[Checking Knative Installation Version](../../../install/check-install-version.md)
to learn how to see what version you have installed.
1. Browse on GitHub to the repository where you want to create a webhook.
1. Click **Settings**, then **Webhooks**, then **Add webhook**.
1. Enter the **Payload URL** as `http://{DOMAIN}`, with the value of DOMAIN listed above.

View File

@ -130,11 +130,17 @@ folder) you're ready to build and deploy the sample app.
1. To find the URL for your service, use
```
kubectl get ksvc helloworld-csharp -o=custom-columns=NAME:.metadata.name,DOMAIN:.status.domain
NAME DOMAIN
helloworld-csharp helloworld-csharp.default.example.com
```
> Note: `ksvc` is an alias for `services.serving.knative.dev`. If you have
an older version (version 0.1.0) of Knative installed, you'll need to use
the long name until you upgrade to version 0.1.1 or higher. See
[Checking Knative Installation Version](../../../install/check-install-version.md)
to learn how to see what version you have installed.
1. Now you can make a request to your app to see the result. Replace
`{IP_ADDRESS}` with the address you see returned in the previous step.

View File

@ -0,0 +1,28 @@
# App artifacts
/_build
/db
/deps
/*.ez
# Generated on crash by the VM
erl_crash.dump
# Generated on crash by NPM
npm-debug.log
/assets/package-lock.json
# Static artifacts
/assets/node_modules
# Since we are building assets from assets/,
# we ignore priv/static. You may want to comment
# this depending on your deployment strategy.
/priv/static/
# Files matching config/*.secret.exs pattern contain sensitive
# data and you should not commit them into version control.
#
# Alternatively, you may comment the line below and commit the
# secrets files as long as you replace their contents by environment
# variables.
/config/*.secret.exs

View File

@ -0,0 +1,27 @@
FROM elixir:alpine
ARG APP_NAME=hello
ARG PHOENIX_SUBDIR=.
ENV MIX_ENV=prod REPLACE_OS_VARS=true TERM=xterm
WORKDIR /opt/app
RUN apk update \
&& apk --no-cache --update add nodejs nodejs-npm \
&& mix local.rebar --force \
&& mix local.hex --force
COPY . .
RUN mix do deps.get, deps.compile, compile
RUN cd ${PHOENIX_SUBDIR}/assets \
&& npm install \
&& ./node_modules/brunch/bin/brunch build -p \
&& cd .. \
&& mix phx.digest
RUN mix release --env=prod --verbose \
&& mv _build/prod/rel/${APP_NAME} /opt/release \
&& mv /opt/release/bin/${APP_NAME} /opt/release/bin/start_server
FROM alpine:latest
RUN apk update && apk --no-cache --update add bash openssl-dev
ENV PORT=8080 MIX_ENV=prod REPLACE_OS_VARS=true
WORKDIR /opt/app
EXPOSE 8080
COPY --from=0 /opt/release .
ENV RUNNER_LOG_DIR /var/log
CMD ["/opt/app/bin/start_server", "foreground", "boot_var=/tmp"]

View File

@ -0,0 +1,300 @@
# Hello World - Elixir Sample
A simple web application written in [Elixir](https://elixir-lang.org/) using the
[Phoenix Framework](https://phoenixframework.org/).
The application prints all environment variables to the main page.
# Set up Elixir and Phoenix Locally
Following the [Phoenix Installation Guide](https://hexdocs.pm/phoenix/installation.html)
is the best way to get your computer set up for developing,
building, running, and packaging Elixir Web applications.
# Running Locally
To start your Phoenix server (the commands are also collected into a single session below):
* Install dependencies with `mix deps.get`
* Install Node.js dependencies with `cd assets && npm install`
* Start Phoenix endpoint with `mix phx.server`
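Collected into one shell session, and assuming Elixir, Phoenix, and Node.js
are already installed, those steps are:
```shell
mix deps.get                    # install Elixir dependencies
(cd assets && npm install)      # install Node.js dependencies
mix phx.server                  # start the Phoenix endpoint
```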
Now you can visit [`localhost:4000`](http://localhost:4000) from your browser.
# Recreating the sample code
1. Generate a new project.
```shell
mix phoenix.new helloelixir
```
When asked if you want to `Fetch and install dependencies? [Yn]`, select `y`.
1. Follow the directions in the output to change into the new directory, then
start your local server with `mix phoenix.server`
1. In the new directory, create a new Dockerfile for packaging
your application for deployment
```docker
# Start from a base image for elixir
FROM elixir:alpine
# Set up Elixir and Phoenix
ARG APP_NAME=hello
ARG PHOENIX_SUBDIR=.
ENV MIX_ENV=prod REPLACE_OS_VARS=true TERM=xterm
WORKDIR /opt/app
# Compile assets.
RUN apk update \
&& apk --no-cache --update add nodejs nodejs-npm \
&& mix local.rebar --force \
&& mix local.hex --force
COPY . .
# Download and compile dependencies, then compile Web app.
RUN mix do deps.get, deps.compile, compile
RUN cd ${PHOENIX_SUBDIR}/assets \
&& npm install \
&& ./node_modules/brunch/bin/brunch build -p \
&& cd .. \
&& mix phx.digest
# Create a release version of the application
RUN mix release --env=prod --verbose \
&& mv _build/prod/rel/${APP_NAME} /opt/release \
&& mv /opt/release/bin/${APP_NAME} /opt/release/bin/start_server
# Prepare final layer
FROM alpine:latest
RUN apk update && apk --no-cache --update add bash openssl-dev
ENV PORT=8080 MIX_ENV=prod REPLACE_OS_VARS=true
WORKDIR /opt/app
# Document that the service listens on port 8080.
EXPOSE 8080
COPY --from=0 /opt/release .
ENV RUNNER_LOG_DIR /var/log
# Command to execute the application.
CMD ["/opt/app/bin/start_server", "foreground", "boot_var=/tmp"]
```
1. Create a new file, `service.yaml` and copy the following Service
definition into the file. Make sure to replace `{username}` with
your Docker Hub username.
```yaml
apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
name: helloworld-elixir
namespace: default
spec:
runLatest:
configuration:
revisionTemplate:
spec:
container:
image: docker.io/{username}/helloworld-elixir
env:
- name: TARGET
value: "elixir Sample v1"
```
# Building and deploying the sample
The sample in this directory is ready to build and deploy without changes.
You can deploy the sample as is, or use the version you created by following the
directions above.
1. Generate a new `secret_key_base` in the `config/prod.secret.exs` file.
Phoenix applications use a secrets file on production deployments and, by
default, that file is not checked into source control. We provide the
shell of an example in `config/prod.secret.exs.sample`, and you can use the
following command to generate a new prod secrets file.
```shell
SECRET_KEY_BASE=$(elixir -e ":crypto.strong_rand_bytes(48) |> Base.encode64 |> IO.puts")
sed "s|SECRET+KEY+BASE|$SECRET_KEY_BASE|" config/prod.secret.exs.sample >config/prod.secret.exs
```
1. Use Docker to build the sample code into a container. To build and push
with Docker Hub, run these commands replacing `{username}` with your Docker
Hub username:
```shell
# Build the container on your local machine
docker build -t {username}/helloworld-elixir .
# Push the container to docker registry
docker push {username}/helloworld-elixir
```
1. After the build has completed and the container is pushed to Docker Hub, you
can deploy the app into your cluster. Ensure that the container image value
in `service.yaml` matches the container you built in
the previous step. Apply the configuration using `kubectl`:
```shell
kubectl apply -f service.yaml
```
1. Now that your service is created, Knative will perform the following steps:
* Create a new immutable revision for this version of the app.
* Network programming to create a route, ingress, service, and load balancer for your app.
* Automatically scale your pods up and down (including to zero active pods).
1. To find the IP address for your service, use
`kubectl get svc knative-ingressgateway -n istio-system` to get the ingress IP for your
cluster. If your cluster is new, it may take some time for the service to be assigned
an external IP address.
```
kubectl get svc knative-ingressgateway -n istio-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
knative-ingressgateway LoadBalancer 10.35.254.218 35.225.171.32 80:32380/TCP,443:32390/TCP,32400:32400/TCP 1h
```
1. To find the URL for your service, use
```
kubectl get ksvc helloworld-elixir -o=custom-columns=NAME:.metadata.name,DOMAIN:.status.domain
NAME DOMAIN
helloworld-elixir helloworld-elixir.default.example.com
```
> Note: `ksvc` is an alias for `services.serving.knative.dev`. If you have
an older version (version 0.1.0) of Knative installed, you'll need to use
the long name until you upgrade to version 0.1.1 or higher. See
[Checking Knative Installation Version](../../../install/check-install-version.md)
to learn how to see what version you have installed.
1. Now you can make a request to your app to see the results. Replace
`{IP_ADDRESS}` with the address you see returned in the previous step.
```shell
curl -H "Host: helloworld-elixir.default.example.com" http://{IP_ADDRESS}
...
# HTML from your application is returned.
```
Here is the HTML returned from our deployed sample application:
```HTML
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1">
<meta name="description" content="">
<meta name="author" content="">
<title>Hello Knative</title>
<link rel="stylesheet" type="text/css" href="/css/app-833cc7e8eeed7a7953c5a02e28130dbd.css?vsn=d">
</head>
<body>
<div class="container">
<header class="header">
<nav role="navigation">
</nav>
</header>
<p class="alert alert-info" role="alert"></p>
<p class="alert alert-danger" role="alert"></p>
<main role="main">
<div class="jumbotron">
<h2>Welcome to Knative and Elixir</h2>
<p>$TARGET = elixir Sample v1</p>
</div>
<h3>Environment</h3>
<ul>
<li>BINDIR = /opt/app/erts-9.3.2/bin</li>
<li>DEST_SYS_CONFIG_PATH = /opt/app/var/sys.config</li>
<li>DEST_VMARGS_PATH = /opt/app/var/vm.args</li>
<li>DISTILLERY_TASK = foreground</li>
<li>EMU = beam</li>
<li>ERL_LIBS = /opt/app/lib</li>
<li>ERL_OPTS = </li>
<li>ERTS_DIR = /opt/app/erts-9.3.2</li>
<li>ERTS_LIB_DIR = /opt/app/erts-9.3.2/../lib</li>
<li>ERTS_VSN = 9.3.2</li>
<li>HELLOWORLD_ELIXIR_00001_SERVICE_PORT = tcp://10.35.241.50:80</li>
<li>HELLOWORLD_ELIXIR_00001_SERVICE_PORT_80_TCP = tcp://10.35.241.50:80</li>
<li>HELLOWORLD_ELIXIR_00001_SERVICE_PORT_80_TCP_ADDR = 10.35.241.50</li>
<li>HELLOWORLD_ELIXIR_00001_SERVICE_PORT_80_TCP_PORT = 80</li>
<li>HELLOWORLD_ELIXIR_00001_SERVICE_PORT_80_TCP_PROTO = tcp</li>
<li>HELLOWORLD_ELIXIR_00001_SERVICE_SERVICE_HOST = 10.35.241.50</li>
<li>HELLOWORLD_ELIXIR_00001_SERVICE_SERVICE_PORT = 80</li>
<li>HELLOWORLD_ELIXIR_00001_SERVICE_SERVICE_PORT_HTTP = 80</li>
<li>HELLOWORLD_ELIXIR_PORT = tcp://10.35.253.90:80</li>
<li>HELLOWORLD_ELIXIR_PORT_80_TCP = tcp://10.35.253.90:80</li>
<li>HELLOWORLD_ELIXIR_PORT_80_TCP_ADDR = 10.35.253.90</li>
<li>HELLOWORLD_ELIXIR_PORT_80_TCP_PORT = 80</li>
<li>HELLOWORLD_ELIXIR_PORT_80_TCP_PROTO = tcp</li>
<li>HELLOWORLD_ELIXIR_SERVICE_HOST = 10.35.253.90</li>
<li>HELLOWORLD_ELIXIR_SERVICE_PORT = 80</li>
<li>HELLOWORLD_ELIXIR_SERVICE_PORT_HTTP = 80</li>
<li>HOME = /root</li>
<li>HOSTNAME = helloworld-elixir-00001-deployment-84f68946b4-76hcv</li>
<li>KUBERNETES_PORT = tcp://10.35.240.1:443</li>
<li>KUBERNETES_PORT_443_TCP = tcp://10.35.240.1:443</li>
<li>KUBERNETES_PORT_443_TCP_ADDR = 10.35.240.1</li>
<li>KUBERNETES_PORT_443_TCP_PORT = 443</li>
<li>KUBERNETES_PORT_443_TCP_PROTO = tcp</li>
<li>KUBERNETES_SERVICE_HOST = 10.35.240.1</li>
<li>KUBERNETES_SERVICE_PORT = 443</li>
<li>KUBERNETES_SERVICE_PORT_HTTPS = 443</li>
<li>LD_LIBRARY_PATH = /opt/app/erts-9.3.2/lib:</li>
<li>MIX_ENV = prod</li>
<li>NAME = hello@127.0.0.1</li>
<li>NAME_ARG = -name hello@127.0.0.1</li>
<li>NAME_TYPE = -name</li>
<li>OLDPWD = /opt/app</li>
<li>OTP_VER = 20</li>
<li>PATH = /opt/app/erts-9.3.2/bin:/opt/app/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin</li>
<li>PORT = 8080</li>
<li>PROGNAME = opt/app/releases/0.0.1/hello.sh</li>
<li>PWD = /opt/app</li>
<li>RELEASES_DIR = /opt/app/releases</li>
<li>RELEASE_CONFIG_DIR = /opt/app</li>
<li>RELEASE_ROOT_DIR = /opt/app</li>
<li>REL_NAME = hello</li>
<li>REL_VSN = 0.0.1</li>
<li>REPLACE_OS_VARS = true</li>
<li>ROOTDIR = /opt/app</li>
<li>RUNNER_LOG_DIR = /var/log</li>
<li>RUN_ERL_ENV = </li>
<li>SHLVL = 1</li>
<li>SRC_SYS_CONFIG_PATH = /opt/app/releases/0.0.1/sys.config</li>
<li>SRC_VMARGS_PATH = /opt/app/releases/0.0.1/vm.args</li>
<li>SYS_CONFIG_PATH = /opt/app/var/sys.config</li>
<li>TARGET = elixir Sample v1</li>
<li>TERM = xterm</li>
<li>VMARGS_PATH = /opt/app/var/vm.args</li>
</ul>
</main>
</div> <!-- /container -->
<script src="/js/app-930ab1950e10d7b5ab5083423c28f06e.js?vsn=d"></script>
</body>
</html>
```
## Removing the sample app deployment
To remove the sample app from your cluster, delete the service record:
```shell
kubectl delete -f service.yaml
```

View File

@ -0,0 +1,62 @@
exports.config = {
// See http://brunch.io/#documentation for docs.
files: {
javascripts: {
joinTo: "js/app.js"
// To use a separate vendor.js bundle, specify two files path
// http://brunch.io/docs/config#-files-
// joinTo: {
// "js/app.js": /^js/,
// "js/vendor.js": /^(?!js)/
// }
//
// To change the order of concatenation of files, explicitly mention here
// order: {
// before: [
// "vendor/js/jquery-2.1.1.js",
// "vendor/js/bootstrap.min.js"
// ]
// }
},
stylesheets: {
joinTo: "css/app.css"
},
templates: {
joinTo: "js/app.js"
}
},
conventions: {
// This option sets where we should place non-css and non-js assets in.
// By default, we set this to "/assets/static". Files in this directory
// will be copied to `paths.public`, which is "priv/static" by default.
assets: /^(static)/
},
// Phoenix paths configuration
paths: {
// Dependencies and current project directories to watch
watched: ["static", "css", "js", "vendor"],
// Where to compile files to
public: "../priv/static"
},
// Configure your plugins
plugins: {
babel: {
// Do not use ES6 compiler in vendor code
ignore: [/vendor/]
}
},
modules: {
autoRequire: {
"js/app.js": ["js/app"]
}
},
npm: {
enabled: true
}
};

View File

@ -0,0 +1 @@
/* This file is for your main application css. */

File diff suppressed because one or more lines are too long

View File

@ -0,0 +1,21 @@
// Brunch automatically concatenates all files in your
// watched paths. Those paths can be configured at
// config.paths.watched in "brunch-config.js".
//
// However, those files will only be executed if
// explicitly imported. The only exception are files
// in vendor, which are never wrapped in imports and
// therefore are always executed.
// Import dependencies
//
// If you no longer want to use a dependency, remember
// to also remove its path from "config.paths.watched".
import "phoenix_html"
// Import local files
//
// Local files can be imported directly using relative
// paths "./socket" or full ones "web/static/js/socket".
// import socket from "./socket"

View File

@ -0,0 +1,62 @@
// NOTE: The contents of this file will only be executed if
// you uncomment its entry in "assets/js/app.js".
// To use Phoenix channels, the first step is to import Socket
// and connect at the socket path in "lib/web/endpoint.ex":
import {Socket} from "phoenix"
let socket = new Socket("/socket", {params: {token: window.userToken}})
// When you connect, you'll often need to authenticate the client.
// For example, imagine you have an authentication plug, `MyAuth`,
// which authenticates the session and assigns a `:current_user`.
// If the current user exists you can assign the user's token in
// the connection for use in the layout.
//
// In your "lib/web/router.ex":
//
// pipeline :browser do
// ...
// plug MyAuth
// plug :put_user_token
// end
//
// defp put_user_token(conn, _) do
// if current_user = conn.assigns[:current_user] do
// token = Phoenix.Token.sign(conn, "user socket", current_user.id)
// assign(conn, :user_token, token)
// else
// conn
// end
// end
//
// Now you need to pass this token to JavaScript. You can do so
// inside a script tag in "lib/web/templates/layout/app.html.eex":
//
// <script>window.userToken = "<%= assigns[:user_token] %>";</script>
//
// You will need to verify the user token in the "connect/2" function
// in "lib/web/channels/user_socket.ex":
//
// def connect(%{"token" => token}, socket) do
// # max_age: 1209600 is equivalent to two weeks in seconds
// case Phoenix.Token.verify(socket, "user socket", token, max_age: 1209600) do
// {:ok, user_id} ->
// {:ok, assign(socket, :user, user_id)}
// {:error, reason} ->
// :error
// end
// end
//
// Finally, pass the token on connect as below. Or remove it
// from connect if you don't care about authentication.
socket.connect()
// Now that you are connected, you can join channels with a topic:
let channel = socket.channel("topic:subtopic", {})
channel.join()
.receive("ok", resp => { console.log("Joined successfully", resp) })
.receive("error", resp => { console.log("Unable to join", resp) })
export default socket

View File

@ -0,0 +1,18 @@
{
"repository": {},
"license": "MIT",
"scripts": {
"deploy": "brunch build --production",
"watch": "brunch watch --stdin"
},
"dependencies": {
"phoenix": "file:../deps/phoenix",
"phoenix_html": "file:../deps/phoenix_html"
},
"devDependencies": {
"babel-brunch": "6.1.1",
"brunch": "2.10.9",
"clean-css-brunch": "2.10.0",
"uglify-js-brunch": "2.10.0"
}
}

Binary file not shown.


Binary file not shown.


View File

@ -0,0 +1,5 @@
# See http://www.robotstxt.org/robotstxt.html for documentation on how to use the robots.txt file
#
# To ban all spiders from the entire site uncomment the next two lines:
# User-agent: *
# Disallow: /

View File

@ -0,0 +1,23 @@
# This file is responsible for configuring your application
# and its dependencies with the aid of the Mix.Config module.
#
# This configuration file is loaded before any dependency and
# is restricted to this project.
use Mix.Config
# Configures the endpoint
config :hello, HelloWeb.Endpoint,
url: [host: "localhost"],
secret_key_base: "P0fv0U47j6mdL40sn3f3BIuvEyIdbxUq+yKc8Mcbwig5105brf2Dzio5ANjTVRUo",
render_errors: [view: HelloWeb.ErrorView, accepts: ~w(html json)]
#pubsub: [name: Hello.PubSub,
# adapter: Phoenix.PubSub.PG2]
# Configures Elixir's Logger
config :logger, :console,
format: "$time $metadata[$level] $message\n",
metadata: [:user_id]
# Import environment specific config. This must remain at the bottom
# of this file so it overrides the configuration defined above.
import_config "#{Mix.env}.exs"

View File

@ -0,0 +1,49 @@
use Mix.Config
# For development, we disable any cache and enable
# debugging and code reloading.
#
# The watchers configuration can be used to run external
# watchers to your application. For example, we use it
# with brunch.io to recompile .js and .css sources.
config :hello, HelloWeb.Endpoint,
http: [port: 4000],
debug_errors: true,
code_reloader: true,
check_origin: false,
watchers: [node: ["node_modules/brunch/bin/brunch", "watch", "--stdin",
cd: Path.expand("../assets", __DIR__)]]
# ## SSL Support
#
# In order to use HTTPS in development, a self-signed
# certificate can be generated by running the following
# command from your terminal:
#
# openssl req -new -newkey rsa:4096 -days 365 -nodes -x509 -subj "/C=US/ST=Denial/L=Springfield/O=Dis/CN=www.example.com" -keyout priv/server.key -out priv/server.pem
#
# The `http:` config above can be replaced with:
#
# https: [port: 4000, keyfile: "priv/server.key", certfile: "priv/server.pem"],
#
# If desired, both `http:` and `https:` keys can be
# configured to run both http and https servers on
# different ports.
# Watch static and templates for browser reloading.
config :hello, HelloWeb.Endpoint,
live_reload: [
patterns: [
~r{priv/static/.*(js|css|png|jpeg|jpg|gif|svg)$},
~r{priv/gettext/.*(po)$},
~r{lib/hello_web/views/.*(ex)$},
~r{lib/hello_web/templates/.*(eex)$}
]
]
# Do not include metadata nor timestamps in development logs
config :logger, :console, format: "[$level] $message\n"
# Set a higher stacktrace during development. Avoid configuring such
# in production as building large stacktraces may be expensive.
config :phoenix, :stacktrace_depth, 20

View File

@ -0,0 +1,67 @@
use Mix.Config
# For production, we often load configuration from external
# sources, such as your system environment. For this reason,
# you won't find the :http configuration below, but set inside
# HelloWeb.Endpoint.init/2 when load_from_system_env is
# true. Any dynamic configuration should be done there.
#
# Don't forget to configure the url host to something meaningful,
# Phoenix uses this information when generating URLs.
#
# Finally, we also include the path to a cache manifest
# containing the digested version of static files. This
# manifest is generated by the mix phx.digest task
# which you typically run after static files are built.
config :hello, HelloWeb.Endpoint,
load_from_system_env: false,
http: [port: 8080],
check_origin: false,
server: true,
root: ".",
cache_static_manifest: "priv/static/cache_manifest.json"
# Do not print debug messages in production
config :logger, level: :info
# ## SSL Support
#
# To get SSL working, you will need to add the `https` key
# to the previous section and set your `:url` port to 443:
#
# config :hello, HelloWeb.Endpoint,
# ...
# url: [host: "example.com", port: 443],
# https: [:inet6,
# port: 443,
# keyfile: System.get_env("SOME_APP_SSL_KEY_PATH"),
# certfile: System.get_env("SOME_APP_SSL_CERT_PATH")]
#
# Where those two env variables return an absolute path to
# the key and cert in disk or a relative path inside priv,
# for example "priv/ssl/server.key".
#
# We also recommend setting `force_ssl`, ensuring no data is
# ever sent via http, always redirecting to https:
#
# config :hello, HelloWeb.Endpoint,
# force_ssl: [hsts: true]
#
# Check `Plug.SSL` for all available options in `force_ssl`.
# ## Using releases
#
# If you are doing OTP releases, you need to instruct Phoenix
# to start the server for all endpoints:
#
# config :phoenix, :serve_endpoints, true
#
# Alternatively, you can configure exactly which server to
# start per endpoint:
#
# config :hello, HelloWeb.Endpoint, server: true
#
# Finally import the config/prod.secret.exs
# which should be versioned separately.
import_config "prod.secret.exs"

View File

@ -0,0 +1,12 @@
use Mix.Config
# In this file, we keep production configuration that
# you'll likely want to automate and keep away from
# your version control system.
#
# You should document the content of this
# file or create a script for recreating it, since it's
# kept out of version control and might be hard to recover
# or recreate for your teammates (or yourself later on).
config :hello, HelloWeb.Endpoint,
secret_key_base: "SECRET+KEY+BASE"

View File

@ -0,0 +1,10 @@
use Mix.Config
# We don't run a server during test. If one is required,
# you can enable the server option below.
config :hello, HelloWeb.Endpoint,
http: [port: 4001],
server: false
# Print only warnings and errors during test
config :logger, level: :warn

View File

@ -0,0 +1,9 @@
defmodule Hello do
@moduledoc """
Hello keeps the contexts that define your domain
and business logic.
Contexts are also responsible for managing your data, regardless
if it comes from the database, an external API or others.
"""
end

View File

@ -0,0 +1,31 @@
defmodule Hello.Application do
use Application
# See https://hexdocs.pm/elixir/Application.html
# for more information on OTP Applications
def start(_type, _args) do
IO.puts :stderr, "Application starting up"
import Supervisor.Spec
# Define workers and child supervisors to be supervised
children = [
# Start the endpoint when the application starts
supervisor(HelloWeb.Endpoint, []),
# Start your own worker by calling: Hello.Worker.start_link(arg1, arg2, arg3)
# worker(Hello.Worker, [arg1, arg2, arg3]),
]
# See https://hexdocs.pm/elixir/Supervisor.html
# for other strategies and supported options
opts = [strategy: :one_for_one, name: Hello.Supervisor]
Supervisor.start_link(children, opts)
end
# Tell Phoenix to update the endpoint configuration
# whenever the application is updated.
def config_change(changed, _new, removed) do
IO.puts :stderr, "Config changed"
HelloWeb.Endpoint.config_change(changed, removed)
:ok
end
end

View File

@ -0,0 +1,67 @@
defmodule HelloWeb do
@moduledoc """
The entrypoint for defining your web interface, such
as controllers, views, channels and so on.
This can be used in your application as:
use HelloWeb, :controller
use HelloWeb, :view
The definitions below will be executed for every view,
controller, etc, so keep them short and clean, focused
on imports, uses and aliases.
Do NOT define functions inside the quoted expressions
below. Instead, define any helper function in modules
and import those modules here.
"""
def controller do
quote do
use Phoenix.Controller, namespace: HelloWeb
import Plug.Conn
import HelloWeb.Router.Helpers
import HelloWeb.Gettext
end
end
def view do
quote do
use Phoenix.View, root: "lib/hello_web/templates",
namespace: HelloWeb
# Import convenience functions from controllers
import Phoenix.Controller, only: [get_flash: 2, view_module: 1]
# Use all HTML functionality (forms, tags, etc)
use Phoenix.HTML
import HelloWeb.Router.Helpers
import HelloWeb.ErrorHelpers
import HelloWeb.Gettext
end
end
def router do
quote do
use Phoenix.Router
import Plug.Conn
import Phoenix.Controller
end
end
def channel do
quote do
use Phoenix.Channel
import HelloWeb.Gettext
end
end
@doc """
When used, dispatch to the appropriate controller/view/etc.
"""
defmacro __using__(which) when is_atom(which) do
apply(__MODULE__, which, [])
end
end

View File

@ -0,0 +1,38 @@
defmodule HelloWeb.UserSocket do
use Phoenix.Socket
## Channels
# channel "room:*", HelloWeb.RoomChannel
## Transports
transport :longpoll, Phoenix.Transports.LongPoll
# transport :longpoll, Phoenix.Transports.LongPoll
# Socket params are passed from the client and can
# be used to verify and authenticate a user. After
# verification, you can put default assigns into
# the socket that will be set for all channels, ie
#
# {:ok, assign(socket, :user_id, verified_user_id)}
#
# To deny connection, return `:error`.
#
# See `Phoenix.Token` documentation for examples in
# performing token verification on connect.
def connect(_params, socket) do
IO.puts :stderr, "UserSocket.connect called"
{:ok, socket}
end
# Socket id's are topics that allow you to identify all sockets for a given user:
#
# def id(socket), do: "user_socket:#{socket.assigns.user_id}"
#
# Would allow you to broadcast a "disconnect" event and terminate
# all active sockets and channels for a given user:
#
# HelloWeb.Endpoint.broadcast("user_socket:#{user.id}", "disconnect", %{})
#
# Returning `nil` makes this socket anonymous.
def id(_socket), do: nil
end

View File

@ -0,0 +1,14 @@
defmodule HelloWeb.PageController do
use HelloWeb, :controller
def index(conn, _params) do
env = System.get_env()
target = Map.get(env, "TARGET")
render conn, "index.html",
title: "Hello Knative",
greeting: "Welcome to Knative and Elixir",
target: target,
env: env
end
end

View File

@ -0,0 +1,57 @@
defmodule HelloWeb.Endpoint do
use Phoenix.Endpoint, otp_app: :hello
socket "/socket", HelloWeb.UserSocket
# Serve at "/" the static files from "priv/static" directory.
#
# You should set gzip to true if you are running phoenix.digest
# when deploying your static files in production.
plug Plug.Static,
at: "/", from: :hello, gzip: false,
only: ~w(css fonts images js favicon.ico robots.txt)
# Code reloading can be explicitly enabled under the
# :code_reloader configuration of your endpoint.
if code_reloading? do
socket "/phoenix/live_reload/socket", Phoenix.LiveReloader.Socket
plug Phoenix.LiveReloader
plug Phoenix.CodeReloader
end
plug Plug.Logger
plug Plug.Parsers,
parsers: [:urlencoded, :multipart, :json],
pass: ["*/*"],
json_decoder: Poison
plug Plug.MethodOverride
plug Plug.Head
# The session will be stored in the cookie and signed,
# this means its contents can be read but not tampered with.
# Set :encryption_salt if you would also like to encrypt it.
plug Plug.Session,
store: :cookie,
key: "_hello_key",
signing_salt: "M0cGJtXU"
plug HelloWeb.Router
@doc """
Callback invoked for dynamically configuring the endpoint.
It receives the endpoint configuration and checks if
configuration should be loaded from the system environment.
"""
def init(_key, config) do
IO.puts :stderr, "called HelloWeb.Endpoint.init"
if config[:load_from_system_env] do
port = System.get_env("PORT") || raise "expected the PORT environment variable to be set"
{:ok, Keyword.put(config, :http, [:inet6, port: port])}
else
{:ok, config}
end
end
end

View File

@ -0,0 +1,24 @@
defmodule HelloWeb.Gettext do
@moduledoc """
A module providing Internationalization with a gettext-based API.
By using [Gettext](https://hexdocs.pm/gettext),
your module gains a set of macros for translations, for example:
import HelloWeb.Gettext
# Simple translation
gettext "Here is the string to translate"
# Plural translation
ngettext "Here is the string to translate",
"Here are the strings to translate",
3
# Domain-based translation
dgettext "errors", "Here is the error message to translate"
See the [Gettext Docs](https://hexdocs.pm/gettext) for detailed usage.
"""
use Gettext, otp_app: :hello
end

View File

@ -0,0 +1,27 @@
defmodule HelloWeb.Router do
use HelloWeb, :router
pipeline :browser do
plug :accepts, ["html"]
plug :fetch_session
plug :fetch_flash
plug :protect_from_forgery
plug :put_secure_browser_headers
end
pipeline :api do
plug :accepts, ["json"]
end
scope "/", HelloWeb do
pipe_through :browser # Use the default browser stack
get "/", PageController, :index
get "/:name", PageController, :show
end
# Other scopes may use custom stacks.
# scope "/api", HelloWeb do
# pipe_through :api
# end
end

View File

@ -0,0 +1,32 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1">
<meta name="description" content="">
<meta name="author" content="">
<title><%= @title %></title>
<link rel="stylesheet" type="text/css" href="<%= static_path(@conn, "/css/app.css") %>">
</head>
<body>
<div class="container">
<header class="header">
<nav role="navigation">
</nav>
</header>
<p class="alert alert-info" role="alert"><%= get_flash(@conn, :info) %></p>
<p class="alert alert-danger" role="alert"><%= get_flash(@conn, :error) %></p>
<main role="main">
<%= render @view_module, @view_template, assigns %>
</main>
</div> <!-- /container -->
<script src="<%= static_path(@conn, "/js/app.js") %>"></script>
</body>
</html>

View File

@ -0,0 +1,12 @@
<div class="jumbotron">
<h2><%= @greeting %></h2>
<p>$TARGET = <%= @target %></p>
</div>
<h3>Environment</h3>
<ul>
<%= for key <- Enum.sort(Map.keys(@env)) do %>
<li><%= key %> = <%= Map.get(@env, key) %></li>
<% end %>
</ul>

View File

@ -0,0 +1,44 @@
defmodule HelloWeb.ErrorHelpers do
@moduledoc """
Conveniences for translating and building error messages.
"""
use Phoenix.HTML
@doc """
Generates tag for inlined form input errors.
"""
def error_tag(form, field) do
Enum.map(Keyword.get_values(form.errors, field), fn (error) ->
content_tag :span, translate_error(error), class: "help-block"
end)
end
@doc """
Translates an error message using gettext.
"""
def translate_error({msg, opts}) do
# When using gettext, we typically pass the strings we want
# to translate as a static argument:
#
# # Translate "is invalid" in the "errors" domain
# dgettext "errors", "is invalid"
#
# # Translate the number of files with plural rules
# dngettext "errors", "1 file", "%{count} files", count
#
# Because the error messages we show in our forms and APIs
# are defined inside Ecto, we need to translate them dynamically.
# This requires us to call the Gettext module passing our gettext
# backend as first argument.
#
# Note we use the "errors" domain, which means translations
# should be written to the errors.po file. The :count option is
# set by Ecto and indicates we should also apply plural rules.
if count = opts[:count] do
Gettext.dngettext(HelloWeb.Gettext, "errors", msg, msg, count, opts)
else
Gettext.dgettext(HelloWeb.Gettext, "errors", msg, opts)
end
end
end

View File

@ -0,0 +1,16 @@
defmodule HelloWeb.ErrorView do
use HelloWeb, :view
# If you want to customize a particular status code
# for a certain format, you may uncomment below.
# def render("500.html", _assigns) do
# "Internal Server Error"
# end
# By default, Phoenix returns the status message from
# the template name. For example, "404.html" becomes
# "Not Found".
def template_not_found(template, _assigns) do
Phoenix.Controller.status_message_from_template(template)
end
end

View File

@ -0,0 +1,3 @@
defmodule HelloWeb.HelloView do
use HelloWeb, :view
end

View File

@ -0,0 +1,3 @@
defmodule HelloWeb.LayoutView do
use HelloWeb, :view
end

View File

@ -0,0 +1,3 @@
defmodule HelloWeb.PageView do
use HelloWeb, :view
end

View File

@ -0,0 +1,44 @@
defmodule Hello.Mixfile do
use Mix.Project
def project do
[
app: :hello,
version: "0.0.1",
elixir: "~> 1.4",
elixirc_paths: elixirc_paths(Mix.env),
compilers: [:phoenix, :gettext] ++ Mix.compilers,
start_permanent: Mix.env == :prod,
deps: deps()
]
end
# Configuration for the OTP application.
#
# Type `mix help compile.app` for more information.
def application do
[
mod: {Hello.Application, []},
extra_applications: [:logger, :runtime_tools]
]
end
# Specifies which paths to compile per environment.
defp elixirc_paths(:test), do: ["lib", "test/support"]
defp elixirc_paths(_), do: ["lib"]
# Specifies your project dependencies.
#
# Type `mix help deps` for examples and options.
defp deps do
[
{:phoenix, "~> 1.3.2"},
{:phoenix_pubsub, "~> 1.0"},
{:phoenix_html, "~> 2.10"},
{:phoenix_live_reload, "~> 1.0", only: :dev},
{:gettext, "~> 0.11"},
{:cowboy, "~> 1.0"},
{:distillery, "~> 1.5"}
]
end
end

View File

@ -0,0 +1,15 @@
%{
"cowboy": {:hex, :cowboy, "1.1.2", "61ac29ea970389a88eca5a65601460162d370a70018afe6f949a29dca91f3bb0", [:rebar3], [{:cowlib, "~> 1.0.2", [hex: :cowlib, repo: "hexpm", optional: false]}, {:ranch, "~> 1.3.2", [hex: :ranch, repo: "hexpm", optional: false]}], "hexpm"},
"cowlib": {:hex, :cowlib, "1.0.2", "9d769a1d062c9c3ac753096f868ca121e2730b9a377de23dec0f7e08b1df84ee", [:make], [], "hexpm"},
"distillery": {:hex, :distillery, "1.5.2", "eec18b2d37b55b0bcb670cf2bcf64228ed38ce8b046bb30a9b636a6f5a4c0080", [:mix], [], "hexpm"},
"file_system": {:hex, :file_system, "0.2.5", "a3060f063b116daf56c044c273f65202e36f75ec42e678dc10653056d3366054", [:mix], [], "hexpm"},
"gettext": {:hex, :gettext, "0.15.0", "40a2b8ce33a80ced7727e36768499fc9286881c43ebafccae6bab731e2b2b8ce", [:mix], [], "hexpm"},
"mime": {:hex, :mime, "1.3.0", "5e8d45a39e95c650900d03f897fbf99ae04f60ab1daa4a34c7a20a5151b7a5fe", [:mix], [], "hexpm"},
"phoenix": {:hex, :phoenix, "1.3.2", "2a00d751f51670ea6bc3f2ba4e6eb27ecb8a2c71e7978d9cd3e5de5ccf7378bd", [:mix], [{:cowboy, "~> 1.0", [hex: :cowboy, repo: "hexpm", optional: true]}, {:phoenix_pubsub, "~> 1.0", [hex: :phoenix_pubsub, repo: "hexpm", optional: false]}, {:plug, "~> 1.3.3 or ~> 1.4", [hex: :plug, repo: "hexpm", optional: false]}, {:poison, "~> 2.2 or ~> 3.0", [hex: :poison, repo: "hexpm", optional: false]}], "hexpm"},
"phoenix_html": {:hex, :phoenix_html, "2.11.2", "86ebd768258ba60a27f5578bec83095bdb93485d646fc4111db8844c316602d6", [:mix], [{:plug, "~> 1.5", [hex: :plug, repo: "hexpm", optional: false]}], "hexpm"},
"phoenix_live_reload": {:hex, :phoenix_live_reload, "1.1.5", "8d4c9b1ef9ca82deee6deb5a038d6d8d7b34b9bb909d99784a49332e0d15b3dc", [:mix], [{:file_system, "~> 0.2.1 or ~> 0.3", [hex: :file_system, repo: "hexpm", optional: false]}, {:phoenix, "~> 1.0 or ~> 1.2 or ~> 1.3", [hex: :phoenix, repo: "hexpm", optional: false]}], "hexpm"},
"phoenix_pubsub": {:hex, :phoenix_pubsub, "1.0.2", "bfa7fd52788b5eaa09cb51ff9fcad1d9edfeb68251add458523f839392f034c1", [:mix], [], "hexpm"},
"plug": {:hex, :plug, "1.5.1", "1ff35bdecfb616f1a2b1c935ab5e4c47303f866cb929d2a76f0541e553a58165", [:mix], [{:cowboy, "~> 1.0.1 or ~> 1.1 or ~> 2.3", [hex: :cowboy, repo: "hexpm", optional: true]}, {:mime, "~> 1.0", [hex: :mime, repo: "hexpm", optional: false]}], "hexpm"},
"poison": {:hex, :poison, "3.1.0", "d9eb636610e096f86f25d9a46f35a9facac35609a7591b3be3326e99a0484665", [:mix], [], "hexpm"},
"ranch": {:hex, :ranch, "1.3.2", "e4965a144dc9fbe70e5c077c65e73c57165416a901bd02ea899cfd95aa890986", [:rebar3], [], "hexpm"},
}

View File

@ -0,0 +1,11 @@
## `msgid`s in this file come from POT (.pot) files.
##
## Do not add, change, or remove `msgid`s manually here as
## they're tied to the ones in the corresponding POT file
## (with the same domain).
##
## Use `mix gettext.extract --merge` or `mix gettext.merge`
## to merge POT files into PO files.
msgid ""
msgstr ""
"Language: en\n"

View File

@ -0,0 +1,10 @@
## This file is a PO Template file.
##
## `msgid`s here are often extracted from source code.
## Add new translations manually only if they're dynamic
## translations that can't be statically extracted.
##
## Run `mix gettext.extract` to bring this file up to
## date. Leave `msgstr`s empty as changing them here has no
## effect: edit them in PO (`.po`) files instead.

View File

@ -0,0 +1,53 @@
# Import all plugins from `rel/plugins`
# They can then be used by adding `plugin MyPlugin` to
# either an environment, or release definition, where
# `MyPlugin` is the name of the plugin module.
Path.join(["rel", "plugins", "*.exs"])
|> Path.wildcard()
|> Enum.map(&Code.eval_file(&1))
use Mix.Releases.Config,
# This sets the default release built by `mix release`
default_release: :default,
# This sets the default environment used by `mix release`
default_environment: Mix.env()
# For a full list of config options for both releases
# and environments, visit https://hexdocs.pm/distillery/configuration.html
# You may define one or more environments in this file,
# an environment's settings will override those of a release
# when building in that environment, this combination of release
# and environment configuration is called a profile
environment :dev do
# If you are running Phoenix, you should make sure that
# server: true is set and the code reloader is disabled,
# even in dev mode.
# It is recommended that you build with MIX_ENV=prod and pass
# the --env flag to Distillery explicitly if you want to use
# dev mode.
set dev_mode: true
set include_erts: false
set cookie: :"Bps5@RVvPgL9c~C~D(<o2o>DCQ5*Iu!<h[wb{zngH1o&j?EtGh~U$nB9kBamX(LF"
end
environment :prod do
set include_erts: true
set include_src: false
set cookie: :"oj@tGtLJFd=:Qn]]9d;6mI^.K}*eJuMF(&@`(s}zCIgba<J:;c`aLvLjw@ZN!Z@3"
end
# You may define one or more releases in this file.
# If you have not set a default release, or selected one
# when running `mix release`, the first release in the file
# will be used by default
release :hello do
set version: current_version(:hello)
set applications: [
:runtime_tools
]
end

View File

@ -0,0 +1,15 @@
apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
name: helloworld-elixir
namespace: default
spec:
runLatest:
configuration:
revisionTemplate:
spec:
container:
image: docker.io/{username}/helloworld-elixir
env:
- name: TARGET
value: "elixir Sample v1"

View File

@ -0,0 +1,8 @@
defmodule HelloWeb.PageControllerTest do
use HelloWeb.ConnCase
test "GET /", %{conn: conn} do
conn = get conn, "/"
assert html_response(conn, 200) =~ "Welcome to Knative and Elixir"
end
end

View File

@ -0,0 +1,16 @@
defmodule HelloWeb.ErrorViewTest do
use HelloWeb.ConnCase, async: true
# Bring render/3 and render_to_string/3 for testing custom views
import Phoenix.View
test "renders 404.html" do
assert render_to_string(HelloWeb.ErrorView, "404.html", []) ==
"Not Found"
end
test "renders 500.html" do
assert render_to_string(HelloWeb.ErrorView, "500.html", []) ==
"Internal Server Error"
end
end

View File

@ -0,0 +1,3 @@
defmodule HelloWeb.LayoutViewTest do
use HelloWeb.ConnCase, async: true
end

View File

@ -0,0 +1,3 @@
defmodule HelloWeb.PageViewTest do
use HelloWeb.ConnCase, async: true
end

View File

@ -0,0 +1,33 @@
defmodule HelloWeb.ChannelCase do
@moduledoc """
This module defines the test case to be used by
channel tests.
Such tests rely on `Phoenix.ChannelTest` and also
import other functionality to make it easier
to build common datastructures and query the data layer.
Finally, if the test case interacts with the database,
it cannot be async. For this reason, every test runs
inside a transaction which is reset at the beginning
of the test unless the test case is marked as async.
"""
use ExUnit.CaseTemplate
using do
quote do
# Import conveniences for testing with channels
use Phoenix.ChannelTest
# The default endpoint for testing
@endpoint HelloWeb.Endpoint
end
end
setup _tags do
:ok
end
end

View File

@ -0,0 +1,34 @@
defmodule HelloWeb.ConnCase do
@moduledoc """
This module defines the test case to be used by
tests that require setting up a connection.
Such tests rely on `Phoenix.ConnTest` and also
import other functionality to make it easier
to build common datastructures and query the data layer.
Finally, if the test case interacts with the database,
it cannot be async. For this reason, every test runs
inside a transaction which is reset at the beginning
of the test unless the test case is marked as async.
"""
use ExUnit.CaseTemplate
using do
quote do
# Import conveniences for testing with connections
use Phoenix.ConnTest
import HelloWeb.Router.Helpers
# The default endpoint for testing
@endpoint HelloWeb.Endpoint
end
end
setup _tags do
{:ok, conn: Phoenix.ConnTest.build_conn()}
end
end

View File

@ -0,0 +1,2 @@
ExUnit.start()

View File

@ -141,11 +141,17 @@ folder) you're ready to build and deploy the sample app.
1. To find the URL for your service, use
```
kubectl get services.serving.knative.dev helloworld-go -o=custom-columns=NAME:.metadata.name,DOMAIN:.status.domain
kubectl get ksvc helloworld-go -o=custom-columns=NAME:.metadata.name,DOMAIN:.status.domain
NAME DOMAIN
helloworld-go helloworld-go.default.example.com
```
> Note: `ksvc` is an alias for `services.serving.knative.dev`. If you have
an older version (version 0.1.0) of Knative installed, you'll need to use
the long name until you upgrade to version 0.1.1 or higher. See
[Checking Knative Installation Version](../../../install/check-install-version.md)
to learn how to see what version you have installed.
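As a quick check, you can also ask the API server which short names it advertises for the Knative serving resources; this sketch assumes `kubectl` 1.11 or newer, which added the `api-resources` subcommand:
```
# The SHORTNAMES column shows "ksvc" when the alias is available
kubectl api-resources --api-group=serving.knative.dev
```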
1. Now you can make a request to your app to see the results. Replace
`{IP_ADDRESS}` with the address you see returned in the previous step.

View File

@ -0,0 +1,6 @@
.stack-work/
*.cabal
*~
/.idea/
/dist/
/out/

View File

@ -0,0 +1,21 @@
# Use the existing Haskell image as our base
FROM haskell:8.2.2 as builder
# Checkout our code onto the Docker container
WORKDIR /app
ADD . /app
# Build and test our code, then install the "helloworld-haskell-exe" executable
RUN stack setup
RUN stack build --copy-bins
# Copy the "helloworld-haskell-exe" executable to the image using docker multi stage build
FROM fpco/haskell-scratch:integer-gmp
WORKDIR /root/
COPY --from=builder /root/.local/bin/helloworld-haskell-exe .
# Expose a port to run our application
EXPOSE 8080
# Run the server command
CMD ["./helloworld-haskell-exe"]

View File

@ -0,0 +1,204 @@
# Hello World - Haskell sample
A simple web app written in Haskell that you can use for testing.
It reads in an environment variable `TARGET` and prints "Hello world: ${TARGET}". If
`TARGET` is not specified, it uses "NOT SPECIFIED" as the `TARGET` value.
## Prerequisites
* A Kubernetes cluster with Knative installed. Follow the
[installation instructions](https://github.com/knative/docs/blob/master/install/README.md) if you need
to create one.
* [Docker](https://www.docker.com) installed and running on your local machine,
and a Docker Hub account configured (we'll use it for a container registry).
## Recreating the sample code
While you can clone all of the code from this directory, hello world
apps are generally more useful if you build them step-by-step. The
following instructions recreate the source files from this folder.
1. Create a new file named `stack.yaml` and paste the following code:
```yaml
flags: {}
packages:
- .
extra-deps: []
resolver: lts-10.7
```
1. Create a new file named `package.yaml` and paste the following code:
```yaml
name: helloworld-haskell
version: 0.1.0.0
dependencies:
- base >= 4.7 && < 5
- scotty
- text
executables:
helloworld-haskell-exe:
main: Main.hs
source-dirs: app
ghc-options:
- -threaded
- -rtsopts
- -with-rtsopts=-N
```
1. Create an `app` folder, then create a new file named `Main.hs` in that folder
and paste the following code. This code creates a basic web server which
listens on port 8080:
```haskell
{-# LANGUAGE OverloadedStrings #-}
import Data.Maybe (fromMaybe)
import Data.Text.Lazy (pack)
import System.Environment (lookupEnv)
import Web.Scotty (ActionM, ScottyM, get, scotty, text)
main :: IO ()
main = do
  t <- fromMaybe "NOT SPECIFIED" <$> lookupEnv "TARGET"
  scotty 8080 (route t)
route :: String -> ScottyM ()
route t = get "/" $ hello t
hello :: String -> ActionM ()
hello t = text $ pack ("Hello world: " ++ t)
```
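Before building the container, you can optionally check that the server compiles and runs locally. A minimal sketch, assuming you have [Stack](https://docs.haskellstack.org) installed (the executable name comes from `package.yaml` above):
```shell
# Build the project, then run it with a test TARGET value
stack build
TARGET=local stack exec helloworld-haskell-exe
# In a second terminal, verify the response
curl http://localhost:8080
```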
1. In your project directory, create a file named `Dockerfile` and copy the code
block below into it.
```docker
# Use the existing Haskell image as our base
FROM haskell:8.2.2 as builder
# Checkout our code onto the Docker container
WORKDIR /app
ADD . /app
# Build and test our code, then install the "helloworld-haskell-exe" executable
RUN stack setup
RUN stack build --copy-bins
# Copy the "helloworld-haskell-exe" executable to the image using docker multi stage build
FROM fpco/haskell-scratch:integer-gmp
WORKDIR /root/
COPY --from=builder /root/.local/bin/helloworld-haskell-exe .
# Expose a port to run our application
EXPOSE 8080
# Run the server command
CMD ["./helloworld-haskell-exe"]
```
1. Create a new file, `service.yaml` and copy the following service definition
into the file. Make sure to replace `{username}` with your Docker Hub username.
```yaml
apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
name: helloworld-haskell
namespace: default
spec:
runLatest:
configuration:
revisionTemplate:
spec:
container:
image: docker.io/{username}/helloworld-haskell
env:
- name: TARGET
value: "Haskell Sample v1"
```
## Build and deploy this sample
Once you have recreated the sample code files (or used the files in the sample
folder) you're ready to build and deploy the sample app.
1. Use Docker to build the sample code into a container. To build and push with
Docker Hub, enter these commands replacing `{username}` with your
Docker Hub username:
```shell
# Build the container on your local machine
docker build -t {username}/helloworld-haskell .
# Push the container to docker registry
docker push {username}/helloworld-haskell
```
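Optionally, before deploying you can sanity-check the image locally; a sketch (the `TARGET` value here is just an example):
```shell
# Run the container locally, mapping port 8080 and setting TARGET
docker run --rm -p 8080:8080 -e TARGET="local docker" {username}/helloworld-haskell
# In a second terminal
curl http://localhost:8080
```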
1. After the build has completed and the container is pushed to Docker Hub, you
can deploy the app into your cluster. Ensure that the container image value
in `service.yaml` matches the container you built in
the previous step. Apply the configuration using `kubectl`:
```shell
kubectl apply -f service.yaml
```
1. Now that your service is created, Knative will perform the following steps:
* Create a new immutable revision for this version of the app.
* Network programming to create a route, ingress, service, and load balancer for your app.
* Automatically scale your pods up and down (including to zero active pods).
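If you would like to watch these objects appear, you can inspect them with standard `kubectl` queries; for example (the route and configuration are created with the same name as the service):
```shell
# Inspect the objects Knative creates for the service
kubectl get route helloworld-haskell -o yaml
kubectl get configuration helloworld-haskell -o yaml
kubectl get revisions
```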
1. To find the IP address for your service, enter
`kubectl get svc knative-ingressgateway -n istio-system` to get the ingress IP for your
cluster. If your cluster is new, it may take some time for the service to get assigned
an external IP address.
```shell
kubectl get svc knative-ingressgateway -n istio-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
knative-ingressgateway LoadBalancer 10.23.247.74 35.203.155.229 80:32380/TCP,443:32390/TCP,32400:32400/TCP 2d
```
For minikube or bare-metal, get IP_ADDRESS by running the following command:
```shell
echo $(kubectl get node -o 'jsonpath={.items[0].status.addresses[0].address}'):$(kubectl get svc knative-ingressgateway -n istio-system -o 'jsonpath={.spec.ports[?(@.port==80)].nodePort}')
```
1. To find the URL for your service, enter:
```
kubectl get ksvc helloworld-haskell -o=custom-columns=NAME:.metadata.name,DOMAIN:.status.domain
NAME DOMAIN
helloworld-haskell helloworld-haskell.default.example.com
```
> Note: `ksvc` is an alias for `services.serving.knative.dev`. If you have
an older version (version 0.1.0) of Knative installed, you'll need to use
the long name until you upgrade to version 0.1.1 or higher. See
[Checking Knative Installation Version](../../../install/check-install-version.md)
to learn how to see what version you have installed.
1. Now you can make a request to your app and see the result. Replace
`{IP_ADDRESS}` with the address you see returned in the previous step.
```shell
curl -H "Host: helloworld-haskell.default.example.com" http://{IP_ADDRESS}
Hello world: Haskell Sample v1
```
## Removing the sample app deployment
To remove the sample app from your cluster, delete the service record:
```shell
kubectl delete -f service.yaml
```
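To confirm the removal, you can list the remaining Knative services; the output should no longer include `helloworld-haskell`:
```shell
kubectl get ksvc
```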

View File

@ -0,0 +1,20 @@
{-# LANGUAGE OverloadedStrings #-}
import Data.Maybe (fromMaybe)
import Data.Text.Lazy (pack)
import System.Environment (lookupEnv)
import Web.Scotty (ActionM, ScottyM, get, scotty, text)
main :: IO ()
main = do
  t <- fromMaybe "NOT SPECIFIED" <$> lookupEnv "TARGET"
  scotty 8080 (route t)
route :: String -> ScottyM ()
route t = get "/" $ hello t
hello :: String -> ActionM ()
hello t = text $ pack ("Hello world: " ++ t)

View File

@ -0,0 +1,15 @@
name: helloworld-haskell
version: 0.1.0.0
dependencies:
- base >= 4.7 && < 5
- scotty
- text
executables:
helloworld-haskell-exe:
main: Main.hs
source-dirs: app
ghc-options:
- -threaded
- -rtsopts
- -with-rtsopts=-N

View File

@ -0,0 +1,15 @@
apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
name: helloworld-haskell
namespace: default
spec:
runLatest:
configuration:
revisionTemplate:
spec:
container:
image: docker.io/{username}/helloworld-haskell
env:
- name: TARGET
value: "Haskell Sample v1"

View File

@ -0,0 +1,5 @@
flags: {}
packages:
- .
extra-deps: []
resolver: lts-10.7

View File

@ -154,11 +154,17 @@ folder) you're ready to build and deploy the sample app.
1. To find the URL for your service, use
```
kubectl get services.serving.knative.dev helloworld-java -o=custom-columns=NAME:.metadata.name,DOMAIN:.status.domain
kubectl get ksvc helloworld-java -o=custom-columns=NAME:.metadata.name,DOMAIN:.status.domain
NAME DOMAIN
helloworld-java helloworld-java.default.example.com
```
> Note: `ksvc` is an alias for `services.serving.knative.dev`. If you have
an older version (version 0.1.0) of Knative installed, you'll need to use
the long name until you upgrade to version 0.1.1 or higher. See
[Checking Knative Installation Version](../../../install/check-install-version.md)
to learn how to see what version you have installed.
1. Now you can make a request to your app to see the result. Replace
`{IP_ADDRESS}` with the address you see returned in the previous step.

View File

@ -172,11 +172,17 @@ folder) you're ready to build and deploy the sample app.
1. To find the URL for your service, use
```
kubectl get services.serving.knative.dev helloworld-nodejs -o=custom-columns=NAME:.metadata.name,DOMAIN:.status.domain
kubectl get ksvc helloworld-nodejs -o=custom-columns=NAME:.metadata.name,DOMAIN:.status.domain
NAME DOMAIN
helloworld-nodejs helloworld-nodejs.default.example.com
```
> Note: `ksvc` is an alias for `services.serving.knative.dev`. If you have
an older version (version 0.1.0) of Knative installed, you'll need to use
the long name until you upgrade to version 0.1.1 or higher. See
[Checking Knative Installation Version](../../../install/check-install-version.md)
to learn how to see what version you have installed.
1. Now you can make a request to your app to see the result. Replace
`{IP_ADDRESS}` with the address you see returned in the previous step.

View File

@ -113,11 +113,17 @@ you're ready to build and deploy the sample app.
1. To find the URL for your service, use
```
kubectl get services.serving.knative.dev helloworld-php -o=custom-columns=NAME:.metadata.name,DOMAIN:.status.domain
kubectl get ksvc helloworld-php -o=custom-columns=NAME:.metadata.name,DOMAIN:.status.domain
NAME DOMAIN
helloworld-php helloworld-php.default.example.com
```
> Note: `ksvc` is an alias for `services.serving.knative.dev`. If you have
an older version (version 0.1.0) of Knative installed, you'll need to use
the long name until you upgrade to version 0.1.1 or higher. See
[Checking Knative Installation Version](../../../install/check-install-version.md)
to learn how to see what version you have installed.
1. Now you can make a request to your app to see the result. Replace
`{IP_ADDRESS}` with the address you see returned in the previous step.

View File

@ -125,11 +125,17 @@ folder) you're ready to build and deploy the sample app.
1. To find the URL for your service, use
```
kubectl get services.serving.knative.dev helloworld-python -o=custom-columns=NAME:.metadata.name,DOMAIN:.status.domain
kubectl get ksvc helloworld-python -o=custom-columns=NAME:.metadata.name,DOMAIN:.status.domain
NAME DOMAIN
helloworld-python helloworld-python.default.example.com
```
> Note: `ksvc` is an alias for `services.serving.knative.dev`. If you have
an older version (version 0.1.0) of Knative installed, you'll need to use
the long name until you upgrade to version 0.1.1 or higher. See
[Checking Knative Installation Version](../../../install/check-install-version.md)
to learn how to see what version you have installed.
1. Now you can make a request to your app to see the result. Replace `{IP_ADDRESS}`
with the address you see returned in the previous step.

View File

@ -140,10 +140,15 @@ you're ready to build and deploy the sample app.
1. To find the URL for your service, use
```
kubectl get services.serving.knative.dev helloworld-ruby -o=custom-columns=NAME:.metadata.name,DOMAIN:.status.domain
kubectl get ksvc helloworld-ruby -o=custom-columns=NAME:.metadata.name,DOMAIN:.status.domain
NAME DOMAIN
helloworld-ruby helloworld-ruby.default.example.com
```
> Note: `ksvc` is an alias for `services.serving.knative.dev`. If you have
an older version (version 0.1.0) of Knative installed, you'll need to use
the long name until you upgrade to version 0.1.1 or higher. See
[Checking Knative Installation Version](../../../install/check-install-version.md)
to learn how to see what version you have installed.
1. Now you can make a request to your app to see the result. Replace `{IP_ADDRESS}`
with the address you see returned in the previous step.

View File

@ -156,11 +156,17 @@ folder) you're ready to build and deploy the sample app.
1. To find the URL for your service, enter:
```
kubectl get services.serving.knative.dev helloworld-rust -o=custom-columns=NAME:.metadata.name,DOMAIN:.status.domain
kubectl get ksvc helloworld-rust -o=custom-columns=NAME:.metadata.name,DOMAIN:.status.domain
NAME DOMAIN
helloworld-rust helloworld-rust.default.example.com
```
> Note: `ksvc` is an alias for `services.serving.knative.dev`. If you have
an older version (version 0.1.0) of Knative installed, you'll need to use
the long name until you upgrade to version 0.1.1 or higher. See
[Checking Knative Installation Version](../../../install/check-install-version.md)
to learn how to see what version you have installed.
1. Now you can make a request to your app and see the result. Replace
`{IP_ADDRESS}` with the address you see returned in the previous step.

View File

@ -189,16 +189,6 @@ container for the application.
revisionName: app-from-source-00007
```
1. After the build has completed and the container is pushed to Docker Hub, you
can deploy the app into your cluster. Ensure that the container image value
in `service.yaml` matches the container you built in
the previous step. Apply the configuration using `kubectl`:
```shell
kubectl apply -f service.yaml
```
1. Now that your service is created, Knative will perform the following steps:
* Fetch the revision specified from GitHub and build it into a container
* Push the container to Docker Hub
@ -220,11 +210,17 @@ container for the application.
1. To find the URL for your service, type:
```shell
$ kubectl get services.serving.knative.dev app-from-source -o=custom-columns=NAME:.metadata.name,DOMAIN:.status.domain
$ kubectl get ksvc app-from-source -o=custom-columns=NAME:.metadata.name,DOMAIN:.status.domain
NAME DOMAIN
app-from-source app-from-source.default.example.com
```
> Note: `ksvc` is an alias for `services.serving.knative.dev`. If you have
an older version (version 0.1.0) of Knative installed, you'll need to use
the long name until you upgrade to version 0.1.1 or higher. See
[Checking Knative Installation Version](../../../install/check-install-version.md)
to learn how to see what version you have installed.
1. Now you can make a request to your app to see the result. Replace
`{IP_ADDRESS}` with the address that you got in the previous step:

View File

@ -2,100 +2,163 @@
This sample runs a simple web server that makes calls to other in-cluster services
and responds to requests with "Hello World!".
The purpose of this sample is to show generating metrics, logs and distributed traces
(see [Logs](../../accessing-logs.md), [Metrics](../../accessing-metrics.md), and [Traces](../../accessing-traces.md) for more information).
This sample also creates a dedicated Prometheus instances rather than using the one
that is installed by default as a showcase of installing dedicated Prometheus instances.
The purpose of this sample is to show generating [metrics](../../accessing-metrics.md),
[logs](../../accessing-logs.md) and distributed [traces](../../accessing-traces.md).
This sample also shows how to create a dedicated Prometheus instance rather than
using the default installation.
## Prerequisites
1. [Install Knative Serving](https://github.com/knative/docs/blob/master/install/README.md)
2. [Install Knative monitoring component](../../installing-logging-metrics-traces.md)
3. Install [docker](https://www.docker.com/)
1. A Kubernetes cluster with [Knative Serving](https://github.com/knative/docs/blob/master/install/README.md)
installed.
2. Check if Knative monitoring components are installed:
```
kubectl get pods -n monitoring
```
* If pods aren't found, install the [Knative monitoring components](../../installing-logging-metrics-traces.md).
3. Install [Docker](https://docs.docker.com/get-started/#prepare-your-docker-environment).
4. Check out the code:
```
go get -d github.com/knative/docs/serving/samples/telemetry-go
```
## Setup
Build the app container and publish it to your registry of choice:
Build the application container and publish it to a container registry:
```shell
REPO="gcr.io/<your-project-here>"
1. Move into the repository root directory:
```
cd $GOPATH/src/github.com/knative/docs
```
# Build and publish the container, run from the root directory.
2. Set your preferred container registry:
```
export REPO="gcr.io/<YOUR_PROJECT_ID>"
```
This example shows how to use Google Container Registry (GCR). You will need
a Google Cloud project and must enable the [Google Container Registry
API](https://console.cloud.google.com/apis/library/containerregistry.googleapis.com).
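If you use GCR, this setup can typically be done from the command line as well; a sketch, assuming the `gcloud` CLI is installed and authenticated against your project:
```
# Enable the Container Registry API and allow Docker to push to gcr.io
gcloud services enable containerregistry.googleapis.com
gcloud auth configure-docker
```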
3. Use Docker to build your application container:
```
docker build \
--tag "${REPO}/serving/samples/telemetry-go" \
--file=serving/samples/telemetry-go/Dockerfile .
```
4. Push your container to a container registry:
```
docker push "${REPO}/serving/samples/telemetry-go"
```
# Replace the image reference with our published image.
perl -pi -e "s@github.com/knative/docs/serving/samples/telemetry-go@${REPO}/serving/samples/telemetry-go@g" serving/samples/telemetry-go/*.yaml
5. Replace the image reference path with your published image path in the
configuration file (`serving/samples/telemetry-go/sample.yaml`):
* Manually replace:
`image: github.com/knative/docs/serving/samples/telemetry-go` with
`image: <YOUR_CONTAINER_REGISTRY>/serving/samples/telemetry-go`
# Deploy the Knative Serving sample
Or
* Run this command:
```
perl -pi -e "s@github.com/knative/docs@${REPO}@g" serving/samples/telemetry-go/sample.yaml
```
## Deploy the Service
Deploy this application to Knative Serving:
```
kubectl apply -f serving/samples/telemetry-go/
```
## Exploring
## Explore the Service
Once deployed, you can inspect the created resources with `kubectl` commands:
Inspect the created resources with the `kubectl` commands:
```shell
# This will show the route that we created:
kubectl get route -o yaml
* View the created Route resource:
```
kubectl get route -o yaml
```
# This will show the configuration that we created:
kubectl get configurations -o yaml
* View the created Configuration resource:
```
kubectl get configurations -o yaml
```
# This will show the Revision that was created by our configuration:
kubectl get revisions -o yaml
* View the Revision that was created by the Configuration:
```
kubectl get revisions -o yaml
```
## Access the Service
To access this service via `curl`, you need to determine its ingress address.
1. To determine if your service is ready:
Check the status of your Knative gateway:
```
kubectl get svc knative-ingressgateway -n istio-system --watch
```
When the service is ready, you'll see an IP address in the `EXTERNAL-IP` field:
```
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
knative-ingressgateway LoadBalancer 10.23.247.74 35.203.155.229 80:32380/TCP,443:32390/TCP,32400:32400/TCP 2d
```
Press CTRL+C to end the watch.
Check the status of your route:
```
kubectl get route -o yaml
```
When the route is ready, you'll see the following fields reported as:
```YAML
status:
conditions:
...
status: "True"
type: Ready
domain: telemetrysample-route.default.example.com
```
2. Export the ingress hostname and IP as environment
variables:
```
To access this service via `curl`, we first need to determine its ingress address:
```shell
watch kubectl get svc knative-ingressgateway -n istio-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
knative-ingressgateway LoadBalancer 10.23.247.74 35.203.155.229 80:32380/TCP,443:32390/TCP,32400:32400/TCP 2d
```
Once the `EXTERNAL-IP` gets assigned to the cluster, you can run:
```shell
# Put the Host name into an environment variable.
export SERVICE_HOST=`kubectl get route telemetrysample-route -o jsonpath="{.status.domain}"`
# Put the ingress IP into an environment variable.
export SERVICE_IP=`kubectl get svc knative-ingressgateway -n istio-system -o jsonpath="{.status.loadBalancer.ingress[*].ip}"`
```
# Curl the ingress IP "as-if" DNS were properly configured.
3. Make a request to the service to see the `Hello World!` message:
```
curl --header "Host:$SERVICE_HOST" http://${SERVICE_IP}
Hello World!
```
Generate some logs to STDOUT and files under `/var/log` in `Json` or plain text formats.
```shell
4. Make a request to the `/log` endpoint to generate logs to `stdout` and to
files under `/var/log` in both JSON and plain text formats:
```
curl --header "Host:$SERVICE_HOST" http://${SERVICE_IP}/log
Sending logs done.
```
## Accessing logs
You can access to the logs from Kibana UI - see [Logs](../../accessing-logs.md) for more information.
## Access Logs
You can access the logs from the Kibana UI - see [Logs](../../accessing-logs.md)
for more information.
## Accessing per request traces
You can access to per request traces from Zipkin UI - see [Traces](../../accessing-traces.md) for more information.
## Access Per-Request Traces
You can access per-request traces from the Zipkin UI - see [Traces](../../accessing-traces.md)
for more information.
## Accessing custom metrics
You can see published metrics using Prometheus UI. To access to the UI, forward the Prometheus server to your machine:
```bash
## Access Custom Metrics
You can see published metrics using the Prometheus UI. To access the UI, forward
the Prometheus server port to your local machine:
```
kubectl port-forward $(kubectl get pods --selector=app=prometheus,prometheus=test --output=jsonpath="{.items[0].metadata.name}") 9090
```
Then browse to http://localhost:9090.
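If you prefer the command line, the same data is available from the Prometheus HTTP API while the port-forward is running; a sketch that queries the built-in `up` metric (substitute one of the sample's custom metric names as needed):
```
# Query Prometheus for the "up" metric over its HTTP API
curl 'http://localhost:9090/api/v1/query?query=up'
```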
## Cleaning up
## Clean up
To clean up the sample service:
```shell
kubectl delete -f serving/samples/telemetrysample-go/
```
kubectl delete -f serving/samples/telemetry-go/
```

Some files were not shown because too many files have changed in this diff Show More