Update docs to use kubectl long flags (#372)

* Update docs to use kubectl long flags

* adding back in the --filename flag (lost when manually merged)

* noticed a few missed '-f' flags

* change -o to --output
Tyler Auerbeck 2018-09-12 12:46:03 -04:00 committed by RichieEscarez
parent dfc53c67c8
commit b2254cbb42
48 changed files with 290 additions and 290 deletions

View File

@ -74,7 +74,7 @@ into their respective files in `$HOME`.
1. Execute the build:
```shell
kubectl apply -f secret.yaml serviceaccount.yaml build.yaml
kubectl apply --filename secret.yaml serviceaccount.yaml build.yaml
```
When the build executes, before steps execute, a `~/.ssh/config` will be
@ -126,7 +126,7 @@ used to authenticate with the Git service.
1. Execute the build:
```shell
kubectl apply -f secret.yaml serviceaccount.yaml build.yaml
kubectl apply --filename secret.yaml serviceaccount.yaml build.yaml
```
When this build executes, before steps execute, a `~/.gitconfig` will be
@ -178,7 +178,7 @@ credentials are then used to authenticate with the Git repository.
1. Execute the build:
```shell
kubectl apply -f secret.yaml serviceaccount.yaml build.yaml
kubectl apply --filename secret.yaml serviceaccount.yaml build.yaml
```
When this build executes, before steps execute, a `~/.docker/config.json` will

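A note on the three `apply` hunks above: as written, only the first manifest follows the flag and the remaining file names are left as bare positional arguments, which `kubectl apply` does not accept. The usual forms pass each file with its own `--filename` flag or as a comma-separated list; a sketch using the same file names:

```shell
# Apply all three manifests in one invocation; repeating --filename per file
# (or passing --filename secret.yaml,serviceaccount.yaml,build.yaml) both work.
kubectl apply --filename secret.yaml \
  --filename serviceaccount.yaml \
  --filename build.yaml
```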
View File

@ -56,7 +56,7 @@ Kubernetes cluster, and it must include the Knative Build component:
command:
```shell
kubectl apply -f build.yaml
kubectl apply --filename build.yaml
```
Response:
@ -84,7 +84,7 @@ Kubernetes cluster, and it must include the Knative Build component:
which cluster and pod the build is running:
```shell
kubectl get build hello-build -oyaml
kubectl get build hello-build --output yaml
```
Response:
@ -115,7 +115,7 @@ Kubernetes cluster, and it must include the Knative Build component:
Tip: You can also retrieve the `podName` by running the following command:
```shell
kubectl get build hello-build -ojsonpath={.status.cluster.podName}
kubectl get build hello-build --output jsonpath={.status.cluster.podName}
```
1. Optional: Run the following
@ -125,7 +125,7 @@ Kubernetes cluster, and it must include the Knative Build component:
[Init container](https://kubernetes.io/docs/concepts/workloads/pods/init-containers/):
```shell
kubectl get pod hello-build-[ID] -oyaml
kubectl get pod hello-build-[ID] --output yaml
```
where `[ID]` is the suffix of your pod name, for example
`hello-build-jx4ql`.
@ -144,7 +144,7 @@ Kubernetes cluster, and it must include the Knative Build component:
in the `hello-build-[ID]` pod:
```shell
kubectl logs $(kubectl get build hello-build -ojsonpath={.status.cluster.podName}) -c build-step-hello
kubectl logs $(kubectl get build hello-build --output jsonpath={.status.cluster.podName}) --container build-step-hello
```
Response:

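`-o`/`--output` and `-c`/`--container` are interchangeable short and long spellings of the same kubectl flags. For illustration, a sketch that splits the combined log command from the last hunk into two steps, using the same `hello-build` resources:

```shell
# Look up the pod that ran the build, then read the logs of one build step.
# Quoting the jsonpath expression keeps the shell away from the braces.
POD_NAME=$(kubectl get build hello-build --output 'jsonpath={.status.cluster.podName}')
kubectl logs "${POD_NAME}" --container build-step-hello
```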
View File

@ -22,14 +22,14 @@ To add only the Knative Build component to an existing installation:
command to install
[Knative Build](https://github.com/knative/build) and its dependencies:
```bash
kubectl apply -f https://storage.googleapis.com/knative-releases/build/latest/release.yaml
kubectl apply --filename https://storage.googleapis.com/knative-releases/build/latest/release.yaml
```
1. Run the
[`kubectl get`](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get)
command to monitor the Knative Build components until all of the components
show a `STATUS` of `Running`:
```bash
kubectl get pods -n knative-build
kubectl get pods --namespace knative-build
```
Tip: Instead of running the `kubectl get` command multiple times, you can

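The tip above about avoiding repeated `kubectl get` calls is usually completed with the `--watch` flag, which streams status changes until interrupted; a sketch assuming that intent:

```shell
# Keep watching the Knative Build pods until every STATUS shows Running,
# then stop with Ctrl+C.
kubectl get pods --namespace knative-build --watch
```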
View File

@ -36,7 +36,7 @@ EventSources.
You can install Knative Eventing with the following command:
```bash
kubectl apply -f https://storage.googleapis.com/knative-releases/eventing/latest/release.yaml
kubectl apply --filename https://storage.googleapis.com/knative-releases/eventing/latest/release.yaml
```
In addition to the core definitions, you'll need to install at least one
@ -76,20 +76,20 @@ We currently have 3 buses implemented:
- [Stub](https://github.com/knative/eventing/tree/master/pkg/buses/stub)
provides a zero-dependency in-memory transport.
```bash
kubectl apply -f https://storage.googleapis.com/knative-releases/eventing/latest/release-bus-stub.yaml
kubectl apply -f https://storage.googleapis.com/knative-releases/eventing/latest/release-clusterbus-stub.yaml
kubectl apply --filename https://storage.googleapis.com/knative-releases/eventing/latest/release-bus-stub.yaml
kubectl apply --filename https://storage.googleapis.com/knative-releases/eventing/latest/release-clusterbus-stub.yaml
```
- [Kafka](https://github.com/knative/eventing/tree/master/pkg/buses/kafka) uses
an existing (user-provided) Kafka cluster for persistence.
```bash
kubectl apply -f https://storage.googleapis.com/knative-releases/eventing/latest/release-bus-kafka.yaml
kubectl apply -f https://storage.googleapis.com/knative-releases/eventing/latest/release-clusterbus-kafka.yaml
kubectl apply --filename https://storage.googleapis.com/knative-releases/eventing/latest/release-bus-kafka.yaml
kubectl apply --filename https://storage.googleapis.com/knative-releases/eventing/latest/release-clusterbus-kafka.yaml
```
- [GCP PubSub](https://github.com/knative/eventing/tree/master/pkg/buses/gcppubsub)
uses Google Cloud PubSub for message persistence.
```bash
kubectl apply -f https://storage.googleapis.com/knative-releases/eventing/latest/release-bus-gcppubsub.yaml
kubectl apply -f https://storage.googleapis.com/knative-releases/eventing/latest/release-clusterbus-gcppubsub.yaml
kubectl apply --filename https://storage.googleapis.com/knative-releases/eventing/latest/release-bus-gcppubsub.yaml
kubectl apply --filename https://storage.googleapis.com/knative-releases/eventing/latest/release-clusterbus-gcppubsub.yaml
```
### Sources
@ -117,18 +117,18 @@ We currently have 3 sources implemented:
[Kubernetes Events](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.10/#event-v1-core)
and presents them as CloudEvents.
```bash
kubectl apply -f https://storage.googleapis.com/knative-releases/eventing/latest/release-source-k8sevents.yaml
kubectl apply --filename https://storage.googleapis.com/knative-releases/eventing/latest/release-source-k8sevents.yaml
```
- [GitHub](https://github.com/knative/eventing/tree/master/pkg/sources/github)
collects pull request notifications and presents them as CloudEvents.
```bash
kubectl apply -f https://storage.googleapis.com/knative-releases/eventing/latest/release-source-github.yaml
kubectl apply --filename https://storage.googleapis.com/knative-releases/eventing/latest/release-source-github.yaml
```
- [GCP PubSub](https://github.com/knative/eventing/tree/master/pkg/sources/gcppubsub)
collects events published to a GCP PubSub topic and presents them as
CloudEvents.
```bash
kubectl apply -f https://storage.googleapis.com/knative-releases/eventing/latest/release-source-gcppubsub.yaml
kubectl apply --filename https://storage.googleapis.com/knative-releases/eventing/latest/release-source-gcppubsub.yaml
```
### Flows

View File

@ -88,8 +88,8 @@ is a random number 1-10.
Now we want to consume these IoT events, so let's create the function to handle the events:
```shell
kubectl apply -f route.yaml
kubectl apply -f configuration.yaml
kubectl apply --filename route.yaml
kubectl apply --filename configuration.yaml
```
## Create an event source
@ -103,10 +103,10 @@ in Pull mode to poll for the events from this topic.
Then let's create a GCP PubSub as an event source that we can bind to.
```shell
kubectl apply -f serviceaccount.yaml
kubectl apply -f serviceaccountbinding.yaml
kubectl apply -f eventsource.yaml
kubectl apply -f eventtype.yaml
kubectl apply --filename serviceaccount.yaml
kubectl apply --filename serviceaccountbinding.yaml
kubectl apply --filename eventsource.yaml
kubectl apply --filename eventtype.yaml
```
## Bind IoT events to our function
@ -115,5 +115,5 @@ We have now created a function that we want to consume our IoT events, and we ha
source that's sending events via GCP PubSub, so let's wire the two together:
```shell
kubectl apply -f flow.yaml
kubectl apply --filename flow.yaml
```

View File

@ -21,7 +21,7 @@ You will need:
- Knative eventing core installed on your Kubernetes cluster. You can install
with:
```shell
kubectl apply -f https://storage.googleapis.com/knative-releases/eventing/latest/release.yaml
kubectl apply --filename https://storage.googleapis.com/knative-releases/eventing/latest/release.yaml
```
- A domain name that allows GitHub to call into the cluster: Follow the
[assign a static IP address](https://github.com/knative/docs/blob/master/serving/gke-assigning-static-ip-address.md)
@ -36,9 +36,9 @@ To use this sample, you'll need to install the `stub` ClusterBus and the
```shell
# Installs ClusterBus
kubectl apply -f https://storage.googleapis.com/knative-releases/eventing/latest/release-clusterbus-stub.yaml
kubectl apply --filename https://storage.googleapis.com/knative-releases/eventing/latest/release-clusterbus-stub.yaml
# Installs EventSource
kubectl apply -f https://storage.googleapis.com/knative-releases/eventing/latest/release-source-github.yaml
kubectl apply --filename https://storage.googleapis.com/knative-releases/eventing/latest/release-source-github.yaml
```
## Granting permissions
@ -52,7 +52,7 @@ namespace. In a production environment, you might want to limit the access of
this service account to only specific namespaces.
```shell
kubectl apply -f auth.yaml
kubectl apply --filename auth.yaml
```
## Building and deploying the sample
@ -101,7 +101,7 @@ kubectl apply -f auth.yaml
Then, apply the githubsecret using `kubectl`:
```shell
kubectl apply -f githubsecret.yaml
kubectl apply --filename githubsecret.yaml
```
1. Use Docker to build the sample code into a container. To build and push with
@ -124,7 +124,7 @@ kubectl apply -f auth.yaml
step.** Apply the configuration using `kubectl`:
```shell
kubectl apply -f function.yaml
kubectl apply --filename function.yaml
```
1. Check that your service is running using:
@ -148,7 +148,7 @@ kubectl apply -f auth.yaml
Then create the flow sending GitHub Events to the service:
```shell
kubectl apply -f flow.yaml
kubectl apply --filename flow.yaml
```
1. Create a PR for the repo you configured the webhook for, and you'll see that
@ -171,10 +171,10 @@ and then deleted.
To clean up the function, `Flow`, auth, and secret:
```shell
kubectl delete -f function.yaml
kubectl delete -f flow.yaml
kubectl delete -f auth.yaml
kubectl delete -f githubsecret.yaml
kubectl delete --filename function.yaml
kubectl delete --filename flow.yaml
kubectl delete --filename auth.yaml
kubectl delete --filename githubsecret.yaml
```
And then delete the [personal access token](https://github.com/settings/tokens)

View File

@ -14,7 +14,7 @@ Knative serving service, so it scales automatically as event traffic increases.
and a Docker Hub account configured (you'll use it for a container registry).
- The core Knative eventing tools installed. You can install them with:
```shell
kubectl apply -f https://storage.googleapis.com/knative-releases/eventing/latest/release.yaml
kubectl apply --filename https://storage.googleapis.com/knative-releases/eventing/latest/release.yaml
```
## Configuring Knative
@ -23,8 +23,8 @@ To use this sample, you'll need to install the `stub` ClusterBus and the
`k8sevents` EventSource.
```shell
kubectl apply -f https://storage.googleapis.com/knative-releases/eventing/latest/release-clusterbus-stub.yaml
kubectl apply -f https://storage.googleapis.com/knative-releases/eventing/latest/release-source-k8sevents.yaml
kubectl apply --filename https://storage.googleapis.com/knative-releases/eventing/latest/release-clusterbus-stub.yaml
kubectl apply --filename https://storage.googleapis.com/knative-releases/eventing/latest/release-source-k8sevents.yaml
```
## Granting permissions
@ -39,7 +39,7 @@ Kubernetes resources. In a production environment, you might want to limit the
access of this service account to only specific namespaces.
```shell
kubectl apply -f serviceaccount.yaml
kubectl apply --filename serviceaccount.yaml
```
## Build and deploy the sample
@ -63,13 +63,13 @@ kubectl apply -f serviceaccount.yaml
step.** Apply the configuration using `kubectl`:
```shell
kubectl apply -f function.yaml
kubectl apply --filename function.yaml
```
1. Check that your service is running using:
```shell
kubectl get ksvc -o "custom-columns=NAME:.metadata.name,READY:.status.conditions[2].status,REASON:.status.conditions[2].message"
kubectl get ksvc --output "custom-columns=NAME:.metadata.name,READY:.status.conditions[2].status,REASON:.status.conditions[2].message"
NAME READY REASON
read-k8s-events True <none>
```
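The readiness check above builds a `custom-columns` view; when a single field is enough, `--output jsonpath` gives the same information in a form that is easy to script. A sketch using the `read-k8s-events` service and the same condition index the check relies on:

```shell
# Print just the ready status of the sample service (True/False).
kubectl get ksvc read-k8s-events \
  --output 'jsonpath={.status.conditions[2].status}'
```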
@ -82,7 +82,7 @@ kubectl apply -f serviceaccount.yaml
1. Create the flow sending Kubernetes Events to the service:
```shell
kubectl apply -f flow.yaml
kubectl apply --filename flow.yaml
```
1. If you have the full knative install, you can read the function logs using
@ -121,7 +121,7 @@ When the flow is created, it provisions the following resources:
bus:
```shell
kubectl get -o yaml feed k8s-event-flow
kubectl get --output yaml feed k8s-event-flow
```
```yaml
@ -146,7 +146,7 @@ When the flow is created, it provisions the following resources:
some parameters to that EventType:
```shell
kubectl get -o yaml eventtype dev.knative.k8s.event
kubectl get --output yaml eventtype dev.knative.k8s.event
```
```yaml
@ -167,7 +167,7 @@ When the flow is created, it provisions the following resources:
sorts of object watches will be supported in the future.
```shell
kubectl get -o yaml eventsource k8sevents
kubectl get --output yaml eventsource k8sevents
```
```yaml
@ -189,7 +189,7 @@ When the flow is created, it provisions the following resources:
channel object by examining the `ownerReferences` on the Service:
```shell
kubectl get -o yaml svc k8s-event-flow-channel
kubectl get --output yaml svc k8s-event-flow-channel
```
```yaml
@ -216,7 +216,7 @@ When the flow is created, it provisions the following resources:
persistence. Each Channel is associated with either a Bus or a ClusterBus:
```shell
kubectl get -o yaml channel k8s-event-flow
kubectl get --output yaml channel k8s-event-flow
```
```yaml
@ -240,7 +240,7 @@ When the flow is created, it provisions the following resources:
but will not durably store messages if the connected endpoints are down.
```shell
kubectl get -o yaml clusterbus stub
kubectl get --output yaml clusterbus stub
```
```yaml
@ -265,7 +265,7 @@ When the flow is created, it provisions the following resources:
Subscription:
```shell
kubectl get -o yaml subscription k8s-event-flow
kubectl get --output yaml subscription k8s-event-flow
```
```yaml

View File

@ -123,7 +123,7 @@ Knative depends on Istio.
1. Install Istio:
```bash
kubectl apply -f https://raw.githubusercontent.com/knative/serving/v0.1.1/third_party/istio-0.8.0/istio.yaml
kubectl apply --filename https://raw.githubusercontent.com/knative/serving/v0.1.1/third_party/istio-0.8.0/istio.yaml
```
1. Label the default namespace with `istio-injection=enabled`:
```bash
@ -133,7 +133,7 @@ Knative depends on Istio.
1. Monitor the Istio components until all of the components show a `STATUS` of
`Running` or `Completed`:
```bash
kubectl get pods -n istio-system
kubectl get pods --namespace istio-system
```
It will take a few minutes for all the components to be up and running; you can
@ -150,13 +150,13 @@ You can install the Knative Serving and Build components together, or Build on i
1. Run the `kubectl apply` command to install Knative and its dependencies:
```bash
kubectl apply -f https://github.com/knative/serving/releases/download/v0.1.1/release.yaml
kubectl apply --filename https://github.com/knative/serving/releases/download/v0.1.1/release.yaml
```
1. Monitor the Knative components until all of the components show a
`STATUS` of `Running`:
```bash
kubectl get pods -n knative-serving
kubectl get pods -n knative-build
kubectl get pods --namespace knative-serving
kubectl get pods --namespace knative-build
```
### Installing Knative Build only
@ -164,12 +164,12 @@ You can install the Knative Serving and Build components together, or Build on i
1. Run the `kubectl apply` command to install
[Knative Build](https://github.com/knative/build) and its dependencies:
```bash
kubectl apply -f https://raw.githubusercontent.com/knative/serving/v0.1.1/third_party/config/build/release.yaml
kubectl apply --filename https://raw.githubusercontent.com/knative/serving/v0.1.1/third_party/config/build/release.yaml
```
1. Monitor the Knative Build components until all of the components show a
`STATUS` of `Running`:
```bash
kubectl get pods -n knative-build
kubectl get pods --namespace knative-build
Just as with the Istio components, it will take a few seconds for the Knative
components to be up and running; you can rerun the `kubectl get` command to see

View File

@ -119,7 +119,7 @@ Knative depends on Istio.
1. Install Istio:
```bash
kubectl apply -f https://raw.githubusercontent.com/knative/serving/v0.1.1/third_party/istio-0.8.0/istio.yaml
kubectl apply --filename https://raw.githubusercontent.com/knative/serving/v0.1.1/third_party/istio-0.8.0/istio.yaml
```
1. Label the default namespace with `istio-injection=enabled`:
```bash
@ -128,7 +128,7 @@ Knative depends on Istio.
1. Monitor the Istio components until all of the components show a `STATUS` of
`Running` or `Completed`:
```bash
kubectl get pods -n istio-system
kubectl get pods --namespace istio-system
```
It will take a few minutes for all the components to be up and running; you can
@ -145,13 +145,13 @@ You can install the Knative Serving and Build components together, or Build on i
1. Run the `kubectl apply` command to install Knative and its dependencies:
```bash
kubectl apply -f https://github.com/knative/serving/releases/download/v0.1.1/release.yaml
kubectl apply --filename https://github.com/knative/serving/releases/download/v0.1.1/release.yaml
```
1. Monitor the Knative components until all of the components show a
`STATUS` of `Running`:
```bash
kubectl get pods -n knative-serving
kubectl get pods -n knative-build
kubectl get pods --namespace knative-serving
kubectl get pods --namespace knative-build
```
### Installing Knative Build only
@ -159,12 +159,12 @@ You can install the Knative Serving and Build components together, or Build on i
1. Run the `kubectl apply` command to install
[Knative Build](https://github.com/knative/build) and its dependencies:
```bash
kubectl apply -f https://raw.githubusercontent.com/knative/serving/v0.1.1/third_party/config/build/release.yaml
kubectl apply --filename https://raw.githubusercontent.com/knative/serving/v0.1.1/third_party/config/build/release.yaml
```
1. Monitor the Knative Build components until all of the components show a
`STATUS` of `Running`:
```bash
kubectl get pods -n knative-build
kubectl get pods --namespace knative-build
Just as with the Istio components, it will take a few seconds for the Knative
components to be up and running; you can rerun the `kubectl get` command to see

View File

@ -46,7 +46,7 @@ Make sure the namespace matches that of your project. Then just apply the
prepared so-called "shoot" cluster crd with kubectl:
```
kubectl apply -f my-cluster.yaml
kubectl apply --filename my-cluster.yaml
```
The easier alternative is to create the cluster following the cluster creation
@ -59,7 +59,7 @@ You can now download the kubeconfig for your freshly created cluster in the
Gardener dashboard or via cli as follows:
```
kubectl --namespace shoot--my-project--my-cluster get secret kubecfg -o jsonpath={.data.kubeconfig} | base64 --decode > my-cluster.yaml
kubectl --namespace shoot--my-project--my-cluster get secret kubecfg --output jsonpath={.data.kubeconfig} | base64 --decode > my-cluster.yaml
```
This kubeconfig file has full administrator access to your cluster. For the rest
@ -71,14 +71,14 @@ Knative depends on Istio.
1. Install Istio:
```bash
kubectl apply -f https://raw.githubusercontent.com/knative/serving/v0.1.1/third_party/istio-0.8.0/istio.yaml
kubectl apply --filename https://raw.githubusercontent.com/knative/serving/v0.1.1/third_party/istio-0.8.0/istio.yaml
```
2. Label the default namespace with `istio-injection=enabled`:
```bash
kubectl label namespace default istio-injection=enabled
```
3. Monitor the Istio components until all of the components show a `STATUS` of
`Running` or `Completed`: `bash kubectl get pods -n istio-system`
`Running` or `Completed`: `kubectl get pods --namespace istio-system`
It will take a few minutes for all the components to be up and running; you can
rerun the command to see the current status.
@ -95,13 +95,13 @@ You can install the Knative Serving and Build components together, or Build on i
1. Run the `kubectl apply` command to install Knative and its dependencies:
```bash
kubectl apply -f https://github.com/knative/serving/releases/download/v0.1.1/release.yaml
kubectl apply --filename https://github.com/knative/serving/releases/download/v0.1.1/release.yaml
```
1. Monitor the Knative components until all of the components show a
`STATUS` of `Running`:
```bash
kubectl get pods -n knative-serving
kubectl get pods -n knative-build
kubectl get pods --namespace knative-serving
kubectl get pods --namespace knative-build
```
### Installing Knative Build only
@ -109,12 +109,12 @@ You can install the Knative Serving and Build components together, or Build on i
1. Run the `kubectl apply` command to install
[Knative Build](https://github.com/knative/build) and its dependencies:
```bash
kubectl apply -f https://raw.githubusercontent.com/knative/serving/v0.1.1/third_party/config/build/release.yaml
kubectl apply --filename https://raw.githubusercontent.com/knative/serving/v0.1.1/third_party/config/build/release.yaml
```
1. Monitor the Knative Build components until all of the components show a
`STATUS` of `Running`:
```bash
kubectl get pods -n knative-build
kubectl get pods --namespace knative-build
Just as with the Istio components, it will take a few seconds for the Knative
components to be up and running; you can rerun the `kubectl get` command to see
@ -192,7 +192,7 @@ knative-ingressgateway LoadBalancer 100.70.219.81 35.233.41.212 80:32380
3. Adapt your knative config-domain (set your domain in the data field)
```
kubectl --namespace knative-serving get configmaps config-domain -o yaml
kubectl --namespace knative-serving get configmaps config-domain --output yaml
apiVersion: v1
data:
knative.<my domain>: ""

View File

@ -126,7 +126,7 @@ Knative depends on Istio.
1. Install Istio:
```bash
kubectl apply -f https://raw.githubusercontent.com/knative/serving/v0.1.1/third_party/istio-0.8.0/istio.yaml
kubectl apply --filename https://raw.githubusercontent.com/knative/serving/v0.1.1/third_party/istio-0.8.0/istio.yaml
```
1. Label the default namespace with `istio-injection=enabled`:
```bash
@ -135,7 +135,7 @@ Knative depends on Istio.
1. Monitor the Istio components until all of the components show a `STATUS` of
`Running` or `Completed`:
```bash
kubectl get pods -n istio-system
kubectl get pods --namespace istio-system
```
It will take a few minutes for all the components to be up and running; you can
@ -153,13 +153,13 @@ You can install the Knative Serving and Build components together, or Build on i
1. Run the `kubectl apply` command to install Knative and its dependencies:
```bash
kubectl apply -f https://github.com/knative/serving/releases/download/v0.1.1/release.yaml
kubectl apply --filename https://github.com/knative/serving/releases/download/v0.1.1/release.yaml
```
1. Monitor the Knative components until all of the components show a
`STATUS` of `Running`:
```bash
kubectl get pods -n knative-serving
kubectl get pods -n knative-build
kubectl get pods --namespace knative-serving
kubectl get pods --namespace knative-build
```
### Installing Knative Build only
@ -167,12 +167,12 @@ You can install the Knative Serving and Build components together, or Build on i
1. Run the `kubectl apply` command to install
[Knative Build](https://github.com/knative/build) and its dependencies:
```bash
kubectl apply -f https://raw.githubusercontent.com/knative/serving/v0.1.1/third_party/config/build/release.yaml
kubectl apply --filename https://raw.githubusercontent.com/knative/serving/v0.1.1/third_party/config/build/release.yaml
```
1. Monitor the Knative Build components until all of the components show a
`STATUS` of `Running`:
```bash
kubectl get pods -n knative-build
kubectl get pods --namespace knative-build
Just as with the Istio components, it will take a few seconds for the Knative
components to be up and running; you can rerun the `kubectl get` command to see

View File

@ -61,7 +61,7 @@ Knative depends on Istio. Run the following to install Istio. (We are changing
```shell
curl -L https://raw.githubusercontent.com/knative/serving/v0.1.1/third_party/istio-0.8.0/istio.yaml \
| sed 's/LoadBalancer/NodePort/' \
| kubectl apply -f -
| kubectl apply --filename -
# Label the default namespace with istio-injection=enabled.
kubectl label namespace default istio-injection=enabled
@ -71,7 +71,7 @@ Monitor the Istio components until all of the components show a `STATUS` of
`Running` or `Completed`:
```shell
kubectl get pods -n istio-system
kubectl get pods --namespace istio-system
```
It will take a few minutes for all the components to be up and running; you can
@ -92,14 +92,14 @@ the Knative components. To use the provided `release-lite.yaml` release, run:
```shell
curl -L https://github.com/knative/serving/releases/download/v0.1.1/release-lite.yaml \
| sed 's/LoadBalancer/NodePort/' \
| kubectl apply -f -
| kubectl apply --filename -
```
Monitor the Knative components until all of the components show a `STATUS` of
`Running`:
```shell
kubectl get pods -n knative-serving
kubectl get pods --namespace knative-serving
```
Just as with the Istio components, it will take a few seconds for the Knative
@ -127,7 +127,7 @@ head to the [sample apps](../serving/samples/README.md) repo.
You can use the following command to look up the value to use for the {IP_ADDRESS} placeholder
used in the samples:
```shell
echo $(minikube ip):$(kubectl get svc knative-ingressgateway -n istio-system -o 'jsonpath={.spec.ports[?(@.port==80)].nodePort}')
echo $(minikube ip):$(kubectl get svc knative-ingressgateway --namespace istio-system --output 'jsonpath={.spec.ports[?(@.port==80)].nodePort}')
```
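In the piped hunks above, the trailing `-` tells `kubectl apply --filename` to read the manifest from standard input, which is what allows the `sed` rewrite of `LoadBalancer` to `NodePort` before anything reaches the cluster. A minimal sketch of the same pattern:

```shell
# Download a release manifest, rewrite service types for Minikube, and
# apply the result straight from stdin.
curl -L https://github.com/knative/serving/releases/download/v0.1.1/release-lite.yaml \
  | sed 's/LoadBalancer/NodePort/' \
  | kubectl apply --filename -
```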
## Cleaning up

View File

@ -32,7 +32,7 @@ Knative depends on Istio. Istio workloads require privileged mode for Init Conta
1. Install Istio:
```bash
kubectl apply -f https://raw.githubusercontent.com/knative/serving/v0.1.1/third_party/istio-0.8.0/istio.yaml
kubectl apply --filename https://raw.githubusercontent.com/knative/serving/v0.1.1/third_party/istio-0.8.0/istio.yaml
```
1. Label the default namespace with `istio-injection=enabled`:
```bash
@ -41,7 +41,7 @@ Knative depends on Istio. Istio workloads require privileged mode for Init Conta
1. Monitor the Istio components until all of the components show a `STATUS` of
`Running` or `Completed`:
```bash
kubectl get pods -n istio-system
kubectl get pods --namespace istio-system
```
It will take a few minutes for all the components to be up and running; you can
@ -58,13 +58,13 @@ You can install the Knative Serving and Build components together, or Build on i
1. Run the `kubectl apply` command to install Knative and its dependencies:
```bash
kubectl apply -f https://github.com/knative/serving/releases/download/v0.1.1/release.yaml
kubectl apply --filename https://github.com/knative/serving/releases/download/v0.1.1/release.yaml
```
1. Monitor the Knative components until all of the components show a
`STATUS` of `Running`:
```bash
kubectl get pods -n knative-serving
kubectl get pods -n knative-build
kubectl get pods --namespace knative-serving
kubectl get pods --namespace knative-build
```
### Installing Knative Build only
@ -72,12 +72,12 @@ You can install the Knative Serving and Build components together, or Build on i
1. Run the `kubectl apply` command to install
[Knative Build](https://github.com/knative/build) and its dependencies:
```bash
kubectl apply -f https://raw.githubusercontent.com/knative/serving/v0.1.1/third_party/config/build/release.yaml
kubectl apply --filename https://raw.githubusercontent.com/knative/serving/v0.1.1/third_party/config/build/release.yaml
```
1. Monitor the Knative Build components until all of the components show a
`STATUS` of `Running`:
```bash
kubectl get pods -n knative-build
kubectl get pods --namespace knative-build
Just as with the Istio components, it will take a few seconds for the Knative
components to be up and running; you can rerun the `kubectl get` command to see

View File

@ -20,14 +20,14 @@ Containers.
1. Install Istio:
```bash
kubectl apply -f https://raw.githubusercontent.com/knative/serving/v0.1.1/third_party/istio-0.8.0/istio.yaml
kubectl apply --filename https://raw.githubusercontent.com/knative/serving/v0.1.1/third_party/istio-0.8.0/istio.yaml
```
1. Label the default namespace with `istio-injection=enabled`:
```bash
kubectl label namespace default istio-injection=enabled
```
1. Monitor the Istio components until all of the components show a `STATUS` of
`Running` or `Completed`: `bash kubectl get pods -n istio-system`
`Running` or `Completed`: `kubectl get pods --namespace istio-system`
It will take a few minutes for all the components to be up and running; you can
rerun the command to see the current status.
@ -44,13 +44,13 @@ You can install the Knative Serving and Build components together, or Build on i
1. Run the `kubectl apply` command to install Knative and its dependencies:
```bash
kubectl apply -f https://github.com/knative/serving/releases/download/v0.1.1/release.yaml
kubectl apply --filename https://github.com/knative/serving/releases/download/v0.1.1/release.yaml
```
1. Monitor the Knative components until all of the components show a
`STATUS` of `Running`:
```bash
kubectl get pods -n knative-serving
kubectl get pods -n knative-build
kubectl get pods --namespace knative-serving
kubectl get pods --namespace knative-build
```
### Installing Knative Build only
@ -58,12 +58,12 @@ You can install the Knative Serving and Build components together, or Build on i
1. Run the `kubectl apply` command to install
[Knative Build](https://github.com/knative/build) and its dependencies:
```bash
kubectl apply -f https://raw.githubusercontent.com/knative/serving/v0.1.1/third_party/config/build/release.yaml
kubectl apply --filename https://raw.githubusercontent.com/knative/serving/v0.1.1/third_party/config/build/release.yaml
```
1. Monitor the Knative Build components until all of the components show a
`STATUS` of `Running`:
```bash
kubectl get pods -n knative-build
kubectl get pods --namespace knative-build
Just as with the Istio components, it will take a few seconds for the Knative
components to be up and running; you can rerun the `kubectl get` command to see

View File

@ -4,7 +4,7 @@ If you want to check what version of Knative serving you have installed,
enter the following command:
```bash
kubectl describe deploy controller -n knative-serving
kubectl describe deploy controller --namespace knative-serving
```
This will return the description for the `knative-serving` controller; this
@ -34,4 +34,4 @@ On the container details page, you'll see a section titled
of Knative you have installed will appear in the list as `v0.1.1`, or whatever
version you have installed:
![Shows list of tags on container details page; v0.1.1 is the Knative version and is the first tag.](../images/knative-version.png)
![Shows list of tags on container details page; v0.1.1 is the Knative version and is the first tag.](../images/knative-version.png)

View File

@ -62,7 +62,7 @@ the image accordingly.
From the directory where the new `service.yaml` file was created, apply the configuration:
```bash
kubectl apply -f service.yaml
kubectl apply --filename service.yaml
```
Now that your service is created, Knative will perform the following steps:
assigned an external IP address.
1. To find the IP address for your service, enter:
```shell
kubectl get svc knative-ingressgateway -n istio-system
kubectl get svc knative-ingressgateway --namespace istio-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
knative-ingressgateway LoadBalancer 10.23.247.74 35.203.155.229 80:32380/TCP,443:32390/TCP,32400:32400/TCP 2d
@ -94,20 +94,20 @@ asssigned an external IP address.
You can also export the IP address as a variable with the following command:
```shell
export IP_ADDRESS=$(kubectl get svc knative-ingressgateway -n istio-system -o 'jsonpath={.status.loadBalancer.ingress[0].ip}')
export IP_ADDRESS=$(kubectl get svc knative-ingressgateway --namespace istio-system --output 'jsonpath={.status.loadBalancer.ingress[0].ip}')
```
> Note: if you use minikube or a baremetal cluster that has no external load balancer, the
`EXTERNAL-IP` field is shown as `<pending>`. You need to use `NodeIP` and `NodePort` to
interact with your app instead. To get your app's `NodeIP` and `NodePort`, enter the following command:
```shell
export IP_ADDRESS=$(kubectl get node -o 'jsonpath={.items[0].status.addresses[0].address}'):$(kubectl get svc knative-ingressgateway -n istio-system -o 'jsonpath={.spec.ports[?(@.port==80)].nodePort}')
export IP_ADDRESS=$(kubectl get node --output 'jsonpath={.items[0].status.addresses[0].address}'):$(kubectl get svc knative-ingressgateway --namespace istio-system --output 'jsonpath={.spec.ports[?(@.port==80)].nodePort}')
```
1. To find the host URL for your service, enter:
```shell
kubectl get ksvc helloworld-go -o=custom-columns=NAME:.metadata.name,DOMAIN:.status.domain
kubectl get ksvc helloworld-go --output=custom-columns=NAME:.metadata.name,DOMAIN:.status.domain
NAME DOMAIN
helloworld-go helloworld-go.default.example.com
```
@ -121,7 +121,7 @@ asssigned an external IP address.
You can also export the host URL as a variable using the following command:
```shell
export HOST_URL=$(kubectl get ksvc helloworld-go -o jsonpath='{.status.domain}')
export HOST_URL=$(kubectl get ksvc helloworld-go --output jsonpath='{.status.domain}')
```
If you changed the name from `helloworld-go` to something else when creating
@ -159,7 +159,7 @@ You've successfully deployed your first application using Knative!
To remove the sample app from your cluster, delete the service record:
```shell
kubectl delete -f service.yaml
kubectl delete --filename service.yaml
```
---

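With the `IP_ADDRESS` and `HOST_URL` variables exported in the hunks above, the guide's request against the deployed service is made roughly as follows (a sketch; the exact curl invocation lies outside the changed lines):

```shell
# Hit the ingress IP while presenting the service's domain as the Host
# header so Knative routes the request to helloworld-go.
curl -H "Host: ${HOST_URL}" "http://${IP_ADDRESS}"
```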
View File

@ -5,7 +5,7 @@ the visualization tool for [Prometheus](https://prometheus.io/).
1. To open Grafana, enter the following command:
```
kubectl port-forward -n monitoring $(kubectl get pods -n monitoring --selector=app=grafana --output=jsonpath="{.items..metadata.name}") 3000
kubectl port-forward --namespace monitoring $(kubectl get pods --namespace monitoring --selector=app=grafana --output=jsonpath="{.items..metadata.name}") 3000
```
* This starts a local proxy of Grafana on port 3000. For security reasons, the Grafana UI is exposed only within the cluster.

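While the port-forward above is running, Grafana is reachable on local port 3000; a quick way to confirm the tunnel is answering (a sketch):

```shell
# Expect an HTTP status code (for example 200 or 302) from the local Grafana proxy.
curl --silent --output /dev/null --write-out '%{http_code}\n' http://localhost:3000
```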
View File

@ -33,7 +33,7 @@ Run the following command to get the `status` of the `Route` object with which
you deployed your application:
```shell
kubectl get route <route-name> -o yaml
kubectl get route <route-name> --output yaml
```
The `conditions` in `status` provide the reason if there is any failure. For
@ -48,7 +48,7 @@ command to get the name of the `Revision` created for you deployment
(look up the configuration name in the `Route` .yaml file):
```shell
kubectl get configuration <configuration-name> -o jsonpath="{.status.latestCreatedRevisionName}"
kubectl get configuration <configuration-name> --output jsonpath="{.status.latestCreatedRevisionName}"
```
If you configure your `Route` with `Revision` directly, look up the revision
@ -57,7 +57,7 @@ name in the `Route` yaml file.
Then run the following command:
```shell
kubectl get revision <revision-name> -o yaml
kubectl get revision <revision-name> --output yaml
```
A ready `Revision` should have the following condition in `status`:
@ -103,7 +103,7 @@ Choose one and use the following command to see detailed information for its
`status`. Some useful fields are `conditions` and `containerStatuses`:
```shell
kubectl get pod <pod-name> -o yaml
kubectl get pod <pod-name> --output yaml
```
@ -115,7 +115,7 @@ If you are using Build to deploy, run the following command to get the Build for
your `Revision`:
```shell
kubectl get build $(kubectl get revision <revision-name> -o jsonpath="{.spec.buildName}") -o yaml
kubectl get build $(kubectl get revision <revision-name> --output jsonpath="{.spec.buildName}") --output yaml
```
If there is any failure, the `conditions` in `status` provide the reason. To

View File

@ -49,7 +49,7 @@ In the [GCP console](https://console.cloud.google.com/networking/addresses/add?_
Run the following command to configure the external IP of the
`knative-ingressgateway` service to the static IP that you reserved:
```shell
kubectl patch svc knative-ingressgateway -n istio-system --patch '{"spec": { "loadBalancerIP": "<your-reserved-static-ip>" }}'
kubectl patch svc knative-ingressgateway --namespace istio-system --patch '{"spec": { "loadBalancerIP": "<your-reserved-static-ip>" }}'
service "knative-ingressgateway" patched
```
@ -57,7 +57,7 @@ service "knative-ingressgateway" patched
Run the following command to ensure that the external IP of the "knative-ingressgateway" service has been updated:
```shell
kubectl get svc knative-ingressgateway -n istio-system
kubectl get svc knative-ingressgateway --namespace istio-system
```
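For a scripted check, the same verification can ask for only the external IP with `--output jsonpath`; a sketch:

```shell
# Print just the gateway's external IP; it should match the reserved static IP.
kubectl get svc knative-ingressgateway --namespace istio-system \
  --output 'jsonpath={.status.loadBalancer.ingress[0].ip}'
```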
The output should show the assigned static IP address under the EXTERNAL-IP column:
```

View File

@ -17,19 +17,19 @@ skip this step and continue to
- Install Knative monitoring components from the root of the [Serving repository](https://github.com/knative/serving):
```shell
kubectl apply -R -f config/monitoring/100-common \
-f config/monitoring/150-elasticsearch \
-f third_party/config/monitoring/common \
-f third_party/config/monitoring/elasticsearch \
-f config/monitoring/200-common \
-f config/monitoring/200-common/100-istio.yaml
kubectl apply --recursive --filename config/monitoring/100-common \
--filename config/monitoring/150-elasticsearch \
--filename third_party/config/monitoring/common \
--filename third_party/config/monitoring/elasticsearch \
--filename config/monitoring/200-common \
--filename config/monitoring/200-common/100-istio.yaml
```
- The installation is complete when logging & monitoring components are all
reported `Running` or `Completed`:
```shell
kubectl get pods -n monitoring --watch
kubectl get pods --namespace monitoring --watch
```
```
@ -94,11 +94,11 @@ own Fluentd image and modify the configuration first. See
3. Install Knative monitoring components:
```shell
kubectl apply -R -f config/monitoring/100-common \
-f config/monitoring/150-stackdriver-prod \
-f third_party/config/monitoring/common \
-f config/monitoring/200-common \
-f config/monitoring/200-common/100-istio.yaml
kubectl apply --recursive --filename config/monitoring/100-common \
--filename config/monitoring/150-stackdriver-prod \
--filename third_party/config/monitoring/common \
--filename config/monitoring/200-common \
--filename config/monitoring/200-common/100-istio.yaml
```
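`-R` becomes `--recursive` in these hunks; combined with `--filename` pointing at a directory, kubectl descends into subdirectories and applies every manifest it finds. A small sketch of the flag pair on its own:

```shell
# Apply every manifest under config/monitoring/100-common, including
# manifests in nested directories.
kubectl apply --recursive --filename config/monitoring/100-common
```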
## Learn More

View File

@ -34,7 +34,7 @@ value with the IP ranges of your cluster.
Run the following command to edit the `config-network` map:
```shell
kubectl edit configmap config-network -n knative-serving
kubectl edit configmap config-network --namespace knative-serving
```
Then, use an editor of your choice to change the `istio.sidecar.includeOutboundIPRanges` parameter value
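Where an interactive `kubectl edit` is inconvenient, the same value can be set non-interactively with `kubectl patch` (a sketch; the `10.0.0.0/8` range is only a placeholder for your cluster's actual IP ranges):

```shell
# Set istio.sidecar.includeOutboundIPRanges without opening an editor.
kubectl patch configmap config-network --namespace knative-serving \
  --type merge \
  --patch '{"data":{"istio.sidecar.includeOutboundIPRanges":"10.0.0.0/8"}}'
```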
@ -74,7 +74,7 @@ Verify that the `traffic.sidecar.istio.io/includeOutboundIPRanges` annotation ma
expected value from the config-map.
```shell
$ kubectl get pod ${POD_NAME} -o yaml
$ kubectl get pod ${POD_NAME} --output yaml
apiVersion: v1
kind: Pod

View File

@ -52,12 +52,12 @@ Build the application container and publish it to a container registry:
1. Deploy the Knative Serving sample:
```
kubectl apply -f serving/samples/autoscale-go/service.yaml
kubectl apply --filename serving/samples/autoscale-go/service.yaml
```
1. Find the ingress hostname and IP and export as an environment variable:
```
export IP_ADDRESS=`kubectl get svc knative-ingressgateway -n istio-system -o jsonpath="{.status.loadBalancer.ingress[*].ip}"`
export IP_ADDRESS=`kubectl get svc knative-ingressgateway --namespace istio-system --output jsonpath="{.status.loadBalancer.ingress[*].ip}"`
```
## View the Autoscaling Capabilities
@ -112,7 +112,7 @@ ceil(8.75) = 9
View the Knative Serving Scaling and Request dashboards (if configured).
```
kubectl port-forward -n monitoring $(kubectl get pods -n monitoring --selector=app=grafana --output=jsonpath="{.items..metadata.name}") 3000
kubectl port-forward --namespace monitoring $(kubectl get pods --namespace monitoring --selector=app=grafana --output=jsonpath="{.items..metadata.name}") 3000
```
![scale dashboard](scale-dashboard.png)
@ -149,7 +149,7 @@ kubectl port-forward -n monitoring $(kubectl get pods -n monitoring --selector=a
## Cleanup
```
kubectl delete -f serving/samples/autoscale-go/service.yaml
kubectl delete --filename serving/samples/autoscale-go/service.yaml
```
## Further reading

View File

@ -40,7 +40,7 @@ spec:
Save the file, then deploy the configuration to your cluster:
```bash
kubectl apply -f blue-green-demo-config.yaml
kubectl apply --filename blue-green-demo-config.yaml
configuration "blue-green-demo" configured
```
@ -63,7 +63,7 @@ spec:
Save the file, then apply the route to your cluster:
```bash
kubectl apply -f blue-green-demo-route.yaml
kubectl apply --filename blue-green-demo-route.yaml
route "blue-green-demo" configured
```
@ -112,7 +112,7 @@ spec:
Save the file, then apply the updated configuration to your cluster:
```bash
kubectl apply -f blue-green-demo-config.yaml
kubectl apply --filename blue-green-demo-config.yaml
configuration "blue-green-demo" configured
```
@ -140,7 +140,7 @@ spec:
Save the file, then apply the updated route to your cluster:
```bash
kubectl apply -f blue-green-demo-route.yaml
kubectl apply --filename blue-green-demo-route.yaml
route "blue-green-demo" configured
```
@ -175,7 +175,7 @@ spec:
Save the file, then apply the updated route to your cluster:
```bash
kubectl apply -f blue-green-demo-route.yaml
kubectl apply --filename blue-green-demo-route.yaml
route "blue-green-demo" configured
```
@ -210,7 +210,7 @@ spec:
Save the file, then apply the updated route to your cluster:
```bash
kubectl apply -f blue-green-demo-route.yaml
kubectl apply --filename blue-green-demo-route.yaml
route "blue-green-demo" configured
```

View File

@ -18,7 +18,7 @@ Knative Serving will run pods as the default service account in the namespace wh
you created your resources. You can see its body by entering the following command:
```shell
$ kubectl get serviceaccount default -o yaml
$ kubectl get serviceaccount default --output yaml
apiVersion: v1
kind: ServiceAccount
metadata:
@ -129,7 +129,7 @@ stringData:
When finished with the replacements, create the build bot by entering the following command:
```shell
kubectl create -f build-bot.yaml
kubectl create --filename build-bot.yaml
```
### 3. Installing a Build template and updating `manifest.yaml`
@ -138,7 +138,7 @@ kubectl create -f build-bot.yaml
by entering the following command:
```shell
kubectl apply -f https://raw.githubusercontent.com/knative/build-templates/master/kaniko/kaniko.yaml
kubectl apply --filename https://raw.githubusercontent.com/knative/build-templates/master/kaniko/kaniko.yaml
```
1. Open `manifest.yaml` and substitute your private DockerHub repository name for
@ -149,7 +149,7 @@ kubectl create -f build-bot.yaml
At this point, you're ready to deploy your application:
```shell
kubectl create -f manifest.yaml
kubectl create --filename manifest.yaml
```
To make sure everything works, capture the host URL and the IP of the ingress endpoint
@ -158,12 +158,12 @@ in environment variables:
```
# Put the Host URL into an environment variable.
export SERVICE_HOST=`kubectl get route private-repos \
-o jsonpath="{.status.domain}"`
--output jsonpath="{.status.domain}"`
```
```
# Put the IP address into an environment variable
export SERVICE_IP=`kubectl get svc knative-ingressgateway -n istio-system -o jsonpath="{.status.loadBalancer.ingress[*].ip}"`
export SERVICE_IP=`kubectl get svc knative-ingressgateway --namespace istio-system --output jsonpath="{.status.loadBalancer.ingress[*].ip}"`
```
> Note: If your cluster is running outside a cloud provider (for example, on Minikube),
@ -171,7 +171,7 @@ export SERVICE_IP=`kubectl get svc knative-ingressgateway -n istio-system -o jso
`hostIP` and `nodePort` as the service IP:
```shell
export SERVICE_IP=$(kubectl get po -l knative=ingressgateway -n istio-system -o 'jsonpath= . {.items[0].status.hostIP}'):$(kubectl get svc knative-ingressgateway -n istio-system -o 'jsonpath={.spec.ports[? (@.port==80)].nodePort}')
export SERVICE_IP=$(kubectl get po --selector knative=ingressgateway --namespace istio-system --output 'jsonpath={.items[0].status.hostIP}'):$(kubectl get svc knative-ingressgateway --namespace istio-system --output 'jsonpath={.spec.ports[?(@.port==80)].nodePort}')
```
Now curl the service IP to make sure the deployment succeeded:

View File

@ -18,7 +18,7 @@ in the [build-templates](https://github.com/knative/build-templates/) repo.
Save a copy of `buildpack.yaml`, then install it:
```shell
kubectl apply -f https://raw.githubusercontent.com/knative/build-templates/master/buildpack/buildpack.yaml
kubectl apply --filename https://raw.githubusercontent.com/knative/build-templates/master/buildpack/buildpack.yaml
```
Then you can deploy this to Knative Serving from the root directory
@ -31,13 +31,13 @@ export REPO="gcr.io/<your-project-here>"
perl -pi -e "s@DOCKER_REPO_OVERRIDE@$REPO@g" sample.yaml
# Create the Kubernetes resources
kubectl apply -f sample.yaml
kubectl apply --filename sample.yaml
```
Once deployed, you will see that it first builds:
```shell
$ kubectl get revision -o yaml
$ kubectl get revision --output yaml
apiVersion: v1
items:
- apiVersion: serving.knative.dev/v1alpha1
@ -56,7 +56,7 @@ Once the `BuildComplete` status is `True`, resource creation begins.
To access this service using `curl`, we first need to determine its ingress address:
```shell
$ watch kubectl get svc knative-ingressgateway -n istio-system
$ watch kubectl get svc knative-ingressgateway --namespace istio-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
knative-ingressgateway LoadBalancer 10.23.247.74 35.203.155.229 80:32380/TCP,443:32390/TCP,32400:32400/TCP 2d
```
@ -66,10 +66,10 @@ the host URL and the IP of the ingress endpoint in environment variables:
```shell
# Put the Host name into an environment variable.
export SERVICE_HOST=`kubectl get route buildpack-sample-app -o jsonpath="{.status.domain}"`
export SERVICE_HOST=`kubectl get route buildpack-sample-app --output jsonpath="{.status.domain}"`
# Put the ingress IP into an environment variable.
export SERVICE_IP=`kubectl get svc knative-ingressgateway -n istio-system -o jsonpath="{.status.loadBalancer.ingress[*].ip}"`
export SERVICE_IP=`kubectl get svc knative-ingressgateway --namespace istio-system --output jsonpath="{.status.loadBalancer.ingress[*].ip}"`
```
Now curl the service IP to make sure the deployment succeeded:
@ -86,7 +86,7 @@ To clean up the sample service:
```shell
# Clean up the serving resources
kubectl delete -f serving/samples/buildpack-app-dotnet/sample.yaml
kubectl delete --filename serving/samples/buildpack-app-dotnet/sample.yaml
# Clean up the build template
kubectl delete buildtemplate buildpack
```

View File

@ -19,7 +19,7 @@ from the [build-templates](https://github.com/knative/build-templates/) repo.
Save a copy of `buildpack.yaml`, then install it:
```shell
kubectl apply -f https://raw.githubusercontent.com/knative/build-templates/master/buildpack/buildpack.yaml
kubectl apply --filename https://raw.githubusercontent.com/knative/build-templates/master/buildpack/buildpack.yaml
```
Then you can deploy this to Knative Serving from the root directory via:
@ -30,13 +30,13 @@ export REPO="gcr.io/<your-project-here>"
perl -pi -e "s@DOCKER_REPO_OVERRIDE@$REPO@g" sample.yaml
kubectl apply -f sample.yaml
kubectl apply --filename sample.yaml
```
Once deployed, you will see that it first builds:
```shell
$ kubectl get revision -o yaml
$ kubectl get revision --output yaml
apiVersion: v1
items:
- apiVersion: serving.knative.dev/v1alpha1
@ -54,7 +54,7 @@ Once the `BuildComplete` status is `True`, resource creation begins.
To access this service using `curl`, we first need to determine its ingress address:
```shell
watch kubectl get svc knative-ingressgateway -n istio-system
watch kubectl get svc knative-ingressgateway --namespace istio-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
knative-ingressgateway LoadBalancer 10.23.247.74 35.203.155.229 80:32380/TCP,443:32390/TCP,32400:32400/TCP 2d
```
@ -64,10 +64,10 @@ the host URL and the IP of the ingress endpoint in environment variables:
```shell
# Put the Host name into an environment variable.
$ export SERVICE_HOST=`kubectl get route buildpack-function -o jsonpath="{.status.domain}"`
$ export SERVICE_HOST=`kubectl get route buildpack-function --output jsonpath="{.status.domain}"`
# Put the ingress IP into an environment variable.
$ export SERVICE_IP=`kubectl get svc knative-ingressgateway -n istio-system -o jsonpath="{.status.loadBalancer.ingress[*].ip}"`
$ export SERVICE_IP=`kubectl get svc knative-ingressgateway --namespace istio-system --output jsonpath="{.status.loadBalancer.ingress[*].ip}"`
```
Now curl the service IP to make sure the deployment succeeded:
@ -84,7 +84,7 @@ To clean up the sample service:
```shell
# Clean up the serving resources
kubectl delete -f serving/samples/buildpack-function-nodejs/sample.yaml
kubectl delete --filename serving/samples/buildpack-function-nodejs/sample.yaml
# Clean up the build template
kubectl delete buildtemplate buildpack
```

View File

@ -54,7 +54,7 @@ through a webhook.
1. Apply the secret to your cluster:
```shell
kubectl apply -f github-secret.yaml
kubectl apply --filename github-secret.yaml
```
1. Next, update the `service.yaml` file in the project to reference the tagged
@ -90,7 +90,7 @@ through a webhook.
1. Use `kubectl` to apply the `service.yaml` file.
```shell
$ kubectl apply -f service.yaml
$ kubectl apply --filename service.yaml
service "gitwebhook" created
```
@ -130,16 +130,16 @@ Once deployed, you can inspect the created resources with `kubectl` commands:
```shell
# This will show the Knative service that we created:
kubectl get service.serving.knative.dev -o yaml
kubectl get service.serving.knative.dev --output yaml
# This will show the Route, created by the service:
kubectl get route -o yaml
kubectl get route --output yaml
# This will show the Configuration, created by the service:
kubectl get configurations -o yaml
kubectl get configurations --output yaml
# This will show the Revision, created by the Configuration:
kubectl get revisions -o yaml
kubectl get revisions --output yaml
```
## Testing the service
@ -154,6 +154,6 @@ right, you'll see the title of the PR will be modified, with the text
To clean up the sample service:
```shell
kubectl delete -f service.yaml
kubectl delete --filename service.yaml
```

View File

@ -26,7 +26,7 @@ docker push "${REPO}/serving/samples/grpc-ping-go"
perl -pi -e "s@github.com/knative/docs/serving/samples/grpc-ping-go@${REPO}/serving/samples/grpc-ping-go@g" serving/samples/grpc-ping-go/*.yaml
# Deploy the Knative sample
kubectl apply -f serving/samples/grpc-ping-go/sample.yaml
kubectl apply --filename serving/samples/grpc-ping-go/sample.yaml
```
@ -36,10 +36,10 @@ kubectl apply -f serving/samples/grpc-ping-go/sample.yaml
```
# Put the Host name into an environment variable.
export SERVICE_HOST=`kubectl get route grpc-ping -o jsonpath="{.status.domain}"`
export SERVICE_HOST=`kubectl get route grpc-ping --output jsonpath="{.status.domain}"`
# Put the ingress IP into an environment variable.
export SERVICE_IP=`kubectl get svc knative-ingressgateway -n istio-system -o jsonpath="{.status.loadBalancer.ingress[*].ip}"`
export SERVICE_IP=`kubectl get svc knative-ingressgateway --namespace istio-system --output jsonpath="{.status.loadBalancer.ingress[*].ip}"`
```
1. Use the client to send message streams to the gRPC server

View File

@ -107,7 +107,7 @@ folder) you're ready to build and deploy the sample app.
the previous step. Apply the configuration using `kubectl`:
```shell
kubectl apply -f service.yaml
kubectl apply --filename service.yaml
```
1. Now that your service is created, Knative will perform the following steps:
@ -121,7 +121,7 @@ folder) you're ready to build and deploy the sample app.
an external IP address.
```shell
kubectl get svc knative-ingressgateway -n istio-system
kubectl get svc knative-ingressgateway --namespace istio-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
knative-ingressgateway LoadBalancer 10.23.247.74 35.203.155.229 80:32380/TCP,443:32390/TCP,32400:32400/TCP 2d
@ -130,7 +130,7 @@ folder) you're ready to build and deploy the sample app.
1. To find the URL for your service, use
```
kubectl get ksvc helloworld-csharp -o=custom-columns=NAME:.metadata.name,DOMAIN:.status.domain
kubectl get ksvc helloworld-csharp --output=custom-columns=NAME:.metadata.name,DOMAIN:.status.domain
NAME DOMAIN
helloworld-csharp helloworld-csharp.default.example.com
```
@ -154,5 +154,5 @@ folder) you're ready to build and deploy the sample app.
To remove the sample app from your cluster, delete the service record:
```shell
kubectl delete -f service.yaml
kubectl delete --filename service.yaml
```

View File

@ -138,7 +138,7 @@ directions above.
the previous step. Apply the configuration using `kubectl`:
```shell
kubectl apply -f service.yaml
kubectl apply --filename service.yaml
```
1. Now that your service is created, Knative will perform the following steps:
@ -152,7 +152,7 @@ directions above.
an external IP address.
```
kubectl get svc knative-ingressgateway -n istio-system
kubectl get svc knative-ingressgateway --namespace istio-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
knative-ingressgateway LoadBalancer 10.35.254.218 35.225.171.32 80:32380/TCP,443:32390/TCP,32400:32400/TCP 1h
@ -161,7 +161,7 @@ knative-ingressgateway LoadBalancer 10.35.254.218 35.225.171.32 80:32380
1. To find the URL for your service, use
```
kubectl get ksvc helloworld-elixir -o=custom-columns=NAME:.metadata.name,DOMAIN:.status.domain
kubectl get ksvc helloworld-elixir --output=custom-columns=NAME:.metadata.name,DOMAIN:.status.domain
NAME DOMAIN
helloworld-elixir helloworld-elixir.default.example.com
@ -296,5 +296,5 @@ knative-ingressgateway LoadBalancer 10.35.254.218 35.225.171.32 80:32380
To remove the sample app from your cluster, delete the service record:
```shell
kubectl delete -f service.yaml
kubectl delete --filename service.yaml
```

View File

@ -118,7 +118,7 @@ folder) you're ready to build and deploy the sample app.
the previous step. Apply the configuration using `kubectl`:
```shell
kubectl apply -f service.yaml
kubectl apply --filename service.yaml
```
1. Now that your service is created, Knative will perform the following steps:
@ -132,7 +132,7 @@ folder) you're ready to build and deploy the sample app.
an external IP address.
```shell
kubectl get svc knative-ingressgateway -n istio-system
kubectl get svc knative-ingressgateway --namespace istio-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
knative-ingressgateway LoadBalancer 10.23.247.74 35.203.155.229 80:32380/TCP,443:32390/TCP,32400:32400/TCP 2d
@ -141,7 +141,7 @@ folder) you're ready to build and deploy the sample app.
1. To find the URL for your service, use
```
kubectl get ksvc helloworld-go -o=custom-columns=NAME:.metadata.name,DOMAIN:.status.domain
kubectl get ksvc helloworld-go --output=custom-columns=NAME:.metadata.name,DOMAIN:.status.domain
NAME DOMAIN
helloworld-go helloworld-go.default.example.com
```
@ -165,5 +165,5 @@ folder) you're ready to build and deploy the sample app.
To remove the sample app from your cluster, delete the service record:
```shell
kubectl delete -f service.yaml
kubectl delete --filename service.yaml
```

View File

@ -145,7 +145,7 @@ folder) you're ready to build and deploy the sample app.
the previous step. Apply the configuration using `kubectl`:
```shell
kubectl apply -f service.yaml
kubectl apply --filename service.yaml
```
1. Now that your service is created, Knative will perform the following steps:
@ -159,7 +159,7 @@ folder) you're ready to build and deploy the sample app.
an external IP address.
```shell
kubectl get svc knative-ingressgateway -n istio-system
kubectl get svc knative-ingressgateway --namespace istio-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
knative-ingressgateway LoadBalancer 10.23.247.74 35.203.155.229 80:32380/TCP,443:32390/TCP,32400:32400/TCP 2d
@ -169,13 +169,13 @@ folder) you're ready to build and deploy the sample app.
For minikube or bare-metal, get IP_ADDRESS by running the following command
```shell
echo $(kubectl get node -o 'jsonpath={.items[0].status.addresses[0].address}'):$(kubectl get svc knative-ingressgateway -n istio-system -o 'jsonpath={.spec.ports[?(@.port==80)].nodePort}')
echo $(kubectl get node --output 'jsonpath={.items[0].status.addresses[0].address}'):$(kubectl get svc knative-ingressgateway --namespace istio-system --output 'jsonpath={.spec.ports[?(@.port==80)].nodePort}')
```
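The address printed by that command can be captured and reused when calling the service; a sketch, assuming the default domain shown in the next step:

```shell
# Same node-IP:nodePort pair as above, stored for reuse.
export IP_ADDRESS=$(kubectl get node --output 'jsonpath={.items[0].status.addresses[0].address}'):$(kubectl get svc knative-ingressgateway --namespace istio-system --output 'jsonpath={.spec.ports[?(@.port==80)].nodePort}')
curl --header "Host: helloworld-haskell.default.example.com" http://${IP_ADDRESS}
```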
1. To find the URL for your service, enter:
```
kubectl get ksvc helloworld-haskell -o=custom-columns=NAME:.metadata.name,DOMAIN:.status.domain
kubectl get ksvc helloworld-haskell --output=custom-columns=NAME:.metadata.name,DOMAIN:.status.domain
NAME DOMAIN
helloworld-haskell helloworld-haskell.default.example.com
```
@ -199,6 +199,6 @@ folder) you're ready to build and deploy the sample app.
To remove the sample app from your cluster, delete the service record:
```shell
kubectl delete -f service.yaml
kubectl delete --filename service.yaml
```
@ -131,7 +131,7 @@ folder) you're ready to build and deploy the sample app.
the previous step. Apply the configuration using `kubectl`:
```shell
kubectl apply -f service.yaml
kubectl apply --filename service.yaml
```
1. Now that your service is created, Knative will perform the following steps:
@ -145,7 +145,7 @@ folder) you're ready to build and deploy the sample app.
an external IP address.
```shell
kubectl get svc knative-ingressgateway -n istio-system
kubectl get svc knative-ingressgateway --namespace istio-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
knative-ingressgateway LoadBalancer 10.23.247.74 35.203.155.229 80:32380/TCP,443:32390/TCP,32400:32400/TCP 2d
@ -154,7 +154,7 @@ folder) you're ready to build and deploy the sample app.
1. To find the URL for your service, use
```
kubectl get ksvc helloworld-java -o=custom-columns=NAME:.metadata.name,DOMAIN:.status.domain
kubectl get ksvc helloworld-java --output=custom-columns=NAME:.metadata.name,DOMAIN:.status.domain
NAME DOMAIN
helloworld-java helloworld-java.default.example.com
```
@ -178,5 +178,5 @@ folder) you're ready to build and deploy the sample app.
To remove the sample app from your cluster, delete the service record:
```shell
kubectl delete -f service.yaml
kubectl delete --filename service.yaml
```
@ -149,7 +149,7 @@ folder) you're ready to build and deploy the sample app.
the previous step. Apply the configuration using `kubectl`:
```shell
kubectl apply -f service.yaml
kubectl apply --filename service.yaml
```
1. Now that your service is created, Knative will perform the following steps:
@ -163,7 +163,7 @@ folder) you're ready to build and deploy the sample app.
an external IP address.
```shell
kubectl get svc knative-ingressgateway -n istio-system
kubectl get svc knative-ingressgateway --namespace istio-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
knative-ingressgateway LoadBalancer 10.23.247.74 35.203.155.229 80:32380/TCP,443:32390/TCP,32400:32400/TCP 2d
@ -172,7 +172,7 @@ folder) you're ready to build and deploy the sample app.
1. To find the URL for your service, use
```
kubectl get ksvc helloworld-nodejs -o=custom-columns=NAME:.metadata.name,DOMAIN:.status.domain
kubectl get ksvc helloworld-nodejs --output=custom-columns=NAME:.metadata.name,DOMAIN:.status.domain
NAME DOMAIN
helloworld-nodejs helloworld-nodejs.default.example.com
```
@ -196,5 +196,5 @@ folder) you're ready to build and deploy the sample app.
To remove the sample app from your cluster, delete the service record:
```shell
kubectl delete -f service.yaml
kubectl delete --filename service.yaml
```
@ -90,7 +90,7 @@ you're ready to build and deploy the sample app.
the previous step. Apply the configuration using `kubectl`:
```shell
kubectl apply -f service.yaml
kubectl apply --filename service.yaml
```
1. Now that your service is created, Knative will perform the following steps:
@ -104,7 +104,7 @@ you're ready to build and deploy the sample app.
an external IP address.
```shell
kubectl get svc knative-ingressgateway -n istio-system
kubectl get svc knative-ingressgateway --namespace istio-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
knative-ingressgateway LoadBalancer 10.23.247.74 35.203.155.229 80:32380/TCP,443:32390/TCP,32400:32400/TCP 2d
@ -113,7 +113,7 @@ you're ready to build and deploy the sample app.
1. To find the URL for your service, use
```
kubectl get ksvc helloworld-php -o=custom-columns=NAME:.metadata.name,DOMAIN:.status.domain
kubectl get ksvc helloworld-php --output=custom-columns=NAME:.metadata.name,DOMAIN:.status.domain
NAME DOMAIN
helloworld-php helloworld-php.default.example.com
```
@ -137,5 +137,5 @@ you're ready to build and deploy the sample app.
To remove the sample app from your cluster, delete the service record:
```shell
kubectl delete -f service.yaml
kubectl delete --filename service.yaml
```
@ -102,7 +102,7 @@ folder) you're ready to build and deploy the sample app.
the previous step. Apply the configuration using `kubectl`:
```shell
kubectl apply -f service.yaml
kubectl apply --filename service.yaml
```
1. Now that your service is created, Knative will perform the following steps:
@ -116,7 +116,7 @@ folder) you're ready to build and deploy the sample app.
an external IP address.
```shell
kubectl get svc knative-ingressgateway -n istio-system
kubectl get svc knative-ingressgateway --namespace istio-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
knative-ingressgateway LoadBalancer 10.23.247.74 35.203.155.229 80:32380/TCP,443:32390/TCP,32400:32400/TCP 2d
@ -125,7 +125,7 @@ folder) you're ready to build and deploy the sample app.
1. To find the URL for your service, use
```
kubectl get ksvc helloworld-python -o=custom-columns=NAME:.metadata.name,DOMAIN:.status.domain
kubectl get ksvc helloworld-python --output=custom-columns=NAME:.metadata.name,DOMAIN:.status.domain
NAME DOMAIN
helloworld-python helloworld-python.default.example.com
```
@ -149,5 +149,5 @@ folder) you're ready to build and deploy the sample app.
To remove the sample app from your cluster, delete the service record:
```shell
kubectl delete -f service.yaml
kubectl delete --filename service.yaml
```
@ -117,7 +117,7 @@ you're ready to build and deploy the sample app.
the previous step. Apply the configuration using `kubectl`:
```shell
kubectl apply -f service.yaml
kubectl apply --filename service.yaml
```
1. Now that your service is created, Knative will perform the following steps:
@ -131,7 +131,7 @@ you're ready to build and deploy the sample app.
an external IP address.
```shell
kubectl get svc knative-ingressgateway -n istio-system
kubectl get svc knative-ingressgateway --namespace istio-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
knative-ingressgateway LoadBalancer 10.23.247.74 35.203.155.229 80:32380/TCP,443:32390/TCP,32400:32400/TCP 2d
@ -140,7 +140,7 @@ you're ready to build and deploy the sample app.
1. To find the URL for your service, use
```
kubectl get ksvc helloworld-ruby -o=custom-columns=NAME:.metadata.name,DOMAIN:.status.domain
kubectl get ksvc helloworld-ruby --output=custom-columns=NAME:.metadata.name,DOMAIN:.status.domain
NAME DOMAIN
helloworld-ruby helloworld-ruby.default.example.com
```
@ -163,5 +163,5 @@ you're ready to build and deploy the sample app.
To remove the sample app from your cluster, delete the service record:
```shell
kubectl delete -f service.yaml
kubectl delete --filename service.yaml
```
@ -133,7 +133,7 @@ folder) you're ready to build and deploy the sample app.
the previous step. Apply the configuration using `kubectl`:
```shell
kubectl apply -f service.yaml
kubectl apply --filename service.yaml
```
1. Now that your service is created, Knative will perform the following steps:
@ -147,7 +147,7 @@ folder) you're ready to build and deploy the sample app.
an external IP address.
```shell
kubectl get svc knative-ingressgateway -n istio-system
kubectl get svc knative-ingressgateway --namespace istio-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
knative-ingressgateway LoadBalancer 10.23.247.74 35.203.155.229 80:32380/TCP,443:32390/TCP,32400:32400/TCP 2d
@ -156,7 +156,7 @@ folder) you're ready to build and deploy the sample app.
1. To find the URL for your service, enter:
```
kubectl get ksvc helloworld-rust -o=custom-columns=NAME:.metadata.name,DOMAIN:.status.domain
kubectl get ksvc helloworld-rust --output=custom-columns=NAME:.metadata.name,DOMAIN:.status.domain
NAME DOMAIN
helloworld-rust helloworld-rust.default.example.com
```
@ -180,5 +180,5 @@ folder) you're ready to build and deploy the sample app.
To remove the sample app from your cluster, delete the service record:
```shell
kubectl delete -f service.yaml
kubectl delete --filename service.yaml
```
@ -69,7 +69,7 @@ docker push "${REPO}/serving/samples/knative-routing-go"
Deploy the Knative Serving sample:
```
kubectl apply -f serving/samples/knative-routing-go/sample.yaml
kubectl apply --filename serving/samples/knative-routing-go/sample.yaml
```
## Exploring the Routes
@ -80,12 +80,12 @@ service with:
* Check the shared Gateway:
```
kubectl get Gateway -n knative-serving -oyaml
kubectl get Gateway --namespace knative-serving --output yaml
```
* Check the corresponding Kubernetes service for the shared Gateway:
```
kubectl get svc knative-ingressgateway -n istio-system -oyaml
kubectl get svc knative-ingressgateway --namespace istio-system --output yaml
```
* Inspect the deployed Knative services with:
@ -98,13 +98,13 @@ You should see 2 Knative services: search-service and login-service.
1. Find the shared Gateway IP and export as an environment variable:
```
export GATEWAY_IP=`kubectl get svc knative-ingressgateway -n istio-system \
-o jsonpath="{.status.loadBalancer.ingress[*]['ip']}"`
export GATEWAY_IP=`kubectl get svc knative-ingressgateway --namespace istio-system \
--output jsonpath="{.status.loadBalancer.ingress[*]['ip']}"`
```
2. Find the "Search" service route and export as an environment variable:
```
export SERVICE_HOST=`kubectl get route search-service -o jsonpath="{.status.domain}"`
export SERVICE_HOST=`kubectl get route search-service --output jsonpath="{.status.domain}"`
```
3. Make a curl request to the service:
```
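# Illustrative request: it mirrors the login-service call shown further below,
# reusing the GATEWAY_IP and SERVICE_HOST variables exported above.
curl http://${GATEWAY_IP} --header "Host:${SERVICE_HOST}"
```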
@ -114,7 +114,7 @@ You should see: `Search Service is called !`
4. Similarly, you can also directly access "Login" service with:
```
export SERVICE_HOST=`kubectl get route login-service -o jsonpath="{.status.domain}"`
export SERVICE_HOST=`kubectl get route login-service --output jsonpath="{.status.domain}"`
```
```
curl http://${GATEWAY_IP} --header "Host:${SERVICE_HOST}"
@ -125,20 +125,20 @@ You should see: `Login Service is called !`
1. Apply the custom routing rules defined in `routing.yaml` file with:
```
kubectl apply -f serving/samples/knative-routing-go/routing.yaml
kubectl apply --filename serving/samples/knative-routing-go/routing.yaml
```
2. The `routing.yaml` file will generate a new VirtualService "entry-route" for
domain "example.com". View the VirtualService:
```
kubectl get VirtualService entry-route -oyaml
kubectl get VirtualService entry-route --output yaml
```
3. Send a request to the "Search" service and the "Login" service by using
corresponding URIs. You should get the same results as directly accessing these services.
* Get the ingress IP:
```
export GATEWAY_IP=`kubectl get svc knative-ingressgateway -n istio-system \
export GATEWAY_IP=`kubectl get svc knative-ingressgateway --namespace istio-system \
-o jsonpath="{.status.loadBalancer.ingress[*]['ip']}"`
```
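With the ingress IP exported, requests can then reach both services through the entry-route by path; a sketch, assuming `routing.yaml` maps the `/search` and `/login` URIs under `example.com`:

```
curl http://${GATEWAY_IP}/search --header "Host: example.com"
curl http://${GATEWAY_IP}/login --header "Host: example.com"
```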
@ -170,6 +170,6 @@ Gateway again. The Gateway proxy checks the updated host, and forwards it to
To clean up the sample resources:
```
kubectl delete -f serving/samples/knative-routing-go/sample.yaml
kubectl delete -f serving/samples/knative-routing-go/routing.yaml
kubectl delete --filename serving/samples/knative-routing-go/sample.yaml
kubectl delete --filename serving/samples/knative-routing-go/routing.yaml
```
@ -55,7 +55,7 @@ docker push "${REPO}/serving/samples/rest-api-go"
Deploy the Knative Serving sample:
```
kubectl apply -f serving/samples/rest-api-go/sample.yaml
kubectl apply --filename serving/samples/rest-api-go/sample.yaml
```
## Explore the Configuration
@ -64,17 +64,17 @@ Inspect the created resources with the `kubectl` commands:
* View the created Route resource:
```
kubectl get route -o yaml
kubectl get route --output yaml
```
* View the created Configuration resource:
```
kubectl get configurations -o yaml
kubectl get configurations --output yaml
```
* View the Revision that was created by our Configuration:
```
kubectl get revisions -o yaml
kubectl get revisions --output yaml
```
## Access the Service
@ -83,7 +83,7 @@ To access this service via `curl`, you need to determine its ingress address.
1. To determine if your service is ready:
```
kubectl get svc knative-ingressgateway -n istio-system --watch
kubectl get svc knative-ingressgateway --namespace istio-system --watch
```
When the service is ready, you'll see an IP address in the `EXTERNAL-IP` field:
@ -95,17 +95,17 @@ To access this service via `curl`, you need to determine its ingress address.
2. When the service is ready, export the ingress hostname and IP as environment variables:
```
export SERVICE_HOST=`kubectl get route stock-route-example -o jsonpath="{.status.domain}"`
export SERVICE_IP=`kubectl get svc knative-ingressgateway -n istio-system \
-o jsonpath="{.status.loadBalancer.ingress[*].ip}"`
export SERVICE_HOST=`kubectl get route stock-route-example --output jsonpath="{.status.domain}"`
export SERVICE_IP=`kubectl get svc knative-ingressgateway --namespace istio-system \
--output jsonpath="{.status.loadBalancer.ingress[*].ip}"`
```
* If your cluster is running outside a cloud provider (for example on Minikube),
your services will never get an external IP address. In that case, use the istio `hostIP` and `nodePort` as the service IP:
```
export SERVICE_IP=$(kubectl get po -l knative=ingressgateway -n istio-system \
-o 'jsonpath={.items[0].status.hostIP}'):$(kubectl get svc knative-ingressgateway -n istio-system \
-o 'jsonpath={.spec.ports[?(@.port==80)].nodePort}')
export SERVICE_IP=$(kubectl get po --selector knative=ingressgateway --namespace istio-system \
--output 'jsonpath={.items[0].status.hostIP}'):$(kubectl get svc knative-ingressgateway --namespace istio-system \
--output 'jsonpath={.spec.ports[?(@.port==80)].nodePort}')
```
3. Now use `curl` to make a request to the service:
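A sketch of such a request, assuming the sample exposes a `stock` endpoint as the route name above suggests:

```
curl --header "Host: $SERVICE_HOST" http://${SERVICE_IP}/stock
```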
@ -132,5 +132,5 @@ To access this service via `curl`, you need to determine its ingress address.
To clean up the sample service:
```
kubectl delete -f serving/samples/rest-api-go/sample.yaml
kubectl delete --filename serving/samples/rest-api-go/sample.yaml
```
@ -28,7 +28,7 @@ to perform a source-to-container build on your Kubernetes cluster.
Use kubectl to install the kaniko manifest:
```shell
kubectl apply -f https://raw.githubusercontent.com/knative/build-templates/master/kaniko/kaniko.yaml
kubectl apply --filename https://raw.githubusercontent.com/knative/build-templates/master/kaniko/kaniko.yaml
```
### Register secrets for Docker Hub
@ -87,9 +87,9 @@ available, but these are the key steps:
1. After you have created the manifest files, apply them to your cluster with `kubectl`:
```shell
$ kubectl apply -f docker-secret.yaml
$ kubectl apply --filename docker-secret.yaml
secret "basic-user-pass" created
$ kubectl apply -f service-account.yaml
$ kubectl apply --filename service-account.yaml
serviceaccount "build-bot" created
```
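For orientation, the two manifests generally take the following shape for Docker Hub basic auth; the resource names match the output above, but the rest is a placeholder sketch rather than the sample's exact files:

```shell
cat <<'EOF' | kubectl apply --filename -
apiVersion: v1
kind: Secret
metadata:
  name: basic-user-pass
  annotations:
    build.knative.dev/docker-0: https://index.docker.io/v1/
type: kubernetes.io/basic-auth
stringData:
  username: <your Docker Hub username>
  password: <your Docker Hub password>
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: build-bot
secrets:
- name: basic-user-pass
EOF
```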
@ -143,7 +143,7 @@ container for the application.
```shell
# Apply the manifest
$ kubectl apply -f service.yaml
$ kubectl apply --filename service.yaml
service "app-from-source" created
# Watch the pods for build and serving
@ -168,7 +168,7 @@ container for the application.
status block:
```shell
$ kubectl get service.serving.knative.dev app-from-source -o yaml
$ kubectl get service.serving.knative.dev app-from-source --output yaml
[...]
status:
@ -203,7 +203,7 @@ container for the application.
it can take some time for the service to get an external IP address:
```shell
$ kubectl get svc knative-ingressgateway -n istio-system
$ kubectl get svc knative-ingressgateway --namespace istio-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
knative-ingressgateway LoadBalancer 10.23.247.74 35.203.155.229 80:32380/TCP,443:32390/TCP,32400:32400/TCP 2d
@ -213,7 +213,7 @@ container for the application.
1. To find the URL for your service, type:
```shell
$ kubectl get ksvc app-from-source -o=custom-columns=NAME:.metadata.name,DOMAIN:.status.domain
$ kubectl get ksvc app-from-source --output=custom-columns=NAME:.metadata.name,DOMAIN:.status.domain
NAME DOMAIN
app-from-source app-from-source.default.example.com
```
@ -237,7 +237,7 @@ container for the application.
To remove the sample app from your cluster, delete the service record:
```shell
kubectl delete -f service.yaml
kubectl delete --filename service.yaml
```
---
@ -13,7 +13,7 @@ using the default installation.
installed.
2. Check if Knative monitoring components are installed:
```
kubectl get pods -n monitoring
kubectl get pods --namespace monitoring
```
* If pods aren't found, install [Knative monitoring component](../../installing-logging-metrics-traces.md).
3. Install [Docker](https://docs.docker.com/get-started/#prepare-your-docker-environment).
@ -68,7 +68,7 @@ configuration file (`serving/samples/telemetry-go/sample.yaml`):
Deploy this application to Knative Serving:
```
kubectl apply -f serving/samples/telemetry-go/
kubectl apply --filename serving/samples/telemetry-go/
```
## Explore the Service
@ -77,17 +77,17 @@ Inspect the created resources with the `kubectl` commands:
* View the created Route resource:
```
kubectl get route -o yaml
kubectl get route --output yaml
```
* View the created Configuration resource:
```
kubectl get configurations -o yaml
kubectl get configurations --output yaml
```
* View the Revision that was created by the Configuration:
```
kubectl get revisions -o yaml
kubectl get revisions --output yaml
```
## Access the Service
@ -97,7 +97,7 @@ To access this service via `curl`, you need to determine its ingress address.
1. To determine if your service is ready:
Check the status of your Knative gateway:
```
kubectl get svc knative-ingressgateway -n istio-system --watch
kubectl get svc knative-ingressgateway --namespace istio-system --watch
```
When the service is ready, you'll see an IP address in the `EXTERNAL-IP` field:
@ -109,7 +109,7 @@ To access this service via `curl`, you need to determine its ingress address.
Check the status of your route:
```
kubectl get route -o yaml
kubectl get route --output yaml
```
When the route is ready, you'll see the following fields reported as:
```YAML
@ -124,8 +124,8 @@ To access this service via `curl`, you need to determine its ingress address.
2. Export the ingress hostname and IP as environment
variables:
```
export SERVICE_HOST=`kubectl get route telemetrysample-route -o jsonpath="{.status.domain}"`
export SERVICE_IP=`kubectl get svc knative-ingressgateway -n istio-system -o jsonpath="{.status.loadBalancer.ingress[*].ip}"`
export SERVICE_HOST=`kubectl get route telemetrysample-route --output jsonpath="{.status.domain}"`
export SERVICE_IP=`kubectl get svc knative-ingressgateway --namespace istio-system --output jsonpath="{.status.loadBalancer.ingress[*].ip}"`
```
3. Make a request to the service to see the `Hello World!` message:
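A sketch of that request, reusing the host and IP exported in the previous step:

```
curl --header "Host: $SERVICE_HOST" http://${SERVICE_IP}
```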
@ -160,5 +160,5 @@ Then browse to http://localhost:9090.
To clean up the sample service:
```
kubectl delete -f serving/samples/telemetry-go/
kubectl delete --filename serving/samples/telemetry-go/
```
@ -91,7 +91,7 @@ You can deploy a prebuilt image of the `rester-tester` app to Knative Serving us
```
# From inside the thumbnailer-go directory
kubectl apply -f sample-prebuilt.yaml
kubectl apply --filename sample-prebuilt.yaml
```
### Building and deploying a version of the app
@ -108,17 +108,17 @@ perl -pi -e "s@DOCKER_REPO_OVERRIDE@$REPO@g" sample.yaml
# Install the Kaniko build template used to build this sample (in the
# build-templates repo).
kubectl apply -f https://raw.githubusercontent.com/knative/build-templates/master/kaniko/kaniko.yaml
kubectl apply --filename https://raw.githubusercontent.com/knative/build-templates/master/kaniko/kaniko.yaml
# Create the Knative route and configuration for the application
kubectl apply -f sample.yaml
kubectl apply --filename sample.yaml
```
Now, if you look at the `status` of the revision, you will see that a build is in progress:
```shell
$ kubectl get revisions -o yaml
$ kubectl get revisions --output yaml
apiVersion: v1
items:
- apiVersion: serving.knative.dev/v1alpha1
@ -141,7 +141,7 @@ To confirm that the app deployed, you can check for the Knative Serving service
First, is there an ingress service, and does it have an `EXTERNAL-IP`:
```
kubectl get svc knative-ingressgateway -n istio-system
kubectl get svc knative-ingressgateway --namespace istio-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
knative-ingressgateway LoadBalancer 10.23.247.74 35.203.155.229 80:32380/TCP,443:32390/TCP,32400:32400/TCP 2d
```
@ -152,7 +152,7 @@ The newly deployed app may take few seconds to initialize. You can check its sta
by entering the following command:
```
kubectl -n default get pods
kubectl --namespace default get pods
```
The Knative Serving ingress service will automatically be assigned an external IP,
@ -161,10 +161,10 @@ in `curl` commands:
```
# Put the Host URL into an environment variable.
export SERVICE_HOST=`kubectl get route thumb -o jsonpath="{.status.domain}"`
export SERVICE_HOST=`kubectl get route thumb --output jsonpath="{.status.domain}"`
# Put the ingress IP into an environment variable.
export SERVICE_IP=`kubectl get svc knative-ingressgateway -n istio-system -o jsonpath="{.status.loadBalancer.ingress[*].ip}"`
export SERVICE_IP=`kubectl get svc knative-ingressgateway --namespace istio-system --output jsonpath="{.status.loadBalancer.ingress[*].ip}"`
```
If your cluster is running outside a cloud provider (for example on Minikube),
@ -172,7 +172,7 @@ your services will never get an external IP address. In that case, use the istio
`hostIP` and `nodePort` as the service IP:
```shell
export SERVICE_IP=$(kubectl get po -l knative=ingressgateway -n istio-system -o 'jsonpath={.items[0].status.hostIP}'):$(kubectl get svc knative-ingressgateway -n istio-system -o 'jsonpath={.spec.ports[?(@.port==80)].nodePort}')
export SERVICE_IP=$(kubectl get po --selector knative=ingressgateway --namespace istio-system --output 'jsonpath={.items[0].status.hostIP}'):$(kubectl get svc knative-ingressgateway --namespace istio-system --output 'jsonpath={.spec.ports[?(@.port==80)].nodePort}')
```
### Ping
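The ping check follows the same request pattern as above; a sketch only, since the exact endpoint is defined by the `rester-tester` app and the `/ping` path here is an assumption based on this section's title:

```shell
curl --header "Host: $SERVICE_HOST" http://${SERVICE_IP}/ping
```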
@ -25,19 +25,19 @@ This section describes how to create an revision by deploying a new configuratio
2. Deploy the new configuration to update the `RESOURCE` environment variable
from `stock` to `share`:
```
kubectl apply -f serving/samples/traffic-splitting/updated_configuration.yaml
kubectl apply --filename serving/samples/traffic-splitting/updated_configuration.yaml
```
3. Once deployed, traffic will shift to the new revision automatically. Verify the deployment by checking the route status:
```
kubectl get route -o yaml
kubectl get route --output yaml
```
4. When the new route is ready, you can access the new endpoints:
The hostname and IP address can be found in the same manner as the [Creating a RESTful Service](../rest-api-go) sample:
```
export SERVICE_HOST=`kubectl get route stock-route-example -o jsonpath="{.status.domain}"`
export SERVICE_IP=`kubectl get svc knative-ingressgateway -n istio-system \
export SERVICE_HOST=`kubectl get route stock-route-example --output jsonpath="{.status.domain}"`
export SERVICE_IP=`kubectl get svc knative-ingressgateway --namespace istio-system \
-o jsonpath="{.status.loadBalancer.ingress[*].ip}"`
```
@ -84,12 +84,12 @@ traffic:
3. Deploy your traffic revision:
```
kubectl apply -f serving/samples/rest-api-go/sample.yaml
kubectl apply --filename serving/samples/rest-api-go/sample.yaml
```
4. Verify the deployment by checking the route status:
```
kubectl get route -o yaml
kubectl get route --output yaml
```
Once updated, you can make `curl` requests to the API using either `stock` or `share`
endpoints.
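A sketch of those requests, reusing the `SERVICE_HOST` and `SERVICE_IP` variables exported earlier:

```
curl --header "Host: $SERVICE_HOST" http://${SERVICE_IP}/stock
curl --header "Host: $SERVICE_HOST" http://${SERVICE_IP}/share
```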
@ -99,5 +99,5 @@ endpoints.
To clean up the sample service:
```
kubectl delete -f serving/samples/traffic-splitting/updated_configuration.yaml
kubectl delete --filename serving/samples/traffic-splitting/updated_configuration.yaml
```
@ -56,15 +56,15 @@ Operators need to deploy Knative components after the configuring:
# In case there is no change with the controller code
bazel run config:controller.delete
# Deploy the configuration for sidecar
kubectl apply -f config/config-observability.yaml
kubectl apply --filename config/config-observability.yaml
# Deploy the controller to make configuration for sidecar take effect
bazel run config:controller.apply
# Deploy the DaemonSet to make configuration for DaemonSet take effect
kubectl apply -f <the-fluentd-config-for-daemonset> \
-f third_party/config/monitoring/common/kubernetes/fluentd/fluentd-ds.yaml \
-f config/monitoring/200-common/100-fluentd.yaml
-f config/monitoring/200-common/100-istio.yaml
kubectl apply --filename <the-fluentd-config-for-daemonset> \
--filename third_party/config/monitoring/common/kubernetes/fluentd/fluentd-ds.yaml \
--filename config/monitoring/200-common/100-fluentd.yaml \
--filename config/monitoring/200-common/100-istio.yaml
```
In the commands above, replace `<the-fluentd-config-for-daemonset>` with the
@ -75,7 +75,7 @@ backends. For example, if they desire Elasticsearch&Kibana, they have to deploy
the Elasticsearch and Kibana services. Knative provides this sample:
```shell
kubectl apply -R -f third_party/config/monitoring/elasticsearch
kubectl apply --recursive --filename third_party/config/monitoring/elasticsearch
```
See [here](/config/monitoring/README.md) for deploying the whole Knative
@ -11,7 +11,7 @@ To change the {default-domain} value there are a few steps involved:
with your own domain, for example `mydomain.com`:
```shell
kubectl edit cm config-domain -n knative-serving
kubectl edit cm config-domain --namespace knative-serving
```
This command opens your default text editor and allows you to edit the config map.
@ -64,7 +64,7 @@ You can also apply an updated domain configuration:
1. Apply updated domain configuration to your cluster:
```shell
kubectl apply -f config-domain.yaml
kubectl apply --filename config-domain.yaml
```
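For reference, a `config-domain.yaml` of roughly this shape is what the apply step expects; a sketch, using the `mydomain.com` example from the editing step above:

```shell
cat <<'EOF' | kubectl apply --filename -
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-domain
  namespace: knative-serving
data:
  mydomain.com: ""
EOF
```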
## Deploy an application
@ -78,13 +78,13 @@ Deploy an app (for example, [`helloworld-go`](./samples/helloworld-go/README.md)
your cluster as normal. You can check the customized domain in Knative Route "helloworld-go" with
the following command:
```shell
kubectl get route helloworld-go -o jsonpath="{.status.domain}"
kubectl get route helloworld-go --output jsonpath="{.status.domain}"
```
You should see the full customized domain: `helloworld-go.default.mydomain.com`.
And you can check the IP address of your Knative gateway by running:
```shell
kubectl get svc knative-ingressgateway -n istio-system -o jsonpath="{.status.loadBalancer.ingress[*]['ip']}"
kubectl get svc knative-ingressgateway --namespace istio-system --output jsonpath="{.status.loadBalancer.ingress[*]['ip']}"
```
## Local DNS setup
@ -93,11 +93,11 @@ You can map the domain to the IP address of your Knative gateway in your local
machine with:
```shell
export GATEWAY_IP=`kubectl get svc knative-ingressgateway -n istio-system -o jsonpath="{.status.loadBalancer.ingress[*]['ip']}"`
export GATEWAY_IP=`kubectl get svc knative-ingressgateway --namespace istio-system --output jsonpath="{.status.loadBalancer.ingress[*]['ip']}"`
# helloworld-go is the generated Knative Route of "helloworld-go" sample.
# You need to replace it with your own Route in your project.
export DOMAIN_NAME=`kubectl get route helloworld-go -o jsonpath="{.status.domain}"`
export DOMAIN_NAME=`kubectl get route helloworld-go --output jsonpath="{.status.domain}"`
# Add the record of Gateway IP and domain name into file "/etc/hosts"
echo -e "$GATEWAY_IP\t$DOMAIN_NAME" | sudo tee -a /etc/hosts
@ -21,7 +21,7 @@ following command to create a secret that stores the certificate. Note the
name of the secret, `istio-ingressgateway-certs` is required.
```shell
kubectl create -n istio-system secret tls istio-ingressgateway-certs \
kubectl create --namespace istio-system secret tls istio-ingressgateway-certs \
--key cert.pk \
--cert cert.pem
```
@ -34,7 +34,7 @@ you need to update the Gateway spec to use the HTTPS.
To edit the shared gateway, run:
```shell
kubectl edit gateway knative-shared-gateway -n knative-serving
kubectl edit gateway knative-shared-gateway --namespace knative-serving
```
Change the Gateway spec to include the `tls:` section as shown below, then
@ -19,7 +19,7 @@ of publishing the Knative domain.
1. A public domain that will be used in Knative.
1. Knative configured to use your custom domain.
```shell
kubectl edit cm config-domain -n knative-serving
kubectl edit cm config-domain --namespace knative-serving
```
This command opens your default text editor and allows you to edit the config
map.
@ -99,7 +99,7 @@ gcloud dns record-sets transaction execute --zone "my-org-do"
Use the following command to apply the [manifest](https://github.com/kubernetes-incubator/external-dns/blob/master/docs/tutorials/gke.md#manifest-for-clusters-without-rbac-enabled) to install ExternalDNS
```shell
cat <<EOF | kubectl apply -f -
cat <<EOF | kubectl apply --filename -
<the-content-of-manifest-with-custom-domain-filter>
EOF
```
@ -116,7 +116,7 @@ In order to publish the Knative Gateway service, the annotation
`external-dns.alpha.kubernetes.io/hostname: '*.external-dns-test.my-org.do'`
needs to be added into Knative gateway service:
```shell
kubectl edit svc knative-ingressgateway -n istio-system
kubectl edit svc knative-ingressgateway --namespace istio-system
```
This command opens your default text editor and allows you to add the
annotation to `knative-ingressgateway` service. After you've added your