Fix links in tutorials section

Qiming Teng 2020-08-07 12:05:17 +08:00
parent 664464806c
commit c7e297ccab
5 changed files with 52 additions and 32 deletions

View File

@@ -118,7 +118,7 @@ Pod runs a Container based on the provided Docker image.
```
{{< note >}}
-For more information about `kubectl`commands, see the [kubectl overview](/docs/user-guide/kubectl-overview/).
+For more information about `kubectl` commands, see the [kubectl overview](/docs/reference/kubectl/overview/).
{{< /note >}}
## Create a Service

View File

@@ -412,7 +412,7 @@ protocol between the loadbalancer and backend to communicate the true client IP
such as the HTTP [Forwarded](https://tools.ietf.org/html/rfc7239#section-5.2)
or [X-FORWARDED-FOR](https://en.wikipedia.org/wiki/X-Forwarded-For)
headers, or the
-[proxy protocol](http://www.haproxy.org/download/1.5/doc/proxy-protocol.txt).
+[proxy protocol](https://www.haproxy.org/download/1.5/doc/proxy-protocol.txt).
Load balancers in the second category can leverage the feature described above
by creating an HTTP health check pointing at the port stored in
the `service.spec.healthCheckNodePort` field on the Service.
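For reference, the port such a health check should target can be read straight from the Service; a sketch, assuming a placeholder Service named `my-service` with `externalTrafficPolicy: Local` (not a name from this tutorial):

```shell
# Read the health-check node port allocated for a Service with
# externalTrafficPolicy: Local (my-service is a placeholder name)
kubectl get service my-service \
  -o jsonpath='{.spec.healthCheckNodePort}'
```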

View File

@@ -7,9 +7,13 @@ weight: 30
---
<!-- overview -->
-This tutorial shows you how to run [Apache Cassandra](http://cassandra.apache.org/) on Kubernetes. Cassandra, a database, needs persistent storage to provide data durability (application _state_). In this example, a custom Cassandra seed provider lets the database discover new Cassandra instances as they join the Cassandra cluster.
+This tutorial shows you how to run [Apache Cassandra](https://cassandra.apache.org/) on Kubernetes.
+Cassandra, a database, needs persistent storage to provide data durability (application _state_).
+In this example, a custom Cassandra seed provider lets the database discover new Cassandra instances as they join the Cassandra cluster.
-*StatefulSets* make it easier to deploy stateful applications into your Kubernetes cluster. For more information on the features used in this tutorial, see [StatefulSet](/docs/concepts/workloads/controllers/statefulset/).
+*StatefulSets* make it easier to deploy stateful applications into your Kubernetes cluster.
+For more information on the features used in this tutorial, see
+[StatefulSet](/docs/concepts/workloads/controllers/statefulset/).
{{< note >}}
Cassandra and Kubernetes both use the term _node_ to mean a member of a cluster. In this
@@ -38,12 +42,17 @@ new Cassandra Pods as they appear inside your Kubernetes cluster.
{{< include "task-tutorial-prereqs.md" >}}
-To complete this tutorial, you should already have a basic familiarity with {{< glossary_tooltip text="Pods" term_id="pod" >}}, {{< glossary_tooltip text="Services" term_id="service" >}}, and {{< glossary_tooltip text="StatefulSets" term_id="StatefulSet" >}}.
+To complete this tutorial, you should already have a basic familiarity with
+{{< glossary_tooltip text="Pods" term_id="pod" >}},
+{{< glossary_tooltip text="Services" term_id="service" >}}, and
+{{< glossary_tooltip text="StatefulSets" term_id="StatefulSet" >}}.
### Additional Minikube setup instructions
{{< caution >}}
-[Minikube](/docs/getting-started-guides/minikube/) defaults to 1024MiB of memory and 1 CPU. Running Minikube with the default resource configuration results in insufficient resource errors during this tutorial. To avoid these errors, start Minikube with the following settings:
+[Minikube](/docs/setup/learning-environment/minikube/) defaults to 1024MiB of memory and 1 CPU.
+Running Minikube with the default resource configuration results in insufficient resource
+errors during this tutorial. To avoid these errors, start Minikube with the following settings:
```shell
minikube start --memory 5120 --cpus=4
@@ -51,11 +60,11 @@ minikube start --memory 5120 --cpus=4
{{< /caution >}}
<!-- lessoncontent -->
## Creating a headless Service for Cassandra {#creating-a-cassandra-headless-service}
-In Kubernetes, a {{< glossary_tooltip text="Service" term_id="service" >}} describes a set of {{< glossary_tooltip text="Pods" term_id="pod" >}} that perform the same task.
+In Kubernetes, a {{< glossary_tooltip text="Service" term_id="service" >}} describes a set of
+{{< glossary_tooltip text="Pods" term_id="pod" >}} that perform the same task.
The following Service is used for DNS lookups between Cassandra Pods and clients within your cluster:
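Such a headless Service would look roughly like the following sketch (consistent with the `kubectl get svc` output shown below; the tutorial's actual manifest may differ):

```yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: cassandra
  name: cassandra
spec:
  clusterIP: None   # headless: DNS returns the Pod IPs directly
  ports:
    - port: 9042    # Cassandra CQL native transport port
  selector:
    app: cassandra
```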
@@ -83,14 +92,17 @@ NAME        TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
cassandra   ClusterIP   None         <none>        9042/TCP   45s
```
-If you don't see a Service named `cassandra`, that means creation failed. Read [Debug Services](/docs/tasks/debug-application-cluster/debug-service/) for help troubleshooting common issues.
+If you don't see a Service named `cassandra`, that means creation failed. Read
+[Debug Services](/docs/tasks/debug-application-cluster/debug-service/)
+for help troubleshooting common issues.
## Using a StatefulSet to create a Cassandra ring
The StatefulSet manifest, included below, creates a Cassandra ring that consists of three Pods.
{{< note >}}
-This example uses the default provisioner for Minikube. Please update the following StatefulSet for the cloud you are working with.
+This example uses the default provisioner for Minikube.
+Please update the following StatefulSet for the cloud you are working with.
{{< /note >}}
{{< codenew file="application/cassandra/cassandra-statefulset.yaml" >}}
@@ -182,7 +194,8 @@ Use `kubectl edit` to modify the size of a Cassandra StatefulSet.
kubectl edit statefulset cassandra
```
-This command opens an editor in your terminal. The line you need to change is the `replicas` field. The following sample is an excerpt of the StatefulSet file:
+This command opens an editor in your terminal. The line you need to change is the `replicas` field.
+The following sample is an excerpt of the StatefulSet file:
```yaml
# Please edit the object below. Lines beginning with a '#' will be ignored,
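# (Editor's sketch, not part of the tutorial's manifest:) instead of
# editing the object interactively, the replica count can also be
# changed non-interactively, for example:
#   kubectl scale statefulset cassandra --replicas=4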
@@ -225,10 +238,12 @@ Use `kubectl edit` to modify the size of a Cassandra StatefulSet.
## {{% heading "cleanup" %}}
-Deleting or scaling a StatefulSet down does not delete the volumes associated with the StatefulSet. This setting is for your safety because your data is more valuable than automatically purging all related StatefulSet resources.
+Deleting or scaling a StatefulSet down does not delete the volumes associated with the StatefulSet.
+This setting is for your safety because your data is more valuable than automatically purging all related StatefulSet resources.
{{< warning >}}
-Depending on the storage class and reclaim policy, deleting the *PersistentVolumeClaims* may cause the associated volumes to also be deleted. Never assume you'll be able to access data if its volume claims are deleted.
+Depending on the storage class and reclaim policy, deleting the *PersistentVolumeClaims* may cause the associated volumes
+to also be deleted. Never assume you'll be able to access data if its volume claims are deleted.
{{< /warning >}}
1. Run the following commands (chained together into a single command) to delete everything in the Cassandra StatefulSet:
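   Such a chained cleanup can look roughly like the following sketch (reading the grace period from the Pod avoids racing Cassandra's shutdown; the tutorial's exact commands may differ):

   ```shell
   # Delete the StatefulSet, wait out the termination grace period,
   # then delete the PersistentVolumeClaims (this destroys the data)
   grace=$(kubectl get pod cassandra-0 -o=jsonpath='{.spec.terminationGracePeriodSeconds}') \
     && kubectl delete statefulset -l app=cassandra \
     && echo "Sleeping ${grace} seconds" 1>&2 \
     && sleep "$grace" \
     && kubectl delete persistentvolumeclaim -l app=cassandra
   ```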

View File

@@ -15,25 +15,23 @@ weight: 40
<!-- overview -->
This tutorial demonstrates running [Apache Zookeeper](https://zookeeper.apache.org) on
Kubernetes using [StatefulSets](/docs/concepts/workloads/controllers/statefulset/),
-[PodDisruptionBudgets](/docs/concepts/workloads/pods/disruptions/#specifying-a-poddisruptionbudget),
-and [PodAntiAffinity](/docs/user-guide/node-selection/#inter-pod-affinity-and-anti-affinity-beta-feature).
+[PodDisruptionBudgets](/docs/concepts/workloads/pods/disruptions/#pod-disruption-budget),
+and [PodAntiAffinity](/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity).
## {{% heading "prerequisites" %}}
Before starting this tutorial, you should be familiar with the following
Kubernetes concepts.
-- [Pods](/docs/user-guide/pods/single-container/)
+- [Pods](/docs/concepts/workloads/pods/)
- [Cluster DNS](/docs/concepts/services-networking/dns-pod-service/)
- [Headless Services](/docs/concepts/services-networking/service/#headless-services)
- [PersistentVolumes](/docs/concepts/storage/volumes/)
- [PersistentVolume Provisioning](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/staging/persistent-volume-provisioning/)
- [StatefulSets](/docs/concepts/workloads/controllers/statefulset/)
-- [PodDisruptionBudgets](/docs/concepts/workloads/pods/disruptions/#specifying-a-poddisruptionbudget)
-- [PodAntiAffinity](/docs/user-guide/node-selection/#inter-pod-affinity-and-anti-affinity-beta-feature)
-- [kubectl CLI](/docs/user-guide/kubectl/)
+- [PodDisruptionBudgets](/docs/concepts/workloads/pods/disruptions/#pod-disruption-budget)
+- [PodAntiAffinity](/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity)
+- [kubectl CLI](/docs/reference/kubectl/kubectl/)
You will require a cluster with at least four nodes, and each node requires at least 2 CPUs and 4 GiB of memory. In this tutorial you will cordon and drain the cluster's nodes. **This means that the cluster will terminate and evict all Pods on its nodes, and the nodes will temporarily become unschedulable.** You should use a dedicated cluster for this tutorial, or you should ensure that the disruption you cause will not interfere with other tenants.
@@ -75,7 +73,7 @@ ZooKeeper servers keep their entire state machine in memory, and write every mut
The manifest below contains a
[Headless Service](/docs/concepts/services-networking/service/#headless-services),
a [Service](/docs/concepts/services-networking/service/),
-a [PodDisruptionBudget](/docs/concepts/workloads/pods/disruptions//#specifying-a-poddisruptionbudget),
+a [PodDisruptionBudget](/docs/concepts/workloads/pods/disruptions/#pod-disruption-budgets),
and a [StatefulSet](/docs/concepts/workloads/controllers/statefulset/).
{{< codenew file="application/zookeeper/zookeeper.yaml" >}}
@@ -127,7 +125,7 @@ zk-2      1/1       Running   0          40s
```
The StatefulSet controller creates three Pods, and each Pod has a container with
-a [ZooKeeper](http://www-us.apache.org/dist/zookeeper/stable/) server.
+a [ZooKeeper](https://www-us.apache.org/dist/zookeeper/stable/) server.
### Facilitating Leader Election
@@ -502,7 +500,7 @@ The command used to start the ZooKeeper servers passed the configuration as comm
### Configuring Logging
One of the files generated by the `zkGenConfig.sh` script controls ZooKeeper's logging.
-ZooKeeper uses [Log4j](http://logging.apache.org/log4j/2.x/), and, by default,
+ZooKeeper uses [Log4j](https://logging.apache.org/log4j/2.x/), and, by default,
it uses a time and size based rolling file appender for its logging configuration.
Use the command below to get the logging configuration from one of Pods in the `zk` `StatefulSet`.
@@ -524,7 +522,10 @@ log4j.appender.CONSOLE.layout=org.apache.log4j.PatternLayout
log4j.appender.CONSOLE.layout.ConversionPattern=%d{ISO8601} [myid:%X{myid}] - %-5p [%t:%C{1}@%L] - %m%n
```
-This is the simplest possible way to safely log inside the container. Because the applications write logs to standard out, Kubernetes will handle log rotation for you. Kubernetes also implements a sane retention policy that ensures application logs written to standard out and standard error do not exhaust local storage media.
+This is the simplest possible way to safely log inside the container.
+Because the applications write logs to standard out, Kubernetes will handle log rotation for you.
+Kubernetes also implements a sane retention policy that ensures application logs written to
+standard out and standard error do not exhaust local storage media.
Use [`kubectl logs`](/docs/reference/generated/kubectl/kubectl-commands/#logs) to retrieve the last 20 log lines from one of the Pods.
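For example (a sketch, assuming the first Pod in the ensemble is named `zk-0`):

```shell
# Retrieve the last 20 log lines from the zk-0 Pod
kubectl logs zk-0 --tail 20
```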
@@ -679,7 +680,7 @@ statefulset.apps/zk rolled back
### Handling Process Failure
-[Restart Policies](/docs/user-guide/pod-states/#restartpolicy) control how
+[Restart Policies](/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy) control how
Kubernetes handles process failures for the entry point of the container in a Pod.
For Pods in a `StatefulSet`, the only appropriate `RestartPolicy` is Always, and this
is the default value. For stateful applications you should **never** override
@@ -832,7 +833,9 @@ domains to ensure availability. To avoid an outage, due to the loss of an
individual machine, best practices preclude co-locating multiple instances of the
application on the same machine.
-By default, Kubernetes may co-locate Pods in a `StatefulSet` on the same node. For the three server ensemble you created, if two servers are on the same node, and that node fails, the clients of your ZooKeeper service will experience an outage until at least one of the Pods can be rescheduled.
+By default, Kubernetes may co-locate Pods in a `StatefulSet` on the same node.
+For the three server ensemble you created, if two servers are on the same node, and that node fails,
+the clients of your ZooKeeper service will experience an outage until at least one of the Pods can be rescheduled.
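The kind of rule that prevents such co-location is sketched below (assuming Pods labeled `app: zk`; the tutorial's manifest may differ in detail):

```yaml
# Pod template fragment: require that no two Pods labeled app=zk
# are scheduled onto the same node
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
            - key: "app"
              operator: In
              values:
                - zk
        topologyKey: "kubernetes.io/hostname"
```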
You should always provision additional capacity to allow the processes of critical
systems to be rescheduled in the event of node failures. If you do so, then the
@@ -978,7 +981,8 @@ pod "zk-1" deleted
node "kubernetes-node-ixsl" drained
```
-The `zk-1` Pod cannot be scheduled because the `zk` `StatefulSet` contains a `PodAntiAffinity` rule preventing co-location of the Pods, and as only two nodes are schedulable, the Pod will remain in a Pending state.
+The `zk-1` Pod cannot be scheduled because the `zk` `StatefulSet` contains a `PodAntiAffinity` rule preventing
+co-location of the Pods, and as only two nodes are schedulable, the Pod will remain in a Pending state.
```shell
kubectl get pods -w -l app=zk
@@ -1119,9 +1123,10 @@ kubectl uncordon kubernetes-node-ixsl
node "kubernetes-node-ixsl" uncordoned
```
-You can use `kubectl drain` in conjunction with `PodDisruptionBudgets` to ensure that your services remain available during maintenance. If drain is used to cordon nodes and evict pods prior to taking the node offline for maintenance, services that express a disruption budget will have that budget respected. You should always allocate additional capacity for critical services so that their Pods can be immediately rescheduled.
+You can use `kubectl drain` in conjunction with `PodDisruptionBudgets` to ensure that your services remain available during maintenance.
+If drain is used to cordon nodes and evict pods prior to taking the node offline for maintenance,
+services that express a disruption budget will have that budget respected.
+You should always allocate additional capacity for critical services so that their Pods can be immediately rescheduled.
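A disruption budget of this kind can be sketched as follows (assuming Pods labeled `app: zk`; `policy/v1beta1` was current for this era of Kubernetes, newer clusters use `policy/v1`):

```yaml
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: zk-pdb
spec:
  selector:
    matchLabels:
      app: zk
  maxUnavailable: 1   # keep quorum: at most one zk Pod down voluntarily
```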
## {{% heading "cleanup" %}}

View File

@@ -365,7 +365,7 @@ Deleting the Deployments and Services also deletes any running Pods. Use labels
## {{% heading "whatsnext" %}}
-* Add [ELK logging and monitoring](../guestbook-logs-metrics-with-elk/) to your Guestbook application
+* Add [ELK logging and monitoring](/docs/tutorials/stateless-application/guestbook-logs-metrics-with-elk/) to your Guestbook application
* Complete the [Kubernetes Basics](/docs/tutorials/kubernetes-basics/) Interactive Tutorials
* Use Kubernetes to create a blog using [Persistent Volumes for MySQL and Wordpress](/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/#visit-your-new-wordpress-blog)
* Read more about [connecting applications](/docs/concepts/services-networking/connect-applications-service/)