* Admin Can Specify in Which GCE Availability Zone(s) a PV Shall Be Created. An admin wants to specify in which GCE availability zone(s) users may create persistent volumes using dynamic provisioning. That's why the admin can now configure in StorageClass object a comma separated list of zones. Dynamically created PVs for PVCs that use the StorageClass are created in one of the configured zones.
* Admin Can Specify in Which AWS Availability Zone(s) a PV Shall Be Created. An admin wants to specify in which AWS availability zone(s) users may create persistent volumes using dynamic provisioning. That's why the admin can now configure in StorageClass object a comma separated list of zones. Dynamically created PVs for PVCs that use the StorageClass are created in one of the configured zones.
* move hardPodAffinitySymmetricWeight to scheduler policy config
* Added Bind method to Scheduler Extender
  - only one extender can support the bind method
  - if an extender supports bind, scheduler delegates the pod binding to the extender
* examples/podsecuritypolicy/rbac: allow to use projected volumes in restricted PSP.
* fix typo
* SPBM policy ID support in vsphere cloud provider
* fix the invalid link
* DeamonSet-DaemonSet
* Update GlusterFS examples readme. Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
* fix some typo in example/volumes
* Fix spelling in example/spark
* Correct spelling in quobyte
* Support custom domains in the cockroachdb example's init container. This switches from using v0.1 of the peer-finder image to a version that includes https://github.com/kubernetes/contrib/pull/2013. While I'm here, switch the version of cockroachdb from 1.0 to 1.0.1.
* Update docs/ URLs to point to proper locations
* Adds --insecure to cockroachdb client command. Cockroach errors out when using said command:
  ```shell
  ▶ kubectl run -it --rm cockroach-client --image=cockroachdb/cockroach --restart=Never --command -- ./cockroach sql --host cockroachdb-public
  Waiting for pod default/cockroach-client to be running, status is Pending, pod ready: false
  Waiting for pod default/cockroach-client to be running, status is Pending, pod ready: false
  Waiting for pod default/cockroach-client to be running, status is Pending, pod ready: false
  If you don't see a command prompt, try pressing enter.
  Error attaching, falling back to logs: unable to upgrade connection: container cockroach-client not found in pod cockroach-client_default
  Error: problem using security settings, did you mean to use --insecure?: problem with CA certificate: not found
  Failed running "sql"
  Waiting for pod default/cockroach-client to terminate, status is Running
  pod "cockroach-client" deleted
  ```
  This PR updates the README.md to include --insecure in the client command.
* Add StorageOS volume plugin
* examples/volumes/flexvolume/nfs: check for jq and simplify quoting.
* Remove broken getvolumename and pass PV or volume name to attach call
* Remove controller node plugin driver dependency for non-attachable flex volume drivers (Ex: NFS).
* Add `imageFeatures` parameter for RBD volume plugin, which is used to customize RBD image format 2 features. Update RBD docs in examples/persistent-volume-provisioning/README.md.
* Only `layering` RBD image format 2 feature should be supported for now.
* Formatted Dockerfile to be cleaner and precise
* Update docs for user-guide
* Make the Quota creation optional
* Remove duplicated line from ceph-secret-admin.yaml
* Update CockroachDB tag to v1.0.3
* Correct the comment in PSP examples.
* Update wordpress to 4.8.0
* Cassandra example, use nodetool drain in preStop
* Add termination gracePeriod
* Use buildozer to remove deprecated automanaged tags
* Use buildozer to delete licenses() rules except under third_party/
* NR Infrastructure agent example daemonset. Copy of previous newrelic example, then modified to use the new agent "newrelic-infra" instead of "nrsysmond". Also maps all of host node's root fs into /host in the container (ro, but still exposes underlying node info into a container). Updates to README.
* Reduce one time url direction
* update to rbac v1 in yaml file
* Replicate the persistent volume label admission plugin in a controller in the cloud-controller-manager
* update related files
* Paramaterize stickyMaxAgeMinutes for service in API
* Update example to CockroachDB v1.0.5
* Remove storage-class annotations in examples
* PodSecurityPolicy.allowedCapabilities: add support for using * to allow to request any capabilities. Also modify "privileged" PSP to use it and allow privileged users to use any capabilities.
* Add examples pods to demonstrate CPU manager.
* Tag broken examples test as manual
* bazel: use autogenerated all-srcs rules instead of manually-curated sources rules
* Update CockroachDB tag to v1.1.0
* update BUILD files
* pkg/api/legacyscheme: fixup imports
* Update bazel
* [examples.storage/minio] update deploy config version
* Volunteer to help review examples. I would like to do some code review for examples about how to run real applications with Kubernetes.
* examples/podsecuritypolicy/rbac: fix names in comments and sync with examples repository.
* Update storageclass version to v1 in examples
* pkg/apis/core: mechanical import fixes in dependencies
* Use k8s.gcr.io vanity domain for container images
* Update generated files
* gcloud docker now auths k8s.gcr.io by default
* Add scheduler optimization options, short circuit all predicates if one predicate fails
* Revert k8s.gcr.io vanity domain. This reverts commit eba5b6092afcae27a7c925afea76b85d903e87a9. Fixes https://github.com/kubernetes/kubernetes/issues/57526.
* Autogenerate BUILD files
* Move scheduler code out of plugin directory. This moves plugin/pkg/scheduler to pkg/scheduler and plugin/cmd/kube-scheduler to cmd/kube-scheduler. Bulk of the work was done with gomvpkg, except for the kube-scheduler main package.
* Fix scheduler refs in BUILD files. Update references to moved scheduler code.
* Switch to k8s.gcr.io vanity domain. This is the 2nd attempt. The previous was reverted while we figured out the regional mirrors (oops). New plan: k8s.gcr.io is a read-only facade that auto-detects your source region (us, eu, or asia for now) and pulls from the closest. To publish an image, push k8s-staging.gcr.io and it will be synced to the regionals automatically (similar to today). For now the staging is an alias to gcr.io/google_containers (the legacy URL). When we move off of google-owned projects (working on it), then we just do a one-time sync, and change the google-internal config, and nobody outside should notice. We can, in parallel, change the auto-sync into a manual sync - send a PR to "promote" something from staging, and a bot activates it. Nice and visible, easy to keep track of.
* Remove apiVersion from scheduler extender example configuration
* Update examples to use PSPs from the policy API group.
* fix all the typos across the project
* Autogenerated: hack/update-bazel.sh
* Modify PodSecurityPolicy admission plugin to additionally allow authorizing via "use" verb in policy API group.
* fix todo: add validate method for &schedulerapi.Policy
* examples/podsecuritypolicy: add owners.
* Adding dummy and dummy-attachable example Flexvolume drivers; adding DaemonSet deployment example
* Fix relative links in README
Files in this example: php-phabricator/, README.md, phabricator-controller.json, phabricator-service.json, setup.sh, teardown.sh
Phabricator example
This example shows how to build a simple multi-tier web application using Kubernetes and Docker.
The example combines a web frontend and an external service that provides a MySQL database. We use Cloud SQL on Google Cloud Platform in this example, but in principle any approach to running MySQL should work.
Step Zero: Prerequisites
This example assumes that you have a basic understanding of Kubernetes services and that you have forked the repository and turned up a Kubernetes cluster:
$ cd kubernetes
$ cluster/kube-up.sh
Step One: Set up Cloud SQL instance
Follow the official instructions to set up a Cloud SQL instance.
In the remaining part of this example we will assume that your instance is named "phabricator-db", has the IP address 1.2.3.4, is listening on port 3306, and that the password is "1234".
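If you prefer the command line to the web console, a Cloud SQL instance can also be created and inspected with gcloud. This is only a sketch; the instance name, region, and any tier-related flags are assumptions you should adjust for your project:
$ gcloud sql instances create phabricator-db --region=us-central1
$ gcloud sql instances describe phabricator-db
The describe output includes the instance's IP address (the 1.2.3.4 placeholder used above).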
Step Two: Authenticate phabricator in Cloud SQL
In order to allow Phabricator to connect to your Cloud SQL instance, you need to run the following commands to authorize all the nodes within your cluster:
NODE_NAMES=`kubectl get nodes | cut -d" " -f1 | tail -n+2`
NODE_IPS=`gcloud compute instances list $NODE_NAMES | tr -s " " | cut -d" " -f 5 | tail -n+2`
gcloud sql instances patch phabricator-db --authorized-networks $NODE_IPS
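Note that gcloud expects --authorized-networks as a single comma-separated value, so if your cluster has more than one node you may need to join the IPs with commas first. A minimal sketch, assuming the NODE_IPS variable from above:
NODE_IPS_CSV=$(echo $NODE_IPS | tr " " ",")
gcloud sql instances patch phabricator-db --authorized-networks $NODE_IPS_CSV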
Otherwise you will see errors like the following in the pod logs:
$ kubectl logs phabricator-controller-02qp4
[...]
Raw MySQL Error: Attempt to connect to root@1.2.3.4 failed with error
#2013: Lost connection to MySQL server at 'reading initial communication packet', system error: 0.
Step Three: Turn up the phabricator
To start the Phabricator server, use the file examples/phabricator/phabricator-controller.json, which describes a replication controller with a single pod running an Apache server with the Phabricator PHP source:
{
  "kind": "ReplicationController",
  "apiVersion": "v1",
  "metadata": {
    "name": "phabricator-controller",
    "labels": {
      "name": "phabricator"
    }
  },
  "spec": {
    "replicas": 1,
    "selector": {
      "name": "phabricator"
    },
    "template": {
      "metadata": {
        "labels": {
          "name": "phabricator"
        }
      },
      "spec": {
        "containers": [
          {
            "name": "phabricator",
            "image": "fgrzadkowski/example-php-phabricator",
            "ports": [
              {
                "name": "http-server",
                "containerPort": 80
              }
            ],
            "env": [
              {
                "name": "MYSQL_SERVICE_IP",
                "value": "1.2.3.4"
              },
              {
                "name": "MYSQL_SERVICE_PORT",
                "value": "3306"
              },
              {
                "name": "MYSQL_PASSWORD",
                "value": "1234"
              }
            ]
          }
        ]
      }
    }
  }
}
Create the phabricator pod in your Kubernetes cluster by running:
$ kubectl create -f examples/phabricator/phabricator-controller.json
Note: Remember to substitute the environment variable values in the JSON file before creating the replication controller.
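For example, a quick way to do the substitution with sed (a sketch; 1.2.3.4 and "1234" are the placeholders used throughout this example, so replace them with your own instance IP and password):
$ sed -i.bak -e 's/1.2.3.4/<your instance IP>/' -e 's/"1234"/"<your password>"/' examples/phabricator/phabricator-controller.json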
Once that's up, you can list the pods in the cluster to verify that it is running:
kubectl get pods
You'll see a single phabricator pod. It will also display the machine that the pod is running on once it gets placed (may take up to thirty seconds):
NAME READY STATUS RESTARTS AGE
phabricator-controller-9vy68 1/1 Running 0 1m
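If your version of kubectl does not show the node in its default output, the -o wide flag (where supported by your client) adds a NODE column:
$ kubectl get pods -o wide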
If you ssh to that machine, you can run docker ps to see the actual pod:
me@workstation$ gcloud compute ssh --zone us-central1-b kubernetes-node-2
$ sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
54983bc33494 fgrzadkowski/phabricator:latest "/run.sh" 2 hours ago Up 2 hours k8s_phabricator.d6b45054_phabricator-controller-02qp4.default.api_eafb1e53-b6a9-11e4-b1ae-42010af05ea6_01c2c4ca
(Note that the initial docker pull may take a few minutes, depending on network conditions. During this time, the get pods command will return Pending because the container has not yet started.)
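If the pod stays Pending for longer than expected, describing it shows the scheduling and image-pull events (substitute your own pod name):
$ kubectl describe pod phabricator-controller-9vy68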
Step Four: Turn up the phabricator service
A Kubernetes 'service' is a named load balancer that proxies traffic to one or more containers. The services in a Kubernetes cluster are discoverable inside other containers via environment variables. Services find the containers to load balance based on pod labels. These environment variables are typically referenced in application code, shell scripts, or other places where one node needs to talk to another in a distributed system. You should catch up on Kubernetes services before proceeding.
The pod that you created in Step Three has the label name=phabricator. The selector field of the service determines which pods will receive the traffic sent to the service.
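You can check which pods the selector will match by filtering on the same label:
$ kubectl get pods -l name=phabricator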
Use the file examples/phabricator/phabricator-service.json:
{
  "kind": "Service",
  "apiVersion": "v1",
  "metadata": {
    "name": "phabricator"
  },
  "spec": {
    "ports": [
      {
        "port": 80,
        "targetPort": "http-server"
      }
    ],
    "selector": {
      "name": "phabricator"
    },
    "type": "LoadBalancer"
  }
}
To create the service run:
$ kubectl create -f examples/phabricator/phabricator-service.json
phabricator
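To confirm that the service has picked up the phabricator pod as a backend, you can describe it and check the Endpoints field:
$ kubectl describe services phabricator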
To play with the service itself, find the external IP of the load balancer:
$ kubectl get services
NAME LABELS SELECTOR IP(S) PORT(S)
kubernetes component=apiserver,provider=kubernetes <none> 10.0.0.1 443/TCP
phabricator <none> name=phabricator 10.0.31.173 80/TCP
$ kubectl get services phabricator -o json | grep ingress -A 4
"ingress": [
{
"ip": "104.197.13.125"
}
]
and then visit port 80 of that IP address.
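If you prefer the command line to a browser, something like the following (assuming your kubectl supports -o jsonpath) grabs the ingress IP and fetches the page:
$ EXTERNAL_IP=$(kubectl get services phabricator -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
$ curl http://$EXTERNAL_IP/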
Note: Provisioning of the external IP address may take a few minutes.
Note: You may need to open the firewall for port 80 using the Google Cloud console or the gcloud tool. The following command will allow traffic from any source to instances tagged kubernetes-node:
$ gcloud compute firewall-rules create phabricator-node-80 --allow=tcp:80 --target-tags kubernetes-node
Step Five: Cleanup
To turn down a Kubernetes cluster:
$ cluster/kube-down.sh
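If you only want to remove the example resources and keep the cluster, a sketch of undoing the steps above (including the optional firewall rule) would be:
$ kubectl delete -f examples/phabricator/phabricator-service.json
$ kubectl delete -f examples/phabricator/phabricator-controller.json
$ gcloud compute firewall-rules delete phabricator-node-80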