Meteor on Kubernetes
This example shows you how to package and run a Meteor app on Kubernetes.
Get started on Google Compute Engine
Meteor uses MongoDB, and we will use the GCEPersistentDisk type of volume for
persistent storage. Therefore, this example is only applicable to Google Compute
Engine. Take a look at the volumes documentation for other options.
First, if you have not already done so:
- Create a Google Cloud Platform project.
- Enable billing.
- Install the gcloud SDK.
Authenticate with gcloud and set the gcloud default project name to point to the project you want to use for your Kubernetes cluster:
gcloud auth login
gcloud config set project <project-name>
Next, start up a Kubernetes cluster:
wget -q -O - https://get.k8s.io | bash
Please see the Google Compute Engine getting started guide for full details and other options for starting a cluster.
Build a container for your Meteor app
To run your Meteor app on Kubernetes you first need to build a
Docker container for it. To do that you need to install Docker.
Once you have that, you need to add two files to your existing
Meteor project: Dockerfile and .dockerignore.
The Dockerfile should contain the lines below. Replace ROOT_URL
with the actual hostname of your app.
FROM chees/meteor-kubernetes
ENV ROOT_URL http://myawesomeapp.com
The .dockerignore file should contain the lines below. This tells
Docker to ignore the files in those directories when it's building
your container.
.meteor/local
packages/*/.build*
You can see an example meteor project already set up at: meteor-gke-example. Feel free to use this app for this example.
Note: The next step will not work if you have added mobile platforms to your Meteor project. Check with
meteor list-platforms.
Now you can build your container by running this in your Meteor project directory:
docker build -t my-meteor .
Pushing to a registry
For the Docker Hub, tag your app image with
your username and push to the Hub with the below commands. Replace
<username>
with your Hub username.
docker tag my-meteor <username>/my-meteor
docker push <username>/my-meteor
For Google Container
Registry, tag
your app image with your project ID, and push to GCR. Replace
<project>
with your project ID.
docker tag my-meteor gcr.io/<project>/my-meteor
gcloud docker -- push gcr.io/<project>/my-meteor
Running
Now that you have containerized your Meteor app it's time to set up your
cluster. Edit meteor-controller.json and make sure the image: points to the
container you just pushed to the Docker Hub or GCR.
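For reference, the piece of meteor-controller.json to check looks roughly like the minimal sketch below; this is an excerpt of a standard replication controller spec, and the container name shown here is a placeholder rather than the exact contents of the file.
{
  "spec": {
    "template": {
      "spec": {
        "containers": [
          {
            "name": "meteor",
            "image": "gcr.io/<project>/my-meteor"
          }
        ]
      }
    }
  }
}
If you pushed to the Docker Hub instead, use <username>/my-meteor as the image value.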
We will need to provide MongoDB a persistent Kubernetes volume to store its data. See the volumes documentation for options. We're going to use Google Compute Engine persistent disks. Create the MongoDB disk by running:
gcloud compute disks create --size=200GB mongo-disk
Now you can start Mongo using that disk:
kubectl create -f examples/meteor/mongo-pod.json
kubectl create -f examples/meteor/mongo-service.json
Wait until Mongo is started completely and then start up your Meteor app:
kubectl create -f examples/meteor/meteor-service.json
kubectl create -f examples/meteor/meteor-controller.json
Note that meteor-service.json
creates a load balancer, so
your app should be available through the IP of that load balancer once
the Meteor pods are started. We also created the service before creating the rc to
aid the scheduler in placing pods, as the scheduler ranks pod placement according to
service anti-affinity (among other things). You can find the IP of your load balancer
by running:
kubectl get service meteor --template="{{range .status.loadBalancer.ingress}} {{.ip}} {{end}}"
You will have to open up port 80 if it's not open yet in your environment. On Google Compute Engine, you may run the below command.
gcloud compute firewall-rules create meteor-80 --allow=tcp:80 --target-tags kubernetes-node
What is going on?
Firstly, the FROM chees/meteor-kubernetes
line in your Dockerfile
specifies the base image for your Meteor app. The code for that image
is located in the dockerbase/
subdirectory. Open up the Dockerfile
to get an insight into what happens during the docker build
step. The image is based on the official Node.js image. It then installs Meteor
and copies in your app's code. The last line specifies what happens
when your app container is run.
ENTRYPOINT MONGO_URL=mongodb://$MONGO_SERVICE_HOST:$MONGO_SERVICE_PORT /usr/local/bin/node main.js
Here we can see the MongoDB host and port information being passed into the
Meteor app. The MONGO_SERVICE... environment variables are set by Kubernetes,
and point to the service named mongo specified in mongo-service.json. See the
environment documentation for more details.
As you may know, Meteor uses long-lasting connections and requires sticky
sessions. With Kubernetes you can scale out your app easily with session
affinity. The meteor-service.json file contains "sessionAffinity": "ClientIP",
which provides this for us. See the service documentation for more information.
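For illustration, a service spec with this setting looks roughly like the sketch below. Only the service name (meteor), the LoadBalancer type, port 80, and "sessionAffinity": "ClientIP" come from this example; the target port and selector labels are assumptions and may differ from the actual meteor-service.json.
{
  "kind": "Service",
  "apiVersion": "v1",
  "metadata": {
    "name": "meteor"
  },
  "spec": {
    "type": "LoadBalancer",
    "ports": [
      {
        "port": 80,
        "targetPort": 3000
      }
    ],
    "selector": {
      "name": "meteor"
    },
    "sessionAffinity": "ClientIP"
  }
}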
As mentioned above, the mongo container uses a volume which is mapped
to a persistent disk by Kubernetes. In mongo-pod.json
the container
section specifies the volume:
{
  "volumeMounts": [
    {
      "name": "mongo-disk",
      "mountPath": "/data/db"
    }
The name mongo-disk
refers to the volume specified outside the
container section:
{
  "volumes": [
    {
      "name": "mongo-disk",
      "gcePersistentDisk": {
        "pdName": "mongo-disk",
        "fsType": "ext4"
      }
    }
  ],
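To see how these two excerpts fit together, here is a minimal sketch of a complete pod spec wiring the gcePersistentDisk volume into the container's filesystem; the container name and image below are placeholders, not the exact contents of mongo-pod.json.
{
  "kind": "Pod",
  "apiVersion": "v1",
  "metadata": {
    "name": "mongo"
  },
  "spec": {
    "containers": [
      {
        "name": "mongo",
        "image": "mongo",
        "volumeMounts": [
          {
            "name": "mongo-disk",
            "mountPath": "/data/db"
          }
        ]
      }
    ],
    "volumes": [
      {
        "name": "mongo-disk",
        "gcePersistentDisk": {
          "pdName": "mongo-disk",
          "fsType": "ext4"
        }
      }
    ]
  }
}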