Node.js and MongoDB on Kubernetes
The following document describes the deployment of a basic Node.js and MongoDB web stack on Kubernetes. Currently this example does not use replica sets for MongoDB.
For a more in-depth explanation of this example, please read this post.
Prerequisites
This example assumes that you have a basic understanding of Kubernetes concepts (Pods, Services, Replication Controllers), a Kubernetes cluster up and running, and that you have installed the kubectl
command line tool somewhere in your path. Please see the getting started guides for installation instructions for your platform.
Note: This example was tested on Google Container Engine. Some optional commands require the Google Cloud SDK.
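Before continuing, you may want to confirm that kubectl can reach your cluster. A quick, optional check (assuming kubectl is already configured for the cluster you intend to use) is:
kubectl cluster-info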
Creating the MongoDB Service
The first thing to do is create the MongoDB Service. This service is used by the other Pods in the cluster to find and connect to the MongoDB instance.
apiVersion: v1
kind: Service
metadata:
  labels:
    name: mongo
  name: mongo
spec:
  ports:
    - port: 27017
      targetPort: 27017
  selector:
    name: mongo
This Service matches all Pods that carry the "name: mongo" label and exposes port 27017, targeting port 27017 on the MongoDB Pods. Port 27017 is the standard MongoDB port.
To start the service, run:
kubectl create -f examples/nodesjs-mongodb/mongo-service.yaml
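As an optional check, you can confirm the Service was created:
kubectl get service mongo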
Creating the MongoDB Controller
Next, create the MongoDB instance that runs the database. Databases also need persistent storage, which is configured differently on each platform.
apiVersion: v1
kind: ReplicationController
metadata:
  labels:
    name: mongo
  name: mongo-controller
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: mongo
    spec:
      containers:
        - image: mongo
          name: mongo
          ports:
            - name: mongo
              containerPort: 27017
              hostPort: 27017
          volumeMounts:
            - name: mongo-persistent-storage
              mountPath: /data/db
      volumes:
        - name: mongo-persistent-storage
          gcePersistentDisk:
            pdName: mongo-disk
            fsType: ext4
Looking at this file from the bottom up:
First, it creates a volume called "mongo-persistent-storage."
In the above example, it is using a "gcePersistentDisk" to back the storage. This is only applicable if you are running your Kubernetes cluster in Google Cloud Platform.
If you don't already have a Google Persistent Disk created in the same zone as your cluster, create a new disk in the same Google Compute Engine / Container Engine zone as your cluster with this command:
gcloud compute disks create --size=200GB --zone=$ZONE mongo-disk
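The command above expects $ZONE to be set to your cluster's zone. If you are not sure which zone that is (assuming you created the cluster on Google Container Engine), you can list your clusters and their zones with:
gcloud container clusters list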
If you are using AWS, replace the "volumes" section with this (untested):
volumes:
  - name: mongo-persistent-storage
    awsElasticBlockStore:
      volumeID: aws://{region}/{volume ID}
      fsType: ext4
If you don't have an EBS volume in the same region as your cluster, create a new EBS volume in the same region with this command (untested):
ec2-create-volume --size 200 --region $REGION --availability-zone $ZONE
This command will return a volume ID to use.
For other storage options (iSCSI, NFS, OpenStack), please follow the documentation.
Now that the volume is created and usable by Kubernetes, the next step is to create the Pod.
Looking at the container section: It uses the official MongoDB container, names itself "mongo", opens up port 27017, and mounts the disk to "/data/db" (where the mongo container expects the data to be).
Now looking at the rest of the file, it is creating a Replication Controller with one replica, called mongo-controller. It is important to use a Replication Controller and not just a Pod, as a Replication Controller will restart the instance in case it crashes.
Create this controller with this command:
kubectl create -f examples/nodesjs-mongodb/mongo-controller.yaml
At this point, MongoDB is up and running.
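As an optional check (using the label from the controller above), you can verify that the MongoDB pod is running:
kubectl get pods -l name=mongo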
Note: There is no password protection or auth running on the database by default. Please keep this in mind!
Creating the Node.js Service
The next step is to create the Node.js service. This service is what will be the endpoint for the web site, and will load balance requests to the Node.js instances.
apiVersion: v1
kind: Service
metadata:
  name: web
  labels:
    name: web
spec:
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 3000
      protocol: TCP
  selector:
    name: web
This Service is called "web", and it uses a LoadBalancer to distribute traffic arriving on port 80 to port 3000 on Pods carrying the "name: web" label. Port 80 is the standard HTTP port, and port 3000 is the port the example Node.js application listens on.
On Google Container Engine, a network load balancer and firewall rule to allow traffic are automatically created.
To start the service, run:
kubectl create -f examples/nodesjs-mongodb/web-service.yaml
If you are running on a platform that does not support LoadBalancer (e.g., bare metal), you need to use a NodePort with your own load balancer.
You may also need to open appropriate firewall ports to allow traffic.
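As a rough sketch of the NodePort approach (assuming the "web" Service defined above has already been created), you could switch its type with a patch and then point your own load balancer at the allocated node port:
kubectl patch service web -p '{"spec":{"type":"NodePort"}}'
kubectl describe service web | grep NodePort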
Creating the Node.js Controller
The final step is deploying the Node.js container that will run the application code. This container can easily be replaced by any other web serving frontend, such as Rails, LAMP, Java, Go, etc.
The most important thing to keep in mind is how to access the MongoDB service.
If you were running MongoDB and Node.js on the same server, you would access MongoDB like so:
MongoClient.connect('mongodb://localhost:27017/database-name', function(err, db) { console.log(db); });
With this Kubernetes setup, that line of code would become:
MongoClient.connect('mongodb://mongo:27017/database-name', function(err, db) { console.log(db); });
The MongoDB Service created earlier tells Kubernetes to configure the cluster so that the hostname 'mongo' resolves to the MongoDB instance created earlier.
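If you want to verify that the 'mongo' hostname resolves from inside the cluster, one optional way (assuming your cluster can pull the official mongo image) is to start a throwaway client pod and connect to the Service by name:
kubectl run -it --rm mongo-client --image=mongo --restart=Never --command -- mongo --host mongo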
Custom Container
You should have your own container image that runs your Node.js code, hosted in a container registry.
See this example for how to build your own Node.js container.
Once you have created your container, create the web controller.
apiVersion: v1
kind: ReplicationController
metadata:
  labels:
    name: web
  name: web-controller
spec:
  replicas: 2
  selector:
    name: web
  template:
    metadata:
      labels:
        name: web
    spec:
      containers:
        - image: <YOUR-CONTAINER>
          name: web
          ports:
            - containerPort: 3000
              name: http-server
Replace <YOUR-CONTAINER> with the URL of your container image.
This Controller will create two replicas of the Node.js container; each Node.js container carries the "name: web" label and exposes port 3000. The Service LoadBalancer will forward port 80 traffic to port 3000 automatically, and load balance traffic between the two instances.
To start the Controller, run:
kubectl create -f examples/nodesjs-mongodb/web-controller.yaml
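If you later want more or fewer Node.js instances, the Replication Controller can be resized in place. For example, this optional command scales it to three replicas:
kubectl scale rc web-controller --replicas=3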
Demo Container
If you DON'T want to create a custom container, you can use the following YAML file:
Note: You cannot run both Controllers at the same time, as they both try to control the same Pods.
apiVersion: v1
kind: ReplicationController
metadata:
  labels:
    name: web
  name: web-controller
spec:
  replicas: 2
  selector:
    name: web
  template:
    metadata:
      labels:
        name: web
    spec:
      containers:
        - image: node:0.10.40
          command: ['/bin/sh', '-c']
          args: ['cd /home && git clone https://github.com/ijason/NodeJS-Sample-App.git demo && cd demo/EmployeeDB/ && npm install && sed -i -- ''s/localhost/mongo/g'' app.js && node app.js']
          name: web
          ports:
            - containerPort: 3000
              name: http-server
This will use the default Node.js container, and will pull and execute code at run time. This is not recommended; typically, your code should be part of the container.
To start the Controller, run:
kubectl create -f examples/nodesjs-mongodb/web-controller-demo.yaml
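Because the demo container clones the sample app and runs npm install at startup, it can take a minute or two to become ready. You can watch the pods and, once one is running, check its logs (replace <web-pod-name> with a name from the pod listing):
kubectl get pods -l name=web
kubectl logs <web-pod-name>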
Testing it out
Now that all the components are running, visit the IP address of the load balancer to access the website.
With Google Cloud Platform, get the IP address of all load balancers with the following command:
gcloud compute forwarding-rules list
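Alternatively, regardless of platform, the external IP also appears in the EXTERNAL-IP column of the Service listing once the load balancer has been provisioned:
kubectl get service web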