Improved kubemark-guide to reflect recent changes

Shyam JVS 2016-12-23 16:08:16 +01:00 committed by Shyam Jeedigunta
parent bb2e772be4
commit a1967db5bd
1 changed file with 104 additions and 56 deletions


@ -13,27 +13,38 @@ and how to use it.
## Architecture
At a very high level, a Kubemark cluster consists of two parts: a real master
and a set of “Hollow” Nodes. The prefix “Hollow” means an
implementation/instantiation of the actual component with all “moving”
parts mocked out. The best example is HollowKubelet, which pretends to be an
ordinary Kubelet, but does not start anything, nor mount any volumes - it just
lies that it does. More design and implementation details can be found at the end
of this document.
Currently, the master components run on a dedicated machine, as pods that are
created and managed by the kubelet, which itself runs as either a systemd or a
supervisord service on the master VM depending on the VM distro (currently only
systemd, as we use a GCI image). Having a dedicated machine for the master
has a slight advantage over running the master components on an external cluster,
namely being able to completely isolate master resources from everything else.
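For illustration only (nothing in the guide depends on this), one way to peek at
those master pods on a GCE-based setup is to SSH to the master VM and list the
containers the kubelet started; the instance name and zone below are hypothetical
placeholders:

```
# List the containers running on the kubemark master (assumes GCE;
# substitute your actual master name and zone):
$ gcloud compute ssh kubemark-master --zone us-central1-b \
    --command "sudo docker ps --format '{{.Names}}'"
# Expect entries for etcd, kube-apiserver, kube-controller-manager and kube-scheduler.
```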
The HollowNodes, on the other hand, run on an external Kubernetes cluster
as pods in an isolated namespace (named kubemark). This idea of using pods on a
real cluster to behave (or act) as nodes of the kubemark cluster lies at the heart
of kubemark's design.
## Requirements
To run Kubemark you need a Kubernetes cluster (called the `external cluster`)
for running all your HollowNodes and a dedicated machine for the master.
The master machine has to be directly routable from the HollowNodes. You also need
access to a Docker repository (gcr.io in the case of GCE) that has the
container images for etcd, hollow-node and node-problem-detector.
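As an optional sanity check (assuming GCE/GCR; PROJECT is a placeholder for your
project ID), you can list the images your repository already hosts:

```
# List container images available under your GCR repository:
$ gcloud container images list --repository "gcr.io/PROJECT"
```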
Currently, the scripts are written to be easily usable on GCE, but it should be
relatively straightforward to port them to different providers or bare metal.
There is an ongoing effort to refactor the kubemark code into provider-specific (gce)
and provider-independent parts, which should make it relatively simple to run
kubemark clusters on other cloud providers as well.
## Common use cases and helper scripts
@ -43,8 +54,10 @@ Common workflow for Kubemark is:
- monitoring test execution and debugging problems
- turning down Kubemark cluster
(For now) The descriptions include comments helpful for anyone who'll want to
port Kubemark to different providers.
(Later) Once the refactoring mentioned in the section above finishes, we will replace
these comments with a clean API that allows kubemark to run on top of any provider.
### Starting a Kubemark cluster
@ -52,46 +65,32 @@ To start a Kubemark cluster on GCE you need to create an external kubernetes
cluster (it can be GCE, GKE or anything else) yourself, make sure that your kubeconfig
points to it by default, build a kubernetes release (e.g. by running
`make quick-release`) and run the `test/kubemark/start-kubemark.sh` script.
This script will create a VM for the master (along with a mounted PD and firewall rules),
then start the kubelet on it and run the pods for the master components. Following this, it
sets up the HollowNodes as Pods on the external cluster and does all the setup necessary
to let them talk to the kubemark apiserver. It will use the configuration stored in
`cluster/kubemark/config-default.sh` - you can tweak it however you want, but note that
some features may not be implemented yet, as the implementation of the Hollow
components/mocks will probably lag behind the real ones. For performance tests, the
interesting variables are `NUM_NODES` and `KUBEMARK_MASTER_SIZE`. After the
start-kubemark script finishes, you'll have a ready Kubemark cluster, and a kubeconfig
file for talking to it is stored in `test/kubemark/resources/kubeconfig.kubemark`.
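Putting this together, a minimal sketch of the startup workflow (assuming your
default kubeconfig already points at the external cluster, and that you set the
sizing variables by editing the config file beforehand):

```
# From the root of the kubernetes repo:
$ make quick-release
# (tweak NUM_NODES / KUBEMARK_MASTER_SIZE in cluster/kubemark/config-default.sh)
$ ./test/kubemark/start-kubemark.sh
```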
Currently we're running each HollowNode with a limit of 0.09 CPU core and 220MB of
memory per pod. However, if we also take into account the resources absorbed by the
default cluster addons and fluentD running on the 'external' cluster, this limit becomes
~0.1 CPU core per pod, thus allowing ~10 HollowNodes to run per core (on an
"n1-standard-8" VM node).
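For example, under the ~0.1 core/pod figure above, a back-of-the-envelope sizing of
the external cluster for a hypothetical 1000-node kubemark cluster works out as follows:

```
# n1-standard-8 => 8 cores => ~80 HollowNodes per external node, so:
$ echo $(( (1000 + 79) / 80 ))   # pods / pods-per-node, rounded up
13
```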
#### Behind-the-scenes details:
The start-kubemark.sh script does quite a lot of things:
- Prepares a master machine named MASTER_NAME (this variable's value should be set by this point):
(*the steps below use gcloud, and should be easy to do outside of GCE*)
1. Creates a Persistent Disk for use by the master (plus one more for etcd-events, if flagged)
2. Creates a static IP address for the master in the cluster and assigns it to the variable MASTER_IP
3. Creates a VM instance for the master, configured with the PD and IP created above
4. Sets a firewall rule on the master to open port 443\* for all TCP traffic by default
<sub>\* Port 443 is a secured port on the master machine which is used for all
external communication with the API server. In the last sentence *external*
@ -99,6 +98,40 @@ means all traffic coming from other machines, including all the Nodes, not only
from outside of the cluster. Currently, local components, i.e. the ControllerManager
and the Scheduler, talk to the API server using the insecure port 8080.</sub>
- [Optional to read] Establishes the certs/keys required for setting up the PKI of the kubemark cluster:
(*the steps below are independent of GCE and work for all providers*)
1. Generates a randomly named temporary directory for storing PKI certs/keys, which is delete-trapped on EXIT.
2. Creates a bearer token for 'admin' on the master.
3. Generates a certificate for the CA, and a (certificate + private key) pair for each of the master, kubelet and kubecfg.
4. Generates kubelet and kubeproxy tokens for the master.
5. Writes a kubeconfig locally to `test/kubemark/resources/kubeconfig.kubemark` to enable local kubectl use.
- Sets up the environment and starts the master components (through the `start-kubemark-master.sh` script):
(*the steps below use gcloud for SSH and SCP to the master, and should be easy to do outside of GCE*)
1. SSHes to the master machine, creates a new directory (`/etc/srv/kubernetes`) and writes all the
certs/keys/tokens/passwords to it.
2. SCPs all the master pod manifests, shell scripts (`start-kubemark-master.sh`, `configure-kubectl.sh`, etc.)
and config files for passing env variables (`kubemark-master-env.sh`) from the local machine to the master.
3. SSHes to the master machine and runs the startup script `start-kubemark-master.sh` (and possibly others).
Note: The directory structure and the functions performed by the startup script(s) can vary based on the master
distro. We currently support the GCI image `gci-dev-56-8977-0-0` on GCE.
- Sets up and starts the HollowNodes (as pods) on the external cluster:
(*the steps below (except the 2nd and 3rd) are independent of GCE and work for all providers*)
1. Identifies the right kubemark binary from the current kubernetes repo for the linux/amd64 platform.
2. Creates a Docker image for HollowNode using this binary and uploads it to a remote Docker repository.
(We use gcr.io as our remote docker repository on GCE; it will be different for other providers.)
3. [One-off] Creates and uploads a Docker image for NodeProblemDetector (see the kubernetes/node-problem-detector
repo), which is one of the containers in the HollowNode pod, besides HollowKubelet and HollowProxy. However, we
use it with a hollow config that essentially has an empty set of rules and conditions to be detected.
This step is required only for other cloud providers, as the docker image for GCE already exists on GCR.
4. Creates a Secret which stores the kubeconfig for use by HollowKubelet/HollowProxy, along with addons and
ConfigMaps for the HollowNode and the HollowNodeProblemDetector.
5. Creates a ReplicationController for the HollowNodes and starts them up, after replacing all variables in
the hollow-node_template.json resource.
6. Waits until all HollowNodes are in the Running phase (see the verification sketch after this list).
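A quick way to verify the outcome of these steps, assuming kubectl currently points
at the external cluster (the namespace and kubeconfig path are the ones mentioned
earlier in this guide):

```
# HollowNodes run as pods in the "kubemark" namespace of the external cluster:
$ kubectl get pods -n kubemark
# ...and each of them should be registered as a Node with the kubemark master:
$ kubectl --kubeconfig test/kubemark/resources/kubeconfig.kubemark get nodes
```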
### Running e2e tests on Kubemark cluster
To run a standard e2e test on your Kubemark cluster created in the previous step
@ -156,7 +189,7 @@ E.g. you want to see the logs of HollowKubelet on which pod `my-pod` is running.
To do so you can execute:
```
$ kubectl --kubeconfig kubernetes/test/kubemark/resources/kubeconfig.kubemark describe pod my-pod
```
This outputs the pod description, which includes a line:
@ -190,9 +223,12 @@ All those things should work exactly the same on all cloud providers.
On GCE you just need to execute the `test/kubemark/stop-kubemark.sh` script, which
will delete the HollowNode ReplicationController and all the other resources for you. On
other providers you'll need to delete all this stuff by yourself. As part of
the effort mentioned above to refactor kubemark into provider-independent and
provider-specific parts, the provider-specific resource-deletion logic will
move out behind a clean API.
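On GCE, then, teardown is a single command:

```
# Deletes the HollowNode ReplicationController on the external cluster,
# along with the master VM, PD and firewall rule on GCE:
$ ./test/kubemark/stop-kubemark.sh
```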
## Some current implementation details and future roadmap
The Kubemark master uses exactly the same binaries as ordinary Kubernetes, which
means it will never be out of date. On the other hand, HollowNodes use
@ -202,9 +238,21 @@ Because there's no easy way of mocking other managers (e.g. VolumeManager), they
are not supported in Kubemark (e.g. we can't schedule Pods with volumes in them
yet).
We currently plan to extend kubemark along the following directions:
- As noted at places above, make kubemark more structured and easy to run across
various providers, without having to tweak the setup scripts, using a well-defined
kubemark-provider API.
- Allow kubemark to run on various distros (GCI, debian, redhat, etc.) for any
given provider.
- Make Kubemark performance in ci-tests mimic real-cluster ci-tests on metrics such as
CPU, memory and network bandwidth usage, and realize this goal through measurable
objectives (e.g. a given kubemark metric should vary by no more than X% from the
corresponding real-cluster metric). We could also use metrics reported by Prometheus.
- Improve the logging of CI-test metrics (such as aggregated API call latencies,
scheduling call latencies, and percentiles for CPU/memory usage of different master
components in density/load tests) by packing them into well-structured artifacts
instead of the current dumping to logs.
- Create a dashboard that allows easy viewing and comparison of these metrics across tests.
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->