# Node End-To-End (e2e) tests

Node e2e tests are component tests meant for testing the Kubelet code on a custom host environment. Tests can be run either locally or against a host running on Google Compute Engine (GCE). Node e2e tests are run as both pre- and post-submit tests by the Kubernetes project.

*Note: Linux only. Mac and Windows unsupported.*

*Note: There is no scheduler running. The e2e tests have to do manual scheduling, e.g. by using `framework.PodClient`.*

# Running tests

## Locally

Why run tests *locally*? It is much faster than running tests remotely.

Prerequisites:
- [Install etcd](https://github.com/coreos/etcd/releases) and include the path to the installation in your `PATH`
  - Verify etcd is installed correctly by running `which etcd`
  - Or make the etcd binary available and executable at `/tmp/etcd`
- [containerd](https://github.com/containerd/containerd) configured with the cgroupfs driver
- Working CNI
  - Ensure that you have a valid CNI configuration in `/etc/cni/net.d/`. For testing purposes, a [bridge](https://www.cni.dev/plugins/current/main/bridge/) configuration should work.

From the Kubernetes base directory, run:

```sh
make test-e2e-node
```

This will run the *ginkgo* binary against the subdirectory *test/e2e_node*, which will in turn:
- Ask for sudo access (needed for running some of the processes)
- Build the Kubernetes source code
- Pre-pull docker images used by the tests
- Start a local instance of *etcd*
- Start a local instance of *kube-apiserver*
- Start a local instance of *kubelet*
- Run the test using the locally started processes
- Output the test results to STDOUT
- Stop *kubelet*, *kube-apiserver*, and *etcd*

To view the settings and print help, run:

```sh
make test-e2e-node PRINT_HELP=y
```

## Remotely

Why run tests *remotely*? Tests will be run in a customized testing environment. This environment closely mimics the pre- and post-submit testing performed by the project.
Prerequisites:
- [Join the googlegroup](https://groups.google.com/a/kubernetes.io/group/dev) `dev@kubernetes.io`
  - *This provides read access to the node test images.*
- Set up a [Google Cloud Platform](https://cloud.google.com/) account and project with Google Compute Engine enabled
- Install and set up the [gcloud sdk](https://cloud.google.com/sdk/downloads)
  - Set your project and a zone by running `gcloud config set project $PROJECT` and `gcloud config set compute/zone $ZONE`
  - Verify the sdk is set up correctly by running `gcloud compute instances list` and `gcloud compute images list --project cos-cloud`
- Configure "application default" credentials for the same project that you configured above. This can be done with `gcloud auth application-default login` or by setting the `GOOGLE_APPLICATION_CREDENTIALS` environment variable to the path of a credentials file.

Run:

```sh
make test-e2e-node REMOTE=true
```

This will:
- Build the Kubernetes source code
- Create a new GCE instance using the default test image
  - The instance will be named something like **test-cos-beta-81-12871-44-0**
- Look up the instance's public IP address
- Copy a compressed archive file to the host containing the following binaries:
  - ginkgo
  - kubelet
  - kube-apiserver
  - e2e_node.test (this binary contains the actual tests to be run)
- Unzip the archive to a directory under **/tmp/gcloud**
- Run the tests using the `ginkgo` command
  - This starts etcd, kube-apiserver, and kubelet
  - The ginkgo command is used because it supports more features than running the test binary directly
- Output the remote test results to STDOUT
- `scp` the log files back to the local host under /tmp/_artifacts/e2e-node-containervm-v20160321-image
- Stop the processes on the remote host
- **Leave the GCE instance running**

**Note: Subsequent tests run using the same image will *reuse the existing host* instead of deleting it and provisioning a new one.
To delete the GCE instance after each test, see *[DELETE_INSTANCES](#delete-instance-after-tests-run)*.**

## Additional Remote Options

## Run tests using different images

This is useful if you want to run tests against a host using a different OS distro or container runtime than provided by the default image.

List the available test images using gcloud.

```sh
gcloud compute images list --project="cos-cloud" --no-standard-images --filter="name ~ 'cos-beta.*'"
```

This will output a list of the available images for the default image project.

Then run:

```sh
make test-e2e-node REMOTE=true IMAGES=""
```

## Run tests against a running GCE instance (not an image)

This is useful if you have a host instance running already and want to run the tests there instead of on a new instance.

```sh
make test-e2e-node REMOTE=true HOSTS=""
```

## Run tests against a different network and subnet (not the default)

This is useful if you want to run tests on a non-default network and subnet.

```sh
make test-e2e-node REMOTE=true NETWORK="" SUBNET=""
```

## Delete instance after tests run

This is useful if you want to recreate the instance for each test run to trigger flakes related to starting the instance.

```sh
make test-e2e-node REMOTE=true DELETE_INSTANCES=true
```

## Keep instance, test binaries, and processes around after tests run

This is useful if you want to manually inspect or debug the kubelet process run as part of the tests.

```sh
make test-e2e-node REMOTE=true CLEANUP=false
```

## Run tests using an image in another project

This is useful if you want to create your own host image in another project and use it for testing.

```sh
make test-e2e-node REMOTE=true IMAGE_PROJECT="" IMAGES=""
```

Setting up your own host image may require additional steps such as installing etcd or docker. See [setup_host.sh](https://git.k8s.io/kubernetes/test/e2e_node/environment/setup_host.sh) for common steps to set up hosts to run node tests.
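The remote options above can be combined in a single invocation. The sketch below assembles one hypothetical combined run; the `IMAGE_PROJECT` and `IMAGES` values are placeholder names for illustration, not real defaults.

```shell
#!/bin/sh
# Sketch: combine several remote options into one invocation.
# IMAGE_PROJECT and IMAGES values below are hypothetical placeholders.
IMAGE_PROJECT="my-image-project"
IMAGES="my-custom-image-1,my-custom-image-2"

# DELETE_INSTANCES=true recreates the instance for every run, as described
# in "Delete instance after tests run" above.
CMD="make test-e2e-node REMOTE=true IMAGE_PROJECT=\"${IMAGE_PROJECT}\" IMAGES=\"${IMAGES}\" DELETE_INSTANCES=true"
echo "${CMD}"
```

Echoing the command first makes it easy to review the assembled variables before actually running make.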
## Create instances using a different instance name prefix

This is useful if you want to create instances using a different name so that you can run multiple copies of the test in parallel against different instances of the same image.

```sh
make test-e2e-node REMOTE=true INSTANCE_PREFIX="my-prefix"
```

## Run tests using a custom image configuration

This is useful if you want to test out different runtime configurations.

First, make a local (temporary) copy of the base image config from the test-infra repo: https://github.com/kubernetes/test-infra/tree/master/jobs/e2e_node

Make your desired modifications to the config, and update data paths to be absolute paths to the relevant files on your local machine (e.g. prepend your home directory path to each). For example:

```diff
images:
  cos-stable:
    image_regex: cos-stable-60-9592-84-0
    project: cos-cloud
-    metadata: "user-data /sys/devices/pci0000:00/0000:00:01.0/0000:01:00.1/sriov_numvfs
```

Some topology manager tests require minimal knowledge of the host topology in order to be performed. The required information is which NUMA node in the system the SRIOV devices are attached to. The test code tries to autodetect the information it needs, skipping the relevant tests if the autodetection fails. You can override the autodetection by adding annotations to the config map, as in this example:

```yaml
metadata:
  annotations:
    pcidevice_node0: "1"
    pcidevice_node1: "0"
    pcidevice_node2: "0"
    pcidevice_node3: "0"
```

Please note that if you add the annotations, then you must provide the full information: you must specify the number of SRIOV devices attached to each NUMA node in the system, even if the number is zero.

# Debugging E2E Tests Locally

1. Install kubectl on the node
2. Set your `KUBECONFIG` environment variable to reference the kubeconfig created by the e2e node tests: `export KUBECONFIG=./_output/local/go/bin/kubeconfig`
3. Inspect the node and pods as needed while the tests are running:

   ```sh
   $ kubectl get pod -A
   $ kubectl describe node
   ```
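Before inspecting anything, it can help to confirm the kubeconfig from step 2 actually exists; a minimal sketch, assuming the default `_output` path (adjust if your output directory differs):

```shell
#!/bin/sh
# Sketch: sanity-check the e2e kubeconfig before pointing kubectl at it.
# The default path below is the one referenced in step 2.
KUBECONFIG="${KUBECONFIG:-./_output/local/go/bin/kubeconfig}"
if [ -f "${KUBECONFIG}" ]; then
  echo "kubeconfig found: ${KUBECONFIG}"
else
  echo "kubeconfig missing: ${KUBECONFIG} (are the e2e node tests running?)"
fi
```

If the file is missing, the tests either have not started yet or have already cleaned up their local state.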