Import site converted from adoc to markdown
Signed-off-by: David Vossel <davidvossel@gmail.com>
                                 Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

   1. Definitions.

      "License" shall mean the terms and conditions for use, reproduction,
      and distribution as defined by Sections 1 through 9 of this document.

      "Licensor" shall mean the copyright owner or entity authorized by
      the copyright owner that is granting the License.

      "Legal Entity" shall mean the union of the acting entity and all
      other entities that control, are controlled by, or are under common
      control with that entity. For the purposes of this definition,
      "control" means (i) the power, direct or indirect, to cause the
      direction or management of such entity, whether by contract or
      otherwise, or (ii) ownership of fifty percent (50%) or more of the
      outstanding shares, or (iii) beneficial ownership of such entity.

      "You" (or "Your") shall mean an individual or Legal Entity
      exercising permissions granted by this License.

      "Source" form shall mean the preferred form for making modifications,
      including but not limited to software source code, documentation
      source, and configuration files.

      "Object" form shall mean any form resulting from mechanical
      transformation or translation of a Source form, including but
      not limited to compiled object code, generated documentation,
      and conversions to other media types.

      "Work" shall mean the work of authorship, whether in Source or
      Object form, made available under the License, as indicated by a
      copyright notice that is included in or attached to the work
      (an example is provided in the Appendix below).

      "Derivative Works" shall mean any work, whether in Source or Object
      form, that is based on (or derived from) the Work and for which the
      editorial revisions, annotations, elaborations, or other modifications
      represent, as a whole, an original work of authorship. For the purposes
      of this License, Derivative Works shall not include works that remain
      separable from, or merely link (or bind by name) to the interfaces of,
      the Work and Derivative Works thereof.

      "Contribution" shall mean any work of authorship, including
      the original version of the Work and any modifications or additions
      to that Work or Derivative Works thereof, that is intentionally
      submitted to Licensor for inclusion in the Work by the copyright owner
      or by an individual or Legal Entity authorized to submit on behalf of
      the copyright owner. For the purposes of this definition, "submitted"
      means any form of electronic, verbal, or written communication sent
      to the Licensor or its representatives, including but not limited to
      communication on electronic mailing lists, source code control systems,
      and issue tracking systems that are managed by, or on behalf of, the
      Licensor for the purpose of discussing and improving the Work, but
      excluding communication that is conspicuously marked or otherwise
      designated in writing by the copyright owner as "Not a Contribution."

      "Contributor" shall mean Licensor and any individual or Legal Entity
      on behalf of whom a Contribution has been received by Licensor and
      subsequently incorporated within the Work.

   2. Grant of Copyright License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      copyright license to reproduce, prepare Derivative Works of,
      publicly display, publicly perform, sublicense, and distribute the
      Work and such Derivative Works in Source or Object form.

   3. Grant of Patent License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      (except as stated in this section) patent license to make, have made,
      use, offer to sell, sell, import, and otherwise transfer the Work,
      where such license applies only to those patent claims licensable
      by such Contributor that are necessarily infringed by their
      Contribution(s) alone or by combination of their Contribution(s)
      with the Work to which such Contribution(s) was submitted. If You
      institute patent litigation against any entity (including a
      cross-claim or counterclaim in a lawsuit) alleging that the Work
      or a Contribution incorporated within the Work constitutes direct
      or contributory patent infringement, then any patent licenses
      granted to You under this License for that Work shall terminate
      as of the date such litigation is filed.

   4. Redistribution. You may reproduce and distribute copies of the
      Work or Derivative Works thereof in any medium, with or without
      modifications, and in Source or Object form, provided that You
      meet the following conditions:

      (a) You must give any other recipients of the Work or
          Derivative Works a copy of this License; and

      (b) You must cause any modified files to carry prominent notices
          stating that You changed the files; and

      (c) You must retain, in the Source form of any Derivative Works
          that You distribute, all copyright, patent, trademark, and
          attribution notices from the Source form of the Work,
          excluding those notices that do not pertain to any part of
          the Derivative Works; and

      (d) If the Work includes a "NOTICE" text file as part of its
          distribution, then any Derivative Works that You distribute must
          include a readable copy of the attribution notices contained
          within such NOTICE file, excluding those notices that do not
          pertain to any part of the Derivative Works, in at least one
          of the following places: within a NOTICE text file distributed
          as part of the Derivative Works; within the Source form or
          documentation, if provided along with the Derivative Works; or,
          within a display generated by the Derivative Works, if and
          wherever such third-party notices normally appear. The contents
          of the NOTICE file are for informational purposes only and
          do not modify the License. You may add Your own attribution
          notices within Derivative Works that You distribute, alongside
          or as an addendum to the NOTICE text from the Work, provided
          that such additional attribution notices cannot be construed
          as modifying the License.

      You may add Your own copyright statement to Your modifications and
      may provide additional or different license terms and conditions
      for use, reproduction, or distribution of Your modifications, or
      for any such Derivative Works as a whole, provided Your use,
      reproduction, and distribution of the Work otherwise complies with
      the conditions stated in this License.

   5. Submission of Contributions. Unless You explicitly state otherwise,
      any Contribution intentionally submitted for inclusion in the Work
      by You to the Licensor shall be under the terms and conditions of
      this License, without any additional terms or conditions.
      Notwithstanding the above, nothing herein shall supersede or modify
      the terms of any separate license agreement you may have executed
      with Licensor regarding such Contributions.

   6. Trademarks. This License does not grant permission to use the trade
      names, trademarks, service marks, or product names of the Licensor,
      except as required for reasonable and customary use in describing the
      origin of the Work and reproducing the content of the NOTICE file.

   7. Disclaimer of Warranty. Unless required by applicable law or
      agreed to in writing, Licensor provides the Work (and each
      Contributor provides its Contributions) on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
      implied, including, without limitation, any warranties or conditions
      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
      PARTICULAR PURPOSE. You are solely responsible for determining the
      appropriateness of using or redistributing the Work and assume any
      risks associated with Your exercise of permissions under this License.

   8. Limitation of Liability. In no event and under no legal theory,
      whether in tort (including negligence), contract, or otherwise,
      unless required by applicable law (such as deliberate and grossly
      negligent acts) or agreed to in writing, shall any Contributor be
      liable to You for damages, including any direct, indirect, special,
      incidental, or consequential damages of any character arising as a
      result of this License or out of the use or inability to use the
      Work (including but not limited to damages for loss of goodwill,
      work stoppage, computer failure or malfunction, or any and all
      other commercial damages or losses), even if such Contributor
      has been advised of the possibility of such damages.

   9. Accepting Warranty or Additional Liability. While redistributing
      the Work or Derivative Works thereof, You may choose to offer,
      and charge a fee for, acceptance of support, warranty, indemnity,
      or other liability obligations and/or rights consistent with this
      License. However, in accepting such obligations, You may act only
      on Your own behalf and on Your sole responsibility, not on behalf
      of any other Contributor, and only if You agree to indemnify,
      defend, and hold each Contributor harmless for any liability
      incurred by, or claims asserted against, such Contributor by reason
      of your accepting any such warranty or additional liability.

   END OF TERMS AND CONDITIONS

   APPENDIX: How to apply the Apache License to your work.

      To apply the Apache License to your work, attach the following
      boilerplate notice, with the fields enclosed by brackets "[]"
      replaced with your own identifying information. (Don't include
      the brackets!)  The text should be enclosed in the appropriate
      comment syntax for the file format. We also recommend that a
      file or class name and description of purpose be included on the
      same "printed page" as the copyright notice for easier
      identification within third-party archives.

   Copyright [yyyy] [name of copyright owner]

   Licensed under the Apache License, Version 2.0 (the "License");
   you may not use this file except in compliance with the License.
   You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.
find-unref:
	# Ignore _*.md files
	bash -c 'comm -23 <(find . -regex "./[^_].*\.md" | cut -d / -f 2- | sort) <(grep -hRo "[a-zA-Z/:.-]*\.md" | sort -u)'

spellcheck:
	for FN in $$(find . -name \*.md); do \
	aspell --personal=.aspell.en.pws --lang=en --encoding=utf-8 list <$$FN ; \
	done | sort -u

test:
	test -z "$$(make -s find-unref)"
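The `find-unref` target works by comparing two sorted lists with `comm -23`: markdown files that exist on disk versus every `.md` path mentioned inside any source file; whatever is only in the first list is unreferenced. A self-contained sketch of the same idea on a throwaway directory (the file names are made up, and temp files stand in for the bash process substitution used above):

```shell
set -eu
export LC_ALL=C                     # comm needs both inputs in the same sort order
tmp=$(mktemp -d)
mkdir "$tmp/site"
printf 'See [arch](architecture.md).\n' > "$tmp/site/README.md"
printf 'Back to [home](README.md).\n'   > "$tmp/site/architecture.md"
printf 'Nobody links here.\n'           > "$tmp/site/orphan.md"
cd "$tmp/site"
# Markdown files present on disk (ignoring _*.md, as in the Makefile):
find . -regex "./[^_].*\.md" | cut -d / -f 2- | sort > "$tmp/on-disk"
# Every .md path mentioned inside any file:
grep -hRo "[a-zA-Z/:.-]*\.md" . | sort -u > "$tmp/referenced"
# Files on disk that no other file references:
unref=$(comm -23 "$tmp/on-disk" "$tmp/referenced")
echo "$unref"   # → orphan.md
```

Because the referenced-paths list is built with a plain `grep`, the check is deliberately loose: any `.md`-looking token anywhere in a file counts as a reference.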
Welcome to the KubeVirt Documentation. This welcome page is provided as
the entry point to the different topics of this user guide.

Try it out
==========

-   An easy-to-use demo: <https://github.com/kubevirt/demo>

Getting help
============

-   File a bug: <https://github.com/kubevirt/kubevirt/issues>

-   Mailing list: <https://groups.google.com/forum/#!forum/kubevirt-dev>

-   Slack: <https://kubernetes.slack.com/messages/virtualization>

Developer
=========

-   Start contributing:
    <https://github.com/kubevirt/kubevirt/blob/master/CONTRIBUTING.md>

-   API Reference: <http://kubevirt.io/api-reference/>

Privacy
=======

-   Check our privacy policy at: <https://kubevirt.io/privacy/>

-   We use the <https://netlify.com> Open Source Plan for rendering pull
    requests to the documentation repository.
# KubeVirt User Guide <small>v0.4.0</small>

> Reference documentation

* Installation
* Life-cycle management
* Additional resources

[GitHub](https://github.com/kubevirt)
[Start reading](#Introduction)
# Table of contents

* [Welcome](README.md)
* [Architecture](architecture.md)
* [Installing KubeVirt](installation/installation.md)
  * [virtctl Client Tool](installation/virtctl.md)
  * [Updates and Deletion](installation/updating-and-deleting-installs.md)
  * [Enabling Live Migration](installation/live-migration.md)
  * [Enabling HugePage Support](installation/hugepages.md)
  * [Monitoring](installation/monitoring.md)
  * [Image Upload](installation/image-upload.md)
  * [Webhooks](installation/webhooks.md)
  * [Authorization](installation/authorization.md)
  * [Annotations and Labels](installation/annotations_and_labels.md)
  * [Unresponsive Node Handling](installation/unresponsive-nodes.md)
  * [Node Eviction](installation/node-eviction.md)
* [Creating Virtual Machines](creation/creating-virtual-machines.md)
  * [Devices](creation/devices.md)
  * [Disks and Volumes](creation/disks-and-volumes.md)
  * [Interfaces and Networks](creation/interfaces-and-networks.md)
  * [Dedicated CPU Usage](creation/dedicated-cpu.md)
  * [cloud-init](creation/cloud-init.md)
  * [Associating Guest Information](creation/guest-operating-system-information.md)
  * [Windows Virtio Drivers](creation/virtio-win.md)
  * [Virtual Machine Presets](creation/presets.md)
  * [Probes](creation/probes.md)
  * [Run Strategies](creation/run-strategies.md)
* [Using Virtual Machines](usage/usage.md)
  * [Starting and Stopping](usage/life-cycle.md)
  * [Console Access (Serial and Graphical)](usage/graphical-and-console-access.md)
  * [Node Placement](usage/node-placement.md)
  * [DNS Integration](usage/dns.md)
  * [Network Service Integration](usage/network-service-integration.md)
  * [Assigning Network Policies](usage/create-networkpolicy.md)
  * [Resource Over-commit](usage/overcommit.md)
  * [Virtual Machine Replica Set](usage/virtual-machine-replica-set.md)
  * [vmctl](usage/vmctl.md)
* [Virtual Machine Templates](templates/templates.md)
  * [Common Templates](templates/common-templates.md)
  * [Using Templates](templates/using-templates.md)
* [Web Interface](web-interface.md)
* [Latest Release Notes](changelog.md)
VirtualMachine
==============

A `VirtualMachine` provides additional management capabilities to a
VirtualMachineInstance inside the cluster. That includes:

-   ABI stability

-   Start/stop/restart capabilities on the controller level

-   Offline configuration change with propagation on
    VirtualMachineInstance recreation

-   Ensuring that the VirtualMachineInstance is running if it should be
    running

It focuses on a 1:1 relationship between the controller instance and a
virtual machine instance. In many ways it is very similar to a
[StatefulSet](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/)
with `spec.replicas` set to `1`.

How to use a VirtualMachine
---------------------------

A VirtualMachine makes sure that a VirtualMachineInstance object
with an identical name is present in the cluster if `spec.running`
is set to `true`. Further, it makes sure that the
VirtualMachineInstance is removed from the cluster if
`spec.running` is set to `false`.

There is also a field `spec.runStrategy`, which can likewise be used to
control the state of the associated VirtualMachineInstance object. To
avoid confusing and contradictory states, these two fields are mutually
exclusive.

An extended explanation of `spec.runStrategy` vs `spec.running` can be
found in [Run Strategies](creation/run-strategies.md).

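As an illustration of the alternative, a VirtualMachine using `spec.runStrategy` might look like the sketch below. The `Manual` strategy shown here is one of the values exposed by the KubeVirt API; note that `running` is omitted, since the two fields may not be set together:

```yaml
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
  name: myvm
spec:
  runStrategy: Manual   # instead of running: true/false
  template:
    # ... VirtualMachineInstance template, as in the example further down ...
```

With `Manual`, the controller only changes the VirtualMachineInstance state in response to explicit `virtctl start`/`stop` requests.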
### Starting and stopping

After creating a VirtualMachine, it can be switched on or off like this:

    # Start the virtual machine:
    virtctl start myvm

    # Stop the virtual machine:
    virtctl stop myvm

`kubectl` can be used too:

    # Start the virtual machine:
    kubectl patch virtualmachine myvm --type merge -p \
        '{"spec":{"running":true}}'

    # Stop the virtual machine:
    kubectl patch virtualmachine myvm --type merge -p \
        '{"spec":{"running":false}}'

### Controller status

Once a VirtualMachineInstance is created, its state is tracked via
`status.created` and `status.ready`. If a VirtualMachineInstance exists
in the cluster, `status.created` equals `true`. If the
VirtualMachineInstance is also ready, `status.ready` equals `true`
too.

If a VirtualMachineInstance reaches a final state but `spec.running`
equals `true`, the VirtualMachine controller will set `status.ready` to
`false` and re-create the VirtualMachineInstance.

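For illustration, the controller-maintained stanza of a healthy, running VirtualMachine object then looks roughly like this (field names as described above, values are an example):

```yaml
status:
  created: true   # a VirtualMachineInstance object exists in the cluster
  ready: true     # ...and that instance is ready
```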
### Restarting

A VirtualMachineInstance restart can be triggered by deleting the
VirtualMachineInstance. This will also propagate configuration changes
from the template in the VirtualMachine:

    # Restart the virtual machine (you delete the instance!):
    kubectl delete virtualmachineinstance myvm

To restart a VirtualMachine named myvm using virtctl:

    $ virtctl restart myvm

This performs a normal restart of the VirtualMachineInstance and
reschedules it on a new virt-launcher Pod.

To force restart a VirtualMachine named myvm using virtctl:

    $ virtctl restart myvm --force --grace-period=0

This tries to perform a normal restart, and additionally deletes the
virt-launcher Pod of the VirtualMachineInstance, setting
`GracePeriodSeconds` to the seconds passed in the command.

Currently, only setting grace-period=0 is supported.

> Note: A force restart can cause data corruption, and should only be
> used in cases of kernel panic or a VirtualMachine being unresponsive
> to normal restarts.

### Fencing considerations

A VirtualMachine will never restart or re-create a
VirtualMachineInstance until the current instance of the
VirtualMachineInstance is deleted from the cluster.

### Exposing as a Service

A VirtualMachine can be exposed as a service. The actual service will be
available once the VirtualMachineInstance starts, without additional
interaction.

For example, exposing SSH port (22) as a `ClusterIP` service using
`virtctl` after the VirtualMachine was created, but before it
started:

    $ virtctl expose virtualmachine vmi-ephemeral --name vmiservice --port 27017 --target-port 22

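`virtctl expose` creates a regular Kubernetes Service that selects the VirtualMachineInstance by label. A roughly equivalent hand-written manifest could look like this sketch, where the selector label is an assumption and must match whatever labels your VMI actually carries:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: vmiservice
spec:
  type: ClusterIP
  ports:
  - port: 27017
    targetPort: 22
  selector:
    kubevirt.io/vm: vmi-ephemeral   # hypothetical label; must match the VMI's labels
```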
All service exposure options that apply to a VirtualMachineInstance
apply to a VirtualMachine.

See [Network Service Integration](usage/network-service-integration.md) for more details.

When to use a VirtualMachine
----------------------------

### When ABI stability is required between restarts

A `VirtualMachine` makes sure that VirtualMachineInstance ABI
configurations are consistent between restarts. A classic example is a
license bound to the firmware UUID of a virtual machine. The
`VirtualMachine` makes sure that the UUID always stays the same
without the user having to take care of it.

One of the main benefits is that a user can still make use of defaulting
logic, although a stable ABI is needed.

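If you ever want to pin the firmware UUID explicitly instead of relying on the controller's defaulting, the domain spec also exposes it directly; a hypothetical fragment (the UUID value here is made up):

```yaml
spec:
  template:
    spec:
      domain:
        firmware:
          uuid: 5d307ca9-b3ef-428c-8861-06e72d69f223   # example value, choose your own
```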
### When config updates should be picked up on the next restart

Use a VirtualMachine if the VirtualMachineInstance configuration should
be modifiable inside the cluster, with changes picked up on the next
VirtualMachineInstance restart. This means that no hotplug is involved.

### When you want to let the cluster manage your individual VirtualMachineInstance

Kubernetes as a declarative system can help you to manage the
VirtualMachineInstance. You tell it that you want this
VirtualMachineInstance with your application running, and the
VirtualMachine will try to make sure it stays running.

> Note: The current belief is that if it is defined that the
> VirtualMachineInstance should be running, it should be running. This is
> different from many classical virtualization platforms, where VMs stay
> down if they were switched off. Restart policies may be added if needed.
> Please provide your use-case if you need this!

### Example

```
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
  labels:
    kubevirt.io/vm: vm-cirros
  name: vm-cirros
spec:
  running: false
  template:
    metadata:
      labels:
        kubevirt.io/vm: vm-cirros
    spec:
      domain:
        devices:
          disks:
          - disk:
              bus: virtio
            name: containerdisk
          - disk:
              bus: virtio
            name: cloudinitdisk
        machine:
          type: ""
        resources:
          requests:
            memory: 64M
      terminationGracePeriodSeconds: 0
      volumes:
      - name: containerdisk
        containerDisk:
          image: kubevirt/cirros-container-disk-demo:latest
      - cloudInitNoCloud:
          userDataBase64: IyEvYmluL3NoCgplY2hvICdwcmludGVkIGZyb20gY2xvdWQtaW5pdCB1c2VyZGF0YScK
        name: cloudinitdisk
```

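The `userDataBase64` value above is nothing more than base64-encoded cloud-init user data; decoding it reveals the small script the guest runs on boot:

```shell
# Decode the cloud-init payload embedded in the manifest above:
userdata='IyEvYmluL3NoCgplY2hvICdwcmludGVkIGZyb20gY2xvdWQtaW5pdCB1c2VyZGF0YScK'
decoded=$(printf '%s' "$userdata" | base64 -d)
printf '%s\n' "$decoded"
```

The decoded payload is a two-line `/bin/sh` script that echoes `printed from cloud-init userdata`.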
Saving this manifest into `vm.yaml` and submitting it to Kubernetes will
create the controller instance:

```
$ kubectl create -f vm.yaml
virtualmachine "vm-cirros" created
```

Since `spec.running` is set to `false`, no VirtualMachineInstance will be
created:

```
$ kubectl get vmis
No resources found.
```

|  | 
 | ||||||
|  | Let’s start the VirtualMachine: | ||||||
|  | 
 | ||||||
|  | ``` | ||||||
|  | $ virtctl start omv vm-cirros | ||||||
|  | ``` | ||||||
|  | 
 | ||||||
|  | As expected, a VirtualMachineInstance called `vm-cirros` got created: | ||||||
|  | 
 | ||||||
|  | ``` | ||||||
|  | $ kubectl describe vm vm-cirros | ||||||
|  | Name:         vm-cirros | ||||||
|  | Namespace:    default | ||||||
|  | Labels:       kubevirt.io/vm=vm-cirros | ||||||
|  | Annotations:  <none> | ||||||
|  | API Version:  kubevirt.io/v1alpha3 | ||||||
|  | Kind:         VirtualMachine | ||||||
|  | Metadata: | ||||||
|  |   Cluster Name:         | ||||||
|  |   Creation Timestamp:  2018-04-30T09:25:08Z | ||||||
|  |   Generation:          0 | ||||||
|  |   Resource Version:    6418 | ||||||
|  |   Self Link:           /apis/kubevirt.io/v1alpha3/namespaces/default/virtualmachines/vm-cirros | ||||||
|  |   UID:                 60043358-4c58-11e8-8653-525500d15501 | ||||||
|  | Spec: | ||||||
|  |   Running:  true | ||||||
|  |   Template: | ||||||
|  |     Metadata: | ||||||
|  |       Creation Timestamp:  <nil> | ||||||
|  |       Labels: | ||||||
|  |         Kubevirt . Io / Ovmi:  vm-cirros | ||||||
|  |     Spec: | ||||||
|  |       Domain: | ||||||
|  |         Devices: | ||||||
|  |           Disks: | ||||||
|  |             Disk: | ||||||
|  |               Bus:        virtio | ||||||
|  |             Name:         containerdisk | ||||||
|  |             Volume Name:  containerdisk | ||||||
|  |             Disk: | ||||||
|  |               Bus:        virtio | ||||||
|  |             Name:         cloudinitdisk | ||||||
|  |             Volume Name:  cloudinitdisk | ||||||
|  |         Machine: | ||||||
|  |           Type:   | ||||||
|  |         Resources: | ||||||
|  |           Requests: | ||||||
|  |             Memory:                      64M | ||||||
|  |       Termination Grace Period Seconds:  0 | ||||||
|  |       Volumes: | ||||||
|  |         Name:  containerdisk | ||||||
|  |         Registry Disk: | ||||||
|  |           Image:  kubevirt/cirros-registry-disk-demo:latest | ||||||
|  |         Cloud Init No Cloud: | ||||||
|  |           User Data Base 64:  IyEvYmluL3NoCgplY2hvICdwcmludGVkIGZyb20gY2xvdWQtaW5pdCB1c2VyZGF0YScK | ||||||
|  |         Name:                 cloudinitdisk | ||||||
|  | Status: | ||||||
|  |   Created:  true | ||||||
|  |   Ready:    true | ||||||
|  | Events: | ||||||
|  |   Type    Reason            Age   From                              Message | ||||||
|  |   ----    ------            ----  ----                              ------- | ||||||
|  |   Normal  SuccessfulCreate  15s   virtualmachine-controller  Created virtual machine: vm-cirros | ||||||
|  | ``` | ||||||
|  | 
 | ||||||
|  | ### Kubectl commandline interactions | ||||||
|  | 
 | ||||||
|  | Whenever you want to manipulate a VirtualMachine from the command | ||||||
|  | line, you can use the kubectl command. The following examples | ||||||
|  | demonstrate common interactions. | ||||||
|  | 
 | ||||||
|  | ``` | ||||||
|  |     # Define a virtual machine: | ||||||
|  |     kubectl create -f myvm.yaml | ||||||
|  | 
 | ||||||
|  |     # Start the virtual machine: | ||||||
|  |     kubectl patch virtualmachine myvm --type merge -p \ | ||||||
|  |         '{"spec":{"running":true}}' | ||||||
|  | 
 | ||||||
|  |     # Look at virtual machine status and associated events: | ||||||
|  |     kubectl describe virtualmachine myvm | ||||||
|  | 
 | ||||||
|  |     # Look at the now created virtual machine instance status and associated events: | ||||||
|  |     kubectl describe virtualmachineinstance myvm | ||||||
|  | 
 | ||||||
|  |     # Stop the virtual machine instance: | ||||||
|  |     kubectl patch virtualmachine myvm --type merge -p \ | ||||||
|  |         '{"spec":{"running":false}}' | ||||||
|  | 
 | ||||||
|  |     # Restart the virtual machine (you delete the instance!): | ||||||
|  |     kubectl delete virtualmachineinstance myvm | ||||||
|  | 
 | ||||||
|  |     # Implicit cascade delete (first deletes the virtual machine and then the virtual machine instance) | ||||||
|  |     kubectl delete virtualmachine myvm | ||||||
|  | 
 | ||||||
|  |     # Explicit cascade delete (first deletes the virtual machine and then the virtual machine instance) | ||||||
|  |     kubectl delete virtualmachine myvm --cascade=true | ||||||
|  | 
 | ||||||
|  |     # Orphan delete (The running virtual machine is only detached, not deleted) | ||||||
|  |     # Recreating the virtual machine would lead to the adoption of the virtual machine instance | ||||||
|  |     kubectl delete virtualmachine myvm --cascade=false | ||||||
|  | ``` | ||||||
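The start and stop commands above send the same JSON merge patch, differing only in the `spec.running` value. As a minimal sketch, that pattern can be wrapped in a small helper (the `vm_set_running` name and the `myvm` VM are illustrative assumptions; the helper only prints the command, so it runs without a configured cluster):

```shell
#!/bin/sh
# Hypothetical helper: print the kubectl merge-patch command that toggles
# spec.running for a given VirtualMachine. Echoing instead of executing
# keeps the sketch runnable without a cluster.
vm_set_running() {
    vm="$1"     # VirtualMachine name
    state="$2"  # true to start the VM, false to stop it
    echo kubectl patch virtualmachine "$vm" --type merge \
        -p "{\"spec\":{\"running\":$state}}"
}

vm_set_running myvm true
vm_set_running myvm false
```

Against a real cluster you would drop the `echo` so the patch is actually applied.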
|  | @ -0,0 +1,125 @@ | ||||||
|  | <?xml version="1.0" encoding="UTF-8" standalone="no"?> | ||||||
|  | <!-- Created with Inkscape (http://www.inkscape.org/) --> | ||||||
|  | 
 | ||||||
|  | <svg | ||||||
|  |    xmlns:dc="http://purl.org/dc/elements/1.1/" | ||||||
|  |    xmlns:cc="http://creativecommons.org/ns#" | ||||||
|  |    xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" | ||||||
|  |    xmlns:svg="http://www.w3.org/2000/svg" | ||||||
|  |    xmlns="http://www.w3.org/2000/svg" | ||||||
|  |    xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd" | ||||||
|  |    xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape" | ||||||
|  |    width="185.92097mm" | ||||||
|  |    height="44.051247mm" | ||||||
|  |    viewBox="0 0 658.77511 156.08709" | ||||||
|  |    id="svg3452" | ||||||
|  |    version="1.1" | ||||||
|  |    inkscape:version="0.91 r13725" | ||||||
|  |    sodipodi:docname="asciibinder_web_logo.svg"> | ||||||
|  |   <defs | ||||||
|  |      id="defs3454" /> | ||||||
|  |   <sodipodi:namedview | ||||||
|  |      id="base" | ||||||
|  |      pagecolor="#ffffff" | ||||||
|  |      bordercolor="#666666" | ||||||
|  |      borderopacity="1.0" | ||||||
|  |      inkscape:pageopacity="0.0" | ||||||
|  |      inkscape:pageshadow="2" | ||||||
|  |      inkscape:zoom="1.1545455" | ||||||
|  |      inkscape:cx="326.39545" | ||||||
|  |      inkscape:cy="-21.169069" | ||||||
|  |      inkscape:document-units="px" | ||||||
|  |      inkscape:current-layer="layer1" | ||||||
|  |      showgrid="false" | ||||||
|  |      fit-margin-top="4" | ||||||
|  |      fit-margin-left="4" | ||||||
|  |      fit-margin-right="4" | ||||||
|  |      fit-margin-bottom="3" | ||||||
|  |      inkscape:window-width="2560" | ||||||
|  |      inkscape:window-height="1409" | ||||||
|  |      inkscape:window-x="1920" | ||||||
|  |      inkscape:window-y="0" | ||||||
|  |      inkscape:window-maximized="1" /> | ||||||
|  |   <metadata | ||||||
|  |      id="metadata3457"> | ||||||
|  |     <rdf:RDF> | ||||||
|  |       <cc:Work | ||||||
|  |          rdf:about=""> | ||||||
|  |         <dc:format>image/svg+xml</dc:format> | ||||||
|  |         <dc:type | ||||||
|  |            rdf:resource="http://purl.org/dc/dcmitype/StillImage" /> | ||||||
|  |         <dc:title></dc:title> | ||||||
|  |       </cc:Work> | ||||||
|  |     </rdf:RDF> | ||||||
|  |   </metadata> | ||||||
|  |   <g | ||||||
|  |      inkscape:label="Layer 1" | ||||||
|  |      inkscape:groupmode="layer" | ||||||
|  |      id="layer1" | ||||||
|  |      transform="translate(-45.651797,-348.92496)"> | ||||||
|  |     <g | ||||||
|  |        id="g4305" | ||||||
|  |        transform="matrix(1.1624734,0,0,1.1624734,1406.9902,-264.01854)"> | ||||||
|  |       <path | ||||||
|  |          d="m -982.80622,621.21723 c 0.30277,0.75695 1.05972,1.21111 1.74098,1.21111 0.22708,0 0.37847,-0.0757 0.68125,-0.0757 0.90834,-0.37847 1.36251,-1.5139 0.98403,-2.42223 l -18.39384,-49.12593 c -0.22708,-0.68125 -0.98403,-1.13542 -1.81667,-1.13542 -0.75693,0 -1.51393,0.45417 -1.81663,1.13542 l -18.3939,49.12593 c -0.3785,0.90833 0.1514,2.04376 1.0597,2.42223 0.2271,0 0.5299,0.0757 0.6813,0.0757 0.7569,0 1.4382,-0.45416 1.741,-1.21111 l 4.6931,-12.48964 24.1466,0 4.69308,12.48964 z m -16.72856,-44.65993 10.67297,28.46124 -21.42159,0 10.74862,-28.46124 z" | ||||||
|  |          style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:medium;line-height:125%;font-family:Quicksand;-inkscape-font-specification:Quicksand;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1" | ||||||
|  |          id="path4231" | ||||||
|  |          inkscape:connector-curvature="0" /> | ||||||
|  |       <path | ||||||
|  |          d="m -970.73409,617.12971 c 3.02779,2.42223 7.72087,5.22294 13.70076,5.29863 3.63335,0 6.66114,-1.05972 9.23477,-2.95209 2.49792,-1.89237 4.23891,-4.3903 4.23891,-7.49379 0,-3.10348 -1.74099,-5.97989 -4.16322,-7.56948 -2.57362,-1.4382 -5.45002,-2.49793 -9.00768,-3.02779 l -0.15139,0 c -3.40627,-0.52986 -5.90419,-1.4382 -7.56948,-2.57362 -1.4382,-1.13543 -2.11945,-2.19515 -2.11945,-3.70905 0,-1.51389 0.83264,-3.02779 2.34653,-4.23891 1.5896,-1.28681 4.01183,-2.11945 7.03962,-2.11945 3.63335,0 6.35836,1.89237 9.08338,3.70904 0.83264,0.52987 1.96806,0.30278 2.49793,-0.52986 0.52986,-0.83264 0.30277,-1.96807 -0.52987,-2.57362 -2.8764,-1.81668 -6.28267,-4.01183 -11.05144,-4.01183 -3.78474,0 -6.88823,0.98403 -9.08338,2.49793 -2.34653,1.74098 -4.01182,4.3146 -4.01182,7.2667 0,2.8764 1.66529,5.29864 4.01182,6.66114 2.11946,1.5139 5.07156,2.27085 8.40213,2.87641 l 0.15139,0.0757 c 3.63335,0.60556 6.28267,1.5139 7.87226,2.72501 1.89237,1.58959 2.57362,2.80071 2.64931,4.54169 0,1.66529 -0.98403,3.33057 -2.8007,4.54169 -1.66529,1.36251 -4.16322,2.27084 -7.03962,2.27084 -4.69308,0 -8.7806,-2.42223 -11.35422,-4.3146 -0.83264,-0.68125 -1.96806,-0.52986 -2.57362,0.22708 -0.60556,0.75695 -0.52987,1.74099 0.22708,2.42224 z" | ||||||
|  |          style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:medium;line-height:125%;font-family:Quicksand;-inkscape-font-specification:Quicksand;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1" | ||||||
|  |          id="path4233" | ||||||
|  |          inkscape:connector-curvature="0" /> | ||||||
|  |       <path | ||||||
|  |          d="m -914.70574,622.42834 c 0,0 0,0 0,0 4.3903,0 8.55351,-1.51389 11.657,-3.93612 0.90834,-0.68126 1.05973,-1.81668 0.37847,-2.64932 -0.60555,-0.68126 -1.66528,-0.90834 -2.49792,-0.30278 -2.64932,2.04376 -5.97989,3.25487 -9.53755,3.25487 -8.62921,0 -15.44174,-6.66114 -15.44174,-14.98757 0,-8.17503 6.81253,-14.91187 15.44174,-14.91187 3.55766,0 6.88823,1.21111 9.53755,3.33057 0.83264,0.60556 1.89237,0.37847 2.49792,-0.30278 0.68126,-0.83264 0.52987,-1.96807 -0.22708,-2.49793 -3.25488,-2.57362 -7.41809,-4.08752 -11.80839,-4.08752 -10.59727,0 -19.07509,8.17504 -19.07509,18.46953 0,10.44589 8.47782,18.62092 19.07509,18.62092 z" | ||||||
|  |          style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:medium;line-height:125%;font-family:Quicksand;-inkscape-font-specification:Quicksand;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1" | ||||||
|  |          id="path4235" | ||||||
|  |          inkscape:connector-curvature="0" /> | ||||||
|  |       <path | ||||||
|  |          d="m -887.88612,620.61167 c 0,0.98403 0.83264,1.81667 1.81667,1.81667 1.05973,0 1.89237,-0.83264 1.89237,-1.81667 l 0,-33.4571 c 0,-1.05973 -0.83264,-1.81668 -1.89237,-1.81668 -0.98403,0 -1.81667,0.75695 -1.81667,1.81668 l 0,33.4571 z m 0,-43.82729 c 0,1.05973 0.83264,1.96806 1.81667,1.96806 1.05973,0 1.89237,-0.90833 1.89237,-1.96806 l 0,-1.74098 c 0,-0.98403 -0.83264,-1.81668 -1.89237,-1.81668 -0.98403,0 -1.81667,0.83265 -1.81667,1.81668 l 0,1.74098 z" | ||||||
|  |          style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:medium;line-height:125%;font-family:Quicksand;-inkscape-font-specification:Quicksand;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1" | ||||||
|  |          id="path4237" | ||||||
|  |          inkscape:connector-curvature="0" /> | ||||||
|  |       <path | ||||||
|  |          d="m -866.59697,620.61167 c 0,0.98403 0.83264,1.81667 1.81668,1.81667 1.05972,0 1.89237,-0.83264 1.89237,-1.81667 l 0,-33.4571 c 0,-1.05973 -0.83265,-1.81668 -1.89237,-1.81668 -0.98404,0 -1.81668,0.75695 -1.81668,1.81668 l 0,33.4571 z m 0,-43.82729 c 0,1.05973 0.83264,1.96806 1.81668,1.96806 1.05972,0 1.89237,-0.90833 1.89237,-1.96806 l 0,-1.74098 c 0,-0.98403 -0.83265,-1.81668 -1.89237,-1.81668 -0.98404,0 -1.81668,0.83265 -1.81668,1.81668 l 0,1.74098 z" | ||||||
|  |          style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:medium;line-height:125%;font-family:Quicksand;-inkscape-font-specification:Quicksand;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1" | ||||||
|  |          id="path4239" | ||||||
|  |          inkscape:connector-curvature="0" /> | ||||||
|  |       <path | ||||||
|  |          d="m -840.38764,622.42834 c 0.0757,0 0.0757,-0.0757 0.22709,-0.0757 0,0 0,0.0757 0.0757,0.0757 l 13.39798,0 c 8.93199,0 16.19869,-7.2667 16.19869,-16.27438 0,-5.7528 -3.10349,-10.82435 -7.64518,-13.70076 1.81668,-2.42223 2.87641,-5.37433 2.87641,-8.6292 0,-7.94796 -6.43406,-14.38202 -14.38202,-14.38202 l -10.44588,0 c -0.0757,0 -0.0757,0 -0.0757,0 -0.15139,0 -0.15139,0 -0.22709,0 -2.27084,0 -4.16321,1.81668 -4.16321,4.16322 l 0,44.65993 c 0,2.27084 1.89237,4.16321 4.16321,4.16321 z m 4.23891,-44.58423 6.50975,0 c 3.33058,0 5.97989,2.64931 5.97989,5.97989 0,3.33057 -2.64931,6.13127 -5.97989,6.13127 l -6.50975,0 0,-12.11116 z m 0,20.51329 9.46185,0 c 4.31461,0.0757 7.79657,3.48196 7.79657,7.79656 0,4.3903 -3.48196,7.79657 -7.79657,7.87226 l -9.46185,0 0,-15.66882 z" | ||||||
|  |          style="font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;font-size:medium;line-height:125%;font-family:Quicksand;-inkscape-font-specification:'Quicksand Bold';letter-spacing:0px;word-spacing:0px;fill:#ff580d;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1" | ||||||
|  |          id="path4241" | ||||||
|  |          inkscape:connector-curvature="0" /> | ||||||
|  |       <path | ||||||
|  |          d="m -795.23333,584.65664 c -2.27084,0 -4.08752,1.74098 -4.08752,4.16321 l 0,29.52098 c 0,2.27084 1.81668,4.08751 4.08752,4.08751 2.34654,0 4.08752,-1.81667 4.08752,-4.08751 l 0,-29.52098 c 0,-2.42223 -1.74098,-4.16321 -4.08752,-4.16321 z m 4.08752,-11.12714 c 0,-2.27084 -1.74098,-4.08752 -4.08752,-4.08752 -2.27084,0 -4.08752,1.81668 -4.08752,4.08752 l 0,1.58959 c 0,2.27085 1.81668,4.16322 4.08752,4.16322 2.34654,0 4.08752,-1.89237 4.08752,-4.16322 l 0,-1.58959 z" | ||||||
|  |          style="font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;font-size:medium;line-height:125%;font-family:Quicksand;-inkscape-font-specification:'Quicksand Bold';letter-spacing:0px;word-spacing:0px;fill:#ff580d;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1" | ||||||
|  |          id="path4243" | ||||||
|  |          inkscape:connector-curvature="0" /> | ||||||
|  |       <path | ||||||
|  |          d="m -762.36285,584.58094 c -2.9521,0 -5.75281,0.90834 -8.02365,2.34654 -0.68125,-1.4382 -2.11946,-2.34654 -3.70905,-2.34654 -2.27084,0 -4.01182,1.81668 -4.01182,4.01183 l 0,11.73269 c 0,0 0,0 0,0 l 0,18.09106 c 0,2.19515 1.74098,4.01182 4.01182,4.01182 1.96807,0 3.63335,-1.4382 4.01183,-3.25487 0.15139,-0.30278 0.15139,-0.52987 0.15139,-0.75695 l 0,-18.09106 c 0,-4.23891 3.33057,-7.64517 7.56948,-7.64517 4.3146,0 7.87226,3.40626 7.87226,7.64517 l 0,18.09106 c 0,2.19515 1.89237,4.01182 4.01182,4.01182 2.19515,0 3.93613,-1.81667 3.93613,-4.01182 l 0,-18.09106 c 0,-8.55351 -7.03962,-15.74452 -15.82021,-15.74452 z" | ||||||
|  |          style="font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;font-size:medium;line-height:125%;font-family:Quicksand;-inkscape-font-specification:'Quicksand Bold';letter-spacing:0px;word-spacing:0px;fill:#ff580d;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1" | ||||||
|  |          id="path4245" | ||||||
|  |          inkscape:connector-curvature="0" /> | ||||||
|  |       <path | ||||||
|  |          d="m -698.87382,603.58034 c 0,0 0,0 0,0 0,-0.0757 0,-0.0757 0,-0.0757 l 0,-29.97514 c 0,-2.19515 -1.81668,-4.08752 -4.08752,-4.08752 -2.34654,0 -4.16322,1.89237 -4.16322,4.08752 l 0,14.23063 c -2.80071,-1.96807 -6.43406,-3.17919 -10.1431,-3.17919 -10.2188,0 -18.39384,8.62921 -18.39384,18.9994 0,10.2188 8.17504,18.848 18.39384,18.848 3.78474,0 7.41809,-1.28681 10.29449,-3.25487 0.37848,1.81667 1.96807,3.25487 4.01183,3.25487 2.27084,0 4.08752,-1.89237 4.08752,-4.08751 l 0,-14.76049 z m -11.27853,-7.64518 c 1.89237,1.89237 3.02779,4.61739 3.02779,7.64518 0,2.8764 -1.13542,5.60141 -3.02779,7.49378 -1.89237,1.96807 -4.3146,3.10349 -7.11531,3.10349 -2.72501,0 -5.22294,-1.13542 -7.11531,-3.10349 -1.89237,-1.89237 -3.02779,-4.61738 -3.02779,-7.49378 0,-3.02779 1.13542,-5.75281 3.02779,-7.64518 1.89237,-1.96806 4.3903,-3.10348 7.11531,-3.10348 2.80071,0 5.22294,1.13542 7.11531,3.10348 z" | ||||||
|  |          style="font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;font-size:medium;line-height:125%;font-family:Quicksand;-inkscape-font-specification:'Quicksand Bold';letter-spacing:0px;word-spacing:0px;fill:#ff580d;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1" | ||||||
|  |          id="path4247" | ||||||
|  |          inkscape:connector-curvature="0" /> | ||||||
|  |       <path | ||||||
|  |          d="m -668.5557,622.42834 c 5.37433,0 9.84033,-1.58959 13.54937,-4.99585 1.74098,-1.58959 0.98403,-3.86044 0,-4.92016 -0.98403,-1.21112 -4.08752,-1.43821 -5.67711,0.22708 -1.89237,1.4382 -5.14724,2.11945 -7.87226,1.89237 -2.64932,-0.22708 -5.67711,-1.58959 -7.2667,-3.40627 -1.3625,-1.28681 -2.19515,-3.25487 -2.57362,-4.99585 l 24.07095,0 c 2.11945,0 3.70904,-1.21112 3.93612,-3.02779 0.0757,-0.15139 0.0757,-0.52987 0.0757,-0.68126 0,0 0,0 0,0 0,0 0,0 0,0 0,-0.15139 0,-0.30278 0,-0.37847 -0.60556,-10.2188 -8.40212,-17.5612 -18.24245,-17.5612 -10.37019,0 -18.39383,8.62921 -18.46953,18.9237 0.0757,10.2945 8.09934,18.84801 18.46953,18.9237 z m 0,-30.65639 c 6.58545,0.52986 10.44589,4.99586 11.20283,8.02365 l -21.42163,0 c 0.60556,-3.10349 3.78474,-7.72087 10.2188,-8.02365 z" | ||||||
|  |          style="font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;font-size:medium;line-height:125%;font-family:Quicksand;-inkscape-font-specification:'Quicksand Bold';letter-spacing:0px;word-spacing:0px;fill:#ff580d;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1" | ||||||
|  |          id="path4249" | ||||||
|  |          inkscape:connector-curvature="0" /> | ||||||
|  |       <path | ||||||
|  |          d="m -620.57349,584.65664 c -3.78474,-0.0757 -7.03962,1.28681 -9.53755,2.9521 -0.52986,0.37847 -1.13542,0.98403 -1.58959,1.51389 l 0,-0.52986 c 0,-2.19515 -1.89237,-3.93613 -4.16321,-3.93613 -2.19515,0 -4.01183,1.74098 -4.01183,3.93613 l 0,29.82375 c 0,2.19515 1.81668,4.01182 4.01183,4.01182 2.27084,0 4.16321,-1.81667 4.16321,-4.01182 l 0,-14.38201 c 0.37848,-1.05973 1.28681,-3.93613 3.10349,-6.35837 0.90834,-1.4382 2.04376,-2.72501 3.48196,-3.63335 1.28681,-0.98403 2.80071,-1.4382 4.54169,-1.4382 2.27084,0 4.01182,-1.89237 4.01182,-4.01182 0,-2.19515 -1.74098,-3.93613 -4.01182,-3.93613 z" | ||||||
|  |          style="font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;font-size:medium;line-height:125%;font-family:Quicksand;-inkscape-font-specification:'Quicksand Bold';letter-spacing:0px;word-spacing:0px;fill:#ff580d;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1" | ||||||
|  |          id="path4251" | ||||||
|  |          inkscape:connector-curvature="0" /> | ||||||
|  |     </g> | ||||||
|  |     <path | ||||||
|  |        id="path4327" | ||||||
|  |        d="m 86.702795,363.09819 c -14.8904,0 -26.8778,11.98735 -26.8778,26.87783 l 0,77.52672 c 0,14.89048 11.9874,26.87939 26.8778,26.87939 l 77.526795,0 c 14.8904,0 26.8794,-11.98891 26.8794,-26.87939 l 0,-77.52672 c 0,-14.89048 -11.989,-26.87783 -26.8794,-26.87783 l -77.526795,0 z m 11.4962,15.65637 c 6.010395,0 12.021995,2.28784 16.598295,6.86349 4.5733,4.57273 6.462,10.21255 7.1696,16.59986 0.7076,6.38731 0.314,13.60514 0.314,21.85894 a 1.6635808,1.6635808 0 0 1 -1.664,1.66407 c -10.7256,0 -18.0691,0.33714 -24.000295,-0.33124 -5.9312,-0.66839 -10.505,-2.47594 -14.9609,-6.54952 -9.6175,-8.79294 -9.2065,-24.0899 -0.053,-33.24211 4.5763,-4.57565 10.5863,-6.86349 16.5968,-6.86349 z m 54.534395,0 c 6.0104,-2e-5 12.0204,2.28784 16.5967,6.86349 9.1539,9.15224 9.5648,24.44915 -0.053,33.24211 -4.4558,4.07359 -9.0297,5.88113 -14.9609,6.54952 -5.9312,0.66838 -13.2763,0.33124 -24.0018,0.33124 a 1.663624,1.663624 0 0 1 -1.6625,-1.66407 c 0,-8.2538 -0.3937,-15.47162 0.314,-21.85894 0.7076,-6.38732 2.5963,-12.02712 7.1696,-16.59986 4.5762,-4.57566 10.5878,-6.86349 16.5983,-6.86349 z m -54.534395,3.30616 c -5.1522,0 -10.3044,1.97055 -14.245,5.91057 -7.8796,7.87939 -8.2133,20.97373 -0.053,28.43358 4.0094,3.66543 7.597,5.07969 13.0896,5.69865 5.164495,0.58199 12.212595,0.34931 21.978295,0.31868 0.042,-7.60428 0.3055,-14.29032 -0.3093,-19.83851 -0.6616,-5.97268 -2.2715,-10.66933 -6.2151,-14.6124 -3.9405,-3.94002 -9.0929,-5.91057 -14.245095,-5.91057 z m 54.534395,0 c -5.1522,0 -10.3047,1.97057 -14.2451,5.91057 -3.9436,3.94306 -5.5534,8.63973 -6.2151,14.6124 -0.6147,5.54819 -0.3511,12.23423 -0.3093,19.83851 9.7648,0.0306 16.814,0.26328 21.9783,-0.31868 5.4926,-0.61896 9.0802,-2.03322 13.0896,-5.69865 8.1604,-7.45983 7.8253,-20.55422 -0.055,-28.43358 -3.9404,-3.93943 -9.0917,-5.91057 -14.2435,-5.91057 z m -56.259695,49.59235 24.252995,0 a 1.663624,1.663624 0 0 1 1.6641,1.66249 l 0,22.71767 a 1.663624,1.663624 0 1 1 -3.3266,0 l 0,-18.9013 -38.450895,38.45095 a 1.663624,1.663624 
0 1 1 -2.3517,-2.35167 l 38.251595,-38.25157 -20.039495,0 a 1.663624,1.663624 0 1 1 0,-3.32657 z m 47.639495,0.002 0,0 c 3.918,-8e-4 7.237,0.0818 10.2026,0.41602 5.9312,0.66836 10.5051,2.47598 14.9609,6.54951 9.6181,8.79307 9.2051,24.08998 0.052,33.24211 -9.1526,9.15132 -24.0425,9.15133 -33.1951,0 -4.5732,-4.57269 -6.4619,-10.21256 -7.1696,-16.59986 -0.7076,-6.38731 -0.3139,-13.60514 -0.3139,-21.85894 a 1.663624,1.663624 0 0 1 1.6625,-1.66407 c 5.3628,0 9.8812,-0.084 13.7992,-0.0848 z m 0,3.33442 c -3.4084,0 -7.5795,0.0532 -12.1509,0.0675 -0.042,7.60488 -0.3053,14.29157 0.3093,19.84008 0.6618,5.97266 2.2716,10.6694 6.2151,14.6124 7.881,7.88003 20.609,7.88004 28.4901,0 7.8796,-7.87924 8.2117,-20.97363 0.052,-28.43358 -4.0094,-3.66536 -7.597,-5.07971 -13.0896,-5.69864 -2.746,-0.30939 -5.9536,-0.38774 -9.8243,-0.38776 l 0,0 z" | ||||||
|  |        style="opacity:1;fill:#ff580d;fill-opacity:1;stroke:none;stroke-width:4.13899994;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:4;stroke-opacity:1" | ||||||
|  |        inkscape:connector-curvature="0" /> | ||||||
|  |   </g> | ||||||
|  | </svg> | ||||||
|  | @ -0,0 +1,693 @@ | ||||||
|  | # Changelog | ||||||
|  | 
 | ||||||
|  | ## v0.26.0 | ||||||
|  | 
 | ||||||
|  | Released on: Fri Feb 7 09:40:07 2020 +0100 | ||||||
|  | 
 | ||||||
|  | -   Fix incorrect ownerReferences to avoid VMs getting GCed | ||||||
|  | 
 | ||||||
|  | -   Fixes for several tests | ||||||
|  | 
 | ||||||
|  | -   Fix greedy permissions around Secrets by delegating them to kubelet | ||||||
|  | 
 | ||||||
|  | -   Fix OOM infra pod by increasing its memory request | ||||||
|  | 
 | ||||||
|  | -   Clarify device support around live migrations | ||||||
|  | 
 | ||||||
|  | -   Support for an uninstall strategy to protect workloads during | ||||||
|  |     uninstallation | ||||||
|  | 
 | ||||||
|  | -   Support for more prometheus metrics and alert rules | ||||||
|  | 
 | ||||||
|  | -   Support for testing SRIOV connectivity in functional tests | ||||||
|  | 
 | ||||||
|  | -   Update Kubernetes client-go to 1.16.4 | ||||||
|  | 
 | ||||||
|  | -   FOSSA fixes and status | ||||||
|  | 
 | ||||||
|  | ## v0.25.0 | ||||||
|  | 
 | ||||||
|  | Released on: Mon Jan 13 20:37:15 2020 +0100 | ||||||
|  | 
 | ||||||
|  | -   CI: Support for Kubernetes 1.17 | ||||||
|  | 
 | ||||||
|  | -   Support emulator thread pinning | ||||||
|  | 
 | ||||||
|  | -   Support virtctl restart --force | ||||||
|  | 
 | ||||||
|  | -   Support virtctl migrate to trigger live migrations from the CLI | ||||||
|  | 
 | ||||||
|  | ## v0.24.0 | ||||||
|  | 
 | ||||||
|  | Released on: Tue Dec 3 15:34:34 2019 +0100 | ||||||
|  | 
 | ||||||
|  | -   CI: Support for Kubernetes 1.15 | ||||||
|  | 
 | ||||||
|  | -   CI: Support for Kubernetes 1.16 | ||||||
|  | 
 | ||||||
|  | -   Add and fix a couple of test cases | ||||||
|  | 
 | ||||||
|  | -   Support for pause and unpausing VMs | ||||||
|  | 
 | ||||||
|  | -   Update of libvirt to 5.6.0 | ||||||
|  | 
 | ||||||
|  | -   Fix bug related to parallel scraping of Prometheus endpoints | ||||||
|  | 
 | ||||||
|  | -   Fix to reliably test VNC | ||||||
|  | 
 | ||||||
|  | ## v0.23.0 | ||||||
|  | 
 | ||||||
|  | Released on: Mon Nov 4 16:42:54 2019 +0100 | ||||||
|  | 
 | ||||||
|  | -   Guest OS Information is available under the VMI status now | ||||||
|  | 
 | ||||||
|  | -   Updated to Go 1.12.8 and latest bazel | ||||||
|  | 
 | ||||||
|  | -   Updated go-yaml to v2.2.4, which fixes a DDoS vulnerability | ||||||
|  | 
 | ||||||
|  | -   Cleaned up and fixed CRD scheme registration | ||||||
|  | 
 | ||||||
|  | -   Several bugfixes | ||||||
|  | 
 | ||||||
|  | -   Many CI improvements (e.g. more logs in case of test failures) | ||||||
|  | 
 | ||||||
|  | ## v0.22.0 | ||||||
|  | 
 | ||||||
|  | Released on: Thu Oct 10 18:55:08 2019 +0200 | ||||||
|  | 
 | ||||||
|  | -   Support for Nvidia GPUs and vGPUs exposed by Nvidia Kubevirt Device | ||||||
|  |     Plugin. | ||||||
|  | 
 | ||||||
|  | -   VMIs now successfully start if they get a 0xfe prefixed MAC address | ||||||
|  |     assigned from the pod network | ||||||
|  | 
 | ||||||
|  | -   Removed dependency on host semanage in SELinux Permissive mode | ||||||
|  | 
 | ||||||
|  | -   Some changes as a result of entering the CNCF sandbox (DCO check, | ||||||
|  |     FOSSA check, best practice badge) | ||||||
|  | 
 | ||||||
|  | -   Many bug fixes and improvements in several areas | ||||||
|  | 
 | ||||||
|  | -   CI: Introduced an OKD 4 test lane | ||||||
|  | 
 | ||||||
|  | -   CI: Many improved tests, resulting in less flakiness | ||||||
|  | 
 | ||||||
|  | ## v0.21.0 | ||||||
|  | 
 | ||||||
|  | Released on: Mon Sep 9 09:59:08 2019 +0200 | ||||||
|  | 
 | ||||||
|  | -   CI: Support for Kubernetes 1.14 | ||||||
|  | 
 | ||||||
|  | -   Many bug fixes in several areas | ||||||
|  | 
 | ||||||
|  | -   Support for `virtctl migrate` | ||||||
|  | 
 | ||||||
|  | -   Support configurable number of controller threads | ||||||
|  | 
 | ||||||
|  | -   Support to opt-out of bridge binding for podnetwork | ||||||
|  | 
 | ||||||
|  | -   Support for OpenShift Prometheus monitoring | ||||||
|  | 
 | ||||||
|  | -   Support for setting more SMBIOS fields | ||||||
|  | 
 | ||||||
|  | -   Improved containerDisk memory usage and speed | ||||||
|  | 
 | ||||||
|  | -   Fix CRI-O memory limit | ||||||
|  | 
 | ||||||
|  | -   Drop spc_t from launcher | ||||||
|  | 
 | ||||||
|  | -   Add feature gates to security sensitive features | ||||||
|  | 
 | ||||||
|  | ## v0.20.0 | ||||||
|  | 
 | ||||||
|  | Released on: Fri Aug 9 16:42:41 2019 +0200 | ||||||
|  | 
 | ||||||
|  | -   Containerdisks are now secure and are no longer copied on every | ||||||
|  |     start. Old containerdisks can still be used in the same secure | ||||||
|  |     way, but new containerdisks can’t be used on older KubeVirt releases | ||||||
|  | 
 | ||||||
|  | -   Create specific SecurityContextConstraints on OKD instead of using | ||||||
|  |     the privileged SCC | ||||||
|  | 
 | ||||||
|  | -   Added clone authorization check for DataVolumes with PVC source | ||||||
|  | 
 | ||||||
|  | -   The sidecar feature is feature-gated now | ||||||
|  | 
 | ||||||
|  | -   Use container image shasums instead of tags for KubeVirt deployments | ||||||
|  | 
 | ||||||
|  | -   Protect control plane components against voluntary evictions with a | ||||||
|  |     PodDisruptionBudget of MinAvailable=1 | ||||||
|  | 
 | ||||||
|  | -   Replaced hardcoded `virtctl` with the basename of the call; this | ||||||
|  |     enables nicer output when installed via the krew plugin package manager | ||||||
|  | 
 | ||||||
|  | -   Added RNG device to all Fedora VMs in tests and examples (newer | ||||||
|  |     kernels might block booting while waiting for entropy) | ||||||
|  | 
 | ||||||
|  | -   The virtual memory is now set to match the memory limit, if memory | ||||||
|  |     limit is specified and guest memory is not | ||||||
|  | 
 | ||||||
|  | -   Support nftables for CoreOS | ||||||
|  | 
 | ||||||
|  | -   Added a block-volume flag to the virtctl image-upload command | ||||||
|  | 
 | ||||||
|  | -   Improved virtctl console/vnc data flow | ||||||
|  | 
 | ||||||
|  | -   Removed DataVolumes feature gate in favor of auto-detecting CDI | ||||||
|  |     support | ||||||
|  | 
 | ||||||
|  | -   Removed SR-IOV feature gate, it is enabled by default now | ||||||
|  | 
 | ||||||
|  | -   VMI-related metrics have been renamed from `kubevirt_vm_` to | ||||||
|  |     `kubevirt_vmi_` to better reflect their purpose | ||||||
|  | 
 | ||||||
|  | -   Added metric to report the VMI count | ||||||
|  | 
 | ||||||
|  | -   Improved integration with HCO by adding a CSV generator tool and | ||||||
|  |     modified KubeVirt CR conditions | ||||||
|  | 
 | ||||||
|  | -   CI Improvements: | ||||||
|  | 
 | ||||||
|  | -   Added dedicated SR-IOV test lane | ||||||
|  | 
 | ||||||
|  | -   Improved log gathering | ||||||
|  | 
 | ||||||
|  | -   Reduced amount of flaky tests | ||||||
|  | 
 | ||||||
|  | ## v0.19.0 | ||||||
|  | 
 | ||||||
|  | Released on: Fri Jul 5 12:52:16 2019 +0200 | ||||||
|  | 
 | ||||||
|  | -   Fixes when run on kind | ||||||
|  | 
 | ||||||
|  | -   Fixes for sub-resource RBAC | ||||||
|  | 
 | ||||||
|  | -   Limit pod network interface bindings | ||||||
|  | 
 | ||||||
|  | -   Many additional bug fixes in many areas | ||||||
|  | 
 | ||||||
|  | -   Additional testcases for updates, disk types, live migration with | ||||||
|  |     NFS | ||||||
|  | 
 | ||||||
|  | -   Additional testcases for memory over-commit, block storage, cpu | ||||||
|  |     manager, headless mode | ||||||
|  | 
 | ||||||
|  | -   Improvements around HyperV | ||||||
|  | 
 | ||||||
|  | -   Improved error handling for runStrategies | ||||||
|  | 
 | ||||||
|  | -   Improved update procedure | ||||||
|  | 
 | ||||||
|  | -   Improved network metrics reporting (packets and errors) | ||||||
|  | 
 | ||||||
|  | -   Improved guest overhead calculation | ||||||
|  | 
 | ||||||
|  | -   Improved SR-IOV testsuite | ||||||
|  | 
 | ||||||
|  | -   Support for live migration auto-converge | ||||||
|  | 
 | ||||||
|  | -   Support for config-drive disks | ||||||
|  | 
 | ||||||
|  | -   Support for setting a pullPolicy on containerDisks | ||||||
|  | 
 | ||||||
|  | -   Support for unprivileged VMs when using SR-IOV | ||||||
|  | 
 | ||||||
|  | -   Introduction of a project security policy | ||||||
|  | 
 | ||||||
|  | ## v0.18.0 | ||||||
|  | 
 | ||||||
|  | Released on: Wed Jun 5 22:25:09 2019 +0200 | ||||||
|  | 
 | ||||||
|  | -   Build: Use of go modules | ||||||
|  | 
 | ||||||
|  | -   CI: Support for Kubernetes 1.13 | ||||||
|  | 
 | ||||||
|  | -   Countless testcase fixes and additions | ||||||
|  | 
 | ||||||
|  | -   Several smaller bug fixes | ||||||
|  | 
 | ||||||
|  | -   Improved upgrade documentation | ||||||
|  | 
 | ||||||
|  | ## v0.17.0 | ||||||
|  | 
 | ||||||
|  | Released on: Mon May 6 16:18:01 2019 +0200 | ||||||
|  | 
 | ||||||
|  | -   Several testcase additions | ||||||
|  | 
 | ||||||
|  | -   Improved virt-controller node distribution | ||||||
|  | 
 | ||||||
|  | -   Improved support between version migrations | ||||||
|  | 
 | ||||||
|  | -   Support for a configurable MachineType default | ||||||
|  | 
 | ||||||
|  | -   Support for live-migration of a VM on node taints | ||||||
|  | 
 | ||||||
|  | -   Support for VM swap metrics | ||||||
|  | 
 | ||||||
|  | -   Support for versioned virt-launcher / virt-handler communication | ||||||
|  | 
 | ||||||
|  | -   Support for HyperV flags | ||||||
|  | 
 | ||||||
|  | -   Support for different VM run strategies (i.e. manual and | ||||||
|  |     rerunOnFailure) | ||||||
|  | 
 | ||||||
|  | -   Several fixes for live-migration (TLS support, protected pods) | ||||||
|  | 
 | ||||||
|  | ## v0.16.0 | ||||||
|  | 
 | ||||||
|  | Released on: Fri Apr 5 23:18:22 2019 +0200 | ||||||
|  | 
 | ||||||
|  | -   Bazel fixes | ||||||
|  | 
 | ||||||
|  | -   Initial work to support upgrades (not finalized) | ||||||
|  | 
 | ||||||
|  | -   Initial support for HyperV features | ||||||
|  | 
 | ||||||
|  | -   Support propagation of MAC addresses to multus | ||||||
|  | 
 | ||||||
|  | -   Support live migration cancellation | ||||||
|  | 
 | ||||||
|  | -   Support for tablet input devices | ||||||
|  | 
 | ||||||
|  | -   Support for generating OLM metadata | ||||||
|  | 
 | ||||||
|  | -   Support for triggering VM live migration on node taints | ||||||
|  | 
 | ||||||
|  | \#\# v0.15.0 | ||||||
|  | 
 | ||||||
|  | Released on: Tue Mar 5 10:35:08 2019 +0100 | ||||||
|  | 
 | ||||||
|  | -   CI: Several fixes | ||||||
|  | 
 | ||||||
|  | -   Fix configurable number of KVM devices | ||||||
|  | 
 | ||||||
|  | -   Narrow virt-handler permissions | ||||||
|  | 
 | ||||||
|  | -   Use bazel for development builds | ||||||
|  | 
 | ||||||
|  | -   Support for live migration with shared and non-shared disks | ||||||
|  | 
 | ||||||
|  | -   Support for live migration progress tracking | ||||||
|  | 
 | ||||||
|  | -   Support for EFI boot | ||||||
|  | 
 | ||||||
|  | -   Support for libvirt 5.0 | ||||||
|  | 
 | ||||||
|  | -   Support for extra DHCP options | ||||||
|  | 
 | ||||||
|  | -   Support for a hook to manipulate cloud-init metadata | ||||||
|  | 
 | ||||||
|  | -   Support setting a VM serial number | ||||||
|  | 
 | ||||||
|  | -   Support for exposing infra and VM metrics | ||||||
|  | 
 | ||||||
|  | -   Support for a tablet input device | ||||||
|  | 
 | ||||||
|  | -   Support for extra CPU flags | ||||||
|  | 
 | ||||||
|  | -   Support for ignition metadata | ||||||
|  | 
 | ||||||
|  | -   Support to set a default CPU model | ||||||
|  | 
 | ||||||
|  | -   Update to go 1.11.5 | ||||||
|  | 
 | ||||||
|  | \#\# v0.14.0 | ||||||
|  | 
 | ||||||
|  | Released on: Mon Feb 4 22:04:14 2019 +0100 | ||||||
|  | 
 | ||||||
|  | -   CI: Several stabilizing fixes | ||||||
|  | 
 | ||||||
|  | -   docs: Document the KubeVirt Razor | ||||||
|  | 
 | ||||||
|  | -   build: golang update | ||||||
|  | 
 | ||||||
|  | -   Update to Kubernetes 1.12 | ||||||
|  | 
 | ||||||
|  | -   Update CDI | ||||||
|  | 
 | ||||||
|  | -   Support for Ready and Created Operator conditions | ||||||
|  | 
 | ||||||
|  | -   Support (basic) EFI | ||||||
|  | 
 | ||||||
|  | -   Support for generating cloud-init network-config | ||||||
|  | 
 | ||||||
|  | \#\# v0.13.0 | ||||||
|  | 
 | ||||||
|  | Released on: Tue Jan 15 08:26:25 2019 +0100 | ||||||
|  | 
 | ||||||
|  | -   CI: Fix virt-api race | ||||||
|  | 
 | ||||||
|  | -   API: Remove volumeName from disks | ||||||
|  | 
 | ||||||
|  | \#\# v0.12.0 | ||||||
|  | 
 | ||||||
|  | Released on: Fri Jan 11 22:22:02 2019 +0100 | ||||||
|  | 
 | ||||||
|  | -   Introduce a KubeVirt Operator for KubeVirt life-cycle management | ||||||
|  | 
 | ||||||
|  | -   Introduce dedicated kubevirt namespace | ||||||
|  | 
 | ||||||
|  | -   Support VMI ready conditions | ||||||
|  | 
 | ||||||
|  | -   Support vCPU threads and sockets | ||||||
|  | 
 | ||||||
|  | -   Support scale and HPA for VMIRS | ||||||
|  | 
 | ||||||
|  | -   Support to pass NTP related DHCP options | ||||||
|  | 
 | ||||||
|  | -   Support guest IP address reporting via qemu guest agent | ||||||
|  | 
 | ||||||
|  | -   Support for live migration with shared storage | ||||||
|  | 
 | ||||||
|  | -   Support scheduling of VMs based on CPU family | ||||||
|  | 
 | ||||||
|  | -   Support masquerade network interface binding | ||||||
|  | 
 | ||||||
|  | \#\# v0.11.0 | ||||||
|  | 
 | ||||||
|  | Released on: Thu Dec 6 10:15:51 2018 +0100 | ||||||
|  | 
 | ||||||
|  | -   API: registryDisk got renamed to containerDisk | ||||||
|  | 
 | ||||||
|  | -   CI: Use OKD 3.11 | ||||||
|  | 
 | ||||||
|  | -   Fix: Tolerate if the PVC has less capacity than expected | ||||||
|  | 
 | ||||||
|  | -   Aligned to use ownerReferences | ||||||
|  | 
 | ||||||
|  | -   Update to libvirt-4.10.0 | ||||||
|  | 
 | ||||||
|  | -   Support for VNC on Mac OS X | ||||||
|  | 
 | ||||||
|  | -   Support for network SR-IOV interfaces | ||||||
|  | 
 | ||||||
|  | -   Support for custom DHCP options | ||||||
|  | 
 | ||||||
|  | -   Support for VM restarts via a custom endpoint | ||||||
|  | 
 | ||||||
|  | -   Support for liveness and readiness probes | ||||||
|  | 
 | ||||||
|  | \#\# v0.10.0 | ||||||
|  | 
 | ||||||
|  | Released on: Thu Nov 8 15:21:34 2018 +0100 | ||||||
|  | 
 | ||||||
|  | -   Support for vhost-net | ||||||
|  | 
 | ||||||
|  | -   Support for block multi-queue | ||||||
|  | 
 | ||||||
|  | -   Support for custom PCI addresses for virtio devices | ||||||
|  | 
 | ||||||
|  | -   Support for deploying KubeVirt to a custom namespace | ||||||
|  | 
 | ||||||
|  | -   Support for ServiceAccount token disks | ||||||
|  | 
 | ||||||
|  | -   Support for multus backed networks | ||||||
|  | 
 | ||||||
|  | -   Support for genie backed networks | ||||||
|  | 
 | ||||||
|  | -   Support for kuryr backed networks | ||||||
|  | 
 | ||||||
|  | -   Support for block PVs | ||||||
|  | 
 | ||||||
|  | -   Support for configurable disk device caches | ||||||
|  | 
 | ||||||
|  | -   Support for pinned IO threads | ||||||
|  | 
 | ||||||
|  | -   Support for virtio net multi-queue | ||||||
|  | 
 | ||||||
|  | -   Support for image upload (depending on CDI) | ||||||
|  | 
 | ||||||
|  | -   Support for custom entity lists with more VM details (custom | ||||||
|  |     columns) | ||||||
|  | 
 | ||||||
|  | -   Support for IP and MAC address reporting of all vNICs | ||||||
|  | 
 | ||||||
|  | -   Basic support for guest agent status reporting | ||||||
|  | 
 | ||||||
|  | -   More structured logging | ||||||
|  | 
 | ||||||
|  | -   Better libvirt error reporting | ||||||
|  | 
 | ||||||
|  | -   Stricter CR validation | ||||||
|  | 
 | ||||||
|  | -   Better ownership references | ||||||
|  | 
 | ||||||
|  | -   Several test improvements | ||||||
|  | 
 | ||||||
|  | \#\# v0.9.0 | ||||||
|  | 
 | ||||||
|  | Released on: Thu Oct 4 14:42:28 2018 +0200 | ||||||
|  | 
 | ||||||
|  | -   CI: NetworkPolicy tests | ||||||
|  | 
 | ||||||
|  | -   CI: Support for an external provider (use a preconfigured cluster | ||||||
|  |     for tests) | ||||||
|  | 
 | ||||||
|  | -   Fix virtctl console issues with CRI-O | ||||||
|  | 
 | ||||||
|  | -   Support to initialize empty PVs | ||||||
|  | 
 | ||||||
|  | -   Support for basic CPU pinning | ||||||
|  | 
 | ||||||
|  | -   Support for setting IO Threads | ||||||
|  | 
 | ||||||
|  | -   Support for block volumes | ||||||
|  | 
 | ||||||
|  | -   Move preset logic to mutating webhook | ||||||
|  | 
 | ||||||
|  | -   Introduce basic metrics reporting using prometheus metrics | ||||||
|  | 
 | ||||||
|  | -   Many stabilizing fixes in many places | ||||||
|  | 
 | ||||||
|  | \#\# v0.8.0 | ||||||
|  | 
 | ||||||
|  | Released on: Thu Sep 6 14:25:22 2018 +0200 | ||||||
|  | 
 | ||||||
|  | -   Support for DataVolume | ||||||
|  | 
 | ||||||
|  | -   Support for a subprotocol for webbrowser terminals | ||||||
|  | 
 | ||||||
|  | -   Support for virtio-rng | ||||||
|  | 
 | ||||||
|  | -   Support disconnected VMs | ||||||
|  | 
 | ||||||
|  | -   Support for setting host model | ||||||
|  | 
 | ||||||
|  | -   Support for host CPU passthrough | ||||||
|  | 
 | ||||||
|  | -   Support setting a vNICs mac and PCI address | ||||||
|  | 
 | ||||||
|  | -   Support for memory over-commit | ||||||
|  | 
 | ||||||
|  | -   Support booting from network devices | ||||||
|  | 
 | ||||||
|  | -   Use fewer devices by default, i.e. disable unused ones | ||||||
|  | 
 | ||||||
|  | -   Improved VMI shutdown status | ||||||
|  | 
 | ||||||
|  | -   More logging to improve debuggability | ||||||
|  | 
 | ||||||
|  | -   A lot of small fixes, including typos and documentation fixes | ||||||
|  | 
 | ||||||
|  | -   Race detection in tests | ||||||
|  | 
 | ||||||
|  | -   Hook improvements | ||||||
|  | 
 | ||||||
|  | -   Update to use Fedora 28 (includes updates of dependencies like | ||||||
|  |     libvirt and qemu) | ||||||
|  | 
 | ||||||
|  | -   Move CI to support Kubernetes 1.11 | ||||||
|  | 
 | ||||||
|  | \#\# v0.7.0 | ||||||
|  | 
 | ||||||
|  | Released on: Wed Jul 4 17:41:33 2018 +0200 | ||||||
|  | 
 | ||||||
|  | -   CI: Move test storage to hostPath | ||||||
|  | 
 | ||||||
|  | -   CI: Add support for Kubernetes 1.10.4 | ||||||
|  | 
 | ||||||
|  | -   CI: Improved network tests for multiple-interfaces | ||||||
|  | 
 | ||||||
|  | -   CI: Drop Origin 3.9 support | ||||||
|  | 
 | ||||||
|  | -   CI: Add test for testing templates on Origin | ||||||
|  | 
 | ||||||
|  | -   VM to VMI rename | ||||||
|  | 
 | ||||||
|  | -   VM affinity and anti-affinity | ||||||
|  | 
 | ||||||
|  | -   Add awareness for multiple networks | ||||||
|  | 
 | ||||||
|  | -   Add hugepage support | ||||||
|  | 
 | ||||||
|  | -   Add device-plugin based kvm | ||||||
|  | 
 | ||||||
|  | -   Add support for setting the network interface model | ||||||
|  | 
 | ||||||
|  | -   Add (basic and initial) Kubernetes compatible networking approach | ||||||
|  |     (SLIRP) | ||||||
|  | 
 | ||||||
|  | -   Add role aggregation for our roles | ||||||
|  | 
 | ||||||
|  | -   Add support for setting a disk's serial number | ||||||
|  | 
 | ||||||
|  | -   Add support for specifying the CPU model | ||||||
|  | 
 | ||||||
|  | -   Add support for setting a network interface's MAC address | ||||||
|  | 
 | ||||||
|  | -   Relocate binaries for FHS conformance | ||||||
|  | 
 | ||||||
|  | -   Logging improvements | ||||||
|  | 
 | ||||||
|  | -   Template fixes | ||||||
|  | 
 | ||||||
|  | -   Fix OpenShift CRD validation | ||||||
|  | 
 | ||||||
|  | -   virtctl: Improve VNC logging | ||||||
|  | 
 | ||||||
|  | -   virtctl: Add expose | ||||||
|  | 
 | ||||||
|  | -   virtctl: Use PATCH instead of PUT | ||||||
|  | 
 | ||||||
|  | \#\# v0.6.0 | ||||||
|  | 
 | ||||||
|  | Released on: Mon Jun 11 09:30:28 2018 +0200 | ||||||
|  | 
 | ||||||
|  | -   A range of flakiness-reducing test fixes | ||||||
|  | 
 | ||||||
|  | -   Vagrant setup got deprecated | ||||||
|  | 
 | ||||||
|  | -   Updated Docker and CentOS versions | ||||||
|  | 
 | ||||||
|  | -   Add Kubernetes 1.10.3 to test matrix | ||||||
|  | 
 | ||||||
|  | -   A couple of ginkgo concurrency fixes | ||||||
|  | 
 | ||||||
|  | -   A couple of spelling fixes | ||||||
|  | 
 | ||||||
|  | -   A range of infra updates | ||||||
|  | 
 | ||||||
|  | -   Use /dev/kvm if possible, otherwise fallback to emulation | ||||||
|  | 
 | ||||||
|  | -   Add default view/edit/admin RBAC Roles | ||||||
|  | 
 | ||||||
|  | -   Network MTU fixes | ||||||
|  | 
 | ||||||
|  | -   CDRom drives are now read-only | ||||||
|  | 
 | ||||||
|  | -   Secrets can now be correctly referenced on VMs | ||||||
|  | 
 | ||||||
|  | -   Add disk boot ordering | ||||||
|  | 
 | ||||||
|  | -   Add virtctl version | ||||||
|  | 
 | ||||||
|  | -   Add virtctl expose | ||||||
|  | 
 | ||||||
|  | -   Fix virtual machine memory calculations | ||||||
|  | 
 | ||||||
|  | -   Add basic virtual machine Network API | ||||||
|  | 
 | ||||||
|  | \#\# v0.5.0 | ||||||
|  | 
 | ||||||
|  | Released on: Fri May 4 18:25:32 2018 +0200 | ||||||
|  | 
 | ||||||
|  | -   Better controller health signaling | ||||||
|  | 
 | ||||||
|  | -   Better virtctl error messages | ||||||
|  | 
 | ||||||
|  | -   Improvements to enable CRI-O support | ||||||
|  | 
 | ||||||
|  | -   Run CI on stable OpenShift | ||||||
|  | 
 | ||||||
|  | -   Add test coverage for multiple PVCs | ||||||
|  | 
 | ||||||
|  | -   Improved controller life-cycle guarantees | ||||||
|  | 
 | ||||||
|  | -   Add Webhook validation | ||||||
|  | 
 | ||||||
|  | -   Add tests coverage for node eviction | ||||||
|  | 
 | ||||||
|  | -   OfflineVirtualMachine status improvements | ||||||
|  | 
 | ||||||
|  | -   RegistryDisk API update | ||||||
|  | 
 | ||||||
|  | \#\# v0.4.0 | ||||||
|  | 
 | ||||||
|  | Released on: Fri Apr 6 16:40:31 2018 +0200 | ||||||
|  | 
 | ||||||
|  | -   Fix several networking issues | ||||||
|  | 
 | ||||||
|  | -   Add and enable OpenShift support to CI | ||||||
|  | 
 | ||||||
|  | -   Add conditional Windows tests (if an image is present) | ||||||
|  | 
 | ||||||
|  | -   Add subresources for console access | ||||||
|  | 
 | ||||||
|  | -   virtctl config alignment with kubectl | ||||||
|  | 
 | ||||||
|  | -   Fix API reference generation | ||||||
|  | 
 | ||||||
|  | -   Stable UUIDs for OfflineVirtualMachines | ||||||
|  | 
 | ||||||
|  | -   Build virtctl for macOS and Windows | ||||||
|  | 
 | ||||||
|  | -   Set default architecture to x86\_64 | ||||||
|  | 
 | ||||||
|  | -   Major improvement to the CI infrastructure (all containerized) | ||||||
|  | 
 | ||||||
|  | -   virtctl convenience functions for starting and stopping a VM | ||||||
|  | 
 | ||||||
|  | \#\# v0.3.0 | ||||||
|  | 
 | ||||||
|  | Released on: Thu Mar 8 10:21:57 2018 +0100 | ||||||
|  | 
 | ||||||
|  | -   Kubernetes compatible networking | ||||||
|  | 
 | ||||||
|  | -   Kubernetes compatible PV based storage | ||||||
|  | 
 | ||||||
|  | -   VirtualMachinePresets support | ||||||
|  | 
 | ||||||
|  | -   OfflineVirtualMachine support | ||||||
|  | 
 | ||||||
|  | -   RBAC improvements | ||||||
|  | 
 | ||||||
|  | -   Switch to q35 machine type by default | ||||||
|  | 
 | ||||||
|  | -   A large number of test and CI fixes | ||||||
|  | 
 | ||||||
|  | -   Ephemeral disk support | ||||||
|  | 
 | ||||||
|  | \#\# v0.2.0 | ||||||
|  | 
 | ||||||
|  | Released on: Fri Jan 5 16:30:45 2018 +0100 | ||||||
|  | 
 | ||||||
|  | -   VM launch and shutdown flow improvements | ||||||
|  | 
 | ||||||
|  | -   VirtualMachine API redesign | ||||||
|  | 
 | ||||||
|  | -   Removal of HAProxy | ||||||
|  | 
 | ||||||
|  | -   Redesign of VNC/Console access | ||||||
|  | 
 | ||||||
|  | -   Initial support for different vagrant providers | ||||||
|  | 
 | ||||||
|  | \#\# v0.1.0 | ||||||
|  | 
 | ||||||
|  | Released on: Fri Dec 8 20:43:06 2017 +0100 | ||||||
|  | 
 | ||||||
|  | -   Many API improvements for a proper OpenAPI reference | ||||||
|  | 
 | ||||||
|  | -   Add watchdog support | ||||||
|  | 
 | ||||||
|  | -   Drastically improve the deployment on non-vagrant setups | ||||||
|  | 
 | ||||||
|  | -   Dropped nodeSelectors | ||||||
|  | 
 | ||||||
|  | -   Separated inner component deployment from edge component deployment | ||||||
|  | 
 | ||||||
|  | -   Created separate manifests for developer, test, and release | ||||||
|  |     deployments | ||||||
|  | 
 | ||||||
|  | -   Moved components to kube-system namespace | ||||||
|  | 
 | ||||||
|  | -   Improved and unified flag parsing | ||||||
|  | Startup Scripts | ||||||
|  | =============== | ||||||
|  | 
 | ||||||
|  | Overview | ||||||
|  | -------- | ||||||
|  | 
 | ||||||
|  | KubeVirt supports the ability to assign a startup script to a | ||||||
|  | VirtualMachineInstance which is executed automatically when the | ||||||
|  | VM initializes. | ||||||
|  | 
 | ||||||
|  | These scripts are commonly used to automate injection of users and SSH | ||||||
|  | keys into VMs in order to provide remote access to the machine. For | ||||||
|  | example, a startup script can be used to inject credentials into a VM | ||||||
|  | that allows an Ansible job running on a remote host to access and | ||||||
|  | provision the VM. | ||||||
|  | 
 | ||||||
|  | Startup scripts are not limited to any specific use case though. They | ||||||
|  | can be used to run any arbitrary script in a VM on boot. | ||||||
|  | 
 | ||||||
|  | ### Cloud-init | ||||||
|  | 
 | ||||||
|  | cloud-init is a widely adopted project used for early initialization of | ||||||
|  | a VM. Used by cloud providers such as AWS and GCP, cloud-init has | ||||||
|  | established itself as the de facto method of providing startup scripts to | ||||||
|  | VMs. | ||||||
|  | 
 | ||||||
|  | Cloud-init documentation can be found here: [Cloud-init | ||||||
|  | Documentation](https://cloudinit.readthedocs.io/en/latest/). | ||||||
|  | 
 | ||||||
|  | KubeVirt supports cloud-init’s “NoCloud” and “ConfigDrive” datasources | ||||||
|  | which involve injecting startup scripts into a VM instance through the | ||||||
|  | use of an ephemeral disk. VMs with the cloud-init package installed will | ||||||
|  | detect the ephemeral disk and execute custom userdata scripts at boot. | ||||||
|  | 
 | ||||||
|  | ### Sysprep | ||||||
|  | 
 | ||||||
|  | Sysprep is an automation tool for Windows that automates Windows | ||||||
|  | installation, setup, and custom software provisioning. | ||||||
|  | 
 | ||||||
|  | **Sysprep support is currently not implemented by KubeVirt.** However it | ||||||
|  | is a feature the KubeVirt upstream community has shown interest in. As a | ||||||
|  | result, it is likely Sysprep support will make its way into a future | ||||||
|  | KubeVirt release. | ||||||
|  | 
 | ||||||
|  | Cloud-init Examples | ||||||
|  | ------------------- | ||||||
|  | 
 | ||||||
|  | ### User Data | ||||||
|  | 
 | ||||||
|  | KubeVirt supports the cloud-init NoCloud and ConfigDrive data sources | ||||||
|  | which involve injecting startup scripts through the use of a disk | ||||||
|  | attached to the VM. | ||||||
|  | 
 | ||||||
|  | In order to assign a custom userdata script to a VirtualMachineInstance | ||||||
|  | using this method, users must define a disk and a volume for the NoCloud | ||||||
|  | or ConfigDrive datasource in the VirtualMachineInstance’s spec. | ||||||
|  | 
 | ||||||
|  | #### Data Sources | ||||||
|  | 
 | ||||||
|  | Under most circumstances users should stick to the NoCloud data source, | ||||||
|  | as it is the simplest cloud-init data source. Users should switch to | ||||||
|  | ConfigDrive only if NoCloud is not supported by the cloud-init | ||||||
|  | implementation (e.g. | ||||||
|  | [coreos-cloudinit](https://github.com/coreos/coreos-cloudinit)). | ||||||
|  | 
 | ||||||
|  | Switching the cloud-init data source to ConfigDrive is as easy as | ||||||
|  | changing the volume type in the VirtualMachineInstance’s spec from | ||||||
|  | `cloudInitNoCloud` to `cloudInitConfigDrive`. | ||||||
|  | 
 | ||||||
|  | NoCloud data source: | ||||||
|  | 
 | ||||||
|  |     volumes: | ||||||
|  |       - name: cloudinitvolume | ||||||
|  |         cloudInitNoCloud: | ||||||
|  |           userData: "#cloud-config" | ||||||
|  | 
 | ||||||
|  | ConfigDrive data source: | ||||||
|  | 
 | ||||||
|  |     volumes: | ||||||
|  |       - name: cloudinitvolume | ||||||
|  |         cloudInitConfigDrive: | ||||||
|  |           userData: "#cloud-config" | ||||||
|  | 
 | ||||||
|  | See the examples below for more complete cloud-init examples. | ||||||
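
Since the switch is a single key rename, it can be sketched with one `sed` invocation; the `volume.yaml` filename below is purely illustrative, not part of the KubeVirt docs:

```shell
# Write the NoCloud volume fragment from above to a scratch file
# (volume.yaml is an assumed, illustrative filename)
cat << EOF > volume.yaml
volumes:
  - name: cloudinitvolume
    cloudInitNoCloud:
      userData: "#cloud-config"
EOF

# Switching data sources is just renaming the volume type
sed 's/cloudInitNoCloud/cloudInitConfigDrive/' volume.yaml > volume-configdrive.yaml
cat volume-configdrive.yaml
```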
|  | 
 | ||||||
|  | #### Cloud-init user-data as clear text | ||||||
|  | 
 | ||||||
|  | In the example below, an SSH key is stored in the cloudInitNoCloud | ||||||
|  | Volume’s userData field as clear text. There is a corresponding disks | ||||||
|  | entry that references the cloud-init volume and assigns it to the VM’s | ||||||
|  | device. | ||||||
|  | 
 | ||||||
|  |     # Create a VM manifest with the startup script in | ||||||
|  |     # a cloudInitNoCloud volume's userData field. | ||||||
|  | 
 | ||||||
|  |     cat << END > my-vmi.yaml | ||||||
|  |     apiVersion: kubevirt.io/v1alpha3 | ||||||
|  |     kind: VirtualMachineInstance | ||||||
|  |     metadata: | ||||||
|  |       name: myvmi | ||||||
|  |     spec: | ||||||
|  |       terminationGracePeriodSeconds: 5 | ||||||
|  |       domain: | ||||||
|  |         resources: | ||||||
|  |           requests: | ||||||
|  |             memory: 64M | ||||||
|  |         devices: | ||||||
|  |           disks: | ||||||
|  |           - name: containerdisk | ||||||
|  |             disk: | ||||||
|  |               bus: virtio | ||||||
|  |           - name: cloudinitdisk | ||||||
|  |             disk: | ||||||
|  |               bus: virtio | ||||||
|  |       volumes: | ||||||
|  |         - name: containerdisk | ||||||
|  |           containerDisk: | ||||||
|  |             image: kubevirt/cirros-container-disk-demo:latest | ||||||
|  |         - name: cloudinitdisk | ||||||
|  |           cloudInitNoCloud: | ||||||
|  |             userData: | | ||||||
|  |               #cloud-config | ||||||
|  |               ssh_authorized_keys: | ||||||
|  |                 - ssh-rsa AAAAB3NzaK8L93bWxnyp test@test.com | ||||||
|  | 
 | ||||||
|  |     END | ||||||
|  | 
 | ||||||
|  |     # Post the Virtual Machine spec to KubeVirt. | ||||||
|  | 
 | ||||||
|  |     kubectl create -f my-vmi.yaml | ||||||
|  | 
 | ||||||
|  | #### Cloud-init user-data as base64 string | ||||||
|  | 
 | ||||||
|  | In the example below, a simple bash script is base64 encoded and stored | ||||||
|  | in the cloudInitNoCloud Volume’s userDataBase64 field. There is a | ||||||
|  | corresponding disks entry that references the cloud-init volume and | ||||||
|  | assigns it to the VM’s device. | ||||||
|  | 
 | ||||||
|  | *Users also have the option of storing the startup script in a | ||||||
|  | Kubernetes Secret and referencing the Secret in the VM’s spec. Examples | ||||||
|  | further down in the document illustrate how that is done.* | ||||||
|  | 
 | ||||||
|  |     # Create a simple startup script | ||||||
|  | 
 | ||||||
|  |     cat << END > startup-script.sh | ||||||
|  |     #!/bin/bash | ||||||
|  |     echo "Hi from startup script!" | ||||||
|  |     END | ||||||
|  | 
 | ||||||
|  |     # Create a VM manifest with the startup script base64 encoded into | ||||||
|  |     # a cloudInitNoCloud volume's userDataBase64 field. | ||||||
|  | 
 | ||||||
|  |     cat << END > my-vmi.yaml | ||||||
|  |     apiVersion: kubevirt.io/v1alpha3 | ||||||
|  |     kind: VirtualMachineInstance | ||||||
|  |     metadata: | ||||||
|  |       name: myvmi | ||||||
|  |     spec: | ||||||
|  |       terminationGracePeriodSeconds: 5 | ||||||
|  |       domain: | ||||||
|  |         resources: | ||||||
|  |           requests: | ||||||
|  |             memory: 64M | ||||||
|  |         devices: | ||||||
|  |           disks: | ||||||
|  |           - name: containerdisk | ||||||
|  |             disk: | ||||||
|  |               bus: virtio | ||||||
|  |           - name: cloudinitdisk | ||||||
|  |             disk: | ||||||
|  |               bus: virtio | ||||||
|  |       volumes: | ||||||
|  |         - name: containerdisk | ||||||
|  |           containerDisk: | ||||||
|  |             image: kubevirt/cirros-container-disk-demo:latest | ||||||
|  |         - name: cloudinitdisk | ||||||
|  |           cloudInitNoCloud: | ||||||
|  |             userDataBase64: $(cat startup-script.sh | base64 -w0) | ||||||
|  |     END | ||||||
|  | 
 | ||||||
|  |     # Post the Virtual Machine spec to KubeVirt. | ||||||
|  | 
 | ||||||
|  |     kubectl create -f my-vmi.yaml | ||||||
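
Before posting the manifest, the encoding step itself can be sanity-checked in isolation: `base64 -w0` (GNU coreutils) emits a single unwrapped line, and decoding it must reproduce the script byte for byte. A minimal sketch:

```shell
# Recreate the startup script and round-trip it through base64
cat << 'EOF' > startup-script.sh
#!/bin/bash
echo "Hi from startup script!"
EOF

encoded=$(base64 -w0 startup-script.sh)
printf '%s' "$encoded" | base64 -d > decoded.sh

# decoded.sh must be identical to the original
diff startup-script.sh decoded.sh && echo "round trip OK"
```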
|  | 
 | ||||||
|  | #### Cloud-init UserData as k8s Secret | ||||||
|  | 
 | ||||||
|  | Users who prefer not to store the cloud-init userdata directly in the | ||||||
|  | VirtualMachineInstance spec can store the userdata in a | ||||||
|  | Kubernetes Secret and reference that Secret in the spec. | ||||||
|  | 
 | ||||||
|  | Multiple VirtualMachineInstance specs can reference the same Kubernetes | ||||||
|  | Secret containing cloud-init userdata. | ||||||
|  | 
 | ||||||
|  | Below is an example of how to create a Kubernetes Secret containing a | ||||||
|  | startup script and reference that Secret in the VM’s spec. | ||||||
|  | 
 | ||||||
|  |     # Create a simple startup script | ||||||
|  | 
 | ||||||
|  |     cat << END > startup-script.sh | ||||||
|  |     #!/bin/bash | ||||||
|  |     echo "Hi from startup script!" | ||||||
|  |     END | ||||||
|  | 
 | ||||||
|  |     # Store the startup script in a Kubernetes Secret | ||||||
|  |     kubectl create secret generic my-vmi-secret --from-file=userdata=startup-script.sh | ||||||
|  | 
 | ||||||
|  |     # Create a VM manifest and reference the Secret's name in the cloudInitNoCloud | ||||||
|  |     # Volume's secretRef field | ||||||
|  | 
 | ||||||
|  |     cat << END > my-vmi.yaml | ||||||
|  |     apiVersion: kubevirt.io/v1alpha3 | ||||||
|  |     kind: VirtualMachineInstance | ||||||
|  |     metadata: | ||||||
|  |       name: myvmi | ||||||
|  |     spec: | ||||||
|  |       terminationGracePeriodSeconds: 5 | ||||||
|  |       domain: | ||||||
|  |         resources: | ||||||
|  |           requests: | ||||||
|  |             memory: 64M | ||||||
|  |         devices: | ||||||
|  |           disks: | ||||||
|  |           - name: containerdisk | ||||||
|  |             disk: | ||||||
|  |               bus: virtio | ||||||
|  |           - name: cloudinitdisk | ||||||
|  |             disk: | ||||||
|  |               bus: virtio | ||||||
|  |       volumes: | ||||||
|  |         - name: containerdisk | ||||||
|  |           containerDisk: | ||||||
|  |             image: kubevirt/cirros-container-disk-demo:latest | ||||||
|  |         - name: cloudinitdisk | ||||||
|  |           cloudInitNoCloud: | ||||||
|  |             secretRef: | ||||||
|  |               name: my-vmi-secret | ||||||
|  |     END | ||||||
|  | 
 | ||||||
|  |     # Post the VM | ||||||
|  |     kubectl create -f my-vmi.yaml | ||||||
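
For reference, the Secret that `kubectl create secret generic` produces stores each `data` value base64 encoded. A sketch of the equivalent manifest built by hand, reusing the filenames from the example above:

```shell
# Recreate the startup script
cat << 'EOF' > startup-script.sh
#!/bin/bash
echo "Hi from startup script!"
EOF

# Secret data values are base64 encoded; the key name "userdata"
# matches the --from-file=userdata=... flag used above
userdata_b64=$(base64 -w0 startup-script.sh)
cat << EOF > my-vmi-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-vmi-secret
data:
  userdata: ${userdata_b64}
EOF
cat my-vmi-secret.yaml
```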
|  | 
 | ||||||
|  | #### Injecting SSH keys with Cloud-init’s Cloud-config | ||||||
|  | 
 | ||||||
|  | In the examples so far, the cloud-init userdata script has been a bash | ||||||
|  | script. Cloud-init has its own configuration that can handle some | ||||||
|  | common tasks such as user creation and SSH key injection. | ||||||
|  | 
 | ||||||
|  | More cloud-config examples can be found here: [Cloud-init | ||||||
|  | Examples](https://cloudinit.readthedocs.io/en/latest/topics/examples.html) | ||||||
|  | 
 | ||||||
|  | Below is an example of using cloud-config to inject an SSH key for the | ||||||
|  | default user (fedora in this case) of a [Fedora | ||||||
|  | Atomic](https://getfedora.org/en/atomic/download/) disk image. | ||||||
|  | 
 | ||||||
|  |     # Create the cloud-init cloud-config userdata. | ||||||
|  |     cat << END > startup-script | ||||||
|  |     #cloud-config | ||||||
|  |     password: atomic | ||||||
|  |     chpasswd: { expire: False } | ||||||
|  |     ssh_pwauth: False | ||||||
|  |     ssh_authorized_keys: | ||||||
|  |         - ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC6zdgFiLr1uAK7PdcchDd+LseA5fEOcxCCt7TLlr7Mx6h8jUg+G+8L9JBNZuDzTZSF0dR7qwzdBBQjorAnZTmY3BhsKcFr8Gt4KMGrS6r3DNmGruP8GORvegdWZuXgASKVpXeI7nCIjRJwAaK1x+eGHwAWO9Z8ohcboHbLyffOoSZDSIuk2kRIc47+ENRjg0T6x2VRsqX27g6j4DfPKQZGk0zvXkZaYtr1e2tZgqTBWqZUloMJK8miQq6MktCKAS4VtPk0k7teQX57OGwD6D7uo4b+Cl8aYAAwhn0hc0C2USfbuVHgq88ESo2/+NwV4SQcl3sxCW21yGIjAGt4Hy7J fedora@localhost.localdomain | ||||||
|  |     END | ||||||
|  | 
 | ||||||
|  |     # Create the VM spec | ||||||
|  |     cat << END > my-vmi.yaml | ||||||
|  |     apiVersion: kubevirt.io/v1alpha3 | ||||||
|  |     kind: VirtualMachineInstance | ||||||
|  |     metadata: | ||||||
|  |       name: sshvmi | ||||||
|  |     spec: | ||||||
|  |       terminationGracePeriodSeconds: 0 | ||||||
|  |       domain: | ||||||
|  |         resources: | ||||||
|  |           requests: | ||||||
|  |             memory: 1024M | ||||||
|  |         devices: | ||||||
|  |           disks: | ||||||
|  |           - name: containerdisk | ||||||
|  |             disk: | ||||||
|  |               bus: virtio | ||||||
|  |           - name: cloudinitdisk | ||||||
|  |             disk: | ||||||
|  |               bus: virtio | ||||||
|  |       volumes: | ||||||
|  |         - name: containerdisk | ||||||
|  |           containerDisk: | ||||||
|  |             image: kubevirt/fedora-atomic-registry-disk-demo:latest | ||||||
|  |         - name: cloudinitdisk | ||||||
|  |           cloudInitNoCloud: | ||||||
|  |             userDataBase64: $(cat startup-script | base64 -w0) | ||||||
|  |     END | ||||||
|  | 
 | ||||||
|  |     # Post the VirtualMachineInstance spec to KubeVirt. | ||||||
|  |     kubectl create -f my-vmi.yaml | ||||||
|  | 
 | ||||||
|  |     # Connect to VM with passwordless SSH key | ||||||
|  |     ssh -i <insert private key here> fedora@<insert ip here> | ||||||
|  | 
 | ||||||
|  | #### Inject SSH key using a Custom Shell Script | ||||||
|  | 
 | ||||||
|  | Depending on the boot image in use, users may have a mixed experience | ||||||
|  | using cloud-init’s cloud-config to create users and inject SSH keys. | ||||||
|  | 
 | ||||||
|  | Below is an example of creating a user and injecting SSH keys for that | ||||||
|  | user using a script instead of cloud-config. | ||||||
|  | 
 | ||||||
|  |     cat << 'END' > startup-script.sh | ||||||
|  |     #!/bin/bash | ||||||
|  |     export NEW_USER="foo" | ||||||
|  |     export SSH_PUB_KEY="ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC6zdgFiLr1uAK7PdcchDd+LseA5fEOcxCCt7TLlr7Mx6h8jUg+G+8L9JBNZuDzTZSF0dR7qwzdBBQjorAnZTmY3BhsKcFr8Gt4KMGrS6r3DNmGruP8GORvegdWZuXgASKVpXeI7nCIjRJwAaK1x+eGHwAWO9Z8ohcboHbLyffOoSZDSIuk2kRIc47+ENRjg0T6x2VRsqX27g6j4DfPKQZGk0zvXkZaYtr1e2tZgqTBWqZUloMJK8miQq6MktCKAS4VtPk0k7teQX57OGwD6D7uo4b+Cl8aYAAwhn0hc0C2USfbuVHgq88ESo2/+NwV4SQcl3sxCW21yGIjAGt4Hy7J $NEW_USER@localhost.localdomain" | ||||||
|  | 
 | ||||||
|  |     sudo adduser -U -m $NEW_USER | ||||||
|  |     echo "$NEW_USER:atomic" | sudo chpasswd | ||||||
|  |     sudo mkdir /home/$NEW_USER/.ssh | ||||||
|  |     echo "$SSH_PUB_KEY" | sudo tee /home/$NEW_USER/.ssh/authorized_keys > /dev/null | ||||||
|  |     sudo chown -R ${NEW_USER}: /home/$NEW_USER/.ssh | ||||||
|  |     END | ||||||
|  | 
 | ||||||
|  |     # Create the VM spec | ||||||
|  |     cat << END > my-vmi.yaml | ||||||
|  |     apiVersion: kubevirt.io/v1alpha3 | ||||||
|  |     kind: VirtualMachineInstance | ||||||
|  |     metadata: | ||||||
|  |       name: sshvmi | ||||||
|  |     spec: | ||||||
|  |       terminationGracePeriodSeconds: 0 | ||||||
|  |       domain: | ||||||
|  |         resources: | ||||||
|  |           requests: | ||||||
|  |             memory: 1024M | ||||||
|  |         devices: | ||||||
|  |           disks: | ||||||
|  |           - name: containerdisk | ||||||
|  |             disk: | ||||||
|  |               bus: virtio | ||||||
|  |           - name: cloudinitdisk | ||||||
|  |             disk: | ||||||
|  |               bus: virtio | ||||||
|  |       volumes: | ||||||
|  |         - name: containerdisk | ||||||
|  |           containerDisk: | ||||||
|  |             image: kubevirt/fedora-atomic-registry-disk-demo:latest | ||||||
|  |         - name: cloudinitdisk | ||||||
|  |           cloudInitNoCloud: | ||||||
|  |             userDataBase64: $(cat startup-script.sh | base64 -w0) | ||||||
|  |     END | ||||||
|  | 
 | ||||||
|  |     # Post the VirtualMachineInstance spec to KubeVirt. | ||||||
|  |     kubectl create -f my-vmi.yaml | ||||||
|  | 
 | ||||||
|  |     # Connect to VM with passwordless SSH key | ||||||
|  |     ssh -i <insert private key here> foo@<insert ip here> | ||||||
|  | 
 | ||||||
|  | ### Network Config | ||||||
|  | 
 | ||||||
|  | A cloud-init [network version | ||||||
|  | 1](https://cloudinit.readthedocs.io/en/latest/topics/network-config-format-v1.html) | ||||||
|  | configuration can be set to configure the network at boot. | ||||||
|  | 
 | ||||||
|  | Cloud-init [user-data](#user-data) **must** be set for cloud-init to | ||||||
|  | parse *network-config* even if it is just the user-data config header: | ||||||
|  | 
 | ||||||
|  |     #cloud-config | ||||||
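
A quick way to catch a missing header before booting the VM is to inspect the first line of the user-data locally (a sketch; `user-data` is an assumed local filename):

```shell
# Write a minimal user-data consisting of only the header
printf '#cloud-config\n' > user-data

# cloud-init only treats the payload as cloud-config when the
# very first line is exactly "#cloud-config"
if head -n1 user-data | grep -qx '#cloud-config'; then
    echo "user-data header OK"
else
    echo "missing #cloud-config header" >&2
    exit 1
fi
```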
|  | 
 | ||||||
|  | #### Cloud-init network-config as clear text | ||||||
|  | 
 | ||||||
|  | In the example below, a simple cloud-init network-config is stored in | ||||||
|  | the cloudInitNoCloud Volume’s networkData field as clear text. There is | ||||||
|  | a corresponding disks entry that references the cloud-init volume and | ||||||
|  | assigns it to the VM’s device. | ||||||
|  | 
 | ||||||
|  |     # Create a VM manifest with the network-config in | ||||||
|  |     # a cloudInitNoCloud volume's networkData field. | ||||||
|  | 
 | ||||||
|  |     cat << END > my-vmi.yaml | ||||||
|  |     apiVersion: kubevirt.io/v1alpha2 | ||||||
|  |     kind: VirtualMachineInstance | ||||||
|  |     metadata: | ||||||
|  |       name: myvmi | ||||||
|  |     spec: | ||||||
|  |       terminationGracePeriodSeconds: 5 | ||||||
|  |       domain: | ||||||
|  |         resources: | ||||||
|  |           requests: | ||||||
|  |             memory: 64M | ||||||
|  |         devices: | ||||||
|  |           disks: | ||||||
|  |           - name: containerdisk | ||||||
|  |             volumeName: registryvolume | ||||||
|  |             disk: | ||||||
|  |               bus: virtio | ||||||
|  |           - name: cloudinitdisk | ||||||
|  |             volumeName: cloudinitvolume | ||||||
|  |             disk: | ||||||
|  |               bus: virtio | ||||||
|  |       volumes: | ||||||
|  |         - name: registryvolume | ||||||
|  |           containerDisk: | ||||||
|  |             image: kubevirt/cirros-container-disk-demo:latest | ||||||
|  |         - name: cloudinitvolume | ||||||
|  |           cloudInitNoCloud: | ||||||
|  |             userData: "#cloud-config" | ||||||
|  |             networkData: | | ||||||
|  |               network: | ||||||
|  |                 version: 1 | ||||||
|  |                 config: | ||||||
                - type: physical
                  name: eth0
                  subnets:
                  - type: dhcp
|  | 
 | ||||||
|  |     END | ||||||
|  | 
 | ||||||
|  |     # Post the Virtual Machine spec to KubeVirt. | ||||||
|  | 
 | ||||||
|  |     kubectl create -f my-vmi.yaml | ||||||
|  | 
 | ||||||
|  | #### Cloud-init network-config as base64 string | ||||||
|  | 
 | ||||||
|  | In the example below, a simple network-config is base64 encoded and | ||||||
|  | stored in the cloudInitNoCloud Volume’s networkDataBase64 field. There | ||||||
|  | is a corresponding disks entry that references the cloud-init volume and | ||||||
|  | assigns it to the VM’s device. | ||||||
|  | 
 | ||||||
|  | *Users also have the option of storing the network-config in a | ||||||
|  | Kubernetes Secret and referencing the Secret in the VM’s spec. Examples | ||||||
|  | further down in the document illustrate how that is done.* | ||||||
|  | 
 | ||||||
|  |     # Create a simple network-config | ||||||
|  | 
 | ||||||
|  |     cat << END > network-config | ||||||
|  |     network: | ||||||
|  |       version: 1 | ||||||
|  |       config: | ||||||
      - type: physical
        name: eth0
        subnets:
        - type: dhcp
|  |     END | ||||||
|  | 
 | ||||||
|  |     # Create a VM manifest with the networkData base64 encoded into | ||||||
|  |     # a cloudInitNoCloud volume's networkDataBase64 field. | ||||||
|  | 
 | ||||||
|  |     cat << END > my-vmi.yaml | ||||||
|  |     apiVersion: kubevirt.io/v1alpha2 | ||||||
|  |     kind: VirtualMachineInstance | ||||||
|  |     metadata: | ||||||
|  |       name: myvmi | ||||||
|  |     spec: | ||||||
|  |       terminationGracePeriodSeconds: 5 | ||||||
|  |       domain: | ||||||
|  |         resources: | ||||||
|  |           requests: | ||||||
|  |             memory: 64M | ||||||
|  |         devices: | ||||||
|  |           disks: | ||||||
|  |           - name: containerdisk | ||||||
|  |             volumeName: registryvolume | ||||||
|  |             disk: | ||||||
|  |               bus: virtio | ||||||
|  |           - name: cloudinitdisk | ||||||
|  |             volumeName: cloudinitvolume | ||||||
|  |             disk: | ||||||
|  |               bus: virtio | ||||||
|  |       volumes: | ||||||
|  |         - name: registryvolume | ||||||
|  |           containerDisk: | ||||||
|  |             image: kubevirt/cirros-container-disk-demo:latest | ||||||
|  |         - name: cloudinitvolume | ||||||
|  |           cloudInitNoCloud: | ||||||
|  |             userData: "#cloud-config" | ||||||
|  |             networkDataBase64: $(cat network-config | base64 -w0) | ||||||
|  |     END | ||||||
|  | 
 | ||||||
|  |     # Post the Virtual Machine spec to KubeVirt. | ||||||
|  | 
 | ||||||
|  |     kubectl create -f my-vmi.yaml | ||||||
|  | 
 | ||||||
|  | #### Cloud-init network-config as k8s Secret | ||||||
|  | 
 | ||||||
Users who prefer not to store the cloud-init network-config directly in
the VirtualMachineInstance spec can store the network-config in a
Kubernetes Secret and reference that Secret in the spec.
|  | 
 | ||||||
|  | Multiple VirtualMachineInstance specs can reference the same Kubernetes | ||||||
|  | Secret containing cloud-init network-config. | ||||||
|  | 
 | ||||||
|  | Below is an example of how to create a Kubernetes Secret containing a | ||||||
|  | network-config and reference that Secret in the VM’s spec. | ||||||
|  | 
 | ||||||
|  |     # Create a simple network-config | ||||||
|  | 
 | ||||||
|  |     cat << END > network-config | ||||||
|  |     network: | ||||||
|  |       version: 1 | ||||||
|  |       config: | ||||||
      - type: physical
        name: eth0
        subnets:
        - type: dhcp
|  |     END | ||||||
|  | 
 | ||||||
|  |     # Store the network-config in a Kubernetes Secret | ||||||
|  |     kubectl create secret generic my-vmi-secret --from-file=networkdata=network-config | ||||||
|  | 
 | ||||||
|  |     # Create a VM manifest and reference the Secret's name in the cloudInitNoCloud | ||||||
|  |     # Volume's secretRef field | ||||||
|  | 
 | ||||||
|  |     cat << END > my-vmi.yaml | ||||||
|  |     apiVersion: kubevirt.io/v1alpha2 | ||||||
|  |     kind: VirtualMachineInstance | ||||||
|  |     metadata: | ||||||
|  |       name: myvmi | ||||||
|  |     spec: | ||||||
|  |       terminationGracePeriodSeconds: 5 | ||||||
|  |       domain: | ||||||
|  |         resources: | ||||||
|  |           requests: | ||||||
|  |             memory: 64M | ||||||
|  |         devices: | ||||||
|  |           disks: | ||||||
|  |           - name: containerdisk | ||||||
|  |             volumeName: registryvolume | ||||||
|  |             disk: | ||||||
|  |               bus: virtio | ||||||
|  |           - name: cloudinitdisk | ||||||
|  |             volumeName: cloudinitvolume | ||||||
|  |             disk: | ||||||
|  |               bus: virtio | ||||||
|  |       volumes: | ||||||
|  |         - name: registryvolume | ||||||
|  |           containerDisk: | ||||||
|  |             image: kubevirt/cirros-registry-disk-demo:latest | ||||||
|  |         - name: cloudinitvolume | ||||||
|  |           cloudInitNoCloud: | ||||||
|  |             userData: "#cloud-config" | ||||||
|  |             networkDataSecretRef: | ||||||
|  |               name: my-vmi-secret | ||||||
|  |     END | ||||||
|  | 
 | ||||||
|  |     # Post the VM | ||||||
|  |     kubectl create -f my-vmi.yaml | ||||||
|  | 
 | ||||||
|  | Debugging | ||||||
|  | --------- | ||||||
|  | 
 | ||||||
Depending on the operating system distribution in use, cloud-init
output is often printed to the console during boot. When developing
userdata scripts, users can connect to the VM's console during boot to
debug.
|  | 
 | ||||||
|  | Example of connecting to console using virtctl: | ||||||
|  | 
 | ||||||
|  |     virtctl console <name of vmi> | ||||||
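
When user data is supplied via `userDataBase64` (as in the examples
above), encoding mistakes are a common source of silent cloud-init
failures. A minimal local sanity check, assuming GNU `base64`, is to
round-trip the data before posting the manifest:

```shell
# Round-trip the user data locally to catch base64/encoding mistakes
# before embedding it in the VMI manifest.
printf '#cloud-config\npassword: fedora\n' > userdata
encoded=$(base64 -w0 < userdata)
# Decoding must reproduce the original file exactly.
printf '%s' "$encoded" | base64 -d
```

If the decoded output does not match the original file, the manifest
would hand cloud-init corrupted user data.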
|  | Virtual Machines | ||||||
|  | ================ | ||||||
|  | 
 | ||||||
The `VirtualMachineInstance` type conceptually has two parts:
|  | 
 | ||||||
|  | -   Information for making scheduling decisions | ||||||
|  | 
 | ||||||
|  | -   Information about the virtual machine ABI | ||||||
|  | 
 | ||||||
|  | Every `VirtualMachineInstance` object represents a single running | ||||||
|  | virtual machine instance. | ||||||
|  | 
 | ||||||
|  | Creation | ||||||
|  | ======== | ||||||
|  | 
 | ||||||
|  | API Overview | ||||||
|  | ------------ | ||||||
|  | 
 | ||||||
|  | With the installation of KubeVirt, new types are added to the Kubernetes | ||||||
|  | API to manage Virtual Machines. | ||||||
|  | 
 | ||||||
|  | You can interact with the new resources (via `kubectl`) as you would | ||||||
|  | with any other API resource. | ||||||
|  | 
 | ||||||
|  | VirtualMachineInstance API | ||||||
|  | -------------------------- | ||||||
|  | 
 | ||||||
|  | > Note: A full API reference is available at | ||||||
|  | > <https://kubevirt.io/api-reference/>. | ||||||
|  | 
 | ||||||
|  | Here is an example of a VirtualMachineInstance object: | ||||||
|  | 
 | ||||||
|  |     apiVersion: kubevirt.io/v1alpha3 | ||||||
|  |     kind: VirtualMachineInstance | ||||||
|  |     metadata: | ||||||
|  |       name: testvmi-nocloud | ||||||
|  |     spec: | ||||||
|  |       terminationGracePeriodSeconds: 30 | ||||||
|  |       domain: | ||||||
|  |         resources: | ||||||
|  |           requests: | ||||||
|  |             memory: 1024M | ||||||
|  |         devices: | ||||||
|  |           disks: | ||||||
|  |           - name: containerdisk | ||||||
|  |             disk: | ||||||
|  |               bus: virtio | ||||||
|  |           - name: emptydisk | ||||||
|  |             disk: | ||||||
|  |               bus: virtio | ||||||
|  |           - disk: | ||||||
|  |               bus: virtio | ||||||
|  |             name: cloudinitdisk | ||||||
|  |       volumes: | ||||||
|  |       - name: containerdisk | ||||||
|  |         containerDisk: | ||||||
|  |           image: kubevirt/fedora-cloud-container-disk-demo:latest | ||||||
|  |       - name: emptydisk | ||||||
|  |         emptyDisk: | ||||||
|  |           capacity: "2Gi" | ||||||
|  |       - name: cloudinitdisk | ||||||
|  |         cloudInitNoCloud: | ||||||
|  |           userData: |- | ||||||
|  |             #cloud-config | ||||||
|  |             password: fedora | ||||||
|  |             chpasswd: { expire: False } | ||||||
|  | 
 | ||||||
This example uses a Fedora cloud image in combination with cloud-init
and an ephemeral empty disk with a capacity of `2Gi`. For the sake of
simplicity, the volume sources in this example are ephemeral and don't
require a provisioner in your cluster.
|  | 
 | ||||||
|  | What’s next | ||||||
|  | =========== | ||||||
|  | 
 | ||||||
|  | -   More information about persistent and ephemeral volumes: | ||||||
|  |     [Disks and Volumes](creation/disks-and-volumes.md) | ||||||
|  | 
 | ||||||
|  | -   How to access a VirtualMachineInstance via `console` or `vnc`: | ||||||
|  |     [Console Access](usage/graphical-and-console-access.md) | ||||||
|  | 
 | ||||||
-   How to customize VirtualMachineInstances with `cloud-init`:
    [Cloud Init](creation/cloud-init.md)
|  | VirtualMachineInstance with dedicated CPU resources | ||||||
|  | =================================================== | ||||||
|  | 
 | ||||||
Certain workloads that require predictable latency and enhanced
performance during execution benefit from dedicated CPU resources.
KubeVirt, relying on the Kubernetes CPU manager, is able to pin a
guest's vCPUs to the host's pCPUs.
|  | 
 | ||||||
The [Kubernetes CPU
manager](https://kubernetes.io/docs/tasks/administer-cluster/cpu-management-policies/)
is a mechanism that affects the scheduling of workloads. It places a
workload on a host that can allocate `Guaranteed` resources and pins
certain of the pod's containers to host pCPUs, if the following
requirements are met:

-   The [pod's
    QoS](https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/#create-a-pod-that-gets-assigned-a-qos-class-of-guaranteed)
    is Guaranteed

    -   resource requests and limits are equal

    -   all containers in the pod express CPU and memory requirements

-   The requested number of CPUs is an integer

Additional information:

-   [Enabling the CPU manager on
    Kubernetes](https://kubernetes.io/docs/tasks/administer-cluster/cpu-management-policies/)

-   [Enabling the CPU manager on
    OKD](https://docs.openshift.com/container-platform/3.10/scaling_performance/using_cpu_manager.html)

-   [Kubernetes blog explaining the
    feature](https://kubernetes.io/blog/2018/07/24/feature-highlight-cpu-manager/)
|  | 
 | ||||||
Requesting dedicated CPU resources
----------------------------------
|  | 
 | ||||||
Setting `spec.domain.cpu.dedicatedCpuPlacement` to `true` in a VMI spec
indicates the desire to allocate dedicated CPU resources to the VMI.
 | ||||||
KubeVirt will verify that all the necessary conditions are met for the
Kubernetes CPU manager to pin the virt-launcher container to dedicated
host CPUs. Once virt-launcher is running, the VMI's vCPUs will be
pinned to the pCPUs that have been dedicated to the virt-launcher
container.
|  | 
 | ||||||
The desired number of vCPUs for a VMI can be expressed either by
setting the guest topology in `spec.domain.cpu` (`sockets`, `cores`,
`threads`) or by setting `spec.domain.resources.[requests/limits].cpu`
to a whole number (e.g. 1, 2, etc.). The number of vCPUs is counted as
`sockets * cores * threads`; if `spec.domain.cpu` is empty, the value
is taken from `spec.domain.resources.requests.cpu` or
`spec.domain.resources.limits.cpu`.
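
The counting rule can be checked with simple shell arithmetic; this
sketch uses the topology from the first example in this section (2
sockets, 1 core, 1 thread):

```shell
# vCPU count derived from the guest topology: sockets * cores * threads.
sockets=2; cores=1; threads=1
vcpus=$((sockets * cores * threads))
echo "$vcpus"
```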
|  | 
 | ||||||
|  | > **Note:** Users should not specify both `spec.domain.cpu` and | ||||||
|  | > `spec.domain.resources.[requests/limits].cpu` | ||||||
|  | > | ||||||
|  | > **Note:** `spec.domain.resources.requests.cpu` must be equal to | ||||||
|  | > `spec.domain.resources.limits.cpu` | ||||||
|  | > | ||||||
|  | > **Note:** Multiple cpu-bound microbenchmarks show a significant | ||||||
|  | > performance advantage when using `spec.domain.cpu.sockets` instead of | ||||||
|  | > `spec.domain.cpu.cores`. | ||||||
|  | 
 | ||||||
|  | All inconsistent requirements will be rejected. | ||||||
|  | 
 | ||||||
|  |     apiVersion: kubevirt.io/v1alpha3 | ||||||
|  |     kind: VirtualMachineInstance | ||||||
|  |     spec: | ||||||
|  |       domain: | ||||||
|  |         cpu: | ||||||
|  |           sockets: 2 | ||||||
|  |           cores: 1 | ||||||
|  |           threads: 1 | ||||||
|  |           dedicatedCpuPlacement: true | ||||||
|  |         resources: | ||||||
|  |           limits: | ||||||
|  |             memory: 2Gi | ||||||
|  |     [...] | ||||||
|  | 
 | ||||||
|  | OR | ||||||
|  | 
 | ||||||
|  |     apiVersion: kubevirt.io/v1alpha3 | ||||||
|  |     kind: VirtualMachineInstance | ||||||
|  |     spec: | ||||||
|  |       domain: | ||||||
|  |         cpu: | ||||||
|  |           dedicatedCpuPlacement: true | ||||||
|  |         resources: | ||||||
|  |           limits: | ||||||
|  |             cpu: 2 | ||||||
|  |             memory: 2Gi | ||||||
|  |     [...] | ||||||
|  | 
 | ||||||
|  | Requesting dedicated CPU for QEMU emulator | ||||||
|  | ------------------------------------------ | ||||||
|  | 
 | ||||||
A number of QEMU threads, such as the QEMU main event loop and async
I/O operation completion, also execute on the same physical CPUs as the
VMI's vCPUs. This may affect the expected latency of a vCPU. To enhance
real-time support in KubeVirt and provide improved latency, KubeVirt
will allocate an additional dedicated CPU, exclusively for the emulator
thread, to which it will be pinned. This effectively "isolates" the
emulator thread from the vCPUs of the VMI.
|  | 
 | ||||||
This functionality can be enabled by specifying
`isolateEmulatorThread: true` in the VMI spec's `spec.domain.cpu`
section. Naturally, this setting has to be specified in combination
with `dedicatedCpuPlacement: true`.
|  | 
 | ||||||
|  | Example: | ||||||
|  | 
 | ||||||
|  |     apiVersion: kubevirt.io/v1alpha3 | ||||||
|  |     kind: VirtualMachineInstance | ||||||
|  |     spec: | ||||||
|  |       domain: | ||||||
|  |         cpu: | ||||||
|  |           dedicatedCpuPlacement: true | ||||||
|  |           isolateEmulatorThread: true | ||||||
|  |         resources: | ||||||
|  |           limits: | ||||||
|  |             cpu: 2 | ||||||
|  |             memory: 2Gi | ||||||
|  | 
 | ||||||
|  | Identifying nodes with a running CPU manager | ||||||
|  | -------------------------------------------- | ||||||
|  | 
 | ||||||
At this time, [Kubernetes doesn't label the
nodes](https://github.com/kubernetes/kubernetes/issues/66525) that have
the CPU manager running on them.
|  | 
 | ||||||
KubeVirt has a mechanism to identify which nodes have the CPU manager
running and automatically adds a `cpumanager=true` label to them. The
label is removed when KubeVirt identifies that the CPU manager is no
longer running on the node. This automatic identification should be
viewed as a temporary workaround until Kubernetes provides the required
functionality; therefore, the feature must be enabled manually by
adding `CPUManager` to the kubevirt-config feature-gates field.
|  | 
 | ||||||
When automatic identification is disabled, a cluster administrator may
manually add the above label to all nodes where the CPU manager is
running.
|  | 
 | ||||||
-   Nodes' labels are viewable: `kubectl describe nodes`
|  | 
 | ||||||
|  | -   Administrators may manually label a missing node: | ||||||
|  |     `kubectl label node [node_name] cpumanager=true` | ||||||
|  | 
 | ||||||
|  | ### Enabling the CPU Manager automatic identification feature gate | ||||||
|  | 
 | ||||||
To enable automatic identification, users may expand the
`feature-gates` field in the kubevirt-config config map by adding
`CPUManager` to it.
|  | 
 | ||||||
|  |     apiVersion: v1 | ||||||
|  |     kind: ConfigMap | ||||||
|  |     metadata: | ||||||
|  |       name: kubevirt-config | ||||||
|  |       namespace: kubevirt | ||||||
|  |       labels: | ||||||
|  |         kubevirt.io: "" | ||||||
|  |     data: | ||||||
|  |       feature-gates: "CPUManager" | ||||||
|  | 
 | ||||||
|  | Alternatively, users can edit an existing kubevirt-config: | ||||||
|  | 
 | ||||||
|  | `kubectl edit configmap kubevirt-config -n kubevirt` | ||||||
|  | 
 | ||||||
|  |     ... | ||||||
|  |     data: | ||||||
|  |       feature-gates: "DataVolumes,CPUManager" | ||||||
|  | 
 | ||||||
|  | Sidecar containers and CPU allocation overhead | ||||||
|  | ---------------------------------------------- | ||||||
|  | 
 | ||||||
|  | **Note:** In order to run sidecar containers, KubeVirt requires the | ||||||
|  | `Sidecar` feature gate to be enabled by adding `Sidecar` to the | ||||||
|  | `kubevirt-config` ConfigMap’s `feature-gates` field. | ||||||
|  | 
 | ||||||
According to the Kubernetes CPU manager model, for a pod to reach the
required QoS level `Guaranteed`, all containers in the pod must express
CPU and memory requirements. At this time, KubeVirt often uses a
sidecar container to mount the VMI's registry disk. It also uses a
sidecar container for its hooking mechanism. These additional resources
can be viewed as overhead and should be taken into account when
calculating node capacity.
|  | 
 | ||||||
**Note:** The current defaults for a sidecar's resources are
`CPU: 200m` and `Memory: 64M`. As the CPU resource is not expressed as
a whole number, the CPU manager will not attempt to pin the sidecar
container to a host CPU.
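
As a back-of-the-envelope sketch with hypothetical numbers (a 16-CPU
node and 10 VMIs, each carrying one sidecar at the default `200m`
request), the overhead can be subtracted when estimating how many whole
CPUs remain available for pinning:

```shell
# Whole CPUs left for dedicated vCPUs after reserving sidecar overhead.
# All numbers here are illustrative, not taken from a real cluster.
node_millicpu=16000     # 16 CPUs expressed in millicores
sidecars=10             # one sidecar per VMI
sidecar_millicpu=200    # default sidecar CPU request
echo $(( (node_millicpu - sidecars * sidecar_millicpu) / 1000 ))
```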
|  | Virtualized Hardware Configuration | ||||||
|  | ================================== | ||||||
|  | 
 | ||||||
|  | Fine-tuning different aspects of the hardware which are not device | ||||||
|  | related (BIOS, mainboard, …) is sometimes necessary to allow guest | ||||||
|  | operating systems to properly boot and reboot. | ||||||
|  | 
 | ||||||
|  | Machine Type | ||||||
|  | ------------ | ||||||
|  | 
 | ||||||
QEMU is able to work with two different classes of chipsets for
x86\_64, so-called machine types. The x86\_64 chipsets are i440fx (also
called pc) and q35. They are versioned based on `qemu-system-${ARCH}`,
following the format `pc-${machine_type}-${qemu_version}`,
e.g. `pc-i440fx-2.10` and `pc-q35-2.10`.
|  | 
 | ||||||
|  | KubeVirt defaults to QEMU’s newest q35 machine type. If a custom machine | ||||||
|  | type is desired, it is configurable through the following structure: | ||||||
|  | 
 | ||||||
|  |     metadata: | ||||||
|  |       name: myvmi | ||||||
|  |     spec: | ||||||
|  |       domain: | ||||||
|  |         machine: | ||||||
|  |           # This value indicates QEMU machine type. | ||||||
|  |           type: pc-q35-2.10 | ||||||
|  |         resources: | ||||||
|  |           requests: | ||||||
|  |             memory: 512M | ||||||
|  |         devices: | ||||||
|  |           disks: | ||||||
|  |           - name: myimage | ||||||
|  |             disk: {} | ||||||
|  |       volumes: | ||||||
|  |         - name: myimage | ||||||
|  |           persistentVolumeClaim: | ||||||
|  |             claimName: myclaim | ||||||
|  | 
 | ||||||
|  | Comparison of the machine types’ internals can be found [at QEMU | ||||||
|  | wiki](https://wiki.qemu.org/Features/Q35). | ||||||
|  | 
 | ||||||
|  | BIOS/UEFI | ||||||
|  | --------- | ||||||
|  | 
 | ||||||
|  | All virtual machines use BIOS by default for booting. | ||||||
|  | 
 | ||||||
|  | It is possible to utilize UEFI/OVMF by setting a value via | ||||||
|  | `spec.firmware.bootloader`: | ||||||
|  | 
 | ||||||
|  |     apiVersion: kubevirt.io/v1alpha3 | ||||||
|  |     kind: VirtualMachineInstance | ||||||
|  |     metadata: | ||||||
|  |       labels: | ||||||
|  |         special: vmi-alpine-efi | ||||||
|  |       name: vmi-alpine-efi | ||||||
|  |     spec: | ||||||
|  |       domain: | ||||||
|  |         devices: | ||||||
|  |           disks: | ||||||
|  |           - disk: | ||||||
|  |               bus: virtio | ||||||
|  |             name: containerdisk | ||||||
|  |         firmware: | ||||||
|  |           # this sets the bootloader type | ||||||
|  |           bootloader: | ||||||
|  |             efi: {} | ||||||
|  | 
 | ||||||
|  | SecureBoot is not yet supported. | ||||||
|  | 
 | ||||||
|  | SMBIOS Firmware | ||||||
|  | --------------- | ||||||
|  | 
 | ||||||
|  | In order to provide a consistent view on the virtualized hardware for | ||||||
|  | the guest OS, the SMBIOS UUID can be set to a constant value via | ||||||
|  | `spec.firmware.uuid`: | ||||||
|  | 
 | ||||||
|  |     metadata: | ||||||
|  |       name: myvmi | ||||||
|  |     spec: | ||||||
|  |       domain: | ||||||
|  |         firmware: | ||||||
|  |           # this sets the UUID | ||||||
|  |           uuid: 5d307ca9-b3ef-428c-8861-06e72d69f223 | ||||||
|  |           serial: e4686d2c-6e8d-4335-b8fd-81bee22f4815 | ||||||
|  |         resources: | ||||||
|  |           requests: | ||||||
|  |             memory: 512M | ||||||
|  |         devices: | ||||||
|  |           disks: | ||||||
|  |           - name: myimage | ||||||
|  |             disk: {} | ||||||
|  |       volumes: | ||||||
|  |         - name: myimage | ||||||
|  |           persistentVolumeClaim: | ||||||
|  |             claimName: myclaim | ||||||
|  | 
 | ||||||
|  | In addition, the SMBIOS serial number can be set to a constant value via | ||||||
|  | `spec.firmware.serial`, as demonstrated above. | ||||||
|  | 
 | ||||||
|  | CPU | ||||||
|  | --- | ||||||
|  | 
 | ||||||
|  | **Note**: This is not related to scheduling decisions or resource | ||||||
|  | assignment. | ||||||
|  | 
 | ||||||
|  | ### Topology | ||||||
|  | 
 | ||||||
|  | Setting the number of CPU cores is possible via `spec.domain.cpu.cores`. | ||||||
|  | The following VM will have a CPU with `3` cores: | ||||||
|  | 
 | ||||||
|  |     metadata: | ||||||
|  |       name: myvmi | ||||||
|  |     spec: | ||||||
|  |       domain: | ||||||
|  |         cpu: | ||||||
|  |           # this sets the cores | ||||||
|  |           cores: 3 | ||||||
|  |         resources: | ||||||
|  |           requests: | ||||||
|  |             memory: 512M | ||||||
|  |         devices: | ||||||
|  |           disks: | ||||||
|  |           - name: myimage | ||||||
|  |             disk: {} | ||||||
|  |       volumes: | ||||||
|  |         - name: myimage | ||||||
|  |           persistentVolumeClaim: | ||||||
|  |             claimName: myclaim | ||||||
|  | 
 | ||||||
### Enabling CPU compatibility enforcement

To enable CPU compatibility enforcement, users may expand the
`feature-gates` field in the kubevirt-config config map by adding
`CPUNodeDiscovery` to it.
|  | 
 | ||||||
|  |     apiVersion: v1 | ||||||
|  |     kind: ConfigMap | ||||||
|  |     metadata: | ||||||
|  |       name: kubevirt-config | ||||||
|  |       namespace: kubevirt | ||||||
|  |       labels: | ||||||
|  |         kubevirt.io: "" | ||||||
|  |     data: | ||||||
|  |       feature-gates: "CPUNodeDiscovery" | ||||||
|  |     ... | ||||||
|  | 
 | ||||||
This feature gate allows KubeVirt to take the VM's CPU model and CPU
features and create node selectors from them. With these node
selectors, the VM can be scheduled on a node that supports its CPU
model and features.
|  | 
 | ||||||
|  | ### Labeling nodes with cpu models and cpu features | ||||||
|  | 
 | ||||||
To properly label the nodes, users can use (only for CPU models and CPU
features) [node-labeller](https://github.com/kubevirt/node-labeller) in
combination with
[cpu-nfd-plugin](https://github.com/kubevirt/cpu-nfd-plugin), or create
the node labels themselves.
|  | 
 | ||||||
To install node-labeller into a cluster, users can use the
[kubevirt-ssp-operator](https://github.com/MarSik/kubevirt-ssp-operator),
which installs node-labeller together with all available plugins.
|  | 
 | ||||||
Cpu-nfd-plugin uses libvirt to get all supported CPU models and CPU
features on the host, and node-labeller creates labels from the CPU
models. KubeVirt can then schedule a VM on a node that supports the
VM's CPU model and features.
|  | 
 | ||||||
Cpu-nfd-plugin supports a blacklist of CPU models and a minimal
baseline CPU model for features. Both can be set via a config map:
|  | 
 | ||||||
|  |     apiVersion: v1 | ||||||
|  |     kind: ConfigMap | ||||||
|  |     metadata: | ||||||
|  |       name: cpu-plugin-configmap | ||||||
|  |     data: | ||||||
|  |       cpu-plugin-configmap.yaml: |- | ||||||
|  |         obsoleteCPUs: | ||||||
|  |           - "486" | ||||||
|  |           - "pentium" | ||||||
|  |         minCPU: "Penryn" | ||||||
|  | 
 | ||||||
This config map has to be created before node-labeller starts;
otherwise the plugin will expose all CPU models. The plugin will not
reload when the config map is changed.
|  | 
 | ||||||
Obsolete CPUs will not be included in the labels. In minCPU, the user
can set a baseline CPU model. The CPU features of this model are used
as basic features, and these basic features are not in the label list.
Feature labels are created as the subtraction of the baseline CPU's
feature set from a newer CPU's feature set. For example, Haswell has
aes, apic, and clflush, while Penryn has apic and clflush; the
subtraction is aes, so a label will be created only for the aes
feature.
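
The subtraction can be sketched in shell; the feature sets here are the
illustrative ones from the text, not a real CPU map:

```shell
# Emit only the features the newer model has beyond the baseline;
# those are the features that become labels.
model_features="aes apic clflush"      # e.g. Haswell
baseline_features="apic clflush"       # e.g. Penryn (minCPU baseline)
for f in $model_features; do
  case " $baseline_features " in
    *" $f "*) ;;          # present in the baseline: no label
    *) echo "$f" ;;       # beyond the baseline: becomes a label
  esac
done
```

Running this prints only `aes`, matching the example in the text.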
|  | 
 | ||||||
|  | ### Model | ||||||
|  | 
 | ||||||
**Note**: Be sure that the CPU model of the node where you run a VM is
of the same or a higher CPU family.

**Note**: If the CPU model isn't defined, the VM will get the CPU model
closest to the one used on the node where it is running.

**Note**: The CPU model is case sensitive.
|  | 
 | ||||||
|  | Setting the CPU model is possible via `spec.domain.cpu.model`. The | ||||||
|  | following VM will have a CPU with the `Conroe` model: | ||||||
|  | 
 | ||||||
|  |     apiVersion: kubevirt.io/v1alpha3 | ||||||
|  |     kind: VirtualMachineInstance | ||||||
|  |     metadata: | ||||||
|  |       name: myvmi | ||||||
|  |     spec: | ||||||
|  |       domain: | ||||||
|  |         cpu: | ||||||
|  |           # this sets the CPU model | ||||||
|  |           model: Conroe | ||||||
|  |     ... | ||||||
|  | 
 | ||||||
|  | You can check list of available models | ||||||
|  | [here](https://github.com/libvirt/libvirt/blob/master/src/cpu_map/index.xml). | ||||||
|  | 
 | ||||||
When the CPUNodeDiscovery feature gate is enabled and the VM has a CPU
model, KubeVirt creates a node selector in the format
`feature.node.kubernetes.io/cpu-model-<cpuModel>`, e.g.
`feature.node.kubernetes.io/cpu-model-Conroe`. When the VM doesn't have
a CPU model, no node selector is created.
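
The selector key is plain string concatenation of the fixed prefix and
the model name; a minimal sketch:

```shell
# Build the node-selector label key for a given CPU model.
cpu_model="Conroe"
echo "feature.node.kubernetes.io/cpu-model-${cpu_model}"
```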
|  | 
 | ||||||
|  | #### Enabling default cluster cpu model | ||||||
|  | 
 | ||||||
To enable a default CPU model, users may add the `default-cpu-model`
field to the kubevirt-config config map.
|  | 
 | ||||||
|  |     apiVersion: v1 | ||||||
|  |     kind: ConfigMap | ||||||
|  |     metadata: | ||||||
|  |       name: kubevirt-config | ||||||
|  |       namespace: kubevirt | ||||||
|  |       labels: | ||||||
|  |         kubevirt.io: "" | ||||||
|  |     data: | ||||||
|  |       default-cpu-model: "EPYC" | ||||||
|  |     ... | ||||||
|  | 
 | ||||||
The default CPU model is applied when the VMI doesn't specify a CPU
model. When the VMI has a CPU model set, the VMI's CPU model is
preferred. When neither the default CPU model nor the VMI's CPU model
is set, `host-model` will be used. The default CPU model can be changed
while KubeVirt is running. When the CPUNodeDiscovery feature gate is
enabled, KubeVirt creates a node selector for the default CPU model.
|  | 
 | ||||||
|  | #### CPU model special cases | ||||||
|  | 
 | ||||||
As special cases, you can set `spec.domain.cpu.model` equal to:

-   `host-passthrough` to passthrough the CPU from the node to the VM
 | ||||||
|  |     metadata: | ||||||
|  |       name: myvmi | ||||||
|  |     spec: | ||||||
|  |       domain: | ||||||
|  |         cpu: | ||||||
|  |           # this passthrough the node CPU to the VM | ||||||
|  |           model: host-passthrough | ||||||
|  |     ... | ||||||
|  | 
 | ||||||
|  | -   `host-model` to get CPU on the VM close to the node one | ||||||
|  | 
 | ||||||
|  | <!-- --> | ||||||
|  | 
 | ||||||
|  |     metadata: | ||||||
|  |       name: myvmi | ||||||
|  |     spec: | ||||||
|  |       domain: | ||||||
|  |         cpu: | ||||||
|  |           # this sets the VM CPU close to the node's | ||||||
|  |           model: host-model | ||||||
|  |     ... | ||||||
|  | 
 | ||||||
|  | See the [CPU API | ||||||
|  | reference](https://libvirt.org/formatdomain.html#elementsCPU) for more | ||||||
|  | details. | ||||||
|  | 
 | ||||||
|  | ### Features | ||||||
|  | 
 | ||||||
|  | CPU features can be configured via `spec.domain.cpu.features`, which may | ||||||
|  | contain zero or more CPU features: | ||||||
|  | 
 | ||||||
|  |     metadata: | ||||||
|  |       name: myvmi | ||||||
|  |     spec: | ||||||
|  |       domain: | ||||||
|  |         cpu: | ||||||
|  |           # this sets the CPU features | ||||||
|  |           features: | ||||||
|  |           # this is the feature's name | ||||||
|  |           - name: "apic" | ||||||
|  |           # this is the feature's policy | ||||||
|  |             policy: "require" | ||||||
|  |     ... | ||||||
|  | 
 | ||||||
|  | **Note**: The policy attribute can either be omitted or set to one of | ||||||
|  | the following policies: force, require, optional, disable, forbid. | ||||||
|  | 
 | ||||||
|  | **Note**: In case a policy is omitted for a feature, it will default to | ||||||
|  | **require**. | ||||||
|  | 
 | ||||||
|  | Behaviour according to Policies: | ||||||
|  | 
 | ||||||
|  | -   All policies will be passed to libvirt during virtual machine | ||||||
|  |     creation. | ||||||
|  | 
 | ||||||
|  | -   In case the feature gate "CPUNodeDiscovery" is enabled and the | ||||||
|  |     policy is omitted or has "require" value, then the virtual machine | ||||||
|  |     could be scheduled only on nodes that support this feature. | ||||||
|  | 
 | ||||||
|  | -   In case the feature gate "CPUNodeDiscovery" is enabled and the | ||||||
|  |     policy has "forbid" value, then the virtual machine would **not** be | ||||||
|  |     scheduled on nodes that support this feature. | ||||||
|  | 
 | ||||||
|  | Full description about features and policies can be found | ||||||
|  | [here](https://libvirt.org/formatdomain.html#elementsCPU). | ||||||
|  | 
 | ||||||
|  | When the CPUNodeDiscovery feature gate is enabled, KubeVirt creates node | ||||||
|  | selectors from CPU features with the format | ||||||
|  | `feature.node.kubernetes.io/cpu-feature-<cpuFeature>`, e.g. | ||||||
|  | `feature.node.kubernetes.io/cpu-feature-apic`. If the VM does not have | ||||||
|  | CPU features set, no node selector is created. | ||||||
|  | 
 | ||||||
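Nodes carrying a given feature label can be listed directly; for example (illustrative, and assumes node-feature-discovery labels are present on the nodes):

```shell
# list nodes that expose the "apic" CPU feature label
kubectl get nodes -l feature.node.kubernetes.io/cpu-feature-apic=true
```
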
|  | Clock | ||||||
|  | ----- | ||||||
|  | 
 | ||||||
|  | ### Guest time | ||||||
|  | 
 | ||||||
|  | Sets the virtualized hardware clock inside the VM to a specific time. | ||||||
|  | Available options are | ||||||
|  | 
 | ||||||
|  | -   **utc** | ||||||
|  | 
 | ||||||
|  | -   **timezone** | ||||||
|  | 
 | ||||||
|  | See the [Clock API | ||||||
|  | Reference](https://kubevirt.github.io/api-reference/master/definitions.html#_v1_clock) | ||||||
|  | for all possible configuration options. | ||||||
|  | 
 | ||||||
|  | #### utc | ||||||
|  | 
 | ||||||
|  | If `utc` is specified, the VM’s clock will be set to UTC. | ||||||
|  | 
 | ||||||
|  |     metadata: | ||||||
|  |       name: myvmi | ||||||
|  |     spec: | ||||||
|  |       domain: | ||||||
|  |         clock: | ||||||
|  |           utc: {} | ||||||
|  |         resources: | ||||||
|  |           requests: | ||||||
|  |             memory: 512M | ||||||
|  |         devices: | ||||||
|  |           disks: | ||||||
|  |           - name: myimage | ||||||
|  |             disk: {} | ||||||
|  |       volumes: | ||||||
|  |         - name: myimage | ||||||
|  |           persistentVolumeClaim: | ||||||
|  |             claimName: myclaim | ||||||
|  | 
 | ||||||
|  | #### timezone | ||||||
|  | 
 | ||||||
|  | If `timezone` is specified, the VM’s clock will be set to the specified | ||||||
|  | local time. | ||||||
|  | 
 | ||||||
|  |     metadata: | ||||||
|  |       name: myvmi | ||||||
|  |     spec: | ||||||
|  |       domain: | ||||||
|  |         clock: | ||||||
|  |           timezone: "America/New_York" | ||||||
|  |         resources: | ||||||
|  |           requests: | ||||||
|  |             memory: 512M | ||||||
|  |         devices: | ||||||
|  |           disks: | ||||||
|  |           - name: myimage | ||||||
|  |             disk: {} | ||||||
|  |       volumes: | ||||||
|  |         - name: myimage | ||||||
|  |           persistentVolumeClaim: | ||||||
|  |             claimName: myclaim | ||||||
|  | 
 | ||||||
|  | ### Timers | ||||||
|  | 
 | ||||||
|  | The following timers can be configured: | ||||||
|  | 
 | ||||||
|  | -   **pit** | ||||||
|  | 
 | ||||||
|  | -   **rtc** | ||||||
|  | 
 | ||||||
|  | -   **kvm** | ||||||
|  | 
 | ||||||
|  | -   **hyperv** | ||||||
|  | 
 | ||||||
|  | A pretty common timer configuration for VMs looks like this: | ||||||
|  | 
 | ||||||
|  |     metadata: | ||||||
|  |       name: myvmi | ||||||
|  |     spec: | ||||||
|  |       domain: | ||||||
|  |         clock: | ||||||
|  |           utc: {} | ||||||
|  |           # here are the timers | ||||||
|  |           timer: | ||||||
|  |             hpet: | ||||||
|  |               present: false | ||||||
|  |             pit: | ||||||
|  |               tickPolicy: delay | ||||||
|  |             rtc: | ||||||
|  |               tickPolicy: catchup | ||||||
|  |             hyperv: {} | ||||||
|  |         resources: | ||||||
|  |           requests: | ||||||
|  |             memory: 512M | ||||||
|  |         devices: | ||||||
|  |           disks: | ||||||
|  |           - name: myimage | ||||||
|  |             disk: {} | ||||||
|  |       volumes: | ||||||
|  |         - name: myimage | ||||||
|  |           persistentVolumeClaim: | ||||||
|  |             claimName: myclaim | ||||||
|  | 
 | ||||||
|  | `hpet` is disabled, while `pit` and `rtc` are configured to use a | ||||||
|  | specific `tickPolicy`. Finally, `hyperv` is made available too. | ||||||
|  | 
 | ||||||
|  | See the [Timer API | ||||||
|  | Reference](https://kubevirt.github.io/api-reference/master/definitions.html#_v1_timer) | ||||||
|  | for all possible configuration options. | ||||||
|  | 
 | ||||||
|  | **Note**: Timers can be part of a machine type. Thus it may be necessary | ||||||
|  | to explicitly disable them. We may in the future decide to add them via | ||||||
|  | cluster-level defaulting, if they are part of a QEMU machine definition. | ||||||
|  | 
 | ||||||
|  | Random number generator (RNG) | ||||||
|  | ----------------------------- | ||||||
|  | 
 | ||||||
|  | You may want to use entropy collected by your cluster nodes inside your | ||||||
|  | guest. KubeVirt allows adding a `virtio` RNG device to a virtual machine | ||||||
|  | as follows. | ||||||
|  | 
 | ||||||
|  |     metadata: | ||||||
|  |       name: vmi-with-rng | ||||||
|  |     spec: | ||||||
|  |       domain: | ||||||
|  |         devices: | ||||||
|  |           rng: {} | ||||||
|  | 
 | ||||||
|  | For Linux guests, the `virtio-rng` kernel module should be loaded early | ||||||
|  | in the boot process to acquire access to the entropy source. Other | ||||||
|  | systems may require similar adjustments to work with the `virtio` RNG | ||||||
|  | device. | ||||||
|  | 
 | ||||||
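Inside a Linux guest, you can check that the device is wired up using the standard hwrng and entropy interfaces (illustrative; the exact backend name may vary):

```shell
# current hardware RNG backing /dev/hwrng; typically something like virtio_rng.0
cat /sys/class/misc/hw_random/rng_current
# entropy currently available to the guest kernel
cat /proc/sys/kernel/random/entropy_avail
```
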
|  | **Note**: Some guest operating systems or user payloads may require an | ||||||
|  | RNG device with enough entropy and may fail to boot without it. For | ||||||
|  | example, fresh Fedora images with newer kernels (4.16.4+) may require | ||||||
|  | the `virtio` RNG device to be present in order to boot to login. | ||||||
|  | 
 | ||||||
|  | Video and Graphics Device | ||||||
|  | ------------------------- | ||||||
|  | 
 | ||||||
|  | By default a minimal Video and Graphics device configuration will be | ||||||
|  | applied to the VirtualMachineInstance. The video device is `vga` | ||||||
|  | compatible and comes with a memory size of 16 MB. This device allows | ||||||
|  | connecting to the OS via `vnc`. | ||||||
|  | 
 | ||||||
|  | It is possible to not attach it by setting | ||||||
|  | `spec.domain.devices.autoattachGraphicsDevice` to `false`: | ||||||
|  | 
 | ||||||
|  |     metadata: | ||||||
|  |       name: myvmi | ||||||
|  |     spec: | ||||||
|  |       domain: | ||||||
|  |         devices: | ||||||
|  |           autoattachGraphicsDevice: false | ||||||
|  |           disks: | ||||||
|  |           - name: myimage | ||||||
|  |             disk: {} | ||||||
|  |       volumes: | ||||||
|  |         - name: myimage | ||||||
|  |           persistentVolumeClaim: | ||||||
|  |             claimName: myclaim | ||||||
|  | 
 | ||||||
|  | VMIs without graphics and video devices are often referred to as | ||||||
|  | `headless` VMIs. | ||||||
|  | 
 | ||||||
|  | When running a large number of small VMs, this can help increase VMI | ||||||
|  | density per node, since no memory needs to be reserved for video. | ||||||
|  | 
 | ||||||
|  | Features | ||||||
|  | -------- | ||||||
|  | 
 | ||||||
|  | KubeVirt supports a range of virtualization features which may be | ||||||
|  | tweaked in order to allow non-Linux based operating systems to properly | ||||||
|  | boot. Most noteworthy are | ||||||
|  | 
 | ||||||
|  | -   **acpi** | ||||||
|  | 
 | ||||||
|  | -   **apic** | ||||||
|  | 
 | ||||||
|  | -   **hyperv** | ||||||
|  | 
 | ||||||
|  | A common feature configuration is shown by the following example: | ||||||
|  | 
 | ||||||
|  |     apiVersion: kubevirt.io/v1alpha3 | ||||||
|  |     kind: VirtualMachineInstance | ||||||
|  |     metadata: | ||||||
|  |       name: myvmi | ||||||
|  |     spec: | ||||||
|  |       domain: | ||||||
|  |         # typical features | ||||||
|  |         features: | ||||||
|  |           acpi: {} | ||||||
|  |           apic: {} | ||||||
|  |           hyperv: | ||||||
|  |             relaxed: {} | ||||||
|  |             vapic: {} | ||||||
|  |             spinlocks: | ||||||
|  |               spinlocks: 8191 | ||||||
|  |         resources: | ||||||
|  |           requests: | ||||||
|  |             memory: 512M | ||||||
|  |         devices: | ||||||
|  |           disks: | ||||||
|  |           - name: myimage | ||||||
|  |             disk: {} | ||||||
|  |       volumes: | ||||||
|  |         - name: myimage | ||||||
|  |           persistentVolumeClaim: | ||||||
|  |             claimName: myclaim | ||||||
|  | 
 | ||||||
|  | See the [Features API | ||||||
|  | Reference](https://kubevirt.github.io/api-reference/master/definitions.html#_v1_features) | ||||||
|  | for all available features and configuration options. | ||||||
|  | 
 | ||||||
|  | Resources Requests and Limits | ||||||
|  | ----------------------------- | ||||||
|  | 
 | ||||||
|  | An optional resource request can be specified by users to allow the | ||||||
|  | scheduler to make a better decision in finding the most suitable Node to | ||||||
|  | place the VM. | ||||||
|  | 
 | ||||||
|  |     apiVersion: kubevirt.io/v1alpha3 | ||||||
|  |     kind: VirtualMachineInstance | ||||||
|  |     metadata: | ||||||
|  |       name: myvmi | ||||||
|  |     spec: | ||||||
|  |       domain: | ||||||
|  |         resources: | ||||||
|  |           requests: | ||||||
|  |             memory: "1Gi" | ||||||
|  |             cpu: "2" | ||||||
|  |           limits: | ||||||
|  |             memory: "2Gi" | ||||||
|  |             cpu: "4" | ||||||
|  |         devices: | ||||||
|  |           disks: | ||||||
|  |           - name: myimage | ||||||
|  |             disk: {} | ||||||
|  |       volumes: | ||||||
|  |         - name: myimage | ||||||
|  |           persistentVolumeClaim: | ||||||
|  |             claimName: myclaim | ||||||
|  | 
 | ||||||
|  | ### CPU | ||||||
|  | 
 | ||||||
|  | Specifying CPU limits determines the amount of *cpu shares* set on the | ||||||
|  | control group the VM is running in; in other words, the amount of time | ||||||
|  | the VM’s CPUs can execute on the assigned resources when there is | ||||||
|  | competition for CPU resources. | ||||||
|  | 
 | ||||||
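One way to observe the effect is to inspect the compute container of the virt-launcher pod backing the VMI (a sketch; the pod name placeholder and the cgroup v1 path are assumptions about your environment):

```shell
# inspect the cpu shares assigned to the VM's compute container
kubectl exec <virt-launcher-pod> -c compute -- cat /sys/fs/cgroup/cpu/cpu.shares
```
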
|  | For more information please refer to [how Pods with resource limits are | ||||||
|  | run](https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/#how-pods-with-resource-limits-are-run). | ||||||
|  | 
 | ||||||
|  | ### Memory Overhead | ||||||
|  | 
 | ||||||
|  | Various VM resources, such as a video adapter, IOThreads, and | ||||||
|  | supplementary system software, consume additional memory from the Node, | ||||||
|  | beyond the requested memory intended for the guest OS consumption. In | ||||||
|  | order to provide a better estimate for the scheduler, this memory | ||||||
|  | overhead will be calculated and added to the requested memory. | ||||||
|  | 
 | ||||||
|  | Please see [how Pods with resource requests are | ||||||
|  | scheduled](https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/#how-pods-with-resource-requests-are-scheduled) | ||||||
|  | for additional information on resource requests and limits. | ||||||
|  | 
 | ||||||
|  | Hugepages | ||||||
|  | --------- | ||||||
|  | 
 | ||||||
|  | KubeVirt gives you the ability to use hugepages as backing memory for | ||||||
|  | your VM. You need to provide the desired amount of memory in | ||||||
|  | `resources.requests.memory` and the hugepage size in | ||||||
|  | `memory.hugepages.pageSize`; for example, on the x86\_64 architecture it | ||||||
|  | can be `2Mi`. | ||||||
|  | 
 | ||||||
|  |     apiVersion: kubevirt.io/v1alpha3 | ||||||
|  |     kind: VirtualMachineInstance | ||||||
|  |     metadata: | ||||||
|  |       name: myvm | ||||||
|  |     spec: | ||||||
|  |       domain: | ||||||
|  |         resources: | ||||||
|  |           requests: | ||||||
|  |             memory: "64Mi" | ||||||
|  |         memory: | ||||||
|  |           hugepages: | ||||||
|  |             pageSize: "2Mi" | ||||||
|  |         devices: | ||||||
|  |           disks: | ||||||
|  |           - name: myimage | ||||||
|  |             disk: {} | ||||||
|  |       volumes: | ||||||
|  |         - name: myimage | ||||||
|  |           persistentVolumeClaim: | ||||||
|  |             claimName: myclaim | ||||||
|  | 
 | ||||||
|  | In the above example the VM will have `64Mi` of memory, but instead of | ||||||
|  | regular memory it will use node hugepages of size `2Mi`. | ||||||
|  | 
 | ||||||
|  | ### Limitations | ||||||
|  | 
 | ||||||
|  | -   a node must have pre-allocated hugepages | ||||||
|  | 
 | ||||||
|  | -   the hugepage size cannot be larger than the requested memory | ||||||
|  | 
 | ||||||
|  | -   the requested memory must be divisible by the hugepage size | ||||||
|  | 
 | ||||||
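The divisibility rule above can be checked with simple arithmetic; a sketch in shell, using the values from the example above:

```shell
req_mib=64      # resources.requests.memory, in MiB
page_mib=2      # memory.hugepages.pageSize, in MiB
if [ $((req_mib % page_mib)) -eq 0 ]; then
  echo "request is divisible by page size"
else
  echo "invalid: request not divisible by page size"
fi
```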
|  | Input Devices | ||||||
|  | ------------- | ||||||
|  | 
 | ||||||
|  | ### Tablet | ||||||
|  | 
 | ||||||
|  | KubeVirt supports input devices. The only supported type is `tablet`. | ||||||
|  | The tablet input device supports only the `virtio` and `usb` buses. If | ||||||
|  | the bus is left empty, `usb` is selected. | ||||||
|  | 
 | ||||||
|  |     apiVersion: kubevirt.io/v1alpha3 | ||||||
|  |     kind: VirtualMachineInstance | ||||||
|  |     metadata: | ||||||
|  |       name: myvm | ||||||
|  |     spec: | ||||||
|  |       domain: | ||||||
|  |         devices: | ||||||
|  |           inputs: | ||||||
|  |           - type: tablet | ||||||
|  |             bus: virtio | ||||||
|  |             name: tablet1 | ||||||
|  |           disks: | ||||||
|  |           - name: myimage | ||||||
|  |             disk: {} | ||||||
|  |       volumes: | ||||||
|  |         - name: myimage | ||||||
|  |           persistentVolumeClaim: | ||||||
|  |             claimName: myclaim | ||||||
|  | @ -0,0 +1,197 @@ | ||||||
|  | Guest Operating System Information | ||||||
|  | ================================== | ||||||
|  | 
 | ||||||
|  | Guest operating system identity for the VirtualMachineInstance will be | ||||||
|  | provided by the label `kubevirt.io/os` : | ||||||
|  | 
 | ||||||
|  |     metadata: | ||||||
|  |       name: myvmi | ||||||
|  |       labels: | ||||||
|  |         kubevirt.io/os: win2k12r2 | ||||||
|  | 
 | ||||||
|  | The `kubevirt.io/os` label is based on the short OS identifier from | ||||||
|  | [libosinfo](https://libosinfo.org/) database. The following Short IDs | ||||||
|  | are currently supported: | ||||||
|  | 
 | ||||||
|  | <table> | ||||||
|  | <colgroup> | ||||||
|  | <col style="width: 20%" /> | ||||||
|  | <col style="width: 20%" /> | ||||||
|  | <col style="width: 20%" /> | ||||||
|  | <col style="width: 20%" /> | ||||||
|  | <col style="width: 20%" /> | ||||||
|  | </colgroup> | ||||||
|  | <thead> | ||||||
|  | <tr class="header"> | ||||||
|  | <th>Short ID</th> | ||||||
|  | <th>Name</th> | ||||||
|  | <th>Version</th> | ||||||
|  | <th>Family</th> | ||||||
|  | <th>ID</th> | ||||||
|  | </tr> | ||||||
|  | </thead> | ||||||
|  | <tbody> | ||||||
|  | <tr class="odd"> | ||||||
|  | <td><p><strong>win2k12r2</strong></p></td> | ||||||
|  | <td><p>Microsoft Windows Server 2012 R2</p></td> | ||||||
|  | <td><p>6.3</p></td> | ||||||
|  | <td><p>winnt</p></td> | ||||||
|  | <td><p><a href="http://microsoft.com/win/2k12r2">http://microsoft.com/win/2k12r2</a></p></td> | ||||||
|  | </tr> | ||||||
|  | </tbody> | ||||||
|  | </table> | ||||||
|  | 
 | ||||||
|  | Use with presets | ||||||
|  | ---------------- | ||||||
|  | 
 | ||||||
|  | A VirtualMachineInstancePreset representing an operating system with a | ||||||
|  | `kubevirt.io/os` label can be applied to any VirtualMachineInstance that | ||||||
|  | has a matching `kubevirt.io/os` label. | ||||||
|  | 
 | ||||||
|  | Default presets for the OS identifiers above are included in the current | ||||||
|  | release. | ||||||
|  | 
 | ||||||
|  | ### Windows Server 2012R2 `VirtualMachineInstancePreset` Example | ||||||
|  | 
 | ||||||
|  |     apiVersion: kubevirt.io/v1alpha3 | ||||||
|  |     kind: VirtualMachineInstancePreset | ||||||
|  |     metadata: | ||||||
|  |       name: windows-server-2012r2 | ||||||
|  |     spec: | ||||||
|  |       selector: | ||||||
|  |         matchLabels: | ||||||
|  |           kubevirt.io/os: win2k12r2 | ||||||
|  |       domain: | ||||||
|  |         cpu: | ||||||
|  |           cores: 2 | ||||||
|  |         resources: | ||||||
|  |           requests: | ||||||
|  |             memory: 2G | ||||||
|  |         features: | ||||||
|  |           acpi: {} | ||||||
|  |           apic: {} | ||||||
|  |           hyperv: | ||||||
|  |             relaxed: {} | ||||||
|  |             vapic: {} | ||||||
|  |             spinlocks: | ||||||
|  |               spinlocks: 8191 | ||||||
|  |         clock: | ||||||
|  |           utc: {} | ||||||
|  |           timer: | ||||||
|  |             hpet: | ||||||
|  |               present: false | ||||||
|  |             pit: | ||||||
|  |               tickPolicy: delay | ||||||
|  |             rtc: | ||||||
|  |               tickPolicy: catchup | ||||||
|  |             hyperv: {} | ||||||
|  |     --- | ||||||
|  |     apiVersion: kubevirt.io/v1alpha3 | ||||||
|  |     kind: VirtualMachineInstance | ||||||
|  |     metadata: | ||||||
|  |       labels: | ||||||
|  |         kubevirt.io/os: win2k12r2 | ||||||
|  |       name: windows2012r2 | ||||||
|  |     spec: | ||||||
|  |       terminationGracePeriodSeconds: 0 | ||||||
|  |       domain: | ||||||
|  |         firmware: | ||||||
|  |           uuid: 5d307ca9-b3ef-428c-8861-06e72d69f223 | ||||||
|  |         devices: | ||||||
|  |           disks: | ||||||
|  |           - name: server2012r2 | ||||||
|  |             disk: | ||||||
|  |               dev: vda | ||||||
|  |       volumes: | ||||||
|  |         - name: server2012r2 | ||||||
|  |           persistentVolumeClaim: | ||||||
|  |             claimName: my-windows-image | ||||||
|  | 
 | ||||||
|  | Once the `VirtualMachineInstancePreset` is applied to the | ||||||
|  | `VirtualMachineInstance`, the resulting resource would look like this: | ||||||
|  | 
 | ||||||
|  |     apiVersion: kubevirt.io/v1alpha3 | ||||||
|  |     kind: VirtualMachineInstance | ||||||
|  |     metadata: | ||||||
|  |       annotations: | ||||||
|  |         presets.virtualmachineinstances.kubevirt.io/presets-applied: kubevirt.io/v1alpha3 | ||||||
|  |         virtualmachineinstancepreset.kubevirt.io/windows-server-2012r2: kubevirt.io/v1alpha3 | ||||||
|  |       labels: | ||||||
|  |         kubevirt.io/os: win2k12r2 | ||||||
|  |       name: windows2012r2 | ||||||
|  |     spec: | ||||||
|  |       terminationGracePeriodSeconds: 0 | ||||||
|  |       domain: | ||||||
|  |         cpu: | ||||||
|  |           cores: 2 | ||||||
|  |         resources: | ||||||
|  |           requests: | ||||||
|  |             memory: 2G | ||||||
|  |         features: | ||||||
|  |           acpi: {} | ||||||
|  |           apic: {} | ||||||
|  |           hyperv: | ||||||
|  |             relaxed: {} | ||||||
|  |             vapic: {} | ||||||
|  |             spinlocks: | ||||||
|  |               spinlocks: 8191 | ||||||
|  |         clock: | ||||||
|  |           utc: {} | ||||||
|  |           timer: | ||||||
|  |             hpet: | ||||||
|  |               present: false | ||||||
|  |             pit: | ||||||
|  |               tickPolicy: delay | ||||||
|  |             rtc: | ||||||
|  |               tickPolicy: catchup | ||||||
|  |             hyperv: {} | ||||||
|  |         firmware: | ||||||
|  |           uuid: 5d307ca9-b3ef-428c-8861-06e72d69f223 | ||||||
|  |         devices: | ||||||
|  |           disks: | ||||||
|  |           - name: server2012r2 | ||||||
|  |             disk: | ||||||
|  |               dev: vda | ||||||
|  |       volumes: | ||||||
|  |         - name: server2012r2 | ||||||
|  |           persistentVolumeClaim: | ||||||
|  |             claimName: my-windows-image | ||||||
|  | 
 | ||||||
|  | For more information see [VirtualMachineInstancePresets](presets.md) | ||||||
|  | 
 | ||||||
|  | HyperV optimizations | ||||||
|  | -------------------- | ||||||
|  | 
 | ||||||
|  | KubeVirt supports quite a lot of so-called "HyperV enlightenments", | ||||||
|  | which are optimizations for Windows guests. Some of these optimizations | ||||||
|  | may require up-to-date host kernel support to work properly, or to | ||||||
|  | deliver the maximum performance gains. | ||||||
|  | 
 | ||||||
|  | KubeVirt can perform extra checks on hosts before running Hyper-V | ||||||
|  | enabled VMs, to make sure the host has no known issues with Hyper-V | ||||||
|  | support and properly exposes all the required features, so that optimal | ||||||
|  | performance can be expected. These checks are disabled by default for | ||||||
|  | backward compatibility and because they depend on | ||||||
|  | [node-feature-discovery](https://github.com/kubernetes-sigs/node-feature-discovery) | ||||||
|  | and on extra configuration. | ||||||
|  | 
 | ||||||
|  | To enable strict host checking, the user may expand the `feature-gates` | ||||||
|  | field in the kubevirt-config config map by adding `HypervStrictCheck` to | ||||||
|  | it. | ||||||
|  | 
 | ||||||
|  |     apiVersion: v1 | ||||||
|  |     kind: ConfigMap | ||||||
|  |     metadata: | ||||||
|  |       name: kubevirt-config | ||||||
|  |       namespace: kubevirt | ||||||
|  |       labels: | ||||||
|  |         kubevirt.io: "" | ||||||
|  |     data: | ||||||
|  |       feature-gates: "HypervStrictCheck" | ||||||
|  | 
 | ||||||
|  | Alternatively, users can edit an existing kubevirt-config: | ||||||
|  | 
 | ||||||
|  | `kubectl edit configmap kubevirt-config -n kubevirt` | ||||||
|  | 
 | ||||||
|  |     data: | ||||||
|  |       feature-gates: "HypervStrictCheck,CPUManager" | ||||||
|  | @ -0,0 +1,648 @@ | ||||||
|  | Interfaces and Networks | ||||||
|  | ======================= | ||||||
|  | 
 | ||||||
|  | Connecting a virtual machine to a network consists of two parts. First, | ||||||
|  | networks are specified in `spec.networks`. Then, interfaces backed by | ||||||
|  | the networks are added to the VM by specifying them in | ||||||
|  | `spec.domain.devices.interfaces`. | ||||||
|  | 
 | ||||||
|  | Each interface must have a corresponding network with the same name. | ||||||
|  | 
 | ||||||
|  | An `interface` defines a virtual network interface of a virtual machine | ||||||
|  | (also called the frontend). A `network` specifies the backend of an | ||||||
|  | `interface` and declares which logical or physical device it is | ||||||
|  | connected to. | ||||||
|  | 
 | ||||||
|  | There are multiple ways of configuring an `interface` as well as a | ||||||
|  | `network`. | ||||||
|  | 
 | ||||||
|  | All possible configuration options are available in the [Interface API | ||||||
|  | Reference](https://kubevirt.io/api-reference/master/definitions.html#_v1_interface) | ||||||
|  | and [Network API | ||||||
|  | Reference](https://kubevirt.io/api-reference/master/definitions.html#_v1_network). | ||||||
|  | 
 | ||||||
|  | Backend | ||||||
|  | ------- | ||||||
|  | 
 | ||||||
|  | Network backends are configured in `spec.networks`. A network must have | ||||||
|  | a unique name. Additional fields declare which logical or physical | ||||||
|  | device the network relates to. | ||||||
|  | 
 | ||||||
|  | Each network should declare its type by defining one of the following | ||||||
|  | fields: | ||||||
|  | 
 | ||||||
|  | <table> | ||||||
|  | <colgroup> | ||||||
|  | <col style="width: 50%" /> | ||||||
|  | <col style="width: 50%" /> | ||||||
|  | </colgroup> | ||||||
|  | <thead> | ||||||
|  | <tr class="header"> | ||||||
|  | <th>Type</th> | ||||||
|  | <th>Description</th> | ||||||
|  | </tr> | ||||||
|  | </thead> | ||||||
|  | <tbody> | ||||||
|  | <tr class="odd"> | ||||||
|  | <td><p><code>pod</code></p></td> | ||||||
|  | <td><p>Default Kubernetes network</p></td> | ||||||
|  | </tr> | ||||||
|  | <tr class="even"> | ||||||
|  | <td><p><code>multus</code></p></td> | ||||||
|  | <td><p>Secondary network provided using Multus</p></td> | ||||||
|  | </tr> | ||||||
|  | <tr class="odd"> | ||||||
|  | <td><p><code>genie</code></p></td> | ||||||
|  | <td><p>Secondary network provided using Genie</p></td> | ||||||
|  | </tr> | ||||||
|  | </tbody> | ||||||
|  | </table> | ||||||
|  | 
 | ||||||
|  | ### pod | ||||||
|  | 
 | ||||||
|  | A `pod` network represents the default pod `eth0` interface configured | ||||||
|  | by the cluster network solution that is present in each pod. | ||||||
|  | 
 | ||||||
|  |     kind: VM | ||||||
|  |     spec: | ||||||
|  |       domain: | ||||||
|  |         devices: | ||||||
|  |           interfaces: | ||||||
|  |             - name: default | ||||||
|  |               masquerade: {} | ||||||
|  |       networks: | ||||||
|  |       - name: default | ||||||
|  |         pod: {} # Stock pod network | ||||||
|  | 
 | ||||||
|  | ### multus | ||||||
|  | 
 | ||||||
|  | It is also possible to connect VMIs to secondary networks using | ||||||
|  | [Multus](https://github.com/intel/multus-cni). This assumes that multus | ||||||
|  | is installed across your cluster and a corresponding | ||||||
|  | `NetworkAttachmentDefinition` CRD was created. | ||||||
|  | 
 | ||||||
|  | The following example defines a network which uses the [ovs-cni | ||||||
|  | plugin](https://github.com/kubevirt/ovs-cni), which will connect the VMI | ||||||
|  | to Open vSwitch’s bridge `br1` and VLAN 100. Other CNI plugins such as | ||||||
|  | ptp, bridge, macvlan or Flannel might be used as well. For their | ||||||
|  | installation and usage refer to the respective project documentation. | ||||||
|  | 
 | ||||||
|  | First the `NetworkAttachmentDefinition` needs to be created. That is | ||||||
|  | usually done by an administrator. Users can then reference the | ||||||
|  | definition. | ||||||
|  | 
 | ||||||
|  |     apiVersion: "k8s.cni.cncf.io/v1" | ||||||
|  |     kind: NetworkAttachmentDefinition | ||||||
|  |     metadata: | ||||||
|  |       name: ovs-vlan-100 | ||||||
|  |     spec: | ||||||
|  |       config: '{ | ||||||
|  |           "cniVersion": "0.3.1", | ||||||
|  |           "type": "ovs", | ||||||
|  |           "bridge": "br1", | ||||||
|  |           "vlan": 100 | ||||||
|  |         }' | ||||||
|  | 
 | ||||||
|  | With the following definition, the VMI will be connected to the default | ||||||
|  | pod network and to the secondary Open vSwitch network. | ||||||
|  | 
 | ||||||
|  |     kind: VM | ||||||
|  |     spec: | ||||||
|  |       domain: | ||||||
|  |         devices: | ||||||
|  |           interfaces: | ||||||
|  |             - name: default | ||||||
|  |               masquerade: {} | ||||||
|  |               bootFileName: default_image.bin | ||||||
|  |               tftpServerName: tftp.example.com | ||||||
|  |               bootOrder: 1   # attempt to boot from an external tftp server | ||||||
|  |             - name: ovs-net | ||||||
|  |               bridge: {} | ||||||
|  |               bootOrder: 2   # if the first attempt fails, try to PXE-boot from this L2 network | ||||||
|  |       networks: | ||||||
|  |       - name: default | ||||||
|  |         pod: {} # Stock pod network | ||||||
|  |       - name: ovs-net | ||||||
|  |         multus: # Secondary multus network | ||||||
|  |           networkName: ovs-vlan-100 | ||||||
|  | 
 | ||||||
|  | It is also possible to define a multus network as the default pod | ||||||
|  | network with [Multus](https://github.com/intel/multus-cni). A version of | ||||||
|  | multus after this [Pull | ||||||
|  | Request](https://github.com/intel/multus-cni/pull/174) is required | ||||||
|  | (currently master). | ||||||
|  | 
 | ||||||
|  | **Note the following:** | ||||||
|  | 
 | ||||||
|  | -   A multus default network and a pod network type are mutually | ||||||
|  |     exclusive. | ||||||
|  | 
 | ||||||
|  | -   The virt-launcher pod that starts the VMI will **not** have the pod | ||||||
|  |     network configured. | ||||||
|  | 
 | ||||||
|  | -   The multus delegate chosen as default **must** return at least one | ||||||
|  |     IP address. | ||||||
|  | 
 | ||||||
|  | Create a `NetworkAttachmentDefinition` with IPAM. | ||||||
|  | 
 | ||||||
|  |     apiVersion: "k8s.cni.cncf.io/v1" | ||||||
|  |     kind: NetworkAttachmentDefinition | ||||||
|  |     metadata: | ||||||
|  |       name: macvlan-test | ||||||
|  |     spec: | ||||||
|  |       config: '{ | ||||||
|  |           "type": "macvlan", | ||||||
|  |           "master": "eth0", | ||||||
|  |           "mode": "bridge", | ||||||
|  |           "ipam": { | ||||||
|  |             "type": "host-local", | ||||||
|  |                   "subnet": "10.250.250.0/24" | ||||||
|  |           } | ||||||
|  |         }' | ||||||
|  | 
 | ||||||
|  | Define a VMI with a [Multus](https://github.com/intel/multus-cni) | ||||||
|  | network as the default. | ||||||
|  | 
 | ||||||
|  |     kind: VM | ||||||
|  |     spec: | ||||||
|  |       domain: | ||||||
|  |         devices: | ||||||
|  |           interfaces: | ||||||
|  |             - name: test1 | ||||||
|  |               bridge: {} | ||||||
|  |       networks: | ||||||
|  |       - name: test1 | ||||||
|  |         multus: # Multus network as default | ||||||
|  |           default: true | ||||||
|  |           networkName: macvlan-test | ||||||
|  | 
 | ||||||
|  | ### genie | ||||||
|  | 
 | ||||||
|  | It is also possible to connect VMIs to multiple networks using | ||||||
|  | [Genie](https://github.com/Huawei-PaaS/CNI-Genie). This assumes that | ||||||
|  | genie is installed across your cluster. | ||||||
|  | 
 | ||||||
|  | The following example defines a network which uses | ||||||
|  | [Flannel](https://github.com/coreos/flannel-cni) as the main network | ||||||
|  | provider and the [ovs-cni | ||||||
|  | plugin](https://github.com/kubevirt/ovs-cni) as the secondary one. The | ||||||
|  | OVS CNI will connect the VMI to Open vSwitch’s bridge `br1` and VLAN | ||||||
|  | 100. | ||||||
|  | 
 | ||||||
Other CNI plugins such as ptp, bridge, or macvlan may be used as well.
For their installation and usage, refer to the respective project
documentation.
|  | 
 | ||||||
Genie does not use the `NetworkAttachmentDefinition` CRD. Instead, it
uses the name of the underlying CNI in order to find the required
configuration. It does that by looking at the configuration files
under `/etc/cni/net.d/` and finding the file that has that network name
as the CNI type. Therefore, for the case described above, a
configuration file with the `ovs` type must exist; for example,
`/etc/cni/net.d/99-ovs-cni.conf` could contain:
|  | 
 | ||||||
|  |     { | ||||||
|  |       "cniVersion": "0.3.1", | ||||||
|  |       "type": "ovs", | ||||||
|  |       "bridge": "br1", | ||||||
|  |       "vlan": 100 | ||||||
|  |     } | ||||||
|  | 
 | ||||||
|  | Similarly to Multus, Genie’s configuration file must be the first one in | ||||||
|  | the `/etc/cni/net.d/` directory. This also means that Genie cannot be | ||||||
|  | used together with Multus on the same cluster. | ||||||
|  | 
 | ||||||
With the following definition, the VMI will be connected to the default
pod network and to the secondary Open vSwitch network.
|  | 
 | ||||||
|  |     kind: VM | ||||||
|  |     spec: | ||||||
|  |       domain: | ||||||
|  |         devices: | ||||||
|  |           interfaces: | ||||||
|  |             - name: default | ||||||
|  |               bridge: {} | ||||||
|  |             - name: ovs-net | ||||||
|  |               bridge: {} | ||||||
|  |       networks: | ||||||
|  |       - name: default | ||||||
|  |         genie: # Stock pod network | ||||||
|  |           networkName: flannel | ||||||
|  |       - name: ovs-net | ||||||
|  |         genie: # Secondary genie network | ||||||
|  |           networkName: ovs | ||||||
|  | 
 | ||||||
|  | Frontend | ||||||
|  | -------- | ||||||
|  | 
 | ||||||
|  | Network interfaces are configured in `spec.domain.devices.interfaces`. | ||||||
|  | They describe properties of virtual interfaces as “seen” inside guest | ||||||
|  | instances. The same network backend may be connected to a virtual | ||||||
|  | machine in multiple different ways, each with their own connectivity | ||||||
|  | guarantees and characteristics. | ||||||
|  | 
 | ||||||
Each interface should declare its type by defining one of the following
fields:
|  | 
 | ||||||
|  | <table> | ||||||
|  | <colgroup> | ||||||
|  | <col style="width: 50%" /> | ||||||
|  | <col style="width: 50%" /> | ||||||
|  | </colgroup> | ||||||
|  | <thead> | ||||||
|  | <tr class="header"> | ||||||
|  | <th>Type</th> | ||||||
|  | <th>Description</th> | ||||||
|  | </tr> | ||||||
|  | </thead> | ||||||
|  | <tbody> | ||||||
|  | <tr class="odd"> | ||||||
|  | <td><p><code>bridge</code></p></td> | ||||||
|  | <td><p>Connect using a linux bridge</p></td> | ||||||
|  | </tr> | ||||||
|  | <tr class="even"> | ||||||
|  | <td><p><code>slirp</code></p></td> | ||||||
|  | <td><p>Connect using QEMU user networking mode</p></td> | ||||||
|  | </tr> | ||||||
|  | <tr class="odd"> | ||||||
|  | <td><p><code>sriov</code></p></td> | ||||||
|  | <td><p>Pass through a SR-IOV PCI device via <code>vfio</code></p></td> | ||||||
|  | </tr> | ||||||
|  | <tr class="even"> | ||||||
|  | <td><p><code>masquerade</code></p></td> | ||||||
<td><p>Connect using iptables rules to NAT the traffic</p></td>
|  | </tr> | ||||||
|  | </tbody> | ||||||
|  | </table> | ||||||
|  | 
 | ||||||
|  | Each interface may also have additional configuration fields that modify | ||||||
|  | properties “seen” inside guest instances, as listed below: | ||||||
|  | 
 | ||||||
|  | <table> | ||||||
|  | <colgroup> | ||||||
|  | <col style="width: 25%" /> | ||||||
|  | <col style="width: 25%" /> | ||||||
|  | <col style="width: 25%" /> | ||||||
|  | <col style="width: 25%" /> | ||||||
|  | </colgroup> | ||||||
|  | <thead> | ||||||
|  | <tr class="header"> | ||||||
|  | <th>Name</th> | ||||||
|  | <th>Format</th> | ||||||
|  | <th>Default value</th> | ||||||
|  | <th>Description</th> | ||||||
|  | </tr> | ||||||
|  | </thead> | ||||||
|  | <tbody> | ||||||
|  | <tr class="odd"> | ||||||
|  | <td><p><code>model</code></p></td> | ||||||
|  | <td><p>One of: <code>e1000</code>, <code>e1000e</code>, <code>ne2k_pci</code>, <code>pcnet</code>, <code>rtl8139</code>, <code>virtio</code></p></td> | ||||||
|  | <td><p><code>virtio</code></p></td> | ||||||
|  | <td><p>NIC type</p></td> | ||||||
|  | </tr> | ||||||
|  | <tr class="even"> | ||||||
<td><p><code>macAddress</code></p></td>
|  | <td><p><code>ff:ff:ff:ff:ff:ff</code> or <code>FF-FF-FF-FF-FF-FF</code></p></td> | ||||||
|  | <td></td> | ||||||
|  | <td><p>MAC address as seen inside the guest system, for example: <code>de:ad:00:00:be:af</code></p></td> | ||||||
|  | </tr> | ||||||
|  | <tr class="odd"> | ||||||
<td><p><code>ports</code></p></td>
|  | <td></td> | ||||||
|  | <td><p>empty</p></td> | ||||||
|  | <td><p>List of ports to be forwarded to the virtual machine.</p></td> | ||||||
|  | </tr> | ||||||
|  | <tr class="even"> | ||||||
<td><p><code>pciAddress</code></p></td>
|  | <td><p><code>0000:81:00.1</code></p></td> | ||||||
|  | <td></td> | ||||||
|  | <td><p>Set network interface PCI address, for example: <code>0000:81:00.1</code></p></td> | ||||||
|  | </tr> | ||||||
|  | </tbody> | ||||||
|  | </table> | ||||||
|  | 
 | ||||||
|  |     kind: VM | ||||||
|  |     spec: | ||||||
|  |       domain: | ||||||
|  |         devices: | ||||||
|  |           interfaces: | ||||||
|  |             - name: default | ||||||
|  |               model: e1000 # expose e1000 NIC to the guest | ||||||
|  |               masquerade: {} # connect through a masquerade | ||||||
|  |               ports: | ||||||
|  |                - name: http | ||||||
|  |                  port: 80 | ||||||
|  |       networks: | ||||||
|  |       - name: default | ||||||
|  |         pod: {} | ||||||
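
Several optional fields from the table above can be combined on a
single interface. In the following sketch (the MAC and PCI address
values are illustrative), the guest sees a `virtio` NIC with a fixed
MAC address at a specific PCI slot:

    kind: VM
    spec:
      domain:
        devices:
          interfaces:
            - name: default
              masquerade: {}
              macAddress: de:ad:00:00:be:af # MAC address seen inside the guest
              pciAddress: 0000:81:00.1 # PCI address of the NIC inside the guest
      networks:
      - name: default
        pod: {}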
|  | 
 | ||||||
|  | > **Note:** If a specific MAC address is configured for a virtual | ||||||
|  | > machine interface, it’s passed to the underlying CNI plugin that is | ||||||
|  | > expected to configure the backend to allow for this particular MAC | ||||||
|  | > address. Not every plugin has native support for custom MAC addresses. | ||||||
|  | 
 | ||||||
|  | > **Note:** For some CNI plugins without native support for custom MAC | ||||||
|  | > addresses, there is a workaround, which is to use the `tuning` CNI | ||||||
|  | > plugin to adjust pod interface MAC address. This can be used as | ||||||
|  | > follows: | ||||||
|  | > | ||||||
|  | >     apiVersion: "k8s.cni.cncf.io/v1" | ||||||
|  | >     kind: NetworkAttachmentDefinition | ||||||
|  | >     metadata: | ||||||
|  | >       name: ptp-mac | ||||||
|  | >     spec: | ||||||
|  | >       config: '{ | ||||||
|  | >           "cniVersion": "0.3.1", | ||||||
|  | >           "name": "ptp-mac", | ||||||
|  | >           "plugins": [ | ||||||
|  | >             { | ||||||
|  | >               "type": "ptp", | ||||||
|  | >               "ipam": { | ||||||
|  | >                 "type": "host-local", | ||||||
|  | >                 "subnet": "10.1.1.0/24" | ||||||
|  | >               } | ||||||
|  | >             }, | ||||||
|  | >             { | ||||||
|  | >               "type": "tuning" | ||||||
|  | >             } | ||||||
|  | >           ] | ||||||
|  | >         }' | ||||||
|  | > | ||||||
> This approach may not work for all plugins. For example, OKD SDN is
> not compatible with the `tuning` plugin.
|  | > | ||||||
|  | > -   Plugins that handle custom MAC addresses natively: `ovs`. | ||||||
|  | > | ||||||
> -   Plugins that are compatible with the `tuning` plugin: `flannel`,
>     `ptp`, `bridge`.
|  | > | ||||||
|  | > -   Plugins that don’t need special MAC address treatment: `sriov` (in | ||||||
|  | >     `vfio` mode). | ||||||
|  | > | ||||||
|  | ### Ports | ||||||
|  | 
 | ||||||
Declare the ports on which the virtual machine listens.
|  | 
 | ||||||
|  | > **Note:** When using the slirp interface only the configured ports | ||||||
|  | > will be forwarded to the virtual machine. | ||||||
|  | 
 | ||||||
|  | <table> | ||||||
|  | <colgroup> | ||||||
|  | <col style="width: 25%" /> | ||||||
|  | <col style="width: 25%" /> | ||||||
|  | <col style="width: 25%" /> | ||||||
|  | <col style="width: 25%" /> | ||||||
|  | </colgroup> | ||||||
|  | <thead> | ||||||
|  | <tr class="header"> | ||||||
|  | <th>Name</th> | ||||||
|  | <th>Format</th> | ||||||
|  | <th>Required</th> | ||||||
|  | <th>Description</th> | ||||||
|  | </tr> | ||||||
|  | </thead> | ||||||
|  | <tbody> | ||||||
|  | <tr class="odd"> | ||||||
|  | <td><p><code>name</code></p></td> | ||||||
|  | <td></td> | ||||||
|  | <td><p>no</p></td> | ||||||
|  | <td><p>Name</p></td> | ||||||
|  | </tr> | ||||||
|  | <tr class="even"> | ||||||
|  | <td><p><code>port</code></p></td> | ||||||
|  | <td><p>1 - 65535</p></td> | ||||||
|  | <td><p>yes</p></td> | ||||||
|  | <td><p>Port to expose</p></td> | ||||||
|  | </tr> | ||||||
|  | <tr class="odd"> | ||||||
|  | <td><p><code>protocol</code></p></td> | ||||||
|  | <td><p>TCP,UDP</p></td> | ||||||
|  | <td><p>no</p></td> | ||||||
|  | <td><p>Connection protocol</p></td> | ||||||
|  | </tr> | ||||||
|  | </tbody> | ||||||
|  | </table> | ||||||
|  | 
 | ||||||
> **Tip:** Use the `e1000` model if your guest image doesn’t ship with
> virtio drivers.
|  | 
 | ||||||
|  | > **Note:** Windows machines need the latest virtio network driver to | ||||||
|  | > configure the correct MTU on the interface. | ||||||
|  | 
 | ||||||
|  | If `spec.domain.devices.interfaces` is omitted, the virtual machine is | ||||||
|  | connected using the default pod network interface of `bridge` type. If | ||||||
|  | you’d like to have a virtual machine instance without any network | ||||||
|  | connectivity, you can use the `autoattachPodInterface` field as follows: | ||||||
|  | 
 | ||||||
|  |     kind: VM | ||||||
|  |     spec: | ||||||
|  |       domain: | ||||||
|  |         devices: | ||||||
|  |           autoattachPodInterface: false | ||||||
|  | 
 | ||||||
|  | ### bridge | ||||||
|  | 
 | ||||||
|  | In `bridge` mode, virtual machines are connected to the network backend | ||||||
|  | through a linux “bridge”. The pod network IPv4 address is delegated to | ||||||
|  | the virtual machine via DHCPv4. The virtual machine should be configured | ||||||
|  | to use DHCP to acquire IPv4 addresses. | ||||||
|  | 
 | ||||||
|  | > **Note:** If a specific MAC address is not configured in the virtual | ||||||
|  | > machine interface spec the MAC address from the relevant pod interface | ||||||
|  | > is delegated to the virtual machine. | ||||||
|  | 
 | ||||||
|  |     kind: VM | ||||||
|  |     spec: | ||||||
|  |       domain: | ||||||
|  |         devices: | ||||||
|  |           interfaces: | ||||||
|  |             - name: red | ||||||
|  |               bridge: {} # connect through a bridge | ||||||
|  |       networks: | ||||||
|  |       - name: red | ||||||
|  |         multus: | ||||||
|  |           networkName: red | ||||||
|  | 
 | ||||||
|  | At this time, `bridge` mode doesn’t support additional configuration | ||||||
|  | fields. | ||||||
|  | 
 | ||||||
|  | > **Note:** due to IPv4 address delegation, in `bridge` mode the pod | ||||||
|  | > doesn’t have an IP address configured, which may introduce issues with | ||||||
|  | > third-party solutions that may rely on it. For example, Istio may not | ||||||
|  | > work in this mode. | ||||||
|  | 
 | ||||||
> **Note:** The cluster admin can forbid the use of the `bridge`
> interface type for pod networks via a designated configuration flag.
> To do so, the admin should set the following option to `false`:
|  | 
 | ||||||
|  |     apiVersion: v1 | ||||||
|  |     kind: ConfigMap | ||||||
|  |     metadata: | ||||||
|  |       name: kubevirt-config | ||||||
|  |       namespace: kubevirt | ||||||
|  |       labels: | ||||||
|  |         kubevirt.io: "" | ||||||
|  |     data: | ||||||
|  |       permitBridgeInterfaceOnPodNetwork: "false" | ||||||
|  | 
 | ||||||
> **Note:** binding the pod network using the `bridge` interface type
> may cause issues. Besides the third-party issue mentioned in the above
> note, live migration is not allowed with a pod network binding of
> `bridge` interface type, and some CNI plugins might not allow the use
> of a custom MAC address for your VM instances. If you think you may be
> affected by any of the issues mentioned above, consider changing the
> default interface type to `masquerade` and disabling the `bridge`
> type for pod network, as shown in the example above.
|  | 
 | ||||||
|  | ### slirp | ||||||
|  | 
 | ||||||
|  | In `slirp` mode, virtual machines are connected to the network backend | ||||||
|  | using QEMU user networking mode. In this mode, QEMU allocates internal | ||||||
|  | IP addresses to virtual machines and hides them behind NAT. | ||||||
|  | 
 | ||||||
|  |     kind: VM | ||||||
|  |     spec: | ||||||
|  |       domain: | ||||||
|  |         devices: | ||||||
|  |           interfaces: | ||||||
|  |             - name: red | ||||||
|  |               slirp: {} # connect using SLIRP mode | ||||||
|  |       networks: | ||||||
|  |       - name: red | ||||||
|  |         pod: {} | ||||||
|  | 
 | ||||||
|  | At this time, `slirp` mode doesn’t support additional configuration | ||||||
|  | fields. | ||||||
|  | 
 | ||||||
|  | > **Note:** in `slirp` mode, the only supported protocols are TCP and | ||||||
|  | > UDP. ICMP is *not* supported. | ||||||
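
Because only declared ports are forwarded in `slirp` mode, a guest that
serves HTTP traffic needs its port listed explicitly. A minimal sketch:

    kind: VM
    spec:
      domain:
        devices:
          interfaces:
            - name: red
              slirp: {}
              ports:
                - name: http # optional port name
                  port: 80 # forward incoming traffic on port 80
      networks:
      - name: red
        pod: {}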
|  | 
 | ||||||
|  | More information about SLIRP mode can be found in [QEMU | ||||||
|  | Wiki](https://wiki.qemu.org/Documentation/Networking#User_Networking_.28SLIRP.29). | ||||||
|  | 
 | ||||||
|  | ### masquerade | ||||||
|  | 
 | ||||||
|  | In `masquerade` mode, KubeVirt allocates internal IP addresses to | ||||||
|  | virtual machines and hides them behind NAT. All the traffic exiting | ||||||
|  | virtual machines is "NAT’ed" using pod IP addresses. A virtual machine | ||||||
|  | should be configured to use DHCP to acquire IPv4 addresses. | ||||||
|  | 
 | ||||||
|  | To allow traffic into virtual machines, the template `ports` section of | ||||||
|  | the interface should be configured as follows. | ||||||
|  | 
 | ||||||
|  |     kind: VM | ||||||
|  |     spec: | ||||||
|  |       domain: | ||||||
|  |         devices: | ||||||
|  |           interfaces: | ||||||
|  |             - name: red | ||||||
|  |               masquerade: {} # connect using masquerade mode | ||||||
|  |               ports: | ||||||
|  |                 - port: 80 # allow incoming traffic on port 80 to get into the virtual machine | ||||||
|  |       networks: | ||||||
|  |       - name: red | ||||||
|  |         pod: {} | ||||||
|  | 
 | ||||||
|  | > **Note:** Masquerade is only allowed to connect to the pod network. | ||||||
|  | 
 | ||||||
|  | > **Note:** The network CIDR can be configured in the pod network | ||||||
|  | > section using the `vmNetworkCIDR` attribute. | ||||||
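
For example, the following sketch overrides the internal CIDR used for
the virtual machine (the CIDR value is illustrative):

    kind: VM
    spec:
      domain:
        devices:
          interfaces:
            - name: red
              masquerade: {}
      networks:
      - name: red
        pod:
          vmNetworkCIDR: 10.11.12.0/24 # internal CIDR used instead of the default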
|  | 
 | ||||||
|  | ### virtio-net multiqueue | ||||||
|  | 
 | ||||||
Setting `networkInterfaceMultiqueue` to `true` enables the multi-queue
functionality, increasing the number of vhost queues, for interfaces
configured with a `virtio` model.
|  | 
 | ||||||
|  |     kind: VM | ||||||
|  |     spec: | ||||||
|  |       domain: | ||||||
|  |         devices: | ||||||
|  |           networkInterfaceMultiqueue: true | ||||||
|  | 
 | ||||||
Users of a virtual machine with multiple vCPUs may benefit from
increased network throughput and performance.
|  | 
 | ||||||
Currently, the number of queues is determined by the number of vCPUs of
a VM. This is because multi-queue support optimizes RX interrupt
affinity and TX queue selection in order to make a specific queue
private to a specific vCPU.
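
Since the queue count follows the vCPU count, multiqueue is typically
combined with an explicit CPU topology. The following sketch requests
four vCPUs and therefore four queues per `virtio` interface:

    kind: VM
    spec:
      domain:
        cpu:
          cores: 4 # four vCPUs result in four virtio-net queues
        devices:
          networkInterfaceMultiqueue: true
          interfaces:
            - name: default
              masquerade: {} # the default NIC model is virtio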
|  | 
 | ||||||
Without enabling the feature, network performance does not scale as the
number of vCPUs increases. Guests cannot transmit or receive packets in
parallel, as virtio-net has only one TX and RX queue.
|  | 
 | ||||||
> **Note:** Although the virtio-net multiqueue feature provides a
> performance benefit, it has some limitations and therefore should not
> be enabled unconditionally.
|  | 
 | ||||||
|  | #### Some known limitations | ||||||
|  | 
 | ||||||
-   The guest OS is limited to ~200 MSI vectors. Each NIC queue requires
    an MSI vector, as does any virtio device or assigned PCI device.
    Defining an instance with multiple virtio NICs and many vCPUs might
    hit the guest MSI limit.
|  | 
 | ||||||
-   virtio-net multiqueue works well for incoming traffic, but can
    occasionally cause a performance degradation for outgoing traffic.
    Specifically, this may occur when sending packets under 1,500 bytes
    over a Transmission Control Protocol (TCP) stream.
|  | 
 | ||||||
-   Enabling virtio-net multiqueue increases the total network
    throughput, but it also increases CPU consumption.
|  | 
 | ||||||
-   Enabling virtio-net multiqueue in the host QEMU config does not
    enable the functionality in the guest OS. The guest OS administrator
    needs to manually turn it on for each guest NIC that requires this
    feature, using ethtool.
|  | 
 | ||||||
-   MSI vectors are still consumed (wasted) if multiqueue is enabled in
    the host but has not been enabled in the guest OS by the
    administrator.
|  | 
 | ||||||
-   When the number of vNICs in a guest instance is proportional to the
    number of vCPUs, enabling the multiqueue feature is less important.
|  | 
 | ||||||
|  | -   Each virtio-net queue consumes 64 KB of kernel memory for the vhost | ||||||
|  |     driver. | ||||||
|  | 
 | ||||||
> **Note:** Virtio-net multiqueue should be enabled in the guest OS
> manually, using ethtool. For example:
> `ethtool -L <NIC> combined <num_of_queues>`
|  | 
 | ||||||
For more information, please refer to [KVM/QEMU
MultiQueue](http://www.linux-kvm.org/page/Multiqueue).
|  | 
 | ||||||
|  | ### sriov | ||||||
|  | 
 | ||||||
|  | In `sriov` mode, virtual machines are directly exposed to an SR-IOV PCI | ||||||
|  | device, usually allocated by [Intel SR-IOV device | ||||||
|  | plugin](https://github.com/intel/sriov-network-device-plugin). The | ||||||
|  | device is passed through into the guest operating system as a host | ||||||
|  | device, using the | ||||||
|  | [vfio](https://www.kernel.org/doc/Documentation/vfio.txt) userspace | ||||||
|  | interface, to maintain high networking performance. | ||||||
|  | 
 | ||||||
|  |     kind: VM | ||||||
|  |     spec: | ||||||
|  |       domain: | ||||||
|  |         devices: | ||||||
|  |           interfaces: | ||||||
|  |             - name: sriov-net | ||||||
|  |               sriov: {} | ||||||
|  |       networks: | ||||||
|  |       - name: sriov-net | ||||||
|  |         multus: | ||||||
|  |           networkName: sriov-net-crd | ||||||
|  | 
 | ||||||
To simplify the procedure, use the [OpenShift SR-IOV
operator](https://github.com/openshift/sriov-network-operator) to deploy
and configure SR-IOV components in your cluster. For instructions on how
to use the operator, please refer to [its
documentation](https://github.com/openshift/sriov-network-operator/blob/master/doc/quickstart.md).
|  | 
 | ||||||
> **Note:** KubeVirt relies on the VFIO userspace driver to pass PCI
> devices into the VMI guest. Because of that, when configuring SR-IOV
> operator policies, make sure you define a pool of VF resources that
> uses `driver: vfio`.
|  | Presets | ||||||
|  | ======= | ||||||
|  | 
 | ||||||
|  | What is a VirtualMachineInstancePreset? | ||||||
|  | --------------------------------------- | ||||||
|  | 
 | ||||||
|  | `VirtualMachineInstancePresets` are an extension to general | ||||||
|  | `VirtualMachineInstance` configuration behaving much like `PodPresets` | ||||||
|  | from Kubernetes. When a `VirtualMachineInstance` is created, any | ||||||
|  | applicable `VirtualMachineInstancePresets` will be applied to the | ||||||
|  | existing spec for the `VirtualMachineInstance`. This allows for re-use | ||||||
|  | of common settings that should apply to multiple | ||||||
|  | `VirtualMachineInstances`. | ||||||
|  | 
 | ||||||
|  | Create a VirtualMachineInstancePreset | ||||||
|  | ------------------------------------- | ||||||
|  | 
 | ||||||
|  | You can describe a `VirtualMachineInstancePreset` in a YAML file. For | ||||||
|  | example, the `vmi-preset.yaml` file below describes a | ||||||
|  | `VirtualMachineInstancePreset` that requests a `VirtualMachineInstance` | ||||||
|  | be created with a resource request for 64M of RAM. | ||||||
|  | 
 | ||||||
|  |     apiVersion: kubevirt.io/v1alpha3 | ||||||
|  |     kind: VirtualMachineInstancePreset | ||||||
|  |     metadata: | ||||||
|  |       name: small-qemu | ||||||
|  |     spec: | ||||||
|  |       selector: | ||||||
|  |         matchLabels: | ||||||
|  |           kubevirt.io/size: small | ||||||
|  |       domain: | ||||||
|  |         resources: | ||||||
|  |           requests: | ||||||
|  |             memory: 64M | ||||||
|  | 
 | ||||||
Create a `VirtualMachineInstancePreset` based on that YAML file:

    kubectl create -f vmi-preset.yaml
|  | 
 | ||||||
|  | ### Required Fields | ||||||
|  | 
 | ||||||
|  | As with most Kubernetes resources, a `VirtualMachineInstancePreset` | ||||||
|  | requires `apiVersion`, `kind` and `metadata` fields. | ||||||
|  | 
 | ||||||
Additionally, `VirtualMachineInstancePresets` also need a `spec`
section. While not technically required by the syntax, it is strongly
recommended to include a `Selector` in the `spec` section; otherwise a
`VirtualMachineInstancePreset` will match all `VirtualMachineInstances`
in a namespace.
|  | 
 | ||||||
|  | ### VirtualMachine Selector | ||||||
|  | 
 | ||||||
|  | KubeVirt uses Kubernetes `Labels` and `Selectors` to determine which | ||||||
|  | `VirtualMachineInstancePresets` apply to a given | ||||||
|  | `VirtualMachineInstance`, similarly to how `PodPresets` work in | ||||||
|  | Kubernetes. If a setting from a `VirtualMachineInstancePreset` is | ||||||
|  | applied to a `VirtualMachineInstance`, the `VirtualMachineInstance` will | ||||||
|  | be marked with an Annotation upon completion. | ||||||
|  | 
 | ||||||
|  | Any domain structure can be listed in the `spec` of a | ||||||
|  | `VirtualMachineInstancePreset`, e.g. Clock, Features, Memory, CPU, or | ||||||
|  | Devices such as network interfaces. All elements of the `spec` section | ||||||
|  | of a `VirtualMachineInstancePreset` will be applied to the | ||||||
|  | `VirtualMachineInstance`. | ||||||
|  | 
 | ||||||
|  | Once a `VirtualMachineInstancePreset` is successfully applied to a | ||||||
|  | `VirtualMachineInstance`, the `VirtualMachineInstance` will be marked | ||||||
|  | with an annotation to indicate that it was applied. If a conflict occurs | ||||||
|  | while a `VirtualMachineInstancePreset` is being applied, that portion of | ||||||
|  | the `VirtualMachineInstancePreset` will be skipped. | ||||||
|  | 
 | ||||||
Any valid `Label` can be matched against, but as a general rule of
thumb it is suggested to use an os/shortname label, e.g.
`kubevirt.io/os: rhel7`.
|  | 
 | ||||||
|  | ### Updating a VirtualMachineInstancePreset | ||||||
|  | 
 | ||||||
If a `VirtualMachineInstancePreset` is modified, changes will *not* be
applied to existing `VirtualMachineInstances`. This applies both to the
`Selector` indicating which `VirtualMachineInstances` should be matched,
and to the `Domain` section which lists the settings that should be
applied to a `VirtualMachineInstance`.
|  | 
 | ||||||
|  | ### Overrides | ||||||
|  | 
 | ||||||
|  | `VirtualMachineInstancePresets` use a similar conflict resolution | ||||||
|  | strategy to Kubernetes `PodPresets`. If a portion of the domain spec is | ||||||
|  | present in both a `VirtualMachineInstance` and a | ||||||
|  | `VirtualMachineInstancePreset` and both resources have the identical | ||||||
|  | information, then creation of the `VirtualMachineInstance` will continue | ||||||
|  | normally. If however there is a difference between the resources, an | ||||||
|  | Event will be created indicating which `DomainSpec` element of which | ||||||
|  | `VirtualMachineInstancePreset` was overridden. For example: If both the | ||||||
|  | `VirtualMachineInstance` and `VirtualMachineInstancePreset` define a | ||||||
|  | `CPU`, but use a different number of `Cores`, KubeVirt will note the | ||||||
|  | difference. | ||||||
|  | 
 | ||||||
|  | If any settings from the `VirtualMachineInstancePreset` were | ||||||
|  | successfully applied, the `VirtualMachineInstance` will be annotated. | ||||||
|  | 
 | ||||||
|  | In the event that there is a difference between the `Domains` of a | ||||||
|  | `VirtualMachineInstance` and `VirtualMachineInstancePreset`, KubeVirt | ||||||
|  | will create an `Event`. `kubectl get events` can be used to show all | ||||||
|  | `Events`. For example: | ||||||
|  | 
 | ||||||
|  |     $ kubectl get events | ||||||
|  |     .... | ||||||
|  |     Events: | ||||||
|  |       FirstSeen                         LastSeen                        Count From                              SubobjectPath                Reason    Message | ||||||
|  |       2m          2m           1         myvmi.1515bbb8d397f258                       VirtualMachineInstance                                     Warning   Conflict                  virtualmachineinstance-preset-controller   Unable to apply VirtualMachineInstancePreset 'example-preset': spec.cpu: &{6} != &{4} | ||||||
|  | 
 | ||||||
|  | ### Usage | ||||||
|  | 
 | ||||||
`VirtualMachineInstancePresets` are namespaced resources, so they
should be created in the same namespace as the `VirtualMachineInstances`
that will use them:
|  | 
 | ||||||
|  | `kubectl create -f <preset>.yaml [--namespace <namespace>]` | ||||||
|  | 
 | ||||||
KubeVirt will determine which `VirtualMachineInstancePresets` apply to a
particular `VirtualMachineInstance` by matching `Labels`. For example:
|  | 
 | ||||||
    apiVersion: kubevirt.io/v1alpha3
    kind: VirtualMachineInstancePreset
    metadata:
      name: example-preset
    spec:
      selector:
        matchLabels:
          kubevirt.io/os: win10
      ...
|  | 
 | ||||||
would match any `VirtualMachineInstance` in the same namespace with a
`Label` of `kubevirt.io/os: win10`. For example:
|  | 
 | ||||||
|  |     apiVersion: kubevirt.io/v1alpha3 | ||||||
|  |     kind: VirtualMachineInstance | ||||||
|  |     metadata: | ||||||
|  |       name: myvmi | ||||||
|  |       labels: | ||||||
|  |         kubevirt.io/os: win10 | ||||||
|  |       ... | ||||||
|  | 
 | ||||||
|  | ### Conflicts | ||||||
|  | 
 | ||||||
When multiple `VirtualMachineInstancePresets` match a particular
`VirtualMachineInstance`, if they specify the same settings within a
Domain, those settings must match. If two
`VirtualMachineInstancePresets` have conflicting settings (e.g. for the
number of CPU cores requested), an error will occur: the
`VirtualMachineInstance` will enter the `Failed` state, and a `Warning`
event will be emitted explaining which settings of which
`VirtualMachineInstancePresets` were problematic.
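
For example, the following two presets (names and values are
illustrative) both match the label `kubevirt.io/size: small` but
request different core counts, so a `VirtualMachineInstance` carrying
that label would enter the `Failed` state:

    apiVersion: kubevirt.io/v1alpha3
    kind: VirtualMachineInstancePreset
    metadata:
      name: preset-a
    spec:
      selector:
        matchLabels:
          kubevirt.io/size: small
      domain:
        cpu:
          cores: 4
    ---
    apiVersion: kubevirt.io/v1alpha3
    kind: VirtualMachineInstancePreset
    metadata:
      name: preset-b
    spec:
      selector:
        matchLabels:
          kubevirt.io/size: small
      domain:
        cpu:
          cores: 6 # conflicts with preset-a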
|  | 
 | ||||||
|  | ### Matching Multiple `VirtualMachineInstances` | ||||||
|  | 
 | ||||||
|  | The main use case for `VirtualMachineInstancePresets` is to create | ||||||
|  | re-usable settings that can be applied across various machines. Multiple | ||||||
|  | methods are available to match the labels of a `VirtualMachineInstance` | ||||||
|  | using selectors. | ||||||
|  | 
 | ||||||
-   matchLabels: Each `VirtualMachineInstance` can use a specific label
    shared by all instances.

-   matchExpressions: Logical operators for sets can be used to match
    multiple labels.
|  | 
 | ||||||
|  | Using matchLabels, the label used in the `VirtualMachineInstancePreset` | ||||||
|  | must match one of the labels of the `VirtualMachineInstance`: | ||||||
|  | 
 | ||||||
|  |     selector: | ||||||
|  |       matchLabels: | ||||||
|  |         kubevirt.io/memory: large | ||||||
|  | 
 | ||||||
|  | would match | ||||||
|  | 
 | ||||||
|  |     metadata: | ||||||
|  |       labels: | ||||||
|  |         kubevirt.io/memory: large | ||||||
|  |         kubevirt.io/os: win10 | ||||||
|  | 
 | ||||||
|  | or | ||||||
|  | 
 | ||||||
|  |     metadata: | ||||||
|  |       labels: | ||||||
|  |         kubevirt.io/memory: large | ||||||
|  |         kubevirt.io/os: fedora27 | ||||||
|  | 
 | ||||||
Using matchExpressions allows for matching multiple labels of
`VirtualMachineInstances` without needing to explicitly list a label.
|  | 
 | ||||||
|  |     selector: | ||||||
|  |       matchExpressions: | ||||||
|  |         - {key: kubevirt.io/os, operator: In, values: [fedora27, fedora26]} | ||||||
|  | 
 | ||||||
|  | would match both: | ||||||
|  | 
 | ||||||
|  |     metadata: | ||||||
|  |       labels: | ||||||
|  |         kubevirt.io/os: fedora26 | ||||||
|  | 
 | ||||||
|  |     metadata: | ||||||
|  |       labels: | ||||||
|  |         kubevirt.io/os: fedora27 | ||||||
|  | 
 | ||||||
|  | The Kubernetes | ||||||
|  | [documentation](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/) | ||||||
|  | has a detailed explanation. Examples are provided below. | ||||||
|  | 
 | ||||||
|  | ### Exclusions | ||||||
|  | 
 | ||||||
Since `VirtualMachineInstancePresets` use `Selectors` that indicate
which `VirtualMachineInstances` their settings should apply to, there
needs to be a mechanism by which `VirtualMachineInstances` can opt out
of `VirtualMachineInstancePresets` altogether. This is done using an
annotation:
|  | 
 | ||||||
|  |     kind: VirtualMachineInstance | ||||||
|  |     metadata: | ||||||
|  |       name: myvmi | ||||||
|  |       annotations: | ||||||
|  |         virtualmachineinstancepresets.admission.kubevirt.io/exclude: "true" | ||||||
|  |       ... | ||||||
|  | 
 | ||||||
|  | Examples | ||||||
|  | -------- | ||||||
|  | 
 | ||||||
|  | ### Simple `VirtualMachineInstancePreset` Example | ||||||
|  | 
 | ||||||
|  |     apiVersion: kubevirt.io/v1alpha3 | ||||||
|  |     kind: VirtualMachineInstancePreset | ||||||
|  |     version: v1alpha3 | ||||||
|  |     metadata: | ||||||
|  |       name: example-preset | ||||||
|  |     spec: | ||||||
|  |       selector: | ||||||
|  |         matchLabels: | ||||||
|  |           kubevirt.io/os: win10 | ||||||
|  |       domain: | ||||||
|  |         features: | ||||||
|  |           acpi: {} | ||||||
|  |           apic: {} | ||||||
|  |           hyperv: | ||||||
|  |             relaxed: {} | ||||||
|  |             vapic: {} | ||||||
|  |             spinlocks: | ||||||
|  |               spinlocks: 8191 | ||||||
|  |     --- | ||||||
|  |     apiVersion: kubevirt.io/v1alpha3 | ||||||
|  |     kind: VirtualMachineInstance | ||||||
|  |     version: v1 | ||||||
|  |     metadata: | ||||||
|  |       name: myvmi | ||||||
|  |       labels: | ||||||
|  |         kubevirt.io/os: win10 | ||||||
|  |     spec: | ||||||
|  |       domain: | ||||||
|  |         firmware: | ||||||
|  |           uuid: c8f99fc8-20f5-46c4-85e5-2b841c547cef | ||||||
|  | 
 | ||||||
|  | Once the `VirtualMachineInstancePreset` is applied to the | ||||||
|  | `VirtualMachineInstance`, the resulting resource would look like this: | ||||||
|  | 
 | ||||||
|  |     apiVersion: kubevirt.io/v1alpha3 | ||||||
|  |     kind: VirtualMachineInstance | ||||||
|  |     metadata: | ||||||
|  |       annotations: | ||||||
|  |         presets.virtualmachineinstances.kubevirt.io/presets-applied: kubevirt.io/v1alpha3 | ||||||
|  |         virtualmachineinstancepreset.kubevirt.io/example-preset: kubevirt.io/v1alpha3 | ||||||
|  |       labels: | ||||||
|  |         kubevirt.io/os: win10 | ||||||
|  |         kubevirt.io/nodeName: master | ||||||
|  |       name: myvmi | ||||||
|  |       namespace: default | ||||||
|  |     spec: | ||||||
|  |       domain: | ||||||
|  |         devices: {} | ||||||
|  |         features: | ||||||
|  |           acpi: | ||||||
|  |             enabled: true | ||||||
|  |           apic: | ||||||
|  |             enabled: true | ||||||
|  |           hyperv: | ||||||
|  |             relaxed: | ||||||
|  |               enabled: true | ||||||
|  |             spinlocks: | ||||||
|  |               enabled: true | ||||||
|  |               spinlocks: 8191 | ||||||
|  |             vapic: | ||||||
|  |               enabled: true | ||||||
|  |         firmware: | ||||||
|  |           uuid: c8f99fc8-20f5-46c4-85e5-2b841c547cef | ||||||
|  |         machine: | ||||||
|  |           type: q35 | ||||||
|  |         resources: | ||||||
|  |           requests: | ||||||
|  |             memory: 8Mi | ||||||
|  | 
 | ||||||
|  | ### Conflict Example | ||||||
|  | 
 | ||||||
This is an example of a merge conflict: the `VirtualMachineInstance`
and the `VirtualMachineInstancePreset` request different numbers of
CPU cores.
|  | 
 | ||||||
|  |     apiVersion: kubevirt.io/v1alpha3 | ||||||
|  |     kind: VirtualMachineInstancePreset | ||||||
|  |     version: v1alpha3 | ||||||
|  |     metadata: | ||||||
|  |       name: example-preset | ||||||
|  |     spec: | ||||||
|  |       selector: | ||||||
|  |         matchLabels: | ||||||
|  |           kubevirt.io/flavor: default-features | ||||||
|  |       domain: | ||||||
|  |         cpu: | ||||||
|  |           cores: 4 | ||||||
|  |     --- | ||||||
|  |     apiVersion: kubevirt.io/v1alpha3 | ||||||
|  |     kind: VirtualMachineInstance | ||||||
|  |     version: v1 | ||||||
|  |     metadata: | ||||||
|  |       name: myvmi | ||||||
|  |       labels: | ||||||
|  |         kubevirt.io/flavor: default-features | ||||||
|  |     spec: | ||||||
|  |       domain: | ||||||
|  |         cpu: | ||||||
|  |           cores: 6 | ||||||
|  | 
 | ||||||
In this case the `VirtualMachineInstance` spec remains unmodified, and
the conflict is reported as an event. Use `kubectl get events` to show it.
|  | 
 | ||||||
|  |     apiVersion: kubevirt.io/v1alpha3 | ||||||
|  |     kind: VirtualMachineInstance | ||||||
|  |     metadata: | ||||||
|  |       annotations: | ||||||
|  |         presets.virtualmachineinstances.kubevirt.io/presets-applied: kubevirt.io/v1alpha3 | ||||||
|  |       generation: 0 | ||||||
|  |       labels: | ||||||
|  |         kubevirt.io/flavor: default-features | ||||||
|  |       name: myvmi | ||||||
|  |       namespace: default | ||||||
|  |     spec: | ||||||
|  |       domain: | ||||||
|  |         cpu: | ||||||
|  |           cores: 6 | ||||||
|  |         devices: {} | ||||||
|  |         machine: | ||||||
|  |           type: "" | ||||||
|  |         resources: {} | ||||||
|  |     status: {} | ||||||
|  | 
 | ||||||
Calling `kubectl get events` would show a line like:

    2m    2m    1    myvmi.1515bbb8d397f258    VirtualMachineInstance    Warning    Conflict    virtualmachineinstance-preset-controller    Unable to apply VirtualMachineInstancePreset 'example-preset': spec.cpu: &{6} != &{4}
|  | 
 | ||||||
|  | ### Matching Multiple VirtualMachineInstances Using MatchLabels | ||||||
|  | 
 | ||||||
|  | These `VirtualMachineInstances` have multiple labels, one that is unique | ||||||
|  | and one that is shared. | ||||||
|  | 
 | ||||||
|  | Note: This example breaks from the convention of using os-shortname as a | ||||||
|  | `Label` for demonstration purposes. | ||||||
|  | 
 | ||||||
|  |     apiVersion: kubevirt.io/v1alpha3 | ||||||
|  |     kind: VirtualMachineInstancePreset | ||||||
|  |     metadata: | ||||||
|  |       name: twelve-cores | ||||||
|  |     spec: | ||||||
|  |       selector: | ||||||
|  |         matchLabels: | ||||||
|  |           kubevirt.io/cpu: dodecacore | ||||||
|  |       domain: | ||||||
|  |         cpu: | ||||||
|  |           cores: 12 | ||||||
|  |     --- | ||||||
|  |     apiVersion: kubevirt.io/v1alpha3 | ||||||
|  |     kind: VirtualMachineInstance | ||||||
|  |     metadata: | ||||||
|  |       name: windows-10 | ||||||
|  |       labels: | ||||||
|  |         kubevirt.io/os: win10 | ||||||
|  |         kubevirt.io/cpu: dodecacore | ||||||
|  |     spec: | ||||||
|  |     --- | ||||||
|  |     apiVersion: kubevirt.io/v1alpha3 | ||||||
|  |     kind: VirtualMachineInstance | ||||||
|  |     metadata: | ||||||
|  |       name: windows-7 | ||||||
|  |       labels: | ||||||
|  |         kubevirt.io/os: win7 | ||||||
|  |         kubevirt.io/cpu: dodecacore | ||||||
|  |     spec: | ||||||
|  |       terminationGracePeriodSeconds: 0 | ||||||
|  | 
 | ||||||
|  | Adding this `VirtualMachineInstancePreset` and these | ||||||
|  | `VirtualMachineInstances` will result in: | ||||||
|  | 
 | ||||||
|  |     apiVersion: kubevirt.io/v1alpha3 | ||||||
|  |     kind: VirtualMachineInstance | ||||||
|  |     metadata: | ||||||
|  |       annotations: | ||||||
|  |         presets.virtualmachineinstances.kubevirt.io/presets-applied: kubevirt.io/v1alpha3 | ||||||
|  |         virtualmachineinstancepreset.kubevirt.io/twelve-cores: kubevirt.io/v1alpha3 | ||||||
|  |       labels: | ||||||
|  |         kubevirt.io/cpu: dodecacore | ||||||
|  |         kubevirt.io/os: win10 | ||||||
|  |       name: windows-10 | ||||||
|  |     spec: | ||||||
|  |       domain: | ||||||
|  |         cpu: | ||||||
|  |           cores: 12 | ||||||
|  |         devices: {} | ||||||
|  |         resources: | ||||||
|  |           requests: | ||||||
|  |             memory: 4Gi | ||||||
|  |     --- | ||||||
|  |     apiVersion: kubevirt.io/v1alpha3 | ||||||
|  |     kind: VirtualMachineInstance | ||||||
|  |     metadata: | ||||||
|  |       annotations: | ||||||
|  |         presets.virtualmachineinstances.kubevirt.io/presets-applied: kubevirt.io/v1alpha3 | ||||||
|  |         virtualmachineinstancepreset.kubevirt.io/twelve-cores: kubevirt.io/v1alpha3 | ||||||
|  |       labels: | ||||||
|  |         kubevirt.io/cpu: dodecacore | ||||||
|  |         kubevirt.io/os: win7 | ||||||
|  |       name: windows-7 | ||||||
|  |     spec: | ||||||
|  |       domain: | ||||||
|  |         cpu: | ||||||
|  |           cores: 12 | ||||||
|  |         devices: {} | ||||||
|  |         resources: | ||||||
|  |           requests: | ||||||
|  |             memory: 4Gi | ||||||
|  |       terminationGracePeriodSeconds: 0 | ||||||
|  | 
 | ||||||
|  | ### Matching Multiple VirtualMachineInstances Using MatchExpressions | ||||||
|  | 
 | ||||||
|  | This `VirtualMachineInstancePreset` has a matchExpression that will | ||||||
|  | match two labels: `kubevirt.io/os: win10` and `kubevirt.io/os: win7`. | ||||||
|  | 
 | ||||||
|  |     apiVersion: kubevirt.io/v1alpha3 | ||||||
|  |     kind: VirtualMachineInstancePreset | ||||||
|  |     metadata: | ||||||
|  |       name: windows-vmis | ||||||
|  |     spec: | ||||||
|  |       selector: | ||||||
|  |         matchExpressions: | ||||||
|  |           - {key: kubevirt.io/os, operator: In, values: [win10, win7]} | ||||||
|  |       domain: | ||||||
|  |         resources: | ||||||
|  |           requests: | ||||||
|  |             memory: 128M | ||||||
|  |     --- | ||||||
|  |     apiVersion: kubevirt.io/v1alpha3 | ||||||
|  |     kind: VirtualMachineInstance | ||||||
|  |     metadata: | ||||||
|  |       name: smallvmi | ||||||
|  |       labels: | ||||||
|  |         kubevirt.io/os: win10 | ||||||
|  |     spec: | ||||||
|  |       terminationGracePeriodSeconds: 60 | ||||||
|  |     --- | ||||||
|  |     apiVersion: kubevirt.io/v1alpha3 | ||||||
|  |     kind: VirtualMachineInstance | ||||||
|  |     metadata: | ||||||
|  |       name: largevmi | ||||||
|  |       labels: | ||||||
|  |         kubevirt.io/os: win7 | ||||||
|  |     spec: | ||||||
|  |       terminationGracePeriodSeconds: 120 | ||||||
|  | 
 | ||||||
Applying the preset to both `VirtualMachineInstances` will result in:
|  | 
 | ||||||
|  |     apiVersion: v1 | ||||||
|  |     items: | ||||||
|  |     - apiVersion: kubevirt.io/v1alpha3 | ||||||
|  |       kind: VirtualMachineInstance | ||||||
|  |       metadata: | ||||||
|  |         annotations: | ||||||
|  |           presets.virtualmachineinstances.kubevirt.io/presets-applied: kubevirt.io/v1alpha3 | ||||||
|  |           virtualmachineinstancepreset.kubevirt.io/windows-vmis: kubevirt.io/v1alpha3 | ||||||
|  |         labels: | ||||||
|  |           kubevirt.io/os: win7 | ||||||
|  |         name: largevmi | ||||||
|  |       spec: | ||||||
|  |         domain: | ||||||
|  |           resources: | ||||||
|  |             requests: | ||||||
|  |               memory: 128M | ||||||
|  |         terminationGracePeriodSeconds: 120 | ||||||
|  |     - apiVersion: kubevirt.io/v1alpha3 | ||||||
|  |       kind: VirtualMachineInstance | ||||||
|  |       metadata: | ||||||
|  |         annotations: | ||||||
|  |           presets.virtualmachineinstances.kubevirt.io/presets-applied: kubevirt.io/v1alpha3 | ||||||
|  |           virtualmachineinstancepreset.kubevirt.io/windows-vmis: kubevirt.io/v1alpha3 | ||||||
|  |         labels: | ||||||
|  |           kubevirt.io/os: win10 | ||||||
|  |         name: smallvmi | ||||||
|  |       spec: | ||||||
|  |         domain: | ||||||
|  |           resources: | ||||||
|  |             requests: | ||||||
|  |               memory: 128M | ||||||
|  |         terminationGracePeriodSeconds: 60 | ||||||
|  | Configure Liveness and Readiness Probes | ||||||
|  | ======================================= | ||||||
|  | 
 | ||||||
Liveness and Readiness Probes can be configured in a similar fashion to
[Liveness and Readiness Probes on
Containers](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/).
|  | 
 | ||||||
Liveness Probes will effectively stop the VirtualMachineInstance if
they fail, allowing higher-level controllers, such as VirtualMachine or
VirtualMachineInstanceReplicaSet, to spawn new instances, which will
hopefully be responsive again.
|  | 
 | ||||||
Readiness Probes indicate to Services and Endpoints whether the
VirtualMachineInstance is ready to receive traffic. If Readiness Probes
fail, the VirtualMachineInstance is removed from the Endpoints backing
the Service until the probe recovers.
|  | 
 | ||||||
Define an HTTP Liveness Probe
-----------------------------
|  | 
 | ||||||
The following VirtualMachineInstance configures an HTTP Liveness Probe
|  | via `spec.livenessProbe.httpGet`, which will query port 1500 of the | ||||||
|  | VirtualMachineInstance, after an initial delay of 120 seconds. The | ||||||
|  | VirtualMachineInstance itself installs and runs a minimal HTTP server on | ||||||
|  | port 1500 via cloud-init. | ||||||
|  | 
 | ||||||
|  |     apiVersion: kubevirt.io/v1alpha3 | ||||||
|  |     kind: VirtualMachineInstance | ||||||
|  |     metadata: | ||||||
|  |       labels: | ||||||
|  |         special: vmi-fedora | ||||||
|  |       name: vmi-fedora | ||||||
|  |     spec: | ||||||
|  |       domain: | ||||||
|  |         devices: | ||||||
|  |           disks: | ||||||
|  |           - disk: | ||||||
|  |               bus: virtio | ||||||
|  |             name: containerdisk | ||||||
|  |           - disk: | ||||||
|  |               bus: virtio | ||||||
|  |             name: cloudinitdisk | ||||||
|  |         resources: | ||||||
|  |           requests: | ||||||
|  |             memory: 1024M | ||||||
|  |       livenessProbe: | ||||||
|  |         initialDelaySeconds: 120 | ||||||
|  |         periodSeconds: 20 | ||||||
|  |         httpGet: | ||||||
|  |           port: 1500 | ||||||
|  |         timeoutSeconds: 10 | ||||||
|  |       terminationGracePeriodSeconds: 0 | ||||||
|  |       volumes: | ||||||
|  |       - name: containerdisk | ||||||
|  |         registryDisk: | ||||||
|  |           image: registry:5000/kubevirt/fedora-cloud-registry-disk-demo:devel | ||||||
|  |       - cloudInitNoCloud: | ||||||
|  |           userData: |- | ||||||
|  |             #cloud-config | ||||||
|  |             password: fedora | ||||||
|  |             chpasswd: { expire: False } | ||||||
|  |             bootcmd: | ||||||
|  |               - setenforce 0 | ||||||
|  |               - dnf install -y nmap-ncat | ||||||
|  |               - systemd-run --unit=httpserver nc -klp 1500 -e '/usr/bin/echo -e HTTP/1.1 200 OK\\n\\nHello World!' | ||||||
|  |         name: cloudinitdisk | ||||||
|  | 
 | ||||||
|  | Define a TCP Liveness Probe | ||||||
|  | --------------------------- | ||||||
|  | 
 | ||||||
|  | The following VirtualMachineInstance configures a TCP Liveness Probe via | ||||||
|  | `spec.livenessProbe.tcpSocket`, which will query port 1500 of the | ||||||
|  | VirtualMachineInstance, after an initial delay of 120 seconds. The | ||||||
|  | VirtualMachineInstance itself installs and runs a minimal HTTP server on | ||||||
|  | port 1500 via cloud-init. | ||||||
|  | 
 | ||||||
|  |     apiVersion: kubevirt.io/v1alpha3 | ||||||
|  |     kind: VirtualMachineInstance | ||||||
|  |     metadata: | ||||||
|  |       labels: | ||||||
|  |         special: vmi-fedora | ||||||
|  |       name: vmi-fedora | ||||||
|  |     spec: | ||||||
|  |       domain: | ||||||
|  |         devices: | ||||||
|  |           disks: | ||||||
|  |           - disk: | ||||||
|  |               bus: virtio | ||||||
|  |             name: containerdisk | ||||||
|  |           - disk: | ||||||
|  |               bus: virtio | ||||||
|  |             name: cloudinitdisk | ||||||
|  |         resources: | ||||||
|  |           requests: | ||||||
|  |             memory: 1024M | ||||||
|  |       livenessProbe: | ||||||
|  |         initialDelaySeconds: 120 | ||||||
|  |         periodSeconds: 20 | ||||||
|  |         tcpSocket: | ||||||
|  |           port: 1500 | ||||||
|  |         timeoutSeconds: 10 | ||||||
|  |       terminationGracePeriodSeconds: 0 | ||||||
|  |       volumes: | ||||||
|  |       - name: containerdisk | ||||||
|  |         registryDisk: | ||||||
|  |           image: registry:5000/kubevirt/fedora-cloud-registry-disk-demo:devel | ||||||
|  |       - cloudInitNoCloud: | ||||||
|  |           userData: |- | ||||||
|  |             #cloud-config | ||||||
|  |             password: fedora | ||||||
|  |             chpasswd: { expire: False } | ||||||
|  |             bootcmd: | ||||||
|  |               - setenforce 0 | ||||||
|  |               - dnf install -y nmap-ncat | ||||||
|  |               - systemd-run --unit=httpserver nc -klp 1500 -e '/usr/bin/echo -e HTTP/1.1 200 OK\\n\\nHello World!' | ||||||
|  |         name: cloudinitdisk | ||||||
|  | 
 | ||||||
|  | Define Readiness Probes | ||||||
|  | ----------------------- | ||||||
|  | 
 | ||||||
|  | Readiness Probes are configured in a similar way like liveness probes. | ||||||
|  | Instead of `spec.livenessProbe`, `spec.readinessProbe` needs to be | ||||||
|  | filled: | ||||||
|  | 
 | ||||||
|  |     apiVersion: kubevirt.io/v1alpha3 | ||||||
|  |     kind: VirtualMachineInstance | ||||||
|  |     metadata: | ||||||
|  |       labels: | ||||||
|  |         special: vmi-fedora | ||||||
|  |       name: vmi-fedora | ||||||
|  |     spec: | ||||||
|  |       domain: | ||||||
|  |         devices: | ||||||
|  |           disks: | ||||||
|  |           - disk: | ||||||
|  |               bus: virtio | ||||||
|  |             name: containerdisk | ||||||
|  |           - disk: | ||||||
|  |               bus: virtio | ||||||
|  |             name: cloudinitdisk | ||||||
|  |         resources: | ||||||
|  |           requests: | ||||||
|  |             memory: 1024M | ||||||
|  |       readinessProbe: | ||||||
|  |         httpGet: | ||||||
|  |           port: 1500 | ||||||
|  |         initialDelaySeconds: 120 | ||||||
|  |         periodSeconds: 20 | ||||||
|  |         timeoutSeconds: 10 | ||||||
|  |         failureThreshold: 3 | ||||||
|  |         successThreshold: 3 | ||||||
|  |       terminationGracePeriodSeconds: 0 | ||||||
|  |       volumes: | ||||||
|  |       - name: containerdisk | ||||||
|  |         registryDisk: | ||||||
|  |           image: registry:5000/kubevirt/fedora-cloud-registry-disk-demo:devel | ||||||
|  |       - cloudInitNoCloud: | ||||||
|  |           userData: |- | ||||||
|  |             #cloud-config | ||||||
|  |             password: fedora | ||||||
|  |             chpasswd: { expire: False } | ||||||
|  |             bootcmd: | ||||||
|  |               - setenforce 0 | ||||||
|  |               - dnf install -y nmap-ncat | ||||||
|  |               - systemd-run --unit=httpserver nc -klp 1500 -e '/usr/bin/echo -e HTTP/1.1 200 OK\\n\\nHello World!' | ||||||
|  |         name: cloudinitdisk | ||||||
|  | 
 | ||||||
Note that in the case of Readiness Probes, it is also possible to set a
`failureThreshold` and a `successThreshold`, so that the probe must fail
or succeed several times in a row before the VirtualMachineInstance
flips between the ready and non-ready state.
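Pulled out of the full example above, the threshold fragment alone looks like this (the port and threshold values match the example; tune them to your workload):

```yaml
readinessProbe:
  httpGet:
    port: 1500
  failureThreshold: 3   # 3 consecutive failures before the VMI is marked non-ready
  successThreshold: 3   # 3 consecutive successes before the VMI is marked ready
```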
|  | Run Strategies | ||||||
|  | ============== | ||||||
|  | 
 | ||||||
|  | Overview | ||||||
|  | -------- | ||||||
|  | 
 | ||||||
VirtualMachines have a `Running` setting that determines whether a
guest should be running. Because KubeVirt will always
|  | immediately restart a VirtualMachineInstance for VirtualMachines with | ||||||
|  | `spec.running: true`, a simple boolean is not always enough to fully | ||||||
|  | describe desired behavior. For instance, there are cases when a user | ||||||
|  | would like the ability to shut down a guest from inside the virtual | ||||||
|  | machine. With `spec.running: true`, KubeVirt would immediately restart | ||||||
|  | the VirtualMachineInstance. | ||||||
|  | 
 | ||||||
|  | ### RunStrategy | ||||||
|  | 
 | ||||||
|  | To allow for greater variation of user states, the `RunStrategy` field | ||||||
|  | has been introduced. This is mutually exclusive with `Running` as they | ||||||
|  | have somewhat overlapping conditions. There are currently four | ||||||
|  | RunStrategies defined: | ||||||
|  | 
 | ||||||
|  | -   Always: A VirtualMachineInstance will always be present. If the | ||||||
|  |     VirtualMachineInstance crashed, a new one will be spawned. This is | ||||||
|  |     the same behavior as `spec.running: true`. | ||||||
|  | 
 | ||||||
|  | -   RerunOnFailure: A VirtualMachineInstance will be respawned if the | ||||||
|  |     previous instance failed in an error state. It will not be | ||||||
|  |     re-created if the guest stopped successfully (e.g. shut down from | ||||||
|  |     inside guest). | ||||||
|  | 
 | ||||||
-   Manual: The presence of a VirtualMachineInstance, or lack thereof,
    is controlled exclusively by the start/stop/restart VirtualMachine
    subresource endpoints.
|  | 
 | ||||||
|  | -   Halted: No VirtualMachineInstance will be present. If a guest is | ||||||
|  |     already running, it will be stopped. This is the same behavior as | ||||||
|  |     `spec.running: false`. | ||||||
|  | 
 | ||||||
|  | *Note*: RunStrategy and Running are mutually exclusive, because they can | ||||||
|  | be contradictory. The API server will reject VirtualMachine resources | ||||||
|  | that define both. | ||||||
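A minimal sketch of the two mutually exclusive forms (spec fragments only, everything else omitted):

```yaml
# Boolean form, equivalent to runStrategy Always when true:
spec:
  running: true
---
# Strategy form:
spec:
  runStrategy: Always
# A VirtualMachine that sets both running and runStrategy is rejected
# by the API server.
```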
|  | 
 | ||||||
|  | Virtctl | ||||||
|  | ------- | ||||||
|  | 
 | ||||||
|  | The `start`, `stop` and `restart` methods of virtctl will invoke their | ||||||
|  | respective subresources of VirtualMachines. This can have an effect on | ||||||
|  | the runStrategy of the VirtualMachine as below: | ||||||
|  | 
 | ||||||
|  | <table> | ||||||
|  | <colgroup> | ||||||
|  | <col style="width: 25%" /> | ||||||
|  | <col style="width: 25%" /> | ||||||
|  | <col style="width: 25%" /> | ||||||
|  | <col style="width: 25%" /> | ||||||
|  | </colgroup> | ||||||
|  | <thead> | ||||||
|  | <tr class="header"> | ||||||
|  | <th>RunStrategy</th> | ||||||
|  | <th>start</th> | ||||||
|  | <th>stop</th> | ||||||
|  | <th>restart</th> | ||||||
|  | </tr> | ||||||
|  | </thead> | ||||||
|  | <tbody> | ||||||
|  | <tr class="odd"> | ||||||
|  | <td><p><strong>Always</strong></p></td> | ||||||
|  | <td><p><code>-</code></p></td> | ||||||
|  | <td><p><code>Halted</code></p></td> | ||||||
|  | <td><p><code>Always</code></p></td> | ||||||
|  | </tr> | ||||||
|  | <tr class="even"> | ||||||
|  | <td><p><strong>RerunOnFailure</strong></p></td> | ||||||
|  | <td><p><code>-</code></p></td> | ||||||
|  | <td><p><code>Halted</code></p></td> | ||||||
|  | <td><p><code>RerunOnFailure</code></p></td> | ||||||
|  | </tr> | ||||||
|  | <tr class="odd"> | ||||||
|  | <td><p><strong>Manual</strong></p></td> | ||||||
|  | <td><p><code>Manual</code></p></td> | ||||||
|  | <td><p><code>Manual</code></p></td> | ||||||
|  | <td><p><code>Manual</code></p></td> | ||||||
|  | </tr> | ||||||
|  | <tr class="even"> | ||||||
|  | <td><p><strong>Halted</strong></p></td> | ||||||
|  | <td><p><code>Always</code></p></td> | ||||||
|  | <td><p><code>-</code></p></td> | ||||||
|  | <td><p><code>-</code></p></td> | ||||||
|  | </tr> | ||||||
|  | </tbody> | ||||||
|  | </table> | ||||||
|  | 
 | ||||||
Table entries marked with `-` do not represent a meaningful operation
for that strategy, so they have no effect on the RunStrategy.
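As an illustration, here is the first table row as a virtctl session. The VM name `vm-cirros` is taken from the example below; with RunStrategy `Always`, stopping moves the VM to `Halted` and starting returns it to `Always`:

```shell
# RunStrategy is Always: stopping the guest sets it to Halted
virtctl stop vm-cirros

# Starting it again sets the RunStrategy back to Always
virtctl start vm-cirros
```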
|  | 
 | ||||||
|  | RunStrategy Examples | ||||||
|  | -------------------- | ||||||
|  | 
 | ||||||
|  | ### Always | ||||||
|  | 
 | ||||||
|  | An example usage of the Always RunStrategy. | ||||||
|  | 
 | ||||||
|  |     apiVersion: kubevirt.io/v1alpha3 | ||||||
|  |     kind: VirtualMachine | ||||||
|  |     metadata: | ||||||
|  |       labels: | ||||||
|  |         kubevirt.io/vm: vm-cirros | ||||||
|  |       name: vm-cirros | ||||||
|  |     spec: | ||||||
      runStrategy: Always
|  |       template: | ||||||
|  |         metadata: | ||||||
|  |           labels: | ||||||
|  |             kubevirt.io/vm: vm-cirros | ||||||
|  |         spec: | ||||||
|  |           domain: | ||||||
|  |             devices: | ||||||
|  |               disks: | ||||||
|  |               - disk: | ||||||
|  |                   bus: virtio | ||||||
|  |                 name: containerdisk | ||||||
|  |           terminationGracePeriodSeconds: 0 | ||||||
|  |           volumes: | ||||||
|  |           - containerDisk: | ||||||
|  |               image: kubevirt/cirros-container-disk-demo:latest | ||||||
|  |             name: containerdisk | ||||||
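The other strategies are set the same way. As a sketch, reusing the cirros example above, a VirtualMachine that should only be restarted after a failure would use:

```yaml
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
  labels:
    kubevirt.io/vm: vm-cirros
  name: vm-cirros
spec:
  # Respawn the guest only if it stopped in an error state; a clean
  # shutdown from inside the guest leaves it stopped.
  runStrategy: RerunOnFailure
  template:
    metadata:
      labels:
        kubevirt.io/vm: vm-cirros
    spec:
      domain:
        devices:
          disks:
          - disk:
              bus: virtio
            name: containerdisk
      terminationGracePeriodSeconds: 0
      volumes:
      - containerDisk:
          image: kubevirt/cirros-container-disk-demo:latest
        name: containerdisk
```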
|  | Windows driver disk usage | ||||||
|  | ========================= | ||||||
|  | 
 | ||||||
The purpose of this document is to explain how to install virtio
drivers for Microsoft Windows running in a fully virtualized guest.
|  | 
 | ||||||
|  | Do I need virtio drivers? | ||||||
|  | ------------------------- | ||||||
|  | 
 | ||||||
Yes. Without the virtio drivers, you cannot use
[paravirtualized](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/virtualization_getting_started_guide/sec-virtualization_getting_started-products-virtualized-hardware-devices#sec-Virtualization_Getting_Started-Products-paravirtdevices)
hardware properly. The guest would either not work at all or suffer a
severe performance penalty.
|  | 
 | ||||||
|  | For more details on configuring your guest please refer to [Guest | ||||||
|  | Virtual Machine Device | ||||||
|  | Configuration](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/chap-guest_virtual_machine_device_configuration). | ||||||
|  | 
 | ||||||
Which drivers do I need to install?
-----------------------------------
|  | 
 | ||||||
|  | There are usually up to 8 possible devices that are required to run | ||||||
|  | Windows smoothly in a virtualized environment. KubeVirt currently | ||||||
|  | supports only: | ||||||
|  | 
 | ||||||
|  | -   **viostor**, the block driver, applies to SCSI Controller in the | ||||||
|  |     Other devices group. | ||||||
|  | 
 | ||||||
|  | -   **viorng**, the entropy source driver, applies to PCI Device in the | ||||||
|  |     Other devices group. | ||||||
|  | 
 | ||||||
|  | -   **NetKVM**, the network driver, applies to Ethernet Controller in | ||||||
|  |     the Other devices group. Available only if a virtio NIC is | ||||||
|  |     configured. | ||||||
|  | 
 | ||||||
Other virtio drivers that exist and might be supported in the future:
|  | 
 | ||||||
|  | -   Balloon, the balloon driver, applies to PCI Device in the Other | ||||||
|  |     devices group | ||||||
|  | 
 | ||||||
|  | -   vioserial, the paravirtual serial driver, applies to PCI Simple | ||||||
|  |     Communications Controller in the Other devices group. | ||||||
|  | 
 | ||||||
-   vioscsi, the SCSI block driver, applies to SCSI Controller in the
|  |     Other devices group. | ||||||
|  | 
 | ||||||
|  | -   qemupciserial, the emulated PCI serial driver, applies to PCI Serial | ||||||
|  |     Port in the Other devices group. | ||||||
|  | 
 | ||||||
-   qxl, the paravirtual video driver, applies to Microsoft Basic
|  |     Display Adapter in the Display adapters group. | ||||||
|  | 
 | ||||||
|  | -   pvpanic, the paravirtual panic driver, applies to Unknown device in | ||||||
|  |     the Other devices group. | ||||||
|  | 
 | ||||||
|  | > **Note** | ||||||
|  | > | ||||||
> Some drivers are required in the installation phase. When you are
> installing Windows onto virtio block storage you have to provide the
> appropriate virtio driver. Namely, choose the viostor driver for your
> version of Microsoft Windows; e.g. do not install the XP driver when
> you run Windows 10.
|  | > | ||||||
> Other drivers can be installed after a successful Windows
> installation. Again, please install only drivers matching your
> Windows version.
|  | 
 | ||||||
|  | ### How to install during Windows install? | ||||||
|  | 
 | ||||||
To install drivers before Windows starts its installation, make sure
you have the virtio-win package attached to your VirtualMachine as a
SATA CD-ROM. During the Windows installation, choose advanced install
and load driver. Then navigate to the loaded virtio CD-ROM and install
either viostor or vioscsi, depending on which one you have set up.
|  | 
 | ||||||
|  | Step by step screenshots: | ||||||
|  | 
 | ||||||
|  |  | ||||||
|  | 
 | ||||||
|  |  | ||||||
|  | 
 | ||||||
|  |  | ||||||
|  | 
 | ||||||
|  |  | ||||||
|  | 
 | ||||||
|  |  | ||||||
|  | 
 | ||||||
|  |  | ||||||
|  | 
 | ||||||
|  | ### How to install after Windows install? | ||||||
|  | 
 | ||||||
After the Windows installation, please go to [Device
Manager](https://support.microsoft.com/en-us/help/4026149/windows-open-device-manager).
There you should see undetected devices in the "available devices"
section. You can install the virtio drivers one by one, going through
this list.
|  | 
 | ||||||
|  |  | ||||||
|  | 
 | ||||||
|  |  | ||||||
|  | 
 | ||||||
|  |  | ||||||
|  | 
 | ||||||
|  |  | ||||||
|  | 
 | ||||||
|  | For more details on how to choose a proper driver and how to install the | ||||||
|  | driver, please refer to the [Windows Guest Virtual Machines on Red Hat | ||||||
|  | Enterprise Linux 7](https://access.redhat.com/articles/2470791). | ||||||
|  | 
 | ||||||
|  | How to obtain virtio drivers? | ||||||
|  | ----------------------------- | ||||||
|  | 
 | ||||||
The virtio Windows drivers are distributed in the form of a
[containerDisk](https://kubevirt.io/user-guide/docs/latest/creating-virtual-machines/disks-and-volumes.html#containerDisk),
which can simply be mounted to the VirtualMachine. The container image
containing the disk is located at
<https://hub.docker.com/r/kubevirt/virtio-container-disk> and can be
pulled like any other container image:
|  | 
 | ||||||
|  |     docker pull kubevirt/virtio-container-disk | ||||||
|  | 
 | ||||||
However, pulling the image manually is not required; Kubernetes will
download it when the VirtualMachine is deployed if it is not already
present.
|  | 
 | ||||||
|  | Attaching to VirtualMachine | ||||||
|  | --------------------------- | ||||||
|  | 
 | ||||||
KubeVirt distributes virtio drivers for Microsoft Windows in the form
of a container disk. The package contains the virtio drivers and the
QEMU guest agent. The disk was tested on Microsoft Windows Server 2012.
The supported Windows versions are XP and up.
|  | 
 | ||||||
The package is intended to be used as a CD-ROM attached to the virtual
machine with Microsoft Windows. It can be used as a SATA CD-ROM during
the installation phase or to provide drivers in an existing Windows
installation.
|  | 
 | ||||||
Attaching the virtio-win package can be done simply by adding a
containerDisk to your VirtualMachine.
|  | 
 | ||||||
|  |     spec: | ||||||
|  |       domain: | ||||||
|  |         devices: | ||||||
|  |           disks: | ||||||
|  |             - name: virtiocontainerdisk | ||||||
              # Any other disk you want to use must go before the virtio
              # container disk: KubeVirt boots from disks in the order they
              # are defined, so the virtio container disk must come after
              # the bootable disk. Alternatively, choose the boot order
              # explicitly:
              #  - https://kubevirt.io/api-reference/v0.13.2/definitions.html#_v1_disk
              # NOTE: You either specify bootOrder explicitly or sort the
              #       items in disks. You cannot do both at the same time.
              # bootOrder: 2
|  |               cdrom: | ||||||
|  |                 bus: sata | ||||||
|  |     volumes: | ||||||
|  |       - containerDisk: | ||||||
|  |           image: kubevirt/virtio-container-disk | ||||||
|  |         name: virtiocontainerdisk | ||||||
|  | 
 | ||||||
Once you are done installing the virtio drivers, you can remove the
virtio container disk by simply removing the disk from the YAML
specification and restarting the VirtualMachine.
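
Assuming the VirtualMachine is named `my-vm` (a hypothetical name), this
can be sketched as:

```shell
# Remove the virtiocontainerdisk disk and volume entries from the spec
kubectl edit vm my-vm
# Restart the VirtualMachine so the change takes effect
virtctl restart my-vm
```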
|  | <!-- index.html --> | ||||||
|  | 
 | ||||||
|  | <!DOCTYPE html> | ||||||
|  | <html> | ||||||
|  | <head> | ||||||
|  |   <meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1"> | ||||||
|  |   <meta name="viewport" content="width=device-width,initial-scale=1"> | ||||||
|  |   <meta charset="UTF-8"> | ||||||
|  |   <link rel="stylesheet" href="//unpkg.com/docsify/themes/vue.css"> | ||||||
|  |   <link rel="stylesheet" href="//unpkg.com/docsify/lib/themes/vue.css" title="vue"> | ||||||
|  |   <link rel="stylesheet" href="//unpkg.com/docsify/lib/themes/dark.css" title="dark" disabled> | ||||||
|  |   <link rel="stylesheet" href="//unpkg.com/docsify/lib/themes/buble.css" title="buble" disabled> | ||||||
|  |   <link rel="stylesheet" href="//unpkg.com/docsify/lib/themes/pure.css" title="pure" disabled> | ||||||
|  | </head> | ||||||
|  | <body> | ||||||
|  |   <div id="app"></div> | ||||||
|  |   <script> | ||||||
|  |     window.$docsify = { | ||||||
|  |       loadSidebar: true, | ||||||
|  |       auto2top: true, | ||||||
|  |       coverpage: false, | ||||||
|  |       name: "kubevirt", | ||||||
|  |       repo: 'davidvossel/kubevirt-user-guide-v2', | ||||||
|  |     } | ||||||
|  |   </script> | ||||||
|  |   <script src="//unpkg.com/docsify/lib/docsify.min.js"></script> | ||||||
|  |   <script src="//unpkg.com/docsify/lib/plugins/search.min.js"></script> | ||||||
|  |   <script src="//unpkg.com/prismjs/components/prism-yaml.min.js"></script> | ||||||
|  |   <script src="//unpkg.com/prismjs/components/prism-bash.min.js"></script> | ||||||
|  | </body> | ||||||
|  | </html> | ||||||
|  | KubeVirt specific annotations and labels | ||||||
|  | ======================================== | ||||||
|  | 
 | ||||||
|  | KubeVirt builds on and exposes a number of labels and annotations that | ||||||
|  | either are used for internal implementation needs or expose useful | ||||||
|  | information to API users. This page documents the labels and annotations | ||||||
|  | that may be useful for regular API consumers. This page intentionally | ||||||
|  | does *not* list labels and annotations that are merely part of internal | ||||||
|  | implementation. | ||||||
|  | 
 | ||||||
|  | **Note:** Annotations and labels that are not specific to KubeVirt are | ||||||
|  | also documented | ||||||
|  | [here](https://kubernetes.io/docs/reference/kubernetes-api/labels-annotations-taints/). | ||||||
|  | 
 | ||||||
|  | kubevirt.io | ||||||
|  | ----------- | ||||||
|  | 
 | ||||||
|  | Example: `kubevirt.io=virt-launcher` | ||||||
|  | 
 | ||||||
|  | Used on: Pod | ||||||
|  | 
 | ||||||
|  | This label marks resources that belong to KubeVirt. An optional value | ||||||
|  | may indicate which specific KubeVirt component a resource belongs to. | ||||||
|  | This label may be used to list all resources that belong to KubeVirt, | ||||||
|  | for example, to uninstall it from a cluster. | ||||||
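
For example, the label can drive a selector query (a sketch; requires
cluster access):

```shell
# List every pod that carries the kubevirt.io label, in any namespace
kubectl get pods --all-namespaces -l kubevirt.io
```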
|  | 
 | ||||||
|  | kubevirt.io/schedulable | ||||||
|  | ----------------------- | ||||||
|  | 
 | ||||||
|  | Example: `kubevirt.io/schedulable=true` | ||||||
|  | 
 | ||||||
|  | Used on: Node | ||||||
|  | 
 | ||||||
|  | This label declares whether a particular node is available for | ||||||
|  | scheduling virtual machine instances on it. | ||||||
|  | 
 | ||||||
|  | kubevirt.io/heartbeat | ||||||
|  | --------------------- | ||||||
|  | 
 | ||||||
|  | Example: `kubevirt.io/heartbeat=2018-07-03T20:07:25Z` | ||||||
|  | 
 | ||||||
|  | Used on: Node | ||||||
|  | 
 | ||||||
|  | This annotation is regularly updated by virt-handler to help determine | ||||||
|  | if a particular node is alive and hence should be available for new | ||||||
|  | virtual machine instance scheduling. | ||||||
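
As an illustration only, the annotation's RFC 3339 timestamp can be
compared against the current time to compute the heartbeat age (GNU
`date` is assumed; the value below is the example from above):

```shell
# Example heartbeat value taken from the node annotation above
HEARTBEAT="2018-07-03T20:07:25Z"
# Convert the timestamp to seconds since the epoch (GNU date)
THEN=$(date -u -d "$HEARTBEAT" +%s)
NOW=$(date -u +%s)
# Print how old the heartbeat is
echo "heartbeat is $(( NOW - THEN ))s old"
```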
|  | Authorization | ||||||
|  | ============= | ||||||
|  | 
 | ||||||
KubeVirt authorization is performed using Kubernetes' Role-Based Access
Control (RBAC) system. RBAC allows cluster admins to grant access to
cluster resources by binding RBAC roles to users.
|  | 
 | ||||||
|  | For example, an admin creates an RBAC role that represents the | ||||||
|  | permissions required to create a VirtualMachineInstance. The admin can | ||||||
|  | then bind that role to users in order to grant them the permissions | ||||||
|  | required to launch a VirtualMachineInstance. | ||||||
|  | 
 | ||||||
|  | With RBAC roles, admins can grant users targeted access to various | ||||||
|  | KubeVirt features. | ||||||
|  | 
 | ||||||
|  | KubeVirt Default RBAC ClusterRoles | ||||||
|  | ---------------------------------- | ||||||
|  | 
 | ||||||
|  | KubeVirt comes with a set of predefined RBAC ClusterRoles that can be | ||||||
|  | used to grant users permissions to access KubeVirt Resources. | ||||||
|  | 
 | ||||||
|  | ### Default View Role | ||||||
|  | 
 | ||||||
The **kubevirt.io:view** ClusterRole gives users permission to view all
KubeVirt resources in the cluster. Permissions to create, delete,
modify, or access any KubeVirt resources beyond viewing a resource's
spec are not included in this role. This means a user with this role
could see that a VirtualMachineInstance is running, but could neither
shut it down nor gain access to it via console/VNC.
|  | 
 | ||||||
|  | ### Default Edit Role | ||||||
|  | 
 | ||||||
|  | The **kubevirt.io:edit** ClusterRole gives users permissions to modify | ||||||
|  | all KubeVirt resources in the cluster. For example, a user with this | ||||||
|  | role can create new VirtualMachineInstances, delete | ||||||
|  | VirtualMachineInstances, and gain access to both console and VNC. | ||||||
|  | 
 | ||||||
|  | ### Default Admin Role | ||||||
|  | 
 | ||||||
|  | The **kubevirt.io:admin** ClusterRole grants users full permissions to | ||||||
|  | all KubeVirt resources, including the ability to delete collections of | ||||||
|  | resources. | ||||||
|  | 
 | ||||||
The admin role also grants users access to view and modify the KubeVirt
runtime config. This config exists within a configmap called
**kubevirt-config** in the namespace the KubeVirt components are
running in.
|  | 
 | ||||||
|  | > *NOTE* Users are only guaranteed the ability to modify the kubevirt | ||||||
|  | > runtime configuration if a ClusterRoleBinding is used. A RoleBinding | ||||||
|  | > will work to provide kubevirt-config access only if the RoleBinding | ||||||
|  | > targets the same namespace the kubevirt-config exists in. | ||||||
|  | 
 | ||||||
|  | ### Binding Default ClusterRoles to Users | ||||||
|  | 
 | ||||||
|  | The KubeVirt default ClusterRoles are granted to users by creating | ||||||
|  | either a ClusterRoleBinding or RoleBinding object. | ||||||
|  | 
 | ||||||
|  | #### Binding within All Namespaces | ||||||
|  | 
 | ||||||
|  | With a ClusterRoleBinding, users receive the permissions granted by the | ||||||
|  | role across all namespaces. | ||||||
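
A minimal sketch of such a binding, granting the default admin role
cluster-wide to a hypothetical user `alice`:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: alice-kubevirt-admin   # hypothetical name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubevirt.io:admin
subjects:
- kind: User
  name: alice                  # hypothetical user
  apiGroup: rbac.authorization.k8s.io
```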
|  | 
 | ||||||
|  | #### Binding within Single Namespace | ||||||
|  | 
 | ||||||
|  | With a RoleBinding, users receive the permissions granted by the role | ||||||
|  | only within a targeted namespace. | ||||||
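
For example, a RoleBinding granting the edit role to a hypothetical user
`alice` only in a `demo` namespace (all names are illustrative):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: alice-kubevirt-edit    # hypothetical name
  namespace: demo              # permissions apply only in this namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubevirt.io:edit
subjects:
- kind: User
  name: alice                  # hypothetical user
  apiGroup: rbac.authorization.k8s.io
```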
|  | 
 | ||||||
|  | Extending Kubernetes Default Roles with KubeVirt permissions | ||||||
|  | ------------------------------------------------------------ | ||||||
|  | 
 | ||||||
|  | The aggregated ClusterRole Kubernetes feature facilitates combining | ||||||
|  | multiple ClusterRoles into a single aggregated ClusterRole. This feature | ||||||
|  | is commonly used to extend the default Kubernetes roles with permissions | ||||||
|  | to access custom resources that do not exist in the Kubernetes core. | ||||||
|  | 
 | ||||||
|  | In order to extend the default Kubernetes roles to provide permission to | ||||||
|  | access KubeVirt resources, we need to add the following labels to the | ||||||
|  | KubeVirt ClusterRoles. | ||||||
|  | 
 | ||||||
|  |     kubectl label clusterrole kubevirt.io:admin rbac.authorization.k8s.io/aggregate-to-admin=true | ||||||
|  |     kubectl label clusterrole kubevirt.io:edit rbac.authorization.k8s.io/aggregate-to-edit=true | ||||||
|  |     kubectl label clusterrole kubevirt.io:view rbac.authorization.k8s.io/aggregate-to-view=true | ||||||
|  | 
 | ||||||
|  | By adding these labels, any user with a RoleBinding or | ||||||
|  | ClusterRoleBinding involving one of the default Kubernetes roles will | ||||||
|  | automatically gain access to the equivalent KubeVirt roles as well. | ||||||
|  | 
 | ||||||
|  | More information about aggregated cluster roles can be found | ||||||
|  | [here](https://kubernetes.io/docs/admin/authorization/rbac/#aggregated-clusterroles) | ||||||
|  | 
 | ||||||
|  | Creating Custom RBAC Roles | ||||||
|  | -------------------------- | ||||||
|  | 
 | ||||||
If the default KubeVirt ClusterRoles are not expressive enough, admins
can create their own custom RBAC roles to grant users access to KubeVirt
resources. RBAC roles are additive only: there is no way to deny access,
only to grant it.
|  | 
 | ||||||
|  | Below is an example of what KubeVirt’s default admin ClusterRole looks | ||||||
|  | like. A custom RBAC role can be created by reducing the permissions in | ||||||
|  | this example role. | ||||||
|  | 
 | ||||||
|  |     apiVersion: rbac.authorization.k8s.io/v1beta1 | ||||||
|  |     kind: ClusterRole | ||||||
|  |     metadata: | ||||||
|  |       name: my-custom-rbac-role | ||||||
|  |       labels: | ||||||
|  |         kubevirt.io: "" | ||||||
|  |     rules: | ||||||
|  |       - apiGroups: | ||||||
|  |           - subresources.kubevirt.io | ||||||
|  |         resources: | ||||||
|  |           - virtualmachineinstances/console | ||||||
|  |           - virtualmachineinstances/vnc | ||||||
|  |         verbs: | ||||||
|  |           - get | ||||||
|  |       - apiGroups: | ||||||
|  |           - kubevirt.io | ||||||
|  |         resources: | ||||||
|  |           - virtualmachineinstances | ||||||
|  |           - virtualmachines | ||||||
|  |           - virtualmachineinstancepresets | ||||||
|  |           - virtualmachineinstancereplicasets | ||||||
|  |         verbs: | ||||||
|  |           - get | ||||||
|  |           - delete | ||||||
|  |           - create | ||||||
|  |           - update | ||||||
|  |           - patch | ||||||
|  |           - list | ||||||
|  |           - watch | ||||||
|  |           - deletecollection | ||||||
|  |       - apiGroups: [""] | ||||||
|  |         resources: | ||||||
|  |           - configmaps | ||||||
|  |         resourceNames: | ||||||
|  |           - kubevirt-config | ||||||
|  |         verbs: | ||||||
|  |           - update | ||||||
|  |           - get | ||||||
|  |           - patch | ||||||
|  | # Hugepages support | ||||||
|  | 
 | ||||||
|  | For hugepages support you need at least Kubernetes version `1.9`. | ||||||
|  | 
 | ||||||
|  | ## Enable feature-gate | ||||||
|  | 
 | ||||||
|  | To enable hugepages on Kubernetes, check the [official | ||||||
|  | documentation](https://kubernetes.io/docs/tasks/manage-hugepages/scheduling-hugepages/). | ||||||
|  | 
 | ||||||
|  | To enable hugepages on OKD, check the [official | ||||||
|  | documentation](https://docs.openshift.org/3.9/scaling_performance/managing_hugepages.html#huge-pages-prerequisites). | ||||||
|  | 
 | ||||||
|  | ## Pre-allocate hugepages on a node | ||||||
|  | 
 | ||||||
To pre-allocate hugepages at boot time, you will need to specify
hugepages in the kernel boot parameters, e.g. `hugepagesz=2M
hugepages=64`, and restart your machine.
|  | 
 | ||||||
You can find more about hugepages in the [official
documentation](https://www.kernel.org/doc/Documentation/vm/hugetlbpage.txt).
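
Once hugepages are pre-allocated on the nodes, a VirtualMachineInstance
can request them through its memory settings. A minimal sketch (the
memory size is illustrative):

```yaml
spec:
  domain:
    memory:
      hugepages:
        pageSize: "2Mi"   # must match a pre-allocated hugepage size on the node
    resources:
      requests:
        memory: "64Mi"    # guest memory, backed by hugepages
```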
|  | Creating Virtual Machines from local images with CDI and virtctl | ||||||
|  | ================================================================ | ||||||
|  | 
 | ||||||
|  | The [Containerized Data | ||||||
|  | Importer](https://github.com/kubevirt/containerized-data-importer) (CDI) | ||||||
|  | project provides facilities for enabling [Persistent Volume | ||||||
|  | Claims](https://kubernetes.io/docs/concepts/storage/persistent-volumes/) | ||||||
|  | (PVCs) to be used as disks for KubeVirt VMs by way of | ||||||
|  | [DataVolumes](https://github.com/kubevirt/containerized-data-importer/blob/master/doc/datavolumes.md). | ||||||
|  | The three main CDI use cases are: | ||||||
|  | 
 | ||||||
|  | -   Import a disk image from a URL to a DataVolume (HTTP/S3) | ||||||
|  | 
 | ||||||
|  | -   Clone an existing PVC to a DataVolume | ||||||
|  | 
 | ||||||
|  | -   Upload a local disk image to a DataVolume | ||||||
|  | 
 | ||||||
This document deals with the third use case, so you should have CDI
installed in your cluster, a VM disk that you'd like to upload, and
`virtctl` in your path.
|  | 
 | ||||||
|  | Install CDI | ||||||
|  | ----------- | ||||||
|  | 
 | ||||||
|  | Install the latest CDI release | ||||||
|  | [here](https://github.com/kubevirt/containerized-data-importer/releases) | ||||||
|  | 
 | ||||||
|  |     VERSION=$(curl -s https://github.com/kubevirt/containerized-data-importer/releases/latest | grep -o "v[0-9]\.[0-9]*\.[0-9]*") | ||||||
|  |     kubectl create -f https://github.com/kubevirt/containerized-data-importer/releases/download/$VERSION/cdi-operator.yaml | ||||||
|  |     kubectl create -f https://github.com/kubevirt/containerized-data-importer/releases/download/$VERSION/cdi-operator-cr.yaml | ||||||
|  | 
 | ||||||
|  | ### Expose cdi-uploadproxy service | ||||||
|  | 
 | ||||||
|  | The `cdi-uploadproxy` service must be accessible from outside the | ||||||
|  | cluster. Here are some ways to do that: | ||||||
|  | 
 | ||||||
|  | -   [NodePort | ||||||
|  |     Service](https://kubernetes.io/docs/concepts/services-networking/service/#nodeport) | ||||||
|  | 
 | ||||||
|  | -   [Ingress](https://kubernetes.io/docs/concepts/services-networking/ingress/) | ||||||
|  | 
 | ||||||
|  | -   [Route](https://docs.openshift.com/container-platform/3.9/architecture/networking/routes.html) | ||||||
|  | 
 | ||||||
|  | -   [kubectl | ||||||
|  |     port-forward](https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/) | ||||||
|  |     (not recommended for production clusters) | ||||||
|  | 
 | ||||||
|  | Look | ||||||
|  | [here](https://github.com/kubevirt/containerized-data-importer/blob/master/doc/upload.md) | ||||||
|  | for example manifests. | ||||||
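
As a sketch, a NodePort Service for the upload proxy could look like the
following; the namespace, selector label, ports, and nodePort value are
assumptions to adapt to your deployment (see the manifests linked above
for authoritative values):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: cdi-uploadproxy-nodeport
  namespace: cdi                      # namespace CDI is deployed in
spec:
  type: NodePort
  selector:
    cdi.kubevirt.io: cdi-uploadproxy  # assumed pod label
  ports:
  - port: 443
    targetPort: 8443                  # assumed proxy container port
    nodePort: 31001                   # any free port in the NodePort range
```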
|  | 
 | ||||||
|  | Supported image formats | ||||||
|  | ----------------------- | ||||||
|  | 
 | ||||||
|  | -   `.img` | ||||||
|  | 
 | ||||||
|  | -   `.iso` | ||||||
|  | 
 | ||||||
|  | -   `.qcow2` | ||||||
|  | 
 | ||||||
-   compressed `.tar`, `.gz`, and `.xz` versions of the above are
    supported as well
|  | 
 | ||||||
The example in this doc uses
[this](http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img)
[CirrOS](https://launchpad.net/cirros) image.
|  | 
 | ||||||
|  | virtctl image-upload | ||||||
|  | -------------------- | ||||||
|  | 
 | ||||||
|  | virtctl has an image-upload command with the following options: | ||||||
|  | 
 | ||||||
|  |     virtctl image-upload --help | ||||||
|  |     Upload a VM image to a DataVolume/PersistentVolumeClaim. | ||||||
|  | 
 | ||||||
|  |     Usage: | ||||||
|  |       virtctl image-upload [flags] | ||||||
|  | 
 | ||||||
|  |     Examples: | ||||||
|  |       # Upload a local disk image to a newly created DataVolume: | ||||||
|  |       virtctl image-upload dv dv-name --size=10Gi --image-path=/images/fedora30.qcow2 | ||||||
|  | 
 | ||||||
|  |       # Upload a local disk image to an existing DataVolume | ||||||
|  |       virtctl image-upload dv dv-name --no-create --image-path=/images/fedora30.qcow2 | ||||||
|  | 
 | ||||||
|  |       # Upload a local disk image to an existing PersistentVolumeClaim | ||||||
|  |       virtctl image-upload pvc pvc-name --image-path=/images/fedora30.qcow2 | ||||||
|  | 
 | ||||||
|  |       # Upload to a DataVolume with explicit URL to CDI Upload Proxy | ||||||
|  |       virtctl image-upload dv dv-name --uploadproxy-url=https://cdi-uploadproxy.mycluster.com --image-path=/images/fedora30.qcow2 | ||||||
|  | 
 | ||||||
|  |     Flags: | ||||||
|  |           --access-mode string       The access mode for the PVC. (default "ReadWriteOnce") | ||||||
|  |           --block-volume             Create a PVC with VolumeMode=Block (default Filesystem). | ||||||
|  |       -h, --help                     help for image-upload | ||||||
|  |           --image-path string        Path to the local VM image. | ||||||
|  |           --insecure                 Allow insecure server connections when using HTTPS. | ||||||
|  |           --no-create                Don't attempt to create a new DataVolume/PVC. | ||||||
|  |           --pvc-name string          DEPRECATED - The destination DataVolume/PVC name. | ||||||
|  |           --pvc-size string          DEPRECATED - The size of the PVC to create (ex. 10Gi, 500Mi). | ||||||
|  |           --size string              The size of the DataVolume to create (ex. 10Gi, 500Mi). | ||||||
|  |           --storage-class string     The storage class for the PVC. | ||||||
|  |           --uploadproxy-url string   The URL of the cdi-upload proxy service. | ||||||
|  |           --wait-secs uint           Seconds to wait for upload pod to start. (default 60) | ||||||
|  | 
 | ||||||
|  |     Use "virtctl options" for a list of global command-line options (applies to all commands). | ||||||
|  | 
 | ||||||
`virtctl image-upload` works by creating a DataVolume of the requested
size, sending an `UploadTokenRequest` to the `cdi-apiserver`, and
uploading the file to the `cdi-uploadproxy`.
|  | 
 | ||||||
|  |     virtctl image-upload dv cirros-vm-disk --size=500Mi --image-path=/home/mhenriks/images/cirros-0.4.0-x86_64-disk.img --uploadproxy-url=<url to upload proxy service> | ||||||
|  | 
 | ||||||
|  | Create a VirtualMachineInstance | ||||||
|  | ------------------------------- | ||||||
|  | 
 | ||||||
To create a `VirtualMachineInstance` from a PVC, you can execute the
following:
|  | 
 | ||||||
|  |     cat <<EOF | kubectl apply -f - | ||||||
|  |     apiVersion: kubevirt.io/v1alpha3 | ||||||
|  |     kind: VirtualMachineInstance | ||||||
|  |     metadata: | ||||||
|  |       name: cirros-vm | ||||||
|  |     spec: | ||||||
|  |       domain: | ||||||
|  |         devices: | ||||||
|  |           disks: | ||||||
|  |           - disk: | ||||||
|  |               bus: virtio | ||||||
|  |             name: pvcdisk | ||||||
|  |         machine: | ||||||
|  |           type: "" | ||||||
|  |         resources: | ||||||
|  |           requests: | ||||||
|  |             memory: 64M | ||||||
|  |       terminationGracePeriodSeconds: 0 | ||||||
|  |       volumes: | ||||||
|  |       - name: pvcdisk | ||||||
|  |         persistentVolumeClaim: | ||||||
|  |           claimName: cirros-vm-disk | ||||||
|  |     status: {} | ||||||
|  |     EOF | ||||||
|  | 
 | ||||||
|  | Connect to VirtualMachineInstance console | ||||||
|  | ----------------------------------------- | ||||||
|  | 
 | ||||||
Use `virtctl` to connect to the newly created `VirtualMachineInstance`.
|  | 
 | ||||||
|  |     virtctl console cirros-vm | ||||||
|  | Installation | ||||||
|  | ============ | ||||||
|  | 
 | ||||||
|  | KubeVirt is a virtualization add-on to Kubernetes and this guide assumes | ||||||
|  | that a Kubernetes cluster is already installed. | ||||||
|  | 
 | ||||||
|  | If installed on OKD, the web console is extended for management of | ||||||
|  | virtual machines. | ||||||
|  | 
 | ||||||
|  | Requirements | ||||||
|  | ------------ | ||||||
|  | 
 | ||||||
|  | A few requirements need to be met before you can begin: | ||||||
|  | 
 | ||||||
|  | -   [Kubernetes](https://kubernetes.io) cluster or derivative | ||||||
|  |     (such as [OpenShift](https://github.com/openshift/origin), Tectonic) | ||||||
|  |     based on Kubernetes 1.10 or greater | ||||||
|  | -   Kubernetes apiserver must have `--allow-privileged=true` in order to run KubeVirt's privileged DaemonSet. | ||||||
|  | -   `kubectl` client utility | ||||||
|  | 
 | ||||||
|  | ### Container Runtime Support | ||||||
|  | 
 | ||||||
|  | KubeVirt is currently supported on the following container runtimes: | ||||||
|  | 
 | ||||||
|  | -   docker | ||||||
|  | -   crio (with runv) | ||||||
|  | 
 | ||||||
Other container runtimes that do not use virtualization features should
work too; however, they are not tested.
|  | 
 | ||||||
|  | ### Validate Hardware Virtualization Support | ||||||
|  | 
 | ||||||
Hardware with virtualization support is recommended. You can use
`virt-host-validate` to ensure that your hosts are capable of running
virtualization workloads:
|  | 
 | ||||||
|  |     $ virt-host-validate qemu | ||||||
|  |       QEMU: Checking for hardware virtualization                                 : PASS | ||||||
|  |       QEMU: Checking if device /dev/kvm exists                                   : PASS | ||||||
|  |       QEMU: Checking if device /dev/kvm is accessible                            : PASS | ||||||
|  |       QEMU: Checking if device /dev/vhost-net exists                             : PASS | ||||||
|  |       QEMU: Checking if device /dev/net/tun exists                               : PASS | ||||||
|  |     ... | ||||||
|  | 
 | ||||||
|  | If hardware virtualization is not available, then a [software emulation | ||||||
|  | fallback](https://github.com/kubevirt/kubevirt/blob/master/docs/software-emulation.md) | ||||||
|  | can be enabled using: | ||||||
|  | 
 | ||||||
|  |     $ kubectl create namespace kubevirt | ||||||
|  |     $ kubectl create configmap -n kubevirt kubevirt-config \ | ||||||
|  |         --from-literal debug.useEmulation=true | ||||||
|  | 
 | ||||||
This ConfigMap needs to be created before KubeVirt is deployed;
otherwise, the virt-controller deployment has to be restarted.
|  | 
 | ||||||
|  | ## Installing KubeVirt on Kubernetes | ||||||
|  | 
 | ||||||
|  | KubeVirt can be installed using the KubeVirt operator, which manages the | ||||||
|  | lifecycle of all the KubeVirt core components. Below is an example of | ||||||
|  | how to install KubeVirt using an official release. | ||||||
|  | 
 | ||||||
|  |     # Pick an upstream version of KubeVirt to install | ||||||
|  |     $ export VERSION=v0.26.0 | ||||||
|  |     # Deploy the KubeVirt operator | ||||||
|  |     $ kubectl apply -f https://github.com/kubevirt/kubevirt/releases/download/${VERSION}/kubevirt-operator.yaml | ||||||
|  |     # Create the KubeVirt CR (instance deployment request) | ||||||
|  |     $ kubectl apply -f https://github.com/kubevirt/kubevirt/releases/download/${VERSION}/kubevirt-cr.yaml | ||||||
    # Wait until all KubeVirt components are up
|  |     $ kubectl -n kubevirt wait kv kubevirt --for condition=Available | ||||||
|  | 
 | ||||||
|  | > Note: Prior to release v0.20.0 the condition for the `kubectl wait` | ||||||
|  | > command was named "Ready" instead of "Available" | ||||||
|  | 
 | ||||||
|  | All new components will be deployed under the `kubevirt` namespace: | ||||||
|  | 
 | ||||||
|  |     kubectl get pods -n kubevirt | ||||||
|  |     NAME                                           READY     STATUS        RESTARTS   AGE | ||||||
|  |     virt-api-6d4fc3cf8a-b2ere                      1/1       Running       0          1m | ||||||
|  |     virt-controller-5d9fc8cf8b-n5trt               1/1       Running       0          1m | ||||||
|  |     virt-handler-vwdjx                             1/1       Running       0          1m | ||||||
|  |     ... | ||||||
|  | 
 | ||||||
|  | ## Installing KubeVirt on OKD | ||||||
|  | 
 | ||||||
The following
[SCC](https://docs.openshift.com/container-platform/3.11/admin_guide/manage_scc.html)
needs to be added prior to KubeVirt deployment:
|  | 
 | ||||||
|  |     $ oc adm policy add-scc-to-user privileged -n kubevirt -z kubevirt-operator | ||||||
|  | 
 | ||||||
Once privileges are granted, KubeVirt can be deployed as described above.
|  | 
 | ||||||
|  | ### Web user interface on OKD | ||||||
|  | 
 | ||||||
|  | No additional steps are required to extend OKD's web console for KubeVirt. | ||||||
|  | 
 | ||||||
|  | The virtualization extension is automatically enabled when KubeVirt deployment is detected. | ||||||
|  | 
 | ||||||
|  | ### From Service Catalog as an APB | ||||||
|  | 
 | ||||||
|  | You can find KubeVirt in the OKD Service Catalog and install it from | ||||||
|  | there. In order to do that please follow the documentation in the | ||||||
|  | [KubeVirt APB | ||||||
|  | repository](https://github.com/ansibleplaybookbundle/kubevirt-apb). | ||||||
|  | 
 | ||||||
|  | Deploying from Source | ||||||
|  | --------------------- | ||||||
|  | 
 | ||||||
|  | See the [Developer Getting Started | ||||||
|  | Guide](https://github.com/kubevirt/kubevirt/blob/master/docs/getting-started.md) | ||||||
|  | to understand how to build and deploy KubeVirt from source. | ||||||
|  | 
 | ||||||
|  | Installing network plugins (optional) | ||||||
|  | ------------------------------------- | ||||||
|  | 
 | ||||||
KubeVirt alone does not bring any additional network plugins; it just
allows users to utilize them. If you want to attach your VMs to multiple
networks (Multus CNI) or have full control over L2 (OVS CNI), you need
to deploy the respective network plugins. For more information, refer to
the [OVS CNI installation
guide](https://github.com/kubevirt/ovs-cni/blob/master/docs/deployment-on-arbitrary-cluster.md).
|  | 
 | ||||||
|  | > Note: KubeVirt Ansible [network | ||||||
|  | > playbook](https://github.com/kubevirt/kubevirt-ansible/tree/master/playbooks#network) | ||||||
|  | > installs these plugins by default. | ||||||
|  | 
 | ||||||
## Restricting virt-handler DaemonSet
|  | 
 | ||||||
|  | You can patch the `virt-handler` DaemonSet post-deployment to restrict | ||||||
|  | it to a specific subset of nodes with a nodeSelector. For example, to | ||||||
|  | restrict the DaemonSet to only nodes with the "region=primary" label: | ||||||
|  | 
 | ||||||
|  |     kubectl patch ds/virt-handler -n kubevirt -p '{"spec": {"template": {"spec": {"nodeSelector": {"region": "primary"}}}}}' | ||||||
|  | 
 | ||||||
|  | # Live Migration | ||||||
|  | 
 | ||||||
Live migration is a process during which a running Virtual Machine
Instance moves to another compute node while the guest workload
continues to run and remain accessible.
|  | 
 | ||||||
|  | ## Enabling the live-migration support | ||||||
|  | 
 | ||||||
Live migration must be enabled in the feature gates to be supported. The
`feature-gates` field in the kubevirt-config config map can be expanded
by adding `LiveMigration` to it.
|  | 
 | ||||||
|  | ``` | ||||||
|  |     apiVersion: v1 | ||||||
|  |     kind: ConfigMap | ||||||
|  |     metadata: | ||||||
|  |       name: kubevirt-config | ||||||
|  |       namespace: kubevirt | ||||||
|  |       labels: | ||||||
|  |         kubevirt.io: "" | ||||||
|  |     data: | ||||||
|  |       feature-gates: "LiveMigration" | ||||||
|  | ``` | ||||||
|  | 
 | ||||||
Alternatively, the existing kubevirt-config ConfigMap can be edited in
place:

```
kubectl edit configmap kubevirt-config -n kubevirt
```
|  | 
 | ||||||
|  | ``` | ||||||
|  |     data: | ||||||
|  |       feature-gates: "DataVolumes,LiveMigration" | ||||||
|  | ``` | ||||||
|  | 
 | ||||||
|  | ## Limitations | ||||||
|  | 
 | ||||||
|  | -   Virtual machines using a PersistentVolumeClaim (PVC) must have a | ||||||
|  |     shared ReadWriteMany (RWX) access mode to be live migrated. | ||||||
|  | 
 | ||||||
|  | -   Live migration is not allowed with a pod network binding of bridge | ||||||
|  |     interface type | ||||||
|  |     (<https://kubevirt.io/user-guide/docs/latest/creating-virtual-machines/interfaces-and-networks.html>) | ||||||
|  | 
 | ||||||
|  | ## Initiate live migration | ||||||
|  | 
 | ||||||
Live migration is initiated by posting a VirtualMachineInstanceMigration
(VMIM) object to the cluster. The example below starts a migration
process for a virtual machine instance `vmi-fedora`:
|  | 
 | ||||||
|  | ``` | ||||||
|  | apiVersion: kubevirt.io/v1alpha3 | ||||||
|  | kind: VirtualMachineInstanceMigration | ||||||
|  | metadata: | ||||||
|  |   name: migration-job | ||||||
|  | spec: | ||||||
|  |   vmiName: vmi-fedora | ||||||
|  | ``` | ||||||
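
Assuming the manifest above is saved as `migration-job.yaml` (a
hypothetical filename), the migration can be started and observed like
this (requires cluster access):

```shell
# Post the migration object to the cluster
kubectl apply -f migration-job.yaml
# Inspect the reported phase of the migration object
kubectl get vmim migration-job -o jsonpath='{.status.phase}'
```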
|  | 
 | ||||||
|  | ## Migration Status Reporting | ||||||
|  | 
 | ||||||
### Condition and migration method
|  | 
 | ||||||
When a virtual machine instance is started, it is also calculated
whether the instance is live migratable. The result is stored in
`VMI.status.conditions`. The calculation can be based on multiple
parameters of the VMI; however, at the moment, it is largely based on
the `Access Mode` of the VMI volumes. Live migration is only permitted
when the volume access mode is set to `ReadWriteMany`. Requests to
migrate a non-LiveMigratable VMI will be rejected.
|  | 
 | ||||||
The reported `Migration Method` is also calculated during VMI start.
`BlockMigration` indicates that some of the VMI disks require copying
from the source to the destination. `LiveMigration` means that only the
instance memory will be copied.
|  | 
 | ||||||
|  | ``` | ||||||
|  | Status: | ||||||
|  |   Conditions: | ||||||
|  |     Status:                True | ||||||
|  |     Type:                  LiveMigratable | ||||||
|  |   Migration Method:  BlockMigration | ||||||
|  | ``` | ||||||
|  | 
 | ||||||
### Migration Status
|  | 
 | ||||||
The migration progress is reported in `VMI.status`. Most importantly, it
indicates whether the migration has `Completed` or `Failed`.
|  | 
 | ||||||
|  | Below is an example of a successful migration. | ||||||
|  | 
 | ||||||
|  | ``` | ||||||
|  | Migration State: | ||||||
|  |     Completed:        true | ||||||
|  |     End Timestamp:    2019-03-29T03:37:52Z | ||||||
|  |     Migration Config: | ||||||
|  |       Completion Timeout Per GiB:  800 | ||||||
|  |       Progress Timeout:             150 | ||||||
|  |     Migration UID:                  c64d4898-51d3-11e9-b370-525500d15501 | ||||||
|  |     Source Node:                    node02 | ||||||
|  |     Start Timestamp:                2019-03-29T04:02:47Z | ||||||
|  |     Target Direct Migration Node Ports: | ||||||
|  |       35001:                      0 | ||||||
|  |       41068:                      49152 | ||||||
|  |       38284:                      49153 | ||||||
|  |     Target Node:                  node01 | ||||||
|  |     Target Node Address:          10.128.0.46 | ||||||
|  |     Target Node Domain Detected:  true | ||||||
|  |     Target Pod:                   virt-launcher-testvmimcbjgw6zrzcmp8wpddvztvzm7x2k6cjbdgktwv8tkq | ||||||
|  | ``` | ||||||
|  | 
 | ||||||
|  | ## Cancel live migration | ||||||
|  | 
 | ||||||
Live migration can also be canceled by simply deleting the migration
object. A successfully aborted migration will indicate that the abort
has been requested (`Abort Requested`) and that it succeeded
(`Abort Status: Succeeded`). The migration in this case will be both
`Completed` and `Failed`.
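For the example migration created earlier, cancellation therefore amounts to deleting the VMIM object:

```
kubectl delete vmim migration-job
```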
|  | 
 | ||||||
|  | ``` | ||||||
|  | Migration State: | ||||||
|  |     Abort Requested:  true | ||||||
|  |     Abort Status:     Succeeded | ||||||
|  |     Completed:        true | ||||||
|  |     End Timestamp:    2019-03-29T04:02:49Z | ||||||
|  |     Failed:           true | ||||||
|  |     Migration Config: | ||||||
|  |       Completion Timeout Per GiB:  800 | ||||||
|  |       Progress Timeout:             150 | ||||||
|  |     Migration UID:                  57a693d6-51d7-11e9-b370-525500d15501 | ||||||
|  |     Source Node:                    node02 | ||||||
|  |     Start Timestamp:                2019-03-29T04:02:47Z | ||||||
|  |     Target Direct Migration Node Ports: | ||||||
|  |       39445:                      0 | ||||||
|  |       43345:                      49152 | ||||||
|  |       44222:                      49153 | ||||||
|  |     Target Node:                  node01 | ||||||
|  |     Target Node Address:          10.128.0.46 | ||||||
|  |     Target Node Domain Detected:  true | ||||||
|  |     Target Pod:                   virt-launcher-testvmimcbjgw6zrzcmp8wpddvztvzm7x2k6cjbdgktwv8tkq | ||||||
|  | ``` | ||||||
|  | 
 | ||||||
|  | ## Changing Cluster Wide Migration Limits | ||||||
|  | 
 | ||||||
KubeVirt puts some limits in place so that migrations don't overwhelm
the cluster. By default, it is configured to run only `5` migrations in
parallel, with an additional limit of a maximum of `2` outbound
migrations per node. Finally, every migration is limited to a bandwidth
of `64MiB/s`.
|  | 
 | ||||||
These values can be changed in the `kubevirt-config` ConfigMap:
|  | 
 | ||||||
|  | ``` | ||||||
|  |     apiVersion: v1 | ||||||
|  |     kind: ConfigMap | ||||||
|  |     metadata: | ||||||
|  |       name: kubevirt-config | ||||||
|  |       namespace: kubevirt | ||||||
|  |       labels: | ||||||
|  |         kubevirt.io: "" | ||||||
|  |     data: | ||||||
|  |       feature-gates: "LiveMigration" | ||||||
|  |       migrations: |- | ||||||
|  |         parallelMigrationsPerCluster: 5 | ||||||
|  |         parallelOutboundMigrationsPerNode: 2 | ||||||
|  |         bandwidthPerMigration: 64Mi | ||||||
|  |         completionTimeoutPerGiB: 800 | ||||||
|  |         progressTimeout: 150 | ||||||
|  | ``` | ||||||
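One way to change these values on a running cluster is to patch the ConfigMap in place. Note that the patch below replaces the whole `migrations` string, so all keys have to be restated (the raised bandwidth value is illustrative):

```
kubectl patch configmap kubevirt-config -n kubevirt --type merge -p \
  '{"data":{"migrations":"parallelMigrationsPerCluster: 5\nparallelOutboundMigrationsPerNode: 2\nbandwidthPerMigration: 128Mi\ncompletionTimeoutPerGiB: 800\nprogressTimeout: 150"}}'
```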
|  | 
 | ||||||
## Migration timeouts
|  | 
 | ||||||
Depending on the type, the live migration process copies virtual
machine memory pages and disk blocks to the destination. During this
process, non-locked pages and blocks are copied and become free for the
instance to use again. For a migration to succeed, it is assumed that
the instance will write to the freed pages and blocks (dirty the pages)
at a lower rate than they are copied.
|  | 
 | ||||||
### Completion time
|  | 
 | ||||||
In some cases the virtual machine can write to different memory pages /
disk blocks at a higher rate than these can be copied, which prevents
the migration process from completing in a reasonable amount of time. In
this case, live migration will be aborted if it runs for a long period
of time. The timeout is calculated based on the size of the VMI: its
memory and the ephemeral disks that need to be copied. The configurable
parameter `completionTimeoutPerGiB`, which defaults to 800s, is the time
to wait per GiB of data for the migration to complete before aborting
it. A VMI with 8GiB of memory will time out after 6400 seconds.
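The timeout is a simple product of the data size and the per-GiB value; a minimal sketch of the calculation (the helper name is ours):

```python
def migration_completion_timeout(data_gib, completion_timeout_per_gib=800):
    """Seconds to wait for a migration of `data_gib` GiB of memory and
    ephemeral disk data before aborting it."""
    return data_gib * completion_timeout_per_gib

# A VMI with 8 GiB to copy times out after 6400 seconds.
print(migration_completion_timeout(8))
```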
|  | 
 | ||||||
### Progress timeout
|  | 
 | ||||||
Live migration will also be aborted when it is noticed that copying
memory does not make any progress. The time to wait for live migration
to make progress in transferring data is configurable via the
`progressTimeout` parameter, which defaults to 150s.
|  | Monitoring KubeVirt components | ||||||
|  | ============================== | ||||||
|  | 
 | ||||||
|  | All KubeVirt system-components expose Prometheus metrics at their | ||||||
|  | `/metrics` REST endpoint. | ||||||
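For a quick manual check, the endpoint of a single component can be scraped directly; the pod IP below is illustrative, and `-k` is needed because the endpoint typically serves a certificate not signed by a well-known CA:

```
curl -k https://10.244.0.5:8443/metrics
```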
|  | 
 | ||||||
|  | Custom Service Discovery | ||||||
|  | ------------------------ | ||||||
|  | 
 | ||||||
|  | Prometheus supports service discovery based on | ||||||
|  | [Pods](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#pod) | ||||||
|  | and | ||||||
|  | [Endpoints](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#endpoints) | ||||||
|  | out of the box. Both can be used to discover KubeVirt services. | ||||||
|  | 
 | ||||||
|  | All Pods which expose metrics are labeled with `prometheus.kubevirt.io` | ||||||
|  | and contain a port-definition which is called `metrics`. In the KubeVirt | ||||||
|  | release-manifests, the default `metrics` port is `8443`. | ||||||
|  | 
 | ||||||
The above labels and port information are collected by a `Service`
called `kubevirt-prometheus-metrics`. Kubernetes automatically creates a
corresponding `Endpoints` object with the same name:
|  | 
 | ||||||
|  |     $ kubectl get endpoints -n kubevirt kubevirt-prometheus-metrics -o yaml | ||||||
|  |     apiVersion: v1 | ||||||
|  |     kind: Endpoints | ||||||
|  |     metadata: | ||||||
|  |       labels: | ||||||
|  |         kubevirt.io: "" | ||||||
|  |         prometheus.kubevirt.io: "" | ||||||
|  |       name: kubevirt-prometheus-metrics | ||||||
|  |       namespace: kubevirt | ||||||
|  |     subsets: | ||||||
|  |     - addresses: | ||||||
|  |       - ip: 10.244.0.5 | ||||||
|  |         nodeName: node01 | ||||||
|  |         targetRef: | ||||||
|  |           kind: Pod | ||||||
|  |           name: virt-handler-cjzg6 | ||||||
|  |           namespace: kubevirt | ||||||
|  |           resourceVersion: "4891" | ||||||
|  |           uid: c67331f9-bfcf-11e8-bc54-525500d15501 | ||||||
|  |       - ip: 10.244.0.6 | ||||||
|  |       [...] | ||||||
|  |       ports: | ||||||
|  |       - name: metrics | ||||||
|  |         port: 8443 | ||||||
|  |         protocol: TCP | ||||||
|  | 
 | ||||||
|  | By watching this endpoint for added and removed IPs to | ||||||
|  | `subsets.addresses` and appending the `metrics` port from | ||||||
|  | `subsets.ports`, it is possible to always get a complete list of | ||||||
|  | ready-to-be-scraped Prometheus targets. | ||||||
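A minimal Prometheus scrape configuration built on endpoints discovery could look like the following sketch (the job name is ours; the label and port names match the KubeVirt manifests, and the `labelpresent` meta label assumes a reasonably recent Prometheus):

```yaml
scrape_configs:
- job_name: kubevirt
  scheme: https
  tls_config:
    insecure_skip_verify: true
  kubernetes_sd_configs:
  - role: endpoints
    namespaces:
      names: [kubevirt]
  relabel_configs:
  # Keep only targets whose backing pod carries the prometheus.kubevirt.io label ...
  - source_labels: [__meta_kubernetes_pod_labelpresent_prometheus_kubevirt_io]
    action: keep
    regex: "true"
  # ... and only the port named "metrics"
  - source_labels: [__meta_kubernetes_endpoint_port_name]
    action: keep
    regex: metrics
```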
|  | 
 | ||||||
|  | Integrating with the prometheus-operator | ||||||
|  | ---------------------------------------- | ||||||
|  | 
 | ||||||
|  | The [prometheus-operator](https://github.com/coreos/prometheus-operator) | ||||||
|  | can make use of the `kubevirt-prometheus-metrics` service to | ||||||
|  | automatically create the appropriate Prometheus config. | ||||||
|  | 
 | ||||||
|  | KubeVirt’s `virt-operator` checks if the `ServiceMonitor` custom | ||||||
|  | resource exists when creating an install strategy for deployment. | ||||||
|  | KubeVirt will automatically create a `ServiceMonitor` resource in the | ||||||
|  | `monitorNamespace`, as well as an appropriate role and rolebinding in | ||||||
|  | KubeVirt’s namespace. | ||||||
|  | 
 | ||||||
|  | Two settings are exposed in the `KubeVirt` custom resource to direct | ||||||
|  | KubeVirt to create these resources correctly: | ||||||
|  | 
 | ||||||
|  | -   `monitorNamespace`: The namespace that prometheus-operator runs in. | ||||||
|  |     Defaults to `openshift-monitoring`. | ||||||
|  | 
 | ||||||
|  | -   `monitorAccount`: The serviceAccount that prometheus-operator runs | ||||||
|  |     with. Defaults to `prometheus-k8s`. | ||||||
|  | 
 | ||||||
|  | If the prometheus-operator for a given deployment uses these defaults, | ||||||
|  | then these values can be omitted. | ||||||
|  | 
 | ||||||
|  | An example of the KubeVirt resource depicting these default values: | ||||||
|  | 
 | ||||||
|  |     apiVersion: kubevirt.io/v1alpha3 | ||||||
|  |     kind: KubeVirt | ||||||
|  |     metadata: | ||||||
|  |       name: kubevirt | ||||||
|  |     spec: | ||||||
|  |       monitorNamespace: openshift-monitoring | ||||||
|  |       monitorAccount: prometheus-k8s | ||||||
|  | 
 | ||||||
|  | Integrating with the OKD cluster-monitoring-operator | ||||||
|  | ---------------------------------------------------- | ||||||
|  | 
 | ||||||
|  | After the | ||||||
|  | [cluster-monitoring-operator](https://github.com/openshift/cluster-monitoring-operator) | ||||||
|  | is up and running, KubeVirt will detect the existence of the | ||||||
|  | `ServiceMonitor` resource. Because the definition contains the | ||||||
|  | `openshift.io/cluster-monitoring` label, it will automatically be picked | ||||||
|  | up by the cluster monitor. | ||||||
|  | 
 | ||||||
|  | Metrics about Virtual Machines | ||||||
|  | ------------------------------ | ||||||
|  | 
 | ||||||
|  | The endpoints report metrics related to the runtime behaviour of the | ||||||
|  | Virtual Machines. All the relevant metrics are prefixed with | ||||||
|  | `kubevirt_vmi`. | ||||||
|  | 
 | ||||||
The metrics have labels that allow correlating them with the VMI objects
they refer to. At a minimum, the labels expose the `node`, `name` and
`namespace` of the related VMI object.
|  | 
 | ||||||
|  | For example, reported metrics could look like | ||||||
|  | 
 | ||||||
|  | ``` | ||||||
|  | kubevirt_vmi_memory_resident_bytes{domain="default_vm-test-01",name="vm-test-01",namespace="default",node="node01"} 2.5595904e+07 | ||||||
|  | kubevirt_vmi_network_traffic_bytes_total{domain="default_vm-test-01",interface="vnet0",name="vm-test-01",namespace="default",node="node01",type="rx"} 8431 | ||||||
|  | kubevirt_vmi_network_traffic_bytes_total{domain="default_vm-test-01",interface="vnet0",name="vm-test-01",namespace="default",node="node01",type="tx"} 1835 | ||||||
|  | kubevirt_vmi_vcpu_seconds{domain="default_vm-test-01",id="0",name="vm-test-01",namespace="default",node="node01",state="1"} 19 | ||||||
|  | ``` | ||||||
|  | 
 | ||||||
Please note the `domain` label in the above example. This label is
deprecated and will be removed in a future release. You should
identify the VMI using the `node`, `namespace` and `name` labels instead.
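With these labels in place, per-VMI aggregations need no reference to `domain`; for example, an illustrative PromQL query for the received network traffic rate per VMI:

```
sum by (namespace, name) (rate(kubevirt_vmi_network_traffic_bytes_total{type="rx"}[5m]))
```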
|  | 
 | ||||||
|  | Important Queries | ||||||
|  | ----------------- | ||||||
|  | 
 | ||||||
|  | ### Detecting connection issues for the REST client | ||||||
|  | 
 | ||||||
Use the following query to get a counter for all REST calls that
indicate connection issues:
|  | 
 | ||||||
|  |     rest_client_requests_total{code="<error>"} | ||||||
|  | 
 | ||||||
If this counter is continuously increasing, it is an indicator that the
corresponding KubeVirt component has general issues connecting to the
apiserver.
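To watch for this condition rather than eyeballing the raw counter, the increase can be expressed as a rate; an illustrative PromQL expression:

```
rate(rest_client_requests_total{code="<error>"}[5m]) > 0
```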
|  | VirtualMachineInstance Node Eviction | ||||||
|  | ==================================== | ||||||
|  | 
 | ||||||
|  | Before removing a kubernetes node from the cluster, users will want to | ||||||
|  | ensure that VirtualMachineInstances have been gracefully terminated | ||||||
|  | before powering down the node. Since all VirtualMachineInstances are | ||||||
|  | backed by a Pod, the recommended method of evicting | ||||||
|  | VirtualMachineInstances is to use the **kubectl drain** command, or in | ||||||
|  | the case of OKD the **oc adm drain** command. | ||||||
|  | 
 | ||||||
|  | How to Evict all VMs on a Node | ||||||
|  | ------------------------------ | ||||||
|  | 
 | ||||||
|  | Select the node you’d like to evict VirtualMachineInstances from by | ||||||
|  | identifying the node from the list of cluster nodes. | ||||||
|  | 
 | ||||||
|  | `kubectl get nodes` | ||||||
|  | 
 | ||||||
The following command will gracefully terminate all VMs on a specific
node. Replace `<node name>` with the target node you want the eviction
to occur on.
|  | 
 | ||||||
|  | `kubectl drain <node name> --delete-local-data --ignore-daemonsets=true --force --pod-selector=kubevirt.io=virt-launcher` | ||||||
|  | 
 | ||||||
Below is a breakdown of why each argument passed to the drain command
is required.
|  | 
 | ||||||
-   `kubectl drain <node name>` selects a specific node as the target
    of the eviction
|  | 
 | ||||||
-   `--delete-local-data` is a required flag that is necessary for
    removing any pod that utilizes an emptyDir volume. The
    VirtualMachineInstance Pod does use emptyDir volumes, however the
    data in those volumes is ephemeral, which means it is safe to delete
    after termination.
|  | 
 | ||||||
|  | -   `--ignore-daemonsets=true` is a required flag because every node | ||||||
|  |     running a VirtualMachineInstance will also be running our helper | ||||||
|  |     DaemonSet called virt-handler. DaemonSets are not allowed to be | ||||||
|  |     evicted using **kubectl drain**. By default, if this command | ||||||
|  |     encounters a DaemonSet on the target node, the command will fail. | ||||||
|  |     This flag tells the command it is safe to proceed with the eviction | ||||||
|  |     and to just ignore DaemonSets. | ||||||
|  | 
 | ||||||
-   `--force` is a required flag because VirtualMachineInstance pods are
    not owned by a ReplicaSet or DaemonSet controller. This means
    kubectl can't guarantee that the pods being terminated on the target
    node will get re-scheduled replacements placed elsewhere in the
    cluster after the pods are evicted. KubeVirt has its own controllers
    which manage the underlying VirtualMachineInstance pods. Each
    controller behaves differently when a VirtualMachineInstance is
    evicted. That behavior is outlined further down in this document.
|  | 
 | ||||||
|  | -   `--pod-selector=kubevirt.io=virt-launcher` means only | ||||||
|  |     VirtualMachineInstance pods managed by KubeVirt will be removed from | ||||||
|  |     the node. | ||||||
|  | 
 | ||||||
|  | How to Evict all VMs and Pods on a Node | ||||||
|  | --------------------------------------- | ||||||
|  | 
 | ||||||
By removing the **--pod-selector** argument from the previous command, we
can issue the eviction of all Pods on a node. This command ensures that
Pods associated with VMs, as well as all other Pods, are evicted from
the target node.
|  | 
 | ||||||
|  | `kubectl drain <node name> --delete-local-data --ignore-daemonsets=true --force` | ||||||
|  | 
 | ||||||
|  | How to evacuate VMIs via Live Migration from a Node | ||||||
|  | --------------------------------------------------- | ||||||
|  | 
 | ||||||
If the **LiveMigration** feature gate is enabled, it is possible to
specify an `evictionStrategy` on VMIs which will react to specific
taints on nodes with live migrations. The following snippet on a VMI
ensures that the VMI is migrated if the `kubevirt.io/drain:NoSchedule`
taint is added to a node:
|  | 
 | ||||||
|  |     spec: | ||||||
|  |       evictionStrategy: LiveMigrate | ||||||
|  | 
 | ||||||
Here is a full VMI:
|  | 
 | ||||||
|  |     apiVersion: kubevirt.io/v1alpha3 | ||||||
|  |     kind: VirtualMachineInstance | ||||||
|  |     metadata: | ||||||
|  |       name: testvmi-nocloud | ||||||
|  |     spec: | ||||||
|  |       terminationGracePeriodSeconds: 30 | ||||||
|  |       evictionStrategy: LiveMigrate | ||||||
|  |       domain: | ||||||
|  |         resources: | ||||||
|  |           requests: | ||||||
|  |             memory: 1024M | ||||||
|  |         devices: | ||||||
|  |           disks: | ||||||
|  |           - name: containerdisk | ||||||
|  |             disk: | ||||||
|  |               bus: virtio | ||||||
|  |           - disk: | ||||||
|  |               bus: virtio | ||||||
|  |             name: cloudinitdisk | ||||||
|  |       volumes: | ||||||
|  |       - name: containerdisk | ||||||
|  |         containerDisk: | ||||||
|  |           image: kubevirt/fedora-cloud-container-disk-demo:latest | ||||||
|  |       - name: cloudinitdisk | ||||||
|  |         cloudInitNoCloud: | ||||||
|  |           userData: |- | ||||||
|  |             #cloud-config | ||||||
|  |             password: fedora | ||||||
|  |             chpasswd: { expire: False } | ||||||
|  | 
 | ||||||
|  | Once the VMI is created, taint the node with | ||||||
|  | 
 | ||||||
|  |     kubectl taint nodes foo kubevirt.io/drain=draining:NoSchedule | ||||||
|  | 
 | ||||||
|  | which will trigger a migration. | ||||||
|  | 
 | ||||||
Behind the scenes, a **PodDisruptionBudget** is created for each VMI
which has an **evictionStrategy** defined. This ensures that evictions
are blocked on these VMIs and guarantees that a VMI will be migrated
instead of shut off.
|  | 
 | ||||||
**Note:** While the **evictionStrategy** blocks the shutdown of VMIs
during evictions, the live migration process is detached from the drain
process itself. Therefore it is necessary to add the specified taints
explicitly as part of the drain process, until a better-integrated
solution exists.
|  | 
 | ||||||
By default, KubeVirt will react with live migrations if the taint
`kubevirt.io/drain:NoSchedule` is added to a node. A different key can
be configured in the `kubevirt-config` config map by setting
`nodeDrainTaintKey` in the migration options:
|  | 
 | ||||||
|  |     apiVersion: v1 | ||||||
|  |     kind: ConfigMap | ||||||
|  |     metadata: | ||||||
|  |       name: kubevirt-config | ||||||
|  |       namespace: kubevirt | ||||||
|  |       labels: | ||||||
|  |         kubevirt.io: "" | ||||||
|  |     data: | ||||||
|  |       feature-gates: "LiveMigration" | ||||||
|  |       migrations: |- | ||||||
|  |         nodeDrainTaintKey: mytaint/drain | ||||||
|  | 
 | ||||||
|  | The default value is `kubevirt.io/drain`. With the change above | ||||||
|  | migrations can be triggered with | ||||||
|  | 
 | ||||||
|  |     kubectl taint nodes foo mytaint/drain=draining:NoSchedule | ||||||
|  | 
 | ||||||
Here is a full drain flow for nodes which includes VMI live migrations
with the default setting:
|  | 
 | ||||||
|  |     kubectl taint nodes foo kubevirt.io/drain=draining:NoSchedule | ||||||
|  |     kubectl drain foo --delete-local-data --ignore-daemonsets=true --force | ||||||
|  | 
 | ||||||
|  | To make the node schedulable again, run | ||||||
|  | 
 | ||||||
|  |     kubectl taint nodes foo kubevirt.io/drain- | ||||||
|  |     kubectl uncordon foo | ||||||
|  | 
 | ||||||
|  | Re-enabling a Node after Eviction | ||||||
|  | --------------------------------- | ||||||
|  | 
 | ||||||
The **kubectl drain** command will result in the target node being
marked as unschedulable. This means the node will not be eligible for
running new VirtualMachineInstances or Pods.
|  | 
 | ||||||
|  | If it is decided that the target node should become schedulable again, | ||||||
|  | the following command must be run. | ||||||
|  | 
 | ||||||
|  | `kubectl uncordon <node name>` | ||||||
|  | 
 | ||||||
or, in the case of OKD:
|  | 
 | ||||||
|  | `oc adm uncordon <node name>` | ||||||
|  | 
 | ||||||
|  | Shutting down a Node after Eviction | ||||||
|  | ----------------------------------- | ||||||
|  | 
 | ||||||
From KubeVirt's perspective, a node is safe to shut down once all
VirtualMachineInstances have been evicted from it. In a multi-use
cluster where VirtualMachineInstances are scheduled alongside other
containerized workloads, it is up to the cluster admin to ensure all
other pods have been safely evicted before powering down the node.
|  | 
 | ||||||
|  | VirtualMachine Evictions | ||||||
|  | ------------------------ | ||||||
|  | 
 | ||||||
|  | The eviction of any VirtualMachineInstance that is owned by a | ||||||
|  | VirtualMachine set to **running=true** will result in the | ||||||
|  | VirtualMachineInstance being re-scheduled to another node. | ||||||
|  | 
 | ||||||
|  | The VirtualMachineInstance in this case will be forced to power down and | ||||||
|  | restart on another node. In the future once KubeVirt introduces live | ||||||
|  | migration support, the VM will be able to seamlessly migrate to another | ||||||
|  | node during eviction. | ||||||
|  | 
 | ||||||
|  | VirtualMachineInstanceReplicaSet Eviction Behavior | ||||||
|  | -------------------------------------------------- | ||||||
|  | 
 | ||||||
|  | The eviction of VirtualMachineInstances owned by a | ||||||
|  | VirtualMachineInstanceReplicaSet will result in the | ||||||
|  | VirtualMachineInstanceReplicaSet scheduling replacements for the evicted | ||||||
|  | VirtualMachineInstances on other nodes in the cluster. | ||||||
|  | 
 | ||||||
|  | VirtualMachineInstance Eviction Behavior | ||||||
|  | ---------------------------------------- | ||||||
|  | 
 | ||||||
VirtualMachineInstances not backed by either a
VirtualMachineInstanceReplicaSet or a VirtualMachine object will not be
re-scheduled after eviction.
|  | Detecting and resolving Node issues | ||||||
|  | =================================== | ||||||
|  | 
 | ||||||
|  | KubeVirt has its own node daemon, called virt-handler. In addition to | ||||||
|  | the usual k8s methods of detecting issues on nodes, the virt-handler | ||||||
|  | daemon has its own heartbeat mechanism. This allows for fine-tuned error | ||||||
|  | handling of VirtualMachineInstances. | ||||||
|  | 
 | ||||||
|  | Heartbeat of virt-handler | ||||||
|  | ------------------------- | ||||||
|  | 
 | ||||||
|  | `virt-handler` periodically tries to update the | ||||||
|  | `kubevirt.io/schedulable` label and the `kubevirt.io/heartbeat` | ||||||
|  | annotation on the node it is running on: | ||||||
|  | 
 | ||||||
|  |     $ kubectl get nodes -o yaml | ||||||
|  |     apiVersion: v1 | ||||||
|  |     items: | ||||||
|  |     - apiVersion: v1 | ||||||
|  |       kind: Node | ||||||
|  |       metadata: | ||||||
|  |         annotations: | ||||||
|  |           kubevirt.io/heartbeat: 2018-11-05T09:42:25Z | ||||||
|  |         creationTimestamp: 2018-11-05T08:55:53Z | ||||||
|  |         labels: | ||||||
|  |           beta.kubernetes.io/arch: amd64 | ||||||
|  |           beta.kubernetes.io/os: linux | ||||||
|  |           cpumanager: "false" | ||||||
|  |           kubernetes.io/hostname: node01 | ||||||
|  |           kubevirt.io/schedulable: "true" | ||||||
|  |           node-role.kubernetes.io/master: "" | ||||||
|  | 
 | ||||||
When a `VirtualMachineInstance` gets scheduled, the scheduler only
considers nodes where `kubevirt.io/schedulable` is `true`. This can be
seen when looking at the corresponding pod of a
`VirtualMachineInstance`:
|  | 
 | ||||||
|  |     $ kubectl get pods  virt-launcher-vmi-nocloud-ct6mr -o yaml | ||||||
|  |     apiVersion: v1 | ||||||
|  |     kind: Pod | ||||||
|  |     metadata: | ||||||
|  |       [...] | ||||||
|  |     spec: | ||||||
|  |       [...] | ||||||
|  |       nodeName: node01 | ||||||
|  |       nodeSelector: | ||||||
|  |         kubevirt.io/schedulable: "true" | ||||||
|  |       [...] | ||||||
|  | 
 | ||||||
In case there is a communication issue or the host goes down,
`virt-handler` can't update its labels and annotations anymore. Once
the last `kubevirt.io/heartbeat` timestamp is older than five minutes,
the KubeVirt node-controller kicks in and sets the
`kubevirt.io/schedulable` label to `false`. As a consequence, no more
VMIs will be scheduled to this node until virt-handler is connected
again.
|  | 
 | ||||||
|  | Deleting stuck VMIs when virt-handler is unresponsive | ||||||
|  | ----------------------------------------------------- | ||||||
|  | 
 | ||||||
In cases where `virt-handler` has issues but the node is otherwise
fine, a `VirtualMachineInstance` can be deleted as usual via
`kubectl delete vmi <myvmi>`. Pods of a `VirtualMachineInstance` will be
told by the cluster-controllers to shut down. As soon as the Pod is
gone, the `VirtualMachineInstance` will be moved to `Failed` state if
`virt-handler` did not manage to update its heartbeat in the meantime.
If `virt-handler` recovered in the meantime, it will move the
`VirtualMachineInstance` to `Failed` state instead of the
cluster-controllers.
|  | 
 | ||||||
|  | Deleting stuck VMIs when the whole node is unresponsive | ||||||
|  | ------------------------------------------------------- | ||||||
|  | 
 | ||||||
|  | If the whole node is unresponsive, deleting a `VirtualMachineInstance` | ||||||
|  | via `kubectl delete vmi <myvmi>` alone will never remove the | ||||||
|  | `VirtualMachineInstance`. In this case all pods on the unresponsive node | ||||||
|  | need to be force-deleted: First make sure that the node is really dead. | ||||||
|  | Then delete all pods on the node via a force-delete: | ||||||
|  | `kubectl delete pod --force --grace-period=0 <mypod>`. | ||||||
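A sketch that force-deletes every pod on the dead node in one pass (the node name is a placeholder):

```
NODE=node01
for ns_pod in $(kubectl get pods --all-namespaces --field-selector spec.nodeName=$NODE \
    -o jsonpath='{range .items[*]}{.metadata.namespace}/{.metadata.name}{"\n"}{end}'); do
  kubectl delete pod -n "${ns_pod%%/*}" "${ns_pod##*/}" --force --grace-period=0
done
```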
|  | 
 | ||||||
As soon as the pod disappears and the heartbeat from virt-handler has
timed out, the VMIs will be moved to `Failed` state. If they were
already marked for deletion they will simply disappear. If not, they can
be deleted and will disappear almost immediately.
|  | 
 | ||||||
|  | Timing considerations | ||||||
|  | --------------------- | ||||||
|  | 
 | ||||||
It takes up to five minutes until the KubeVirt cluster components detect
that virt-handler is unhealthy. During that time-frame it is possible
that new VMIs are scheduled to the affected node. If virt-handler is not
capable of connecting to these pods on the node, the pods will sooner or
later go to `Failed` state. As soon as the cluster finally detects the
issue, the VMIs will be set to failed by the cluster.
|  | # Updating KubeVirt | ||||||
|  | 
 | ||||||
|  | Zero downtime rolling updates are supported starting with release | ||||||
|  | `v0.17.0` onward. Updating from any release prior to the KubeVirt | ||||||
|  | `v0.17.0` release is not supported. | ||||||
|  | 
 | ||||||
|  | > Note: Updating is only supported from N-1 to N release. | ||||||
|  | 
 | ||||||
Updates are triggered in one of two ways.
|  | 
 | ||||||
|  | 1.  By changing the imageTag value in the KubeVirt CR’s spec. | ||||||
|  | 
 | ||||||
|  | For example, updating from `v0.17.0-alpha.1` to `v0.17.0` is as simple | ||||||
|  | as patching the KubeVirt CR with the `imageTag: v0.17.0` value. From | ||||||
|  | there the KubeVirt operator will begin the process of rolling out the | ||||||
|  | new version of KubeVirt. Existing VM/VMIs will remain uninterrupted both | ||||||
|  | during and after the update succeeds. | ||||||
|  | 
 | ||||||
|  |     $ kubectl patch kv kubevirt -n kubevirt --type=json -p '[{ "op": "add", "path": "/spec/imageTag", "value": "v0.17.0" }]' | ||||||
|  | 
 | ||||||
|  | 2.  Or, by updating the kubevirt operator if no imageTag value is set. | ||||||
|  | 
 | ||||||
|  | When no imageTag value is set in the kubevirt CR, the system assumes | ||||||
|  | that the version of KubeVirt is locked to the version of the operator. | ||||||
|  | This means that updating the operator will result in the underlying | ||||||
|  | KubeVirt installation being updated as well. | ||||||
|  | 
 | ||||||
|  |     $ export RELEASE=v0.26.0 | ||||||
|  |     $ kubectl apply -f https://github.com/kubevirt/kubevirt/releases/download/${RELEASE}/kubevirt-operator.yaml | ||||||
|  | 
 | ||||||
The first way provides a fine-grained approach where you have full
control over what version of KubeVirt is installed, independently of
what version of the KubeVirt operator you might be running. The second
approach locks both the operator and operand to the same version.
|  | 
 | ||||||
Newer KubeVirt releases may require additional or extended RBAC rules.
In this case, the first update method may fail because the virt-operator
present in the cluster doesn't have these RBAC rules itself. You then
need to update the `virt-operator` first, and then proceed to update
KubeVirt. See [this issue for more
details](https://github.com/kubevirt/kubevirt/issues/2533).
|  | 
 | ||||||
|  | # Deleting KubeVirt | ||||||
|  | 
 | ||||||
To delete KubeVirt, first delete the `KubeVirt` custom resource and then
delete the KubeVirt operator.
|  | 
 | ||||||
|  |     $ export RELEASE=v0.17.0 | ||||||
|  |     $ kubectl delete -n kubevirt kubevirt kubevirt --wait=true # --wait=true should anyway be default | ||||||
|  |     $ kubectl delete apiservices v1alpha3.subresources.kubevirt.io # this needs to be deleted to avoid stuck terminating namespaces | ||||||
|  |     $ kubectl delete mutatingwebhookconfigurations virt-api-mutator # not blocking but would be left over | ||||||
|  |     $ kubectl delete validatingwebhookconfigurations virt-api-validator # not blocking but would be left over | ||||||
|  |     $ kubectl delete -f https://github.com/kubevirt/kubevirt/releases/download/${RELEASE}/kubevirt-operator.yaml --wait=false | ||||||
|  | 
 | ||||||
> Note: If by mistake you deleted the operator first, the KV custom
> resource will get stuck in the `Terminating` state. To fix it,
> manually delete the finalizer from the resource.
|  | > | ||||||
|  | > Note: The `apiservice` and the `webhookconfigurations` need to be | ||||||
|  | > deleted manually due to a bug. | ||||||
|  | > | ||||||
|  | >     $ kubectl -n kubevirt patch kv kubevirt --type=json -p '[{ "op": "remove", "path": "/metadata/finalizers" }]' | ||||||
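The finalizer patch in the note above is a JSON Patch `remove` operation. A minimal sketch of its semantics, assuming a hypothetical, trimmed-down CR (the finalizer name below is illustrative, not KubeVirt's real one):

```python
# Sketch of the JSON Patch "remove" operation used to unstick a
# Terminating resource. `kv` is a hypothetical, trimmed-down CR and
# "example-finalizer" is an illustrative finalizer name.

def apply_remove_op(resource, path):
    """Apply a single RFC 6902 'remove' operation to a nested dict."""
    keys = path.strip("/").split("/")
    node = resource
    for key in keys[:-1]:
        node = node[key]
    del node[keys[-1]]
    return resource

kv = {"metadata": {"name": "kubevirt", "finalizers": ["example-finalizer"]}}
apply_remove_op(kv, "/metadata/finalizers")
print(kv)  # -> {'metadata': {'name': 'kubevirt'}}
```

Once the `finalizers` list is gone, Kubernetes garbage collection can complete the deletion.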
|  | 
 | ||||||
|  | 
 | ||||||
|  | # Retrieving the `virtctl` client tool | ||||||
|  | 
 | ||||||
|  | Basic VirtualMachineInstance operations can be performed with the stock | ||||||
|  | `kubectl` utility. However, the `virtctl` binary utility is required to | ||||||
|  | use advanced features such as: | ||||||
|  | 
 | ||||||
|  | -   Serial and graphical console access | ||||||
|  | 
 | ||||||
|  | It also provides convenience commands for: | ||||||
|  | 
 | ||||||
|  | -   Starting and stopping VirtualMachineInstances | ||||||
|  | 
 | ||||||
|  | -   Live migrating VirtualMachineInstances | ||||||
|  | 
 | ||||||
|  | There are two ways to get it: | ||||||
|  | 
 | ||||||
|  | -   the most recent version of the tool can be retrieved from the | ||||||
|  |     [official release | ||||||
|  |     page](https://github.com/kubevirt/kubevirt/releases) | ||||||
|  | 
 | ||||||
|  | -   it can be installed as a `kubectl` plugin using | ||||||
|  |     [krew](https://krew.dev/) | ||||||
|  | 
 | ||||||
|  | Example: | ||||||
|  | 
 | ||||||
|  | ``` | ||||||
|  | export VERSION=v0.26.1 | ||||||
|  | wget https://github.com/kubevirt/kubevirt/releases/download/${VERSION}/virtctl-${VERSION}-linux-x86_64 | ||||||
|  | ``` | ||||||
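The asset name in the URL above is composed from the release version, OS, and CPU architecture. A small hypothetical helper (not part of KubeVirt) showing the naming scheme:

```python
# Hypothetical helper (not part of KubeVirt) showing how the release
# asset name used in the download URL above is composed.

def virtctl_asset(version, goos="linux", arch="x86_64"):
    return "virtctl-{}-{}-{}".format(version, goos, arch)

print(virtctl_asset("v0.26.1"))  # -> virtctl-v0.26.1-linux-x86_64
```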
|  | 
 | ||||||
|  | ## Install `virtctl` with `krew` | ||||||
|  | 
 | ||||||
The [`krew` plugin
manager](https://github.com/kubernetes-sigs/krew/#installation) must be
installed beforehand. Once `krew` is installed, `virtctl` can be
installed via `krew`:
|  | 
 | ||||||
|  |     $ kubectl krew install virt | ||||||
|  | 
 | ||||||
|  | Then `virtctl` can be used as a kubectl plugin. For a list of available | ||||||
|  | commands run: | ||||||
|  | 
 | ||||||
|  |     $ kubectl virt help | ||||||
|  | 
 | ||||||
|  | Every occurrence throughout this guide of | ||||||
|  | 
 | ||||||
|  |     $ ./virtctl <command>... | ||||||
|  | 
 | ||||||
|  | should then be read as | ||||||
|  | 
 | ||||||
|  |     $ kubectl virt <command>... | ||||||
|  | 
 | ||||||
|  | KubeVirt API Validation | ||||||
|  | ======================= | ||||||
|  | 
 | ||||||
The KubeVirt VirtualMachineInstance API is implemented using a
Kubernetes Custom Resource Definition (CRD). Because of this, KubeVirt
is able to leverage a couple of features Kubernetes provides in order to
perform validation checks on the API as objects are created and updated
on the cluster.
|  | 
 | ||||||
|  | How API Validation Works | ||||||
|  | ------------------------ | ||||||
|  | 
 | ||||||
|  | ### CRD OpenAPIv3 Schema | ||||||
|  | 
 | ||||||
|  | The KubeVirt API is registered with Kubernetes at install time through a | ||||||
|  | series of CRD definitions. KubeVirt includes an OpenAPIv3 schema in | ||||||
|  | these definitions which indicates to the Kubernetes Apiserver some very | ||||||
|  | basic information about our API, such as what fields are required and | ||||||
|  | what type of data is expected for each value. | ||||||
|  | 
 | ||||||
This OpenAPIv3 schema validation is installed automatically and requires
no action on the user's part to enable.
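To illustrate what schema-level validation can catch, here is a hand-rolled sketch in the spirit of OpenAPIv3 structural checks; the schema fragment and required fields are simplified assumptions, not KubeVirt's real schema:

```python
# Illustrative sketch of structural (schema-level) validation. The
# schema below is a simplified assumption, not KubeVirt's real schema.

SCHEMA = {
    "required": ["apiVersion", "kind", "spec"],
    "types": {"apiVersion": str, "kind": str, "spec": dict},
}

def validate_structure(obj, schema=SCHEMA):
    errors = []
    for field in schema["required"]:
        if field not in obj:
            errors.append("missing required field: " + field)
    for field, expected in schema["types"].items():
        if field in obj and not isinstance(obj[field], expected):
            errors.append(field + ": expected " + expected.__name__)
    return errors

vmi = {"apiVersion": "kubevirt.io/v1alpha3", "kind": "VirtualMachineInstance"}
print(validate_structure(vmi))  # -> ['missing required field: spec']
```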
|  | 
 | ||||||
|  | ### Admission Control Webhooks | ||||||
|  | 
 | ||||||
The OpenAPIv3 schema validation is limited. It only validates that the
general structure of a KubeVirt object is correct; it does not verify
that the contents of the object make sense.
|  | 
 | ||||||
|  | With OpenAPIv3 validation alone, users can easily make simple mistakes | ||||||
|  | (like not referencing a volume’s name correctly with a disk) and the | ||||||
|  | cluster will still accept the object. However, the | ||||||
|  | VirtualMachineInstance will of course not start if these errors in the | ||||||
|  | API exist. Ideally we’d like to catch configuration issues as early as | ||||||
|  | possible and not allow an object to even be posted to the cluster if we | ||||||
|  | can detect there’s a problem with the object’s Spec. | ||||||
|  | 
 | ||||||
In order to perform this advanced validation, KubeVirt implements its
own admission controller, which is registered with Kubernetes as an
admission controller webhook at install time. As KubeVirt objects are
posted to the cluster, the Kubernetes API server forwards creation
requests to this webhook for validation before persisting the object
into storage.
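Here is a sketch of the kind of semantic check such a webhook can perform but schema validation cannot, using the disk/volume mismatch mentioned earlier. The field layout is simplified, and this is not KubeVirt's actual webhook code:

```python
# Sketch of a semantic admission check: every disk must reference a
# volume that actually exists in the spec. Field layout is simplified;
# this is not KubeVirt's real webhook implementation.

def disks_missing_volumes(vmi_spec):
    """Return names of disks that reference no defined volume."""
    volume_names = {v["name"] for v in vmi_spec.get("volumes", [])}
    return [d["name"]
            for d in vmi_spec["domain"]["devices"]["disks"]
            if d["name"] not in volume_names]

spec = {
    "domain": {"devices": {"disks": [{"name": "rootdisk"}]}},
    "volumes": [{"name": "cloudinitdisk"}],
}
print(disks_missing_volumes(spec))  # -> ['rootdisk'] (object would be rejected)
```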
|  | 
 | ||||||
Note however that the KubeVirt admission controller requires certain
features to be enabled on the cluster in order to function.
|  | 
 | ||||||
|  | Enabling KubeVirt Admission Controller on Kubernetes | ||||||
|  | ---------------------------------------------------- | ||||||
|  | 
 | ||||||
|  | When provisioning a new Kubernetes cluster, ensure that both the | ||||||
|  | **MutatingAdmissionWebhook** and **ValidatingAdmissionWebhook** values | ||||||
|  | are present in the Apiserver’s **--admission-control** cli argument. | ||||||
|  | 
 | ||||||
Below is an example of the **--admission-control** values we use during
development:
|  | 
 | ||||||
|  | ``` | ||||||
|  | --admission-control='Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota' | ||||||
|  | ``` | ||||||
|  | 
 | ||||||
|  | Enabling KubeVirt Admission Controller on OKD | ||||||
|  | --------------------------------------------- | ||||||
|  | 
 | ||||||
|  | OKD also requires the admission control webhooks to be enabled at | ||||||
|  | install time. The process is slightly different though. With OKD, we | ||||||
|  | enable webhooks using an admission plugin. | ||||||
|  | 
 | ||||||
These admission control plugins can be configured in openshift-ansible
by setting the following value in the Ansible inventory file:
|  | 
 | ||||||
|  | ``` | ||||||
|  | openshift_master_admission_plugin_config={"ValidatingAdmissionWebhook":{"configuration":{"kind": "DefaultAdmissionConfig","apiVersion": "v1","disable": false}},"MutatingAdmissionWebhook":{"configuration":{"kind": "DefaultAdmissionConfig","apiVersion": "v1","disable": false}}} | ||||||
|  | ``` | ||||||
|  | Virtual machine templates | ||||||
|  | ========================= | ||||||
|  | 
 | ||||||
|  | What is a virtual machine template? | ||||||
|  | ----------------------------------- | ||||||
|  | 
 | ||||||
The KubeVirt project provides a set of
[templates](https://docs.okd.io/latest/dev_guide/templates.html) to
create VMs for common usage scenarios. These templates provide a
combination of key factors that can be further customized and processed
to produce a Virtual Machine object. The key factors which define a
template are:
|  | 
 | ||||||
-   Workload: most Virtual Machines should use the *generic* workload
    for maximum flexibility; the *highperformance* workload trades some
    of this flexibility for better performance.

-   Guest Operating System (OS): ensures that the emulated hardware is
    compatible with the guest OS. Furthermore, it maximizes the
    stability of the VM and allows performance optimizations.

-   Size (flavor): defines the amount of resources (CPU, memory) to
    allocate to the VM.
|  | 
 | ||||||
|  | More documentation is available in the [common templates | ||||||
|  | subproject](https://github.com/kubevirt/common-templates) | ||||||
|  | 
 | ||||||
|  | Accessing the virtual machine templates | ||||||
|  | --------------------------------------- | ||||||
|  | 
 | ||||||
If you installed KubeVirt using a supported method, you should find the
common templates preinstalled in the cluster. Should you want to upgrade
the templates, or install them from scratch, you can use one of the
[supported
releases](https://github.com/kubevirt/common-templates/releases).
|  | 
 | ||||||
|  | To install the templates: | ||||||
|  | 
 | ||||||
|  |     $ export VERSION="v0.3.1" | ||||||
|  |     $ oc create -f https://github.com/kubevirt/common-templates/releases/download/$VERSION/common-templates-$VERSION.yaml | ||||||
|  | 
 | ||||||
|  | Editable fields | ||||||
|  | --------------- | ||||||
|  | 
 | ||||||
You can edit the template fields which define the amount of resources
the VMs will receive.
|  | 
 | ||||||
|  | Each template can list a different set of fields that are to be | ||||||
|  | considered editable. The fields are used as hints for the user | ||||||
|  | interface, and also for other components in the cluster. | ||||||
|  | 
 | ||||||
The editable fields are taken from annotations in the template. Here is
a snippet presenting a couple of the most commonly found editable fields:
|  | 
 | ||||||
|  |     metadata: | ||||||
|  |       annotations: | ||||||
|  |         template.kubevirt.io/editable: | | ||||||
|  |           /objects[0].spec.template.spec.domain.cpu.sockets | ||||||
|  |           /objects[0].spec.template.spec.domain.cpu.cores | ||||||
|  |           /objects[0].spec.template.spec.domain.cpu.threads | ||||||
|  |           /objects[0].spec.template.spec.domain.resources.requests.memory | ||||||
|  | 
 | ||||||
Each entry in the editable field list must be a
[jsonpath](https://kubernetes.io/docs/reference/kubectl/jsonpath/). The
jsonpath root is the `objects:` element of the template. The actual
editable field is the last entry (the "leaf") of the path. For example,
the following minimal snippet highlights the fields which you can edit:
|  | 
 | ||||||
|  |     objects: | ||||||
|  |       spec: | ||||||
|  |         template: | ||||||
|  |           spec: | ||||||
|  |             domain: | ||||||
|  |               cpu: | ||||||
|  |                 sockets: | ||||||
|  |                   VALUE # this is editable | ||||||
|  |                 cores: | ||||||
|  |                   VALUE # this is editable | ||||||
|  |                 threads: | ||||||
|  |                   VALUE # this is editable | ||||||
|  |               resources: | ||||||
|  |                 requests: | ||||||
|  |                   memory: | ||||||
|  |                     VALUE # this is editable | ||||||
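A sketch of how a consumer such as a UI might resolve one of these editable-field paths against a processed template. The path handling here is deliberately simplified to an index plus dotted keys rather than full jsonpath:

```python
# Simplified resolver for template.kubevirt.io/editable paths like
# "/objects[0].spec.template.spec.domain.cpu.sockets". Real consumers
# use full jsonpath; this sketch only handles one index plus dotted keys.

def resolve(template, path):
    head, rest = path.lstrip("/").split("].", 1)
    name, idx = head.split("[")
    node = template[name][int(idx)]
    for key in rest.split("."):
        node = node[key]
    return node

tmpl = {"objects": [
    {"spec": {"template": {"spec": {"domain": {"cpu": {"sockets": 2}}}}}}
]}
print(resolve(tmpl, "/objects[0].spec.template.spec.domain.cpu.sockets"))  # -> 2
```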
|  | 
 | ||||||
|  | Relationship between templates and VMs | ||||||
|  | -------------------------------------- | ||||||
|  | 
 | ||||||
|  | Once | ||||||
|  | [processed](https://docs.openshift.com/enterprise/3.0/dev_guide/templates.html#creating-from-templates-using-the-cli), | ||||||
|  | the templates produce VM objects to be used in the cluster. The VMs | ||||||
|  | produced from templates will have a `vm.kubevirt.io/template` label, | ||||||
|  | whose value will be the name of the parent template, for example | ||||||
|  | `fedora-desktop-medium`: | ||||||
|  | 
 | ||||||
|  |       metadata: | ||||||
|  |         labels: | ||||||
|  |           vm.kubevirt.io/template: fedora-desktop-medium | ||||||
|  | 
 | ||||||
|  | In addition, these VMs can include an optional label | ||||||
|  | `vm.kubevirt.io/template-namespace`, whose value will be the namespace | ||||||
|  | of the parent template, for example: | ||||||
|  | 
 | ||||||
|  |       metadata: | ||||||
|  |         labels: | ||||||
|  |           vm.kubevirt.io/template-namespace: openshift | ||||||
|  | 
 | ||||||
|  | If this label is not defined, the template is expected to belong to the | ||||||
|  | same namespace as the VM. | ||||||
|  | 
 | ||||||
This makes it possible to query for all the VMs built from any template.
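Such a query boils down to a label-selector match (with kubectl, roughly `kubectl get vms -l vm.kubevirt.io/template`). A sketch of the same filter applied client-side to hypothetical VM objects:

```python
# Client-side sketch of the label query: select VMs carrying the
# vm.kubevirt.io/template label, i.e. VMs built from any template.
# The VM objects below are hypothetical, trimmed-down examples.

TEMPLATE_LABEL = "vm.kubevirt.io/template"

def vms_from_templates(vms):
    return [vm["metadata"]["name"] for vm in vms
            if TEMPLATE_LABEL in vm["metadata"].get("labels", {})]

vms = [
    {"metadata": {"name": "rheltinyvm",
                  "labels": {TEMPLATE_LABEL: "rhel7-server-tiny"}}},
    {"metadata": {"name": "handmade-vm", "labels": {}}},
]
print(vms_from_templates(vms))  # -> ['rheltinyvm']
```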
|  | 
 | ||||||
|  | Example: | ||||||
|  | 
 | ||||||
|  |     oc process -o yaml rhel7-server-tiny PVCNAME=mydisk NAME=rheltinyvm | ||||||
|  | 
 | ||||||
|  | And the output: | ||||||
|  | 
 | ||||||
    apiVersion: v1
|  |     items: | ||||||
|  |     - apiVersion: kubevirt.io/v1alpha3 | ||||||
|  |       kind: VirtualMachine | ||||||
|  |       metadata: | ||||||
|  |         labels: | ||||||
|  |           vm.kubevirt.io/template: rhel7-server-tiny | ||||||
|  |         name: rheltinyvm | ||||||
|  |         osinfoname: rhel7.0 | ||||||
|  |       spec: | ||||||
|  |         running: false | ||||||
|  |         template: | ||||||
|  |           spec: | ||||||
|  |             domain: | ||||||
|  |               cpu: | ||||||
|  |                 sockets: 1 | ||||||
|  |                 cores: 1 | ||||||
|  |                 threads: 1 | ||||||
|  |               devices: | ||||||
|  |                 disks: | ||||||
|  |                 - disk: | ||||||
|  |                     bus: virtio | ||||||
|  |                   name: rootdisk | ||||||
|  |                 rng: {} | ||||||
|  |               resources: | ||||||
|  |                 requests: | ||||||
|  |                   memory: 1G | ||||||
|  |             terminationGracePeriodSeconds: 0 | ||||||
|  |             volumes: | ||||||
|  |             - name: rootdisk | ||||||
|  |               persistentVolumeClaim: | ||||||
|  |                 claimName: mydisk | ||||||
|  |             - cloudInitNoCloud: | ||||||
|  |                 userData: |- | ||||||
|  |                   #cloud-config | ||||||
|  |                   password: redhat | ||||||
|  |                   chpasswd: { expire: False } | ||||||
|  |               name: cloudinitdisk | ||||||
|  |     kind: List | ||||||
|  |     metadata: {} | ||||||
|  | 
 | ||||||
You can add the VM from the template to the cluster in one go:
|  | 
 | ||||||
|  |     oc process rhel7-server-tiny PVCNAME=mydisk NAME=rheltinyvm | oc apply -f - | ||||||
|  | 
 | ||||||
|  | Please note that, after the generation step, VM objects and template | ||||||
|  | objects have no relationship with each other besides the aforementioned | ||||||
|  | label (e.g. changes in templates do not automatically affect VMs, or | ||||||
|  | vice versa). | ||||||
|  | 
 | ||||||
Common template customization
-----------------------------
|  | 
 | ||||||
The templates provided by the KubeVirt project follow a set of
conventions and annotations that augment the basic features of
[OpenShift
templates](https://docs.okd.io/latest/dev_guide/templates.html). You can
customize the kubevirt-provided templates by editing these annotations,
or you can add them to your existing templates to make them consumable
by the KubeVirt services.
|  | 
 | ||||||
|  | Here’s a description of the kubevirt annotations. Unless otherwise | ||||||
|  | specified, the following keys are meant to be top-level entries of the | ||||||
|  | template metadata, like | ||||||
|  | 
 | ||||||
|  |     apiVersion: v1 | ||||||
|  |     kind: Template | ||||||
|  |     metadata: | ||||||
|  |       name: windows-10 | ||||||
|  |       annotations: | ||||||
|  |         openshift.io/display-name: "Generic demo template" | ||||||
|  | 
 | ||||||
|  | All the following annotations are prefixed with | ||||||
|  | `defaults.template.kubevirt.io`, which is omitted below for brevity. So | ||||||
|  | the actual annotations you should use will look like | ||||||
|  | 
 | ||||||
|  |     apiVersion: v1 | ||||||
|  |     kind: Template | ||||||
|  |     metadata: | ||||||
|  |       name: windows-10 | ||||||
|  |       annotations: | ||||||
|  |         defaults.template.kubevirt.io/disk: default-disk | ||||||
|  |         defaults.template.kubevirt.io/volume: default-volume | ||||||
|  |         defaults.template.kubevirt.io/nic: default-nic | ||||||
|  |         defaults.template.kubevirt.io/network: default-network | ||||||
|  | 
 | ||||||
Unless otherwise specified, all annotations are meant to be safe
defaults, both for performance and compatibility, and to serve as hints
for the CNV-aware UI and tooling.
|  | 
 | ||||||
|  | ### disk | ||||||
|  | 
 | ||||||
|  | See the section `references` below. | ||||||
|  | 
 | ||||||
|  | Example: | ||||||
|  | 
 | ||||||
|  |     apiVersion: v1 | ||||||
|  |     kind: Template | ||||||
|  |     metadata: | ||||||
|  |       name: Linux | ||||||
|  |       annotations: | ||||||
|  |         defaults.template.kubevirt.io/disk: rhel-disk | ||||||
|  | 
 | ||||||
|  | ### nic | ||||||
|  | 
 | ||||||
|  | See the section `references` below. | ||||||
|  | 
 | ||||||
|  | Example: | ||||||
|  | 
 | ||||||
|  |     apiVersion: v1 | ||||||
|  |     kind: Template | ||||||
|  |     metadata: | ||||||
|  |       name: Windows | ||||||
|  |       annotations: | ||||||
|  |         defaults.template.kubevirt.io/nic: my-nic | ||||||
|  | 
 | ||||||
|  | ### volume | ||||||
|  | 
 | ||||||
|  | See the section `references` below. | ||||||
|  | 
 | ||||||
|  | Example: | ||||||
|  | 
 | ||||||
|  |     apiVersion: v1 | ||||||
|  |     kind: Template | ||||||
|  |     metadata: | ||||||
|  |       name: Linux | ||||||
|  |       annotations: | ||||||
|  |         defaults.template.kubevirt.io/volume: custom-volume | ||||||
|  | 
 | ||||||
|  | ### network | ||||||
|  | 
 | ||||||
|  | See the section `references` below. | ||||||
|  | 
 | ||||||
|  | Example: | ||||||
|  | 
 | ||||||
|  |     apiVersion: v1 | ||||||
|  |     kind: Template | ||||||
|  |     metadata: | ||||||
|  |       name: Linux | ||||||
|  |       annotations: | ||||||
|  |         defaults.template.kubevirt.io/network: fast-net | ||||||
|  | 
 | ||||||
|  | ### references | ||||||
|  | 
 | ||||||
The default values for network, nic, volume, and disk are meant to be
the **name** of a section later in the document, which the UI will find
and consume to obtain the default values for the corresponding types.
For example, given the annotation
`defaults.template.kubevirt.io/disk: my-disk`, we assume that an element
called `my-disk` exists later in the document, which the UI can use to
find the data it needs. The names themselves don't matter as long as
they are legal for Kubernetes and consistent with the content of the
document.
|  | 
 | ||||||
|  | ### complete example | ||||||
|  | 
 | ||||||
|  | `demo-template.yaml` | ||||||
|  | 
 | ||||||
```
apiVersion: v1
items:
- apiVersion: kubevirt.io/v1alpha3
  kind: VirtualMachine
  metadata:
    labels:
      vm.kubevirt.io/template: rhel7-generic-tiny
    name: rheltinyvm
    osinfoname: rhel7.0
    defaults.template.kubevirt.io/disk: rhel-default-disk
    defaults.template.kubevirt.io/nic: rhel-default-net
  spec:
    running: false
    template:
      spec:
        domain:
          cpu:
            sockets: 1
            cores: 1
            threads: 1
          devices:
            rng: {}
          resources:
            requests:
              memory: 1G
        terminationGracePeriodSeconds: 0
        volumes:
        - containerDisk:
            image: registry:5000/kubevirt/cirros-container-disk-demo:devel
          name: rhel-default-disk
        networks:
        - genie:
            networkName: flannel
          name: rhel-default-net
kind: List
metadata: {}
```
|  | 
 | ||||||
Once processed, this becomes `demo-vm.yaml`:
|  | 
 | ||||||
|  | ``` | ||||||
|  | apiVersion: kubevirt.io/v1alpha3 | ||||||
|  | kind: VirtualMachine | ||||||
|  | metadata: | ||||||
|  |   labels: | ||||||
|  |     vm.kubevirt.io/template: rhel7-generic-tiny | ||||||
|  |   name: rheltinyvm | ||||||
|  |   osinfoname: rhel7.0 | ||||||
|  | spec: | ||||||
|  |   running: false | ||||||
|  |   template: | ||||||
|  |     spec: | ||||||
|  |       domain: | ||||||
|  |         cpu: | ||||||
|  |           sockets: 1 | ||||||
|  |           cores: 1 | ||||||
|  |           threads: 1 | ||||||
|  |         resources: | ||||||
|  |           requests: | ||||||
            memory: 1G
|  |         devices: | ||||||
|  |           rng: {} | ||||||
|  |           disks: | ||||||
          - disk: {}
            name: rhel-default-disk
|  |         interfaces: | ||||||
|  |         - bridge: {} | ||||||
|  |           name: rhel-default-nic | ||||||
      terminationGracePeriodSeconds: 0
|  |       volumes: | ||||||
|  |       - containerDisk: | ||||||
|  |           image: registry:5000/kubevirt/cirros-container-disk-demo:devel | ||||||
|  |         name: containerdisk | ||||||
|  |       networks: | ||||||
|  |       - genie: | ||||||
|  |           networkName: flannel | ||||||
|  |         name: rhel-default-nic | ||||||
|  | ``` | ||||||
|  | Templates | ||||||
|  | ========= | ||||||
|  | 
 | ||||||
|  | !> This only works on OpenShift so far (See [Installation | ||||||
|  | Guide](/installation/README) for more information on how to deploy | ||||||
|  | KubeVirt on OpenShift). | ||||||
|  | 
 | ||||||
By deploying KubeVirt on top of OpenShift, the user can benefit from the
[OpenShift
Template](https://docs.openshift.org/latest/dev_guide/templates.html)
functionality.
|  | # Virtual Machine Creation | ||||||
|  | 
 | ||||||
|  | ## Overview | ||||||
|  | 
 | ||||||
The KubeVirt project provides a set of
[templates](https://docs.okd.io/latest/dev_guide/templates.html) to
create VMs for common usage scenarios. These templates provide a
combination of key factors that can be further customized and processed
to produce a Virtual Machine object.

The key factors which define a template are:

-   Workload: most Virtual Machines should use the **server** or
    **desktop** workload for maximum flexibility; the
    **highperformance** workload trades some of this flexibility for
    better performance.

-   Guest Operating System (OS): ensures that the emulated hardware is
    compatible with the guest OS. Furthermore, it maximizes the
    stability of the VM and allows performance optimizations.

-   Size (flavor): defines the amount of resources (CPU, memory) to
    allocate to the VM.
|  | 
 | ||||||
|  | ## WebUI | ||||||
|  | 
 | ||||||
The KubeVirt project has [an official UI](https://github.com/kubevirt/web-ui).
This UI supports creating VMs from templates, including template
features such as flavors and workload profiles. To create a VM from a
template, choose Workloads in the left panel, press the blue "Create
Virtual Machine" button, and choose "Create from Wizard". The "Create
Virtual Machine" window then appears.
|  | 
 | ||||||
|  | ## Common-templates | ||||||
|  | 
 | ||||||
The [common-templates
subproject](https://github.com/kubevirt/common-templates/) provides a
set of official, ready-to-use templates.
[Additional documentation is available](templates/common-templates.md).
You can also create templates by hand; you can find an example below, in
the "Example template" section.
|  | 
 | ||||||
|  | ## Example template | ||||||
|  | 
 | ||||||
|  | In order to create a virtual machine via OpenShift CLI, you need to | ||||||
|  | provide a template defining the corresponding object and its metadata. | ||||||
|  | 
 | ||||||
**NOTE** Only the `VirtualMachine` object is currently supported.
|  | 
 | ||||||
|  | Here is an example template that defines an instance of the | ||||||
|  | `VirtualMachine` object: | ||||||
|  | 
 | ||||||
|  | ``` | ||||||
|  | apiVersion: v1 | ||||||
|  | kind: Template | ||||||
|  | metadata: | ||||||
|  |   annotations: | ||||||
|  |     description: OCP KubeVirt Fedora 27 VM template | ||||||
|  |     iconClass: icon-fedora | ||||||
|  |     tags: kubevirt,ocp,template,linux,virtualmachine | ||||||
|  |   labels: | ||||||
|  |     kubevirt.io/os: fedora27 | ||||||
|  |     miq.github.io/kubevirt-is-vm-template: "true" | ||||||
|  |   name: vm-template-fedora | ||||||
|  | objects: | ||||||
|  | - apiVersion: kubevirt.io/v1alpha3 | ||||||
|  |   kind: VirtualMachine | ||||||
|  |   metadata: | ||||||
|  |     labels: | ||||||
|  |       kubevirt-vm: vm-${NAME} | ||||||
|  |       kubevirt.io/os: fedora27 | ||||||
|  |     name: ${NAME} | ||||||
|  |   spec: | ||||||
|  |     running: false | ||||||
|  |     template: | ||||||
|  |       metadata: | ||||||
|  |         creationTimestamp: null | ||||||
|  |         labels: | ||||||
|  |           kubevirt-vm: vm-${NAME} | ||||||
|  |           kubevirt.io/os: fedora27 | ||||||
|  |       spec: | ||||||
|  |         domain: | ||||||
|  |           cpu: | ||||||
|  |             cores: ${{CPU_CORES}} | ||||||
          devices:
            disks:
            - name: disk0
            - disk:
                bus: virtio
              name: registrydisk
              volumeName: registryvolume
            - disk:
                bus: virtio
              name: cloudinitdisk
              volumeName: cloudinitvolume
          machine:
            type: ""
          resources:
            requests:
              memory: ${MEMORY}
        terminationGracePeriodSeconds: 0
        volumes:
        - name: disk0
          persistentVolumeClaim:
            claimName: myroot
|  |         - name: registryvolume | ||||||
|  |           registryDisk: | ||||||
|  |             image: registry:5000/kubevirt/fedora-cloud-registry-disk-demo:devel | ||||||
|  |         - cloudInitNoCloud: | ||||||
|  |             userData: |- | ||||||
|  |               #cloud-config | ||||||
|  |               password: fedora | ||||||
|  |               chpasswd: { expire: False } | ||||||
|  |           name: cloudinitvolume | ||||||
|  |   status: {} | ||||||
|  | parameters: | ||||||
|  | - description: Name for the new VM | ||||||
|  |   name: NAME | ||||||
|  | - description: Amount of memory | ||||||
|  |   name: MEMORY | ||||||
|  |   value: 4096Mi | ||||||
|  | - description: Amount of cores | ||||||
|  |   name: CPU_CORES | ||||||
|  |   value: "4" | ||||||
|  | ``` | ||||||
Note that the template above defines free parameters (`NAME` and
`CPU_CORES`), and the `NAME` parameter does not have a default value
specified.
|  | 
 | ||||||
An OpenShift template has to be converted into a JSON file via the
`oc process` command, which also allows you to set the template
parameters.
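A rough sketch of the parameter substitution `oc process` performs: `${PARAM}` placeholders are replaced with supplied values, with defaults applying when a parameter is not given on the command line. Real templates also distinguish `${{PARAM}}` (non-string substitution) from `${PARAM}`, a difference this toy version ignores:

```python
# Toy sketch of oc process parameter substitution; both ${X} and ${{X}}
# are replaced textually here, unlike the real non-string semantics of
# ${{X}} in OpenShift templates.
import re

def process(template_text, params, defaults):
    values = {**defaults, **params}
    return re.sub(r"\$\{\{?(\w+)\}?\}",
                  lambda m: str(values[m.group(1)]), template_text)

text = "name: ${NAME}\ncores: ${{CPU_CORES}}"
print(process(text, {"NAME": "testvm", "CPU_CORES": 2}, {"CPU_CORES": "4"}))
```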
|  | 
 | ||||||
|  | A complete example can be found in the [KubeVirt | ||||||
|  | repository](https://github.com/kubevirt/kubevirt/blob/master/cluster/examples/vm-template-fedora.yaml). | ||||||
|  | 
 | ||||||
|  | !> You need to be logged in by `oc login` command. | ||||||
|  | 
 | ||||||
|  | ``` | ||||||
|  | $ oc process -f cluster/vmi-template-fedora.yaml\ | ||||||
|  |     -p NAME=testvmi \ | ||||||
|  |     -p CPU_CORES=2 | ||||||
|  | { | ||||||
|  |     "kind": "List", | ||||||
|  |     "apiVersion": "v1", | ||||||
|  |     "metadata": {}, | ||||||
|  |     "items": [ | ||||||
|  |         { | ||||||
|  | ``` | ||||||
|  | 
 | ||||||
The JSON file is usually applied directly by piping the processed output
to the `oc create` command.
|  | 
 | ||||||
|  | ``` | ||||||
|  | $ oc process -f cluster/examples/vm-template-fedora.yaml \ | ||||||
|  |     -p NAME=testvm \ | ||||||
|  |     -p CPU_CORES=2 \ | ||||||
|  |     | oc create -f - | ||||||
|  | virtualmachine.kubevirt.io/testvm created | ||||||
|  | ``` | ||||||
|  | 
 | ||||||
The command above results in creating a Kubernetes object according to
the specification given by the template (in this example, an instance of
the VirtualMachine object).
|  | 
 | ||||||
It's possible to get a list of available parameters using the following
command:
|  | 
 | ||||||
|  | ``` | ||||||
|  | $ oc process -f cluster/examples/vmi-template-fedora.yaml --parameters | ||||||
|  | NAME                DESCRIPTION           GENERATOR           VALUE | ||||||
|  | NAME                Name for the new VM                        | ||||||
|  | MEMORY              Amount of memory                          4096Mi | ||||||
|  | CPU_CORES           Amount of cores                           4 | ||||||
|  | ``` | ||||||
|  | 
 | ||||||
|  | ## Starting virtual machine from the created object | ||||||
|  | 
 | ||||||
The created object is now a regular VirtualMachine object, and from now
on it can be controlled through Kubernetes API resources. The preferred
way to do this from within the OpenShift environment is to use the
`oc patch` command.
|  | 
 | ||||||
|  | ``` | ||||||
|  | $ oc patch virtualmachine testvm --type merge -p '{"spec":{"running":true}}' | ||||||
|  | virtualmachine.kubevirt.io/testvm patched | ||||||
|  | ``` | ||||||
|  | 
 | ||||||
Do not forget about the `virtctl` tool. In real-world cases it can be
more convenient than using the Kubernetes API directly. Example:
|  | 
 | ||||||
|  | ``` | ||||||
|  | $ virtctl start testvm | ||||||
|  | VM testvm was scheduled to start | ||||||
|  | ``` | ||||||
|  | 
 | ||||||
As soon as the VM starts, Kubernetes creates a new type of object, a
VirtualMachineInstance, with a name matching the VirtualMachine. Example
(output truncated):
|  | 
 | ||||||
|  | ``` | ||||||
|  | $ kubectl describe vm testvm | ||||||
Name:         testvm
|  | Namespace:    myproject | ||||||
|  | Labels:       kubevirt-vm=vm-testvm | ||||||
|  |               kubevirt.io/os=fedora27 | ||||||
|  | Annotations:  <none> | ||||||
|  | API Version:  kubevirt.io/v1alpha2 | ||||||
|  | Kind:         VirtualMachine | ||||||
|  | ``` | ||||||
|  | 
 | ||||||
|  | ## Cloud-init script and parameters | ||||||
|  | 
 | ||||||
Kubevirt VM templates, just like kubevirt VM/VMI yaml configs, support
[cloud-init scripts](https://cloudinit.readthedocs.io/en/latest/).
|  | 
 | ||||||
|  | ## Using registry images | ||||||
|  | 
 | ||||||
Kubevirt VM templates, just like kubevirt VM/VMI yaml configs, support
creating a VM’s disks from a registry. ContainerDisk is a special volume
type which supports downloading images from a user-defined registry
server.
|  | 
 | ||||||
|  | ## **Hack** - use pre-downloaded image | ||||||
|  | 
 | ||||||
Kubevirt VM templates, just like kubevirt VM/VMI yaml configs, can use a
pre-downloaded VM image, which is especially useful for debugging,
development, and testing. No special parameters are required in the VM
template or VM/VMI yaml config. The main idea is to create a Kubernetes
PersistentVolume and PersistentVolumeClaim corresponding to an existing
image in the file system. Example:
|  | 
 | ||||||
|  | ``` | ||||||
|  | --- | ||||||
|  | kind: PersistentVolume | ||||||
|  | apiVersion: v1 | ||||||
|  | metadata: | ||||||
|  |   name: mypv | ||||||
|  |   labels: | ||||||
|  |     type: local | ||||||
|  | spec: | ||||||
|  |   storageClassName: manual | ||||||
|  |   capacity: | ||||||
|  |     storage: 10G | ||||||
|  |   accessModes: | ||||||
|  |     - ReadWriteOnce | ||||||
|  |   hostPath: | ||||||
|  |     path: "/mnt/sda1/images/testvm" | ||||||
|  | --- | ||||||
|  | kind: PersistentVolumeClaim | ||||||
|  | apiVersion: v1 | ||||||
|  | metadata: | ||||||
|  |   name: mypvc | ||||||
|  | spec: | ||||||
|  |   storageClassName: manual | ||||||
|  |   accessModes: | ||||||
|  |     - ReadWriteOnce | ||||||
|  |   resources: | ||||||
|  |     requests: | ||||||
|  |       storage: 10G | ||||||
|  | 
 | ||||||
|  | ``` | ||||||
|  | 
 | ||||||
If you create this PV/PVC, you have to put the VM image at the file
path
|  | 
 | ||||||
|  | ```bash | ||||||
|  | /mnt/sda1/images/testvm/disk.img | ||||||
|  | ``` | ||||||
The image must be available on each OpenShift/KubeVirt compute node.
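
Staging the image could be scripted like the sketch below (the helper name and the source image file are illustrative; the destination directory must match the PV’s `hostPath`):

```bash
# stage_image SRC DEST_DIR: copy a pre-downloaded VM image into the
# directory backing the hostPath PersistentVolume, under the fixed
# file name disk.img.
stage_image() {
  local src=$1 dest_dir=$2
  mkdir -p "$dest_dir"
  cp "$src" "$dest_dir/disk.img"
}

# e.g., on each compute node:
#   stage_image fedora.qcow2 /mnt/sda1/images/testvm
```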
|  | 
 | ||||||
## Additional information

You can follow the [Virtual Machine Lifecycle
Guide](usage/life-cycle.md) for further reference.
|  | @ -0,0 +1,31 @@ | ||||||
|  | [[ -e kubevirt ]] || git clone git@github.com:kubevirt/kubevirt.git | ||||||
|  | git -C kubevirt checkout master | ||||||
|  | git -C kubevirt pull --tags | ||||||
|  | 
 | ||||||
# List release tags (vX.Y.0 only), newest first.
releases() {
  git -C kubevirt tag | sort -rV | while read -r TAG ; do
    [[ "$TAG" =~ [0-9]\.0$ ]] || continue
    echo "$TAG"
  done
}
|  | 
 | ||||||
# Print the release date and the release-notes body for a given tag.
features_for() {
  echo ""
  git -C kubevirt show "$1" | grep Date: | head -n1 | sed "s/Date:\s\+/Released on: /"
  echo ""
  git -C kubevirt show "$1" | sed -n "/changes$/,/Contributors/ p" | sed '1d;2d;$d' | sed '/^$/d'
}
|  | 
 | ||||||
|  | gen_changelog() { | ||||||
|  |   { | ||||||
|  |   echo "# Changelog" | ||||||
|  |   for REL in $(releases); | ||||||
|  |   do | ||||||
|  |     echo -e "\n## $REL" ; | ||||||
|  |     features_for $REL | ||||||
|  |   done | ||||||
|  |   } > changelog.md | ||||||
|  | } | ||||||
|  | 
 | ||||||
|  | gen_changelog | ||||||
|  | @ -0,0 +1,83 @@ | ||||||
|  | Enabling NetworkPolicy for VirtualMachineInstance | ||||||
|  | ================================================= | ||||||
|  | 
 | ||||||
|  | Before creating NetworkPolicy objects, make sure you are using a | ||||||
|  | networking solution which supports NetworkPolicy. Network isolation is | ||||||
|  | controlled entirely by NetworkPolicy objects. By default, all vmis in a | ||||||
|  | namespace are accessible from other vmis and network endpoints. To | ||||||
|  | isolate one or more vmis in a project, you can create NetworkPolicy | ||||||
|  | objects in that namespace to indicate the allowed incoming connections. | ||||||
|  | 
 | ||||||
|  | > Note: vmis and pods are treated equally by network policies, since | ||||||
|  | > labels are passed through to the pods which contain the running vmi. | ||||||
> In other words, labels on vmis can be matched by `spec.podSelector`
|  | > on the policy. | ||||||
|  | 
 | ||||||
|  | Create NetworkPolicy to Deny All Traffic | ||||||
|  | ---------------------------------------- | ||||||
|  | 
 | ||||||
|  | To make a project “deny by default” add a NetworkPolicy object that | ||||||
|  | matches all vmis but accepts no traffic. | ||||||
|  | 
 | ||||||
|  |     kind: NetworkPolicy | ||||||
|  |     apiVersion: networking.k8s.io/v1 | ||||||
|  |     metadata: | ||||||
|  |       name: deny-by-default | ||||||
|  |     spec: | ||||||
      podSelector: {}
|  |       ingress: [] | ||||||
|  | 
 | ||||||
Create NetworkPolicy to only accept connections from vmis within the namespace
------------------------------------------------------------------------------

To make vmis accept connections from other vmis in the same namespace,
but reject all other connections from vmis in other namespaces:

    kind: NetworkPolicy
    apiVersion: networking.k8s.io/v1
    metadata:
      name: allow-same-namespace
    spec:
      podSelector: {}
      ingress:
      - from:
        - podSelector: {}
|  | 
 | ||||||
Create NetworkPolicy to only allow HTTP and HTTPS traffic
---------------------------------------------------------
|  | 
 | ||||||
|  | To enable only HTTP and HTTPS access to the vmis, add a NetworkPolicy | ||||||
|  | object similar to: | ||||||
|  | 
 | ||||||
|  |     kind: NetworkPolicy | ||||||
|  |     apiVersion: networking.k8s.io/v1 | ||||||
|  |     metadata: | ||||||
|  |       name: allow-http-https | ||||||
|  |     spec: | ||||||
      podSelector: {}
|  |       ingress: | ||||||
|  |       - ports: | ||||||
|  |         - protocol: TCP | ||||||
|  |           port: 8080 | ||||||
|  |         - protocol: TCP | ||||||
|  |           port: 8443 | ||||||
|  | 
 | ||||||
|  | Create NetworkPolicy to deny traffic by labels | ||||||
|  | ---------------------------------------------- | ||||||
|  | 
 | ||||||
To make one specific vmi with the label `type: test` reject all traffic
from other vmis, create:
|  | 
 | ||||||
|  |     kind: NetworkPolicy | ||||||
|  |     apiVersion: networking.k8s.io/v1 | ||||||
|  |     metadata: | ||||||
|  |       name: deny-by-label | ||||||
|  |     spec: | ||||||
|  |       podSelector: | ||||||
|  |         matchLabels: | ||||||
|  |           type: test | ||||||
|  |       ingress: [] | ||||||
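
Policies are additive, so the “deny by default” approach can be combined
with a narrower policy that re-admits selected traffic. As a sketch, the
following policy (the `role: frontend` label is illustrative) would
allow only vmis carrying that label to reach the vmis labeled
`type: test`:

    kind: NetworkPolicy
    apiVersion: networking.k8s.io/v1
    metadata:
      name: allow-from-frontend
    spec:
      podSelector:
        matchLabels:
          type: test
      ingress:
      - from:
        - podSelector:
            matchLabels:
              role: frontend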
|  | 
 | ||||||
|  | Kubernetes NetworkPolicy Documentation can be found here: [Kubernetes | ||||||
|  | NetworkPolicy](https://kubernetes.io/docs/concepts/services-networking/network-policies/) | ||||||
|  | @ -0,0 +1,86 @@ | ||||||
|  | DNS for Services and VirtualMachineInstances | ||||||
|  | ============================================ | ||||||
|  | 
 | ||||||
|  | Creating unique DNS entries per VirtualMachineInstance | ||||||
|  | ------------------------------------------------------ | ||||||
|  | 
 | ||||||
|  | In order to create unique DNS entries per VirtualMachineInstance, it is | ||||||
possible to set `spec.hostname` and `spec.subdomain`. If a subdomain is
set and a headless service whose name matches the subdomain exists,
kube-dns will create unique DNS entries for every VirtualMachineInstance
|  | which matches the selector of the service. Have a look at the [DNS for | ||||||
|  | Services and Pods | ||||||
|  | documentation](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pods-hostname-and-subdomain-fields) | ||||||
|  | for additional information. | ||||||
|  | 
 | ||||||
|  | The following example consists of a VirtualMachine and a headless | ||||||
|  | Service which matches the labels and the subdomain of the | ||||||
|  | VirtualMachineInstance: | ||||||
|  | 
 | ||||||
|  |     apiVersion: kubevirt.io/v1alpha3 | ||||||
|  |     kind: VirtualMachineInstance | ||||||
|  |     metadata: | ||||||
|  |       name: vmi-fedora | ||||||
|  |       labels: | ||||||
|  |         expose: me | ||||||
|  |     spec: | ||||||
|  |       hostname: "myvmi" | ||||||
|  |       subdomain: "mysubdomain" | ||||||
|  |       domain: | ||||||
|  |         devices: | ||||||
|  |           disks: | ||||||
|  |           - disk: | ||||||
|  |               bus: virtio | ||||||
|  |             name: containerdisk | ||||||
|  |           - disk: | ||||||
|  |               bus: virtio | ||||||
|  |             name: cloudinitdisk | ||||||
|  |         resources: | ||||||
|  |           requests: | ||||||
|  |             memory: 1024M | ||||||
|  |       terminationGracePeriodSeconds: 0 | ||||||
|  |       volumes: | ||||||
|  |       - name: containerdisk | ||||||
|  |         containerDisk: | ||||||
|  |           image: kubevirt/fedora-cloud-registry-disk-demo:latest | ||||||
|  |       - cloudInitNoCloud: | ||||||
|  |           userDataBase64: IyEvYmluL2Jhc2gKZWNobyAiZmVkb3JhOmZlZG9yYSIgfCBjaHBhc3N3ZAo= | ||||||
|  |         name: cloudinitdisk | ||||||
|  |     --- | ||||||
|  |     apiVersion: v1 | ||||||
|  |     kind: Service | ||||||
|  |     metadata: | ||||||
|  |       name: mysubdomain | ||||||
|  |     spec: | ||||||
|  |       selector: | ||||||
|  |         expose: me | ||||||
|  |       clusterIP: None | ||||||
|  |       ports: | ||||||
|  |       - name: foo # Actually, no port is needed. | ||||||
|  |         port: 1234 | ||||||
|  |         targetPort: 1234 | ||||||
|  | 
 | ||||||
As a consequence, when we enter the VirtualMachineInstance via e.g.
`virtctl console vmi-fedora` and ping `myvmi.mysubdomain`, we find a
DNS entry for `myvmi.mysubdomain.default.svc.cluster.local` which
points to `10.244.0.57`, the IP of the VirtualMachineInstance (not of
the Service):
|  | 
 | ||||||
|  |     [fedora@myvmi ~]$ ping myvmi.mysubdomain | ||||||
|  |     PING myvmi.mysubdomain.default.svc.cluster.local (10.244.0.57) 56(84) bytes of data. | ||||||
|  |     64 bytes from myvmi.mysubdomain.default.svc.cluster.local (10.244.0.57): icmp_seq=1 ttl=64 time=0.029 ms | ||||||
|  |     [fedora@myvmi ~]$ ip a | ||||||
|  |     2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000 | ||||||
|  |         link/ether 0a:58:0a:f4:00:39 brd ff:ff:ff:ff:ff:ff | ||||||
|  |         inet 10.244.0.57/24 brd 10.244.0.255 scope global dynamic eth0 | ||||||
|  |            valid_lft 86313556sec preferred_lft 86313556sec | ||||||
|  |         inet6 fe80::858:aff:fef4:39/64 scope link | ||||||
|  |            valid_lft forever preferred_lft forever | ||||||
|  | 
 | ||||||
|  | So `spec.hostname` and `spec.subdomain` get translated to a DNS A-record | ||||||
|  | of the form | ||||||
|  | `<vmi.spec.hostname>.<vmi.spec.subdomain>.<vmi.metadata.namespace>.svc.cluster.local`. | ||||||
|  | If no `spec.hostname` is set, then we fall back to the | ||||||
|  | VirtualMachineInstance name itself. The resulting DNS A-record looks | ||||||
|  | like this then: | ||||||
|  | `<vmi.metadata.name>.<vmi.spec.subdomain>.<vmi.metadata.namespace>.svc.cluster.local`. | ||||||
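
The naming rule above can be expressed as a small helper, shown here as
a sketch (the function name is made up for illustration):

```bash
# vmi_fqdn NAME HOSTNAME SUBDOMAIN NAMESPACE: compose the DNS A-record
# name derived from a VMI, falling back to the VMI name when
# spec.hostname is unset.
vmi_fqdn() {
  local name=$1 hostname=$2 subdomain=$3 namespace=$4
  local host=${hostname:-$name}
  echo "$host.$subdomain.$namespace.svc.cluster.local"
}

vmi_fqdn vmi-fedora myvmi mysubdomain default
# -> myvmi.mysubdomain.default.svc.cluster.local
```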
|  | @ -0,0 +1,85 @@ | ||||||
|  | Graphical and Serial Console Access | ||||||
|  | =================================== | ||||||
|  | 
 | ||||||
|  | Once a virtual machine is started you are able to connect to the | ||||||
|  | consoles it exposes. Usually there are two types of consoles: | ||||||
|  | 
 | ||||||
|  | -   Serial Console | ||||||
|  | 
 | ||||||
|  | -   Graphical Console (VNC) | ||||||
|  | 
 | ||||||
|  | > Note: You need to have `virtctl` | ||||||
|  | > [installed](/installation/?id=client-side-virtctl-deployment) to gain | ||||||
|  | > access to the VirtualMachineInstance. | ||||||
|  | 
 | ||||||
|  | Accessing the serial console | ||||||
|  | ---------------------------- | ||||||
|  | 
 | ||||||
|  | The serial console of a virtual machine can be accessed by using the | ||||||
|  | `console` command: | ||||||
|  | 
 | ||||||
|  |     $ virtctl console --kubeconfig=$KUBECONFIG testvmi | ||||||
|  | 
 | ||||||
|  | Accessing the graphical console (VNC) | ||||||
|  | ------------------------------------- | ||||||
|  | 
 | ||||||
|  | Accessing the graphical console of a virtual machine is usually done | ||||||
|  | through VNC, which requires `remote-viewer`. Once the tool is installed | ||||||
|  | you can access the graphical console using: | ||||||
|  | 
 | ||||||
|  |     $ virtctl vnc --kubeconfig=$KUBECONFIG testvmi | ||||||
|  | 
 | ||||||
|  | Debugging console access | ||||||
|  | ------------------------ | ||||||
|  | 
 | ||||||
|  | Should the connection fail, you can use the `-v` flag to get more output | ||||||
|  | from both `virtctl` and the `remote-viewer` tool, to troubleshoot the | ||||||
|  | problem. | ||||||
|  | 
 | ||||||
|  |     $ virtctl vnc --kubeconfig=$KUBECONFIG testvmi -v 4 | ||||||
|  | 
 | ||||||
|  | > **Note:** If you are using virtctl via ssh on a remote machine, you | ||||||
|  | > need to forward the X session to your machine (Look up the -X and -Y | ||||||
|  | > flags of `ssh` if you are not familiar with that). As an alternative | ||||||
|  | > you can proxy the apiserver port with ssh to your machine (either | ||||||
|  | > direct or in combination with `kubectl proxy`) | ||||||
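
As a sketch of the proxying alternative (the host name and apiserver
port are illustrative), SSH local port forwarding can bring the
apiserver to your machine:

    $ ssh -L 6443:localhost:6443 user@remote-machine

with the local kubeconfig then pointing at `https://localhost:6443`.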
|  | 
 | ||||||
|  | RBAC Permissions for Console/VNC Access | ||||||
|  | --------------------------------------- | ||||||
|  | 
 | ||||||
|  | ### Using Default RBAC ClusterRoles | ||||||
|  | 
 | ||||||
Every KubeVirt installation after version v0.5.1 comes with a set of default
|  | RBAC cluster roles that can be used to grant users access to | ||||||
|  | VirtualMachineInstances. | ||||||
|  | 
 | ||||||
|  | The **kubevirt.io:admin** and **kubevirt.io:edit** ClusterRoles have | ||||||
|  | console and VNC access permissions built into them. By binding either of | ||||||
|  | these roles to a user, they will have the ability to use virtctl to | ||||||
|  | access console and VNC. | ||||||
|  | 
 | ||||||
|  | ### With Custom RBAC ClusterRole | ||||||
|  | 
 | ||||||
The default KubeVirt ClusterRoles give access to more than just console
and VNC. In the event that an admin would like to craft a custom role
|  | that targets only console and VNC, the ClusterRole below demonstrates | ||||||
|  | how that can be done. | ||||||
|  | 
 | ||||||
|  |     apiVersion: rbac.authorization.k8s.io/v1beta1 | ||||||
|  |     kind: ClusterRole | ||||||
|  |     metadata: | ||||||
|  |       name: allow-vnc-console-access | ||||||
|  |     rules: | ||||||
|  |       - apiGroups: | ||||||
|  |           - subresources.kubevirt.io | ||||||
|  |         resources: | ||||||
|  |           - virtualmachineinstances/console | ||||||
|  |           - virtualmachineinstances/vnc | ||||||
|  |         verbs: | ||||||
|  |           - get | ||||||
|  | 
 | ||||||
|  | The ClusterRole above provides access to virtual machines across all | ||||||
|  | namespaces. | ||||||
|  | 
 | ||||||
|  | In order to reduce the scope to a single namespace, bind this | ||||||
|  | ClusterRole using a RoleBinding that targets a single namespace. | ||||||
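
For illustration, a RoleBinding like the following (the user name and
namespace are placeholders) grants the permissions only within the
`default` namespace:

    apiVersion: rbac.authorization.k8s.io/v1beta1
    kind: RoleBinding
    metadata:
      name: allow-vnc-console-access
      namespace: default
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: allow-vnc-console-access
    subjects:
    - kind: User
      name: alice
      apiGroup: rbac.authorization.k8s.io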
|  | @ -0,0 +1,52 @@ | ||||||
|  | Life-cycle | ||||||
|  | ========== | ||||||
|  | 
 | ||||||
|  | Every `VirtualMachineInstance` represents a single virtual machine | ||||||
|  | *instance*. In general, the management of VirtualMachineInstances is | ||||||
|  | kept similar to how `Pods` are managed: Every VM that is defined in the | ||||||
|  | cluster is expected to be running, just like Pods. Deleting a | ||||||
VirtualMachineInstance is equivalent to shutting it down; this matches
how Pods behave.
|  | 
 | ||||||
|  | 
 | ||||||
|  | Launching a virtual machine | ||||||
|  | --------------------------- | ||||||
|  | 
 | ||||||
|  | In order to start a VirtualMachineInstance, you just need to create a | ||||||
|  | `VirtualMachineInstance` object using `kubectl`: | ||||||
|  | 
 | ||||||
|  |     $ kubectl create -f vmi.yaml | ||||||
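
For reference, a minimal `vmi.yaml` might look like the sketch below
(the container disk image and memory request are illustrative,
following the other examples in these docs):

    apiVersion: kubevirt.io/v1alpha3
    kind: VirtualMachineInstance
    metadata:
      name: testvmi
    spec:
      domain:
        devices:
          disks:
          - disk:
              bus: virtio
            name: containerdisk
        resources:
          requests:
            memory: 64M
      volumes:
      - name: containerdisk
        containerDisk:
          image: kubevirt/cirros-registry-disk-demo:latest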
|  | 
 | ||||||
|  | Listing virtual machines | ||||||
|  | ------------------------ | ||||||
|  | 
 | ||||||
|  | VirtualMachineInstances can be listed by querying for | ||||||
|  | VirtualMachineInstance objects: | ||||||
|  | 
 | ||||||
|  |     $ kubectl get vmis | ||||||
|  | 
 | ||||||
|  | Retrieving a virtual machine definition | ||||||
|  | --------------------------------------- | ||||||
|  | 
 | ||||||
|  | A single VirtualMachineInstance definition can be retrieved by getting | ||||||
|  | the specific VirtualMachineInstance object: | ||||||
|  | 
 | ||||||
|  |     $ kubectl get vmis testvmi | ||||||
|  | 
 | ||||||
|  | Stopping a virtual machine | ||||||
|  | -------------------------- | ||||||
|  | 
 | ||||||
|  | To stop the VirtualMachineInstance, you just need to delete the | ||||||
|  | corresponding `VirtualMachineInstance` object using `kubectl`. | ||||||
|  | 
 | ||||||
|  |     $ kubectl delete -f vmi.yaml | ||||||
|  |     # OR | ||||||
|  |     $ kubectl delete vmis testvmi | ||||||
|  | 
 | ||||||
|  | > Note: Stopping a VirtualMachineInstance implies that it will be | ||||||
|  | > deleted from the cluster. You will not be able to start this | ||||||
|  | > VirtualMachineInstance object again. | ||||||
|  | @ -0,0 +1,169 @@ | ||||||
Expose VirtualMachineInstances as Services
==========================================
|  | 
 | ||||||
Once a VirtualMachineInstance is started, you can create a `Service`
object to connect to it. Currently, three types of service are
supported: `ClusterIP`, `NodePort` and `LoadBalancer`. The default type
is `ClusterIP`.
|  | 
 | ||||||
|  | > **Note**: Labels on a VirtualMachineInstance are passed through to the | ||||||
|  | > pod, so simply add your labels for service creation to the | ||||||
|  | > VirtualMachineInstance. From there on it works like exposing any other | ||||||
|  | > k8s resource, by referencing these labels in a service. | ||||||
|  | 
 | ||||||
|  | Expose VirtualMachineInstance as a ClusterIP Service | ||||||
|  | ---------------------------------------------------- | ||||||
|  | 
 | ||||||
Given a VirtualMachineInstance with the label `special: key`:
|  | 
 | ||||||
|  |     apiVersion: kubevirt.io/v1alpha3 | ||||||
|  |     kind: VirtualMachineInstance | ||||||
|  |     metadata: | ||||||
|  |       name: vmi-ephemeral | ||||||
|  |       labels: | ||||||
|  |         special: key | ||||||
|  |     spec: | ||||||
|  |       domain: | ||||||
|  |         devices: | ||||||
|  |           disks: | ||||||
|  |           - disk: | ||||||
|  |               bus: virtio | ||||||
|  |             name: containerdisk | ||||||
|  |         resources: | ||||||
|  |           requests: | ||||||
|  |             memory: 64M | ||||||
|  |       volumes: | ||||||
|  |       - name: containerdisk | ||||||
|  |         containerDisk: | ||||||
|  |           image: kubevirt/cirros-registry-disk-demo:latest | ||||||
|  | 
 | ||||||
|  | we can expose its SSH port (22) by creating a `ClusterIP` service: | ||||||
|  | 
 | ||||||
|  |     apiVersion: v1 | ||||||
|  |     kind: Service | ||||||
|  |     metadata: | ||||||
|  |       name: vmiservice | ||||||
|  |     spec: | ||||||
|  |       ports: | ||||||
|  |       - port: 27017 | ||||||
|  |         protocol: TCP | ||||||
|  |         targetPort: 22 | ||||||
|  |       selector: | ||||||
|  |         special: key | ||||||
|  |       type: ClusterIP | ||||||
|  | 
 | ||||||
|  | You just need to create this `ClusterIP` service by using `kubectl`: | ||||||
|  | 
 | ||||||
|  |     $ kubectl create -f vmiservice.yaml | ||||||
|  | 
 | ||||||
|  | Alternatively, the VirtualMachineInstance could be exposed using the | ||||||
|  | `virtctl` command: | ||||||
|  | 
 | ||||||
|  |     $ virtctl expose virtualmachineinstance vmi-ephemeral --name vmiservice --port 27017 --target-port 22 | ||||||
|  | 
 | ||||||
Notes:

-   If `--target-port` is not set, it will take the same value as
    `--port`

-   The cluster IP is usually allocated automatically, but it may also
    be forced into a value using the `--cluster-ip` flag (assuming the
    value is in the valid range and not taken)
|  | 
 | ||||||
|  | Query the service object: | ||||||
|  | 
 | ||||||
|  |     $ kubectl get service | ||||||
|  |     NAME        TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)     AGE | ||||||
|  |     vmiservice   ClusterIP   172.30.3.149   <none>        27017/TCP   2m | ||||||
|  | 
 | ||||||
|  | You can connect to the VirtualMachineInstance by service IP and service | ||||||
|  | port inside the cluster network: | ||||||
|  | 
 | ||||||
|  |     $ ssh cirros@172.30.3.149 -p 27017 | ||||||
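
Inside the cluster, the service DNS name can be used instead of the
cluster IP (assuming the service lives in the `default` namespace):

    $ ssh cirros@vmiservice.default.svc.cluster.local -p 27017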
|  | 
 | ||||||
|  | Expose VirtualMachineInstance as a NodePort Service | ||||||
|  | --------------------------------------------------- | ||||||
|  | 
 | ||||||
|  | Expose the SSH port (22) of a VirtualMachineInstance running on KubeVirt | ||||||
|  | by creating a `NodePort` service: | ||||||
|  | 
 | ||||||
|  |     apiVersion: v1 | ||||||
|  |     kind: Service | ||||||
|  |     metadata: | ||||||
|  |       name: nodeport | ||||||
|  |     spec: | ||||||
|  |       externalTrafficPolicy: Cluster | ||||||
|  |       ports: | ||||||
|  |       - name: nodeport | ||||||
|  |         nodePort: 30000 | ||||||
|  |         port: 27017 | ||||||
|  |         protocol: TCP | ||||||
|  |         targetPort: 22 | ||||||
|  |       selector: | ||||||
|  |         special: key | ||||||
|  |       type: NodePort | ||||||
|  | 
 | ||||||
|  | You just need to create this `NodePort` service by using `kubectl`: | ||||||
|  | 
 | ||||||
    $ kubectl create -f nodeport.yaml
|  | 
 | ||||||
|  | Alternatively, the VirtualMachineInstance could be exposed using the | ||||||
|  | `virtctl` command: | ||||||
|  | 
 | ||||||
|  |     $ virtctl expose virtualmachineinstance vmi-ephemeral --name nodeport --type NodePort --port 27017 --target-port 22 --node-port 30000 | ||||||
|  | 
 | ||||||
Notes:

-   If `--node-port` is not set, its value will be allocated dynamically
    (in the range above 30000)

-   If the `--node-port` value is set, it must be unique across all
    services
|  | 
 | ||||||
|  | The service can be listed by querying for the service objects: | ||||||
|  | 
 | ||||||
|  |     $ kubectl get service | ||||||
|  |     NAME           TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)           AGE | ||||||
|  |     nodeport       NodePort   172.30.232.73   <none>        27017:30000/TCP   5m | ||||||
|  | 
 | ||||||
|  | Connect to the VirtualMachineInstance by using a node IP and node port | ||||||
|  | outside the cluster network: | ||||||
|  | 
 | ||||||
|  |     $ ssh cirros@$NODE_IP -p 30000 | ||||||
|  | 
 | ||||||
|  | Expose VirtualMachineInstance as a LoadBalancer Service | ||||||
|  | ------------------------------------------------------- | ||||||
|  | 
 | ||||||
|  | Expose the RDP port (3389) of a VirtualMachineInstance running on | ||||||
KubeVirt by creating a `LoadBalancer` service. Here is an example:
|  | 
 | ||||||
|  |     apiVersion: v1 | ||||||
|  |     kind: Service | ||||||
|  |     metadata: | ||||||
|  |       name: lbsvc | ||||||
|  |     spec: | ||||||
|  |       externalTrafficPolicy: Cluster | ||||||
|  |       ports: | ||||||
|  |       - port: 27017 | ||||||
|  |         protocol: TCP | ||||||
|  |         targetPort: 3389 | ||||||
|  |       selector: | ||||||
|  |         special: key | ||||||
|  |       type: LoadBalancer | ||||||
|  | 
 | ||||||
|  | You could create this `LoadBalancer` service by using `kubectl`: | ||||||
|  | 
 | ||||||
    $ kubectl create -f lbsvc.yaml
|  | 
 | ||||||
|  | Alternatively, the VirtualMachineInstance could be exposed using the | ||||||
|  | `virtctl` command: | ||||||
|  | 
 | ||||||
|  |     $ virtctl expose virtualmachineinstance vmi-ephemeral --name lbsvc --type LoadBalancer --port 27017 --target-port 3389 | ||||||
|  | 
 | ||||||
|  | Note that the external IP of the service could be forced to a value | ||||||
|  | using the `--external-ip` flag (no validation is performed on this | ||||||
|  | value). | ||||||
|  | 
 | ||||||
|  | The service can be listed by querying for the service objects: | ||||||
|  | 
 | ||||||
|  |     $ kubectl get svc | ||||||
|  |     NAME      TYPE           CLUSTER-IP       EXTERNAL-IP                   PORT(S)           AGE | ||||||
|  |     lbsvc     LoadBalancer   172.30.27.5      172.29.10.235,172.29.10.235   27017:31829/TCP   5s | ||||||
|  | 
 | ||||||
|  | Use `vinagre` client to connect your VirtualMachineInstance by using the | ||||||
|  | public IP and port. | ||||||
|  | 
 | ||||||
Note that the external port here (31829) was dynamically allocated.
|  | @ -0,0 +1,145 @@ | ||||||
|  | Assigning VMs to Nodes | ||||||
|  | ====================== | ||||||
|  | 
 | ||||||
|  | You can constrain the VM to only run on specific nodes or to prefer | ||||||
|  | running on specific nodes: | ||||||
|  | 
 | ||||||
|  | -   **nodeSelector** | ||||||
|  | 
 | ||||||
|  | -   **Affinity and anti-affinity** | ||||||
|  | 
 | ||||||
|  | -   **Taints and Tolerations** | ||||||
|  | 
 | ||||||
|  | nodeSelector | ||||||
|  | ------------ | ||||||
|  | 
 | ||||||
Setting `spec.nodeSelector` requirements constrains the scheduler to
only schedule VMs on nodes which contain the specified labels. In the
|  | following example the vmi contains the labels `cpu: slow` and | ||||||
|  | `storage: fast`: | ||||||
|  | 
 | ||||||
|  |     metadata: | ||||||
|  |       name: testvmi-ephemeral | ||||||
|  |     apiVersion: kubevirt.io/v1alpha3 | ||||||
|  |     kind: VirtualMachineInstance | ||||||
|  |     spec: | ||||||
|  |       nodeSelector: | ||||||
|  |         cpu: slow | ||||||
|  |         storage: fast | ||||||
|  |       domain: | ||||||
|  |         resources: | ||||||
|  |           requests: | ||||||
|  |             memory: 64M | ||||||
|  |         devices: | ||||||
|  |           disks: | ||||||
|  |           - name: mypvcdisk | ||||||
|  |             lun: {} | ||||||
|  |       volumes: | ||||||
|  |         - name: mypvcdisk | ||||||
|  |           persistentVolumeClaim: | ||||||
|  |             claimName: mypvc | ||||||
|  | 
 | ||||||
|  | Thus the scheduler will only schedule the vmi to nodes which contain | ||||||
|  | these labels in their metadata. It works exactly like the Pods | ||||||
|  | `nodeSelector`. See the [Pod nodeSelector | ||||||
|  | Documentation](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector) | ||||||
|  | for more examples. | ||||||
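
For the selector above to match anything, some node must carry the
corresponding labels; they can be added with `kubectl label` (the node
name is illustrative):

    $ kubectl label nodes node1 cpu=slow storage=fast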
|  | 
 | ||||||
|  | Affinity and anti-affinity | ||||||
|  | -------------------------- | ||||||
|  | 
 | ||||||
|  | The `spec.affinity` field allows specifying hard- and soft-affinity for | ||||||
VMs. It is possible to write matching rules against workloads (VMs and
|  | Pods) and Nodes. Since VMs are a workload type based on Pods, | ||||||
|  | Pod-affinity affects VMs as well. | ||||||
|  | 
 | ||||||
|  | An example for `podAffinity` and `podAntiAffinity` may look like this: | ||||||
|  | 
 | ||||||
|  |     metadata: | ||||||
|  |       name: testvmi-ephemeral | ||||||
|  |     apiVersion: kubevirt.io/v1alpha3 | ||||||
|  |     kind: VirtualMachineInstance | ||||||
|  |     spec: | ||||||
|  |       nodeSelector: | ||||||
|  |         cpu: slow | ||||||
|  |         storage: fast | ||||||
|  |       domain: | ||||||
|  |         resources: | ||||||
|  |           requests: | ||||||
|  |             memory: 64M | ||||||
|  |         devices: | ||||||
|  |           disks: | ||||||
|  |           - name: mypvcdisk | ||||||
|  |             lun: {} | ||||||
|  |       affinity: | ||||||
|  |         podAffinity: | ||||||
|  |           requiredDuringSchedulingIgnoredDuringExecution: | ||||||
|  |           - labelSelector: | ||||||
|  |               matchExpressions: | ||||||
|  |               - key: security | ||||||
|  |                 operator: In | ||||||
|  |                 values: | ||||||
|  |                 - S1 | ||||||
|  |             topologyKey: failure-domain.beta.kubernetes.io/zone | ||||||
|  |         podAntiAffinity: | ||||||
|  |           preferredDuringSchedulingIgnoredDuringExecution: | ||||||
|  |           - weight: 100 | ||||||
|  |             podAffinityTerm: | ||||||
|  |               labelSelector: | ||||||
|  |                 matchExpressions: | ||||||
|  |                 - key: security | ||||||
|  |                   operator: In | ||||||
|  |                   values: | ||||||
|  |                   - S2 | ||||||
|  |               topologyKey: kubernetes.io/hostname | ||||||
|  |       volumes: | ||||||
|  |         - name: mypvcdisk | ||||||
|  |           persistentVolumeClaim: | ||||||
|  |             claimName: mypvc | ||||||
|  | 
 | ||||||
Affinity and anti-affinity work exactly like the Pod's `affinity`. This
includes `podAffinity`, `podAntiAffinity` and `nodeAffinity`. See the
[Pod affinity and anti-affinity
Documentation](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity)
for more examples and details.
|  | 
 | ||||||
|  | Taints and Tolerations | ||||||
|  | ---------------------- | ||||||
|  | 
 | ||||||
Affinity, as described above, is a property of VMs that attracts them to
a set of nodes (either as a preference or a hard requirement). Taints
are the opposite: they allow a node to repel a set of VMs.
|  | 
 | ||||||
|  | Taints and tolerations work together to ensure that VMs are not | ||||||
|  | scheduled onto inappropriate nodes. One or more taints are applied to a | ||||||
|  | node; this marks that the node should not accept any VMs that do not | ||||||
|  | tolerate the taints. Tolerations are applied to VMs, and allow (but do | ||||||
|  | not require) the VMs to schedule onto nodes with matching taints. | ||||||
|  | 
 | ||||||
You add a taint to a node using `kubectl taint`. For example,
|  | 
 | ||||||
|  |     kubectl taint nodes node1 key=value:NoSchedule | ||||||
|  | 
 | ||||||
|  | An example for `tolerations` may look like this: | ||||||
|  | 
 | ||||||
|  |     metadata: | ||||||
|  |       name: testvmi-ephemeral | ||||||
|  |     apiVersion: kubevirt.io/v1alpha3 | ||||||
|  |     kind: VirtualMachineInstance | ||||||
|  |     spec: | ||||||
|  |       nodeSelector: | ||||||
|  |         cpu: slow | ||||||
|  |         storage: fast | ||||||
|  |       domain: | ||||||
|  |         resources: | ||||||
|  |           requests: | ||||||
|  |             memory: 64M | ||||||
|  |         devices: | ||||||
|  |           disks: | ||||||
|  |           - name: mypvcdisk | ||||||
|  |             lun: {} | ||||||
|  |       tolerations: | ||||||
|  |       - key: "key" | ||||||
|  |         operator: "Equal" | ||||||
|  |         value: "value" | ||||||
|  |         effect: "NoSchedule" | ||||||
|  | Increasing the VirtualMachineInstance Density on Nodes | ||||||
|  | ====================================================== | ||||||
|  | 
 | ||||||
|  | KubeVirt does not yet support classical Memory Overcommit Management or | ||||||
|  | Memory Ballooning. In other words VirtualMachineInstances can’t give | ||||||
|  | back memory they have allocated. However, a few other things can be | ||||||
|  | tweaked to reduce the memory footprint and overcommit the per-VMI memory | ||||||
|  | overhead. | ||||||
|  | 
 | ||||||
|  | Remove the Graphical Devices | ||||||
|  | ---------------------------- | ||||||
|  | 
 | ||||||
The first and safest option to reduce the memory footprint is removing
the graphical device from the VMI by setting
`spec.domain.devices.autoattachGraphicsDevice` to `false`. See the video
and graphics device
[documentation](/workloads/virtual-machines/virtualized-hardware-configuration#video-and-graphics-device)
for further details and examples.
|  | 
 | ||||||
|  | This will save a constant amount of `16MB` per VirtualMachineInstance | ||||||
|  | but also disable VNC access. | ||||||
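
A minimal VMI sketch with the graphical device disabled (the name and
memory value are illustrative; the field name matches the KubeVirt API):

    apiVersion: kubevirt.io/v1alpha3
    kind: VirtualMachineInstance
    metadata:
      name: testvmi-nographics
    spec:
      domain:
        devices:
          # no video/graphics device: saves ~16MB, but disables VNC
          autoattachGraphicsDevice: false
        resources:
          requests:
            memory: 64M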
|  | 
 | ||||||
|  | Overcommit the Guest Overhead | ||||||
|  | ----------------------------- | ||||||
|  | 
 | ||||||
Before you continue, make sure you are familiar with the [Out of
Resource
Management](https://kubernetes.io/docs/tasks/administer-cluster/out-of-resource/)
behaviour of Kubernetes.
|  | 
 | ||||||
Every VirtualMachineInstance requests slightly more memory from
Kubernetes than what was requested by the user for the Operating System.
The additional memory covers the per-VMI overhead of the infrastructure
which wraps the actual VirtualMachineInstance process.
|  | 
 | ||||||
|  | In order to increase the VMI density on the node, it is possible to not | ||||||
|  | request the additional overhead by setting | ||||||
|  | `spec.domain.resources.overcommitGuestOverhead` to `true`: | ||||||
|  | 
 | ||||||
|  |     apiVersion: kubevirt.io/v1alpha3 | ||||||
|  |     kind: VirtualMachineInstance | ||||||
|  |     metadata: | ||||||
|  |       name: testvmi-nocloud | ||||||
|  |     spec: | ||||||
|  |       terminationGracePeriodSeconds: 30 | ||||||
|  |       domain: | ||||||
|  |         resources: | ||||||
|  |           overcommitGuestOverhead: true | ||||||
|  |           requests: | ||||||
|  |             memory: 1024M | ||||||
|  |     [...] | ||||||
|  | 
 | ||||||
This works fine as long as most VirtualMachineInstances do not use all
of the memory they requested. That is especially the case if you have
short-lived VMIs. But if you have long-lived VirtualMachineInstances or
run extremely memory intensive tasks inside the VirtualMachineInstance,
your VMIs will sooner or later use all the memory they are granted.
|  | 
 | ||||||
|  | Overcommit Guest Memory | ||||||
|  | ----------------------- | ||||||
|  | 
 | ||||||
|  | The third option is real memory overcommit on the VMI. In this scenario | ||||||
|  | the VMI is explicitly told that it has more memory available than what | ||||||
|  | is requested from the cluster by setting `spec.domain.memory.guest` to a | ||||||
|  | value higher than `spec.domain.resources.requests.memory`. | ||||||
|  | 
 | ||||||
|  | The following definition requests `1024MB` from the cluster but tells | ||||||
|  | the VMI that it has `2048MB` of memory available: | ||||||
|  | 
 | ||||||
|  |     apiVersion: kubevirt.io/v1alpha3 | ||||||
|  |     kind: VirtualMachineInstance | ||||||
|  |     metadata: | ||||||
|  |       name: testvmi-nocloud | ||||||
|  |     spec: | ||||||
|  |       terminationGracePeriodSeconds: 30 | ||||||
|  |       domain: | ||||||
|  |         resources: | ||||||
|  |           overcommitGuestOverhead: true | ||||||
|  |           requests: | ||||||
|  |             memory: 1024M | ||||||
|  |         memory: | ||||||
|  |           guest: 2048M | ||||||
|  |     [...] | ||||||
|  | 
 | ||||||
As long as there is enough free memory available on the node, the VMI
can happily consume up to `2048MB`. This VMI will get the `Burstable`
resource class assigned by Kubernetes (see [QoS classes in
Kubernetes](https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/#create-a-pod-that-gets-assigned-a-qos-class-of-burstable)
for more details). The same eviction rules as for Pods apply to the VMI
in case the node comes under memory pressure.
|  | 
 | ||||||
Implicit memory overcommit is disabled by default. This means that when
the memory request is not specified, it is set to match
`spec.domain.memory.guest`. It can, however, be enabled using
`memory-overcommit` in the `kubevirt-config`. For example, by setting
`memory-overcommit: "150"` we define that when the memory request is not
explicitly set, it will be implicitly set to achieve a memory overcommit
of 150%. For instance, with `spec.domain.memory.guest: 3072M`, the
memory request is set to 2048M if omitted. Note that the actual memory
request depends on additional configuration options like
`overcommitGuestOverhead`.
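
As a sketch, the setting lives in the `kubevirt-config` ConfigMap; the
namespace used here (`kubevirt`) may differ per installation:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: kubevirt-config
      namespace: kubevirt
    data:
      # memory request defaults to guest memory / 1.5 when omitted
      memory-overcommit: "150"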
|  | 
 | ||||||
|  | Configuring the memory pressure behaviour of nodes | ||||||
|  | -------------------------------------------------- | ||||||
|  | 
 | ||||||
|  | If the node gets under memory pressure, depending on the `kubelet` | ||||||
|  | configuration the virtual machines may get killed by the OOM handler or | ||||||
|  | by the `kubelet` itself. It is possible to tweak that behaviour based on | ||||||
|  | the requirements of your VirtualMachineInstances by: | ||||||
|  | 
 | ||||||
|  | -   Configuring [Soft Eviction | ||||||
|  |     Thresholds](https://kubernetes.io/docs/tasks/administer-cluster/out-of-resource/#soft-eviction-thresholds) | ||||||
|  | 
 | ||||||
|  | -   Configuring [Hard Eviction | ||||||
|  |     Thresholds](https://kubernetes.io/docs/tasks/administer-cluster/out-of-resource/#hard-eviction-thresholds) | ||||||
|  | 
 | ||||||
|  | -   Requesting the right QoS class for VirtualMachineInstances | ||||||
|  | 
 | ||||||
-   Setting `--system-reserved` and `--kube-reserved`
|  | 
 | ||||||
|  | -   Enabling KSM | ||||||
|  | 
 | ||||||
|  | -   Enabling swap | ||||||
|  | 
 | ||||||
|  | ### Configuring Soft Eviction Thresholds | ||||||
|  | 
 | ||||||
|  | > Note: Soft Eviction will effectively shutdown VirtualMachineInstances. | ||||||
|  | > They are not paused, hibernated or migrated. Further, Soft Eviction is | ||||||
|  | > disabled by default. | ||||||
|  | 
 | ||||||
If configured, VirtualMachineInstances get evicted once the available
memory falls below the threshold specified via `--eviction-soft`, and
the VirtualMachineInstance is given the chance to perform a shutdown of
the VMI within a timespan specified via `--eviction-max-pod-grace-period`.
The flag `--eviction-soft-grace-period` specifies for how long a soft
eviction condition must hold before soft evictions are triggered.
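
A sketch of how these kubelet flags might be combined (the threshold and
grace periods are illustrative values, not recommendations):

    --eviction-soft=memory.available<1Gi \
    --eviction-soft-grace-period=memory.available=1m \
    --eviction-max-pod-grace-period=60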
|  | 
 | ||||||
|  | If set properly according to the demands of the VMIs, overcommitting | ||||||
|  | should only lead to soft evictions in rare cases for some VMIs. They may | ||||||
|  | even get re-scheduled to the same node with less initial memory demand. | ||||||
|  | For some workload types, this can be perfectly fine and lead to better | ||||||
|  | overall memory-utilization. | ||||||
|  | 
 | ||||||
|  | ### Configuring Hard Eviction Thresholds | ||||||
|  | 
 | ||||||
|  | > Note: If unspecified, the kubelet will do hard evictions for Pods once | ||||||
|  | > `memory.available` falls below `100Mi`. | ||||||
|  | 
 | ||||||
Limits set via `--eviction-hard` will lead to immediate eviction of
VirtualMachineInstances or Pods. This stops VMIs without a grace period
and is comparable to a power loss on a real computer.
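
For example, to raise the hard eviction threshold above the `100Mi`
default (the value shown is illustrative):

    --eviction-hard=memory.available<500Mi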
|  | 
 | ||||||
If the hard limit is hit, VMIs may occasionally simply be killed. They
may be re-scheduled to the same node immediately again, since they start
with lower memory consumption again. This can be an acceptable option if
the memory threshold is only very seldom hit and the work performed by
the VMIs is reproducible or can be resumed from checkpoints.
|  | 
 | ||||||
|  | ### Requesting the right QoS Class for VirtualMachineInstances | ||||||
|  | 
 | ||||||
|  | Different QoS classes get [assigned to Pods and | ||||||
|  | VirtualMachineInstances](https://kubernetes.io/docs/tasks/administer-cluster/cpu-management-policies/#static-policy) | ||||||
|  | based on the `requests.memory` and `limits.memory`. KubeVirt right now | ||||||
|  | supports the QoS classes `Burstable` and `Guaranteed`. `Burstable` VMIs | ||||||
|  | are evicted before `Guaranteed` VMIs. | ||||||
|  | 
 | ||||||
|  | This allows creating two classes of VMIs: | ||||||
|  | 
 | ||||||
|  | -   One type can have equal `requests.memory` and `limits.memory` set | ||||||
|  |     and therefore gets the `Guaranteed` class assigned. This one will | ||||||
|  |     not get evicted and should never run into memory issues, but is more | ||||||
|  |     demanding. | ||||||
|  | 
 | ||||||
|  | -   One type can have no `limits.memory` or a `limits.memory` which is | ||||||
|  |     greater than `requests.memory` and therefore gets the `Burstable` | ||||||
|  |     class assigned. These VMIs will be evicted first. | ||||||
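
The `Guaranteed` variant might be sketched like this (the rest of the
VMI spec is omitted; the key point is that `requests.memory` equals
`limits.memory`):

    spec:
      domain:
        resources:
          requests:
            memory: 1024M
          limits:
            memory: 1024M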
|  | 
 | ||||||
### Setting `--system-reserved` and `--kube-reserved`
|  | 
 | ||||||
It may be important to reserve some memory for other daemons (not
DaemonSets) which are running on the same node (e.g. ssh or dhcp
servers). The reservation can be done with the `--system-reserved`
switch. Further, for the Kubelet and the container runtime a dedicated
flag called `--kube-reserved` exists.
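
An illustrative kubelet invocation reserving memory for system daemons
and for the Kubernetes components (the amounts are examples only and
should be sized per node):

    --system-reserved=memory=1Gi \
    --kube-reserved=memory=1Gi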
|  | 
 | ||||||
|  | ### Enabling KSM | ||||||
|  | 
 | ||||||
The [KSM](https://www.linux-kvm.org/page/KSM) (Kernel Samepage Merging)
daemon can be started on the node. Depending on its tuning parameters it
will more or less aggressively try to merge identical pages between
applications and VirtualMachineInstances. The more aggressively it is
configured, the more CPU it will use itself, so the memory overcommit
advantage comes with a slight CPU performance hit.
|  | 
 | ||||||
|  | Config file tuning allows changes to scanning frequency (how often will | ||||||
|  | KSM activate) and aggressiveness (how many pages per second will it | ||||||
|  | scan). | ||||||
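
On most Linux nodes KSM can also be controlled directly through sysfs; a
sketch (values are illustrative; a higher `pages_to_scan` and a lower
`sleep_millisecs` make scanning more aggressive):

    echo 1   > /sys/kernel/mm/ksm/run              # start the KSM daemon
    echo 100 > /sys/kernel/mm/ksm/pages_to_scan    # pages scanned per cycle
    echo 200 > /sys/kernel/mm/ksm/sleep_millisecs  # pause between cycles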
|  | 
 | ||||||
|  | ### Enabling Swap | ||||||
|  | 
 | ||||||
> Note: This will make sure that your VirtualMachines can’t crash or get
> evicted from the node, but it comes at the cost of pretty
> unpredictable performance once the node runs out of memory, and the
> kubelet may not detect that it should evict Pods to restore
> performance.
|  | 
 | ||||||
Enabling swap is in general [not
recommended](https://github.com/kubernetes/kubernetes/issues/53533) on
Kubernetes right now. However, it can be useful in combination with KSM,
since KSM merges identical pages over time. Swap allows the VMIs to
successfully allocate memory which will then effectively never be used
because of the later de-duplication done by KSM.
|  | Usage | ||||||
|  | ===== | ||||||
|  | 
 | ||||||
|  | Using KubeVirt should be fairly natural if you are used to working with | ||||||
|  | Kubernetes. | ||||||
|  | 
 | ||||||
|  | The primary way of using KubeVirt is by working with the KubeVirt kinds | ||||||
|  | in the Kubernetes API: | ||||||
|  | 
 | ||||||
|  |     $ kubectl create -f vmi.yaml | ||||||
|  |     $ kubectl wait --for=condition=Ready vmis/my-vmi | ||||||
|  |     $ kubectl get vmis | ||||||
|  |     $ kubectl delete vmis testvmi | ||||||
|  | 
 | ||||||
|  | The following pages describe how to use and discover the API, manage, | ||||||
|  | and access virtual machines. | ||||||
|  | 
 | ||||||
|  | User Interface | ||||||
|  | -------------- | ||||||
|  | 
 | ||||||
KubeVirt does not come with a UI; it only extends the Kubernetes API
with virtualization functionality.
|  | VirtualMachineInstanceReplicaSet | ||||||
|  | ================================ | ||||||
|  | 
 | ||||||
|  | VirtualMachineInstanceReplicaSet | ||||||
|  | -------------------------------- | ||||||
|  | 
 | ||||||
A *VirtualMachineInstanceReplicaSet* tries to ensure that a specified
number of VirtualMachineInstance replicas is running at any time. In
other words, a *VirtualMachineInstanceReplicaSet* makes sure that a
VirtualMachineInstance or a homogeneous set of VirtualMachineInstances
is always up and ready. It is very similar to a [Kubernetes
ReplicaSet](https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/).
|  | 
 | ||||||
No state is kept, and no guarantees are given about the maximum number
of VirtualMachineInstance replicas which are up at a time. For example,
the *VirtualMachineInstanceReplicaSet* may decide to create new replicas
if possibly still running VMs are entering an unknown state.
|  | 
 | ||||||
|  | How to use a VirtualMachineInstanceReplicaSet | ||||||
|  | --------------------------------------------- | ||||||
|  | 
 | ||||||
|  | The *VirtualMachineInstanceReplicaSet* allows us to specify a | ||||||
|  | *VirtualMachineInstanceTemplate* in `spec.template`. It consists of | ||||||
|  | `ObjectMetadata` in `spec.template.metadata`, and a | ||||||
|  | `VirtualMachineInstanceSpec` in `spec.template.spec`. The specification | ||||||
|  | of the virtual machine is equal to the specification of the virtual | ||||||
|  | machine in the `VirtualMachineInstance` workload. | ||||||
|  | 
 | ||||||
|  | `spec.replicas` can be used to specify how many replicas are wanted. If | ||||||
|  | unspecified, the default value is 1. This value can be updated anytime. | ||||||
|  | The controller will react to the changes. | ||||||
|  | 
 | ||||||
`spec.selector` is used by the controller to keep track of managed
virtual machines. The selector specified there must be able to match the
virtual machine labels as specified in `spec.template.metadata.labels`.
If the selector does not match these labels, or if they are empty, the
controller will simply do nothing except log an error. The user is
responsible for not creating other virtual machines or
*VirtualMachineInstanceReplicaSets* which conflict with the selector and
the template labels.
|  | 
 | ||||||
|  | Exposing a VirtualMachineInstanceReplicaSet as a Service | ||||||
|  | -------------------------------------------------------- | ||||||
|  | 
 | ||||||
A VirtualMachineInstanceReplicaSet can be exposed as a service. When
this is done, one of the VirtualMachineInstance replicas will be picked
for the actual delivery of the service.
|  | 
 | ||||||
|  | For example, exposing SSH port (22) as a ClusterIP service using virtctl | ||||||
|  | on a VirtualMachineInstanceReplicaSet: | ||||||
|  | 
 | ||||||
|  |     $ virtctl expose vmirs vmi-ephemeral --name vmiservice --port 27017 --target-port 22 | ||||||
|  | 
 | ||||||
|  | All service exposure options that apply to a VirtualMachineInstance | ||||||
|  | apply to a VirtualMachineInstanceReplicaSet. See [Exposing | ||||||
|  | VirtualMachineInstance](http://kubevirt.io/user-guide/#/workloads/virtual-machines/expose-service) | ||||||
|  | for more details. | ||||||
|  | 
 | ||||||
|  | When to use a VirtualMachineInstanceReplicaSet | ||||||
|  | ---------------------------------------------- | ||||||
|  | 
 | ||||||
|  | > **Note:** The base assumption is that referenced disks are read-only | ||||||
|  | > or that the VMIs are writing internally to a tmpfs. The most obvious | ||||||
|  | > volume sources for VirtualMachineInstanceReplicaSets which KubeVirt | ||||||
|  | > supports are referenced below. If other types are used **data | ||||||
|  | > corruption** is possible. | ||||||
|  | 
 | ||||||
|  | Using VirtualMachineInstanceReplicaSet is the right choice when one | ||||||
|  | wants many identical VMs and does not care about maintaining any disk | ||||||
|  | state after the VMs are terminated. | ||||||
|  | 
 | ||||||
|  | [Volume types](workloads/virtual-machines/disks-and-volumes.md) which | ||||||
|  | work well in combination with a VirtualMachineInstanceReplicaSet are: | ||||||
|  | 
 | ||||||
|  | -   **cloudInitNoCloud** | ||||||
|  | 
 | ||||||
|  | -   **ephemeral** | ||||||
|  | 
 | ||||||
|  | -   **containerDisk** | ||||||
|  | 
 | ||||||
|  | -   **emptyDisk** | ||||||
|  | 
 | ||||||
|  | -   **configMap** | ||||||
|  | 
 | ||||||
|  | -   **secret** | ||||||
|  | 
 | ||||||
|  | -   any other type, if the VMI writes internally to a tmpfs | ||||||
|  | 
 | ||||||
|  | ### Fast starting ephemeral Virtual Machines | ||||||
|  | 
 | ||||||
|  | This use-case involves small and fast booting VMs with little | ||||||
|  | provisioning performed during initialization. | ||||||
|  | 
 | ||||||
In this scenario, migrations are not important. Redistributing VM
workloads between Nodes can be achieved simply by deleting managed
VirtualMachineInstances which are running on an overloaded Node. The
`eviction` of such a VirtualMachineInstance can happen by directly
deleting the VirtualMachineInstance (KubeVirt-aware workload
redistribution) or by deleting the corresponding Pod in which the
virtual machine runs (Kubernetes-only workload redistribution).
|  | 
 | ||||||
|  | ### Slow starting ephemeral Virtual Machines | ||||||
|  | 
 | ||||||
|  | In this use-case one has big and slow booting VMs, and complex or | ||||||
|  | resource intensive provisioning is done during boot. More specifically, | ||||||
|  | the timespan between the creation of a new VM and it entering the ready | ||||||
|  | state is long. | ||||||
|  | 
 | ||||||
|  | In this scenario, one still does not care about the state, but since | ||||||
|  | re-provisioning VMs is expensive, migrations are important. Workload | ||||||
|  | redistribution between Nodes can be achieved by migrating | ||||||
|  | VirtualMachineInstances to different Nodes. A workload redistributor | ||||||
|  | needs to be aware of KubeVirt and create migrations, instead of | ||||||
|  | `evicting` VirtualMachineInstances by deletion. | ||||||
|  | 
 | ||||||
> **Note:** The simplest form of a migratable ephemeral
> VirtualMachineInstance is to use local storage based on
> `ContainerDisks` in combination with a file based backing store.
> However, migratable backing store support has not officially landed in
> KubeVirt yet and is untested.
|  | 
 | ||||||
|  | Example | ||||||
|  | ------- | ||||||
|  | 
 | ||||||
|  |     apiVersion: kubevirt.io/v1alpha3 | ||||||
|  |     kind: VirtualMachineInstanceReplicaSet | ||||||
|  |     metadata: | ||||||
|  |       name: testreplicaset | ||||||
|  |     spec: | ||||||
|  |       replicas: 3 | ||||||
|  |       selector: | ||||||
|  |         matchLabels: | ||||||
|  |           myvmi: myvmi | ||||||
|  |       template: | ||||||
|  |         metadata: | ||||||
|  |           name: test | ||||||
|  |           labels: | ||||||
|  |             myvmi: myvmi | ||||||
|  |         spec: | ||||||
|  |           domain: | ||||||
|  |             devices: | ||||||
|  |               disks: | ||||||
|  |               - disk: | ||||||
|  |                 name: containerdisk | ||||||
|  |             resources: | ||||||
|  |               requests: | ||||||
|  |                 memory: 64M | ||||||
|  |           volumes: | ||||||
|  |           - name: containerdisk | ||||||
|  |             containerDisk: | ||||||
|  |               image: kubevirt/cirros-container-disk-demo:latest | ||||||
|  | 
 | ||||||
|  | Saving this manifest into `testreplicaset.yaml` and submitting it to | ||||||
|  | Kubernetes will create three virtual machines based on the template. | ||||||
|  | 
 | ||||||
|  |     $ kubectl create -f testreplicaset.yaml | ||||||
|  |     virtualmachineinstancereplicaset "testreplicaset" created | ||||||
|  |     $ kubectl describe vmirs testreplicaset | ||||||
|  |     Name:         testreplicaset | ||||||
|  |     Namespace:    default | ||||||
|  |     Labels:       <none> | ||||||
|  |     Annotations:  <none> | ||||||
|  |     API Version:  kubevirt.io/v1alpha3 | ||||||
|  |     Kind:         VirtualMachineInstanceReplicaSet | ||||||
|  |     Metadata: | ||||||
|  |       Cluster Name: | ||||||
|  |       Creation Timestamp:  2018-01-03T12:42:30Z | ||||||
|  |       Generation:          0 | ||||||
|  |       Resource Version:    6380 | ||||||
|  |       Self Link:           /apis/kubevirt.io/v1alpha3/namespaces/default/virtualmachineinstancereplicasets/testreplicaset | ||||||
|  |       UID:                 903a9ea0-f083-11e7-9094-525400ee45b0 | ||||||
|  |     Spec: | ||||||
|  |       Replicas:  3 | ||||||
|  |       Selector: | ||||||
|  |         Match Labels: | ||||||
|  |           Myvmi:  myvmi | ||||||
|  |       Template: | ||||||
|  |         Metadata: | ||||||
|  |           Creation Timestamp:  <nil> | ||||||
|  |           Labels: | ||||||
|  |             Myvmi:  myvmi | ||||||
|  |           Name:    test | ||||||
|  |         Spec: | ||||||
|  |           Domain: | ||||||
|  |             Devices: | ||||||
|  |               Disks: | ||||||
|  |                 Disk: | ||||||
|  |                 Name:         containerdisk | ||||||
|  |                 Volume Name:  containerdisk | ||||||
|  |             Resources: | ||||||
|  |               Requests: | ||||||
|  |                 Memory:  64M | ||||||
|  |           Volumes: | ||||||
|  |             Name:  containerdisk | ||||||
|  |             Container Disk: | ||||||
|  |               Image:  kubevirt/cirros-container-disk-demo:latest | ||||||
|  |     Status: | ||||||
|  |       Conditions:      <nil> | ||||||
|  |       Ready Replicas:  2 | ||||||
|  |       Replicas:        3 | ||||||
|  |     Events: | ||||||
|  |       Type    Reason            Age   From                                 Message | ||||||
|  |       ----    ------            ----  ----                                 ------- | ||||||
|  |       Normal  SuccessfulCreate  13s   virtualmachineinstancereplicaset-controller  Created virtual machine: testh8998 | ||||||
|  |       Normal  SuccessfulCreate  13s   virtualmachineinstancereplicaset-controller  Created virtual machine: testf474w | ||||||
|  |       Normal  SuccessfulCreate  13s   virtualmachineinstancereplicaset-controller  Created virtual machine: test5lvkd | ||||||
|  | 
 | ||||||
|  | `Replicas` is `3` and `Ready Replicas` is `2`. This means that at the | ||||||
|  | moment when showing the status, three Virtual Machines were already | ||||||
|  | created, but only two are running and ready. | ||||||
|  | 
 | ||||||
|  | Scaling via the Scale Subresource | ||||||
|  | --------------------------------- | ||||||
|  | 
 | ||||||
> **Note:** This requires the `CustomResourceSubresources` feature gate
> to be enabled for clusters prior to 1.11.
|  | 
 | ||||||
|  | The `VirtualMachineInstanceReplicaSet` supports the `scale` subresource. | ||||||
|  | As a consequence it is possible to scale it via `kubectl`: | ||||||
|  | 
 | ||||||
|  |     $ kubectl scale vmirs myvmirs --replicas 5 | ||||||
|  | 
 | ||||||
|  | Using the Horizontal Pod Autoscaler | ||||||
|  | ----------------------------------- | ||||||
|  | 
 | ||||||
> **Note:** This requires a cluster of version 1.11 or newer.
|  | 
 | ||||||
|  | The | ||||||
|  | [HorizontalPodAutoscaler](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/) | ||||||
|  | (HPA) can be used with a `VirtualMachineInstanceReplicaSet`. Simply | ||||||
|  | reference it in the spec of the autoscaler: | ||||||
|  | 
 | ||||||
|  |     apiVersion: autoscaling/v1 | ||||||
|  |     kind: HorizontalPodAutoscaler | ||||||
|  |     metadata: | ||||||
|  |       name: myhpa | ||||||
|  |     spec: | ||||||
|  |       scaleTargetRef: | ||||||
|  |         kind: VirtualMachineInstanceReplicaSet | ||||||
|  |         name: vmi-replicaset-cirros | ||||||
|  |         apiVersion: kubevirt.io/v1alpha3 | ||||||
|  |       minReplicas: 3 | ||||||
|  |       maxReplicas: 10 | ||||||
|  |       targetCPUUtilizationPercentage: 50 | ||||||
|  | 
 | ||||||
Right now `kubectl autoscale` does not work with Custom Resources. Only
the declarative form of writing the HPA YAML manually and posting it via
`kubectl create` is supported.
|  | VMCTL | ||||||
|  | ===== | ||||||
|  | 
 | ||||||
|  | Background | ||||||
|  | ---------- | ||||||
|  | 
 | ||||||
One remarkable difference between KubeVirt and other solutions that run
Virtual Machine workloads in a container is the top-level API. KubeVirt
treats VirtualMachineInstances as first class citizens by designating
custom resources to map/track VirtualMachine settings and attributes.
This has considerable advantages, but there’s a trade-off: native
Kubernetes higher-level workload controllers such as Deployments,
ReplicaSets, DaemonSets and StatefulSets are designed to work directly
with Pods. Because VirtualMachine and VirtualMachineInstance resources
are defined outside the scope of Kubernetes' responsibility, it will
always be up to the KubeVirt project to create analogues of those
controllers. This is possible, and is in fact something that exists for
some entities, e.g. VirtualMachineInstanceReplicaSet, but the KubeVirt
project will always be one step behind. Any significant changes upstream
would need to be implemented manually in KubeVirt.
|  | 
 | ||||||
|  | Overview | ||||||
|  | -------- | ||||||
|  | 
 | ||||||
Vmctl is designed to address this delta by managing VirtualMachines from
within a Pod. Vmctl takes an upstream VirtualMachine to act as a
prototype, then derives and spawns a new VirtualMachine based on it.
This derived VM will be running alongside the vmctl pod; thus for every
vmctl pod in the cluster, there should be a VM running alongside it. To
be clear, vmctl is not a VM itself; it controls a VM close by. The
derived VM will be similar to the prototype, but a few fields will be
modified:
|  | 
 | ||||||
|  | -   Name | ||||||
|  | 
 | ||||||
|  | -   NodeSelector | ||||||
|  | 
 | ||||||
|  | -   Running | ||||||
|  | 
 | ||||||
|  | ### Name | ||||||
|  | 
 | ||||||
|  | The new VirtualMachine’s `Name` attribute will be a concatenation of the | ||||||
|  | prototype VM’s name and the Pod’s name. This will be a unique resource | ||||||
|  | name because both the prototype VM name and the vmctl Pod name are | ||||||
|  | unique. | ||||||
|  | 
 | ||||||
|  | ### NodeSelector | ||||||
|  | 
 | ||||||
|  | The new VirtualMachine will have a selector with node affinity matching | ||||||
|  | the running vmctl Pod’s node, thus the VirtualMachine and the vmctl Pod | ||||||
|  | will run on the same node. This is because a `DaemonSet` maps one pod to | ||||||
|  | each node in a cluster. By tracking which Node a vmctl Pod is running | ||||||
|  | on, KubeVirt ensures the same behavior for VirtualMachines. | ||||||
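
The derived VM's scheduling constraint might look roughly like this (a
sketch only; the exact field vmctl sets is an implementation detail, and
`node01` stands in for the node the vmctl Pod landed on):

    spec:
      template:
        spec:
          nodeSelector:
            # pin the derived VM to the vmctl Pod's node
            kubernetes.io/hostname: node01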
|  | 
 | ||||||
|  | ### Running | ||||||
|  | 
 | ||||||
|  | The new VirtualMachine will be set to the running state regardless of | ||||||
|  | the prototype VM’s state. | ||||||
|  | 
 | ||||||
|  | Implementation | ||||||
|  | -------------- | ||||||
|  | 
 | ||||||
|  | Vmctl is implemented as a go binary, deployed in a container, that takes | ||||||
|  | the following parameters: | ||||||
|  | 
 | ||||||
|  | -   `namespace`: The namespace to create the derived VirtualMachine in. | ||||||
|  |     The default namespace is `default`. | ||||||
|  | 
 | ||||||
|  | -   `proto-namespace`: The namespace the prototype VM is in. This | ||||||
|  |     defaults to the value used for `namespace` if omitted. | ||||||
|  | 
 | ||||||
|  | -   `hostname-override`: Mainly for testing–in order to make it possible | ||||||
|  |     to run vmctl outside of a pod. | ||||||
|  | 
 | ||||||
vmctl has a single positional argument:

-   prototype VM name

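Combined with the flags above, an invocation might look like the following (the prototype VM name is illustrative, and the exact flag syntax is an assumption):

    vmctl --namespace default --proto-namespace default testvm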
When the vmctl container is deployed, it will locate the requested
prototype VM, clone it, and then watch and wait. When the vmctl pod is
deleted, vmctl will clean up the derived VirtualMachine. Consequently it
is inadvisable to use a zero-length grace period when shutting down the
pod.

|  | Services | ||||||
|  | -------- | ||||||
|  | 
 | ||||||
|  | One note worth stressing is that from Kubernete’s perspective the vmctl | ||||||
|  | Pod is entirely distinct from the VM it spawns. It is especially | ||||||
|  | important to be mindful of this when creating services. From an end | ||||||
|  | user’s perspective, there’s nothing useful running on the vmctl Pod | ||||||
|  | itself. The recommended method of exposing services on a VM is to use | ||||||
|  | Labels and Selectors. Applying a label to the prototype VM, and using | ||||||
|  | that `matchLabel` on a service is sufficient to expose the service on | ||||||
|  | all derived VM’s. | ||||||
|  | 
 | ||||||
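As a sketch, such a label could be placed on the prototype VM’s instance template so that every derived VM carries it (the `apiVersion` and names here are illustrative assumptions):

```yaml
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
  name: testvm            # the prototype VM
spec:
  template:
    metadata:
      labels:
        app: vmctl        # matched by the Service's selector below
```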
|  | PersistentVolumeClaims | ||||||
|  | ---------------------- | ||||||
|  | 
 | ||||||
|  | Another thing to consider with vmctl is the use of shared volumes. By | ||||||
|  | nature vmctl is designed to spawn an arbitrary number of VirtualMachines | ||||||
|  | on demand, all of which will define the same Disk and Volume stanzas. | ||||||
|  | Because of this, using shared volumes in read-write mode should be | ||||||
|  | avoided, or the PVC’s could be corrupted. To avoid this issue, ephemeral | ||||||
|  | disks or ContainerDisks could be used. | ||||||
|  | 
 | ||||||
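For example, a ContainerDisk-backed volume keeps each derived VM’s storage independent, since every VM gets its own copy-on-write layer (the image name and field layout below are an illustrative sketch):

```yaml
spec:
  template:
    spec:
      domain:
        devices:
          disks:
          - name: rootdisk
            disk:
              bus: virtio
      volumes:
      - name: rootdisk
        containerDisk:
          image: kubevirt/cirros-container-disk-demo
```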
Examples
========

The following PodPreset applies to all examples below. It factors out
the lines related to the Kubernetes Downward API in order to keep the
examples clear.

    apiVersion: settings.k8s.io/v1alpha1
    kind: PodPreset
    metadata:
      name: have-podinfo
    spec:
      selector:
        matchLabels:
          app: vmctl
      volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
        - name: podinfo
          downwardAPI:
            items:
            - path: "name"
              fieldRef:
                fieldPath: metadata.name

Deployment
----------

This is an example of using vmctl as a Deployment (note: this example
uses the `have-podinfo` PodPreset above):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: vmctl
      labels:
        app: vmctl
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: vmctl
      template:
        metadata:
          labels:
            app: vmctl
        spec:
          serviceAccountName: default
          containers:
          - name: vmctl
            image: quay.io/fabiand/vmctl
            imagePullPolicy: IfNotPresent
            args:
            - "testvm"

This example would look for a VirtualMachine in the `default` namespace
named `testvm`, and instantiate 3 replicas of it.

DaemonSet
---------

This is an example of using vmctl as a DaemonSet (note: this example
uses the `have-podinfo` PodPreset above):

    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: vmctl
      labels:
        app: vmctl
    spec:
      selector:
        matchLabels:
          app: vmctl
      template:
        metadata:
          labels:
            app: vmctl
        spec:
          serviceAccountName: default
          containers:
          - name: vmctl
            image: quay.io/fabiand/vmctl
            imagePullPolicy: IfNotPresent
            args:
            - "testvm"

This example would look for a VirtualMachine in the `default` namespace
named `testvm`, and instantiate a VirtualMachine on every node in the
Kubernetes cluster.

Service
-------

Assuming a controller similar to the examples above, where a label
`app: vmctl` is used, a Service to expose the VMs could look like this:

    kind: Service
    apiVersion: v1
    metadata:
      name: my-service
    spec:
      selector:
        app: vmctl
      ports:
      - protocol: TCP
        port: 80
        targetPort: 80

In this case a ClusterIP would be created that maps port 80 to each VM.
See [Kubernetes
Services](https://kubernetes.io/docs/concepts/services-networking/service/)
for more information.