This reverts commit 15d5325f15.
Signed-off-by: Andrew Burden <aburden@redhat.com>
parent 15d5325f15
commit 640ea40939
_redirects | 73 -
--- a/_redirects
+++ /dev/null
@@ -1,73 +0,0 @@
-/operations/customize_components /cluster_admin/customize_components
-/operations/installation /cluster_admin/installation
-/operations/updating_and_deletion /cluster_admin/updating_and_deletion
-/operations/basic_use /user_workloads/basic_use
-/operations/customize_components /cluster_admin/customize_components
-/operations/deploy_common_instancetypes /user_workloads/deploy_common_instancetypes
-/operations/api_validation /cluster_admin/api_validation
-/operations/debug /debug_virt_stack/debug
-/operations/virtctl_client_tool /user_workloads/virtctl_client_tool
-/operations/live_migration /compute/live_migration
-/operations/hotplug_interfaces /network/hotplug_interfaces
-/operations/hotplug_volumes /storage/hotplug_volumes
-/operations/client_passthrough /compute/client_passthrough
-/operations/snapshot_restore_api /storage/snapshot_restore_api
-/operations/scheduler /cluster_admin/scheduler
-/operations/hugepages /compute/hugepages
-/operations/component_monitoring /user_workloads/component_monitoring
-/operations/authorization /cluster_admin/authorization
-/operations/annotations_and_labels /cluster_admin/annotations_and_labels
-/operations/node_assignment /compute/node_assignment
-/operations/node_maintenance /cluster_admin/node_maintenance
-/operations/node_overcommit /compute/node_overcommit
-/operations/unresponsive_nodes /cluster_admin/unresponsive_nodes
-/operations/containerized_data_importer /storage/containerized_data_importer
-/operations/activating_feature_gates /cluster_admin/activating_feature_gates
-/operations/export_api /storage/export_api
-/operations/clone_api /storage/clone_api
-/operations/memory_dump /compute/memory_dump
-/operations/mediated_devices_configuration /compute/mediated_devices_configuration
-/operations/migration_policies /cluster_admin/migration_policies
-/operations/ksm /cluster_admin/ksm
-/operations/gitops /cluster_admin/gitops
-/operations/operations_on_Arm64 /cluster_admin/operations_on_Arm64
-/operations/feature_gate_status_on_Arm64 /cluster_admin/feature_gate_status_on_Arm64
-/operations/cpu_hotplug /compute/cpu_hotplug
-/operations/memory_hotplug /compute/memory_hotplug
-/operations/vm_rollout_strategies /user_workloads/vm_rollout_strategies
-/operations/hook-sidecar /user_workloads/hook-sidecar
-/virtual_machines/virtual_machine_instances /user_workloads/virtual_machine_instances
-/virtual_machines/creating_vms /user_workloads/creating_vms
-/virtual_machines/lifecycle /user_workloads/lifecycle
-/virtual_machines/run_strategies /compute/run_strategies
-/virtual_machines/instancetypes /user_workloads/instancetypes
-/virtual_machines/presets /user_workloads/presets
-/virtual_machines/virtual_hardware /compute/virtual_hardware
-/virtual_machines/dedicated_cpu_resources /compute/dedicated_cpu_resources
-/virtual_machines/numa /compute/numa
-/virtual_machines/disks_and_volumes /storage/disks_and_volumes
-/virtual_machines/interfaces_and_networks /network/interfaces_and_networks
-/virtual_machines/network_binding_plugins /network/network_binding_plugins
-/virtual_machines/istio_service_mesh /network/istio_service_mesh
-/virtual_machines/networkpolicy /network/networkpolicy
-/virtual_machines/host-devices /compute/host-devices
-/virtual_machines/windows_virtio_drivers /user_workloads/windows_virtio_drivers
-/virtual_machines/guest_operating_system_information /user_workloads/guest_operating_system_information
-/virtual_machines/guest_agent_information /user_workloads/guest_agent_information
-/virtual_machines/liveness_and_readiness_probes /user_workloads/liveness_and_readiness_probes
-/virtual_machines/accessing_virtual_machines /user_workloads/accessing_virtual_machines
-/virtual_machines/startup_scripts /user_workloads/startup_scripts
-/virtual_machines/service_objects /network/service_objects
-/virtual_machines/templates /user_workloads/templates
-/virtual_machines/tekton_tasks /cluster_admin/tekton_tasks
-/virtual_machines/replicaset /user_workloads/replicaset
-/virtual_machines/pool /user_workloads/pool
-/virtual_machines/dns /network/dns
-/virtual_machines/boot_from_external_source /user_workloads/boot_from_external_source
-/virtual_machines/confidential_computing /cluster_admin/confidential_computing
-/virtual_machines/vsock /compute/vsock
-/virtual_machines/virtual_machines_on_Arm64 /cluster_admin/virtual_machines_on_Arm64
-/virtual_machines/device_status_on_Arm64 /cluster_admin/device_status_on_Arm64
-/virtual_machines/persistent_tpm_and_uefi_state /compute/persistent_tpm_and_uefi_state
-/virtual_machines/resources_requests_and_limits /compute/resources_requests_and_limits
-/virtual_machines/guestfs /storage/guestfs
@@ -2,11 +2,8 @@ nav:
   - Welcome: index.md
   - architecture.md
   - Quickstarts: quickstarts.md
-  - Cluster Administration: cluster_admin
-  - User Workloads: user_workloads
-  - Compute: compute
-  - Network: network
-  - Storage: storage
+  - operations
+  - Virtual Machines: virtual_machines
   - Release Notes: release_notes.md
   - contributing.md
-  - Virtualization Debugging: debug_virt_stack
@@ -171,7 +171,7 @@ avoid confusing and contradictory states, these fields are mutually
 exclusive.
 
 An extended explanation of `spec.runStrategy` vs `spec.running` can be
-found in [Run Strategies](./compute/run_strategies.md)
+found in [Run Strategies](./virtual_machines/run_strategies.md)
 
 ### Starting and stopping
 
@@ -193,7 +193,7 @@ After creating a VirtualMachine it can be switched on or off like this:
     kubectl patch virtualmachine vm --type merge -p \
         '{"spec":{"running":false}}'
 
-Find more details about [a VM's life-cycle in the relevant section](./user_workloads/lifecycle.md)
+Find more details about [a VM's life-cycle in the relevant section](./virtual_machines/lifecycle.md)
 
 ### Controller status
 
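As an aside to the hunk above, the merge patch that toggles `spec.running` can be sanity-checked locally before sending it to a cluster. This is a hedged sketch: the VM name `vm` is a placeholder, and the `kubectl` calls are shown commented out because they need a live cluster.

```shell
#!/bin/sh
# The two merge patches used to stop and start a VirtualMachine.
stop_patch='{"spec":{"running":false}}'
start_patch='{"spec":{"running":true}}'

# Validate that each payload is well-formed JSON before sending it.
echo "$stop_patch"  | python3 -m json.tool > /dev/null && echo "stop patch ok"
echo "$start_patch" | python3 -m json.tool > /dev/null && echo "start patch ok"

# Against a live cluster (illustrative, assumes a VM named "vm"):
# kubectl patch virtualmachine vm --type merge -p "$stop_patch"
# kubectl patch virtualmachine vm --type merge -p "$start_patch"
```

Validating the payload first avoids a round-trip to the API server with a malformed patch.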
@@ -282,7 +282,7 @@ after the VirtualMachine was created, but before it started:
 
 All service exposure options that apply to a VirtualMachineInstance apply to a VirtualMachine.
 
-See [Service Objects](./network/service_objects.md) for more details.
+See [Service Objects](./virtual_machines/service_objects.md) for more details.
 
 ## When to use a VirtualMachine
 
@@ -1,21 +0,0 @@
-nav:
-  - installation.md
-  - updating_and_deletion.md
-  - activating_feature_gates.md
-  - annotations_and_labels.md
-  - api_validation.md
-  - authorization.md
-  - confidential_computing.md
-  - customize_components.md
-  - gitops.md
-  - ksm.md
-  - migration_policies.md
-  - node_maintenance.md
-  - scheduler.md
-  - tekton_tasks.md
-  - unresponsive_nodes.md
-  - ARM cluster:
-    - device_status_on_Arm64.md
-    - feature_gate_status_on_Arm64.md
-    - operations_on_Arm64.md
-    - virtual_machines_on_Arm64.md
@@ -1,18 +0,0 @@
-nav:
-  - client_passthrough.md
-  - cpu_hotplug.md
-  - dedicated_cpu_resources.md
-  - host-devices.md
-  - hugepages.md
-  - live_migration.md
-  - mediated_devices_configuration.md
-  - memory_hotplug.md
-  - node_assignment.md
-  - node_overcommit.md
-  - numa.md
-  - persistent_tpm_and_uefi_state.md
-  - resources_requests_and_limits.md
-  - run_strategies.md
-  - virtual_hardware.md
-  - memory_dump.md
-  - vsock.md
@@ -1,154 +0,0 @@
-# Windows virtio drivers
-
-The purpose of this document is to explain how to install virtio drivers for
-Microsoft Windows running in a fully virtualized guest.
-
-## Do I need virtio drivers?
-
-Yes. Without the virtio drivers, you cannot use paravirtualized hardware properly. It would either not work, or would suffer a severe performance penalty.
-
-For more information about VirtIO and paravirtualization, see [VirtIO and paravirtualization](https://wiki.libvirt.org/page/Virtio).
-
-For more details on configuring your VirtIO driver please refer to [Installing VirtIO driver on a new Windows virtual machine](https://docs.openshift.com/container-platform/4.10/virt/virtual_machines/virt-installing-virtio-drivers-on-new-windows-vm.html) and [Installing VirtIO driver on an existing Windows virtual machine](https://docs.openshift.com/container-platform/4.10/virt/virtual_machines/virt-installing-virtio-drivers-on-existing-windows-vm.html).
-
-## Which drivers do I need to install?
-
-There are usually up to 8 possible devices that are required to run
-Windows smoothly in a virtualized environment. KubeVirt currently
-supports only:
-
-- **viostor**, the block driver, applies to SCSI Controller in the
-  Other devices group.
-
-- **viorng**, the entropy source driver, applies to PCI Device in the
-  Other devices group.
-
-- **NetKVM**, the network driver, applies to Ethernet Controller in
-  the Other devices group. Available only if a virtio NIC is
-  configured.
-
-Other virtio drivers that exist and might be supported in the future:
-
-- Balloon, the balloon driver, applies to PCI Device in the Other
-  devices group.
-
-- vioserial, the paravirtual serial driver, applies to PCI Simple
-  Communications Controller in the Other devices group.
-
-- vioscsi, the SCSI block driver, applies to SCSI Controller in the
-  Other devices group.
-
-- qemupciserial, the emulated PCI serial driver, applies to PCI Serial
-  Port in the Other devices group.
-
-- qxl, the paravirtual video driver, applies to Microsoft Basic
-  Display Adapter in the Display adapters group.
-
-- pvpanic, the paravirtual panic driver, applies to Unknown device in
-  the Other devices group.
-
-> **Note**
->
-> Some drivers are required in the installation phase. When you are
-> installing Windows onto virtio block storage you have to provide
-> an appropriate virtio driver. Namely, choose the viostor driver for your
-> version of Microsoft Windows, e.g. do not install the XP driver when you
-> run Windows 10.
->
-> Other drivers can be installed after the successful Windows
-> installation. Again, please install only drivers matching your Windows
-> version.
-
-### How to install during Windows install?
-
-To install drivers before Windows starts its install, make sure you
-have the virtio-win package attached to your VirtualMachine as a SATA CD-ROM.
-In the Windows installation, choose advanced install and load driver.
-Then please navigate to the loaded VirtIO CD-ROM and install one of viostor
-or vioscsi, depending on whichever you have set up.
-
-Step by step screenshots:
-
-[six step-by-step installation screenshots omitted]
-
-### How to install after Windows install?
-
-After the Windows install, please go to [Device
-Manager](https://support.microsoft.com/en-us/help/4026149/windows-open-device-manager).
-There you should see undetected devices in the "available devices" section.
-You can install the virtio drivers one by one, going through this list.
-
-[four Device Manager screenshots omitted]
-
-For more details on how to choose a proper driver and how to install the
-driver, please refer to [Windows Guest Virtual Machines on Red Hat
-Enterprise Linux 7](https://access.redhat.com/articles/2470791).
-
-## How to obtain virtio drivers?
-
-The virtio Windows drivers are distributed in the form of a
-[containerDisk](../storage/disks_and_volumes.md#containerdisk),
-which can simply be mounted to the VirtualMachine. The container image
-containing the disk is located at
-<https://quay.io/repository/kubevirt/virtio-container-disk?tab=tags> and the image
-can be pulled like any other container image:
-
-    docker pull quay.io/kubevirt/virtio-container-disk
-
-However, pulling the image manually is not required; Kubernetes will download
-it if not present when deploying the VirtualMachine.
-
-## Attaching to VirtualMachine
-
-KubeVirt distributes virtio drivers for Microsoft Windows in the form of a
-container disk. The package contains the virtio drivers and the QEMU guest
-agent. The disk was tested on Microsoft Windows Server 2012. Supported
-Windows versions are XP and up.
-
-The package is intended to be used as a CD-ROM attached to the virtual
-machine with Microsoft Windows. It can be used as a SATA CD-ROM during
-the install phase or to provide drivers in an existing Windows installation.
-
-Attaching the virtio-win package can be done simply by adding a
-ContainerDisk to your VirtualMachine.
-
-    spec:
-      domain:
-        devices:
-          disks:
-            - name: virtiocontainerdisk
-              # Any other disk you want to use must go before virtioContainerDisk.
-              # KubeVirt boots from disks in the order they are defined.
-              # Therefore virtioContainerDisk must come after the bootable disk.
-              # Another option is to choose the boot order explicitly:
-              #  - https://kubevirt.io/api-reference/v0.13.2/definitions.html#_v1_disk
-              # NOTE: You either specify bootOrder explicitly or sort the items in
-              # disks. You cannot do both at the same time.
-              # bootOrder: 2
-              cdrom:
-                bus: sata
-      volumes:
-        - containerDisk:
-            image: quay.io/kubevirt/virtio-container-disk
-          name: virtiocontainerdisk
-
-Once you are done installing the virtio drivers, you can remove the virtio
-container disk by simply removing the disk from the YAML specification and
-restarting the VirtualMachine.
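The removed document's last step (drop the CD-ROM from the spec and restart) can also be done with a JSON patch instead of editing the manifest. This is a hypothetical sketch: the VM name `windows-vm` and the index `1` for `virtiocontainerdisk` in the `disks` and `volumes` lists are assumptions to adjust for your manifest, and the cluster commands are commented out.

```shell
#!/bin/sh
# JSON patch that removes the virtio-win CD-ROM disk and its volume.
# Index 1 assumes virtiocontainerdisk is the second entry in both lists.
patch='[
  {"op": "remove", "path": "/spec/template/spec/domain/devices/disks/1"},
  {"op": "remove", "path": "/spec/template/spec/volumes/1"}
]'

# Check the patch is well-formed JSON before using it.
echo "$patch" | python3 -m json.tool > /dev/null && echo "patch ok"

# Against a live cluster (illustrative):
# kubectl patch vm windows-vm --type json -p "$patch"
# virtctl restart windows-vm
```

A restart is still required either way, since the change only applies to the next boot of the guest.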
@@ -1,5 +1,4 @@
 nav:
-  - debug.md
   - logging.md
   - privileged-node-debugging.md
   - virsh-commands.md
@@ -4,7 +4,7 @@ This guide is for cases where QEMU encounters very early failures and it is hard to
 
 ## Image creation and PVC population
 
-This scenario is a slight variation of the [guide about starting strace](../debug_virt_stack/launch-qemu-strace.md), hence some of the details on the image build and the PVC population are simply skipped and explained in the other section.
+This scenario is a slight variation of the [guide about starting strace](launch-qemu-strace.md), hence some of the details on the image build and the PVC population are simply skipped and explained in the other section.
 
 In this example, QEMU will be launched with [`gdbserver`](https://man7.org/linux/man-pages/man1/gdbserver.1.html) and later we will connect to it using a local `gdb` client.
 
@@ -35,7 +35,7 @@ RUN chown 107:107 ${DIR}/wrap_qemu_gdb.sh
 RUN chown 107:107 ${DIR}/logs
 ```
 
-Then, we can create and populate the `debug-tools` PVC as we did in the [strace example](../debug_virt_stack/launch-qemu-strace.md):
+Then, we can create and populate the `debug-tools` PVC as we did in the [strace example](launch-qemu-strace.md):
 ```console
 $ k apply -f debug-tools-pvc.yaml
 persistentvolumeclaim/debug-tools created
@@ -5,18 +5,7 @@ hide:
 
 # Welcome
 
-The KubeVirt User Guide is divided into the following sections:
-
-* Architecture: Technical and conceptual overview of KubeVirt components
-* Quickstarts: A list of resources to help you learn KubeVirt basics
-* Cluster Administration: Cluster-level administration concepts and tasks
-* User Workloads: Creating, customizing, using, and monitoring virtual machines
-* Compute: Resource allocation and optimization for the virtualization layer
-* Network: Concepts and tasks for the networking and service layers
-* Storage: Concepts and tasks for the storage layer, including importing and exporting.
-* Release Notes: The release notes for all KubeVirt releases
-* Contributing: How you can contribute to this guide or the KubeVirt project
-* Virtualization Debugging: How to debug your KubeVirt cluster and virtual resources
+This page is provided as the entrypoint to the different topics of this user-guide.
 
 ## Try it out
 
@@ -1,8 +0,0 @@
-nav:
-  - dns.md
-  - hotplug_interfaces.md
-  - interfaces_and_networks.md
-  - istio_service_mesh.md
-  - network_binding_plugins.md
-  - networkpolicy.md
-  - service_objects.md
@@ -0,0 +1,38 @@
+nav:
+  - installation.md
+  - updating_and_deletion.md
+  - basic_use.md
+  - customize_components.md
+  - deploy_common_instancetypes.md
+  - api_validation.md
+  - debug.md
+  - virtctl_client_tool.md
+  - live_migration.md
+  - hotplug_interfaces.md
+  - hotplug_volumes.md
+  - client_passthrough.md
+  - snapshot_restore_api.md
+  - scheduler.md
+  - hugepages.md
+  - component_monitoring.md
+  - authorization.md
+  - annotations_and_labels.md
+  - node_assignment.md
+  - node_maintenance.md
+  - node_overcommit.md
+  - unresponsive_nodes.md
+  - containerized_data_importer.md
+  - activating_feature_gates.md
+  - export_api.md
+  - clone_api.md
+  - memory_dump.md
+  - mediated_devices_configuration.md
+  - migration_policies.md
+  - ksm.md
+  - gitops.md
+  - operations_on_Arm64.md
+  - feature_gate_status_on_Arm64.md
+  - cpu_hotplug.md
+  - memory_hotplug.md
+  - vm_rollout_strategies.md
+  - hook-sidecar.md
@@ -11,18 +11,18 @@ yet and that APIs may change in the future.
 ### Snapshot / Restore
 
 Under the hood, the clone API relies upon the Snapshot & Restore APIs. Therefore, in order to be able to use the clone API,
-please see the [Snapshot & Restore prerequisites](../storage/snapshot_restore_api.md#prerequesites).
+please see the [Snapshot & Restore prerequisites](./snapshot_restore_api.md#prerequesites).
 
 ### Snapshot Feature Gate
 
 Currently, the clone API is guarded by the Snapshot feature gate. The
-[feature gates](../cluster_admin/activating_feature_gates.md#how-to-activate-a-feature-gate)
+[feature gates](./activating_feature_gates.md#how-to-activate-a-feature-gate)
 field in the KubeVirt CR must be expanded by adding `Snapshot` to it.
 
 ## The clone object
 
 Firstly, as written above, the clone API relies upon the Snapshot & Restore APIs under the hood. Therefore, it might be helpful
-to look at the [Snapshot & Restore](../storage/snapshot_restore_api.md) user-guide page for more info.
+to look at the [Snapshot & Restore](./snapshot_restore_api.md) user-guide page for more info.
 
 ### VirtualMachineClone object overview
 
@@ -50,7 +50,7 @@ spec:
   vmRolloutStrategy: "LiveUpdate"
 ```
 
-More information can be found on the [VM Rollout Strategies](../user_workloads/vm_rollout_strategies.md) page.
+More information can be found on the [VM Rollout Strategies](./vm_rollout_strategies.md) page.
 
 ### [OPTIONAL] Set maximum sockets or hotplug ratio
 You can explicitly set the maximum number of sockets in three ways:
 
@@ -1,6 +1,6 @@
 # Deploy common-instancetypes
 
-The [`kubevirt/common-instancetypes`](https://github.com/kubevirt/common-instancetypes) provide a set of [instancetypes and preferences](../user_workloads/instancetypes.md) to help create KubeVirt [`VirtualMachines`](http://kubevirt.io/api-reference/main/definitions.html#_v1_virtualmachine).
+The [`kubevirt/common-instancetypes`](https://github.com/kubevirt/common-instancetypes) provide a set of [instancetypes and preferences](../virtual_machines/instancetypes.md) to help create KubeVirt [`VirtualMachines`](http://kubevirt.io/api-reference/main/definitions.html#_v1_virtualmachine).
 
 Beginning with the 1.1 release of KubeVirt, cluster-wide resources can be deployed directly through KubeVirt, without another operator.
 This allows deployment of a set of default instancetypes and preferences alongside KubeVirt.
 
@@ -9,7 +9,7 @@ This allows deployment of a set of default instancetypes and preferences alongside KubeVirt.
 
 To enable the deployment of cluster-wide common-instancetypes through the KubeVirt `virt-operator`, the `CommonInstancetypesDeploymentGate` feature gate needs to be enabled.
 
-See [Activating feature gates](../cluster_admin/activating_feature_gates.md) on how to enable it.
+See [Activating feature gates](activating_feature_gates.md) on how to enable it.
 
 ## Deploy common-instancetypes manually
 
@@ -6,7 +6,7 @@ In order not to overload the Kubernetes API server, the data is transferred through
 ### Export Feature Gate
 
 VMExport support must be enabled in the feature gates to be available. The
-[feature gates](../cluster_admin/activating_feature_gates.md#how-to-activate-a-feature-gate)
+[feature gates](./activating_feature_gates.md#how-to-activate-a-feature-gate)
 field in the KubeVirt CR must be expanded by adding `VMExport` to it.
 
 ### Export token
 
@@ -17,7 +17,7 @@ sidecar hooks:
 ## Enabling `Sidecar` feature gate
 
 The `Sidecar` feature gate can be enabled by following the steps mentioned in
-[Activating feature gates](../cluster_admin/activating_feature_gates.md).
+[Activating feature gates](../activating_feature_gates).
 
 In case of a development cluster created using kubevirtci, follow the steps mentioned in the
 [developer doc](https://github.com/kubevirt/kubevirt/blob/main/docs/getting-started.md#compile-and-run-it) to enable
@@ -149,7 +149,7 @@ The `name` field indicates the name of the ConfigMap on the cluster which contains
 the path where you want the script to be mounted. It could be either of `/usr/bin/onDefineDomain` or
 `/usr/bin/preCloudInitIso` depending upon the hook you want to execute.
 An optional value can be specified with the `"image"` key if a custom image is needed; if omitted, the default sidecar-shim image built together with the other KubeVirt images will be used.
-The default sidecar-shim image, if not overridden with a custom value, will also be updated like other images, as described in [Updating KubeVirt Workloads](../cluster_admin/updating_and_deletion/#updating-kubevirt-workloads).
+The default sidecar-shim image, if not overridden with a custom value, will also be updated like other images, as described in [Updating KubeVirt Workloads](./updating_and_deletion/#updating-kubevirt-workloads).
 
 ### Verify everything works
 
@@ -196,7 +196,7 @@ spec:
   vmiName: vmi-fedora
 EOF
 ```
-Please refer to the [Live Migration](../compute/live_migration.md) documentation for more information.
+Please refer to the [Live Migration](./live_migration.md) documentation for more information.
 
 Once the migration is completed the VM will have the new interface attached.
 
@@ -244,7 +244,7 @@ spec:
   vmiName: vmi-fedora
 EOF
 ```
-Please refer to the [Live Migration](../compute/live_migration.md) documentation for more information.
+Please refer to the [Live Migration](./live_migration.md) documentation for more information.
 
 Once the VM is migrated, the interface will not exist in the migration target pod.
 
@@ -280,7 +280,7 @@ template:
       networkName: sriov-net-1
 ...
 ```
-Please refer to the [Interfaces and Networks](../network/interfaces_and_networks.md#sriov)
+Please refer to the [Interfaces and Networks](https://kubevirt.io/user-guide/virtual_machines/interfaces_and_networks/#sriov)
 documentation for more information about SR-IOV networking.
 
 At this point the interface and network will be added to the corresponding VMI object as well, but won't be attached to the guest.
 
@@ -296,7 +296,7 @@ spec:
   vmiName: vmi-fedora
 EOF
 ```
-Please refer to the [Live Migration](../compute/live_migration.md) documentation for more information.
+Please refer to the [Live Migration](./live_migration.md) documentation for more information.
 
 Once the VM is migrated, the interface will not exist in the migration target pod.
 Due to a limitation of the Kubernetes device plugin API to allocate resources dynamically,
 
@@ -5,7 +5,7 @@ KubeVirt now supports hotplugging volumes into a running Virtual Machine Instance
 ## Enabling hotplug volume support
 
 Hotplug volume support must be enabled in the feature gates to be supported. The
-[feature gates](../cluster_admin/activating_feature_gates.md#how-to-activate-a-feature-gate)
+[feature gates](./activating_feature_gates.md#how-to-activate-a-feature-gate)
 field in the KubeVirt CR must be expanded by adding `HotplugVolumes` to it.
 
 ## Virtctl support
 
@@ -77,7 +77,7 @@ The format and length of serials are specified according to the libvirt documentation
 ```
 
 #### Supported Disk types
-KubeVirt supports hotplugging disk devices of type [disk](../storage/disks_and_volumes.md#disk) and [lun](../storage/disks_and_volumes.md#lun). As with other volumes, using type `disk` will expose the hotplugged volume as a regular disk, while using `lun` allows additional functionalities like the execution of iSCSI commands.
+KubeVirt supports hotplugging disk devices of type [disk](https://kubevirt.io/user-guide/virtual_machines/disks_and_volumes/#disk) and [lun](https://kubevirt.io/user-guide/virtual_machines/disks_and_volumes/#lun). As with other volumes, using type `disk` will expose the hotplugged volume as a regular disk, while using `lun` allows additional functionalities like the execution of iSCSI commands.
 
 You can specify the desired type by using the `--disk-type` parameter, for example:
 
@@ -81,7 +81,7 @@ additional user interaction.
 
 - The default AppArmor profile used by the container runtimes usually
   denies the `mount` call for the workloads. That may prevent
-  running VMs with [VirtIO-FS](../storage/disks_and_volumes.md#virtio-fs).
+  running VMs with [VirtIO-FS](../virtual_machines/disks_and_volumes.md#virtio-fs).
   This is a [known issue](https://github.com/kubevirt/kubevirt/issues/4290).
   The current workaround is to run such a VM as `unconfined` by adding the
   following annotation to the VM or VMI object:
 
@@ -8,7 +8,7 @@ continues to run and remain accessible.
 
 Live migration is enabled by default in recent versions of KubeVirt. In versions
 prior to v0.56, it must be enabled in the feature gates. The
-[feature gates](../cluster_admin/activating_feature_gates.md#how-to-activate-a-feature-gate)
+[feature gates](./activating_feature_gates.md#how-to-activate-a-feature-gate)
 field in the KubeVirt CR must be expanded by adding `LiveMigration` to it.
 
 ## Limitations
 
@@ -21,7 +21,7 @@ field in the KubeVirt CR must be expanded by adding `LiveMigration` to it.
 (</#/creation/interfaces-and-networks>)
 
 - Live migration requires ports `49152, 49153` to be available in the virt-launcher pod.
-  If these ports are explicitly specified in the [masquerade interface](../network/interfaces_and_networks.md#masquerade), live migration will not function.
+  If these ports are explicitly specified in the [masquerade interface](../virtual_machines/interfaces_and_networks.md#masquerade), live migration will not function.
 
 ## Initiate live migration
 
@@ -168,7 +168,7 @@ spec:
 ```
 
 Bear in mind that most of these configurations can be overridden and fine-tuned for
-a specified group of VMs. For more information, please see [Migration Policies](../cluster_admin/migration_policies.md).
+a specified group of VMs. For more information, please see [Migration Policies](./migration_policies.md).
 
 ## Understanding different migration strategies
 Live migration is a complex process. During a migration, the source VM needs to transfer its
 
@@ -142,4 +142,4 @@ Any change to the node labels that match the `nodeMediatedDeviceTypes` nodeSelector
 Consequently, mediated devices will be reconfigured or entirely removed based on the updated configuration.
 
 ## Assigning vGPU/MDEV to a Virtual Machine
-See [Host Devices Assignment](../compute/host-devices.md) to learn how to consume the newly created mediated devices/vGPUs.
+See [Host Devices Assignment](../virtual_machines/host-devices.md) to learn how to consume the newly created mediated devices/vGPUs.
@@ -10,7 +10,7 @@ The memory dump can be used to diagnose, identify and resolve issues in the VM.
 ### Hot plug Feature Gate
 
 The memory dump process mounts a PVC to the virt-launcher in order to get the output in that PVC, hence the hot plug volumes feature gate must be enabled. The
-[feature gates](../cluster_admin/activating_feature_gates.md#how-to-activate-a-feature-gate)
+[feature gates](./activating_feature_gates.md#how-to-activate-a-feature-gate)
 field in the KubeVirt CR must be expanded by adding `HotplugVolumes` to it.
 
 ## Virtctl support
 
@@ -50,7 +50,7 @@ spec:
 
 **NOTE:** If memory hotplug is enabled/disabled on an already running VM, a reboot is necessary for the changes to take effect.
 
-More information can be found on the [VM Rollout Strategies](../user_workloads/vm_rollout_strategies.md) page.
+More information can be found on the [VM Rollout Strategies](./vm_rollout_strategies.md) page.
 
 ### [OPTIONAL] Set a cluster-wide maximum amount of memory
 
@@ -10,7 +10,7 @@ yet and that APIs may change in the future.
 
 ## Overview
 
-KubeVirt supports [Live Migrations](../compute/live_migration.md) of Virtual Machine workloads.
+KubeVirt supports [Live Migrations](./live_migration.md) of Virtual Machine workloads.
 Before migration policies were introduced, migration settings were configurable only at the cluster-wide
 scope by editing [KubevirtCR's spec](https://kubevirt.io/api-reference/master/definitions.html#_v1_kubevirtspec)
 or more specifically [MigrationConfiguration](https://kubevirt.io/api-reference/master/definitions.html#_v1_migrationconfiguration)
 
@@ -64,7 +64,7 @@ target node.
 ## Evacuate VMIs via Live Migration from a Node
 
 If the `LiveMigration`
-[feature gate](../cluster_admin/activating_feature_gates.md#how-to-activate-a-feature-gate)
+[feature gate](./activating_feature_gates.md#how-to-activate-a-feature-gate)
 is enabled, it is possible to
 specify an `evictionStrategy` on VMIs which will react with live-migrations on
 specific taints on nodes. The following snippet on a VMI or the VMI templates in
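The snippet referenced by this hunk is outside the diff context. A minimal sketch of such a VMI spec (the name is hypothetical; `evictionStrategy` is the field in question):

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstance
metadata:
  name: testvmi  # hypothetical name
spec:
  # Evacuate this VMI via live migration when its node is drained/tainted
  evictionStrategy: LiveMigrate
  domain:
    devices: {}
```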
@@ -12,7 +12,7 @@ First the safest option to reduce the memory footprint, is removing the
 graphical device from the VMI by setting
 `spec.domain.devices.autoattachGraphicsDevice` to `false`. See the video
 and graphics device
-[documentation](../compute/virtual_hardware.md#video-and-graphics-device)
+[documentation](../virtual_machines/virtual_hardware.md#video-and-graphics-device)
 for further details and examples.
 
 This will save a constant amount of `16MB` per VirtualMachineInstance
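The setting described in this hunk, as a config fragment sketch:

```yaml
spec:
  domain:
    devices:
      # Drop the graphical (VNC) device to save ~16MB per VMI
      autoattachGraphicsDevice: false
```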
@@ -23,7 +23,7 @@ Even if you have no `VolumeSnapshotClasses` in your cluster, `VirtualMachineSnap
 ### Snapshot Feature Gate
 
 Snapshot/Restore support must be enabled in the feature gates to be supported. The
-[feature gates](../cluster_admin/activating_feature_gates.md#how-to-activate-a-feature-gate)
+[feature gates](./activating_feature_gates.md#how-to-activate-a-feature-gate)
 field in the KubeVirt CR must be expanded by adding the `Snapshot` to it.
@@ -24,7 +24,7 @@ spec:
 ## LiveUpdate
 
 The `LiveUpdate` VM rollout strategy tries to propagate VM object changes to running VMIs as soon as possible.
-For example, changing the number of CPU sockets will trigger a [CPU hotplug](../compute/cpu_hotplug.md).
+For example, changing the number of CPU sockets will trigger a [CPU hotplug](./cpu_hotplug.md).
 
 Enable the `LiveUpdate` VM rollout strategy in the KubeVirt CR:
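The enabling snippet referenced by the hunk is outside the diff context; a sketch of the relevant KubeVirt CR fragment (field name per the KubeVirt API):

```yaml
apiVersion: kubevirt.io/v1
kind: KubeVirt
metadata:
  name: kubevirt
  namespace: kubevirt
spec:
  configuration:
    # Propagate VM object changes to running VMIs where possible
    vmRolloutStrategy: LiveUpdate
```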
@@ -1,8 +0,0 @@
-nav:
-  - clone_api.md
-  - containerized_data_importer.md
-  - disks_and_volumes.md
-  - export_api.md
-  - guestfs.md
-  - hotplug_volumes.md
-  - snapshot_restore_api.md
@@ -1,24 +0,0 @@
-nav:
-  - lifecycle.md
-  - basic_use.md
-  - creating_vms.md
-  - virtctl_client_tool.md
-  - accessing_virtual_machines.md
-  - boot_from_external_source.md
-  - startup_scripts.md
-  - windows_virtio_drivers.md
-  - Monitoring:
-      - component_monitoring.md
-      - guest_agent_information.md
-      - guest_operating_system_information.md
-      - liveness_and_readiness_probes.md
-  - Workloads:
-      - instancetypes.md
-      - deploy_common_instancetypes.md
-      - hook-sidecar.md
-      - presets.md
-      - templates.md
-      - pool.md
-      - replicaset.md
-      - virtual_machine_instances.md
-      - vm_rollout_strategies.md
@@ -0,0 +1,36 @@
+nav:
+  - virtual_machine_instances.md
+  - creating_vms.md
+  - lifecycle.md
+  - run_strategies.md
+  - instancetypes.md
+  - presets.md
+  - virtual_hardware.md
+  - dedicated_cpu_resources.md
+  - numa.md
+  - disks_and_volumes.md
+  - interfaces_and_networks.md
+  - network_binding_plugins.md
+  - istio_service_mesh.md
+  - networkpolicy.md
+  - host-devices.md
+  - windows_virtio_drivers.md
+  - guest_operating_system_information.md
+  - guest_agent_information.md
+  - liveness_and_readiness_probes.md
+  - accessing_virtual_machines.md
+  - startup_scripts.md
+  - service_objects.md
+  - templates.md
+  - tekton_tasks.md
+  - replicaset.md
+  - pool.md
+  - dns.md
+  - boot_from_external_source.md
+  - confidential_computing.md
+  - vsock.md
+  - virtual_machines_on_Arm64.md
+  - device_status_on_Arm64.md
+  - persistent_tpm_and_uefi_state.md
+  - resources_requests_and_limits.md
+  - guestfs.md
@@ -9,7 +9,7 @@ exposes. Usually there are two types of consoles:
 - Graphical Console (VNC)
 
 > Note: You need to have `virtctl`
-> [installed](../user_workloads/virtctl_client_tool.md) to gain
+> [installed](../operations/virtctl_client_tool.md) to gain
 > access to the VirtualMachineInstance.
 
 ### Accessing the Serial Console
@@ -68,9 +68,9 @@ KubeVirt provides multiple ways to inject SSH public keys into a virtual
 machine.
 
 In general, these methods fall into two categories:
-- [Static key injection](../user_workloads/accessing_virtual_machines.md#static-ssh-key-injection-via-cloud-init),
+- [Static key injection](./accessing_virtual_machines.md#static-ssh-key-injection-via-cloud-init),
   which places keys on the virtual machine the first time it is booted.
-- [Dynamic key injection](../user_workloads/accessing_virtual_machines.md#dynamic-ssh-key-injection-via-qemu-user-agent),
+- [Dynamic key injection](./accessing_virtual_machines.md#dynamic-ssh-key-injection-via-qemu-user-agent),
   which allows keys to be dynamically updated both at boot and during runtime.
 
 Once a SSH public key is injected into the virtual machine, it can be
@@ -82,7 +82,7 @@ Users creating virtual machines can provide startup scripts to their virtual
 machines, allowing multiple customization operations.
 
 One option for injecting public SSH keys into a VM is via
-[cloud-init startup script](../user_workloads/startup_scripts.md#cloud-init).
+[cloud-init startup script](./startup_scripts.md#cloud-init).
 However, there are more flexible options available.
 
 The virtual machine's access credential API allows statically injecting SSH
@@ -12,8 +12,8 @@ KubeVirt supports running confidential VMs on AMD EPYC hardware with SEV feature
 
 In order to run an SEV guest the following condition must be met:
 
-- `WorkloadEncryptionSEV` [feature gate](../cluster_admin/activating_feature_gates.md#how-to-activate-a-feature-gate) must be enabled.
-- The guest must support [UEFI boot](../compute/virtual_hardware.md#biosuefi)
+- `WorkloadEncryptionSEV` [feature gate](../operations/activating_feature_gates.md#how-to-activate-a-feature-gate) must be enabled.
+- The guest must support [UEFI boot](virtual_hardware.md#biosuefi)
 - SecureBoot must be disabled for the guest VM
 
 ### Running an SEV guest
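The conditions in the hunk above map onto the VMI spec roughly as follows, a sketch assuming the `launchSecurity` API shape from the KubeVirt docs:

```yaml
spec:
  domain:
    launchSecurity:
      sev: {}  # request an SEV-encrypted guest
    firmware:
      bootloader:
        efi:
          secureBoot: false  # SEV requires SecureBoot disabled
```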
@@ -2,7 +2,7 @@
 
 The virtctl sub command `create vm` allows easy creation of VirtualMachine
 manifests from the command line. It leverages
-[instance types and preferences](../user_workloads/instancetypes.md) and inference by
+[instance types and preferences](./instancetypes.md) and inference by
 default (see
 [Specifying or inferring instance types and preferences](#specifying-or-inferring-instance-types-and-preferences))
 and provides several flags to control details of the created virtual machine.
@@ -126,7 +126,7 @@ If a prefix was not supplied the cluster scoped resources will be used by
 default.
 
 To explicitly
-infer [instance types and/or preferences](../user_workloads/instancetypes.md#inferFromVolume)
+infer [instance types and/or preferences](./instancetypes.md#inferFromVolume)
 from the volume used to boot the virtual machine add the following flags:
 
 ```shell
@@ -158,7 +158,7 @@ running on the node. This automatic identification should be viewed as a
 temporary workaround until Kubernetes will provide the required
 functionality. Therefore, this feature should be manually enabled by
 activating the `CPUManager`
-[feature gate](../cluster_admin/activating_feature_gates.md#how-to-activate-a-feature-gate)
+[feature gate](../operations/activating_feature_gates.md#how-to-activate-a-feature-gate)
 to the KubeVirt CR.
 
 When automatic identification is disabled, cluster administrator may
@@ -175,7 +175,7 @@ running.
 
 **Note:** In order to run sidecar containers, KubeVirt requires the
 `Sidecar`
-[feature gate](../cluster_admin/activating_feature_gates.md#how-to-activate-a-feature-gate)
+[feature gate](../operations/activating_feature_gates.md#how-to-activate-a-feature-gate)
 to be enabled in KubeVirt's CR.
 
 According to the Kubernetes CPU manager model, in order the POD would
@@ -387,7 +387,7 @@ A `PersistentVolume` can be in "filesystem" or "block" mode:
 
 - Block: Use a block volume for consuming raw block devices. Note: you
   need to enable the `BlockVolume`
-  [feature gate](../cluster_admin/activating_feature_gates.md#how-to-activate-a-feature-gate).
+  [feature gate](../operations/activating_feature_gates.md#how-to-activate-a-feature-gate).
 
 A simple example which attaches a `PersistentVolumeClaim` as a `disk`
 may look like this:
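The example referenced at the end of this hunk is outside the diff context. A minimal sketch of attaching a PVC as a disk (volume and claim names are illustrative):

```yaml
spec:
  domain:
    devices:
      disks:
        - name: mypvcdisk   # illustrative volume name
          disk:
            bus: virtio
  volumes:
    - name: mypvcdisk
      persistentVolumeClaim:
        claimName: mypvc    # existing PVC; block-mode PVCs need the BlockVolume gate
```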
@@ -1298,7 +1298,7 @@ example is SAP HANA.
 In order to expose `downwardMetrics` to VMs, the methods `disk` and `virtio-serial port` are supported.
 
 > **Note:** The **DownwardMetrics** feature gate
-> [must be enabled](../cluster_admin/activating_feature_gates.md#how-to-activate-a-feature-gate)
+> [must be enabled](../operations/activating_feature_gates.md#how-to-activate-a-feature-gate)
 > to use the metrics. Available starting with KubeVirt v0.42.0.
 
 #### Disk
@@ -1470,7 +1470,7 @@ fewer IOThreads than CPU, each IOThread will be pinned to a set of CPUs.
 #### IOThreads with QEMU Emulator thread and Dedicated (pinned) CPUs
 
 To further improve the vCPUs latency, KubeVirt can allocate an
-additional dedicated physical CPU<sup>[1](../compute/virtual_hardware.md#cpu)</sup>, exclusively for the emulator thread, to which it will
+additional dedicated physical CPU<sup>[1](./virtual_hardware.md#cpu)</sup>, exclusively for the emulator thread, to which it will
 be pinned. This will effectively "isolate" the emulator thread from the vCPUs
 of the VMI. When `ioThreadsPolicy` is set to `auto` IOThreads will also be
 "isolated" from the vCPUs and placed on the same physical CPU as the QEMU
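The combination described in the hunk above corresponds roughly to this VMI spec fragment (a sketch; field names per the KubeVirt API, core count illustrative):

```yaml
spec:
  domain:
    ioThreadsPolicy: auto
    cpu:
      cores: 4
      dedicatedCpuPlacement: true
      isolateEmulatorThread: true  # pin the emulator thread to its own pCPU
```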
@@ -157,7 +157,7 @@ spec:
           claimName: my-windows-image
 ```
 
-For more information see [VirtualMachineInstancePresets](../user_workloads/presets.md)
+For more information see [VirtualMachineInstancePresets](./presets.md)
 
 ### HyperV optimizations
@@ -58,7 +58,7 @@ echo 0000:65:00.0 > /sys/bus/pci/drivers/vfio-pci/bind
 
 In general, configuration of a Mediated devices (mdevs), such as vGPUs, should be done according to the vendor directions.
 KubeVirt can now facilitate the creation of the mediated devices / vGPUs on the cluster nodes. This assumes that the required vendor driver is already installed on the nodes.
-See the [Mediated devices and virtual GPUs](../compute/mediated_devices_configuration.md) to learn more about this functionality.
+See the [Mediated devices and virtual GPUs](<../operations/mediated_devices_configuration.md>) to learn more about this functionality.
 
 Once the mdev is configured, KubeVirt will be able to discover and use it for device assignment.
@@ -185,7 +185,7 @@ same Node.
 
 Cluster admin privilege to edit the KubeVirt CR in order to:
 
-- Enable the `HostDevices` [feature gate](../cluster_admin/activating_feature_gates.md)
+- Enable the `HostDevices` [feature gate](../operations/activating_feature_gates.md)
 - Edit the `permittedHostDevices` configuration to expose node USB devices to the cluster
 
 ### Exposing USB Devices
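The `permittedHostDevices` configuration mentioned in this hunk lives in the KubeVirt CR; a sketch with an assumed resource name and illustrative vendor/product IDs:

```yaml
spec:
  configuration:
    permittedHostDevices:
      usb:
        - resourceName: kubevirt.io/usb-storage  # hypothetical resource name
          selectors:
            - vendor: "46f4"   # illustrative vendor/product IDs
              product: "0001"
```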
@@ -173,7 +173,7 @@ The previous instance type and preference CRDs are matched to a given `VirtualMa
 
 ## Creating InstanceTypes, Preferences and VirtualMachines
 
-It is possible to streamline the creation of instance types, preferences, and virtual machines with the usage of the virtctl command-line tool. To read more about it, please see the [Creating VirtualMachines](../user_workloads/creating_vms.md#creating-virtualmachines).
+It is possible to streamline the creation of instance types, preferences, and virtual machines with the usage of the virtctl command-line tool. To read more about it, please see the [Creating VirtualMachines](creating_vms.md#creating-virtualmachines).
 
 ## Versioning
@@ -457,9 +457,9 @@ null
 
 ## common-instancetypes
 
-The [`kubevirt/common-instancetypes`](https://github.com/kubevirt/common-instancetypes) provide a set of [instancetypes and preferences](../user_workloads/instancetypes.md) to help create KubeVirt [`VirtualMachines`](http://kubevirt.io/api-reference/main/definitions.html#_v1alpha1_virtualmachine).
+The [`kubevirt/common-instancetypes`](https://github.com/kubevirt/common-instancetypes) provide a set of [instancetypes and preferences](../virtual_machines/instancetypes.md) to help create KubeVirt [`VirtualMachines`](http://kubevirt.io/api-reference/main/definitions.html#_v1alpha1_virtualmachine).
 
-See [Deploy common-instancetypes](../user_workloads/deploy_common_instancetypes.md) on how to deploy them.
+See [Deploy common-instancetypes](../operations/deploy_common_instancetypes.md) on how to deploy them.
 
 ## Examples
@@ -551,4 +551,4 @@ EOF
 
 * This version is backwardly compatible with `instancetype.kubevirt.io/v1alpha1` and `instancetype.kubevirt.io/v1alpha2` objects, no modifications are required to existing `VirtualMachine{Instancetype,ClusterInstancetype,Preference,ClusterPreference}` or `ControllerRevisions`.
 
-* As with the migration to [`kubevirt.io/v1`](https://github.com/kubevirt/kubevirt/blob/main/docs/updates.md#v100-migration-to-new-storage-versions) it is recommend previous users of `instancetype.kubevirt.io/v1alpha1` or `instancetype.kubevirt.io/v1alpha2` use [`kube-storage-version-migrator`](https://github.com/kubernetes-sigs/kube-storage-version-migrator) to upgrade any stored objects to `instancetype.kubevirt.io/v1beta1`.
+* As with the migration to [`kubevirt.io/v1`](https://github.com/kubevirt/kubevirt/blob/main/docs/updates.md#v100-migration-to-new-storage-versions) it is recommend previous users of `instancetype.kubevirt.io/v1alpha1` or `instancetype.kubevirt.io/v1alpha2` use [`kube-storage-version-migrator`](https://github.com/kubernetes-sigs/kube-storage-version-migrator) to upgrade any stored objects to `instancetype.kubevirt.io/v1beta1`.
@@ -520,7 +520,7 @@ More information about SLIRP mode can be found in [QEMU
 Wiki](https://wiki.qemu.org/Documentation/Networking#User_Networking_.28SLIRP.29).
 
 > **Note**: Since v1.1.0, Kubevirt delegates Slirp network configuration to
-> the [Slirp network binding plugin](../network/net_binding_plugins/slirp.md#slirp-network-binding-plugin) by default.
+> the [Slirp network binding plugin](net_binding_plugins/slirp.md#slirp-network-binding-plugin) by default.
 > In case the binding plugin is not registered,
 > Kubevirt will use the following default image:
 > `quay.io/kubevirt/network-slirp-binding:20230830_638c60fc8`.
@@ -542,7 +542,7 @@ operating system should be configured to use DHCP to acquire IPv4 addresses.
 
 To allow the VM to live-migrate or hard restart (both cause the VM to run on a
 different pod, with a different IP address) and still be reachable, it should be
-exposed by a Kubernetes [service](../network/service_objects.md#service-objects).
+exposed by a Kubernetes [service](service_objects.md#service-objects).
 
 To allow traffic of specific ports into virtual machines, the template `ports` section of
 the interface should be configured as follows. If the `ports` section is missing,
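The `ports` section referenced at the end of this hunk is outside the diff context; a sketch of a masquerade interface forwarding a single port (names and port illustrative):

```yaml
spec:
  domain:
    devices:
      interfaces:
        - name: default
          masquerade: {}
          ports:
            - port: 80  # only forward port 80 into the VM
  networks:
    - name: default
      pod: {}
```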
@@ -637,7 +637,7 @@ Tracking issue - https://github.com/kubevirt/kubevirt/issues/7184
 
 > **Warning**: The core binding is being deprecated and targeted for removal
 > in v1.3.
 > As an alternative, the same functionality is introduced and available as a
-> [binding plugin](../network/net_binding_plugins/passt.md).
+> [binding plugin](net_binding_plugins/passt.md).
 
 `passt` is a new approach for user-mode networking which can be used as a simple replacement for Slirp (which is practically dead).
@@ -693,7 +693,7 @@ When no ports are explicitly specified, all ports are forwarded, leading to memo
 1. `passt` currently only supported as primary network and doesn't allow extra multus networks to be configured on the VM.
 
 passt interfaces are feature gated; to enable the feature, follow
-[these](../cluster_admin/activating_feature_gates.md#how-to-activate-a-feature-gate)
+[these](../operations/activating_feature_gates.md#how-to-activate-a-feature-gate)
 instructions, in order to activate the `Passt` feature gate (case sensitive).
 
 More information about passt mode can be found in [passt
@@ -908,20 +908,20 @@ spec:
 > installed in the guest VM.
 
 > **Note:** Placement on dedicated CPUs can only be achieved if the Kubernetes CPU manager is running on the SR-IOV capable workers.
-> For further details please refer to the [dedicated cpu resources documentation](../compute/dedicated-cpu_resources.md/).
+> For further details please refer to the [dedicated cpu resources documentation](https://kubevirt.io/user-guide/#/creation/dedicated-cpu).
 
 ### Macvtap
 
 > **Note**: The core binding will be deprecated soon.
 > As an alternative, the same functionality is introduced and available as a
-> [binding plugin](../network/net_binding_plugins/macvtap.md).
+> [binding plugin](net_binding_plugins/macvtap.md).
 
 In `macvtap` mode, virtual machines are directly exposed to the Kubernetes
 nodes L2 network. This is achieved by 'extending' an existing network interface
 with a virtual device that has its own MAC address.
 
 Macvtap interfaces are feature gated; to enable the feature, follow
-[these](../cluster_admin/activating_feature_gates.md#how-to-activate-a-feature-gate)
+[these](../operations/activating_feature_gates.md#how-to-activate-a-feature-gate)
 instructions, in order to activate the `Macvtap` feature gate (case sensitive).
 
 > **Note:** On [KinD](https://github.com/kubernetes-sigs/kind) clusters, the user needs to
@@ -61,7 +61,7 @@ The passt binding plugin consists of the following components:
 - Passt CNI plugin.
 - Sidecar image.
 
-As described in the [definition & flow](../../network/network_binding_plugins.md#definition--flow) section,
+As described in the [definition & flow](#definition--flow) section,
 the passt plugin needs to:
 
 - Deploy the CNI plugin binary on the nodes.
@@ -134,7 +134,7 @@ kubectl patch kubevirts -n kubevirt kubevirt --type=json -p='[{"op": "add", "pat
 > The passt binding is still in evaluation, use it with care.
 
 ### Passt Registration
-As described in the [registration section](../../network/network_binding_plugins.md#register), passt binding plugin
+As described in the [registration section](#register), passt binding plugin
 configuration needs to be added to the kubevirt CR.
 
 To register the passt binding, patch the kubevirt CR as follows:
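The patch referenced by this hunk is outside the diff context; the resulting KubeVirt CR fragment looks roughly like this (a sketch — the NetworkAttachmentDefinition name and image tag are assumptions):

```yaml
spec:
  configuration:
    network:
      binding:
        passt:
          networkAttachmentDefinition: default/netbindingpasst  # assumed NAD name
          sidecarImage: quay.io/kubevirt/network-passt-binding:v1.1.0  # illustrative tag
```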
@@ -11,7 +11,7 @@ network connectivity.
 ## Slirp network binding plugin
 [v1.1.0]
 
-The binding plugin replaces the [core `slirp` binding](../../network/interfaces_and_networks.md#slirp)
+The binding plugin replaces the [core `slirp` binding](interfaces_and_networks.md#slirp)
 API.
 
 > **Note**: The network binding plugin infrastructure is in Alpha stage.
@@ -21,7 +21,7 @@ The slirp binding plugin consists of the following components:
 
 - Sidecar image.
 
-As described in the [definition & flow](../../network/network_binding_plugins.md#definition--flow) section,
+As described in the [definition & flow](#definition--flow) section,
 the slirp plugin needs to:
 
 - Assure access to the sidecar image.
@@ -42,7 +42,7 @@ kubectl patch kubevirts -n kubevirt kubevirt --type=json -p='[{"op": "add", "pat
 > admin to decide if the plugin is to be available in the cluster.
 
 ### Slirp Registration
-As described in the [registration section](../../network/network_binding_plugins.md#register), slirp binding plugin
+As described in the [registration section](#register), slirp binding plugin
 configuration needs to be added to the kubevirt CR.
 
 To register the slirp binding, patch the kubevirt CR as follows:
|
@ -42,9 +42,9 @@ and integrate it into Kubevirt in a modular manner.
|
|||
Kubevirt is providing several network binding plugins as references.
|
||||
The following plugins are available:
|
||||
|
||||
- [passt](../network/net_binding_plugins/passt.md) [v1.1.0]
|
||||
- [macvtap](../network/net_binding_plugins/macvtap.md) [v1.1.1]
|
||||
- [slirp](../network/net_binding_plugins/slirp.md) [v1.1.0]
|
||||
- [passt](net_binding_plugins/passt.md) [v1.1.0]
|
||||
- [macvtap](net_binding_plugins/macvtap.md) [v1.1.1]
|
||||
- [slirp](net_binding_plugins/slirp.md) [v1.1.0]
|
||||
|
||||
## Definition & Flow
|
||||
A network binding plugin configuration consist of the following steps:
|
||||
|
|
@@ -17,11 +17,11 @@ The following NUMA mapping strategies can be used:
 
 In order to use current NUMA support, the following preconditions must be met:
 
-* [Dedicated CPU Resources](../compute/dedicated_cpu_resources.md) must be configured.
-* [Hugepages](../compute/virtual_hardware.md#hugepages) need to be allocatable on target
+* [Dedicated CPU Resources](dedicated_cpu_resources.md) must be configured.
+* [Hugepages](virtual_hardware.md#hugepages) need to be allocatable on target
   nodes.
 * The `NUMA`
-  [feature gate](../cluster_admin/activating_feature_gates.md#how-to-activate-a-feature-gate)
+  [feature gate](../operations/activating_feature_gates.md#how-to-activate-a-feature-gate)
   must be enabled.
 
 ## GuestMappingPassthrough
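The preconditions listed in the hunk above come together in a VMI spec roughly like this (a sketch; page size illustrative):

```yaml
spec:
  domain:
    cpu:
      dedicatedCpuPlacement: true
      numa:
        guestMappingPassthrough: {}  # map host NUMA topology into the guest
    memory:
      hugepages:
        pageSize: 2Mi
```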
@@ -3,7 +3,7 @@
 **FEATURE STATE:**
 
 * `VirtualMachineInstancePresets` are deprecated as of the [`v0.57.0`](https://github.com/kubevirt/kubevirt/releases/tag/v0.57.0) release and will be removed in a future release.
-* Users should instead look to use [Instancetypes and preferences](../user_workloads/instancetypes.md) as a replacement.
+* Users should instead look to use [Instancetypes and preferences](./instancetypes.md) as a replacement.
 
 `VirtualMachineInstancePresets` are an extension to general
 `VirtualMachineInstance` configuration behaving much like `PodPresets`
@@ -49,7 +49,7 @@ on a VirtualMachineInstanceReplicaSet:
 
 All service exposure options that apply to a VirtualMachineInstance
 apply to a VirtualMachineInstanceReplicaSet. See [Exposing
-VirtualMachineInstance](../network/service_objects.md)
+VirtualMachineInstance](http://kubevirt.io/user-guide/#/workloads/virtual-machines/expose-service)
 for more details.
@@ -65,7 +65,7 @@ Using VirtualMachineInstanceReplicaSet is the right choice when one
 wants many identical VMs and does not care about maintaining any disk
 state after the VMs are terminated.
 
-[Volume types](../storage/disks_and_volumes.md) which
+[Volume types](./disks_and_volumes.md) which
 work well in combination with a VirtualMachineInstanceReplicaSet are:
 
 - **cloudInitNoCloud**
@@ -202,7 +202,7 @@ created, but only two are running and ready.
 ### Scaling via the Scale Subresource
 
 > **Note:** This requires the `CustomResourceSubresources`
-> [feature gate](../cluster_admin/activating_feature_gates.md#how-to-activate-a-feature-gate)
+> [feature gate](../operations/activating_feature_gates.md#how-to-activate-a-feature-gate)
 > to be enabled for clusters prior to 1.11.
 
 The `VirtualMachineInstanceReplicaSet` supports the `scale` subresource.
@@ -4,7 +4,7 @@ In this document, we are talking about the resources values set on the virt-laun
 
 ## CPU
 
-Note: dedicated CPUs (and isolated emulator thread) are ignored here as they have a [dedicated page](../compute/dedicated_cpu_resources.md).
+Note: dedicated CPUs (and isolated emulator thread) are ignored here as they have a [dedicated page](./dedicated_cpu_resources.md).
 
 ### CPU requests on the container
 - By default, the container requests (1/cpuAllocationRatio) CPU per vCPU. The number of vCPUs is sockets*cores*threads, defaults to 1.
@@ -65,7 +65,7 @@ metadata:
 
 ## Memory
 ### Memory requests on the container
-- VM(I)s must specify a desired amount of memory, in either spec.domain.memory.guest or spec.domain.resources.requests.memory (ignoring hugepages, see the [dedicated page](../compute/hugepages.md)). If both are set, the memory requests take precedence. A calculated amount of overhead will be added to it, forming the memory request value for the container.
+- VM(I)s must specify a desired amount of memory, in either spec.domain.memory.guest or spec.domain.resources.requests.memory (ignoring hugepages, see the [dedicated page](../operations/hugepages.md)). If both are set, the memory requests take precedence. A calculated amount of overhead will be added to it, forming the memory request value for the container.
 
 ### Memory limits on the container
 - By default, no memory limit is set on the container
@@ -783,4 +783,4 @@ virtualmachine.kubevirt.io/testvm created
 ```
 
 ## Additional information
-You can follow [Virtual Machine Lifecycle Guide](../user_workloads/lifecycle.md) for further reference.
+You can follow [Virtual Machine Lifecycle Guide](./lifecycle.md) for further reference.
@@ -69,13 +69,13 @@ require a provisioner in your cluster.
 ## Additional Information
 
 - Using instancetypes and preferences with a VirtualMachine:
-  [Instancetypes and preferences](../user_workloads/instancetypes.md)
+  [Instancetypes and preferences](./instancetypes.md)
 
 - More information about persistent and ephemeral volumes:
-  [Disks and Volumes](../storage/disks_and_volumes.md)
+  [Disks and Volumes](./disks_and_volumes.md)
 
 - How to access a VirtualMachineInstance via `console` or `vnc`:
-  [Console Access](../user_workloads/accessing_virtual_machines.md)
+  [Console Access](./accessing_virtual_machines.md)
 
 - How to customize VirtualMachineInstances with `cloud-init`:
-  [Cloud Init](../user_workloads/startup_scripts.md#cloud-init)
+  [Cloud Init](./startup_scripts.md#cloud-init)
@@ -104,7 +104,7 @@ Enterprise Linux 7](https://access.redhat.com/articles/2470791).
 ## How to obtain virtio drivers?
 
 The virtio Windows drivers are distributed in a form of
-[containerDisk](../storage/disks_and_volumes.md#containerdisk),
+[containerDisk](./disks_and_volumes.md#containerdisk),
 which can be simply mounted to the VirtualMachine. The container image,
 containing the disk is located at:
 <https://quay.io/repository/kubevirt/virtio-container-disk?tab=tags> and the image