Chapter 2
Deployment, Installation, and Cluster Management
This chapter moves beyond theory and into day-to-day operations, where KubeVirt's capabilities depend on robust deployment, precise configuration, and resilient cluster management. It guides you through the advanced processes, design choices, and best practices that make a KubeVirt deployment secure, adaptable, and production-ready.
2.1 Infrastructure Prerequisites and Compatibility
Deploying KubeVirt, a virtualization extension for Kubernetes, requires a thorough understanding of the underlying infrastructure prerequisites and compatibility constraints. This section delineates the fundamental requirements across Kubernetes versions, operating systems, networking architectures, and resource allocation, and outlines environmental validation and advanced infrastructure planning considerations for scalable and sustainable clusters.
KubeVirt maintains compatibility with specific Kubernetes versions, primarily to leverage stable APIs and networking features critical for hosting virtual machine instances (VMIs) alongside containers. Each KubeVirt release tracks a rolling window of recent Kubernetes minor versions; at the time of writing this spans roughly 1.19 through 1.25, with particular emphasis on patch-level currency for security and bug fixes.
Features such as CustomResourceDefinitions (CRDs), dynamic admission controllers, and enhanced scheduling predicates depend on a minimum Kubernetes API version (typically 1.19+). The deployment manifests and controllers provided by KubeVirt are tightly coupled to these APIs, rendering earlier Kubernetes clusters unsuitable without considerable backporting effort. It is recommended to verify the precise Kubernetes version using:
kubectl version --short

Cross-reference the reported server version with the official KubeVirt compatibility matrix to confirm support status.
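For automated environments, the same check can be scripted. A minimal sketch, assuming jq is available on the workstation and a minimum minor version of 1.19; adjust the threshold to match the compatibility matrix for your KubeVirt release:

# Extract the server's minor version and fail fast if it is too old.
SERVER_MINOR=$(kubectl version -o json | jq -r '.serverVersion.minor' | tr -d '+')
if [ "$SERVER_MINOR" -lt 19 ]; then
  echo "Kubernetes 1.${SERVER_MINOR} is below the assumed supported range for KubeVirt" >&2
  exit 1
fi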
KubeVirt nodes typically run Linux operating systems because the stack depends on kernel virtualization modules and container runtime compatibility. The integration relies fundamentally on Kernel-based Virtual Machine (KVM) support, which is available on mainstream Linux distributions with appropriate kernel versions (4.14 and above recommended).
CentOS, Fedora, Ubuntu, and Red Hat Enterprise Linux (RHEL) constitute the most validated environments. Notably, the node OS must have the kvm and kvm_intel or kvm_amd kernel modules loaded to enable hardware-assisted virtualization. Verification commands include:
lsmod | grep kvm
modprobe kvm_intel   # or: modprobe kvm_amd

Container runtimes compatible with Kubernetes (e.g., containerd, CRI-O) should be deployed and configured with appropriate cgroup drivers and network plugins that coexist with KubeVirt's operational model.
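As one illustration of the cgroup-driver point, the following containerd fragment enables the systemd cgroup driver for the runc runtime. This is a sketch assuming containerd's CRI plugin with the version 2 configuration layout; the file location (/etc/containerd/config.toml) may vary by distribution:

# /etc/containerd/config.toml (fragment)
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  # Align containerd with a kubelet configured for the systemd cgroup driver.
  SystemdCgroup = true

After changing the driver, restart containerd and the kubelet so both agree on cgroup management.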
Networking within KubeVirt clusters is subject to stringent architectural choices due to the dual nature of containerized and virtualized workloads. The fundamental requirement is a Container Network Interface (CNI) configuration that includes Multus, a meta-plugin that allows attaching multiple network interfaces to a pod, which is necessary to expose VMIs to external and internal networks.
Common CNI plugins like Calico, Canal, and OpenShift SDN can be augmented with Multus to facilitate secondary networks. These secondary interfaces typically enable direct VM connectivity via macvtap or bridge devices. The selection of network architecture should account for the following (a sample secondary network definition follows the list):
- Network isolation: Isolating VM traffic for security and compliance.
- Performance: Using SR-IOV or macvtap for lower latencies and higher throughput when needed.
- IPAM and routing: Ensuring IP address management and routes accommodate pod and VM IP spaces without conflict.
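A minimal sketch of a bridge-backed secondary network, assuming a pre-created Linux bridge named br1 on the nodes and the standard bridge and host-local CNI plugins; the names and subnet are placeholders to adapt:

apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: vm-bridge-net          # placeholder name
  namespace: default
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "name": "vm-bridge-net",
      "type": "bridge",
      "bridge": "br1",
      "ipam": {
        "type": "host-local",
        "subnet": "192.168.100.0/24"
      }
    }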
Proper validation includes confirming Multus deployment and network attachment definition availability:
kubectl get pods -n kube-system | grep multus
kubectl get network-attachment-definitions.k8s.cni.cncf.io -A

Follow this with connectivity tests between VMIs and other cluster components.
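To exercise a secondary network end to end, a VMI can reference the attachment definition by name. A sketch of the relevant VirtualMachineInstance spec fragment, assuming the vm-bridge-net definition shown earlier:

spec:
  domain:
    devices:
      interfaces:
        - name: default
          masquerade: {}        # primary pod network
        - name: secondary
          bridge: {}            # bound to the Multus-provided interface
  networks:
    - name: default
      pod: {}
    - name: secondary
      multus:
        networkName: vm-bridge-net

If the VMI boots and the guest receives an address on the secondary interface, the Multus path is functioning.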
Resource planning for KubeVirt entails accommodating hypervisor overhead alongside container orchestration demands. Each node must encapsulate sufficient CPU cores, memory, and storage to support VM lifecycle operations, including boot processes and live migration.
Typically, nodes hosting VMIs are recommended to have:
- CPU: At least 4 physical cores with virtualization extensions enabled.
- Memory: Minimum of 16 GiB RAM, adjustable per workload memory profiles.
- Storage: Fast storage class availability, preferably SSD-backed volumes supporting persistent volume claims (PVCs).
Resource requests and limits must be configured carefully. Kubernetes enforces these constraints at the pod level, but VMIs often demand real-time performance characteristics, which necessitates reserved CPU sets via the kubelet's CPU Manager and NUMA-aware placement when applicable.
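A sketch of the kubelet settings that enable such pinning, assuming a KubeletConfiguration file managed by your provisioning tooling; the reserved CPU set is a placeholder to adapt to the node layout:

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Static policy lets Guaranteed-QoS pods (including VMI pods) claim exclusive cores.
cpuManagerPolicy: static
# Keep a couple of cores for system daemons; placeholder value.
reservedSystemCPUs: "0,1"
# Align CPU, memory, and device allocation to a single NUMA node where possible.
topologyManagerPolicy: single-numa-node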
Resource requests example for a VMI:
resources:
  requests:
    memory: "4Gi"
    cpu: "2"
  limits:
    memory: "8Gi"
    cpu: "4"

Resource overcommitment can degrade performance; hence, capacity planning should be based on peak utilization scenarios rather than averages.
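Where dedicated CPUs are required, KubeVirt exposes CPU pinning at the VMI level. A minimal sketch, assuming the dedicatedCpuPlacement field and nodes running the static CPU Manager policy outlined above:

domain:
  cpu:
    cores: 4
    dedicatedCpuPlacement: true   # requests exclusive, pinned host CPUs
  resources:
    requests:
      memory: "8Gi"
    limits:
      memory: "8Gi"               # equal requests and limits keep the pod in Guaranteed QoS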
Prior to operational deployment, comprehensive validation of the entire cluster environment is essential. This includes:
- Virtualization verification: Ensuring KVM modules are loaded and accessible within nodes.
- Network probe: Validation of CNI and Multus configurations with sample pod/VM connectivity tests.
- Storage tests: Verifying persistent volume provisioning and I/O performance (a smoke test sketch follows this list).
- Cluster health: Checking node readiness, API server responsiveness, and cluster policy enforcement.
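For the storage check, one lightweight approach is to bind a throwaway PersistentVolumeClaim against the intended storage class before any VMI depends on it; the claim name and storage class below are placeholders:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: kubevirt-storage-smoketest   # placeholder name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: fast-ssd         # placeholder; use your cluster's class
  resources:
    requests:
      storage: 1Gi

If the claim binds and a short-lived test pod can write to the volume, basic provisioning works; sustained I/O performance still warrants a dedicated benchmark.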
Example command to verify node virtualization support:
...