Installation
The oVirt installation begins with selecting a node for the oVirt Engine.
There are three steps to the installation.
The following sections describe the installation of the oVirt Node and oVirt Engine.
Install the oVirt Node:
Figure 11. oVirt Node ISO file for Enterprise Linux 8
Note: Version 9 is still under development.
Figure 12. oVirt ISO installation: install oVirt Node
Figure 13. oVirt ISO installation: select language
Figure 14. oVirt ISO installation: modify options
Note: The ISO file is preconfigured with the necessary packages.
The installation progresses quickly due to the limited number of packages.
Figure 15. oVirt ISO installation: reboot the system
Figure 16. Node health
The self-hosted oVirt Engine is a virtual machine created on the first node of a KVM cluster. Deploy it by running the hosted-engine --deploy command, which in turn runs Ansible scripts or playbooks to create the oVirt Engine. The script queries for the network, host names, IP addresses, and, most importantly, storage. Initially, the virtual machine is deployed on the local mount (/) before being moved to the chosen shared storage. It is therefore essential that the KVM node where the oVirt Engine is built has a local disk large enough for the deployment. By default, the script uses IPv6. If IPv4 is required, the --4 parameter must be passed.
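A minimal invocation, assuming the command is run as root on the first KVM node (the --4 parameter is needed only when IPv4 is required):

    # Deploy the self-hosted oVirt Engine; the script prompts for
    # network, host name, IP address, and storage details
    hosted-engine --deploy --4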
Figure 17 shows the script output:
Figure 17. Hosted-engine script execution
Note the reference to shared storage. The virtual machine is copied to a shared storage domain of the user's choice that is created as part of the script. It cannot be a local storage domain. This limitation narrows the options to GlusterFS, iSCSI, FC, or NFS. For iSCSI and FC, the storage must be presented to the node before running the deployment script. To prepare for an FC storage domain, see Fibre Channel. To prepare for an iSCSI storage domain, see iSCSI. The user can specify GlusterFS and NFS when prompted.
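Before running the deployment script with FC or iSCSI, it may help to confirm that the node can see the intended device. A minimal sketch, assuming device-mapper-multipath is configured and, for iSCSI, a portal at 192.168.1.50 (a hypothetical address):

    # FC: list the multipath devices presented to the node
    multipath -ll

    # iSCSI: discover the targets on the portal, then log in
    iscsiadm -m discovery -t sendtargets -p 192.168.1.50
    iscsiadm -m node -l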
Note: When running the deployment script, use the tmux utility so that the script keeps running and the session can be reattached if the user is disconnected.
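For example, the script can be started inside a named session and reattached after a disconnect (the session name engine-deploy is arbitrary):

    tmux new -s engine-deploy     # start a named session
    hosted-engine --deploy        # run the deployment script inside tmux
    # ...after a disconnect, from a new login:
    tmux attach -t engine-deploy  # reattach to the running session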
The script prompts the user for the management network (typically the public adapter of the host), the data center and cluster names, and the CPU, memory, and FQDN of the Engine. The Engine FQDN is not the running host, rather it is an available host/IP address for the virtual machine. The default values for CPU and memory are a percentage of the actual server amounts. They can be adjusted as shown in Figure 18:
Figure 18. Engine virtual machine specifications
When the user enters the required information, the script deploys the appliance and creates the virtual machine on local storage. After this step is completed, the oVirt Engine is ready.
The final steps in the script move the virtual machine to shared storage. The user is prompted to select the type of storage. At the prompt, NFS is the default option. Figure 19 shows an example in which FC is selected:
Figure 19. Shared storage prompt
When selecting either iSCSI or FC, the script initiates a storage scan and then presents a list of devices from which to choose. In Figure 20, option [2] is selected, which is a 99 GiB device. The default size of the virtual machine is 51 GB; therefore, the device must be larger than 51 GB.
Figure 20. Selecting an FC volume
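If it is unclear which presented device is large enough, the device sizes can be checked on the node before running the script. A minimal sketch, assuming multipathed FC or iSCSI devices:

    # Each multipath device is listed with its size
    multipath -ll
    # Alternatively, list all block devices with their sizes
    lsblk -o NAME,SIZE,TYPE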
After the script creates the shared storage domain, it prompts the user for the virtual machine disk size before moving the virtual machine to shared storage. Because KVM does not support moving live virtual machines across storage domains, the script offers the opportunity to change the disk size at this point. Unless there is a specific reason to do so, accept the default value of 51 GB. Figure 21 shows these steps:
Figure 21. Deployment completion
The script in this example reports a successful completion, which means that the oVirt Engine is up and running on shared storage. If errors occur at any point, the user can address them and run the script again. Sometimes the state of the installation is unknown and requires a full reset. See Failed oVirt Engine script, which describes how to roll back a failed installation.
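After the script completes, the state of the engine can be verified directly from the node. A short sketch; ovirt-hosted-engine-cleanup is the utility the rollback procedure is typically based on:

    # Check the health of the hosted engine virtual machine
    hosted-engine --vm-status

    # Roll back a failed deployment before retrying (see Failed oVirt Engine script)
    ovirt-hosted-engine-cleanup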
The oVirt Engine virtual machine contains the Open Virtualization Manager (OVM), which is the UI of the environment.
To access the OVM:
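The portal is reached through the Engine FQDN that was supplied during deployment. A typical login sequence, assuming the default admin@internal account created by the script (newer releases may use a different default profile):

    # Browse to the Administration Portal of the oVirt Engine
    https://<engine-fqdn>/ovirt-engine
    # Log in as admin@internal with the password set during deployment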
Figure 22. Open Virtualization Manager login
The Dashboard is displayed, as shown in Figure 23:
Figure 23. Open Virtualization Manager Dashboard
The Dashboard provides access to the environment components, as well as a link to the Monitoring Portal. Each tile at the top of the Dashboard is a hyperlink to the corresponding category view. For example, clicking the Hosts tile displays the oVirt node, as shown in Figure 24:
Figure 24. Hosts in Open Virtualization Manager
The crown icon to the left of the hostname indicates that this host is running the Hosted Engine VM.
The exclamation point icon to the left of the crown icon indicates that power management is not active on the host. Power management allows the user to power on and power off the host from the OVM interface. It accomplishes this task through a fencing agent such as iDRAC. This node is the initial node in the configuration. One cannot enable power management on the initial node in the cluster until a second node is added. For information about enabling power management, see Adding a KVM host to a cluster.
With the completion of the script, the user can now add more hosts to the cluster and create more storage domains.
A user will likely see a reference to Cockpit when logging in to a KVM node. Cockpit is part of the oVirt Node build and can be enabled on each KVM node of a standard server installation. It is a UI that can manage a Linux system, including virtual machines. The user might find it useful for managing the Linux host itself; however, because the oVirt Engine provides the UI for the KVM environment as a whole, Cockpit is unnecessary for the virtualization components. While the Virtualization option in the menu on the left offers the ability to install the oVirt Engine for the cluster, as shown in Figure 25, the UI deployment of the hosted engine is deprecated. Using the UI setup can lead to SSO authentication issues. Both Dell Technologies and the oVirt community recommend using the CLI procedure that is documented in this guide to ensure success.
Figure 25. oVirt Engine UI setup
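For reference, on a host where Cockpit is wanted for Linux-level management, it can typically be enabled through its systemd socket; this sketch assumes the cockpit packages are present, as they are on an oVirt Node:

    # Enable and start the Cockpit web console (listens on port 9090 by default)
    systemctl enable --now cockpit.socket
    # Then browse to https://<host-fqdn>:9090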
After deployment of the oVirt Engine, whether self-hosted or stand-alone, the user can add other Enterprise Linux or oVirt Node KVM hosts to the cluster. The oVirt documentation lists the minimum requirements that a KVM node must meet before it is added to a cluster. The following procedure assumes that the user has met these prerequisites.
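As a quick sanity check against those prerequisites, confirm that the CPU virtualization extensions required by KVM are exposed on the candidate host; the command below is a common way to verify this:

    # A non-empty result indicates AMD-V (svm) or Intel VT-x (vmx) support
    grep -E 'svm|vmx' /proc/cpuinfo | head -1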
After the steps are completed, OVM initiates the operations to add the host to the cluster, including installation of the packages. Figure 32 shows the status of the host as Installing; when the step is completed, the status of the host is Up.
Figure 32. Add KVM host - view the host status
If a host is selected for the self-hosted engine, a gray crown icon to the left of the hostname indicates that the host is allowed to run the self-hosted engine, for example, after an HA event or a migration, as shown in Figure 33:
Figure 33. KVM host that can run the Hosted Engine
After the second host is part of the cluster, the user can edit the first host and assign power management. One cannot enable power management for the first host until there is a second host. Repeat the preceding steps to add power management to the first host.
When new oVirt software becomes available, OVM informs the user whether any hosts in a cluster can be upgraded. Examining an individual host also shows whether that host can be upgraded. Figure 34 shows both views:
Figure 34. Upgrade status
Because OVM checks for updates only periodically, the Installation menu in the hosts view also enables the user to check for updates on demand.
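On a standard Enterprise Linux host, a similar check can be performed from the CLI; a sketch, noting that oVirt Node hosts are image-based and are normally upgraded through OVM instead:

    # List packages with pending updates (Enterprise Linux host)
    dnf check-update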
To upgrade a host:
Figure 35. Upgrade the host
In Figure 37, the status of the host is shown as Maintenance before the installation and reboot. In the inset, the event log shows a successful upgrade:
Figure 37. Host event log