When spawning a virtual machine in RHV without the guest agent installed, a warning appears as an exclamation mark next to the VM in the RHV web interface.
Register with Satellite
You can skip this step if your guest is already registered
In order to install the guest agent, I first need to register the virtual machine with my local Satellite server. If you are not using a local Satellite server, you can register with RHN instead.
First we need to grab the katello RPM from the Satellite server. In this case the Satellite is using a self-signed certificate, so we grab the RPM insecurely.
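The exact URL, organization, and activation key will differ in your environment; the names below are placeholders, but the general sequence looks like this:
# rpm -Uvh http://satellite.lab.localdomain/pub/katello-ca-consumer-latest.noarch.rpm
# subscription-manager register --org="MyOrg" --activationkey="rhel-guests"
Once the guest is registered and has access to the proper repositories, the guest agent itself can be installed and started. The package name varies by distribution and release; for RHEL guests on RHEV it is typically rhevm-guest-agent-common, while upstream oVirt guests use ovirt-guest-agent:
# yum install -y rhevm-guest-agent-common
# service ovirt-guest-agent start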
Out of the box you are not going to be able to run virsh commands on the CLI as root, because RHEV configures libvirt to require SASL authentication.
You can, however, follow the procedure below and create a user and password to use for authentication.
# saslpasswd2 -a libvirt fatmin
Password:
Again (for verification):
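If you want to confirm that the user was created, you should be able to list the entries in libvirt's SASL password database (the path below assumes the default libvirt SASL configuration):
# sasldblistusers2 -f /etc/libvirt/passwd.db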
Now run virsh and enter the credentials you created above.
# virsh list --all
Please enter your authentication name: fatmin
Please enter your password:
 Id    Name                           State
----------------------------------------------------
 10    HostedEngine                   running
Now you can shut down or start a VM. Here I am shutting down my RHEV HostedEngine.
# virsh destroy HostedEngine
Please enter your authentication name: fatmin
Please enter your password:
Domain HostedEngine destroyed
A while ago, I needed to tear down one of the nodes in my RHEV cluster, as I wanted to re-purpose it as a RHEL 7 host. A few hours before I planned to shut it down, I logged into my RHEV-M console and put the node in “Maintenance” status.
Later, when I came home, I powered off the node and rebuilt it, not once checking to ensure that the VMs running on the host had migrated properly. They had not, and this was a problem. I had unknowingly fubared my cluster.
When I logged back into RHEV-M, I found several VMs with a status of “?” or “Unknown State”, and a physical host that I was unable to remove from the cluster, despite the fact that it had been rebuilt.
So now what….
Fix the VMs first…
There were several VMs that were running on the now-retired RHEV-H host when the server was powered off. I needed to fix these before I could remove the host. I tried powering them off, powering them on, and migrating them from the WebUI, but none of that worked, so I had to delete them manually from the RHEV-M database. Note that it's possible I could have figured out how to save the VMs instead of deleting them, but that was not a priority since this is my home lab environment.
First, let's SSH to the RHEV-M server and log into the engine database.
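On a RHEV-M 3.x server, you connect to the PostgreSQL engine database as the postgres user. One common approach, and generally safer than deleting rows across multiple tables, is to mark the stuck VMs as Down so they can then be removed from the WebUI. The table and status value below follow the commonly documented oVirt workaround and should be treated as a sketch rather than a supported procedure; back up the engine database first and substitute your own VM name:
# su - postgres
$ psql engine
engine=# UPDATE vm_dynamic SET status = 0 WHERE vm_guid IN
engine-#   (SELECT vm_guid FROM vm_static WHERE vm_name = 'myvm');
A status of 0 corresponds to Down, which lets you remove (or restart) the VM from the WebUI again.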
This post will walk you through fixing the RHEV/oVirt error shown below. Note that this error was received when trying to load the RHEV-M/oVirtEngine web interface.
Note that these instructions worked for me in a known working environment, where the web interface was previously working but had failed after an unscheduled restart of the host.
First, let's restart the ovirt-engine service.
# service ovirt-engine restart
Stopping oVirt Engine: [FAILED]
Starting oVirt Engine: [ OK ]
Next, we restart the ovirt-websocket-proxy service.
# service ovirt-websocket-proxy restart
Stopping oVirt Engine websockets proxy: [ OK ]
Starting oVirt Engine websockets proxy: [ OK ]
Finally, we restart Apache.
# service httpd restart
Stopping httpd: [ OK ]
Starting httpd: [ OK ]
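If the web interface still fails to load after these restarts, checking the engine service status and tailing the engine log is a reasonable next step (the log path below assumes a default RHEV-M/oVirt Engine installation):
# service ovirt-engine status
# tail -f /var/log/ovirt-engine/engine.log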
Here is an interesting read for those who want to integrate Foreman into their RHEV environment. I'm still working on figuring out the exact benefits of doing so, but this seems like the go-to guide to reference.
In my lab, I am in the process of moving off of VMware/ESXi and onto RHEV (Red Hat Enterprise Virtualization). So far the learning curve has not been that bad.
Installing and configuring RHEV-M was as simple as installing a few packages on top of RHEL 6, and I had an unused desktop that was more than up for the task. For RHEV-H, well, I had just finished building out a new server for it. Once my initial hardware was up and running, it was time to start moving virtual machines out of vSphere.
Virt-V2V Conversion Failures
And this is when I started to run into trouble. Using virt-v2v, the process was simple for existing RHEL VMs; I imported a couple of them without issue.
However, I ran into a few issues when I started trying to import non-RHEL VMs. In particular, my OpenVPN appliance running Ubuntu was putting up a good fight.
virt-v2v: error: virt-v2v is unable to convert this guest type (linux/ubuntu)
If reporting bugs, run virt-v2v with debugging enabled and include the complete output:
OK, that was unexpected, so let's try another approach.
OVA Import Failures
This time I decided to attempt to export the VM as an OVA file. However, this did not work either, as the OVA file was not recognized as a gzip file by rhevm-image-uploader (run from the RHEV-M server).
# rhevm-image-uploader -e pandora-export-vol01 upload vpn.lab.localdomain.ova
Please provide the REST API password for the admin@internal oVirt Engine user (CTRL+D to abort):
ERROR: not a gzip file
The Workaround
However, not all was lost, as I was able to leverage the exported VM files to convert the VMDK of my VM to raw img format.
# qemu-img convert -f vmdk vpn.lab.localdomain-disk1.vmdk -O raw vpn.lab.localdomain-disk1.img
Once the disk was successfully converted, I used qemu-img info to verify its size.
# qemu-img info vpn.lab.localdomain-disk1.img
image: vpn.lab.localdomain-disk1.img
file format: raw
virtual size: 8.0G (8589934592 bytes)
disk size: 2.8G
I then logged into RHEV and created a new virtual machine which I named vpn.lab.localdomain. For all intents and purposes, it is just a placeholder VM with an empty 8GB disk.
Then, using rhevm-shell, I ran the following commands to get a list of existing VMs and their disks.
[RHEVM shell (connected)]# list vms
id : 93a873a0-2fc2-4d73-952e-58e934d7ecea
name : vpn.lab.localdomain
[RHEVM shell (connected)]# list disks
id : d239f1c7-4fab-43fd-a7e9-ff5add0d53de
name : vpn.lab.localdomain_Disk1
I knew that the disk image for vpn.lab.localdomain was housed on the NFS mount below. All I needed was the UUID of the disk.
# showmount -e freenas
Export list for freenas:
/mnt/freenas-vol01 192.168.0.0,10.1.0.0
So I mounted the NFS share on my RHEV-M server at /freenas-vol01.
# mount -t nfs freenas.lab.localdomain:/mnt/freenas-vol01 /freenas-vol01
I then navigated to /freenas-vol01/4f46e5ca-a417-45d7-9dbf-2d400207fdaa/images and looked for a directory named d239f1c7-4fab-43fd-a7e9-ff5add0d53de. Note that this is the same as the disk UUID I found a few steps above.
There are three files in that directory, but only one is 8 GB in size, which, not coincidentally, is the same size as the disk I created. This is the new empty disk on our new VM.
Now let's dd our original disk image onto this disk. This overwrites the contents of the new, empty disk with the contents of the original disk.
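A dd along the following lines does the copy. The destination is the 8 GB image file identified above; its actual name is not shown here, so treat <image-file> as a placeholder, and point the source at wherever the converted img file lives:
# dd if=vpn.lab.localdomain-disk1.img \
    of=/freenas-vol01/4f46e5ca-a417-45d7-9dbf-2d400207fdaa/images/d239f1c7-4fab-43fd-a7e9-ff5add0d53de/<image-file> \
    bs=4M
Once the copy completes, the vpn.lab.localdomain VM can be started from the RHEV-M WebUI and should boot from the original disk contents.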