A while ago, I needed to tear down one of the nodes in my RHEV cluster, as I wanted to re-purpose it as a RHEL7 host. A few hours before I planned to shut it down, I logged into my RHEV-M console and put the node in “Maintenance” status.
Later, when I came home, I powered off the node and rebuilt it, without once checking that the VMs running on the host had migrated properly. They had not, and this was a problem: I had unknowingly fubared my cluster.
When I logged back into RHEV-M, I found several VMs with a status of “?” or “Unknown State”, and a physical host that I was unable to remove from the cluster, despite the fact that it had already been rebuilt.
So now what….
Fix the VMs first…
There were several VMs still marked as running on the now-retired RHEV-H host when the server was powered off, and I needed to fix these before I could remove the host. I tried powering them off and on, and migrating them from the WebUI, but none of that worked, so I had to delete them manually from the RHEV-M database. Note that it’s possible I could have figured out how to save the VMs instead of deleting them, but this was not a priority, since this is my homelab environment.
First, let’s SSH to the RHEV-M server and log into the database.
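The database surgery in the steps that follow boils down to something like this. Treat it strictly as a sketch: the table and column names assume the RHEV 3.x “engine” PostgreSQL schema, and the status value is an assumption to verify against your own database first.

```sql
-- Assumptions: runtime VM state lives in vm_dynamic, and status 0 means "Down".
SELECT vm_guid, status FROM vm_dynamic WHERE status <> 0;

-- Force a stuck VM into a Down state so the WebUI will let you remove it:
UPDATE vm_dynamic SET status = 0 WHERE vm_guid = '<stuck-vm-uuid>';
```

Take a backup of the engine database (pg_dump) before hand-editing anything in there.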
Interesting read for those who want to integrate Foreman into their RHEV environment. I’m still working out the exact benefits of doing so, but this seems to be the go-to guide to reference.
Installing and configuring RHEV-M was as simple as installing a few packages on top of RHEL 6. I had an unused desktop that was more than up to the task. For RHEV-H, I had just finished building out a new server for it. Once my initial hardware was up and running, it was time to start moving virtual machines out of vSphere.
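For the curious, the RHEV-M install itself is roughly the following; a sketch, assuming RHEV 3.x package names and a RHEL 6 host already subscribed to the proper channels:

```
# Assumption: RHEV 3.x-era package naming on a properly subscribed RHEL 6 host
yum install -y rhevm
rhevm-setup    # interactive engine setup: database, certificates, ports
```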
Virt-V2V Conversion Failures
And this is when I started to run into trouble. Using virt-v2v, the process was simple for existing RHEL VMs; I imported a couple of them without issue.
However, I ran into a few issues when I started trying to import non-RHEL VMs. In particular, my OpenVPN appliance running Ubuntu put up a good fight.
virt-v2v: error: virt-v2v is unable to convert this guest type (linux/ubuntu)
If reporting bugs, run virt-v2v with debugging enabled and include the complete output:
OK, very unexpected, so let’s try another approach.
OVA Import Failures
This time I decided to export the VM as an OVA file. However, this did not work either, as the OVA file was not recognized as a gzip file by rhevm-image-uploader (run from the RHEV-M server).
# rhevm-image-uploader -e pandora-export-vol01 upload vpn.lab.localdomain.ova
Please provide the REST API password for the admin@internal oVirt Engine user (CTRL+D to abort):
ERROR: not a gzip file
However, not all was lost, as I was able to leverage the exported VM files to convert the VMDK of my VM to raw img format.
# qemu-img convert -f vmdk vpn.lab.localdomain-disk1.vmdk -O raw vpn.lab.localdomain-disk1.img
Once the disk was successfully converted, I used qemu-img info to verify its size.
# qemu-img info vpn.lab.localdomain-disk1.img
file format: raw
virtual size: 8.0G (8589934592 bytes)
disk size: 2.8G
I then logged into RHEV and created a new virtual machine which I named vpn.lab.localdomain. For all intents and purposes, it is just a placeholder VM with an empty 8GB disk.
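As an aside, the same placeholder VM can be created from rhevm-shell instead of the WebUI. A rough sketch only: the cluster name “Default” and the exact disk flags are assumptions you would need to adjust for your environment.

```
[RHEVM shell (connected)]# add vm --name vpn.lab.localdomain --cluster-name Default --template-name Blank
[RHEVM shell (connected)]# add disk --parent-vm-name vpn.lab.localdomain --size 8589934592 --interface virtio --format raw
```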
Then, using rhevm-shell, I ran the following commands to get a list of existing VMs and their disks.
[RHEVM shell (connected)]# list vms
id : 93a873a0-2fc2-4d73-952e-58e934d7ecea
name : vpn.lab.localdomain
[RHEVM shell (connected)]# list disks
id : d239f1c7-4fab-43fd-a7e9-ff5add0d53de
name : vpn.lab.localdomain_Disk1
I knew that the disk image for vpn.lab.localdomain was housed on the NFS mount below. All I needed was the UUID of the disk.
# showmount -e freenas
Export list for freenas:
So I mounted the NFS share on my RHEVM server as /freenas-vol01
# mount -t nfs freenas.lab.localdomain:/mnt/freenas-vol-1 /freenas-vol01
I then navigated to /freenas-vol01/4f46e5ca-a417-45d7-9dbf-2d400207fdaa/images and looked for a directory named d239f1c7-4fab-43fd-a7e9-ff5add0d53de. Note that this is the same UUID as the one I found for vpn.lab.localdomain_Disk1 a few steps above.
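From here, the usual approach is to overwrite the placeholder’s volume file inside that directory with the converted image. This is a sketch only: the volume filename (yet another UUID) and the image path below are placeholders you must substitute yourself, and the virtual sizes should match before copying.

```
cd /freenas-vol01/4f46e5ca-a417-45d7-9dbf-2d400207fdaa/images/d239f1c7-4fab-43fd-a7e9-ff5add0d53de
# <volume-uuid> is hypothetical: use the large raw volume file actually present in this directory
dd if=/path/to/vpn.lab.localdomain-disk1.img of=<volume-uuid> bs=4M conv=notrunc,fsync
```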