Introduction
In my lab, I am in the process of moving off of VMware/ESXi and onto RHEV (Red Hat Enterprise Virtualization). So far the learning curve has not been that bad…
Installing and configuring RHEV-M was as simple as installing a few packages on top of RHEL 6; I had an unused desktop that was more than up to the task. For RHEV-H, well, I had just finished building out a new server for it. Once my initial hardware was up and running, it was time to start moving virtual machines out of vSphere.
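For reference, the RHEV-M install itself boiled down to a couple of commands once the right channels were attached to the box. This is a rough sketch from memory, and the exact package and setup command names can differ between RHEV releases:
# yum install rhevm
# rhevm-setup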
Virt-V2V Conversion Failures
And this is when I started to run into trouble. Using virt-v2v, the process was simple for existing RHEL VMs, and I imported a couple of them without issue.
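For reference, those imports were done with an invocation along these lines. The ESXi hostname, export storage domain path, network, and VM name below are all placeholders, and the exact options depend on the virt-v2v version shipped with your release:
# virt-v2v -ic esx://esxi01.lab.localdomain/?no_verify=1 -o rhev -os freenas.lab.localdomain:/mnt/freenas-export-vol01 --network rhevm rhel-vm01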
However, I ran into a few issues when I started trying to import non-RHEL VMs. In particular, my OpenVPN appliance running Ubuntu was putting up a good fight.
virt-v2v: error: virt-v2v is unable to convert this guest type
(linux/ubuntu)
If reporting bugs, run virt-v2v with debugging enabled and include the
complete output:
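The error message asks for a debug run before filing a bug. I didn't go down that road, but for reference, a debug run is roughly a matter of exporting the standard libguestfs debug variables and re-running the same command while capturing the output (a sketch; check the virt-v2v man page for your version):
# export LIBGUESTFS_DEBUG=1 LIBGUESTFS_TRACE=1
# virt-v2v <same arguments as before> 2>&1 | tee /tmp/virt-v2v-debug.log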
OK, very unexpected, so let's try another approach.
OVA Import Failures
This time I decided to try exporting the VM as an OVA file. However, this did not work either, as the OVA file was not recognized as a gzip file by rhevm-image-uploader (run from the RHEV-M server).
# rhevm-image-uploader -e pandora-export-vol01 upload vpn.lab.localdomain.ova
Please provide the REST API password for the admin@internal oVirt Engine user (CTRL+D to abort):
ERROR: not a gzip file
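The error makes a bit more sense when you remember that an OVA is just a plain tar archive wrapping the OVF descriptor and the VMDKs, not a gzip file. You can confirm that and unpack it directly on the RHEV-M server with something like:
# file vpn.lab.localdomain.ova
# tar -tvf vpn.lab.localdomain.ova
# tar -xvf vpn.lab.localdomain.ova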
The Workaround
However, not all was lost, as I was able to leverage the exported VM files and convert the VMDK of my VM to raw img format.
# qemu-img convert -f vmdk vpn.lab.localdomain-disk1.vmdk -O raw vpn.lab.localdomain-disk1.img
Once the disk was successfully converted, I used qemu-img info to verify its size.
# qemu-img info vpn.lab.localdomain-disk1.img
image: vpn.lab.localdomain-disk1.img
file format: raw
virtual size: 8.0G (8589934592 bytes)
disk size: 2.8G
I then logged into RHEV and created a new virtual machine, which I named vpn.lab.localdomain. For all intents and purposes, it is just a placeholder VM with an empty 8GB disk.
Then, using rhevm-shell, I ran the following commands to get a list of existing VMs and their disks.
[RHEVM shell (connected)]# list vms
id : 93a873a0-2fc2-4d73-952e-58e934d7ecea
name : vpn.lab.localdomain

[RHEVM shell (connected)]# list disks
id : d239f1c7-4fab-43fd-a7e9-ff5add0d53de
name : vpn.lab.localdomain_Disk1
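If you need more detail than the bare list output, rhevm-shell also has a show verb (this assumes the same syntax as the upstream ovirt-shell, and depending on the version the disk may need to be referenced by id rather than name):
[RHEVM shell (connected)]# show vm vpn.lab.localdomain
[RHEVM shell (connected)]# show disk vpn.lab.localdomain_Disk1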
I knew that the disk image for vpn.lab.localdomain was housed on the NFS mount below. All I needed was the UUID of the disk.
# showmount -e freenas
Export list for freenas:
/mnt/freenas-vol01 192.168.0.0,10.1.0.0
So I mounted the NFS share on my RHEV-M server as /freenas-vol01.
# mount -t nfs freenas.lab.localdomain:/mnt/freenas-vol01 /freenas-vol01
I then navigated to /freenas-vol01/4f46e5ca-a417-45d7-9dbf-2d400207fdaa/images and looked for a directory named d239f1c7-4fab-43fd-a7e9-ff5add0d53de. Note that this is the same as the disk UUID I found a few steps above.
Note: It’s the second one in the list.
# ls -rlt
drwxr-xr-x. 2 vdsm kvm 5 Sep 14 12:39 aa1c9b26-00f0-45b0-9cca-224339448347
drwxr-xr-x. 2 vdsm kvm 5 Sep 14 12:45 d239f1c7-4fab-43fd-a7e9-ff5add0d53de
drwxr-xr-x. 2 vdsm kvm 5 Sep 14 13:53 021227e4-bb19-4747-a7a9-e522402fa6fa
drwxr-xr-x. 2 vdsm kvm 5 Sep 14 13:53 390298bc-94f0-477a-b437-5898e3f664be
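If the listing were longer than a handful of directories, a quick find against the mounted storage domain would save the eyeballing (the path assumes the mount point used above):
# find /freenas-vol01 -type d -name d239f1c7-4fab-43fd-a7e9-ff5add0d53de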
Now cd into d239f1c7-4fab-43fd-a7e9-ff5add0d53de and inspect its contents.
# ls -rlt
total 949342
-rw-r--rw-. 1 vdsm kvm 321 Sep 14 12:45 a50ce217-5c51-4f89-afe3-2c650a21c3b0.meta
-rw-rw-rw-. 1 vdsm kvm 1048576 Sep 14 12:45 a50ce217-5c51-4f89-afe3-2c650a21c3b0.lease
-rw-rw-rw-. 1 vdsm kvm 8589934592 Sep 14 16:44 a50ce217-5c51-4f89-afe3-2c650a21c3b0
There are three files here, but only one is 8GB in size, which, not coincidentally, is the same size as the disk I created. This is the new empty disk on my new VM.
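Before overwriting it, it is worth confirming that the placeholder disk really is a raw image of the expected size. qemu-img can read it in place, and assuming the disk was created preallocated, it should report a raw file format and the same 8589934592-byte virtual size as the converted image:
# qemu-img info a50ce217-5c51-4f89-afe3-2c650a21c3b0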
Now let's dd our original disk onto this disk. This overwrites the contents of the new, empty disk with the contents of the original disk.
# dd if=/root/vpn.lab.localdomain-disk1.img of=a50ce217-5c51-4f89-afe3-2c650a21c3b0 bs=4M
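Since the source image and the destination file are both raw and exactly 8589934592 bytes, checksumming the two is a cheap sanity check that the copy completed intact. This is just something I could have run, not anything RHEV requires:
# sync
# md5sum /root/vpn.lab.localdomain-disk1.img a50ce217-5c51-4f89-afe3-2c650a21c3b0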
Much to my surprise, I was able to boot up the VM without issue.