RHEV: How to Import a VMware ESXi OVA Disk Image into RHEV-M


Introduction

In my lab, I am in the process of moving off of VMware/ESXi and onto RHEV (Red Hat Enterprise Virtualization). So far the learning curve has not been that bad.

Installing and configuring RHEV-M was as simple as installing a few packages on top of RHEL 6; I had an unused desktop that was more than up for the task. For RHEV-H, I had just finished building out a new server. Once my initial hardware was up and running, it was time to start moving virtual machines out of vSphere.

Virt-V2V Conversion Failures

And this is when I started to run into trouble. Using virt-v2v, the process was simple for existing RHEL VMs; I imported a couple of them without issue.

However, I ran into a few issues when I started trying to import non-RHEL VMs. In particular, my OpenVPN appliance running Ubuntu was putting up a good fight.

virt-v2v: error: virt-v2v is unable to convert this guest type
(linux/ubuntu)
If reporting bugs, run virt-v2v with debugging enabled and include the
complete output:

OK, very unexpected, so let's try another approach.

OVA Import Failures

This time I decided to attempt to export the VM as an OVA file. However, this did not work either, as the OVA file was not recognized as a gzip file by rhevm-image-uploader (run from the RHEV-M server).

# rhevm-image-uploader -e pandora-export-vol01 upload vpn.lab.localdomain.ova
Please provide the REST API password for the admin@internal oVirt Engine user (CTRL+D to abort):
ERROR: not a gzip file
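
As an aside, an OVA is really just a tar archive, so even when the uploader chokes on it you can inspect and unpack it by hand. These commands are my own sanity check rather than part of the original attempt:

# file vpn.lab.localdomain.ova
# tar -tvf vpn.lab.localdomain.ova
# tar -xvf vpn.lab.localdomain.ova

The extracted files (the .ovf descriptor and the exported .vmdk disk) are the same sort of files the workaround below starts from.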

The Workaround

However, not all was lost, as I was able to leverage the exported VM files to convert the VMDK of my VM to raw img format.

# qemu-img convert -f vmdk vpn.lab.localdomain-disk1.vmdk -O raw vpn.lab.localdomain-disk1.img

Once the disk was successfully converted, I used qemu-img info to verify its size.

# qemu-img info vpn.lab.localdomain-disk1.img
image: vpn.lab.localdomain-disk1.img
file format: raw
virtual size: 8.0G (8589934592 bytes)
disk size: 2.8G

I then logged into RHEV and created a new virtual machine, which I named vpn.lab.localdomain. For all intents and purposes, it is just a placeholder VM with an empty 8 GB disk.

Then, using rhevm-shell, I ran the following commands to get a list of existing VMs and their disks.

[RHEVM shell (connected)]# list vms

id : 93a873a0-2fc2-4d73-952e-58e934d7ecea
name : vpn.lab.localdomain

[RHEVM shell (connected)]# list disks

id : d239f1c7-4fab-43fd-a7e9-ff5add0d53de
name : vpn.lab.localdomain_Disk1

I knew that the disk image for vpn.lab.localdomain was housed on the NFS mount below. All I needed was the UUID of the disk.

# showmount -e freenas
Export list for freenas:
/mnt/freenas-vol01 192.168.0.0,10.1.0.0

So I mounted the NFS share on my RHEV-M server at /freenas-vol01.

# mount -t nfs freenas.lab.localdomain:/mnt/freenas-vol01 /freenas-vol01

I then navigated to /freenas-vol01/4f46e5ca-a417-45d7-9dbf-2d400207fdaa/images and looked for a directory named d239f1c7-4fab-43fd-a7e9-ff5add0d53de. Note that this is the same as the disk UUID that I found a few steps above.

Note: It’s the second one in the list.

# ls -rlt
drwxr-xr-x. 2 vdsm kvm 5 Sep 14 12:39 aa1c9b26-00f0-45b0-9cca-224339448347
drwxr-xr-x. 2 vdsm kvm 5 Sep 14 12:45 d239f1c7-4fab-43fd-a7e9-ff5add0d53de
drwxr-xr-x. 2 vdsm kvm 5 Sep 14 13:53 021227e4-bb19-4747-a7a9-e522402fa6fa
drwxr-xr-x. 2 vdsm kvm 5 Sep 14 13:53 390298bc-94f0-477a-b437-5898e3f664be
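
Rather than eyeballing the listing, you can also search for the disk UUID directly. A quick find along these lines (my addition) does the job:

# find /freenas-vol01 -maxdepth 3 -type d -name 'd239f1c7-4fab-43fd-a7e9-ff5add0d53de'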

Now cd into d239f1c7-4fab-43fd-a7e9-ff5add0d53de and inspect its contents.

# ls -rlt
total 949342
-rw-r--rw-. 1 vdsm kvm 321 Sep 14 12:45 a50ce217-5c51-4f89-afe3-2c650a21c3b0.meta
-rw-rw-rw-. 1 vdsm kvm 1048576 Sep 14 12:45 a50ce217-5c51-4f89-afe3-2c650a21c3b0.lease
-rw-rw-rw-. 1 vdsm kvm 8589934592 Sep 14 16:44 a50ce217-5c51-4f89-afe3-2c650a21c3b0

There are three files here, but only one is 8 GB in size, which, not coincidentally, is the same size as the disk I created. This is the new empty disk on our new VM.

Now let's dd our original disk image onto this disk. This overwrites the contents of the new, empty disk with the contents of the original disk.

# dd if=/root/vpn.lab.localdomain-disk1.img of=a50ce217-5c51-4f89-afe3-2c650a21c3b0 bs=4M
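
Since the source image and the destination volume are both exactly 8589934592 bytes, their checksums should match once the copy is done. This verification step is my own addition rather than something the import strictly requires:

# sync
# md5sum /root/vpn.lab.localdomain-disk1.img a50ce217-5c51-4f89-afe3-2c650a21c3b0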

Much to my surprise, I was able to boot up the VM without issue.

ESX 5: How to Power On A Virtual Machine from the Command Line

So lately I have been noticing some strange issues with my home lab. Every so often I find that all my virtual machines are down, even though the ESX box itself is up and running fine. Checking the logs on the ESX server has not helped me get to the bottom of the issue; I can just see the messages showing that the server has rebooted. So at the very least I know that something is happening to the ESX server that is causing it to crash.

This morning I happened to get lucky, as I was sitting in my office when the basement air conditioner turned on around the same time that the whole-home dehumidifier kicked on. Bleep went all my UPSs, and then I heard my ESX server power off and power back on again.

So now I am on a mission to figure out how to get my ESX server and my UPS to communicate so I can get at least 30 seconds of power during a brownout. However, that is an issue for another day. Right now I just need to get my vCenter Server back up and running, and my challenge is how to do so when I do not have a Windows machine that I can use to connect directly to my ESX server via the vSphere Client.

Thankfully, I can power up my virtual machines on the command line via SSH. The first step is to log into the ESX server and get a list of all the virtual machines that are currently registered on it. You can do this using the command below.

 

# vim-cmd vmsvc/getallvms

Now that I have a list of all my registered virtual machines, I need to check the current power state of my vCenter Server VM (VM ID 1 in my case) and ensure that it is in fact powered off.

~ # vim-cmd vmsvc/power.getstate 1
Retrieved runtime info
Powered off

Yup, it's down. Now let's power it back on.

~ # vim-cmd vmsvc/power.on 1
Powering on VM:
~ #

Now wait a minute or so to allow the virtual machine time to boot. I should then be able to RDP from my Linux workstation to the vCenter Server. No extra Windows workstations needed.
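
If you find yourself doing this for more than one VM, a rough loop like the one below will power on every registered VM that is still off. This is a sketch of my own, assuming the BusyBox shell on the ESXi host and that the first column of the getallvms output is the numeric VM ID:

for vmid in $(vim-cmd vmsvc/getallvms | awk 'NR>1 && $1 ~ /^[0-9]+$/ {print $1}'); do
  if vim-cmd vmsvc/power.getstate "$vmid" | grep -q "Powered off"; then
    vim-cmd vmsvc/power.on "$vmid"
  fi
done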


How to Use Hpacucli in ESXi

Wow, I just stumbled into this one today as I was dreading having to reboot my ESXi 4.1 boxes just to see a few new local disks.

Apparently you can install hpacucli in ESXi very easily, and what's even better is that you can install it directly on a datastore and use that install on any host that has access to that datastore. No need to reinstall on each and every host in your cluster. No need to reboot to configure your disks.

Anyway, here is the link to the how-to. It's obviously written by a Windows guy, but it works easily enough.

http://v-front.blogspot.com/2012/03/how-to-run-hp-online-acu-cli-for-linux.html
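
Once the tool is sitting on a shared datastore, you simply invoke it from that path on whichever host needs it. The path below is purely illustrative (it depends on where you extracted the bundle), but the hpacucli syntax itself is standard:

# /vmfs/volumes/datastore1/hp/hpacucli/hpacucli ctrl all show config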

 

Update: Apparently HP now provides an official version of hpacucli for ESXi 5 as part of HP ESXi Utilities Offline Bundle for VMware ESXi 5.0.

Also note that my HP servers with the P400 card did not detect the new logical drives, even after a rescan; apparently a reboot is required there. However, my HP boxes with the P410 card did detect the new logical drives after a rescan, and no reboot was required.
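
For reference, the rescan I am talking about can be done from the ESXi console. Something like the following, where vmhba1 stands in for your Smart Array controller (esxcfg-scsidevs -a will tell you the actual adapter name):

# esxcfg-scsidevs -a
# esxcfg-rescan vmhba1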

 

Configuring Remote Syslog in ESXi 4.1 via the Linux RCLI


The first thing you need to know about ESXi is that it rolls its logs after a reboot, meaning that if your ESXi server crashes there will be no logs to review and no way of knowing what went to hell and where.

For this reason it is imperative that you set up a remote syslog server and send your logs there. VMware Support will tell you this and shame you if you have not set up remote syslogging.

Fortunately, the first hit on Google for "ESX syslog how to" will take you directly to the VMware KB article. You can find it here. However, that page does not contain directions on how to do this via the Linux RCLI; it only covers the vMA (management appliance) and the Windows PowerCLI. Really, VMware?

Follow the directions below if you run a real operating system and are not a Windows Admin.

First configure your remote ESXi host to forward syslog to your syslog server.

esxcfg-syslog --server esx04 --setserver vsyslog02 --setport 514

Then verify your settings

esxcfg-syslog --server esx04 --show

Output will appear as follows if everything worked right.

Current remote syslog server settings:
Server name : (vsyslog02)
Server port : 514
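
To confirm that messages actually make it across, you can push a test entry from the ESXi host and watch for it on the syslog server. This is my own sanity check; it assumes the BusyBox logger applet is present on your ESXi build and that your syslog daemon writes remote messages to /var/log/messages:

# logger "remote syslog test from esx04"

Then, on vsyslog02:

# tail -f /var/log/messages | grep esx04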

Raising A Virtual Machine From the Dead, or How I Screwed Up a VM and Resurrected It

Something really nasty happened to me when I was working on Storage vMotioning a ton of VMs onto new datastores. Basically, I was not paying attention and tried to manually remove a virtual machine from the datastore using the datastore browser. This wiped out the .vmx and .vmdk files, leaving only the -flat.vmdk file, which could not be removed as, thankfully, it was in use.

Anyway, once I realized what I had done and got my mini panic attack out of the way, I found the following KB from VMware. Reference it here.

In order to get your VM back up and running, you must create a matching .vmdk descriptor file to go with your -flat.vmdk file. You do this using the commands below. Note that you must know the size of your flat file in bytes.

# ls -l vmdisk0-flat.vmdk (gives you size in bytes)

Create a new virtual disk/vmdk file and a temp flat file.

# vmkfstools -c 4294967296 -a lsilogic -d thin temp.vmdk
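
If you would rather not copy the byte count by hand, you can feed the size straight from ls into vmkfstools. A small convenience of my own, assuming the flat file sits in the current directory:

# vmkfstools -c $(ls -l vmdisk0-flat.vmdk | awk '{print $5}') -a lsilogic -d thin temp.vmdk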

Now delete the temp flat file that you just created.

# rm temp-flat.vmdk

Rename your temp.vmdk to match the name of the flat.vmdk from your VM.

# mv temp.vmdk vmdisk0.vmdk

Now vi the .vmdk descriptor file and point the extent line shown below at your flat file. You may also need to remove the last line (ddb.thinProvisioned) if you did not use thin provisioning on the original VM.

# Disk DescriptorFile
version=1
CID=fb183c20
parentCID=ffffffff
createType="vmfs"

# Extent description
RW 8388608 VMFS "vmdisk0-flat.vmdk"

# The Disk Data Base
#DDB

ddb.virtualHWVersion = "4"
ddb.geometry.cylinders = "522"
ddb.geometry.heads = "255"
ddb.geometry.sectors = "63"
ddb.adapterType = "lsilogic"
ddb.thinProvisioned = "1"
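
For reference, the number on the RW extent line is a size in 512-byte sectors, not bytes: 4294967296 / 512 = 8388608, which is exactly the figure shown in the example above. If your flat file is a different size, its byte count divided by 512 is what belongs on that line.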

Now you can go through the process of creating a new VM via vCenter, only instead of creating a new virtual disk, you use the vmdk file that you just created. You should then be able to power on your rescued VM.

Scary!

 

ESXi 4.1 – Three Ways to Kill a Virtual Machine from the Command Line.

You can log into an ESXi 4.1 host and issue the following command to get a list of all running and registered VMs.

# vm-support -x

The vm-support command above will give you the World ID of the VM that you are looking to kill.

# vm-support -X <World_ID> -w <dir>

Another way to kill or shut down a VM is with the vim-cmd command. Use the command below to find your vmid.

vim-cmd vmsvc/getallvms

Then kill it with…

vim-cmd vmsvc/power.off <vmid>

The nastiest and least clean method is to actually kill the Unix PPID (parent process ID) of the VM. Take a look at the output below and substitute your VM's name; the PPID is the second column of the mks and vcpu lines.

# ps | grep  <my_vm_name>
10071899      vmm0:<my_vm_name>
10051474 10071896 mks:<my_vm_name>   /bin/vmx
10084243 10071896 vcpu-0:<my_vm_name> /bin/vmx
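
Once you have the PPID, the actual kill is just a standard signal; using the PPID from the example above, that would be:

# kill -9 10071896

Keep in mind this yanks the VM out from under the hypervisor with no clean shutdown, so treat it strictly as a last resort.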

Reference here

Other HA Configuration Error in ESXi and ESX cluster

Ran into the error below yesterday after rebuilding an ESX 4.0 host as ESXi 4.1 and attempting to enable HA once the newly rebuilt ESXi host was added back to the cluster.

Cannot complete the configuration of the HA agent on the host. Other HA configuration error.

Since the error message was disappointingly unspecific, I called VMware Support and asked for their guidance. According to my tech, they are actually getting a lot of calls on this error, as many who are doing in-place upgrades from ESX to ESXi are running into it.

Specifically, the error has to do with the fact that ESX has both a Service Console and a VMkernel port used for host management, while ESXi only has a VMkernel port. To correct the issue and allow ESXi and ESX to play nice in the same cluster, you must add the following.

First, under VMkernel Properties, make sure that both vMotion and Management traffic have a check mark to indicate that they are enabled.

Second, right-click on your cluster and select VMware HA. Then click on Advanced Options and add the following three entries.

das.allowNetwork1 = Service Console

das.allowNetwork2 = VMkernel

das.allowVmotionNetworks = true

See screenshot below.

 

[Screenshot: VMware HA Advanced Options]

 

 

UPDATE: Note that by default the original name of the management port group is Management Network. You must change this to VMkernel for this to work.