RHEL7: Install RHV Guest Agent and Drivers


About the Guest Agent

The RHEL 7.x guest agent for RHV 4.x provides drivers, additional guest information, and extra functionality once installed on a RHEL virtual machine.

The guest agent includes:

  • virtio-net paravirtualized network driver
  • virtio-scsi paravirtualized HDD driver
  • virtio-balloon driver, which improves memory overcommit (currently not used by RHV)
  • rhevm-guest-agent-common, which allows RHV to retrieve guest-internal information such as IP addresses and to gracefully reboot the guest

You can view the entire list of guest agent drivers and tools in the RHV documentation.

When you run a virtual machine in RHV without the guest agent installed, a warning will appear as an orange exclamation mark in the RHV web interface.


Register with Satellite

You can skip this step if your guest is already registered.

In order to install the guest agent, I must first register the virtual machine with my local Satellite server. If you are not using a local Satellite server, you can register with RHN instead.

First we need to grab the Katello CA consumer RPM from Satellite. The -k flag is used here because our Satellite server uses a self-signed certificate.

# curl -Ok https://satellite.lab.localdomain/pub/katello-ca-consumer-latest.noarch.rpm

Then install the rpm.

# rpm -ivh katello-ca-consumer-latest.noarch.rpm

Now register with Satellite. In the example below, we are using a custom activation key and organization.

# subscription-manager register --activationkey="auburn-lab-ak" --org="lab"
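
If you want to confirm the registration succeeded, subscription-manager can report the system identity and organization:

# subscription-manager identity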

Installing the Guest Agent

You will need to ensure that the RHEL 7 RH Common repo is enabled. If the repo is not available to the guest, you will need to enable it.

# yum repolist | grep common
!rhel-7-server-rh-common-rpms/7Server/x86_64 Red Hat Enterprise Linux 234
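
If the repo does not show up in the output, it can usually be enabled with subscription-manager, using the repo label from the listing above:

# subscription-manager repos --enable=rhel-7-server-rh-common-rpms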

If the proper repo is enabled, then run the following command.
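
On RHEL 7 guests the agent is typically provided by the ovirt-guest-agent-common package from the rh-common repo (it supplies the ovirt-guest-agent.service enabled in the next step):

# yum install -y ovirt-guest-agent-common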

Once installed, the orange exclamation point will disappear.


Start and Enable the Guest Agent Service

# systemctl enable ovirt-guest-agent.service
# systemctl start ovirt-guest-agent.service
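
To confirm the agent came up cleanly, check its status:

# systemctl status ovirt-guest-agent.service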

Managing RHEV VMs Via the Virsh CLI

cropped-space-cadet.jpg

Out of the box, you are not going to be able to run virsh commands on the CLI as root; on a RHEV host, libvirt is configured to require SASL authentication rather than simply trusting the local root user.

You can, however, follow the procedure below and create a username and password to use for authentication.

# saslpasswd2 -a libvirt fatmin
Password:
Again (for verification):
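
If you want to verify the account was created (assuming libvirt's default SASL database location), you can list the users:

# sasldblistusers2 -f /etc/libvirt/passwd.db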

Now run virsh and enter the credentials you created above.

# virsh list --all
Please enter your authentication name: fatmin
Please enter your password:
Id Name State
----------------------------------------------------
10 HostedEngine running

Now you can shut down or start a VM. Here I am shutting down my RHEV HostedEngine.

# virsh destroy HostedEngine
Please enter your authentication name: fatmin
Please enter your password:
Domain HostedEngine destroyed

RHEV 3.5: Recovering from a Catastrophic Host Failure

A while ago, I needed to tear down one of the nodes in my RHEV cluster, as I wanted to re-purpose it as a RHEL 7 host. A few hours before I planned to shut it down, I logged into my RHEV-M console and put the node in "Maintenance" status.

Later, when I came home, I powered off the node and rebuilt it, not once checking to ensure that the VMs running on the host had migrated properly. They had not, and this was a problem. I had unknowingly fubared my cluster.

When I logged back into RHEV-M, I found several VMs with a status of "?" or "Unknown State", and a physical host that I was unable to remove from the cluster, despite the fact that it had been rebuilt.

So now what….

Fix the VMs first…

There were several VMs that had been running on the now-retired RHEV-H host when the server was powered off. I needed to fix these before I could remove the host. I tried powering them off, powering them on, and migrating them from the WebUI, but none of that worked, so I had to clean them up manually via the RHEV-M database. Note that it's possible I could have figured out how to save the VMs instead of deleting them, but this was not a priority since this is my homelab environment.

First, let's SSH to the RHEV-M server and log into the engine database.

# source /etc/ovirt-engine/engine.conf.d/10-setup-database.conf
# export PGPASSWORD=$ENGINE_DB_PASSWORD
# psql -h localhost -U engine engine

Now we figure out the vm_guid for each VM. Below I am starting with my logging server (log.lab.localdomain).

The first query looks up the VM's GUID, and the update that follows tells RHEV to mark the VM as powered off (status 0). Nothing here will delete your VM.

engine=> select vm_guid from vm_static where vm_name = 'log.lab.localdomain';
vm_guid
--------------------------------------
f2c43e33-fd02-4b74-b86c-9e9ff9b8c51b
(1 row)

engine=> update vm_dynamic set status = 0 where vm_guid = 'f2c43e33-fd02-4b74-b86c-9e9ff9b8c51b';
UPDATE 1
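
If you want to double-check that the change took, query the same table; a status of 0 means the VM is now marked as Down:

engine=> select vm_guid, status from vm_dynamic where vm_guid = 'f2c43e33-fd02-4b74-b86c-9e9ff9b8c51b';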

I needed to run through this process a total of four times, once for each VM that was orphaned, scared, and alone.
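
If you have several orphaned VMs, the lookup and the update can be combined into a single statement; the VM names below are placeholders for your own:

engine=> update vm_dynamic set status = 0 where vm_guid in (select vm_guid from vm_static where vm_name in ('vm1.lab.localdomain', 'vm2.lab.localdomain'));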

Now put the node in Maintenance Mode…

First, open another SSH session to your RHEV-M host and stop JBoss.

# service jbossas stop

Now back in your other window, which should still be connected to the engine database….

My node is named titan.lab.localdomain; we need to get its vds_id.

engine=# select vds_id, storage_pool_name from vds where vds_name = 'titan.lab.localdomain';
vds_id | storage_pool_name
--------------------------------------+-------------------
e59198b0-fc75-4c5d-b31e-1ab639a1f708 | Auburn_Datacenter
(1 row)

Now let's put it in maintenance mode (status 4).

engine=# update vds_dynamic set status = 4 where vds_id = 'e59198b0-fc75-4c5d-b31e-1ab639a1f708';
UPDATE 1

Now let's start JBoss back up in the other window.

# service jbossas start

At this point I was able to log back into the WebUI and remove the host.

RHEV/oVirt – 503 Service Temporarily Unavailable


This post will walk you through fixing the RHEV/oVirt "503 Service Temporarily Unavailable" error, which I received when trying to load the RHEV-M/oVirt Engine web interface.


Note that these instructions worked for me in an environment where the web interface had previously been working but failed after an unscheduled restart of the host.

First, let's restart ovirt-engine.

# service ovirt-engine restart
Stopping oVirt Engine: [FAILED]
Starting oVirt Engine: [ OK ]

Next, we restart the ovirt-websocket-proxy.

# service ovirt-websocket-proxy restart
Stopping oVirt Engine websockets proxy: [ OK ]
Starting oVirt Engine websockets proxy: [ OK ]

Finally, we restart Apache.

# service httpd restart
Stopping httpd: [ OK ]
Starting httpd: [ OK ]
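
If the web interface still refuses to load after the restarts, it is worth confirming the engine actually stayed up and watching its log while you reload the page:

# service ovirt-engine status
# tail -f /var/log/ovirt-engine/engine.log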

Foreman: Advanced integration with Red Hat Enterprise Virtualization Manager (RHEV-M) – Part 1 of 2

Interesting read for those who want to integrate Foreman into their RHEV environment. I'm still working on figuring out the exact benefits of doing so, but this seems like the go-to guide to reference.

Source: Advanced integration with Red Hat Enterprise Virtualization Manager (RHEV-M) – Part 1 of 2

RHEV: How to Import a VMware ESXi OVA Disk Image into RHEV-M


Introduction

In my lab, I am in the process of moving off of VMware/ESXi and onto RHEV (Red Hat Enterprise Virtualization). So far the learning curve has not been that bad.

Installing and configuring RHEV-M was just as simple as installing a few packages on top of RHEL 6, and I had an unused desktop that was more than up for the task. As for RHEV-H, I had just finished building out a new server for it. Once my initial hardware was up and running, it was time to start moving virtual machines out of vSphere.

Virt-V2V Conversion Failures

And this is when I started to run into trouble. Using virt-v2v, the process was simple for existing RHEL VMs, and I imported a couple of them without issue.

However, I ran into a few issues when I started trying to import non-RHEL VMs. In particular, my OpenVPN appliance running Ubuntu was putting up a good fight.

virt-v2v: error: virt-v2v is unable to convert this guest type
(linux/ubuntu)
If reporting bugs, run virt-v2v with debugging enabled and include the
complete output:

OK, very unexpected, so let's try another approach.

OVA Import Failures

This time I decided to export the VM as an OVA file. However, this did not work either, as the OVA file was not recognized as a gzip file by rhevm-image-uploader (run from the RHEV-M server).

# rhevm-image-uploader -e pandora-export-vol01 upload vpn.lab.localdomain.ova
Please provide the REST API password for the admin@internal oVirt Engine user (CTRL+D to abort):
ERROR: not a gzip file

The Workaround

However, not all was lost, as I was able to leverage the exported VM files and convert the VMDK of my VM to raw img format.

# qemu-img convert -f vmdk vpn.lab.localdomain-disk1.vmdk -O raw vpn.lab.localdomain-disk1.img

Once the disk was successfully converted, I used qemu-img info to verify its size.

# qemu-img info vpn.lab.localdomain-disk1.img
image: vpn.lab.localdomain-disk1.img
file format: raw
virtual size: 8.0G (8589934592 bytes)
disk size: 2.8G

I then logged into RHEV and created a new virtual machine which I named vpn.lab.localdomain. For all intents and purposes, it is just a placeholder VM with an empty 8GB disk.

Then, using rhevm-shell, I ran the following commands to get a list of existing VMs and their disks.

[RHEVM shell (connected)]# list vms

id : 93a873a0-2fc2-4d73-952e-58e934d7ecea
name : vpn.lab.localdomain

[RHEVM shell (connected)]# list disks

id : d239f1c7-4fab-43fd-a7e9-ff5add0d53de
name : vpn.lab.localdomain_Disk1

I knew that the disk image for vpn.lab.localdomain was housed on the NFS mount below. All I needed was the UUID of the disk.

# showmount -e freenas
Export list for freenas:
/mnt/freenas-vol01 192.168.0.0,10.1.0.0

So I mounted the NFS share on my RHEV-M server at /freenas-vol01.

# mount -t nfs freenas.lab.localdomain:/mnt/freenas-vol01 /freenas-vol01

I then navigated to /freenas-vol01/4f46e5ca-a417-45d7-9dbf-2d400207fdaa/images and looked for a directory named d239f1c7-4fab-43fd-a7e9-ff5add0d53de. Note that this is the same as the disk UUID I found a few steps above.

Note: It’s the second one in the list.

# ls -rlt
drwxr-xr-x. 2 vdsm kvm 5 Sep 14 12:39 aa1c9b26-00f0-45b0-9cca-224339448347
drwxr-xr-x. 2 vdsm kvm 5 Sep 14 12:45 d239f1c7-4fab-43fd-a7e9-ff5add0d53de
drwxr-xr-x. 2 vdsm kvm 5 Sep 14 13:53 021227e4-bb19-4747-a7a9-e522402fa6fa
drwxr-xr-x. 2 vdsm kvm 5 Sep 14 13:53 390298bc-94f0-477a-b437-5898e3f664be
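
Alternatively, rather than scanning the listing by eye, you can search for the image directory by its UUID:

# find /freenas-vol01 -maxdepth 3 -type d -name d239f1c7-4fab-43fd-a7e9-ff5add0d53de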

Now cd into d239f1c7-4fab-43fd-a7e9-ff5add0d53de and inspect its contents.

# ls -rlt
total 949342
-rw-r--rw-. 1 vdsm kvm 321 Sep 14 12:45 a50ce217-5c51-4f89-afe3-2c650a21c3b0.meta
-rw-rw-rw-. 1 vdsm kvm 1048576 Sep 14 12:45 a50ce217-5c51-4f89-afe3-2c650a21c3b0.lease
-rw-rw-rw-. 1 vdsm kvm 8589934592 Sep 14 16:44 a50ce217-5c51-4f89-afe3-2c650a21c3b0

There are three files here, but only one is 8 GB in size, which is not coincidentally the same size as the disk I created. This is the new, empty disk attached to our new VM.
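
Before overwriting anything, qemu-img can confirm that this really is the empty 8 GB raw image backing the placeholder VM:

# qemu-img info a50ce217-5c51-4f89-afe3-2c650a21c3b0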

Now let's dd our original disk image onto this disk. This overwrites the contents of the new disk with the contents of the original disk.

# dd if=/root/vpn.lab.localdomain-disk1.img of=a50ce217-5c51-4f89-afe3-2c650a21c3b0 bs=4M

Much to my surprise, I was able to boot up the VM without issue.