RHEL7: Install RHV Guest Agent and Drivers


About the Guest Agent

Once installed on a RHEL 7.x virtual machine, the RHV 4.x guest agent provides paravirtualized drivers, additional guest information, and extra functionality.

The guest agent includes:

  • virtio-net paravirtualized network driver
  • virtio-scsi paravirtualized HDD driver
  • virtio-balloon driver which improves memory overcommit (currently not used by RHV)
  • rhevm-guest-agent-common, which allows RHV to retrieve guest-internal information such as IP addresses and to gracefully reboot the guest

You can view the entire list in the Red Hat Virtualization documentation.
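
To confirm from inside the guest which of these paravirtualized drivers are actually in use, a quick check of the loaded virtio kernel modules works (you should typically see entries such as virtio_net and virtio_scsi):

# lsmod | grep virtio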

When you spawn a virtual machine in RHV without the guest agent installed, a warning appears on the VM as an exclamation mark.

(Screenshot: virtual machine flagged with a warning exclamation mark in RHV)

Register with Satellite

You can skip this step if your guest is already registered.

In order to install the guest agent, I must first register the virtual machine with my local Satellite. If you are not using a local Satellite server, you can register with RHN instead.

First we need to grab the katello RPM from Satellite. In this case the Satellite server uses a self-signed certificate, hence the -k (insecure) flag passed to curl below.

# curl -Ok https://satellite.lab.localdomain/pub/katello-ca-consumer-latest.noarch.rpm

Then install the rpm.

# rpm -ivh katello-ca-consumer-latest.noarch.rpm

Now register with Satellite. In the example below, we are using a custom activation key and organization.

# subscription-manager register --activationkey="auburn-lab-ak" --org="lab"
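
To confirm the registration took, subscription-manager can report the system identity and subscription status:

# subscription-manager identity
# subscription-manager status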

Installing the Guest Agent

You will need to ensure that the RHEL 7 RH Common repo is enabled. If the repo is not available to the guest, you will need to enable it (the enable command is shown after the check below).

# yum repolist | grep common
!rhel-7-server-rh-common-rpms/7Server/x86_64 Red Hat Enterprise Linux 234
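
If the repo does not show up in that list, it can be enabled with subscription-manager, assuming the standard RHEL 7 Server repo label shown in the output above:

# subscription-manager repos --enable=rhel-7-server-rh-common-rpms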

If the proper repo is enabled, you can install the guest agent package as shown below.
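
Based on the package named earlier (rhevm-guest-agent-common, which provides the ovirt-guest-agent service enabled below), the install step would look like this:

# yum install rhevm-guest-agent-common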

Once installed, the orange exclamation point will disappear.

(Screenshot: the warning exclamation mark is gone once the guest agent is installed)

Start and Enable the Guest Agent Service

# systemctl enable ovirt-guest-agent.service
# systemctl start ovirt-guest-agent.service
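
To verify the agent came up cleanly, check the service status:

# systemctl status ovirt-guest-agent.service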

Managing RHEV VMs Via the Virsh CLI


Out of the box you are not going to be able to run virsh commands on the CLI as root: on a RHEV host, libvirt is configured (by VDSM) to require SASL authentication.

You can, however, follow the procedure below and create a user and password to use for authentication.

# saslpasswd2 -a libvirt fatmin
Password:
Again (for verification):
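
If you want to confirm the user was written to libvirt's SASL database (assuming the default database path of /etc/libvirt/passwd.db), you can list its entries:

# sasldblistusers2 -f /etc/libvirt/passwd.db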

Now run virsh and authenticate with the credentials you just created.

# virsh list --all
Please enter your authentication name: fatmin
Please enter your password:
 Id    Name                           State
----------------------------------------------------
 10    HostedEngine                   running

Now you can shut down or start a VM. Here I am shutting down my RHEV HostedEngine.

# virsh destroy HostedEngine
Please enter your authentication name: fatmin
Please enter your password:
Domain HostedEngine destroyed
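
Starting a VM back up works the same way with virsh start (shown here with a hypothetical guest name); note that the HostedEngine itself is normally restarted automatically by the hosted-engine HA services:

# virsh start myguestvm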


RHEV: Remotely Connect to Hosted Engine Console via VNC


Honestly, this one is not hard to figure out, as it is documented in multiple places. However, I have found that the documentation varies greatly depending on whether you are using RHEV or oVirt, and the version of each seems to matter as well. At least, that has been my experience trying to figure out how to get this working.

So I figured I would document it here so that I would not have to try to remember which Google result worked for me.

Note that this example is on RHEV 3.6.1.

First, you need to connect to the RHEV-H machine that is hosting the HostedEngine. Then you need to set a console password. See the example below.

Note: This is a one-time password, and must be set each time you want to connect to the console.

# hosted-engine --add-console-password
Enter password:
code = 0
message = 'Done'

Now, from your remote machine (mine is Linux), run the following command. Replace the IP address below with the IP or hostname of your RHEV-H host.

$ remote-viewer vnc://10.1.0.112:5900
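
If remote-viewer cannot connect, it is worth confirming on the RHEV-H host that something is actually listening on the VNC port (5900 is the default for the first graphical console):

# ss -tlnp | grep 5900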

If everything is successful, you should get a pop-up window similar to what is shown below.

(Screenshot: remote-viewer window showing the HostedEngine console)

Note: I have run into several issues getting this to work in the past. I am not sure why, but if I hit any again, I will document them here.


Introduction to Managing OVS Bridges


Open vSwitch (OVS) is an open source software switch designed specifically for use in virtualized environments such as OpenStack or RHEV-H. OVS was designed to make it easier to manage, configure, and monitor virtual traffic within a virtualized environment.

Below is Part 1 of what I suspect will be a multi-part series on configuring, viewing, and managing your virtualized network via OVS.

Viewing OVS Bridges

To view the OVS bridges configured on a system, use the ovs-vsctl command as shown below. On my test system there are three configured bridges: br-ex, br-int, and br-tun.

# ovs-vsctl show
b5aa3cf4-d962-4cb2-b3b6-20f0b4858f59
    Bridge br-ex
        Port br-ex
            Interface br-ex
                type: internal
        Port "eth0"
            Interface "eth0"
        Port phy-br-ex
            Interface phy-br-ex
                type: patch
                options: {peer=int-br-ex}
    Bridge br-int
        fail_mode: secure
        Port int-br-ex
            Interface int-br-ex
                type: patch
                options: {peer=phy-br-ex}
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port br-int
            Interface br-int
                type: internal
    Bridge br-tun
        fail_mode: secure
        Port br-tun
            Interface br-tun
                type: internal
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
    ovs_version: "2.4.0"

The output from the command above is rather verbose. If you prefer more terse output, try the command below; in this instance the only output is the names of our bridges.
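
ovs-vsctl list-br gives that kind of terse listing, printing just the bridge names (the three bridges shown above on this system):

# ovs-vsctl list-br
br-ex
br-int
br-tun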


RHEV 3.5: Recovering from a Catastrophic Host Failure


A while ago, I needed to tear down one of the nodes in my RHEV cluster, as I wanted to re-purpose it as a RHEL 7 host. A few hours before I planned to shut it down, I logged into my RHEV-M console and put the node in "Maintenance" status.

Later, when I came home, I powered off the node and rebuilt it, not once checking that the VMs running on the host had migrated properly. They had not, and this was a problem. I had unknowingly fubared my cluster.

When I logged back into RHEV-M, I found several VMs with a status of "?" or "Unknown State", and a physical host that I was unable to remove from the cluster, despite the fact that it had been rebuilt.

So now what….

Fix the VMs first…

There were several VMs that had been running on the now-retired RHEV-H host when the server was powered off. I needed to fix these before I could remove the host. I tried powering them off and on, and migrating them, from the WebUI, but none of that worked. So I had to clean them up manually in the RHEV-M database. It is possible that I could have figured out how to save the VMs instead, but this was not a priority since this is my homelab environment.

First, let's SSH to the RHEV-M server and log into the database.

# source /etc/ovirt-engine/engine.conf.d/10-setup-database.conf
# export PGPASSWORD=$ENGINE_DB_PASSWORD
# psql -h localhost -U engine engine

Now we figure out the vm_guid for each VM; below, I am starting with my logging server (log.lab.localdomain).

With each vm_guid in hand, we tell RHEV to mark the VM as powered off (status 0). Nothing here will delete your VM.

engine=> select vm_guid from vm_static where vm_name = 'log.lab.localdomain';
               vm_guid
--------------------------------------
 f2c43e33-fd02-4b74-b86c-9e9ff9b8c51b
(1 row)

engine=> update vm_dynamic set status = 0 where vm_guid = 'f2c43e33-fd02-4b74-b86c-9e9ff9b8c51b';
UPDATE 1

I needed to run through this process a total of four times, once for each VM that was orphaned, scared, and alone.

Now put the node in Maintenance Mode…

First, open up another SSH session to your RHEV-M host and stop JBoss.

# service jbossas stop

Now back in your other window, which should still be connected to the engine database….

My node is named titan.lab.localdomain; we need to get its vds_id.

engine=# select vds_id, storage_pool_name from vds where vds_name = 'titan.lab.localdomain';
                vds_id                | storage_pool_name
--------------------------------------+-------------------
 e59198b0-fc75-4c5d-b31e-1ab639a1f708 | Auburn_Datacenter
(1 row)

Now let's put it in maintenance mode.

engine=# update vds_dynamic set status = 4 where vds_id = 'e59198b0-fc75-4c5d-b31e-1ab639a1f708';
UPDATE 1

Now let's start JBoss in the other window.

# service jbossas start

At this point I was able to log back into the WebUI and remove the host.

NUMA CPU Pinning with KVM/Virsh


According to Wikipedia, NUMA is a "computer memory design used in multiprocessing, where the memory access time depends on the memory location relative to the processor. Under NUMA, a processor can access its own local memory faster than non-local memory (memory local to another processor or memory shared between processors). The benefits of NUMA are limited to particular workloads, notably on servers where the data are often associated strongly with certain tasks or users."

So what does this mean for virtual machine optimization under KVM/libvirt? It means that, for best performance, you want to configure your multi-vCPU VMs to use only cores from the same physical CPU (NUMA node).

So how do we do this? See the example below from one of my homelab servers. This machine has two hyperthreaded quad-core Xeons (X5550), for a total of 8 physical cores and 16 logical CPUs.

First we use the lscpu command to determine which CPU cores belong to which NUMA node. The relevant lines are the NUMA node CPU lists at the end of the output below.

# lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Thread(s) per core: 2
Core(s) per socket: 4
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 26
Model name: Intel(R) Xeon(R) CPU X5550 @ 2.67GHz
Stepping: 5
CPU MHz: 2668.000
BogoMIPS: 5319.11
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 8192K
NUMA node0 CPU(s): 0-3,8-11 
NUMA node1 CPU(s): 4-7,12-15

Using the virsh command, we can inspect the CPU pinning for my test VM, called mytestvm.
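
A minimal sketch of that inspection, plus the pinning itself, using the mytestvm guest and the node0 CPU list from the lscpu output above (the second command is an assumption about how you would apply the pinning, not something walked through here):

# virsh vcpupin mytestvm
# virsh vcpupin mytestvm 0 0-3,8-11 --config

The first command, given only the domain name, displays the current vCPU-to-physical-CPU pinning; the second pins vCPU 0 to NUMA node0's cores and persists the change in the domain config.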


RHEV/oVirt – 503 Service Temporarily Unavailable


This post will walk you through fixing the RHEV/oVirt error shown below. Note that this error was received when trying to load the RHEV-M/oVirt Engine web interface.

(Screenshot: 503 Service Temporarily Unavailable error when loading the web interface)

Note that these instructions worked for me in an environment where the web interface had previously been working, but failed after an unscheduled restart of the host.

First, let's restart the ovirt-engine service.

# service ovirt-engine restart
Stopping oVirt Engine: [FAILED]
Starting oVirt Engine: [ OK ]

Next, we restart the ovirt-websocket-proxy service.

# service ovirt-websocket-proxy restart
Stopping oVirt Engine websockets proxy: [ OK ]
Starting oVirt Engine websockets proxy: [ OK ]

Finally, we restart Apache.

# service httpd restart
Stopping httpd: [ OK ]
Starting httpd: [ OK ]
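
If the interface still does not load after these restarts, it is worth checking that the engine actually stayed up:

# service ovirt-engine status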