Honestly, this one is not hard to figure out, as it’s documented in multiple places. However, I have found that the documentation varies greatly depending on whether you are using RHEV or oVirt, and the version of each seems to matter as well. At least, that has been my experience trying to get this working.
So I figured I would document it here so that I don’t have to remember which Google result worked for me.
Note that this example is on RHEV 3.6.1.
First, you need to connect to the RHEV-H machine that is hosting the HostedEngine. Then you need to set a console password; see the example below.
Note: This is a one-time password, and must be set each time you want to connect to the console.
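A minimal sketch of the commands involved, assuming the standard hosted-engine tooling shipped with RHEV-H 3.6 (subcommand names can vary between versions):

# hosted-engine --add-console-password

Once the password is set, you can connect to the console with a VNC client such as remote-viewer, pointed at the RHEV-H host on the VNC port the HostedEngine VM is listening on (typically 5900).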
Open vSwitch (OVS) is an open source software switch designed specifically for virtualized environments such as OpenStack or RHEV-H. It makes it easier to manage, configure, and monitor virtual traffic within a virtualized environment.
Below is Part 1 of what I suspect will be a multi-part series on configuring, viewing, and managing your virtualized network via OVS.
Viewing OVS Bridges
To view the OVS bridges configured on a system, use the ovs-vsctl command as shown below. On my test system, there are three configured bridges: br-ex, br-int, and br-tun.
# ovs-vsctl show
The output from the command above is rather verbose. If you prefer more terse output, try the command below; in this instance the only output is the names of our bridges.
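For example, the list-br subcommand prints nothing but the bridge names, one per line:

# ovs-vsctl list-br
br-ex
br-int
br-tun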
A while ago, I needed to tear down one of the nodes in my RHEV cluster, as I wanted to repurpose it as a RHEL 7 host. A few hours before I planned to shut it down, I logged into my RHEV-M console and put the node in “Maintenance” status.
Later, when I came home, I powered off the node and rebuilt it, without once checking to ensure that the VMs running on the host had migrated properly. They had not, and this was a problem. I had unknowingly fubared my cluster.
When I logged back into RHEV-M, I found several VMs with a status of “?” or “Unknown State”, and a physical host that I was unable to remove from the cluster, despite the fact that it had been rebuilt.
So now what….
Fix the VMs first…
There were several VMs running on the now-retired RHEV-H host when the server was powered off. I needed to fix these before I could remove the host. I tried powering them off, powering them on, and migrating them from the WebUI, but none of that worked, so I had to delete them manually from the RHEV-M database. Note that it’s possible I could have figured out how to save the VMs instead of deleting them, but that was not a priority, since this is my homelab environment.
First, let’s SSH to the RHEV-M server and log into the database.
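A sketch of the login, assuming the default database name (engine) and the local postgres account used by RHEV-M 3.6; the hostname here is a placeholder:

# ssh root@rhevm.example.com
# su - postgres
$ psql engine

From there, one commonly reported approach (assuming the 3.6 schema) is to force a stuck VM into a Down state (status 0) in the vm_dynamic table so that it becomes manageable again; the VM name 'myvm' is just a placeholder:

engine=# UPDATE vm_dynamic SET status = 0 WHERE vm_guid = (SELECT vm_guid FROM vm_static WHERE vm_name = 'myvm');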
According to Wikipedia, NUMA is “a computer memory design used in multiprocessing, where the memory access time depends on the memory location relative to the processor. Under NUMA, a processor can access its own local memory faster than non-local memory (memory local to another processor or memory shared between processors). The benefits of NUMA are limited to particular workloads, notably on servers where the data are often associated strongly with certain tasks or users.”
So what does this mean for virtual machine optimization under KVM/libvirt? It means that for best performance, you want to configure your multi-vCPU VMs to use only cores from the same physical CPU (or NUMA node).
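As a quick sketch of what that looks like in libvirt’s domain XML: a cpuset on the vcpu element restricts all of the guest’s vCPUs to the listed host cores (here, node0’s cores, taken from the lscpu output shown below):

<vcpu placement='static' cpuset='0-3,8-11'>4</vcpu>

With this in place, a 4-vCPU guest is only scheduled on cores that share NUMA node0. (Memory placement can additionally be controlled with a numatune element, but CPU placement is the piece this post is about.)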
So how do we do this? See the example below from one of my homelab servers. This machine has two hyperthreaded quad-core Xeons (X5550), for a total of 8 physical cores and 16 logical CPUs.
First we use the lscpu command to determine which CPU cores belong to which physical CPU; the relevant lines are the NUMA node entries at the end of the output.
# lscpu
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
On-line CPU(s) list: 0-15
Thread(s) per core: 2
Core(s) per socket: 4
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model name: Intel(R) Xeon(R) CPU X5550 @ 2.67GHz
CPU MHz: 2668.000
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 8192K
NUMA node0 CPU(s): 0-3,8-11
NUMA node1 CPU(s): 4-7,12-15
Using the virsh command, we can inspect the CPU pinning for my test VM, called “mytestvm”.
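The inspection itself is one command; with no CPU list given, vcpupin simply reports the current pinning for each vCPU:

# virsh vcpupin mytestvm

And as the runtime counterpart of the XML snippet above: giving vcpupin a vCPU number and a host CPU list pins that vCPU, with --live applying the change to the running guest and --config persisting it in the domain XML:

# virsh vcpupin mytestvm 0 0-3,8-11 --live --config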