Mapping Libvirt VM Names with OpenStack Instance Names


Within OpenStack, each virtual machine instance running on a Compute node is also a virtual machine managed by libvirt on that node.

If you ssh to a Compute node and run the command below, you can list the name of each VM running or registered on that Compute node.

# virsh list --all
 Id    Name                 State
----------------------------------------------------
 2     instance-000000f0    running
 -     instance-00000024    shut off
 -     instance-00000039    shut off
 -     instance-000000ea    shut off

So there is only one VM currently running on this Compute node, but which VM is it?

Well, we can figure that out pretty easily. See below.

# virsh dumpxml instance-000000f0 | grep uuid | grep name

<entry name='uuid'>3103d38c-447d-40af-9607-56b26473ee72</entry>
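
If you just want the UUID without grepping through XML, virsh can also print it directly:

# virsh domuuid instance-000000f0
3103d38c-447d-40af-9607-56b26473ee72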

Now we just have to map this UUID back to an OpenStack instance name.

Here is a nasty little grep/awk one-liner to get the UUID and name of each OpenStack instance running in our cluster.

# nova list | grep -v "+" | grep -v ID | awk '{print $2 $3 $4}'

a402716b-73d0-4303-9331-202bc2386ab8|storage-perf-a
bd73f092-88c9-4af1-b569-c1176290841c|storage-perf-b
0ab8decb-6623-4e68-b8e4-b9cd522f6ea9|storage-perf-c
493c8afe-836e-4846-bdd0-029bb6e7f70e|storage-perf-d
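
If the newer unified openstack client is available, a column filter gets the same ID-to-name mapping without the awk gymnastics (a quick sketch; it assumes python-openstackclient is installed and admin credentials are loaded):

# openstack server list --all-projects -c ID -c Name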

Note that you can also look up the instance name from a UUID; see below.

# nova show db79f6a2-455e-4f17-88d0-b3018d279c7c | grep instance
| OS-EXT-SRV-ATTR:instance_name | instance-0000002a |
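
The unified client can do the same lookup with a column filter; depending on the client release, the field is typically exposed as OS-EXT-SRV-ATTR:instance_name, so treat the exact field name as an assumption for your version:

# openstack server show db79f6a2-455e-4f17-88d0-b3018d279c7c -c OS-EXT-SRV-ATTR:instance_name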


Update:

Thanks to TB for showing me this one. You can accomplish all of this via the nova CLI using the FQDN of the host.

(overcloud) [stack@undercloud ~]$ nova list --all --host comp39.localdomain --fields=name,host,instance_name,status
+--------------------------------------+-----------+--------------------+-------------------+--------+
| ID                                   | Name      | Host               | Instance Name     | Status |
+--------------------------------------+-----------+--------------------+-------------------+--------+
| 30fccd83-e517-4e56-a1df-5d5d454818ff | vm-name1  | comp39.localdomain | instance-0023a423 | ACTIVE |
| bc6bb4da-1cac-4603-ad52-2078f13cf6fd | vm-name-2 | comp39.localdomain | instance-0023a348 | ACTIVE |
+--------------------------------------+-----------+--------------------+-------------------+--------+


NUMA CPU Pinning with KVM/Virsh


According to Wikipedia, NUMA is "a computer memory design used in multiprocessing, where the memory access time depends on the memory location relative to the processor. Under NUMA, a processor can access its own local memory faster than non-local memory (memory local to another processor or memory shared between processors). The benefits of NUMA are limited to particular workloads, notably on servers where the data are often associated strongly with certain tasks or users."

So what does this mean for virtual machine optimization under KVM/libvirt? It means that for best performance, you want to configure your multi-vCPU VMs to use only cores from the same physical CPU (that is, the same NUMA node).
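
In libvirt terms, that pinning lives in the domain XML under <cputune>, which you can add with virsh edit. A minimal sketch, assuming a 4-vCPU guest and a host where NUMA node 0 owns CPUs 0-3 and 8-11 (as on the box below):

<cputune>
  <!-- pin each guest vCPU to a host CPU from the same NUMA node -->
  <vcpupin vcpu='0' cpuset='0'/>
  <vcpupin vcpu='1' cpuset='1'/>
  <vcpupin vcpu='2' cpuset='2'/>
  <vcpupin vcpu='3' cpuset='3'/>
</cputune>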

So how do we do this? See the example below from one of my homelab servers. This machine has two hyperthreaded quad-core Xeons (X5550), for a total of 8 physical cores and 16 logical CPUs.

First we use the "lscpu" command to determine which CPU cores belong to which NUMA node. The relevant lines are the "NUMA node0 CPU(s)" and "NUMA node1 CPU(s)" entries at the bottom of the output.

# lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Thread(s) per core: 2
Core(s) per socket: 4
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 26
Model name: Intel(R) Xeon(R) CPU X5550 @ 2.67GHz
Stepping: 5
CPU MHz: 2668.000
BogoMIPS: 5319.11
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 8192K
NUMA node0 CPU(s): 0-3,8-11 
NUMA node1 CPU(s): 4-7,12-15
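
If the numactl package is installed, you can cross-check the same layout with its hardware report, which prints each node's CPU list and memory size:

# numactl --hardware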

Using the virsh command, we can inspect the CPU pinning for my test VM called "mytestvm".
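
A quick sketch of what that looks like: running vcpupin with just the domain name prints the current CPU affinity of each vCPU, and adding a vCPU number and CPU list sets the pinning (here reusing node0's CPUs from the lscpu output above):

# virsh vcpupin mytestvm
# virsh vcpupin mytestvm 0 0-3,8-11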
