How to Manage Libvirt VMs via OpenStack Ironic (OSP10)

[Image: the OpenStack Ironic bear mascot, "Bear Metal"]

In this post I will document the steps that I am using to create a fully virtualized OSP 10 environment in my lab. The undercloud node is a VM, as are the overcloud nodes. We will configure libvirt on the underlying hypervisor so that Ironic has the ability to boot and shut down the VMs.

Add the stack user on your hypervisor. In this case my hypervisor's hostname is virt01, but we will refer to it simply as the hypervisor for clarity.

[simterm]
hypervisor# useradd stack
hypervisor# echo "password" | passwd stack --stdin
[/simterm]

Modify polkit to allow stack user to manage libvirt.

[simterm]hypervisor# cat << EOF > /etc/polkit-1/localauthority/50-local.d/50-libvirt-user-stack.pkla
[libvirt Management Access]
Identity=unix-user:stack
Action=org.libvirt.unix.manage
ResultAny=yes
ResultInactive=yes
ResultActive=yes
EOF
[/simterm]

Now attempt to connect to libvirt as stack via a remote session. Here we are just connecting back to the hypervisor, virt01. In the example below, 10.1.99.112 is the IP of the hypervisor, and the undercloud has an IP of 10.1.99.10.

[simterm]undercloud# virsh --connect qemu+ssh://stack@10.1.99.112/system list --all
[/simterm]

Now ssh to your undercloud VM as the stack user.

Copy stack's public key to your hypervisor (virt01 in this case). In the command below, replace the IP address shown with the IP that your undercloud VM will use to connect to libvirt on the hypervisor.

[simterm]undercloud# ssh-copy-id -i ~/.ssh/id_rsa.pub stack@10.1.99.112
[/simterm]

Now we need to create a few virtual machines. Specifically, I am building an environment with five virtual machines to run virtualized Red Hat OpenStack 13. My overcloud will consist of two compute nodes and three controller nodes.

I will use the command below to create five 60 GB qcow2 disk images.

[simterm]hypervisor# cd /var/lib/libvirt/images/

hypervisor# for i in {1..5}; do qemu-img create -f qcow2 \
-o preallocation=metadata overcloud-node$i.qcow2 60G; done
Formatting 'overcloud-node1.qcow2', fmt=qcow2 size=64424509440 encryption=off cluster_size=65536 preallocation='metadata' lazy_refcounts=off
Formatting 'overcloud-node2.qcow2', fmt=qcow2 size=64424509440 encryption=off cluster_size=65536 preallocation='metadata' lazy_refcounts=off
Formatting 'overcloud-node3.qcow2', fmt=qcow2 size=64424509440 encryption=off cluster_size=65536 preallocation='metadata' lazy_refcounts=off
Formatting 'overcloud-node4.qcow2', fmt=qcow2 size=64424509440 encryption=off cluster_size=65536 preallocation='metadata' lazy_refcounts=off
Formatting 'overcloud-node5.qcow2', fmt=qcow2 size=64424509440 encryption=off cluster_size=65536 preallocation='metadata' lazy_refcounts=off
[/simterm]

The loop below will generate an XML definition for each of the five VMs and then define them in libvirt.

[simterm]hypervisor# for i in {1..5}; do \
virt-install --ram 16384 --vcpus 4 --os-variant rhel7 \
--disk path=/var/lib/libvirt/images/overcloud-node$i.qcow2,device=disk,bus=virtio,format=qcow2 \
--noautoconsole --vnc --network network:provisioning --network bridge:br99 \
--network network:default --name overcloud-node$i \
--dry-run --print-xml > /tmp/overcloud-node$i.xml; \
virsh define --file /tmp/overcloud-node$i.xml; done
[/simterm]

You should end up with the following virtual machines:

[simterm]hypervisor# virsh list --all
 Id    Name                 State
----------------------------------------------------
 1     undercloud           running
 -     overcloud-node1      shut off
 -     overcloud-node2      shut off
 -     overcloud-node3      shut off
 -     overcloud-node4      shut off
 -     overcloud-node5      shut off
[/simterm]

Back on the undercloud, we use the command below to grab the provisioning network MAC address of each virtual machine running on the hypervisor. We could run this command locally on the hypervisor, but since we need the MAC addresses for Ironic on the undercloud, we will run it there.

[simterm]undercloud$ for i in {1..5}; do virsh -c qemu+ssh://stack@10.1.99.112/system domiflist overcloud-node$i | awk '$3 == "provisioning" {print $5}'; done > /tmp/nodes.txt
[/simterm]

Now we use our temp file from above to populate the instackenv.json that we will import into Ironic. See the gist below.


undercloud$ jq . << EOF > ~/instackenv.json
{
  "ssh-user": "stack",
  "ssh-key": "$(cat ~/.ssh/id_rsa)",
  "power_manager": "nova.virt.baremetal.virtual_power_driver.VirtualPowerManager",
  "host-ip": "192.168.122.1",
  "arch": "x86_64",
  "nodes": [
    {
      "name": "overcloud-node1",
      "pm_addr": "192.168.122.1",
      "pm_password": "$(cat ~/.ssh/id_rsa)",
      "pm_type": "pxe_ssh",
      "mac": [
        "$(sed -n 1p /tmp/nodes.txt)"
      ],
      "cpu": "4",
      "memory": "8192",
      "disk": "60",
      "arch": "x86_64",
      "pm_user": "stack"
    },
    {
      "name": "overcloud-node2",
      "pm_addr": "192.168.122.1",
      "pm_password": "$(cat ~/.ssh/id_rsa)",
      "pm_type": "pxe_ssh",
      "mac": [
        "$(sed -n 2p /tmp/nodes.txt)"
      ],
      "cpu": "4",
      "memory": "8192",
      "disk": "60",
      "arch": "x86_64",
      "pm_user": "stack"
    },
    {
      "name": "overcloud-node3",
      "pm_addr": "192.168.122.1",
      "pm_password": "$(cat ~/.ssh/id_rsa)",
      "pm_type": "pxe_ssh",
      "mac": [
        "$(sed -n 3p /tmp/nodes.txt)"
      ],
      "cpu": "4",
      "memory": "8192",
      "disk": "60",
      "arch": "x86_64",
      "pm_user": "stack"
    },
    {
      "name": "overcloud-node4",
      "pm_addr": "192.168.122.1",
      "pm_password": "$(cat ~/.ssh/id_rsa)",
      "pm_type": "pxe_ssh",
      "mac": [
        "$(sed -n 4p /tmp/nodes.txt)"
      ],
      "cpu": "4",
      "memory": "8192",
      "disk": "60",
      "arch": "x86_64",
      "pm_user": "stack"
    },
    {
      "name": "overcloud-node5",
      "pm_addr": "192.168.122.1",
      "pm_password": "$(cat ~/.ssh/id_rsa)",
      "pm_type": "pxe_ssh",
      "mac": [
        "$(sed -n 5p /tmp/nodes.txt)"
      ],
      "cpu": "4",
      "memory": "8192",
      "disk": "60",
      "arch": "x86_64",
      "pm_user": "stack"
    }
  ]
}
EOF


At this point we are ready to import our nodes via Ironic.
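
For reference, on OSP 10 the import and introspection step typically looks something like the sketch below, run as the stack user on the undercloud (check the OSP 10 documentation for the exact workflow in your release):

[simterm]undercloud$ source ~/stackrc
undercloud$ openstack overcloud node import ~/instackenv.json
undercloud$ openstack overcloud node introspect --all-manageable --provide
undercloud$ openstack baremetal node list
[/simterm]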

Note that I do not claim to be the original author of the steps documented above; rather, I wanted to ensure that I could easily consume these steps in the future.

Also, I look forward to experimenting with the vbmc ironic driver and might stop using pxe_ssh altogether.
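
As a preview, a minimal virtualbmc setup would look roughly like the sketch below. This assumes the virtualbmc package is installed on the hypervisor; the ports and credentials shown are only illustrative. Each VM gets its own emulated BMC listening on a dedicated port, which Ironic can then control with the standard IPMI driver instead of pxe_ssh.

[simterm]hypervisor# for i in {1..5}; do \
vbmc add overcloud-node$i --port 623$i --username admin --password password; \
vbmc start overcloud-node$i; done

hypervisor# vbmc list
[/simterm]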

Configure RHEL 7/CentOS 7 as a Virtualization Host


This is a fresh install of RHEL 7.5.

First install the packages as shown below.

# yum install qemu-kvm libvirt

Now install the additional recommended virtualization packages.

# yum install virt-install libvirt-python virt-manager libvirt-client

Now restart libvirtd


# systemctl restart libvirtd

 
Now you should be able to launch virt-manager from your remote machine and add a connection to your new virtualization host.
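
For example, from a workstation with virt-manager installed, something along these lines should work (assuming root SSH access; virt01 is the hostname used elsewhere on this blog):

# virt-manager -c qemu+ssh://root@virt01/system

Or, to quickly verify the connection from the command line:

# virsh -c qemu+ssh://root@virt01/system list --all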

Special note: make sure that you disable NetworkManager.


# systemctl stop NetworkManager
# systemctl disable NetworkManager

Managing RHEV VMs Via the Virsh CLI


Out of the box you are not going to be able to run virsh commands on the CLI as root; RHEV configures libvirt to require SASL authentication, so you need credentials first.

You can, however, follow the procedure below to create a username and password to use for authentication.

# saslpasswd2 -a libvirt fatmin
Password:
Again (for verification):

Now run virsh and, when prompted, enter the credentials you just created.

# virsh list --all
Please enter your authentication name: fatmin
Please enter your password:
 Id    Name                 State
----------------------------------------------------
 10    HostedEngine         running

Now you can shut down or start a VM. Here I am shutting down my RHEV HostedEngine.

# virsh destroy HostedEngine
Please enter your authentication name: fatmin
Please enter your password:
Domain HostedEngine destroyed
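
Starting the VM back up works the same way, and virsh will prompt for the same SASL credentials:

# virsh start HostedEngine
Please enter your authentication name: fatmin
Please enter your password:
Domain HostedEngine started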

 

 

Mapping Libvirt VM Names with OpenStack Instance Names


Within OpenStack, each virtual machine instance running on a Compute node is also a virtual machine running under libvirt on that node.

If you ssh to a Compute node and run the command below you can get the names of each VM running or registered on this Compute node.

# virsh list --all
 Id    Name                 State
----------------------------------------------------
 2     instance-000000f0    running
 -     instance-00000024    shut off
 -     instance-00000039    shut off
 -     instance-000000ea    shut off

So there is only one VM currently running on this Compute node, but which VM is it?

Well, we can figure that out pretty easily. See below.

# virsh dumpxml instance-000000f0 | grep uuid | grep name

<entry name='uuid'>3103d38c-447d-40af-9607-56b26473ee72</entry>

Now we just have to map this UUID back to an OpenStack instance name.

Here we have a nasty little awk grep to get the UUID and name of each OpenStack instance running in our cluster.

# nova list | grep -v "+" | grep -v ID | awk '{print $2 $3 $4}'

a402716b-73d0-4303-9331-202bc2386ab8|storage-perf-a
bd73f092-88c9-4af1-b569-c1176290841c|storage-perf-b
0ab8decb-6623-4e68-b8e4-b9cd522f6ea9|storage-perf-c
493c8afe-836e-4846-bdd0-029bb6e7f70e|storage-perf-d

Note that you can also get the instance name using the UUID; see below.

# nova show db79f6a2-455e-4f17-88d0-b3018d279c7c | grep instance

| OS-EXT-SRV-ATTR:instance_name | instance-0000002a |

 

Update:

Thanks to TB for showing me this one. You can accomplish all of this via the nova CLI using the FQDN of the compute host.

(overcloud) [stack@undercloud ~]$ nova list --all --host comp39.localdomain --fields=name,host,instance_name,status
+--------------------------------------+----------------------+-------------------------------+-------------------+--------+
| ID | Name | Host | Instance Name | Status |
+--------------------------------------+----------------------+-------------------------------+-------------------+--------+
| 30fccd83-e517-4e56-a1df-5d5d454818ff | vm-name1 | comp39.localdomain | instance-0023a423 | ACTIVE |
| bc6bb4da-1cac-4603-ad52-2078f13cf6fd | vm-name-2 | comp39.localdomain | instance-0023a348 | ACTIVE |
+--------------------------------------+----------------------+-------------------------------+-------------------+--------+

 

 

NUMA CPU Pinning with KVM/Virsh


According to Wikipedia, NUMA is "a computer memory design used in multiprocessing, where the memory access time depends on the memory location relative to the processor. Under NUMA, a processor can access its own local memory faster than non-local memory (memory local to another processor or memory shared between processors). The benefits of NUMA are limited to particular workloads, notably on servers where the data are often associated strongly with certain tasks or users."

So what does this mean for virtual machine optimization under KVM/libvirt? It means that for best performance, you want to configure your multi-vCPU VMs to use only cores from the same physical CPU (that is, the same NUMA node).

So how do we do this? See the example below from one of my homelab servers. This machine has two hyperthreaded quad-core Xeons (X5550), for a total of 8 physical cores and 16 logical CPUs.

First we use the "lscpu" command to determine which CPU cores belong to which NUMA node. See the NUMA node lines at the bottom of the output below.

# lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Thread(s) per core: 2
Core(s) per socket: 4
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 26
Model name: Intel(R) Xeon(R) CPU X5550 @ 2.67GHz
Stepping: 5
CPU MHz: 2668.000
BogoMIPS: 5319.11
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 8192K
NUMA node0 CPU(s): 0-3,8-11 
NUMA node1 CPU(s): 4-7,12-15

Using the virsh command, we can inspect the CPU pinning for my test VM called "mytestvm".
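
As a rough sketch of where this is going, assuming mytestvm is a four-vCPU guest: running virsh vcpupin with no CPU list displays the current affinity of each vCPU, and each vCPU can then be pinned to the cores in NUMA node0 (0-3,8-11 in the lscpu output above):

# virsh vcpupin mytestvm

# for v in 0 1 2 3; do virsh vcpupin mytestvm $v 0-3,8-11 --live --config; done

Note that --live only applies to a running guest; for a powered-off VM, use --config by itself and the pinning will take effect on the next boot.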
