How to disable Cloud-Init in a RHEL Cloud Image


So this one is pretty simple. However, I found a lot of misinformation along the way, so I figured I would jot down the proper (and simplest) process here.

Symptoms: a RHEL (or variant) VM that takes a very long time to boot. On the VM console, you can see the following output while the boot process is stalled and waiting for a timeout. Note that the message below has nothing to do with cloud-init; it's just the output that I have most often seen on the console while waiting for a VM to boot.

[106.325574] random: crng init done

Note that I have run into this issue in both OpenStack (when booting from external provider networks) and in KVM.

Upon initial boot of the VM, run the command below.

touch /etc/cloud/cloud-init.disabled

Seriously, that's it. Cloud-init checks for this file early in boot and skips itself entirely when it exists, so there is no need to disable or remove the cloud-init services.
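If you want a quick sanity check after the next boot, you can look at the cloud-init services. The unit names below are the ones shipped with recent RHEL cloud-init packages; yours may vary slightly.

# systemctl status cloud-init-local cloud-init cloud-config cloud-final

Each unit should report as inactive (dead), and the boot should no longer stall waiting on cloud-init timeouts.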


How to Resize a qcow2 Image and Filesystem with Virt-Resize


In this post, I am going to quickly document the process I used to grow a qcow2 disk image and increase the size of the underlying filesystem.

In this instance, I ran out of disk space on my virtual disk named undercloud.qcow2. Apparently, the underlying disk image that I was using was too small for my needs.

Below, I have used qemu-img to inspect the disk image. This disk is only 10G in size.

# qemu-img info undercloud.qcow2
image: undercloud.qcow2
file format: qcow2
virtual size: 10G (10737418240 bytes)
disk size: 9.8G
cluster_size: 65536

 

Let's add 20G to the disk. Note: the VM must be powered down before proceeding.

# qemu-img resize undercloud.qcow2 +20G
Image resized.

Now we need to resize the underlying filesystem using "virt-resize". Note, however, that virt-resize CANNOT resize disk images in place. So we need to make a backup copy, then use the backup copy of the qcow2 as input and the original qcow2 as output. See the example below.

First, we make a backup copy of the disk as shown below.

# cp undercloud.qcow2 undercloud-orig.qcow2

 

Then we run the command below to grow /dev/sda1.

NOTE: In this example /dev/sda1 is not the /boot partition. So be careful you are growing the correct partitions on your qcow.

# virt-resize --expand /dev/sda1 undercloud-orig.qcow2 undercloud.qcow2

Output shown below:

virt-resize: warning: unknown/unavailable method for expanding the xfs
filesystem on /dev/sda1
/dev/sda1: This partition will be resized from 10.0G to 30.0G.
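Note the warning above: virt-resize grew the partition, but it may not have been able to expand the XFS filesystem inside it. If the filesystem still shows the old size after booting, you can grow it from inside the guest. The command below assumes /dev/sda1 is mounted at /.

# xfs_growfs /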

We now inspect the new disk.

# qemu-img info undercloud.qcow2
image: undercloud.qcow2
file format: qcow2
virtual size: 30G (32212254720 bytes)
disk size: 9.4G
cluster_size: 65536
Format specific information:
compat: 0.10

Finally, we verify that the filesystems have grown.

# virt-filesystems --long -h --all -a undercloud.qcow2
Name       Type        VFS  Label  MBR  Size  Parent
/dev/sda1  filesystem  xfs  -      -    30G   -
/dev/sda1  partition   -    -      83   30G   /dev/sda
/dev/sda   device      -    -      -    30G   -

We can now boot our Virtual Machine.
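If you want to double-check from inside the guest, boot it and inspect the root filesystem. The domain name undercloud below is an assumption; substitute the name of your own VM.

# virsh start undercloud

Then, from inside the guest:

# df -h /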

Managing RHEV VMs Via the Virsh CLI


Out of the box, you are not going to be able to run virsh commands on the CLI as root. Libvirt, virsh, and KVM are just not that tightly integrated in RHEV.

You can, however, follow the procedure below and create a username and password to use for authentication.

# saslpasswd2 -a libvirt fatmin
Password:
Again (for verification):

Now run virsh and enter the credentials you created above.

# virsh list --all
Please enter your authentication name: fatmin
Please enter your password:
Id Name State
----------------------------------------------------
10 HostedEngine running

Now you can shut down or start a VM. Here I am shutting down my RHEV HostedEngine.

# virsh destroy HostedEngine
Please enter your authentication name: fatmin
Please enter your password:
Domain HostedEngine destroyed
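Keep in mind that "virsh destroy" is a hard power-off, not a graceful shutdown; use "virsh shutdown" if you want the guest OS to shut down cleanly. Starting the VM back up works the same way, authenticating with the same credentials.

# virsh start HostedEngine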


NUMA CPU Pinning with KVM/Virsh


According to Wikipedia, NUMA is "a computer memory design used in multiprocessing, where the memory access time depends on the memory location relative to the processor. Under NUMA, a processor can access its own local memory faster than non-local memory (memory local to another processor or memory shared between processors). The benefits of NUMA are limited to particular workloads, notably on servers where the data are often associated strongly with certain tasks or users."

So what does this mean for virtual machine optimization under KVM/libvirt? It means that for best performance, you want to configure your multi-vCPU VMs to use only cores from the same physical CPU (or NUMA node).

So how do we do this? See the example below from one of my homelab servers. This machine has two hyperthreaded quad-core Xeons (X5550), for a total of 16 logical cores.

First we use the "lscpu" command to determine which CPU cores are tied to which physical CPU. See the two NUMA node lines at the bottom of the output below.

# lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Thread(s) per core: 2
Core(s) per socket: 4
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 26
Model name: Intel(R) Xeon(R) CPU X5550 @ 2.67GHz
Stepping: 5
CPU MHz: 2668.000
BogoMIPS: 5319.11
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 8192K
NUMA node0 CPU(s): 0-3,8-11 
NUMA node1 CPU(s): 4-7,12-15

Using the virsh command, we can inspect and change the CPU pinning for my test VM called "mytestvm", as shown below.
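Here is a minimal sketch, pinning both vCPUs of a two-vCPU guest to NUMA node0 (cores 0-3 and 8-11, per the lscpu output above). The domain name mytestvm and the vCPU count are just this example's assumptions.

# virsh vcpupin mytestvm
VCPU: CPU Affinity
----------------------------------
0: 0-15
1: 0-15

# virsh vcpupin mytestvm 0 0-3,8-11
# virsh vcpupin mytestvm 1 0-3,8-11

Add --config to make the pinning persist across guest restarts.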


Enable Nested Virtualization on RHEL 7


Follow the steps shown below if you want to be able to run nested virtual machines on RHEL 7 via KVM.

In this particular situation I have a physical Supermicro server that I want to use to host OpenStack.

Note that my home server has Intel Xeon processors, so I first enable nested virtualization via the kvm_intel module. AMD processors use the kvm_amd module instead.

cat << EOF > /etc/modprobe.d/kvm_intel.conf
options kvm-intel nested=1
options kvm-intel enable_shadow_vmcs=1
options kvm-intel enable_apicv=1
options kvm-intel ept=1
EOF

Also, in order to communicate with your nested VMs, you will need to disable reverse path filtering; otherwise RHEL will discard their network packets in order to prevent asymmetric routing. See below.

cat << EOF > /etc/sysctl.d/98-rp-filter.conf
net.ipv4.conf.default.rp_filter = 0
net.ipv4.conf.all.rp_filter = 0
EOF

The simplest way to enable these changes is via a reboot of the physical host.
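If you would rather not reboot, reloading the module and re-reading the sysctl configuration accomplishes the same thing, assuming no VMs are currently running. Either way, you can verify afterward that nesting is enabled.

# modprobe -r kvm_intel && modprobe kvm_intel
# sysctl --system
# cat /sys/module/kvm_intel/parameters/nested
Y

A value of Y (or 1, depending on the kernel version) means nested virtualization is on.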

Cloning A Virtual Machine in KVM

KVM (Kernel-based Virtual Machine) is a full virtualization solution for Linux on x86 hardware containing virtualization extensions (Intel VT or AMD-V).

KVM is now native to RHEL/CentOS, having replaced Xen as of RHEL 5.4.

What I have seen so far has been pretty interesting, but I have barely scratched the surface of KVM's inner workings.

Anyway, to clone a virtual machine, you run the virt-clone command to clone the virtual disk. This automatically creates the new virtual machine for you and registers it in virt-manager.

# virt-clone --original vm01 --name vm02 --file /vol0/vm02.img

Note that you do not need to tell virt-clone the location of the existing virtual disk. It figures this out on its own based on the name of the existing virtual machine.
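Also note that the original VM must be shut off (or paused) before cloning, or virt-clone will refuse to run. If you do not care about the new name or disk path, the --auto-clone flag will pick both for you:

# virt-clone --original vm01 --auto-clone

Afterward, virsh list --all should show the new clone in the shut off state.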