How to Manage Libvirt VMs via OpenStack Ironic (OSP10)


In this post I will document the steps that I use to create a fully virtualized OSP 10 environment in my lab. The undercloud node is a VM, as are the overcloud nodes. We will configure libvirt so that Ironic can power the overcloud VMs on and off on the underlying hypervisor.

Add the stack user on your hypervisor. In this case my hypervisor's hostname is virt01; however, we will refer to it as the hypervisor for clarity.

hypervisor# useradd stack
hypervisor# echo "password" | passwd stack --stdin

Modify polkit to allow the stack user to manage libvirt.

hypervisor# cat << EOF > /etc/polkit-1/localauthority/50-local.d/50-libvirt-user-stack.pkla
[libvirt Management Access]
Identity=unix-user:stack
Action=org.libvirt.unix.manage
ResultAny=yes
ResultInactive=yes
ResultActive=yes
EOF

Now attempt to connect to libvirt as stack via a remote session. Here we are just connecting back to the hypervisor, virt01. In the example below, 10.1.99.112 is the IP of the hypervisor; the undercloud has an IP of 10.1.99.10. (You will be prompted for the stack user's password until we copy an SSH key over in the next step.)

undercloud# virsh --connect qemu+ssh://stack@10.1.99.112/system list --all

Now ssh as stack to your undercloud VM.

Copy stack's public key to your hypervisor (virt01 in this case). In the command below, replace the IP address shown with the IP that your undercloud VM will use to connect to libvirt on the hypervisor.

undercloud# ssh-copy-id -i ~/.ssh/id_rsa.pub stack@10.1.99.112

Now we need to create a few virtual machines. Specifically, I am building an environment with five virtual machines to run virtualized Red Hat OpenStack Platform 10. My overcloud will consist of two compute nodes and three controller nodes.

I will use the command below to create five 60GB qcow2 disk images.

hypervisor# cd /var/lib/libvirt/images/
hypervisor# for i in {1..5}; do qemu-img create -f qcow2 \
-o preallocation=metadata overcloud-node$i.qcow2 60G; done
Formatting 'overcloud-node1.qcow2', fmt=qcow2 size=64424509440 encryption=off cluster_size=65536 preallocation='metadata' lazy_refcounts=off
Formatting 'overcloud-node2.qcow2', fmt=qcow2 size=64424509440 encryption=off cluster_size=65536 preallocation='metadata' lazy_refcounts=off
Formatting 'overcloud-node3.qcow2', fmt=qcow2 size=64424509440 encryption=off cluster_size=65536 preallocation='metadata' lazy_refcounts=off
Formatting 'overcloud-node4.qcow2', fmt=qcow2 size=64424509440 encryption=off cluster_size=65536 preallocation='metadata' lazy_refcounts=off
Formatting 'overcloud-node5.qcow2', fmt=qcow2 size=64424509440 encryption=off cluster_size=65536 preallocation='metadata' lazy_refcounts=off

The command below will create 5 XML files and use them to define my 5 VMs.

hypervisor# for i in {1..5}; do \
virt-install --ram 16384 --vcpus 4 --os-variant rhel7 \
--disk path=/var/lib/libvirt/images/overcloud-node$i.qcow2,device=disk,bus=virtio,format=qcow2 \
--noautoconsole --vnc --network network:provisioning --network bridge:br99 \
--network network:default --name overcloud-node$i \
--dry-run --print-xml > /tmp/overcloud-node$i.xml; \
virsh define --file /tmp/overcloud-node$i.xml; done
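
Note that the virt-install command above attaches each VM to a libvirt network named provisioning and to a bridge named br99; both are assumed to already exist on the hypervisor. If you still need to create the provisioning network, a minimal sketch might look like the following (the bridge name provbr0 is just a placeholder, and the network is intentionally isolated with no DHCP, since the undercloud will PXE boot the nodes):

hypervisor# cat << EOF > /tmp/provisioning.xml
<network>
  <name>provisioning</name>
  <bridge name='provbr0' stp='on' delay='0'/>
</network>
EOF
hypervisor# virsh net-define /tmp/provisioning.xml
hypervisor# virsh net-autostart provisioning
hypervisor# virsh net-start provisioning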

You should end up with the following virtual machines

 

hypervisor# virsh list --all
 Id    Name                           State
----------------------------------------------------
 1     undercloud                     running
 --    overcloud-node1                shut off
 --    overcloud-node2                shut off
 --    overcloud-node3                shut off
 --    overcloud-node4                shut off
 --    overcloud-node5                shut off

Back on the undercloud, we use the command below to grab the provisioning network MAC address from each virtual machine running on the hypervisor. We could run this command locally on the hypervisor, but since we need the MAC addresses for Ironic on the undercloud, we will run it from there.

undercloud$ for i in {1..5}; do virsh -c qemu+ssh://stack@10.1.99.112/system domiflist overcloud-node$i | awk '$3 == "provisioning" {print $5}'; done > /tmp/nodes.txt

Now we use our temp file above to populate the instackenv.json that we will import into ironic. See gist below
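
While the gist itself is not reproduced here, for reference a pxe_ssh entry in instackenv.json generally looks something like the sketch below, with one entry per VM. The MAC address comes from /tmp/nodes.txt, pm_addr is the hypervisor's IP, and pm_user/pm_password are the stack user and the contents of the SSH private key that the undercloud uses to reach the hypervisor (the values below are placeholders):

{
  "nodes": [
    {
      "name": "overcloud-node1",
      "pm_type": "pxe_ssh",
      "pm_addr": "10.1.99.112",
      "pm_user": "stack",
      "pm_password": "<contents of the stack user's SSH private key>",
      "mac": ["52:54:00:aa:bb:01"],
      "cpu": "4",
      "memory": "16384",
      "disk": "60",
      "arch": "x86_64"
    }
  ]
}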

At this point we are ready to import our nodes via Ironic.
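
On OSP 10 the import is typically run as the stack user on the undercloud; a sketch of the usual commands (assuming the file was saved as ~/instackenv.json) looks like this:

undercloud$ source ~/stackrc
undercloud$ openstack baremetal import --json ~/instackenv.json
undercloud$ openstack baremetal configure boot
undercloud$ openstack baremetal node list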

Note that I do not claim to be the original author of the steps documented above; rather, I wanted to ensure that I could easily consume these steps in the future.

Also, I look forward to experimenting with virtualbmc (which lets Ironic manage the VMs over emulated IPMI) and might stop using pxe_ssh altogether.


Cockpit for CentOS and RHEL 7: Install and Configure


Introduction

I have recently purchased three Dell servers and put myself to the task of building out a new lab. My old lab was in desperate need of updating, as I had long passed the point where 48GB of memory per node was sufficient. The cost of memory, old or new, was not even close to in line with the cheap server-grade CPUs that are perfect for lab servers. Today you can buy a used E7540, a low-power, 12-thread (HT enabled) Xeon, for less than $30 (USD) from a reputable retailer. Cram two of these into an 11th-gen Dell and you are in business.

So, three new (to me) Dell rackmounts, deployed as virtualization servers, and I want a simple way to view performance stats in a nice, clean single pane of glass. I am not in any way, shape, or form looking to build fancy dashboards or set up any sort of historical monitoring. I just want to know where the performance hot spots are when my environment seems to be running slowly.

I had installed Cockpit before on a laptop or two and thought it might fit the bill, especially since you can use one dashboard for multiple nodes.

So here we are going to deploy Cockpit on all three nodes; the steps are the same on each.

Prerequisites

First we must open a firewall port on each node.
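
With firewalld running on RHEL/CentOS 7, Cockpit ships with a predefined firewalld service, so opening the port should look something like this:

# firewall-cmd --permanent --add-service=cockpit
# firewall-cmd --reload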


Configure RHEL7/CentOS 7 as a Virtualization Host


This is a fresh install of RHEL 7.5

First install the packages as shown below.


# yum install qemu-kvm libvirt

Now install the additional recommended virtualization packages.


# yum install virt-install libvirt-python virt-manager libvirt-client

Now restart libvirtd


# systemctl restart libvirtd

 
Now you should be able to launch virt-manager from your remote machine and add a connection to your new virtualization host.
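
For example, assuming the virtualization host is reachable as virt01 and you have SSH access to it, something along these lines should open a remote connection from your workstation (the hostname is a placeholder):

$ virt-manager -c qemu+ssh://root@virt01/system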

Special note: make sure that you disable NetworkManager.


# systemctl stop NetworkManager
# systemctl disable NetworkManager

How to disable Cloud-Init in a RHEL Cloud Image


So this one is pretty simple. However, I found a lot of misinformation along the way, so I figured that I would jot down the proper (and simplest) process here.

Symptoms: a RHEL (or variant) VM that takes a very long time to boot. On the VM console, you can see the following output while the VM boot process is stalled, waiting for a timeout. Note that the message below has nothing to do with cloud-init, but it's the output that I have most often seen on the console while waiting for a VM to boot.

[  106.325574] random: crng init done

Note that I have run into this issue in both OpenStack (when booting from external provider networks) and in KVM.

Upon initial boot of the VM, run the command below.

touch /etc/cloud/cloud-init.disabled

Seriously, that’s it. No need to disable or remove cloud-init services. See reference.

 

 

RHEL 7 Two-Factor SSH Via Google Authenticator


In this post, I am going to walk you through the process of installing and configuring two-factor SSH authentication via Google Authenticator. My base system is running a fresh install of RHEL 7.2.

Installation Steps

The first step on my system was to install autoconf, automake, and libtool. These packages are required by the bootstrap.sh script that we will need to run in a couple more steps.

# yum -y install autoconf automake libtool

Now, we are going to install Git.

# yum -y install git

One more dependency to knock out. Install pam-devel as shown below.

# yum -y install pam-devel

Next, we clone the google-authenticator Git repo. In this example, I am cloning to /root

# git clone https://github.com/google/google-authenticator.git
Cloning into 'google-authenticator'...
remote: Counting objects: 1435, done.
remote: Total 1435 (delta 0), reused 0 (delta 0), pack-reused 1435
Receiving objects: 100% (1435/1435), 2.32 MiB | 0 bytes/s, done.
Resolving deltas: 100% (758/758), done.

Now change directory as shown below and run bootstrap.sh.

# cd /root/google-authenticator/libpam

# ./bootstrap.sh

Now run the following commands to finalize the module installs.

# ./configure

# make

# make install

Assuming that you do not run into any errors, the following modules will be installed.

  • /usr/local/lib/security/pam_google_authenticator.so
  • /usr/local/lib/security/pam_google_authenticator.la
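
Wiring the module into SSH is not covered in detail here, but the usual approach is roughly the sketch below: reference the module from the sshd PAM stack, enable challenge-response authentication in sshd, and generate a secret for your user with the google-authenticator command.

# echo 'auth required /usr/local/lib/security/pam_google_authenticator.so' >> /etc/pam.d/sshd
# sed -i 's/^ChallengeResponseAuthentication no/ChallengeResponseAuthentication yes/' /etc/ssh/sshd_config
# systemctl restart sshd
$ google-authenticator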


RHEL7: Registering a System with the Red Hat Network Using Subscription Manager


Note that I am about to give you the simpleton's version of registering a system. Expect nothing fancy below. In this post I will do my best to keep it brief.

If you need to register a newly built RHEL 7 system using Subscription Manager rather than the 'rhn_register' command (which is pretty much deprecated), you will need to run the command shown below. Including the "--auto-attach" option is pretty much the simplest way to register with RHN, as you do not need to keep track of any of your subscription names.

[root@rhel7 ~]# subscription-manager register --username <username> --password <password> --auto-attach

Once auto-attached, you can log into rhn.redhat.com and use the WebUI to pick and choose any additional subscriptions that you want to attach.

Once you have attached as many subscriptions as your heart desires (or you are licensed for), you can then run the following command to see each and every yum repository that you have access to, even if they are disabled.

[root@rhel7 ~]# yum repolist all

Depending on the number of repos that you have access to, you might want to narrow the list down a bit, as shown below.

[root@rhel7 ~]# yum repolist all | grep -i openstack-6

rhel-7-server-openstack-6.0-debug-rpms/7Server/x86_64                disabled
rhel-7-server-openstack-6.0-installer-debug-rpms/7Server/x86_64      disabled
rhel-7-server-openstack-6.0-installer-rpms/7Server/x86_64            disabled
rhel-7-server-openstack-6.0-installer-source-rpms/7Server/x86_64     disabled
rhel-7-server-openstack-6.0-rpms/7Server/x86_64                      disabled
rhel-7-server-openstack-6.0-source-rpms/7Server/x86_64               disabled

Now you can enable the repos that you need with the command below. Note that the enable option accepts wildcards, but also note that the repo names in the repolist output above carry a '/7Server/x86_64' suffix that is not part of the actual repo name. If you are not passing a wildcard to the enable command, you will need to trim the repo names down to the part before the first '/' for the command to succeed.

[root@rhel7 ~]# subscription-manager repos --enable rhel-7-server-openstack-6.0*
Repository 'rhel-7-server-openstack-6.0-rpms' is enabled for this system.
Repository 'rhel-7-server-openstack-6.0-source-rpms' is enabled for this system.
Repository 'rhel-7-server-openstack-6.0-debug-rpms' is enabled for this system.
Repository 'rhel-7-server-openstack-6.0-installer-debug-rpms' is enabled for this system.
Repository 'rhel-7-server-openstack-6.0-installer-source-rpms' is enabled for this system.
Repository 'rhel-7-server-openstack-6.0-installer-rpms' is enabled for this system.
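
To double-check the result, a quick way to list only what is now enabled is:

[root@rhel7 ~]# yum repolist enabled | grep -i openstack-6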

RHEL7 – How to Set Your Hostname in Red Hat Enterprise Linux 7


Setting a server's hostname used to be as simple as running the 'hostname' command and adding a "HOSTNAME" entry in /etc/sysconfig/network. However, things have changed quite a bit in RHEL 7. systemd now controls setting a server's hostname, and the 'hostname' command no longer persistently sets your hostname; the command is still available, however, just to confuse you.

Now in RHEL 7 you use the command ‘hostnamectl‘. Below is an example of how it works.

Here I have logged into a RHEL 7.1 VM. You can see that the system appears to have the hostname node1.

[root@node1 ~]# hostname
node1

However upon further inspection, I find that this is not the case. Rather, the server has a static hostname of localhost.localdomain.

[root@node1 ~]# hostnamectl
Static hostname: localhost.localdomain
Transient hostname: node1
Icon name: computer-vm
Chassis: vm
Machine ID: 4c26a2a3101947bfa2ec7d9c16824ca4
Boot ID: f58707942bd1458da48680025b6f1a53
Virtualization: vmware
CPE OS Name: cpe:/o:redhat:enterprise_linux:7.1:GA:server
Kernel: Linux 3.10.0-229.el7.x86_64
Architecture: x86_64

So let's set the hostname permanently using 'hostnamectl'.

[root@node1 ~]# hostnamectl set-hostname node1.packy.lab.localdomain

As you can see, the hostname now shows up correctly in the output of the 'hostname' command...

[root@node1 ~]# hostname
node1.packy.lab.localdomain

… and in the output from ‘hostnamectl’

[root@node1 ~]# hostnamectl
Static hostname: node1.packy.lab.localdomain
Icon name: computer-vm
Chassis: vm
Machine ID: 4c26a2a3101947bfa2ec7d9c16824ca4
Boot ID: f58707942bd1458da48680025b6f1a53
Virtualization: vmware
Operating System: Employee SKU
CPE OS Name: cpe:/o:redhat:enterprise_linux:7.1:GA:server
Kernel: Linux 3.10.0-229.el7.x86_64
Architecture: x86_64