How to Manage Libvirt VMs via OpenStack Ironic (OSP10)

Bear Metal, the Ironic mascot

In this post I will document the steps that I am using to create a fully virtualized OSP 10 environment in my lab. The undercloud node is a VM, as are the overcloud nodes. We will configure libvirt so that Ironic can boot and shut down the VMs on the underlying hypervisor.

Add the stack user on your hypervisor. In this case my hypervisor's hostname is virt01; however, we will refer to it as hypervisor for clarity.

[simterm]
hypervisor# useradd stack
hypervisor# echo "password" | passwd stack --stdin
[/simterm]

Modify polkit to allow stack user to manage libvirt.

[simterm]hypervisor# cat << EOF > /etc/polkit-1/localauthority/50-local.d/50-libvirt-user-stack.pkla
[libvirt Management Access]
Identity=unix-user:stack
Action=org.libvirt.unix.manage
ResultAny=yes
ResultInactive=yes
ResultActive=yes
EOF

[/simterm]

Now attempt to connect to libvirt as stack via a remote session. Here we are just connecting back to the localhost, virt01. In the example below, 10.1.99.112 is the IP of the hypervisor; the undercloud has an IP of 10.1.99.10.

[simterm]undercloud# virsh --connect qemu+ssh://stack@10.1.99.112/system list --all

[/simterm]

Now SSH to your undercloud VM as the stack user.

Copy stack's public key to your hypervisor (virt01 in this case). In the command below, replace the IP address shown with the IP that your undercloud VM will use to connect to libvirt on the hypervisor.

[simterm]undercloud# ssh-copy-id -i ~/.ssh/id_rsa.pub stack@10.1.99.112

[/simterm]

Now we need to create a few virtual machines. Specifically, I am building an environment with five virtual machines to run virtualized Red Hat OpenStack Platform 10. My overcloud will consist of two compute nodes and three controllers.

I will use the command below to create five qcow2 disk images.

[simterm]hypervisor# cd /var/lib/libvirt/images/

hypervisor# for i in {1..5}; do qemu-img create -f qcow2 \
-o preallocation=metadata overcloud-node$i.qcow2 60G; done
Formatting 'overcloud-node1.qcow2', fmt=qcow2 size=64424509440 encryption=off cluster_size=65536 preallocation='metadata' lazy_refcounts=off
Formatting 'overcloud-node2.qcow2', fmt=qcow2 size=64424509440 encryption=off cluster_size=65536 preallocation='metadata' lazy_refcounts=off
Formatting 'overcloud-node3.qcow2', fmt=qcow2 size=64424509440 encryption=off cluster_size=65536 preallocation='metadata' lazy_refcounts=off
Formatting 'overcloud-node4.qcow2', fmt=qcow2 size=64424509440 encryption=off cluster_size=65536 preallocation='metadata' lazy_refcounts=off
Formatting 'overcloud-node5.qcow2', fmt=qcow2 size=64424509440 encryption=off cluster_size=65536 preallocation='metadata' lazy_refcounts=off

[/simterm]

The command below will create five XML files and use those to define my five VMs.

[simterm]hypervisor# for i in {1..5}; do \
virt-install --ram 16384 --vcpus 4 --os-variant rhel7 \
--disk path=/var/lib/libvirt/images/overcloud-node$i.qcow2,device=disk,bus=virtio,format=qcow2 \
--noautoconsole --vnc --network network:provisioning --network bridge:br99 \
--network network:default --name overcloud-node$i \
--dry-run --print-xml > /tmp/overcloud-node$i.xml; \
virsh define --file /tmp/overcloud-node$i.xml; done

[/simterm]

You should end up with the following virtual machines:

[simterm]hypervisor# virsh list --all
 Id    Name                 State
----------------------------------------------------
 1     undercloud           running
 -     overcloud-node1      shut off
 -     overcloud-node2      shut off
 -     overcloud-node3      shut off
 -     overcloud-node4      shut off
 -     overcloud-node5      shut off

[/simterm]

Back on the undercloud, we use the command below to grab the provisioning network MAC address from each virtual machine running on the hypervisor. We could run this command locally on the hypervisor, but since we need the MAC addresses for Ironic on the undercloud, we will run it here.

[simterm]undercloud$ for i in {1..5}; do virsh -c qemu+ssh://stack@10.1.99.112/system domiflist overcloud-node$i | awk '$3 == "provisioning" {print $5}'; done > /tmp/nodes.txt

[/simterm]
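Before generating the JSON below, it is worth a quick sanity check that the file contains one MAC address per node; with five nodes you would expect:

[simterm]undercloud$ wc -l /tmp/nodes.txt
5 /tmp/nodes.txt
[/simterm]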

Now we use our temp file above to populate the instackenv.json that we will import into Ironic. See the gist below. Note that the SSH private key must be collapsed to a single line with \n escapes to produce valid JSON; the awk substitution in the gist handles that.


undercloud$ jq . << EOF > ~/instackenv.json
{
"ssh-user": "stack",
"ssh-key": "$(cat ~/.ssh/id_rsa)",
"power_manager": "nova.virt.baremetal.virtual_power_driver.VirtualPowerManager",
"host-ip": "192.168.122.1",
"arch": "x86_64",
"nodes": [
{
"name": "overcloud-node1",
"pm_addr": "192.168.122.1",
"pm_password": "$(cat ~/.ssh/id_rsa)",
"pm_type": "pxe_ssh",
"mac": [
"$(sed -n 1p /tmp/nodes.txt)"
],
"cpu": "4",
"memory": "8192",
"disk": "60",
"arch": "x86_64",
"pm_user": "stack"
},
{
"name": "overcloud-node2",
"pm_addr": "192.168.122.1",
"pm_password": "$(cat ~/.ssh/id_rsa)",
"pm_type": "pxe_ssh",
"mac": [
"$(sed -n 2p /tmp/nodes.txt)"
],
"cpu": "4",
"memory": "8192",
"disk": "60",
"arch": "x86_64",
"pm_user": "stack"
},
{
"name": "overcloud-node3",
"pm_addr": "192.168.122.1",
"pm_password": "$(cat ~/.ssh/id_rsa)",
"pm_type": "pxe_ssh",
"mac": [
"$(sed -n 3p /tmp/nodes.txt)"
],
"cpu": "4",
"memory": "8192",
"disk": "60",
"arch": "x86_64",
"pm_user": "stack"
},
{
"name": "overcloud-node4",
"pm_addr": "192.168.122.1",
"pm_password": "$(cat ~/.ssh/id_rsa)",
"pm_type": "pxe_ssh",
"mac": [
"$(sed -n 4p /tmp/nodes.txt)"
],
"cpu": "4",
"memory": "8192",
"disk": "60",
"arch": "x86_64",
"pm_user": "stack"
},
{
"name": "overcloud-node5",
"pm_addr": "192.168.122.1",
"pm_password": "$(cat ~/.ssh/id_rsa)",
"pm_type": "pxe_ssh",
"mac": [
"$(sed -n 5p /tmp/nodes.txt)"
],
"cpu": "4",
"memory": "8192",
"disk": "60",
"arch": "x86_64",
"pm_user": "stack"
}
]
}
EOF


At this point we are ready to import our nodes via Ironic.
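For OSP 10 (Newton-era TripleO), that import typically looks something like the following; verify the exact commands against your environment before relying on them:

[simterm]undercloud$ source ~/stackrc
undercloud$ openstack baremetal import --json ~/instackenv.json
undercloud$ openstack baremetal configure boot
undercloud$ ironic node-list
[/simterm]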

Note that I do not claim to be the original author of the steps documented above; rather, I wanted to ensure that I could easily consume these steps in the future.

Also, I look forward to experimenting with the VirtualBMC (vbmc) Ironic driver, and I might stop using pxe_ssh altogether.
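For reference, vbmc exposes each libvirt domain as its own IPMI endpoint on a dedicated port, which Ironic can then manage with its standard IPMI drivers. It looks roughly like this (the port and credentials below are made up for illustration):

[simterm]hypervisor# vbmc add overcloud-node1 --port 6230 --username admin --password password
hypervisor# vbmc start overcloud-node1
hypervisor# ipmitool -I lanplus -H 10.1.99.112 -p 6230 -U admin -P password power status
[/simterm]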

HomeLab Adventures: My HomeLab Overview

Today I am finally going to take the time to write an article about my lab. A while back, I posted an article detailing the FreeNAS server that I built to provide shared storage to my virtualization lab. Today, however, I am going to attempt to dive in deep and write, in at least a bit of detail, about the other components of my lab.

Wall mounted network rack in lab

Network

Over the last few years I have collected quite a bit of network gear. My job has given me the ability to take home decommissioned equipment from time to time, and Craigslist has helped me round out my collection. Below is what I currently have deployed. Not everything is currently powered on, as it's crazy loud when all those tiny fans are a'screaming.

Wall mounted network rack in office

The equipment is distributed between my home office and my lab in the next room over. Everything is racked and stacked in wall-mounted telco/network racks. A patch panel and a number of copper runs connect everything together.


HomeLab Adventures: Freenas Volume 1


So I am not going to lie: I am a very sick man, but I am also not afraid to admit it. I have a terrible, terrible addiction, which is my homelab.

It all started out so innocently... an old Sun Ultra 5 to learn SPARC Solaris at home, a couple of desktops converted over to rackmount cases and racked in a cheap telecom rack in my unfinished basement.

This was very early in my career, when I had a lot to learn and plenty of free time to study. However, that was many moons ago.

I measure the time that has passed since then by the amount of gray that has crept into my beard. As I moved from one role to the next, I found that I had the pick of the litter when it came to retired equipment.

Previously I would have been lucky to land an old Xeon (without virtualization support) to take home, something chock full of PCI-X cards (or worse, SCSI) that were useless to me in a desktop. Now, however, I was landing quad-core Nehalems (perfect for virtualization) with handfuls of memory and sexy PCIe SAS/SATA RAID controllers.

Oh, and tons of SSDs that were considered too small not six months after they were unboxed. Let's not even get into my networking setup, as that is a tale for a different day.

Once I had deployed a couple of very nice and fully loaded ESX servers, I came to find that the performance bottleneck in my lab was storage. Sure, I had terabytes of SAS and SATA disk, but it was all local; I had nothing that allowed me to fail over between hosts. Thus began a quest... a quest for the ages.

Knowing myself as I do, I knew that I was not going to be satisfied by throwing a cheap NAS together out of a couple of SATA disks. No, desktop performance was not going to cut it. I needed 15K SAS, a RAID controller with battery backup, a handful of spindles, and a beefy tower to allow for plenty of expansion (yes, all my machines were converted to towers). I also knew I was going to need to use LACP or some other network bonding to cable my creation into my network. Heck, I even dared check out the cost of a cheap 10Gb small-business-class switch (yup, too expensive... let's wait a year or so).

Which brings us to today: the day I fired up my first FreeNAS box.

My rough specs are as follows:

  • Gigabyte Z97-HD3
  • Intel Core i3, 3.8GHz
  • 5x 600GB 15K SAS, RAID-Z1
  • 1x 32GB SSD
  • 2x 4TB 7.2K SATA, RAID 1
  • 16GB memory
  • LSI 9260-8i

So now what? Move some VMs onto it and call it a day? Well, that's no fun. Let's see what kind of performance we can push through this baby. I mean, after all, we are not using 15K SAS drives for nothing.

Side note: it's not exactly plug and play when it comes to using SAS drives in a standard tower. Even if you have a SAS-capable controller, you are going to need a backplane of some sort to provide power and I/O connectivity. Finding something that fits the bill, without having to use a cheap one-off backplane, is a challenge to say the least. For my lab I picked up a couple of these. 99% of what you see in the box stores will not support SAS drives, and it's not always obvious at first glance; you have to check the specs on the side of the box. Also, don't walk into Fry's thinking you will find one... I have tried. Microcenter seems to be the only large chain that stocks an internal SAS enclosure.

For testing, I have SSH'd into a Linux desktop that is on the same network as the FreeNAS box. The desktop has only a 1Gb network interface. Both systems are cabled northbound to a Cisco 3560G.

First, let's mount up our RAID-Z1 volume by sticking this in our /etc/fstab and running mount /mnt.

freenas:/mnt/freenas-vol-1      /mnt    nfs     rsize=8192,wsize=8192,timeo=14,intr     0 0
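If the fstab entry is right, the share should mount cleanly; a quick way to confirm (the desktop# prompt here is just illustrative):

desktop# mount /mnt
desktop# df -h /mnt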

Boom, there it is: our new fancy mount. Now to run the tests. However, that will come in part 2, as I plan not to rush through this. As far as I understand, there can be a bit of tuning in FreeNAS, so it might take me a bit to get everything dialed in.


HomeLab: Upgrading Cisco IOS Via tftp on RHEL

So I was planning on blogging about this process simply because I keep forgetting it. Being that I am in the middle of building out my lab, and have a ton of old Cisco hardware with ancient images, I have found myself going through the process of updating firmware quite a bit as I tinker with different IOS images.

The first thing that you are going to need is a tftp server, which I am running on my Fedora 18 desktop. It's a very easy setup, and it has been simply documented at the link below. Note that the article below is specifically about how to set up a tftp server on RHEL; however, the process is pretty much the same.

How to Install a tftp Server on RHEL
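For reference, the RHEL 6-era setup boils down to roughly the following (from memory, so treat it as a sketch; on newer Fedora releases the tftp server runs from a systemd socket rather than xinetd, and the image name below is just a placeholder):

server# yum install -y tftp-server xinetd
server# vi /etc/xinetd.d/tftp     # set "disable = no"
server# service xinetd restart
server# cp c3560-ios-image.bin /var/lib/tftpboot/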

Now, if you are looking to set up a tftp server on Windows, well, I cannot help you there. If I recall correctly, you just download and install a server app and away you go.

Below is another link that I have found that does a fine job of explaining the Cisco side of the process simply, without getting too technical and long-winded.

How to Upgrade Cisco IOS Images
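The short version of the Cisco side looks roughly like this (the tftp server IP and image name are placeholders; see the linked article for the specifics):

switch# copy tftp://10.1.99.20/c3560-ios-image.bin flash:
switch# configure terminal
switch(config)# boot system flash:/c3560-ios-image.bin
switch(config)# end
switch# write memory
switch# reload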

 


HomeLab: The Cisco 3560G

The Cisco Catalyst 3560G is a layer 3 switch which went end-of-life in 2009. For the home lab it's a pretty nice switch to have due to its layer 3 support and gigabit speed. Mine is the WS-C3560G-24TS: 24 gigabit ports and four 1Gb SFP uplinks.

The Cisco Catalyst 3560 is available with one of two software images:

  • IP Base software includes advanced quality of service (QoS),
    rate limiting, access control lists (ACLs), Open Shortest Path First
    (OSPF) for routed access, and IPv6 functionality.
  • IP Services software provides a broader set of
    enterprise-class features, including advanced hardware-based IP Unicast
    and IP Multicast routing, as well as policy-based routing (PBR).
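
If you are not sure which image a given switch is running, show version will tell you; the image filename in the output below is purely illustrative:

switch# show version | include image
System image file is "flash:/c3560-ipservicesk9-mz.122-55.SE12.bin"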

Anyway, gathered here are a few mandatory resources for the Cisco Catalyst 3560G.

Related articles

VLAN configuration between Cisco & SonicWall help
Cisco WS-C3560X-24T-L 3560X Catalyst Switch