Red Hat OpenStack 13: Containerized Services Operations Guide


Contain Yourself

With the release of Red Hat OpenStack 13, the move to containerized overcloud services is complete. Traditional systemd services such as RabbitMQ, HAProxy, and MariaDB now run as containers on the overcloud. This move to containers is meant to provide additional stability, control, and security to the platform. Future upgrades should be easier, and future deployments should be more flexible.

However, the move to containers brings with it a new set of operational challenges.

The average OpenStack administrator no longer restarts services; they restart containers.

They no longer check a RabbitMQ cluster's status directly on the controller node, but rather from within a container running on that node. Log locations have changed. Config file locations have changed.
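To give a taste of what that looks like in practice, here is a rough sketch of the kind of commands an operator now runs on an OSP 13 controller. This is illustrative only: container names vary by deployment, and pacemaker-managed bundles such as RabbitMQ, HAProxy, and MariaDB should be restarted with pcs rather than docker.

controller# docker ps                                        # list the running service containers
controller# docker exec $(docker ps -q -f name=rabbitmq) rabbitmqctl cluster_status
controller# docker restart nova_api                          # restart a (non-pacemaker) service by restarting its container
controller# ls /var/log/containers/nova/                     # service logs now live under /var/log/containers
controller# ls /var/lib/config-data/puppet-generated/nova/   # rendered config files live here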

So let’s retrain ourselves.



How to Manage Libvirt VMs via OpenStack Ironic (OSP10)


Bear Metal

 

In this post I will document the steps that I am using to create a fully virtualized OSP 10 environment in my lab. The undercloud node is a VM, as are the overcloud nodes. We will configure libvirt so that Ironic can boot and shut down the VMs on the underlying hypervisor.

Add the stack user on your hypervisor. In this case my hypervisor's hostname is virt01; however, we will refer to it as hypervisor for clarity.

hypervisor# useradd stack
hypervisor# echo "password" | passwd stack --stdin

Modify polkit to allow stack user to manage libvirt.

hypervisor # cat << EOF > /etc/polkit-1/localauthority/50-local.d/50-libvirt-user-stack.pkla
[libvirt Management Access]
Identity=unix-user:stack
Action=org.libvirt.unix.manage
ResultAny=yes
ResultInactive=yes
ResultActive=yes
EOF

Now attempt to connect to libvirt as stack via a remote session. Here we are just connecting back to the localhost, virt01. In the example below, 10.1.99.112 is the IP of the hypervisor, and the undercloud has an IP of 10.1.99.10.

undercloud# virsh --connect qemu+ssh://stack@10.1.99.112/system list --all

Now ssh to your undercloud VM as the stack user.

Copy stack's public key to your hypervisor (virt01 in this case). In the command below, replace the IP address shown with the IP that your undercloud VM will use to connect to libvirt on the hypervisor.

undercloud# ssh-copy-id -i ~/.ssh/id_rsa.pub stack@10.1.99.112

Now we need to create a few virtual machines. Specifically, I am building an environment with five VMs to run virtualized Red Hat OpenStack. My overcloud will consist of two compute nodes and three controller nodes.

I will use the command below to create 5 qcows.

hypervisor# cd /var/lib/libvirt/images/
hypervisor# for i in {1..5}; do qemu-img create -f qcow2 \
-o preallocation=metadata overcloud-node$i.qcow2 60G; done
Formatting 'overcloud-node1.qcow2', fmt=qcow2 size=64424509440 encryption=off cluster_size=65536 preallocation='metadata' lazy_refcounts=off
Formatting 'overcloud-node2.qcow2', fmt=qcow2 size=64424509440 encryption=off cluster_size=65536 preallocation='metadata' lazy_refcounts=off
Formatting 'overcloud-node3.qcow2', fmt=qcow2 size=64424509440 encryption=off cluster_size=65536 preallocation='metadata' lazy_refcounts=off
Formatting 'overcloud-node4.qcow2', fmt=qcow2 size=64424509440 encryption=off cluster_size=65536 preallocation='metadata' lazy_refcounts=off
Formatting 'overcloud-node5.qcow2', fmt=qcow2 size=64424509440 encryption=off cluster_size=65536 preallocation='metadata' lazy_refcounts=off

The command below will create 5 XML files and use them to define my 5 VMs.

hypervisor# for i in {1..5}; do \
virt-install --ram 16384 --vcpus 4 --os-variant rhel7 \
--disk path=/var/lib/libvirt/images/overcloud-node$i.qcow2,device=disk,bus=virtio,format=qcow2 \
--noautoconsole --vnc --network network:provisioning --network bridge:br99 \
--network network:default --name overcloud-node$i \
--dry-run --print-xml > /tmp/overcloud-node$i.xml; \
virsh define --file /tmp/overcloud-node$i.xml; done

You should end up with the following virtual machines:

hypervisor# virsh list --all
 Id    Name               State
----------------------------------------------------
 1     undercloud         running
 --    overcloud-node1    shut off
 --    overcloud-node2    shut off
 --    overcloud-node3    shut off
 --    overcloud-node4    shut off
 --    overcloud-node5    shut off

Back on the undercloud, we use the command below to grab the provisioning network MAC address from each virtual machine running on the hypervisor. We could run this command locally on the hypervisor, but since we need the MAC addresses for Ironic on the undercloud, we will run it here.

undercloud$ for i in {1..5}; do virsh -c qemu+ssh://stack@10.1.99.112/system domiflist overcloud-node$i | awk '$3 == "provisioning" {print $5}'; done > /tmp/nodes.txt

Now we use our temp file above to populate the instackenv.json that we will import into ironic. See gist below
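As an illustration of the shape of the file (not my exact gist), a minimal instackenv.json entry for the pxe_ssh driver looks something like the example below. The MAC address comes from /tmp/nodes.txt, and the pm_* values here are placeholders for this lab; pm_password can be either the stack user's password or the contents of an SSH private key, depending on how you authenticate.

{
  "nodes": [
    {
      "name": "overcloud-node1",
      "arch": "x86_64",
      "cpu": "4",
      "memory": "16384",
      "disk": "60",
      "mac": ["52:54:00:aa:bb:01"],
      "pm_type": "pxe_ssh",
      "pm_addr": "10.1.99.112",
      "pm_user": "stack",
      "pm_password": "password"
    }
  ]
}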

At this point we are ready to import our nodes via Ironic.
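For reference, on OSP 10 the import typically looks something like the following, run as the stack user on the undercloud with the stackrc sourced (a sketch; adjust paths for your environment):

undercloud$ source ~/stackrc
undercloud$ openstack baremetal import --json ~/instackenv.json
undercloud$ openstack baremetal configure boot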

Note that I do not claim to be the original author of the steps documented above; rather, I wanted to ensure that I could easily consume these steps in the future.

Also, I look forward to experimenting with the vbmc ironic driver and might stop using pxe_ssh altogether.

Cockpit for CentOS and RHEL 7: Install and Configure


Introduction

I have recently purchased 3 Dell servers and put myself to the task of building out a new lab. My old lab was in desperate need of updating, as I had long passed the point where 48GB of memory per node was sufficient. The cost of memory, old or new, was not even close to in line with the cheap server-grade CPUs that are perfect for lab servers. Today you can buy a used E7540, a low-power Xeon with 12 threads (6 cores with HT enabled), for less than $30 (USD) from a reputable retailer. Cram two of these into an 11th-generation Dell and you are in business.

So, three new (to me) Dell rackmounts, deployed as virtualization servers, and I want a simple way to view performance stats in a nice, clean, single pane of glass. I am not in any way, shape, or form looking to build fancy dashboards or set up any sort of historical monitoring. I just want to know where the performance hot spots are when my environment seems to be running slowly.

I had installed Cockpit before on a laptop or two and thought it might fit the bill, especially since you can use one dashboard for multiple nodes.

So here we are going to deploy Cockpit on all three nodes; the steps are the same on each.

Prerequisites

First we must open a firewall port on each node.
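On RHEL 7/CentOS 7 with firewalld, that looks something like the commands below (a sketch: Cockpit listens on TCP port 9090, and recent firewalld builds ship a cockpit service definition that covers it; if yours does not, --add-port=9090/tcp does the same thing).

node# firewall-cmd --permanent --add-service=cockpit
node# firewall-cmd --reload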


CentOS 7/RHEL 7 – Collectd Config for Libvirt, Carbon-Cache


The following is the collectd config that I am running on my RHEL 7 KVM hypervisor. This is not meant to be an all-inclusive config for collectd; rather, I am looking to gather basic performance metrics on my hypervisors and VMs.

Note that I have disabled selinux, as I am running these hypervisors in my lab. Do not do the same in your production environments.


# setenforce 0

I also edited /etc/selinux/config as shown below. Again, this is for non-prod/test environments; do not disable SELinux in production.


# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of three values:
#     targeted - Targeted processes are protected,
#     minimum - Modification of targeted policy. Only selected processes are protected.
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted

Also note that I have already configured the EPEL repo. Again, not production.

EPEL repo installed via the command below


# rpm -ivh https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm

First, a couple of steps to get us up and running: we need to install the collectd packages.


# yum -y install collectd collectd-virt.x86_64 collectd-rrdtool.x86_64 collectd-sensors.x86_64 collectd-smart.x86_64 collectd-netlink.x86_64 collectd-ipmi.x86_64

We also need to write our config.

You can pull my config using the gist below
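To give a sense of the shape of such a config, a trimmed-down /etc/collectd.conf along the lines below loads the basics plus the libvirt and graphite plugins. The plugin names are real collectd plugins, but the hostname and carbon host shown here are placeholders for my lab; point the write_graphite node at whatever is running your carbon-cache.

Hostname "virt01"

LoadPlugin cpu
LoadPlugin memory
LoadPlugin disk
LoadPlugin interface
LoadPlugin virt
LoadPlugin write_graphite

<Plugin virt>
  Connection "qemu:///system"
  RefreshInterval 60
</Plugin>

<Plugin write_graphite>
  <Node "carbon">
    Host "graphite.lab.example"
    Port "2003"
    Protocol "tcp"
    Prefix "collectd."
  </Node>
</Plugin>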

We now need to enable and start collectd


# systemctl enable collectd
# systemctl start collectd

 

Configure RHEL 7/CentOS 7 as a Virtualization Host


This is a fresh install of RHEL 7.5.

First install the packages as shown below.


# yum install qemu-kvm libvirt

Now install the additional recommended virtualization packages.


# yum install virt-install libvirt-python virt-manager libvirt-client

Now restart libvirtd


# systemctl restart libvirtd

 
Now you should be able to launch virt-manager from your remote machine and add a connection to your new virtualization host.
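For example, from a workstation that can ssh to the host, you can point virt-manager at the new server directly (the root@virt01 URI here is an assumption; adjust the user and hostname for your environment):

workstation$ virt-manager -c qemu+ssh://root@virt01/system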

Special note: make sure that you disable NetworkManager.


# systemctl stop NetworkManager
# systemctl disable NetworkManager

Cisco Catalyst: Configure a Trunk Port with a Native Vlan


This post will show you how to configure a trunk port with a native VLAN. In this specific example I am configuring a range of ports, but the commands for an individual interface are the same once you have entered config mode for that port.

First enter configuration mode

s3560#conf terminal

Now we configure the range of ports. Here we are configuring the ports as a trunk using dot1q encapsulation. We are allowing VLANs 96-99 and VLAN 101, and we are setting VLAN 101 as the native VLAN (no VLAN tagging required).

s3560(config)#interface range gigabitEthernet 0/13 - 24
s3560(config-if-range)#description generic trunk ports
s3560(config-if-range)#switchport trunk encapsulation dot1q
s3560(config-if-range)#switchport mode trunk
s3560(config-if-range)#switchport trunk native vlan 101
s3560(config-if-range)#switchport trunk allowed vlan 96-99,101

Now we enable spanning-tree portfast and bpduguard

s3560(config-if-range)#spanning-tree portfast

%Warning: portfast should only be enabled on ports connected to a single
host. Connecting hubs, concentrators, switches, bridges, etc… to this
interface when portfast is enabled, can cause temporary bridging loops.
Use with CAUTION

%Portfast will be configured in 13 interfaces due to the range command
but will only have effect when the interfaces are in a non-trunking mode.

s3560(config-if-range)#spanning-tree bpduguard enable
s3560(config-if-range)#end

Save your config changes

s3560#copy run start
Destination filename [startup-config]?
Building configuration…
[OK]
0 bytes copied in 1.443 secs (0 bytes/sec)
s3560#

Cisco Catalyst: Reset/Clear Port Config


Reset Port Config to Default for One Port

Use the command below to clear the configuration on a port.

s3560(config)#default interface gigabitEthernet 0/3
Interface GigabitEthernet0/3 set to default configuration

Use the command below to verify that the running config for the port has been reset to the default configuration.

s3560#show running-config interface gigabitEthernet0/3
Building configuration…

Current configuration : 36 bytes
!
interface GigabitEthernet0/3
end

Now write your changes to memory.

s3560#copy run start
Destination filename [startup-config]?
Building configuration…
[OK]
0 bytes copied in 1.427 secs (0 bytes/sec)

Reset Port Config to Default for a Range of Ports

Below we are going to clear/delete the config from a range of ports

s3560#conf terminal

s3560(config)#default interface range gigabitEthernet 0/13 - 24

Again, don’t forget to save your changes.