Creating and Deleting OpenStack Pacemaker IP Addresses


You can use the steps below if you need to change managed IP resources, for example, if you need to re-IP your RHEL OSP Overcloud endpoints.

In this example, we are changing a managed VIP from one IP to another.

First, let's get a good look at the resource that we want to delete. Here we are going to delete the resource ip-, which starts the VIP.

# pcs resource show ip-
Resource: ip- (class=ocf provider=heartbeat type=IPaddr2)
Attributes: ip= cidr_netmask=32
Operations: start interval=0s timeout=20s (ip-
stop interval=0s timeout=20s (ip-
monitor interval=10s timeout=20s (ip-

Now let’s actually delete it.

# pcs resource delete ip-
Attempting to stop: ip-…Stopped

Now let's create the replacement VIP.

# pcs resource create ip- ocf:heartbeat:IPaddr2 ip= cidr_netmask=32 op monitor interval=10s

Now, let’s take a good look at it.

# pcs resource show ip-
Resource: ip- (class=ocf provider=heartbeat type=IPaddr2)
Attributes: ip= cidr_netmask=32
Operations: start interval=0s timeout=20s (ip-
stop interval=0s timeout=20s (ip-
monitor interval=10s (ip-

Now we need to check to make sure that the VIP started on one of our OpenStack controllers.

# pcs status | grep
ip- (ocf::heartbeat:IPaddr2): Started ctrl01

For good measure, let’s make sure we can ping it.

# ping
PING ( 56(84) bytes of data.
64 bytes from icmp_seq=1 ttl=64 time=0.781 ms
64 bytes from icmp_seq=2 ttl=64 time=1.21 ms
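
If you want one more sanity check beyond ping, you can also confirm that the address is actually plumbed on the controller where it started (ctrl01 in this case). A minimal sketch, with the new VIP left as a placeholder:

# ip addr show | grep <new-vip>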



Configuring ControlPlaneSubnetCidr in RHEL OSP 7.2



In previous versions of RHEL OSP 7, the Control Plane/Provisioning network interface was assigned via DHCP and was not managed via Heat. Starting in 7.2, this interface is managed via Heat.

Sample Heat Template

Below is an example from /home/stack/templates/nic-configs/compute.yaml or /home/stack/templates/nic-configs/controller.yaml. In this example we are hard-coding the interface name; this is not required, although I recommend it.

    type: OS::Heat::StructuredConfig
    properties:
      group: os-apply-config
      config:
        os_net_config:
          network_config:
            - type: interface
              name: em3
              use_dhcp: false
              addresses:
                - ip_netmask:
                    list_join:
                      - '/'
                      - - {get_param: ControlPlaneIp}
                        - {get_param: ControlPlaneSubnetCidr}
              routes:
                - ip_netmask: 169.254.169.254/32
                  next_hop: {get_param: EC2MetadataIp}

Note that this new configuration requires an additional parameter to be added to your top-level template, usually named network-environment.yaml.

ControlPlaneSubnetCidr: "23"

Stick this next to the "ControlPlaneIp" entry under "parameter_defaults":

  ControlPlaneSubnetCidr: "23"
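
For context, a minimal sketch of that part of network-environment.yaml is shown below; the other parameters and values are placeholders for a typical environment, not taken from this deployment:

parameter_defaults:
  ControlPlaneSubnetCidr: "23"
  ControlPlaneDefaultRoute: 192.0.2.1
  EC2MetadataIp: 192.0.2.1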

Note that if you forget to add this parameter, the CIDR for this network will default to "24", which may or may not be correct for your environment. So watch out.

Your Cloud Installed Before Lunch with QuickStart Cloud Installer 1.0

What if I told you that you can have your Red Hat Enterprise Linux (RHEL) based Cloud infrastructure, with Red Hat Virtualization, OpenStack, OpenShift and CloudForms all setup before you have to s…

Source: Your Cloud Installed Before Lunch with QuickStart Cloud Installer 1.0

Mapping Virtual Networks with plotnetcfg


Plotnetcfg is a Linux utility that you can use to scan the networking configuration on a server and output the configuration hierarchy to a file. Plotnetcfg is most useful when troubleshooting complex virtual networks with all sorts of bonds and bridges, the likes of which you will find on KVM nodes, or OpenStack Controller nodes.

You can install plotnetcfg on RHEL/CentOS as shown below.

# yum -y install plotnetcfg.x86_64

You will also want to install the "dot" command, which ships with graphviz. See below.

# yum -y install graphviz.x86_64

Now that the bits and pieces are installed, we can run the command below, which outputs to a PDF file named file.pdf.

# plotnetcfg | dot -Tpdf > file.pdf

If you want to, you can also use "convert" to convert the PDF to a JPG. For example, I exported to JPG to embed the image below.
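
One way to do that, assuming the ImageMagick convert command is available (the -density flag is only there to keep the output readable):

# convert -density 150 file.pdf file.jpg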


Super clean, and super easy to read and understand.



Component, Plug-In, and Driver Support in RHEL OpenStack Platform




This article details all supported drivers and plugins for each RHEL OSP component (Cinder, Glance, Neutron, etc.). Last updated 7/11/16.

Link Below:

OpenStack Ironic Troubleshooting – Neutron Port in Use


We ran into issues with a deploy via RHEL OSP director, which caused our Heat stack compute scale-out to fail.

We corrected the issue, and then attempted our deploy again. This time around, we were able to scale out by several compute nodes. However, several nodes failed to deploy properly.

After trudging through the logs for a bit, we were able to find the error below in /var/log/nova/nova-conductor.log.

2016-07-25 18:34:39.453 27374 ERROR nova.scheduler.utils [req-1caa3ca0-9e13-4340-b8b2-80b1cf2f8a7f 8408462bd5a8445c9742ea4dfbc20d70 cc44dc9e68064e64899697ac610c8f06 - - -] [instance: e44e3a04-a47e-4a8e-9eb5-0037c1175e4d] Error from last host: fatmin.lab.localdomain (node c5177fbf-ae0f-49db-94f4-087537b3dd53): [u'Traceback (most recent call last):\n', u' File "/usr/lib/python2.7/site-packages/nova/compute/", line 1905, in _do_build_and_run_instance\n filter_properties)\n', u' File "/usr/lib/python2.7/site-packages/nova/compute/", line 2058, in _build_and_run_instance\n instance_uuid=instance.uuid, reason=six.text_type(e))\n', u'RescheduledException: Build of instance e44e3a04-a47e-4a8e-9eb5-0037c1175e4d was re-scheduled: Port 14:18:77:3e:1a:bf is still in use.\n']

To resolve the issue, we ran the following command, using the MAC address shown above to narrow down the search.

# neutron port-list | grep "14:18:77:3e:1a:bf"
| 25a366df-1e6c-4eb8-853c-5b7db82637f0 | | 14:18:77:3e:1a:bf | {"subnet_id": "0310f210-63ad-4616-9338-d59ac13cc0be", "ip_address": ""} |

The command "neutron port-show" showed us that this port was down and was not responding to ping.
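
For reference, that check looks something like this, using the port ID from the listing above (output omitted here):

# neutron port-show 25a366df-1e6c-4eb8-853c-5b7db82637f0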

We then deleted the port via Neutron.

# neutron port-delete 25a366df-1e6c-4eb8-853c-5b7db82637f0

We then re-ran our deploy and were able to scale without issue.
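
For completeness, the scale-out re-run is just the normal overcloud deploy command with the desired compute count; something along these lines, where the environment file and node count are placeholders rather than the exact command we used:

# openstack overcloud deploy --templates \
  -e /home/stack/templates/network-environment.yaml \
  --compute-scale 6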


Managing RHEV VMs Via the Virsh CLI


Out of the box, you are not going to be able to run virsh commands on the CLI as root. Libvirt, virsh, and KVM are just not that tightly integrated in RHEV.

You can, however, follow the procedure below and create a user and password to use for authentication.

# saslpasswd2 -a libvirt fatmin
Password:
Again (for verification):

Now run your virsh command and enter the credentials you created above.

# virsh list --all
Please enter your authentication name: fatmin
Please enter your password:
Id Name State
10 HostedEngine running

Now you can shut down or start a VM. Here I am shutting down my RHEV HostedEngine.

# virsh destroy HostedEngine
Please enter your authentication name: fatmin
Please enter your password:
Domain HostedEngine destroyed
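
Starting a VM back up works the same way. A minimal sketch using a hypothetical guest name; you will get the same credential prompts:

# virsh start myguest
Please enter your authentication name: fatmin
Please enter your password:
Domain myguest started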