Red Hat Satellite 6.x: Restarting Services



Red Hat Satellite consists of a number of running services. Restarting each service manually can be painful. Luckily you can use the commands below to easily restart all services.

List Services

Run the command below to view a list of all Satellite services that are started at boot.

# katello-service list
Redirecting to 'foreman-maintain service'
Running Service List
List applicable services:
dynflowd.service enabled
foreman-proxy.service enabled
httpd.service enabled
postgresql.service enabled
pulp_celerybeat.service enabled
pulp_resource_manager.service enabled
pulp_streamer.service enabled
pulp_workers.service enabled
puppetserver.service enabled
qdrouterd.service enabled
qpidd.service enabled
rh-mongodb34-mongod.service enabled
smart_proxy_dynflow_core.service enabled
squid.service enabled
tomcat.service enabled

All services listed [OK]

Check Service Status

The command below will check the status of all Satellite services. The output is similar to running systemctl status on each Satellite-specific service, and it can be quite verbose.

# katello-service status

Stop Services

Use the command below to stop all Satellite services.

# katello-service stop

Start Services

Use the command below to start all Satellite services.

# katello-service start

Restart Services

The command below will restart all Satellite services.

# katello-service restart
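
As the redirect message above shows, katello-service is a wrapper around foreman-maintain on recent Satellite 6.x releases, so you can also call foreman-maintain directly; the equivalent restart would be:

# foreman-maintain service restart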


RHEL7: Install RHV Guest Agent and Drivers


About the Guest Agent

Once installed on a RHEL 7.x virtual machine, the guest agent in RHV 4.x provides drivers, additional data, and extra functionality.

The guest agent includes:

  • virtio-net paravirtualized network driver
  • virtio-scsi paravirtualized HDD driver
  • virtio-balloon driver which improves memory overcommit (currently not used by RHV)
  • rhevm-guest-agent-common, which allows RHV to retrieve guest internal information such as IP addresses and allows RHV to gracefully reboot the guest

You can view the entire list of included drivers and tools in the Red Hat Virtualization documentation.

When a virtual machine runs in RHV without the guest agent, a warning appears in the form of an orange exclamation mark next to the VM.


Register with Satellite

You can skip this step if your guest is already registered.

In order to install the guest agent, I must first register the virtual machine with my local Satellite. If you are not using a local Satellite server, you can register with RHN instead.

First we need to grab the katello-ca-consumer RPM from the Satellite server. We pass -k to curl because this Satellite is using a self-signed certificate.

# curl -Ok https://satellite.lab.localdomain/pub/katello-ca-consumer-latest.noarch.rpm

Then install the rpm.

# rpm -ivh katello-ca-consumer-latest.noarch.rpm

Now register with Satellite. In the example below, we are using a custom activation key and organization.

# subscription-manager register --activationkey="auburn-lab-ak" --org="lab"
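
To confirm the registration took, you can optionally check the system identity:

# subscription-manager identity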

Installing the Guest Agent

You will need to ensure that the RHEL 7 RH Common repo is enabled. If the repo is not available to the guest, you will need to enable it.

# yum repolist | grep common
!rhel-7-server-rh-common-rpms/7Server/x86_64 Red Hat Enterprise Linux 234
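
If the repo does not show up in the output above, enabling it with subscription-manager should look like this (repo ID taken from the repolist output):

# subscription-manager repos --enable=rhel-7-server-rh-common-rpms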

If the proper repo is enabled, install the guest agent package and start its service.
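A typical install, using the rhevm-guest-agent-common package listed above (the service it ships is named ovirt-guest-agent), looks like this:

# yum install -y rhevm-guest-agent-common
# systemctl enable ovirt-guest-agent.service
# systemctl start ovirt-guest-agent.service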

Once installed, the orange exclamation point will disappear.


Red Hat Satellite: Create and Publish Content Views for RHEL + OpenStack



In this post I will review the process of creating Content Views (CVs) and Composite Content Views (CCVs), publishing each view, and creating lifecycle environments.

Note that in this post we are working with Red Hat Satellite 6.4, in which there was a major overhaul of the WebUI. You may have noticed that all menus are now situated in a pane on the left, rather than at the top of each page.

Sync Plans

A sync plan is a recurring, scheduled synchronization between a Red Hat Satellite repository and its upstream source repositories. I suggest syncing either daily or weekly in order to minimize the delta between syncs: the more often you sync, the less change there is to pull each time, so each run should complete faster than a monthly sync would.

Note that this step assumes that you have already set up the correct repositories for RHEL and Red Hat OpenStack. A list of required repositories can be found in the Red Hat OpenStack Director Installation and Usage Guide.

Navigate to Content > Sync Plans


Here we create a daily sync plan for RHEL 7.


We now add RHEL 7 as the product.


Now we need to create a daily sync plan for Red Hat OpenStack.


Note: you might need to create a sync plan for Ceph as well.  Ensure all plans sync at the same interval.
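
If you prefer the CLI, hammer can create the same kind of plan and attach it to a product; a rough sketch (the organization, plan, and product names here are just examples):

# hammer sync-plan create --organization "lab" --name "Daily RHEL 7" \
  --interval daily --enabled true --sync-date "2019-03-18 00:00:00"
# hammer product set-sync-plan --organization "lab" \
  --name "Red Hat Enterprise Linux Server" --sync-plan "Daily RHEL 7"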

Create a Content View

Now we need to create our content views. We will create one for RHEL and one for OSP. If you are using Ceph, you will need to create a content view for it as well.
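
For reference, the hammer CLI equivalent of creating and publishing a content view looks roughly like this (the view, organization, and repository names are examples):

# hammer content-view create --organization "lab" --name "cv-rhel7"
# hammer content-view add-repository --organization "lab" --name "cv-rhel7" \
  --product "Red Hat Enterprise Linux Server" \
  --repository "Red Hat Enterprise Linux 7 Server RPMs x86_64 7Server"
# hammer content-view publish --organization "lab" --name "cv-rhel7"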



How to Manage Libvirt VMs via OpenStack Ironic (OSP10)


Bear Metal


In this post I will document the steps that I am using to create a fully virtualized OSP 10 environment in my lab. The undercloud node is a VM, as are the overcloud nodes. We will configure libvirt so that Ironic has the ability to boot and shut down the VMs on the underlying hypervisor.

Add the stack user on your hypervisor. In this case my hypervisor's hostname is virt01; however, we will refer to it as hypervisor for clarity.

hypervisor# useradd stack
hypervisor# echo "password" | passwd stack --stdin

Modify polkit to allow stack user to manage libvirt.

hypervisor# cat << EOF > /etc/polkit-1/localauthority/50-local.d/50-libvirt-user-stack.pkla
[libvirt Management Access]
Identity=unix-user:stack
Action=org.libvirt.unix.manage
ResultAny=yes
ResultInactive=yes
ResultActive=yes
EOF

Now attempt to connect to libvirt as stack via a remote session. Here we are just connecting back to the localhost, virt01, so the connection URI should point at the IP of the hypervisor.

undercloud# virsh --connect qemu+ssh://stack@ list --all

Now ssh as stack to your undercloud VM.

Copy stack's public key from the undercloud to your hypervisor (virt01 in this case). In the command below, use the IP address that your undercloud VM will use to connect to libvirt on the hypervisor.

undercloud# ssh-copy-id -i ~/.ssh/ stack@

Now we need to create a few virtual machines. Specifically, I am building an environment with five virtual machines to run virtualized Red Hat OpenStack 13. My overcloud will consist of two compute nodes and three controller nodes.

I will use the command below to create 5 qcows.

hypervisor# cd /var/lib/libvirt/images/
hypervisor# for i in {1..5}; do qemu-img create -f qcow2 \
-o preallocation=metadata overcloud-node$i.qcow2 60G; done
Formatting 'overcloud-node1.qcow2', fmt=qcow2 size=64424509440 encryption=off cluster_size=65536 preallocation='metadata' lazy_refcounts=off
Formatting 'overcloud-node2.qcow2', fmt=qcow2 size=64424509440 encryption=off cluster_size=65536 preallocation='metadata' lazy_refcounts=off
Formatting 'overcloud-node3.qcow2', fmt=qcow2 size=64424509440 encryption=off cluster_size=65536 preallocation='metadata' lazy_refcounts=off
Formatting 'overcloud-node4.qcow2', fmt=qcow2 size=64424509440 encryption=off cluster_size=65536 preallocation='metadata' lazy_refcounts=off
Formatting 'overcloud-node5.qcow2', fmt=qcow2 size=64424509440 encryption=off cluster_size=65536 preallocation='metadata' lazy_refcounts=off

The command below will create 5 xml files and use those to spawn my 5 VMs.

hypervisor# for i in {1..5}; do \
virt-install --ram 16384 --vcpus 4 --os-variant rhel7 \
--disk path=/var/lib/libvirt/images/overcloud-node$i.qcow2,device=disk,bus=virtio,format=qcow2 \
--noautoconsole --vnc --network network:provisioning --network bridge:br99 \
--network network:default --name overcloud-node$i \
--dry-run --print-xml > /tmp/overcloud-node$i.xml; \
virsh define --file /tmp/overcloud-node$i.xml; done

You should end up with the following virtual machines:


hypervisor# virsh list --all
Id Name State
1 undercloud running
-- overcloud-node1 shut off
-- overcloud-node2 shut off
-- overcloud-node3 shut off
-- overcloud-node4 shut off
-- overcloud-node5 shut off

Back on the undercloud we use the command below to grab the provisioning network mac address from each virtual machine running on the hypervisor. We could run this command locally on the hypervisor, but since we need the mac addresses for ironic on the undercloud, we will run it here.

undercloud$ for i in {1..5}; do virsh -c qemu+ssh://stack@ domiflist overcloud-node$i | awk '$3 == "provisioning" {print $5}'; done > /tmp/nodes.txt

Now we use our temp file above to populate the instackenv.json that we will import into Ironic.
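A minimal instackenv.json for the pxe_ssh driver looks something like the following sketch; the MAC addresses come from /tmp/nodes.txt, and the power-management address, SSH key, and sizing values are placeholders for your environment:

{
  "nodes": [
    {
      "pm_type": "pxe_ssh",
      "pm_addr": "<hypervisor ip>",
      "pm_user": "stack",
      "pm_password": "<contents of stack's private ssh key>",
      "mac": ["<provisioning mac from /tmp/nodes.txt>"],
      "cpu": "4",
      "memory": "16384",
      "disk": "60",
      "arch": "x86_64"
    }
  ]
}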

At this point we are ready to import our nodes via Ironic.
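On OSP 10 the import and boot configuration typically look like this, run as the stack user from the directory containing instackenv.json:

undercloud$ openstack baremetal import --json instackenv.json
undercloud$ openstack baremetal configure boot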

Note that I do not claim to be the original author of the steps documented above, rather I wanted to ensure that I could easily consume these steps in the future.

Also, I look forward to experimenting with the vbmc ironic driver and might stop using pxe_ssh altogether.

Cockpit for Centos and RHEL 7: Install and Configure



I recently purchased three Dell servers and set myself to the task of building out a new lab. My old lab was in desperate need of updating, as I had long passed the point where 48GB of memory per node was sufficient. The cost of memory, old or new, was not even close to being in line with the cheap server-grade CPUs that are perfect for lab servers. Today you can buy a used E7540, a low-power, 12-core (HT enabled) Xeon, for less than $30 (USD) from a reputable retailer. Cram two of these into an 11th-gen Dell and you are in business.

So, three new (to me) Dell rackmounts, deployed as virtualization servers, and I want a simple way to view performance stats in a nice, clean, single pane of glass. I am not in any way, shape, or form looking to build a fancy dashboard or set up any sort of historical monitoring. I just want to know where the performance hot spots are when my environment seems to be running slowly.

I had installed Cockpit before on a laptop or two and thought it might fit the bill, especially since you can use one dashboard for multiple nodes.

So here we are going to deploy Cockpit on all three nodes; the steps are the same on each.


First we must open a firewall port on each node.
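On CentOS/RHEL 7 with firewalld, that would look roughly like the following, along with installing and enabling Cockpit itself (cockpit-dashboard provides the multi-host dashboard view):

# firewall-cmd --permanent --add-port=9090/tcp
# firewall-cmd --reload
# yum install -y cockpit cockpit-dashboard
# systemctl enable cockpit.socket
# systemctl start cockpit.socket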


Creating and Deleting OpenStack Pacemaker IP Addresses


You can use the steps below if you need to change managed IP resources, for example, if you need to re-IP your RHEL OSP Overcloud endpoints.

In this example, we are changing a managed VIP from one IP to another.

First, we get a good look at the resource that we want to delete. Here we are going to delete the ip- resource that starts the VIP we are replacing.

# pcs resource show ip-
Resource: ip- (class=ocf provider=heartbeat type=IPaddr2)
Attributes: ip= cidr_netmask=32
Operations: start interval=0s timeout=20s (ip-
stop interval=0s timeout=20s (ip-
monitor interval=10s timeout=20s (ip-

Now let’s actually delete it.

# pcs resource delete ip-
Attempting to stop: ip-…Stopped

Now let's create the replacement VIP.

# pcs resource create ip- ocf:heartbeat:IPaddr2 ip= cidr_netmask=32 op monitor interval=10s

Now, let’s take a good look at it.

# pcs resource show ip-
Resource: ip- (class=ocf provider=heartbeat type=IPaddr2)
Attributes: ip= cidr_netmask=32
Operations: start interval=0s timeout=20s (ip-
stop interval=0s timeout=20s (ip-
monitor interval=10s (ip-

Now we need to check to make sure that the VIP started on one of our OpenStack controllers.

# pcs status | grep
ip- (ocf::heartbeat:IPaddr2): Started ctrl01

For good measure, let’s make sure we can ping it.

# ping
PING ( 56(84) bytes of data.
64 bytes from icmp_seq=1 ttl=64 time=0.781 ms
64 bytes from icmp_seq=2 ttl=64 time=1.21 ms



Configuring ControlPlaneSubnetCidr in RHEL OSP 7.2



In previous versions of RHEL OSP 7 the Control Plane/Provisioning network interface was assigned via DHCP and not managed via Heat. Starting in 7.2, this interface is now managed via Heat.

Sample Heat Template

Below is an example from /home/stack/templates/nic-configs/compute.yaml or /home/stack/templates/nic-configs/controller.yaml. In this example we are hard-coding the interface name; this is not required, although I recommend it.

    type: OS::Heat::StructuredConfig
    properties:
      group: os-apply-config
      config:
        os_net_config:
          network_config:
            - type: interface
              name: em3
              use_dhcp: false
              addresses:
                - ip_netmask:
                    list_join:
                      - '/'
                      - - {get_param: ControlPlaneIp}
                        - {get_param: ControlPlaneSubnetCidr}
              routes:
                - ip_netmask: 169.254.169.254/32
                  next_hop: {get_param: EC2MetadataIp}

Note that this new configuration requires an additional parameter to be added to your top-level template, usually named network-environment.yaml.

ControlPlaneSubnetCidr: "23"

Stick this next to the "ControlPlaneIP" under "parameter_defaults":

  ControlPlaneSubnetCidr: "23"
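
In context, the relevant part of network-environment.yaml would look something like this (the other values shown are placeholders for your environment):

parameter_defaults:
  ControlPlaneSubnetCidr: "23"
  ControlPlaneDefaultRoute: 192.0.2.1
  EC2MetadataIp: 192.0.2.1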

Note that if you forget to add this param, the CIDR for this network will default to "24", which may or may not be correct for your environment. So watch out.