Red Hat Satellite 6.x: Restarting Services


Introduction

Red Hat Satellite consists of a number of running services. Restarting each service manually can be painful. Luckily you can use the commands below to easily restart all services.

List Services

Run the command below to view a list of all Satellite services that are started at boot.

# katello-service list
Redirecting to 'foreman-maintain service'
Running Service List
========================================================================
List applicable services:
dynflowd.service enabled
foreman-proxy.service enabled
httpd.service enabled
postgresql.service enabled
pulp_celerybeat.service enabled
pulp_resource_manager.service enabled
pulp_streamer.service enabled
pulp_workers.service enabled
puppetserver.service enabled
qdrouterd.service enabled
qpidd.service enabled
rh-mongodb34-mongod.service enabled
smart_proxy_dynflow_core.service enabled
squid.service enabled
tomcat.service enabled

All services listed [OK]

Check Service Status

The command below will check the status of all Satellite services. The output is similar to running systemctl status against each Satellite-specific service and can be quite verbose.

# katello-service status

Stop Services

Use the command below to stop all Satellite services.

# katello-service stop

Start Services

Use the command below to start all Satellite services.

# katello-service start

Restart Services

The command below will restart all Satellite services.

# katello-service restart
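
As the "Redirecting to 'foreman-maintain service'" message above suggests, katello-service has become a wrapper around foreman-maintain in newer Satellite releases, so you can call it directly if you prefer; the subcommands mirror the ones above:

# foreman-maintain service list
# foreman-maintain service status
# foreman-maintain service stop
# foreman-maintain service start
# foreman-maintain service restart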


RHEL7: Install RHV Guest Agent and Drivers


About the Guest Agent

Once installed on a RHEL 7.x virtual machine, the RHV 4.x guest agent provides additional drivers, data, and functionality.

The guest agent includes:

  • virtio-net paravirtualized network driver
  • virtio-scsi paravirtualized HDD driver
  • virtio-balloon driver which improves memory overcommit (currently not used by RHV)
  • rhevm-guest-agent-common, which allows RHV to retrieve guest internal information such as IP addresses and to gracefully reboot the guest

The full list of guest agent components is available in the Red Hat Virtualization documentation.

When a virtual machine is running in RHV without the guest agent, a warning appears as an orange exclamation mark next to the VM.


Register with Satellite

You can skip this step if your guest is already registered

In order to install the guest agent, I must first register the virtual machine with my local Satellite. If you are not using a local Satellite server, you can register directly with Red Hat.

First we need to grab the katello-ca-consumer RPM from the Satellite server. The -k flag is passed to curl because the Satellite server presents a self-signed certificate.

# curl -Ok https://satellite.lab.localdomain/pub/katello-ca-consumer-latest.noarch.rpm

Then install the rpm.

# rpm -ivh katello-ca-consumer-latest.noarch.rpm

Now register with Satellite. In the example below, we are using a custom activation key and organization.

# subscription-manager register --activationkey="auburn-lab-ak" --org="lab"

Installing the Guest Agent

You will need to ensure that the RHEL 7 RH Common repo is enabled. If the repo is not available to the guest, you will need to enable it (see the example after the check below).

# yum repolist | grep common
!rhel-7-server-rh-common-rpms/7Server/x86_64 Red Hat Enterprise Linux 234
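
If the repo does not show up, it can be enabled with subscription-manager; the repo ID below matches the repolist output above:

# subscription-manager repos --enable=rhel-7-server-rh-common-rpms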

If the proper repo is enabled, install the guest agent package.
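
Based on the rhevm-guest-agent-common package mentioned earlier, the install and service enablement should look roughly like the following; the ovirt-guest-agent service name is an assumption and may differ between releases:

# yum -y install rhevm-guest-agent-common
# systemctl enable --now ovirt-guest-agent.service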

Once installed, the orange exclamation point will disappear.


Monitor Tripplite UPS on RHEL 7 via NUT


One of the UPSes in my home lab is a Tripp Lite 1500VA LCD. I wanted to be able to monitor and manage the UPS from RHEL/CentOS; however, Tripp Lite no longer makes a Linux version of PowerAlert Local. Instead I decided to use NUT (Network UPS Tools).

After connecting a USB cable between my RHEL server and the UPS, I installed usbutils (which provides lsusb) to verify that the UPS was detected properly.

# yum -y install usbutils

I was then able to verify connectivity

# lsusb | grep -i trip
Bus 003 Device 123: ID 09ae:2012 Tripp Lite

NUT can be found in the EPEL repo, which I first needed to install.

# wget https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
# yum localinstall epel-release-latest-7.noarch.rpm

Then install NUT.

# yum -y install nut.x86_64

I then ran nut-scanner to detect the UPS and generate a starting configuration for NUT.
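
A USB-only scan (the -U option) is usually sufficient for a locally attached UPS:

# nut-scanner -U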

[nutdev1]
driver = "usbhid-ups"
port = "auto"
vendorid = "09AE"
productid = "2012"
product = "Tripp Lite UPS"
vendor = "Tripp Lite"
bus = "003"

Use the output from the above command to populate /etc/ups/ups.conf. In the example below I only changed the name of the device.

[tripplite]
driver = "usbhid-ups"
port = "auto"
vendorid = "09AE"
productid = "2012"
product = "Tripp Lite UPS"
vendor = "Tripp Lite"
bus = "003"

I then started and enabled the following services.

# systemctl start nut-server
# systemctl enable nut-server

I was then able to run upsc to query the UPS.

# upsc tripplite
battery.charge: 100
battery.runtime: 620
battery.type: PbAC
battery.voltage: 26.3
battery.voltage.nominal: 24.0
device.mfr: Tripp Lite
device.model: Tripp Lite UPS
device.type: ups
driver.name: usbhid-ups
driver.parameter.bus: 003
driver.parameter.pollfreq: 30
driver.parameter.pollinterval: 2
driver.parameter.port: auto
driver.parameter.product: Tripp Lite UPS
driver.parameter.productid: 2012
driver.parameter.vendor: Tripp Lite
driver.parameter.vendorid: 09AE
driver.version: 2.7.2
driver.version.data: TrippLite HID 0.81
driver.version.internal: 0.38
input.frequency: 59.8
input.voltage: 112.2
input.voltage.nominal: 120
output.frequency.nominal: 60
output.voltage: 112.2
output.voltage.nominal: 120
ups.beeper.status: disabled
ups.delay.shutdown: 20
ups.load: 48
ups.mfr: Tripp Lite
ups.model: Tripp Lite UPS
ups.power: 0.0
ups.power.nominal: 1500
ups.productid: 2012
ups.status: OL
ups.test.result: Done and error
ups.timer.reboot: 65535
ups.timer.shutdown: 65535
ups.vendorid: 09ae
ups.watchdog.status: 0

Next I plan to explore nut-monitor, but for now I can at least query the UPS. Apparently the battery is dead.

Red Hat Satellite: Create and Publish Content Views for RHEL + OpenStack


Overview

In this post I will review the process of creating Content Views (CV) and Composite Content Views (CCV), publishing each view, and creating lifecycle environments.

Note that in this post we are working with Red Hat Satellite 6.4, which introduced a major overhaul of the WebUI. You may have noticed that all menus are now situated in a pane on the left, rather than at the top of each page.

Sync Plans

A sync plan is a recurring, scheduled synchronization between a Red Hat Satellite repository and its upstream source repositories. I suggest syncing either daily or weekly in order to minimize the delta between syncs; the more often you sync, the smaller the change set, and each sync should complete faster than, say, a monthly one.

Note that this step assumes that you have already set up the correct repositories for RHEL and Red Hat OpenStack. A list of required repositories can be found in the Red Hat OpenStack Director Installation and Usage Guide.

Navigate to Content > Sync Plans


Here we create a daily sync plan for RHEL 7.


We now add RHEL 7 as the product.


Now we need to create a daily sync plan for Red Hat OpenStack.


Note: you might need to create a sync plan for Ceph as well.  Ensure all plans sync at the same interval.
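
If you prefer the CLI, the same daily plans can be created with hammer; the organization, plan, and product names below are placeholders for my lab values:

# hammer sync-plan create --organization "lab" --name "daily-sync" \
  --interval daily --enabled true --sync-date "2019-03-18 00:00:00"
# hammer product set-sync-plan --organization "lab" --sync-plan "daily-sync" \
  --name "Red Hat Enterprise Linux Server"
# hammer product set-sync-plan --organization "lab" --sync-plan "daily-sync" \
  --name "Red Hat OpenStack"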

Create a Content View

Now we need to create our content views. We will create one for RHEL and one for OSP. If you are using Ceph, you will need to create a content view for it as well.
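
If you prefer the CLI, hammer can create and publish the same views; the organization, view, product, and repository names below are assumptions based on my lab, so adjust them to match your environment:

# hammer content-view create --organization "lab" --name "cv-rhel7"
# hammer content-view add-repository --organization "lab" --name "cv-rhel7" \
  --product "Red Hat Enterprise Linux Server" \
  --repository "Red Hat Enterprise Linux 7 Server RPMs x86_64 7Server"
# hammer content-view publish --organization "lab" --name "cv-rhel7"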

 


How to Manage Libvirt VMs via OpenStack Ironic (OSP10)


In this post I will document the steps that I am using to create a fully virtualized OSP 10 environment in my lab. The undercloud node is a VM, as are the overcloud nodes. We will configure libvirt so that Ironic can boot and shut down the overcloud VMs on the underlying hypervisor.

Add the stack user on your hypervisor. In this case my hypervisor's hostname is virt01; however, we will refer to it as hypervisor for clarity.

hypervisor# useradd stack
hypervisor# echo "password" | passwd stack --stdin

Modify polkit to allow stack user to manage libvirt.

hypervisor # cat << EOF > /etc/polkit-1/localauthority/50-local.d/50-libvirt-user-stack.pkla
[libvirt Management Access]
Identity=unix-user:stack
Action=org.libvirt.unix.manage
ResultAny=yes
ResultInactive=yes
ResultActive=yes
EOF

Now attempt to connect to libvirt as the stack user via a remote session. Here we are just connecting back to the hypervisor, virt01. In the example below, 10.1.99.112 is the IP of the hypervisor; the undercloud has an IP of 10.1.99.10.

undercloud# virsh --connect qemu+ssh://stack@10.1.99.112/system list --all

Now ssh to your undercloud VM as the stack user.

Copy stack's public key to your hypervisor (virt01 in this case). In the command below, replace the IP address shown with the IP that your undercloud VM will use to connect to libvirt on the hypervisor.

undercloud# ssh-copy-id -i ~/.ssh/id_rsa.pub stack@10.1.99.112

Now we need to create a few virtual machines. Specifically, I am building an environment with five virtual machines to run a virtualized Red Hat OpenStack deployment. My overcloud will consist of two compute nodes and three controller nodes.

I will use the command below to create five qcow2 images.

hypervisor# cd /var/lib/libvirt/images/
hypervisor# for i in {1..5}; do qemu-img create -f qcow2 \
-o preallocation=metadata overcloud-node$i.qcow2 60G; done
Formatting 'overcloud-node1.qcow2', fmt=qcow2 size=64424509440 encryption=off cluster_size=65536 preallocation='metadata' lazy_refcounts=off
Formatting 'overcloud-node2.qcow2', fmt=qcow2 size=64424509440 encryption=off cluster_size=65536 preallocation='metadata' lazy_refcounts=off
Formatting 'overcloud-node3.qcow2', fmt=qcow2 size=64424509440 encryption=off cluster_size=65536 preallocation='metadata' lazy_refcounts=off
Formatting 'overcloud-node4.qcow2', fmt=qcow2 size=64424509440 encryption=off cluster_size=65536 preallocation='metadata' lazy_refcounts=off
Formatting 'overcloud-node5.qcow2', fmt=qcow2 size=64424509440 encryption=off cluster_size=65536 preallocation='metadata' lazy_refcounts=off

The command below will create 5 xml files and use those to spawn my 5 VMs.

hypervisor# for i in {1..5}; do \
virt-install --ram 16384 --vcpus 4 --os-variant rhel7 \
--disk path=/var/lib/libvirt/images/overcloud-node$i.qcow2,device=disk,bus=virtio,format=qcow2 \
--noautoconsole --vnc --network network:provisioning --network bridge:br99 \
--network network:default --name overcloud-node$i \
--dry-run --print-xml > /tmp/overcloud-node$i.xml; \
virsh define --file /tmp/overcloud-node$i.xml; done

You should end up with the following virtual machines:

hypervisor# virsh list --all
 Id    Name              State
----------------------------------------------------
 1     undercloud        running
 --    overcloud-node1   shut off
 --    overcloud-node2   shut off
 --    overcloud-node3   shut off
 --    overcloud-node4   shut off
 --    overcloud-node5   shut off

Back on the undercloud, we use the command below to grab the provisioning network MAC address from each virtual machine running on the hypervisor. We could run this command locally on the hypervisor, but since we need the MAC addresses for Ironic on the undercloud, we will run it here.

undercloud$ for i in {1..5}; do virsh -c qemu+ssh://stack@10.1.99.112/system domiflist overcloud-node$i | awk '$3 == "provisioning" {print $5}'; done > /tmp/nodes.txt

Now we use the temp file above to populate the instackenv.json that we will import into Ironic; a sketch follows below.
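
The original gist is not reproduced here, but a minimal, hypothetical instackenv.json stanza for the pxe_ssh driver looks something like the following; the MAC address placeholder comes from /tmp/nodes.txt, pm_addr and pm_user match the hypervisor and stack user above, and pm_password can hold either the stack user's password or the contents of an SSH private key:

{
  "nodes": [
    {
      "pm_type": "pxe_ssh",
      "pm_addr": "10.1.99.112",
      "pm_user": "stack",
      "pm_password": "password",
      "mac": ["REPLACE_WITH_MAC_FROM_nodes.txt"],
      "cpu": "4",
      "memory": "16384",
      "disk": "60",
      "arch": "x86_64"
    }
  ]
}

Repeat the node stanza for each of the five virtual machines, one MAC per node.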

At this point we are ready to import our nodes via Ironic.
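
With OSP 10 (Newton) tooling, the import itself would typically look like the following, run as the stack user on the undercloud with the stackrc sourced; the instackenv.json path is wherever you saved the file:

undercloud$ source ~/stackrc
undercloud$ openstack baremetal import --json instackenv.json
undercloud$ openstack baremetal configure boot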

Note that I do not claim to be the original author of the steps documented above; rather, I wanted to ensure that I could easily consume these steps in the future.

Also, I look forward to experimenting with the vbmc ironic driver and might stop using pxe_ssh altogether.

Cockpit for Centos and RHEL 7: Install and Configure


Introduction

I have recently purchased three Dell servers and set myself the task of building out a new lab. My old lab was in desperate need of updating, as I had long passed the point where 48GB of memory per node was sufficient. The cost of memory, old or new, was not even close to in line with the cheap server-grade CPUs that are perfect for lab servers. Today you can buy a used E7540, a low-power, 6-core/12-thread (HT enabled) Xeon, for less than $30 (USD) from a reputable retailer. Cram two of these into an 11th-generation Dell and you are in business.

So: three new (to me) Dell rackmounts, deployed as virtualization servers, and I want a simple way to view performance stats in a nice, clean, single pane of glass. I am not in any way, shape, or form looking to build fancy dashboards or set up any sort of historical monitoring. I just want to know where the performance hot spots are when my environment seems to be running slowly.

I had installed Cockpit before on a laptop or two and thought it might fit the bill, especially since you can use one dashboard for multiple nodes.

So here we are going to deploy Cockpit on all three nodes; the steps are the same on each.

Prerequisites

First we must open a firewall port on each node.
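
On RHEL/CentOS 7 with firewalld, that step generally amounts to the following; firewalld ships a predefined cockpit service, and if yours does not have it, opening TCP port 9090 directly achieves the same thing:

# firewall-cmd --permanent --add-service=cockpit
# firewall-cmd --reload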


Configure RHEL7/Centos 7 as a Virtualization Host


This is a fresh install of RHEL 7.5

First install the packages as shown below.


# yum install qemu-kvm libvirt

Now install the additional recommended virtualization packages.


# yum install virt-install libvirt-python virt-manager libvirt-client

Now restart libvirtd


# systemctl restart libvirtd

 
Now you should be able to launch virt-manager from your remote machine and add a connection to your new virtualization host.
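
For example, a remote connection over SSH should look something like the following; the hostname virt01 and the root user are placeholders for your own virtualization host and account:

$ virt-manager --connect qemu+ssh://root@virt01/system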

Special note: make sure that you disable NetworkManager.


# systemctl stop NetworkManager
# systemctl disable NetworkManager
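
On RHEL 7, if NetworkManager is disabled you will generally want the legacy network service to bring the interfaces up at boot; this assumes your ifcfg files under /etc/sysconfig/network-scripts are already in place:

# systemctl enable network
# systemctl start network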