OpenSCAP Part 2: SCAP Content for RHEL 7


Introduction

In part one of the OpenSCAP series we were introduced to the basic usage of the OpenSCAP toolset. In that post we learned how to run a basic scan via the scap-workbench in a desktop environment.

This post will focus on content, profiles, and targets.

Content

All content is installed in the directory shown below. The content in this directory varies based on the installed OS (the content on my Fedora box differs from RHEL, for example).

/usr/share/xml/scap/ssg/content

The screenshot below contains a list of content installed by default on RHEL 7.

[Screenshot: SCAP content installed by default on RHEL 7]
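To see what a given file provides (profiles, checklist IDs, generation date), point oscap info at it. The data stream filename below is the usual scap-security-guide name on RHEL 7; adjust the path if your listing differs.

# oscap info /usr/share/xml/scap/ssg/content/ssg-rhel7-ds.xml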

Additional content can be obtained and added to the content directory shown above. For example, NIST content can be downloaded directly from the NIST website via the link below.

National Checklist Program Repository

In the screenshot below, we have performed a search for all content that targets RHEL 7.6.

[Screenshot: National Checklist Program search results for content targeting RHEL 7.6]

 


OpenSCAP Part 1: Introduction and Basic Usage for RHEL 7/CentOS 7/Fedora


Introduction

OpenSCAP is a standardized compliance solution for Linux. It is composed of policies (NIST, STIG, etc.) and tooling (oscap, scap-workbench) that allow you to scan and evaluate Linux hosts in order to validate their compliance with industry-defined standards.

In addition to providing industry-standard compliance rules, OpenSCAP also allows administrators to create custom compliance standards using the scap-workbench.

Administrators can then generate remediation files in Bash, Ansible, or Puppet.
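For example, a remediation playbook can be generated straight from a data stream with oscap. The sketch below assumes a stock RHEL 7 scap-security-guide install and a recent enough OpenSCAP for the --fix-type option; swap in the profile ID you actually use.

# oscap xccdf generate fix --fix-type ansible \
    --profile xccdf_org.ssgproject.content_profile_pci-dss \
    --output remediation-playbook.yml \
    /usr/share/xml/scap/ssg/content/ssg-rhel7-ds.xml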

Let’s get familiar with OpenSCAP below.

Getting Started

Below is an overview of the “Getting Started” workflow. In this workflow we will run through a very simple use case: scanning your local Fedora workstation.

  1. Install
  2. Choose Policy
  3. Modify/Adjust
  4. Scan
  5. Remediate

In the sections below we will walk through each of these steps in detail.
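As a quick preview, a command-line pass at the install, policy selection, and scan steps might look something like the sketch below. The package names, data stream path, and profile ID are assumptions for a stock Fedora workstation; the following sections walk through the same workflow in more detail.

# dnf -y install openscap-scanner scap-security-guide
# oscap xccdf eval --profile xccdf_org.ssgproject.content_profile_standard \
    --results results.xml --report report.html \
    /usr/share/xml/scap/ssg/content/ssg-fedora-ds.xml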


Red Hat Satellite 6.x: Restarting Services


Introduction

Red Hat Satellite consists of a number of running services. Restarting each service manually can be painful. Luckily you can use the commands below to easily restart all services.

List Services

Run the command below to view a list of all Satellite services that are started at boot.

# katello-service list
Redirecting to 'foreman-maintain service'
Running Service List
========================================================================
List applicable services:
dynflowd.service enabled
foreman-proxy.service enabled
httpd.service enabled
postgresql.service enabled
pulp_celerybeat.service enabled
pulp_resource_manager.service enabled
pulp_streamer.service enabled
pulp_workers.service enabled
puppetserver.service enabled
qdrouterd.service enabled
qpidd.service enabled
rh-mongodb34-mongod.service enabled
smart_proxy_dynflow_core.service enabled
squid.service enabled
tomcat.service enabled

All services listed [OK]

Check Service Status

The command below will check the status of all Satellite services. The output is similar to running a systemctl status on all Satellite specific services. The output can be quite verbose.

# katello-service status

Stop Services

Use the command below to stop all Satellite services.

# katello-service stop

Start Services

Use the command below to start all Satellite services.

# katello-service start

Restart Services

The command below will restart all Satellite services.

# katello-service restart
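As the redirect message in the earlier output suggests, katello-service is a wrapper around foreman-maintain on newer Satellite 6.x releases, so the same operations can also be run directly:

# foreman-maintain service list
# foreman-maintain service status
# foreman-maintain service stop
# foreman-maintain service start
# foreman-maintain service restart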

RHEL7: Install RHV Guest Agent and Drivers


About the Guest Agent

Once installed on a RHEL 7.x virtual machine, the RHV 4.x guest agent provides drivers, additional data, and added functionality.

The guest agent includes:

  • virtio-net paravirtualized network driver
  • virtio-scsi paravirtualized HDD driver
  • virtio-balloon driver which improves memory overcommit (currently not used by RHV)
  • rhevm-guest-agent-common, which allows RHV to retrieve guest internal information such as IP addresses and allows RHV to gracefully reboot the guest

You can view the entire list here.

When a virtual machine is spawned in RHV without the guest agent, a warning appears as an orange exclamation mark on the VM.

[Screenshot: warning exclamation mark on a VM that does not have the guest agent]

Register with Satellite

You can skip this step if your guest is already registered.

In order to install the guest agent, I must first register the virtual machine with my local Satellite. If you are not using a local Satellite server, you can register with RHN.

First we need to grab the katello CA consumer RPM from Satellite; the connection is insecure in this case, as we are using a self-signed cert (hence the -k flag in curl).

# curl -Ok https://satellite.lab.localdomain/pub/katello-ca-consumer-latest.noarch.rpm

Then install the rpm.

# rpm -ivh katello-ca-consumer-latest.noarch.rpm

Now register with Satellite. In the example below, we are using a custom activation key and organization.

# subscription-manager register --activationkey="auburn-lab-ak" --org="lab"

Installing the Guest Agent

You will need to ensure that the RHEL 7 RH Common repo is enabled. If the repo is not available to the guest, you will need to enable it.

# yum repolist | grep common
!rhel-7-server-rh-common-rpms/7Server/x86_64 Red Hat Enterprise Linux 234
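If the repo does not appear in the output, it can be enabled with subscription-manager using the repo ID from the listing above.

# subscription-manager repos --enable=rhel-7-server-rh-common-rpms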

If the proper repo is enabled, then run the following command.
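The package name below comes from the component list earlier in this post.

# yum -y install rhevm-guest-agent-common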

Once installed, the orange exclamation point will disappear.

[Screenshot: the VM in RHV with no warning once the guest agent is installed]

Start and Enable the Guest Agent Service

# systemctl enable ovirt-guest-agent.service
# systemctl start ovirt-guest-agent.service

Monitor Tripplite UPS on RHEL 7 via NUT


One of the UPSes in my home lab is a Tripp Lite 1500VA LCD. I wanted to be able to monitor/manage the UPS via RHEL/CentOS; however, Tripp Lite no longer makes a Linux version of PowerAlert Local. Instead I decided to use NUT (Network UPS Tools).

After connecting a USB cable between my RHEL server and my UPS, I needed to install usbutils (which provides lsusb) to verify that the UPS was detected properly.

# yum -y install usbutils

I was then able to verify connectivity.

# lsusb | grep -i trip
Bus 003 Device 123: ID 09ae:2012 Tripp Lite

NUT can be found in the EPEL repo, which I first needed to install.

# wget https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
# yum localinstall epel-release-latest-7.noarch.rpm

Then install NUT.

# yum -y install nut.x86_64

I then ran nut-scanner to detect the proper configuration for NUT.
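A USB-only scan looks something like this (the -U flag restricts the scan to USB-attached devices):

# nut-scanner -U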

[nutdev1]
driver = "usbhid-ups"
port = "auto"
vendorid = "09AE"
productid = "2012"
product = "Tripp Lite UPS"
vendor = "Tripp Lite"
bus = "003"

Use the output from the above command to populate /etc/ups/ups.conf. In the example below I only changed the name of the device.

[tripplite]
driver = "usbhid-ups"
port = "auto"
vendorid = "09AE"
productid = "2012"
product = "Tripp Lite UPS"
vendor = "Tripp Lite"
bus = "003"

I then started and enabled the nut-server service.

# systemctl start nut-server
# systemctl enable nut-server
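If nut-server refuses to start, the NUT mode may also need to be set in nut.conf, which lives alongside ups.conf on this packaging. The path and value below are assumptions to verify against your own install.

# vi /etc/ups/nut.conf
MODE=standalone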

I was then able to run upsc and query the UPS.

# upsc tripplite
battery.charge: 100
battery.runtime: 620
battery.type: PbAC
battery.voltage: 26.3
battery.voltage.nominal: 24.0
device.mfr: Tripp Lite
device.model: Tripp Lite UPS
device.type: ups
driver.name: usbhid-ups
driver.parameter.bus: 003
driver.parameter.pollfreq: 30
driver.parameter.pollinterval: 2
driver.parameter.port: auto
driver.parameter.product: Tripp Lite UPS
driver.parameter.productid: 2012
driver.parameter.vendor: Tripp Lite
driver.parameter.vendorid: 09AE
driver.version: 2.7.2
driver.version.data: TrippLite HID 0.81
driver.version.internal: 0.38
input.frequency: 59.8
input.voltage: 112.2
input.voltage.nominal: 120
output.frequency.nominal: 60
output.voltage: 112.2
output.voltage.nominal: 120
ups.beeper.status: disabled
ups.delay.shutdown: 20
ups.load: 48
ups.mfr: Tripp Lite
ups.model: Tripp Lite UPS
ups.power: 0.0
ups.power.nominal: 1500
ups.productid: 2012
ups.status: OL
ups.test.result: Done and error
ups.timer.reboot: 65535
ups.timer.shutdown: 65535
ups.vendorid: 09ae
ups.watchdog.status: 0

Next I plan to explore nut-monitor, but for now I can at least query the UPS. Apparently the battery is dead.

OpenStack Staging-Ovirt Driver: global name 'sdk' is not defined


Getting Started

The staging-ovirt driver allows OpenStack to easily use oVirt/RHV virtual machines as overcloud nodes. For those of us running virtualized OpenStack labs, it’s a huge step forward, as we previously had to hack our way around with pxe_ssh or vbmc. Neither was a great solution.

In order to use the staging-ovirt driver, I first needed to configure the undercloud to use it. See undercloud.conf below.


[DEFAULT]
local_ip = 10.1.98.2/24
undercloud_public_vip = 10.1.98.3
undercloud_admin_vip = 10.1.98.4
local_interface = eth1
masquerade_network = 10.1.98.0/24
dhcp_start = 10.1.98.100
dhcp_end = 10.1.98.120
network_cidr = 10.1.98.0/24
network_gateway = 10.1.98.2
inspection_iprange = 10.1.98.130,10.1.98.150
inspection_runbench = false
undercloud_debug = false
store_events = false
enabled_hardware_types = staging-ovirt
inspection_enable_uefi = false


Then create an instackenv.json. In the example below, pm_addr is the IP of my local RHV manager.


"arch": "x86_64",
"cpu": "1",
"disk": "10",
"mac": [
"00:1a:4a:16:01:5a"
],
"memory": "1024",
"name": "ospd13-ctrl01",
"pm_addr": "10.1.99.10",
"pm_password": "redhat",
"pm_type": "staging-ovirt",
"pm_user": "admin@internal",
"pm_vm_name": "ospd13-ctrl01",
"capabilities": "profile:control,boot_option:local"


You should then be able to import your nodes.

$ openstack overcloud node import instackenv.json
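Once the import completes, something like the following should confirm that the nodes registered with ironic and reached the expected provisioning state.

$ openstack baremetal node list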

Troubleshooting

Note that I ran into an error importing my nodes. Error shown below.

[{u'result': u'Node 09dfefec-e5c3-42c4-93d0-45fb44ce37a8 did not reach state "manageable", the state is "enroll", error: Failed to get power state for node 09dfefec-e5c3-42c4-93d0-45fb44ce37a8. Error: global name \'sdk\' is not defined'}, {u'result': u'Node 59dce2eb-3aea-41f9-aec2-3f13deece30b did not reach state "manageable", the state is "enroll", error: Failed to get power state for node 59dce2eb-3aea-41f9-aec2-3f13deece30b. Error: global name \'sdk\' is not defined'}, {u'result': u'Node 0895a6d0-f934-44d0-9c26-25e61b6679cb did not reach state "manageable", the state is "enroll", error: Failed to get power state for node 0895a6d0-f934-44d0-9c26-25e61b6679cb. Error: global name \'sdk\' is not defined'}, {u'result': u'Node 68bdf1cb-fe1f-48ab-b96d-fb5edaf17154 did not reach state "manageable", the state is "enroll", error: Failed to get power state for node 68bdf1cb-fe1f-48ab-b96d-fb5edaf17154. Error: global name \'sdk\' is not defined'}]

Help was found here.

Apparently I was missing a package. I needed to yum install the package shown below and restart ironic-conductor.

# sudo yum -y install python-ovirt-engine-sdk4.x86_64
# sudo systemctl restart openstack-ironic-conductor.service

OpenStack Nova – Overview of Host Aggregates and Availability Zones


Introduction

This document was created using multiple sources as reference.

Availability Zones

  • Typically used for separating failure domains.
  • Availability zones are the end-user-visible logical abstraction for partitioning a cloud without exposing the physical infrastructure.
  • An availability zone is a way in which the user can specify a particular “location” in which an instance should boot.
  • Availability zones are fairly straightforward: pick a zone, start a VM.
  • Availability zones serve as a bucket that groups hosts by location or failure domain.
  • Availability zones do not conflict with host aggregates; the two can be used together.
  • The availability zone is chosen by the user when booting a VM.

Host Aggregates

  • Typically used for grouping servers with similar capabilities.
  • Host aggregates can be regarded as a mechanism to further partition an availability zone; while availability zones are visible to users, host aggregates are only visible to administrators.
  • Host aggregates also allow higher availability of a single guest instance within an availability zone, enable advanced VM placement strategies, and, more importantly, enable zero-downtime host upgrades.
  • Host aggregates are in the administrator’s domain.
  • Host aggregates are intended as a way to group servers that have a particular quality in common.
  • Host aggregates serve as an intelligent way for the scheduler to know where to place VMs based on some characteristic.
  • Host aggregates use key/value pairs (extra specs) set at the flavor level.
  • Host aggregates do not conflict with availability zones; the two can be used together.

Configure Nova to Use Host Aggregates

Ensure that AggregateInstanceExtraSpecsFilter is included in scheduler_default_filters in /etc/nova/nova.conf. Example below.

scheduler_default_filters=AggregateInstanceExtraSpecsFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter

Host Aggregate Workflow

In general, the workflow for using host aggregates looks like this:

  1. Create a new aggregate.
  2. Set a particular property for that aggregate, such as ssd=true, or in our case, joeistheboss=true.
  3. Add qualifying hosts to this aggregate.
  4. Create a flavor that requires this particular property.
  5. Instantiate VMs using this flavor.

Planning

As an admin planning for your customers, however, you have a decision to make.  In general, you’ll need to consider the following:

  1. Is there a clear separation between hosts, either physical or in terms of redundancy? If so, you will probably want to use availability zones.
  2. Is the separation based on hardware capabilities? If so, you will probably want to use host aggregates.
  3. Are hosts within a particular “category” spread across multiple locations? If so, you will probably want to use host aggregates so that you can group together hosts from multiple availability zones. (In this case, you can create an aggregate with the appropriate metadata in each zone.)
  4. Do you want users to consciously choose a “category” for their VMs?  If so, you will probably want to use availability zones, as users can specify them directly.

Creating and Using Host Aggregates

Create a host aggregate in an availability zone as shown below.

nova aggregate-create <name> [<availability-zone>]

Example:

nova aggregate-create ssd-disk nova

Add a host to your aggregate.

nova aggregate-add-host <aggregate> <host>

Add metadata to be associated with the aggregate.

nova aggregate-set-metadata <aggregate> <key=value>

Create a flavor and set the key=value pair on it as an extra spec.

nova flavor-create ssd.large 6 8192 80 4
nova flavor-key ssd.large set ssd=true
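To tie the pieces together, an instance can then be booted with the SSD-backed flavor and a specific availability zone. The image and network values below are placeholders.

nova boot --flavor ssd.large --image <image> --nic net-id=<net-id> --availability-zone nova my-ssd-instance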