In part one of the OpenSCAP series we were introduced to the basic usage of the OpenSCAP toolset. In that post we learned how to run a basic scan via scap-workbench in a desktop environment.
This post will focus on Content, Profiles, and Targets.
Content
All content is installed in the directory shown below. The content in this directory will vary based on the installed OS (the content on my Fedora workstation differs from RHEL, for example).
/usr/share/xml/scap/ssg/content
The screenshot below contains a list of content installed by default on RHEL 7.
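If you prefer the command line, you can list the same content with ls. The exact filenames vary by OS and scap-security-guide version, but on RHEL 7 you would typically see data streams such as ssg-rhel7-ds.xml along with the matching XCCDF, OVAL, and CPE files.
# ls /usr/share/xml/scap/ssg/content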
Additional content can be obtained and added to the content directory shown above. For example, NIST content can be downloaded directly from the NIST website. Link below.
OpenSCAP is a standardized compliance solution for Linux. It is composed of policies (NIST, STIG, etc.) and tooling (oscap, scap-workbench) that allow you to scan and evaluate Linux hosts in order to validate their compliance with industry-defined standards.
In addition to providing industry standard compliance rules, OpenSCAP also allows administrators to create custom compliance standards using the scap-workbench.
Administrators can then generate remediation files in Bash, Ansible, or Puppet.
Let’s get familiar with OpenSCAP below.
Getting Started
Below is an overview of the “Getting Started” workflow. In this workflow we are going to run through a very simple use case: scanning your local Fedora workstation.
Install
Choose Policy
Modify/Adjust
Scan
Remediate
In the sections below we will walk through each of these steps in detail.
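As a rough sketch of what those steps look like on a Fedora workstation (the profile and data-stream names below are common defaults and may differ on your release):
# dnf -y install scap-workbench scap-security-guide openscap-scanner
# oscap xccdf eval --profile xccdf_org.ssgproject.content_profile_standard --results results.xml --report report.html /usr/share/xml/scap/ssg/content/ssg-fedora-ds.xml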
Red Hat Satellite consists of a number of running services. Restarting each service manually can be painful. Luckily you can use the commands below to easily restart all services.
List Services
Run the command below to view a list of all Satellite services that are started at boot.
# katello-service list
Redirecting to 'foreman-maintain service'
Running Service List
========================================================================
List applicable services:
dynflowd.service enabled
foreman-proxy.service enabled
httpd.service enabled
postgresql.service enabled
pulp_celerybeat.service enabled
pulp_resource_manager.service enabled
pulp_streamer.service enabled
pulp_workers.service enabled
puppetserver.service enabled
qdrouterd.service enabled
qpidd.service enabled
rh-mongodb34-mongod.service enabled
smart_proxy_dynflow_core.service enabled
squid.service enabled
tomcat.service enabled
All services listed [OK]
Check Service Status
The command below will check the status of all Satellite services. The output is similar to running systemctl status on all Satellite-specific services and can be quite verbose.
# katello-service status
Stop Services
Use the command below to stop all Satellite services.
# katello-service stop
Start Services
Use the command below to start all Satellite services.
# katello-service start
Restart Services
The command below will restart all Satellite services.
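# katello-service restart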
When spawning a virtual machine in RHV without the guest agent, a warning will appear as an exclamation mark in RHV.
Register with Satellite
You can skip this step if your guest is already registered.
In order to install the guest agent, I must first register the virtual machine with my local Satellite. If you are not using a local Satellite server, you can register with RHN.
First we need to grab the katello consumer RPM from Satellite. In this case the Satellite server is treated as insecure, since it is using a self-signed cert.
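A sketch of those two steps; the Satellite hostname, organization, and activation key are placeholders for your own values:
# rpm -Uvh http://satellite.example.com/pub/katello-ca-consumer-latest.noarch.rpm
# subscription-manager register --org="MyOrg" --activationkey="my-key"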
One of the UPSs in my home lab is a Tripp Lite 1500VALCD. I wanted to be able to monitor and manage the UPS from RHEL/CentOS; however, Tripp Lite no longer makes a Linux version of PowerAlert Local. Instead, I decided to use NUT (Network UPS Tools).
After connecting a USB cable between my RHEL server and the UPS, I needed to install usbutils (which provides lsusb) to verify that the UPS was detected properly.
# yum -y install usbutils
I was then able to verify connectivity
# lsusb | grep -i trip
Bus 003 Device 123: ID 09ae:2012 Tripp Lite
NUT can be found in the EPEL repo, which I first needed to enable.
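A sketch of the install, assuming the EPEL release package is pulled straight from the EPEL project (on CentOS you could instead install epel-release from the default repos):
# yum -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
# yum -y install nut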
In order to use the staging-ovirt driver, I first needed to configure the undercloud to use it. See undercloud.conf below.
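Below is a minimal sketch of the relevant undercloud.conf snippet. It assumes a release where drivers are enabled via the enabled_hardware_types option (older releases used enabled_drivers instead); the important part is simply adding staging-ovirt to the list.
[DEFAULT]
enabled_hardware_types = ipmi,staging-ovirt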
Then create an instackenv.json. In the example below pm_addr is the IP of my local RHV manager.
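Below is a minimal single-node sketch of such an instackenv.json. All values are placeholders: pm_addr would be your RHV manager IP, pm_user/pm_password are RHV credentials, and pm_vm_name is assumed to match the name of the corresponding VM in RHV.
{
  "nodes": [
    {
      "name": "controller-0",
      "pm_type": "staging-ovirt",
      "pm_addr": "192.168.1.10",
      "pm_user": "admin@internal",
      "pm_password": "changeme",
      "pm_vm_name": "controller-0",
      "mac": ["52:54:00:aa:bb:cc"],
      "cpu": "4",
      "memory": "16384",
      "disk": "40",
      "arch": "x86_64"
    }
  ]
}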
Note that I ran into an error importing my nodes. Error shown below.
[{u'result': u'Node 09dfefec-e5c3-42c4-93d0-45fb44ce37a8 did not reach state "manageable", the state is "enroll", error: Failed to get power state for node 09dfefec-e5c3-42c4-93d0-45fb44ce37a8. Error: global name \'sdk\' is not defined'}, {u'result': u'Node 59dce2eb-3aea-41f9-aec2-3f13deece30b did not reach state "manageable", the state is "enroll", error: Failed to get power state for node 59dce2eb-3aea-41f9-aec2-3f13deece30b. Error: global name \'sdk\' is not defined'}, {u'result': u'Node 0895a6d0-f934-44d0-9c26-25e61b6679cb did not reach state "manageable", the state is "enroll", error: Failed to get power state for node 0895a6d0-f934-44d0-9c26-25e61b6679cb. Error: global name \'sdk\' is not defined'}, {u'result': u'Node 68bdf1cb-fe1f-48ab-b96d-fb5edaf17154 did not reach state "manageable", the state is "enroll", error: Failed to get power state for node 68bdf1cb-fe1f-48ab-b96d-fb5edaf17154. Error: global name \'sdk\' is not defined'}]
This document was created using multiple sources as references.
Availability Zones
Typically used for separating failure domains
Availability Zones are the end-user visible logical abstraction for partitioning a cloud without knowing the physical infrastructure.
An availability zone is a way in which the user can specify a particular “location” in which a host should boot.
Availability zones are fairly straightforward; pick a zone, start a VM.
Availability zones serve as a bucket into which compute hosts are grouped.
Host aggregates do not conflict with availability zones.
Users choose an availability zone when booting a VM.
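For example, with the nova CLI (the zone, flavor, and image names here are just placeholders):
# nova boot --availability-zone az1 --flavor m1.small --image <image> test-vm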
Host Aggregates
Typically used for grouping servers with similar capabilities
Host aggregates can be regarded as a mechanism to further partition an availability zone; while availability zones are visible to users, host aggregates are only visible to administrators.
Host aggregates also allow higher availability of a single guest instance within an availability zone, enable advanced VM placement strategies, and, more importantly, enable zero-downtime host upgrades.
Host aggregates are in the administrator’s domain
Host aggregates are intended as a way to group servers that have a particular quality to them.
Host aggregates serve as an intelligent way for the scheduler to know where to place VMs based on some characteristic.
They use keys set at the flavor level (flavor extra specs) that are matched against aggregate metadata.
Host aggregates do not conflict with availability zones.
Configure Nova to Use Host Aggregates
Set AggregateInstanceExtraSpecsFilter in scheduler_default_filters in /etc/nova/nova.conf. Example below.
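A sketch of what that line might look like; the rest of the filter list is just a typical default set and may differ in your environment, the key point is appending AggregateInstanceExtraSpecsFilter:
scheduler_default_filters = RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,AggregateInstanceExtraSpecsFilter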
In general, the workflow for using host aggregates looks like this:
Create a new aggregate.
Set a particular property for that aggregate, such as ssd=true, or in our case, joeistheboss=true.
Add qualifying hosts to this aggregate.
Create a flavor that requires this particular property.
Instantiate instances using this flavor.
Planning
As an admin planning for your customers, however, you have a decision to make. In general, you’ll need to consider the following:
Is there a clear separation between hosts, either physically or in terms of redundancy? If so, you will probably want to use availability zones.
Is the separation based on hardware capabilities? If so, you will probably want to use host aggregates.
Are hosts within a particular “category” spread across multiple locations? If so, you will probably want to use host aggregates so that you can group together hosts from multiple availability zones. (In this case, you can create an aggregate with the appropriate metadata in each zone.)
Do you want users to consciously choose a “category” for their VMs? If so, you will probably want to use availability zones, as users can specify them directly.
Creating and Using Host Aggregates
Create a host aggregate in an availability zone as shown below.
# nova aggregate-create <name> [<availability-zone>]
Example:
# nova aggregate-create ssd-disk nova
Add a host to your aggregate.
# nova aggregate-add-host <aggregate> <host>
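For example, adding a hypothetical compute node named compute-0 to the ssd-disk aggregate created above:
# nova aggregate-add-host ssd-disk compute-0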
Add metadata to be associated with the aggregate
# nova aggregate-set-metadata <aggregate> <key=value>
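For example, tagging the ssd-disk aggregate with ssd=true:
# nova aggregate-set-metadata ssd-disk ssd=true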
Create a flavor that requires the key=value pair.
# nova flavor-create ssd.large 6 8192 80 4
# nova flavor-key ssd.large set ssd=true
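Finally, boot an instance with the new flavor to complete the workflow; the image name below is a placeholder:
# nova boot --flavor ssd.large --image <image> ssd-instance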