In part 1 of this series we were introduced to OpenSCAP and the process of running scans via the SCAP workbench. In part 2, we explored concepts and components that define security/vulnerability scans. In this 3rd post we are going to dive into the command line operation.
Let’s get started with oscap.
Installing oscap
In RHEL 7 oscap can be installed with the following command
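On a RHEL 7 host, the scanner and the SCAP Security Guide content it consumes are typically pulled in together (package names assume the standard RHEL 7 repositories):
# yum install openscap-scanner scap-security-guide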
In part one of the OpenSCAP series we were introduced to the basic usage of the OpenSCAP toolset. In that post we learned how to run a basic scan via the scap-workbench in a desktop environment.
This post will focus on Content, Profiles, and Targets.
Content
All content will be installed in the directory shown below. The content in this directory will vary based on the installed OS (the content on my Fedora differs from RHEL for example).
/usr/share/xml/scap/ssg/content
The screenshot below contains a list of content installed by default on RHEL 7.
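If you want to check what shipped on your own system, the same list can be produced with a quick directory listing:
# ls -l /usr/share/xml/scap/ssg/content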
Additional content can be obtained and added to the content directory shown above. For example, NIST content can be downloaded directly from the NIST website.
Red Hat Satellite consists of a number of running services. Restarting each service manually can be painful. Luckily you can use the commands below to easily restart all services.
List Services
Run the command below to view a list of all Satellite services that are started at boot.
# katello-service list
Redirecting to ‘foreman-maintain service’
Running Service List
========================================================================
List applicable services:
dynflowd.service enabled
foreman-proxy.service enabled
httpd.service enabled
postgresql.service enabled
pulp_celerybeat.service enabled
pulp_resource_manager.service enabled
pulp_streamer.service enabled
pulp_workers.service enabled
puppetserver.service enabled
qdrouterd.service enabled
qpidd.service enabled
rh-mongodb34-mongod.service enabled
smart_proxy_dynflow_core.service enabled
squid.service enabled
tomcat.service enabled
All services listed [OK]
Check Service Status
The command below will check the status of all Satellite services. The output is similar to running systemctl status on all Satellite-specific services. The output can be quite verbose.
# katello-service status
Stop Services
Use the command below to stop all Satellite services.
# katello-service stop
Start Services
Use the command below to start all Satellite services.
# katello-service start
Restart Services
The command below will restart all Satellite services.
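# katello-service restart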
When spawning a virtual machine in RHV without the guest agent, a warning will appear as an exclamation mark in RHV.
Register with Satellite
You can skip this step if your guest is already registered
In order to install the guest agent, I must first register the virtual machine with my local satellite. If not using a local satellite server, you can register with RHN.
First we need to grab the katello rpm from satellite – an insecure satellite server in this case as we are using a self-signed cert.
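A rough sketch of that step is below; the Satellite hostname, organization, and activation key are placeholders for your environment:
# curl --insecure --output katello-ca-consumer-latest.noarch.rpm https://satellite.example.com/pub/katello-ca-consumer-latest.noarch.rpm
# yum localinstall -y katello-ca-consumer-latest.noarch.rpm
# subscription-manager register --org="Example_Org" --activationkey="rhel7-vms"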
In this post I will review the process of creating Content Views (CV), Composite Content Views (CCV), publishing each view, and creating lifecycles.
Note that in this post we are working with Red Hat Satellite 6.4, in which there was a major overhaul of the WebUI. You may have noticed that all menus are now situated in a pane on the left, rather than at the top of each page.
Sync Plans
A sync plan is a recurring, scheduled synchronization between a Red Hat Satellite repository and its upstream source repositories. I suggest syncing either daily or weekly in order to minimize the deltas between each sync. The more often you sync, the less change there is between syncs, so each sync should complete faster than, say, a monthly sync would.
Note that this step assumes that you have already set up the correct repositories for RHEL and Red Hat OpenStack. A list of required repositories can be found in the Red Hat OpenStack Director Installation and Usage Guide.
Navigate to Content > Sync Plans
Here we create a daily sync plan for RHEL 7.
We now add RHEL 7 as the product.
Now we need to create a daily sync plan for Red Hat OpenStack.
Note: you might need to create a sync plan for Ceph as well. Ensure all plans sync at the same interval.
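If you prefer the CLI over the WebUI, roughly the same daily plan can be created and attached to a product with hammer; the organization, plan name, and sync date below are just examples:
# hammer sync-plan create --name "Daily Sync" --interval daily --enabled true --sync-date "2019-01-01 00:00:00" --organization "Example_Org"
# hammer product set-sync-plan --name "Red Hat Enterprise Linux Server" --sync-plan "Daily Sync" --organization "Example_Org"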
Create a Content View
Now we need to create our content views. We will create one for RHEL, and one for OSP. If you are using Ceph, you will need to create a content view for it as well.
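As a CLI alternative, a content view can be created, populated, and published with hammer along these lines (the view, organization, product, and repository names are placeholders):
# hammer content-view create --name "cv-rhel7" --organization "Example_Org"
# hammer content-view add-repository --name "cv-rhel7" --organization "Example_Org" --product "Red Hat Enterprise Linux Server" --repository "Red Hat Enterprise Linux 7 Server RPMs x86_64 7Server"
# hammer content-view publish --name "cv-rhel7" --organization "Example_Org"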
Heat is the main orchestration engine for OpenStack, and is used by OpenStack director to install an OpenStack Overcloud environment.
When we run the “openstack overcloud deploy” command, we are specifically telling RHEL OSP director that we want it to use the pre-defined Heat templates from /usr/share/openstack-tripleo-heat-templates/. OSP director will manage the deployment of a new overcloud Heat stack, using files from this directory.
When RHEL OSP director calls the Heat stack, it needs the following data:
A top-level Heat template that describes the overall environment and the resources required.
An environment/resource registry to tell Heat where to find resource definitions for non-standard Heat elements, e.g. TripleO components.
A set of parameters to declare the deployment-specific options (via -e).
The most important files for us to focus on are in our deployment directory; these are the default files that get called by OSP director.
The top-level Heat template that OSP director uses for deployment is /usr/share/openstack-tripleo-heat-templates/overcloud-without-mergepy.yaml.
The resource registry, which tells Heat where to find the templates for deployment resources, is /usr/share/openstack-tripleo-heat-templates/overcloud-resource-registry-puppet.yaml.
Creating a Heat Stack
To create the stack we run the command below. This command instructs Heat to use the templates in ~/my_templates/, as well as the override templates specified with the ‘-e’ option.
This is just an example of what I am using in my lab environment; your deploy command will be much different. Also note that I have copied the templates from /usr/share/openstack-tripleo-heat-templates to ~/my_templates/.
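As an illustration only, a deploy command for a lab like this might look something like the following; the environment files listed after -e are placeholders for whatever overrides you are using:
# openstack overcloud deploy --templates ~/my_templates/ -e ~/my_templates/environments/network-isolation.yaml -e ~/my_templates/network-environment.yaml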
Unfortunately our deploy failed with the following errors.
Exception: Heat Stack create failed.
DEBUG: openstackclient.shell clean_up DeployOvercloud
DEBUG: openstackclient.shell got an error: Heat Stack create failed.
ERROR: openstackclient.shell Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/openstackclient/shell.py", line 176, in run
    return super(OpenStackShell, self).run(argv)
  File "/usr/lib/python2.7/site-packages/cliff/app.py", line 230, in run
    result = self.run_subcommand(remainder)
  File "/usr/lib/python2.7/site-packages/cliff/app.py", line 295, in run_subcommand
    result = cmd.run(parsed_args)
  File "/usr/lib/python2.7/site-packages/cliff/command.py", line 53, in run
    self.take_action(parsed_args)
  File "/usr/lib/python2.7/site-packages/rdomanager_oscplugin/v1/overcloud_deploy.py", line 864, in take_action
    self._deploy_tripleo_heat_templates(stack, parsed_args)
  File "/usr/lib/python2.7/site-packages/rdomanager_oscplugin/v1/overcloud_deploy.py", line 535, in _deploy_tripleo_heat_templates
    parsed_args.timeout)
  File "/usr/lib/python2.7/site-packages/rdomanager_oscplugin/v1/overcloud_deploy.py", line 478, in _heat_deploy
    raise Exception("Heat Stack create failed.")
Exception: Heat Stack create failed.
We can verify that the deploy failed with the command below.
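With the heat CLI available in this release, the stack state and the failed resources can be checked with something like:
# heat stack-list
# heat resource-list overcloud | grep -i failed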
The failed resources are named “Compute” and “Controller”. Let’s take a closer look at those using the “resource-show” argument.
# heat resource-show overcloud Compute
| resource_status_reason | ResourceUnknownStatus: Resource failed - Unknown status FAILED due to "Resource CREATE failed: ResourceUnknownStatus: Resource failed - Unknown status FAILED due to "Resource CREATE failed: StackValidationFailed: Property error : OsNetConfigImpl: config The Parameter (BondInterfaceOvsOptions) was not provided."" |
Let’s now do the same for Controller.
# heat resource-show overcloud Controller
| resource_status_reason | ResourceUnknownStatus: Resource failed - Unknown status FAILED due to "Resource CREATE failed: ResourceUnknownStatus: Resource failed - Unknown status FAILED due to "Resource CREATE failed: StackValidationFailed: Property error : OsNetConfigImpl: config The Parameter (BondInterfaceOvsOptions) was not provided."" |
Apparently I have some issues with my OVS bonding options, so I need to get those straight before I can continue.
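For reference, that parameter normally lives in one of the -e environment files; a minimal sketch, with the bond mode purely as an example, looks like this:
parameter_defaults:
  BondInterfaceOvsOptions: "bond_mode=balance-slb"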
Deleting a Failed Heat Stack
Since our last deploy failed, we need to delete the failed stack before we can kick off another stack deploy. Below is an example of that command – note we are using the UUID of the stack.
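Assuming the classic heat CLI, the delete looks roughly like this, with the UUID coming from the stack listing (the UUID below is a placeholder):
# heat stack-list
# heat stack-delete <uuid-of-failed-stack>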
Ok, let's take a look at /var/log/heat/heat-engine.log for more details. I also suggest opening another ssh session and tailing the log while the delete is attempting to do its thing.
If the output is too verbose to follow, I suggest thinning it out with the command below.
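For example, something as simple as the following keeps the interesting lines visible while the delete runs (the grep pattern is just a starting point):
# tail -f /var/log/heat/heat-engine.log | grep -iE "error|fail|trace"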
Apparently SELinux is blocking the reads for the certificates.
There are two ways to work around this issue. You can relabel the certificates by running “restorecon -Rv /path/to/certs/”, or you can disable SELinux enforcement, either temporarily by running “setenforce 0” or persistently by editing the /etc/selinux/config file and setting ‘SELINUX=disabled’.
You may need to rerun the delete; in my case it was stuck in “DELETE_IN_PROGRESS”. I restarted all Heat-related services to force the delete to error out.
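On the undercloud that usually amounts to something like the command below; exact service names can vary between releases:
# systemctl restart openstack-heat-api openstack-heat-api-cfn openstack-heat-engine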