Running RHEL 8 on Dell R710/R610 via RAID Controller Retrofit

Late last year I wrote a blog post that outlined a process for injecting deprecated storage controller drivers into RHEL 8 via a Driver Update Disk. In a nutshell, this process allows you to install RHEL 8 on the R710/R610 (11th Gen).

This process worked fine until you wanted to yum update your server. Rather than attempt to find a repeatable way to inject the drivers before each update, I decided to upgrade my RAID controller to one that is supported.

My 11th generation R710 came with a PERC H700, identified below.

[root@virt01 ~]# lspci -knn | grep 'RAID bus controller'
03:00.0 RAID bus controller [0104]: Broadcom / LSI MegaRAID SAS 2108 [Liberator] [1000:0079] (rev 05)

Based on feedback from friends and co-workers, I purchased a PERC H330, which I believe ships with the 13th Gen Dell rackmount servers.

These cards can be found fairly cheap on eBay.

You will also need two new cables: SFF-8643 to SFF-8087 (mini SAS HD to mini SAS).

Below is the output from one of my Dell R710s. Note that I did not remove the original card.

[{ocptest}root@virt02 ~]# lspci -knn | grep 'RAID bus controller'
03:00.0 RAID bus controller [0104]: Broadcom / LSI MegaRAID SAS 2108 [Liberator] [1000:0079] (rev 05)
05:00.0 RAID bus controller [0104]: Broadcom / LSI MegaRAID SAS-3 3108 [Invader] [1000:005d] (rev 02)
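
If you want to confirm that the new card is handled by the megaraid_sas driver that ships with RHEL 8, lspci can show the kernel driver in use. A quick check, assuming the new controller landed at 05:00.0 as in the output above:

# lspci -knn -s 05:00.0

Look for a "Kernel driver in use: megaraid_sas" line in the output.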

Afterthoughts

I’ve only performed this swap on one of my R710s; I have two more, as well as two R610s. Below are a few things to consider if you are looking to perform a similar upgrade.

  1. Take a hard look at your existing server. Do you need to purchase a full-sized PCIe card or a mini card? A mini card is likely what you need on the 1U servers.
  2. Remove the old card. The server does not much care for having a RAID controller installed with no cables attached.
  3. If you buy a full-sized PCIe card, make sure you get one with the bracket that you need (full-height or low-profile).
  4. You may want to find a cable with 90-degree connectors for the backplane side. Mine fit fine on the R710, but I have heard that 90-degree connectors work best in 1U servers.

Nmcli: Working with Linux Bridges on RHEL8/CentOS8

For better or worse, nmcli is pretty much the default mechanism for configuring networking on RHEL8/CentOS8. I cannot yet say I am a fan. However, I had been working with ifcfg files for years and held on to that method of network configuration until they were dragged from my cold and bloated hands.

Show Bridges

# nmcli con show
NAME          UUID                                  TYPE      DEVICE       
eno1          e875f081-1ef6-4995-bb12-0f5ff50f1095  ethernet  eno1         
provisioning  272457e0-8a6b-4e93-8902-f28c07f45cdc  bridge    provisioning 
baremetal     f6d0909c-9ee7-4aa6-8902-b8ce9171fa93  bridge    baremetal    
vlan98        1db123b9-0e43-453d-ba7b-757213ed377d  vlan      eno1.98      
vlan99        9d5d886b-e9bd-45c3-8dc4-c958b73b232e  vlan      eno1.99      
vnet0         73344ec4-37e4-487d-9a07-775ddab2b05e  tun       vnet0        
vnet1         cb4622dc-7987-4410-8497-780f89195a2f  tun       vnet1        
eno2          c1b3674d-cc63-414c-90bb-69e00759efd8  ethernet  --           
eno3          5f12b893-3333-4c8f-91c2-54512d7e7f09  ethernet  --           
eno4          4bbbcace-9ffe-43df-863a-2b19585d98ee  ethernet  --    

Show a Bridge and Connected Interfaces

Format is as shown below.

nmcli -f GENERAL.DEVICE,GENERAL.TYPE,BRIDGE.SLAVES device show bridge_device_name

For example:

nmcli -f GENERAL.DEVICE,GENERAL.TYPE,BRIDGE.SLAVES device show provisioning
GENERAL.DEVICE:                         provisioning
GENERAL.TYPE:                           bridge
BRIDGE.SLAVES:                          eno1.98 vnet1

Add a Bridge

# nmcli connection add type bridge ifname app-br0
Connection 'bridge-app-br0' (f26b8315-5ab0-409a-a701-8abf3837aa92) successfully added.
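
By default the new bridge connection will try to get an address via DHCP. If you want the bridge itself to carry a static address instead, set it on the connection; a minimal sketch, where the 192.168.100.10/24 address and 192.168.100.1 gateway are placeholders for your environment:

# nmcli con modify bridge-app-br0 ipv4.method manual ipv4.addresses 192.168.100.10/24 ipv4.gateway 192.168.100.1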

Add a Slave Interface to a Bridge

nmcli con add type ethernet con-name br-slave-1 ifname eno51 master app-br0
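
Then activate the slave and the bridge:

# nmcli con up br-slave-1
# nmcli con up bridge-app-br0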

Delete a Bridge

# nmcli conn delete bridge-app-br0
Connection 'bridge-app-br0' (f26b8315-5ab0-409a-a701-8abf3837aa92) successfully deleted.
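
Note that deleting the bridge connection does not remove any slave connection profiles you created for it; delete those separately if they are no longer needed:

# nmcli con delete br-slave-1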

OpenStack Nova – Overview of Host Aggregates and Availability Zones


Introduction

This document was put together using multiple sources as reference.

Availability Zones

  • Typically used for separating failure domains
  • Availability Zones are the end-user visible logical abstraction for partitioning a cloud without knowing the physical infrastructure.
  • An availability zone is a way in which the user can specify a particular “location” in which an instance should boot.
  • Availability zones are fairly straightforward; pick a zone, start a VM.
  • Availability zones serve as a bucket, or grouping, of hosts.
  • Host aggregates do not conflict with availability zones.
  • Users choose an availability zone when booting a VM (see the example below).
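
As a quick illustration of that last point, the availability zone is just a flag on nova boot. A minimal sketch, where the zone name (nova), flavor, image, and instance name are all placeholders for your environment:

nova boot --availability-zone nova --flavor m1.small --image rhel7 az-test-vm01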

Host Aggregates

  • Typically used for grouping servers with similar capabilities
  • Host aggregates can be regarded as a mechanism to further partition an availability zone; while availability zones are visible to users, host aggregates are only visible to administrators
  • Host aggregates also allow for higher availability of a single guest instance within an availability zone, enable advanced VM placement strategies, and, more importantly, enable zero-downtime host upgrades.
  • Host aggregates are in the administrator’s domain
  • Host aggregates are intended as a way to group servers that have a particular quality to them.
  • Host aggregates serve as an intelligent way for the scheduler to know where to place VMs based on some characteristic of the hosts.
  • Matching is done via key=value pairs set at the flavor level.
  • Host Aggregate has no conflict with Availability Zone.

Configure Nova to Use Host Aggregates

To use host aggregates for scheduling, AggregateInstanceExtraSpecsFilter must be included in scheduler_default_filters in /etc/nova/nova.conf. Example below.

scheduler_default_filters=AggregateInstanceExtraSpecsFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter
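
After updating nova.conf, restart the scheduler so that it picks up the new filter list. On a RHEL/CentOS based controller the service is typically:

systemctl restart openstack-nova-scheduler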

Host Aggregate Workflow

In general, the workflow for using host aggregates looks like this:

  1. Create a new aggregate.
  2. Set a particular property for that aggregate, such as ssd=true, or in our case, joeistheboss=true.
  3. Add qualifying hosts to this aggregate.
  4. Create a flavor that requires this particular property.
  5. Instantiate VMs using this flavor.

Planning

As an admin planning for your customers, however, you have a decision to make.  In general, you’ll need to consider the following:

  1. Is there a clear separation between groups of hosts, either physically or in terms of redundancy?  If so, you will probably want to use availability zones.
  2. Is the separation based on hardware capabilities?  If so, you will probably want to use host aggregates.
  3. Are hosts within a particular “category” spread across multiple locations?  If so, you will probably want to use host aggregates so that you can group together hosts from multiple availability zones.  (In this case, you can create an aggregate with the appropriate metadata in each zone.)
  4. Do you want users to consciously choose a “category” for their VMs?  If so, you will probably want to use availability zones, as users can specify them directly.

Creating and Using Host Aggregates

Create a host aggregate in an availability zone as shown below.

nova aggregate-create <aggregate-name> <availability-zone>

Example:

nova aggregate-create ssd-disk nova
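
To confirm that the aggregate was created (and, later on, to review its hosts and metadata), list your aggregates:

nova aggregate-list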

Add a host to your aggregate.

nova aggregate-add-host <aggregate-name> <host-name>

Add metadata to be associated with the aggregate.

nova aggregate-set-metadata <aggregate-name> <key=value>

Create a flavor and set the key=value pair on it.

nova flavor-create ssd.large 6 8192 80 4
nova flavor-key ssd.large set ssd=true
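
Finally, boot an instance using the new flavor. With the filter in place, the scheduler will only consider hosts in an aggregate whose metadata matches ssd=true (the image and instance names below are placeholders):

nova boot --flavor ssd.large --image rhel7 ssd-vm01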

 

MegaCLI Force Firmware Flash


The step below will update your adapter’s firmware.

# MegaCli -adpfwflash -F mr2108fw.rom -aAll

The step below will FORCE update your adapter’s firmware by skipping the version check. This is useful if you need to roll back to an older firmware.

# MegaCli -adpfwflash -F mr2108fw.rom -NoVerChk -aAll
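
To confirm the firmware level before and after flashing, the adapter information output includes the firmware package build:

# MegaCli -AdpAllInfo -aAll | grep -i 'fw package'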

 

Linux: How to start a Minecraft server at boot via systemd


I am not into Minecraft, but my kids are.

I decided to host a small Minecraft server locally on a RHEL 7 VM. There are tons of guides on how to install Minecraft on Linux, but I did not find much of anything that details how to get the software to start at boot time via systemd.

Below are the steps that I took. These instructions should work on RHEL 7, CentOS 7, or any other Linux distribution that uses systemd to start services.

Change directory to the systemd directory.

# cd /etc/systemd/system

Create a systemd service file using your favorite text editor. Mine is called minecraft.service.

[root@minecraft system]# cat minecraft.service
[Unit]
Description=Start Minecraft
After=network.target

[Service]
Type=simple
ExecStart=/root/start_minecraft_server.bash
TimeoutStartSec=0

[Install]
WantedBy=default.target
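
As a side note, the unit above runs the startup script as root. If you would rather run the server as the mcserver user (assuming that account exists and owns /home/mcserver/minecraft, per the startup script below), systemd can do that directly with User= and WorkingDirectory= in the [Service] section; a minimal sketch, with the java path assumed to be /usr/bin/java:

[Service]
Type=simple
User=mcserver
WorkingDirectory=/home/mcserver/minecraft
ExecStart=/usr/bin/java -Xmx2048M -Xms2048M -jar minecraft_server.1.12.2.jar nogui
TimeoutStartSec=0

The rest of this post sticks with the wrapper script approach.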

The unit file does not need to be executable; standard permissions are fine.

# chmod 644 /etc/systemd/system/minecraft.service

Next, create a startup script; again, use your favorite text editor. In the example above, systemd is configured to run the script "/root/start_minecraft_server.bash", so we now need to create that script. Its contents are below.

#!/bin/bash

#Standard Minecraft
cd /home/mcserver/minecraft
exec java -Xmx2048M -Xms2048M -jar minecraft_server.1.12.2.jar nogui

Make the script above executable.

# chmod +x /root/start_minecraft_server.bash

Now reload systemd.

# systemctl daemon-reload

Enable and start your service.

# systemctl enable minecraft.service

# systemctl start minecraft.service
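
To verify that everything is working, check the status of the service and, if needed, review its logs.

# systemctl status minecraft.service

# journalctl -u minecraft.service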