Running RHEL 8 on Dell R710/R610 via RAID Controller Retrofit

Late last year I wrote a blog post that outlined a process for injecting deprecated storage controller drivers into RHEL 8 via a Driver Update Disk. In a nutshell, that process allows you to install RHEL 8 on the R710/R610 (11th Gen) servers.

This process worked fine, until you wanted to yum update your server. Rather than attempt to find a repeatable process for injecting drivers prior to each update, I decided to upgrade my RAID controller to one that is supported.

My 11th generation R710 came with a PERC H700, identified below.

[root@virt01 ~]# lspci -knn | grep 'RAID bus controller'
03:00.0 RAID bus controller [0104]: Broadcom / LSI MegaRAID SAS 2108 [Liberator] [1000:0079] (rev 05)

Based on feedback from friends and co-workers, I purchased a PERC H330, which I believe ships with the 13th Gen Dell rackmount servers.

These cards can be found pretty cheap on eBay.

Plus, you will need two new cables: SFF-8643 to SFF-8087 (mini SAS HD to mini SAS).

Output from one of my Dell R710s. Note I did not remove the original card.

[{ocptest}root@virt02 ~]# lspci -knn | grep 'RAID bus controller'
03:00.0 RAID bus controller [0104]: Broadcom / LSI MegaRAID SAS 2108 [Liberator] [1000:0079] (rev 05)
05:00.0 RAID bus controller [0104]: Broadcom / LSI MegaRAID SAS-3 3108 [Invader] [1000:005d] (rev 02)

Afterthoughts

I’ve only performed this swap on one of my R710s; I have two more, as well as two R610s. Below are a few things to consider if you are looking to perform a similar upgrade.

  1. Take a hard look at your existing server. Do you need to purchase a full-sized PCIe card or a mini card? The latter is likely the case on the 1U servers.
  2. Remove the old card. The server does not much like having a RAID controller installed without cables attached.
  3. If you buy a full-sized PCIe card, make sure you get one with the bracket that you need (full height or short).
  4. You may want to find a cable with 90-degree connectors for the backplane. Mine fit fine on the R710, but I have heard that the 90-degree connector works best in 1U servers.

Red Hat OpenShift 4.x: Adding a new user with admin rights

Getting Started

Once the initial baremetal OpenShift 4.6 IPI install is complete, the installer prints a summary of the deployment.

This output contains the URL for the OpenShift WebUI along with the “kubeadmin” password. You will be able to log into the WebUI using these credentials.

If you forget your kubeadmin password (which you will, because it’s ugly), you will be able to find it in the auth directory.

{ocptest}[kni@bastion ~]$ cat clusterconfigs/auth/kubeadmin-password 
WMAWX-QkGc3-IQKy2-XJDI6
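
For CLI access you do not strictly need that password at all; exporting the kubeconfig generated by the installer works as well. The path below assumes the same clusterconfigs directory referenced above.

{ocptest}[kni@bastion ~]$ export KUBECONFIG=/home/kni/clusterconfigs/auth/kubeconfig
{ocptest}[kni@bastion ~]$ oc whoami
system:admin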

Upon initial login to the WebUI you will see a blue bar near the top of the page.

Adding a New User with Admin Rights

Bottom line, the OpenShift “kubeadmin” password is not user friendly. In my lab I do not want to have to look it up each time I access the WebUI. I need simple, easy-to-remember credentials.

So I am going to create a new admin user. In the example below I am creating a user named ‘admin’ with a password of ‘admin’.

I am working in a homelab, so security is not exactly a priority.

{ocp}[kni@bastion ~]$ htpasswd -c -B -b users.htpasswd admin admin
Adding password for user admin
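
One caveat: the -c flag creates (and will overwrite) the file. If you later want to add more users to the same file, drop -c. The username and password below are just placeholders.

{ocp}[kni@bastion ~]$ htpasswd -B -b users.htpasswd developer redhat123
Adding password for user developer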

Now I define a secret which uses the HTPasswd user file as shown below.

{ocp}[kni@bastion ~]$ oc create secret generic htpass-secret --from-file=htpasswd=/home/kni/users.htpasswd -n openshift-config
secret/htpass-secret created

Next, I create a custom resource (htpasswd.cr) that defines the use of the HTPasswd identity provider. This file is straight out of the user doc and is unmodified.

apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: my_htpasswd_provider 
    mappingMethod: claim 
    type: HTPasswd
    htpasswd:
      fileData:
        name: htpass-secret 

Now we apply the Custom Resource (CR) via oc apply.

{ocp}[kni@bastion ~]$ oc apply -f /home/kni/htpasswd.cr
oauth.config.openshift.io/cluster configured

Now let’s test the new ID and password on the CLI. If the step below fails, you will need to start over from the beginning.

{ocp}[kni@bastion ~]$ oc login -u admin
Authentication required for https://api.ocp.lab.localdomain:6443 (openshift)
Username: admin
Password: 
Login successful.

You have access to 59 projects, the list has been suppressed. You can list all projects with 'oc projects'

Using project "default".

Confirm URL for WebUI.

{ocp}[kni@bastion ~]$ oc whoami --show-console
https://console-openshift-console.apps.ocp.lab.localdomain

Navigate to the URL above and select my_htpasswd_provider. Enter your new credentials. At this point you should be able to log in without issue. If not, please work through the process again.
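
Note that at this point the new ‘admin’ user can authenticate, but it has no special privileges yet. To actually grant it admin rights, a cluster role binding along the lines of the one below should do the trick; run it while still authenticated as kubeadmin (or with the installer’s kubeconfig).

{ocp}[kni@bastion ~]$ oc adm policy add-cluster-role-to-user cluster-admin admin
clusterrole.rbac.authorization.k8s.io/cluster-admin added: "admin"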

Troubleshooting

If you do run into errors, remove the files that you created and run the command below to delete the secret.

oc delete secret htpass-secret -n openshift-config

At this point you should be able to repeat the procedure and check for errors.
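
Depending on how far you got, there may also be leftover user and identity objects for the new account. Deleting those as well gives you a clean slate; the names below assume the ‘admin’ user and provider name used above.

oc delete user admin
oc delete identity my_htpasswd_provider:admin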

Reference

https://docs.openshift.com/container-platform/4.5/authentication/identity_providers/configuring-htpasswd-identity-provider.html#identity-provider-creating-htpasswd-file-linux_configuring-htpasswd-identity-provider

RHEL 8/CentOS 8: Install and Enable Libvirt

[root@bastion ~]$  dnf module install virt 
[root@bastion ~]$  dnf install virt-install virt-viewer
[root@bastion ~]$  systemctl start libvirtd
[root@bastion ~]$  systemctl enable libvirtd
[root@bastion ~]$  systemctl status libvirtd
● libvirtd.service - Virtualization daemon
   Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled; vendor preset: enabled)
   Active: active (running) since Mon 2020-12-14 15:16:44 EST; 8s ago
     Docs: man:libvirtd(8)
           https://libvirt.org
 Main PID: 33522 (libvirtd)
    Tasks: 19 (limit: 32768)
   Memory: 22.2M
   CGroup: /system.slice/libvirtd.service
           ├─33522 /usr/sbin/libvirtd --timeout 120
           ├─33653 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/libexec/libvirt_leaseshelper
           └─33654 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/libexec/libvirt_leaseshelper

[root@bastion ~]$  virsh list --all
 Id   Name   State
--------------------
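
With libvirtd up and running, a quick way to exercise the stack is to kick off a test guest with virt-install. This is only a sketch; the VM name, ISO path, and os-variant below are assumptions for my lab.

[root@bastion ~]$  virt-install --name rhel8-test --memory 2048 --vcpus 2 \
    --disk size=20 --cdrom /var/lib/libvirt/images/rhel-8.3-x86_64-dvd.iso \
    --os-variant rhel8.3 --network network=default --graphics vnc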

OpenShift 4 CLI Common Command Guide: Part 1

Show OpenShift Version

[kni@bastion ~]$ oc get clusterversion
NAME      VERSION   AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.6.4     True        False         6d2h    Cluster version is 4.6.4

Accessing the Web Console

Once your deployment is complete, you can use the following command to determine the web console URL.

[kni@bastion ~]$ oc whoami --show-console
https://console-openshift-console.apps.ocp.lab.localdomain

By default, your username for the console is kubeadmin; use the command below to obtain your password. Note that your cluster directory may not be clusterconfigs, however this is the default.

[kni@bastion ~]$ cat clusterconfigs/auth/kubeadmin-password 
KLNf6-JEjPC-A6cha-PJfft

Working with Nodes

Show nodes.

[kni@bastion ~]$ oc get nodes
NAME       STATUS   ROLES           AGE    VERSION
master-0   Ready    master,worker   6d3h   v1.19.0+9f84db3
master-1   Ready    master,worker   6d3h   v1.19.0+9f84db3
master-2   Ready    master,worker   6d3h   v1.19.0+9f84db3

Show baremetal hardware nodes.

Oops, I have errors...

[kni@bastion ~]$ oc get bmh -n openshift-machine-api
NAME       STATUS   PROVISIONING STATUS      CONSUMER             BMC                     HARDWARE PROFILE   ONLINE   ERROR
master-0   OK       externally provisioned   ocp-d9z87-master-0   ipmi://10.1.99.2:6230                      true     
master-1   OK       externally provisioned   ocp-d9z87-master-1   ipmi://10.1.99.2:6231                      true     
master-2   OK       externally provisioned   ocp-d9z87-master-2   ipmi://10.1.99.2:6232                      true     
worker-0   error    registration error                            ipmi://10.1.99.2:623                       true     Failed to get power state for node c8105b63-3697-419c-9cab-fdfa9985411a. Error: IPMI call failed: power status.
worker-1   error    inspecting                                    ipmi://10.1.99.2:6234                      true     Introspection timeout

Delete a baremetal node.

[kni@bastion ~]$ oc delete bmh worker-0 -n openshift-machine-api
baremetalhost.metal3.io "worker-0" deleted

Working with Projects

Show Current Project

[kni@bastion ~]$ oc project
Using project "openshift-machine-api" on server "https://api.ocp.lab.localdomain:6443".

Create a New Project

[kni@bastion ~]$ oc new-project testproject
Now using project "testproject" on server "https://api.ocp.lab.localdomain:6443".

Change Current Project

If the project does not already exist, you will get an error.

[kni@bastion ~]$ oc project testproject
Now using project "testproject" on server "https://api.ocp.lab.localdomain:6443".

Show Status of Current Project

[kni@bastion ~]$ oc status
In project testproject on server https://api.ocp.lab.localdomain:6443

You have no services, deployment configs, or build configs.
Run 'oc new-app' to create an application.

Delete a Project

[kni@bastion ~]$ oc delete project testproject
project.project.openshift.io "testproject" deleted

List all Projects

[kni@bastion ~]$ oc get projects
NAME                                               DISPLAY NAME   STATUS
default                                                           Active
kube-node-lease                                                   Active
kube-public                                                       Active
kube-system                                                       Active
openshift                                                         Active
openshift-apiserver                                               Active
openshift-apiserver-operator                                      Active
--trunc--

Working with Pods

Show all Pods in a Particular Namespace

oc get pods will show you the pods in your current namespace. Use -n to specify a namespace by name. Use -A to list pods across all namespaces.

[kni@bastion ~]$ oc get pod -n openshift-dns
NAME                READY   STATUS    RESTARTS   AGE
dns-default-64gcs   3/3     Running   0          4d1h
dns-default-dttgz   3/3     Running   0          4d1h
dns-default-jwmdx   3/3     Running   0          4d1h

Wide listing of Pods in a Particular Namespace

-o wide allows you to see the pod IP address and the node where it is running.

[kni@bastion ~]$ oc get pods -n openshift-console -o wide
NAME                         READY   STATUS    RESTARTS   AGE    IP            NODE       NOMINATED NODE   READINESS GATES
console-65c4b9c45b-gts2h     1/1     Running   0          4d     10.129.0.58   master-1   <none>           <none>
console-65c4b9c45b-jqrhp     1/1     Running   0          4d     10.128.0.44   master-0   <none>           <none>
downloads-65c97dd5b9-mwpfd   1/1     Running   0          4d1h   10.129.0.53   master-1   <none>           <none>
downloads-65c97dd5b9-np5hh   1/1     Running   0          4d1h   10.130.0.40   master-2   <none>           <none>
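
If you only need a couple of fields, custom columns keep the output compact. The columns below (name, IP, node) are just one possible selection, and the output is trimmed to the same pods shown above.

[kni@bastion ~]$ oc get pods -n openshift-console -o custom-columns=NAME:.metadata.name,IP:.status.podIP,NODE:.spec.nodeName
NAME                         IP            NODE
console-65c4b9c45b-gts2h     10.129.0.58   master-1
console-65c4b9c45b-jqrhp     10.128.0.44   master-0
downloads-65c97dd5b9-mwpfd   10.129.0.53   master-1
downloads-65c97dd5b9-np5hh   10.130.0.40   master-2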

Show Pod Resource Usage

The command below will allow you to see the amount of CPU and memory used per pod. CPU is measured in millicores (1/1000 of a core; a host exposes # of host cores * 1000 millicores) and memory in bytes (shown here in MiB).

[kni@bastion ~]$ oc adm top pods -A
NAMESPACE                                          NAME                                                      CPU(cores)   MEMORY(bytes)   
openshift-apiserver                                apiserver-c95cd4bfd-92h9s                                 20m          207Mi           
openshift-apiserver                                apiserver-c95cd4bfd-kfkd7                                 22m          243Mi           
openshift-apiserver                                apiserver-c95cd4bfd-nbm26                                 24m          227Mi           
openshift-apiserver-operator                       openshift-apiserver-operator-54ff4fb46f-jnxzn             16m          98Mi            
openshift-authentication                           oauth-openshift-8bcb6778d-lnqvj                           4m           49Mi            
openshift-authentication                           oauth-openshift-8bcb6778d-svffg                           3m           37Mi            
openshift-authentication-operator                  authentication-operator-5687669dcd-5vrmk                  29m          102Mi           
openshift-cloud-credential-operator                cloud-credential-operator-5bc7bbcdc5-w2hwt                2m           98Mi            
openshift-cluster-machine-approver                 machine-approver-6bbc78c46b-mmzdf                         0m           49Mi            
openshift-cluster-node-tuning-operator             cluster-node-tuning-operator-57d7f9b947-xkrd8             0m           25Mi            
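
To find the heaviest consumers quickly, recent versions of oc also accept a --sort-by flag (worth verifying on your release); for example:

oc adm top pods -A --sort-by=memory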

Nmcli: Working with Linux Bridges on RHEL8/CentOS8

For better or worse, nmcli is pretty much the default mechanism for configuring networking on RHEL8/CentOS8. I cannot, as of yet, say I am a fan. However, I have been working with ifcfg files for years and held on to that method of network configuration until they were dragged from my cold and bloated hands.

Show Bridges

# nmcli con show
NAME          UUID                                  TYPE      DEVICE       
eno1          e875f081-1ef6-4995-bb12-0f5ff50f1095  ethernet  eno1         
provisioning  272457e0-8a6b-4e93-8902-f28c07f45cdc  bridge    provisioning 
baremetal     f6d0909c-9ee7-4aa6-8902-b8ce9171fa93  bridge    baremetal    
vlan98        1db123b9-0e43-453d-ba7b-757213ed377d  vlan      eno1.98      
vlan99        9d5d886b-e9bd-45c3-8dc4-c958b73b232e  vlan      eno1.99      
vnet0         73344ec4-37e4-487d-9a07-775ddab2b05e  tun       vnet0        
vnet1         cb4622dc-7987-4410-8497-780f89195a2f  tun       vnet1        
eno2          c1b3674d-cc63-414c-90bb-69e00759efd8  ethernet  --           
eno3          5f12b893-3333-4c8f-91c2-54512d7e7f09  ethernet  --           
eno4          4bbbcace-9ffe-43df-863a-2b19585d98ee  ethernet  --    

Show a Bridge and Connected Interfaces

Format is as shown below.

nmcli -f GENERAL.DEVICE,GENERAL.TYPE,BRIDGE.SLAVES device show bridge_device_name

For example:

nmcli -f GENERAL.DEVICE,GENERAL.TYPE,BRIDGE.SLAVES device show provisioning
GENERAL.DEVICE:                         provisioning
GENERAL.TYPE:                           bridge
BRIDGE.SLAVES:                          eno1.98 vnet1

Add a Bridge

# nmcli connection add type bridge ifname app-br0
Connection 'bridge-app-br0' (f26b8315-5ab0-409a-a701-8abf3837aa92) successfully added.

Add a Slave Interface to a Bridge

nmcli con add type ethernet con-name br-slave-1 ifname eno51 master app-br0
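
The slave connection (and often the bridge itself) still needs to be activated, and the bridge usually wants an IP address. A minimal sketch, assuming a static address of 192.168.10.5/24 for the lab:

# nmcli con modify bridge-app-br0 ipv4.method manual ipv4.addresses 192.168.10.5/24
# nmcli con up bridge-app-br0
# nmcli con up br-slave-1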

Delete a Bridge

# nmcli conn delete bridge-app-br0
Connection 'bridge-app-br0' (f26b8315-5ab0-409a-a701-8abf3837aa92) successfully deleted.

Enhanced Platform Awareness in Red Hat OpenShift

A four-part series published on Medium.com:

Enhanced Platform Awareness (EPA) in OpenShift — Part I, HugePages

Enhanced Platform Awareness (EPA) in OpenShift — Part II, CPU pinning

Enhanced Platform Awareness (EPA) in OpenShift — Part III, NUMA Topology Awareness

Enhanced Platform Awareness (EPA) in OpenShift — Part IV, SR-IOV, DPDK and RDMA