RHEV: How to Configure LACP Ports on a Cisco Catalyst Switch


Here is how to configure ports on a Cisco switch for use as LACP (mode 4) uplinks for RHEV (Red Hat Enterprise Virtualization). Note that the configuration defined below may need to be tweaked a bit depending on your environment, as I am not exactly using the most recent equipment in my home lab.

Note that all configurations are on a Cisco 3560G unless otherwise noted.

RHEV Uplink Ports – Trunk Mode

Configure your port channel similar to what is shown below. This configuration allows you to trunk multiple VLANs; VLAN tagging will be required on the RHEV side.

interface Port-channel1
description trunk to RHEV Host1
switchport trunk encapsulation dot1q
switchport trunk allowed vlan 101-104,192
switchport mode trunk
spanning-tree portfast disable
spanning-tree bpduguard disable
spanning-tree guard root
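On the RHEV host side, the matching configuration would typically be a mode 4 (802.3ad) bond with a tagged VLAN interface on top of it. Below is a minimal sketch, assuming eth0 and eth1 are the bond slaves and the logical network bridge is named rhevm; the interface names, VLAN ID, and bridge name are placeholders for illustration, not values from my lab.

```
# /etc/sysconfig/network-scripts/ifcfg-bond0 (hypothetical example)
DEVICE=bond0
BONDING_OPTS="mode=4 miimon=100"
ONBOOT=yes
BOOTPROTO=none

# /etc/sysconfig/network-scripts/ifcfg-bond0.101 (tagged for VLAN 101)
DEVICE=bond0.101
VLAN=yes
ONBOOT=yes
BOOTPROTO=none
BRIDGE=rhevm
```

In a RHEV deployment the management tooling normally writes these files for you when you configure the bond and logical networks in the UI; the sketch is just to show what the tagged-trunk setup looks like underneath.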

Below is the configuration for the first physical port in the port channel.

interface GigabitEthernet0/9
description uplink to ETH0, RHEV Host1
switchport trunk encapsulation dot1q
switchport trunk allowed vlan 101-104,192
switchport mode trunk
channel-protocol lacp
channel-group 1 mode active

Below is the configuration for the second physical port in the port channel.

interface GigabitEthernet0/10
description uplink to ETH1, RHEV Host1
switchport trunk encapsulation dot1q
switchport trunk allowed vlan 101-104,192
switchport mode trunk
channel-protocol lacp
channel-group 1 mode active

RHEV Uplink Ports – Access Mode

Configure your port channel similar to what is shown below. In this configuration you are configuring an access port, so no VLAN tagging is required on the RHEV side.

interface Port-channel2
description Uplink to RHEV02
switchport access vlan 101
switchport mode access
spanning-tree portfast disable
spanning-tree bpduguard disable
spanning-tree guard root
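On the RHEV host side of an access-mode uplink, the bond carries untagged traffic, so no VLAN sub-interface is needed; the bridge sits directly on the bond. A minimal sketch, again assuming hypothetical names eth0/eth1 for the slaves and rhevm for the bridge:

```
# /etc/sysconfig/network-scripts/ifcfg-bond0 (hypothetical example)
DEVICE=bond0
BONDING_OPTS="mode=4 miimon=100"
ONBOOT=yes
BOOTPROTO=none
BRIDGE=rhevm
```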

Below is the configuration for the first physical port in the port channel.

interface GigabitEthernet0/5
description Uplink to RHEV02 - ETH0
switchport access vlan 101
switchport mode access
channel-protocol lacp
channel-group 2 mode active

Below is the configuration for the second physical port in the port channel.

interface GigabitEthernet0/6
description Uplink to RHEV02 - ETH1
switchport access vlan 101
switchport mode access
channel-protocol lacp
channel-group 2 mode active
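Once both sides are up, you can verify that the channel has bundled correctly. These are standard IOS show commands, plus the bonding status file on the Linux side:

```
switch# show etherchannel summary
switch# show lacp neighbor
switch# show spanning-tree interface port-channel 2

[root@host ~]# cat /proc/net/bonding/bond0
```

In the etherchannel summary you want to see the member ports flagged as bundled (P) under the port channel; on the host, the bonding file should report 802.3ad mode and both slaves up.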

Additional Information

The port configuration that you need to use for RHEV (or KVM, or Xen) is going to be a bit different than what your network administrator may be used to configuring for ESX. There are actually three settings that your network admin might initially be opposed to. A bit about those below.

> spanning-tree portfast disable

Portfast is normally enabled when connecting to an end device such as a server, and disabled when connecting to another layer 2 device such as a switch. With RHEV you are creating a software switch inside the server, so portfast needs to be disabled.

> spanning-tree bpduguard disable

A Bridge Protocol Data Unit (BPDU) is a data message sent between switches. Since we are creating software switches in RHEV, we need to allow BPDUs, so we disable BPDU guard, which would otherwise down the port if a BPDU is received.

> spanning-tree guard root

Root Guard allows the port to participate in STP. Unlike BPDU Guard, which disables a port upon receipt of any BPDU, Root Guard allows BPDUs through the port so the spanning tree topology can be maintained. However, if the port receives a superior BPDU, which indicates that another switch is trying to become the root bridge, Root Guard will err-disable the port.

Addendum

Below is the configuration that I have in place on a Cisco 2950.

Below is the configuration on the Port-Channel itself.

interface Port-channel2
description LACP to RHEV
switchport trunk allowed vlan 101-104,192
switchport mode trunk
flowcontrol send off
spanning-tree portfast disable
spanning-tree bpduguard disable
spanning-tree guard root

Below is the configuration on the first physical port in the bond

interface FastEthernet0/20
description port-chan 2 interface
switchport trunk allowed vlan 101-104,192
switchport mode trunk
channel-group 2 mode active
channel-protocol lacp

Below is the configuration on the second physical port in the bond

interface FastEthernet0/21
description lacp port channel 2
switchport trunk allowed vlan 101-104,192
switchport mode trunk
channel-group 2 mode active
channel-protocol lacp

RHEV: Error while executing action: Cannot add Storage Connection. Storage connection already exists.


Wow, this one was a real pain to fix, as there is not much official information on how to resolve this error, just a couple dozen pages from a mailing list that are not exactly easy to follow. From what I can tell, the developers do not really want you logging into the database. But in this case we must.

First, a little background. I am working on setting up Red Hat Enterprise Virtualization (RHEV) using a couple of spare machines. One machine, a desktop, is running RHEL 6 with RHEV-M installed via yum. The other machine is a server that I built using the RHEV-H ISO.

Once I got both machines up and running I figured I would take a poke at the WebUI and see if I could figure out my way around. So I clicked and clicked, removed and added, rebooted and restarted, and I found myself in a situation where I was trying to re-add an existing NFS share to use as a Storage Domain.

Well things started to go south pretty fast when I started to get this error in the WebUI.

“Error while executing action: Cannot add Storage Connection. Storage connection already exists.”

What is occurring here is that even though the UI no longer shows the Storage Domain as active, it still exists in the PostgreSQL database, so you must log into the database to remove it. Below are the steps.

First, SSH into the server running your database. In my lab, this is the RHEV-M host. Then switch users and connect to the database.

[root@pandora ~]# su - postgres
-bash-4.1$ psql engine

Now let's look at our existing storage connections. Note how we are querying both the ID and the connection string.

engine=# select id, connection from storage_server_connections;
id | connection
--------------------------------------+-------------------------------------------------------------
7e7284e2-a4c5-4ff5-978e-75cf08486f0c | pandora.lab.localdomain:/var/lib/exports/iso-20150905011112
6dab7f89-629f-4ecf-95cf-b643e243a96d | pandora.lab.localdomain:/var/lib/exports/nfs
(2 rows)

Both of these need to be removed, which we do by using the ID. Note the single quotes.

engine=# DELETE from storage_server_connections where id='7e7284e2-a4c5-4ff5-978e-75cf08486f0c';
engine=# DELETE from storage_server_connections where id='6dab7f89-629f-4ecf-95cf-b643e243a96d';

We should now be able to re-add our NFS shares.
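Before heading back to the WebUI, you can confirm the deletes took effect by re-running the earlier query. Assuming those two connections were the only rows in the table, it should now come back empty:

```
engine=# select id, connection from storage_server_connections;
 id | connection
----+------------
(0 rows)
```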

Note that if you need to restart ovirt-engine, you will also need to restart httpd. I am running RHEL 6.6.

[root@pandora ~]# service ovirt-engine start
Starting oVirt Engine: [ OK ]
[root@pandora ~]# service httpd restart
Stopping httpd: [ OK ]
Starting httpd: [ OK ]