Cisco UCS Failover Via 6248 CLI

Need to fail over Cisco UCS Manager from one 6248 Fabric Interconnect to another? Oh, and did I mention that you are in a hurry, and that otherwise you would have to sift through pages and pages of pdfs to get this information? I found that out the hard way. Anyway, out of the kindness of my heart I have documented the process below.

First log into one of your 6248s and figure out which one is the primary by running the command below.

ucs01-A# show cluster state
Cluster Id: 0x4b05e6042b6111e1-0x8c9e547fee4bbf24

A: UP, SUBORDINATE
B: UP, PRIMARY

Note that ucs01-B is our primary, so log into ucs01-B and issue the command below.

ucs01-B# connect local-mgmt

Then run the command below to make ucs01-A the primary.

ucs01-B(local-mgmt)# cluster lead a
Cluster Id: 0x4b05e6042b6111e1-0x8c9e547fee4bbf24

Note that you must issue the failover command on the node that is currently the primary; otherwise this happens.

ucs01-A(local-mgmt)# cluster lead a
Cluster Id: 0x4b05e6042b6111e1-0x8c9e547fee4bbf24
request failed: local node is subordinate

During the failover process you will see the output below when checking the cluster state.

ucs01-B(local-mgmt)# show cluster state

Cluster Id: 0x4b05e6042b6111e1-0x8c9e547fee4bbf24

B: UP, PRIMARY, (Management services: SWITCHOVER IN PROGRESS)
A: UP, SUBORDINATE, (Management services: SWITCHOVER IN PROGRESS)

HA NOT READY
Management services: switchover in progress on local Fabric Interconnect
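
Once the switchover completes, checking the cluster state again should show the roles reversed and HA ready, something along the lines of the output below (your Cluster Id will obviously differ).

ucs01-A# show cluster state
Cluster Id: 0x4b05e6042b6111e1-0x8c9e547fee4bbf24

A: UP, PRIMARY
B: UP, SUBORDINATE

HA READY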

How Not to Assign KVM IP Addresses Via Cisco UCS Manager

After a few hours of poking around a newly deployed UCS cluster trying to get some basic profiles created and assigned, I realized that I had no idea how the KVM is actually supposed to work inside a UCS cluster. Which is funny, as this was a subject that we touched on during my DCUDC class. Apparently we did not touch on it enough.

Anyway, before I get ahead of myself, let's review the gear in this cluster.

2 5108 chassis
7 B200 M2 blades with 2104 IOMs
2 6248 Fabric Interconnects

Now, in my network all lights-out management ips (ilos, ipmi, etc.) are on one particular vlan, which for the purpose of this post we will call VLAN 100. Non-application-related infrastructure equipment (servers, virtual hosts) is on another vlan, which we will call VLAN 200. So when the Fabric Interconnects were deployed, I gave them each an ip address on VLAN 200, and once UCS Manager was up and running, I created a KVM ip address pool of unused ip addresses on VLAN 100. Well, guess what: this is wrong.
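
For what it is worth, that pool ends up as a block of addresses in the default ext-mgmt pool. A rough sketch of what I did via the UCS Manager CLI is below; the addresses are made up, and as I understand it create block takes the first address, last address, default gateway and subnet mask, so double check the syntax against the CLI configuration guide for your release.

ucs01-A# scope org /
ucs01-A /org # scope ip-pool ext-mgmt
ucs01-A /org/ip-pool # create block 10.100.0.50 10.100.0.80 10.100.0.1 255.255.255.0
ucs01-A /org/ip-pool/block* # commit-buffer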

Routing for the KVM addresses is done through the management interfaces on the Fabric Interconnects, so unless you are using vlan tagging, your KVM pool must be on the same vlan as the ip addresses assigned to your Fabric Interconnects.
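
So, assuming VLAN 200 were something like 10.200.0.0/24, the working arrangement ends up looking roughly like this (addresses made up purely for illustration):

Fabric Interconnect A (mgmt0):  10.200.0.11  (VLAN 200)
Fabric Interconnect B (mgmt0):  10.200.0.12  (VLAN 200)
KVM ext-mgmt pool block:        10.200.0.50 - 10.200.0.80, gateway 10.200.0.1  (VLAN 200)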

But wait, why is this?

I thought that I could even assign private 192.168.x.x ip addresses to the KVMs, as they were only supposed to be managed via UCS Manager, but this is also incorrect.

Navigate to one of your working KVM ip addresses in a web browser and you can access the KVM of the blade outside of UCS Manager. Nice. That is actually how I expected this to work.

Note that, as a rule, I find it rather dumb to have my KVM management ips and Fabric Interconnects on the same vlan; however, since this is how it's supposed to work, I am going to have to let that one go.

Now, the fact that you can navigate to a specific KVM IP address via a web browser also makes the idea of using a pool of ip addresses silly. Would you not want to hard code the KVM ip address in the service profile so that you always know which server's console you are logging into? Dunno, I am still working on figuring that one out.