After a few hours of poking around a newly deployed UCS cluster trying to get some basic profiles created and assigned, I realized that I actually had no idea how the KVM is supposed to work inside a UCS cluster. Which is funny, as this was a subject we touched on during my DCUDC class. Apparently we did not touch on it enough.
Anyway, before I get ahead of myself, let's review the gear in this cluster.
2 5108 chassis
7 B200 M2 blades with 2104 IOMs
2 6248 Fabric Interconnects
Now in my network, all lights-out management IPs (iLOs, IPMI, etc.) are on one particular VLAN, which for the purposes of this post we will call VLAN 100. Non-application infrastructure equipment (servers, virtual hosts) is on another VLAN, which we will call VLAN 200. So when the Fabric Interconnects were deployed, I gave them each an IP address on VLAN 200. And once UCS Manager was up and running, I created a KVM IP address pool of unused IP addresses on VLAN 100. Well, guess what: this is wrong.
Routing for the KVM addresses is done through the management interfaces on the Fabric Interconnects, so unless you are using VLAN tagging, your KVM pool must be on the same VLAN as the IP addresses assigned to your Fabric Interconnects.
But wait, why is this?
I thought that I could even assign private 192.168.x.x IP addresses to the KVMs, as they were only supposed to be reached via UCS Manager, but this is also incorrect.
Navigate to one of your working KVM IP addresses in a web browser and you can access that blade's KVM outside of UCS Manager. Nice, and actually how I expected this to work.
Note that, as a rule, I find it rather dumb to have my KVM management IPs and Fabric Interconnects on the same VLAN; however, since this is how it's supposed to work, I am going to have to let that one go.
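That same-subnet constraint is easy to sanity-check before you carve out a pool. A minimal sketch using Python's ipaddress module; all subnets and addresses below are hypothetical stand-ins for my VLAN 100/200 layout:

```python
import ipaddress

# Hypothetical addressing: FI mgmt interfaces live in VLAN 200's subnet,
# and this is a candidate KVM pool block to validate against it.
fi_mgmt_subnet = ipaddress.ip_network("10.0.200.0/24")
kvm_pool_start = ipaddress.ip_address("10.0.100.50")
kvm_pool_end = ipaddress.ip_address("10.0.100.99")

def pool_is_valid(subnet, start, end):
    """Without inband mgmt, a KVM pool only works if every address
    in the block sits inside the FI management subnet."""
    return start in subnet and end in subnet

print(pool_is_valid(fi_mgmt_subnet, kvm_pool_start, kvm_pool_end))  # prints False
```

Which is exactly the mistake I made: a VLAN 100 pool against VLAN 200 Fabric Interconnects fails the check.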
Now, the fact that you can navigate to a specific KVM IP address via a web browser also makes the idea of using a pool of IP addresses silly. Would you not want to hard-code the KVM IP address in the service profile so that you always know which server's console you are logging into? Dunno, I am still working on figuring that one out.
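To make the objection concrete, here is a toy model of pool-based assignment. The manager simply hands out the next free address, so the blade-to-IP mapping depends on the order things get associated, not on blade identity; names and addresses are made up for illustration:

```python
# Hypothetical free pool, as a list of unassigned addresses.
pool = ["10.0.200.50", "10.0.200.51", "10.0.200.52"]

def assign_kvm_ip(blade, free_pool):
    """Pop the next free address, the way a pool-based scheme would."""
    return blade, free_pool.pop(0)

# Blades associate in an arbitrary order, so blade-3 can land on .51.
assignments = dict(assign_kvm_ip(b, pool) for b in ["blade-1", "blade-3", "blade-2"])
print(assignments)
```

With static assignment in the service profile, by contrast, the console address follows the profile and is always predictable.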
12 thoughts on “How Not to Assign KVM IP Addresses Via Cisco UCS Manager”
UCS Manager now has the ability to assign an ip address to the Service Profile also – not just the physical blade.
See “IP address for CIMC (remote KVM) added to UCS Service Profile” at this link: http://www.mseanmcgee.com/2010/12/cisco’s-stocking-stuffer-for-ucs-customers-firmware-release-1-41/
Hey thanks – I was actually reading your blog the other day.
Read both this post and Sean’s mention of the Profile based KVM IP.
Does the Profile based KVM IP have the same limitation in that it must reside on the same VLAN as the Fabric Interconnects?
Also, if you are using the Profile based KVM IP, can you skip the hardware based one? Otherwise I would think you will run out of IPs on that VLAN twice as fast, no?
The KVM IP must reside on the same VLAN as management, as it uses the management connection for KVM traffic to the blades. It does not go through the dataplane ports. There is no routing, so the KVM IPs must be in the same subnet, since they use the mgmt0 physical connections.
On another note, I have seen both the blades and the SPs pulling IPs from the pool. This exhausts the pool twice as quickly as expected.
Hi Daniel & Chris,
Thanks for the notes. So it means that the management pool (esx-mgmt pool) IPs for KVM blade access should be in the same subnet as the Fabric Interconnects' management IPs.
So if I can access UCSM (using the FI cluster IP) from a test PC, can I access the assigned KVM IP from the same test PC? Is that right?
Secondly, if the management interface of the FI failed, will it affect access to the KVM from an outside test PC?
I still do not get how the traffic flows from the FI to the KVM through the management plane.
Awaiting your response.
Now that it's many years later, Cisco has introduced inband mgmt access, which will traverse your uplinks. The older way uses OOB mgmt, which traverses your management port. Utilizing the inband config will allow you to put the KVM traffic on a different VLAN, since it can go up to your core via the uplinks.
Hey thanks Jack, this is very good to know. I actually just set up a new cluster a few months ago and could have used this info. Thanks.
Oh @shiji, in regards to your second question.
“If the management interface of the FI failed, will it affect access to the KVM from an outside test PC?
I still do not get how the traffic flows from the FI to the KVM through the management plane.”
The old way of handling management traffic, the way that I originally described in this post years ago, would probably have killed KVM access to the Service Profiles if the management links to each fabric interconnect failed.
The newer process that Jack mentions probably sends this KVM traffic down the network uplinks instead. If those go down, you are in a world of hurt anyway, as you will not be able to reach your running profiles.
@Jack, correct me if I am wrong here. Thanks.
Nope. You are correct!
Thanks all for the valuable inputs.
And it is great to know about the new inband KVM communication.
So it means that now, as per the new design, I can have a dedicated VLAN/subnet just for KVM access in the upstream aggregation layer, configure the KVM address pool with this subnet, and point the gateway to the aggregation layer (which is reachable through the uplink ports).
So in this case, the Fabric Interconnects' management subnet (in the Management Network cloud) should be able to reach the KVM subnet (in the aggregation layer) in order to access the KVM from UCSM...
Is that right ?
It should, but I would check with your Cisco SE on how to ensure that you are setting this up correctly. As I stated a few comments ago, I just set up a new UCS cluster in a lab for testing, and it was not apparent to me that I could set up KVM traffic to route over the network uplinks.
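For whatever it's worth, the inband design shiji describes changes what the pool has to agree with: the KVM block now has to be consistent with its own gateway on the dedicated KVM subnet, rather than sitting inside the FI management subnet. A quick self-consistency sketch, again with entirely hypothetical subnets:

```python
import ipaddress

# Hypothetical inband layout: a dedicated KVM VLAN/subnet behind the
# aggregation layer, with its gateway on that subnet, separate from
# the FI OOB management subnet.
kvm_subnet = ipaddress.ip_network("10.0.150.0/24")
kvm_gateway = ipaddress.ip_address("10.0.150.1")
fi_mgmt_subnet = ipaddress.ip_network("10.0.200.0/24")

# The gateway must live inside the KVM subnet...
assert kvm_gateway in kvm_subnet
# ...and with inband mgmt the KVM subnet no longer needs to be the
# FI management subnet; here they are deliberately disjoint.
assert not kvm_subnet.overlaps(fi_mgmt_subnet)
print("inband KVM pool layout is self-consistent")
```

The actual UCSM-side configuration (inband profile, VLAN group on the uplinks) is something I would still verify with Cisco documentation or your SE, as I note above.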
Pffffttt long way from Silverpop and yet here I am. Cheers!