So I was planning on blogging about this process simply because I keep forgetting it. Since I am in the middle of building out my lab and have a ton of old Cisco hardware with ancient images, I have found myself going through the firmware update process quite a bit as I tinker with different IOS images.
The first thing that you are going to need is a TFTP server, which I am running on my Fedora 18 desktop. It's a very easy setup, and it is documented simply at the link below. Note that the article below is specifically about how to set up a TFTP server on RHEL; however, the process is pretty much the same.
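For reference, a minimal sketch of that setup on Fedora, assuming the stock tftp-server package and its systemd socket unit (the default TFTP root is /var/lib/tftpboot, which is where the IOS image goes; the image filename here is just a placeholder):
# yum install tftp-server
# systemctl enable tftp.socket
# systemctl start tftp.socket
# cp <your-ios-image>.bin /var/lib/tftpboot/
Depending on your firewall configuration you may also need to open UDP port 69.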
Now, if you are looking to set up a TFTP server on Windows, well, I cannot help you there. If I recall correctly, you just download and install a server app, and away you go.
Below is another link that I have found that also does a fine job of explaining the Cisco side of the process simply, without getting too technical and long-winded.
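In broad strokes, the switch side looks something like the sketch below. The TFTP server address and image filename are placeholders, the exact boot system syntax varies a little by platform, and it is worth checking free space with show flash: before you start.
Switch# copy tftp://192.168.1.50/c2960-lanbasek9-mz.bin flash:
Switch# configure terminal
Switch(config)# boot system flash:c2960-lanbasek9-mz.bin
Switch(config)# end
Switch# write memory
Switch# reload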
So I have recently made the switch to Kubuntu 12.10 on my new desktop. Basically, I am building a monster workstation and I ran into issues running Fedora 18. Since I wanted Steam support too, I decided not to try moving to an earlier Fedora version, but rather to give Ubuntu another try (it's been years since I have run Ubuntu).
Anyway, I am building this hoss of a workstation that has 12 DIMM slots, which I fully populated with 4GB DIMMs. However, when I booted my new monster, I found that I was 8GB short in the memory department.
So, how do I figure out which two DIMMs are bad? I certainly don't want to have to pull all of them out, boot the machine, and test each DIMM one by one.
So this is where lshw, which I blogged about back in 2010 here, comes to the rescue.
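As a rough sketch of how to narrow it down, and assuming the BIOS simply fails to detect the bad modules so their slots report as empty, listing the memory class with lshw shows every bank and what is (or is not) sitting in it:
# lshw -short -class memory
Comparing the banks that report a module against the physical slot labels on the board should leave you with the two suspect slots.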
After a few hours of poking around a newly deployed UCS cluster trying to get some basic profiles created and assigned, I realized that I had actually no idea how the KVM is supposed to work inside the UCS cluster. Which is funny, as this was a subject that we touched on during my DCUDC class. Apparently we did not touch on it enough.
Anyway, before I get ahead of myself, let's review the gear in this cluster.
Now, in my network all lights-out management IPs (iLOs, IPMI, etc.) are on one particular VLAN, which for the purpose of this post we will call VLAN 100. Non-application-related infrastructure equipment (servers, virtual hosts) is on another VLAN, which we will call VLAN 200. So when the Fabric Interconnects were deployed, I gave them each an IP address on VLAN 200, and once UCS Manager was up and running, I created a KVM IP address pool of unused IP addresses on VLAN 100. Well, guess what: this is wrong.
Routing for the KVM addresses is done through the management interfaces on the Fabric Interconnects, so unless you are using VLAN tagging, your KVM pool must be on the same VLAN as the IP addresses assigned to your Fabric Interconnects.
But wait, why is this?
I thought that I could even assign private 192.168.x.x IP addresses to the KVMs, as they were only supposed to be managed via UCS Manager, but this is also incorrect.
Navigate to one of your working KVM IP addresses in a web browser and you can access the KVM of that blade outside of UCS Manager. Nice, and that is actually how I expected this to work.
Note that I find it rather dumb to have my KVM management IPs and my Fabric Interconnects on the same VLAN as a rule; however, since this is how it's supposed to work, I am going to have to let that one go.
Now, the fact that you can navigate to a specific KVM IP address via a web browser also makes the idea of using a pool of IP addresses silly. Would you not want to hard-code the KVM IP address in the service profile so that you always know which server's console you are logging into? Dunno, I am still working on figuring that one out.
When you right-click on a VM in the vSphere client and click Edit Settings, you will see your hard disks enumerated as follows:
Hard disk 1 - Virtual Disk
Hard disk 2 - Virtual Disk
However, CentOS/RHEL enumerates your disks using a different nomenclature. See below:
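The listing that follows is standard fdisk output; to reproduce it on the guest, run fdisk as root against all disks:
# fdisk -l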
Disk /dev/sda: 21.4 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1        1044     8385898+  83  Linux
/dev/sda2            1045        2349    10482412+  83  Linux
/dev/sda3            2350        2610     2096482+  82  Linux swap

Disk /dev/sdb: 268.4 GB, 268435456000 bytes
255 heads, 63 sectors/track, 32635 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1       32635   262140606   83  Linux
[[truncated]]
While you can assume that your first disk listed in ESX is the first disk listed by the OS, this is not always the case. You can attempt to compare disk sizes to further determine which disk is which; however, the sizes do not always match up.
The best way to identify disks is via the Virtual Device Node, which is visible in the virtual disk properties in ESX and via the lsscsi command in CentOS/RHEL.
More than likely lsscsi is not installed by default, so you are going to need to either install it via yum (an example follows) or download an rpm and install it manually. Once installed, run it as seen below.
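Assuming your configured repos carry the package, the yum route is a one-liner:
# yum install lsscsi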
>lsscsi
[0:0:0:0]    disk    VMware   Virtual disk    1.0   /dev/sda
[0:0:1:0]    disk    VMware   Virtual disk    1.0   /dev/sdb
[0:0:2:0]    disk    VMware   Virtual disk    1.0   /dev/sdc
[0:0:3:0]    disk    VMware   Virtual disk    1.0   /dev/sdd
The first two numbers identify the SCSI host and channel (i.e. the controller), and the third number is the SCSI target, which matches the second number of the Virtual Device Node shown in the virtual disk properties in the vSphere client.
For example, in VMware, Hard disk 4 has a Virtual Device Node of "SCSI (0:3) Hard disk 4", and by running lsscsi I know for sure that this disk is /dev/sdd. Now I can remove the disk from ESX with confidence, without having to worry that I removed the wrong disk.
Dell™ OpenManage™ Server Administrator is Dell's version of the HP System Management Homepage, in that it allows you to log into a web interface to view system configuration, health, and performance statistics. It's available for free and runs on Windows and Linux.
Dell makes the install easy on Linux via a public rpm repo, and the process boils down to a few simple steps.
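A rough sketch of those steps, assuming Dell's linux.dell.com hardware repo bootstrap script and the srvadmin-all metapackage (verify the URL against Dell's current documentation):
# wget -q -O - http://linux.dell.com/repo/hardware/latest/bootstrap.cgi | bash
# yum install srvadmin-all
# srvadmin-services.sh start
Once the services are up, the web interface should be listening on https://yourserver:1311.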
Hpacucli is the command-line Array Configuration Utility for Smart Array controllers and RAID array controllers. It can be installed in one of two ways: either as a standalone rpm, or as part of the ProLiant Support Pack. The utility is useful as it allows you to create and delete RAID configs without having to reboot.
First, run the hpacucli command to start the utility. Once the utility is started, run the command below to show your arrays and disks.
=> ctrl all show config
In my case I had added 3 additional drives to a DL385 G1 with the intention of creating a simple RAID 1 mirror with a hot spare, with the mirror becoming the second array, array B.
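As a rough sketch of the commands involved (the controller slot and the physical drive IDs here are placeholders; use the values reported by ctrl all show config above), creating the mirror and then tagging the third drive as a spare looks something like:
=> ctrl slot=0 create type=ld drives=2:3,2:4 raid=1
=> ctrl slot=0 array B add spares=2:5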