HomeLab: Upgrading Cisco IOS Via tftp on RHEL

So I was planning on blogging about this process simply because I keep forgetting it. Since I am in the middle of building out my lab and have a ton of old Cisco hardware with ancient images, I have found myself updating firmware quite a bit as I tinker with different IOS images.

The first thing you are going to need is a tftp server, which I am running on my Fedora 18 desktop. It's a very easy setup, and it is documented simply at the link below. Note that the article is specifically about how to set up a tftp server on RHEL; however, the process is pretty much the same on Fedora (the short version is also sketched just below the link).

How to Install a tftp Server on RHEL
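
For the impatient, the short version on RHEL looks roughly like this. This is only a sketch from memory; package and service names can differ slightly between releases, and you will also need to allow UDP port 69 through your firewall:

# yum install tftp-server xinetd
# vi /etc/xinetd.d/tftp        (change "disable = yes" to "disable = no")
# service xinetd restart
# chkconfig xinetd on

Then drop your IOS image into /var/lib/tftpboot, which is the default tftp root on RHEL.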

Now, if you are looking to set up a tftp server on Windows, I cannot help you there. If I recall correctly, you just download and install a server app and away you go.

Below is another link I have found that does a fine job of explaining the Cisco side of the process simply, without getting too technical and long-winded. I have also sketched the rough command sequence just below the link.

How to Upgrade Cisco IOS Images
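
For reference, the sequence on the switch itself looks roughly like the sketch below. The image filename and tftp server IP are placeholders, and the exact prompts vary by platform and IOS version, so treat this as a reminder rather than gospel:

Switch# copy tftp: flash:
Address or name of remote host []? 192.168.1.10
Source filename []? c3560-ipservicesk9-mz.122-55.SE1.bin
Destination filename [c3560-ipservicesk9-mz.122-55.SE1.bin]?

Switch# verify /md5 flash:c3560-ipservicesk9-mz.122-55.SE1.bin
Switch# configure terminal
Switch(config)# boot system flash:c3560-ipservicesk9-mz.122-55.SE1.bin
Switch(config)# end
Switch# write memory
Switch# reload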

 


Ubuntu Linux - Locate Failed DIMMs without Pain

So I have recently made the switch to Kubuntu 12.10 on my new desktop. Basically, I am building a monster workstation and I ran into issues running Fedora 18. Since I wanted Steam support too, I decided not to try to move to an earlier Fedora version, but rather to give Ubuntu another try (it's been years since I have run Ubuntu).

Anyway, I am building this hoss of a workstation that has 12 DIMM slots, which I fully populated with 4GB DIMMs. However, when I booted my new monster, I found that I was 8GB short in the memory department.

So, how do I figure out which two DIMMs are bad? I certainly don't want to have to pull all of them out, boot the machine, and test each DIMM one by one.

This is where lshw, which I blogged about back in 2010 here, comes to the rescue.

Anyway, here is how you find the empty slots:

# lshw -short -C memory

which outputs what you see below.

/0/14                           memory      System Memory
/0/14/0                         memory      4GiB DIMM DDR3 1066 MHz (0.9 ns)
/0/14/1                         memory      4GiB DIMM DDR3 1066 MHz (0.9 ns)
/0/14/2                         memory      4GiB DIMM DDR3 1066 MHz (0.9 ns)
/0/14/3                         memory      4GiB DIMM DDR3 1066 MHz (0.9 ns)
/0/14/4                         memory      4GiB DIMM DDR3 1066 MHz (0.9 ns)
/0/14/5                         memory      4GiB DIMM DDR3 1066 MHz (0.9 ns)
/0/22                           memory      System Memory
/0/22/0                         memory      DIMM DDR3 [empty]
/0/22/1                         memory      DIMM DDR3 [empty]
/0/22/2                         memory      4GiB DIMM DDR3 1066 MHz (0.9 ns)
/0/22/3                         memory      4GiB DIMM DDR3 1066 MHz (0.9 ns)
/0/22/4                         memory      4GiB DIMM DDR3 1066 MHz (0.9 ns)
/0/22/5                         memory      4GiB DIMM DDR3 1066 MHz (0.9 ns)

Basically, this is telling me that the first two DIMM slots on the second CPU are showing [empty], so those are the two DIMMs that are dead and need to be replaced.

Now all I have to do is power down and pull and replace two DIMMs, which will save my fingers from much discomfort.
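
As a sanity check before pulling hardware, dmidecode will confirm the same thing (assuming it is installed); slots with failed or unseated modules show up with a size of "No Module Installed":

# dmidecode --type memory | grep -E 'Locator|Size'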

How Not to Assign KVM IP Addresses Via Cisco UCS Manager

After a few hours of poking around a newly deployed UCS cluster trying to get some basic profiles created and assigned, I realized that I had no idea how the KVM is actually supposed to work inside a UCS cluster. Which is funny, as this was a subject we touched on during my DCUDC class. Apparently we did not touch on it enough.

Anyway, before I get ahead of myself, let's review the gear in this cluster.

2 5108 chassis
7 B200 M2 blades with 2104 IOMs
2 6248 Fabric Interconnects

Now, in my network all lights-out management IPs (iLOs, IPMI, etc.) are on one particular VLAN, which for the purposes of this post we will call VLAN 100. Non-application infrastructure equipment (servers, virtual hosts) is on another VLAN, which we will call VLAN 200. So when the Fabric Interconnects were deployed, I gave them each an IP address on VLAN 200, and once UCS Manager was up and running, I created a KVM IP address pool of unused addresses on VLAN 100. Well, guess what: this is wrong.

Routing for the KVM addresses is done through the management interfaces on the Fabric Interconnects, so unless you are using VLAN tagging, your KVM pool must be on the same VLAN as the IP addresses assigned to your Fabric Interconnects.

But wait, why is this?

I thought that I could even assign private 192.168.x.x IP addresses to the KVMs, as they were only supposed to be managed via UCS Manager, but this is also incorrect.

Navigate to one of your working KVM IP addresses in a web browser and you can access the blade's KVM outside of UCS Manager. Nice, which is actually how I expected this to work in the first place.


Note that I find it rather dumb to have my KVM management IPs and Fabric Interconnects on the same VLAN as a rule; however, since this is how it's supposed to work, I am going to have to let that one go.

Now, the fact that you can navigate to a specific KVM IP address via a web browser also makes the idea of using a pool of IP addresses silly. Would you not want to hard-code the KVM IP address in the service profile so that you always know which server's console you are logging into? Dunno, I am still working on figuring that one out.

 

Removing a Virtual Disk from CentOS/RHEL via vSphere Client

When you right-click on a VM in the vSphere client and click Edit Settings, you will see your hard disks enumerated as follows:

Hard disk 1 - Virtual Disk

Hard disk 2 - Virtual Disk

However, CentOS/RHEL enumerates your disks using a different nomenclature, as you can see in the fdisk output below:

Disk /dev/sda: 21.4 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1        1044     8385898+  83  Linux
/dev/sda2            1045        2349    10482412+  83  Linux
/dev/sda3            2350        2610     2096482+  82  Linux swap

Disk /dev/sdb: 268.4 GB, 268435456000 bytes
255 heads, 63 sectors/track, 32635 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1       32635   262140606   83  Linux

[[truncated]]

While you might assume that the first disk listed in ESX is the first disk listed by the OS, this is not always the case. You can attempt to compare disk sizes to determine which disk is which; however, the sizes do not always match up.

The best way to identify disks is via the Virtual Device Node, which is visible via the virtual disk properties in ESX and via the command lsscsi in CentOS/RHEL.

More than likely lsscsi is not installed by default, so you are either going to need to install it via yum or download an RPM and install it manually. Once installed, run it as seen below.
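
Installing it is usually a one-liner, assuming the package is available in your configured repos (on older releases you may have to hunt down the RPM by hand):

# yum install lsscsi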

>lsscsi
[0:0:0:0]    disk    VMware   Virtual disk     1.0   /dev/sda
[0:0:1:0]    disk    VMware   Virtual disk     1.0   /dev/sdb
[0:0:2:0]    disk    VMware   Virtual disk     1.0   /dev/sdc
[0:0:3:0]    disk    VMware   Virtual disk     1.0   /dev/sdd

Each line is in [host:channel:target:lun] format. The first two numbers identify the SCSI controller, and the third number is the SCSI target ID, which matches the second number of the Virtual Device Node shown in the virtual disk properties in the vSphere client.

For example, in VMware, Hard disk 4 has a Virtual Device Node of "SCSI (0:3) Hard disk 4", and by running lsscsi I know for sure that this disk is /dev/sdd. Now I can remove the disk from ESX with confidence and not have to worry that I removed the wrong disk.
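
As an extra sanity check, newer builds of lsscsi can also print the size of each device (the -s flag, assuming your version supports it), which you can compare against the provisioned size shown in the vSphere client:

# lsscsi -s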

Installing Dell OpenManage Server Administrator on Linux

Dell™ OpenManage™ Server Administrator is Dell's version of the HP System Management Homepage, as it allows you to log into a web interface to view system configuration, health, and performance statistics. It's available for free and runs on Windows and Linux.

Dell makes the install easy on Linux via a public RPM repo. The steps below outline the simple install process.

Install OpenManage Server Administrator

Step 1: wget -q -O - http://linux.dell.com/repo/hardware/latest/bootstrap.cgi | bash

Step 2: yum -y install srvadmin-all

Step 3: Start the Systems Management Data Engine - /etc/init.d/dataeng start

Step 4: Start the web interface - /etc/init.d/dsm_om_connsvc start

Your system homepage is now available via https://hostname:1311, where hostname is the hostname of your server.
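
The install also pulls in the omreport/omconfig command-line tools, which are handy when you do not feel like clicking through the web interface. A couple of examples are below; exactly which commands are available depends on the srvadmin packages your hardware pulls in:

# omreport system summary
# omreport chassis temps
# omreport storage pdisk controller=0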

Install Firmware/BIOS Management Tools

Step 1: yum install dell_ft_install
Step 2: yum install $(bootstrap_firmware)

Now run either inventory_firmware or inventory_firmware_gui to view your firmware versions, and perform firmware upgrades.

 

 

 

Using hpacucli to Manage Disks on HP Servers Running Linux

Hpacucli is the command-line Array Configuration Utility for HP Smart Array and RAID Array Controllers. It can be installed in one of two ways: either as a standalone RPM or as part of the ProLiant Support Pack. The utility is useful as it allows you to create and delete RAID configs without having to reboot.

First, run the hpacucli command to start the utility. Once it is started, run the command below to show your arrays and disks:

=> ctrl all show config

In my case I had added 3 additional drives to a DL385 G1 with the intention of creating a simple RAID 1 mirror with a hot spare. The command below creates the second array, array B, from two of those drives:

=> ctrl slot=0 create type=ld drives=2:3,2:4 raid=1

Now I can view my new array using ctrl all show config.

  array B (Parallel SCSI, Unused Space: 0 MB)

      logicaldrive 2 (67.8 GB, RAID 1, OK)
      physicaldrive 2:3   (port 2:id 3 , Parallel SCSI, 72.8 GB, OK)
      physicaldrive 2:4   (port 2:id 4 , Parallel SCSI, 72.8 GB, OK)

Now I add the remaining drive as a hot spare to the newly created array B:

=> ctrl slot=0 array B add spares=2:5
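
To confirm that the spare actually attached to the new array, you can drill back into the config and drive views (output formatting varies a bit by controller and firmware):

=> ctrl slot=0 show config detail
=> ctrl slot=0 pd all show status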

The command below can be used to show the overall status of your controller, which in my case is not good.

=> ctrl all show status

Smart Array 6i in Slot 0 (Embedded)
   Controller Status: OK
   Cache Status: Temporarily Disabled
   Battery/Capacitor Status: Failed (Replace Batteries/Capacitors)