XenServer: How to Build and Configure a Dedicated NFS Storage Bond

First of all, let me start this off by saying that there is a lot of information out there on how to set up a dedicated storage interface on XenServer. However, I was unable to find anything specifically related to bonding two unmanaged interfaces and using them as a dedicated uplink, which seems rather silly to me, as why would you not want a highly redundant network connection to your NFS storage? I digress.

Anyway, the first thing you need to do is ssh into one of your XenServer hosts. In my environment I am building out a three-node cluster, and I need to make sure that I am working specifically with the first host in the cluster (the pool master). So….
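Incidentally, if you are not sure which host is the pool master, you can check from any host in the pool; the uuid it returns should match the uuid of your first host, which we grab below.

# xe pool-list params=master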

Once you are on the right host, change the network backend of your XenServer from "openvswitch" to "Linux bridge". You accomplish this with the following command.

# xe-switch-network-backend bridge

Now you will need to reboot. Note that you can check your network-backend mode at any time with the following command.

# cat /etc/xensource/network.conf
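That file just holds the backend name, so after the switch and reboot it should simply read:

bridge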

Next, get the uuid of the local XenServer host, using the hostname to do this.

  # xe host-list name-label=xen01

The command above will return the uuid of the server.

uuid ( RO): 4a9971f7-1e59-4e02-b849-04d206ee7b2b
name-label ( RW): xen01
name-description ( RW): Default install of XenServer

Then you need to get a list of PIFs on the host that you are working with (making sure to exclude any other host's interfaces); the command below will output this list. We need to grab the uuids of eth2 and eth3, since those are the interfaces we are going to use to build our bond. Note that we are running this command so that it spits out the MAC addresses as well… make sure you take note of these, as you will need them later.

# xe pif-list host-uuid=4a9971f7-1e59-4e02-b849-04d206ee7b2b params=uuid,device,MAC,host-uuid
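The output will look something like this for each interface (the MAC shown here is illustrative; yours will obviously differ):

uuid ( RO)         : 97afe085-c679-3aa0-d09b-3c530ee3ac60
       device ( RO): eth2
          MAC ( RO): 00:1b:21:aa:bb:02
    host-uuid ( RO): 4a9971f7-1e59-4e02-b849-04d206ee7b2b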

Next we will tell XenServer to "forget", or un-manage, eth2; then we will do the same for eth3. We will use the uuids of these interfaces to identify them to XenServer.

Here is the example for eth2. Rinse and repeat for eth3.

# xe pif-forget uuid=97afe085-c679-3aa0-d09b-3c530ee3ac60

Then list all PIFs to ensure the unmanaged one is no longer in the list:

# xe pif-list host-uuid=4a9971f7-1e59-4e02-b849-04d206ee7b2b

If you have successfully removed them, it's time to start creating your bond.

First, define your bond in /etc/modprobe.conf. I am calling my bond bond51 and using mode=1 (active-backup), since interface failover is the whole point of this exercise.

alias bond51 bonding
options bond51 miimon=100 mode=1

Then edit /etc/sysconfig/network-scripts/ifcfg-eth2 and /etc/sysconfig/network-scripts/ifcfg-eth3. Here is ifcfg-eth2; ifcfg-eth3 follows right after it.

DEVICE=eth2
BOOTPROTO=none
HWADDR=<MAC ADDRESS OF YOUR INTERFACE>
ONBOOT=yes
MASTER=bond51
SLAVE=yes

Then create /etc/sysconfig/network-scripts/ifcfg-bond51

DEVICE=bond51
IPADDR=<YOUR IP>
NETMASK=<YOUR NETMASK>
ONBOOT=yes
BOOTPROTO=static

Beep Boop. Run ifup bond51 to bring up the bond and its slave interfaces.

You can check the status of the bond via the command below.

cat /proc/net/bonding/bond51
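
With mode=1, a healthy bond reports something along these lines (trimmed; your driver version will differ):

Ethernet Channel Bonding Driver: v3.4.0

Bonding Mode: fault-tolerance (active-backup)
Currently Active Slave: eth2
MII Status: up
MII Polling Interval (ms): 100

Slave Interface: eth2
MII Status: up

Slave Interface: eth3
MII Status: up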

Please know that I have done little more than reboot the XenServer host to make sure that the configuration I built persists across reboots and fails over from one interface to the other. I have not tested performance yet in any way, shape, or form.
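
If you want to poke at the failover yourself, a crude test (assuming eth2 is currently the active slave) is to drop one slave's link while pinging your NFS filer and watch the bond switch over:

# ping <YOUR NFS SERVER IP> &
# ifconfig eth2 down
# grep "Currently Active Slave" /proc/net/bonding/bond51
Currently Active Slave: eth3
# ifconfig eth2 up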

How to Use Hpacucli in ESXi

Wow, just stumbled into this one today as I was dreading having to reboot my ESXi 4.1 boxes just to see a few new local disks.

Apparently you can install hpacucli in ESXi very easily, and what's even better is that you can install it directly on a datastore and use that install from any host that has access to the datastore. No need to reinstall on each and every host in your cluster. No need to reboot to configure your disks.

Anyway, here is the link to the how-to; it's obviously written by a Windows guy, but it works easily enough.

http://v-front.blogspot.com/2012/03/how-to-run-hp-online-acu-cli-for-linux.html
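
Once it's unpacked onto a shared datastore, running it looks something like this (the datastore and directory names here are just examples; use whatever the how-to has you create):

# cd /vmfs/volumes/datastore1/hpacucli
# ./hpacucli ctrl all show config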


Update: Apparently HP now provides an official version of hpacucli for ESXi 5 as part of the HP ESXi Utilities Offline Bundle for VMware ESXi 5.0.

Also note that my HP servers with the P400 card did not detect the new logical drives, even after a rescan; apparently a reboot is required there. However, my HP boxes with the P410 card did detect the new logical drives after a rescan, and no reboot was required.
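
For reference, the rescan I am talking about is just the standard storage adapter rescan, e.g. from the ESXi shell (adapter name will vary):

# esxcfg-rescan vmhba1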


Patching ESXi 4.1 via the Linux RCLI

Great Balls of Frustration… could the documentation be any more confusing and convoluted regarding the process of patching an ESX server via the Linux remote CLI?

No, I am not running Windows; no, I am not on the local ESX console; no, I am not running the VMware Management Appliance; and most of all, no, I do not want outdated instructions from 2008. I just want to download some patches to my local Linux desktop and patch my ESX servers. Specifically, I want to install the latest and greatest QLogic drivers to troubleshoot a SAN connectivity issue that I will probably blog about at a later date.

Anyway, here is how I did what I needed to do without resorting to using a Windows box.

First, download the driver ISOs to your local Linux desktop and mount them up locally:

mount -o loop vmware-esx-drivers-scsi-fnic_400.1.4.0.201-1vmw.2.17.00000.491446.iso /mnt

Second, locate the offline-bundle directory on the mounted ISO. I moved mine off to another directory that I created specifically for the QLogic driver; this way I don't have to unmount and remount the ISO when I patch my other boxes.
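
Something along these lines (the target directory is just my choice; put it wherever you like):

mkdir ~/qlogic-driver
cp /mnt/offline-bundle/*.zip ~/qlogic-driver/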

Next, put the host in maintenance mode.

vicfg-hostops --username root --server esx04 -o enter

Then navigate to the directory containing the offline bundle and run the following command, replacing the bundle name that I used with the one in your directory.

vihostupdate --server esx04 --install --bundle 841.k1.16.2-1vmw-offline_bundle-340223.zip

Wait for the following message

Please wait patch installation is in progress …
The update completed successfully, but the system needs to be rebooted for the changes to be effective.

Then reboot

vicfg-hostops --username root --server esx04 -o reboot
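
Once the host comes back up, you can confirm the bulletin actually installed, and then take the host out of maintenance mode:

vihostupdate --server esx04 --query
vicfg-hostops --username root --server esx04 -o exit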