So recently I deployed 3 XenServer 6 hosts with local storage only. Our need was to virtualize approximately 6 very old CentOS QA servers running on ancient hardware.
Since I did not have any dedicated network/SAN storage at my disposal, I stuck six 146GB SAS drives in each server and carved them up into a 410GB RAID 10 array.
However, the XenServer install did not detect this secondary logical disk, so I had to figure out how to manually create a storage repository using this new drive.
The first thing I needed to do was see how the disk was identified by the OS; for this I used fdisk.
#fdisk -l
Disk /dev/sdb: 440.3 GB, 440345714688 bytes
255 heads, 63 sectors/track, 53535 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Using the output above I was easily able to determine that my secondary drive was /dev/sdb.
So the first thing I did was determine the UUID of the host that I was logged into.
Note that one of the things I found most odd about this process is that you do not actually create a partition on the disk before turning it into a storage repository.
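In case it helps, here is a rough sketch of the commands involved; the SR name-label is my own placeholder, so substitute whatever you like. Grab the host UUID first, then create an LVM SR directly on the raw device.
# xe host-list
# xe sr-create host-uuid=<host-uuid> type=lvm content-type=user shared=false name-label="Local storage 2" device-config:device=/dev/sdb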
So before we get started deleting a Storage Repository, we need to know a few key terms.
In XenServer, a Storage Repository (SR) is a storage target that contains virtual disks (VDIs) and ISOs.
A PBD (physical block device) is what they call the interface between a physical host and an attached SR; it is responsible for storing the device configuration that allows the host to interact with the storage target.
So now that we have gotten that out of the way, let's get started.
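The rough sequence, with placeholder UUIDs, goes like this: find the SR, list and unplug its PBDs, then forget the SR (or use xe sr-destroy instead if you also want to wipe the underlying storage).
# xe sr-list name-label="<name of the SR>"
# xe pbd-list sr-uuid=<sr-uuid>
# xe pbd-unplug uuid=<pbd-uuid>
# xe sr-forget uuid=<sr-uuid>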
Wow, this is a really overly complicated error for such a simple problem to resolve. Allow me to give you some background.
I am currently building my first production-ready (well, non-production really) XenServer cluster and ran into this issue when attempting to add my second host to the cluster. I hit Google and found out that this was actually just a DNS issue.
A quick check of /etc/resolv.conf on two of my nodes shows nothing but the following line.
; generated by /etc/sysconfig/network-scripts/ifup-post
Well great, on a standard Linux box I would have just added my name server and been halfway to the bar, but judging by the contents of resolv.conf I figured that I was supposed to add it another way.
Well after a bit of poking around in XenCenter I found this. Click on the hostname of the XenServer, then click on the "Networking" tab, from there click on "Configure…" below the "Management Interfaces" section as illustrated below. You will then be presented with a pop-up window where you can enter your nameservers.
Once you have configured DNS properly you can then add the host to the cluster.
Note that you can also do this from the command line; however, you basically have to reconfigure your entire management interface... IP, gateway, and everything that goes with it.
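If you do want to go the CLI route, the command in question is xe pif-reconfigure-ip, which reassigns the whole IP configuration of the management PIF in one shot. The addresses below are placeholders for your own:
# xe pif-list management=true
# xe pif-reconfigure-ip uuid=<pif-uuid> mode=static IP=10.0.0.11 netmask=255.255.255.0 gateway=10.0.0.1 DNS=10.0.0.2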
Seeing how fickle XenServer clustering is regarding DNS, it's probably not a bad idea to add /etc/hosts entries on your XenServer nodes for each server that will be in your cluster. You never know when DNS might go out to lunch, and you don't want your HA cluster affected.
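Something along these lines in /etc/hosts on each node will do, with your own addresses and hostnames substituted in:
10.0.0.11   xen01
10.0.0.12   xen02
10.0.0.13   xen03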
For future reference you can check all the configuration parameters of your management interface with the following commands.
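In practice that boils down to something like the following, which dumps every parameter of the management PIF:
# xe pif-list management=true params=all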
First of all, let me start this off by saying that there is a lot of information out there on how to set up a dedicated storage interface on XenServer. However, I was unable to find anything specifically related to bonding two unmanaged interfaces and using them as a dedicated uplink, which seems rather silly to me; why would you not want a highly redundant network connection to your NFS storage? I digress.
Anyway, the first thing you need to do is ssh into one of your XenServer hosts. In my environment I am building out a three-node cluster, and I need to make sure that I am working specifically with the first host in the cluster. So…
Once you are logged in, change the network backend of your XenServer from "openvswitch" to "Linux Bridge". You accomplish this with the following command.
#xe-switch-network-backend bridge
Now you will need to reboot. Note that you can check your network-backend mode at any time with the following command.
#cat /etc/xensource/network.conf
First, get the UUID of the local XenServer host; use the hostname to do this.
# xe host-list name-label=xen01
The command above will return the UUID of the server.
Then you need to get a list of PIFs on the host that you are working with (making sure to exclude any other host's interfaces). The command below will output this list. We will need to grab the UUIDs of eth2 and eth3, since those are the interfaces we are going to use to build our bond. Note that we are running this command so that it spits out the MAC addresses as well… make sure you take note of these, as you will need them.
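Something like this does the trick, filtering on the host UUID from the previous step and asking only for the fields we care about:
# xe pif-list host-uuid=<host-uuid> params=uuid,device,MAC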
Next we will tell XenServer to "forget", or un-manage, eth2, and then do the same for eth3. We will use the UUIDs of these interfaces to identify them to XenServer.
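The command for this is xe pif-forget, run against the PIF UUID of eth2 that we grabbed above:
# xe pif-forget uuid=<eth2-pif-uuid>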
With eth2 now in unmanaged mode, rinse and repeat for eth3.
If you have successfully removed them, it's time to start creating your bond.
First, define your bond in /etc/modprobe.conf. I am calling my bond bond51 and using mode 7 (balance-alb).
alias bond51 bonding
options bond51 miimon=100 mode=7
Then edit /etc/sysconfig/network-scripts/ifcfg-eth2 and /etc/sysconfig/network-scripts/ifcfg-eth3. Make them look like the file below. Change the device name for ifcfg-eth3 to eth3.
DEVICE=eth2
BOOTPROTO=none
HWADDR=<MAC ADDRESS OF YOUR INTERFACE>
ONBOOT=yes
MASTER=bond51
SLAVE=yes
Then create /etc/sysconfig/network-scripts/ifcfg-bond51.
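Mine looked roughly like this; the IP and netmask are placeholders for whatever your storage network uses:
DEVICE=bond51
BOOTPROTO=static
IPADDR=192.168.50.10
NETMASK=255.255.255.0
ONBOOT=yes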
Beep Boop. Run ifup bond51 to bring up the bond and its slave members.
You can check the status of the bond via the command below.
cat /proc/net/bonding/bond51
Please know that I have done little more than reboot the XenServer host to make sure that the configuration I built persists across reboots and fails over from one interface to the other. I have not tested performance yet in any way, shape, or form.
Here are some test prep materials that I have found. Most are going to be for XenServer 5, but I can't imagine that XenServer 6 is that much different.
Finally I have found it! Citrix's XenServer switch configuration best practices document.
While everyone in the world has blog posts and documentation regarding how to set up and configure bridged networks in Xen, they hardly ever go into the physical switch configuration required.
This is the document that you will need to pass along to your friendly network administrator, as they will more than likely not be familiar with networking for Xen, which is much different from networking for VMware ESX.
The contents of the document above are outlined below.
Change the following options on the switches for XenServer ports:
Enable PortFast on XenServer-connected ports. PortFast allows a switch port running Spanning Tree Protocol (STP) to go directly from blocking to forwarding mode by skipping the learning and listening modes. PortFast should only be enabled on ports connected to a single host. If you are using VLANs the port must be an 802.1q trunk port; otherwise the port must be in access mode. Ports used for storage should have PortFast enabled. Note: It is important that you enable PortFast with caution, and only on ports that do not connect to multi-homed devices such as hubs or switches.
Disable Port Security on XenServer connected ports. Port security prevents multiple MAC addresses from being presented to the same port. In a virtual environment, you see multiple MAC addresses presented from Virtual Machines to the same port. If you have enabled Port Security, it shuts down the port.
Disable Spanning Tree Protocol on XenServer connected ports. Spanning Tree Protocol must be disabled if you are using Bonded or teamed NICs in a virtual environment. Spanning Tree Protocol should be disabled because of the nature of Bonds and NIC teaming, to avoid failover delay issues when using bonding.
Disable BPDU guard on XenServer-connected ports. BPDU guard is a protection feature, part of STP, that prevents you from attaching a network device to a switch port; when you attach a network device, the port shuts down and has to be re-enabled by an administrator. A PortFast port should never receive configuration BPDUs. Note: When BPDUs are received by a PortFast port, it indicates that another bridge is connected to the port and that there is a possibility of a bridging loop forming during the Listening and Learning phases. In a valid PortFast configuration, configuration BPDUs should never be received, so Cisco switches support a feature called PortFast BPDU Guard, which shuts down a PortFast-enabled port in the event a BPDU is received. This feature ensures that a bridging loop is not formed, because shutting down the port removes the possibility of a loop.
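For what it's worth, on a Cisco switch those recommendations translate to per-port configuration along these lines; the interface name and VLAN are placeholders, and your network administrator should adapt it to your environment:
interface GigabitEthernet1/0/10
 description XenServer uplink
 switchport mode access
 switchport access vlan 100
 no switchport port-security
 spanning-tree portfast
 spanning-tree bpduguard disable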
This is my second post focused on forcibly removing something from XenCenter, as apparently XenCenter has a hard time knowing when to "forget" a device or host on its own. Now while this is annoying, it can be resolved rather easily from the command line.
Now, let's take a step back and reflect for a moment on the first post in this series… you can find it here. Anyway, that first post showed us how to remove the orphaned device via the Windows CLI; however, I find the process of using the CLI in Windows to be painful, so this post is going to outline a similar process via the CLI on the XenServer host itself.
Anyway, in the previous post I had to force-remove a host from XenCenter via the CLI, and in this post I need to remove the "Local Storage", "Removable Storage" and "DVD Drive" that were associated with the orphaned host. These are the items in red below.
By clicking on each of the items in XenCenter I can see that they are all in a detached state, plus I can see the UUID of each item. You will need the UUID for the next step.
Now ssh into your master node and run the following:
#xe sr-list
Look for the entry with the matching UUID
uuid ( RO)                : 64e69b7d-ae97-4d42-c4d8-c260553b31d1
          name-label ( RW): Local storage
    name-description ( RW):
                host ( RO): <not in database>
                type ( RO): lvm
        content-type ( RO): user
Then run the following command to remove it from the XenCenter inventory.
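For a detached SR like this one, the usual command is xe sr-forget, which removes the SR record from the pool database (and therefore from the XenCenter inventory) without touching the underlying disks:
# xe sr-forget uuid=64e69b7d-ae97-4d42-c4d8-c260553b31d1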