XenServer 6 – Create an Additional Local Storage Repository with Thin Provisioning Support

So recently I deployed 3 XenServer 6 hosts with local storage only. Our need was to virtualize approximately 6 very old CentOS QA servers running on ancient hardware.

Since I did not have any dedicated network/SAN storage at my disposal, I stuck six 146GB SAS drives in each server and carved them up into a ~440GB RAID 10 array.

However, the XenServer installer did not detect this secondary logical disk, so I had to figure out how to manually create a storage repository using this new drive.

The first thing that I needed to do was see how the disk was identified by the OS; for this I used fdisk.

#fdisk -l

Disk /dev/sdb: 440.3 GB, 440345714688 bytes
255 heads, 63 sectors/track, 53535 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Using the output above I was easily able to determine that my secondary drive was /dev/sdb.

So the first thing I did was determine the UUID of the host that I was logged into.

xe host-list

Which spit out the output below

# xe host-list
uuid ( RO) : 4a9971f7-1e59-4e02-b849-04d206ee7b2b
name-label ( RW): xen01
name-description ( RW): Default install of XenServer

Then I ran the following command to create my new SR on /dev/sdb on xen01

xe sr-create host-uuid=4a9971f7-1e59-4e02-b849-04d206ee7b2b content-type=user type=ext device-config:device=/dev/sdb shared=false name-label="Local storage 2"

Then I confirmed that the new SR was created

xe sr-list host=xen01

Note that one of the things I found most odd about this process is that you do not actually create a partition on your disk before turning it into a storage repository.
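Also, since the whole point was thin provisioning, you can sanity-check it by comparing what the SR has physically used against what it has handed out to virtual disks. A quick sketch using standard SR parameters; substitute your own SR UUID for the placeholder:

#xe sr-list name-label="Local storage 2" params=uuid --minimal
#xe sr-param-list uuid=<sr-uuid> | grep -E 'physical-utilisation|physical-size|virtual-allocation'

If physical-utilisation stays well below virtual-allocation as you provision disks, thin provisioning is doing its job.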

XenServer 6 – Deleting a Storage Repository From the Command Line


So before we get started deleting a Storage Repository, we need to know a few key terms.

In XenServer, a Storage Repository (SR) is a storage target that contains virtual disks (VDIs) and ISOs.

A PBD (physical block device) is what they call the interface between a physical host and an attached SR; it is responsible for storing the device configuration that allows the host to interact with the storage target.

So now that we have gotten that out of the way, let's get started.

First, pull up the SR that you want to delete, using its UUID.

#xe sr-list uuid=9a9e7903-7c0f-4f7e-f0a3-e39c54478346

Using the UUID you got from the command above, run the command below; note that pbd-list refers to the SR's UUID as sr-uuid.

#xe pbd-list sr-uuid=9a9e7903-7c0f-4f7e-f0a3-e39c54478346

uuid ( RO) : a89b78ca-cb3d-0b44-5d0b-9cf2c8ad755a
host-uuid ( RO): 4a9971f7-1e59-4e02-b849-04d206ee7b2b
sr-uuid ( RO): 9a9e7903-7c0f-4f7e-f0a3-e39c54478346
device-config (MRO): device: /dev/disk/by-id/edd-int13_dev81
currently-attached ( RO): true

Now, using the UUID from the output of the command above, unplug the PBD.

#xe pbd-unplug uuid=a89b78ca-cb3d-0b44-5d0b-9cf2c8ad755a

Then destroy it….boom.

#xe pbd-destroy uuid=a89b78ca-cb3d-0b44-5d0b-9cf2c8ad755a

Now tell XenServer to “forget” the SR.

#xe sr-forget uuid=9a9e7903-7c0f-4f7e-f0a3-e39c54478346
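As an aside, a shared SR will have one PBD per host in the pool, and every one of them has to be unplugged and destroyed before the forget will succeed. Here is a rough, untested sketch of scripting the whole teardown, reusing the SR UUID from above:

SR=9a9e7903-7c0f-4f7e-f0a3-e39c54478346
# pbd-list --minimal returns a comma-separated list of UUIDs
for PBD in $(xe pbd-list sr-uuid=$SR params=uuid --minimal | tr ',' ' '); do
    xe pbd-unplug uuid=$PBD
    xe pbd-destroy uuid=$PBD
done
xe sr-forget uuid=$SR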

XenServer 6: There Was an Error Connecting to the Server. The Service Did Not Reply Properly

Wow, this is a really overly complicated error for such a simple problem to resolve. Allow me to give you some background.

I am currently building my first production-ready (well, non-production really) XenServer cluster and ran into this issue when attempting to add my second host to the cluster. I hit Google and found out that this was actually just a DNS issue.

A quick check of /etc/resolv.conf on two of my nodes showed nothing but the following line.

; generated by /etc/sysconfig/network-scripts/ifup-post

Well, great. On a standard Linux box I would have just added my name server and been halfway to the bar, but judging by the contents of resolv.conf I figured that I was supposed to add it another way.

Well, after a bit of poking around in XenCenter I found it. Click on the hostname of the XenServer, then click on the "Networking" tab, and from there click "Configure…" below the "Management Interfaces" section. You will then be presented with a pop-up window where you can enter your nameservers.

Once you have configured DNS properly you can then add the host to the cluster.

Note that you can also do this from the command line; however, you basically have to reconfigure your entire management interface: IP, gateway, and everything that goes with it.

First run the command below

#xe pif-list host-name-label=xen01 management=true

Then, using the UUID of the management interface, run the command below, replacing my IP addresses and UUID with yours.

#xe pif-reconfigure-ip mode=static IP=10.120.72.11 uuid=dc6b6651-6067-9a52-2011-6ba102da39e1 DNS=10.120.69.1 netmask=255.255.255.0 gateway=10.120.72.1

Seeing how fickle XenServer clustering is regarding DNS, it's probably not a bad idea to add /etc/hosts entries on your XenServer nodes for each server that will be in your cluster. You never know when DNS might go out to lunch, and you don't want your HA cluster affected.
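For example, something along these lines in /etc/hosts on each node; xen01's address is my real management IP from above, while the xen02 and xen03 addresses are made up for illustration:

10.120.72.11    xen01
10.120.72.12    xen02
10.120.72.13    xen03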

For future reference you can check all the configuration parameters of your management interface with the following commands.

First get the UUID of your management interface.

xe pif-list management=true host-name-label=xen01

Then check the configuration via the UUID.

xe pif-param-list uuid=f61b8d4d-67ec-e262-3e16-4348baaed076

And for example if you need to configure the DNS search domain, you can run the following.

xe pif-param-set uuid=f61b8d4d-67ec-e262-3e16-4348baaed076 other-config:domain=MYDOMAIN
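And to confirm the setting took, you should be able to read it back with pif-param-get:

xe pif-param-get uuid=f61b8d4d-67ec-e262-3e16-4348baaed076 param-name=other-config param-key=domain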

XenServer: How to Build and Configure a Dedicated NFS Storage Bond

First of all, let me start this off by saying that there is a lot of information out there on how to set up a dedicated storage interface on XenServer. However, I was unable to find anything specifically related to bonding two unmanaged interfaces and using them as a dedicated uplink, which seems rather silly to me, as why would you not want a highly redundant network connection to your NFS storage? I digress.

Anyway, the first thing you need to do is ssh into one of your XenServer hosts. In my environment I am building out a three-node cluster, and I need to make sure that I am working specifically with the first host in the cluster. So….

Once you are on the right host, you need to change the network backend of your XenServer from "openvswitch" to "Linux Bridge". You accomplish this with the following command.

#xe-switch-network-backend bridge

Now you will need to reboot. Note that you can check your network-backend mode at any time with the following command.

#cat /etc/xensource/network.conf
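If the backend switch worked, that file should now contain bridge rather than openvswitch.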

Next, get the UUID of the local XenServer host; use the hostname to do this.

# xe host-list name-label=xen01

The command above will return the UUID of the server.

uuid ( RO): 4a9971f7-1e59-4e02-b849-04d206ee7b2b
name-label ( RW): xen01
name-description ( RW): Default install of XenServer

Then you need to get a list of PIFs on the host that you are working with (making sure to exclude any other host's interfaces). The command below will output this list. We will need to grab the UUIDs of eth2 and eth3, since those are the interfaces that we are going to use to build our bond. Note that we are running this command so that it spits out the MAC addresses as well… make sure you take note of these, as you will need them.

#xe pif-list host-uuid=4a9971f7-1e59-4e02-b849-04d206ee7b2b params=uuid,device,MAC,host-uuid

Next we will tell XenServer to "forget", or un-manage, eth2; then we will do the same to eth3. We will use the UUIDs of these interfaces to identify them to XenServer.

Here is the example that puts eth2 into unmanaged mode. Rinse and repeat for eth3.

# xe pif-forget uuid=97afe085-c679-3aa0-d09b-3c530ee3ac60

Then list all PIFs to ensure the unmanaged ones are no longer in the list:

# xe pif-list host-uuid=4a9971f7-1e59-4e02-b849-04d206ee7b2b

If you have successfully removed them, it's time to start creating your bond.

First, define your bond in /etc/modprobe.conf. I am calling my bond bond51, and since failover is the goal here I am using mode 1 (active-backup).

alias bond51 bonding
options bond51 miimon=100 mode=1

Then edit /etc/sysconfig/network-scripts/ifcfg-eth2 and /etc/sysconfig/network-scripts/ifcfg-eth3. Make them look like the file below. Change the device name for ifcfg-eth3 to eth3.

DEVICE=eth2
BOOTPROTO=none
HWADDR=<MAC ADDRESS OF YOUR INTERFACE>
ONBOOT=yes
MASTER=bond51
SLAVE=yes

Then create /etc/sysconfig/network-scripts/ifcfg-bond51

DEVICE=bond51
IPADDR=<YOUR IP>
NETMASK=<YOUR NETMASK>
ONBOOT=yes
BOOTPROTO=static

Beep boop. Run ifup bond51 to bring up the bond and its slave members.

You can check the status of the bond via the command below.

cat /proc/net/bonding/bond51
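To spot-check failover without yanking cables, you can down one of the slaves and watch the active slave change (a quick sketch; only do this on a box you can afford to briefly break):

#ifconfig eth2 down
#grep "Currently Active Slave" /proc/net/bonding/bond51
#ifconfig eth2 up

In active-backup mode the "Currently Active Slave" line should flip from eth2 to eth3, and the bond's IP should keep answering pings.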

Please know that I have done little more than reboot the XenServer host to make sure that the configuration I built persists across reboots and fails over from one interface to another. I have not tested performance yet in any way, shape, or form.
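For completeness, once the bond is up and can reach your filer, pointing an NFS SR at it is the usual sr-create incantation. A minimal sketch, assuming a hypothetical filer at 192.168.51.10 exporting /vol/xen; swap in your own server, path, and host UUID:

xe sr-create host-uuid=4a9971f7-1e59-4e02-b849-04d206ee7b2b content-type=user type=nfs shared=true name-label="NFS storage" device-config:server=192.168.51.10 device-config:serverpath=/vol/xen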

Citrix XenServer CCA Exam Practice Material

Now that I am back working on XenServer, I have been looking at the CCA for XenServer exam.

Below is the overview of the exam.

http://goo.gl/m6CMa

Here are some test prep materials that I have found. Most are going to be for XenServer 5, but I can't imagine that XenServer 6 is that much different.

http://citrixxperience.com/free/A09Printable.pdf

http://citrixxperience.com/free/A09StudyGuide.pdf


XenServer Switch Ports Configuration Best Practices

Finally I have found it! Citrix's XenServer switch configuration best practices document.

While everyone in the world has blog posts and documentation regarding how to set up and configure bridged networks in Xen, they hardly ever go into the physical switch configuration required.

This is the document that you will need to pass along to your friendly network administrator, as they will more than likely not be familiar with networking for Xen, since it's much different from networking for VMware ESX.

http://support.citrix.com/article/CTX123158

The contents of the document above are outlined below.

Change the following options on the switches for XenServer ports (a rough config sketch pulling all four together follows the list):

  1. Enable PortFast on XenServer connected ports.
    PortFast allows a switch port running Spanning Tree Protocol (STP) to go directly from blocking to forwarding mode by skipping the learning and listening modes. PortFast should only be enabled on ports connected to a single host. The port must be an 802.1q trunk port if you are using VLANs; otherwise, the port must be in access mode.
    Ports used for storage should have PortFast enabled.
    Note: It is important that you enable PortFast with caution, and only on ports that do not connect to multi-homed devices such as hubs or switches.
  2. Disable Port Security on XenServer connected ports.
    Port security prevents multiple MAC addresses from being presented to the same port. In a virtual environment, you see multiple MAC addresses presented from Virtual Machines to the same port, so if you have Port Security enabled, the switch shuts the port down.
  3. Disable Spanning Tree Protocol on XenServer connected ports.
    Spanning Tree Protocol must be disabled if you are using bonded or teamed NICs in a virtual environment; because of the way bonds and NIC teaming behave, leaving it enabled can cause failover delay issues.
  4. Disable BPDU guard on XenServer connected ports.
    BPDU guard is a protection feature that is part of STP; it prevents you from attaching a bridging device to a switch port. When such a device is attached, the port shuts down and has to be re-enabled by an administrator.
    A PortFast port should never receive configuration BPDUs.
    Note: When BPDUs are received by a PortFast port, it indicates that another bridge is connected to the port, and that there is a possibility of a bridging loop forming during the Listening and Learning phases. In a valid PortFast configuration, configuration BPDUs should never be received, so Cisco switches support a feature called PortFast BPDU Guard, which shuts down a PortFast-enabled port in the event a BPDU is received. This ensures that a bridging loop is not formed, because shutting down the port removes the possibility of a loop forming.
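To pull all four recommendations together, here is a hypothetical Cisco IOS interface stanza. The interface name and VLAN number are made up, and other switch OSes will use different syntax, so treat this as a sketch rather than gospel:

interface GigabitEthernet1/0/10
 description xen01 uplink
 switchport mode access
 switchport access vlan 100
 no switchport port-security
 spanning-tree portfast
 spanning-tree bpduguard disable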

Forcibly Remove Storage Devices From XenCenter via CLI

This is my second post focused on forcibly removing something from XenCenter, as apparently XenCenter has a hard time knowing when to "forget" a device or host on its own. Now, while this is annoying, it can be resolved rather easily from the command line.

Now, let's take a step back and reflect for a moment on the first post in this series… you can find it here. That first post showed us how to remove the orphaned device via the Windows CLI; however, I find the process of using the CLI in Windows to be painful, so this post is going to outline a similar process via the CLI on the XenServer host itself.

Anyway, in the previous post I had to force-remove a host from XenCenter via the CLI, and in this post I need to remove the "Local Storage", "Removable Storage", and "DVD Drive" entries that were associated with the orphaned host (these were the items flagged in red in XenCenter).



By clicking on each one of the items in XenCenter, I can see that they are all in a detached state, plus I can see the UUID of each item. You will need the UUID for the next step.

Now ssh into your master node and run the following

#xe sr-list

Look for the entry with the matching UUID

uuid ( RO)                : 64e69b7d-ae97-4d42-c4d8-c260553b31d1
name-label ( RW): Local storage
name-description ( RW):
host ( RO): <not in database>
type ( RO): lvm
content-type ( RO): user

Then run the following command to remove it from the XenCenter inventory.

# xe sr-forget uuid=64e69b7d-ae97-4d42-c4d8-c260553b31d1

Now rinse and repeat for any remaining orphaned items.
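One last tip: if the sr-forget refuses to run because the SR still has PBDs attached, list them and tear them down first, exactly as in the SR-deletion post above (placeholder UUIDs below; substitute your own):

#xe pbd-list sr-uuid=64e69b7d-ae97-4d42-c4d8-c260553b31d1 params=uuid --minimal
#xe pbd-unplug uuid=<pbd-uuid>
#xe pbd-destroy uuid=<pbd-uuid>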