How to Create A Dedicated Storage NIC in XenServer

In VMware ESX, when using NFS storage, you are required to create a separate, additional VMkernel port group to access your NFS storage. This way ESX management traffic travels over one port group and NFS traffic travels over another.

In XenServer the concept is similar; however, it is executed quite differently.

First and foremost, you need to be aware of a few limitations in XenServer. While XenServer does allow you to create a dedicated NIC (or bond) for NFS traffic, this NIC must remain unmanaged by XenServer. In contrast, a XenServer host's management traffic travels over its "management interface", which obviously has to be an interface that is managed by XenServer.

Allow me to sum this up: you cannot share a NIC (or NICs) between management traffic and NFS traffic. This means that if you want to create a bond for management traffic and a bond for NFS traffic, you are going to need four free interfaces on your XenServer box.

Now, it is possible to use VLAN tagging and route your management traffic and virtual machine traffic over the same physical interfaces. However, it's important to know that XenServer does not support VLAN tagging on the management interface, so whatever VLAN you use for management must be the native VLAN in the switch port's configuration.
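For the virtual machine side of that setup, tagged VLAN networks are created with the xe CLI. A minimal sketch (the network name, VLAN number, and UUID placeholders are all assumptions you would swap for your own values):

```shell
# Create a network to carry tagged VM traffic ("VM-VLAN20" is an illustrative name)
xe network-create name-label="VM-VLAN20"

# Attach that network to a physical interface as VLAN 20.
# <network-uuid> is returned by the command above; <pif-uuid> comes from xe pif-list.
xe vlan-create network-uuid=<network-uuid> pif-uuid=<pif-uuid> vlan=20
```

The management interface, again, gets no such tagging; it rides the port's native VLAN.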

The information below is from the Admin Guide for XenServer 5.5

Switch ports configured to perform 802.1Q VLAN tagging/untagging, commonly referred to as ports with a native VLAN or as access mode ports, can be used with XenServer management interfaces to place management traffic on a desired VLAN. In this case the XenServer host is unaware of any VLAN configuration.

XenServer management interfaces cannot be assigned to a XenServer VLAN via a trunk port.

Bottom line: it's probably best to have separate physical connections for your management traffic and NFS traffic.

Also, according to the Admin Guide, your NFS network should not be routable from your management network. See the words below and read them thusly.

"Before dedicating a network interface as a storage interface for use with iSCSI or NFS SRs, you must ensure that the dedicated interface uses a separate IP subnet which is not routable from the main management interface. If this is not enforced, then storage traffic may be directed via the main management interface after a host reboot, due to the order in which network interfaces are initialized."

OK, now that we've got all that out of the way, let's actually create our dedicated storage NIC.

First, we need to get the UUID of the PIF (physical interface) that we want to use. Note that this is just an example using a standalone interface.

#xe pif-list host-name-label=<your-hostname>
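If you already know which device you're after (eth2 below is just a hypothetical device name), you can narrow the output down to the UUID itself; the `--minimal` flag prints only the value:

```shell
# Print just the UUID of eth2 on this host (eth2 is a placeholder device name)
xe pif-list host-name-label=<your-hostname> device=eth2 params=uuid --minimal
```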

Next, we reconfigure our PIF:

#xe pif-reconfigure-ip mode=static IP=<your-ip-on-nfs-vlan> netmask=<your-mask> uuid=<pif-uuid>

#xe pif-param-set disallow-unplug=true uuid=<pif-uuid>

#xe pif-param-set other-config:ManagementPurpose="Storage" uuid=<pif-uuid>

Alternatively, you can use xe pif-forget to remove the interface from the XenCenter database and configure it manually in the XenServer control domain like you would any other interface in Linux; however, this could be more confusing in the long run.
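Put together, the whole procedure can be sketched as a small script. This is a sketch only, not a drop-in tool: the hostname, device name, and addressing are assumptions you would replace with your own values.

```shell
#!/bin/bash
# Sketch: dedicate a standalone NIC to NFS storage traffic on XenServer 5.5.
# HOST, DEV, IP, and MASK are placeholders for your environment.
HOST="xenserver1"
DEV="eth2"
IP="10.10.10.11"
MASK="255.255.255.0"

# Look up the PIF UUID for the chosen device on this host
PIF=$(xe pif-list host-name-label="$HOST" device="$DEV" params=uuid --minimal)

# Give the PIF a static address on the (non-routed) storage subnet
xe pif-reconfigure-ip mode=static IP="$IP" netmask="$MASK" uuid="$PIF"

# Keep the interface plugged across reboots and mark it for storage use
xe pif-param-set disallow-unplug=true uuid="$PIF"
xe pif-param-set other-config:ManagementPurpose="Storage" uuid="$PIF"
```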


Getting Started with XenServer: Features and Licensing

This post is pretty much a bunch of links that I have found and am using to get up to speed with XenServer. I've got a new virtualization project just around the corner, and VMware is no longer a good fit for us.

The link below compares the various features in the different versions of XenServer.

And the link below explains XenServer licensing. There are three licensed versions of XenServer, and of course a free version. The specific version that I am looking at is Enterprise, which lacks only a few features available in Platinum. Street price for Enterprise is $2,500 per server.

If you are interested in test driving a licensed version of XenServer, you can use the link below to register for a 45-day trial.

Last but not least, the link below is to Citrix's documentation portal.

Now I just need to find the minimums-and-maximums docs and I should be good to go.


Considering Alternatives to VMware


Interested in doing some serious virtualization on Cisco's new UCS C-Series line of servers? Seriously, why wouldn't you be? Cisco's UCS C460 M2 server supports up to four Intel E7-4800 processors (these are 10-core procs, people) and up to 1 TB of memory. That's 40 cores in 4 RU!

I am eager to get one of these servers, or even one of the two-proc models, into our lab and test them out. I have been hearing great things about them and think they would be perfect virtualization servers for our non-production infrastructure, where we seem to have at least 6 or 7 different QA environments. Plus, VMware and Cisco partnered together on these things.

But slow your roll, cowboy: with VMware's recent licensing announcement regarding vSphere 5 and vRAM limitations, it would take 11 Enterprise Plus licenses to utilize 1 TB of memory. At $3,000 to $4,000 per license, you are talking about $40k in licensing for one server. Nasty!
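The arithmetic behind that figure, assuming the revised vSphere 5 vRAM entitlement of 96 GB per Enterprise Plus license (the number from VMware's updated licensing announcement), is just a ceiling division:

```shell
# Licenses needed = ceiling of 1024 GB / 96 GB-per-license
echo $(( (1024 + 96 - 1) / 96 ))   # prints 11
```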

Enter the alternatives in the virtualization world: Citrix XenServer and Red Hat's RHEV. Hell, I'll even mention Microsoft's Hyper-V. However, to move to a different technology, you need to be able to get your VMware VMs off of ESX and onto your new virtualization flavor of choice.

Here is how to do it in XenServer. Note that I have not tried this in the lab yet, but trust me, I will. I will also do the same with RHEV, since it's only $750 per socket per year.

Sure, the alternatives are not as advanced as VMware ESX right now, but they are making improvements slowly but surely, and I am sure that the recent VMware licensing announcements are only going to bring more potential customers to other virtualization vendors, which will only help to further mature their products.

Find out more about the Cisco UCS C-Series via the link below.