So before we get started deleting a Storage Repository, we need to cover a few key terms.
In XenServer, a Storage Repository (SR) is a storage target that contains virtual disks (VDIs) and ISOs.
A PBD (physical block device) is the interface between a physical host and an attached SR; it is responsible for storing the device configuration that allows the host to interact with the storage target.
So now that we have gotten that out of the way, let's get started.
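As a rough sketch of where we are headed, the general xe CLI flow is: find the SR, unplug the PBD that connects each host to it, then forget the SR. The name and UUIDs below are placeholders:
# xe sr-list name-label="My NFS Storage"
# xe pbd-list sr-uuid=<sr-uuid>
# xe pbd-unplug uuid=<pbd-uuid>
# xe sr-forget uuid=<sr-uuid>
Note that sr-forget only removes the SR from the XenServer database; it does not wipe the VDIs that live on the storage target.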
Wow, this is a needlessly complicated error for such a simple problem to resolve. Allow me to give you some background.
I am currently building my first production-ready (well, non-production really) XenServer cluster and ran into this issue when attempting to add my second host into the cluster. I hit Google and found out that this was actually just a DNS issue.
A quick check of /etc/resolv.conf on two of my nodes shows nothing but the following line.
; generated by /etc/sysconfig/network-scripts/ifup-post
Well, great. On a standard Linux box I would have just added my nameserver and been halfway to the bar, but judging by the contents of resolv.conf, I figured I was supposed to add it another way.
Well, after a bit of poking around in XenCenter, I found it. Click on the hostname of the XenServer, then click on the "Networking" tab, and from there click "Configure…" below the "Management Interfaces" section, as illustrated below. You will then be presented with a pop-up window where you can enter your nameservers.
Once you have configured DNS properly, you can add the host to the cluster.
Note that you can also do this from the command line; however, you basically have to reconfigure your entire management interface: IP, gateway, and everything that goes with it.
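For reference, the command in question is xe pif-reconfigure-ip. A static reconfiguration that includes DNS looks roughly like this, with all the values as placeholders for your environment:
# xe pif-list management=true host-name-label=xen01
# xe pif-reconfigure-ip uuid=<pif-uuid> mode=static IP=10.0.0.11 netmask=255.255.255.0 gateway=10.0.0.1 DNS=10.0.0.2
The first command gets you the UUID of the management PIF; the second rewrites its addressing, DNS included. Expect to lose connectivity for a moment while the interface is reconfigured.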
Seeing how fickle XenServer clustering is regarding DNS, it's probably not a bad idea to add /etc/hosts entries on your XenServer nodes for each server that will be in your cluster. You never know when DNS might go out to lunch, and you don't want your HA cluster affected.
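Something like the following in /etc/hosts on each node would do it (the addresses and names here are placeholders for your environment):
10.0.0.11   xen01
10.0.0.12   xen02
10.0.0.13   xen03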
For future reference you can check all the configuration parameters of your management interface with the following commands.
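Commands along these lines will show the interesting fields and confirm that your DNS change actually landed in resolv.conf (trim or extend the params list as you see fit):
# xe pif-list management=true params=uuid,device,IP,netmask,gateway,DNS
# cat /etc/resolv.conf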
First of all, let me start off by saying that there is a lot of information out there on how to set up a dedicated storage interface on XenServer. However, I was unable to find anything specifically related to bonding two unmanaged interfaces and using them as a dedicated uplink, which seems rather silly to me; why would you not want a highly redundant network connection to your NFS storage? I digress.
Anyway, the first thing you need to do is SSH into one of your XenServer hosts. In my environment I am building out a three-node cluster, and I need to make sure that I am working specifically with the first host in the cluster. So….
Once you are in, change the network backend of your XenServer from "openvswitch" to "Linux Bridge". You accomplish this with the following command.
# xe-switch-network-backend bridge
Now you will need to reboot. Note that you can check your network-backend mode at any time with the following command.
# cat /etc/xensource/network.conf
First, get the UUID of the local XenServer host, using its hostname to do so.
# xe host-list name-label=xen01
The command above will return the UUID of the server.
Then you need to get a list of PIFs on the host that you are working with (making sure to exclude any other host's interfaces). We need to grab the UUIDs of eth2 and eth3, since those are the interfaces we are going to use to build our bond. Note that the command below spits out the MAC addresses as well… make sure you take note of them, as you will need them later.
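Something like this should produce the list, with the MAC addresses included (substitute the host UUID you grabbed in the previous step):
# xe pif-list host-uuid=<host-uuid> params=uuid,device,MAC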
Next we will tell XenServer to "forget", or un-manage, eth2, and then we will do the same to eth3. We will use the UUIDs of these interfaces to identify them to XenServer.
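The command is xe pif-forget, and it looks roughly like this, using eth2's UUID from the pif-list output above:
# xe pif-forget uuid=<uuid-of-eth2>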
That puts interface eth2 into unmanaged mode. Rinse and repeat for eth3.
If you have successfully removed them, it's time to start creating your bond.
First, define your bond in /etc/modprobe.conf. I am calling my bond bond51.
alias bond51 bonding
options bond51 miimon=100 mode=6
Then edit /etc/sysconfig/network-scripts/ifcfg-eth2 and /etc/sysconfig/network-scripts/ifcfg-eth3, making them look like the file below (change the DEVICE name to eth3 in ifcfg-eth3).
DEVICE=eth2
BOOTPROTO=none
HWADDR=<MAC ADDRESS OF YOUR INTERFACE>
ONBOOT=yes
MASTER=bond51
SLAVE=yes
Then create /etc/sysconfig/network-scripts/ifcfg-bond51.
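It should look roughly like the following; the address and netmask are placeholders for your storage network:
DEVICE=bond51
BOOTPROTO=static
IPADDR=<IP ADDRESS ON YOUR STORAGE NETWORK>
NETMASK=<YOUR NETMASK>
ONBOOT=yes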
Beep boop. Run ifup bond51 to bring up the bond and its slave members.
You can check the status of the bond via the command below.
# cat /proc/net/bonding/bond51
Please know that I have done little more than reboot the XenServer host to make sure that the configuration I built persists across reboots and fails over from one interface to another. I have not yet tested performance in any way, shape, or form.
Finally I have found it! Citrix's XenServer switch configuration best practices document.
While everyone in the world has blog posts and documentation regarding how to set up and configure bridged networks in Xen, they hardly ever go into the physical switch configuration required.
This is the document that you will need to pass along to your friendly network administrator, as they will more than likely not be familiar with networking for Xen, since it's much different from networking for VMware ESX.
The contents of the document above are outlined below.
Change the following options on the switches for XenServer-connected ports (a sample switch configuration follows the list):
Enable PortFast on XenServer-connected ports. PortFast allows a switch port running Spanning Tree Protocol (STP) to go directly from blocking to forwarding mode by skipping the learning and listening states. PortFast should only be enabled on ports connected to a single host. The port must be an 802.1q trunk port if you are using VLANs; otherwise, the port must be in access mode. Ports used for storage should also have PortFast enabled. Note: enable PortFast with caution, and only on ports that do not connect to multi-homed devices such as hubs or switches.
Disable Port Security on XenServer-connected ports. Port security prevents multiple MAC addresses from being presented to the same port. In a virtual environment you will see multiple MAC addresses presented from virtual machines to the same port, and if Port Security is enabled, the switch shuts the port down.
Disable Spanning Tree Protocol on XenServer-connected ports. STP must be disabled if you are using bonded or teamed NICs in a virtual environment; given the way bonds and NIC teaming work, leaving it enabled introduces failover delay issues.
Disable BPDU guard on XenServer-connected ports. BPDU guard is an STP protection feature that prevents you from attaching a network device to a switch port: when such a device is attached, the port shuts down and has to be re-enabled by an administrator. A PortFast port should never receive configuration BPDUs. Note: when a PortFast port does receive BPDUs, it indicates that another bridge is connected to the port, and that there is a possibility of a bridging loop forming during the listening and learning phases. In a valid PortFast configuration, configuration BPDUs should never be received, so Cisco switches support a feature called PortFast BPDU Guard, which shuts down a PortFast-enabled port in the event a BPDU is received. This ensures that a bridging loop is not formed, because shutting down the port removes the possibility of a loop.
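For reference, here is roughly how those recommendations translate on a Cisco IOS switch; the interface name and description are placeholders, and you should verify the syntax against your own switch platform:
interface GigabitEthernet0/10
 description xen01 - NIC bond member
 switchport mode access
 no switchport port-security
 spanning-tree portfast
 spanning-tree bpduguard disable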
This is my second post focused on forcibly removing something from XenCenter, as apparently XenCenter has a hard time knowing when to "forget" a device or host on its own. Now while this is annoying, it can be resolved rather easily from the command line.
Now, let's take a step back and reflect for a moment on the first post in this series… you can find it here. That post showed us how to remove the orphaned device via the Windows CLI; however, I find the process of using the CLI in Windows to be painful, so this post is going to outline a similar process via the CLI on the XenServer host itself.
Anyway, in the previous post I had to force-remove a host from XenCenter via the CLI, and in this post I need to remove the "Local Storage", "Removable Storage", and "DVD Drive" that were associated with the orphaned host. These are the items in red below.
By clicking on each one of the items in XenCenter, I can see that they are all in a detached state, plus I can see the UUID of each item. You will need the UUIDs for the next step.
Now SSH into your master node and run the following:
# xe sr-list
Look for the entry with the matching UUID:
uuid ( RO)                : 64e69b7d-ae97-4d42-c4d8-c260553b31d1
          name-label ( RW): Local storage
    name-description ( RW):
                host ( RO): <not in database>
                type ( RO): lvm
        content-type ( RO): user
Then run the following command to remove it from the XenCenter inventory.
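For a detached SR like this one, xe sr-forget should do the trick, fed the UUID from the sr-list output above:
# xe sr-forget uuid=64e69b7d-ae97-4d42-c4d8-c260553b31d1
Rinse and repeat for each of the other detached items.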
Welcome back to another XenServer adventure. Today we are going to review the process of removing an orphaned XenServer host from XenCenter via the command line.
A few days ago one of my Xen hosts died due to an issue with its RAID controller. In order to revive it, I had to rebuild it and add it back into the pool. However, there was a problem: the original entry for this host in XenCenter appeared in red, indicating that it was disconnected, and another entry for the same exact host with the same exact hostname appeared next to it in the host list for this particular cluster. I was unable to remove the orphaned host via XenCenter, as there was no remove option.
I was in a pickle.
However, I was able to find a solution.
First and foremost, it appears that there are actually two ways to do this: from the Windows command line on the machine where you are running XenCenter, or from the master node in the cluster.
This example is from the Windows CLI. Note that the UUID output to the command line is the UUID of the host that is unreachable, i.e. the orphaned host.
C:\Program Files\Citrix\XenCenter>xe -s <Hostname_of_master> -u root -pw <root_password> pool-sync-database
You attempted an operation which involves a host which could not be contacted.
host: 560e233b-0e8b-4c2f-a641-cf1876630a6b (virt04.atlc1)
C:\Program Files\Citrix\XenCenter>xe -s <Hostname_of_master> -u root -pw <root_password> host-forget uuid=560e233b-0e8b-4c2f-a641-cf1876630a6b
WARNING: A host should only be forgotten if it is physically unrecoverable;
WARNING: if possible, Hosts should be 'ejected' from the Pool instead.
WARNING: Once a host has been forgotten it will have to be re-installed.
WARNING: This operation is irreversible.
Type 'yes' to continue
yes
Returning to XenCenter, I found that the orphaned host entry had been removed.
If you are reading this post, then you should know that I have been spending a lot of time as of late trying to learn XenServer, and I am doing my best to get XenServer to do my evil bidding.
When I first took a look at XenServer, I was disappointed to find that you cannot PXE boot a VM unless you use the "Other Install Media" template. However, when you use this template you are not building a fully paravirtualized VM, and you lose some functionality on your VM (like hot-adding a virtual disk).
So let's say you want to kickstart a CentOS 5 64-bit VM. Traditionally in XenServer, you need to create a new VM based on the "CentOS 5 (64-bit)" template and then point your VM at your kickstart media and ks config file. Being that this is a manual process, and I am trying to automate building virtual machines, I started searching for a better way to make XenServer do what I wanted… I think I may have accomplished my goal.
So the first thing I did was create a new VM via the CLI. This step spits out a UUID for your new VM.
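Roughly, and with the template name, URLs, and kickstart path as placeholders for your own, the idea looks like this:
# xe vm-install template="CentOS 5 (64-bit)" new-name-label=centos5-ks
# xe vm-param-set uuid=<vm-uuid> other-config:install-repository=http://mirror.example.com/centos/5/os/x86_64
# xe vm-param-set uuid=<vm-uuid> PV-args="ks=http://mirror.example.com/ks/centos5.ks ksdevice=eth0"
# xe vm-param-set uuid=<vm-uuid> is-a-template=true
vm-install creates the VM and prints its UUID, the two param-set calls point it at the install repository and kickstart file, and the final one turns the VM into a reusable template.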
Now on the next step I cheated: I used XenCenter to quick-create a VM based on my new template.
Now when I create a new VM from this template, it starts to kickstart on boot. My next steps are to create multiple templates, each based on my different kickstart images/configs, and then to figure out how to set their IPs and hostnames.