First off, let me say that it's really good practice to configure ports one by one, at least when you are starting out in the networking world, as the repetition of typing the same thing over and over helps you remember the proper commands. Hell, this is one of the reasons that I blog the stuff that I do… I'm trying to make sure that I do not forget what I just learned.
Anyway, this is a quick and dirty one that I cannot remember to save my life. In this instance I wanted to configure a few ports on a 2950 that I plan to use to replace my 2960, as my 2960 is destined for bigger and better things given its layer 3 capabilities.
Note the spaces between the first port in the range, the dash, and the last port in the range.
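For example, to select ports 1 through 8 and enable PortFast on them, it looks something like this (the hostname and interface names here are assumptions; adjust for your hardware):

```
s-2950-1.loc(config)#interface range fastEthernet 0/1 - 8
s-2950-1.loc(config-if-range)#switchport mode access
s-2950-1.loc(config-if-range)#spanning-tree portfast
```

Omit the spaces around the dash and IOS will complain about the range syntax.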
%Warning: portfast should only be enabled on ports connected to a single host. Connecting hubs, concentrators, switches, bridges, etc… to this interface when portfast is enabled, can cause temporary bridging loops. Use with CAUTION
%Portfast will be configured in 8 interfaces due to the range command but will only have effect when the interfaces are in a non-trunking mode.
s-2950-1.loc(config-if-range)#spanning-tree bpduguard enable
s-2950-1.loc(config-if-range)#end
s-2950-1.localdomain#copy run start
EIGRP (Enhanced Interior Gateway Routing Protocol) is a Cisco proprietary routing protocol (or was, until recently). When a router runs EIGRP, it keeps a copy of its neighbors' routing tables. If a router cannot find a route in its own, or its neighbors', routing tables, it will query its neighbors, who in turn query their neighbors.
Exactly how routing protocols work is serious business, but don't worry, we are not going to go into that here. Below is the process that I used to set up EIGRP on a Cisco 2811.
Before we do anything, let's get into configuration mode (conf t).
Your first step is going to be to enable IP routing on your device. But before you do so, make sure that you have configured a gateway of last resort. I did not, and had to hook up the old console cable.
The IP of this router is 10.1.0.2, and it's directly connected to 10.1.0.1, which is its first hop of last resort, so let's configure that.
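Putting it all together, the configuration looked something like this (the hostname prompt, EIGRP AS number, and network statement are assumptions; substitute your own):

```
s-2811-1(config)#ip route 0.0.0.0 0.0.0.0 10.1.0.1
s-2811-1(config)#ip routing
s-2811-1(config)#router eigrp 100
s-2811-1(config-router)#network 10.1.0.0 0.0.255.255
s-2811-1(config-router)#no auto-summary
```

The static default route is what gives you that gateway of last resort before you start mucking with the routing table.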
So I was planning on blogging about this process simply because I keep forgetting it. Being that I am in the middle of building out my lab, and have a ton of old Cisco hardware with ancient images, I have found myself going through the process of updating firmware quite a bit as I tinker with different IOS images.
The first thing that you are going to need is a TFTP server, which I am running on my Fedora 18 desktop. It's a very easy setup, and has been simply documented at the link below. Note the article below is specifically about how to set up a TFTP server on RHEL, however the process is pretty much the same.
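Once the TFTP server is up and the image is sitting in its root directory, pulling the new image down to the switch looks something like this (the server IP and image filename here are assumptions; use your own):

```
switch#copy tftp://10.1.0.50/c2950-i6k2l2q4-mz.121-22.EA14.bin flash:
```

Follow that with a boot system statement pointing at the new image and a reload, and you're in business.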
The Cisco Catalyst 3560G is a layer 3 switch which went end-of-life in 2009. For the home lab it's a pretty nice switch to have due to its layer 3 support and gigabit speed. Mine is the model seen to the left, 24 gigabit ports and 4 x 1Gb SFP uplinks.
The Cisco Catalyst 3560 is available with one of two software images:

- IP Base software includes advanced quality of service (QoS), rate limiting, access control lists (ACLs), Open Shortest Path First (OSPF) for routed access, and IPv6 functionality.
- IP Services software provides a broader set of enterprise-class features, including advanced hardware-based IP Unicast and IP Multicast routing, as well as policy-based routing (PBR).
Anyway, gathered here are a few mandatory resources for the Cisco Catalyst 3560G.
First of all, let me start this off by saying that there is a lot of information out there on how to set up a dedicated storage interface on XenServer. However, I was unable to find anything specifically related to bonding two unmanaged interfaces and using them as a dedicated uplink, which seems rather silly to me, as why would you not want a highly redundant network connection to your NFS storage? I digress.
Anyway, the first thing you need to do is SSH into one of your XenServer hosts. In my environment I am building out a three node cluster, and I need to make sure that I am working specifically with the first host in the cluster. So….
First you need to change the network backend of your XenServer from "openvswitch" to "Linux Bridge". You accomplish this with the following command.
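A sketch of that step, using XenServer's standard backend-switching helper script:

```
# xe-switch-network-backend bridge
```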
Now you will need to reboot. Note that you can check your network-backend mode at any time with the following command.
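XenServer records the backend mode in a small config file, so checking it is just a matter of reading it back; it should print either "bridge" or "openvswitch":

```
# cat /etc/xensource/network.conf
```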
First get the uuid of the local xenserver host, use the hostname to do this.
# xe host-list name-label=xen01
The command above will return the uuid of the server.
Then you need to get a list of pifs on the host that you are working with (making sure to exclude any other host's interfaces). The command below will output this list. We will need to grab the uuids of eth2 and eth3, since they are the interfaces that we are going to use to build our bond. Note that we are running this command so that it will spit out our MAC addresses as well… make sure that you take note of these as you will need them.
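A sketch of that listing, filtered to the local host and trimmed down to the fields we actually care about (the host uuid placeholder is the one returned by the previous step):

```
# xe pif-list host-uuid=<host-uuid> params=uuid,device,MAC
```

The uuids and MACs for eth2 and eth3 in that output are what you'll need for the bond.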
Beep Boop. Ifup bond51 to bring up the bond and its slave members.
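For reference, the bond itself lives in standard RHEL-style network scripts on the XenServer host, since we're now on the Linux bridge backend. A minimal sketch, assuming eth2/eth3 as the slaves and a made-up storage IP:

```
# /etc/sysconfig/network-scripts/ifcfg-bond51
DEVICE=bond51
ONBOOT=yes
BOOTPROTO=none
IPADDR=10.2.0.11
NETMASK=255.255.255.0
BONDING_OPTS="mode=active-backup miimon=100"

# /etc/sysconfig/network-scripts/ifcfg-eth2 (repeat for eth3)
DEVICE=eth2
ONBOOT=yes
BOOTPROTO=none
MASTER=bond51
SLAVE=yes
```

Active-backup mode keeps things simple since it requires no switch-side configuration; if your switch supports it, other bonding modes are fair game.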
You can check the status of the bond via the command below.
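On the Linux bridge backend, bond state is exposed through the kernel's bonding driver, so this shows the active slave, link status, and failure counts:

```
# cat /proc/net/bonding/bond51
```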
Please know that I have done little more than reboot the XenServer host, to make sure that the configuration I built would persist across reboots and fail over from one interface to another. I have not tested performance yet in any way, shape, or form.
In VMware ESX, when using NFS storage, you are required to create a separate, additional VMkernel port group to access your NFS storage. This way ESX management traffic travels over one port group and NFS traffic travels over another.
In XenServer the concept is similar; however, it is executed much differently.
First and foremost, you need to be aware of a few limitations in XenServer. While XenServer does allow you to create a dedicated NIC (or bond) for NFS traffic, this NIC must remain unmanaged by XenServer. In contrast, a XenServer's management traffic travels over its "management interface", which obviously has to be an interface that is managed by XenServer.
Allow me to sum this up. You cannot share a NIC (or NICs) between management traffic and NFS traffic. This means that if you want to create a bond for management traffic and a bond for NFS traffic, you are going to need four free interfaces on your XenServer box.
Now it is possible to use VLAN tagging and route your management traffic and virtual machine traffic over the same physical interfaces. However, it's important to know that XenServer does not support VLAN tagging on the management interface, so whatever VLAN you use for management must be the native VLAN in the switch port's configuration.
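On the Cisco side, that native-VLAN requirement looks something like this (the interface and VLAN numbers here are assumptions):

```
interface GigabitEthernet0/1
 switchport trunk encapsulation dot1q
 switchport mode trunk
 switchport trunk native vlan 10
 switchport trunk allowed vlan 10,20,30
```

Management traffic rides VLAN 10 untagged, while virtual machine traffic can arrive tagged on 20 and 30.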
The information below is from the Admin Guide for XenServer 5.5
Switch ports configured to perform 802.1Q VLAN tagging/untagging, commonly referred to as ports with a native VLAN or as access mode ports, can be used with XenServer management interfaces to place management traffic on a desired VLAN. In this case the XenServer host is unaware of any VLAN configuration.
XenServer management interfaces cannot be assigned to a XenServer VLAN via a trunk port.
Bottom line: it's probably best to have separate physical connections for your management traffic and your NFS traffic.
Also, according to the Admin Guide, your NFS network should not be routed. See the words below and read them thusly.
"Before dedicating a network interface as a storage interface for use with iSCSI or NFS SRs, you must ensure that the dedicated interface uses a separate IP subnet which is not routable from the main management interface. If this is not enforced, then storage traffic may be directed via the main management interface after a host reboot, due to the order in which network interfaces are initialized."
OK, now that we have got all that out of the way, let's actually create our dedicated storage NIC.
First we need to get the uuid of the pif (physical interface) that we want to use. Note that this is just an example using a standalone interface.
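A sketch of the whole dedication step, assuming eth4 is the storage interface and using a made-up IP and netmask; the other-config:management_purpose flag is what marks the pif as a dedicated storage interface:

```
# xe pif-list device=eth4 params=uuid
# xe pif-reconfigure-ip uuid=<pif-uuid> mode=static IP=10.3.0.11 netmask=255.255.255.0
# xe pif-param-set uuid=<pif-uuid> disallow-unplug=true other-config:management_purpose="Storage"
```

Remember the admin guide's warning above: that IP should live on a subnet that is not routable from the management interface.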
Alternatively, you can use xe pif-forget to remove the interface from XenServer's database and configure it manually in the XenServer control domain like you would any other interface in Linux; however, this could be more confusing in the long run.
So first I hit up /etc/modprobe.conf and verified that the correct driver module was being aliased for the interface.
alias eth4 e1000
Ok check. Modprobe.conf looks good.
Now re-initialize the driver.
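Assuming the e1000 module from the alias above, reloading it is just a remove and a reload (note that this will briefly drop any interfaces backed by that driver):

```
# rmmod e1000
# modprobe e1000
```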
Then it was on to lspci, to make sure that the driver had initialized all the interfaces. Note that there are two different versions of the 82546EB driver shown, which tells me that the driver was only initialized for two of the interfaces initially.
Then I turned my attention back to ifcfg-eth4 and the fact that there was no MAC address defined in the config file. I grabbed the MAC address out of /etc/sysconfig/hwconf and added it to the ifcfg-eth4 config file.
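The resulting config file looked something like this (the MAC address here is a placeholder for the one pulled out of hwconf):

```
# /etc/sysconfig/network-scripts/ifcfg-eth4
DEVICE=eth4
HWADDR=00:11:22:33:44:55
ONBOOT=yes
BOOTPROTO=none
```

With the HWADDR pinned, the interface comes up reliably under the same name after reboots and driver reloads.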