HomeLab Adventures: FreeNAS Volume 1

Humpty_Dumpty


So I am not going to lie: I am a very sick man, but I am also not afraid to admit it. I have a terrible, terrible addiction, and it is my homelab.


It all started out so innocently… An old Sun Ultra 5 to learn SPARC Solaris at home. A couple of desktops converted over to rackmount cases and racked in a cheap telecom rack in my unfinished basement.


This was very early in my career, when I had a lot to learn and plenty of free time to study. However, that was many moons ago.


I measure the time that has passed since then by the amount of gray that has crept into my beard. As I moved from one role to the next, I found that I had the pick of the litter when it came to retired equipment.


Previously I would have been lucky to land an old Xeon (without virtualization support) to take home, something chock-full of PCI-X cards (or worse, SCSI) that were useless to me in a desktop. Now, however, I was landing quad-core Nehalems (perfect for virtualization) with handfuls of memory and sexy PCIe SAS/SATA RAID controllers.


Oh, and tons of SSDs that were considered too small not six months after they were unboxed. Let’s not even get into my networking setup… as that is a tale for a different day.


Once I had deployed a couple of very nice and fully loaded ESX servers, I came to find that the performance bottleneck in my lab was storage. Sure, I had terabytes of SAS and SATA disk, but it was all local. I had nothing that allowed me to fail over between hosts. Thus began a quest… a quest for the ages.


Knowing myself as I do, I knew that I was not going to be satisfied by throwing a cheap NAS together out of a couple of SATA disks. No, desktop performance was not going to cut it. I needed 15K SAS, a RAID controller with battery backup, a handful of spindles, and a beefy tower to allow for plenty of expansion (yes, all my machines were converted to towers). I also knew I was going to need to use LACP or some other network bonding to cable my creation into my network. Heck, I even dared to check out the cost of a cheap 10Gb small-business-class switch (yup, too expensive… let’s wait a year or so).
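
Speaking of LACP: FreeNAS drives link aggregation through its web UI, but it is FreeBSD under the hood, so the end result is roughly what you would get building a lagg interface by hand. A minimal sketch, with hypothetical em0/em1 interfaces and a placeholder IP (the switch side needs a matching LACP port-channel for the bond to come up):

# ifconfig lagg0 create
# ifconfig lagg0 up laggproto lacp laggport em0 laggport em1
# ifconfig lagg0 192.168.1.10 netmask 255.255.255.0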

Which brings us to today. The day I fired up my first FreeNAS box.

My rough specs are as follows.

  • Gigabyte Z97-HD3
  • Intel Core i3, 3.8GHz
  • 5 x 600GB 15K SAS – RAID-Z1
  • 1 x 32GB SSD
  • 2 x 4TB 7K SATA – RAID 1
  • 16GB memory
  • LSI 9260-8i

So now what – move some VMs onto it and call it a day? Well, that’s no fun. Let’s see what kind of performance we can push through this baby. I mean, after all, we are not using 15K SAS drives for nothing.

Side note: it’s not exactly plug and play when it comes to using SAS drives in a standard tower. Even if you have a SAS-capable controller, you are going to need a backplane of some sort to provide power and I/O connectivity. Finding something that fits the bill, without having to use a cheap one-off backplane, is a challenge to say the least. For my lab I picked up a couple of these. 99% of what you see in the box stores will not support SAS drives, and it’s not always obvious at first glance… you have to check the specs on the side of the box. Also, don’t walk into Fry’s thinking you will find one… I have tried. Microcenter seems to be the only large chain that stocks an internal SAS enclosure.

For testing, I have SSH’d into a Linux desktop that is on the same network as the FreeNAS box. The desktop has only a 1Gb network interface. Both systems are cabled northbound to a Cisco 3560G.

First, let’s mount up our RAID-Z1 volume by sticking this in our /etc/fstab and running mount /mnt.

freenas:/mnt/freenas-vol-1      /mnt    nfs rsize=8192,wsize=8192,timeo=14,intr
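
Assuming the export is reachable, a quick mount and df makes for a decent sanity check:

# mount /mnt
# df -h /mnt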

Boom, there it is: our new fancy mount. Now to run the tests. However, that will come in part 2, as I do not plan to rush through this. As far as I understand, there can be a bit of tuning in FreeNAS, so it might take me a bit to get everything dialed in.


RHEL6 – Quick and Dirty NFS How To

So if you are reading this, you may not be aware of the fact that I already put up a post on NFS on RHEL6 (you can find it here). However, that post is basically just a redirect to another blog post that I ran into on the subject. And despite the fact that it's very well written, and has no apparent spelling mistakes or inappropriate comments about NFS, I felt the need to draft my own post. At this point I really cannot seem to remember anything technical if I don't put up a blog post on the subject. After all, I have been doing so for over 4 years, so why stop now?

Anyway, now that I have got all that out of the way, let's get down to brass tacks.

First, let's create a directory to be shared on our NFS server:

# mkdir -p /shared/nfs

Now let's change the Unix perms so that it's wide open:

# chmod 777 /shared/nfs

Now let's add our new NFS share to /etc/exports. In this example I am sharing it out to my local /24 network.

/shared/nfs     10.1.224.0/24(rw,sync)

Now restart the NFS service:

# service nfs restart
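
As a side note, if NFS is already running you can also re-read /etc/exports without a full restart:

# exportfs -ra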

Now verify that you are sharing via NFS with showmount:

# showmount -e localhost

Export list for localhost:
/shared/nfs 10.1.224.0/24

If you are running iptables on your NFS host, you are going to need to do a bit more to get NFS to work.

If you go by the book, you are going to need to open several ports; the commands below will open up what you need.

# iptables -I INPUT -p tcp -m multiport --dports 111,662,875,892,2049,32803 -j ACCEPT

# iptables -I INPUT -p udp -m multiport --dports 111,662,875,892,2049,32769 -j ACCEPT
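
One caveat worth flagging: ports like 662, 875, 892, 32803, and 32769 are only predictable if you pin them down in /etc/sysconfig/nfs, as some of them float by default. On RHEL6 the relevant variables look like this (restart nfs after changing them):

RQUOTAD_PORT=875
LOCKD_TCPPORT=32803
LOCKD_UDPPORT=32769
MOUNTD_PORT=892
STATD_PORT=662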

The fast and cheap way is to just allow everything from your NFS client:

# iptables -I INPUT -s 10.1.224.55 -j ACCEPT
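
Either way, don't forget to save your rules so they survive a reboot:

# service iptables save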

Now let's go over to the host that we intend to mount our NFS filesystem on and run our showmount command again:

# showmount -e 10.1.224.34
Export list for 10.1.224.34:
/shared/nfs 10.1.224.0/24

Now manually mount your share, as seen below:

# mount -t nfs 10.1.224.34:/shared/nfs /mnt

Now add your share to /etc/fstab so that it's mounted persistently across reboots:

10.1.224.34:/shared/nfs /mnt                    nfs     defaults        0 0

Just to be sure that everything is correct, unmount /mnt and mount it again; this way you will catch any errors in your fstab. It's probably not a bad idea to touch a file under your new NFS mount too, just to make sure that it's writable.
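
Something along these lines should do it (the test file is just an arbitrary name):

# umount /mnt
# mount /mnt
# touch /mnt/testfile && rm /mnt/testfile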

How to Create A Dedicated Storage NIC in XenServer

In VMware ESX, when using NFS storage, you are required to create a separate and additional VMkernel port group to access your NFS storage. This way ESX management traffic travels over one port group and NFS traffic travels over another.

In XenServer the concept is similar; however, it is executed much differently.

First and foremost, you need to be aware of a few limitations in XenServer. While XenServer does allow you to create a dedicated NIC (or bond) for NFS traffic, this NIC must remain unmanaged by XenServer. In contrast, a XenServer's management traffic travels over its "Management interface", which obviously has to be an interface that is managed by XenServer.

Allow me to sum this up: you cannot share a NIC (or NICs) between management traffic and NFS traffic. This means that if you want to create a bond for management traffic and a bond for NFS traffic, you are going to need four free interfaces on your XenServer box.

Now, it is possible to use VLAN tagging and route your management traffic and virtual machine traffic over the same physical interfaces. However, it's important to know that XenServer does not support VLAN tagging on the Management interface, so whatever VLAN you use for management must be the native VLAN in the switch port's configuration.
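
For illustration only, here is roughly what that looks like on a Cisco IOS switch; the interface and VLAN numbers below are made-up placeholders (management on native VLAN 10, VM traffic tagged on VLAN 20):

interface GigabitEthernet0/1
 switchport trunk encapsulation dot1q
 switchport mode trunk
 switchport trunk native vlan 10
 switchport trunk allowed vlan 10,20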

The information below is from the Admin Guide for XenServer 5.5

Switch ports configured to perform 802.1Q VLAN tagging/untagging, commonly referred to as ports with a native VLAN or as access mode ports, can be used with XenServer management interfaces to place management traffic on a desired VLAN. In this case the XenServer host is unaware of any VLAN configuration.

XenServer management interfaces cannot be assigned to a XenServer VLAN via a trunk port.

Bottom line: it's probably best to have separate physical connections for your management traffic and your NFS traffic.

Also, according to the Admin Guide, your NFS network should not be routed. See the words below and read them thusly:

"Before dedicating a network interface as a storage interface for use with iSCSI or NFS SRs, you must ensure that the dedicated interface uses a separate IP subnet which is not routable from the main management interface. If this is not enforced, then storage traffic may be directed via the main management interface after a host reboot, due to the order in which network interfaces are initialized."

OK, now that we have got all that out of the way, let's actually create our dedicated storage NIC.

First we need to get the UUID of the PIF (physical interface) that we want to use. Note that this is just an example using a standalone interface.

# xe pif-list host-name-label=<your_hostname>

Next we reconfigure our PIF:

# xe pif-reconfigure-ip mode=static IP=<your-ip-on-nfs-vlan> netmask=<your-mask> uuid=<pif-uuid>

# xe pif-param-set disallow-unplug=true uuid=<pif-uuid>

# xe pif-param-set other-config:ManagementPurpose="Storage" uuid=<pif-uuid>
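
To verify that the changes took, you can dump the PIF's parameters and eyeball the other-config field:

# xe pif-param-list uuid=<pif-uuid>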

Alternatively, you can use xe pif-forget to remove the interface from the XenCenter database and configure it manually via the XenServer kernel like you would any other interface in Linux; however, this could be more confusing in the long run.

RHEL6 – Network Mounting User Home Directories

Seriously, who does not hate users and their home directories? However, if it wasn't for them, then nothing would ever break and the life of a sysadmin would be a very boring one. Anyway, back to the subject at hand.

First off, in order to mount a user's home directory automatically upon login, you need to make sure that you have autofs installed and configured to start at boot.

Note that for this example, let's assume that our home directories are shared via NFS from the server nfsserver.example.com, on the exported volume /nfs/home/.

So first, on your local system, install autofs via yum if it's not already installed:

$ sudo yum install autofs

Then extend the configuration of your automounter to mount the /nfs/home directory by adding the following line to the bottom of /etc/auto.master. The auto.master file is the master map for the automounter, and it is referenced each time autofs is invoked.

/nfs/home   /etc/auto.guests

In the example above, all accesses to /nfs/home will be directed to the map in /etc/auto.guests.

Now obviously we are going to need to create a file called /etc/auto.guests. Here you will add the line below.

*    nfsserver.example.com:/nfs/home/&

Here we are using the metacharacter "&" to eliminate the need to list out each and every username. By doing so, each user's home directory is mapped by default.

Then start the automounter:

# service autofs start
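
Since the whole point was having this work at boot, enable it via chkconfig as well. You can also test without actually logging in as a user, since autofs mounts on demand; simply referencing a path under /nfs/home should trigger the mount (the username below is a placeholder):

# chkconfig autofs on
# ls /nfs/home/<username>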

RHEL6 – How To Configure an NFS Server

Do you know what's almost as delicious as a tasty waffle with pure AAA Vermont maple syrup smothered all over it? Neither do I.

However, I did run across a very well-written blog post on how to configure an NFS server on RHEL6.

Weird: is it "a NFS server" or "an NFS server"? The latter sounds and looks better, but I don't think it's correct.

Anyway, link below:

http://aaronwalrath.wordpress.com/2011/03/18/configure-nfs-server-v3-and-v4-on-scientific-linux-6-and-red-hat-enterprise-linux-rhel-6/