Today I am finally going to take the time to write an article about my lab. A while back, I posted an article detailing the FreeNAS server that I built to provide shared storage to my virtualization lab. Today, however, I am going to dive in deep and write, in at least a bit of detail, about the other components of my lab.
Over the last few years I have collected quite a bit of network gear. My job has given me the ability to take home decommissioned equipment from time to time, and Craigslist has helped me round out my collection. Below is what I currently have deployed. Not everything is powered on at the moment, as it's crazy loud when all those tiny fans are a'screaming.
- 2x Cisco Catalyst 2960 switches
- 2x Cisco Catalyst 2950 switches
- 2x Cisco Catalyst 3550XL switches
- 2x Cisco 2621XM routers
- 1 Cisco Catalyst 3560G Layer 3 switch
- 1 Cisco 2811 router
- 1 Avocent CPS 810 console server
- 1 Cisco 2112 Wireless LAN Controller
- 2x Cisco Aironet 1100 access points
The equipment is distributed between my home office and my lab in the next room over. Everything is racked and stacked in wall-mounted telco/network racks. A patch panel and a number of copper runs connect everything together.
Currently I have a two-node VMware cluster. HA and DRS are both enabled, and I am running the vCenter Server Appliance to manage it. My motherboards are both Supermicro X8DTi boards running dual quad-core Nehalem Xeons (X5550). I found these boards to be perfect for my needs, as they are pretty much standard E-ATX boards and can fit into any standard E-ATX case. Since I didn't want rack-mount servers, this worked out perfectly for me. Both cases are Cooler Master HAF models with Cooler Master power supplies. These cases let me run some large fans, which help quiet the machines down a bit.
Each server is chock full of 4GB DIMMs, which gives each node a total of 48GB of memory (although I have a couple of failed sticks right now), at least one quad-port Intel NIC, and a battery-backed RAID controller (an LSI 8888-ELP and an LSI 9260-8i, respectively). Each node is backed by a small UPS to ensure that it shuts down cleanly in case of a power outage. Each machine is also chock full of SAS and SATA drives (more on that later).
Above you can see a screenshot of the “Hosts” tab in vCenter. Looks like a memory upgrade might be coming soon. Below you can see both nodes as they are currently deployed. Each machine has a hot-swappable drive caddy, and ESXi 5.5 is installed on an internal SSD.
So as I mentioned above – and posted previously here – I have a dedicated FreeNAS server which acts as shared storage for my ESXi cluster. Its specs are below.
- Gigabyte Z97-HD3
- Intel Core i3 @ 3.8GHz
- 5x 600GB 15K SAS – RAID-Z1
- 1x 32GB SSD
- 2x 4TB 7.2K SATA – RAID 1
- 16GB memory
- LSI 9260-8i
I also have a number of drives locally attached to each ESX node, which I mainly use as storage for my test VMs. All in all, I have approximately 10TB of usable storage (a mix of RAID 1, RAID 5, and ZFS) available to my cluster. Included in this number is an additional NFS mount hosted on a CentOS 6 desktop. I use this share to move ISOs and images between my lab network and my primary workstation, and as a landing zone for Veeam backups of my critical VMs.
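As a rough sanity check on that ~10TB figure, the FreeNAS pools alone can be tallied like this. This is a back-of-the-envelope sketch: it ignores ZFS metadata overhead, filesystem formatting, and base-2 vs. base-10 sizing, and doesn't include the locally attached drives on each node.

```python
# Rough usable-capacity math for the FreeNAS pools listed above.
# RAID-Z1 sacrifices one drive's worth of space to parity;
# a two-drive RAID 1 mirror yields one drive's worth of space.

def raidz1_usable(drives: int, size_gb: float) -> float:
    """Usable capacity of a RAID-Z1 vdev, ignoring ZFS overhead."""
    return (drives - 1) * size_gb

def raid1_usable(size_gb: float) -> float:
    """Usable capacity of a two-drive mirror."""
    return size_gb

sas_pool = raidz1_usable(5, 600)   # 5x 600GB 15K SAS in RAID-Z1 -> 2400 GB
sata_pool = raid1_usable(4000)     # 2x 4TB 7.2K SATA in RAID 1  -> 4000 GB

print(f"SAS pool:      {sas_pool / 1000:.1f} TB usable")
print(f"SATA pool:     {sata_pool / 1000:.1f} TB usable")
print(f"FreeNAS total: {(sas_pool + sata_pool) / 1000:.1f} TB usable")
```

That puts the shared storage at roughly 6.4TB usable, with the local RAID 1/RAID 5 datastores and the CentOS NFS share making up the rest of the ~10TB.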
I am currently running a CentOS 6 Kickstart server to spin up my VMs, and I have a Puppet server to manage their configs. I use GitLab for revision control and Veeam for backups. Everything is monitored via Zenoss and sends syslog to my local Fedora desktop (for the time being). I am serving Active Directory Domain Services and DNS from a Windows Server 2008 VM. I also have vCOps up and running, as well as CapacityIQ.
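For anyone who hasn't used Puppet, the config management mentioned above boils down to declarative manifests. The following is a hypothetical sketch, not pulled from my actual setup: the node name, module path, and the choice of NTP as the managed service are all illustrative.

```puppet
# Hypothetical node definition: keep NTP installed, configured, and running.
node 'testvm01.lab.local' {
  package { 'ntp':
    ensure => installed,
  }

  file { '/etc/ntp.conf':
    ensure  => file,
    source  => 'puppet:///modules/ntp/ntp.conf',  # served from a hypothetical ntp module
    require => Package['ntp'],
    notify  => Service['ntpd'],
  }

  service { 'ntpd':
    ensure => running,
    enable => true,
  }
}
```

The nice part is that the Puppet agent converges every node toward this state on each run, so a VM kicked from the Kickstart server comes up fully configured without any hand-editing.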
Currently I am spending most of my lab time with Puppet. However, vCAC is next on my list, as it will be used in conjunction with Puppet to build VMs. I also plan to integrate with Amazon EC2 at some point. OpenStack (running in VMs) is on my list as well, as I really want some exposure to it. On the network side, I plan to get some sort of weathermap-type software up and running so that I can monitor network load and traffic. NSX is always a possibility as well. Splunk too.
Although it's taken me a bit of time to get my lab up and running, I have very much enjoyed the buildout process. Being able to migrate VMs between ESXi nodes insulates me from server failures, so a hardware fault doesn't leave me dead in the water. Veeam lets me back up my mission-critical VMs, as the last thing I want to do is spend my time rebuilding infrastructure that I have already built once. I also have enough spare network equipment to stand up test networks as needed without risking impact to the rest of my lab.
Being able to have an environment like this in my home gives me the ability to test out almost any technology I want to learn. I am no longer dependent on my employer to deem a lab a necessity, nor do I have to argue and possibly compromise because a teammate would rather do things differently.
If anyone accidentally deletes my VMs, it's going to be me. And even then… I have backups.