RAID Levels Explained and Simplified

As a Systems Administrator, I deal with RAID 1 (mirroring) pretty much exclusively. Hell, nowadays when you build a server it mirrors your Operating System disks for you automatically, which means you do not even need to understand what is happening behind the scenes. You just pop your two drives in and go. However, the world of the SAN Administrator is much more complicated.

First off, it's important to know that RAID stands for either “Redundant Array of Independent Disks” or, less commonly, “Redundant Array of Inexpensive Disks”. Either way you slice it (pun intended), the basic idea of RAID is to combine multiple hard disks to increase performance, increase redundancy, or both.

Before I get started, it's important to introduce the term LUN. A LUN is a logical disk that consists of raw physical disk space. LUNs are created as a basic part of the storage provisioning process, and they are presented across a SAN to a server as a single physical disk.
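To a Linux host, a presented LUN just looks like one more disk. As a quick illustration (the device name /dev/sdb and the size here are made up, not from any real box):

    # The new LUN shows up as an ordinary SCSI disk
    fdisk -l /dev/sdb

    Disk /dev/sdb: 80.0 GB, 80026361856 bytes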

Note that the title of this article is “RAID Levels Explained and Simplified”, and when I say simplified I mean it. I am going to give a brief overview of the most common RAID levels and then present a strength and a weakness for each. Scroll down to the bottom of the article for links to more in-depth articles and web pages.

RAID 0: Striped…No Fault Tolerance

OK, in my opinion, and in the opinion of many others, RAID 0 is not even RAID, because there is no redundancy. If a disk fails, you are toast. Basically you take a slice of two or more disks and create a LUN. For example, let's say that you as the Sysadmin request one 80GB disk from your local SAN Admin. In the scenario below, your SAN guru would carve eight 10GB blocks and present them in order (blocks 1, 2, 3, 4, 5, 6, 7, 8) to you as a single LUN. RAID 0 provides good read and write performance. In the end, RAID 0 is striping, which is probably the most important thing you need to know about it.

[Figure: RAID 0 striping blocks across multiple disks into a single LUN]
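If you want to see striping in action without a SAN, Linux software RAID will happily stripe plain local disks. This is just a sketch, and the device names /dev/sdb and /dev/sdc are assumptions; mdadm will destroy whatever is on them:

    # Stripe two disks into a single RAID 0 device
    mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb /dev/sdc

    # Then use the striped device like any single disk
    mkfs.ext3 /dev/md0
    mount /dev/md0 /mnt/stripe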


Linux SAN Disk Management via DM-Multipath

A little background…
Most of the time, I have used the RDAC driver to manage SAN disks in Linux. The RDAC driver hides the complexity of multiple paths and presents redundant paths as a single path, which you can then use as you would a standard SCSI / IDE / SAS / SATA drive. Seeing only one device makes managing your disks much easier.

However, where I work we only use RDAC with our IBM FastT, Sun 6140, and STK FlexLine storage arrays, all of which are LSI-based. RDAC is not for non-LSI storage such as Hitachi and EMC Clariion arrays. On those servers we manage SAN disks with DM-Multipath.

Setup…
Setting up DM-Multipath is not hard. First you need to make sure that you install the device-mapper-multipath package, then you will need to configure your multipath.conf and drop it into /etc. Below is some info on how to do so.

http://kbase.redhat.com/faq/docs/DOC-3691
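To give you a rough idea of what goes in it, a bare-bones multipath.conf might look something like this (the blacklist entry and the naming choice are just common starting points, not something pulled from the Red Hat doc above):

    # /etc/multipath.conf -- minimal illustrative example
    defaults {
            user_friendly_names yes   # get mpath0, mpath1, ... instead of raw WWIDs
    }

    blacklist {
            devnode "^sda$"           # keep multipath away from the local boot disk
    }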

You will also need to make sure that you enable the multipathd daemon. This daemon is in charge of checking for failed paths.
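On a Red Hat style box that boils down to something like this (adjust for your distro):

    # Start multipathd now and have it come back at boot
    service multipathd start
    chkconfig multipathd on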

Multipath Command…
For those used to using RDAC, DM-Multipath takes some getting used to, especially when you see the output from fdisk -l.

In one particular instance I was given /dev/sdm as the name of the new disk on this box. The output from the fdisk -l command is not exactly helpful, as there are a ton of pseudo devices showing up. This is where the multipath command comes in handy.
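Running multipath -ll groups the pseudo devices by the LUN they actually belong to. The output below is fabricated just to show the shape of it; the WWID, array type, and path names are invented:

    multipath -ll
    mpath3 (360060e8004f2a300000012a300000301) dm-3 HITACHI,OPEN-V
    [size=80G][features=0][hwhandler=0]
    \_ round-robin 0 [prio=1][active]
     \_ 2:0:0:4 sdm 8:192   [active][ready]
    \_ round-robin 0 [prio=0][enabled]
     \_ 3:0:0:4 sdq 65:0    [active][ready]

Here /dev/sdm and /dev/sdq are two paths to the same LUN, and /dev/mapper/mpath3 is the single device you actually partition and use.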
