3ware RAID controllers

These are PCI (or PCI Express) cards that are well supported under Linux.

The 3ware web interface

You can (and are encouraged to) do most things via the web interface, which is accessed via the URL http://localhost:888. For security reasons, the web interface is not usually exposed to remote machines, but it can be exposed by changing a setting in the config file:
/etc/3dm2/3dm2.conf
This file controls many settings (including the passwords for both the generic and administrative users). It also specifies who receives email when there are important messages.
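For example, the relevant entries in my 3dm2.conf look something like the following (directive names and values can vary between 3DM2 versions, so treat these as illustrative rather than definitive):

Port 888
RemoteAccess 0
EmailEnable 1
EmailSender 3ware@localhost
EmailRecipient root@localhost
EmailServer localhost

Setting RemoteAccess to 1 (and restarting the 3dm2 daemon) is what opens the web interface to other machines.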

There are units, ports, and volumes. A port is a single physical disk drive. A unit can be a RAID array composed of several drives, a single drive in use as an ordinary disk, or a single drive set up as a spare.

Delete Unit should only be done if the corresponding device is unmounted and inactive, since it will cause data loss. It is harmless for a spare drive. If you want to change a single drive into a spare, you have to delete its unit first, then create a new unit of type spare (CLI equivalents of these operations are sketched after the Rescan description below).

Remove Unit should likewise only be done if the device is unmounted and inactive. It is done to prepare a drive for physical removal.

Rescan controller is done after inserting a new drive, or after an accidental (unintentional) Remove Unit. It finds new drives and adds them to a list at the end of the display, where they can be selected and configured.
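For reference, here is a rough sketch of the same operations using the command line tool described in the next section, assuming the controller is c2, the unit is u1, and the drive is on port p7 (adjust these names for your system, and remember that deleting a unit destroys its data):

tw_cli /c2/u1 del
tw_cli /c2 add type=spare disk=p7
tw_cli /c2/u1 remove
tw_cli /c2 rescan

The first deletes a unit, the second creates a new spare unit from the drive on port p7, the third is the Remove Unit step done before physically pulling a drive, and the last rescans for new drives.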

The 3ware CLI interface

An alternative approach to working with the 3ware controller is to use what they call the CLI. This is a command line tool, invoked on a Linux system as:
tw_cli
There are a myriad of commands, with a syntax all their own. One nice thing is that if you run the CLI as root, you have (as near as I can tell) administrative access without knowing or having to remember the RAID administrator's password. The show command will tell you what 3ware controllers are present (and what they are called). For example, one of my systems yields:
c2    9500S-8      8       8        3       0        4       4 
This tells me the controller is called "c2" and has 8 ports, with 8 drives and 3 units. It happens to be a 3ware 9500S-8.
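Commands can be typed at the CLI's interactive prompt, or given as a one-shot invocation on the shell command line, for example:

tw_cli /c2 show

Both forms produce the same output.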

/c2 show does a controller-specific show, which on this machine yields

Unit  UnitType  Status         %Cmpl  Stripe  Size(GB)  Cache  AVerify  IgnECC
------------------------------------------------------------------------------
u0    SINGLE    OK             -      -       279.387   OFF    OFF      -        
u1    SPARE     OK             -      -       931.505   -      OFF      -        
u2    RAID-5    OK             -      64K     4656.56   OFF    OFF      OFF      

Port   Status           Unit   Size        Blocks        Serial
---------------------------------------------------------------
p0     OK               u0     279.46 GB   586072368     3NF0AAE0            
p1     OK               u2     931.51 GB   1953525168    WD-WMATV1596184     
p2     OK               u2     931.51 GB   1953525168    WD-WMATV0813229     
p3     OK               u2     931.51 GB   1953525168    WD-WMATV0802857     
p4     OK               u2     931.51 GB   1953525168    WD-WMATV1595566     
p5     OK               u2     931.51 GB   1953525168    WD-WMATV0789713     
p6     OK               u2     931.51 GB   1953525168    WD-WMATV0812996     
p7     OK               u1     931.51 GB   1953525168    WD-WMATV1599137     
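You can also drill down to an individual unit or port for more detail. A couple of illustrative queries (which attributes are supported depends on the controller model and CLI version):

tw_cli /c2/u2 show
tw_cli /c2/p0 show serial

The first shows details for the RAID-5 unit; the second reports the serial number of the drive on port 0.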
On this machine, we apply some Linux kernel tuning in the /etc/rc.local file, as follows:
#echo "64" > /sys/block/sdb/queue/max_sectors_kb

echo "512" > /sys/block/sda/queue/nr_requests
blockdev --setra 16384 /dev/sda
echo "deadline" > /sys/block/sda/queue/scheduler

echo "512" > /sys/block/sdb/queue/nr_requests
blockdev --setra 16384 /dev/sdb
echo "deadline" > /sys/block/sdb/queue/scheduler

echo "512" > /sys/block/sdc/queue/nr_requests
blockdev --setra 16384 /dev/sdc
echo "deadline" > /sys/block/sdc/queue/scheduler
Apparently this improves performance for the RAID array. On this machine, RAID unit 0 (the single drive) appears as /dev/sda, and the RAID-5 array (unit 2) appears as /dev/sdb. The spare drive does not appear at all, which is appropriate.
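To check after a reboot that these settings actually took effect, something like the following works (the scheduler file shows the active scheduler in square brackets):

cat /sys/block/sda/queue/nr_requests
blockdev --getra /dev/sda
cat /sys/block/sda/queue/scheduler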

Note that this gigantic "disk" is best handled using a GPT partition table, set up with GNU Parted rather than fdisk.
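A minimal sketch of doing that with parted, assuming the RAID-5 unit shows up as /dev/sdb as above (double-check the device name first, since this destroys anything already on it):

parted -s /dev/sdb mklabel gpt
parted -s /dev/sdb mkpart primary 0% 100%
mkfs.ext4 /dev/sdb1

The first command writes a fresh GPT label, the second creates one partition spanning the whole unit, and the third puts a filesystem on it.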

