3TB IDE Software RAID-5 Server!!!
I just finished a server build that involved a large amount of time, effort, and research to complete. I think the details are worth posting here for others to benefit.
The Mission : Build a reliable, massively large Linux file server for less than $1K/TB.
The Result : 3 Terabytes of disk storage for $3100.00!!!
Used Unisys/ALR Server chassis (circa 1999) - 15 hot swap slots ***
Used ASUS P4T-533 MB with 1GB RIMM 4200 RDRAM (6 PCI Slots)
P4B @ 3.06GHz
2 Promise TX 133 ATA Cards
2 Promise TX 100 ATA Cards
2 Netgear GA-620 Fiber Gigabit adapters
12 250GB Hitachi ATA-100 Drives
This machine has a total of 12 IDE ports with NO interrupt sharing!!! You may think this to be impossible, but, I assure you, it works. After much trial and error, I discovered the key to get all of these cards to work together.
Here is the deal.....Promise ATA adapters of the SAME model only have enough ROM address space for two cards, for a total of 4 ports.... more than two cards results in the machine locking up or not seeing the additional drives. However, other Promise models use a different address space, but are also limited to two cards for the same reason. Soooo, my server's IDE ports are as follows:
4 IDE ports on 133Tx2 Cards
4 IDE ports on 100Tx2 Cards
2 Ports on the motherboard IDE ports (Intel ICH2)
2 Ports on the motherboard Raid ports (Promise 20276 in JBOD mode)
I disabled every other on-board device (serial, USB, parallel, audio, etc.) to free up as many IRQs as possible. As far as IRQs are concerned, I learned a lot about the idiotic way the Intel architecture/BIOS assigns interrupts. I still do not understand it; only through luck did I discover a motherboard that did not gang a bunch of devices onto one IRQ (which absolutely KILLS RAID performance)----the ASUS P4T-533. I tried 9 other boards from Abit, DFI, Giga-Byte, IWILL, Intel, etc.... all of them had interrupt conflicts and other IRQ-sharing issues. Intel OEM boards are the worst by far!!??!! The P4T-533 with BIOS revision 1007 boots with the following assignments:
Video - IRQ 11
On board IDE - IRQ 14/15
Network Adapter - IRQ 4
Network Adapter - IRQ 9
Mass Storage Device - IRQ 3
Mass Storage Device - IRQ 10
Mass Storage Device - IRQ 12
Mass Storage Device - IRQ 7
Mass Storage Device - IRQ 5
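On Linux you can verify assignments like these (and spot sharing) by reading /proc/interrupts: every device registered on an IRQ shows up at the end of that IRQ's line, comma-separated. A minimal sketch of a sharing checker, assuming the standard /proc/interrupts layout (the sample device names are mine, not from this box):

```python
import re

def shared_irqs(text):
    """Return {irq: [device, ...]} for IRQs claimed by more than one device.

    Parses /proc/interrupts-style text: IRQ number, per-CPU counts,
    controller name, then a comma-separated device list.
    """
    shared = {}
    for line in text.splitlines():
        m = re.match(r"\s*(\d+):((?:\s+\d+)+)\s+(\S+)\s+(.*)", line)
        if not m:
            continue  # skip the CPU header row and NMI/LOC/ERR lines
        devices = [d.strip() for d in m.group(4).split(",") if d.strip()]
        if len(devices) > 1:
            shared[int(m.group(1))] = devices
    return shared

if __name__ == "__main__":
    with open("/proc/interrupts") as f:
        for irq, devs in shared_irqs(f.read()).items():
            print(f"IRQ {irq} is SHARED by: {', '.join(devs)}")
```

On a box laid out like this one, the checker should print nothing — every mass storage device and NIC sits alone on its IRQ.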
The important concept for Linux RAID is: ONE drive per channel ONLY!!
Slave devices kill performance, and in RAID-5 disk groups, if one drive fails it will most likely take the entire channel down, including any slave device on it....say bye-bye to your data.
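Under the 2.4 IDE driver, each channel gets two device nodes — the master on the even letter, the slave on the odd one — so a one-drive-per-channel layout means populating only the master node of every channel. A quick sketch (my illustration, not from the build notes) of the 12 device names such a layout uses:

```python
import string

def master_devices(channels):
    """IDE master device names: channel n's master is /dev/hd + the (2n)'th letter."""
    return ["/dev/hd" + string.ascii_lowercase[2 * n] for n in range(channels)]

print(master_devices(12))
# 12 channels -> /dev/hda, /dev/hdc, /dev/hde, ... every other letter through /dev/hdw
```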
So I needed a BIG case to house all of those drives, but did not want to pay big $$$ to get one. Down to the local PC recycling warehouse I went. After digging for about an hour, I found in the corner a big, black Unisys/ALR Revolution Quad-6 Pentium Pro server with 16 hot swap slots (love those old tower servers) and dual redundant 750 watt power supplies for $35.00....about 3 feet high and 18 inches wide, with a total of 12 aluminum-framed cooling fans. :-) Anyway, the key to finding an older, high-end server chassis is that the power supplies MUST have 3.3 volt outputs. Most older pre-ATX power supplies were 5 and 12VDC only. This chassis puts out 75 amps at 3.3 volts, 80 amps at 5 volts, and 55 amps at 12 volts....more than enough, plus redundancy. The drive cages were fully populated with 2GB SCA SCSI drives. I removed the rails from all the SCSI drives, tossed the SCSI disks into the trash, and mounted the rails on the big Hitachis. The SCA backplanes on the cages came out with 4 screws, exposing the back of the cages for the IDE power and data cables....Voila!!!
Just like it was designed that way.
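As a sanity check on that 12 volt rail, here is a back-of-the-envelope spin-up calculation. The 2 A per-drive figure is my assumption for a 7200 RPM ATA disk, not a number from the Hitachi datasheet:

```python
# Back-of-the-envelope 12V budget for all drives spinning up at once.
DRIVES = 12
SPINUP_AMPS_PER_DRIVE = 2.0   # assumed worst-case 12V draw at spin-up
RAIL_AMPS_12V = 55.0          # what this chassis supplies on the 12V rail

demand = DRIVES * SPINUP_AMPS_PER_DRIVE
margin = RAIL_AMPS_12V - demand
print(f"spin-up demand: {demand:.0f} A of {RAIL_AMPS_12V:.0f} A ({margin:.0f} A to spare)")
```

Even with every drive spinning up simultaneously (no staggered spin-up on plain IDE), the rail has better than 2x headroom under that assumption.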
I decided to use Red Hat 9 (kernel 2.4.20-x) due to the $hitty performance of 2.6.x software RAID. After patching the kernel for EVMS, I created the md device and made some volume groups. Formatting took a little over 7 hours, but when I saw 3,046,233,510,000 bytes free, I felt it was worth two weeks' work.....LOL
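I went the EVMS route, but for anyone on a stock 2.4 kernel the classic raidtools path would be an /etc/raidtab roughly like the sketch below, followed by `mkraid /dev/md0`. This is my illustration (device names assume the master of each of the 12 channels), not the exact config from this build:

```
# /etc/raidtab -- RAID-5 across the master drive of each IDE channel
raiddev /dev/md0
        raid-level              5
        nr-raid-disks           12
        nr-spare-disks          0
        persistent-superblock   1
        parity-algorithm        left-symmetric
        chunk-size              64
        device                  /dev/hda
        raid-disk               0
        device                  /dev/hdc
        raid-disk               1
        # ...one device/raid-disk pair per channel, through...
        device                  /dev/hdw
        raid-disk               11
```

With 12 drives in RAID-5, one drive's worth of capacity goes to parity, so usable space is 11 of the 12 disks.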