Old 10-25-2004, 09:10 PM   #1
sanwizard
LQ Newbie
 
Registered: Sep 2004
Posts: 6

Rep: Reputation: 0
3TB IDE Software RAID-5 Server!!!


I just finished a server build that took a large amount of time, effort, and research. I think the details are worth posting here so others can benefit.

The Mission: Build a reliable and massively large Linux file server for less than $1K/TB.

The Result: 3 terabytes of disk storage for $3100.00!!!

Mission Accomplished!!!!

System Specs:

Used Unisys/ALR server chassis (circa 1999) - 15 hot-swap slots ***
Used ASUS P4T-533 MB with 1GB RIMM 4200 RDRAM (6 PCI slots)
P4B @ 3.06GHz
2 Promise TX 133 ATA cards
2 Promise TX 100 ATA cards
2 Netgear GA-620 fiber Gigabit adapters
12 250GB Hitachi ATA-100 drives

This machine has a total of 12 IDE ports with NO interrupt sharing!!! You may think that's impossible, but, I assure you, it works. After much trial and error, I discovered the key to getting all of these cards to work together.

Here is the deal.....Promise ATA adapters of the SAME model only have enough ROM address space for two cards, for a total of 4 ports....more than two cards results in the machine locking up or not seeing the additional drives. Other Promise models use different address space, but they are also limited to two cards for the same reason. Soooo, my server's IDE ports are as follows (quick detection check after the list):

4 IDE ports on the 133 TX2 cards
4 IDE ports on the 100 TX2 cards
2 ports on the motherboard IDE controller (Intel ICH2)
2 ports on the motherboard RAID controller (Promise 20276 in JBOD mode)
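
A quick way to confirm everything enumerated (generic 2.4-era commands; the exact ID strings will vary by card and kernel):

# All four Promise cards plus the two on-board controllers should show
# up on the PCI bus; a ROM clash between same-model cards usually means
# one of them simply vanishes from this list.
lspci | grep -i -e promise -e ide

# On a 2.4 kernel, each interface the IDE driver claimed gets an entry
# here (ide0, ide1, ...).
ls /proc/ide/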

I disabled every on-board device I could - serial, USB, parallel, audio, etc. - to free up as many IRQs as possible. As far as IRQs are concerned, I learned a lot about the idiotic way Intel-based architectures/BIOSes assign interrupts. I still do not understand it; only through luck did I discover a motherboard that did not gang up a bunch of devices on 1 IRQ (which absolutely KILLS RAID performance) - the ASUS P4T-533. I tried 9 other boards from Abit, DFI, Gigabyte, IWILL, Intel, etc....all of them had interrupt conflicts and other IRQ-sharing issues. Intel OEM boards are the worst by far!!??!! The P4T-533 with BIOS revision 1007 boots with the following assignments (sanity check below the list):

Video - IRQ 11
On board IDE - IRQ 14/15
Network Adapter - IRQ 4
Network Adapter - IRQ 9
Mass Storage Device - IRQ 3
Mass Storage Device - IRQ 10
Mass Storage Device - IRQ 12
Mass Storage Device - IRQ 7
Mass Storage Device - IRQ 5
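
You can sanity-check the assignments from the running system (generic commands, not specific to this board):

# Every controller and NIC should sit on its own line here; two devices
# sharing an IRQ show up stacked on the same number.
cat /proc/interrupts

# Watch the counters while the array is under load; a controller whose
# count never moves points at a conflict.
watch -n1 cat /proc/interrupts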

The important concept for Linux RAID is: 1 drive per channel ONLY!!
Slave devices kill performance, and in RAID-5 disk groups, if one drive fails it will most likely take the entire channel down, including any slave device....say bye-bye to your data.
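
You can see the slave penalty for yourself with hdparm (hypothetical device names; with one drive per channel on a 2.4 kernel, the disks land on hda, hdc, hde, ...and there are no hdb-style slaves at all):

# Raw sequential read of a single drive that owns its channel.
hdparm -tT /dev/hde

# Now hit a master and its slave on the SAME cable at the same time;
# combined throughput collapses, because ATA talks to only one device
# per channel at a time.
hdparm -tT /dev/hda & hdparm -tT /dev/hdb & wait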

So I needed a BIG case to house all of those drives, but did not want to pay big $$$ to get one. Down to the local PC recycling warehouse I go. After digging for about an hour, I find in the corner a big, black Unisys/ALR Revolution Quad-6 Pentium Pro server with 16 hot-swap slots (love those old tower servers) and dual redundant 750 watt power supplies for $35.00....about 3 feet high and 18 inches wide, with a total of 12 aluminum-framed cooling fans. :-) Anyway, the key to finding an older, high-end server chassis is that the power supplies MUST have 3.3 volt outputs. Most older pre-ATX power supplies were 5 and 12VDC only. This chassis puts out 75 amps of current at 3.3 volts, 80 amps at 5 volts, and 55 amps at 12 volts....more than enough, plus redundancy. The drive cages were fully populated with 2GB SCA SCSI drives. I removed the rails from all the SCSI drives, tossed the SCSI disks into the trash, and mounted the rails on the big Hitachis. The SCA backplanes on the cages came out with 4 screws, exposing the back of the cages for the IDE power and data cables....Voila!!!
Just like it was designed that way.

I decided to use Red Hat 9 with a 2.4.20-x kernel due to the $hitty performance of 2.6.x software RAID. After patching the kernel to use EVMS, I created the md device and made some volume groups. Formatting took a little over 7 hours, but when I saw 3,046,233,510,000 bytes free, I felt it was worth two weeks' work.....LOL
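
For reference, the same layout with plain mdadm and ext3 looks roughly like this (a sketch only; I actually went through EVMS, and your device names will differ):

# One RAID-5 set across all 12 drives: one partition per disk, every
# disk a master on its own channel (hda, hdc, ..., hdw on 2.4 kernels).
mdadm --create /dev/md0 --level=5 --raid-devices=12 \
    /dev/hd[acegikmoqsuw]1

# The initial parity build is the multi-hour part; watch progress here.
cat /proc/mdstat

# One big ext3 filesystem on top (I layered EVMS volume groups here
# instead).
mke2fs -j /dev/md0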
 
Old 11-21-2004, 11:31 PM   #2
mpower
LQ Newbie
 
Registered: Jun 2004
Posts: 21

Rep: Reputation: 15
All I have to say is "wow". BTW, I thought they came out with 1-terabyte HDs already? Is it possible to use those instead of having to use 12 drives like you did?
 
  

