LinuxQuestions.org
#1 - 09-22-2005, 09:22 AM
Nitrowolf (LQ Newbie)

Software RAID5 - poor write performance & freezing


I've been playing with the software RAID 5 capabilities of the 2.6.13 kernel. After numerous tests, I've settled on a 128 KB stripe (chunk) size across four 250 GB drives.

After the array is created and synced, I get really poor write performance: about 8 MB/s, which is 25% or less of a single drive. I'm not looking for stellar performance, but something akin to a single drive is what I was expecting.

In addition, when writing to the array, the system appears to buffer a ton of data and then dump it to the RAID array all at once, halting all other I/O on the array until the write is done. This is a HUGE problem, since I need to be able to write to and read from the array at the same time in modest amounts. I can easily do what I need with a single disk... why isn't this working with the RAID 5 setup?

So in short:

How can I increase the write performance of software RAID 5 in Linux?
How can I smooth out writes to the array so the system doesn't buffer and then dump the entire contents of the buffer at once, halting all other I/O?
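
For reference, this is the sort of tuning I was planning to experiment with next. The values below are just guesses, and I'm not even sure the stripe cache setting exists on 2.6.13:

Code:
# Give the RAID 5 driver a bigger stripe cache so it can assemble full
# stripes before writing them out (guessed value; newer kernels expose
# this in sysfs, I have not confirmed it is there on 2.6.13)
echo 4096 > /sys/block/md0/md/stripe_cache_size

# Start flushing dirty pages earlier and in smaller batches, instead of
# buffering a ton of data and dumping it all at once (guessed values)
sysctl -w vm.dirty_background_ratio=5
sysctl -w vm.dirty_ratio=10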

Thanks!
 
#2 - 09-22-2005, 12:34 PM
ironwalker (Member)
I'm getting:

/dev/md1:
Timing cached reads: 1904 MB in 2.00 seconds = 949.77 MB/sec
Timing buffered disk reads: 150 MB in 3.04 seconds = 49.35 MB/sec

with 4 Raptors... the only difference between RAID 5 and no RAID with these drives is that my read numbers go down to 12... cached reads stay at nine hundred something.

/dev/md1:
readonly = 0 (off)
readahead = 256 (on)
geometry = 15648/2/4, sectors = 97642752, start = 0

/dev/md2:
Timing cached reads: 1900 MB in 2.00 seconds = 948.72 MB/sec
Timing buffered disk reads: 160 MB in 3.04 seconds = 52.71 MB/sec

Not sure how to improve performance on RAID 5 other than adjusting the stripe size.
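
If it helps, this is roughly how I pull those numbers, and how the readahead can be bumped if you want to play with it (the 4096 below is just an example value):

Code:
# Benchmark cached and buffered reads on the array
hdparm -tT /dev/md1

# Show the current readonly/readahead/geometry settings
hdparm /dev/md1

# Raise readahead from the default 256 sectors (example value)
blockdev --setra 4096 /dev/md1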

Quote:
How can I smooth out writes to the array so the system doesn't buffer and then dump the entire contents of the buffer at once, halting all other I/O?
Not having experienced this, I can't answer... sorry.
 
#3 - 09-22-2005, 03:37 PM
Electro (Guru)
I think a 128-kilobyte stripe is huge for a single-processor system to calculate the parity information for a RAID 5 array. A dual- or multi-processor system is really needed for a software RAID 5 setup. You could instead set up two RAID 0 arrays: when you format /dev/md0, make /dev/md1 your journal device. You get redundancy and you also get increased write throughput, but at the cost of 250 GB less space. Also, the read speed of RAID 0 is better than RAID 5 because of lower latency and slightly higher data throughput. You should leave RAID 5 to hardware RAID controllers like 3ware.
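
Something along these lines is what I mean. The device names and options below are just examples, adjust them for your own drives:

Code:
# Two 2-disk RAID 0 arrays (example device names)
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sda1 /dev/sdb1
mdadm --create /dev/md1 --level=0 --raid-devices=2 /dev/sdc1 /dev/sdd1

# Turn md1 into an external journal device, then point the md0
# filesystem's journal at it
mke2fs -O journal_dev /dev/md1
mkfs.ext3 -J device=/dev/md1 /dev/md0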
 
#4 - 09-25-2005, 09:49 PM
Nitrowolf (LQ Newbie, Original Poster)
Quote:
I think a 128-kilobyte stripe is huge for a single-processor system to calculate the parity information for a RAID 5 array. [...] You should leave RAID 5 to hardware RAID controllers like 3ware.
Well, that sounds reasonable... I will try playing with the chunk size and lowering it. The problem is that most of the data on this array will be between 3 and 9 GB per file, which is why I went with a larger stripe size.
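
If I understand mdadm correctly, lowering the chunk size means re-creating the array (and wiping it), something like this with example device names:

Code:
# Stop the existing array and re-create it with a smaller 64 KB chunk
mdadm --stop /dev/md0
mdadm --create /dev/md0 --level=5 --raid-devices=4 --chunk=64 \
    /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1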

Regardless... several people I've talked to say they have no problem getting write speeds in the 20 MB/s range on a PIII/1 GHz system with software RAID 5.

I set this array up as a giant RAID 0 array (1 TB) and got about 20 MB/s sustained write speed. That's about where I'd expect a RAID 5 to be, or at least where I'd want it to be for my application.

Running this as a RAID 1 setup and losing 500 GB of space is not really an option, since space and controller ports are at a premium.
 
  

