Old 01-20-2009, 06:43 PM   #1
spankbot
Member
 
Registered: Aug 2007
Posts: 131

Rep: Reputation: 16
Poor RAID-0 performance. mdadm, SATA, CentOS 5


We have been using a software RAID setup for years as a cheap way to get higher disk I/O during some of our experiments. The old setup used two 140 GB SCSI disks attached to an Adaptec Ultra160 card. With this setup we achieved a constant write speed of about 40 MB/s.

We have a new 8-core Tyan (S5393) machine and are attempting to create a new software RAID-0 setup with three 1 TB Seagate drives (model ST31000340AS). On paper, the Seagate drives have more cache and are capable of much higher sustained I/O speeds. However, we have found this setup to be very unreliable. We would like to be able to write at a constant 120 MB/s, and sometimes this setup achieves that, but other times it seems to choke and hiccup, leaving us with 100 MB/s or less. Sometimes we can't even get a constant 40 MB/s. It's very inconsistent, unlike the old SCSI setup. There are no errors in the system log, and we are careful to monitor the CPUs and the 16 GB of RAM.

Questions:

What could we be running into?

What's a good way to test my software RAID's disk I/O? I'm using a tool called fio now.
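For reference, the sequential-write test I run with fio looks roughly like the job below (the mount point, file size, and queue depth are only placeholders for illustration):

Code:
# large sequential writes with direct I/O and the async libaio engine
fio --name=seq-write --filename=/mnt/md0/fio-testfile \
    --rw=write --bs=1M --size=8G \
    --direct=1 --ioengine=libaio --iodepth=16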

Are there any tweaks I can make to the RAID setup to improve speed? We are basically writing one large multi-gig file to the array.
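One tweak I have been wondering about, though I'm not sure it matters for this workload, is the chunk size the array was created with and whether the filesystem layout matches it. Something along these lines, with the values and device names only as examples:

Code:
# recreating the array with a larger chunk size destroys its contents
mdadm --create /dev/md0 --level=0 --raid-devices=3 --chunk=256 /dev/sdb /dev/sdc /dev/sdd
# tell ext3 about the 256 KB chunk: stride = chunk size / block size = 256 KB / 4 KB = 64
mkfs.ext3 -b 4096 -E stride=64 /dev/md0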

What kind of performance should be expected?

Any other tips would be greatly appreciated.
 
Old 01-21-2009, 04:48 PM   #2
spankbot
Member
 
Registered: Aug 2007
Posts: 131

Original Poster
Rep: Reputation: 16
UPDATE: Even with hardware RAID, performance is still poor. Lots of I/O wait.
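In case it's useful to anyone, the wait is easy to see with iostat from the sysstat package; something like the following during a write run shows per-device utilization and wait:

Code:
# extended per-device statistics, refreshed every 5 seconds
iostat -x 5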

Last edited by spankbot; 05-04-2009 at 02:35 PM.
 
Old 09-18-2009, 01:39 PM   #3
Cadstar
LQ Newbie
 
Registered: Jan 2009
Location: Central New York State
Distribution: CentOS, Debian, Fedora
Posts: 2

Rep: Reputation: 0
Question

I would like to see some more input from others on this topic.
I too have seen a massive difference in performance after switching from CentOS 4 to CentOS 5.
I have tried changing the hardware RAID controller from an IBM ServeRAID 8k to a MegaRAID SAS, but this made no difference.
Given the very same workload, CentOS 4 seems to be much more consistent and efficient in disk writes.

We are testing by transferring 4.5 GB of data across the network (Gigabit Ethernet) to the disk. I have also tested switching from the onboard Broadcom NIC to an Intel dual-port PCIe Gigabit adapter; this made no difference even with the Intel DMA accelerator installed.
I run sar during the transfer to monitor write bandwidth, the 1-minute load average, and the average I/O wait percentage.

Tests were done on an IBM x3500 server: 2 x quad-core 3.0 GHz Xeon CPUs, 32 GB RAM, 8 x 300 GB SAS drives in a RAID 10 configuration.
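For what it's worth, the monitoring side of each run looks roughly like this (the log file names are just placeholders; the 4.5 GB network copy is started separately while these are running):

Code:
# collect 300 one-second samples during the transfer
sar -b 1 300 > sar_io.log &     # block I/O rates (bwrtn/s)
sar -q 1 300 > sar_load.log &   # run queue length and load averages
sar 1 300 > sar_cpu.log &       # CPU usage including %iowait
wait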

Test 1 - CentOS 4.6 i386 - kernel 2.6.9-67.0.22
  • bwrtn/s: 217000.74 (average bandwidth) {sar -b 1 300}
  • 1minLoadAvg: 0.315 {sar -q 1 300}
  • IOWait%: 4.095 {sar 1 300}

Test 2 - CentOS 5.3 i386 - kernel 2.6.18-128.1.10.el5PAE
  • bwrtn/s: 209442.27 (average bandwidth) {sar -b 1 300}
  • 1minLoadAvg: 1.4922 {sar -q 1 300}
  • IOWait%: 11.24 {sar 1 300}

This may not seem like much at first, but we develop medical practice management and electronic medical records software. If I install a CentOS 5.3-based server at one of our big clients, I could have 150 people logged in and using the system at once. This difference in load grows out of control when the system gets heavily loaded.

I have also tested using a 30-day eval copy of Red Hat 5.4 and get the same results. Running multiple tests, I find that while CentOS 4 is extremely consistent in its performance, the CentOS 5 results jump around like crazy. One run of the above test will give a 0.95 load average and the next run will give 2.26 for the very same test! I run multiple tests and average them for the numbers above.
Something low-level seems to be happening here, as completely changing the I/O subsystem makes no difference in performance.


I certainly agree with spankbot that I/O wait times in CentOS 5/Red Hat 5 are ridiculous. Anyone have some input on this?

Thanks!
 
  


Tags
adaptec, centos, io, iowait, performance, raid0, sata, scsi, seagate, tyan

