Linux - Server: This forum is for the discussion of Linux software used in a server-related context.
I wasn't sure where to post this question, so administrators, feel free to move it.
I have a media server I set up running Ubuntu 10.04 Server, and I created a software RAID 5 using 5 Western Digital Caviar Green 2TB 7200RPM 64MB drives. Individually they benchmark (using Ubuntu's disk GUI, Palimpsest) at about 100-120 MB/s read/write.
I set the RAID 5 up with a chunk size of 256 KB, and then waited the 20 hours it took to synchronize. My read speed in RAID is up to 480 MB/s, but my write maxes out at just under 60 MB/s. I knew my write performance would be quite a bit lower than my read, but I was expecting at least single-drive performance. I have seen other people online get better results with software RAID, but have been unable to match them.
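For reference, the array was created along these lines (the device names here are placeholders, and the exact command I used may have differed slightly):

# Hypothetical example: 5-drive RAID 5 with a 256 KB chunk size
# (/dev/sdb through /dev/sdf are placeholder device names)
sudo mdadm --create /dev/md0 --level=5 --raid-devices=5 --chunk=256 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf

# Watch the initial sync progress
cat /proc/mdstat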
My bonnie++ results are more or less identical (I used mkfs.ext4 and set the stride and stripe-width).
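With a 256 KB chunk and the default 4 KB ext4 block size, that works out to stride = 256 / 4 = 64 and stripe-width = 64 * 4 data disks = 256, so the mkfs call was roughly this (a sketch rather than the exact command I ran):

# stride = chunk size / block size = 256 KB / 4 KB = 64
# stripe-width = stride * data disks = 64 * (5 - 1) = 256
sudo mkfs.ext4 -b 4096 -E stride=64,stripe-width=256 /dev/md0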
The PC has 2048 MB of RAM and a 2.93 GHz dual-core Pentium (Core 2 architecture), so I don't think that's the bottleneck. These drives are on the P55 (P45*) southbridge SATA controller.
Anyone know what I am doing wrong or have any suggestions?
Thanks.
EDIT: I meant P45 chipset.
Last edited by DiegoMustache; 11-19-2010 at 10:55 AM.
I've seen that web page before. It reports about 130 MB/s block write on a 6-drive array. My drives are no slower than those. I have only 5 instead of 6, but the difference is too large. I am using the same chunk size as in those tests as well. There's gotta be something I can tweak...
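One knob I still want to try is the md stripe cache, which buffers the parity work for RAID 5/6 (8192 below is just an example value, and I haven't confirmed yet whether it helps in my case):

# Current stripe cache size, in 4 KB pages
cat /sys/block/md0/md/stripe_cache_size

# Try a larger cache; it costs RAM (roughly pages * 4 KB * number of drives,
# so 8192 pages on a 5-drive array is about 160 MB)
echo 8192 | sudo tee /sys/block/md0/md/stripe_cache_size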
Last edited by DiegoMustache; 11-16-2010 at 04:54 PM.
I don't know if this will help or not, but the November 2009 issue of Linux Pro Magazine was devoted to RAID optimization and performance. It's worth a look if you happen to have a copy, or they might have it in their archives.
I believe that your expectations may be too high. RAID 5 is a fault-tolerant configuration: every write has to update parity, and small writes mean reading the old data and old parity before the new data and new parity can be written. You should not expect high write throughput in a fault-tolerant environment.
If you want speed then use RAID 0. It is not fault tolerant. It is fast.
Some people use RAID 10 to try to improve the speed of RAID 1. RAID 10 may be faster in some environments than RAID 5 because RAID 10 does not compute a parity block for data recovery. Yet RAID 10 provides some fault tolerance. If data write performance is important then maybe this is for you.
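If you want to try it, building a RAID 10 with mdadm looks much the same as RAID 5; something like this for four disks (device names are placeholders):

# Hypothetical 4-drive RAID 10 (striped mirrors); no parity to compute on writes
sudo mdadm --create /dev/md1 --level=10 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde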
I've seen some people use RAID 50. I'm not a fan of it.
I put some value on lab testing, so web sites with performance data can be somewhat useful. Nevertheless, your software and hardware will cause your results to deviate from some other person's tests. It's like the mileage rating on new cars: you may or may not be able to achieve the same results as another person in another environment.
Last edited by stress_junkie; 11-16-2010 at 05:12 PM.
I think I figured it out, guys. I was using Palimpsest (called Disk Utility in Ubuntu), and it seems to be the problem. Palimpsest partitions the drives and creates an array out of the partitions; mdadm from the command line by default does not. Testing on a 3x1TB array created with Palimpsest I got 35 MB/s write; using mdadm directly I got 90 MB/s. After it's done recovering I'll get back to you with the new write speeds of my 5x2TB array.
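The difference shows up in /proc/mdstat: the Palimpsest array lists partition members (sdb1, sdc1, ...) while the hand-built one sits on the raw disks. Roughly what I did for the 3x1TB test, from memory and with placeholder device names:

# Palimpsest-built array shows partitions as members, e.g.:
#   md0 : active raid5 sdd1[2] sdc1[1] sdb1[0]
cat /proc/mdstat

# Stop it, clear the old metadata, and rebuild on the whole disks
sudo mdadm --stop /dev/md0
sudo mdadm --zero-superblock /dev/sdb1 /dev/sdc1 /dev/sdd1
sudo mdadm --create /dev/md0 --level=5 --raid-devices=3 --chunk=256 /dev/sdb /dev/sdc /dev/sdd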
It made a huge difference. Now my read is 393.4 MB/s max, 184.3 MB/s avg, and 102.7 MB/s min, and my write is 262.4 MB/s max, 139.9 MB/s avg, and 80.3 MB/s min.
That is on five 2TB WD Caviar Green 64MB drives on the P45 southbridge SATA controller.
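Those figures are from the Disk Utility benchmark; for a rough non-GUI sanity check, a dd run like this gives ballpark sequential numbers (/mnt/raid is a placeholder mount point, and it's no substitute for bonnie++):

# Rough sequential write test (16 GB, flushed to disk before dd reports its rate)
dd if=/dev/zero of=/mnt/raid/testfile bs=1M count=16384 conv=fdatasync

# Rough sequential read test (drop caches first so reads actually hit the disks)
sudo sh -c 'echo 3 > /proc/sys/vm/drop_caches'
dd if=/mnt/raid/testfile of=/dev/null bs=1M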