Old 10-28-2006, 08:04 PM   #1
greg.dieselpowe
Newbie
 
Registered: Mar 2005
Location: Western New York
Distribution: My Own
Posts: 1

Rep: Reputation: 0
RAID 5 vs 1E(10)


I have had an Areca 1210 for about six months now, running RAID 1E, and I have been going back and forth on rebuilding the RAID 10 array as a RAID 5 array. All 4 drives on the controller are populated. My questions are:

1) What kind of performance can I expect from RAID 5 vs. RAID 10 using the same number of drives? I would think that since RAID 5 stripes across 4 drives instead of 2, it would perform much faster than RAID 10 on both reads and writes, with a smaller gain on writes because of the distributed parity.
2) What kind of read/write speed could I actually expect?
3) Does anyone have a similar setup that has real numbers (hdparm -tT would be sufficient)?

My typical read numbers from hdparm -tT are:

/dev/sda:
Timing cached reads: 4144 MB in 2.00 seconds = 2073.49 MB/sec
Timing buffered disk reads: 330 MB in 3.00 seconds = 109.91 MB/sec

Any input is appreciated.

Sincerely,
Greg

Last edited by greg.dieselpowe; 10-29-2006 at 08:42 AM.
 
Old 10-30-2006, 04:45 AM   #2
slantoflight
Member
 
Registered: Aug 2005
Distribution: Smoothwall
Posts: 283
Blog Entries: 3

Rep: Reputation: 35
There would be no performance benefit in going from RAID 10 to RAID 5, and maybe even a slight performance hit, at least on the write side of things. RAID 5 introduces distributed parity, with the parity calculation and checking overhead that comes with it, whereas RAID 1+0 combines two simple but very high-performance RAID methods. With RAID 10 you get the best of both worlds, but at the highest cost.

I have no idea what kind of hard drives you have, but it looks like your RAID performance is on the low side of average. It looks consistent with what you might expect from 4 fairly cheap 2 MB cache, 7200 RPM hard drives.
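
To put rough numbers on that write penalty, here is a back-of-the-envelope sketch; the per-drive figure is just an assumed round number, not a measurement:

Code:
# Assume 4 drives, each good for roughly 75 small random writes/sec (an assumed figure).
# RAID 10: each write lands on 2 disks (data + its mirror).
echo $(( 4 * 75 / 2 ))   # ~150 small writes/sec across the array
# RAID 5: each small write costs 4 I/Os (read data, read parity, write data, write parity).
echo $(( 4 * 75 / 4 ))   # ~75 small writes/sec across the array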
 
Old 10-30-2006, 06:59 PM   #3
greg.dieselpower
LQ Newbie
 
Registered: Oct 2006
Location: Western New York
Distribution: My Own, FC4 64bit
Posts: 3

Rep: Reputation: 0
Close; they are 8 MB cache, 7200 RPM SATA 3 Gb/s drives with NCQ enabled.

I haven't really played with the stripe size to see if I could improve performance. Does anyone have any tweaking tips for me?

Thanks,
Greg
 
Old 10-30-2006, 11:07 PM   #4
slantoflight
Member
 
Registered: Aug 2005
Distribution: Smoothwall
Posts: 283
Blog Entries: 3

Rep: Reputation: 35
Everyone seems to say 16 KB.

I'm actually trying a 128 KB stripe. The results are interesting.

I seem to have 90 MB/s sustained, with two 7200 RPM drives.

That is, if this program can be believed:
http://www.simplisoftware.com/Public...request=HdTach

Maybe later I'll post some hdparm results.

It seems to depend on the RAID card you're using, but usually the larger the stripe, the more performance you get out of it, I think.
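
HD Tach is a Windows tool, so for a Linux-side equivalent of its sustained read test, something like this is a rough substitute (the device name is an assumption; point it at your own array):

Code:
hdparm -tT /dev/sda
# Read 2 GB straight off the device with the page cache bypassed, and let dd report MB/s:
dd if=/dev/sda of=/dev/null bs=1M count=2048 iflag=direct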
 
Old 10-31-2006, 08:32 AM   #5
greg.dieselpower
LQ Newbie
 
Registered: Oct 2006
Location: Western New York
Distribution: My Own, FC4 64bit
Posts: 3

Rep: Reputation: 0
I am already using the largest stripe the card supports, so I guess I should start wondering whether a different filesystem would make a difference here as well. That is off topic for this thread, though, so I will find a more appropriate forum. Thanks for your input, slantoflight.
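
One thing I will probably try first is aligning the filesystem to the array stripe. A minimal sketch, assuming XFS with a 128 KB stripe unit and 3 data disks on a 4-drive RAID 5; the device name and numbers are placeholders, and mkfs wipes the volume:

Code:
# su = per-disk stripe unit, sw = number of data-bearing disks in the array
mkfs.xfs -d su=128k,sw=3 /dev/sda1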

Greg
 
Old 05-08-2007, 12:48 PM   #6
bobpaul
LQ Newbie
 
Registered: Aug 2005
Posts: 14

Rep: Reputation: 0
Quote:
Originally Posted by greg.dieselpowe
I have had an Areca 1210 for about six months now, running RAID 1E, and I have been going back and forth on rebuilding the RAID 10 array as ...
Not so much for Greg, but others that stumble upon this like I did...

RAID 1E and RAID 10 are different. RAID 10 is a stripe across mirrors, so you need 4 disks minimum and even numbers thereafter. Two disks each make up a mirror, and the mirrors are striped together. This means with 4 100 GB drives you get a 200 GB set with both increased write and significantly increased read performance. You can lose up to 1 disk from each underlying mirror; with 6 disks in the array, you could lose up to 3 disks.

RAID 1E is an IBM mirroring/striping technique that works with an arbitrary number of disks; the RAID 1E diagram on Wikipedia explains it best. With 4 100 GB disks you still end up with 200 GB of storage. You can lose as many as 2 non-adjacent disks. This doesn't mean much for 4 disks, but for 5 disks it means you can still lose a maximum of 2 disks, and RAID 10 won't work with 5 disks at all.

Personally, I would use RAID 5, just because it allows a 4-drive array of 100 GB disks to reach 300 GB with similar performance. The biggest downside I see is that you can only lose 1 drive, no matter which drives happen to fail.
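
Spelling out the capacity arithmetic above for 4 x 100 GB drives:

Code:
echo $(( 4 * 100 / 2 ))     # RAID 10 or 1E: half the raw space goes to mirroring -> 200 GB
echo $(( (4 - 1) * 100 ))   # RAID 5: one drive's worth of parity -> 300 GB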
 
Old 05-08-2007, 03:50 PM   #7
lazlow
Senior Member
 
Registered: Jan 2006
Posts: 4,362

Rep: Reputation: 171
Greg

Something does not look right there. Here is mine for a standalone SATA I drive.

/dev/sda:
Timing cached reads: 2748 MB in 2.00 seconds = 1374.72 MB/sec
Timing buffered disk reads: 228 MB in 3.02 seconds = 75.49 MB/sec

I would expect not quite twice that speed out of your setup (maybe 135 MB/s?).
 
Old 11-21-2007, 08:40 PM   #8
Tsagadai
LQ Newbie
 
Registered: Sep 2005
Posts: 5

Rep: Reputation: 0
The cause of your speed capping out so low is most likely that the drive controllers are all on the PCI bus, which has a capacity of around 160 MB/s for all traffic. It's not too expensive now to get a motherboard that puts the SATA controllers on PCIe lanes instead of the shared PCI bus. Even PCIe x1 drive controller cards will do the trick.

For the record, my 3-drive SATA II RAID 5 gets around 200 MB/s on reads. This is using Linux software RAID on RHEL 5.
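
That is ordinary mdadm software RAID; the create command looks roughly like this (device names and chunk size are placeholders, not my exact setup):

Code:
mdadm --create /dev/md0 --level=5 --raid-devices=3 --chunk=128 /dev/sdb /dev/sdc /dev/sdd
cat /proc/mdstat   # watch the initial resync finish before benchmarking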
 
Old 11-21-2007, 11:59 PM   #9
Electro
Guru
 
Registered: Jan 2002
Posts: 6,042

Rep: Reputation: Disabled
PCI bus has a bandwidth of 133 MiB per second.

Remember, the utility hdparm just benchmarks raw throughput. The actual throughput depends on other variables, such as the filesystem. Take, for example, a 2.5 inch hard drive with a throughput of 5 MiB per second. When a filesystem like XFS is used, the throughput can be much higher. The reason is that XFS writes to memory first and then writes the data to the hard drive. If memory is 2 GiB or larger, write throughput will be through the roof. Use either XFS or JFS to get the highest throughput.
 
Old 11-23-2007, 09:36 PM   #10
bobpaul
LQ Newbie
 
Registered: Aug 2005
Posts: 14

Rep: Reputation: 0
Quote:
Originally Posted by Electro View Post
When a filesystem like XFS is used, the throughput can be much higher. The reason is that XFS writes to memory first and then writes the data to the hard drive. If memory is 2 GiB or larger, write throughput will be through the roof.
This, of course, would only hold for file transfers smaller than 2 GiB. A continuous stream of files from one disk to another (such as during a backup) will eventually slow down to the speed limitation of the controller, but it is a very good point to consider.
 
Old 03-08-2008, 02:45 PM   #11
wy1z
LQ Newbie
 
Registered: Jan 2008
Posts: 3

Rep: Reputation: 0
I experimented with RAID 0 across 4 SAS disks (15k RPM) on a Dell PowerEdge server with a PERC 5i controller and learned the hard way that the whole array is lost with just one disk failure. The OS is CentOS 5 64-bit.

When the replacement disk arrives, I'm likely going with software RAID (so I don't have to worry about a failed controller), unless someone argues for a hardware setup instead.

For the eventual rebuild, I'd want to maximize disk space (maybe 3 disks plus a spare, or 4 disks and no spare). The server has 6 disks, but the first two will be a software RAID 1 holding just the OS, and the remaining 4 are for data/project storage.

From what I've learned, it seems like RAID 10 would be best? I'd considered RAID 5 and 6, but I keep finding talk of lost space and performance hits.

Since the controller can't natively handle RAID 10 anyway, what is the recommended software implementation for it? I don't plan to use LVM, unless it is the only way.
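
From what I can tell so far, the md driver has its own raid10 personality, with no LVM needed; a minimal sketch with made-up device names (I have not run this on the new disks yet):

Code:
mdadm --create /dev/md1 --level=10 --raid-devices=4 /dev/sdc /dev/sdd /dev/sde /dev/sdf
cat /proc/mdstat   # confirm the layout and watch the sync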

Thanks.

Scott
 
Old 06-05-2009, 04:15 PM   #12
Mistoffeles
Member
 
Registered: Dec 2007
Location: TGWN
Distribution: Fedora, RHEL, CentOS, Evil Entity
Posts: 36

Rep: Reputation: 15
Quote:
Originally Posted by Electro View Post
PCI bus has a bandwidth of 133 MiB per second.

Remember, the utility hdparm just benchmarks raw throughput. The actual throughput depends on other variables, such as the filesystem. Take, for example, a 2.5 inch hard drive with a throughput of 5 MiB per second. When a filesystem like XFS is used, the throughput can be much higher. The reason is that XFS writes to memory first and then writes the data to the hard drive. If memory is 2 GiB or larger, write throughput will be through the roof. Use either XFS or JFS to get the highest throughput.
I know this is old, but I can't leave this kind of misinformation floating around for other people to be misled by.

Write caching is not exclusive to XFS; virtually any filesystem can cache writes in memory and then write them to the hard drive. This is a risk vs. reward scenario, even with a battery-backed cache, because data that is cached but not yet written to disk can be lost if power to the system is cut. Even a battery-backed cache will lose data if the power cannot be restored before the battery runs out of juice. A UPS can mitigate this risk too, but again it has a limited amount of stored power, and when that runs out, poof. Hopefully you have set up an automatic shutdown to prevent data loss.

Also, write throughput into a write cache is a completely meaningless statistic, as the cache is still bottlenecked by the need to write the cached data out to the hard drive. Only the real hardware performance is relevant: the cache will only provide that artificially inflated write speed for a brief period, and then it has to wait for the hard disk to free up some of the cache. XFS actually slows down disk writes somewhat because it is a journaling filesystem, and although its creators have sought to minimize the impact of the journaling, it is still overhead.
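
If you want a number that reflects the disks rather than the cache, make dd force the data out before it reports; a rough sketch, with path and size as placeholders:

Code:
# conv=fdatasync flushes everything to disk before dd prints its rate; oflag=direct skips the page cache entirely.
dd if=/dev/zero of=/mnt/array/ddtest bs=1M count=4096 conv=fdatasync
dd if=/dev/zero of=/mnt/array/ddtest bs=1M count=4096 oflag=direct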
 
Old 06-05-2009, 04:21 PM   #13
Mistoffeles
Member
 
Registered: Dec 2007
Location: TGWN
Distribution: Fedora, RHEL, CentOS, Evil Entity
Posts: 36

Rep: Reputation: 15
Quote:
Originally Posted by wy1z View Post
From what I've learned, it seems like RAID 10 would be best? I'd considered RAIDs 5 and 6, but I'm consistently finding talk of lost space and performance hits.
RAID 5 and 6 take a performance hit because of the parity calculations, but they waste less space than RAID 1E/RAID 10 for the same reason. The latter, in a 4-drive array, will spend half of the disk space on a copy of everything stored on the array, whereas RAID 5 only gives up a quarter of the array to parity. So if you have 4 x 100 GB, RAID 5 gives you 300 GB while RAID 1E/10 gives you 200 GB of usable space. Not sure what RAID 6 gives you; I haven't done the math.
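
Actually, doing the RAID 6 math after all: it gives up two drives' worth of space to parity, so:

Code:
echo $(( (4 - 2) * 100 ))   # RAID 6: two drives of parity -> 200 GB usable from 4 x 100 GB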
 
  

