Old 01-29-2014, 05:54 AM   #1
Ulysses_
Senior Member
 
Registered: Jul 2009
Posts: 1,303

Rep: Reputation: 57
RAID 0 with four SATA 2 or two SATA 3 SSDs?


On the P8C WS (B3) motherboard (here's the manual), what would be faster?

RAID 0 with four SATA2 SSDs, or RAID 0 with two SATA3 SSDs?

Or RAID 0 with six drives, all SATA ports being used? Does this motherboard support this? If not, can software RAID be done on top of the two sets of hardware RAID drives, and is that advisable?

Which SSD model should I go for to get the maximum performance possible with these ports?

CPU: i7-3770T
RAM: 32 GB DDR3 1600MHz

Last edited by Ulysses_; 01-29-2014 at 06:26 AM.
 
Old 01-29-2014, 03:35 PM   #2
jefro
Moderator
 
Registered: Mar 2008
Posts: 21,976

Rep: Reputation: 3623
I'm not sure any recent tests on software RAID would reveal any significant speed differences. Generally you simply rely on hardware RAID; those solutions are well tested and have published results.

I'm not sure you even have true hardware RAID on that board. I get the feeling the Intel controller is a firmware RAID solution.
 
Old 01-29-2014, 03:42 PM   #3
granth
Member
 
Registered: Jul 2004
Location: USA
Distribution: Slackware64
Posts: 212

Rep: Reputation: 55
What is your typical usage for this system?
What are your space requirements?
What are your performance goals?
Do you take backups regularly?
What's your budget?

Keep in mind, the C216 chipset provides hardware-assisted software RAID (aka fakeraid). You'd be better off just sticking with Linux md RAID. Two SSDs in RAID 0 is overkill for most desktop systems.
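
If you do go the md route, a striped array is quick to set up; a minimal sketch, assuming two SSDs on placeholder device names /dev/sdb and /dev/sdc:
Code:
# create a 2-disk RAID 0 (striped) array from two whole SSDs
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb /dev/sdc
# verify it assembled
cat /proc/mdstat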

All SSDs and HDDs will have different performance characteristics based on your usage. I'd recommend starting with a single SSD for the OS and applications. Then add a traditional HDD for media storage. Throw in another big drive for backups (rsnapshot is nice) and you're set.
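
If you try rsnapshot on that backup drive, a handful of config lines is enough for rotating snapshots. A rough example (paths are made up, rsnapshot.conf requires tabs between fields, and older versions spell "retain" as "interval"):
Code:
# /etc/rsnapshot.conf (excerpt; fields must be tab-separated)
snapshot_root   /mnt/backup/snapshots/
retain  daily   7
retain  weekly  4
backup  /home/  localhost/
backup  /etc/   localhost/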
 
Old 01-29-2014, 05:23 PM   #4
Ulysses_
Senior Member
 
Registered: Jul 2009
Posts: 1,303

Original Poster
Rep: Reputation: 57
Quote:
Originally Posted by granth View Post
What is your typical usage for this system?
Vmware virtual machines. Lots of them.

Quote:
What are your space requirements?
Performance is crucial, so if performance is higher with high-capacity SSDs, then I'd go for high-capacity SSDs even though I do not need the space.

Quote:
What are you performance goals?
The maximum possible with existing SATA ports.

Quote:
Do you take backups regularly?
No, but I will if that is the only way to get maximum performance (assuming you're implying that high-performance setups are sometimes unreliable, hence the regular backups).

Quote:
What's your budget?
For hardware that will last, I am prepared to spend up to $2000.

Quote:
You'd be better off just sticking with linux md raid.
Are you actually saying software RAID in Linux is faster than the fakeraid of this Intel chipset? I could try them both and see which is faster.
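
For a first pass, I suppose a crude sequential read test on each array would do (device name assumed, whatever the array ends up as):
Code:
# cached vs. raw sequential read speed of the assembled array
sudo hdparm -tT /dev/md0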

But first some high-performance SSDs are needed that will make the most of the available bandwidth.

Quote:
Two SSDs in RAID 0 is overkill for most desktop systems.
Do you mean it's too fast? This system needs to run lots of virtual machines simultaneously and be able to suspend them to disk and restore them from disk almost instantly. Currently that only happens some of the time, when there is plenty of RAM free for disk caching. I was seriously considering PCI Express solutions, but couldn't find a reliable one that boots in a short time.

Quote:
Then add a traditional HDD for media storage.
The motherboard has this Intel technology where you can set up an SSD as a cache of sorts for a traditional HDD; I am definitely going to try that and see what happens. Not sure if it allows RAID 0 SSDs.

What about the question in the title: might 4 SSDs be faster than 2?

Might 6 SSDs be RAID-able? Or is there little to gain, as the SATA 2 and SATA 3 ones would be competing for the same chipset-allocated bandwidth?

Last edited by Ulysses_; 01-29-2014 at 05:33 PM.
 
Old 01-29-2014, 06:03 PM   #5
granth
Member
 
Registered: Jul 2004
Location: USA
Distribution: Slackware64
Posts: 212

Rep: Reputation: 55
Quote:
Originally Posted by Ulysses_ View Post
Vmware virtual machines. Lots of them.
This tells me nothing. I could run 100 VMs on a single disk without a hiccup, if they're all sitting idle. What do the VMs actually do? If it's CPU-intensive work, then fast disks don't do much. Ya dig?


Quote:
Originally Posted by Ulysses_ View Post
What about the question in the title: might 4 SSDs be faster than 2?

Might 6 SSDs be RAID-able?
Yes, you can stripe (RAID 0) 6 drives. More drives = more speed, until you saturate your controller's PCIe bus. You'll also need to be careful about alignment and stripe size, otherwise your RAID array will not run efficiently.
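
For example, with md you set the stripe (chunk) size at creation time and then align the filesystem to it. A sketch only; the 512 KiB chunk and device names are assumptions to illustrate the arithmetic:
Code:
# 6-disk RAID 0 with an explicit 512 KiB chunk size
mdadm --create /dev/md0 --level=0 --raid-devices=6 --chunk=512 /dev/sd[b-g]
# align ext4: stride = 512 KiB chunk / 4 KiB block = 128,
# stripe-width = stride x 6 data disks = 768
mkfs.ext4 -E stride=128,stripe-width=768 /dev/md0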

I believe you can stripe up to 32 disks using Linux md RAID. Using fakeraid or a hardware RAID card, that limit will vary based on the device. In any case, I suggest spreading your disks over multiple SATA controllers for maximum performance. If you're using md RAID, you can add an inexpensive PCIe SATA card for $30 to help spread out the load.

Quote:
Originally Posted by Ulysses_ View Post
Are you actually saying software raid in linux is faster than the fakeraid of this intel chipset?
Yes, I'd guess that Linux md RAID is faster than Intel fakeraid on Linux. I remember seeing some benchmarks of this, but I'm sure they're outdated.

Quote:
Originally Posted by Ulysses_ View Post
Do you mean it's too fast? This system needs to run lots of virtual machines simultaneously and be able to suspend them to disk and restore them from disk almost instantly. I was seriously considering PCI-express solutions, but couldn't find a reliable one that boots in a very short time.
I mean you probably won't push it hard enough to justify the expense. Please explain more about the VMs and what they will be doing, aside from booting up, suspending, etc.

PCI Express solutions? Do you mean PCIe RAID cards, or PCIe flash storage? Why is boot time of the hypervisor important? ALL hardware RAID cards will increase the boot time. I'm confused why this is a factor.
 
Old 01-29-2014, 06:20 PM   #6
Dman58
Member
 
Registered: Nov 2010
Location: The Danger Zone
Distribution: Slackware & everything else in a VM
Posts: 294

Rep: Reputation: 31
I believe the speed would be almost identical, depending on the disks of course.
SATA2 = 3 Gb/s x 4 = roughly 12 Gb/s
SATA3 = 6 Gb/s x 2 = roughly 12 Gb/s

The issue is that with RAID 0 the information isn't protected, and if one disk fails, all is lost. So putting a ton of VMs on it without a backup solution is asking for trouble somewhere down the line. Maybe not today or even next year, but you will be totally upset if you lose all those VMs, I would assume.

Back to the speed issue, which really isn't an issue. The more disks in RAID 0, the higher the chance of failure. So your best bet would probably be to use the 2 SATA3 ports for RAID 0, then possibly use the 4 SATA2 ports in RAID 1 as your backup solution.
 
Old 01-30-2014, 08:27 AM   #7
Ulysses_
Senior Member
 
Registered: Jul 2009
Posts: 1,303

Original Poster
Rep: Reputation: 57
It seems a waste to allocate SATA ports to backup drives; I would rather use a USB drive for backups.

So 6 SATA SSDs with software RAID 0 or 5 seems the way to go if maximum storage performance for the money is the aim, as stated above, even if it is overkill and needs frequent backups. I just need confirmation on the following two points:

1. When all 6 ports are used, is the full SATA 2/3 bandwidth available to each SSD? Or does each SSD slow down compared to being the only SSD connected, because there is competition for the chipset bandwidth allocated to SATA, which might be less than the total of 4 x SATA2 + 2 x SATA3 = 24 Gb/s, making PCIe flash a faster option for the same money?

2. What is an SSD model that will use almost all the bandwidth of SATA 3, and what about SATA 2? Size does not matter; price does, and performance does critically. Let's just say it's a quirk of this customer, like people who buy expensive cars and average houses instead of the other way round. The total must add up to at least 100 GB, though.

Quote:
Do you mean pcie raid cards, or pcie flash storage?
PCIe flash storage was the one I looked at, and it was a disappointment: it fails after a few months and takes ages to boot.

Quote:
Why is boot time of the hypervisor important?
It is not. But excessive boot time raises questions about the quality of the design. A USB flash drive connects in a few seconds no matter the size, so why would a PCIe SSD need much more time, if not because of bad design?

Last edited by Ulysses_; 01-30-2014 at 09:56 AM.
 
Old 01-30-2014, 11:15 AM   #8
granth
Member
 
Registered: Jul 2004
Location: USA
Distribution: Slackware64
Posts: 212

Rep: Reputation: 55
Quote:
SATA2 = 3 Gb/s x 4 = roughly 12 Gb/s
SATA3 = 6 Gb/s x 2 = roughly 12 Gb/s
These are not real-world numbers. You'll never come close to that. There isn't a huge difference between the actual speed of SATA2 and SATA3 drives.

Quote:
It seems a waste to allocate sata ports to backup drives, would rather use a usb drive for backups.
This just depends on how much data you're backing up. My system wouldn't be able to finish a backup overnight over USB.

Quote:
When all 6 ports are used, is the full SATA 2/3 bandwidth available to each SSD? Or does each SSD slow down compared to being the only SSD connected, because there is competition for the chipset bandwidth allocated to SATA, which might be less than the total of 4 x SATA2 + 2 x SATA3 = 24 Gb/s, making PCIe flash a faster option for the same money?
You'll be sharing upstream bandwidth. I believe the Intel onboard SATA controller is connected via a 4 GB/s DMI link. Adding PCIe SATA controllers solves that problem. You could connect 3 drives to the onboard SATA ports and 3 drives to an inexpensive add-on PCIe SATA card. Either that, or buy an expensive PCIe hardware RAID card, which would have much more upstream bandwidth than the onboard implementation. It takes something like 16 high-performance SSDs to saturate an 8-lane PCIe 3.0 slot.
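
Once drives are split across controllers, you can check which controller each disk actually sits behind; something like this should show it:
Code:
# sysfs reveals the PCI device each disk hangs off of
readlink -f /sys/block/sda
# or list host/transport info for every disk at once
lsblk -o NAME,HCTL,TRAN,MODEL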

As for SSD brands/models, I've had very good experience with the Samsung 840 Pro and Toshiba Q Pro series. You can find good SSD benchmarks on Anandtech.
 
Old 01-30-2014, 12:41 PM   #9
TobiSGD
Moderator
 
Registered: Dec 2009
Location: Germany
Distribution: Whatever fits the task best
Posts: 17,148
Blog Entries: 2

Rep: Reputation: 4886
Quote:
Originally Posted by granth View Post
These are not real-world numbers. You'll never come close to that. There isn't a huge difference between the actual speed of SATA2 and SATA3 drives.
True for mechanical disks, but not for SSDs. SATA3 SSDs are significantly faster than SATA2 SSDs.
 
Old 01-30-2014, 01:06 PM   #10
granth
Member
 
Registered: Jul 2004
Location: USA
Distribution: Slackware64
Posts: 212

Rep: Reputation: 55
Quote:
Originally Posted by TobiSGD View Post
True for mechanical disks, but not for SSDs. SATA3 SSDs are significantly faster than SATA2 SSDs.
Yes, but this is only true under the right conditions (sequential access). Unfortunately, real-world usage is mostly random access, and therefore the difference isn't noticeable.

Should you use SATA3 over SATA2? Absolutely! BUT, it's not going to make a huge difference. The make/model of the SSD will make more of a difference.

Reference
 
Old 01-30-2014, 01:35 PM   #11
Ulysses_
Senior Member
 
Registered: Jul 2009
Posts: 1,303

Original Poster
Rep: Reputation: 57
How many SATA3 SSDs like the Samsung 840 Pro can the available bandwidth of this particular motherboard's best PCIe slot serve, if a hardware RAID card is bought?

What is an example of a good hardware RAID card that does not take too long to boot? And why does this happen:
Quote:
ALL hardware raid cards will increase the boot time.
And another thing: if the motherboard fails, can the data be recovered from the SSDs by plugging them into another PC with onboard RAID?

Last edited by Ulysses_; 01-30-2014 at 03:17 PM.
 
Old 01-30-2014, 03:46 PM   #12
jefro
Moderator
 
Registered: Mar 2008
Posts: 21,976

Rep: Reputation: 3623
"ALL hardware raid cards will increase the boot time."

I'm not exactly sure if I agree or not. I'll agree that some amount of time is needed to allow one to access the RAID BIOS software, and that it might be configurable to be less. After the OS begins to boot, I'd think that time would be recovered and then some (I never did a study on this). In either case, a true hardware RAID card attached to a fast backplane channel is the only real enterprise way to speed up normal workloads after boot and/or offer a mirror or data protection.


If I wanted speed, I'd consider a PCIe enterprise-level RAID board. I'd even consider one or more of the PCIe SSDs. Their speeds are really tops if the motherboard supports them. They only suffer bad reviews from people whose boards are too slow.

Last edited by jefro; 01-30-2014 at 03:49 PM.
 
Old 01-30-2014, 07:12 PM   #13
granth
Member
 
Registered: Jul 2004
Location: USA
Distribution: Slackware64
Posts: 212

Rep: Reputation: 55
Quote:
How many SATA3 SSDs like the Samsung 840 Pro can the available bandwidth of this particular motherboard's best PCIe slot serve, if a hardware RAID card is bought?
I don't have the hardware to benchmark it for ya, bub. As for theoretical speeds, we could argue that all night.

Quote:
What is an example of a good hardware raid card, that does not take too long to boot? And why does this happen
I like Areca and LSI cards. Most of them have Linux support and perform very well. I've also had good luck with Dell, HP, and Sun HBAs.

It takes longer because hardware RAID cards have a BIOS that loads after your motherboard BIOS. This secondary BIOS spins up the drives, scans them, assembles the array, and performs sanity checks on the components (cache, processor, etc.). After the RAID BIOS is finished, the rest of the boot is actually faster, since you have more throughput than an onboard controller. Server-grade hardware tends to run more checks at boot time to ensure data integrity.

Quote:
And another thing: if the motherboard fails, can the data be recovered from the SSD's by plugging them onto another PC with onboard raid?
If using fakeraid, you'll have to replace your motherboard with the same model, or at the very least one with the same chipset. Otherwise, your data is toast. If using software or hardware RAID, you can replace your motherboard with anything you prefer and the array will assemble. However, if your hardware RAID card dies, you'll have to replace it with the same model, or the same brand if you're lucky. Software RAID is the most portable solution and offers good-enough performance for most workloads.
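
That portability is the whole point: md arrays carry their metadata on the member disks, so a new board just reassembles them. Roughly, with hypothetical device names:
Code:
# inspect the md superblock on one member disk
mdadm --examine /dev/sdb
# scan all disks and assemble whatever arrays are found
mdadm --assemble --scan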

Starting off, I would stagger six drives over three SATA controllers (onboard + 2x PCIe) and use md RAID. It's portable, fast, and gives you something to benchmark without breaking the bank. If it's not fast enough, you're only out ~$60, and you can upgrade. Your build doesn't seem enterprise-grade, so why spend so much on the HBA? I have a Marvell 88SE9215-based card, and it works great for my RAID 6 array. I have 2x spinning disks connected via the onboard controller and 2x connected via the add-on. Performance is around 250 MB/s write and 350 MB/s read. Pretty close to what my single Toshiba SSD does, but HUGE and with some redundancy. I'm sure it would be crazy fast with SSDs in RAID 0.
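
For what it's worth, numbers in that ballpark can be measured with plain dd as long as you bypass the page cache; a rough sketch (mount point is an assumption):
Code:
# sequential write, direct I/O so the page cache doesn't inflate the result
dd if=/dev/zero of=/mnt/raid/testfile bs=1M count=4096 oflag=direct
# sequential read of the same file
dd if=/mnt/raid/testfile of=/dev/null bs=1M iflag=direct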

Last edited by granth; 01-30-2014 at 07:17 PM.
 
Old 01-31-2014, 12:35 PM   #14
Ulysses_
Senior Member
 
Registered: Jul 2009
Posts: 1,303

Original Poster
Rep: Reputation: 57
So you intentionally recommend using only two of the four ports of each port card, because you do not trust it is capable of 4 times the bandwidth of each port, and likewise only two of the six ports of the onboard SATA controller? Why not 3 of the 4 card ports and 3 of the onboard ports?

And what is it about the number 6 anyway? For $2000 I can buy more of your $200 SSDs.

Last edited by Ulysses_; 01-31-2014 at 01:26 PM.
 
Old 01-31-2014, 01:28 PM   #15
granth
Member
 
Registered: Jul 2004
Location: USA
Distribution: Slackware64
Posts: 212

Rep: Reputation: 55
I suggested using only two of the onboard ports, because your motherboard only has 2 x SATA 6Gb/s ports.

A single PCIe 2.0 lane can provide 500 MB/s (4 Gbit/s), and since the card I recommended connects via x4 (four lanes), you should have 2 GB/s of bandwidth available. I don't think 4 SSDs will saturate that link, so feel free to run four drives per card. Just keep in mind that for $30 more, you'll have additional headroom for performance and/or expansion.
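
You can also confirm the link speed and width a card actually negotiated once it's in the slot; along these lines (the bus address is just an example):
Code:
# find the add-on card's PCI bus address
lspci | grep -i sata
# LnkSta shows the trained link speed and width
sudo lspci -vv -s 03:00.0 | grep LnkSta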

Why six? It's a starting point. If six doesn't satisfy your need for speed, capacity, or bragging rights, then buy more.

You never answered my question about what you're actually trying to do. Is this a proof-of-concept, or a real workload?
 
  

