Linux - Hardware: This forum is for hardware issues. Having trouble installing a piece of hardware? Want to know if that peripheral is compatible with Linux?
On the P8C WS (B3) motherboard (here's the manual), what would be faster?
RAID 0 with four SATA2 SSDs, or RAID 0 with two SATA3 SSDs?
Or RAID 0 with six drives, all SATA ports in use? Does this motherboard support that? If not, can software RAID be layered on top of the two hardware RAID sets, and is that advisable?
Which SSD model should I go for to get the maximum performance possible with these ports?
I'm not sure recent tests of software RAID would reveal any significant speed gains. Generally you'd simply rely on hardware RAID; those solutions are well tested and have published results.
I'm not sure you even have true hardware RAID on that board. I get the feeling the Intel controller is a firmware RAID solution.
What is your typical usage for this system?
What are your space requirements?
What are your performance goals?
Do you take backups regularly?
What's your budget?
Keep in mind, the c216 chipset provides hardware-assisted software raid (aka fakeraid). You'd be better off just sticking with linux md raid. Two SSD's in raid0 is overkill for most desktop systems.
All SSD's and HDD's will have different performance characteristics, based on your usage. I'd recommend starting with a single SSD for OS and applications. Then add a traditional HDD for media storage. Throw in another big drive for backups (rsnapshot is nice) and you're set.
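If you go the rsnapshot route, a minimal configuration for that backup drive might look like the fragment below. This is a sketch; the paths are illustrative, and note that rsnapshot requires tabs (not spaces) between fields:

```
# /etc/rsnapshot.conf (fragment) -- fields must be tab-separated
config_version	1.2
snapshot_root	/mnt/backup/snapshots/
retain	daily	7
retain	weekly	4
backup	/home/	localhost/
backup	/etc/	localhost/
```

Run `rsnapshot daily` from cron; old snapshots are rotated automatically and unchanged files are hard-linked, so each snapshot only costs the space of what changed.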
Performance is crucial so if performance is higher with high capacity SSD's, then I'd go for high capacity SSD's even though I do not need them.
Quote:
What are your performance goals?
The maximum possible with existing SATA ports.
Quote:
Do you take backups regularly?
No, but I will if that's the only way to get maximum performance (assuming you're implying that high performance is sometimes unreliable, hence the regular backups).
Quote:
What's your budget?
For hardware that will last, I am prepared to spend up to $2000.
Quote:
You'd be better off just sticking with linux md raid.
Are you actually saying software RAID in Linux is faster than this Intel chipset's fakeraid? I could try both and see which is faster.
But first some high performance SSD's are needed, that will make the most of the available bandwidth.
Quote:
Two SSD's in raid0 is overkill for most desktop systems.
Do you mean it's too fast? This system needs to run lots of virtual machines simultaneously and be able to suspend them to disk and restore them from disk almost instantly. Currently this only happens some of the time as there is plenty of ram for disk caching. I was seriously considering PCI-express solutions, but couldn't find a reliable one that boots in a short time.
Quote:
Then add a traditional HDD for media storage.
The motherboard has this Intel technology where you can set up an SSD as a cache of sorts in front of a traditional HDD. I am definitely going to try that and see what happens. Not sure if it allows RAID-0 SSD's.
What about the question in the title, might 4 SSD's be faster than 2?
Might 6 SSD's be raid-able? Or is there little to gain as the SATA 2 and SATA 3 ones are competing for the same chipset-allocated bandwidth?
This tells me nothing. I could run 100 VM's on a single disk without a hiccup, if they're all sitting idle. What do the VM's actually do? If it's cpu-intensive work, then fast disks don't do much. Ya dig?
Quote:
Originally Posted by Ulysses_
What about the question in the title, might 4 SSD's be faster than 2?
Might 6 SSD's be raid-able?
Yes, you can stripe (raid-0) 6 drives. More drives = more speed, until you saturate your controller's pcie bus. You'll also need to be careful about alignment and stripe size, otherwise your raid array will not run efficiently.
I believe you can stripe up to 32 disks using linux md raid. Using fakeraid, or a hardware raid card, that limit will vary based on the device. In any case, I suggest spreading your disks over multiple sata controllers, for maximum performance. You can add an inexpensive pcie sata card for $30, if you're using md raid, to help spread out the load.
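To make the alignment point concrete, here is a sketch of creating a six-disk stripe with md raid and aligning ext4 to it. The device names (/dev/sdb through /dev/sdg) and the 512 KiB chunk size are illustrative assumptions, not recommendations:

```shell
# Create the array (shown for reference; needs root and real disks):
#   mdadm --create /dev/md0 --level=0 --raid-devices=6 --chunk=512 /dev/sd[b-g]
#
# ext4 alignment: stride = chunk / fs block, stripe-width = stride * data disks
chunk_kib=512    # md chunk size in KiB (assumed)
block_kib=4      # ext4 block size in KiB (the default)
ndisks=6         # data disks in the raid-0
stride=$((chunk_kib / block_kib))
stripe_width=$((stride * ndisks))
echo "mkfs.ext4 -E stride=${stride},stripe-width=${stripe_width} /dev/md0"
```

With these numbers the echoed command is `mkfs.ext4 -E stride=128,stripe-width=768 /dev/md0`. A misaligned filesystem makes single I/Os straddle chunk boundaries and touch two disks where one would do, which is the inefficiency mentioned above.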
Quote:
Originally Posted by Ulysses_
Are you actually saying software raid in linux is faster than the fakeraid of this intel chipset?
Yes, I'd guess that linux md raid is faster than intel fakeraid on linux. I remember seeing some benchmarks of this, but I'm sure they're outdated.
Quote:
Originally Posted by Ulysses_
Do you mean it's too fast? This system needs to run lots of virtual machines simultaneously and be able to suspend them to disk and restore them from disk almost instantly. I was seriously considering PCI-express solutions, but couldn't find a reliable one that boots in a very short time.
I mean you probably won't push it hard enough to justify the expense. Please explain more about the VM's and what they will be doing, aside from booting up, suspending, etc..
PCI-express solutions? Do you mean pcie raid cards, or pcie flash storage? Why is boot time of the hypervisor important? ALL hardware raid cards will increase the boot time. I'm confused why this is a factor.
I believe the speed would be almost identical, depending on the disks of course.
Sata2= 3Gb/s x 4 = roughly 12Gb/s
Sata3= 6Gb/s x 2 = roughly 12Gb/s
The issue is that with RAID 0 the information isn't protected, and if one disk fails all is lost. So putting a ton of VM's on it without a backup solution is asking for trouble somewhere down the line. Maybe not today or even next year, but I assume you'd be totally upset to lose all those VM's.
Back to the speed issue, which really isn't an issue. The more disks in RAID 0, the greater the chance of failure. So your best bet would probably be to use the two SATA3 ports for RAID 0, then possibly use the four SATA2 ports in RAID 1 as your backup solution.
It seems a waste to allocate sata ports to backup drives, would rather use a usb drive for backups.
So 6 SATA SSD's with software raid 0 or 5 seems the way to go, if maximum storage performance for the money is aimed for, as stated above. Even if it is an overkill, and needs frequent backups. Just need a confirmation on the following two:
1. When all 6 ports are used, is the full SATA 2/3 bandwidth available to each SSD? Or does each SSD slow down compared to being the only one connected, because the chipset bandwidth allocated to SATA may be less than the theoretical total of 4 x SATA2 + 2 x SATA3 = 24 Gb/s, in which case PCIe flash would be a faster option for the same money?
2. What is an SSD model that will use almost all the bandwidth of SATA3, and what about SATA2? Size does not matter, price does, and performance does critically; call it a quirk of this customer, like people who buy expensive cars and average houses instead of the other way round. It must add up to at least 100 GB, though.
Quote:
Do you mean pcie raid cards, or pcie flash storage?
PCIe flash storage was the one I looked at, and it was a disappointment: it failed after a few months and took ages to boot.
Quote:
Why is boot time of the hypervisor important?
It is not. But excessive boot time raises questions about the quality of the design: a USB flash drive connects in a few seconds no matter the size, so why would a PCIe SSD need much more time, if not because of bad design?
Quote:
Sata2= 3Gb/s x 4 = roughly 12Gb/s
Sata3= 6Gb/s x 2 = roughly 12Gb/s
These are not real-world numbers. You'll never come close to that. There isn't a huge difference between the actual speed of SATA2 and SATA3 drives.
Quote:
It seems a waste to allocate sata ports to backup drives, would rather use a usb drive for backups.
This just depends how much data you're backing up. My system wouldn't be able to finish a backup overnight on USB.
Quote:
When all 6 ports are used, is the full SATA 2/3 bandwidth available to each SSD? Or does each SSD slow down compared to being the only one connected, because the chipset bandwidth allocated to SATA may be less than the theoretical total of 4 x SATA2 + 2 x SATA3 = 24 Gb/s, in which case PCIe flash would be a faster option for the same money?
You'll be sharing upstream bandwidth. I believe the onboard Intel SATA controller is connected via a 4 GB/s DMI link. Adding PCIe SATA controllers solves that problem. You could connect 3 drives to the onboard SATA ports and 3 drives to an inexpensive add-on PCIe SATA card. Either that, or buy an expensive PCIe hardware RAID card, which would have much more upstream bandwidth than the onboard implementation. It takes something like 16 high-performance SSDs to saturate an 8-lane PCIe 3.0 link.
As for SSD brands/models, I've had very good experience with the Samsung 840 Pro and Toshiba Q Pro series. You can find good SSD benchmarks on Anandtech.
True for mechanical disks, but not for SSDs. SATA3 SSDs are significantly faster than SATA2 SSDs.
Yes, but this is only true under the right conditions (sequential access). Unfortunately, real world usage is random access, and therefore, the difference isn't noticeable.
Should you use SATA3 over SATA2? Absolutely! BUT, it's not going to make a huge difference. The make/model of the SSD will make more of a difference.
How many SATA3 SSD's like the Samsung 840 Pro can the available bandwidth of this particular motherboard's best PCIe socket serve, if a hardware raid card is bought?
What is an example of a good hardware raid card, that does not take too long to boot? And why does this happen:
Quote:
ALL hardware raid cards will increase the boot time.
And another thing: if the motherboard fails, can the data be recovered from the SSD's by plugging them onto another PC with onboard raid?
"ALL hardware raid cards will increase the boot time."
I'm not exactly sure whether I agree. I'll agree that some of that time is there to let you access the RAID BIOS software, and it might be configurable to be shorter. Once the OS begins to boot, I'd think that time would be recovered and then some (I've never done a study on this). In either case, a true hardware RAID card attached to a fast backplane channel is the only real enterprise way to speed up normal workloads after boot and/or offer mirroring or data protection.
If I wanted speed, I'd consider a PCIe enterprise-level RAID board. I'd even consider one or more of the PCIe SSD's. Their speeds are really tops if the motherboard supports them. They only get bad reviews from people whose boards are too slow.
Quote:
How many SATA3 SSD's like the Samsung 840 Pro can the available bandwidth of this particular motherboard's best PCIe socket serve, if a hardware raid card is bought?
I don't have the hardware to benchmark it for ya, bub. As for theoretical speeds, we could argue that all night.
Quote:
What is an example of a good hardware raid card, that does not take too long to boot? And why does this happen
I like Areca and LSI cards. Most of them have Linux support and perform very well. I've also had good luck with Dell, HP, and Sun HBA's.
It takes longer because hardware raid cards have a bios that loads after your motherboard bios. This secondary bios spins up the drives, scans them, assembles the array, and performs sanity checks on the components (cache, processor, etc.). After the raid bios is finished, boot time would be reduced since you have more throughput than an onboard controller. Server-grade hardware tends to run more checks at boot time to ensure data integrity.
Quote:
And another thing: if the motherboard fails, can the data be recovered from the SSD's by plugging them onto another PC with onboard raid?
If using fake-raid, you'll have to replace your motherboard with the same model, or at the very least, the same chipset. Otherwise, your data is toast. If using software or hardware raid, you can replace your motherboard with anything you prefer and the array will assemble. However, if your hardware raid card dies, you'll have to replace it with the same model, or the same brand if you're lucky. Software raid is the most portable solution, and offers good-enough performance for most workloads.
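For the md raid case, recovering the array on replacement hardware is just a reassembly. A command sketch (assumes the member disks are visible on the new machine and you're running as root; device names are illustrative):

```shell
mdadm --assemble --scan                          # scan superblocks, assemble any arrays found
mdadm --assemble /dev/md0 /dev/sd[b-g]           # or name the members explicitly
mdadm --detail --scan >> /etc/mdadm/mdadm.conf   # persist the array across reboots
```

The array metadata lives in superblocks on the disks themselves, which is exactly why md raid is portable between machines.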
Starting off, I would stagger six drives over three sata controllers (onboard + 2x pcie), and use md raid. It's portable, fast, and gives you something to benchmark without breaking the bank. If it's not fast enough, you're only out ~$60, and you can upgrade. Your build doesn't seem enterprise-grade, so why spend so much on the HBA? I have a Marvell 88SE9215 based card, and it works great for my raid-6 array. I have 2x spinning disks connected via the onboard controller, and 2x connected via the add-on. Performance is around 250MB/s write, and 350MB/s read. Pretty close to what my single Toshiba SSD does, but HUGE and has some redundancy. I'm sure it would be crazy fast with SSDs in raid-0.
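Before spending more, it's worth measuring. A crude sequential-write check, made self-contained here by writing to a temp directory; in real use you'd point it at the array's mount point:

```shell
target=$(mktemp -d)    # stand-in for the array's mount point
# Write 64 MiB, forcing data to disk before dd reports, so the number
# reflects the device rather than the page cache:
dd if=/dev/zero of="$target/testfile" bs=1M count=64 conv=fdatasync 2> dd.log
tail -n 1 dd.log       # dd's throughput summary line
rm -rf "$target" dd.log
```

For serious numbers, fio with --direct=1 and random-I/O patterns is the usual tool; dd only shows a best-case sequential figure, which flatters raid-0.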
So you intentionally recommend using only two of the four ports on each card, because you don't trust it to deliver four times the bandwidth of a single port, and likewise only two of the six ports of the onboard SATA controller? Why not three of the card's four ports and three of the onboard ports?
And what is special about the number six anyway? For $2000 I can buy more of your $200 SSD's.
I suggested using only two of the onboard ports, because your motherboard only has 2 x SATA 6Gb/s ports.
A single PCI-E 2.0 lane can provide 500 MB/s (4 Gbit/s), and since the card I recommended connects via x4 (lanes), you should have 2 GB/s of bandwidth available. I don't think 4x SSD's will saturate that link, so feel free to run four drives per card. Just keep in mind that for $30 more, you'll have additional headroom for performance and/or expansion.
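A back-of-envelope check on that arithmetic. The 540 MB/s figure is an assumed sequential-read rate for a fast SATA3 SSD, not a measured number:

```shell
lanes=4           # the x4 slot the card sits in
per_lane_mb=500   # PCIe 2.0: ~500 MB/s usable per lane
ssd_seq_mb=540    # assumed sequential read of one fast SATA3 SSD
link_mb=$((lanes * per_lane_mb))
# ceiling division: how many such SSDs it takes to fill the link
saturate=$(( (link_mb + ssd_seq_mb - 1) / ssd_seq_mb ))
echo "${link_mb} MB/s link; ~${saturate} such SSDs to saturate it"
```

With these assumed numbers, four fast drives land roughly at the link limit, so whether they saturate it depends on the drives; either way, the extra $30 card buys real headroom rather than just expansion slots.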
Why six? It's a starting point. If six doesn't satisfy your need for speed, capacity, or bragging rights, then buy more.
You never answered my question about what you're actually trying to do. Is this a proof-of-concept, or a real workload?