Linux - Hardware: This forum is for Hardware issues.
Having trouble installing a piece of hardware? Want to know if that peripheral is compatible with Linux?
With RAID level 1 (mirroring), when writing, it is clearly necessary to write to both drives to keep them in sync. Reading, however, does not need both: correct data can be read from either drive alone. But shouldn't it be possible, when doing read-ahead or reading large blocks (there is no stripe size in RAID 1, but say anything over 1MB), to read from both drives concurrently, half from one and half from the other? For hardware RAID, as long as the controller can sustain the aggregated rate, I don't see why it couldn't effectively double single-drive read performance. Yet most of the time while reading from a RAID 1 mirror, only one drive is actually operating. Is this just a cheap controller doing that?
Cheap controllers are just faux or fake RAID. They perform as poorly as software RAID does, because they really are just software-based.
No, they do use both drives, but how each product uses them may not match what a RAID wiki describes. Don't trust any of the cheap RAID cards to follow any standard.
I'd use the cheap cards for a mirror, but for any serious work you really need a hardware RAID card, and those are not cheap.
Based solely on the price, I'd guarantee it is a hardware-based RAID card. It looks like an LSI chip.
A hardware RAID card would have features unique to that model that may allow such settings, or have them by default. A hardware RAID card is the top of the heap: if you want any sort of speed, they are built for it. That is why they cost what they do. They also seem to be more reliable in terms of MTBF.
This card is delivering substantially less than what I believe it should, in theory, be able to deliver. With RAID 1 on 2 drives, it is barely faster than one drive by itself, for reading (I expect it to be nearly double for bulk reading cases), and just slightly slower for writing (expected). With RAID 0 on 2 drives or RAID 10 on 4 drives I do see a substantial speed boost for both reading and writing, on the order of 60% more for reading and 40% more for writing. I'd expect both to be on the order of 90% for large sequential I/O, but that isn't happening.
I don't know if this is just a low-end RAID card with reduced capability (LSI does make higher-end models at a higher price), or if the issue is with the driver (for example, not passing enough I/O to the controller for it to exploit parallelism). The stride I'm using is 256K, and I've tried I/O sizes ranging from 16K to 64M, with speeds nearly level from 1M to 64M.
For RAID 1, when a read request arrives, the controller should be able to pass it to one drive; if another read request arrives before the first finishes, it should be able to pass the second one to the other drive. I do see some kind of load balancing happening, where activity alternates between drives. But only a small fraction of the time do both drives appear to be active, and the performance numbers roughly match that.
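The alternating behavior described above is consistent with simple least-busy dispatch of independent requests. A toy sketch of that idea (a simplification; md's actual RAID 1 read balancer also considers things like head position and sequential access, and a controller's firmware is opaque):

```python
def balance_reads(requests, n_drives=2):
    """Assign each independent read request to the least-busy mirror.

    `requests` is a list of (offset, length) tuples. Returns one drive
    index per request. "Busy" is approximated as total bytes currently
    queued per drive -- a deliberate simplification for illustration.
    """
    queued = [0] * n_drives
    assignment = []
    for _offset, length in requests:
        drive = min(range(n_drives), key=lambda d: queued[d])
        queued[drive] += length
        assignment.append(drive)
    return assignment
```

With equal-sized requests this degenerates to round-robin, which matches the alternating activity pattern; both drives are only busy simultaneously when at least two requests are outstanding at once, which is why a single-threaded sequential reader often keeps only one drive working.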
The other issue I have with these controllers is that there is no configuration option to use a single drive completely transparently. Even in single-drive configurations, the controller reserves some space at the start of the drive for whatever it stores there (configuration, RAID array state, etc.). I may abandon these controllers for that reason alone.
I cannot do any further tests as the last of the machines I set up with these has been shipped to its location.
Another issue with this controller is that RAID 5 write performance is significantly lower (about 4x slower) than expected even when the data being written covers a full stripe. In that case the read-modify-write cycle can be skipped entirely, since parity can be computed from the new data alone, yet that is not happening. What I don't know is whether the controller doesn't know how to avoid RMW, or whether the kernel driver isn't passing the write request in large enough chunks for the controller to avoid it. My writes were as large as 64MB in a single write operation, tested with and without the O_DIRECT flag.
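The expected gap is easy to show by counting member-disk I/Os. A rough model (hypothetical counts for a simple controller; it ignores the reconstruct-write alternative and caching) for writing one full stripe's worth of data to a RAID 5 array:

```python
def stripe_write_cost(data_disks, full_stripe_write):
    """Count member-disk I/Os to write one stripe of data on RAID 5.

    Full-stripe write: parity is computed from the new data alone, so
    the cost is data_disks data writes plus one parity write, no reads.
    Otherwise, each chunk is updated via read-modify-write: read old
    data chunk and old parity, write new data chunk and new parity.
    """
    if full_stripe_write:
        return {"reads": 0, "writes": data_disks + 1}
    return {"reads": 2 * data_disks, "writes": 2 * data_disks}
```

For a hypothetical 3+1 array, a full-stripe write costs 4 writes and no reads, while chunk-by-chunk RMW of the same stripe costs 6 reads plus 6 writes, i.e. 3x the I/O with reads serialized before writes, which is in the right ballpark for the roughly 4x slowdown observed.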
I'm hoping I will have a future scenario where I can test the Areca controllers.