Slackware: This Forum is for the discussion of Slackware Linux.
Quote:
Originally Posted by lazardo
* Hardware controllers are easier to set up but have almost zero transparency if something goes off.
Only if you fail to install the provided management software. I'd argue that you get greatly enhanced transparency with hardware RAID, as you can see not only the status of the RAID array and all RAID volumes, but also the S.M.A.R.T. data of each individual drive. Oh, and you typically get web-based management and automatic e-mail notification as well.
Quote:
Originally Posted by lazardo
* Motherboard/BIOS raid is never a good choice.
This is informally known as "fakeRAID," and it's simply a software RAID with boot support. In most cases, the actual RAID functionality is provided by a kernel RAID module, either via md or device-mapper/LVM. In other words, it is neither more nor less reliable than a Linux software RAID.
I've used "fakeRAID" quite extensively with Linux-based appliances such as routers, where disk I/O performance is not an issue, but I'd like the system to handle a failing drive without requiring a reinstall or re-imaging. Depending on the RAID BIOS(*), such setups may handle boot drive failures significantly better than a regular software RAID setup with md, as a failed drive typically won't block the boot process.
Quote:
Originally Posted by lazardo
* mdraid (linux software raid) is very mature and has a lot of visibility if something goes off.
But it can only ever be as good as the underlying controller (and its kernel driver), and the drives used. Unlike hardware RAID controllers, it doesn't monitor or handle device failures or timeouts, as the md driver works with device nodes, not physical disks. This is also true for any kind of RAID setup that relies on a particular file system driver to provide redundancy, by the way.
You can get almost the same level of functionality and reliability with an md RAID as with a hardware RAID, if you write a number of scripts to monitor and regularly scrub the array, put commands in the startup scripts to disable write cache and automatic block reallocation for all drives involved, monitor the kernel log for timeout errors, have smartd monitor the drives, and have both the scripts and smartd send e-mails whenever something untoward happens.
But it does involve a bit of work, and you still won't get the performance of a hardware RAID.
(*) Some particularly useless BIOSes will indeed handle a failed drive in a boot array, but on reboot they will hang on the BIOS startup screen waiting for the user to press a key to continue.
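The maintenance steps described above can be sketched with a few shell commands. This is only a sketch, assuming a two-drive array; the device names and e-mail address are placeholders to adjust for your own setup:

```shell
#!/bin/sh
# Sketch of md RAID maintenance, assuming an array built from
# /dev/sda and /dev/sdb; adjust device names to your setup.

# Disable the on-drive write cache, so a power loss cannot eat
# writes the md layer believes are already on disk.
for d in /dev/sda /dev/sdb; do
    hdparm -W 0 "$d"
done

# Example /etc/smartd.conf lines: monitor both drives and e-mail
# the admin when a drive reports a problem.
#   /dev/sda -a -m admin@example.com
#   /dev/sdb -a -m admin@example.com

# Watch the kernel log for ATA timeout errors (e.g. run from cron).
if dmesg | grep -qE 'ata[0-9]+.*(timeout|failed command)'; then
    echo "ATA errors detected on $(hostname)" | \
        mail -s "RAID warning" admin@example.com
fi
```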
My use case is that the RAID will be infrequently accessed.
It seems that our use cases differ, maybe you shouldn't listen too much to my experiences about this...
Quote:
Originally Posted by linuxbird
I like to think that spinning down modern drives is a reasonable way to reduce wear, and there are certainly arguments against it. But it is effective at thermal management, and at least reducing spin time.
One thing to consider when spinning drives up and down in a RAID system is that they draw the most power when spinning up. Some RAID controllers support staggered spin-up, bringing the drives online in sequence so as not to consume too much power at once, but this slows down the spin-up process.
Quote:
Originally Posted by linuxbird
There are two main WD Red 4TB drives currently used
I don't think I have tried WD Red myself, but they are probably a good choice for RAID as they have conventional magnetic recording technology.
A general question from a home computer user with two hard drives. One drive has four partitions: /, /home, a swap and a BIOS boot partition. The second disk has one partition. Both disks are 1000 GB in size. I am considering adding a third disk for /home. I have a cron job established to do a full backup of the first drive to the second, plus another to do a tailored backup of the first to the second. This second drive is basically my backup drive.
Why would I want to use RAID? At this point in time I just do not see a valid reason. Over the years I have lost a hard drive or two, but there is usually advance notice of impending doom. Restoring works just fine from a backup.
Whenever the topic of RAID comes up, for some reason we always see the same, peculiar advice being handed out where Linux software RAID is touted as somehow being preferable to, or even superior to, hardware RAID.
...
[*]Linux software RAID is notoriously brittle. If you don't believe me, do a Google or forum search for "md raid not working" or something to that effect.
...
This is made worse by the fact that the non-enterprise drives most people use have a ridiculously high timeout for read errors and auto reallocation, resulting in drives with even a single, marginal block being summarily ejected from the array. A hardware RAID controller will disable automatic S.M.A.R.T. block reallocation and set the timeout to a very low value, in order to handle a read error and an eventual reallocation itself.
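On drives that support it, this error-recovery timeout can also be lowered from userspace, which is essentially what a hardware controller does behind the scenes. A sketch, with the device name as a placeholder:

```shell
# Query the current SCT ERC (error recovery control) timeouts,
# if the drive supports the feature:
smartctl -l scterc /dev/sda

# Set the read and write error recovery timeouts to 7 seconds
# (the values are in tenths of a second), a common setting for
# drives used in RAID arrays:
smartctl -l scterc,70,70 /dev/sda

# Note: many desktop drives do not support SCT ERC at all;
# smartctl will say so, and the drive keeps its long internal
# timeout. The setting is also typically lost on power cycle,
# so it has to be reapplied at every boot.
```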
[*]Hardware RAID arrays do not suddenly fail. At all.
If a degraded array fails to rebuild, it's because it wasn't verified/scrubbed regularly, and bad blocks were allowed to silently accumulate ("bit rot"). Most RAID controllers support automatic and/or scheduled background scrubbing, but this usually has to be configured.
By the way, this is every bit as much an issue with software RAID as with hardware RAID, except the md software RAID driver doesn't support any kind of automatic background scrubbing; you have to write "check" to /sys/block/md<number>/md/sync_action, either manually or using a cron job.
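For example, a monthly scrub of an array could be scheduled like this (the array name /dev/md0 is an assumption; substitute your own):

```shell
# Trigger a full check (scrub) of /dev/md0 by hand:
echo check > /sys/block/md0/md/sync_action

# Watch the progress:
cat /proc/mdstat

# Or schedule it monthly via cron, e.g. in /etc/crontab:
#   0 3 1 * * root echo check > /sys/block/md0/md/sync_action

# Note: some distributions already ship a similar periodic
# raid-check cron job; check before adding your own.
```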
[*]Hardware RAID setups are either faster or a lot faster than software RAID, depending on the setup.
...
[*]Hardware RAID controllers can be equipped with battery-backed cache. This means you can enable writeback caching and still not have to worry too much about power outages or kernel panics.
Sure, you can enable writeback caching on the individual drives in a software RAID array as well, if you don't really care about the integrity of your data.
[*]Hardware RAID controllers handle hot-plugging and dynamic expansion and transformation/migration of arrays really well.
Whether a given SATA controller supports hot-plugging of drives is anybody's guess, especially if the controller is of the onboard variety.[/list]
Hardware RAID exists for a reason. If it weren't any good, why would all high-end servers have such controllers?
In my almost 30 years of working with servers, I've hardly ever seen a RAID controller fail. I've lost a total of two due to failed firmware updates, and there was a somewhat flaky Mylex model back in the late 1990s, but that's about it. On the other hand, I've had to recover a significant number of software arrays in both Windows and Linux, and not all of them could be successfully reassembled.
I think that it just depends on the use case. I have two software RAID1 arrays in my desktop Slackware machine, one using an nvme drive and an SSD, the other one with two HDDs. I don't care much about speed, but the "write mostly" flag helps the read speed (on RAID1 at least) when there is significant speed difference between the disks. I just need to be able to keep using the system, even if I have to power down and manually disconnect a failed drive. So, linux software RAID is enough for me, even if I have to be careful about not using SMR disks, setting up cron jobs for scrub and SMART notifications, setting timeouts etc. I had several different disk configurations over the years and several failed drives and I had no problems with corrupted arrays or any data loss. On the other hand, I never had a hardware RAID controller fail on me either.
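The "write mostly" flag mentioned above can be set when creating the mirror or toggled at runtime through sysfs. A sketch, with device names as assumptions modeled on the setup described (an NVMe partition mirrored with an HDD partition):

```shell
# Create a RAID1 with the slow HDD marked write-mostly, so reads
# are served from the fast NVMe device whenever possible:
mdadm --create /dev/md0 --level=1 --raid-devices=2 \
    /dev/nvme0n1p2 --write-mostly /dev/sda2

# Or toggle the flag on a member of a running array via sysfs:
echo writemostly > /sys/block/md0/md/dev-sda2/state

# And clear it again:
echo -writemostly > /sys/block/md0/md/dev-sda2/state
```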
There is no point in buying an expensive RAID controller for my home desktop, but I would never use software RAID for professional use (although I did once without problems), where high availability and/or speed is an issue and the cost of a hardware RAID controller is not a concern. Software RAID and hardware RAID both have their uses, depending on your needs.
Quote:
Why would I want to use RAID? At this point in time I just do not see a valid reason. Over the years I have lost a hard drive or two, there is usually advance notice of impending doom. Restoring works just fine from a backup.
RAID is not a replacement for backup. In short, RAID gives me three good things:
Higher bandwidth, with many disks working together
Bigger contiguous space for a single file system (hundreds of terabytes)
Better availability: no need to restore a file system from backup when a single drive crashes. The file system remains available even while the RAID system rebuilds the contents of a replacement drive.
But for home use? Nah, at home I only have RAID1 in a NAS box which I mostly use for backup purposes.