LinuxQuestions.org
Old 11-29-2023, 08:56 PM   #16
Ser Olmy
Senior Member
 
Registered: Jan 2012
Distribution: Slackware
Posts: 3,342

Rep: Reputation: Disabled

Quote:
Originally Posted by lazardo View Post
* Hardware controllers are easier to setup but have almost zero transparency if something goes off.
Only if you fail to install the provided management software. I'd argue that you get greatly enhanced transparency with hardware RAID, as you can see not only the status of the RAID array and all RAID volumes, but also the S.M.A.R.T. data of each individual drive. Oh, and you typically get web-based management and automatic e-mail notification as well.
Quote:
Originally Posted by lazardo View Post
* Motherboard/BIOS raid is never a good choice.
This is informally known as "fakeRAID," and it's simply a software RAID with boot support. In most cases, the actual RAID functionality is provided by a kernel RAID module, either via md or device-mapper/LVM. In other words, it is neither more nor less reliable than a Linux software RAID.

I've used "fakeRAID" quite extensively with Linux-based appliances such as routers, where disk I/O performance is not an issue, but I'd like the system to handle a failing drive without requiring a reinstall or re-imaging. Depending on the RAID BIOS(*), such setups may handle boot drive failures significantly better than a regular software RAID setup with md, as a failed drive typically won't block the boot process.
Quote:
Originally Posted by lazardo View Post
* mdraid (linux software raid) is very mature and has a lot of visibility if something goes off.
But it can only ever be as good as the underlying controller (and its kernel driver), and the drives used. Unlike hardware RAID controllers, it doesn't monitor or handle device failures or timeouts, as the md driver works with device nodes, not physical disks. This is also true for any kind of RAID setup that relies on a particular file system driver to provide redundancy, by the way.

You can get almost the same level of functionality and reliability with an md RAID as with a hardware RAID, if you write a number of scripts to monitor and regularly scrub the array, put commands in the startup scripts to disable write cache and automatic block reallocation for all drives involved, monitor the kernel log for timeout errors, have smartd monitor the drives, and have both the scripts and smartd send e-mails whenever something untoward happens.

But it does involve a bit of work, and you still won't get the performance of a hardware RAID.

(*) Some particularly useless BIOSes will indeed handle a failed drive in a boot array, but on reboot they will hang on the BIOS startup screen waiting for the user to press a key to continue.
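The scripted monitoring described above can be sketched roughly as follows. This is a minimal illustration, not a turnkey setup: the array name (md0) and the use of mail(1) for notification are assumptions, and smartd plus the write-cache settings (hdparm -W 0) are left to their own configuration.

```shell
#!/bin/sh
# Rough sketch of an md scrub/monitor job for cron -- md0 and the
# recipient address are assumptions; adjust to the actual array.

MDDEV=md0
ADMIN=root

# /proc/mdstat shows "[UU]" for a healthy two-disk mirror and
# "[U_]" (or "[_U]") once a member has been kicked out.
array_is_clean() {
    grep -q '\[UU\]' "$1"
}

# Kick off a scrub (weekly is a common choice). Guarded so the
# script is a no-op on machines without the array.
if [ -w "/sys/block/$MDDEV/md/sync_action" ]; then
    echo check > "/sys/block/$MDDEV/md/sync_action"
fi

# Mail the raw mdstat output whenever the array is anything but clean.
if [ -r /proc/mdstat ] && ! array_is_clean /proc/mdstat; then
    mail -s "md array $MDDEV degraded on $(hostname)" "$ADMIN" < /proc/mdstat || true
fi
```

The "[UU]" test only covers a two-member mirror; larger arrays need a pattern matching their member count.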
 
Old 11-30-2023, 12:30 AM   #17
henca
Member
 
Registered: Aug 2007
Location: Linköping, Sweden
Distribution: Slackware
Posts: 976

Rep: Reputation: 665
Quote:
Originally Posted by linuxbird View Post
My use case is that the RAID will be infrequently accessed.
It seems that our use cases differ; maybe you shouldn't listen too much to my experiences here...

Quote:
Originally Posted by linuxbird View Post
I like to think that spinning down modern drives is a reasonable way to reduce wear, and there are certainly arguments against it. But it is effective at thermal management, and at least reducing spin time.
One thing to consider when spinning drives up and down in a RAID system is that they draw the most power while spinning up. Some RAID controllers support staggered spin-up, bringing the drives online in sequence to limit peak power draw, but that slows down the spin-up process.
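As a side note on spin-down: drives can be told to spin down on their own with hdparm's -S flag, whose encoding is non-obvious (values 1-240 count in 5-second units, 241-251 in 30-minute units, per hdparm(8)). A small helper to compute the value, purely illustrative:

```shell
#!/bin/sh
# Convert a wanted idle timeout in minutes into the value hdparm -S
# expects. Timeouts between 21 and 29 minutes are not representable
# in this encoding and round down to 20. (Illustrative helper, not
# an hdparm feature.)
standby_value() {
    secs=$(( $1 * 60 ))
    if [ "$secs" -le 1200 ]; then
        echo $(( secs / 5 ))        # 5-second units, up to 20 minutes
    else
        echo $(( 240 + $1 / 30 ))   # 30-minute units above that
    fi
}

# e.g. spin down after 10 idle minutes:
#   hdparm -S "$(standby_value 10)" /dev/sdX
```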

Quote:
Originally Posted by linuxbird View Post
There are two main WD Red 4TB drives currently used
I don't think I have tried WD Red myself, but they are probably a good choice for RAID, as they use conventional magnetic recording (CMR).

regards Henrik
 
Old 11-30-2023, 01:28 AM   #18
chrisretusn
Senior Member
 
Registered: Dec 2005
Location: Philippines
Distribution: Slackware64-current
Posts: 2,976

Rep: Reputation: 1552
@Ser Olmy, great post.

A general question: I am a home computer user with two hard drives. One drive has four partitions: /, /home, a swap, and a BIOS boot partition. The second disk has one partition. Both disks are 1000 GB in size. I am considering adding a third disk for /home. I have a cron job established to do a full backup of the first drive to the second, plus another to do a tailored backup of the first to the second. This second drive is basically my backup drive.

Why would I want to use RAID? At this point in time I just do not see a valid reason. Over the years I have lost a hard drive or two; there is usually advance notice of impending doom, and restoring from a backup works just fine.
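A hypothetical sketch of the kind of cron-driven full backup described above; the paths, schedule, and script name are made up for illustration, not the poster's actual setup:

```shell
#!/bin/sh
# Hypothetical full-backup script; SRC and DEST are assumptions.
SRC=/
DEST=/mnt/backup/full/

# Build the rsync invocation: archive mode plus hard links, ACLs and
# xattrs; --delete makes the copy an exact mirror; pseudo-filesystems
# and the destination itself are skipped.
backup_cmd() {
    echo rsync -aHAX --delete \
         --exclude=/proc --exclude=/sys --exclude=/dev --exclude=/run \
         --exclude="$DEST" "$SRC" "$DEST"
}

# Run it only when the backup drive is actually mounted there:
mountpoint -q /mnt/backup 2>/dev/null && eval "$(backup_cmd)" || true

# crontab entry (root), nightly at 02:30:
#   30 2 * * * /usr/local/sbin/full-backup.sh
```

The mountpoint guard matters: without it, a backup drive that failed to mount would silently fill the root filesystem instead.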
 
Old 11-30-2023, 05:45 AM   #19
theodore.s
Member
 
Registered: Jul 2018
Location: Athens, Greece
Distribution: Slackware
Posts: 57

Rep: Reputation: 29
Quote:
Originally Posted by Ser Olmy View Post
Whenever the topic of RAID comes up, for some reason we always see the same, peculiar advice being handed out where Linux software RAID is touted as somehow being preferable to, or even superior to, hardware RAID.

...
* Linux software RAID is notoriously brittle. If you don't believe me, do a Google or forum search for "md raid not working" or something to that effect.

...

This is made worse by the fact that the non-enterprise drives most people use have a ridiculously high timeout for read errors and auto reallocation, resulting in drives with even a single, marginal block being summarily ejected from the array. A hardware RAID controller will disable automatic S.M.A.R.T. block reallocation and set the timeout to a very low value, in order to handle a read error and an eventual reallocation itself.
* Hardware RAID arrays do not suddenly fail. At all.

If a degraded array fails to rebuild, it's because it wasn't verified/scrubbed regularly, and bad blocks were allowed to silently accumulate ("bit rot"). Most RAID controllers support automatic and/or scheduled background scrubbing, but this usually has to be configured.

By the way, this is every bit as much an issue with software RAID as with hardware RAID, except the md software RAID driver doesn't support any kind of automatic background scrubbing; you have to write "check" to /sys/block/md<number>/md/sync_action, either manually or using a cron job.
* Hardware RAID setups are either faster or a lot faster than software RAID, depending on the setup.

...
* Hardware RAID controllers can be equipped with battery-backed cache. This means you can enable writeback caching and still not have to worry too much about power outages or kernel panics.

Sure, you can enable writeback caching on the individual drives in a software RAID array as well, if you don't really care about the integrity of your data.

* Hardware RAID controllers handle hot-plugging and dynamic expansion and transformation/migration of arrays really well.

Whether a given SATA controller supports hot-plugging of drives is anybody's guess, especially if the controller is of the onboard variety.

Hardware RAID exists for a reason. If it wasn't any good, why would all high-end servers have such controllers?

In my almost 30 years of working with servers, I've hardly ever seen a RAID controller fail. I've lost a total of two due to failed firmware updates, and there was a somewhat flaky Mylex model back in the late 1990s, but that's about it. On the other hand, I've had to recover a significant number of software arrays in both Windows and Linux, and not all of them could be successfully reassembled.
And just for reference: https://raid.wiki.kernel.org/index.php/Linux_Raid and especially https://raid.wiki.kernel.org/index.php/Timeout_Mismatch

I think it just depends on the use case. I have two software RAID1 arrays in my desktop Slackware machine, one using an NVMe drive and an SSD, the other two HDDs. I don't care much about speed, but the "write-mostly" flag helps read speed (on RAID1 at least) when there is a significant speed difference between the disks. I just need to be able to keep using the system, even if I have to power down and manually disconnect a failed drive. So Linux software RAID is enough for me, even if I have to be careful about not using SMR disks, setting up cron jobs for scrubbing and SMART notifications, setting timeouts, etc. I have had several different disk configurations over the years, and several failed drives, with no corrupted arrays or data loss. On the other hand, I have never had a hardware RAID controller fail on me either.

There is no point in buying an expensive RAID controller for my home desktop, but I would never use software RAID professionally (although I did once, without problems) where high availability and/or speed matters and the cost of a hardware RAID controller is not an issue. Software RAID and hardware RAID both have their uses, depending on your needs.
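The "setting timeouts" point refers to the timeout mismatch covered by the second wiki link above: a desktop drive can retry a bad sector internally for minutes while the kernel's default command timeout is 30 seconds, so md ejects the drive. A hedged sketch of the usual fix, with the 7-second ERC value and 180-second fallback taken from the wiki's advice and the device names assumed:

```shell
#!/bin/sh
# Pick a kernel command timeout for a drive, depending on whether it
# accepted SCT ERC: with ERC capped at 7s, 30s is plenty; without it,
# give the drive 180s to finish its long internal retries instead of
# letting md eject it.
kernel_timeout() {
    if [ "$1" = "erc_ok" ]; then echo 30; else echo 180; fi
}

for d in sda sdb; do            # assumed device names
    # smartctl takes the ERC limits in tenths of a second: 70 = 7.0s
    if smartctl -l scterc,70,70 "/dev/$d" >/dev/null 2>&1; then
        t=$(kernel_timeout erc_ok)
    else
        t=$(kernel_timeout no_erc)
    fi
    echo "$t" 2>/dev/null > "/sys/block/$d/device/timeout" || true
done

# The "write-mostly" flag mentioned above is set per mirror member, e.g.:
#   mdadm --create /dev/md0 --level=1 --raid-devices=2 \
#         /dev/nvme0n1p2 --write-mostly /dev/sda2
```

Both settings are lost on reboot or drive power-cycle, so they belong in a startup script rather than being set once by hand.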
 
1 member found this post helpful.
Old 11-30-2023, 04:18 PM   #20
henca
Member
 
Registered: Aug 2007
Location: Linköping, Sweden
Distribution: Slackware
Posts: 976

Rep: Reputation: 665
Quote:
Originally Posted by chrisretusn View Post
Why would I want to use RAID? At this point in time I just do not see a valid reason. Over the years I have lost a hard drive or too, there is usually advance notice of impending doom. Restoring works just fine from a backup.
RAID is not a replacement for backup. In short, RAID gives me three good things:
  1. Higher bandwidth, with many disks working together
  2. A bigger contiguous space for a single file system (hundreds of terabytes)
  3. Better availability: no need to restore a file system from backup when a single drive crashes. The file system remains available even while the RAID system is rebuilding the contents of a replacement drive.

But for home use? Nah, at home I only have RAID1 in a NAS box which I mostly use for backup purposes.

regards Henrik
 
2 members found this post helpful.
  



Tags
raid5, slackware 15.0


