Slackware: This forum is for the discussion of Slackware Linux.
All the disks are connected to an LSI 3081E controller, and there are no available connectors left on it.
Because I need some more disks, I've bought another 3081E controller, but instead of just creating md4 from disks connected to this new controller, I thought I'd ask here for opinions.
Would I benefit in any way from spreading all the disks across the two controllers?
It could look something like this ((a) or (b) signifies which controller the disk is connected to):
The above would provide a bit of redundancy. If one controller dies, the RAID volumes can keep running on disks connected to the other controller.
But what would it do to performance? Better? Worse? Same?
My first inclination was to simply connect the three new disks to the new controller and then create the new md4 volume from those disks. I suspect that will give me the best performance. But if the performance loss from spreading the disks is negligible, then perhaps I'd rather have the added bonus of some redundancy.
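For reference, the "just put the three new disks on the new controller" option could be sketched like this. This is only a sketch: the device names (/dev/sdk1 and friends) are assumptions, the RAID level is a guess (a 3-way mirror, to match the existing RAID 1 arrays), and the commands need root on the real system.

```shell
# Hypothetical sketch -- device names and RAID level are assumptions.
# Create md4 as a 3-way RAID 1 from the three new disks:
mdadm --create /dev/md4 --level=1 --raid-devices=3 \
    /dev/sdk1 /dev/sdl1 /dev/sdm1

# Persist the array definition and watch the initial sync:
mdadm --detail --scan >> /etc/mdadm.conf
cat /proc/mdstat
```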
Any and all opinions/experiences are more than welcome.
Performance across controllers would be better, but you may or may not get redundancy, depending on which RAID levels you're using and where the disks are located. For example:
Code:
md2: sdb1(a) + sdc1(a) + sdj1(b)
If this is RAID 5 and you lose controller "a", the array is broken (RAID 5 tolerates the loss of only one disk, and here two are gone at once).
If this is RAID 5 and you lose controller "b", the array keeps running (only one disk is lost).
You can mirror lots of disks in RAID1. I do it all the time, for data where uptime is important. The more disks, the less the chance of them all crapping out at the same time.
That might be (or might not be?) a limitation of mdadm, but it's certainly not a limitation of mirroring. I have seen a production system here at work using a triple mirror. It's an old COBOL system writing to flat files. At lunch time the application is stopped, the third mirror is broken, the application restarts on the 2-way mirror, and a full backup is then made of the offline disk. Once the backup is complete, the third disk is re-silvered. The "downtime" of the COBOL application is then only a few seconds. The backup itself takes an hour or so, and the resilvering is pretty quick; it depends on how busy the system was.
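With mdadm, that break/backup/re-silver cycle could be sketched roughly as below. Everything here is an assumption (md2 as a 3-way mirror, /dev/sdj1 as the third member, dd to an image file as the backup step), and the application stop/restart around it is not shown. The "run" helper only prints the plan, so nothing touches real disks; for real use you'd execute the commands directly as root.

```shell
# Dry-run sketch of a triple-mirror backup cycle. Device names, array,
# and backup target are hypothetical; "run" prints instead of executing.
run() { echo "$@"; }

MD=/dev/md2        # assumed 3-way RAID 1 array
THIRD=/dev/sdj1    # assumed third mirror member

PLAN=$(
    run mdadm "$MD" --fail "$THIRD"               # detach the third copy
    run mdadm "$MD" --remove "$THIRD"
    run dd if="$THIRD" of=/backup/md2.img bs=1M   # back up the frozen copy
    run mdadm "$MD" --add "$THIRD"                # re-add; md resyncs ("resilvers") it
)
printf '%s\n' "$PLAN"
```

The resync after `--add` is what the post calls re-silvering; `cat /proc/mdstat` shows its progress.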
Back to the original question: since all your arrays are RAID 1, clearly redundancy is the priority, so if you can insulate yourself from controller failure by spreading the disks over the two controllers, that would be consistent with that goal.
I doubt the performance difference would be that great anyway, but I have no benchmarks to back that up.