Is overlapping two RAID5 arrays on same drives a bad idea ??
Red Hat — This forum is for the discussion of Red Hat Linux.
Hi everyone,
I'm continuing to learn from the experience this community holds. My friend tommylovell suggested this partitioning scheme for me, and it perfectly suits my current situation, so I'd like you to shed some light on it:
Is placing two RAID5 arrays on the disks as shown below advisable? Will it create performance problems?
Code:
sda (500G)       sdb (1T)         sdc (1T)         sdd (1T)
250M <-RAID1->   250M             unused           unused
470G <-RAID5->   470G  <-RAID5->  470G  <-RAID5->  470G     first RAID5 array
                 500G  <-RAID5->  500G  <-RAID5->  500G     second RAID5 array
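For reference, the layout above could be built with mdadm roughly as follows. This is only a sketch: the partition names (/dev/sda1, /dev/sdb2, etc.) are assumptions inferred from the diagram, and mdadm --create is destructive, so adapt it to your actual partition table before running anything.

```shell
# Sketch only -- partition names are assumed from the diagram above.

# /dev/md0: RAID1 from the two 250M partitions on sda and sdb
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1

# /dev/md1: first RAID5 array from the four 470G partitions
mdadm --create /dev/md1 --level=5 --raid-devices=4 \
      /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2

# /dev/md2: second RAID5 array from the three 500G partitions
mdadm --create /dev/md2 --level=5 --raid-devices=3 \
      /dev/sdb3 /dev/sdc3 /dev/sdd3

cat /proc/mdstat   # verify all three arrays assembled
```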
Are you creating partitions on each physical disk and then raid-ing between the disks?
Typically RAID5 would be used to protect against data loss due to the physical failure of the disks, so you'd create one RAID 5 array with all your disks and then partition as needed on top of that. Basically, the RAID array would expose a single logical drive to the OS to be partitioned.
However, I'm not sure from your post where or how you're splitting up the disks and coming up with the RAID arrays, since the numbers for the 470G/500G partitions don't seem to add up to me. If you are trying to use one partition as a member of two arrays, I'm pretty sure that wouldn't work.
Quote:
Are you creating partitions on each physical disk and then raid-ing between the disks?
It's not exactly raw partitions; it's software RAID built on partitions. Quoting the documentation:
"Software RAID devices are so-called 'block' devices, like ordinary disks or disk partitions. A RAID device is 'built' from a number of other block devices - for example, a RAID-1 could be built from two ordinary disks, or from two disk partitions (on separate disks)."
Quote:
Typically RAID5 would be used to protect against data loss due to the physical failure of the disks, so you'd create one RAID 5 array with all your disks and then partition as needed on top of that. Basically, the RAID array would expose a single logical drive to the OS to be partitioned.
Yes, you're right, and that's the main concept of RAID in general. But as I said, it's much more flexible on Linux: it lets you combine partitions from different disks to create multiple arrays (even from the same disk, though that's meant for testing purposes only). That's why I fell in love with the OS.
Quote:
However, I'm not sure from your post where or how you're splitting up the disks and coming up with the RAID arrays, since the number of 470G/500G partitions don't seem to add up to me.
This approach works because md elects the smallest member size, which is 470GB here. I gave it a shot before posting.
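To make the arithmetic concrete, here is a small sketch of how md's "smallest member wins" rule plays out for the two arrays in the diagram: usable RAID5 space is (number of members - 1) times the smallest member. The function and the GB figures are mine, taken from the layout in the first post.

```shell
#!/bin/sh
# Usable RAID5 capacity: (number of members - 1) * smallest member size.
# Sizes are in GB; the member lists follow the layout in the original post.
raid5_capacity() {
    smallest=$1
    n=0
    for size in "$@"; do
        [ "$size" -lt "$smallest" ] && smallest=$size
        n=$((n + 1))
    done
    echo $(( (n - 1) * smallest ))
}

raid5_capacity 470 470 470 470   # first array, four 470G members  -> 1410
raid5_capacity 500 500 500       # second array, three 500G members -> 1000
```

So even with sda's smaller 470G partition in the first array, no capacity is wasted there, because all four members are already 470G.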
Quote:
If you are trying to use one of the partitions as a member drive in two arrays, I'm pretty sure that wouldn't work.
I'm not trying to use one partition as a member of two arrays; you must have misunderstood me.
This is what I'm trying to go for...
I guess it would work, but I don't think it will be very fast. In my experience, RAID5 with three disks is slow, particularly when writing. It's also very complicated: when a disk fails, it's a lot easier to mess up the rebuild. And I bet that if someone else tried to fix it after a disk failure, they would be very confused.
If you want performance, I think it would be a lot faster to replace sda with a 1TB drive and use RAID 1+0 on the four disks instead. It would also be a lot simpler, both for the OS and for maintaining it later.
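For comparison, that alternative comes down to a single mdadm command. A sketch, assuming sda has been replaced with a 1TB drive and each disk carries one large partition (the partition names are assumptions):

```shell
# Sketch only -- assumes four 1TB disks, each with a single partition.
# One RAID10 array across all four disks; usable capacity ~2TB.
mdadm --create /dev/md1 --level=10 --raid-devices=4 \
      /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
```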
I see. I haven't used software RAID on partitions before, just regular full-disk RAID. I'd have to agree with Guttorm, though, regarding the performance issues. As I'm sure you know, RAID modes that are supposed to increase performance do so by doing simultaneous writes on different physical devices in order to reduce the total time to complete the file operation. Given the way you have the partitions set up, you might see some performance increase if the different RAID arrays aren't being accessed simultaneously. But if you have heavy simultaneous access to /dev/md1 and /dev/md2, you're not getting any performance increase, since the three physical disks (sdb, sdc, sdd) are all still being accessed. I suppose the driver authors could have coded some clever tricks to try to optimize these scenarios, but I doubt it would have a significant impact.
Quote:
And I bet if someone else tried to fix it after a disk failure, they would be very confused.
I have to agree with you on this.
Quote:
If you want performance... It would also be a lot simpler, both for the OS, and for maintaining it later.
I'm not looking for performance only; I'm trying to make it survive a hardware failure as well, while maintaining a big, resilient /backup LV to keep backed-up data in.
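On that note, the two RAID5 arrays could be pooled into one resilient logical volume with LVM, roughly like this. A sketch only: the volume group and LV names are assumptions, and /dev/md1 and /dev/md2 follow the layout in the first post.

```shell
# Sketch only -- array, VG, and LV names are assumptions.
pvcreate /dev/md1 /dev/md2             # mark both arrays as LVM physical volumes
vgcreate vg_backup /dev/md1 /dev/md2   # pool them into one volume group
lvcreate -l 100%FREE -n backup vg_backup   # one big LV spanning both arrays
mkfs.ext4 /dev/vg_backup/backup
mount /dev/vg_backup/backup /backup
```

Each underlying md array still survives a single-disk failure on its own, so the LV stays intact as long as no array loses two members at once.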