[SOLVED] Software RAID setup with newly installed hard drives
I have a server with SUSE Linux installed.
Right now it has two 500 GB hard drives, sda and sdc.
I added two more 500 GB drives, sdb and sdd, to try to set up a software RAID-1.
So I guess I should select those two and add them as RAID partitions.
Now they show up as sdb1 and sdd1, both of type Linux RAID.
Then in the Expert Partitioner, I select RAID and add them both.
My confusion is here: am I creating a RAID between sdb1 and sdd1? Because that's not what I want. I want to RAID sda/sdc with sdb/sdd, so it would be 1 TB after the RAID.
Also, should I mount the partition? I guess mounting it makes it accessible and not mounting it makes it inaccessible to regular users?
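For reference, outside of YaST the same two-partition mirror is usually created with mdadm. A minimal sketch, assuming the new partitions are sdb1 and sdd1 and are not in use (the device names and mount point are taken from this thread; the commands destroy any data on those partitions):

```shell
# Create a RAID-1 mirror from the two new partitions (run as root).
# WARNING: this destroys any existing data on sdb1 and sdd1.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdd1

# Watch the initial sync progress.
cat /proc/mdstat

# Put a filesystem on the array and mount it so it becomes accessible.
mkfs.ext4 /dev/md0
mkdir -p /mnt/data
mount /dev/md0 /mnt/data

# Record the array so it is assembled automatically at boot
# (on SUSE the config file is /etc/mdadm.conf).
mdadm --detail --scan >> /etc/mdadm.conf
```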
RAID-1 is mirroring; you will not get 1 TB out of four 500 GB drives.
Another question is whether this RAID will be used for boot and root or only for data. Having a separate drive for boot and root will simplify your setup.
RAID-1 is about redundancy. You have two partitions; you create a RAID array with both of them, and you end up with a RAIDed partition that is as large as the smaller of the two.
RAID-0 stripes disks, that is, it continues from one partition onto the next so the RAIDed partition looks as large as the sum of the two. But that is more or less anti-redundant: if either of the two disks fails, your complete array is inaccessible. There are RAID configurations that let you RAID this striped array again so you get your redundancy back. Don't. It is a bad idea.
What you can do is install LVM on top of the RAID. Then you can combine the two RAIDed partitions into one large volume. On this page: https://wiki.archlinux.org/index.php...e_RAID_and_LVM the first diagram shows exactly what I mean. I think that is what you want. Once you get the picture (no pun intended) you can search for how to set this up on your SUSE system.
If you have four devices (sda-sdd) you are free to choose which device you use for each part of a RAIDed partition.
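The LVM-on-top-of-RAID layout described above could be sketched like this; a hedged example, assuming two RAID-1 mirrors md0 and md1 already exist, with the volume group and logical volume names (vg_data, lv_data) chosen purely for illustration:

```shell
# Turn each RAID-1 array into an LVM physical volume.
pvcreate /dev/md0 /dev/md1

# Pool both mirrors into one volume group (the name is arbitrary).
vgcreate vg_data /dev/md0 /dev/md1

# Carve one logical volume spanning both mirrors (~1 TB total here).
lvcreate -l 100%FREE -n lv_data vg_data

# Filesystem and mount as usual.
mkfs.ext4 /dev/vg_data/lv_data
mount /dev/vg_data/lv_data /mnt/data
```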
@OP, if you want 1 TB of space over four 500 GB drives, RAID 1 will give you 500 GB (with four copies of the data). What you want is either RAID 10 or RAID 01. RAID 10 is RAID-0ing two RAID-1 arrays, so you'll have:
RAID 1: sda (500GB) + sdb (500GB) = md0 (500GB) // if either drive fails, the mount remains, degraded
RAID 1: sdc (500GB) + sdd (500GB) = md1 (500GB) // if either drive fails, the mount remains, degraded
RAID 0: md0 (500GB) + md1 (500GB) = md2 (1000GB) // if either group fails, this breaks, losing all data on all disks
Now, this can survive at least one drive failure, maybe two: if two drives from the same group fail (e.g. sda and sdb) it's kaput, but if two from different groups fail (e.g. sda and sdc) it'll be fine. You can also do RAID 01:
RAID 0: sda (500GB) + sdb (500GB) = md0 (1000GB) // if either drive fails, the mount breaks
RAID 0: sdc (500GB) + sdd (500GB) = md1 (1000GB) // if either drive fails, the mount breaks
RAID 1: md0 (1000GB) + md1 (1000GB) = md2 (1000GB) // if either group fails, the other group keeps the data
Personally, I recommend RAID 10, because if one drive fails you only have to replace that one drive and rebuild that one mirror, rather than facing a cascading avalanche of failed arrays that each need rebuilding.
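A hedged sketch of building the recommended RAID 10 layout with mdadm; the partition names follow the thread's drive naming and assume each disk has a single Linux-RAID partition (all data on these drives would be destroyed):

```shell
# Two RAID-1 mirrors, each from a pair of 500 GB drives.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1

# Stripe the two mirrors together into a ~1 TB RAID-0 (= RAID 10).
mdadm --create /dev/md2 --level=0 --raid-devices=2 /dev/md0 /dev/md1

# Alternatively, Linux md can build RAID 10 as a single array:
# mdadm --create /dev/md2 --level=10 --raid-devices=4 /dev/sd[abcd]1
```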
Note that the more nested RAID arrays and disks you add, the longer seek times can get, since each drive has to seek to the position of the data you want (depending on the RAID implementation, maybe only one of the two HDDs in each mirror needs to seek; some implementations read both copies to verify them against each other in real time).
So whenever you access something on md2 in RAID 10 (the first example):
md2 works out where the data is on md0 and md1 and issues requests to locate it on them.
md0 and md1 each work out where the data is on their physical disks (i.e. sda & sdb, sdc & sdd), then seek to it.
md0 and md1 then read the data from their disks (and may verify the copies match) and pipe it back to the administrative layer that is md2.
md2 then interleaves the two pieces of data and hands the result to the process that requested it.
With RAID 01 it's very much the same, just swapped:
md2 works out where the data is on md0 and md1 and issues a request to one (or both) of them to locate the data.
md1 (and/or md0) works out where the data is on its physical disks (i.e. sda & sdb, or sdc & sdd), then seeks to it on both of its disks.
md1 (and/or md0) then interleaves the data from its two disks,
and pipes it back to the administrative layer that is md2.
md2 then gives the data to the process that requested it.
I seem to be having trouble setting up my RAID. sda, where SUSE 11 SP2 is installed, has sda2, which I set as Linux RAID, but it is not showing up when I build my RAID table.
I have attached two screenshots. Could it be because sda2 is mounted?
You can't add a mounted partition to a RAID, especially not one on the drive that holds the OS itself. Your best bet is to back up, wipe the disk, set up the RAID, and then reinstall the OS.
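One quick way to check whether a partition is mounted (and therefore unavailable to mdadm) before trying to add it to an array; a sketch using standard tools, with the device names taken from this thread:

```shell
# Show the block-device tree with mount points; a non-empty
# MOUNTPOINT column means the partition is in use.
lsblk -o NAME,SIZE,TYPE,FSTYPE,MOUNTPOINT /dev/sda

# Or query one partition directly; findmnt exits 0 if it is mounted.
findmnt /dev/sda2 && echo "sda2 is mounted - it cannot be added to a RAID"
```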
Thank you guys so much for all of your suggestions and help. For a variety of reasons:
1. the server doesn't have an on-board RAID controller
2. not enough hard drives
3. we found a newer, bigger, better server with a better RAID controller
we have simply decided to configure the new server instead. The OS will be on a 2 x 160 GB RAID-0 setup and the data will be stored on a 4 x 500 GB RAID-10 setup. This way we have redundancy on both the OS and the data side.
Thank you all for your help. I really appreciate it.
the OS will be on a 2 x 160 GB RAID-0 setup and the data will be stored on a 4 x 500 GB RAID-10 setup. This way we have redundancy on both the OS and the data side.
RAID-0 does NOT have redundancy. RAID-0 is the opposite of redundant, it actually INCREASES the likelihood of complete data loss over just using a single drive.
But he clearly said that only the OS will be stored on it; the data will be stored on a RAID 10 setup.
I assume he just needs fast read access to the OS partition (configs, programs, etc.) but doesn't care if it's lost (he has backups, or it's relatively simple to recreate).
I'm an idiot, I didn't read his second sentence. Yes, I agree, RAID 0 isn't redundancy, and you are increasing the failure rate: if either drive fails, you lose all data (it's striped across both, with no backup).