soft raid setup with newly installed harddrives
Hi all
I have a server with SUSE Linux installed. Right now it has two 500 GB hard drives, sda and sdc. I added two more 500 GB drives, sdb and sdd, to try to set up a software RAID-1. So I guess I should select those two and add them as RAID partitions: now they show up as sdb1 and sdd1, and they are both of type Linux RAID. Then in the Expert Partitioner I select RAID and add them both. My confusion is here: am I mirroring sdb1 against sdd1? Because that's not what I want. I want to mirror sda/sdc with sdb/sdd, so it would be 1 TB after the RAID. Also, should I mount the partition? I guess mounting it makes it accessible and not mounting it makes it inaccessible to regular users? Thanks guys |
Let's start from the beginning.
RAID-1 is mirroring; you will not get 1 TB out of four 500 GB drives. Another question is whether this RAID will be used for boot and root or only for data. Having a separate drive for boot and root will simplify your setup. |
RAID-1 is about redundancy. You have two partitions. You create a RAID array from both of them, and you end up with a RAIDed partition which is as large as the smaller of the two.
RAID-0 is able to stripe disks, that is, to continue one partition onto the next, so the RAIDed partition looks as large as the sum of the two. But that is more or less anti-redundant: if one of the two disks fails, your complete array is inaccessible.

I know there are RAID configurations which let you RAID this striped array again so you have your redundancy back. Don't. It is a bad idea. What you can do instead is install LVM on top of the RAID. Then you can add the two RAIDed partitions together to form one large volume. On this page: https://wiki.archlinux.org/index.php...e_RAID_and_LVM the first diagram shows exactly what I mean. I think that is what you want.

Once you have the picture (no pun intended) you can search for how to set this up on your SUSE system. If you have 4 devices (sda-sdd) you are free to choose which devices you use as parts of a RAIDed partition.

jlinkels |
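A minimal sketch of that RAID-plus-LVM layout with mdadm, assuming the four drives are partitioned as sda1/sdb1/sdc1/sdd1 (type Linux RAID) and using hypothetical names `vg_data`/`lv_data` for the LVM pieces. These commands are destructive; only run them against disks you have backed up:

```shell
# Two RAID-1 mirrors from the four 500 GB drives
# (partition names are assumptions; adjust to your system).
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1

# Put LVM on top of the two mirrors and join them into one ~1 TB volume.
pvcreate /dev/md0 /dev/md1
vgcreate vg_data /dev/md0 /dev/md1
lvcreate -l 100%FREE -n lv_data vg_data

# Format and mount the resulting logical volume.
mkfs.ext4 /dev/vg_data/lv_data
mount /dev/vg_data/lv_data /mnt/data
```

Each mirror survives a single-disk failure on its own, and LVM just concatenates the two mirrors into one addressable volume.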
Hi jlinkels,
So basically, I should set up RAID among the 4 disks, then install LVM and use it to merge the two RAID partitions, right? Please ignore my previous questions. davy |
@OP, if you want 1 TB of space over four 500 GB drives, RAID 1 will give you 500 GB of space (and four duplicates). What you want to do is either RAID 10 or RAID 01.

RAID 10 is RAID-0ing two RAID 1 arrays, so you'll have:

Raid 1: sda (500GB) + sdb (500GB) = md0 (500GB) // If either drive fails, the mount remains, degraded.
Raid 1: sdc (500GB) + sdd (500GB) = md1 (500GB) // If either drive fails, the mount remains, degraded.
Raid 0: md0 (500GB) + md1 (500GB) = md2 (1000GB) // If either group fails, this breaks, resulting in loss of all data over all disks.

This can survive at least one drive failure, maybe two: if two drives from the same group fail (e.g. sda and sdb), it's kaput, but if two from different groups fail (e.g. sda and sdc), it'll be fine.

You can also do RAID 01:

Raid 0: sda (500GB) + sdb (500GB) = md0 (1000GB) // If either drive fails, the mount breaks.
Raid 0: sdc (500GB) + sdd (500GB) = md1 (1000GB) // If either drive fails, the mount breaks.
Raid 1: md0 (1000GB) + md1 (1000GB) = md2 (1000GB) // If either group fails, the other group keeps the data.

Personally, I recommend RAID 10, because if one drive fails you only have to replace that one drive and rebuild that one group, rather than face a cascading avalanche of failing RAID arrays that each need to be rebuilt.

One thing worth knowing: the more nested RAID arrays and disks you add, the longer the seek time, as each drive has to seek to the position of the data you want (depending on the RAID implementation, maybe only one of the two HDDs in each group will need to seek; some implementations verify one disk against the other to check for errors in real time). So whenever you access something on md2 in RAID 10 (the first example):

1. md2 works out where the data is on md0 and md1 and issues requests to locate it.
2. md0 and md1 both work out where the data is on their physical disks (i.e. sda & sdb, sdc & sdd), then seek to it.
3. md0 and md1 read the data from their disks (and verify the copies match), and pipe it back to the administrative process that is md2.
4. md2 then interleaves the two pieces of data and hands the result to the process that requested it.

With RAID 01 it's very much the same, just swapped:

1. md2 works out where the data is on md0 and md1, and issues a request to one (or both) of them.
2. md0 (and/or md1) work out where the data is on their physical disks, then seek to it on both disks.
3. md0 (and/or md1) interleave the data back together from their two disks.
4. md0 (and/or md1) pipe it back to the administrative process that is md2.
5. md2 then gives the data to the process that requested it. |
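In case it helps, a rough mdadm sketch of the nested RAID 10 layout described above (device names are assumptions; note that mdadm also offers a native `--level=10` that builds the equivalent layout without nesting):

```shell
# Nested RAID 10: two RAID-1 mirrors, then a RAID-0 stripe across them.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1
mdadm --create /dev/md2 --level=0 --raid-devices=2 /dev/md0 /dev/md1

# Alternative: mdadm's built-in RAID 10 over all four partitions in one array.
# mdadm --create /dev/md2 --level=10 --raid-devices=4 \
#     /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

# Format and mount the ~1 TB result.
mkfs.ext4 /dev/md2
mount /dev/md2 /mnt/data
```

The single-array `--level=10` form is generally simpler to monitor and rebuild than the nested version, since there is only one array to manage.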
hi Automatic
thanks for the suggestion, it does seem easier than messing around with LVM, which I have never touched before. I will give it a shot on a VM. Fortunately seek time is not a concern; we are just using it as an FTP server to dump logs. davy |
@jlinkels, is the RAID 10 setup the configuration you said was a bad idea in your post? Why?
|
2 Attachment(s)
@Automatic
I seem to be having trouble setting up my RAID. sda, where SUSE 11 SP2 is installed, has sda2, which I set as Linux RAID, but it is not showing up when I build my RAID table. I have attached two screenshots. Could it be because sda2 is mounted? Thanks Davy |
You can't add a mounted partition to a RAID, especially not the one that holds the OS itself. Your best bet is to back up, wipe the disk, set up the RAID, and then reinstall the OS.
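One quick way to confirm whether that is the problem, assuming the partition in question is /dev/sda2, is to check its mount status before trying to add it to an array:

```shell
# Prints the mountpoint if /dev/sda2 is currently mounted (empty output if not).
findmnt /dev/sda2

# Overview of the whole disk, showing which partitions are mounted where.
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT /dev/sda
```

A partition that shows a mountpoint here is in use and will be rejected by the RAID setup tools until it is unmounted.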
|
If you use LVM you have the option to add more disks and change partition sizes. It comes for free, except you have to familiarize yourself with LVM. jlinkels |
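For example, growing an LVM setup later might look like this (all names are hypothetical: a new RAID array /dev/md2 being added to a volume group `vg_data` holding a logical volume `lv_data` with an ext4 filesystem):

```shell
# Register the new RAIDed device with LVM and add it to the volume group.
pvcreate /dev/md2
vgextend vg_data /dev/md2

# Grow the logical volume into the new free space, then grow the filesystem.
lvextend -l +100%FREE /dev/vg_data/lv_data
resize2fs /dev/vg_data/lv_data   # ext4 can be grown while mounted
```

This is the "comes for free" part: capacity is added without repartitioning or moving existing data.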
Hi all
thank you guys so much for all of your suggestions and help. For a variety of reasons: 1. the server doesn't have an onboard RAID controller, 2. not enough hard drives, 3. we found a newer, bigger, better server with a better RAID controller, we have simply decided to configure the new server instead. The OS will be on a 2 x 160 GB disk RAID-0 setup and the data will be stored on a 4 x 500 GB disk RAID-10 setup. This way we have redundancy on both the OS and the data side. Thank you all for your help. I really appreciate it. |
I assume he just needs fast read access to the OS partition (configs, programs, etc...) but doesn't care if it's lost (he has backups, or it's relatively simple to recreate). I'm an idiot, I didn't read his second sentence. Yes, I agree, RAID 0 isn't redundancy; you are actually increasing the failure rate, because if either drive fails you lose all data (it's interleaved across both, with no backup). Note: red is my way of doing a strike-through, as I can't work out how to do it on this forum; [s]text[/s] doesn't work. |