LinuxQuestions.org > Slackware > software raid 5 + LVM and other raid questions
(https://www.linuxquestions.org/questions/slackware-14/software-raid-5-lvm-and-other-raid-questions-551939/)

slackman 05-07-2007 08:24 AM

software raid 5 + LVM and other raid questions
 
OK, I want to redo my file server and use software RAID 5 + LVM, but before I do that I want to ask a few questions. I have about ten 160GB PATA drives that I want to use. The setup is simplified by using those 10 PATA drives purely as storage, since the OS (Slack 11) will reside on a 30GB PATA drive that shares one of the motherboard's IDE channels with the CD-ROM.

Now, the motherboard has only 3 PCI slots and 1 free onboard IDE channel (the other one is taken by the OS drive and the CD-ROM on one cable). To fit all 10 storage drives at 1 drive per channel, I'd need 10 IDE channels, so I'd get two 4-channel PCI IDE controllers plus one 2-channel controller; together with the free onboard channel, that leaves one spare channel for a future upgrade.

But then again, will it be hard (read: will I have to rebuild the RAID) if I later want to upgrade that third 2-channel IDE controller to a 4-channel one?

Does Slack 11 come with built-in support for RAID and LVM, or will I have to recompile the kernel for the OS to support those features?

Now let's get back to the software. Since I'll be building software RAID, I don't care whether the IDE controllers have built-in RAID capabilities.

Setting up the RAID is fairly easy; I'll probably set up RAID 5 with 8 drives for storage and 2 as spares. Is a spare in software RAID unusable until another drive fails, at which point it's used to rebuild the failed drive's data? Or will I be fine with just one spare?
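
From reading the mdadm man page, I'm guessing the create command would look something like this (the device names are made up, so correct me if I'm off):

Code:

# 8 active drives plus 2 hot spares, one partition per disk
mdadm --create /dev/md0 --level=5 --raid-devices=8 --spare-devices=2 \
    /dev/hde1 /dev/hdg1 /dev/hdi1 /dev/hdk1 \
    /dev/hdm1 /dev/hdo1 /dev/hdq1 /dev/hds1 \
    /dev/hdu1 /dev/hdw1
# watch the initial sync
cat /proc/mdstat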

Also, since I want the ability to add drives in the future and expand the storage, and that's hard to do with software RAID without moving data around, I'd also like to put LVM on top of the RAID. Should the RAID be created first and then LVM, or the other way around?

Also, will there be a problem if I decide to add a disk to the RAID that is a different size than the disks already in the array? Since it's software RAID, I shouldn't have any major problems, right?

I believe the drives are in OK condition, but should I test them one by one before building the array? Or would it be just as good to test after the array is created, on the theory that when it finds an error it will point me to the faulty drive and let me replace it?
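
For the testing, I guess I could run the smartmontools self-tests on each drive one by one before building anything, something like this (assuming smartctl is installed and /dev/hde is one of the storage drives):

Code:

# quick health check
smartctl -H /dev/hde
# start a long (full surface) self-test and come back to it later
smartctl -t long /dev/hde
# then check the self-test results and the error log
smartctl -l selftest /dev/hde
smartctl -a /dev/hde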

I will add some more questions and concerns to this thread. Thank you for any input; it's highly appreciated.

slackman 05-07-2007 08:27 AM

To add to the question about adding a disk of a different size later on: what if I have a setup with nine 160GB drives, one of them set up as a spare, and then I add another drive, say 500GB, and it goes bad? Wouldn't that make a mess? The spare doesn't have enough room to take over for the bad drive.

slackman 05-07-2007 08:50 AM

If you think I should change the RAID mode, let me know. How is software RAID 5 for storage? How does it handle power loss (not that I'll be pulling out cables, but just in case)? Any difficulties with recovery? Any input will help.

ajg 05-08-2007 04:15 AM

When I was testing Linux soft RAID (after my Promise SuperTrack hardware RAID controller decided to drop two 300GB drives and totally trash the system), I didn't come across any issues with power loss - it sorted things out fairly well.

The big problem for me was that if a drive died, the system would grind to a halt, forcing a manual power cycle. On reboot, it would refuse to mount the volumes on the RAID 5 set because the filesystems were dirty from the power cycle AND there was a drive missing from the RAID set. So, to get things back again, you have to replace the failed drive, add it back into the array with mdadm, then run fsck across the volumes. You've got to think about your partition structure carefully if you want to avoid having to use a LiveCD for this process. And if you don't have a spare drive to hand, you're stuffed until you get one.
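
From memory, the recovery went roughly like this (md0 and hdg1 are just placeholders for your array and the replacement drive's partition):

Code:

# mark the dead drive as failed and remove it (if md hasn't done so already)
mdadm --manage /dev/md0 --fail /dev/hdg1
mdadm --manage /dev/md0 --remove /dev/hdg1
# after swapping in and partitioning the new drive, add it back
mdadm --manage /dev/md0 --add /dev/hdg1
# watch the rebuild
cat /proc/mdstat
# then check the filesystems that were dirty from the power cycle
fsck -y /dev/md0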

I wasn't comfortable with those limitations, or with having to create half a dozen RAID sets just to avoid needing a LiveCD in the event of a drive failure. I just use RAID 1 - storage isn't expensive. It works, and if you lose a drive, you don't end up in RAID 5 limbo.

nass 05-08-2007 05:18 PM

Actually, a couple of weeks ago I was in your position.
In the end I implemented a RAID 5, but no LVM. Basically I have a single drive for the OS (just 40GB) and another 4 drives in RAID 5.
Anyway, to the point:

Quote:

But then again, will it be hard (read: will I have to rebuild the RAID) if I later want to upgrade that third 2-channel IDE controller to a 4-channel one?
I don't think you should have a problem, provided you recompile support for the new controller into the kernel... and you put the new card in the exact PCI slot you removed the old one from. I think the PCI slots (and thus the disks) are scanned sequentially, so replacing one card should not affect the device node each drive gets assigned during boot-up... I'd appreciate a more kernel-oriented person verifying that, though...

But take things in order. Have a go at the Software RAID HOWTO first:
http://tldp.org/HOWTO/Software-RAID-HOWTO.html#toc10

Note the 'persistent superblock' option. Make sure all drives have it enabled, because it's what allows the disks of the array to be 'connected' back together at boot time. Also partition the drives as Linux raid autodetect (partition type 0xFD). That way the array can be reassembled even if the drive letters have changed (if that happens, remember to update /etc/raidtab).
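
Roughly what I did (I used mdadm rather than the old raidtools, and the device names here are just examples):

Code:

# in fdisk, set each raid partition's type to fd (Linux raid autodetect)
fdisk /dev/hde          # commands: t, fd, w
# mdadm writes a persistent superblock by default when it creates the array;
# you can check it on any member disk with:
mdadm --examine /dev/hde1
# and see how the array would be reassembled:
mdadm --examine --scan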

As for adding more drives (since I guess you'll be changing an IDE controller to add 2 more disks), read this:

http://tldp.org/HOWTO/Software-RAID-...10.html#ss10.1

Apparently there is a way to expand the number of disks in a RAID 5, but it wasn't well established at the time that article was written.
I've since read in the mdadm manual that there is a --grow option to expand an array, but it does NOT support RAID 5 as of yet. I'm confident we'll have that option available to us soon.

For now, my best advice would be to experiment:
start your RAID with, say, 4 drives, make sure you understand what's going on, and once the system has settled into a RAID 5 with 4 drives, start adding drives.
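
When/if --grow gains RAID 5 support (or if your mdadm version already has it), I'd expect the expansion to look roughly like this - I haven't been able to test it myself though:

Code:

# add the new disk as a spare first
mdadm --manage /dev/md0 --add /dev/hdw1
# then ask md to reshape the array onto one more active disk
mdadm --grow /dev/md0 --raid-devices=5
# the reshape shows up in /proc/mdstat; afterwards grow the filesystem too
cat /proc/mdstat
resize2fs /dev/md0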

Quote:

Does Slack 11 come with built-in support for RAID and LVM, or will I have to recompile the kernel for the OS to support those features?
I installed Slack 11 with the test26.s kernel and I had to recompile a kernel with RAID support (I downloaded the latest stable at the time, 2.6.20.7).
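
The options I enabled were more or less these (taken from my 2.6.20 .config, so double-check the exact names against your kernel version):

Code:

# software RAID
CONFIG_MD=y
CONFIG_BLK_DEV_MD=y
CONFIG_MD_RAID456=y
# device-mapper, needed if you decide to put LVM on top
CONFIG_BLK_DEV_DM=y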

Quote:

Setting up the RAID is fairly easy; I'll probably set up RAID 5 with 8 drives for storage and 2 as spares. Is a spare in software RAID unusable until another drive fails, at which point it's used to rebuild the failed drive's data? Or will I be fine with just one spare?
No need to use more than 1 drive as a spare, especially if you make one RAID out of all the drives. (You can even make more than one RAID 5 array and still share one spare drive between the two to save space.)
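
If you go the two-arrays-with-one-shared-spare route, I believe the way to do it is with a 'spare-group' entry in /etc/mdadm.conf plus mdadm running in monitor mode, something like this (the UUIDs are obviously placeholders for your own):

Code:

# /etc/mdadm.conf
ARRAY /dev/md0 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx spare-group=shared
ARRAY /dev/md1 UUID=yyyyyyyy:yyyyyyyy:yyyyyyyy:yyyyyyyy spare-group=shared

# the monitor moves the spare to whichever array loses a disk
mdadm --monitor --scan --daemonise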

Quote:

Also, will there be a problem if I decide to add a disk to the RAID that is a different size than the disks already in the array? Since it's software RAID, I shouldn't have any major problems, right?
The RAID will only use an amount of space on that drive equal to the smallest drive in the array. Read here:
http://tldp.org/HOWTO/Software-RAID-HOWTO-5.html#ss5.8
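
As a worked example (if I've understood that section right): nine 160GB drives in RAID 5 give you roughly (9-1) x 160 = 1280GB of usable space, and if you later drop a 500GB drive into that same array, only 160GB of it would get used, with the remaining 340GB wasted.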

Hope this was of some help.

slackman 05-09-2007 02:58 PM

Thanks nass and ajg, some very valuable info. I did take a look at the Software RAID HOWTO at tldp.org and it was very helpful.

I will experiment with RAID 5 to see if I run into the same issues as ajg. Does RAID 1 require a power cycle (or any other human intervention) in order to recover?

nass, thank you for your input as well. I understand that one spare drive is enough, so in case of a failure I'll know to replace the bad drive. But what if I'm lazy and, let's say, the next day another drive goes to heaven (assuming they don't go out at the same time = BYE DATA)? Would a second spare save my life then? As for the resizing, I see that the link you provided describes a feature that is still in a developmental stage, but hey, it works. That's why I was considering LVM: I think it allows for easier expansion of the storage, plus you have the ability to do snapshots for system restore (is that the same as a regular tar backup?).
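
Just so I'm clear about what I mean by LVM on top of the RAID, I think the order would be RAID first and then LVM on the md device, roughly like this (the volume group and mount point names are made up, so tell me if I have this backwards):

Code:

# /dev/md0 already exists as the RAID 5 array
pvcreate /dev/md0
vgcreate vg_storage /dev/md0
lvcreate -L 500G -n data vg_storage
mke2fs -j /dev/vg_storage/data

# later, after growing the array (or adding a second md device with vgextend),
# grow the logical volume and then the filesystem
lvextend -L +200G /dev/vg_storage/data
resize2fs /dev/vg_storage/data

# and the snapshot idea: snapshot, tar the snapshot instead of the live fs, drop it
lvcreate -s -L 10G -n data_snap /dev/vg_storage/data
mount /dev/vg_storage/data_snap /mnt/snap
tar czf /backup/data.tar.gz -C /mnt/snap .
umount /mnt/snap
lvremove -f /dev/vg_storage/data_snap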

All in all, I'll have to play with it myself, test all the possible situations, and then decide.

I will post some further questions later.

Again, your input is appreciated.

