Slackware: This Forum is for the discussion of Slackware Linux.
Distribution: -- Slackware for servers -- Debian for desktops --
Posts: 124
Rep:
full soft raid 1 with slackware
Hi,
I've got a server with 2 SATA disks that I want to put in a bootable soft RAID...
I want the following partition scheme
/     primary part 1
/home primary part 2
swap  primary part 3
extended part:
/usr  logical part 1
/tmp  logical part 2
/var  logical part 3
/opt  logical part 4
- set all the partition types to FD (raid autodetect) with fdisk
- copy the partition scheme to my 2nd disk using sfdisk
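Those two steps could be sketched like this (device names /dev/sda and /dev/sdb assumed from the setup above; these commands rewrite partition tables, so double-check the targets before running anything):

```shell
# Mark each RAID member partition as type fd (Linux raid autodetect):
# inside fdisk, press 't', pick the partition number, then enter 'fd'.
fdisk /dev/sda

# Dump the finished partition table from the first disk
# and replay it onto the second disk
sfdisk -d /dev/sda | sfdisk /dev/sdb
```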
Got a couple of questions concerning this setup...
* I read that I have to set all the partition types to FD (raid autodetect), but what about the swap partition?
* I can't seem to put my extended partition in the RAID 1 using mdadm; skipping it might cause problems when a disk fails, I guess? Or should I use LVM for more than 4 partitions?
* when creating the arrays I do :
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3
and so on...
Is this correct, or can I just do:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
and create all the partitions under /dev/md0?
It's not too surprising that you are a bit confused by all this, because there is seriously contradictory information floating around the 'Net on this point. When I tried the same thing with Fedora, I found that even though the mdadm documentation seems to say you can have multiple partitions under a single RAID set, it didn't actually work for me. The best I could come up with was to configure each partition as a separate RAID set, though I later learned that (theoretically) a single LVM containing several partitions can run on top of one RAID set.
So in your case, I see 3 RAID sets: one for the root and boot stuff; one for the swap partition; and one for an LVM containing everything else (/home, /usr, /tmp, ...). In addition, you might want to do some more reading to decide whether you really want the swap partition to be RAIDed. There appear to be arguments both ways on this.
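A minimal sketch of that three-set layout, assuming the partition numbering from the original post and a hypothetical volume group name vg0 (the logical volume sizes are placeholders):

```shell
# RAID set 1: root filesystem (and boot)
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
# RAID set 2: swap (if you decide to mirror it)
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
# RAID set 3: one large partition that will hold the LVM
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3

# Carve everything else out of md2 as logical volumes
pvcreate /dev/md2             # mark md2 as an LVM physical volume
vgcreate vg0 /dev/md2         # hypothetical volume group name
lvcreate -L 10G -n home vg0   # becomes /dev/vg0/home
lvcreate -L 5G -n usr vg0
lvcreate -L 1G -n tmp vg0
lvcreate -L 5G -n var vg0
```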
Distribution: -- Slackware for servers -- Debian for desktops --
Posts: 124
Original Poster
Rep:
Thanks. I also read that some people put the swap partition in striping instead of mirroring...
One problem: if a disk fails you don't have swap anymore... but it seems there's a workaround for that.
Anyway, I'll try to read up a bit more. Happy to know I'm not the only one confused about the subject; I can't even find a decent book about software RAID (a recent one, explaining mdadm for 2.6 kernels, etc.).
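For what it's worth, the "striping" usually meant for swap isn't an md RAID 0 at all: the kernel interleaves swap across devices by itself when the swap areas share the same priority in /etc/fstab. A sketch, assuming the partition numbering from the setup above:

```shell
# /etc/fstab fragment: equal pri= values make the kernel
# stripe swap across both disks. If one disk dies, any pages
# swapped out to it are lost, which is why some people mirror
# swap with md RAID 1 instead.
/dev/sda3  none  swap  sw,pri=1  0 0
/dev/sdb3  none  swap  sw,pri=1  0 0
```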