Old 07-25-2012, 08:18 AM   #1
foobarz
Member
 
Registered: Aug 2010
Distribution: slackware64-current
Posts: 48

Rep: Reputation: 10
Question udev 64-md-raid.rules doesn't run mdadm --incremental on some members


I have an md raid0 device, /dev/md3, that is partitioned into /dev/md3p[123]. These partitions are used as members of a raid6; for example, /dev/md3p2 is of type "linux_raid_member" and I use it as a member of a raid6. The type is shown by:

blkid /dev/md3p2
or
udevadm info --query=all --name=/dev/md3p2

During boot, udev runs and it has rules in /lib/udev/rules.d/64-md-raid.rules that act on block devices of type "linux_raid_member" and passes them to mdadm:

ACTION=="add", RUN+="/sbin/mdadm --incremental $tempnode"

where I assume $tempnode should hold /dev/md3p2 at some point during boot. That never happens, and since this member is never added to the incremental assembly, the incremental assembly (being safe and good) will not start the raid6 with a member missing.
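A quick check (my own, not part of the stock rules) is to see whether udev's database ever gets the filesystem type for the partition, since the rule's match depends on it:

udevadm info --query=property --name=/dev/md3p2 | grep ID_FS_TYPE
# if this prints nothing, the "linux_raid_member" test in 64-md-raid.rules can never match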

The udev rules assemble /dev/md3 itself and the /dev/md3p[123] devices are created, but those partitions never seem to be cycled back through the udev rules so they can be passed to mdadm to finish assembling the raids built on top of them.

Using udev to incrementally assemble raid devices looks like a nice way to do it, if only it would see all linux_raid_member devices, including ones that show up inside partitions of an md device. Incremental assembly is "safe" because it won't start an array in degraded mode, which gives you a chance to fix a configuration problem that made members go missing before the array is run. Once you choose (and it should be a choice, not something automatic in a script) to run an array degraded, you have to add the missing devices back as spares and resync them, and that is a dangerous step you don't want to be forced into unless a disk drive has really failed in normal operation, as opposed to some software misconfiguration making devices look missing to mdadm.
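If you do decide to take that step by hand, the commands look roughly like this (the device names here are just placeholders for my setup):

mdadm --run /dev/md1                      # start the partially assembled array even though it is degraded
mdadm --manage /dev/md1 --add /dev/sda2   # after fixing/replacing the disk, add it back and let it resync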

Well, even though udev can assemble most arrays made from regular disk partitions like /dev/sda1 etc, the slackware init scripts appear to just stop any arrays that udev might have started and then re-assemble them using this sequence of commands:

# udev has finished _trying_ to incrementally assemble arrays, but now slackware just says
# forget what udev has done, and let's start over like this:
/sbin/mdadm -E -s > /etc/mdadm.conf
/sbin/mdadm -S -s
/sbin/mdadm -A -s

So, first, the initramfs copy of your mdadm.conf is wiped and replaced with a new one. If you included your own mdadm.conf in your initrd, it is overwritten unless you comment out that first line above. Then it stops all arrays. And then it assembles all arrays with the -s option, which will run arrays degraded even when expected members are missing for some reason. That is not nearly as nice or safe as what udev's incremental assembly does. The slackware installer-dvd and the default initrd-tree initrd.gz images do this, so if something is wrong and a member is missing, your arrays get started, and therefore degraded, with members missing. This can also happen if udev's incremental assembly is holding (binding) devices, the "mdadm -S -s" line is commented out, and there are still enough unbound devices to run something degraded. Then you have just let the installer or your initrd degrade your array by running it degraded.

I think mdadm -A -s should be replaced by default with: mdadm -A -s --no-degraded
Then, when that causes some md device to not be started and not be mountable, you get errors and an emergency command line to fix things, instead of the script automatically deciding to degrade your array.
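In other words, the "Re-assemble" block would become something like this (a sketch of the proposal, not the stock script):

# proposed: never start degraded arrays automatically
/sbin/mdadm -A -s --no-degraded
# if an array then cannot come up, the failed mount later in the init script
# gives you the emergency command line instead of the array being run degraded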


So, I need help. How can udev's 64-md-raid.rules file be fixed so that it processes devices like /dev/md3p2 and runs mdadm --incremental on them?

And I propose a change to slackware: where it calls mdadm -A -s, change it to mdadm -A -s --no-degraded. That means in the generated script /boot/initrd-tree/init and in the slackware-install-dvd's /etc/rc.d/rc.S.

Now, if you make an mdadm.conf file (mdadm -E -s > /etc/mdadm.conf), edit it to order the ARRAY lines in the order you need them, and then cp /etc/mdadm.conf /boot/initrd-tree/etc, udev's calls to mdadm --incremental will assemble your arrays with the expected /dev/mdX device names you used when you created them. Alternatively, you can make an mdadm.conf for your initrd like this: echo AUTO -all > /boot/initrd-tree/etc/mdadm.conf. That completely stops mdadm from incrementally assembling any arrays, and you then use your own mdadm commands to start things.

I'd be fine with letting udev assemble my arrays if it would see all the members and finish the job. But so far, I've had to depend on the mdadm commands that slackware uses to re-assemble the arrays, except that I edited them to call mdadm -A -s --no-degraded instead. And since udev can't finish the job, it doesn't hurt for me to use echo AUTO -all > /boot/initrd-tree/etc/mdadm.conf as my mdadm.conf on the initrd, while keeping mdadm -E -s > /etc/mdadm.conf on the root fs.

Anyhow, it's a weird situation: the way things are, nothing works correctly out of the box for me. udev doesn't complete assembly of all arrays, and the slackware "Re-assemble" commands in initrd-tree/init and the installer's /etc/rc.d/rc.S run mdadm -A -s, which can degrade your array. So I have to fix things myself, but I cannot get udev fixed. I don't know why those rules don't pick up /dev/md3p2 etc.

Any help appreciated. Thanks a lot!
 
Old 07-25-2012, 12:17 PM   #2
foobarz
Member
 
Registered: Aug 2010
Distribution: slackware64-current
Posts: 48

Original Poster
Rep: Reputation: 10
I have something new to add...

If I edit /lib/udev/rules.d/64-md-raid.rules, and make the top of the file like this, then mdadm --incremental is run on devices like /dev/md3p2 etc:

##########################
# do not edit this file, it will be overwritten on update

SUBSYSTEM!="block", GOTO="md_end"

# -- stuff i am testing --
ENV{ID_FS_TYPE}!="", GOTO="md_handle"
IMPORT{program}="/sbin/blkid -o udev -p $tempnode"
LABEL="md_handle"
# -- --

# handle potential components of arrays (the ones supported by md)
############################

It seems the problem is that, for partitioned raid devices like /dev/md3p3 (where md3p3 is TYPE="linux_raid_member" and is then used as a member of some other raid, a raid on a raid), the ID_* environment variables are not yet set when these rules start executing at the top. They do get set later in the file, but not before they are needed. I don't claim to fully understand how udev rules files are written, but my test lines above, while I'm not sure they are a proper fix, do appear to work.

udev experts, please comment on this if you will... I hope to have a proper fix for this
 
Old 07-25-2012, 12:50 PM   #3
foobarz
Member
 
Registered: Aug 2010
Distribution: slackware64-current
Posts: 48

Original Poster
Rep: Reputation: 10
Actually, the above edit only works in tests like: udevadm test /sys/block/md3/md3p2

but on a real boot it looks like it has no effect, as if devices like /dev/md3p2 never make a pass through the udev rules at all
 
Old 07-26-2012, 06:28 AM   #4
foobarz
Member
 
Registered: Aug 2010
Distribution: slackware64-current
Posts: 48

Original Poster
Rep: Reputation: 10
Here is more info, and I think maybe the cause of the problem:

The initrd uses BusyBox, and lots of the command line utilities are not the full versions but BusyBox versions with fewer features, for example:

/sbin/blkid -> busybox

This "blkid" from busybox doesn't appear to support the "-o udev" and "-p" options of the full version that you can run once you are booted up.

While the initrd init script is running, it runs udev, and udev rules such as /lib/udev/rules.d/64-md-raid.rules run:

/sbin/blkid -o udev -p $tempnode

to acquire info from blkid, such as the environment variables ID_FS_TYPE and other ID_* variables. The busybox blkid returns nothing when you run it with these options, so the ID_* env variables are not set. And with those variables not set as expected, the tests in some of the other udev rules do not work as expected.
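You can see this for yourself from the initrd shell; a rough check (exact output will differ):

/sbin/blkid -o udev -p /dev/md3p2   # the BusyBox applet prints nothing (or complains) for these options
blkid /dev/md3p2                    # a plain call still works, but does not emit the ID_*= lines udev needs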

I think that the slackware initrd needs to include the full version of blkid.
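One possible workaround along those lines (an assumption on my part, not the fix I ended up using in the next post) would be to replace the BusyBox symlink in the initrd tree with the real binary:

ls -l /boot/initrd-tree/sbin/blkid          # currently a symlink to busybox
rm /boot/initrd-tree/sbin/blkid
cp /sbin/blkid /boot/initrd-tree/sbin/blkid
# note: the full blkid is dynamically linked, so its libraries (libblkid etc)
# would also have to be present in the initrd for this to work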
 
Old 07-26-2012, 08:09 AM   #5
foobarz
Member
 
Registered: Aug 2010
Distribution: slackware64-current
Posts: 48

Original Poster
Rep: Reputation: 10
More info, and now I am close to saying this is SOLVED:

In /lib/udev/rules.d/64-md-raid.rules: make the lines like this:

#IMPORT{program}="/sbin/blkid -o udev -p $tempnode"
IMPORT{builtin}="blkid"

That is, use the builtin blkid that udevd has! No $tempnode needs to be given.

With this change to the rules, the environment variables ID_* are created and can then be used properly in rule tests.
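One way to verify (the exact test output format may vary) is to run the rules in test mode against the partition and check that ID_FS_TYPE now shows up:

udevadm test $(udevadm info -q path -n /dev/md3p2) 2>&1 | grep ID_FS_TYPE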

For udev running from the initramfs at boot to be able to complete incremental assembly of nested raid arrays (a raid whose members are themselves on a raid), some more udev rules are needed:

http://pkgs.fedoraproject.org/gitweb...81060b;hb=HEAD

Save this as /lib/udev/rules.d/65-mdadm-incremental.rules

So now you have two rules files:

# the stock mdadm rules but using builtin blkid command
/lib/udev/rules.d/64-md-raid.rules
# later rules (from fedoraproject) that handle nested raids
/lib/udev/rules.d/65-mdadm-incremental.rules

Now, with these files on your system, edit:
vi /boot/initrd-tree/init
# comment out all of the mdadm commands that it calls
# to "Re-assemble" raid arrays,
# because udev rules can now incrementally assemble the
# arrays, including nested ones (at least 1 level), just fine

With your arrays fully assembled and clean, and the devices in /dev/md* all looking correct, make a good /etc/mdadm.conf file:
mdadm -E -s > /etc/mdadm.conf
Now, for nested arrays, order the ARRAY lines so that they are in the order you need them to assemble in. Then copy it to initrd-tree:
cp /etc/mdadm.conf /boot/initrd-tree/etc
You need to include the good mdadm.conf file in your initramfs so that udev's incremental assembly (the calls to mdadm -I) sees it and knows the proper device names to create for your arrays. Otherwise they can get strange names like /dev/md12[34567] etc, when they should be, say, /dev/md[01234] as you gave to mdadm --create when you made them. The order of the ARRAY lines also matters if you decide to run mdadm -A -s --no-degraded yourself later.
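For example, an ordered mdadm.conf for a nested setup like mine looks roughly like this (the UUIDs are made-up placeholders; use what mdadm -E -s prints for your arrays):

# nested raid0 devices first, then the arrays built on top of them
ARRAY /dev/md3 metadata=1.2 UUID=aaaaaaaa:bbbbbbbb:cccccccc:dddddddd
ARRAY /dev/md4 metadata=1.2 UUID=eeeeeeee:ffffffff:00000000:11111111
ARRAY /dev/md0 metadata=0.90 UUID=22222222:33333333:44444444:55555555
ARRAY /dev/md1 metadata=1.2 UUID=66666666:77777777:88888888:99999999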

Make some emergency info about your arrays:
mdadm -Evvvs > /etc/mdadm.Evvvs
cp /etc/mdadm.Evvvs /boot/initrd-tree/etc
This info could help you in an emergency to work on your arrays.

Oh, and then make sure to just remove the commands:
mdadm -E -s > /etc/mdadm.conf
mdadm -S -s
mdadm -A -s
that are found in the initrd-tree/init script.

Now be warned that the slackware-install-dvd will still call those three commands from its /etc/rc.d/rc.S script if you boot it. These commands seem like a hack to "Re-assemble" after assuming udev already did something, but maybe not correctly. The mdadm -A -s is a dangerous command that would assemble your array degraded, and _should_ be mdadm -A -s --no-degraded at the least, but removed totally is better. Just be aware that booting the slackware install dvd will run that command!


Finish up with:
mkinitrd -F
lilo

Make sure /boot/initrd-tree/lib/udev/rules.d/6[45]* look correct.
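Something like this is enough for a quick spot check:

ls -l /boot/initrd-tree/lib/udev/rules.d/6[45]*md*
grep -n 'IMPORT{builtin}="blkid"' /boot/initrd-tree/lib/udev/rules.d/64-md-raid.rules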

So, maybe slackware can make some of these changes, or similar ones, the defaults at installation: have the mdadm package install a modified 64-md-raid.rules that uses the builtin blkid, and also package and install the 65-mdadm-incremental.rules with it. The 65-mdadm-incremental.rules file probably needs some more careful review, and maybe editing, before general inclusion in slackware, but maybe it is already okay as fedora has it.

I think that is about it! This is as good a place as any to document all of this. Maybe useful to others, and myself later!

Last edited by foobarz; 07-26-2012 at 08:22 AM. Reason: order of procedure steps was wrong
 
1 member found this post helpful.
Old 07-26-2012, 08:26 AM   #6
wildwizard
Member
 
Registered: Apr 2009
Location: Oz
Distribution: slackware64-14.0
Posts: 875

Rep: Reputation: 282
Quote:
Originally Posted by foobarz View Post
The mdadm -A -s is a dangerous command that would assemble your array degraded, and _should_ be mdadm -A -s --no-degraded at the least, but removed totally is better.
It also means you can still access your data if you boot your system after a device fails.

You do know that is the whole point of RAID don't you?
 
Old 07-26-2012, 09:17 AM   #7
foobarz
Member
 
Registered: Aug 2010
Distribution: slackware64-current
Posts: 48

Original Poster
Rep: Reputation: 10
Quote:
Originally Posted by wildwizard View Post
It also means you can still access your data if you boot your system after a device fails.

You do know that is the whole point of RAID don't you?
Sure, I understand it. The problem is that the command starts your array degraded automatically, without you first getting to see why it has to start degraded at boot time.

The normal functioning of raid is during uptime operation (not right at boot): a drive fails during operation and the system tolerates the loss of the drive and continues working.

Later, you shut down the computer and do maintenance to replace the drive; you get into the box, play with wires etc, and put a new drive in. Maybe you pulled the wrong drive out, or pulled the power off a good drive. Now you boot up and another drive is "failed" because of a mistake. mdadm -A -s will start the array degraded if it can. If it starts degraded, then the drives that were removed by mistake have to be added back as spares and fully resynced. During the resync you stand a good chance of another failure, and then your array is gone. If that happens, that mdadm -A -s command really destroyed your array.

If mdadm -A -s --no-degraded had been in your boot init script, it would not have started the array degraded; you would have been given an emergency command line where you can see what is wrong (oops, missing drive). You turn off the computer and fix it. When you boot up again all the drives are there, and no resync needs to happen because the array was never started degraded. You didn't risk your array by booting into it degraded automatically.

An array that is unexpectedly (newly) degraded at boot time is an emergency situation where you do not want that array started degraded automatically (unless you have a special requirement to do so). You want to see which drive is missing/removed or not responding before you shut down to fix the computer again, or give the command to start the array degraded yourself - not have it happen automatically at boot.

The udev mdadm rules start arrays using mdadm's incremental assembly (mdadm -I), which will not start arrays degraded. So it is safe, and you get a command line when your arrays have a problem right at boot. You can see and fix the problem, then reboot. If the problem with a hard drive is a real failure (not just pulled power or something), then you can run mdadm -A -s /dev/mdX yourself to start it degraded, and then start your system:

mount -o ro /dev/mdX /mnt
/sbin/udevadm info --cleanup-db
/sbin/udevadm control --exit
mount -o move /proc /mnt/proc
mount -o move /sys /mnt/sys
mount -o move /dev /mnt/dev
exec switch_root /mnt /sbin/init 1

... just like the end of the /boot/initrd-tree/init file does it.

I prefer safe defaults. Slackware's default to run mdadm -A -s, even in the slackware-install-dvd, is not safe. Let the user change it to the unsafe command after installation, if they prefer that.
 
Old 07-28-2012, 06:14 PM   #8
wildwizard
Member
 
Registered: Apr 2009
Location: Oz
Distribution: slackware64-14.0
Posts: 875

Rep: Reputation: 282
Finally got around to actually trying this out.

Before and after
Code:
[   21.825783] EXT4-fs (md0p2): mounted filesystem with ordered data mode.
[   18.714802] EXT4-fs (md0p2): mounted filesystem with ordered data mode.
So we also get a speed improvement out of this change

Proper changes are :-

/sbin/mkinitrd
add
Code:
    cp /etc/mdadm.conf $SOURCE_TREE/etc/mdadm.conf
to the section for RAID support (around line 464; see the sketch below)

/usr/share/mkinitrd/initrd-tree.tar.gz
modify init and remove the entire RAID section (lines 164-171)

And add to the documentation that /etc/mdadm.conf must contain the current RAID setup.

EDIT and of course the new rules file /lib/udev/rules.d/65-mdadm-incremental.rules
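To be clear about the mkinitrd change, here is a sketch of the RAID block with the extra copy added (the surrounding lines are assumptions, only the added cp line is the actual change):
Code:
if [ "$RAID" = "1" ]; then
  # existing steps copy mdadm itself into the initrd tree (assumed)
  cp /sbin/mdadm $SOURCE_TREE/sbin/mdadm
  # added: carry the current RAID config into the initrd (around line 464)
  cp /etc/mdadm.conf $SOURCE_TREE/etc/mdadm.conf
fi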

Last edited by wildwizard; 07-28-2012 at 06:15 PM.
 
Old 07-28-2012, 07:56 PM   #9
wildwizard
Member
 
Registered: Apr 2009
Location: Oz
Distribution: slackware64-14.0
Posts: 875

Rep: Reputation: 282
Should also add that the initrd will also require any modules needed for your keyboard, otherwise you won't be able to start the system if a failure occurs.
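For a USB keyboard that means adding the HID/USB host modules to MODULE_LIST in /etc/mkinitrd.conf, for example (these module names are only an example, check what your hardware actually uses):
Code:
MODULE_LIST="sym53c8xx:mbcache:jbd2:ext4:usbhid:hid_generic:ehci_hcd:uhci_hcd"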
 
Old 07-31-2012, 03:05 PM   #10
foobarz
Member
 
Registered: Aug 2010
Distribution: slackware64-current
Posts: 48

Original Poster
Rep: Reputation: 10
raid/luks/lvm test

Here is how I did another test with the changes mentioned above.

This is a somewhat complex setup but it seems to work. I'll have to play with it more, like rebooting it several times to make sure it always comes back up without corruption.

What this does is make two raid0 devices, md3 and md4. The sizes of md3 and md4 approximately match (a little larger than) the sizes of disk1 and disk2. Then disk[12] and md[34] are partitioned the same way, with a /boot partition and a system (/, swap, etc) partition. GPT partitions are used (gdisk and sgdisk) with lilo, with no problems.

The /boot is /dev/md0, a raid1 made from /dev/sd[ab]1, so it is plainly readable to lilo.

The root/system device /dev/md1 is a raid6 made from /dev/sd[ab]2 and /dev/md[34]p2. /dev/md[34]p1 are not used for anything.

/dev/md1 is made luks encrypted with cryptsetup as /dev/mapper/cmd1

cmd1 is then made an LVM physical volume (pv) and added to a new volume group (vg) called cvg1 (/dev/cvg1/*).

Two logical volumes (lv) are created inside cvg1:
/dev/cvg1/swap /dev/cvg1/root

With these devices created, slackware "setup" is started. This setup gives a fully encrypted raid6 root and swap, and a non-encrypted raid1 /boot (which /boot has to be, so lilo can read it).

Here are the exact details if you would like to duplicate and test this setup (using qemu with KVM VT-x support):

RAID/LUKS/LVM test:
Code:
qemu-img create disk1.raw 10000000K
qemu-img create disk2.raw 10000000K
qemu-img create disk3.raw 5005292K
qemu-img create disk4.raw 5005292K
qemu-img create disk5.raw 5005292K
qemu-img create disk6.raw 5005292K
qemu-img create disk7.raw 10000000K
# disk7 is not used right away, but is there for further testing
# of growing the raids, lvm, and ext4 fs to test
# expanding the storage space

qemu-system-x86_64 \
 -no-quit \
 -boot order=cd,menu=on \
 -m 2G \
 -cpu host \
 -smp sockets=1,cores=4 \
 -net nic,model=rtl8139 -net user \
 -cdrom slackware64-current-install-dvd.iso \
 -drive boot=on,format=raw,media=disk,cache=none,aio=native,if=scsi,bus=0,unit=0,file=disk1.raw \
 -drive format=raw,media=disk,cache=none,aio=native,if=scsi,bus=0,unit=1,file=disk2.raw \
 -drive format=raw,media=disk,cache=none,aio=native,if=scsi,bus=0,unit=2,file=disk3.raw \
 -drive format=raw,media=disk,cache=none,aio=native,if=scsi,bus=0,unit=3,file=disk4.raw \
 -drive format=raw,media=disk,cache=none,aio=native,if=scsi,bus=1,unit=0,file=disk5.raw \
 -drive format=raw,media=disk,cache=none,aio=native,if=scsi,bus=1,unit=1,file=disk6.raw \
 -drive format=raw,media=disk,cache=none,aio=native,if=scsi,bus=1,unit=2,file=disk7.raw

Boot on the install-dvd (remove boot=on from the drive option if needed), then install like this:

gdisk /dev/sda
        0M 4M unused
Part 1  4M 516M   for /boot on raid1, type fd00 (Linux RAID)
        516M 520M unused
Part 2  520M -96M for / on raid6, type fd00 (Linux RAID)
        -96M -0M unused
sgdisk -R /dev/sdb /dev/sda
sgdisk -G /dev/sdb
mdadm --create /dev/md3 -l 0 -n 2 /dev/sd[cd]
mdadm --create /dev/md4 -l 0 -n 2 /dev/sd[ef]
sgdisk -R /dev/md3 /dev/sda
sgdisk -G /dev/md3
sgdisk -R /dev/md4 /dev/sda
sgdisk -G /dev/md4
# move 2nd GPT header to end of disk
# md[34] are slightly larger than sd[ab] so 2nd header is not at end after replicate
sgdisk -e /dev/md3
sgdisk -e /dev/md4
mdadm --create /dev/md0 --metadata=0.90 -l 1 -n 2 /dev/sd[ab]1
mdadm --create /dev/md1 -l 6 -n 4 /dev/sd[ab]2 /dev/md[34]p2
cryptsetup luksFormat /dev/md1
cryptsetup luksOpen /dev/md1 cmd1
lvm pvcreate /dev/mapper/cmd1
lvm vgcreate cvg1 /dev/mapper/cmd1
lvm lvcreate -L 512M -n swap cvg1
lvm lvcreate -l 100%FREE -n root cvg1
mkswap /dev/cvg1/swap
setup
   ADDSWAP /dev/cvg1/swap
   TARGET  /dev/cvg1/root
   Additional: /dev/md0 raid1 for /boot
   configure net for DHCP
   continue full install then exit setup (no reboot yet)
chroot /mnt
mount /dev/md0 /boot
mount -t devtmpfs none /dev
mount -t proc none /proc
mount -t sysfs none /sys
cd /boot
# change to use generic kernel:
rm System.map config vmlinuz
ln -s System.map-gen System.map
ln -s config-gen config
ln -s vmlinuz-gen vmlinuz
cd /etc
cp mkinitrd.conf.sample mkinitrd.conf
/usr/share/mkinitrd/mkinitrd-command-generator.sh -r
# note its recommendations, and also add the 8139cp module
# make sure to include the modules needed to access your disk devices
vi mkinitrd.conf
  MODULE_LIST="sym53c8xx:mbcache:jbd2:ext4:8139cp:dm-raid"
  LUKSDEV="/dev/md1"
  ROOTDEV="/dev/cvg1/root"
  ROOTFS="ext4"
  RAID="1"
  LVM="1"
  UDEV="1"
mkinitrd
mdadm -E -s >> /etc/mdadm.conf
vi /etc/mdadm.conf
  order the ARRAY lines, md3,4,0,1
  md[34] need to startup before md1 that has them as members (nested raid)
  dd to cut line, p to paste under current line
# copy mdadm.conf to initrd so udevd calls to mdadm -I can read a good conf file and get device names correct
cp /etc/mdadm.conf /boot/initrd-tree/etc
vi /boot/initrd-tree/init
 # comment out mdadm commands
 # #/sbin/mdadm -E -s > /etc/mdadm.conf # we got one already
 # #/sbin/mdadm -S -s  # we let udevd start arrays, so why stop them
 # #/sbin/mdadm -A -s  # udevd calls mdadm -I to do this already and safer (no start degraded)
vi /lib/udev/rules.d/64-md-raid.rules
 # make change:
 # #IMPORT{program}="/sbin/blkid -o udev -p $tempnode" #comment out this line
 # IMPORT{builtin}="blkid"  # do this line instead
 # #/sbin/blkid -> busybox is broken, so use the builtin
/etc/rc.d/rc.inet1 # start network device
/etc/rc.d/rc.inet2 # to access internet
lynx http://pkgs.fedoraproject.org/gitweb/?p=mdadm.git
 browse and download link (d) for raw: mdadm.rules
   http://pkgs.fedoraproject.org/gitweb...81060b;hb=HEAD
 save/copy to /lib/udev/rules.d/65-md-incremental.rules
mkinitrd -F
cd /etc
vi lilo.conf
  boot = /dev/md0
  raid-extra-boot = mbr-only
  lba32
  # comment out bitmaps and use standard menu
  image = /boot/vmlinuz
   initrd = /boot/initrd.gz
   root = /dev/cvg1/root
   label = Linux
   read-only
lilo
exit
umount /mnt/{dev,proc,sys,boot}
umount /mnt
lvm vgchange -an
cryptsetup luksClose cmd1
mdadm -S -s
sync
reboot  # set "boot=on" option to qemu -drive to boot installed system
Post install:
cryptsetup luksHeaderBackup /dev/md1 --header-backup-file md1.luksHeaderBackup
save this file somewhere secure, like maybe on a USB stick
if the header were to get corrupted, all data would be lost unless this 2MB header is restored using
cryptsetup luksHeaderRestore /dev/md1 --header-backup-file md1.luksHeaderBackup
you might also like to dump the master key:
cryptsetup luksDump --dump-master-key /dev/md1
You can destroy all the data on the luks by doing:
dd if=/dev/zero of=/dev/md1 count=1 bs=2M
or
cryptsetup luksFormat /dev/md1
But if a luksHeaderBackup exists, you could restore that.

Last edited by foobarz; 07-31-2012 at 04:01 PM. Reason: add commands to move 2nd GPT headers to end of disk
 
Old 08-01-2012, 02:51 PM   #11
foobarz
Member
 
Registered: Aug 2010
Distribution: slackware64-current
Posts: 48

Original Poster
Rep: Reputation: 10
raid/luks/lvm add disk7

In the test above, I had "disk7.raw" unused. Here I tested adding it into this test system. It seems to work. Here are the steps to add the disk:

Test adding disk7.raw (sdg) into the system:
Code:
sgdisk -R /dev/sdg /dev/sda
sgdisk -G /dev/sdg
mdadm --manage /dev/md0 --add /dev/sdg1
mdadm --manage /dev/md1 --add /dev/sdg2
mdadm --grow /dev/md0 -n 3
mdadm --grow /dev/md1 -n 5
mdadm --detail /dev/md1
# note the current status and capacity
# note that md1 capacity change event happens when reshape is complete, NOT immediately
# note the new status and capacity when reshape is complete
lsblk [-b] /dev/md1
# lsblk is smart enough to show the device and everything within it, including partitions, luks volumes, and lvm logical volumes, in a tree structure
# notice that md1 has the new size, but luksmd1 still shows the old size
# luksmd1 needs to be luksClosed and luksOpened again to see the new size, but that cannot be done online with root mounted
# at this point, the system must be halted and booted onto the install-dvd to finish resizing
# once booted on the install-dvd, do the following:
cryptsetup luksOpen /dev/md1 luksmd1
# now check its size (should be the new md1 size)
lsblk /dev/mapper/luksmd1
# view the pv and notice the PE Size and Total PE
lvm pvdisplay /dev/mapper/luksmd1
# now do a verbose test of resizing pv /dev/mapper/luksmd1:
lvm pvresize -v -t /dev/mapper/luksmd1
# first it appears to resize the volume to its original size in sectors
# then, the important part: it appears to resize the physical volume to more extents
# go ahead to run the resize:
lvm pvresize -v /dev/mapper/luksmd1
lvm pvdisplay /dev/mapper/luksmd1
# new pv size is shown, and notice Free PE count
# view volume groups and note the vg name is cvg1
lvm vgdisplay
# view logical volumes and note lv name and vg name
lvm lvdisplay
# run verbose test to resize root lv by +100%FREE
# notice lv is specified as <vg>/<lv>
lvm lvresize -v -t -l +100%FREE cvg1/root
# note the would-be new size that is reported and verify it makes sense, if so proceed:
lvm lvresize -v -l +100%FREE cvg1/root
# view lvs
lvm lvdisplay
# start lvm by changing status of all vg (and all lv inside) to available:
lvm vgchange -ay
# examine root and swap:
lsblk /dev/cvg1/root
lsblk /dev/cvg1/swap
# resize the root fs
e2fsck -f /dev/cvg1/root
resize2fs /dev/cvg1/root
# almost done! now clean up and reboot:
lvm vgchange -an
cryptsetup luksClose luksmd1
mdadm -S -s
sync
halt
# now, boot onto system normally and do:
# reinstall lilo onto md0, which now has a new member
lilo
# tune2fs (/) root for the new RAID stride and stripe-width
# examine the current settings:
tune2fs -l /dev/cvg1/root
# note fs "Block size" 4096 = 4K
# examine /dev/md1
mdadm --detail /dev/md1
# note raid "Chunk Size" 512K
# note "Raid Devices" 5 and "Raid Level" 6 (2-parity disks) Data disks = 5-2 = 3
# fs stride is the raid chunk size expressed in fs blocks: raid chunk size / fs block size = 512K/4K = 128 (4K fs blocks)
# fs stripe_width is the total number of fs blocks in a stripe across the non-parity data drives:  stride * (N-2) = 128*(5-2) = 128*3 = 384
# run tune2fs with these stride and stripe-width values (stride is already correct):
tune2fs -E stripe_width=384 /dev/cvg1/root
# reboot to make sure everything starts clean
sync
reboot
Notice that on halt/reboot you get the message: Can't deactivate logical volume group cvg1
At this point in the shutdown process, (/) root has been remounted ro and buffers are synced. Deactivating lvm, luks, and mdadm does not require any write access, so if buffers are synced to disk it is okay to just halt/poweroff.
sync is called at several spots in /etc/rc.d/rc.0, including right after remounting root read-only (ro).
There are no mdadm commands to stop arrays during shutdown. All filesystems are unmounted, root is remounted ro, and sync is called. So that should be safe enough.
 
  

