initrd creation on update breaks dm-drives
Hi,
I've been on vacation for a few weeks and yesterday updated my lenny/sid system. It was a rather large update, so I didn't follow it closely. At the end it created a new initrd image. When I booted the system today, some drives on a RAID 0 array (Intel ICH7 software RAID) could no longer be found. Since some of them contain system-relevant directories, I dropped into a shell and tried to figure out the problem. The drives are no longer listed in /dev/mapper, and dmraid -ay cannot find them anymore. fdisk -l also cannot see the drive that contains those partitions. I noticed that the update created a backup of the old ramdisk, so I restored it, rebooted, and everything worked normally again. Has anyone else seen this problem, and does anyone know what caused it and how to fix it?
initrd grief
Quote:
Hi,
I did what you suggested and unpacked the old and new initrd into different directories to compare them. There are several changes in the udev rules; most notably, there is a new rule for setting up the device-mapper drives. This rule is also in /etc/udev/rules.d/65_dmsetup.rules. I have attached it at the bottom; unfortunately I'm not enough of a udev expert to see how or why it might cause problems. I have two drives in the raid set, named 'Masterraid' and 'Slaveraid'. Masterraid contains only one partition (masterraid1), Slaveraid contains five partitions (slaveraid1-5). The strange thing is: Masterraid is correctly found and mounted, while Slaveraid is completely ignored (all partitions); the drive cannot even be seen by fdisk, though it is on the same raid set as Masterraid. I am lost on this one... Here is 65_dmsetup.rules; any tips are welcome. Code:
SUBSYSTEM!="block", GOTO="device_mapper_end"
I can't say I'll be able to help much on this one. However, I'm unclear about your RAID setup. You have RAID 0 on an Intel ICH7, and two drives, one with one partition and one with five. How are these set up? Are you using cryptsetup? More information might help someone else spot something.
I have probably mixed up some terminology. It's two hard disks (400 GB each) configured as RAID 0 on a SATA RAID controller (Intel ICH7). In the controller BIOS I have configured two drives (Masterraid -> 200 GB and Slaveraid -> 600 GB). These are recognized as drives by the operating system, so I can create partitions on them.
Masterraid contains 1 partition, Slaveraid contains 5; /dev/mapper shows this: Code:
tequila:/etc/udev/rules.d# ll /dev/mapper
With the new initrd it looks like this: Code:
tequila:/etc/udev/rules.d# ll /dev/mapper
Maybe there is some sort of new sanity check that prevents the drives from being correctly found? I wouldn't know how to find out...
This seems to be the "fix" that led to my problems. What is an "undefined label" (see the changelog of dmsetup), and how do I find out whether my drive has an undefined label?
I'm now reading further into udev and will try to solve it myself. If anyone has any tips, you're still welcome. This has to go.
http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=491107
http://packages.debian.org/changelog...27-3/changelog
OK, I have finally figured out that the dmraid binary is the source of the problem. One of those patches most probably contains a bug.
http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=489969
http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=489970
What I did was:
- replace /sbin/dmraid with the dmraid binary from the old initrd
- create a new initrd with update-initramfs -t -u -k $(uname -r)
-> everything works again. I'll probably have to replace the dmraid binary every time dmraid gets updated, until more people run into this problem. Filing a bug is usually useless in Debian...
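The workaround above can be sketched as a short shell sequence. This is a simulation on throwaway files in a temporary directory; the real paths (/sbin/dmraid and the binary saved from the old initrd) and the real update-initramfs invocation appear only in comments, since they require root on the affected machine.

```shell
# Simulated workaround: swap a known-good dmraid binary back into place,
# then rebuild the initramfs. Temp files stand in for the real paths.
tmp=$(mktemp -d)
printf 'buggy-new-dmraid\n'   > "$tmp/dmraid"     # stands in for /sbin/dmraid
printf 'working-old-dmraid\n' > "$tmp/dmraid.old" # binary saved from the old initrd

# Step 1: replace the buggy binary with the known-good copy
#   (real system: cp /path/to/unpacked-old-initrd/sbin/dmraid /sbin/dmraid)
cp "$tmp/dmraid.old" "$tmp/dmraid"

# Step 2: rebuild the initramfs so it picks up the replaced binary
#   (real system, as root: update-initramfs -t -u -k "$(uname -r)")
cat "$tmp/dmraid"   # prints "working-old-dmraid"
```

Note the caveat from the post above: this has to be repeated after every dmraid package update until the package itself is fixed.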
Quote:
Yes, it was the dmraid binary. The second of the two bugs in the post above yours was obviously the reason.
Here is the Debian bug report for the bug that hit me: http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=494278 dmraid now works normally again for me; the bug is gone with that fix.