LinuxQuestions.org
Debian This forum is for the discussion of Debian Linux.

Old 08-04-2008, 08:52 AM   #1
almatic
Member
 
Registered: Mar 2007
Distribution: Debian
Posts: 547

Rep: Reputation: 67
initrd creation on update breaks dm-drives


Hi,

I've been on vacation for some weeks and yesterday updated my lenny/sid system. It was a rather large update, so I didn't follow it closely. At the end it created a new initrd image.

When I booted my system today, some drives on a RAID 0 (Intel ICH7 software RAID) could no longer be found. Since some of them contain system-relevant directories, I was dropped into a shell and tried to figure out the problem.
The drives are no longer listed in /dev/mapper, and dmraid -ay cannot find them anymore. fdisk -l does not see the partitions on the affected drive either.

I also noticed that the update created a backup of the old ramdisk, so I restored it, rebooted, and everything worked normally again.
Has anyone else seen this problem, and does anyone know what caused it and how to fix it?
 
Old 08-05-2008, 06:20 PM   #2
mostlyharmless
Senior Member
 
Registered: Jan 2008
Distribution: Slackware 14.1 (multilib) with kernel 3.15.5
Posts: 1,548
Blog Entries: 12

Rep: Reputation: 177
initrd grief

Quote:
Has anyone else seen this problem, and does anyone know what caused it and how to fix it?
Not me, but I'd recommend starting by looking at the new initrd to see what changed. From your description, I suspect a module for your hardware that was previously loaded was inadvertently left out.
 
Old 08-06-2008, 10:26 AM   #3
almatic
Member
 
Registered: Mar 2007
Distribution: Debian
Posts: 547

Original Poster
Rep: Reputation: 67
Hi,

I have done what you said and unpacked the old and new initrd into different directories to compare them. There are several changes in the udev rules; most notably, there is a new rule for the setup of the device-mapper drives. This rule is also in /etc/udev/rules.d/65_dmsetup.rules. I have attached it at the bottom; unfortunately, I am not enough of a udev expert to see how and why it might cause problems.

I have 2 drives in the raidset, named 'Masterraid' and 'Slaveraid'. Masterraid contains only 1 partition (masterraid1); Slaveraid contains 5 partitions (slaveraid1-5).
The strange thing is: Masterraid is correctly found and mounted, while Slaveraid is completely ignored (all partitions); not even the drive itself can be seen by fdisk, though it is in the same raidset as Masterraid.

I am lost on this one...

Here is 65_dmsetup.rules.
Any tips are welcome.

Code:
SUBSYSTEM!="block",                GOTO="device_mapper_end"
KERNEL!="dm-*",                    GOTO="device_mapper_end"
ACTION!="add|change",                GOTO="device_mapper_end"

# Obtain device status
IMPORT{program}="/sbin/dmsetup export -j $major -m $minor"
ENV{DM_NAME}!="?*",                GOTO="device_mapper_end"

# these are temporary devices created by cryptsetup, we want to ignore them
# and also hide them from HAL
ENV{DM_NAME}=="temporary-cryptsetup-*",        OPTIONS="ignore_device"

SYMLINK+="disk/by-id/dm-name-$env{DM_NAME}"
ENV{DM_UUID}=="?*", SYMLINK+="disk/by-id/dm-uuid-$env{DM_UUID}"

ENV{DM_STATE_ACTIVE}!="?*",            GOTO="device_mapper_end"
ENV{DM_TARGET_TYPES}=="|*error*",        GOTO="device_mapper_end"

IMPORT{program}="vol_id --export $tempnode"

OPTIONS+="link_priority=-100"
ENV{DM_TARGET_TYPES}=="*snapshot-origin*", OPTIONS+="link_priority=-90"

ENV{ID_FS_UUID_ENC}=="?*",    ENV{ID_FS_USAGE}=="filesystem|other|crypto", \
    SYMLINK+="disk/by-uuid/$env{ID_FS_UUID_ENC}"
ENV{ID_FS_LABEL_ENC}=="?*",    ENV{ID_FS_USAGE}=="filesystem|other", \
    SYMLINK+="disk/by-label/$env{ID_FS_LABEL_ENC}"

LABEL="device_mapper_end"
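For what it's worth, the IMPORT{program} lines in that rule work by reading KEY=value pairs from the program's stdout into the udev environment; if `dmsetup export` stops printing DM_NAME for a device, the rule bails out at the first GOTO and no symlinks are created. A minimal simulation of that import step (`parse_export` is my own helper, not part of udev):

```shell
# Simulate udev's IMPORT{program}: read KEY=value lines into the
# environment. The real rule runs
#   /sbin/dmsetup export -j $major -m $minor
# and then matches on keys like DM_NAME and DM_STATE_ACTIVE.
parse_export() {
  while IFS='=' read -r key value; do
    [ -n "$key" ] && export "$key=$value"
  done
}
```

Note this rule only controls the by-id/by-uuid symlinks and HAL visibility; the /dev/mapper nodes themselves come from dmraid/dmsetup, so a device missing there entirely points at an earlier stage than this rule.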
 
Old 08-06-2008, 10:44 AM   #4
mostlyharmless
Senior Member
 
Registered: Jan 2008
Distribution: Slackware 14.1 (multilib) with kernel 3.15.5
Posts: 1,548
Blog Entries: 12

Rep: Reputation: 177
I can't say I'll be able to help much on this one. However, I'm unclear about your RAID setup. You have RAID 0 on an Intel ICH7, and two drives: one with 1 partition and one with 5. How are these set up? Are you using cryptsetup? More information might help someone else see something...
 
Old 08-06-2008, 11:19 AM   #5
almatic
Member
 
Registered: Mar 2007
Distribution: Debian
Posts: 547

Original Poster
Rep: Reputation: 67
I have probably mixed up some terminology. It's 2 hard disks (400 GB each) configured as RAID 0 on a SATA RAID controller (Intel ICH7). In the controller BIOS I have configured 2 drives (Masterraid -> 200 GB and Slaveraid -> 600 GB). These are recognized as drives by the operating system, so I can create partitions on them.
Masterraid contains 1 partition, Slaveraid contains 5; /dev/mapper shows this:

Code:
tequila:/etc/udev/rules.d# ll /dev/mapper
total 0
crw-rw---- 1 root root  10, 60  6. Aug 2008  control
brw-rw---- 1 root disk 254,  8  6. Aug 16:10 hdb3_crypt
brw-rw---- 1 root disk 254,  7  6. Aug 2008  hdb4_crypt
brw-rw---- 1 root disk 254,  0  6. Aug 2008  isw_cefjcfgegc_Masterraid
brw-rw---- 1 root disk 254,  2  6. Aug 2008  isw_cefjcfgegc_Masterraid1
brw-rw---- 1 root disk 254,  1  6. Aug 2008  isw_cefjcfgegc_Slaveraid
brw-rw---- 1 root disk 254,  3  6. Aug 2008  isw_cefjcfgegc_Slaveraid1
brw-rw---- 1 root disk 254,  4  6. Aug 2008  isw_cefjcfgegc_Slaveraid2
brw-rw---- 1 root disk 254,  5  6. Aug 2008  isw_cefjcfgegc_Slaveraid3
brw-rw---- 1 root disk 254,  6  6. Aug 16:10 isw_cefjcfgegc_Slaveraid5
brw-rw---- 1 root disk 254,  9  6. Aug 16:10 storage
tequila:/etc/udev/rules.d#
Slaveraid4 is an unused partition, so it is correctly not shown. hdb3_crypt is the encrypted /home, but on a separate hard disk (IDE controller).

With the new initrd it looks like this:

Code:
tequila:/etc/udev/rules.d# ll /dev/mapper
total 0
crw-rw---- 1 root root  10, 60  6. Aug 2008  control
brw-rw---- 1 root disk 254,  8  6. Aug 16:10 hdb3_crypt
brw-rw---- 1 root disk 254,  7  6. Aug 2008  hdb4_crypt
brw-rw---- 1 root disk 254,  0  6. Aug 2008  isw_cefjcfgegc_Masterraid
brw-rw---- 1 root disk 254,  2  6. Aug 2008  isw_cefjcfgegc_Masterraid1
brw-rw---- 1 root disk 254,  9  6. Aug 16:10 storage
tequila:/etc/udev/rules.d#
I must add that some months ago I accidentally destroyed the Slaveraid partitions and repaired them in Windows (because Linux repair tools cannot be used with device-mapper drives). Afterwards, chkdsk under Windows fixed several issues with the filesystem, so these partitions may not be completely sane, but they still worked flawlessly until now.
Maybe there is some sort of new sanity check that prevents the drives from being correctly found?
I wouldn't know how to find out...
 
Old 08-06-2008, 02:30 PM   #6
almatic
Member
 
Registered: Mar 2007
Distribution: Debian
Posts: 547

Original Poster
Rep: Reputation: 67
This seems to be the "fix" that led to my problems. What is an "undefined label" (see the dmsetup changelog), and how do I find out if my drive has one?
I'm now reading further into udev and trying to solve it myself. If anyone has any tips, they're still welcome. This has to go.

http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=491107
http://packages.debian.org/changelog...27-3/changelog
 
Old 08-06-2008, 06:46 PM   #7
almatic
Member
 
Registered: Mar 2007
Distribution: Debian
Posts: 547

Original Poster
Rep: Reputation: 67
OK, I have finally figured out that the dmraid binary is the source of the problem. One of these patches most probably contains a bug.

http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=489969
http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=489970

What I did was:
- replace /sbin/dmraid with the dmraid binary from the old initrd.
- create new initrd with update-initramfs -t -u -k $(uname -r)

-> everything works again.
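Those two steps can be sketched defensively so the binary is only swapped when it actually differs. This assumes the old initrd was unpacked into `old/`; `same_file` is just a helper of mine around `cmp`:

```shell
# Return success if two files are byte-identical.
same_file() { cmp -s "$1" "$2"; }

# Sketch of the workaround (run as root; paths are assumptions):
#   if ! same_file old/sbin/dmraid /sbin/dmraid; then
#       cp /sbin/dmraid /sbin/dmraid.broken   # keep the bad one for reference
#       cp old/sbin/dmraid /sbin/dmraid
#       update-initramfs -t -u -k "$(uname -r)"
#   fi
```

Keeping a copy of the broken binary makes it easy to attach both versions to a bug report or re-test after the next dmraid upload.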

I'll probably have to replace the dmraid binary every time dmraid gets updated, until more people hit this problem. Filing a bug is usually useless in Debian...
 
Old 11-23-2008, 12:02 AM   #8
newtovanilla
Member
 
Registered: Apr 2008
Posts: 267

Rep: Reputation: 30
Quote:
The drives are no longer listed in /dev/mapper
Did you find out why they were not listed? Was it only the "dmraid binary" and not the 65_dmsetup.rules?
 
Old 11-26-2008, 12:15 PM   #9
almatic
Member
 
Registered: Mar 2007
Distribution: Debian
Posts: 547

Original Poster
Rep: Reputation: 67
Yes, it was the dmraid binary. The second of the two bugs in the post above yours was obviously the reason.
Here is the Debian bug report for the bug that struck me.

http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=494278

dmraid now works normally again for me. The bug is gone with that fix.

Last edited by almatic; 11-26-2008 at 12:18 PM. Reason: removed that 'thumbs down' symbol in the title
 
  



