LinuxQuestions.org
Old 04-28-2008, 02:09 PM   #1
JevidL
LQ Newbie
 
Registered: Aug 2003
Location: Ann Arbor, MI
Distribution: Gentoo Linux
Posts: 14

Rep: Reputation: 0
Centos kernel upgrade breaks dmraid on Intel Software Raid


I installed CentOS 5.0 a little while back; it shipped with kernel 2.6.18-8.1.15. I was able to get CentOS installed successfully on a dmraid RAID 0 set with LVM. Since then, I have not been able to get an upgraded kernel working. Whenever I try, I wind up with this error:

Code:
No RAID sets and with names: "isw_eieaghiei_Volume0"
failed to stat() /dev/mapper/isw_eieaghiei_Volume0"
	Reading all physical volumes. This may take a while... 
	/dev/sda2: read failed after 0 of 1024 at 4998830448960: Input/output error 
	No volume groups found
	/dev/sda2: read failed after 0 of 1024 at 4998830448960: Input/output error 
	Volume group "VolGroup00" not found
Buffer I/O error on device sda2, logical block 488167040
Buffer I/O error on device sda2, logical block 488167041
Buffer I/O error on device sda2, logical block 488167042
Buffer I/O error on device sda2, logical block 488167043
Buffer I/O error on device sda2, logical block 488167040
Buffer I/O error on device sda2, logical block 488167041
Buffer I/O error on device sda2, logical block 488167042
Buffer I/O error on device sda2, logical block 488167043
Unable to access resume device (/dev/VolGroup00/LogVol01)
Buffer I/O error on device sda2, logical block 488167040
Buffer I/O error on device sda2, logical block 488167041
mount: could not find filesystem '/dev/root'
setuproot: moving /dev/failed: No such file or directory
setuproot: error mounting /proc: No such file or directory
setuproot: error mounting /sys: No such file or directory
switchroot: mount failed: No such file or directory 
Kernel panic - not syncing: Attempted to kill init!
I believe this section indicates a problem with accessing the RAID set:
Code:
No RAID sets and with names: "isw_eieaghiei_Volume0"
failed to stat() /dev/mapper/isw_eieaghiei_Volume0"
That in turn causes LVM to fail, since the drives aren't being accessed properly.

I have at this point tried a number of things, primarily attempting to make a new initrd (for the new kernel, 2.6.18-53.1.14) using the following command:
Code:
mkinitrd new-initrd-file 2.6.18-53.1.14.el5
or
Code:
mkinitrd --preload raid0 --build-with raid0 new-initrd-file 2.6.18-53.1.14.el5
and then copying it over to
Code:
/boot/initrd-2.6.18-53.1.15.el5.img
and rebooting. None of these attempts has solved my issue, and I'm starting to feel at a loss. It seems like the solution is close at hand; I just haven't managed to get it quite right.
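For reference, this is roughly how I've been checking whether the dmraid setup actually makes it into the new initrd (a sketch; the initrd filename is just an example and should match the kernel you built it for):
Code:
# unpack the new initrd into a scratch directory (it is a gzipped cpio archive)
mkdir /tmp/initrd-check && cd /tmp/initrd-check
zcat /boot/initrd-2.6.18-53.1.14.el5.img | cpio -id
# look for the dmraid/kpartx activation lines in the generated init script
grep -n "dmraid\|kpartx" init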

Any help would be much appreciated, thanks.
 
Old 08-17-2008, 01:32 AM   #2
MKumagai
LQ Newbie
 
Registered: Aug 2008
Posts: 2

Rep: Reputation: 0
Problem in mkinitrd, I think

I installed CentOS 5.1 from DVD onto a Shuttle XPC SX48P2 E based PC.
I knew I would need to install the Yukon network driver (that part was easy: download it from the Marvell site, extract the tarball, then run ./install.sh), but I had a much bigger problem with the RAID (mirror) volume.

First of all, both Windows XP (which needs a driver floppy) and the CentOS installation disc detected the RAID volume correctly, and I could install onto partitions on the RAID volume.
However, when I rebooted into Linux after installation, the RAID volume was not detected and LVM used one of the raw disks (I don't know why it used sdb6 instead of sda6); fstab also contained /dev/sda3 instead of /dev/mapper/...

It took a lot of time and trouble, but I finally stabilized it.
In short, I think mkinitrd generated a wrong init script.

As you mentioned:
>No RAID sets and with names: "isw_eieaghiei_Volume0"
I also got a similar message.

That is a dmraid message from the init script inside /boot/initrd-....img.
(In an empty temporary directory, run zcat /boot/initrd-(version).img | cpio -i, then look at 'init'.)
The default mkinitrd gave me the following lines (the current yum-updated version creates the same):
> echo Scanning and configuring dmraid supported devices
> dmraid -ay -i -p "isw_cfgaijfacg_Volume0"
> kpartx -a -p p "/dev/mapper/isw_cfgaijfacg_Volume0"

When I tried the 'dmraid ... Volume0' command in a shell, it failed, while 'dmraid -ay' or 'dmraid -ay isw_cfgaijfacg' worked fine. That means the '_Volume0' suffix is what breaks it.
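If you want to reproduce the check on your own machine (your set name will differ; this is only a sketch):
Code:
# show the RAID sets dmraid can see
dmraid -s
# activating by the base set name works here
dmraid -ay isw_cfgaijfacg
# activating with the full "<set>_Volume0" name, as the init script does, fails
dmraid -ay -i -p "isw_cfgaijfacg_Volume0"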

I created another initrd whose init script contains:
> dmraid -ay -i -p "isw_cfgaijfacg"
> kpartx -a -p p "/dev/mapper/isw_cfgaijfacg_Volume0"
and that worked well.

Method: modify /sbin/mkinitrd
I copied /sbin/mkinitrd and changed the following lines:

1342c1342,1343
< emit "dmraid -ay -i -p \"$dmname\""
---
> dmnamecore=$(echo $dmname | sed -e 's/_Volume[0-9]\+//;')
> emit "dmraid -ay -i -p \"$dmnamecore\""
(the line numbers will differ in other versions; search for `dmraid')

This change strips the `_Volume0' suffix.
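A quick way to see what the added sed line does to a typical Intel (isw) set name, just as an illustration:
Code:
# the sed expression removes the trailing "_Volume<N>" from the dmraid set name
echo "isw_cfgaijfacg_Volume0" | sed -e 's/_Volume[0-9]\+//;'
# prints: isw_cfgaijfacg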

I do not know whether this method works in every environment, but it may be a hint for someone who has the same problem.

This method was still effective after a 'yum update' that upgraded the system to CentOS 5.2.

PS:
In addition, I needed to modify /etc/fstab so that /boot and swap use the RAID volume instead of LABEL= partitions. It seems that LABEL= resolves to /dev/sda (the first detected disk?).
Moreover, because I had to create the new initrd.img in a mis-detected (non-RAID) environment, the two physical disks ended up mismatched. I copied the new initrd to a USB stick, booted from the DVD (rescue mode, without letting it detect the CentOS installation), copied sdb6 (the disk that had been in use) onto sda6 with dd, rebooted into rescue mode again (this time with detection), and copied the image into /boot.
I think the fastest route would be: install CentOS, boot into rescue mode, chroot, run the modified mkinitrd from above, and edit grub.conf.
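Roughly, that rescue-mode route would look like this (only a sketch; the kernel version and the name of the patched mkinitrd copy are just examples):
Code:
# boot the install DVD with "linux rescue" and let it find the installation, then:
chroot /mnt/sysimage
# rebuild the initrd with the patched mkinitrd (example path and version)
/root/mkinitrd.patched -f /boot/initrd-2.6.18-92.el5.img 2.6.18-92.el5
# check the root= entry in /boot/grub/grub.conf, then exit and reboot
exit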
 
Old 08-18-2008, 05:01 AM   #3
AwesomeMachine
LQ Guru
 
Registered: Jan 2005
Location: USA and Italy
Distribution: Debian testing/sid; OpenSuSE; Fedora; Mint
Posts: 5,524

Rep: Reputation: 1015
If the machine is shut down using one kernel, say 2.6.24, and started up using 2.6.25, it can break software RAID. You need to start up again with the same kernel you shut down with.

There is a way around this: USE HARDWARE RAID. Software RAID is for a system that will perform its duty without altering the kernel. Software RAID is not a good choice if you want a cutting-edge kernel and software. 3ware makes some great SATA hardware RAID cards. I've never seen a hardware RAID card for parallel IDE. Nothing Promise manufactures for home users is hardware RAID; it is just a new twist on software RAID.

Hardware RAID cards have a processor and memory of their own.
 
Old 08-18-2008, 11:12 AM   #4
Quakeboy02
Senior Member
 
Registered: Nov 2006
Distribution: Debian Linux 11 (Bullseye)
Posts: 3,407

Rep: Reputation: 141
Quote:
Originally Posted by AwesomeMachine View Post
If the machine is shut down using one kernel, say 2.6.24, and started up using 2.6.25, it can break software RAID. You need to start up again with the same kernel you shut down with.
AM,
I haven't found this to be the case. I think the key may be that it's easier with a kernel you compile yourself, because the software RAID links will be there for the new boot. I used a Promise fakeraid machine for several years as a hobby/test box, with no real problems until I put in an Nvidia 7300 with 512 MB of RAM, and it simply does not like that old Promise card.
 
Old 08-28-2008, 11:53 AM   #5
Grrraham
LQ Newbie
 
Registered: Aug 2008
Posts: 5

Rep: Reputation: 0
Quote:
Originally Posted by MKumagai View Post
It took a lot of time and trouble, but I finally stabilized it.
In short, I think mkinitrd generated a wrong init script.

As you mentioned:
>No RAID sets and with names: "isw_eieaghiei_Volume0"
I also got a similar message.
Hi, thanks for your suggestions. I think I've just hit this same issue with a fresh install of Centos 5.2 on a Supermicro X7DBE mainboard, having used the Intel chipset RAID to configure a RAID1 pair. Anaconda happily recognised the /dev/mapper RAID device and installation was fine, but on reboot it failed to find the RAID device with the same error messages as above and defaulted to /dev/sda. Since I'm using RAID1 instead of RAID0 the single disk has a valid image so it can still boot successfully, but then it ignores the other half of the mirror.

Quote:
Originally Posted by MKumagai View Post
Method: modify /sbin/mkinitrd
I copied /sbin/mkinitrd and changed the following lines:

1342c1342,1343
< emit "dmraid -ay -i -p \"$dmname\""
---
> dmnamecore=$(echo $dmname | sed -e 's/_Volume[0-9]\+//;')
> emit "dmraid -ay -i -p \"$dmnamecore\""
(the line numbers will differ in other versions; search for `dmraid')

This change strips the `_Volume0' suffix.

I do not know whether this method works in every environment, but it may be a hint for someone who has the same problem.
It's not quite general, but yes, thanks, it does give me a helpful hint. NOTE: the bit of the name following the last underscore (Volume0 for you, something else for me) is the name you would have assigned to the RAID set when you configured it in the BIOS. The /dev/mapper/isw_<whatever> prefix is the identifier that dmraid prepends to it. So I think if the script were tweaked to find the last underscore and strip from that point on, it should work (see the sketch below). For me, life's too short, so I'm going to just unpack the initrd image to a temporary directory as you have shown, hack the init script to use the cut names, then crib the commands from the bottom of the mkinitrd script to repack the image into a new one by hand.
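A minimal sketch of that last-underscore idea, using plain shell parameter expansion (untested inside mkinitrd itself, so treat it as a hypothetical tweak):
Code:
# hypothetical: strip everything from the last underscore onward,
# so it works whatever name was assigned to the RAID set in the BIOS
dmname="isw_eieaghiei_Volume0"
dmnamecore=${dmname%_*}
echo "$dmnamecore"    # prints: isw_eieaghiei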


Quote:
Originally Posted by MKumagai View Post
PS:
In addition, I needed to modify /etc/fstab so that /boot and swap use the RAID volume instead of LABEL= partitions. It seems that LABEL= resolves to /dev/sda (the first detected disk?).
Moreover, because I had to create the new initrd.img in a mis-detected (non-RAID) environment, the two physical disks ended up mismatched. I copied the new initrd to a USB stick, booted from the DVD (rescue mode, without letting it detect the CentOS installation), copied sdb6 (the disk that had been in use) onto sda6 with dd, rebooted into rescue mode again (this time with detection), and copied the image into /boot.
I think the fastest route would be: install CentOS, boot into rescue mode, chroot, run the modified mkinitrd from above, and edit grub.conf.
I think this is going to give me grief too. I've booted the system to /dev/sda a few times now, so /var, .bash_history and other things will be out of sync with /dev/sdb, and I'll also need to repair the boot setup on both mirrors.
Could you please give more details on this procedure, and on how fstab should look? Mine has LABEL=/ and LABEL=/boot, but it doesn't reference anything in /dev.
Oddly, SWAP is still mapped to LABEL=SWAP-isw-edfiaf, which looks like a dmraid entity.
Do I need to edit the kernel line in grub.conf also? Currently it uses root=LABEL=/

This appears to be a definite bug in the updated kernel. Does anyone know if a bug report has been submitted?
 
Old 08-31-2008, 04:06 AM   #6
MKumagai
LQ Newbie
 
Registered: Aug 2008
Posts: 2

Rep: Reputation: 0
Hello Grrraham,
Quote:
Originally Posted by Grrraham View Post
I think this is going to give me grief too. I've booted the system to /dev/sda a few times now, so /var, .bash_history and other things will be out of sync with /dev/sdb, and I'll also need to repair the boot setup on both mirrors.
Could you please give more details on this procedure, and on how fstab should look? Mine has LABEL=/ and LABEL=/boot, but it doesn't reference anything in /dev.
Oddly, SWAP is still mapped to LABEL=SWAP-isw-edfiaf, which looks like a dmraid entity.
Do I need to edit the kernel line in grub.conf also? Currently it uses root=LABEL=/

This appears to be a definite bug in the updated kernel. Does anyone know if a bug report has been submitted?
I modified /etc/fstab so that it does not contain any LABEL= entries, as follows.
Code:
/dev/VolGroup00/LogVol00 /                       ext3    defaults        1 1
/dev/VolGroup00/LogVol01 /var                    ext3    defaults        1 2
#LABEL=/boot1            /boot                   ext3    defaults        1 2
/dev/mapper/isw_cfgaijfacg_Volume0p3 /boot       ext3    defaults        1 2
tmpfs                   /dev/shm                tmpfs   defaults        0 0
devpts                  /dev/pts                devpts  gid=5,mode=620  0 0
sysfs                   /sys                    sysfs   defaults        0 0
proc                    /proc                   proc    defaults        0 0
/dev/mapper/isw_cfgaijfacg_Volume0p5 swap       swap    defaults        0 0
I only changed the "LABEL=/..." entries into mapped devices ("/dev/mapper/...").
"/" and "/var" are on LVM volumes that are detected by dmraid and lvm in the revised initrd.
"/boot" and "swap" now use the RAID 1 partitions directly. As the commented-out line shows, those entries originally contained "LABEL=/boot1", but I changed them, because detection of labelled partitions seemed to prefer the raw disks over the RAID volume.
With RAID 0 there would be no valid partition on either physical disk, but with RAID 1 both physical disks contain the same valid partition; I think that is one of the causes.
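To see which device a LABEL= entry actually resolves to, something like this should work (assuming blkid and findfs are available):
Code:
# show every device carrying this label; on a fake-RAID mirror both raw disks
# and the /dev/mapper device may report it
blkid -t LABEL=/boot1
# or resolve the label to a single device
findfs LABEL=/boot1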

I also have a line in grub.conf with
... root=/dev/VolGroup00/LogVol00 ...
but that was generated automatically at installation because I used LVM. If you do not use LVM, I think it should point at "root=/dev/mapper/...".
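For a non-LVM root that would look roughly like this (the kernel version and partition number are only examples, not from my system):
Code:
# hypothetical grub.conf kernel line with the root on the dmraid device
kernel /vmlinuz-2.6.18-92.el5 ro root=/dev/mapper/isw_cfgaijfacg_Volume0p1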

To make the contents of the two physical disks the same, I used dd in the 'rescue' environment of the CentOS installation disc (simply type "linux rescue" at the boot prompt).
Because the installer's automatic detection mounts the partitions, which gets in the way of dd, I chose to skip detection before the rescue shell.

I forget the exact command, but it was something like
# dd if=/dev/<source> of=/dev/<dest> bs=512M
<source> can be determined with 'df' while the system is booted without RAID; <dest> is the other disk.
(For me, the source was /dev/sda3 and the destination /dev/sdb3; the only difference is sda versus sdb.)
Please note that I am not confident about this operation, and it may break the system if it is not appropriate for your setup.

After I posted here, I found that this was reported at Red Hat Bugzilla:
Bug 349161 - Missing entries in the init script ... See comments #15 and #16.
 
Old 09-03-2008, 01:20 PM   #7
Grrraham
LQ Newbie
 
Registered: Aug 2008
Posts: 5

Rep: Reputation: 0
I had a theory that the LABEL= mappings in /etc/fstab might get used properly if you fix the init script immediately following a fresh install (i.e. reboot after install immediately into rescue mode). I had certainly seen that when it fails to recognize RAID and reverts to booting from /dev/sda only, it then 'repairs' some configuration mappings during the boot. I presume /etc/blkid/blkid.tab comes into this somehow, and maybe this gets overwritten when reverting to non-RAID.
So if the init bug is fixed before this problem arises, the original settings in /etc might be OK. Anyway, since I'd failed to get the two RAID disks back in sync, and since it was a fresh install anyway, I zapped the RAID volume in the BIOS and recreated it, then did a fresh CentOS install. I did exactly as suggested above (i.e. only editing init), and got a kernel panic on boot. Hmm! So I then patched /etc/fstab as recommended, to get rid of the LABEL= references, and tried again. I got another kernel panic.

D'oh! I should have read this thread a bit more carefully. I had removed the suffix from both the dmraid AND the kpartx lines, instead of just dmraid, and that is what gave me the kernel panic. I patched the init script again and this time the system came up, mounting all disks as /dev/mapper/... RAID entities. So hooray, I've got it working. The mistake invalidated my experiment to see whether the fstab changes are really necessary, but thanks for clarifying what changes to make.

Quote:
Originally Posted by MKumagai View Post
I forget the exact command, but it was something like
# dd if=/dev/<source> of=/dev/<dest> bs=512M
<source> can be determined with 'df' while the system is booted without RAID; <dest> is the other disk.
(For me, the source was /dev/sda3 and the destination /dev/sdb3; the only difference is sda versus sdb.)
Please note that I am not confident about this operation, and it may break the system if it is not appropriate for your setup.
Yes, I use dd on the UNMOUNTED disks. For example, after mounting /dev/sda1 (/boot) to patch initrd, I unmounted it again and used
Code:
dd if=/dev/sda1 of=/dev/sdb1 bs=1024
Note that I prefer to use a block size that divides evenly into the partition size, so that only whole blocks are transferred (use fdisk -l to check partition sizes); see the quick check below.
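A quick way to double-check the two halves before copying (assuming blockdev is available in the rescue environment):
Code:
# both halves of the mirror should report the same size (in 512-byte sectors)
blockdev --getsz /dev/sda1
blockdev --getsz /dev/sdb1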
The main reason I'd rather not have to edit /etc/fstab is that it is on the main root partition, so I then have to use dd to resync this too, which takes a while.
I've realised that the reason the RAID volume of the previous installation never got back into sync is probably that I forgot to dd the swap partitions, since the non-RAID sessions would have altered that partition too. The data there is total junk, but the BIOS doesn't know that, and it reports a 'rebuild' state if it sees block mismatches anywhere on the disks.

Quote:
Originally Posted by MKumagai View Post

After I posted here, I found that this was reported at Red Hat Bugzilla:
Bug 349161 - Missing entries in the init script ... See comments #15 and #16.
Nice to see that an urgent bug has been categorised as low priority for fixing. Apparently being able to boot isn't a priority...
I see comment #15 has no mention of /etc/fstab, so maybe if I'd got the init fix right first time, it would have worked.
 
Old 09-03-2008, 02:32 PM   #8
Grrraham
LQ Newbie
 
Registered: Aug 2008
Posts: 5

Rep: Reputation: 0
Well I had the excuse that I had a new Kickstart configuration to test, so I re-ran the kickstart install, and it was then that I twigged that there is a far easier way to apply the patch, with NO RESCUE CD and NO MANUAL RESYNC of the RAID mirrors after editing.
The anaconda installer has already successfully mounted the RAID set to do the installation, so when the OS install finishes, don't reboot; instead, simply press CTRL-ALT-F2 to switch to the shell command prompt. Now you can use this mini shell (nash) to apply the fixes to the mounted RAID volume, rather than booting rescue mode, patching one disk and then dd'ing it to the other. Having used this approach, I've confirmed that changing fstab is not required; just the dmraid command in the init file needs to be altered.
I found that the following set of commands works in the nash shell environment, and since I was working with kickstart, I confirmed that you can get a fully automated kickstart install and self-fix just by adding the lines to the %post section of ks.cfg.
Of course, if you're not using kickstart and want to do this interactively, you can use jmacs or vi to edit the init file directly rather than the scripted chop/reassemble method. The comments ought to make it clear what's going on.

Code:
#this is where /boot is mounted by anaconda
bootpath=/mnt/sysimage/boot
mkdir /tmp/imgdir 
cd /tmp/imgdir
#extract the installed initrd file
zcat ${bootpath}/initrd-2.6.* | cpio -i 1>/dev/null 2>&1
#Fix init script. Manual approach: use a text editor interactively
#Scripted approach for automated fix:
#First get number of lines either side of dmraid line
lineno=`grep -n "dmraid -ay" init | cut -d: -f1`
let "nhead=${lineno}-1"
#this works in nash shell, where wc and cut are slightly different
nlines=`wc -l init | cut -c1-7`
let "ntail=${nlines}-${lineno}"
#Assemble new init file, start with everything before dmraid line
head -${nhead} init >/tmp/fixinit
#extract everything before the second underscore in the dmraid line
#note this also chops the closing double-quote
text=`grep "dmraid -ay" init | cut -d_ -f1-2`
#add the line to the new init, including the closing double-quote
echo ${text}\" >>/tmp/fixinit
#add the rest of the init file as-is
tail -${ntail} init >>/tmp/fixinit
#backup the original, in case something goes wrong
cp -a init /mnt/sysimage/root/init.orig
#replace init with the fixed version
cp -f /tmp/fixinit ./init
#(end of scripted approach to fix init)
#rebuild compressed initrd image file
find . | cpio --quiet -c -o | gzip -9 >/tmp/newinitrd.img
#when you feel brave, include this line to overwrite the boot initrd
cp -f /tmp/newinitrd.img ${bootpath}/initrd-2.6.*
Hope this is useful for others hitting this bug.
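After rebooting into the new kernel, a quick sanity check that the RAID set (and not a raw disk) is actually in use might look like this (a sketch):
Code:
# the isw set should be listed as active
dmraid -s
# and the mounted devices should all be /dev/mapper/isw_* entries
ls /dev/mapper/
mount | grep mapper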
 
Old 09-04-2008, 12:08 PM   #9
Grrraham
LQ Newbie
 
Registered: Aug 2008
Posts: 5

Rep: Reputation: 0
Looks like this is the relevant upstream bug report, which apparently does have a fix already on the way for the 5.3 release.
I'd be interested to know what effect the bug fix will have when applied to a system on which we've applied a workaround such as hacking the init script. From the description, it looks like they've fixed dmraid to respond correctly to the original syntax in the init script.
 
Old 06-08-2009, 05:30 AM   #10
Grrraham
LQ Newbie
 
Registered: Aug 2008
Posts: 5

Rep: Reputation: 0
do not use dmraid!

Update:
do not use dmraid on a critical system - it can trash your disks!
I had another incident where the RAID1 volume was not recognised, so only one disk of the mirror was mounted, and it was then modified while the OS was running. When I booted back into the old kernel, the chipset reported the mirror's status as 'REBUILD', but dmraid ignored this, came up with the mirror attached, and proceeded to use it on the assumption that the two disks were in sync. Of course they were not, but since most reads are taken from one disk, no inconsistency was visible. All block writes were replicated to both disks in the mirror, causing both the data and the journal on each disk to gradually become completely corrupted.
It looks like dmraid still doesn't support rebuilding a mirror and doesn't fail safely to protect data, and its implementation of RAID1 provides only an illusion of data security.
The lesson is, use Linux software RAID (mdraid), not fake-RAID (dmraid).
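For comparison, with Linux software RAID the array state and any rebuild are at least visible and handled by the md layer (a sketch; /dev/md0 is just an example device name):
Code:
# overall md RAID state, including any resync/rebuild in progress
cat /proc/mdstat
# detailed state of a single array (example device name)
mdadm --detail /dev/md0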
 
  

