After searching the forums I didn't find the exact answer I was looking for, so I thought I'd come here and ask.
I am currently running CentOS 5.3 (64-bit) with a RAID 1 configuration. The drives are also partitioned in half, with Vista on the other half.
I recently had to remove one of the hard drives in order to replace my video card. Upon reboot, I found that the RAID 1 array (software RAID, Intel ICH7 chipset) was no longer in sync. I was able to boot into Vista and use Dell's Matrix RAID software to resync the drives. However, upon booting into CentOS, I am taken to a command prompt with the following error messages:
fsck.ext3: Device or resource busy while trying to open /dev/sda5
Kernel alive [FAILED]
An error occurred during the file system check.
Dropping you to a shell; the system will reboot when you leave the shell
Give root password for maintenance
sda5 is my Linux partition and sda1 is Vista.
As you might have noticed already, I'm not very knowledgeable about repairing Linux problems, so any help is greatly appreciated!
Better post your /etc/fstab and your menu.lst from /boot/grub. You can do that by opening a terminal window from a rescue disk, or reading them straight from that command prompt might work as well, assuming you have access to your filesystem from there.
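If you do have a shell, these two commands will print them (assuming your root filesystem is mounted in the usual place; from the CentOS rescue environment it typically ends up under /mnt/sysimage instead, so prefix the paths accordingly):

cat /etc/fstab
cat /boot/grub/menu.lst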
I presume you set up CentOS with the drives already configured in RAID using that same firmware? CentOS should then be using dmraid to recognize the drives, not mdadm. You might want to read up on the difference if you don't know it; it'll be helpful later, perhaps.
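A quick way to tell which one is managing the disks (dmraid should be present on a CentOS 5 install done on top of the firmware RAID; this is just a sketch of the checks, not output from your machine):

dmraid -s        # summarizes any firmware RAID sets dmraid recognizes
cat /proc/mdstat # lists mdadm-managed arrays; effectively empty if mdadm isn't in use
ls /dev/mapper   # dmraid-activated sets appear here as device-mapper nodes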
RAID set "isw_biaeideiab_Volume0" already active
RAID set "isw_biaeideiab_Volume01" already active
RAID set "isw_biaeideiab_Volume02" already active
RAID set "isw_biaeideiab_Volume03" already active
RAID set "isw_biaeideiab_Volume05" already active
Sorry for my ignorance on this issue, but I'm still learning many of the basics, and I don't have a system admin to turn to.
Hmm, that's your temporary fstab. You'll have to mount your arrays with a rescue disk and look at the (no longer) running system. If you can, look in /dev/mapper and find the isw_biaeideiab_Volume0X devices to mount:

mount -t ext3 /dev/mapper/isw_biaeideiab_Volume0 /mnt/example

(swap ext3 for whatever filesystem you actually have). But you HAVE to mount the right one, and it's probably best to do it read-only. Some of those entries may be pieces of the RAID; the other one will be the RAID itself.
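In other words, something like the following from the rescue shell (the Volume05 name and the mount point are only examples based on your dmraid output; check ls /dev/mapper for the exact names on your system):

ls /dev/mapper                 # list the activated RAID set devices
mkdir -p /mnt/example          # scratch mount point, name is arbitrary
mount -o ro -t ext3 /dev/mapper/isw_biaeideiab_Volume05 /mnt/example  # read-only, so a wrong guess does no harm
ls /mnt/example                # a Linux root tree (bin, etc, home...) means you picked the right one
umount /mnt/example            # unmount before trying the next candidate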
Hmm, if you're getting stuck at stage 1.5 and you re-made your RAID, I'll bet you just need to reinstall grub, which you can do with a grub floppy, SuperGrub, or grub-install (not sure with CentOS). If you change your root (hd0,1) to root (hd0,0) in menu.lst, it'll probably work too, but you'll still have a corrupt grub on the other disk.
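For the grub-shell route, the reinstall usually looks something like this, run as root from the working system or after chrooting in from a rescue disk. The (hd0,0) here is only an assumption about where /boot lives; the find command reports the real location (use find /boot/grub/stage1 instead if /boot isn't a separate partition):

grub                      # start the grub legacy shell
grub> find /grub/stage1   # reports the (hdX,Y) that actually holds the grub files
grub> root (hd0,0)        # point grub at that partition
grub> setup (hd0)         # rewrite stage1/stage1.5 into the MBR of that disk
grub> quit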