FC5, Hardware Raid 5, LVM, Rescue Mode, Bad Superblock
Greetings,
My brother has an FC5 box with a 1.2TB HighPoint hardware RAID 5 array. Something bad happened, we're not sure what, but his system is in a very precarious state and it has all of his family's data on it.
I had him boot:
linux rescue dd
It found the FC5 install but we did not mount it.
We first tried:
fsck /dev/sda1
fsck /dev/sda2
which failed with a "device busy" error even though neither partition was mounted, and kicked out a bunch of bad superblock messages, but it did successfully "generate" the VolGroups under /dev/mapper/*.
We then tried:
e2fsck -v -y /dev/mapper/whatever
and it kicked out an invalid response when opening /dev/*
Then we tried fsck -b 8193 on sda1, sda2, and the volume groups, and that failed too. WHAT IS GOING ON?!
He wanted to use Windows Server and I persuaded him to use Linux, and I will not lie... I AM REGRETTING THAT DECISION BIG TIME, because this is a COMPLETE JOKE!!!
Can anyone step up and redeem Linux? I sure haven't been able to...
You could try booting from CD/DVD into rescue mode: type "linux rescue" at the prompt and see if it will detect the array.
You can try vgchange -a y first to make it search for and activate the LVM volumes, then see if you can get some LVM info with lvdisplay, pvs, etc.
Also, you could run "fdisk -l" to ensure the partitions are OK.
Also check that you have not lost a disk. Since it's hardware RAID, check the RAID controller to ensure it sees the right number of disks.
As the disks are LVM, trying to mount them directly will not work.
The kernel needs to construct the LVM information before you can fsck the filesystem; trying to fsck an LVM physical partition directly will not work and will probably trash the volume if you force it.
I think your best bet is to perform a filesystem check like you are trying, but you don't want to have LVM logical volumes mounted while you are trying to fsck.
When you boot from the CD and use linux rescue, don't let the rescue process mount the logical volumes in either read-write or read-only mode. This happens in the last step before it gets to the shell prompt, when it asks if you want to find any Fedora installations. If you hit Continue or Read-Only it will mount your logical volumes and make it more difficult to use fsck.
Once you get to the shell prompt, type lvm lvscan to locate your logical volumes. At this point they should be inactive. Make them active with lvm lvchange -a y /dev/VolGroup00/LogVol00 (replace VolGroup and LogVol with what you found using lvscan). If the default LVM install was used, one of the LVs will be swap, and we really don't need to work with that one.
Once the LogVol is active you can use e2fsck -f /dev/VolGroup00/LogVol00 to perform the check; add the -y option if you want it to correct errors without asking. If your filesystem is not ext2/3, you can determine what it is with fsck -N /dev/VolGroup00/LogVol00 and then use the proper tool.
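For reference, here's the filesystem-check portion of that sequence run against a small scratch image file instead of the real logical volume, so nothing gets touched. The /tmp path and image size are made up for illustration; on the rescue system you'd point these commands at /dev/VolGroup00/LogVol00 after activating it (assumes e2fsprogs is installed):

```shell
# At the rescue prompt the LVM steps come first (device-bound, so only
# shown as comments here):
#   lvm lvscan                                  # list logical volumes
#   lvm lvchange -a y /dev/VolGroup00/LogVol00  # activate before checking

# Stand-in target: a 16MB file-backed ext3 image (-j = journal,
# -F = allow a regular file instead of a block device).
dd if=/dev/zero of=/tmp/demo.img bs=1M count=16 2>/dev/null
mke2fs -q -F -j /tmp/demo.img

# Step 1: identify the filesystem type so you pick the right fsck tool.
# -N prints the command fsck *would* run without running it.
fsck -N /tmp/demo.img

# Step 2: force a full check; -y answers yes to every repair prompt.
e2fsck -f -y /tmp/demo.img
```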
Please let us know how you make out. If you are still having trouble after this tell us more about the error messages you are getting.
Hey guys, we tried all of that. Once we activated the LVs, we ran e2fsck -v -y /dev/mapper/VolGroup00/LogVol01
and it gave us the bad superblock error for which we tried
e2fsck -v -b 8193 /dev/mapper/VolGroup00/LogVol01
and that failed saying the same thing; we tried all the backup superblocks and none of them worked.
1.2TB at 75% capacity... no, there is no backup. In fact, all the other machines back up to THIS ONE!! The backup was the hardware RAID 5 array, which makes a joke out of everything it's supposed to be AMAZING at.
As wmakowski said, you need to run fsck on /dev/VolGroup00/LogVol00 or whatever shows up in lvscan. /dev/mapper holds the mapped device nodes used by the kernel, and they use a dash rather than a slash (e.g. /dev/mapper/VolGroup00-LogVol01), so the path you ran e2fsck on doesn't exist. Try /dev/VolGroup00/LogVol00 and see how you go.
You must make them active with lvchange in order for them to appear in /dev. If this is a default setup, I'm fairly certain that LogVol01 is the swap. Swap uses a different filesystem type and it makes sense that e2fsck would not experience great joy. What were the results from your e2fsck on LogVol00?
You should be able to look at /grub/grub.conf on the boot partition and tell which volume is your root volume. It will be next to root= on the kernel line. Another way is to use lvdisplay and look at the size of the volume. If you only have two volumes the larger of the two is where Fedora resides.
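A quick way to pull the root= value out once you can see the boot partition; the grub.conf contents below are invented as an example of what a stock FC5 entry looks like (yours will differ in kernel version):

```shell
# Sample grub.conf kernel line, made up for illustration:
cat > /tmp/grub.conf <<'EOF'
title Fedora Core (2.6.15-1.2054_FC5)
        root (hd0,0)
        kernel /vmlinuz-2.6.15-1.2054_FC5 ro root=/dev/VolGroup00/LogVol00 rhgb quiet
        initrd /initrd-2.6.15-1.2054_FC5.img
EOF

# Extract the root volume from the kernel line:
grep -o 'root=[^ ]*' /tmp/grub.conf
# → root=/dev/VolGroup00/LogVol00
```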
No problem, I can see you are getting frustrated and I would be too. Yes, I did read your initial post. From what you posted it looked as if you weren't sure of the exact steps to take. I replied with what has worked for me in the past. It is sometimes difficult to tell what is working and what is not from this end. Could you post a few of the error messages you are receiving when you enter different commands?
You mentioned getting a bad superblock message. You can find the locations of the backup superblocks using the command mke2fs -n /dev/VolGroup00/LogVol00. The -n option will not make the fs, just tell you the results of what it would do and show where the other superblocks would be located. Then you can try e2fsck -b ##### /dev/VolGroup00/LogVol00. Are you certain this is an ext2/ext3 filesystem? What were the results of fsck -N /dev/VolGroup00/LogVol00?
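Here's that backup-superblock dance demonstrated on a scratch image file rather than the real volume (paths and sizes are made up; substitute /dev/VolGroup00/LogVol00). The image is built with 1k blocks so the first backup lands at the classic 8193 location:

```shell
# Scratch 16MB ext3 image with 1k blocks; -F allows a regular file.
dd if=/dev/zero of=/tmp/sb.img bs=1024 count=16384 2>/dev/null
mke2fs -q -F -j -b 1024 /tmp/sb.img

# -n: dry run -- report where the superblock backups would go,
# without touching the filesystem.
mke2fs -n -F -b 1024 /tmp/sb.img | grep -A1 "Superblock backups"

# Point e2fsck at a backup superblock instead of the (possibly bad)
# primary; -y answers yes to repair prompts.
e2fsck -b 8193 -y /tmp/sb.img
```

Note the -b value to e2fsck must match a backup location for the block size actually in use: 8193 only exists on 1k-block filesystems, while 4k-block filesystems keep their first backup at 32768, which is why guessing 8193 can fail even when the fs is ext2/3.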
Just thought I'd share my experience of the last few days to see if it helps any. I've got a similar setup and, through a little bad luck and a little bit of rushing, I recently attempted to upgrade my nvidia video drivers via yum... which pulled in a new kernel. All well and good normally, except the current HighPoint drivers (for my device, a rr23xx) do not support kernels past 2.6.20.
So, up my system pops, not a peep from the data on my RAID. Sound similar? Maybe this explains *how* your bro's system came to be in this state.
I was fortunate enough to be able to roll back and double-check my data was still good.
Even with a rescue disk, I found that though an sda device was visible, making any sense of the data on it was impossible: fsck failed, and so did anything involving the LVM I have on there. It actually turned out easier for me to revert to a 2.6.20 kernel than to build a rescue disk with a working HighPoint driver built in.
.... and saying that, everything's back up and running sweet as at this end.
Let us know if your system reports a post 2.6.20 kernel and what highpoint card you're using.