Linux - Software
This forum is for Software issues. Having a problem installing a new program? Want to know which application is best for the job? Post your question in this forum.
My point was, did you run xfs_{check|repair} on /dev/hdx or /dev/mapper/LogicalVolumeName?
If you did the former and ran the repair function, you may well have completely trashed your logical volume(s). The superblock error you mention is typically reported when the check or repair programs are run against a partition that does not itself contain a filesystem (e.g., an LVM physical volume).
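To make the distinction concrete, here is a sketch of the safe invocations; the mount point and volume-group/LV names (vg0, data) are placeholders for whatever `lvdisplay` reports on your system:

```shell
# Placeholder names -- substitute your own mount point and LV path.
umount /mnt/data                     # the filesystem must be unmounted first
xfs_check /dev/mapper/vg0-data       # read-only consistency check
xfs_repair -n /dev/mapper/vg0-data   # dry run: report problems, change nothing
xfs_repair /dev/mapper/vg0-data      # actual repair

# Running these same tools against a member disk such as /dev/hdb would
# treat raw LVM metadata as filesystem structures and can destroy the
# volume -- never point xfs_* tools at the physical disks.
```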
I have only run xfs_repair on the logical volume, not on the drives themselves (after unmounting the LVM, of course). I'm going to install SuSE 10.1 (the latest release) tonight and see if it makes any difference. Thanks for trying to help, PTrenholme!
Tried installing newest SuSE (10.1), no success, same error.
Also tried running xfs_check and xfs_repair on the logical volume (through /dev/mapper/logicalvolume); this also freezes the system instantly.
My next step would be to somehow test my disks individually, but I am clueless on how to proceed with this.
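One way to test the disks individually is to check each member disk read-only, which is safe even on an LVM physical volume. The device names below are placeholders for your actual disks:

```shell
# Placeholder device names -- substitute your actual member disks.
# All of these are read-only and will not touch the LVM metadata.
smartctl -H /dev/hdb                             # SMART health summary (smartmontools)
smartctl -t long /dev/hdb                        # start a full offline self-test
badblocks -sv /dev/hdb                           # read-only surface scan
dd if=/dev/hdb of=/dev/null bs=1M conv=noerror   # raw sequential read of the whole disk
```

If one disk stalls the raw read at the same offset every time, that disk is the likely cause of the freezes.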
My disks vary from 120 to 300 GB in size. Would it be possible to make an identical copy of any of the drives onto a 300 GB disk (using Ghost or a similar application) and replace the disks one by one? Or would this require the replacement disk to be identical to the original?
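For what it's worth, a byte-for-byte clone onto a larger disk generally works for LVM: the physical volume's metadata and UUID are copied along with everything else, and the extra space simply goes unused until you grow the PV. A sketch with placeholder device names (here /dev/hdb stands for the suspect disk and /dev/hdd for the larger spare):

```shell
# Placeholder devices: /dev/hdb = suspect disk, /dev/hdd = larger spare.
# conv=noerror,sync keeps going past bad sectors, zero-padding them.
dd if=/dev/hdb of=/dev/hdd bs=1M conv=noerror,sync

# The clone carries the same PV UUID as the original, so physically
# remove the old disk before reactivating the volume group:
vgscan
vgchange -ay
```

(ddrescue, if available, handles failing disks better than plain dd, since it retries bad regions and logs its progress.)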
EDIT:
I guess before I do something like that I _really_ need to make sure the filesystem itself is not scrambled, because otherwise the copy would lead to the exact same thing.
I am now able to copy the data off the LVM in pieces, up to the point where my system freezes. If I then reboot and copy the same files again, it works. So for me, the worst part is over. However, I was never able to find a solution to this problem, nor to determine exactly what it was, even after ruling out several factors; the disks seem to be intact. See my replies above for the steps I took to diagnose it.
Thanks to the people who tried helping me out, support like yours is more valuable than anything when you don't have anywhere to turn. Thanks guys.