HELP NEEDED - Can't mount RAID volume after changing failed boot drive
Hi

I'm a Linux newbie, trying to help a friend whose boot disk died on a Fedora installation used as a Plex server.

I've replaced the main drive and re-installed Fedora without problems.

I've re-inserted the two RAID1 disks and they are detected. After much reading, I used mdadm to scan and assemble them, so the RAID volume now shows up in Disks; however, it won't open and gives the following error:
Unable to Access Location
Error mounting /dev/md0p1 at /run/media/plex/_mnt_md0: cannot
mount; probably corrupted filesystem on /dev/md0p1
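For reference, the scan/assemble steps I used were roughly these — the member partition names below are placeholders from my notes, so substitute whatever lsblk shows on your system:

```shell
# Scan the superblocks for existing arrays (read-only, safe)
sudo mdadm --examine --scan

# Assemble the array from its member partitions
# (/dev/sdb1 and /dev/sdc1 are placeholder names)
sudo mdadm --assemble /dev/md0 /dev/sdb1 /dev/sdc1

# Confirm the array came up and both members are active
cat /proc/mdstat
sudo mdadm --detail /dev/md0
```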
The RAID volume is 2 x 3 TB drives and contains all their Plex media, so I am desperate to access it.
I need to either:
Be able to fix the array so it is readable
or
Simply be able to read one of the disks so that I can copy the data off to another drive.
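From what I've read, if the damage is only a dirty log, XFS can sometimes be mounted read-only without replaying it, which would cover the second option. Is something like this safe to try? (The mount points here are just examples.)

```shell
# Read-only mount that skips XFS log replay (norecovery requires ro)
sudo mkdir -p /mnt/rescue
sudo mount -o ro,norecovery /dev/md0p1 /mnt/rescue

# If it mounts, copy everything off to another drive, e.g. with rsync
# (/mnt/backupdrive is a placeholder for wherever the spare disk is mounted)
sudo rsync -a /mnt/rescue/ /mnt/backupdrive/
```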
Code:
[plex@plexbox-localdomain ~]$ sudo fsck -n /dev/md0p1
[sudo] password for plex:
fsck from util-linux 2.32.1
If you wish to check the consistency of an XFS filesystem or
repair a damaged filesystem, see xfs_repair(8).
so then...
Code:
[plex@plexbox-localdomain ~]$ sudo xfs_repair -n /dev/md0p1
Phase 1 - find and verify superblock...
Phase 2 - using internal log
- zero log...
ALERT: The filesystem has valuable metadata changes in a log which is being
ignored because the -n option was used. Expect spurious inconsistencies
which may be resolved by first mounting the filesystem to replay the log.
- scan filesystem freespace and inode maps...
sb_fdblocks 266982522, counted 266990714
- found root inode chunk
Phase 3 - for each AG...
- scan (but don't clear) agi unlinked lists...
- process known inodes and perform inode discovery...
- agno = 0
- agno = 1
- agno = 2
- agno = 3
- process newly discovered inodes...
Phase 4 - check for duplicate blocks...
- setting up duplicate extent list...
- check for inodes claiming duplicate blocks...
- agno = 0
- agno = 1
- agno = 2
- agno = 3
No modify flag set, skipping phase 5
Phase 6 - check inode connectivity...
- traversing filesystem ...
- traversal finished ...
- moving disconnected inodes to lost+found ...
Phase 7 - verify link counts...
No modify flag set, skipping filesystem flush and exiting.
So when the original system died it broke the XFS filesystem. See the man page for xfs_repair(8), in particular the "-L" option.
Read the warnings, and the "Dirty Logs" section at the end. It's your friend's call, but you probably have no other option. Even if you lose some files, it's not the end of the world.
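In rough order of preference, the sequence would look something like this (mount points are examples; only reach for -L as the last step, since zeroing the log discards whatever metadata was in flight when the system died):

```shell
# 1. Try a normal mount first -- if it succeeds, the log is replayed
#    and nothing further is needed:
sudo mount /dev/md0p1 /mnt

# 2. If that fails, a read-only mount that skips log replay may still
#    let you copy the data off before attempting any repair:
sudo mount -o ro,norecovery /dev/md0p1 /mnt

# 3. Last resort: zero the dirty log and repair. This can lose the
#    changes that were sitting in the log, so copy data first if at
#    all possible.
sudo umount /mnt 2>/dev/null
sudo xfs_repair -L /dev/md0p1
```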