Linux - Hardware: This forum is for Hardware issues.
Having trouble installing a piece of hardware? Want to know if that peripheral is compatible with Linux?
I have an array of 3 disks, RAID 0, with an XFS filesystem. One of the disks is dying; smartctl is showing errors. The problem is I just left for vacation, so I can't replace it, and the data is pretty important.
But the data doesn't seem to be all corrupted, just specific blocks... so is there a way to mark these sectors?
After I ran xfs_repair I was able to mount it, but it stopped working again once I read the bad sector. Maybe there is a way to avoid writing to / reading from it, so all the other data stays untouched?
I just need to get it running. I know I should replace the HDD, but I'm sure there is a way to just mark the bad sectors.
Also, I heard that sometimes, when a power-off occurs and a sector is not fully written, it appears as invalid. Maybe that's what happened? Because this error started when there was a power-down.
Any ideas how to fix it without losing most of the data?
My HDD is a 250 GB WD.
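A note on the "mark these sectors" question: drive firmware only remaps a pending sector when that sector is written, not read. The usual approach is to find the failing LBA in the kernel log and overwrite exactly that one sector with dd, which forces the drive to swap in a spare. A sketch of the idea (the device name /dev/sdb and the LBA are placeholders, not from this thread, and the write destroys those 512 bytes), demonstrated safely on a scratch image:

```shell
# On the real disk (PLACEHOLDERS: /dev/sdb, LBA 12345678) you would find the
# failing LBA in dmesg and overwrite exactly that one sector, e.g.:
#   dd if=/dev/zero of=/dev/sdb bs=512 seek=12345678 count=1 oflag=direct
# Overwriting a pending sector is what lets the firmware remap it to a spare.

# Safe demo of the same in-place dd overwrite, on a scratch image instead:
dd if=/dev/zero of=/tmp/disk.img bs=512 count=100 2>/dev/null   # 100 "sectors" of zeros
head -c 512 /dev/zero | tr '\0' 'X' \
  | dd of=/tmp/disk.img bs=512 seek=42 count=1 conv=notrunc 2>/dev/null
# conv=notrunc keeps the rest of the image intact; only "sector" 42 changed
stat -c%s /tmp/disk.img    # still 51200 bytes
```

Without conv=notrunc, dd would truncate the file right after the written block, so that flag is the important part when patching in place.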
The disk manufacturers provide standalone diagnostic programs on their web sites. One of the things these diagnostic programs will do is check every block on your drive. If a bad block is encountered, it is reassigned to a spare block beyond the normal end of the disk. The diagnostic program will also tell you if your disk is beyond repair. This repair is destructive and erases the entire disk. I used the Western Digital diagnostic program about 5 years ago and it worked as advertised. (It told me that I had more bad blocks than spares and the disk was kaput.)
So to use this diagnostic you first have to save all of your data to a backup. In your case this could be a major problem. You could try to figure out which files contain bad blocks, then copy every file except the bad ones.
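The "figure out which files contain bad blocks" step can be brute-forced: read every file once and note which ones error out. A rough sketch (the mount point /mnt/raid is a placeholder), demonstrated here on a scratch directory of healthy files:

```shell
# On the mounted array (/mnt/raid is a placeholder) the scan would be:
#   find /mnt/raid -type f -exec sh -c \
#     'cat "$1" >/dev/null 2>&1 || echo "unreadable: $1"' _ {} \;
# Files it prints are the ones sitting on bad sectors - skip those when copying.

# Same scan demonstrated on a scratch directory:
mkdir -p /tmp/scan-demo
echo hello > /tmp/scan-demo/a.txt
echo world > /tmp/scan-demo/b.txt
find /tmp/scan-demo -type f -exec sh -c \
  'cat "$1" >/dev/null 2>&1 || echo "unreadable: $1"' _ {} \;
# prints nothing here, since both demo files read fine
```

Be aware that a full read pass stresses an already-failing drive, so imaging the disk first (as suggested later in the thread) may be the safer order of operations.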
My understanding is that modern drives will automatically mark and avoid bad sectors. As long as you're not receiving filesystem errors, it means that RAID is doing its job. Obviously you'll need to replace the failing drive as soon as possible, since RAID can only survive the failure of one disk.
Since this is RAID 0, it actually can't survive even one disk failing. However, it sounds like the OP has a good chance of recovering most of the data.
Here are some links that might help get you started. They aren't specific to XFS, so I'm not sure what (if anything) you'd need to change to make them work with XFS. Hopefully someone more knowledgeable than I am will chime in. And of course, make sure you check and understand the man pages thoroughly before trying something new. One thing you may want to consider is using dd (with the noerror option), as described in the first link, to make a backup copy of your drive, so if it degrades further you're not completely out. Though I'm not sure what to do with RAID in the mix...
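To make the dd suggestion concrete, here is a sketch of a noerror backup (the source and target paths are placeholders), demonstrated on a healthy scratch file so the error path isn't actually exercised, but the invocation is the one you'd use:

```shell
# On the real hardware (PLACEHOLDER paths) the backup would be:
#   dd if=/dev/sdb of=/mnt/backup/sdb.img bs=64k conv=noerror,sync
# noerror - keep going past read errors instead of aborting
# sync    - pad failed/short reads with zeros so offsets in the image stay aligned

# Demo of the same invocation on a scratch file:
dd if=/dev/urandom of=/tmp/src.img bs=1M count=4 2>/dev/null
dd if=/tmp/src.img of=/tmp/src.copy bs=64k conv=noerror,sync 2>/dev/null
cmp -s /tmp/src.img /tmp/src.copy && echo "copies identical"
```

If it happens to be installed, GNU ddrescue does this job better (it retries bad regions, keeps a log, and can resume), but plain dd with noerror is available everywhere.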