LinuxQuestions.org
LinuxQuestions.org > Forums > Linux Forums > Linux - Software
Old 09-09-2007, 04:01 PM   #1
Vrajgh
Member
 
Registered: Aug 2005
Posts: 65

Rep: Reputation: 31
Reiserfs - how to find which file contains data in a block


I've had some issues with bad blocks today (time to get a new hdd I think..) The partition which was causing trouble contains a Reiserfs file system.

I ran badblocks to work out which blocks were bad and used dd to write over them and force the drive to remap them. Running badblocks after this operation showed no bad blocks, suggesting that the disc has successfully re-allocated them. Reiserfsck dealt with the issues from not having unmounted the filesystem correctly, and everything appears to be behaving normally.
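The procedure described here might look roughly like the following sketch. The device name in the comment (/dev/hda3) and all file names are placeholders, and badblocks' default block size of 1024 bytes is assumed; the overwrite loop is demonstrated on a scratch image file so nothing real is harmed.

```shell
# On the real system the scan would be (device name is a placeholder):
#   badblocks -sv -o bad-blocks.txt /dev/hda3
# Demonstrated here on a scratch image file instead of a device.
img=disk.img
head -c 8192 /dev/urandom > "$img"   # stand-in for the partition (8 x 1024-byte blocks)
printf '2\n5\n' > bad-blocks.txt     # pretend badblocks flagged blocks 2 and 5

# Overwrite each listed block. badblocks reports block numbers in units
# of its block size (1024 bytes by default), so dd must use the same size.
while read blk; do
    dd if=/dev/zero of="$img" bs=1024 seek="$blk" count=1 conv=notrunc 2>/dev/null
done < bad-blocks.txt

# On a real drive you would then re-run badblocks to confirm a clean scan.
```

On a real device, each write to a genuinely failing sector gives the drive's firmware the chance to remap it to a spare.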

The issue I have is that I've used dd to write over specific blocks on the disc, which means I have erased information. There will be some files on the disc which are corrupted by this process (these could include executables and shared libraries.) Given that I still have the list of blocks that I have overwritten, how can I map these blocks to specific files which need to at least be inspected but more likely deleted or restored?
 
Old 09-11-2007, 03:09 AM   #2
Junior Hacker
Senior Member
 
Registered: Jan 2005
Location: North America
Distribution: Debian testing Mandriva Ubuntu
Posts: 2,687

Rep: Reputation: 59
You would have needed to check the sectors in those blocks with a hex editor before using the "data destroyer" on them. The thing is that you mentioned using a utility that re-mapped the blocks, and that of course has to happen before using dd on them. If that's the order in which you proceeded, there is no need to worry, as the file tables should also have been updated and informed of the change in location of the sectors/blocks in question.
Did you run some modified dd command to have dd re-map the sectors? Because this appears to be what you are implying.
When re-mapping bad sectors/blocks, the utility generally has to make many passes over them to get a 100% verified read of the contents of each sector/block before it can write the data to spare sectors. Remember, the reason they are flagged as bad in the first place is that the drive's read/write heads have to make too many passes to read them.
 
Old 09-11-2007, 04:56 AM   #3
Vrajgh
Member
 
Registered: Aug 2005
Posts: 65

Original Poster
Rep: Reputation: 31
The impression I've got from a number of articles on the subject is that a modern drive will automatically re-map bad sectors internally when they are written to, if it cannot reliably write to the original sector. Hence using dd on specific blocks could re-map them as necessary, but would also destroy the file data (which would already have been unreadable anyway).
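As a concrete, hypothetical illustration of the write-to-remap idea: suppose badblocks reported block 123456 in 4096-byte units. The block number, block size, and device name are all placeholders; the actual dd command is left as a comment because it is destructive.

```shell
# Hypothetical bad block in 4096-byte filesystem units.
BLOCK=123456
BS=4096
echo "byte offset on the device: $((BLOCK * BS))"

# dd writes exactly one block at that offset. On a failing sector the
# drive's firmware can remap it to a spare; either way the data that
# was in the block is destroyed (device name is a placeholder):
#   dd if=/dev/zero of=/dev/hda3 bs=$BS seek=$BLOCK count=1
```

The key detail is that dd's seek is counted in units of bs, so the block size must match whatever units the bad-block list uses.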

This information came from the following link amongst other sources.
http://www.namesys.com/bad-block-han...html#harddrive

I have no evidence that the blocks have been internally re-mapped, other than the fact that badblocks no longer reports them as bad. The article also suggests that bad sectors can be caused by:
Quote:
Thermal fluctuations which corrupt magnetic data
Powerloss during a write operation
Both of these would represent data corruption but would not require any physical re-mapping. The second is quite likely the cause, as everything started to go wrong after the PC locked up while I was experimenting with ACPI.

If this were the case I would have expected the filesystem to have handled it in a relatively routine manner. I would have expected the journal to be replayed on mount with warnings of possible data loss.

Instead, I was unable to boot the machine and was even unable to fsck the partition (my Knoppix CD is my friend). Finding the blocks marked as bad, zeroing them with dd, and then running fsck again allowed the journal to replay and restore the filesystem structure. The system then booted cleanly. (In hindsight, perhaps I should have run badblocks with the "non-destructive read/write" option before resorting to dd.)
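For reference, the "non-destructive read/write" test mentioned here is badblocks -n: it reads each block, writes test patterns, verifies them, and then restores the original contents. It must never be run on a mounted filesystem, and the device name below is a placeholder:

```
badblocks -nsv /dev/hda3
```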

I do not fully understand why that process worked. As I understand it, journalled filesystems protect the filesystem structure (i.e. the file names, locations and file sizes) but not the data. If the errors on the drive were sufficiently serious to prevent it from booting, then zeroing the data in those areas should still have prevented booting, as important files would be corrupt. There must still be some files with a few extra zeros in the middle of them, but I don't understand how to interpret debugreiserfs, although I do know that only a couple of the blocks I blanked were actually in use.
 
Old 09-11-2007, 05:29 AM   #4
Vrajgh
Member
 
Registered: Aug 2005
Posts: 65

Original Poster
Rep: Reputation: 31
P.S.

Perhaps I should add that I didn't try too hard to recover data, because my home directory is on a different disc, so there wasn't any really important data at risk. I knew that if I couldn't stop these blocks from being listed as bad, I could give the output of badblocks to the filesystem and make it (rather than the drive) work around them. If I couldn't make the machine boot, it would not have been too much of a problem to reinstall, given how straightforward modern install processes are.
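The badblocks-to-filesystem route mentioned here would look roughly like the following. This is a sketch only: the device name is a placeholder, and the --badblocks option requires a sufficiently recent reiserfsprogs; check reiserfsck --help on your system before relying on it.

```
badblocks -sv -o bad-blocks.txt /dev/hda3
reiserfsck --badblocks bad-blocks.txt --fix-fixable /dev/hda3
```

With this approach the filesystem simply marks the listed blocks as unusable, rather than depending on the drive's firmware to remap them.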

I am quite surprised to have come out the other side with what appears to be a working machine. I would be amazed if there were no corrupted files left over from this but now need to work out how to find them.

Last edited by Vrajgh; 09-11-2007 at 05:34 AM. Reason: Typo (times2)
 
Old 09-11-2007, 09:26 AM   #5
JZL240I-U
Senior Member
 
Registered: Apr 2003
Location: Germany
Distribution: openSuSE 13.1 / 12.3_64-KDE, Ubuntu 12.04, Fedora 17, Mint 16, Chakra
Posts: 3,619

Rep: Reputation: Disabled
For the next time:

http://www.linuxquestions.org/questi...d.php?t=362506

Scroll to "How to rejuvenate a hard drive"
 
Old 09-11-2007, 07:46 PM   #6
Junior Hacker
Senior Member
 
Registered: Jan 2005
Location: North America
Distribution: Debian testing Mandriva Ubuntu
Posts: 2,687

Rep: Reputation: 59
Quote:
Originally Posted by Vrajgh
The impression I've got from a number of articles on the subject is that a modern drive will automatically internally re-map bad sectors on write if it is unable to write to them. Hence dd to specific blocks could re-map them as necessary but would also destroy file data (which would already have been unreadable anyway.)
This is only partially correct.
I looked at the article you referred to that suggested using dd. I disagree with one statement: the author says that sectors will only be re-mapped when being written to, but they will also be re-mapped when there are "signs of failure" on reads. This is a firmware feature in modern drives that makes sure the data contained in sectors showing signs of failure (needing too many passes to read, with thresholds determined by the firmware) is preserved before it is too late. The author of that article may be referring to early versions of this firmware feature, which were still premature and not as full-featured as today's.
An excellent utility for re-mapping an old drive's bad sectors is SpinRite.

What doesn't make sense is this: if you were to use dd with skip and count to write to a specific set of sectors that were fine, dd would simply overwrite the existing data; it would be overwritten, not re-mapped. Looking at it from this perspective, why would a drive re-map existing data if you're trying to overwrite it with dd? I am not as fluent in reiserfs as I am in NTFS where journaling file systems are concerned; because of my data recovery occupation, NTFS makes up 99% of my jobs.
Quote:
I am quite surprised to have come out the other side with what appears to be a working machine. I would be amazed if there were no corrupted files left over from this but now need to work out how to find them.
When bad sectors are re-mapped, the original sectors are isolated so they can't be used again, and this removes the errors. It is possible the drive managed to do the re-map before you ran dd; as mentioned, re-mapping bad sectors is not instantaneous, since it requires many passes to verify the data within. Alternatively, running dd to zero them labelled them as un-allocated sectors instead of allocated-with-errors. And the files that were there may belong to some rarely used application, so it may never be an issue.
Here's a little something to ponder.
A friend of mine gave me three hard drives and asked me to wipe them so the data could not be accessed. I used a zeroing utility that does the same thing as dd. Then I put one of those drives in my unit: it had no partition table and thus no partitions, and it could not be read by Windows or Debian. I made an image of the drive using dd, ran photorec against the image, and pulled out all his data plus whatever system files photorec could find.
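The image-then-carve step described here can be sketched as follows. The source device and file names are assumptions, and the imaging is demonstrated on a scratch file; photorec itself is interactive, so it is shown as a comment.

```shell
# Stand-in for the drive; on a real system src would be e.g. /dev/hdb.
src=wiped-drive.bin
head -c 3000 /dev/urandom > "$src"

# noerror: keep going past read errors; sync: pad short reads to bs so
# later offsets in the image stay aligned with the drive.
dd if="$src" of=drive.img bs=1024 conv=noerror,sync 2>/dev/null

# photorec then carves files out of the image by their signatures; it
# needs no partition table at all (interactive tool):
#   photorec drive.img
```

This is why zeroing alone is not a reliable wipe unless every sector is actually overwritten: anything dd can still read, a carver can still find.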
 
Old 09-12-2007, 03:43 AM   #7
JZL240I-U
Senior Member
 
Registered: Apr 2003
Location: Germany
Distribution: openSuSE 13.1 / 12.3_64-KDE, Ubuntu 12.04, Fedora 17, Mint 16, Chakra
Posts: 3,619

Rep: Reputation: Disabled
By the way, for monitoring drives and preventing data loss on drive failure, there is S.M.A.R.T.:

http://smartmontools.sourceforge.net/
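A minimal smartmontools sketch (the device name is an assumption; the Reallocated_Sector_Ct attribute in the output is the count of sectors the drive has already remapped):

```
smartctl -a /dev/hda            # print all SMART info and attributes
smartctl -t long /dev/hda       # start a long offline self-test
smartctl -l selftest /dev/hda   # show self-test results when done
```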
 
Old 09-13-2007, 08:25 AM   #8
Vrajgh
Member
 
Registered: Aug 2005
Posts: 65

Original Poster
Rep: Reputation: 31
Thanks for the help. I will keep reading about this subject and prepare for next time when it might be more serious. I'll also work out that backup routine that I should have sorted out years ago!
 
Old 09-14-2007, 03:04 AM   #9
JZL240I-U
Senior Member
 
Registered: Apr 2003
Location: Germany
Distribution: openSuSE 13.1 / 12.3_64-KDE, Ubuntu 12.04, Fedora 17, Mint 16, Chakra
Posts: 3,619

Rep: Reputation: Disabled
Quote:
Originally Posted by Vrajgh
...I'll also work out that backup routine that I should have sorted out years ago!
http://www.die.net/doc/linux/man/man1/rsnapshot.1.html

I use it with fcron, works like a charm.
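For what it's worth, a scheduler setup driving rsnapshot might look like the fragment below. The paths and timings are assumptions, and the hourly/daily intervals must also be declared in rsnapshot.conf; fcron accepts standard crontab syntax like this.

```
# m  h    dom mon dow  command
0    */4  *   *   *    /usr/bin/rsnapshot hourly
30   3    *   *   *    /usr/bin/rsnapshot daily
```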
 
  






