That increase in the pending sector count doesn't necessarily mean that anything changed. A bad sector won't be discovered and marked "pending" until something tries to read it.
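You can watch that attribute directly with smartctl, something like this (adjust the device name to your own):
Code:
# Current_Pending_Sector is SMART attribute 197
smartctl -A /dev/sda | grep -i pending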
I have to wonder, though, whether something might have turned off the drive's automatic defect management. That would explain the write error on the bad sector. I thought that modern drives no longer had the ability to turn that off, but perhaps yours is one of the exceptions. See the paragraph for the "-D" option in the hdparm manpage.
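If it has been turned off, re-enabling it should look roughly like this (assuming the drive is /dev/sda and actually honors the feature):
Code:
# re-enable on-drive defect management (see the -D option in hdparm(8))
hdparm -D1 /dev/sda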
Code:
Num  Test_Description    Status                   Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Extended offline    Completed: read failure        90%             2879  9548728
As expected, the test found an error and stopped. This was less than 1% of the way through the 976762584 sectors of the disk. Pointless.
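(For anyone following along, that log comes from something along these lines; the device name is just an example:)
Code:
# start an extended offline self-test, then read the log when it finishes
smartctl -t long /dev/sda
smartctl -l selftest /dev/sda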
If you really want to find out how many bad sectors there are, run
Code:
dd if=/dev/sda of=/dev/null bs=4k conv=noerror
and then look at the number of pending sectors. I do not recommend doing this before recovering whatever data you can. Beating on a dying disk just to see how bad it is is not productive, and can make the problems worse. Using ddrescue to make an image with the readable sectors would be a better alternative.
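A rough sketch of the ddrescue approach (assuming the failing drive is /dev/sda and /mnt/rescue sits on a healthy disk with enough free space):
Code:
# first pass: grab everything readable, don't linger on bad areas
ddrescue -n /dev/sda /mnt/rescue/sda.img /mnt/rescue/sda.map
# second pass: retry the remaining bad areas a few times
ddrescue -r3 /dev/sda /mnt/rescue/sda.img /mnt/rescue/sda.map
The mapfile records what has already been recovered, so the second run only revisits the areas that failed the first time.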
Just a side question.
Would it be wise to go with 2 SSDs in a RAID-1 configuration?
That's probably something that I could afford from the monetary point of view.
Please note that it's my favorite toy machine.
I want it to be the best possible, within sensible budget.
Loss of data wouldn't cause major harm, and there are backups too.
It just feels better with the uptime ticking up continuously :-)
Two copies, yes. But if one gets corrupted, the other gets corrupted too. Only if one drive dies suddenly will the other one have the data intact.
OK, that's what I was afraid of when I read about RAID-1.
So I would need something with error correction.
I'm gonna have a look at the possibilities, but most probably I'm gonna give up on the idea.
RAID-1 will protect against data loss due to a drive failure. That is one cause of data loss. There is no form of RAID that protects against the other causes of data loss, such as accidental deletion, overwriting, OS failures that corrupt the filesystem, etc. RAID is not a substitute for backups. And of course RAID adds its own complexity and failure modes to the mix. Its primary function is to allow a system to keep running seamlessly while a failed drive is replaced. If that matters more to you than the hours of downtime while a failed drive is replaced and restored from backup, then you need RAID. Otherwise, not so much, aside from the bragging rights about your continuous uptime (assuming that your drives are hot-swappable -- which they probably are not).
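For reference, Linux software RAID-1 with mdadm looks roughly like this (device names are just examples):
Code:
# build a RAID-1 array from two partitions, then watch the sync progress
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
cat /proc/mdstat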