Linux - Hardware
This forum is for Hardware issues.
I'm setting up a MySQL replication system. I usually use up what's left in stock before buying new hardware, and I run badblocks on every second-hand hard drive to make sure it is in good condition before I install an operating system.
I picked up a 200GB SATA hard disk from the stock room and tested it with badblocks. I left it running in the office and checked it the next day; the report said it had found 30 bad blocks.
I don't know if I should still use it. Would you advise using it?
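For reference, a typical badblocks run on a spare drive looks like the sketch below. The device name /dev/sdX is a placeholder, not a real device on my system; double-check which device your test drive actually is before running anything, especially the write-mode test.

```shell
# Read-only scan: safe to run on a disk that still holds data, but slow.
# -s shows progress, -v is verbose; bad block numbers go to stdout.
sudo badblocks -sv /dev/sdX

# Destructive write-mode test (-w): overwrites EVERY block with test
# patterns, so only use it on a drive whose contents you don't need.
sudo badblocks -wsv /dev/sdX

# Save the bad-block list so mke2fs/e2fsck can mark those blocks later:
sudo badblocks -sv -o badblocks.txt /dev/sdX
```

With the saved list you can later do `mke2fs -l badblocks.txt /dev/sdX1` so the filesystem never allocates those blocks.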
Only use it if you don't care about the data you are putting on it...
Once a hard drive begins to fail, the problem typically gets worse quickly. Causes could be things like corrosion on the surface, or a head crash that leaves grit floating around the platters; either of these will spread.
My only successful attempts at keeping a drive with bad blocks going for a substantial time have involved partitioning around the bad area, leaving a substantial chunk of the drive unused, but this would be an unprofessional way to run a business system.
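As a rough sketch of that partitioning workaround (device name and offsets are made-up placeholders, not values from any real badblocks report): if the bad blocks cluster in one region, you can lay out partitions that leave a generous gap around it.

```shell
# Suppose the bad blocks clustered somewhere around the 50GiB mark.
# Create two partitions that skip a wide margin around that region:
sudo parted -s /dev/sdX mklabel gpt
sudo parted -s /dev/sdX mkpart data1 1MiB 45GiB    # before the bad area
sudo parted -s /dev/sdX mkpart data2 55GiB 100%    # after the bad area
```

Leaving a wide margin matters because, as noted above, damage tends to spread outward from the bad region.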
Last edited by neonsignal; 02-04-2010 at 05:58 PM.
I pretty much agree with neonsignal; but it should be noted that pretty much every magnetic-media hard disk on the planet has *some* bad blocks at any given time, even a brand-new drive. It's the nature of the technology. However, the HDD's firmware has code to account for this automatically: it keeps track of the bad areas and remaps them to spare sectors so the rest of the system never sees them. The drive also accepts SMART commands you can use to query and update its bad-block information.
That said, if the drive is showing rapid, sudden, or massive changes in bad block counts, and/or is an old drive, then totally: do not trust it for anything critical.
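To put numbers on "rapid or massive changes": the SMART attributes worth watching are the reallocated and pending sector counts from `smartctl -A`. The sketch below assumes the standard smartmontools attribute-table layout (attribute name in the second column, raw value in the last column); /dev/sdX is a placeholder.

```shell
# Kick off a short self-test, then read the attribute table.
sudo smartctl -t short /dev/sdX

# Pull out the three attributes that track surface damage:
sudo smartctl -A /dev/sdX | awk '
    $2 == "Reallocated_Sector_Ct" ||
    $2 == "Current_Pending_Sector" ||
    $2 == "Offline_Uncorrectable" { print $2, $NF }'
```

If the raw values climb between runs, the drive is actively degrading and should not be trusted with anything you care about.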
At the moment I'm running a smartctl test. This drive hasn't been used for 6 months, so I'm not sure about its status; that's why I'm running some tests before I use it. The tests I'm running are smartctl, e2fsck, and badblocks.
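For what it's worth, e2fsck can run badblocks itself and record anything it finds in the filesystem's bad-block inode, so those blocks are never allocated again. A sketch (the partition name is a placeholder, and the filesystem must be unmounted first):

```shell
# -f forces a full check even if the filesystem looks clean;
# -c runs a read-only badblocks pass and adds any hits to the
#    bad block list (use -cc for a non-destructive read-write test).
sudo e2fsck -f -c /dev/sdX1
```

This combines two of the three tests into one pass, though it only covers blocks inside that one filesystem.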