I installed some hard drives in a new computer (I hadn't used these drives in a while). I created a new ext3 file system on them and tested them with smartctl. The smartctl self-test fails with "Completed: read failure" at 90% remaining.
I found some information about bad blocks and a way to tell the system not to use certain portions of the drive, but I don't know where the problem is, because the "LBA_of_first_error" field shown after the test runs is blank. After a bit of research, every case I have found shows some information there. Is it worth looking into this further, or does it mean the drive is dying and is better off in the bin?
The overall-health self-assessment test result is PASSED, so the drive seems functional for now. It's a Samsung 250GB IDE HDD.
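For reference, I ran something like the following (with /dev/hda standing in for the actual device):
smartctl -t long /dev/hda      # start the extended offline self-test
smartctl -l selftest /dev/hda  # check progress; LBA_of_first_error appears in this log
smartctl -H /dev/hda           # overall-health self-assessment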
Bad blocks should be scanned for and reallocated automatically by the drive's integrated drive electronics (IDE) and the accompanying firmware. The best test to find out whether a disk is dying is to run "dd" against it and watch your syslog (/var/log/messages on most Linux distributions).
In one terminal window, run the following command:
time dd if=/dev/hda1 of=/dev/null
(NOTE: Don't get the "if=" and "of=" designations swapped around, or you will wipe out the contents of the drive.)
In another, run something similar to:
tail -f /var/log/messages
If you start to see "Drive Seek" errors, or something that looks like this:
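(Illustrative only; the exact wording varies by kernel version and driver, but classic IDE-era errors look like this:)
hda: dma_intr: status=0x51 { DriveReady SeekComplete Error }
hda: dma_intr: error=0x40 { UncorrectableError }, LBAsect=12345, sector=12345
end_request: I/O error, dev hda, sector 12345
...then the drive is throwing media errors and should not be trusted.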
Well, if the test failed, then there is a problem with the disk. Can you post the attributes and test results as they appear in the output of 'smartctl -a /dev/sda'?
I would get the HDD manufacturer's diagnostics and run those. 'smartctl' is great, but I would still run the manufacturer's own tool.
'UBCD (Ultimate Boot CD)' lets users run floppy-based diagnostic tools from most CD-ROM drives on Intel-compatible machines, no operating system required. The bootable CD includes many diagnostic utilities.
The above link and others are available from 'Slackware-Links'. More than just Slackware® links!
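If it helps, the downloaded ISO can be written to a blank CD from Linux with something like the following (the filename and writer device are assumptions; use whatever the UBCD download page currently offers and your actual burner path):
wodim -v dev=/dev/cdrw ubcd.iso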
'onebuck' makes a good point about the manufacturer's diagnostics. However, if you ran the full "dd" test I mentioned previously, let me save you some trouble.
This: ..."media errors" after a minute or two of running.
Means: Toss the drive. Move along, nothing to see here.
If the disk isn't stone-dead now, it will be in short order. Besides, those "media errors" typically cause I/O-wait hangs, which usually run 5 to 20 seconds each. I don't know about you, but I don't want my server/project box/workstation screeching to a halt every time the "special" hard disk is accessed.
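You can also gauge how far gone the drive already is from its SMART attributes (a quick check, assuming smartmontools is installed and /dev/hda stands in for your device):
smartctl -A /dev/hda | egrep -i 'Reallocated_Sector|Current_Pending|Offline_Uncorrectable'
Non-zero and climbing raw values on those attributes mean the drive is already remapping sectors, or failing to.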
(NOTE: My advice does not apply if you're trying to scrape usable data off of that drive. _IF_ that's the case, start the file-copy now while you still can!)
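A sketch of such a rescue copy with GNU ddrescue (my suggestion, not something already mentioned in this thread; the package is often named 'gddrescue', and /mnt/backup stands in for any destination with enough free space):
ddrescue -n /dev/hda /mnt/backup/hda.img /mnt/backup/hda.map
ddrescue -r3 /dev/hda /mnt/backup/hda.img /mnt/backup/hda.map
The first pass grabs everything readable while skipping trouble spots; the second retries the bad areas up to three times, using the map file to pick up where it left off.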
That disk might have been kaput from the start, but there's something you have to remember about hard drive manufacturers:
Most of the aluminum chassis are cut in Korea, the integrated circuits are burned in Malaysia, and everything's assembled in Taiwan.
After dealing with literally *thousands* of hard drive failures in a production data center, I can tell you two things:
1) If, according to the manufacture date, a drive is six months old or newer, watch it for a year.
2) If, according to the manufacture date, a drive is 18 months old or older (and has run continuously without issue), then it'll last you five more years.
Corollary: if the drive is five years old, watch it for a year.