Alas, my old hard drive, after 8244 hours of faithful service, appears to be dying. The sectors in need of reallocation have increased rapidly in the last 200 hours of use (it's the spinning type).
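For reference, here's roughly how I'm pulling those numbers with smartctl from smartmontools (a sketch; /dev/sda stands in for the actual device):

Code:
# Print the drive's SMART attribute table.
# Reallocated_Sector_Ct (ID 5) and Current_Pending_Sector (ID 197)
# are the ones to watch on a failing disk; Power_On_Hours is ID 9.
smartctl -A /dev/sda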
There is no way to tell how long a disk will last. Hard disk manufacturers usually state the MTBF (Mean Time Between Failures), but as the name says, this is a statistical measurement and says nothing about a single disk. Yes, 8000 hours is not much, but things like that can happen with any disk from any vendor, so I wouldn't call it a rip-off; it just shows why a good backup scheme is a necessity.
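To put a number on that: an MTBF figure only translates into a fleet-wide annualized failure rate (AFR), never a lifetime for one drive. A back-of-envelope sketch, assuming a hypothetical 1,000,000-hour MTBF:

Code:
# AFR ~= powered-on hours per year / MTBF -- a population statistic,
# not a prediction for any individual drive. The MTBF here is assumed.
echo 'scale=4; 8760 / 1000000' | bc    # ~0.0088, roughly 0.9% per year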
"Life" depends on a number of factors: usage; crashes; file system employed; quality of manufacture; quality of parts; vibration when rotating; Size of disks; speed of disk (2.5 disk drives are designed to accept knocks better); even interface (ide drives seems to survive abuse better than sata ones).
atime on ext2/3/4 is capable of writing every 5 seconds. The 'relatime' option cuts that to 15 seconds, and 'noatime' cuts out these extra writes entirely, at the expense of file access information. There are loads of little eccentricities like these that make your question impossible to answer. I agree your drive is kaput. Do you move the box while the platter is still rotating? Don't.
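On the atime point: cutting those writes is just a mount option. A sketch of an fstab entry (the UUID and mount point are placeholders):

Code:
# /etc/fstab -- relatime is the usual default on modern distros;
# noatime drops access-time updates entirely, at the cost of atime data.
UUID=0000-0000  /home  ext4  defaults,noatime  0  2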
I always thought that atime was necessary for the correct running of your root fs. Some tools, for example fancy du tools, rely on it.
Heat might be an issue too, as might temperatures below spec.
I'd get the Hitachi/IBM drive diagnostics. They may offer better results.
At one time a low-level format seemed to help. It may still.
Plenty of other issues can contribute too, from nearby radar to large motors to power drops/hits to other EMF/RFI.
How do I get the diags? I used smartctl to get the partially shown results above.
When you say "a low level format" do you mean dd if=/dev/zero of=/dev/sda?
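Spelled out, I mean something like this sketch (destructive, so I'll write /dev/sdX as a placeholder):

Code:
# Overwrites every sector with zeros -- destroys all data on the disk.
# Triple-check the device name before running anything like this.
dd if=/dev/zero of=/dev/sdX bs=1M status=progress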
Heat, you might be right; I do live in FL, USA. EMF might also be a factor.
Quote:
I always thought that atime was necessary for the correct running of your root fs. Some tools, for example fancy du tools, rely on it.
GUI file managers have pretty much rendered atime a useless statistic in many cases. In order to show an appropriate icon for each file, they read the beginning of the file to determine its type. That causes an atime update for every file in the directory just because someone looked in it, perhaps only to see what huge files might be there. And if the file manager resets the atime to avoid the appearance of access, that causes a ctime update and makes backup programs think something about the file has changed.
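You can watch this happen yourself; a quick sketch (assumes a filesystem mounted without noatime, and 'somefile' is a placeholder name):

Code:
stat -c 'atime: %x   ctime: %z' somefile
cat somefile > /dev/null                    # a mere read...
stat -c 'atime: %x   ctime: %z' somefile    # ...bumps the atime
# With relatime, the bump only happens if the atime was older than
# the mtime/ctime, or more than a day old.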
Lifetime of HDDs can vary a lot, depending on many factors. I had an ATA Maxtor 40 GB drive working every day, which started to fail after about 3 years of service. And right now there is a WD Caviar Blue 250 GB in my miditower, working every day since it was bought in 2008 -- not a single hint of failing. All parameters are like a brand-new drive's. But back then I had a nasty PSU and didn't know how to handle drives properly. Now I do.
Quote:
I wanted to know if I got ripped off on this Hitachi.
I normally see HDs that are 8+ years old still functioning, and this one is about half that.
We used to have about 90 x 2.1 GB or 4.3 GB Seagate SCSI drives spinning in an old Sun SparcServer 1000e (3 x SSA100 arrays); they'd been spinning for ten years at least.
Latterly, the server sounded like a buzz saw. While the disks were still spinning, the bearings had all dried out. You had to spin down a minimum of ten disks to replace one; it was a nightmare! We'd have to "bean shake" the other nine to break the stiction and get them spinning again, if we were lucky. The server lasted far too long due to the need for some legacy app or other.
The message being, if they're left spinning, they'll last longer. Starting and stopping them shortens their life.
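If you want to see or control that on a live system, a sketch (/dev/sda is again a placeholder):

Code:
# Count the drive's spin-up/spin-down cycles (SMART attribute 4).
smartctl -A /dev/sda | grep -i start_stop

# Disable the standby timer so the drive never spins down on its own.
hdparm -S 0 /dev/sda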