Old 11-24-2020, 11:58 PM   #16
obobskivich
Member
 
Registered: Jun 2020
Posts: 609

Rep: Reputation: Disabled

Quote:
Originally Posted by jefro View Post
I've run many, many enterprise-level drives. Some lasted decades before we finally updated them. You'd be amazed how many commercial-type drives bought decades ago are still in existence.
To add to this: one of the drives in my home file server has a build date of somewhere in 1999 - still works fine. A bit small these days, but what are you gonna do? OTOH I've junked plenty of 3-4 year old drives over the years (enterprise, consumer, etc.) - I'm sure that across 100,000 units or some similarly huge sample size the 'enterprise drive' is better than the 'consumer drive' (and my 21-year-old WD is probably well into outlier territory), but at an individual level it's hard to make specific predictions or account for outliers. I'm not giving a thumbs up/down on enterprise stuff here as any sort of generality - just that there is no 'fool-proof, bulletproof, guaranteed to never fail' part, and for every 'I bought XYZ and it died in 3 months, XYZ is a bad brand!' you can find at least one 'I bought XYZ and it survived 20 years, 3 floods, an earthquake, etc., XYZ is the best!'

One thing that hasn't come up yet and I just thought of it myself: temperature can have an impact on drive life - specifically, try to keep hard drives from running crazy hot (I know, I know, modern desktops are usually running within a few °C of boiling water anytime you so much as look at them, but hard drives don't like that sort of thing). The 'rule of thumb' I've always seen is roughly 50-60°C as the top end (and ideally you want lower than that), with enterprise-level stuff usually tolerating heat better than consumer-level stuff, including a lot of them having heatsinks, or something approximating a heatsink, built or cast into their bodies. Seagate or WD or whoever built the drive should have a cut sheet or datasheet that spells out the expected operating range.
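If you want to keep an eye on that, smartmontools can usually read the drive's own temperature sensor from Linux - a quick sketch, assuming the drive reports it via SMART and that /dev/sda is the disk in question:

Code:
# dump the SMART attributes and pick out the temperature line
# (attribute names vary by vendor; 194 Temperature_Celsius is common)
smartctl -A /dev/sda | grep -i temp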
 
Old 11-25-2020, 08:59 AM   #17
MirceaKitsune
Member
 
Registered: May 2009
Distribution: Manjaro
Posts: 156

Original Poster
Rep: Reputation: 1
Here's a bonus (horror) story from yesterday in the meantime: just as I ordered the new HDD, I noticed my PC became very slow and would barely work or open directories any more. Upon restarting it spent a long time in POST, then refused to boot; fsck was trying to fix the hard drive I'm replacing, with complaints about bad sectors everywhere. Initially I thought a bug in a distro update had caused a broken process to slow it down... as this happened I jumped to thinking my old drive had suddenly failed right before I was able to replace it. Only then did I notice the HDD was making clicking noises every few seconds, which at first I mistook for it being busy and working. Before this began I had also noticed that some images I saved from Firefox were corrupted, but once more assumed it was a bug in FF... the computer also restarted on its own before I saw that, which I found very odd but didn't pay much mind to at first.

Thankfully, my old HDD turned out to be perfectly fine. The cause of all this... was the measly SATA3 cable breaking, at least as far as I could deduce (I didn't plug another HDD into it to check and risk breaking anything else). Once I replaced it everything was back to normal... in fact the system seems a bit faster now, meaning something had been going wrong with it for ages without ever causing obvious issues! Glad I noticed this with the old drive and didn't plug the arriving one into the bad cable. This is partly my fault: I was using a cable with a bent design whose neck was pretty strangled, and that's most likely what caused it... it had been there for a long time and I simply never had a reason to pay much mind to it before.

Moral of the story: don't twist or pull on your cables, or leave them in positions where they could get damaged - including (but not limited to) SATA ones. Also, Linux handled the problem very nicely: once I plugged the drive back in on a working cable, fsck did a timely check of the partition during the next startup (no issues found, luckily) and everything recovered. The drive is also pretty cool for surviving what just happened, given the bad cable was making it fail so badly it was clicking non-stop.
 
Old 11-26-2020, 10:48 PM   #18
MirceaKitsune
Member
 
Registered: May 2009
Distribution: Manjaro
Posts: 156

Original Poster
Rep: Reputation: 1
While I wait for the new drive to arrive, I'd like to ask one last important question: Once I format it (ext4), what command should I use to do an in-depth scan and check that there are no bad sectors and that the drive arrived in working order? I'm assuming fsck will do the trick, but what are the parameters for a full scan... or is there another Linux tool / command that's better suited? Hopefully it won't take more than a few hours to do a thorough test, though for 4 TB I can imagine it might be a while.
 
Old 11-27-2020, 07:36 AM   #19
teckk
LQ Guru
 
Registered: Oct 2004
Distribution: Arch
Posts: 5,359
Blog Entries: 7

Rep: Reputation: 1935
Quote:
"head parking issue"
That's those "green" caviar drives. I don't even see them for sale on *Egg anymore. They were cr*p.

The blue drives don't do that, unless you know something I don't.

Quote:
You'd be amazed how many commercial type drives
Do you think those black drives last a "lot" longer than the blue ones? They are a good deal more expensive. I've always bought the cheaper ones and replaced HDs after 10 years just because. Do you have experience with the black drives lasting so much longer than the blue ones that it's worth double the price?
 
Old 11-27-2020, 11:53 AM   #20
kilgoretrout
Senior Member
 
Registered: Oct 2003
Posts: 3,015

Rep: Reputation: 398
Quote:
While I wait for the new drive to arrive, I'd like to ask one last important question: Once I format it (ext4), what command should I use to do an in-depth scan and check that there are no bad sectors and that the drive arrived in working order? I'm assuming fsck will do the trick, but what are the parameters for a full scan... or is there another Linux tool / command that's better suited? Hopefully it won't take more than a few hours to do a thorough test, though for 4 TB I can imagine it might be a while.
Seagate has a set of diagnostic tools called the SeaChest Utilities that can test for bad blocks, and the download contains a Linux version that runs from the command line:

https://www.seagate.com/support/soft...est/#downloads

and:

http://support.seagate.com/seachest/...s.html#_basics

There's no need to partition and format the drive before running these tests, and they come in long and short varieties. I can only assume the long form will take quite a while to check every block on a 4TB drive.
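If you'd rather not grab the vendor download, smartmontools can also kick off the drive's own built-in self-tests from Linux - a generic sketch, assuming the drive supports SMART self-tests and replacing /dev/sdX with the actual device:

Code:
# start the extended (long) self-test; it runs inside the drive firmware
smartctl -t long /dev/sdX
# later, check progress and the self-test log
smartctl -a /dev/sdX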
 
Old 11-27-2020, 10:11 PM   #21
obobskivich
Member
 
Registered: Jun 2020
Posts: 609

Rep: Reputation: Disabled
Quote:
Originally Posted by kilgoretrout View Post
There's no need to partition and format the drive before running these tests, and they come in long and short varieties. I can only assume the long form will take quite a while to check every block on a 4TB drive.
+1 to this - you can actually sidestep booting into your main OS entirely and use something like UltimateBootCD, which contains an assortment of disk utilities (among other things), and check it from a bootable environment. My guess is 'quite a while' will be >24h in this case (with a 4TB drive). Modern drives should detect and remap bad sectors more or less automatically as well, so a lot of the 'olden days' pre-configuration stuff shouldn't really be needed with a brand new drive.
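If you do want to exercise every sector from within Linux anyway, badblocks in destructive write mode is the usual approach - a sketch only, and note that -w erases everything on the disk, so it's strictly for a drive with nothing on it yet:

Code:
# destructive write/read test of every sector (wipes the disk!)
# -w write-mode test, -s show progress, -v verbose output
badblocks -wsv /dev/sdX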
 
Old 11-27-2020, 10:28 PM   #22
jefro
Moderator
 
Registered: Mar 2008
Posts: 22,270

Rep: Reputation: 3656
When I say commercial I mean stuff that normal people simply won't buy. We had some SCSI drives that ran a small computer for maybe 30 years; I only know of one being replaced. Some DEC stuff ran about 30 years. We had an IBM system for about that long too. I guess we buy stuff every 30 years.

This is what I mean for home use if you want some enterprise level stuff. Other makers have similar models.
https://www.westerndigital.com/produ...es/wd-gold-hdd

I'd always use the OEM diag suite. They may have the inside track on how best to test a drive. They tend to put drives in the fastest modes for tests as well. Generic tests can occasionally take forever.

Last edited by jefro; 11-27-2020 at 10:30 PM.
 
Old 11-28-2020, 12:55 AM   #23
obobskivich
Member
 
Registered: Jun 2020
Posts: 609

Rep: Reputation: Disabled
Quote:
Originally Posted by jefro View Post
I'd always use the OEM diag suite. They may have the inside track on how best to test a drive. They tend to put drives in the fastest modes for tests as well. Generic tests can occasionally take forever.
The other value-add with the OEM software is that it may have proper field definitions and support for all of the SMART parameters the drive supports.
 
1 members found this post helpful.
Old 11-28-2020, 09:10 AM   #24
MirceaKitsune
Member
 
Registered: May 2009
Distribution: Manjaro
Posts: 156

Original Poster
Rep: Reputation: 1
Quote:
Originally Posted by kilgoretrout View Post
Seagate has a set of diagnostic tools called the SeaChest Utilities that can test for bad blocks, and the download contains a Linux version that runs from the command line:

https://www.seagate.com/support/soft...est/#downloads

and:

http://support.seagate.com/seachest/...s.html#_basics

There's no need to partition and format the drive before running these tests, and they come in long and short varieties. I can only assume the long form will take quite a while to check every block on a 4TB drive.
That's excellent, just what I needed! I will run this tool as well if it's Linux native... going to install the drive on my mother's computer to format and test it for a day.
 
Old 11-29-2020, 01:29 AM   #25
rnturn
Senior Member
 
Registered: Jan 2003
Location: Illinois (SW Chicago 'burbs)
Distribution: openSUSE, Raspbian, Slackware. Previous: MacOS, Red Hat, Coherent, Consensys SVR4.2, Tru64, Solaris
Posts: 2,849

Rep: Reputation: 553
Quote:
Originally Posted by jefro View Post
I've run many, many enterprise-level drives. Some lasted decades before we finally updated them. You'd be amazed how many commercial-type drives bought decades ago are still in existence.
I was using old, too-small-to-be-useful-anymore DEC StorageWorks drives for years. There's still an old Sun cabinet running DEC 15K rpm SCSI drives humming along around here. I'll be getting rid of them when the project they're tied up with is over - not because they're failing but because of the noise (and heat).

The "bargain" WD drives (Blue, Green, etc.) I've bought at the local big box store were a disappointment. One batch had two fail in the first weekend. And these drives cannot be used in a RAID configuration so they're off my list. I went with the WD Red NAS drives but the recent fiasco with the new recording method that WD (sneakily) introduced into those drives was the final nail in that coffin---it should shake everyones' trust in that brand. The IronWolf NAS drives have been working quite well. They'll be my goto drives... for now.
 
Old 11-29-2020, 01:42 AM   #26
rnturn
Senior Member
 
Registered: Jan 2003
Location: Illinois (SW Chicago 'burbs)
Distribution: openSUSE, Raspbian, Slackware. Previous: MacOS, Red Hat, Coherent, Consensys SVR4.2, Tru64, Solaris
Posts: 2,849

Rep: Reputation: 553
Quote:
Originally Posted by jefro View Post
Some DEC stuff ran about 30 years.
I had some RZ28s (which, if memory serves, were re-badged Seagate Hawks; or were those the RZ29s?) that had been manufactured in the early/mid-'90s and ran pretty much continuously for over 15 years. Some of the StorageWorks arrays I encountered at work contained drives that had been in continuous service since the systems were installed. One cluster had an uptime of over two years before a poorly-planned change to the data center power brought it down.
 
Old 12-01-2020, 10:02 PM   #27
MirceaKitsune
Member
 
Registered: May 2009
Distribution: Manjaro
Posts: 156

Original Poster
Rep: Reputation: 1
Everything went well in the end: I installed the drive in another computer and let it run a series of write-read badblocks tests... it took nearly 3 days to go through all the patterns, but everything turned out fine. I then plugged it into my main computer and used rsync (with the right parameters to keep all information intact) to copy the data over: to my surprise it took less than 8 hours to move some 1.5 TB between HDDs. Running on the new drive now and everything's working perfectly.
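(For reference, an archive-style rsync that preserves ownership, permissions, timestamps, hard links and extended attributes looks roughly like this - a sketch with illustrative mount points; the exact invocation may differ:)

Code:
# -a archive mode (permissions, owners, timestamps, symlinks)
# -H hard links, -A ACLs, -X extended attributes
rsync -aHAX --info=progress2 /mnt/old-drive/ /mnt/new-drive/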

The only slight annoyance is that just 3.6 TB out of the 4.0 announced by the drive are usable. Furthermore, the ext4 partition only offered 3.4 TB of free space once created... curious where 200 GB disappeared to while the partition was empty - some kind of hidden cache, I take it? But I'm well within this storage space so it's no real problem.
 
Old 12-01-2020, 10:16 PM   #28
kilgoretrout
Senior Member
 
Registered: Oct 2003
Posts: 3,015

Rep: Reputation: 398
Quote:
curious where 200 GB disappeared
By default, ext* filesystems reserve 5% of the drive for root processes and possible rescue actions. See my posts here where this issue is discussed at length:

https://www.linuxquestions.org/quest...ce-4175685665/
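(As an aside, the drop from 4.0 TB to 3.6 TB is just decimal vs. binary units - 4 x 10^12 bytes is roughly 3.64 TiB.) If you want to check or shrink the reserved percentage on a pure data partition, tune2fs handles it - a sketch, adjust the device name to match yours:

Code:
# show the current reserved block count for the filesystem
tune2fs -l /dev/sdb1 | grep -i "reserved block"
# lower the reserve to 1% (fine for a data-only partition, not for /)
tune2fs -m 1 /dev/sdb1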
 
Old 12-01-2020, 10:24 PM   #29
MirceaKitsune
Member
 
Registered: May 2009
Distribution: Manjaro
Posts: 156

Original Poster
Rep: Reputation: 1
Quote:
Originally Posted by kilgoretrout View Post
Quote:
curious where 200 GB disappeared
By default, ext* filesystems reserve 5% of the drive for root processes and possible rescue actions. See my posts here where this issue is discussed at length:

https://www.linuxquestions.org/quest...ce-4175685665/
Thanks! That sounds like a good choice now that I understand it better... 5% is reasonable anyway. I didn't know ext4 used reserved space to further improve performance; I like the way the FS is designed overall.
 
Old 12-02-2020, 01:01 PM   #30
v00d00101
Member
 
Registered: Jun 2003
Location: UK
Distribution: Devuan Beowulf
Posts: 514
Blog Entries: 1

Rep: Reputation: 37
Quote:
Originally Posted by teckk View Post
That's those "green" caviar drives. I don't even see them for sale on *Egg anymore. They were cr*p.

The blue drives don't do that, unless you know something I don't.


Do you think those black drives last a "lot" longer than the blue ones? They are a good deal more expensive. I've always bought the cheaper ones and replaced HDs after 10 years just because. Do you have experience with the black drives lasting so much longer than the blue ones that it's worth double the price?
They are often more reliable and the seek times are a bit faster. I have a couple of 320GB Blacks bought somewhere around 2006 installed in a server that no longer runs 24/7, but they still work without issue. The server itself ran for 10 years providing game servers and some early NAS functionality. I've had many other WD Black drives and they've lasted a long, long time.

Blue drives: I've had 2 DOA, 3 fail within a week, and a handful that developed stupid numbers of errors within the first 3 months. I had a 500GB Green that developed a clicking head but still works OK. I've got a couple of 1TB Blues that also have clicking heads, and I think the cache has died or become corrupted on them. And a 2TB Blue that somehow ended up with misaligned sectors.

I've had too many Seagates that were DOA, died very quickly, or failed completely after a couple of months. Maxtor and IBM were other POS brands that broke a lot and had zero reliability. Toshiba and PNY SSDs are the same, with exceptionally high failure rates. I also had a Toshiba laptop HDD fail in my dad's laptop some years back.

I like WD drives for HDD and Samsung/HyperX for SSD/M.2. Less time spent debugging problems.

But as ever, it's swings and roundabouts. The OP must have known he would get 1000 opposing opinions when starting such a loaded thread.
 
  

