LinuxQuestions.org > Linux - Hardware
Two dead hard drives at the same time, different brands, different age? (https://www.linuxquestions.org/questions/linux-hardware-18/two-dead-hard-drives-at-the-same-time-different-brands-different-age-4175456563/)

lpallard 04-02-2013 01:35 PM

Two dead hard drives at the same time, different brands, different age?
 
So a friend of mine has brought me two hard drives he couldn't transfer from his old machine to the new machine he just built. Upon booting, the OS simply wouldn't load them. Even his BIOS wouldn't see them. Since he is running Windows 8 on a brand new bleeding-edge mobo, we thought the drives were somehow incompatible with the mixture of Windows 8 & the new hardware, so he lent me the drives to test on my own machine.

Docking them in a USB SATA dock, I can see them (dmesg) but I can't do anything with them.

Upon connecting the first drive (WD10EARS 1.0TB):

Code:

[181567.830171] scsi8 : usb-storage 1-2:1.0
[181580.026690] scsi 8:0:0:0: Direct-Access    WDC WD10 EARS-22Y5B1          PQ: 0 ANSI: 2 CCS
[181580.026878] sd 8:0:0:0: Attached scsi generic sg4 type 0
[181580.027546] sd 8:0:0:0: [sde] 1953525168 512-byte logical blocks: (1.00 TB/931 GiB)
[181580.029292] sd 8:0:0:0: [sde] Write Protect is off
[181580.029295] sd 8:0:0:0: [sde] Mode Sense: 34 00 00 00
[181580.029297] sd 8:0:0:0: [sde] Assuming drive cache: write through
[181580.030923] sd 8:0:0:0: [sde] Assuming drive cache: write through
[181581.853701]  sde: sde1
[181581.855185] sd 8:0:0:0: [sde] Assuming drive cache: write through
[181581.855188] sd 8:0:0:0: [sde] Attached SCSI disk
[181586.679885] sd 8:0:0:0: [sde] Unhandled sense code
[181586.679889] sd 8:0:0:0: [sde]  Result: hostbyte=0x00 driverbyte=0x08
[181586.679891] sd 8:0:0:0: [sde]  Sense Key : 0x3 [current]
[181586.679894] sd 8:0:0:0: [sde]  ASC=0x11 ASCQ=0x0
[181586.679895] sd 8:0:0:0: [sde] CDB: cdb[0]=0x28: 28 00 74 70 6d 00 00 00 08 00
[181586.679899] end_request: I/O error, dev sde, sector 1953524992
...

The second drive (Seagate ST3320310CS 320GB):
Code:

[182416.397139] scsi10 : usb-storage 1-2:1.0
[182417.465153] scsi 10:0:0:0: Direct-Access    ST332031 0CS                  PQ: 0 ANSI: 2 CCS
[182417.465345] sd 10:0:0:0: Attached scsi generic sg4 type 0
[182417.466116] sd 10:0:0:0: [sde] 625142448 512-byte logical blocks: (320 GB/298 GiB)
[182417.466948] sd 10:0:0:0: [sde] Write Protect is off
[182417.466951] sd 10:0:0:0: [sde] Mode Sense: 34 00 00 00
[182417.466953] sd 10:0:0:0: [sde] Assuming drive cache: write through
[182417.468755] sd 10:0:0:0: [sde] Assuming drive cache: write through
[182417.594245] sd 10:0:0:0: [sde] Unhandled sense code
[182417.594248] sd 10:0:0:0: [sde]  Result: hostbyte=0x00 driverbyte=0x08
[182417.594251] sd 10:0:0:0: [sde]  Sense Key : 0x3 [current]
[182417.594253] sd 10:0:0:0: [sde]  ASC=0x11 ASCQ=0x0
[182417.594255] sd 10:0:0:0: [sde] CDB: cdb[0]=0x28: 28 00 00 00 00 00 00 00 08 00
[182417.594259] end_request: I/O error, dev sde, sector 0
[182417.594261] quiet_error: 9 callbacks suppressed
[182417.594263] Buffer I/O error on device sde, logical block 0
[182417.696859] sd 10:0:0:0: [sde] Unhandled sense code
[182417.696861] sd 10:0:0:0: [sde]  Result: hostbyte=0x00 driverbyte=0x08
[182417.696863] sd 10:0:0:0: [sde]  Sense Key : 0x3 [current]
[182417.696865] sd 10:0:0:0: [sde]  ASC=0x11 ASCQ=0x0
[182417.696867] sd 10:0:0:0: [sde] CDB: cdb[0]=0x28: 28 00 00 00 00 00 00 00 08 00

Did both drives die at the same time?? To me it looks like they are indeed dead. I know the green drives are extremely unreliable, as I have RMA'd quite a few of them in the last 5 years.. But the Seagate surprises me. I have been using the same drive in my server for 2 years on a 24/7 basis with no problems so far..

camorri 04-02-2013 02:07 PM

Quote:

Did both drives died at the same time??
It's possible; a lightning strike might just do it. However, that is probably not the case.

New system boards use UEFI, not a traditional BIOS, and require GPT partition tables. You need new partitioning tools to create GPT partition tables. The drives probably have MBRs. ( Just guessing ). So, that might explain why they don't run on the new system board.

Do you know what OS was using the drives? That would give us a clue; windoze usually uses NTFS, and you need the driver loaded on a Linux system to see anything on the drives. It's called ntfs-3g. What distro are you using to 'see' the drives?

This link -->http://www.rodsbooks.com/gdisk/ might help explain GPT and the tools used to create partitions.
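If the new board's firmware ever does detect them again, a quick way to check which partition table a drive actually carries (just a sketch; /dev/sde is whatever device node the drive happens to get) is:

Code:

# parted reports "Partition Table: msdos" for MBR or "gpt" for GPT
parted -s /dev/sde print
# gdisk's scan summary also says whether it found an MBR, a GPT, or neither
gdisk -l /dev/sde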

wroom 04-02-2013 02:16 PM

You can examine the drives with smartctl under Linux. That should give a hint on what's going on.
That is, IF you can get the drive detected on a native SATA port. You can't use smartctl through a USB dock.
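
As a rough sketch (the /dev/sdz below is just a placeholder for whatever node the drive gets):

Code:

# quick overall health verdict
smartctl -H /dev/sdz
# full attribute table, error log and self-test log
smartctl -a /dev/sdz
# some newer USB-SATA bridges do pass SMART through with SAT pass-through,
# but many docks don't:
smartctl -d sat -a /dev/sdz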

Causes of this could include lack of power to the drives. Could it possibly be your friend's power supply? The drives would then need low-level formatting / surface testing with smartctl or a vendor diagnostic application to behave well again.

Since both drives died at the same time, the cause should be in the machine where they were both installed.


I have seen some drives that were damaged by heat and/or low supply voltage (and thus giving sense errors) be fully repaired simply by initializing the whole surface of the disk with zeroes.

Like:
Code:

badblocks -wsc 256 -t 0x00 /dev/sdz
This first writes the zeroes and then read-checks the disk.

Or without the readcheck:
Code:

dd if=/dev/zero of=/dev/sdz
sync

Of course, wiping the drive with zeroes will not help reading any data from the drive. :(

I once had a SATA cable come slightly loose on a drive in use (a Seagate), and the drive became impossible to connect to. = Damaged by a loose SATA cable.

If you can see the drive in a USB dock, but not when directly connected to the motherboard, then there is some incompatibility with the SATA ports. Older drives doing SATA 150/300 don't mix well with controllers doing 300/600.
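
To see what link speed a directly attached drive actually negotiated, something like this usually works on a reasonably recent kernel (the example output line is only illustrative):

Code:

# negotiated link speed as logged by libata at detection time
dmesg | grep -i "SATA link up"
# e.g.:  ata3: SATA link up 3.0 Gbps (SStatus 123 SControl 300)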

jefro 04-02-2013 02:27 PM

And some malware could have damaged these drives too.

I'd think some issue with power, heat, physical damage or such before that.

H_TeXMeX_H 04-03-2013 02:48 AM

Back up your data and run a SMART long test on them; it also checks for bad blocks.
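
Roughly like this (assuming the drive shows up as /dev/sdz):

Code:

# start the extended (long) self-test; smartctl prints an estimated duration
smartctl -t long /dev/sdz
# after that time has passed, read back the result
smartctl -l selftest /dev/sdz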

onebuck 04-03-2013 08:12 AM

Member response
 
Hi,

Quote:

Originally Posted by camorri (Post 4923688)
<snip>
New system boards use UEFI, not a traditional BIOS, and require GPT partition tables.

Care to expand on the 'require GPT partition tables'?
Excerpt from https://en.wikipedia.org/wiki/Unifie...ware_Interface
Quote:

The interface defined by the EFI specification includes data tables that contain platform information, and boot and runtime services that are available to the OS loader and OS. UEFI firmware provides several technical advantages over a traditional BIOS system:[11]
  • Ability to boot from large disks (over 2 TiB) with a GUID Partition Table, GPT.[12][13]
  • CPU-independent architecture[12]
  • CPU-independent drivers[12]
  • Flexible pre-OS environment, including network capability
  • Modular design
Processor compatibility

As of version 2.3, processor bindings exist for Itanium, x86, x86_64 and ARM. Only little-endian processors can be supported.[14]
The BIOS is limited to a 16-bit processor mode and 1 MB of addressable space due to the design being based on the IBM 5150, which used the 16-bit Intel 8088.[6][15] In comparison, the UEFI processor mode can be either 32-bit (x86-32, ARM) or 64-bit (x86-64 and Itanium).[6][16] 64-bit UEFI understands long mode, which allows applications in the pre-boot execution environment to have direct access to all of the memory using 64-bit addressing.[17]
UEFI requires the firmware and operating system loader to be size-matched; i.e. a 64-bit UEFI implementation can only load a 64-bit UEFI OS boot loader. After the system transitions from "Boot Services" to "Runtime Services," the operating system kernel takes over. At this point, the kernel can change processor modes if it desires, but this bars usage of runtime services[18] (unless the kernel switches back again). Presently, the only operating system that supports running a kernel that is not size-matched to the firmware is Mac OS X.

Disk device compatibility

In addition to the standard PC disk partition scheme, which uses a master boot record (MBR), EFI works with a new partitioning scheme: GUID Partition Table (GPT). GPT is free from many of the limitations of MBR. In particular, the MBR limits on the number and size of disk partitions (up to 4 primary partitions per disk, up to 2 TiB (2 × 2^40 bytes) per disk) are relaxed.[19] GPT allows for a maximum disk and partition size of 8 ZiB (8 × 2^70 bytes).[19][20] The UEFI specification explicitly requires support for FAT32 for system partitions, and FAT12/FAT16 for removable media; specific implementations may support other file systems.
Quote:

Originally Posted by camorri (Post 4923688)
You need new partitioning tools to create GPT partition tables. The drives probably have MBRs. ( Just guessing ). So, that might explain why they don't run on the new system board.

You can have GPT for the initial configuration & boot, but UEFI also works with MBR disks (see the excerpt above).

Quote:

Originally Posted by camorri (Post 4923688)
Do you know what OS was using the drives? That would give us a clue; windoze usually uses NTFS, and you need the driver loaded on a Linux system to see anything on the drives. It's called ntfs-3g. What distro are you using to 'see' the drives?

This link -->http://www.rodsbooks.com/gdisk/ might help explain GPT and the tools used to create partitions.

Microsoft Windows versions use the FAT, FAT32 & NTFS filesystems. As to the clue, Win7/8 would likely use NTFS while XP would use FAT32 or NTFS. Earlier versions are not in the picture. :)

Excerpt from https://en.wikipedia.org/wiki/Unifie...ware_Interface
Quote:

Disk device compatibility

In addition to the standard PC disk partition scheme, which uses a master boot record (MBR), EFI works with a new partitioning scheme: GUID Partition Table (GPT). GPT is free from many of the limitations of MBR. In particular, the MBR limits on the number and size of disk partitions (up to 4 primary partitions per disk, up to 2 TiB (2 × 2^40 bytes) per disk) are relaxed.[19] GPT allows for a maximum disk and partition size of 8 ZiB (8 × 2^70 bytes).[19][20] The UEFI specification explicitly requires support for FAT32 for system partitions, and FAT12/FAT16 for removable media; specific implementations may support other file systems.

camorri 04-03-2013 08:26 AM

Quote:

Care to expand on the 'require GPT partition tables';
I stand corrected. Sorry for any confusion.

aaazen 04-05-2013 03:15 AM

Quote:

Originally Posted by lpallard (Post 4923668)
So a friend of mine has brought me two hard drives he couldn't transfer from his old machine to the new machine he just built...
Upon connecting the first drive (WD10EARS 1.0TB) ...

I owned 5 of the Western Digital WD10EARS/WD10EADS drives and 2 of them died on me.

Hey they were cheap and had 1 terabyte each.

A friend of mine had his die too, so that makes 3 out of 6....not good.

Buying these "green" drives was a 50/50 chance of failure...you get what you pay for... make backups often!

H_TeXMeX_H 04-05-2013 04:58 AM

I'd say avoid "green" as well as WD drives.

wroom 04-05-2013 05:45 AM

Quote:

Originally Posted by H_TeXMeX_H (Post 4925580)
I'd say avoid "green" as well as WD drives.

I have some WD Black and some WD Blue drives. None of them has failed. I've never owned a WD Green, because I read that they are low price, low speed and low quality. I've heard of several people who bought the WD Green and had it fail rather soon.

WD Black: Reliable high performance, slightly lower temps than the WD Blue, and you will hear a RAID multi-seek crunch through walls.

WD Blue: Medium to high performance; some of them run a bit hotter and consume more power, but they are rather quiet.

I'd go with a WD mechanical disk every day, but I will never buy the "green" version.

Comparing with other brands, my personal opinion is that WD is above the rest, and I've come to the decision to stay away from SSD drives because of how, and how often, I hear of them failing.

jpollard 04-05-2013 07:33 AM

How old are the drives? Were they in continuous use?

One thing I have seen with old drives is that the oil used for the bearings can freeze up when they get turned off for a time. Nothing really wrong other than age. But the system will tend not to recognize drives that will not spin up - they will show up in the /proc/scsi/scsi file (this information comes from the formatter board), but with no partition information because the disk didn't spin up.
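
A quick way to see that symptom (sketch only; /dev/sdz is a placeholder) is to compare what the SCSI layer reports with what the block layer actually found:

Code:

# the drive still answers the SCSI INQUIRY, so it is listed here
cat /proc/scsi/scsi
# but no partitions were read because the platters never spun up
cat /proc/partitions
lsblk /dev/sdz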

If the data is REALLY REALLY important, AND you have space to make a fast backup... you can sometimes free the stiction by CAREFULLY holding the disk between your hands, carefully rotating it around its center axis of rotation, and hitting a hard surface (I like a concrete floor) with the edge of the drive casing. This will sometimes crack the bearings loose and allow the disk to spin up. ONCE.

When the disk cools down again it will no longer work. This also has the possibility of causing head crashes.

IF the disk spins up and is recognized, copy the data ASAP. You may not get another chance.

I have recovered disks this way and had them last several days (one lasted several months - until it was turned off again). DO NOT DEPEND ON THIS WORKING. Sometimes it does, sometimes it doesn't.

camorri 04-05-2013 08:13 AM

I bought 2 Western Digital WD10EARS drives for a D-Link 323 NAS. One disk failed in less than 10 hours of use. I didn't know these were advanced format drives when I bought them, nor did I know what advanced format was at the time. I keep the NAS off now, unless I want to back up. I've been trying to decide what to buy to replace them both. WD now makes Red drives as well, specifically for NAS devices. Not sure if I can trust them or not.

lpallard 07-02-2013 12:00 PM

As a courtesy to LQ, I am following up on this topic...

The Seagate 320GB drive works in my Slackware machine, so I guess it had to do with what camorri said (GPT partitions) on my friend's windoze 8 computer...

The green WD10EARS is really dead.. my Slackware machine won't even boot within 10 minutes with it attached, and after it passes the initial BIOS checkup, the kernel throws all kinds of ATA errors....

I trashed it.

I've lost dozens of these green drives in the last 5 years... My opinion, and no offense there: they are pure crap. Absolute trash to avoid at all costs, unless you don't care if the thing stops working after a few months or so.. I'm not like that. I like my stuff reliable, especially in my server.

Thanks for the very educational thread guys!!!!

