LinuxQuestions.org
Linux - Hardware This forum is for Hardware issues.
Old 04-02-2013, 01:35 PM   #1
lpallard
Member
 
Registered: Nov 2008
Location: Milky Way
Distribution: Slackware (various releases)
Posts: 970

Rep: Reputation: 44
Two dead hard drives at the same time, different brands, different age?


So a friend of mine brought me two hard drives he couldn't transfer from his old machine to the new machine he just built. Upon booting, the OS simply wouldn't load them; even his BIOS wouldn't see them. Since he was running Windows 8 on a brand-new bleeding-edge mobo, we thought the drives were somehow incompatible with the mix of Windows 8 and the new hardware, so he lent me the drives to test on my own machine.

Docking them in a USB SATA dock, I can see them (dmesg), but I can't do anything with them.

Upon connecting the first drive (WD10EARS 1.0TB):

Code:
[181567.830171] scsi8 : usb-storage 1-2:1.0
[181580.026690] scsi 8:0:0:0: Direct-Access     WDC WD10 EARS-22Y5B1           PQ: 0 ANSI: 2 CCS
[181580.026878] sd 8:0:0:0: Attached scsi generic sg4 type 0
[181580.027546] sd 8:0:0:0: [sde] 1953525168 512-byte logical blocks: (1.00 TB/931 GiB)
[181580.029292] sd 8:0:0:0: [sde] Write Protect is off
[181580.029295] sd 8:0:0:0: [sde] Mode Sense: 34 00 00 00
[181580.029297] sd 8:0:0:0: [sde] Assuming drive cache: write through
[181580.030923] sd 8:0:0:0: [sde] Assuming drive cache: write through
[181581.853701]  sde: sde1
[181581.855185] sd 8:0:0:0: [sde] Assuming drive cache: write through
[181581.855188] sd 8:0:0:0: [sde] Attached SCSI disk
[181586.679885] sd 8:0:0:0: [sde] Unhandled sense code
[181586.679889] sd 8:0:0:0: [sde]  Result: hostbyte=0x00 driverbyte=0x08
[181586.679891] sd 8:0:0:0: [sde]  Sense Key : 0x3 [current] 
[181586.679894] sd 8:0:0:0: [sde]  ASC=0x11 ASCQ=0x0
[181586.679895] sd 8:0:0:0: [sde] CDB: cdb[0]=0x28: 28 00 74 70 6d 00 00 00 08 00
[181586.679899] end_request: I/O error, dev sde, sector 1953524992
...
The second drive (Seagate ST3320310CS 320GB):
Code:
[182416.397139] scsi10 : usb-storage 1-2:1.0
[182417.465153] scsi 10:0:0:0: Direct-Access     ST332031 0CS                   PQ: 0 ANSI: 2 CCS
[182417.465345] sd 10:0:0:0: Attached scsi generic sg4 type 0
[182417.466116] sd 10:0:0:0: [sde] 625142448 512-byte logical blocks: (320 GB/298 GiB)
[182417.466948] sd 10:0:0:0: [sde] Write Protect is off
[182417.466951] sd 10:0:0:0: [sde] Mode Sense: 34 00 00 00
[182417.466953] sd 10:0:0:0: [sde] Assuming drive cache: write through
[182417.468755] sd 10:0:0:0: [sde] Assuming drive cache: write through
[182417.594245] sd 10:0:0:0: [sde] Unhandled sense code
[182417.594248] sd 10:0:0:0: [sde]  Result: hostbyte=0x00 driverbyte=0x08
[182417.594251] sd 10:0:0:0: [sde]  Sense Key : 0x3 [current] 
[182417.594253] sd 10:0:0:0: [sde]  ASC=0x11 ASCQ=0x0
[182417.594255] sd 10:0:0:0: [sde] CDB: cdb[0]=0x28: 28 00 00 00 00 00 00 00 08 00
[182417.594259] end_request: I/O error, dev sde, sector 0
[182417.594261] quiet_error: 9 callbacks suppressed
[182417.594263] Buffer I/O error on device sde, logical block 0
[182417.696859] sd 10:0:0:0: [sde] Unhandled sense code
[182417.696861] sd 10:0:0:0: [sde]  Result: hostbyte=0x00 driverbyte=0x08
[182417.696863] sd 10:0:0:0: [sde]  Sense Key : 0x3 [current] 
[182417.696865] sd 10:0:0:0: [sde]  ASC=0x11 ASCQ=0x0
[182417.696867] sd 10:0:0:0: [sde] CDB: cdb[0]=0x28: 28 00 00 00 00 00 00 00 08 00
Did both drives die at the same time?? To me it looks like they are indeed dead. I know the green drives are extremely unreliable, as I have RMA'd quite a few of them in the last 5 years... but the Seagate surprises me. I have been running the same model of drive in my server 24/7 for 2 years with no problems so far.
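For anyone reading these kernel logs later: per the SCSI spec, Sense Key 0x3 is MEDIUM ERROR, and ASC/ASCQ 0x11/0x00 means an unrecovered read error, i.e. both drives themselves are reporting that they cannot read those sectors. A minimal sketch of that mapping (decode_sense is a made-up helper for illustration, not a standard tool, and it only covers the codes seen above):

```shell
# decode_sense: hypothetical helper mapping the sense fields from the dmesg
# output above to their meaning in the SCSI spec.
decode_sense() {
  case "$1/$2/$3" in
    0x3/0x11/0x0) echo "MEDIUM ERROR: unrecovered read error" ;;
    *)            echo "unknown - see the SCSI sense-code tables" ;;
  esac
}
decode_sense 0x3 0x11 0x0   # the combination logged for both drives
```

The real sense-code tables are much larger; the point is just that these particular errors come from the drive firmware, not from the USB dock or the kernel.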

Last edited by lpallard; 04-02-2013 at 01:38 PM.
 
Old 04-02-2013, 02:07 PM   #2
camorri
Senior Member
 
Registered: Nov 2002
Location: Somewhere inside 9.9 million sq. km. Canada
Distribution: Slackware 14.0 + 14.1
Posts: 4,830

Rep: Reputation: 431
Quote:
Did both drives die at the same time??
It's possible; a lightning strike might just do it. However, that is probably not the case.

New system boards use UEFI, not a traditional BIOS, and require GPT partition tables. You need new partitioning tools to create GPT partition tables. The drives probably have MBRs (just guessing). So that might explain why they don't run on the new system board.

Do you know what OS was using the drives? That would give us a clue; windoze usually uses NTFS, and you need the driver loaded on a Linux system to see anything on the drives. It's called ntfs-3g. What distro are you using to 'see' the drives?

This link --> http://www.rodsbooks.com/gdisk/ might help explain GPT and the tools used to create partitions.
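One quick way to tell whether a disk already carries a GPT label: the 8-byte signature "EFI PART" sits at byte offset 512 (LBA 1, with 512-byte sectors). A sketch on a throwaway image file so nothing real gets touched (label.img is just an example name; on a real disk you would read from /dev/sdX instead):

```shell
# Build a tiny 4-sector scratch image and stamp a fake GPT signature into LBA 1.
dd if=/dev/zero of=label.img bs=512 count=4 status=none
printf 'EFI PART' | dd of=label.img bs=1 seek=512 conv=notrunc status=none

# Read the 8 signature bytes back; "EFI PART" means a GPT header is present.
sig=$(dd if=label.img bs=1 skip=512 count=8 status=none)
[ "$sig" = "EFI PART" ] && echo "GPT" || echo "MBR or unpartitioned"
```

gdisk or parted will of course tell you the same thing more politely; this is only to show where the label actually lives on disk.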

Last edited by camorri; 04-02-2013 at 02:09 PM.
 
Old 04-02-2013, 02:16 PM   #3
wroom
Member
 
Registered: Dec 2009
Location: Sweden
Posts: 83

Rep: Reputation: 24
You can examine the drives with smartctl under Linux; that should give a hint about what's going on. That is, IF you can get the drive detected on a native SATA port. You often can't use smartctl through a USB dock.
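For what it's worth, many (not all) USB-SATA bridges do pass SMART through, so `smartctl -d sat -a /dev/sdX` is worth a try before moving the drive to a native port. Once you have output, the attributes of interest for a dying disk are the reallocated/pending sector counts; a sketch of pulling one out (the sample line below is fabricated for illustration, in smartctl's attribute-table format):

```shell
# Fabricated sample line from a SMART attribute table; on real hardware run
#   smartctl -a /dev/sdX          (native SATA)
#   smartctl -d sat -a /dev/sdX   (through many USB-SATA bridges)
sample='  5 Reallocated_Sector_Ct   0x0033   100   100   036    Pre-fail  Always       -       12'

# The raw value is the last field; anything above 0 means the drive has
# already remapped bad sectors.
echo "$sample" | awk '/Reallocated_Sector_Ct/ {print $NF}'
```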

The cause could be lack of power to the drives. Could it possibly be your friend's power supply? The drives would then need low-level formatting / surface testing with smartctl or a vendor diagnostic application to behave well again.

Since both drives died at the same time, the cause is probably in the machine where they were both installed.


I have seen some drives that were damaged by heat and/or low supply voltage (and were giving sense errors like these) be fully repaired simply by initializing the whole surface of the disk with zeroes.

Like:
Code:
badblocks -wsc 256 -t 0x00 /dev/sdz
Which first writes the zeroes, and then read-checks the disk.

Or without the readcheck:
Code:
dd if=/dev/zero of=/dev/sdz
sync
Of course, wiping the drive with zeroes will not help you read any data off it.
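Before pointing dd or badblocks at a real /dev/sdX, double-check the device name: dd will happily destroy the wrong disk. The zero-fill-and-verify idea can be rehearsed safely on a disposable image file first; a sketch (disk.img is just an example name):

```shell
# Zero-fill a 4 MiB scratch image exactly the way you would a drive.
dd if=/dev/zero of=disk.img bs=1M count=4 status=none
sync

# Read-check: strip out every zero byte and count what is left.
# 0 remaining bytes means the wipe verified clean.
nonzero=$(tr -d '\000' < disk.img | wc -c)
echo "non-zero bytes: $nonzero"
```

badblocks -w does the equivalent write-then-verify cycle in one command, with its own test patterns.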

I once had a SATA cable come slightly loose on a drive in use (a Seagate), and the drive became impossible to connect to. Damaged by a loose SATA cable.

If you can see the drive in a USB dock but not when directly connected to the motherboard, then there is some incompatibility with the SATA ports. Older drives doing SATA 150/300 don't always mix well with controllers doing 300/600.
 
Old 04-02-2013, 02:27 PM   #4
jefro
Guru
 
Registered: Mar 2008
Posts: 11,006

Rep: Reputation: 1356
And some malware could have damaged these drives too.

I'd suspect some issue with power, heat, physical damage or the like before that, though.
 
Old 04-03-2013, 02:48 AM   #5
H_TeXMeX_H
Guru
 
Registered: Oct 2005
Location: $RANDOM
Distribution: slackware64
Posts: 12,928
Blog Entries: 2

Rep: Reputation: 1269
Back up your data and run a SMART long test on them; it also checks for bad blocks.
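The workflow is: `smartctl -t long /dev/sdX` to start the test, then `smartctl -l selftest /dev/sdX` once it finishes (the device name is a placeholder). A failing drive reports a status like the fabricated log line below; a sketch of spotting it in the output:

```shell
# Fabricated selftest-log line in smartctl's format; a healthy drive would
# report "Completed without error" here instead.
log='# 1  Extended offline    Completed: read failure       90%     12345    1953524992'

# Pull out the verdict; any "read failure" result means the medium is bad.
echo "$log" | grep -o 'Completed: read failure'
```

The last column of a failed entry is the first LBA that failed, which you can compare against the sector numbers in the dmesg errors.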
 
Old 04-03-2013, 08:12 AM   #6
onebuck
Moderator
 
Registered: Jan 2005
Location: Midwest USA, Central Illinois
Distribution: Slackware®
Posts: 11,021
Blog Entries: 1

Rep: Reputation: 1364
Member response

Hi,

Quote:
Originally Posted by camorri View Post
<snip>
New system boards use UEFI, not traditional BIOS, and require GPT partition tables.
Care to expand on the 'require GPT partition tables'?
Excerpt from https://en.wikipedia.org/wiki/Unifie...ware_Interface
Quote:
Contents
Interaction between the EFI boot manager and EFI drivers

The interface defined by the EFI specification includes data tables that contain platform information, and boot and runtime services that are available to the OS loader and OS. UEFI firmware provides several technical advantages over a traditional BIOS system:[11]
  • Ability to boot from large disks (over 2 TiB) with a GUID Partition Table, GPT.[12][13]
  • CPU-independent architecture[12]
  • CPU-independent drivers[12]
  • Flexible pre-OS environment, including network capability
  • Modular design
Processor compatibility

As of version 2.3, processor bindings exist for Itanium, x86, x86_64 and ARM. Only little-endian processors can be supported.[14]
The BIOS is limited to a 16-bit processor mode and 1 MB of addressable space due to the design being based on the IBM 5150, which used the 16-bit Intel 8088.[6][15] In comparison, the UEFI processor mode can be either 32-bit (x86-32, ARM) or 64-bit (x86-64 and Itanium).[6][16] 64-bit UEFI understands long mode, which allows applications in the pre-boot execution environment to have direct access to all of the memory using 64-bit addressing.[17]
UEFI requires the firmware and operating system loader to be size-matched; i.e. a 64-bit UEFI implementation can only load a 64-bit UEFI OS boot loader. After the system transitions from "Boot Services" to "Runtime Services," the operating system kernel takes over. At this point, the kernel can change processor modes if it desires, but this bars usage of runtime services[18] (unless the kernel switches back again). Presently, the only operating system that supports running a kernel that is not size-matched to the firmware is Mac OS X.

Disk device compatibility

In addition to the standard PC disk partition scheme, which uses a master boot record (MBR), EFI works with a new partitioning scheme: GUID Partition Table (GPT). GPT is free from many of the limitations of MBR. In particular, the MBR limits on the number and size of disk partitions (up to 4 primary partitions per disk, up to 2 TiB (2^41 bytes) per disk) are relaxed.[19] GPT allows for a maximum disk and partition size of 8 ZiB (2^73 bytes).[19][20] The UEFI specification explicitly requires support for FAT32 for system partitions, and FAT12/FAT16 for removable media; specific implementations may support other file systems.
Quote:
Originally Posted by camorri View Post
You need new partitioning tools to create GPT partition tables. The drives probably have MBR's. ( Just guessing ). So, that might explain why they don't run on the new system board.
You only need GPT on the disk used for the initial configuration & boot; other disks can keep their existing partition tables.

Quote:
Originally Posted by camorri View Post
Do you know what OS was using the drives? That would give us a clew, windoze usually uses NTFS and you need the driver loaded on a linux system to see anything on the drives. Its called ntfs-g3. What distro are you using to 'see' the drives?

This link -->http://www.rodsbooks.com/gdisk/ might help explain GPT and the tools used to create partitions.
Microsoft Windows versions use the FAT, FAT32 & NTFS filesystems. As to the clue: Win7/8 would likely use NTFS, while XP would use FAT32 or NTFS. Earlier versions are not in the picture.

Excerpt from https://en.wikipedia.org/wiki/Unifie...ware_Interface
Quote:
Disk device compatibility

In addition to the standard PC disk partition scheme, which uses a master boot record (MBR), EFI works with a new partitioning scheme: GUID Partition Table (GPT). GPT is free from many of the limitations of MBR. In particular, the MBR limits on the number and size of disk partitions (up to 4 primary partitions per disk, up to 2 TiB (2^41 bytes) per disk) are relaxed.[19] GPT allows for a maximum disk and partition size of 8 ZiB (2^73 bytes).[19][20] The UEFI specification explicitly requires support for FAT32 for system partitions, and FAT12/FAT16 for removable media; specific implementations may support other file systems.
 
Old 04-03-2013, 08:26 AM   #7
camorri
Senior Member
 
Registered: Nov 2002
Location: Somewhere inside 9.9 million sq. km. Canada
Distribution: Slackware 14.0 + 14.1
Posts: 4,830

Rep: Reputation: 431
Quote:
Care to expand on the 'require GPT partition tables';
I stand corrected. Sorry for any confusion.
 
Old 04-05-2013, 03:15 AM   #8
comet.berkeley
Member
 
Registered: Dec 2009
Location: California
Distribution: Slackware current
Posts: 144

Rep: Reputation: Disabled
Quote:
Originally Posted by lpallard View Post
So a friend of mine has brought me two hard drives he coudlnt transfer from his old machine to his new machine he just built up...
Upon connecting the first drive (WD10EARS 1.0TB) ...
I owned 5 of the Western Digital WD10EARS/WD10EADS drives and 2 of them died on me.

Hey they were cheap and had 1 terabyte each.

A friend of mine had his die too, so that makes 3 out of 6....not good.

Buying these "green" drives was a 50/50 chance of failure...you get what you pay for... make backups often!
 
Old 04-05-2013, 04:58 AM   #9
H_TeXMeX_H
Guru
 
Registered: Oct 2005
Location: $RANDOM
Distribution: slackware64
Posts: 12,928
Blog Entries: 2

Rep: Reputation: 1269
I'd say avoid "green" as well as WD drives.
 
Old 04-05-2013, 05:45 AM   #10
wroom
Member
 
Registered: Dec 2009
Location: Sweden
Posts: 83

Rep: Reputation: 24
Quote:
Originally Posted by H_TeXMeX_H View Post
I'd say avoid "green" as well as WD drives.
I have some WD Black and some WD Blue drives. None of them have failed. I've never owned a WD Green, because I read up on them: low price, low speed and low quality. I've heard of several people who bought a WD Green and had it fail rather soon.

WD Black: reliable high performance, slightly lower temperature than the WD Blue, and you will hear a RAID multi-seek crunch through walls.

WD Blue: medium to high performance; some of them run a bit hotter and consume more power, but they are rather quiet.

I'd go with a WD mechanical disk every day, but I will never buy the 'Green' version.

Comparing with other brands, my personal opinion is that WD is above the rest, and I've decided to stay away from SSDs because of how, and how often, I hear of them failing.
 
Old 04-05-2013, 07:33 AM   #11
jpollard
Senior Member
 
Registered: Dec 2012
Location: Washington DC area
Distribution: Fedora, CentOS, Slackware
Posts: 1,984

Rep: Reputation: 512
How old are the drives? Were they in continuous use?

One thing I have seen with old drives is that the oil used in the bearings can freeze up when they are turned off for a while. Nothing really wrong other than age. But the system will tend not to recognize drives that will not spin up: they will show up in the /proc/scsi/scsi file (this information comes from the controller board), but with no partition information, because the disk didn't spin up.

If the data is REALLY REALLY important, AND you have space to make a fast backup... you can sometimes free the stiction by CAREFULLY holding the disk between your hands, rotating it around its center axis of rotation, and hitting a hard surface (I like a concrete floor) with the edge of the drive casing. This will sometimes crack the bearings loose and allow the disk to spin up. ONCE.

When the disk cools down again it will no longer work. This also has the possibility of causing head crashes.

IF the disk spins up and is recognized, copy the data ASAP. You may not get another chance.

I have recovered disks this way and had them last several days (one lasted several months, until it was turned off again). DO NOT DEPEND ON THIS WORKING. It sometimes works, sometimes doesn't.
 
Old 04-05-2013, 08:13 AM   #12
camorri
Senior Member
 
Registered: Nov 2002
Location: Somewhere inside 9.9 million sq. km. Canada
Distribution: Slackware 14.0 + 14.1
Posts: 4,830

Rep: Reputation: 431
I bought 2 Western Digital WD10EARS drives for a D-Link 323 NAS. One disk failed in less than 10 hours of use. I didn't know these were Advanced Format drives when I bought them, nor did I know what Advanced Format was at the time. I keep the NAS off now, unless I want to back up. I've been trying to decide what to buy to replace them both. WD now makes Red drives as well, specifically for NAS devices. Not sure if I can trust them or not.
 
Old 07-02-2013, 12:00 PM   #13
lpallard
Member
 
Registered: Nov 2008
Location: Milky Way
Distribution: Slackware (various releases)
Posts: 970

Original Poster
Rep: Reputation: 44
As a courtesy to LQ, I am following up on this topic...

The Seagate 320GB drive works in my Slackware machine, so I guess it had to do with what camorri said (GPT partitions) on my friend's windoze 8 computer...

The green WD10EARS is really dead... even my Slackware machine won't boot with it attached: within 10 minutes of passing the initial BIOS checkup, the kernel throws all kinds of ATA errors...

I trashed it.

I've lost dozens of these green drives in the last 5 years... My opinion, and no offense intended: they are pure crap. Absolute trash to avoid at all costs, unless you don't care that the thing stops working after a few months or so. I'm not like that. I like my stuff reliable, especially in my server.

Thanks for the very educational thread, guys!!!!
 
  

