LinuxQuestions.org

LinuxQuestions.org (/questions/)
-   Linux - Hardware (http://www.linuxquestions.org/questions/linux-hardware-18/)
-   -   debian and ubuntu can't access hdd but windows can (http://www.linuxquestions.org/questions/linux-hardware-18/debian-and-ubuntu-cant-access-hdd-but-windows-can-4175438168/)

beowulfnode 11-21-2012 02:03 PM

debian and ubuntu can't access hdd but windows can
 
Some info for people who run into this odd problem. I managed to solve it by deleting the "hidden system partition" after booting a physical machine into Linux, rather than a VMware virtual machine.

I'm talking about low-level access to the drive with tools like gdisk, gparted and dd. They all resulted in I/O errors.

I have two new HDDs that are almost exactly the same: Western Digital 3TB Green 64MB SATA 3, model WD30EZRX-00DC0B0. They were going into my home-built NAS, a Debian virtual machine hosted on a VMware ESXi physical host.

I was having trouble getting one of the new HDDs to work under Linux. The troublesome drive works fine under Windows: read/write is fine, speed is good, and S.M.A.R.T. values like reallocated sectors are 0. However, when I put the same disk into a Debian system or an Ubuntu live CD, the disk looks faulty, with I/O errors on reads and writes anywhere on the disk.

My target system is a little complex so bear with me.
  • VMware ESXi 5.0 Update 1 physical host
  • 1 vmdk disk provided to VM for virtual machine boot and system files (Debian 6.0.6 Stable)
  • 6x Physical Device Raw Mappings (created with vmkfstools -z from an SSH prompt on the ESXi host) to pass the physical disks through to the virtual machine
  • the group of 6 HDDs should not have GPT or MBR partitions; they are given directly to zfs-fuse as block devices, using /dev/sda instead of /dev/sda1 (yes, I know fuse slows it down and there are other options, but it's what I'm using)
  • I'm in the process of upgrading 1 of the zfs pools (3 disks each) from 2TB disks to 3TB disks. I'm using each zfs pool like a raid 5 array.
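
For reference, the per-disk swap in that last step can be sketched with zpool commands. This is only a sketch: the pool name "tank" and the device names are placeholders, not from my setup, and these commands need a live ZFS pool to run against.

```shell
# Placeholder names: "tank" is the pool, /dev/sdc the old 2TB disk,
# /dev/sdd the new 3TB disk going into the raidz vdev.
zpool replace tank /dev/sdc /dev/sdd   # swap the old disk for the new one
zpool status tank                      # watch the resilver until it completes
# once every disk in the vdev has been replaced with a larger one,
# let the pool grow into the new capacity:
zpool set autoexpand=on tank
```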

The problem in Linux starts well before zfs-fuse is involved, so if you don't know about zfs or fuse, don't worry about that. Plugging the disk in, powering on, and trying the command
dd if=/dev/zero of=/dev/sdd bs=1k count=1024
practically hangs the system; it takes ages (barely 1KB/s), and lots of I/O errors are logged in /var/log/messages and /var/log/dmesg.
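
The same write probe can be wrapped in a tiny script for repeat testing on each machine. The device name is an assumption, and note that /dev/zero (not /dev/null, which supplies no data at all) is the right source for a write test:

```shell
#!/bin/sh
# Sketch of the write probe described above. DESTRUCTIVE: it overwrites
# the first 1 MiB of whatever you pass in, so point it only at the
# suspect disk (a regular file also works, for a harmless dry run).
probe_disk() {
    if dd if=/dev/zero of="$1" bs=1k count=1024 conv=fsync 2>/dev/null; then
        echo "write OK: $1"
    else
        echo "write FAILED: $1 (check dmesg for ata/sd I/O errors)"
    fi
}

# example (placeholder device): probe_disk /dev/sdd
```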

The sequence of events so far:
  • Already had one 3TB disk in the zfs raidz (similar to RAID 5)
  • Purchased the 2 new HDDs
  • Installed the 1st 3TB disk in the system and rebuilt the zfs raidz
  • Installed the 2nd 3TB disk in the system and tried to rebuild the raidz
  • The rebuild failed very quickly due to I/O errors on the new drive
  • Tried an Ubuntu 12.04.1 Live DVD in the virtual machine; it couldn't do anything with the "faulty" disk either
  • Took the "faulty" new disk out and put it in my main desktop's USB HDD dock (it's Windows, please don't shoot me for this). I found out my USB HDD dock doesn't support 3TB disks
  • Put it on an internal SATA port in my desktop that does support 3TB disks; after a quick initialise and format the disk looked OK
  • The command "chkdsk f: /r" completed without a hitch
  • Filling the drive up with data completed fine
  • Reading the data back was fine, and it matched what was written
  • Took the HDD to work and used their USB HDD dock, which does support 3TB disks; again, under Windows, reading and writing were fine
  • Using VMware Workstation 9 USB passthrough, I ran Ubuntu in a virtual machine there, and again the disk exhibited the I/O errors

My next test was to run an Ubuntu Live DVD on a physical computer with the HDD attached and see if that worked. I booted the VMware host's physical box from the Ubuntu Live DVD, using the same SATA port as before, expecting to confirm a hardware fault on that port. Instead, the disk that had appeared faulty under Linux earlier started to work fine. However, I noticed there was a hidden "System Recovery" partition on this brand-new disk, which I had removed from the sealed anti-static bag myself, so I deleted that partition and used dd to write zeros from /dev/zero over the first few MB of the disk.
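
That cleanup step looks roughly like this. Again a sketch: the device name is a placeholder, the 8 MiB count is an arbitrary "first few MB", and the source must be /dev/zero (reading from /dev/null would write nothing):

```shell
#!/bin/sh
# Sketch of wiping the start of the disk as described above. DESTRUCTIVE.
# "$1" is the target block device (a regular file also works for testing).
wipe_disk_start() {
    # 8 MiB comfortably covers the MBR, the primary GPT, and the start
    # of any leftover partition; the exact count is arbitrary
    dd if=/dev/zero of="$1" bs=1M count=8 conv=fsync 2>/dev/null \
        && echo "zeroed first 8 MiB of $1"
}

# example (placeholder device): wipe_disk_start /dev/sdd
```

One caveat: GPT also keeps a backup header at the end of the disk, so a partitioning tool such as gdisk (expert menu, 'z' to zap) is the cleaner way to remove both copies.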

If that test had failed, I was going to confirm the problem was Linux itself by booting known-good hardware (my main desktop, which normally runs Windows) into Linux from live CDs. However, this was not needed.

Then I booted back into VMware ESXi, re-added the disk to the Debian 6.0.6 virtual machine, and rebuilt the raidz; everything went fine.

My main question was: why did one HDD work when the other did not?

I believe the answer is that the VMware ESXi host prevents a virtual machine, even one with a raw device mapping, from writing over a "System Restore" or "System Reserved" partition associated with the system drive of a Windows install.

How that partition got on there in the first place, I don't know; your guess is as good as mine. But I do know that it was put on there at Western Digital, since that is how I received the drive, and that not all drives from WD are like that.

The hardware in the VMware host is a very basic box, as this is a home server:
- Asus M4A78LT-M-LE, AMD 760G/AM3/mATX
- AMD Athlon II X2 235E CPU AM3 45W
Code:

~ # lspci
000:000:00.0 Bridge: Advanced Micro Devices [AMD] RS780 Host Bridge
000:000:01.0 Bridge: ASUSTeK Computer Inc. RS880 PCI to PCI bridge (int gfx)
000:000:06.0 Bridge: Advanced Micro Devices [AMD] RS780 PCI to PCI bridge (PCIE port 2) [PCIe RP[000:000:06.0]]
000:000:17.0 Mass storage controller: ATI Technologies Inc SB700 SATA Controller [AHCI Mode] [vmhba0]
000:000:18.0 Serial bus controller: ATI Technologies Inc SB700/SB800 USB OHCI0 Controller
000:000:18.1 Serial bus controller: ATI Technologies Inc SB700 USB OHCI1 Controller
000:000:18.2 Serial bus controller: ATI Technologies Inc SB700/SB800 USB EHCI Controller
000:000:19.0 Serial bus controller: ATI Technologies Inc SB700/SB800 USB OHCI0 Controller
000:000:19.1 Serial bus controller: ATI Technologies Inc SB700 USB OHCI1 Controller
000:000:19.2 Serial bus controller: ATI Technologies Inc SB700/SB800 USB EHCI Controller
000:000:20.0 Serial bus controller: ATI Technologies Inc M4A785TD Motherboard
000:000:20.1 Mass storage controller: ATI Technologies Inc SB700/SB800 IDE Controller [vmhba1]
000:000:20.3 Bridge: ATI Technologies Inc SB700/SB800 LPC host controller
000:000:20.4 Bridge: ATI Technologies Inc SBx00 PCI to PCI Bridge
000:000:20.5 Serial bus controller: ATI Technologies Inc SB700/SB800 USB OHCI2 Controller
000:000:24.0 Bridge: Advanced Micro Devices [AMD] Family 10h Processor HyperTransport Configuration
000:000:24.1 Bridge: Advanced Micro Devices [AMD] Family 10h Processor Address Map
000:000:24.2 Bridge: Advanced Micro Devices [AMD] Family 10h Processor DRAM Controller
000:000:24.3 Bridge: Advanced Micro Devices [AMD] Family 10h Processor Miscellaneous Control
000:000:24.4 Bridge: Advanced Micro Devices [AMD] Family 10h Processor Link Control
000:001:05.0 Display controller: ATI Technologies Inc 760G [Radeon 3000]
000:002:00.0 Network controller: Atheros Communications AR8131 Gigabit Ethernet [vmnic0]

The Debian VM has
~# uname -r
2.6.32-5-amd64

~# lspci
Code:

00:00.0 Host bridge: Intel Corporation 440BX/ZX/DX - 82443BX/ZX/DX Host bridge (rev 01)
00:01.0 PCI bridge: Intel Corporation 440BX/ZX/DX - 82443BX/ZX/DX AGP bridge (rev 01)
00:07.0 ISA bridge: Intel Corporation 82371AB/EB/MB PIIX4 ISA (rev 08)
00:07.1 IDE interface: Intel Corporation 82371AB/EB/MB PIIX4 IDE (rev 01)
00:07.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 08)
00:07.7 System peripheral: VMware Virtual Machine Communication Interface (rev 10)
00:0f.0 VGA compatible controller: VMware SVGA II Adapter
00:10.0 SCSI storage controller: LSI Logic / Symbios Logic 53c1030 PCI-X Fusion-MPT Dual Ultra320 SCSI (rev 01)
00:11.0 PCI bridge: VMware PCI bridge (rev 02)
00:15.0 PCI bridge: VMware PCI Express Root Port (rev 01)
00:15.1 PCI bridge: VMware PCI Express Root Port (rev 01)
00:15.2 PCI bridge: VMware PCI Express Root Port (rev 01)
00:15.3 PCI bridge: VMware PCI Express Root Port (rev 01)
00:15.4 PCI bridge: VMware PCI Express Root Port (rev 01)
00:15.5 PCI bridge: VMware PCI Express Root Port (rev 01)
00:15.6 PCI bridge: VMware PCI Express Root Port (rev 01)
00:15.7 PCI bridge: VMware PCI Express Root Port (rev 01)
00:16.0 PCI bridge: VMware PCI Express Root Port (rev 01)
00:16.1 PCI bridge: VMware PCI Express Root Port (rev 01)
00:16.2 PCI bridge: VMware PCI Express Root Port (rev 01)
00:16.3 PCI bridge: VMware PCI Express Root Port (rev 01)
00:16.4 PCI bridge: VMware PCI Express Root Port (rev 01)
00:16.5 PCI bridge: VMware PCI Express Root Port (rev 01)
00:16.6 PCI bridge: VMware PCI Express Root Port (rev 01)
00:16.7 PCI bridge: VMware PCI Express Root Port (rev 01)
00:17.0 PCI bridge: VMware PCI Express Root Port (rev 01)
00:17.1 PCI bridge: VMware PCI Express Root Port (rev 01)
00:17.2 PCI bridge: VMware PCI Express Root Port (rev 01)
00:17.3 PCI bridge: VMware PCI Express Root Port (rev 01)
00:17.4 PCI bridge: VMware PCI Express Root Port (rev 01)
00:17.5 PCI bridge: VMware PCI Express Root Port (rev 01)
00:17.6 PCI bridge: VMware PCI Express Root Port (rev 01)
00:17.7 PCI bridge: VMware PCI Express Root Port (rev 01)
00:18.0 PCI bridge: VMware PCI Express Root Port (rev 01)
00:18.1 PCI bridge: VMware PCI Express Root Port (rev 01)
00:18.2 PCI bridge: VMware PCI Express Root Port (rev 01)
00:18.3 PCI bridge: VMware PCI Express Root Port (rev 01)
00:18.4 PCI bridge: VMware PCI Express Root Port (rev 01)
00:18.5 PCI bridge: VMware PCI Express Root Port (rev 01)
00:18.6 PCI bridge: VMware PCI Express Root Port (rev 01)
00:18.7 PCI bridge: VMware PCI Express Root Port (rev 01)
03:00.0 Ethernet controller: VMware VMXNET3 Ethernet Controller (rev 01)
0b:00.0 Ethernet controller: VMware VMXNET3 Ethernet Controller (rev 01)


druuna 11-23-2012 02:58 AM

Thread is marked as being [SOLVED].

Taken off the zero-reply list.

