We have a server running CentOS 4.6 with an internal RAID array; the hard drives of that array are /dev/sda and /dev/sdb. We also have a PCI Fibre Channel card that hooks up to a SAN. The SAN partition is /dev/sdc.
Here is the output from fdisk:
Code:
fdisk -l
Disk /dev/sda: 250.0 GB, 250059350016 bytes
255 heads, 63 sectors/track, 30401 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          13      104391   83  Linux
/dev/sda2              14       30401   244091610   83  Linux
Disk /dev/sdb: 250.0 GB, 250059350016 bytes
255 heads, 63 sectors/track, 30401 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1   *           1         254     2040223+  82  Linux swap
Disk /dev/sdc: 1610.6 GB, 1610612736000 bytes
255 heads, 63 sectors/track, 195812 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1      195812  1572859858+  83  Linux
My issue is that I keep getting disk I/O errors for /dev/sdd.
Here are a few lines from dmesg:
Code:
SCSI device sdd: 3145728000 512-byte hdwr sectors (1610613 MB)
sdd: asking for cache data failed
sdd: assuming drive cache: write through
SCSI device sdd: 3145728000 512-byte hdwr sectors (1610613 MB)
sdd: asking for cache data failed
sdd: assuming drive cache: write through
sdd:<6>Device sdd not ready.
end_request: I/O error, dev sdd, sector 0
Buffer I/O error on device sdd, logical block 0
My /etc/fstab does not list /dev/sdd, but if I do an ls on /dev there is a single /dev/sdd device node. If I rm /dev/sdd, it reappears after a reboot. If I try to fdisk /dev/sdd, I get the following message:
Code:
fdisk /dev/sdd
Unable to read /dev/sdd
I have no idea what's going on. Somewhere something must be telling the kernel that /dev/sdd is a valid device, but I have no idea where to look. Any help/ideas would be much appreciated.
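In case it's useful, here is a small diagnostic snippet for mapping each sd device to its SCSI address (a sketch; it assumes sysfs is mounted at /sys, which should be true on this 2.6 kernel, and it guards each path in case it doesn't exist):

```shell
# Show every SCSI device the kernel has registered (if /proc/scsi is available)
if [ -r /proc/scsi/scsi ]; then
    cat /proc/scsi/scsi
fi

# Map each sd* block device to its SCSI address (host:channel:id:lun) via sysfs
for dev in /sys/block/sd*; do
    [ -e "$dev" ] || continue            # skip the literal glob if no sd devices exist
    name=$(basename "$dev")
    addr=$(readlink -f "$dev/device")    # resolves to a path ending in <h>:<c>:<i>:<l>
    echo "$name -> ${addr##*/}"
done
```

If sdd shows up on the same Fibre Channel host as sdc but on a different LUN, I'm guessing the SAN might be presenting an extra LUN (or the array's controller/management LUN), which could explain why the device node keeps coming back after a reboot.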
Thanks!