Umm.. which raid drive died?
The home file server is set up with 2 IDE drives in software RAID1. It seems that one is sickly... but which one? I need a clue:
Is mdstat telling me that two partitions of hda are having trouble?

dave@fileshare:~$ more /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid5]
read_ahead 1024 sectors
md0 : active raid1 hdc1[1] hda1[0]
      200704 blocks [2/2] [UU]
md1 : active raid1 hdc3[1]
      3911744 blocks [2/1] [_U]
md2 : active raid1 hdc5[1] hda5[0]
      1951744 blocks [2/2] [UU]
md3 : active raid1 hdc6[1]
      149717632 blocks [2/1] [_U]
unused devices: <none>

Is raidtab telling me that hdc is in trouble?

dave@fileshare:~$ more /etc/raidtab
raiddev /dev/md0
        raid-level              1
        nr-raid-disks           2
        nr-spare-disks          0
        persistent-superblock   1
        device                  /dev/hda1
        raid-disk               0
        device                  /dev/hdc1
        failed-disk             1
raiddev /dev/md1
        raid-level              1
        nr-raid-disks           2
        nr-spare-disks          0
        persistent-superblock   1
        device                  /dev/hda3
        raid-disk               0
        device                  /dev/hdc3
        failed-disk             1
raiddev /dev/md2
        raid-level              1
        nr-raid-disks           2
        nr-spare-disks          0
        persistent-superblock   1
        device                  /dev/hda5
        raid-disk               0
        device                  /dev/hdc5
        failed-disk             1
raiddev /dev/md3
        raid-level              1
        nr-raid-disks           2
        nr-spare-disks          0
        persistent-superblock   1
        device                  /dev/hda6
        raid-disk               0
        device                  /dev/hdc6
        failed-disk             1

I'm confused and need a clue. I'm presuming that /etc/raidtab gets updated with the 'failed-disk' entry automatically. This is a pretty old Slackware system... 9.1 I think? 2.4 kernel.

dave@fileshare:/var/log/packages$ ls *raid*
raidtools-1.00.3-i386-1

dave@fileshare:~$ more /etc/fstab
/dev/hda2        swap             swap        defaults              0   0
/dev/md1         /                ext2        defaults              1   1
/dev/md0         /boot            ext2        defaults              1   2
/dev/md2         /home            ext2        defaults              1   2
/dev/md3         /share           ext2        defaults              1   2
/dev/cdrom       /mnt/cdrom       iso9660     noauto,owner,user,ro  0   0
/dev/fd0         /mnt/floppy      auto        noauto,owner,user     0   0
devpts           /dev/pts         devpts      gid=5,mode=620        0   0
proc             /proc            proc        defaults              0   0

TIA
-dave
Looks like /dev/hda3 and /dev/hda6. mdstat lists the *active* devices: once a member has failed, the kernel drops it from the array, so as far as md is concerned it doesn't exist any more, just like a new drive in a box on your desk doesn't exist. Comparing against raidtab (a generally useless file that isn't actually needed), those are the two partitions missing from the active list. dmesg will probably tell you more useful information, assuming it's still in the kernel ring buffer: "dmesg | grep hda" may well show you the actual failure messages.
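The comparison above can also be done mechanically: in /proc/mdstat an underscore in the [..] status flags marks a missing member, and the "mdN : active ..." line just above it shows which members are still present. A minimal sketch, with the mdstat output from the post embedded so it runs standalone (on the real box you'd read /proc/mdstat instead):

```shell
# Sample /proc/mdstat content, copied from the post, so this
# example is self-contained.
mdstat='md0 : active raid1 hdc1[1] hda1[0]
      200704 blocks [2/2] [UU]
md1 : active raid1 hdc3[1]
      3911744 blocks [2/1] [_U]
md2 : active raid1 hdc5[1] hda5[0]
      1951744 blocks [2/2] [UU]
md3 : active raid1 hdc6[1]
      149717632 blocks [2/1] [_U]'

# [_U] means a two-member RAID1 running with one member gone.
# -B1 also prints the line above each match, which names the
# members that are still active (here only hdc3 and hdc6 --
# so hda3 and hda6 are the ones that dropped out).
printf '%s\n' "$mdstat" | grep -B1 '\[_U\]'
```

On the live system the same check is just `grep -B1 '\[_U\]' /proc/mdstat`.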