Hello
We have an issue with a backup server running Oracle Linux 6.9, kernel 2.6.39-400.298.7.el6uek.x86_64.
We created two multipath devices, mpathbb and mpathbc,
and built a RAID 1 array on top of them with mdadm like this:
Code:
mdadm --create --verbose /dev/md1 --level=1 --raid-devices=2 /dev/mapper/mpathbbp1 /dev/mapper/mpathbcp1
Everything was fine and the RAID was healthy,
but we had to reboot the server, and now all the multipath maps are gone and the md device is degraded.
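Our suspicion is that on boot mdadm assembled the array from the raw /dev/sdX paths before multipathd could claim them, so multipath can no longer build its maps on top of those disks. This is roughly how we checked which raw paths are currently "held" by md (the device names sdp and sdt are taken from the mdadm --detail output further down; the loop degrades gracefully on hosts where those devices don't exist):

```shell
# For each raw path that was a member of md1, list its "holders"
# (e.g. an md device). A path already held exclusively by md cannot
# be claimed by multipath, which would explain the failures below.
report=""
for d in sdp sdt; do
    holders=$(ls "/sys/block/$d/holders/" 2>/dev/null)
    report="$report$d: ${holders:-<no holders or device absent>}
"
done
printf '%s' "$report"
```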
Code:
mdadm --detail /dev/md1
/dev/md1:
Version : 1.2
Creation Time : Wed May 16 11:33:11 2018
Raid Level : raid1
Array Size : 418211584 (398.84 GiB 428.25 GB)
Used Dev Size : 418211584 (398.84 GiB 428.25 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Intent Bitmap : Internal
Update Time : Tue May 22 11:52:06 2018
State : active, degraded
Active Devices : 1
Working Devices : 1
Failed Devices : 1
Spare Devices : 0
Name : endor:1 (local to host endor)
UUID : 23999c17:96087b94:0d04bed5:43af255e
Events : 120363
Number   Major   Minor   RaidDevice   State
   0       0       0         0        removed
   1       8     241         1        active sync   /dev/sdp1

   0      65      49         -        faulty        /dev/sdt1
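For completeness, once the multipath maps are back we assume the recovery would be along these lines: drop the failed raw-path member and re-add the multipath partition (the internal bitmap should keep the resync short). The commands are only echoed here as a dry run, since the device names are our guesses for this host:

```shell
# Dry-run sketch: echo the recovery commands instead of executing them.
# Replace 'printf' with the real invocation once the names are confirmed.
run() { printf '+ %s\n' "$*"; }
plan=$(
    run mdadm /dev/md1 --remove /dev/sdt1
    run mdadm /dev/md1 --add /dev/mapper/mpathbcp1
)
printf '%s\n' "$plan"
```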
Trying to rediscover the multipath maps gives these errors:
Code:
multipath -r -v 2
May 22 12:15:33 | mpath target must be >= 1.5.0 to have support for 'retain_attached_hw_handler'. This feature will be disabled
May 22 12:15:33 | mpathbc: ignoring map
May 22 12:15:33 | mpath target must be >= 1.5.0 to have support for 'retain_attached_hw_handler'. This feature will be disabled
May 22 12:15:33 | mpathbc: ignoring map
May 22 12:15:33 | mpath target must be >= 1.5.0 to have support for 'retain_attached_hw_handler'. This feature will be disabled
May 22 12:15:33 | mpathbc: ignoring map
May 22 12:15:33 | mpath target must be >= 1.5.0 to have support for 'retain_attached_hw_handler'. This feature will be disabled
May 22 12:15:34 | mpathbc: ignoring map
May 22 12:15:34 | mpath target must be >= 1.5.0 to have support for 'retain_attached_hw_handler'. This feature will be disabled
May 22 12:15:34 | mpathbb: ignoring map
May 22 12:15:34 | mpath target must be >= 1.5.0 to have support for 'retain_attached_hw_handler'. This feature will be disabled
May 22 12:15:34 | mpathbb: ignoring map
May 22 12:15:34 | mpath target must be >= 1.5.0 to have support for 'retain_attached_hw_handler'. This feature will be disabled
May 22 12:15:34 | mpathbb: ignoring map
May 22 12:15:34 | mpath target must be >= 1.5.0 to have support for 'retain_attached_hw_handler'. This feature will be disabled
May 22 12:15:34 | mpathbb: ignoring map
In dmesg we see the following:
Code:
device-mapper: table: 252:15: multipath: error getting device
device-mapper: ioctl: error adding target to table
device-mapper: table: 252:15: multipath: error getting device
device-mapper: ioctl: error adding target to table
device-mapper: table: 252:15: multipath: error getting device
device-mapper: ioctl: error adding target to table
device-mapper: table: 252:15: multipath: error getting device
device-mapper: ioctl: error adding target to table
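Our reading of "error getting device" is that device-mapper could not open the underlying path because something else (presumably md) already holds it. To map the dm number from those errors (252:15) back to a map name, we tried something like this (guarded so it just reports when dmsetup is unavailable or we lack privileges):

```shell
# List each dm map with its major:minor so 252:15 can be identified;
# fall back to a message if dmsetup cannot run here.
maps=$(dmsetup info -c -o name,major,minor 2>/dev/null) \
    || maps="dmsetup not available or not permitted"
printf '%s\n' "$maps"
```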
This is the output of blkid:
Code:
blkid
/dev/sdd1: UUID="522f3262-a631-d3cf-a129-4bd444ac2207" UUID_SUB="4b342fb6-c337-4baf-6c59-47658ceb1018" LABEL="cj-s-dp06:0" TYPE="linux_raid_member"
/dev/sdc1: UUID="522f3262-a631-d3cf-a129-4bd444ac2207" UUID_SUB="f4a81505-6eb5-2a1f-989d-d7af3ee4f1a9" LABEL="cj-s-dp06:0" TYPE="linux_raid_member"
/dev/sde1: UUID="522f3262-a631-d3cf-a129-4bd444ac2207" UUID_SUB="a7638aad-0a30-5a41-ce10-b20453c87a1d" LABEL="cj-s-dp06:0" TYPE="linux_raid_member"
/dev/sda1: UUID="3658f519-1743-41d6-8b6e-b86c57070487" TYPE="ext4"
/dev/sda2: UUID="vS20Wm-Uh3S-CMZd-zuHw-LE1S-Se4i-dTTciJ" TYPE="LVM2_member"
/dev/sdi1: UUID="522f3262-a631-d3cf-a129-4bd444ac2207" UUID_SUB="c9e0a38a-f958-bcb5-a6b5-866c6d7def6a" LABEL="cj-s-dp06:0" TYPE="linux_raid_member"
/dev/sdh1: UUID="522f3262-a631-d3cf-a129-4bd444ac2207" UUID_SUB="8d94350d-fc45-eb84-88e2-2dbd88575769" LABEL="cj-s-dp06:0" TYPE="linux_raid_member"
/dev/sdj1: UUID="522f3262-a631-d3cf-a129-4bd444ac2207" UUID_SUB="90d90838-c682-3e8c-c4f4-a8e88e54e001" LABEL="cj-s-dp06:0" TYPE="linux_raid_member"
/dev/sdk1: UUID="522f3262-a631-d3cf-a129-4bd444ac2207" UUID_SUB="0d035f7f-1d87-8964-ee61-f5d24da11162" LABEL="cj-s-dp06:0" TYPE="linux_raid_member"
/dev/sdf1: UUID="522f3262-a631-d3cf-a129-4bd444ac2207" UUID_SUB="98a7658a-50cc-f97d-95f9-1794c16d3fe3" LABEL="cj-s-dp06:0" TYPE="linux_raid_member"
/dev/sdl1: UUID="522f3262-a631-d3cf-a129-4bd444ac2207" UUID_SUB="82a2e91c-e0cc-dc43-9898-92d4c9e15390" LABEL="cj-s-dp06:0" TYPE="linux_raid_member"
/dev/sdm1: UUID="522f3262-a631-d3cf-a129-4bd444ac2207" UUID_SUB="28663e8d-95fe-6903-4103-c40dbd871d55" LABEL="cj-s-dp06:0" TYPE="linux_raid_member"
/dev/sdn1: UUID="522f3262-a631-d3cf-a129-4bd444ac2207" UUID_SUB="77f865ab-466e-f3e8-f8cd-7fdacae35197" LABEL="cj-s-dp06:0" TYPE="linux_raid_member"
/dev/sdg1: UUID="522f3262-a631-d3cf-a129-4bd444ac2207" UUID_SUB="dde3fe18-af7a-4f8b-f042-700104941786" LABEL="cj-s-dp06:0" TYPE="linux_raid_member"
/dev/mapper/vg_root-lv_root: UUID="e0daa734-0c29-40d4-960e-d7d3911cb54d" TYPE="ext4"
/dev/mapper/vg_root-lv_swap: UUID="b722b74f-dbf5-4605-8b10-53076fba1208" TYPE="swap"
/dev/mapper/raid_backup_stagging-lv_backup_stagging: UUID="4d992abd-b1bf-46ba-be71-a9e44f42cd8f" TYPE="ext4"
/dev/mapper/vg_root-lv_var: UUID="a72f345b-3c10-4036-b232-9ec070e49993" TYPE="ext4"
/dev/mapper/vg_root-lv_home: UUID="2ed9b444-0339-4a42-b808-278468f33f3a" TYPE="ext4"
/dev/sdb2: UUID="6RHv8O-X03f-0pfy-ialS-7V9q-48vS-Rw1QHr" TYPE="LVM2_member"
/dev/mapper/vg_root-lv_root_mimage_0: UUID="e0daa734-0c29-40d4-960e-d7d3911cb54d" TYPE="ext4"
/dev/mapper/vg_root-lv_root_mimage_1: UUID="e0daa734-0c29-40d4-960e-d7d3911cb54d" TYPE="ext4"
/dev/md0: UUID="Cvqb0n-LemF-5lan-L8f6-61Ut-3uyE-g9ONe7" TYPE="LVM2_member"
/dev/mapper/vg_root-lv_var_mimage_0: UUID="a72f345b-3c10-4036-b232-9ec070e49993" TYPE="ext4"
/dev/mapper/vg_root-lv_var_mimage_1: UUID="a72f345b-3c10-4036-b232-9ec070e49993" TYPE="ext4"
/dev/mapper/vg_root-lv_home_mimage_0: UUID="2ed9b444-0339-4a42-b808-278468f33f3a" TYPE="ext4"
/dev/mapper/vg_root-lv_home_mimage_1: UUID="2ed9b444-0339-4a42-b808-278468f33f3a" TYPE="ext4"
/dev/md1: UUID="kXQyO8-qts3-1F28-Zwp4-c1IA-z2lr-g9AREd" TYPE="LVM2_member"
/dev/mapper/vg_nsrindex2-lv_nsrindex2: UUID="eaf134a2-19f8-4677-96ef-04597762215d" TYPE="ext4"
/dev/sdz1: UUID="23999c17-9608-7b94-0d04-bed543af255e" UUID_SUB="92ccede0-8baa-728d-3680-c409a0e59681" LABEL="endor:1" TYPE="linux_raid_member"
/dev/sdp1: UUID="23999c17-9608-7b94-0d04-bed543af255e" UUID_SUB="92ccede0-8baa-728d-3680-c409a0e59681" LABEL="endor:1" TYPE="linux_raid_member"
/dev/sdv1: UUID="23999c17-9608-7b94-0d04-bed543af255e" UUID_SUB="2b74a470-a580-59fa-58c3-77c8347ac1b4" LABEL="endor:1" TYPE="linux_raid_member"
/dev/sdab1: UUID="23999c17-9608-7b94-0d04-bed543af255e" UUID_SUB="2b74a470-a580-59fa-58c3-77c8347ac1b4" LABEL="endor:1" TYPE="linux_raid_member"
As you can see, there are plenty of linux_raid_member signatures, but they all sit on the raw /dev/sdX paths; note the duplicate UUID_SUB values (e.g. /dev/sdp1 and /dev/sdz1), which look like two paths to the same LUN.
Any idea how to recover the multipath maps, fix the RAID (which shows as faulty but whose data should really be intact), and prevent this from happening again on reboots?
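For prevention, we were wondering whether pinning the array to its multipath partitions in /etc/mdadm.conf is the right approach. A sketch of what we had in mind (the ARRAY UUID is from the mdadm --detail output above; the DEVICE line is our guess, and md0 on the local disks would presumably need its own DEVICE entries):

```
# Tentative /etc/mdadm.conf fragment -- assemble md1 only from the
# multipath partitions, never from the raw /dev/sdX paths.
# (md0's local-disk members would need additional DEVICE entries.)
DEVICE /dev/mapper/mpathbbp1 /dev/mapper/mpathbcp1
ARRAY /dev/md1 metadata=1.2 UUID=23999c17:96087b94:0d04bed5:43af255e
```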
thanks