Old 05-22-2018, 05:27 AM   #1
mariogarcia
Member
 
Registered: Sep 2005
Distribution: debian, solaris 10
Posts: 202

Rep: Reputation: 31
multipath devices not present after reboot.


Hello,

We have an issue with a backup server running Oracle Linux 6.9, kernel 2.6.39-400.298.7.el6uek.x86_64.

We have created two multipath devices, mpathbb and mpathbc, and we have created a RAID 1 on them using mdadm like this:

Code:
mdadm --create --verbose /dev/md1 --level=1 --raid-devices=2 /dev/mapper/mpathbbp1 /dev/mapper/mpathbcp1

Everything was fine and the RAID was OK, but we had to reboot the server and now all the multipath maps are gone and the md devices are faulty.
Code:
mdadm --detail /dev/md1

/dev/md1:
        Version : 1.2
  Creation Time : Wed May 16 11:33:11 2018
     Raid Level : raid1
     Array Size : 418211584 (398.84 GiB 428.25 GB)
  Used Dev Size : 418211584 (398.84 GiB 428.25 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Tue May 22 11:52:06 2018
          State : active, degraded
 Active Devices : 1
Working Devices : 1
 Failed Devices : 1
  Spare Devices : 0

           Name : endor:1  (local to host endor)
           UUID : 23999c17:96087b94:0d04bed5:43af255e
         Events : 120363

    Number   Major   Minor   RaidDevice State
       0       0        0        0      removed
       1       8      241        1      active sync   /dev/sdp1

       0      65       49        -      faulty   /dev/sdt1

Trying to rediscover the multipaths gives these errors:

multipath -r -v 2

Code:
May 22 12:15:33 | mpath target must be >= 1.5.0 to have support for 'retain_attached_hw_handler'. This feature will be disabled
May 22 12:15:33 | mpathbc: ignoring map
May 22 12:15:33 | mpath target must be >= 1.5.0 to have support for 'retain_attached_hw_handler'. This feature will be disabled
May 22 12:15:33 | mpathbc: ignoring map
May 22 12:15:33 | mpath target must be >= 1.5.0 to have support for 'retain_attached_hw_handler'. This feature will be disabled
May 22 12:15:33 | mpathbc: ignoring map
May 22 12:15:33 | mpath target must be >= 1.5.0 to have support for 'retain_attached_hw_handler'. This feature will be disabled
May 22 12:15:34 | mpathbc: ignoring map
May 22 12:15:34 | mpath target must be >= 1.5.0 to have support for 'retain_attached_hw_handler'. This feature will be disabled
May 22 12:15:34 | mpathbb: ignoring map
May 22 12:15:34 | mpath target must be >= 1.5.0 to have support for 'retain_attached_hw_handler'. This feature will be disabled
May 22 12:15:34 | mpathbb: ignoring map
May 22 12:15:34 | mpath target must be >= 1.5.0 to have support for 'retain_attached_hw_handler'. This feature will be disabled
May 22 12:15:34 | mpathbb: ignoring map
May 22 12:15:34 | mpath target must be >= 1.5.0 to have support for 'retain_attached_hw_handler'. This feature will be disabled
May 22 12:15:34 | mpathbb: ignoring map

In dmesg we see the following:

Code:
device-mapper: table: 252:15: multipath: error getting device
device-mapper: ioctl: error adding target to table
device-mapper: table: 252:15: multipath: error getting device
device-mapper: ioctl: error adding target to table
device-mapper: table: 252:15: multipath: error getting device
device-mapper: ioctl: error adding target to table
device-mapper: table: 252:15: multipath: error getting device
device-mapper: ioctl: error adding target to table
this is the output of blkid:

Code:
blkid
/dev/sdd1: UUID="522f3262-a631-d3cf-a129-4bd444ac2207" UUID_SUB="4b342fb6-c337-4baf-6c59-47658ceb1018" LABEL="cj-s-dp06:0" TYPE="linux_raid_member"
/dev/sdc1: UUID="522f3262-a631-d3cf-a129-4bd444ac2207" UUID_SUB="f4a81505-6eb5-2a1f-989d-d7af3ee4f1a9" LABEL="cj-s-dp06:0" TYPE="linux_raid_member"
/dev/sde1: UUID="522f3262-a631-d3cf-a129-4bd444ac2207" UUID_SUB="a7638aad-0a30-5a41-ce10-b20453c87a1d" LABEL="cj-s-dp06:0" TYPE="linux_raid_member"
/dev/sda1: UUID="3658f519-1743-41d6-8b6e-b86c57070487" TYPE="ext4"
/dev/sda2: UUID="vS20Wm-Uh3S-CMZd-zuHw-LE1S-Se4i-dTTciJ" TYPE="LVM2_member"
/dev/sdi1: UUID="522f3262-a631-d3cf-a129-4bd444ac2207" UUID_SUB="c9e0a38a-f958-bcb5-a6b5-866c6d7def6a" LABEL="cj-s-dp06:0" TYPE="linux_raid_member"
/dev/sdh1: UUID="522f3262-a631-d3cf-a129-4bd444ac2207" UUID_SUB="8d94350d-fc45-eb84-88e2-2dbd88575769" LABEL="cj-s-dp06:0" TYPE="linux_raid_member"
/dev/sdj1: UUID="522f3262-a631-d3cf-a129-4bd444ac2207" UUID_SUB="90d90838-c682-3e8c-c4f4-a8e88e54e001" LABEL="cj-s-dp06:0" TYPE="linux_raid_member"
/dev/sdk1: UUID="522f3262-a631-d3cf-a129-4bd444ac2207" UUID_SUB="0d035f7f-1d87-8964-ee61-f5d24da11162" LABEL="cj-s-dp06:0" TYPE="linux_raid_member"
/dev/sdf1: UUID="522f3262-a631-d3cf-a129-4bd444ac2207" UUID_SUB="98a7658a-50cc-f97d-95f9-1794c16d3fe3" LABEL="cj-s-dp06:0" TYPE="linux_raid_member"
/dev/sdl1: UUID="522f3262-a631-d3cf-a129-4bd444ac2207" UUID_SUB="82a2e91c-e0cc-dc43-9898-92d4c9e15390" LABEL="cj-s-dp06:0" TYPE="linux_raid_member"
/dev/sdm1: UUID="522f3262-a631-d3cf-a129-4bd444ac2207" UUID_SUB="28663e8d-95fe-6903-4103-c40dbd871d55" LABEL="cj-s-dp06:0" TYPE="linux_raid_member"
/dev/sdn1: UUID="522f3262-a631-d3cf-a129-4bd444ac2207" UUID_SUB="77f865ab-466e-f3e8-f8cd-7fdacae35197" LABEL="cj-s-dp06:0" TYPE="linux_raid_member"
/dev/sdg1: UUID="522f3262-a631-d3cf-a129-4bd444ac2207" UUID_SUB="dde3fe18-af7a-4f8b-f042-700104941786" LABEL="cj-s-dp06:0" TYPE="linux_raid_member"
/dev/mapper/vg_root-lv_root: UUID="e0daa734-0c29-40d4-960e-d7d3911cb54d" TYPE="ext4"
/dev/mapper/vg_root-lv_swap: UUID="b722b74f-dbf5-4605-8b10-53076fba1208" TYPE="swap"
/dev/mapper/raid_backup_stagging-lv_backup_stagging: UUID="4d992abd-b1bf-46ba-be71-a9e44f42cd8f" TYPE="ext4"
/dev/mapper/vg_root-lv_var: UUID="a72f345b-3c10-4036-b232-9ec070e49993" TYPE="ext4"
/dev/mapper/vg_root-lv_home: UUID="2ed9b444-0339-4a42-b808-278468f33f3a" TYPE="ext4"
/dev/sdb2: UUID="6RHv8O-X03f-0pfy-ialS-7V9q-48vS-Rw1QHr" TYPE="LVM2_member"
/dev/mapper/vg_root-lv_root_mimage_0: UUID="e0daa734-0c29-40d4-960e-d7d3911cb54d" TYPE="ext4"
/dev/mapper/vg_root-lv_root_mimage_1: UUID="e0daa734-0c29-40d4-960e-d7d3911cb54d" TYPE="ext4"
/dev/md0: UUID="Cvqb0n-LemF-5lan-L8f6-61Ut-3uyE-g9ONe7" TYPE="LVM2_member"
/dev/mapper/vg_root-lv_var_mimage_0: UUID="a72f345b-3c10-4036-b232-9ec070e49993" TYPE="ext4"
/dev/mapper/vg_root-lv_var_mimage_1: UUID="a72f345b-3c10-4036-b232-9ec070e49993" TYPE="ext4"
/dev/mapper/vg_root-lv_home_mimage_0: UUID="2ed9b444-0339-4a42-b808-278468f33f3a" TYPE="ext4"
/dev/mapper/vg_root-lv_home_mimage_1: UUID="2ed9b444-0339-4a42-b808-278468f33f3a" TYPE="ext4"
/dev/md1: UUID="kXQyO8-qts3-1F28-Zwp4-c1IA-z2lr-g9AREd" TYPE="LVM2_member"
/dev/mapper/vg_nsrindex2-lv_nsrindex2: UUID="eaf134a2-19f8-4677-96ef-04597762215d" TYPE="ext4"
/dev/sdz1: UUID="23999c17-9608-7b94-0d04-bed543af255e" UUID_SUB="92ccede0-8baa-728d-3680-c409a0e59681" LABEL="endor:1" TYPE="linux_raid_member"
/dev/sdp1: UUID="23999c17-9608-7b94-0d04-bed543af255e" UUID_SUB="92ccede0-8baa-728d-3680-c409a0e59681" LABEL="endor:1" TYPE="linux_raid_member"
/dev/sdv1: UUID="23999c17-9608-7b94-0d04-bed543af255e" UUID_SUB="2b74a470-a580-59fa-58c3-77c8347ac1b4" LABEL="endor:1" TYPE="linux_raid_member"
/dev/sdab1: UUID="23999c17-9608-7b94-0d04-bed543af255e" UUID_SUB="2b74a470-a580-59fa-58c3-77c8347ac1b4" LABEL="endor:1" TYPE="linux_raid_member"
As you can see, there are many linux_raid_member devices.

Any idea how to recover the multipath devices, fix the RAID that appears faulty (but really isn't), and prevent this from happening on reboots?

Thanks
 
Old 05-23-2018, 02:19 PM   #2
MensaWater
LQ Guru
 
Registered: May 2005
Location: Atlanta Georgia USA
Distribution: Redhat (RHEL), CentOS, Fedora, CoreOS, Debian, FreeBSD, HP-UX, Solaris, SCO
Posts: 7,831
Blog Entries: 15

Rep: Reputation: 1669
Why are you doing metadisks on top of multipath? Aren't the disk paths already being presented from an array where they're in a RAID or similar grouping? Are you getting disks from two separate arrays?

Anyway, you didn't post your multipath.conf file and folks would need that. You also didn't say what the source disks are coming from.

The mpathbb and mpathbc names are "user friendly names". Unless you explicitly defined the names it is possible they simply changed on reboot (e.g. to something like mpathbd and mpathbe). If they did that your metadisk config has the wrong names. You CAN and should explicitly assign names in multipath.conf to ensure the multipath name is the same every time if you intend to use it in something like metadisks. We do explicit names even for LVM although LVM will find its information on even renamed devices.

What does lsscsi output? The /dev/sd* devices it shows should be the components of the multipath device. I usually like to start at that level.
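
For illustration only (a sketch; the WWIDs below are placeholders, not this system's), an explicit-alias stanza would look roughly like this, and "multipath -ll" would then show which /dev/sd* paths sit behind each alias:

Code:
# /etc/multipath.conf -- hypothetical example; substitute the WWIDs reported by
# /lib/udev/scsi_id --whitelisted --device=/dev/sdX for your two LUNs
multipaths {
        multipath {
                wwid    360000000000000000000000000000001
                alias   backup_mirror_a
        }
        multipath {
                wwid    360000000000000000000000000000002
                alias   backup_mirror_b
        }
}
The mdadm array would then be built on /dev/mapper/backup_mirror_a and /dev/mapper/backup_mirror_b, so the member names cannot drift between reboots.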
 
Old 06-13-2018, 09:50 AM   #3
mariogarcia
Member
 
Registered: Sep 2005
Distribution: debian, solaris 10
Posts: 202

Original Poster
Rep: Reputation: 31
Quote:
Originally Posted by MensaWater
Why are you doing metadisks on top of multipath? Aren't the disk paths already being presented from an array where they're in a RAID or similar grouping? Are you getting disks from two separate arrays?

Anyway, you didn't post your multipath.conf file and folks would need that. You also didn't say what the source disks are coming from.

The mpathbb and mpathbc names are "user friendly names". Unless you explicitly defined the names it is possible they simply changed on reboot (e.g. to something like mpathbd and mpathbe). If they did that your metadisk config has the wrong names. You CAN and should explicitly assign names in multipath.conf to ensure the multipath name is the same every time if you intend to use it in something like metadisks. We do explicit names even for LVM although LVM will find its information on even renamed devices.

What does lsscsi output? The /dev/sd* devices it shows should be the components of the multipath device. I usually like to start at that level.
Hello, sorry for the delay.

It seems the reason for the md devices is that the disks come from two different arrays, one on the disaster recovery site and one on the main site; by creating an md mirror across them we ensure that the data stays intact if one site fails. I just inherited this configuration; I would not have done things this way.

The main problem, I think, is a race between mdadm and multipath: when the server boots, mdadm grabs the first /dev/sd* devices it finds, and then multipath cannot create its maps because the devices are busy.
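
One way to take the raw /dev/sd* paths out of that race (just a sketch, not tested on this box) is to restrict what mdadm is allowed to scan in /etc/mdadm.conf, so auto-assembly can only ever use the multipath maps:

Code:
# /etc/mdadm.conf -- hypothetical example
# Only consider device-mapper nodes when scanning for array members
DEVICE /dev/mapper/*
# Pin the array to its UUID so changing /dev/sdX names don't matter
ARRAY /dev/md1 UUID=23999c17:96087b94:0d04bed5:43af255e
The initramfs would then normally need rebuilding (e.g. dracut -f) so the early-boot copy of mdadm.conf picks up the change.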

I have fine-tuned the multipath.conf; the version installed was a vanilla one without any configuration.

Here is the output:

Code:
[root@endor adm_garcimo]# cat /etc/multipath.conf

## IMPORTANT for OVS do not blacklist all devices by default.
#blacklist {
#        devnode "*"
#}

## By default, devices with vendor = "IBM" and product = "S/390.*" are
## blacklisted. To enable mulitpathing on these devies, uncomment the
## following lines.
#blacklist_exceptions {
#       device {
#               vendor  "IBM"
#               product "S/390.*"
#       }
#}
#blacklist_exceptions {
#       device {
#               vendor  "DGC"
#               product "VRAID*"
#       }
#}

## IMPORTANT for OVS this must be no. OVS does not support user friendly
## names and instead uses the WWIDs as names.
defaults {
        user_friendly_names yes
        getuid_callout "/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/%n"
        path_grouping_policy    multibus

}

# List of device names to discard as not multipath candidates
#
## IMPORTANT for OVS do not remove the black listed devices.
blacklist {
#       devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st|nbd)[0-9]*"
#       devnode "^hd[a-z][0-9]*"
#       devnode "^sd[a-n][0-9]*"
wwid 3600605b00422aa801cdddf382fef19c9
wwid 3600605b00422aa801cdce5ea53cfb37f
wwid 3600605b00422aa801cdcad7bf6b8ae52
wwid 3600605b00422aa8016dc9b0117da9efb
wwid 3600605b00422aa8016dc9b0b1875b07e
wwid 3600605b00422aa8016dc9b171920aeac
wwid 3600605b00422aa8016dc9b1f19a5f38f
wwid 3600605b00422aa8016dc9b281a23ab1c
wwid 3600605b00422aa8016dc9b301aa166f8
wwid 3600605b00422aa8016dc9b381b1e4a3d
wwid 3600605b00422aa8016dc9b401b9c0458
wwid 3600605b00422aa8016dc9b481c18e680
wwid 3600605b00422aa8016dc9b511c96a135
wwid 3600605b00422aa8016dc9b591d1533c7
wwid 3600605b00422aa801b734117b88ff849
wwid 3600605b00422aa8016dc9b6a1e132b56
wwid 3600605b00422aa801b725669bba7e06f
#       devnode "^etherd"
#       devnode "^nvme.*"
#        %include "/etc/blacklisted.wwids"
}

##
## Here is an example of how to configure some standard options.
##
#
#defaults {
#       udev_dir                /dev
#       polling_interval        10
#       selector                "round-robin 0"
#       path_grouping_policy    multibus
#       getuid_callout          "/lib/udev/scsi_id --whitelisted --device=/dev/%n"
#       prio                    alua
#       path_checker            readsector0
#       rr_min_io               100
#       max_fds                 8192
#       rr_weight               priorities
#       failback                immediate
#       no_path_retry           fail
#       user_friendly_names     no
#}
##
## The wwid line in the following blacklist section is shown as an example
## of how to blacklist devices by wwid.  The 2 devnode lines are the
## compiled in default blacklist. If you want to blacklist entire types
## of devices, such as all scsi devices, you should use a devnode line.
## However, if you want to blacklist specific devices, you should use
## a wwid line.  Since there is no guarantee that a specific device will
## not change names on reboot (from /dev/sda to /dev/sdb for example)
## devnode lines are not recommended for blacklisting specific devices.
##
#blacklist {
#       wwid 26353900f02796769
#       devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
#       devnode "^hd[a-z]"
#       devnode "^sd[ab]"
#}
#multipaths {
#       multipath {
#               wwid                    3600508b4000156d700012000000b0000
#               alias                   yellow
#               path_grouping_policy    multibus
#               path_selector           "round-robin 0"
#               failback                manual
#               rr_weight               priorities
#               no_path_retry           10
#       }
#       multipath {
#               wwid                    1DEC_____321816758474
#               alias                   red
#       }
#}
devices {
#       device {
#               vendor                  "COMPAQ  "
#               product                 "HSV110 (C)COMPAQ"
#               path_grouping_policy    multibus
#               getuid_callout          "/lib/udev/scsi_id --whitelisted --device=/dev/%n"
#               path_checker            readsector0
#               path_selector           "round-robin 0"
#               hardware_handler        "0"
#               failback                15
#               rr_weight               priorities
#               no_path_retry           queue
#       }
#       device {
#               vendor                  "COMPAQ  "
#               product                 "MSA1000         "
#               path_grouping_policy    multibus
#       }
#



        #
        # IBM DS4100 :: Active-Passive
        # IBM storage expert says functionally equivalent to DS4300
        #
        device {
                vendor                  "IBM"
                product                 "1724-100"
                hardware_handler        "1 rdac"
                path_grouping_policy    group_by_prio
                prio                    rdac
                path_checker            rdac
                no_path_retry           10
        }


        #
        # IBM DS4400 (FAStT700) :: Active-Passive
        # Verified @ Waltham, IBM
        #
        device {
                vendor                  "IBM"
                product                 "1742-900"
                hardware_handler        "1 rdac"
                path_grouping_policy    group_by_prio
                prio                    "rdac"
                failback                immediate
                path_checker            rdac
                no_path_retry           10
        }


        #
        # IBM XIV Nextra - combined iSCSI/FC :: Active-Active
        # Verified @ Waltham, IBM
        #
        device {
                vendor                  "XIV"
                product                 "NEXTRA"
                path_grouping_policy    multibus
                rr_min_io               1000
                path_checker            tur
                failback                immediate
                no_path_retry           10
        }

        #
        # Re-branded XIV Nextra
        #
        device {
                vendor                  "IBM"
                product                 "2810XIV"
                path_grouping_policy    multibus
                rr_min_io               1000
                path_checker            tur
                failback                immediate
                no_path_retry           10
        }

        #
        #       HP MSA1510i     :: Active-Active. Latest firmare (v2.00) supports Linux.
        #       Tested @ HP, Marlboro facility.
        #
        device {
                vendor                  "HP"
                product                 "MSA1510i VOLUME"
                path_grouping_policy    group_by_prio
                path_checker            tur
                prio                    "alua"
                no_path_retry           10
        }


        #
        #       DataCore SANmelody FC and iSCSI :: Active-Passive
        #
        device {
                vendor                  "DataCore"
                product                 "SAN*"
                path_grouping_policy    failover
                path_checker            tur
                failback                10
                no_path_retry           10
        }

        #
        #       EqualLogic iSCSI :: Active-Passive
        #
        device {
                vendor                  "EQLOGIC"
                product                 "100E-00"
                path_grouping_policy    failover
                failback                immediate
                no_path_retry           10
        }

        #
        #       Compellent FC :: Active-Active
        #
        device {
                vendor                  "COMPELNT"
                product                 "Compellent *"
                path_grouping_policy    multibus
                path_checker            tur
                failback                immediate
                rr_min_io               1024
                no_path_retry           10
        }

        #
        #       FalconStor :: Active-Active
        #
        device {
                vendor                  "FALCON"
                product                 ".*"
                path_grouping_policy    multibus
                failback                immediate
                no_path_retry           10
        }

        #
        #     EMD FC (ES 12F) and iSCSI (SA 16i) :: Active-Active
        #     Tested in-house.
        device {
                vendor                  "EMD.*"
                product                 "ASTRA (ES 12F)|(SA 16i)"
                path_grouping_policy    failover
                failback                immediate
                path_checker            tur
                no_path_retry           10
       }
        #
        #       Fujitsu :: Active-Passive (ALUA)
        #
        device {
                vendor                  "FUJITSU"
                product                 "E[234]000"
                path_grouping_policy    group_by_prio
                prio                    "alua"
                failback                immediate
                no_path_retry           10
                path_checker            tur
        }
        #
        #       Fujitsu :: Active-Active
        #
        device {
                vendor                  "FUJITSU"
                product                 "E[68]000"
                path_grouping_policy    multibus
                failback                immediate
                no_path_retry           10
                path_checker            tur
        }
        #
        #       JetStor :: Active-Active
        #       Tested in-house.
        device {
                vendor                  "AC&Ncorp"
                product                 "JetStorSAS516iS"
                path_grouping_policy    multibus
                failback                15
                no_path_retry           10
                rr_weight               priorities
                path_checker            tur
        }
        #
        #       Xyratex/Overland :: Active-Active
        #       Tested in-house
        #
        device {
                vendor                  "XYRATEX"
                product                 "F5402E|[FE]5412E|[FE]5404E|F6512E|[FEI]6500E"
                path_grouping_policy    failover
                failback                3
                no_path_retry           10
                path_checker            tur
        }

        device {
                vendor "FUJITSU"
                product "ETERNUS_DXM|ETERNUS_DXL|ETERNUS_DX400|ETERNUS_DX8000"
                prio alua
                path_grouping_policy group_by_prio
                path_selector "round-robin 0"
                failback immediate
                no_path_retry 10
         }

        #
        #       Revert to pre rel6 settings for OVS
        #
        device {
                vendor "NETAPP"
                product "LUN.*"
                dev_loss_tmo 50
        }

        device {
                vendor          "ATA*"
                product         ".*"
        }


        device {
                vendor "DGC"
                product "*"
                path_grouping_policy group_by_prio
                path_checker emc_clariion
                path_selector "round-robin 0"
                features "1 queue_if_no_path"
                prio emc
                hardware_handler "1 emc"
                no_path_retry 60
                failback immediate
                rr_weight uniform
                rr_min_io 1000
        }

}


multipaths {
        multipath {
                        wwid    36006016007a04200ac531a5b375e224c
                        alias   nsrindex-DC
                }
        multipath {
                        wwid    36006016005b04200ac541a5bda232eb4
                        alias   nsrindex-DRC
                }
}

blacklist_exceptions {
        wwid    "36006016005b04200ac541a5bda232eb4"
        wwid    "36006016007a04200ac531a5b375e224c"
}
If I create an mdadm array on the /dev/mapper/nsrindex-LVM... devices and reboot, the mdadm array ends up on /dev/sdX devices like this:

Code:
 mdadm --detail /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Mon Jun 11 15:14:47 2018
        Raid Level : raid1
        Array Size : 419298304 (399.87 GiB 429.36 GB)
     Used Dev Size : 419298304 (399.87 GiB 429.36 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Tue Jun 12 15:25:02 2018
             State : clean
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : unknown

              Name : hostname  (local to host hostname)
              UUID : d2779403:bd8d370b:bdea907e:bb0e3c72
            Events : 567

    Number   Major   Minor   RaidDevice State
       0      65        0        0      active sync   /dev/sdq
       1       8      160        1      active sync   /dev/sdk
Is there a way to create a systemd config that will force mdadm not to assemble the RAID until multipathd has finished?

We also tried EMC PowerPath, as the SAN is a VNX, but the results are the same: after a reboot the md devices sit on different /dev/sdX devices each time.

Would there be a way to hide the /dev/sdX devices that are part of a multipath map?

Thank you
 
Old 06-13-2018, 12:06 PM   #4
MensaWater
LQ Guru
 
Registered: May 2005
Location: Atlanta Georgia USA
Distribution: Redhat (RHEL), CentOS, Fedora, CoreOS, Debian, FreeBSD, HP-UX, Solaris, SCO
Posts: 7,831
Blog Entries: 15

Rep: Reputation: 1669
You could look at the files provided by the mdadm package (rpm -ql mdadm) and see if making dm-multipath a requirement helps. I see on RHEL7 that the package also ships udev configuration, so exploring that might be helpful. I don't do a lot with metadisks myself.
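
A quick way to look at those pieces (just the generic commands, nothing specific to this box):

Code:
# List the files the mdadm package installs and pick out any udev rules
rpm -ql mdadm | grep -E 'udev|rules\.d'
# See which installed udev rules mention mdadm
grep -rl mdadm /lib/udev/rules.d/ /etc/udev/rules.d/ 2>/dev/null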

I found one post online that suggested the device is getting busied out because it had a partition table on it, even though the entire disk is being used rather than the partition. You might check for that.
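
To check for that (a sketch only; /dev/sdX is a placeholder for the suspect path, and wipefs is destructive):

Code:
# Read-only checks for a leftover partition table / old signature on the raw path
parted -s /dev/sdX print
blkid -p /dev/sdX
# If a stale signature is confirmed and the whole disk really is the member,
# it could be cleared with something like:
#   wipefs -a /dev/sdX      (destructive -- double-check the device first)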

Another idea I found online is to just get rid of the multipath.conf setup and use mdadm for the multipathing as well as the metadisks:
https://access.redhat.com/documentat...-s390info-raid
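
If you go that route, the command would look something like the following (a sketch only; md's MULTIPATH personality is legacy, and /dev/sdX, /dev/sdY are placeholders for two paths to the same LUN):

Code:
# Let mdadm manage the two paths to one LUN as a single multipath md device
mdadm --create /dev/md2 --level=multipath --raid-devices=2 /dev/sdX /dev/sdY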

I've not worked extensively with mdadm so can't really offer much on that.

As regards your multipath.conf, I was suggesting that IF your mdadm is relying on names (e.g. your "user friendly" mpathbb) you might want to set up multipath.conf to set the name to the same thing every time based on the device's SCSI ID. You have a couple of those defined already, but I'm not sure if those are the ones you previously listed with other names or if they are something else:
Code:
multipaths {
        multipath {
                        wwid    36006016007a04200ac531a5b375e224c
                        alias   nsrindex-DC
                }
        multipath {
                        wwid    36006016005b04200ac541a5bda232eb4
                        alias   nsrindex-DRC
                }
}
Here in our LVM config we would pvcreate the alias names from that multipaths section and then use those names in the VG(s) we create.
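
Roughly like this, using the aliases already defined in the posted multipath.conf (a sketch only; vg_backup is a made-up VG name):

Code:
# Create PVs on the stable multipath aliases, then build the VG on them
pvcreate /dev/mapper/nsrindex-DC /dev/mapper/nsrindex-DRC
vgcreate vg_backup /dev/mapper/nsrindex-DC /dev/mapper/nsrindex-DRC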

Another alternative, and the way I would go if it were me:
Note that LVM works fine adding devices from multipath to a mirror, so you might be able to use LVM for your mirroring of the disks rather than mdadm. We mirrored with LVM across disparate arrays (Hitachi VSP and Pure FlashArray) when we were migrating from one to the other and saw no issues doing that. We added the Pure FlashArray multipath devices to a mirror with the existing Hitachi multipath devices; once the mirror was synced we removed the Hitachi devices from the mirror, but there is no reason you couldn't leave them there if you were keeping both arrays.
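
A minimal sketch of what that can look like (vg_backup/lv_backup and the size are placeholders; with only two PVs the classic mirror log may need --mirrorlog core or a small third device, so check the EL6 lvm man pages first):

Code:
# Create a mirrored LV with one leg on each array's multipath device...
lvcreate -m 1 -L 398G -n lv_backup vg_backup \
    /dev/mapper/nsrindex-DC /dev/mapper/nsrindex-DRC
# ...or convert an existing linear LV so its mirror copy lands on the DR array
lvconvert -m 1 vg_backup/lv_backup /dev/mapper/nsrindex-DRC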

P.S. You mention EMC PowerPath. On their more sophisticated arrays (i.e. not CLARiiON) EMC used to offer SRDF for syncing disks across arrays. That software (like all EMC software) costs extra, but it would be one way to sync between the live production and remote DR sites without having to mount the DR array on the local server. I imagine the performance of what you're doing is problematic if these sites are geographically separate, as they should be for "DR".

Last edited by MensaWater; 06-13-2018 at 12:12 PM.
 
  

