LinuxQuestions.org


vmm_sys 08-07-2007 03:35 AM

Debian etch, IBM SAN 4700, qlogic. How to change multipath to active/passive?
 
Hi, at our firm we installed Debian etch 4.0r0 (amd64) on an IBM HS21 blade with a QLogic 2422 HBA, connected to an IBM DS4700 SAN. Our problem is that at the moment the connection to the SAN is active/active (round-robin).
This produces connection errors in the IBM DS4000 Storage Manager client ("host side port (link) has been detected as down..."). We believe that because of the active/active round-robin, the Storage Manager thinks there is a problem with the link, so we would like to change the active/active setup to active/passive through multipath. I also don't know whether the Storage Manager can be configured so that it doesn't treat this as a problem.

Here are some details of what I did on the blade after the basic Debian installation:

- Installed the Debian package firmware-qlogic with apt-get, ran update-initramfs -u, then rebooted.
- Installed multipath-tools; afterwards the SAN disk is visible through /dev/mapper/<scsi_id>.
- Created a filesystem on this disk: pvcreate /dev/mapper/<scsi_id>, pvscan, then a logical volume in a volume group on the SAN disk.
- Adjusted /etc/fstab, created a new mount point with mkdir, and mounted the new filesystem.
This works fine; I can use the filesystem without any problem, create files and directories, and everything still works after a reboot.
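
For reference, this is roughly the command sequence; the volume group, logical volume and mount point names (and the ext3 filesystem type) are just examples here, not necessarily what we used:

Code:

# QLogic firmware, then rebuild the initramfs so the HBA comes up at boot
apt-get install firmware-qlogic
update-initramfs -u
reboot

# multipath layer; the SAN disk then appears under /dev/mapper/<scsi_id>
apt-get install multipath-tools

# LVM on top of the multipath device (vg/lv names are examples)
pvcreate /dev/mapper/<scsi_id>
pvscan
vgcreate sanvg /dev/mapper/<scsi_id>
lvcreate -L 5G -n sanlv sanvg
mkfs.ext3 /dev/sanvg/sanlv

# mount point and fstab entry, then mount
mkdir /mnt/san
echo "/dev/sanvg/sanlv /mnt/san ext3 defaults 0 2" >> /etc/fstab
mount /mnt/san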

But then my colleague warned me about the error messages in the storage manager.

Here is some output from multipath:

Code:

sudo multipath -ll
3600a0b800026c6320000054c469b17be dm-4 IBM,1814 FAStT
[size=10G][features=0][hwhandler=0]
\_ round-robin 0 [prio=1][active]
 \_ 5:0:0:1 sda 8:0  [active][ready]
\_ round-robin 0 [prio=1][enabled]
 \_ 8:0:0:1 sde 8:64 [active][ready]


Code:

sudo multipath -l
3600a0b800026c6320000054c469b17be dm-4 IBM,1814 FAStT
[size=10G][features=0][hwhandler=0]
\_ round-robin 0 [prio=0][active]
 \_ 5:0:0:1 sda 8:0  [active][undef]
\_ round-robin 0 [prio=0][enabled]
 \_ 8:0:0:1 sde 8:64 [active][undef]

I don't know how to change the multipath settings to get an active/passive configuration. I know Debian etch ships an example multipath.conf that can be adapted (/usr/share/doc/multipath-tools/examples/multipath.conf.synthetic). I already copied it to /etc/multipath.conf, but haven't changed anything in it yet.
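
From reading the example file, I suspect I need a devices entry roughly like the one below. This is untested on our setup: the vendor/product strings come from the multipath -ll output above, but the rdac prioritizer and the other values are just my guess from the multipath-tools documentation:

Code:

devices {
        device {
                # matches the "IBM,1814 FAStT" shown by multipath -ll
                vendor                  "IBM"
                product                 "1814"
                # put the paths in separate priority groups instead of
                # one round-robin group, so only one path carries I/O
                path_grouping_policy    group_by_prio
                # RDAC prioritizer: prefers the controller that owns the LUN
                prio_callout            "/sbin/mpath_prio_rdac /dev/%n"
                path_checker            tur
                failback                immediate
        }
}

After editing I would flush and rebuild the maps (multipath -F, then multipath -v2, or simply reboot) and check multipath -ll again; if the prioritizer works, the two path groups should show different priorities instead of both being [prio=1], and I/O should stay on the owning controller's path.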

If anyone knows how to solve this problem (either in Debian or in the Storage Manager), please let me know.

Thanks

john.p 01-24-2008 11:15 PM

Any solution?
 
Did you ever find a solution to this problem?

The thing is that I'm about to set up multipath on Debian etch. The server is an IBM x3950 connected to an IBM SAN (a DS4000, I believe), and it would be nice to hear how this was solved.

Cheers
// John

tchetch 03-10-2008 02:42 PM

Well, I've successfully installed Debian on an IBM DS4000. I guess this might help you; this is the document I wrote: Debian on DS4000

T.

bittner 07-28-2010 06:59 AM

Multipath on Debian and Intel Modular Server (MFSYS25)
 
I'm trying to run multipath on a Debian Lenny box with 2 virtual disks provided by our Intel Modular Server system. (Intel officially supports only SLES and RHEL; with Debian you're left alone in the dark.) We have 2 disks with 2 paths each (sda through sdd), where /dev/sda is the Linux 'system' disk (with 2 partitions) and the 'database' disk is recognized as /dev/sdd (a single partition). The other two devices (really just the second paths) are not accessible; fdisk -l /dev/sdb /dev/sdc yields nothing.

My current problem is that multipath recognizes only the database disk (apparently as sdb and sdd, with sdd being the active path and sdb's path state being failed instead of the desired enabled). The system disk is not listed by the multipath -ll command at all, even though the /var/lib/multipath/bindings file lists both devices with their correct IDs. Consequently, /dev/mapper doesn't provide symlinks for the system disk either.

Code:

~# multipath -ll
database (22209000155faaffa) dm-0 Intel  ,Multi-Flex
[size=800G][features=1 queue_if_no_path][hwhandler=0]
\_ round-robin 0 [prio=2][active]
 \_ 0:0:0:1 sdb 8:16  [failed][ready]
 \_ 0:0:1:1 sdd 8:48  [active][ready]
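
So far I've only poked at this with a verbose dry run, which is supposed to print why each device gets accepted or skipped; my unconfirmed guess is that sda is being rejected because its partitions carry the mounted root filesystem:

Code:

# dry run, maximum verbosity: shows the accept/skip decision per device
~# multipath -v3 -d

# compare the WWIDs udev reports for the paths
# (same callout as in my config below)
~# /lib/udev/scsi_id -g -u /dev/sda
~# /lib/udev/scsi_id -g -u /dev/sdc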

Does anyone know how to get around this issue and make the system disk show up? Below is my current /etc/multipath.conf, for the sake of completeness:

Code:

defaults {
        user_friendly_names    yes
}

blacklist {
        devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
        devnode "^(hd|xvd)[a-z][[0-9]*]"
        devnode "^cciss!c[0-9]d[0-9]*[p[0-9]*]"
}

devices {
        device {
                vendor                  "Intel"
                product                "Multi-Flex"
                path_grouping_policy    "group_by_prio"
                getuid_callout          "/lib/udev/scsi_id -g -u /dev/%n"
                prio                    "intel"
                path_checker            tur
                path_selector          "round-robin 0"
        #      hardware_handler        "1 alua"
                failback                immediate
                rr_weight              uniform
                rr_min_io              100
                no_path_retry          queue
                features                "1 queue_if_no_path"
        }
}

multipaths {
        multipath {
                wwid            222ef0001555ab385
                alias          system
        }
        multipath {
                wwid            22209000155faaffa
                alias          database
        }
}

Note: I have commented out the hardware_handler option; the multipath daemon complains about the "1 alua" value with "unknown hardware handler type". (No wonder: according to the multipath.conf manpage, "1 emc" is the only implemented value!)

The configuration is based on Intel's MPIO configuration procedure for SLES (SUSE Linux Enterprise Server); instead of the RPMs shipped by Intel, I installed multipath-tools with apt-get.

Has anyone run into this or a similar issue before and solved it?


bittner 08-16-2010 03:08 AM

[SOLVED] Multipath on Debian Lenny and Intel Modular Server
 
I've fixed the issue. For anyone interested in the solution: I've written a blog post about Multipath on Debian Lenny and Intel Modular Server.

