
vmm_sys 08-07-2007 04:35 AM

Debian etch, IBM SAN 4700, qlogic. How to change multipath to active /passive?
Hi, at our firm we installed Debian etch 4.0r0 amd64 on an IBM HS21 blade with a QLogic 2422 HBA, connected to an IBM DS4700 SAN. Our problem is that at the moment the connection to the SAN is active/active (round-robin).
This produces connection errors in the IBM DS4000 Storage Manager client ("host side port (link) has been detected as down..."). We believe that because of the active/active round-robin, the Storage Manager thinks there is a problem with the link. We would therefore like to change the active/active into an active/passive configuration through multipath. I don't know whether it is instead possible to change the settings in the Storage Manager so that it doesn't treat this as a problem.

Here are some details about what I did on the blade after the basic Debian installation:

- Installed the Debian package "firmware-qlogic" with apt-get, ran "update-initramfs -u", then rebooted. Installed multipath-tools; afterwards the SAN disk is visible through /dev/mapper/scsi_id.
- Created an LVM physical volume on this disk (pvcreate /dev/mapper/scsi_id, pvscan), then a logical volume in a volume group on the SAN disk.
- Adjusted /etc/fstab, created a mount point with mkdir, created the filesystem, and mounted it.
This works fine: I can use the filesystem without any problem and create files and directories. After a reboot everything still works.
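For reference, the LVM steps described above could look roughly like this. This is only a sketch: the WWID is taken from the multipath output below, but the volume group name, logical volume name, size, filesystem type and mount point are assumptions, not values from the original post.

```shell
# Sketch of the LVM setup on the multipathed SAN disk (names are illustrative).
# Turn the multipath device into an LVM physical volume:
pvcreate /dev/mapper/3600a0b800026c6320000054c469b17be
pvscan

# Create a volume group and a logical volume on the SAN disk:
vgcreate sanvg /dev/mapper/3600a0b800026c6320000054c469b17be
lvcreate -L 50G -n sanlv sanvg

# Create a filesystem and a mount point, then mount it:
mkfs.ext3 /dev/sanvg/sanlv
mkdir -p /mnt/sandisk
mount /dev/sanvg/sanlv /mnt/sandisk

# Make it persistent across reboots:
echo '/dev/sanvg/sanlv /mnt/sandisk ext3 defaults 0 2' >> /etc/fstab
```

The important detail is to run pvcreate against the /dev/mapper device, not one of the underlying /dev/sdX paths, so that LVM always goes through the multipath layer.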

But then my colleague warned me about the error messages in the storage manager.

Here is some output of multipath:

sudo multipath -ll
3600a0b800026c6320000054c469b17be dm-4 IBM,1814 FAStT
\_ round-robin 0 [prio=1][active]
\_ 5:0:0:1 sda 8:0 [active][ready]
\_ round-robin 0 [prio=1][enabled]
\_ 8:0:0:1 sde 8:64 [active][ready]

sudo multipath -l
3600a0b800026c6320000054c469b17be dm-4 IBM,1814 FAStT
\_ round-robin 0 [prio=0][active]
\_ 5:0:0:1 sda 8:0 [active][undef]
\_ round-robin 0 [prio=0][enabled]
\_ 8:0:0:1 sde 8:64 [active][undef]

I don't know how to change the multipath settings to get an active/passive configuration. I know that on Debian etch there is an example multipath.conf that can be adapted (/usr/share/doc/multipath-tools/examples/multipath.conf.synthetic). I already copied it to /etc/multipath.conf, but haven't changed anything yet.
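One way to force active/passive on a per-array basis is an explicit device section in /etc/multipath.conf with path_grouping_policy set to failover, so only one path group is active at a time. The sketch below is a starting point, not a confirmed configuration: the vendor/product strings match the "IBM,1814 FAStT" seen in the multipath output above, but the rdac path checker and the other settings are assumptions that should be checked against the multipath.conf man page and your own `multipath -ll` output.

```
devices {
        device {
                vendor                  "IBM"
                product                 "1814"
                path_grouping_policy    failover
                path_checker            rdac
                failback                immediate
                no_path_retry           queue
        }
}
```

After editing the file, the existing maps have to be flushed and rebuilt (e.g. `multipath -F` followed by `multipath`, or a restart of multipath-tools) before `multipath -ll` reflects the new grouping.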

If someone would know how to solve this problem (either in debian or in the storage manager), please let me know.


john.p 01-25-2008 12:15 AM

Any solution?
Did you get any solution to this problem?

The thing is that I'm about to setup multipath on Debian Etch. The server is an IBM X3950 connected to an IBM SAN (4000 I believe) and it would be nice to hear how it was solved.

// John

tchetch 03-10-2008 03:42 PM

Well, I've successfully installed Debian on an IBM DS4000. I guess this might help you; this is the document I wrote: Debian on DS4000


bittner 07-28-2010 07:59 AM

Multipath on Debian and Intel Modular Server (MFSYS25)
I'm trying to run multipath on a Debian Lenny box with 2 virtual disks provided by our Intel Modular Server system. (Intel officially supports SLES and RHEL only; with Debian you're left alone in the dark.) We have 2 disks with 2 paths each (sda through sdd), where /dev/sda is the Linux 'system' disk (with 2 partitions) and the 'database' disk is recognized as /dev/sdd (single partition). The other two devices (paths, really) are not accessible (executing fdisk -l /dev/sdb /dev/sdc yields nothing).

My problem currently is that multipath recognizes the database disk only (sdb and sdd apparently, with sdd being the active path and sdb's path state being failed instead of the desired enabled state). The system disk is not listed by the multipath -ll command even though the /var/lib/multipath/bindings file lists both devices with their correct IDs. Of course, /dev/mapper doesn't provide symlinks to the system disk either.
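When a disk that appears in /var/lib/multipath/bindings is still missing from `multipath -ll`, a few checks usually narrow it down. This is a diagnostic sketch, not a confirmed fix; the device names follow the post above:

```shell
# Compare the WWIDs of the two paths that should form the system map:
/lib/udev/scsi_id -g -u /dev/sda   # first path of the system disk
/lib/udev/scsi_id -g -u /dev/sdc   # should print the same WWID

# Ask multipath to explain, very verbosely, what it does with each path
# (look for "blacklisted" or errors mentioning sda/sdc):
multipath -v3 2>&1 | grep -i -e blacklist -e sda -e sdc

# Check whether the root filesystem is mounted directly on /dev/sda:
mount | grep ' / '
```

One common cause worth ruling out: if the root filesystem is already mounted directly on /dev/sda1, device-mapper cannot claim that path, and multipathing the boot disk then requires multipath support inside the initramfs.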


~# multipath -ll
database (22209000155faaffa) dm-0 Intel  ,Multi-Flex
[size=800G][features=1 queue_if_no_path][hwhandler=0]
\_ round-robin 0 [prio=2][active]
 \_ 0:0:0:1 sdb 8:16  [failed][ready]
 \_ 0:0:1:1 sdd 8:48  [active][ready]

Does anyone know how to get around this issue and make the system disk show up? Below is my current /etc/multipath.conf configuration for sake of completeness:


defaults {
        user_friendly_names     yes
}

blacklist {
        devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
        devnode "^(hd|xvd)[a-z][[0-9]*]"
        devnode "^cciss!c[0-9]d[0-9]*[p[0-9]*]"
}

devices {
        device {
                vendor                  "Intel"
                product                 "Multi-Flex"
                path_grouping_policy    "group_by_prio"
                getuid_callout          "/lib/udev/scsi_id -g -u /dev/%n"
                prio                    "intel"
                path_checker            tur
                path_selector           "round-robin 0"
        #       hardware_handler        "1 alua"
                failback                immediate
                rr_weight               uniform
                rr_min_io               100
                no_path_retry           queue
                features                "1 queue_if_no_path"
        }
}

multipaths {
        multipath {
                wwid            222ef0001555ab385
                alias           system
        }
        multipath {
                wwid            22209000155faaffa
                alias           database
        }
}
Note: I have commented out the hardware_handler option; the multipath daemon complains about the "1 alua" value with: unknown hardware handler type
(No wonder: according to the multipath.conf manpage, "1 emc" is the only implemented value!)

The configuration is based on Intel's MPIO configuration procedure for SLES (SUSE Linux Enterprise Server); instead of the RPMs shipped by Intel, I installed multipath-tools using apt-get.

Anyone had this or a similar issue before and solved it?


bittner 08-16-2010 04:08 AM

[SOLVED] Multipath on Debian Lenny and Intel Modular Server
I've fixed the issue. For anyone interested in the solution: I've written a blog post about Multipath on Debian Lenny and Intel Modular Server.
