Linux - Enterprise This forum is for all items relating to using Linux in the Enterprise.


Old 08-07-2007, 03:35 AM   #1
LQ Newbie
Registered: Jul 2007
Location: Belgium
Posts: 5

Rep: Reputation: 0
Debian etch, IBM SAN 4700, QLogic: how to change multipath to active/passive?

Hi, at our firm we installed Debian etch 4.0r0 (amd64) on an IBM HS21 blade with a QLogic 2422 HBA, connected to an IBM SAN 4700. Our problem is that at the moment the connection to the SAN is active/active (round-robin).
This produces connection errors in the IBM DS4000 Storage Manager client ("host side port (link) has been detected as down..."). We believe that, because of the active/active round-robin, the Storage Manager thinks there is a problem with the link. We would therefore like to change the active/active setup into active/passive through multipath. I don't know whether it is instead possible to change the settings in the Storage Manager so that it doesn't treat this as a problem.

Here are some details about what I did on the blade after the basic Debian installation:

- Installed the Debian package "firmware-qlogic" with apt-get, ran "update-initramfs -u", and rebooted. After installing multipath-tools, the SAN disk is visible through /dev/mapper/<scsi_id>.
- Created a physical volume on this disk (pvcreate /dev/mapper/<scsi_id>), ran pvscan, and created a logical volume in a volume group on the SAN disk.
- Adjusted /etc/fstab, created a mount point with mkdir, and mounted the new filesystem.

This works fine: I can use the filesystem without any problem, create files and directories, and everything still works after a reboot.

But then my colleague warned me about the error messages in the storage manager.

Here is some output from multipath:

sudo multipath -ll
3600a0b800026c6320000054c469b17be dm-4 IBM,1814 FAStT
\_ round-robin 0 [prio=1][active]
\_ 5:0:0:1 sda 8:0 [active][ready]
\_ round-robin 0 [prio=1][enabled]
\_ 8:0:0:1 sde 8:64 [active][ready]

sudo multipath -l
3600a0b800026c6320000054c469b17be dm-4 IBM,1814 FAStT
\_ round-robin 0 [prio=0][active]
\_ 5:0:0:1 sda 8:0 [active][undef]
\_ round-robin 0 [prio=0][enabled]
\_ 8:0:0:1 sde 8:64 [active][undef]
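(Side note in case it helps anyone comparing many LUNs: the per-path lines of this output can be summarized with a bit of awk. A quick sketch that uses the sample output above as its input, so you would normally pipe `multipath -ll` into it instead:)

```shell
# Print H:C:T:L, device name and path state for each path line of
# `multipath -ll` output. The here-doc carries the sample from above so
# the sketch runs standalone; normally: multipath -ll | awk '...'
awk '/^ *\\_ [0-9]+:[0-9]+:[0-9]+:[0-9]+/ {
    # fields: \_  H:C:T:L  dev  major:minor  [dm_state][checker_state]
    print $2, $3, $5
}' <<'EOF'
3600a0b800026c6320000054c469b17be dm-4 IBM,1814 FAStT
\_ round-robin 0 [prio=1][active]
\_ 5:0:0:1 sda 8:0 [active][ready]
\_ round-robin 0 [prio=1][enabled]
\_ 8:0:0:1 sde 8:64 [active][ready]
EOF
# prints:
#   5:0:0:1 sda [active][ready]
#   8:0:0:1 sde [active][ready]
```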

I don't know how to change the multipath settings so that I get an active/passive configuration. I know Debian etch ships an example multipath.conf that can be adapted (/usr/share/doc/multipath-tools/examples/multipath.conf.synthetic). I already copied it to /etc/multipath.conf but haven't changed anything yet.
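From reading the multipath.conf manpage, my guess is that a device section along these lines might force failover (active/passive). The vendor/product strings below are only guesses based on the "IBM,1814 FAStT" line in the multipath -ll output above, so please correct me if the approach is wrong:

```
devices {
        device {
                vendor                  "IBM"
                product                 "1814"
                # failover = only one path group used at a time (active/passive)
                path_grouping_policy    failover
                failback                manual
                no_path_retry           queue
        }
}
```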

If someone knows how to solve this problem (either in Debian or in the Storage Manager), please let me know.

Old 01-24-2008, 11:15 PM   #2
LQ Newbie
Registered: Jan 2008
Posts: 1

Rep: Reputation: 0
Any solution?

Did you get any solution to this problem?

The thing is that I'm about to set up multipath on Debian etch. The server is an IBM x3950 connected to an IBM SAN (a DS4000, I believe), and it would be nice to hear how it was solved.

// John
Old 03-10-2008, 02:42 PM   #3
LQ Newbie
Registered: Mar 2008
Posts: 2

Rep: Reputation: 0
Well, I've successfully installed Debian on an IBM DS4000. I guess this might help you; this is the document I wrote: Debian on DS4000

Old 07-28-2010, 06:59 AM   #4
LQ Newbie
Registered: Aug 2005
Location: Kreuzlingen, Switzerland
Distribution: Ubuntu, Debian
Posts: 11
Blog Entries: 10

Rep: Reputation: 0
Question Multipath on Debian and Intel Modular Server (MFSYS25)

I'm trying to run multipath on a Debian Lenny box with two virtual disks provided by our Intel Modular Server system. (Intel officially supports only SLES and RHEL; with Debian you're left alone in the dark.) We have two disks with two paths each (sda through sdd), where /dev/sda is the Linux 'system' disk (with two partitions) and the 'database' disk is recognized as /dev/sdd (single partition). The other two devices (paths, really) are not accessible (fdisk -l /dev/sdb /dev/sdc yields nothing).

My problem is that multipath currently recognizes only the database disk (apparently sdb and sdd, with sdd being the active path and sdb's path state being failed instead of the desired enabled). The system disk is not listed by multipath -ll even though /var/lib/multipath/bindings lists both devices with their correct IDs. Of course, /dev/mapper doesn't provide symlinks for the system disk either.

~# multipath -ll
database (22209000155faaffa) dm-0 Intel   ,Multi-Flex
[size=800G][features=1 queue_if_no_path][hwhandler=0]
\_ round-robin 0 [prio=2][active]
 \_ 0:0:0:1 sdb 8:16  [failed][ready]
 \_ 0:0:1:1 sdd 8:48  [active][ready]
Does anyone know how to get around this issue and make the system disk show up? For the sake of completeness, here is my current /etc/multipath.conf:

defaults {
        user_friendly_names     yes
}

blacklist {
        devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
        devnode "^(hd|xvd)[a-z][[0-9]*]"
        devnode "^cciss!c[0-9]d[0-9]*[p[0-9]*]"
}

devices {
        device {
                vendor                  "Intel"
                product                 "Multi-Flex"
                path_grouping_policy    "group_by_prio"
                getuid_callout          "/lib/udev/scsi_id -g -u /dev/%n"
                prio                    "intel"
                path_checker            tur
                path_selector           "round-robin 0"
        #       hardware_handler        "1 alua"
                failback                immediate
                rr_weight               uniform
                rr_min_io               100
                no_path_retry           queue
                features                "1 queue_if_no_path"
        }
}

multipaths {
        multipath {
                wwid            222ef0001555ab385
                alias           system
        }
        multipath {
                wwid            22209000155faaffa
                alias           database
        }
}
Note: I have commented out the hardware_handler option; the multipath daemon complains about the "1 alua" value with: unknown hardware handler type
(No wonder: according to the multipath.conf manpage, "1 emc" is the only implemented value!)

The configuration is based on Intel's MPIO configuration procedure for SLES (SUSE Linux Enterprise Server); instead of the RPMs shipped by Intel, I installed multipath-tools with apt-get.

Has anyone had this or a similar issue before and solved it?
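One guess I still want to test, in case the system disk's paths are somehow being caught by the blacklist: explicitly whitelisting its WWID with a blacklist_exceptions section (the WWID below is the one my bindings file lists for "system"):

```
blacklist_exceptions {
        wwid    222ef0001555ab385
}
```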

P.S., some resources I found helpful so far:
Old 08-16-2010, 03:08 AM   #5
LQ Newbie
Registered: Aug 2005
Location: Kreuzlingen, Switzerland
Distribution: Ubuntu, Debian
Posts: 11
Blog Entries: 10

Rep: Reputation: 0
Lightbulb [SOLVED] Multipath on Debian Lenny and Intel Modular Server

I've fixed the issue. For anyone interested in the solution: I've written a blog post about Multipath on Debian Lenny and Intel Modular Server.

