Old 07-21-2006, 04:04 PM   #1
f0rmat
LQ Newbie
 
Registered: Jan 2002
Posts: 9

Rep: Reputation: 0
iSCSI multipathing troubles with RHEL AS 4 Update 2 and QLogic HBAs


All,
I have two QLogic 4010C HBAs that I am trying to get multipathed (so I can have failover) under Red Hat Linux AS 4 Update 2, and for the life of me I cannot figure it out. Has anyone done this, or does anyone know of any detailed (and working) instructions for these QLogic cards? All I can get is a bunch of nothing from Red Hat, QLogic, and NetApp (our SAN filer).

I've tried the QLogic drivers and the Red Hat iSCSI drivers, and I'm now at the point where I can see my two cards and the LUN (although each card presents it as a separate device: sdb and sdc are my single LUN). But I can't even mount them, because I get "/dev/sdb1 (sdc1) is already mounted or /mountpoint busy".

Any help is most appreciated.

[root@amer-linuxdev /]# fdisk -l

Disk /dev/sda: 79.9 GB, 79966446080 bytes
255 heads, 63 sectors/track, 9722 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sda1 * 1 13 104391 83 Linux
/dev/sda2 14 9722 77987542+ 8e Linux LVM

Disk /dev/sdb: 32.1 GB, 32140951552 bytes
64 heads, 32 sectors/track, 30652 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes

Device Boot Start End Blocks Id System
/dev/sdb1 1 30652 31387632 83 Linux

Disk /dev/sdc: 32.1 GB, 32140951552 bytes
64 heads, 32 sectors/track, 30652 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes

Device Boot Start End Blocks Id System
/dev/sdc1 1 30652 31387632 83 Linux
[root@amer-linuxdev /]#
 
Old 07-31-2006, 10:20 AM   #2
JALITE
LQ Newbie
 
Registered: Jul 2006
Posts: 13

Rep: Reputation: 1
Since sdb and sdc are pointing at one physical LUN, you may need some kind of multipath software installed, and then mount the pseudo device it creates, I guess.
 
Old 08-18-2006, 01:49 PM   #3
vega103
LQ Newbie
 
Registered: Aug 2006
Posts: 1

Rep: Reputation: 0
multipath iSCSI HBAs

As far as my limited research goes, you need the multipath-tools package.

You might also want to check out the QLogic iSCSI forum.

I would be interested to know if you have sorted it out.

Regards ...
 
Old 08-29-2006, 11:01 AM   #4
tbeardsell
LQ Newbie
 
Registered: Aug 2006
Posts: 3

Rep: Reputation: 0
iSCSI fun! :-)

Make sure you have the right driver version from:
support.qlogic.com/support/product_resources.asp?id=341

Ensure that the HBA/Driver you are using does not have native multipathing enabled.

# insmod qla4xxx.ko ql4xfailover=0
# insmod ql4010c.ko

or whichever .ko files match your HBA.
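
To make the failover option stick across reboots, the usual RHEL 4 approach is an options line in /etc/modprobe.conf (a sketch; use whichever module name your driver actually loads as):

options qla4xxx ql4xfailover=0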

Then use mdadm to configure the devices as a multipath set:

# mdadm -C /dev/md1 -l multipath -n2 /dev/sdb /dev/sdc

Note: we do not use the /dev/sdb1 or /dev/sdc1 partitions; we just want the whole LUN device.

To get info on the device use:

# mdadm -D /dev/md1

Now make the filesystem on the /dev/md1 device and mount it... there you go, you should be working fine.

To make this persistent across reboots, do the following:

# echo "DEVICE /dev/sd[a-z]*" > /etc/mdadm.conf
# mdadm -Es >> /etc/mdadm.conf
# mdadm -As

Plus add the necessary entry to /etc/fstab, as sketched below.
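
For example (a sketch only; assuming ext3 and a /lunmnt mount point, adjust to suit):

/dev/md1    /lunmnt    ext3    defaults    0 0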

To test failover, use the following, or try some cable pulls while the device is being written to:

Mark an "active sync" or "spare" path as faulty:
# mdadm /dev/md(x) -f /dev/sd(x)

Hot remove a path:
# mdadm /dev/md(x) -r /dev/sd(x)

Hot add a path:
# mdadm /dev/md(x) -a /dev/sd(x)
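
You can keep an eye on the array state during all of this with:

# cat /proc/mdstat

(or watch -n1 cat /proc/mdstat for a live view)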

HTH Tom
 
Old 08-30-2006, 01:52 PM   #5
f0rmat
LQ Newbie
 
Registered: Jan 2002
Posts: 9

Original Poster
Rep: Reputation: 0
So, something got screwed up somewhere along the line. I tried using device-mapper-multipath, following the instructions at http://kbase.redhat.com/faq/FAQ_78_7170.shtm

That seemed promising: the first time I ran multipath -v2 in the chain of commands, it actually displayed a multipath. Then it disappeared (returns no results), and at that point I can't mount my NetApp LUN on either QLogic card:
[root@amer-linuxdev qlaiscsi-linux-5.00.04-2-install]# mount /dev/sdb1 /testmnt/
mount: /dev/sdb1 already mounted or /testmnt/ busy

It tells me /dev/sdb1 is busy, which it shouldn't be. I uninstalled and reinstalled the drivers, but still to no avail.

Any ideas?
 
Old 08-30-2006, 03:21 PM   #6
f0rmat
LQ Newbie
 
Registered: Jan 2002
Posts: 9

Original Poster
Rep: Reputation: 0
So your post worked most of the way (see below)...
However, when I tried some cable pulls, all went to shite. Unless I did something wrong?

I was copying my entire home directory from the home-dir server to this test LUN (about 5 GB of data), enough for me to pull a cable. When I pulled the cable on the first card, there was a pause (about 10 seconds) and writing continued. I applauded your brilliance, and my luck in finding you, waited a bit, and plugged the cable back in; writing continued. I then waited 10 seconds and unplugged the second cable (to test redundancy across both cards). Writing stopped, and stayed stopped. In another terminal, where I had top running, the md1_multipath process took 99.9% of the CPU constantly until I plugged the second cable back in.

Is it not completely automated? Is there a manual step to tell the system that the first failed card is back and OK? I'm hoping to achieve completely automated failover and failback across both cards. Is that possible using MD, or do I need to look back at device-mapper multipathing?
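
In case it really is manual-only, I'm tempted to hack up a crude watchdog along these lines (an untested sketch; it assumes /dev/md1 and simply retries any path mdadm flags as faulty):

#!/bin/sh
# Untested sketch: poll the array and try to re-add any faulty path.
MD=/dev/md1
while true; do
    for dev in `mdadm -D $MD | awk '/faulty/ {print $NF}'`; do
        # hot-remove then hot-add, same as doing it by hand
        mdadm $MD -r $dev && mdadm $MD -a $dev
    done
    sleep 10
done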

[root@amer-linuxdev dmurphy]# mdadm -D /dev/md1
/dev/md1:
Version : 00.90.01
Creation Time : Wed Aug 30 15:49:05 2006
Raid Level : multipath
Array Size : 31387584 (29.93 GiB 32.14 GB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 1
Persistence : Superblock is persistent

Update Time : Wed Aug 30 15:55:13 2006
State : clean, degraded
Active Devices : 1
Working Devices : 1
Failed Devices : 1
Spare Devices : 0


Number Major Minor RaidDevice State
0 0 0 -1 removed
1 8 32 1 active sync /dev/sdc
2 8 16 -1 faulty /dev/sdb
UUID : 3e2f3ae5:b60cfd76:62b72e8e:0a748c23
Events : 0.2
[root@amer-linuxdev dmurphy]# mdadm /dev/md1 -r /dev/sdb
mdadm: hot removed /dev/sdb
[root@amer-linuxdev dmurphy]# mdadm /dev/md1 -a /dev/sdb
mdadm: hot added /dev/sdb
[root@amer-linuxdev dmurphy]# mdadm -D /dev/md1
/dev/md1:
Version : 00.90.01
Creation Time : Wed Aug 30 15:49:05 2006
Raid Level : multipath
Array Size : 31387584 (29.93 GiB 32.14 GB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 1
Persistence : Superblock is persistent

Update Time : Wed Aug 30 16:16:29 2006
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0


Number Major Minor RaidDevice State
0 0 0 -1 removed
1 8 32 1 active sync /dev/sdc
2 8 16 0 active sync /dev/sdb
UUID : 3e2f3ae5:b60cfd76:62b72e8e:0a748c23
Events : 0.4
[root@amer-linuxdev dmurphy]# df -k
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/VolGroup00-LogVolRoot
30254032 3183260 25533956 12% /
/dev/sda1 101086 14646 81221 16% /boot
none 1027508 0 1027508 0% /dev/shm
/dev/mapper/VolGroup00-LogVolX1
28092588 372940 26292604 2% /x1
amer-homes:/x/home/dmurphy
410958560 384906176 21942784 95% /home/dmurphy
/dev/md1 30894972 77784 29247812 1% /lunmnt
 
Old 08-31-2006, 05:21 AM   #7
tbeardsell
LQ Newbie
 
Registered: Aug 2006
Posts: 3

Rep: Reputation: 0
mdadm failing paths

Hi again,
I've seen similar problems with the mdadm multipath feature not being able to do its job, basically.

What you are expecting is correct: you should be able to fail one path, replug, have the system detect the repair and fail back automatically (giving it a minute or so, so the OS has a fighting chance to survive the cable pull/repair), then pull the second cable. All should be fine, yeah?

What concerns me about the mdadm multipath feature is that there is not a lot of documentation out there from people doing this and testing it the way you or I would! People just seem to document setting it up and then say, yeah, this looks good and must work, because it looks like it should.

I've also seen problems with multipathing large disks: for example, a 5 TB disk with two paths works fine as a native /dev/sd(x) device, but after an mdadm -C ... -l multipath it seems to become a 1.2 TB volume. Strange.

One route I'm looking at is versions of, and bugs in, the default mdadm that the OS installs. I've tracked down the guy who wrote mdadm:
http://neil.brown.name/blog/front :-)

Check out your version and see what's available from:
http://www.kernel.org/pub/linux/utils/raid/mdadm/
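
(Quick ways to see what you are running:

# mdadm --version
# rpm -q mdadm
)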

We'll get there in the end. I found this thread through a friend looking to do exactly the same thing, so I'm pretty dedicated to this subject at the moment.

HTH

Tom
 
Old 08-31-2006, 10:44 AM   #8
l4n3
LQ Newbie
 
Registered: Aug 2006
Posts: 12

Rep: Reputation: 0
Hi all,

I'm having the same problem, i.e. no default recovery of re-established paths with mdadm.

I checked out the above websites, but there is no mention of that ability around there.

Do we have any reason to believe that it is included in later versions (later than the 1.6 currently shipped in Red Hat AS 4 U3)?

Just trying to save myself the compile hassle if it's not going to work anyway.

thanks
 
Old 08-31-2006, 11:42 AM   #9
l4n3
LQ Newbie
 
Registered: Aug 2006
Posts: 12

Rep: Reputation: 0
2.5.2 no joy

I just tried it again with 2.5.2 (the compile was completely painless), and there is still no dynamic failback...
 
Old 08-31-2006, 01:01 PM   #10
tbeardsell
LQ Newbie
 
Registered: Aug 2006
Posts: 3

Rep: Reputation: 0
Let's ask the man, see what he says:

http://neil.brown.name/blog/20040607123837
 
Old 08-31-2006, 02:42 PM   #11
f0rmat
LQ Newbie
 
Registered: Jan 2002
Posts: 9

Original Poster
Rep: Reputation: 0
I felt like I was getting somewhere with multipathd earlier; see kbase.redhat.com/faq/FAQ_78_7170.shtm

However, after the first time I ran multipath -v2 to get a display of my devices, I got nothing back on later runs. It was as if multipath had just disappeared. At that point I couldn't even mount my /dev/sdb1 or sdc1. I fixed this weird behaviour by killing multipathd, turning it off at startup, and rebooting.
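
(For the record, killing it and turning it off from startup was just:

# service multipathd stop
# chkconfig multipathd off
)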
 
Old 08-31-2006, 03:47 PM   #12
f0rmat
LQ Newbie
 
Registered: Jan 2002
Posts: 9

Original Poster
Rep: Reputation: 0
As I said, after completing all of the steps in that Red Hat KB article: when I load the modules, start the service, and run multipath -v2, I get the following:
[root@amer-linuxdev etc]#
[root@amer-linuxdev etc]# modprobe dm-multipath
[root@amer-linuxdev etc]# modprobe dm-round-robin
[root@amer-linuxdev etc]# service multipathd start
Starting multipathd daemon: [ OK ]
[root@amer-linuxdev etc]# multipath -v2
create: mpath1 (360a980004334654e43343646574a6476)
[size=29 GB][features="1 queue_if_no_path"][hwhandler="0"]
\_ round-robin 0 [prio=4]
\_ 1:0:1:0 sdb 8:16 [ready]
\_ 2:0:1:0 sdc 8:32 [ready]

[root@amer-linuxdev etc]# multipath -v2
[root@amer-linuxdev etc]# multipath -v2

All looks well!! However, this won't be the case for long. For whatever reason (as you can see), that is only displayed once: if I run multipath -v2 again, I get no results, and at this point it seems like /dev/sdb and sdc are locked and useless.
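
(A thought after the fact: multipath -v2 may only print maps it is newly creating; to list whatever maps already exist, I believe the command is:

# multipath -l

Worth checking before assuming the map is really gone.)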

[root@amer-linuxdev etc]# mount /dev/sdb1 /lunmnt
mount: /dev/sdb1 already mounted or /lunmnt busy
[root@amer-linuxdev etc]#

I can't free them until I kill multipathd and reboot.
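
(Two things I mean to try before rebooting next time: if the map is still live, device-mapper holding sdb/sdc underneath it would explain the busy errors; and I've read that # multipath -F is meant to flush unused maps, which might release the devices without a reboot. Both untried here.)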

Also, in the next step, when I run kpartx -a /dev/dm-# (I use 0 as my value), it creates the dm disk. However, if I do a mount /dev/dm-0 /lunmnt/, it mounts the hard drive that is in the machine (/dev/mapper/VolGroup00-LogVolRoot). It is not /dev/sda1, because I was also using this as an opportunity to play around with logical volumes.
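
(One theory: the /dev/dm-# numbers cover every device-mapper device, LVM logical volumes included, so dm-0 here may actually be VolGroup00-LogVolRoot rather than the multipath map; the df output below showing the root filesystem's numbers for /lunmnt fits that. If so, the step should probably use the names under /dev/mapper instead, something like:

# kpartx -a /dev/mapper/mpath1
# mount /dev/mapper/mpath1p1 /lunmnt

where mpath1 is the map name multipath -v2 printed earlier and mpath1p1 is the partition mapping kpartx creates; both names assumed, not verified.)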


[root@amer-linuxdev etc]# mount /dev/dm-0 /lunmnt
[root@amer-linuxdev etc]# df -k
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/VolGroup00-LogVolRoot
30254032 3188448 25528768 12% /
/dev/sda1 101086 14646 81221 16% /boot
none 1027508 0 1027508 0% /dev/shm
/dev/mapper/VolGroup00-LogVolX1
28092588 372940 26292604 2% /x1
amer-homes:/x/home/dmurphy
410958560 386076416 20772544 95% /home/dmurphy
/dev/dm-0 30254032 3188448 25528768 12% /lunmnt
 
Old 09-01-2006, 09:58 AM   #13
l4n3
LQ Newbie
 
Registered: Aug 2006
Posts: 12

Rep: Reputation: 0
To tbeardsell: I saw Neil Brown's reply to you; not encouraging, unless you'd like to code an update :-)

f0rmat: since the mdadm stuff wasn't working at all, I tried dm-multipathing using the kbase article you pointed me to, and it worked fine for me.

I haven't tested failover yet, but I will and will post my results here...

I'm using Red Hat AS 4 (2.6.9-42.0.2.ELsmp, 64-bit) and QLogic Fibre Channel cards, not iSCSI...
 
Old 09-01-2006, 10:13 AM   #14
l4n3
LQ Newbie
 
Registered: Aug 2006
Posts: 12

Rep: Reputation: 0
It worked: multiple iterations of unplug/plug while writing to a file on the affected filesystem...

One thing: I did not use kpartx at all. I used fdisk on the whole disk (sdb) to remove all partition information, then ran mke2fs -j on the device itself (not a partition), then created the multipath device and mounted that whole device (roughly as sketched below).

We don't usually partition our LUNs, so that's how I've done it. Perhaps your problems arise from the partitioning step?
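
Roughly, from memory (a sketch; device and map names will differ on your box):

# fdisk /dev/sdb                     <- delete all partitions, then write
# mke2fs -j /dev/sdb                 <- ext3 on the whole device, no partition
# multipath -v2                      <- build the multipath map
# mount /dev/mapper/mpath1 /lunmnt   <- mount the map itself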

Sorry I can't be of more help...
 
Old 09-01-2006, 04:15 PM   #15
f0rmat
LQ Newbie
 
Registered: Jan 2002
Posts: 9

Original Poster
Rep: Reputation: 0
If you didn't use kpartx, how did you create the multipath device?

I thought kpartx -a /dev/dm-# was what created the multipath disk, and you can't use fdisk on that. Did you do a different set of steps? What steps did you run from start to end? Something must be missing.
 
  

