LinuxQuestions.org
Forums > Enterprise Linux Forums > Linux - Enterprise
Old 02-27-2017, 12:00 PM   #16
MensaWater
LQ Guru
Registered: May 2005
Location: Atlanta Georgia USA
Distribution: Redhat (RHEL), CentOS, Fedora, CoreOS, Debian, FreeBSD, HP-UX, Solaris, SCO
Posts: 7,831
Good.

I'm assuming the only edit you're making is to remove the wwid entry with the asterisk that you asked about.

Be sure you only remove the line:
wwid "*"

Do NOT remove the "}" directly above or below that line. The one above closes the device entry above it, and the one below closes the entire blacklist section, so both are required.
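To illustrate (an abbreviated sketch, with the earlier blacklist entries elided), the tail of the section should end up like this:

```text
blacklist {
        ...                  # earlier devnode/device/wwid entries, unchanged
        device {
                vendor HP
                product Virtual_DVD-ROM
        }                    # keep: closes the device entry above
        # wwid "*"           # the only line removed (shown commented out here)
}                            # keep: closes the entire blacklist section
```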
 
Old 02-27-2017, 05:13 PM   #17
Hsingh
LQ Newbie
Registered: Jan 2012
Location: CO, USA
Distribution: Redhat, Oracle Linux6, Ubuntu16 & CentOS.
Posts: 28
Original Poster
Yes, I am going to comment out the wwid "*" line.

Code:
blacklist {
        devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
        devnode "^hd[a-z]"
        devnode "^dcssblk[0-9]*"
        device {
                vendor "DGC"
                product "LUNZ"
        }
        device {
                vendor "IBM"
                product "S/390.*"
        }
        # don't count normal SATA devices as multipaths
        device {
                vendor  "ATA"
        }
        # don't count 3ware devices as multipaths
        device {
                vendor  "3ware"
        }
        device {
                vendor  "AMCC"
        }
        # nor highpoint devices
        device {
                vendor  "HPT"
        }
        wwid "3600508b1001c9c5986be0c6503c1857e"
        device {
                vendor HP
                product Virtual_DVD-ROM
        }
        wwid "*"
}
 
Old 02-28-2017, 08:07 PM   #18
Hsingh (Original Poster)
Made the change in multipath.conf and did "service multipathd restart". There was no disruption.
Unfortunately it did not resolve the issue. The SAN devices still show up as local SD devices.

Code:
sudo pvdisplay
  Found duplicate PV cllQixBtG36oy39iVPdzmj8GFAhtX7ww: using /dev/sday1 not /dev/sdbm1
  Found duplicate PV onG5ThUCXuBr60rT1W6fIjIOGjSKNaNk: using /dev/sdaz1 not /dev/sdbn1
  Found duplicate PV OGHvsLvxJ1yrRGyPHl2dFaKVSwQoa1BG: using /dev/sdak1 not /dev/sdax1
  Found duplicate PV YE4PrCVMvsdfhddQHtefTj0MzXSkcdpO: using /dev/sdcw1 not /dev/sdai1
  Found duplicate PV YE4PrCVMvsdfhddQHtefTj0MzXSkcdpO: using /dev/sdv1 not /dev/sdcw1
  Found duplicate PV cllQixBtG36oy39iVPdzmj8GFAhtX7ww: using /dev/sdal1 not /dev/sday1
  Found duplicate PV onG5ThUCXuBr60rT1W6fIjIOGjSKNaNk: using /dev/sdam1 not /dev/sdaz1
  Found duplicate PV OGHvsLvxJ1yrRGyPHl2dFaKVSwQoa1BG: using /dev/sdcy1 not /dev/sdak1
  Found duplicate PV OGHvsLvxJ1yrRGyPHl2dFaKVSwQoa1BG: using /dev/sdx1 not /dev/sdcy1
  Found duplicate PV YE4PrCVMvsdfhddQHtefTj0MzXSkcdpO: using /dev/sdcj1 not /dev/sdv1
  Found duplicate PV cllQixBtG36oy39iVPdzmj8GFAhtX7ww: using /dev/sdcz1 not /dev/sdal1
  Found duplicate PV YE4PrCVMvsdfhddQHtefTj0MzXSkcdpO: using /dev/sdi1 not /dev/sdcj1
  Found duplicate PV cllQixBtG36oy39iVPdzmj8GFAhtX7ww: using /dev/sdy1 not /dev/sdcz1
  Found duplicate PV onG5ThUCXuBr60rT1W6fIjIOGjSKNaNk: using /dev/sdda1 not /dev/sdam1
  Found duplicate PV onG5ThUCXuBr60rT1W6fIjIOGjSKNaNk: using /dev/sdz1 not /dev/sdda1
  Found duplicate PV OGHvsLvxJ1yrRGyPHl2dFaKVSwQoa1BG: using /dev/sdcl1 not /dev/sdx1
  Found duplicate PV OGHvsLvxJ1yrRGyPHl2dFaKVSwQoa1BG: using /dev/sdk1 not /dev/sdcl1
  Found duplicate PV YE4PrCVMvsdfhddQHtefTj0MzXSkcdpO: using /dev/sdbw1 not /dev/sdi1
  Found duplicate PV cllQixBtG36oy39iVPdzmj8GFAhtX7ww: using /dev/sdcm1 not /dev/sdy1
  Found duplicate PV cllQixBtG36oy39iVPdzmj8GFAhtX7ww: using /dev/sdl1 not /dev/sdcm1
  Found duplicate PV onG5ThUCXuBr60rT1W6fIjIOGjSKNaNk: using /dev/sdcn1 not /dev/sdz1
  Found duplicate PV onG5ThUCXuBr60rT1W6fIjIOGjSKNaNk: using /dev/sdm1 not /dev/sdcn1
  Found duplicate PV OGHvsLvxJ1yrRGyPHl2dFaKVSwQoa1BG: using /dev/sdby1 not /dev/sdk1
  Found duplicate PV YE4PrCVMvsdfhddQHtefTj0MzXSkcdpO: using /dev/sdbj1 not /dev/sdbw1
  Found duplicate PV cllQixBtG36oy39iVPdzmj8GFAhtX7ww: using /dev/sdbz1 not /dev/sdl1
  Found duplicate PV onG5ThUCXuBr60rT1W6fIjIOGjSKNaNk: using /dev/sdca1 not /dev/sdm1
  Found duplicate PV YE4PrCVMvsdfhddQHtefTj0MzXSkcdpO: using /dev/sdav1 not /dev/sdbj1
  Found duplicate PV OGHvsLvxJ1yrRGyPHl2dFaKVSwQoa1BG: using /dev/sdbl1 not /dev/sdby1
  --- Physical volume ---
  PV Name               /dev/sdba2
  VG Name               vg_denplpilatdb1
  PV Size               247.63 GiB / not usable 2.00 MiB
  Allocatable           yes 
  PE Size               4.00 MiB
  Total PE              63392
  Free PE               1392
  Allocated PE          62000
  PV UUID               nTLseC-cdOe-wA9r-J7xo-sqTE-G0yc-qoi7ug
   
  --- Physical volume ---
  PV Name               /dev/mapper/mpathip1
  VG Name               vg08
  PV Size               50.00 GiB / not usable 3.31 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              12799
  Free PE               0
  Allocated PE          12799
  PV UUID               PI3zJz-V1fW-OTPw-yMQu-4uOz-D6SP-0yvTsr
   
  --- Physical volume ---
  PV Name               /dev/mapper/mpathhp1
  VG Name               vg07
  PV Size               230.00 GiB / not usable 3.38 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              58878
  Free PE               0
  Allocated PE          58878
  PV UUID               Sy4crl-Z14m-GDxS-EkYA-jy9n-tSrQ-mITUs2
   
  --- Physical volume ---
  PV Name               /dev/sdav1
  VG Name               vg07
  PV Size               270.00 GiB / not usable 4.00 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              69119
  Free PE               0
  Allocated PE          69119
  PV UUID               YE4PrC-VMvs-dfhd-dQHt-efTj-0MzX-SkcdpO
   
  --- Physical volume ---
  PV Name               /dev/sdca1
  VG Name               vg07
  PV Size               200.00 GiB / not usable 4.00 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              51199
  Free PE               0
  Allocated PE          51199
  PV UUID               onG5Th-UCXu-Br60-rT1W-6fIj-IOGj-SKNaNk
   
  --- Physical volume ---
  PV Name               /dev/mapper/mpathfp1
  VG Name               vg06
  PV Size               230.00 GiB / not usable 3.38 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              58878
  Free PE               0
  Allocated PE          58878
  PV UUID               eHQZp3-kBcI-xdvS-pCi2-W93d-ec0R-d7P5n7
   
  --- Physical volume ---
  PV Name               /dev/mapper/mpathcp1
  VG Name               vg03
  PV Size               34.99 GiB / not usable 4.45 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              8957
  Free PE               0
  Allocated PE          8957
  PV UUID               XWz3qn-VZy9-iFdD-kBwu-1MKx-ZK4N-1d2BUQ
   
  --- Physical volume ---
  PV Name               /dev/mapper/mpathep1
  VG Name               vg05
  PV Size               34.99 GiB / not usable 4.45 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              8957
  Free PE               0
  Allocated PE          8957
  PV UUID               McogLs-TH0n-lSCj-KYdS-LM1F-XLlN-mvBekn
   
  --- Physical volume ---
  PV Name               /dev/mapper/mpathdp1
  VG Name               vg04
  PV Size               60.00 GiB / not usable 4.04 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              15358
  Free PE               0
  Allocated PE          15358
  PV UUID               zlGH5H-ZZM8-ZM59-f4Qr-lIVG-dddt-TOVk5N
   
  --- Physical volume ---
  PV Name               /dev/mapper/mpathap1
  VG Name               vg01
  PV Size               725.00 GiB / not usable 4.00 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              185599
  Free PE               0
  Allocated PE          185599
  PV UUID               uCligC-UPEp-xCua-gNVr-F29C-022X-ezpMxA
   
  --- Physical volume ---
  PV Name               /dev/mapper/mpathbp1
  VG Name               vg02
  PV Size               150.00 GiB / not usable 2.00 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              38399
  Free PE               0
  Allocated PE          38399
  PV UUID               VfY4RS-4gPw-BhnS-xp2d-cPCv-2XXF-0g2kus
   
  --- Physical volume ---
  PV Name               /dev/sdbl1
  VG Name               vg02
  PV Size               105.00 GiB / not usable 4.00 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              26879
  Free PE               0
  Allocated PE          26879
  PV UUID               OGHvsL-vxJ1-yrRG-yPHl-2dFa-KVSw-Qoa1BG
   
  --- Physical volume ---
  PV Name               /dev/sdbz1
  VG Name               vg02
  PV Size               110.00 GiB / not usable 4.00 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              28159
  Free PE               0
  Allocated PE          28159
  PV UUID               cllQix-BtG3-6oy3-9iVP-dzmj-8GFA-htX7ww
 
Old 03-01-2017, 07:27 AM   #19
MensaWater
What does "multipath -ll" or "multipath -l -v2" show now?

Also what lines do you have uncommented in lvm.conf?

Did you ever figure out the number of Hitachi LDEVs you have presented to this server?

Did you verify lsscsi output hasn't changed?
 
Old 03-02-2017, 03:28 AM   #20
Hsingh (Original Poster)
multipath -ll now shows one additional LUN, which I missed when I first checked after making the change.

mpathl (360060e80132b960050202b9600000049) dm-26 HITACHI,OPEN-V
size=100G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 0:0:0:9 sdj 8:144 active ready running
|- 0:0:1:9 sdw 65:96 active ready running
|- 0:0:2:9 sdaj 66:48 active ready running
|- 0:0:3:9 sdaw 67:0 active ready running
|- 1:0:0:9 sdbk 67:224 active ready running
|- 1:0:1:9 sdbx 68:176 active ready running
|- 1:0:2:9 sdck 69:128 active ready running
`- 1:0:3:9 sdcx 70:80 active ready running

Quote:
Also what lines do you have uncommented in lvm.conf?
No changes were made to lvm.conf. I only changed multipath.conf, commenting out wwid "*" in the blacklist section.

I forgot to check lsscsi but will do it ASAP, and will also get with the storage team to find out the total number of LUNs presented to this server.
 
Old 03-02-2017, 08:28 AM   #21
MensaWater
Quote:
Originally Posted by Hsingh View Post
No changes were made to lvm.conf. I only changed multipath.conf, commenting out wwid "*" in the blacklist section.
The reason I asked about lvm.conf is that it is possible you have the filter line set in such a way that LVM isn't looking at some of your multipath devices.
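For example (a purely hypothetical filter, not taken from your lvm.conf, and with example device patterns), a line like this in the devices section would make LVM accept only the multipath maps and the internal disk while rejecting the raw sd paths:

```text
# /etc/lvm/lvm.conf -- illustrative only; patterns are examples
filter = [ "a|^/dev/mapper/mpath.*|", "a|^/dev/sda.*|", "r|.*|" ]
```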

Also I see you're using partitions on your devices as PVs. It may be you don't have the multipath partition devices (e.g. mpathe = whole disk, mpathep1 = first partition on mpathe). The multipath command outputs only the whole disk info but you can see the other info in /dev/mapper.
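As a quick way to check for that, the partition maps can be picked out of a /dev/mapper listing by their p<numeric> suffix. A minimal sketch (the device names below are made up for illustration):

```shell
# Filter a hypothetical /dev/mapper listing down to the partition maps,
# which end in p<digit>; whole-disk maps (mpathe, mpathf) are dropped.
printf '%s\n' control mpathe mpathep1 mpathf mpathfp1 mpathfp2 \
  | grep -E 'p[0-9]+$'
```

On the live system the same filter applied to `ls /dev/mapper` shows whether the mpath<alpha>p<numeric> maps exist at all.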
 
Old 03-02-2017, 01:41 PM   #22
MensaWater
You need to run "vgscan".

I just did a test and found vgscan makes LVM use your multipath instead of single path sd (assuming the multipath exists - you said you have one more):

On test system vg_VolGroup01 uses sdc from SAN:
[root]# pvs
  PV         VG             Fmt  Attr PSize    PFree
  /dev/sda2  vg_VolGroup00  lvm2 a--  278.38g  175.84g
  /dev/sdc   vg_VolGroup01  lvm2 a--  1024.00g 0

After modifying multipath.conf and running the restart of the multipathd it still displayed the above.

I then ran the vgscan which output:
[root]# vgscan
Reading all physical volumes. This may take a while...
Found volume group "vg_VolGroup01" using metadata type lvm2
Found volume group "vg_VolGroup00" using metadata type lvm2

After that it shows the multipath device instead of the sd device:
[root]# pvs
  PV                  VG             Fmt  Attr PSize    PFree
  /dev/mapper/mpatha  vg_VolGroup01  lvm2 a--  1024.00g 0
  /dev/sda2           vg_VolGroup00  lvm2 a--  278.38g  175.84g

In the above output sda2 is on our internal RAID controller (PERC), so it doesn't use multipath.

 
Old 03-06-2017, 10:48 AM   #23
Hsingh (Original Poster)
I looked in /etc/lvm/lvm.conf and did not see any filters that would prohibit it from looking at any devices.

Quote:
After modifying multipath.conf and running the restart of the multipathd it still displayed the above.
What did you change in multipath ?

Doing a vgscan shouldn't be a problem; I will try to get that done here shortly, after reading the man pages of course. I am a little paranoid :-)
 
Old 03-06-2017, 12:38 PM   #24
MensaWater
Quote:
Originally Posted by Hsingh View Post
What did you change in multipath ?
It was the initial setup of multipath on this server. The server didn't have the disk array in its multipath Devices because it didn't previously have multipath running at all: only one fiber port on the server was zoned to one port on the disk array. During maintenance yesterday I modified that one server port to see two different fiber ports on the disk array, so that if one of those array ports went offline it wouldn't take down the device at the host level.

It just so happened that I saw pvs didn't display the multipath device as the PV even though start of multipathd created it. It was the vgscan that made pvs display the multipath instead of the single path device. (That is to say one has to run the multipathd restart so multipath sees and creates the multipath devices in /dev/mapper then run vgscan so LVM sees that multipath device instead of the single path.)
 
Old 03-13-2017, 09:59 AM   #25
Hsingh (Original Poster)
Running "vgscan" did not fix the issue. Here is the output for one of the volume groups:

Code:
sudo vgdisplay vg07 -v
    Using volume group(s) on command line
    Finding volume group "vg07"
   --- Volume group ---
  VG Name               vg07
  System ID             
  Format                lvm2
  Metadata Areas        3
  Metadata Sequence No  6
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               1
  Max PV                0
  Cur PV                3
  Act PV                3
  VG Size               699.98 GiB
  PE Size               4.00 MiB
  Total PE              179196
  Alloc PE / Size       179196 / 699.98 GiB
  Free  PE / Size       0 / 0   
  VG UUID               jwOo39-ed0B-NE2Z-vCLA-dFGM-2RHz-jOyafl
   
  --- Logical volume ---
  LV Path                /dev/vg07/fast_recovery_area
  LV Name                fast_recovery_area
  VG Name                vg07
  LV UUID                GH0qsu-U5x6-xAV1-JnsQ-KEDg-u26Z-NzhifJ
  LV Write Access        read/write
  LV Creation host, time denplpilatdb1, 2014-04-17 12:12:53 -0600
  LV Status              available
  # open                 1
  LV Size                699.98 GiB
  Current LE             179196
  Segments               3
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:19
   
  --- Physical volumes ---
  PV Name               /dev/mapper/mpathhp1     
  PV UUID               Sy4crl-Z14m-GDxS-EkYA-jy9n-tSrQ-mITUs2
  PV Status             allocatable
  Total PE / Free PE    58878 / 0
   
  PV Name               /dev/sdav1     
  PV UUID               YE4PrC-VMvs-dfhd-dQHt-efTj-0MzX-SkcdpO
  PV Status             allocatable
  Total PE / Free PE    69119 / 0
   
  PV Name               /dev/sdca1     
  PV UUID               onG5Th-UCXu-Br60-rT1W-6fIj-IOGj-SKNaNk
  PV Status             allocatable
  Total PE / Free PE    51199 / 0
 
Old 03-13-2017, 03:05 PM   #26
MensaWater
OK there are various layers of things happening here:

1) The disk array presents a LUN to the server over various paths, based on the number of array fiber ports and server fiber ports zoned together. All of this is typically done in the SAN switch. Per earlier settings we see you should have 8 paths for each LUN presented.

2) After re-scanning the fiber HBAs, the server creates a /dev/sd<alpha> for each path it finds to a SCSI disk, so it would create 8 of those for each array LUN presented because of the 8 paths noted above.

3) An sd<alpha> is a "SCSI disk" and can be partitioned so that it has sd<alpha><numeric> devices, where the numeric is the partition number.

4) When an sd is partitioned it should be done on one of the sd<alpha> devices (e.g. sdav). Partition 1 would be created after writing that via fdisk, parted or whatever tool you used (e.g. you would have sdav for the entire disk, sdav1 for the first partition, sdav2 for the second partition, etc.).

5) For multiple paths to the same LUN to be seen by Linux multipathing, they must all have the same ID. It is multipath.conf and multipathd that recognize this and create an mpath<alpha>. (Note the alpha for mpath is not going to be the same as for sd, for the obvious reason that there are more sd devices than mpath devices.)

6) If you want to add partitions to a multipathed disk, you must partition the sd<alpha>, not the multipath (mpath<alpha>) device.

7) Once you have added a partition to any of the sd<alpha> devices that are all paths to the same LUN on the array, you must run further commands so that the other sd<alpha> paths to that disk recognize they now have a partition, and so that the mpath<alpha> sees it:
a) Run "blockdev --flushbufs" on each of the sd<alpha> devices that are paths to the same disk.
b) Run "partprobe" so all of the sd<alpha><numeric> and mpath<alpha>p<numeric> devices are created (i.e. the partition appears under /dev, or /dev/mapper, for each).
c) Run "service multipathd restart" just to make sure it sees everything.

8) Once you see the mpath<alpha>p<numeric> devices have been created, run "vgscan", which should pick up the mpath<alpha>p<numeric> devices.

To recap: you must have what is expected at each level above. You can't skip steps.
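The flush, partprobe, multipathd restart, vgscan sequence above can be sketched as a script. This is a dry run that only echoes each command; the sd names are hypothetical examples of eight paths to one LUN:

```shell
# Dry run of the post-partitioning sequence: flush each sd path, re-read
# partition tables, restart multipathd, then rescan VGs. Every command is
# echoed rather than executed; the sd names are made-up examples.
paths="sdj sdw sdaj sdaw sdbk sdbx sdck sdcx"   # 8 example paths to one LUN
for d in $paths; do
    echo blockdev --flushbufs "/dev/$d"         # flush stale buffers per path
done
echo partprobe                                  # create sd*N and mpath*pN nodes
echo service multipathd restart                 # let multipathd pick them up
echo vgscan                                     # LVM re-reads, prefers mpath PVs
```

Dropping the echoes would actually run the commands, which needs root and should only be done after the partition table has been written on one of the sd devices.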

 
Old 04-26-2017, 02:43 PM   #27
Hsingh (Original Poster)
So after a while, I logged into the box today to try to fix the outstanding issue. I ran vgdisplay -v and pvs, and all the LUNs seem to have redundancy now!

# pvs
  PV                    VG                Fmt  Attr PSize   PFree
  /dev/mapper/mpathap1  vg01              lvm2 a--  725.00g 0
  /dev/mapper/mpathbp1  vg02              lvm2 a--  150.00g 0
  /dev/mapper/mpathcp1  vg03              lvm2 a--  34.99g  0
  /dev/mapper/mpathdp1  vg04              lvm2 a--  59.99g  0
  /dev/mapper/mpathep1  vg05              lvm2 a--  34.99g  0
  /dev/mapper/mpathfp1  vg06              lvm2 a--  229.99g 0
  /dev/mapper/mpathhp1  vg07              lvm2 a--  229.99g 0
  /dev/mapper/mpathip1  vg08              lvm2 a--  50.00g  0
  /dev/mapper/mpathjp1  vg07              lvm2 a--  270.00g 0
  /dev/mapper/mpathkp1  vg02              lvm2 a--  110.00g 0
  /dev/mapper/mpathmp1  vg02              lvm2 a--  105.00g 0
  /dev/mapper/mpathnp1  vg07              lvm2 a--  200.00g 0
  /dev/sdba2            vg_denplpilatdb1  lvm2 a--  247.62g 5.44g

The only explanation I can think of is that the last change of updating /etc/multipath.conf and restarting actually worked, but the server needed a reboot. It was rebooted on Mon Apr 10. Thanks for the help!
 
Old 06-09-2017, 04:20 AM   #28
voleg
Member
Registered: Oct 2013
Distribution: RedHat CentOS Fedora SuSE
Posts: 354
Once the configuration (/etc/lvm/lvm.conf and /etc/multipath.conf) is done and you are happy with it, please rebuild the initrd (saving the previous one as .old is a good idea). See: HOWTO LUNs on Linux using native tools.
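The backup-and-rebuild step can be sketched like this. It is a dry run that only echoes the commands, and it assumes a dracut-based release (such as RHEL 6) with the running kernel's initramfs under /boot:

```shell
# Echo (not run) the backup + rebuild of the current initramfs so that
# multipath.conf / lvm.conf changes take effect at early boot.
kver="$(uname -r)"
echo cp "/boot/initramfs-${kver}.img" "/boot/initramfs-${kver}.img.old"
echo dracut -f "/boot/initramfs-${kver}.img" "$kver"
```

Remove the echoes to run it for real as root; on older mkinitrd-based releases the filenames and rebuild command differ.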
 
  

