Linux - Enterprise. This forum is for all items relating to using Linux in the Enterprise.
I'm assuming the only edit you're doing is to remove the wwid with asterisk you asked about.
Be sure you only remove the line:
wwid "*"
Do NOT remove the "}" directly above or below that line. The one above closes the entry above it, and the one below closes the entire blacklist section, so both are required.
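For illustration, the relevant part of multipath.conf might look like the sketch below (the device entry shown is hypothetical; only the wwid "*" line should go):

```
blacklist {
        device {
                vendor  "SomeVendor"     # hypothetical entry
                product "SomeProduct"
        }                                # end of the device entry above - keep
        wwid "*"                         # remove (or comment out) only this line
}                                        # end of the blacklist section - keep
```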
Made the change in multipath.conf and did "service multipathd restart". There was no disruption.
Unfortunately it did not resolve the issue. The SAN devices still show up as local SD devices.
Code:
sudo pvdisplay
Found duplicate PV cllQixBtG36oy39iVPdzmj8GFAhtX7ww: using /dev/sday1 not /dev/sdbm1
Found duplicate PV onG5ThUCXuBr60rT1W6fIjIOGjSKNaNk: using /dev/sdaz1 not /dev/sdbn1
Found duplicate PV OGHvsLvxJ1yrRGyPHl2dFaKVSwQoa1BG: using /dev/sdak1 not /dev/sdax1
Found duplicate PV YE4PrCVMvsdfhddQHtefTj0MzXSkcdpO: using /dev/sdcw1 not /dev/sdai1
Found duplicate PV YE4PrCVMvsdfhddQHtefTj0MzXSkcdpO: using /dev/sdv1 not /dev/sdcw1
Found duplicate PV cllQixBtG36oy39iVPdzmj8GFAhtX7ww: using /dev/sdal1 not /dev/sday1
Found duplicate PV onG5ThUCXuBr60rT1W6fIjIOGjSKNaNk: using /dev/sdam1 not /dev/sdaz1
Found duplicate PV OGHvsLvxJ1yrRGyPHl2dFaKVSwQoa1BG: using /dev/sdcy1 not /dev/sdak1
Found duplicate PV OGHvsLvxJ1yrRGyPHl2dFaKVSwQoa1BG: using /dev/sdx1 not /dev/sdcy1
Found duplicate PV YE4PrCVMvsdfhddQHtefTj0MzXSkcdpO: using /dev/sdcj1 not /dev/sdv1
Found duplicate PV cllQixBtG36oy39iVPdzmj8GFAhtX7ww: using /dev/sdcz1 not /dev/sdal1
Found duplicate PV YE4PrCVMvsdfhddQHtefTj0MzXSkcdpO: using /dev/sdi1 not /dev/sdcj1
Found duplicate PV cllQixBtG36oy39iVPdzmj8GFAhtX7ww: using /dev/sdy1 not /dev/sdcz1
Found duplicate PV onG5ThUCXuBr60rT1W6fIjIOGjSKNaNk: using /dev/sdda1 not /dev/sdam1
Found duplicate PV onG5ThUCXuBr60rT1W6fIjIOGjSKNaNk: using /dev/sdz1 not /dev/sdda1
Found duplicate PV OGHvsLvxJ1yrRGyPHl2dFaKVSwQoa1BG: using /dev/sdcl1 not /dev/sdx1
Found duplicate PV OGHvsLvxJ1yrRGyPHl2dFaKVSwQoa1BG: using /dev/sdk1 not /dev/sdcl1
Found duplicate PV YE4PrCVMvsdfhddQHtefTj0MzXSkcdpO: using /dev/sdbw1 not /dev/sdi1
Found duplicate PV cllQixBtG36oy39iVPdzmj8GFAhtX7ww: using /dev/sdcm1 not /dev/sdy1
Found duplicate PV cllQixBtG36oy39iVPdzmj8GFAhtX7ww: using /dev/sdl1 not /dev/sdcm1
Found duplicate PV onG5ThUCXuBr60rT1W6fIjIOGjSKNaNk: using /dev/sdcn1 not /dev/sdz1
Found duplicate PV onG5ThUCXuBr60rT1W6fIjIOGjSKNaNk: using /dev/sdm1 not /dev/sdcn1
Found duplicate PV OGHvsLvxJ1yrRGyPHl2dFaKVSwQoa1BG: using /dev/sdby1 not /dev/sdk1
Found duplicate PV YE4PrCVMvsdfhddQHtefTj0MzXSkcdpO: using /dev/sdbj1 not /dev/sdbw1
Found duplicate PV cllQixBtG36oy39iVPdzmj8GFAhtX7ww: using /dev/sdbz1 not /dev/sdl1
Found duplicate PV onG5ThUCXuBr60rT1W6fIjIOGjSKNaNk: using /dev/sdca1 not /dev/sdm1
Found duplicate PV YE4PrCVMvsdfhddQHtefTj0MzXSkcdpO: using /dev/sdav1 not /dev/sdbj1
Found duplicate PV OGHvsLvxJ1yrRGyPHl2dFaKVSwQoa1BG: using /dev/sdbl1 not /dev/sdby1
--- Physical volume ---
PV Name /dev/sdba2
VG Name vg_denplpilatdb1
PV Size 247.63 GiB / not usable 2.00 MiB
Allocatable yes
PE Size 4.00 MiB
Total PE 63392
Free PE 1392
Allocated PE 62000
PV UUID nTLseC-cdOe-wA9r-J7xo-sqTE-G0yc-qoi7ug
--- Physical volume ---
PV Name /dev/mapper/mpathip1
VG Name vg08
PV Size 50.00 GiB / not usable 3.31 MiB
Allocatable yes (but full)
PE Size 4.00 MiB
Total PE 12799
Free PE 0
Allocated PE 12799
PV UUID PI3zJz-V1fW-OTPw-yMQu-4uOz-D6SP-0yvTsr
--- Physical volume ---
PV Name /dev/mapper/mpathhp1
VG Name vg07
PV Size 230.00 GiB / not usable 3.38 MiB
Allocatable yes (but full)
PE Size 4.00 MiB
Total PE 58878
Free PE 0
Allocated PE 58878
PV UUID Sy4crl-Z14m-GDxS-EkYA-jy9n-tSrQ-mITUs2
--- Physical volume ---
PV Name /dev/sdav1
VG Name vg07
PV Size 270.00 GiB / not usable 4.00 MiB
Allocatable yes (but full)
PE Size 4.00 MiB
Total PE 69119
Free PE 0
Allocated PE 69119
PV UUID YE4PrC-VMvs-dfhd-dQHt-efTj-0MzX-SkcdpO
--- Physical volume ---
PV Name /dev/sdca1
VG Name vg07
PV Size 200.00 GiB / not usable 4.00 MiB
Allocatable yes (but full)
PE Size 4.00 MiB
Total PE 51199
Free PE 0
Allocated PE 51199
PV UUID onG5Th-UCXu-Br60-rT1W-6fIj-IOGj-SKNaNk
--- Physical volume ---
PV Name /dev/mapper/mpathfp1
VG Name vg06
PV Size 230.00 GiB / not usable 3.38 MiB
Allocatable yes (but full)
PE Size 4.00 MiB
Total PE 58878
Free PE 0
Allocated PE 58878
PV UUID eHQZp3-kBcI-xdvS-pCi2-W93d-ec0R-d7P5n7
--- Physical volume ---
PV Name /dev/mapper/mpathcp1
VG Name vg03
PV Size 34.99 GiB / not usable 4.45 MiB
Allocatable yes (but full)
PE Size 4.00 MiB
Total PE 8957
Free PE 0
Allocated PE 8957
PV UUID XWz3qn-VZy9-iFdD-kBwu-1MKx-ZK4N-1d2BUQ
--- Physical volume ---
PV Name /dev/mapper/mpathep1
VG Name vg05
PV Size 34.99 GiB / not usable 4.45 MiB
Allocatable yes (but full)
PE Size 4.00 MiB
Total PE 8957
Free PE 0
Allocated PE 8957
PV UUID McogLs-TH0n-lSCj-KYdS-LM1F-XLlN-mvBekn
--- Physical volume ---
PV Name /dev/mapper/mpathdp1
VG Name vg04
PV Size 60.00 GiB / not usable 4.04 MiB
Allocatable yes (but full)
PE Size 4.00 MiB
Total PE 15358
Free PE 0
Allocated PE 15358
PV UUID zlGH5H-ZZM8-ZM59-f4Qr-lIVG-dddt-TOVk5N
--- Physical volume ---
PV Name /dev/mapper/mpathap1
VG Name vg01
PV Size 725.00 GiB / not usable 4.00 MiB
Allocatable yes (but full)
PE Size 4.00 MiB
Total PE 185599
Free PE 0
Allocated PE 185599
PV UUID uCligC-UPEp-xCua-gNVr-F29C-022X-ezpMxA
--- Physical volume ---
PV Name /dev/mapper/mpathbp1
VG Name vg02
PV Size 150.00 GiB / not usable 2.00 MiB
Allocatable yes (but full)
PE Size 4.00 MiB
Total PE 38399
Free PE 0
Allocated PE 38399
PV UUID VfY4RS-4gPw-BhnS-xp2d-cPCv-2XXF-0g2kus
--- Physical volume ---
PV Name /dev/sdbl1
VG Name vg02
PV Size 105.00 GiB / not usable 4.00 MiB
Allocatable yes (but full)
PE Size 4.00 MiB
Total PE 26879
Free PE 0
Allocated PE 26879
PV UUID OGHvsL-vxJ1-yrRG-yPHl-2dFa-KVSw-Qoa1BG
--- Physical volume ---
PV Name /dev/sdbz1
VG Name vg02
PV Size 110.00 GiB / not usable 4.00 MiB
Allocatable yes (but full)
PE Size 4.00 MiB
Total PE 28159
Free PE 0
Allocated PE 28159
PV UUID cllQix-BtG3-6oy3-9iVP-dzmj-8GFA-htX7ww
No changes were made to lvm.conf. I only changed multipath.conf, commenting out wwid "*" in the blacklist section.
The reason I asked about lvm.conf is that it's possible your filter line is set in such a way that LVM isn't looking at some of your multipath devices.
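As a hypothetical example of what such a filter could look like (patterns are illustrative, not from this server), a line like this in /etc/lvm/lvm.conf would accept multipath maps and the internal disk while rejecting the individual sd paths:

```
# /etc/lvm/lvm.conf - hypothetical example; adjust patterns to your hardware
filter = [ "a|/dev/mapper/mpath.*|", "a|/dev/sda.*|", "r|.*|" ]
```

If instead the filter rejects /dev/mapper devices, pvs will keep showing the single-path sd devices.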
Also, I see you're using partitions on your devices as PVs. It may be that you don't have the multipath partition devices (e.g. mpathe = whole disk, mpathep1 = first partition on mpathe). The multipath command outputs only the whole-disk info, but you can see the partition devices in /dev/mapper.
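A quick way to check, sketched below (device names are hypothetical; yours will differ):

```shell
multipath -ll        # shows only the whole-disk multipath maps
ls -l /dev/mapper    # should also list the partition maps, e.g.:
                     #   mpathe    <- whole disk
                     #   mpathep1  <- first partition on mpathe
```

If the mpath<alpha>p<numeric> entries are missing from /dev/mapper, LVM has nothing to use in place of the sd partitions.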
I just did a test and found that vgscan makes LVM use the multipath device instead of the single-path sd device (assuming the multipath device exists - you said you have one more):
On test system vg_VolGroup01 uses sdc from SAN:
[root]# pvs
PV VG Fmt Attr PSize PFree
/dev/sda2 vg_VolGroup00 lvm2 a-- 278.38g 175.84g
/dev/sdc vg_VolGroup01 lvm2 a-- 1024.00g 0
After modifying multipath.conf and restarting multipathd it still displayed the above.
I then ran vgscan, which output:
[root]# vgscan
Reading all physical volumes. This may take a while...
Found volume group "vg_VolGroup01" using metadata type lvm2
Found volume group "vg_VolGroup00" using metadata type lvm2
After that it shows the multipath device instead of the sd device:
[root]# pvs
PV VG Fmt Attr PSize PFree
/dev/mapper/mpatha vg_VolGroup01 lvm2 a-- 1024.00g 0
/dev/sda2 vg_VolGroup00 lvm2 a-- 278.38g 175.84g
In the above output sda2 is on our internal RAID controller (PERC), so it doesn't use multipath.
Last edited by MensaWater; 03-02-2017 at 01:43 PM.
It was the initial multipath setup for this server. This server didn't have the disk array entry in Devices because multipath wasn't previously running at all: only 1 fiber port on the server was zoned to 1 port on the disk array. During maintenance yesterday I wanted to modify that one server port to see 2 different fiber ports on the disk array, so that when one of those array ports went offline it wouldn't take down the device at the host level.
It just so happened that I noticed pvs didn't display the multipath device as the PV even though starting multipathd had created it. It was vgscan that made pvs display the multipath device instead of the single-path device. (That is to say, one has to restart multipathd so multipath sees and creates the multipath devices in /dev/mapper, then run vgscan so LVM uses that multipath device instead of the single path.)
Running "vgscan" did not fix the issue. Here is the output for one of the volume groups:
Code:
sudo vgdisplay vg07 -v
Using volume group(s) on command line
Finding volume group "vg07"
--- Volume group ---
VG Name vg07
System ID
Format lvm2
Metadata Areas 3
Metadata Sequence No 6
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 1
Open LV 1
Max PV 0
Cur PV 3
Act PV 3
VG Size 699.98 GiB
PE Size 4.00 MiB
Total PE 179196
Alloc PE / Size 179196 / 699.98 GiB
Free PE / Size 0 / 0
VG UUID jwOo39-ed0B-NE2Z-vCLA-dFGM-2RHz-jOyafl
--- Logical volume ---
LV Path /dev/vg07/fast_recovery_area
LV Name fast_recovery_area
VG Name vg07
LV UUID GH0qsu-U5x6-xAV1-JnsQ-KEDg-u26Z-NzhifJ
LV Write Access read/write
LV Creation host, time denplpilatdb1, 2014-04-17 12:12:53 -0600
LV Status available
# open 1
LV Size 699.98 GiB
Current LE 179196
Segments 3
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 252:19
--- Physical volumes ---
PV Name /dev/mapper/mpathhp1
PV UUID Sy4crl-Z14m-GDxS-EkYA-jy9n-tSrQ-mITUs2
PV Status allocatable
Total PE / Free PE 58878 / 0
PV Name /dev/sdav1
PV UUID YE4PrC-VMvs-dfhd-dQHt-efTj-0MzX-SkcdpO
PV Status allocatable
Total PE / Free PE 69119 / 0
PV Name /dev/sdca1
PV UUID onG5Th-UCXu-Br60-rT1W-6fIj-IOGj-SKNaNk
PV Status allocatable
Total PE / Free PE 51199 / 0
OK, there are various layers of things happening here:
1) The disk array presents a LUN to the server over various paths, based on the number of array fiber ports and server fiber ports zoned together. All of this is typically done via zoning in the SAN switch.
Per your earlier settings, you should have 8 paths for each LUN presented.
2) After re-scanning the fiber HBAs, the server creates a /dev/sd<alpha> for each path it finds to a SCSI disk, so it would create 8 of those for each array LUN presented because of the 8 paths noted above.
3) An sd<alpha> is a "SCSI disk" and can be partitioned so that it has sd<alpha><numeric> devices, where the numeric is the partition number.
4) When an sd is partitioned it should be done on one of the sd<alpha> devices (e.g. sdav). Partition 1 would be created after writing the partition table via fdisk, parted or whatever tool you use (e.g. you would have sdav for the entire disk, sdav1 for the first partition, sdav2 for the second partition, etc...).
5) For multiple paths to the same LUN to be seen by Linux multipathing, they must all have the same ID. It is multipath.conf and multipathd that recognize this and create an mpath<alpha>. (Note the alpha for mpath is not going to be the same as for sd, for the obvious reason that there are more sd devices than mpath devices.)
6) For multiple paths to a disk, if you want to add partitions you must partition the sd<alpha> device and not the multipath (mpath<alpha>) device.
7) Once you have added a partition to any of the sd<alpha> devices that are all paths to the same LUN on the array, you must run other commands so that the other sd<alpha> paths to that disk recognize they now have a partition, and also so that the mpath<alpha> sees it:
a) Run "blockdev --flushbufs" on each of the sd<alpha> devices that are paths to the same disk.
b) Run "partprobe" so all of the sd<alpha><numeric> and mpath<alpha>p<numeric> devices are created (i.e. the partition device is created under /dev, or /dev/mapper, for each).
c) Run "service multipathd restart" just to make sure it sees everything.
8) Once you see the mpath<alpha>p<numeric> devices have been created, run "vgscan", which should pick up the mpath<alpha>p<numeric>.
To recap: you must have what is expected at each level above. You can't skip steps.
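The flushbufs / partprobe / multipathd-restart / vgscan sequence above can be sketched as a shell snippet (device names are hypothetical; substitute the sd paths for your LUN):

```shell
# Flush buffers on every sd path to the same LUN (hypothetical names)
for dev in /dev/sdav /dev/sdca /dev/sdi; do
    blockdev --flushbufs "$dev"
done

partprobe                     # re-read partition tables so sd<alpha><n> / mpath<alpha>p<n> appear
service multipathd restart    # make sure multipathd sees and maps everything
ls /dev/mapper                # confirm the mpath<alpha>p<numeric> devices now exist
vgscan                        # make LVM pick up the multipath partition devices
```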
Last edited by MensaWater; 03-15-2017 at 08:18 AM.
The only explanation I can think of is that the last change of updating /etc/multipath.conf and restarting actually worked, but the server needed a reboot. It was rebooted on Mon Apr 10. Thanks for the help!
Once the configuration (/etc/lvm/lvm.conf and /etc/multipath.conf) is done and you're happy with it, please rebuild the initrd (saving the previous one as .old is a good idea). HOWTO LUNs on Linux using native tools
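On a RHEL-family system using dracut, the rebuild might look like this sketch (kernel version taken from uname; not the exact commands from this thread):

```shell
# Save the current initramfs, then rebuild it so the new
# multipath/LVM configuration is included at boot time
cp /boot/initramfs-$(uname -r).img /boot/initramfs-$(uname -r).img.old
dracut -f /boot/initramfs-$(uname -r).img $(uname -r)
```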