LinuxQuestions.org


neeravsingh 09-10-2009 02:36 AM

vgcreate command is changing /dev/mapper as PV Name in pvdisplay to /dev/dm-xx path
 
Hi,

I am using RHEL 4.8 with MPIO and an EMC CLARiiON array.
I am trying to create an LV on a 4 GB CLARiiON LUN.
These are the steps I followed:

pvcreate /dev/mapper/mpath28

pvdisplay:

"/dev/mapper/mpath28" is a new physical volume of "4.00 GB"
--- NEW Physical volume ---
PV Name /dev/mapper/mpath28
VG Name
PV Size 4.00 GB
Allocatable NO
PE Size (KByte) 0
Total PE 0
Free PE 0
Allocated PE 0
PV UUID drAP7S-GqmQ-x3fw-uzUu-ciqZ-5koX-36Ak0P


vgcreate vol11 /dev/mapper/mpath28

Volume group "vol11" successfully created

vgdisplay:

--- Volume group ---
VG Name vol11
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 1
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 0
Open LV 0
Max PV 0
Cur PV 1
Act PV 1
VG Size 4.00 GB
PE Size 4.00 MB
Total PE 1023
Alloc PE / Size 0 / 0
Free PE / Size 1023 / 4.00 GB
VG UUID B1ig0I-RQPt-vuZv-UET0-Ugmn-dYo7-Ns2Wk3

Now, if I issue pvdisplay, I get a /dev/dm-xx path instead of the /dev/mapper/ one:

--- Physical volume ---
PV Name /dev/dm-30
VG Name vol11
PV Size 4.00 GB / not usable 4.00 MB
Allocatable yes
PE Size (KByte) 4096
Total PE 1023
Free PE 1023
Allocated PE 0
PV UUID drAP7S-GqmQ-x3fw-uzUu-ciqZ-5koX-36Ak0P

I made the following changes to the lvm.conf file:

filter = [ "a/dev/mapper/mpath*/", "r/dev/dm-.*/" ]
type = ["device-mapper", 1]

I need to fix this issue urgently. Please let me know what I missed.

Thanks,
Neerav Singh

jonesr 09-11-2009 05:37 PM

Per the DM-Multipath manual, "Any devices of the form /dev/dm-n are for internal use only and should never be used." But nothing is really broken; it's just that the tools (unfortunately) display that path.

If you missed anything, it is only that /dev/dm-30 is an alias; it does not indicate that the PV was created incorrectly.
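To reassure yourself that the two names refer to the same device, you could compare them directly. This is only a sketch and needs a live LVM/multipath system to run (the device names and UUID come from the post above):

```shell
# /dev/dm-30 and /dev/mapper/mpath28 are two names for one block device.
ls -l /dev/mapper/mpath28    # node or symlink for the same dm device
pvs -o pv_name,pv_uuid       # the UUID identifies the PV, whatever name is shown
```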

elcody02 09-12-2009 04:40 AM

Just change the lines in /etc/lvm/lvm.conf to the following
Quote:

# If several entries in the scanned directories correspond to the
# same block device and the tools need to display a name for device,
# all the pathnames are matched against each item in the following
# list of regular expressions in turn and the first match is used.
#preferred_names = [ ]

# Try to avoid using undescriptive /dev/dm-N names, if present.
preferred_names = [ "^/dev/mpath/", "^/dev/mapper/mpath", "^/dev/[hs]d" ]
Basically you need to keep preferred_names = [ ] commented out and uncomment preferred_names = [ "^/dev/mpath/", "^/dev/mapper/mpath", "^/dev/[hs]d" ]. LVM will then look first for names under /dev/mpath or /dev/mapper/mpath, and those will be the names shown for the PVs.
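As a quick way to see which names those regexes prefer, you can test the patterns against some device paths with grep. The device names below are made up for illustration, combined into one extended regex:

```shell
# The three preferred_names patterns from lvm.conf, merged into one
# extended regex.  Hypothetical device names, not from a real system.
for dev in /dev/dm-30 /dev/mapper/mpath28 /dev/sda1; do
  if echo "$dev" | grep -qE '^/dev/(mpath/|mapper/mpath|[hs]d)'; then
    echo "$dev: preferred"
  else
    echo "$dev: not preferred"
  fi
done
# prints:
# /dev/dm-30: not preferred
# /dev/mapper/mpath28: preferred
# /dev/sda1: preferred
```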

Hope this helps.

Have fun.

neeravsingh 09-14-2009 01:01 AM

Thanks a lot, elcody02; your suggestion really solved my problem. It is showing the /dev/mapper paths as PV names now.

emrebilmuh 09-14-2009 09:23 AM

Quote:

Originally Posted by elcody02 (Post 3679776)

Basically you need to keep preferred_names = [ ] commented out and uncomment preferred_names = [ "^/dev/mpath/", "^/dev/mapper/mpath", "^/dev/[hs]d" ]. LVM will then look first for names under /dev/mpath or /dev/mapper/mpath, and those will be the names shown for the PVs.

Hope this helps.

Have fun.


Hey elcody02,

I just came here from this topic with your suggestion:
changing pv with its multipath volume

If I change /etc/lvm/lvm.conf with the new "preferred_names" parameters and run pvs, will LVM recognize the existing physical volumes through their new device paths?

Also, does it matter whether I unmount the filesystems first? What exactly should I do to make this change with no data loss?

elcody02 09-15-2009 03:25 AM

Quote:

If I change /etc/lvm/lvm.conf with the new "preferred_names" parameters and run pvs, will LVM recognize the existing physical volumes through their new device paths?
Broadly, yes, it does, but see below.
Quote:

Also, does it matter whether I unmount the filesystems first? What exactly should I do to make this change with no data loss?
I would hope that LVM picks up this change without anything else needing to change, but I'm not sure.

To be on the safe side I would do as follows. Change the /etc/lvm/lvm.conf as described.

Then
  1. Stop the services that access the LVM volumes
  2. Unmount the LVM-backed filesystems
  3. Deactivate all VGs (vgchange -an)
  4. Rescan the VGs (vgscan)
  5. Activate the VGs (vgchange -ay)
  6. Remount the LVM-backed filesystems
  7. Restart the services
or just reboot.
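The restart sequence above, written out as a shell sketch. These commands need root and a live LVM setup, so take it as an illustration rather than a tested script; the service name and mount point are placeholders:

```shell
#!/bin/sh
set -e                      # stop on the first failure

service myapp stop          # 1. stop services using LVM volumes (placeholder name)
umount /mnt/lvmdata         # 2. unmount LVM-backed filesystems (placeholder mount)
vgchange -an                # 3. deactivate all volume groups
vgscan                      # 4. rescan so the changed lvm.conf takes effect
vgchange -ay                # 5. reactivate the volume groups
mount /mnt/lvmdata          # 6. remount
service myapp start         # 7. restart the services
```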

If you're using device-mapper-multipath and LVM for the root filesystem, you might want to rebuild the initrd and pray that the rather limited mkinitrd tool recognizes the changed lvm.conf.

But be warned: this might cause problems. So if the root filesystem is on device-mapper-multipath and LVM, be extremely careful and always test by rebooting (and back up the old initrd so you have a way back).

Hope this helps a little.

neeravsingh 03-23-2010 04:05 AM

Hi,

I am facing the same issue again with the /dev/mapper paths, this time on SUSE Linux. I changed the preferred_names attribute in the /etc/lvm/lvm.conf file, but still no luck. Please note that when I initially posted this problem it was for Red Hat, and it was resolved by changing the preferred_names attribute as suggested above. Do I need to do something extra on SUSE? A few things from my /etc/lvm/lvm.conf file:

# The first expression found to match a device name determines if
# the device will be accepted or rejected (ignored). Devices that
# don't match any patterns are accepted.

#preferred_names = [ ]
# Try to avoid using undescriptive /dev/dm-N names, if present.
preferred_names = [ "^/dev/mapper/", "^/dev/mpath/", "^/dev/[hs]d" ]

# Remember to run vgscan after you change this parameter to ensure
# that the cache file gets regenerated (see below).

# By default we accept every block device except udev names:
#filter = [ "a/.*/" ]
filter = [ "a/dev/mpath/*/", "r/.*/" ]
# Exclude the cdrom drive
# filter = [ "r|/dev/cdrom|" ]

types = [ "device-mapper", 1]


What I suspect is that changes made to the lvm.conf file are not being picked up by the scan commands.
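One thing that may be worth sanity-checking locally: LVM filter entries are unanchored regexes, so (assuming the pattern text of "a/dev/mpath/*/" is read as dev/mpath/*, where the trailing /* matches zero or more slashes) the accept rule effectively looks for the substring dev/mpath. Multipath nodes that live only under /dev/mapper would not match it and would fall through to the reject-everything rule. A grep sketch with hypothetical device names; this is an observation about the regex, not a confirmed diagnosis:

```shell
# Assumed accept regex from filter = [ "a/dev/mpath/*/", "r/.*/" ]:
# "dev/mpath/*" (unanchored).  Paths that don't match fall through to
# the reject-all rule.  Hypothetical device names:
for dev in /dev/mpath/mpath28 /dev/mapper/mpath28 /dev/dm-30; do
  if echo "$dev" | grep -qE 'dev/mpath/*'; then
    echo "$dev: accepted"
  else
    echo "$dev: rejected by r/.*/"
  fi
done
# prints:
# /dev/mpath/mpath28: accepted
# /dev/mapper/mpath28: rejected by r/.*/
# /dev/dm-30: rejected by r/.*/
```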

