LinuxQuestions.org


shadow-fmx 06-23-2015 04:25 AM

df command output shows partly dm-X devices and partly lvm mapper devices,
 
Hi all,

After upgrading our VMware template server from Debian Wheezy to Debian Jessie,
the output of the df command seems broken with regard to the root and /usr mounts.
We are using LVM on this machine.
I've been searching for quite some time for a solution, but my lack of Linux knowledge is keeping me from getting it fixed. The system seems to run fine without issues.

I can temporarily resolve the issue (until the next boot) by doing a lazy umount followed by a fresh mount of, for example, /usr, but I have no clue how to do this for the root partition.
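
Roughly these commands (mount then picks /usr up from fstab again):

Code:

umount -l /usr
mount /usr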

I found a thread saying something has changed in the way df presents the devices; however, that is not consistent with our case (it shows a mix of both), and it seems to go wrong at an earlier stage, during boot.

Can anyone help me figure out what is going on with the mounting during the boot phase and, better yet, how to fix it? Either showing all dm devices or all mapper devices is fine with me, although I have a slight preference for the mapper names, as they make it more obvious that LVM is being used.

Please find some outputs below.
References I found already, though they do not entirely match our issue:
https://groups.google.com/forum/#!to...st/QxmQtwfYpr8


Any input is appreciated.

Code:

df -h
Filesystem          Size  Used Avail Use% Mounted on
/dev/dm-0            922M  391M  467M  46% /
udev                  10M    0  10M  0% /dev
tmpfs                403M  5.7M  397M  2% /run
/dev/dm-2            3.7G  1.1G  2.4G  31% /usr
tmpfs              1006M    0 1006M  0% /dev/shm
tmpfs                5.0M    0  5.0M  0% /run/lock
tmpfs              1006M    0 1006M  0% /sys/fs/cgroup
/dev/mapper/vg0-srv  42G  48M  39G  1% /srv
/dev/mapper/vg0-tmp  922M  1.2M  857M  1% /tmp
/dev/mapper/vg0-var  1.8G  696M  1.1G  41% /var
tmpfs                202M    0  202M  0% /run/user/0

The mount command displays the correct mount information:
Code:

mount
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
udev on /dev type devtmpfs (rw,relatime,size=10240k,nr_inodes=255132,mode=755)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,relatime,size=411740k,mode=755)
/dev/mapper/vg0-root on / type ext4 (rw,relatime,errors=remount-ro,data=ordered)
/dev/mapper/vg0-usr on /usr type ext4 (rw,relatime,data=ordered)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
tmpfs on /run/lock type tmpfs (rw,nosuid,nodev,noexec,relatime,size=5120k)
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/lib/systemd/systemd-cgroups-agent,name=systemd)
pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpu,cpuacct)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls,net_prio)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event)
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=22,pgrp=1,timeout=300,minproto=5,maxproto=5,direct)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime)
mqueue on /dev/mqueue type mqueue (rw,relatime)
debugfs on /sys/kernel/debug type debugfs (rw,relatime)
/dev/mapper/vg0-srv on /srv type ext4 (rw,relatime,data=ordered)
/dev/mapper/vg0-tmp on /tmp type ext4 (rw,relatime,data=ordered)
/dev/mapper/vg0-var on /var type ext4 (rw,relatime,data=ordered)
rpc_pipefs on /run/rpc_pipefs type rpc_pipefs (rw,relatime)
tmpfs on /run/user/0 type tmpfs (rw,nosuid,nodev,relatime,size=205872k,mode=700)

However, the /proc/mounts file shows the incorrect output:
Code:

cat /proc/mounts
rootfs / rootfs rw 0 0
sysfs /sys sysfs rw,nosuid,nodev,noexec,relatime 0 0
proc /proc proc rw,nosuid,nodev,noexec,relatime 0 0
udev /dev devtmpfs rw,relatime,size=10240k,nr_inodes=255132,mode=755 0 0
devpts /dev/pts devpts rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000 0 0
tmpfs /run tmpfs rw,nosuid,relatime,size=411740k,mode=755 0 0
/dev/dm-0 / ext4 rw,relatime,errors=remount-ro,data=ordered 0 0
/dev/dm-2 /usr ext4 rw,relatime,data=ordered 0 0
securityfs /sys/kernel/security securityfs rw,nosuid,nodev,noexec,relatime 0 0
tmpfs /dev/shm tmpfs rw,nosuid,nodev 0 0
tmpfs /run/lock tmpfs rw,nosuid,nodev,noexec,relatime,size=5120k 0 0
tmpfs /sys/fs/cgroup tmpfs ro,nosuid,nodev,noexec,mode=755 0 0
cgroup /sys/fs/cgroup/systemd cgroup rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/lib/systemd/systemd-cgroups-agent,name=systemd 0 0
pstore /sys/fs/pstore pstore rw,nosuid,nodev,noexec,relatime 0 0
cgroup /sys/fs/cgroup/cpuset cgroup rw,nosuid,nodev,noexec,relatime,cpuset 0 0
cgroup /sys/fs/cgroup/cpu,cpuacct cgroup rw,nosuid,nodev,noexec,relatime,cpu,cpuacct 0 0
cgroup /sys/fs/cgroup/devices cgroup rw,nosuid,nodev,noexec,relatime,devices 0 0
cgroup /sys/fs/cgroup/freezer cgroup rw,nosuid,nodev,noexec,relatime,freezer 0 0
cgroup /sys/fs/cgroup/net_cls,net_prio cgroup rw,nosuid,nodev,noexec,relatime,net_cls,net_prio 0 0
cgroup /sys/fs/cgroup/blkio cgroup rw,nosuid,nodev,noexec,relatime,blkio 0 0
cgroup /sys/fs/cgroup/perf_event cgroup rw,nosuid,nodev,noexec,relatime,perf_event 0 0
systemd-1 /proc/sys/fs/binfmt_misc autofs rw,relatime,fd=22,pgrp=1,timeout=300,minproto=5,maxproto=5,direct 0 0
hugetlbfs /dev/hugepages hugetlbfs rw,relatime 0 0
mqueue /dev/mqueue mqueue rw,relatime 0 0
debugfs /sys/kernel/debug debugfs rw,relatime 0 0
/dev/mapper/vg0-srv /srv ext4 rw,relatime,data=ordered 0 0
/dev/mapper/vg0-tmp /tmp ext4 rw,relatime,data=ordered 0 0
/dev/mapper/vg0-var /var ext4 rw,relatime,data=ordered 0 0
rpc_pipefs /run/rpc_pipefs rpc_pipefs rw,relatime 0 0
tmpfs /run/user/0 tmpfs rw,nosuid,nodev,relatime,size=205872k,mode=700 0 0

The fstab file also seems correct:
Code:

cat /etc/fstab
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point>  <type>  <options>      <dump>  <pass>
/dev/mapper/vg0-root /              ext4    errors=remount-ro 0      1
/dev/mapper/vg0-srv /srv            ext4    defaults        0      2
/dev/mapper/vg0-tmp /tmp            ext4    defaults        0      2
/dev/mapper/vg0-usr /usr            ext4    defaults        0      2
/dev/mapper/vg0-var /var            ext4    defaults        0      2
/dev/mapper/vg0-swap none            swap    sw              0      0
/dev/sr0        /media/cdrom0  udf,iso9660 user,noauto    0      0
#/dev/fd0        /media/floppy0  auto    rw,user,noauto  0      0


nooneknowme 06-23-2015 11:17 PM

You might need to look at the udev rules for dm and LVM (especially if there are any blacklists). Also check lvm.conf for any hints.
Also check the multipath configuration file if multipathd is running.

If everything looks OK, run the Debian equivalent of "udevadm trigger".
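
For example, something along these lines (untested, and the rule locations may differ on Debian):

Code:

# look for udev rules that mention device-mapper or LVM
grep -rl -e dm- -e lvm /etc/udev/rules.d/ /lib/udev/rules.d/

# re-trigger udev events for block devices
udevadm trigger --subsystem-match=block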

NOTE: I am not very familiar with Debian. However, I think this should give you some hints for troubleshooting.

shadow-fmx 06-24-2015 01:50 AM

Hi nooneknowme,

I need to delve into this udev stuff. I have been reading a little about it, but unfortunately the details of the Linux boot sequence are still very unclear to me.
Some things happen in the initramfs, which needs to be rebuilt after changes to the udev config, if I understand correctly.
This is not yet in my league :)
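
If I understand correctly, rebuilding it on Debian would be something like this (for the currently running kernel):

Code:

update-initramfs -u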

Maybe someone can explain the details of the Debian boot process a little more in depth, especially with regard to finding drives and the mounting process involving udev and multipathing?

By the way, I do not see any multipath daemon running.

Thanks!

syg00 06-24-2015 01:58 AM

Why do you care?
It just goes to prove you should be concentrating on the mount point, not the (simulated) device it resides on.

shadow-fmx 06-24-2015 06:16 AM

Quote:

Originally Posted by syg00 (Post 5382126)
Why do you care?
It just goes to prove you should be concentrating on the mount point, not the (simulated) device it resides on.

If you could tell me where to look, I would :)
But I have no clue where to start (noob).

Which service does the initial mounting, and where is its configuration hidden?
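
I guess something like this would at least list the mount units systemd knows about (just guessing here):

Code:

# list all active mount units
systemctl list-units --type=mount

# show how / and /usr were mounted and by what
systemctl status -- -.mount usr.mount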

shadow-fmx 06-25-2015 07:34 AM

Is there no one who can put me on the right track?

Some more info: I found some weird things going on in the dmesg log.
The two misbehaving mount points are mounted somewhere early in the boot process.
A little later they are re-mounted (?).
I do not see this behavior for the other logical volumes.

How can I analyze which service(s) are performing these activities during the boot process?
What is causing this behavior?

Code:

[    0.994887] sr 1:0:0:0: Attached scsi CD-ROM sr0
[    0.995400] sd 2:0:0:0: Attached scsi generic sg0 type 0
[    0.995429] sd 2:0:1:0: Attached scsi generic sg1 type 0
[    0.995454] sr 1:0:0:0: Attached scsi generic sg2 type 5
[    1.001538]  sda: sda1 sda2 < sda5 >
[    1.001763] sd 2:0:0:0: [sda] Attached SCSI disk
[    1.037472]  sdb: sdb1 < sdb5 >
[    1.037695] sd 2:0:1:0: [sdb] Attached SCSI disk
[    1.294507] device-mapper: uevent: version 1.0.3
[    1.294786] device-mapper: ioctl: 4.27.0-ioctl (2013-10-30) initialised: dm-devel@redhat.com
[    1.619025] PM: Starting manual resume from disk
[    1.619031] PM: Hibernation image partition 254:1 present
[    1.619033] PM: Looking for hibernation image.
[    1.619676] PM: Image not found (code -22)
[    1.619678] PM: Hibernation image not present or could not be loaded.

[    1.811845] EXT4-fs (dm-0): mounted filesystem with ordered data mode. Opts: (null)
[    3.288879] EXT4-fs (dm-2): mounted filesystem with ordered data mode. Opts: (null)


[    3.331395] random: nonblocking pool is initialized

snip

Code:

[    5.657373] input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3

[    5.673062] EXT4-fs (dm-2): re-mounted. Opts: (null)
[    5.705911] Adding 1949692k swap on /dev/mapper/vg0-swap.  Priority:-1 extents:1 across:1949692k FS


[    5.708398] EXT4-fs (dm-0): re-mounted. Opts: errors=remount-ro
[    5.715836] alg: No test for crc32 (crc32-pclmul)


[    6.134232] EXT4-fs (dm-3): mounted filesystem with ordered data mode. Opts: (null)
[    6.184102] EXT4-fs (dm-5): mounted filesystem with ordered data mode. Opts: (null)
[    6.239410] EXT4-fs (dm-4): mounted filesystem with ordered data mode. Opts: (null)


[    6.278804] systemd-journald[190]: Received request to flush runtime journal from PID 1
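
In the meantime I have been trying to trace the boot order with the commands below, though I have no idea yet whether this is the right approach:

Code:

# which units were involved in reaching the default target, in order
systemd-analyze critical-chain

# full log of the current boot (kernel and services interleaved)
journalctl -b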


JeremyBoden 06-26-2015 03:11 AM

Does LVM show the correct behaviour?
If there are no other problems, I would treat any discrepancies between /proc and mount as "spurious" and not in need of explanation.
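
e.g. whether the usual reporting commands print anything odd:

Code:

pvs
vgs
lvs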

shadow-fmx 06-26-2015 03:18 AM

Quote:

Originally Posted by JeremyBoden (Post 5383209)
Does LVM show the correct behaviour?
If there are no other problems, I would treat any discrepancies between /proc and mount as "spurious" and not in need of explanation.

Hi Jeremy,

It looks OK; lvs, pvs and vgs all give normal output.
I admit it seems to be something cosmetic, but I want to be sure, before I roll this out as a template machine, that it will not cause issues.
Soon there will be a few machines based on it running in a live environment, and then - as always - the shit shows up.

So if you guys know some commands/tests I can run to make sure all is OK, I guess I'll have to put my need/urgency for cleanliness aside :).


By the way:
I also tried installing from scratch (using the Jessie DVD ISO)... it exposes the same problem.
Even worse, now more drives show up as dm-X.
To me this looks like a bug somewhere, but I cannot pinpoint it exactly with my level of Debian knowledge.
I never had this issue with Wheezy or Squeeze.

JeremyBoden 06-26-2015 06:31 AM

So basically, you have a load of unused dm-X nodes plus some nodes properly used via /dev/mapper?

Never tried it, but the dmsetup command might be useful, especially:

Code:

dmsetup ls

Other options look severely dangerous!

shadow-fmx 06-26-2015 12:01 PM

Well, the mounts (nodes?) are all in use.
This command shows CORRECT results once again:

Code:

dmsetup ls
vg0-tmp (254:4)
vg0-swap        (254:1)
vg0-root        (254:0)
vg0-usr (254:2)
vg0-var (254:3)
vg0-srv (254:5)

It just seems to be the df command, which uses the /etc/mtab symlink (a link to /proc/mounts, which is holding the "incorrect" mount info).
Most other tooling does not seem to be bothered. Really weird, this...

Code:

df
Filesystem          1K-blocks    Used Available Use Mounted on
/dev/dm-0              943128  400044    477960  46% /
udev                    10240      0    10240  0% /dev
tmpfs                  411740    5768    405972  2% /run
/dev/dm-2            3776568 1101664  2463348  31% /usr
tmpfs                  801660      0    801660  0% /dev/shm
tmpfs                    5120      0      5120  0% /run/lock
tmpfs                1029344      0  1029344  0% /sys/fs/cgroup
/dev/mapper/vg0-var  1886280  711664  1060748  41% /var
/dev/mapper/vg0-srv  43121152  49032  40858644  1% /srv
/dev/mapper/vg0-tmp    943128    1228    876776  1% /tmp
tmpfs                  205872      0    205872  0% /run/user/0
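
For reference, the mtab symlink itself can be inspected with:

Code:

ls -l /etc/mtab
readlink -f /etc/mtab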


JeremyBoden 06-26-2015 02:11 PM

Do you have an ordinary (non-LVM) /boot partition?
If not, this could be the cause of your problems (possibly).

shadow-fmx 06-27-2015 02:05 AM

Quote:

Originally Posted by JeremyBoden (Post 5383406)
Do you have an ordinary (non-LVM) /boot partition?
If not, this could be the cause of your problems (possibly).

Hmm, not sure. How can I tell?

When installing Debian with LVM from the distro DVD, doesn't it take care of this automatically?
Boot has ID 83 (not 8e), so it looks OK (?)

Code:

fdisk -l output:

Disk /dev/sdb: 50 GiB, 53687091200 bytes, 104857600 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x00083bf7

Device    Boot Start      End  Sectors Size Id Type
/dev/sdb1        2046 104855551 104853506  50G  5 Extended
/dev/sdb5        2048 104855551 104853504  50G 8e Linux LVM

Disk /dev/sda: 8 GiB, 8589934592 bytes, 16777216 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x0009e766

Device    Boot  Start      End  Sectors  Size Id Type
/dev/sda1  *      2048  499711  497664  243M 83 Linux
/dev/sda2      501758 16775167 16273410  7.8G  5 Extended
/dev/sda5      501760 16775167 16273408  7.8G 8e Linux LVM


JeremyBoden 06-27-2015 08:48 AM

It's a bit small, but /dev/sda1 (only 243MB) could conceivably be your /boot partition.

But apparently my answer has been superseded. :)
You used to need a separate /boot for LVM, but GRUB2 can now handle LVM volumes.
You will still need an (ext2 etc.) partition to hold the GRUB2 files - I'd guess that's actually /dev/sda1.
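
You could confirm with something like:

Code:

# show partitions, filesystems, sizes and mount points
lsblk -o NAME,FSTYPE,SIZE,MOUNTPOINT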

Comment:-
Why do you have two disks - one is only 8GB and the other is only 50GB?

shadow-fmx 06-29-2015 03:44 AM

Quote:

Originally Posted by JeremyBoden (Post 5383666)
Comment:-
Why do you have two disks - one is only 8GB and the other is only 50GB?

Hi,
This is a template machine with only a basic configuration.
After deployment it is tweaked depending on its use (hence the use of LVM and so on).
A spam filter appliance or chat server does not require much space.

Why there are two disks is a little beyond the scope of this thread ;) there's some history behind it...

Any more thoughts on why this phenomenon occurs?

JeremyBoden 06-29-2015 01:30 PM

How about this for a total guess at this phenomenon?

During the boot process, / and /usr both need to be mounted early on (presumably from the initramfs), and these correspond to dm-0 and dm-2; at that point the mounts may be recorded under their kernel dm-X names.
When LVM is started it (just possibly) does a second mount of / and /usr, which would be consistent with the contents of /proc.

It may be related to the layout of your fstab entries?
I use LVM in a script to do backups on an external disk, so I don't have any LVM fstab entries; instead I use mount and umount commands.
These do not mention "mapper" at all - for example:
Code:

mount /dev/vgname/lvname /mnt/mountpoint -t ext4 -o rw

The above mount point shows up as mounted from
Code:

/dev/mapper/vgname-lvname

BTW, it's confusing if you use hyphenated LV names, as any single hyphens will get replaced by double hyphens!

Perhaps you could add a "test" LV with an fstab entry in the above style, just to see what happens?
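
Something like this, perhaps (untested; it assumes vg0 has a little free space, and "test" is just a throwaway name):

Code:

lvcreate -L 100M -n test vg0
mkfs.ext4 /dev/vg0/test
mkdir -p /mnt/test
echo '/dev/vg0/test /mnt/test ext4 defaults 0 2' >> /etc/fstab
mount /mnt/test
df | grep test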

