Linux - Newbie: This Linux forum is for members that are new to Linux.
Just starting out and have a question?
If it is not in the man pages or the how-to's this is the place!
Welcome to LinuxQuestions.org, a friendly and active Linux Community.
After upgrading our VMware template server from Debian wheezy to Debian jessie, the output of the df command seems broken with regard to the root and /usr mounts.
We are using LVM on this machine.
I've been searching for a solution for quite some time, but my lack of Linux knowledge is keeping me from fixing it. The system itself seems to run fine, without issues.
I can temporarily resolve the issue (until the next boot) with a lazy unmount followed by a remount, for example umount -l /usr; mount /usr, but I have no clue how to do this for the root partition.
I found a thread saying that something has changed in the way df presents the devices; however, it is not consistent in our case (it shows a mix of both naming styles), and it seems to go wrong at an earlier stage, during boot.
Who can help me figure out what is going on with the mounting during the boot phase and, better yet, how to fix this? Showing all dm devices or all mapper devices would both be fine with me, although I have a slight preference for the mapper names, as they make it more obvious that LVM is being used.
The mount command displays the correct mount information:
Code:
mount
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
udev on /dev type devtmpfs (rw,relatime,size=10240k,nr_inodes=255132,mode=755)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,relatime,size=411740k,mode=755)
/dev/mapper/vg0-root on / type ext4 (rw,relatime,errors=remount-ro,data=ordered)
/dev/mapper/vg0-usr on /usr type ext4 (rw,relatime,data=ordered)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
tmpfs on /run/lock type tmpfs (rw,nosuid,nodev,noexec,relatime,size=5120k)
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/lib/systemd/systemd-cgroups-agent,name=systemd)
pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpu,cpuacct)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls,net_prio)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event)
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=22,pgrp=1,timeout=300,minproto=5,maxproto=5,direct)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime)
mqueue on /dev/mqueue type mqueue (rw,relatime)
debugfs on /sys/kernel/debug type debugfs (rw,relatime)
/dev/mapper/vg0-srv on /srv type ext4 (rw,relatime,data=ordered)
/dev/mapper/vg0-tmp on /tmp type ext4 (rw,relatime,data=ordered)
/dev/mapper/vg0-var on /var type ext4 (rw,relatime,data=ordered)
rpc_pipefs on /run/rpc_pipefs type rpc_pipefs (rw,relatime)
tmpfs on /run/user/0 type tmpfs (rw,nosuid,nodev,relatime,size=205872k,mode=700)
However, the /proc/mounts file shows the incorrect output (a mix of /dev/dm-X and /dev/mapper names). For reference, here is our /etc/fstab:
Code:
cat /etc/fstab
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point> <type> <options> <dump> <pass>
/dev/mapper/vg0-root / ext4 errors=remount-ro 0 1
/dev/mapper/vg0-srv /srv ext4 defaults 0 2
/dev/mapper/vg0-tmp /tmp ext4 defaults 0 2
/dev/mapper/vg0-usr /usr ext4 defaults 0 2
/dev/mapper/vg0-var /var ext4 defaults 0 2
/dev/mapper/vg0-swap none swap sw 0 0
/dev/sr0 /media/cdrom0 udf,iso9660 user,noauto 0 0
#/dev/fd0 /media/floppy0 auto rw,user,noauto 0 0
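To illustrate the naming mix described above, here is a self-contained sketch. All sample data below is made up for illustration; on a real system the dm-N to LVM-name table would come from /sys/block/dm-N/dm/name or from "dmsetup ls", not a hard-coded file. The awk one-liner rewrites /dev/dm-N sources in a mounts-style listing to their /dev/mapper equivalents:

```shell
# Hypothetical excerpt of a mixed /proc/mounts: some LVM volumes appear
# as /dev/dm-N, others as /dev/mapper/<vg>-<lv>.
cat > /tmp/mounts.sample <<'EOF'
/dev/dm-0 / ext4 rw,relatime,errors=remount-ro,data=ordered 0 0
/dev/dm-2 /usr ext4 rw,relatime,data=ordered 0 0
/dev/mapper/vg0-var /var ext4 rw,relatime,data=ordered 0 0
EOF

# Hard-coded dm-N -> LVM-name table (on a real system: "dmsetup ls"
# or /sys/block/dm-N/dm/name).
cat > /tmp/dmmap.sample <<'EOF'
dm-0 vg0-root
dm-2 vg0-usr
EOF

# Rewrite any /dev/dm-N source to its /dev/mapper equivalent and print
# the normalized device and mount point.
awk 'NR==FNR { map["/dev/" $1] = "/dev/mapper/" $2; next }
     $1 in map { $1 = map[$1] } { print $1, $2 }' \
    /tmp/dmmap.sample /tmp/mounts.sample
```

This only normalizes the display after the fact; it does not change what the kernel reports in /proc/mounts.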
You might need to look for udev rules for dm and lvm (especially if there are any blacklists). Also check lvm.conf for any hints.
Also check the multipath configuration file if multipathd is running.
If everything looks OK, run the equivalent of "udevadm trigger" on Debian.
NOTE: I am not very familiar with Debian, but I think this should give you hints for troubleshooting.
I need to delve into this udev stuff. I have been reading a little about it, but the Linux boot sequence in detail is unfortunately still very unclear to me.
Some things happen in the initramfs, which needs to be rebuilt (update-initramfs -u on Debian) after changes to the udev config, if I understand correctly.
This is not yet in my league.
Maybe someone can explain the details of the Debian boot process in a little more depth, especially with regard to finding drives and the mounting process involving udev and multipathing?
By the way, I do not see any multipath daemon running.
Thanks!
Last edited by shadow-fmx; 06-24-2015 at 01:50 AM.
Reason: text corrections
Some more info: I found some weird things going on in the dmesg log.
The two misbehaving mount points are mounted somewhere in the boot process.
A little later they are re-mounted (?).
I cannot see this behavior for the other logical volumes.
How can I analyze which service(s) perform these activities during the boot process, and what is causing this behavior?
Code:
[ 0.994887] sr 1:0:0:0: Attached scsi CD-ROM sr0
[ 0.995400] sd 2:0:0:0: Attached scsi generic sg0 type 0
[ 0.995429] sd 2:0:1:0: Attached scsi generic sg1 type 0
[ 0.995454] sr 1:0:0:0: Attached scsi generic sg2 type 5
[ 1.001538] sda: sda1 sda2 < sda5 >
[ 1.001763] sd 2:0:0:0: [sda] Attached SCSI disk
[ 1.037472] sdb: sdb1 < sdb5 >
[ 1.037695] sd 2:0:1:0: [sdb] Attached SCSI disk
[ 1.294507] device-mapper: uevent: version 1.0.3
[ 1.294786] device-mapper: ioctl: 4.27.0-ioctl (2013-10-30) initialised: dm-devel@redhat.com
[ 1.619025] PM: Starting manual resume from disk
[ 1.619031] PM: Hibernation image partition 254:1 present
[ 1.619033] PM: Looking for hibernation image.
[ 1.619676] PM: Image not found (code -22)
[ 1.619678] PM: Hibernation image not present or could not be loaded.
[ 1.811845] EXT4-fs (dm-0): mounted filesystem with ordered data mode. Opts: (null)
[ 3.288879] EXT4-fs (dm-2): mounted filesystem with ordered data mode. Opts: (null)
[ 3.331395] random: nonblocking pool is initialized
snip
Code:
[ 5.657373] input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3
[ 5.673062] EXT4-fs (dm-2): re-mounted. Opts: (null)
[ 5.705911] Adding 1949692k swap on /dev/mapper/vg0-swap. Priority:-1 extents:1 across:1949692k FS
[ 5.708398] EXT4-fs (dm-0): re-mounted. Opts: errors=remount-ro
[ 5.715836] alg: No test for crc32 (crc32-pclmul)
[ 6.134232] EXT4-fs (dm-3): mounted filesystem with ordered data mode. Opts: (null)
[ 6.184102] EXT4-fs (dm-5): mounted filesystem with ordered data mode. Opts: (null)
[ 6.239410] EXT4-fs (dm-4): mounted filesystem with ordered data mode. Opts: (null)
[ 6.278804] systemd-journald[190]: Received request to flush runtime journal from PID 1
Last edited by shadow-fmx; 06-25-2015 at 09:41 AM.
Does LVM show the correct behaviour?
If there are no other problems, I would treat any discrepancies between /proc/mounts and the mount output as "spurious" and not in need of explanation.
Hi Jeremy,
It looks OK; lvs, pvs and vgs all give normal responses.
I admit it seems to be something cosmetic, but before I roll this out as a template machine I want to be sure it will not cause issues.
Soon there will be a few machines running on it in a live environment, and then, as always, the problems show up.
So if you know some commands/tests I can run to make sure all is OK, I guess I have to put my need for cleanliness aside.
By the way:
I also tried installing from scratch (using the jessie DVD ISO); it exposes the same problem.
Even worse, now more drives show up as dm-X.
To me this looks like a bug somewhere, but with my level of Debian knowledge I cannot pinpoint it exactly.
I never had this issue with wheezy or squeeze.
Last edited by shadow-fmx; 06-26-2015 at 03:38 AM.
It just seems to be the df command, which uses the mtab symlink in /etc (a link to /proc/mounts, which is holding the "incorrect" mount info).
Most other tooling does not seem to be bothered. Really weird, this....
It's a bit small, but /dev/sda1 (only 243MB) could conceivably be your /boot partition.
But apparently my answer has been superseded.
You used to need a separate /boot for LVM, but GRUB2 can now handle LVM volumes.
You will still need a partition (ext2 etc.) to hold the GRUB2 files; I'd guess that's actually /dev/sda1.
Comment: why do you have two disks, one only 8GB and the other only 50GB?
Hi,
This is a template machine with only a basic configuration.
After deploying it, it must be tweaked depending on its use (hence the use of LVM and so on).
A spam-filter appliance or chat server does not require much space.
Why there are two disks is a little beyond the scope of this thread; there's some history behind it...
How about this as a total guess at the phenomenon?
During the boot process, / and /usr both need to be mounted early on, and these correspond to dm-0 and dm-2.
When LVM is started, it (just possibly) does a second mount of / and /usr, which would explain the contents of /proc/mounts.
It may be related to the layout of your fstab entries?
I use LVM in a script to do backups to an external disk, so I don't have any LVM fstab entries; instead I use mount and umount commands.
These do not mention "mapper" at all; for example:
Code:
mount /dev/vgname/lvname /mnt/mountpoint -t ext4 -o rw
The above mount point then shows up in the mount output as
Code:
/dev/mapper/vgname-lvname
BTW, it's confusing if you use hyphenated names, as any single hyphens will get replaced by double hyphens in the mapper name!
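That doubling comes from the way device-mapper builds the /dev/mapper name: it joins the VG and LV names with a single hyphen and escapes any hyphen inside either name by doubling it. A minimal sketch of the encoding, using made-up names:

```shell
# Device-mapper exposes an LVM volume as /dev/mapper/<vg>-<lv>,
# doubling any hyphen that belongs to the VG or LV name itself.
# "my-vg" and "data" are made-up names for illustration only.
vg='my-vg'; lv='data'
printf '/dev/mapper/%s-%s\n' \
    "$(printf '%s' "$vg" | sed 's/-/--/g')" \
    "$(printf '%s' "$lv" | sed 's/-/--/g')"
# prints /dev/mapper/my--vg-data
```

So in a mapper name, only an unescaped single hyphen separates the VG part from the LV part.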
Perhaps you could add a "test" LV with an fstab entry in the above style, just to see what happens?
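For illustration, such an entry might look like the following, assuming a hypothetical LV named "test" in vg0 (created beforehand with lvcreate, with /mnt/test existing as a mount point):
Code:
/dev/vg0/test /mnt/test ext4 defaults 0 2
After a reboot, comparing how df and /proc/mounts report this volume against the /dev/mapper-style entries should show whether the naming depends on the fstab style.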
Last edited by JeremyBoden; 06-29-2015 at 01:32 PM.