Hi,
Actually, when I do an umount -l (it has to be lazy because stuff is in use) and then remount as configured in fstab, it shows up correctly. However, after a reboot the correction is gone again. Code:
root@debian64[pts/0]:/ # umount /usr -l 10:19 |
As I see, the "problem" is with devices mounted within initrd image.
There is no real problem with these names at all. You could rework your boot image to fix the names. I am not sure you will succeed, but you will learn a lot. |
I'm not sure that a separate /usr is a problem with LVM as such, but with systemd it is. You have two options: rebuild the initrd or initramfs (not sure which) to support a separate /usr, or move /usr back into the root filesystem.
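For what it's worth, the "rebuild" part can be sketched like this on Debian/Ubuntu (a hedged sketch assuming initramfs-tools; the kernel version is just whatever `uname -r` reports, and the grep pattern is only an example):

```shell
# Regenerate the initramfs for every installed kernel so it picks up
# the current /etc/fstab, including a separate /usr
# (initramfs-tools; run as root):
update-initramfs -u -k all

# Sanity check: list the files inside the rebuilt image and look for
# the pieces that handle mounting (exact layout varies by release):
lsinitramfs /boot/initrd.img-"$(uname -r)" | grep -i -E 'fstab|usr' | head
```

If the rebuilt image still doesn't mount /usr early, the fstab inside the image (or the hook scripts that generate it) would be the next place to look.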
BTW Thanks for posting this thread the machine that I'm on is having problems & it's most likely due to separate /usr. Anything that I want to use I have to execute from /usr/bin |
@voleg,
I actually tried unpacking/decompressing the image last weekend. It looked successful, but where do I go from there...? @eddy1: "rebuild initrd to support separate /usr" - I am not familiar with this process. Can you point me in the right direction? (I can google it myself, but about 10 million links show up.) What is the core functionality of systemd? |
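For reference, the unpacking step on Debian usually looks something like this (a sketch, not a fix; `unmkinitramfs` ships with initramfs-tools-core, and the paths are examples):

```shell
# Unpack the running kernel's initramfs into a scratch directory.
# unmkinitramfs understands the prepended-microcode layout that newer
# images use; plain gzip'd cpio images can also be unpacked manually.
mkdir -p /tmp/initrd-work && cd /tmp/initrd-work
unmkinitramfs /boot/initrd.img-"$(uname -r)" .

# Manual route for a plain gzip-compressed image:
# zcat /boot/initrd.img-"$(uname -r)" | cpio -idmv

# The boot-time mount logic lives under scripts/ and conf/ in the
# unpacked tree - that is where a separate /usr gets handled.
ls scripts/ conf/
```

"Where to go from there" is mostly reading those scripts/ files to see how the image decides what to mount before handing over to the real root.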
That's my problem also. Still searching, but threads about Arch Linux are probably where you would find your answer.
You may even find it in LFS book. |
I just did a release upgrade on my Ubuntu box to 15.10 and now have the same issue... / is showing in df as /dev/dm-2,
while previously it showed as /dev/mapper/raid1-root. My two other LVs are showing in df properly... Code:
Filesystem               Size  Used Avail Use% Mounted on
udev                      16G  4.0K   16G   1% /dev
tmpfs                    3.2G  2.6M  3.2G   1% /run
/dev/dm-2                103G  9.8G   88G  10% /
none                     4.0K     0  4.0K   0% /sys/fs/cgroup
none                     5.0M  8.0K  5.0M   1% /run/lock
tmpfs                     16G     0   16G   0% /run/shm
cgmfs                    100K     0  100K   0% /run/cgmanager/fs
none                     100M     0  100M   0% /run/user
/dev/mapper/raid5-home   443G   11G  414G   3% /home
/dev/mapper/raid5-data   5.0T  599G  4.2T  13% /data
/dev/sda1                228M   82M  134M  38% /boot
tmpfs                    3.2G     0  3.2G   0% /run/user/1000
//nacho/Backups          5.5T  2.7T  2.9T  48% /mnt/nacho
//naive/data             8.2T  3.1T  5.1T  38% /mnt/naive
//nexus/data              11T  8.1T  2.8T  75% /mnt/nexus
Also, it looks like in dmesg that / is being mounted and then re-mounted as well. Were any of you able to track this down and correct it? |
Not me.
I got lost in too much detail without knowing what exactly is causing this. I still have this issue on my template server, both after updating and after a fresh (!) installation from the ISO. |
Very strange, as 'mount' shows them properly,
but /proc/mounts is missing the mapper name for / and shows the device instead. For / from 'mount': Code:
/dev/mapper/raid1-root on / type ext4 (rw,relatime,errors=remount-ro,data=ordered)
and from /proc/mounts: Code:
/dev/dm-2 / ext4 rw,relatime,errors=remount-ro,data=ordered 0 0
For comparison, /home from 'mount': Code:
/dev/mapper/raid5-home on /home type ext4 (rw,relatime,errors=remount-ro,data=ordered)
and from /proc/mounts: Code:
/dev/mapper/raid5-home /home ext4 rw,relatime,errors=remount-ro,data=ordered 0 0 |
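A quick way to compare the two views yourself (a sketch; `findmnt` is part of util-linux, and `/` is just the example mountpoint from this thread):

```shell
# findmnt --kernel reads /proc/mounts, i.e. the kernel's view, where
# the dm-2 name shows up:
findmnt --kernel -n -o SOURCE,FSTYPE,OPTIONS /

# Or read /proc/mounts directly and pick out the root entry:
grep ' / ' /proc/mounts
```

Comparing that against plain `mount` output for the same mountpoint makes the mapper-name vs dm-name mismatch easy to spot.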
I just do not understand why only a few people seem to suffer from this.
If this were a major bug, I would expect far more replies in this thread. |
So presumably things like /var, /usr etc are unmounted?
If not, is it possible that you have two / directories, but you are not using one of them? |
Everything is fine and mounted on my box; I just cut the output down to the relevant info for the post.
My /var, /usr, etc. are all on that LV (/). I can't find anything on this elsewhere, and I'm not sure what to search for either. Oh, and other than this 'visual' issue, the system is fine. It's just that my monitoring scripts and such that look for the dm-mapped names all had to be revised to point to the dev name instead - it's bothering me because I can't figure it out. |
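A hedged workaround sketch for scripts in that situation: resolve a kernel dm name (e.g. dm-2) back to its stable /dev/mapper name via sysfs, instead of rewriting the scripts to use the unstable dev name. The `SYSFS_ROOT` override is hypothetical, only there so the function can be exercised without real device-mapper devices.

```shell
# /sys/block/<dm-N>/dm/name holds the device-mapper name for a dm node.
dm_to_mapper() {
    local kname=$1
    local f="${SYSFS_ROOT:-/sys}/block/$kname/dm/name"
    [ -r "$f" ] || return 1
    printf '/dev/mapper/%s\n' "$(cat "$f")"
}

# On the box from this thread this should print /dev/mapper/raid1-root:
# dm_to_mapper dm-2
```

That way the scripts keep using the stable mapper names even when df or /proc/mounts reports the dm-N alias.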
I had the same feeling :)
WHY is this wrong all of a sudden.... |
So you have a Logical Volume that isn't mounted (supposedly)...
BTW my desktop, which has absolutely no LVM on it at all, always gives output for Code:
dmesg | grep 'EXT4-fs (sda2)'
(sda2 is my / partition.) Anyway, a 'mount' command shows everything is good. Code:
/dev/sda2 on / type ext4 (rw,relatime,errors=remount-ro,data=ordered)
BTW how does the kernel know whether to mount / as (ro) or (rw) without first mounting it, so that it can see what the /etc/fstab file has to say about things? |
The LVM volume is mounted just fine. The /dev/mapper/LVNAME is a mapping to the long device name for ease of use. The dm-2 name is a very short name for the device itself, which isn't the proper method of displaying the LVM volume.
From what I understand, the initial root volume is always mounted RO at first and then remounted with the fstab options, so I believe that part is functioning as designed.
More explanation on the /dev/mapper vs /dev/dm naming: from a functional point of view they are the same - they point to the same device, of course. But dmsetup/lvm itself does not create the /dev/dm-X nodes; the ones in /dev/mapper are the right and official ones that should always be used. The /dev/dm-X nodes are created by some general udev rules; dm-X is only the internal kernel name for that device, and you can't rely on those names, because the number X that is assigned is not stable and can change - it depends on the sequence of device activation.
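A few commands that make that relationship visible (a sketch; lsblk comes from util-linux and dmsetup from lvm2/dmsetup, and the device names in the comments are the ones from this thread):

```shell
# /dev/mapper entries are just symlinks to the kernel dm-N nodes:
ls -l /dev/mapper/            # e.g. raid1-root -> ../dm-2

# lsblk shows the friendly NAME and the kernel KNAME side by side:
lsblk -o NAME,KNAME,TYPE,MOUNTPOINT

# dmsetup can list the devices with their stable names and numbers:
dmsetup info -c -o name,major,minor
```

The dm-N number coming from activation order is exactly why scripts should key off /dev/mapper (or lsblk's NAME column) rather than KNAME.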
Maybe related? It seems identical - going to try the patch tonight and will report back:
https://bugs.debian.org/cgi-bin/bugr...cgi?bug=791754 EDIT: the patch works! (You may need to run update-initramfs -u -k all afterwards - I did, just to clean up all the other cruft I tried.) |