Linux - Software: This forum is for Software issues.
[root@ ~]# mount /dev/newvg/newlv /lvm/
mount: wrong fs type, bad option, bad superblock on /dev/mapper/newvg-newlv,
missing codepage or helper program, or other error
In some cases useful info is found in syslog - try
dmesg | tail or so
I have also tried this:
mount -t ext4 /dev/newvg/newlv /lvm/
It's quite safe to reduce or extend the size of an LVM logical volume; you just have to take care of the filesystem that resides within it.
Linux treats logical volumes much like partitions. When you shrink a partition, you first shrink the filesystem inside it, then shrink the partition. The same goes for LVM.
So you can do two things now: either extend the LV back to its original size, then shrink the filesystem, then shrink the LV;
or just try to shrink the filesystem inside the LV.
Logical volumes can be reduced in size as well as increased. However, it is very important to remember to reduce the size of the file system or whatever is residing in the volume before shrinking the volume itself, otherwise you risk losing data.
As zhjim already mentioned, you have to resize the filesystem before you resize the LV that contains that filesystem.
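To make the order concrete, here is a minimal sketch of a safe shrink, using the device name and the 6G target from this thread. This is an illustration of the order of operations, not a verbatim fix for the already-damaged volume; the filesystem must be unmounted, and these commands need root and a real LVM setup.

```shell
# Safe ext4 shrink order (sketch; names/sizes assumed from this thread).
# resize2fs will not shrink a mounted filesystem, so unmount first.
umount /lvm
e2fsck -f /dev/newvg/newlv        # resize2fs requires a fresh check
resize2fs /dev/newvg/newlv 6G     # 1) shrink the filesystem FIRST
lvreduce -L 6G /dev/newvg/newlv   # 2) THEN shrink the LV to match
mount /dev/newvg/newlv /lvm
```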
Step 1:
[root@mail lvm]# lvdisplay /dev/mapper/newvg-newlv
--- Logical volume ---
LV Path /dev/newvg/newlv
LV Name newlv
VG Name newvg
LV UUID uRX9yO-rbgE-fGRJ-YQFu-uKHQ-fjnp-yHnus3
LV Write Access read/write
LV Creation host, time mail.mitters.in, 2013-09-20 11:24:29 +0530
LV Status available
# open 1
LV Size 12.00 GiB
Current LE 3072
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:2
Step 2:
[root@mail ~]# mkfs.ext4 /dev/newvg/newlv
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
786432 inodes, 3145728 blocks
157286 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=3221225472
96 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208
Step 7:
[root@mail ~]# lvreduce -L 6G /dev/mapper/newvg-newlv
WARNING: Reducing active logical volume to 6.00 GiB
THIS MAY DESTROY YOUR DATA (filesystem etc.)
Do you really want to reduce newlv? [y/n]: y
Reducing logical volume newlv to 6.00 GiB
Logical volume newlv successfully resized
Step 9:
[root@mail ~]# mount /dev/mapper/newvg-newlv /lvm
mount: wrong fs type, bad option, bad superblock on /dev/mapper/newvg-newlv,
missing codepage or helper program, or other error
In some cases useful info is found in syslog - try
dmesg | tail or so
Superblock has an invalid journal (inode 8).
Clear<y>? yes
*** ext3 journal has been deleted - filesystem is now ext2 only ***
Superblock has_journal flag is clear, but a journal inode is present.
Clear<y>? yes
The filesystem size (according to the superblock) is 3145728 blocks
The physical size of the device is 1572864 blocks
Either the superblock or the partition table is likely to be corrupt!
Abort<y>? yes
[root@mail ~]# mount /dev/mapper/newvg-newlv /lvm
mount: wrong fs type, bad option, bad superblock on /dev/mapper/newvg-newlv,
missing codepage or helper program, or other error
In some cases useful info is found in syslog - try
dmesg | tail or so
[root@mail ~]# dmesg | tail
sdb: sdb1
sdb: sdb1
EXT4-fs (dm-2): mounted filesystem with ordered data mode. Opts:
SELinux: initialized (dev dm-2, type ext4), uses xattr
EXT4-fs (dm-2): mounted filesystem with ordered data mode. Opts:
SELinux: initialized (dev dm-2, type ext4), uses xattr
EXT4-fs (dm-2): mounted filesystem with ordered data mode. Opts:
SELinux: initialized (dev dm-2, type ext4), uses xattr
EXT4-fs (dm-2): bad geometry: block count 3145728 exceeds size of device (1572864 blocks)
EXT4-fs (dm-2): bad geometry: block count 3145728 exceeds size of device (1572864 blocks)
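The "bad geometry" numbers line up exactly with what happened: the ext4 superblock still records 3145728 blocks of 4096 bytes (from the mkfs output above), i.e. 12 GiB, while the reduced LV only provides 1572864 blocks, i.e. 6 GiB. The arithmetic can be checked directly:

```shell
# Verify the dmesg block counts against the sizes in this thread.
blocks_fs=3145728      # block count recorded in the superblock
blocks_dev=1572864     # blocks actually available after lvreduce
bs=4096                # ext4 block size from the mkfs.ext4 output
echo "fs:  $(( blocks_fs  * bs / 1073741824 )) GiB"   # prints: fs:  12 GiB
echo "dev: $(( blocks_dev * bs / 1073741824 )) GiB"   # prints: dev: 6 GiB
```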
When resize2fs displays that "Please run 'e2fsck ...' first" message, it did not resize the filesystem. resize2fs refuses to run on a filesystem that has not been checked since the last time it was mounted, which is why a forced `e2fsck -f` is required beforehand.
I have to ask WHY you answered "yes" to the question below?
Quote:
Originally Posted by mitter1989
Step 7 :
[root@mail ~]# lvreduce -L 6G /dev/mapper/newvg-newlv
WARNING: Reducing active logical volume to 6.00 GiB
THIS MAY DESTROY YOUR DATA (filesystem etc.)
Do you really want to reduce newlv? [y/n]: y
Reducing logical volume newlv to 6.00 GiB
Logical volume newlv successfully resized
You must resize the filesystem FIRST, then resize the logical volume SECOND. You did just the opposite. The lvreduce command tried to warn you, but you ignored the warning.
At this point, your filesystem is most likely corrupted, possibly beyond repair. I can't say for certain, as I've never done what you did and have no personal experience with it.
I think it would be best to restore your damaged filesystem from backup. I wouldn't attempt any other repairs; most likely they would be fruitless and a waste of time.
If the LV was originally contiguous, you could just resize it back to the original size and very likely find that your filesystem was intact. If the LV was non-contiguous, then you would probably not get back the same extents that it had originally, and the missing parts of your filesystem would be essentially unrecoverable.
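If you want to try that recovery path, a sketch would look like the following. This assumes the LV was contiguous and that the original size was the 12 GiB shown in the lvdisplay output above; it may still fail if the freed extents were reallocated.

```shell
# Recovery attempt (sketch, contiguous LV assumed): grow the LV back to
# its original 12 GiB so the filesystem's blocks are addressable again,
# then let a forced check inspect what survived. Do NOT run mkfs again.
lvextend -L 12G /dev/newvg/newlv
e2fsck -f /dev/newvg/newlv
```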
[root@mail ~]# resize2fs -p /dev/mapper/newvg-newlv
resize2fs 1.41.12 (17-May-2010)
The filesystem is already 3145728 blocks long. Nothing to do!
resize2fs can't magically determine what new, smaller size you want. You have to tell it. Given the consequences of leaving the filesystem a bit too large for its container, the usual recommendation is to shrink the filesystem to a size somewhat smaller than it will ultimately be, resize the container, then resize the filesystem again to the default of filling the container:
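That recommendation, sketched as a session with the device name and 6G target from this thread (the 5G undershoot is illustrative):

```shell
# Belt-and-braces shrink (sketch): undershoot the filesystem, shrink the
# LV, then regrow the filesystem to exactly fill its new container.
umount /lvm
e2fsck -f /dev/newvg/newlv
resize2fs /dev/newvg/newlv 5G      # deliberately smaller than the target
lvreduce -L 6G /dev/newvg/newlv    # shrink the container to 6 GiB
resize2fs /dev/newvg/newlv         # no size given: grow to fill the LV
mount /dev/newvg/newlv /lvm
```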