Linux - Newbie
This Linux forum is for members that are new to Linux. Just starting out and have a question? If it is not in the man pages or the how-tos, this is the place!
I performed the following, and then rebooted the machine.
Code:
[Michael@devserver datalogger]$ sudo lvreduce --size 200G /dev/mapper/VolGroup-lv_home
[sudo] password for Michael:
WARNING: Reducing active and open logical volume to 200.00 GiB.
THIS MAY DESTROY YOUR DATA (filesystem etc.)
Do you really want to reduce VolGroup/lv_home? [y/n]: y
Size of logical volume VolGroup/lv_home changed from 1000.00 GiB (256000 extents) to 200.00 GiB (51200 extents).
Logical volume lv_home successfully resized.
[Michael@devserver datalogger]$ sudo lvextend -L+500G /dev/mapper/VolGroup-lv_root
Size of logical volume VolGroup/lv_root changed from 50.00 GiB (12800 extents) to 550.00 GiB (140800 extents).
Logical volume lv_root successfully resized.
[Michael@devserver datalogger]$ df -aTh
Filesystem Type Size Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root
ext4 50G 47G 4.8M 100% /
proc proc 0 0 0 - /proc
sysfs sysfs 0 0 0 - /sys
devpts devpts 0 0 0 - /dev/pts
tmpfs tmpfs 5.8G 0 5.8G 0% /dev/shm
/dev/sda1 ext4 477M 177M 275M 40% /boot
/dev/mapper/VolGroup-lv_home
ext4 985G 55G 880G 6% /home
/dev/mapper/VolGroup-lv_mysql
ext3 99G 55G 39G 59% /var/lib/mysql
none usbfs 0 0 0 - /home/vbox/vbusbfs
none binfmt_misc 0 0 0 - /proc/sys/fs/binfmt_misc
[Michael@devserver datalogger]$
Upon rebooting, the machine fails to boot. I took a photo of the screen and uploaded it to https://s27.postimg.org/qkfjszusj/IMG_1077.jpg. I also typed out the actual screen output and posted it below (there may be a small transcription mistake or two, so please check the photo as well).
Code:
Welcome to CentOS
Starting udev: [OK]
Setting hostname devserver.michaels.lan: [OK]
Setting up Logical Volume Management: 4 logical volume(s) in volume group "VolGroup" now active [OK]
Checking filesystems
/dev/mapper/VolGroup-lv_root: clean, 406325/3276800 files, 12450687/131072000 blocks
/dev/sda1: clean, 110/128016 files, 205022/512000 blocks
Error reading block 175663248 (Invalid argument)
/dev/mapper/VolGroup-lv_home: UNEXPECTED INCONSISTENCY; RUN fsck MANUALLY.
(i.e., without -a or -p options)
/dev/mapper/VolGroup-lv_mysql: clean, 94551/6553600 files, 14809216/26214400 blocks
[FAILED]
*** An error occurred during the file system check.
*** Dropping you to a shell; the system will reboot
*** when you leave the shell
Give root password for maintenance
(or type control-D to continue)
Please help!
Last edited by NotionCommotion; 01-02-2017 at 11:33 AM.
So far as I know, you cannot reduce an LVM volume without first shrinking the filesystem to the reduced size ... which cannot be done while the filesystem is mounted. (You must boot a CD or memory stick with a "repair tool" system that can work its magic on an unmounted volume.) I don't see any indication that you reduced the filesystem at all.
I'm afraid that you have done really bad things to your data . . .
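A sketch of the safe order, run from a rescue environment with the volume unmounted (using the size and LV path from the first post):

```shell
umount /home                                        # ext4 cannot be shrunk while mounted
e2fsck -f /dev/mapper/VolGroup-lv_home              # mandatory check before resize2fs
resize2fs /dev/mapper/VolGroup-lv_home 200G         # shrink the filesystem first...
lvreduce --size 200G /dev/mapper/VolGroup-lv_home   # ...then the logical volume
```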
Well, that sucks! Bad things to the data on home or root? I backed up home but not root.
I am a little nervous about doing anything right now. Can you give a little more direction on which "repair tool" and what to do with it?
Your root fs looks fine, don't worry about it. Your problem is with your /home:
Code:
/dev/mapper/VolGroup-lv_home: UNEXPECTED INCONSISTENCY; RUN fsck MANUALLY.
(i.e., without -a or -p options)
As sundialsvcs already suggested, boot from a rescue disk (I have only used Slackware and Arch Linux USB sticks to mount existing installations; you might try either of those), then just reformat /dev/mapper/VolGroup-lv_home since, as you said, you have a backup.
By the way, you will still need to resize the root filesystem.
Another thing: can you try proceeding at the maintenance-mode prompt (enter your root password), then edit /etc/fstab, comment out your /home entry, and reboot? (Not sure whether that works in maintenance mode, though.)
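For reference, disabling the /home mount just means commenting out its line in /etc/fstab; the exact fields below are illustrative and should match whatever is already in your file:

```
# Comment out the /home entry so the boot-time fsck skips it:
# /dev/mapper/VolGroup-lv_home  /home  ext4  defaults  1 2
```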
It's only /home that was damaged, but that should still be recoverable if you have not yet tried to let fsck "fix" anything.
From the rescue shell, you should have the root filesystem mounted, but read-only. Make it read-write:
Code:
mount -o remount,rw /
Now look in /etc/lvm/archive. One of the most recent files should contain the line "Description: Created *before* executing '/sbin/lvreduce --size 200G /dev/mapper/VolGroup-lv_home'". Using that file, you can restore the LVM configuration to what it was before:
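A sketch of that restore step (the archive filename below is hypothetical; use the one whose description matches the lvreduce):

```shell
# Find the archive written just before the lvreduce
grep -l "before executing.*lvreduce" /etc/lvm/archive/*.vg

# Restore the volume group metadata from that file (filename is hypothetical)
vgcfgrestore -f /etc/lvm/archive/VolGroup_00042-1234567890.vg VolGroup
```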
I ran the vgcfgrestore command for home (but not for root), then did nothing else and rebooted the machine. Probably a mistake rebooting, as now the filesystems are mounted? The monitor showed a couple of errors, and I can post a photo if necessary.
I was then able to PuTTY into the machine. I hope running fsck now that it was mounted was not another mistake...
Running fsck with the "-n" option is harmless. It assumes a "no" response to any question it would ask about fixing something. You have to expect some inconsistencies, though, when running fsck on a filesystem that is mounted read-write. What you show above is to be expected.
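As an illustration of why "-n" is safe, you can try it on a throwaway filesystem image; no root privileges or real disk are needed (the /tmp path is just an example):

```shell
# Build a small ext4 filesystem inside an ordinary file
dd if=/dev/zero of=/tmp/demo.img bs=1M count=16 status=none
mkfs.ext4 -F -q /tmp/demo.img

# -n opens the filesystem read-only and answers "no" to every repair prompt
fsck.ext4 -n /tmp/demo.img
```

On a healthy image this just reports "clean" and changes nothing, which is exactly why it is harmless on your real volumes too.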
You cannot shrink a mounted filesystem. That is why I suggested running those commands from the rescue shell while /home was not mounted.
BTW, vgcfgrestore restores the entire volume group, not just part of it. Your resizing of lv_root was reverted as well.
Last edited by rknichols; 01-02-2017 at 02:36 PM.
Reason: add BTW ...
Actually, I believe that a critical problem here is that you must reduce the filesystem so that it does not have any data stored in any areas on the disk that are about to go away. You must do this before (or, "as") you reduce the logical volume. And, you cannot do this on a mounted file system.
Furthermore, if you attempt to "repair" a file system that "just had a chunk of its data taken away from it," well, while it might restore the file system to "it functions" condition, it will never retrieve the data ... and it just might obliterate the directory-structures that would allow the data to be recovered even if the "lvreduce" was successfully reversed.
Running lvreduce or lvextend with the "--resizefs" option does resize the filesystem as needed.
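Filled in with the size and LV path from the first post, the one-step form would look like this; with --resizefs, lvreduce shrinks the filesystem before shrinking the volume instead of leaving that to you:

```shell
# --resizefs shrinks the ext4 filesystem first, then the logical volume
lvreduce --resizefs --size 200G /dev/mapper/VolGroup-lv_home
```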
Shrinking a logical volume without resizing the filesystem doesn't damage any of the data. It just makes the container appear smaller, and the kernel will refuse to mount the filesystem in that condition. Changing the container back to the way it was makes it just like nothing had happened, but of course that's depending on nothing else being done to the filesystem in the interim. Running fsck to repair the filesystem in that shrunken container would do irreversible damage, but according to the report, anyway, that was not done. The "fsck -n" that I recommended was just to confirm that all was well with the filesystem again.
So, I need to use umount /home first, and then follow your instructions, right?
And then regarding root, does it also need to be unmounted to resize? How can this be done?
Yes, you will need to have /home umounted. Again, that's why I recommended doing that from the rescue shell. You might not be able to unmount /home while running multi-user. You should be able to reboot single-user and do it.
An ext4 filesystem can be expanded while online. You should be able to do that with "lvextend --resizefs ... /dev/VolGroup/lv_root" while the system is running.
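Filled in with the size from the original attempt, that online grow would look something like:

```shell
# ext4 grows while mounted; --resizefs runs resize2fs after extending the LV
lvextend --resizefs -L +500G /dev/VolGroup/lv_root
```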
The rescue shell is what you were offered when the boot failed:
Code:
*** An error occurred during the file system check.
*** Dropping you to a shell; the system will reboot
*** when you leave the shell
Give root password for maintenance
I don't know why you would be getting the "mount point 0 does not exist" message. Sounds like there is something strange in /etc/fstab. You could post the contents. You might also try running "sudo mount -av" and see if there is a more informative message. Also, what version of CentOS is this? Looks like it might be CentOS 6, but I can't be sure.