LinuxQuestions.org
Old 01-02-2017, 11:24 AM   #1
NotionCommotion
Member
 
Registered: Aug 2012
Posts: 762

Rep: Reputation: Disabled
Corrupted filesystem using lvreduce and lvextend


I performed the following, and then rebooted the machine.

Code:
[Michael@devserver datalogger]$ sudo lvreduce --size 200G /dev/mapper/VolGroup-lv_home
[sudo] password for Michael:
  WARNING: Reducing active and open logical volume to 200.00 GiB.
  THIS MAY DESTROY YOUR DATA (filesystem etc.)
Do you really want to reduce VolGroup/lv_home? [y/n]: y
  Size of logical volume VolGroup/lv_home changed from 1000.00 GiB (256000 extents) to 200.00 GiB (51200 extents).
  Logical volume lv_home successfully resized.
[Michael@devserver datalogger]$ sudo lvextend -L+500G /dev/mapper/VolGroup-lv_root
  Size of logical volume VolGroup/lv_root changed from 50.00 GiB (12800 extents) to 550.00 GiB (140800 extents).
  Logical volume lv_root successfully resized.
[Michael@devserver datalogger]$ df -aTh
Filesystem           Type         Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root
                     ext4          50G   47G  4.8M 100% /
proc                 proc            0     0     0    - /proc
sysfs                sysfs           0     0     0    - /sys
devpts               devpts          0     0     0    - /dev/pts
tmpfs                tmpfs        5.8G     0  5.8G   0% /dev/shm
/dev/sda1            ext4         477M  177M  275M  40% /boot
/dev/mapper/VolGroup-lv_home
                     ext4         985G   55G  880G   6% /home
/dev/mapper/VolGroup-lv_mysql
                     ext3          99G   55G   39G  59% /var/lib/mysql
none                 usbfs           0     0     0    - /home/vbox/vbusbfs
none                 binfmt_misc     0     0     0    - /proc/sys/fs/binfmt_misc
[Michael@devserver datalogger]$
Upon rebooting, the machine doesn't boot up. I took a photo of the screen and uploaded it to https://s27.postimg.org/qkfjszusj/IMG_1077.jpg. I also typed out the screen output below (there may be a small mistake or two, so the photo is the more reliable copy).
Code:
Welcome to CentOS
Starting udev: [OK]
Setting hostname devserver.michaels.lan: [OK]
Setting up Logical Volume Management: 4 logical volume(s) in volume group "VolGroup" now active [OK]
Checking filesystems
/dev/mapper/VolGroup-lv_root: clean, 406325/3276800 files, 12450687/131072000 blocks
/dev/sda1: clean, 110/128016 files, 205022/512000 blocks
Error reading block 175663248 (Invalid argument)
/dev/mapper/VolGroup-lv_home: UNEXPECTED INCONSISTENCY; RUN fsck MANUALLY.
(i.e., without -a or -p options)
/dev/mapper/VolGroup-lv_mysql: clean, 94551/6553600 files, 14809216/26214400 blocks
[FAILED]
*** An error occurred during the file system check.
*** Dropping you to a shell; the system will reboot
*** when you leave the shell
Give root password for maintenance
(or type control-D to continue)
Please help!

Last edited by NotionCommotion; 01-02-2017 at 11:33 AM.
 
Old 01-02-2017, 12:33 PM   #2
sundialsvcs
LQ Guru
 
Registered: Feb 2004
Location: SE Tennessee, USA
Distribution: Gentoo, LFS
Posts: 9,151
Blog Entries: 4

Rep: Reputation: 3232
So far as I know, you cannot reduce an LVM volume without first resizing the filesystem to a reduced size ... which cannot be done while the operating system is running. (You must boot a CD or memory-stick with a "repair tool" system that can work its magic on a dismounted volume.) I don't readily see that you reduced the file system at all.

I'm afraid that you have done really bad things to your data . . .
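For reference, the offline shrink described above would look roughly like this — a sketch using this thread's device names, to be run from rescue media or single-user mode with the volume unmounted:

```shell
# Safe ext4 shrink order: filesystem first, then the logical volume.
umount /home                                  # the filesystem must be offline to shrink
e2fsck -f /dev/VolGroup/lv_home               # check first; resize2fs requires a clean fs
resize2fs /dev/VolGroup/lv_home 200G          # shrink the ext4 filesystem to the target
lvreduce --size 200G /dev/VolGroup/lv_home    # only then shrink the LV to match

# Or let LVM perform both steps in the correct order itself:
lvreduce --size 200G --resizefs /dev/VolGroup/lv_home
```

Doing the lvreduce before resize2fs, as happened here, is exactly what cuts the end off the filesystem.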
 
Old 01-02-2017, 12:56 PM   #3
NotionCommotion
Member
 
Registered: Aug 2012
Posts: 762

Original Poster
Rep: Reputation: Disabled
Quote:
Originally Posted by sundialsvcs View Post
So far as I know, you cannot reduce an LVM volume without first resizing the filesystem to a reduced size ... which cannot be done while the operating system is running. (You must boot a CD or memory-stick with a "repair tool" system that can work its magic on a dismounted volume.) I don't readily see that you reduced the file system at all.

I'm afraid that you have done really bad things to your data . . .
Well, that sucks! Bad things to the data on home or root? I backed up home but not root.

I am a little nervous about doing anything right now. Can you give a little more direction on which "repair tool" and what to do with it?

Thanks
 
Old 01-02-2017, 01:16 PM   #4
ilesterg
Member
 
Registered: Jul 2012
Location: München
Distribution: Debian, CentOS/RHEL
Posts: 587

Rep: Reputation: 72
Your root fs looks fine, don't worry about it. Your problem is with your /home:

Code:
/dev/mapper/VolGroup-lv_home: UNEXPECTED INCONSISTENCY; RUN fsck MANUALLY.
(i.e., without -a or -p options)
As already suggested by sundialsvcs, pick a rescue disk (I have only used Slackware and Arch Linux USBs to mount existing installations; you might want to try either of the two), then just reformat /dev/mapper/VolGroup-lv_home since, as you said, you have a backup.

By the way, you will still need to resize the root fs.
 
Old 01-02-2017, 01:19 PM   #5
ilesterg
Member
 
Registered: Jul 2012
Location: München
Distribution: Debian, CentOS/RHEL
Posts: 587

Rep: Reputation: 72
Another thing: can you try proceeding at the maintenance-mode prompt (enter your root password), then edit /etc/fstab, comment out your /home entry, and reboot? (Not sure if /home shows up in maintenance mode, though.)
 
Old 01-02-2017, 01:27 PM   #6
rknichols
Senior Member
 
Registered: Aug 2009
Distribution: CentOS
Posts: 4,547

Rep: Reputation: 2082
It's only /home that was damaged, but that should still be recoverable if you have not yet tried to let fsck "fix" anything.

From the rescue shell, you should have the root filesystem mounted, but read-only. Make it read-write:
Code:
mount -o remount,rw /
Now look in /etc/lvm/archive. One of the most recent files should contain the line "Description: Created *before* executing '/sbin/lvreduce --size 200G /dev/mapper/VolGroup-lv_home'". Using that file, you can restore the LVM configuration to what it was before:
Code:
vgcfgrestore -f /etc/lvm/archive/VolGroup_{whatever}.vg VolGroup
Your /home filesystem should now be OK.
Code:
fsck -n -f /dev/VolGroup/lv_home
Now, since /home is not mounted, you can do the lvreduce correctly:
Code:
lvreduce --size 200G --resizefs /dev/mapper/VolGroup-lv_home
And, you might as well expand the root filesystem now, too:
Code:
lvextend -L+500G --resizefs /dev/mapper/VolGroup-lv_root
That should take care of it.
 
Old 01-02-2017, 01:54 PM   #7
NotionCommotion
Member
 
Registered: Aug 2012
Posts: 762

Original Poster
Rep: Reputation: Disabled
Thanks rknichols,

I ran the vgcfgrestore command for home (but not for root), did nothing after that, and rebooted the machine. Probably a mistake to reboot, as now the volumes are mounted??? The monitor showed a couple of errors, and I can post a photo if necessary.

I was then able to PuTTY into the machine. I hope running fsck now that it is mounted was not another mistake...



Code:
[Michael@devserver ~]$ sudo fsck -n -f /dev/VolGroup/lv_home
fsck from util-linux-ng 2.17.2
e2fsck 1.41.12 (17-May-2010)
Warning!  /dev/mapper/VolGroup-lv_home is mounted.
Warning: skipping journal recovery because doing a read-only filesystem check.
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
Free blocks count wrong (243623737, counted=243623707).
Fix? no


/dev/mapper/VolGroup-lv_home: ********** WARNING: Filesystem still has errors **********

/dev/mapper/VolGroup-lv_home: 93735/65536000 files (0.1% non-contiguous), 18520263/262144000 blocks
[Michael@devserver ~]$ df -aTh
Filesystem           Type         Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root
                     ext4          50G   47G  3.2M 100% /
proc                 proc            0     0     0    - /proc
sysfs                sysfs           0     0     0    - /sys
devpts               devpts          0     0     0    - /dev/pts
tmpfs                tmpfs        5.8G     0  5.8G   0% /dev/shm
/dev/sda1            ext4         477M  177M  275M  40% /boot
/dev/mapper/VolGroup-lv_home
                     ext4         985G   55G  880G   6% /home
/dev/mapper/VolGroup-lv_mysql
                     ext3          99G   55G   39G  59% /var/lib/mysql
none                 usbfs           0     0     0    - /home/vbox/vbusbfs
none                 binfmt_misc     0     0     0    - /proc/sys/fs/binfmt_misc
[Michael@devserver ~]$
 
Old 01-02-2017, 02:33 PM   #8
rknichols
Senior Member
 
Registered: Aug 2009
Distribution: CentOS
Posts: 4,547

Rep: Reputation: 2082
Running fsck with the "-n" option is harmless. It assumes a "no" response to any question it would ask about fixing something. You have to expect some inconsistencies, though, when running fsck on a filesystem that is mounted read-write. What you show above is to be expected.

You cannot shrink a mounted filesystem. That is why I suggested running those commands from the rescue shell while /home was not mounted.

BTW, vgcfgrestore restores the entire volume group, not just part of it. Your resizing of lv_root was reverted as well.
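To see which archive to restore from, LVM can list the metadata archives it keeps for a volume group, along with the command that created each one (the file name below is hypothetical; use the one the listing reports):

```shell
# List the metadata archives for the volume group; each entry records
# the command that triggered it and a timestamp:
vgcfgrestore --list VolGroup

# Restore from the archive created just *before* the bad lvreduce.
# NOTE: this restores metadata for the WHOLE volume group, so any other
# LV resizes done since that archive (e.g. lv_root's) revert as well.
vgcfgrestore -f /etc/lvm/archive/VolGroup_00042-1234567890.vg VolGroup
```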

Last edited by rknichols; 01-02-2017 at 02:36 PM. Reason: add BTW ...
 
Old 01-02-2017, 02:44 PM   #9
NotionCommotion
Member
 
Registered: Aug 2012
Posts: 762

Original Poster
Rep: Reputation: Disabled
So, I need to use umount /home first, and then follow your instructions, right?

And then regarding root, does it also need to be unmounted to resize? How can this be done?

Thanks again!
 
Old 01-02-2017, 02:52 PM   #10
sundialsvcs
LQ Guru
 
Registered: Feb 2004
Location: SE Tennessee, USA
Distribution: Gentoo, LFS
Posts: 9,151
Blog Entries: 4

Rep: Reputation: 3232
Actually, I believe that a critical problem here is that you must reduce the file system so that it does not have any data stored in any areas on the disk that are about to go away. You must do this before (or, "as") you reduce the logical volume. And, you cannot do this on a mounted file system.

Furthermore, if you attempt to "repair" a file system that just had a chunk of its data taken away, fsck might restore it to a working condition, but it will never retrieve the lost data ... and it might obliterate the directory structures that would have allowed the data to be recovered even if the lvreduce were successfully reversed.
 
Old 01-02-2017, 03:17 PM   #11
rknichols
Senior Member
 
Registered: Aug 2009
Distribution: CentOS
Posts: 4,547

Rep: Reputation: 2082
Running lvreduce or lvextend with the "--resizefs" option does resize the filesystem as needed.

Shrinking a logical volume without resizing the filesystem doesn't damage any of the data. It just makes the container appear smaller, and the kernel will refuse to mount the filesystem in that condition. Changing the container back to the way it was makes it just like nothing had happened, but of course that's depending on nothing else being done to the filesystem in the interim. Running fsck to repair the filesystem in that shrunken container would do irreversible damage, but according to the report, anyway, that was not done. The "fsck -n" that I recommended was just to confirm that all was well with the filesystem again.

Last edited by rknichols; 01-02-2017 at 03:26 PM.
 
Old 01-02-2017, 03:23 PM   #12
rknichols
Senior Member
 
Registered: Aug 2009
Distribution: CentOS
Posts: 4,547

Rep: Reputation: 2082
Quote:
Originally Posted by NotionCommotion View Post
So, I need to use umount /home first, and then follow your instructions, right?

And then regarding root, does it also need to be unmounted to resize? How can this be done?
Yes, you will need to have /home unmounted. Again, that's why I recommended doing it from the rescue shell. You might not be able to unmount /home while running multi-user, but you should be able to reboot single-user and do it.

An ext4 filesystem can be expanded while online. You should be able to do that with "lvextend --resizefs ... /dev/VolGroup/lv_root" while the system is running.
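If /home refuses to unmount in multi-user mode, it helps to see what is holding it open first (a sketch; run as root):

```shell
# List processes with files open under /home (psmisc package):
fuser -vm /home

# Alternative, using lsof to walk the directory tree:
lsof +D /home
```

Killing or logging out those processes (or dropping to single-user) should then let `umount /home` succeed.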

Last edited by rknichols; 01-02-2017 at 03:24 PM.
 
Old 01-02-2017, 05:35 PM   #13
NotionCommotion
Member
 
Registered: Aug 2012
Posts: 762

Original Poster
Rep: Reputation: Disabled
Maybe light at the end of the tunnel

Physically logged on as root. I looked into the "rescue shell", but all it seemed to be was booting off a USB, etc. (https://access.redhat.com/documentat...mode-boot.html). Am I missing something?

Resized home, and then root (root took a long time).

Rebooted, and it came up much quicker now, but it still shows a couple of errors, particularly "Mount point 0 does not exist".

What is this all about?

Again, thank you for your help. I was really scared and you really saved me.

EDIT: I tried about 5 times to rotate the picture, but it doesn't seem to take.
Attached Thumbnails: IMG_1089.JPG

Last edited by NotionCommotion; 01-02-2017 at 05:49 PM.
 
Old 01-02-2017, 07:06 PM   #14
rknichols
Senior Member
 
Registered: Aug 2009
Distribution: CentOS
Posts: 4,547

Rep: Reputation: 2082
The rescue shell is what you were offered when the boot failed:
Code:
*** An error occurred during the file system check.
*** Dropping you to a shell; the system will reboot
*** when you leave the shell
Give root password for maintenance
I don't know why you would be getting the "mount point 0 does not exist" message. Sounds like there is something strange in /etc/fstab. You could post the contents. You might also try running "sudo mount -av" and see if there is a more informative message. Also, what version of CentOS is this? Looks like it might be CentOS 6, but I can't be sure.
 
Old 01-02-2017, 07:11 PM   #15
NotionCommotion
Member
 
Registered: Aug 2012
Posts: 762

Original Poster
Rep: Reputation: Disabled
Yes, the latest CentOS 6 version.

Is it possible to enter this rescue shell other than when the boot fails? But then again, maybe there's no reason to do so...

Code:
[Michael@devserver ~]$ sudo mount -av
mount: UUID=12a081eb-285e-4599-95ab-db23b70280df already mounted on /boot
mount: /dev/mapper/VolGroup-lv_home already mounted on /home
mount: /dev/mapper/VolGroup-lv_mysql already mounted on /var/lib/mysql
mount: tmpfs already mounted on /dev/shm
mount: devpts already mounted on /dev/pts
mount: sysfs already mounted on /sys
mount: proc already mounted on /proc
mount: none already mounted on /home/vbox/vbusbfs
mount: mount point 0 does not exist
[Michael@devserver ~]$ cat /etc/fstab
#
# /etc/fstab
# Created by anaconda on Sat Apr 19 05:57:56 2014
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/VolGroup-lv_root /                       ext4    defaults        1 1
UUID=12a081eb-285e-4599-95ab-db23b70280df /boot                   ext4    defaults        1 2
/dev/mapper/VolGroup-lv_home  /home                  ext4    defaults        1 2
/dev/mapper/VolGroup-lv_mysql /var/lib/mysql         ext3    barrier=0       1 2
/dev/mapper/VolGroup-lv_swap swap                    swap    defaults        0 0
tmpfs                   /dev/shm                tmpfs   defaults        0 0
devpts                  /dev/pts                devpts  gid=5,mode=620  0 0
sysfs                   /sys                    sysfs   defaults        0 0
proc                    /proc                   proc    defaults        0 0
none /home/vbox/vbusbfs usbfs rw,devgid=496
504,devmode=664 0 0
[Michael@devserver ~]$
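One possible explanation worth checking (an inference, not confirmed in the thread): fstab entries are whitespace-split one per line, so if that last usbfs line really is wrapped across two lines in the file, mount(8) would see "504,devmode=664 0 0" as a second entry whose mount point is the literal string "0" — which matches the "mount point 0 does not exist" error exactly. A quick simulation of the parsing:

```shell
# Simulate how mount(8) splits /etc/fstab fields, using the wrapped
# usbfs line exactly as posted above (note the embedded newline):
fstab='none /home/vbox/vbusbfs usbfs rw,devgid=496
504,devmode=664 0 0'

# Field 2 of each non-blank line is the mount point:
printf '%s\n' "$fstab" | awk 'NF {print "mount point:", $2}'
# → mount point: /home/vbox/vbusbfs
#   mount point: 0
```

If the break is real, rejoining that usbfs entry onto a single line in /etc/fstab should clear the error.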
 