home partition doesn't mount after resizing using lvreduce - can't read superblock
Hello,
I reduced the home partition using the command below:
Code:
[root@sfvm08 mapper]# lvreduce -L 70G /dev/mapper/centos_sfvm03-home
WARNING: Reducing active and open logical volume to 70.00 GiB
THIS MAY DESTROY YOUR DATA (filesystem etc.)
Do you really want to reduce home? [y/n]: y
Size of logical volume centos_sfvm03/home changed from 97.45 GiB (24946 extents) to 70.00 GiB (17920 extents).
Logical volume home successfully resized.
As you can see, I got a success message.
After that I increased the root partition.
Everything seemed fine at first, but then my home partition died. After a reboot I ended up in CentOS rescue mode, and as far as I can tell the home partition cannot be mounted.
I tried to repair it with the command below in rescue mode:
Code:
xfs_repair /dev/mapper/centos_sfvm03-home
but it cannot find a secondary superblock.
Unfortunately I don't have any backup, and my data is important.
I need a more detailed guide for my case. This is a virtual machine and I have access to the virtualization environment, so I can enlarge the hard disk through VMware ESXi. Would that harm the disk? As far as I understood from the similar thread, I should grow the reduced LV back; can someone walk me through how to do that in more detail?
Do you use UUIDs in your fstab (most systems do now)? They may have changed due to you resizing the volume. It is worth checking.
What if I say I have no idea what you mean? Would you please explain it in detail?
As far as I know, one of sda/sdb is XFS and the other is a logical volume.
lvreduce allows you to reduce the size of a logical volume. Be careful when reducing a logical volume's size, because data in the reduced part is lost!!!
You should therefore ensure that any filesystem on the volume is resized before running lvreduce so that the extents that are to be removed are not in use.
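A sketch of that safe order, using the device name from this thread and assuming an ext4 filesystem for illustration (XFS, as used here, cannot be shrunk at all):

```shell
# Shrink the filesystem FIRST, then the LV - never the other way around.
# ext4 shown as an example; do not run any of this without a backup.
umount /home
e2fsck -f /dev/mapper/centos_sfvm03-home        # required check before resize2fs
resize2fs /dev/mapper/centos_sfvm03-home 70G    # shrink the filesystem first
lvreduce -L 70G /dev/mapper/centos_sfvm03-home  # only then shrink the LV
mount /home
```

Recent lvm2 can combine the two steps with `lvreduce -r -L 70G <lv>`, which shrinks the filesystem via fsadm before reducing the LV.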
You may have a slim chance of getting something back if you undo both actions with lvreduce/lvextend.
Hard drives can die any time. Not having backups is asking for data loss.
Quote:
Unfortunately I don't have any backup, and my data is important.
Code:
WARNING: Reducing active and open logical volume to 70.00 GiB
THIS MAY DESTROY YOUR DATA (filesystem etc.)
Was a hint: do not do this if you care about your data and haven't made a backup.
That secondary superblock xfs_repair can't find was perhaps in the part of the FS that got chopped off when you resized.
Unfortunately that other thread you linked to holds your answer. There is no good way to resize an existing xfs filesystem, and you're likely going to have to get into some data recovery methods now to get that "important" data back. And the results of those methods are far from a guaranteed success.
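Before any further recovery attempts it is worth imaging the damaged volume and sticking to xfs_repair's read-only mode (a sketch; the image destination path is hypothetical):

```shell
# Image the damaged LV so repair experiments are reversible.
# /mnt/backup/home.img is a hypothetical destination with enough free space.
dd if=/dev/mapper/centos_sfvm03-home of=/mnt/backup/home.img bs=4M conv=noerror,sync
# Dry run: report what xfs_repair would change without writing anything.
xfs_repair -n /dev/mapper/centos_sfvm03-home
```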
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point> <type> <options> <dump> <pass>
# / was on /dev/sdb3 during installation
UUID=5550a765-506a-4ca6-aedc-b1c660dbb486 / ext4 errors=remount-ro 0 1
# /home was on /dev/sdb4 during installation
UUID=2cbaed98-039f-45d4-bf0d-4eb200009ec4 /home ext4 defaults 0 2
If your fstab uses UUIDs, they may have changed on your partitions due to the change you made. A quick cat of your fstab and blkid in your terminal will give you a quick look at whether that is true, eliminating that possibility if it is false. If it is true, change the entries in your fstab to reflect the new UUIDs.
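The check above amounts to comparing two listings, plus a small bit of text extraction; a sketch using the sample fstab line from this thread:

```shell
# Real check: compare these two listings by eye or with diff:
#   blkid                  -> UUIDs the filesystems actually carry
#   grep UUID= /etc/fstab  -> UUIDs the system expects at boot
# Extracting the UUID from an fstab entry (sample line from this thread):
line='UUID=2cbaed98-039f-45d4-bf0d-4eb200009ec4 /home ext4 defaults 0 2'
fstab_uuid=$(printf '%s\n' "$line" | sed -n 's/^UUID=\([^ ]*\).*/\1/p')
echo "$fstab_uuid"   # prints 2cbaed98-039f-45d4-bf0d-4eb200009ec4
```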
Quote:
You may have a slim chance of getting something back if you undo both actions with lvreduce/lvextend.
Hard drives can die any time. Not having backups is asking for data loss.
Yes, I see that my chance is slim this way, but even so, what exactly should I do?
I decreased home and increased root, as you can see in the first post (the whole disk was 150 GB). Now I have the chance to grow the hard disk from 150 GB to 200 GB in VMware. Is it better to do that, rather than exactly reversing the resize by just increasing home again? If so, how shall I do it, and what commands do I need?
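Growing the virtual disk in ESXi is, by itself, non-destructive: the new space simply appears at the end of the device. A rough sketch of making LVM use it (device and partition names here are assumptions; check yours with lsblk first):

```shell
# 1. After growing the disk in ESXi, make the kernel re-read its size
#    (sda is an assumed device name):
echo 1 > /sys/class/block/sda/device/rescan
# 2a. If the PV partition itself was grown (e.g. with growpart or parted),
#     let LVM claim the new space:
pvresize /dev/sda2
# 2b. Or create a new partition in the free space and add it to the VG:
#     pvcreate /dev/sda3
#     vgextend centos_sfvm03 /dev/sda3
# 3. Confirm the volume group now shows free extents:
vgs
```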
I don't know the exact commands, but restoring things to how they were should be the goal, not just resizing again to put the space back.
Yes, I understand what a stupid thing I did. I'm duplicating the virtual machine's disk so that I have a spare copy for the tests I'm about to run. So the link you shared is about what I could have done before I did this horrible thing, right?
Yes. And next time around, if you think you will ever need to resize that volume, consider using a filesystem that can be resized.
As Emerson suggested, undo the resize, assuming no data had been written, and the repair attempt did not make things worse - this may give you a chance at getting your data back.
However, be warned, you have to put things EXACTLY as they were. The same number of extents, and everything has to be exactly where it was. Same starting sector, same starting extent, same ending extent, etc.
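With the numbers from the first post, "exactly as it was" means going from 17920 extents back to the original 24946, i.e. adding 7026 extents. A sketch (this can only help if the freed extents were not reallocated to root or overwritten in the meantime):

```shell
# Extent counts taken from the lvreduce output in the first post.
orig_extents=24946   # size before the reduce
new_extents=17920    # size after the reduce
delta=$((orig_extents - new_extents))
echo "$delta"        # prints 7026
# Restore the exact original size by absolute extent count:
#   lvextend -l 24946 /dev/mapper/centos_sfvm03-home
# then re-check read-only before attempting any writing repair:
#   xfs_repair -n /dev/mapper/centos_sfvm03-home
```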
Since I'm not a pro Linux user, I think I need to get a pro's hands on this issue.
Quote:
XFS can handle file systems of up to 18 exabytes, with a maximum file size of 9 exabytes. There is no limit on the number of files.
Are you even dealing with hard drives that big? It seems like overkill to me to have such a filesystem if not.
Quote:
In 2009, version 5.4 of 64-bit Red Hat Enterprise Linux (RHEL) Linux distribution contained the necessary kernel support for the creation and usage of XFS file systems, but lacked the corresponding command-line tools. The tools available from CentOS could operate for that purpose, and they were previously also provided[by whom?] to RHEL customers on request.[13] RHEL 6.0, released in 2010, includes XFS support for a fee as part of Red Hat's "scalable file system add-on".[14] Oracle Linux 6, released in 2011, also includes an option for using XFS.[15]
RHEL 7.0, released in June 2014, uses XFS as its default file system, including support for using XFS for the /boot partition.[16]
Linux 4.8 added a large new feature, "reverse mapping". It is the foundation for a set of new features such as snapshots, copy-on-write (COW) data, data deduplication, online data and metadata scrubbing, highly accurate bad sector/data loss reporting, and significantly improved reconstruction of damaged or corrupted filesystems.[17]
Nope. The server we are talking about is just a simple cPanel/WHM CentOS 7 server hosting 200-300 websites. Honestly, I'm not sure why this partition layout was chosen (the default suggestion, or whatever), but I'm just managing the server; I'm not the one who set it up.
I will try reversing the steps to see whether I can get my home partition back. If not, I will go down the data-recovery route (I have no idea how to do this on Linux, so I should read more) to recover the home directory and the MySQL data store, which are critical for me to get back.
Ah, oh boy, are you in trouble; not your toys to play with? I modified my post: the wiki said something about the 4.8 kernel having reverse mapping, and about Red Hat / CentOS having tools to deal with your filesystem. Go Red Hat! Good luck...