LinuxQuestions.org

LinuxQuestions.org (/questions/)
-   Linux - Virtualization and Cloud (http://www.linuxquestions.org/questions/linux-virtualization-and-cloud-90/)
-   -   URGENT: Trouble growing a filesystem on a domu (http://www.linuxquestions.org/questions/linux-virtualization-and-cloud-90/urgent-trouble-growing-a-filesystem-on-a-domu-4175474906/)

kwstone 08-27-2013 01:33 PM

URGENT: Trouble growing a filesystem on a domu
 
I have a Xen setup where a domu's root filesystem is 98% full and growing. I've been scouring the web for answers and not getting anything coherent.

The dom0 has some free space. I used lvresize on the dom0 to add to the logical volume. That part took, and I can see from lvdisplay that the extra space was added to the logical volume.

I can't see how to proceed from here. An lvdisplay on the domu shows that the extra space isn't recognized yet.

Does the domu need to do an lvresize of its own? On which side does the resize2fs run (they're ext3 filesystems), and when?

I'm about to run out of space, and I've been researching this for over a week without finding a solution.

Thanks in advance.

dt64 08-28-2013 04:48 AM

Have you resized the file system on your domU after you resized the underlying partition?
Did you run resize2fs on domU? What was the outcome?
Please post the lvresize command you used and the output.

kwstone 08-28-2013 08:39 AM

As I've continued researching this, I've found several blogs that have helped my understanding. http://www.rsreese.com/resizing-xen-...d-filesystems/ is the one I'm using as the model from here. Anyway, to answer the questions dt64 posted:

Running lvresize wasn't a problem. I was able to add extra space to the logical volume with little incident, and I had spare disk space to allocate (fortunately).

The issue is with the "filesystem" that's being provided to the domu by the dom0. An fdisk -l shows that what I thought was a filesystem file is actually a disk file with two partitions in it. resize2fs won't work directly on this because it's not a filesystem; running it produces a "bad magic number" error. The posting says that

* fdisk or an equivalent program is to be used on the disk file to delete and then re-create the partition that has the actual filesystem that the domu will use. In this case, it's the second partition. The re-creation adds the space.

* kpartx or an equivalent program is to be used on the disk file to map the partitions within it as separate devices. kpartx -a is supposed to do this.

* e2fsck is to be used to do a filesystem check on the second filesystem to check for errors. e2fsck -f <filesystem file> is the form used.

* resize2fs can then be used on the filesystem

* kpartx -d or another program's equivalent is then used to remove the partition mappings again.

This is all done while the domu is shut down with xm shutdown. xm create is used to start the domu back up.
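Putting the steps above together, the whole sequence would look roughly like this (just my sketch of the blog's procedure; the mapper device names are guesses for my setup, so check what appears under /dev/mapper after kpartx -a before trusting them):

```shell
# Shut the domu down first
xm shutdown devmon01

# 1. Delete and re-create partition 2 in fdisk, keeping the same start
#    sector but extending the end into the newly added space
fdisk /dev/dom0vg/devmon01

# 2. Map the partitions inside the disk as separate devices
kpartx -a /dev/dom0vg/devmon01

# 3. Check the second partition's filesystem for errors
e2fsck -f /dev/mapper/devmon01p2

# 4. Grow the filesystem to fill the enlarged partition
resize2fs /dev/mapper/devmon01p2

# 5. Remove the partition mappings
kpartx -d /dev/dom0vg/devmon01

# Bring the domu back up
xm create devmon01
```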

So I guess I have two questions:

I'm not clear on why deleting the partition inside fdisk doesn't destroy all the data in the guest filesystem/partition.

Is there another way to do this that doesn't require pulling things apart and putting them back together? I haven't found any credible alternatives. There were a couple of postings that claimed this could be done "hot", but those didn't seem to work.

dt64 08-28-2013 08:57 AM

Just to clarify: What kind of virtual HDD is used by your domU? LVM partition or file based vHDD like img or qcow2?

If it's LVM, you can grow it from your dom0 like a normal partition while the domU is shut down.
I asked for the lvresize command you used because you may have used the -r parameter, which would adjust the filesystem automatically.

Note: fdisk can't work well with LVM volumes/partitions. Since you talked about using lvresize, fdisk doesn't help much here; fdisk is for a different HDD management layer below LVM.

To get an idea of your HDD config, post the output of the following commands:
Code:

fdisk -lu
vgdisplay
lvdisplay
mount
virsh dumpxml <domU name>


kwstone 08-28-2013 09:21 AM

I did the lvresize without the -r switch. Is there a way to compensate for the fact that I didn't do that to begin with?

The output of what you asked for is below.

[kwstone@devhost01 /]$ sudo /sbin/fdisk -lu
[sudo] password for kwstone:

Disk /dev/sda: 146.1 GB, 146163105792 bytes
255 heads, 63 sectors/track, 17769 cylinders, total 285474816 sectors
Units = sectors of 1 * 512 = 512 bytes

Device Boot Start End Blocks Id System
/dev/sda1 * 63 208844 104391 83 Linux
/dev/sda2 208845 16980704 8385930 82 Linux swap / Solaris
/dev/sda3 16998400 42164223 12582912 83 Linux
/dev/sda4 42170625 285458984 121644180 83 Linux

Disk /dev/sdb: 292.3 GB, 292326211584 bytes
255 heads, 63 sectors/track, 35539 cylinders, total 570949632 sectors
Units = sectors of 1 * 512 = 512 bytes

Device Boot Start End Blocks Id System
/dev/sdb1 * 63 208844 104391 83 Linux
/dev/sdb2 208845 570934034 285362595 8e Linux LVM

[kwstone@devhost01 /]$ sudo /usr/sbin/vgdisplay
--- Volume group ---
VG Name dom0vg
System ID
Format lvm2
Metadata Areas 2
Metadata Sequence No 11
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 7
Open LV 6
Max PV 0
Cur PV 2
Act PV 2
VG Size 388.12 GB
PE Size 32.00 MB
Total PE 12420
Alloc PE / Size 10048 / 314.00 GB
Free PE / Size 2372 / 74.12 GB
VG UUID ZBsQqC-n0B3-Nljh-rQjW-rlg5-3vhK-C7oK2e

[kwstone@devhost01 /]$ sudo /usr/sbin/lvdisplay
--- Logical volume ---
LV Name /dev/dom0vg/root
VG Name dom0vg
LV UUID DE5Qiz-Ylyh-9Knh-wfRw-PwzK-UIcO-0YwijO
LV Write Access read/write
LV Status available
# open 1
LV Size 10.00 GB
Current LE 320
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:0

--- Logical volume ---
LV Name /dev/dom0vg/var
VG Name dom0vg
LV UUID KaTD2e-wsg1-q7LO-87f0-yVod-mHrP-Nlr6rV
LV Write Access read/write
LV Status available
# open 1
LV Size 2.00 GB
Current LE 64
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:1

--- Logical volume ---
LV Name /dev/dom0vg/swap
VG Name dom0vg
LV UUID ThRrQk-saFz-mFcT-pd6F-AWXK-VWgG-ozdN7s
LV Write Access read/write
LV Status available
# open 1
LV Size 2.00 GB
Current LE 64
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:2

--- Logical volume ---
LV Name /dev/dom0vg/devmon01
VG Name dom0vg
LV UUID MGQjfn-J0ek-sAxZ-UVU7-mYFj-cYqJ-ikI3hn
LV Write Access read/write
LV Status available
# open 2
LV Size 140.00 GB
Current LE 4480
Segments 3
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:3

--- Logical volume ---
LV Name /dev/dom0vg/devint01
VG Name dom0vg
LV UUID TQ5sKO-1xc2-2AeC-adQQ-jf05-gRX4-e0YTZu
LV Write Access read/write
LV Status available
# open 2
LV Size 150.00 GB
Current LE 4800
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:4

--- Logical volume ---
LV Name /dev/dom0vg/home
VG Name dom0vg
LV UUID vnPX3e-9kNq-KMeE-X0Zw-ORWP-YJap-ri31e6
LV Write Access read/write
LV Status available
# open 1
LV Size 6.00 GB
Current LE 192
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:5

--- Logical volume ---
LV Name /dev/dom0vg/devdns01
VG Name dom0vg
LV UUID oA5jiV-P2XT-gfED-Ophh-z9Mt-Xu0c-r1Ws2K
LV Write Access read/write
LV Status available
# open 0
LV Size 4.00 GB
Current LE 128
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:6

[kwstone@devhost01 /]$ mount
/dev/mapper/dom0vg-root on / type ext3 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
/dev/mapper/dom0vg-var on /var type ext3 (rw)
/dev/sda1 on /boot type ext3 (rw)
tmpfs on /dev/shm type tmpfs (rw)
/dev/mapper/dom0vg-home on /home type ext3 (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
nfsd on /proc/fs/nfsd type nfsd (rw)
none on /var/lib/xenstored type tmpfs (rw)

[kwstone@devhost01 /]$ sudo /usr/bin/virsh dumpxml devmon01
<domain type='xen' id='5'>
<name>devmon01</name>
<uuid>5739d1c7-8494-99b6-4e99-c44f5e994ac4</uuid>
<memory>4292608</memory>
<currentMemory>4292608</currentMemory>
<vcpu>2</vcpu>
<os>
<type>hvm</type>
<loader>/usr/lib/xen/boot/hvmloader</loader>
<boot dev='hd'/>
</os>
<features>
<acpi/>
<apic/>
<pae/>
</features>
<clock offset='utc'/>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>restart</on_crash>
<devices>
<emulator>/usr/lib64/xen/bin/qemu-dm</emulator>
<disk type='block' device='disk'>
<driver name='phy'/>
<source dev='/dev/dom0vg/devmon01'/>
<target dev='hda' bus='ide'/>
</disk>
<disk type='file' device='cdrom'>
<target dev='hdc' bus='ide'/>
<readonly/>
</disk>
<interface type='bridge'>
<mac address='00:16:36:2e:97:f6'/>
<source bridge='xenbr1'/>
<script path='vif-bridge'/>
<target dev='vif5.0'/>
</interface>
<serial type='pty'>
<source path='/dev/pts/2'/>
<target port='0'/>
</serial>
<console type='pty' tty='/dev/pts/2'>
<source path='/dev/pts/2'/>
<target port='0'/>
</console>
<input type='tablet' bus='usb'/>
<input type='mouse' bus='ps2'/>
<graphics type='vnc' port='5905' autoport='yes' keymap='en-us'/>
</devices>
</domain>

kwstone 08-28-2013 10:04 AM

One more thing: I've been searching the web for lvresize --resizefs and finding little info on it. The lvresize man page here makes no mention of it. I'm still looking for information.

dt64 08-28-2013 02:42 PM

Since you didn't answer all of my questions, I can only guess at a few things.

I asked for the exact lvresize command you used for a reason; it would make things easier.

Theory first:
You state you couldn't find much about lvresize --resizefs. Have a read in the lvresize man page:
Code:

-r, --resizefs
    Resize underlying filesystem together with the logical volume using fsadm(8).

So with the -r switch set, lvresize uses fsadm (see its man page) to change the file system size in the same run.

Practice next:
Using the right tools is half the work done.
resize2fs without a size parameter adjusts the file system to the size of the underlying device. I guess that's what we want here.
My next guess is that you were working on /dev/dom0vg/devmon01.
So what you want to do would be:
Code:

virsh shutdown devmon01
resize2fs -p /dev/dom0vg/devmon01
virsh start devmon01

This will shut down your VM "devmon01", adjust the file system size of its HDD and start the VM again.
The resize shouldn't take long, depending on disk speed, I/O load etc my guess would be around 30 seconds.

Just wondering: why do you have two swap partitions (/dev/sda2, ~8GB, and /dev/dom0vg/swap, 2GB) configured?

kwstone 08-28-2013 02:57 PM

The lvresize command used was:
lvresize -L +40GB /dev/dom0vg/devmon01

I looked again at the man pages for lvresize that come with the CentOS 5.5 installation the Xen dom0 is on. There's nothing there about a -r or --resizefs option. Likewise, searching the web for "lvresize resizefs" netted one forum where a couple of responders were skeptical about it. Maybe it comes in later OS versions?

Doing an fdisk -l /dev/dom0vg/devmon01 shows that it's not a filesystem. It's a disk image containing two partitions, and the second one holds what I want to grow.

Disk /dev/dom0vg/devmon01: 150.3 GB, 150323855360 bytes
255 heads, 63 sectors/track, 18275 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/dom0vg/devmon01p1 * 1 13 104391 83 Linux
/dev/dom0vg/devmon01p2 14 13054 104751832+ 8e Linux LVM

When I tried to run resize2fs on /dev/dom0vg/devmon01, I got the "bad magic number" error. I've found blogs that say that should happen, since it's not a filesystem, and that's why those people say the partitions have to be split apart with kpartx or parted or whatever works.

I can't answer why there are two swap partitions. I inherited the system and have spent all my time trying to beat the clock with the shrinking disk space.

Sorry about not answering all the questions completely last time around.

kwstone 08-28-2013 03:04 PM

Looks like I missed another of your questions: I haven't tried running resize2fs from the domu. Not sure what to use as the parameter. An lvdisplay run from the domu showed that the extra 40GB I added on the dom0 is not showing.

dt64 08-28-2013 03:23 PM

Ok, now we are talking!

Looks like you have yet another LVM layer within your LV /dev/dom0vg/devmon01.
In that case you have to do something different.
I guess /dev/dom0vg/devmon01 was 100GB before?

Now again, run this set of commands in your domU and post the output:
Code:

fdisk -lu
vgdisplay
lvdisplay
mount
df -h

Edit:
I wondered why your lvresize doesn't support the -r switch, but a look at the man page for that version showed it wasn't supported back then.
Is there any special reason why you are using this ancient version without upgrading to the latest 5.9 or even 6.4?
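Roughly, the procedure for a nested-LVM guest will look like this from inside the domU (a sketch only; I'm guessing typical CentOS device names like /dev/hda2 and /dev/VolGroup00/LogVol00 until your output confirms them):

```shell
# Inside the domU, once the enlarged virtual disk is visible:

# 1. Grow the PV partition: in fdisk, delete and re-create partition 2
#    with the SAME start sector, a larger end, and type 8e (Linux LVM),
#    then reboot so the kernel re-reads the partition table
fdisk /dev/hda

# 2. Tell LVM that the physical volume has grown
pvresize /dev/hda2

# 3. Hand the new free extents to the root LV
lvextend -l +100%FREE /dev/VolGroup00/LogVol00

# 4. Grow the ext3 filesystem to fill the LV
resize2fs /dev/VolGroup00/LogVol00
```

Growing (unlike shrinking) ext3 should work even with the filesystem mounted on a CentOS 5 kernel, but doing it from a rescue environment after an e2fsck -f is the cautious route.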

kwstone 08-28-2013 03:25 PM

Yes. At this point it still is 100GB. The logical volume behind it has been resized to 140GB, and I can't get the extra 40GB to show up without the surgery I mentioned earlier in the thread.

kwstone 08-28-2013 04:34 PM

The server is going to be put out of service in the next few months, so no energy is being applied to upgrade the OS.

[kwstone@devmon01 /]$ sudo /sbin/fdisk -l /dev/mapper/VolGroup00-LogVol00

Disk /dev/mapper/VolGroup00-LogVol00: 100.8 GB, 100898177024 bytes
255 heads, 63 sectors/track, 12266 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/mapper/VolGroup00-LogVol00 doesn't contain a valid partition table
[kwstone@devmon01 /]$ sudo /sbin/fdisk -lu
[sudo] password for kwstone:

Disk /dev/hda: 150.3 GB, 150323855360 bytes
255 heads, 63 sectors/track, 18275 cylinders, total 293601280 sectors
Units = sectors of 1 * 512 = 512 bytes

Device Boot Start End Blocks Id System
/dev/hda1 * 63 208844 104391 83 Linux
/dev/hda2 208845 209712509 104751832+ 8e Linux LVM

[kwstone@devmon01 /]$ sudo /usr/sbin/vgdisplay
/dev/hdc: open failed: No medium found
--- Volume group ---
VG Name VolGroup00
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 3
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 2
Open LV 2
Max PV 0
Cur PV 1
Act PV 1
VG Size 99.88 GB
PE Size 32.00 MB
Total PE 3196
Alloc PE / Size 3196 / 99.88 GB
Free PE / Size 0 / 0
VG UUID EhxaHc-1PUq-USvD-NYgJ-zNrd-DZtd-4CuriM

[kwstone@devmon01 /]$ sudo /usr/sbin/lvdisplay
/dev/hdc: open failed: No medium found
--- Logical volume ---
LV Name /dev/VolGroup00/LogVol00
VG Name VolGroup00
LV UUID d08ENn-ggSj-inuE-tiwM-JzJj-Qlww-1EN1YA
LV Write Access read/write
LV Status available
# open 1
LV Size 93.97 GB
Current LE 3007
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:0

--- Logical volume ---
LV Name /dev/VolGroup00/LogVol01
VG Name VolGroup00
LV UUID LitIBj-1H4L-MUCi-3okp-G3TO-8vtF-uYAp5z
LV Write Access read/write
LV Status available
# open 1
LV Size 5.91 GB
Current LE 189
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:1

[kwstone@devmon01 /]$ mount
/dev/mapper/VolGroup00-LogVol00 on / type ext3 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
/dev/hda1 on /boot type ext3 (rw)
tmpfs on /dev/shm type tmpfs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
devhost01:/home on /home type nfs (rw,nfsvers=3,tcp,rsize=8192,wsize=8192,addr=10.224.235.14)

[kwstone@devmon01 /]$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
92G 84G 2.7G 97% /
/dev/hda1 99M 26M 69M 28% /boot
tmpfs 2.0G 0 2.0G 0% /dev/shm
devhost01:/home 6.0G 3.4G 2.3G 60% /home

kwstone 08-28-2013 04:38 PM

Another note: the domu is on CentOS 5.8, but the dom0 is on 5.5

dt64 08-28-2013 04:40 PM

Quote:

Originally Posted by kwstone (Post 5017771)
Another note: the domu is on CentOS 5.8, but the dom0 is on 5.5

Thanks. I was about to ask that question in a minute ;)

kwstone 08-28-2013 04:49 PM

Yeah, the --resizefs flag was in the domu's man page for lvresize. So if the dom0 were running 5.8, it would be the 30-second operation you were talking about. Give it two minutes anyway :-)

