Old 12-15-2009, 03:42 PM   #1
PeggySue
LQ Newbie
 
Registered: Aug 2007
Posts: 6

Rep: Reputation: 0
How do you shrink a logical volume, please?


I am giving Fedora a test drive but logical volumes are causing a problem.

I had partitions for Windows, Ubuntu 10.04, Slackware and swap. I thought I pointed the Fedora installer at the Ubuntu partition, but it took the Ubuntu, Slackware and swap partitions! The data is lost, but I want to rebuild Slackware. To do this I need to free up disk space that I can repartition.

I only have a single hard drive on this laptop. Fedora is running on the logical volume I need to shrink, so I can't unmount it. The Fedora install disk is not a live DVD, and GParted on my Ubuntu live CD won't manage logical volumes.

I'm new to logical volumes. Is there a preferred logical volume manager application that works with mixed partitions and logical volumes, and that I could download in an Ubuntu live session, please?
 
Old 12-15-2009, 04:30 PM   #2
eco
Member
 
Registered: May 2006
Location: BE
Distribution: Debian/Gentoo
Posts: 412

Rep: Reputation: 48
If I understood correctly, you still have the original partitions; you just have LVM on top of them. In that case, all you need to do is shrink the filesystem first, then reduce the LV so it no longer occupies the partitions you want back (make sure enough space remains for your data). Once that's done, you can move any remaining extents off those partitions, remove them from the VG, and finally wipe their PV labels.
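In your case the first half would look something like this. A rough sketch only: run it from a live or rescue environment, since an LV holding the running system can't be shrunk while mounted, and substitute your own VG/LV names and target sizes (VolGroup00/LogVol00 below is just Fedora's old default naming; check yours with lvs).

Code: shrink the filesystem, then the LV (sketch)

vgchange -ay                              # activate the volume group(s)
e2fsck -f /dev/VolGroup00/LogVol00        # fs must be clean before resizing
resize2fs /dev/VolGroup00/LogVol00 9G     # shrink the fs below the target, for safety
lvreduce -L 10G /dev/VolGroup00/LogVol00  # then shrink the LV -- never the reverse
resize2fs /dev/VolGroup00/LogVol00        # grow the fs back to fill the 10G LV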

The following is an extract from my wiki. It's verbose, but it will give you a step-by-step. Hope it helps.

Code:
Remove a disk from a VG
Prepare

I followed this documentation.

The first option failed for me... here is the one I got working. --Ed 14:39, 7 July 2009 (CEST)

Create PVs on two disks

sdh - 3GB
sdi - 1GB

Code: create the PVs

deb:~# pvcreate /dev/sd{h,i}
  Physical volume "/dev/sdh" successfully created
  Physical volume "/dev/sdi" successfully created

deb:~# pvscan
  PV /dev/sdc1   VG data            lvm2 [320.00 MB / 0    free]
  PV /dev/sdc3   VG data            lvm2 [376.00 MB / 56.00 MB free]
  PV /dev/sdh                       lvm2 [3.00 GB]
  PV /dev/sdi                       lvm2 [1.00 GB]
  Total: 4 [4.68 GB] / in use: 2 [696.00 MB] / in no VG: 2 [4.00 GB]

Now create a VG from them
Code: create the VG from the disks

deb:~# vgcreate test /dev/sd{h,i}
  Volume group "test" successfully created

deb:~# vgdisplay -v test
    Using volume group(s) on command line
    Finding volume group "test"
  --- Volume group ---
  VG Name               test
  System ID
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                2
  Act PV                2
  VG Size               3.99 GB
  PE Size               4.00 MB
  Total PE              1022
  Alloc PE / Size       0 / 0
  Free  PE / Size       1022 / 3.99 GB
  VG UUID               hA9nqx-4l1e-F61H-bqWt-dMVr-OPdW-SsA41A

  --- Physical volumes ---
  PV Name               /dev/sdh
  PV UUID               KN10Li-Dw1x-pPIf-j0iw-D2vH-03Pf-PEdKRO
  PV Status             allocatable
  Total PE / Free PE    767 / 767

  PV Name               /dev/sdi
  PV UUID               6ek7PN-hHfG-uI8F-ybYE-LVRG-0dHx-ZpPxCc
  PV Status             allocatable
  Total PE / Free PE    255 / 255

and finally create an LV using the entire VG space and put a filesystem on it.
Code: create the LV

deb:~# lvcreate -l 1022 test -n lvtest
  Logical volume "lvtest" created

deb:~# mke2fs -j /dev/test/lvtest
mke2fs 1.41.3 (12-Oct-2008)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
261632 inodes, 1046528 blocks
52326 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=1073741824
32 block groups
32768 blocks per group, 32768 fragments per group
8176 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736

Writing inode tables: done
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 29 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.

Mount the new fs and fill it with data so that we can check the integrity of the data later on.
Code: mount and fill

deb:~# mount /dev/test/lvtest /test

deb:~# for i in `seq 1 4`; do echo $i; mkdir /test/dir${i}; rsync -a /usr/ /test/dir${i}/; done
1
2
3
4

deb:~# df -h /test
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/test-lvtest
                      4.0G  3.2G  603M  85% /test

Display the status of the VG. Note how all extents in /dev/sdh are used.
Code: check VG status

deb:~# vgdisplay -v test
    Using volume group(s) on command line
    Finding volume group "test"
  --- Volume group ---
  VG Name               test
  System ID
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  2
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               1
  Max PV                0
  Cur PV                2
  Act PV                2
  VG Size               3.99 GB
  PE Size               4.00 MB
  Total PE              1022
  Alloc PE / Size       1022 / 3.99 GB
  Free  PE / Size       0 / 0
  VG UUID               hA9nqx-4l1e-F61H-bqWt-dMVr-OPdW-SsA41A

  --- Logical volume ---
  LV Name                /dev/test/lvtest
  VG Name                test
  LV UUID                KpiHKa-SDMI-RwsB-IYvW-3Yup-LumJ-3djpAn
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                3.99 GB
  Current LE             1022
  Segments               2
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           254:1

  --- Physical volumes ---
  PV Name               /dev/sdh
  PV UUID               KN10Li-Dw1x-pPIf-j0iw-D2vH-03Pf-PEdKRO
  PV Status             allocatable
  Total PE / Free PE    767 / 0

  PV Name               /dev/sdi
  PV UUID               6ek7PN-hHfG-uI8F-ybYE-LVRG-0dHx-ZpPxCc
  PV Status             allocatable
  Total PE / Free PE    255 / 0

deb:~# lvdisplay -v /dev/test/lvtest
    Using logical volume(s) on command line
  --- Logical volume ---
  LV Name                /dev/test/lvtest
  VG Name                test
  LV UUID                KpiHKa-SDMI-RwsB-IYvW-3Yup-LumJ-3djpAn
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                3.99 GB
  Current LE             1022
  Segments               2
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           254:1

deb:~# pvscan
  PV /dev/sdh    VG test   lvm2 [3.00 GB / 0    free]
  PV /dev/sdi    VG test   lvm2 [1020.00 MB / 0    free]
  PV /dev/sdc1   VG data   lvm2 [320.00 MB / 0    free]
  PV /dev/sdc3   VG data   lvm2 [376.00 MB / 56.00 MB free]
  Total: 4 [4.67 GB] / in use: 4 [4.67 GB] / in no VG: 0 [0   ]

Implement

Here we create PVs on four new disks, add them to the VG, and start migrating the data from one PV to the others.
Code: add disks to PV then to VG

deb:~# pvcreate /dev/sd{d,e,f,g}
  Physical volume "/dev/sdd" successfully created
  Physical volume "/dev/sde" successfully created
  Physical volume "/dev/sdf" successfully created
  Physical volume "/dev/sdg" successfully created

deb:~# pvscan
  PV /dev/sdh    VG test            lvm2 [3.00 GB / 0    free]
  PV /dev/sdi    VG test            lvm2 [1020.00 MB / 0    free]
  PV /dev/sdc1   VG data            lvm2 [320.00 MB / 0    free]
  PV /dev/sdc3   VG data            lvm2 [376.00 MB / 56.00 MB free]
  PV /dev/sdd                       lvm2 [1.00 GB]
  PV /dev/sde                       lvm2 [1.00 GB]
  PV /dev/sdf                       lvm2 [1.00 GB]
  PV /dev/sdg                       lvm2 [1.00 GB]
  Total: 8 [8.67 GB] / in use: 4 [4.67 GB] / in no VG: 4 [4.00 GB]

deb:~# vgextend test /dev/sd{d,e,f,g}
  Volume group "test" successfully extended

deb:~# pvscan
  PV /dev/sdh    VG test   lvm2 [3.00 GB / 0    free]
  PV /dev/sdi    VG test   lvm2 [1020.00 MB / 0    free]
  PV /dev/sdd    VG test   lvm2 [1020.00 MB / 1020.00 MB free]
  PV /dev/sde    VG test   lvm2 [1020.00 MB / 1020.00 MB free]
  PV /dev/sdf    VG test   lvm2 [1020.00 MB / 1020.00 MB free]
  PV /dev/sdg    VG test   lvm2 [1020.00 MB / 1020.00 MB free]
  PV /dev/sdc1   VG data   lvm2 [320.00 MB / 0    free]
  PV /dev/sdc3   VG data   lvm2 [376.00 MB / 56.00 MB free]
  Total: 8 [8.66 GB] / in use: 8 [8.66 GB] / in no VG: 0 [0   ]

deb:~# vgdisplay -v test
    Using volume group(s) on command line
    Finding volume group "test"
  --- Volume group ---
  VG Name               test
  System ID
  Format                lvm2
  Metadata Areas        6
  Metadata Sequence No  3
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               1
  Max PV                0
  Cur PV                6
  Act PV                6
  VG Size               7.98 GB
  PE Size               4.00 MB
  Total PE              2042
  Alloc PE / Size       1022 / 3.99 GB
  Free  PE / Size       1020 / 3.98 GB
  VG UUID               hA9nqx-4l1e-F61H-bqWt-dMVr-OPdW-SsA41A

  --- Logical volume ---
  LV Name                /dev/test/lvtest
  VG Name                test
  LV UUID                KpiHKa-SDMI-RwsB-IYvW-3Yup-LumJ-3djpAn
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                3.99 GB
  Current LE             1022
  Segments               2
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           254:1

  --- Physical volumes ---
  PV Name               /dev/sdh
  PV UUID               KN10Li-Dw1x-pPIf-j0iw-D2vH-03Pf-PEdKRO
  PV Status             allocatable
  Total PE / Free PE    767 / 0

  PV Name               /dev/sdi
  PV UUID               6ek7PN-hHfG-uI8F-ybYE-LVRG-0dHx-ZpPxCc
  PV Status             allocatable
  Total PE / Free PE    255 / 0

  PV Name               /dev/sdd
  PV UUID               kRh6JT-yE9W-LZ65-HW40-Jglj-RT0g-grdhey
  PV Status             allocatable
  Total PE / Free PE    255 / 255

  PV Name               /dev/sde
  PV UUID               B2kKxM-qgyP-qwvE-1P4q-MTpT-1CTP-X9Spum
  PV Status             allocatable
  Total PE / Free PE    255 / 255

  PV Name               /dev/sdf
  PV UUID               mSyEYf-d83j-8jJG-Hc5E-ACaB-6fVL-h2zr8R
  PV Status             allocatable
  Total PE / Free PE    255 / 255

  PV Name               /dev/sdg
  PV UUID               H6PXmp-IEFN-HnXw-NZog-HAeJ-Xwol-KEmL08
  PV Status             allocatable
  Total PE / Free PE    255 / 255

Here is the crunch: we are moving the data from sdh to sd{d,e,f,g}.

You might have noticed that the free PE (Physical Extents) count on sdh was at 0. After the following operation, all of its extents will be free again, which indicates that all the data has left the disk.
Code: move the data off sdh with pvmove

deb:~# pvmove -v /dev/sdh /dev/sdd /dev/sde /dev/sdf /dev/sdg
    Finding volume group "test"
    Archiving volume group "test" metadata (seqno 3).
    Creating logical volume pvmove0
    Moving 767 extents of logical volume test/lvtest
    Found volume group "test"
    Updating volume group metadata
    Creating volume group backup "/etc/lvm/backup/test" (seqno 4).
    Found volume group "test"
    Found volume group "test"
    Suspending test-lvtest (254:1) with device flush
    Found volume group "test"
    Creating test-pvmove0
    Loading test-pvmove0 table
    Resuming test-pvmove0 (254:2)
    Found volume group "test"
    Loading test-pvmove0 table
    Suppressed test-pvmove0 identical table reload.
    Loading test-lvtest table
    Resuming test-lvtest (254:1)
    Checking progress every 15 seconds
  /dev/sdh: Moved: 4.7%
  /dev/sdh: Moved: 9.6%
  /dev/sdh: Moved: 14.5%
  /dev/sdh: Moved: 19.2%
  /dev/sdh: Moved: 24.4%
  /dev/sdh: Moved: 29.1%
  /dev/sdh: Moved: 33.2%
    Updating volume group metadata
    Creating volume group backup "/etc/lvm/backup/test" (seqno 5).
    Found volume group "test"
    Found volume group "test"
    Suspending test-lvtest (254:1) with device flush
    Suspending test-pvmove0 (254:2) with device flush
    Found volume group "test"
    Found volume group "test"
    Found volume group "test"
    Loading test-pvmove0 table
    Resuming test-pvmove0 (254:2)
    Found volume group "test"
    Loading test-pvmove0 table
    Suppressed test-pvmove0 identical table reload.
    Loading test-lvtest table
    Suppressed test-lvtest identical table reload.
    Resuming test-lvtest (254:1)
  /dev/sdh: Moved: 39.0%
  /dev/sdh: Moved: 43.9%
  /dev/sdh: Moved: 48.4%
  /dev/sdh: Moved: 52.9%
  /dev/sdh: Moved: 58.0%
  /dev/sdh: Moved: 63.4%
  /dev/sdh: Moved: 66.5%
    Updating volume group metadata
    Creating volume group backup "/etc/lvm/backup/test" (seqno 6).
    Found volume group "test"
    Found volume group "test"
    Suspending test-lvtest (254:1) with device flush
    Suspending test-pvmove0 (254:2) with device flush
    Found volume group "test"
    Found volume group "test"
    Found volume group "test"
    Loading test-pvmove0 table
    Resuming test-pvmove0 (254:2)
    Found volume group "test"
    Loading test-pvmove0 table
    Suppressed test-pvmove0 identical table reload.
    Loading test-lvtest table
    Suppressed test-lvtest identical table reload.
    Resuming test-lvtest (254:1)
  /dev/sdh: Moved: 72.1%
  /dev/sdh: Moved: 77.6%
  /dev/sdh: Moved: 82.1%
  /dev/sdh: Moved: 87.0%
  /dev/sdh: Moved: 93.2%
  /dev/sdh: Moved: 98.8%
  /dev/sdh: Moved: 99.7%
    Updating volume group metadata
    Creating volume group backup "/etc/lvm/backup/test" (seqno 7).
    Found volume group "test"
    Found volume group "test"
    Suspending test-lvtest (254:1) with device flush
    Suspending test-pvmove0 (254:2) with device flush
    Found volume group "test"
    Found volume group "test"
    Found volume group "test"
    Loading test-pvmove0 table
    Resuming test-pvmove0 (254:2)
    Found volume group "test"
    Loading test-pvmove0 table
    Suppressed test-pvmove0 identical table reload.
    Loading test-lvtest table
    Suppressed test-lvtest identical table reload.
    Resuming test-lvtest (254:1)
  /dev/sdh: Moved: 100.0%
    Found volume group "test"
    Found volume group "test"
    Loading test-lvtest table
    Suspending test-lvtest (254:1) with device flush
    Suspending test-pvmove0 (254:2) with device flush
    Found volume group "test"
    Found volume group "test"
    Found volume group "test"
    Resuming test-pvmove0 (254:2)
    Found volume group "test"
    Resuming test-lvtest (254:1)
    Found volume group "test"
    Removing test-pvmove0 (254:2)
    Found volume group "test"
    Removing temporary pvmove LV
    Writing out final volume group after pvmove
    Creating volume group backup "/etc/lvm/backup/test" (seqno 9).

Now check the disk is empty!

Code: Check the disk is empty
deb:~# pvscan
  PV /dev/sdh    VG test   lvm2 [3.00 GB / 3.00 GB free]
  PV /dev/sdi    VG test   lvm2 [1020.00 MB / 0    free]
  PV /dev/sdd    VG test   lvm2 [1020.00 MB / 0    free]
  PV /dev/sde    VG test   lvm2 [1020.00 MB / 0    free]
  PV /dev/sdf    VG test   lvm2 [1020.00 MB / 0    free]
  PV /dev/sdg    VG test   lvm2 [1020.00 MB / 1012.00 MB free]
  PV /dev/sdc1   VG data   lvm2 [320.00 MB / 0    free]
  PV /dev/sdc3   VG data   lvm2 [376.00 MB / 56.00 MB free]
  Total: 8 [8.66 GB] / in use: 8 [8.66 GB] / in no VG: 0 [0   ]

deb:~# vgdisplay -v test
    Using volume group(s) on command line
    Finding volume group "test"
  --- Volume group ---
  VG Name               test
  System ID
  Format                lvm2
  Metadata Areas        6
  Metadata Sequence No  9
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               1
  Max PV                0
  Cur PV                6
  Act PV                6
  VG Size               7.98 GB
  PE Size               4.00 MB
  Total PE              2042
  Alloc PE / Size       1022 / 3.99 GB
  Free  PE / Size       1020 / 3.98 GB
  VG UUID               hA9nqx-4l1e-F61H-bqWt-dMVr-OPdW-SsA41A

  --- Logical volume ---
  LV Name                /dev/test/lvtest
  VG Name                test
  LV UUID                KpiHKa-SDMI-RwsB-IYvW-3Yup-LumJ-3djpAn
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                3.99 GB
  Current LE             1022
  Segments               5
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           254:1

  --- Physical volumes ---
  PV Name               /dev/sdh
  PV UUID               KN10Li-Dw1x-pPIf-j0iw-D2vH-03Pf-PEdKRO
  PV Status             allocatable
  Total PE / Free PE    767 / 767

  PV Name               /dev/sdi
  PV UUID               6ek7PN-hHfG-uI8F-ybYE-LVRG-0dHx-ZpPxCc
  PV Status             allocatable
  Total PE / Free PE    255 / 0

  PV Name               /dev/sdd
  PV UUID               kRh6JT-yE9W-LZ65-HW40-Jglj-RT0g-grdhey
  PV Status             allocatable
  Total PE / Free PE    255 / 0

  PV Name               /dev/sde
  PV UUID               B2kKxM-qgyP-qwvE-1P4q-MTpT-1CTP-X9Spum
  PV Status             allocatable
  Total PE / Free PE    255 / 0

  PV Name               /dev/sdf
  PV UUID               mSyEYf-d83j-8jJG-Hc5E-ACaB-6fVL-h2zr8R
  PV Status             allocatable
  Total PE / Free PE    255 / 0

  PV Name               /dev/sdg
  PV UUID               H6PXmp-IEFN-HnXw-NZog-HAeJ-Xwol-KEmL08
  PV Status             allocatable
  Total PE / Free PE    255 / 253

... then remove it from the VG
Code: Remove the disk from the VG

deb:~# vgreduce test /dev/sdh
  Removed "/dev/sdh" from volume group "test"

deb:~# vgdisplay -v test
    Using volume group(s) on command line
    Finding volume group "test"
  --- Volume group ---
  VG Name               test
  System ID
  Format                lvm2
  Metadata Areas        5
  Metadata Sequence No  10
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               1
  Max PV                0
  Cur PV                5
  Act PV                5
  VG Size               4.98 GB
  PE Size               4.00 MB
  Total PE              1275
  Alloc PE / Size       1022 / 3.99 GB
  Free  PE / Size       253 / 1012.00 MB
  VG UUID               hA9nqx-4l1e-F61H-bqWt-dMVr-OPdW-SsA41A

  --- Logical volume ---
  LV Name                /dev/test/lvtest
  VG Name                test
  LV UUID                KpiHKa-SDMI-RwsB-IYvW-3Yup-LumJ-3djpAn
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                3.99 GB
  Current LE             1022
  Segments               5
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           254:1

  --- Physical volumes ---
  PV Name               /dev/sdi
  PV UUID               6ek7PN-hHfG-uI8F-ybYE-LVRG-0dHx-ZpPxCc
  PV Status             allocatable
  Total PE / Free PE    255 / 0

  PV Name               /dev/sdd
  PV UUID               kRh6JT-yE9W-LZ65-HW40-Jglj-RT0g-grdhey
  PV Status             allocatable
  Total PE / Free PE    255 / 0

  PV Name               /dev/sde
  PV UUID               B2kKxM-qgyP-qwvE-1P4q-MTpT-1CTP-X9Spum
  PV Status             allocatable
  Total PE / Free PE    255 / 0

  PV Name               /dev/sdf
  PV UUID               mSyEYf-d83j-8jJG-Hc5E-ACaB-6fVL-h2zr8R
  PV Status             allocatable
  Total PE / Free PE    255 / 0

  PV Name               /dev/sdg
  PV UUID               H6PXmp-IEFN-HnXw-NZog-HAeJ-Xwol-KEmL08
  PV Status             allocatable
  Total PE / Free PE    255 / 253

deb:~# pvscan
  PV /dev/sdi    VG test            lvm2 [1020.00 MB / 0    free]
  PV /dev/sdd    VG test            lvm2 [1020.00 MB / 0    free]
  PV /dev/sde    VG test            lvm2 [1020.00 MB / 0    free]
  PV /dev/sdf    VG test            lvm2 [1020.00 MB / 0    free]
  PV /dev/sdg    VG test            lvm2 [1020.00 MB / 1012.00 MB free]
  PV /dev/sdc1   VG data            lvm2 [320.00 MB / 0    free]
  PV /dev/sdc3   VG data            lvm2 [376.00 MB / 56.00 MB free]
  PV /dev/sdh                       lvm2 [3.00 GB]
  Total: 8 [8.66 GB] / in use: 7 [5.66 GB] / in no VG: 1 [3.00 GB]

... then remove the PV label from the disk
Code: Remove the PV label from the disk

deb:~# pvremove /dev/sdh
  Labels on physical volume "/dev/sdh" successfully wiped

and last but not least, check that the data is still available!
Code: Check data integrity

deb:~# ls -l /test
total 32
drwxr-xr-x 11 root root  4096 2009-03-03 22:32 dir1
drwxr-xr-x 11 root root  4096 2009-03-03 22:32 dir2
drwxr-xr-x 11 root root  4096 2009-03-03 22:32 dir3
drwxr-xr-x 11 root root  4096 2009-03-03 22:32 dir4
drwx------  2 root root 16384 2009-07-06 21:46 lost+found

Done!
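Applied to your single-disk layout, the tail end of the above would be roughly this. Placeholder names only: /dev/sda6 stands for whichever partition you want back and VolGroup00 for your VG; check the real names with pvs and vgs, and run this only after the LV has been shrunk so the remaining PVs have room for the extents.

Code: reclaim one partition (sketch, placeholder names)

pvmove /dev/sda6                 # migrate its extents onto the remaining PVs
vgreduce VolGroup00 /dev/sda6    # drop the partition from the volume group
pvremove /dev/sda6               # wipe the LVM label so it can be repartitioned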
 
1 member found this post helpful.
Old 12-15-2009, 04:58 PM   #3
syg00
LQ Veteran
 
Registered: Aug 2003
Location: Australia
Distribution: Lots ...
Posts: 12,138

Rep: Reputation: 987
I don't use LVM myself, for a number of reasons, but to answer your other question: try a Knoppix live CD. Last I looked it had LVM built in, and all you had to do was run pvscan/vgscan and maybe "vgchange -ay".
I thought the Fedora discs could be booted in rescue mode, which should avoid the need for the above.
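From a live CD that amounts to something like this (a sketch; output and device names will vary):

Code: activate LVM from a live CD (sketch)

pvscan          # detect LVM physical volumes
vgscan          # detect volume groups
vgchange -ay    # activate every LV; they appear under /dev/mapper
lvs             # list the logical volumes now visible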
 
1 member found this post helpful.
Old 12-16-2009, 04:08 AM   #4
PeggySue
LQ Newbie
 
Registered: Aug 2007
Posts: 6

Original Poster
Rep: Reputation: 0
Thanks for the inputs. The links look interesting.

I think I have some reading to do but the answers are clearly there.

I found lvm2 on the Ubuntu Live CD. The problem was I didn't know which application I was looking for.

This gives me all I need to sort my problem. Thanks.
 
  

