Old 04-09-2024, 08:53 AM   #1
Jason.nix
Member
 
Registered: Feb 2023
Posts: 561

Rep: Reputation: 10
How do I add a new hard disk to LVM?


Hello,
The LVM configuration is as follows:
Code:
# df
Filesystem                  1K-blocks    Used Available Use% Mounted on
udev                          1967448       0   1967448   0% /dev
tmpfs                          398368     556    397812   1% /run
/dev/mapper/Docker--vg-root  17172304 8163408   8111260  51% /
tmpfs                         1991840       0   1991840   0% /dev/shm
tmpfs                            5120       0      5120   0% /run/lock
/dev/xvda1                     465124  108733    331457  25% /boot
/dev/mapper/Docker--vg-home  32596756    7656  30907692   1% /home
tmpfs                          398368       0    398368   0% /run/user/1000
I added a new hard drive (xvdb) to the system:
Code:
# /sbin/fdisk --list
Disk /dev/xvdb: 50 GiB, 53687091200 bytes, 104857600 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Ignoring extra data in partition table 5.


Disk /dev/xvda: 50 GiB, 53687091200 bytes, 104857600 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xa42d4daf

Device     Boot   Start       End   Sectors  Size Id Type
/dev/xvda1         2048    999423    997376  487M 83 Linux
/dev/xvda2      1001470 104855551 103854082 49.5G  5 Extended
/dev/xvda5      1001472 104855551 103854080 49.5G 8e Linux LVM
...
How can I use it as another partition?

Thank you.
 
Old 04-09-2024, 11:43 AM   #2
wpeckham
LQ Guru
 
Registered: Apr 2010
Location: Continental USA
Distribution: Debian, Ubuntu, RedHat, DSL, Puppy, CentOS, Knoppix, Mint-DE, Sparky, VSIDO, tinycore, Q4OS,Manjaro
Posts: 5,640

Rep: Reputation: 2697
You are misunderstanding a few things. df reports file system usage, but nothing directly about LVM.
You need to read the man pages for physical volumes and volume groups first. Once you have the idea, look for a HOW-TO page about adding a new physical device.
In short, and totally from ancient and possibly faulty memory: you prep the device as an LVM device, define it as a physical volume, add that physical volume to the volume group, and then you can grow the existing file systems to use the new space in the volume group.
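In command form, a rough sketch (assuming the new partition ends up as /dev/xvdb1 and the existing VG is Docker-vg, which is what the df output above suggests):
Code:
pvcreate /dev/xvdb1                      # initialize the new partition as an LVM physical volume
vgextend Docker-vg /dev/xvdb1            # add it to the existing volume group
lvextend -r -l +100%FREE Docker-vg/home  # grow an LV (and, via -r, its filesystem) into the new space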
 
Old 04-09-2024, 12:36 PM   #3
sundialsvcs
LQ Guru
 
Registered: Feb 2004
Location: SE Tennessee, USA
Distribution: Gentoo, LFS
Posts: 10,663
Blog Entries: 4

Rep: Reputation: 3944
Here’s another explanation for clarity:

LVM entirely separates the so-called logical picture, “as seen by Linux file-systems,” from the physical one. Each is manipulated separately, and neither knows anything about the other.

“File systems” live in “logical volumes,” which are seamless and expandable up to the free space in the pool. Supporting these are “storage pools” (volume groups, in LVM terms), which might be provisioned by one device (partition), or many.

File systems therefore neither know nor care where any piece of information physically is. (And, LVM can move it, if need be.) The two viewpoints are entirely(!) and cleanly separated.

“It’s quite the amazing [-ly useful] magic trick!”
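For example, LVM can migrate allocated extents from one device to another while the filesystem on top stays mounted and in use (hypothetical device names; both PVs must already belong to the same volume group):
Code:
pvmove /dev/sda2 /dev/sdb1   # move all allocated extents off sda2 onto sdb1, online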

Last edited by sundialsvcs; 04-09-2024 at 12:38 PM.
 
Old 04-09-2024, 12:57 PM   #4
Jason.nix
Member
 
Registered: Feb 2023
Posts: 561

Original Poster
Rep: Reputation: 10
Hello,
I did the following steps:
Code:
# /sbin/fdisk -u -c /dev/xvdb
n        <- create a new partition
p        <- primary
1        <- partition number 1
w        <- write the partition table and exit
# /sbin/pvcreate /dev/xvdb1
# /sbin/lvmdiskscan -l
# /sbin/vgcreate new /dev/xvdb1
# /sbin/lvcreate -n lv_new --size 10G new
# /sbin/mkfs.ext4 /dev/new/lv_new
# mkdir /mnt/new
# nano /etc/fstab
/dev/new/lv_new  /mnt/new ext4 defaults 0 0
# mount -a

Last edited by Jason.nix; 04-09-2024 at 12:59 PM.
 
Old 04-09-2024, 01:16 PM   #5
MadeInGermany
Senior Member
 
Registered: Dec 2011
Location: Simplicity
Posts: 2,798

Rep: Reputation: 1201
Well done.
You have created a separate VG "new", so it won't use any PVs of the other VGs.

List PVs, VGs, LVs:
Code:
pvs
vgs
lvs
Overview:
Code:
lsblk
lsblk -p
lsblk -f
 
Old 04-11-2024, 06:54 AM   #6
Jason.nix
Member
 
Registered: Feb 2023
Posts: 561

Original Poster
Rep: Reputation: 10
Quote:
Originally Posted by MadeInGermany View Post
Well done.
You have created a separate VG "new", so it won't use any PVs of the other VGs. [...]
Hi,
Thank you so much for your reply.
What is the problem with this?
The outputs are as follows:
Code:
# /sbin/pvs
  PV         VG        Fmt  Attr PSize   PFree
  /dev/xvda5 Docker-vg lvm2 a--  <49.52g      0
  /dev/xvdb1 new       lvm2 a--  <50.00g <10.00g
#
# /sbin/vgs
  VG        #PV #LV #SN Attr   VSize   VFree
  Docker-vg   1   3   0 wz--n- <49.52g      0
  new         1   1   0 wz--n- <50.00g <10.00g
#
# /sbin/lvs
  LV     VG        Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  home   Docker-vg -wi-ao---- <31.76g
  root   Docker-vg -wi-ao---- <16.81g
  swap_1 Docker-vg -wi-ao---- 976.00m
  lv_new new       -wi-ao----  40.00g
#
# lsblk
NAME                  MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
sr0                    11:0    1 1024M  0 rom
xvda                  202:0    0   50G  0 disk
├─xvda1               202:1    0  487M  0 part /boot
├─xvda2               202:2    0    1K  0 part
├─xvda5               202:5    0 49.5G  0 part
│ ├─Docker--vg-root   254:0    0 16.8G  0 lvm  /
│ ├─Docker--vg-swap_1 254:1    0  976M  0 lvm  [SWAP]
│ └─Docker--vg-home   254:2    0 31.8G  0 lvm  /home
└─xvda6               202:6    0  512B  0 part
xvdb                  202:16   0   50G  0 disk
└─xvdb1               202:17   0   50G  0 part
  └─new-lv_new        254:3    0   40G  0 lvm  /mnt/new
#
# lsblk -p
NAME                              MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
/dev/sr0                           11:0    1 1024M  0 rom
/dev/xvda                         202:0    0   50G  0 disk
├─/dev/xvda1                      202:1    0  487M  0 part /boot
├─/dev/xvda2                      202:2    0    1K  0 part
├─/dev/xvda5                      202:5    0 49.5G  0 part
│ ├─/dev/mapper/Docker--vg-root   254:0    0 16.8G  0 lvm  /
│ ├─/dev/mapper/Docker--vg-swap_1 254:1    0  976M  0 lvm  [SWAP]
│ └─/dev/mapper/Docker--vg-home   254:2    0 31.8G  0 lvm  /home
└─/dev/xvda6                      202:6    0  512B  0 part
/dev/xvdb                         202:16   0   50G  0 disk
└─/dev/xvdb1                      202:17   0   50G  0 part
  └─/dev/mapper/new-lv_new        254:3    0   40G  0 lvm  /mnt/new
#
# lsblk -f
NAME FSTYPE FSVER LABEL UUID                                   FSAVAIL FSUSE% MOUNTPOINTS
sr0
xvda
├─xvda1
│    ext2   1.0         77aad1e6-cb98-4f77-9e07-34a60c99c785    323.7M    23% /boot
├─xvda2
│
├─xvda5
│    LVM2_m LVM2        HWSyxT-Op70-r9Hm-tS73-OITg-ssiX-b6pzfZ
│ ├─Docker--vg-root
│ │  ext4   1.0         e5478aa2-a344-42a8-bc97-27e8183221c8      7.7G    48% /
│ ├─Docker--vg-swap_1
│ │  swap   1           3e36479d-89b2-4f7f-bf5b-65a437a41618                  [SWAP]
│ └─Docker--vg-home
│    ext4   1.0         f7e6a444-f3f0-47c5-81d6-a603ed3a20c7     29.5G     0% /home
└─xvda6
xvdb
└─xvdb1
     LVM2_m LVM2        8UHz0p-bfp4-Vemn-rgZ9-SUTG-wJmx-asCKFi
  └─new-lv_new
     ext4   1.0         65f7fff2-70c0-4433-b577-ac9034ed068f     31.5G    14% /mnt/new
 
Old 04-11-2024, 07:13 AM   #7
computersavvy
Senior Member
 
Registered: Aug 2016
Posts: 3,345

Rep: Reputation: 1484
Quote:
Originally Posted by Jason.nix View Post
Hi,
Thank you so much for your reply.
What is the problem with this?
The outputs are as follows: [...]
Very simple answer:
Within a single VG it is possible to have multiple PVs and LVs. The LVs can be resized to whatever is needed (each acts like a file-system partition).

With multiple VGs, each VG can only manage the space assigned to it, and an LV cannot span VG boundaries.

The proper way to do what you wanted would have been to create the PV (which you did properly), then add that PV to the existing VG.
Once that was done, the existing LVs could have been expanded to use the additional space, or relocated onto the new PV.

Now you can only create a new LV in the new VG and use it as a discrete file system partition (which is what you have done).
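For example, the roughly 10G still free in VG "new" could be handed to another LV there (hypothetical LV name):
Code:
lvcreate -n lv_new2 -l 100%FREE new   # use all remaining free extents in VG "new"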

If you back up and remove the new LV and the new VG, then add that PV to the original VG, you should be able to get back to what was originally suggested above.
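A rough sketch of that recovery sequence, using the names from your output (lvremove destroys the data on lv_new, so copy anything you need off /mnt/new first):
Code:
umount /mnt/new                          # also delete its line from /etc/fstab
lvremove /dev/new/lv_new                 # remove the LV (its data is lost)
vgremove new                             # remove the now-empty VG; /dev/xvdb1 becomes a free PV
vgextend Docker-vg /dev/xvdb1            # add the freed PV to the original VG
lvextend -r -l +100%FREE Docker-vg/home  # for example, grow home and its filesystem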

Last edited by computersavvy; 04-11-2024 at 07:17 AM.
 
Old 04-13-2024, 08:22 AM   #8
Jason.nix
Member
 
Registered: Feb 2023
Posts: 561

Original Poster
Rep: Reputation: 10
Quote:
Originally Posted by computersavvy View Post
The proper way to do what you wanted would have been to create the PV (which you did properly), then add that PV to the existing VG. [...]
Hello,
Thank you so much for your reply.
Do you mean I can use the previous VG to add the new partition?
 
Old 04-13-2024, 12:44 PM   #9
computersavvy
Senior Member
 
Registered: Aug 2016
Posts: 3,345

Rep: Reputation: 1484
The vgextend command allows adding a new PV to an existing VG.
Once that is done, the various lv* commands can be used to manage existing LVs or to create new ones as needed.

Look at the various commands on the system for LVM management and use the related man pages to see what each command does.
Those commands can be easily identified with
Code:
ls /usr/sbin/lv*
ls /usr/sbin/pv*
ls /usr/sbin/vg*
Example man page excerpt
Code:
VGEXTEND(8)                                          System Manager's Manual                                         VGEXTEND(8)

NAME
       vgextend — Add physical volumes to a volume group

SYNOPSIS
       vgextend position_args
           [ option_args ]

DESCRIPTION
       vgextend adds one or more PVs to a VG. This increases the space available for LVs in the VG.
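For example, a concrete invocation using the VG and PV names from this thread:
Code:
vgextend Docker-vg /dev/xvdb1   # add PV /dev/xvdb1 to VG Docker-vg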
The main commands of interest to you would now be 'lvremove' and 'vgremove' (to remove the new LV and VG), then 'vgextend' (to add the new PV to the old VG).

Last edited by computersavvy; 04-13-2024 at 12:52 PM.
 
1 member found this post helpful.
  

