Linux - Virtualization and Cloud
This forum is for the discussion of all topics relating to Linux Virtualization and Linux Cloud platforms. Xen, KVM, OpenVZ, VirtualBox, VMware, Linux-VServer and all other Linux Virtualization platforms are welcome. OpenStack, CloudStack, ownCloud, Cloud Foundry, Eucalyptus, Nimbus, OpenNebula and all other Linux Cloud platforms are welcome. Note that questions relating solely to non-Linux OS's should be asked in the General forum.
Does this mean that the physical partition /dev/sda has never been extended ?
Also, I need to extend the space on that physical partition.
I have increased the space in VMware, and now I want to create the physical partition /dev/sda1 in Linux as type 8e (Linux LVM). Is this ok?
Many Thanks!!
/dev/sda is NOT a partition, it is the whole disk (and thus is as large as the disk).
In your case that disk has not been partitioned but has been made into an LVM "Physical Volume" (PV),
in which LVs (logical volumes) have been created. Their device files look like /dev/dm-* (device mapper) and are managed through the LVM tools.
Find a good tutorial about LVM2, especially in combination with your distribution.
Shows the VG's name is vg_oracle and it has one LV named lv_oracle, which is an ext4 filesystem mounted at /u01/app/oracle/or.
Since sda was used as the PV there is no additional space available from sda to add a partition.
However, since your filesystem is in LVM you may be able to extend the LV if the VG has free space. Failing that you'd have to add another disk to the system since lsblk shows sda fully allocated to LVM and fdisk -l shows sdb fully allocated to partitions (some of which are in LVM but a separate VG named vg_newgen). With a new disk you can add it as a PV to vg_oracle.
Run the following commands and provide the output:
vgs = Shows your defined VGs and whether any space is available to add to the LVs
pvs = Shows which devices (disks or partitions) are in use
lvs = Shows your defined LVs with sizes
df -hP = Shows your mounted filesystem directories with sizes and the devices (e.g. LVs) they are mounted from.
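The four commands above can be run in one pass; all are read-only and safe to run as root, e.g.:

```shell
# Read-only survey of the LVM layout and mounted filesystems.
vgs      # volume groups, with free space in the VFree column
pvs      # physical volumes and which VG each belongs to
lvs      # logical volumes with their sizes
df -hP   # mounted filesystems, sizes, and the devices they mount from
```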
You might want to read up on LVM and the manual page is a good place to start. Type:
man lvm
That output confirms you've allocated all of both disks to 2 separate VGs and have allocated 100% of VGs to LVs. There is no space in partitions.
It does show that you allocated your root (/) filesystem as 356GB but it contains only 130GB of files/data and has 222GB free. That filesystem is in the VG vg_newgen, which comprises two separate PVs, /dev/sdb2 and /dev/sdb4.
Also I notice your df output doesn't show anything for /dev/sdb3, but your lsblk earlier shows it formatted as ext4 (not as part of LVM). Do you know what that is used for?
Ideally what you would do at this point is add a new disk to the system (are these internal disks? You can find out by running the lsscsi command).
If you can't add another disk your options are:
1) If /dev/sdb3 is unused you could change its type to LVM (8e), then pvcreate it, then vgextend it into vg_oracle (type "man" on each of the commands for details, e.g. "man pvcreate"). Do NOT do this unless you're sure it isn't used for the images the label implies.
2) You could work on reducing the size of the root filesystem and its LV to use only the space provided by sdb4, then use pvmove to move extents from /dev/sdb2 to /dev/sdb4, vgreduce to remove sdb2 from vg_newgen, then vgextend to add sdb2 to vg_oracle. Since this is the root filesystem you'd likely have to plan on reducing it in single user mode. Not a light undertaking.
3) If there are subdirectories in the /u01/app/oracle/oradata filesystem that aren't actually data (e.g. the Oracle binaries themselves) you could cheat by moving them to a new subdirectory in the root (/) filesystem such as /u01/app/oracle/ora2, then make a symbolic link from the location they had in the above filesystem to the new one:
e.g. If you had /u01/app/oracle/oradata/appl_top you could create /u02/app/oracle/ora2/appl_top then link them.
I suspect however, your binaries and non data files are already in the root filesystem somewhere under /u02/app/oracle.
Any of these would require careful planning. For option 2 you'd have to shut down everything. For option 3 you'd likely have to shut down Oracle.
This is why I say option 1 is the ideal way to go. The only downtime would be to add a physical disk to the system. All the configuration after that can be done online. If it is a SAN disk it doesn't even require the downtime to add the LUN as that can be done online.
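If /dev/sdb3 really is unused, option 1 might look like the sketch below. The partition and VG names come from this thread, but treat it as a sketch: verify the partition is empty first, because these steps are destructive to anything on sdb3.

```shell
# DESTRUCTIVE to /dev/sdb3 -- confirm it is unused before proceeding!
lsblk -f /dev/sdb3             # check for a filesystem label/mountpoint you care about
fdisk /dev/sdb                 # use 't' to set partition 3's type to 8e (Linux LVM), then 'w'
pvcreate /dev/sdb3             # initialize the partition as an LVM physical volume
vgextend vg_oracle /dev/sdb3   # add the new PV to the existing volume group
vgs vg_oracle                  # VFree should now show the added space
```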
Last edited by MensaWater; 01-31-2019 at 03:08 PM.
I suspect the OP has made the virtual disk bigger - /dev/sda is 536.9 GB but the pv is only 200 G; probably the original size of the disk. "pvresize /dev/sda" will expand the pv to the size of the disk. Then the vg and lv can be enlarged.
D'oh! I completely missed that because I saw the VG was the same as the PV. You're right - fdisk shows the disk, /dev/sda, is much larger than the PV.
Since multipathing doesn't appear to be in use the OP can ignore that part.
Quote:
Increasing the size of an LVM Physical Volume (PV) while running multipathd — without rebooting
Posted on 2010-10-17 by Earl C. Ruby III
If you’re using the Linux Logical Volume Manager (LVM) to manage your disk space it’s easy to enlarge a logical volume while a server is up and running. It’s also easy to add new drives to an existing volume group.
But if you’re using a SAN the underlying physical drives can have different performance characteristics because they’re assigned to different QOS bands on the SAN. If you want to keep performance optimized it’s important to know what physical volume a logical volume is assigned to — otherwise you can split a single logical volume across multiple physical volumes and end up degrading system performance. If you run out of space on a physical volume and then enlarge a logical volume you will split the LV across two or more PVs. To prevent this from happening you need to enlarge the LUN, tell multipathd about the change, then enlarge the PV, then enlarge the LV, and finally enlarge the file system.
I have three SANs at the company where I work (two Pillar Axioms and a Xyratex) which are attached to two fibre channel switches and several racks of blade servers. Each blade is running an Oracle database with multiple physical volumes (PVs) grouped together into a single LVM volume group. The PVs are tagged, and as logical volumes (LVs) are added they're assigned to the base physical volume with the same tag name as the logical volume. That way we can assign the PV to a higher or lower performance band on the SAN and optimize the database's performance. Oracle tablespaces that contain frequently-accessed data get assigned to a PV with a higher QOS band on the SAN. Archival data gets put on a PV with a lower QOS band.
We run OpenSUSE 11.x using multipathd to deal with the multiple fiber paths available between each blade and a SAN. Each blade has 2 fiber ports for redundancy, attached to two fiber switches, each of which is cross-connected to 2 ports on 2 different controllers on the SAN, so there are 4 different fiber paths that data can take between the blade and the SAN. If any path fails, or one port on a fiber card fails, or one fiber switch fails, multipathd re-routes the data using the remaining data paths and everything keeps working. If a blade fails we switch to another blade.
If we run out of space on a PV I can log into the SAN’s administrative interface and enlarge the size of the underlying LUN, but getting the operating system on the blade to recognize the fact that more physical disk space was available is tricky. LVM’s pvresize command would claim that it was enlarging the PV, but nothing would happen unless the server was rebooted and then pvresize was run again. I wanted to be able to enlarge physical volumes without taking a database off-line and rebooting its server. Here’s how I did it:
First log into the SAN’s administrative interface and enlarge the LUN in question.
Open two xterm windows on the host as root
Gather information – you will need the physical device name, the multipath block device names, and the multipath map name. (Since our setup gives us 4 data paths for each LUN there are 4 multipath block device names.)
List the physical volumes and their associated tags with pvs -o +tags:
Find the device that corresponds to the LUN you just enlarged, e.g. /dev/dm-11
Run multipath -ll, find the device name in the listing. The large hex number at the start of the line is the multipath map name and the sdX block devices after the device name are the multipath block devices. So in this example the map name is 2000b080112002142 and the block devices are sdy, sdan, sdj, and sdbc:
In the second root window, pull up a multipath command line with multipathd -k
Delete and re-add the first block device from each group. Since multipathd provides multiple paths to the underlying SAN, the device will remain up and on-line during this process. Make sure that you get an ‘ok’ after each command. If you see ‘fail’ or anything else besides ‘ok’, STOP WHAT YOU’RE DOING and go to the next step.
multipathd> del path sdy
ok
multipathd> add path sdy
ok
multipathd> del path sdj
ok
multipathd> add path sdj
ok
If you got a ‘fail’ response:
Type exit to get back to a command line.
Type multipath -r on the command line. This should recover/rebuild all block device paths.
Type multipath -ll | less again and verify that the block devices were re-added.
At this point multipath may actually recognize the new device size (you can see the size in the multipath -ll output). If everything looks good, skip ahead to the pvresize step.
In the first root window run multipath -ll again and verify that the block devices were re-added:
Delete and re-add the remaining two block devices in the second root window:
multipathd> del path sdan
ok
multipathd> add path sdan
ok
multipathd> del path sdbc
ok
multipathd> add path sdbc
ok
In the first root window run multipath -ll again and verify that the block devices were re-added.
Tell multipathd to resize the block device map using the map name:
multipathd> resize map 2000b080112002142
ok
Press Ctrl-D to exit multipathd command line.
In the first root window run multipath -ll again to verify that multipath sees the new physical device size. The device below went from 82G to 142G:
At this point you can enlarge any logical volumes residing on the underlying physical volume without splitting the logical volume across multiple (non-contiguous) physical volumes using lvresize and enlarge the file system using the file system tools, e.g. resize2fs.
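That final enlargement can be sketched as below. The VG/LV names are hypothetical; naming the PV explicitly on the lvresize line is what keeps the LV from being split across physical volumes, as the paragraph above describes.

```shell
# Grow an ext4 LV by 60G, allocating only from the named PV so the LV
# stays on a single physical volume (names are illustrative).
lvresize -L +60G /dev/vg01/lv_data /dev/mapper/mpatha
resize2fs /dev/vg01/lv_data    # grow the ext4 filesystem online to fill the LV
```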
If you ran out of space and your LVs were split across multiple PVs, and you need to coalesce an LV onto a single PV, use pvmove to move its extents to a single device.
I've not actually done that because typically we just add additional LUNs on physical servers, or additional virtual disks from the hypervisor on virtual servers. We then just add the additional LUN or virtual disk as a PV to the appropriate VG.
Last edited by MensaWater; 02-01-2019 at 08:27 AM.
Hm.. I am a little confused now.
What I have done till now is: I have increased the size of /dev/sda from 200GB to 536.9GB from VMware.
Now, I want to create a new primary partition type 8e named /dev/sda1, and then increase VG and then LV.
Is this OK? Or am I missing something?
Many thanks for your support!
No. Your sda is NOT partitioned currently. The entire original sda size of 200 GB was used as PV with no partitioning done. If you try to partition sda you risk blowing away the LVM information already on the disk.
As noted by syg00:
Quote:
/dev/sda is 536.9 GB but the pv is only 200 G; probably the original size of the disk. "pvresize /dev/sda" will expand the pv to the size of the disk. Then the vg and lv can be enlarged.
My post was just a follow-up to his. I was quoting a person who wrote up doing such an expansion. I then noted that I typically don't increase virtual disk sizes but instead add new ones. You've already increased the virtual disk size, so you need to go the pvresize route.
Last edited by MensaWater; 02-01-2019 at 11:42 AM.
Unfortunately that link assumes the disk is partitioned which is not the case for the OP.
To re-iterate to OP:
Do NOT try to partition /dev/sda - you are using the entire disk as a PV, not partitions from it. If you try to partition it you risk destroying existing LVM information. Just do the pvresize and other steps.
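Under the assumptions established in this thread (whole-disk PV /dev/sda grown from 200G to 536.9G in VMware, VG vg_oracle, LV lv_oracle with an ext4 filesystem), the online sequence would look roughly like this sketch; the rescan line is only needed if the kernel still reports the old disk size:

```shell
# Make the kernel notice the enlarged virtual disk (if fdisk still shows 200G).
echo 1 > /sys/class/block/sda/device/rescan
pvresize /dev/sda              # grow the PV to the full size of the disk
vgs vg_oracle                  # VFree should now show the newly added space
# -r resizes the ext4 filesystem in the same step as the LV.
lvextend -r -l +100%FREE /dev/vg_oracle/lv_oracle
df -hP                         # confirm the mounted filesystem grew
```

No partitioning and no reboot are involved at any point, which is exactly why the whole-disk PV layout makes this case simple.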