Linux - Enterprise: This forum is for all items relating to using Linux in the Enterprise.
Ok, so, we occasionally have need to increase the size of a mount point so that a given application doesn't run out of space. Now, I know that I can use LVM to do growth by striping or concatenating the underlying LUNs, but this isn't always the best solution. In some cases, we would prefer to grow the LUN on the array and upsize the partition/PV/LV/filesystem on the Linux host. There are a number of commands that can be used to re-read a partition table: blockdev, sfdisk, partprobe, etc.
Unfortunately, of the commands I've found, `blockdev` seems to be the only one that will cause the kernel to notice a change in the *size* of the /dev/sdX device. Without the size change being noticed by Linux, I can't modify partitions/PVs/LVs/filesystems to use the new space. It also seems that I can only run `blockdev` if I've offlined the upper data structures that live on the /dev/sdX device (i.e., unmounted filesystems and deactivated any LVs containing the /dev/sdX device). That still saves me a reboot, but I'm *really* looking for a way to get the kernel to notice the size change *without* having to take the filesystem (etc.) offline to do it.
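For reference, the offline workaround described above looks roughly like the following sketch. The device, VG, and mount-point names (/dev/sdb, vg_app, /app) are placeholders I've made up, not names from this thread, and DRY_RUN=1 (the default here) only prints the commands rather than running them:

```shell
# Sketch of the offline sequence: deactivate everything on top of the
# grown LUN, re-read its geometry, then bring it all back. Requires
# root when run for real (DRY_RUN cleared).
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "would run: $*"; else "$@"; fi; }

run umount /app                     # take the filesystem offline
run vgchange -an vg_app             # deactivate the VG on the grown LUN
run blockdev --rereadpt /dev/sdb    # kernel re-reads the partition table
run vgchange -ay vg_app             # bring the VG back
run mount /app
```

This avoids a reboot, but the filesystem is still unavailable for the duration, which is exactly the problem being raised.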
Anyone know a better way to accommodate this kind of online device-size modification?
You have to unmount the filesystem(s) and offline the volume group.
Thanks for the reply. Unfortunately, it doesn't address anything I didn't already know (and put in my original post). Basically, I was looking for a way that didn't require offlining filesystems or LVs (particularly since there are certain LVs/filesystems that you can't offline without a reboot).
Quote:
Now, I know that I can use LVM to do growth by striping or concatenating the underlying LUNs, but, this isn't always the best solution. In some cases, we would prefer to grow the LUN on the array and upsize the partition/PV/LVM/filesystem on the Linux host.
Not a direct answer to your questions, but IMO letting LVM2 (and/or clvm) solve this for you is the best solution. (Unless I'm missing something obvious about your environment.) Carve up your LUNs as needed, and when a filesystem legitimately needs more space, you throw a LUN/PV into the mix and grow the LV. No need to muck about with underlying devices or kernel magic.
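The add-a-PV approach amounts to something like the sketch below. The names (/dev/sdc, vg_data, lv_app) and the ext-family filesystem are assumptions for illustration, and DRY_RUN=1 (the default here) just prints each command:

```shell
# Grow an LV by throwing a new LUN into the VG, then grow the
# filesystem online. No rescan of existing devices needed.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "would run: $*"; else "$@"; fi; }

run pvcreate /dev/sdc                     # label the new LUN as a PV
run vgextend vg_data /dev/sdc             # add it to the volume group
run lvextend -l +100%FREE vg_data/lv_app  # grow the LV into the new space
run resize2fs /dev/vg_data/lv_app         # grow an ext3/ext4 fs online
```

With a recent LVM, `lvextend -r` can fold the filesystem resize into the lvextend step.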
Unfortunately, none of the below actually causes the system to issue the necessary ioctls to make it aware of a change in the device geometry (in this case, the size of the underlying LUN). The only thing I've found that issues the requisite ioctls is `blockdev`. However, if there are any active data structures on top of the grown device (LVM objects, filesystems, etc.), the requisite `blockdev` command fails with a "Device Busy" error. So, what I want, if it exists in Linux/RHEL, is a method of invoking the necessary ioctls without having to offline the higher-up data structures.
None of the below does that. All of the below rely on something like `blockdev` having already caused the changed device geometry to be discovered.
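As an aside, it's easy to check what size the kernel currently believes a disk has, which is useful for confirming whether a rescan took effect. `/sys/block/<dev>/size` reports 512-byte sectors regardless of the drive's logical block size, while `blockdev --getsize64 /dev/<dev>` reports bytes; a tiny helper converts between the two (sda below is a placeholder device name):

```shell
# Convert a /sys/block/<dev>/size sector count to bytes
# (sysfs always reports 512-byte units).
sectors_to_bytes() {
  echo $(( $1 * 512 ))
}

# Compare before and after a rescan, e.g.:
#   cat /sys/block/sda/size
#   blockdev --getsize64 /dev/sda
sectors_to_bytes 2048    # prints 1048576 (1 MiB)
```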
Quote:
Originally Posted by xaminmo
Found other sources that indicate you might be able to do it online with this. I've not actually used these steps, so YMMV.
* Increase the LUN size on the array.
* Use `multipath -ll` to find the /dev/sd* path devices for your LUN.
* Rescan each /dev/sd* device with: `echo 1 > /sys/block/sdX/device/rescan`
* Possibly replace the paths in a loop, making sure to keep a working path at all times: `multipathd -k"del path sdX"` then `multipathd -k"add path sdX"`
* Update device-mapper's internal tables: `multipathd -k"resize map <multipath_device>"`, where <multipath_device> can be "mpath14" or a dm WWID. Older versions of multipath require manual table reloads, ala: `dmsetup table; dmsetup suspend; dmsetup reload; dmsetup resume`
* Use `multipath -ll` to make sure the sizes are updated.
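The steps quoted above can be sketched as a script. I haven't verified this on real hardware either, so treat it as an outline: "mpath14" and the sda/sdb path names are placeholders to be replaced with what `multipath -ll` shows, and DRY_RUN=1 (the default here) only prints the commands:

```shell
# Online resize of a multipath LUN: rescan each SCSI path, then tell
# multipathd to pick up the new size on the map. Run as root for real.
DRY_RUN=${DRY_RUN:-1}
MPATH_DEV="mpath14"    # your multipath map name (or dm WWID)

run() { if [ "$DRY_RUN" = 1 ]; then echo "would run: $*"; else "$@"; fi; }

# Step 1: after growing the LUN on the array, rescan each path device.
for path in sda sdb; do            # substitute your own path devices
  run sh -c "echo 1 > /sys/block/$path/device/rescan"
done

# Step 2: have multipathd update device-mapper's table with the new size.
run multipathd -k"resize map $MPATH_DEV"

# Step 3: verify the reported sizes.
run multipath -ll
```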
Quote:
Not a direct answer to your questions, but IMO letting LVM2 (and/or clvm) solve this for you is the best solution. (Unless I'm missing something obvious about your environment.) Carve up your LUNs as needed, and when a filesystem legitimately needs more space, you throw a LUN/PV into the mix and grow the LV. No need to muck about with underlying devices or kernel magic.
"Best" is debatable. "Best" is always relative to the context(s) you're operating under.
There may be reasons for preferring to grow the underlying storage extent (e.g., array optimizations may work better if you're talking to a contiguous device rather than cobbling extents together via a host-based software volume management solution like LVM(2); if you're in a virtualized environment, your virtual infrastructure people may want to limit the number of VMDKs associated with a given VM; etc.). In such cases, adding PV(s) to your VG(s) so you can grow your LV(s) won't be your "best" solution.
As for performance, your SAN can of course be optimized around either approach -- i.e. relying on contiguous blocks vs. expecting to divvy out fixed LUNs that may or may not be contiguous.
But, yes, I see your point about interdepartmental policies and politics. The devil is always in the details.
All that said, I don't have a clever solution (short of rebooting) to your original question.
Quote:
As for performance, your SAN can of course be optimized around either approach -- i.e. relying on contiguous blocks vs. expecting to divvy out fixed LUNs that may or may not be contiguous.
Depends on the "SAN" device. Lower-end devices in particular tend to benefit from keeping LUN data contiguous - particularly if the host-side volume is created as a stripe of multiple volumes off the same set of spindles on the array (you can see I/O issues much like you would if you created a stripe from three separate areas on a local disk). Though that's somewhat offset by other bottlenecks within your storage network topology. :P
Quote:
Originally Posted by anomie
But, yes, I see your point about interdepartmental policies and politics. The devil is always in the details.
Yup. There's "best technical", which only takes into account the hardware involved (which can also be a matter of interpretation and overall utilization patterns - both at setup and over the life of a solution), and there's "best practical" which takes into account all the other stuff that technical folks hate dealing with (since it's often at odds with "best technical").
Quote:
Originally Posted by anomie
All that said, I don't have a clever solution (short of rebooting) to your original question.
No worries. I just wanted to be sure, before I settled on my current "solution", that I'd exhausted all avenues. I was hoping someone else's reading of the documentation, or practical experience, might turn up something I missed.