LinuxQuestions.org


ferricoxide 04-06-2011 09:19 AM

Accommodating Dynamic LUN Growth
 
OK, so we occasionally need to increase the size of a mount point so that a given application doesn't run out of space. Now, I know that I can use LVM to grow by striping or concatenating the underlying LUNs, but this isn't always the best solution. In some cases, we would prefer to grow the LUN on the array and upsize the partition/PV/LV/filesystem on the Linux host. There are a number of commands that can be used to re-read a partition table: blockdev, sfdisk, partprobe, etc.

Unfortunately, of the commands I've found, `blockdev` seems to be the only one that will cause the kernel to notice a change in the *size* of the /dev/sdX device. Without the size change being noticed by Linux, I can't modify partitions/PVs/LVs/filesystems to use the new space. Worse, it seems that I can only run `blockdev` if I've offlined the upper data structures that live on the /dev/sdX device (i.e., unmounted the filesystems and taken any LVs containing the /dev/sdX offline). That still saves me a reboot, but I'm *really* looking for a way to get the kernel to notice the size change *without* having to take the filesystem (etc.) offline to do it.
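
For reference, the invocations in question look something like the following (device name illustrative; I'm assuming blockdev's --rereadpt option here):
Quote:

partprobe /dev/sdc              # ask the kernel to re-read the partition table
sfdisk -R /dev/sdc              # likewise re-reads the partition table
blockdev --rereadpt /dev/sdc    # the only one that, as noted above, also
                                # seems to notice a changed *device* size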

Anyone know a better way to accommodate this kind of online device-size modification?

xaminmo 04-18-2011 11:52 AM

You have to unmount the filesystem(s) and offline the volume group.
Quote:

umount /u01 ; umount /u02 ; umount /u03 ; umount /u04 ; umount /u05
vgchange -an oravg
Once that's done, you can run this to pick up the new size:
Quote:

# echo "1" > /sys/class/scsi_device/$host:$channel:$id:$lun/device/rescan
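
If you don't already know the $host:$channel:$id:$lun tuple for a given /dev/sd* device, lsscsi (where it's installed) will show the mapping - something like this, with the output format only approximate:
Quote:

lsscsi
# [1:0:0:3]  disk  VENDOR  MODEL  -  /dev/sdb   (illustrative output)
echo "1" > /sys/class/scsi_device/1:0:0:3/device/rescan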
If you're using a partition rather than the whole disk, then expand the partition:
Quote:

fdisk /dev/sdb
blah blah
write/quit
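
(To fill in the "blah blah": the usual fdisk sequence is to delete and recreate the partition with the same starting sector - the data survives as long as the start doesn't move. Roughly:)
Quote:

fdisk /dev/sdb
# p   print the table; note the partition's starting sector
# d   delete the partition being grown
# n   recreate it with the SAME starting sector and a larger end
# w   write the new table and quit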
Now, you can mount everything back up:
Quote:

vgchange -ay oravg
mount -a
Now, you can do an online resize.
Quote:

lvextend blah blah
resize2fs /dev/oravg/oralv02
If you're not using ext2/3/4, JFS will let you do a `mount -o remount` to pick up the larger LV or partition underneath.
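
(If memory serves, the JFS remount wants a resize option - with no value it grows the filesystem to fill the underlying device - so, mount point illustrative:)
Quote:

mount -o remount,resize /u01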

ferricoxide 04-18-2011 05:51 PM

Quote:

Originally Posted by xaminmo (Post 4328210)
You have to unmount the filesystem(s) and offline the volume group.
....

Thanks for the reply. Unfortunately, it doesn't address anything I didn't already know (and put in my original post). Basically, I was looking for a way that doesn't require offlining filesystems or LVs (particularly since there are certain LVs/filesystems that you can't offline without a reboot).

xaminmo 04-18-2011 11:44 PM

Possible online method
 
Found other sources that indicate you might be able to do it online with this. I've not actually used these steps, so YMMV.

* Increase LUN size

* Use "multipath -ll" to find the /dev/sd* devices for your LUN

* Rescan each /dev/sd* device with:
echo 1 > /sys/block/sdX/device/rescan

* Possibly replace the paths
multipathd -k "del path sdX"
multipathd -k "add path sdX"
loop over each path, making sure to keep a working path at all times (see the sketch after this list)

* Update device-mapper's internal tables
multipathd -k"resize map <multipath_device>"
<multipath_device> can be "mpath14" or a dm WWID
Older versions of multipath require manual table reloads, along the lines of:
dmsetup table; dmsetup suspend; dmsetup reload; dmsetup resume

* Use "multipath -ll" to make sure the sizes are updated

* pvresize, lvextend, resize2fs
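
Pulled together, the loop might look roughly like this - again untested, with the map name and path devices purely illustrative, and keeping at least one live path at all times:
Quote:

#!/bin/sh
# Untested sketch of the rescan/replace sequence described above.
MAP=mpath14                      # illustrative multipath map name

for DEV in sdb sdc; do           # the sd* paths shown by "multipath -ll $MAP"
    echo 1 > /sys/block/$DEV/device/rescan     # re-read this path's capacity
    multipathd -k"del path $DEV"               # drop the stale path
    multipathd -k"add path $DEV"               # re-add it at the new size
done

multipathd -k"resize map $MAP"   # update device-mapper's table
multipath -ll $MAP               # confirm the new size shows up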

anomie 04-19-2011 05:23 PM

Quote:

Originally Posted by ferricoxide
Now, I know that I can use LVM to grow by striping or concatenating the underlying LUNs, but this isn't always the best solution. In some cases, we would prefer to grow the LUN on the array and upsize the partition/PV/LV/filesystem on the Linux host.

Not a direct answer to your questions, but IMO letting LVM2 (and/or clvm) solve this for you is the best solution. (Unless I'm missing something obvious about your environment.) Carve up your LUNs as needed, and when a filesystem legitimately needs more space, you throw a LUN/PV into the mix and grow the LV. No need to muck about with underlying devices or kernel magic.
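
In other words, a grow becomes just this (device, VG, and LV names purely illustrative):
Quote:

pvcreate /dev/sdd                      # initialize the new LUN as a PV
vgextend datavg /dev/sdd               # add it to the volume group
lvextend -L +20G /dev/datavg/datalv    # grow the LV onto the new PV
resize2fs /dev/datavg/datalv           # grow an ext3/ext4 filesystem online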

ferricoxide 04-19-2011 11:51 PM

Unfortunately, none of the steps you've suggested actually causes the system to issue the necessary ioctls to make the kernel aware of a change in the device geometry (in this case, the size of the underlying LUN). The only thing I've found that issues the requisite ioctls is `blockdev`. And, if there are any active data structures on top of the grown device (LVM objects, filesystems, etc.), that `blockdev` command fails with a "Device Busy" error. So, what I want, if it exists in Linux/RHEL, is a method of invoking the necessary ioctls without having to offline the higher-level data structures.

None of the suggested commands does that; they all rely on something like `blockdev` having already caused the changed device geometry to be discovered.
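
Concretely (device name illustrative, and again assuming blockdev's --rereadpt option), the failure looks something like:
Quote:

# Fails while LVM or a mounted filesystem still holds the device:
blockdev --rereadpt /dev/sdc
# blockdev: ioctl error on BLKRRPART: Device or resource busy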

Quote:

Originally Posted by xaminmo (Post 4328874)
Found other sources that indicate you might be able to do it online with this. I've not actually used these steps, so YMMV.

....


ferricoxide 04-19-2011 11:57 PM

Quote:

Originally Posted by anomie (Post 4329885)
Not a direct answer to your questions, but IMO letting LVM2 (and/or clvm) solve this for you is the best solution.
....

"Best" is debatable. "Best" is always relative to the context(s) you're operating under.

There may be reasons for preferring to grow the underlying storage extent (e.g., you may have array optimizations that work better if you're talking to a contiguous device rather than cobbling extents together via a host-based software volume-management solution like LVM2; if you're in a virtualized environment, your virtual-infrastructure people may want to limit the number of VMDKs associated with a given VM; etc.). In those cases, adding PV(s) to your VG(s) so you can grow your LV(s) won't be your "best" solution.

anomie 04-20-2011 02:54 PM

As for performance, your SAN can of course be optimized around either approach -- i.e. relying on contiguous blocks vs. expecting to divvy out fixed LUNs that may or may not be contiguous.

But, yes, I see your point about interdepartmental policies and politics. The devil is always in the details.

All that said, I don't have a clever solution (short of rebooting) to your original question.

ferricoxide 04-21-2011 05:36 AM

Quote:

Originally Posted by anomie (Post 4330846)
As for performance, your SAN can of course be optimized around either approach -- i.e. relying on contiguous blocks vs. expecting to divvy out fixed LUNs that may or may not be contiguous.

Depends on the "SAN" device. Lower-end devices, in particular, tend to benefit from keeping LUN data contiguous - especially if the host-side volume is created as a stripe of multiple volumes off the same set of spindles on the array (you can see I/O issues much like you would if you created a stripe from three separate areas on a local disk). Though that's somewhat offset by other bottlenecks within your storage network topology. :P

Quote:

Originally Posted by anomie (Post 4330846)
But, yes, I see your point about interdepartmental policies and politics. The devil is always in the details.

Yup. There's "best technical", which only takes into account the hardware involved (which can itself be a matter of interpretation and overall utilization patterns - both at setup and over the life of a solution), and there's "best practical", which takes into account all the other stuff that technical folks hate dealing with (since it's often at odds with "best technical").

Quote:

Originally Posted by anomie (Post 4330846)
All that said, I don't have a clever solution (short of rebooting) to your original question.

No worries. I just wanted to be sure, before I settled on my current "solution", that I'd exhausted all avenues. I was hoping that someone else's reading of the documentation, or practical experience, might turn up something I missed.

