I have some questions about data management. I've stumbled onto what feels like a complex data storage need. The OS environment is RHEL 7.6, and new storage is inbound to refresh the current storage.
The current layout is below. The LVs do not have striping enabled; they use the default 'linear' allocation.
Current Layout
Code:
sdb 8:16 0 4T 0 disk
├─mountvg-mntmntlv 253:13 0 25G 0 lvm /mount
├─mountvg-mnt01lv 253:14 0 38T 0 lvm /mount/mnt01
└─mountvg-mntlv 253:15 0 60G 0 lvm /mount/mnt
sdc 8:32 0 4T 0 disk
└─mountvg-mnt01lv 253:14 0 38T 0 lvm /mount/mnt01
sdd 8:48 0 4T 0 disk
└─mountvg-mnt01lv 253:14 0 38T 0 lvm /mount/mnt01
sde 8:64 0 4T 0 disk
└─mountvg-mnt01lv 253:14 0 38T 0 lvm /mount/mnt01
sdf 8:80 0 4T 0 disk
└─mountvg-mnt01lv 253:14 0 38T 0 lvm /mount/mnt01
sdg 8:96 0 4T 0 disk
└─mountvg-mnt01lv 253:14 0 38T 0 lvm /mount/mnt01
sdh 8:112 0 4T 0 disk
└─mountvg-mnt01lv 253:14 0 38T 0 lvm /mount/mnt01
sdi 8:128 0 4T 0 disk
└─mountvg-mnt01lv 253:14 0 38T 0 lvm /mount/mnt01
sdj 8:144 0 4T 0 disk
└─mountvg-mnt01lv 253:14 0 38T 0 lvm /mount/mnt01
sdk 8:160 0 4T 0 disk
└─mountvg-mnt01lv 253:14 0 38T 0 lvm /mount/mnt01
sdl 8:176 0 1.7T 0 disk
├─secondvg-mountflv 253:2 0 250G 0 lvm /mount/files
├─secondvg-mountmisclv 253:5 0 152M 0 lvm /misc
├─secondvg-mounttmplv 253:7 0 45G 0 lvm /mount/files/tmp
├─secondvg-mountaflv 253:9 0 60G 0 lvm /afiles
└─secondvg-mountjlv 253:10 0 4T 0 lvm /mount/mntj
sdm 8:192 0 1.7T 0 disk
└─secondvg-mountjlv 253:10 0 4T 0 lvm /mount/mntj
sdn 8:208 0 1.7T 0 disk
└─secondvg-mountjlv 253:10 0 4T 0 lvm /mount/mntj
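As a read-only sanity check of the 'linear' allocation mentioned above, lvs can report each LV's segment type and backing devices (assuming the stock lvm2 tools on RHEL 7):

```shell
# Read-only query: show allocation type (linear vs. striped), stripe
# count, and backing devices for every LV in both volume groups
lvs -o vg_name,lv_name,segtype,stripes,devices mountvg secondvg
```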
Vendor wants the new storage to look like so:
• 10 LUNs (85 TB total) for the /dev/mountvg/mnt01lv data
• 1 LUN (300 GB total) for the /dev/mountvg/[mntmntlv,mntlv] data
• 3 LUNs (11 TB total) for the /dev/secondvg/mountjlv data
• 1 LUN (500 GB total) for the remainder of the /dev/secondvg/[mountflv,mounttmplv] data
As mentioned, the current 10-disk '/dev/mountvg/mnt01lv' storage uses the default 'linear' allocation for the logical volumes, and the preference is to move to striped on the new storage. That said, I presume I can pvcreate all the new volumes, vgextend the VGs I want to work with, and initiate pvmoves wherever they need to go:
Code:
pvcreate /dev/newvol{1..10}
vgextend mountvg /dev/newvol{1..10}
# pvmove takes a source PV (not an LV path); -n restricts the move to one LV
for pv in /dev/sd{b..k}; do pvmove -n mnt01lv "$pv" /dev/newvol{1..10}; done
pvcreate /dev/newvol11
vgextend mountvg /dev/newvol11
pvmove -n mntmntlv /dev/sdb /dev/newvol11
pvmove -n mntlv /dev/sdb /dev/newvol11
pvcreate /dev/newvol{12..14}
vgextend secondvg /dev/newvol{12..14}
for pv in /dev/sd{l..n}; do pvmove -n mountjlv "$pv" /dev/newvol{12..14}; done
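One caveat on the pvmove route: pvmove only relocates physical extents, so mnt01lv would still be linear afterward. To actually end up striped, a common approach is to build a fresh striped LV on the new PVs and copy into it. A rough sketch, reusing the same hypothetical /dev/newvolN names (the 256 KiB stripe size is just an example value to tune):

```shell
# pvmove preserves the existing (linear) layout, so instead create a
# brand-new striped LV on the 10 new PVs and migrate the data into it.
vgextend mountvg /dev/newvol{1..10}
# -i 10: stripe across all 10 PVs; -I 256: 256 KiB stripe size (example)
lvcreate --type striped -i 10 -I 256 -L 38T -n mnt01lv_new \
    mountvg /dev/newvol{1..10}
mkfs.xfs /dev/mountvg/mnt01lv_new
# copy the data across, then rename/remount after verification
```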
Questions/Concerns
I'm having trouble finding info on whether pvmove will let you move a source logical volume's data to an array of destination PVs in one go, distributing the data across them. I can't tell whether the syntax above will actually produce the expected outcome. I have done this many times with 1:1 PVs and it works great. :-)
This seems like a complex mess, like trying to kill a fly with a sledgehammer. Haha! Any insight/help is appreciated. It's entirely possible there is another way to go about this and the method above is just overly complicated or not ideal. I'm drawing a blank on some of this, as it's a much bigger, tangled beast!
Wondering if, given the requirements for moving to the new storage, I could instead get everything connected to the new VM and import the current PVs, VGs, and LVs; create new PVs, VGs, and LVs on the new disks, with striping enabled where appropriate; then mount everything and start rsyncs between the locations. Something like:
Code:
# trailing slash on the source copies its contents, dotfiles included (no shopt needed)
rsync -axvH --delete --progress --stats /old/loc/ /new/loc/
Then pull out the old storage and rename all of the new to what it needs to be. Fortunately it's all SSD/flash RDMs, so hopefully the data transfer won't be painfully long.
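That rsync route, sketched end to end, might look like the following. All VG names, mount points, and stripe parameters here are placeholders; the trailing slashes make rsync copy directory contents, dotfiles included:

```shell
# Build the new striped storage under a temporary VG name, copy, then rename.
pvcreate /dev/newvol{1..10}
vgcreate newvg /dev/newvol{1..10}
lvcreate --type striped -i 10 -I 256 -l 100%FREE -n mnt01lv newvg
mkfs.xfs /dev/newvg/mnt01lv
mkdir -p /new/mnt01
mount /dev/newvg/mnt01lv /new/mnt01
rsync -axvH --delete --progress --stats /mount/mnt01/ /new/mnt01/
# After a final rsync with the application stopped, unmount everything in
# the old VG, deactivate it, and swap the VG names:
umount /new/mnt01 /mount/mnt01
vgchange -an mountvg          # all of the old VG's LVs must be unmounted
vgrename mountvg oldmountvg   # retire the old name first to avoid a clash
vgrename newvg mountvg
vgchange -ay mountvg
mount /dev/mountvg/mnt01lv /mount/mnt01
# update /etc/fstab to match before rebooting
```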
Kind of thinking aloud and looking for some of the sage advice a few gurus out there can impart! :-)