New to the forum and really hoping someone can help. My Linux knowledge is average and I'm no expert.
I have an encrypted RAID 6 array with 6 drives, and I have added a 7th drive using the grow command in Webmin.
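For reference, I believe the grow operation Webmin performed is roughly equivalent to the following (the new drive shows up as /dev/sdh on my system, though I'm not 100% sure of the exact arguments Webmin used):
Code:
# Add the new drive to the array, then grow the array to 7 devices
mdadm --add /dev/md0 /dev/sdh
mdadm --grow /dev/md0 --raid-devices=7
# Watch the reshape progress
cat /proc/mdstat
The reshape finished without errors as far as I can tell.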
I have been roughly following this guide:
http://jotschi.de/2014/02/09/lvm-mda...debian-wheezy/
I was able to resize the crypt layer using the cryptsetup command.
Code:
# Resize the crypt layer and check the result
cryptsetup resize cryptotest
fdisk -l /dev/mapper/cryptotest
# Resize physical volume and check the result
pvresize /dev/mapper/cryptotest
pvdisplay
# Resize logical volume and check result
lvextend -l +100%FREE /dev/mapper/testvg-testlv
lvdisplay
# Resize filesystem
e2fsck -f /dev/mapper/testvg-testlv
resize2fs /dev/mapper/testvg-testlv
mount /dev/mapper/testvg-testlv test
The problem is that when I run pvresize, I get the following error:
Code:
root@peter-X10SL7-F:/media/Raid# pvresize /dev/mapper/Raid
Failed to find physical volume "/dev/mapper/Raid".
0 physical volume(s) resized / 0 physical volume(s) not resized
Indeed, pvdisplay and lvdisplay come up blank when I run them. pvs does show the volume, but only when I run "pvs -a":
Code:
root@peter-X10SL7-F:/media/Raid# pvs -a
  PV                     VG   Fmt  Attr PSize PFree
  /dev/mapper/Raid                 ---      0     0
  /dev/mapper/cryptswap1           ---      0     0
  /dev/md0                         ---      0     0
  /dev/sda1                        ---      0     0
  /dev/sda5                        ---      0     0
  /dev/sdb1                        ---      0     0
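I wasn't sure whether LVM is failing to find a PV label on the crypt device, or whether the device is being filtered out, so I was planning to check with something like this (not sure if these are the right checks):
Code:
# What does the kernel think is on the crypt device?
blkid /dev/mapper/Raid
# Is there any LVM metadata on it at all?
pvck /dev/mapper/Raid
# Is a filter in lvm.conf excluding device-mapper devices?
grep -i filter /etc/lvm/lvm.conf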
I have also included the output from mdadm:
Code:
root@peter-X10SL7-F:/media/Raid# mdadm --detail /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Thu May 14 16:11:40 2015
        Raid Level : raid6
        Array Size : 19534425600 (18629.48 GiB 20003.25 GB)
     Used Dev Size : 3906885120 (3725.90 GiB 4000.65 GB)
      Raid Devices : 7
     Total Devices : 7
       Persistence : Superblock is persistent

       Update Time : Sun Oct 6 16:11:21 2019
             State : clean
    Active Devices : 7
   Working Devices : 7
    Failed Devices : 0
     Spare Devices : 0

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

              Name : mint:0
              UUID : 12c9466f:10054afa:7a70f710:a476c3cf
            Events : 13332

    Number   Major   Minor   RaidDevice State
       0       8       81        0      active sync   /dev/sdf1
       6       8       33        1      active sync   /dev/sdc1
       2       8       49        2      active sync   /dev/sdd1
       3       8       97        3      active sync   /dev/sdg1
       4       8       65        4      active sync   /dev/sde1
       5       8      129        5      active sync   /dev/sdi1
       7       8      112        6      active sync   /dev/sdh
Code:
root@peter-X10SL7-F:/media/Raid# fdisk -l /dev/mapper/Raid
Disk /dev/mapper/Raid: 18.2 TiB, 20003249717248 bytes, 39068847104 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 524288 bytes / 2621440 bytes
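One thought: if it turns out there is no LVM layer at all and the ext4 filesystem sits directly on the crypt device, would the right approach be to skip pvresize/lvextend entirely and just grow the filesystem? Something like this (with the filesystem unmounted first):
Code:
umount /media/Raid
e2fsck -f /dev/mapper/Raid
resize2fs /dev/mapper/Raid
mount /dev/mapper/Raid /media/Raid
I haven't tried this yet because I don't want to make things worse.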
Any help on what to do next would be appreciated.