There are a few things about your setup that don't make sense. First, there's the description of /dev/sdb6:
Quote:
Originally Posted by narnie
|
According to the picture, there's an ext3 file system on /dev/sdb6. There shouldn't be.
Either
/dev/sdb6 is a component in a software RAID, in which case it contains an
md volume, or it's a formatted partition containing a file system, which means it can't be part of a RAID array.
The partition table type (which is nothing more than a label) says "RAID", and
/proc/mdstat confirms that the partition is indeed a RAID component. In other words, GParted is confused about the contents of this partition, which is bad news.
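You can double-check what's actually on the partition from the command line; blkid should report a linux_raid_member, not an ext3 file system. A sketch (the run() wrapper below only prints each command as a dry run; drop it to execute for real):
Code:
```shell
# Dry-run wrapper: prints each command instead of executing it.
run() { echo "$@"; }

# Should report TYPE="linux_raid_member", not ext3:
run blkid /dev/sdb6

# Shows the md superblock on the partition (metadata version, array UUID):
run mdadm --examine /dev/sdb6
```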
Quote:
Originally Posted by narnie
As you can see, I would like to grow sdb6 into the unallocated area.
|
Since the free space appears
before the partition, the entire partition will have to be moved then expanded.
Quote:
Originally Posted by narnie
This is my /proc/mdstat file
Code:
Personalities : [raid1]
md0 : active raid1 sdb6[1]
717040448 blocks super 1.2 [2/1] [_U]
unused devices: <none>
|
Metadata version 1.2 means the RAID metadata is at the beginning of the partition. Do not under any circumstances attempt to fsck the partition directly, as it will at best accomplish nothing, and could potentially destroy the entire RAID by altering the metadata.
The RAID is "active", which means you must stop it before you can move the partition. I assume the output above was the contents of /proc/mdstat before you ran
mdadm -S /dev/md0?
Quote:
Originally Posted by narnie
And this is the saved error from gparted when I try to resize that partition.
Code:
GParted 0.12.1 --enable-libparted-dmraid
Libparted 2.3
Move /dev/sdb6 to the left and grow it from 683.95 GiB to 905.50 GiB 00:00:00 ( ERROR )
|
The operation should work, but if the RAID is active, any attempt to manipulate the partition could have disastrous consequences.
Quote:
Originally Posted by narnie
Code:
calibrate /dev/sdb6 00:00:00 ( SUCCESS )
path: /dev/sdb6
start: 519,178,240
end: 1,953,523,711
size: 1,434,345,472 (683.95 GiB)
check file system on /dev/sdb6 for errors and (if possible) fix them 00:00:00 ( ERROR )
e2fsck -f -y -v /dev/sdb6
|
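For what it's worth, GParted's numbers are at least internally consistent, assuming the usual 512-byte logical sectors:
Code:
```shell
start=519178240
end=1953523711
size=$((end - start + 1))       # inclusive sector range
echo "$size sectors"            # 1434345472, as GParted reports
echo "$((size / 2048 / 1024)) GiB (approx)"   # sectors * 512 bytes -> GiB
```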
Ouch.
Quote:
Originally Posted by narnie
Code:
ext2fs_check_desc: Corrupt group descriptor: bad block for block bitmap
|
Ouch!
Quote:
Originally Posted by narnie
Code:
e2fsck: Group descriptors look bad... trying backup blocks...
e2fsck: Bad magic number in super-block while using the backup blocks
e2fsck: going back to original superblock
|
OUCH! Let's just hope
fsck decided to give up when it couldn't locate a valid superblock.
The procedure as outlined in the article/blog post is correct:
1. remove one device from the RAID
2. expand that device
3. add it back
4. repeat steps 1-3 for all devices
5. run mdadm --grow to make the RAID device take advantage of the newly available space
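In mdadm terms the procedure looks roughly like this. It's only a sketch (the run() wrapper prints the commands as a dry run); adapt the device names to your setup and let each resync finish before moving on:
Code:
```shell
run() { echo "$@"; }   # dry run: print instead of execute

# Steps 1-2: remove one component, then repartition/expand it
# (the parted/fdisk part is not shown):
run mdadm /dev/md0 --fail /dev/sdb6
run mdadm /dev/md0 --remove /dev/sdb6

# Step 3: add the enlarged partition back and wait for the resync:
run mdadm /dev/md0 --add /dev/sdb6
run cat /proc/mdstat

# Step 4: repeat for the other component (in a non-degraded array).

# Step 5: grow the md device into the new space, then the file system on it:
run mdadm --grow /dev/md0 --size=max
run resize2fs /dev/md0
```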
The main problem here seems to be that you attempted to resize a device while it was still part of the RAID.
/proc/mdstat clearly shows
/dev/sdb6 as an active component. The
mdadm --fail command you tried to run would have removed
/dev/sda1, but that partition isn't even part of the array. What were you trying to accomplish?
If you have a degraded RAID 1 array, you should be able to resize (or just recreate) the device that's missing. You can then add it back to the array and once the resync has completed, you can safely remove the other device (
/dev/sdb6) and resize the partition (or just delete and recreate it). Finally, add /dev/sdb6 back to the array, resync and run
mdadm --grow.
The fact that GParted thinks /dev/sdb6 contains a file system and tries to modify it is a major problem if you want the contents of the partition to survive the procedure. My suggestion is to forget about GParted and just recreate one device at a time.