[SOLVED] Extend existing 2tb Raid 1 with 3tb disks to 5tb...
I have Fedora 17 (3.7.9-104.fc17.i686.PAE) on a 160GB IDE drive. I then have 2x2TB SATA disks that are configured as a mirror, providing a 'data' volume.
I just bought 2x3TB disks, installed them in the machine, and have two 'blank' disks evident, but have not taken any further steps.
I want to make a 5TB mirrored volume if possible, without losing the existing data on the 2TB volume.
I would think this is a very common scenario, but I can't find a definitive example.
I have read that fdisk is no good above 2TB, and that gdisk may be required to create GPT partitions.
My worries include:
Do I need to convert the 2TB volume to GPT before I can proceed?
Can mdadm deal with the different-sized disks? How do they get to be matched pairs?
Can a single filesystem be extended over the whole 5TB?
Here is the output from fdisk -l:
Code:
Disk /dev/sda: 2000.4 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000d9e41
Device Boot Start End Blocks Id System
/dev/sda1 2048 3907029167 1953513560 fd Linux raid autodetect
Disk /dev/sdb: 2000.4 GB, 2000398934016 bytes
81 heads, 63 sectors/track, 765633 cylinders, total 3907029168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000d9e41
Device Boot Start End Blocks Id System
/dev/sdb1 2048 3907029167 1953513560 fd Linux raid autodetect
Disk /dev/sdc: 3000.6 GB, 3000592982016 bytes
255 heads, 63 sectors/track, 364801 cylinders, total 5860533168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk /dev/sdd: 3000.6 GB, 3000592982016 bytes
255 heads, 63 sectors/track, 364801 cylinders, total 5860533168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk /dev/sde: 160.0 GB, 160041885696 bytes
255 heads, 63 sectors/track, 19457 cylinders, total 312581808 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000c23e1
Device Boot Start End Blocks Id System
/dev/sde1 * 2048 4196351 2097152 83 Linux
/dev/sde2 4196352 308389887 152096768 83 Linux
/dev/sde3 308389888 312580095 2095104 82 Linux swap / Solaris
Disk /dev/md0: 2000.3 GB, 2000263512064 bytes
2 heads, 4 sectors/track, 488345584 cylinders, total 3906764672 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Last edited by jnoake; 03-09-2013 at 07:34 PM.
Reason: Added fdisk output
Quote:
Do I need to convert the 2TB volume to GPT before I can proceed?
No, there shouldn't be a problem at all with mixing MBR and GPT partitions.
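The new 3TB disks, however, do need GPT, since an MBR partition table tops out at 2TiB. A minimal sketch with parted (gdisk works equally well; /dev/sdc and /dev/sdd are the new disks per the fdisk output above, so double-check device names before running, as this is destructive):

```shell
# Label each new 3TB disk GPT and create one full-disk partition.
# "-a optimal" aligns the partition for the 4K physical sectors these
# drives report; "1MiB 100%" leaves standard alignment gaps.
parted -s -a optimal /dev/sdc mklabel gpt mkpart primary 1MiB 100%
parted -s -a optimal /dev/sdd mklabel gpt mkpart primary 1MiB 100%
```

mdadm can also use whole, unpartitioned disks, but a partition table makes the disks' purpose visible to other tools.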
Quote:
Can mdadm deal with the different-sized disks? How do they get to be matched pairs?
The problem here isn't mdadm; it is how RAID1 works in general. The size of a RAID1 array is determined by its smallest disk, in your case 2TB. Even if you add a 3TB disk, only 2TB of it will be used. Also, adding disks to a RAID1 does not make it larger; you just end up with the original array mirrored across 4 disks, still 2TB in size. If you want an extensible array you would have to use RAID5 or RAID6; those grow in size when you add disks.
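One way to get the full 5TB while keeping both pairs mirrored is to mirror the two 3TB disks as a second RAID1 and then stripe the two mirrors together (RAID1+0). A hedged sketch, assuming the existing 2TB mirror is /dev/md0 and the new disks are /dev/sdc and /dev/sdd as in the fdisk output above:

```shell
# Second mirror from the two new 3TB disks (use /dev/sdc1 and /dev/sdd1
# instead if you created GPT partitions on them first).
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdc /dev/sdd

# Stripe the 2TB mirror and the 3TB mirror into one ~5TB RAID0 device.
# WARNING: this destroys the contents of md0, so the existing data must
# be copied elsewhere first and restored onto md2 afterwards.
mdadm --create /dev/md2 --level=0 --raid-devices=2 /dev/md0 /dev/md1
```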
Quote:
Can a single filesystem be extended over the whole 5TB?
Depends on the filesystem you use, but with modern filesystems 5TB is not a problem at all. For example, ext4 can handle file sizes of up to 16TiB and filesystems up to 1EiB (one exbibyte).
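If the underlying md device grows later (for example after both members of a mirror are replaced with larger disks), ext4 can be extended in place. A sketch, assuming the filesystem sits directly on /dev/md0:

```shell
# Let the array use all available space on its (now larger) members,
# then grow the ext4 filesystem to fill the device. resize2fs can grow
# a mounted ext4 filesystem online.
mdadm --grow /dev/md0 --size=max
resize2fs /dev/md0
```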
Gotta love Linux software RAID.
Ended up with a 5TB RAID0 volume made from a 2TB RAID1 and a 3TB RAID1.
Code:
Personalities : [raid1] [raid0]
md2 : active raid0 md1[1] md0[0]
4883517952 blocks super 1.2 512k chunks
md1 : active raid1 sdd[2] sdc[1]
2930135360 blocks super 1.2 [2/1] [_U]
[=========>...........] recovery = 49.9% (1463028736/2930135360) finish=1221.4min speed=20017K/sec
md0 : active raid1 sdb[2] sda[1]
1953383360 blocks super 1.2 [2/1] [_U]
[=>...................] recovery = 6.3% (123103296/1953383360) finish=1557.5min speed=19584K/sec
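As a sanity check on the numbers in /proc/mdstat above: a RAID0's capacity is roughly the sum of its members, minus a small loss because each member is truncated to a whole number of 512KiB chunks before striping:

```shell
# RAID0 capacity check using the member sizes reported above (in KiB).
md0=1953383360   # md0 size from /proc/mdstat
md1=2930135360   # md1 size from /proc/mdstat
sum=$((md0 + md1))
echo "sum of members: ${sum} KiB"            # 4883518720
echo "md2 reports:    4883517952 KiB"
echo "rounding loss:  $((sum - 4883517952)) KiB"   # 768
```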
on the following disks:
Code:
Disk /dev/sda: 2000.4 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/sdb: 2000.4 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000d9e41
Device Boot Start End Blocks Id System
Disk /dev/sdc: 3000.6 GB, 3000592982016 bytes
255 heads, 63 sectors/track, 364801 cylinders, total 5860533168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk /dev/sdd: 3000.6 GB, 3000592982016 bytes
255 heads, 63 sectors/track, 364801 cylinders, total 5860533168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk /dev/sde: 160.0 GB, 160041885696 bytes
255 heads, 63 sectors/track, 19457 cylinders, total 312581808 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000c23e1
Device Boot Start End Blocks Id System
/dev/sde1 * 2048 4196351 2097152 83 Linux
/dev/sde2 4196352 308389887 152096768 83 Linux
/dev/sde3 308389888 312580095 2095104 82 Linux swap / Solaris
Disk /dev/md0: 2000.3 GB, 2000264560640 bytes
2 heads, 4 sectors/track, 488345840 cylinders, total 3906766720 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/md1: 3000.5 GB, 3000458608640 bytes
2 heads, 4 sectors/track, 732533840 cylinders, total 5860270720 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk /dev/md2: 5000.7 GB, 5000722382848 bytes
2 heads, 4 sectors/track, 1220879488 cylinders, total 9767035904 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 524288 bytes / 1048576 bytes
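For nested arrays like these to assemble reliably at boot, it helps to record them in mdadm's configuration and rebuild the initramfs. A sketch using the paths Fedora's mdadm packaging uses (regenerate the entries rather than hand-editing):

```shell
# Append UUID-based definitions of the current arrays, so device-name
# reshuffles between boots don't matter, then rebuild the initramfs so
# the arrays are assembled early in boot.
mdadm --detail --scan >> /etc/mdadm.conf
dracut -f
```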
I had to go through the steps several times, with some strange results in between, but got there eventually.
The final instruction, mounting the separated disk holding the data, required that I create an md3 RAID0 and add the disk to it before I could mount it.
I used:
mke2fs -t ext4 /dev/md2
in all places to create file systems; it was a LOT faster.
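On a striped array, ext4 also benefits from being told the RAID geometry. A hedged variant of the mke2fs call above, derived from the 512KiB chunk size shown in /proc/mdstat and ext4's default 4KiB block size:

```shell
# stride = filesystem blocks per RAID chunk: 512KiB / 4KiB = 128
# stripe-width = stride * number of striped members: 128 * 2 = 256
mke2fs -t ext4 -E stride=128,stripe-width=256 /dev/md2
```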
The copy operation was replaced with:
rsync -pavHxl --progress --inplace --exclude 'lost+found' /mnt/y/shared /mnt/raid