Old 03-09-2013, 07:26 PM   #1
jnoake
LQ Newbie
 
Registered: Jan 2009
Posts: 24

Rep: Reputation: 11
Extend existing 2TB RAID 1 with 3TB disks to 5TB...


There is a lot to read about Linux raid.
The closest post I can find is as follows, but does not fully address my situation.
http://www.linuxquestions.org/questi...es-4175421097/

I have Fedora 17 (3.7.9-104.fc17.i686.PAE) on a 160GB IDE drive. I then have 2x2TB SATA disks configured as a mirror, providing a 'data' volume.

I just bought 2x3TB disks and installed them in the machine; the two 'blank' disks are visible, but I have not taken any further steps.
I want to make a 5TB mirrored volume if possible... without losing the existing data on the 2TB volume.

I would think this is a very common scenario, but I can't find a definitive example.

I have read that fdisk is no good above 2TB, and that gdisk may be required to create GPT partitions.
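
For example, I gather that something like this (sgdisk is part of the gdisk package; type code fd00 is "Linux RAID") would give one of the new disks a GPT label and a single RAID partition spanning the whole disk - I have not actually run it yet:
Code:
sgdisk --zap-all /dev/sdc            # wipe any existing partition tables (destructive!)
sgdisk -n 1:0:0 -t 1:fd00 /dev/sdc   # partition 1 covering the whole disk, type Linux RAID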

My worries include:
Do I need to convert the 2TB volume to GPT before I can proceed?
Can mdadm deal with the different-sized disks - how do they get to be matched pairs?
Can a single filesystem be extended over the whole 5TB?


Here is the output from fdisk -l:

Code:
Disk /dev/sda: 2000.4 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000d9e41

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1            2048  3907029167  1953513560   fd  Linux raid autodetect

Disk /dev/sdb: 2000.4 GB, 2000398934016 bytes
81 heads, 63 sectors/track, 765633 cylinders, total 3907029168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000d9e41

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048  3907029167  1953513560   fd  Linux raid autodetect

Disk /dev/sdc: 3000.6 GB, 3000592982016 bytes
255 heads, 63 sectors/track, 364801 cylinders, total 5860533168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/sdd: 3000.6 GB, 3000592982016 bytes
255 heads, 63 sectors/track, 364801 cylinders, total 5860533168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/sde: 160.0 GB, 160041885696 bytes
255 heads, 63 sectors/track, 19457 cylinders, total 312581808 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000c23e1

   Device Boot      Start         End      Blocks   Id  System
/dev/sde1   *        2048     4196351     2097152   83  Linux
/dev/sde2         4196352   308389887   152096768   83  Linux
/dev/sde3       308389888   312580095     2095104   82  Linux swap / Solaris

Disk /dev/md0: 2000.3 GB, 2000263512064 bytes
2 heads, 4 sectors/track, 488345584 cylinders, total 3906764672 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Last edited by jnoake; 03-09-2013 at 07:34 PM. Reason: Added fdisk output
 
Old 03-09-2013, 08:13 PM   #2
TobiSGD
Moderator
 
Registered: Dec 2009
Location: Germany
Distribution: Whatever fits the task best
Posts: 17,148
Blog Entries: 2

Rep: Reputation: 4886
Quote:
Originally Posted by jnoake View Post
Do I need to convert the 2TB volume to GPT before I can proceed?
No, there shouldn't be a problem at all with mixing MBR and GPT partitions.
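
If you want to double-check which scheme each disk currently uses, parted prints the partition table type (msdos or gpt) for every device it finds:
Code:
parted -l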

Quote:
Can mdadm deal with the different-sized disks - how do they get to be matched pairs?
The problem here isn't mdadm, it is how RAID1 works in general. The size of a RAID1 array is determined by the smallest disk you use, in your case 2TB. Even if you add a 3TB disk, only 2TB of it will be used. Also, adding disks to a RAID1 will not make it larger; you will just have the original array mirrored to 4 disks, and the size will still be 2TB. If you want an array that can grow, you would have to use RAID5 or RAID6; those arrays grow in size when you add disks.
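
Just as a rough sketch (the device names here are placeholders, not taken from your listing), growing a RAID5 array with mdadm after adding a disk is a two-step operation:
Code:
mdadm --add /dev/mdX /dev/sdX1            # add the new disk as a spare
mdadm --grow /dev/mdX --raid-devices=4    # reshape so the spare becomes an active member
The reshape runs in the background, and the filesystem on top still has to be resized separately once it finishes.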

Quote:
Can a single filesystem be extended over the whole 5TB?
That depends on the filesystem you use, but for modern filesystems 5TB is not a problem at all. For example, ext4 can handle file sizes of up to 16TB and volumes of up to 1EB (exabyte).
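
Once the underlying device gets bigger, the filesystem can be grown to match; for ext4 that works while mounted. A minimal example, assuming the data lives on /dev/md0:
Code:
resize2fs /dev/md0    # with no size given, grows the ext4 filesystem to fill the device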
 
1 member found this post helpful.
Old 03-12-2013, 09:28 AM   #3
jnoake
LQ Newbie
 
Registered: Jan 2009
Posts: 24

Original Poster
Rep: Reputation: 11
Thanks TobiSGD for providing that clarity.

It got me thinking of RAID 5... then RAID 6... getting better, and then I stumbled upon this article:
http://www.linuxquestions.org/questi...1/#post2767268

Brilliant! Just what I was looking for.

Gotta love Linux software RAID.
I ended up with a 5TB RAID 0 volume made from a 2TB RAID 1 and a 3TB RAID 1.
Code:
Personalities : [raid1] [raid0] 
md2 : active raid0 md1[1] md0[0]
      4883517952 blocks super 1.2 512k chunks
      
md1 : active raid1 sdd[2] sdc[1]
      2930135360 blocks super 1.2 [2/1] [_U]
      [=========>...........]  recovery = 49.9% (1463028736/2930135360) finish=1221.4min speed=20017K/sec
      
md0 : active raid1 sdb[2] sda[1]
      1953383360 blocks super 1.2 [2/1] [_U]
      [=>...................]  recovery =  6.3% (123103296/1953383360) finish=1557.5min speed=19584K/sec
on the following disks:

Code:
Disk /dev/sda: 2000.4 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/sdb: 2000.4 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000d9e41

   Device Boot      Start         End      Blocks   Id  System

Disk /dev/sdc: 3000.6 GB, 3000592982016 bytes
255 heads, 63 sectors/track, 364801 cylinders, total 5860533168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/sdd: 3000.6 GB, 3000592982016 bytes
255 heads, 63 sectors/track, 364801 cylinders, total 5860533168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/sde: 160.0 GB, 160041885696 bytes
255 heads, 63 sectors/track, 19457 cylinders, total 312581808 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000c23e1

   Device Boot      Start         End      Blocks   Id  System
/dev/sde1   *        2048     4196351     2097152   83  Linux
/dev/sde2         4196352   308389887   152096768   83  Linux
/dev/sde3       308389888   312580095     2095104   82  Linux swap / Solaris

Disk /dev/md0: 2000.3 GB, 2000264560640 bytes
2 heads, 4 sectors/track, 488345840 cylinders, total 3906766720 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/md1: 3000.5 GB, 3000458608640 bytes
2 heads, 4 sectors/track, 732533840 cylinders, total 5860270720 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/md2: 5000.7 GB, 5000722382848 bytes
2 heads, 4 sectors/track, 1220879488 cylinders, total 9767035904 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 524288 bytes / 1048576 bytes
I had to go through the steps several times, with some strange results along the way, but got there eventually.
The final instruction about mounting the separated disk with the data required that I create an md3 array and add the disk before I could mount it.
I used:
mke2fs -t ext4 /dev/md2
in all places to create the filesystems - it was a LOT faster.

The copy operation was replaced with:
rsync -pavHxl --progress --inplace --exclude 'lost+found' /mnt/y/shared /mnt/raid

to ensure permissions were preserved.
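
A dry run of the same rsync afterwards is a cheap way to confirm nothing was missed (if it lists no changes, the copy is complete):
Code:
rsync -pavHxl --dry-run --exclude 'lost+found' /mnt/y/shared /mnt/raid   # --dry-run only reports what would be transferred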

The whole process went something like this:
Code:
umount /mnt/raid                                      # take the existing 2TB array offline
mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1    # break the old mirror; sdb1 keeps a copy of the data
mdadm --stop /dev/md0                                 # stop the arrays so the disks can be reused
mdadm --stop /dev/md1
mdadm --zero-superblock /dev/sda                      # clear old RAID metadata from the disks being rebuilt
mdadm --zero-superblock /dev/sdc
mke2fs -t ext4 /dev/sda
mke2fs -t ext4 /dev/sdc
mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sda   # degraded 2TB mirror
mke2fs -t ext4 /dev/md0
mdadm --create /dev/md1 --level=1 --raid-devices=2 missing /dev/sdc   # degraded 3TB mirror
mke2fs -t ext4 /dev/md1
mdadm --create /dev/md2 --level=0 --raid-devices=2 /dev/md[01]        # stripe the two mirrors into one ~5TB device
mke2fs -t ext4 /dev/md2
mount /dev/md2 /mnt/raid
mdadm --assemble /dev/md3 /dev/sdb1                   # bring the preserved old disk back up as md3
mkdir /mnt/y
mount /dev/md3 /mnt/y
rsync -pavHxl --progress --inplace --exclude 'lost+found' /mnt/y/shared /mnt/raid   # copy the data onto the new array
umount /mnt/y
mdadm --stop /dev/md3
mdadm --zero-superblock /dev/sdb                      # old data disk is no longer needed; clear it
mdadm /dev/md0 --add /dev/sdb                         # complete the 2TB mirror
mdadm /dev/md1 --add /dev/sdd                         # complete the 3TB mirror
sysctl -w dev.raid.speed_limit_min=200000             # raise the minimum resync speed so the rebuilds finish sooner
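While the two rebuilds run, progress can be watched with something like:
Code:
watch -n 30 cat /proc/mdstat   # refresh the RAID status every 30 seconds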
Finally I used
blkid
to find the UUID of /dev/md2, and used it to replace the UUID that originally mounted /dev/md0 on /mnt/raid in /etc/fstab:
Code:
UUID=f3b4fa8f-7985-4ae3-b08b-921274fc7a07  /mnt/raid               ext4    defaults        0 3
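
Depending on the setup, the recreated arrays may also need to be recorded in /etc/mdadm.conf so they keep the same names after a reboot (treat this as a general note rather than a step I verified here); mdadm can generate the entries itself:
Code:
mdadm --detail --scan >> /etc/mdadm.conf   # append ARRAY lines describing md0, md1 and md2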

Last edited by jnoake; 03-12-2013 at 09:42 AM. Reason: Additional info
 
  

