LinuxQuestions.org
Linux - Server: This forum is for the discussion of Linux software used in a server-related context.

Old 07-27-2008, 11:07 PM   #1
CCThomas
LQ Newbie
 
Registered: Jul 2008
Posts: 4

Rep: Reputation: 0
Growing a software RAID5 onto fewer, larger, drives?


I currently have a software RAID5 array made up of 8 500GB drives. The server is Ubuntu 8.04, running kernel 2.6.24. I need more space, but I can't add any more drives, so what I want to do is replace my 8 500GB drives with 5 1TB drives.

What I thought I could do is replace 5 of the drives one by one, letting the array resync onto each new drive before replacing the next. Then I'd use the 'shrink' option of mdadm to reshuffle the contents of the array down onto the 5 drives.
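For concreteness, the per-drive swap step I had in mind, plus the shrink I was hoping existed, would look something like this (the device and array names are just examples, and the last command is the part I can't confirm is actually supported):

```shell
# Swap one member at a time (/dev/md0 and /dev/sdb1 are example names).
mdadm /dev/md0 --fail /dev/sdb1      # retire one old 500GB member
mdadm /dev/md0 --remove /dev/sdb1
# ...physically swap in a 1TB drive, partition it, then:
mdadm /dev/md0 --add /dev/sdb1       # array resyncs onto the new drive
cat /proc/mdstat                     # wait for the rebuild before the next swap
# The step I can't find support for: reshaping down onto 5 devices.
mdadm --grow /dev/md0 --raid-devices=5
```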

However, even though mdadm is capable of growing a RAID5 onto more drives, I'm unable to find the documentation that made me think one could shrink an array onto fewer. So I'm beginning to doubt the feasibility of this whole thing.

Can anyone say whether such a thing might be possible?

Thanks,
-Chris
 
Old 07-27-2008, 11:46 PM   #2
syg00
LQ Veteran
 
Registered: Aug 2003
Location: Australia
Distribution: Lots ...
Posts: 12,448

Rep: Reputation: 1069
Wouldn't you just set a drive to fail and then remove it? If it were me I'd swap out one drive at a time - but be aware I'm not a user of software RAID.

Edit: Go here. Looks like a current and maintained reference.

Last edited by syg00; 07-27-2008 at 11:54 PM.
 
Old 08-15-2008, 01:50 AM   #3
CCThomas
LQ Newbie
 
Registered: Jul 2008
Posts: 4

Original Poster
Rep: Reputation: 0
Quote:
Originally Posted by syg00 View Post
Wouldn't you just set a drive to fail and then remove it?
That doesn't reconfigure the array to run on one fewer drive - it just runs the array in degraded mode, without the extra drive's worth of redundant information. You then have to replace the drive with another of equal or larger size, so that the redundant data can be reconstructed onto it. So by itself, this does nothing to reduce the number of drives the array requires.

It's looking like what I want is not possible - I took a look at the mdadm source, and it pretty clearly doesn't allow shrinking a RAID5, even if the md driver might (which it probably doesn't right now).
 
Old 08-15-2008, 10:15 AM   #4
slackman
Member
 
Registered: Mar 2003
Distribution: Slack 9.0
Posts: 123

Rep: Reputation: 15
If you have spare ports on your SATA or IDE controller, then just set up another RAID with 3 of the 1TB drives, copy the data over, and regrow. If there are no spare ports/controllers, back up and redo. Are you going to reuse the 500s? If the price is right I may be interested. LMK, thanks.
 
Old 08-18-2008, 04:06 AM   #5
CCThomas
LQ Newbie
 
Registered: Jul 2008
Posts: 4

Original Poster
Rep: Reputation: 0
Quote:
Originally Posted by slackman View Post
If you have spare ports on your SATA or IDE controller, then just set up another RAID with 3 of the 1TB drives, copy the data over, and regrow.
Unfortunately, I don't have any spare ports, otherwise I'd have done exactly that. Another option would be to build a second server with the 1TB drives, put it on the network, and copy the data over.

Either of those options would probably have been smarter than what I actually did, which was to partition each of my 1TB drives into two 500GB partitions - the first the same size as my original drives. I swapped out some of the 500GB drives for the 1TB drives, adding the first partition of each into the array and letting the array sync itself.

After my original array was fully healed and stable, I created a second RAID5 array on the second partitions of the 1TB drives. When I need more space, I can swap in additional 1TB drives, growing the second array each time.
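Roughly, what I did looked like this per drive (device names are illustrative, and I'm glossing over the fdisk/parted partitioning step):

```shell
# For each new 1TB drive: partition it into two ~500GB halves first
# (with fdisk or parted), then swap its first half into the old array.
mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1   # retire an old 500GB member
mdadm /dev/md0 --add /dev/sdf1                       # first half of the 1TB drive
cat /proc/mdstat                                     # wait for the resync

# Once enough 1TB drives were in and synced, build the second RAID5
# on their second partitions (member list here is an example):
mdadm --create /dev/md1 --level=5 --raid-devices=5 /dev/sd[fghij]2
```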

This is less than ideal - I haven't actually succeeded in decreasing the number of drives, and now I have to manage two filesystems and worry about where to place each new bit of data. Maybe the next time I have to do a major capacity increase (2TB drives? 4TB?), I'll do the "right" thing.

Quote:
If there are no spare ports/controllers, back up and redo.
That raises a really good question - how _does_ one back up 3.5TB of data? I'm not quite prepared to spend 2x on duplicate hard drives. Over my home DSL connection, it would take 3.5 years to upload it to an internet backup service. Tape drives seem to cost thousands of dollars.
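The 3.5-years figure is just a back-of-the-envelope estimate; it assumes something like a 256 kbit/s DSL upstream, which is a guess at a typical home uplink:

```python
# How long to push 3.5 TB up a home DSL line?
# The 256 kbit/s upstream is an assumed typical DSL uplink speed.
data_bytes = 3.5e12                  # 3.5 TB (decimal)
upstream_bps = 256_000               # 256 kbit/s upstream
seconds = data_bytes * 8 / upstream_bps
years = seconds / (365.25 * 24 * 3600)
print(round(years, 1))               # roughly 3.5 years
```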

I know, I'm being cheap. But there's nothing on the array that I don't still have the source media for, or a backup of the small subset of data that actually is irreplaceable. So if my array were completely lost, I wouldn't really lose anything but the time it would take to reload all the data.
 
Old 01-12-2009, 03:23 PM   #6
ljwobker
LQ Newbie
 
Registered: Jan 2009
Posts: 4

Rep: Reputation: 0
To solve your "no spare ports" problem - you might want to just get a SATA-to-USB enclosure and use that temporarily, then go ahead with building the new RAID array. It would be slow (limited by the USB bus), but as long as you're referencing everything by UUID or --scan in mdadm, I think it would be pretty easy to keep everything online and protected the whole time. Then again, it might be worth buying/borrowing a cheap SATA controller to add enough ports for the transition... anyway... maybe that helps.
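Referencing by UUID just means recording the array identity in mdadm.conf, roughly like this (the config file path can vary by distro):

```shell
# Record the array by UUID so it doesn't matter whether a member shows
# up on a USB or SATA device name; this appends lines of the form
#   ARRAY /dev/md0 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx
mdadm --detail --scan >> /etc/mdadm.conf

# Later, reassemble from whatever device names the drives land on:
mdadm --assemble --scan
```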
 
Old 01-12-2009, 11:15 PM   #7
DarkFlame
Member
 
Registered: Nov 2008
Location: San Antonio, TX, USA
Distribution: Ubuntu Server 8.10 & SAMBA 3.2.3
Posts: 158
Blog Entries: 1

Rep: Reputation: 30
Let me look at the math. You've got 8 drives of 500GB each, 4 TB raw, but RAID5 usable space is (n-1) drives' worth, so 3.5 TB. And because a drive maker's "GB" (10^9 bytes) is smaller than the binary gigabyte your OS reports in, the number you actually see is smaller still - roughly 3.2 TiB. So, for grins, call it 3.5 TB to start.

You're going to an array of 5 drives of 1 TB each, which, by the same (n-1) argument, is 4 TB of usable space.

Conclusion: You're going to all this trouble simply to add 1/2 TB.

That said, I'm going to go through the same thing you are, eventually. I've got a 4-disk RAID5 array with 250 GB drives - a total of just under 700 GiB of usable storage space. I'm thinking I can simply swap out the drives, one at a time, for 1 TB drives. The usable space won't actually change until the last one is swapped, but I think that would work (my theory).
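If you want to double-check that arithmetic, here's the whole thing in a few lines (RAID5 usable space = (n-1) x drive size; drive makers count in decimal TB, the OS reports binary TiB):

```python
# RAID5 usable space = (n - 1) * drive_size. Drive makers sell decimal
# GB/TB (10**9 / 10**12 bytes); the OS reports binary TiB (2**40 bytes).
def raid5_usable_bytes(n_drives, drive_bytes):
    return (n_drives - 1) * drive_bytes

TIB = 2 ** 40
old = raid5_usable_bytes(8, 500e9)   # 8 x 500GB drives
new = raid5_usable_bytes(5, 1e12)    # 5 x 1TB drives
print(old / 1e12, new / 1e12)        # 3.5 vs 4.0 decimal TB: +0.5 TB
print(round(old / TIB, 2), round(new / TIB, 2))  # ~3.18 vs ~3.64 TiB
```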

Let me suggest something else. ASSUMING that you've got a working backup system, I'd simply tear down the old array and build a new one with blank 1 TB drives, then restore the data to it. Certainly you do have a backup system that you are using and that you know is reliable?
 
Old 01-13-2009, 12:41 AM   #8
ljwobker
LQ Newbie
 
Registered: Jan 2009
Posts: 4

Rep: Reputation: 0
Swapping X drives for X bigger drives is a reasonable option, but as X gets large it's also pretty expensive. Most folks doing this sort of thing have 250 or 500GB drives, but most of us are looking at moving to 1TB or even 1.5TB drives, and doing 8 drives at a time makes that pretty expensive. Another thought I had: borrow another machine from somewhere that has at least 5 SATA ports, and use that machine to build your new array. Then network-copy the data to the new machine, remove all the 500GB drives, and install all 5 1TB drives. mdadm is smart enough to find the array spread across the drives and put it back together. As long as you're moderately careful about keeping the drives straight, you should be able to do this with zero risk to your existing data (you could even mount everything read-only if you're paranoid) - if for some reason the migration goes horribly wrong, you just put back the old drives and off you go.
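On the borrowed box, the reassembly really is about that simple; mdadm scans the drives for its superblocks (device names below are examples, and --readonly is one way to get the read-only safety):

```shell
# Scan the moved drives for md superblocks and put the array back
# together; --readonly guarantees nothing is written while you copy.
mdadm --assemble --scan --readonly

# Or list the members explicitly if you want to be extra careful:
mdadm --assemble /dev/md0 --readonly /dev/sd[b-f]1
```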

Finding a machine to do this shouldn't be tough: it has to have 5 functioning SATA ports (or enough SATA + USB if you have an enclosure/converter thingy), and it has to be able to run a modern copy of Linux... there are an awful lot of machines floating around that meet those criteria. I think you'd even be able to do this without actually installing Linux at all - you can probably do everything you need from a live CD. Hell... just borrow someone's Windows PC and let them take their drives out of it so (again) you run zero risk of breaking an existing system... ;-)
 
Old 01-13-2009, 01:02 AM   #9
DarkFlame
Member
 
Registered: Nov 2008
Location: San Antonio, TX, USA
Distribution: Ubuntu Server 8.10 & SAMBA 3.2.3
Posts: 158
Blog Entries: 1

Rep: Reputation: 30
Quote:
Originally Posted by ljwobker View Post
Finding a machine to do this shouldn't be tough: it has to have 5 functioning SATA ports
My box has an Asus M3A78-EM motherboard, and it's got 5 SATA ports on it. Put an AMD Athlon 64 6000 processor on it, and the cost was $230. Add a single 2GB stick of RAM for $34, rob a power supply from another box and an old CD-ROM, and you're up & running in a hurry. Enermax makes a great case for $29, and it comes with a power supply. Total cost is under $300, plus shipping & hard drives.

Yes, I did mine with 250 GB drives - spent almost as much on the drives as I did on the rest of the box! By the time we've accumulated enough data to fill up the 700 GB of space, I'll be able to get 1 TB drives for about the same price (250 GB drives are currently $49 each, and 1 TB drives are in the range of $109 and up, but that should be coming down, too).
 
  



Tags: mdadm, raid, raid5

