LinuxQuestions.org
Debian This forum is for the discussion of Debian Linux.

Old 03-01-2007, 05:30 PM   #1
carlosinfl
Senior Member
 
Registered: May 2004
Location: Orlando, FL
Distribution: Arch
Posts: 2,905

Rep: Reputation: 77
Software RAID?


I am getting ready to install Etch on a new box that has two 250GB S-ATA drives on it and was wondering if I could use them with software RAID? I always see software RAID as an option when installing but have never used this. I don't have a RAID controller so I assumed this would be the next best thing, no?

I would like to set up both drives as one large 500GB drive (striped) and then let the system install everything on sda1 or whatever partition suits the file system.

Is there a simple way to do this or is this not even possible? Sorry but I have never used software RAID on a Linux system so please excuse my ignorance on this.

Thanks for any info!
 
Old 03-01-2007, 06:51 PM   #2
Quakeboy02
Senior Member
 
Registered: Nov 2006
Distribution: Debian Linux 11 (Bullseye)
Posts: 3,407

Rep: Reputation: 141
It's kinda complicated, but it's doable. This is how I did it when I was experimenting with it:

At some point during the installation, you will reach the disk partitioning screen. On one disk, set aside an area (1 GB?) and give it a mount point of "/boot". The bootable flag should be ON on this disk. On the other disk, set aside a swap area of the same size. Set your main areas (both the same size) as type RAID on both disks; I believe the partition type is "FD". Then click on the Configure RAID selection, and set up the RAID device as ext3/xfs/whatever, with the mount point "/". You should be able to continue with the installation from here.

Added:
Clearly, the size of your swap area and boot area is up to whatever your needs are. 1GB was just an example.
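For reference, here is roughly what that layout looks like if you script it from a shell instead of using the installer's menus. This is a hypothetical sketch, not the installer's actual commands: the device names /dev/sda and /dev/sdb and the 1 GB sizes are just examples, and the exact sfdisk input syntax varies between versions.

```shell
# Sketch of the partition layout described above (assumes empty disks
# /dev/sda and /dev/sdb; sizes and names are examples only).

# sda1: 1 GB primary /boot partition (type 83), bootable flag set;
# sda2: the rest of the disk as a Linux RAID member (type fd).
sfdisk /dev/sda <<'EOF'
,1G,83,*
,,fd
EOF

# sdb1: 1 GB swap partition (type 82);
# sdb2: the rest of the disk as a Linux RAID member (type fd).
sfdisk /dev/sdb <<'EOF'
,1G,82
,,fd
EOF
```

The installer's partitioner does the same thing interactively; the key point is that /boot and swap live on plain partitions while the two big type-fd partitions become the RAID members.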

Last edited by Quakeboy02; 03-01-2007 at 06:52 PM.
 
Old 03-01-2007, 07:01 PM   #3
carlosinfl
Senior Member
 
Registered: May 2004
Location: Orlando, FL
Distribution: Arch
Posts: 2,905

Original Poster
Rep: Reputation: 77
So when I am at the partitioning section and I see two disks:

- sda
- sdb

Both have 250.1 GB of free space. Are you saying I should create a /boot partition (bootable flag enabled) as sda1 and then a swap partition on sdb (the second physical disk)?

Then once those are completed, I can select the "Configure Software RAID" button?
 
Old 03-01-2007, 07:46 PM   #4
Quakeboy02
Senior Member
 
Registered: Nov 2006
Distribution: Debian Linux 11 (Bullseye)
Posts: 3,407

Rep: Reputation: 141
"Are you saying I should create a /boot partition (bootable flag enabled) as sda1 and then a swap partition on sdb (second physical disk)?"

Yes. The boot partition needs to be outside of the RAID. Think about it. That was the thing that threw me for awhile.

I'm not 100% sure about the swap partition, though. I suppose you could probably put it within the RAID if you wanted to. If you are expecting to do some swap-heavy tasks, then you might experiment with it. If not, then what does it matter?

Added:
"Then once those are completed, I can select the "Configure Software RAID" button?"

Yes, but remember to set the FS type and the mount point as "/".

Oh, one more thing. The boot partition has to be a primary partition, unlike the swap, which can be a logical partition.

Last edited by Quakeboy02; 03-01-2007 at 07:50 PM.
 
Old 03-01-2007, 08:08 PM   #5
carlosinfl
Senior Member
 
Registered: May 2004
Location: Orlando, FL
Distribution: Arch
Posts: 2,905

Original Poster
Rep: Reputation: 77
Thanks, I will try this and post back my results.
 
Old 03-02-2007, 06:57 AM   #6
pk2001
LQ Newbie
 
Registered: Feb 2007
Location: Singapore
Distribution: Debian Sarge/Etch
Posts: 19

Rep: Reputation: 0
Quote:
Originally Posted by Carlwill
I am getting ready to install Etch on a new box that has two 250GB S-ATA drives on it and was wondering if I could use them with software RAID? I always see software RAID as an option when installing but have never used this. I don't have a RAID controller so I assumed this would be the next best thing, no?

I would like to set up both drives as one large 500GB drive (striped) and then let the system install everything on sda1 or whatever partition suites the file system.

Is there a simple way to do this or is this not even possible? Sorry but I have never used software RAID on a Linux system so please excuse my ignorance on this.

Thanks for any info!
I am sure you have your reasons but just striping without parity will double your chances of drive failure. Anyway, be that as it may, I have been running software RAID on Sarge for a while now (mind you, configured as RAID 1). It works extremely well and recovers well from failure. Installing a RAID on SATA disks proved a little more challenging, as the Debian Sarge installer did not like the SATA disks much.

What saved me at the time with Sarge were some back-ported images found at this site:
http://kmuto.jp/debian/d-i/

Never tried it with Etch though. I suspect the Etch installer will make everything much easier.
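Once a software RAID is running (RAID 1 as above, or the RAID 0 discussed in this thread), its health can be checked from any shell. The commands below are standard mdadm usage; /dev/md0 and /dev/sdb2 are example names that may differ on your system.

```shell
# Quick overview of all md arrays; for RAID 1, "[UU]" means
# both mirrors are active, "[U_]" means one has dropped out.
cat /proc/mdstat

# Detailed per-device state and resync progress for one array.
mdadm --detail /dev/md0

# After replacing a failed disk (RAID 1 only), re-add the new
# partition and the array rebuilds itself in the background:
mdadm /dev/md0 --add /dev/sdb2
```

This self-recovery is what pk2001 describes above: RAID 1 resynchronizes the new member automatically while the system stays up.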
 
Old 03-02-2007, 08:38 AM   #7
carlosinfl
Senior Member
 
Registered: May 2004
Location: Orlando, FL
Distribution: Arch
Posts: 2,905

Original Poster
Rep: Reputation: 77
Well, Etch has no problems with S-ATA; the drives simply appear as SCSI drives.

When I get to the disk partitioner and I see two drives:

sda
250.1 GB of free space

sdb
250.1 GB of free space

I then select the free space line under "sda" and create a new partition (primary) and make it 1024 MB, bootable, with mount point "/".

I then select the free space line under "sdb" and create a new partition (logical), make it 1024 MB, and rather than ext3 I use swap, which requires no mount point.

Now sda and sdb both have 1 GB partitions, leaving 249 GB of free space on both drives. Do I then create a single large partition for the rest of the space and select RAID, which requires no mount point? I still think I am missing something.
 
Old 03-02-2007, 12:09 PM   #8
ironwalker
Member
 
Registered: Feb 2003
Location: 1st hop-NYC/NewJersey shore,north....2nd hop-upstate....3rd hop-texas...4th hop-southdakota(sturgis)...5th hop-san diego.....6th hop-atlantic ocean! Final hop-resting in dreamland dreamwalking and meeting new people from past lives...gd' night.
Distribution: Siduction, the only way to do Debian Unstable
Posts: 506

Rep: Reputation: Disabled
Quote:
I am sure you have your reasons but just striping without parity will double your chances of drive failure.

Remarks like this kill me.
There is nothing wrong with RAID 0; most people have some kind of backup plan in place.
Your comment, although I know what you meant to explain, makes it seem that if one uses RAID 0 (striping) then their hard drives will fail sooner than with RAID 1. I see people saying this on forums all the time. RAID 0, 1, 4, 5, 6, 10, 50 are all fine, and none of them beats the living hell out of a drive more than the others.

I know you meant that if a hard drive fails in RAID 0 you lose everything, whereas RAID 1 has all the data on another drive.
I am sure some newbies get scared off RAID 0 by comments like that.

All RAID is safe; just have a few different backup options in your backup strategy and all will be fine.

[for everyone]
Think about replies as if everyone were a newb to the topic; then newbs can search and read the proper info without having to ask again in another thread. Just my opinion.
 
Old 03-02-2007, 12:53 PM   #9
Quakeboy02
Senior Member
 
Registered: Nov 2006
Distribution: Debian Linux 11 (Bullseye)
Posts: 3,407

Rep: Reputation: 141
Re: I am sure you have your reasons but just striping without parity will double your chances of drive failure.

"Remarks like this kill me."

Actually, he's correct. However, it works much the same way as your chances for winning the lottery. Sure, if you buy two tickets your chances have doubled. But, 2 chances out of 50 million is still only 2 chances out of 50 million. Yes, it's going to happen to someone, but the chances of it happening to you are extremely small. Use backups.
 
Old 03-02-2007, 01:02 PM   #10
Quakeboy02
Senior Member
 
Registered: Nov 2006
Distribution: Debian Linux 11 (Bullseye)
Posts: 3,407

Rep: Reputation: 141
Quote:
I then select the free space line under "sda" and create a new partition (primary) and make it 1024 MB, bootable, and mount point is "/".
You want this one to have a mount point of "/boot". The boot partition cannot be part of your software RAID0.

Quote:
Now sda and sdb both have 1GB partitions leaving 249 GB of free space for both drives. Do I then create a single large partition for the rest of the space and select RAID which requires no mount point? I still think I am missing something
Yes and no. When you create the two 249 GB partitions, you create them both as RAID type. You won't be given the option for a mount point, since neither is mountable independently. It's during the creation of the RAID device (md0) that you will be given the option to set the mount point. You need to manually set both the mount point ("/") and the FS type (ext3?) on the RAID device, not on the two partitions that are used to create the RAID device.
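Under the hood, "Configure Software RAID" is doing roughly the following. This is a sketch with assumed names: sda2 and sdb2 stand for the two 249 GB type-fd partitions, and md0 is the resulting array.

```shell
# Assemble the two RAID-type partitions into a single striped device.
# The filesystem goes on md0, NOT on the member partitions.
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sda2 /dev/sdb2

# Create the filesystem on the array device.
mkfs.ext3 /dev/md0

# The corresponding /etc/fstab entry for the root filesystem would
# then reference the array, e.g.:
#   /dev/md0  /  ext3  defaults,errors=remount-ro  0  1
```

This is why the installer only asks for a mount point at the md0 step: the member partitions have no filesystem of their own.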
 
Old 03-02-2007, 03:09 PM   #11
JimBass
Senior Member
 
Registered: Oct 2003
Location: New York City
Distribution: Debian Sid 2.6.32
Posts: 2,100

Rep: Reputation: 49
The choice of partitioning your remaining free space will depend on how you want the final setup to be. I agree fully with having a 1 GB boot and swap space on separate drives. Then it comes down to choice. Do you want a single root partition with 498 GB of space, or do you want to split that up into divisions, say 100 GB for root and the remaining 398 GB or so for /home? If one big space is fine with you, then just go ahead and make the RAID, then mount it as /. If you want to set partition sizes, for just / and /home or giving every primary directory its own mount point, it all can work.

Peace,
JimBass
 
Old 03-02-2007, 03:20 PM   #12
Quakeboy02
Senior Member
 
Registered: Nov 2006
Distribution: Debian Linux 11 (Bullseye)
Posts: 3,407

Rep: Reputation: 141
"I agree fully with having a 1 GB boot and swap space on separate drives."

Heh, it's nice to see someone agreeing with something I've posted for a change. Tell me Jim, do you know whether or not the swap space can be on the RAID device? Would it have to be a swap file, in that case? I may have to put my RAID experiment back in the box and play with this some more. It's an old Promise Ultra66 controller and some slowish drives, so I'm not so sure it's practical to use in anger.
 
Old 03-02-2007, 03:40 PM   #13
JimBass
Senior Member
 
Registered: Oct 2003
Location: New York City
Distribution: Debian Sid 2.6.32
Posts: 2,100

Rep: Reputation: 49
I don't see any point in putting the swap space in the array. Swap works much better as its own partition, so creating matching swaps for a RAID 1 array is just a waste of disk space. Of course, for RAID 1 you have to have identical partition sizes, so the space will be lost anyway.

You don't want a swap file; that is slower and uglier by far. Some folks will also try to get cute with garbage like creating a RAID array for swap, like taking 250 MB from 4 drives and raiding them into a 1 GB swap space. That also seems like a gross waste of resources to me. I think it is much more effective to just have a single swap space on any given drive, and let it swap away.

With true hardware RAID, you do set the swap space, and it is a resource waste, but instead of having your processor do the work, the dedicated controller does it. My 3ware SATA RAID bothers to write my 1 GB of swap space across 3 drives with one for parity, but it doesn't affect my system's ability to do anything.

Peace,
JimBass
 
Old 03-02-2007, 03:48 PM   #14
Quakeboy02
Senior Member
 
Registered: Nov 2006
Distribution: Debian Linux 11 (Bullseye)
Posts: 3,407

Rep: Reputation: 141
"so creating matching swaps for a RAID 1 array is just a waste of disk space."

We're talking RAID 0 here.

Added:
Ahh, OK, I think I get it, now. Swap is not a filesystem. Files are not read and written to swap, 4K kernel pages are read and written to swap. As a result, there will not be an advantage to striping the swap space, because the access is more or less random and not large contiguous sections. Hmm, but would there be something to it with a 4K chunk size?

After thinking about it some more, I guess not. It would take a LOT of swapping for striped RAID to be of any use at all, and I wouldn't want to use a system that needed to swap that much.
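For what it's worth, the kernel can already spread swap across drives without md at all: if two swap areas are given the same priority, pages are allocated between them round-robin, which approximates striping. A sketch, with sda3 and sdb1 as hypothetical swap partitions:

```shell
# Two swap areas with equal priority are used round-robin by the
# kernel, spreading swap I/O across both drives without any RAID.
mkswap /dev/sda3        # partition names are examples only
mkswap /dev/sdb1
swapon -p 1 /dev/sda3   # -p sets the priority; equal values
swapon -p 1 /dev/sdb1   # mean the kernel alternates between them

# The equivalent /etc/fstab entries:
#   /dev/sda3  none  swap  sw,pri=1  0  0
#   /dev/sdb1  none  swap  sw,pri=1  0  0
```

As noted above, though, a system swapping hard enough for this to matter probably has bigger problems than swap bandwidth.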

Last edited by Quakeboy02; 03-02-2007 at 04:09 PM.
 
Old 03-04-2007, 12:51 AM   #15
pk2001
LQ Newbie
 
Registered: Feb 2007
Location: Singapore
Distribution: Debian Sarge/Etch
Posts: 19

Rep: Reputation: 0
Quote:
Originally Posted by Quakeboy02
Re: I am sure you have your reasons but just striping without parity will double your chances of drive failure.

"Remarks like this kill me."

Actually, he's correct. However, it works much the same way as your chances for winning the lottery. Sure, if you buy two tickets your chances have doubled. But, 2 chances out of 50 million is still only 2 chances out of 50 million. Yes, it's going to happen to someone, but the chances of it happening to you are extremely small. Use backups.
Well it's more a case of increasing redundancy and business continuity versus reducing it. Having a good backup strategy in place is all good and well, but down-time is down-time and restoring from backup really should only be a last resort.

With regard to the comment "All RAID is safe"... hmm, let's say that there are degrees of safety.
 