Old 03-03-2007, 10:36 AM   #1
carlosinfl
Senior Member
 
Registered: May 2004
Location: Orlando, FL
Distribution: Arch
Posts: 2,905

Rep: Reputation: 77
Finally Tried Hardware RAID


OK - I found a 3ware 9650SE S-ATA RAID PCI-E card for my home PC, and I wanted to get 2 S-ATA 250 GB drives running as RAID0. I installed the card in my PC and set up the RAID0, so I now have one large 460GB drive according to the 3ware BIOS. However, when I go to install Debian Etch, it is unable to find any drives to partition.

I can only assume the driver for the 3ware RAID controller is not included in the Etch installer or the Etch-provided kernel, so my question is: how do I proceed to get this working?

I went to the site of AMCC, who manufactures the card, and downloaded a driver for my 9650SE called 3w-9xxx-linux-src-2.6-supp_distros-9.4.0.1.tgz.

What do I do to get this working on my system?

Thanks for any info!
 
Old 03-03-2007, 03:29 PM   #2
JimBass
Senior Member
 
Registered: Oct 2003
Location: New York City
Distribution: Debian Sid 2.6.32
Posts: 2,100

Rep: Reputation: 49
Man, you just missed the cut - that card has kernel support in 2.6.19, and Etch comes with 2.6.18! http://3ware.com/support/OS-support.asp

So you have the correct file, and that driver either needs to be built against your kernel or put on a floppy as source for the installer.

Even better! Further down the page they have a link to a Sarge install disk with the module for the card you have built in!
http://www.3ware.com/kb/article.aspx?id=14860

So you get that iso for your architecture (i386 or x86_64), install as Sarge, then immediately dist-upgrade to Etch or Sid once it installs, and it will keep the module for the card.
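
A rough sketch of that post-install jump from Sarge to Etch, assuming a stock sources.list (the sed one-liner is just shorthand for editing the file by hand):
Code:
# point APT at etch instead of sarge
sed -i 's/sarge/etch/g' /etc/apt/sources.list
apt-get update
apt-get dist-upgrade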

Glad to see hardware raid, you'll love it!

That card is new and expensive because of RAID 6, which we use for SAN-level gear, as it is basically a doubly fault-tolerant RAID 5. If you're only going to run RAID 0 or 1, you can get a cheaper card if you're interested in doing a return.

Peace,
JimBass

Last edited by JimBass; 03-03-2007 at 03:35 PM.
 
Old 03-03-2007, 03:39 PM   #3
Quakeboy02
Senior Member
 
Registered: Nov 2006
Distribution: Debian Linux 11 (Bullseye)
Posts: 3,407

Rep: Reputation: 141
Step back from the dark side, carl. Come back to software RAID.

Seriously, though, do you have another system or an IDE drive? If so, install Etch to it. Then download 2.6.19.5, compile it with that driver, and make sure the new kernel sees the RAID. Then configure the RAID the way you want it, and do a "cp -a" from your booted partition to the RAID. I don't believe this will copy your /dev directory, though, so look near the bottom of this thread for what should be a workaround to get /dev copied to the RAID.

http://www.linuxquestions.org/questi...d.php?t=531196

Finally, copy your MBR over and change your grub to reference your RAID.

That looks a lot simpler than it'll turn out to be in practice, but it should be more-or-less correct.
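
A rough sketch of that sequence, assuming the array ends up as /dev/sda with its root partition at /dev/sda1 (device names, paths, and mount points are illustrative):
Code:
# build a 2.6.19.x kernel with the 3ware 9xxx driver enabled
cd /usr/src/linux-2.6.19.5
make menuconfig                      # enable the 3ware 9xxx SATA-RAID option under SCSI low-level drivers
make && make modules_install install

# copy the running system onto the RAID
mount /dev/sda1 /mnt
cd / && cp -a bin boot etc home lib opt root sbin srv usr var /mnt/
mkdir /mnt/proc /mnt/sys /mnt/dev /mnt/mnt
mkdir -m 1777 /mnt/tmp               # /dev itself needs the workaround from the thread linked above

# put GRUB on the RAID disk and point it at the new root
grub-install --root-directory=/mnt /dev/sda
# then edit /mnt/boot/grub/menu.lst so root= refers to the RAID partition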
 
Old 03-03-2007, 03:53 PM   #4
carlosinfl
Senior Member
 
Registered: May 2004
Location: Orlando, FL
Distribution: Arch
Posts: 2,905

Original Poster
Rep: Reputation: 77
I don't think the card is that expensive. It only supports RAID 0/1 and only has 2 S-ATA connectors on it. I will try that .ISO they have for Sarge and do the dist-upgrade to Etch.

As for software RAID: I need someone to show me or clearly walk me through every step of software RAID on a Debian system, as it appears to be way too complex for my small head...
 
Old 03-03-2007, 04:20 PM   #5
JimBass
Senior Member
 
Registered: Oct 2003
Location: New York City
Distribution: Debian Sid 2.6.32
Posts: 2,100

Rep: Reputation: 49
I saw the 2 port card, I figured you had one of the ones that could do RAID 6. My bad. Yeah, doing the sarge install with modules is the safest and easiest.

I am going to build an Ubuntu box with software RAID (RAID 1 in this case, but 1 and 0 follow the same procedure), and I'll post back exactly how I accomplish it. I suspect Ubuntu just uses the Debian install procedure, which I know backwards and forwards.

Peace,
JimBass
 
Old 03-03-2007, 04:22 PM   #6
Quakeboy02
Senior Member
 
Registered: Nov 2006
Distribution: Debian Linux 11 (Bullseye)
Posts: 3,407

Rep: Reputation: 141
Just yanking your chain about the software RAID, Carl. Try Jim's way first. I'm sure it will work out. If not, there's always the way I posted to try. Either way, we'll get you running with RAID. That looks like a nice card. I think I saw that it has a 128MB cache. The thing is gonna be fast!
 
Old 03-03-2007, 09:06 PM   #7
Quakeboy02
Senior Member
 
Registered: Nov 2006
Distribution: Debian Linux 11 (Bullseye)
Posts: 3,407

Rep: Reputation: 141
Quote:
I am going to build an Ubuntu box with software RAID (RAID 1 in this case, but 1 and 0 follow the same procedure), and I'll post back exactly how I accomplish it.
Jim, I found the following while looking around for something else. Starting from the partitioning page, it should be correct for installing software RAID 1. I notice that he has swap on both drives, but no separate boot partition. I kinda figured it wouldn't need a separate /boot, since no striping is involved.

http://nepotismia.com/debian/raidinstall/part2.html
 
Old 03-03-2007, 10:27 PM   #8
JimBass
Senior Member
 
Registered: Oct 2003
Location: New York City
Distribution: Debian Sid 2.6.32
Posts: 2,100

Rep: Reputation: 49
I would do it differently: I would match a swap partition on one drive to the boot partition on the other, then mirror the rest of the drives. His way is fine too, and it gives /boot the protection of RAID. With the advent of live CDs I find /boot largely unimportant and easy to recover, but whatever, it's all a matter of preference. The instructions are fine; it just isn't the way I like to set things up. I also prefer that /boot not be in the array, for simplicity's sake. With a simple 2-disk RAID it isn't such a big deal, and bigger (read: more-disk) arrays should always have a hardware controller. That being said, I've seen servers deployed with software RAID 5 and even software RAID 10 at least twice, which seems the height of foolishness. Having a simple /boot on a single disk at least lets you rebuild the software RAID when there are problems.
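
For illustration, the layout I'm describing might look something like this on a pair of 250 GB disks (sizes are only examples):
Code:
sda
1 GB   /boot (outside the RAID)
249 GB space for RAID

sdb
1 GB   swap
249 GB space for RAID

md0 = sda2 + sdb2, RAID 1, mounted as /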

Peace,
JimBass
 
Old 03-03-2007, 10:39 PM   #9
Quakeboy02
Senior Member
 
Registered: Nov 2006
Distribution: Debian Linux 11 (Bullseye)
Posts: 3,407

Rep: Reputation: 141
Carl,

Not trying to steal your thread here. Please retake ownership anytime you like. In fact, how's the install going?

In the meantime, I've resurrected a project that I failed at abysmally the last time I tried it: installing Etch on a DMRAID RAID0. I don't know where this is going to lead, but at least I was able to understand the tool and get the mapper to have a device to work with this time. If I have any luck, I'll post to another thread. Yeah, I know it's still just software RAID, but it's an unmet challenge, if you know what I mean.
 
Old 03-05-2007, 12:22 AM   #10
JimBass
Senior Member
 
Registered: Oct 2003
Location: New York City
Distribution: Debian Sid 2.6.32
Posts: 2,100

Rep: Reputation: 49
Here's a quick and easy software RAID setup I just did on a Kubuntu install. I had 2 x 250 GB SATA drives.

First off, you have to have the modules for md (the software RAID built into most Linux distributions) added to the installer. In (K)ubuntu's "we'll be Debian but hold your hand while you try" method, all I had to do was go to "Load installer components from CD" and select mdcfg. Debian gives a much longer list, I believe, but with the same basic functionality: just choose md, and then you can configure it right after partitioning.

Before you can set up any software RAID, you have to prepare the disks by partitioning them. I broke them up with the intention of having a non-RAID boot partition, non-RAID swap, then a large RAID 1 partition for the root (/), and a huge RAID 1 /home.

What gets confusing is that you need to physically create the partitions before RAIDing them. I broke up the disks like this:
Code:
sda
1 Gb /boot reiserfs outside the RAID
79 Gb space for raid
160 Gb space for raid

sdb
1 Gb swap space
79 Gb space for raid
160 Gb space for raid
If you put an actual filesystem (reiserfs or ext3) on the RAID space at this point you're screwed, as the only way software RAID will accept a partition is if it is marked as "space for RAID".

Then at the top of the partition screen there should be an option to "configure software RAID". Click on that once you have your RAID spaces set. Within the md creation menu, your first choice should be to create an md device. It will then ask what type of RAID, how many disks will be used, and whether you want spares. After telling it I wanted RAID 1 with no spares, it showed me all 4 partitions I had left as space for RAID. I selected the two 79 GB partitions (sda2 and sdb2), then created a second md device with the two 160 GB partitions (sda3 and sdb3). I hit finish in the md creation menu, and it dropped me back to the main partitioning screen.

Now both of my software RAID devices were shown. This is where you set the filesystem and mount point for them. I selected md0, my 79 GB RAID 1 partition. It defaults to "do not use", so highlight that and give it a filesystem. I love reiserfs, but many folks prefer ext3. Then set your mount point; I made the 79 GB one my root (/). I did the same with the 160 GB one, making it reiserfs and /home. At that point the partition screen had everything correct: the 2 software RAID groups, md0 as / and md1 as /home, with /boot and swap off on sda1 and sdb1 respectively. Then I finalized the partitions and wrote everything to the disks.
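
For reference, the same arrays the installer builds can be created or inspected by hand with mdadm; a rough sketch from a root shell, using the partition names from the layout above:
Code:
# create the two mirrors (this is what the installer's md menu does behind the scenes)
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3

# put filesystems on them and check the sync status
mkfs.reiserfs /dev/md0
mkfs.reiserfs /dev/md1
cat /proc/mdstat
mdadm --detail /dev/md0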

You can do other things as you want; I could see using RAID 1 for the root but making the home partition RAID 0, giving you 320 GB of space. Just setting each entire disk as a single "space for RAID" partition and combining everything into a 500 GB RAID 0 array is not recommended, as you'd have no fault tolerance and nowhere to put swap. You can't subdivide an md group into smaller pieces, so think ahead about how you want your partitions before you set up the RAID space.

Peace,
Jimbass
 
Old 03-05-2007, 03:14 PM   #11
carlosinfl
Senior Member
 
Registered: May 2004
Location: Orlando, FL
Distribution: Arch
Posts: 2,905

Original Poster
Rep: Reputation: 77
So I tried the Debian .ISO they provide, however the damn thing does not see my onboard nVidia NIC, and Etch does....

Any suggestions?
 
Old 03-05-2007, 03:19 PM   #12
Quakeboy02
Senior Member
 
Registered: Nov 2006
Distribution: Debian Linux 11 (Bullseye)
Posts: 3,407

Rep: Reputation: 141
From Jim's post above:
Quote:
So you get that iso for your architecture (i386 or x86_64), install as Sarge, then immediately dist-upgrade to Etch or Sid once it installs, and it will keep the module for the card.
 
Old 03-05-2007, 05:12 PM   #13
carlosinfl
Senior Member
 
Registered: May 2004
Location: Orlando, FL
Distribution: Arch
Posts: 2,905

Original Poster
Rep: Reputation: 77
I don't understand...

Someone suggested downloading the Debian (Sarge) .ISO file, which I assume has the 3ware RAID drivers built into the kernel; however, when I boot from that .ISO CD, it does not recognize my NIC. So how can I apt-get dist-upgrade to Etch when there is no way to connect to the net using the Sarge modules?

I am just trying to make sure I am not missing anything...
 
Old 03-05-2007, 05:15 PM   #14
Quakeboy02
Senior Member
 
Registered: Nov 2006
Distribution: Debian Linux 11 (Bullseye)
Posts: 3,407

Rep: Reputation: 141
Oh - catch-22 in progress. What NIC do you have? It should be pretty easy to get that going.

added:
Gee, I'm slow. You have the nvidia NIC. I'm pretty sure the driver is forcedeth. I'm surprised that it doesn't see it. Did you try "modprobe forcedeth"?
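
If the forcedeth module is actually on that install CD (it may not be, given the age of the Sarge kernel), something like this from the installer's second console (Alt+F2) might get the NIC detected; the interface name is just a guess:
Code:
modprobe forcedeth   # load the nVidia Ethernet driver
ifconfig -a          # see whether an eth0 interface now appears
# then go back to the installer menu and re-run the network detection / DHCP step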

Last edited by Quakeboy02; 03-05-2007 at 05:17 PM.
 
Old 03-05-2007, 07:36 PM   #15
JimBass
Senior Member
 
Registered: Oct 2003
Location: New York City
Distribution: Debian Sid 2.6.32
Posts: 2,100

Rep: Reputation: 49
Yeah, you'll either need to get that module loaded from a floppy, or go hardcore: mount the ISO on an existing system, add the module for the NIC, and then recreate a new, larger ISO. Depending on the size, you may need to remove some other module if you push past 700 MB. You can also play with USB memory sticks if that makes things any easier.
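
A rough sketch of that remastering approach; the ISO name and the module's location on the CD are hypothetical, and the mkisofs boot options are the usual ones for isolinux-based Debian CDs:
Code:
mount -o loop sarge-3ware.iso /mnt/iso           # loop-mount the downloaded image
cp -a /mnt/iso /tmp/newcd && umount /mnt/iso     # copy its contents somewhere writable
cp forcedeth.ko /tmp/newcd/path/to/modules/      # drop in the NIC module (path is hypothetical)
mkisofs -r -J -b isolinux/isolinux.bin -c isolinux/boot.cat \
        -no-emul-boot -boot-load-size 4 -boot-info-table \
        -o /tmp/sarge-custom.iso /tmp/newcd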

Peace,
JimBass
 
  

