LinuxQuestions.org
Old 10-27-2005, 08:22 PM   #1
boostdemon
LQ Newbie
 
Registered: Jul 2005
Location: Northern VA USA
Posts: 7

Rep: Reputation: 0
Hardware RAID card suggestions?


Hey guys, hoping some of you can give me your input on an ATA133 RAID controller card for my server. It's currently running a trimmed-down version of Mandrake 10.0, and one of its primary functions is network storage via Samba.

Planning on adding a pair of 300 GB ATA133 drives in RAID1 (mirror) to replace the assortment of drives I've collected over the years, which are currently holding things like artwork, taxes, email backups/address books, and other data I can't afford to lose but have neglected.

All input welcome. Dollar figures would be around $125 for each drive, and say around $50 for the card?
 
Old 10-28-2005, 04:40 AM   #2
Thoreau
Senior Member
 
Registered: May 2003
Location: /var/log/cabin
Distribution: All
Posts: 1,167

Rep: Reputation: 45
Linux native Hardware RAID
3ware 7006-2
The dollar figure is $104

http://www.monarchcomputer.com/Merch...ricewatch&NR=1

300 GB Server rated ATA
Maxtor Maxline III 300 GB
The dollar figure is $150

http://www.pcnation.com/web/details....03&item=F40722
 
Old 10-28-2005, 05:15 AM   #3
kopite
Member
 
Registered: Nov 2003
Distribution: Debian Etch
Posts: 33

Rep: Reputation: 15
I'm in the market for a RAID card too, but I want something that will be able to handle 4+ drives.
 
Old 10-28-2005, 05:24 AM   #4
Thoreau
Senior Member
 
Registered: May 2003
Location: /var/log/cabin
Distribution: All
Posts: 1,167

Rep: Reputation: 45
ATA or SATA?

Assuming ATA,

3ware 7506-8
8 drives supported on a half length card.

Last edited by Thoreau; 10-28-2005 at 05:27 AM.
 
Old 10-28-2005, 08:01 AM   #5
kopite
Member
 
Registered: Nov 2003
Distribution: Debian Etch
Posts: 33

Rep: Reputation: 15
I'm after SATA. I was looking at the HighPoint 2220 card, but I've been told it's not fully Linux compatible.
 
Old 10-28-2005, 08:09 AM   #6
boostdemon
LQ Newbie
 
Registered: Jul 2005
Location: Northern VA USA
Posts: 7

Original Poster
Rep: Reputation: 0
Quote:
Originally posted by Thoreau
Linux native Hardware RAID
3ware 7006-2
The dollar figure is $104

http://www.monarchcomputer.com/Merch...ricewatch&NR=1

300 GB Server rated ATA
Maxtor Maxline III 300 GB
The dollar figure is $150

http://www.pcnation.com/web/details....03&item=F40722
Thanks! That should work.
Any reason to go with SATA 150 RAID/drives, or should I just stick with ATA133?

//edit - looks like there would be a bit of work and/or possibly a full upgrade to Mandriva 2005 to get SATA support. A bit more work than I had hoped for, but if it's worth the effort it's a possibility.

I have also been reading up on EVMS software RAID... and I must say, it doesn't look nearly as bad as people make software RAID out to be. If the only drawback is increased CPU use, I've got a 1 GHz CPU that is more than enough for anything I do... normal system load is something like 0.05.
I would just be concerned about having 3 drives: two big ones in RAID1, and the third holding the primary Linux installation, running off the normal IDE channel. If something happens to the main drive, the array gets lost and it was all pointless, right? I'll read up on it, but I'm still looking at hardware RAID.

Last edited by boostdemon; 10-28-2005 at 08:47 AM.
 
Old 10-28-2005, 03:27 PM   #7
Thoreau
Senior Member
 
Registered: May 2003
Location: /var/log/cabin
Distribution: All
Posts: 1,167

Rep: Reputation: 45
The best card to get for SATA on a 32-bit PCI bus is the 4-port 9500 series. In actuality, if you have more than 2 drives you should use a 64-bit PCI slot for any RAID card. These cards fit in both slots. If you have PCI-X, then you can get the newest 9550. But not many people have that, so I recommend the 9500S with however many ports you want.

You can pick any 3ware card and it's Linux native (the driver is in the kernel itself). That's one reason why I recommend them. The other is that it's true hardware RAID, which means it doesn't load the CPU and isn't software dependent when a drive goes down.

There are other options, but most are software RAID even though they say otherwise. And beyond that, they use proprietary non-GPL'd drivers, so they're of no use for booting a fresh Linux install. There are LSI and Areca cards as I recall, but they're a bit pricey.
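One way to check the in-kernel driver claim on your own box (the module names below are the ones shipped in stock kernels; exact output will vary with your kernel and hardware):

```shell
# The 3ware ATA RAID driver in the mainline kernel is 3w-xxxx;
# the 9000-series SATA cards use 3w-9xxx.
modinfo 3w-xxxx

# If the card is installed, the driver should show up as loaded,
# and the controller should appear in the boot messages:
lsmod | grep 3w
dmesg | grep -i 3ware
```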
 
Old 04-10-2006, 01:56 AM   #8
CowLoon
LQ Newbie
 
Registered: Jan 2005
Posts: 18

Rep: Reputation: 0
I am running Ubuntu Linux. People aren't depending on my computer as a server, so I don't need anything fancy; I just want redundancy. I was thinking I would go for RAID1, since I've never met anyone running a desktop computer who has told me about having 2 drives fail at the same time.

Will the 3ware cards above work with Linux in general, or only specific distributions? Probably the answer is Linux in general, I guess, since the post says 3ware cards have drivers in the kernel.

If a drive fails, how will I know it?

If a drive fails in 4 years and my current hard drive is no longer manufactured, will I need to buy 2 new drives?

To get my current data under RAID, do I just connect the controller, copy my data over to the array, then do something with grub and reboot? I mean, will this involve a complete reinstall of all of my software, or just a reconfiguration of mount points and boot parameters?

Last edited by CowLoon; 04-10-2006 at 02:09 AM.
 
Old 04-10-2006, 08:23 AM   #9
boostdemon
LQ Newbie
 
Registered: Jul 2005
Location: Northern VA USA
Posts: 7

Original Poster
Rep: Reputation: 0
Holy thread resurrection, Batman!

Nonetheless, I'll update. Despite the recommendations, my project being what it is (extremely low budget) is now up and running quite well with software RAID1 using 6 drives of equal size. I'm using mdadm for control instead of raidtools. It wasn't easy to set up - well, more like it wasn't easy to find the information to set it up. Every "how-to" missed something, or rather a lot of things. It doesn't use more than 10% CPU for the time it takes to sync the data from one drive to another, and with a 1 GHz CPU more or less going to waste, that wasn't a problem. Speed doesn't seem to be affected as far as read/write times, since the resync waits for idle time before copying. I haven't had to truly test it yet, but I did accidentally boot it up with only 1 of the two drives in an array powered on, and it worked perfectly...

So, there's my status. Perhaps one day, if I have more drives and more data, I'll go for hardware, but at this point the extra CPU usage wasn't posing a problem.

-Dana
 
Old 04-10-2006, 08:34 AM   #10
boostdemon
LQ Newbie
 
Registered: Jul 2005
Location: Northern VA USA
Posts: 7

Original Poster
Rep: Reputation: 0
Following up on my recommendation to use software RAID:

Quote:
Originally Posted by CowLoon
I am running ubuntu linux. People aren't depending on my computer as a server, so I don't need anything fancy, I just want to have redundancy. I was thinking I would go for RAID1, since I've never met anyone who is running a desktop computer who has told me about having 2 drives fail at the same time.
I've seen production servers kill lots of drives at once; however, you're right, with a desktop you probably won't. If you really want to be extra redundant, you can do RAID1 with 3 drives: first, second, and a spare. If one fails, the spare will automatically step in and take over.

Quote:
Will the 3ware cards above work with linux in general, or only specific distributions? Probably the answer is linux in general, I guess, since the post says 3wards have drivers in the kernel.
The good thing about the 3ware cards is that the driver is in the Linux kernel itself. So yes, all Linux distros should be compatible.

Quote:
If a drive fails, how will I know it?
If you use mdadm with software RAID1, you can set it to email you all sorts of things: daily status reports, or, as you'd probably want, an alert when something is wrong. Plus it will tell you exactly what's wrong/what died, not just a generic warning.
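For example, mdadm's monitor mode can be pointed at an email address in /etc/mdadm.conf (the address and device names here are placeholders, not from my setup):

```shell
# /etc/mdadm.conf -- placeholders, adjust to taste:
#   MAILADDR you@example.com
#   ARRAY /dev/md0 devices=/dev/hde1,/dev/hdg1

# Run the monitor as a daemon; it mails on Fail/DegradedArray events.
# --test sends a test message per array so you know mail delivery works:
mdadm --monitor --scan --daemonise --test
```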

Quote:
If a drive fails in 4 years and my current hard drive is no longer manufactured will I need to buy 2 new drives
Not with software RAID. In fact, I have two 80 GB drives (they DO need to be similar in size unless you don't mind wasted space); one is an ATA100 and one is an ATA133. They're both Maxtors, but they're 2 different models with different sector counts when you do an 'fdisk -l'. So what I did was take the lower sector count and mimic that number when I partitioned the "newer" drive that had more sectors. Basically, I chopped the 80.2 GB drive down to match the 79.8 GB drive.
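To illustrate the sector-matching trick (the numbers below are made-up examples, not my actual drives):

```shell
# Two "80 GB" drives with slightly different capacities.
# fdisk -l reports total sectors; partition both to the smaller count.
SECTORS_SMALL=155845800   # hypothetical ATA100 drive
SECTORS_LARGE=156301488   # hypothetical ATA133 drive

# Use the smaller drive's sector count for BOTH raid partitions:
if [ "$SECTORS_SMALL" -lt "$SECTORS_LARGE" ]; then
    COMMON=$SECTORS_SMALL
else
    COMMON=$SECTORS_LARGE
fi
echo "partition both drives to $COMMON sectors"
# In fdisk, specify the end of each partition in sectors (or an
# equivalent cylinder count) so both mirror halves are identical.
```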

Quote:
To get my current data under raid, do I just connect the controller and copy my data over to the array then do something with grub and reboot? I mean, will this invole a complete reinstall of all of my software, or just a reconfiguration of mount points and boot parameters?
In my experience with HARDWARE RAID, yes, you will have to build the array from scratch. As far as I remember (this was in Windows, so it may not be exact), you have to copy the data off the drives first, then set up the array, then format it, then copy the data back to the array.

With software RAID you don't have to do that; you can build an array from a drive with existing data on it. However, you're going to be messing with the drive, so you might as well be smart and at least back it up first.
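The usual migration goes through a temporarily degraded array (a sketch with hypothetical device names and paths; back everything up first regardless):

```shell
# 1. Create a degraded RAID1 using the NEW blank disk plus a "missing" slot:
mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/hdg1

# 2. Put a filesystem on it and copy the existing data over:
mkfs.ext3 /dev/md0
mount /dev/md0 /mnt/new
cp -ax /data/. /mnt/new/

# 3. Once you're satisfied, repartition the old disk and add it;
#    mdadm rebuilds the mirror onto it in the background:
mdadm /dev/md0 --add /dev/hde1
cat /proc/mdstat   # watch the resync
```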


Hope that helps a little.
 
Old 04-30-2006, 06:42 PM   #11
CowLoon
LQ Newbie
 
Registered: Jan 2005
Posts: 18

Rep: Reputation: 0
So if I understand correctly, with hardware RAID I can't know if one of my drives has failed, except by rebooting and checking the BIOS. Yikes. Or is there some other way? If not, then it's kind of pointless to have hardware RAID...

I've now got my hardware (3ware 8006-2LP), but I'm scared. It says creating a RAID1 array deletes all data on both drives. So, if a drive fails and I replace the drive and rebuild the array, does it delete all my data? The manual doesn't address the thing that someone would be panicking about. Since it doesn't specifically say that it's not going to delete my data, I feel like I should assume that it will.

Also, the manual says RAID1 arrays are not "profiled" (meaning?) when created, or initialized after booting into the OS; when the firmware receives the first verify request, the initialization will begin. Huh? I don't know what that means. Maybe it means it's going to initialize and then verify the disks for... 10 hours? It's going to initialize at some point, some day? Do you know what it means? I'll call 3ware, I guess. I wanted to finish before I have to start work, which involves using this computer.
 
Old 04-30-2006, 10:17 PM   #12
boostdemon
LQ Newbie
 
Registered: Jul 2005
Location: Northern VA USA
Posts: 7

Original Poster
Rep: Reputation: 0
Quote:
Originally Posted by CowLoon
So if I understand correctly, with hardware RAID, I can't know if one of my drives has failed, except by rebooting and checking the bios to see if a drive has failed. Yikes. Or is there some other way? If not, then it's kind of pointless to have hardware raid...
I couldn't say for sure, but I'll bet there's some monitoring at the application level that will notify you if something fails. A truly redundant RAID1 can have a total of 3 drives: primary, mirror, and spare. The spare "jumps in" when one of the other drives fails. I only use 2 because I'm in a non-production environment and I don't have drives dying left and right, plus I'd rather have the extra spending money.

Quote:
I've now got my hardware (3ware 8006-2LP), but I'm scared. It says creating a RAID1 array deletes all data on both drives. So, if a drive fails, and I replace the drive and rebuild the array, does it delete all my data? The manuals doesn't address the thing that someone would be panicking about. Since it's not specifically saying that it's not going to delete my data, I feel like I should assume that it will.
That's one of the advantages of software RAID: you don't have to start from scratch to set it up. Creating a hardware array - at least on all the hardware arrays I've set up - required blank drives for the initial config. What you would have to do is set up the 2 drives for RAID1, then once it's built, copy your data onto them. From then on, the hardware RAID controller duplicates the data to both drives in real time.

As far as when/if a drive dies: unless you have hot-swap, it will require you to shut down the box, remove the dead drive and replace it, then boot back up (it should boot off the good drive, no problem), and the controller will copy over to the new drive in the background. I'm not sure if the controller requires you to move the good drive to a particular position on the cable; I would assume not, since the controller should know which drive was replaced and thus which direction to "recover" the data. But I would seek advice from someone more versed in hardware RAID.

Quote:
Also, the manual says RAID1 arrays are not "profiled" (meaning?) when created, or initialized after booting into the OS. When the firmware receives the first verify request... the initialization will begin. Huh? I don't know what that means. Maybe it means it's going to initialize and then verify the disks for... 10 hours? It's going to initialize at some point some day? Do you know what it means? I'll call 3ware I guess. I wanted to finish before I have to start work, which involves using this computer.
"Profiled" - read up on RAID5, which is most common in production servers. It easily took 16 hours to duplicate the 300 GB drives I was using under software RAID; I assume it would take less time with hardware, but I would highly recommend that if/when you need to rebuild, you leave the computer up and let it finish.


Good luck - you've got the best card for the job according to the Linux pros, so you should have plenty of support.
-Dana
 
Old 04-30-2006, 11:15 PM   #13
AwesomeMachine
Senior Member
 
Registered: Jan 2005
Location: USA and Italy
Distribution: Debian jessie/sid; OpenSuSE; Fedora
Posts: 1,593

Rep: Reputation: 162Reputation: 162
If the project is low budget, you don't want hardware RAID; hardware RAID is expensive. 3ware is the only company I know of that makes IDE hardware RAID cards. Software RAID is cheap, and pretty good. One thing to remember: just because the RAID comes from an add-on card or is built into a mobo doesn't make it hardware RAID. Most PC RAID is actually software RAID, except the 3ware cards. There may be others, but 3ware is all I know about.

Linux has pretty nice software RAID now. With RAID5 you can use four drives, and any three of them can rebuild the fourth. So if a drive fails, plug in another one and the array will rebuild itself. The only catch is that the replacement drive should be exactly the same: hardware RAID works with different size drives, but software RAID likes identical ones. So if you buy five drives, use four, and save the extra one for a drive failure, that should last you ten years.
 
Old 05-01-2006, 12:37 AM   #14
boostdemon
LQ Newbie
 
Registered: Jul 2005
Location: Northern VA USA
Posts: 7

Original Poster
Rep: Reputation: 0
Actually, I think that regardless of software/hardware RAID, your drives' partitions have to have the same sector count. For example, on my software RAID1 of two 80 GB drives - both Maxtors, but one ATA100 and one ATA133 - the drives had two different sector counts. One was, let's say, 7986 and one was 8123; what I had to do was "clip" the larger one to match the smaller drive using 'fdisk' in order for it to mirror correctly. I would guess a hardware RAID controller would have to do the same.

There are a few vendors other than 3ware, but 3ware has the best Linux support and has been tested on just about everything. True, it's more expensive, and it uses no system resources to copy. But its one advantage over software RAID for a "normal" user - that you couldn't boot from software RAID - has been a non-issue for a while now; software RAID with modern distros has no problem booting. Using grub or lilo to boot off 1 drive first, then assemble the array, makes it pretty easy to build an entire system on soft RAID with mdadm. Best investment I've made yet on this POS server I beat the crap out of daily.
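The usual trick for letting a software RAID1 system boot off whichever disk survives is to install grub on both members' MBRs (a sketch; the device and partition names are examples):

```shell
# From the grub shell: map each mirror half to (hd0) in turn, so
# "setup" writes a boot sector that works when that disk is the one
# the BIOS boots from.
grub <<'EOF'
device (hd0) /dev/hda
root (hd0,0)
setup (hd0)
device (hd0) /dev/hdc
root (hd0,0)
setup (hd0)
quit
EOF
```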
 
Old 05-05-2006, 03:29 PM   #15
CowLoon
LQ Newbie
 
Registered: Jan 2005
Posts: 18

Rep: Reputation: 0
Okay, I've finally installed the hardware. About my boggling over the word "initialize" in the manual, 3ware told me:

With RAID1 (not with RAID5) it "initializes" the disks in the background when you replace a defective drive, and it does not delete the data from your "good" drive(s).

They also told me that I will be able to boot from the array.

Here are the issues I had during installation:

The card is physically longer than my PCI slot. At first I thought I would have to get a different motherboard. Then I realized the card has a 64-bit edge connector, which is longer than a 32-bit PCI slot. Since the manual says the card can be used in PCI or PCI-X, I figured out that it's normal for some pins not to be connected, and sure enough it does fit in the PCI slot with some of the pins extending out past the end of the slot.

My next bit of confusion was about grub. Booting from Knoppix, my array shows up as /dev/sda and my existing drive shows up as /dev/hda. In grub on my hardware, (hd0) is /dev/hda and (hd1) is /dev/sda. I put a marker file in place so that I could run (in the grub shell):

find /wdc

in order to determine which drive was which.

So, the steps I took were:

Make a Knoppix CD.
Make a grub boot CD or floppy.
Install the card and the two drives for the array.
Create the array in the 3ware BIOS.
Boot from the Knoppix CD.
Mount the two drives (I got extra paranoid and mounted the existing drive read-only).
Partition the new drive (the array) (I have one big partition and one swap partition).
Set its boot flag.
Copy one to the other using cp -ax.
(I forgot to mkswap here, but did it later.)
Change fstab and /boot/grub/menu.lst on the new drive to use sda instead of hda.
Put a file somewhere on the old drive, e.g. /mnt/hda1/my-old-drive.
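The middle of that list, as shell commands from the Knoppix session, would look roughly like this (device names match my machine; yours may differ):

```shell
# From Knoppix, as root.  /dev/hda = old drive, /dev/sda = 3ware array.
mkdir -p /mnt/hda1 /mnt/sda1
mount -o ro /dev/hda1 /mnt/hda1    # old drive, read-only to be safe
# ...partition /dev/sda with fdisk and set the boot flag, then:
mkfs.ext3 /dev/sda1
mkswap /dev/sda2
mount /dev/sda1 /mnt/sda1

# Copy everything, preserving permissions and staying on one filesystem:
cp -ax /mnt/hda1/. /mnt/sda1/

# Edit the new drive's fstab and menu.lst to refer to sda, and leave a
# marker file on the old drive for grub's "find":
touch /mnt/hda1/my-old-drive
```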

Installing grub:

Boot from the grub CD:
root (hd1,0)
kernel (hd1,0)/vmlinuz root=LABEL=/
initrd (hd1,0)/initrd.img
boot

It booted from the array!

Boot from the grub CD again:
find /my-old-drive

It finds it on (hd0), so the array really is (hd1):

root (hd1,0)
setup (hd1)

Then I pressed reset and it booted from the array.

I forgot to set the clock correctly when I was in knoppix, so I think the timestamps were wrong on fstab and menu.lst.

The next hurdle was installing the 3dm utility software, which I downloaded from their web site. It comes with install scripts for Red Hat and SUSE only. I'm running Ubuntu, so the script prints a message saying it has created a config file (it hasn't) and that it has installed /usr/sbin/3dm2 (it hasn't).

It looks like the script would have created something like this, given that I want both logging and the web browser admin interface:

Port 99999999
EmailEnable 0
EmailSender BobbySue
EmailServer localhost
EmailRecipient BettyRalph
EmailSeverity 1
ROpwd .............
ADMINpwd ...........
RemoteAccess 0
Language 0
Logger 0
MsgPath /etc/3dm2/msg
Help /somewhere/doc/3dm

(The passwords are actually different from that, and the port was a port number I chose.)

So, I created that and put it in /etc/3dm2/3dm2.conf.
I put the message files in /etc/3dm2/msg and the help files in /somehwere/doc/3dm.
cp /etc/init.d/skeleton /etc/init.d/3dm2
Changed DESC=... to a description.
Changed NAME=... to NAME="3dm2".
Copied the 3dm2.x86 binary to /usr/sbin/3dm2.

Then when I started it, it complained:

/usr/sbin/3dm2# (0x0C:0x0005): Failed to start listening socket

3ware told me that the host must be an IP address, that the default port is 888, and to use https when loading the page.

So I removed the "Port" line and changed the host line to:

EmailServer 127.0.0.1

And now loading https://127.0.0.1:888 works.
 
  

