Gave up on HPT374 driver, suggestions please for a RAID 5 card or RAID 5-capable mobo?
I've been putting off transferring my fileserver's OS from W2K3 to Debian Etch for about half a year, and now, due to some unfortunate circumstances, the opportunity has presented itself. The file server runs an 800 GB RAID 5 formatted NTFS on 3 Western Digital WD4000YR 400 GB SATA drives connected to a Highpoint 1640 RAID card, a Western Digital WD3000JB 300 GB IDE drive also formatted NTFS, and an 80 GB IDE drive serving as a system drive for W2K3 (which I have other plans for). The file server died a couple of days ago, and I took it as the perfect opportunity to make the switch. I put the Highpoint 1640 into my main tower with the 800 GB NTFS RAID 5 and hoped Debian Etch (which I have dual-booting with XP Pro on that tower) would recognize the card and that everything would go smoothly.
I wish it had been that easy. The card/RAID was not detected, so I went to Highpoint's site looking for a Linux driver, hoping for a .deb package. I ended up compiling the driver from source (I can provide more details on request), but once I ran insmod and the RAID was detected and mounted, there were problems from the very beginning. When trying to play MP3 files from the RAID with a few different apps (Amarok and Noatun), a pattern emerged: the song would play fine for a few moments before sputtering with crackles and pops, and then the application would either lock up or stop the song as if it were finished. Using 'dmesg | tail -n 50' I found that 'hpt374 reset pid' or something similar to it (I'll need to double-check when I get home) appeared every time the problems with the MP3s occurred. I honestly don't know what that message in dmesg means, but I've seen some posts where people say that it is a "kernel reset".
A similar problem occurs if I try to transfer a file from the RAID to my home directory. The file will begin to transfer, but at an extremely slow speed, usually 32 KBps or so. It will then slow to about 30 bps and stay there until I cancel out of it. The same "hpt374 reset pid" message comes up in dmesg, and I can never find the referenced PID numbers in ps aux.
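For anyone who wants to see the symptom for themselves, this is roughly how I've been pulling the messages out of the logs while reproducing the stutter (the exact "hpt374" wording is from memory, so treat the grep pattern as an assumption and adjust it to whatever your dmesg actually prints):

```shell
# Show the most recent kernel messages after playing a file from the array
dmesg | tail -n 50

# Filter for the controller reset messages specifically
dmesg | grep -i hpt374

# In a second terminal, watch the kernel log live while copying a file
tail -f /var/log/kern.log | grep -i --line-buffered hpt374
```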
Looking around various forums, I don't think I've found a single instance of someone compiling the drivers from source for the Highpoint 1640 and having success. Because of this, I would appreciate it if someone could suggest a RAID card that supports SATA (the 1.5 Gbps variety) and RAID 5 and is known to play nicely with Linux, ideally Debian in particular. I don't mind hardware vs. software RAID, or even an onboard chipset on a motherboard, although I'd really prefer a separate card. Links to the hardware I'm going to use to replace the file server are as follows.
Motherboard: http://www.newegg.com/Product/Produc...82E16813128018 (flexible to changing this)
Not sure if all that info is necessary, but I'd rather give too much info than not enough =).
Thanks in advance to anyone who reads this, I hope you can help me with this annoying issue.
(edited on 12-27 21:21 GMT for clarity/paragraphs)
my buddy uses an Areca card that he's very happy with. he's about as picky as it gets when choosing hardware, so i'd say if he bought it, it's probably as good as it gets.
i believe it's this one -- http://www.newegg.com/Product/Produc...82E16816131004
but i am not 100% sure (i know it's 8 ports, 4 lanes, but models change in the course of a day in computer land).
full hardware cards are going to run you upwards of at least 300 bucks (a 4-port sata 2 on sale) to 1,000 (a 12-port sata 2 at regular price), and nowadays they will all be PCI-Express.
i didn't want to upgrade my rig just yet - thus leaving me with only regular PCI slots... so when i went to add 4 additional sata 2 drives, i was stuck with linux software raid (mdadm). but it's really not that bad; i've actually found it to work quite well.
also -- regarding your 'reset pid' problem...
sata / scsi devices don't get PIDs. every time my server reboots (luckily only when i tell it to, so far!) i get 4 lines in my system log like this...
-failed to set pid on devices sda
-failed to set pid on devices sdb
-failed to set pid on devices sdc
-failed to set pid on devices sdd
... it is annoying to see in the log, but it's not a "real" error, since sata drives, as i said, don't get PIDs, and it does not affect operation one bit.
i hate to say it, but if the os/kernel is continually trying to set or reset a pid on a device that doesn't reference one, then that alone could be causing all your problems.
when linux looks at the highpoint (if it's a hardware controller), it shouldn't see 4 drives or 3 drives or whatever... it should just see one drive - your 'imaginary' raid 5 drive. if the highpoint's a software card, then that's a whole mess of shit...
i was very picky about getting a software-raid card for my existing box... even though it's only a 10 to 20 dollar item, you still have to be careful, because it's taking care of all your sensitive data.
i found that silicon image chipset cards are the best-supported by the linux kernel -- and NATIVELY, meaning you don't need a driver, it's "built in" to the kernel (this includes most pre-made kernel rpms from red hat and others... so you don't have to compile the kernel from scratch if you don't want to).
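a quick way to check whether your running kernel already carries the silicon image driver, so you know you don't need to compile anything (the module names here assume the usual sata_sil driver for the 3112/3114 chips and siimage for the 0680 pata chip):

```shell
# Is a silicon image driver already loaded?
lsmod | grep -i sil

# Is the module at least available for this kernel?
modinfo sata_sil    # SATA 3112/3114 chips
modinfo siimage     # PATA 0680 chip

# Confirm the card is visible on the PCI bus at all
lspci | grep -i 'silicon image'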
i'm going on 2 months with this setup, and i've had only 1 problem. my cooling was horrible (i've got 7 hard drives, an optical, a zip, a floppy, and a bunch of other junk in a case that's way too small), so one of my drives crossed the 80 degree mark --- it was shut down by smart without me knowing it, and linux mdadm raid5 continued on with only 3 drives -- and thus no redundancy. well, i get the log printout every day in my email, and when i saw a drive was down i thought something horrible was wrong. after looking into it, i realized what happened, and rebuilt the array with one entry on the command line -- all done. now that my cooling is much better, i've got no issues at all.
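for reference, the rebuild really is a one-liner. assuming the array is /dev/md0 and the dropped drive came back as /dev/sdc1 (both names are just examples -- check yours first), it goes something like:

```shell
# See which device the array marked as failed/removed
cat /proc/mdstat
mdadm --detail /dev/md0

# Put the drive back in; mdadm kicks off the raid5 resync on its own
mdadm /dev/md0 --add /dev/sdc1

# Watch the rebuild progress
cat /proc/mdstat
```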
SATA RAID card for Linux? No doubt about it: 3ware. Why play around with cheap imitations when you can get a card that's truly supported by the manufacturer? If you are setting up RAID, then you must have important data to worry about; the price of a RAID card is hardly worth skimping on.
never had a good feeling about 3ware. they've been known to use different chipsets for the same model card (whilst still stating chipset 'a' on the box, when 'b' is on the product)... resulting in (well at least under windows) driver mayhem.
that being said, hardware raid does have its disadvantages... namely that catastrophic loss of the card would require a person to get an identical card in order to rebuild the array.
with software raid, the entire machine can go up in flames and (as long as the hard drives themselves are undamaged) you can build a new machine, install linux, install mdadm, and rebuild the array. very time consuming, but it lets me sleep at night.
So the silicon image cards are natively supported eh? I like 3ware but the cards are kind of pricey (far as I know). What would you folks think about the idea of putting in a tape drive and doing backups that way?
I can hardly buy into the idea that software RAID is better than hardware RAID; I mean, every enterprise out there is doing it wrong then... software RAID doesn't support hot swap, it loads the CPU, and it doesn't support many of the advanced features of RAID. There are quite a few advantages to hardware RAID over software RAID. Price isn't one of them, though.
Software RAID is OK and it has its place, but saying it compares to a proper hardware-based solution is just ludicrous.
Personally - I could barely afford the couple hundred I spent on drives; there was nothing left for a nice card. Given the choice, I'd rather run software RAID in Linux than make a worse compromise and buy cheaper drives and an economy hardware RAID card -- that's a recipe for disaster, and I think most would agree. Considering the original poster is likely not running an enterprise-grade server, I think he could get away with software RAID. Running 6 disks on 2 arrays, I'm seeing (and this is just from observation, I haven't done extensive testing or anything like that) roughly a 10% increase in CPU usage (on an older 1.67 GHz processor). I'd say that's acceptable.
... as far as rebuilding hardware arrays with another hw card -- i would honestly have to look into it further, as you're not the first person to state that they have actually accomplished this. but, at the same time, i've had many more people tell me they could not. i'm starting to wonder if it's not so much chipset/mfg but, rather, a certain 'keying' or 'structure' in which various raid architectures are laid out by different cards/chipsets -- basically, meaning that if two cards arranged data in the same manner, then they would be somewhat interchangeable. it's certainly something that warrants looking into.
Lastly, I've read and commonly heard that Silicon Image gear is natively supported -- almost all of it. I have personally experienced and implemented the Silicon Image 0680 (0680A) PATA EIDE chipset, and the Silicon Image 3112 (3114) 2 and 4 channel SATA 150 chipset -- both were picked up by pre-packaged linux kernels in Fedora Core and Trustix linux. Kernel versions tested on were 2.6.15(-1.2054) through 2.6.18(-1.2200). Again, these are only software cards.
As far as drivers for a hardware RAID card -- while the actual 'raiding' (if that's even a word) is done by the card, it has to present some sort of imaginary (multidisk) drive to the operating system... in order to do that, it requires some sort of basic driver. Some are natively supported by the kernel, some are not. It is definitely something that I would look into before buying a card (whether soft or hard RAID). Reason is, if you have a choice between two cards that both meet your needs and are the same price, and one is natively supported and one isn't, then you'd better pick the one that is, or else you're just making more work for yourself (usually).
Highpoint 366/370/372/374 is natively supported in Linux and has been for a long, long time. The module is hpt366. This module supports HPT366, HPT370, HPT372, and HPT374, so you do not need to compile anything. I have been using Highpoint controllers for a long time without any problems. I would not use Highpoint's RAID level 5 in the BIOS, though, because it does not work well and Linux software RAID works better; but I would use multiple cards just in case a controller fails (this goes for any controller brand too). I have not yet encountered a failure when using Highpoint controllers, even after six years of abuse.
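If you want to try the in-kernel driver instead of Highpoint's source tarball, it is roughly this (module name per the post above; the hde/hdf/hdg device names are only examples -- check dmesg for the names your system actually assigns):

```shell
# Load the in-kernel Highpoint driver
modprobe hpt366

# The drives should now show up as plain disks; check what the kernel found
dmesg | tail -n 20

# Then build the array with Linux software RAID instead of the card's BIOS RAID
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/hde /dev/hdf /dev/hdg
```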
I recommend not messing with Silicon Image controllers. First, they are not supported as well as Highpoint's. Second, they are quirky because DMA gets enabled and disabled. Third, some Silicon Image controllers corrupt data.
When setting up software RAID in the BIOS, you will have to use dmraid to find the RAID array in Linux. If you do not, you will just see the list of individual hard drives that you thought should be in a RAID array.
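A minimal dmraid session looks something like this (the name of the RAID set under /dev/mapper/ varies by BIOS vendor):

```shell
# List the RAID sets described by the BIOS-level metadata on the disks
dmraid -r

# Activate them; the array appears as a block device under /dev/mapper/
dmraid -ay
ls /dev/mapper/
```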
Software RAID level 5 is horrible if you are using it on a single-processor system. A dual processor will be better, but still not as good as a hardware RAID controller handling all the RAID 5 work. Software RAID is meant to be used with levels 0 and 1.
If you are going to store 800 GB or more of data, it is best to spend the money on hardware RAID controllers and backup media, since you already paid the money for the hard drives. All 3ware cards are supported since kernel version 2.6.14, so you can happily upgrade your kernel to fix any security problems the previous kernel had without being limited by the manufacturer. Areca is a good brand because their cards have very little performance penalty when a hard drive fails, but the kernel does not support Areca controllers yet, so you are stuck with the manufacturer supplying the software needed for your kernel. I would pick 3ware.
There is always a consequence of a controller failing. Most smart IT departments would use duplexing. This is costly when setting up a huge file server, but data is life, or is the business. Duplexing may have to use a combination of hardware RAID and software RAID: two hardware RAID controllers each running RAID 5, with the two arrays mirrored using software RAID. This provides RAID-15, and it has some advantages such as multiple reads and multiple writes, which will increase server performance.
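Assuming each hardware controller exposes its RAID 5 set to Linux as a single disk (say /dev/sda and /dev/sdb; the names and mount point are examples), the software mirror on top is one mdadm command:

```shell
# Mirror the two hardware RAID 5 sets with Linux software RAID ("RAID-15")
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb

# Filesystem and mount as usual
mkfs.ext3 /dev/md0
mkdir -p /srv/files
mount /dev/md0 /srv/files
```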
NTFS support in Linux is poor. I only suggest using it when you need to copy files to your Linux partitions or to access a document. Playing a sound or video that needs streaming shows the weak parts of NTFS support. Hopefully, you are mounting your NTFS partitions read-only.
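A read-only NTFS mount with the in-kernel driver is just the following (device name and mount point are examples):

```shell
# Mount the NTFS volume read-only
mkdir -p /mnt/ntfs
mount -t ntfs -o ro,umask=0222 /dev/sda1 /mnt/ntfs

# Or make it permanent with a line like this in /etc/fstab:
# /dev/sda1  /mnt/ntfs  ntfs  ro,umask=0222  0  0
```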
I recommend using ECC memory because it makes your computer more reliable. OCZ power supplies are OK, but not what I would use in servers; I suggest Enermax or Seasonic for servers. For 24/7 servers, I would use redundant power supplies. File servers rarely have the latest and fastest processor; usually they have multiple slower processors. File servers are also dependent on the quality of the NIC used.
I'm looking at setting up RAID 5 on my server (running the latest Kubuntu). It's a Celeron 2.4 with 512 MB RAM: 1 x 40 GB drive for the OS, and 3 x 250 GB drives in RAID 5.
I noticed you were saying that software RAID 5 is not a good idea... is this because it puts strain on the CPU and doesn't allow the computer to be useful for much else?
I realise hardware RAID controllers are always going to be better, but will it work well enough in a system that is solely a server?
Also... if you know of any pages devoted to setting up a raid 5 system that would be hugely appreciated!
Kubuntu mdadm raid5
I used this on my Dual P3 733 4x 250GB Fileserver.
Just remember to read the comments at the end.
Also good page http://www.die.net/doc/linux/man/man8/mdadm.8.html
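The short version of what those pages walk through, assuming your three data drives come up as /dev/sdb, /dev/sdc and /dev/sdd with one "Linux raid autodetect" (type fd) partition each (adjust the device names and mount point to your system):

```shell
# Build the RAID 5 array from the three partitions
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1

# Put a filesystem on it and mount it
mkfs.ext3 /dev/md0
mkdir -p /srv/raid
mount /dev/md0 /srv/raid

# Save the array definition so it assembles at boot (Debian/Ubuntu path)
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
```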