Old 03-22-2011, 11:07 PM   #1
wallab
LQ Newbie
 
Registered: Mar 2011
Location: Oklahoma
Distribution: None at present -- hope to learn enough to start using Linux
Posts: 3

Rep: Reputation: 0
Moving from Windoze Hardware RAID to LINUX software RAID?


Ok, here goes…

I'm so much of a Newbie that I can't even rent a clue as to the best distro for what I have in mind. I've spent several days reading, and I anticipate several more months before I can even begin to talk cogently on the topic of LINUX.

I currently run a home server on an XP-64 box housing a RAID 5 of six 1 TB WD drives on a Promise EX8350 card. I turned TLER off on all of them, but drive 4 consistently drops out when the server goes down (either with the power button or by lightning, and we get a lot of the latter in the spring here in Oklahoma). If I can reboot the server fairly quickly while the drives are all still hot, drive 4 generally shows up, but if I cannot reboot until they get cold (overnight, for example, waiting for the city to fix the power lines), drive 4 does not show up. So I let the server run for an hour or so to heat up the drives, then I reboot, and drive 4 shows up. If it appears as a separate array (JBOD), I have to destroy that so the drive is available for a rebuild once the OS finishes booting.

I'm getting a tad tired of that whole process, so I bought five new 1 TB drives of the same model as those already in the array. Unfortunately, in the interim between my first and second purchases, WD removed the ability to turn off TLER (to push people toward paying an extra $100 or so per drive for the models they market specifically for RAID). Now if I add one of the new drives to the array and it gets dropped, and then drive 4 also fails, I'll lose nearly 5 TB of movies, images, music, etc. To make matters worse, the array is almost full and filling up quickly (I buy a lot of DVDs; it's a weakness).

Incidentally, all five disks turned out to be bad: two were DOA right out of the box, one died during testing, and the last two died the day after I RMAed the first three to Newegg, so I got an RMA from WD for them. I used to buy nothing but WD; now I'm not so confident in the company.

So to make a long story short (I know: too late; I spent over 30 years as a public school teacher, so what do you expect?), I have a sixth box that I'm putting together:

Gigabyte GA-K8N51GMF-9 motherboard
Onboard video: GeForce 6100 chipset (I don't like cards in servers)
64-bit AMD processor, Socket 939 (yeah, it's old, but it still works)
1 GB of RAM
4 SATA 3 Gb/s ports: nVidia nForce 430 controller
Onboard LAN (10/100/1000): nVidia nForce network controller
1 DVD drive (IDE)
1 SATA HDD at present, 500 GB (system drive)

My question is simple: which distro should I download and begin to learn if I want to build a software RAID 5 out of several (5-8) 1 or 2 TB disks?

I think software RAID is the way to go for me because of my card's annoying habit of dropping that one drive, and I really don't want to replace it with a drive on which I cannot turn off TLER. In my research, I ran across a post on another board (wish I could find it again) in which a fellow said he went the software route with mdadm because he could adjust how long a drive is allowed to take to respond before it is dropped from the array: set the time long enough, and even drives with TLER turned on will not get dropped. My Promise card does not allow that. That's why I want to see whether there are advantages to the software route and bypass the card altogether. Shame, really, since I paid nearly 500 bucks for it. Oh well; that was several years and about three RAIDs ago now.
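From what I've read, the basic approach with mdadm would look something like this; I'm only sketching it from documentation, and the device names and the 180-second timeout are just example values I haven't tested, not a recipe:

Code:
# (run as root) Create a 5-disk RAID 5 array -- device names are examples
mdadm --create /dev/md0 --level=5 --raid-devices=5 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf

# Watch the initial build/resync
cat /proc/mdstat

# Raise the kernel's per-disk command timeout (default is 30 seconds) so a
# drive that spends a long time on its own error recovery isn't given up on too soon
for d in sdb sdc sdd sde sdf; do
    echo 180 > /sys/block/$d/device/timeout
done

# Record the array so it assembles at boot (some distros use /etc/mdadm/mdadm.conf),
# then put a filesystem on it
mdadm --detail --scan >> /etc/mdadm.conf
mkfs.ext4 /dev/md0

If I understand correctly, that timeout setting does not survive a reboot, so it would have to go in a startup script or a udev rule.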

Oh, and about how I plan to plug more drives into the board than its connectors can accommodate: I have (if I can find the blasted thing) an old HighPoint PCI-X RAID controller that will fit into a regular PCI slot; it just sticks out about a foot beyond the back end of the slot. I found long ago that if I don't install any drivers for it and don't create a RAID (not even a JBOD), then the BIOS recognizes all the drives, and so does Windows. I hope that LINUX will too. But hoping with an OS is like programming COBOL (which is my job) without looking at the screen; I get enough compile and S0C7 errors as it is.
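From what I've gathered, checking whether Linux sees the card and its drives should only take a couple of generic commands once a system is booted (nothing HighPoint-specific here, and I haven't tried this yet):

Code:
# Is the controller visible on the PCI bus?
lspci | grep -i -E 'raid|highpoint'

# Did the kernel attach the disks, and what are their device names?
lsblk
dmesg | grep -i ata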

Finally, transfer speed is moderately important. I maintain a total of 5 machines (Win XP-32, Win XP-64, Server 2003) in a home network (wired only, please) behind a Linksys BEFSR41 router. All the computers are connected to a NetGear gigabit switch, which is plugged into the router, which is plugged into the cable modem. One of the computers is an HTPC in the living room connected to the TV. We use that to watch movies, etc., which are stored on the server in the computer room. Some of those movies are 1080p, and watching them over the network is what makes speed a moderate concern. The network will handle it OK if the RAID is fast enough.
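Rough numbers as I understand them (ballpark figures, not measurements from my setup): a 1080p Blu-ray rip peaks at roughly 40-48 Mbit/s, which is only about 5-6 MB/s, while gigabit Ethernet moves on the order of 100 MB/s in practice and a single modern SATA drive can read faster than that on its own. So as long as the array can sustain even one drive's worth of sequential reads, a single 1080p stream should have plenty of headroom.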

I realize this is a long first post. I hope it's not too long; I wanted to get in as much info as possible so I can get started learning.
 
Old 03-23-2011, 02:25 PM   #2
NyteOwl
Member
 
Registered: Aug 2008
Location: Nova Scotia, Canada
Distribution: Slackware, OpenBSD, others periodically
Posts: 512

Rep: Reputation: 139
That one drive that consistently drops out sounds like either the drive itself is failing or it's getting inconsistent power on its power lead at startup. I assume you have tried replacing drive 4?
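Once you have the drives hanging off a plain (non-RAID) controller under Linux, you can also pull their SMART data to see whether that drive really is unhealthy; something along these lines, from the smartmontools package (/dev/sdd is just a placeholder name):

Code:
# Quick health verdict, then the full SMART attributes and error log
smartctl -H /dev/sdd
smartctl -a /dev/sdd

# Kick off an extended self-test; check the results later with -a
smartctl -t long /dev/sdd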

Software RAID is virtually identical across distros, so that shouldn't be a large issue; it's one of the things that makes migrating data drives between systems easy with software RAID. I use Slackware for both desktop and server, and since this is for a server, a basic install without all the extra desktop fluff would make a nice solid choice. For a basic server configuration most any mainstream distro would probably do as well. My second choices for Linux would be Debian, OpenSuSE, or CentOS. You might also consider FreeBSD or OpenBSD; both make solid servers, though I have no experience with software RAID on those platforms.
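That portability comes from the metadata mdadm writes onto the member disks themselves; on a new box (or a different distro) an existing array can usually be picked back up with something like this (device name is just an example):

Code:
# Inspect the md superblock on one member disk
mdadm --examine /dev/sdb

# Scan all disks, assemble whatever arrays are found, and check the result
mdadm --assemble --scan
cat /proc/mdstat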

Though they are a popular choice, and without trying to start an HDD war (grin), I have always found WD to have a high failure rate. All I have used for many years are Seagate drives, and they have worked well. For RAID arrays it is generally worth paying a bit extra for the enterprise-class drives designed for RAID. If your data is important enough to go on a RAID 5 array, it's worth a few extra bucks for proper hardware. While desktop-class drives often work, sometimes for a considerable time, they not only lack some of the features of a RAID-class drive, they really aren't designed for 24/7 operation.

I used to do COBOL programming but haven't in some time, having migrated my last COBOL client many years ago. Nice to see there are still some of us left.
 
Old 03-23-2011, 11:45 PM   #3
wallab
LQ Newbie
 
Registered: Mar 2011
Location: Oklahoma
Distribution: None at present -- hope to learn enough to start using Linux
Posts: 3

Original Poster
Rep: Reputation: 0
Hey, thanks for your response. I appreciate hearing from someone who knows about COBOL. I'm in the midst right now of trying to do some really complicated stuff, so I won't have much time at home to experiment with LINUX for a couple of weeks or so.

About Slackware, I installed that about 15 years ago and ended up staring at the screen, wondering how to install software or do even elementary stuff, but there were just no manuals I could understand. I think LINUX has now developed enough of a following that I might delve back in to see how much I can learn from manuals and support groups like this one.

Jargon is one thing I need to catch up on. Another is the directory structure common to LINUX installations. I don't mind doing things by command line because I started using computers long before desktops came out; however, I have to say that I do appreciate a good graphical interface if it's clean and easy to navigate and above all dependable (I think I just ruled out WINDOZE with that last observation). One of the first I recall right offhand was GEM. I ran it on a Commodore 64 (one of which is still in my closet today) and I recall having to change 5 1/4 inch floppies every few seconds, but running a cursor with a joystick (the mouse had not yet become popular) was a real kick, an extremely slow kick, but a kick nonetheless.

Once I bought my first 386, I was hooked on the idea of a graphical interface which came pre-installed. It's unfortunate that it was such a piece of bloatware. But now I've fallen behind the movement to get an OS to act the way I want, and I think that's one aspect of the LINUX world that I like. Reinstalling WINDOZE is not much fun, and I've done that probably several hundred times by now. But remaking a LINUX OS sounds like it might be a lot more interesting because the outcome is never the same as the last time if something new is added.

About WD's hard drives, I fell out with Seagate a number of years ago and began to depend heavily on WD. But now it seems some sort of pendulum has swung back to favor other choices. I'll have to look into Seagate's drive line to see what I can find. As for enterprise drives, I leave my servers on 24/7, and the WD drives that I bought several years ago are still going strong. But this last batch I bought really worries me.

Again, thanks for the info. I'll get around to downloading Slackware's latest release along with any manuals and whatnot I can find. And if I have any questions I'll come back here to ask them, and when I have some successes, I'll come back here to share the joy.

Oh, I forgot to mention that the reason I bought that last batch of drives was specifically to replace drive 4 and to expand the RAID to 8 drives, but as I said in my first post, I really don't want to trust those new drives with my data. As backups, OK, but not as workhorses.
---------------------------
After all is said and done, there's usually more said than done.

Last edited by wallab; 03-23-2011 at 11:51 PM.
 
  


