Old 11-30-2008, 09:27 PM   #1
DarkFlame
Member
 
Registered: Nov 2008
Location: San Antonio, TX, USA
Distribution: Ubuntu Server 8.10 & SAMBA 3.2.3
Posts: 158
Blog Entries: 1

Rep: Reputation: 30
RAID array troubles


I'm close to asking questions, but still have a trick (or 2) up my sleeve to see if I can get this thing working. But, if anyone has any suggestions, I'm ALL EYES & EARS!!!

I'm setting up my first Linux box (OpenSuSE 11.0 with the latest Samba server) to use as a file repository/server here at the house. Both our PCs are WinXP Pro, with data on them that I want to move off so that I can take the old IDE drives out of the brand-new screamin' boxes I just built and install new SATA drives.

The first thing I did was use the hardware RAID that comes on the ASUS M3A78-EM motherboard (3.2 GHz AMD Athlon dual-core 6400 with 4 GB RAM). It only does RAID 0, 1, & 10, so I went with 10 because, with the four 250 GB HDDs, it seemed like the best mix of room and data protection.

I got Linux up and working and even managed to get the RAID working, but somehow got Linux installed on the RAID when I wanted it on the separate 80 GB drive. I removed the data cables from the four RAIDed drives & reinstalled OpenSuSE on the 80 GB drive, and it works.

HOWEVER, when I reconnect the four 250 GB drives, the OS still thinks they are RAIDed, even though I turned off the hardware RAID and have them configured as plain SATA drives. To make matters worse, it looks like three of them are part of the "465 GB" drive and the fourth one isn't, though I can see all four physical drives just fine.

Deleting the RAID entries in YaST's Partitioner doesn't work; they keep coming back BEFORE I can even exit the app. It doesn't matter which of the five SATA connectors on the motherboard I use for which drives - they still reappear. I've even used DOS to fdisk and format the drives, and I've done several reinstalls of OpenSuSE on the 80 GB drive just to make sure it's nothing left over from the OS, but Linux still thinks they are RAIDed (and, still, just three of the four). Even worse, when I try to create another RAID or delete anything, the system gives me cryptic error messages ("Error# 12005", or something like that) and won't let me finish the process.

Currently, I'm using the motherboard's "Secure Erase" function in the RAID controller BIOS to completely wipe the drives - because after all the OpenSuSE installations I've done, I'm pretty sure there's still something on the HDDs that is making the OS think they're still RAIDed.
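If the culprit really is metadata that the RAID BIOS wrote onto the disks themselves, there may be a much faster route than a full Secure Erase. This is only a rough sketch I haven't tried myself - it assumes the dmraid package is installed and that /dev/sdb through /dev/sde are the four data disks (double-check your own device names first, the erase step is destructive):

Code:
# list any BIOS/fake-RAID metadata dmraid can find on the attached disks
dmraid -r

# show which RAID sets it would assemble from that metadata
dmraid -s

# erase the fake-RAID metadata from one member disk (repeat per disk)
dmraid -rE /dev/sdb

Secure Erase should get to the same place eventually; it just rewrites the whole drive instead of only the small metadata area.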

IF I can get this fixed, I'm going to use OpenSuSE's Partitioner to set up a RAID5 array so that I can have almost 700 GB of storage space. But getting over this hump is a pain.

By the way, it's been 30 minutes since I started the Secure Erase, and the drive is up to 18% erased. We're cooking with gas, now!

Thanks,
David
San Antonio, TX
 
Old 12-01-2008, 12:53 AM   #2
DarkFlame
Member
 
Registered: Nov 2008
Location: San Antonio, TX, USA
Distribution: Ubuntu Server 8.10 & SAMBA 3.2.3
Posts: 158

Original Poster
Blog Entries: 1

Rep: Reputation: 30
Ok, so I'm not a COMPLETE idiot, but I'm close!

After spending 3.5 hours (reading the Linux/OpenSuSE "bible" and) watching the motherboard's RAID controller "secure erase" just one of the drives, I had an idea I wanted to try. I reconnected all the drives, went back into the motherboard's RAID controller setup, and looked at the disk configuration. Sure enough, there it was: the RAID10 array. So I removed the four drives from the RAID10 array and restarted the system.

As sure as it's 5 o'clock SOMEWHERE, the Partitioner no longer sees the RAID array (even though the controller had been turned off) and just sees the four disks. So, as I sit here typing this "conclusion," I'm watching the other monitor show the formatting progress on what will be a 698.6 GB volume. YIPPEEEE!!!

Now, I can go back to trying to make my WinXP Pro boxes get access to the files on the Linux server.
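If the Samba side is as straightforward as it looks, the core of it should be roughly the following - just a sketch on my part, with /srv/data and the user name david as placeholders:

Code:
[data]
    comment = Family file share
    path = /srv/data
    read only = no
    valid users = david

That section goes at the end of /etc/samba/smb.conf; then the Samba user gets created and the service restarted:

Code:
# the matching Linux account has to exist before smbpasswd can map it
smbpasswd -a david
rcsmb restart     # openSUSE shortcut for /etc/init.d/smb restart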
 
Old 12-01-2008, 01:34 AM   #3
Quakeboy02
Senior Member
 
Registered: Nov 2006
Distribution: Debian Linux 11 (Bullseye)
Posts: 3,407

Rep: Reputation: 141
Glad you made some progress. Just FYI, motherboard RAIDs are not hardware RAIDs; they're software RAIDs. The motherboard manufacturers are pretty vague about this, so it's up to you to find out about it someplace like here at LQ. Granted, you can make an array with the BIOS features and have it seen in recent distros, but that's only because they include the dmraid package.

What dmraid does is read the array configuration information off the disks (that the BIOS put there) and present the collection of disks the same way the BIOS did. This works fine, but the downside is that if there are any problems, you have to exit Linux and go into the BIOS to do any needed array management. I don't think dmraid has the maintenance part working, but I could be wrong. Give this issue due consideration before actually using a BIOS-configured array. If you want to share an array between Linux and Windows, you're pretty much stuck with dmraid. If not, though, mdadm is much better for Linux and offers maintenance functions from within Linux.
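To make that concrete: with mdadm the whole life of the array stays inside Linux. A rough sketch of a 4-disk RAID5 (the device names /dev/md0 and /dev/sdb1 through /dev/sdf1 are only examples - adjust to your layout):

Code:
# create a RAID5 array across four partitions
mdadm --create /dev/md0 --level=5 --raid-devices=4 \
    /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1

# watch the initial build and check overall health
cat /proc/mdstat
mdadm --detail /dev/md0

# record the array so it is assembled automatically at boot
mdadm --detail --scan >> /etc/mdadm.conf

# the kind of maintenance dmraid can't do from inside Linux:
# fail out a bad member and add a replacement while the array stays up
mdadm --manage /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1
mdadm --manage /dev/md0 --add /dev/sdf1

After that, /dev/md0 takes a filesystem and a mount point like any other block device.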

Good luck!
 
Old 12-01-2008, 06:47 AM   #4
archtoad6
Senior Member
 
Registered: Oct 2004
Location: Houston, TX (usa)
Distribution: MEPIS, Debian, Knoppix,
Posts: 4,727
Blog Entries: 15

Rep: Reputation: 234
I believe the generally accepted term, except by their mfrs., for mobo based RAID is "fake RAID" -- from http://en.wikipedia.org/wiki/RAID#Fi...ver-based_RAID:
Quote:
These controllers are described by their manufacturers as RAID controllers, and it is rarely made clear to purchasers that the burden of RAID processing is borne by the host computer's central processing unit, not the RAID controller itself, thus introducing the aforementioned CPU overhead. Before their introduction, a "RAID controller" implied that the controller did the processing, and the new type has become known in technically knowledgeable circles as "fake RAID" even though the RAID itself is implemented correctly.
The whole Wikipedia RAID article

Your decision to use the mobo ("fake") RAID chip simply as extra SATA controller capacity and to implement Linux s/w RAID is, IMO, correct.
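If you ever want to confirm which flavor a given box is actually running, a quick check from Linux (rough sketch) is:

Code:
# what the controller hardware really is
lspci | grep -i -E 'raid|sata'

# fake-RAID sets that dmraid knows about (BIOS-defined arrays)
dmraid -s

# native Linux software RAID (md/mdadm) arrays
cat /proc/mdstat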

Last edited by archtoad6; 12-02-2008 at 06:05 AM. Reason: missing word
 
Old 12-01-2008, 07:14 AM   #5
DarkFlame
Member
 
Registered: Nov 2008
Location: San Antonio, TX, USA
Distribution: Ubuntu Server 8.10 & SAMBA 3.2.3
Posts: 158

Original Poster
Blog Entries: 1

Rep: Reputation: 30
That the Asus motherboard RAID controller is really software now makes a LOT of sense, and explains much of the trouble (and frustration) I was having trying to delete the array in Linux. The Asus documentation explains that the SB700 chipset only does RAID0, RAID1, & RAID10, and that RAID5 (which is what I wanted anyway) requires the SB650 or SB750 chipset. And, knowing that ANY disk controller is a combination of H/W & S/W, I figured that the SB700 chipset WAS the H/W that contained the S/W, and that it was better (performance- and resource-wise) to have the on-board chipset do the RAID controlling.

But I really, REALLY wanted the RAID5 configuration, and it's available in the OpenSuSE 11.0 distro. So I know that NOW I'm really using a S/W RAID controller, and it's probably not as efficient as a true H/W controller. In our environment (a small family network), though, performance is not my main concern; cost and data stability in the event of a crash matter more. I know, BACKUP is paramount for data protection, but a RAID5 array is, imho, the next best thing & is MUCH better than having data scattered across desktops around the network.

So, there I am. I appreciate the posts/comments - they are as educational as the effort it took to make this work.
 
Old 12-02-2008, 06:25 AM   #6
archtoad6
Senior Member
 
Registered: Oct 2004
Location: Houston, TX (usa)
Distribution: MEPIS, Debian, Knoppix,
Posts: 4,727
Blog Entries: 15

Rep: Reputation: 234
You're welcome, glad you got it working.

If you're ever in Houston, HLUG hangs out at HAL-PC -- come & visit us.
 
  

