I've been using openSUSE 10.2 for a few months; it's the first distro I started with. I feel that SUSE has been a little buggy and that they have been slacking a bit on their updates: whenever I update my kernel, I restart my computer into a major error. Beryl doesn't work well for me, but maybe that's just Beryl. I was going to install Ubuntu and noticed that it gave me no option for RAID, so I installed it on my two laptops (a new one and an old P3 Vaio) and found that I really liked the Ubuntu workflow. So basically I'm looking for another distro with RAID support, preferably Ubuntu-based, that will be stable, fast, and good with Beryl. I saw Uberyl but I have no idea if there's RAID support for that either. Any suggestions?
Fedora Core has RAID installation (I assume you mean software RAID). It also ships a very current kernel (which will help with hardware RAID). I have used both RAID1 and RAID5 on Fedora for years now, and they have been very reliable.
If you are going to install, I suggest the re-spins, so that most of the critical maintenance is already applied (less to download when you do apply updates).
Actually, no, I'm pretty sure what I have is hardware RAID. I use a Silicon Image RAID controller, and there's also an nvRAID controller on my motherboard. I'm assuming the OS has to support this as well, because you need to be able to select the RAID array rather than one of my two installed hard drives. I have my RAID striped and mirrored; I think that's 5, but I don't remember. Keep the posts coming, I just got a little confused.
Striped and mirrored is RAID-10. You won't be selecting RAID during the installation, because hardware RAID is "invisible" to the OS. You should just see the RAID volumes (e.g. /dev/sda1). The more recent your kernel, the better your installation experience will be in that case.
If you still have the SuSE system operating, you could use "lsmod" to list the loaded modules. Otherwise, look in the HCL on this site to see if your device is listed. It could be a simple matter of getting into the pseudo console just before the partitioning phase of installation and modprobing the correct kernel module. For this to work, the install kernel needs to have been compiled with support for these modules. Even running the SuSE installation to that point and using lsmod might help identify what you need.
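To illustrate the lsmod approach, here is a rough sketch; the sample output and module names (sata_sil, sata_nv, raid1) are illustrative placeholders, not what your hardware will necessarily show:

```shell
# Illustrative lsmod output from a hypothetical SuSE session;
# your real module list will differ.
lsmod_sample='Module                  Size  Used by
sata_sil               12345  2
sata_nv                23456  0
ext3                  123456  1
raid1                  34567  1'

# Filter for likely storage-controller and RAID modules. On a live
# system you would pipe the real command instead:  lsmod | grep -E ...
echo "$lsmod_sample" | grep -E 'sata|raid|sil|nv'
```

On the real system, once you spot the matching module, you would try `modprobe sata_sil` (or whichever name matched) from the installer's pseudo console.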
The Beryl docs make it sound like it's still experimental, and it isn't something SuSE supports directly.
It isn't uncommon to need to recompile ndiswrapper or the nvidia driver after a kernel update.
To use the nvraid on the motherboard, you'll need to install the dmraid package. Or, you could just ignore that and use mdadm instead. Either way you'll get the same result, except that you have to do tricks to boot from a striped mdadm array.
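As a sketch of the two routes, assuming two placeholder partitions /dev/sda2 and /dev/sdb2 (do not run mdadm --create against disks holding data):

```shell
# Placeholders for your two drives' Linux partitions:
DISK1=/dev/sda2
DISK2=/dev/sdb2

# Route 1: use the motherboard's fakeraid metadata via dmraid:
#   dmraid -r    # list the RAID sets the BIOS defined
#   dmraid -ay   # activate them as /dev/mapper devices
#
# Route 2: plain Linux software RAID with mdadm. A mirrored (RAID1)
# array avoids the boot tricks a striped (RAID0) array would need:
CREATE="mdadm --create /dev/md0 --level=1 --raid-devices=2 $DISK1 $DISK2"

# Echoed rather than executed here, since running it would destroy data:
echo "$CREATE"
```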
As for your Silicon Image RAID controller, I suggest you start with the manufacturer's website and look for your RAID device. But be aware that if it only cost you a few bucks, it's probably not a real RAID either.
If you want to get better help on this subject, then you need to post actual product model numbers. A general comment such as nvraid or silicon image raid is really no information at all.
I actually had no idea that software RAID existed; I thought RAID only worked through a controller that splits the input between the hard drives. I'm going to switch to nvRAID, because Silicon Image isn't supported by Vista, so if I install Vista in the future I'll be covered. Actually, now that I've taken another look, I think the real RAID controller is the Silicon Image one, because it's a "controller"; nvRAID seems like software to me. Help me out on this; thank you in advance.
I'll put this to you as gently and directly as I can. A *real* RAID controller costs several hundreds of US dollars by itself. It does not consist of simply a chip on a motherboard. It is a chip plus dedicated memory, plus probably dedicated control software. As far as I'm aware, there are no motherboards with *real* RAID controllers embedded in them. Nobody would buy them, because they would cost in the $700-$1000 or more range. Furthermore, if anything broke on the motherboard, the money on the RAID part would be wasted. The motherboards that are advertised with RAID, whether they're SIL, or NVIDIA, or whatever, are FakeRaids. Sorry.
I'm not certain that post #6 is correct. There are two RAID packages that are commonly used. Quakeboy is correct in describing onboard RAID. Onboard ATA RAID is hardly different from software RAID, and the same is probably true for onboard SATA RAID. Even when you use a Promise RAID card, the parity checking is done by the CPU instead of the card. I was trying to find which kernel module supports your RAID controller and found this benchmark article that you may find informative: http://www.extremetech.com/article2/...1976992,00.asp
If you still want to use your onboard raid controller, then you should look in the HCL on this site or google for an answer on which kernel module you need to support it.
Otherwise, you could opt for Linux software RAID. SuSE uses mdadm, and configuring RAID with the YaST partitioner isn't difficult. You will want to label which drive is which and become familiar with the mdadm command, so that if one of the SATA disks goes bad you will be able to recover.
There is a Linux Raid howto on the www.tldp.org website. Also, if you have the mdadm package installed, there is web based documentation at /usr/share/doc/packages/mdadm/Software-RAID.HOWTO.html
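For example, a healthy two-disk mirror shows "[UU]" in /proc/mdstat; the snapshot below is a made-up sample of what a working RAID1 array looks like:

```shell
# Hypothetical /proc/mdstat for a healthy RAID1 mirror; on a real
# system you would simply:  cat /proc/mdstat
mdstat='Personalities : [raid1]
md0 : active raid1 sdb1[1] sda1[0]
      104320 blocks [2/2] [UU]

unused devices: <none>'

# "[UU]" means both members are up; "[U_]" would mean one has failed.
echo "$mdstat" | grep -q '\[UU\]' && echo "array healthy"
```

Checking this after a drive swap (and before declaring victory) is the habit the HOWTO recommends building.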
J, if you'll tell me which part you disagree with and why, I'll either accept your position or try to make my case for you. But as it turns out, both his options are mobo-based FakeRaids.
Very interesting; I had no idea about this. Wow, yeah, $1000 is a waste of money then. So before I start working on my computer again: what is the point of software RAID? Does it actually help performance when the drives are joined together, or does it just make them appear as one drive and cause you trouble? I appreciate the help.
My whole question is: can I achieve better performance, and is it worth using the onboard "fake" RAID?
The only advantage to using the card-based or motherboard-based "fake" RAID is if you plan to dual boot with Windows. The downside is that if you use the Nvidia onboard controller for RAID, you can't swing the drives to the Silicon Image controller and have it work; you'll need to re-init the drives.
Using Linux's software RAID is a non-proprietary implementation: the set of drives will work on any Linux system with any controller card. You get the advantage of RAID protection from a single drive/controller failure. RAID0 is not really RAID, since there is no redundancy, but you get improved performance from it (in exchange for a higher risk of failure).
For example, you can set your RAID controllers to JBOD (just a bunch of disks), and plug one drive into the Silicon Image controller and one into the NVraid controller. At installation, many Linux distributions can create a RAID1 array for you from the two drives. Now, if either drive fails, or one of the controllers fail, the system stays up and running. You also have good visibility into the operation of the drives and better recovery options, since everything is being done by Linux.
Read the references above fully to understand RAID operation before you get started. If your data has any value, you want RAID1. If you just want a fast disk system and don't care about the data (or you take daily backups), use RAID0.
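Once an array is created, it also helps to record it in /etc/mdadm.conf so it assembles consistently at boot. A minimal sketch, where the device names and the MAILADDR value are placeholders you would adapt to your own setup:

```
DEVICE /dev/sda1 /dev/sdb1
ARRAY /dev/md0 level=raid1 num-devices=2 devices=/dev/sda1,/dev/sdb1
MAILADDR root
```

With MAILADDR set, running mdadm in monitor mode can mail you when a member drive fails, so a degraded mirror doesn't go unnoticed.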
I am dual booting with Windows. Wouldn't it be better to have the operating systems on separate, non-RAID drives? I was using RAID 10, which is striped and mirrored, not 0 or 1. And lastly, what would you recommend? Because now I'm thinking this might not be worth it after all.
What we recommend doesn't really matter; all we can do is provide information, which you need to use as input into your decision. What I would recommend is installing Linux only, with a RAID1 configuration, and getting a console to play games on. But that might not suit you.
Lol, personally I would do exactly what you said, but I have brothers who aren't up for that. I do love Linux enough to do it. My whole decision comes down to risk: risk = crashing, crashing = not cool. But the question is how high that risk really is; I'm wondering whether it's a risk I should take, and whether it actually strains the CPU more. I'm going to go start a low-level format through the BIOS and unconfigure the RAID for now.