Linux - Hardware: This forum is for Hardware issues. Having trouble installing a piece of hardware? Want to know if that peripheral is compatible with Linux?
I'm looking to build a Linux RAID5 media/file server. The motherboard I'm looking at using is an nForce2 IGP board with 2x ATA100 and 2x SATA150 ports. There are several ways I'm thinking of setting this up, hard-drive-wise.
1. One small OS IDE disk, three large RAID5 data disks (2x SATA, 1x IDE)
2. One small OS IDE disk, four large RAID5 data disks (2x SATA, 2x IDE)...this requires that one of the IDE RAID5 disks share an IDE channel with the OS disk.
3. Four large RAID5 data/OS disks (2x SATA, 2x IDE)...basically making two partitions on every disk, one small (OS, swap, etc.) and one large (RAID5).
Since I have no experience with Linux software RAID5, I don't know how each of these setups would work. I want to build the largest array I can for the least money, so options 2 and 3 look really good, but I'm worried about their performance.
Potential system specs:
Sempron 2500+ 333MHz FSB
1x512MB DDR (easy to add another stick if needed)
nforce2 IGP MB
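For what it's worth, option 1 above would look roughly like this with mdadm once the hardware is in place. This is a sketch only; the device names (/dev/sda, /dev/sdb, /dev/hdc) and the mount point are assumptions for a 2x SATA + 1x IDE layout, so substitute your own:

```shell
# Assumed layout: /dev/sda1 and /dev/sdb1 on SATA, /dev/hdc1 on its own
# IDE channel, each a single full-size partition of type "fd"
# (Linux raid autodetect).

# Create the three-disk RAID5 array:
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/hdc1

# Watch the initial resync, then make a filesystem and mount it:
cat /proc/mdstat
mkfs.ext3 /dev/md0
mkdir -p /mnt/media
mount /dev/md0 /mnt/media
```

Note the array is usable while the first resync runs, just slower.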
Just buy a 4-port SATA card and put 4 SATA drives on it. If a data drive fails on the same IDE channel as the OS drive, the computer can just freeze and possibly damage the OS drive.
The easiest approach, and the one with the least processor usage, is to use multiple RAID 1 arrays and then combine them into LVM. As you get more money and need more space, you can add it without changing a RAID 5 setup. IMHO, a fixed layout like RAID 5 can be hard to expand, because the data capacity is very large and backups get expensive. Using a combination of RAID 1 and LVM will give you redundancy without taking a lot of system resources.
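The RAID 1 plus LVM idea sketched above would go something like this. The device names (/dev/sda1, /dev/sdb1, /dev/hda1, /dev/hdc1) and the volume names vg_media/lv_media are assumptions, not anything from the poster's setup:

```shell
# Two mirrored pairs (one SATA pair, one IDE pair, assumed):
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/hda1 /dev/hdc1

# Pool them with LVM:
pvcreate /dev/md0 /dev/md1
vgcreate vg_media /dev/md0 /dev/md1
lvcreate -l 100%FREE -n lv_media vg_media
mkfs.ext3 /dev/vg_media/lv_media

# Later, when money allows, add a third mirror and grow the volume
# without touching the existing arrays:
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1
pvcreate /dev/md2
vgextend vg_media /dev/md2
lvextend -l +100%FREE /dev/vg_media/lv_media
# then grow the filesystem on top (e.g. resize2fs for ext2/ext3)
```

The trade-off is capacity: RAID 1 gives you half your raw disk space, where RAID 5 gives you n-1 disks' worth.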
I have a RAID1 set up with mdadm - unmounted, not in use, but with data on it (2 x 250 GB IDE).
I set up FC3 with LVM.
I have a general understanding of LVM, but the details...?
In my /dev there are many md entries (md0-md31). What the hell are those, and where did they come from - LVM?
My RAID1 was set up outside of LVM, and I am unsure how to mount it correctly. I have been having issues with crashes, which I am not used to in Fedora, so I thought I'd better check this issue first. The crashes could easily be related to smb or a possibly failing hard drive, but I lacked a little understanding in the LVM area and thought I'd try that first.
The above seems to work, but with LVM there, the /dev/md0 concerns me - it is already there. Am I not creating it correctly with mdadm? I tried using /dev/md32, but it is rejected.
Any good articles on LVM and the /dev entries it creates and relates to would sure be a help.
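Activating and mounting a RAID1 array that was built outside LVM, as described above, can be sketched like this. The member partitions /dev/hdb1 and /dev/hdd1 (for the two 250 GB IDE disks) and the mount point are assumptions:

```shell
# The /dev/md0../dev/md31 nodes are pre-created device nodes;
# --assemble just activates the array on one of them, it does not
# make a new node (which is why /dev/md32 is rejected):
mdadm --assemble /dev/md0 /dev/hdb1 /dev/hdd1

# Confirm the array is up, then mount it:
cat /proc/mdstat
mkdir -p /mnt/raid1
mount /dev/md0 /mnt/raid1

# To have it assembled automatically at boot, record it in mdadm's
# config file (Fedora uses /etc/mdadm.conf):
mdadm --detail --scan >> /etc/mdadm.conf
```

If mdadm complains the node is busy, check that the FC3 installer's own LVM setup isn't already using it (`pvdisplay` will list LVM's physical volumes).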
The /dev/md0..../dev/md31 entries are just pre-created device nodes for your software RAID arrays; they come from the distribution, not from LVM, and are harmless until an array actually uses one. An LVM volume group will usually have "vg" in its name, but it can also be named mydata, media, or any name that you can remember. If a drive goes down, you have to first bring down the LVM volumes and then the RAID array that contains the failed disk.
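Following the order described above (LVM down first, then the array), replacing a failed disk might look like this. The names vg_media, /dev/md0 and /dev/hdc1 are assumptions for illustration:

```shell
# 1. Stop using the logical volume and deactivate the volume group:
umount /dev/vg_media/lv_media
vgchange -an vg_media

# 2. Mark the bad disk failed and remove it from the array:
mdadm /dev/md0 --fail /dev/hdc1
mdadm /dev/md0 --remove /dev/hdc1

# 3. After physically replacing the drive, add the new partition
#    and let the array rebuild:
mdadm /dev/md0 --add /dev/hdc1
cat /proc/mdstat

# 4. Reactivate the volume group and remount:
vgchange -ay vg_media
mount /dev/vg_media/lv_media /mnt/media
```

The rebuild in step 3 runs in the background; /proc/mdstat shows its progress.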