Linux - Hardware: This forum is for Hardware issues.
I have new parts: a 3ware 9650SE-8 and 2x 1000 GB SATA II drives. Over time I expect to add more drives, starting with RAID0 striped for 2 TB, then probably adding 2 more drives and moving to RAID5; beyond that, who knows. The 3ware software seems to have good support for rebuilding RAID units on the fly, though it's very slow (~40 hours to migrate from a single 1 TB disk to a 2-drive RAID0). Since I'm still waiting for this first test to complete, I'll take a leap of faith that I can then expand the filesystem when it's done.
Primary usage will be MythTV video storage, plus various backup spaces and other miscellaneous space-hog needs. I'd like the flexibility to carve out multiple filesystems if I need them. I'll most likely use XFS for the big video filesystem. The rest of the system is a pretty standard Mandriva Opteron server, starting with 2 GB RAM.
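For the leap-of-faith step mentioned above, XFS does grow online: once the underlying device (or logical volume) has more space, one command resizes the filesystem while it's mounted. A rough sketch, assuming the video filesystem is mounted at /var/video (the mount point is just an example):

```shell
# XFS grows while mounted; note there is no shrink, online or offline,
# so size the filesystem conservatively and grow it as needed.
# /var/video is a hypothetical mount point for the video store.
xfs_growfs /var/video

# Confirm the new size
df -h /var/video
```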
My question is: should I layer LVM on top of the RAID? And if so, more specifically:
1a. If I understand the LVM model, this RAID device (/dev/sdb) would be the physical volume (PV). Most LVM guides I've seen talk about adding additional PVs to a volume group (VG) to expand a filesystem. I haven't found one that talks about expanding a PV, which is what would happen if I add more drives to the same 3ware unit, which defaults to 2 TB in size. Can/should I do this?
.. OR ..
1b. The other approach I could see is limiting the 3ware to, say, 1 TB (or smaller) unit sizes; then each drive I add would show up to the OS as a new /dev/sdX device, allowing me to make each one a PV and therefore follow most LVM howto guides. Perhaps a bit more flexibility? If this theory is sound, what are the drawbacks/pitfalls of doing it?
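To frame 1a versus 1b, here is roughly what each path looks like in LVM commands. Option 1a is possible: pvresize grows an existing PV in place once the kernel sees the larger device. Option 1b is the textbook vgextend path. The device names and the volume group name vg_media are made up for illustration:

```shell
## Option 1a: one big 3ware unit, grown in place.
# After expanding the unit with the 3ware tools and making sure the
# kernel has picked up the new device size:
pvresize /dev/sdb            # grow the PV to fill the larger device
vgdisplay vg_media           # the new space appears as free extents

## Option 1b: several smaller units, each its own PV.
# A new 3ware unit shows up as a new disk, e.g. /dev/sdc:
pvcreate /dev/sdc            # initialize it as a PV
vgextend vg_media /dev/sdc   # add it to the existing volume group
```

Either way the free space lands in the same volume group, so the choice is mostly about flexibility and failure domains rather than LVM capability.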
2. It seems like most people suggest I not partition the 3ware device with fdisk, but instead give LVM the whole device. Correct?
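For what it's worth, the whole-device approach in question 2 means there is no fdisk step at all; the raw block device becomes the PV. A minimal sketch, again with illustrative names (vg_media, lv_video) and sizes:

```shell
# Hand the entire RAID unit to LVM -- no partition table needed.
pvcreate /dev/sdb
vgcreate vg_media /dev/sdb

# Carve out a logical volume for the video store, deliberately
# leaving free extents in the VG so it can be grown later.
lvcreate -n lv_video -L 500G vg_media
mkfs.xfs /dev/vg_media/lv_video
```

Growing later is then `lvextend -L +500G /dev/vg_media/lv_video` followed by `xfs_growfs` on the mount point.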
I guess I struggle with questions 1a/1b the most. LVM's advantage seems to rest on the assumption that you will add physical drives, but there is much less written about its advantages on top of an expandable hardware RAID. Any thoughts, or good detailed references on this subject?