Linux - Server: This forum is for the discussion of Linux software used in a server-related context.
Welcome to LinuxQuestions.org, a friendly and active Linux Community.
I would like to use Linux software RAID to create a RAID 5 array using 3 hard disks (or perhaps 4 - we'll see). I understand that the final size of the RAID 5 will be...
[size_of_the_smallest_hd * (total_nbr_of_HDs - 1)]
...because one HD's worth of capacity will be used to store the parity data (ok, the parity data is actually spread across all participating HDs, but in the end the result is the same). Everything ok until now.
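As a quick sanity check, the formula works out like this (a shell sketch with hypothetical disk sizes):

```shell
# RAID 5 usable capacity = size of the smallest member * (number of members - 1)
smallest_gb=500   # hypothetical: smallest disk is 500 GB
disks=3
usable_gb=$(( smallest_gb * (disks - 1) ))
echo "RAID 5 usable capacity: ${usable_gb} GB"
```

With 3x500 GB disks this prints 1000 GB; a fourth 500 GB disk would bring it to 1500 GB.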
Now let's assume that after 1 or 2 years my RAID will be full. The first idea would be to buy an additional HD that becomes part of the existing RAID.
My problem at this point is that the new HD will most probably be able to hold much more data than the old ones, but because each disk's contribution to the RAID is based on the size of the smallest HD, a lot of space on the new HD will be wasted!
So, here comes the first question: will there be any way to use the wasted space of the new HD? Is it possible to create a separate "normal" partition on a HD which already has a part allocated to a RAID array?
Second question: what will I have to do if I want to get rid of the old HDs and just buy and use brand new ones? Will I have to export the data somewhere, get rid of the old HDs, install the new ones & create the RAID, and re-import the data? Is there a better way? (I am assuming that I won't be able to connect all the old & new HDs to the machine at the same time for a direct data transfer - not enough SATA ports.)
Thanks a lot...
p.s.: is it possible to somehow use LVM in conjunction with software RAID? Any pros/cons?
Last edited by Pearlseattle; 11-17-2007 at 08:46 AM.
ok, so firstly you can't straightforwardly expand a raid 5 array after creation afaik (recent mdadm/kernel combinations can reshape one with `mdadm --grow`, but it's a long operation and you'd want a backup first). the parity is striped across all devices in proportion to the number of devices, i.e. 1/4 of it on each disk in a 4-disk array, 1/5 in a 5-disk array. adding another device invalidates that layout, so you'd need to totally rebuild the array for it to work.
software raid arrays are built from a combination of partitions, not necessarily whole devices. as such you *could* make a raid 5 array from 2 drives, one with two 50gb partitions, one with just one 50gb partition... it's dumb of course (losing the first drive would take out two members at once), but it's the partitions that are raided. you can use the rest of a device however you wish.
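To illustrate the partition answer with concrete commands (a hedged sketch: the device names are hypothetical, and these commands need root and spare disks, so don't run them blindly):

```shell
# Suppose /dev/sdb and /dev/sdc are 1 TB disks and /dev/sdd is a new 2 TB disk.
# Only a 1 TB partition on the new disk joins the array; the rest stays usable.

# Partition the new disk: first 1 TB for the RAID, the remainder as a
# normal partition (parted used here; fdisk works just as well).
parted -s /dev/sdd mklabel gpt
parted -s /dev/sdd mkpart raid 1MiB 1000GiB
parted -s /dev/sdd mkpart data 1000GiB 100%

# Build the RAID 5 from the three same-sized partitions...
mdadm --create /dev/md0 --level=5 --raid-devices=3 \
      /dev/sdb1 /dev/sdc1 /dev/sdd1

# ...and use the leftover space on the big disk as an ordinary filesystem.
mkfs.ext3 /dev/sdd2
```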
lvm and raid go very well together, especially if you want a better solution than raid 1+0. here you'd have a number of raid 1 mdX's, turn each into an LVM physical volume, and then join them all together into your volume group.

using LVM on a raid5 array can also be of use to you, in that you could potentially buy a newer, larger drive the size of the existing array, extend the LVM volume group on your raid array onto it, then remove the old drives from the LVM, effectively migrating the data to new hardware without touching the LVM configuration on top.

you *could* then do something slightly daft like taking two smaller drives and using LVM or raid0 across them to leave you with two larger partitions, which you could then raid again and spread the original migrated LVM across... probably best to stick to saner solutions though!
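The raid1-plus-LVM layout described above would look roughly like this (a sketch with hypothetical device names; needs root and real disks):

```shell
# Two RAID 1 mirrors from two pairs of disks (device names are examples).
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1

# Turn each mirror into an LVM physical volume and pool them in one volume group.
pvcreate /dev/md0 /dev/md1
vgcreate vg0 /dev/md0 /dev/md1

# Carve a logical volume out of the pool and put a filesystem on it.
lvcreate -L 400G -n data vg0
mkfs.ext3 /dev/vg0/data
```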
Hei, thanks a lot!
I'm a little bit confused about the third part... .
lvm and raid go very well together... . here you'd have a number of raid 1 mdX's, and then join them all together with lvm into your physical volumes, possibly just one.
For example, I could today buy 2 HDs, create a RAID 1 array, and install LVM on top of it. After a few months, when I need more space, I would add 2 more HDs, create an additional RAID 1, and integrate it into the existing LVM. Is this correct? After some more time, if I need still more space, I would slowly replace the two disks in one of the RAID 1 arrays (one after the other, restoring the RAID each time) with bigger ones, and the LVM would automagically become bigger? If this is correct, will I have to tell LVM something to make it aware that one of the arrays got bigger? Won't it complain that something changed without it knowing?
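For reference, the replace-and-grow procedure described here would look roughly like this with mdadm and LVM2 (hypothetical device names, needs root; and LVM does need to be told about the bigger PV, via pvresize):

```shell
# Replace each member of the mirror in turn (md0 is the RAID 1; names are examples).
mdadm /dev/md0 --fail /dev/sda1 --remove /dev/sda1
mdadm /dev/md0 --add /dev/sde1      # bigger disk; wait for the resync to finish
# ...then repeat for the second old disk, again waiting for the resync...

# Grow the array to use the new disks' full size, then tell LVM the PV grew.
mdadm --grow /dev/md0 --size=max
pvresize /dev/md0
```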
using LVM on a raid5 array can also be of use to you in that you could potentially buy a newer larger drive the size of the existing array and extend an LVM PV on your raid array on to it, then remove the old drives from the LVM, effectively migrating the data to new hardware without removing the underlying LVM configuration.
What I understood is:
1) build a RAID 5
2) create an LVM on top of the RAID 5
3) buy after some time a new HD the size of the RAID 5.
4) what now? Extend the LVM to include the brand new giant HD?
5) "remove the old drives from the LVM": won't LVM say "hey man, I cannot find the RAID 5 - you're screwed!"? Perhaps I didn't really understand what LVM is: isn't it a kind of RAID 0 which just "absorbs" any physical volume? Wouldn't losing one of the volumes therefore lose the entire LVM?
Perhaps I found the answer to the last point in Wikipedia:
VGs can grow their storage pool by absorbing new PVs or shrink by retracting from PVs. This may involve moving already-allocated LEs out of the PV.
This perhaps means that there is a command to tell LVM "I'm going to remove the PV ####", and by doing this it will move all the data from the RAID WHATEVER to the brand new giant HD, without losing anything?
Sorry, I'm really new to LVM...
yeah, that's about it. Bobs Rubber Chicken Co. is based in an office and gets a lot more business, so it rents the office next door to house the new staff it hired. it then loses business and, since it fitted out the new office more recently, sells off the old one, and someone turns it into a brothel. the company is still at the same address as far as its customers are concerned, like an LVM LV is still the same LV, but behind the scenes it has moved location, like an LV whose data has completely moved hardware. sorry for the naff analogy!
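In concrete commands, the "office move" corresponds to pvmove (a hedged sketch; device names are hypothetical, needs root):

```shell
# Attach the new big disk to the volume group...
pvcreate /dev/sdf1
vgextend vg0 /dev/sdf1

# ...migrate all allocated extents off the old PV (the RAID 5 here), online...
pvmove /dev/md0

# ...then drop the old PV from the group and retire it.
vgreduce vg0 /dev/md0
pvremove /dev/md0
```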
you do have to manage the PV's, PE's, VG's etc... but once you know what makes up what in the LVM hierarchy it's really fairly logical. you create the PV's (Physical Volumes), which LVM divides up into PE's (Physical Extents), and add them to the VG (Volume Group); you then have a larger VG and can expand your LV's (Logical Volumes) inside it, which finally contain the filesystems. now, just as a partition contains a filesystem and the two are managed separately, you still have to carefully manage the filesystem in the LV, i.e. making an LV larger doesn't make the filesystem bigger; that's where tools like resize2fs come in, independently of LVM.
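For example, growing an LV and then the filesystem inside it are two separate steps (hypothetical names; assumes an ext2/ext3 filesystem, hence resize2fs):

```shell
# Growing the LV alone does not grow the filesystem inside it...
lvextend -L +100G /dev/vg0/data

# ...the filesystem must be resized separately (resize2fs for ext2/ext3;
# recent versions can grow a mounted ext3 filesystem online).
resize2fs /dev/vg0/data
```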
Great, thanks Chris! I'll buy 2 HDs, create a RAID 1, create the LVM, grab an old HD and do some experiments with LVs/PVs/PEs/VGs until I understand what I am actually doing. Sounds good.
Thanks a lot!!!