Hello Experts,
First, it's my pleasure to post my very first Linux post right here. I've been learning a lot from your site for a long time, along with my Linux self-study.
I'm an MS system admin with 6 months of Linux experience and growing. It's my first post ever, so please go easy on me.
## My Situation ## :-
- CentOS 5.6 (Final) x86_64, 2.6.18-238.9.1.el5xen, running as a backup server.
- 4 HDDs: 1x500GB, 3x1TB
- 2 RAID arrays: (/dev/md0) RAID1, (/dev/md1) RAID5
- /boot on (/dev/md0) RAID1 using (/dev/sda1, /dev/sdb1)
- swap on (/dev/sda2), a plain swap partition (no RAID, no LVM)
- Volume group named "lvm_raid" on top of (/dev/md1) RAID5 using (/dev/sdb2, /dev/sdd1, /dev/sdc1)
## What i need to do ## :-
1- free up some space on (/dev/sdb) to add a second swap partition
2- create (/dev/md3) RAID1 that holds the 2 swap partitions (/dev/sda2, /dev/sdb3)
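From what I've read so far, I think step 2 would be something like this (untested sketch on my part, using the device names from my layout above):

```shell
# Create a RAID1 array from the two swap partitions (untested sketch)
mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb3
# Format the new array as swap and enable it
mkswap /dev/md3
swapon /dev/md3
```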
# i understand that :-
- we need to resize the PV that holds the "lvm_raid" VG and free up one of the partitions
- then resize (/dev/md1) RAID5 to the new size
# I searched all over the forums trying to find a similar situation, but couldn't find one that resembles mine. I came up with too many pieces that I can't put together, and that's why I'm here.
I sincerely appreciate your help.
Here are some readings from my system that might help.
I don't think what you are proposing to do is advisable.
Right now your three RAID5 partitions are of unequal size. The smallest size is used; the excess space (~256MB) on the other two is unusable. That's not a lot of space, but you're proposing to make the smaller partition even smaller and waste more space.
If you were to do as proposed, you would need to:
1) use 'pvresize' to reduce the size of the LVM physical volume;
2) use 'mdadm --grow' to shrink the RAID array;
3) use 'fdisk' (or a similar utility) to reduce the size of the partition and create your new partition.
The problem you will run into is that you need to get all of the sizes exactly right, and they all tend to round differently (cylinder size vs. RAID chunk size vs. LVM physical extent size). This is risky business and something I would not do. I doubt that others would find it a good idea, which is probably why you can't find examples of how to do it.
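For illustration only, and with placeholder sizes (the real values would have to be worked out carefully against your PE, chunk and cylinder boundaries), the shrink sequence would go top-down through the layers:

```shell
# NOT recommended -- shown only to illustrate the ordering of the steps.
# 1) Shrink the LVM physical volume that sits on the array first
#    (900G here is a made-up placeholder, not a computed value):
pvresize --setphysicalvolumesize 900G /dev/md1
# 2) Then shrink the RAID5 array itself; --size is the per-member size in KiB:
mdadm --grow /dev/md1 --size=943718400
# 3) Finally use fdisk (or parted) to shrink /dev/sdb2 and create /dev/sdb3.
```

Get any one of those sizes wrong, or do them in the wrong order, and you corrupt the layer above.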
You'd be far better off if you created a new swap LV in lvm_raid. You'd save some space over another RAID1 array and the performance of RAID5 is not that much less than RAID1. You could have this done in 5 minutes. Plus you would have far more flexibility in terms of changing your swap in the future. With your proposal you're locked into the size of the partition.
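A sketch of that approach (the LV name "swap2" and the 4G size are just examples):

```shell
# Create a 4G logical volume for swap in the existing volume group
lvcreate -L 4G -n swap2 lvm_raid
# Format it as swap and enable it
mkswap /dev/lvm_raid/swap2
swapon /dev/lvm_raid/swap2
# Verify that the new swap space is active
swapon -s
```

If you later need more or less swap, you just remove this LV and create another one; no repartitioning involved.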
Out of curiosity, can you add more memory to this system to avoid the need for swap? I just bought 8GB of Crucial memory for $94US.
tommylovell, thanks a lot my friend for taking the time to reply to my thread, it meant a lot to me.
After trying out your proposal I feel like a fool for choosing the hard way ("mine"), but I didn't know that I could create swap on a RAID5 device! I didn't think about it.
I just need to clarify some points about my case:
As I'm trying to set up a reliable backup server that can survive the worst hardware-failure scenarios,
this is what I had in mind :-
laying /boot and swap on two RAID1 volumes. Although I know it's better to put swap on a separate partition/disk for performance reasons, my server only has 4 disks, so I was compelled to assign the swap partition on one of the RAID5 disks, which wasn't a wise approach since it left my 3 RAID5 disks unequally sized (I didn't pay attention to this).
Since I haven't sailed too far into this solution yet, I'm thinking about reinstalling the server after backing up my data, with one of the options below, knowing that I'll still need swap on RAID (5 or 1) since purchasing extra RAM is a month away (budget issue)..
# what's your best practice, and why? :-
1) assigning swap space on each disk and creating an equally sized RAID5
OR
2) creating a swap LV within lvm_raid from the beginning
Quote:
I didn't know that I could create swap on a RAID5 device! I didn't think about it.
Well, there are a lot of different ways to do things. It's all part of the learning process.
You actually would be creating a swap space on an LVM Logical Volume, not directly on the RAID5 MD block device. (Think about it in layers. Real /dev/sdx devices on the bottom; RAID on top of that; then LVM on top of RAID.)
The advantage is that you can create and remove swap files easily on LVM with little forethought. It's hard to tell in advance just what your swap requirement might be. Putting the swap on LVM allows you to alter the sizes (add a new one, remove the old one, etc.).
Adding it to fstab and enabling swap based on the fstab content is better than just doing a temporary add (swapon /dev/mapper/lvm_raid-swap2), because you're assured it'll be activated properly at the next reboot.
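The /etc/fstab entry would look something like this (the LV name "swap2" is just an example):

```
/dev/mapper/lvm_raid-swap2  swap  swap  defaults  0 0
```

After adding the line, 'swapon -a' activates every swap entry in fstab, and 'swapon -s' confirms it's active.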
Quote:
# what's your best practice, and why? :-
1) assigning swap space on each disk and creating an equally sized RAID5
OR
2) creating a swap LV within lvm_raid from the beginning
At work we always create swap on LVM, like your option 2). It's much more versatile. But we generally have adequate memory on our servers. In those instances where we run low on memory and start to use swap, the systems slow down but are still usable. Since our LVM is on top of RAID, it's resilient. (I do the same at home but have little need for swap.)
If your option 1) is putting swap directly on each disk you're losing the resiliency which you said was one of your goals.
Glad to help. Hope I answered your questions.
By the way, don't forget to write your bootloader to your /dev/sdb drive so that you can boot off of it if /dev/sda fails.
Well, thinking about it in layers as you suggested made the whole picture click.
I've already written my bootloader to /dev/sdb and tested it as well, using:
Code:
grub
root (hd0,0)
setup (hd0)
root (hd1,0)
setup (hd1)
But Tom, how can I make use of the remaining space on the first disk (500GB)? Or can you imagine a better partitioning scheme?
I'm sorry for asking so many questions, I just need to learn while communicating with minds like yours.
Regards
Quote:
how can I make use of the remaining space on the first disk (500 GB), or can you imagine a better partitioning scheme?
That's a good question. There are two aspects of this. How would you lay it out on disk? And is this layout sound, technically?
As you said you wanted this system to be resilient (RAID), that means the space would need to be placed into a RAID array.
One way to do it...
Code:
sda-500G sdb-1T sdc-1T sdd-1T
250M<-RAID1->250M unused unused
470G<-RAID5->470G<-RAID5->470G<-RAID5->470G first RAID5 array
500G<-RAID5->500G<-RAID5->500G second RAID5 array
That would give you a 250M RAID1; a 1410GB (3*470) RAID5; and a 1000GB (2*500) RAID5.
The 250M would be your /dev/md0 and /boot as it is now; the 1.4T would be /dev/md1 and the first PV in the lvm_raid VG; and the 1T /dev/md2 device would be the second PV in the lvm_raid VG. That'd give you two PVs (1410G and 1000G) in the lvm_raid VG, for a total of 2410GB of usable space.
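As a quick sanity check on the first layout's capacities (RAID5 usable space is the number of members minus one, times the member size):

```shell
# RAID5 usable capacity = (number of members - 1) * member size
echo "first array:  $(( (4 - 1) * 470 ))G"   # four 470G members
echo "second array: $(( (3 - 1) * 500 ))G"   # three 500G members
echo "total:        $(( 1410 + 1000 ))G"
```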
So that's how you could minimize wasted space. But the other question remains, "is this a good idea, technically?"
I don't know that answer. I have heard that people have had performance problems due to contention for disk access with layouts like this. I would suppose that it depends a lot upon how heavily used the two RAID5 arrays are; whether you've lost a disk and one (or both) RAID5 arrays are running in degraded mode; what types of controllers the disks are on (PATA, SATA, SAS, SCSI).
I think because this question is much bigger than, and different from, the one you originally asked in this post, you should post a new question, something like "Is overlapping two RAID5 arrays on the same drives a bad idea?"
The text could be: "I would like to place two RAID5 arrays on disk as shown below. Is this advisable? Will this create performance problems?
470G<-RAID5->470G<-RAID5->470G<-RAID5->470G first RAID5 array
500G<-RAID5->500G<-RAID5->500G second RAID5 array"
(The BB code tags [code] and [/code] make it more readable. See http://www.linuxquestions.org/questi....php?do=bbcode if you haven't already. They put the text in a "code:" box and give it a fixed font.)
It'll look like this.
Code:
sda-500G sdb-1T sdc-1T sdd-1T
250M<-RAID1->250M unused unused
470G<-RAID5->470G<-RAID5->470G<-RAID5->470G first RAID5 array
500G<-RAID5->500G<-RAID5->500G second RAID5 array
You'll be asking for opinions, but I value the opinions of others on this forum. There are a lot of people with a lot of experience. It is good to learn from your own mistakes, but better to learn from theirs...
Quote:
i`m sorry for asking too much questions , i just need to learn while communicating with minds like you
That's what we are here for. There are many, many, many questions that I can't answer. I help where I can. I'm certain that there will be opportunities where you can share your knowledge and experiences.
Last edited by tommylovell; 06-02-2011 at 08:50 AM.
Quote:
Code:
sda-500G sdb-1T sdc-1T sdd-1T
250M<-RAID1->250M unused unused
470G<-RAID5->470G<-RAID5->470G<-RAID5->470G first RAID5 array
500G<-RAID5->500G<-RAID5->500G second RAID5 array
Thanks a lot Tom, that was a very helpful suggestion. I'll go for it for sure.
Quote:
what types of controllers the disks are on (PATA, SATA, SAS, SCSI).
Well, it's a PowerEdge T110 with 4 SATA ports. I bought it 2 weeks ago for a Linux backup implementation on the network.
Quote:
(The BB code tags [code] and [/code] make it more readable. See http://www.linuxquestions.org/questi....php?do=bbcode if you haven't already. They put the text in a "code:" box and give it a fixed font.)
It'll look like this.
I didn't have the time to look at the FAQ section to learn these tags, as I was rushing for an answer regarding my situation. Thanks to Allah who made me find you, as you have been such a great help on this forum.
Quote:
I think because this question is a much bigger and different question than the one you originally asked in this post you should post a new question something like "Is overlapping two RAID5 arrays on same drives a bad idea?"
The text could be: "I would like to place two RAID5 arrays on disk as shown below. Is this advisable? Will this create performance problems?"