Resizing PV on RAID5 to Add Swap Space
Hello Experts,
This is my very first Linux post here. I've been learning a lot from your site for a long time alongside my Linux self-study. I'm an MS system admin with 6 months of Linux experience and growing, so please go easy on me.

## My Situation ##
- CentOS 5.6 (Final) x86_64, 2.6.18-238.9.1.el5xen, running as a backup server.
- 4 HDDs: 1x500GB, 3x1TB.
- 2 RAID arrays: (/dev/md0) RAID1 and (/dev/md1) RAID5.
- /boot on (/dev/md0) RAID1, using /dev/sda1 and /dev/sdb1.
- swap on /dev/sda2, a plain Linux swap partition, neither RAID nor LVM.
- A volume group named "lvm_raid" on top of (/dev/md1) RAID5, using /dev/sdb2, /dev/sdd1, and /dev/sdc1.

## What I need to do ##
1. Free up some space on /dev/sdb to add a second swap partition.
2. Create (/dev/md3), a RAID1 array that holds the two swap partitions (/dev/sda2 and /dev/sdb3).

I understand that:
- we need to resize the PV that holds the "lvm_raid" LVM and free up one of the partitions, and
- resize the (/dev/md1) RAID5 array to the new size.

I searched all over the forums trying to find a similar situation, but couldn't find one that resembles mine; I came up with too many pieces that I can't put together, and that's why I'm here. I sincerely appreciate your help. Here are some readings from my system that might help:
---------------------------------------------------------------------
$ df -h
---------------------------------------------------------------------
Filesystem                      Size  Used Avail Use% Mounted on
/dev/mapper/lvm_raid-volroot    9.7G  640M  8.6G   7% /
/dev/mapper/lvm_raid-volhome    4.9G  1.9G  2.8G  41% /home
/dev/mapper/lvm_raid-voltmp     9.7G  151M  9.1G   2% /tmp
/dev/mapper/lvm_raid-volvar      20G  326M   19G   2% /var
/dev/mapper/lvm_raid-volusr     9.7G  4.1G  5.2G  44% /usr
/dev/mapper/lvm_raid-volopt     9.7G  151M  9.1G   2% /opt
/dev/md0                        243M   23M  208M  10% /boot
tmpfs                           1.6G     0  1.6G   0% /dev/shm
none                            1.6G  104K  1.6G   1% /var/lib/xenstored
/dev/mapper/lvm_raid-volbackup  1.8T  153G  1.5T  10% /backup

---------------------------------------------------------------------
$ pvdisplay
---------------------------------------------------------------------
  --- Physical volume ---
  PV Name               /dev/md1
  VG Name               lvm_raid
  PV Size               1.82 TB / not usable 32.00 MB
  Allocatable           yes (but full)
  PE Size (KByte)       32768
  Total PE              59600
  Free PE               0
  Allocated PE          59600
  PV UUID               QJwEL1-HYG3-6iHI-NCUw-xs6r-RUiX-aJUQOs

---------------------------------------------------------------------
$ cat /proc/mdstat
---------------------------------------------------------------------
Personalities : [raid6] [raid5] [raid4] [raid1]
md0 : active raid1 sda1[0] sdb1[1]
      256896 blocks [2/2] [UU]

md1 : active raid5 sdd1[2] sdc1[1] sdb2[0]
      1953005568 blocks level 5, 256k chunk, algorithm 2 [3/3] [UUU]

unused devices: <none>

---------------------------------------------------------------------
$ fdisk -l
---------------------------------------------------------------------
Disk /dev/sda: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          32      257008+  fd  Linux raid autodetect
/dev/sda2              33        1076     8385930   82  Linux swap / Solaris

Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1   *           1          32      257008+  fd  Linux raid autodetect
/dev/sdb2              33      121601   976502992+  fd  Linux raid autodetect

Disk /dev/sdc: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1   *           1      121601   976760001   fd  Linux raid autodetect

Disk /dev/sdd: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1   *           1      121601   976760001   fd  Linux raid autodetect

Disk /dev/md1: 1999.8 GB, 1999877701632 bytes
2 heads, 4 sectors/track, 488251392 cylinders
Units = cylinders of 8 * 512 = 4096 bytes

Disk /dev/md1 doesn't contain a valid partition table

Disk /dev/md0: 263 MB, 263061504 bytes
2 heads, 4 sectors/track, 64224 cylinders
Units = cylinders of 8 * 512 = 4096 bytes

Disk /dev/md0 doesn't contain a valid partition table
I don't think what you are proposing to do is advisable.
Right now your three RAID5 partitions are of unequal size. The smallest size is what gets used; the excess space (~256MB) on the other two is unusable. That's not a lot of space, but you're proposing to make the smaller partition even smaller and waste more space.

If you were to do as proposed, you would need to:
1) use 'pvresize' to reduce the size of the LVM physical volume;
2) use 'mdadm --grow' to reduce the RAID array;
3) use 'fdisk' (or a similar utility) to reduce the size of the partition and create your new partition.

The problem you will run into is that you need to get all of the sizes exactly right, and the tools all tend to round differently (cylinder size vs. RAID chunk size vs. LVM physical extent size). This is risky business and something I would not do. I doubt that others would find it a good idea, which is probably why you can't find examples of how to do it.

You'd be far better off creating a new swap LV in lvm_raid. You'd save some space over another RAID1 array, and the performance of RAID5 is not that much worse than RAID1. You could have this done in 5 minutes. Plus you would have far more flexibility for changing your swap in the future; with your proposal you're locked into the size of the partition.

Out of curiosity, can you add more memory to this system to avoid the need for swap? I just bought 8GB of Crucial memory for $94US.
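To make the rounding hazard concrete, here is a small arithmetic sketch, not a procedure to run. The cylinder, chunk, and PE granularities come from the pvdisplay, /proc/mdstat, and fdisk output posted above; the target size is a made-up number purely for illustration:

```shell
# Illustration only. All sizes are in 512-byte sectors.
CYL=16065        # fdisk cylinder: 16065 sectors (~7.8 MB)
CHUNK=512        # md RAID5 chunk: 256 KB = 512 sectors
PE=65536         # LVM PE: 32 MB = 65536 sectors

TARGET=1953125000    # hypothetical target of ~1 TB (10^12 / 512 sectors)

PART=$(( TARGET / CYL * CYL ))     # fdisk rounds down to whole cylinders
MD=$((   PART / CHUNK * CHUNK ))   # mdadm rounds down to whole chunks
PV=$((   MD / PE * PE ))           # LVM rounds down to whole PEs

echo "partition=$PART md=$MD pv=$PV"
# Three different granularities give three different sizes. When shrinking,
# each lower layer must stay at least as large as the layer above it, or
# data is silently truncated -- hence the risk.
```

Running it shows the three layers landing on three different boundaries, which is exactly the mismatch you'd have to get right by hand when shrinking.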
tommylovell, thanks a lot, my friend, for taking the time to reply to my thread; it means a lot to me :)
After trying out your proposal, I feel like a fool for choosing the hard way ("mine"), but I didn't know I could create swap on the RAID5 device; I just didn't think of it.

I do need to clarify some points about my case. I'm trying to set up a reliable backup server that can survive the worst hardware failure scenarios, so this was my thinking:
- put /boot and swap on two RAID1 volumes (although I know it's better to put swap on a separate partition/disk for performance reasons);
- but since my server has only 4 disks, I was compelled to assign the swap partition on one of the RAID5 disks, which wasn't a wise approach because it left my 3 RAID5 disks unequally sized (I didn't pay attention to this).

Since I haven't sailed too far with this setup yet, I'm thinking about reinstalling the server after backing up my data. I will still need swap on RAID (5 or 1), since purchasing extra RAM will have to wait a month (budget issue). Which would be your best practice, and why?
1) assign swap space on each disk and create the RAID5 from equally sized partitions, or
2) create a swap LV within lvm_raid from the beginning?

Thanks again for the help.
Saed
You actually would be creating the swap space on an LVM logical volume, not directly on the RAID5 MD block device. (Think about it in layers: real /dev/sdX devices on the bottom; RAID on top of that; then LVM on top of RAID.)

The advantage is that you can create and remove swap spaces easily on LVM with little forethought. It's hard to tell in advance just what your swap requirement might be. Putting the swap on LVM allows you to alter the sizes (add a new one, remove the old one, etc.).

Code:
lvcreate -L2G -n swap2 lvm_raid
mkswap /dev/mapper/lvm_raid-swap2

Add an entry to /etc/fstab, then:

Code:
swapon -a

Adding it to fstab and swapping on based on the fstab content is better than just doing a temporary add (swapon /dev/mapper/lvm_raid-swap2), because you are then assured that it'll be added properly at the next reboot.
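For reference, the /etc/fstab entry for such a swap LV would look something like this (the name "swap2" matches the lvcreate command in this post; the entry itself is a sketch, adjust to taste):

```
# hypothetical /etc/fstab line for the new swap LV
/dev/mapper/lvm_raid-swap2  swap  swap  defaults  0 0
```

With that line in place, swapon -a (and every reboot) will pick the new swap space up automatically.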
If your option 1) means putting swap directly on each disk, you're losing the resiliency which you said was one of your goals.

Glad to help. Hope I answered your questions.

By the way, don't forget to write your bootloader to your /dev/sdb drive so that you can boot off of it if /dev/sda fails.

tom
Well, thinking about it in layers as you suggested made it all click.
I've already written my bootloader to /dev/sdb and tested it as well, using grub:

Code:
root (hd0,0)
setup (hd0)
root (hd1,0)
setup (hd1)

But Tom, how can I make use of the remaining space on the first disk (500 GB)? Or can you imagine a better partitioning scheme? I'm sorry for asking so many questions; I just need to learn while communicating with minds like yours.

Regards
As you said you wanted this system to be resilient (RAID), that space would need to be placed into a RAID array. One way to do it...

Code:
sda-500G     sdb-1T       sdc-1T       sdd-1T
250M<-RAID1->250M         unused       unused
470G<-RAID5->470G<-RAID5->470G<-RAID5->470G
             500G<-RAID5->500G<-RAID5->500G

The 250M would be your /dev/md0 and /boot as it is now; the 1.4T would be /dev/md1 and the first PV in the lvm_raid VG; and the 1T /dev/md2 device would be the second PV in the lvm_raid VG. That'd give you a total of 2410GB of usable space.

Another way to do it...

Code:
sda-1T       sdb-1T       sdc-1T       sdd-500G

So that's how you could minimize wasted space. But the other question remains: "is this a good idea, technically?" I don't know that answer. I have heard that people have had performance problems due to contention for disk access with layouts like this. I would suppose that it depends a lot upon how heavily used the two RAID5 arrays are; whether you've lost a disk and one (or both) RAID5 arrays are running in degraded mode; and what types of controllers the disks are on (PATA, SATA, SAS, SCSI).

I think because this is a much bigger and different question than the one you originally asked in this post, you should post a new question, something like "Is overlapping two RAID5 arrays on the same drives a bad idea?" The text could be: "I would like to place two RAID5 arrays on disk as shown below. Is this advisable? Will this create performance problems?

[code]
sda-500G     sdb-1T       sdc-1T       sdd-1T
250M<-RAID1->250M         unused       unused
470G<-RAID5->470G<-RAID5->470G<-RAID5->470G   first RAID5 array
             500G<-RAID5->500G<-RAID5->500G   second RAID5 array
[/code]

(The BB code tags [code] and [/code] make it more readable. See http://www.linuxquestions.org/questi....php?do=bbcode if you haven't already. They put the text in a "Code:" box and give it a fixed font.) It'll look like this.

Code:
sda-500G     sdb-1T       sdc-1T       sdd-1T
250M<-RAID1->250M         unused       unused
470G<-RAID5->470G<-RAID5->470G<-RAID5->470G   first RAID5 array
             500G<-RAID5->500G<-RAID5->500G   second RAID5 array
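As a sanity check on the 2410GB figure, RAID5 usable capacity is (number of members - 1) times the member size, so the two arrays in the first layout work out as:

```shell
# RAID5 usable space = (number of members - 1) * member size.
# Slice sizes (in GB) taken from the layout sketch in this post.
MD1_GB=$(( (4 - 1) * 470 ))   # 470G slice on all four disks
MD2_GB=$(( (3 - 1) * 500 ))   # 500G slice on the three 1T disks
echo "md1=${MD1_GB}G md2=${MD2_GB}G total=$(( MD1_GB + MD2_GB ))G"
# -> md1=1410G md2=1000G total=2410G
```

One 470G slice and one 500G slice per array are "spent" on parity; everything else is usable.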
http://www.linuxquestions.org/questi...07#post4375407

thanks again tom :)