LinuxQuestions.org
Red Hat This forum is for the discussion of Red Hat Linux.

Old 06-01-2011, 09:07 AM   #1
Saed.Abdu
LQ Newbie
 
Registered: Jun 2011
Posts: 14

Rep: Reputation: Disabled
Resizing PV on RAID5 to Add Swap Space


Hello experts,
First, it's my pleasure to make my very first Linux post right here. I've been learning a lot from your site for a long time, along with my Linux self-study.
I'm an MS system admin with 6 months of Linux experience and growing. It's my first post ever, so please go easy on me.

## My Situation ## :-

- CentOS 5.6 (Final) x86_64 2.6.18-238.9.1.el5xen, running as a backup server.
- 4 HDDs: 1x500GB, 3x1TB
- 2 RAID arrays: (/dev/md0)-RAID1, (/dev/md1)-RAID5
- /boot on (/dev/md0)-RAID1 using (/dev/sda1, /dev/sdb1)
- swap on (/dev/sda2), a normal Linux swap partition (neither RAID nor LVM)
- Volume group named "lvm_raid" on top of (/dev/md1)-RAID5 using (/dev/sdb2, /dev/sdd1, /dev/sdc1)

## What I need to do is ## :-
1- Free up some space on /dev/sdb to add a second swap partition
2- Create (/dev/md3)-RAID1 holding the two swap partitions (/dev/sda2, /dev/sdb3)

# I understand that :-
- we need to resize the PV that holds the "lvm_raid" VG and free up space on one of the partitions
- then shrink the (/dev/md1)-RAID5 array to the new size

# I searched all over the forums trying to find a similar situation, but couldn't find one that resembles mine. I came up with too many pieces that I can't put together, and that's why I'm here.

I sincerely appreciate your help...

Here are some readings from my system that might help:

---------------------------------------------------------------------
$ df -h
---------------------------------------------------------------------
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/lvm_raid-volroot
9.7G 640M 8.6G 7% /
/dev/mapper/lvm_raid-volhome
4.9G 1.9G 2.8G 41% /home
/dev/mapper/lvm_raid-voltmp
9.7G 151M 9.1G 2% /tmp
/dev/mapper/lvm_raid-volvar
20G 326M 19G 2% /var
/dev/mapper/lvm_raid-volusr
9.7G 4.1G 5.2G 44% /usr
/dev/mapper/lvm_raid-volopt
9.7G 151M 9.1G 2% /opt
/dev/md0 243M 23M 208M 10% /boot
tmpfs 1.6G 0 1.6G 0% /dev/shm
none 1.6G 104K 1.6G 1% /var/lib/xenstored
/dev/mapper/lvm_raid-volbackup
1.8T 153G 1.5T 10% /backup
------------------------------------------------------------------------
$ pvdisplay
------------------------------------------------------------------------
--- Physical volume ---
PV Name /dev/md1
VG Name lvm_raid
PV Size 1.82 TB / not usable 32.00 MB
Allocatable yes (but full)
PE Size (KByte) 32768
Total PE 59600
Free PE 0
Allocated PE 59600
PV UUID QJwEL1-HYG3-6iHI-NCUw-xs6r-RUiX-aJUQOs

------------------------------------------------------------------
$ cat /proc/mdstat
------------------------------------------------------------------
Personalities : [raid6] [raid5] [raid4] [raid1]
md0 : active raid1 sda1[0] sdb1[1]
256896 blocks [2/2] [UU]

md1 : active raid5 sdd1[2] sdc1[1] sdb2[0]
1953005568 blocks level 5, 256k chunk, algorithm 2 [3/3] [UUU]

unused devices: <none>

---------------------------------------------------------------------------
$ fdisk -l
---------------------------------------------------------------------------
Disk /dev/sda: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sda1 * 1 32 257008+ fd Linux raid autodetect
/dev/sda2 33 1076 8385930 82 Linux swap / Solaris

Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sdb1 * 1 32 257008+ fd Linux raid autodetect
/dev/sdb2 33 121601 976502992+ fd Linux raid autodetect

Disk /dev/sdc: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sdc1 * 1 121601 976760001 fd Linux raid autodetect

Disk /dev/sdd: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sdd1 * 1 121601 976760001 fd Linux raid autodetect

Disk /dev/md1: 1999.8 GB, 1999877701632 bytes
2 heads, 4 sectors/track, 488251392 cylinders
Units = cylinders of 8 * 512 = 4096 bytes

Disk /dev/md1 doesn't contain a valid partition table

Disk /dev/md0: 263 MB, 263061504 bytes
2 heads, 4 sectors/track, 64224 cylinders
Units = cylinders of 8 * 512 = 4096 bytes

Disk /dev/md0 doesn't contain a valid partition table
 
Old 06-01-2011, 11:27 AM   #2
tommylovell
Member
 
Registered: Nov 2005
Distribution: Raspbian, Debian, Ubuntu
Posts: 380

Rep: Reputation: 103Reputation: 103
I don't think what you are proposing to do is advisable.

Right now your three RAID5 partitions are of unequal size. The smallest size is used; the excess space (~256MB) on the other two is unusable. That's not a lot of space, but you're proposing to make the smaller partition even smaller and waste more space.

If you were to do as proposed, you would need to:
1) use 'pvresize' to reduce the size of the LVM physical volume
2) use 'mdadm --grow' to shrink the RAID array
3) use 'fdisk' (or a similar utility) to shrink the partition and create your new one.
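For the record, the sequence would look something like the sketch below. Every size in it is a placeholder (none of them come from the actual numbers in this thread), and every step is destructive, so treat it strictly as an illustration:

```shell
# DANGER: illustration only -- do not run without full backups.
# All sizes are placeholders; each must be recalculated for the real layout.

# 1) shrink the LVM physical volume on the array
#    (there must already be enough free PEs at the end; pvmove first if not)
pvresize --setphysicalvolumesize 1800G /dev/md1

# 2) shrink the RAID5 array itself
#    (--size is the new PER-DEVICE size in KiB, not the array total)
mdadm --grow /dev/md1 --size=966367641

# 3) shrink /dev/sdb2 with fdisk: delete it, recreate it smaller at the
#    SAME starting cylinder with type fd, then add /dev/sdb3 in the gap
fdisk /dev/sdb
```

Getting any one of those numbers wrong destroys the array, which is the whole point of the warning above.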

The problem you will run into is that you need to get all of the sizes exactly right, and each layer rounds differently (cylinder size vs RAID chunk size vs LVM Physical Extent size). This is risky business and something I would not do. I doubt others would find it a good idea either, which is probably why you can't find examples of how to do it.
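To see how the layers round differently, here's a toy arithmetic sketch. The boundary sizes (cylinder, chunk, PE) come from the outputs earlier in the thread; the target size itself is made up:

```shell
# Illustration of layer-dependent rounding; no devices are touched.
CYL=8225280               # one fdisk cylinder: 255 heads * 63 sectors * 512 bytes
CHUNK=$((256 * 1024))     # RAID5 chunk size: 256k (from /proc/mdstat)
PE=$((32 * 1024 * 1024))  # LVM PE size: 32768 KByte (from pvdisplay)

TARGET=960000000000       # hypothetical new size in bytes

# Round the target DOWN to each layer's boundary
cyl_sz=$(( TARGET / CYL * CYL ))
chunk_sz=$(( TARGET / CHUNK * CHUNK ))
pe_sz=$(( TARGET / PE * PE ))

echo "cylinder-aligned: $cyl_sz"    # 959997104640
echo "chunk-aligned:    $chunk_sz"  # 959999901696
echo "PE-aligned:       $pe_sz"     # 959992299520
```

Three different answers for the same target, which is exactly the mismatch that makes a manual shrink risky.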

You'd be far better off creating a new swap LV in lvm_raid. You'd save some space over another RAID1 array, and RAID5 performance is not that much worse than RAID1. You could have this done in 5 minutes. Plus you would have far more flexibility to change your swap in the future; with your proposal, you're locked into the size of the partition.

Out of curiosity, can you add more memory to this system to avoid the need for swap? I just bought 8GB of Crucial memory for $94US.
 
Old 06-01-2011, 03:57 PM   #3
Saed.Abdu
LQ Newbie
 
Registered: Jun 2011
Posts: 14

Original Poster
Rep: Reputation: Disabled
tommylovell, thanks a lot, my friend, for taking the time to reply to my thread; it means a lot to me.

After trying out your proposal I feel like a fool for choosing the hard way ("mine"), but I didn't know that I could create swap on a RAID5 device! I didn't think of it.

I just need to clarify some points about my case. I'm trying to set up a reliable backup server that can survive the worst hardware-failure scenarios, so this was the plan:
put /boot and swap on two RAID1 volumes. Although I know it's better to put swap on a separate partition/disk for performance reasons, my server only has 4 disks, so I was compelled to put the swap partition on one of the RAID5 disks. That wasn't a wise approach, as it left my three RAID5 partitions unequally sized (I didn't pay attention to this).


Since I haven't sailed too far with this solution yet, I'm thinking about reinstalling the server after backing up my data, with the following in mind, knowing that I'll still need swap on RAID (5 or 1), since purchasing extra RAM won't happen for another month (budget issue)...

# What would your best practice be, and why? :-

1) Assign swap space on each disk and create an equally sized RAID5
OR
2) Create a swap LV within lvm_raid from the beginning

Thanks again for your help.
Saed

Last edited by Saed.Abdu; 06-01-2011 at 04:03 PM.
 
Old 06-01-2011, 06:33 PM   #4
tommylovell
Member
 
Registered: Nov 2005
Distribution: Raspbian, Debian, Ubuntu
Posts: 380

Rep: Reputation: 103Reputation: 103
Quote:
but I didn't know that I could create swap on a RAID5 device! I didn't think of it.
Well, there are a lot of different ways to do things. It's all part of the learning process.

You actually would be creating a swap space on an LVM Logical Volume, not directly on the RAID5 MD block device. (Think about it in layers. Real /dev/sdx devices on the bottom; RAID on top of that; then LVM on top of RAID.)

The advantage is that you can create and remove swap LVs easily on LVM with little forethought. It's hard to tell in advance just what your swap requirement might be. Putting the swap on LVM allows you to alter the sizes (add a new one, remove the old one, etc.).

Code:
lvcreate -L2G -n swap2 lvm_raid
mkswap /dev/mapper/lvm_raid-swap2
Add an entry to /etc/fstab, then:
Code:
swapon -a

Adding the entry to fstab and swapping on based on fstab content is better than just doing a temporary add (swapon /dev/mapper/lvm_raid-swap2), because you are assured that it will be activated properly at the next reboot.
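For example, the fstab entry and the verification would look roughly like this sketch (the LV name follows from the lvcreate above; check swapon's own output rather than trusting the sketch):

```shell
# append the new swap LV to /etc/fstab (standard swap entry format)
echo '/dev/mapper/lvm_raid-swap2 swap swap defaults 0 0' >> /etc/fstab

swapon -a   # activate everything listed in fstab
swapon -s   # verify: the new LV should appear alongside /dev/sda2
```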

Quote:
# What would your best practice be, and why? :-

1) Assign swap space on each disk and create an equally sized RAID5
OR
2) Create a swap LV within lvm_raid from the beginning
At work we always create swap on LVM, like your option 2). It's much more versatile. But we generally have adequate memory on our servers; in those instances where we run low on memory and start to use swap, the systems slow down but are still usable. Since our LVM is on top of RAID, it's resilient. (I do the same at home but have little need for swap.)
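As a sketch of that versatility (assuming the swap2 LV from the commands above exists): growing swap later is just a few commands, since swap is recreated rather than resized in place:

```shell
# grow an existing 2G swap LV to 4G (sketch; needs free space in the VG)
swapoff /dev/mapper/lvm_raid-swap2    # stop using it
lvresize -L 4G lvm_raid/swap2         # grow the LV
mkswap /dev/mapper/lvm_raid-swap2     # rebuild the swap signature
swapon /dev/mapper/lvm_raid-swap2     # put it back in service
```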

If your option 1) means putting swap directly on each disk, you're losing the resiliency you said was one of your goals.

Glad to help. Hope I answered your questions.

By the way, don't forget to write your bootloader to your /dev/sdb drive so that you can boot from it if /dev/sda fails.

tom
 
Old 06-02-2011, 04:46 AM   #5
Saed.Abdu
LQ Newbie
 
Registered: Jun 2011
Posts: 14

Original Poster
Rep: Reputation: Disabled
Well, thinking about it in layers as you suggested really cleared up the view.
I've already written my bootloader to /dev/sdb and tested it as well, using grub:
Code:
root (hd0,0)
setup (hd0)
root (hd1,0)
setup (hd1)
But Tom, how can I make use of the remaining space on the first disk (500 GB)? Or can you imagine a better partitioning scheme?
I'm sorry for asking so many questions; I just need to learn while communicating with minds like yours.
Regards
 
Old 06-02-2011, 08:48 AM   #6
tommylovell
Member
 
Registered: Nov 2005
Distribution: Raspbian, Debian, Ubuntu
Posts: 380

Rep: Reputation: 103Reputation: 103
Quote:
how can I make use of the remaining space on the first disk (500 GB)? Or can you imagine a better partitioning scheme?
That's a good question. There are two aspects to this: how would you lay it out on disk, and is that layout technically sound?

As you said you wanted this system to be resilient (RAID), that space would need to be placed into a RAID array.

One way to do it...

Code:
sda-500G     sdb-1T       sdc-1T       sdd-1T
250M<-RAID1->250M         unused       unused
470G<-RAID5->470G<-RAID5->470G<-RAID5->470G        first RAID5 array
             500G<-RAID5->500G<-RAID5->500G        second RAID5 array
That would give you a 250M RAID1; a 1410GB (3*470) RAID5; and a 1000GB (2*500) RAID5.

The 250M would be your /dev/md0 and /boot as it is now; the 1.4T would be /dev/md1 and the first PV in the lvm_raid VG; and the 1T /dev/md2 device would be the second PV in the lvm_raid VG. That'd give you a total of 2410GB of usable space.
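Once the partitions for that layout exist, wiring up the second array might look like this sketch (the partition names sdb3/sdc2/sdd2 are assumptions; use whatever fdisk actually created):

```shell
# create the second RAID5 from the three ~500G partitions (names assumed)
mdadm --create /dev/md2 --level=5 --raid-devices=3 \
      /dev/sdb3 /dev/sdc2 /dev/sdd2

pvcreate /dev/md2            # turn the new array into an LVM PV
vgextend lvm_raid /dev/md2   # add it as the second PV in the VG
```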

Another way to do it...

Code:
sda-1T      sdb-1T      sdc-1T      sdd-500G
500G        500G        500G        500G
470G        470G        470G
250M        250M        unused
Two PVs, 1500G and 940G (totaling 2440G) in the lvm_raid VG. Not much of a gain over the other option.

So that's how you could minimize wasted space. But the other question remains, "is this a good idea, technically?"

I don't know that answer. I have heard that people have had performance problems due to contention for disk access with layouts like this. I would suppose that it depends a lot upon how heavily used the two RAID5 arrays are; whether you've lost a disk and one (or both) RAID5 arrays are running in degraded mode; what types of controllers the disks are on (PATA, SATA, SAS, SCSI).

I think, because this question is much bigger than and different from the one you originally asked in this post, you should post it as a new question, something like "Is overlapping two RAID5 arrays on the same drives a bad idea?"

The text could be "I would like to place two RAID5 arrays on disk as shown below. Is this advisable? Will this create performance problems?

[code]
sda-500G sdb-1T sdc-1T sdd-1T
250M<-RAID1->250M unused unused

470G<-RAID5->470G<-RAID5->470G<-RAID5->470G first RAID5 array

500G<-RAID5->500G<-RAID5->500G second RAID5 array
[/code]

(The BB code tags [code] and [/code] make it more readable. See http://www.linuxquestions.org/questi....php?do=bbcode if you haven't already. They put the text in a "code:" box and give it a fixed font.)
It'll look like this.
Code:
sda-500G     sdb-1T       sdc-1T       sdd-1T
250M<-RAID1->250M         unused       unused

470G<-RAID5->470G<-RAID5->470G<-RAID5->470G        first RAID5 array

             500G<-RAID5->500G<-RAID5->500G        second RAID5 array
You'll be asking for opinions, but I value the opinions of others on this forum. There are a lot of people with a lot of experience. It is good to learn from your own mistakes, but better to learn from theirs...
Quote:
i`m sorry for asking too much questions , i just need to learn while communicating with minds like you
That's what we are here for. There are many, many, many questions that I can't answer. I help where I can. I'm certain that there will be opportunities where you can share your knowledge and experiences.

Last edited by tommylovell; 06-02-2011 at 08:50 AM.
 
Old 06-03-2011, 01:15 PM   #7
Saed.Abdu
LQ Newbie
 
Registered: Jun 2011
Posts: 14

Original Poster
Rep: Reputation: Disabled
Thumbs up

Quote:
Code:
sda-500G     sdb-1T       sdc-1T       sdd-1T
250M<-RAID1->250M         unused       unused
470G<-RAID5->470G<-RAID5->470G<-RAID5->470G        first RAID5 array
             500G<-RAID5->500G<-RAID5->500G        second RAID5 array
Thanks a lot, Tom, that was a very helpful suggestion! I'll go for it for sure.

Quote:
what types of controllers the disks are on (PATA, SATA, SAS, SCSI).
Well, it's a PowerEdge T110 with 4 SATA controllers; I bought it 2 weeks ago for a Linux backup implementation on the network.

Quote:
(The BB code tags [code] and [/code] make it more readable. See http://www.linuxquestions.org/questi....php?do=bbcode if you haven't already. They put the text in a "code:" box and give it a fixed font.)
It'll look like this.
I didn't have time to look through the FAQ section to learn these tags, as I was rushing for an answer regarding my situation. Thanks to Allah who made me find you, as you were such a great help on this forum.

Quote:
I think because this question is a much bigger and different question than the one you originally asked in this post you should post a new question something like "Is overlapping two RAID5 arrays on same drives a bad idea?"
The text could be "I would like to place two RAID5 arrays on disk as shown below. Is this advisable? Will this create performance problems?
Here is the new thread, as you suggested:
http://www.linuxquestions.org/questi...07#post4375407
Thanks again, Tom.

Last edited by Saed.Abdu; 06-03-2011 at 01:16 PM.
 
  


