LinuxQuestions.org
Welcome to the most active Linux Forum on the web.
Linux - Newbie This Linux forum is for members that are new to Linux.
Just starting out and have a question? If it is not in the man pages or the how-to's this is the place!

Old 03-24-2009, 06:32 AM   #1
lou248
LQ Newbie
 
Registered: May 2008
Posts: 2

Rep: Reputation: 0
In RHEL 5, is it possible to make a volume group on a mirrored disk?


Hi, I'm a bit new to Linux, so I'm wondering if anyone can help me. I have a RHEL 4 server that I want to upgrade to RHEL 5. I am planning to use LVM in RHEL 5 for the flexibility of adjusting partition sizes, but I also want the data mirrored onto another disk. Is this possible, or am I just making things too complicated?
 
Old 03-24-2009, 07:18 AM   #2
acid_kewpie
Moderator
 
Registered: Jun 2001
Location: UK
Distribution: Gentoo, RHEL, Fedora, Centos
Posts: 43,417

Rep: Reputation: 1974
LVM just needs a block device to work with; that can be a single disk, or a RAID array of any type, hardware or software.
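For example, a minimal sketch of LVM on top of a software RAID1 array (device and volume names here are made up for illustration, not from this thread):

```shell
# Hypothetical example: build a RAID1 array from two partitions,
# then use the array itself as an LVM physical volume.
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
pvcreate /dev/md1           # mark the array as an LVM physical volume
vgcreate vg01 /dev/md1      # create a volume group on top of it
lvcreate -L 2G -n lv_data vg01
mkfs.ext3 /dev/vg01/lv_data
```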
 
Old 03-24-2009, 07:22 AM   #3
reptiler
Member
 
Registered: Mar 2009
Location: Hong Kong
Distribution: Fedora
Posts: 184

Rep: Reputation: 41
LVM has, as far as I remember, its own way of doing mirroring.
If you use software RAID, I'd suggest having a look into this, as it might be interesting to compare the performance of LVM mirroring against LVM on RAID.

But of course it is possible to have LVM on a RAID.
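A minimal sketch of LVM's built-in mirroring (device, group, and volume names are my own examples; on RHEL 5-era LVM2 the -m mirror option needs either a third device for the mirror log or --corelog to keep the log in memory):

```shell
# Hypothetical example of a natively mirrored logical volume.
pvcreate /dev/sdb1 /dev/sdc1
vgcreate vg01 /dev/sdb1 /dev/sdc1
# -m 1 = keep one mirror copy; --corelog avoids needing a log device
lvcreate -L 1G -m 1 --corelog -n lv_mirror vg01
```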

Last edited by reptiler; 03-24-2009 at 07:23 AM.
 
Old 03-24-2009, 07:34 AM   #4
acid_kewpie
Moderator
 
Registered: Jun 2001
Location: UK
Distribution: Gentoo, RHEL, Fedora, Centos
Posts: 43,417

Rep: Reputation: 1974
Hmm, I wasn't aware of that... yeah, here it all is: http://www.redhat.com/docs/en-US/Red...d_volumes.html
 
Old 03-24-2009, 07:38 AM   #5
reptiler
Member
 
Registered: Mar 2009
Location: Hong Kong
Distribution: Fedora
Posts: 184

Rep: Reputation: 41
I found something on the CentOS site about LVM: http://www.centos.org/docs/5/html/Cl...LV_create.html
As CentOS practically is RHEL, it should be applicable as well; and anyway, LVM is LVM.

Out of curiosity, I will run some small-scale tests of LVM mirroring vs. software RAID.
 
Old 03-24-2009, 04:39 PM   #6
instrumentpilot
Member
 
Registered: May 2006
Posts: 34

Rep: Reputation: 2
Hi lou248, I'm rebuilding my web/database server and I just tested exactly what you are asking about with RHEL 5. During installation I created a RAID1 mirror using two disks. The first RAID device (md0) was for the /boot partition (because /boot cannot be on LVM). Then I created a second RAID1 mirror (md1) and put my LVM on that device. I could then create my filesystems on the LVM.

After I got the server up and running, I reached in and unplugged the power cord on one of the hard drives. Everything was fine. Then I took another hard drive, installed it, recreated the partitions, added them back to the RAID devices, and watched the new disk get rebuilt.

NOTE: I have not done any performance benchmarks on the server and don't intend to. It is working fine and I don't have enough volume to worry about it.

P.S. I can post my notes on this process if you'd like, but they are at home. Let me know if you're interested.

Michael Cunningham
RHCE
 
Old 03-25-2009, 07:46 AM   #7
reptiler
Member
 
Registered: Mar 2009
Location: Hong Kong
Distribution: Fedora
Posts: 184

Rep: Reputation: 41
I did some small-scale benchmarking using Bonnie++ in a VM.
The setup was as follows:
First I set up four image files of 1GB each (as my space in the VM is quite limited) and turned them into loopback devices using losetup.
The first two I put into a volume group and created a mirrored volume; the other two went into a software RAID.
After testing these setups, I set up LVM on top of the RAID, created a regular (non-mirrored) volume, and tested that too.
Everything was formatted with ext4.
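The setup described above could be reproduced with something like the following (a hedged reconstruction; the file names, sizes, and volume/array names are my assumptions, not the original commands):

```shell
# Create four 1GB backing files and attach them as loop devices.
for i in 0 1 2 3; do
    dd if=/dev/zero of=/tmp/disk$i.img bs=1M count=1024
    losetup /dev/loop$i /tmp/disk$i.img
done

# Loop devices 0 and 1: LVM mirrored volume (--corelog avoids a log device).
pvcreate /dev/loop0 /dev/loop1
vgcreate vgtest /dev/loop0 /dev/loop1
lvcreate -L 900M -m 1 --corelog -n lvmirror vgtest

# Loop devices 2 and 3: software RAID1.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/loop2 /dev/loop3

mkfs.ext4 /dev/vgtest/lvmirror
mkfs.ext4 /dev/md0
```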

disk.log:
Code:
Version 1.03e       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
easylfs        768M 18230  59 51355  17 67873  33 31115  95 1414186  88 +++++ +++
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 17106  81 +++++ +++ 29840  98 23275  99 +++++ +++ +++++ +++
easylfs,768M,18230,59,51355,17,67873,33,31115,95,1414186,88,+++++,+++,16,17106,81,+++++,+++,29840,98,23275,99,+++++,+++,+++++,+++
lvm.log:
Code:
Version 1.03e       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
easylfs        768M 23317  76 26471   8 12180  12 24706  86 146261  40  1746  14
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 19760  95 +++++ +++ 32152  56 +++++ +++ +++++ +++ +++++ +++
easylfs,768M,23317,76,26471,8,12180,12,24706,86,146261,40,1746.3,14,16,19760,95,+++++,+++,32152,56,+++++,+++,+++++,+++,+++++,+++
raid.log:
Code:
Version 1.03e       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
easylfs        768M 24005  82 22283   9 12229  10 19977  76 128129  27  2549  26
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 17543  87 +++++ +++ 19479  67 23143  95 +++++ +++ 18577  61
easylfs,768M,24005,82,22283,9,12229,10,19977,76,128129,27,2548.8,26,16,17543,87,+++++,+++,19479,67,23143,95,+++++,+++,18577,61
raidlvm.log:
Code:
Version 1.03e       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
easylfs        768M 16931  60  9305   4  8141   6 21314  75 132656  26  1550   6
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 20413  96 +++++ +++ 19228  64 22400  92 +++++ +++ 18104  59
easylfs,768M,16931,60,9305,4,8141,6,21314,75,132656,26,1549.7,6,16,20413,96,+++++,+++,19228,64,22400,92,+++++,+++,18104,59
 
Old 03-29-2009, 07:09 AM   #8
lou248
LQ Newbie
 
Registered: May 2008
Posts: 2

Original Poster
Rep: Reputation: 0
Quote:
Originally Posted by instrumentpilot View Post
Hi lou248, I'm rebuilding my web/database server and I just tested exactly what you are asking about with RHEL 5. During installation I created a RAID1 mirror using two disks. The first RAID device (md0) was for the /boot partition (because /boot cannot be on LVM). Then I created a second RAID1 mirror (md1) and put my LVM on that device. I could then create my filesystems on the LVM.

After I got the server up and running, I reached in and unplugged the power cord on one of the hard drives. Everything was fine. Then I took another hard drive, installed it, recreated the partitions, added them back to the RAID devices, and watched the new disk get rebuilt.

NOTE: I have not done any performance benchmarks on the server and don't intend to. It is working fine and I don't have enough volume to worry about it.

P.S. I can post my notes on this process if you'd like, but they are at home. Let me know if you're interested.

Michael Cunningham
RHCE
Hi Michael, thanks for the info. If it is not too much of a bother, I would like to know how you did it. One more thing: when you're creating an LVM and VG, can you use the entire size of the physical disk? Thanks again.

Lou
 
Old 03-30-2009, 01:32 PM   #9
instrumentpilot
Member
 
Registered: May 2006
Posts: 34

Rep: Reputation: 2
Quote:
Originally Posted by lou248 View Post
Hi Michael, thanks for the info. If it is not too much of a bother, I would like to know how you did it. One more thing: when you're creating an LVM and VG, can you use the entire size of the physical disk? Thanks again.

Lou
Lou, I just got back in town and this info is at home. I'll get it for you tonight. You can use "almost" the whole disk. The /boot partition cannot be on LVM. I made mine 256M, which is more than enough. The remaining part of each disk went to the LVM. More tonight.

Michael
 
Old 03-31-2009, 01:47 AM   #10
instrumentpilot
Member
 
Registered: May 2006
Posts: 34

Rep: Reputation: 2
Protecting Linux with Software Raid

Quote:
Originally Posted by lou248 View Post
Hi Michael, thanks for the info. If it is not too much of a bother, I would like to know how you did it. One more thing: when you're creating an LVM and VG, can you use the entire size of the physical disk? Thanks again.

Lou
OK, here goes. I'll write it as a step-by-step process.

Assumptions
The computer has two separate hard drives that are NOT configured with hardware RAID. For demonstration purposes the first hard drive will be /dev/hda and the second will be /dev/hdd (because that's what it is on my system).

Installation
During installation use the custom disk configuration and do the following:
The first RAID device will be for the /boot partition and consists of two RAID partitions - one on each of the hard drives. This involves creating a RAID partition on /dev/hda and another separate RAID partition on /dev/hdd. Then create a RAID1 device (/dev/md0) from the two partitions and assign it the /boot mount point. This sounds confusing, but you should see what I'm talking about when using the GUI for custom disk configuration.

The second RAID device is very similar, but you can use all the remaining space on each of the drives (I'm assuming two drives of the same size). Follow the same procedure as above until you have a RAID1 device named /dev/md1. Then create an LVM volume group using the /dev/md1 device. During this step you can name your volume group whatever you wish; I'll use the name vg01. On this volume group I created the following logical volumes and filesystems (all ext3, except for swap of course).
Code:
FileSystem    LVName    Size(M)
/             lv.root      1024
/home         lv.home       512
/var          lv.var        512
/usr          lv.usr       2048
swap          lv.swap      2048
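The layout in the table above could also be created from the command line with something along these lines (a sketch using the vg01 name from this walkthrough; the installer GUI does the equivalent for you):

```shell
# Create each logical volume at the size listed in the table.
lvcreate -L 1024M -n lv.root vg01
lvcreate -L 512M  -n lv.home vg01
lvcreate -L 512M  -n lv.var  vg01
lvcreate -L 2048M -n lv.usr  vg01
lvcreate -L 2048M -n lv.swap vg01
mkswap /dev/vg01/lv.swap    # swap is formatted differently from ext3
```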
Post Installation Testing
Step 0) (optional)
If you'd like, you can configure /etc/mdadm.conf with the following parameter so an email will be sent upon any failure of the software RAID devices.
Code:
MAILADDR emailaddr@domain.com
Step 1)
Examine the software RAID configuration with the following commands. Make any notes you want.
Code:
mdadm --detail /dev/md0
mdadm --detail /dev/md1
cat /proc/mdstat
Step 2)
Reach in and pull the power cord off one of the drives. At this point you should get two emails. If you skipped Step 0 above, check the root user's mail for the RAID failure messages.
Step 3)
Use the mdadm --detail commands from Step 1 above to find which partition has failed on each of the RAID devices. For explanation purposes I'll use /dev/hdd1 and /dev/hdd2 as the failed partitions.
Step 4)
Remove the failed partitions from the RAID devices.
Code:
mdadm --remove /dev/md0 /dev/hdd1
mdadm --remove /dev/md1 /dev/hdd2
Step 5)
Shut down and install a new hard drive.
For my test I reconnected the old drive and fdisk'ed it to death.
Step 6)
Create two partitions on the new drive of sufficient size. This does not have to be exact, but the new partitions need to be at least as big as the existing ones.
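If both disks are identical MBR disks, one common shortcut for this step is to copy the surviving drive's partition table to the replacement (assuming, as in this thread, that /dev/hda is the good disk and /dev/hdd the new one - double-check the device names before running this):

```shell
# Dump the good disk's partition table and write it to the new disk.
sfdisk -d /dev/hda | sfdisk /dev/hdd
```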
Step 7)
Add the partitions on the new drive to the RAID devices.
I like to start with the largest first so I don't make a mistake. Linux will give an error if the partition being added is not large enough. For example, if I were to accidentally try to add /dev/hdd1 to /dev/md1 I would get an error.
Code:
mdadm -a /dev/md1 /dev/hdd2
mdadm -a /dev/md0 /dev/hdd1
Step 8)
Monitor the progress of the RAID1 rebuild with the following command. This can take some time on the larger RAID device.
Code:
cat /proc/mdstat
Let me know if anything is confusing and I'll try to do better.

Michael
 
Old 03-31-2009, 03:21 AM   #11
maxy7710
Member
 
Registered: Jan 2008
Location: Mumbai, india
Distribution: REDHAT, FEDORA, SUSE, UBUNTU, ORACLE ENTERPRISE LINUX & SOLARIS 10
Posts: 130

Rep: Reputation: 17
Michael is spot on.
Bravo!
 
  

