LinuxQuestions.org (/questions/)
-   Linux - Newbie (https://www.linuxquestions.org/questions/linux-newbie-8/)
-   -   In rhel 5, is it possible to make a volume group on a mirrored disk? (https://www.linuxquestions.org/questions/linux-newbie-8/in-rhel-5-is-it-possible-to-make-a-volume-group-on-a-mirrored-disk-714027/)

lou248 03-24-2009 05:32 AM

In RHEL 5, is it possible to make a volume group on a mirrored disk?
 
Hi, I'm a bit new to Linux, so I'm wondering if anyone can help me. I have a RHEL 4 server that I want to upgrade to RHEL 5. I am planning to use LVM in RHEL 5 for the flexibility of resizing partitions, but I also want everything mirrored onto another disk. Is this possible, or am I just making things too complicated?

acid_kewpie 03-24-2009 06:18 AM

LVM just needs a block device to work with; that can be a single disk or a RAID array of any kind, hardware or software.
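
For instance (a quick sketch; the device names are just placeholders), either of these works as an LVM physical volume:
Code:

pvcreate /dev/sdb    # a plain disk
pvcreate /dev/md1    # a software RAID1 array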

reptiler 03-24-2009 06:22 AM

LVM has, as far as I remember, its own way of doing mirroring.
If you're considering software RAID, I'd suggest having a look at LVM mirroring too, as it might be interesting to compare its performance against LVM on RAID.

But of course it is possible to have LVM on a RAID.
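
As a rough sketch of LVM's own mirroring (the volume group and device names here are made up), a mirrored logical volume is created with lvcreate's --mirrors option:
Code:

# two physical volumes in one volume group
pvcreate /dev/sdb1 /dev/sdc1
vgcreate vgdata /dev/sdb1 /dev/sdc1
# one mirrored logical volume; --mirrorlog core keeps the mirror log in
# memory so a third device is not needed for it
lvcreate --mirrors 1 --mirrorlog core -L 10G -n lvmirror vgdata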

acid_kewpie 03-24-2009 06:34 AM

Hmm, I wasn't aware of that... yeah, here it all is: http://www.redhat.com/docs/en-US/Red...d_volumes.html

reptiler 03-24-2009 06:38 AM

I found something on the CentOS site about LVM: http://www.centos.org/docs/5/html/Cl...LV_create.html
As CentOS is practically RHEL it should be applicable as well, and anyway, LVM is LVM. ;)

I guess, just out of curiosity, I will run some small-scale tests: LVM mirroring vs. software RAID.

instrumentpilot 03-24-2009 03:39 PM

Hi lou248, I'm rebuilding my web/database server and I just tested what you are wanting with RHEL5. While installing I created a RAID1 mirror using 2 disks. The first RAID device (md0) was for the /boot partition (because /boot cannot be on an LVM). Then I created a second RAID1 mirror (md1) and put my LVM on that device. I could then create my filesystems on the LVM.

After I got the server up and running, I reached in and unplugged the power cord on one of the hard drives. Everything was fine. Then I took another hard drive, installed it, recreated the partitions, added the partitions back to the RAID devices and watched the new disk get rebuilt.

NOTE: I have not done any performance benchmarks on the server and don't intend to. It is working fine and I don't have enough volume to worry about it.

P.S. I can post my notes on this process if you'd like, but they are at home. Let me know if you're interested.

Michael Cunningham
RHCE

reptiler 03-25-2009 06:46 AM

I did a little small-scale benchmarking using Bonnie++ in a VM.
The setup was as follows:
First I set up four image files of 1GB each (as my space in the VM is quite limited) and turned them into loopback devices using losetup.
The first two went into a volume group, where I created a mirrored volume; the other two went into a software RAID.
After the tests with these setups I set up LVM on top of the RAID, created a regular (non-mirrored) volume and tested that too.
Everything was formatted with ext4.
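
Roughly, the setup looked like this (a sketch reconstructed from the description above; file names, device names and exact sizes are assumptions):
Code:

# four 1GB image files attached as loop devices
for i in 1 2 3 4; do
  dd if=/dev/zero of=img$i bs=1M count=1024
  losetup /dev/loop$((i-1)) img$i
done

# LVM mirror on the first two
pvcreate /dev/loop0 /dev/loop1
vgcreate vgtest /dev/loop0 /dev/loop1
lvcreate --mirrors 1 --mirrorlog core -L 900M -n lvmirror vgtest

# software RAID1 on the other two
mdadm --create /dev/md9 --level=1 --raid-devices=2 /dev/loop2 /dev/loop3

mkfs.ext4 /dev/vgtest/lvmirror
mkfs.ext4 /dev/md9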

disk.log:
Code:

Version 1.03e      ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
easylfs        768M 18230  59 51355  17 67873  33 31115  95 1414186  88 +++++ +++
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                16 17106  81 +++++ +++ 29840  98 23275  99 +++++ +++ +++++ +++
easylfs,768M,18230,59,51355,17,67873,33,31115,95,1414186,88,+++++,+++,16,17106,81,+++++,+++,29840,98,23275,99,+++++,+++,+++++,+++

lvm.log:
Code:

Version 1.03e      ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
easylfs        768M 23317  76 26471  8 12180  12 24706  86 146261  40  1746  14
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                16 19760  95 +++++ +++ 32152  56 +++++ +++ +++++ +++ +++++ +++
easylfs,768M,23317,76,26471,8,12180,12,24706,86,146261,40,1746.3,14,16,19760,95,+++++,+++,32152,56,+++++,+++,+++++,+++,+++++,+++

raid.log:
Code:

Version 1.03e      ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
easylfs        768M 24005  82 22283  9 12229  10 19977  76 128129  27  2549  26
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                16 17543  87 +++++ +++ 19479  67 23143  95 +++++ +++ 18577  61
easylfs,768M,24005,82,22283,9,12229,10,19977,76,128129,27,2548.8,26,16,17543,87,+++++,+++,19479,67,23143,95,+++++,+++,18577,61

raidlvm.log:
Code:

Version 1.03e      ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
easylfs        768M 16931  60  9305  4  8141  6 21314  75 132656  26  1550  6
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                16 20413  96 +++++ +++ 19228  64 22400  92 +++++ +++ 18104  59
easylfs,768M,16931,60,9305,4,8141,6,21314,75,132656,26,1549.7,6,16,20413,96,+++++,+++,19228,64,22400,92,+++++,+++,18104,59


lou248 03-29-2009 06:09 AM

Quote:

Originally Posted by instrumentpilot (Post 3486457)
Hi lou248, I'm rebuilding my web/database server and I just tested what you are wanting with RHEL5. While installing I created a RAID1 mirror using 2 disks. The first RAID device (md0) was for the /boot partition (because /boot cannot be on an LVM). Then I created a second RAID1 mirror (md1) and put my LVM on that device. I could then create my filesystems on the LVM.

After I got the server up and running, I reached in and unplugged the power cord on one of the hard drives. Everything was fine. Then I took another hard drive, installed it, recreated the partitions, added the partitions back to the RAID devices and watched the new disk get rebuilt.

NOTE: I have not done any performance benchmarks on the server and don't intend to. It is working fine and I don't have enough volume to worry about it.

P.S. I can post my notes on this process if you'd like, but they are at home. Let me know if you're interested.

Michael Cunningham
RHCE

Hi Michael, thanks for the info. If it is not too much of a bother, I would like to know how you did it. One more thing: when you're creating an LVM volume group, can you use the entire size of the physical disk? Thanks again.

Lou

instrumentpilot 03-30-2009 12:32 PM

Quote:

Originally Posted by lou248 (Post 3491391)
Hi Michael, thanks for the info. If it is not too much of a bother, I would like to know how you did it. One more thing: when you're creating an LVM volume group, can you use the entire size of the physical disk? Thanks again.

Lou

Lou, I just got back in town and this info is at home. I'll get it for you tonight. You can use "almost" the whole disk. The /boot partition cannot be on LVM; I made mine 256M, which is more than enough. The remaining part of each disk went to the LVM. More tonight.
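
In other words the layout ends up looking roughly like this (a sketch; device names and partition types are examples):
Code:

/dev/hda1  256M  Linux raid autodetect  ->  /dev/md0  ->  /boot
/dev/hda2  rest  Linux raid autodetect  ->  /dev/md1  ->  LVM physical volume
(the second disk gets the same two partitions)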

Michael

instrumentpilot 03-31-2009 12:47 AM

Protecting Linux with Software Raid
 
Quote:

Originally Posted by lou248 (Post 3491391)
Hi Michael, thanks for the info. If it is not too much of a bother, I would like to know how you did it. One more thing: when you're creating an LVM volume group, can you use the entire size of the physical disk? Thanks again.

Lou

OK, here it goes. I'll write it up as a step-by-step process.

Assumptions
The computer has two separate hard drives that are NOT configured with hardware RAID. For purposes of demonstration the first hard drive will be /dev/hda and the second will be /dev/hdd (because that's what it is on my system).

Installation
During installation use the custom disk configuration and do the following:
The first raid device will be for the /boot partition and consist of two raid partitions – one on each of the hard drives. This involves creating a raid partition on /dev/hda and another separate raid partition on /dev/hdd. Then create a RAID1 device (/dev/md0) with the two partitions and assign it the /boot mount point. This sounds confusing, but you should see what I'm talking about when using the GUI for custom disk config.

The second raid device is very similar, but you can use all the remaining space on each of the drives (I'm assuming 2 drives of the same size). Follow the same procedure as above until you have a RAID1 device named /dev/md1. Then create an LVM physical volume and volume group using the /dev/md1 device. During this step you can name your volume group whatever you wish, but I'll use the name vg01. On this volume group I created the following logical volumes and filesystems (all ext3 except for swap, of course).
Code:

FileSystem    LVName     Size(M)
/             lv.root    1024
/home         lv.home    512
/var          lv.var     512
/usr          lv.usr     2048
swap          lv.swap    2048

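For reference, the equivalent command-line steps would look roughly like this (a sketch; during installation the GUI does all of this for you):
Code:

# RAID1 devices built from the partitions on both disks
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/hda1 /dev/hdd1
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/hda2 /dev/hdd2

# LVM on top of the second RAID device
pvcreate /dev/md1
vgcreate vg01 /dev/md1
lvcreate -L 1024M -n lv.root vg01
lvcreate -L 512M  -n lv.home vg01
lvcreate -L 512M  -n lv.var  vg01
lvcreate -L 2048M -n lv.usr  vg01
lvcreate -L 2048M -n lv.swap vg01

# filesystems (ext3 everywhere, swap for the swap LV)
mkfs.ext3 /dev/md0
for lv in lv.root lv.home lv.var lv.usr; do mkfs.ext3 /dev/vg01/$lv; done
mkswap /dev/vg01/lv.swap
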
Post Installation Testing
Step 0) (optional)
If you'd like, you can configure the /etc/mdadm.conf file with the following parameter so that an email will be sent upon any failure of the software RAID devices.
Code:

MAILADDR emailaddr@domain.com
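
As a quick sanity check (an optional extra, not one of the original steps), mdadm can send a test alert to verify the address works:
Code:

mdadm --monitor --scan --oneshot --test
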
Step 1)
Examine the software raid configurations with the following commands. Make any notes you want.
Code:

mdadm --detail /dev/md0
mdadm --detail /dev/md1
cat /proc/mdstat

Step 2)
Reach in and pull the power cord off one of the drives. At this point you should get 2 emails. If you did not do anything with Step 0 above, then check the root user's mail for the raid failure emails.
Step 3)
Use the mdadm --detail commands from Step 1 above to find which partition has failed for each of the raid devices. For explanation purposes I'll use /dev/hdd1 and /dev/hdd2 as the failed partitions.
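The failed partition shows up marked with (F) in /proc/mdstat. If the kernel has not already flagged a partition as faulty, it may need to be marked failed by hand before it can be removed (a sketch using the example device names):
Code:

mdadm --fail /dev/md0 /dev/hdd1
mdadm --fail /dev/md1 /dev/hdd2
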
Step 4)
Remove the failed partitions from the raid devices.
Code:

mdadm --remove /dev/md0 /dev/hdd1
mdadm --remove /dev/md1 /dev/hdd2

Step 5)
Shutdown and install a new hard drive.
For my test I reconnected the old drive and fdisk'ed it to death.
Step 6)
Create two partitions on the new drive of sufficient size. This does not have to be exact, but the new partitions need to be 'at least' as big as the existing partitions.
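If the new drive is the same size as (or larger than) the surviving one, one quick way to do this is to copy the partition table across from the good disk (a sketch; double-check the device names first, since swapping them would overwrite the good disk):
Code:

sfdisk -d /dev/hda | sfdisk /dev/hdd
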
Step 7)
Add the partitions on the new drive to the raid devices.
I like to start with the largest first so I don't make a mistake. Linux will give an error if the partition being added is not large enough. For example, if I were to accidentally try to add /dev/hdd1 to /dev/md1 I would get an error.
Code:

mdadm -a /dev/md1 /dev/hdd2
mdadm -a /dev/md0 /dev/hdd1

Step 8)
Monitor the progress of the RAID1 rebuild with the following command. This can take some time on the larger raid device.
Code:

cat /proc/mdstat
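
To watch the rebuild percentage update continuously, you can wrap that in watch (an optional extra, assuming the watch utility is installed):
Code:

watch -n 5 cat /proc/mdstat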

Let me know if anything is confusing and I'll try to do better.

Michael

maxy7710 03-31-2009 02:21 AM

Michael is spot on.
Bravo!

