LinuxQuestions.org
Linux - Server This forum is for the discussion of Linux Software used in a server related context.

Old 02-09-2011, 01:34 PM   #1
jmazaredo
LQ Newbie
 
Registered: Feb 2011
Posts: 4

Rep: Reputation: 0
Software Raid on Existing Lvm


Hi

Currently I have 3 hard drives

2 x 10GB (almost the same size)
1 x 20GB

I have a layout of

(10.2gb)
/dev/hda1 boot 104391 83 Linux
/dev/hda2 9912105 8e Linux LVM

(10.1gb)
/dev/hdb1 9873328+ 8e Linux LVM


(20.4gb)
(unpartitioned)


The two 10GB drives are set up as LVM, and I want to make a RAID1 using the 20GB drive.

Almost everything I find on the internet sets up the RAID1 first, before LVM.

Thanks in advance.
 
Old 02-11-2011, 09:15 AM   #2
tommylovell
Member
 
Registered: Nov 2005
Distribution: Raspbian, Debian, Ubuntu
Posts: 386

Rep: Reputation: 105Reputation: 105
I'm not clear on what you are trying to do. I think that is why no one has responded yet. But I'll give it a shot.

Are you trying to use LVM Logical Volumes to create a software RAID (MD) device?

It may be possible that with enough work you could do that. But it would be complex (you probably would need to make changes to the initialization in your initrd/initramfs so that RAID and LVM get activated in the proper sequence); unreliable (if you lost a disk, the system would not come up without it); and you would lose one big advantage of LVM (the ability to resize).

It would also depend on whether mdadm would let you use an LVM LV as a block device. (I'll have to try an experiment to see if that is even possible.)

The standard way of mixing software RAID and LVM is to have your real block devices on the bottom, with RAID on top of that, and then have LVM on top of the RAID MD device. And that works quite well.
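To make that layering concrete, here's a minimal sketch of building such a stack from scratch. The device and volume names are examples only, and these commands destroy any data on the named partitions:

```shell
# Example only: mirror two equal-size partitions, then layer LVM on top.
# WARNING: destroys any data on the named partitions.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
pvcreate /dev/md0                 # make the MD device an LVM Physical Volume
vgcreate vg0 /dev/md0             # Volume Group on top of the mirror
lvcreate -L 10G -n lv_data vg0    # carve out a Logical Volume
mkfs.ext3 /dev/vg0/lv_data        # filesystem goes on the LV
```

With that ordering, a failed disk degrades the mirror underneath LVM, and LVM itself never notices.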

Here's one of my home systems, if you are curious.
Code:
[root@athlonz initrd]# fdisk -l

Disk /dev/sda: 1500.3 GB, 1500300828160 bytes
255 heads, 63 sectors/track, 182401 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00031558

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          13      104391   83  Linux
/dev/sda2              14        3126    25005172+  fd  Linux raid autodetect
/dev/sda3            3127      182401  1440026437+   5  Extended
/dev/sda5            3127      182401  1440026406   fd  Linux raid autodetect

Disk /dev/sdb: 1500.3 GB, 1500300828160 bytes
255 heads, 63 sectors/track, 182401 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x0002a7c0

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1   *           1          13      104391   83  Linux
/dev/sdb2              14        3126    25005172+  fd  Linux raid autodetect
/dev/sdb3            3127      182401  1440026437+   5  Extended
/dev/sdb5            3127      182401  1440026406   fd  Linux raid autodetect
Code:
[root@athlonz initrd]# cat /proc/mdstat 
Personalities : [raid1] [raid6] [raid5] [raid4] 
md1 : active raid1 sda5[0] sdb5[1]
      1440026304 blocks [2/2] [UU]
      
md0 : active raid1 sda2[0] sdb2[1]
      25005056 blocks [2/2] [UU]
      
unused devices: <none>
Code:
[root@athlonz initrd]# pvs
  PV         VG    Fmt  Attr PSize  PFree 
  /dev/md0   vgz00 lvm2 a-   23.84G 32.00M
  /dev/md1   vgz01 lvm2 a-    1.34T  7.28G
Code:
[root@athlonz initrd]# lvs
  LV      VG    Attr   LSize   Origin Snap%  Move Log Copy%  Convert
  lvz00   vgz00 -wi-ao   1.00G                                      
  lvz01   vgz00 -wi-ao  22.81G                                      
  hfsplus vgz01 -wi-a-  32.00M                                      
  lvz00   vgz01 -wi-ao 536.00G                                      
  mac     vgz01 -wi-ao 500.00G                                      
  maclv   vgz01 -wi-ao 300.00G                                      
  t       vgz01 -wi-a-   8.00G                                      
  temp    vgz01 -wi-a-   2.00G                                      
  tommy   vgz01 -wi-ao  20.00G                                      
[root@athlonz initrd]#
Code:
[root@athlonz initrd]# cat /etc/fstab

#
# /etc/fstab
# Created by anaconda on Mon Feb 15 02:00:36 2010
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or vol_id(8) for more info
#
UUID=54716048-c91b-4f48-b37e-fb09ce21e412 /     ext3    defaults        1 1
UUID=1fd8498a-868b-46b6-8ced-01155b0c5962 /bkup ext3    defaults        1 2
UUID=97bcf411-5558-4cf9-9aef-5108fa686246 /boot ext3    defaults        1 2
UUID=59d3649f-4c9d-46b2-94c8-b7d29578328e /boot2 ext3   defaults        1 2
tmpfs                   /dev/shm                tmpfs   defaults        0 0
devpts                  /dev/pts                devpts  gid=5,mode=620  0 0
sysfs                   /sys                    sysfs   defaults        0 0
proc                    /proc                   proc    defaults        0 0
UUID=b827e20f-db5b-47b2-be38-17489895a0ad swap  swap    defaults        0 0
/dev/mapper/vgz01-tommy /home/tommy             ext3    defaults        1 2
[root@athlonz initrd]#
Code:
[root@athlonz initrd]# mount
/dev/mapper/vgz00-lvz01 on / type ext3 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
/dev/mapper/vgz01-lvz00 on /bkup type ext3 (rw)
/dev/sda1 on /boot type ext3 (rw)
/dev/sdb1 on /boot2 type ext3 (rw)
tmpfs on /dev/shm type tmpfs (rw)
/dev/mapper/vgz01-tommy on /home/tommy type ext3 (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
nfsd on /proc/fs/nfsd type nfsd (rw)
gvfs-fuse-daemon on /home/tommy/.gvfs type fuse.gvfs-fuse-daemon (rw,nosuid,nodev,user=tommy)
[root@athlonz initrd]#
You could probably pair up your two 10GB drives like this, but I don't think you could get to 20GB of redundant disk space.
 
Old 02-11-2011, 09:39 AM   #3
tommylovell
Member
 
Registered: Nov 2005
Distribution: Raspbian, Debian, Ubuntu
Posts: 386

Rep: Reputation: 105Reputation: 105
Just for anyone that is curious, you can build a RAID1 device from an LVM Logical Volume. I can't see any benefit or use for this...
Code:
[root@athlonz ~]#  mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/mapper/vgz01-hfsplus missing
mdadm: array /dev/md2 started.
[root@athlonz ~]# cat /proc/mdstat 
Personalities : [raid1] [raid6] [raid5] [raid4] 
md2 : active raid1 dm-3[0]
      32704 blocks [2/1] [U_]
      
md1 : active raid1 sda5[0] sdb5[1]
      1440026304 blocks [2/2] [UU]
      
md0 : active raid1 sda2[0] sdb2[1]
      25005056 blocks [2/2] [UU]
      
unused devices: <none>
[root@athlonz ~]#
 
Old 02-12-2011, 05:42 AM   #4
jmazaredo
LQ Newbie
 
Registered: Feb 2011
Posts: 4

Original Poster
Rep: Reputation: 0
Thanks for the reply. So I should set up the RAID first, before the LVM: is that the way to go?

What would you suggest if I have an existing LVM (2 disk drives) and want to RAID1 it? Should I get 2 more drives and create one RAID array for each?

Thanks for your reply!
 
Old 02-12-2011, 01:20 PM   #5
netmar
LQ Newbie
 
Registered: Jul 2004
Location: Durham, NC
Distribution: Ubuntu 10.04 (I'd rather use Gentoo)
Posts: 23

Rep: Reputation: 3
That would be the best solution. RAID really works best with identical sets of disks, so if you're planning on using RAID1, then having two pairs of disks (which you can then build on with LVM) is definitely the way to go.

Akin
 
Old 02-12-2011, 04:34 PM   #6
tommylovell
Member
 
Registered: Nov 2005
Distribution: Raspbian, Debian, Ubuntu
Posts: 386

Rep: Reputation: 105Reputation: 105
If you get another 20GB drive, you can pair up your two 10GB drives as one RAID1 array and the two 20GB drives as a second RAID1 array.

You may be able to "shuffle around" what you have now and not have to reinstall. BUT I WOULD RECOMMEND BACKING UP EVERYTHING OF IMPORTANCE FIRST!!! Just in case...

Is there data on both of your LVM Physical Volumes or just on the first?

'pvdisplay --maps' will tell you. If you could post the output of that, the output of a 'pvs' and the output of a 'vgs' it would be helpful.

Last edited by tommylovell; 02-12-2011 at 11:35 PM.
 
Old 02-13-2011, 07:08 AM   #7
jmazaredo
LQ Newbie
 
Registered: Feb 2011
Posts: 4

Original Poster
Rep: Reputation: 0
Thumbs up

There is no important data on the drives; I'm just doing some tests. Thanks for the replies!
 
Old 02-13-2011, 12:46 PM   #8
tommylovell
Member
 
Registered: Nov 2005
Distribution: Raspbian, Debian, Ubuntu
Posts: 386

Rep: Reputation: 105Reputation: 105
I'M HOPING OTHERS WILL REVIEW THIS AND ADD COMMENTS AND ESPECIALLY CORRECTIONS. There is risk in doing this!

So, the first thing you need to know is that depending on what level of LVM2 you are on, you can sometimes run into bugs. I have, too many times, but those bugs manifested themselves on very large volume groups, like a 3.1 terabyte volume group composed of 124 25GB SAN LUNs. (LVM1 is very old now and I wouldn't attempt to do anything with it!)

And because you can sometimes run into difficulty, either bugs or procedural errors in what you are doing, you really need to have a good backup of your data and be prepared to reload Linux from scratch. THIS IS REALLY IMPORTANT! Plan for failure.

That out of the way, another thing you need to know is that Linux Software RAID (aka MD, or the Multiple Device driver) writes its "superblock" at the end of the devices comprising the RAID array (it usually uses the last 128K of each device, that's why a /dev/mdX device is slightly smaller than the /dev/hdX or /dev/sdX devices that it is made of). LVM writes (by default) its metadata at the beginning of each physical volume that you add to LVM. That's why you need to create your MD device first then add it to LVM.
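As a side note, that end-of-device placement holds for the old 0.90 (and 1.0) metadata formats; the newer 1.1/1.2 superblocks sit near the start of the device instead, but "MD first, then LVM" is the right ordering either way. You can see the effect yourself (device names here are hypothetical):

```shell
# Hypothetical devices: compare sizes and inspect the RAID superblock.
blockdev --getsize64 /dev/sdb2    # raw size of the member partition
blockdev --getsize64 /dev/md0     # slightly smaller: space reserved for the superblock
mdadm --examine /dev/sdb2         # dumps the superblock written on the member
```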

In your case if /dev/hdb1 is unused, you can remove it from LVM;
Code:
vgreduce <vgname> /dev/hdb1
pvremove /dev/hdb1
partition /dev/hdb the same as /dev/hda;
either use 'fdisk /dev/hdb' and make it look like hda, or try
Code:
sfdisk -d /dev/hda > table
sfdisk /dev/hdb < table
use 'fdisk' to change the partition type of /dev/hdb2 to 'fd' (Linux raid autodetect)

create a "one device" RAID1 array out of it, /dev/md0;
Code:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/hdb2 missing
echo 'DEVICE /dev/hd*[0-9] /dev/sd*[0-9]' > /etc/mdadm.conf
mdadm --detail --scan >> /etc/mdadm.conf
add /dev/md0 to LVM;
Code:
pvcreate /dev/md0
vgextend <vgname> /dev/md0
move the LVM data off of /dev/hda2 onto /dev/md0;
Code:
pvmove -i15 -v /dev/hda2 /dev/md0
remove /dev/hda2 from LVM;
Code:
vgreduce <vgname> /dev/hda2
pvremove /dev/hda2
add /dev/hda2 to the RAID1 array, /dev/md0;
Code:
mdadm --manage /dev/md0 --add /dev/hda2
update the /dev/md0 entry in /etc/mdadm.conf to reflect the newly added device. If it has a UUID= parameter instead of explicit devices you are already set and no change is needed.

watch it sync the two devices;
Code:
cat /proc/mdstat
use 'fdisk' to change the partition type of /dev/hda2 to 'fd' (Linux raid autodetect)

Simple.

Then you'll want to make the hdb drive bootable on its own.

format hdb1

'mkdir /boot2'

'mount /dev/hdb1 /boot2'

'rsync -av /boot/ /boot2/' (note the trailing slashes, so the contents of /boot land directly in /boot2)

and write the bootloader code to /dev/hdb
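The steps above can be sketched as one sequence. GRUB legacy is assumed here (this thread predates GRUB 2 on many distros), and the `device` remapping tells GRUB to treat hdb as the first BIOS disk when installing; adjust names to your setup:

```shell
# Rehearsal of the steps above (GRUB legacy assumed; adjust device names).
mkfs.ext3 /dev/hdb1               # format the second /boot partition
mkdir /boot2
mount /dev/hdb1 /boot2
rsync -av /boot/ /boot2/          # trailing slashes: copy contents, not the directory
# write GRUB to hdb's MBR, mapping hdb as the first BIOS disk:
grub --batch <<'EOF'
device (hd0) /dev/hdb
root (hd0,0)
setup (hd0)
EOF
```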


You should be able to do this, or something very similar, to turn your underlying partitions into a RAID1 array.

There are variations you can do on this. You don't have to make the second drive bootable. You could take your third drive, partition it with two 10GB partitions; pair up /dev/hdc1 with the "hda partition" to make an "md0 array"; and pair up /dev/hdc2 with the "hdb partition" to make an "md1 array". (I'm not fond of this, but it'd work.) And there are other permutations.

It would be preferable if you got a drive to match your third drive and create another raid array out of it (/dev/md1), and then add it to the same LVM Volume Group or make a new volume group out of it.
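A sketch of that, assuming the new drive shows up as /dev/hdd and is partitioned like the existing 20GB drive with a single type-'fd' partition (names hypothetical):

```shell
# Hypothetical: new disk is /dev/hdd, partitioned like /dev/hdc (type 'fd').
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/hdc1 /dev/hdd1
pvcreate /dev/md1
vgextend <vgname> /dev/md1        # grow the existing Volume Group, or...
# vgcreate vgnew /dev/md1         # ...make a separate Volume Group instead
```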

Take the time to understand the steps you are doing and make sure they make sense before you do them.

And if worse comes to worst, you have your backups and can start from scratch.

Good luck.

Last edited by tommylovell; 02-13-2011 at 02:11 PM.
 
  


