Old 09-22-2009, 12:52 PM   #1
sfhseric06
Linux [hardware] RAID confusion: massive logic puzzle


I am running Fedora 11, and did so happily with one HDD for a while. I decided to get two 320 GB drives and RAID 0 them. I cloned the disk, but was stopped when I could not boot with only the RAID disks installed in the system. This may have a simple solution; I just need someone with a trained eye. Here are the steps I took:

- Set up hardware RAID with the system utility (ASUS mobo)
- Mirrored the data with the dd utility; the only device with 640 GB of storage was /dev/dm-2, so I copied the data onto that device (rough sketch below)
- Partitioned the extra space on /dev/dm-2 with LVM
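The copy command was roughly of this form (a reconstruction, not the exact invocation):
Code:
# whole-disk copy from the original drive onto the 640 GB dmraid device
# (destructive to the target; best run from a live environment)
dd if=/dev/sdb of=/dev/dm-2 bs=4M conv=noerror,sync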

With the BIOS pointed at the correct RAID drives, Fedora begins to boot for about one second, then stops. There are no log messages left behind, so I am assuming it is having problems mounting the new root filesystem(?).

I'm no expert, but my best guess is that the boot partition on the RAID drives is trying to mount the root filesystem on the original drive. Here are the conditions that led me to believe this:

1. The system does not boot with the RAID drives installed and the original drive removed.
2. The system boots with both sets of drives installed, when the BIOS points to the RAID drives.
3. The system boots when the BIOS points to the original drive, regardless of whether the RAID drives are installed.

The real thing I need cleared up by someone with more experience is the twisted mess from fdisk. If I simply need to point fstab at a new partition, which one do I pick? I'm confused by the RAID naming conventions.

Note: sda and sdc are the RAID disks (320 GB each). sdb is the original disk, with the original partitions (also 320 GB).

One more thing I noticed: Linux might consider the original disk (sdb) a RAID device, because the onboard RAID utility treats it as a RAID "single disk".

Code:
[root@feynman ~]# fdisk -l

Disk /dev/sda: 320.0 GB, 320072933376 bytes
255 heads, 63 sectors/track, 38913 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0xbf026a90

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          26      204800   83  Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2              26       38913   312363841   8e  Linux LVM
/dev/sda3           38914       52000   105121327+  8e  Linux LVM
/dev/sda4           52001       77808   207302760   8e  Linux LVM

Disk /dev/sdb: 320.0 GB, 320072933376 bytes
255 heads, 63 sectors/track, 38913 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0xbf026a90

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1   *           1          26      204800   83  Linux
Partition 1 does not end on cylinder boundary.
/dev/sdb2              26       38913   312363841   8e  Linux LVM

Disk /dev/sdc: 320.0 GB, 320072933376 bytes
255 heads, 63 sectors/track, 38913 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00000000

Disk /dev/sdc doesn't contain a valid partition table

Disk /dev/dm-0: 313.6 GB, 313633275904 bytes
255 heads, 63 sectors/track, 38130 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00000000

Disk /dev/dm-0 doesn't contain a valid partition table

Disk /dev/dm-1: 6224 MB, 6224347136 bytes
255 heads, 63 sectors/track, 756 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00000000

Disk /dev/dm-1 doesn't contain a valid partition table

Disk /dev/dm-2: 639.9 GB, 639999934464 bytes
255 heads, 63 sectors/track, 77808 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0xbf026a90

     Device Boot      Start         End      Blocks   Id  System
/dev/dm-2p1   *           1          26      204800   83  Linux
Partition 1 does not end on cylinder boundary.
/dev/dm-2p2              26       38913   312363841   8e  Linux LVM
/dev/dm-2p3           38914       52000   105121327+  8e  Linux LVM
/dev/dm-2p4           52001       77808   207302760   8e  Linux LVM

Disk /dev/dm-3: 209 MB, 209715200 bytes
255 heads, 63 sectors/track, 25 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00000000

Disk /dev/dm-3 doesn't contain a valid partition table

Disk /dev/dm-4: 319.8 GB, 319860573184 bytes
255 heads, 63 sectors/track, 38887 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00000000

Disk /dev/dm-4 doesn't contain a valid partition table

Disk /dev/dm-5: 107.6 GB, 107644239360 bytes
255 heads, 63 sectors/track, 13087 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00000000

Disk /dev/dm-5 doesn't contain a valid partition table

Disk /dev/dm-6: 212.2 GB, 212278026240 bytes
255 heads, 63 sectors/track, 25808 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00000000

Disk /dev/dm-6 doesn't contain a valid partition table
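In case it helps anyone decode the dm-N naming, these are the standard device-mapper/dmraid inspection tools (a sketch):
Code:
# friendly names behind each /dev/dm-N node
ls -l /dev/mapper
# the BIOS RAID sets as dmraid sees them
dmraid -s
# device-mapper tables for all mapped devices
dmsetup ls
dmsetup table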
Help is greatly appreciated.
 
Old 09-22-2009, 03:36 PM   #2
jschiwal
RAID and LVM are two different animals. Also, when you say you cloned the first drive, that sounds more like RAID 1 than RAID 0.

Do you want an LVM mirror for redundancy, or do you want to add a second drive to an existing LVM volume? Or are you extending the size of the physical volume that LVM uses by means of hardware RAID?

If this is onboard RAID, then the mobo's support is really software RAID and you are better off just using LVM2. Even most external SATA RAID cards are truly software RAID (3Ware being an exception, according to the readme of DBAN). Forgetting about RAID and adding the new drive as a physical volume to LVM may be the best way to go.
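A minimal sketch of that route, assuming the new drive is /dev/sdc and a volume group named vg_feynman (adjust the names and sizes to your setup):
Code:
pvcreate /dev/sdc                          # make the disk an LVM physical volume
vgextend vg_feynman /dev/sdc               # add it to the existing volume group
lvextend -L +100G /dev/vg_feynman/lv_root  # grow a logical volume into the new space
resize2fs /dev/vg_feynman/lv_root          # grow the filesystem to match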
 
Old 09-22-2009, 04:41 PM   #3
sfhseric06
I understand LVM is a nice fit for me, but really this was about trying to see a performance increase, and also... because I've never used RAID before. And I apologize: when I said clone, I meant that I was copying all my old data onto the new RAID'ed drives.

The reason LVM is involved at all is that when I originally installed F11, I let it do its default LVM partitioning. I figured it would be nice to keep in case I wanted to extend the filesystem at any time; I've found LVM to be pretty convenient in the past. So I stuck with it and used it to extend the filesystem onto the larger RAID'ed drive.

So really, my goal is mostly the small performance increase seen with RAID 0. I understand the risks involved, which is why I automatically sync data and image the important stuff off to my file server. That being said, I'm not exactly looking for a workaround or LVM alone. I've spent lots of time configuring this system and I would like to copy it over to some RAID'ed disks. So how might I decipher the mess that LVM on top of RAID has created? Haha, sorry if that's a little stubborn.
 
Old 09-22-2009, 11:03 PM   #4
rkski
After a quick glance at your post, two quick points:

1. /boot cannot be on RAID 0; it has to be on RAID 1 or a non-RAID partition (see the sketch below).

2. /dev/sdc (your 2nd new drive) is not set up at all.

Also, I don't understand all the other /dev/dm-x ...
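For point 1, if you go with Linux software RAID, a mirrored /boot would look something like this (a sketch; partition names assumed from your fdisk listing):
Code:
# small RAID1 array for /boot; GRUB legacy can read either mirror half
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdc1
mkfs.ext3 /dev/md0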

HTH
 
Old 09-23-2009, 03:35 AM   #5
jschiwal
If you use hardware RAID, you need to determine what the RAID controller is and make sure that the kernel can read the array. The RAID array would be used as the device forming one of the physical volumes making up the LVM2 volume group. It doesn't look like you created an array out of your 2nd and 3rd disks. Also, you seem to be partitioning the drives as you would if you weren't going to use RAID or LVM2. LVM2 has three layers: physical volumes (PVs) are joined (similarly to RAID 0) into a volume group (VG), and the volume group is then divided into logical volumes (LVs).

If you will be retaining your current /dev/sda drive, you could keep the regular /boot partition. If not, check whether you can keep the first part of each drive out of the RAID 0 array, or form a small RAID 1 array there. In a RAID 1 array, the partitions on the 2nd and 3rd drives will be mirrors, and GRUB works by reading its files from the first drive.

You should read up on how RAID and LVM2 work. This might be a good place to start: http://linas.org/linux/raid.html
There are a number of RAID HOWTOs on the www.tldp.org website, but they may be dated.

The lvm2 package provides a number of programs for dealing with logical and physical volumes. There are 34 man pages for the commands and the config file. You should become comfortable working with LVM in case you need to service the drives or partitions, or make changes.
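The quickest way to get oriented is the trio of summary commands from the lvm2 package:
Code:
pvs   # physical volumes: which devices back each volume group
vgs   # volume groups: size and free extents
lvs   # logical volumes: sizes and the VG they belong to
# verbose equivalents: pvdisplay, vgdisplay, lvdisplay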

There is an LVM2 HOWTO on the www.tldp.org web site. Look at the diagrams on this page of the HOWTO: http://tldp.org/HOWTO/LVM-HOWTO/anatomy.html. If you were to retain the first hard drive, it could be partitioned into /dev/sda1 and /dev/sda2. /dev/sda2 could be one of the physical volumes at the top of the diagram, while the RAID device, e.g. /dev/dm-0, could be the other. Together they would form the volume group, which in turn contains logical volumes. One of the logical volumes might end up lying across a drive boundary.
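In command form, that diagram would look roughly like this (a sketch; the VG and LV names are only illustrative):
Code:
pvcreate /dev/sda2 /dev/dm-0              # two physical volumes: a plain partition plus the raid device
vgcreate vg_example /dev/sda2 /dev/dm-0   # join them into one volume group
lvcreate -L 400G -n lv_root vg_example    # a logical volume that may span the drive boundary
mkfs.ext3 /dev/vg_example/lv_root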

Another option is not to use LVM at all: partition and format the RAID 0 device directly. This may be a better option if you are set on using hardware RAID 0 and don't plan on using the benefits of LVM. Since you are already using RAID 0, adding LVM on top is mostly redundant.
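The no-LVM route is just the usual partition-and-format sequence on the array device (a sketch; device and partition names assumed from the fdisk listing above):
Code:
fdisk /dev/dm-0          # partition the raid device
kpartx -a /dev/dm-0      # create the pN partition nodes if they don't appear on their own
mkfs.ext3 /dev/dm-0p1    # then format and use like any other partition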
 
Old 09-23-2009, 12:05 PM   #6
sfhseric06
Thanks for all the tips! I'll try testing some things out today.

Quote:
Another option is not to use LVM.
One quick question:
I installed F11 with LVM back in June... just because that is what I usually do. But since now I only actually want RAID (simpler): how do I copy all my data from the root logical volume to a new disk withOUT LVM? In other words, how do I undo the LVM setup I have and put the data on / into a standard ext3 partition? Any tools that you guys recommend? Or can it be done with dd?
 
Old 09-24-2009, 08:02 AM   #7
jschiwal
I would probably use tar. Look at section 4.6 of the tar info manual, "Notable Uses".

First prepare your RAID array and partition it. Then format the partitions. Mount what will be the new root partition (/) under /mnt or somewhere similar. Create the future system directories on the new root, e.g. "mkdir /mnt/{etc,home,usr,var,sys,proc,dev,tmp}". Mount the new partitions under these directories. For example, you may have created partitions for /home and /usr.
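Concretely, the mounting might look like this (a sketch; md device names assumed):
Code:
mount /dev/md1 /mnt                                   # future /
mkdir /mnt/{boot,etc,home,usr,var,sys,proc,dev,tmp}
mount /dev/md0 /mnt/boot                              # future /boot
mount /dev/md2 /mnt/home                              # future /home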

Here is the tar example:
Code:
# pack up the contents of sourcedir on stdout and unpack them into targetdir
tar -C sourcedir -cf - . | tar -C targetdir -xf -
You would probably want to explicitly list the system directories to copy on the left-hand side instead of the dot '.'.
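For example:
Code:
tar -C / -cf - etc home root usr var srv | tar -C /mnt -xf -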

It may be best to copy the files after booting to a live distro.

You could use the "cp -a" command as well, perhaps in a loop:
Code:
# copy each top-level directory onto the new root mounted at /mnt
for dir in etc usr bin lib var srv home; do
    cp -a /$dir/ /mnt/${dir}
done

Don't bother with the /dev, /proc, /sys or /tmp directories. They are either pseudo-filesystems (/proc, /sys), created when you boot (/dev), or don't contain anything you need to copy (/tmp).

Good Luck!
 
Old 09-25-2009, 12:27 AM   #8
sfhseric06
Thank you!

I'll let everyone know how it goes, might be a couple days.
 
Old 09-28-2009, 07:29 PM   #9
sfhseric06
So just an update--

I've scrapped the onboard (ASUS) RAID after learning that this stuff is typically non-ideal. After some research, I decided that Linux software RAID would suit me best, so I chopped up my disks and did some mdadm action. I managed to copy all of my data using the recursive copy suggestion above (thank you for that!). So now all of my data is on ext3 partitions on top of software RAID. The boot partition is RAID 1 and the other partitions are RAID 0.
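For the record, the arrays were created along these lines (a reconstruction from the blkid output below, not the exact commands):
Code:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1   # /boot, RAID1
mdadm --create /dev/md1 --level=0 --raid-devices=2 /dev/sdb3 /dev/sdc3   # /,     RAID0
mdadm --create /dev/md2 --level=0 --raid-devices=2 /dev/sdb4 /dev/sdc4   # /home, RAID0
mkfs.ext3 /dev/md0 && mkfs.ext3 /dev/md1 && mkfs.ext3 /dev/md2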

I have a somewhat novice understanding of the Linux boot process and its files (half of which I've learned in the past week), and I cannot get the OS to boot from the RAID'ed drives. It partially boots, but then stops. I am unsure which stage of the boot this is, but it attempts to create the /dev filesystem and halts at "creating initial device nodes". For all I know, this could be a simple mix-up in the config files on /boot.

sdb and sdc are now the RAID disks; sda is the disk with the original OS on it... and LVM and all that good stuff.

Output from /etc/fstab (on the RAID disk):
Code:
[root@feynman etc]# cat fstab 

#
# /etc/fstab
# Created by anaconda on Thu Jul  2 23:05:26 2009
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or vol_id(8) for more info
#
#UUID's correspond to /dev/md0, /dev/md1, /dev/md2 respectively for /boot, /, /home
#
UUID=61496b4c-da23-45c0-bb76-627e2a1d0c55                /boot                   ext3    defaults 1 2
UUID=3a99f3ad-81ef-4a25-bf1b-652c12173f1a                /                       ext3    defaults 1 2
UUID=43431115-4234-45f4-902b-7a12f9ef89da                /home/                  ext3    defaults 1 2
tmpfs                   /dev/shm                tmpfs   defaults        0 0
devpts                  /dev/pts                devpts  defaults        0 0
sysfs                   /sys                    sysfs   defaults        0 0
proc                    /proc                   proc    defaults        0 0
Output from /boot/grub/menu.lst (on the RAID disk):
Code:
[root@feynman grub]# cat menu.lst 
default=0
timeout=0
splashimage=(hd0,0)/grub/splash.xpm.gz
#hiddenmenu
title Fedora (2.6.30.5-43.fc11.x86_64)
	root (hd0,0)
	kernel /vmlinuz-2.6.30.5-43.fc11.x86_64 ro root=UUID=61496b4c-da23-45c0-bb76-627e2a1d0c55
#UUID=3a99f3ad-81ef-4a25-bf1b-652c12173f1a
	initrd /initrd-2.6.30.5-43.fc11.x86_64.img
title Fedora (2.6.29.6-217.2.16.fc11.x86_64)
	root (hd0,0)
	kernel /vmlinuz-2.6.29.6-217.2.16.fc11.x86_64 ro root=/dev/mapper/vg_feynman-lv_root rhgb quiet
	initrd /initrd-2.6.29.6-217.2.16.fc11.x86_64.img
title Fedora (2.6.29.5-191.fc11.x86_64)
	root (hd0,0)
	kernel /vmlinuz-2.6.29.5-191.fc11.x86_64 ro root=/dev/mapper/vg_feynman-lv_root rhgb quiet
	initrd /initrd-2.6.29.5-191.fc11.x86_64.img
Here is the output from blkid:
Code:
[root@feynman yum.repos.d]# blkid
/dev/dm-0: UUID="8bcd4ad9-eacd-4f92-9b50-2aadc20faa06" TYPE="ext4" 
/dev/dm-1: TYPE="swap" UUID="0d08a172-fe9c-4cbf-ae50-ffb3f5a3ce9f" 
/dev/mapper/vg_feynman-lv_root: UUID="8bcd4ad9-eacd-4f92-9b50-2aadc20faa06" TYPE="ext4" 
/dev/mapper/vg_feynman-lv_swap: TYPE="swap" UUID="0d08a172-fe9c-4cbf-ae50-ffb3f5a3ce9f" 
/dev/sda1: UUID="ee0d8a24-7ca7-4e3d-ace0-5c3589f72986" SEC_TYPE="ext2" TYPE="ext3" 
/dev/sda2: UUID="YZoVoZ-QexQ-tNW4-xJUf-xRyy-jwZO-7z10Kt" TYPE="lvm2pv" 
/dev/sdb1: UUID="49a4c81f-be52-7df6-ae4f-c397c3138ee5" TYPE="mdraid" 
/dev/sdb3: UUID="428bf447-cf68-a64b-ae4f-c397c3138ee5" TYPE="mdraid" 
/dev/sdb4: UUID="9f418b07-99a3-2629-ae4f-c397c3138ee5" TYPE="mdraid" 
/dev/sdc1: UUID="49a4c81f-be52-7df6-ae4f-c397c3138ee5" TYPE="mdraid" 
/dev/sdc3: UUID="428bf447-cf68-a64b-ae4f-c397c3138ee5" TYPE="mdraid" 
/dev/sdc4: UUID="9f418b07-99a3-2629-ae4f-c397c3138ee5" TYPE="mdraid" 
/dev/md0: UUID="61496b4c-da23-45c0-bb76-627e2a1d0c55" TYPE="ext3" SEC_TYPE="ext2" 
/dev/md1: UUID="3a99f3ad-81ef-4a25-bf1b-652c12173f1a" TYPE="ext3" SEC_TYPE="ext2" 
/dev/md2: UUID="43431115-4234-45f4-902b-7a12f9ef89da" SEC_TYPE="ext2" TYPE="ext3"
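The usual md status checks, for anyone who wants to dig further (a sketch):
Code:
cat /proc/mdstat            # assembly state of all arrays
mdadm --detail /dev/md1     # per-array status, here for the root array
mdadm --examine --scan      # what the on-disk superblocks report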

I don't expect much help; this is mostly a status update. If the answer is obvious to someone, your input would help, but I'm slow to reply because of classes this week.
Thank you all.
 
  

