Linux [hardware] RAID confusion : massive logic puzzle
I am running Fedora 11, and did so happily with one HDD for a while. I decided to get two 320 GB drives and RAID-0 them. I cloned the disk, but got stuck when I could not boot with the RAID disks exclusively installed in the system. This may have a simple solution; I just need someone with a trained eye. Here are the steps I took:
- Set up hardware RAID with the system utility (ASUS mobo)
- Copied the data over with the dd utility; the only device showing 640 GB of storage was /dev/dm-2, so I copied onto that device (roughly the command shown below)
- Partitioned the extra space on /dev/dm-2 with LVM
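The dd step was approximately this (block size and exact device names are from memory; the original disk shows up as /dev/sdb in the fdisk output below):
Code:
# whole-disk copy from the original drive onto the 640 GB RAID device
dd if=/dev/sdb of=/dev/dm-2 bs=1M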
With the BIOS pointed at the correct RAID drives, Fedora begins to boot for about one second, then stops. There are no log messages left behind, so I am assuming it is having problems mounting the new root filesystem(?).
I'm no expert, but my only guess is that the boot partition on the RAID drives is looking to mount the root filesystem on the Original drive. Here are the conditions that led me to believe this:
1. System does not boot with RAID installed and Original uninstalled.
2. System boots with both sets of drives installed, when BIOS points to RAID drives.
3. System boots when BIOS points to Original drives, regardless of RAID drives being installed.
The real thing I need cleared up by someone with more experience is the twisted mess from fdisk. If I simply need to point to a new partition in fstab, which one do I pick? The RAID naming conventions confuse me.
Note: sda and sdc are the RAID disks (320 GB ea.). sdb is the original disk, with original partitions (also 320 GB)
One more thing I noticed... Linux might consider the Original disk (sdb) a RAID device, because the system RAID considers it a RAID "single disk".
Code:
[root@feynman ~]# fdisk -l
Disk /dev/sda: 320.0 GB, 320072933376 bytes
255 heads, 63 sectors/track, 38913 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0xbf026a90
Device Boot Start End Blocks Id System
/dev/sda1 * 1 26 204800 83 Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2 26 38913 312363841 8e Linux LVM
/dev/sda3 38914 52000 105121327+ 8e Linux LVM
/dev/sda4 52001 77808 207302760 8e Linux LVM
Disk /dev/sdb: 320.0 GB, 320072933376 bytes
255 heads, 63 sectors/track, 38913 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0xbf026a90
Device Boot Start End Blocks Id System
/dev/sdb1 * 1 26 204800 83 Linux
Partition 1 does not end on cylinder boundary.
/dev/sdb2 26 38913 312363841 8e Linux LVM
Disk /dev/sdc: 320.0 GB, 320072933376 bytes
255 heads, 63 sectors/track, 38913 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00000000
Disk /dev/sdc doesn't contain a valid partition table
Disk /dev/dm-0: 313.6 GB, 313633275904 bytes
255 heads, 63 sectors/track, 38130 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00000000
Disk /dev/dm-0 doesn't contain a valid partition table
Disk /dev/dm-1: 6224 MB, 6224347136 bytes
255 heads, 63 sectors/track, 756 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00000000
Disk /dev/dm-1 doesn't contain a valid partition table
Disk /dev/dm-2: 639.9 GB, 639999934464 bytes
255 heads, 63 sectors/track, 77808 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0xbf026a90
Device Boot Start End Blocks Id System
/dev/dm-2p1 * 1 26 204800 83 Linux
Partition 1 does not end on cylinder boundary.
/dev/dm-2p2 26 38913 312363841 8e Linux LVM
/dev/dm-2p3 38914 52000 105121327+ 8e Linux LVM
/dev/dm-2p4 52001 77808 207302760 8e Linux LVM
Disk /dev/dm-3: 209 MB, 209715200 bytes
255 heads, 63 sectors/track, 25 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00000000
Disk /dev/dm-3 doesn't contain a valid partition table
Disk /dev/dm-4: 319.8 GB, 319860573184 bytes
255 heads, 63 sectors/track, 38887 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00000000
Disk /dev/dm-4 doesn't contain a valid partition table
Disk /dev/dm-5: 107.6 GB, 107644239360 bytes
255 heads, 63 sectors/track, 13087 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00000000
Disk /dev/dm-5 doesn't contain a valid partition table
Disk /dev/dm-6: 212.2 GB, 212278026240 bytes
255 heads, 63 sectors/track, 25808 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00000000
Disk /dev/dm-6 doesn't contain a valid partition table
Raid and LVM are two different animals. Also, when you say you cloned the first drive, it sounds more like raid-1 than raid-0.
Do you want an LVM mirror for redundancy, or do you want to add a second drive to an existing LVM volume? Or are you extending the size of the physical volume that LVM uses by using hardware raid?
If this is onboard raid, then the mobo's support is really software raid and you are better off just using LVM2. Even most external SATA raid cards are truly software raid (3Ware being an exception, according to the README of DBAN). Forgetting about raid and adding the new drive as a physical volume to LVM may be the best way to go.
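If you go that route, the steps are roughly these (the volume group and logical volume names here are made up, so check yours with vgs and lvs; this also assumes the new drive gets a single partition /dev/sdc1):
Code:
pvcreate /dev/sdc1                               # turn the new drive's partition into a physical volume
vgextend VolGroup00 /dev/sdc1                    # add it to the existing volume group
lvextend -l +100%FREE /dev/VolGroup00/LogVol00   # grow the root LV into the new space
resize2fs /dev/VolGroup00/LogVol00               # grow the ext3 filesystem to match the LV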
I understand LVM is a nice fit for me, but really this was about trying to see a performance increase and also... because I've never used RAID before. And I apologize, when I said clone, I meant that I was copying all my old data onto the new RAID'ed drives.
The reason why LVM is involved at all is because when I originally installed F11, I let it do its default LVM partitioning. I figured it would be nice to keep it if I wanted to extend the filesystem at any time; I've found LVM to be pretty convenient in the past. So I stuck with it and used it to extend the filesystem onto the larger RAID'ed drive.
So really, my goal is mostly the small performance increase seen with RAID 0. I understand the risks involved, which is why I automatically sync data and image important stuff off to my file server. That being said, I'm not exactly looking for a workaround or just LVM. I've spent lots of time configuring this system and I would like to copy it over to some RAID'ed disks. So how might I decipher the mess that LVM on top of RAID has created? Haha, sorry if that's a little stubborn.
If you use hardware raid, you need to determine what the raid controller is and make sure that the kernel can read the array. The raid array would be used as the device forming one of the physical volumes making up the LVM2 volume group. It doesn't look like you created an array out of your 2nd and 3rd disks. Also, you seem to be partitioning the drives as you would if you weren't going to use raid or lvm2. LVM2 has 3 layers. Physical volumes (usually partitions) are joined (similarly to raid-0) into an LVM volume group. The volume group is then divided into logical volumes (LVs).
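A minimal sketch of those three layers, with hypothetical device and volume names (the PV would be whatever device node your array shows up as):
Code:
pvcreate /dev/dm-2                    # layer 1: physical volume on top of the raid array
vgcreate vg_raid /dev/dm-2            # layer 2: volume group built from one or more PVs
lvcreate -L 250G -n lv_root vg_raid   # layer 3: logical volume carved out of the volume group
mkfs.ext3 /dev/vg_raid/lv_root        # the logical volume is what gets formatted and mounted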
If you will be retaining your current /dev/sda drive, you could keep the regular /boot partition. If not, check if you can have the first parts of the drives not be a part of the raid-0 array, or form a small raid-1 array. In a raid-1 array, the partitions on the 2nd & 3rd drives will be mirrors, and grub works by reading its files from the first drive.
You should read up on how raid and LVM2 works. This might be a good place to start: http://linas.org/linux/raid.html
There are a number of RAID howtos on the www.tldp.org website, but they may be dated.
The lvm2 package will provide a number of programs for dealing with logical and physical volumes. There are 34 man pages for the commands and the config file. You should become comfortable working with lvm in the event you need to service the drives or partitions or make changes.
There is an lvm2 howto on the www.tldp.org web site. Look at the diagrams on this page of the howto: http://tldp.org/HOWTO/LVM-HOWTO/anatomy.html. If you were to retain the first hard drive, it could be partitioned with /dev/sda1 and /dev/sda2. /dev/sda2 could be one of the physical volumes on the top of the diagram, while the raid device, e.g. /dev/dm-0, could be the other. Together they would form the volume group, which in turn contains logical volumes. One of the logical volumes might be located across a drive boundary.
Another option is to not use LVM. Partition and format the raid-0 device. This may be a better option if you are set on using hardware raid-0 and don't plan on using the benefits of LVM. Since you are already using raid-0, using LVM as well is mostly redundant.
Thanks for all the tips! I'll try testing some things out today.
Quote:
Another option is to not use LVM.
One quick question:
I installed F11 with LVM back in June... just because that is what I usually do. But since now I only actually want RAID (simpler): how do I copy all my data from the root logical volume to a new disk without LVM? In other words... how do I undo the LVM setup I have and throw the data on / into a standard ext3 partition? Any tools that you guys recommend? Or can it be done with dd?
I would probably use tar. Look at section 4.6 of the tar info manual. "Notable Uses".
First prepare your raid array and partition it. Then format the partitions. Mount what will be the new root partition (/) under /mnt/ or somewhere similar. Create the future system directories on the new root, e.g. "mkdir /mnt/{etc,home,usr,var,sys,proc,dev,tmp}". Mount the new partitions under these directories. For example, you may have created partitions for /home and /usr.
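Roughly, and with made-up partition names (substitute whatever your raid array's partitions end up being called):
Code:
mkfs.ext3 /dev/dm-2p2                            # future / (partition name is hypothetical)
mkfs.ext3 /dev/dm-2p3                            # future /home (also hypothetical)
mount /dev/dm-2p2 /mnt                           # mount the new root
mkdir /mnt/{etc,home,usr,var,sys,proc,dev,tmp}   # create the future system directories
mount /dev/dm-2p3 /mnt/home                      # mount the other new partitions under it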
Here is the tar example:
Code:
tar -C sourcedir -cf - . | tar -C targetdir -xf -
You probably would want to explicitly list the system directories to copy in the left-hand side instead of the dot `.'.
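For example (the directory list here is only illustrative):
Code:
tar -C / -cf - etc home usr var | tar -C /mnt -xf -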
It may be best to copy the files after booting to a live distro.
You could use the "cp -a" command as well. Perhaps in a loop:
Code:
for dir in etc usr bin lib var srv home; do
    cp -a "/$dir" /mnt/    # copies each tree to /mnt/$dir, merging into it if the directory already exists
done
Don't bother with the /dev, /proc, /sys or /tmp directories. They are either pseudo filesystems (/proc, /sys), created when you boot (/dev), or don't contain anything you need to copy (/tmp).
I've scrapped the onboard (ASUS) RAID after learning that this stuff is typically non-ideal. After some research, I decided that Linux software RAID would best suit me, so I chopped up my disks and did some mdadm action. I managed to copy all of my data using the recursive copy suggestion above (thank you for that!). So now I have all of my data on ext3 partitions on top of software RAID. The boot partition is RAID-1 and the other partitions are RAID-0.
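For reference, the array creation looked roughly like this (partition numbers are from memory, so treat them as approximate):
Code:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1   # small RAID-1 for /boot
mdadm --create /dev/md1 --level=0 --raid-devices=2 /dev/sdb2 /dev/sdc2   # RAID-0 for /
mdadm --create /dev/md2 --level=0 --raid-devices=2 /dev/sdb3 /dev/sdc3   # RAID-0 for /home
mkfs.ext3 /dev/md0 && mkfs.ext3 /dev/md1 && mkfs.ext3 /dev/md2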
I have a somewhat novice understanding of the Linux boot process and files (half of which I have learned in the past week), and I cannot get the OS to boot from the RAID'ed drives. It partially boots, then stops. I am unsure which boot stage this is, but it attempts to create the /dev filesystem and halts at "creating initial device nodes". For all I know, this could be a simple mix-up with the config files on /boot.
sdb and sdc are the RAID disks, sda is the disk with the original OS on it... and LVM and all that good stuff.
Output from /etc/fstab (on the RAID disk):
Code:
[root@feynman etc]# cat fstab
#
# /etc/fstab
# Created by anaconda on Thu Jul 2 23:05:26 2009
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or vol_id(8) for more info
#
#UUID's correspond to /dev/md0, /dev/md1, /dev/md2 respectively for /boot, /, /home
#
UUID=61496b4c-da23-45c0-bb76-627e2a1d0c55 /boot ext3 defaults 1 2
UUID=3a99f3ad-81ef-4a25-bf1b-652c12173f1a / ext3 defaults 1 2
UUID=43431115-4234-45f4-902b-7a12f9ef89da /home/ ext3 defaults 1 2
tmpfs /dev/shm tmpfs defaults 0 0
devpts /dev/pts devpts defaults 0 0
sysfs /sys sysfs defaults 0 0
proc /proc proc defaults 0 0
Output from /boot/grub/menu.lst (on the RAID disk):
I don't expect much help; this is more of a status update. If the answer is obvious to someone, your input would help, but I'm slow to reply because of classes this week.
Thank you all.