LinuxQuestions.org

-   -   ataraid.i and Slackware 12.2 (https://www.linuxquestions.org/questions/slackware-14/ataraid-i-and-slackware-12-2-a-728419/)

technik733 05-25-2009 04:06 PM

ataraid.i and Slackware 12.2
 
I've got another thread in Hardware about getting a Promise RAID bus controller to work when installing Slackware, but right now I'm just interested in a few things that I think might be specific to Slackware.

1) Do I need to use the ataraid.i kernel with Slackware 12.2, or is ATA RAID support included with the hugesmp.s kernel?

2) If pdc202xx_new is loaded on boot, what are the names of the devices, or is it really just an IDE driver? I can see hde, hdf and hdg, but I can't cfdisk any of them except hdf, which is actually supposed to be a single striped drive.

3) If I were to decide to use software RAID, would doing so break my Windows XP installation on the same array?

Erik_FL 05-25-2009 08:43 PM

Quote:

Originally Posted by technik733 (Post 3552398)
I've got another thread in Hardware about getting a Promise RAID bus controller to work when installing Slackware, but right now I'm just interested in a few things that I think might be specific to Slackware.

1) Do I need to use the ataraid.i kernel with Slackware 12.2, or is ATA RAID support included with the hugesmp.s kernel?

2) If pdc202xx_new is loaded on boot, what are the names of the devices, or is it really just an IDE driver? I can see hde, hdf and hdg, but I can't cfdisk any of them except hdf, which is actually supposed to be a single striped drive.

3) If I were to decide to use software RAID, would doing so break my Windows XP installation on the same array?

What Promise controller are you using exactly? Most of the Promise controllers are fake hardware RAID and the Linux drivers only support using them as normal SATA or IDE drives. You will usually see each drive as a separate device name and they will not be correctly accessed as a RAID array by Linux.

You can use "dmraid" with some of them to configure the Linux Device Mapper to access stripe and mirror sets. In that case you can use what is essentially Linux software RAID configured by "dmraid". Then Windows will still be able to use the fake RAID driver to access the arrays and both operating systems will be able to access each other's partitions. Unfortunately you can't simply install Slackware in the normal manner to do this since it does not include "dmraid" on the boot discs.

When you use "dmraid" it creates very long device names for the RAID arrays under "/dev/mapper". You can create more friendly, shorter names using udev rules if you want. That's what I did.

If you use Linux software RAID not configured by "dmraid" then you will also have to use Windows software RAID and the two operating systems will not be able to access each other's RAID partitions.

Promise has not updated their older Linux drivers for kernel version 2.6 so you can't use the drivers from their web site in most cases.

If you're thinking about using "dmraid" I recommend that you first install Slackware to a non-RAID hard disk. The non-RAID hard disk can be connected to the Promise controller if you have a spare port, or you can use another hard disk controller. Just make sure that it is a controller you can boot from and that the disk is not included in any RAID arrays.

Compile "dmraid" and then verify that it can configure the correct mapper devices to access your RAID arrays. If that works, you can make an "initrd" image and edit the "init" script to run "dmraid". Either use the cryptic device names in the "fstab" or create your own devices for the root and swap devices with the correct major/minor numbers.

You have to create the root and swap devices in the RAID partition BEFORE you try to boot from it, because the udev rules only run after booting. I have example scripts, and if you search for my posts here you should find them.
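To make the "init" edit concrete, the additions amount to something like the fragment below. This is only a sketch: the mapper name "pdc_ccfafbbhc2" is the one that comes up later in this thread, and the "253 2" major/minor pair is a placeholder, not a value from any real system — substitute whatever "ls -l /dev/mapper" reports on your machine.

```shell
# Sketch of additions to the initrd "init" script, placed before the
# root filesystem is mounted.

# Activate the fake-RAID arrays so the mapper devices exist.
dmraid -ay

# Create the root device node by hand, since udev has not run yet.
# "b 253 2" (block device, major 253, minor 2) is a placeholder pair;
# use the numbers from "ls -l /dev/mapper" on your own system.
[ -b /dev/mapper/pdc_ccfafbbhc2 ] || mknod /dev/mapper/pdc_ccfafbbhc2 b 253 2
```

This is a boot-script fragment rather than a standalone program, so it cannot be run outside an initrd environment.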

technik733 05-25-2009 09:35 PM

Oh, glad to hear from you, I have indeed been reading through your posts about this...

I have a Promise 20276, the MBFastTrak133 Lite, which I have come to believe from various sources is in fact a fakeraid chip. I have a RAID 0 array of two drives that I have split between Windows and what I intend to use Linux on, so dmraid would seem to be the only way to go, since I can't uninstall the RAID drivers from XP and use software RAID.

I also do indeed have a 320GB drive on the same controller, and though it said that it had been set up as striped in the BIOS configuration, I was able to read it just fine from Linux using cfdisk, and I have set up a swap partition on it. I could move my XP virtual memory partition from that drive back to the XP disk and end up with 8GB to work with for installing Slack.

I think I'm ready to do this, but I've never done anything like it before. The hardest part for me will be installing GRUB or Lilo. I remember reading some posts you had where you had uploaded some of the config files or an image of some kind, but the links were broken. They were quite some time ago. If you still have those it would be great if you could upload them again.

Also, I think I can manage having a cryptic device name for 2 drives, since it looked like it took a phenomenal amount of effort for you to rename them to the standard names.

Edit: Here's that post. If I can get dmraid compiled I can probably manage it, and even if I don't get grub or lilo installed I can boot using the dvd for a temporary fix. I have the source for dmraid, but I think I need some library to go with it...

Erik_FL 05-25-2009 11:43 PM

You can download scripts and an ISO image of a boot cd from here.

Let me know if you need other drivers on the boot CD. You will have to use "modprobe" to load the driver for the RAID controller unless you build it into the kernel. If you let me know the exact driver name then I can build a kernel for you.

If you want to build your own kernel I included the configuration file that I used. The only thing special about it is that I used the suffix "-CD" instead of "-smp". The script to create the boot CD expects that although you can change the script if you want. The reason that I used "-CD" was to avoid writing over my live kernel modules.

It isn't too difficult to create the short device names. I included the "10-local" rules file that I use. Just make sure to use "mknod" for the root device and swap device from a boot CD. That way those two devices will exist in "/dev" before udev runs.
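As an illustrative sketch of what such a rules file can look like (the mapper name is the one from this thread, "sdr*" is just the short-name convention discussed here, and rule syntax details vary between udev versions — this uses symlinks rather than real nodes):

```
# /etc/udev/rules.d/10-local.rules -- illustrative example only.
# Ask dmsetup for the mapper name of each dm-* device (by major/minor),
# then publish a short alias for the long dmraid name.
KERNEL=="dm-*", PROGRAM="/sbin/dmsetup info -c --noheadings -o name -j %M -m %m", RESULT=="pdc_ccfafbbhc", SYMLINK+="sdr"
KERNEL=="dm-*", PROGRAM="/sbin/dmsetup info -c --noheadings -o name -j %M -m %m", RESULT=="pdc_ccfafbbhc2", SYMLINK+="sdr2"
```

Symlinks created this way only appear once udev runs, which is why the root and swap nodes still have to be created by hand beforehand, as described above.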

I recommend that you install GRUB by pressing the "C" key when you see the grub menu on the boot CD. You will have to first copy all the required grub files to /boot/grub. If you mount the boot CD after you boot it then you can copy the grub files from the CD. I put the grub files on the CD but not in the RAM filesystem that is booted.

To install grub use these commands.

root (hd0,1)
setup (hd0)

Replace the "(hd0,1)" with the correct disk containing "/boot/grub/menu.lst" and other grub files. Replace "(hd0)" with the drive or partition where you want grub to write its boot sector. Normally that is the drive you expect to boot from, or a primary partition on that drive.

You can find the location of files using this command.

find /boot/grub/menu.lst

That will display all the drive and partition names where the specified file was found.

The reason to install grub this way is that it will call the BIOS for the RAID controller and correctly access the drives even if grub under Linux can't figure out the drive mapping. Using the "find" command enables you to verify the correct drive names to put in "menu.lst" and for writing the boot sector.
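For reference, a minimal "menu.lst" for the kind of dual-boot layout described in this thread might look like the following. Every device name, file name and title is a placeholder — use the names that grub's "find" command reports for your disks:

```
# /boot/grub/menu.lst -- illustrative example; adjust devices and paths
default 0
timeout 10

title Slackware 12.2 (RAID via dmraid)
root (hd0,1)
kernel /boot/vmlinuz root=/dev/sdr2 ro
initrd /boot/initrd.gz

title Windows XP
rootnoverify (hd0,0)
chainloader +1
```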

technik733 05-25-2009 11:57 PM

The exact driver is pdc202xx_new, and from the dmesg output from Slackware, it seems to be loading the driver fine already, so I don't think a kernel rebuild is necessary if I use Slack 12.2. If using dmraid, installing Slackware, installing GRUB, and changing the device names with udev is all there is to do, it doesn't seem quite as daunting. I may give it a go before I go to sleep. Thanks a ton, by the way. =D

technik733 05-26-2009 02:44 AM

Alrighty then... Thus far, I have nothing but good news. I've managed to access the drive as it should be and have it formatted with reiserfs, and I also have the swap partition on the 320GB drive, which I was able to remove from its array and put on the regular IDE controller without issue. I haven't even broken any of my Windows partitions. =D

I managed this by copying dmraid and libdevmapper.so.1.02 to a floppy, and copying them from the floppy to the ramdrive to run and configure the raid array, and it worked very nicely after I moved the 320GB disk off of the controller.

So far the only problem I've had is that the slackware installer doesn't see the /dev/mapper/pdc_ccfafbbhc2 drive (the first partition is windows), it only sees /dev/hde2 and /dev/dm-0p2 (I think; I'm guessing at the last one).

Is there a way to just get the slackware installer to recognise it and install directly to the drive?

After that I could use your boot disk to remap it and install lilo, right?

technik733 05-26-2009 11:44 AM

Huh... after some messing around with the Slackware installer (and remembering to unmount fd0 from mnt... smart me...) I have got it to start copying files, but apparently my dvd is corrupt since the kernel modules won't install, and after it switches to the second category it closes and says "Killed" 3 times. Creepy. I'm gonna get another copy of slackware, and try it again.

Erik_FL 05-26-2009 11:50 AM

Quote:

Originally Posted by technik733 (Post 3552818)
Alrighty then... Thus far, I have nothing but good news. I've managed to access the drive as it should be and have it formatted with reiserfs, and I also have the swap partition on the 320GB drive, which I was able to remove from its array and put on the regular IDE controller without issue. I haven't even broken any of my Windows partitions. =D

I managed this by copying dmraid and libdevmapper.so.1.02 to a floppy, and copying them from the floppy to the ramdrive to run and configure the raid array, and it worked very nicely after I moved the 320GB disk off of the controller.

So far the only problem I've had is that the slackware installer doesn't see the /dev/mapper/pdc_ccfafbbhc2 drive (the first partition is windows), it only sees /dev/hde2 and /dev/dm-0p2 (I think; I'm guessing at the last one).

Is there a way to just get the slackware installer to recognise it and install directly to the drive?

After that I could use your boot disk to remap it and install lilo, right?

You can copy the device node to a file with a different name.

cp -Pp /dev/mapper/pdc_ccfafbbhc2 /dev/sdr2

You may want to create some of the other devices depending on what the installer requires.

cp -Pp /dev/mapper/pdc_ccfafbbhc /dev/sdr

You can create the device nodes for your installed system the same way.

mount /dev/sdr2 /mnt
cp -Pp /dev/sdr2 /mnt/dev/sdr2
cp -Pp /dev/mapper/pdc_ccfafbbhc5 /mnt/dev/sdr5
umount /mnt

If you build your own kernel, keep in mind that the major device IDs might change. You can find that out when the "initrd" fails to boot the system. Type in a command to list out the current mapper devices.

dmraid -ay
ls -l /dev/mapper/*

Note the device IDs and create the devices such as "/dev/sdr2" with the correct information.

Only the names of the root device and swap device have to be permanently created like that because they are referenced before udev runs. If you don't use your own shorter device names then you will have to create the "/dev/mapper/pdc_ccfafbbhc2" name for the root device in the root partition.

Thanks for letting me know about the success copying the files to the normal setup RAM disk. I've never tried that and it will save me a lot of hassles when I upgrade later.

To install grub you can use the boot CD that I provided, or install the grub package for Slackware. Copy the grub files from the boot CD (mount the CD) or from "/usr/lib/grub/i386-pc" if you install the grub package. Put the files in the "/boot/grub" directory along with a "menu.lst" file. You may have to create a "device.map" file if you want to install grub from a booted Linux system. If you use the native mode (press "C" during grub menu) then you don't need a correct "device.map" to tell grub the BIOS drive ID assignments.
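For completeness, "device.map" is just a two-column list pairing BIOS drive IDs with Linux device names. An illustrative one for this setup might look as follows — the mapping itself is a guess (grub often guesses wrong on BIOS RAID setups, which is exactly why the native-mode install is the easier route):

```
# /boot/grub/device.map -- illustrative example only
(hd0)   /dev/sdr
(hd1)   /dev/hda
```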

To build your kernel or install grub from Linux, use "chroot" like this.

mount /dev/sdr2 /mnt
chroot /mnt
mount -t proc none /proc
mount -t sysfs none /sys
# do your edits, builds installs, etc
# copy kernel and create initrd
umount /sys
umount /proc
exit
umount /mnt
# reboot

In theory lilo should work but I've never attempted that when I knew enough to get past the problems. I ran into some problems because lilo was unable to determine the Linux device name to BIOS drive ID assignments. I prefer to use grub since the boot sector doesn't change whenever the configuration is changed. I boot Windows first and it chains to a file with a copy of the grub boot sector. If I used lilo I would have to update that file every time that I used the "lilo" command to change the configuration. With grub I only have to make a copy of the boot sector once and then edit "menu.lst" to change the configuration without affecting the grub boot sector.

There is one last minor issue that I'll mention. Since Linux looks for partition tables on all the hard disk block devices, it will look for partition tables on the raw volumes in a RAID array (such as /dev/hde or /dev/sde). In some cases that results in errors about reading from blocks on those devices during boot. The errors don't hurt anything.

I made a minor edit to my "sd.c" file to avoid that. I don't know the exact changes needed for an IDE device like "/dev/hde". I haven't found any standard way to specify that devices should be excluded from partition detection; so far as I can tell there is no kernel or driver parameter for that. If the errors are annoying to you I will be glad to look at possible changes to avoid them in your configuration. My change wasn't elegant and it just tested for the specific device assignments in my configuration.

technik733 05-26-2009 06:27 PM

Alright, I've found out why it's saying "Killed" 3 times after reaching a consistent point during the installation. It's running out of memory, because apparently /dev/dm-0p2 is in fact mapped to or is memory. I checked top before and after I ran setup and the amount of base memory shrank so I can only assume that it grew the ram disk until it ran out.

So... why the hell is it doing that? Am I still going to have to install then change the root drive and stuff?

EDIT: I'm assuming that running dmraid from that partition might help, since I ran it from the ramdisk and it's installing to the ramdisk. Hmm... perhaps a chroot... But I need to run it in order to see the partition. Grr, I need advice on this.

technik733 05-26-2009 10:51 PM

Alright... I've basically searched through the entire Slackware ramdrive trying to find something useful, and I've found that there is a command "dmsetup" that will recognize that the long device names are an array, but I can't figure out how to use it. There is also a shell script in /dev that looks interesting; it says "somethingdev_map.sh", but I didn't cat it to read it.

Right now I'm just stuck in a situation where Slackware is installing to the ramdrive, and while I do copy the device node files to something that would seemingly be recognizable by the installer, it doesn't recognize them. It instead sees the IDE device and what is seemingly a ramdrive partition created by dmraid. Wtfsticks?

Erik_FL 05-27-2009 02:17 PM

Quote:

Originally Posted by technik733 (Post 3553867)
Alright... I've basically searched through the entire Slackware ramdrive trying to find something useful, and I've found that there is a command "dmsetup" that will recognize that the long device names are an array, but I can't figure out how to use it. There is also a shell script in /dev that looks interesting; it says "somethingdev_map.sh", but I didn't cat it to read it.

Right now I'm just stuck in a situation where Slackware is installing to the ramdrive, and while I do copy the device node files to something that would seemingly be recognizable by the installer, it doesn't recognize them. It instead sees the IDE device and what is seemingly a ramdrive partition created by dmraid. Wtfsticks?

Can you post the output that you get from these commands when you use them from the Slackware Setup CD.

dmraid -ay
ls -l /dev/mapper/*

Also, what device are you providing to SETUP for the root filesystem?

technik733 05-27-2009 07:09 PM

The devices it shows that are type Linux are /dev/hde2 which does not work, obviously, and /dev/dm-0p2, which is what I was choosing to install to. I'm not at home right now but when I just use ls in the directory I get:

control pdc_ccfafbbhc pdc_ccfafbbhc1 pdc_ccfafbbhc2

Not sure if the details matter a lot, but I'll post the exact output with the -l switch when I get home. It should be in around 4 hours. =/

Erik_FL 05-27-2009 10:54 PM

Quote:

Originally Posted by technik733 (Post 3554858)
The devices it shows that are type Linux are /dev/hde2 which does not work, obviously, and /dev/dm-0p2, which is what I was choosing to install to. I'm not at home right now but when I just use ls in the directory I get:

control pdc_ccfafbbhc pdc_ccfafbbhc1 pdc_ccfafbbhc2

Not sure if the details matter a lot, but I'll post the exact output with the -l switch when I get home. It should be in around 4 hours. =/

The "-l" option will show you the major and minor device IDs and you will need those to create the correct "udev" rules.

The automatically created names for the RAID arrays will be under "/dev/mapper" and NOT under "/dev". If you want names under "/dev" you will have to create them yourself.

The device name that you need to install to (for the second primary partition) is as follows.

/dev/mapper/pdc_ccfafbbhc2

If you are not installing to the second primary partition then you must manually create the partitions that you want and then run "dmraid -ay" again.

cfdisk /dev/mapper/pdc_ccfafbbhc
# create partitions
dmraid -ay

As I mentioned you may find that does not work well with some software. You can do this to create your own device names during installation.

# create "/dev/sdr" names
# hard disk device to use with cfdisk
cp -Pp /dev/mapper/pdc_ccfafbbhc /dev/sdr
# first primary (probably windows)
cp -Pp /dev/mapper/pdc_ccfafbbhc1 /dev/sdr1
# second primary (probably linux)
cp -Pp /dev/mapper/pdc_ccfafbbhc2 /dev/sdr2

The above commands copy the small device node files that essentially just provide the major and minor device IDs to access the devices. The actual names are unimportant and the same devices will be accessed if the major and minor numbers in the file are not changed. You can create the files using "mknod" but it's simpler to just copy the existing ones.
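To make the "mknod" alternative concrete, here is a sketch of reading the IDs and recreating a node by hand. /dev/null is used below only as a stand-in to show how to read the numbers (the real mapper device only exists on the RAID system), and "253 2" is a made-up placeholder pair:

```shell
# Read the major:minor pair (in hex) of an existing device node.
# On the real system you would point this at /dev/mapper/pdc_ccfafbbhc2;
# here /dev/null is a stand-in. The null device is always major 1, minor 3.
stat -c '%t:%T' /dev/null

# Then recreate the node elsewhere with mknod (needs root); the numbers
# below are placeholders for whatever stat reported on your system:
# mknod /dev/sdr2 b 253 2
```

Copying with "cp -Pp" achieves the same thing with less chance of a typo in the numbers, which is why it is the method shown above.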

If you create a swap partition, reboot or run "dmraid" again to detect it. Note that I used a logical partition (5) for the swap space as an example. If you create a primary partition for swap space, the number will be 3 instead of 5.

cfdisk /dev/sdr
# create the swap partition
dmraid -ay
# swap partition device
cp -Pp /dev/mapper/pdc_ccfafbbhc5 /dev/sdr5

You must create all the partitions BEFORE using SETUP because you also have to run "dmraid -ay" after that to create the device names. It's OK to let SETUP format the partitions but it can't create the partitions. NOTE: You may want to manually format the root filesystem with 128-byte inodes to be compatible with more software.

Then install to this device.

/dev/sdr2

Specify the swap device like this if you provide it to setup.

/dev/sdr5

After installing Linux (before rebooting the setup disk) do this.

mount /dev/sdr2 /mnt
# create root device
cp -Pp /dev/sdr2 /mnt/dev
# create swap device
cp -Pp /dev/sdr5 /mnt/dev
umount /mnt

If you use the long device names then do the following after installing Linux and before rebooting.

mount /dev/mapper/pdc_ccfafbbhc2 /mnt
# create mapper directory
mkdir /mnt/dev/mapper
# create root device
cp -Pp /dev/mapper/pdc_ccfafbbhc2 /mnt/dev/mapper
# create swap device
cp -Pp /dev/mapper/pdc_ccfafbbhc5 /mnt/dev/mapper
umount /mnt

I hope that clears up the confusion about the device names. Typing things like "pdc_ccfafbbhc5" gets tiring and that's why I just created my own bogus device names. Some programs like "cfdisk" also don't display device names that long.

The names that you provide to "SETUP" are the names that it will use in "/etc/fstab". The root device and swap device created in "/dev" during setup must match the names created by "udev" later on. If you use the "/dev/sdr" names then make sure to specify the udev rules to create the devices (as in my example).

technik733 05-29-2009 12:36 AM

1 Attachment(s)
Sorry for the delay, I've had things to do. The output from ls -l is attached, since notepad screws up the lines.

The problem I'm having now is not that I can't access the arrays, or use dmraid to recognize them, or copy them to another /dev/xy file, but the problem is that the slackware installer is not recognizing them. I've got a swap partition on the non-raid drive, and using cfdisk I can see the formatted /dev/mapper/long2 partition, and seemingly it's even formatted with reiserfs, but in the installer it shows /dev/dm-0p2, and when it reaches a certain package it always stops and says "Killed" 3 times.

When this happened I ran top and it said that I only had around 60mb of memory, which was not the case. So I am assuming that it was really installing to the ramdisk. But it's odd since when I use cfdisk the partition consistently says that it's formatted reiserfs... Oh, but I used mkreiserfs on the long name to format it as well. So it's obviously installing to the ramdrive.

I think if I could just get the stupid installer to see the long device name or the /dev/sdr2 partition it would be relatively peachy. But I just don't know why it's not. I suspect that it might be because I'm not using the dmsetup utility that comes on the ramdisk, but I am not sure. I'll be looking into that in the next day or so.

Erik_FL 05-29-2009 10:49 AM

Quote:

Originally Posted by technik733 (Post 3556163)
Sorry for the delay, I've had things to do. The output from ls -l is attached, since notepad screws up the lines.

The problem I'm having now is not that I can't access the arrays, or use dmraid to recognize them, or copy them to another /dev/xy file, but the problem is that the slackware installer is not recognizing them. I've got a swap partition on the non-raid drive, and using cfdisk I can see the formatted /dev/mapper/long2 partition, and seemingly it's even formatted with reiserfs, but in the installer it shows /dev/dm-0p2, and when it reaches a certain package it always stops and says "Killed" 3 times.

When this happened I ran top and it said that I only had around 60mb of memory, which was not the case. So I am assuming that it was really installing to the ramdisk. But it's odd since when I use cfdisk the partition consistently says that it's formatted reiserfs... Oh, but I used mkreiserfs on the long name to format it as well. So it's obviously installing to the ramdrive.

I think if I could just get the stupid installer to see the long device name or the /dev/sdr2 partition it would be relatively peachy. But I just don't know why it's not. I suspect that it might be because I'm not using the dmsetup utility that comes on the ramdisk, but I am not sure. I'll be looking into that in the next day or so.

The installer might not be able to use the long device names. I've never tried that so I don't know.

To save yourself frustration why not try using the shorter names like "sdr2"? If that works then use the same names in your installed OS.

The way that I have been installing Slackware to RAID is by copying the existing Slackware installation from some other disk. I use a boot CD to do that but you can use the Slackware Setup CD command shell if you have "dmraid". Install Slackware to some normal hard disk and then afterward, copy the files to your RAID partition. For example.

mkdir /mnt/src
mkdir /mnt/dst
dmraid -ay
mount /dev/hda1 /mnt/src
mount /dev/mapper/pdc_ccfafbbhc2 /mnt/dst
cp -a /mnt/src/* /mnt/dst
chroot /mnt/dst
mount -t proc none /proc
mount -t sysfs none /sys
# edit files and whatever else
umount /sys
umount /proc
exit
umount /mnt/dst
umount /mnt/src

The "dmraid" devices are not like normal device mapper devices, and I have noticed a few programs that don't work well with them. That's one reason why I decided to create my own names in the style of normal SCSI disk names ("sdr"). The main reason that I picked the "r" suffix is that having that many real SCSI disks is unlikely, but it's still a valid suffix. It also reminds me that the device is a RAID array.


All times are GMT -5.