Old 06-06-2008, 10:58 AM   #31
Erik_FL
Member
 
Registered: Sep 2005
Location: Boynton Beach, FL
Distribution: Slackware
Posts: 821

Rep: Reputation: 258

Quote:
Originally Posted by agentc0re
I think you've done a great job figuring out dmraid, Eric, but I still maintain that the built-in software RAID is the better choice. (not trying to flame) /endflame
It is certainly easier to install and use. I don't think I will ever get another "fake RAID" controller. I'll either just use software RAID or get a real hardware RAID controller that uses a more "normal" driver.

There are really only two reasons to use a fake hardware RAID controller instead of software RAID.
  • To boot from the RAID array (stripe set)
  • To allow multiple Operating Systems to see the same partitions

I wanted to do both of those things. It was actually easier with kernel version 2.4 because there was a Promise RAID driver on the Promise web site. Other than the fact that the driver could only be built as a module, it was easy to load the driver and boot from the RAID array.

Unfortunately Promise has not updated their Linux driver and seems to drop support for products as soon as a new product is released. My next hardware purchases are going to be based on who provides better support for Linux (in reality not just lip service).
 
Old 06-06-2008, 03:13 PM   #32
mostlyharmless
Senior Member
 
Registered: Jan 2008
Distribution: Arch/Manjaro, might try Slackware again
Posts: 1,851
Blog Entries: 14

Rep: Reputation: 284
Quote:
Originally Posted by Erik_FL
It is certainly easier to install and use. I don't think I will ever get another "fake RAID" controller. [...] My next hardware purchases are going to be based on who provides better support for Linux (in reality, not just lip service).
I couldn't agree more. I've also found this thread very helpful; thanks for all your hard work!
 
Old 06-07-2008, 08:43 PM   #33
m1bear
LQ Newbie
 
Registered: May 2008
Distribution: Slackware 14.1
Posts: 12

Rep: Reputation: 2
I am one of those who think this thread is very useful, and I thank everyone for helping out with it. I still haven't gotten Slackware to install on a fake RAID array, but at least I know it is possible, and I am kind of waiting on more hardware anyhow. I have been working on making the boot CD that Erik was talking about; the first time I tried, it made a BIN file instead of an ISO and it wasn't bootable, so I must have done something wrong. If I understand correctly, I need to make a boot CD that supports an initrd with the dmraid program on it. I am hoping I can boot off this CD, then insert a normal install DVD and run the setup program. So far nothing about this has been easy, but as I said, I already have Slackware installed on the same computer on a different hard drive.

Otherwise I am doing very well. I was surprised at how well my other hardware worked: the dual Gigabit LAN worked great, even the wireless adapter worked, the sound worked (though the digital out didn't), and the video cards work with KDM, although I bet I could make them work better with some effort (I have two 8500 GT cards with 512 MB each hooked up in SLI, but I honestly don't think they really need it). I caught myself getting off the subject. I think this thread is great, but you almost need a walkthrough for the thread itself; there is important info and code all over the place, so I end up with my laptop sitting next to me, trying to follow the steps on my main computer.

And Erik_FL, I would very much appreciate being emailed more specific instructions. I tried with what you posted and had to copy more programs over because of errors, but I didn't edit the programs at all, so that may have been the problem. I have tons of blank CDs, so I have plenty of room to work.

And just a question out of the blue: can I simply copy my filesystem over to the RAID array and then change GRUB to boot from there? As I said, I have a working copy of Slackware on the computer, and using dmraid I have mounted all the partitions, so I can easily log in as root and copy everything over.


M1Barrie@msn.com

Last edited by m1bear; 06-07-2008 at 08:45 PM. Reason: add email
 
Old 06-08-2008, 05:13 AM   #34
hb950322
LQ Newbie
 
Registered: Dec 2006
Posts: 24

Original Poster
Rep: Reputation: 15
A detailed guide on how to set it up

A lot of thanks and respect to the Software-RAID HOWTO (http://www.tldp.org/HOWTO/Software-RAID-HOWTO.html) by Jakob Ostergaard and Emilio Bueso. That HOWTO is a really in-depth document of about 40 pages! My document is more for the impatient sysadmin ...

Server Hardware, Kernel:

Kernel 2.6.21
Pentium Dual Core, 2.66 GHz
4 GB RAM
2 SATA drives, 400 GB each

Boot the machine with the Slackware DVD and get to a console.

Partition both drives exactly the same way. I don't see the advantage of creating lots of partitions, so I created a primary partition of 380 GB (sda1, sdb1) and a swap partition of 20 GB (sda2, sdb2) on each drive. Note the correct partition types: FD (Linux raid autodetect) for sda1 and sdb1, and type 82 as usual for the swap partitions.
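Here is a rough sketch of that step; the fdisk keystrokes are summarized as comments, and the sfdisk line (a common shortcut, not from the original post) clones the finished layout onto the second drive:
Code:
fdisk /dev/sda
#   n  -> create primary partition 1, leaving about 20 GB free at the end
#   n  -> create primary partition 2 in the remaining space (for swap)
#   t  -> set partition 1 to type fd (Linux raid autodetect)
#   t  -> set partition 2 to type 82 (Linux swap)
#   w  -> write the partition table and exit
sfdisk -d /dev/sda | sfdisk /dev/sdb   # replicate the same layout on the second disk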

If an old LILO or other boot loader was installed in the MBR of the disks, wipe out the boot code (only the first 446 bytes; the partition table that follows is left intact) with:

dd if=/dev/zero of=/dev/sda bs=446 count=1
dd if=/dev/zero of=/dev/sdb bs=446 count=1

Forget about the old raidtools and raidtab files. Use the mdadm utility (multiple device admin) instead, a really fine tool with lots of options (see man mdadm or mdadm --help). To create the array, type:

mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1

If there is already a filesystem on the partition(s), mdadm will ask whether you really want to proceed. Answer 'y'. You get output like:

mdadm: array /dev/md0 started

With 'cat /proc/mdstat' you can always see the status of your array. Or use 'mdadm --detail /dev/md0' instead, which gives you nearly the same information.
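Optionally (not in the original guide, but a common extra step), you can record the array in /etc/mdadm.conf so that later 'mdadm -A -s' calls can assemble it without further arguments:
Code:
mdadm --detail --scan >> /etc/mdadm.conf   # append an ARRAY line describing /dev/md0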

By examining the output you will see that the syncing process between the two disks (partitions) has started immediately. DON'T INTERRUPT THIS PROCESS UNTIL IT HAS FINISHED!

If the device was successfully created, the reconstruction process has now begun. Your array is not consistent until this reconstruction phase has completed. However, the array is fully functional (except for the handling of device failures of course), and you can format it and use it even while it is reconstructing.

Now you can put a filesystem on /dev/md0. I prefer ext3, but this doesn't matter. If you are an expert and want fine tuning, use mke2fs on the console. Otherwise you can type 'setup' now to enter the Slackware configuration utility. Proceed through the configuration steps as usual, except for the LILO installation. When setting up your target partitions, you will see the /dev/md0 device, which MUST be set up as the root partition.
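If you take the mke2fs route mentioned above instead of letting 'setup' format the device, a minimal sketch (assuming the layout from this guide) is:
Code:
mke2fs -j /dev/md0    # -j adds a journal, i.e. ext3
mkswap /dev/sda2      # prepare both swap partitions
mkswap /dev/sdb2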

LILO

Newer LILO distributions can handle RAID-1 devices, and thus the kernel can be loaded at boot-time from a RAID device. LILO will correctly write boot-records on all disks in the array, to allow booting even if the primary disk fails.

Some users have experienced problems with this, reporting that although booting with one drive connected worked, booting with both drives connected failed. Nevertheless, running the described procedure with both disks fixed the problem, allowing the system to boot from either single drive or from the RAID-1 (this is what I did, too: changing lilo.conf and installing LILO in the MBR of both disks).

The boot device MUST be a non-RAID device. The root device is your new md0 device. I did not test installing LILO in the superblock of the array; in my opinion it should also work.

Example:

boot=/dev/sda
install=/boot/boot.b
prompt
timeout=50
message=/boot/message
default=linux

image=/boot/vmlinuz
label=linux
read-only
root=/dev/md0

Enter the LILO configuration in expert mode. Go through the steps, and when you are done, review the lilo.conf file carefully to make sure the boot and root entries are as explained above.

If everything is OK, install LILO.

Be very patient. The synchronisation process is painfully slow; for the two 400 GB disks it took over two hours! Read on to learn how to speed it up.

Paranoia:
Unmount /dev/md0
Stop your array with mdadm -S /dev/md0
Reboot. Everything should work perfectly.


Speeding up Synchronisation

If you are sitting in front of the console (or on a remote SSH connection) waiting for a Linux software RAID to finish rebuilding (because you added a new drive, replaced a failed one, etc.), you may be frustrated by how slowly the process runs. You run cat on /proc/mdstat over and over (you should really use watch for this), and it seems to never finish. Obviously there is a reason for this slowness, and on a production system you should leave the defaults alone. But if you do want to speed the process up, here is how. It will place a much higher load on the system, so use it with care.
To see the speed limits the kernel imposes on RAID reconstruction, use:
cat /proc/sys/dev/raid/speed_limit_max
200000
cat /proc/sys/dev/raid/speed_limit_min
1000
In the system logs you can see something similar to:
md: minimum _guaranteed_ reconstruction speed: 1000 KB/sec/disc.
md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for reconstruction.
This means that the minimum guaranteed speed of the rebuild of the array is approx 1MB/s. The actual speed will be higher and will depend on the system load and what other processes are running at that time.
If you want to increase this minimum speed, write a higher value into speed_limit_min. For example, to raise the minimum to 200000 KB/sec (about 200 MB/sec), use:
echo 200000 >/proc/sys/dev/raid/speed_limit_min
The results are instant… you can return to the watch window to see it running, and hope that this will finish a little faster (this will really depend on the system you are running, the HDDs, controllers, etc.):
watch cat /proc/mdstat
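The same limits are exposed as sysctl keys, so (assuming your system applies /etc/sysctl.conf at boot, which the original post does not cover) you could also set them this way:
Code:
sysctl -w dev.raid.speed_limit_min=200000                        # same effect as the echo above
echo "dev.raid.speed_limit_min = 200000" >> /etc/sysctl.conf     # persistent, if sysctl.conf is applied at boot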
Hardcore-Testing

After everything was set up, I was curious how stable this thing really is. So I did clean shutdowns and pulled the cable of first the sda disk, then the sdb disk. Finally I pulled the AC connection without a proper shutdown (phew!!).

mdadm /dev/md0 -a /dev/sdX1 hot-adds the removed partition back into the degraded array.
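For reference, a full fail/remove/re-add cycle looks roughly like this (a sketch only; /dev/sdb1 stands in for whichever member you are testing):
Code:
mdadm /dev/md0 --fail /dev/sdb1     # mark the member as faulty
mdadm /dev/md0 --remove /dev/sdb1   # remove it from the array
cat /proc/mdstat                    # the array now runs degraded
mdadm /dev/md0 --add /dev/sdb1      # hot-add it back; resynchronisation starts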

It works. No damage so far! Now I'm very sure that I can rely on the system.
 
Old 06-09-2008, 11:25 AM   #35
Erik_FL
Member
 
Registered: Sep 2005
Location: Boynton Beach, FL
Distribution: Slackware
Posts: 821

Rep: Reputation: 258
The boot CD doesn't have an actual "initrd".
What it does have is a RAM filesystem (RAM disk) so that you can remove the CD from the computer while Linux is running (from the RAM filesystem). The RAM filesystem for the boot CD is stored in "rootfs.gz".

The purpose of the boot CD is to copy the Slackware files to your RAID partition and compile the kernel (in the RAID partition) plus create the "initrd" for the kernel.

To do that the boot CD has to have "dmraid" and a kernel that supports the disk controllers. The kernel on the CD has to have Device Mapper support for your types of RAID volumes (mirror, stripe, etc.). You can only do this if you have a non-RAID Linux driver for the disk controllers in your RAID hardware. Think of "dmraid" as a configuration program that detects the RAID information and then creates the Device Mapper information for Linux.
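As a rough guide (the option names below are from the 2.6 kernel configuration and are my addition, so verify them against your kernel version), the boot-CD kernel needs at least the Device Mapper core, the RAID targets you use, and the plain non-RAID driver for your controller:
Code:
# Device Drivers -> Multiple devices driver support (RAID and LVM)
CONFIG_MD=y
CONFIG_BLK_DEV_DM=y     # Device Mapper core (the linear and striped targets live here)
CONFIG_DM_MIRROR=y      # needed for mirror (RAID-1) sets
# plus the appropriate non-RAID SATA/IDE driver for your chipset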

Since many "fake hardware RAID" controllers use standard disk controllers with special BIOS firmware, Linux does support them in non-RAID mode. The "dmraid" program configures a layer of Device Mapper devices on top of the "normal" hard disk devices that access the raw data for the disks. The Device Mapper does the RAID and the "normal" disk drivers read and write sectors. The "dmraid" program connects all that up and configures it but then is out of the picture. You could do that configuration manually but it would be difficult and would have to be changed if you changed your RAID configuration. You would also have to know how to decode the RAID metadata to set that up manually. The whole purpose of "dmraid" is to decode the metadata and configure the mirror and stripe sets in the Device Mapper.
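To see the layering that "dmraid" sets up, you can inspect the Device Mapper state after activation; a sketch (the set names depend on your controller's metadata format):
Code:
dmraid -ay        # activate every RAID set found in the on-disk metadata
dmraid -s         # show the detected sets
ls /dev/mapper/   # the mapped block devices created for them
dmsetup table     # the raw Device Mapper mappings sitting on top of the plain disks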

To create the boot CD, you start by building a kernel with all of the drivers required for your hard disk controllers, plus the Device Mapper functions (mirror / stripe). What I did was append "CD" to the version of the kernel and that puts the modules in a separate directory from the "normal" ones.

After you have a kernel that should boot and detect your devices, then you build "dmraid" and "grub" (if you haven't already). Then create a boot CD with the CD kernel.

Boot from the CD. If something doesn't work, build the kernel or CD again after making changes.

Once you can detect the RAID partitions, create the Linux partition and mount it, then you can install Slackware.

Use "chroot" to make the new Linux partition the root and then compile your Linux kernel, build your "initrd" and install "grub".
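A minimal sketch of that chroot step, assuming the new root partition gets mounted at /mnt (the device name is just a placeholder):
Code:
mount /dev/mapper/your_raid_rootpartition /mnt   # mount the new Linux root
mount -o bind /dev  /mnt/dev                     # make device nodes visible inside
mount -o bind /proc /mnt/proc
mount -o bind /sys  /mnt/sys
chroot /mnt /bin/bash                            # now build the kernel, make the initrd and install grub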

Don't be surprised if your first few attempts result in a "kernel panic". Mine certainly did.

It helps to understand how the boot process works. First, "grub" is loaded by the computer BIOS. Then "grub" uses the BIOS to load the Linux kernel and the "initrd" RAM disk image into memory. At this point all disk I/O has been done using the BIOS, identifying the hard disk by the drive ID (usually 0x80). What makes "grub" able to read from the RAID array is the BIOS code located on the fake RAID controller. That extends the BIOS disk I/O functions to handle the RAID array as if it was a single hard disk (much like a RAID driver in the OS).

Second, the Linux kernel in memory starts up and uses the "initrd" RAM disk as if it was the root filesystem. An "init" script runs to configure everything required for accessing the real root partition. Normally that means loading Linux modules. You have to add "dmraid" so that the Device Mapper is set up and you can access the real root partition in the RAID array. When the "init" script is done, it mounts the real root partition and essentially does a "chroot".

Third, the "init" script starts the normal "init" task in the real root partition. The "init" task deallocates the memory used by the "initrd" RAM disk, and continues on with normal booting. The "init" task looks at "inittab" and performs the steps required for the run level that was specified in the boot parameters (or the default run level).

So, "dmraid" is only needed for the boot CD installing Slackware, and in the "initrd". After you've installed Linux, "dmraid" is really only used during booting. It is the Device Mapper driver in Linux that does the majority of the work after it has been configured by "dmraid". What I like about this approach is that the fake RAID works exactly like Linux software RAID. With the proprietary drivers for RAID controllers, you have "special" software that might or might not work right. At least with Linux RAID you have something that is well tested and maintained.

The only difference between using "dmraid" and normal Linux RAID is which method is used to configure the RAID arrays. After they are configured there is no difference. That's why I find it humorous when people suggest that I use Linux software RAID instead of "dmraid". Essentially I am using Linux software RAID. I'm just using "dmraid" to configure the Linux software RAID to match my metadata stored by my fake RAID BIOS.

A side benefit of this approach is that it doesn't matter if my RAID disks are really connected to the correct fake RAID controller. If my RAID controller fails, I can connect the disks to any hard disk controller and still use "dmraid" to gain access from Linux. Of course I probably can't boot from the RAID array on some other disk controllers because booting requires the BIOS firmware on the RAID controller. If my RAID controller fails, reading the data is what's important, not booting from the RAID disks.
 
Old 04-24-2009, 01:18 PM   #36
feris
LQ Newbie
 
Registered: Apr 2009
Posts: 1

Rep: Reputation: 0
Slackware Current with dmraid on Gigabyte ga-ma770-ud3

Hello
I'm trying to run Slackware Current with dmraid on a RAID1 array on a Gigabyte GA-MA770-UD3 board. The fake RAID is made by Promise, and dmraid works correctly with it. I have followed the instructions in this topic: Slackware was successfully installed on the device created by dmraid in /dev/mapper, I created "short-named" devices (sdr) in /dev and /boot/initrd-tree/dev, copied dmraid and the libraries to /boot/initrd/lib, and modified the init script. The proper udev rules have also been created. At boot the initramfs works correctly, switch_root is done, and the rc.S script is started by init. It mounts proc and sys, launches udev, and then the boot crashes on swapon: the sdr1 node doesn't exist. But that device node was created previously, and the same goes for sdr2, which is the system partition.
What have I done wrong?

Sorry for my terrible English.
 
Old 04-24-2009, 06:46 PM   #37
Erik_FL
Member
 
Registered: Sep 2005
Location: Boynton Beach, FL
Distribution: Slackware
Posts: 821

Rep: Reputation: 258
Quote:
Originally Posted by feris
At boot the initramfs works correctly, switch_root is done, and the rc.S script is started by init. It mounts proc and sys, launches udev, and then the boot crashes on swapon: the sdr1 node doesn't exist. [...]
This information is more current (for Slackware 12.2).

Here are some of the things that caused me trouble getting that to work on my two computers with RAID controllers.

I was not able to get the current version of "dmraid" to work. I had to use dmraid version 1.0.0.rc12. Newer versions would not detect any RAID arrays for some reason.

The "dmraid" program does not correctly detect logical drive partitions in an extended partition if there is any empty space between partitions. Creating partitions under Windows may cause empty space between the logical partitions. I got around that problem by creating the logical partitions with Linux.

The root device and swap device have to be created in the "dev" directory using a boot CD or without UDEV running. Otherwise they are created in the UDEV pseudo filesystem. Those devices have to be seen before UDEV runs.

The major and minor device IDs may be different for the mapper devices when booting the actual system. After dmraid runs on the actual system and the boot fails, you can list the actual device IDs.

ls -l /dev/mapper/*
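Once you know the numbers, the nodes can be created with mknod. The values below only show the form (device-mapper devices commonly use major 253 or 254); always take the real major/minor pairs from your own listing:
Code:
mknod /dev/sdr2 b 253 2   # root device node  (example major/minor, check yours)
mknod /dev/sdr5 b 253 4   # swap device node  (example major/minor, check yours)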

Make sure that you have the required LVM support in the Linux kernel for mirror, stripe, or both. Also, you need the required library files in the "initrd" image to go with the "dmraid" program.
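To find out exactly which libraries the dmraid binary needs in the "initrd", ldd is enough (the list it prints will depend on your glibc and device-mapper versions):
Code:
ldd /sbin/dmraid   # lists the shared libraries dmraid is linked against
# copy each listed library, plus the ld-linux loader, into the initrd tree's /lib
# before rebuilding the image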

I was not able to use "lilo" to boot my computer. I used "grub". I also had to install "grub" using a boot CD containing grub. I could not install grub from Linux because it was unable to determine the correct BIOS devices. To install grub I pressed the "c" key in the grub boot loader and then entered commands to locate my OS.

find /boot/vmlinuz
root (hd0,1)
setup (hd0,1)
quit

If you want to install "grub" to the MBR then use "setup (hd0)". You will have to change "root (hd0,1)" to the correct device containing your grub boot loader files. That is usually your Linux root device.

Be careful to use the correct version of grub that is patched for 256-byte inodes, or you will have to format your Linux partition using 128-byte inodes.
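If you go the 128-byte-inode route instead of using a patched grub, the inode size can be chosen when the filesystem is created; a sketch (the device name is a placeholder):
Code:
mke2fs -j -I 128 /dev/mapper/your_raid_rootpartition   # ext3 with 128-byte inodes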

In case it helps here are the various files from my working Linux system.

Script to create initrd image.
Code:
ROOTDEVNAME="/dev/sdr2"		# Name of root device
LINUXVER="2.6.24.5-smp"		# Linux modules version
CLIBVER="2.7"			# C library version
ROOTDIR="/boot/initrd-tree"	# Location of root filesystem
# Get most of the needed programs from the normal mkinitrd
mkinitrd -k $LINUXVER -c -r "$ROOTDEVNAME" -f ext3
# Create root device
cp -a "$ROOTDEVNAME" "$ROOTDIR/dev"
# Copy scripts and programs
cp -p init "$ROOTDIR"
chmod u=rwx,g=rx,o=rx "$ROOTDIR/init"
cp -p /sbin/dmraid "$ROOTDIR/sbin"
for lib in \
   "libdevmapper.so.1.02" \
   "libc.so.6" "ld-linux.so.2" \
   "ld-$CLIBVER.so" "libc-$CLIBVER.so"
   do
   if [ -e "/lib/$lib" ] ; then
      cp -Pp "/lib/$lib" "$ROOTDIR/lib/$lib"
   else
      echo "Library file not found \"/lib/$lib\""
      exit 1
   fi
done
# Make the compressed image file
mkinitrd
The script creates the required root device in the "initrd" image by copying the existing device. You should create the root device under "/dev" before you run the script. The script copies the required libraries to run "dmraid". Make sure that the C library version and kernel version in the script match what you actually have.

Excerpt from the modified "init" script that I keep in the same directory as my "create initrd" script. The changes I made are described below the listing.
Code:
# Parse command line
for ARG in `cat /proc/cmdline`; do
  case $ARG in
    rescue)
      RESCUE=1
    ;;
    root=/dev/*)
      ROOTDEV=`echo $ARG | cut -f2 -d=`
    ;;
    resume=*)
      RESUMEDEV=`echo $ARG | cut -f2 -d=`
    ;;
    0|1|2|3|4|5|6)
      RUNLEVEL=$ARG
    ;;
    single)
      RUNLEVEL=1
    ;;
  esac
done

# Load kernel modules:
if [ ! -d /lib/modules/`uname -r` ]; then
  echo "No kernel modules found for Linux `uname -r`."
elif [ -x ./load_kernel_modules ]; then # use load_kernel_modules script:
  echo "${INITRD}:  Loading kernel modules from initrd image:"
  . ./load_kernel_modules
else # load modules (if any) in order:
  if ls /lib/modules/`uname -r`/*.*o 1> /dev/null 2> /dev/null ; then
    echo "${INITRD}:  Loading kernel modules from initrd image:"
    for module in /lib/modules/`uname -r`/*.*o ; do
      insmod $module
    done
    unset module
  fi
fi

# Sometimes the devices needs extra time to be available.
# root on USB are good example of that.
sleep $WAIT

# Use mdev to read sysfs and generate the needed devices 
mdev -s

# Load a custom keyboard mapping:
if [ -n "$KEYMAP" ]; then
  echo "${INITRD}:  Loading '$KEYMAP' keyboard mapping:"
  tar xzOf /etc/keymaps.tar.gz ${KEYMAP}.bmap | loadkmap
fi

# Find any dmraid detectable partitions
dmraid -ay

if [ "$RESCUE" = "" ]; then 
  # Initialize RAID:
  if [ -x /sbin/mdadm ]; then
    /sbin/mdadm -E -s >/etc/mdadm.conf
    /sbin/mdadm -A -s
  fi
I added the lines in the case statement for the "single" kernel parameter and I added the line with "dmraid" to detect the RAID devices.

Here is my "/etc/udev/rules.d/10-local.rules" file.
Code:
# /etc/udev/rules.d/10-local.rules:  local device naming rules for udev

KERNEL=="dm-0", NAME="sdr", OPTIONS+="last_rule"
KERNEL=="dm-1", NAME="sdr1", OPTIONS+="last_rule"
KERNEL=="dm-2", NAME="sdr2", OPTIONS+="last_rule"
KERNEL=="dm-3", NAME="sdr4", OPTIONS+="last_rule"
KERNEL=="dm-4", NAME="sdr5", OPTIONS+="last_rule"
KERNEL=="dm-5", NAME="sdr6", OPTIONS+="last_rule"
KERNEL=="dm-6", NAME="sdr7", OPTIONS+="last_rule"
KERNEL=="dm-7", NAME="sdr8", OPTIONS+="last_rule"
Here is my "/etc/fstab" file.
Code:
/dev/sdr5        swap             swap        defaults         0   0
/dev/sdr2        /                ext3        defaults         1   1
/dev/sdr1        /vista           ntfs        ro,uid=root,gid=vista,dmask=0027,fmask=0137  0   2
/dev/sdr4        /winxpe          ntfs        ro,uid=root,gid=vista,dmask=0027,fmask=0137  0   2
/dev/sdr6        /files           ntfs        ro,uid=root,gid=vista,dmask=0027,fmask=0137  0   2
/dev/sdr7        /sharedfiles     ntfs-3g     uid=root,gid=sharedfiles,dmask=0007,fmask=0117  0   2
/dev/sdr8        /backup          ntfs        ro,uid=root,gid=vista,dmask=0027,fmask=0137  0   2
Here is my "/boot/grub/menu.lst" file.
Code:
default 0
timeout 5

title Linux
root (hd0,1)
kernel /boot/vmlinuz vga=791 root=/dev/sdr2 ro vt.default_utf8=0 load_ramdisk=1 ramdisk_size=4096
initrd /boot/initrd.gz

title Linux Single User Mode
root (hd0,1)
kernel /boot/vmlinuz single root=/dev/sdr2 ro vt.default_utf8=0 load_ramdisk=1 ramdisk_size=4096
initrd /boot/initrd.gz

title Windows Vista 64-bit
rootnoverify (hd0,0)
chainloader +1
 
Old 04-24-2009, 08:04 PM   #38
agentc0re
Member
 
Registered: Apr 2007
Location: SLC, UTAH
Distribution: Slackware
Posts: 200

Rep: Reputation: 34
I would actually recommend NOT doing fake RAID. Instead, use Linux's software RAID, mdadm. Alternatively you can also use LVM. I found that this is the best solution because if your hardware dies and you cannot replace your motherboard with the exact same model, you are totally screwed. With mdadm/LVM, any machine running Linux can be set up to see the data on your hard drives. This is a huge pro and I think it outweighs any cons.

There are some good examples of how to do an mdadm and/or LVM setup on the Slackware CD.

Keep in mind, fake RAID is really still software RAID at the driver level, so it's no better and probably worse than mdadm.
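For what it's worth, moving an mdadm array to another Linux machine typically boils down to something like this (a sketch; the md device name and member disks depend on how the new machine enumerates them):
Code:
mdadm --examine --scan    # look for md superblocks on the attached disks
mdadm --assemble --scan   # assemble any arrays that were found
cat /proc/mdstat          # confirm the array came up
mount /dev/md0 /mnt       # mount it and read your data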
 
Old 04-25-2009, 11:24 AM   #39
Erik_FL
Member
 
Registered: Sep 2005
Location: Boynton Beach, FL
Distribution: Slackware
Posts: 821

Rep: Reputation: 258
Quote:
Originally Posted by agentc0re
I found that this is the best solution because if your hardware dies and you cannot replace your motherboard with the exact same model, you are totally screwed. [...]
You aren't "totally screwed" since Linux can still read the old RAID disks using LVM and "dmraid". The only thing that you temporarily lose is the ability to boot from the hard disks. There is no problem copying the disks using another computer or a Linux boot CD and "dmraid".
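For example, from a boot CD that carries "dmraid", recovering the data generally amounts to the following (a sketch; the name under /dev/mapper depends on your controller's metadata format):
Code:
dmraid -ay                                          # build the Device Mapper sets from the metadata
ls /dev/mapper/                                     # find the mapped array and its partitions
mount -o ro /dev/mapper/your_raid_partition /mnt    # mount read-only and copy the data off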

Since "dmraid" is only a configuration front-end for LVM, there is no difference in performance between using mdadm/LVM and dmraid/LVM.

Here are the three main reasons to use a "fake hardware RAID" controller
  • You can boot directly from the RAID array
  • Another OS can access the Linux RAID partitions
  • Linux can access the RAID partitions from another OS

Many people dual-boot with Windows. Using "fake hardware RAID" allows Linux and Windows to access each other's RAID partitions. Using mdadm/LVM prevents Windows from accessing the Linux RAID partitions, and using Windows OS RAID prevents Linux from accessing the Windows RAID partitions.

If you don't care about those three things, then mdadm/LVM is more portable, as you point out. The advantage of mdadm/LVM is that you can create arrays with disks on different disk controllers. On a system with limited disk controllers that can be an advantage. I do think that more than three drives in a RAID array provides very little extra performance (or maybe even lower performance).

I will be the first to admit that using "fake hardware RAID" controllers with Linux can be very difficult and one should not attempt it without being willing to invest significant effort. I did find that it was easier on the second computer because of what I had already learned and work that I had already done.
 
  

