actually a couple of weeks ago i was in your position.
in the end i implemented raid 5, but no LVM. basically i have a single 40GB drive for the OS and another 4 drives in the raid5.
anyway, to the point:
but then again, will it be hard (read: will i have to rebuild the raid) if i wanted to upgrade the 3rd IDE controller from 2-channel to 4-channel?
i don't think you should have a problem, provided you recompile support for the new controller into the kernel... and you put the new card in the exact PCI slot that you removed the old one from. i think the PCI slots (and thus the disks) are scanned sequentially, so replacing one card will not affect the device node that gets assigned to each drive during boot up... i would appreciate a more kernel-oriented person verifying that though...
but take things in order. have a go at the software RAID guide.
note the 'persistent superblock' option that exists. force all drives to have it enabled, because it allows the disks of the array to be 'connected' (reassembled) at boot time. and set the partition type of the drives to linux raid autodetect (partition code 0xFD). it's possible that the array will be reassembled even if the drive letters have changed (if that happens, remember to update the /etc/raidtab file).
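just so you know roughly what you're aiming for, here's a sketch of a raidtab for a 4-disk raid5 (device names and chunk size are only examples, adjust to your own disks):

```
# /etc/raidtab -- example 4-disk raid5 (raidtools era)
raiddev /dev/md0
        raid-level              5
        nr-raid-disks           4
        nr-spare-disks          0
        persistent-superblock   1       # lets the kernel reassemble at boot
        parity-algorithm        left-symmetric
        chunk-size              32
        device                  /dev/hde1
        raid-disk               0
        device                  /dev/hdf1
        raid-disk               1
        device                  /dev/hdg1
        raid-disk               2
        device                  /dev/hdh1
        raid-disk               3
```

once that file is in place you'd run mkraid /dev/md0 to build the array (or use mdadm instead, which doesn't need a raidtab at all).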
as for adding more drives (since i guess you'll be changing the IDE controller to add 2 more disks)... read this:
apparently there is a way to expand the number of disks in a raid 5, but it wasn't well established (at the time the article was written).
i now read in the mdadm manual that there is a --grow option to expand an array, but it does NOT support raid 5 as of yet. i'm confident we'll have that option available to us soon though.
as for now, my best advice to you would be to experiment..
start your raid with, say, 4 drives... make sure you understand what's going on, and once the system has settled into a raid5 with 4 drives, start adding drives.
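if you want to experiment without risking real disks at all, you can build a throwaway raid5 out of loopback files first (needs root; the file sizes and loop device names here are just for illustration):

```shell
# create four 100MB backing files and attach them to loop devices
for i in 0 1 2 3; do
    dd if=/dev/zero of=/tmp/raidfile$i bs=1M count=100
    losetup /dev/loop$i /tmp/raidfile$i
done

# assemble them into a raid5 array with mdadm
mdadm --create /dev/md0 --level=5 --raid-devices=4 \
      /dev/loop0 /dev/loop1 /dev/loop2 /dev/loop3

# watch the initial resync, then play: fail a disk, pull it, re-add it
cat /proc/mdstat
mdadm /dev/md0 --fail /dev/loop3
mdadm /dev/md0 --remove /dev/loop3
mdadm /dev/md0 --add /dev/loop3

# tear it all down when you're done
mdadm --stop /dev/md0
for i in 0 1 2 3; do losetup -d /dev/loop$i; done
```

this way you can rehearse a drive failure and rebuild as many times as you like before trusting real data to the array.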
does slack 11 come with built-in support for raid and LVM, or will i have to recompile the kernel in order for the OS to support those features?
so i installed slack11 with the test26.s kernel and i had to recompile a kernel with raid support (i downloaded the latest stable kernel at the time).
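for reference, these are roughly the options you'd want enabled when recompiling (names can differ slightly between 2.6 kernel versions, so double-check in menuconfig under Device Drivers -> Multi-device support):

```
CONFIG_MD=y
CONFIG_BLK_DEV_MD=y     # core md (software raid) driver
CONFIG_MD_RAID5=y       # raid4/raid5 personality
CONFIG_MD_RAID0=m       # other personalities, optional
CONFIG_MD_RAID1=m
CONFIG_BLK_DEV_DM=y     # device-mapper, only needed if you also want LVM
```

building them in (=y) rather than as modules means the kernel can autodetect and assemble 0xFD partitions at boot without an initrd.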
setting up the raid is fairly easy. i'll probably set up raid 5 with 8 drives for storage and 2 as spares. is a spare in software raid unusable until another drive breaks, at which point it's used to rebuild the bad drive? or will i be fine with one?
no need to use more than 1 drive as a spare, especially if you choose to make 1 raid out of all the drives (you can even make more than one raid5 array and still share 1 spare drive between the 2 to save space).
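the sharing bit is done with mdadm's spare-group keyword in mdadm.conf -- something along these lines (device names are only examples, and mdadm --monitor has to be running for the spare to migrate to whichever array loses a disk):

```
# /etc/mdadm.conf -- two raid5 arrays sharing one spare
DEVICE /dev/hd*
ARRAY /dev/md0 devices=/dev/hde1,/dev/hdf1,/dev/hdg1,/dev/hdh1 spare-group=shared
ARRAY /dev/md1 devices=/dev/hdi1,/dev/hdj1,/dev/hdk1 spare-group=shared
```

put the physical spare in one of the two arrays; when a drive in the other array fails, the monitor moves the spare across and the rebuild starts there.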
also, will there be a problem if i decide to add a disk to the raid that has a different size than the disks in the array? since it's software raid i shouldn't have any major problem, right?
the raid will only use an amount of space from each drive equal to the smallest drive in the array. read here
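to put numbers on it: raid5 usable space is (number of active disks - 1) times the smallest member, so the excess on bigger drives is simply wasted. a quick example (sizes made up):

```shell
# raid5 usable capacity = (disks - 1) * smallest member size
# say 3 x 200GB drives plus 1 x 120GB drive in one raid5:
disks=4
smallest=120    # GB -- the 120GB drive caps every member
usable=$(( (disks - 1) * smallest ))
echo "${usable}GB usable"    # each 200GB drive wastes 80GB
```

so that mixed set gives 360GB usable instead of the 600GB you'd get from four matched 200GB drives.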
hope this was of some help