Linux - Server: This forum is for the discussion of Linux software used in a server-related context.
Welcome to LinuxQuestions.org, a friendly and active Linux Community.
I'm new/old at Linux. The good thing about Linux is that once it's set up, there is not much the user has to do to maintain it. The problem is that once it's time to redo the system, you've forgotten what you originally did. In my case, I'm trying to migrate all my machines into one unit.
Anyways:
I have an issue with my system. I'm running Debian 3.1.5, kernel 2.6.8.3.x, and using mdadm for my RAID setup. I have several drives set up:
/dev/hda
/dev/hdb
/dev/hdc
/dev/sda
/dev/sdb
/dev/sdc
/dev/sdd
The most important are the /dev/sd[abcd] drives. I'm trying to set up RAID5 on them. I have read and followed various instructions on this and other forums, and have already configured/rebuilt multiple times, only to end up with the same result.
The problem is that mdadm doesn't appear to be reading /etc/mdadm/mdadm.conf on boot, and because it doesn't get read on boot I always get this message: "mdadm: no devices found for /dev/md0". A bunch of other messages follow, but suffice it to say that if mdadm can't find any devices belonging to the /dev/md0 array, it errors out and will not assemble the array.
The reason I believe mdadm is not reading mdadm.conf is that I can log on and run the command to assemble the array:
#mdadm -A /dev/md0
mdadm: /dev/md0 has been started with 4 drives.
I also test /etc/fstab with:
#mount -a
#df -h
shows the mounted file system
/dev/md0 101G 33M 96G 1% /raid5
This is a pain, especially since I can't rely on the system to complete its boot-up without pausing at "mdadm: no devices found for /dev/md0" and "give root password for maintenance or CONTROL-D to continue".
Here's my /etc/mdadm/mdadm.conf:
DEVICE partitions
DEVICE /dev/sda /dev/sdb /dev/sdc /dev/sdd
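For reference, a minimal sketch of a conf with an explicit ARRAY line is below. Two assumptions baked into it: the member devices are the sd[abcd]1 partitions (the mdadm -E output later in the thread shows the superblocks on the partitions, not the whole disks), and the UUID is the one that same mdadm -E output reports. A matching ARRAY line can also be generated with `mdadm --detail --scan` while the array is running.

```
# /etc/mdadm/mdadm.conf -- minimal sketch; assumes the array members
# are the sd[abcd]1 partitions, per the mdadm -E output in this thread
DEVICE /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
ARRAY /dev/md0 level=raid5 num-devices=4 UUID=44e75c85:91fc9a9f:34b2c1b4:4834c282
```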
I've also checked /etc/rc2.d to make sure the link to mdadm is correct...
I've also checked the /etc/init.d mdadm and mdadm-raid files; from what I can tell they look correct...
Is there anything else I'm missing? If some Linux/mdadm guru, or someone who has experienced this (and I know there are plenty out there), can give me some info on this problem, I will be extremely grateful.
I know it's a long read, but please understand my dilemma.
Try changing the partition type to 'FD' using a partition editor. This tells the kernel to try to puzzle out the RAID type and participants. This way you don't even NEED mdadm.conf. I never use it.
Also, make sure your initrd file (in /boot) has RAID5 support if your system partition is RAID5.
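As a quick sanity check before rebuilding anything, you can confirm that RAID5 support is actually present at runtime. A sketch (module names and initrd layout vary between kernels, so treat the exact names as assumptions):

```shell
# If the md driver is present, /proc/mdstat lists the supported
# personalities (e.g. [raid5]) on its first line.
cat /proc/mdstat
# Show any loaded RAID modules.
lsmod | grep raid
```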
Thanks for the response.
I did change the partition type to FD. Through my debugging I tried using both whole drives and partitions: /dev/sda or /dev/sda1...
Neither method worked. My system does have RAID5 support, because I selected it during the Debian installation. That's why I don't understand the issue I'm experiencing. This is a base install of Debian 3.1.5, with RAID5 on the 4 SCSI drives, yet every time I reboot I get the same message: "mdadm: no devices found for /dev/md0".
If you changed the partition type AFTER mucking with the mdadm.conf file, I suggest you rename the mdadm.conf file to mdadm.bkup
Is there any information you are trying to preserve on the target drives?
Have you read through the Software RAID HOWTO? (one of the best HOWTOs out there)
Could you post the following in separate replies:
mdadm -E /dev/sda1
mdadm -D /dev/md0
fdisk -l
cat /proc/mdstat
grep kernel /boot/grub/grub.conf
pax -z < /boot/initrd*$(uname -r)* | grep kernel
That should do it for me to figure out what's going on.
Though I still get the same error message, I'm using the method below to somewhat automate the (dirty) RAID mounting.
I still have to press CTRL+D to bypass the error message.
##############################################################################################
#!/bin/sh
echo
echo "Starting RAID-5"
mdadm -A /dev/md0
mount -a
echo "Done with manual RAID setup"
echo
##################################################
Save it as /etc/init.d/mdadm.mine
Make it executable: "chmod +x /etc/init.d/mdadm.mine"
Add it to the boot sequence: "update-rc.d mdadm.mine defaults" (update-rc.d takes the script name, not the full path)
################################################################################################
Here's the result from mdadm -E /dev/sda1
mdadm -E /dev/sda1
/dev/sda1:
Magic : a92b4efc
Version : 00.90.00
UUID : 44e75c85:91fc9a9f:34b2c1b4:4834c282
Creation Time : Tue Apr 10 18:26:45 2007
Raid Level : raid5
Raid Devices : 4
Total Devices : 4
Preferred Minor : 0
Update Time : Wed Apr 11 20:08:33 2007
State : clean
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0
Checksum : 8b54834b - correct
Events : 0.82
Layout : left-symmetric
Chunk Size : 64K
Number Major Minor RaidDevice State
this 0 8 1 0 active sync /dev/sda1
0 0 8 1 0 active sync /dev/sda1
1 1 8 17 1 active sync /dev/sdb1
2 2 8 33 2 active sync /dev/sdc1
3 3 8 49 3 active sync /dev/sdd1
Number Major Minor RaidDevice State
0 8 1 0 active sync /dev/sda1
1 8 17 1 active sync /dev/sdb1
2 8 33 2 active sync /dev/sdc1
3 8 49 3 active sync /dev/sdd1
Disk /dev/hda: 80.0 GB, 80026361856 bytes
255 heads, 63 sectors/track, 9729 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/hda1 * 1 12 96358+ 83 Linux
/dev/hda2 13 377 2931862+ 82 Linux swap / Solaris
/dev/hda3 378 9729 75119940 83 Linux
Disk /dev/hdb: 82.3 GB, 82348277760 bytes
255 heads, 63 sectors/track, 10011 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/hdb1 1 10011 80413326 83 Linux
Disk /dev/hdc: 200.0 GB, 200049647616 bytes
255 heads, 63 sectors/track, 24321 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/hdc doesn't contain a valid partition table
Disk /dev/md0: 110.1 GB, 110163001344 bytes
2 heads, 4 sectors/track, 26895264 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Device Boot Start End Blocks Id System
/dev/md0p1 1 26895264 107581054 83 Linux
Disk /dev/sda: 36.7 GB, 36746412032 bytes
64 heads, 32 sectors/track, 35044 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Device Boot Start End Blocks Id System
/dev/sda1 1 35044 35885040 fd Linux raid autodetect
Disk /dev/sdb: 36.7 GB, 36746412032 bytes
64 heads, 32 sectors/track, 35044 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Device Boot Start End Blocks Id System
/dev/sdb1 1 35044 35885040 fd Linux raid autodetect
Disk /dev/sdc: 36.7 GB, 36778545152 bytes
64 heads, 32 sectors/track, 35074 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Device Boot Start End Blocks Id System
/dev/sdc1 1 35074 35915760 fd Linux raid autodetect
Disk /dev/sdd: 36.7 GB, 36722061312 bytes
64 heads, 32 sectors/track, 35020 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Device Boot Start End Blocks Id System
/dev/sdd1 1 35020 35860464 fd Linux raid autodetect
kaos:/etc/init.d#
This is a fresh build of Debian 3.1.5. I have a drive, /dev/hdc, which has data on it, though I couldn't care less about it. There is no data on the drives I'm trying to build the RAID on; it wouldn't matter anyway, since my efforts have wiped and re-wiped all data on them.
Did update-initramfs get run when you installed mdadm? Have you tried running it to see if it fixes things? I run dmraid, and I'm pretty sure it needs to be run for that.
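A sketch of the suggestion above, assuming an initramfs-tools based system (i.e. Etch or later; on Sarge the equivalent tool was mkinitrd, with different options):

```shell
# Rebuild the initramfs for the running kernel so the md/raid5
# modules get included (requires root).
update-initramfs -u -k $(uname -r)
```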
Added:
Not that it strictly matters for this problem, but you should really consider upgrading to Etch. It's now officially stable.
Last edited by Quakeboy02; 04-11-2007 at 09:25 PM.
Switching between whole disk RAID devices, disk partition RAID devices, and partitions of RAID devices as RAID devices (/dev/md0p1) has me, you, the kernel, and everyone on this board confused.
Unmount everything that has to do with the SCSI drives.
Code:
umount /dev/md0p1
umount /dev/md0
Stop the RAID if it is started:
Code:
mdadm --stop /dev/md0p1
mdadm --stop /dev/md0
Make sure by checking
Code:
cat /proc/mdstat
Now annihilate all trace of past attempts on those drives:
(This writes all zeros to the beginning of each scsi drive. Be careful.)
Code:
for i in a b c d; do
dd if=/dev/zero of=/dev/sd$i count=10
done
Now make one BIG partition on each SCSI disk, of type FD:
Code:
for i in a b c d; do
echo ',,FD' | sfdisk /dev/sd$i
done
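Once the disks are repartitioned, the array would be recreated along these lines. This is a sketch, and --create is destructive, so the device names deserve a double-check first:

```shell
# Build a 4-disk RAID5 from the new FD-type partitions, then watch
# the initial sync in /proc/mdstat (requires root; destroys data).
mdadm --create /dev/md0 --level=5 --raid-devices=4 \
    /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
cat /proc/mdstat
```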
Did I read the OP wrong? I thought the problem was only that it was booting without the array assembled, not that he couldn't assemble the array once the system is booted.
He wants the kernel autodetection to work, which is facilitated by partition id FD. But while trying to figure out why it wouldn't work he delved into the black art of RAID partitioning (/dev/md0p1) which now has things confused.
Quakeboy02: you thought upgrading to Etch was unrelated, but some of the most bizarre suggestions turn out to be useful, and in this case it solved my problem. So thumbs up for your suggestion. I've spent the last couple of days installing and configuring Etch (multiple times, to test various builds). With this new version I can build RAIDs to my heart's content. /dev/hda or /dev/sda, it doesn't matter; RAID 0, 1, 5, whatever...
This project started with a download of the most current version of Debian. At the time that was 3.1.5; I'm glad 4.0r0 solved my issues.
Thank You for the suggestion.
Dgar: I'm not leaving you out of this either. Thank you for all your troubleshooting effort; I got some valuable information from the process. You've been helping me from the start of this dilemma. Though, as an engineer, I really would have liked to know exactly why the RAID would not stick on reboot...
Oh well, on a bright note, it's working now!!!
Now on to the main part of the project, "XEN". Do any of you have experience with this topic?
Moderators: this is the best forum, and I'm happy to be a part of it. The number of members is enormous, and topics are constantly posted and answered. Thank you for a great forum.