Software FireWire 800 RAID array loses formatting on reboot
I recently bought three 500 GB LaCie external FireWire 800 drives to use as a backup for my org's SAN array. I installed SLES 9 on the machine, then installed the RAID card and plugged in the drives. The Partitioner tool under YaST sees the drives and lets me set them up in a RAID 0 array. I've formatted it a few times, using ReiserFS, ext3, and JFS, and every time the format works like a champ. I run mount /dev/md0 /users, and df shows the array mounted on /users. I can make directories and files and all kinds of cool stuff, and I can mount and unmount it as much as I want. But when I reboot, I get a "bad fs type, no superblock" error on /dev/md0. It's as if the array loses its formatting every time I reboot. I've tried leaving it out of /etc/fstab and listing it only in /etc/raidtab, but that didn't help, other than letting the PC boot with it out of /etc/fstab.
Any ideas as to why the array loses its formatting every time I reboot? It's really driving me up the wall, and I really want it to work, because $900 for 1.2 TB of storage isn't bad, especially when it can hold ALL of my user/corporate data. Here is some output I threw into a text file that may or may not help: Code:
THIS OUTPUT WAS GENERATED AFTER THE FOLLOWING COMMANDS WERE EXECUTED: --Monty
You forgot to make a partition on each of the hard drives. Also, after you make the partitions, you have to reboot. With FireWire you also need to know which drive is which on every boot, because the order in which FireWire and USB devices are set up is unpredictable. You can either rely on Linux looking up the serial number of each drive, or add a small second partition with a marker file in it, then write a script that figures out which drive is which and sets up the RAID array.
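A rough sketch of the serial-number approach, assuming your udev setup provides persistent symlinks under /dev/disk/by-id (the ieee1394 path below is a made-up example; check what your own system actually creates):

```shell
# List the persistent, serial-number-based names udev created for your disks.
# These symlinks stay the same across boots even when /dev/sdX order changes.
ls -l /dev/disk/by-id/

# Resolve a stable by-id name to whatever /dev/sdX it happens to be today.
# The ieee1394 identifier here is a hypothetical example, not a real device.
readlink -f /dev/disk/by-id/ieee1394-example-serial-part1
```

Building the array from the by-id paths (or from whatever readlink reports) sidesteps the unpredictable sda/sdb/sdc ordering entirely.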
Do not use JFS: it slows down as you add files, and it does not defragment. The only way to defrag it is to dump everything onto another 1.2 TB medium and copy it back onto the JFS drive. Only use JFS on small (less than 1 GB) partitions, where it is a lot easier to work with. I suggest XFS if you want very high performance. XFS reads and writes very fast because it does most of its work in memory, and it reads large chunks of data in parallel, so if you optimize the formatting for your RAID array you can get very fast disk transfers.
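For the "optimize the formatting for your RAID array" part, a minimal sketch with mkfs.xfs, assuming a 64 KiB RAID chunk size across the three drives (substitute your actual chunk size):

```shell
# Tell XFS the stripe geometry so allocations align with the RAID 0 array.
# sunit is given in 512-byte sectors: 64 KiB chunk = 64*1024/512 = 128.
# swidth is sunit times the number of data drives: 128 * 3 = 384.
mkfs.xfs -d sunit=128,swidth=384 /dev/md0
```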
You cannot specify raw block devices. You have to make at least one partition on /dev/sda, /dev/sdb, and /dev/sdc before using them. Once you've made a partition on each drive, set up the software RAID using /dev/sda1, /dev/sdb1, and /dev/sdc1.
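A sketch of that fix from the command line, assuming the drives really are sda/sdb/sdc (confirm on your own system first; this destroys existing data). It uses mdadm; if you're sticking with raidtools, list the same three partitions as raid-disk entries in /etc/raidtab instead:

```shell
# Create one full-size partition per drive, type "fd" (Linux raid autodetect),
# so the kernel can recognize the array members at boot.
for d in /dev/sda /dev/sdb /dev/sdc; do
    printf 'n\np\n1\n\n\nt\nfd\nw\n' | fdisk "$d"
done

# Build the RAID 0 array from the partitions, not the raw disks.
mdadm --create /dev/md0 --level=0 --raid-devices=3 \
      /dev/sda1 /dev/sdb1 /dev/sdc1

# Then format and mount as before, e.g.:
mkfs.ext3 /dev/md0
```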
http://www.ibiblio.org/pub/Linux/doc...AID-0.4x-HOWTO http://www.ibiblio.org/pub/Linux/doc...are-RAID-HOWTO