
monty 11-18-2004 08:38 AM

Software FireWire 800 RAID array loses formatting on reboot
 
I recently bought three 500GB LaCie external FireWire 800 drives for use as a backup to my org's SAN array. I installed SLES9 on the machine, then installed the RAID card and plugged in the drives. The Partitioner tool under YaST sees the drives and lets me set them up in a RAID 0 array. I have formatted it a few times, using ReiserFS, ext3, and JFS, and every time the format works like a champ. I run mount /dev/md0 /users, then df, and see the array mounted on /users. I can make directories and files and all kinds of cool stuff, and I can mount and unmount it as much as I want. But when I reboot, I get a "bad fs type, no superblock" error on /dev/md0. It's like the array is losing its formatting every time I reboot. I've tried leaving it out of /etc/fstab and listing it only in /etc/raidtab, but that didn't help, except that the PC would boot with it out of /etc/fstab.

Any ideas as to why the array loses its formatting every time I reboot? It's really driving me up the wall, and I really want it to work, because $900 for 1.2TB of storage ain't bad, especially when it can hold ALL of my user/corporate data.
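
For what it's worth, the /etc/raidtab and /etc/fstab entries I was testing with looked roughly like this (retyped from memory, so treat the exact lines as approximate):

Code:

# /etc/raidtab -- roughly what I had
raiddev /dev/md0
        raid-level              0
        nr-raid-disks           3
        chunk-size              64
        persistent-superblock   1      # not 100% sure I had this line
        device                  /dev/sda
        raid-disk               0
        device                  /dev/sdb
        raid-disk               1
        device                  /dev/sdc
        raid-disk               2

# the /etc/fstab line, when I had one
/dev/md0        /users  reiserfs        defaults        0 2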

Here is some output I threw into a text file that may or may not help:

Code:

THIS OUTPUT WAS GENERATED AFTER THE FOLLOWING COMMANDS WERE EXECUTED:
mdadm --build /dev/md0 --chunk=64 --level=0 --raid-devices=3 /dev/sda /dev/sdb /dev/sdc
mkfs.reiserfs /dev/md0
mount /dev/md0 /raid

OUTPUT OF mdadm --examine /dev/md0:

/dev/md0:
          Magic : a92b4efc
        Version : 00.90.00
          UUID : bd7b6b6c:d3bae712:3d56e6ab:7d0a909f
  Creation Time : Thu Nov 18 10:05:50 2004
    Raid Level : raid0
    Device Size : 488396992 (465.77 GiB 500.12 GB)
  Raid Devices : 3
  Total Devices : 3
Preferred Minor : 0

    Update Time : Thu Nov 18 10:05:50 2004
          State : dirty, no-errors
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0
      Checksum : 9519671d - correct
        Events : 0.1

    Chunk Size : 32K

      Number  Major  Minor  RaidDevice State
this    2      8      32        2      active sync  /dev/evms/.nodes/sdc
  0    0      8        0        0      active sync  /dev/evms/.nodes/sda
  1    1      8      16        1      active sync  /dev/evms/.nodes/sdb
  2    2      8      32        2      active sync  /dev/evms/.nodes/sdc

THE DF OUTPUT IS AS FOLLOWS:

Filesystem          1K-blocks      Used Available Use% Mounted on
/dev/hda2            38024624  13291024  24733600  35% /
tmpfs                  257540        8    257532  1% /dev/shm
/dev/md0            1465146448    32840 1465113608  1% /raid

ALL OF THIS OCCURS BEFORE THE REBOOT.

THESE ARE THE COMMANDS AND RESULTS AFTER THE REBOOT:

VO42818:~ # mdadm --examine /dev/md0
mdadm: /dev/md0 is too small for md1

VO42818:~ # mount /dev/md0 /raid
/dev/md0: No such device
mount: /dev/md0: can't read superblock

VO42818:~ # df
Filesystem          1K-blocks      Used Available Use% Mounted on
/dev/hda2            38024624  13291056  24733568  35% /
tmpfs                  257540        8    257532    1% /dev/shm

Thanks!
--Monty

Electro 11-18-2004 06:35 PM

You forgot to make a partition on each of the hard drives. Also, after you make the partitions, you have to reboot. When using FireWire, you have to work out which drive is which on every boot, because the order in which FireWire and USB devices are detected is unpredictable. You can either depend on Linux looking up the serial number of each drive, or add a second small partition with a marker file stored in it, and then write a script that works out which drive is which and sets up the RAID array.
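
If the array has a persistent superblock, an easier way than scripting around serial numbers is to let mdadm find the members by the UUID stored in the superblock, so it does not matter which drive comes up as sda or sdb. A rough sketch (untested; it assumes the array was created with mdadm --create, which writes superblocks, rather than --build, which does not):

Code:

# record the array by UUID once, after creating it
mdadm --examine --scan >> /etc/mdadm.conf

# at boot (or in an init script), assemble by UUID instead of device name
mdadm --assemble --scan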

Do not use JFS, because it slows down as you add files and there is no defragmenter for it; the only way to defragment it is to dump everything onto another 1.2 TB medium and copy it back onto the JFS drive. Only use JFS on small (less than 1 GB) partitions, where that is a lot easier to do. I suggest XFS if you want very high performance. XFS reads and writes very fast because it does most of its work in memory, and it reads large chunks of data in parallel, so if you tune the formatting for your RAID array you can get very fast disk transfers.
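
For example, you can tell mkfs.xfs the stripe geometry of the array so that it aligns allocations to the RAID chunks. A sketch for a three-disk RAID 0 with 64 KB chunks (match the numbers to your own array):

Code:

# su = stripe unit (the md chunk size), sw = number of data disks
mkfs.xfs -d su=64k,sw=3 /dev/md0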

Occupant 11-29-2004 01:38 PM

Quote:

State : dirty, no-errors
I get this state after only a few minutes of running... "dirty, no-errors". I run xfs_check, and that cleans it up for a while, but after a while it returns to a dirty state again. Is this something to worry about?
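
In case it matters, this is roughly the cycle I go through (the device and mount point are just examples):

Code:

umount /raid
xfs_check /dev/md0                          # comes back clean
mount /dev/md0 /raid
# ...a while later...
mdadm --examine /dev/md0 | grep -i state    # "dirty, no-errors" again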

Electro 11-29-2004 11:14 PM

You cannot specify raw block devices. You have to make at least one partition on each of /dev/sda, /dev/sdb, and /dev/sdc before using them. After you have made the partitions, set up the software RAID using /dev/sda1, /dev/sdb1, and /dev/sdc1.
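
Something like this, for example (an untested sketch; double-check the device names against your own system before running it):

Code:

# one partition of type fd (Linux raid autodetect) on each drive
echo ',,fd' | sfdisk /dev/sda
echo ',,fd' | sfdisk /dev/sdb
echo ',,fd' | sfdisk /dev/sdc

# build the array from the partitions; --create writes a persistent
# superblock, unlike --build
mdadm --create /dev/md0 --chunk=64 --level=0 --raid-devices=3 \
        /dev/sda1 /dev/sdb1 /dev/sdc1
mkfs.reiserfs /dev/md0

The persistent superblock is what lets the kernel (with partition type fd) or mdadm put the array back together after a reboot, even if the FireWire drives come up in a different order.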

http://www.ibiblio.org/pub/Linux/doc...AID-0.4x-HOWTO
http://www.ibiblio.org/pub/Linux/doc...are-RAID-HOWTO

