Are special steps required to re-initialize a RAID0 array created with mdadm?
I'm building a Myth box out of old hardware. I installed the system on an 80GB IDE boot drive and planned to use two 430GB IDE drives in a striped array for storage.
I had Mythbuntu all configured save for designating the storage volume. I opened the terminal and created the striped array (both secondaries on separate IDE channels) with mdadm at /dev/md0. I formatted the newly created array ext3 with mke2fs. I wanted to create a storage folder at the root of the array, and I meant to type "cd /dev/md0" but typed /dev/mdo instead. The terminal locked, and even though I could click on the menu bar, nothing responded. On hard reboot, I got a screen full of errors and no GUI (yes, I should have recorded the errors, but my thought was to start over fresh now that I have the config down). I already deleted the volumes on the boot disk.
Do I have to do anything special to re-initialize the drives from the array? fdisk (on a Crunchbang/Ubuntu live CD) still sees the array as one drive but sees no formatted volumes. I bought half a dozen of these IDE drives on NewEgg a while back; I eventually determined one came DOA, so these may have errors that only presented when I started writing to them, and that may be my problem.
Last edited by CyyberspaceCowboy; 10-17-2010 at 12:06 PM.
Reason: Check notifications
You can use smartctl to run self-checks on your disks to make sure they're functioning correctly. Assuming the disks pass the check, the RAID drives probably still have their RAID superblock information present, the bit of information mdadm uses to determine whether there is a functioning RAID or not. Assuming the drive you're trying to reset is at hda, you can zero the superblock out with:
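For example (this is the command quoted later in the thread; substitute the device name of the disk you actually want to reset):

```shell
# Wipe the md superblock so mdadm no longer sees this disk as a RAID member.
# This destroys the array metadata on the named disk, so double-check the
# device name before running it.
mdadm --zero-superblock /dev/hda
```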
Stephan,
smartctl /dev/md0 says it can't ID the type of device. I've tried -d ata and -d megaraid and both came back "device not found". smartctl -a /dev/sdb and smartctl -a /dev/sdc came back without errors.
I'm afraid I got impatient and jumped the gun before waiting for a reply. Since I'd already had one DOA drive in the half dozen I bought a couple of years ago on NewEgg (not such a great bargain now), I ran the SeaTools boot disk against both drives individually, and one had multiple uncorrectable errors. I put in another identical 430GB drive and started over w/o doing anything to remove the first array. Thanks for giving me mdadm --zero-superblock /dev/hda; I may decide to start over once more.
Before I do that, I think I am missing a step in my configuration and I hope someone can advise me.
What I've done: (presume commands preceded with sudo)
mdadm -C /dev/md0 -l 0 -n 2 /dev/sdb /dev/sdc
#some examples show /dev/sdb1 /dev/sdc1, is that my problem?
#most examples specify a chunk size, I used the default
#No errors on creating the array, so it needs to be formatted
mkfs -t ext4 -c /dev/md0
#Told me the volume was created with no errors, checked for bad sectors because these are refurb drives from NewEgg
#created my desired mountpoint
mkdir /storage
#Added the following to /etc/fstab (I originally used the UUID,
#discovered with blkid, but switched to /dev/md0 when
#volume wasn't auto mounted)
/dev/md0 /storage ext4 rw,auto,user,exec,async 0 1
On reboot, I get "the disk drive for /storage is not ready yet or not present"
On further research, I made a /etc/mdadm.conf containing:
DEVICE /dev/sdb /dev/sdc
ARRAY /dev/md0 devices=/dev/sdb,/dev/sdc
On another reboot I get the same message with an opportunity to drop to terminal and fix the problem
If I try to manually "sudo mount -t ext4 /dev/md0 /storage" I get: "special device /dev/md0 does not exist"
I've seen references to array "activation", but commands like raidstart seem to be deprecated, and I get the impression mdadm was supposed to do it for me. What did I miss?
Briefly, the commands you've used aren't creating the array on partitions, but rather on entire disks. That means you can't set the partition types to Linux RAID, which likely leaves the system confused about exactly what your hard drives contain. Also, smartmontools only checks the SMART status of physical drives; it won't recognize a RAID array, because a RAID array isn't actually a hard drive, but rather a virtual grouping of hard drives. SMART data is only available on each individual drive. I suggest you run the extended SMART scan on the drives before assembling the array (though I suspect it'll take about an hour to run on both drives). The command is:
Code:
smartctl -t long /dev/sdb
and then running
Code:
smartctl -HcA /dev/sdb
on both of the drives. The first command runs an extended self-test; the second spits out both the drive's 'general' self-assessment and the raw data behind that assessment. Pay attention to the RAW_VALUE of the Reallocated_Sector_Ct and Current_Pending_Sector attributes, as these are common indicators of an old or defective hard disk. My personal recommendation is to use the Ubuntu live DVD (I like 10.04 and 10.10), as its 'Disk Utility' offers a nice graphical interface for running these tests, so you don't have to mess with the command line if you don't wish to.
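As a sketch, assuming your two data drives are /dev/sdb and /dev/sdc (an assumption based on your earlier posts), the whole check can be scripted in one go:

```shell
# Start the extended self-test on each drive; the test runs on the drive
# itself in the background, and smartctl prints an estimated completion time.
for d in /dev/sdb /dev/sdc; do
    smartctl -t long "$d"
done

# After the tests have had time to finish, print the overall health verdict
# and just the attributes worth watching on each drive.
for d in /dev/sdb /dev/sdc; do
    echo "=== $d ==="
    smartctl -HcA "$d" | grep -E 'overall-health|Reallocated_Sector_Ct|Current_Pending_Sector'
done
```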
You'll need to start from scratch with the RAID again. Use fdisk (or, my preference, gdisk to create a GUID partition table (GPT), which is more robust and the direction future disks are heading) and then create one partition on each drive. Make sure the partitions are the same size (if the disks are identical in size, you can just create one partition on each drive that spans the entire disk) and assign them the partition type ID of fd (or fd00, depending on whether you use fdisk or gdisk; the 'l' command will print the hex partition types in either case.)
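A minimal sketch of that partitioning step with classic (MBR) fdisk, assuming the drives are /dev/sdb and /dev/sdc (gdisk is interactive in the same spirit, with type code fd00). Double-check the device names before running this, since it rewrites the partition tables:

```shell
# Create one full-disk partition of type fd (Linux RAID autodetect) on each drive.
# The heredoc lines answer fdisk's prompts in order:
#   n (new), p (primary), 1 (partition number),
#   blank line (default first sector), blank line (default last sector),
#   t (change type), fd (Linux RAID autodetect), w (write table and quit).
for d in /dev/sdb /dev/sdc; do
    fdisk "$d" <<EOF
n
p
1


t
fd
w
EOF
done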
Once you've done that, you'll be able to create your new RAID without any trouble. There's a very good tutorial on how to set up your RAID here, including a few thoughts on chunk size: http://tldp.org/HOWTO/Software-RAID-HOWTO-5.html
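Putting the pieces together, here is a hedged sketch of the sequence once the fd-type partitions exist. The device names and the Ubuntu config path /etc/mdadm/mdadm.conf are assumptions; adjust them for your system:

```shell
# 1. Build the RAID0 array on the partitions, not the whole disks.
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb1 /dev/sdc1

# 2. Record the array so it is assembled automatically at boot. On Ubuntu the
#    file lives at /etc/mdadm/mdadm.conf; rebuilding the initramfs afterwards
#    lets early boot assemble /dev/md0 before /etc/fstab is processed.
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u

# 3. Create the filesystem (-c checks for bad blocks, as in your earlier run)
#    and mount it at the desired mountpoint.
mkfs -t ext4 -c /dev/md0
mkdir -p /storage
mount /dev/md0 /storage
```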