
isync 05-31-2007 02:17 PM

SoftwareRaid1 (mdadm) and LVM setup. Working, but is it ok?
Hi there!

I have setup a two disk software raid on my amd64 machine running ubuntu 6.06.
I have plugged in the two sata disks right out of the box and they were recognized by the system after boot as sdb and sdc.

After that I started building the software raid 1 array:
sudo mdadm --create /dev/md0 -a -l 1 -n 2 /dev/sdb /dev/sdc

then on with lvm:
sudo pvcreate /dev/md0
sudo vgcreate datavg /dev/md0
sudo lvcreate -L 250G -n somelv datavg
sudo mkfs.ext3 /dev/datavg/somelv

After that I mounted the array successfully. But when I call up the Ubuntu disk manager application, it shows no info about the LVM or the md0 disk. Is that normal? sdb and sdc are identified as "no partition or partition not readable"...

And "fdisk -l /dev/md0" for the array says "Disk /dev/md0 doesn't contain a valid partition table". Inspecting the beast with gparted also identifies /dev/sdb and /dev/sdc as unpartitioned (greyed-out) drives. But gparted also lists /dev/mapper/datavg-somelv with the correct size, though with a warning symbol beneath the entry saying I haven't got the right plugin installed.

Now - is my md0/LVM beast alive and safe to use, or did I do something wrong?
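For later readers, a minimal sketch of how to sanity-check each layer of the stack (device and volume-group names are the ones from this thread; the mdstat line below is quoted from a later post in it):

```shell
# On the live box you would run (needs root):
#   cat /proc/mdstat                      # array state
#   sudo mdadm --detail /dev/md0          # per-member detail
#   sudo pvdisplay /dev/md0               # LVM physical volume
#   sudo vgdisplay datavg                 # volume group
#   sudo lvdisplay /dev/datavg/somelv     # logical volume
#
# Note: the array was built on the WHOLE disks (/dev/sdb, /dev/sdc), so
# they carry no partition table -- fdisk/gparted reporting "no valid
# partition table" for the members is expected, not a sign of damage.
#
# The key field in /proc/mdstat is the bracket status:
# "[UU]" = both mirror halves in sync, "[U_]" or "[_U]" = degraded.
mdstat_line="488386496 blocks [2/1] [U_]"   # sample line from this thread
case "$mdstat_line" in
  *"[UU]"*)            echo "array healthy" ;;
  *"[U_]"* | *"[_U]"*) echo "array degraded" ;;
  *)                   echo "array state unknown" ;;
esac
```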

wendea 05-31-2007 03:27 PM

Try testing it: unplug one hard drive while the computer is running and see if the RAID kicks in.

isync 05-31-2007 04:39 PM

Ok, I opened the box and pulled the power plug on drive /dev/sdc.
After that I could continue accessing data. Which I did: I deleted a file on the array (in fact, on the only remaining disk, /dev/sdb).

Then I powered the drive up again and the following commands gave these outputs:
$ cat /proc/mdstat

  Personalities : [raid1]
  md0 : active raid1 sdb[0] sdc[2](F)
      488386496 blocks [2/1] [U_]

  unused devices: <none>

$ sudo mdadm --detail /dev/md0

        Version : 00.90.03
  Creation Time : Thu May 31 14:53:39 2007
      Raid Level : raid1
      Array Size : somedata
      Device Size : somedata
    Raid Devices : 2
    Total Devices : 2
  Preferred Minor : 0
      Persistence : Superblock is persistent

    Update Time : Thu May 31 22:22:13 2007
          State : clean, degraded
 Active Devices : 1
Working Devices : 1
 Failed Devices : 1
  Spare Devices : 0

          UUID : somedata
        Events : 0.81

    Number  Major  Minor  RaidDevice State
      0      8      16        0      active sync  /dev/sdb
      1      0        0        -      removed

      2      8      32        -      faulty  /dev/sdc

Does this seem right?? Is my raid working??

But now, after the test, how do I resync the array? It does not seem to detect by itself that both drives are online again...
$ sudo mdadm --assemble -U resync /dev/md0
says: "mdadm: /dev/md0 not identified in config file."

My config file (about which I learned here) is empty...
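Regarding the empty config file: mdadm can regenerate the ARRAY lines itself. A sketch (the path is the Debian/Ubuntu location; the UUID below is a placeholder, not from this thread):

```shell
# On the live box (needs root):
#   sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
#
# That prints one ARRAY line per array. The shape of such a line,
# with a placeholder UUID, is:
line="ARRAY /dev/md0 level=raid1 num-devices=2 UUID=00000000:00000000:00000000:00000000"
echo "$line"
```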

wendea 05-31-2007 04:41 PM

That looks right. Now all you have to do is run the raid script again:

mkraid -R etc.... and it should be running

isync 05-31-2007 05:14 PM


Originally Posted by wendea
That looks right. Now all you have to do is run the raid script again:

mkraid -R etc.... and it should be running

Easier said than done...
As I said, I'm running Ubuntu, and "mkraid anything" gives me "bash: mkraid: command not found".
And not being a RAID pro, I actually don't know what you mean by "running the raid script etc."...

Can you talk me through?

isync 06-01-2007 05:26 AM

How do I get my raid resync'ed / working again??

How do I run "the raid script" again on ubuntu?

I tried:
$ sudo mdadm --assemble --update=resync /dev/sdc

mdadm: /dev/sdc does not appear to be an md device

Do I need to use the "--force" option (to hook in /dev/sdc as if I'd changed it for a new one)?

Reminds me a bit of the status I described earlier - that gparted still identifies drives /dev/sdb and /dev/sdc as unpartitioned...

iseeuu 07-16-2008 09:02 PM


Originally Posted by isync (Post 2770530)
$ sudo mdadm --assemble --update=resync /dev/sdc
mdadm: /dev/sdc does not appear to be an md device

By posting here, I am telling on myself. I hope this will save someone else the hours of frustration I endured. What is most embarrassing is that this is the second time I've had to figure this out:

The simple solution to the mdadm error above ("does not appear to be an md device") is that between --update=resync and /dev/sdc the md device itself, i.e. /dev/md0 or /dev/md1 etc., is missing. Since /dev/sdc is not an md device, mdadm returns the error.
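In other words (a sketch, with the device names from this thread): mdadm's complaint is literally about argument order - the first device after the options must be the array, not a member disk:

```shell
# Wrong (what was typed) -- the member disk sits where the array should be:
#   mdadm --assemble --update=resync /dev/sdc
# Right -- array first, then its member devices (array must be stopped first):
#   mdadm --stop /dev/md0
#   mdadm --assemble --update=resync /dev/md0 /dev/sdb /dev/sdc
#
# The check mdadm is making is simple: is the first device argument
# an md array node at all?
target=/dev/sdc
case "$target" in
  /dev/md*) echo "$target: md array device, assemble can proceed" ;;
  *)        echo "$target does not appear to be an md device" ;;
esac
```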



bobpaul 05-17-2011 09:48 PM

You need to give mdadm a RAID device when using --manage. So:

mdadm --manage /dev/md0 --add /dev/sdc

I did the same thing for about 15 minutes here. "mdadm --manage --help" doesn't indicate this is the case. Of course, it's in "man mdadm".
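To round the thread out, a hedged sketch of the full recovery sequence for a mirror whose member was marked faulty (device names from this thread; run as root on the live box):

```shell
# On the live box (needs root):
#   mdadm --manage /dev/md0 --remove /dev/sdc   # drop the faulty member
#   mdadm --manage /dev/md0 --add /dev/sdc      # re-add it; resync starts
#   watch cat /proc/mdstat                      # watch the resync progress
#
# The point iseeuu and bobpaul both make: --manage operates on the ARRAY
# (/dev/md0); member disks are arguments to --add/--remove/--fail.
# Print the command for review before running it:
array=/dev/md0
member=/dev/sdc
echo "mdadm --manage $array --add $member"
```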
