Old 10-24-2008, 10:52 AM   #1
tkmbe
Member
 
Registered: Jun 2005
Location: Obera, Argentina
Distribution: OpenSuse 10.3, Debian 4.0, Debian 3.1
Posts: 36

Rep: Reputation: 15
mdadm --detail XXX gives strange output for raid1 (double the disk space it should be)


Hello,

I have a problem with RAID1.
I have 4 RAID partitions where I have rearranged the disk partitions:
I removed the RAID config, arrays, etc. of 2 of the arrays, repartitioned the beginning of the disk to rearrange it, and created the 2 arrays again with YaST on openSUSE 10.3.
Now one array (md0) shows me a Used Dev Size of 18.63 GB but an Array Size of 9.31 GB,
even though I use RAID1, not RAID0, and the fs details of the "normal" partitions show the right sizes (9.31 GB) ...
I have already re-created and re-formatted md0, but the problem stays the same.
When /dev/md0 is mounted it shows the right space, but I really don't trust this.
And this will be my new / filesystem, so I want to be sure that it is right.
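
To double-check, this is roughly how I would compare the raw partition size with what the array and the mounted filesystem report (just a sketch using my device names; it assumes the standard blockdev and df tools):

Code:
# raw size of one underlying partition, in bytes
blockdev --getsize64 /dev/sda7

# raw size of the assembled array, in bytes; for raid1 this should be
# nearly the same as one partition (minus the superblock), not half of it
blockdev --getsize64 /dev/md0

# size the mounted filesystem itself reports
df -h | grep md0
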
BTW: all the RAID arrays were originally created with SUSE 9.0; these 2 new ones with openSUSE 10.3 ...
The superblock version from mdadm --detail /dev/mdX says:
old ones: 00.90.03
new ones: 01.00.03
What could this be? Any ideas?
What can I do?
Here is the output of the bad and the good RAID arrays ...

Greetings

Beat

Info of the newly created /dev/md0 (bad one)
Code:
###### /dev/md0 infos

gw1:~ # mdadm --detail /dev/md0
/dev/md0:
        Version : 01.00.03
  Creation Time : Fri Oct 24 12:26:34 2008
     Raid Level : raid1
     Array Size : 9767416 (9.31 GiB 10.00 GB)
  Used Dev Size : 19534832 (18.63 GiB 20.00 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 0
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Fri Oct 24 12:29:52 2008
          State : active
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           Name : 0
           UUID : fc1cbc64:d9dec569:023063e5:e847ffbc
         Events : 5

    Number   Major   Minor   RaidDevice State
       0       8        7        0      active sync   /dev/sda7
       1       8       23        1      active sync   /dev/sdb7

###### /dev/sda7 infos

gw1:~ # mdadm -E /dev/sda7
/dev/sda7:
          Magic : a92b4efc
        Version : 01
    Feature Map : 0x1
     Array UUID : fc1cbc64:d9dec569:023063e5:e847ffbc
           Name : 0
  Creation Time : Fri Oct 24 12:26:34 2008
     Raid Level : raid1
   Raid Devices : 2

  Used Dev Size : 19534832 (9.31 GiB 10.00 GB)
     Array Size : 19534832 (9.31 GiB 10.00 GB)
   Super Offset : 19534960 sectors
          State : active
    Device UUID : eacbc428:29e67ecb:d0151ce3:59e1d742

Internal Bitmap : -76 sectors from superblock
    Update Time : Fri Oct 24 12:29:52 2008
       Checksum : c9ca60b5 - correct
         Events : 5


     Array Slot : 0 (0, 1)
    Array State : Uu

###### /dev/sdb7 infos

gw1:~ # mdadm -E /dev/sdb7
/dev/sdb7:
          Magic : a92b4efc
        Version : 01
    Feature Map : 0x1
     Array UUID : fc1cbc64:d9dec569:023063e5:e847ffbc
           Name : 0
  Creation Time : Fri Oct 24 12:26:34 2008
     Raid Level : raid1
   Raid Devices : 2

  Used Dev Size : 19534832 (9.31 GiB 10.00 GB)
     Array Size : 19534832 (9.31 GiB 10.00 GB)
   Super Offset : 19534960 sectors
          State : active
    Device UUID : a2058e71:10517059:f5a082bd:ae00a2f0

Internal Bitmap : -76 sectors from superblock
    Update Time : Fri Oct 24 12:29:52 2008
       Checksum : 28b5afd0 - correct
         Events : 5


     Array Slot : 1 (0, 1)
    Array State : uU
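One thing I notice when comparing the two commands: --detail and -E print the same raw number 19534832 for md0, but the human-readable sizes differ by exactly a factor of 2. Redoing the math myself (only my guess that one tool counts 512-byte sectors and the other 1 KiB blocks):

Code:
gw1:~ # echo $(( 19534832 * 512 ))        # as 512-byte sectors
10001833984                               # ~9.31 GiB / 10.00 GB = the Array Size
gw1:~ # echo $(( 19534832 * 1024 ))       # as 1 KiB blocks
20003667968                               # ~18.63 GiB / 20.00 GB = the strange Used Dev Size

So maybe --detail just prints the sector count from the version 1.0 superblock as if it were KiB blocks, and the data on the array is fine? I am not sure.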

Info of the newly created /dev/md4 (good one)
Code:
###### /dev/md4 infos

gw1:~ # mdadm --detail /dev/md4
/dev/md4:
        Version : 01.00.03
  Creation Time : Thu Oct 23 22:44:24 2008
     Raid Level : raid1
     Array Size : 2522160 (2.41 GiB 2.58 GB)
  Used Dev Size : 2522160 (2.41 GiB 2.58 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 4
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Fri Oct 24 13:32:15 2008
          State : active
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           Name : 4
           UUID : 79a7c53d:84384732:36ca18e2:252bdbc6
         Events : 8

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       8       17        1      active sync   /dev/sdb1

###### /dev/sda1 infos

gw1:~ # mdadm -E /dev/sda1
/dev/sda1:
          Magic : a92b4efc
        Version : 01
    Feature Map : 0x1
     Array UUID : 79a7c53d:84384732:36ca18e2:252bdbc6
           Name : 4
  Creation Time : Thu Oct 23 22:44:24 2008
     Raid Level : raid1
   Raid Devices : 2

  Used Dev Size : 5044320 (2.41 GiB 2.58 GB)
     Array Size : 5044320 (2.41 GiB 2.58 GB)
   Super Offset : 5044328 sectors
          State : clean
    Device UUID : f3b72005:ab1757da:a0d01962:c18f55d3

Internal Bitmap : 2 sectors from superblock
    Update Time : Fri Oct 24 13:32:15 2008
       Checksum : 69fe442a - correct
         Events : 8


    Array Slot : 0 (0, 1)
   Array State : Uu

###### /dev/sdb1 infos

gw1:~ # mdadm -E /dev/sdb1
/dev/sdb1:
          Magic : a92b4efc
        Version : 01
    Feature Map : 0x1
     Array UUID : 79a7c53d:84384732:36ca18e2:252bdbc6
           Name : 4
  Creation Time : Thu Oct 23 22:44:24 2008
     Raid Level : raid1
   Raid Devices : 2

  Used Dev Size : 5044320 (2.41 GiB 2.58 GB)
     Array Size : 5044320 (2.41 GiB 2.58 GB)
   Super Offset : 5044328 sectors
          State : clean
    Device UUID : 477b8328:dcfd23d2:d9067891:795ba974

Internal Bitmap : 2 sectors from superblock
    Update Time : Fri Oct 24 13:32:15 2008
       Checksum : 55dfefa1 - correct
         Events : 8


    Array Slot : 1 (0, 1)
   Array State : uU

Output of /proc/mdstat
Code:
gw1:~ #  cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sda7[0] sdb7[1]
      9767416 blocks super 1.0 [2/2] [UU]
      bitmap: 0/150 pages [0KB], 32KB chunk

md4 : active raid1 sda1[0] sdb1[1]
      2522160 blocks super 1.0 [2/2] [UU]
      bitmap: 0/10 pages [0KB], 128KB chunk

md3 : active raid1 sdb10[1] sda10[0]
      6144704 blocks [2/2] [UU]

md2 : active raid1 sdb9[1] sda9[0]
      6144704 blocks [2/2] [UU]

md1 : active raid1 sdb8[1] sda8[0]
      6144704 blocks [2/2] [UU]

unused devices: <none>
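Before I really switch / over to md0, I also want to check the filesystem size against the array size (assuming an ext2/ext3 filesystem on /dev/md0; for another fs type one would need the matching tool):

Code:
# block count and block size straight from the filesystem superblock
gw1:~ # dumpe2fs -h /dev/md0 | grep -E 'Block count|Block size'
# Block count * Block size should come out near the Array Size
# (9767416 KiB, ~10 GB), not near the doubled Used Dev Size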

Last edited by tkmbe; 10-24-2008 at 11:09 AM. Reason: Added more infos