Old 04-23-2012, 07:55 AM   #1
mauromol
Member
 
Registered: Apr 2012
Location: Italy
Distribution: Linux Mint KDE, Debian
Posts: 35

Rep: Reputation: Disabled
Help with mdadm to build a RAID 1 array on an ARM NAS


Hello, I'm new to LQ, I hope this is the right section to post in.

I have an ARM NAS with Linux. It is a Western Digital MyBook World, ARM EABI.
This Linux box has a SATA disk and a USB port.
I connected a new hard disk to the USB port and I want to set up a RAID 1 array on one of its partitions. In the past, when I was using another (older) disk, I had no problem doing this. However, this is the error message I now receive:

Code:
~ # mdadm -Cv /dev/md5 -l1 -n2 /dev/sdb4 missing
mdadm: size set to 1345732544K
mdadm: ADD_NEW_DISK for /dev/sdb4 failed: Invalid argument
The error message doesn't say what's actually wrong, and I don't know how to diagnose the real problem. I'm not a Linux guru and I know very little about Linux RAID, although I have some general Linux experience.
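The only extra diagnostic I can think of is to look at the kernel log right after the failing command, but I don't know what to look for there:

Code:
~ # dmesg | tail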

I searched the web a lot: I found many pages about similar "Invalid argument" problems, but none of them seems to match my problem.

Can anyone help with this? Is this the right forum section to post in?

Thanks a lot in advance,
Mauro.
 
Old 04-23-2012, 01:00 PM   #2
whizje
Member
 
Registered: Sep 2008
Location: The Netherlands
Distribution: Slackware64 current
Posts: 594

Rep: Reputation: 141
Could you post the result of
Code:
fdisk -l
 
Old 04-23-2012, 02:08 PM   #3
mauromol
Member
 
Registered: Apr 2012
Location: Italy
Distribution: Linux Mint KDE, Debian
Posts: 35

Original Poster
Rep: Reputation: Disabled
Hi!
Here it is:

Code:
~ # fdisk -l

Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot    Start       End    Blocks   Id  System
/dev/sda1               5         249     1959936   fd  Linux raid autodetect
Partition 1 does not end on cylinder boundary.
/dev/sda2             249         280      256992   fd  Linux raid autodetect
Partition 2 does not end on cylinder boundary.
/dev/sda3             280         403      988000   fd  Linux raid autodetect
Partition 3 does not end on cylinder boundary.
/dev/sda4             403      121601   973522944+  fd  Linux raid autodetect
Note: sector size is 4096 (not 512)

Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 15200 cylinders
Units = cylinders of 16065 * 4096 = 65802240 bytes

   Device Boot    Start       End    Blocks   Id  System
/dev/sdb1               5         249    15679488   fd  Linux raid autodetect
Partition 1 does not end on cylinder boundary.
/dev/sdb2             249         280     2055936   fd  Linux raid autodetect
Partition 2 does not end on cylinder boundary.
/dev/sdb3             280         403     7904000   fd  Linux raid autodetect
Partition 3 does not end on cylinder boundary.
/dev/sdb4             403      121601  7788183556   fd  Linux raid autodetect
 
Old 04-23-2012, 02:57 PM   #4
whizje
Member
 
Registered: Sep 2008
Location: The Netherlands
Distribution: Slackware64 current
Posts: 594

Rep: Reputation: 141
The problem may be related to the fact that /dev/sdb has 4k sectors. Could you also post the result of
Code:
mdadm --examine /dev/sd*
and
mdadm -Q --detail /dev/md*
But it could also be that mdadm doesn't like that the partitions do not end on a cylinder boundary.
Which version of mdadm are you using?
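You can see the version and the current state of the arrays with:
Code:
mdadm -V
cat /proc/mdstat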
 
Old 04-23-2012, 05:09 PM   #5
mauromol
Member
 
Registered: Apr 2012
Location: Italy
Distribution: Linux Mint KDE, Debian
Posts: 35

Original Poster
Rep: Reputation: Disabled
Quote:
Originally Posted by whizje
The problem may be related to the fact that /dev/sdb has 4k sectors. Could you also post the result of
Code:
mdadm --examine /dev/sd*
and
mdadm -Q --detail /dev/md*
But it could also be that mdadm doesn't like that the partitions do not end on a cylinder boundary.
Which version of mdadm are you using?
Code:
~ # mdadm --examine /dev/sd*
mdadm: No md superblock detected on /dev/sda.
/dev/sda1:
          Magic : a92b4efc
        Version : 00.90.00
           UUID : c191d203:2af0915d:f5e4f042:1d2bf8c3
  Creation Time : Wed Jul 15 07:23:55 2009
     Raid Level : raid1
    Device Size : 1959872 (1914.26 MiB 2006.91 MB)
     Array Size : 1959872 (1914.26 MiB 2006.91 MB)
   Raid Devices : 2
  Total Devices : 1
Preferred Minor : 0

    Update Time : Tue Apr 24 00:00:38 2012
          State : clean
 Active Devices : 1
Working Devices : 1
 Failed Devices : 1
  Spare Devices : 0
       Checksum : 43675f04 - correct
         Events : 0.4968754


      Number   Major   Minor   RaidDevice State
this     0       8        1        0      active sync   /dev/sda1

   0     0       8        1        0      active sync   /dev/sda1
   1     1       0        0        1      faulty removed
/dev/sda2:
          Magic : a92b4efc
        Version : 00.90.00
           UUID : 2790a3a1:98951451:c3780e6f:b2d255ac
  Creation Time : Wed Jul 15 07:23:56 2009
     Raid Level : raid1
    Device Size : 256896 (250.92 MiB 263.06 MB)
     Array Size : 256896 (250.92 MiB 263.06 MB)
   Raid Devices : 2
  Total Devices : 1
Preferred Minor : 1

    Update Time : Mon Apr 23 13:39:56 2012
          State : clean
 Active Devices : 1
Working Devices : 1
 Failed Devices : 1
  Spare Devices : 0
       Checksum : 799277ae - correct
         Events : 0.15502


      Number   Major   Minor   RaidDevice State
this     0       8        2        0      active sync   /dev/sda2

   0     0       8        2        0      active sync   /dev/sda2
   1     1       0        0        1      faulty removed
/dev/sda3:
          Magic : a92b4efc
        Version : 00.90.00
           UUID : d8fcca71:7bb5075e:90c5591c:1f1405cf
  Creation Time : Wed Jul 15 07:23:56 2009
     Raid Level : raid1
    Device Size : 987904 (964.91 MiB 1011.61 MB)
     Array Size : 987904 (964.91 MiB 1011.61 MB)
   Raid Devices : 2
  Total Devices : 1
Preferred Minor : 3

    Update Time : Mon Apr 23 23:10:58 2012
          State : clean
 Active Devices : 1
Working Devices : 1
 Failed Devices : 1
  Spare Devices : 0
       Checksum : 47bbae89 - correct
         Events : 0.95992


      Number   Major   Minor   RaidDevice State
this     0       8        3        0      active sync   /dev/sda3

   0     0       8        3        0      active sync   /dev/sda3
   1     1       0        0        1      faulty removed
/dev/sda4:
          Magic : a92b4efc
        Version : 00.90.00
           UUID : 5c8a2d73:549db910:75460f62:9d9fa32a
  Creation Time : Mon Jan 11 14:34:29 2010
     Raid Level : raid1
    Device Size : 973522880 (928.42 GiB 996.89 GB)
     Array Size : 973522880 (928.42 GiB 996.89 GB)
   Raid Devices : 2
  Total Devices : 1
Preferred Minor : 2

    Update Time : Tue Apr 24 00:00:56 2012
          State : clean
 Active Devices : 1
Working Devices : 1
 Failed Devices : 1
  Spare Devices : 0
       Checksum : 4357d22e - correct
         Events : 0.10195944


      Number   Major   Minor   RaidDevice State
this     0       8        4        0      active sync   /dev/sda4

   0     0       8        4        0      active sync   /dev/sda4
   1     1       0        0        1      faulty removed
mdadm: cannot open /dev/sda5: No such device or address
mdadm: cannot open /dev/sda6: No such device or address
mdadm: cannot open /dev/sda7: No such device or address
mdadm: cannot open /dev/sda8: No such device or address
mdadm: No md superblock detected on /dev/sdb.
mdadm: No md superblock detected on /dev/sdb1.
mdadm: No md superblock detected on /dev/sdb2.
mdadm: No md superblock detected on /dev/sdb3.
mdadm: No md superblock detected on /dev/sdb4.
mdadm: cannot open /dev/sdb5: No such device or address
mdadm: cannot open /dev/sdb6: No such device or address
[... and so on...]
and

Code:
~ # mdadm -Q --detail /dev/md*
/dev/md0:
        Version : 00.90.03
  Creation Time : Wed Jul 15 07:23:55 2009
     Raid Level : raid1
     Array Size : 1959872 (1914.26 MiB 2006.91 MB)
    Device Size : 1959872 (1914.26 MiB 2006.91 MB)
   Raid Devices : 2
  Total Devices : 1
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Tue Apr 24 00:00:38 2012
          State : clean, degraded
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

           UUID : c191d203:2af0915d:f5e4f042:1d2bf8c3
         Events : 0.4968754

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       0        0        1      removed
/dev/md1:
        Version : 00.90.03
  Creation Time : Wed Jul 15 07:23:56 2009
     Raid Level : raid1
     Array Size : 256896 (250.92 MiB 263.06 MB)
    Device Size : 256896 (250.92 MiB 263.06 MB)
   Raid Devices : 2
  Total Devices : 1
Preferred Minor : 1
    Persistence : Superblock is persistent

    Update Time : Mon Apr 23 13:39:56 2012
          State : clean, degraded
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

           UUID : 2790a3a1:98951451:c3780e6f:b2d255ac
         Events : 0.15502

    Number   Major   Minor   RaidDevice State
       0       8        2        0      active sync   /dev/sda2
       1       0        0        1      removed
mdadm: md device /dev/md10 does not appear to be active.
mdadm: md device /dev/md11 does not appear to be active.
mdadm: md device /dev/md12 does not appear to be active.
mdadm: md device /dev/md13 does not appear to be active.
mdadm: md device /dev/md14 does not appear to be active.
mdadm: md device /dev/md15 does not appear to be active.
mdadm: md device /dev/md16 does not appear to be active.
mdadm: md device /dev/md17 does not appear to be active.
mdadm: md device /dev/md18 does not appear to be active.
mdadm: md device /dev/md19 does not appear to be active.
/dev/md2:
        Version : 00.90.03
  Creation Time : Mon Jan 11 14:34:29 2010
     Raid Level : raid1
     Array Size : 973522880 (928.42 GiB 996.89 GB)
    Device Size : 973522880 (928.42 GiB 996.89 GB)
   Raid Devices : 2
  Total Devices : 1
Preferred Minor : 2
    Persistence : Superblock is persistent

    Update Time : Tue Apr 24 00:02:27 2012
          State : clean, degraded
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

           UUID : 5c8a2d73:549db910:75460f62:9d9fa32a
         Events : 0.10195950

    Number   Major   Minor   RaidDevice State
       0       8        4        0      active sync   /dev/sda4
       1       0        0        1      removed
mdadm: md device /dev/md20 does not appear to be active.
mdadm: md device /dev/md21 does not appear to be active.
mdadm: md device /dev/md22 does not appear to be active.
mdadm: md device /dev/md23 does not appear to be active.
mdadm: md device /dev/md24 does not appear to be active.
mdadm: md device /dev/md25 does not appear to be active.
mdadm: md device /dev/md26 does not appear to be active.
mdadm: md device /dev/md27 does not appear to be active.
mdadm: md device /dev/md28 does not appear to be active.
mdadm: md device /dev/md29 does not appear to be active.
/dev/md3:
        Version : 00.90.03
  Creation Time : Wed Jul 15 07:23:56 2009
     Raid Level : raid1
     Array Size : 987904 (964.91 MiB 1011.61 MB)
    Device Size : 987904 (964.91 MiB 1011.61 MB)
   Raid Devices : 2
  Total Devices : 1
Preferred Minor : 3
    Persistence : Superblock is persistent

    Update Time : Mon Apr 23 23:10:58 2012
          State : clean, degraded
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

           UUID : d8fcca71:7bb5075e:90c5591c:1f1405cf
         Events : 0.95992

    Number   Major   Minor   RaidDevice State
       0       8        3        0      active sync   /dev/sda3
       1       0        0        1      removed
mdadm: md device /dev/md4 does not appear to be active.
mdadm: md device /dev/md5 does not appear to be active.
mdadm: md device /dev/md6 does not appear to be active.
mdadm: md device /dev/md7 does not appear to be active.
mdadm: md device /dev/md8 does not appear to be active.
mdadm: md device /dev/md9 does not appear to be active.
Regarding the mdadm version:

Code:
~ # mdadm -V
mdadm - v2.4 - 30 March 2006
However, I don't think the problem is related to cylinder boundaries, because the arrays are working fine with the /dev/sda* partitions and were working fine with the old drive. The partitions were created with the same boundaries, because /dev/sdb is meant to be a clone of /dev/sda.

The new disk is in fact a Western Digital Caviar Green with Advanced Format, so maybe the problem is related to the 4k sectors. By the way, the 4k sector size was chosen automatically by the OS when partitioning the drive.

So, does this mean that Linux RAID does not support 4k-sector drives? Or is it a limitation of my kernel version?

Code:
~ # uname -a
Linux MyBookWorld 2.6.24.4 #1 Tue Feb 10 11:00:22 GMT 2009 armv5tejl unknown
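In case it's useful, I believe the sector size the kernel sees can also be read from sysfs (I'm not sure these files exist on such an old kernel, so this is just a guess):

Code:
~ # cat /sys/block/sda/queue/hw_sector_size
~ # cat /sys/block/sdb/queue/hw_sector_size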
Thanks for your help!
 
Old 04-23-2012, 07:58 PM   #6
whizje
Member
 
Registered: Sep 2008
Location: The Netherlands
Distribution: Slackware64 current
Posts: 594

Rep: Reputation: 141
The partition table for /dev/sdb doesn't seem right: the disk has 15200 cylinders, yet the last partition ends on cylinder 121601. Can you switch fdisk to sector units with u? Then you might be able to create partitions of the same size as those on /dev/sda.
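For example, you can start fdisk directly in sector mode (or press u inside fdisk to toggle the display units):
Code:
~ # fdisk -u /dev/sdb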
 
1 member found this post helpful.
Old 04-24-2012, 12:18 AM   #7
mauromol
Member
 
Registered: Apr 2012
Location: Italy
Distribution: Linux Mint KDE, Debian
Posts: 35

Original Poster
Rep: Reputation: Disabled
Quote:
Originally Posted by whizje
The partition table for /dev/sdb doesn't seem right: the disk has 15200 cylinders, yet the last partition ends on cylinder 121601. Can you switch fdisk to sector units with u? Then you might be able to create partitions of the same size as those on /dev/sda.
I understand the problem, but I'm not sure I clearly understood the solution.

First of all, if I type fdisk -ul to see sizes in sectors instead of cylinders I see:

Code:
~ # fdisk -ul

Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
Units = sectors of 1 * 512 = 512 bytes

   Device Boot    Start       End    Blocks   Id  System
/dev/sda1           64320     3984191     1959936   fd  Linux raid autodetect
Partition 1 does not end on cylinder boundary.
/dev/sda2         3984192     4498175      256992   fd  Linux raid autodetect
Partition 2 does not end on cylinder boundary.
/dev/sda3         4498176     6474175      988000   fd  Linux raid autodetect
Partition 3 does not end on cylinder boundary.
/dev/sda4         6474176  1953520064   973522944+  fd  Linux raid autodetect
Note: sector size is 4096 (not 512)

Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 15200 cylinders, total 244190646 sectors
Units = sectors of 1 * 4096 = 4096 bytes

   Device Boot    Start       End    Blocks   Id  System
/dev/sdb1           64320     3984191    15679488   fd  Linux raid autodetect
Partition 1 does not end on cylinder boundary.
/dev/sdb2         3984192     4498175     2055936   fd  Linux raid autodetect
Partition 2 does not end on cylinder boundary.
/dev/sdb3         4498176     6474175     7904000   fd  Linux raid autodetect
Partition 3 does not end on cylinder boundary.
/dev/sdb4         6474176  1953520064  7788183556   fd  Linux raid autodetect
Once again, it says I have 244,190,646 sectors but the last partition ends on sector 1,953,520,064. This doesn't surprise me too much, since the partitions were created by a tool that tries to replicate the partition table of /dev/sda on /dev/sdb, and this tool doesn't seem to take the different sector sizes into account. However, I don't know how to proceed to delete all the partitions and recreate them so that they are exactly the same size as the /dev/sda partitions.

More precisely, I'm pretty sure that /dev/sdb1, /dev/sdb2 and /dev/sdb3 must be equal to /dev/sda1, /dev/sda2 and /dev/sda3 respectively, since I know they will be used in pairs to create RAID-1 arrays that replicate the contents of /dev/sda* onto /dev/sdb*. The last partitions, /dev/sda4 and /dev/sdb4, won't be put together in a RAID array, since they only contain user data, which is replicated by other means. In fact, I previously used a 120 GB hard disk as /dev/sdb (that is, a disk smaller than /dev/sda) and everything worked perfectly: the first three partitions were the same as on /dev/sda, but the last one was obviously much smaller than /dev/sda4.
The reason why the tool I mentioned needs to create a RAID-1 array on /dev/sdb4 with a missing device is not perfectly clear to me, but I think it's just a way to replicate the system state of the first disk, so that the second disk can completely replace the first one in case of failure, even if /dev/sdb4 is actually sized differently from /dev/sda4.

So, could you give me some hints on how I should proceed to recreate the /dev/sdb partitions correctly?

I appreciate your help very much!
Mauro.

Last edited by mauromol; 04-24-2012 at 12:32 AM.
 
Old 04-24-2012, 02:30 AM   #8
whizje
Member
 
Registered: Sep 2008
Location: The Netherlands
Distribution: Slackware64 current
Posts: 594

Rep: Reputation: 141
With fdisk started in sector mode for /dev/sdb, you can delete a partition with d followed by its number (for sdb1 that is 1). Repeat that for partitions 2 to 4. As you can see, a unit of /dev/sdb is 8 times as big as a unit of /dev/sda (4096 vs 512 bytes), so the number of units occupied by each partition needs to be 8 times as small, and the starting sector also needs to be 8 times as small. Type n for a new partition; it asks for a start sector: 64320 divided by 8 is 8040, so the start sector is 8040. Then it asks for the last sector or a size: the size of sda1 is 1959936 blocks, divided by 8 that is 244.992 blocks, and a block is 2 sectors, so multiply by 2 to get the size in sectors: 489.984. Type +489983 to give the size. Then press t to set the type of partition 1 to fd, and check with p that the partition is what you want. Repeat for partitions 2 to 4. The dots in the table below are thousands separators for clarity only; do not type them.
Code:
So /dev/sdb becomes
start   end         blocks         (size in sectors)
8.040    498.023     244.992       489.984
498.024  562.271     32.124        64.248
562.272  809.271     123.500       247.000
809.272  244.190.007 121.690.368   243.380.736
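Roughly, the keystrokes for the first partition would be as follows (a sketch; the comments are just annotations, don't type them, and the prompts of your fdisk may differ):
Code:
~ # fdisk -u /dev/sdb
d          # delete a partition, then give the partition number (1)
n          # new primary partition, number 1
8040       # first sector
+489983    # last sector, given as a relative size
t          # change the type of partition 1 ...
fd         # ... to Linux raid autodetect
p          # print the table and check it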
 
1 member found this post helpful.
Old 04-24-2012, 02:32 AM   #9
whizje
Member
 
Registered: Sep 2008
Location: The Netherlands
Distribution: Slackware64 current
Posts: 594

Rep: Reputation: 141
If you have added all the partitions and they are what you want, type w to write the changes to the partition table. Otherwise you can start all over again ;-).
 
Old 04-27-2012, 08:42 AM   #10
mauromol
Member
 
Registered: Apr 2012
Location: Italy
Distribution: Linux Mint KDE, Debian
Posts: 35

Original Poster
Rep: Reputation: Disabled
Hi whizje,
thank you very much again for your help. I will certainly do what you suggest.
However, I have a more difficult (I think) question now.

Some background. Western Digital My Book World Edition is sold in two versions:
- the 1 disk version
- the 2 disk version, which uses two disks in RAID configuration
The 1-disk version (the one I have) is actually, at the OS level, a 2-disk version with one disk missing. This means that, as you can see from my mdadm output, it's using RAID arrays /dev/md0, /dev/md1, /dev/md2 and /dev/md3 with just one partition each (/dev/sda1, /dev/sda2, /dev/sda4 and /dev/sda3 respectively).
This fact is exploited by a little tool (a set of scripts) that clones the primary disk to an external disk connected to the USB port, in order to have a full system backup on a secondary disk which can then replace the primary one in case of failure, simply by unplugging the broken disk and plugging in the new one.

Now, my primary disk has a sector size of 512 bytes, while my secondary disk has a sector size of 4 KB. I'm going to create the partitions on the secondary disk as you suggested, so that they are equally sized. In this way I should be able to do the following (see the command sketch just below the list):
- add /dev/sdb1 to /dev/md0 to replicate /dev/sda1
- add /dev/sdb2 to /dev/md1 to replicate /dev/sda2
- add /dev/sdb3 to /dev/md3 to replicate /dev/sda3
(the copying of /dev/sda4 to /dev/sdb4 is performed differently, since this last partition is actually the "data partition", and backup is made using rsync, so that I may even "clone" the primary disk to one of different size).
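If I understand the mdadm syntax correctly, the commands for the first three would be something like this (just my sketch, assuming the arrays are already running in degraded mode):

Code:
~ # mdadm /dev/md0 --add /dev/sdb1
~ # mdadm /dev/md1 --add /dev/sdb2
~ # mdadm /dev/md3 --add /dev/sdb3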

So far so good.

The question is this: once I have /dev/sda "cloned" to /dev/sdb with this technique, will replacing the primary disk with the secondary one still let me boot the system? As far as I know, sector addresses are used by the boot loader to find the partitions to boot. If I use the disk that is now /dev/sdb as the boot disk (with its MBR carrying all the information copied from /dev/sda, which has a different sector size and hence different partition boundaries in sector units), will the system be able to boot?

Searching for the boot device used by the system, I see:

Code:
/ # cat /proc/cmdline
root=/dev/md0 console=ttyS0,115200 elevator=cfq mac_adr=0x00,0x90,0xA9,0x80,0x17,0x5D mem=128M wixmodel=WWLXN poweroutage=yes adminmode=recovery
So, it seems it's using the logical RAID device /dev/md0 to boot the system. Hence, I would expect the Linux RAID driver to take care of translating any "logical" sector address (written in the MBR) to the correct "physical" sector on the fly, and so let me boot the system from either of the two disks, even if they have different sector sizes.

Am I correct? Or am I in trouble?

Thanks again in advance!!

Last edited by mauromol; 04-27-2012 at 08:45 AM.
 
Old 04-27-2012, 11:40 AM   #11
whizje
Member
 
Registered: Sep 2008
Location: The Netherlands
Distribution: Slackware64 current
Posts: 594

Rep: Reputation: 141
I think it should work. Try it and post the result.

Last edited by whizje; 04-27-2012 at 11:42 AM.
 
Old 05-01-2012, 04:07 PM   #12
mauromol
Member
 
Registered: Apr 2012
Location: Italy
Distribution: Linux Mint KDE, Debian
Posts: 35

Original Poster
Rep: Reputation: Disabled
Hi whizje,
I'm still in trouble. First of all, I had some problems creating the partition table. The first three partitions were created with no problems, following your advice. For the fourth one, I thought: "I'll let fdisk create the partition starting at the sector I want and extending to the end of the disk". So I specified 809272 as the starting sector and just hit Enter (without typing anything) for the ending sector. However, I realized later that fdisk had set the last sector somewhere around 1953504000, which would be the sector count of my disk if it had 512-byte sectors!!

When I realized this, I divided that last sector by 8 and re-created the last partition specifying that value minus 1... However, running fdisk -l I saw that the ending cylinder was then set one above the last one (the last cylinder of /dev/sdb4 was 15201, while the disk only has 15200 cylinders). So I did this: I took 15200, multiplied it by 16065 to get the total sector count, and subtracted 1 to get the last sector. This is the partition table I ended up with:

Code:
~ # fdisk -l

Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot    Start       End    Blocks   Id  System
/dev/sda1               5         249     1959936   fd  Linux raid autodetect
Partition 1 does not end on cylinder boundary.
/dev/sda2             249         280      256992   fd  Linux raid autodetect
Partition 2 does not end on cylinder boundary.
/dev/sda3             280         403      988000   fd  Linux raid autodetect
Partition 3 does not end on cylinder boundary.
/dev/sda4             403      121601   973522944+  fd  Linux raid autodetect
Note: sector size is 4096 (not 512)

Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 15200 cylinders
Units = cylinders of 16065 * 4096 = 65802240 bytes

   Device Boot    Start       End    Blocks   Id  System
/dev/sdb1               1          32     1959936   fd  Linux raid autodetect
Partition 1 does not end on cylinder boundary.
/dev/sdb2              32          35      256992   fd  Linux raid autodetect
Partition 2 does not end on cylinder boundary.
/dev/sdb3              35          51      988000   fd  Linux raid autodetect
Partition 3 does not end on cylinder boundary.
/dev/sdb4              51       15200   973514912   fd  Linux raid autodetect
Or, in sectors:

Code:
~ # fdisk -ul

Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
Units = sectors of 1 * 512 = 512 bytes

   Device Boot    Start       End    Blocks   Id  System
/dev/sda1           64320     3984191     1959936   fd  Linux raid autodetect
Partition 1 does not end on cylinder boundary.
/dev/sda2         3984192     4498175      256992   fd  Linux raid autodetect
Partition 2 does not end on cylinder boundary.
/dev/sda3         4498176     6474175      988000   fd  Linux raid autodetect
Partition 3 does not end on cylinder boundary.
/dev/sda4         6474176  1953520064   973522944+  fd  Linux raid autodetect
Note: sector size is 4096 (not 512)

Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 15200 cylinders, total 244190646 sectors
Units = sectors of 1 * 4096 = 4096 bytes

   Device Boot    Start       End    Blocks   Id  System
/dev/sdb1            8040      498023     1959936   fd  Linux raid autodetect
Partition 1 does not end on cylinder boundary.
/dev/sdb2          498024      562271      256992   fd  Linux raid autodetect
Partition 2 does not end on cylinder boundary.
/dev/sdb3          562272      809271      988000   fd  Linux raid autodetect
Partition 3 does not end on cylinder boundary.
/dev/sdb4          809272   244187999   973514912   fd  Linux raid autodetect
This seems correct to me, and the block counts of the corresponding partitions 1, 2 and 3 on the two disks match exactly.
The fourth partition seems to end on the last cylinder of the disk, so I think (and hope) it's correct now (as I said, I couldn't tell fdisk in sector mode to "calculate the end yourself").
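For the record, this is the arithmetic I used for the end sector (plain shell arithmetic, just to show the numbers):

Code:
~ # echo $((15200 * 16065 - 1))
244187999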

However, another strange thing is that I can't use parted to do operations on the new disk:

Code:
~ # parted /dev/sdb
Warning: Device /dev/sdb has a logical sector size of 4096.  Not all parts of
GNU Parted support this at the moment, and the working code is HIGHLY
EXPERIMENTAL.

GNU Parted 1.7.1
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) print
Error: Unable to open /dev/sdb - unrecognised disk label.
And, if I try "mklabel msdos" on /dev/sdb I get the following:

Code:
(parted) mklabel msdos
Error: Bad address during write on /dev/sdb
Retry/Ignore/Cancel? c


You found a bug in GNU Parted! Here's what you have to do:

Don't panic! The bug has most likely not affected any of your data.
Help us to fix this bug by doing the following:

Check whether the bug has already been fixed by checking
the last version of GNU Parted that you can find at:

        http://ftp.gnu.org/gnu/parted/

Please check this version prior to bug reporting.

If this has not been fixed yet or if you don't know how to check,
please visit the GNU Parted website:

        http://www.gnu.org/software/parted

for further information.

Your report should contain the version of this release (1.7.1)
along with the error message below, the output of

        parted DEVICE unit co print unit s print

and additional information about your setup you consider important.

Error: SEGV_MAPERR (Address not mapped to object)Aborted
Anyway, this doesn't bother me too much.

I was then able to create the /dev/md5 array correctly, with the command that gave me problems at first:

Code:
~ # mdadm -Q --detail /dev/md5
/dev/md5:
        Version : 00.90.03
  Creation Time : Tue May  1 22:49:29 2012
     Raid Level : raid1
     Array Size : 973514816 (928.42 GiB 996.88 GB)
    Device Size : 973514816 (928.42 GiB 996.88 GB)
   Raid Devices : 2
  Total Devices : 1
Preferred Minor : 5
    Persistence : Superblock is persistent

    Update Time : Tue May  1 22:51:08 2012
          State : clean, degraded
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

           UUID : 306f45b2:e274ffbd:1f4478a6:fe99301e
         Events : 0.4

    Number   Major   Minor   RaidDevice State
       0       8       20        0      active sync   /dev/sdb4
       1       0        0        1      removed
Good!
However, the next problem comes when I try to format /dev/md5 as XFS; I get the following error:

Code:
~ # /usr/sbin/mkfs.xfs -f /dev/md5
mkfs.xfs: warning - cannot set blocksize on block device /dev/md5: Invalid argument
Warning: the data subvolume sector size 512 is less than the sector size
reported by the device (4096).
meta-data=/dev/md5               isize=256    agcount=32, agsize=7605584 blks
         =                       sectsz=512   attr=0
data     =                       bsize=4096   blocks=243378688, imaxpct=25
         =                       sunit=0      swidth=0 blks, unwritten=1
naming   =version 2              bsize=4096
log      =internal log           bsize=4096   blocks=32768, version=1
         =                       sectsz=512   sunit=0 blks, lazy-count=0
realtime =none                   extsz=4096   blocks=0, rtextents=0
existing superblock read failed: Invalid argument
mkfs.xfs: pwrite64 failed: Invalid argument
mkfs.xfs: read failed: Invalid argument
Any idea why? :-(

And, by the way... why is this disk giving me so much trouble?
 
Old 05-01-2012, 04:21 PM   #13
mauromol
Member
 
Registered: Apr 2012
Location: Italy
Distribution: Linux Mint KDE, Debian
Posts: 35

Original Poster
Rep: Reputation: Disabled
OK, I was able to format the partition by adding the "-s size=4096" parameter:

Code:
~ # /usr/sbin/mkfs.xfs -f -s size=4096 /dev/md5
meta-data=/dev/md5               isize=256    agcount=32, agsize=7605584 blks
         =                       sectsz=4096  attr=0
data     =                       bsize=4096   blocks=243378688, imaxpct=25
         =                       sunit=0      swidth=0 blks, unwritten=1
naming   =version 2              bsize=4096
log      =internal log           bsize=4096   blocks=32768, version=2
         =                       sectsz=4096  sunit=1 blks, lazy-count=0
realtime =none                   extsz=4096   blocks=0, rtextents=0
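Just to double-check where the 4096 comes from, I suppose one could ask the kernel directly for the logical sector size (assuming blockdev is included in this firmware; these are only the commands, I haven't verified them on the NAS):

Code:
~ # blockdev --getss /dev/sdb   # logical sector size of the USB disk
~ # blockdev --getss /dev/md5   # and of the md array built on it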
However, any suggestion about the way I created the last partition, and about the parted error message, would still be appreciated...
 
  

