Old 02-07-2012, 03:08 AM   #1
scanray
LQ Newbie
 
Registered: Feb 2012
Posts: 10

Rep: Reputation: Disabled
RAID5 rebuild problems


I know this problem has come up in other threads, but I have not found the answer.

I have been using software RAID5 for several years; I started with 3 x 320 GB drives and now have 5 x 1 TB drives.
I had to replace the motherboard because it was failing.

After installing the new system, I connected the RAID disks. The system saw them and recognized them as RAID5; no problem up to this point. I mounted the RAID and copied several configuration files, so far so good.
I rebooted the machine and was surprised to see that it was rebuilding the RAID. I let it finish. No luck:
at the end, the message was "No disk format."

I began reviewing the logs and realized that the system had reassembled the drives in a different order. I think that is the problem.

These lines appear in syslog when the drives are first detected and the RAID is recognized:

Quote:
Feb 6 03:13:26 server kernel: md: md127 stopped.
Feb 6 03:13:26 server kernel: md: bind<sdb1>
Feb 6 03:13:26 server kernel: md: bind<sdd1>
Feb 6 03:13:26 server kernel: md: bind<sde1>
Feb 6 03:13:26 server kernel: md: bind<sdf1>
Feb 6 03:13:26 server kernel: md: bind<sdc1>
Feb 6 03:13:26 server kernel: async_tx: api initialized (async)
Feb 6 03:13:26 server kernel: raid6: int64x1 2046 MB/s
Feb 6 03:13:26 server kernel: raid6: int64x2 2664 MB/s
Feb 6 03:13:26 server kernel: raid6: int64x4 1863 MB/s
Feb 6 03:13:26 server kernel: raid6: int64x8 1828 MB/s
Feb 6 03:13:26 server kernel: raid6: sse2x1 3488 MB/s
Feb 6 03:13:26 server kernel: raid6: sse2x2 4656 MB/s
Feb 6 03:13:26 server kernel: raid6: sse2x4 4835 MB/s
Feb 6 03:13:26 server kernel: raid6: using algorithm sse2x4 (4835 MB/s)
Feb 6 03:13:26 server kernel: xor: automatically using best checksumming function: generic_sse
Feb 6 03:13:26 server kernel: generic_sse: 7828.000 MB/sec
Feb 6 03:13:26 server kernel: xor: using function: generic_sse (7828.000 MB/sec)
Feb 6 03:13:26 server kernel: md: raid6 personality registered for level 6
Feb 6 03:13:26 server kernel: md: raid5 personality registered for level 5
Feb 6 03:13:26 server kernel: md: raid4 personality registered for level 4
Feb 6 03:13:26 server kernel: raid5: device sdc1 operational as raid disk 0
Feb 6 03:13:26 server kernel: raid5: device sdf1 operational as raid disk 4
Feb 6 03:13:26 server kernel: raid5: device sde1 operational as raid disk 3
Feb 6 03:13:26 server kernel: raid5: device sdd1 operational as raid disk 2
Feb 6 03:13:26 server kernel: raid5: device sdb1 operational as raid disk 1
Feb 6 03:13:26 server kernel: raid5: allocated 5334kB for md127
Feb 6 03:13:26 server kernel: 0: w=1 pa=0 pr=5 m=1 a=2 r=5 op1=0 op2=0
Feb 6 03:13:26 server kernel: 4: w=2 pa=0 pr=5 m=1 a=2 r=5 op1=0 op2=0
Feb 6 03:13:26 server kernel: 3: w=3 pa=0 pr=5 m=1 a=2 r=5 op1=0 op2=0
Feb 6 03:13:26 server kernel: 2: w=4 pa=0 pr=5 m=1 a=2 r=5 op1=0 op2=0
Feb 6 03:13:26 server kernel: 1: w=5 pa=0 pr=5 m=1 a=2 r=5 op1=0 op2=0
Feb 6 03:13:26 server kernel: raid5: raid level 5 set md127 active with 5 out of 5 devices, algorithm 2
Feb 6 03:13:26 server kernel: RAID5 conf printout:
Feb 6 03:13:26 server kernel: --- rd:5 wd:5
Feb 6 03:13:26 server kernel: disk 0, o:1, dev:sdc1
Feb 6 03:13:26 server kernel: disk 1, o:1, dev:sdb1
Feb 6 03:13:26 server kernel: disk 2, o:1, dev:sdd1
Feb 6 03:13:26 server kernel: disk 3, o:1, dev:sde1
Feb 6 03:13:26 server kernel: disk 4, o:1, dev:sdf1
Feb 6 03:13:26 server kernel: md127: detected capacity change from 0 to 4000808697856
Feb 6 03:13:26 server kernel: md127: unknown partition table
Feb 6 03:13:26 server kernel: device-mapper: uevent: version 1.0.3
Feb 6 03:13:26 server kernel: device-mapper: ioctl: 4.17.0-ioctl (2010-03-05) initialised: dm-devel@redhat.com
Feb 6 03:13:26 server kernel: EXT4-fs (sda6): mounted filesystem with ordered data mode
Feb 6 03:13:26 server kernel: loop: module loaded
Feb 6 03:19:08 server drakconf.real[2587]: ### Program is starting ###
Feb 6 03:19:13 server drakconf.real[2597]: ### Program is starting ###
Feb 6 03:19:26 server diskdrake[2616]: ### Program is starting ###
Feb 6 03:19:26 server diskdrake[2616]: dmraid::init failed
Feb 6 03:19:26 server diskdrake[2616]: HDIO_GETGEO on /dev/sda succeeded: heads=255 sectors=63 cylinders=19457 start=2147483648
Feb 6 03:19:26 server diskdrake[2616]: HDIO_GETGEO on /dev/sdb succeeded: heads=255 sectors=63 cylinders=56065 start=2147483648
Feb 6 03:19:26 server diskdrake[2616]: HDIO_GETGEO on /dev/sdc succeeded: heads=255 sectors=63 cylinders=56065 start=4294936576
Feb 6 03:19:26 server diskdrake[2616]: HDIO_GETGEO on /dev/sdd succeeded: heads=255 sectors=63 cylinders=56065 start=4294936576
Feb 6 03:19:26 server diskdrake[2616]: HDIO_GETGEO on /dev/sde succeeded: heads=255 sectors=63 cylinders=56065 start=4294936576
Feb 6 03:19:26 server diskdrake[2616]: HDIO_GETGEO on /dev/sdf succeeded: heads=255 sectors=63 cylinders=56065 start=4294936576
Feb 6 03:19:26 server diskdrake[2616]: id2hd: 0x76623ac6=>sda 0x7731752e=>sdc 0x009109b6=>sdd 0x78ea04eb=>sde 0xe4fa102c=>sdf 0x62286ebf=>sdb
Feb 6 03:19:26 server diskdrake[2616]: id2edd: 0x76623ac6=>/sys/firmware/edd/int13_dev80 0x7731752e=>/sys/firmware/edd/int13_dev82 0x009109b6=>/sys/firmware/edd/int13_dev83 0x78ea04eb=>/sys/firmware/edd/int13_dev84 0xe4fa102c=>/sys/firmware/edd/int13_dev85 0x62286ebf=>/sys/firmware/edd/int13_dev81
Feb 6 03:19:26 server diskdrake[2616]: geometry_from_edd sda 0x76623ac6: 310098/16/63
Feb 6 03:19:26 server diskdrake[2616]: geometry_from_edd sdc 0x7731752e: 1938021/16/63
Feb 6 03:19:26 server diskdrake[2616]: geometry_from_edd sdd 0x009109b6: 1938021/16/63
Feb 6 03:19:26 server diskdrake[2616]: geometry_from_edd sdf 0xe4fa102c: 1938021/16/63
Feb 6 03:19:26 server diskdrake[2616]: geometry_from_edd sde 0x78ea04eb: 1938021/16/63
Feb 6 03:19:26 server diskdrake[2616]: geometry_from_edd sdb 0x62286ebf: 1938021/16/63
Feb 6 03:19:26 server diskdrake[2616]: test_for_bad_drives(/dev/sda on sector #62)
Feb 6 03:19:26 server diskdrake[2616]: running: blkid -o udev -p /dev/sda
Feb 6 03:19:26 server diskdrake[2616]: running: blkid -o udev -p /dev/sda
Feb 6 03:19:26 server diskdrake[2616]: found a dos partition table on /dev/sda at sector 0
Feb 6 03:19:26 server diskdrake[2616]: guess_geometry_from_partition_table sda: 19457/255/63
Feb 6 03:19:26 server diskdrake[2616]: is_geometry_valid_for_the_partition_table failed for (sda1, 62910539): 1023,254,62 vs 1023,3,62 with geometry 310098/16/63
Feb 6 03:19:26 server diskdrake[2616]: sda: using guessed geometry 19457/255/63 instead of 310098/16/63
Feb 6 03:19:26 server diskdrake[2616]: running: blkid -o udev -p /dev/sda1
Feb 6 03:19:26 server diskdrake[2616]: blkid gave: ext4 ce7caea3-f715-424f-bb2a-77d0a91a0a15
Feb 6 03:19:26 server diskdrake[2616]: running: blkid -o udev -p /dev/sda5
Feb 6 03:19:26 server diskdrake[2616]: blkid gave: swap d6f410c2-4727-4448-b68d-a211668948b3
Feb 6 03:19:26 server diskdrake[2616]: running: blkid -o udev -p /dev/sda6
Feb 6 03:19:26 server diskdrake[2616]: blkid gave: ext4 b22989f5-07bb-4c09-add2-66b244cadf01
Feb 6 03:19:26 server diskdrake[2616]: test_for_bad_drives(/dev/sdb on sector #62)
Feb 6 03:19:26 server diskdrake[2616]: running: blkid -o udev -p /dev/sdb
Feb 6 03:19:26 server diskdrake[2616]: running: blkid -o udev -p /dev/sdb
Feb 6 03:19:26 server diskdrake[2616]: found a dos partition table on /dev/sdb at sector 0
Feb 6 03:19:26 server diskdrake[2616]: guess_geometry_from_partition_table sdb: 121601/255/63
Feb 6 03:19:26 server diskdrake[2616]: is_geometry_valid_for_the_partition_table failed for (sdb1, 1953520064): 1023,254,62 vs 1023,14,62 with geometry 1938021/16/63
Feb 6 03:19:26 server diskdrake[2616]: sdb: using guessed geometry 121601/255/63 instead of 1938021/16/63
Feb 6 03:19:26 server diskdrake[2616]: running: blkid -o udev -p /dev/sdb1
Feb 6 03:19:26 server diskdrake[2616]: blkid gave: 0b4906c3-2410-9e0d-263a-576e137e55ea
Feb 6 03:19:26 server diskdrake[2616]: test_for_bad_drives(/dev/sdc on sector #62)
Feb 6 03:19:26 server diskdrake[2616]: running: blkid -o udev -p /dev/sdc
Feb 6 03:19:26 server diskdrake[2616]: running: blkid -o udev -p /dev/sdc
Feb 6 03:19:26 server diskdrake[2616]: found a dos partition table on /dev/sdc at sector 0
Feb 6 03:19:26 server diskdrake[2616]: guess_geometry_from_partition_table sdc: 121601/255/63
Feb 6 03:19:26 server diskdrake[2616]: is_geometry_valid_for_the_partition_table failed for (sdc1, 1953520064): 1023,254,62 vs 1023,14,62 with geometry 1938021/16/63
Feb 6 03:19:26 server diskdrake[2616]: sdc: using guessed geometry 121601/255/63 instead of 1938021/16/63
Feb 6 03:19:26 server diskdrake[2616]: running: blkid -o udev -p /dev/sdc1
Feb 6 03:19:26 server diskdrake[2616]: blkid gave: 0b4906c3-2410-9e0d-263a-576e137e55ea
Feb 6 03:19:26 server diskdrake[2616]: test_for_bad_drives(/dev/sdd on sector #62)
Feb 6 03:19:26 server diskdrake[2616]: running: blkid -o udev -p /dev/sdd
Feb 6 03:19:26 server diskdrake[2616]: running: blkid -o udev -p /dev/sdd
Feb 6 03:19:26 server diskdrake[2616]: found a dos partition table on /dev/sdd at sector 0
Feb 6 03:19:26 server diskdrake[2616]: guess_geometry_from_partition_table sdd: 121601/255/63
Feb 6 03:19:26 server diskdrake[2616]: is_geometry_valid_for_the_partition_table failed for (sdd1, 1953520064): 1023,254,62 vs 1023,14,62 with geometry 1938021/16/63
Feb 6 03:19:26 server diskdrake[2616]: sdd: using guessed geometry 121601/255/63 instead of 1938021/16/63
Feb 6 03:19:26 server diskdrake[2616]: running: blkid -o udev -p /dev/sdd1
Feb 6 03:19:26 server diskdrake[2616]: blkid gave: 0b4906c3-2410-9e0d-263a-576e137e55ea
Feb 6 03:19:26 server diskdrake[2616]: test_for_bad_drives(/dev/sde on sector #62)
Feb 6 03:19:26 server diskdrake[2616]: running: blkid -o udev -p /dev/sde
Feb 6 03:19:26 server diskdrake[2616]: running: blkid -o udev -p /dev/sde
Feb 6 03:19:26 server diskdrake[2616]: found a dos partition table on /dev/sde at sector 0
Feb 6 03:19:26 server diskdrake[2616]: guess_geometry_from_partition_table sde: 121601/255/63
Feb 6 03:19:26 server diskdrake[2616]: is_geometry_valid_for_the_partition_table failed for (sde1, 1953520064): 1023,254,62 vs 1023,14,62 with geometry 1938021/16/63
Feb 6 03:19:26 server diskdrake[2616]: sde: using guessed geometry 121601/255/63 instead of 1938021/16/63
Feb 6 03:19:26 server diskdrake[2616]: running: blkid -o udev -p /dev/sde1
Feb 6 03:19:26 server diskdrake[2616]: blkid gave: 0b4906c3-2410-9e0d-263a-576e137e55ea
Feb 6 03:19:26 server diskdrake[2616]: test_for_bad_drives(/dev/sdf on sector #62)
Feb 6 03:19:26 server diskdrake[2616]: running: blkid -o udev -p /dev/sdf
Feb 6 03:19:26 server diskdrake[2616]: running: blkid -o udev -p /dev/sdf
Feb 6 03:19:26 server diskdrake[2616]: found a dos partition table on /dev/sdf at sector 0
Feb 6 03:19:26 server diskdrake[2616]: guess_geometry_from_partition_table sdf: 121601/255/63
Feb 6 03:19:26 server diskdrake[2616]: is_geometry_valid_for_the_partition_table failed for (sdf1, 1953520064): 1023,254,62 vs 1023,14,62 with geometry 1938021/16/63
Feb 6 03:19:26 server diskdrake[2616]: sdf: using guessed geometry 121601/255/63 instead of 1938021/16/63
Feb 6 03:19:26 server diskdrake[2616]: running: blkid -o udev -p /dev/sdf1
Feb 6 03:19:26 server diskdrake[2616]: blkid gave: 0b4906c3-2410-9e0d-263a-576e137e55ea
Feb 6 03:19:26 server diskdrake[2616]: looking for raids in sdb1 sdc1 sdd1 sde1 sdf1
Feb 6 03:19:26 server diskdrake[2616]: running: mdadm --detail --brief -v /dev/md127
Feb 6 03:19:26 server diskdrake[2616]: running: blkid -o udev -p /dev/md127
Feb 6 03:19:27 server diskdrake[2616]: blkid gave: ext4 af4fcf91-dd36-4655-b410-9019e151c237 Data
Feb 6 03:19:27 server diskdrake[2616]: RAID: found md127 (raid 5) type ext4 with parts /dev/sdc1,/dev/sdb1,/dev/sdd1,/dev/sde1,/dev/sdf1
and further on in the log:

Quote:
Feb 6 03:20:55 server diskdrake[2616]: mount_part: device=md127 mntpoint=/mnt/Data isMounted= real_mntpoint= device_UUID=af4fcf91-dd36-4655-b410-9019e151c237
Feb 6 03:20:55 server diskdrake[2616]: mounting /dev/md127 on /mnt/Data as type ext4, options
Feb 6 03:20:55 server diskdrake[2616]: created directory /mnt/Data (and parents if necessary)
Feb 6 03:20:55 server diskdrake[2616]: running: mount -t ext4 /dev/md127 /mnt/Data
Feb 6 03:20:55 server kernel: EXT4-fs (md127): warning: maximal mount count reached, running e2fsck is recommended
Feb 6 03:20:56 server kernel: EXT4-fs (md127): mounted filesystem with ordered data mode

Feb 6 03:47:51 server kernel: raid5: device sdc1 operational as raid disk 0
Feb 6 03:47:51 server kernel: raid5: device sdf1 operational as raid disk 4
Feb 6 03:47:51 server kernel: raid5: device sde1 operational as raid disk 3
Feb 6 03:47:51 server kernel: raid5: device sdd1 operational as raid disk 2
Feb 6 03:47:51 server kernel: raid5: device sdb1 operational as raid disk 1
Feb 6 03:47:51 server kernel: raid5: allocated 5334kB for md127
Feb 6 03:47:51 server kernel: 0: w=1 pa=0 pr=5 m=1 a=2 r=5 op1=0 op2=0
Feb 6 03:47:51 server kernel: 4: w=2 pa=0 pr=5 m=1 a=2 r=5 op1=0 op2=0
Feb 6 03:47:51 server kernel: 3: w=3 pa=0 pr=5 m=1 a=2 r=5 op1=0 op2=0
Feb 6 03:47:51 server kernel: 2: w=4 pa=0 pr=5 m=1 a=2 r=5 op1=0 op2=0
Feb 6 03:47:51 server kernel: 1: w=5 pa=0 pr=5 m=1 a=2 r=5 op1=0 op2=0
Feb 6 03:47:51 server kernel: raid5: raid level 5 set md127 active with 5 out of 5 devices, algorithm 2
Feb 6 03:47:51 server kernel: RAID5 conf printout:
Feb 6 03:47:51 server kernel: --- rd:5 wd:5
Feb 6 03:47:51 server kernel: disk 0, o:1, dev:sdc1
Feb 6 03:47:51 server kernel: disk 1, o:1, dev:sdb1
Feb 6 03:47:51 server kernel: disk 2, o:1, dev:sdd1
Feb 6 03:47:51 server kernel: disk 3, o:1, dev:sde1
Feb 6 03:47:51 server kernel: disk 4, o:1, dev:sdf1

Feb 6 03:47:51 server kernel: md127: detected capacity change from 0 to 4000808697856
Feb 6 03:47:51 server kernel: md127: unknown partition table
Here it looks like something changed:


Quote:
Feb 6 03:55:05 server kernel: md: md127 stopped.
Feb 6 03:55:05 server kernel: md: unbind<sdc1>
Feb 6 03:55:05 server kernel: md: export_rdev(sdc1)
Feb 6 03:55:05 server kernel: md: unbind<sdf1>
Feb 6 03:55:05 server kernel: md: export_rdev(sdf1)
Feb 6 03:55:05 server kernel: md: unbind<sde1>
Feb 6 03:55:05 server kernel: md: export_rdev(sde1)
Feb 6 03:55:05 server kernel: md: unbind<sdd1>
Feb 6 03:55:05 server kernel: md: export_rdev(sdd1)
Feb 6 03:55:05 server kernel: md: unbind<sdb1>
Feb 6 03:55:05 server kernel: md: export_rdev(sdb1)
Feb 6 03:55:05 server kernel: md127: detected capacity change from 4000808697856 to 0
Feb 6 03:55:05 server mdmonitor: DeviceDisappeared event on /dev/md127
Feb 6 03:59:56 server kernel: md: bind<sdb1>
Feb 6 03:59:56 server kernel: md: bind<sdc1>
Feb 6 03:59:56 server kernel: md: bind<sdd1>
Feb 6 03:59:56 server kernel: md: bind<sde1>
Feb 6 03:59:56 server kernel: md: bind<sdf1>
Feb 6 03:59:56 server kernel: raid5: device sde1 operational as raid disk 3
Feb 6 03:59:56 server kernel: raid5: device sdd1 operational as raid disk 2
Feb 6 03:59:56 server kernel: raid5: device sdc1 operational as raid disk 1
Feb 6 03:59:56 server kernel: raid5: device sdb1 operational as raid disk 0
Feb 6 03:59:56 server kernel: raid5: allocated 5334kB for md2
Feb 6 03:59:56 server kernel: 3: w=1 pa=0 pr=5 m=1 a=2 r=5 op1=0 op2=0
Feb 6 03:59:56 server kernel: 2: w=2 pa=0 pr=5 m=1 a=2 r=5 op1=0 op2=0
Feb 6 03:59:56 server kernel: 1: w=3 pa=0 pr=5 m=1 a=2 r=5 op1=0 op2=0
Feb 6 03:59:56 server kernel: 0: w=4 pa=0 pr=5 m=1 a=2 r=5 op1=0 op2=0
Feb 6 03:59:56 server kernel: raid5: raid level 5 set md2 active with 4 out of 5 devices, algorithm 2
Feb 6 03:59:56 server kernel: RAID5 conf printout:
Feb 6 03:59:56 server kernel: --- rd:5 wd:4
Feb 6 03:59:56 server kernel: disk 0, o:1, dev:sdb1
Feb 6 03:59:56 server kernel: disk 1, o:1, dev:sdc1
Feb 6 03:59:56 server kernel: disk 2, o:1, dev:sdd1
Feb 6 03:59:56 server kernel: disk 3, o:1, dev:sde1
Feb 6 03:59:56 server kernel: md2: detected capacity change from 0 to 4000803979264
Feb 6 03:59:56 server kernel: RAID5 conf printout:
Feb 6 03:59:56 server kernel: --- rd:5 wd:4
Feb 6 03:59:56 server kernel: disk 0, o:1, dev:sdb1
Feb 6 03:59:56 server kernel: disk 1, o:1, dev:sdc1
Feb 6 03:59:56 server kernel: disk 2, o:1, dev:sdd1
Feb 6 03:59:56 server kernel: disk 3, o:1, dev:sde1
Feb 6 03:59:56 server kernel: disk 4, o:1, dev:sdf1
Feb 6 03:59:56 server kernel: md: recovery of RAID array md2
Feb 6 03:59:56 server kernel: md: minimum _guaranteed_ speed: 1000 KB/sec/disk.
Feb 6 03:59:56 server kernel: md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for recovery.
Feb 6 03:59:56 server kernel: md: using 128k window, over a total of 976758784 blocks.
Feb 6 03:59:56 server kernel: md2: unknown partition table
Feb 6 03:59:56 server mdmonitor: NewArray event on /dev/md2
Feb 6 03:59:56 server mdmonitor: DegradedArray event on /dev/md2
Then, after the rebuild, the logs look like this:


Quote:
Feb 6 09:15:31 server kernel: md: md2: recovery done.
Feb 6 09:15:31 server kernel: RAID5 conf printout:
Feb 6 09:15:31 server kernel: --- rd:5 wd:5
Feb 6 09:15:31 server kernel: disk 0, o:1, dev:sdb1
Feb 6 09:15:31 server kernel: disk 1, o:1, dev:sdc1
Feb 6 09:15:31 server kernel: disk 2, o:1, dev:sdd1
Feb 6 09:15:31 server kernel: disk 3, o:1, dev:sde1
Feb 6 09:15:31 server kernel: disk 4, o:1, dev:sdf1
Feb 6 09:15:31 server mdmonitor: RebuildFinished event on /dev/md2
Feb 6 09:15:31 server mdmonitor: SpareActive event on /dev/md2
...
Feb 6 11:42:42 server kernel: raid5: device sdb1 operational as raid disk 0
Feb 6 11:42:42 server kernel: raid5: device sdf1 operational as raid disk 4
Feb 6 11:42:42 server kernel: raid5: device sde1 operational as raid disk 3
Feb 6 11:42:42 server kernel: raid5: device sdd1 operational as raid disk 2
Feb 6 11:42:42 server kernel: raid5: device sdc1 operational as raid disk 1
Feb 6 11:42:42 server kernel: raid5: allocated 5334kB for md2
Feb 6 11:42:42 server kernel: 0: w=1 pa=0 pr=5 m=1 a=2 r=5 op1=0 op2=0
Feb 6 11:42:42 server kernel: 4: w=2 pa=0 pr=5 m=1 a=2 r=5 op1=0 op2=0
Feb 6 11:42:42 server kernel: 3: w=3 pa=0 pr=5 m=1 a=2 r=5 op1=0 op2=0
Feb 6 11:42:42 server kernel: 2: w=4 pa=0 pr=5 m=1 a=2 r=5 op1=0 op2=0
Feb 6 11:42:42 server kernel: 1: w=5 pa=0 pr=5 m=1 a=2 r=5 op1=0 op2=0
Feb 6 11:42:42 server kernel: raid5: raid level 5 set md2 active with 5 out of 5 devices, algorithm 2
Feb 6 11:42:42 server kernel: RAID5 conf printout:
Feb 6 11:42:42 server kernel: --- rd:5 wd:5
Feb 6 11:42:42 server kernel: disk 0, o:1, dev:sdb1
Feb 6 11:42:42 server kernel: disk 1, o:1, dev:sdc1
Feb 6 11:42:42 server kernel: disk 2, o:1, dev:sdd1
Feb 6 11:42:42 server kernel: disk 3, o:1, dev:sde1
Feb 6 11:42:42 server kernel: disk 4, o:1, dev:sdf1
Compare the order of the disks at the beginning and after the rebuild (the "disk N ... dev:" lines in the quotes). As you can see, the sdb and sdc disks swapped positions in the RAID.
I also understand that the rebuild changed the position of the parity blocks, and that may be one of the reasons why the filesystem on the RAID is no longer recognized. Apart from other obvious things. :D

The million dollar question is:
How can I re-create the RAID with the disks in the correct order?

I have the information on the original order of the disks and also the UUIDs.
I have not done anything yet, because if I do another rebuild in the wrong order I am sure I will lose everything.
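What I am thinking of doing first is only reading the superblocks, without writing anything, to confirm which slot each drive claims. Roughly like this (a sketch only; the device names are just the current letters on my system, and these commands only examine the members, they do not modify the array):

Code:
# print the slot each member claims in its md superblock
for d in /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1; do
    echo "== $d =="
    mdadm --examine "$d" | grep -E 'UUID|Raid Level|this|Device Role'
done

# and see what the kernel currently thinks of the array
cat /proc/mdstat
With version 0.90 superblocks the slot shows up on the "this" line; with 1.x metadata it is the "Device Role" line instead.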

I hope you can help me,
thanks
 
Old 02-08-2012, 11:51 AM   #2
cbtshare
Member
 
Registered: Jul 2009
Posts: 645

Rep: Reputation: 42
I am afraid that if you removed the drives from the broken array and put them back in a different order, it won't work. You would need to have noted which port you took each drive from. Is that what you are saying, that you rearranged the drives?
 
Old 02-08-2012, 01:52 PM   #3
scanray
LQ Newbie
 
Registered: Feb 2012
Posts: 10

Original Poster
Rep: Reputation: Disabled
I have not changed the order of the disks physically; their order and the connections are correct.
The change happened in software. The first time, the array was recognized correctly. Then it did a rebuild, I do not know why, and two disks appeared swapped.

Look at this:

Quote:
Feb 6 03:47:51 server kernel: disk 0, o:1, dev:sdc1
Feb 6 03:47:51 server kernel: disk 1, o:1, dev:sdb1
Feb 6 03:47:51 server kernel: disk 2, o:1, dev:sdd1
Feb 6 03:47:51 server kernel: disk 3, o:1, dev:sde1
Feb 6 03:47:51 server kernel: disk 4, o:1, dev:sdf1
and later:

Quote:
Feb 6 11:42:42 server kernel: disk 0, o:1, dev:sdb1
Feb 6 11:42:42 server kernel: disk 1, o:1, dev:sdc1
Feb 6 11:42:42 server kernel: disk 2, o:1, dev:sdd1
Feb 6 11:42:42 server kernel: disk 3, o:1, dev:sde1
Feb 6 11:42:42 server kernel: disk 4, o:1, dev:sdf1
The sdb and sdc disks are swapped.
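Since only the sdX letters moved around, one way to double-check that the physical drives are still on the same ports is to map the letters back to the drive model and serial numbers through the persistent udev names (a quick sketch; the grep only hides the per-partition entries):

Code:
# show which sdX letter each physical drive currently has
ls -l /dev/disk/by-id/ | grep -v part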
 
Old 02-08-2012, 04:26 PM   #4
cbtshare
Member
 
Registered: Jul 2009
Posts: 645

Rep: Reputation: 42
That's really weird. I'd suggest you contact the software maintainers and ask whether this problem has ever happened before, and how to fix it, before doing anything else and risking your data.
 
Old 02-08-2012, 04:55 PM   #5
scanray
LQ Newbie
 
Registered: Feb 2012
Posts: 10

Original Poster
Rep: Reputation: Disabled
True, it is very strange. I have not done anything yet.
I have tried making compressed images of the disks with dd and bzip2, but I can't mount them. I will buy two 2 TB disks to create images of the RAID disks, and then run my tests on those. For the moment I'm researching what can be done to recover the disks.
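For what it's worth, there are two separate obstacles there: a bzip2-compressed image has to be decompressed back to a raw file before it can be attached to a loop device, and a single RAID5 member does not hold a mountable filesystem by itself anyway, because the data is striped across all five disks. A rough sketch of the imaging side (paths and device names are only examples):

Code:
# copy a whole member disk to an image file on the backup drive;
# the source is only read, and conv=noerror,sync continues past read errors
dd if=/dev/sdb of=/mnt/backup/sdb.img bs=1M conv=noerror,sync

# later, a raw image can be attached read-only as a loop device, and the
# loop devices for all the members assembled into a test array
losetup -r /dev/loop0 /mnt/backup/sdb.img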

In this post, @garydale talks about something similar, but the script he uses is not clear to me, so I have not tried it.

According to this wiki, it is possible to re-create the array with the original disk order. But nobody says what happens to the data when a rebuild has already been done first ...
So I want to make the images before attempting anything.
 
Old 05-01-2012, 10:58 AM   #6
scanray
LQ Newbie
 
Registered: Feb 2012
Posts: 10

Original Poster
Rep: Reputation: Disabled
Hi all,

Well, I am happy to say I was able to fix my RAID5. Searching Google, I found this post, which gave me hope of recovering the array. By digging up the old RAID1 system disk and using R-Studio, I managed to recover data from the RAID5 and to find the version of mdadm that I had used to create it.

The problem was that the original array was created with metadata version 0.9 and a 64k chunk size, while the new version of mdadm defaults to 1.2 and 512k respectively.
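If the old superblocks are still readable, those two values can be checked directly on a member before re-creating anything; roughly (the device name is only an example):

Code:
# read the metadata version and chunk size recorded in a member's superblock
mdadm --examine /dev/sdb1 | grep -E 'Version|Chunk Size'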

If anyone has a similar or identical problem, read that article; it took away my fear of losing data.

Have a good day.
 
Old 05-01-2012, 06:21 PM   #7
snowmobile74
LQ Newbie
 
Registered: Nov 2003
Location: Reston, VA
Distribution: Slackware for everything
Posts: 22

Rep: Reputation: 1
What did you do to rebuild your array? It sounds like you actually re-created the array and changed the geometry of it completely.

Quote:
Originally Posted by scanray View Post

I have been using software RAID5 for several years; I started with 3 x 320 GB drives and now have 5 x 1 TB drives.
I had to replace the motherboard because it was failing.
You had 3 x 320 GB drives previously and you moved to 5 x 1 TB drives, correct? From the timestamps it seems that was done some time ago, yes? Has it ever worked with the five 1 TB drives?

Quote:
Originally Posted by scanray View Post

After installing the new system, I connected the RAID disks. The system saw them and recognized them as RAID5; no problem up to this point. I mounted the RAID and copied several configuration files, so far so good.
I rebooted the machine and was surprised to see that it was rebuilding the RAID. I let it finish. No luck:
at the end, the message was "No disk format."
Did it actually finish rebuilding? Do you still have the original command you used to build the array? Did you run fsck?

Can you show the output of mdadm -E /dev/sdf1 and mdadm -E /dev/sdb1?

Also, what does /proc/mdstat show?
 
Old 05-01-2012, 11:48 PM   #8
scanray
LQ Newbie
 
Registered: Feb 2012
Posts: 10

Original Poster
Rep: Reputation: Disabled
As I said in my first post, I messed up when re-creating the RAID5. The only information I had was the original order of the disks; I did not take into account the change in the mdadm version, whose defaults had changed. The link I posted helped clear up many of the doubts I had about losing data.
Quote:
What did you do to rebuild your array? It sounds like you actually re-created the array and changed the geometry of it completely.
The first thing I did was find the appropriate values by looking at the old logs. I did not change the geometry when re-creating the RAID; if you change the geometry, you lose the array.
Quote:
You had 3 x 320 GB drives previously and you moved to 5 x 1 TB drives, correct? From the timestamps it seems that was done some time ago, yes? Has it ever worked with the five 1 TB drives?
True, I had 3 x 320 GB drives. I switched to 3 x 1 TB and grew the RAID. Then, over time, I added the other two drives, one by one. Yes, it worked with the five 1 TB drives, until now.
Quote:
Did it actually finish rebuilding? Do you still have the original command you used to build the array? Did you run fsck?
The rebuild finished, and I have now been running the RAID for more than two weeks.
After it was rebuilt I copied all the data to other disks, zeroed the drives, and created the RAID again from scratch. I am 100% sure that zeroing the disks was not necessary; all my data was recovered 100%.
You ALWAYS need to run fsck after assembling.

The command I used to get my RAID back was:
Code:
mdadm -C -v /dev/md2 --metadata=0.9 -c64 -l5 -n5 /dev/sdc1 /dev/sdb1 /dev/sdd1 /dev/sde1 /dev/sdf1
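After re-creating the array, and before writing anything to it, it is worth checking it read-only first. Roughly like this (a sketch only; the mount point is just an example):

Code:
# confirm the geometry and member order of the re-created array
mdadm --detail /dev/md2

# check the filesystem without modifying it
fsck.ext4 -n /dev/md2

# if that looks sane, mount read-only and inspect the data
mount -o ro /dev/md2 /mnt/Data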
The outputs below probably won't help much because, as I said, I have since created a new RAID5 with the new default values.
Code:
# mdadm -E /dev/sdf1
/dev/sdf1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 581a5b1d:a5a951f6:0ac319c7:5c4f4647
           Name : infi.scanray.no-ip.org:2  (local to host infi.scanray.no-ip.org)
  Creation Time : Fri Mar 30 07:55:59 2012
     Raid Level : raid5
   Raid Devices : 5

 Avail Dev Size : 1953517954 (931.51 GiB 1000.20 GB)
     Array Size : 7814070272 (3726.04 GiB 4000.80 GB)
  Used Dev Size : 1953517568 (931.51 GiB 1000.20 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : 7958bca9:91efe60a:a4ea4c4f:af21562c

    Update Time : Tue May  1 11:54:10 2012
       Checksum : 967bbbe2 - correct
         Events : 13918

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 4
   Array State : AAAAA ('A' == active, '.' == missing)
Code:
mdadm -E /dev/sdb1
/dev/sdb1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 581a5b1d:a5a951f6:0ac319c7:5c4f4647
           Name : infi.scanray.no-ip.org:2  (local to host infi.scanray.no-ip.org)
  Creation Time : Fri Mar 30 07:55:59 2012
     Raid Level : raid5
   Raid Devices : 5

 Avail Dev Size : 1953517954 (931.51 GiB 1000.20 GB)
     Array Size : 7814070272 (3726.04 GiB 4000.80 GB)
  Used Dev Size : 1953517568 (931.51 GiB 1000.20 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : 6af30c30:de52429a:0ab70c91:e9012724

    Update Time : Tue May  1 11:54:10 2012
       Checksum : e5b866bc - correct
         Events : 13918

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 0
   Array State : AAAAA ('A' == active, '.' == missing)
I hope this helps. If you need anything else, just ask; I'll help any way I can. I'm no expert, but I learned a lot from this little problem.

orlando
 
Old 05-02-2012, 05:54 PM   #9
snowmobile74
LQ Newbie
 
Registered: Nov 2003
Location: Reston, VA
Distribution: Slackware for everything
Posts: 22

Rep: Reputation: 1
Ahh, I apologize; I was examining the logs and didn't see that you had already solved this problem. I had a similar issue myself with a 12-disk array. The information on how the defaults change from version to version will be very helpful in the future.

Long live the Linux Raid!
 
Old 05-02-2012, 09:36 PM   #10
scanray
LQ Newbie
 
Registered: Feb 2012
Posts: 10

Original Poster
Rep: Reputation: Disabled
Quote:
Originally Posted by snowmobile74 View Post
The information on how the defaults change from version to version will be very helpful in the future.
You can get the information about the default values from the mdadm rpm for each release on the web (I use Mandriva).
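For the locally installed package, something like this shows the installed version and its changelog (a quick sketch; rpm is what Mandriva uses, other distributions have their own equivalents):

Code:
# which mdadm build is installed, and what changed between releases
rpm -q mdadm
rpm -q --changelog mdadm | less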
Quote:
Originally Posted by snowmobile74 View Post
Long live the Linux Raid!
Yes, there really is nothing like Linux RAID.
Both Windows and OS X have many recovery programs, some more complicated than others and all equally difficult to understand.
In Linux, however, with the basic creation and administration tools, well managed of course, you can solve everything.

Linux forever!
 
  

