Linux - Server: This forum is for the discussion of Linux software used in a server-related context.
I know this problem has come up in other forums, but I have not found the answer.
I have been using software RAID5 for several years. I started with 3 x 320GB drives and now have 5 x 1TB.
I had to replace the motherboard, which was where the failures were occurring.
After installing the system, I connected the RAID disks. The system saw the disks and recognized them as RAID5, so no problem up to here. I mounted the array and copied several configuration files; so far so good.
I rebooted the machine and noticed with surprise that it was rebuilding the array. I let it finish. No luck.
At the end, the message was "No disk format."
I began to review the logs and realized that the system had reordered the drives. Well, I think that's the problem.
These lines appear in syslog when the disks are first detected and the RAID is recognized:
Quote:
Feb 6 03:13:26 server kernel: md: md127 stopped.
Feb 6 03:13:26 server kernel: md: bind<sdb1>
Feb 6 03:13:26 server kernel: md: bind<sdd1>
Feb 6 03:13:26 server kernel: md: bind<sde1>
Feb 6 03:13:26 server kernel: md: bind<sdf1>
Feb 6 03:13:26 server kernel: md: bind<sdc1>
Feb 6 03:13:26 server kernel: async_tx: api initialized (async)
Feb 6 03:13:26 server kernel: raid6: int64x1 2046 MB/s
Feb 6 03:13:26 server kernel: raid6: int64x2 2664 MB/s
Feb 6 03:13:26 server kernel: raid6: int64x4 1863 MB/s
Feb 6 03:13:26 server kernel: raid6: int64x8 1828 MB/s
Feb 6 03:13:26 server kernel: raid6: sse2x1 3488 MB/s
Feb 6 03:13:26 server kernel: raid6: sse2x2 4656 MB/s
Feb 6 03:13:26 server kernel: raid6: sse2x4 4835 MB/s
Feb 6 03:13:26 server kernel: raid6: using algorithm sse2x4 (4835 MB/s)
Feb 6 03:13:26 server kernel: xor: automatically using best checksumming function: generic_sse
Feb 6 03:13:26 server kernel: generic_sse: 7828.000 MB/sec
Feb 6 03:13:26 server kernel: xor: using function: generic_sse (7828.000 MB/sec)
Feb 6 03:13:26 server kernel: md: raid6 personality registered for level 6
Feb 6 03:13:26 server kernel: md: raid5 personality registered for level 5
Feb 6 03:13:26 server kernel: md: raid4 personality registered for level 4
Feb 6 03:13:26 server kernel: raid5: device sdc1 operational as raid disk 0
Feb 6 03:13:26 server kernel: raid5: device sdf1 operational as raid disk 4
Feb 6 03:13:26 server kernel: raid5: device sde1 operational as raid disk 3
Feb 6 03:13:26 server kernel: raid5: device sdd1 operational as raid disk 2
Feb 6 03:13:26 server kernel: raid5: device sdb1 operational as raid disk 1
Feb 6 03:13:26 server kernel: raid5: allocated 5334kB for md127
Feb 6 03:13:26 server kernel: 0: w=1 pa=0 pr=5 m=1 a=2 r=5 op1=0 op2=0
Feb 6 03:13:26 server kernel: 4: w=2 pa=0 pr=5 m=1 a=2 r=5 op1=0 op2=0
Feb 6 03:13:26 server kernel: 3: w=3 pa=0 pr=5 m=1 a=2 r=5 op1=0 op2=0
Feb 6 03:13:26 server kernel: 2: w=4 pa=0 pr=5 m=1 a=2 r=5 op1=0 op2=0
Feb 6 03:13:26 server kernel: 1: w=5 pa=0 pr=5 m=1 a=2 r=5 op1=0 op2=0
Feb 6 03:13:26 server kernel: raid5: raid level 5 set md127 active with 5 out of 5 devices, algorithm 2
Feb 6 03:13:26 server kernel: RAID5 conf printout:
Feb 6 03:13:26 server kernel: --- rd:5 wd:5
Feb 6 03:13:26 server kernel: disk 0, o:1, dev:sdc1
Feb 6 03:13:26 server kernel: disk 1, o:1, dev:sdb1
Feb 6 03:13:26 server kernel: disk 2, o:1, dev:sdd1
Feb 6 03:13:26 server kernel: disk 3, o:1, dev:sde1
Feb 6 03:13:26 server kernel: disk 4, o:1, dev:sdf1
Feb 6 03:13:26 server kernel: md127: detected capacity change from 0 to 4000808697856
Feb 6 03:13:26 server kernel: md127: unknown partition table
Feb 6 03:13:26 server kernel: device-mapper: uevent: version 1.0.3
Feb 6 03:13:26 server kernel: device-mapper: ioctl: 4.17.0-ioctl (2010-03-05) initialised: dm-devel@redhat.com
Feb 6 03:13:26 server kernel: EXT4-fs (sda6): mounted filesystem with ordered data mode
Feb 6 03:13:26 server kernel: loop: module loaded
Feb 6 03:19:08 server drakconf.real[2587]: ### Program is starting ###
Feb 6 03:19:13 server drakconf.real[2597]: ### Program is starting ###
Feb 6 03:19:26 server diskdrake[2616]: ### Program is starting ###
Feb 6 03:19:26 server diskdrake[2616]: dmraid::init failed
Feb 6 03:19:26 server diskdrake[2616]: HDIO_GETGEO on /dev/sda succeeded: heads=255 sectors=63 cylinders=19457 start=2147483648
Feb 6 03:19:26 server diskdrake[2616]: HDIO_GETGEO on /dev/sdb succeeded: heads=255 sectors=63 cylinders=56065 start=2147483648
Feb 6 03:19:26 server diskdrake[2616]: HDIO_GETGEO on /dev/sdc succeeded: heads=255 sectors=63 cylinders=56065 start=4294936576
Feb 6 03:19:26 server diskdrake[2616]: HDIO_GETGEO on /dev/sdd succeeded: heads=255 sectors=63 cylinders=56065 start=4294936576
Feb 6 03:19:26 server diskdrake[2616]: HDIO_GETGEO on /dev/sde succeeded: heads=255 sectors=63 cylinders=56065 start=4294936576
Feb 6 03:19:26 server diskdrake[2616]: HDIO_GETGEO on /dev/sdf succeeded: heads=255 sectors=63 cylinders=56065 start=4294936576
Feb 6 03:19:26 server diskdrake[2616]: id2hd: 0x76623ac6=>sda 0x7731752e=>sdc 0x009109b6=>sdd 0x78ea04eb=>sde 0xe4fa102c=>sdf 0x62286ebf=>sdb
Feb 6 03:19:26 server diskdrake[2616]: id2edd: 0x76623ac6=>/sys/firmware/edd/int13_dev80 0x7731752e=>/sys/firmware/edd/int13_dev82 0x009109b6=>/sys/firmware/edd/int13_dev83 0x78ea04eb=>/sys/firmware/edd/int13_dev84 0xe4fa102c=>/sys/firmware/edd/int13_dev85 0x62286ebf=>/sys/firmware/edd/int13_dev81
Feb 6 03:19:26 server diskdrake[2616]: geometry_from_edd sda 0x76623ac6: 310098/16/63
Feb 6 03:19:26 server diskdrake[2616]: geometry_from_edd sdc 0x7731752e: 1938021/16/63
Feb 6 03:19:26 server diskdrake[2616]: geometry_from_edd sdd 0x009109b6: 1938021/16/63
Feb 6 03:19:26 server diskdrake[2616]: geometry_from_edd sdf 0xe4fa102c: 1938021/16/63
Feb 6 03:19:26 server diskdrake[2616]: geometry_from_edd sde 0x78ea04eb: 1938021/16/63
Feb 6 03:19:26 server diskdrake[2616]: geometry_from_edd sdb 0x62286ebf: 1938021/16/63
Feb 6 03:19:26 server diskdrake[2616]: test_for_bad_drives(/dev/sda on sector #62)
Feb 6 03:19:26 server diskdrake[2616]: running: blkid -o udev -p /dev/sda
Feb 6 03:19:26 server diskdrake[2616]: running: blkid -o udev -p /dev/sda
Feb 6 03:19:26 server diskdrake[2616]: found a dos partition table on /dev/sda at sector 0
Feb 6 03:19:26 server diskdrake[2616]: guess_geometry_from_partition_table sda: 19457/255/63
Feb 6 03:19:26 server diskdrake[2616]: is_geometry_valid_for_the_partition_table failed for (sda1, 62910539): 1023,254,62 vs 1023,3,62 with geometry 310098/16/63
Feb 6 03:19:26 server diskdrake[2616]: sda: using guessed geometry 19457/255/63 instead of 310098/16/63
Feb 6 03:19:26 server diskdrake[2616]: running: blkid -o udev -p /dev/sda1
Feb 6 03:19:26 server diskdrake[2616]: blkid gave: ext4 ce7caea3-f715-424f-bb2a-77d0a91a0a15
Feb 6 03:19:26 server diskdrake[2616]: running: blkid -o udev -p /dev/sda5
Feb 6 03:19:26 server diskdrake[2616]: blkid gave: swap d6f410c2-4727-4448-b68d-a211668948b3
Feb 6 03:19:26 server diskdrake[2616]: running: blkid -o udev -p /dev/sda6
Feb 6 03:19:26 server diskdrake[2616]: blkid gave: ext4 b22989f5-07bb-4c09-add2-66b244cadf01
Feb 6 03:19:26 server diskdrake[2616]: test_for_bad_drives(/dev/sdb on sector #62)
Feb 6 03:19:26 server diskdrake[2616]: running: blkid -o udev -p /dev/sdb
Feb 6 03:19:26 server diskdrake[2616]: running: blkid -o udev -p /dev/sdb
Feb 6 03:19:26 server diskdrake[2616]: found a dos partition table on /dev/sdb at sector 0
Feb 6 03:19:26 server diskdrake[2616]: guess_geometry_from_partition_table sdb: 121601/255/63
Feb 6 03:19:26 server diskdrake[2616]: is_geometry_valid_for_the_partition_table failed for (sdb1, 1953520064): 1023,254,62 vs 1023,14,62 with geometry 1938021/16/63
Feb 6 03:19:26 server diskdrake[2616]: sdb: using guessed geometry 121601/255/63 instead of 1938021/16/63
Feb 6 03:19:26 server diskdrake[2616]: running: blkid -o udev -p /dev/sdb1
Feb 6 03:19:26 server diskdrake[2616]: blkid gave: 0b4906c3-2410-9e0d-263a-576e137e55ea
Feb 6 03:19:26 server diskdrake[2616]: test_for_bad_drives(/dev/sdc on sector #62)
Feb 6 03:19:26 server diskdrake[2616]: running: blkid -o udev -p /dev/sdc
Feb 6 03:19:26 server diskdrake[2616]: running: blkid -o udev -p /dev/sdc
Feb 6 03:19:26 server diskdrake[2616]: found a dos partition table on /dev/sdc at sector 0
Feb 6 03:19:26 server diskdrake[2616]: guess_geometry_from_partition_table sdc: 121601/255/63
Feb 6 03:19:26 server diskdrake[2616]: is_geometry_valid_for_the_partition_table failed for (sdc1, 1953520064): 1023,254,62 vs 1023,14,62 with geometry 1938021/16/63
Feb 6 03:19:26 server diskdrake[2616]: sdc: using guessed geometry 121601/255/63 instead of 1938021/16/63
Feb 6 03:19:26 server diskdrake[2616]: running: blkid -o udev -p /dev/sdc1
Feb 6 03:19:26 server diskdrake[2616]: blkid gave: 0b4906c3-2410-9e0d-263a-576e137e55ea
Feb 6 03:19:26 server diskdrake[2616]: test_for_bad_drives(/dev/sdd on sector #62)
Feb 6 03:19:26 server diskdrake[2616]: running: blkid -o udev -p /dev/sdd
Feb 6 03:19:26 server diskdrake[2616]: running: blkid -o udev -p /dev/sdd
Feb 6 03:19:26 server diskdrake[2616]: found a dos partition table on /dev/sdd at sector 0
Feb 6 03:19:26 server diskdrake[2616]: guess_geometry_from_partition_table sdd: 121601/255/63
Feb 6 03:19:26 server diskdrake[2616]: is_geometry_valid_for_the_partition_table failed for (sdd1, 1953520064): 1023,254,62 vs 1023,14,62 with geometry 1938021/16/63
Feb 6 03:19:26 server diskdrake[2616]: sdd: using guessed geometry 121601/255/63 instead of 1938021/16/63
Feb 6 03:19:26 server diskdrake[2616]: running: blkid -o udev -p /dev/sdd1
Feb 6 03:19:26 server diskdrake[2616]: blkid gave: 0b4906c3-2410-9e0d-263a-576e137e55ea
Feb 6 03:19:26 server diskdrake[2616]: test_for_bad_drives(/dev/sde on sector #62)
Feb 6 03:19:26 server diskdrake[2616]: running: blkid -o udev -p /dev/sde
Feb 6 03:19:26 server diskdrake[2616]: running: blkid -o udev -p /dev/sde
Feb 6 03:19:26 server diskdrake[2616]: found a dos partition table on /dev/sde at sector 0
Feb 6 03:19:26 server diskdrake[2616]: guess_geometry_from_partition_table sde: 121601/255/63
Feb 6 03:19:26 server diskdrake[2616]: is_geometry_valid_for_the_partition_table failed for (sde1, 1953520064): 1023,254,62 vs 1023,14,62 with geometry 1938021/16/63
Feb 6 03:19:26 server diskdrake[2616]: sde: using guessed geometry 121601/255/63 instead of 1938021/16/63
Feb 6 03:19:26 server diskdrake[2616]: running: blkid -o udev -p /dev/sde1
Feb 6 03:19:26 server diskdrake[2616]: blkid gave: 0b4906c3-2410-9e0d-263a-576e137e55ea
Feb 6 03:19:26 server diskdrake[2616]: test_for_bad_drives(/dev/sdf on sector #62)
Feb 6 03:19:26 server diskdrake[2616]: running: blkid -o udev -p /dev/sdf
Feb 6 03:19:26 server diskdrake[2616]: running: blkid -o udev -p /dev/sdf
Feb 6 03:19:26 server diskdrake[2616]: found a dos partition table on /dev/sdf at sector 0
Feb 6 03:19:26 server diskdrake[2616]: guess_geometry_from_partition_table sdf: 121601/255/63
Feb 6 03:19:26 server diskdrake[2616]: is_geometry_valid_for_the_partition_table failed for (sdf1, 1953520064): 1023,254,62 vs 1023,14,62 with geometry 1938021/16/63
Feb 6 03:19:26 server diskdrake[2616]: sdf: using guessed geometry 121601/255/63 instead of 1938021/16/63
Feb 6 03:19:26 server diskdrake[2616]: running: blkid -o udev -p /dev/sdf1
Feb 6 03:19:26 server diskdrake[2616]: blkid gave: 0b4906c3-2410-9e0d-263a-576e137e55ea
Feb 6 03:19:26 server diskdrake[2616]: looking for raids in sdb1 sdc1 sdd1 sde1 sdf1
Feb 6 03:19:26 server diskdrake[2616]: running: mdadm --detail --brief -v /dev/md127
Feb 6 03:19:26 server diskdrake[2616]: running: blkid -o udev -p /dev/md127
Feb 6 03:19:27 server diskdrake[2616]: blkid gave: ext4 af4fcf91-dd36-4655-b410-9019e151c237 Data
Feb 6 03:19:27 server diskdrake[2616]: RAID: found md127 (raid 5) type ext4 with parts /dev/sdc1,/dev/sdb1,/dev/sdd1,/dev/sde1,/dev/sdf1
And further on:
Quote:
Feb 6 03:20:55 server diskdrake[2616]: mount_part: device=md127 mntpoint=/mnt/Data isMounted= real_mntpoint= device_UUID=af4fcf91-dd36-4655-b410-9019e151c237
Feb 6 03:20:55 server diskdrake[2616]: mounting /dev/md127 on /mnt/Data as type ext4, options
Feb 6 03:20:55 server diskdrake[2616]: created directory /mnt/Data (and parents if necessary)
Feb 6 03:20:55 server diskdrake[2616]: running: mount -t ext4 /dev/md127 /mnt/Data
Feb 6 03:20:55 server kernel: EXT4-fs (md127): warning: maximal mount count reached, running e2fsck is recommended
Feb 6 03:20:56 server kernel: EXT4-fs (md127): mounted filesystem with ordered data mode
Feb 6 03:47:51 server kernel: raid5: device sdc1 operational as raid disk 0
Feb 6 03:47:51 server kernel: raid5: device sdf1 operational as raid disk 4
Feb 6 03:47:51 server kernel: raid5: device sde1 operational as raid disk 3
Feb 6 03:47:51 server kernel: raid5: device sdd1 operational as raid disk 2
Feb 6 03:47:51 server kernel: raid5: device sdb1 operational as raid disk 1
Feb 6 03:47:51 server kernel: raid5: allocated 5334kB for md127
Feb 6 03:47:51 server kernel: 0: w=1 pa=0 pr=5 m=1 a=2 r=5 op1=0 op2=0
Feb 6 03:47:51 server kernel: 4: w=2 pa=0 pr=5 m=1 a=2 r=5 op1=0 op2=0
Feb 6 03:47:51 server kernel: 3: w=3 pa=0 pr=5 m=1 a=2 r=5 op1=0 op2=0
Feb 6 03:47:51 server kernel: 2: w=4 pa=0 pr=5 m=1 a=2 r=5 op1=0 op2=0
Feb 6 03:47:51 server kernel: 1: w=5 pa=0 pr=5 m=1 a=2 r=5 op1=0 op2=0
Feb 6 03:47:51 server kernel: raid5: raid level 5 set md127 active with 5 out of 5 devices, algorithm 2
Feb 6 03:47:51 server kernel: RAID5 conf printout:
Feb 6 03:47:51 server kernel: --- rd:5 wd:5
Feb 6 03:47:51 server kernel: disk 0, o:1, dev:sdc1
Feb 6 03:47:51 server kernel: disk 1, o:1, dev:sdb1
Feb 6 03:47:51 server kernel: disk 2, o:1, dev:sdd1
Feb 6 03:47:51 server kernel: disk 3, o:1, dev:sde1
Feb 6 03:47:51 server kernel: disk 4, o:1, dev:sdf1
Feb 6 03:47:51 server kernel: md127: detected capacity change from 0 to 4000808697856
Feb 6 03:47:51 server kernel: md127: unknown partition table
Here is where the change seems to happen:
Quote:
Feb 6 03:55:05 server kernel: md: md127 stopped.
Feb 6 03:55:05 server kernel: md: unbind<sdc1>
Feb 6 03:55:05 server kernel: md: export_rdev(sdc1)
Feb 6 03:55:05 server kernel: md: unbind<sdf1>
Feb 6 03:55:05 server kernel: md: export_rdev(sdf1)
Feb 6 03:55:05 server kernel: md: unbind<sde1>
Feb 6 03:55:05 server kernel: md: export_rdev(sde1)
Feb 6 03:55:05 server kernel: md: unbind<sdd1>
Feb 6 03:55:05 server kernel: md: export_rdev(sdd1)
Feb 6 03:55:05 server kernel: md: unbind<sdb1>
Feb 6 03:55:05 server kernel: md: export_rdev(sdb1)
Feb 6 03:55:05 server kernel: md127: detected capacity change from 4000808697856 to 0
Feb 6 03:55:05 server mdmonitor: DeviceDisappeared event on /dev/md127
Feb 6 03:59:56 server kernel: md: bind<sdb1>
Feb 6 03:59:56 server kernel: md: bind<sdc1>
Feb 6 03:59:56 server kernel: md: bind<sdd1>
Feb 6 03:59:56 server kernel: md: bind<sde1>
Feb 6 03:59:56 server kernel: md: bind<sdf1>
Feb 6 03:59:56 server kernel: raid5: device sde1 operational as raid disk 3
Feb 6 03:59:56 server kernel: raid5: device sdd1 operational as raid disk 2
Feb 6 03:59:56 server kernel: raid5: device sdc1 operational as raid disk 1
Feb 6 03:59:56 server kernel: raid5: device sdb1 operational as raid disk 0
Feb 6 03:59:56 server kernel: raid5: allocated 5334kB for md2
Feb 6 03:59:56 server kernel: 3: w=1 pa=0 pr=5 m=1 a=2 r=5 op1=0 op2=0
Feb 6 03:59:56 server kernel: 2: w=2 pa=0 pr=5 m=1 a=2 r=5 op1=0 op2=0
Feb 6 03:59:56 server kernel: 1: w=3 pa=0 pr=5 m=1 a=2 r=5 op1=0 op2=0
Feb 6 03:59:56 server kernel: 0: w=4 pa=0 pr=5 m=1 a=2 r=5 op1=0 op2=0
Feb 6 03:59:56 server kernel: raid5: raid level 5 set md2 active with 4 out of 5 devices, algorithm 2
Feb 6 03:59:56 server kernel: RAID5 conf printout:
Feb 6 03:59:56 server kernel: --- rd:5 wd:4
Feb 6 03:59:56 server kernel: disk 0, o:1, dev:sdb1
Feb 6 03:59:56 server kernel: disk 1, o:1, dev:sdc1
Feb 6 03:59:56 server kernel: disk 2, o:1, dev:sdd1
Feb 6 03:59:56 server kernel: disk 3, o:1, dev:sde1
Feb 6 03:59:56 server kernel: md2: detected capacity change from 0 to 4000803979264
Feb 6 03:59:56 server kernel: RAID5 conf printout:
Feb 6 03:59:56 server kernel: --- rd:5 wd:4
Feb 6 03:59:56 server kernel: disk 0, o:1, dev:sdb1
Feb 6 03:59:56 server kernel: disk 1, o:1, dev:sdc1
Feb 6 03:59:56 server kernel: disk 2, o:1, dev:sdd1
Feb 6 03:59:56 server kernel: disk 3, o:1, dev:sde1
Feb 6 03:59:56 server kernel: disk 4, o:1, dev:sdf1
Feb 6 03:59:56 server kernel: md: recovery of RAID array md2
Feb 6 03:59:56 server kernel: md: minimum _guaranteed_ speed: 1000 KB/sec/disk.
Feb 6 03:59:56 server kernel: md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for recovery.
Feb 6 03:59:56 server kernel: md: using 128k window, over a total of 976758784 blocks.
Feb 6 03:59:56 server kernel: md2: unknown partition table
Feb 6 03:59:56 server mdmonitor: NewArray event on /dev/md2
Feb 6 03:59:56 server mdmonitor: DegradedArray event on /dev/md2
Then, after the rebuild, the logs look like this:
Quote:
Feb 6 09:15:31 server kernel: md: md2: recovery done.
Feb 6 09:15:31 server kernel: RAID5 conf printout:
Feb 6 09:15:31 server kernel: --- rd:5 wd:5
Feb 6 09:15:31 server kernel: disk 0, o:1, dev:sdb1
Feb 6 09:15:31 server kernel: disk 1, o:1, dev:sdc1
Feb 6 09:15:31 server kernel: disk 2, o:1, dev:sdd1
Feb 6 09:15:31 server kernel: disk 3, o:1, dev:sde1
Feb 6 09:15:31 server kernel: disk 4, o:1, dev:sdf1
Feb 6 09:15:31 server mdmonitor: RebuildFinished event on /dev/md2
Feb 6 09:15:31 server mdmonitor: SpareActive event on /dev/md2
...
Feb 6 11:42:42 server kernel: raid5: device sdb1 operational as raid disk 0
Feb 6 11:42:42 server kernel: raid5: device sdf1 operational as raid disk 4
Feb 6 11:42:42 server kernel: raid5: device sde1 operational as raid disk 3
Feb 6 11:42:42 server kernel: raid5: device sdd1 operational as raid disk 2
Feb 6 11:42:42 server kernel: raid5: device sdc1 operational as raid disk 1
Feb 6 11:42:42 server kernel: raid5: allocated 5334kB for md2
Feb 6 11:42:42 server kernel: 0: w=1 pa=0 pr=5 m=1 a=2 r=5 op1=0 op2=0
Feb 6 11:42:42 server kernel: 4: w=2 pa=0 pr=5 m=1 a=2 r=5 op1=0 op2=0
Feb 6 11:42:42 server kernel: 3: w=3 pa=0 pr=5 m=1 a=2 r=5 op1=0 op2=0
Feb 6 11:42:42 server kernel: 2: w=4 pa=0 pr=5 m=1 a=2 r=5 op1=0 op2=0
Feb 6 11:42:42 server kernel: 1: w=5 pa=0 pr=5 m=1 a=2 r=5 op1=0 op2=0
Feb 6 11:42:42 server kernel: raid5: raid level 5 set md2 active with 5 out of 5 devices, algorithm 2
Feb 6 11:42:42 server kernel: RAID5 conf printout:
Feb 6 11:42:42 server kernel: --- rd:5 wd:5
Feb 6 11:42:42 server kernel: disk 0, o:1, dev:sdb1
Feb 6 11:42:42 server kernel: disk 1, o:1, dev:sdc1
Feb 6 11:42:42 server kernel: disk 2, o:1, dev:sdd1
Feb 6 11:42:42 server kernel: disk 3, o:1, dev:sde1
Feb 6 11:42:42 server kernel: disk 4, o:1, dev:sdf1
I have highlighted the order of the disks at the beginning and after the rebuild. As you can see, disks sdb and sdc were swapped in the array order.
I also understand that the rebuild changed the position of the parity, and that this may be one of the reasons it does not recognize the filesystem on the array. Apart from other obvious things. :D
The million dollar question is:
How can I re-create the array with the disks in the correct order?
I have the information on the order of the disks and also the UUID.
I have not done anything yet, because if I do a rebuild again in the wrong order I'm sure I will lose everything.
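For reference, this is the kind of command I think would be needed, based on the disk order in the first logs above. I have NOT run this; it is only a sketch, and the metadata and chunk options would have to match whatever the original mdadm used:

```shell
# Save the current superblock info first -- --examine is read-only and safe.
for d in /dev/sd[b-f]1; do mdadm --examine "$d"; done > md-examine.txt

# Re-create over the same devices in the ORIGINAL order from the first
# good boot: disk 0=sdc1, 1=sdb1, 2=sdd1, 3=sde1, 4=sdf1.
# --assume-clean rewrites only the superblocks and skips the resync,
# so the data blocks themselves are left alone.
mdadm --stop /dev/md2
mdadm --create /dev/md127 --assume-clean --level=5 --raid-devices=5 \
      /dev/sdc1 /dev/sdb1 /dev/sdd1 /dev/sde1 /dev/sdf1
```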
I'm afraid that if you removed the broken array and put it back in a different order, it won't work. You need to have marked down which port you took each drive from. Is this what you are saying, that you rearranged the drives?
I have not changed the order of the disks physically; their cabling order is correct.
The change happened in software. The first time, the array was recognized correctly. Then it did a rebuild, I don't know why, and two disks appeared swapped.
Look at this:
Quote:
Feb 6 03:47:51 server kernel: disk 0, o:1, dev:sdc1
Feb 6 03:47:51 server kernel: disk 1, o:1, dev:sdb1
Feb 6 03:47:51 server kernel: disk 2, o:1, dev:sdd1
Feb 6 03:47:51 server kernel: disk 3, o:1, dev:sde1
Feb 6 03:47:51 server kernel: disk 4, o:1, dev:sdf1
And later:
Quote:
Feb 6 11:42:42 server kernel: disk 0, o:1, dev:sdb1
Feb 6 11:42:42 server kernel: disk 1, o:1, dev:sdc1
Feb 6 11:42:42 server kernel: disk 2, o:1, dev:sdd1
Feb 6 11:42:42 server kernel: disk 3, o:1, dev:sde1
Feb 6 11:42:42 server kernel: disk 4, o:1, dev:sdf1
That's really weird. I'd suggest you contact the software vendor and ask if this problem has ever happened before and how to fix it, before doing anything else and risking losing your data.
True, it is very strange. I have not done anything yet.
I have tried to make compressed images of the disks with dd and bzip2, but I can't mount them. I will buy two 2TB disks to create images of the disks, then do the tests. For the moment I'm finding out what can be done to recover the disks.
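Something like this is what I am trying for the images (a sketch; /mnt/backup stands for wherever the new disks get mounted):

```shell
# Image each member disk, compressed. conv=noerror,sync keeps dd going
# past read errors and pads the unreadable sectors with zeros.
for d in sdb sdc sdd sde sdf; do
    dd if=/dev/$d bs=4M conv=noerror,sync | gzip -c > /mnt/backup/$d.img.gz
done
```

A compressed image cannot be mounted directly, which is probably why mounting fails; it has to be decompressed first and then attached as a loop device (e.g. `losetup -f --show /mnt/backup/sdb.img`).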
In this post, @garydale talks about something similar, but the script he uses is not clear to me, so I have not tried it.
According to this wiki, it is possible to recreate the original order of the disks. But no one says anything about what happens to the data when a rebuild was done first...
So I want to make the images before attempting anything.
Well, I have to say I was able to fix my RAID5. Searching Google, I found this post, which gave me hope of recovering the array. Digging out the old RAID1 system drive, and using R-Studio, I managed to recover data from the RAID5 and find the version of mdadm I had used to create it.
The problem was that the metadata version was 0.90 and the chunk size was 64k; in the new version of mdadm the defaults are 1.2 and 512k respectively.
If someone has a similar or identical problem, read the article; it took away my fear of losing data.
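In other words, when re-creating an array made by an old mdadm, the old defaults have to be passed explicitly instead of letting the new version pick its own. A sketch, with the device order from my original array:

```shell
# Force the OLD mdadm defaults (0.90 metadata, 64k chunk) instead of
# the new 1.2/512k, and skip the resync so the data is not touched.
mdadm --create /dev/md127 --assume-clean \
      --metadata=0.90 --chunk=64 \
      --level=5 --raid-devices=5 \
      /dev/sdc1 /dev/sdb1 /dev/sdd1 /dev/sde1 /dev/sdf1
```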
What did you do to rebuild your array? It sounds like you actually re-created the array and changed the geometry of it completely.
Quote:
Originally Posted by scanray
I have been using software RAID5 for several years. I started with 3 x 320GB drives and now have 5 x 1TB.
I had to replace the motherboard, which was where the failures were occurring.
You had 3 x 320GB drives previously and you moved to 5 x 1TB drives, correct? From the timestamps it seems that was done some time ago, yes? Has it ever worked with the five 1TB drives?
Quote:
Originally Posted by scanray
After installing the system, I connected the RAID disks. The system saw the disks and recognized them as RAID5, so no problem up to here. I mounted the array and copied several configuration files; so far so good.
I rebooted the machine and noticed with surprise that it was rebuilding the array. I let it finish. No luck.
At the end, the message was "No disk format."
Did it actually finish rebuilding? Do you still have the original command you used to build the array? Did you run fsck?
Can you show the output of mdadm -E /dev/sdf1 and mdadm -E /dev/sdb1?
As I said in my first post, I messed up when re-creating the RAID5. The only information I had was the original order of the disks; I did not take into account the change of mdadm version, whose defaults had changed. The link I posted helped clear up many doubts I had about losing data.
Quote:
What did you do to rebuild your array? It sounds like you actually re-created the array and changed the geometry of it completely.
The first thing I did was find the appropriate values by looking at the old logs. I did not change the geometry when recreating the array; if you change the geometry, you lose the array.
Quote:
You had 3 320GB drives previously and you moved to 5 1Terabyte drives correct? from the time stamps it seems that was done some time ago, yes? Has it ever worked with the 5 1TB drives?
True, I had 3 x 320GB. I switched to 3 x 1TB and grew the array. Then, over time, I added the other two drives one by one. Yes, it worked with the five 1TB drives until now.
Quote:
Did it actually finish rebuilding? Do you know have the original command you used to build the array? Did you let it run fsck?
The rebuild finished, and I have since run the array for more than two weeks.
After the rebuild I copied all the data to other disks, zeroed the disks, and re-created the array. I am 100% sure this zeroing of the disks was not necessary. All my data was 100% recovered.
You ALWAYS need to run fsck after assembling.
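For example, checking read-only first before touching anything (device names as in my setup):

```shell
# -n: check only, change nothing. Mount only after this comes back clean.
fsck.ext4 -n /dev/md127

# If it looks sane, repair automatically and mount:
fsck.ext4 -p /dev/md127
mount /dev/md127 /mnt/Data
```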
Ah, I apologize; I was examining the logs and didn't see that you had already solved this problem. I had a similar issue myself with a 12-disk array. The information on the defaults from version to version will be very helpful in the future.
Quote:
The information on the defaults from version to version will be very helpful in the future.
You can get the default values from the rpm packages on the web (I use Mandriva).
Quote:
Originally Posted by snowmobile74
Long live the Linux Raid!
Yes, there really is nothing like Linux RAID.
Both Windows and OS X have many recovery programs, some more complicated than others and all equally difficult to understand.
In Linux, however, with just the standard creation and administration tools, well managed of course, you can solve everything.