Old 10-03-2015, 10:31 AM   #1
hortageno
re-adding hard disk to RAID5 fails with mdadm


Hi all,

I thought I'd ask my questions here rather than in the software forum.

I have a backup server running Ubuntu 14.04.3 which holds all my local backups on 3x 4TB drives in a RAID5 array. Two days ago I wanted to test whether this server boots correctly with a degraded array, so I ran

Code:
mdadm /dev/md0 --fail /dev/sdd1
and shut down the server. Then I removed the hard disk and booted. Everything went fine, so I shut down the server again, re-attached the hard disk, booted and ran

Code:
mdadm /dev/md0 --add /dev/sdd1
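For reference, a rough sketch of how the rebuild can be monitored from here (assuming the array is /dev/md0, as in this thread):

Code:
# overall rebuild progress and estimated finish time
cat /proc/mdstat

# detailed array state, including the rebuild percentage
mdadm --detail /dev/md0

# follow md-related messages as they arrive
tail -f /var/log/syslog | grep -iE 'md0|mdadm'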
I checked the progress several times and everything seemed to be running smoothly. The next day, when the re-sync should have finished, the array was in a degraded state again. Here is the relevant syslog excerpt:

Code:
Oct  1 10:19:37 backup mdadm[1550]: DeviceDisappeared event detected on md device /dev/md/server:0
Oct  1 10:38:47 backup mdadm[1549]: DeviceDisappeared event detected on md device /dev/md/server:0
Oct  1 10:42:27 backup mdadm[1549]: DeviceDisappeared event detected on md device /dev/md/server:0
Oct  1 10:47:57 backup mdadm[1470]: Fail event detected on md device /dev/md/server:0
Oct  1 10:47:57 backup mdadm[1470]: FailSpare event detected on md device /dev/md/server:0, component device /dev/sdd1
Oct  1 10:49:30 backup mdadm[1441]: DegradedArray event detected on md device /dev/md/server:0
Oct  1 10:50:29 backup mdadm[3033]: DegradedArray event detected on md device /dev/md/server:0
Oct  1 10:53:07 backup mdadm[1442]: DegradedArray event detected on md device /dev/md/server:0
Oct  1 10:55:04 backup mdadm[1436]: DegradedArray event detected on md device /dev/md/server:0
Oct  1 10:57:11 backup mdadm[1452]: DegradedArray event detected on md device /dev/md/server:0
Oct  1 10:59:16 backup mdadm[1437]: DegradedArray event detected on md device /dev/md/server:0
Oct  1 11:01:15 backup mdadm[1476]: DegradedArray event detected on md device /dev/md/server:0
Oct  1 11:02:52 backup kernel: [  133.944836] md: minimum _guaranteed_  speed: 1000 KB/sec/disk.
Oct  1 11:02:52 backup kernel: [  133.944840] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for recovery.
Oct  1 11:02:52 backup kernel: [  133.944848] md: using 128k window, over a total of 3906885120k.
Oct  1 11:02:52 backup mdadm[1476]: RebuildStarted event detected on md device /dev/md/server:0
Oct  1 14:56:13 backup mdadm[1476]: Rebuild20 event detected on md device /dev/md/server:0
Oct  1 15:26:19 backup kernel: [15933.253596] perf interrupt took too long (2501 > 2500), lowering kernel.perf_event_max_sample_rate to 50000
Oct  1 18:49:35 backup mdadm[1476]: Rebuild40 event detected on md device /dev/md/server:0
Oct  1 22:59:36 backup mdadm[1476]: Rebuild60 event detected on md device /dev/md/server:0
Oct  2 03:26:17 backup mdadm[1476]: Rebuild80 event detected on md device /dev/md/server:0
Oct  2 04:21:16 backup kernel: [62407.504238] ata4.00: exception Emask 0x0 SAct 0x6000000 SErr 0x0 action 0x0
Oct  2 04:21:16 backup kernel: [62407.504260] ata4.00: irq_stat 0x40000008
Oct  2 04:21:16 backup kernel: [62407.504274] ata4.00: failed command: WRITE FPDMA QUEUED
Oct  2 04:21:16 backup kernel: [62407.504294] ata4.00: cmd 61/40:d0:68:20:a8/05:00:89:01:00/40 tag 26 ncq 688128 out
Oct  2 04:21:16 backup kernel: [62407.504294]          res 41/10:00:68:20:a8/00:00:89:01:00/40 Emask 0x481 (invalid argument) <F>
Oct  2 04:21:16 backup kernel: [62407.504315] ata4.00: status: { DRDY ERR }
Oct  2 04:21:16 backup kernel: [62407.504324] ata4.00: error: { IDNF }
Oct  2 04:21:16 backup kernel: [62407.505776] ata4.00: configured for UDMA/133
Oct  2 04:21:16 backup kernel: [62407.505813] ata4: EH complete
Oct  2 04:21:23 backup kernel: [62414.512558] ata4.00: exception Emask 0x0 SAct 0x18000000 SErr 0x0 action 0x0
Oct  2 04:21:23 backup kernel: [62414.512581] ata4.00: irq_stat 0x40000008
Oct  2 04:21:23 backup kernel: [62414.512595] ata4.00: failed command: WRITE FPDMA QUEUED
Oct  2 04:21:23 backup kernel: [62414.512615] ata4.00: cmd 61/40:d8:68:20:a8/05:00:89:01:00/40 tag 27 ncq 688128 out
Oct  2 04:21:23 backup kernel: [62414.512615]          res 41/10:00:68:20:a8/00:00:89:01:00/40 Emask 0x481 (invalid argument) <F>
Oct  2 04:21:23 backup kernel: [62414.512635] ata4.00: status: { DRDY ERR }
Oct  2 04:21:23 backup kernel: [62414.512644] ata4.00: error: { IDNF }
Oct  2 04:21:23 backup kernel: [62414.514326] ata4.00: configured for UDMA/133
Oct  2 04:21:23 backup kernel: [62414.514373] ata4: EH complete
Oct  2 04:21:30 backup kernel: [62421.521090] ata4.00: exception Emask 0x0 SAct 0x60000000 SErr 0x0 action 0x0
Oct  2 04:21:30 backup kernel: [62421.521110] ata4.00: irq_stat 0x40000008
Oct  2 04:21:30 backup kernel: [62421.521123] ata4.00: failed command: WRITE FPDMA QUEUED
Oct  2 04:21:30 backup kernel: [62421.521145] ata4.00: cmd 61/c0:e8:a8:1d:a8/02:00:89:01:00/40 tag 29 ncq 360448 out
Oct  2 04:21:30 backup kernel: [62421.521145]          res 41/10:00:a8:1d:a8/00:00:89:01:00/40 Emask 0x481 (invalid argument) <F>
Oct  2 04:21:30 backup kernel: [62421.521174] ata4.00: status: { DRDY ERR }
Oct  2 04:21:30 backup kernel: [62421.521184] ata4.00: error: { IDNF }
Oct  2 04:21:30 backup kernel: [62421.522875] ata4.00: configured for UDMA/133
Oct  2 04:21:30 backup kernel: [62421.522922] ata4: EH complete
Oct  2 04:21:38 backup kernel: [62428.529575] ata4.00: exception Emask 0x0 SAct 0x3 SErr 0x0 action 0x0
Oct  2 04:21:38 backup kernel: [62428.529599] ata4.00: irq_stat 0x40000008
Oct  2 04:21:38 backup kernel: [62428.529614] ata4.00: failed command: WRITE FPDMA QUEUED
Oct  2 04:21:38 backup kernel: [62428.529636] ata4.00: cmd 61/40:00:68:20:a8/05:00:89:01:00/40 tag 0 ncq 688128 out
Oct  2 04:21:38 backup kernel: [62428.529636]          res 41/10:00:68:20:a8/00:00:89:01:00/40 Emask 0x481 (invalid argument) <F>
Oct  2 04:21:38 backup kernel: [62428.529665] ata4.00: status: { DRDY ERR }
Oct  2 04:21:38 backup kernel: [62428.529676] ata4.00: error: { IDNF }
Oct  2 04:21:38 backup kernel: [62428.531416] ata4.00: configured for UDMA/133
Oct  2 04:21:38 backup kernel: [62428.531463] ata4: EH complete
Oct  2 04:21:45 backup kernel: [62435.538184] ata4.00: exception Emask 0x0 SAct 0xc SErr 0x0 action 0x0
Oct  2 04:21:45 backup kernel: [62435.538208] ata4.00: irq_stat 0x40000008
Oct  2 04:21:45 backup kernel: [62435.538222] ata4.00: failed command: WRITE FPDMA QUEUED
Oct  2 04:21:45 backup kernel: [62435.538243] ata4.00: cmd 61/c0:10:a8:1d:a8/02:00:89:01:00/40 tag 2 ncq 360448 out
Oct  2 04:21:45 backup kernel: [62435.538243]          res 41/10:00:a8:1d:a8/00:00:89:01:00/40 Emask 0x481 (invalid argument) <F>
Oct  2 04:21:45 backup kernel: [62435.538272] ata4.00: status: { DRDY ERR }
Oct  2 04:21:45 backup kernel: [62435.538282] ata4.00: error: { IDNF }
Oct  2 04:21:45 backup kernel: [62435.539971] ata4.00: configured for UDMA/133
Oct  2 04:21:45 backup kernel: [62435.540017] ata4: EH complete
Oct  2 04:21:52 backup kernel: [62442.546727] ata4.00: exception Emask 0x0 SAct 0x30 SErr 0x0 action 0x0
Oct  2 04:21:52 backup kernel: [62442.546751] ata4.00: irq_stat 0x40000008
Oct  2 04:21:52 backup kernel: [62442.546766] ata4.00: failed command: WRITE FPDMA QUEUED
Oct  2 04:21:52 backup kernel: [62442.546786] ata4.00: cmd 61/40:20:68:20:a8/05:00:89:01:00/40 tag 4 ncq 688128 out
Oct  2 04:21:52 backup kernel: [62442.546786]          res 41/10:00:68:20:a8/00:00:89:01:00/40 Emask 0x481 (invalid argument) <F>
Oct  2 04:21:52 backup kernel: [62442.546815] ata4.00: status: { DRDY ERR }
Oct  2 04:21:52 backup kernel: [62442.546825] ata4.00: error: { IDNF }
Oct  2 04:21:52 backup kernel: [62442.548513] ata4.00: configured for UDMA/133
Oct  2 04:21:52 backup kernel: [62442.548558] ata4: EH complete
Oct  2 04:21:59 backup kernel: [62449.555269] ata4.00: exception Emask 0x0 SAct 0xc0 SErr 0x0 action 0x0
Oct  2 04:21:59 backup kernel: [62449.555293] ata4.00: irq_stat 0x40000008
Oct  2 04:21:59 backup kernel: [62449.555308] ata4.00: failed command: WRITE FPDMA QUEUED
Oct  2 04:21:59 backup kernel: [62449.555328] ata4.00: cmd 61/c0:30:a8:1d:a8/02:00:89:01:00/40 tag 6 ncq 360448 out
Oct  2 04:21:59 backup kernel: [62449.555328]          res 41/10:00:a8:1d:a8/00:00:89:01:00/40 Emask 0x481 (invalid argument) <F>
Oct  2 04:21:59 backup kernel: [62449.555357] ata4.00: status: { DRDY ERR }
Oct  2 04:21:59 backup kernel: [62449.555367] ata4.00: error: { IDNF }
Oct  2 04:21:59 backup kernel: [62449.557040] ata4.00: configured for UDMA/133
Oct  2 04:21:59 backup kernel: [62449.557086] ata4: EH complete
Oct  2 04:22:06 backup kernel: [62456.563823] ata4.00: exception Emask 0x0 SAct 0x300 SErr 0x0 action 0x0
Oct  2 04:22:06 backup kernel: [62456.563847] ata4.00: irq_stat 0x40000008
Oct  2 04:22:06 backup kernel: [62456.563861] ata4.00: failed command: WRITE FPDMA QUEUED
Oct  2 04:22:06 backup kernel: [62456.563882] ata4.00: cmd 61/40:40:68:20:a8/05:00:89:01:00/40 tag 8 ncq 688128 out
Oct  2 04:22:06 backup kernel: [62456.563882]          res 41/10:00:68:20:a8/00:00:89:01:00/40 Emask 0x481 (invalid argument) <F>
Oct  2 04:22:06 backup kernel: [62456.563911] ata4.00: status: { DRDY ERR }
Oct  2 04:22:06 backup kernel: [62456.563922] ata4.00: error: { IDNF }
Oct  2 04:22:06 backup kernel: [62456.565632] ata4.00: configured for UDMA/133
Oct  2 04:22:06 backup kernel: [62456.565678] ata4: EH complete
Oct  2 04:22:13 backup kernel: [62463.572362] ata4.00: exception Emask 0x0 SAct 0xc00 SErr 0x0 action 0x0
Oct  2 04:22:13 backup kernel: [62463.572386] ata4.00: irq_stat 0x40000008
Oct  2 04:22:13 backup kernel: [62463.572401] ata4.00: failed command: WRITE FPDMA QUEUED
Oct  2 04:22:13 backup kernel: [62463.572421] ata4.00: cmd 61/c0:50:a8:1d:a8/02:00:89:01:00/40 tag 10 ncq 360448 out
Oct  2 04:22:13 backup kernel: [62463.572421]          res 41/10:00:a8:1d:a8/00:00:89:01:00/40 Emask 0x481 (invalid argument) <F>
Oct  2 04:22:13 backup kernel: [62463.572450] ata4.00: status: { DRDY ERR }
Oct  2 04:22:13 backup kernel: [62463.572460] ata4.00: error: { IDNF }
Oct  2 04:22:13 backup kernel: [62463.574211] ata4.00: configured for UDMA/133
Oct  2 04:22:13 backup kernel: [62463.574257] ata4: EH complete
Oct  2 04:22:20 backup kernel: [62470.580904] ata4.00: exception Emask 0x0 SAct 0x3000 SErr 0x0 action 0x0
Oct  2 04:22:20 backup kernel: [62470.580929] ata4.00: irq_stat 0x40000008
Oct  2 04:22:20 backup kernel: [62470.580943] ata4.00: failed command: WRITE FPDMA QUEUED
Oct  2 04:22:20 backup kernel: [62470.580964] ata4.00: cmd 61/40:60:68:20:a8/05:00:89:01:00/40 tag 12 ncq 688128 out
Oct  2 04:22:20 backup kernel: [62470.580964]          res 41/10:00:68:20:a8/00:00:89:01:00/40 Emask 0x481 (invalid argument) <F>
Oct  2 04:22:20 backup kernel: [62470.580993] ata4.00: status: { DRDY ERR }
Oct  2 04:22:20 backup kernel: [62470.581003] ata4.00: error: { IDNF }
Oct  2 04:22:20 backup kernel: [62470.583759] ata4.00: configured for UDMA/133
Oct  2 04:22:20 backup kernel: [62470.583857] sd 3:0:0:0: [sdd] FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
Oct  2 04:22:20 backup kernel: [62470.583865] sd 3:0:0:0: [sdd] Sense Key : Illegal Request [current] [descriptor]
Oct  2 04:22:20 backup kernel: [62470.583873] sd 3:0:0:0: [sdd] Add. Sense: Logical block address out of range
Oct  2 04:22:20 backup kernel: [62470.583877] sd 3:0:0:0: [sdd] CDB: 
Oct  2 04:22:20 backup kernel: [62470.583881] Write(16): 8a 00 00 00 00 01 89 a8 20 68 00 00 05 40 00 00
Oct  2 04:22:20 backup kernel: [62470.583903] blk_update_request: I/O error, dev sdd, sector 6604464232
Oct  2 04:22:20 backup kernel: [62470.584047] md/raid:md0: Disk failure on sdd1, disabling device.
Oct  2 04:22:20 backup kernel: [62470.584047] md/raid:md0: Operation continuing on 2 devices.
Oct  2 04:22:20 backup kernel: [62470.584075] ata4: EH complete
Oct  2 04:22:20 backup kernel: [62470.608486] md: md0: recovery interrupted.
Oct  2 04:22:20 backup mdadm[1476]: FailSpare event detected on md device /dev/md/server:0, component device /dev/sdd1
Oct  2 04:22:27 backup kernel: [62477.589530] ata4.00: exception Emask 0x0 SAct 0x4000 SErr 0x0 action 0x0
Oct  2 04:22:27 backup kernel: [62477.589555] ata4.00: irq_stat 0x40000008
Oct  2 04:22:27 backup kernel: [62477.589571] ata4.00: failed command: WRITE FPDMA QUEUED
Oct  2 04:22:27 backup kernel: [62477.589594] ata4.00: cmd 61/c0:70:a8:1d:a8/02:00:89:01:00/40 tag 14 ncq 360448 out
Oct  2 04:22:27 backup kernel: [62477.589594]          res 41/10:00:a8:1d:a8/00:00:89:01:00/40 Emask 0x481 (invalid argument) <F>
Oct  2 04:22:27 backup kernel: [62477.589624] ata4.00: status: { DRDY ERR }
Oct  2 04:22:27 backup kernel: [62477.589635] ata4.00: error: { IDNF }
Oct  2 04:22:27 backup kernel: [62477.592210] ata4.00: configured for UDMA/133
Oct  2 04:22:27 backup kernel: [62477.592251] ata4: EH complete
Oct  2 04:22:34 backup kernel: [62484.598477] ata4.00: exception Emask 0x0 SAct 0x8000 SErr 0x0 action 0x0
Oct  2 04:22:34 backup kernel: [62484.598502] ata4.00: irq_stat 0x40000008
Oct  2 04:22:34 backup kernel: [62484.598518] ata4.00: failed command: WRITE FPDMA QUEUED
Oct  2 04:22:34 backup kernel: [62484.598539] ata4.00: cmd 61/c0:78:a8:1d:a8/02:00:89:01:00/40 tag 15 ncq 360448 out
Oct  2 04:22:34 backup kernel: [62484.598539]          res 41/10:00:a8:1d:a8/00:00:89:01:00/40 Emask 0x481 (invalid argument) <F>
Oct  2 04:22:34 backup kernel: [62484.598569] ata4.00: status: { DRDY ERR }
Oct  2 04:22:34 backup kernel: [62484.598579] ata4.00: error: { IDNF }
Oct  2 04:22:34 backup kernel: [62484.600294] ata4.00: configured for UDMA/133
Oct  2 04:22:34 backup kernel: [62484.600363] sd 3:0:0:0: [sdd] FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
Oct  2 04:22:34 backup kernel: [62484.600371] sd 3:0:0:0: [sdd] Sense Key : Illegal Request [current] [descriptor]
Oct  2 04:22:34 backup kernel: [62484.600379] sd 3:0:0:0: [sdd] Add. Sense: Logical block address out of range
Oct  2 04:22:34 backup kernel: [62484.600384] sd 3:0:0:0: [sdd] CDB: 
Oct  2 04:22:34 backup kernel: [62484.600387] Write(16): 8a 00 00 00 00 01 89 a8 1d a8 00 00 02 c0 00 00
Oct  2 04:22:34 backup kernel: [62484.600410] blk_update_request: I/O error, dev sdd, sector 6604463528
Oct  2 04:22:34 backup kernel: [62484.600526] ata4: EH complete
Oct  2 04:22:34 backup kernel: [62484.642165] RAID conf printout:
Oct  2 04:22:34 backup kernel: [62484.642178]  --- level:5 rd:3 wd:2
Oct  2 04:22:34 backup kernel: [62484.642186]  disk 0, o:1, dev:sdb1
Oct  2 04:22:34 backup kernel: [62484.642191]  disk 1, o:1, dev:sdc1
Oct  2 04:22:34 backup kernel: [62484.642196]  disk 2, o:0, dev:sdd1
Oct  2 04:22:34 backup kernel: [62484.649658] RAID conf printout:
Oct  2 04:22:34 backup kernel: [62484.649669]  --- level:5 rd:3 wd:2
Oct  2 04:22:34 backup kernel: [62484.649675]  disk 0, o:1, dev:sdb1
Oct  2 04:22:34 backup kernel: [62484.649679]  disk 1, o:1, dev:sdc1
Oct  2 04:22:34 backup mdadm[1476]: RebuildFinished event detected on md device /dev/md/server:0
Oct  2 04:23:29 backup kernel: [62540.122305] ata4.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen
Oct  2 04:23:29 backup kernel: [62540.122335] ata4.00: failed command: IDENTIFY DEVICE
Oct  2 04:23:29 backup kernel: [62540.122358] ata4.00: cmd ec/00:01:00:00:00/00:00:00:00:00/00 tag 16 pio 512 in
Oct  2 04:23:29 backup kernel: [62540.122358]          res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
Oct  2 04:23:29 backup kernel: [62540.122388] ata4.00: status: { DRDY }
Oct  2 04:23:29 backup kernel: [62540.122403] ata4: hard resetting link
Oct  2 04:23:39 backup kernel: [62550.129327] ata4: softreset failed (device not ready)
Oct  2 04:23:39 backup kernel: [62550.129353] ata4: hard resetting link
Oct  2 04:23:49 backup kernel: [62560.136399] ata4: softreset failed (device not ready)
Oct  2 04:23:49 backup kernel: [62560.136425] ata4: hard resetting link
Oct  2 04:24:00 backup kernel: [62570.703183] ata4: link is slow to respond, please be patient (ready=0)
Oct  2 04:24:24 backup kernel: [62595.163131] ata4: softreset failed (device not ready)
Oct  2 04:24:24 backup kernel: [62595.163158] ata4: limiting SATA link speed to 1.5 Gbps
Oct  2 04:24:24 backup kernel: [62595.163164] ata4: hard resetting link
Oct  2 04:24:29 backup kernel: [62600.328580] ata4: softreset failed (device not ready)
Oct  2 04:24:29 backup kernel: [62600.328608] ata4: reset failed, giving up
Oct  2 04:24:29 backup kernel: [62600.328621] ata4.00: disabled
Oct  2 04:24:29 backup kernel: [62600.328682] ata4: EH complete
Oct  2 04:27:59 backup kernel: [62810.287232] program smartctl is using a deprecated SCSI ioctl, please convert it to SG_IO
...
I repeated the same procedure once more, with the same result and similar log messages.

What might be the cause of it? Is it maybe because I keep doing my nightly backups to this server and it can't cope with the load? Everything was fine before my test, and the SMART values seem OK:

Code:
smartctl 6.2 2013-07-26 r3841 [x86_64-linux-3.19.0-30-generic] (local build)
Copyright (C) 2002-13, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Device Model:     WDC WD40EFRX-68WT0N0
Serial Number:    xxx
LU WWN Device Id: yyy
Firmware Version: 80.00A80
User Capacity:    4,000,787,030,016 bytes [4.00 TB]
Sector Sizes:     512 bytes logical, 4096 bytes physical
Rotation Rate:    5400 rpm
Device is:        Not in smartctl database [for details use: -P showall]
ATA Version is:   ACS-2 (minor revision not indicated)
SATA Version is:  SATA 3.0, 6.0 Gb/s (current: 3.0 Gb/s)
Local Time is:    Sat Oct  3 16:27:43 2015 BST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

General SMART Values:
Offline data collection status:  (0x00) Offline data collection activity
                                        was never started.
                                        Auto Offline Data Collection: Disabled.
Self-test execution status:      (   0) The previous self-test routine completed
                                        without error or no self-test has ever 
                                        been run.
Total time to complete Offline 
data collection:                (51540) seconds.
Offline data collection
capabilities:                    (0x7b) SMART execute Offline immediate.
                                        Auto Offline data collection on/off support.
                                        Suspend Offline collection upon new
                                        command.
                                        Offline surface scan supported.
                                        Self-test supported.
                                        Conveyance Self-test supported.
                                        Selective Self-test supported.
SMART capabilities:            (0x0003) Saves SMART data before entering
                                        power-saving mode.
                                        Supports SMART auto save timer.
Error logging capability:        (0x01) Error logging supported.
                                        General Purpose Logging supported.
Short self-test routine 
recommended polling time:        (   2) minutes.
Extended self-test routine
recommended polling time:        ( 515) minutes.
Conveyance self-test routine
recommended polling time:        (   5) minutes.
SCT capabilities:              (0x703d) SCT Status supported.
                                        SCT Error Recovery Control supported.
                                        SCT Feature Control supported.
                                        SCT Data Table supported.

SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x002f   200   200   051    Pre-fail  Always       -       0
  3 Spin_Up_Time            0x0027   227   171   021    Pre-fail  Always       -       5641
  4 Start_Stop_Count        0x0032   099   099   000    Old_age   Always       -       1785
  5 Reallocated_Sector_Ct   0x0033   200   200   140    Pre-fail  Always       -       0
  7 Seek_Error_Rate         0x002e   200   200   000    Old_age   Always       -       0
  9 Power_On_Hours          0x0032   097   097   000    Old_age   Always       -       2655
 10 Spin_Retry_Count        0x0032   100   100   000    Old_age   Always       -       0
 11 Calibration_Retry_Count 0x0032   100   100   000    Old_age   Always       -       0
 12 Power_Cycle_Count       0x0032   099   099   000    Old_age   Always       -       1785
192 Power-Off_Retract_Count 0x0032   200   200   000    Old_age   Always       -       34
193 Load_Cycle_Count        0x0032   199   199   000    Old_age   Always       -       3600
194 Temperature_Celsius     0x0022   122   114   000    Old_age   Always       -       30
196 Reallocated_Event_Count 0x0032   200   200   000    Old_age   Always       -       0
197 Current_Pending_Sector  0x0032   200   200   000    Old_age   Always       -       0
198 Offline_Uncorrectable   0x0030   100   253   000    Old_age   Offline      -       0
199 UDMA_CRC_Error_Count    0x0032   200   200   000    Old_age   Always       -       0
200 Multi_Zone_Error_Rate   0x0008   100   253   000    Old_age   Offline      -       0

SMART Error Log Version: 1
No Errors Logged

SMART Self-test log structure revision number 1
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Short offline       Completed without error       00      2655         -

SMART Selective self-test log data structure revision number 1
 SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
    1        0        0  Not_testing
    2        0        0  Not_testing
    3        0        0  Not_testing
    4        0        0  Not_testing
    5        0        0  Not_testing
Selective self-test flags (0x0):
  After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.
Are there other things I should do, like lowering the rate limit or zeroing the drive or the superblock, before putting the drive back into the array?
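For reference, a rough sketch of what those options could look like - the exact values here are assumptions, not recommendations from this thread:

Code:
# lower the md rebuild speed ceiling (KB/s per disk, default max 200000)
# so the re-sync puts less strain on the drives
sysctl -w dev.raid.speed_limit_max=50000

# wipe the old md metadata from the partition before re-adding it
mdadm --zero-superblock /dev/sdd1
mdadm /dev/md0 --add /dev/sdd1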

Last edited by hortageno; 10-03-2015 at 10:33 AM.
 
Old 10-03-2015, 11:29 AM   #2
smallpond
You might try disabling queued commands to the drive with the kernel parameter:
Code:
 libata.force=noncq
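On Ubuntu that would typically be appended to the kernel command line via GRUB - a sketch, with the file path and existing options assumed:

Code:
# in /etc/default/grub, append the parameter to the existing options, e.g.:
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet splash libata.force=noncq"
# (it can also be limited to the affected port, e.g. libata.force=4.00:noncq)
# then regenerate the grub config and reboot:
sudo update-grub
sudo reboot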
 
Old 10-04-2015, 10:11 AM   #3
hortageno
Quote:
Originally Posted by smallpond
You might try disabling queued commands to the drive with the kernel parameter:
Code:
 libata.force=noncq
Before I could try this out, neither the BIOS nor Linux could recognize that disk at all anymore. To rule out a faulty SATA connection I swapped it with the disk next to it. The problem stayed with the disk, not the SATA port.

Then I connected the disk to another PC and it showed up in Linux. I don't remember whether the BIOS could see it. I started filling the disk with zeros, but cancelled it at about 600GB. Now I have put it back into the server; both the BIOS and Linux can see the drive, so I created a partition and added the disk to the array. I will see tomorrow how it went.
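For reference, a rough sketch of that sequence, with device names and sector numbers assumed to match the parted output shown later in this thread:

Code:
# partially zero the drive (cancelled here after roughly 600GB)
dd if=/dev/zero of=/dev/sdd bs=1M

# recreate a GPT partition spanning the same sectors as the other members
parted --script /dev/sdd mklabel gpt mkpart primary 2048s 7814035455s set 1 raid on

# add the partition back to the array and keep an eye on the rebuild
mdadm /dev/md0 --add /dev/sdd1
cat /proc/mdstat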
 
Old 10-04-2015, 01:02 PM   #4
Pearlseattle
I hope it will work - my first guess as well would have been to fill at least the start of the "faulty" HDD with zeroes, but on the other hand this kind of problem (the RAID somehow recognizing the faulty HDD through its superblock(s)) should have shown up right at the start.

If it happens again you might want to overwrite the "faulty" HDD completely with zeroes before trying again (maybe there is another superblock at the end of the disk/partition?).
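If leftover metadata is the suspect, it can also be checked for and removed without zeroing the whole drive - a rough sketch, device names assumed as in this thread:

Code:
# show any md superblock still present on the partition and on the whole disk
mdadm --examine /dev/sdd1
mdadm --examine /dev/sdd

# remove lingering md/RAID/filesystem signatures from the partition
wipefs -a /dev/sdd1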

Additionally, you might want to double-check how the "faulty" HDD is partitioned (if it is partitioned at all), to verify that the partition is at least as big as, and configured the same way as, the partitions on the other HDDs.

Last edited by Pearlseattle; 10-04-2015 at 01:04 PM.
 
Old 10-04-2015, 02:09 PM   #5
hortageno
Quote:
Originally Posted by Pearlseattle
Additionally, you might want to double-check how the "faulty" HDD is partitioned (if it is partitioned at all), to verify that the partition is at least as big as, and configured the same way as, the partitions on the other HDDs.
The hard disks are all the same 4TB WD Reds and have exactly the same layout. The only difference is the flag, which I set to "raid" on the "faulty" hard disk today when partitioning. No idea how the other two drives got the flag "msftdata". It could be that I used gparted and that was the default.

Code:
parted /dev/sdb 'unit s print'
Model: ATA WDC WD40EFRX-68W (scsi)
Disk /dev/sdb: 7814037168s
Sector size (logical/physical): 512B/4096B
Partition Table: gpt

Number  Start  End          Size         File system  Name  Flags
 1      2048s  7814035455s  7814033408s                     msftdata
Code:
parted /dev/sdc 'unit s print'
Model: ATA WDC WD40EFRX-68W (scsi)
Disk /dev/sdc: 7814037168s
Sector size (logical/physical): 512B/4096B
Partition Table: gpt

Number  Start  End          Size         File system  Name  Flags
 1      2048s  7814035455s  7814033408s                     msftdata
Code:
parted /dev/sdd 'unit s print'
Model: ATA WDC WD40EFRX-68W (scsi)
Disk /dev/sdd: 7814037168s
Sector size (logical/physical): 512B/4096B
Partition Table: gpt

Number  Start  End          Size         File system  Name     Flags
 1      2048s  7814035455s  7814033408s               primary  raid
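For what it's worth, the flag can be changed in place with parted, so the members could be made to match either way - a sketch (with 1.2 metadata the partition type GUID shouldn't matter for assembly anyway, since mdadm scans for its own superblock):

Code:
# make sdd1 match the other two members ...
parted /dev/sdd set 1 msftdata on

# ... or make the other two match sdd1 instead
parted /dev/sdb set 1 raid on
parted /dev/sdc set 1 raid on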
 
Old 10-04-2015, 03:00 PM   #6
Pearlseattle
OK, that's funny.

I assume you have already thought about setting the partition type to the same mysterious "msftdata" if the sync fails today/tomorrow...
In your place I would do that even if the current sync succeeds - you never know, you could still get screwed in the future and not even the queen of air & darkness would be able to help you with such a mixed RAID config.

If the sync fails today it COULD be that the "raid" partition type really ends up making less space available than the "msftdata" partition type. Neither of us knows the internals of the raid and msftdata partition types, so who knows.
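One quick way to rule the size theory in or out would be to compare what the two partition types actually expose - a sketch:

Code:
# both should report exactly the same number of bytes,
# regardless of the partition type flag
blockdev --getsize64 /dev/sdb1
blockdev --getsize64 /dev/sdd1

# and what md itself sees on each member
mdadm --examine /dev/sdb1 | grep -i 'dev size'
mdadm --examine /dev/sdd1 | grep -i 'dev size'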
 
Old 10-04-2015, 03:28 PM   #7
hortageno
Quote:
Originally Posted by Pearlseattle
OK, that's funny.

I assume you have already thought about setting the partition type to the same mysterious "msftdata" if the sync fails today/tomorrow...
In your place I would do that even if the current sync succeeds - you never know, you could still get screwed in the future and not even the queen of air & darkness would be able to help you with such a mixed RAID config.

If the sync fails today it COULD be that the "raid" partition type really ends up making less space available than the "msftdata" partition type. Neither of us knows the internals of the raid and msftdata partition types, so who knows.
I'm pretty sure the partition type was the same "msftdata" before I zeroed the disk. I set up all disks on the same day with exactly the same commands.
 
Old 10-04-2015, 03:34 PM   #8
Pearlseattle
Quote:
I set up all disks on the same day with exactly the same commands
Mmmhh, I thought that you >>>touched<<< only the single disk that you set as "failed" at the beginning and that you did not touch the other disks?
 
Old 10-04-2015, 04:12 PM   #9
hortageno
Quote:
Originally Posted by Pearlseattle
Mmmhh, I thought that you >>>touched<<< only the single disk that you set as "failed" at the beginning and that you did not touch the other disks?
I meant when I set up the server initially a few months ago. Today I only touched the "failed" disk; my previous failed attempts to re-add the disk were before that. At that point all disks were identical, including the partition type.
 
Old 10-05-2015, 07:35 AM   #10
hortageno
The drive failed again today. Now it's not recognized on either machine, neither in the BIOS nor in Linux. I opened an RMA with Western Digital. Thanks for trying to help.

Should I mark this thread as solved? I don't know whether returning a product counts as a solution.
 
Old 10-05-2015, 02:01 PM   #11
Pearlseattle
Maybe the resync stressed the drive for several hours and that's why it died...?
 
Old 10-09-2015, 05:55 AM   #12
hortageno
Just to let everyone know: I got the replacement disk from WD yesterday, and the sync has now finished successfully.
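For completeness, a rough sketch of how the final state can be verified (assuming the array is still /dev/md0):

Code:
# all three members should be listed as active sync, with a clean array state
mdadm --detail /dev/md0
cat /proc/mdstat

# optionally run a full redundancy check; progress shows up in /proc/mdstat
echo check > /sys/block/md0/md/sync_action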
 
  

