01-13-2016, 11:23 PM   #1
sjabraha (LQ Newbie)
Safe to remove failed hard drive from Raid5?


Hello, I am running Ubuntu 14.04 LTS (and am relatively new to Ubuntu). I'm also running a 4-disk RAID5 array using software RAID. It appears that one of the disks has failed or, at a minimum, has fallen out of the array. I've purchased a new hard disk to replace the failed one. In preparation for the drive swap, I've executed the mdadm "fail" and "remove" commands that are required before removing the faulty drive. However, I keep getting a response for both saying "no such device". Detailed output is below.


Does this mean it's physically safe to remove the disk or am I overlooking something?
Thanks!

Code:
s@half:~$ sudo mdadm --manage /dev/md0 --fail /dev/sdc1
mdadm: set device faulty failed for /dev/sdc1:  No such device
s@half:~$ sudo mdadm --manage /dev/md0 --remove /dev/sdc1
mdadm: hot remove failed for /dev/sdc1: No such device or address

s@half:~$ sudo mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Sat Sep 20 09:49:46 2014
     Raid Level : raid5
     Array Size : 8790400512 (8383.18 GiB 9001.37 GB)
  Used Dev Size : 2930133504 (2794.39 GiB 3000.46 GB)
   Raid Devices : 4
  Total Devices : 3
    Persistence : Superblock is persistent

    Update Time : Sun Jan 10 22:26:56 2016
          State : clean, degraded
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : half:0
           UUID : 280b8978:da1db13d:c9e3cfea:c8649514
         Events : 15254

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       8       17        1      active sync   /dev/sdb1
       2       0        0        2      removed
       4       8       49        3      active sync   /dev/sdd1
s@half:~$
 
01-14-2016, 03:58 AM   #2
ajdonnison (LQ Newbie)
The fact that it is saying the device is not found suggests that the drive is truly dead. You could try using 'fdisk -l' to check if the /dev/sdc device can be found, and if not look in syslog or dmesg for any messages relating to the failure. You should be safe to remove and replace it.
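For example, something along these lines (a rough sketch - the device name assumes the missing member was /dev/sdc):

Code:
# Does the kernel still present /dev/sdc at all?
sudo fdisk -l /dev/sdc

# Look for ATA link resets or disk-offline messages
dmesg | grep -i sdc

# And check the persistent logs for RAID and disk events
grep -i -e sdc -e md0 /var/log/syslog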
 
01-14-2016, 06:59 AM   #3
sjabraha (Original Poster)
re: Safe to remove failed hard drive from Raid5?

Quote:
Originally Posted by ajdonnison
The fact that it is saying the device is not found suggests that the drive is truly dead. You could try using 'fdisk -l' to check if the /dev/sdc device can be found, and if not look in syslog or dmesg for any messages relating to the failure. You should be safe to remove and replace it.
Hi, thanks for your reply and suggestion. I've just been a bit cautious, as I've seen so many posts emphasizing the importance of executing fail and remove before physically removing the drive.

Here's what I got when I executed 'fdisk -l' on /dev/sdc:
Code:
WARNING: GPT (GUID Partition Table) detected on '/dev/sdc'! The util fdisk doesn't support GPT. Use GNU Parted.

Disk /dev/sdc: 3000.6 GB, 3000592982016 bytes
255 heads, 63 sectors/track, 364801 cylinders, total 5860533168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1  4294967295  2147483647+  ee  GPT
Partition 1 does not start on physical sector boundary.
I also executed smartctl, and the tests seem to suggest the disk is fine (see below). Nonetheless, I still want to see what happens when I put in a new drive and just wanted to make sure it was safe to do so.

Code:
s@half:~$ sudo smartctl -a /dev/sdc
smartctl 6.2 2013-07-26 r3841 [x86_64-linux-3.13.0-35-generic] (local build)
Copyright (C) 2002-13, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Family:     Western Digital Red (AF)
Device Model:     WDC WD30EFRX-68EUZN0
Firmware Version: 80.00A80
User Capacity:    3,000,592,982,016 bytes [3.00 TB]
Sector Sizes:     512 bytes logical, 4096 bytes physical
Rotation Rate:    5400 rpm
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   ACS-2 (minor revision not indicated)
SATA Version is:  SATA 3.0, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is:    Sat Jan  9 21:15:32 2016 HKT
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

General SMART Values:
Offline data collection status:  (0x00) Offline data collection activity
                                        was never started.
                                        Auto Offline Data Collection: Disabled.
Self-test execution status:      (   0) The previous self-test routine completed
                                        without error or no self-test has ever
                                        been run.
Total time to complete Offline
data collection:                (39840) seconds.
Offline data collection
capabilities:                    (0x7b) SMART execute Offline immediate.
                                        Auto Offline data collection on/off support.
                                        Suspend Offline collection upon new
                                        command.
                                        Offline surface scan supported.
                                        Self-test supported.
                                        Conveyance Self-test supported.
                                        Selective Self-test supported.
SMART capabilities:            (0x0003) Saves SMART data before entering
                                        power-saving mode.
                                        Supports SMART auto save timer.
Error logging capability:        (0x01) Error logging supported.
                                        General Purpose Logging supported.
Short self-test routine
recommended polling time:        (   2) minutes.
Extended self-test routine
recommended polling time:        ( 399) minutes.
Conveyance self-test routine
recommended polling time:        (   5) minutes.
SCT capabilities:              (0x703d) SCT Status supported.
                                        SCT Error Recovery Control supported.
                                        SCT Feature Control supported.
                                        SCT Data Table supported.

SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x002f   200   200   051    Pre-fail  Always       -                 0
  3 Spin_Up_Time            0x0027   177   175   021    Pre-fail  Always       -                 6150
  4 Start_Stop_Count        0x0032   100   100   000    Old_age   Always       -                 54
  5 Reallocated_Sector_Ct   0x0033   200   200   140    Pre-fail  Always       -                 0
  7 Seek_Error_Rate         0x002e   200   200   000    Old_age   Always       -                 0
  9 Power_On_Hours          0x0032   100   100   000    Old_age   Always       -                 476
 10 Spin_Retry_Count        0x0032   100   253   000    Old_age   Always       -                 0
 11 Calibration_Retry_Count 0x0032   100   253   000    Old_age   Always       -                 0
 12 Power_Cycle_Count       0x0032   100   100   000    Old_age   Always       -                 42
192 Power-Off_Retract_Count 0x0032   200   200   000    Old_age   Always       -                 9
193 Load_Cycle_Count        0x0032   200   200   000    Old_age   Always       -                 145
194 Temperature_Celsius     0x0022   116   093   000    Old_age   Always       -                 34
196 Reallocated_Event_Count 0x0032   200   200   000    Old_age   Always       -                 0
197 Current_Pending_Sector  0x0032   200   200   000    Old_age   Always       -                 0
198 Offline_Uncorrectable   0x0030   100   253   000    Old_age   Offline      -                 0
199 UDMA_CRC_Error_Count    0x0032   200   200   000    Old_age   Always       -                 0
200 Multi_Zone_Error_Rate   0x0008   100   253   000    Old_age   Offline      -                 0

SMART Error Log Version: 1
No Errors Logged

SMART Self-test log structure revision number 1
No self-tests have been logged.  [To run self-tests, use: smartctl -t]


SMART Selective self-test log data structure revision number 1
 SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
    1        0        0  Not_testing
    2        0        0  Not_testing
    3        0        0  Not_testing
    4        0        0  Not_testing
    5        0        0  Not_testing
Selective self-test flags (0x0):
  After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.

s@half:~$ sudo smartctl -d ata -H /dev/sdc
smartctl 6.2 2013-07-26 r3841 [x86_64-linux-3.13.0-35-generic] (local build)
Copyright (C) 2002-13, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED
 
01-15-2016, 04:02 PM   #4
jpollard (Senior Member)
Ah... Be careful.

If you have rebooted (either deliberately or otherwise), a dead disk will not show up, and the other disks may be renamed. Normally I would expect mdadm --detail to list the devices, even the dead ones (though it would show a failure - but I haven't yet seen a failed disk...). The way this one looks, a reboot may have happened that automatically removed the disk because it couldn't be found. The RAID partitions contain a header that identifies their usage, so the array can be assembled at boot time in the face of non-persistent device names. Here it just looks like the disk couldn't be found, hence the mdadm fail/remove commands reporting that the partition doesn't exist.

The /dev/sdc you are looking at may not be the physical drive that failed... especially since the SMART data shows /dev/sdc as working.

You can get a list of serial/model numbers by looking at the directory /dev/disk/by-id

A long listing will show you which /dev/sdXN names correspond to which serial/model identification. Getting a hardcopy of the list will allow you to verify the disks that are physically installed. The dead disk will be the one not on the hardcopy.

When that disk gets replaced (and the system rebooted) it should then show up (perhaps as /dev/sdc, but not necessarily), and the list from /dev/disk/by-id will now have a new entry that isn't on the hardcopy.

A partition list of that disk should show an empty partition table. (The partitions of a disk are also shown in /dev/disk/by-id as <disk name>-part<number>, so a new disk will just show up as itself, without any "-part<number>" addition.)

My listing looks like:
Code:
$ ls -l /dev/disk/by-id
total 0
lrwxrwxrwx. 1 root root  9 Dec 30 13:28 ata-MAXTOR_6L060J3_663206555890 -> ../../sdg
lrwxrwxrwx. 1 root root 10 Dec 30 13:28 ata-MAXTOR_6L060J3_663206555890-part1 -> ../../sdg1
lrwxrwxrwx. 1 root root  9 Jan 14 22:14 ata-PIONEER_DVD-RW_DVR-116D -> ../../sr0
lrwxrwxrwx. 1 root root  9 Dec 30 13:28 ata-SAMSUNG_HD250HJ_S0URJADPC01495 -> ../../sdh
lrwxrwxrwx. 1 root root 10 Dec 30 13:28 ata-SAMSUNG_HD250HJ_S0URJADPC01495-part1 -> ../../sdh1
lrwxrwxrwx. 1 root root  9 Dec 30 13:28 ata-SAMSUNG_HD250HJ_S0URJADPC01561 -> ../../sda
lrwxrwxrwx. 1 root root 10 Dec 30 13:28 ata-SAMSUNG_HD250HJ_S0URJADPC01561-part1 -> ../../sda1
lrwxrwxrwx. 1 root root 10 Dec 30 13:28 ata-SAMSUNG_HD250HJ_S0URJADPC01561-part2 -> ../../sda2
lrwxrwxrwx. 1 root root 10 Dec 30 13:28 ata-SAMSUNG_HD250HJ_S0URJADPC01561-part3 -> ../../sda3
lrwxrwxrwx. 1 root root 10 Dec 30 13:28 ata-SAMSUNG_HD250HJ_S0URJADPC01561-part4 -> ../../sda4
lrwxrwxrwx. 1 root root 10 Dec 30 13:28 ata-SAMSUNG_HD250HJ_S0URJADPC01561-part5 -> ../../sda5
lrwxrwxrwx. 1 root root  9 Dec 30 13:28 ata-ST3000DM001-1ER166_W500KKJK -> ../../sdc
lrwxrwxrwx. 1 root root 10 Dec 30 13:28 ata-ST3000DM001-1ER166_W500KKJK-part1 -> ../../sdc1
lrwxrwxrwx. 1 root root 10 Dec 30 13:28 ata-ST3000DM001-1ER166_W500KKJK-part2 -> ../../sdc2
lrwxrwxrwx. 1 root root  9 Dec 30 13:28 ata-ST3000DM001-1ER166_W500KKK4 -> ../../sdb
lrwxrwxrwx. 1 root root 10 Dec 30 13:28 ata-ST3000DM001-1ER166_W500KKK4-part1 -> ../../sdb1
lrwxrwxrwx. 1 root root 10 Dec 30 13:28 ata-ST3000DM001-1ER166_W500KKK4-part2 -> ../../sdb2
lrwxrwxrwx. 1 root root  9 Dec 30 13:28 ata-ST3000DM001-1ER166_W501N22E -> ../../sdd
lrwxrwxrwx. 1 root root 10 Dec 30 13:28 ata-ST3000DM001-1ER166_W501N22E-part1 -> ../../sdd1
lrwxrwxrwx. 1 root root 10 Dec 30 13:28 ata-ST3000DM001-1ER166_W501N22E-part2 -> ../../sdd2
lrwxrwxrwx. 1 root root  9 Dec 30 13:28 ata-ST3000DM001-1ER166_W501N50Z -> ../../sde
lrwxrwxrwx. 1 root root  9 Dec 30 13:28 ata-ST500DM002-1BD142_Z6E8YWQ4 -> ../../sdi
lrwxrwxrwx. 1 root root 10 Dec 30 13:28 ata-ST500DM002-1BD142_Z6E8YWQ4-part1 -> ../../sdi1
lrwxrwxrwx. 1 root root  9 Dec 30 13:28 ata-ST500DM002-1BD142_Z6E934RB -> ../../sdf
lrwxrwxrwx. 1 root root 10 Dec 30 13:28 ata-ST500DM002-1BD142_Z6E934RB-part1 -> ../../sdf1
lrwxrwxrwx. 1 root root 11 Dec 30 13:28 md-name-panther.localdomain:medialib -> ../../md127
lrwxrwxrwx. 1 root root 11 Dec 30 13:28 md-uuid-xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx -> ../../md127
<the rest gives the PCI address identification instead of physical model/part numbers>

 
01-15-2016, 09:46 PM   #5
sjabraha (Original Poster)
re: Ah... Be careful.

Hi Jpollard,

So you've touched on the very thing that's been nagging me about this from the beginning. I'm not totally convinced that my disk is actually faulty, but for some reason it has fallen out of the array. If that's the case, the question is why, and more importantly, how do I fix it?

I've just booted up the machine and run 'cat /proc/mdstat'; the output is below:


Code:
s@half:~$ cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid5 sda1[0] sdd1[4] sdb1[1]
      8790400512 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/3] [UU_U]

unused devices: <none>
When I run 'mdadm --detail /dev/md0', I get the following output

Code:
s@half:~$ sudo mdadm --detail /dev/md0

/dev/md0:
        Version : 1.2
  Creation Time : Sat Sep 20 09:49:46 2014
     Raid Level : raid5
     Array Size : 8790400512 (8383.18 GiB 9001.37 GB)
  Used Dev Size : 2930133504 (2794.39 GiB 3000.46 GB)
   Raid Devices : 4
  Total Devices : 3
    Persistence : Superblock is persistent

    Update Time : Sat Jan 16 10:39:11 2016
          State : clean, degraded
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : half:0
           UUID : 280b8978:da1db13d:c9e3cfea:c8649514
         Events : 15258

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       8       17        1      active sync   /dev/sdb1
       2       0        0        2      removed
       4       8       49        3      active sync   /dev/sdd1
s@half:~$
So I'm fairly sure the missing member is the /dev/sdc1 partition on the /dev/sdc drive. To get the serial number of the drive, I run 'sudo smartctl --info /dev/sdc' and get the following:

Code:
s@half:~$ sudo smartctl --info /dev/sdc
smartctl 6.2 2013-07-26 r3841 [x86_64-linux-3.13.0-35-generic] (local build)
Copyright (C) 2002-13, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Family:     Western Digital Red (AF)
Device Model:     WDC WD30EFRX-68EUZN0
Serial Number:    WD-WCC4N0837332
LU WWN Device Id: 5 0014ee 25f44eade
Firmware Version: 80.00A80
User Capacity:    3,000,592,982,016 bytes [3.00 TB]
Sector Sizes:     512 bytes logical, 4096 bytes physical
Rotation Rate:    5400 rpm
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   ACS-2 (minor revision not indicated)
SATA Version is:  SATA 3.0, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is:    Sat Jan 16 11:11:30 2016 HKT
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

s@half:~$


Here's what I get when I run 'ls -l /dev/disk/by-id'

Code:
s@half:~$ ls -l /dev/disk/by-id
total 0
lrwxrwxrwx 1 root root  9 Jan 16 11:00 ata-FSB-120GB_6212140501034 -> ../../sde
lrwxrwxrwx 1 root root 10 Jan 16 10:39 ata-FSB-120GB_6212140501034-part1 -> ../../sde1
lrwxrwxrwx 1 root root 10 Jan 16 10:39 ata-FSB-120GB_6212140501034-part2 -> ../../sde2
lrwxrwxrwx 1 root root 10 Jan 16 10:39 ata-FSB-120GB_6212140501034-part3 -> ../../sde3
lrwxrwxrwx 1 root root  9 Jan 16 10:39 ata-HL-DT-ST_DVDRAM_GH24NSC0_KAXE74J1312 -> ../../sr0
lrwxrwxrwx 1 root root  9 Jan 16 10:39 ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N0821054 -> ../../sda
lrwxrwxrwx 1 root root 10 Jan 16 10:39 ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N0821054-part1 -> ../../sda1
lrwxrwxrwx 1 root root  9 Jan 16 10:39 ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N0837332 -> ../../sdc
lrwxrwxrwx 1 root root 10 Jan 16 10:39 ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N0837332-part1 -> ../../sdc1
lrwxrwxrwx 1 root root  9 Jan 16 10:39 ata-WDC_WD30EFRX-68EUZN0_WD-WMC4N2382719 -> ../../sdb
lrwxrwxrwx 1 root root 10 Jan 16 10:39 ata-WDC_WD30EFRX-68EUZN0_WD-WMC4N2382719-part1 -> ../../sdb1
lrwxrwxrwx 1 root root  9 Jan 16 10:39 ata-WDC_WD30EFRX-68EUZN0_WD-WMC4N2442624 -> ../../sdd
lrwxrwxrwx 1 root root 10 Jan 16 10:39 ata-WDC_WD30EFRX-68EUZN0_WD-WMC4N2442624-part1 -> ../../sdd1
lrwxrwxrwx 1 root root 10 Jan 16 10:39 dm-name-ubuntu--vg-root -> ../../dm-0
lrwxrwxrwx 1 root root 10 Jan 16 10:39 dm-name-ubuntu--vg-swap_1 -> ../../dm-1
lrwxrwxrwx 1 root root 10 Jan 16 10:39 dm-uuid-LVM-3C5txMAWBMzEIgojmF6zS1RK4NSPEgHkEy3Oz5f0Gsl9Hfe8zp9zjO3uGqP9t6oy -> ../../dm-0
lrwxrwxrwx 1 root root 10 Jan 16 10:39 dm-uuid-LVM-3C5txMAWBMzEIgojmF6zS1RK4NSPEgHkzMgSCYxJPh34GDPE4L53sabnzeNEndTU -> ../../dm-1
lrwxrwxrwx 1 root root  9 Jan 16 10:39 md-name-half:0 -> ../../md0
lrwxrwxrwx 1 root root  9 Jan 16 10:39 md-uuid-280b8978:da1db13d:c9e3cfea:c8649514 -> ../../md0
lrwxrwxrwx 1 root root  9 Jan 16 10:39 wwn-0x5001480000000000 -> ../../sr0
lrwxrwxrwx 1 root root  9 Jan 16 10:39 wwn-0x50014ee209ef5433 -> ../../sda
lrwxrwxrwx 1 root root 10 Jan 16 10:39 wwn-0x50014ee209ef5433-part1 -> ../../sda1
lrwxrwxrwx 1 root root  9 Jan 16 10:39 wwn-0x50014ee25f44eade -> ../../sdc
lrwxrwxrwx 1 root root 10 Jan 16 10:39 wwn-0x50014ee25f44eade-part1 -> ../../sdc1
lrwxrwxrwx 1 root root  9 Jan 16 10:39 wwn-0x50014ee6045afaef -> ../../sdd
lrwxrwxrwx 1 root root 10 Jan 16 10:39 wwn-0x50014ee6045afaef-part1 -> ../../sdd1
lrwxrwxrwx 1 root root  9 Jan 16 10:39 wwn-0x50014ee659aa3207 -> ../../sdb
lrwxrwxrwx 1 root root 10 Jan 16 10:39 wwn-0x50014ee659aa3207-part1 -> ../../sdb1
s@half:~$

And then the kicker:

Code:
s@half:~$ sudo smartctl -a /dev/sdc
smartctl 6.2 2013-07-26 r3841 [x86_64-linux-3.13.0-35-generic] (local build)
Copyright (C) 2002-13, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Family:     Western Digital Red (AF)
Device Model:     WDC WD30EFRX-68EUZN0
Serial Number:    WD-WCC4N0837332
LU WWN Device Id: 5 0014ee 25f44eade
Firmware Version: 80.00A80
User Capacity:    3,000,592,982,016 bytes [3.00 TB]
Sector Sizes:     512 bytes logical, 4096 bytes physical
Rotation Rate:    5400 rpm
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   ACS-2 (minor revision not indicated)
SATA Version is:  SATA 3.0, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is:    Sat Jan 16 11:34:34 2016 HKT
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

General SMART Values:
Offline data collection status:  (0x00) Offline data collection activity
                                        was never started.
                                        Auto Offline Data Collection: Disabled.
Self-test execution status:      (   0) The previous self-test routine completed
                                        without error or no self-test has ever
                                        been run.
Total time to complete Offline
data collection:                (39840) seconds.
Offline data collection
capabilities:                    (0x7b) SMART execute Offline immediate.
                                        Auto Offline data collection on/off support.
                                        Suspend Offline collection upon new
                                        command.
                                        Offline surface scan supported.
                                        Self-test supported.
                                        Conveyance Self-test supported.
                                        Selective Self-test supported.
SMART capabilities:            (0x0003) Saves SMART data before entering
                                        power-saving mode.
                                        Supports SMART auto save timer.
Error logging capability:        (0x01) Error logging supported.
                                        General Purpose Logging supported.
Short self-test routine
recommended polling time:        (   2) minutes.
Extended self-test routine
recommended polling time:        ( 399) minutes.
Conveyance self-test routine
recommended polling time:        (   5) minutes.
SCT capabilities:              (0x703d) SCT Status supported.
                                        SCT Error Recovery Control supported.
                                        SCT Feature Control supported.
                                        SCT Data Table supported.

SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x002f   200   200   051    Pre-fail  Always       -       0
  3 Spin_Up_Time            0x0027   176   175   021    Pre-fail  Always       -       6183
  4 Start_Stop_Count        0x0032   100   100   000    Old_age   Always       -       57
  5 Reallocated_Sector_Ct   0x0033   200   200   140    Pre-fail  Always       -       0
  7 Seek_Error_Rate         0x002e   200   200   000    Old_age   Always       -       0
  9 Power_On_Hours          0x0032   100   100   000    Old_age   Always       -       482
 10 Spin_Retry_Count        0x0032   100   253   000    Old_age   Always       -       0
 11 Calibration_Retry_Count 0x0032   100   253   000    Old_age   Always       -       0
 12 Power_Cycle_Count       0x0032   100   100   000    Old_age   Always       -       45
192 Power-Off_Retract_Count 0x0032   200   200   000    Old_age   Always       -       11
193 Load_Cycle_Count        0x0032   200   200   000    Old_age   Always       -       149
194 Temperature_Celsius     0x0022   116   093   000    Old_age   Always       -       34
196 Reallocated_Event_Count 0x0032   200   200   000    Old_age   Always       -       0
197 Current_Pending_Sector  0x0032   200   200   000    Old_age   Always       -       0
198 Offline_Uncorrectable   0x0030   100   253   000    Old_age   Offline      -       0
199 UDMA_CRC_Error_Count    0x0032   200   200   000    Old_age   Always       -       0
200 Multi_Zone_Error_Rate   0x0008   100   253   000    Old_age   Offline      -       0

SMART Error Log Version: 1
No Errors Logged

SMART Self-test log structure revision number 1
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Short offline       Completed without error       00%       479         -
# 2  Short offline       Interrupted (host reset)      30%       477         -

SMART Selective self-test log data structure revision number 1
 SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
    1        0        0  Not_testing
    2        0        0  Not_testing
    3        0        0  Not_testing
    4        0        0  Not_testing
    5        0        0  Not_testing
Selective self-test flags (0x0):
  After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.

s@half:~$
So it looks like the drive is OK, but for some reason the array has dropped it... not sure what's changed...

Note: All of this was executed during the same session with no reboot.
 
01-16-2016, 02:42 AM   #6
jpollard (Senior Member)
At that point, it looks like you should be able to add the partition back and let it rebuild.
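A minimal sketch of what that would look like, assuming the member partition is still /dev/sdc1 and the array is /dev/md0 as in your output:

Code:
# Add the existing partition back into the degraded array
sudo mdadm --manage /dev/md0 --add /dev/sdc1

# Then watch the rebuild progress
cat /proc/mdstat
sudo mdadm --detail /dev/md0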
 
01-16-2016, 06:54 AM   #7
sjabraha (Original Poster)
re: Ah... Be careful

Quote:
Originally Posted by jpollard
At that point, it looks like you should be able to add the partition back and let it rebuild.
OK, so I executed 'sudo mdadm /dev/md0 -a /dev/sdc1' and it started the process of adding SDC1 back into the array. You can see this in the attached GUI disk utility screenshot. Then, at some point along the way, the process quit altogether and SDC1 completely disappeared from the disk utility window. I've attached this screenshot as well.

Do I need to zero the superblock of sdc1 before trying to add it back in? Is there something else I should be doing?
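(For reference, I believe zeroing would be something like the command below - and only against a partition that is out of the array, since it wipes the RAID metadata:)

Code:
sudo mdadm --zero-superblock /dev/sdc1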

Here's what I get when I execute 'sudo mdadm --detail /dev/md0'
Code:
s@half:~$ sudo mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Sat Sep 20 09:49:46 2014
     Raid Level : raid5
     Array Size : 8790400512 (8383.18 GiB 9001.37 GB)
  Used Dev Size : 2930133504 (2794.39 GiB 3000.46 GB)
   Raid Devices : 4
  Total Devices : 3
    Persistence : Superblock is persistent

    Update Time : Sat Jan 16 17:50:14 2016
          State : clean, degraded 
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : half:0
           UUID : 280b8978:da1db13d:c9e3cfea:c8649514
         Events : 15277

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       8       17        1      active sync   /dev/sdb1
       2       0        0        2      removed
       4       8       49        3      active sync   /dev/sdd1
Detail for 'ls -l /dev/disk/by-id' - notice that the drive ending with serial number '7332' (aka /dev/sdc) from earlier is now missing:

Code:
s@half:~$ ls -l /dev/disk/by-id
total 0
lrwxrwxrwx 1 root root  9 Jan 16 20:32 ata-FSB-120GB_6212140501034 -> ../../sde
lrwxrwxrwx 1 root root 10 Jan 16 16:51 ata-FSB-120GB_6212140501034-part1 -> ../../sde1
lrwxrwxrwx 1 root root 10 Jan 16 16:51 ata-FSB-120GB_6212140501034-part2 -> ../../sde2
lrwxrwxrwx 1 root root 10 Jan 16 16:51 ata-FSB-120GB_6212140501034-part3 -> ../../sde3
lrwxrwxrwx 1 root root  9 Jan 16 16:51 ata-HL-DT-ST_DVDRAM_GH24NSC0_KAXE74J1312 -> ../../sr0
lrwxrwxrwx 1 root root  9 Jan 16 20:32 ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N0821054 -> ../../sda
lrwxrwxrwx 1 root root 10 Jan 16 16:51 ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N0821054-part1 -> ../../sda1
lrwxrwxrwx 1 root root  9 Jan 16 20:32 ata-WDC_WD30EFRX-68EUZN0_WD-WMC4N2382719 -> ../../sdb
lrwxrwxrwx 1 root root 10 Jan 16 16:51 ata-WDC_WD30EFRX-68EUZN0_WD-WMC4N2382719-part1 -> ../../sdb1
lrwxrwxrwx 1 root root  9 Jan 16 20:32 ata-WDC_WD30EFRX-68EUZN0_WD-WMC4N2442624 -> ../../sdd
lrwxrwxrwx 1 root root 10 Jan 16 16:51 ata-WDC_WD30EFRX-68EUZN0_WD-WMC4N2442624-part1 -> ../../sdd1
lrwxrwxrwx 1 root root 10 Jan 16 16:51 dm-name-ubuntu--vg-root -> ../../dm-0
lrwxrwxrwx 1 root root 10 Jan 16 16:51 dm-name-ubuntu--vg-swap_1 -> ../../dm-1
lrwxrwxrwx 1 root root 10 Jan 16 16:51 dm-uuid-LVM-3C5txMAWBMzEIgojmF6zS1RK4NSPEgHkEy3Oz5f0Gsl9Hfe8zp9zjO3uGqP9t6oy -> ../../dm-0
lrwxrwxrwx 1 root root 10 Jan 16 16:51 dm-uuid-LVM-3C5txMAWBMzEIgojmF6zS1RK4NSPEgHkzMgSCYxJPh34GDPE4L53sabnzeNEndTU -> ../../dm-1
lrwxrwxrwx 1 root root  9 Jan 16 17:05 md-name-half:0 -> ../../md0
lrwxrwxrwx 1 root root  9 Jan 16 17:05 md-uuid-280b8978:da1db13d:c9e3cfea:c8649514 -> ../../md0
lrwxrwxrwx 1 root root  9 Jan 16 16:51 wwn-0x5001480000000000 -> ../../sr0
lrwxrwxrwx 1 root root  9 Jan 16 20:32 wwn-0x50014ee209ef5433 -> ../../sda
lrwxrwxrwx 1 root root 10 Jan 16 16:51 wwn-0x50014ee209ef5433-part1 -> ../../sda1
lrwxrwxrwx 1 root root  9 Jan 16 20:32 wwn-0x50014ee6045afaef -> ../../sdd
lrwxrwxrwx 1 root root 10 Jan 16 16:51 wwn-0x50014ee6045afaef-part1 -> ../../sdd1
lrwxrwxrwx 1 root root  9 Jan 16 20:32 wwn-0x50014ee659aa3207 -> ../../sdb
lrwxrwxrwx 1 root root 10 Jan 16 16:51 wwn-0x50014ee659aa3207-part1 -> ../../sdb1
And just for completeness, I ran 'sudo smartctl -a /dev/sdc'

Code:
s@half:~$ sudo smartctl -a /dev/sdc
smartctl 6.2 2013-07-26 r3841 [x86_64-linux-3.13.0-35-generic] (local build)
Copyright (C) 2002-13, Bruce Allen, Christian Franke, www.smartmontools.org

Smartctl open device: /dev/sdc failed: No such device
s@half:~$

Now, all that being said, I've done this before, and the missing sdc drive reappears after I reboot - but still not as part of the array.

Still not sure what's happening here...
Attached Thumbnails: Screenshot from 2016-01-16 17:07:05.png; Screenshot from 2016-01-16 20:28:12.png
 
01-16-2016, 07:35 AM   #8
jpollard (Senior Member)
Looks like the disk has a problem. It may be a cabling issue - either power or signal... I include power since the disk didn't record any errors but did disappear from the system (minor oxidation+vibration can do this as it is intermittent). These problems might be fixed by just unplugging the cables and plugging them back in (a little contact cleaner doesn't hurt either, but usually isn't necessary).

The system logs should have recorded some errors about both the RAID and the disk. These might shed more light on the problem.

I wouldn't expect the disk to show up in the raid as the system/management code identified it as having failed, so reboots wouldn't include it in the raid until directed to do so.
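One way to check what the member itself records (a sketch, assuming the partition comes back as /dev/sdc1 after a reboot) is to compare the superblock's event count against the array's:

Code:
# Inspect the RAID superblock on the partition itself
sudo mdadm --examine /dev/sdc1

# Compare its Events counter against the array's
sudo mdadm --detail /dev/md0 | grep -i events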

 
01-16-2016, 09:28 AM   #9
sjabraha (Original Poster)
OK, I will try disconnecting and reconnecting the cabling. If that doesn't work, is it safe for me to try a new hard drive? Do I need to do anything beforehand, like stopping the RAID? Failing and removing at the command line are not options, per the first post in this thread.

Thanks!
 
01-16-2016, 09:32 AM   #10
jpollard (Senior Member)
Should be safe.

All you should need to do is give it the same partition sizes as used in the raid. Once that is done, adding it to the raid will update the header and begin storing data on the partition - just like when you added the other disk back to the raid.

As far as the raid device goes, it won't be any different than before.
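A sketch of that process, assuming the new drive shows up as /dev/sdc and a healthy member such as /dev/sdd serves as the template (your members use GPT, so sgdisk rather than fdisk):

Code:
# Replicate the partition table of a healthy member onto the new drive
sudo sgdisk /dev/sdd -R /dev/sdc
# Give the new drive its own random GUIDs
sudo sgdisk -G /dev/sdc

# Then add the new partition and let the array rebuild
sudo mdadm --manage /dev/md0 --add /dev/sdc1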

 
01-17-2016, 07:56 PM   #11
sjabraha (Original Poster)
re: Safe to remove failed hard drive from Raid5?

Quote:
Originally Posted by jpollard
Looks like the disk has a problem. It may be a cabling issue - either power or signal... I include power since the disk didn't record any errors but did disappear from the system (minor oxidation+vibration can do this as it is intermittent). These problems might be fixed by just unplugging the cables and plugging them back in (a little contact cleaner doesn't hurt either, but usually isn't necessary).

The system logs should have recorded some errors about both the RAID and the disk. These might shed more light on the problem.

I wouldn't expect the disk to show up in the raid as the system/management code identified it as having failed, so reboots wouldn't include it in the Raid until directed to do so.
I'm very happy to report that your suspicion looks to be on the money. I replaced the SATA cable for both SDC and SDB. You may be wondering why I chose SDB. It was totally by accident, actually. I pulled the cable out inadvertently but happened to notice that it looked a bit contorted at the connector. I don't know if it was truly causing a problem, but I ended up replacing it anyway. After replacing both cables, I rebooted the machine and executed the command to add SDC back into the RAID. The process took a few hours, but all seemed to have gone well when I checked it again after the process completed. I've also rebooted the machine just to double-check, and the RAID is holding.

Just a huge thanks to you for helping me solve the problem! I really appreciate it. I have to say that while I've seen others complain about the same issue, I just didn't think this would be the cause. I suspect what's adding to the likelihood is that I don't use this Linux server all that often, so unlike my other machines it has a higher chance of some sort of 'build-up' occurring that weakens the connection between the cable and the drive and/or motherboard.

The only other thing I did differently was to execute the add command at the server rather than over SSH, but I doubt that caused the earlier issue.

Anyway, thanks again!
 
01-17-2016, 09:12 PM   #12
jpollard (Senior Member)
Thanks for letting us know that solved the problem. One side benefit is you now have a spare drive :-)

It also lets me feel better about some of the testing I did with md and btrfs. (btrfs works... mostly; it has some VERY nice features, but its handling of errors and its recovery tools are poor. The same testing carried out on an md raid worked acceptably.)

The basis of my tests was accidentally wiping the first couple of MB of a partition (I had a typo in the name). I went through the recovery procedures... and still had errors. The btrfs recover tool didn't work at all. At best, I got back 1/3 of the files (with a system hang on nearly every I/O error). So I did the same test on an md raid5. Yes, errors got reported, but when I failed the damaged partition, all the files were again valid. After adding the partition back, everything worked. So I dropped the use of btrfs.

The only thing I identified as a potential issue was that I did my tests within a VM and passed host partitions through to use as the disks. The raids initialized in the VM with no problems. BUT watch out for the identification on the raid: fortunately mdadm (by default) embeds the host name in the raid name. This becomes a problem if you specify a name that overrides the default - if the VM host reboots, it may assemble the raid itself unless it can recognize that the raid belongs to the VM and not to the host. When the VM boots, it will assemble the raid and things work normally.
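For illustration, the name a member's superblock carries can be checked, and a homehost can be set explicitly at creation time (a sketch - the device names and the 'vmguest' host name are placeholders):

Code:
# See which host:name a member's superblock carries
sudo mdadm --examine /dev/sdb1 | grep -i name

# Embed an explicit homehost/name when creating an array
sudo mdadm --create /dev/md0 --level=5 --raid-devices=4 \
    --homehost=vmguest --name=0 /dev/sd[abcd]1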

I haven't yet fully explored dm raid5 for all the limitations... but so far things just work.

 
01-17-2016, 11:40 PM   #13
sjabraha (Original Poster)
Quote:
Originally Posted by jpollard
Thanks for letting us know that solved the problem. One side benefit is you now have a spare drive :-)

It also lets me feel better about some of the testing I did with md and btrfs. (btrfs works... mostly; it has some VERY nice features, but its handling of errors and its recovery tools are poor. The same testing carried out on an md raid worked acceptably.)

The basis of my tests was accidentally wiping the first couple of MB of a partition (I had a typo in the name). I went through the recovery procedures... and still had errors. The btrfs recover tool didn't work at all. At best, I got back 1/3 of the files (with a system hang on nearly every I/O error). So I did the same test on an md raid5. Yes, errors got reported, but when I failed the damaged partition, all the files were again valid. After adding the partition back, everything worked. So I dropped the use of btrfs.

The only thing I identified as a potential issue was that I did my tests within a VM and passed host partitions through to use as the disks. The raids initialized in the VM with no problems. BUT watch out for the identification on the raid: fortunately mdadm (by default) embeds the host name in the raid name. This becomes a problem if you specify a name that overrides the default - if the VM host reboots, it may assemble the raid itself unless it can recognize that the raid belongs to the VM and not to the host. When the VM boots, it will assemble the raid and things work normally.

I haven't yet fully explored dm raid5 for all the limitations... but so far things just work.
I'm still learning mdadm and what it's capable of. At one point I was worried that the standby sequence I had introduced months ago might have caused the disk to fall out on its own.

What I am still a little perplexed by is why the drive would respond just fine to queries about SDC (such as its health, etc.) but fail when trying to add the disk back into the RAID. I forgot to mention this, but I ended up replacing the original cable with a SATA III-compatible cable. I'm not sure if that helped with the process or in some way enhanced the overall synchronization of SDC with the rest of the RAID.

Questions...questions..

P.S. Have you seen this web GUI that someone developed for mdadm? Very useful.
 
01-18-2016, 03:53 AM   #14
syg00 (LQ Veteran)
So long as you don't get fooled by this
Quote:
Using RAID makes your backup strategy completely transparent and your data safe and happy.
RAID does not obviate the need for backups. See @jpollard's experience with btrfs - or do an accidental "rm -rf". As it happens, I am a big believer in btrfs RAID5, but I am manic about backups. btrfs snapshot makes that a snack.
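For instance, a read-only snapshot takes seconds and can then be shipped elsewhere for a real backup (a sketch - the subvolume and mount paths are placeholders):

Code:
# Create a read-only snapshot of the data subvolume
sudo btrfs subvolume snapshot -r /data /data/.snap-20160118

# Send it to a backup filesystem on another device
sudo btrfs send /data/.snap-20160118 | sudo btrfs receive /mnt/backup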
 
01-18-2016, 06:21 AM   #15
jpollard (Senior Member)
Quote:
Originally Posted by syg00
So long as you don't get fooled by this

RAID does not obviate the need for backups. See @jpollard's experience with btrfs - or do an accidental "rm -rf". As it happens, I am a big believer in btrfs RAID5, but I am manic about backups. btrfs snapshot makes that a snack.
btrfs still has too many failures - and the snapshots are worthless when the software fails. They are useful as long as things are working, as they let you get a consistent backup at a given point in time. But otherwise, they fail along with everything else.

The big problem is actually doing the backups - right now the cheapest backups are to another set of disks... but that can be more than double the cost of the RAID. Even in large government installations, RAID devices sometimes don't get backed up simply due to the expense, and backups can take a LONG time to do. One job I was on used two 16 TB filesystems on a NAS device (NetApp)... one with ~50 million files, the other with about 40 million. Using the proprietary backup method between two units would take about 10 hours IF you had a 10Gb network between them (we did); unfortunately, if there was an error in the middle, things got dicey about whether it would finish or just take two or three times as long (it was faster to restart the backup...). Migrating from a NetApp unit to something bigger... a month or more. And you can't do it with rsync - technically you could, but it would take almost a week just for rsync to match directories to identify the files to be copied, with no files copied until that was done, and then it would copy one file at a time (it might be better now; this was 2010). And if you had a failure in the middle - start over, with another week to match up the directories again.

The only way to back up was to use a tape archive and robot (along with the expense and extra handling - hint: don't get one with only one tape transport, as it can take up to a week or two to repair, and during the repair there are no backups... and no restore capability either, so you really need two storage units in case the robot breaks...). So now the expense is in 3 storage units... about 3 times the cost of the NAS. Guess what - funds were not available for backups.

BTW, the solution to the migration effort (they wanted to archive video data, and the NAS couldn't handle the added load): I wrote a multi-threaded Perl script to scan directories for missing files and feed a queue of files/directories to copy/create, plus a multi-threaded copy action. A scan (with no files copied) would take only 45 minutes to run. The big improvement was being able to checkpoint the input queues (directories to be scanned) and the output queues (files/directories identified to be copied/created), so the backup could be stopped and restarted without losing the scan. NFS access was fast enough on both source and destination (after tuning - 64 NFS server threads on the recipient; the NetApp was plenty fast at NFS). Optimum transfer with both scanning and copying used 12 threads for scans and two threads to copy data. More than two threads copying data would saturate the NAS/network, and the NAS was still in use during the backup. The 12 threads could identify a HUGE number of files (at the beginning) very quickly, so I had to provide throttle controls as well: if the output queue got over 50,000 files/directories, each scanning thread would stop when its current directory scan finished, and when the output queue fell below 5,000 they would start again. I had the process checkpoint every 6 hours during the migration. Once a single copy was completed, it would start again - and any new/modified files not already copied would be done.

A bit of a pain, but the checkpoint capability ensured that all the data would be copied even if there were system failures/reboots of any system involved. (The problem with rsync was that a reboot or problem on any system would require a complete restart - if the source NAS rebooted... or the destination rebooted...) So once the script had made a first copy, it only took 45 minutes to identify new files (usually fewer than 100), plus the copying time for those few files (which would start while scanning was still running). Not as flexible as rsync, but at least it would work.

One last note - your disk may have had a bit of a vibration issue with the cable. It worked until the signals stopped. So it worked... until the failure. You just can't tell how long that would take. Cleaning the contacts likely fixed it - just by unplugging/replugging. Using a new cable helps too.

 
  

