Old 08-14-2010, 09:23 AM   #1
The Jester
LQ Newbie
 
Registered: Apr 2010
Posts: 2

Rep: Reputation: 0
Bad Sectors on mdadm RAID 5 array


Hi all,

I'm running a Debian home server with a 3-disk (1GB each) RAID 5 array managed by mdadm (the OS is on a separate disk).
Now smartmontools has noticed some bad sectors on one of the disks, and I'm not sure what to do next (apart from backing up valuable data).
I found some articles on how to fix these sectors, but I don't know what the result would be for the array as a whole.
What should I do? Thanks in advance!

(I'm not a Linux expert; I only started tinkering with it a few months ago.)
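For reference, this is roughly the kind of check that flags the problem; /dev/sdb here is just a placeholder device name:

Code:
smartctl -H /dev/sdb    # overall SMART health verdict
# the two attributes that usually signal bad sectors
smartctl -A /dev/sdb | egrep 'Reallocated_Sector|Current_Pending'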
 
Old 08-15-2010, 01:59 AM   #2
xeleema
Member
 
Registered: Aug 2005
Location: D.i.t.h.o, Texas
Distribution: Slackware 13.x, rhel3/5, Solaris 8-10(sparc), HP-UX 11.x (pa-risc)
Posts: 987
Blog Entries: 4

Rep: Reputation: 248
Greetingz!

Well, for starters, I have to strongly suggest that if you have *anything* important on the array, you should already have made a backup.
As for taking care of the bad sectors: you should just be able to unmount the filesystem(s) on the RAID 5 (for safety), then run a full fsck to pick up and remap any bad sectors.
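A minimal sketch of that, assuming the array is /dev/md0 with an ext3 filesystem mounted at /mnt/data (both names are hypothetical; adjust to your setup):

Code:
# unmount the filesystem on the array first, for safety
umount /mnt/data
# -c runs a read-only badblocks scan and adds any bad blocks to the
# filesystem's bad-block inode; -y answers "yes" to repair prompts
e2fsck -c -y /dev/md0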

If that doesn't work, then as long as you're sure which device is reporting the problem, you can rip it out of the RAID 5, reformat it, and add it back. By "reformat" I specifically mean: run mkfs on it, then do a full fsck on the disk (check the man pages for the options you will need).
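On the mdadm side, that would look roughly like this; /dev/md0 and /dev/sdc are placeholders, not names from this thread:

Code:
mdadm /dev/md0 --fail /dev/sdc     # mark the suspect member as failed
mdadm /dev/md0 --remove /dev/sdc   # pull it out of the array
# ... reformat / surface-test the disk here ...
mdadm /dev/md0 --add /dev/sdc      # re-add it; md rebuilds from parity
cat /proc/mdstat                   # watch the resync progress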

If you're new to Linux, I'd like to pass on one major tip:

Read the "man" pages for the various commands you see used in your Google results. Not every Linux distribution behaves in the *exact* same way (for example, Red Hat-based distros vs. Debian-based ones).

One more thing: grab O'Reilly's "Essential System Administration, Third Edition". It'll give you a great start on some of the "common good practice" habits that make UNIX/Linux system administration really easy.

Good Luck!

P.S.: If this helps, click the "Thanks" button at the bottom-right of this post.
 
Old 08-17-2010, 06:44 AM   #3
xeleema
Member
 
Registered: Aug 2005
Location: D.i.t.h.o, Texas
Distribution: Slackware 13.x, rhel3/5, Solaris 8-10(sparc), HP-UX 11.x (pa-risc)
Posts: 987
Blog Entries: 4

Rep: Reputation: 248
The_Jester,
Thank you for marking the thread as "[SOLVED]". However, I was wondering: what exactly was the solution?
If I helped, I'd really appreciate a click on that "Thanks" button in the bottom right-hand corner of whichever post helped out the most.

Have a good one!
 
Old 08-17-2010, 07:13 AM   #4
The Jester
LQ Newbie
 
Registered: Apr 2010
Posts: 2

Original Poster
Rep: Reputation: 0
xeleema, I'd really like to, but I don't see a Thanks button (only a Quote one).

What I did was mark the drive as faulty and remove it from the mdadm array, then I formatted it and let the array rebuild overnight.
No errors reported yet.
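For anyone checking afterwards, this is roughly how to confirm the rebuild finished cleanly; the device names are placeholders:

Code:
mdadm --detail /dev/md0          # look for "State : clean" and no failed devices
smartctl -t long /dev/sdb        # full surface scan of the reformatted disk
smartctl -l selftest /dev/sdb    # read the result once the test finishes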
 
Old 08-21-2010, 05:47 AM   #5
tg1000
LQ Newbie
 
Registered: Aug 2010
Posts: 1

Rep: Reputation: 0
Hi!

I've got a similar problem. I'm running Ubuntu with a 6-disk mdadm RAID 5 array (OS on a different drive). Yesterday two of the disks started reporting bad sectors (and the RAID, of course, cannot assemble with two disks missing):

root@sandman:~# smartctl -a /dev/sdf
<...>
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Short offline       Completed: read failure       10%       4383        2930270838
<...>

...and the same for /dev/sdg.

I've read a bit of this howto: smartmontools.sourceforge.net/badblockhowto.html
but I'm not sure how to apply it to an mdadm RAID.
Is there _any_ way to remove the bad sectors on the RAID drives and remount the RAID without losing _all_ of the data?
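The usual first moves for a two-disk RAID 5 failure look something like the sketch below. It is heavily hedged: the array name /dev/md0 and the member list /dev/sd[b-g] are guesses, and --force is a last resort that can lose data.

Code:
# compare event counters and states across all members
mdadm --examine /dev/sd[b-g] | egrep 'Event|State'
# if a disk barely reads, image it with GNU ddrescue first and work on the copy
# if the members are mostly intact, a forced assembly may bring the
# array up long enough to copy everything off
mdadm --assemble --force /dev/md0 /dev/sd[b-g]
mount -o ro /dev/md0 /mnt    # mount read-only and back up immediately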

Edit: both disks report LBA_of_first_error 2930270838. Strange coincidence?

Last edited by tg1000; 08-21-2010 at 07:32 AM.
 
  

