LinuxQuestions.org
Old 10-12-2012, 08:41 AM   #1
depam
Member
 
Registered: Sep 2005
Posts: 861

Rep: Reputation: 30
mdadm shows additional drive after inserting new disk


We were recently asked to clone a mirrored drive, and downtime is difficult or impossible to get. Our plan was to break the mirror of the RAID array (we know this is a bad idea), but we did it anyway.

The 2nd disk that was removed from the first server booted fine on the second server, and all that remained was to rebuild the array there. However, to keep at least one good copy of the data, we decided not to rebuild on the 2nd server until the original server had a good mirror again.

The first server shows as degraded in /proc/mdstat, with /dev/sdb as the failed drive. We inserted the new disk and rescanned the bus, but it shows up as /dev/sdc instead of /dev/sdb. Since rebooting to fix that was not an option, we added sdc to the array; the rebuild has completed, and we installed grub on it as well. We did this so that we would at least have redundancy in case of another immediate drive failure. However, mdstat now shows three drives: /dev/sda, /dev/sdb and /dev/sdc (/dev/sdb showing as failed, /dev/sda and /dev/sdc as active).

What I am not sure about is this: if we get the chance to reboot the server, will the new disk be detected as /dev/sdb on the next boot, and will the RAID assemble automatically? Or will it still appear as /dev/sdc, with /dev/sdb gone? Has anyone encountered this issue, and how did you normalize it? Thanks.
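For reference, the sequence described above can be sketched roughly as below. The array name (/dev/md0) and partition names are assumptions; substitute your real devices. The script only touches the array if it actually exists, so it is safe to read through on another machine:

```shell
#!/bin/sh
# Sketch of the drive-replacement sequence described above.
# ARRAY, FAILED and NEWDISK are assumptions -- use your real device names.
ARRAY=/dev/md0
FAILED=/dev/sdb1
NEWDISK=/dev/sdc1

if [ -b "$ARRAY" ]; then
    mdadm "$ARRAY" --fail   "$FAILED"    # mark the dead member as failed
    mdadm "$ARRAY" --remove "$FAILED"    # pull it out of the array
    mdadm "$ARRAY" --add    "$NEWDISK"   # start the resync onto the new disk
    grub-install /dev/sdc                # make the replacement disk bootable
    status="resyncing"
else
    status="array not present"           # not on the affected server
fi
echo "$status"

# Watch rebuild progress with:  cat /proc/mdstat
```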
 
Old 10-13-2012, 03:45 PM   #2
droyden
Member
 
Registered: Feb 2007
Location: UK
Posts: 150

Rep: Reputation: 19
mdadm works with UUIDs, so the device name within Linux doesn't matter; it should show up as sdb after a reboot. However, if the RAID has already rebuilt, there is no real requirement to reboot.
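To illustrate the point about UUIDs: mdadm matches members by the UUID stored in each disk's md superblock, not by the /dev/sdX name, which is why the new disk enumerating as sdc instead of sdb is harmless. The output below is a made-up sample; on the real server you would run `mdadm --detail --scan`:

```shell
#!/bin/sh
# Illustrative (fabricated) line of the kind `mdadm --detail --scan` prints.
scan_output='ARRAY /dev/md0 metadata=0.90 UUID=3aaf68f2:29bd4f6b:9e7886a2:3d2f52e1'

# Extract the UUID -- this is what mdadm matches at assembly time,
# regardless of how the kernel named the disk on this particular boot.
uuid=$(echo "$scan_output" | sed -n 's/.*UUID=\([0-9a-f:]*\).*/\1/p')
echo "$uuid"
```

Putting that `ARRAY ... UUID=...` line into /etc/mdadm.conf is what lets the array assemble automatically at boot.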
 
Old 10-13-2012, 09:18 PM   #3
depam
Member
 
Registered: Sep 2005
Posts: 861

Original Poster
Rep: Reputation: 30
Hi droyden,

Thanks a lot. We will need to arrange a reboot, then, to find out. Will the array still be considered rebuilt, or will it need to rebuild again after the reboot?
 
Old 10-14-2012, 02:51 PM   #4
droyden
Member
 
Registered: Feb 2007
Location: UK
Posts: 150

Rep: Reputation: 19
Nope, it's all good; it won't need to rebuild again.
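A completed rebuild is recorded in the superblocks and persists across reboots; after booting, the array state should read "clean" (or "active"). The output below is a fabricated sample; on a real box you would check with `mdadm --detail /dev/md0 | grep State`:

```shell
#!/bin/sh
# Illustrative (fabricated) State line from `mdadm --detail /dev/md0`.
detail='          State : clean'

# A "clean" or "active" state after reboot means no new rebuild is needed;
# "degraded" or "recovering" would mean otherwise.
state=$(echo "$detail" | awk -F': ' '{print $2}')
echo "$state"
```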
 
  

