Old 07-31-2014, 08:19 AM   #1
Foxhound
LQ Newbie
 
Registered: Jul 2014
Posts: 17

Rep: Reputation: Disabled
[SOLVED] Hard drive was replaced, now I can't mkfs it because it's in use?


Hi all,

I am very new to Linux, and so far I have managed to do what I needed by searching and reading a lot, but now I am stuck, so I decided to register and see if someone here can offer me some help.

On my dedicated server the second HD was defective, so the host replaced it. I use this 2nd hard drive for some local backups.
But today I am trying to run mkfs on this new drive, and no matter what I do it keeps telling me the drive is in use by the system (sdb1 is apparently in use by the system).
But it is not in use. I checked with the following commands and nothing tells me sdb1 is in use at all:
lsof /dev/sdb1
lsof /dev/sdb
fuser /dev/sdb1
fuser /dev/sdb
cat /etc/mtab (only outputs stuff about sda/sda1)
mount


I have already partitioned this new HD (it's listed as sdb1); just creating the filesystem on it does not work, and it keeps giving me the error that it's in use.


What could be the cause is that when my server host replaced this HD (and rebooted the server), /etc/fstab still contained a mount point for sdb1.
I have removed it from there now, but I do not really want to reboot the server without knowing whether the fstab reference is actually the cause.
Maybe the system just keeps thinking sdb1 is already mounted (although it is not) because of that line still being there when the reboot was done.
So, if that is actually the cause, I am wondering if I can fix it somehow without rebooting the box completely?


[add on]
Forgot to say, my OS is CentOS.

Last edited by Foxhound; 08-06-2014 at 12:13 AM.
 
Old 07-31-2014, 09:10 AM   #2
yancek
LQ Guru
 
Registered: Apr 2008
Distribution: PCLinux, Slackware
Posts: 5,582

Rep: Reputation: 907
My understanding is that changes in fstab do not take effect until reboot. There is no need to delete the line; just leave it there and comment it out by placing a # at the beginning of the line.
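For example, a commented-out entry would look something like this (a hypothetical line; your actual sdb1 entry will differ):
Code:
# /dev/sdb1   /backup   ext4   defaults   0 2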
 
Old 07-31-2014, 10:05 AM   #3
rknichols
Senior Member
 
Registered: Aug 2009
Distribution: CentOS
Posts: 1,991

Rep: Reputation: 822
It's surprising that the system would boot at all with that fstab line present unless one of the options was "noauto".

/etc/mtab is not definitive. It's just a file that the mount command tries to maintain. See if there is any reference to sdb in /proc/mounts.
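For example (any user can run this; it just searches the kernel's mount table):
Code:
grep sdb /proc/mounts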

It's also possible that the disk contained something other than a mountable filesystem that the system is grabbing hold of. See what you get from
Code:
grep sdb /proc/partitions
file -sk /dev/sdb*
 
Old 07-31-2014, 12:34 PM   #4
Foxhound
LQ Newbie
 
Registered: Jul 2014
Posts: 17

Original Poster
Rep: Reputation: Disabled
Thank you both for the replies/info!


Quote:
Originally Posted by rknichols View Post
It's surprising that the system would boot at all with that fstab line present unless one of the options was "noauto".
No, nothing like noauto at all, at least not in the fstab file, if that's what you mean.


Quote:
/etc/mtab is not definitive. It's just a file that the mount command tries to maintain. See if there is any reference to sdb in /proc/mounts.
I am logged in as root, but it shows me a permission denied error when I use /proc/mounts.


Quote:
See what you get from
Code:
grep sdb /proc/partitions
Outputs:

8 16 732574584 sdb
8 17 1 sdb1

Quote:
Code:
file -sk /dev/sdb*
Outputs:
/dev/sdb: x86 boot sector; partition 1: ID=0x5, starthead 1, startsector 63, 1465144002 sectors, extended partition table (last)\011\012-
/dev/sdb1: x86 boot sector
 
Old 07-31-2014, 03:10 PM   #5
rknichols
Senior Member
 
Registered: Aug 2009
Distribution: CentOS
Posts: 1,991

Rep: Reputation: 822
Quote:
Originally Posted by Foxhound View Post
I am logged in as root but it shows me the error permissions are denied when I use /proc/mounts.
What did you try to do, execute it?? It's not executable. Just "grep sdb /proc/mounts". (Any user can do that.)

By any chance does "dmsetup deps" show anything with a dependency on sdb or its partitions (device numbers 8,16 through 8,31)?
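For example, something like this lists the device-mapper dependencies and filters for that major/minor range (a sketch; the exact spacing of the output may vary):
Code:
dmsetup deps | grep -E '\(8, (1[6-9]|2[0-9]|3[01])\)'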

If nothing shows up, I don't see any alternative to rebooting with the offending line deleted, or commented out, from /etc/fstab.
 
Old 07-31-2014, 03:33 PM   #6
Foxhound
LQ Newbie
 
Registered: Jul 2014
Posts: 17

Original Poster
Rep: Reputation: Disabled
Quote:
Originally Posted by rknichols View Post
What did you try to do, execute it?? It's not executable. Just "grep sdb /proc/mounts". (Any user can do that.)
Uhm, yes I did... like I said, I am very much a noob with this Linux stuff. I thought it would output something.
Anyway, I ran "grep sdb /proc/mounts" and it came back with nothing.


Quote:
By any chance does "dmsetup deps" show anything with a dependency on sdb or its partitions (device numbers 8,16 through 8,31)?
Output:
ddf1_RAID10: 1 dependencies : (8, 16)


Quote:
If nothing shows up, I don't see any alternative to rebooting with the offending line deleted, or commented out, from /etc/fstab.
I fear you are right about that. I was hoping I could get it fixed without a reboot, but it seems that is not going to work.
I appreciate all your help though; I learned some new things, and I finally officially registered here.
 
Old 07-31-2014, 03:59 PM   #7
rknichols
Senior Member
 
Registered: Aug 2009
Distribution: CentOS
Posts: 1,991

Rep: Reputation: 822
Quote:
Originally Posted by Foxhound View Post
Output:
ddf1_RAID10: 1 dependencies : (8, 16)
That says that /dev/sdb is being detected as a member of a RAID array. Presuming that you are not using RAID, see if running "mdadm --stop ddf1_RAID10" will release the device. Once the device is free ("dmsetup deps" no longer shows that entry), zero out the beginning of the drive with "dd if=/dev/zero of=/dev/sdb count=4096" to get rid of any RAID label, then repartition it and make your filesystem.

Last edited by rknichols; 07-31-2014 at 04:01 PM.
 
Old 07-31-2014, 04:35 PM   #8
Foxhound
LQ Newbie
 
Registered: Jul 2014
Posts: 17

Original Poster
Rep: Reputation: Disabled
Quote:
Originally Posted by rknichols View Post
That says that /dev/sdb is being detected as a member of a RAID array. Presuming that you are not using RAID, see if running "mdadm --stop ddf1_RAID10" will release the device.

No, this system is not using RAID. Just 2 separate drives.
I ran the above command, but it just gives me an error: mdadm: error opening ddf1_RAID10: No such file or directory
 
Old 07-31-2014, 05:21 PM   #9
rknichols
Senior Member
 
Registered: Aug 2009
Distribution: CentOS
Posts: 1,991

Rep: Reputation: 822
Does "mdadm --query --verbose /dev/sdb" yield anything?
 
Old 07-31-2014, 05:55 PM   #10
Foxhound
LQ Newbie
 
Registered: Jul 2014
Posts: 17

Original Poster
Rep: Reputation: Disabled
Quote:
Originally Posted by rknichols View Post
Does "mdadm --query --verbose /dev/sdb" yield anything?

Output:
/dev/sdb: is not an md array
/dev/sdb: No md super block found, not an md component.


No idea what that means
 
Old 07-31-2014, 07:26 PM   #11
rknichols
Senior Member
 
Registered: Aug 2009
Distribution: CentOS
Posts: 1,991

Rep: Reputation: 822
I don't know what is going on, then. There is something called "ddf1_RAID10" claiming use of /dev/sdb (major device 8, minor 16), but it's apparently not an md RAID array. A more ham-fisted approach would be
Code:
dmsetup --force remove ddf1_RAID10
which, if it succeeds, should give you the device back again, but I'm not sure what that might do to whatever thinks it's using the drive.

Eventually, you're going to have to reboot to get everything in a sane state again. I recommend zeroing the first 2 MB of the drive as above (dd if=/dev/zero of=/dev/sdb count=4096) to clear whatever signatures are there first.
 
Old 07-31-2014, 10:59 PM   #12
Foxhound
LQ Newbie
 
Registered: Jul 2014
Posts: 17

Original Poster
Rep: Reputation: Disabled
Quote:
Originally Posted by rknichols View Post
but I'm not sure what that might do to whatever thinks it's using the drive.
That sounds a bit worrying to me. I need this server to stay up, so if I am not sure what happens when I run that command, I am not sure I should.
Can running that command damage (fubar) anything on sdb1?
The server itself runs without issues anyway; I just do not have any local backup space now, but I can use another server I have as a remote backup.

I don't understand why this happens at all anyway; it's a fresh new disk. I think I will contact support about it. The fact that it refers to some RAID10 makes it look to me as if the disk they gave me was already used, and that what's on it is causing these issues.
 
Old 07-31-2014, 11:34 PM   #13
rknichols
Senior Member
 
Registered: Aug 2009
Distribution: CentOS
Posts: 1,991

Rep: Reputation: 822
Yes, I'm sure it was previously used. It was partitioned, and the file command showed an x86 boot sector both in the MBR and in the boot sector of partition 1. There is definitely data on there, and something about it is confusing Linux.

Why are you concerned about hurting the data on sdb1? It's not your data. I recommend zeroing out the first 2 MB of the disk anyway. And, now that I think about it, it would be a good idea to zero out the last 2 MB as well, since old versions of RAID stored their superblocks at the very end of the disk. (That got changed because it causes various problems.) My concern about "whatever thinks it's using the drive" is for whatever kernel module has its claws into that drive preventing you from using it. I doubt anything terrible (i.e., system crash) would happen, because it clearly doesn't have a full RAID array to play with. You might need a reboot to get the system state completely clean again.
 
Old 08-01-2014, 12:50 AM   #14
Foxhound
LQ Newbie
 
Registered: Jul 2014
Posts: 17

Original Poster
Rep: Reputation: Disabled
Quote:
Originally Posted by rknichols View Post
Why are you concerned about hurting the data on sdb1? It's not your data.
Sorry, that's a typo. I meant I am worried running the command might damage my data on sda/sda1, but that's purely because I do not know the command and because of your own worries.
So (and this is something I don't have any knowledge about), if I run that "dmsetup --force remove ddf1_RAID10", what could happen to the kernel if it is using it? Could I fix issues with the kernel, or would causing issues with the kernel bring the box down?


Quote:
I recommend zeroing out the first 2 MB of the disk anyway. And, now that I think about it, it would be a good idea to zero out the last 2 MB as well, since old versions of RAID stored their superblocks at the very end of the disk.
So I run dd if=/dev/zero of=/dev/sdb count=4096 for the first 2 MB, and excuse me for this probably really stupid question, but what would be the command for the last 2 MB? It is a 750 GB drive.
(4096 to me looks like 4 MB, so I am a bit confused about what value you are actually referring to.)
 
Old 08-01-2014, 09:48 AM   #15
rknichols
Senior Member
 
Registered: Aug 2009
Distribution: CentOS
Posts: 1,991

Rep: Reputation: 822
If you forcibly remove the ddf1_RAID10 device, whatever part of the kernel tries to access it will receive an I/O error. That's also what would happen if someone were to unplug the device while the system is running.

The default block size for dd is 512 bytes, so a count of 4096 there works out to 4096 × 512 bytes = 2 MB.

I see from /proc/partitions that sdb has 732574584 1 KB blocks, so
Code:
dd if=/dev/zero bs=1024 seek=$((732574584-2048)) of=/dev/sdb
would zero out the last 2 MB (and end with a "no space on device" message).
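If you would rather not copy the size out of /proc/partitions by hand, something like this should compute it for you (a sketch, assuming the blockdev utility from util-linux is installed):
Code:
# total drive size in 1 KB blocks, taken from the kernel
SIZE_KB=$(( $(blockdev --getsize64 /dev/sdb) / 1024 ))
# zero the last 2 MB (2048 x 1 KB blocks); ends with a "no space left on device" message
dd if=/dev/zero of=/dev/sdb bs=1024 seek=$(( SIZE_KB - 2048 ))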
 
  

