LinuxQuestions.org > Linux - Hardware (https://www.linuxquestions.org/questions/linux-hardware-18/)
[SOLVED] Hard drive was replaced, now I can't mkfs it because it's in use? (https://www.linuxquestions.org/questions/linux-hardware-18/%5Bsolved%5D-harddrive-was-replaced-now-i-cant-mkfs-it-cause-its-in-use-4175512848/)

Foxhound 07-31-2014 08:19 AM

[SOLVED] Hard drive was replaced, now I can't mkfs it because it's in use?
 
Hi all,

I am very much a novice with Linux. So far I have managed to do what I needed by searching and reading a lot, but now I am stuck, so I decided to register and see if someone here can offer me some help.

On my dedicated server the second HD was defective, so the host replaced it. I use this second hard drive for some local backups.
But today, when I try to run mkfs on this new drive, no matter what I do it keeps telling me the drive is in use by the system (sdb1 is apparently in use by the system).
But it is not in use. I checked with the following commands and nothing tells me sdb1 is in use at all:
lsof /dev/sdb1
lsof /dev/sdb
fuser /dev/sdb1
cat /etc/mtab (only outputs stuff about sda/sda1)
mount


I have already partitioned this new HD (it's listed as sdb1); just creating the filesystem on it does not work, and it keeps giving me the error that it's in use.


What could be the cause: when my server host replaced this HD (and rebooted the server), there was still a mount point for sdb1 in /etc/fstab.
I have removed it from there now, but I do not really want to reboot the server without knowing that the fstab reference is actually the cause.
Maybe the system just keeps thinking sdb1 is already mounted (although it is not) because that line was still there when the reboot was done.
So, if that is actually the cause, I am wondering if I can fix it somehow without rebooting the box completely?


[add on]
Forgot to say, my OS is CentOS.

yancek 07-31-2014 09:10 AM

My understanding is that changes in fstab do not take effect until a reboot. There is no need to delete the line; just leave it there and comment it out by placing a # at the beginning of the line.
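A commented-out entry would look something like this (the mount point and filesystem type below are only placeholders, since the actual line from /etc/fstab is never shown in this thread):
Code:

#/dev/sdb1   /backup   ext3   defaults   1 2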

rknichols 07-31-2014 10:05 AM

It's surprising that the system would boot at all with that fstab line present unless one of the options was "noauto".

/etc/mtab is not definitive. It's just a file that the mount command tries to maintain. See if there is any reference to sdb in /proc/mounts.

It's also possible that the disk contained something other than a mountable filesystem that the system is grabbing hold of. See what you get from
Code:

grep sdb /proc/partitions
file -sk /dev/sdb*


Foxhound 07-31-2014 12:34 PM

Thank you both for the replies/info! :)


Quote:

Originally Posted by rknichols (Post 5212556)
It's surprising that the system would boot at all with that fstab line present unless one of the options was "noauto".

No, nothing like noauto at all, at least not in the fstab file, if that's what you mean.


Quote:

/etc/mtab is not definitive. It's just a file that the mount command tries to maintain. See if there is any reference to sdb in /proc/mounts.
I am logged in as root, but it shows me a permission denied error when I use /proc/mounts.


Quote:

See what you get from
Code:

grep sdb /proc/partitions

Outputs:

8 16 732574584 sdb
8 17 1 sdb1

Quote:

Code:

file -sk /dev/sdb*

Outputs:
/dev/sdb: x86 boot sector; partition 1: ID=0x5, starthead 1, startsector 63, 1465144002 sectors, extended partition table (last)\011\012-
/dev/sdb1: x86 boot sector

rknichols 07-31-2014 03:10 PM

Quote:

Originally Posted by Foxhound (Post 5212620)
I am logged in as root, but it shows me a permission denied error when I use /proc/mounts.

:scratch::scratch::scratch::scratch: What did you try to do, execute it?? It's not executable. Just "grep sdb /proc/mounts". (Any user can do that.)

By any chance does "dmsetup deps" show anything with a dependency on sdb or its partitions (device numbers 8,16 through 8,31)?

If nothing shows up, I don't see any alternative to rebooting with the offending line deleted, or commented out, from /etc/fstab.
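For reference, the two checks together (both are read-only and safe to run as any user):
Code:

# the kernel's own mount table; unlike /etc/mtab it cannot be stale
grep sdb /proc/mounts
# device-mapper devices and the (major, minor) numbers they depend on;
# sdb and its partitions are major 8, minors 16 through 31
dmsetup deps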

Foxhound 07-31-2014 03:33 PM

Quote:

Originally Posted by rknichols (Post 5212700)
:scratch::scratch::scratch::scratch: What did you try to do, execute it?? It's not executable. Just "grep sdb /proc/mounts". (Any user can do that.)

Uhm, yes I did... like I said, I am very much a noob with this Linux stuff. I thought it would output something. :)
Anyway, I ran "grep sdb /proc/mounts" and it came back with nothing.


Quote:

By any chance does "dmsetup deps" show anything with a dependency on sdb or its partitions (device numbers 8,16 through 8,31)?
Output:
ddf1_RAID10: 1 dependencies : (8, 16)


Quote:

If nothing shows up, I don't see any alternative to rebooting with the offending line deleted, or commented out, from /etc/fstab.
I fear you are right about that. I was hoping I could get it fixed without a reboot, but it seems that is not going to work.
I appreciate all your help though; I learned some new things and I finally officially registered here ;)

rknichols 07-31-2014 03:59 PM

Quote:

Originally Posted by Foxhound (Post 5212731)
Output:
ddf1_RAID10: 1 dependencies : (8, 16)

That says that /dev/sdb is being detected as a member of a RAID array. Presuming that you are not using RAID, see if running "mdadm --stop ddf1_RAID10" will release the device. Once the device is free ("dmsetup deps" no longer shows that entry), zero out the beginning of the drive with "dd if=/dev/zero of=/dev/sdb count=4096" to get rid of any RAID label, then repartition it and make your filesystem.
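Laid out as a sequence it would be roughly as follows; the partitioning tool and filesystem type at the end are examples only, since the thread never says which ones are in use:
Code:

# (optional, read-only) see what the device-mapper entry holding sdb actually maps
dmsetup table ddf1_RAID10
dmsetup info ddf1_RAID10
# try to stop a RAID array with that name; mdadm normally expects an md device
# such as /dev/mdN, so this step may fail if ddf1_RAID10 is not an md array
mdadm --stop ddf1_RAID10
# confirm the disk is free: ddf1_RAID10 should no longer be listed
dmsetup deps
# wipe the first 2 MB (4096 x 512-byte blocks) to destroy any leftover RAID label
dd if=/dev/zero of=/dev/sdb count=4096
# repartition and create the filesystem (example tools/type)
fdisk /dev/sdb
mkfs -t ext4 /dev/sdb1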

Foxhound 07-31-2014 04:35 PM

Quote:

Originally Posted by rknichols (Post 5212751)
That says that /dev/sdb is being detected as a member of a RAID array. Presuming that you are not using RAID, see if running "mdadm --stop ddf1_RAID10" will release the device.


No, this system is not using RAID, just 2 separate drives.
I ran the above command, but it just gives me an error: mdadm: error opening ddf1_RAID10: No such file or directory

rknichols 07-31-2014 05:21 PM

Does "mdadm --query --verbose /dev/sdb" yield anything?

Foxhound 07-31-2014 05:55 PM

Quote:

Originally Posted by rknichols (Post 5212783)
Does "mdadm --query --verbose /dev/sdb" yield anything?


Output:
/dev/sdb: is not an md array
/dev/sdb: No md super block found, not an md component.


No idea what that means :)

rknichols 07-31-2014 07:26 PM

I don't know what is going on, then. There is something called "ddf1_RAID10" claiming use of /dev/sdb (major device 8, minor 16), but it's apparently not an md RAID array. A more ham-fisted approach would be
Code:

dmsetup --force remove ddf1_RAID10
which, if it succeeds, should give you the device back again, but I'm not sure what that might do to whatever thinks it's using the drive.

Eventually, you're going to have to reboot to get everything in a sane state again. I recommend zeroing the first 2 MB of the drive as above (dd if=/dev/zero of=/dev/sdb count=4096) to clear whatever signatures are there first.
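The ddf1_ prefix usually means the mapping was created at boot by dmraid from leftover BIOS/DDF fake-RAID metadata on the disk. If the dmraid tool happens to be installed (the thread never confirms that it is), it offers a more targeted way to deactivate the mapping and erase that metadata; a sketch, to be checked against the dmraid man page on your system:
Code:

# list fake-RAID sets dmraid has discovered
dmraid -r
# deactivate all dmraid-activated sets, freeing the underlying disks
dmraid -an
# erase the fake-RAID metadata from the disk itself
dmraid -r -E /dev/sdb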

Foxhound 07-31-2014 10:59 PM

Quote:

Originally Posted by rknichols (Post 5212834)
but I'm not sure what that might do to whatever thinks it's using the drive.

That sounds a bit worrying to me. I need this server to stay up, so if I am not sure what happens when I run that command, I am not sure I should :)
Can running that command damage (fubar) anything on sdb1?
The server itself runs without issues anyway; I just do not have any local backup space right now, but I can use another server I have as a remote backup.

I don't understand why this happens at all; it's supposed to be a new, fresh disk. I think I will contact support about it. The fact that it refers to some RAID10 makes it look as if the disk they gave me was already used, and whatever is on it is causing these issues.

rknichols 07-31-2014 11:34 PM

Yes, I'm sure it was previously used. It was partitioned, and the file command showed an x86 boot sector both in the MBR and in the boot sector of partition 1. There is definitely data on there, and something about it is confusing Linux.

Why are you concerned about hurting the data on sdb1? It's not your data. I recommend zeroing out the first 2 MB of the disk anyway. And, now that I think about it, it would be a good idea to zero out the last 2 MB as well, since old versions of RAID stored their superblocks at the very end of the disk. (That got changed because it causes various problems.) My concern about "whatever thinks it's using the drive" is for whatever kernel module has its claws into that drive preventing you from using it. I doubt anything terrible (i.e., system crash) would happen, because it clearly doesn't have a full RAID array to play with. You might need a reboot to get the system state completely clean again.
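Where util-linux's wipefs is available (it may not be on older CentOS releases), it can report and erase the known filesystem and RAID signatures wherever they sit on the disk, which covers both the start and the end in one step; a possible alternative once the device-mapper mapping has been released:
Code:

# list any filesystem/RAID signatures that are recognized on the disk
wipefs /dev/sdb
# erase all of them
wipefs -a /dev/sdb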

Foxhound 08-01-2014 12:50 AM

Quote:

Originally Posted by rknichols (Post 5212930)
Why are you concerned about hurting the data on sdb1? It's not your data.

Sorry, that's a typo. I meant I am worried that running the command might damage my data on sda/sda1, but that's purely because I do not know the command, and because of your own worries.
So (and this is something I don't have any knowledge about), if I run that "dmsetup --force remove ddf1_RAID10", what could happen to the kernel if it is using it? Could I fix issues with the kernel afterwards, or would causing issues with the kernel bring the box down?


Quote:

I recommend zeroing out the first 2 MB of the disk anyway. And, now that I think about it, it would be a good idea to zero out the last 2 MB as well, since old versions of RAID stored their superblocks at the very end of the disk.
So I run dd if=/dev/zero of=/dev/sdb count=4096 for the first 2 MB, and excuse me for what is probably a really stupid question, but what would be the command for the last 2 MB? It is a 750 GB drive.
(4096 looks to me like 4 MB, so I am a bit confused about what value you are actually referring to.)

rknichols 08-01-2014 09:48 AM

If you forcibly remove the ddf1_RAID10 device, whatever part of the kernel tries to access it will receive an I/O error. That's also what would happen if someone were to unplug the device while the system is running.

The default blocksize for dd is 512 bytes, so a count of 4096 there would be 2 MB.
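Written out, the arithmetic is:
Code:

echo $(( 4096 * 512 ))    # 2097152 bytes = 2 MiB written by "count=4096"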

I see from /proc/partitions that sdb has 732574584 1 KB blocks, so
Code:

dd if=/dev/zero bs=1024 seek=$((732574584-2048)) of=/dev/sdb
would zero out the last 2 MB (and end with a "no space on device" message).
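If you would rather not copy the block count out of /proc/partitions by hand, the kernel can be asked for the disk size in 512-byte sectors (blockdev ships with util-linux, so it should already be present):
Code:

# total size of the disk in 512-byte sectors
blockdev --getsz /dev/sdb
# seek to 4096 sectors (2 MB) before the end and zero everything from there on
dd if=/dev/zero of=/dev/sdb bs=512 seek=$(( $(blockdev --getsz /dev/sdb) - 4096 ))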

