[SOLVED] Hard drive was replaced, now I can't run mkfs because it's in use?
Hi all,
I am very much a novice with Linux, and so far I have managed to do what I needed by searching and reading a lot, but now I am stuck, so I decided to register and see if someone here can offer me some help. The second HD in my dedicated server was defective, so the host replaced it. I use this second hard drive for some local backups. Today I tried to run mkfs on the new drive, but no matter what I do it keeps telling me the drive is in use by the system (sdb1 is apparently in use by the system). But it is not in use. I checked with the following commands and nothing tells me sdb1 is in use at all:

lsof /dev/sdb1
lsof /dev/sdb
fuser /dev/sdb1
fuser /dev/sdb
cat /etc/mtab (only outputs stuff about sda/sda1)
mount

I have already partitioned the new drive (it is listed as sdb1); it is only creating the filesystem on it that fails with the "in use" error.

What might be the cause: when my server host replaced the HD (and rebooted the server), /etc/fstab still contained a mount point for sdb1. I have removed it now, but I do not really want to reboot the server without knowing the fstab entry is actually the cause. Maybe the system just keeps thinking sdb1 is already mounted (although it is not) because that line was still there when the reboot was done. So, if that is actually the cause, I am wondering if I can fix it somehow without rebooting the box completely?

[add on] Forgot to say, my OS is CentOS.
My understanding is that fstab is only read when mount is invoked, at boot or via "mount -a", so a stale entry does not by itself keep a device busy. There is no need to delete the line; just comment it out by placing a # at the beginning of the line.
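To see what commenting the entry looks like, here is a sketch done on a scratch copy (the mount point and options below are made-up examples, not taken from this system; on the real box you would edit /etc/fstab itself, after making a backup copy):

```shell
# Write a made-up fstab to a scratch file so nothing real is touched.
cat > /tmp/fstab.test <<'EOF'
/dev/sda1  /        ext3  defaults  1 1
/dev/sdb1  /backup  ext3  defaults  1 2
EOF

# Prefix the sdb1 line with '#' so mount will skip it:
sed -i 's|^/dev/sdb1|#/dev/sdb1|' /tmp/fstab.test

grep sdb1 /tmp/fstab.test    # -> #/dev/sdb1  /backup  ext3  defaults  1 2
```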
It's surprising that the system would boot at all with that fstab line present unless one of the options was "noauto".
/etc/mtab is not definitive. It's just a file that the mount command tries to maintain. See if there is any reference to sdb in /proc/mounts. It's also possible that the disk contains something other than a mountable filesystem that the system is grabbing hold of. See what you get from:
Code:
grep sdb /proc/partitions
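For what it's worth, the individual checks mentioned in this thread can be run as one quick sweep; a sketch (lsof and fuser may not be installed everywhere, and empty output simply means nothing obvious is holding the device):

```shell
# Run the "who is using sdb?" checks from this thread in one pass.
for dev in /dev/sdb /dev/sdb1; do
    echo "== $dev =="
    lsof "$dev" 2>/dev/null || true    # open files on the device
    fuser "$dev" 2>/dev/null || true   # PIDs using the device
done
echo "== /proc/mounts =="
grep sdb /proc/mounts || echo "(no sdb entries)"
echo "== /proc/partitions =="
grep sdb /proc/partitions || echo "(no sdb entries)"
```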
Thank you both for the replies/info! :)
Here is the output of "grep sdb /proc/partitions":
Code:
   8    16  732574584 sdb
   8    17          1 sdb1
And here is what file reports for the devices:
Code:
/dev/sdb:  x86 boot sector; partition 1: ID=0x5, starthead 1, startsector 63, 1465144002 sectors, extended partition table (last)
/dev/sdb1: x86 boot sector
By any chance does "dmsetup deps" show anything with a dependency on sdb or its partitions (device numbers 8,16 through 8,31)? If nothing shows up, I don't see any alternative to rebooting with the offending line deleted, or commented out, from /etc/fstab.
Anyway, I ran "grep sdb /proc/mounts" and it came back with nothing.

"dmsetup deps" shows:
Code:
ddf1_RAID10: 1 dependencies : (8, 16)

I appreciate all your help though; I learned some new things, and I finally officially registered here ;)
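For anyone following along: the (8, 16) pair is a (major, minor) device number, and major 8, minor 16 is /dev/sdb (its partitions take minors 17 through 31). A sketch of pulling those numbers out of a dmsetup deps line, parsing the sample line from this thread (the /sys/dev path in the final comment assumes a newer kernel than this CentOS box likely has):

```shell
# Extract the (major, minor) pair from a "dmsetup deps" line.
line='ddf1_RAID10: 1 dependencies : (8, 16)'
major=$(echo "$line" | sed 's/.*(\([0-9]*\), *\([0-9]*\)).*/\1/')
minor=$(echo "$line" | sed 's/.*(\([0-9]*\), *\([0-9]*\)).*/\2/')
echo "major=$major minor=$minor"    # -> major=8 minor=16, i.e. /dev/sdb
# On kernels that provide /sys/dev, the name can be resolved directly:
#   basename "$(readlink /sys/dev/block/8:16)"
```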
No, this system is not using RAID, just two separate drives. I ran the suggested mdadm command, but it just gives me an error:

mdadm: error opening ddf1_RAID10: No such file or directory
Does "mdadm --query --verbose /dev/sdb" yield anything?
Output:

/dev/sdb: is not an md array
/dev/sdb: No md super block found, not an md component.

No idea what that means :)
I don't know what is going on, then. There is something called "ddf1_RAID10" claiming use of /dev/sdb (major device 8, minor 16), but it's apparently not an md RAID array. A more ham-fisted approach would be
Code:
dmsetup --force remove ddf1_RAID10

Eventually, you're going to have to reboot to get everything into a sane state again. First, though, I recommend zeroing the first 2 MB of the drive (dd if=/dev/zero of=/dev/sdb count=4096) to clear whatever signatures are there.
Can running that command damage (fubar) anything on sdb1? The server itself runs without issues; I just have no local backup space at the moment, but I can use another server of mine for remote backups. I don't understand why this happens at all; it's a fresh new disk. I think I will contact support about it. The fact that it refers to some RAID10 makes it look as if the disk they gave me was already used, and whatever is on it is causing these issues.
Yes, I'm sure it was previously used. It was partitioned, and the file command showed an x86 boot sector both in the MBR and in the boot sector of partition 1. There is definitely data on there, and something about it is confusing Linux.
Why are you concerned about hurting the data on sdb1? It's not your data. I recommend zeroing out the first 2 MB of the disk anyway. And, now that I think about it, it would be a good idea to zero out the last 2 MB as well, since old RAID versions stored their superblocks at the very end of the disk. (That was changed because it causes various problems.)

My concern about "whatever thinks it's using the drive" is for whatever kernel module has its claws into the drive, preventing you from using it. I doubt anything terrible (i.e., a system crash) would happen, because it clearly doesn't have a full RAID array to play with. You might need a reboot to get the system state completely clean again.
So (and this is something I don't have any knowledge about), if I run that "dmsetup --force remove ddf1_RAID10", what could happen to the kernel if it is using it? Could I fix any resulting kernel issues, or would messing with the kernel bring the box down?

Also, 4096 looks like 4 MB to me, so I am a bit confused about what value you are actually referring to.
If you forcibly remove the ddf1_RAID10 device, whatever part of the kernel tries to access it will receive an I/O error. That's also what would happen if someone were to unplug the device while the system is running.
The default block size for dd is 512 bytes, so a count of 4096 there is 2 MB (4096 × 512 bytes), not 4 MB. I see from /proc/partitions that sdb has 732574584 1 KB blocks, so:
Code:
dd if=/dev/zero bs=1024 seek=$((732574584-2048)) of=/dev/sdb
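To make the arithmetic concrete without risking a real disk, here is the same zero-both-ends idea demonstrated on a 10 MB scratch file with made-up signatures (on the actual machine you would point dd at /dev/sdb, which destroys whatever is there; the real-device command needs no count because dd simply stops at the end of the device):

```shell
# Build a 10 MB scratch "disk" with fake signatures at both ends.
dd if=/dev/zero of=/tmp/fakedisk bs=1M count=10 2>/dev/null
printf 'MBRSIG'  | dd of=/tmp/fakedisk conv=notrunc 2>/dev/null
printf 'RAIDSIG' | dd of=/tmp/fakedisk bs=1 seek=$((10*1024*1024 - 7)) conv=notrunc 2>/dev/null

# Zero the first 2 MB (dd's default block size is 512 bytes, so 4096 blocks):
dd if=/dev/zero of=/tmp/fakedisk count=4096 conv=notrunc 2>/dev/null

# Zero the last 2 MB: seek to (size - 2048) in 1 KB blocks, as in the post;
# count=2048 is needed here because a regular file has no "end of device".
size_kb=$(( $(stat -c %s /tmp/fakedisk) / 1024 ))
dd if=/dev/zero of=/tmp/fakedisk bs=1024 seek=$((size_kb - 2048)) count=2048 conv=notrunc 2>/dev/null
```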