[SOLVED] Harddrive was replaced, now I cant mkfs it cause its in use?
I am very new to Linux, and so far I have managed to do what I needed by searching and reading a lot, but now I am stuck, so I decided to register and see if someone here can offer some help.
On my dedicated server, the second HD was defective, so the host replaced it. I use this second hard drive for some local backups.
But today, when I try to run mkfs on the new drive, no matter what I do it keeps telling me the drive is in use by the system (sdb1 is apparently in use by the system).
But it is not in use. I checked with the following commands and nothing tells me sdb1 is in use at all:
cat /etc/mtab (only outputs stuff about sda/sda1)
I have already partitioned this new HD (it is listed as sdb1); it is just installing the filesystem on it that does not work, and it keeps giving me the error that it is in use.
What could be the cause is that when my server host replaced this HD (and rebooted the server), /etc/fstab still contained a mount point for sdb1.
I have removed it from there now, but I do not really want to reboot the server without knowing the fstab reference is actually the cause.
Maybe the system just keeps thinking sdb1 is already mounted (although it is not) because that line was still there when the reboot was done.
So, if that is actually the cause, I am wondering if I can fix it somehow without rebooting the box completely?
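[Editor's note: for anyone hitting the same "device is in use" error, here is a sketch of the usual non-destructive checks. The device name /dev/sdb1 is the one from this thread; adapt it to your system.]

```shell
# Things that commonly hold a partition "busy" (device names are
# this thread's; none of these checks need root):
grep sdb /proc/mounts || echo "sdb: not mounted"         # mounted filesystems
grep sdb /proc/swaps  || echo "sdb: not active as swap"  # active swap areas
# Root-only follow-ups worth trying next:
#   dmsetup deps          # device-mapper (dmraid / LVM) claims
#   cat /proc/mdstat      # md software-RAID claims
#   fuser -vm /dev/sdb1   # processes holding the device open
```

Each check prints either the matching entries or a "not in use" message, so silence is never ambiguous.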
What did you try to do, execute it? It's not executable. Just run "grep sdb /proc/mounts". (Any user can do that.)
Uhm, yes I did... like I said, I am very much a noob with this Linux stuff. I thought it would output something.
Anyway, I ran "grep sdb /proc/mounts" and it came back with nothing.
By any chance does "dmsetup deps" show anything with a dependency on sdb or its partitions (device numbers 8,16 through 8,31)?
ddf1_RAID10: 1 dependencies : (8, 16)
If nothing shows up, I don't see any alternative to rebooting with the offending line deleted, or commented out, from /etc/fstab.
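[Editor's note: the "(8, 16)" pair in that dmsetup output is a Linux major:minor device number. Assuming the standard sd-driver numbering (major 8, 16 minors per disk), it decodes like this:]

```shell
# Major 8 = SCSI/SATA disks; each disk owns 16 consecutive minors.
# Minor 16 is therefore the whole second disk (sdb); 17..31 are its partitions.
minor=16
disk_index=$(( minor / 16 ))    # 0 -> sda, 1 -> sdb, ...
part_number=$(( minor % 16 ))   # 0 means the whole disk, 1 -> sdb1, ...
echo "disk index: $disk_index, partition: $part_number"
# prints: disk index: 1, partition: 0  -- i.e. the whole of /dev/sdb
```

So the dependency is on the entire disk, not just on sdb1.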
I fear you are right about that. I was hoping I could get this fixed without a reboot, but it seems that is not going to work.
I appreciate all your help though; I learned some new things, and I finally officially registered here.
That says that /dev/sdb is being detected as a member of a RAID array. Presuming that you are not using RAID, see if running "mdadm --stop ddf1_RAID10" will release the device. Once the device is free ("dmsetup deps" no longer shows that entry), zero out the beginning of the drive with "dd if=/dev/zero of=/dev/sdb count=4096" to get rid of any RAID label, then repartition it and make your filesystem.
I don't know what is going on, then. There is something called "ddf1_RAID10" claiming use of /dev/sdb (major device 8, minor 16), but it's apparently not an md RAID array. A more ham-fisted approach would be
dmsetup remove --force ddf1_RAID10
which, if it succeeds, should give you the device back again, but I'm not sure what that might do to whatever thinks it's using the drive.
Eventually, you're going to have to reboot to get everything in a sane state again. I recommend zeroing the first 2 MB of the drive as above (dd if=/dev/zero of=/dev/sdb count=4096) to clear whatever signatures are there first.
but I'm not sure what that might do to whatever thinks it's using the drive.
That sounds a bit worrying to me. I need this server to stay up, so if I am not sure what happens when I run that command, I am not sure I should.
Can running that command damage (fubar) anything on sdb1?
The server itself runs without issues anyway; I just do not have any local backup space now, but I can use another server I have as remote backup.
I don't understand why this happens at all anyway; it's a fresh new disk. I think I will contact support about it. The fact that it refers to some RAID10 makes it look to me as if the disk they gave me was already used, and whatever is on it is causing these issues.
Yes, I'm sure it was previously used. It was partitioned, and the file command showed an x86 boot sector both in the MBR and in the boot sector of partition 1. There is definitely data on there, and something about it is confusing Linux.
Why are you concerned about hurting the data on sdb1? It's not your data. I recommend zeroing out the first 2 MB of the disk anyway. And, now that I think about it, it would be a good idea to zero out the last 2 MB as well, since old versions of RAID stored their superblocks at the very end of the disk. (That got changed because it causes various problems.) My concern about "whatever thinks it's using the drive" is for whatever kernel module has its claws into that drive preventing you from using it. I doubt anything terrible (i.e., system crash) would happen, because it clearly doesn't have a full RAID array to play with. You might need a reboot to get the system state completely clean again.
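[Editor's note: a safe way to rehearse that zeroing before touching the real drive is to run the exact same dd invocations against a scratch image file. This sketch uses a hypothetical 100 MiB file standing in for the 750 GB disk; only the target name changes for the real /dev/sdb, and that must be run as root.]

```shell
# Throwaway image standing in for the disk (path is hypothetical):
img=/tmp/fake_sdb.img
dd if=/dev/urandom of="$img" bs=1M count=100 2>/dev/null   # fake "old data"

# First 2 MiB: 4096 blocks of dd's default 512-byte block size.
# conv=notrunc is needed on a file; on a block device it changes nothing.
dd if=/dev/zero of="$img" conv=notrunc count=4096 2>/dev/null

# Last 2 MiB: seek to (total 512-byte blocks - 4096) before writing.
total=$(( $(stat -c %s "$img") / 512 ))
dd if=/dev/zero of="$img" conv=notrunc seek=$(( total - 4096 )) count=4096 2>/dev/null

# Verify both ends are now all zeros:
cmp -s -n 2097152 "$img" /dev/zero && echo "first 2 MiB cleared"
tail -c 2097152 "$img" | cmp -s -n 2097152 - /dev/zero && echo "last 2 MiB cleared"
```

The same seek arithmetic applies to the real drive once you know its total block count (e.g., from /proc/partitions).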
Why are you concerned about hurting the data on sdb1? It's not your data.
Sorry, that's a typo. I meant I am worried that running the command might damage my data on sda/sda1, but that's purely because I do not know the command and because of your own worries.
So (and this is something I don't have any knowledge about), if I run that "dmsetup remove --force ddf1_RAID10", what could happen to the kernel if it is using it? Could I fix issues with the kernel, or would causing issues with the kernel bring the box down?
I recommend zeroing out the first 2 MB of the disk anyway. And, now that I think about it, it would be a good idea to zero out the last 2 MB as well, since old versions of RAID stored their superblocks at the very end of the disk.
So I run "dd if=/dev/zero of=/dev/sdb count=4096" for the first 2 MB, and excuse me for what is probably a really stupid question, but what would be the command for the last 2 MB? It is a 750 GB drive.
(4096 to me looks like it would be 4 MB, so I am a bit confused about what value you are actually referring to.)
If you forcibly remove the ddf1_RAID10 device, whatever part of the kernel tries to access it will receive an I/O error. That's also what would happen if someone were to unplug the device while the system is running.
The default blocksize for dd is 512 bytes, so a count of 4096 there would be 2 MB.
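[Editor's note: that arithmetic is quick to verify. dd's default block size is 512 bytes, so the 4096 in the count is blocks, not kilobytes:]

```shell
# count=4096 blocks x 512 bytes/block = 2 MiB, not 4 MB:
bytes=$(( 4096 * 512 ))
echo "$bytes"   # prints 2097152, which is exactly 2 * 1024 * 1024
```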
I see from /proc/partitions that sdb has 732574584 1 KB blocks, so