LinuxQuestions.org > Forums > Linux Forums > Linux - Server
Linux - Server: This forum is for the discussion of Linux software used in a server-related context.

Old 07-20-2008, 10:19 AM   #1
disbeliever
LQ Newbie
 
Registered: Dec 2005
Location: Holland
Distribution: Slackware 11.0 fluxbox-1.0rc2 (2.6.18-custom), ubuntu 8.04-server
Posts: 11

Rep: Reputation: 0
mdadm: device is busy yet not busy?


Hi all,

I have recently set up a new system to replace my old server. I installed ubuntu-server 64-bit without a hitch; it went very fast, actually.

Here is my uname -a to verify my kernel version: Linux quadstore 2.6.24-19-server #1 SMP Wed Jun 18 14:44:47 UTC 2008 x86_64 GNU/Linux

I have installed mdadm with apt-get (apt-get install mdadm)

I want to create an array as user root:

root@quadstore:~# mdadm --create --verbose /dev/md0 --level=5 --raid-devices=6 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1
mdadm: layout defaults to left-symmetric
mdadm: chunk size defaults to 64K
mdadm: Cannot open /dev/sdb1: Device or resource busy
mdadm: Cannot open /dev/sdc1: Device or resource busy
mdadm: Cannot open /dev/sdd1: Device or resource busy
mdadm: Cannot open /dev/sde1: Device or resource busy
mdadm: Cannot open /dev/sdf1: Device or resource busy
mdadm: Cannot open /dev/sdg1: Device or resource busy
mdadm: create aborted

My fdisk -l output:

root@quadstore:~# fdisk -l | grep raid
/dev/sdb1 1 60801 488384001 fd Linux raid autodetect
/dev/sdc1 1 60801 488384001 fd Linux raid autodetect
/dev/sdd1 1 60801 488384001 fd Linux raid autodetect
/dev/sde1 1 60801 488384001 fd Linux raid autodetect
/dev/sdf1 1 60801 488384001 fd Linux raid autodetect
/dev/sdg1 1 60801 488384001 fd Linux raid autodetect

Now, none of the six partitions is mounted (they don't show up in mount -l). It is a fresh install of ubuntu, meaning that right after I installed ubuntu, I apt-getted mdadm and tried to create the array.

lsof does not show any files opened on any of the above partitions.

Can somebody with RAID and mdadm experience tell me how mdadm can report that the partitions/devices are busy, while they aren't even mounted?

tnx,

DB
 
Old 07-20-2008, 12:55 PM   #2
pruneau
Member
 
Registered: Jul 2008
Location: Montreal
Distribution: Debian/Fedora/RHEL
Posts: 45

Rep: Reputation: 15
What does cat /proc/mdstat say?
 
Old 07-20-2008, 01:31 PM   #3
disbeliever
LQ Newbie
 
Registered: Dec 2005
Location: Holland
Distribution: Slackware 11.0 fluxbox-1.0rc2 (2.6.18-custom), ubuntu 8.04-server
Posts: 11

Original Poster
Rep: Reputation: 0
root@quadstore:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : inactive sdb1[0](S) sdf1[6](S) sdg1[4](S) sde1[3](S) sdc1[2](S) sdd1[1](S)
2930303616 blocks

unused devices: <none>
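This mdstat output already points at the answer: md0 exists but is inactive, and the md driver is holding all six partitions open as spares (the (S) markers), which is what makes mdadm --create report them as busy even though nothing is mounted. A minimal sketch of releasing them, assuming the stale array really is /dev/md0 as shown above (DRY_RUN=1, the default here, only prints each command; run as root with DRY_RUN=0 to execute for real):

```shell
#!/bin/sh
# Sketch: free member partitions held by a stale, inactive md array.
# Assumes the stale array is /dev/md0, as in this thread; needs root to run for real.
# DRY_RUN=1 (the default) only prints each command; set DRY_RUN=0 to execute.
DRY_RUN="${DRY_RUN:-1}"
run() { [ "$DRY_RUN" = "1" ] && echo "$@" || "$@"; }

run mdadm --stop /dev/md0   # detach all six partitions from the md driver
run cat /proc/mdstat        # md0 should be gone; mdadm --create can then open the partitions
```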

and:

root@quadstore:~# mdadm --examine /dev/sdb1
/dev/sdb1:
Magic : a92b4efc
Version : 00.90.00
UUID : 465bed38:9127b10b:79a18f5e:c079f5f7 (local to host quadstore)
Creation Time : Sat Jul 19 13:48:28 2008
Raid Level : raid5
Used Dev Size : 488383936 (465.76 GiB 500.11 GB)
Array Size : 2441919680 (2328.80 GiB 2500.53 GB)
Raid Devices : 6
Total Devices : 6
Preferred Minor : 0

Update Time : Sat Jul 19 15:49:42 2008
State : clean
Active Devices : 4
Working Devices : 5
Failed Devices : 2
Spare Devices : 1
Checksum : 68eb6453 - correct
Events : 0.508

Layout : left-symmetric
Chunk Size : 64K

Number Major Minor RaidDevice State
this 0 8 17 0 active sync /dev/sdb1

0 0 8 17 0 active sync /dev/sdb1
1 1 8 33 1 active sync /dev/sdc1
2 2 0 0 2 faulty removed
3 3 8 65 3 active sync /dev/sde1
4 4 8 81 4 active sync /dev/sdf1
5 5 0 0 5 faulty removed
6 6 8 97 6 spare /dev/sdg1

Which is strange: where did it get this info? Before I freshly installed ubuntu, I used dd to overwrite all the hard disks so that mdadm --examine would not show anything. Now it's back again???

also:

root@quadstore:~# mdadm --assemble --scan
mdadm: No arrays found in config file or automatically

and:

root@quadstore:~# mdadm --assemble --force /dev/md0 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1
mdadm: forcing event count in /dev/sdc1(2) from 498 upto 508
mdadm: clearing FAULTY flag for device 1 in /dev/md0 for /dev/sdc1
mdadm: /dev/md0 has been started with 5 drives (out of 6) and 1 spare.

That did apparently do something, but why five out of six?

further:

root@quadstore:~# mdadm --examine /dev/sdb1
/dev/sdb1:
Magic : a92b4efc
Version : 00.90.00
UUID : 465bed38:9127b10b:79a18f5e:c079f5f7 (local to host quadstore)
Creation Time : Sat Jul 19 13:48:28 2008
Raid Level : raid5
Used Dev Size : 488383936 (465.76 GiB 500.11 GB)
Array Size : 2441919680 (2328.80 GiB 2500.53 GB)
Raid Devices : 6
Total Devices : 6
Preferred Minor : 0

Update Time : Sat Jul 19 15:49:42 2008
State : clean
Active Devices : 4
Working Devices : 5
Failed Devices : 2
Spare Devices : 1
Checksum : 68eb6450 - correct
Events : 0.508

Layout : left-symmetric
Chunk Size : 64K

Number Major Minor RaidDevice State
this 0 8 17 0 active sync /dev/sdb1

0 0 8 17 0 active sync /dev/sdb1
1 1 8 33 1 active sync /dev/sdc1
2 2 0 0 2 active sync
3 3 8 65 3 active sync /dev/sde1
4 4 8 81 4 active sync /dev/sdf1
5 5 0 0 5 faulty removed
6 6 8 97 6 spare /dev/sdg1

It still shows 2 faulty, but this is from an array from before I reinstalled ubuntu and dd'ed all the disks. It seems the old information just keeps hanging around somewhere. And why does it show a spare? Does this mean the parity disk of RAID 5?

tnx,

DB
 
Old 07-21-2008, 05:16 AM   #4
disbeliever
LQ Newbie
 
Registered: Dec 2005
Location: Holland
Distribution: Slackware 11.0 fluxbox-1.0rc2 (2.6.18-custom), ubuntu 8.04-server
Posts: 11

Original Poster
Rep: Reputation: 0
It seems to be working now; apparently my lack of knowledge, and especially of patience, got the best of me :S. I still don't understand why it works now, but thanks for the help anyway.
 
Old 07-21-2008, 07:21 AM   #5
pruneau
Member
 
Registered: Jul 2008
Location: Montreal
Distribution: Debian/Fedora/RHEL
Posts: 45

Rep: Reputation: 15
Happy that you got the system to work.

Well, from what you said in your question, I guessed that you already somehow had a constructed array that the md driver refused to overwrite.
Although, without precisely knowing the chain of events that led you to this situation, it's very difficult to assess what predicament you were in.

Anyway, from what you just said, here are a few answers:
- a "spare" disk in an array is never marked as such without manual intervention. It's a disk _you_ designate to receive the information if another one fails.
- which command did you use to wipe those drives?
Quote:
dd if=/dev/zero of=/dev/sdb bs=1M count=1
is what is required to wipe the boot sector and partition table. But if you use /dev/sdb1, then it's only going to wipe the first partition, keeping all the information you want to get rid of. To check that the removal worked, use
Quote:
fdisk -l
It should report an unreadable partition table.
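One caveat worth adding (an inference about what happened here, not something stated in the thread): a dd over the start of a disk never touches the md superblock itself, because version 0.90 metadata is stored near the end of each member partition. That would explain why mdadm --examine kept finding the old array after the wipes. A sketch for checking which partitions still carry a superblock, using the device names from this thread (DRY_RUN=1, the default, only prints the commands; run as root with DRY_RUN=0 to execute):

```shell
#!/bin/sh
# Sketch: look for surviving md superblocks after a wipe attempt.
# Partition names are the ones from this thread; needs root to run for real.
# DRY_RUN=1 (the default) only prints each command; set DRY_RUN=0 to execute.
DRY_RUN="${DRY_RUN:-1}"
run() { [ "$DRY_RUN" = "1" ] && echo "$@" || "$@"; }

for part in /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1; do
    run mdadm --examine "$part"   # prints the superblock if one is still present
done
```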
 
Old 07-22-2008, 04:35 PM   #6
disbeliever
LQ Newbie
 
Registered: Dec 2005
Location: Holland
Distribution: Slackware 11.0 fluxbox-1.0rc2 (2.6.18-custom), ubuntu 8.04-server
Posts: 11

Original Poster
Rep: Reputation: 0
Quote:
Originally Posted by pruneau View Post
Happy that you got the system to work. [...]
Thank you for your reply.

This is my current mdstat:

root@quadstore:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : inactive sdc1[2](S) sdb1[0](S)
976767872 blocks

and,

root@quadstore:~# mdadm -E /dev/sdc1
/dev/sdc1:
Magic : a92b4efc
Version : 00.90.00
UUID : 465bed38:9127b10b:79a18f5e:c079f5f7 (local to host quadstore)
Creation Time : Sat Jul 19 13:48:28 2008
Raid Level : raid5
Used Dev Size : 488383936 (465.76 GiB 500.11 GB)
Array Size : 2441919680 (2328.80 GiB 2500.53 GB)
Raid Devices : 6
Total Devices : 4
Preferred Minor : 0

Update Time : Tue Jul 22 13:01:40 2008
State : clean
Active Devices : 4
Working Devices : 4
Failed Devices : 1
Spare Devices : 0
Checksum : 68ef312e - correct
Events : 0.562

Layout : left-symmetric
Chunk Size : 64K

Number Major Minor RaidDevice State
this 2 8 17 2 active sync /dev/sdb1

0 0 0 0 0 removed
1 1 0 0 1 faulty removed
2 2 8 17 2 active sync /dev/sdb1
3 3 8 49 3 active sync
4 4 8 81 4 active sync
5 5 8 65 5 active sync


unused devices: <none>


This is after I did the following, in this exact order:

I wiped the first 100 MB of every disk of the array with:

dd if=/dev/zero of=/dev/sda bs=100M count=1

Next, I uninstalled mdadm with apt-get --purge remove mdadm; /proc/mdstat was still showing the device.

Next, I rebooted; /proc/mdstat was absent. I wiped the first 100 MB of every disk again, to be sure.

Next, I added partitions on 2 disks, Linux raid autodetect (fd).

Next, I rebooted.

Next, I reinstalled mdadm with apt-get install mdadm; /proc/mdstat showed no devices.

Next, I rebooted and came to where I am now...


Please tell me what I am doing wrong. I just want the array deleted so I can start over and create a new array :S

tnx,

DB

Last edited by disbeliever; 07-22-2008 at 04:37 PM.
 
Old 07-23-2008, 09:02 PM   #7
pruneau
Member
 
Registered: Jul 2008
Location: Montreal
Distribution: Debian/Fedora/RHEL
Posts: 45

Rep: Reputation: 15
I think you should have a look at what is in /etc/mdadm.conf. If this file exists, wipe it out.
 
Old 07-24-2008, 04:38 PM   #8
disbeliever
LQ Newbie
 
Registered: Dec 2005
Location: Holland
Distribution: Slackware 11.0 fluxbox-1.0rc2 (2.6.18-custom), ubuntu 8.04-server
Posts: 11

Original Poster
Rep: Reputation: 0
Deleting mdadm.conf had no effect. What did have an effect was the following:

I uninstalled mdadm again,
rebooted, installed mdadm,
then ran mdadm --zero-superblock on all partitions of the array,
rebooted, and the array was gone.

If I rebooted after the install of mdadm, I apparently could not use --zero-superblock; it said the device was busy.
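The working sequence above can be sketched as one script. The key (not spelled out in the thread) is that mdadm --zero-superblock erases the version 0.90 superblock stored near the end of each member partition, which the earlier dd runs over the start of the disks never reached; the array must be stopped first or the members stay busy. Device names are the ones from this thread (DRY_RUN=1, the default, only prints the commands; run as root with DRY_RUN=0 to execute for real):

```shell
#!/bin/sh
# Sketch of the cleanup that worked: stop the auto-assembled array, then
# erase the md superblock on every member partition.
# DRY_RUN=1 (the default) only prints each command; set DRY_RUN=0 to execute.
DRY_RUN="${DRY_RUN:-1}"
run() { [ "$DRY_RUN" = "1" ] && echo "$@" || "$@"; }

run mdadm --stop /dev/md0                 # release the members first, or they stay "busy"
for part in /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1; do
    run mdadm --zero-superblock "$part"   # wipe the 0.90 superblock near the end of the partition
done
run cat /proc/mdstat                      # should now list no arrays
```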

tnx,

DB
 
  

