LinuxQuestions.org
LinuxQuestions.org > Forums > Linux Forums > Linux - Server
Old 12-04-2010, 11:50 PM   #1
untlev
LQ Newbie
 
Registered: Dec 2010
Posts: 4

Rep: Reputation: 0
mdadm continually accessing raid device


I have a Linksys NSLU2 with four USB hard drives attached to it. One is for the OS; the other three are set up as a RAID5 array. Yes, I know the RAID will be slow, but the server is only for storage and will realistically be accessed once or twice a week at most. I want the drives to spin down, but mdadm is doing something in the background that accesses them. An lsof on the RAID device returns nothing at all. The drives are blinking non-stop and never spin down until I stop the RAID; then they all spin down nicely after the appropriate time.

They are Western Digital My Book Essentials and will spin down by themselves if there is no access.

What can I shut down in mdadm to get it to stop continually accessing the drives? Is it the sync mechanism in the software RAID that is doing this? I tried setting the monitor to --scan -1 to get it to check the device just once, but to no avail. I even went back and formatted the RAID with ext2, thinking maybe journaling had something to do with it. There are no files on the RAID device; it's empty.

Debian Lenny
mdadm - v2.6.7.2 - 14th November 2008
RAID5/ext2

Thanks!
 
Old 12-06-2010, 01:33 AM   #2
gd2shoe
Member
 
Registered: Jun 2004
Location: Northern CA
Distribution: Debian
Posts: 835

Rep: Reputation: 49
Welcome to LQ.

When you first build an array, the first thing md does is synchronize all participating devices to guarantee consistency. This process can take a while. You can check the progress with:
Code:
cat /proc/mdstat
This is probably what's going on. You just need to wait it out.
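If you'd rather not keep re-running that by hand, one way to keep an eye on the resync (just a sketch; the 10-second interval is arbitrary) is:

```shell
# Re-read the resync/recovery progress every 10 seconds; Ctrl-C to stop
watch -n 10 cat /proc/mdstat
```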

I assume you've tried unmounting the volume with the RAID active. If the drives are still spinning, then it isn't anything filesystem related (including journaling).

The only other thing that makes sense to me would be if md were polling SMART status. I don't know whether it does, but it would be reasonable (if such a request can even be made across USB). Unfortunately, I can't readily find a reference telling me whether it does or not (or how to disable it).

If the device is being polled (checked very briefly every few seconds), lsof might not show the culprit. Something like this might catch it:
Code:
lsof /dev/md0 > file1
lsof /dev/md0 > file2
while diff file1 file2; do lsof /dev/md0 > file2; done
(It is ironic that this is an example of polling.)
 
Old 12-06-2010, 10:55 PM   #3
untlev
LQ Newbie
 
Registered: Dec 2010
Posts: 4

Original Poster
Rep: Reputation: 0
@gd2shoe

You're dead on. From /proc/mdstat:

Code:
md0 : active raid5 sdb1[0] sdd1[3] sdc1[1]
      975386240 blocks level 5, 4k chunk, algorithm 2 [3/2] [UU_]
      [==>..................]  recovery = 11.1% (54239220/487693120) finish=1150.3min speed=6278K/sec


This is after about 3 hours! Sheesh, this is going to take forever!

Thanks for the pointer. I was beginning to worry whether it was going to work. I don't really want the drives running 24/7, and since they're Western Digitals, they seem to get really hot. I'm sick of wearing out hard drives and losing data.

Thanks again!
 
Old 12-06-2010, 11:34 PM   #4
chickenjoy
Member
 
Registered: Apr 2007
Distribution: centos,rhel, solaris
Posts: 239

Rep: Reputation: 30
-- If you wanted redundancy, wouldn't syncing sdb1, sdd1 and sdc1 be a far less performance-taxing option?
-- People use RAID5 for performance and redundancy. Since you're accessing the drives via USB 2.0, the performance gain is lost; that just leaves device redundancy, which rsync can provide by syncing sdb1 to sdc1 automatically. Plus you will have an extra hard drive left over.
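For what it's worth, the rsync route can be as simple as a script run from a daily cron job. A minimal sketch (the mount points are hypothetical; adjust for your setup):

```shell
#!/bin/sh
# One-way mirror of one drive onto another, e.g. from a daily cron job.
# SRC/DST are hypothetical mount points; adjust for your setup.
SRC=${SRC:-/mnt/primary}
DST=${DST:-/mnt/mirror}
# Only run when both mounts are actually present.
if [ -d "$SRC" ] && [ -d "$DST" ]; then
    # -a preserves ownership, permissions and timestamps;
    # --delete removes files from the mirror that vanished from the source.
    rsync -a --delete "$SRC/" "$DST/"
fi
```

Note the trailing slash on "$SRC/": it tells rsync to copy the directory's contents rather than the directory itself.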

what do you think?

By the way, a RAID5 over USB is a cool project; I'd try it out myself if I had three external hard drives... I wonder how the RAID redundancy kicks in if you pull out one of the three.
 
Old 12-06-2010, 11:36 PM   #5
gd2shoe
Member
 
Registered: Jun 2004
Location: Northern CA
Distribution: Debian
Posts: 835

Rep: Reputation: 49
You're welcome.

If your drives are getting too hot, you could try:
Code:
cat /proc/sys/dev/raid/speed*
and then lower the numbers used.

Quote:
/proc/sys/dev/raid/speed_limit_min
A readable and writable file that reflects the current "goal" rebuild speed for times when non-rebuild activity is current on an array. The speed is in Kibibytes per second, and is a per-device rate, not a per-array rate (which means that an array with more disks will shuffle more data for a given speed). The default is 100.

/proc/sys/dev/raid/speed_limit_max
A readable and writable file that reflects the current "goal" rebuild speed for times when no non-rebuild activity is current on an array. The default is 100,000.
maybe something like:
Code:
echo 10000 > /proc/sys/dev/raid/speed_limit_max
It will take much longer to sync, but the drives won't be worked as hard.
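If you want a lower ceiling to survive reboots, the same knobs are exposed through sysctl; a sketch of the corresponding /etc/sysctl.conf entries (the values here are examples, not recommendations):

```
# /etc/sysctl.conf -- cap md resync bandwidth (KiB/s, per device)
dev.raid.speed_limit_max = 10000
# the floor can be tuned the same way if needed
# dev.raid.speed_limit_min = 100
```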
 
Old 12-06-2010, 11:41 PM   #6
gd2shoe
Member
 
Registered: Jun 2004
Location: Northern CA
Distribution: Debian
Posts: 835

Rep: Reputation: 49
Chickenjoy, your rsync idea is roughly equivalent to a raid0. It's not a bad idea, but you only get redundancy across one drive's worth of storage while using two drives. Raid5 gives you two drives' worth of storage across three drives. It doubles his redundant storage area.
 
Old 12-06-2010, 11:56 PM   #7
chickenjoy
Member
 
Registered: Apr 2007
Distribution: centos,rhel, solaris
Posts: 239

Rep: Reputation: 30
@gd2shoe

Well, yeah, I understood that. If the original poster felt that one drive's total capacity wouldn't be enough to store one set of the data, then the only other option is to add a third drive, and only a RAID5 would give him both data redundancy and more storage space.

I think you meant RAID1, which is mirroring. I have no problem with what the original poster wants to achieve; it's just that the inner sysadmin in me feels that rsync would be less CPU-taxing than RAID5 over the USB protocol. I might be wrong; please correct me if I am.

I don't know how critical the data being protected is, how much of it there is, or what the allowable downtime would be.
 
Old 12-07-2010, 12:04 AM   #8
gd2shoe
Member
 
Registered: Jun 2004
Location: Northern CA
Distribution: Debian
Posts: 835

Rep: Reputation: 49
Sorry. You're absolutely right. I meant Raid1.
 
Old 12-07-2010, 12:07 AM   #9
chickenjoy
Member
 
Registered: Apr 2007
Distribution: centos,rhel, solaris
Posts: 239

Rep: Reputation: 30
@gd2shoe

no problems mate.
 
Old 12-07-2010, 10:37 AM   #10
untlev
LQ Newbie
 
Registered: Dec 2010
Posts: 4

Original Poster
Rep: Reputation: 0
@chickenjoy

You're right. I actually considered just running a RAID1 with a spare, but the idea of a RAID5 sounded good. The drives are 500 GB each, so a RAID5 gives me a theoretically redundant 1 TB bucket to store in. I'm sick to death of losing data to hard drive failure. Last week my toddler stumbled over the desk and pulled my 500 GB backup external drive onto the floor mid-backup. Now it just clicks and whines (the drive), and I'm out about 300 GB more data.

That's the problem with drives now: they are so big it's very tempting to dump everything you have in one location. Then when it fails, you're left with your hinter regions hanging out in the breeze.

I did some research on the USB approach and found some people thinking about it. I'll try to remember to post some feedback. If it doesn't work, I'll just revert to something like what you suggested, but it seemed worth the effort to play with, since I don't have anything to @#%%$# store on it now!

My main concern is the drives being recognized in the same order on each bootup. I'm not sure how mdadm will react if sdb1 is seen as sdc1 and vice versa.

Thanks for the suggestions everyone!
 
Old 12-07-2010, 03:36 PM   #11
gd2shoe
Member
 
Registered: Jun 2004
Location: Northern CA
Distribution: Debian
Posts: 835

Rep: Reputation: 49
Theoretically, udev will remember what it has previously assigned to each drive. I'm not certain about USB devices, though. You might want to consider using /dev/disk/by-id/* instead. You could also craft custom udev rules (a royal pain, but possible).
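The md superblock also records the array's UUID, so another option (a sketch; the paths assume Debian's usual mdadm layout) is to pin the array in mdadm.conf so that the discovery order of sdb/sdc/sdd stops mattering:

```shell
# Append an ARRAY line keyed on the array's UUID rather than on device names
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
# List stable, discovery-order-independent names for the member drives
ls -l /dev/disk/by-id/
```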
 
Old 12-07-2010, 08:35 PM   #12
untlev
LQ Newbie
 
Registered: Dec 2010
Posts: 4

Original Poster
Rep: Reputation: 0
When I got home this evening at around 7:30, they were all sleeping soundly. A quick ls in the mount point and they all yawned and woke up fairly promptly. For what it's worth:

Code:
md0 : active raid5 sdb1[0] sdd1[2] sdc1[1]
      975386240 blocks level 5, 4k chunk, algorithm 2 [3/3] [UUU]


I noticed that sdd1 is now marked as [2] and not [3] (see the earlier post). Is this what the recovery was? I guess I'll do several back-to-back reboots and see if it breaks.
 
Old 12-08-2010, 05:39 PM   #13
gd2shoe
Member
 
Registered: Jun 2004
Location: Northern CA
Distribution: Debian
Posts: 835

Rep: Reputation: 49
https://raid.wiki.kernel.org/index.php/Mdstat
Quote:
The raid role numbers [#] following each device indicate its role, or function, within the raid set.
I don't think I would worry too much about the numbers. They might change if the devices are detected in a different order, but each member of an array has a superblock at its end identifying its array and role. If the devices are detected in a different order, this superblock information is used to build the array in the correct order (assuming all devices belonging to the array are correctly identified by md).
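If you're curious, that superblock can be read directly; a sketch, using one member device from the output above:

```shell
# Print this member's md superblock, including the array UUID and the
# device's role number within the array
mdadm --examine /dev/sdb1
```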
 