Linux - Server: This forum is for the discussion of Linux software used in a server-related context.
12-04-2010, 11:50 PM | #1 | LQ Newbie | Registered: Dec 2010 | Posts: 4
mdadm continually accessing raid device
I have a Linksys NSLU2 with four USB hard drives attached to it. One is for the OS; the other three are set up as a RAID5 array. Yes, I know the RAID will be slow, but the server is only for storage and will realistically get accessed once or twice a week at most. I want the drives to spin down, but mdadm is doing something in the background that accesses them. An lsof on the RAID device returns nothing at all. The drives are blinking non-stop and never spin down until I stop the RAID; then they all spin down nicely after the appropriate time.
They are Western Digital My Book Essentials and will spin down by themselves if there is no access.
What can I shut down in mdadm to get it to stop continually accessing the drives? Is it the sync mechanism in the software RAID that is doing this? I tried setting the monitor to --scan -1 to get it to check the device just once, but to no avail. I even went back and formatted the RAID with ext2, thinking maybe the journaling had something to do with it. There are no files on the RAID device; it's empty.
Debian Lenny
mdadm - v2.6.7.2 - 14th November 2008
RAID5/ext2
Thanks!
12-06-2010, 01:33 AM | #2 | Member | Registered: Jun 2004 | Location: Northern CA | Distribution: Debian | Posts: 835
Welcome to LQ.
When you first build an array, the first thing md does is overwrite all participating devices to guarantee consistency. This process can take a while. You can check the progress with:
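Code:
cat /proc/mdstat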
This is probably what's going on. You just need to wait it out.
I assume you've tried unmounting the volume with the RAID active. If the drives are still spinning, then it isn't anything filesystem related (including journaling).
The only other thing that makes sense to me would be if md were polling SMART status. I don't know if it does, but it would be reasonable (if such a request can be done across USB). Unfortunately, I can't readily find a reference telling me whether it does or not (or how to disable it).
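If you want to rule SMART out from your end, you could at least check whether SMART queries even work across your USB bridges; something along these lines (the device name is just an example):
Code:
# many USB enclosures pass SMART through via SAT (SCSI-to-ATA translation);
# -d sat tells smartctl to use it, -a prints all the SMART info
smartctl -d sat -a /dev/sdb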
If the device is being polled (checked very briefly every few seconds), lsof might not show the culprit. Something like this might catch it:
Code:
lsof /dev/md0 > file1
lsof /dev/md0 > file2
# re-check until the two snapshots differ, i.e. something new has touched md0
while diff file1 file2; do lsof /dev/md0 > file2; done
(It is ironic that this is an example of polling.)
12-06-2010, 10:55 PM | #3 | LQ Newbie (Original Poster) | Registered: Dec 2010 | Posts: 4
@gd2shoe
You're dead on. From /proc/mdstat:
md0 : active raid5 sdb1[0] sdd1[3] sdc1[1]
975386240 blocks level 5, 4k chunk, algorithm 2 [3/2] [UU_]
[==>..................] recovery = 11.1% (54239220/487693120) finish=1150.3min speed=6278K/sec
This is after about 3 hours! Sheesh, this is going to take forever!
Thanks for the pointer. I was beginning to worry whether it was going to work. I don't really want the drives running 24/7; they're Western Digitals, and they seem to get really hot. I'm sick of wearing out hard drives and losing data.
Thanks again!
12-06-2010, 11:34 PM | #4 | Member | Registered: Apr 2007 | Distribution: centos, rhel, solaris | Posts: 239
-- If you wanted redundancy, wouldn't syncing sdb1, sdd1 and sdc1 be a far less performance-taxing option?
-- People use RAID5 for performance and redundancy. Since you're accessing the drives via USB2, the performance gain is lost, so that just leaves device redundancy, which rsync can handle by syncing sdb1 to sdc1 automatically (rough sketch at the end of this post). Plus you will have an extra HDD left over.
What do you think?
BTW, a RAID5 over USB is a cool project, and I would try it out myself if I had 3 external HDs... I wonder how the RAID redundancy kicks in if you pull out one of the three HDs.
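If you did go the rsync route, a nightly cron job would be plenty. A rough sketch (the mount points are just examples; adjust them to wherever sdb1 and sdc1 end up mounted):
Code:
# /etc/cron.d/mirror-backup -- one-way mirror of the primary data drive onto the second
# runs at 03:00 every night; -a preserves permissions/times, --delete keeps the copy exact
0 3 * * * root rsync -a --delete /mnt/data/ /mnt/mirror/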
12-06-2010, 11:36 PM | #5 | Member | Registered: Jun 2004 | Location: Northern CA | Distribution: Debian | Posts: 835
You're welcome.
If your drives are getting too hot, you could try:
Code:
cat /proc/sys/dev/raid/speed*
and then lower the numbers used.
Quote:
/proc/sys/dev/raid/speed_limit_min
A readable and writable file that reflects the current "goal" rebuild speed for times when non-rebuild activity is current on an array. The speed is in Kibibytes per second, and is a per-device rate, not a per-array rate (which means that an array with more disks will shuffle more data for a given speed). The default is 100.
/proc/sys/dev/raid/speed_limit_max
A readable and writable file that reflects the current "goal" rebuild speed for times when no non-rebuild activity is current on an array. The default is 100,000.
|
maybe something like:
Code:
echo 10000 > /proc/sys/dev/raid/speed_limit_max
It will take much longer to sync, but the drives won't be worked as hard. 
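If you want that limit to stick across reboots, the same knob can be set through sysctl; for example (assuming the stock /etc/sysctl.conf layout on Lenny):
Code:
# /etc/sysctl.conf -- cap the per-device rebuild rate at 10000 KiB/s
dev.raid.speed_limit_max = 10000
and then run sysctl -p to apply it without rebooting.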
12-06-2010, 11:41 PM | #6 | Member | Registered: Jun 2004 | Location: Northern CA | Distribution: Debian | Posts: 835
Chickenjoy, your rsync idea is roughly equivalent to a raid0. It's not a bad idea, but you only get redundancy across one HD's worth of storage while using two drives. Raid5 gives you two HDs' worth of storage across three drives. It doubles his redundant storage area.
12-06-2010, 11:56 PM | #7 | Member | Registered: Apr 2007 | Distribution: centos, rhel, solaris | Posts: 239
@gd2shoe
Well, yeah, I understood that. If the original poster felt that one HDD's total capacity wouldn't be enough to store one set of the data, then the only other option is to add in a third HD, and only a RAID5 would allow him both data redundancy and more storage space.
I think you meant raid1, which is mirroring. I have no problem with what the original poster wants to achieve; it's just that the inner me (as a system admin) feels that rsync would be less CPU-taxing than a RAID5 over the USB protocol. I might be wrong; please correct me if I am.
I don't know how critical the data being protected is, how much of it there is, or even the allowable downtime.
12-07-2010, 12:04 AM | #8 | Member | Registered: Jun 2004 | Location: Northern CA | Distribution: Debian | Posts: 835
Sorry. You're absolutely right. I meant Raid1.
12-07-2010, 12:07 AM | #9 | Member | Registered: Apr 2007 | Distribution: centos, rhel, solaris | Posts: 239
@gd2shoe
no problems mate.
12-07-2010, 10:37 AM | #10 | LQ Newbie (Original Poster) | Registered: Dec 2010 | Posts: 4
@chickenjoy
You're right. I actually considered just running a RAID1 with a spare, but the idea of a RAID5 sounded good. The drives are 500G each, and a RAID5 will give me a theoretically redundant 1T bucket to store in. I'm sick to death of losing data to hard drive failure. Last week my toddler stumbled over the desk and pulled my 500G backup external drive onto the floor mid-backup. Now it just clicks and whines (the drive), and I'm out about 300G more data.
That's the problem with drives now. They are so big it's very tempting to dump everything you have in one location. Then when it fails, you're left with your hinter regions hanging out in the breeze.
I did some research on the USB approach and found some people thinking about it. I'll try to remember to post some feedback. If it doesn't work, I'll just revert to something like what you suggested, but it seemed worth the effort to play with since I don't have anything to @#%%$# store on it now!
My main concern is the drives being recognized in the same order each boot. I'm not sure how mdadm will react if sdb1 is seen as sdc1 and vice versa.
Thanks for the suggestions everyone!
12-07-2010, 03:36 PM | #11 | Member | Registered: Jun 2004 | Location: Northern CA | Distribution: Debian | Posts: 835
Theoretically, udev will remember what it has previously assigned to each drive. I'm not certain about USB devices, though. You might want to consider using /dev/disk/by-id/* instead. You could also craft custom udev rules (a royal pain, but possible).
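For what it's worth, md itself identifies members by the UUID in their superblocks rather than by sdX names, so you can also record the array by UUID in mdadm.conf; something like this (on Debian the file should be /etc/mdadm/mdadm.conf):
Code:
# append an ARRAY line keyed on the array's UUID, so assembly no longer
# depends on which drive ends up as sdb, sdc or sdd
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
# the appended line looks roughly like:
# ARRAY /dev/md0 level=raid5 num-devices=3 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx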
12-07-2010, 08:35 PM | #12 | LQ Newbie (Original Poster) | Registered: Dec 2010 | Posts: 4
When I got home this evening at around 7:30, they were all sleeping soundly. A quick ls in the mount and they all yawned and woke up fairly promptly. For what it's worth:
md0 : active raid5 sdb1[0] sdd1[2] sdc1[1]
975386240 blocks level 5, 4k chunk, algorithm 2 [3/3] [UUU]
I noticed that sdd1 is now marked as [2] and not [3] (see the earlier post). Is this what the recovery was? I guess I'll do several back-to-back reboots and see if it breaks.
12-08-2010, 05:39 PM | #13 | Member | Registered: Jun 2004 | Location: Northern CA | Distribution: Debian | Posts: 835
https://raid.wiki.kernel.org/index.php/Mdstat
Quote:
The raid role numbers [#] following each device indicate its role, or function, within the raid set.
|
I don't think I would worry too much about the numbers. They might change if the devices are detected in a different order, but each member of an array has a superblock at its end identifying its array and role, and that superblock information is used to assemble the array correctly regardless of detection order (assuming all devices belonging to the array are correctly identified by md).
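If you're curious, you can dump that superblock on any member and see the array UUID and role number for yourself, e.g.:
Code:
# print the md superblock stored on this member partition, including the
# array UUID and this device's role number within the array
mdadm --examine /dev/sdb1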