01-06-2009, 05:06 AM | #1
Member
Registered: Dec 2004
Location: Trondheim, Norway
Distribution: kubuntu 10.04
Posts: 308
dirty degraded md raid array
I find myself in the situation that I have a degraded RAID array that reads as dirty, so I cannot start it. I have a "spare" partition (until yesterday this was the fifth member, but for some reason my system now treats it as a spare in a degraded array). I know the drive itself is good, as two other md RAID arrays each include a partition on it.
mdadm -E /dev/sdX shows me that the members of the array do not agree on how many devices are present: there are 5 devices in the array, and 2 of them report 5 working devices (4 active, 1 spare), while the other 3 report only 4 working devices, all active.
How can I make the devices agree on this?
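For reference, the disagreement can be made visible by comparing the superblock of every member side by side; a minimal sketch (the device names are taken from the mdadm -D output quoted in the next post, so adjust them to match):
Code:
# Print the fields the members disagree on, one partition at a time.
for d in /dev/sdc1 /dev/sdb3 /dev/sdg1 /dev/sdf1 /dev/sdd3; do
    echo "== $d =="
    mdadm -E "$d" | grep -E 'Events|Active Devices|Working Devices|Spare Devices'
done
The members with the highest event count hold the most recent view of the array.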
01-07-2009, 01:51 PM | #2
Member
Registered: Dec 2004
Location: Trondheim, Norway
Distribution: kubuntu 10.04
Posts: 308
Original Poster
Is it not possible to remove every reference to device 4, the one that is marked as "removed"?
Code:
mdadm -D /dev/md2
/dev/md2:
Version : 00.90
Creation Time : Thu May 1 22:41:34 2008
Raid Level : raid5
Used Dev Size : 14651136 (13.97 GiB 15.00 GB)
Raid Devices : 5
Total Devices : 5
Preferred Minor : 2
Persistence : Superblock is persistent
Update Time : Mon Jan 5 15:39:03 2009
State : active, degraded, Not Started
Active Devices : 4
Working Devices : 5
Failed Devices : 0
Spare Devices : 1
Layout : left-symmetric
Chunk Size : 64K
UUID : 8c00dfaf:a414eba5:fa99d161:76122a73
Events : 0.853386
Number Major Minor RaidDevice State
0 8 33 0 active sync /dev/sdc1
1 8 19 1 active sync /dev/sdb3
2 8 97 2 active sync /dev/sdg1
3 8 81 3 active sync /dev/sdf1
4 0 0 4 removed
5 8 51 - spare /dev/sdd3
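The "removed" entry in slot 4 is not stored anywhere it can be edited directly: each member's superblock records the whole device table, and with the array inactive there is nothing for mdadm --remove to act on. The usual way out of a dirty degraded state is a forced assembly from the members that are still in sync; a hedged sketch, using the member names from the output above (run only after checking, e.g. via the event counts, that those four really hold the current data):
Code:
mdadm --stop /dev/md2
mdadm --assemble --force /dev/md2 /dev/sdc1 /dev/sdb3 /dev/sdg1 /dev/sdf1
# once md2 is running degraded, re-add the spare so it rebuilds into slot 4
mdadm /dev/md2 --add /dev/sdd3
Here --force tells mdadm to mark the array clean even though it was shut down dirty, which is exactly the condition blocking the start.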
dmesg gives me:
Code:
[ 3802.752955] raid5: device sdg1 operational as raid disk 2
[ 3802.752960] raid5: device sdf1 operational as raid disk 3
[ 3802.752963] raid5: device sdb3 operational as raid disk 1
[ 3802.752965] raid5: device sdc1 operational as raid disk 0
[ 3802.752967] raid5: cannot start dirty degraded array for md2
[ 3802.752970] RAID5 conf printout:
[ 3802.752972] --- rd:5 wd:4
[ 3802.752973] disk 0, o:1, dev:sdc1
[ 3802.752975] disk 1, o:1, dev:sdb3
[ 3802.752976] disk 2, o:1, dev:sdg1
[ 3802.752978] disk 3, o:1, dev:sdf1
[ 3802.752980] raid5: failed to run raid set md2
[ 3802.752981] md: pers->run() failed ...
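If the assembly happens automatically at boot (for example from the initramfs), the md driver also has a module parameter aimed at exactly this "cannot start dirty degraded array" case; a sketch, assuming the bootloader is used to pass kernel options:
Code:
# add to the kernel command line, then reboot
md-mod.start_dirty_degraded=1
This lets the kernel start a dirty degraded array on its own, at the risk of undetected data corruption, so the forced assembly above is the more controlled option.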