I have a degraded software RAID 1 array. md0 is in a clean, degraded state while md1 is active but auto-read-only and clean. I'm not sure how to go about fixing this. Any ideas?
cat /proc/mdstat
Code:
Personalities : [raid1]
md1 : active (auto-read-only) raid1 sdb2[1] sda2[0]
3909620 blocks super 1.2 [2/2] [UU]
md0 : active raid1 sda1[0]
972849016 blocks super 1.2 [2/1] [U_]
unused devices: <none>
mdadm -D /dev/md0
Code:
/dev/md0:
Version : 1.2
Creation Time : Tue Jun 21 21:31:58 2011
Raid Level : raid1
Array Size : 972849016 (927.78 GiB 996.20 GB)
Used Dev Size : 972849016 (927.78 GiB 996.20 GB)
Raid Devices : 2
Total Devices : 1
Persistence : Superblock is persistent
Update Time : Tue Jun 2 02:21:12 2015
State : clean, degraded
Active Devices : 1
Working Devices : 1
Failed Devices : 0
Spare Devices : 0
Name :
UUID :
Events : 3678064
Number Major Minor RaidDevice State
0 8 1 0 active sync /dev/sda1
2 0 0 2 removed
mdadm -D /dev/md1
Code:
/dev/md1:
Version : 1.2
Creation Time : Tue Jun 21 21:32:09 2011
Raid Level : raid1
Array Size : 3909620 (3.73 GiB 4.00 GB)
Used Dev Size : 3909620 (3.73 GiB 4.00 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Update Time : Sat May 16 15:17:56 2015
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Name :
UUID :
Events : 116
Number Major Minor RaidDevice State
0 8 2 0 active sync /dev/sda2
1 8 18 1 active sync /dev/sdb2
As for md0, you could remove the failed disk from the RAID and then check it for bad blocks, or maybe simply reformat it, and then add it back into the array.
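A rough sketch of that sequence, assuming the missing member is /dev/sdb1 and the array is /dev/md0 (adjust the device names to your setup; note that badblocks -w is destructive and wipes the partition):
Code:
# drop the member from the array (it may already show as removed)
mdadm /dev/md0 --fail /dev/sdb1
mdadm /dev/md0 --remove /dev/sdb1
# destructive read-write bad block scan of the partition
badblocks -wsv /dev/sdb1
# add it back and let the array resync
mdadm /dev/md0 --add /dev/sdb1
# watch the rebuild progress
watch cat /proc/mdstat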
Is the degraded array due to bad blocks, or just the drives somehow getting out of sync?
Is this why the system is putting md1 in read-only mode automatically?
Your mdadm output shows that /dev/sdb1 is no longer part of md0; its slot is listed as "removed". This would likely be caused by corruption in that partition, possibly due to a failing drive.
I have 30+ years of RAID experience at home and in the data center I manage. To get the best performance out of RAID, you want matching drives with the same firmware. Dissimilar drives can cause performance and reliability issues. If a RAID controller removes a drive, it usually means the drive failed or was disconnected, for example by a loose power cable. I have seen SATA cables that don't seat tightly and vibrate loose. I would suggest replacing both drives with identical drives running the same firmware; you will have fewer problems in the future. Installing a faster or larger drive can cause write issues with the older drive, and with software RAID rather than hardware RAID this becomes more of a concern. At home I run the same SAS RAID drives and controllers I use in my data center.
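For what it's worth, you can compare the model and firmware revision of both drives with smartctl from smartmontools (device names /dev/sda and /dev/sdb assumed):
Code:
# identity info including model, serial number and firmware version
smartctl -i /dev/sda
smartctl -i /dev/sdb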
Both my drives are identical, however they were in an older server which died. I removed the drives, backed up one, and placed them in a new server. It's possible the drive is damaged, but is it also possible that when I backed up one drive I somehow got the drives out of sync?
auto-read-only is not an error. It just means nothing has tried to write to the array yet, which is expected for boot or swap partitions as long as they are not in active use.
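The flag clears itself on the first write, or you can clear it by hand:
Code:
# take the array out of auto-read-only mode
mdadm --readwrite /dev/md1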
As for the RAID that's missing a drive, show the mdadm --examine output for both of its members. The update time of the missing drive should tell you how long it's been missing, and if your machine was running at the time, you could check your system log history for any related messages.
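Something along these lines, for example (the syslog path varies by distro, so treat it as a sketch):
Code:
# superblock details for both members of md0
mdadm --examine /dev/sda1 /dev/sdb1
# look for md- or sdb-related messages in the log history
grep -iE 'md0|sdb' /var/log/syslog*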
If the data on /dev/md0 is OK, you can re-add the missing disk and see if the sync completes. Otherwise, check dmesg for why it failed, and also check the SMART data of both disks.
Code:
mdadm /dev/md99 --add /dev/sdxy1
(maybe need --fail and --remove before you can --add. Or alternatively, --re-add)
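For the dmesg and SMART checks, something like this (assuming the two disks are /dev/sda and /dev/sdb):
Code:
# kernel messages about the arrays and the disks
dmesg | grep -iE 'md|sd[ab]'
# full SMART report; watch for reallocated or pending sectors
smartctl -a /dev/sda
smartctl -a /dev/sdb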
mdadm --readwrite /dev/md1 does put md1 in read-write mode, but I'm not sure why that drive's capacity shows up as only 4 GB; it's the exact same drive as md0 and both should be 1 TB.
Here's the output of mdadm --examine for both partitions on the second drive.
mdadm --examine /dev/sdb1
Code:
/dev/sdb1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 71c80baf:c5020223:fbb1120f:3aa695e2
Name : name:0 (local to host name)
Creation Time : Tue Jun 21 21:31:58 2011
Raid Level : raid1
Raid Devices : 2
Avail Dev Size : 1945698304 (927.78 GiB 996.20 GB)
Array Size : 972849016 (927.78 GiB 996.20 GB)
Used Dev Size : 1945698032 (927.78 GiB 996.20 GB)
Data Offset : 2048 sectors
Super Offset : 8 sectors
Unused Space : before=1968 sectors, after=272 sectors
State : clean
Device UUID :
Update Time : Mon Feb 16 13:00:26 2015
Checksum : 35bdceae - correct
Events : 3496236
Device Role : Active device 1
Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
mdadm --examine /dev/sdb2
Code:
/dev/sdb2:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 02d147d7:15cdc2da:819437ac:fc408339
Name : name:1 (local to host name)
Creation Time : Tue Jun 21 21:32:09 2011
Raid Level : raid1
Raid Devices : 2
Avail Dev Size : 7819264 (3.73 GiB 4.00 GB)
Array Size : 3909620 (3.73 GiB 4.00 GB)
Used Dev Size : 7819240 (3.73 GiB 4.00 GB)
Data Offset : 2048 sectors
Super Offset : 8 sectors
Unused Space : before=1968 sectors, after=24 sectors
State : clean
Device UUID :
Update Time : Wed Jun 3 01:35:55 2015
Checksum : f918a9ed - correct
Events : 117
Device Role : Active device 1
Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
Quote:
mdadm --readwrite /dev/md1 does put md1 in read-write mode, but I'm not sure why that drive's capacity shows up as only 4 GB; it's the exact same drive as md0 and both should be 1 TB.
This question does not make sense. /dev/md0 and /dev/md1 are not drives, they are RAID arrays built using partitions on your drives. Both arrays use both drives. md0 uses the first partition on each of the two drives, which is 1 TB; md1 uses the second partition on each of the two drives, which is 4 GB.
/dev/sda is your first drive, /dev/sdb is your second drive. /dev/sda1 is the first partition on the first drive, /dev/sda2 is the second partition on the first drive, /dev/sdb1 is the first partition on the second drive, /dev/sdb2 is the second partition on the second drive.
/dev/md0 uses /dev/sda1 and /dev/sdb1 and is 1 TB. This means that sda1 and sdb1 (the first partition on each drive) are each 1 TB.
/dev/md1 uses /dev/sda2 and /dev/sdb2 and is 4 GB. This means that sda2 and sdb2 (the second partition on each drive) are each 4 GB.
That is what mdadm is reporting. Is that correct, or is it not?
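You can confirm the partition and array layout yourself with something like:
Code:
# block devices, their sizes, and which md array each partition belongs to
lsblk
# or list the partition sizes directly
cat /proc/partitions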
SCSIraidGURU, I'm using a software RAID 1 from an older Debian install. The installer configured it for me. I'm fairly inexperienced with RAID. I am not using a hardware RAID.