Raid 1 disk failed, cannot make active single spare that was rebuilding
I had a 2-disk RAID 1. While one of the disks was syncing, the other one failed! The failed disk cannot be used; it is really dead (any attempt to access it blocks for 2 minutes, and nothing can be read). I have sent it back to the store for replacement, as it is still within the warranty period.
So I am left with only one working disk, but it is marked as a spare, and I cannot start the RAID...
I would like to mark this disk as active (not spare) and start the raid with this single disk to be able to copy all data.
How can I do that?
I have tried to stop the raid and force start with the single disk:
Code:
sudo mdadm --stop /dev/md128
mdadm: stopped /dev/md128
sudo mdadm --assemble --force /dev/md128 /dev/sda1
mdadm: /dev/md128 assembled from 0 drives and 1 rebuilding - not enough to start the array.
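One variant I have not actually run yet (found while reading the mdadm man page, so treat it as a guess) is adding --run to the assemble, which asks mdadm to start the array even with fewer devices than were present last time. Since it needs root against the real array, I am only sketching the commands here, not executing them:

```shell
# Untested sketch: force assembly, then ask mdadm to start the array
# degraded with --run. On a member still marked as rebuilding this may
# well refuse, but it is a non-destructive thing to try first.
cmds='mdadm --stop /dev/md128
mdadm --assemble --force --run /dev/md128 /dev/sda1'
printf '%s\n' "$cmds"
```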
Here is some info:
Code:
cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md128 : inactive sda1[2](S)
4883637248 blocks super 1.2
unused devices: <none>
sudo mdadm -D /dev/md128
/dev/md128:
Version : 1.2
Raid Level : raid0
Total Devices : 1
Persistence : Superblock is persistent
State : inactive
Working Devices : 1
Name : clevo2:128
UUID : 3318ae78:679f7afc:57f6521a:11cd3fef
Events : 64101
Number Major Minor RaidDevice
- 8 1 - /dev/sda1
sudo mdadm --examine /dev/sda
/dev/sda:
MBR Magic : aa55
Partition[0] : 4294967295 sectors at 1 (type ee)
sudo mdadm --examine /dev/sda1
/dev/sda1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x3
Array UUID : 3318ae78:679f7afc:57f6521a:11cd3fef
Name : clevo2:128
Creation Time : Fri Jul 19 20:05:09 2019
Raid Level : raid1
Raid Devices : 2
Avail Dev Size : 9767274496 (4657.40 GiB 5000.84 GB)
Array Size : 4883637248 (4657.40 GiB 5000.84 GB)
Data Offset : 264192 sectors
Super Offset : 8 sectors
Recovery Offset : 5598464 sectors
Unused Space : before=264112 sectors, after=0 sectors
State : clean
Device UUID : 0af4c6d9:269b1e71:5d612699:048a258f
Internal Bitmap : 8 sectors from superblock
Update Time : Wed Feb 12 19:41:12 2020
Bad Block Log : 512 entries available at offset 32 sectors
Checksum : 8aea8502 - correct
Events : 64101
Device Role : Active device 1
Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
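Before trying anything destructive on this last disk, my plan is to put a copy-on-write overlay over it (an idea from the kernel Linux RAID wiki), so that experiments write to a scratch file instead of the real partition. A rough sketch of the commands, printed rather than executed since they need root:

```shell
# Untested sketch of a device-mapper snapshot overlay over /dev/sda1.
# Writes go to /tmp/overlay.img; the real partition stays untouched.
# Experiment on /dev/mapper/sda1cow instead of /dev/sda1 afterwards.
overlay_cmds='truncate -s 4G /tmp/overlay.img
loop=$(losetup -f --show /tmp/overlay.img)
size=$(blockdev --getsz /dev/sda1)       # size in 512-byte sectors
dmsetup create sda1cow --table "0 $size snapshot /dev/sda1 $loop P 8"'
printf '%s\n' "$overlay_cmds"
```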
The normal outcome when one drive of a 2-drive RAID 1 fails while the other is NOT in sync is that you rebuild the RAID with new devices and reload from install media and backups. While no RAID replaces backups, RAID 1 protects against a single drive failure at best, and you effectively had a two-drive failure. It takes RAID 6 to provide reliable two-drive protection, and that requires at least four drives.
I have seen people get lucky, but a reload is the more reasonable expectation.
I would like to mark this disk as active (not spare) and start the raid with this single disk to be able to copy all data.
I think the short answer is that the second disk doesn't contain the data. A spare disk is just that: not a contributor to the array until it is (fully) synced at some point.
But at one point it was part of the array: it was one of the 2 active drives. Somehow it became a spare, and while the sync was in progress, the other active disk failed.
So, if there is a way to change the disk from spare (rebuilding) to active, I would like to try it even if the odds are not good, because the alternative is getting nothing out of it.
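The closest thing I have found (described on the Linux RAID wiki as a last resort) is re-creating the array in place with --assume-clean, reusing the exact parameters shown by --examine above so the data is not moved. This rewrites the superblock and is destructive if any parameter is wrong, so I would only try it through the overlay. Sketched here, not executed:

```shell
# LAST RESORT, DESTRUCTIVE: rewrites the superblock in place.
# Parameters taken from the --examine output in this thread:
#   raid1, 2 devices, metadata 1.2, data offset 264192 sectors.
# "missing" stands in for the dead disk; /dev/sda1 goes in the second
# slot because its Device Role was "Active device 1".
offset_kib=$((264192 / 2))   # 264192 sectors * 512 B = 132096 KiB
cmd="mdadm --create /dev/md128 --assume-clean --level=1 \
--raid-devices=2 --metadata=1.2 --data-offset=${offset_kib}K \
missing /dev/sda1"
printf '%s\n' "$cmd"
```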