Background: I set up a RAID array with 2 disks at RAID level 5, then put LVM+XFS on top of the RAID. All was good. I then copied over the data I had on another identical drive, partitioned that drive with fdisk, and added it to the RAID. After a reboot the RAID no longer works.
Code:
sharp@MotherShip ~ $ uname -a
Linux MotherShip 4.2.0-sabayon #1 SMP Mon Oct 19 08:36:08 UTC 2015 x86_64 AMD FX(tm)-8120 Eight-Core Processor AuthenticAMD GNU/Linux
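For clarity, this is roughly what the setup looked like. The member device names match what mdadm reports below, but the LVM volume group and logical volume names here are placeholders from memory, not copied from the system:
Code:
# 2-disk RAID 5, with the third disk added later
mdadm --create /dev/md127 --level=5 --raid-devices=2 /dev/sda1 /dev/sdb1

# LVM on top of the array (vg_raid / lv_data are placeholder names)
pvcreate /dev/md127
vgcreate vg_raid /dev/md127
lvcreate -l 100%FREE -n lv_data vg_raid
mkfs.xfs /dev/vg_raid/lv_data

# After copying the data over: partition the third drive and grow onto it
mdadm --manage /dev/md127 --add /dev/sdc1
mdadm --grow /dev/md127 --raid-devices=3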
Here is where it gets interesting: mdadm thinks the RAID level is 0, which is incorrect.
Code:
MotherShip ~ # mdadm --detail /dev/md127
/dev/md127:
Version : 1.2
Raid Level : raid0
Total Devices : 3
Persistence : Superblock is persistent
State : inactive
Delta Devices : 1, (-1->0)
New Level : raid5
New Layout : left-symmetric
New Chunksize : 512K
Name : sabayon:0
UUID : f75d2c00:d2a2ee55:eed84606:4ef98acf
Events : 13648
Number Major Minor RaidDevice
- 8 1 - /dev/sda1
- 8 17 - /dev/sdb1
- 8 33 - /dev/sdc1
MotherShip ~ # mdadm --assemble /dev/md127
mdadm: Failed to restore critical section for reshape, sorry.
Possibly you needed to specify the --backup-file
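I assume the next step, before trying anything destructive, is to check what each member's superblock actually records for the level and the reshape position; a minimal check would be something like:
Code:
cat /proc/mdstat
mdadm --examine /dev/sda1 /dev/sdb1 /dev/sdc1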
I have done some Googling but have not really come up with anything. One site ( http://www.geekride.com/activating-a...ve-raid-array/ ) suggests recreating the RAID array and says that doing so will not delete the data on the disks.
I am concerned about losing my data on these drives.
I don't have very much experience with Linux software RAIDs.
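From the mdadm man page, assembly of an array with an interrupted reshape can take a --backup-file, and there is an --invalid-backup option for when that file is missing or corrupt. I have not tried these yet and am only listing them as things to look into; the file paths below are placeholders:
Code:
# Stop the inactive array first
mdadm --stop /dev/md127

# If the backup file from the grow still exists, point assembly at it
mdadm --assemble /dev/md127 --backup-file=/root/md127_grow.bak /dev/sda1 /dev/sdb1 /dev/sdc1

# If it is gone, the man page says an empty file plus --invalid-backup lets
# assembly continue, at the cost of whatever was in the critical section
touch /root/empty-backup
mdadm --assemble /dev/md127 --invalid-backup --backup-file=/root/empty-backup /dev/sda1 /dev/sdb1 /dev/sdc1

# Newer mdadm versions also document an assemble option --update=revert-reshape
# for backing out of an interrupted grow; check the man page caveats first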
So you did a RAID 5 with 2 drives, correct? That should technically work, as it will just be degraded, but I find mdadm does weird things when you try that. I did a RAID 10 with 7 drives once, figuring it would just show up with one drive missing, but instead it somehow built a full RAID 10 across that many drives. I ended up having to completely rebuild it, as it would not even let me add the other drive.
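For what it's worth, my understanding is that mdadm treats a 2-device RAID 5 as a complete array rather than a degraded one; to deliberately start degraded you would list the full device count and use the keyword missing. Roughly (device names are just examples):
Code:
# Complete 2-device RAID 5 (what --raid-devices=2 gives you)
mdadm --create /dev/md0 --level=5 --raid-devices=2 /dev/sdb1 /dev/sdc1

# Degraded 3-device RAID 5 with one member deliberately left out
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 missing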
So I was able to recreate this, and I now know for sure that the interrupted grow of the RAID is what caused it.
Steps to recreate this issue:
1. Create a VM. I used a CentOS 7 VM for this.
2. Add 3 disks to the VM.
3. Create the RAID: mdadm --create /dev/md0 -l 5 --raid-devices=2 /dev/sdb /dev/sdc
4. Format the RAID: mkfs.xfs /dev/md0
5. Mount /dev/md0.
6. Create some files: touch file{1..1000}.txt
7. Add the third disk to the RAID: mdadm --manage /dev/md0 --add /dev/sdd
8. Grow the RAID onto the new disk: mdadm --grow --raid-devices=3 --backup-file=/root/grow_md1.bak /dev/md0
9. Reboot.
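Since the VM has nothing to lose, it is a safe place to experiment with recovery. This is what I would try there (untested beyond the steps above; the array may come back as /dev/md127 after the reboot, so adjust the name accordingly):
Code:
# See what the kernel and the member superblocks think after the reboot
cat /proc/mdstat
mdadm --examine /dev/sdb /dev/sdc /dev/sdd

# Stop the inactive array and retry assembly with the backup file from step 8
mdadm --stop /dev/md0
mdadm --assemble /dev/md0 --backup-file=/root/grow_md1.bak /dev/sdb /dev/sdc /dev/sdd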