Old 04-04-2018, 09:13 PM   #1
pbirkbeck
LQ Newbie
 
Registered: Apr 2018
Posts: 4

Rep: Reputation: Disabled
mdadm array degraded after new kickstart install on centos 7


I have a kickstart config that partitions 4 drives in the following way:

part raid.100000 --size=250 --ondisk=sda
part raid.100001 --size=250 --ondisk=sdb
part raid.100002 --size=250 --ondisk=sdc
part raid.100003 --size=250 --ondisk=sdd
part raid.100004 --size=50000 --ondisk=sda
part raid.100005 --size=50000 --ondisk=sdb
part raid.100006 --size=50000 --ondisk=sdc
part raid.100007 --size=50000 --ondisk=sdd
part raid.100008 --size=2048 --ondisk=sda
part raid.100009 --size=2048 --ondisk=sdb
part raid.100010 --size=2048 --ondisk=sdc
part raid.100011 --size=2048 --ondisk=sdd
part raid.100012 --size=1 --grow --ondisk=sda
part raid.100013 --size=1 --grow --ondisk=sdb
part raid.100014 --size=1 --grow --ondisk=sdc
part raid.100015 --size=1 --grow --ondisk=sdd
raid /boot --fstype ext3 --level=RAID1 --device=md0 raid.100000 raid.100001 raid.100002 raid.100003
raid / --fstype ext4 --level=RAID10 --device=md1 raid.100004 raid.100005 raid.100006 raid.100007
raid swap --fstype swap --level=RAID0 --device=md2 raid.100008 raid.100009 raid.100010 raid.100011
raid /local --fstype=ext4 --level=RAID10 --device=md3 raid.100012 raid.100013 raid.100014 raid.100015


After the installation completes, mdadm --detail shows /dev/md0, /dev/md1, and /dev/md2 as OK.

But I get the following for /dev/md3. Any ideas?

/dev/md3:
Version : 1.2
Creation Time : Tue Apr 3 13:51:53 2018
Raid Level : raid10
Array Size : 941203456 (897.60 GiB 963.79 GB)
Used Dev Size : 470601728 (448.80 GiB 481.90 GB)
Raid Devices : 4
Total Devices : 2
Persistence : Superblock is persistent

Intent Bitmap : Internal

Update Time : Tue Apr 3 14:46:08 2018
State : clean, degraded
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0

Layout : near=2
Chunk Size : 512K

Consistency Policy : bitmap

Name : xxxxx.xxxxxxx.com:3 (local to host xxxxx.xxxxxxx.com)
UUID : 75878e38:01f302ad:00ab175b:2d68d09d
Events : 4833

Number Major Minor RaidDevice State
- 0 0 0 removed
1 8 21 1 active sync set-B /dev/sdb5
- 0 0 2 removed
3 8 53 3 active sync set-B /dev/sdd5
 
Old 04-05-2018, 02:43 AM   #2
AwesomeMachine
LQ Guru
 
Registered: Jan 2005
Location: USA and Italy
Distribution: Debian testing/sid; OpenSuSE; Fedora; Mint
Posts: 5,524

Rep: Reputation: 1015
That usually means you're missing a drive. Are you sure they're all specified correctly?
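To double-check, something like this (a rough sketch; device and array names are taken from your kickstart) should show whether the kernel actually sees all four member partitions:

Code:
# cat /proc/mdstat                     # which members each md device was assembled from
# lsblk -o NAME,SIZE,TYPE,MOUNTPOINT   # do sda5 and sdc5 exist as block devices at all?
# mdadm --detail /dev/md3              # current state of the degraded array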
 
Old 04-05-2018, 06:38 PM   #3
pbirkbeck
LQ Newbie
 
Registered: Apr 2018
Posts: 4

Original Poster
Rep: Reputation: Disabled
Yes, they are specified correctly. I can replicate this same issue over and over again with this kickstart config. There are 4 drives total in the system; /dev/sda5 and /dev/sdc5 seem to disappear for no reason.



/dev/md0 -

Number Major Minor RaidDevice State
0 8 3 0 active sync /dev/sda3
1 8 19 1 active sync /dev/sdb3
2 8 35 2 active sync /dev/sdc3
3 8 51 3 active sync /dev/sdd3

/dev/md1 -

Number Major Minor RaidDevice State
0 8 1 0 active sync set-A /dev/sda1
1 8 17 1 active sync set-B /dev/sdb1
2 8 33 2 active sync set-A /dev/sdc1
3 8 49 3 active sync set-B /dev/sdd1

/dev/md2

Number Major Minor RaidDevice State
0 8 2 0 active sync /dev/sda2
1 8 18 1 active sync /dev/sdb2
2 8 34 2 active sync /dev/sdc2
3 8 50 3 active sync /dev/sdd2

/dev/md3 (Problematic array)

Number Major Minor RaidDevice State
- 0 0 0 removed
1 8 21 1 active sync set-B /dev/sdb5
- 0 0 2 removed
3 8 53 3 active sync set-B /dev/sdd5
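
One way to dig a little deeper (a sketch, using the device names from the output above) is to check whether the two missing partitions exist at all and whether they carry a RAID superblock:

Code:
# lsblk /dev/sda /dev/sdc               # were sda5 and sdc5 actually created by the installer?
# mdadm --examine /dev/sda5 /dev/sdc5   # is there a superblock, and does its UUID match md3?
# dmesg | grep -i md                    # kernel messages about rejected or kicked members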
 
Old 04-05-2018, 11:12 PM   #4
AwesomeMachine
LQ Guru
 
Registered: Jan 2005
Location: USA and Italy
Distribution: Debian testing/sid; OpenSuSE; Fedora; Mint
Posts: 5,524

Rep: Reputation: 1015
With the information you've provided, I would say that md3 needs to be partitioned differently, although I don't know exactly why.
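If old arrays ever lived on these disks, leftover superblocks can also leave a freshly built array degraded, so that may be worth ruling out. A hypothetical %pre snippet for the kickstart (it assumes all four disks are meant to be wiped completely by this install):

Code:
%pre
# stop anything the installer auto-assembled from leftover metadata
mdadm --stop --scan
# clear old filesystem/RAID signatures (including the partition table) on each disk
for d in /dev/sda /dev/sdb /dev/sdc /dev/sdd; do
    wipefs -a "$d"
done
%end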
 
Old 04-08-2018, 05:15 AM   #5
voleg
Member
 
Registered: Oct 2013
Distribution: RedHat CentOS Fedora SuSE
Posts: 354

Rep: Reputation: 51
Try to hot-add the missing devices and see the error output, if any:
Code:
# mdadm -a /dev/md3 /dev/sda5
# mdadm -a /dev/md3 /dev/sdc5
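If the hot-add goes through, the resync can be watched with something like:

Code:
# watch -n 5 cat /proc/mdstat   # re-added members should show as rebuilding, then active sync
# mdadm --detail /dev/md3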
 
  

