Old 04-22-2018, 05:49 AM   #1
Nomos
LQ Newbie
 
Registered: Apr 2018
Location: Lüneburg/Germany
Distribution: Debian 9.4
Posts: 5

Rep: Reputation: Disabled
Desperately trying to setup a new RAID5


I am trying to set up a new RAID5 on my newly installed Debian 9.4 system. The motherboard is a Gigabyte GA-Z170N. I do not use the RAID options in the UEFI setup.

Until recently I had a RAID5 running with 4 x 2 TB HDDs. Now I want to upgrade to 4 x 8 TB HDDs. I made some failed attempts at initializing the new RAID5. The number of parameters for mdadm is really enormous.

My last strategy was:

Removing a possibly existing former version of the RAID

mdadm --remove /dev/md0

"Cleaning" the HDDs with

wipefs -af /dev/sda
wipefs -af /dev/sdb
wipefs -af /dev/sdc
wipefs -af /dev/sdd


Creating the new RAID5:

mdadm --create --verbose /dev/md0 /dev/sda /dev/sdb /dev/sdc /dev/sdd --level=5 --raid-devices=4


At first glance everything looks fine.

Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10]
md0 : active raid5 sdd[4] sdc[2] sdb[1] sda[0]
23441685504 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/3] [UUU_]
[>....................] recovery = 0.2% (21901724/7813895168) finish=644.6min speed=201447K/sec
bitmap: 0/59 pages [0KB], 65536KB chunk

unused devices: <none>


But after restarting the server I get the following:

Every 2,0s: cat /proc/mdstat uw-srv-1: Sun Apr 22 12:45:44 2018

Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10]
md127 : active (auto-read-only) raid5 sdc[2] sdb[1] sda[0] sdd[4]
23441685504 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/3] [UUU_]
bitmap: 0/59 pages [0KB], 65536KB chunk

unused devices: <none>



Is there anybody out there who can tell me what's wrong with it?

Regards
Nomos

Last edited by Nomos; 04-22-2018 at 05:50 AM.
 
Old 04-22-2018, 06:21 AM   #2
Turbocapitalist
LQ Guru
 
Registered: Apr 2005
Distribution: Linux Mint, Devuan, OpenBSD
Posts: 7,308
Blog Entries: 3

Rep: Reputation: 3721
If you are starting fresh and wiping the disks in the array then it is much easier to start over.

Be sure that the file system that was on the array is unmounted. Then clear the beginning. This is a bit much, but it works if you are erasing everything on it:

Code:
sudo dd if=/dev/zero bs=1M count=100 of=/dev/sda
sudo dd if=/dev/zero bs=1M count=100 of=/dev/sdb
sudo dd if=/dev/zero bs=1M count=100 of=/dev/sdc
sudo dd if=/dev/zero bs=1M count=100 of=/dev/sdd

sudo mdadm --zero-superblock /dev/sda
sudo mdadm --zero-superblock /dev/sdb
sudo mdadm --zero-superblock /dev/sdc
sudo mdadm --zero-superblock /dev/sdd
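If the old array is still assembled, it will keep the member disks busy and the commands above can fail with "Couldn't open ... for write". A sketch for releasing the disks first, assuming the old array shows up as /dev/md0 (adjust to whatever /proc/mdstat reports):

Code:
sudo umount /dev/md0        # only if it is still mounted somewhere
sudo mdadm --stop /dev/md0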
Check availability:

Code:
lsblk -o NAME,SIZE,FSTYPE,TYPE,MOUNTPOINT
Then create the array, using md0 if there are no other arrays on the machine:

Code:
sudo mdadm --create --verbose /dev/md0 --level=5 --raid-devices=4 /dev/sda /dev/sdb /dev/sdc /dev/sdd
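The initial resync on 8 TB disks will take quite a while; if you want to keep an eye on it, something like this works:

Code:
watch cat /proc/mdstat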
Clear the newly created array and format it:

Code:
sudo dd if=/dev/zero bs=1M count=100 of=/dev/md0
sudo mkfs -t ext4 /dev/md0
Grab the UUID and use that for the /etc/fstab entry.
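A sketch of what that fstab line could look like, with a made-up mount point of /srv/raid and a placeholder UUID (use the one blkid reports for /dev/md0, and whatever mount options you prefer):

Code:
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /srv/raid  ext4  defaults,nofail  0  2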

Then you can save the output of the following to /etc/mdadm/mdadm.conf

Code:
sudo mdadm --detail --scan
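One way to append it, just as a sketch (afterwards check the file and make sure there is only one ARRAY line per array):

Code:
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf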
Be sure to confirm all that with the manual pages on your particular system.
 
1 member found this post helpful.
Old 04-22-2018, 09:51 AM   #3
Nomos
LQ Newbie
 
Registered: Apr 2018
Location: Lüneburg/Germany
Distribution: Debian 9.4
Posts: 5

Original Poster
Rep: Reputation: Disabled
Moin, Turbocapitalist

Thank you for your help and your input. I did as you suggested.

1. DDing all 4 HDDs


root@uw-srv-1:~# dd if=/dev/zero bs=1M count=100 of=/dev/sda
100+0 records in
100+0 records out
104857600 bytes (105 MB, 100 MiB) copied, 0.0337322 s, 3.1 GB/s
root@uw-srv-1:~# dd if=/dev/zero bs=1M count=100 of=/dev/sdb
100+0 records in
100+0 records out
104857600 bytes (105 MB, 100 MiB) copied, 0.0331339 s, 3.2 GB/s
root@uw-srv-1:~# dd if=/dev/zero bs=1M count=100 of=/dev/sdc
100+0 records in
100+0 records out
104857600 bytes (105 MB, 100 MiB) copied, 0.0334686 s, 3.1 GB/s
root@uw-srv-1:~# dd if=/dev/zero bs=1M count=100 of=/dev/sdd
100+0 records in
100+0 records out
104857600 bytes (105 MB, 100 MiB) copied, 0.0335101 s, 3.1 GB/s

2. Zeroing the superblock:

root@uw-srv-1:~# mdadm --zero-superblock /dev/sda
mdadm: Couldn't open /dev/sda for write - not zeroing
root@uw-srv-1:~# mdadm --zero-superblock /dev/sdb
mdadm: Couldn't open /dev/sdb for write - not zeroing
root@uw-srv-1:~# mdadm --zero-superblock /dev/sdc
mdadm: Couldn't open /dev/sdc for write - not zeroing
root@uw-srv-1:~# mdadm --zero-superblock /dev/sdd
mdadm: Couldn't open /dev/sdd for write - not zeroing


3. Check availability:

NAME            SIZE FSTYPE TYPE  MOUNTPOINT
sda             7,3T        disk
└─md127        21,9T        raid5
  └─md127p1    21,9T        md
sdb             7,3T        disk
└─md127        21,9T        raid5
  └─md127p1    21,9T        md
sdc             7,3T        disk
└─md127        21,9T        raid5
  └─md127p1    21,9T        md
sdd             7,3T        disk
└─md127        21,9T        raid5
  └─md127p1    21,9T        md
nvme0n1       238,5G        disk
├─nvme0n1p1     512M vfat   part  /boot/efi
├─nvme0n1p2    69,9G ext4   part  /
├─nvme0n1p3     9,3G ext4   part  /var
├─nvme0n1p4    31,9G swap   part  [SWAP]
├─nvme0n1p5     1,9G ext4   part  /tmp
└─nvme0n1p6     125G ext4   part  /home


4. Because of this result, I did this:

root@uw-srv-1:~# mdadm --stop /dev/md127
mdadm: stopped /dev/md127

5. Again:

root@uw-srv-1:~# mdadm --zero-superblock /dev/sda
mdadm: Unrecognised md component device - /dev/sda
root@uw-srv-1:~# mdadm --zero-superblock /dev/sdb
mdadm: Unrecognised md component device - /dev/sdb
root@uw-srv-1:~# mdadm --zero-superblock /dev/sdc
mdadm: Unrecognised md component device - /dev/sdc
root@uw-srv-1:~# mdadm --zero-superblock /dev/sdd
mdadm: Unrecognised md component device - /dev/sdd

6. Check availability - #2:

root@uw-srv-1:~# lsblk -o NAME,SIZE,FSTYPE,TYPE,MOUNTPOINT
NAME          SIZE FSTYPE TYPE MOUNTPOINT
sda           7,3T        disk
sdb           7,3T        disk
sdc           7,3T        disk
sdd           7,3T        disk
nvme0n1     238,5G        disk
├─nvme0n1p1   512M vfat   part /boot/efi
├─nvme0n1p2  69,9G ext4   part /
├─nvme0n1p3   9,3G ext4   part /var
├─nvme0n1p4  31,9G swap   part [SWAP]
├─nvme0n1p5   1,9G ext4   part /tmp
└─nvme0n1p6   125G ext4   part /home

7. Creating the RAID

root@uw-srv-1:~# mdadm --create --verbose /dev/md0 --level=5 --raid-devices=4 /dev/sda /dev/sdb /dev/sdc /dev/sdd
mdadm: layout defaults to left-symmetric
mdadm: layout defaults to left-symmetric
mdadm: chunk size defaults to 512K
mdadm: size set to 7813895168K
mdadm: automatically enabling write-intent bitmap on large array
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.

8. DDing the newly created array:

root@uw-srv-1:~# dd if=/dev/zero bs=1M count=100 of=/dev/md0
100+0 records in
100+0 records out
104857600 bytes (105 MB, 100 MiB) copied, 0.293413 s, 357 MB/s

9. Creating the filesystem:

root@uw-srv-1:~# mkfs -t ext4 /dev/md0
mke2fs 1.43.4 (31-Jan-2017)
Creating filesystem with 5860421376 4k blocks and 366276608 inodes
Filesystem UUID: 1fcfcb01-9038-41fc-a920-c5ab77c66551
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
102400000, 214990848, 512000000, 550731776, 644972544, 1934917632,
2560000000, 3855122432, 5804752896

Allocating group tables: done
Writing inode tables: done
Creating journal (262144 blocks): done
Writing superblocks and filesystem accounting information: done

10. Retrieving the metadata of /dev/md0

root@uw-srv-1:~# mdadm --detail /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Sun Apr 22 15:39:59 2018
Raid Level : raid5
Array Size : 23441685504 (22355.73 GiB 24004.29 GB)
Used Dev Size : 7813895168 (7451.91 GiB 8001.43 GB)
Raid Devices : 4
Total Devices : 4
Persistence : Superblock is persistent

Intent Bitmap : Internal

Update Time : Sun Apr 22 16:04:13 2018
State : clean, degraded, recovering
Active Devices : 3
Working Devices : 4
Failed Devices : 0
Spare Devices : 1

Layout : left-symmetric
Chunk Size : 512K

Rebuild Status : 3% complete

Name : uw-srv-1:0 (local to host uw-srv-1)
UUID : a509d083:6fc32c6e:3046d464:c79b38bc
Events : 285

    Number   Major   Minor   RaidDevice State
       0       8        0        0      active sync          /dev/sda
       1       8       16        1      active sync          /dev/sdb
       2       8       32        2      active sync          /dev/sdc
       4       8       48        3      spare rebuilding     /dev/sdd

11. Editing the mdadm.conf

# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default (built-in), scan all partitions (/proc/partitions) and all
# containers for MD superblocks. alternatively, specify devices to scan, using
# wildcards if desired.
#DEVICE partitions containers

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# definitions of existing MD arrays
# ARRAY /dev/md/0 metadata=1.2 UUID=4831bbbc:1cfa82b4:a7d95521:f9e6947b name=uw-srv-1:0
ARRAY /dev/md0 metadata=1.2 UUID=a509d083:6fc32c6e:3046d464:c79b38bc name=uw-srv-1:0


12. Restarting the system

And again: /dev/md0 has vanished.

Every 2,0s: cat /proc/mdstat uw-srv-1: Sun Apr 22 16:39:26 2018

Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10]
md127 : active (auto-read-only) raid5 sda[0] sdd[4] sdc[2] sdb[1]
23441685504 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/3] [UUU_]
bitmap: 0/59 pages [0KB], 65536KB chunk

unused devices: <none>

What the hell is going on? Do you have any more ideas?

Regards

PS: I apologize for the format of this post, but I couldn't figure out how to use other character sets and so on.

Last edited by Nomos; 04-22-2018 at 09:52 AM.
 
Old 04-22-2018, 09:55 AM   #4
Turbocapitalist
LQ Guru
 
Registered: Apr 2005
Distribution: Linux Mint, Devuan, OpenBSD
Posts: 7,308
Blog Entries: 3

Rep: Reputation: 3721
Re-check steps 8 through 11. It seems the device might be md127 instead of md0.
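To confirm which name the array really came up under after the reboot, a quick check along these lines should tell you (adjust the device name to whatever /proc/mdstat shows):

Code:
cat /proc/mdstat
sudo mdadm --detail /dev/md127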

If you want to keep fixed width text, you can put it between [code] [/code] tags.
 
1 member found this post helpful.
Old 04-22-2018, 11:53 AM   #5
Nomos
LQ Newbie
 
Registered: Apr 2018
Location: Lüneburg/Germany
Distribution: Debian 9.4
Posts: 5

Original Poster
Rep: Reputation: Disabled
Moin, Turbocapitalist:

Another source on the web reports the following:

And there seems to be some change to do with /dev/md/* and /dev/md* which I don't yet understand. You need to update initramfs so it contains your mdadm.conf settings during boot.

Code:
sudo update-initramfs -u

After doing so I get the following results:

Code:
Disk /dev/nvme0n1: 238,5 GiB, 256060514304 bytes, 500118192 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 1941E797-5868-4150-979C-7A0799A4E36C

Device             Start       End   Sectors  Size Type
/dev/nvme0n1p1      2048   1050623   1048576  512M EFI System
/dev/nvme0n1p2   1050624 147673087 146622464 69,9G Linux filesystem
/dev/nvme0n1p3 147673088 167204863  19531776  9,3G Linux filesystem
/dev/nvme0n1p4 167204864 234067967  66863104 31,9G Linux swap
/dev/nvme0n1p5 234067968 237973503   3905536  1,9G Linux filesystem
/dev/nvme0n1p6 237973504 500117503 262144000  125G Linux filesystem


Disk /dev/sda: 7,3 TiB, 8001563222016 bytes, 15628053168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/sdb: 7,3 TiB, 8001563222016 bytes, 15628053168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/sdd: 7,3 TiB, 8001563222016 bytes, 15628053168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/sdc: 7,3 TiB, 8001563222016 bytes, 15628053168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/md127: 21,9 TiB, 24004285956096 bytes, 46883371008 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 524288 bytes / 1572864 bytes
And also:

Code:
root@uw-srv-1:~# lsblk -o NAME,SIZE,FSTYPE,TYPE,MOUNTPOINT
NAME          SIZE FSTYPE            TYPE  MOUNTPOINT
sda           7,3T linux_raid_member disk
└─md127      21,9T ext4              raid5
sdb           7,3T linux_raid_member disk
└─md127      21,9T ext4              raid5
sdc           7,3T linux_raid_member disk
└─md127      21,9T ext4              raid5
sdd           7,3T linux_raid_member disk
└─md127      21,9T ext4              raid5
nvme0n1     238,5G                   disk
├─nvme0n1p1   512M vfat              part  /boot/efi
├─nvme0n1p2  69,9G ext4              part  /
├─nvme0n1p3   9,3G ext4              part  /var
├─nvme0n1p4  31,9G swap              part  [SWAP]
├─nvme0n1p5   1,9G ext4              part  /tmp
└─nvme0n1p6   125G ext4              part  /home
And look at this:

Code:
root@uw-srv-1:~# blkid
/dev/nvme0n1p2: UUID="2b11a9f0-90f3-4386-bb9e-78d80d11859b" TYPE="ext4" PARTUUID="cf40bace-396a-455e-bb29-a1a28a303418"
/dev/md127: UUID="1fcfcb01-9038-41fc-a920-c5ab77c66551" TYPE="ext4"
/dev/nvme0n1p1: UUID="9077-41D3" TYPE="vfat" PARTUUID="f28a95f7-fbd4-4ce5-b991-dc11cbaf16fe"
/dev/nvme0n1p3: UUID="b0b5b040-6f76-4448-ab62-30376b73ba62" TYPE="ext4" PARTUUID="4dcc06de-54f4-4e83-8190-26b773cc1376"
/dev/nvme0n1p4: UUID="82ea6f4b-2d4c-4f7f-b66c-1312bad69066" TYPE="swap" PARTUUID="b52483c1-63db-4a41-95ad-2268281f384c"
/dev/nvme0n1p5: UUID="6e866f1e-4b8d-4409-a798-058843d9f549" TYPE="ext4" PARTUUID="98285bf7-6d37-4052-885d-a64aa9ea89f7"
/dev/nvme0n1p6: UUID="0d4bbd77-83fd-4188-91ac-66de0e250989" TYPE="ext4" PARTUUID="18cfbe7b-a48b-4631-8524-a581c01c4cf0"
/dev/sda: UUID="a509d083-6fc3-2c6e-3046-d464c79b38bc" UUID_SUB="0843ccf0-a128-238c-4151-8815c28d7543" LABEL="uw-srv-1:0" TYPE="linux_raid_member"
/dev/sdb: UUID="a509d083-6fc3-2c6e-3046-d464c79b38bc" UUID_SUB="f763ad09-c59b-11f1-080c-ef0f94409c57" LABEL="uw-srv-1:0" TYPE="linux_raid_member"
/dev/sdd: UUID="a509d083-6fc3-2c6e-3046-d464c79b38bc" UUID_SUB="4d572fcc-60c9-ac3d-718a-21e711225bb7" LABEL="uw-srv-1:0" TYPE="linux_raid_member"
/dev/sdc: UUID="a509d083-6fc3-2c6e-3046-d464c79b38bc" UUID_SUB="14ffaf07-f482-640e-3964-f9993364b2b9" LABEL="uw-srv-1:0" TYPE="linux_raid_member"
/dev/nvme0n1: PTUUID="1941e797-5868-4150-979c-7a0799a4e36c" PTTYPE="gpt"
The remaining questions are:

1. How to change the state from (auto-read-only) to read/write?

Code:
root@uw-srv-1:~# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10]
md127 : active (auto-read-only) raid5 sda[0] sdb[1] sdc[2] sdd[4]
      23441685504 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/3] [UUU_]
      bitmap: 0/59 pages [0KB], 65536KB chunk

2. Is my entry in mdadm.conf correct?

Code:
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default (built-in), scan all partitions (/proc/partitions) and all
# containers for MD superblocks. alternatively, specify devices to scan, using
# wildcards if desired.
#DEVICE partitions containers

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# definitions of existing MD arrays
# ARRAY /dev/md/0  metadata=1.2 UUID=4831bbbc:1cfa82b4:a7d95521:f9e6947b name=srv1-debian:0
ARRAY /dev/md/127 metadata=1.2 UUID=a509d083:6fc32c6e:3046d464:c79b38bc  name=uw-srv-1:0
Regards
Nomos
 
Old 04-22-2018, 12:02 PM   #6
Turbocapitalist
LQ Guru
 
Registered: Apr 2005
Distribution: Linux Mint, Devuan, OpenBSD
Posts: 7,308
Blog Entries: 3

Rep: Reputation: 3721
I think you'd get more out of a query:

Code:
sudo mdadm --query /dev/md127

sudo mdadm --detail --scan /dev/md127
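As for the auto-read-only state: that is normal for an array that has not been written to since it was assembled, and it should switch to read-write on the first write. If you want to flip it by hand, a sketch (assuming the array really is /dev/md127):

Code:
sudo mdadm --readwrite /dev/md127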
 
  

