I am trying to set up a new RAID5 array on my freshly installed Debian 9.4. The motherboard is a Gigabyte GA-Z170N. I do not use the RAID options in the UEFI setup.
Until recently I had a RAID5 running with 4 x 2 TB HDDs. Now I want to upgrade to 4 x 8 TB HDDs. I made some faulty attempts at initializing the new RAID5. The number of parameters for mdadm is really enormous.
My last strategy was:
Removing a possibly existing former version of the RAID
If you are starting fresh and wiping the disks in the array, then it is much easier to start over.
Be sure that the file system that was on the array is unmounted. Then clear the beginning of each disk. This is a bit much, but it works if you are erasing everything on it:
root@uw-srv-1:~# mdadm --zero-superblock /dev/sda
mdadm: Couldn't open /dev/sda for write - not zeroing
root@uw-srv-1:~# mdadm --zero-superblock /dev/sdb
mdadm: Couldn't open /dev/sdb for write - not zeroing
root@uw-srv-1:~# mdadm --zero-superblock /dev/sdc
mdadm: Couldn't open /dev/sdc for write - not zeroing
root@uw-srv-1:~# mdadm --zero-superblock /dev/sdd
mdadm: Couldn't open /dev/sdd for write - not zeroing
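The "Couldn't open ... for write" errors usually mean that the member disks are still held by a running array. A minimal sketch of the usual fix, assuming /dev/md127 (visible in the lsblk output below) is the stale, auto-assembled old array and that its contents can be discarded:

# Stop the auto-assembled old array so that the member disks are released
mdadm --stop /dev/md127
# Now the old RAID superblocks can be zeroed
mdadm --zero-superblock /dev/sda /dev/sdb /dev/sdc /dev/sdd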
3. Check availability with lsblk -o NAME,SIZE,FSTYPE,TYPE,MOUNTPOINT. At first the disks are still listed as members of the old /dev/md127 array:
NAME SIZE FSTYPE TYPE MOUNTPOINT
sda 7,3T disk
└─md127 21,9T raid5
└─md127p1 21,9T md
sdb 7,3T disk
└─md127 21,9T raid5
└─md127p1 21,9T md
sdc 7,3T disk
└─md127 21,9T raid5
└─md127p1 21,9T md
sdd 7,3T disk
└─md127 21,9T raid5
└─md127p1 21,9T md
nvme0n1 238,5G disk
├─nvme0n1p1 512M vfat part /boot/efi
├─nvme0n1p2 69,9G ext4 part /
├─nvme0n1p3 9,3G ext4 part /var
├─nvme0n1p4 31,9G swap part [SWAP]
├─nvme0n1p5 1,9G ext4 part /tmp
└─nvme0n1p6 125G ext4 part /home
After the old array has been stopped, the same lsblk shows the disks as free:
root@uw-srv-1:~# lsblk -o NAME,SIZE,FSTYPE,TYPE,MOUNTPOINT
NAME SIZE FSTYPE TYPE MOUNTPOINT
sda 7,3T disk
sdb 7,3T disk
sdc 7,3T disk
sdd 7,3T disk
nvme0n1 238,5G disk
├─nvme0n1p1 512M vfat part /boot/efi
├─nvme0n1p2 69,9G ext4 part /
├─nvme0n1p3 9,3G ext4 part /var
├─nvme0n1p4 31,9G swap part [SWAP]
├─nvme0n1p5 1,9G ext4 part /tmp
└─nvme0n1p6 125G ext4 part /home
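Before creating the new array it can be worth confirming that no stale RAID metadata is left on the disks; a quick check (hypothetical here, not part of the original transcript) is:

# Each clean disk should report that no md superblock is detected
mdadm --examine /dev/sda /dev/sdb /dev/sdc /dev/sdd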
7. Creating the RAID
root@uw-srv-1:~# mdadm --create --verbose /dev/md0 --level=5 --raid-devices=4 /dev/sda /dev/sdb /dev/sdc /dev/sdd
mdadm: layout defaults to left-symmetric
mdadm: layout defaults to left-symmetric
mdadm: chunk size defaults to 512K
mdadm: size set to 7813895168K
mdadm: automatically enabling write-intent bitmap on large array
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
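The array is usable right away, but mdadm still builds the parity in the background (the --detail output further down shows "spare rebuilding" and "Rebuild Status : 3% complete"). The progress of the initial sync can be watched, for example, with:

# One-off snapshot of all md arrays and their resync progress
cat /proc/mdstat
# Or refresh the view every few seconds
watch -n 5 cat /proc/mdstat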
root@uw-srv-1:~# mkfs -t ext4 /dev/md0
mke2fs 1.43.4 (31-Jan-2017)
Creating filesystem with 5860421376 4k blocks and 366276608 inodes
Filesystem UUID: 1fcfcb01-9038-41fc-a920-c5ab77c66551
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
102400000, 214990848, 512000000, 550731776, 644972544, 1934917632,
2560000000, 3855122432, 5804752896
Allocating group tables: done
Writing inode tables: done
Creating journal (262144 blocks): done
Writing superblocks and filesystem accounting information: done
10. Retrieving the metadata of /dev/md0
root@uw-srv-1:~# mdadm --detail /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Sun Apr 22 15:39:59 2018
Raid Level : raid5
Array Size : 23441685504 (22355.73 GiB 24004.29 GB)
Used Dev Size : 7813895168 (7451.91 GiB 8001.43 GB)
Raid Devices : 4
Total Devices : 4
Persistence : Superblock is persistent
Intent Bitmap : Internal
Update Time : Sun Apr 22 16:04:13 2018
State : clean, degraded, recovering
Active Devices : 3
Working Devices : 4
Failed Devices : 0
Spare Devices : 1
Layout : left-symmetric
Chunk Size : 512K
Rebuild Status : 3% complete
Name : uw-srv-1:0 (local to host uw-srv-1)
UUID : a509d083:6fc32c6e:3046d464:c79b38bc
Events : 285
Number Major Minor RaidDevice State
0 8 0 0 active sync /dev/sda
1 8 16 1 active sync /dev/sdb
2 8 32 2 active sync /dev/sdc
4 8 48 3 spare rebuilding /dev/sdd
11. Editing the mdadm.conf (/etc/mdadm/mdadm.conf on Debian)
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#
# by default (built-in), scan all partitions (/proc/partitions) and all
# containers for MD superblocks. alternatively, specify devices to scan, using
# wildcards if desired.
#DEVICE partitions containers
# automatically tag new arrays as belonging to the local system
HOMEHOST <system>
# instruct the monitoring daemon where to send mail alerts
MAILADDR root
And there seems to be some difference between /dev/md/* and /dev/md* which I don't yet understand. You need to update the initramfs so that it contains your mdadm.conf settings during boot (see the commands after the file listing below). With the array definition added, the mdadm.conf looks like this:
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#
# by default (built-in), scan all partitions (/proc/partitions) and all
# containers for MD superblocks. alternatively, specify devices to scan, using
# wildcards if desired.
#DEVICE partitions containers
# automatically tag new arrays as belonging to the local system
HOMEHOST <system>
# instruct the monitoring daemon where to send mail alerts
MAILADDR root
# definitions of existing MD arrays
# ARRAY /dev/md/0 metadata=1.2 UUID=4831bbbc:1cfa82b4:a7d95521:f9e6947b name=srv1-debian:0
ARRAY /dev/md/127 metadata=1.2 UUID=a509d083:6fc32c6e:3046d464:c79b38bc name=uw-srv-1:0
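The ARRAY line can be generated instead of typed by hand, and on Debian the initramfs then has to be rebuilt so that the array is assembled with these settings during early boot. A minimal sketch using standard Debian commands (not taken from the transcript above):

# Append the definition of the currently running array to mdadm.conf
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
# Rebuild the initramfs so the updated mdadm.conf is included at boot
update-initramfs -u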