07-05-2022, 10:43 AM | #1
LQ Newbie | Registered: Jan 2021 | Posts: 11
Is there a way to not lose a RAID0 array after power loss?
Hello everyone,
I'm the owner of an HP ZBook 17 G6 with 3 NVMe drives set up in RAID0 with mdadm.
The operating system installed is CentOS Stream 9.
I left my PC in sleep mode for too long and it shut down.
After the reboot the RAID0 array was lost.
If I shut down or reboot manually no problem occurs, but on power loss the RAID0 virtual disk disappears.
What I wish to know:
- Is there a way to avoid or prevent this?
- Is there a way to let Linux rebuild it easily and quickly after the reboot (from a power loss)?
Thanks for the help
Last edited by EVIDEON; 07-05-2022 at 10:45 AM.
07-05-2022, 11:27 AM | #2
Senior Member | Registered: Feb 2011 | Location: Massachusetts, USA | Distribution: Fedora | Posts: 4,337
Are you trying to boot from RAID? How is GRUB configured and installed? What's in your mdadm.conf file?
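(Aside, for illustration only and not commands from this reply: two quick, read-only checks that speak to these questions on a CentOS layout would be something like the following.)
Code:
# show which device the root filesystem is actually mounted from
findmnt /
# the config file mdadm normally reads on CentOS/RHEL, if it exists
cat /etc/mdadm.conf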
07-05-2022, 06:07 PM | #3
LQ Newbie (Original Poster) | Registered: Jan 2021 | Posts: 11
Thanks a lot for the quick reply.
My system has:
- 1 SSD with CentOS installed, from which the machine boots (/dev/sda)
- 3 NVMe drives configured in RAID0 (/dev/md0 mounted on /mnt/md0)
I'd like to say more about the GRUB configuration and installation, but I have no experience with it (I don't know what information to give).
My mdadm.conf file (which isn't located in /etc but in /usr/share/doc/mdadm):
Code:
ARRAY /dev/md0 metadata=1.2 name=ZERO:0 UUID=d307e76e:384447ad:17f0cd99:7cb32895
ARRAY /dev/md0 metadata=1.2 name=ZERO:0 UUID=430c76f6:daf84a85:ce98bd65:80a6a25b
I don't know if it will be useful, but my /etc/fstab file contains:
Code:
# CentOS
/dev/mapper/cs_zero-root / xfs defaults 0 0
UUID=df0563e5-7e57-428e-8962-b51a7d84cff8 /boot xfs defaults 0 0
UUID=AA57-9A10 /boot/efi vfat umask=0077,shortname=winnt 0 2
/dev/mapper/cs_zero-home /home xfs defaults 0 0
/dev/mapper/cs_zero-swap none swap defaults 0 0
# RAID0
/dev/md0 /mnt/md0 ext4 defaults,nofail,discard 0 0
Really appreciate your help
Thanks
07-05-2022, 07:15 PM | #4
LQ Veteran | Registered: Aug 2003 | Location: Australia | Distribution: Lots ... | Posts: 21,414
Let's see
Code:
cat /proc/mdstat
sudo lsblk -f
07-05-2022, 11:57 PM | #5
Senior Member | Registered: Jul 2020 | Posts: 1,595
When the array is being written to, there are moments when some of the drives in the array have been updated while others haven't. If the computer is turned off at such a moment, the array becomes inconsistent and, as a precaution, won't be brought up automatically. You can bring an inconsistent array back up manually with the --force flag, but because RAID0 lacks any means to resolve the inconsistency (checksums, redundancy or journaling), these errors will keep accumulating. RAID0, especially a software one, is not for systems that can be powered down unexpectedly, even if you set aside its general unreliability. It just isn't.
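(Aside, for illustration only, using the device names from this thread rather than commands given in this post: forcing an unclean array back together usually looks roughly like the sketch below, with a read-only filesystem check before anything is mounted.)
Code:
# stop any partially assembled array first
sudo mdadm --stop /dev/md0
# force assembly even if the member metadata looks out of date
sudo mdadm --assemble --force /dev/md0 /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1
# read-only filesystem check before mounting
sudo fsck.ext4 -n /dev/md0
sudo mount /dev/md0 /mnt/md0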
07-06-2022, 05:03 AM | #6
Senior Member | Registered: Feb 2011 | Location: Massachusetts, USA | Distribution: Fedora | Posts: 4,337
How does md find your mdadm.conf file if it's not in /etc?
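(Aside, for illustration only and not part of this reply: on CentOS/RHEL mdadm normally reads /etc/mdadm.conf, and files under /usr/share/doc are usually just shipped examples. Assuming the array is assembled at the time, a valid entry is typically generated like this.)
Code:
# print ARRAY lines for whatever arrays are currently assembled
sudo mdadm --detail --scan
# append them to the config file mdadm actually reads on CentOS
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm.conf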
07-06-2022, 05:58 AM | #7
LQ Guru | Registered: Jan 2006 | Location: Ireland | Distribution: Slackware, Slarm64 & Android | Posts: 17,695
I would go through it with ls -lht and find the last stripes being written. Avoid that file if you can trace it.
I had dealings with one RAID 0 system, which striped data onto 2 drives while the data map was on a third. It seems obvious that whoever set that system up wasn't worried about backups. Until now.
I personally doubt that a file in /usr/share/doc is the configuration, and I would expect 3 drives in a RAID 0 config: at least two for data, and a third for housekeeping. Are those your disk UUIDs?
Also, just because it says 'RAID0' in /etc/fstab doesn't necessarily mean the thing uses RAID 0. It could be RAID (device) 0, as opposed to RAID device 1, etc. I'd look for independent verification.
Last edited by business_kid; 07-06-2022 at 06:01 AM.
07-06-2022, 09:33 AM | #8
LQ Newbie (Original Poster) | Registered: Jan 2021 | Posts: 11
Thanks everyone,
mdadm.conf not being in /etc sounds strange to me too.
On Debian-based OSes the file was there, but here the system made its own folder choices.
I've tried copying it into /etc but nothing changed.
Anyway, without a power loss, the array was recognized with no problems after a manual reboot or manual shutdown.
Code:
[zero@ZERO ~]$ cat /proc/mdstat
Personalities :
unused devices: <none>
[zero@ZERO ~]$ sudo lsblk -f
[sudo] password for zero:
NAME FSTYPE FSVER LABEL UUID FSAVAIL FSUSE% MOUNTPOINTS
sda
├─sda1
│ vfat FAT32 AA57-9A10 591.3M 1% /boot/efi
├─sda2
│ xfs df0563e5-7e57-428e-8962-b51a7d84cff8 612.7M 40% /boot
└─sda3
LVM2_m LVM2 rG0PHk-E5OH-o3OT-eVrJ-6CVW-Gd4l-3lAJZJ
├─cs_zero-root
│ xfs / 41b7a843-75db-489f-a84a-421df2ac9f9a 45.3G 35% /
├─cs_zero-swap
│ swap 1 b55acf2e-2642-4a54-9b88-ff03acf3fb86 [SWAP]
└─cs_zero-home
xfs /home 912815cf-60de-4667-a89e-782a65bb5783 1.7T 1% /home
nvme0n1
nvme1n1
nvme2n1
Quote:
I personally doubt that a file in /usr/share/doc is the configuration, and I would expect 3 drives in a RAID 0 config: at least two for data, and a third for housekeeping. Are those your disk UUIDs?
I've searched, but the .conf file is only there (/usr/share/doc).
I set level=0 for the RAID during configuration but can't say much about the system's choices for the data. (sorry)
Thanks
07-06-2022, 03:09 PM | #9
LQ Guru | Registered: Jan 2006 | Location: Ireland | Distribution: Slackware, Slarm64 & Android | Posts: 17,695
A file in /usr/share/doc is a sample config. What's more, the UUIDs in that config don't seem to be your disks' UUIDs. You can check with
Code:
ls -l /dev/disk/by-uuid
I don't think you can have LVM and RAID operating at the same time. LVM hoovers up all the space and administers it, so directories can grow or shrink and LVM manages that space. RAID 0 wants a very definitely assigned set of spaces.
07-06-2022, 06:13 PM | #10
Moderator | Registered: Aug 2002 | Posts: 26,855
Post the output of the command:
Code:
sudo mdadm --detail --scan
Yes, you can have both RAID and LVM. The interesting part is that none of the NVMe drives show any information about being a RAID member.
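(Aside, for illustration only and not commands from this reply: a non-destructive way to double-check whether any RAID or other signatures remain on the member drives is something like the following.)
Code:
# list any signatures wipefs can see, without erasing anything
sudo wipefs --no-act /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1
# low-level probe of a single drive's on-disk metadata
sudo blkid -p /dev/nvme0n1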
07-07-2022, 05:42 PM | #11
LQ Newbie (Original Poster) | Registered: Jan 2021 | Posts: 11
Code:
[zero@ZERO ~]$ ls -l /dev/disk/by-uuid
total 0
lrwxrwxrwx 1 root root 10 Jul 6 00:31 41b7a843-75db-489f-a84a-421df2ac9f9a -> ../../dm-0
lrwxrwxrwx 1 root root 15 Jul 6 00:31 5a7cc108-4795-4324-89b7-3d035b4cd8e5 -> ../../mmcblk0p1
lrwxrwxrwx 1 root root 10 Jul 6 00:31 912815cf-60de-4667-a89e-782a65bb5783 -> ../../dm-2
lrwxrwxrwx 1 root root 10 Jul 6 00:31 AA57-9A10 -> ../../sda1
lrwxrwxrwx 1 root root 10 Jul 6 00:31 b55acf2e-2642-4a54-9b88-ff03acf3fb86 -> ../../dm-1
lrwxrwxrwx 1 root root 10 Jul 6 00:31 df0563e5-7e57-428e-8962-b51a7d84cff8 -> ../../sda2
(mmcblk0p1 is my removable SD card)
Code:
[zero@ZERO ~]$ sudo mdadm --detail --scan
[zero@ZERO ~]$
(absolutely nothing)
Thanks
07-07-2022, 06:21 PM | #12
Moderator | Registered: Aug 2002 | Posts: 26,855
dm-0, dm-1 and dm-2 are the LVM volumes, but it appears your RAID has just disappeared. I should have asked you to also post the output of the commands:
Code:
mdadm -E /dev/nvme0n1
mdadm -E /dev/nvme1n1
mdadm -E /dev/nvme2n1
You can try to reassemble the array with:
Code:
sudo mdadm --assemble /dev/md0 /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1
07-07-2022, 06:29 PM | #13
LQ Newbie (Original Poster) | Registered: Jan 2021 | Posts: 11
Code:
[root@ZERO zero]# mdadm -E /dev/nvme0n1
/dev/nvme0n1:
MBR Magic : aa55
Partition[0] : 4000797359 sectors at 1 (type ee)
[root@ZERO zero]# mdadm -E /dev/nvme1n1
/dev/nvme1n1:
MBR Magic : aa55
Partition[0] : 4000797359 sectors at 1 (type ee)
[root@ZERO zero]# mdadm -E /dev/nvme2n1
/dev/nvme2n1:
MBR Magic : aa55
Partition[0] : 4000797359 sectors at 1 (type ee)
[root@ZERO zero]#
Code:
[root@ZERO zero]# mdadm --assemble /dev/md0 /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1
mdadm: Cannot assemble mbr metadata on /dev/nvme0n1
mdadm: /dev/nvme0n1 has no superblock - assembly aborted
[root@ZERO zero]#
07-07-2022, 07:01 PM | #14
Moderator | Registered: Aug 2002 | Posts: 26,855
The drives do not look like they are partitioned, but it looks like there is something there. Post the output of:
Code:
sudo fdisk -l /dev/nvme0n1
Do you have a backup of important data from the RAID? Do you need to recover any data?
Last edited by michaelk; 07-07-2022 at 08:14 PM.
07-08-2022, 09:58 AM | #15
LQ Newbie (Original Poster) | Registered: Jan 2021 | Posts: 11
Code:
[zero@ZERO ~]$ sudo fdisk -l /dev/nvme0n1
[sudo] password for zero:
Disk /dev/nvme0n1: 1.86 TiB, 2048408248320 bytes, 4000797360 sectors
Disk model: KXG50PNV2T04 KIOXIA
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: CBFEE7D0-7424-43D9-A4D3-32D30F9D620C
[zero@ZERO ~]$
I really appreciate the time you have given to my problem.
I don't need any data back from the old RAID array.
I only wish to know if there is a method to avoid the same problem now that I have set up a brand new RAID configuration.
I need to not lose the data in case of a power loss.
Thanks everyone for the help.
Thank you, Michael.
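(Aside, for illustration only and not an answer given in the thread: two steps that are commonly suggested when recreating an mdadm array like this are building it on partitions rather than on the raw disks, so other tools are less likely to treat the drives as empty, and recording the assembled array in /etc/mdadm.conf so it can be put back together at every boot. A minimal sketch, assuming each NVMe drive carries a single partition of type "Linux RAID":)
Code:
# create the new RAID0 across the partitions (the nvme*n1p1 names are assumed)
sudo mdadm --create /dev/md0 --level=0 --raid-devices=3 \
    /dev/nvme0n1p1 /dev/nvme1n1p1 /dev/nvme2n1p1
# record the array so it is known at every boot
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm.conf
# regenerate the initramfs on CentOS so early boot also sees the config
sudo dracut -f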