Old 12-09-2020, 05:12 AM   #1
g4njawizard
Member
 
Nextcloud - RPi 4 - mdadm unable to remove RAID 5 array


Hi everyone,

I am currently trying to create a new RAID 5 array because I experienced huge performance problems when running updates on Nextcloud: it takes almost a full day to create a backup, download the files, and extract and replace them.
I have already tried both NextcloudPi and the normal Nextcloud server version on my Raspberry. Both perform poorly when reading from and writing to the disks, although uploading and downloading files from the cloud runs smoothly.

On my Raspberry Pi 4 4GB I have 4 SATA disks of 2TB each. The first time I created an array I used RAID 5 with a spare disk, but that constellation keeps reappearing every time, even after I delete all the metadata. This time I want to use all 4 disks with no spare.

I have already zeroed all 4 disks, which took roughly 6-8 hours per disk, but the write speed was OK in my opinion: around 130-140 MB/s on each disk.
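
For reference, a minimal sketch of how such a zero-wipe with a live throughput readout can be done with GNU dd (device name /dev/sda as an example; adapt per disk):

Code:
# WARNING: destroys all data on the target disk.
# status=progress prints throughput while writing,
# conv=fsync flushes everything to the device before dd exits.
dd if=/dev/zero of=/dev/sda bs=4M status=progress conv=fsync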

What I've also tried:

Code:
OK root@ncloud:~# mdadm --stop /dev/md127 
mdadm: stopped /dev/md127    
OK root@ncloud:~# mdadm --remove /dev/md127
mdadm: error opening /dev/md127: No such file or directory
Error root@ncloud:~# cat /proc/mdstat 
Personalities : [raid6] [raid5] [raid4] 
unused devices: <none>
OK root@ncloud:~# mdadm --examine --brief --scan  --config=partitions
ARRAY /dev/md/vol1  metadata=1.2 UUID=46c6092a:69f8d8f4:23bfc213:a9fb2222 name=ncloud:vol1
OK root@ncloud:~# mdadm --zero-superblock /dev/sda /dev/sdb /dev/sdc /dev/sdd
OK root@ncloud:~# wipefs -af /dev/sda /dev/sdb /dev/sdc /dev/sdd
OK root@ncloud:~# mdadm --examine --brief --scan  --config=partitions
But the same array with the spare disk keeps coming back.
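
A quick way to confirm the wipe really removed every trace (just a suggestion, not shown in the thread):

Code:
# each disk should print "No md superblock detected"
mdadm --examine /dev/sda /dev/sdb /dev/sdc /dev/sdd
# the FSTYPE column should be empty for all four disks
lsblk -f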


Code:
root@ncloud:~# mdadm --create --verbose --chunk=128 /dev/md/vol1 --level=5 --raid-devices=4 /dev/sda /dev/sdb /dev/sdc /dev/sdd
mdadm: layout defaults to left-symmetric
mdadm: layout defaults to left-symmetric
mdadm: size set to 1953382400K
mdadm: automatically enabling write-intent bitmap on large array
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md/vol1 started.
OK root@ncloud:~# mdadm --detail /dev/md/vol1
/dev/md/vol1:
           Version : 1.2
     Creation Time : Wed Dec  9 08:09:29 2020
        Raid Level : raid5
        Array Size : 5860147200 (5588.67 GiB 6000.79 GB)
     Used Dev Size : 1953382400 (1862.89 GiB 2000.26 GB)
      Raid Devices : 4
     Total Devices : 4
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Wed Dec  9 08:09:30 2020
             State : clean, degraded, recovering 
    Active Devices : 3
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 1

            Layout : left-symmetric
        Chunk Size : 128K

Consistency Policy : bitmap

    Rebuild Status : 0% complete

              Name : ncloud:vol1  (local to host ncloud)
              UUID : 46c6092a:69f8d8f4:23bfc213:a9fb2222
            Events : 2

    Number   Major   Minor   RaidDevice State
       0       8        0        0      active sync   /dev/sda
       1       8       16        1      active sync   /dev/sdb
       2       8       32        2      active sync   /dev/sdc
       4       8       48        3      spare rebuilding   /dev/sdd
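
The "spare rebuilding" line above is the behaviour in question. Recovery progress, speed and the estimated finish time can be followed with:

Code:
# shows progress, speed and ETA for all md arrays
cat /proc/mdstat
# or refresh automatically every 60 seconds
watch -n 60 cat /proc/mdstat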

Last edited by g4njawizard; 12-09-2020 at 05:13 AM.
 
Old 12-09-2020, 08:21 AM   #2
michaelk
Moderator
 
This appears to be normal and can be overridden with the --force option.

Have you waited until the array was fully rebuilt to see if the 4th drive became active?

https://marc.info/?l=linux-raid&m=112044009718483&w=2
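
In other words, either let the recovery finish (the fourth disk should then switch from spare to active), or recreate the array non-degraded from the start. A sketch of the latter, reusing the create command from post #1:

Code:
# --force skips mdadm's usual degraded-create-plus-recovery
# optimisation for RAID5: all four disks start out active,
# at the cost of a full initial resync
mdadm --create --verbose --chunk=128 /dev/md/vol1 --level=5 \
      --raid-devices=4 --force /dev/sda /dev/sdb /dev/sdc /dev/sdd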
 
Old 12-09-2020, 09:52 AM   #3
g4njawizard
Member
 
Original Poster
Thanks for the reply. I have been waiting for a few hours now and it's at 45%. I hope disk 4 then becomes active rather than staying a spare; I will report back if it doesn't.

edit: The rebuild has finished. It works now!
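
For anyone finding this later, a quick check that the array really ended up with four active devices and no spare:

Code:
# State should read "clean", with Active Devices : 4
# and Spare Devices : 0
mdadm --detail /dev/md/vol1 | grep -E 'State|Devices'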

Last edited by g4njawizard; 12-09-2020 at 03:10 PM.
 
  



Tags
mdadm, nextcloud, raid5, raspberry pi 4


