Linux - Newbie
This Linux forum is for members that are new to Linux.
Just starting out and have a question?
If it is not in the man pages or the how-to's this is the place!
I've been playing around with mdadm and RAID 6 lately. I created and deleted the array about a dozen times while trying different options, but when I re-created it tonight, I noticed that things were slow. Before, I could copy from my internal SSD to the array at about 120 MB/s. Tonight, I'm getting about 10 MB/s. Could have sworn I did everything the same. I tried the default 512K chunk size, then took it down to 64K, and got the same result. Crap speed.
One thing I noticed that was different: when the array was being created tonight, I got a warning that the primary GPT was not valid but the backup was OK and would be used. I'm going to delete the array again and will wait to see if somebody can offer up some step-by-step instructions so I can get back to square one (100+ MB/s).
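Not gospel, but here's a rough sketch of how I'd tear the array down and rebuild it from scratch so nothing stale survives. The device names (/dev/md0, /dev/sd[b-g]1) and the mount point are assumptions; substitute your own, and note the zap/zero steps destroy existing metadata:

```shell
# Stop the array and wipe the old RAID superblocks from every member
# (device names are examples only -- double-check yours first).
mdadm --stop /dev/md0
mdadm --zero-superblock /dev/sdb1 /dev/sdc1 /dev/sdd1 \
                        /dev/sde1 /dev/sdf1 /dev/sdg1

# Since GPT complained about an invalid primary table, you may want to
# rewrite the partition tables before repartitioning (DESTROYS them):
#   sgdisk --zap-all /dev/sdb    # repeat per disk

# Re-create the RAID 6 array with the default 512K chunk.
mdadm --create /dev/md0 --level=6 --raid-devices=6 --chunk=512K \
      /dev/sd[b-g]1

# Quick sequential-write sanity check once a filesystem is mounted,
# bypassing the page cache so the number means something:
dd if=/dev/zero of=/mnt/raid/testfile bs=1M count=1024 oflag=direct
```

Also keep in mind that a freshly created array is still resyncing in the background (watch /proc/mdstat), and writes will be slow until that finishes.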
To me, your bitmap looks wonky - 1/2 pages seems small. Mine is currently 0/22 pages (RAID 5, three partitions of 3 TB each).
Is there an optimal setting for the bitmap? Ultimately, this setup will have six 4 TB drives in RAID 6. Another question: can this bitmap setting be changed on the fly, or will I have to delete the array and make that change when it's created?
Not sure how this goes, but usually the bitmap is internal and gets created/allocated when the array is created. If things were added or modified piecemeal, it may be a bit off. The bitmap itself is used during recovery (recovery only needs to process the flagged chunks) - but from what I can find, if the bitmap chunk is too small it forces excess writes to the bitmap, which can translate into slow writes (the bitmap may get updated multiple times for each data write).
The following commands are SUPPOSED to remove the internal bitmap, then allocate it again:
Code:
mdadm -G --bitmap=none <md-device>                       # remove the existing bitmap
mdadm -G --bitmap=internal --bitmap-chunk=N <md-device>  # create a new internal bitmap with chunk size N
where N is the bitmap chunk size. You can try leaving out --bitmap-chunk to see if mdadm allocates one based on the storage size. The default is 64MB, but the man page also says "or larger if necessary to fit the bitmap into the available space".
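To sanity-check the bitmap afterwards, and to get a feel for why a small bitmap chunk means more bookkeeping, something like the following. The mdadm/mdstat lines need root and a real array, so they're commented out; the arithmetic below them runs anywhere (the 4 TB member size is just the figure from this thread, and 64 MiB is the documented default chunk):

```shell
# Inspect the array's bitmap state (requires root; /dev/md0, /dev/sdb1
# are examples):
#   cat /proc/mdstat                  # shows "bitmap: X/Y pages"
#   mdadm --examine-bitmap /dev/sdb1  # per-member bitmap details

# Back-of-envelope: how many bitmap chunks one ~4 TB member needs at a
# 64 MiB bitmap chunk. Smaller chunks = more chunks = more bitmap
# updates per burst of writes.
member_kib=$((4 * 1000 * 1000 * 1000))  # ~4 TB expressed in KiB
chunk_kib=$((64 * 1024))                # 64 MiB bitmap chunk, in KiB
echo $((member_kib / chunk_kib))        # → 61035 chunks to track
```

Halving the bitmap chunk doubles that count, which is one plausible reason an oddly-allocated bitmap could drag writes down the way you're seeing.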