possibly poor disk performance after Slackware 15.0 upgrade
Slackware: This forum is for the discussion of Slackware Linux.
I might try to unzip the kernel-firmware some, but I'm afraid that the disk caching might change the result.
the3dfxdude --
The official kernel-firmware.SlackBuild is non-destructive in that it simply does a `git clone` into a temp directory and then sets up /tmp/package-kernel-firmware for `/sbin/makepkg`.
All I did to set up my own kernel-firmware.SlackBuild was copy the entire ~/source/a/kernel-firmware/ directory into my local SlackBuild directory, cd into kernel-firmware/ under my directory, and run kernel-firmware.SlackBuild.
The kernel-firmware.SlackBuild automatically downloads 'the latest' kernel firmware and makes a date-stamped kernel-firmware package as /tmp/kernel-firmware-YYYYMMDD-xxxxxxx-noarch.1.txz, where xxxxxxx is a unique id.
The delays that I see are when makepkg executes tar from within /tmp/package-kernel-firmware/.
I suspected that my NVMe Drives needed tuning but then again, maybe there is an issue with my generic .config or even in ext4 for my use-case ???
-- kjh
Last edited by kjhambrick; 12-16-2022 at 03:38 PM.
Reason: directory
My test case for poor disk performance on 5.15.x was extracting the linux kernel source code with the tar command:
tar xvf linux-x.y.z.tar.xz
This ran abnormally and very slowly on 5.15.x; the disks (HDDs) would grind with very high disk activity but make little progress on writing the decompressed files (only kB/s write speeds). The grinding and slowdown were really bad, and I would press Ctrl-C (or killall tar) to kill it after testing for a few seconds; letting it grind too long could damage hard drives, or maybe even the disk controller, by overheating. On 5.10.x, the tar command is normal/fast: the files decompress rapidly into cache RAM, and a few seconds later they are flushed to disk quickly and smoothly in the background. I do not have NVMe/SSDs, but I would guess that the slowdown is harder to notice on SSDs because they are much faster than HDDs and they are quiet (you won't hear grinding like on HDDs).
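One way to watch that writeback behavior while the extraction runs is to sample the kernel's Dirty and Writeback counters. This is just a sketch (the tarball name is a placeholder, and the loop count is arbitrary); run the tar in one terminal and this in another:

```shell
# While `tar xf linux-x.y.z.tar.xz` runs in another terminal, sample
# the page-cache counters a few times. On a kernel with the slowdown,
# Writeback should stay high for long stretches while actual write
# throughput stays low.
for i in 1 2 3; do
    grep -E '^(Dirty|Writeback):' /proc/meminfo
    sleep 1
done
```

Watching the two numbers next to `iostat -x 1` output makes it easier to tell whether the pages are piling up in RAM or the device itself is the bottleneck.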
I'm now testing 5.10 per your suggestion (specifically 5.10.159, the latest). I still need to do some serious benchmarking, but I've done the following:
Code:
$ dd if=/dev/sda of=/tmp/footest bs=1M count=400
400+0 records in
400+0 records out
419430400 bytes (419 MB, 400 MiB) copied, 24.7656 s, 16.9 MB/s
$ dd if=/dev/sda2 of=/disk/adrian/footest bs=1M count=400
400+0 records in
400+0 records out
419430400 bytes (419 MB, 400 MiB) copied, 20.4501 s, 20.5 MB/s
/disk is an external USB hard drive.
Unfortunately I don't have numbers from my previous Slack setup.
I also built a 4.19.269 kernel, and I'm now looking for the latest 3.x kernel to test.
The scheduler seems correct for my NVMe drives on Slackware64 15.0 (see below my sig)...
But I was considering messing with the scheduler too.
In the meantime, this ArchLinux EXT4 Article is informative and interesting.
The last topic, 4.3 'Enabling fast_commit in existing filesystems', was discussed in depth in January 2021 on LWN: 'Fast commits for ext4'.
From scanning the info, fast_commit was introduced in 5.10. It is an optional feature for EXT4 filesystems via mke2fs, but it can be retro-enabled on existing EXT4 filesystems via tune2fs.
In addition, fast_commit and fast_commit_size are mentioned in `man tune2fs` for e2fsprogs-1.46.5-x86_64-1 on Slackware64 15.0, so it should be available for me and mine.
I am not sure about availability on 32-bit systems, but fast_commit is only available on journaled EXT4 filesystems.
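For reference, retro-enabling it would look something like the following. This is a non-runnable sketch: /dev/sdXN is a placeholder for the real partition, and the filesystem should typically be unmounted before changing journal-related features.

```shell
# Placeholder device; substitute the real partition.
DEV=/dev/sdXN
# Check whether fast_commit is already in the feature list:
dumpe2fs -h "$DEV" | grep -w features
# Retro-enable it (journaled ext4 only, filesystem unmounted):
tune2fs -O fast_commit "$DEV"
```

As always with tune2fs feature changes, a backup first is cheap insurance, and older kernels that lack fast_commit support may refuse to mount the filesystem read-write afterward.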
Recalling all the hubbub about fsync() on Linux several years ago, fast_commit might be the answer to my issues ...
More to read before I change anything but I'll follow up when I know more.
Thanks again !
-- kjh
P.S. I meant to ask: does anybody know offhand the meaning of the two fields?
And the difference between [none] mq-deadline -vs- [mq-deadline] none ?
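On the bracket question: in /sys/block/<dev>/queue/scheduler the brackets mark the currently active scheduler, so `[none] mq-deadline` means none is active and mq-deadline is merely available, while `[mq-deadline] none` is the reverse. A small sketch that pulls the active one out of such a line (the sample strings here are illustrative, not read from a real device):

```shell
# The bracketed entry in queue/scheduler is the active scheduler.
# Extract it from a scheduler line with sed:
active_sched() {
    printf '%s\n' "$1" | sed -n 's/.*\[\([^]]*\)\].*/\1/p'
}
active_sched '[none] mq-deadline'    # prints: none
active_sched '[mq-deadline] none'    # prints: mq-deadline
```

On a live system the same idea reads the file directly, e.g. `active_sched "$(cat /sys/block/sda/queue/scheduler)"`.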
Note: sda references an NVMe drive in an external USB 3.2 enclosure connected via a Thunderbolt 4 dock. sdb is a thumb drive.
It is possible to set the rotational property and the bfq scheduler with a udev rule:
Code:
$ cat /etc/udev/rules.d/60-ioschedulers.rules
# set bfq scheduler for rotational disks
ACTION=="add|change", KERNEL=="sd[a-z]*", ATTR{queue/rotational}=="1", ATTR{queue/scheduler}="bfq"
Thanks for this tip!
Just to double check that I understand this right:
Do you want to force the rotational property to 1 for bfq to perform best or does the udev rule above select the bfq scheduler only for rotational disks? I am no wizard when it comes to writing udev rules, but my guess would be the latter with the above udev rule.
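Your guess looks right to me: `ATTR{queue/rotational}=="1"` uses the `==` match operator, so the rule only assigns bfq to devices the kernel already reports as rotational; it does not force the attribute. A sketch for checking this on a live system (device names are illustrative, and udevadm needs root for some of it):

```shell
# Does the kernel consider the disk rotational? 1 = spinning, 0 = SSD.
cat /sys/block/sda/queue/rotational
# Dry-run udev's rule processing for the device and look for the
# scheduler assignment in the output:
udevadm test /sys/class/block/sda 2>&1 | grep -i sched
```

If a rule were meant to force the attribute, it would use a single `=` assignment (`ATTR{queue/rotational}="1"`) instead of the `==` comparison.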
Interesting performance info on 4.14 vs 5.15, scuzzy_dog.
Are you running a 32-bit or a 64-bit System ?
Running 15.0 16 bit.
One of the things that is also bothering me about 5.X is the 'Code of Conduct' committee ruining Linux and the fact that people not qualified to even jack with the kernel are being brought on board in the name of diversity. It's a sad situation. Reading some of the stuff on the LKML (Linux Kernel Mailing List) is truly shocking.
I'm all about merit regardless of race, religion, or any other opinions. Diversity just for the sake of diversity ruins things. I know this first hand from working as a contractor in companies that have bought into all of that.
Can you please PM me links to discussions about this? It's been many years since i unsubscribed from the LKML.
lol - 64 bit. I was playing with my kid while trying to type.
It would take quite a while to find the links. Try doing a search for 'Coraline Ada Ehmke' and also 'Linus Torvalds and code of conduct conflict', and 'kernel discussion on banned words.'
Here's the 'Manifesto' written by 'Coraline Ada Ehmke' where Meritocracy is no longer valued or even needed: https://postmeritocracy.org/
The 'new guys' being brought in want to do stuff like change the words master, slave, blacklist, whitelist, and lots of other nonsense because people that read code might be offended. And some just can't write code, but hey, we need diversity. Good grief. Torvalds even stepped away from kernel development for a short time after getting pissed off at the new Code of Conduct Committee while it was being run by 'Coraline'.
BTW: Don't use wikipedia as your source of info.
Last edited by scuzzy_dog; 12-20-2022 at 08:29 AM.