Old 01-05-2019, 01:03 PM   #1
rogan
Member
 
Registered: Aug 2004
Distribution: Slackware
Posts: 216

Rep: Reputation: 117
ext4 incredible performance hit.


I just made an unpleasant discovery while trying to compile stuff on one disk
while copying some big files from a USB drive to another disk. Everything
except the USB drive was formatted with ext4. The compile job almost halted
during the copying; top showed iowait eating 70-80% of the CPU time.
Testing on a different machine using only btrfs gave 0% iowait, which is of
course to be expected, since the compilation was totally unrelated to the
copying and was done on a different drive. I then reformatted the drives with
btrfs and the problem was completely gone.

This problem is present on current (as of today).
I don't have access to a machine with a clean 14.2 install,
so perhaps someone else can verify whether it is a problem there as well?
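
For anyone reproducing this, a couple of standard ways to watch iowait from another terminal while the copy runs (iostat comes from the sysstat package, so its presence is an assumption; exact output will vary):
Code:
# CPU-wide iowait once per second -- look at the "wa" column.
vmstat 1

# Per-device utilisation and wait times, if sysstat is installed.
iostat -x 1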
 
Old 01-05-2019, 06:17 PM   #2
Richard Cranium
Senior Member
 
Registered: Apr 2009
Location: McKinney, Texas
Distribution: Slackware64 15.0
Posts: 3,858

Rep: Reputation: 2225
Umm, we'll need more details if you want something other than an "apples to aardvarks" comparison.
 
Old 01-05-2019, 07:34 PM   #3
syg00
LQ Veteran
 
Registered: Aug 2003
Location: Australia
Distribution: Lots ...
Posts: 21,140

Rep: Reputation: 4122
At the very least you would have to reformat everything back to ext4 on the "problem" machine and re-run. A freshly formatted filesystem is a very different beast to one that has been in use for some time.
There are known issues with USB and large copies, but those are at the VFS level, so they should affect both filesystems similarly. But there are too many variables - especially partitions/subvolumes and caching.
The last time I ran some kernel traces there wasn't a noticeable difference between the filesystems, but that was stand-alone testing, no USB entanglements. It was also quite a while ago - and, it should be noted, not on Slackware.
 
Old 01-05-2019, 09:28 PM   #4
khronosschoty
Member
 
Registered: Jul 2008
Distribution: Slackware
Posts: 648
Blog Entries: 2

Rep: Reputation: 514
I don't know if it's just in my mind or what (I didn't do extensive testing), but I also think I've experienced ext4 issues recently, and as a consequence I've moved over to jfs. I'm very pleased with jfs and intend to stay there.
 
Old 01-05-2019, 10:55 PM   #5
rogan
Member
 
Registered: Aug 2004
Distribution: Slackware
Posts: 216

Original Poster
Rep: Reputation: 117
Richard Cranium: Both systems are AMD eight-core FX machines with 16GB or more of main memory. I'm using regular rotating drives connected over SATA (AHCI).
The drives were mounted with default options and were not used for anything else at the time.
syg00: These were newly formatted (non-SSD) disk drives. The problem isn't low performance for the copy itself. The problem is that everything
else done on the machine almost grinds to a halt while copying large files. It's just like in the old days when we only had PIO.

So if anyone using "current" or 14.2 has a spare drive and some time at hand...
 
Old 01-06-2019, 01:36 AM   #6
rogan
Member
 
Registered: Aug 2004
Distribution: Slackware
Posts: 216

Original Poster
Rep: Reputation: 117
OK, so after some more testing it turns out that the thread subject is somewhat misleading,
as this issue is about the same with ext4, xfs and jfs.
The ones I've tried so far that do not exhibit this behaviour are btrfs and reiserfs.
 
Old 01-06-2019, 01:51 AM   #7
ZhaoLin1457
Senior Member
 
Registered: Jan 2018
Posts: 1,025

Rep: Reputation: 1214
The issue is not about any particular filesystem; it is a known misbehavior of the Linux kernel with certain I/O schedulers, especially CFQ, which is also the default used by Slackware.

Some links discussing this:
https://blog.vacs.fr/vacs/blogs/post...-are-performed
https://bugs.launchpad.net/ubuntu/+s...ux/+bug/131094
https://www.reddit.com/r/archlinux/c...izing_desktop/
https://blog.codeship.com/linux-io-scheduler-tuning/

From what I read, the consensus is that it is much better to use deadline for SSDs and BFQ for mechanical hard drives, together with block multi-queue (blk-mq).

Personally, I have this added in "/etc/rc.d/rc.modules.local", for lack of a better, earlier "local" script:
Code:
# Setup the DeadLine I/O scheduler for SSDs.
echo deadline | /bin/tee /sys/block/sda/queue/scheduler 1> /dev/null 2> /dev/null

# Setup the BFQ I/O scheduler for mechanical hard drives.
/sbin/modprobe bfq

#echo bfq | /bin/tee /sys/block/sd*/queue/scheduler 1> /dev/null 2> /dev/null

echo bfq | /bin/tee /sys/block/sdb/queue/scheduler 1> /dev/null 2> /dev/null
echo bfq | /bin/tee /sys/block/sdc/queue/scheduler 1> /dev/null 2> /dev/null
and this on the kernel command line:
Code:
scsi_mod.use_blk_mq=1
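On a stock Slackware install booting with LILO, that parameter would typically go on the append line in /etc/lilo.conf and take effect after re-running lilo; the exact entry below is an assumption about the boot setup (other boot loaders have their own equivalent):
Code:
# /etc/lilo.conf -- enable blk-mq for the SCSI/SATA layer (assumed LILO boot)
append = "scsi_mod.use_blk_mq=1"
# then re-install the boot loader:
# lilo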

Last edited by ZhaoLin1457; 01-06-2019 at 02:23 AM.
 
5 members found this post helpful.
Old 01-06-2019, 02:50 AM   #8
syg00
LQ Veteran
 
Registered: Aug 2003
Location: Australia
Distribution: Lots ...
Posts: 21,140

Rep: Reputation: 4122
Most of that looks sane, even if some of the references are (very) old.
I see in my notes from Dec 2017 that enabling blk_mq broke resume on this 2010 i7 laptop with spinning hard disk. More recent laptops with SSD didn't show the same problem. Haven't re-tested, and won't as this laptop is due for the bin.
 
Old 01-06-2019, 03:07 AM   #9
ZhaoLin1457
Senior Member
 
Registered: Jan 2018
Posts: 1,025

Rep: Reputation: 1214
Quote:
Originally Posted by syg00 View Post
even if some of the references are (very) old.
I said that this is a well-known misbehavior of Linux I/O scheduling, and I did not claim that the links given are the only ones describing the issue. There are tons of links like this on Google, at least several thousand.

And as of today, at least, I have had no issue with blk_mq and hibernation on my boxes, whether they have SSDs or not.
 
1 member found this post helpful.
Old 01-06-2019, 05:32 AM   #10
nobodino
Senior Member
 
Registered: Jul 2010
Location: Near Bordeaux in France
Distribution: slackware, slackware from scratch, LFS, slackware [arm], linux Mint...
Posts: 1,564

Rep: Reputation: 892
While building SFS, I can see this performance hit:
--------------------------------
Regression test on slackware64-current up to "Fri Nov 2 01:21:12 UTC 2018": no difference, in one shot.
----------------
real 1075m38.299s
user 3429m56.597s
sys 267m18.171s
You have mail in /var/mail/root
----------------
Today's regression test hasn't finished yet: after about 17 hours (nearly the total time of the run at the end of 3 Nov 2018) it was still building rust in build3_s.list, and it will take at least 5 or 6 more hours.
That will be more than 25% longer to compile everything.

Last edited by nobodino; 01-06-2019 at 05:33 AM.
 
Old 01-06-2019, 05:37 AM   #11
rogan
Member
 
Registered: Aug 2004
Distribution: Slackware
Posts: 216

Original Poster
Rep: Reputation: 117
Thanks for the replies, all.
In case anyone's interested, I spent most of the morning gathering some data:

Compilation: linux-4.19.13, defconfig, time make -j16, on a 2 TB spinning Seagate 7200 rpm disk drive.

Unloaded: ext4: 3m57s, xfs: 3m59s, reiserfs: 4m10s, btrfs: 3m59s.

Performing the above compilation while copying large files between other unrelated drives:
ext4 lazyinit: 12m36s
ext4 init done: 6m26s, xfs: 1h+, reiserfs: 4m47s, btrfs: 4m49s

xfs is really a disaster in this situation; the computer would lock up and become unresponsive for seconds
when switching between VTs.
I tried different file systems and drives (USB, SATA, SSD, ...) for the copying, but that did not seem to matter.
Since this seems to be a known issue (that I was unaware of), I'm marking this thread as solved.
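
For anyone who wants to repeat the comparison, the test boils down to roughly the sketch below; the mount points and the source of the large files are assumptions, not the exact commands used:
Code:
# On the drive under test (reformatted as ext4/xfs/reiserfs/btrfs in turn):
cd /mnt/test/linux-4.19.13
make defconfig
time make -j16

# Meanwhile, from another terminal, generate the competing I/O load by
# copying large files between two unrelated drives:
cp /mnt/source/big-file-*.img /mnt/other/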
 
Old 01-07-2019, 02:55 AM   #12
bassmadrigal
LQ Guru
 
Registered: Nov 2003
Location: West Jordan, UT, USA
Distribution: Slackware
Posts: 8,792

Rep: Reputation: 6656
Quote:
Originally Posted by ZhaoLin1457 View Post
Personally, I have this added on "/etc/rc.d/rc.modules.local" - because the lack of a better earlier "local" script
Code:
# Setup the DeadLine I/O scheduler for SSDs.
echo deadline | /bin/tee /sys/block/sda/queue/scheduler 1> /dev/null 2> /dev/null

# Setup the BFQ I/O scheduler for mechanical hard drives.
/sbin/modprobe bfq

#echo bfq | /bin/tee /sys/block/sd*/queue/scheduler 1> /dev/null 2> /dev/null

echo bfq | /bin/tee /sys/block/sdb/queue/scheduler 1> /dev/null 2> /dev/null
echo bfq | /bin/tee /sys/block/sdc/queue/scheduler 1> /dev/null 2> /dev/null
and in the kernel command line
Code:
scsi_mod.use_blk_mq=1
You can do this using udev rules stored in /etc/udev/rules.d and you can specify whether a disk is rotational (HDD) or not (SSD). I have the following to set my SSD to deadline.

Code:
jbhansen@craven-moorhead:~$ cat /etc/udev/rules.d/55-ssd-scheduler.rules
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="0", ATTR{queue/scheduler}="deadline"
You could modify it to set BFQ for your HDDs (the number at the beginning dictates when it will be run... 55 seemed to be a common number for setting the scheduler -- then the file just needs to end in .rules for udev to pick it up). This would ensure that all HDDs and all SSDs use your preferred scheduler even if you add or remove drives or if the drive designators change.

Code:
# SSDs
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="0", ATTR{queue/scheduler}="deadline"

# HDDs
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="1", ATTR{queue/scheduler}="bfq"
Then to load the bfq module, you can add it to /etc/rc.d/rc.modules.local
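Something along these lines should do it (the exact entry is an assumption); after a reboot, or after running "udevadm trigger", you can check which scheduler is active, shown in brackets:
Code:
# /etc/rc.d/rc.modules.local -- load BFQ so the udev rule can select it
/sbin/modprobe bfq

# verify afterwards; the scheduler in [brackets] is the active one
cat /sys/block/sd*/queue/scheduler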
 
3 members found this post helpful.
Old 01-07-2019, 01:47 PM   #13
Fred-1.2.13
Member
 
Registered: Jan 2006
Location: Midwest USA
Distribution: Started with Slackware - 3.0 1995 Kernel 1.2.13 - Now Slackware Current. Also some FreeBSD.
Posts: 124

Rep: Reputation: 59
Deleted wrong thread...
 
Old 01-07-2019, 02:13 PM   #14
rogan
Member
 
Registered: Aug 2004
Distribution: Slackware
Posts: 216

Original Poster
Rep: Reputation: 117
Just to clarify:
The problem in itself is not solved, and fiddling with schedulers or some such
does not noticeably improve anything in my case.
The good news is that a stock 14.2 install is unaffected as far as I can tell; at least
xfs works just as well as btrfs or reiserfs in the above scenario.
I'll try with some newer kernel versions in 14.2 to see if that changes anything.
 
Old 01-07-2019, 03:44 PM   #15
bassmadrigal
LQ Guru
 
Registered: Nov 2003
Location: West Jordan, UT, USA
Distribution: Slackware
Posts: 8,792

Rep: Reputation: 6656
As far as I can tell, this doesn't seem to be a kernel problem, since I'm running a self-compiled 4.19.12 (using Pat's config) on 14.2 and haven't noticed any issues when copying things via USB on my ext4 disks (NVMe, SSD, and HDD).
 
1 member found this post helpful.
  

