Old 07-17-2023, 04:47 AM   #1
bennyt
Member
 
Registered: Dec 2010
Location: UK
Posts: 69

Rep: Reputation: 9
Slackware64 -current, 6.1.38 huge kernel, slow nvme performance


Hi,

As per the title, I've just updated the OS to a recent -current and my NVMe is reporting something like 23 MB/s.

Code:
fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=64 --size=10G --readwrite=randrw --rwmixread=75
test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64
fio-3.29
Starting 1 process
test: Laying out IO file (1 file / 10240MiB)
Jobs: 1 (f=1): [m(1)][100.0%][r=65.3MiB/s,w=21.5MiB/s][r=16.7k,w=5506 IOPS][eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=5226: Mon Jul 17 08:48:31 2023
  read: IOPS=17.2k, BW=67.1MiB/s (70.4MB/s)(7678MiB/114398msec)
   bw (  KiB/s): min=34888, max=140712, per=100.00%, avg=68813.02, stdev=15255.58, samples=228
   iops        : min= 8722, max=35178, avg=17203.19, stdev=3813.89, samples=228
  write: IOPS=5734, BW=22.4MiB/s (23.5MB/s)(2562MiB/114398msec); 0 zone resets
   bw (  KiB/s): min=12120, max=46648, per=100.00%, avg=22967.66, stdev=5045.73, samples=228
   iops        : min= 3030, max=11662, avg=5741.86, stdev=1261.44, samples=228
  cpu          : usr=4.25%, sys=27.91%, ctx=990533, majf=0, minf=7
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued rwts: total=1965456,655984,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
   READ: bw=67.1MiB/s (70.4MB/s), 67.1MiB/s-67.1MiB/s (70.4MB/s-70.4MB/s), io=7678MiB (8051MB), run=114398-114398msec
  WRITE: bw=22.4MiB/s (23.5MB/s), 22.4MiB/s-22.4MiB/s (23.5MB/s-23.5MB/s), io=2562MiB (2687MB), run=114398-114398msec
I'm not sure what the performance was before the update, but I now cannot run some I/O-intensive apps, so something seems to have got worse.

Any ideas? Thanks.
 
Old 07-17-2023, 05:34 AM   #2
bennyt
Member
 
Registered: Dec 2010
Location: UK
Posts: 69

Original Poster
Rep: Reputation: 9
I'll answer my own post, since the fix may be as straightforward as switching to the generic kernel (with an initial RAM disk; rough steps are sketched after the results below). Performance has improved considerably, although it still seems rather slow. Perhaps there was a driver conflict in the huge kernel...?

Code:
Run status group 0 (all jobs):
   READ: bw=307MiB/s (322MB/s), 307MiB/s-307MiB/s (322MB/s-322MB/s), io=768MiB (805MB), run=2498-2498msec
  WRITE: bw=103MiB/s (108MB/s), 103MiB/s-103MiB/s (108MB/s-108MB/s), io=256MiB (269MB), run=2498-2498msec
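In case it helps anyone else, the switch itself goes roughly like this on Slackware (a sketch only; the kernel version, root device, and filesystem below are assumptions — adjust them to your system):

Code:
# Slackware ships a helper that suggests a suitable mkinitrd line:
/usr/share/mkinitrd/mkinitrd_command_generator.sh -k 6.1.38

# Roughly what it produces (root device and filesystem are assumptions):
mkinitrd -c -k 6.1.38 -f ext4 -r /dev/nvme0n1p2 -m ext4 -o /boot/initrd.gz

# Then point the bootloader at /boot/vmlinuz-generic-6.1.38 and
# /boot/initrd.gz, and rerun lilo if you boot with lilo.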
 
Old 07-17-2023, 06:19 AM   #3
business_kid
LQ Guru
 
Registered: Jan 2006
Location: Ireland
Distribution: Slackware, Slarm64 & Android
Posts: 16,382

Rep: Reputation: 2336
What command are you running? It does seem slow. I have Slackware here (generic 5.15.63):
Code:
bash-5.1$ sudo hdparm -tT /dev/nvme0n1

/dev/nvme0n1:
 Timing cached reads:   59022 MB in  1.99 seconds = 29590.52 MB/sec
 Timing buffered disk reads: 2742 MB in  3.00 seconds = 913.47 MB/sec
bash-5.1$
 
Old 07-17-2023, 06:47 AM   #4
bennyt
Member
 
Registered: Dec 2010
Location: UK
Posts: 69

Original Poster
Rep: Reputation: 9
Per my post:

Code:
$ fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=64 --size=1G --readwrite=randrw --rwmixread=75
On my laptop (different machine) I get very different answers between the two commands:

From fio (as above):
Code:
Run status group 0 (all jobs):
   READ: bw=431MiB/s (452MB/s), 431MiB/s-431MiB/s (452MB/s-452MB/s), io=768MiB (805MB), run=1780-1780msec
  WRITE: bw=144MiB/s (151MB/s), 144MiB/s-144MiB/s (151MB/s-151MB/s), io=256MiB (269MB), run=1780-1780msec
From hdparm:
Code:
/dev/nvme0n1:
 Timing cached reads:   16870 MB in  2.00 seconds = 8451.78 MB/sec
 Timing buffered disk reads: 2340 MB in  3.00 seconds = 779.12 MB/sec
Note that hdparm appears only to measure read speed. I suspect there are big differences between the tests anyway: the fio run above does random 4k reads and writes, while hdparm -t does sequential buffered reads, so the numbers aren't directly comparable.
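For a more like-for-like comparison with hdparm's buffered-read figure, a large-block sequential read in fio would be closer, something like this (a sketch; the size and filename are arbitrary):

Code:
fio --name=seqread --filename=test --ioengine=libaio --direct=1 --bs=1M --iodepth=32 --rw=read --size=1G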
 
Old 07-17-2023, 06:59 AM   #5
bennyt
Member
 
Registered: Dec 2010
Location: UK
Posts: 69

Original Poster
Rep: Reputation: 9
The plot thickens. I just ran the fio test again, and now the write speeds are back down to an apparent 25 MB/s. So I guess it has nothing to do with the kernel...

For the record, on the same machine:
Code:
$ hdparm -tT /dev/nvme0n1

/dev/nvme0n1:
 Timing cached reads:   22674 MB in  1.99 seconds = 11414.79 MB/sec
 Timing buffered disk reads: 3394 MB in  3.00 seconds = 1131.10 MB/sec
 
Old 07-17-2023, 07:31 AM   #6
bennyt
Member
 
Registered: Dec 2010
Location: UK
Posts: 69

Original Poster
Rep: Reputation: 9
OK, finally: I checked the hardware, which looked fine, but I unplugged the NVMe anyway, cleaned everything, and tried again. The output from fio is now:

Code:
test: (groupid=0, jobs=1): err= 0: pid=1362: Mon Jul 17 12:15:40 2023
  read: IOPS=228k, BW=891MiB/s (935MB/s)(768MiB/861msec)
   bw (  KiB/s): min=889888, max=889888, per=97.48%, avg=889888.00, stdev= 0.00, samples=1
   iops        : min=222472, max=222472, avg=222472.00, stdev= 0.00, samples=1
  write: IOPS=76.2k, BW=298MiB/s (312MB/s)(256MiB/861msec); 0 zone resets
   bw (  KiB/s): min=295736, max=295736, per=96.97%, avg=295736.00, stdev= 0.00, samples=1
   iops        : min=73934, max=73934, avg=73934.00, stdev= 0.00, samples=1
  cpu          : usr=15.12%, sys=73.49%, ctx=32030, majf=0, minf=7
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued rwts: total=196498,65646,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
   READ: bw=891MiB/s (935MB/s), 891MiB/s-891MiB/s (935MB/s-935MB/s), io=768MiB (805MB), run=861-861msec
  WRITE: bw=298MiB/s (312MB/s), 298MiB/s-298MiB/s (312MB/s-312MB/s), io=256MiB (269MB), run=861-861msec
So the speed is much more like what I would expect. I'm not sure whether it will remain stable, but I guess this is a hardware issue, so I'll mark the thread as solved!
 
Old 07-17-2023, 07:43 AM   #7
henca
Member
 
Registered: Aug 2007
Location: Linköping, Sweden
Distribution: Slackware
Posts: 988

Rep: Reputation: 674
Quote:
Originally Posted by bennyt View Post
I guess this is a hardware issue so I'll mark as solved!
Were you able to find any traces of the hardware issue in the output of dmesg or in any log files? Something like I/O errors or rereads?
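For example, something along these lines might turn them up (a sketch; on Slackware the kernel also logs to /var/log/messages):

Code:
dmesg | grep -iE 'nvme|i/o error'
grep -i nvme /var/log/messages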

regards Henrik
 
Old 07-17-2023, 08:57 AM   #8
bennyt
Member
 
Registered: Dec 2010
Location: UK
Posts: 69

Original Poster
Rep: Reputation: 9
No - however, it appears to be a temperature problem. The NVMe itself seems to be fine, but it is not well cooled (this is a NUC, with the NVMe sandwiched between an SSD and the main board). When it reaches ~70 C, write speeds drop off dramatically (to around 25 MB/s). I think the hardware is working exactly as it should! I will need to make some modifications for improved cooling.
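For the record, the drive's temperature and throttle counters can be watched while a benchmark runs, e.g. with nvme-cli (a sketch; the device name is an assumption):

Code:
# Poll the composite temperature and thermal-throttle counters every 5 s
watch -n 5 "sudo nvme smart-log /dev/nvme0 | grep -iE 'temperature|thermal'"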
 
2 members found this post helpful.
Old 07-17-2023, 06:04 PM   #9
Jan K.
Member
 
Registered: Apr 2019
Location: Esbjerg
Distribution: Windows 7...
Posts: 773

Rep: Reputation: 489
Not a Corsair MP700 by any chance?

Michael had some fun a while ago... https://www.phoronix.com/news/PCIe5-...Fail-3-Minutes

The MB and/or the NVMe should come with a heatsink? Otherwise there are loads of cooler tests online.
 
Old 07-18-2023, 11:29 AM   #10
dchmelik
Senior Member
 
Registered: Nov 2008
Location: USA
Distribution: Slackware, FreeBSD, Illumos, NetBSD, DragonflyBSD, Plan9, Inferno, OpenBSD, FreeDOS, HURD
Posts: 1,074

Rep: Reputation: 149
Quote:
Originally Posted by business_kid View Post
What command are you running? It does seem slow. I have Slackware here (generic 5.15.63):
Code:
bash-5.1$ sudo hdparm -tT /dev/nvme0n1

/dev/nvme0n1:
 Timing cached reads:   59022 MB in  1.99 seconds = 29590.52 MB/sec
 Timing buffered disk reads: 2742 MB in  3.00 seconds = 913.47 MB/sec
bash-5.1$
I did that and got almost exactly the same speeds.
 
  

