Slackware: This Forum is for the discussion of Slackware Linux.
I think I have tried every trick in the book, but I just
can't get the upload speed to a -current server above a tenth of what I get
when uploading to a 14.2 server.
Ruled out:
hardware, the kernel (tried 4.4.88 on current), and the usual mount options such as varying values for rsize and wsize (I'm NOT mounting async on either 14.2 or current).
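To be concrete, the client-side mount variations I mean look something like this (server name and paths here are placeholders, not the actual hosts):

```shell
# Hypothetical examples of the mount-option variations tried;
# 'server' and the paths are placeholders.
mount -t nfs -o rw,vers=3,rsize=65536,wsize=65536 server:/export /mnt/nfs
umount /mnt/nfs
mount -t nfs -o rw,vers=4,rsize=1048576,wsize=1048576 server:/export /mnt/nfs
```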
Clarification:
Copying a single large file gives about the same performance.
However, copying e.g. a kernel source tree (~60,000 files, ~700 MB)
takes 13 minutes from a -current system to a 14.2 system
and more than 2 hours the other way, which makes an NFS file server
built on -current almost unusable.
Does anybody else see this regression?
Does anybody have a clue what might have happened?
Thanks in advance
Last edited by rogan; 02-03-2018 at 07:52 AM.
Reason: clarify
Quote: Originally Posted by rogan
Does anybody else see this regression?
Does anybody have a clue what might have happened?
I no longer have a 14.2 NFS server, but I can second that uploading a 700 MB kernel tree to a -current NFS server takes ages. Tested from a -current VirtualBox guest to my -current desktop: about 6-8 hours here. The issue seems to be the many single files, not the 700 MB itself. Transferring a 1.1 GB file from the desktop to the vbox takes about 10 seconds; from the vbox to the desktop, also 10-15 seconds.
P.S. I'm using 4.9.79 as the desktop kernel and 4.14.16 inside the vbox.
Last edited by DarkVision; 02-04-2018 at 01:20 AM.
Quote: Originally Posted by DarkVision
uploading a 700 MB kernel tree to a -current NFS server takes ages. Tested from a -current VirtualBox guest to my -current desktop. The issue seems to be the many single files
I get the feeling something has introduced enormous overhead for file
transfers. I can literally count the files one by one in mc when
copying, even very small ones.
I suspected the retpoline fixes, but those are supposed to be kernel-only,
right? Anyway, I tried different kernels and a number of different
machines on current, but no luck so far.
Quote: Originally Posted by rogan
I get the feeling something has introduced enormous overhead for file transfers. I can literally count the files one by one in mc when copying, even very small ones. I suspected the retpoline fixes, but those are supposed to be kernel-only, right? Anyway, I tried different kernels and a number of different machines on current, but no luck so far.
rogan --
I wonder if you're not onto something there?
My -current box is sorely out of date ...
Do you see the same speed regression when you copy the same set of files via rsync, scp, or ftp?
No errors or warnings in the logs? (/var/log/messages, /var/log/syslog, /var/log/nfsd/*)
I don't run current, but it seems there is a new NFS configuration file in /etc/default (it sets default options); maybe worth a look?
Quote: Originally Posted by kjhambrick
Do you see the same speed regression when you copy the same set of files via rsync, scp, or ftp?
-- kjh
I cannot say for sure. I'm going to test that asap.
keefaz: Not a single warning anywhere in any of the logs I checked.
That would have been too easy I guess ... Slacking is real hard sometimes
Uploading via scp is fast as hell:
it took maybe 30 seconds for the whole linux-4.15.1 file tree.
sftp _and_ ftp are about 100 times faster than current NFS on small files.
I think about as fast as my RAID1 spinning hard drives can deliver.
rogan --
I set aside Slackware64 14.2 and installed the latest Slackware64 current on an older Laptop on Sunday Morning.
I copied over my configs but I've not had time to set up nfs for testing yet.
Kinda like you, I found that rsync-over-ssh simply screams when I copied my local Slackware Current and SBo repos from my Slackware64 14.2 Laptop to the new Slackware64 current Laptop.
Before I mess around with nfs testing, did you figure out any issues / workarounds with your system or nfs configs ?
Thanks,
-- kjh
p.s. I thought I posted this earlier but it's not here ... must have pressed [Back] button while previewing. If you see a dupe, sorry ... I'll delete it ...
I have not found anything in particular. The tests I did suggested that, no matter the
mount options, I could not avoid this strange per-file overhead when copying to the server.
All tests were done on an ext4 file system mounted defaults,nodev,nosuid (as usual).
Since I am in the process of replacing all file systems with btrfs (for other reasons)
I'm going to do some more testing on exported btrfs volumes.
The sync option, which has been the default since nfs-utils 1.0.0 (14.2 has 1.3.3),
seems to have a tremendous effect on current. If I export from the server with (rw,async) I can
copy a kernel file tree to it in less than 3 minutes (without it, in no less than 2 hours).
Somehow, it seems, sync has increased its cost by at least 40 times.
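In /etc/exports terms, the whole difference is just that one flag on the export (the path and network below are placeholders, not my actual setup):

```shell
# /etc/exports on the -current server; path and network are placeholders.
# Slow: each write is committed to stable storage before the reply:
/export 192.168.1.0/24(rw,sync,no_subtree_check)
# Fast, but risks data loss if the server crashes mid-write:
/export 192.168.1.0/24(rw,async,no_subtree_check)
# after editing, re-export:
exportfs -ra
```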
Interesting data. A kernel source directory contains a large number of small files; it would have been cool to test copying just one large file as well.
But thanks anyway (I am surprised by the speed difference of rsync vs scp)
Preliminary tests seem to suggest that the file system
on the exported volume matters, but not enough to be a 'solution'.
I'm going to run some more tests to see which file
system has the heaviest penalty for the sync op.
Maybe we can learn something from it...
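The kind of harness I have in mind is roughly this (the device, mount point, and source tree are placeholders; the mkfs line is destructive):

```shell
# Sketch of the per-filesystem test: re-export the same directory from
# different filesystems and time the same small-file copy each run.
# /dev/sdX1 and the paths are placeholders.
for fs in btrfs ext4 ext2 xfs jfs; do
    mkfs.$fs /dev/sdX1                 # DESTROYS DATA on the device
    mount /dev/sdX1 /srv/nfs-test
    exportfs -o rw,sync '*:/srv/nfs-test'
    # then on the client:
    #   time cp -r /mnt/hd/linux-4.13.1/Documentation /mnt/tmp/...
    exportfs -u '*:/srv/nfs-test'
    umount /srv/nfs-test
done
```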
kjhambrick: If the server goes down during a write,
or if a client A expects to find a file written by a client B
when it thinks the write is done, bad things happen...
I'd think sync is necessary in all 'serious' circumstances.
keefaz: rsync uses all kinds of trickery to speed up copying;
nothing I have seen even comes close. It's a brilliant piece of software.
The sync cost is per file, and since the bandwidth is still there,
there's no particular slowdown for really large single files.
However, imagine using a 'modern' web browser on an NFS-mounted user home,
only able to write one file per second; probably not a nice experience.
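The per-file cost is easy to illustrate even locally, without NFS: an fsync after every small file is roughly what a sync export forces on the server, and it dwarfs the writes themselves. A made-up local demo (directory name is arbitrary):

```shell
#!/bin/sh
# Local illustration of per-file sync cost (not NFS itself):
# write 200 tiny files buffered, then 200 more with an fsync each,
# which is roughly what a sync export forces for every file.
dir=/tmp/syncdemo
mkdir -p "$dir" && cd "$dir" || exit 1
echo "buffered:"
time sh -c 'i=0; while [ $i -lt 200 ]; do printf data > buf.$i; i=$((i+1)); done'
echo "fsync per file:"
time sh -c 'i=0; while [ $i -lt 200 ]; do
              dd if=/dev/zero of=sync.$i bs=512 count=1 conv=fsync 2>/dev/null
              i=$((i+1)); done'
ls | wc -l   # 400 files
```

On spinning disks the second loop is typically orders of magnitude slower, for the same amount of data.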
Quote: Originally Posted by keefaz
Interesting data. A kernel source directory contains a large number of small files; it would have been cool to test copying just one large file as well. But thanks anyway (I am surprised by the speed difference of rsync vs scp)
keefaz --
See below.
More tests coming after I build multilib versions gcc and glibc for today's updates on my 14.2 Laptop.
Googling the 'grace period' and 'stable storage' errors I see in /var/log/syslog, the 'fix' might be to build and install nfs-utils-2.3.1 on Slackware 14.2 ???
Code:
Feb 7 07:13:15 samsung kernel: [ 8677.550991] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
Feb 7 07:15:18 samsung kernel: [ 8800.735383] NFSD: Unable to end grace period: -110
Feb 7 07:15:51 samsung kernel: [ 8833.503364] NFSD: Unable to create client record on stable storage: -110
-- kjh
Code:
# ----------------------------------------------------------------------
# WORKS: one huge file ; server-side defaults ; client-side async
# ----------------------------------------------------------------------
#
# cli: tar -cvf /opt/tmp/linux-4.4.115.kjh.tar linux-4.4.115.kjh
# cli: ls -la /opt/tmp/linux-4.4.115.kjh.tar
-rw-r--r-- 1 root root 656046080 Feb 7 06:17 /opt/tmp/linux-4.4.115.kjh.tar
#
# cli: umount /mnt/nfs-sam
# srv: rm -rf /opt/export/linux-4.4.115.kjh/
# srv: grep -v '^#' /etc/exports
/opt/export 192.168.0.0/24(rw,subtree_check,no_root_squash)
# srv: /etc/rc.d/rc.nfs restart
# cli: mount.nfs srv:/opt/export /mnt/nfs-sam -o 'rw,vers=4,async'
# srv: while [ 1 = 1 ] ; do echo "" ; date ; ls -la linux-4.4.115.kjh ; sleep 30 ; done
# cli: time cp -pR /opt/tmp/linux-4.4.115.kjh.tar /mnt/nfs-sam/
# ----------------------------------------------------------------------
Wed Feb 7 06:26:58 CST 2018
/bin/ls: cannot access 'linux-4.4.115.kjh.tar': No such file or directory
Wed Feb 7 06:27:28 CST 2018
---------- 1 root root 0 Dec 28 1979 linux-4.4.115.kjh.tar
<<snip>> ---------------------------------------------------------------
Wed Feb 7 06:28:28 CST 2018
-rw------- 1 root root 625999872 Feb 7 06:28 linux-4.4.115.kjh.tar
Wed Feb 7 06:28:58 CST 2018
-rw-r--r-- 1 root root 656046080 Feb 7 06:17 linux-4.4.115.kjh.tar
real 1m28.382s
user 0m0.000s
sys 0m0.348s
# ----------------------------------------------------------------------
# WORKS: one huge file ; server and client side defaults (sleep 10)
# ----------------------------------------------------------------------
#
# cli: umount /mnt/nfs-sam
# srv: rm linux-4.4.115.kjh.tar
# srv: grep -v '^#' /etc/exports
/opt/export 192.168.0.0/24(rw,subtree_check,no_root_squash)
# srv: /etc/rc.d/rc.nfs restart
# cli: mount.nfs srv:/opt/export /mnt/nfs-sam -o 'rw,vers=4'
# srv: while [ 1 = 1 ] ; do echo "" ; date ; ls -la linux-4.4.115.kjh ; sleep 10 ; done
# cli: time cp -pR /opt/tmp/linux-4.4.115.kjh.tar /mnt/nfs-sam/
# ----------------------------------------------------------------------
Wed Feb 7 06:33:38 CST 2018
---------- 1 root root 0 Jan 2 1980 linux-4.4.115.kjh.tar
Wed Feb 7 06:33:48 CST 2018
---------- 1 root root 0 Jan 2 1980 linux-4.4.115.kjh.tar
Wed Feb 7 06:33:58 CST 2018
---------- 1 root root 0 Jan 2 1980 linux-4.4.115.kjh.tar
Wed Feb 7 06:34:08 CST 2018
-rw------- 1 root root 7340032 Feb 7 06:34 linux-4.4.115.kjh.tar
<<snip>> ---------------------------------------------------------------
Wed Feb 7 06:34:58 CST 2018
-rw------- 1 root root 595591168 Feb 7 06:34 linux-4.4.115.kjh.tar
Wed Feb 7 06:35:08 CST 2018
-rw-r--r-- 1 root root 656046080 Feb 7 06:17 linux-4.4.115.kjh.tar
real 1m26.770s
user 0m0.000s
sys 0m0.348s
# ----------------------------------------------------------------------
# clean up
# ----------------------------------------------------------------------
#
# cli: umount /mnt/nfs-sam
# cli: rm /opt/tmp/linux-4.4.115.kjh.tar
# srv: rm /opt/export/linux-4.4.115.kjh.tar
Ok...
I've done some testing also. Since sync has become so damn expensive on 'current',
maybe the file system on the exported directory has something to do with it?
Test:
The server: AMD 8320, 8 GB RAM, Slackware 'current' (who runs Intel these days)...
The client: AMD 9590, 32 GB RAM, Slackware 14.2
Both operating systems are up to date within a few days. The NIC is gigabit on a gigabit LAN.
Copy from a regular spinning hard drive dedicated to this use, to an NFS-exported
directory from 'current' on another spinning hard drive dedicated to this test.
Sync/async is declared in /etc/exports. Mounts were done with default options (none given).
On the client: time cp -r /mnt/hd/linux-4.13.1/Documentation /mnt/tmp/rogan/tmp/
There were some cache effects, but completely insignificant in comparison.
These are user times averaged over three runs and rounded:
btrfs:
sync: 3 minutes
async: 10 seconds
ext4:
sync: 8 minutes
async: 10 seconds
ext2:
sync: 1 minute 40 seconds
async: 10 seconds
xfs:
sync: 5 minutes
async: 10 seconds
jfs:
sync: 25 seconds
async: 20 seconds
If I recall correctly, JFS has 'delayed writes'. It certainly works for NFS.
If one wanted, one could mention something about death by creeping featuritis,
but I would never do that
Last edited by rogan; 02-07-2018 at 02:37 PM.
Reason: no particular