Can you rate your nfs performance with this command for me?
Code:
time dd if=/dev/zero of=/path_to_nfs_drive/testfile bs=16k count=16384   # writes a ~256 MB file of zeros over NFS
time dd if=/path_to_nfs_drive/testfile of=/dev/null bs=16k               # reads the same file back
I'm getting about 24 s on the write (the first command)
and about 20 s on the read (second command).
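For the record, here's the arithmetic behind those numbers (16k blocks × 16384 = 268435456 bytes, divided by my measured times):

```shell
# Throughput = bytes transferred / elapsed seconds.
# 16k * 16384 blocks = 268435456 bytes (~256 MiB).
awk 'BEGIN { bytes = 16 * 1024 * 16384
             printf "write: %.1f MB/s\n", bytes / 24 / 1e6    # ~11.2 MB/s
             printf "read:  %.1f MB/s\n", bytes / 20 / 1e6 }' # ~13.4 MB/s
```

So I'm seeing roughly 11 MB/s writes and 13 MB/s reads, an order of magnitude below what gigabit should carry.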
If you're getting anything vastly better than this, can you give me the details of your /etc/exports on the server and the /etc/fstab on the client that connects to it?
I'm running all gigabit (NICs and switch), but getting almost the speed of 100Base-T. I'm trying to see if it's just me or if NFS is really this slow.
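To be concrete about what I'm asking for, this is the general shape of the entry I mean (the hostname, paths, and rsize/wsize numbers here are just placeholders, not my actual setup):

```
# /etc/fstab on the client -- example entry, all values are placeholders
server:/export/share  /mnt/share  nfs  rw,hard,tcp,rsize=32768,wsize=32768  0 0
```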
As I understand it, NFS requires synchronous writes (this was in the original specification, so I don't know if it's changed in the last several years). That means the NFS server has to wait until it's confirmed that the data is written to disk before it can tell the client that the write completed. That adds overhead regardless of how fast the disk you're writing to is, or whether your NFS server's kernel is using a journaling file system. That being said, using faster disks and fs journaling on the server will certainly help.
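For what it's worth, on Linux the synchronous behaviour is set per-export: the async option tells the server to acknowledge writes before the data actually reaches disk, which is faster but can lose data if the server crashes. The path and network below are just examples, not a recommendation:

```
# /etc/exports on the server -- sync is the safe default
/export/share  192.168.1.0/24(rw,sync,no_subtree_check)
# same export with async: faster writes, data-loss risk on a server crash
#/export/share  192.168.1.0/24(rw,async,no_subtree_check)
```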
By the way, 10 MBytes/s is a pretty freakin good transfer rate. You can never reach the rated maximum bandwidth of a network anyway because of protocol overhead, other stray traffic, time to transfer the data out of buffers, etc.
I get 28.791 s on the write (to a 120MB WD SE drive on a PII 350) and 13.383 s on the read (to my fast computer). I notice gkrellm shows occasional peaks (approx. 2/sec) on eth1 at around 40 MB/s, and the read/write is limited by HD I/O. Judging by the relative areas of the gkrellm output, no Ethernet traffic exists for ~90% of the transfer test. I'm testing gigabit cards (Intel Pro/1000) talking through a 10 m CAT-6 cable. Since you have a switch in the line, I wonder what your NICs peak at when they actually send packets?
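One caveat with the dd test itself: without a sync flag, part of what you're timing is the client's page cache rather than the wire. Adding conv=fdatasync makes dd flush the file data before reporting its time, so the numbers reflect what actually reached the server (path is the same placeholder as in the original command):

```shell
# conv=fdatasync forces dd to flush the written data before it reports
# elapsed time, so page-cache effects don't inflate the apparent speed.
# Replace the path with your actual NFS mount.
time dd if=/dev/zero of=/path_to_nfs_drive/testfile bs=16k count=16384 conv=fdatasync
```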