Old 05-25-2008, 12:27 PM   #16
andrewdodsworth
Member
 
Registered: Oct 2003
Location: United Kingdom
Distribution: SuSE 10.0 - 11.4
Posts: 347

Rep: Reputation: 30

That sounds reasonable - CPU usage certainly had an impact on my machine. My main server still has a fairly slow PCI bus (66 MHz, I think), so that may also be a limit: standard 32-bit/33 MHz PCI tops out around 133 MB/s theoretical, so gigabit traffic plus disk I/O can get surprisingly close to saturating it.

I haven't done much tweaking of disks - the hdparm parameters seem to be set pretty well automatically. Rather than tuning your existing disks it would probably be simpler to buy a new one. However, the disk isn't necessarily the cause of your slow NFS speed.
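If you do want to check what hdparm has settled on, something like this works (the device name is just an example - substitute your own drives):

Code:
# replace /dev/hda with your actual drive
hdparm -i /dev/hda      # drive identification / supported modes
hdparm -d /dev/hda      # is DMA on?
hdparm -tT /dev/hda     # rough cached and buffered read speeds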

I know you've done some tweaking of NFS parameters, but I'd have a look at what's happening with NFS at the packet level. It may be that something in the nature of the NFS protocol is stopping it from going faster, or there may be errors or retransmissions, or some other cause entirely. Was NFS faster before you put gigabit in?
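Roughly what I mean by looking at the packet level - nfsstat for retransmission counts on the client, and tcpdump for a short capture you can open in Wireshark afterwards (the interface name and server address below are just placeholders):

Code:
# on the NFS client: RPC call counts and retransmissions
nfsstat -rc

# capture a short burst of NFS traffic during a slow transfer
# (eth0 and 192.168.1.10 are placeholders for your NIC and NFS server)
tcpdump -i eth0 -s 0 -w nfs.pcap host 192.168.1.10 and port 2049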
 
Old 05-27-2008, 01:19 PM   #17
BrianK
Senior Member
 
Registered: Mar 2002
Location: Los Angeles, CA
Distribution: Debian, Ubuntu
Posts: 1,334

Rep: Reputation: 51
Quote:
Originally Posted by stefan_nicolau
Code:
dd if=/dev/sda of=/dev/null bs=1M&
iperf -s
gives 76 MB/s on iperf and 30 MB/s on sda, for a total of 106 MB/s on the bus
Code:
dd if=/dev/hda of=/dev/null bs=1M&
dd if=/dev/sda of=/dev/null bs=1M&
iperf -s
gives 57 MB/s on iperf, 13 MB/s on hda and 27 MB/s on sda, for a total of 97 MB/s on the bus. (Performance is the same when reading from a file rather than from the raw disk.)

So the bottleneck is not on the bus. What I found interesting is that disk performance drops by half under heavy network usage. CPU during the combined iperf/dd run is 70% sys / 0% idle / 30% wait, and dd alone uses 45% CPU, so that's the bottleneck. Maybe I should first look at lowering CPU usage for disk access (is that possible? DMA and ACPI are already on).

But during an NFS operation the CPU is at 25% sys / 10% idle / 65% wait and I only get 18 MB/s. Why is the wait so high if neither device is running at full speed and the CPU is not maxed out?
A thought:
Are there any symlinks on your system drive involved in the path to the drive that is the NFS export? I ask because (IIRC) the symlink is evaluated on every read/write, which would mean your slow system disk gets hit every time you try to read the fast disk.
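A quick way to rule that out is to walk the export path and see whether any component is a symlink (the path here is just an example):

Code:
# namei prints each component of the path; symlinks show up as "l"
namei /exports/fastdisk

# or resolve the whole path in one go
readlink -f /exports/fastdisk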

Regardless, swapping out that *slow* system drive will only help things. For gits and shiggles, maybe load up Knoppix or an Ubuntu live CD and do the same NFS export to see if taking the system drive out of the equation helps.
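If you do try the live-CD route, re-creating the export is only a few lines, assuming the live environment has the NFS server packages installed (the device, mount point and subnet below are placeholders for your setup):

Code:
# mount the data disk and export it to the local subnet
mkdir -p /mnt/data
mount /dev/sda1 /mnt/data
echo "/mnt/data 192.168.1.0/24(rw,sync,no_subtree_check)" >> /etc/exports
exportfs -ra
/etc/init.d/nfs-kernel-server start   # Debian/Ubuntu init script name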
 
  



LinuxQuestions.org > Forums > Linux Forums > Linux - Networking

All times are GMT -5. The time now is 02:56 AM.

Main Menu
Advertisement
My LQ
Write for LQ
LinuxQuestions.org is looking for people interested in writing Editorials, Articles, Reviews, and more. If you'd like to contribute content, let us know.
Main Menu
Syndicate
RSS1  Latest Threads
RSS1  LQ News
Twitter: @linuxquestions
Open Source Consulting | Domain Registration