LinuxQuestions.org
Old 09-30-2007, 09:21 PM   #1
entz
Member
 
Registered: Mar 2007
Location: Milky Way , Planet Earth!
Distribution: Opensuse
Posts: 453
Blog Entries: 3

Rep: Reputation: 40
Making networking more efficient


Greetings,

It has been some time since I last posted, but I've come back with an interesting issue.

I've been thinking about the best way to transmit data over the network; specifically, how to get the most out of the machine when processing a stream before you send it.

In more detail: suppose you have two or more separate buffers and you want to "stream" them over the net using the famous BSD API (you know: socket(), bind(), send(), etc.). The problem is: how do you send those two or more buffers as efficiently as possible?

Probably the simplest way would be to copy the buffers into a larger serial buffer that holds them all, then keep calling send() until all bytes are sent (let's call this the linear approach).

But as you can instinctively tell, this method is very easy but certainly not the best, because you burn a lot of CPU time copying data from point A in memory to point B.

So I was wondering whether there is a way to send a packet that originates from more than one buffer in memory. Imagine a send()-like function that took not a buffer and its length as arguments, but an array of character buffers (i.e. a char**) plus two further arguments (an int and an int*) specifying the number of input arrays and the length of each one, respectively. Something like that would be great.

Alright then, I'm eager to hear from you how to handle this problem in as performance-tuned a way as possible.

Cheers.
 
Old 10-01-2007, 03:05 AM   #2
chrism01
Guru
 
Registered: Aug 2004
Location: Sydney
Distribution: Centos 6.6, Centos 5.10
Posts: 16,324

Rep: Reputation: 2041
Better to generate your data into an array of buffers, then loop over the array, calling send() (or use a linked list if you need an indeterminate number of buffers).
 
Old 10-01-2007, 09:44 AM   #3
entz
Member
 
Registered: Mar 2007
Location: Milky Way , Planet Earth!
Distribution: Opensuse
Posts: 453
Blog Entries: 3

Original Poster
Rep: Reputation: 40
Hi Chrism01,

Actually, the drawback with your proposal is that each buffer will be sent in its own packet, and what I'm trying to do is reduce the number of packets.

By the way, I probably forgot to mention that the buffers I'm dealing with are relatively small: from 1 or 2 bytes up to a maximum of, let's say, 40 bytes.

Now, I understand that the packet length on most networks is about 250 bytes, and I'm trying to figure out a way to compose those small buffers into a unified network stream, but without copying each individual buffer into a larger one, in order to save CPU time and reduce the packet count.

I hope the question has become clearer.

thanks

Last edited by entz; 10-01-2007 at 09:51 AM.
 
Old 10-01-2007, 06:56 PM   #4
graemef
Senior Member
 
Registered: Nov 2005
Location: Hanoi
Distribution: Fedora 13, Ubuntu 10.04
Posts: 2,379

Rep: Reputation: 148
Sending such small data packets (1 or 2 bytes) is not going to be a very efficient use of the network. If you know that more is coming and the recipient can wait, then why not gather the data into a larger packet and then send it?
 
Old 10-01-2007, 08:12 PM   #5
entz
Member
 
Registered: Mar 2007
Location: Milky Way , Planet Earth!
Distribution: Opensuse
Posts: 453
Blog Entries: 3

Original Poster
Rep: Reputation: 40
Quote:
Originally Posted by graemef View Post
If you know that more is coming and the recipient can wait then why not gather the data into a larger packet and then send it?
Basically, that's what I'm planning to do, but doing it the classical way would mean copying all those small buffers into a larger one, which is exactly what I'm trying to avoid.

I was actually asking whether it's possible to send a packet not from one location in memory (i.e. a single buffer, as usual) but from the multiple locations where those small chunks are stored, so that, for example, the first 5 bytes of the packet originate from buffer x and the next 5 from another, say buffer y.

If that could be done without copying each buffer into another, it would be extremely efficient.
 
Old 10-01-2007, 08:33 PM   #6
chrism01
Guru
 
Registered: Aug 2004
Location: Sydney
Distribution: Centos 6.6, Centos 5.10
Posts: 16,324

Rep: Reputation: 2041
To do what you want, as you generate your buffer data, store it in strings to be sent. You can only send one string/buffer at a time, and sending short buffers is inefficient networking.
To be honest, though, memcpy() is a very quick/efficient call, so I wouldn't worry about the overhead unless you have extreme requirements. Any time spent in memcpy() will be swamped by the time taken to transmit the packet on the network.
I think you may be suffering from premature optimization syndrome.
You might want to concatenate your data and then compress it before sending.
 
Old 10-01-2007, 09:28 PM   #7
entz
Member
 
Registered: Mar 2007
Location: Milky Way , Planet Earth!
Distribution: Opensuse
Posts: 453
Blog Entries: 3

Original Poster
Rep: Reputation: 40
Quote:
Originally Posted by chrism01 View Post
To do what you want, as you generate your buffer data, store it in strings to be sent. You can only send one string/buffer at a time, and sending short buffers is inefficient networking.
To be honest, though, memcpy() is a very quick/efficient call, so I wouldn't worry about the overhead unless you have extreme requirements. Any time spent in memcpy() will be swamped by the time taken to transmit the packet on the network.
Well, I can't gather the chunks into one string, since each chunk represents a given value in a larger object-oriented scheme, not to mention that different clients are going to recv() different values, etc.

Nonetheless, if memcpy() really is that efficient, then I'll scrap the idea of making this thing perfect since, as you mentioned, much more time is burned actually getting the packet to its destination.
By the way, I'm memcpy()ing ALL the time!

Quote:
I think you may be suffering from premature optimization syndrome.
Haha, you named it.

cheers
 
Old 10-02-2007, 06:39 AM   #8
chrism01
Guru
 
Registered: Aug 2004
Location: Sydney
Distribution: Centos 6.6, Centos 5.10
Posts: 16,324

Rep: Reputation: 2041
If you've got multiple clients like that (or even one, actually), then it's a trade-off:

1. Send all data ASAP. Pro: quick 'response'. Con: inefficient packet filling; too many packets processed.

2. Accumulate data on a per-client basis, then send (to a given client) when you've got 'enough' for that client. Pro: good packet filling; a small number of packets sent. Con: slow 'response'.

Your choice ...
 