LinuxQuestions.org

LinuxQuestions.org (/questions/)
-   Programming (https://www.linuxquestions.org/questions/programming-9/)
-   -   making networking more efficient (https://www.linuxquestions.org/questions/programming-9/making-networking-more-efficient-588505/)

entz 09-30-2007 08:21 PM

making networking more efficient
 
Greetings,

It has been some time since I last posted, but I've come back with an interesting issue.

I was thinking about the best way to transmit data over the net; specifically, how to get the most out of the machine when it comes to processing the stream before you send it.

Now I'll get into more detail.
Suppose you have two or more separate buffers and you want to "stream" them
over the net using the famous BSD API (you know: socket(), bind(), send(), etc.),
and the problem is: HOW to send those 2 or more buffers as efficiently as possible.

Probably the simplest way would be to copy those 2 buffers into a larger serial buffer that holds them both, then keep calling send() until all bytes are sent (let's call this the linear approach).

But as you can instinctively notice, this method is very easy but certainly not the best,
because you burn a lot of CPU time copying stuff from point A in memory to point B.

So I was wondering whether it's possible to send a packet that originates from more than one buffer in memory.
Imagine a send() function that took not a buffer and its length as arguments, but an array of arrays of chars (i.e. char**) plus two other arguments (int and int*) specifying the number of input arrays and the length of each one respectively.
I guess something like that would be great.

Alright then, I'm eager to hear how you would handle this problem in as
performance-tuned a way as possible.

Cheers.

chrism01 10-01-2007 02:05 AM

Better to generate your data into an array of buffers, then loop over the array, calling send() (or use a linked list if you need an indeterminate number of 'buffers').

entz 10-01-2007 08:44 AM

Hi Chrism01,

Actually, the drawback with your proposal is that each buffer will be sent in its own packet,
and what I'm trying to do is reduce the number of packets.

BTW, I probably forgot to mention that the buffers I'm dealing with
are relatively small: from 1 or 2 bytes up to a maximum of, let's say, 40 bytes.

Now, I understand that the maximum packet size on most networks is a few hundred to ~1500 bytes (the typical Ethernet MTU),
and I'm trying to figure out a way to compose those small buffers into a unified network stream, but without copying each individual buffer into a larger one, in order to save CPU time and reduce the packet count.

Hope that makes the question clearer.

thanks

graemef 10-01-2007 05:56 PM

Sending such small data packets (1 or 2 bytes) is not going to be a very efficient use of the network. If you know that more is coming and the recipient can wait then why not gather the data into a larger packet and then send it?

entz 10-01-2007 07:12 PM

Quote:

Originally Posted by graemef (Post 2909791)
If you know that more is coming and the recipient can wait then why not gather the data into a larger packet and then send it?

Basically that's what I'm planning to do, but doing it the classical way would mean copying all those small buffers into a larger one, which is exactly what I'm trying to avoid.

I was actually asking whether it's possible to send a packet not from one location in memory (i.e. a single buffer), as is usual, but from multiple locations where those small chunks are stored,
so that, for example, the first 5 bytes in the packet originate from buffer x and the next 5 from another, let's say buffer y.

If that could be done without the need to copy each single buffer into another, it would be extremely efficient.

chrism01 10-01-2007 07:33 PM

To do what you want, as you generate your buffer data, store it in strings to be sent. With a plain send() you can only send one string/buffer at a time, and sending short buffers is inefficient networking.
To be honest though, memcpy() is a very quick/efficient call, so I wouldn't worry about the overhead unless you've got extreme requirements. Any time spent in memcpy() will be swamped by the time taken to transmit the packet on the network.
I think you may be suffering from premature optimization syndrome.
You might want to concat your data and then compress it before sending.

entz 10-01-2007 08:28 PM

Quote:

Originally Posted by chrism01 (Post 2909871)
To do what you want, as you generate your buffer data, store it in strings to be sent. With a plain send() you can only send one string/buffer at a time, and sending short buffers is inefficient networking.
To be honest though, memcpy() is a very quick/efficient call, so I wouldn't worry about the overhead unless you've got extreme requirements. Any time spent in memcpy() will be swamped by the time taken to transmit the packet on the network.

Well, I can't gather the chunks into one string, since each chunk represents a given value in a larger object-oriented scheme,
not to mention that different clients are going to recv() different values, etc.

Nonetheless, if memcpy() is indeed that efficient, then I'm going to scrap the idea of making this thing perfect, since, as you mentioned, much more time is burned actually getting the packet to its destination.
BTW, I'm memcpy()ing ALL the time!

Quote:

I think you may be suffering from premature optimization syndrome.
Haha, you nailed it :p

cheers

chrism01 10-02-2007 05:39 AM

If you've got multiple clients like that (or even just one, actually), then it's a trade-off:

1. Send all data ASAP. Pro: quick 'response'. Con: inefficient packet filling, too many packets processed.

2. Accumulate data on a per-client basis, then send when you've got 'enough' for a given client. Pro: good packet filling, small number of packets sent. Con: slow 'response'.

your choice ...
;)


All times are GMT -5.