Programming
This forum is for all programming questions. The question does not have to be directly related to Linux and any language is fair game.
It has been some time since I last posted, but I'm back with an interesting issue.
I was thinking about the best way to transmit data over the net; that is, how to get the most out of the machine when it comes to processing the stream before you send it.
To get into more detail: suppose you have two or more separate buffers and you want to "stream" them over the net using the famous BSD API (you know: socket(), bind(), send(), etc.). The problem is: how do you send those two or more buffers as efficiently as possible?
Probably the simplest way would be to copy those buffers into a larger serial buffer that holds them all, then keep calling send() until all bytes are sent (let's call this the linear approach). But as you can instinctively notice, this method is very easy but certainly not the best, because you burn a lot of CPU time copying data from point A in memory to point B.
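The "linear approach" described above can be sketched as follows. The two payloads and the socketpair() (standing in for a real network connection, so the sketch is self-contained) are assumptions for illustration:

```c
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Linear approach: copy the small buffers into one serial buffer,
 * then loop on send() until everything has gone out.
 * Returns 0 on success, -1 on any failure. */
static int linear_send_demo(void)
{
    int sv[2];
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) != 0)
        return -1;

    const char a[] = "hello", b[] = "world";  /* hypothetical payloads */

    char serial[10];
    memcpy(serial, a, 5);                 /* the copies we pay for */
    memcpy(serial + 5, b, 5);

    size_t off = 0;
    while (off < sizeof serial) {         /* send() may send less than asked */
        ssize_t n = send(sv[0], serial + off, sizeof serial - off, 0);
        if (n <= 0)
            return -1;
        off += (size_t)n;
    }

    char rx[10];
    if (read(sv[1], rx, sizeof rx) != 10)
        return -1;
    close(sv[0]);
    close(sv[1]);
    return memcmp(rx, "helloworld", 10) == 0 ? 0 : -1;
}
```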
So I was wondering whether it is possible to send a packet that originates from more than one buffer in memory.
Imagine a send() function that took, instead of a buffer and its length, an array of arrays of chars (i.e. char**) plus two other arguments (an int and an int*) specifying the number of input arrays and the length of each one respectively.
I guess something like that would be great.
Alright then, I'm eager to hear from you how to handle this problem in as performance-tuned a way as possible.
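For what it's worth, an interface very close to the one imagined above already exists in POSIX: writev() takes an array of struct iovec, each naming a separate buffer and its length, and the kernel gathers them into one stream with no user-space copy. A minimal sketch (buffer contents and the socketpair() in place of a real connection are assumptions):

```c
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>
#include <unistd.h>

/* Scatter-gather send: two separate buffers go out in a single
 * writev() call -- no copying into a larger staging buffer.
 * Returns 0 on success, -1 on any failure. */
static int gather_send_demo(void)
{
    int sv[2];
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) != 0)
        return -1;

    char x[] = "hello";                 /* hypothetical buffer x */
    char y[] = "world";                 /* hypothetical buffer y */
    struct iovec iov[2] = {
        { .iov_base = x, .iov_len = 5 },
        { .iov_base = y, .iov_len = 5 },
    };

    if (writev(sv[0], iov, 2) != 10)    /* one call, both buffers */
        return -1;

    char rx[10];
    if (read(sv[1], rx, sizeof rx) != 10)
        return -1;
    close(sv[0]);
    close(sv[1]);
    return memcmp(rx, "helloworld", 10) == 0 ? 0 : -1;
}
```

Like write(), writev() can in general return a short count, so production code should still loop; for a sketch this small the single call suffices.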
Better to generate your data into an array of buffers, then loop over the array, calling send() (or use a linked list if you need an indeterminate number of 'buffers').
Actually, the drawback with your proposal is that each buffer will be sent in its own packet, and what I'm trying to do is to reduce the number of packets.
By the way, I probably forgot to mention that the buffers I'm dealing with are pretty small: from 1 or 2 bytes up to a maximum of, let's say, 40 bytes.
Now, I understand that the packet size on most networks is limited (on the order of a few hundred bytes, up to ~1500 on Ethernet), and I'm trying to figure out a way to compose those small buffers into a unified network stream, but without copying each individual buffer into a larger one, in order to save CPU time and reduce the packet count.
Sending such small data packets (1 or 2 bytes) is not going to be very efficient usage of the network. If you know that more is coming and the recipient can wait then why not gather the data into a larger packet and then send it?
Quote:
If you know that more is coming and the recipient can wait then why not gather the data into a larger packet and then send it?
Basically that's what I'm planning to do, but doing it the classical way would mean that I have to copy all those small buffers into a larger one, which is exactly what I'm trying to avoid.
I was actually asking whether it's possible to send a packet not from one location in memory (i.e. a single buffer, as usual) but from the multiple locations where those small chunks are stored, so that, for example, the first 5 bytes of the packet originate from buffer x and the next 5 from another buffer, say buffer y.
If that could be done without copying each single buffer into another, it would be extremely efficient.
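This exact effect — one packet whose first bytes come from buffer x and whose next bytes come from buffer y, with no intermediate copy in user space — is what sendmsg() with an iovec array gives you on a datagram socket. A sketch (the buffer contents and the SOCK_DGRAM socketpair() are assumptions to keep it self-contained):

```c
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>
#include <unistd.h>

/* One datagram assembled from two separate buffers via sendmsg():
 * the kernel gathers the iovec entries, so bytes 0-4 of the packet
 * come from x and bytes 5-9 from y.
 * Returns 0 on success, -1 on any failure. */
static int one_packet_two_buffers(void)
{
    int sv[2];
    if (socketpair(AF_UNIX, SOCK_DGRAM, 0, sv) != 0)
        return -1;

    char x[] = "hello", y[] = "world";  /* hypothetical buffers x and y */
    struct iovec iov[2] = {
        { .iov_base = x, .iov_len = 5 },
        { .iov_base = y, .iov_len = 5 },
    };
    struct msghdr msg = { .msg_iov = iov, .msg_iovlen = 2 };

    if (sendmsg(sv[0], &msg, 0) != 10)
        return -1;

    char rx[16];
    ssize_t n = recv(sv[1], rx, sizeof rx, 0);  /* one datagram arrives */
    close(sv[0]);
    close(sv[1]);
    return (n == 10 && memcmp(rx, "helloworld", 10) == 0) ? 0 : -1;
}
```

On a TCP (stream) socket the same call still avoids the user-space copy, though packet boundaries are then up to the stack rather than guaranteed per call.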
To do what you want, as you generate your buffer data, store it in strings to be sent. You can only send one string/buffer at a time, and sending short buffers is inefficient networking.
To be honest though, memcpy() is a very quick/efficient call, so I wouldn't worry about the overhead unless you've got extreme requirements. Any time spent in memcpy() will be swamped by the time taken to transmit the packet on the network.
I think you may be suffering from premature optimization syndrome.
You might want to concat your data and then compress it before sending.
Quote:
To do what you want, as you generate your buffer data, store it in strings to be sent. You can only send one string / buffer at a time and sending short buffers is inefficient networking.
To be honest though, memcpy is a very quick/efficient cmd, so I wouldn't worry about the overhead unless you've got extreme requirements. Any time spent in memcpy() will be swamped by the time taken to txmit the pkt on the network.
Well, I can't gather the chunks into one string, since each chunk represents a given value in a larger object-oriented scheme, not to mention that different clients are going to recv() different values, etc.
Nonetheless, if memcpy() is indeed that efficient, then I'm going to scrap the idea of making this thing perfect since, as you mentioned, much more time is burned in actually getting the packet to its destination.
By the way, I'm memcpy()ing ALL the time!
Quote:
I think you may be suffering from premature optimization syndrome.
If you've got multiple clients like that (or even one, actually), then it's a trade-off:
1. Send all data ASAP. Pro: quick 'response'. Con: inefficient packet filling; too many packets processed.
2. Accumulate data on a per-client basis, then send to a given client when you've got 'enough' for it. Pro: good packet filling; small number of packets sent. Con: slow 'response'.
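Option 2 above can be sketched as a small per-client staging buffer that flushes with one send() once a threshold is reached. The struct, the FLUSH_AT threshold, and the socketpair() are all illustrative assumptions, not anything from the thread:

```c
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

#define FLUSH_AT 8   /* arbitrary 'enough' threshold for the sketch */

struct client_buf { int fd; size_t len; char data[64]; };

/* Accumulate a small chunk for one client; flush with a single
 * send() once FLUSH_AT bytes have built up.  The memcpy here is
 * the cost being traded for fewer, fuller packets. */
static int cb_push(struct client_buf *c, const void *p, size_t n)
{
    memcpy(c->data + c->len, p, n);
    c->len += n;
    if (c->len >= FLUSH_AT) {
        if (send(c->fd, c->data, c->len, 0) != (ssize_t)c->len)
            return -1;
        c->len = 0;
    }
    return 0;
}

static int batching_demo(void)
{
    int sv[2];
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) != 0)
        return -1;

    struct client_buf c = { .fd = sv[0], .len = 0 };
    if (cb_push(&c, "ab", 2) != 0) return -1;    /* buffered, not sent */
    if (cb_push(&c, "cd", 2) != 0) return -1;    /* still buffered */
    if (cb_push(&c, "efgh", 4) != 0) return -1;  /* threshold hit: one send */

    char rx[16];
    ssize_t n = read(sv[1], rx, sizeof rx);
    close(sv[0]);
    close(sv[1]);
    return (n == 8 && memcmp(rx, "abcdefgh", 8) == 0) ? 0 : -1;
}
```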