I am developing a server program that sends 3 types of data, each with its own time frame:
I data - 13888 µs
P data - 2777 µs
B data - 555 µs
1. Each piece of data must be sent within its given time frame.
2. A counter adds up the ideal time at which the data should have been sent, known as streamtime.
3. At the beginning of the program, I call gettimeofday to get the time, known as start.
4. Before I send the data, I get the time again, known as beforesend.
5. I take the difference between beforesend and start and compare it with streamtime. If it is more than streamtime, the data is not sent.
6. After sending, I get the time again, subtract the elapsed time from streamtime, and sleep for that amount of time.
7. This continues in a loop until the program terminates (a sketch of the loop follows this list).
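For reference, here is a minimal sketch in C of the loop described above. send_data() and the fixed interval are placeholders for my actual code:

Code:
#include <sys/time.h>
#include <unistd.h>

/* Microseconds elapsed between two timevals. */
static long usec_diff(const struct timeval *a, const struct timeval *b)
{
    return (b->tv_sec - a->tv_sec) * 1000000L + (b->tv_usec - a->tv_usec);
}

int main(void)
{
    struct timeval start, beforesend, aftersend;
    long streamtime = 0;                 /* ideal cumulative send time (step 2) */

    gettimeofday(&start, NULL);          /* step 3 */
    for (;;) {                           /* step 7 */
        streamtime += 13888;             /* I data; 2777 for P, 555 for B */

        gettimeofday(&beforesend, NULL); /* step 4 */
        if (usec_diff(&start, &beforesend) <= streamtime) {  /* step 5 */
            /* send_data();  -- placeholder for the actual send */
        }

        gettimeofday(&aftersend, NULL);  /* step 6 */
        long slack = streamtime - usec_diff(&start, &aftersend);
        if (slack > 0)
            usleep(slack);               /* sleep away the remaining slack */
    }
    return 0;                            /* never reached */
}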
My problem is that sometimes at step 5 the difference is more than streamtime. Out of the 1700 pieces of data I send, about 400 have an elapsed time greater than the streamtime. Is this because background processes cause some of these timings to increase so much?
Sort of. You are running on a system that hands out CPU time in slices (quanta). If some process that has nothing to do with yours needs time, it will get it.
You need to develop some kind of inter-thread communication scheme using semaphores or whatever is appropriate. Just guessing, but setting streamtime to roughly 13000 µs and then sleeping that long will get you in trouble. You need to have each transmission thread signal "I am done",
and then have one controlling thread wait until it gets all three of those signals (or semaphores, or whatever) before it tries to send the next batch of data.
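Something along these lines, assuming POSIX threads and unnamed semaphores (the sender threads here are illustrative placeholders, not your actual transmit code). Build with -pthread:

Code:
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define NSENDERS 3

static sem_t done_sem;                  /* posted once per finished sender */

static void *sender(void *arg)
{
    const char *kind = arg;
    /* ... transmit one batch of I, P or B data here ... */
    printf("%s batch sent\n", kind);
    sem_post(&done_sem);                /* signal "I am done" */
    return NULL;
}

int main(void)
{
    pthread_t t[NSENDERS];
    const char *kinds[NSENDERS] = { "I", "P", "B" };
    int i;

    sem_init(&done_sem, 0, 0);
    for (i = 0; i < NSENDERS; i++)
        pthread_create(&t[i], NULL, sender, (void *)kinds[i]);

    /* Controlling thread: wait for all three "done" signals
     * before scheduling the next batch of data. */
    for (i = 0; i < NSENDERS; i++)
        sem_wait(&done_sem);
    printf("all senders finished; send next batch\n");

    for (i = 0; i < NSENDERS; i++)
        pthread_join(t[i], NULL);
    sem_destroy(&done_sem);
    return 0;
}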
You can get relatively close, but most of the time you'll find that people implement protocols for what the data should look like (start sequence, stop sequence, checksum) rather than timing schemes for when the data should arrive, because they are safer. Technically speaking, if the kernel decides it needs to do something more important, it could cut you off in the middle of your routine. You could check out the RTLinux extensions and implement a kernel module for the timing; that would easily get you the resolution you want.
One of the timing issues I imagine you'll run into is that it is very hard to synchronize the clocks on two different machines across a network, especially at µs resolution. And if you get out of sync you'll end up with a huge mess.
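As a rough illustration of the framing idea (the start byte, length field, additive checksum, and stop byte here are just example choices, not a standard):

Code:
#include <stdint.h>
#include <string.h>

#define FRAME_START 0x7E
#define FRAME_STOP  0x7F

/* Simple additive checksum over the payload. */
static uint8_t checksum(const uint8_t *p, size_t n)
{
    uint8_t sum = 0;
    while (n--)
        sum += *p++;
    return sum;
}

/* Build start | length | payload | checksum | stop into out;
 * out must have room for len + 4 bytes. Returns the frame length. */
static size_t build_frame(uint8_t *out, const uint8_t *payload, uint8_t len)
{
    size_t i = 0;
    out[i++] = FRAME_START;
    out[i++] = len;
    memcpy(out + i, payload, len);
    i += len;
    out[i++] = checksum(payload, len);
    out[i++] = FRAME_STOP;
    return i;
}

The receiver then resynchronizes on FRAME_START and discards frames whose checksum does not match, instead of trusting arrival times.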