Programming
This forum is for all programming questions. The question does not have to be directly related to Linux, and any language is fair game.
There are two basic ways that you can do this.
The first (and easiest) is to call usleep(us) from within the loop, with us set to an interval in microseconds. This sleeps for us microseconds, during which time the CPU isn't used.
The problem with this is that you still have to poll much faster than you actually need to, which is inefficient. A better way is to set up a timer that interrupts your process whenever it needs to do something or an input event occurs, and have your process call the kernel-level function schedule() in the meantime (effectively, sleep until interrupted). Unfortunately, I don't know how to do that outside of the kernel.
I suggest examining the source of Frozen Bubble to see how they do it.
If you're waiting for input, from the user or the network, you typically use poll() or select(). The gtk event loop sits in poll() most of the time, waiting for events from X, which arrive over a socket. Qt probably does something similar.
If you're writing a gui app, of course you'll just use one of these toolkits and not deal with poll() or select() directly. Even if you aren't writing a gui, glib (used by gtk) can abstract the event loop for you.
I use Java; thread programming can achieve the same goal, and I think C works similarly.
First, create some threads using the pthread library, then have them wait in a pool; once one is triggered (e.g. by input on stdin), it continues the loop.
As far as I can tell, Java doesn't give you a proper way to do networked GUI apps as a single thread, which IMO is really icky. Particularly since threading isn't built into the C language the way it is in Java, I think things come out cleaner if you stay away from threads where select() does the same job. Programming in one thread means you don't have to worry about deadlocks and races, or about which lock goes with which data structure, and so on.
It's a lot easier to debug a program when it only does one thing at once!