Programming
This forum is for all programming questions.
The question does not have to be directly related to Linux and any language is fair game.
I'm facing a problem where I need to stream some data at a constant bit rate, and I was wondering if there is any buffering/stream-rate-control algorithm out there. Google didn't help me much - I could use some hints.
I have a function which processes RTP packets - it plays them as audio. This happens on a MIPS-based board that has a 'voice cpu' driven by vendor firmware, so there are some restrictions on what I can and cannot do.
Let's say this function is called 'processRTP'. If I call it as fast as I can, I get audio output which is very fast, approx. 8x the original speed. And if I don't call the function fast enough, there are artifacts: noise, scratching, etc. So obviously, I need to control the rate myself.
So, I'm looking for a way/algorithm/pattern that would help me schedule calls to this function and at the same time be efficient enough for this embedded device.
processRTP should be called approx. each ~10ms.
I need something more than:
for (;;) {
    processRTP();
    usleep(10000);   /* 10 ms - note usleep() takes microseconds */
}
because this can break easily - usleep() is not accurate.
I would suggest that you modify your design a bit. It sounds like you are attempting to have the input data rate control the rate at which the audio output occurs. If so, that type of architecture is not correct. You need to separate the rate at which you receive the packets from the rate at which you decode the audio output. To do this, use multiple threads and a buffer pool. The input thread receives the incoming data packets and pushes them onto a receive queue. The audio output thread pops a buffer off the receive queue, looks at the timing information contained in the packet, and outputs the decoded audio data at the appropriate rate. Buffer overruns can be handled by tuning the size of the buffer queue and by protocol. Underruns are handled by starting and stopping the audio playback until you have buffered enough audio packets to assure continuous playback.
Note: I am oversimplifying the algorithms, but this in essence is what needs to be done.
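The receive queue described above can be sketched as a bounded pthread queue; this is an illustration only - the fixed capacity, the void* payload, and the function names are made up for the example:

```c
#include <pthread.h>

#define QCAP 32   /* queue capacity, in packets (placeholder value) */

/* A bounded FIFO of packet pointers shared between the receive
   thread (producer) and the playback thread (consumer). */
typedef struct {
    void *slots[QCAP];
    int head, tail, count;
    pthread_mutex_t lock;
    pthread_cond_t not_empty, not_full;
} PacketQueue;

void pq_init(PacketQueue *q)
{
    q->head = q->tail = q->count = 0;
    pthread_mutex_init(&q->lock, NULL);
    pthread_cond_init(&q->not_empty, NULL);
    pthread_cond_init(&q->not_full, NULL);
}

/* Receive thread: blocks while the queue is full (overrun handling). */
void pq_push(PacketQueue *q, void *pkt)
{
    pthread_mutex_lock(&q->lock);
    while (q->count == QCAP)
        pthread_cond_wait(&q->not_full, &q->lock);
    q->slots[q->tail] = pkt;
    q->tail = (q->tail + 1) % QCAP;
    q->count++;
    pthread_cond_signal(&q->not_empty);
    pthread_mutex_unlock(&q->lock);
}

/* Playback thread: blocks while the queue is empty (underrun handling). */
void *pq_pop(PacketQueue *q)
{
    pthread_mutex_lock(&q->lock);
    while (q->count == 0)
        pthread_cond_wait(&q->not_empty, &q->lock);
    void *pkt = q->slots[q->head];
    q->head = (q->head + 1) % QCAP;
    q->count--;
    pthread_cond_signal(&q->not_full);
    pthread_mutex_unlock(&q->lock);
    return pkt;
}
```

The playback thread would pop, decode, and pace its output; the blocking push/pop gives you the overrun/underrun behavior for free.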
'RTP' may be a bit misleading. I supply the RTP 'packets' from a file, so there is no issue with receiving and buffering input from the network. All the packets are already available and I can access any of them.
Imagine it as a black box that consumes data at a rate of 1 packet per 10 ms. The box has an input buffer of, say, 10 packets. So I need to supply 1 packet per 10 ms on average.
What I'm looking for is an effective way to guarantee that average of 1 packet per 10 ms without running into buffer underrun/overflow.
rstewart is absolutely correct. The key to solving this problem is to have some kind of queueing mechanism between the sender and the receiver. However you need to define "sender" and "receiver".
Once you've found (or created) such a mechanism (once you've got that piece of the puzzle locked in), the rest of the pieces should fall into place fairly easily.
The queueing mechanism.. that's exactly my question. :-)
But I don't know what to google for - does it have some special name? Perhaps 'queueing mechanism', 'rate control', or some 'buffering technique'.
It all boils down to keeping some buffer filled and not to overflow it or underrun it.
A FIFO addresses only the order of the data, not the rate at which data are inserted and removed.
In my case, data are removed from this buffer (FIFO) at a constant rate by the firmware. So what I need is to fill this buffer and keep it from overflowing or underrunning.
A FIFO has two ends: at one end is the producer (your function), at the other the consumer (your real-time firmware). And a FIFO has an indication of its fullness.
A FIFO does not change the order of the data - it's like a pipe through which you push data in its natural order.
You might need two threads: one which feeds/fills the FIFO, the other which empties it.
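A FIFO with exactly that fullness indication can be sketched as a single-producer/single-consumer ring buffer. This is a simplification: it ignores the memory-ordering/atomicity details a real cross-thread (or driver-shared) ring needs, and the capacity is a made-up value:

```c
#include <stddef.h>

#define RING_CAP 16U   /* must be a power of two */

/* SPSC ring of packet pointers with free-running indices. One slot is
   sacrificed so that "full" and "empty" are distinguishable. */
typedef struct {
    void *slots[RING_CAP];
    unsigned head;   /* consumer index */
    unsigned tail;   /* producer index */
} Ring;

/* Fullness indication: how many packets are currently buffered. */
unsigned ring_fill(Ring *r)
{
    return (r->tail - r->head) & (RING_CAP - 1);
}

/* Producer side: returns 0 if the ring is full (would overflow). */
int ring_push(Ring *r, void *pkt)
{
    if (ring_fill(r) == RING_CAP - 1)
        return 0;
    r->slots[r->tail & (RING_CAP - 1)] = pkt;
    r->tail++;
    return 1;
}

/* Consumer side: returns NULL if the ring is empty (would underrun). */
void *ring_pop(Ring *r)
{
    if (ring_fill(r) == 0)
        return NULL;
    void *pkt = r->slots[r->head & (RING_CAP - 1)];
    r->head++;
    return pkt;
}
```

The filler thread polls ring_fill() and tops the ring up; the drainer empties it at its own pace.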
Last edited by Sergei Steshenko; 08-20-2009 at 02:37 AM.
Read the description of token bucket/HTB traffic shaping. I know your problem is not network traffic, but the description of the various queue types might give you a pointer as to where to start.
Unfortunately this information is hidden from me. I might be able to expose it to userspace by changing the kernel driver which interacts with the firmware on the 'voice cpu', but that seems complicated :-).
For now I'm going with a 'burst' approach. I measure the time between iterations of my main loop, and from that I calculate how much data the 'voice cpu' could have consumed. That way I have a pretty good estimate of how full the buffer really is, and if I find it too low I make another burst to fill it, or just usleep for a while.
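The bookkeeping for that burst approach fits in a couple of helpers: since the firmware drains exactly one packet per 10 ms, elapsed wall-clock time tells you how many slots have opened up. A sketch (the 10-packet capacity matches the buffer mentioned earlier; the function names are invented):

```c
#define DRAIN_PERIOD_NS 10000000L   /* firmware eats 1 packet / 10 ms */
#define BUF_CAPACITY    10          /* the black box's input buffer */

/* Estimate the buffer's fill level after 'elapsed_ns' of draining. */
int drained_fill(int fill, long elapsed_ns)
{
    fill -= (int)(elapsed_ns / DRAIN_PERIOD_NS);
    return fill < 0 ? 0 : fill;     /* an empty buffer stays empty */
}

/* How many packets to burst right now to top the buffer back up. */
int burst_size(int fill)
{
    return BUF_CAPACITY - fill;
}
```

The main loop would then be: measure elapsed time, update the estimate with drained_fill(), call processRTP() burst_size() times, and sleep. The clamp to zero matters: once the buffer has underrun, further elapsed time must not drive the estimate negative, or the next burst would overflow the real buffer.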
If you create the FIFO, how is the information hidden from you ?
I guess you need to put some parameters on what you mean by '~10ms' and 'not accurate'. As Linux is not a real-time OS, you may have expectations which cannot be realized. What do you mean by 'break easily'? Do you have some empirical data that demonstrates your belief? How critical is it if the occasional time-sensitive interval is missed?
--- rod.
The OP's code breaks easily because its architecture is wrong for the purpose.
By the way, Linux has hard RT kernels, but in the OP's case counting on them would still be wrong architecturally.