Reply by Robert Scott March 4, 2014
On Thu, 27 Feb 2014 23:21:30 -0800 (PST), Mauritz Jameson
<mjames2393@gmail.com> wrote:

>I'm working on a platform where the audio driver exhibits quite irregular callback patterns.
>
>My expectation was that if I configure the audio driver to give me callbacks every 10ms, I would get them approximately every 10ms (give or take some milliseconds). Ideally, the callback pattern would look something like this:
>
>t = 0ms : speaker callback
>t = 1ms : mic callback
>
>t = 10ms : speaker callback
>t = 11ms : mic callback
>
>t = 20ms : speaker callback
>t = 21ms : mic callback
>
>t = 30ms : speaker callback
>t = 31ms : mic callback
>
>The microphone callback takes the received microphone data and writes it to a ring buffer. It then sends a 'signal' to another thread to wake up and process the microphone data. The processing of the microphone data results in the generation of 10ms of speaker data. That speaker data is written to a speaker ring buffer, which the speaker callback reads from.
>
>If the callback pattern looks like what I've described above where the speaker and microphone callback takes turns, everything works out fine.
>
>However, if the callback pattern is irregular things start to get messed up. For example: a burst of microphone callbacks will drive the size of the speaker ring buffer through the roof. If I - for whatever reason - don't get the same kind of burst of speaker callbacks, I suddenly have large latency in the speaker path.
>
>Another problem is if I get a burst of speaker callbacks. In that case the speaker ring buffer will run dry of samples and I'll have to return silence packets instead.
>
>So I'm wondering if there's some kind of standard solution for this type of problem? I can't think of any.
There is no solution other than to increase the latency between your input and output processes. The irregular callbacks mean you are going to have to live with whatever delay makes it "just possible" to meet the deadline for filling the output buffer. Then your ring buffer will take up the slack when the delay is worst case in the other direction.

But here's a thought you may not have considered. What if the input and output sample rates are not the same? Then you could not maintain a 1:1 ratio between input buffers and output buffers indefinitely. You may wonder, why would anybody do that? Well, I can tell you from personal experience that it does happen on some Windows sound cards, and it has been observed on some of the old Windows Mobile Pocket PCs. It seems crazy to me. Why would you want to design a system with two clock sources when one clock source would do? But the above-mentioned systems did do it. I have even seen a discrepancy of about 400 ppm between the input and output data rates on an iPod Touch 5th generation. But the discrepancy only lasted for 4 seconds. Maybe there was some software PLL settling or something. Anyway, something more to think about when you want to link input buffers tightly to output buffers.

Robert Scott
Hopkins, MN
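
P.S. If you want to see whether the two clocks on a given device really disagree, one crude check is to accumulate the frame counts each callback reports and watch whether the difference drifts over a minute or two. A rough sketch; the callback names and signatures here are made up, not any particular API:

// Rough sketch: estimate input/output clock drift by comparing cumulative
// frame counts. Only the bookkeeping matters; the callbacks are hypothetical.
#include <atomic>
#include <cstdint>
#include <cstdio>

static std::atomic<int64_t> g_micFrames{0};      // total frames delivered by mic callbacks
static std::atomic<int64_t> g_speakerFrames{0};  // total frames consumed by speaker callbacks

void onMicCallback(const short* /*samples*/, int frames) {
    g_micFrames.fetch_add(frames, std::memory_order_relaxed);
}

void onSpeakerCallback(short* /*samples*/, int frames) {
    g_speakerFrames.fetch_add(frames, std::memory_order_relaxed);
}

// Call this every few seconds from a low-priority thread.
void reportDrift(double sampleRateHz) {
    int64_t diff = g_micFrames.load() - g_speakerFrames.load();
    std::printf("mic-speaker imbalance: %lld frames (%.1f ms)\n",
                (long long)diff, 1000.0 * diff / sampleRateHz);
}

If that imbalance grows roughly linearly with time instead of hovering around some constant, the input and output really are on different clocks, and eventually you will have to drop/insert samples or resample one side.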
Reply by Marcel March 1, 2014
On 28.02.14 08.21, Mauritz Jameson wrote:
> My expectation was that if I configure the audio driver to give me callbacks every 10ms, I would get them approximately every 10ms (give or take some milliseconds). Ideally, the callback pattern would look something like this:
>
> t = 0ms : speaker callback
> t = 1ms : mic callback
>
> t = 10ms : speaker callback
> t = 11ms : mic callback
[...]
> However, if the callback pattern is irregular things start to get messed up. For example : A burst of microphone callbacks will drive the size of the speaker ring buffer through the roof. If I - for whatever reason - don't get the same kind of burst of speaker callbacks, I suddenly have large latency in the speaker path.
#1: Your buffer is too small. If you operate this way it must be larger than the ring buffers of the input device and the output device combined.

#2: Audio input and audio output already use buffers, first in the hardware and secondly in the driver. So a callback might not mean that you have to fill the buffer or process the samples immediately. It is just a signal that data or space is available. This especially applies to low-latency applications.

You should not do anything more in the callback than wake up the processing thread. You should not even copy the samples. Just pass the buffer pointer around in your application. (I am not familiar with the Android API in any way, but I would really wonder if it requires copying in the callback.)
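
A minimal sketch of that idea; the callback signature, buffer type and queue are all made up, not any particular Android API:

// Minimal sketch: the audio callback does no copying and no processing.
// It only records the buffer pointer and wakes the worker thread.
#include <condition_variable>
#include <deque>
#include <mutex>

struct Buffer { short* samples; int frames; };

std::deque<Buffer> g_pending;     // buffers waiting for the worker thread
std::mutex g_lock;
std::condition_variable g_wake;

// Called by the audio driver when microphone data is ready.
void onMicBuffer(short* samples, int frames) {
    {
        std::lock_guard<std::mutex> guard(g_lock);
        g_pending.push_back({samples, frames});   // pointer only, no memcpy
    }
    g_wake.notify_one();                          // wake the worker and return
}

// Worker thread: does all the real processing outside the callback.
void workerLoop() {
    for (;;) {
        Buffer buf;
        {
            std::unique_lock<std::mutex> guard(g_lock);
            g_wake.wait(guard, [] { return !g_pending.empty(); });
            buf = g_pending.front();
            g_pending.pop_front();
        }
        // ... process buf.samples here, then hand the buffer back to the driver ...
    }
}

A plain mutex is used here only for brevity; the priority-inversion caveat discussed further down still applies, and a lock-free queue (sketched at the end of this post) avoids it.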
> Another problem is if I get a burst of speaker callbacks. In that case the speaker ring buffer will run dry of samples and I'll have to return silence packets instead.
This problem should only arise at start-up, where you do not yet have processed data.
> So I'm wondering if there's some kind of standard solution for this type of problem? I can't think of any.
The usual way is a thread-safe buffer queue holding pointers to buffers of unprocessed data from the microphone. This queue is filled by the input callback. As soon as a packet has been put into the queue, a wake-up signal is sent to the worker thread. That's all that happens in the callback.

The worker thread processes any buffers in the queue. After a buffer has been processed it is returned to the microphone driver by some API call. Then the processed result is passed to the output. This call might block if the output buffers of the audio driver are filled. The output callback is not used. As soon as the queue runs empty the worker thread blocks until it is awakened by the wake-up signal.

I don't know whether Android's API works this way. If it requires synchronous processing you need the queue on the output side, i.e. the buffers are processed immediately and the result is stored in a queue. The input callback writes to the queue, the output callback reads from it. This is approximately what you implemented, except that your queue is size limited (ring buffer). But I would do this only if the API left me no other choice.

Note that there are some other pitfalls. First of all, if you use a mutex/semaphore/monitor to synchronize access to your queue or ring buffer, you might run into priority inversion. I.e. the lower-priority worker thread may be preempted while inside the synchronized context, and the higher-priority system thread that should service the hardware buffers might then wait on the monitor and hence on the lower-priority worker. This usually causes sound drops or increasing latency. So the queue implementation usually needs to be lock-free, which might be a challenge.

Marcel
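
P.S. For the single-producer/single-consumer case here (one callback filling, one worker draining), a lock-free queue can be as simple as a fixed-size ring of pointers with atomic head and tail indices. A sketch, with made-up names and a power-of-two capacity assumed:

// Sketch of a lock-free single-producer/single-consumer pointer queue.
// One thread (the audio callback) calls push(), one thread (the worker)
// calls pop(). Capacity must be a power of two. Names are illustrative.
#include <atomic>
#include <cstddef>

template <typename T, size_t Capacity>
class SpscQueue {
public:
    bool push(T* item) {                  // producer side (audio callback)
        size_t head = head_.load(std::memory_order_relaxed);
        size_t next = (head + 1) & (Capacity - 1);
        if (next == tail_.load(std::memory_order_acquire))
            return false;                 // queue full: caller must drop or enlarge
        slots_[head] = item;
        head_.store(next, std::memory_order_release);
        return true;
    }

    T* pop() {                            // consumer side (worker thread)
        size_t tail = tail_.load(std::memory_order_relaxed);
        if (tail == head_.load(std::memory_order_acquire))
            return nullptr;               // queue empty
        T* item = slots_[tail];
        tail_.store((tail + 1) & (Capacity - 1), std::memory_order_release);
        return item;
    }

private:
    T* slots_[Capacity] = {};
    std::atomic<size_t> head_{0};
    std::atomic<size_t> tail_{0};
};

The callback would push() the buffer pointer and then post a semaphore; only the worker side ever blocks on that semaphore, so the audio thread never has to wait on the worker.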
Reply by Mauritz Jameson February 28, 2014
The platform is Android 4.1.1 and above.
Reply by Mac Decman February 28, 2014
On Thu, 27 Feb 2014 23:21:30 -0800 (PST), Mauritz Jameson
<mjames2393@gmail.com> wrote:

>I'm working on a platform where the audio driver exhibits quite irregular callback patterns.
>
>My expectation was that if I configure the audio driver to give me callbacks every 10ms, I would get them approximately every 10ms (give or take some milliseconds). Ideally, the callback pattern would look something like this
Hah, good luck. I once worked on a platform, which I won't name, where all the drivers made me wait two weeks for one data exchange. I'd say, give me some audio. The driver would say, no, wait. No audio for you.
Reply by Mauritz Jameson February 28, 2014
I'm working on a platform where the audio driver exhibits quite irregular callback patterns.

My expectation was that if I configure the audio driver to give me callbacks every 10ms, I would get them approximately every 10ms (give or take some milliseconds). Ideally, the callback pattern would look something like this:

t = 0ms : speaker callback
t = 1ms : mic callback

t = 10ms : speaker callback
t = 11ms : mic callback

t = 20ms : speaker callback
t = 21ms : mic callback

t = 30ms : speaker callback
t = 31ms : mic callback

The microphone callback takes the received microphone data and writes it to a ring buffer. It then sends a 'signal' to another thread to wake up and process the microphone data. The processing of the microphone data results in the generation of 10ms of speaker data. That speaker data is written to a speaker ring buffer, which the speaker callback reads from.
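
In rough code terms the flow looks like this (a mutex-guarded std::deque stands in for the ring buffers; the callback signatures, block size and process() are only illustrative, not the actual driver API):

// Sketch of the flow described above: mic callback -> mic FIFO ->
// worker thread -> speaker FIFO -> speaker callback.
#include <algorithm>
#include <condition_variable>
#include <cstring>
#include <deque>
#include <mutex>

constexpr int BLOCK = 480;                    // 10 ms at 48 kHz, mono (illustrative)

std::deque<short> g_micFifo, g_spkFifo;
std::mutex g_lock;
std::condition_variable g_micReady;

void process(const short* in, short* out, int n) {   // placeholder for the real DSP
    std::memcpy(out, in, n * sizeof(short));
}

void onMicCallback(const short* in, int frames) {     // driver delivers mic data
    std::lock_guard<std::mutex> g(g_lock);
    g_micFifo.insert(g_micFifo.end(), in, in + frames);
    g_micReady.notify_one();                           // 'signal' to the worker thread
}

void workerThread() {
    short in[BLOCK], out[BLOCK];
    for (;;) {
        {
            std::unique_lock<std::mutex> g(g_lock);
            g_micReady.wait(g, [] { return g_micFifo.size() >= BLOCK; });
            std::copy(g_micFifo.begin(), g_micFifo.begin() + BLOCK, in);
            g_micFifo.erase(g_micFifo.begin(), g_micFifo.begin() + BLOCK);
        }
        process(in, out, BLOCK);                       // 10 ms in -> 10 ms out
        std::lock_guard<std::mutex> g(g_lock);
        g_spkFifo.insert(g_spkFifo.end(), out, out + BLOCK);
    }
}

void onSpeakerCallback(short* out, int frames) {       // driver asks for speaker data
    std::lock_guard<std::mutex> g(g_lock);
    int have = std::min<int>(frames, (int)g_spkFifo.size());
    std::copy(g_spkFifo.begin(), g_spkFifo.begin() + have, out);
    g_spkFifo.erase(g_spkFifo.begin(), g_spkFifo.begin() + have);
    std::memset(out + have, 0, (frames - have) * sizeof(short));  // pad underrun with silence
}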

If the callback pattern looks like what I've described above, where the speaker and microphone callbacks take turns, everything works out fine.

However, if the callback pattern is irregular, things start to get messed up. For example: a burst of microphone callbacks will drive the size of the speaker ring buffer through the roof. If I - for whatever reason - don't get the same kind of burst of speaker callbacks, I suddenly have large latency in the speaker path.

Another problem is if I get a burst of speaker callbacks. In that case the speaker ring buffer will run dry of samples and I'll have to return silence packets instead. 

So I'm wondering if there's some kind of standard solution for this type of problem? I can't think of any. 

The following link is an example of the callback pattern:

http://wikisend.com/download/143908/timestamps.txt

where '1' is microphone callback and '2' is speaker callback.