Hi Folks, following on from the series of articles on feedback controllers and applications, I am considering doing an article later in the year on Negative Latency DSP methods, which cancel out unavoidable delays due to anti-alias filters, computation time, reconstruction filters, etc. in sampled-data systems.
Unwanted latencies of, say, 200 ns may well be important in fast closed-loop applications, where they can impact performance or even stability. But are there any open-loop applications where, for example, an unwanted 200 ns delay is important?
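To give a flavour of what I mean by "negative latency", here is a minimal sketch of one of the simplest such methods: linear extrapolation of the input by a known delay. The function name and test signal are mine, purely for illustration; the article will cover more robust techniques, since this first-order predictor only works while the signal varies slowly relative to the sample rate and it amplifies high-frequency noise.

```python
# Hypothetical first-order "negative delay" sketch: extrapolate the input
# forward by `advance` samples using the local first difference,
#     y[n] = x[n] + advance * (x[n] - x[n-1]),
# to approximately cancel a known small processing delay.

def compensate(samples, advance):
    out, prev = [], samples[0]
    for x in samples:
        out.append(x + advance * (x - prev))  # linear extrapolation
        prev = x
    return out

ramp = [0.1 * n for n in range(8)]    # slowly varying test input
predicted = compensate(ramp, 2.0)     # attempt to cancel a 2-sample delay
print(predicted)  # after the first sample, tracks ramp[n + 2]
```

On a ramp the predictor lands exactly on the future value; on noisy or fast-changing signals the error term grows, which is why practical designs bandwidth-limit the predictor.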
In broadcast television production, the audio is handled separately from the video. This is evident in cases in which the delays of the audio filtering are not exactly matched to the delays in the video filtering. Have you ever watched a news broadcast, or even an entertainment show with much dialog, in which the lips of the speaker do not seem well-synchronized to the words you hear that person speak? The result of that mismatch is quite distracting!
Thank you for the observation. I don't imagine it's much of a problem adjusting the relative delay between the video and audio. I am thinking more of the case where a processed output signal must respond to an input signal without incurring unwanted delay.
Thousands of mobile phones in a cell have to send their signals in sync. The delay of each is measured by the base station, and the phone is asked to advance its timing accordingly.
Thank you for the example. As per my reply above, I am not so concerned with aligning multiple signals, e.g. to GPS, but rather with the unwanted latency incurred within sampled-data processing blocks, which in closed-loop applications can give rise to poor or unusable performance.
Thanks for the clarification.
I am sure that removing delay from a feedback loop could make the loop control more efficient and avoid excessive delay. As such, it may apply to almost any case in general.
As an example: I worked on an eye-tracking design for laser-based ophthalmoscopy (SLO followed by imaging of a selected area using OCT), and delay was a major issue, as image correlation would take 30-plus ms, yet humans can move the eye faster than that involuntarily. We aimed at 1 ms and managed it using FFT-based correlation. This made it possible to adjust the mirrors to offset eye movements during the part of the imaging process where we moved from SLO to OCT of the selected section.
Can negative delay work for delays of that length, and how is it implemented?
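For readers unfamiliar with the technique mentioned above, the FFT-based correlation step can be sketched roughly as follows. This is a toy 1-D version with names of my own choosing (the real system would correlate 2-D images, and a production FFT would come from an optimized library rather than this short recursive one):

```python
import cmath

def fft(x):
    # Recursive radix-2 Cooley-Tukey FFT; len(x) must be a power of two.
    n = len(x)
    if n == 1:
        return x
    even, odd = fft(x[0::2]), fft(x[1::2])
    tw = [cmath.exp(-2j * cmath.pi * k / n) * odd[k] for k in range(n // 2)]
    return [even[k] + tw[k] for k in range(n // 2)] + \
           [even[k] - tw[k] for k in range(n // 2)]

def ifft(X):
    # Inverse FFT via the conjugation trick: IFFT(X) = conj(FFT(conj(X))) / N.
    n = len(X)
    return [v.conjugate() / n for v in fft([x.conjugate() for x in X])]

def circular_xcorr(a, b):
    # Circular cross-correlation via the frequency domain:
    # IFFT(FFT(a) * conj(FFT(b))), O(N log N) instead of O(N^2).
    A = fft([complex(v) for v in a])
    B = fft([complex(v) for v in b])
    return [v.real for v in ifft([x * y.conjugate() for x, y in zip(A, B)])]

def estimate_shift(ref, moved):
    # The index of the correlation peak is the estimated displacement.
    c = circular_xcorr(moved, ref)
    return max(range(len(c)), key=lambda k: c[k])

ref = [0.0] * 16
ref[3] = 1.0
moved = [0.0] * 16
moved[8] = 1.0          # the same feature, shifted by 5 samples
shift = estimate_shift(ref, moved)
print(shift)            # 5
```

The speed-up over direct correlation is what makes millisecond-scale tracking feasible; the remaining latency is then dominated by acquisition and actuation rather than the correlation itself.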
Hi kaz, Thank you for that interesting and illuminating example.
The application that I have been involved with is a lot simpler (in some senses). Although there was an original proposal to use FFTs to generate a control signal, the consultant was unable to compute these quickly enough to avoid unacceptable delays in the loop. The solution was to use multiple IIR filters to build up the required transfer functions. Even then, the loop performance was limited by the ADC, DSP and DAC delays.
For my application, I ruled out FFTs as a practical solution.
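For anyone curious about the IIR approach, a transfer function built from cascaded second-order (biquad) sections processes each sample as it arrives, so the algorithmic latency is essentially zero beyond the converter delays. A minimal sketch is below; the class, the transposed direct form II structure, and the coefficients are illustrative choices of mine, not the filters from the actual application:

```python
# Sketch: building a transfer function from cascaded biquad IIR sections.
# Each section implements H(z) = (b0 + b1 z^-1 + b2 z^-2) /
#                                (1  + a1 z^-1 + a2 z^-2)
# in transposed direct form II, updating one sample per call.

class Biquad:
    def __init__(self, b0, b1, b2, a1, a2):
        self.b = (b0, b1, b2)
        self.a = (a1, a2)
        self.z1 = self.z2 = 0.0   # internal state

    def step(self, x):
        b0, b1, b2 = self.b
        a1, a2 = self.a
        y = b0 * x + self.z1
        self.z1 = b1 * x - a1 * y + self.z2
        self.z2 = b2 * x - a2 * y
        return y

def cascade(sections, x):
    # Feed the sample through each section in turn.
    for s in sections:
        x = s.step(x)
    return x

# Two identical gentle low-pass sections (illustrative, unity DC gain each).
secs = [Biquad(0.2, 0.4, 0.2, -0.3, 0.1) for _ in range(2)]
out = [cascade(secs, x) for x in [1.0] + [0.0] * 7]   # impulse response
print(out)  # out[0] == 0.04 (0.2 * 0.2)
```

Because the output of sample n depends only on samples up to n, the filter adds no block-processing delay, which is the property that made this route workable where FFT-based processing was not.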