Delay estimation by FFT

Markus Nentwig, September 22, 2007
Given x = sig(t) and y = ref(t), [c, ref(t+delta), delta] = fitSignal(y, x);
estimates and corrects the delay and scaling factor between the two signals.

Code snippet

This article relates to the Matlab / Octave code snippet: Delay estimation with subsample resolution
It explains the algorithm and the design decisions behind it.


There are many DSP-related problems, where an unknown timing between two signals needs to be determined and corrected, for example, radar, sonar, time-domain reflectometry, angle-of-arrival estimation, to name a few. It is not difficult to find solutions in the literature - a good starting point is for example the list of references in [1] - but for some reason things are never as easy in the lab as they look on paper.
This article describes an algorithm that has served me well in the past. It can cope with relatively long signals (millions of samples), works regardless of the absolute delay, positive or negative, and is relatively robust, accurate and fast.
For example, it can be used to line up baseband signals that represent input and output of a power amplifier, to extract distortion characteristics by coherent processing.

Use cases, advantages and disadvantages

  • Use as general-purpose tool for simulations and measurements to align input and output samples
  • For example: It has been used extensively for wide-band radio frequency measurements, which require very accurate input-/output alignment.
  • Not useful for real-time applications: the whole signal needs to be known in advance.
  • Not recommended for hardware or embedded implementation. Use a specialized method and optimize for the signal in question.
  • Meant as a robust lab tool that (usually) does what it's supposed to do without need for any further supervision.
  • Properly used, the accuracy can be orders of magnitude higher than typical "ad-hoc" solutions. It sure beats lining up signals with an oscilloscope.
  • Suitable for bandpass (highpass) test signals, signals with gaps in the power spectrum, and narrow-band interference
  • May fail as a result of excessive group delay variations (see Figure 6 for why).
    If so, my code snippet [5] may be a more reliable but slower and less accurate alternative.
  • I wouldn't recommend it for timing-recovery 'homework' problems where the estimation is the main objective (if so: For some down-to-earth introduction, see [4] ).
    That said, it will happily give arrival time, magnitude and carrier phase from a known preamble / pilot signal, for example to calibrate the delay in a link-level simulator.


In theory, any delay estimator is limited by the Cramér-Rao bound [2], which depends largely on the shape of the power spectrum and the signal-to-noise ratio. In the absence of noise and distortion - easily simulated in Matlab - numerical accuracy remains the main limiting factor, and for example nano-sample accuracy can be achieved.
No formal proof has been carried out. If needed, it is straightforward to set up an artificial example with known delay, or to compare real-world results with a "true" maximum-likelihood estimator such as [3] (which may, however, be less suited for everyday use).


Background: Cyclic signals

It is assumed that input- and output signals are cyclic. In other words, the observed signal period is one cycle of a repetitive test signal where the system is in steady state, with all transients settled. If needed, this can be approximated by zero-padding the signals in the time domain to arbitrary length.
The assumption of cyclic signals is quite important, as it allows one to convert freely between time- and frequency domain, and to evaluate the underlying continuous-time waveform anywhere between samples without loss of information besides arithmetic errors.
Note that the result can become heavily biased if either signal has been truncated, relative to the other one. This lies in the nature of the problem and is not related to this specific algorithm: The optimum delay is "pulled" towards the direction that maximizes the general overlap between waveforms.

Background: Delaying bandlimited signals

Delaying a signal by a fraction of a sample reveals additional points on the continuous-time equivalent waveform, in the same way as the ideal reconstruction filter found in the definition of the Nyquist limit. If the signal is zero at some sampling instants, this does not automatically mean that it is also zero between samples.
All too often, delaying an abrupt step will show a lot of "ringing" - the ringing has been there all the time, but remained invisible, as long as the signal was observed only on the original sampling grid.
The first Nyquist criterion guarantees that there will be no interaction from one sample to another - but it says nothing about the times in-between.

Figure 1: A pulse, its continuous-time equivalent (green) and the result of time-delaying
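The effect of Figure 1 is easy to reproduce numerically. The snippet below is an illustrative sketch in Python/numpy (the article's own code is Matlab/Octave): it delays a single unit pulse by half a sample via a frequency-domain phase ramp, revealing the ringing of the underlying periodic sinc that was invisible on the original sampling grid.

```python
import numpy as np

n = 64
x = np.zeros(n)
x[0] = 1.0                           # a single unit pulse on the sampling grid

f = np.fft.fftfreq(n)                # normalized frequency, cycles per sample
delay = 0.5                          # half a sample
y = np.fft.ifft(np.fft.fft(x) * np.exp(-2j * np.pi * f * delay)).real

# The pulse is now centered between samples 0 and 1; the surrounding
# samples show the ringing of the continuous-time waveform.
print(np.round(y[:6], 3))
```

The total signal energy at DC is preserved, and the two samples straddling the new pulse center are equal by symmetry.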


Figure 2 shows the algorithm in a nutshell:

Figure 2: Overview of the algorithm

Step 1: Coarse timing estimation by crosscorrelation

A well-known solution for estimating delay is to calculate the cross-correlation between the two signals. An efficient implementation calculates |ifft(fft(a) .* conj(fft(b)))|, using FFTs to evaluate the cyclic crosscorrelation.
A peak in the result indicates the integer-sample delay with best alignment.
Crosscorrelation provides the coarse estimate (integerDelay) in the code snippet. Figure 3 shows an example:

Figure 3: Coarse estimate by crosscorrelation

The coarse estimate for the delay is found by peak search. Unfortunately, most real-life delays don't fall onto an integer sampling grid.
In that case, the error can be made arbitrarily small by oversampling, in other words by zero-padding fft(a) and fft(b) at the center.
There are two drawbacks:

  • Timing resolution and FFT size scale proportionally, making high accuracy computationally expensive
  • The correlation peak is relatively shallow, and numerical accuracy becomes a bottleneck when searching for the maximum.
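The coarse step can be sketched as follows; this is an illustrative Python/numpy translation, not the snippet's actual Matlab code. The peak of the cyclic crosscorrelation, evaluated by FFT, gives the integer-sample delay:

```python
import numpy as np

def coarse_delay(sig, ref):
    """Integer-sample delay of sig relative to ref via cyclic crosscorrelation."""
    n = len(ref)
    xc = np.fft.ifft(np.fft.fft(sig) * np.conj(np.fft.fft(ref)))
    d = int(np.argmax(np.abs(xc)))
    if d > n // 2:
        d -= n          # map to a signed delay: cyclic, so "late" equals "negative"
    return d

# example: a cyclically delayed copy of a noise signal
rng = np.random.default_rng(0)
ref = rng.standard_normal(256)
d = coarse_delay(np.roll(ref, 17), ref)
```

Note the wrap to a signed range: because the signals are treated as cyclic, a delay near the end of the cycle is equivalent to a small negative delay.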

Step 2: Fine estimation via slope of relative phase

Instead, delay estimation is done in the frequency domain, where a time delay results in a linearly increasing phase shift and its slope indicates the delay. The slope will be extracted by linear regression.

Phase ambiguity

A delay of one sample causes a phase shift of pi at the highest frequency. For delays greater than one sample, the phase wraps around, as only the principal value of the angle of a complex number is known. This ambiguity needs to be resolved:

Fig. 4 illustrates the relative phase between both signals over frequency, assuming a delay of several samples.
The phase wraps around, which prevents finding its slope via linear regression.

Figure 4: Relative phase between signals for multi-sample delay

Step 2a: Coarse correction

Now the coarse delay estimate (Figure 3) is removed from the signal by phase-shifting.
As long as the remaining unknown delay is less than +/- 0.5 samples, the relative phase is now confined in a +/- pi window, and no phase wrapping will occur:

Figure 5: Relative phase between signals after coarse correction. Remaining delay is fractional-sample.
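Removing the coarse delay amounts to a cyclic shift, which can be applied exactly in the frequency domain as a linear phase ramp. The helper below is an illustrative Python/numpy sketch (the function name is mine, not from the snippet); the same routine also handles the fractional shift needed later:

```python
import numpy as np

def cyclic_shift(x, d):
    """Delay x cyclically by d samples (d may be fractional) via a phase ramp."""
    n = len(x)
    f = np.fft.fftfreq(n)    # normalized frequency, cycles per sample
    y = np.fft.ifft(np.fft.fft(x) * np.exp(-2j * np.pi * f * d))
    return y.real if np.isrealobj(x) else y
```

For integer d this reproduces an ordinary cyclic rotation of the samples; for fractional d it evaluates the underlying bandlimited waveform between the original sampling instants.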

This approach will fail if frequency-dependent group delay variations cause the phase to wrap at some frequency.
This may happen for example when dealing with a multipath radio channel: "Determining the delay" is not a well-defined problem, once the group delay varies with frequency.
In this case, phase unwrapping may be able to correct the problem, as long as it does not introduce any unwrapping errors. If necessary, set enableUnwrap = true (by default disabled).

Figure 6: Failure by wrap-around from group delay variations

When dealing with transmitters, receivers, test equipment and the like, the signal chain is usually designed with a constant group delay in mind, and the method is quite reliable. With channel models or signals on the band edge of a filter, your mileage may vary.

After coarse correction, the phase should not wrap around anymore, and linear regression will correctly determine its slope.

Step 2b: Weighted phase

At any frequency, the relative importance of the phase depends on the power of both signals: the phase is meaningless if no power was received, and equally meaningless if no power was transmitted (the received bin then contains only noise).
Therefore, the phase is scaled with the product of signal power at each bin.

Step 2c: Linear regression

A least-squares solution is found by projecting the phase on a basis formed by a constant term and a linear-phase term, the latter corresponding to a unit delay. As the phase has been weighted, the same weights are applied to the basis.
The resulting coefficient of the linear-phase term is the remaining delay in samples, which is summed with the coarse delay to give the final delay estimate.
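Steps 2b and 2c can be sketched together. Again this is an illustrative Python/numpy version, assuming the coarse delay has already been removed so the remaining delay is below half a sample; here the product of the bin magnitudes is used as the weight, which may differ in detail from the snippet's exact weighting:

```python
import numpy as np

def fractional_delay(sig, ref):
    """Remaining sub-sample delay of sig vs. ref via weighted phase-slope regression."""
    n = len(ref)
    S = np.fft.fft(sig)
    R = np.fft.fft(ref)
    f = np.fft.fftfreq(n)               # normalized frequency, cycles per sample
    phase = np.angle(S * np.conj(R))    # relative phase per bin, within +/- pi
    w = np.abs(S) * np.abs(R)           # step 2b: weight with the bin magnitudes
    # step 2c: weighted least squares on  phase ~= c0 + delta * (-2*pi*f)
    basis = np.vstack([np.ones(n), -2.0 * np.pi * f]).T
    coeff, *_ = np.linalg.lstsq(basis * w[:, None], phase * w, rcond=None)
    return coeff[1]                     # delay in samples
```

The same weights multiply both the phase and the basis, so bins with little power in either signal contribute almost nothing to the fit.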

Step 2d: Scaling factor

Finally, one signal is delayed by the delay estimate, and projected onto the other signal. A scaling factor c results.
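The projection in step 2d reduces to a one-coefficient least-squares fit; a minimal sketch (the function name is mine):

```python
import numpy as np

def scale_factor(sig, shifted_ref):
    """Least-squares scaling factor c such that c * shifted_ref best matches sig."""
    # <shifted_ref, sig> / <shifted_ref, shifted_ref>, valid for real or complex data
    return np.vdot(shifted_ref, sig) / np.vdot(shifted_ref, shifted_ref)
```

For complex baseband signals, c carries both the magnitude ratio and the relative carrier phase.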

Step 3: Return values

The function returns the delay estimate, the scaling factor c and a replica of the second signal that has been shifted and scaled to match the first signal.

Advanced use

Some comments on using the function in real-world applications:

Error vector

The shifted / scaled replica of the second signal can be subtracted from the first signal. The residual can be considered "vector error", or signal energy in an output signal that cannot be "explained" by the input signal. It may reveal unwanted artifacts even when they are buried below a strong wanted signal, for example extracting the noise floor from a modulated signal.

Error vector spectrum

Especially the spectrum of the error vector can lead to interesting insights. My spectrum analyzer code snippet can be a useful tool to investigate:
Figure 7 shows an example of actual measurement data on a radio frequency amplifier:

Figure 7: Example vector error spectrum

In short, the green trace shows the input signal to a power amplifier, and the black trace the output signal. Subtracting the time-aligned and scaled input from the output reveals distortion products both within and outside the wanted channel bandwidth of +/- 5 MHz. The in-channel distortion becomes visible only after removing the wanted signal itself.
In the lower picture, an artificial delay of 7 ns was introduced before subtracting the signals. The delay mismatch causes an error that obscures the distortion products, particularly at higher frequencies. This example illustrates the need for accurate delay compensation in wideband measurements.
If the error vector spectrum increases with frequency in a V-shape, it is a sign of delay misalignment or frequency mismatch between radio frequency carriers. A typical cause is a measurement instrument that returns n+1 samples for a time period that should yield only n samples. The same happens if the radio frequency generator and analyzer don't use a common reference clock.

Phase unwrapping?

Finally, some words on phase unwrapping, which is a commonly used solution in frequency domain delay estimation:
Detect where the relative phase between neighboring samples changes by more than pi, and correct with a multiple of 2 pi:

Figure 8: Phase unwrapping

If the signal-to-noise ratio over the unwrapping range is consistently high, all is well.
Unfortunately, a single noisy frequency sample within the signal bandwidth can cause an unwrapping error, leading to catastrophic failure.
Typical causes are:

  • AC coupling or DC offset compensation, i.e. removal of carrier leakage around 0 Hz in a radio receiver
  • Multi-channel signals with guard bands between channels
  • Signals with narrow-band test tones (sine waves)
  • Narrow-band interference inside a wideband signal

Phase unwrapping is implemented in the code snippet, but disabled by default. Set enableUnwrap = true to turn it on, and set enableCoarseCorr = false to disable the coarse correction and rely solely on phase unwrapping.
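The unwrapping rule of Figure 8, detecting a jump of more than pi between neighboring samples and compensating with a multiple of 2 pi, can be sketched as follows (an illustrative Python/numpy version; it matches what numpy's np.unwrap does with default arguments):

```python
import numpy as np

def unwrap_phase(p):
    """Remove 2*pi jumps between neighboring samples of a phase sequence."""
    p = np.asarray(p, dtype=float)
    out = p.copy()
    offset = 0.0
    for i in range(1, len(p)):
        step = p[i] - p[i - 1]
        if step > np.pi:
            offset -= 2.0 * np.pi   # wrapped upwards: shift subsequent samples down
        elif step < -np.pi:
            offset += 2.0 * np.pi   # wrapped downwards: shift subsequent samples up
        out[i] = p[i] + offset
    return out
```

The correction accumulates, which is exactly the failure mode described above: a single noisy bin that fakes a jump larger than pi corrupts every following sample.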


References

[1] Y. Zhang, W. H. Abdulla: A Comparative Study of Time-Delay Estimation Techniques Using Microphone Arrays

[2] Wikipedia: Cramér-Rao Bound

[3] Moddemeijer, R., An information theoretical delay estimator

[4] Signal Processing in Telecommunications 2: Course notes

[5] Code snippet: Delay estimation revisited

Comment by joncas, October 2, 2007
Hello Markus, i was wondering if something similar to your program could be used in an ultrasonic anemometer to resolve the wave-length ambiguity in determining the time between the emitted ultrasound wave and the received signal. By measuring the phase shift of the received signal from the zero crossing samples, I get quite accurate values within one wave length but there is no obvious way to determine the number of pulses of the delay. Due to the inertia of the mechanical transceiver, there is no sharp rise in amplitude...
Comment by mnentwig, October 2, 2007
Hello, yes, that should work (depending on the pulse and its autocorrelation properties). Can you provide some sample data for transmitted and received signal? It might be a nice example.
Comment by joncas, November 4, 2007
many belated thanks! (it would be nice if there were some way to have an option of receiving an email update when there is a new entry in the log)
Comment by joncas, October 3, 2007
Hello Markus the autocorrelation is where the difficulty lies. Whatever modulation one tries to impress on the sender signal (phase reversal or starting and stopping after a short pulse train), the reaction in the receiver is always delayed by the mechanical inertia. If there is more cross-wind, then less energy would be transferred and the delay is longer. Here are two pictures of the waves seen on an oscilloscope: http://www.flashgenie.net/anemometer50uS.gif http://www.flashgenie.net/41kHzPulse.gif (the upper wave is the receiver in each picture, the lower one the sender, which has some ringing.) The best method would appear to be to renounce directly determining the number of pulses that have elapsed and instead use your algorithm to make a more accurate measurement of the exact phase shift of the received signal, and to repeat the measurement at a slightly different frequency. The difference between the two phase shift values would then allow one to resolve the ambiguity of the number of pulses. If one used 39 kHz and 41 kHz, for example, one could resolve the ambiguity within 20 pulses, which would be enough for an anemometer where the transducers are not too far apart. In the present experimental circuit I was only measuring the zero-crossing times of the received signal, because the microprocessor can do that to within 10 ns, but there is too much jitter in the received signal to obtain sufficiently accurate calculations. If one used an AD-converter one could measure ca 25 analog values on each pulse and use your calculation method to find the best fitting phase-shift, right ?
Comment by mnentwig, October 8, 2007
Hello, sorry, it took me a while, busy week... I think it would work just fine, but it's maybe overkill for the given problem. If your hardware is able to transmit and receive a 39 kHz / 41 kHz two-tone signal, the common cycle length is 1 ms. For example, I could multiply the received signal with two complex-valued NCOs at -39 / - 41 kHz, and integrate both (complex) over 1 ms. Afterwards, it should be possible to map between the phase angles of both complex sums and the delay. How to implement that mapping is another question, but I'm sure it will require much less CPU cycles than the FFTs. I have a C example program on my disk that could be used to put together a prototype with a soundcard and microphone (of course at audio frequencies). Please let me know, if interested. Cheers Markus
Comment by joncas, October 13, 2007
Hello Markus yes, i would definitely be very grateful to try your idea to get me out of the present dead-lock! My mathematical knowledge in this area is very limited... To be able to follow you, I would need to understand a little better what this is about -- could you point me to any literature ? Right now I am using a pwm signal to alternately send 39 kHz and 41 kHz pulse trains. To reduce ringing in the rest of the circuit, I send the signal through a simple sine-filter. The ultrasonic transducers have a rather narrow resonant frequency, so it is possible there is some beat between the pwm frequency and their own resonant frequency.
Comment by mnentwig, October 24, 2007
Hi, here is a program that you could use as basis for a prototype with a soundcard, uses portaudio library: http://www.elisanet.fi/mnentwig/webroot/looprecord/index.html I found also another program, I wrote this a while ago to measure the acoustic path length with an arbitrary test signal and FFT-based crosscorrelation (requires fftw3). It uses simple oversampling, not the more complicated (and accurate) method in the blog article. http://www.elisanet.fi/mnentwig/dspblog/fft_align/portaudio_example.c
Comment by SammySmith, November 7, 2007
Hello Markus, I want to get a frequency independent phase shift for my baseband signal. This phase shift should be something like 20,30,40 and so on degrees. I was thinking of using a FFT on the baseband and then altering the phase information somehow without changing the magnitude and then taking the IFFT to get a phase shifted signal. The goal being that each frequnecy component of the input should be shifted by the same amount of degrees. Do you know how can I change the phase information of the FFT signal , so that when I take an IFFT it results in the correct ouput. Or if you have some other suggestions please do let me know Thanks
Comment by mnentwig, November 8, 2007
>> I want to get a frequency independent phase shift for my baseband signal Right, you can take FFT and multiply each frequency component individually with exp(i angle). At least it's going to work as long as there is a full cycle length of the (periodic!) signal inside the FFT. An alternative solution in time domain for nonperiodic signals could use a Hilbert transform filter, which gives 90 degree phase shift for bandpass signals. As the lower corner frequency approaches 0 Hz, the filter length grows to infinity, though. It might be a good idea to ask the question on comp.dsp.
Comment by rashdk, November 27, 2007
hi markus . I read two waveforms (acoustic) picked up by mics as a .wav file in matlab using wavread command. One of these is reference (ref in ur code) and other is signal..however running ur code with this shows the following error :- "??? Error using ==> run Error using ==> times Matrix dimensions must agree." I cant decipher the error as of now.. please guide.. regards dks
Comment by rashdk, November 27, 2007
hi markus , the problem seems to be coming with the following line of code in ur code :- " u=u .* rotN;" I am still trying to figure out. dks
Comment by mnentwig, November 28, 2007
Hello, you could compare size(u) and size(rotN). Probably wavread gives a column vector, but it should be a row vector. If so, simply transpose the data and it should be fine. Also, check whether the file is one-channel audio..
Comment by rashdk, November 29, 2007
hi markus.. I did transpose my input data and the system as of now works ..i am yet to try it on field data to check the accuracy of delay estimation..The field data through a software has already estimated time delay between various received waveforms..As of now , I have to manually clip the waveform from the software ( Adobe Audition ) and then read it in matlab..do u know of any means to directly read it from the software.. Ur guidance has been of great help in reaching the final goal..thanks always ..n regards..dks
Comment by rashdk, May 5, 2008
hi markus i have sent 2 mails at ur website mail address and have been anticipating ur reply..I have sent these mails from my mail id : rashdk@iitk.ac.in. In trying to compile ur above program (the advanced version from ur website) , i get an error in compilation for as : Error using ==> vertcat CAT arguments dimensions are not consistent. Error in ==> delay at 78 r=transpose([constRotPhase; uDelayPhase])\transpose(ang); i have tried resizing but to no avail.. could u suggest ways to oversome this .. rashdk
Comment by rashdk, June 11, 2008
hello markus i tried ur progrm in my set up of 2 mics (placed 3 m each other ) and recorded a gunshot at distance of 60 m. The cross corr peak and hence the time delay that i get is in samples ?? so if i divide that by sampling freq , shouldn't i get the time delay of arrival at farther mic ...the progrma as of now yields this in seconds !!!! can u suggest me the errors..
Comment by oliht, November 11, 2008
Hi, I want to do the delay shift in the time domain. But obviously I cannot shift 0.5 samples in time domain. So I thougth to do oversampling. But to perform the fft I need to down sample again and the shift is either 0 or 1. Does anyone have an idea? Thanks! oliht
Comment by mnentwig, November 11, 2008
Hi, please have a look at this other page: http://www.dsprelated.com/showarticle/22.php One straighforward solution is to design an FIR filter that has a delay offset of 1/2 sample. The filter will do some "damage" to the signal, that can't be avoided (but reduced by using a longer filter). Compared to the "polyphase" filter in the above page, you're only interested in a "single" phase. So you'd use only the "red" (or blue or green, those that match your 1/2 sample delay offset) filter coefficients.
Comment by alex65111, March 14, 2009
Hello Markus, Why in the program fft_align.m for definition integerDelay the maximum of correlation function is used, instead of a phase slope in frequency domain? Thanks!
Comment by mathu_09, December 14, 2009
Hi Markus If there is a Frequency shift,and the sequence is a Zadoffchu_sequence ,Where the peak will be located ,And i think the peak position is dependent on (U - logicalrootsequence) which is used to generate the zadoff_chu sequence please help in undestanding this this is the psuedocode U = 140; for n= 0:838 conv_result(n+1) = exp((-j*pi*U*n*n+1)/839)); shift_result(n+1)= conv_result(n+1)*exp((j*2*pi*6*n)/839); end [corr lag] = xcorr(shift_result,conv_result); Is ther any thing iam missing here iam not able to mathematically locate the peakposition . thanks
Comment by farisflowers, January 31, 2011
Hi Markus Did u know about reconstruction of image using Fourier transform?in matlab..
Comment by ramack, April 22, 2012
Hi, What would the effect be if the two signals are swapped, ie the time shifted signal has no delay and the ref. signal is delayed. Will that result in a change of sign in the delay value?
Comment by mnentwig, April 23, 2012
Yes, exactly. You'll get the negative delay and the conjugate of the scaling factor. There is no causality problem, as the signals are treated as cyclic. A negative delay "d" is simply a full signal (FFT) length minus "d".
Comment by ramack, April 30, 2012
Hi Markus, I must be doing something wrong with my code. It's about 100 lines, so I don't want to post it here. But in order for the delay to be correct (sign and value), I have to know which signal is received first. In my case I am estimating the time delay between two signals, not knowing which will arrive first. Time delay estimation in the time domain works as expected (xcorr(sig2,sig1)) and accounts for the sign. Part of my relevant file is: xcorr_fft = ifft(fftsig2.*conj(fftsig1))'; correltime = [ 0 : 1/fs : (((length(xcorr_fft))-1))/fs ]'; [maxvalue, maxposition] = max(abs(xcorr_fft)); % To estimate time delay if sig1 arrives before sig2 time_delay12 = (maxposition)/fs; %fs is my sampling frequency % To estimate time delay if sig2 arrives before sig1 (NFFT1 is the length of sig1) time_delay21 = (NFFT1-maxposition)/fs; I'm having trouble understand how to find the sign. I can send you png plots showing both cases which might be helpful.
Comment by ramack, May 21, 2012
I realized where I'm making my mistake... I talked to a friend this morning, and through our conversation I realized that I need to be taking my reference from the center instead of the edges as I had been. I haven't updated my script, but I'm pretty sure that's my problem. Rich
Comment by mnentwig, May 22, 2012
Hello, good that you got it sorted out. If possible, try to use a test signal that is zero for a while, then a "burst", then zeros again. That should give you most accurate results. Otherwise, if I'd cut a window out of continuous signals, the timing of the window would be inaccurate (as it's the timing relationship I'm trying to find out in the first place). This biases the result randomly.
Comment by ramack, May 23, 2012
Well, I thought I had it sorted out! So I'm back to where I was prior to my conversation with my friend. He was thinking of the center reference issue as being my problem in the frequency domain, but it's was not. My scripts work very accurately in the time domain with xcorr() and accounts for negative delays if my sensors receive data in reverse order. The delay is correct sign and value using FFT cross-correlation if the signals are received in the order that I expect. But the delay is incorrect if the arrivals are reversed. I think I'm going to have look at determining the phase slope to account for the unknown sensor arrival order. Most of the applications I have found that use the FFT cross-correlation, the systems inherently have control of the sensor arrival order, IE pulse-echo systems like radar, SONAR etc. Right now, my simulated signals are sinusoidal pulses
Comment by mnentwig, May 23, 2012
Hello, I should point out that the signals are treated as cyclic. As long as there is no signal energy across the "loop point" (start/end), this should not matter. You'll simply get a negative number, up to half the cycle length. If you can post example signals somewhere, I could have a look.
Comment by ramack, May 24, 2012
Passing on some info that found yesterday. I was curious how Matlab and Octave performed cross-correlation, so I found xcorr.m, opened it and found that xcorr() actually is calculating it in the frequency domain. % XCORR produces an estimate of the correlation between two random % (jointly stationary) sequences: % C(m) = E[A(n+m)*conj(B(n))] = E[A(n)*conj(B(n-m))] I found this thread while searching for methods of estimating time delay in the frequency domain, but it appears that xcorr() is already doing this. So for my particular application, I'll continue to use the function xcorr(), however I would like to fully understand it's approach via the FFT. Rich
Comment by mnentwig, May 27, 2012
Hello, maybe I was able to reproduce the arrival time problem: Is it possible that one of your signals is negated? If so, things will break if the correlation peak in the coarse estimation part has a negative sign. Try to multiply one signal with -1, unless you're sure it's correct. An improved version (experimental) can be found here: www.dsprelated.com/blogimages/MarkusNentwig/dspblog/fft_align/mn_fitSignal_FFT_120528.m It will detect the case in question and correct it. Right now, plots are enabled. "Sanity checks" are included and maybe stricter than necessary.
Comment by emeto, June 6, 2012
Hello Markus, I have to estimate the delay of 3 OFDM frames sent from 3 different synchronised cells. The receiver receives them as one mixed signal. Three reference signals are created at the receiver and they are not a full replicas of the sent ones. They consists only of known parts of the sent frames like PRS (Positioning Reference signal), CRS, P-Sync, S-Sync. Matlab:: Transmitter: 3 vectors with IQ data Channel: shift(delay) each vector and combine them in one Receiver: - one mixed vector with IQ data - 3 reference vectors - delay estimation I have tried your program (without noise) and there were various deviations from the true delays. I think there are deviations because the reference signals are not full replicas of the transmitted ones and the received signal is a combination of the three. I want to ask you if there is a way to improve the accuracy in this case, because I need these delays for positioning estimation after that. I found a paper about this topic, but the method described in this paper estiamtes the delay only at sample level and not at subsample level (I hope I understood right section 3.2. Maximum Likelihood Estimation, Fig.2) http://spcomnav.uab.es/docs/conferences/DelPeral_GNSS_Signals_2011.pdf Thank you in advance! Emi
Comment by mnentwig, June 7, 2012
Hello, >> because the reference signals are not full replicas of the transmitted ones that sounds like a likely problem. If the "overlap" between signals changes with the delays, it will create a bias on the estimate - it gets pulled towards the direction that maximizes the overlap. What I'd do to debug the problem is use the method from the paper and simply oversample the signal to the accuracy you need. I copy / pasted an example here: http://www.dsprelated.com/blogimages/MarkusNentwig/comp.dsp/oversampledXcorr_120608.m This calculates ifft(fft(s2) .* conj(fft(s1))) and looks for the peak.
Comment by emeto, June 10, 2012
Hi Markus, Thank you very much. Your example works great now.:) Emi
Comment by mnentwig, June 10, 2012
Hello, I put an automated version of the crosscorrelation method here: http://www.dsprelated.com/showcode/288.php It's more robust for problems where the group delay response isn't well-behaved (in other words, "the" delay varies substantially over frequency). But I'd use it only where the above method fails.
Comment by sig_pro, June 14, 2012
I have a question. How to estimate the error of time delay using your method? Could you give us an example about the error estimate code? Thank you!
Comment by mnentwig, June 14, 2012
Hi, I'd use known test signals with an appropriate power spectrum. You can find example code in the other delay estimation code snippet, here: http://www.dsprelated.com/showcode/288.php For the theoretical limit, I'd search for "Cramer Rao Bound; time-of-arrival (TOA)". Reference [1] in http://ens.ewi.tudelft.nl/pubs/yiyin09icassp.pdf looks promising, for example. What I have done occasionally is run the algorithm from the other code snippet in parallel, and compare. It is more robust to group delay variations, but otherwise less accurate from numerical precision.
Comment by emeto, June 17, 2012
Hello , I have to determine the error of time delay and the theoretical limit (CRLB). Is there a difference betwen CRLB of only one OFDM symbol and the CRLB of an OFDM frame. How can I determine the CRLB of one frame??? I think the formula in this paper is only for one symbol (I am not sure): http://spcomnav.uab.es/docs/conferences/DelPeral_GNSS_Signals_2011.pdf Thank you!
Comment by mnentwig, June 17, 2012
I'd have a look at eq. 2-4 and ask myself, what power spectrum would I expect for an OFDM symbol, and how does this differ from a whole frame.
Comment by emeto, July 3, 2012
Hi Markus, I have tried to estimate the time delay of a signal in this way: create OFDM signal - delayed the signal using oversampling (time domain)- estimate the integer delay using cross-correlation-compesate the integer delay (time domain)-FFT- %receive signal (rec_sig) % reference signal (ref_sig) channel = rec_sig./ref_sig; phase = unwrap(angle(channel)); figure (1) hold off % subcarrier (sc) plot (sc(:,1),phase,'*') grid % fit a polynomial p1x +p0 , p1 is the slop c = polyfit(sc(:,1),unwrap(angle(channel)),1) hold on plot(sc(:,1),c(1)*sc(:,1)+c(2),'r') xlabel('Bin') ylabel('Phase [rad]') The fractional time delay can be estimated from the slope of the polynomial. Results: without noise delay 1.5 samples deviation = 0.0099 samples (in my case ~ 1,5 m) delay 0.25 deviation = 0.0043 samples (0,6 m) Do you think this idea is good? Can I improve somehow the accuracy ? Thank you!
Comment by mnentwig, July 3, 2012
Hello, if it works, it will be fine. You'll run into problems if your signal is bandpass type (for example two channels separated by a guard band). In this case, the little signal energy in the stopband gets overwhelmed by noise and the phase unwrapping fails. I've got this option in my code, too (set the unwrap flag in the first couple of lines). You can improve the accuracy if you weight by the magnitude (of both the original and the received signal). Without noise, it can reach nanosample accuracy.
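The magnitude weighting suggested above can be sketched as follows (a hypothetical Python/numpy illustration, not the blog's Matlab code): fit a straight line through the origin to the cross-spectrum phase, weighting each bin by |ref|·|rec| so that noisy low-energy bins barely contribute. The slope in rad/bin then maps to a delay in samples.

```python
import numpy as np

def phase_slope_delay(ref_sig, rec_sig):
    """Fractional delay (in samples) of rec_sig vs. ref_sig from the
    magnitude-weighted phase slope of the cross-spectrum."""
    n = len(ref_sig)
    x = np.fft.fft(rec_sig) * np.conj(np.fft.fft(ref_sig))  # cross-spectrum
    k = np.fft.fftfreq(n, d=1.0 / n)   # signed bin indices -n/2 .. n/2-1
    w = np.abs(x)                      # weight = |ref| * |rec| per bin
    phase = np.angle(x)                # valid without unwrapping for |delay| < 1
    # weighted least-squares fit of phase = slope * k (line through origin)
    slope = np.sum(w * k * phase) / np.sum(w * k * k)
    return -slope * n / (2.0 * np.pi)

# check with a known fractional delay applied in the frequency domain
rng = np.random.default_rng(0)
ref = rng.standard_normal(256)
k = np.fft.fftfreq(256, d=1.0 / 256)
F = np.fft.fft(ref) * np.exp(-2j * np.pi * k * 0.4 / 256)
F[128] = 0.0                           # drop the non-Hermitian Nyquist bin
rec = np.fft.ifft(F).real
print(phase_slope_delay(ref, rec))     # ~0.4
```

For larger delays, the integer part should be removed first (for example by crosscorrelation), exactly as described in the comment above, since the per-bin phase wraps once the delay exceeds one sample.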
Comment by mnentwig, July 7, 2012
Hello, as the fifth anniversary of this blog entry draws nearer, I have decided to rewrite it with more information on the "why". I hope it's useful; please let me know if you spot errors. Cheers, Markus
Comment by emeto, August 23, 2012
Hello Markus, my simulation generates an OFDM signal and delays it in the channel by delay = integer delay + fractional delay. Before removing the CP, I compensate the integer delay in the time domain. After that, I want to position my FFT window to start from some point within the CP. For example, with CP length = 40, the FFT window starts at CP sample 30. I want to reduce the ISI in a scenario with more than one received signal, each with a different delay. Unfortunately, there is a phase offset between the symbols after this FFT window positioning operation. I cannot understand what causes this problem. Do you have an idea where the problem could be? Thank you!
Comment by mnentwig, August 23, 2012
Hi, yes, that's exactly what I'd expect to happen. Offsetting the FFT window is, after all, a time delay for the subcarrier waveforms. The picture http://www.dsprelated.com/blogimages/MarkusNentwig/comp.dsp/IMG_2811.JPG shows one OFDM symbol with its CP, and a single subcarrier waveform in the time domain. Seen by the early or late window, the subcarrier starts at a different phase "phi1" or "phi2". If you offset the window by some amount relative to the symbol body, you can compensate by cyclically shifting the signal by the same amount (i.e. "circshift" in Matlab).
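The window-offset effect described above can be demonstrated in a few lines (an illustrative Python/numpy sketch, not from the original post): starting the FFT window early inside the CP cyclically shifts the symbol body, which puts a linear phase ramp on the subcarriers; a compensating cyclic shift (circshift in Matlab, np.roll here) restores the original phases.

```python
import numpy as np

N, cp = 64, 16
rng = np.random.default_rng(1)
X = np.exp(2j * np.pi * rng.random(N))       # unit-magnitude subcarrier symbols
body = np.fft.ifft(X)
tx = np.concatenate([body[-cp:], body])      # prepend cyclic prefix

early = 10                                   # FFT window starts 10 samples into the CP
win = tx[cp - early : cp - early + N]        # still ISI-free, but shifted
Y = np.fft.fft(win)

k = np.arange(N)
# early window = cyclic shift of the body -> phase ramp exp(-2j*pi*k*early/N)
print(np.allclose(Y, X * np.exp(-2j * np.pi * k * early / N)))   # True

# compensate with a cyclic shift before the FFT
Y_fixed = np.fft.fft(np.roll(win, -early))
print(np.allclose(Y_fixed, X))               # True
```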
Comment by emeto, August 26, 2012
Thank you for your fast answer. It works fine with "circshift" :)
Comment by vahid_farsi, December 14, 2012
Hi Markus,

If we process the received data on a computer and can easily sample the received signal at a higher rate, is it then still important to worry about fractional delays?

And one more thing: would you please tell me whether it is possible to determine the distance and angle of a pinger with a known ping (18 kHz) using 3 hydrophones located 120 degrees apart in space? If you wanted to do that, which TDE method would you use?

Thanks so much
Comment by mnentwig, December 14, 2012

if oversampling does the job and can be done with reasonable effort, it may well be enough. It is a common solution. If not feasible, I'd probably try something like the "iterDelayEst" subroutine in the downloadable Matlab code snippet.

What you should keep in mind is that the above method tends to lock onto the strongest reflection of your signal, not the first one. Maybe this is not what you want, if you intend to measure the shortest path.
Comment by Zeyad_Zeyad, July 16, 2018

Hi Markus,  

I have read your post and the post on "Signal fitting with subsample resolution". Are you talking about fractional sampling?

Actually, I need to implement fractional sampling for OFDM. As far as I know, after fractional sampling we can have multiple outputs instead of one. For example, if y = x * h + v, where y is the output signal, x is the transmitted signal, h is the channel, and v is the noise, then after fractional sampling we should have yg = x * hg + vg, where g is the oversampling ratio. If g = 2, we are supposed to have y1 = x * h1 + v1 and y2 = x * h2 + v2, where y1 and y2 are the polyphase components of the signal y.

But when I build the code, I get strange results. Do you have any idea how to implement fractional sampling on OFDM? Thanks

Comment by mnentwig, July 16, 2018


for OFDM there may be a shortcut: you've got the pre-IFFT signal, so simply zero-pad the center of the spectrum in the frequency domain to e.g. twice the size for 2x upsampling. This takes the least coding effort but may be too expensive / power hungry in implementation.
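The zero-padding shortcut can be sketched like this (a hypothetical numpy illustration; for a real-valued signal the Nyquist bin has to be split when the spectrum is widened):

```python
import numpy as np

def upsample2_fft(x):
    """2x upsample a real signal by zero-padding the center of its spectrum."""
    n = len(x)                                  # assumes even length
    X = np.fft.fft(x)
    Y = np.zeros(2 * n, dtype=complex)
    Y[: n // 2] = X[: n // 2]                   # positive frequencies
    Y[n // 2] = X[n // 2] / 2                   # split the Nyquist bin ...
    Y[2 * n - n // 2] = X[n // 2] / 2           # ... between both halves
    Y[2 * n - n // 2 + 1 :] = X[n // 2 + 1 :]   # negative frequencies
    return 2 * np.fft.ifft(Y).real              # factor 2 preserves amplitude

t = np.arange(32)
x = np.cos(2 * np.pi * 3 * t / 32)              # 3 cycles in 32 samples
y = upsample2_fft(x)
print(np.allclose(y[::2], x))                   # True: original samples preserved
```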

For polyphase filter design, you might have a look here: https://www.dsprelated.com/showarticle/156.php

You'll need only one 2-up stage (insert zeros, then lowpass-filter). Depending on your system "numerology", there may be little excess bandwidth separating the signal from the next alias zone, so this isn't necessarily easy.

One more note: this filter causes time dispersion of the OFDM symbols by the filter pulse shape. This eats into the cyclic prefix "budget" when symbols leak into each other in time (ISI). The FFT method avoids this because the upsampling happens before CP insertion.


Comment by Gerard123, December 8, 2020

Hello Mr Markus,

I have read your post about "Delay estimation by FFT". I'm working on acoustic signals on a smartphone. I want to calculate or estimate the time-varying delay between the sender (a speaker) and the receiver (two microphones). I want to apply the cross-correlation to determine the delay. Can the shared "fitSignal_FFT" code work for this? I've tried the code but I'm still not satisfied with my result; maybe I didn't load the data correctly, or I used the code wrongly. Can you please help me input the audio data and calculate the delay correctly? This is my code:

ph = (0:15) * 2 * pi / 16;
ref = sin(ph);
fid = fopen('F:\fast\audio1.pcm');
sig = fread(fid, inf, 'int16');
fclose(fid);
[coeff, shiftedRef, delta] = fitSignal_FFT(sig, ref);
figure(); hold on;
plot(ref, 'k+-');
h = plot(sig, 'b+-'); set(h, 'lineWidth', 5);
plot(shiftedRef, 'g+-')
legend('reference signal', 'signal', 'reference shifted / scaled to signal');
title('fitSignal\_FFT demo');

Your help would be greatly appreciated. Thank you very much.

Comment by mnentwig, December 8, 2020


there are quite a few things to comment on here.

First of all, a sine wave is not a good signal for your purpose (unless you're content to operate within one wavelength; if so, the appropriate method is simply to multiply-accumulate with sine and cosine, then atan2() to get the phase).
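The multiply-accumulate approach for a single sine can be sketched as (an illustrative Python example; frequency, rate and delay values are made up): correlate with sine and cosine at the known frequency, then atan2 gives the phase, which maps to a delay within one period.

```python
import numpy as np

def sine_delay(x, freq, fs):
    """Delay of a sine of known frequency, in samples, modulo one period."""
    n = np.arange(len(x))
    i = np.sum(x * np.cos(2 * np.pi * freq * n / fs))   # in-phase sum
    q = np.sum(x * np.sin(2 * np.pi * freq * n / fs))   # quadrature sum
    return np.arctan2(q, i) * fs / (2 * np.pi * freq)

fs, freq = 48000.0, 1000.0          # 1 kHz tone, period = 48 samples
d_true = 7.25                       # delay within one period
n = np.arange(4800)                 # capture an integer number of periods
x = np.cos(2 * np.pi * freq * (n - d_true) / fs)
print(sine_delay(x, freq, fs))      # ~7.25
```

Capturing an integer number of periods makes the double-frequency terms in the correlation sums cancel exactly, which is why the estimate comes out clean.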

The "fitSignal" algorithm is conceptually not useful with a sine signal, and it becomes quite obvious why if you go through the steps, knowing that most of the spectrum holds no information (the least-squares fit stage stops making sense).

Theory says a white spectrum is best (uniform power density, see Cramér-Rao for the math) but there's one problem: It will fry your speaker. Been there, got the T-shirt, and up in smoke went my trusty old Nokia Communicator, a long time ago in a galaxy far, far away. 

So if you want to use the fitSignal function, you should ultimately aim for a pink(ish) noise signal, which is closer to what speakers are designed to endure, and also more ear-friendly. Transmit a periodic signal (say, periodic in 100 ms) and capture an exact period length. Then the algorithm should work, at least if you use one full-duplex sound device that runs Tx and Rx from the same crystal.
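One way to generate such a periodic pink-ish probe (a hypothetical numpy sketch; the band edges and block length are made-up parameters): give random-phase spectral lines a 1/sqrt(f) magnitude and take an inverse FFT. Repeating the resulting block makes the transmit signal exactly periodic.

```python
import numpy as np

def pinkish_probe(n, fs, f_lo=100.0, f_hi=8000.0, seed=0):
    """Periodic pink-ish test signal: random-phase bins between f_lo and
    f_hi with 1/sqrt(f) magnitude (roughly -3 dB per octave)."""
    rng = np.random.default_rng(seed)
    X = np.zeros(n, dtype=complex)
    k = np.arange(1, n // 2)                  # positive-frequency bins
    f = k * fs / n
    mag = np.where((f >= f_lo) & (f <= f_hi), 1.0 / np.sqrt(f), 0.0)
    X[k] = mag * np.exp(2j * np.pi * rng.random(len(k)))
    X[-k] = np.conj(X[k])                     # Hermitian symmetry -> real signal
    x = np.fft.ifft(X).real
    return x / np.max(np.abs(x))              # normalize to full scale

fs = 48000
x = pinkish_probe(4800, fs)                   # one 100 ms period; loop it for Tx
```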

Now if you want to use two different hardware devices, you may have to account for frequency error ("FEC", frequency error estimation and correction), or you'll be limited in capture length and frequency. But that's a different problem.

I think what you need is a simple crosscorrelation (FFT both sides, conjugate one side, multiply, IFFT). Note the above comment on periodic signals.
Look at the peak bin and its two neighbors, fit a quadratic polynomial, and locate its peak. This is probably "close enough" for all practical purposes with regard to consumer audio.
The "fitSignal" algorithm could make sense when you need 6+ orders of magnitude higher accuracy, e.g. when I need to prove that one of my devices has a PLL design flaw that causes a femtosecond-per-second drift (real world use case).
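The crosscorrelation-plus-parabola recipe suggested above can be sketched in Python/numpy (illustrative; it assumes a periodic capture so that circular correlation is valid):

```python
import numpy as np

def xcorr_delay(ref, sig):
    """Delay of sig relative to ref: FFT crosscorrelation, then a
    parabolic fit through the peak bin and its two neighbors."""
    n = len(ref)
    # circular crosscorrelation: FFT both sides, conjugate one, multiply, IFFT
    c = np.fft.ifft(np.fft.fft(sig) * np.conj(np.fft.fft(ref))).real
    p = np.argmax(c)
    y0, y1, y2 = c[(p - 1) % n], c[p], c[(p + 1) % n]
    frac = 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)  # parabola vertex offset
    d = p + frac
    return d - n if d > n / 2 else d             # map to signed delay

rng = np.random.default_rng(2)
ref = rng.standard_normal(1024)
sig = np.roll(ref, 5)                            # cyclic delay of 5 samples
print(xcorr_delay(ref, sig))                     # ~5
```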

Comment by Gerard123, December 23, 2020
Hello Mr Markus,

I really appreciate your comment. I'm using FMCW and CW signals, sent from one device (a smartphone) as superposed signals. First, I want to calculate the time delay of the two signals, as mentioned. Second, I want to see the time difference between the two signals during transmission. Can fitSignal help with this? I tried to calculate the frequency spectrum using the cross-correlation and FFT, and I find that the power of the signal changes in different sections of the signal; is that right? Third, I want to plot a spectrogram to see the transmitted and received signals (FMCW and CW) that were mixed during transmission.

One more question: what is the impact of combining two acoustic signals at the same time? Will the signal be more powerful, or the same as when we send a single FMCW signal? If we send both at the same time, can the received signal collect more data? I need this to be clear in my mind; I'm confused.

Can you please help me solve the problem, and show how to plot them in Matlab?

Thanks in advance 
