Need your help guys,
In my application the measurements are affected by temperature: the signal is stretched over time, though it preserves a relatively similar structure.
I want to find the incremental stretching between pairs of consecutive signals. Given a pair of signals, I divide the first into N segments (windows), and for each window I want to find its shift and new length in the next signal.
I tried to estimate these incremental shifts with peak detection, but got low-accuracy estimates, probably due to the stretching effects on the waveform structure.
In addition, I was advised to try the Dynamic Time Warping algorithm, but it manipulates both signals to find the minimal Euclidean distance. In my case I would want only the first segment to be warped until it correlates perfectly with some waveform in the 2nd signal. I couldn't find a way to do that with DTW.
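(For what it's worth, maybe a "subsequence" DTW variant, where the match may start and end anywhere in the 2nd signal so that only the window is effectively warped, is close to what I'm after. A rough NumPy sketch of the idea, with names of my own choosing:)

```python
import numpy as np

def subsequence_dtw(query, target):
    """DTW where the match may start and end anywhere in `target`,
    so only `query` (one window) is effectively warped.
    Returns (cost, last_index) of the best match in `target`."""
    n, m = len(query), len(target)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, :] = 0.0                       # free start anywhere in target
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = (query[i - 1] - target[j - 1]) ** 2
            D[i, j] = d + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    end = int(np.argmin(D[n, 1:])) + 1  # free end: best column in last row
    return float(D[n, end]), end - 1
```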
I have had to do this several times. I have used a Farrow interpolator to do fractional sample-rate conversion on only one of the signals (presumably the capture of interest). This will work if the warping is fairly consistent over the capture period of the data of interest.
If you know the approximate range over which the signal can warp (±X ppm), then just do a search (in steps) over that range and check the correlation coefficient at each step.
One added advantage of this scheme is that you may be able to develop a profile of the relationship between the warp and the temperature. Once you have that relationship, you may be able to home in much more quickly in the future, if you know the temperature for any given capture.
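A rough sketch of that stepped search (plain linear interpolation stands in for a proper Farrow interpolator here, and all the names are illustrative, not from any real implementation):

```python
import numpy as np

def stretch(x, ppm):
    """Resample x by the ratio (1 + ppm/1e6).  Linear interpolation is
    used here only as a stand-in for a proper Farrow interpolator."""
    ratio = 1.0 + ppm / 1e6
    n_out = int(len(x) / ratio)
    t = np.arange(n_out) * ratio            # fractional read positions
    return np.interp(t, np.arange(len(x)), x)

def best_ppm(ref, cap, ppm_range=200.0, step=10.0):
    """Step over +-ppm_range, resample the capture at each trial ratio,
    and return the ppm value giving the best correlation coefficient."""
    best_c, best_p = -np.inf, 0.0
    for ppm in np.arange(-ppm_range, ppm_range + step, step):
        y = stretch(cap, ppm)
        n = min(len(ref), len(y))
        c = np.corrcoef(ref[:n], y[:n])[0, 1]
        if c > best_c:
            best_c, best_p = c, ppm
    return best_p
```

A coarse step can locate the neighborhood, and a second pass with a finer step can home in from there.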
P.S. My chosen technique for assessing performance has been to attempt to cancel one signal from the other using adaptive filtering. The adaptive filter is run with the interpolated signal as the desired, and the reference signal as the input to the FIR delay line. This technique will give you a residual error as a cost function, but will also let you find the best time alignment (position of the largest tap) for the two signals, so that you don't have to play that game as well.
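A minimal sketch of that cancellation idea using an NLMS filter (the function name and parameter values here are illustrative, not what I actually used):

```python
import numpy as np

def nlms_cancel(reference, desired, n_taps=64, mu=0.5, eps=1e-8):
    """Adapt an FIR filter so that the filtered reference cancels the
    desired signal.  The residual error is the cost function, and the
    index of the largest converged tap gives the time alignment."""
    w = np.zeros(n_taps)                   # adaptive filter taps
    x = np.zeros(n_taps)                   # FIR delay line
    err = np.zeros(len(desired))
    for n in range(len(desired)):
        x[1:] = x[:-1]                     # shift the delay line
        x[0] = reference[n] if n < len(reference) else 0.0
        e = desired[n] - w @ x             # cancellation residual
        w += mu * e * x / (x @ x + eps)    # NLMS tap update
        err[n] = e
    delay = int(np.argmax(np.abs(w)))      # position of the largest tap
    return err, w, delay
```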
very wise solution
Thanks for your kind support!
if you're interpolating (or sample rate converting) the stretched replica to make it the same length as the original signal segment, how do you know what the sample rate ratio is?
I'm not sure that you care necessarily to make the signals the same length. I think that the technique I propose does not care about the relative length of the two signals, except that there must be some significant period when the content is "common" to both.
okay, i understand this statement: "I have used a Farrow interpolator to do fractional rate sample rate conversion on only one of the signals..." but i am curious what sample rate conversion ratio you had used and how you had determined that ratio value from looking at the two signals.
after looking at the graphic, i would cross correlate small windowed segments of Signal 1 against Signal 2 and find the offsets that make for a maximum match. since the two signals are scaled apparently identically, the cross correlation might be more like the Average Squared Difference Function similar to how i discuss it in this stackexchange answer:
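a rough sketch of that windowed ASDF search (the names are mine):

```python
import numpy as np

def asdf_offset(window, signal, start, search=50):
    """slide `window` over `signal` around `start` and return the lag
    that minimizes the Average Squared Difference Function."""
    L = len(window)
    best_lag, best_cost = 0, np.inf
    for lag in range(-search, search + 1):
        s = start + lag
        if s < 0 or s + L > len(signal):
            continue                       # window would fall off the signal
        cost = np.mean((window - signal[s:s + L]) ** 2)
        if cost < best_cost:
            best_cost, best_lag = cost, lag
    return best_lag
```

running this for each of the N windows of Signal 1 gives a shift per window, and the differences between consecutive shifts indicate the local stretch.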
The most typical application for the sample rate conversion technique has been in modem signal analysis/reception. This is often referred to as timing recovery and is used to match the receiver to the transmit timing.
However, I have also had times when I dealt with audio signal sampling differences.
One case was in the design of the playback of the received signal for Sirius Satellite Radio. The received signal comes in packets at the effective transmit (encoding) rate. The D/As on the receiver have their own crystal, so they are in no way synchronized to the transmitter. Rather than changing the clock on the D/As, we established a buffer with high and low water marks and used the bumping against these marks to drive a PLL that produced the required phases for the Farrow interpolation, so that we played out at the D/A rate. The performance of the interpolation had to be very good: we had a 7th or 8th order design with about 55 taps per sub-filter, to obtain -85 dB worst-case error and better than -102 dB average error in the interpolated result. Performance was measured using a pure sine wave in the middle of the band, covering the phase range from -0.5 to +0.5 samples.
Now for the problem at hand. I simply implement an interpolator with a slowly incrementing or decrementing phase parameter. In the implementation I have, I use at least three parameters:
- Input sampling rate
- Output sampling rate
- Initial phase difference
I simply specify the input sampling rate as, say, 8000 and the output sampling rate as 8000 * (1 ± x/1e6), where x is the ppm adjustment I desire. For our case here, I suggested that a range of ppm values can be used, and the experiment run iteratively to find something close, and then home in from there.
At a systems level, there is usually a pretty well understood range over which the ppm values could extend.
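For illustration, here is a minimal cubic-Lagrange version of such a phase-accumulator interpolator (much lower order than the design I described above, so the accuracy is nowhere near -85 dB, but the structure is the same):

```python
import numpy as np

def farrow_resample(x, fs_in, fs_out, phase=0.0):
    """Fractional-rate resampler: a phase accumulator advances through
    the input in steps of fs_in/fs_out, and a 4-point (cubic Lagrange)
    interpolator evaluates the signal at each fractional position."""
    step = fs_in / fs_out                  # input samples per output sample
    out = []
    t = 1.0 + phase                        # first position with full support
    while t < len(x) - 2:
        k = int(t)
        mu = t - k                         # fractional part, 0 <= mu < 1
        xm1, x0, x1, x2 = x[k-1], x[k], x[k+1], x[k+2]
        # cubic Lagrange interpolation at position k + mu
        y = (-mu*(mu-1)*(mu-2)/6) * xm1 \
            + ((mu+1)*(mu-1)*(mu-2)/2) * x0 \
            + (-(mu+1)*mu*(mu-2)/2) * x1 \
            + ((mu+1)*mu*(mu-1)/6) * x2
        out.append(y)
        t += step                          # advance the phase accumulator
    return np.array(out)
```

With the rate specification above, `farrow_resample(x, 8000, 8000 * (1 + ppm/1e6))` applies a ppm-sized stretch to the signal.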
I think this is more like pattern search/matching. Once a match is found, the delay is to be computed.
Edit: I meant, of course, stretching/shortening the signal of interest for each search try.
Thank you for your advice, Mr. dgshaw6. I had never heard of this technique, so I will try to look deeper into it.
If I understand right, you propose to take the reference signal in the window and decrease its sampling rate (artificially stretch it) by some fraction, and at each step compute the cross-correlation with the target signal until the CC reaches a maximum?
In case the 1st sample points of the two windowed signals are not aligned (say the 2nd window was translated forward due to the stretching of the previous windows), how would you recommend dealing with that?
I would think to align the beginnings of both complete signals first, and then, after finishing the stretching of the 1st window, translate the next window by the increment found in the previous step.
Does it sound reasonable?
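Something like the following loop is what I have in mind (a sketch only; the stretching here is plain linear interpolation, and all the names are my own):

```python
import numpy as np

def align_windows(s1, s2, win_len, ratios, search=30):
    """For each window of s1, try a set of stretch ratios and small
    shifts against s2, and carry the accumulated position forward so
    the next window starts where the previous match ended.
    Returns a list of (window_start, best_ratio, residual_lag)."""
    pos = 0                                    # current read position in s2
    results = []
    for start in range(0, len(s1) - win_len + 1, win_len):
        w = s1[start:start + win_len]
        best = (-np.inf, 1.0, 0)               # (corr, ratio, lag)
        for r in ratios:
            n_out = int(round(win_len * r))
            # stretch the window by ratio r (linear interpolation)
            ws = np.interp(np.arange(n_out) / r, np.arange(win_len), w)
            for lag in range(-search, search + 1):
                s = pos + lag
                if s < 0 or s + n_out > len(s2):
                    continue
                c = np.corrcoef(ws, s2[s:s + n_out])[0, 1]
                if c > best[0]:
                    best = (c, r, lag)
        _, r, lag = best
        results.append((start, r, lag))
        pos += lag + int(round(win_len * r))   # next window starts after match
    return results
```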
I would also appreciate a reference with a simple explanation of the Farrow interpolator; what I have found so far is too complicated for me... :/
First, I assume in my technique that the stretch/shrink factor is consistent across the entire capture. If this is true, then no windowing is necessary, and the entire signals can be run through the adaptive filtering process.
The way to guarantee that you will not need to find the initial alignment is to ensure that the reference signal enters the delay line before you expect to see evidence of it in the desired signal. Then you need an adaptive filter that is at least as long as you feel the delay between reference and desired can ever be (maybe double that). You may need to add some zeros at the beginning of the desired signal to guarantee that the reference arrives early.
When the adaptive filter converges, the position of the largest tap in the filter coefficients will tell you the actual delay that was needed to achieve the alignment.
As you reach the point in the iteration process where the correlation is maximized and the error is at a minimum, the impulse response in the adaptive filter will be at its most compact. That is the state in which the delay for alignment is most obvious.
Honestly, I'm far from advanced signal processing techniques and so far haven't understood how to implement a Farrow interpolator.
In my approach I iterate over incremental stretchings of the original window (I can do this by guessing the temperature of the window in S1 and computing the corresponding number of samples in the "new" window, which gives the stretching ratio). Then, at each step, I slide the window from S1 over S2 across a reasonable range and compute the cross-correlation coefficient. However, it gets complicated once I try to define the convergence criteria. Presumably the normalized CC would never reach 1, but for different temperatures (hence different increments) it would reach some maximum within a range. Following this approach, what would convergence look like? I assume the same question would arise with the Farrow filter approach?
This would be a good application for wavelet analysis. You know what the source signal looks like, and you're looking for a similar wave shape, but different. The self similarity principle of wavelets seems like an interesting approach. Not sure it will actually work, it's just that it seems like a good fit to the problem.
I also thought of that, but the signal to be searched for has to satisfy the wavelet admissibility criterion to be used as a 'wavelet' in a wavelet analysis. If he can find a wavelet similar to the signal to be searched for, that would indeed be a perfect solution.