Upsampled input to an Adaptive filter?

Started by khurram6050 · 7 years ago · 14 replies · latest reply 7 years ago · 364 views

Hi everyone. I will try to explain the issue I am having as clearly as possible without going into my coding or maths. I have my own and a MATLAB Central implementation of standard #LMS in MATLAB. Fixed step size, no normalization or other extras.

I am trying to use it in a system identification setup. I generate a vector of Gaussian numbers using "randn" and give the same vector as both the input and the desired response to the LMS filter. The estimated weight vector at the end should then be a "delta" channel, and this is what I get. Then I tried upsampling and interpolating the input vector by an integer factor and repeating the same thing. This time around the estimated channel has the shape of a "sinc". I gave the interpolated signal as the input and the desired response as before. No changes.

Then I also tried low-pass filtering the input vector and repeating the same thing. Again a "sinc". Has anyone observed this before or does anyone know something about this? Please point out my mistake. Any suggestions or a discussion are also welcome.
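To make the setup concrete, here is a stripped-down sketch of what I am doing (not my actual code; the tap count, step size, and interpolation filter are placeholder values, and sinc/hamming assume the Signal Processing Toolbox):

```matlab
% Fixed-step LMS system identification, input == desired (white-noise case).
N  = 5000;  M = 32;  mu = 0.01;       % samples, adaptive taps, step size
x  = randn(N,1);                      % white Gaussian signal
d  = x;                               % desired = same signal -> ideal weights = delta

w  = zeros(M,1);  xb = zeros(M,1);    % weights and tapped delay line
for n = 1:N
    xb = [x(n); xb(1:end-1)];         % shift new sample into the delay line
    e  = d(n) - w.'*xb;               % a-priori error
    w  = w + mu*e*xb;                 % LMS update
end
figure; stem(w); title('White input: estimate is (approximately) a delta');

% Same thing with a 4x zero-stuffed and sinc-interpolated version of the signal.
L  = 4;
h  = sinc((-40:40)'/L) .* hamming(81);   % windowed-sinc interpolation filter
xu = zeros(L*N,1);  xu(1:L:end) = x;     % zero-stuffing
xi = conv(xu, h, 'same');                % interpolated signal
di = xi;                                 % desired = same interpolated signal

w  = zeros(M,1);  xb = zeros(M,1);
for n = 1:L*N
    xb = [xi(n); xb(1:end-1)];
    e  = di(n) - w.'*xb;
    w  = w + mu*e*xb;
end
figure; stem(w); title('Interpolated input: estimate looks like a sinc');
```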

Reply by tdc, January 26, 2017

Hi Khurram6050,

You've essentially created a band-limited channel.  Compare the FFTs of the data at Fs_{lo} and Fs_{hi}.  In both cases, you should see that the spectral content is well contained within Fs_{lo}/2, though you will see some low level numerical content further out in the higher frequency data.  

When you interpolate, you are not adding any more spectral content, just filling in some gaps within the constraints of the original bandwidth.  Another way to look at it is that your higher frequency data channel has effectively been windowed in the frequency domain.  The inverse transform of this effective window is something sinc-like.  You get the nice delta function at first because everything is well matched without any phase offsets.  

It's analogous to taking a DFT of a bin-centered tone: you get a spike at the peak and zeros everywhere else. In reality, zero padding before the DFT interpolates the frequency spectrum and shows you the information in between.

Your original channel was a band limited sinc function all along - interpolating just brings this out.  
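For example (untested sketch; the interpolation filter and factor are just placeholders):

```matlab
% Compare the spectra of the data at Fs_lo and at Fs_hi = L*Fs_lo.
Fs_lo = 1000;  L = 4;  Fs_hi = L*Fs_lo;   % placeholder sample rates
N  = 4096;
x  = randn(N,1);                          % data at Fs_lo
xu = zeros(L*N,1);  xu(1:L:end) = x;      % zero-stuff
h  = sinc((-40:40)'/L) .* hamming(81);    % interpolation lowpass
xi = conv(xu, h, 'same');                 % data at Fs_hi

X  = abs(fft(x));   f_lo = (0:N/2-1)  *Fs_lo/N;
Xi = abs(fft(xi));  f_hi = (0:L*N/2-1)*Fs_hi/(L*N);
plot(f_lo, 20*log10(X(1:N/2)/N), f_hi, 20*log10(Xi(1:L*N/2)/(L*N)));
xlabel('Frequency (Hz)'); ylabel('Magnitude (dB)');
legend('data at Fs_{lo}', 'data at Fs_{hi}');
% Both spectra live below Fs_lo/2 = 500 Hz: the interpolation added samples,
% not bandwidth.
```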

Hope this is helpful- happy to try explaining more if something didn't make sense.  

-Travis

Reply by dgshaw6, January 26, 2017

Hello khurram6050,

You said that you up-sampled and then interpolated.

Then you said that you also tried after you low pass filtered the inputs.

This lowpass filtering is a form of interpolation of the upsampled signal.

Can you be more specific about your interpolation and low pass filtering, and the integer ratio that you used for interpolation?

The only other brief insight I can give is to remember that a sinc in time is a boxcar in frequency, so the result implies a brick wall in the frequency domain, which represents the interpolation in time.  However, if you are presenting the exact same signal to the two LMS inputs, then, as you say, there should be a single impulse in the response.

However, remember that your interpolated signal has no energy in the upper region of the frequency band, so it may take a VERY long time for the sinc to die away to nothing around the delta, because the excitation is ill-conditioned in the upper frequencies.
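To illustrate the brick wall (rough sketch; the filter here is a generic truncated sinc, not necessarily the one you used):

```matlab
% A truncated sinc in time is (approximately) a boxcar in frequency.
L = 4;                                    % interpolation factor
h = sinc((-40:40)/L);                     % 81-tap sinc, cutoff at 1/(2L)
H = fft(h, 1024);                         % zero-padded DFT
f = (0:511)/1024;                         % normalized frequency (cycles/sample)
plot(f, 20*log10(abs(H(1:512))));
xlabel('Frequency (cycles/sample)'); ylabel('|H| (dB)');
% Nearly flat up to 1/(2L) = 0.125, then a sharp drop: a brick wall.
% Any signal filtered by this has essentially no excitation above 0.125,
% which is why the out-of-band part of the LMS solution dies away so slowly.
```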

I'm not sure that I made any sense.

Best regards,

David


Reply by MichaelRW, January 26, 2017

By only "upsampling and interpolating" the input signal (i.e. the signal to be filtered) and not the desired signal, are you expecting the same 'delta' channel response?  Based on your description, that is my understanding.

What is the rationale for expecting the same response if you've only modified one of the two necessary signal inputs to the adaptive filter?

Reply by khurram6050, January 26, 2017

Hello. OK, so my wording was a bit ambiguous. I didn't upsample and interpolate only the input signal; I did the same thing to the desired signal. In fact, right now they are the same signal. That's why I am expecting a delta.

Reply by Tim Wescott, January 26, 2017

I don't know what the right terminology is in adaptive-filtering terms, but when you upsample, you're generating a signal in which neighboring samples are strongly correlated.  That strong correlation makes the ideal filter ambiguous.  I think if you drill down deep enough into the math for the LMS algorithm, you'll find that when that happens there's effectively a divide-by-zero buried in there someplace (the input autocorrelation matrix that defines the ideal solution becomes singular, or nearly so).
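One way to see it without digging through the whole derivation: look at the eigenvalue spread (condition number) of the input autocorrelation matrix, which is what the "ideal" solution has to invert. A rough sketch (the filter and sizes are just placeholders):

```matlab
% Condition number of the M x M input autocorrelation matrix,
% white vs. interpolated input (sketch; values are illustrative).
M  = 16;  N = 20000;  L = 4;
x  = randn(N,1);                               % white input
xu = zeros(L*N,1);  xu(1:L:end) = x;
h  = sinc((-40:40)'/L) .* hamming(81);
xi = conv(xu, h, 'same');                      % interpolated input

r_w = xcorr(x,  M-1, 'biased');  R_w = toeplitz(r_w(M:end));
r_i = xcorr(xi, M-1, 'biased');  R_i = toeplitz(r_i(M:end));
fprintf('white input:        cond(R) = %.1f\n', cond(R_w));
fprintf('interpolated input: cond(R) = %.3g\n', cond(R_i));
% For the white input cond(R) is close to 1; for the interpolated input it is
% huge (R is nearly singular), so the "ideal" weight vector is barely defined
% and the LMS modes tied to the tiny eigenvalues adapt extremely slowly.
```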

So it's not surprising that Weird Chit* happens.

I can't tell you exactly what's going on, but if you have the mathematical chops for it, I strongly suggest that you hit the books on the LMS algorithm.  Anything that involves adaptation like this pretty much requires that you know the guts of the algorithm so that you can feed it what it needs to work right.

* That's an engineering term, BTW.  At least, if you spend enough time trying to turn theory into practice, you'll start using it.  Young engineers have to say "gosh, I don't know" -- but around 10 or 15 years out of school you can start using "bazzilion" and "weird chit" and "humongous" and such.

Reply by dudelsound, January 26, 2017

I've implemented a few adaptive FIRs myself, and what you observe doesn't strike me as too weird. I agree with Tim: by upsampling and interpolating you effectively feed a narrowband signal to the adaptive FIR filter - at some frequencies your energy is zero. When you dig into the math of adaptive FIR filters you will see (what you can guess beforehand :) that your filter will not adapt correctly at frequencies that are not contained in your training signal.

In other words: the task of the adaptation is to minimize the error between the filtered input and the given output. For an oversampled and interpolated (low-passed) signal, this is perfectly accomplished by a filter that passes all frequencies in your signal unaltered and blocks all higher frequencies - and that is a low-pass filter, which corresponds to a sinc in time.

Reply by khurram6050, January 26, 2017

Hello. Apologies for the late response.

Well, your answer kind of makes sense, but it's not directly applicable to the problem I am dealing with. I have a bistatic radar where the direct feed-through signal (high power) is followed by reflections from targets after some delay. So in order to observe what you suggested, the order of the filter would have to be large (to form the sinc shape), but doing that would filter out the reflections from the targets as well. So I was restricting my filter to the direct path and before it, kind of like an anti-causal filter. This is the problem of fractionally spaced equalization in a sense (fractional because the direct path does not fall exactly on a discrete sample instant).

I don't know if the above makes sense or not, but I agree with you. It makes sense!!

Reply by dudelsound, January 26, 2017

OK - so if I get you correctly, you want to do some sort of echo-cancelling - you transmit a signal and want to receive only the reflections, not the direct path.

The direct path most probably contains some sort of delay. Now if your transmission is perfect except for a delay, the adaptive filter would have to take the form of a delayed impulse. If the delay is not an integer multiple of the sampling period, then the result will have a sinc shape (fractional delay).

Actually, even a straightforward impulse like [0 0 0 1 0 0 0] is a sinc - one that happens to be sampled exactly at its peak and its zero crossings.
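Quick illustration (sketch; the delays are arbitrary examples):

```matlab
% An ideal delay of D samples has impulse response sinc(n - D).
n      = -10:10;
h_int  = sinc(n - 3);      % integer delay of 3 samples -> [0 ... 0 1 0 ... 0]
h_frac = sinc(n - 3.5);    % half-sample delay -> the full sinc shape appears
subplot(2,1,1); stem(n, h_int);  title('Integer delay: sinc sampled at its peak/zeros');
subplot(2,1,2); stem(n, h_frac); title('Half-sample delay: sinc shape visible');
```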

Reply by Tim Wescott, January 26, 2017

If you're upsampling and interpolating just the input signal, won't the input and desired signals be at two different rates?

To clarify, in your filtered case, are you filtering just the input signal to the adaptive filter, but not the "desired" signal?

Reply by khurram6050, January 26, 2017
Hi Tim. I upsampled and interpolated both signals. In fact, right now it is the same signal that I am giving as both the input and the desired response, so they are at the same rate.
Reply by MichaelRW, January 26, 2017

The other aspect of the problem that we might be overlooking is the number of weights you are using in your LMS adaptive filter.  Typically, adaptive filters use a finite impulse response (FIR) filtering structure because it is inherently stable.  One such structure is a tapped delay line.  Are you using the same filtering structure for the revised input and desired signals?

Out of curiosity what programming language are you using?  If you are using Matlab, I can have a look at your code if you like.  A second set of eyes never hurts.

Reply by khurram6050, January 26, 2017

Hi Michael. Apologies for the late response. You are right, we are overlooking the number-of-weights part of the problem, but the original problem I am dealing with has some restrictions in that respect. Please see my response to "dudelsound", where I have touched on this.


Yes, I am using an FIR filter. I am doing all this in MATLAB, and I am attaching the files to this response. The attached files are just a test case using Gaussian random vectors; they don't contain any of the radar stuff I mentioned.

The first figure you see when you run the code is the delta, and the second one is half of the sinc, because the filter in this implementation is causal.


dspRelated_testFile.m

fixed_step_LMS.m

Reply by MichaelRW, January 26, 2017

It looks like you may need to think about the LMS parameters and perhaps its implementation. You have indicated there are constraints to your problem, but based on the Matlab code you provided, I have created several figure sets to illustrate why you might need to rework your implementation of the LMS algorithm.

First of all, as you have described in your original post, when you use the white-noise sequence for both the signal to be filtered by the adaptive filter and the desired signal, the system response determined by the LMS adaptive filter is a "delta" channel (see figures below).


Then, based on your Matlab code, you convolve the white-noise sequence with a 41-sample SINC function that effectively filters the white-noise sequence with a lowpass filter with a normalized cutoff frequency of 0.5 pi radians per sample (you've verified this by showing the frequency magnitude-response using the FFT).  When this lowpass filtered sequence is used as the signal to be filtered by the adaptive filter and the desired signal, the system response looks like a SINC function (see figures below).


In order to get closer to the desired "delta" channel response when the lowpass filtered sequence is used for the input and the desired signals, then you need to re-examine the LMS parameters and maybe its implementation.  As you can see from the coefficient adaption curves for this case, the coefficients have yet to reach a stable state with 1,000 sample sequences.  By experimenting with your code, I was able to achieve a channel response that is more inline with the desired "delta" response by increasing the step-size from 0.01 to 0.05 and increasing the length of the data sequences from 1,000 to 10,000 samples (see the figure below).  Given the code you provided was "just a test case," you'll likely need to reconcile what I've said here with your original code.
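Roughly what I ran, stripped down (this is my own re-creation, not your attached code, so treat the exact values as illustrative):

```matlab
% Lowpass-filtered-noise case with the larger step size and longer sequence
% (sketch only; not the attached code).
N   = 10000;  mu = 0.05;  M = 11;        % 10th-order adaptive FIR (11 taps)
hLP = sinc((-20:20)'/2);                 % 41-sample sinc, cutoff 0.5*pi rad/sample
x   = conv(randn(N,1), hLP, 'same');     % lowpass-filtered white-noise sequence
d   = x;                                 % desired = the same filtered sequence

w  = zeros(M,1);  xb = zeros(M,1);
for n = 1:N
    xb = [x(n); xb(1:end-1)];            % tapped delay line
    e  = d(n) - w.'*xb;                  % a-priori error
    w  = w + mu*e*xb;                    % fixed-step LMS update
end
stem(0:M-1, w);
title('Channel estimate with mu = 0.05, N = 10,000 (closer to a delta)');
```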

I think realizing a "delta" channel here is a limiting case, requiring perfect adaptation of the filter using conditions that definitely will not coincide with the constraints of your problem.

For the sake of completeness, I've also run simulations using the WGN sequence as the input signal to the adaptive filter and the lowpass filtered WGN sequence as the desired signal using your original LMS parameters and also for the case of increasing the order of the adaptive filter from 10 to 50, 11 coefficients to 51 coefficients, respectively.

Using your original LMS parameters and 1,000 sample data sequences, the channel estimate, coefficient adaptation curves, and frequency magnitude response of the estimated system are shown below.

Using the original step size of 0.01 and sequence length of 1,000 samples, a filter order of 50 (51 filter coefficients) is sufficient, albeit this value could likely be decreased to around 40, which would agree with the number of values in the SINC function used to define the lowpass filter.  Even with values less than 40, say in the mid-30s, there is good agreement in the passband and the initial portion of the lowpass cutoff response, deviating more noticeably at higher frequencies.

As you can see, the adaptive filter needs enough coefficients to adequately estimate the coefficients of the original SINC lowpass filter.  This type of system identification works well if the system to be characterized is inherently moving-average (i.e. FIR) in nature, but does not do so well when it is auto-regressive (i.e. IIR).
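For that last case, the stripped-down version looks like this (again my own re-creation, not your code):

```matlab
% True system-identification case: white noise in, lowpass-filtered noise as
% the desired signal (sketch; same loop as above, different input/desired).
N   = 1000;  mu = 0.01;  M = 51;         % 50th-order adaptive FIR (51 taps)
hLP = sinc((-20:20)'/2);                 % the 41-sample SINC "unknown system"
x   = randn(N,1);                        % white-noise input
d   = filter(hLP, 1, x);                 % desired = output of the unknown system

w  = zeros(M,1);  xb = zeros(M,1);
for n = 1:N
    xb = [x(n); xb(1:end-1)];
    e  = d(n) - w.'*xb;
    w  = w + mu*e*xb;
end
stem(0:M-1, w); hold on; stem(0:40, hLP, 'r'); hold off;
legend('adaptive-filter estimate', '41-sample SINC lowpass');
```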

One final idea, at least with respect to adjusting your step size, is that your implementation of the LMS does not normalize the input vector.  I believe you have stated this here.  As you likely know, this means that the degree of adaptation, regardless of the step-size value, will depend on the scaling of the input signal.  If the scaling of the input signal varies over time, then the adaptation of the LMS will not converge as required.
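For reference, the usual normalized-LMS (NLMS) fix looks something like this (sketch, reusing x, d, and M from the sketches above; mu_n and epsi are placeholder values):

```matlab
% NLMS: divide the step by the instantaneous input power so that adaptation
% speed no longer depends on the scaling of the input signal.
mu_n = 0.5;  epsi = 1e-6;                % normalized step in (0,2); regularizer
w = zeros(M,1);  xb = zeros(M,1);
for n = 1:length(x)
    xb = [x(n); xb(1:end-1)];
    e  = d(n) - w.'*xb;
    w  = w + (mu_n/(epsi + xb.'*xb)) * e * xb;   % normalization by ||xb||^2
end
```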

I hope these results help you move towards an answer to your question.

Reply by JOS, January 26, 2017

Did the second sinc correspond to the lowpass filter or the upsampling factor?  I.e., did the zero-crossings move?  If the sinc function "stretched" horizontally, then that could be correct.

Also, as mentioned by others, the frequency response of the LMS result is arbitrary in "undriven" frequency regions.  In other words, LMS can converge to the desired sinc function convolved with any filter having the response 1 over the driven lowpass band and anything at all in the zero-amplitude frequency region, such as the complementary highpass, which would get you back to delta.