I'm hoping that anyone who's used the ARM DSP function "arm_biquad_cascade_df2T_f32" (or any similar function) could have a look at this code and provide some insight as to why there is no output from the IIR. This is on an STM32L4 running at 80MHz.
I used ASN Filter Designer to generate a 100Hz LPF; the specs are at the top of the attached file. The method for the ping-pong buffer is from code posted by Joseph Yiu at this link:
https://community.arm.com/developer/tools-software/tools/f/armds-forum/5860/cmsis-dsp-fir-filter-for-continous-real-signal
I have verified that the ping-pong mechanism is functioning as it should. I am feeding in a clean 2.5Vpp sine wave at 80Hz. Yet all I can see on the DAC output is small ripples of about 400mV that don't match, or even resemble, the input sine wave. Here's a test I did to validate the ADC in and DAC out: in the ping-pong buffer state handler in INT7, I added a line to send the signal from the ADC straight to the DAC. See the lines:
//uncomment this line to check DAC output, should match input
//dacOut = (uint32_t)ADCsignalIn;
With that line uncommented, the DAC signal matches the ADC input on the scope perfectly. Is it possible the ADC signal needs to be scaled differently from what I am doing? I have tried a few different things to no avail.
In summary: there is virtually no output from the filter. I cannot imagine any problem with the filter coefficients, as the filter runs fine in the ASN test loop. My plan was to first check with a sine wave within the filter cutoff, as I am doing now (80Hz), then try signals from 80Hz to 160Hz to check performance.
Can anyone provide some help as to what the issue might be? Your feedback is greatly appreciated.
You have run into the problem that the biquadratic filter is ill-conditioned for low-pass filters. Your bandwidth is about one-millionth of the sample rate... you cannot put poles that near z = 1 for this low-pass filter, even with double-precision floating-point arithmetic.
See the attached papers I wrote in 2007 to address this problem. There are a whole bunch of other options based on the idea that we can reduce the sample rate, design and implement the filter at the reduced sample rate, and then raise the sample rate again with a sequence of down-sampling and up-sampling filters. The common rule is: don't design filters with a large ratio of sample rate to bandwidth.
The first paper below demonstrates the cascade of down-sample, filter, up-sample.
The next two papers below handle the case without changing the sample rate.
Hi fred h.,
Are you familiar with ASN Filter Designer? According to the parameters this is a stable filter. Check the attached screenshot.
It's hard to debug code without digging in for a while. What I would recommend is creating a circular buffer from some free memory block (the largest you have). Then log all filter outputs to it in circular fashion, let it run, and examine the values being written to the DAC (assuming you are debugging with JTAG/IDE). Are they similar in range to those you see from ADCsignalIn? Another test is to set the first filter coefficient to 1 and all the others to 0. See the example biquad formula below, where b0 = 1 and everything else is 0:
y(n) = ( b0 * x(n) ) + ( b1 * x(n-1) ) + ( b2 * x(n-2) ) -
( a1 * y(n-1) ) - ( a2 * y(n-2) )
This is a pass-through filter, and should effectively be the same as setting the DAC to the ADC input.
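As a concrete version of that b0 = 1 test, here's a plain-C model of one Direct Form II Transposed stage. It's a stand-in for arm_biquad_cascade_df2T_f32, assuming the CMSIS-style per-stage coefficient layout {b0, b1, b2, a1, a2} with the feedback taps stored pre-negated:

```c
#include <stddef.h>

/* One DF2T biquad stage. Coefficient order follows the layout
   assumed above: {b0, b1, b2, a1, a2}, with a1/a2 already negated
   so they are ADDED in the recurrence. */
static void biquad_df2t(const float coeffs[5], float state[2],
                        const float *in, float *out, size_t n)
{
    float b0 = coeffs[0], b1 = coeffs[1], b2 = coeffs[2];
    float a1 = coeffs[3], a2 = coeffs[4];   /* negated feedback taps */
    float d1 = state[0], d2 = state[1];

    for (size_t i = 0; i < n; i++) {
        float x = in[i];
        float y = b0 * x + d1;              /* output */
        d1 = b1 * x + a1 * y + d2;          /* update delay line */
        d2 = b2 * x + a2 * y;
        out[i] = y;
    }
    state[0] = d1;
    state[1] = d2;
}
```

With coeffs = {1, 0, 0, 0, 0} the output buffer should be a bit-exact copy of the input; if it isn't, the problem is in the buffer handling rather than the coefficients.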
Lastly, you might try to code it up in some quick C (gcc, or whatever), offline from the processor. You can run a swept sine through the filter, using the C sin() function over your frequency range of interest, and monitor its output. I've found it's much easier to do this first and debug there, with Eclipse or whatever, instead of trying to debug on the embedded processor.
Otherwise it looks like it will take some hacking to figure it out.
If there are any good DSP practitioners out there who know the CMSIS-DSP library and Cortex-M4/M3 processors, there's a big pile of cash waiting for anyone who can write a book on using the library, with real-world examples on a Cortex-M series processor. There are literally millions of these devices out there and not a single good book of the kind I've described!
"This on an STM32L4 running at 80MHz"
Do you mean you have an 80MHz processor clock rate, or that you're sampling at 80MHz?
If you're not sampling at 80MHz -- what is your ADC sampling rate?
Thanks for taking the time.
Sorry, I should have been clearer. The ADC bus is running at 32MHz, and I have that divided down by 4.
The sampling rate is 1.024kHz. I wanted a power-of-two rate for the FFT (not yet coded).
(fharris) As pointed out, the filter is generated by ASN, which provides the base code for the ARM deployment that the program runs. It appears the filter is more than adequate at an Fs of 1.024kHz, with minimal overshoot and ringing.
Tim - do you see any glaring mistake in the code? The big part I am missing is what exactly is returned from the function call. There's virtually no documentation for the CMSIS code, so unless you are skilled in the art of DSP it offers no clue as to how to handle the returned array. I wish ARM provided more real-world examples of using the library on their processors.
Is the scaling and factoring I'm doing on the input and output OK? No output with a clean, in-range ADC signal into the filter makes little sense if the code all appears well.
Thanks for anything you can offer.
It looks like it ought to work -- but I don't have mileage with the CMSIS calls -- the work I've done on the Cortex cores hasn't included DSP.
If it works the way I think it should, your filter should have a DC gain of 1, so the amplitudes should work out.
You've tried piping the ADC straight to the DAC -- try generating a square wave in software instead of using the ADC, and running that through the filter. That may be informative.
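The DC-gain-of-1 point can be checked directly from the coefficients: at z = 1 a biquad's gain is the sum of the numerator taps over one plus the sum of the denominator taps. This sketch assumes the textbook sign convention (CMSIS stores the feedback taps negated, so flip their signs first):

```c
/* DC gain of one biquad stage, textbook sign convention:
   H(1) = (b0 + b1 + b2) / (1 + a1 + a2). */
static double biquad_dc_gain(const double b[3], const double a[2])
{
    return (b[0] + b[1] + b[2]) / (1.0 + a[0] + a[1]);
}
```

If each stage of the cascade evaluates to 1.0 here, a passband sine should come out at roughly the amplitude it went in.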
I've removed the ping-pong, and at least right now I can see some output post-filter, although it's being chunked with inverted pieces -- better than no output as before! I'll keep you all posted on what I find.
Well, I cleaned up code and it's now sort of working! Please see the attached photo. It starts right from the initial filter output. I set the scope to trigger on channel 2's output and caught it there.
I'm now running the IIR with a block size of 128. This does not seem to affect the signal in any way compared to a block size of 32.
Any idea as to what could be causing this?
The upper part of the sine wave has overflowed and now appears at the lower range of the sine wave. It looks as if you have DC in your signal and not a sufficient number of bits to accommodate the dynamic range of the output signal.
Try this: attenuate the input signal to the filter by a factor of 2 and apply the attenuated signal to the filter. If that doesn't fix it, attenuate the input signal by a factor of 4 and try again.
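A sketch of how that attenuation might look in code, assuming a 12-bit ADC/DAC biased at mid-rail (2048); the names and constants here are hypothetical, not from the posted code. Saturating on the way out is what stops the top of the sine from folding over to the bottom:

```c
#include <stdint.h>

#define MID_RAIL 2048.0f
#define ATTEN    2.0f   /* try 2, then 4, per the suggestion above */

/* Remove the mid-rail bias and attenuate before filtering,
   leaving headroom for passband overshoot. */
static float adc_to_float(uint16_t raw)
{
    return (raw - MID_RAIL) / ATTEN;
}

/* Undo the attenuation, re-bias, and saturate to the 12-bit DAC
   range instead of letting the value wrap. */
static uint16_t float_to_dac(float y)
{
    float v = y * ATTEN + MID_RAIL;
    if (v < 0.0f)    v = 0.0f;
    if (v > 4095.0f) v = 4095.0f;
    return (uint16_t)(v + 0.5f);
}
```

An in-range sample should round-trip unchanged through these two helpers, while an over-range filter output clips at 4095 rather than wrapping.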
So this raises a further question: what is the right way to match the filter's output amplitude to that of the input? Why is there attenuation at in-band frequencies? I assume there should be little if any, other than passband ripple.
Is that a wrong assumption? Should the filter's amplitude response remain fairly constant regardless of input level?
What I did was adjust the filter input as you suggested until the output became stable with minimal pk-pk deviation. So right now the input is 2.17V pk-pk from the ADC, and the filter/DAC output is 1.65V. Is that an artifact of running the IIR, or is there any way to make the output match the input?
Thanks so much for the feedback.