DSPRelated.com
Forums

FIR filter verification

Started by Sharan123 October 5, 2015
>Now you are talking about algorithm validation. Your first post is about
>proving implementation. The averaging filter can be studied in its
>mathematical model before any form of implementation, e.g. (Matlab
>function "filter"):
>
>x = [1 zeros(1,1024)];
>h = [.2,.2,.2,.2,.2];
>y = filter(h,1,x);
>
>then study the frequency domain of y or its time domain. You will also
>need to look after quantisation issues.
>
>So you can use this function to see the response, and then implement it
>and test your implementation with random data through both model and
>design:
>
>x = randn(1,1024);
>yref = filter(h,1,x);
>
>yref should be the same as your design output, taking into consideration
>bit resolution issues and overflow.
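For readers without Matlab, the quoted recipe can be sketched in Python with NumPy/SciPy (an illustrative translation, not the original poster's code; `scipy.signal.lfilter` plays the role of Matlab's `filter`):

```python
import numpy as np
from scipy.signal import lfilter

# 5-tap moving-average filter from the quoted post
h = np.array([0.2, 0.2, 0.2, 0.2, 0.2])

# Impulse input: the output is the impulse response itself, so its
# FFT gives the frequency response of the filter directly.
x = np.zeros(1025)
x[0] = 1.0
y = lfilter(h, 1.0, x)
H = np.abs(np.fft.rfft(y))   # magnitude response to inspect

# Random stimulus for comparing the model against an implementation
rng = np.random.default_rng(0)
x_rand = rng.standard_normal(1024)
yref = lfilter(h, 1.0, x_rand)
```

As in the Matlab version, `yref` is what a fixed-point design's output would be checked against, up to quantization effects.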
Dear Kaz,

I apologize if my original post was ambiguous. Here I am reiterating my understanding:

1) you design a filter with whatever method you are comfortable with
2) implement an ideal filter in Matlab
3) translate this into fixed point, either a) in the SW world or b) into HW (an RTL implementation)

I am more interested in verifying 3b) above, as VLSI is my area of work.

My understanding is that, as we move from 1 to 2 to 3, the rigour with which we verify reduces. What this means is that we choose fewer and fewer tests (the subset of tests shrinks as we go from 2 to 3), but that does not take away the fact that we are still functionally dealing with a block that is algorithmic in nature. Hence a pure random test may be sufficient, but it may not be. We will never know unless we do the job of analyzing the random data and checking whether the stimulus contained all the cases needed to sufficiently test the filter. By "sufficiently" I do not mean exhaustive testing, but what is minimally needed to check the implementation.

Hence a minimal set of tests that verifies the key design aspects of a filter is needed. In fact, due to the fixed-point implementation, we might add a few more tests to make sure that quantization effects are not degrading the performance.

Let me know what you think.

---------------------------------------
Posted through http://www.DSPRelated.com
>I apologize if my original post was ambiguous. Here I am reiterating my
>understanding:
>1) you design a filter with whatever method you are comfortable with
>2) implement an ideal filter in Matlab
>3) translate this into a fixed point in a) SW world or b) into HW (RTL
>implementation)
>
>I am more interested in verifying 3b) above as VLSI is my area of work.
>
>My understanding is that, as we move from 1 to 2 to 3, the rigour with
>which we verify reduces. What this means is that we choose fewer and
>fewer tests (the subset of tests shrinks as we go from 2 to 3), but that
>does not take away the fact that we are still functionally dealing with
>a block that is algorithmic in nature. Hence a pure random test may be
>sufficient, but it may not be. We will never know unless we do the job
>of analyzing the random data and checking whether the stimulus contained
>all the cases needed to sufficiently test the filter. By "sufficiently"
>I do not mean exhaustive testing, but what is minimally needed to check
>the implementation.
>
>Hence a minimal set of tests that verifies the key design aspects of a
>filter is needed. In fact, due to the fixed-point implementation, we
>might add a few more tests to make sure that quantization effects are
>not degrading the performance.
>
>Let me know what you think.
At the start you design your filter on a PC (in software, e.g. Matlab "filter"). You convert inputs and coefficients to fixed point when applying "filter". So now you have the model of your design on the PC. You can study it as you like, e.g. apply an impulse, a frequency sweep, multiple sinusoids or random data, etc., but an impulse input is more than enough for frequency study.

Once you are happy, you then implement it, e.g. in RTL. At this stage you can, if you wish, repeat all the tests you did on the Matlab model, but this is not necessary since you have done that already. What you want is to prove that your RTL is indeed the same, so random input is enough.

For overflow: don't worry about other complexities of input patterns. All you need is to scale the input to cause some overflow, and apply saturation to the Matlab model if it is applied in the RTL, otherwise a mismatch will occur.

For rounding: apply the same as per the RTL.

In fact many designers do not let their input reach overflow, as saturation still leads to nonlinearity.

If you study the functionality of your filter at the RTL level only, how would you know that it is implemented correctly in the first place? You may have bugs that mislead you about the frequency response.

Kaz
>Once you are happy you then implement it e.g. in rtl
>At this stage you can if you wish repeat all tests you did on matlab model
>but this is not necessary since you have done that already. What you want
>is prove that your rtl is indeed same so random input is enough.
Here is where I have to disagree with you. Just take a simple equation:

y = K1*x1 + K2*x2;

This is implemented in Matlab (our golden model) and in C/RTL (for example). I have tested this in Matlab and things are fine. Now I have to test at the C/RTL level. Assume I choose to use the random method, and through randomization I get one vector as follows:

[x1 = 1, x2 = 1]

Note: I have shown one vector just as an example. In reality, while testing a complex function, one is likely to choose only a subset of all possible vectors for a given function, and one vector is an extreme example of that.

Now, while implementing in C/RTL, we might have introduced a bug and implemented the equation as follows:

y = K1*x1 + K2*x1 (note that x1 is repeated instead of x2)

If you use the above vector, you would end up with y = K1 + K2 through both the Matlab and the C/RTL implementation, even though there is an obvious bug in the C/RTL. This is just an example, and an extreme one at that. But if you are testing a complex function and choosing vectors through the randomization method, you can end up with such situations.
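Sharan's point can be demonstrated directly. A minimal sketch (the coefficient values are arbitrary): the single vector x1 = x2 = 1 cannot tell the correct equation from the buggy one, while a random stream separates them almost surely.

```python
import numpy as np

K1, K2 = 0.75, -0.25   # hypothetical example coefficients

def golden(x1, x2):
    """The intended equation (golden model)."""
    return K1 * x1 + K2 * x2

def buggy(x1, x2):
    """The buggy implementation: x1 is used twice instead of x2."""
    return K1 * x1 + K2 * x1

# The single vector x1 = x2 = 1 cannot distinguish the two models:
same_on_unit_vector = golden(1.0, 1.0) == buggy(1.0, 1.0)

# A random stream of inputs exposes the bug almost surely:
rng = np.random.default_rng(2)
x1 = rng.standard_normal(1000)
x2 = rng.standard_normal(1000)
mismatch = np.any(golden(x1, x2) != buggy(x1, x2))
```

This is exactly why a random *stream* (as kaz later clarifies) is far stronger than a handful of hand-picked vectors: the probability of a random stream keeping x1 and x2 equal everywhere is negligible.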
>for overflow:
>Don't worry about other complexities of input patterns. all you need is
>scale input to cause some overflow and apply saturation to matlab model
>if applied in rtl otherwise a mismatch will occur.
Again, here I had a doubt, but I did not get input from forum members. I am not sure whether the Matlab model has to be kept in sync with the C/RTL implementation by applying such saturation/round-off methods. I was hoping that we could still have an ideal implementation in Matlab, measure the SNR for a chosen test input, and make sure it is within a tolerable level. The source of noise in this case is quantization (finite word width) in the fixed-point implementation.

I would like to hear from other experts here ...
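The SNR-based acceptance idea above can be sketched as follows (the word width and the dB threshold are illustrative assumptions, not recommendations): keep the ideal floating-point model as-is, quantize the output as the fixed-point design would, and measure the resulting signal-to-noise ratio.

```python
import numpy as np

def snr_db(ref, test):
    """SNR of `test` relative to the ideal reference `ref`, in dB."""
    noise = ref - test
    return 10.0 * np.log10(np.sum(ref ** 2) / np.sum(noise ** 2))

FRAC = 12  # hypothetical number of fractional bits at the output

rng = np.random.default_rng(3)
x = rng.uniform(-1.0, 1.0, 4096)
h = [0.2] * 5

y_ref = np.convolve(x, h)[:len(x)]                   # ideal model output
y_fix = np.rint(y_ref * (1 << FRAC)) / (1 << FRAC)   # quantized output

# Accept the implementation if snr_db(y_ref, y_fix) exceeds a tolerance
```

Note this checks whether quantization noise is tolerable against an *ideal* model; as kaz argues in the next post, proving the RTL bit-exact requires a bit-true model instead.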
>Here is where I have to disagree with you.
>Just take a simple equation,
>
>y = K1*x1 + K2*x2;
>
>Assume, I choose to use random method. And, through randomization, I get
>one vector as follows,
>
>[x1 = 1 x2 = 1]
Your assumption that this input is a random stream is wrong. By a random vector I mean a stream of thousands or tens of thousands of samples, so your example stimulus is certainly not enough.
>>for overflow:
>>Don't worry about other complexities of input patterns. all you need is
>>scale input to cause some overflow and apply saturation to matlab model
>>if applied in rtl otherwise a mismatch will occur.
The Matlab model must respect rounding and saturation as done in the RTL, otherwise you will get mismatches. What is stopping you from having a bit-true model of your design? We are talking about a bit-true model vs. the RTL. The RTL must also respect internal bit growth fully (products and their sum, and the final division).

Kaz
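The bit-growth requirement kaz mentions can be written down as a small rule of thumb (a sketch using the conservative convention that a signed-by-signed product needs the sum of the operand widths):

```python
import math

def fir_acc_bits(b_x, b_h, n_taps):
    """Accumulator width (in bits) that can never overflow for an FIR
    with n_taps taps, b_x-bit signed samples and b_h-bit signed coeffs.
    Each signed product needs b_x + b_h bits, and summing n_taps of
    them adds ceil(log2(n_taps)) guard bits."""
    return b_x + b_h + math.ceil(math.log2(n_taps))

# e.g. 16-bit samples, 16-bit coefficients, 5 taps:
fir_acc_bits(16, 16, 5)  # -> 35
```

If the RTL accumulator is at least this wide, intermediate overflow is impossible and only the final rounding/saturation back to the output width needs to be matched in the bit-true model.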
On Wed, 07 Oct 2015 03:01:39 -0500, Sharan123 wrote:

>>> I really doubt if this method will ever help in case of algorithm
>>> validation. A filter is a frequency selective function and testing with
>>> random data will hardly test the function. In fact, we will never know
>>> if enough testing has been done. If we can't define what is sufficient
>>> for testing, it is hard to say when enough testing has been done.
>>
>>I agree, more or less. You can put in white noise and verify that the
>>output has the right spectrum, but this won't help you with all possible
>>bugs.
>
> Dear Tim,
>
> Please help me understand. Is it a good idea to add a bit of white noise
> on top of a test signal and check the spectrum? Or would doing this with
> just an impulse input and checking provide sufficient insight into the
> filter behavior?
I was really only arguing over nits: the claim was that you can't test a filter with a noise input and looking at the spectrum. I was only saying that you can do basic verification by putting in white noise and looking at the resulting output -- but that's all.

In my opinion there are better ways to test; testing a filter against white noise may root out some bugs in the code that would not otherwise be found, but it's not the first thing I'd do.

--
Tim Wescott
Wescott Design Services
http://www.wescottdesign.com
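Tim's "white noise in, check the spectrum" test can be sketched with SciPy (the segment length and the filter are arbitrary illustrative choices): estimate the output power spectral density with Welch's method and compare its shape against |H(f)|^2 from the known coefficients.

```python
import numpy as np
from scipy.signal import lfilter, welch, freqz

h = [0.2] * 5                        # 5-tap averager under test
rng = np.random.default_rng(4)
x = rng.standard_normal(1 << 16)     # white noise, unit variance
y = lfilter(h, 1.0, x)

# One-sided PSD estimate of the output (fs = 1 sample/sample)
f, pyy = welch(y, fs=1.0, nperseg=1024)

# Expected shape: for unit-variance white input, the one-sided output
# PSD should track 2*|H(f)|^2
_, H = freqz(h, 1.0, worN=f, fs=1.0)
```

As Tim says, this only does basic verification: the PSD estimate should follow the designed magnitude response (e.g. dip at the averager's null near f = 0.2), but matching the spectrum alone will not catch every implementation bug.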