Reply by Tim Wescott October 7, 2015
On Wed, 07 Oct 2015 03:01:39 -0500, Sharan123 wrote:

>>> I really doubt if this method will ever help in case of algorithm
>>> validation. A filter is a frequency-selective function and testing with
>>> random data will hardly test the function. In fact, we will never know if
>>> enough testing has been done. If we can't define what is sufficient for
>>> testing, it is hard to say when enough testing has been done.
>>
>> I agree, more or less. You can put in white noise and verify that the
>> output has the right spectrum, but this won't help you with all possible
>> bugs.
>
> Dear Tim,
>
> Please help me understand. Is it a good idea to add a bit of white noise
> on top of the test signal and check the spectrum? Or would doing this with
> just an impulse input and checking provide sufficient insight into the
> filter behavior?
I was really only arguing over nits: the claim was that you can't test a
filter with noise input and looking at the spectrum. I was only saying that
you can do basic verification by putting in white noise and looking at the
resulting output -- but that's all.

In my opinion there are better ways to test; testing a filter against white
noise may root out some bugs in the code that would not otherwise be found,
but it's not the first thing I'd do.

--
Tim Wescott
Wescott Design Services
http://www.wescottdesign.com
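[Editor's note: not from Tim's post -- a minimal Python sketch of the white-noise sanity check he describes. For white input, the output power of an FIR equals the input power times the sum of the squared coefficients (by Parseval), so a gross spectral error in the implementation shows up immediately. The 5-tap averager coefficients are the ones used elsewhere in the thread.]

```python
import random

# Sanity check: feed white noise through an FIR and compare the measured
# output/input power ratio against the theoretical value sum(h[k]^2).
h = [0.2, 0.2, 0.2, 0.2, 0.2]

def fir(h, x):
    # Direct-form FIR convolution; indices before the start are treated as zero.
    return [sum(h[k] * x[n - k] for k in range(len(h)) if n - k >= 0)
            for n in range(len(x))]

random.seed(42)
x = [random.gauss(0.0, 1.0) for _ in range(20000)]   # white Gaussian noise
y = fir(h, x)

power_in = sum(v * v for v in x) / len(x)
power_out = sum(v * v for v in y) / len(y)
ratio = power_out / power_in
expected = sum(c * c for c in h)     # 0.2 for this averager

print(ratio)   # should land close to the theoretical 0.2
```

As Tim says, this only catches gross errors -- a filter with the right total output power can still be wrong in many other ways.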
Reply by kaz October 7, 2015
> Here is where I have to disagree with you.
> Just take a simple equation,
>
> y = K1*x1 + K2*x2;
>
> Assume, I choose to use random method. And, through randomization, I get
> one vector as follows,
>
> [x1 = 1 x2 = 1]
Your assumption of this input as a random stream is wrong. By a random
vector I mean a stream of thousands or tens of thousands of samples, so
your example stimulus is certainly not enough.
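[Editor's note: a quick Python illustration of kaz's point, using Sharan's own example equation. The single hand-picked vector [x1=1, x2=1] masks the swapped-operand bug, but a long random stream exposes it almost immediately. The coefficient values are arbitrary.]

```python
import random

K1, K2 = 3, 5  # arbitrary example coefficients

def golden(x1, x2):
    return K1 * x1 + K2 * x2      # correct model

def buggy(x1, x2):
    return K1 * x1 + K2 * x1      # bug: x1 used twice instead of x2

# The single hand-picked vector cannot tell the two apart:
assert golden(1, 1) == buggy(1, 1)

# A stream of random samples mismatches whenever x1 != x2:
random.seed(0)
mismatches = 0
for _ in range(10000):
    x1 = random.randint(-128, 127)
    x2 = random.randint(-128, 127)
    if golden(x1, x2) != buggy(x1, x2):
        mismatches += 1

print(mismatches)   # nearly all 10000 samples mismatch
```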
>> for overflow:
>> Don't worry about other complexities of input patterns. all you need is
>> scale input to cause some overflow and apply saturation to matlab model
>> if applied in rtl otherwise a mismatch will occur.
The Matlab model must respect rounding and saturation as done in the RTL,
otherwise you will get mismatches. What is stopping you from having a
bit-true model of your design? We are talking about a bit-true model vs.
RTL. The RTL must also respect internal bit growth fully (products and
their sum, and the final division).

Kaz
---------------------------------------
Posted through http://www.DSPRelated.com
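[Editor's note: a minimal sketch, with assumed word lengths, of what "bit-true" means here: the software model applies the same rounding and saturation as the RTL datapath, so the two match bit for bit.]

```python
def quantize(value, frac_bits, word_bits):
    """Round to frac_bits fractional bits, then saturate to a signed
    word_bits-wide range -- mirroring an RTL rounding/saturation stage."""
    scaled = int(round(value * (1 << frac_bits)))   # round to nearest
    lo = -(1 << (word_bits - 1))                    # most negative code
    hi = (1 << (word_bits - 1)) - 1                 # most positive code
    return max(lo, min(hi, scaled))                 # saturate, never wrap

# Example with an 8-bit word and 4 fractional bits:
print(quantize(0.5, 4, 8))     # 0.5 * 16 = 8, in range
print(quantize(100.0, 4, 8))   # would be 1600; saturates to 127
print(quantize(-100.0, 4, 8))  # saturates to -128
```

If the RTL truncates instead of rounding to nearest, or wraps instead of saturating, the model must do the same -- that choice, not the ideal math, is what the comparison checks.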
Reply by Sharan123 October 7, 2015
> Once you are happy you then implement it e.g. in rtl
> At this stage you can if you wish repeat all tests you did on matlab model
> but this is not necessary since you have done that already. What you want
> is prove that your rtl is indeed same so random input is enough.
Here is where I have to disagree with you. Just take a simple equation,

y = K1*x1 + K2*x2;

This is implemented in Matlab (our golden model) and in C/RTL (for
example). I have tested this in Matlab and things are fine. Now I have to
test the C/RTL. Assume I choose to use the random method, and through
randomization I get one vector as follows,

[x1 = 1 x2 = 1]

Note: I have shown one vector just as an example. In reality, while testing
a complex function, one is likely to choose only a subset of all possible
vectors for a given function, and one vector is an extreme example of that.

Now, while implementing in C/RTL, we might have introduced a bug and
implemented the equation as follows,

y = K1*x1 + K2*x1 (note x1 is repeated instead of x2)

If you use the above vector, you would end up with y = K1 + K2 through both
the Matlab and the C/RTL implementation, even though there is an obvious
bug in the C/RTL. This is just an example, and an extreme one at that, but
if you are testing a complex function and choosing vectors through the
randomization method, you will end up with such situations.
> for overflow:
> Don't worry about other complexities of input patterns. all you need is
> scale input to cause some overflow and apply saturation to matlab model
> if applied in rtl otherwise a mismatch will occur.
Again here, I had a doubt but did not get input from forum members. I am
not too sure that the Matlab model has to be kept in sync with the C/RTL
implementation by applying such saturation/round-off methods. I was hoping
that we could still have an ideal implementation in Matlab, measure the SNR
for a chosen test input, and make sure it is within a tolerable level. The
source of noise in this case is quantization (finite word width) in the
fixed-point implementation.

I would like to hear from other experts here ...

---------------------------------------
Posted through http://www.DSPRelated.com
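[Editor's note: a sketch of the SNR check Sharan describes -- run the same sine input through an ideal floating-point filter and a coefficient-quantized one, and treat the difference as quantization noise. The word width and tolerance are illustrative, not from the thread.]

```python
import math

h = [0.2, 0.2, 0.2, 0.2, 0.2]     # ideal 5-tap moving average
FRAC = 8                          # assumed fractional bits for the "design"
hq = [round(c * (1 << FRAC)) / (1 << FRAC) for c in h]  # quantized coeffs

def fir(h, x):
    # Direct-form FIR; indices before the start are treated as zero.
    return [sum(h[k] * x[n - k] for k in range(len(h)) if n - k >= 0)
            for n in range(len(x))]

# Passband sine test input, per Sharan's suggestion:
x = [math.sin(2 * math.pi * 0.01 * n) for n in range(1024)]
y_ideal = fir(h, x)               # ideal Matlab-style reference
y_fixed = fir(hq, x)              # fixed-point-like implementation

sig = sum(v * v for v in y_ideal)
err = sum((a - b) ** 2 for a, b in zip(y_ideal, y_fixed))
snr_db = 10 * math.log10(sig / err)

print(snr_db)   # should be comfortably high for 8 fractional bits
```

The catch, as kaz points out above, is choosing the SNR threshold: a pass/fail bit-true comparison needs no threshold at all, whereas an SNR check needs a justified tolerance for every test input.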
Reply by kaz October 7, 2015
> I apologize if my original post was ambiguous. Here I am re-iterating my
> understanding,
> 1) you design a Filter with whatever method you are comfortable with
> 2) implement an ideal filter in Matlab
> 3) translate this into a fixed point in a) SW world or b) into HW (RTL
> implementation)
>
> I am more interested in verifying 3b) above as VLSI is my area of work.
>
> My understanding is that, as we move from 1 to 2 to 3, the rigour with
> which we verify reduces. What this means is that we choose less and less
> tests (or the subset of tests reduces) as we move from 1 to 2 to 3, but
> that does not take away the fact that we are still functionally dealing
> with a block that is more algorithmic in nature. Hence a pure random test
> may be sufficient, but may not be sufficient also. This we will never
> know, unless we want to do the job of analyzing the random data and see
> if the stimulus contained all necessary types to sufficiently test the
> filter. When I say sufficiently here, it does not mean exhaustive testing
> but what is minimally needed to check the implementation.
>
> Hence a minimal set of tests that verify the key design aspects of a
> filter is needed. In fact, due to fixed point implementation, we might
> add a few more tests to make sure that quantization effects are not
> degrading the performance.
>
> Let me know what you think.
At the start you design your filter on a PC (software, e.g. Matlab's
"filter"). You convert inputs and coefficients to fixed point when applying
"filter". So now you have the model of your design on the PC. You can study
it as you like, e.g. apply an impulse, a frequency sweep, multiple
sinusoids, or random data, etc., but an impulse input is more than enough
for frequency study.

Once you are happy you then implement it, e.g. in RTL. At this stage you
can, if you wish, repeat all the tests you did on the Matlab model, but
this is not necessary since you have done that already. What you want is to
prove that your RTL is indeed the same, so random input is enough.

For overflow: don't worry about other complexities of input patterns. All
you need is to scale the input to cause some overflow, and apply saturation
to the Matlab model if it is applied in the RTL, otherwise a mismatch will
occur.

For rounding: apply the same as per the RTL. In fact, many designers do not
let their input reach overflow, as saturation still leads to nonlinearity.

If you study the functionality of your filter at the RTL level only, how
would you know that it is implemented correctly in the first place? You may
have bugs that mislead you about the frequency response.

Kaz
---------------------------------------
Posted through http://www.DSPRelated.com
Reply by Sharan123 October 7, 2015
> Now you are talking about algorithm validation. Your first post is about
> proving implementation. The averaging filter can be studied in its
> mathematical model before any form of implementation e.g. (Matlab function
> "filter"):
>
> x = [1 zeros(1,1024)];
> h = [.2,.2,.2,.2,.2];
> y = filter(h,1,x);
>
> then study frequency domain of y or its time domain. you will also need to
> look after quantisation issues.
>
> so you can use this function to see the response and then implement it and
> test your implementation with random data through both model and design:
>
> x = randn(1,1024);
> yref = filter(h,1,x);
>
> yref should be same as your design output taking into consideration bit
> resolution issues and overflow.
Dear Kaz,

I apologize if my original post was ambiguous. Here I am re-iterating my
understanding:

1) you design a filter with whatever method you are comfortable with
2) implement an ideal filter in Matlab
3) translate this into fixed point, either a) in the SW world or b) into
HW (an RTL implementation)

I am more interested in verifying 3b) above, as VLSI is my area of work.

My understanding is that as we move from 1 to 2 to 3, the rigour with which
we verify reduces. What this means is that we choose fewer and fewer tests
(the subset of tests shrinks as we go from 2 to 3), but that does not take
away the fact that we are still functionally dealing with a block that is
more algorithmic in nature. Hence a pure random test may be sufficient, but
it may not be sufficient either. We will never know, unless we want to do
the job of analyzing the random data and seeing whether the stimulus
contained all the types necessary to sufficiently test the filter. When I
say sufficiently here, I do not mean exhaustive testing, but what is
minimally needed to check the implementation.

Hence a minimal set of tests that verify the key design aspects of a filter
is needed. In fact, due to the fixed-point implementation, we might add a
few more tests to make sure that quantization effects are not degrading the
performance.

Let me know what you think.

---------------------------------------
Posted through http://www.DSPRelated.com
Reply by Sharan123 October 7, 2015
> I mentioned it and then Robert equationified it. It works best if
> you use it with Full Scale inputs, i.e.,
>
> x[n] = A*sgn{ h[n0-n] }
>
> where A can be +FS, -FS, or even -(+FS).
>
> I've found problems that way before.
Thanks

---------------------------------------
Posted through http://www.DSPRelated.com
Reply by Sharan123 October 7, 2015
>> I really doubt if this method will ever help in case of algorithm
>> validation. A filter is frequency selective function and testing with
>> random data will hardly test the function. In fact, we will never know if
>> enough testing has been done. If we cant define what is sufficient for
>> testing, it is hard to say when enough testing has been done.
>
> I agree, more or less. You can put in white noise and verify that the
> output has the right spectrum, but this won't help you with all possible
> bugs.
Dear Tim,

Please help me understand. Is it a good idea to add a bit of white noise on
top of the test signal and check the spectrum? Or would doing this with
just an impulse input and checking the output provide sufficient insight
into the filter behavior?

Also, when you say "spectrum", do you mean the frequency-domain
representation?

---------------------------------------
Posted through http://www.DSPRelated.com
Reply by kaz October 6, 2015
> Hello,
>
> Thanks a lot for the responses. I am consolidating my comments in this
> single post ...
>
>> The one thing missing from your question, which boggles my mind a bit
>> since you're talking about testing, is the specified behavior of the
>> filter.
>
>> All you need some random data as input and a simple fir
>> mathematical model that is proven (Matlab) and check output of your
>> design Vs model output.
>
> I really doubt if this method will ever help in case of algorithm
> validation. A filter is frequency selective function and testing with
> random data will hardly test the function. In fact, we will never know if
> enough testing has been done. If we cant define what is sufficient for
> testing, it is hard to say when enough testing has been done.
Now you are talking about algorithm validation. Your first post was about
proving the implementation. The averaging filter can be studied in its
mathematical model before any form of implementation, e.g. with the Matlab
function "filter":

x = [1 zeros(1,1024)];
h = [.2,.2,.2,.2,.2];
y = filter(h,1,x);

Then study the frequency domain of y, or its time domain. You will also
need to look after quantisation issues.

So you can use this function to see the response, then implement it and
test your implementation with random data through both the model and the
design:

x = randn(1,1024);
yref = filter(h,1,x);

yref should be the same as your design output, taking into consideration
bit-resolution issues and overflow.

Kaz
---------------------------------------
Posted through http://www.DSPRelated.com
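[Editor's note: a Python stand-in for the "yref = filter(h,1,x)" comparison kaz describes: a floating-point reference FIR checked against a fixed-point "design" on random input, allowing only for quantization-level differences. The 12-bit coefficient width and integer datapath are assumptions for illustration.]

```python
import random

h = [0.2, 0.2, 0.2, 0.2, 0.2]
FRAC = 12
hq = [round(c * (1 << FRAC)) for c in h]   # integer coefficients

def fir_ref(h, x):
    # Floating-point reference model (the "yref" side).
    return [sum(h[k] * x[n - k] for k in range(len(h)) if n - k >= 0)
            for n in range(len(x))]

def fir_fixed(hq, x, frac):
    # Integer multiply-accumulate with a final shift, as an RTL datapath
    # might do it. The accumulator grows freely, then truncates.
    out = []
    for n in range(len(x)):
        acc = sum(hq[k] * x[n - k] for k in range(len(hq)) if n - k >= 0)
        out.append(acc >> frac)            # truncating "division"
    return out

random.seed(1)
x = [random.randint(-1000, 1000) for _ in range(1024)]
yref = fir_ref(h, x)
ydut = fir_fixed(hq, x, FRAC)

max_err = max(abs(a - b) for a, b in zip(yref, ydut))
print(max_err)   # small: only coefficient-quantization and truncation error
```

This is the non-bit-true variant (ideal reference plus a tolerance); making the reference apply the same `>> FRAC` truncation would turn it into the bit-true, zero-tolerance comparison kaz advocates.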
Reply by Eric Jacobsen October 6, 2015
On Tue, 06 Oct 2015 13:37:17 -0500, "Sharan123" <99077@DSPRelated>
wrote:

> Hello,
>
> Thanks a lot for the responses. I am consolidating my comments in this
> single post ...
>
>> The one thing missing from your question, which boggles my mind a bit
>> since you're talking about testing, is the specified behavior of the
>> filter.
>
> In this specific case, I have not started with filter design
> specifications but I could have very well done that. I have deliberately
> chosen a simple filter to get my basics about filter verification right.
>
>> Shouldn't testing be making sure that the filter behaves as specified?
>> So -- how is the filter specified?
>
> Sure. Assume we do have filter specifications; then I would like to get
> inputs on how to formalize verification of that filter.
>
>> That it does not exhibit more than specified quantization anomalies (or
>> noise, if you want to spec it that way). Strictly speaking this comes
>> under nonlinear behavior, but most people separate the two.
>
> Correct. How can I verify to make sure that the quantization effect is
> not degrading the filter performance? This could be due to less than word
> size or a bug in the filter implementation ...
>
> Should I pass a sine waveform with frequency within the pass band and
> then measure the SNR of ideal output vs actual output? The ideal output
> is from a floating point Matlab implementation.
>
>> All you need some random data as input and a simple fir
>> mathematical model that is proven (Matlab) and check output of your
>> design Vs model output.
>
> I really doubt if this method will ever help in case of algorithm
> validation. A filter is a frequency selective function and testing with
> random data will hardly test the function. In fact, we will never know if
> enough testing has been done. If we can't define what is sufficient for
> testing, it is hard to say when enough testing has been done.
>
>> To that it's probably a good idea to add an input stimulus purposely
>> designed to cause overflow (if the filter is designed to allow it) to
>> make sure that it is handled correctly.
>
> Agree. But assuming an FIR filter and fixed point implementation, what is
> this special input I can pass to make sure that data is not getting
> saturated inside the filter? Can you throw some light or give an example?
It's been mentioned several times; essentially:

x[n] = A*sgn{ h[n0-n] }

where A can be +FS, -FS, or even -(+FS).

As Robert mentioned, the length of x[n] must be at least as long as h[n] to
get the full effect. Once the full-length sequence has been cycled in, any
saturation or overflow problems will be evident. The arithmetic inside the
filter can never exceed those particular cases, as those are the
pathological worst cases for saturation or overflow.
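[Editor's note: a sketch of the worst-case stimulus Eric and Robert describe, with illustrative mixed-sign integer taps and an 8-bit full scale. Driving the FIR with x[n] = A*sgn(h[n0-n]) makes every tap contribute with the same sign, so the accumulator hits its absolute bound A*sum(|h[k]|).]

```python
h = [3, -5, 7, -5, 3]          # illustrative integer taps, mixed signs
A = 127                        # full scale for a signed 8-bit input

def sgn(v):
    return (v > 0) - (v < 0)

L = len(h)
# Worst-case input: full-scale samples matching the time-reversed sign
# pattern of the impulse response (here n0 = L-1):
x = [A * sgn(h[L - 1 - n]) for n in range(L)]

# Accumulator value once the full-length sequence has cycled in:
acc = sum(h[k] * x[L - 1 - k] for k in range(L))
print(acc)                                       # 2921
print(acc == A * sum(abs(c) for c in h))         # True: equals A*sum(|h|)
```

Sizing the accumulator for A*sum(|h[k]|) guarantees no internal overflow for any input; this stimulus simply demonstrates that the bound is actually reached.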
>> For example, a string of input samples at max value can help reveal
>> overflow issues, likewise a string of input samples at min value.
>> For filters with coefficients with differing signs, a string of
>> max-valued input samples with signs arranged to yield the largest
>> possible output can also be revealing.
>
> Thanks
>
>> Whenever possible, I will generate a curve over the entire
>> dynamic range -- that is, sweep the magnitude of the test signal
>> applied to the inputs from "below noise" to "above saturation", and
>> graph the RMS error (expressed in dBc) vs. input level (expressed in
>> dB).
>
>> Such a graph should have a wide region where the dBc error is
>> satisfactorily low, corresponding to the operating region, and will have
>> degraded (higher) dBc error at both low and high input levels,
>> corresponding to quantization and saturation/overflow errors
>> respectively.
>
>> Good test signals are sinusoids, and also lowpass-filtered white noise.
>
> Thanks
>
>> i.e.
>>
>> x[n] = sgn{ h[n0-n] }
>>
>> for 0 < n <= n0
>>
>> and n0 >= L, the length of the FIR.
>>
>> to me, that (assuming that max |x[n]| = 1) would decidedly be the acid
>> test for detecting overflow.
>
> Thanks
>
>> For FIR, it seems to me that for each tap you can figure out
>> the largest value that can be added to the total. For IIR, it
>> isn't so obvious, but maybe not so hard.
>
>> Strange thought, if you take a FIR filter, then take the absolute
>> value of all the coefficients, what do you call the result?
>
> In the case of a moving average filter the sum is 1, but I don't think
> this is the case with other filters. In fact, I have a question: filters
> such as moving average filters don't act like filters intuitively,
> because they really don't remove anything from the input -- but that
> topic is for another day ...
>
>> A bit of analysis, if you consider that part of "verification", wouldn't
>> go amiss either, although that really should be done as part of the
>> design process. Having a second person going over your math, or even
>> working through it independently, isn't a bad idea at all.
>
> My situation is different. I verify filters designed by someone else.
>
> ---------------------------------------
> Posted through http://www.DSPRelated.com
Eric Jacobsen
Anchor Hill Communications
http://www.anchorhill.com
Reply by Eric Jacobsen October 6, 2015
On Mon, 05 Oct 2015 21:51:59 -0500, Tim Wescott <tim@seemywebsite.com>
wrote:

> On Tue, 06 Oct 2015 01:42:08 +0000, glen herrmannsfeldt wrote:
>
>> Steve Pope <spope33@speedymail.org> wrote:
>>> Tim Wescott <seemywebsite@myfooter.really> wrote:
>>
>>>> To that it's probably a good idea to add an input stimulus purposely
>>>> designed to cause overflow (if the filter is designed to allow it) to
>>>> make sure that it is handled correctly.
>>
>>> Whenever possible, I will generate a curve over the entire dynamic
>>> range -- that is, sweep the magnitude of the test signal applied to the
>>> inputs from "below noise" to "above saturation", and graph the RMS
>>> error (expressed in dBc) vs. input level (expressed in dB).
>>
>> For FIR, it seems to me that for each tap you can figure out the largest
>> value that can be added to the total. For IIR, it isn't so obvious, but
>> maybe not so hard.
>
> I had been thinking that it was damned near impossible, then I realized
> that you just need to take RBJ's idea and excite it with the sign
> function of the impulse response:
>
> x[n] = sgn{ h[n0-n] }
>
> That'll take it the closest to hitting the rails.
I mentioned it and then Robert equationified it. It works best if you use
it with Full Scale inputs, i.e.,

x[n] = A*sgn{ h[n0-n] }

where A can be +FS, -FS, or even -(+FS).

I've found problems that way before.

Eric Jacobsen
Anchor Hill Communications
http://www.anchorhill.com