
How to calculate SNR for a signal that can only be detected together with noise?

Started by abracadabra April 23, 2007
Hi dspers,

I was discussing with some buddies how to evaluate the efficiency and performance of
a noise cancellation algorithm using SNR and MSE.
Someone suggested that for a signal we can only ever observe together with noise,
we cannot know the exact pure source, so we have to use a known
source, add some kind of synthesized noise, and verify the algorithm on that.
However, it is hard to guarantee that the synthesized noise has the same
properties as the noise within the detected signal. It may also take
a lot of experiments to examine the algorithm properly.

Someone claimed that the best-known people in the field do exactly this, as if they
already knew the exact properties of the noise. I do not know whether
that is true. So, how do you evaluate a noise cancellation
algorithm in such a case?

Thank you
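
[For concreteness, here is a minimal sketch in Python/NumPy of the SNR and MSE
figures of merit the post asks about. Both require access to the clean reference
signal, which is exactly why a known source plus synthesized noise is proposed.
The function names are illustrative, not from any particular toolbox.]

import numpy as np

def mse(clean, estimate):
    """Mean squared error between the clean reference and the estimate."""
    clean = np.asarray(clean, dtype=float)
    estimate = np.asarray(estimate, dtype=float)
    return float(np.mean((clean - estimate) ** 2))

def snr_db(clean, estimate):
    """Output SNR in dB: clean-signal power over residual-error power."""
    clean = np.asarray(clean, dtype=float)
    estimate = np.asarray(estimate, dtype=float)
    err_power = np.mean((clean - estimate) ** 2)
    sig_power = np.mean(clean ** 2)
    return float(10.0 * np.log10(sig_power / err_power))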

abracadabra <jerry_cq_cn@yahoo.com> writes:

Your buddies are right. You may not be able to simulate the noise precisely,
but it is usually much more fruitful to develop such an algorithm by creating
"fake" signals (plus noise) and testing against them, rather than testing it
immediately with real-world signals.

To put it another way, you can find a lot of problems with your algorithm just
by using a fake signal, and since you have perfect knowledge and control of the
fake signal, you have a much better (or perfect) idea of what the expected
output is.

--
Randy Yates, Fuquay-Varina, NC
<yates@ieee.org>
http://home.earthlink.net/~yatescr
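
[A minimal sketch of the "fake signal" test described here, assuming
Python/NumPy: generate a signal you fully control, add synthesized noise, run
the algorithm under test, and score it against the known clean reference. The
moving-average smoother is only a stand-in for a real noise-cancellation
algorithm; the sample rate, tone frequency, and noise level are arbitrary.]

import numpy as np

rng = np.random.default_rng(0)
fs = 8000                                   # assumed sample rate, Hz
t = np.arange(2 * fs) / fs
clean = np.sin(2 * np.pi * 440.0 * t)       # known "pure" source
noisy = clean + 0.3 * rng.standard_normal(clean.size)  # synthesized noise

def denoise(x, taps=9):
    """Stand-in denoiser: a simple moving-average FIR smoother."""
    h = np.ones(taps) / taps
    return np.convolve(x, h, mode="same")

def snr_db(reference, test):
    """SNR of 'test' relative to the known clean 'reference', in dB."""
    err = reference - test
    return 10.0 * np.log10(np.mean(reference ** 2) / np.mean(err ** 2))

estimate = denoise(noisy)
print("input SNR : %5.1f dB" % snr_db(clean, noisy))
print("output SNR: %5.1f dB" % snr_db(clean, estimate))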
Thanks.

It seems that one can construct an elaborate fake signal just to justify
one's own algorithm. Plenty of people promote their "findings" using only
the results that favor their new method and make the alternatives look bad.
The conclusion is then drawn from just a portion of the real data.

abracadabra <jerry_cq_cn@yahoo.com> writes:

There's usually a corpus of standard signals to test by. Have you done a
search, made inquiries, etc.?

--
Randy Yates, Fuquay-Varina, NC
<yates@ieee.org>
http://home.earthlink.net/~yatescr
On Apr 23, 8:17 pm, abracadabra <jerry_cq...@yahoo.com> wrote:
When I left the ITU, we were just beginning to look into this problem. There
has been some research, and algorithms have been devised to measure
performance, in the ITU group that is responsible for coding. However, I
disagree with using the methods they devised, because there is a fundamental
difference between reducing noise for encoding and reducing noise on a
received signal.

I have not followed the latest work done by the ITU in this area closely. I
would suggest looking at the contributions submitted on speech enhancement.
This is the same group that does research on signal processing in the network.

Good luck,
Maurice Givens
On 24 Apr, 04:17, abracadabra <jerry_cq...@yahoo.com> wrote:
If you want to test an algorithm and see how well it works, you need to test
it with simulated data. Only when you simulate the data yourself do you *know*
exactly what signals the data contains, and only then can you compare the
output of your processing routines with the input parameters.

Once I have a working algorithm, I try to make it fail with simulated data. I
break the assumed pre-conditions the algorithm is based on, and see how and
when the algorithm fails. If I see something in real-life data I do not
understand, I check whether there are particular conditions in the measurement
set-up I have not accounted for. If there are, I try to include those
conditions in my simulations, and see if the processing results resemble what
I observed in the measured data. This is a surprisingly efficient method for
diagnostics, if done honestly.

The difficult part of such method evaluation is to remain honest. There are
mainly two pitfalls to watch out for: "prejudice" and "wishful thinking".

"Prejudice" in this context is the mistake of thinking some method will not
work when, in fact, it does. Prejudice is rare; I can only remember having
made that mistake once during the last 15 years. That was in a discussion
about whether some material was a good acoustic conductor or not. Before the
test I had my doubts, but the paying customer insisted on making a test, and
we found that the material was an excellent conductor. Of course, unless
somebody else forces you to test the method you doubt (as happened with me),
you are not likely to discover that you made a mistake of the "prejudice"
type.

"Wishful thinking" is far more common. This is the mistake of thinking a
method works when, in reality, it does not. It seems to me that a lot of
people either do not know or understand the limitations of any given method,
or actively disregard such limitations. Either way, once people have one (or a
few) scenarios where a method works, they stop testing, be it with simulated
or measured data. So they don't know whether the "test" results are
representative of the method, or only flukes. My experience is that mistakes
of the "wishful thinking" type are mainly made by people of high esteem:
people in corporate management, prestigious academic positions, or with
degrees in impressive theoretical subjects. Which, incidentally, are the
people who are least likely to review their own preparation before diving into
any given subject.

Rune
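
[As an illustration of the "try to make it fail" step, here is a sketch
(Python/NumPy again, with the same stand-in moving-average denoiser as above)
that deliberately breaks one assumption: the white noise is replaced with
low-pass colored noise of the same power, and the SNR improvement is compared
in each case. All names and parameters are illustrative, not from any
particular method discussed in this thread.]

import numpy as np

rng = np.random.default_rng(1)
fs = 8000
t = np.arange(2 * fs) / fs
clean = np.sin(2 * np.pi * 440.0 * t)

def denoise(x, taps=9):
    # Stand-in denoiser: moving-average smoother (implicitly assumes the
    # noise is broadband relative to the signal).
    h = np.ones(taps) / taps
    return np.convolve(x, h, mode="same")

def snr_db(reference, test):
    err = reference - test
    return 10.0 * np.log10(np.mean(reference ** 2) / np.mean(err ** 2))

def white_noise(n):
    return rng.standard_normal(n)

def colored_noise(n, taps=50):
    # Low-pass filtered white noise, rescaled to unit variance:
    # breaks the "broadband noise" pre-condition on purpose.
    y = np.convolve(rng.standard_normal(n + taps - 1),
                    np.ones(taps) / taps, mode="valid")
    return y / np.std(y)

for label, make_noise in (("white", white_noise), ("colored", colored_noise)):
    noisy = clean + 0.3 * make_noise(clean.size)
    est = denoise(noisy)
    print("%-7s noise: input %5.1f dB -> output %5.1f dB"
          % (label, snr_db(clean, noisy), snr_db(clean, est)))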