DSPRelated.com
Forums

Purpose of using normalization in Signal Processing

Started by runinrainy December 26, 2014
Hello

Could anyone please explain the purpose of normalizing a signal?

If we have two signals on hand, how is normalization used when comparing
them?

Thanks in advance!
	 

_____________________________		
Posted through www.DSPRelated.com
Probably you are asking for the definition of normalisation. There isn't a
single one, but it has to do with removing a scale-factor effect. For
example, we may rescale so that the amplitude swings between +1 and -1, so
that the power is unity, or so that the phase is zero.

Kaz	 

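For a concrete picture of the amplitude and power rescalings kaz mentions, here is a minimal numpy sketch (the function names are mine, not any standard API):

```python
import numpy as np

def normalize_peak(s):
    """Rescale so the amplitude swings within [-1, +1]."""
    return s / np.max(np.abs(s))

def normalize_power(s):
    """Rescale so the mean-square (power) is unity."""
    return s / np.sqrt(np.mean(s ** 2))

# three cycles of a sine with peak amplitude 5:
x = 5.0 * np.sin(2 * np.pi * 3 * np.arange(100) / 100)

print(np.max(np.abs(normalize_peak(x))))   # peak is now ~1
print(np.mean(normalize_power(x) ** 2))    # mean-square is now ~1
```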
runinrainy wrote:
> Hello
>
> Could anyone please explain the purpose of normalizing a signal?
>
> If we have two signals on hand, how is normalization used when
> comparing them?
>
> Thanks in advance!
I have two suggestions:

1. Follow the links at https://en.wikipedia.org/wiki/Normalization
2. Ponder the definition of "normal".

To encourage an "AHA moment":

a. Investigate the etymology of "mile".
b. Why might I say, "What does the Kelvin (or Rankine) scale have in
   common with your question?"
c. Why would I suggest kaz was leading in the same direction?

HTH
On 12/26/14 10:47 AM, runinrainy wrote:
> Could anyone please explain the purpose of normalizing a signal?
>
> If we have two signals on hand, how is normalization used when
> comparing them?

this is, in my opinion, a really good (and fundamental) question.

the necessity of normalization in comparing two signals should be apparent
when considering the comparison of two signals with greatly different
amplitudes. like what if the two signals differ in amplitude by 60 or 80
dB? then you're comparing the Himalayas to the Poconos.

consider matching a signal, x[n], to a set of templates, v0[n], v1[n],
v2[n], v3[n] ... and you want to know which v[n] in the set "best fits"
x[n].

the first ostensible method of "comparing" is by subtracting one from the
other and looking at what remains. we might expect that if the residual is
small, the comparison is good:

   Qxv[k] = SUM{ (x[n] - v[n+k])^2 }
             n

(i am deliberately playing fast and loose with the limits of the
summation.) some of us would call this the "Average Squared Difference
Function" (ASDF), which is motivated by the 45-year-old "Average Magnitude
Difference Function" (AMDF). we "compare" by subtracting, but we must
treat a negative difference the same as we treat a positive difference;
that's why the difference is either abs() (for AMDF) or squared (for
ASDF). note that Qxv[k] >= 0 and is equal to zero if x[n] and v[n+k] are
precisely the same.

now, for a given lag k, if you have x[n] looking a lot like v[n+k],
including in magnitude, then Qxv[k] will be very small. but if x[n] looks
a lot like v[n+k] but, because of some link in the signal chain, is 60 dB
lower in amplitude, Qxv[k] will *not* be very small and, in fact, will
show very little difference between a "good match" and a "bad match".

let's say that all of the candidate signals in your template set, v0[n],
v1[n], ... are normalized so that they have an equal mean-square:

   SUM{ (v0[n])^2 } = SUM{ (v1[n])^2 } = ...
    n                  n

then you might want to adjust the scaling of the input x[n] to also have
the same mean-square, so that when you compare, there will be a sizable
difference between a good match and a poor match. when you have a good
match, you will know it.
-- 
r b-j  rbj@audioimagination.com
"Imagination is more important than knowledge."
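A hedged numpy sketch of the ASDF comparison above (asdf() and normalize_ms() are illustrative names, not a library API): it matches a 60 dB quieter copy of one template against two unit-mean-square templates, with and without rescaling the input.

```python
import numpy as np

def asdf(x, v):
    """Average Squared Difference Function at a single (zero) lag."""
    return np.mean((x - v) ** 2)

def normalize_ms(s):
    """Rescale to unit mean-square, as suggested above."""
    return s / np.sqrt(np.mean(s ** 2))

n = np.arange(256)
# two templates, normalized to equal (unit) mean-square:
v0 = normalize_ms(np.sin(2 * np.pi * 4 * n / 256))
v1 = normalize_ms(np.sin(2 * np.pi * 9 * n / 256))

x = 0.001 * v0          # really "is" v0, but 60 dB lower in amplitude

# without normalizing x, a good and a bad match are nearly indistinguishable:
print(asdf(x, v0), asdf(x, v1))     # both close to 1

# after matching the mean-square, the good match stands out clearly:
xn = normalize_ms(x)
print(asdf(xn, v0), asdf(xn, v1))   # ~0 versus ~2
```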
robert bristow-johnson wrote:
> [ASDF discussion snipped]
>
> then you might want to adjust the scaling of the input x[n] to also have
> the same mean-square, so that when you compare, there will be a sizable
> difference between a good match and a poor match. when you have a good
> match, you will know it.
When I have to compare two signals (almost always represented as .wav
files, because that's what I most often work with), I find that
deconvolution is pretty useful. If the deconvolution product has but a few
"spikes", then there's good correlation. If it's "flat", then there isn't.

Because many of the signals I am comparing happen to be audio, and often
I'm trying to get them to "line up" in the time domain, the deconvolution
product can tell me how much to "slide" them to make them have the minimum
phase difference. Using the "tallest spike" of the deconvolution product
is a good way to begin an estimate of gain differences.

I have no idea if this helps the OP or not. But I use Voxengo Deconvolver,
which happens to be free.

-- 
Les Cargill
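The "tallest spike" idea can be sketched with a plain FFT deconvolution. This is only an illustration of the approach, not how Voxengo Deconvolver is implemented, and the eps regularization in the spectral division is my own assumption:

```python
import numpy as np

def deconvolve_align(x, y, eps=1e-9):
    """Deconvolve y by x in the frequency domain; return (lag, gain)
    taken from the tallest spike of the deconvolution product."""
    n = len(x)
    X = np.fft.rfft(x)
    Y = np.fft.rfft(y)
    # regularized spectral division, so bins where X is small don't blow up:
    h = np.fft.irfft(Y * np.conj(X) / (np.abs(X) ** 2 + eps), n)
    lag = int(np.argmax(np.abs(h)))   # position of the tallest spike
    gain = h[lag]                     # its height estimates the gain
    return lag, gain

rng = np.random.default_rng(0)
x = rng.standard_normal(1024)
y = 0.5 * np.roll(x, 37)              # y is x slid 37 samples, ~6 dB down

lag, gain = deconvolve_align(x, y)
print(lag, gain)                      # spike at lag 37, height ~0.5
```

With real recordings the spike is smeared by noise and filtering, but its position still tells you how far to "slide" one signal against the other.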

"robert bristow-johnson"  wrote in message 
news:m7kfjp$2is$1@dont-email.me...

> the necessity of normalization in comparing two signals should be
> apparent when considering the comparison of two signals with greatly
> different amplitudes. like what if the two signals differ in amplitude
> by 60 or 80 dB? then you're comparing the Himalayas to the Poconos.

Were you thinking volume? In terms of height, 60 dB is more like comparing
the Himalayas with some smallish pebbles...
> > > "robert bristow-johnson" wrote in message > news:m7kfjp$2is$1@dont-email.me... > > On 12/26/14 10:47 AM, runinrainy wrote: >> >> Could anyone plz explain me what the purpose of normalizing the signal? >> >> If we have two signal on hand, how it is used when comparing these two >> signals? > > this is, in my opinion, a really good (and fundamental) question. > > > the necessity of normalization in comparing two signals should be > apparent when considering the comparison of two signals with greatly > different amplitudes. like what if the two signals differ in amplitude > by 60 or 80 dB? then you're comparing the Himalayas to the Poconos.
On 12/27/14 12:16 PM, Phil Martel wrote:
> Were you thinking volume? In terms of height, 60 dB is more like
> comparing the Himalayas with some smallish pebbles...
well 10^(60/20) is 1000. so i think we're both off (in different
directions: i understated the case and i think you overstated). it's like
comparing Mt Everest to a little bluff of 8 or 10 meters. much bigger than
smallish pebbles and much smaller than the Poconos.

-- 
r b-j  rbj@audioimagination.com
"Imagination is more important than knowledge."

"robert bristow-johnson"  wrote in message 
news:m7pk9k$gfd$1@dont-email.me...

> > > "robert bristow-johnson" wrote in message > news:m7kfjp$2is$1@dont-email.me... > > On 12/26/14 10:47 AM, runinrainy wrote: >> >> Could anyone plz explain me what the purpose of normalizing the signal? >> >> If we have two signal on hand, how it is used when comparing these two >> signals? > > this is, in my opinion, a really good (and fundamental) question. > > > the necessity of normalization in comparing two signals should be > apparent when considering the comparison of two signals with greatly > different amplitudes. like what if the two signals differ in amplitude > by 60 or 80 dB? then you're comparing the Himalayas to the Poconos.
On 12/27/14 12:16 PM, Phil Martel wrote:
> > Were you thinking volume? In terms of height, 60 dB is more like > comparing the Himalayas with some smallish pebbles...
well 10^(60/20) is 1000. so i think we're both off (in different directions, i understated the case and i think you overstated). it's like comparing Mt Everest to a little bluff of 8 or 10 meters. much bigger than smallish pebbles and much smaller than the Poconos. Hmmm, yes I suppose it's more realistic to compare height to voltage where 60 dB => 1000 than to power where 60 dB => 1000000 In any case, Happy Holidays Phil -- r b-j rbj@audioimagination.com "Imagination is more important than knowledge."
> > > "robert bristow-johnson" wrote in message > news:m7pk9k$gfd$1@dont-email.me... > > > well 10^(60/20) is 1000. so i think we're both off (in different > directions, i understated the case and i think you overstated). it's > like comparing Mt Everest to a little bluff of 8 or 10 meters. much > bigger than smallish pebbles and much smaller than the Poconos.
On 12/28/14 9:44 PM, Phil Martel wrote:
> Hmmm, yes, I suppose it's more realistic to compare height to voltage,
> where 60 dB => 1000, than to power, where 60 dB => 1000000.
perhaps not. height is directly proportional to energy. we could compare the velocities of falling stones as if they were voltages. so probably your conceptualization is best.
> In any case, Happy Holidays
and also for you.

-- 
r b-j  rbj@audioimagination.com
"Imagination is more important than knowledge."
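The falling-stone analogy checks out numerically. A small sketch (the 8848 m / 8.848 m heights are just illustrative): potential energy is proportional to height (E = m*g*h), while the impact velocity of a falling stone goes as sqrt(h), so velocities scale like voltages.

```python
import math

g = 9.81                  # m/s^2
h1, h2 = 8848.0, 8.848    # Mt Everest vs. an ~9 m bluff: heights 1000:1

v1 = math.sqrt(2 * g * h1)   # impact velocity from height h1
v2 = math.sqrt(2 * g * h2)   # impact velocity from height h2

print(h1 / h2)    # energy-like ratio: ~1000
print(v1 / v2)    # voltage-like ratio: ~31.6, i.e. sqrt(1000)
```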
On Fri, 26 Dec 2014 15:15:12 -0500, robert bristow-johnson wrote:

> On 12/26/14 10:47 AM, runinrainy wrote:
>> Could anyone please explain the purpose of normalizing a signal?
>>
>> If we have two signals on hand, how is normalization used when
>> comparing them?
>
> this is, in my opinion, a really good (and fundamental) question.
OTOH, I see normalization as a convenience, but not as a necessity.
> the first ostensible method of "comparing" is by subtracting one from
> the other and looking at what remains. we might expect that if the
> residual is small, the comparison is good.
>
>    Qxv[k] = SUM{ (x[n] - v[n+k])^2 }
>              n
However, the formal mathematical way of doing this would be to come up
with a cross-correlation, comparing

   X_11 = sum_n { x[n]^2 }

   X_21 = X_12 = sum_n { x[n] * v[n+k] }

   X_22 = sum_n { v[n+k]^2 }

Then compare the magnitude of X_21 with sqrt(X_11 * X_22). Then you see
that starting by normalizing x and v is just a means of forcing X_11 and
X_22 to be constant numbers (1, if you use your normalization below).
> let's say that all of the candidate signals in your template, v0[n],
> v1[n], ... are all normalized so that they have an equal mean-square:
>
>    SUM{ (v0[n])^2 } = SUM{ (v1[n])^2 } = ...
>     n                  n
>
> then you might want to adjust the scaling of the input x[n] to also have
> the same mean-square, so that when you compare, there will be a sizable
> difference between a good match and a poor match. when you have a good
> match, you will know it.
Moreover, you may wish to normalize for different reasons at different
times. When I'm doing my calculations using fractional arithmetic, I tend
to normalize everything by the input or output range of my data
converters. Sometimes I'll have to further normalize to make sure that
intermediate calculations do not overflow -- in these cases I treat the
normalization on a case-by-case basis.

But never, when I'm doing normalizations, do I feel like I'm doing
anything earth-shattering.

-- 
Tim Wescott
Wescott Design Services
http://www.wescottdesign.com
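Tim's three sums are easy to write out directly. A sketch with my own function name, using a circular shift to stand in for the loose summation limits: the ratio X_12 / sqrt(X_11 * X_22) is scale-invariant, so a 60 dB amplitude difference simply drops out.

```python
import numpy as np

def normalized_xcorr(x, v, k):
    """The three sums at lag k; variable names follow X_11, X_12, X_22."""
    vk = np.roll(v, -k)                  # vk[n] = v[n+k] (circular shift)
    X11 = np.sum(x * x)
    X12 = np.sum(x * vk)                 # X_21 = X_12
    X22 = np.sum(vk * vk)
    return X12 / np.sqrt(X11 * X22)      # within [-1, +1] by Cauchy-Schwarz

n = np.arange(256)
v = np.sin(2 * np.pi * 5 * n / 256)
x = 0.001 * np.roll(v, 10)               # v delayed 10 samples, 60 dB down

# the scale factor cancels: normalization is built into the measure
print(normalized_xcorr(x, v, -10))       # ~1.0 at the matching lag
```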