Reply by Steve Pope January 24, 2017
<pedro1492@lycos.com> wrote:

> I came across "normalized RMS", defined as
> RMS(a-b)/(RMS(a)+RMS(b))
> as a measure of similarity. It is obviously zero for identical
> signals, and increases with difference. If a or b is zero,
> then it is 1. If a and b are both random noise, it will be 0.7
> (half the square root of 2). It is also sensitive to amplitudes:
> if b=a/2, then normalized RMS is 2/3.
> That contrasts with normalised cross-correlation, which is not
> swayed by amplitudes, rather measuring phase similarity.
>
> Does anybody use normalized RMS?
Not me. I'm more likely to use RMS(a-b)/RMS(a), where RMS(b) is the same as or nearly the same as RMS(a). Then it is of the same form as common measurements such as THD, EVM, or inverse SNR. Your "normalized RMS" does have a bit of symmetry going for it, however.

Steve
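Steve's RMS(a-b)/RMS(a) form can be checked numerically. A minimal NumPy sketch (the helper names here are mine, not from the thread), treating a as the reference signal and b as a distorted copy:

```python
import numpy as np

def rms(x):
    """Root-mean-square of a signal."""
    return np.sqrt(np.mean(np.square(x)))

def relative_rms_error(a, b):
    """RMS(a-b)/RMS(a): error relative to the reference a,
    the same form as THD, EVM, or inverse SNR."""
    return rms(a - b) / rms(a)

t = np.linspace(0.0, 1.0, 1000, endpoint=False)
a = np.sin(2 * np.pi * 5 * t)                  # reference tone
b = a + 0.01 * np.sin(2 * np.pi * 50 * t)      # plus a small distortion tone

print(relative_rms_error(a, b))                # ~0.01, i.e. a 1% error
```

Because the distortion tone is orthogonal to the reference over the window, the result is exactly the distortion-to-signal RMS ratio, which is why this form matches THD-style measurements.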
Reply by Marcel Mueller January 24, 2017
On 24.01.17 11.06, pedro1492@lycos.com wrote:
> I came across "normalized RMS", defined as
> RMS(a-b)/(RMS(a)+RMS(b))
> as a measure of similarity. It is obviously zero for identical
> signals, and increases with difference. If a or b is zero,
> then it is 1. If a and b are both random noise, it will be 0.7
> (half the square root of 2). It is also sensitive to amplitudes:
> if b=a/2, then normalized RMS is 2/3.
> That contrasts with normalised cross-correlation, which is not
> swayed by amplitudes, rather measuring phase similarity.
The above is sensitive to phase too. Very sensitive, actually. E.g. with 180° phase: a = -b => 2
> Does anybody use normalized RMS?
> I am just curious about the name. I would have called it "normalized difference".
I have never used this. However, the RMS(a-b) term is quite common as chi² in linear or nonlinear optimizations.

Marcel
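Marcel's point about RMS(a-b) appearing as the chi-square term in optimization can be illustrated with an ordinary least-squares line fit. A sketch (the data and variable names are invented here for illustration): the fit minimizes the sum of squared residuals between data and model, which is chi² for unit noise weights, and minimizing that sum is the same as minimizing the RMS of the residual.

```python
import numpy as np

# Synthetic data: a line y = 2x + 1 plus small Gaussian noise.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
y = 2.0 * x + 1.0 + 0.05 * rng.standard_normal(50)

slope, intercept = np.polyfit(x, y, 1)       # linear least-squares fit
residuals = y - (slope * x + intercept)      # data minus model

chi2 = np.sum(residuals ** 2)                # the quantity the fit minimizes
rms_residual = np.sqrt(np.mean(residuals ** 2))

print(slope, intercept, chi2, rms_residual)
```

The recovered slope and intercept land close to 2 and 1, and the residual RMS is on the order of the injected noise level.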
Reply by Tim Wescott January 24, 2017
On Tue, 24 Jan 2017 02:06:04 -0800, pedro1492 wrote:

> I came across "normalized RMS", defined as RMS(a-b)/(RMS(a)+RMS(b))
> as a measure of similarity. It is obviously zero for identical signals,
> and increases with difference. If a or b is zero,
> then it is 1. If a and b are both random noise, it will be 0.7 (half
> the square root of 2). It is also sensitive to amplitudes:
> if b=a/2, then normalized RMS is 2/3.
> That contrasts with normalised cross-correlation, which is not swayed by
> amplitudes, rather measuring phase similarity.
>
> Does anybody use normalized RMS?
> I am just curious about the name. I would have called it "normalized
> difference".
Not me, and I'd use "normalized difference", too, I think.

--
Tim Wescott
Control systems, embedded software and circuit design

I'm looking for work! See my website if you're interested
http://www.wescottdesign.com
Reply by January 24, 2017
I came across "normalized RMS", defined as
RMS(a-b)/(RMS(a)+RMS(b))
as a measure of similarity.  It is obviously zero for identical
signals, and increases with difference. If a or b is zero,
then it is 1.  If a and b are both random noise, it will be 0.7
(half the square root of 2).  It is also sensitive to amplitudes:
if b=a/2, then normalized RMS is 2/3.
That contrasts with normalised cross-correlation, which is not
swayed by amplitudes, rather measuring phase similarity.

Does anybody use normalized RMS?
I am just curious about the name.  I would have called it "normalized difference".
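To make the definition concrete, here is a small NumPy sketch of the quantity described above (the function names are mine), checking the identical-signal and zero-signal cases and the independent-noise value of about sqrt(2)/2:

```python
import numpy as np

def rms(x):
    """Root-mean-square of a signal."""
    return np.sqrt(np.mean(np.square(x)))

def normalized_rms(a, b):
    """RMS(a-b) / (RMS(a) + RMS(b)): the similarity measure from the post.
    0 for identical signals, 1 when one signal is zero."""
    return rms(a - b) / (rms(a) + rms(b))

rng = np.random.default_rng(42)
a = rng.standard_normal(100_000)
b = rng.standard_normal(100_000)   # independent, equal-power noise

print(normalized_rms(a, a))                  # identical signals -> 0
print(normalized_rms(a, np.zeros_like(a)))   # one signal zero -> 1
print(normalized_rms(a, b))                  # independent noise -> ~0.707
```

For independent, equal-power noise, RMS(a-b) is sqrt(2) times the RMS of either signal while the denominator is twice that RMS, which gives the 0.7 (sqrt(2)/2) figure mentioned above.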