# normalized RMS?

Started by pedro1492, January 24, 2017
```I came across "normalized RMS", defined as
RMS(a-b)/(RMS(a)+RMS(b))
as a measure of similarity.  It is obviously zero for identical
signals, and increases with difference. If a or b is zero,
then it is 1.  If a and b are both independent random noise of
equal power, it will be about 0.707 (half the square root of 2).
It is also sensitive to amplitudes:
if b=a/2, then normalized RMS is 1/3.
That contrasts with normalized cross-correlation, which is not
swayed by amplitudes, instead measuring phase similarity.

Does anybody use normalized RMS?
I am just curious about the name.  I would have called it "normalized difference".
```
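The properties claimed in the post are easy to verify numerically. The sketch below (the helper names `rms` and `normalized_rms` are just illustrative, not from the thread) checks each case; note that the b = a/2 case comes out to 1/3.

```python
import numpy as np

def rms(x):
    """Root-mean-square of a signal."""
    return np.sqrt(np.mean(np.square(x)))

def normalized_rms(a, b):
    """RMS(a-b) / (RMS(a) + RMS(b)), the similarity measure discussed above."""
    return rms(a - b) / (rms(a) + rms(b))

rng = np.random.default_rng(0)
a = rng.standard_normal(100_000)
b = rng.standard_normal(100_000)

print(normalized_rms(a, a))                 # identical signals -> 0.0
print(normalized_rms(a, np.zeros_like(a)))  # b = 0 -> 1.0
print(normalized_rms(a, b))                 # independent equal-power noise -> ~0.707
print(normalized_rms(a, a / 2))             # b = a/2 -> 1/3
```

The independent-noise case follows because RMS(a-b) = sqrt(RMS(a)^2 + RMS(b)^2) for uncorrelated signals, giving sqrt(2)·σ over a denominator of 2σ.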
```On Tue, 24 Jan 2017 02:06:04 -0800, pedro1492 wrote:

> I came across "normalized RMS", defined as RMS(a-b)/(RMS(a)+RMS(b))
> as a measure of similarity.  It is obviously zero for identical signals,
> and increases with difference. If a or b is zero,
> then it is 1.  If a and b are both random noise, it will be about
> 0.707 (half the square root of 2).  It is also sensitive to amplitudes:
> if b=a/2, then normalized RMS is 1/3.
> That contrasts with normalised cross-correlation, which is not swayed by
> amplitudes, rather measuring phase similarity.
>
> Does anybody use normalized RMS?
> I am just curious about the name.  I would have called it "normalized
> difference".

Not me, and I'd use "normalized difference", too, I think.

--
Tim Wescott
Control systems, embedded software and circuit design
I'm looking for work!  See my website if you're interested
http://www.wescottdesign.com
```
```On 24.01.17 11.06, pedro1492@lycos.com wrote:
> I came across "normalized RMS", defined as
> RMS(a-b)/(RMS(a)+RMS(b))
> as a measure of similarity.  It is obviously zero for identical
> signals, and increases with difference. If a or b is zero,
> then it is 1.  If a and b are both random noise, it will be about 0.707
> (half the square root of 2).  It is also sensitive to amplitudes:
> if b=a/2, then normalized RMS is 1/3.
> That contrasts with normalised cross-correlation, which is not
> swayed by amplitudes, rather measuring phase similarity.

The above is sensitive to phase too. Very sensitive actually.
E.g. with a 180° phase shift:
a = -b  =>  1, the maximum possible value (by the triangle inequality)

> Does anybody use normalized RMS?
> I am just curious about the name.  I would have called it "normalized difference".

I never used this.

However, the RMS(a-b) term is quite common as chi² in linear or
nonlinear optimizations.

Marcel
```
```<pedro1492@lycos.com> wrote:

>I came across "normalized RMS", defined as
>RMS(a-b)/(RMS(a)+RMS(b))
>as a measure of similarity.  It is obviously zero for identical
>signals, and increases with difference. If a or b is zero,
>then it is 1.  If a and b are both random noise, it will be about
>0.707 (half the square root of 2).  It is also sensitive to amplitudes:
>if b=a/2, then normalized RMS is 1/3.
>That contrasts with normalised cross-correlation, which is not
>swayed by amplitudes, rather measuring phase similarity.
>
>Does anybody use normalized RMS?

Not me.  I'm more likely to use RMS(a-b)/RMS(a), where RMS(b)
is the same as or nearly the same as RMS(a).  Then it is of
the same form as common measurements such as THD, EVM, or inverse SNR.

Your "normalized RMS" does have a bit of symmetry going for it,
however.

Steve
```
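Steve's RMS(a-b)/RMS(a) form reads directly as an inverse amplitude SNR when b is a noisy copy of a. A sketch (the 0.1 noise level is an arbitrary choice, not from the thread):

```python
import numpy as np

def rms(x):
    return np.sqrt(np.mean(np.square(x)))

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 100_000, endpoint=False)
a = np.sin(2 * np.pi * 5 * t)          # signal, RMS = 1/sqrt(2)
noise = 0.1 * rng.standard_normal(a.size)
b = a + noise                           # noisy copy of a

err = rms(a - b) / rms(a)               # = RMS(noise)/RMS(a), an inverse amplitude SNR
print(err)                              # ~ 0.1 / (1/sqrt(2)) ~ 0.141
```

Unlike the symmetric form above, this measure treats a as the reference, which matches how THD and EVM are defined against an ideal signal.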