# Comparing matched and Wiener filters

Started by ●December 24, 2008

Reply by ●December 26, 2008

Rune Allnor wrote:
> On 26 Des, 15:11, Oli Charlesworth <ca...@olifilth.co.uk> wrote:
>> Rune Allnor wrote:
>>> The derivations of the matched filter usually start out without
>>> noise included in the model (if you know of a derivation which
>>> contains noise at the outset, please let me know). In that case,
>>> the constraints are deterministic, and the objective of the
>>> deterministic derivation is to find the filter that best fits some
>>> criterion, given the reference signal one searches for.
>>
>> Maybe I've misunderstood you, but the derivation in Ch. 5 of Digital
>> Communications (Proakis) uses a continuous-time model with AWGN, and
>> uses the Cauchy-Schwarz inequality to show that the MF maximises the
>> SNR.
>
> If so, there seems to be a case of terminology mix-up. I never
> specialized in comms, so I don't know what the convention is there,
> but matched filters are derived without noise terms in texts on
> general DSP. As Eric suggested, this might be a case of imprecise
> terminology.

Just out of interest, can you provide a link or reference to an
alternative definition or derivation?

This is interesting, because without noise, what's the benefit of any
particular filter over another? In other words, what's the optimisation
criterion that leads to choosing the matched filter?

-- Oli

Reply by ●December 26, 2008

On 26 Des, 16:14, Oli Charlesworth <ca...@olifilth.co.uk> wrote:
> Just out of interest, can you provide a link or reference to an
> alternative definition or derivation?

Not very quickly. I'm sure a standard text like Proakis & Manolakis or
Oppenheim & Schafer contains the derivation.

> This is interesting, because without noise, what's the benefit of any
> particular filter over another? In other words, what's the
> optimisation criterion that leads to choosing the matched filter?

The MF is used in pulse compression in radars or sonars. The question
to be answered is "what (normalized) filter h[n] maximises the output
of a detector given a (normalized) reference signal s[n]?" The answer
is reached by applying the Cauchy-Schwarz inequality, which reaches
its maximum for h[n] = s[-n], in which case the filter is a correlator.

The argument works without any noise terms included, although I
wouldn't be surprised if the same answer is reached if you include the
noise terms.

Rune
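The Cauchy-Schwarz argument above is easy to check numerically. A toy sketch (the pulse shape here is made up for illustration): among unit-energy filters, the time-reversed replica h[n] = s[-n] attains the bound on the detector's peak output, and no randomly chosen unit-energy filter beats it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up reference pulse s[n], normalised to unit energy.
s = np.array([1.0, 2.0, 3.0, 2.0, 1.0])
s /= np.linalg.norm(s)

def detector_output(h, s):
    """Peak of the filter output when driven by the reference pulse s."""
    return np.max(np.convolve(h, s))

# Matched filter: time-reversed copy of the reference.
h_mf = s[::-1]

# Compare against many other unit-energy filters.
peaks = []
for _ in range(1000):
    h = rng.standard_normal(len(s))
    h /= np.linalg.norm(h)
    peaks.append(detector_output(h, s))

# By Cauchy-Schwarz, every peak is an inner product of unit vectors,
# so it cannot exceed 1; the matched filter attains exactly 1.
print(detector_output(h_mf, s))
print(max(peaks))
```

The matched-filter peak equals ||s||^2 = 1, the Cauchy-Schwarz bound, while every random unit-energy filter stays below it.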

Reply by ●December 26, 2008

Rune Allnor wrote:
> The MF is used in pulse compression in radars or sonars. The question
> to be answered is "what (normalized) filter h[n] maximises the output
> of a detector given a (normalized) reference signal s[n]?" The answer
> is reached by applying the Cauchy-Schwarz inequality, which reaches
> its maximum for h[n] = s[-n], in which case the filter is a
> correlator.

I agree that this criterion would result in the matched filter.
However, I guess my next question is: what would the practical benefit
be in doing so if you weren't trying to overcome noise?

> The argument works without any noise terms included, although I
> wouldn't be surprised if the same answer is reached if you include
> the noise terms.

Indeed. If the filter is normalised, then the noise power remains the
same, so by maximising signal power, you're also maximising SNR.

-- Oli
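The last point is quick to verify: for white noise with covariance sigma^2 I, every unit-norm filter g passes exactly sigma^2 of noise power, so the choice of filter only affects the signal term. A minimal sketch (filter length and noise power are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
sigma2, N = 0.5, 8          # assumed white-noise power and filter length

for _ in range(5):
    g = rng.standard_normal(N)
    g /= np.linalg.norm(g)            # normalise the filter
    # Output noise power E|g^T v|^2 = g^T (sigma2 * I) g = sigma2 * ||g||^2,
    # which is sigma2 for every unit-norm g, whatever its shape.
    out_noise_power = sigma2 * (g @ g)
    print(out_noise_power)
```

Each iteration prints the same value, sigma2, regardless of which unit-norm filter was drawn.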

Reply by ●December 26, 2008

On 26 Des, 18:43, Oli Charlesworth <ca...@olifilth.co.uk> wrote:
> I agree that this criterion would result in the matched filter.
> However, I guess my next question is: what would the practical
> benefit be in doing so if you weren't trying to overcome noise?

You *are* trying to overcome the noise. However, by *not* including
the noise terms, you as system designer are relieved of the
responsibility of obtaining statistics to characterize the noise. In
practice the choice stands between

1) An 'imperfect' filter which is based only on parameters that are
   available to the analyst (i.e. only the reference signal)

2) A 'perfect' (or at least 'better') filter, but where parameters are
   needed that might not be available at the time of design (the
   noise)

In my experience, an imperfect system that works 'well enough' always
beats the optimized system that depends on all the parameters in the
scenario.

Rune

Reply by ●December 26, 2008

On 26 Des, 18:51, Rune Allnor <all...@tele.ntnu.no> wrote:
> > I agree that this criterion would result in the matched filter.
> > However, I guess my next question is: what would the practical
> > benefit be in doing so if you weren't trying to overcome noise?
>
> You *are* trying to overcome the noise.

And in sonars and radars there are the added constraints of
transmitter power (which translates more or less directly to size and
weight of the transmitter) as well as bandwidth. In these applications
one would like to get as high a target resolution as possible while
remaining within the constraints on transmitted power and signal
bandwidth. With clever design of the transmitted signals one might be
able to obtain far better range resolution using matched filters than
by naive methods.

In these applications the MF contributes to beating the noise
threshold, and a well-designed correlation function contributes to the
pulse compression.

Rune
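The range-resolution point can be illustrated with a toy linear-FM ("chirp") pulse; all the numbers here (sample rate, duration, frequency sweep) are invented for the sketch. The transmitted pulse is 200 samples long, but after matched filtering the energy is compressed into a peak only a few samples wide, roughly 1/bandwidth in duration:

```python
import numpy as np

fs = 1000.0                       # sample rate, Hz (assumed)
T = 0.2                           # chirp duration, s (assumed)
t = np.arange(0, T, 1 / fs)

# Linear FM sweep from 50 Hz to 250 Hz: phase = 2*pi*(f0*t + (B/T)*t^2/2)
chirp = np.cos(2 * np.pi * (50 * t + (200 / T) * t**2 / 2))

mf = chirp[::-1]                  # matched filter = time-reversed replica
compressed = np.convolve(chirp, mf)

# Count samples within -3 dB of the compressed peak.
peak = np.max(np.abs(compressed))
width = np.sum(np.abs(compressed) > peak / np.sqrt(2))
print(len(chirp), width)          # long pulse in, narrow peak out
```

The long, power-limited pulse carries the energy, while the matched-filter output localises the target to a few samples; that is the pulse-compression win Rune describes.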

Reply by ●December 27, 2008

On Dec 25, 12:12 am, Oliver Charlesworth <ca...@olifilth.co.uk> wrote:
> Hi,
>
> This is something that's been bugging me for the last couple of days.
>
> With the understanding that both are linear, we define the matched
> filter as the one that maximises the output SNR, and the Wiener
> filter as the one that minimises the mean square error (MSE).
> Superficially, these definitions sound almost identical. However,
> even in the simplest model (estimating a scalar value with
> independent AWGN), we clearly end up with different filters:
>
> MF:      g = h*
> Wiener:  g = Sh*(Shh* + sigma^2.I)^-1
>
> such that x_est = gy, where y = hx + v (where x is scalar data with
> S = E|x|^2, v is AWGN with sigma^2.I = E|v|^2, {y,h,v} are Nx1
> vectors, and * denotes conjugate transpose).
>
> My question is, what is the intuitive reason (i.e. not just "because
> the maths says so") why these don't end up with the same result, or
> more specifically, how the Wiener filter can minimise the MSE without
> maximising the SNR?
>
> [This has been brought up before
> (http://groups.google.com/group/comp.dsp/msg/b3d159784bc28486?dmode=so...
> and the subsequent thread branch), but wasn't closed out on.]
>
> -- Oli

Different criteria give different results. You can for instance
minimise the fourth power of the error, or you can minimise the
maximum value of the error. Such is mathematics. As to which is "best"
is another matter.

H.
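For the scalar model in the question, the two filters can be computed and compared directly. A sketch with made-up values for N, S and sigma^2: it shows that, in this particular model, the Wiener filter comes out as a scalar multiple of the matched filter, so both attain the same output SNR, while the Wiener scaling additionally minimises the MSE.

```python
import numpy as np

rng = np.random.default_rng(2)

# Model from the post: y = h x + v, scalar x with S = E|x|^2,
# white noise with covariance sigma^2 * I, linear estimate x_est = g y.
# N, S and sigma^2 are arbitrary values chosen for the sketch.
N, S, sigma2 = 4, 2.0, 0.5
h = rng.standard_normal((N, 1)) + 1j * rng.standard_normal((N, 1))

g_mf = h.conj().T                              # MF:     g = h*
g_w = S * h.conj().T @ np.linalg.inv(          # Wiener: g = Sh*(Shh* + sigma^2 I)^-1
    S * (h @ h.conj().T) + sigma2 * np.eye(N))

def out_snr(g):
    """Output SNR: |g h|^2 S / (sigma^2 ||g||^2)."""
    return abs((g @ h).item()) ** 2 * S / (sigma2 * np.linalg.norm(g) ** 2)

def mse(g):
    """E|g y - x|^2 = |g h - 1|^2 S + sigma^2 ||g||^2."""
    return abs((g @ h).item() - 1) ** 2 * S + sigma2 * np.linalg.norm(g) ** 2

ratio = g_w / g_mf                   # elementwise; constant iff parallel
print(np.allclose(ratio, ratio.flat[0]))   # Wiener = scaled matched filter
print(out_snr(g_mf), out_snr(g_w))         # identical output SNR
print(mse(g_mf), mse(g_w))                 # but the Wiener MSE is lower
```

Since SNR is invariant to scaling g while MSE is not, the two criteria agree on the "direction" h* and differ only in the gain, which is one way to reconcile the two definitions for this model.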

Reply by ●December 27, 2008

On Dec 25, 9:48 am, Rune Allnor <all...@tele.ntnu.no> wrote:
> ...
> The derivations of the matched filter usually start out without
> noise included in the model (if you know of a derivation which
> contains noise at the outset, please let me know). In that case,
> the constraints are deterministic, and the objective of the
> deterministic derivation is to find the filter that best fits some
> criterion, given the reference signal one searches for.

but, i think without consideration of any extension that requires
"pre-whitening" of the additive source of error, the matched filter
gives the best S/N for a bunch of samples that are modeled as a linear
combination of pure signal plus uncorrelated white noise. if you
somehow knew the signal strength and normalized it, then the MF
minimizes the additive noise or additive error, which is sorta what
the WF does in a probabilistic sense.

this is what i thought the main question is about, or am i
misunderstanding it?

r b-j

Reply by ●December 28, 2008

On 28 Des, 04:26, robert bristow-johnson <r...@audioimagination.com>
wrote:
> but, i think without consideration of any extension that requires
> "pre-whitening" of the additive source of error, the matched filter
> gives the best S/N for a bunch of samples that are modeled as a
> linear combination of pure signal plus uncorrelated white noise.

Maybe it does, I don't know.

> if you somehow knew the signal strength and normalized it, then the
> MF minimizes the additive noise or additive error, which is sorta
> what the WF does in a probabilistic sense.

Well, yes. But from a design POV the advantage of the MF is that you
only need to know the desired signal, and can ignore the noise (even
if you know it will be present in the data). Since you ignore the
noise, there is no need to estimate covariance structure or anything
like that, which would only complicate the filtering.

> this is what i thought the main question is about, or am i
> misunderstanding it?

It's likely me who misunderstood and you who got it right.

Rune