Processing block size and matched filtering

Started by techn0mad 8 years ago · 9 replies · latest reply 8 years ago · 304 views
#MF
I have a set of conflicting requirements:
- I need to detect weak tones in noise
- I need to qualify the presence (and absence) of these tones based on their duration (they have minimum and maximum allowed durations)

So, matched filter (#MF) theory indicates that the best way to detect a weak tone is to integrate its energy as long as possible. The problem then is, when do we start sampling? What happens if we start in the middle of a tone?

Because of this, I have been thinking of processing the samples in blocks sufficiently shorter than the tone duration, so as to meet the Nyquist requirement for measuring the tone duration. The downside of this would presumably be less integration time and less energy available for detection. Is it reasonable to assume that optimum detection could still be performed by integrating results across multiple blocks of samples? Is it possible to do some sort of overlapping block processing in a case like this?


Thank you,

Reply by Tim Wescott, August 26, 2016

Matched filter theory presumes that you know the starting time, stopping time, duration, frequency, and phase of the tone, as well as the intensity of the noise.  We should all be so lucky, but rarely are.

How much do you know?  Much depends on this.

How much processing power do you have?  Much depends on this, too.

What's your noise spectral density compared to your signal power?  "Weak" is not an engineering term -- quantify.

Assuming a sufficient SNR, a steady-strength tone, and known and steady noise, the brute-force way to do this is to use a FIR bandpass filter that's about twice as long as the allowable error in your start- and stop-time measurements.  As soon as the intensity crosses half the expected amplitude going up you start timing, and as soon as it crosses it going down you stop timing.
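
For concreteness, here is a rough Python/NumPy sketch of that brute-force approach (the sample rate, tone frequency, bandwidth, filter length, and expected amplitude are placeholder values, not recommendations):

    import numpy as np
    from scipy.signal import firwin, lfilter

    fs = 8000.0        # sample rate, Hz (placeholder)
    f0 = 1000.0        # expected tone frequency, Hz (placeholder)
    bw = 50.0          # filter bandwidth, Hz
    a_expected = 1.0   # expected tone amplitude at the filter output
    ntaps = 801        # roughly 2x the allowable start/stop timing error, in samples

    # FIR bandpass centered on the tone
    taps = firwin(ntaps, [f0 - bw / 2, f0 + bw / 2], fs=fs, pass_zero=False)

    def tone_interval(x):
        """Return (start, stop) sample indices where the filtered envelope
        crosses half the expected amplitude, or None if it never does."""
        y = lfilter(taps, 1.0, x)
        # crude envelope: rectify and smooth over a few carrier cycles
        env = lfilter(np.ones(64) / 64, 1.0, np.abs(y))
        above = np.flatnonzero(env > 0.5 * a_expected)
        if above.size == 0:
            return None
        return above[0], above[-1]

(The reported indices are delayed by the filter's group delay, which would have to be subtracted out for absolute timing.)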

A slightly less brute-force, but more complicated, way to do this is to multiply your signal by sine and cosine waves at the frequency of the tone (this is called "quadrature demodulation" in the literature).  Low-pass filter the two signals coming out, and proceed as above.  This is easier on the multiplies if your low-pass filtering consists of moving-average filters (or CIC filters, which you can look up).
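
A sketch of that quadrature-demodulation variant, under the same placeholder assumptions (f0 and fs as above, moving-average length chosen to span several carrier cycles):

    import numpy as np

    def quadrature_envelope(x, f0, fs, avg_len=64):
        """Multiply by cosine and sine at the tone frequency, low-pass each
        product with a moving average, and take the root-sum-square of the
        two branches to recover the tone's envelope (phase-independent)."""
        n = np.arange(len(x))
        i = x * np.cos(2 * np.pi * f0 * n / fs)
        q = x * np.sin(2 * np.pi * f0 * n / fs)
        kernel = np.ones(avg_len) / avg_len        # cheap moving-average LPF
        i_lp = np.convolve(i, kernel, mode="same")
        q_lp = np.convolve(q, kernel, mode="same")
        return 2.0 * np.sqrt(i_lp ** 2 + q_lp ** 2)  # rescale to tone amplitude

The output can then be thresholded at half the expected amplitude, exactly as in the bandpass-filter version above.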

Another slightly less brute-force way to do this is by using IIR bandpass filters -- but if you're a DSP newbie I don't recommend it.

At this point I've run well off the end of my data -- you need to give us some numbers, or we can only bloviate, and hope that you're lucky enough to have a problem that's solvable at all.

Reply by gmsk1, August 26, 2016

If you know the frequency of the incoming tone (some error is allowed), then you correlate (multiply & integrate) the incoming signal with sine & cosine (2 correlations) over at least the minimum time duration. Then you sum the squares of the two correlator outputs. This is called "non-coherent detection" since no estimate of the phase of the input signal was made.
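
A minimal sketch of that detector (assuming Python/NumPy; the detection threshold would come from the measured noise level, which is not shown here):

    import numpy as np

    def noncoherent_stat(x, f0, fs):
        """Correlate one block (at least the minimum tone duration long)
        against sine and cosine at f0 and sum the squares -- no phase
        estimate needed."""
        n = np.arange(len(x))
        c = np.sum(x * np.cos(2 * np.pi * f0 * n / fs))
        s = np.sum(x * np.sin(2 * np.pi * f0 * n / fs))
        return c * c + s * s   # compare against a noise-derived threshold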

Reply by Tim Wescott, August 26, 2016

Using this rule, if the frequency error is greater than 1/(2 * minimum-time-duration), then you start to hit diminishing returns on the integration step -- to the point where, if the frequency error were exactly equal to 1/(integration time), the output due to the actual signal would be null.
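
To put a number on that null (a sketch assuming a rectangular integration window of length T and a frequency error \Delta f), the correlator's output magnitude falls off as

    \left| \frac{\sin(\pi \, \Delta f \, T)}{\pi \, \Delta f \, T} \right|

which is roughly 4 dB down at \Delta f = 1/(2T) and reaches zero at \Delta f = 1/T.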

Also, this method gives you an indication of the presence of the tone, but it doesn't give you a good indication of the start and stop times of the tone.

This is why I'm advocating that the OP set his bandwidth based on the maximum expected frequency error or the needed precision in knowing start and stop times, whichever is more restrictive.  I suppose one could separate this out into a tone detection and a tone time detection -- first make sure that there's really a tone there (with a tight-bandwidth filter), then try to determine its timing.  If you can stand some delay in acquiring the start time (basically if you can stand getting "the tone started X samples ago" rather than "there's a tone NOW"), then this should work.

(Note that the correlate-and-integrate is pretty close to Slartibartfast's sliding DFT -- replace the integration with a moving average that's some integer number of cycles of the prototype sine wave and it is a sliding DFT.  Note also that the correlate-and-integrate acts like a lowpass filter -- with DSP, there's always more than one way to skin a cat.)

Reply by techn0mad, August 26, 2016

To answer some of Tim's questions about this case:

- About all we know about the tones is their (approximate) frequency and allowed duration. We do not know their start times.

- Processing power available is very large, or at least at this point it is not a practical consideration.

- Typical signal-to-noise ratio is 0 to -10 dB

- The tones often suffer from frequency, phase, and amplitude impairments due to the channel characteristics

- The tones belong to a set/alphabet, and it is known in advance that they are sent in pairs (like DTMF)

I think I am currently doing something like the multiplication by sine and cosine waves; I am using a correlation function, and asking for the correlation between the signal and each of the tones in the alphabet.
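
For reference, a rough sketch of that kind of alphabet correlation, done non-coherently per tone (Python/NumPy assumed; the frequencies below are placeholders borrowed from DTMF, not the actual alphabet):

    import numpy as np

    alphabet_hz = [697.0, 770.0, 852.0, 941.0, 1209.0, 1336.0, 1477.0, 1633.0]

    def tone_score(x, f0, fs):
        """Non-coherent correlation score for one candidate tone."""
        n = np.arange(len(x))
        c = np.dot(x, np.cos(2 * np.pi * f0 * n / fs))
        s = np.dot(x, np.sin(2 * np.pi * f0 * n / fs))
        return c * c + s * s

    def best_pair(x, fs):
        """Score every tone in the alphabet and keep the two strongest,
        since the symbols are known to arrive as tone pairs."""
        scores = {f: tone_score(x, f, fs) for f in alphabet_hz}
        return sorted(scores, key=scores.get, reverse=True)[:2]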

Reply by Tim Wescott, August 26, 2016

So, yes, this is complicated.  I should have expanded on that above: "SNR" as a general term means something, but noise spectral density vs. signal power has more meaning, because the SNR depends on the measurement bandwidth, and if you're picking tones out of something else then the bandwidth can be anything.

Is your correlation function doing correlation on each individual tone, or is it correlating with each possible pair of tones?  If the latter, and if the phase impairments are frequency dependent, then the correlation may not work well.

If the correlation is on each individual tone, then you're pretty close to what I was suggesting.  Were you to hire me to solve this problem for you, I would get as much information as possible, then I would analyze all of the impairments and duration constraints to figure out how much the expected spectrum of the tones is spread, then I would choose a filter that does the best job of detecting some random signal at that bandwidth.

Note that if you do this, and if the correlation time of the filter is shorter than the duration of the tone, and if you're really scraping the bottom of the barrel for more certainty, then you may need to sample the filter's response within its correlation time and add up its response in a mean-squared sense (because you can't count on things staying in phase).

(I'm probably playing fast and loose with the term "correlation time" -- by this I mean, roughly, the reciprocal of the bandwidth; i.e., the duration over which a signal needs to be correlated with itself to be usefully received by the filter.)
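
A sketch of that mean-squared combining (assuming the bandpass filter's output is already in hand as a NumPy array, and seg_len is roughly one correlation time in samples):

    import numpy as np

    def mean_square_combine(filtered, seg_len):
        """Chop the filter output into segments about one correlation time
        long and sum their average powers, so segments that have drifted
        out of phase with each other still add to the detection statistic."""
        nseg = len(filtered) // seg_len
        segs = filtered[:nseg * seg_len].reshape(nseg, seg_len)
        return np.sum(np.mean(segs ** 2, axis=1))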

Reply by Slartibartfast, August 26, 2016

There are a number of things you can do depending on how much processing power you have available to you.   This sounds like a good application for a sliding DFT,  but you can assess that in the context of the details of your application.

And, yes, you can integrate the results from multiple blocks.   For example, averaging the output of two DFTs with independent inputs (i.e., no overlap) can increase the SNR by 3dB.

There are a few things that may help you analyze the problem, see what your limits are, and narrow the tradeoff space between performance and available computing power.  One handy thing about the DFT is that its processing gain is reasonably easy to compute and is a function of N (the DFT length).  So as you change N to manage the number of blocks, the overlap, or the integration time, you can assess the expected gain and see how it will affect the detection probability.
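
For illustration, a single-bin sliding DFT can be sketched like this (Python/NumPy assumed; bin k and length N are whatever the tone frequency and integration time dictate):

    import numpy as np

    def sliding_dft_bin(x, k, N):
        """Single-bin sliding DFT: update bin k of an N-point DFT one sample
        at a time instead of recomputing the whole transform each step.
        (In practice a small damping factor is often added so round-off
        error doesn't accumulate in the recursion.)"""
        w = np.exp(2j * np.pi * k / N)
        xpad = np.concatenate([np.zeros(N), x])
        s = 0.0 + 0.0j
        out = np.empty(len(x), dtype=complex)
        for n in range(len(x)):
            s = (s + xpad[n + N] - xpad[n]) * w
            out[n] = s
        return out

As a rule of thumb, the DFT's processing gain for a tone in white noise is on the order of 10*log10(N) dB, so doubling N (or, as noted above, averaging two independent blocks) is worth about 3 dB.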

Reply by SamShearman, August 26, 2016

You might look into model-based spectral analysis for detection of tone(s) with a minimum number of samples. (See https://en.wikipedia.org/wiki/Spectral_density_est...)

Reply by lamabrew, August 26, 2016

Others have kind of asked the same questions, but maybe differently than I would ask them before trying to figure out how to solve this... so here's my $0.02:

Noise: you need to define that, as it could make a big difference in the approach, i.e. Gaussian vs. impulsive vs. channel changes (multipath/echo; though technically not noise to some people, I take noise here to mean anything that is not the signal).

Signal: It seems like the signal isn't random but a predefined set of tones. Are there sync periods that you can hunt for and use to define the symbol periods? As you and others have said, optimal decoding takes place when you integrate across the symbol period. You implied the symbol period might be non-constant, but is that due to jitter (i.e. the average symbol period is constant) or are the symbol times random?

Coding: Are you in control of what's being sent? Some sort of encoding (a trellis code, for example) would result in a better system, since you indicate compute power isn't an issue here.

Reply by techn0mad, August 26, 2016

To answer lamabrew's questions:

1. The noise has, AFAIK, a combination of Gaussian and Bi-Kappa distributions (HF radio channel)

2. The signal is very much like telephone DTMF signalling in that there is a predefined set of tones, and the start times are totally random, but their durations must fit within minimum and maximum times. The symbol times should more or less match the specified durations.

3. I am definitely not in control of the coding, but the symbols are sent as two sets of two tones each (e.g. similar to two DTMF digits). The frequencies, durations, and spacing between the two tone pairs are all specified in advance.