Reply by Chris Hornbeck April 16, 2005
On Sat, 09 Apr 2005 20:01:39 -0700, Bob Cain
<arcane@arcanemethods.com> wrote:

> Look at:
>
> http://www.atvs.diac.upm.es/publicaciones/docs/Bot00a.pdf
OK, I've looked, and, as usual, it's mostly over my head, but doesn't this apply to external "noise", meaning anything unwanted, rather than internal noise, meaning something subject to the Second Law? If so, it would seem to be within the murky purview of room de-convolving that you (*don't*) hint at so often.

Large area receivers imply directionality. More is more, etc. Something the size of a human head has good directionality above one and a half kiloHertz or so... Add time-sensitive selective sensing around the surface... Sounds like a very interesting project.

Good fortune,

Chris Hornbeck
6x9=42
April 29
Reply by Andor April 11, 2005
Bob Cain wrote:
...
> Among the numerous capsules the self noise of each would be
> independent and uncorrelated. The signals, on the other
> hand, would be highly correlated. There will be some time
> of arrival differences in the sound pickup but little
> magnitude differences.
My first take on this problem would be to "standardize" each sensor output (i.e. compensate known differences in frequency response, phase, delay, etc. between the sensors). For each sensor you get a response h_k (k in 1 .. N, where N is the number of sensors). You basically just need to find the relative response with respect to any one reference sensor, which you can choose arbitrarily. If you know the placement, the direction and the frequency response of the sensors, this should theoretically pose no problem (I can imagine that it could lead to problems in practice).

If s_k is the output of the k-th sensor, and s_k' = h_k * s_k (* denotes convolution with the compensation filter h_k) is the compensated output of each sensor, then a first estimate of the "true" signal (that is, the signal "seen" through the reference sensor) is the average r = 1/N (s_1' + ... + s_N'). An estimate for the denoised signal would be s_k'' = (h_k)^-1 * r, the deconvolution of the average.

You could refine the estimate r by weighting each s_k' with an additional weighting filter w_k, which would define the amount of information that sensor k contributes to the average signal. Some non-linear weighting function could lead to a more robust estimate (for example, if a sensor is defective).

This would be my first naive approach. However, I should think the problem of estimating a signal from many sensor outputs is a well studied problem. I don't know the technical term, though.

Regards,
Andor
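A minimal numpy sketch of the averaging-and-deconvolution scheme Andor outlines, with the true deconvolution replaced by a simple regularised frequency-domain inverse; the function name and the eps constant are illustrative assumptions, not anything from the thread:

    import numpy as np

    def average_and_deconvolve(sensor_outputs, comp_filters, eps=1e-8):
        # sensor_outputs: (N, L) raw capsule signals s_k
        # comp_filters:   (N, M) compensation impulse responses h_k
        nfft = sensor_outputs.shape[1] + comp_filters.shape[1] - 1
        S = np.fft.rfft(sensor_outputs, nfft)   # spectra of s_k
        H = np.fft.rfft(comp_filters, nfft)     # spectra of h_k
        R = np.mean(H * S, axis=0)              # r = (1/N) * sum_k (h_k * s_k)
        # s_k'' = (h_k)^-1 * r, done as a regularised inverse so that
        # near-zero bins of H do not blow up
        S_dd = R * np.conj(H) / (np.abs(H) ** 2 + eps)
        return np.fft.irfft(R, nfft), np.fft.irfft(S_dd, nfft)

The weighting filters w_k mentioned above would slot in as per-sensor factors inside the mean.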
Reply by Bob Cain April 10, 2005

Chris Hornbeck wrote:

> Ok, thanks. I'm with you so far, I think. The analog
> outputs of the capsules can be summed, at large wavelengths,
> with 3dB improvement in SNR per doubling in the number
> of capsules.
That's for simply summing them which isn't quite what I'm getting at.
> Reducing the differences among the various signal outputs
> with increasing wavelength does the same thing. But what
> extra noise reduction is possible using DSP?
Yes, that's the question. :-)  There seems to be something to be gained. Look at:

http://www.atvs.diac.upm.es/publicaciones/docs/Bot00a.pdf

Bob
--
"Things should be described as simply as possible, but no simpler."  A. Einstein
Reply by Chris Hornbeck April 9, 2005
On Sat, 09 Apr 2005 00:23:43 -0700, Bob Cain
<arcane@arcanemethods.com> wrote:

> Among the numerous capsules the self noise of each would be
> independent and uncorrelated. The signals, on the other
> hand, would be highly correlated. There will be some time
> of arrival differences in the sound pickup but little
> magnitude differences. Seems to me that the cross
> correlation information among them could be used to help
> separate the noise of each from the signal.
Ok, thanks. I'm with you so far, I think. The analog outputs of the capsules can be summed, at large wavelengths, with 3dB improvement in SNR per doubling in the number of capsules. Reducing the differences among the various signal outputs with increasing wavelength does the same thing. But what extra noise reduction is possible using DSP?

Thanks,

Chris Hornbeck
6x9=42
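A quick numerical check of that 3dB-per-doubling figure, as a sketch with made-up numbers (the 48 kHz rate, 100 Hz tone and independent Gaussian self noise per capsule are assumptions for illustration only):

    import numpy as np

    rng = np.random.default_rng(0)
    fs, dur = 48_000, 4.0
    t = np.arange(int(fs * dur)) / fs
    signal = np.sin(2 * np.pi * 100 * t)        # common low-frequency signal

    def snr_db(estimate, reference):
        noise = estimate - reference
        return 10 * np.log10(np.sum(reference ** 2) / np.sum(noise ** 2))

    for n_caps in (1, 2, 4, 8, 16):
        capsules = signal + 0.1 * rng.standard_normal((n_caps, t.size))
        print(n_caps, "capsules:", round(snr_db(capsules.mean(axis=0), signal), 2), "dB")
    # Each doubling of the capsule count raises the SNR of the plain
    # average by roughly 3 dB.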
Reply by Bob Cain April 9, 2005

Chris Hornbeck wrote:

> Since I'm able to understand only a little gleam of
> what you're getting at, let me just ask the Devil's Advocate
> question. In what way would this not violate the Second Law?
Got me. 'Fraid I don't see the connection.
> IOW, self noise is random; the sum is random; what element
> could be considered to be correlated?
Among the numerous capsules the self noise of each would be independent and uncorrelated. The signals, on the other hand, would be highly correlated. There will be some time of arrival differences in the sound pickup but little magnitude differences. Seems to me that the cross correlation information among them could be used to help separate the noise of each from the signal.

Bob
--
"Things should be described as simply as possible, but no simpler."  A. Einstein
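A toy illustration of that point (synthetic numbers, not a proposed design): with two capsules carrying the same signal plus independent self noise, the cross-power between capsules estimates the signal power alone, while the auto-power of either capsule contains its noise as well.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 200_000
    s = rng.standard_normal(n)               # stand-in for the acoustic signal
    x1 = s + 0.5 * rng.standard_normal(n)    # capsule 1: signal + its own self noise
    x2 = s + 0.5 * rng.standard_normal(n)    # capsule 2: signal + independent self noise

    cross_power = np.mean(x1 * x2)           # -> ~1.0, signal power (noise terms average out)
    auto_power = np.mean(x1 * x1)            # -> ~1.25, signal power plus noise power
    print(cross_power, auto_power)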
Reply by Chris Hornbeck April 8, 2005
On Sat, 02 Apr 2005 18:38:47 -0800, Bob Cain
<arcane@arcanemethods.com> wrote:

> Since the elements
> are in close proximity, the correlation, especially at the
> lower frequencies is very high and it seems to me that info
> could be used to ameliorate the problems of enhanced self
> noise at lower frequencies.
>
> This is just the germ of an idea and I'm hoping for some
> enlightening discussion (or dissuasion.)
Since I'm able to understand only a little gleam of what you're getting at, let me just ask the Devil's Advocate question. In what way would this not violate the Second Law? IOW, self noise is random; the sum is random; what element could be considered to be correlated?

If the answer is over my head, please forgive me, but any hints or insights at my level would be greatly enjoyed.

Thanks, as always,

Chris Hornbeck
6x9=42
Reply by The Ghost April 7, 2005
Bob Cain <arcane@arcanemethods.com> wrote in 
news:d31p3402hjm@enews3.newsguy.com:

> snip......snip. I am unfortunately rather poorly educated in
> the area of statistical DSP so far but have a pretty good
> record of assimilating what I need when that need arises.
> What I mean to say is that I may need a little hand holding
> to get off the ground if that would be possible.
Bob Cain is also poorly educated in the areas of physics, acoustics and engineering math. Furthermore, the record clearly shows that his ability to assimilate what he needs when the need arises is equally unimpressive. Lastly, with Bob Cain, hand holding doesn't help.

With regard to the issue of hand holding, Bob Cain recently went to sci.physics and got Zigateau to hold his hand in working through a third-year college-level partial differential equation. Here are a couple of representative quotes from Zigateau (sci.physics):

1) "I think it is possible that you do not know what you are doing, and that you have calculated the wrong thing. Also......I'm beginning to see how the unfortunate tone of your prior correspondence might have arisen."

2) "You don't appear to have the thing in perspective. It is not necessary to calculate 44,000 values of the sound pressure in order to calculate the total harmonic distortion. I think it is possible that you do not know what you are doing, and that you have calculated the wrong thing."
Reply by Bob Cain April 6, 2005

Rune Allnor wrote:

> I have done something like that in the past. I had two
> channels, where I wanted to filter out the signal that was
> mutual between the channels.
Sounds like pretty much the same problem. Do you think it will make a substantial difference to work the other way, to keep the signal and in some way to discard the rest?
> What I did was to form the
> autocovariance matrixes for both channels, and do some
> voodoo subspace analysis on those matrixes. It did work
> quite well for my purposes.
Love to know more.
> Try to get a copy of my thesis (I've published a link
> to it earlier, although the link may be obsolete by now),
> and check chapter 2.3.
>
> If you can't get the thesis, post a note here and I'll
> mail you a copy in a few days.
Yes, please! I am unfortunately rather poorly educated in the area of statistical DSP so far but have a pretty good record of assimilating what I need when that need arises. What I mean to say is that I may need a little hand holding to get off the ground if that would be possible.

There has been a lot of activity using wavelets for de-noising and I wonder too if there might not be something in that theory which could take advantage of the correlation among numerous channels to reduce what is uncorrelated. Any wavelet experts that might be able to weigh in on that?

Bob
--
"Things should be described as simply as possible, but no simpler."  A. Einstein
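For what it's worth, one rough way the wavelet de-noising literature could be bolted onto the multi-capsule idea (a sketch only, assuming the channels are already compensated and time-aligned; the db4 wavelet, decomposition depth and universal soft threshold are textbook defaults, not anything specific to this array):

    import numpy as np
    import pywt  # PyWavelets

    def wavelet_denoise_average(channels, wavelet="db4", level=4):
        # Average the aligned channels (exploiting their correlation),
        # then apply universal-threshold soft shrinkage to the average.
        avg = np.mean(channels, axis=0)
        coeffs = pywt.wavedec(avg, wavelet, level=level)
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745       # noise level from finest scale
        thr = sigma * np.sqrt(2.0 * np.log(avg.size))        # universal threshold
        coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
        return pywt.waverec(coeffs, wavelet)[: avg.size]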
Reply by Bob Cain April 6, 2005

Fred Marshall wrote:
> "Bob Cain" <arcane@arcanemethods.com> wrote in message > >>What I'm wondering about is the use of cross correlation among the signals >>from the set of elements to determine where in the momentary spectrum >>there are components of interest so as to filter out the noise in the >>spectral regions where little correlation exists. Since the elements are >>in close proximity, the correlation, especially at the lower frequencies >>is very high and it seems to me that info could be used to ameliorate the >>problems of enhanced self noise at lower frequencies. > > > Bob, > > You didn't say anything about angle of arrival. If there should be a > consistent angle of arrival, then, if you sum elements on one side and its > opposite - and even if the "halves" so formed overlap - then the difference > at low frequencies should be zero or at least random.
Angle of arrival is included in what the array encodes. This is an array which encodes both intensity and incidence-angle information. It yields a four-channel B-format Ambisonic signal which encodes the instantaneous value of the scalar pressure at a point and the vector particle velocity at the same point.
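For readers unfamiliar with B-format, the classic four-capsule (Soundfield-style) version of that sum/difference mixing looks roughly like the sketch below; the capsule orientations and unity gains are assumptions for illustration, Bob's array has more elements, and real arrays also need per-capsule equalisation and the integration step mentioned earlier in the thread.

    def a_to_b_format(lfu, rfd, lbd, rbu):
        # Four tetrahedral cardioid capsules: left-front-up, right-front-down,
        # left-back-down, right-back-up. W ~ scalar pressure (omni),
        # X/Y/Z ~ the three particle-velocity (figure-of-eight) components.
        w = 0.5 * (lfu + rfd + lbd + rbu)    # omni (pressure)
        x = 0.5 * (lfu + rfd - lbd - rbu)    # front-back
        y = 0.5 * (lfu - rfd + lbd - rbu)    # left-right
        z = 0.5 * (lfu - rfd - lbd + rbu)    # up-down
        return w, x, y, z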
> I've not directly addressed the question you asked because there is no
> spectral filtering except for the aforementioned sub-bands. If you really
> want to do spectral filtering that's based on where the signal is, then it's
> like wanting a priori knowledge of the signal - or at least a willingness
> and ability (e.g. enough time) to do adaptive filtering. Otherwise, I don't
> know how you do that.
Yes, this would have to be an adaptive process because the signal's spectrum is constantly changing. It just seems to me that if one has a pretty good idea of what the signal's instantaneous spectrum is, from the correlation of seven nearly coincident elements, it should be possible to reduce the content that is not part of that correlation, which would be the uncorrelated noise. I just don't know how to do that. If the process were "perfect", the noise left in each signal would exist as a small random variance of the actual spectral components it contains and would thus be considerably masked.

Section 2.2 of this paper seems to directly address the question but I do not understand it (yet). The notation is not one I've been exposed to.

http://www.atvs.diac.upm.es/publicaciones/docs/Bot00a.pdf

Bob
--
"Things should be described as simply as possible, but no simpler."  A. Einstein
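One concrete way to phrase that adaptive idea (a sketch under the assumption of already time-aligned capsules, not the method of the linked paper): estimate the correlated power in each STFT bin from the average cross-spectrum over capsule pairs, estimate the total power from the auto-spectra, and use their ratio as a Wiener-like gain on the channel average.

    import numpy as np
    from scipy.signal import stft, istft

    def coherent_power_denoise(channels, fs, nperseg=1024):
        # channels: (n_capsules, n_samples) array of aligned capsule signals
        _, _, X = stft(channels, fs, nperseg=nperseg)   # X: (n_caps, n_freq, n_frames)
        nch = X.shape[0]
        auto = np.mean(np.abs(X) ** 2, axis=0)          # total power per bin
        cross = np.zeros_like(auto)                     # correlated (signal) power estimate
        npairs = 0
        for i in range(nch):
            for j in range(i + 1, nch):
                cross += np.real(X[i] * np.conj(X[j]))
                npairs += 1
        cross = np.maximum(cross / npairs, 0.0)
        gain = np.clip(cross / np.maximum(auto, 1e-12), 0.0, 1.0)  # Wiener-like gain
        _, y = istft(gain * X.mean(axis=0), fs, nperseg=nperseg)
        return y

The gain approaches one in bins where the capsules agree (correlated content) and falls toward zero where only uncorrelated self noise is present.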
Reply by Rune Allnor April 3, 2005
Bob Cain wrote:
> I'm looking at a problem involving an array of microphone
> elements. The array is closely spaced, clustered within
> less than half the wavelength of the highest frequency of
> interest.
>
> The operations used to create outputs from this array is
> differencing, time integration and then further mixing. The
> problem is twofold: first the elements are noisy because
> they are tiny and second, the integration seriously boosts
> the LF noise.
>
> What I'm wondering about is the use of cross correlation
> among the signals from the set of elements to determine
> where in the momentary spectrum there are components of
> interest so as to filter out the noise in the spectral
> regions where little correlation exists. Since the elements
> are in close proximity, the correlation, especially at the
> lower frequencies is very high and it seems to me that info
> could be used to ameliorate the problems of enhanced self
> noise at lower frequencies.
>
> This is just the germ of an idea and I'm hoping for some
> enlightening discussion (or dissuasion.)
I have done something like that in the past. I had two channels, where I wanted to filter out the signal that was mutual between the channels. What I did was to form the autocovariance matrixes for both channels, and do some voodoo subspace analysis on those matrixes. It did work quite well for my purposes.

Try to get a copy of my thesis (I've published a link to it earlier, although the link may be obsolete by now), and check chapter 2.3.

If you can't get the thesis, post a note here and I'll mail you a copy in a few days.

Rune
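Without the thesis to hand, here is only a generic sketch of the covariance-plus-subspace flavour of approach (not Rune's actual method): form the inter-channel covariance matrix, eigendecompose it, and project the channels onto the dominant eigenvectors, which is where the mutually correlated part of the channels lives.

    import numpy as np

    def dominant_subspace_projection(channels, signal_rank=1):
        # channels: (n_channels, n_samples). Keeps only the signal_rank
        # strongest eigenvectors of the inter-channel covariance.
        X = channels - channels.mean(axis=1, keepdims=True)
        R = X @ X.T / X.shape[1]          # inter-channel covariance matrix
        _, V = np.linalg.eigh(R)          # eigenvalues in ascending order
        Vs = V[:, -signal_rank:]          # dominant (correlated) subspace
        return Vs @ (Vs.T @ X)            # channels with the weak subspace removed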