Forums

How to model or standardize two microphones for Audio Processing?

Started by charansai November 7, 2015
Note that my previous comments apply to "wideband ANC". If your system is only cancelling a single frequency, then a different set of rules applies.

Bob
 
@Bob Masta Thanks for your time
> If you are allowed any choice in the matter, you might consider making the
> test frequency fall exactly on a spectral line. The line spacing is
> SampleRate/N, so for your chosen 44100 rate and N = 50 the frequency spacing
> is 882 Hz (0, 882, 1764, 2646...). If you can choose one of these
> "line-locked" frequencies you eliminate all problems of spectral leakage and
> don't need a window function.
I will do this. I chose a 2 kHz test signal because I am processing the audio data at 44.1 kHz in blocks of 50 samples, so one block spans about 1.13 ms; I thought that with this high a frequency I could at least see two cycles of the sine wave.
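Bob's line-spacing rule can be sketched numerically (fs and N are from the thread; the 2 kHz target is the poster's own choice):

```python
# Line spacing and the nearest "line-locked" test frequency, per Bob's rule.
fs = 44100
N = 50
spacing = fs / N              # 882.0 Hz between spectral lines
k = round(2000 / spacing)     # nearest bin to the intended 2 kHz tone
f_test = k * spacing          # 1764.0 Hz -- exactly on a line, no leakage
```

Note that 2000 Hz itself is not on a line; choosing 1764 Hz instead would put the tone exactly on a spectral line and remove the need for a window function.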
> Also, 44100 Hz may not be the best choice of sample rate, since most sound
> cards use 48000 Hz these days (or multiples or submultiples). They appear to
> accept 44100, but typically that causes the input and output to drift apart
> in time. This may not be a big issue for you, but something to keep in mind.
I am using an NI myRIO-1900; it has something called the high-throughput FPGA personality, which lets me collect data at a maximum sample rate of 44.1 kHz, not beyond that. I want to make the most of the available sample rate!
>> 1) My power spectrum dB values are very low; the essential components are
>> located around -40 dB to -50 dB, whereas the microphone sensitivity is
>> itself -50 dB, and the sinusoidal noise level measured using a mobile app
>> is 75 dB! So, my question is: if I amplify the input signal by a factor of
>> 1000 or so, will it help? Is it a good thing to do? I want to amplify the
>> signal so that it is relatively comparable to the original noise source.
>> (I have tried an amplification factor of 1000; the amplitudes of both
>> signals are much better, around 0.1 peak instead of 0.0001, and the dB
>> values are around -10 dB.)
>
> I'm not clear on exactly what's going on here. You should amplify the signal
> so the loudest signal peaks are close to the maximum ADC input level.
I cannot easily add an amplifier before feeding the signal to the audio input of the NI myRIO, and I don't want extra circuitry in the microphone path before the ADC, because I feel any extra circuits in audio applications will introduce extra noise, connection problems, and so on.
> As a general rule, you should avoid all these so-called unidirectional mics,
> since they have terrible frequency and phase response. Get plain
> omnidirectional units.

That is what I thought! I will consider this as well.
>> 3) How bad is it to use just 50 samples per frame in audio applications? I
>> am seriously concerned about the loudspeaker: with just 50 samples the
>> sound through the loudspeaker is not at all clear, even without any data
>> processing!
>
> Why 50? If you are using FFTs you should pick a power of 2. The more
> samples, the better the frequency resolution and the lower the noise floor,
> at the expense of poorer temporal resolution and more processing time.
I can maybe go for 64 samples. I chose 50 because, as I explained earlier, 50 samples at 44.1 kHz give one block of audio data in 1.13 ms, and my air-duct model is around 1 meter long. That leaves me roughly another 1.7 ms to process the data and produce the anti-noise.

---------------------------------------
Posted through http://www.DSPRelated.com
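The timing budget described above can be sketched as follows (the speed of sound, 343 m/s at room temperature, is an assumption not stated in the thread):

```python
# Timing budget for a 50-sample frame at 44.1 kHz in a 1 m duct.
fs = 44100
N = 50
duct_length = 1.0                               # metres
c = 343.0                                       # m/s, assumed speed of sound
frame_time = N / fs                             # ~1.13 ms to acquire one block
acoustic_delay = duct_length / c                # ~2.92 ms for sound to traverse the duct
processing_budget = acoustic_delay - frame_time # ~1.78 ms left for processing
```

This matches the "approx 1.7 ms" figure mentioned later in the thread: whatever processing happens must finish before the noise wavefront reaches the far end of the duct.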
Hello Bob! Thanks for your time :)
> Since you mentioned FxLMS, I assume you have a classic setup with a noise
> source that is picked up by a noise-sense microphone, a loudspeaker to
> generate anti-noise, and then, farther "downstream", you have an error
> microphone.
True. I am trying to implement ANC for a kind of air duct (a 1-meter-long pipe), so it is the broadband feedforward FxLMS algorithm. But for initial results, and to get myself comfortable with the sensors and actuators involved, I am using a sinusoidal noise source.
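Since the thread centers on FxLMS with a sinusoidal source, here is a minimal simulation sketch of the algorithm. Everything in it (the primary path P, secondary path S, step size, filter length) is made up for illustration, and the error signal is simulated; a real rig would measure e(n) at the error microphone instead.

```python
import numpy as np

# Toy FxLMS loop for a single-tone reference.
fs, f0 = 44100, 1764.0
n = np.arange(4000)
x = np.sin(2 * np.pi * f0 * n / fs)     # reference from the noise-sense mic

P = np.array([0.0, 0.8, 0.3])           # hypothetical primary path (FIR)
S = np.array([0.0, 0.6, 0.2])           # hypothetical secondary path (FIR)
S_hat = S.copy()                        # assume a perfect offline model of S

L = 16                                  # adaptive filter length (taps)
w = np.zeros(L)
mu = 0.02                               # step size

d = np.convolve(x, P)[:x.size]          # disturbance reaching the error mic
xf = np.convolve(x, S_hat)[:x.size]     # filtered reference x'(n)

xbuf, xfbuf = np.zeros(L), np.zeros(L)
ybuf = np.zeros(S.size)
e_hist = []
for i in range(x.size):
    xbuf = np.roll(xbuf, 1); xbuf[0] = x[i]
    y = w @ xbuf                        # anti-noise sample sent to the speaker
    ybuf = np.roll(ybuf, 1); ybuf[0] = y
    e = d[i] + S @ ybuf                 # superposition at the error mic
    xfbuf = np.roll(xfbuf, 1); xfbuf[0] = xf[i]
    w -= mu * e * xfbuf                 # FxLMS weight update
    e_hist.append(e)
# The residual |e| shrinks toward zero as the tone is cancelled.
```

The key FxLMS ingredient is that the weight update uses the reference filtered through the secondary-path model (x'), not the raw reference.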
>In case A you need to know the speaker-to-error mic path ahead of time.
I don't know the path ahead of time; rather, I am planning to use offline modelling of the secondary path using Gaussian noise.
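The offline modelling step mentioned here can be sketched as plain LMS system identification: drive the anti-noise speaker with Gaussian noise and fit an FIR model to the error-mic response. The "true" path S below is a toy stand-in; in practice d would be the recorded error-mic signal.

```python
import numpy as np

rng = np.random.default_rng(0)
S = np.array([0.0, 0.0, 0.5, 0.3, -0.1])   # hypothetical true secondary path
x = rng.standard_normal(20000)             # Gaussian excitation to the speaker
d = np.convolve(x, S)[:x.size]             # error-mic signal during training

L = 8                                      # model length >= true response length
S_hat = np.zeros(L)
xbuf = np.zeros(L)
mu = 0.01
for i in range(x.size):
    xbuf = np.roll(xbuf, 1); xbuf[0] = x[i]
    e = d[i] - S_hat @ xbuf                # prediction error
    S_hat += mu * e * xbuf                 # LMS identification update
# S_hat's first taps converge to S; the extra taps go to ~0.
```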
> If you are in a reverberant environment it is very important that this
> filter be long enough to model the entire impulse response.
How do I decide the length of the filter? Will it depend on the length of the secondary path?
> It's worthwhile noting that at very low frequencies the phase response of a
> large room is pretty crazy and can change when things move around in the
> room, and this is a real challenge that has no easy textbook solution.
What do you mean by low frequencies? Can you give a number below which the behavior gets worse?
> Generally people do not use block-based processing for these systems due to
> the latency. If you do use block-based processing you need to make sure the
> latency is less than the shortest acoustic path.
Latency is the main issue pushing me toward a very small number of samples (50). At 44.1 kHz and 50 samples, I get one frame of data every 1.13 ms, and considering the 1-meter-long pipe I need to process and produce the anti-noise in approximately 1.7 ms!
> On top of all this there is the issue of coherence and turbulence. If two
> signals have low coherence then there is no filter that can turn one into
> the other. Acoustic paths in real-life environments have a nasty habit of
> having poor coherence, especially if the distances are large.
Is a 1-meter distance large? How do I address this coherence problem? On what factors does it depend?
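Coherence between the two mic signals can at least be measured. Below is a Welch-style magnitude-squared coherence sketch using NumPy on toy signals (a shared tone plus independent noise at each mic); all names and parameters are illustrative, not from the thread.

```python
import numpy as np

rng = np.random.default_rng(1)
fs, N = 44100, 1024
t = np.arange(50 * N) / fs
s = np.sin(2 * np.pi * 1764 * t)              # shared (coherent) tone
x = s + 0.1 * rng.standard_normal(t.size)     # "reference mic"
y = s + 0.1 * rng.standard_normal(t.size)     # "error mic"

def avg_csd(a, b, N):
    """Average cross-spectrum over non-overlapping Hann-windowed segments."""
    w = np.hanning(N)
    starts = range(0, a.size - N + 1, N)
    acc = sum(np.conj(np.fft.rfft(w * a[i:i+N])) * np.fft.rfft(w * b[i:i+N])
              for i in starts)
    return acc / len(starts)

Pxy = avg_csd(x, y, N)
Pxx = avg_csd(x, x, N).real
Pyy = avg_csd(y, y, N).real
coh = np.abs(Pxy) ** 2 / (Pxx * Pyy)          # magnitude-squared coherence

k = round(1764 * N / fs)                      # bin nearest the tone
# coh[k] is close to 1 (strong coherence at the tone); noise-only bins are low.
```

A coherence near 1 at the noise frequency means a linear filter can in principle cancel it; low coherence (e.g. from turbulence) bounds the achievable attenuation no matter how good the filter is.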
When you refer to your frame size, is this just the buffer size you are using on the input to your DSP, or are you actually doing frequency-domain processing on each block? If you are doing your processing in the time domain, are you limiting yourself to filter lengths of only 50 samples, or are you keeping a larger buffer of past samples that can be used to make longer filters?

You need to have some idea of the speaker-to-error mic impulse response length and make sure your filtering is at least as long as that response. If your filter is only 50 samples long then your impulse response including the transport delay must be pretty short. 

I get the impression that at least for now you are just trying to set the amplitude and phase to cancel a single frequency. Again you need to make sure that 50 taps is long enough to give you a delay of half the period plus the transport delay. 
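Bob's rule of thumb here can be roughed out numerically. The speaker-to-error-mic distance below is an assumption (the full 1 m duct length); if the actual distance is shorter, the transport-delay term shrinks accordingly.

```python
# Tap count needed to realize half a period of the tone plus transport delay.
fs = 44100
f0 = 2000.0                     # test tone from the thread
c = 343.0                       # assumed speed of sound, m/s
distance = 1.0                  # assumed speaker-to-error-mic distance, metres

half_period_taps = 0.5 * fs / f0             # ~11 taps for half a cycle at 2 kHz
transport_taps = distance / c * fs           # ~129 taps of transport delay
needed = half_period_taps + transport_taps   # ~140 taps, well over 50
```

Under these assumptions, 50 taps would not be enough; either the mic spacing must be much shorter or the filter must be longer.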
My low-frequency comments don't really apply in a pipe. 

If you are planning to eventually cancel air-flow noise then turbulence will be a big problem for you. It's no longer an LTI system.

Bob
The MEMS microphones from Knowles are matched well. In particular, the ones with the "-1" suffix.

> When you refer to your frame size, is this just the buffer size you are
> using on the input to your DSP, or are you actually doing frequency-domain
> processing on each block? If you are doing your processing in the
> time-domain, are you limiting yourself to filter lengths of only 50 samples
> or are you keeping a larger buffer of past samples that are used for making
> longer filters?
I am doing time-domain processing. It seems I am lacking some intuition about this frame-based processing. I will brush up on my concepts and get back to you. I will consider your valuable points; thanks for your cooperation.
> The MEMS microphones from Knowles are matched well. In particular, the ones
> with the "-1" suffix.
It seems their sensitivities are very low; instead I am going for a commercial product, Audio-Technica microphones. They are battery powered, so hopefully no pre-amplification is required. Thank you.