Forums

How to model or standardize two microphones for Audio Processing?

Started by charansai November 7, 2015
I am working on an Active Noise Cancellation project, where I am using
two microphones (reference microphone, error microphone) and one
loudspeaker. For this, I have bought two microphones (König Electronic
CMP-MIC8); the problem now is to process them. Even though both are of
the same model, their signals without sound / with sound are quite
different and do not match at all.
As I will use an adaptive iterative algorithm (FxLMS), I want to
standardize both microphones. I am using a standard 2 kHz frequency,
44.1 kHz sample rate, amplitude 2, 50-sample sinusoidal noise to test
both microphones.
I have a few questions about basic audio sample values and microphone
characteristics.
1) Why is the measured amplitude severely attenuated for the sinusoidal
signal?
2) How do I center the two sinusoids around vertical zero for the
sinusoidal signal? Is it because of a DC offset? I tried subtracting the
average value of each frame from each sample of the frame, but it is not
working.
3) Do I need to do some filtering? I have tried both high-pass and
low-pass with the built-in functions available in LabVIEW; the results
are even worse, but I am not sure what I did!
4) I tried to model the microphones relative to each other using LMS,
feeding one microphone's input as the input signal and the other
microphone's as the desired signal, hoping that I could find a
correction factor, but the best error that I am getting is around 10%
of the signal, and this error increases with the amplitude of the noise
source :(
So how do I standardize the two microphones? I am attaching the two
microphone signals and their error signals, with sinusoidal noise and
without any noise.
Please help me solve this problem.
Thank you.
PS: I am sampling the audio at 44.1 kHz with 50 samples per frame, if
that matters.


---------------------------------------
Posted through http://www.DSPRelated.com
On 07.11.15 14.16, charansai wrote:
> I am working on an Active Noise Cancellation project, where I am using
> two microphones (reference microphone, error microphone) and one
> loudspeaker. For this, I have bought two microphones (König Electronic
> CMP-MIC8); the problem now is to process them. Even though both are of
> the same model, their signals without sound / with sound are quite
> different and do not match at all.
Well, if you do only noise cancellation, the amplitude response does not matter much, since the target zero is independent of it. However, the phase /does/ count. So place both microphones at the same place (as far as possible) and record a full-spectrum response of both of them simultaneously. The component-wise quotient of the two response FFTs is the delta response. You can use this complex vector for compensation during your calculations.
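To make the FFT-quotient idea concrete, here is a minimal numpy sketch (my own illustration, not code from this thread; the function names are made up):

```python
import numpy as np

def delta_response(ref_rec, err_rec):
    """Component-wise quotient of the two response FFTs.

    ref_rec, err_rec: simultaneous recordings of the same broadband
    test signal with both microphones at (nearly) the same spot.
    Returns the complex delta response H = FFT(ref) / FFT(err).
    """
    R = np.fft.rfft(ref_rec)
    E = np.fft.rfft(err_rec)
    eps = 1e-12  # guard against division by zero in empty bins
    return R / (E + eps)

def compensate(err_frame, H):
    """Transform an error-mic frame as if it were recorded with the
    reference mic, by multiplying its spectrum with the delta response."""
    return np.fft.irfft(np.fft.rfft(err_frame) * H, n=len(err_frame))
```

The quotient captures both the magnitude and the phase difference between the two channels, which is why it is a complex vector and not a single correction factor.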
> 1) Why is the measured amplitude severely attenuated for the
> sinusoidal signal?
???
> 2) How do I center the two sinusoids around vertical zero for the
> sinusoidal signal? Is it because of a DC offset?
You should not take DC into account in any way. Do all your calculations in the frequency domain and only in the operating range of your equipment. Ignore all other frequencies.
> I tried subtracting the average value of each frame from each sample
> of the frame, but it is not working.
Why is it not working? What did you expect? This just removes the zero-frequency component. Other low-frequency noise stays there.
> 3) Do I need to do some filtering? I have tried both high-pass and
> low-pass with the built-in functions available in LabVIEW; the results
> are even worse, but I am not sure what I did!
Try FFT(mic1) / FFT(mic2) to get the differences as mentioned above.
> 4) I tried to model the microphones relative to each other using LMS,
> feeding one microphone's input as the input signal and the other
> microphone's as the desired signal, hoping that I could find a
> correction factor, but the best error that I am getting is around 10%
> of the signal, and this error increases with the amplitude of the
> noise source :(
So you are operating non-linearly. Somewhere in your path, distortion takes place. You should avoid that.
> PS: I am sampling the audio at 44.1 kHz with 50 samples per frame, if
> that matters
Frame? What kind of frame?

Marcel
Thank you for the reply,

> So place both microphones at the same place (as far as possible) and
> record a full-spectrum response of both of them simultaneously. The
> component-wise quotient of the two response FFTs is the delta response.
> You can use this complex vector for compensation during your
> calculations.

What is the component-wise quotient (the delta response), and is the complex vector the correction factor? Can you please elaborate on your answer? A reference that explains the theory behind what you said would be great.
>> 1) Why is the measured amplitude severely attenuated for the
>> sinusoidal signal?
>
> ???
I mean: when my sinusoidal noise source has amplitude 2, why is it reduced by a huge factor in the microphone data?
> You should not take DC into account in any way. Do all your
> calculations in the frequency domain and only in the operating range of
> your equipment. Ignore all other frequencies.
If I don't consider or remove the DC offset or low-frequency components from both microphones, how can the iterative algorithm (FxLMS) identify the statistical imbalances between the two signals?
>> I tried subtracting the average value of each frame from each sample
>> of the frame, but it is not working.
>
> Why is it not working? What did you expect?
>
> This just removes the zero-frequency component. Other low-frequency
> noise stays there.
I was expecting that eventually both signals' values would center around zero, so that their amplitude and phase could be relatively modeled.
>> 4) I tried to model the microphones relative to each other using LMS,
>> feeding one microphone's input as the input signal and the other
>> microphone's as the desired signal, hoping that I could find a
>> correction factor, but the best error that I am getting is around 10%
>> of the signal, and this error increases with the amplitude of the
>> noise source :(
>
> So you are operating non-linearly. Somewhere in your path, distortion
> takes place. You should avoid that.
What do you mean by operating non-linearly? I could not understand what you mean!
>> PS: I am sampling the audio at 44.1 kHz with 50 samples per frame,
>> if that matters
>
> Frame? What kind of frame?
I mean a number of samples as one frame. I take 50 samples, process the data, and produce the anti-noise through the loudspeaker; then I take another 50 samples, process the data, and so on. So I called these 50 samples of data one frame.
I cannot add the resulting diagrams here. Please see the following link for the responses that I am getting from the two microphones: [link](http://sound.stackexchange.com/questions/37509/how-to-standardize-two-similar-microphones?noredirect=1#comment34744_37509)
Any further help would be deeply appreciated. Thank you.
charansai <110065@DSPRelated> wrote:

>So how to standardize the two microphones ??
Start with a large number of microphones, and sort through them until you find two that are sufficiently matched.

Steve
On 08.11.15 00.12, charansai wrote:
> Thank you for the reply,
>
>> So place both microphones at the same place (as far as possible) and
>> record a full-spectrum response of both of them simultaneously. The
>> component-wise quotient of the two response FFTs is the delta
>> response. You can use this complex vector for compensation during
>> your calculations.
>
> What is the component-wise quotient (the delta response), and is the
> complex vector the correction factor? Can you please elaborate on your
> answer? A reference that explains the theory behind what you said
> would be great.
Let X be the sound pressure at the microphone. Then the microphone records some Hm(X). As long as Hm(X) is linear, only the frequency and phase response are affected. Two microphones have different Hmi(X). You would need an anechoic room and a reference microphone to measure Hm(X) absolutely; you probably don't have this option. But for your purpose it is sufficient to know the /differences/ between your microphones. Then you can calculate, once, a transfer function that transforms a recording from microphone 1 into data as if it had been recorded with microphone 2.
To get this differential response, mathematically Hm1(X)/Hm2(X), you only need to record the same signal with both microphones. Of course, the test signal should contain all relevant frequencies, e.g. white noise, pink noise or something like that.
Have a look at ->Fourier transform and ->complex transfer functions in the ->frequency domain to get used to these methods.
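One practical way to estimate the differential response Hm1(X)/Hm2(X) from a broadband recording (my suggestion, not from the thread) is to average cross- and auto-spectra over many segments rather than divide two single, noisy FFTs bin by bin:

```python
import numpy as np

def differential_response(mic1, mic2, nfft=1024):
    """Estimate Hm1(f)/Hm2(f) by averaging cross/auto spectra
    over windowed segments, which is more robust to measurement
    noise than a single-shot FFT quotient."""
    n_seg = len(mic1) // nfft
    S12 = np.zeros(nfft // 2 + 1, dtype=complex)  # cross-spectrum sum
    S22 = np.zeros(nfft // 2 + 1)                 # mic2 auto-spectrum sum
    win = np.hanning(nfft)
    for k in range(n_seg):
        F1 = np.fft.rfft(mic1[k * nfft:(k + 1) * nfft] * win)
        F2 = np.fft.rfft(mic2[k * nfft:(k + 1) * nfft] * win)
        S12 += F1 * np.conj(F2)
        S22 += np.abs(F2) ** 2
    return S12 / S22   # complex delta response per frequency bin
```

Feeding both channels the same white-noise test signal, as suggested above, exercises every bin of this estimate at once.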
>>> 1) Why is the measured amplitude severely attenuated for the
>>> sinusoidal signal?
>>
>> ???
>
> I mean: when my sinusoidal noise source has amplitude 2, why is it
> reduced by a huge factor in the microphone data?
What did you expect? There are dozens of arbitrary amplification factors in between: the amplifier of the speaker, the speaker's efficiency, the microphones' sensitivity, the microphone preamplifier, and so on.
>> You should not take DC into account in any way. Do all your
>> calculations in the frequency domain and only in the operating range
>> of your equipment. Ignore all other frequencies.
>
> If I don't consider or remove the DC offset or low-frequency
> components from both microphones, how can the iterative algorithm
> (FxLMS) identify the statistical imbalances between the two signals?
Sorry, I am not familiar with FxLMS. But whatever it does, you need to band-pass all microphone inputs to the range of frequencies they can deal with. This will remove DC as well. But be aware that this cannot take only 50 samples into account; on that scale the DC component needs to be compensated as well, since it is just a snapshot of lower audible frequencies. I would recommend a simple 2nd-order IIR high-pass filter applied before further processing of both microphones' data. And avoid large group delays: the filter should be at least aperiodic, Q = 0.5 or even less.
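A sketch of such a filter, using the well-known RBJ audio-EQ-cookbook biquad formulas (the 100 Hz cutoff is my guess; pick one below the band your microphones actually cover):

```python
import numpy as np
from scipy.signal import lfilter

def highpass_biquad(x, fs=44100.0, f0=100.0, q=0.5):
    """2nd-order IIR high-pass (RBJ cookbook biquad).
    Q = 0.5 gives an aperiodic (critically damped) response with
    no resonant overshoot, as recommended above."""
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    cosw = np.cos(w0)
    b = np.array([(1 + cosw) / 2, -(1 + cosw), (1 + cosw) / 2])
    a = np.array([1 + alpha, -2 * cosw, 1 - alpha])
    return lfilter(b / a[0], a / a[0], x)
```

Applied to both microphone streams, this removes DC and the lowest frequencies while barely touching a 2 kHz test tone, and its low Q keeps the group delay small.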
>>> I tried subtracting the average value of each frame from each
>>> sample of the frame, but it is not working.
>>
>> Why is it not working? What did you expect?
>>
>> This just removes the zero-frequency component. Other low-frequency
>> noise stays there.
>
> I was expecting that eventually both signals' values would center
> around zero, so that their amplitude and phase could be relatively
> modeled.
Well, if you take only 50 samples, you get the spectrum convolved with the response of a ->rectangular window function, which is probably not what you wanted.
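To illustrate (my own numerical check, assuming numpy): with fs = 44100 and N = 50 the FFT bins sit at multiples of 882 Hz, so a 2 kHz tone cannot fall on a bin and the rectangular window smears its energy across the spectrum:

```python
import numpy as np

fs, f, n = 44100, 2000.0, 50
t = np.arange(n) / fs
spec = np.abs(np.fft.rfft(np.sin(2 * np.pi * f * t)))
bins = np.fft.rfftfreq(n, d=1 / fs)   # 0, 882, 1764, 2646, ... Hz
peak = bins[np.argmax(spec)]          # nearest bin: 1764 Hz, not 2000 Hz
```

The largest bin lands at 1764 Hz and the neighbouring bins carry substantial leakage, which matches the off-center peaks reported earlier in the thread.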
>>> 4) I tried to model the microphones relative to each other using
>>> LMS, feeding one microphone's input as the input signal and the
>>> other microphone's as the desired signal, hoping that I could find
>>> a correction factor, but the best error that I am getting is around
>>> 10% of the signal, and this error increases with the amplitude of
>>> the noise source :(
>>
>> So you are operating non-linearly. Somewhere in your path, distortion
>> takes place. You should avoid that.
>
> What do you mean by operating non-linearly? I could not understand
> what you mean!
If the /relative/ error depends on the /absolute/ amplitude of the signal, then the operation is no longer linear. Most probably either the speaker or the microphones introduce distortion. You need to avoid that, or, if that is not sufficiently possible, compensate for it (very complicated).
>>> PS: I am sampling the audio at 44.1 kHz with 50 samples per frame,
>>> if that matters
>>
>> Frame? What kind of frame?
>
> I mean a number of samples as one frame. I take 50 samples, process
> the data, and produce the anti-noise through the loudspeaker; then I
> take another 50 samples, process the data, and so on. So I called
> these 50 samples of data one frame.
OK, so it is the latency of your filter.
> Any further help would be deeply appreciated.
Sorry, I am not into ANC so far. I have only used digital room correction, which is similar to some degree but operates with significantly larger windows of 65k samples.

Marcel
> charansai <110065@DSPRelated> wrote:
>
>> So how to standardize the two microphones ??
>
> Start with a large number of microphones, and sort through them until
> you find two that are sufficiently matched.
>
> Steve
I have only three microphones; I don't think I can afford many more. Even if the results are a little erroneous, I need some optimized solution to find the correction factor between two microphones.
@ Marcel Müller
Thank you again for the detailed explanation of each question.
The bandpass filter improved the analysis. I performed a PSD on the two
microphones, keeping them very close to the speaker. For 50 samples,
both microphones' frequency responses are centered around 1.9 kHz
instead of 2 kHz, whereas if I go to a higher number of samples per
frame, like 500, the frequencies are centered exactly at 2 kHz.
I cannot post the images here.
1) My power spectrum dB values are very low; the essential components
are located around -40 dB to -50 dB, whereas the microphone sensitivity
is itself -50 dB, and the sinusoidal noise level measured using a
mobile app is 75 dB! So my question is: if I amplify the input signal
by a factor of 1000 or so, will it help? Is it a good thing to do?
I want to amplify the signal so that it will be relatively comparable
to the original noise source. (I have tried an amplification factor of
1000; the amplitudes of both signals are much better, around 0.1 peak
instead of 0.0001, and the dB values are around -10 dB.)
2) How much will the directionality of the microphone affect the
amplitude and phase of the measurement?
My microphones are directional ('polar pattern: supercardioid').
I ask because, interestingly, the phase and amplitude of the
microphones change not only away from the speaker but also in front of
it, i.e. if I change the position of the microphones right in front of
the speaker, I can see amplitude and phase changes!
3) How bad is it to use just 50 samples per frame in audio applications?
I am seriously concerned about the loudspeaker: with just 50 samples,
the sound through the loudspeaker is not at all clear, even without any
data processing!

-Charansai
 

Sorry, I cannot edit the post. I just want to retract my second question.

-Charansai
On Sun, 08 Nov 2015 17:09:21 -0600, "charansai"
<110065@DSPRelated> wrote:

> @ Marcel Müller
> Thank you again for the detailed explanation of each question.
> The bandpass filter improved the analysis. I performed a PSD on the
> two microphones, keeping them very close to the speaker. For 50
> samples, both microphones' frequency responses are centered around
> 1.9 kHz instead of 2 kHz, whereas if I go to a higher number of
> samples per frame, like 500, the frequencies are centered exactly at
> 2 kHz.
If you are allowed any choice in the matter, you might consider making the test frequency fall exactly on a spectral line. The line spacing is SampleRate/N, so for your chosen 44100 Hz rate and N = 50, the frequency spacing is 882 Hz (0, 882, 1764, 2646, ...). If you choose one of these "line-locked" frequencies, you eliminate all problems of spectral leakage and don't need a window function.
Also, 44100 Hz may not be the best choice of sample rate, since most sound cards use 48000 Hz these days (or multiples or submultiples). They appear to accept 44100, but typically that causes the input and output to drift apart in time. This may not be a big issue for you, but it is something to keep in mind.
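Picking the nearest line-locked frequency can be done mechanically; a small numpy check of the idea (rounding to the nearest line is my phrasing of it):

```python
import numpy as np

fs, n = 44100, 50
spacing = fs / n                              # 882 Hz between spectral lines
f_locked = round(2000 / spacing) * spacing    # 1764 Hz: nearest line to 2 kHz
t = np.arange(n) / fs
spec = np.abs(np.fft.rfft(np.sin(2 * np.pi * f_locked * t)))
# with a line-locked tone, essentially all energy lands in a single bin:
# no spectral leakage, and no window function is needed
```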
> I cannot post the images here.
> 1) My power spectrum dB values are very low; the essential components
> are located around -40 dB to -50 dB, whereas the microphone
> sensitivity is itself -50 dB, and the sinusoidal noise level measured
> using a mobile app is 75 dB! So my question is: if I amplify the input
> signal by a factor of 1000 or so, will it help? Is it a good thing to
> do?
> I want to amplify the signal so that it will be relatively comparable
> to the original noise source. (I have tried an amplification factor of
> 1000; the amplitudes of both signals are much better, around 0.1 peak
> instead of 0.0001, and the dB values are around -10 dB.)
I'm not clear on exactly what's going on here. You should amplify the signal so the loudest signal peaks are close to the maximum ADC input level.
> 2) How much will the directionality of the microphone affect the
> amplitude and phase of the measurement?
> My microphones are directional ('polar pattern: supercardioid').
> I ask because, interestingly, the phase and amplitude of the
> microphones change not only away from the speaker but also in front of
> it, i.e. if I change the position of the microphones right in front of
> the speaker, I can see amplitude and phase changes!
As a general rule, you should avoid all these so-called unidirectional mics, since they have terrible frequency and phase response. Get plain omnidirectional units.
> 3) How bad is it to use just 50 samples per frame in audio
> applications?
> I am seriously concerned about the loudspeaker: with just 50 samples,
> the sound through the loudspeaker is not at all clear, even without
> any data processing!
Why 50? If you are using FFTs, you should pick a power of 2. The more samples, the better the frequency resolution and the lower the noise floor, at the expense of poorer temporal resolution and more processing time.

Best regards,

Bob Masta
DAQARTA v8.00
Data AcQuisition And Real-Time Analysis
www.daqarta.com
Scope, Spectrum, Spectrogram, Sound Level Meter
Frequency Counter, Pitch Track, Pitch-to-MIDI
FREE 8-channel Signal Generator, DaqMusiq generator
Science with your sound card!
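The noise-floor point above can be seen numerically. A toy demo of mine (a unit tone placed on an exact bin plus weak white noise): as N grows, the tone's peak rises further above the per-bin noise floor.

```python
import numpy as np

def tone_to_floor_ratio(n, fs=48000, seed=1):
    """Peak-to-median spectral ratio for a unit tone on bin 4
    plus white noise of standard deviation 0.01."""
    rng = np.random.default_rng(seed)
    t = np.arange(n) / fs
    x = np.sin(2 * np.pi * (4 * fs / n) * t) + 0.01 * rng.standard_normal(n)
    spec = np.abs(np.fft.rfft(x))
    return spec.max() / np.median(spec)  # larger N -> larger ratio
```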
Since you mentioned FxLMS, I assume you have the classic setup: a noise source that is picked up by a noise-sense microphone, a loudspeaker to generate anti-noise, and then, farther "downstream", an error microphone.
This setup falls into one of two categories:

A)  you have no acoustic path between the loudspeaker and the noise-pickup microphone, or
B) you DO have such a path

In case A you need to know the speaker-to-error-mic path ahead of time. If you are in a reverberant environment, it is very important that this filter be long enough to model the entire impulse response. If your filter's phase response does not match the acoustic path's phase response to within about +/- 45 degrees, the LMS adaptive filter will either converge very slowly or even diverge. It's worth noting that at very low frequencies the phase response of a large room is pretty crazy and can change when things move around in the room; this is a real challenge that has no easy textbook solution.

In case B you have another acoustic path that needs to be matched with an FIR filter so you can cancel the portion of the anti-noise signal that is picked up by the noise-sense mic. Again you need to make sure that this filter is long enough to completely model the path. 
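A toy sketch of the filtered-x structure described above (my illustration, assuming an exact FIR secondary-path model `s_hat` and a sample-by-sample loop; all names and constants are made up):

```python
import numpy as np

def fxlms(x, d, s_hat, n_taps=32, mu=0.01):
    """Minimal FxLMS loop (case A, no speaker-to-sense-mic feedback).

    x:     reference (noise-sense) microphone samples
    d:     disturbance arriving at the error microphone
    s_hat: FIR estimate of the speaker-to-error-mic (secondary) path
    Returns the error-mic signal over time.
    """
    w = np.zeros(n_taps)          # adaptive control filter
    xbuf = np.zeros(n_taps)       # reference history
    fxbuf = np.zeros(n_taps)      # filtered-reference history
    ybuf = np.zeros(len(s_hat))   # anti-noise history through the path
    e = np.empty(len(x))
    for k in range(len(x)):
        xbuf = np.roll(xbuf, 1); xbuf[0] = x[k]
        y = w @ xbuf                         # anti-noise sample to speaker
        ybuf = np.roll(ybuf, 1); ybuf[0] = y
        e[k] = d[k] + s_hat @ ybuf           # error mic = noise + anti-noise
        fx = s_hat @ xbuf[:len(s_hat)]       # reference filtered by path model
        fxbuf = np.roll(fxbuf, 1); fxbuf[0] = fx
        w -= mu * e[k] * fxbuf               # filtered-x LMS weight update
    return e
```

The filtered reference is exactly why `s_hat` matters: if its phase strays too far from the real path's phase, the update direction is wrong and the loop converges slowly or diverges, as noted above.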

Generally people do not use block-based processing for these systems due to the latency. If you do use block-based processing you need to make sure the latency is less than the shortest acoustic path. 

On top of all this there is the issue of coherence and turbulence. If two signals have low coherence then there is no filter that can turn one into the other. Acoustic paths in real-life environments have a nasty habit of having poor coherence especially if the distances are large. 

Bob