Reply by charansai November 11, 2015
> The MEMS microphones from Knowles are matched well. In particular, the
> ones with the "-1" suffix.
It seems their sensitivities are very low, so instead I am going for a commercial product, Audio-Technica microphones. They are battery powered, so hopefully no preamplification is required. Thank you.
Reply by charansai November 11, 2015
> When you refer to your frame size, is this just the buffer size you are
> using on the input to your DSP, or are you actually doing frequency-domain
> processing on each block? If you are doing your processing in the time
> domain, are you limiting yourself to filter lengths of only 50 samples or
> are you keeping a larger buffer of past samples that are used for making
> longer filters?
I am doing time-domain processing. It seems I am lacking some intuition about this frame-based processing. I will brush up on my concepts and get back to you. I will consider your valuable points; thanks for your cooperation.
Reply by Seltech-USA November 10, 2015
The MEMS microphones from Knowles are matched well. In particular, the
ones with the "-1" suffix.

Reply by November 10, 2015
When you refer to your frame size, is this just the buffer size you are using on the input to your DSP, or are you actually doing frequency-domain processing on each block? If you are doing your processing in the time domain, are you limiting yourself to filter lengths of only 50 samples or are you keeping a larger buffer of past samples that are used for making longer filters?

You need to have some idea of the speaker-to-error mic impulse response length and make sure your filtering is at least as long as that response. If your filter is only 50 samples long then your impulse response including the transport delay must be pretty short. 

I get the impression that at least for now you are just trying to set the amplitude and phase to cancel a single frequency. Again you need to make sure that 50 taps is long enough to give you a delay of half the period plus the transport delay. 
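As a rough sanity check, here is that arithmetic in Python (the 1 m duct and the 2 kHz tone come from this thread; 343 m/s for the speed of sound is an assumed value):

# Back-of-the-envelope check: can a 50-tap FIR at 44.1 kHz realise a delay of
# half the tone period plus the acoustic transport delay?
fs = 44100.0        # sample rate (Hz)
n_taps = 50         # adaptive filter length (samples)
f_tone = 2000.0     # single-frequency test tone (Hz)
distance = 1.0      # speaker-to-error-mic distance (m), assumed ~ duct length
c = 343.0           # speed of sound (m/s), assumed

half_period_s = 0.5 / f_tone          # 0.25 ms
transport_s = distance / c            # roughly 2.9 ms
needed_taps = (half_period_s + transport_s) * fs

print(f"needed about {needed_taps:.0f} taps; available: {n_taps}")
# With these numbers the filter would need to span roughly 140 samples,
# so 50 taps is too short unless the transport delay is much smaller.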
My low-frequency comments don't really apply in a pipe. 

If you are planning to eventually cancel air-flow noise then turbulence will be a big problem for you. It's no longer an LTI system. 

Bob
Reply by charansai November 9, 2015
Hello Bob! Thanks for your time :)
> Since you mentioned FxLMS, I assume you have a classic setup with a noise
> source that is picked up by a noise-sense microphone, a loudspeaker to
> generate anti-noise, and then, farther "downstream" you have an error
> microphone.
True. I am trying to implement ANC for a kind of air duct (a 1-meter-long pipe), so it is the broadband feedforward FxLMS algorithm. But for initial results, and to get myself comfortable with the sensors and actuators involved, I am using a sinusoidal noise source.
> In case A you need to know the speaker-to-error mic path ahead of time.
I don't know the path ahead of time; rather, I am planning to use offline modelling of the secondary path using Gaussian noise.
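For reference, that offline modelling step can be sketched like this in plain NumPy (a toy stand-in FIR for the secondary path and a hand-picked step size; not the actual myRIO/LabVIEW code):

import numpy as np

# Sketch of offline secondary-path modelling with white Gaussian noise and LMS.
# 'secondary_path' is a toy FIR standing in for the real speaker-to-error-mic
# path; in the real system x would be played through the loudspeaker and d
# recorded at the error microphone.
rng = np.random.default_rng(0)
fs = 44100
secondary_path = rng.normal(size=128) * np.exp(-np.arange(128) / 20.0)

n = 2 * fs                                  # a couple of seconds of excitation
x = rng.normal(size=n)                      # Gaussian noise sent to the speaker
d = np.convolve(x, secondary_path)[:n]      # what the error mic would record

L = 256                                     # estimate length >= expected path length
w = np.zeros(L)                             # secondary-path estimate S_hat
mu = 1e-3                                   # LMS step size (must be small enough)
xbuf = np.zeros(L)                          # most recent excitation samples

for k in range(n):
    xbuf = np.roll(xbuf, 1)
    xbuf[0] = x[k]
    y = w @ xbuf                            # model output
    err = d[k] - y                          # modelling error
    w += mu * err * xbuf                    # LMS update

# w now approximates the secondary path; its decaying "tail" also gives a feel
# for how long the FxLMS filters need to be.

The length L has to be a guess at first; if the tail of w has not decayed to near zero by the last tap, the chosen length (and later the control filter) is too short.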
> If you are in a reverberant environment it is very important that this filter
> be long enough to model the entire impulse response.
How do I decide the length of the filter? Will it depend on the length of the secondary path?
> It's worthwhile noting that at very low frequencies the phase response of a
> large room is pretty crazy and can change when things move around in the
> room, and this is a real challenge that has no easy textbook solution.
What do you mean by low frequencies? Can you give a number below which the behavior gets worse?
> Generally people do not use block-based processing for these systems due to
> the latency. If you do use block-based processing you need to make sure the
> latency is less than the shortest acoustic path.
The latency is the main issue pushing me to go for a very small number of samples (50). At 44.1 kHz and 50 samples I get one frame of data in 1.13 msec, and considering the 1-meter-long pipe I need to process and produce the anti-noise in approximately 1.7 msec!
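In numbers (a quick Python check of the same budget):

fs = 44100          # sample rate (Hz)
frame = 50          # samples per frame
budget_ms = 1.7     # total time budget quoted for the 1 m duct (ms)

fill_ms = frame / fs * 1e3          # about 1.13 ms just to collect one frame
left_ms = budget_ms - fill_ms       # about 0.57 ms left for processing and output
print(f"frame fill: {fill_ms:.2f} ms, remaining budget: {left_ms:.2f} ms")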
> On top of all this there is the issue of coherence and turbulence. If two
> signals have low coherence then there is no filter that can turn one into
> the other. Acoustic paths in real-life environments have a nasty habit of
> having poor coherence especially if the distances are large.
Is a 1-meter distance large? How do I address this coherence problem? On what factors will it depend?
Reply by charansai November 9, 2015
 
@Bob Masta Thanks for your time
> If you are allowed any choice in the matter, you might consider making the
> test frequency fall exactly on a spectral line. The line spacing is
> SampleRate/N, so for your chosen 44100 rate and N=50 the frequency spacing
> is 882 Hz (0, 882, 1764, 2646...). If you can choose one of these
> "line-locked" frequencies you eliminate all problems of spectral leakage
> and don't need a window function.
I will do this. 2 kHz is the test-signal frequency I chose. Because I am processing the audio data at 44.1 kHz and 50 samples, I have around 1.13 msec of data per frame, and I thought that with this high a frequency I could at least see two cycles of the sine wave.
> Also, 44100 Hz may not be the best choice of sample rate, since most sound
> cards use 48000 Hz these days (or multiples or submultiples). They appear to
> accept 44100, but typically that causes the input and output to drift apart
> in time. This may not be a big issue for you, but something to keep in mind.
I am using an NI myRIO 1900. It has something called a high-throughput FPGA personality, in which I can collect data at a maximum sample rate of 44.1 kHz, not beyond that! I want to make the most of the available sample rate!
>> 1) My power spectrum dB values are very low; essential components are
>> located around -40 dB to -50 dB, whereas the microphone sensitivity is
>> itself -50 dB and the sinusoidal noise level measured using one mobile app
>> is 75 dB!! So, my question is, if I amplify the input signal by a factor
>> of 1000 or so, will it help? Is it a good thing to do?
>> I want to amplify the signal so that it will be relatively comparable to
>> the original noise source. (I have tried with an amplification factor of
>> 1000; the amplitude of both signals is much better, around 0.1 (peak)
>> instead of 0.0001, and the dB values are around -10 dB.)
>
> I'm not clear on exactly what's going on here. You should amplify the signal
> so the loudest signal peaks are close to the maximum ADC input level.
I obviously cannot add an amplifier before the Audio Input of the NI myRIO. I don't want to add extra circuitry for the microphone signal before feeding it to the ADC of the device, because I feel any extra circuits in audio applications will introduce extra noise, connection problems, and so on!
> As a general rule, you should avoid all these so-called unidirectional mics,
> since they have terrible frequency and phase response. Get plain
> omnidirectional units.
That is what I thought! I will consider this as well.
>> 3) How bad is it to use just 50 samples per frame in audio applications?
>> I am seriously concerned about the loudspeaker; with just 50 samples the
>> sound through the loudspeaker is not at all clear, even without any data
>> processing!!
>
> Why 50? If you are using FFTs you should pick a power of 2. The more samples,
> the better the frequency resolution and the lower the noise floor, at the
> expense of poorer temporal resolution and more processing time.
I can maybe go for 64 samples. The 50 is because, as I explained earlier, 50 samples at 44.1 kHz give one block of audio data in 1.13 msec, and my air-duct model is around 1 meter long. That leaves me only about half a millisecond to process the data and produce the anti-noise.
Reply by November 9, 2015
Note that my previous comments apply to wideband ANC. If your system is only cancelling a single frequency then a different set of rules applies. 

Bob
Reply by November 9, 2015
Since you mentioned FxLMS, I assume you have a classic setup with a noise source that is picked up by a noise-sense microphone, a loudspeaker to generate anti-noise, and then, farther "downstream" you have an error microphone. 
This algorithm falls into one of two categories:

A)  you have no acoustic path between the loudspeaker and the noise-pickup microphone, or
B) you DO have such a path

In case A you need to know the speaker-to-error mic path ahead of time. If you are in a reverberant environment it is very important that this filter be long enough to model the entire impulse response. If your filter phase response does not match the acoustic path phase response to around +/- 45 degrees then the lms adaptive filter will either converge very slowly or even diverge. It's worthwhile noting that at very low frequencies the phase response of a large room is pretty crazy and can change when things move around in the room, and this is a real challenge that has no easy textbook solution. 
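A bare-bones filtered-x LMS loop for case A might look like the following sketch (NumPy, with made-up FIRs for the acoustic paths and a hand-picked step size, purely to show the structure):

import numpy as np

# Toy feedforward FxLMS loop (case A). P and S below are made-up FIRs standing
# in for the duct's primary and secondary paths; s_hat is assumed to come from
# offline modelling (here it is simply a copy of S).
rng = np.random.default_rng(1)
fs = 44100
n = 2 * fs

P = rng.normal(size=64) * np.exp(-np.arange(64) / 10.0)   # primary path (toy)
S = rng.normal(size=32) * np.exp(-np.arange(32) / 8.0)    # secondary path (toy)
s_hat = S.copy()                                          # offline estimate of S

x = np.sin(2 * np.pi * 2000 * np.arange(n) / fs)          # reference: 2 kHz tone
d = np.convolve(x, P)[:n]                                 # disturbance at error mic

L = 128
w = np.zeros(L)               # adaptive control filter
mu = 1e-4                     # step size (too large -> divergence)
xbuf = np.zeros(L)            # reference history for the control filter
fxbuf = np.zeros(L)           # filtered-reference history for the update
ybuf = np.zeros(len(S))       # anti-noise history driving the secondary path
xsbuf = np.zeros(len(s_hat))  # reference history for filtering through s_hat
e = np.zeros(n)

for k in range(n):
    xbuf = np.roll(xbuf, 1); xbuf[0] = x[k]
    y = w @ xbuf                                        # anti-noise sent to the speaker
    ybuf = np.roll(ybuf, 1); ybuf[0] = y
    e[k] = d[k] + S @ ybuf                              # error mic: disturbance + anti-noise via S
    xsbuf = np.roll(xsbuf, 1); xsbuf[0] = x[k]
    fxbuf = np.roll(fxbuf, 1); fxbuf[0] = s_hat @ xsbuf  # filtered reference x'(k)
    w -= mu * e[k] * fxbuf                              # FxLMS weight update

# After convergence, np.mean(e[-fs:]**2) should be well below np.mean(d[-fs:]**2).

The only difference from plain LMS is that the reference is filtered through s_hat before it enters the weight update, which is what compensates for the secondary-path response between the filter output and the error signal.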

In case B you have another acoustic path that needs to be matched with an FIR filter so you can cancel the portion of the anti-noise signal that is picked up by the noise-sense mic. Again you need to make sure that this filter is long enough to completely model the path. 
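The extra step in case B can be as small as subtracting the estimated leakage from the reference-mic signal before it enters the control filter; in this sketch f_hat is a hypothetical FIR estimate of the speaker-to-reference-mic (feedback) path, obtained the same way as the secondary-path estimate:

import numpy as np

def clean_reference(ref_sample, y_history, f_hat):
    """Reference-mic sample with the estimated anti-noise feedback removed.

    ref_sample : latest sample from the noise-sense microphone
    y_history  : recent anti-noise output samples, newest first,
                 at least len(f_hat) long
    f_hat      : FIR estimate of the speaker-to-reference-mic path
    """
    return ref_sample - f_hat @ y_history[:len(f_hat)]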

Generally people do not use block-based processing for these systems due to the latency. If you do use block-based processing you need to make sure the latency is less than the shortest acoustic path. 

On top of all this there is the issue of coherence and turbulence. If two signals have low coherence then there is no filter that can turn one into the other. Acoustic paths in real-life environments have a nasty habit of having poor coherence especially if the distances are large. 
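Coherence between the two microphone channels is easy to check offline with SciPy; the two signals below are synthetic placeholders for real recordings from the reference and error mics:

import numpy as np
from scipy.signal import coherence

fs = 44100
rng = np.random.default_rng(2)
x = rng.normal(size=5 * fs)                        # stand-in for the reference mic
e = np.convolve(x, rng.normal(size=64))[:len(x)]   # stand-in for the error mic
e += 0.1 * rng.normal(size=len(e))                 # uncorrelated noise lowers coherence

f, Cxy = coherence(x, e, fs=fs, nperseg=4096)
# Bands where Cxy is near 1 can in principle be cancelled by a linear filter;
# where Cxy drops, no filter can map one signal onto the other, so that sets
# the limit on achievable attenuation in that band.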

Bob
Reply by Bob Masta November 9, 2015
On Sun, 08 Nov 2015 17:09:21 -0600, "charansai"
<110065@DSPRelated> wrote:

> @ Marcel Müller
> Thank you again for that detailed explanation of each question.
> The bandpass filter improved the performance of the analysis. I performed PSD
> on the two microphones, keeping them very close to the speaker. For 50 samples
> both microphone frequency responses are centered around 1.9 kHz instead of
> 2 kHz, whereas if I go for a higher number of samples per frame, like 500,
> the frequencies are centered exactly at 2 kHz.
If you are allowed any choice in the matter, you might consider making the test frequency fall exactly on a spectral line. The line spacing is SampleRate/N, so for your chosen 44100 rate and N=50 the frequency spacing is 882 Hz (0, 882, 1764, 2646...). If you can choose one of these "line-locked" frequencies you eliminate all problems of spectral leakage and don't need a window function.

Also, 44100 Hz may not be the best choice of sample rate, since most sound cards use 48000 Hz these days (or multiples or submultiples). They appear to accept 44100, but typically that causes the input and output to drift apart in time. This may not be a big issue for you, but something to keep in mind.
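A quick NumPy experiment with a 50-sample frame illustrates the point: 882 Hz lands entirely in one bin, while 2000 Hz smears across bins and peaks at a neighbouring bin rather than at 2 kHz, much like the roughly 1.9 kHz peak reported above:

import numpy as np

fs, N = 44100, 50
t = np.arange(N) / fs

for f0 in (882.0, 2000.0):
    X = np.fft.rfft(np.sin(2 * np.pi * f0 * t))
    k = int(np.argmax(np.abs(X)))                    # bin with the most energy
    frac = np.abs(X[k])**2 / np.sum(np.abs(X)**2)    # how concentrated it is
    print(f"{f0:6.0f} Hz -> peak bin {k} ({k * fs / N:.0f} Hz), "
          f"fraction of energy in that bin: {frac:.2f}")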
> I can not post the images here.
> 1) My power spectrum dB values are very low; essential components are located
> around -40 dB to -50 dB, whereas the microphone sensitivity is itself -50 dB
> and the sinusoidal noise level measured using one mobile app is 75 dB!! So,
> my question is, if I amplify the input signal by a factor of 1000 or so, will
> it help? Is it a good thing to do?
> I want to amplify the signal so that it will be relatively comparable to the
> original noise source. (I have tried with an amplification factor of 1000;
> the amplitude of both signals is much better, around 0.1 (peak) instead of
> 0.0001, and the dB values are around -10 dB.)
I'm not clear on exactly what's going on here. You should amplify the signal so the loudest signal peaks are close to the maximum ADC input level.
> 2) How much will the directionality of the microphone affect the amplitude
> and phase of the microphone measurement?
> My microphones are 'Polar Pattern Supercardioid' directional.
> This question is because, interestingly, not only are the phase and amplitude
> of the microphones changing away from the speaker, they are also changing in
> front of the speaker, i.e. if I change the position of the microphones right
> in front of the speaker, I can see amplitude and phase changes!!!
As a general rule, you should avoid all these so-called unidirectional mics, since they have terrible frequency and phase response. Get plain omnidirectional units.
> 3) How bad is it to use just 50 samples per frame in audio applications?
> I am seriously concerned about the loudspeaker; with just 50 samples the
> sound through the loudspeaker is not at all clear, even without any data
> processing!!
Why 50? If you are using FFTs you should pick a power of 2. The more samples, the better the frequency resolution and the lower the noise floor, at the expense of poorer temporal resolution and more processing time.

Best regards,

Bob Masta
DAQARTA v8.00
Data AcQuisition And Real-Time Analysis
www.daqarta.com
Scope, Spectrum, Spectrogram, Sound Level Meter
Frequency Counter, Pitch Track, Pitch-to-MIDI
FREE 8-channel Signal Generator, DaqMusiq generator
Science with your sound card!
Reply by charansai November 8, 2015
Sorry, I cannot edit the post. I just want to remove my second question.

-Charansai