Hello:
I am processing a sinusoidal signal that has an on time of about 100ms every 1000ms - a 10% duty cycle.
The capture interrupt fills a 512-sample buffer every second, and the sample rate is above the Nyquist rate. After passing through an FIR window, the signal goes through a 512-point CFFT (CMSIS libraries).
From the results, it's clear that the signal is being missed at times (at capture edges) due to the controlled interval of capture and the random arrival of the signal.
From what I have read, it seems the use of an Overlap-Save buffer would be of benefit to ensure the signal is captured within subsequent buffer frames.
The attached image has been clipped from an FFT paper by Professor Deepa Kundur (U of T) that describes this method. My question is: given the signal rates and capture buffers, what size should M-1 be?
Thank you for helping on this.
Groger, you do not say what you consider "Nyquist rate" to be in this context. Given that it could be just about anything from 10Hz to light, I think you need to name a figure.
Assuming square pulses, in order to get enough information to know that a pulse has happened, you need to sample with an interval that's strictly less than the shortest pulse time (to guarantee hitting the pulse at least once), and you need to sample in windows that are at least contiguous (to guarantee that you're not losing a pulse between windows). With your 100 ms pulse, that means a sample interval under 100 ms and capture windows that run back-to-back with no dead time between them.
Note that the above paragraph doesn't mention the FFT, Nyquist, any frequency-domain terms or anything -- it's just a common-sense, time-domain statement of what you need to do to succeed.
What are you trying to actually do? Telling us your current sampling rate, what you're trying to find out about the pulses, and what you think the Nyquist rate is in this case and why, will all help people help you.
I apologize, but I don't really understand the question. My familiarity with overlap-save is using it to do convolution in the time domain, in real time, on buffered data. The idea is to convolve your buffer with your FIR taps and "save" the edge effects. You then convolve the next buffer and sum the left-edge effects from the new buffer with the right-edge effects saved from the old buffer to get the continuous result. In that case, M would be equal to the number of filter taps.
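Roughly, that save-and-sum of edge effects looks something like the C sketch below. This is only an illustration of the idea, not anything from your code: the block length L, tap count M, and the names conv_full() and process_block() are made up for the example.

#define L 32                     /* samples per input block (example value)      */
#define M 64                     /* number of FIR taps (example value)           */

static float tail[M - 1];        /* right-edge effects saved from earlier blocks */

/* naive full linear convolution: out has length xlen + hlen - 1 */
static void conv_full(const float *x, int xlen, const float *h, int hlen, float *out)
{
    for (int n = 0; n < xlen + hlen - 1; n++) {
        float acc = 0.0f;
        for (int k = 0; k < hlen; k++)
            if (n - k >= 0 && n - k < xlen)
                acc += h[k] * x[n - k];
        out[n] = acc;
    }
}

/* filter one block of L input samples; y receives L continuous output samples */
void process_block(const float *x, const float *h, float *y)
{
    float full[L + M - 1];
    conv_full(x, L, h, M, full);

    /* left-edge effects of this block + saved right-edge effects of older blocks */
    for (int n = 0; n < L; n++)
        y[n] = full[n] + (n < M - 1 ? tail[n] : 0.0f);

    /* save this block's right-edge effects (plus any leftover carry) for later */
    for (int n = 0; n < M - 1; n++)
        tail[n] = full[L + n] + (L + n < M - 1 ? tail[L + n] : 0.0f);
}

Calling process_block() once per block, in order, produces the same output as filtering the whole stream at once.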
Now to take a stab at answering your question...
It sounds like you know the frequency of your sinusoid, and the problem is that your 512 samples occur too fast to capture it reliably given the 10% duty cycle. Could you not slow down your sample rate to cover a sufficient amount of the 1000 ms and then shift the CFFT results to account for the aliasing? It seems to me that you need to cover >90% of the 1000 ms with your 512 samples, so your sample rate would have to be less than (512/0.9) Hz, or about 569 Hz.
Hi, thanks for your answer. Can I ask you to go a bit further?
I am using the CMSIS libraries. I have a buffer with the data in it, ADC_Samples, and an output buffer. The filter has 64 taps, and the block size is 32 (uint32).
Here's the initialization routine.
arm_fir_init_f32(&S, FILTER_TAP_NUM, (float32_t *)&filter_taps[0], &firStateF32[0], BLOCK_SIZE);
Here is the execution of the FIR:
for( i = 0; i < NUM_BLOCKS; i++ ) {
    arm_fir_f32(&S, &ADC_Samples[0] + (i * BLOCK_SIZE), &outputBuffer[0] + (i * BLOCK_SIZE), BLOCK_SIZE);
}
The result is in outputBuffer. After this, I pass it to the CFFT routine and get bin data. Based on your description, since the overlap-save would be done continuously, is the overlapped data passed into the CFFT on each update?
Based on this, can you please provide a pseudo-code description of how I would implement the overlap buffer?
Sorry, I didn't take those classes, just years of electronics and embedded-system programming. As I said, it's functioning, but I know it could function better with some work.
Really appreciate the help, thank you!
If you have a sample block of length L convolved with a 64-tap filter, the output is of length L + 64 - 1 and the overlap should be 63. But if you are trying to do this in real time, there is a limit (memory) to how long your total output can be. This is a problem when using a single microcontroller, as you have to ignore incoming data from time to time to do the computations: Input > Process > Output > Input, etc. To get continuous processing of real-time data you will need multiple processors, and you will have to figure out how they sync so that you know which data blocks are being processed by each processor. If you don't need to do it in real time, the job is easier.
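If it helps, here is a rough sketch in C of carrying that 63-sample overlap from one capture frame into the next before the FIR/CFFT stage. It is only an illustration under assumptions, not drop-in code: FRAME_LEN, process_frame(), new_samples, and the work buffer are made-up names, and you would still run your existing arm_fir_f32 / CFFT calls over the work buffer.

#include <string.h>      /* memcpy */
#include "arm_math.h"    /* float32_t */

#define FRAME_LEN  512               /* new samples captured per frame             */
#define NUM_TAPS   64                /* FIR length                                 */
#define OVERLAP    (NUM_TAPS - 1)    /* 63 samples carried across frame boundaries */

static float32_t work[OVERLAP + FRAME_LEN];   /* previous tail + new frame */

void process_frame(const float32_t *new_samples)   /* FRAME_LEN new samples */
{
    /* work[0..OVERLAP-1] already holds the tail of the previous frame
       (zeros on the very first call); append the new capture behind it */
    memcpy(&work[OVERLAP], new_samples, FRAME_LEN * sizeof(float32_t));

    /* run the FIR / CFFT over 'work' here; in overlap-save the first
       OVERLAP outputs belong to the previous frame's edge and are discarded */

    /* keep the last OVERLAP input samples so the next frame starts with
       the history the filter needs at its edge */
    memcpy(work, &new_samples[FRAME_LEN - OVERLAP], OVERLAP * sizeof(float32_t));
}

Note that OVERLAP + FRAME_LEN = 575 is no longer a power of two; if you want to keep a 512-point CFFT you would instead capture 512 - 63 = 449 new samples per frame so the total block stays at 512.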
Hi groger, let me ping a few users who might potentially be able to help you:
@Tim Wescott, @Rick Lyons, @mamcdonal, @AllenDowney, @Cedron
I am adding this to my first reply of February 25. You did not say what your sample rate is, but if you are dealing with audio frequencies you need a pretty powerful processor of 100 MHz or more. I just discovered that NXP has released a Cortex-M4 microcontroller and an inexpensive evaluation board that is designed to serve as a DSP. Since it is not a dedicated DSP you will have to add your own peripherals -- SAR ADC, FIR filter, etc.