I'm currently trying to get an algorithm to run in real time, but I
am running out of MIPS. Here are the specifics; I'd appreciate any
help.
The algorithm itself takes 1.6 msec to process.
My sample rate is set to 8 kHz.
Frame size = 16 (really I'm reading in 32 stereo samples at a time,
but the Motorola reads the right and left channels at the same time,
so it's the equivalent of 16).
This means that I'm grabbing samples using an interrupt every 2 msec,
processing them, which takes 1.6 msec, and writing them out.
I'm using the low-level Codec Driver API described in Section 6.2.5
of the Embedded SDK Targeting Motorola DSP5685x Platform manual. Of
course they don't specifically say how resource-intensive this API
is, but is it really taking more than 0.4 msec?
I have three questions:
1) Are there any drawbacks to putting my main processing algorithm in
an interrupt? My main function is just a never-ending empty while
loop, and I'm using a CODEC_RX_CALLBACK interrupt service routine to
read, process, and write data. Essentially I've used codec.mcp from
SDK\src\dsp56858evm\nos\applications\bsp\codec as my skeleton.
2) If the codec interface that I'm using, or the way that I'm using
it, is inefficient, could someone please recommend an efficient
method?
3) Does anyone know how many MIPS this processor is capable of? I
know the clock runs at 120 MHz, but I'm unsure of the actual
processing bandwidth. I think my code requires about 80 MIPS, because
a colleague at work has the same program running on a TI 5402 and he
has measured it.
Thank you all in advance,