hello all, I have implemented a 16-bit fixed-point NLMS-based algorithm in C (simulated on a PC). While it seems to work great for vocal audio input, I seem to be running into problems when I use music (guitar and bass) inputs. Are there any issues I need to consider, and what would the differences be? I figured music would have a higher eigenvalue spread, so I adjusted the step size accordingly (I think...). Thanks in advance, navin