Hello,

I have a data vector/time series d = s + n, where s is signal and n is noise. n is somewhat localized in frequency, with two distinct peaks. (In actuality, of course, there is also white noise, but that's a different subject.) I have around 1 second of reference noise (compared to 12 seconds of data) that was recorded before the onset of the signal. Call this reference n'.

What I am doing is designing a filter f = (f1, f2, ..., fm), where m is less than or equal to the length of the reference n', to "kill" the noise by the following approach. Let N be the matrix corresponding to transient convolution by n', or a portion of it, and let f be a column vector. Then we can solve the least-squares problem Nf = 0 and use the result to filter our original data vector. Obviously I had to set f1 = some constant in order to avoid the trivial "solution" f-hat = (0, 0, ..., 0), and do the requisite algebra. Since the resulting normal equations are Toeplitz, we can then just use the Levinson-Durbin algorithm to solve for f-hat.

I am right now in the process of debugging my C program, and then it will take an unfortunate length of time to determine how well it worked (i.e., how much noise was removed and, more importantly, how much signal was removed).

What I would like to know is whether anyone has tried this sort of approach before. Any tips or tricks you can recommend? Are there any obvious pitfalls I am missing?

Thanks for your help.
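[Editor's sketch: with f1 fixed to 1, the least-squares problem Nf = 0 above is exactly linear prediction on the reference noise, so the filter falls out of running Levinson-Durbin on the autocorrelation of n'. The toy sinusoidal "reference", the sizes, and the helper names below are illustrative assumptions, not the poster's actual program.]

    /* Sketch: estimate a prediction-error ("noise-killing") FIR filter
     * from a reference noise segment via Levinson-Durbin, then apply it
     * to the data.  Toy setup: the narrowband noise is a single sinusoid. */
    #include <math.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Autocorrelation r[0..m] of x[0..n-1] (transient, unnormalized). */
    static void autocorr(const double *x, int n, int m, double *r)
    {
        for (int lag = 0; lag <= m; lag++) {
            double s = 0.0;
            for (int i = 0; i + lag < n; i++)
                s += x[i] * x[i + lag];
            r[lag] = s;
        }
    }

    /* Levinson-Durbin: solve the Toeplitz normal equations for the
     * order-m prediction-error filter a[0..m] with a[0] = 1;
     * returns the final prediction-error power. */
    static double levinson(const double *r, int m, double *a)
    {
        double *tmp = malloc((m + 1) * sizeof *tmp);
        double e = r[0];
        a[0] = 1.0;
        for (int i = 1; i <= m; i++) {
            double acc = r[i];
            for (int j = 1; j < i; j++)
                acc += a[j] * r[i - j];
            double k = -acc / e;            /* reflection coefficient */
            for (int j = 1; j < i; j++)
                tmp[j] = a[j] + k * a[i - j];
            for (int j = 1; j < i; j++)
                a[j] = tmp[j];
            a[i] = k;
            e *= 1.0 - k * k;
        }
        free(tmp);
        return e;
    }

    /* Transient FIR convolution y = a * x. */
    static void fir(const double *x, int n, const double *a, int m, double *y)
    {
        for (int i = 0; i < n; i++) {
            double s = 0.0;
            for (int j = 0; j <= m && j <= i; j++)
                s += a[j] * x[i - j];
            y[i] = s;
        }
    }

    int main(void)
    {
        enum { NREF = 200, NDATA = 400, M = 2 };
        const double PI = 3.14159265358979323846;
        double ref[NREF], d[NDATA], y[NDATA], r[M + 1], a[M + 1];
        double w = 2.0 * PI * 0.1;  /* one narrowband noise peak at 0.1 fs */

        for (int i = 0; i < NREF; i++)
            ref[i] = sin(w * i);            /* reference noise n' */
        for (int i = 0; i < NDATA; i++)
            d[i] = sin(w * i);              /* data = pure noise here */

        autocorr(ref, NREF, M, r);
        levinson(r, M, a);                  /* a is the "killing" filter f */
        fir(d, NDATA, a, M, y);

        /* Measure how much of the narrowband noise survives. */
        double pin = 0.0, pout = 0.0;
        for (int i = M; i < NDATA; i++) {
            pin += d[i] * d[i];
            pout += y[i] * y[i];
        }
        printf("rms out / rms in = %g\n", sqrt(pout / pin));
        return 0;
    }

An order-2 prediction-error filter nearly nulls a single sinusoid, so the printed ratio should be far below 1; with two noise peaks you would need at least order 4.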
Noise cancelling approach
Started by alexryu ● December 1, 2008
Reply by John ● December 1, 2008
On Dec 1, 1:42 pm, "alexryu" <ryu.a...@gmail.com> wrote:
> I have a data vector/time-series d = s + n (where s is signal and n is
> noise). [...snip...] What I would like to know is whether or not anyone
> has tried this sort of approach before? Any tips or tricks you can
> recommend? Are there any obvious pitfalls I am missing?

I think you're describing a Wiener filter. There's a description here:

http://en.wikipedia.org/wiki/Wiener_filter

In Matlab, the steps are simple and can help validate your C implementation:

    [dummy,R] = corrmtx(sig,Neq-1);
    P = xcorr(sig,ref,(Neq-1)/2);
    W = R\P;
    zeq = filter(W,1,sig);

Good luck,
John
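[Editor's sketch: a C analogue of the Wiener steps in John's Matlab snippet — build the reference autocorrelation (Gram) matrix R and the data/reference cross-correlation vector p, solve R w = p (plain Gaussian elimination standing in for Matlab's R\P), and subtract the filtered reference from the data. The toy two-peak reference, the unknown 2-tap coupling filter, and all sizes are assumptions for illustration.]

    #include <math.h>
    #include <stdio.h>

    #define N   512   /* samples */
    #define NEQ 4     /* filter taps */

    /* Solve the NEQ x NEQ system A w = b by Gaussian elimination with
     * partial pivoting (a stand-in for Matlab's backslash). */
    static void solve(double A[NEQ][NEQ], double b[NEQ], double w[NEQ])
    {
        for (int c = 0; c < NEQ; c++) {
            int piv = c;
            for (int r = c + 1; r < NEQ; r++)
                if (fabs(A[r][c]) > fabs(A[piv][c])) piv = r;
            for (int k = 0; k < NEQ; k++) {
                double t = A[c][k]; A[c][k] = A[piv][k]; A[piv][k] = t;
            }
            { double t = b[c]; b[c] = b[piv]; b[piv] = t; }
            for (int r = c + 1; r < NEQ; r++) {
                double f = A[r][c] / A[c][c];
                for (int k = c; k < NEQ; k++) A[r][k] -= f * A[c][k];
                b[r] -= f * b[c];
            }
        }
        for (int r = NEQ - 1; r >= 0; r--) {
            double s = b[r];
            for (int k = r + 1; k < NEQ; k++) s -= A[r][k] * w[k];
            w[r] = s / A[r][r];
        }
    }

    int main(void)
    {
        const double PI = 3.14159265358979323846;
        double ref[N], d[N], w[NEQ], A[NEQ][NEQ], b[NEQ];

        /* Toy setup: two-peak reference noise; the noise in the data is
         * the reference passed through an unknown 2-tap filter, and the
         * "signal" is a lower-frequency sinusoid. */
        for (int i = 0; i < N; i++)
            ref[i] = sin(2 * PI * 0.23 * i) + 0.7 * sin(2 * PI * 0.31 * i);
        for (int i = 0; i < N; i++) {
            double noise = 0.8 * ref[i] + (i > 0 ? -0.3 * ref[i - 1] : 0.0);
            d[i] = 0.5 * sin(2 * PI * 0.05 * i) + noise;
        }

        /* Correlation estimates (cf. corrmtx / xcorr in the reply). */
        for (int j = 0; j < NEQ; j++) {
            for (int k = 0; k < NEQ; k++) {
                double s = 0.0;
                for (int i = NEQ; i < N; i++) s += ref[i - j] * ref[i - k];
                A[j][k] = s;
            }
            double s = 0.0;
            for (int i = NEQ; i < N; i++) s += d[i] * ref[i - j];
            b[j] = s;
        }

        solve(A, b, w);

        /* Noise estimate = w applied to ref; subtracting it from the
         * data should leave (approximately) the signal. */
        double perr = 0.0, pnoise = 0.0;
        for (int i = NEQ; i < N; i++) {
            double nhat = 0.0;
            for (int j = 0; j < NEQ; j++) nhat += w[j] * ref[i - j];
            double resid = d[i] - nhat;                  /* ~ signal */
            double strue = 0.5 * sin(2 * PI * 0.05 * i);
            perr   += (resid - strue) * (resid - strue);
            pnoise += (d[i] - strue) * (d[i] - strue);
        }
        printf("residual noise fraction = %g\n", sqrt(perr / pnoise));
        return 0;
    }

Note this assumes a reference that stays correlated with the in-band noise during the data record; with only 1 second of pre-onset reference, that stationarity assumption is the main thing to check.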