DSPRelated.com
Forums

FIR Filter: Silence in between

Started by Himanshu Chauhan April 30, 2005
Hello!

I have written an FIR filter (high pass, low pass). Everything is
working as expected except one thing. It's a 100-tap filter, so after
convolution there is a silence of 100 samples in the output data. The
input data is a WAV file (44100 Hz, 16-bit, mono).

The high pass perfectly cuts down the low frequencies, but the silence
(at the beginning of every buffer) remains, which produces clicks in
the final file.

What can be a good way of removing these clicks? Keeping history of
previously processed buffers??

Thanks in advance.

Regards
--Himanshu

On 30 Apr 2005 06:35:44 -0700, Himanshu Chauhan <hs.chauhan@gmail.com> wrote:
This is predictable? It sounds like the algorithm you're using pads the result with zeros. If it's being done by convolution, you'll wind up with (buffer length) + (kernel length) - 1 samples in your output. If your algorithm is otherwise satisfactory, simply ignore the leading samples.
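A quick sketch of that point (my own code, not from the thread): direct convolution of an input block with an FIR kernel yields (buffer length) + (kernel length) - 1 output samples, and the leading (kernel length) - 1 of them are the startup transient caused by the implicit zeros before the block.

```c
#include <stddef.h>

/* Direct convolution: y must have room for xLen + hLen - 1 samples.
 * Samples outside x are treated as zero, which is exactly what
 * produces the leading (and trailing) transient. */
void convolve(const double *x, size_t xLen,
              const double *h, size_t hLen,
              double *y)
{
    for (size_t n = 0; n < xLen + hLen - 1; n++) {
        double acc = 0.0;
        for (size_t k = 0; k < hLen; k++) {
            if (n >= k && n - k < xLen)   /* inside the input block? */
                acc += h[k] * x[n - k];
        }
        y[n] = acc;
    }
}
```

For example, convolving a 3-sample block with a 2-tap kernel gives 3 + 2 - 1 = 4 output samples, the first of which is transient.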
"Himanshu Chauhan" <hs.chauhan@gmail.com> wrote in message 
news:1114868144.095087.138960@l41g2000cwc.googlegroups.com...
Himanshu,

The filter has to fill up with data before the full output can be realized. That means 100 samples. It's the startup transient. There will be a similar transient if the signal input stops.

If you are processing streaming data you shouldn't be thinking "buffers" exactly. You want a continuous stream output - so you may want to consider a continuous stream input. How you actually implement this with buffers, etc. would be with that streaming goal in mind.

A very simple streaming approach does this: process the first 100 samples to get the first output - or, more simply, just start processing and ignore the first 99 output samples. Thereafter, process the 2nd to 101st samples, then the 3rd to 102nd, etc., keeping the outputs from each. There should be no clicks.

Fred
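Fred's sliding-window recipe can be sketched like this (the names are mine, not from the thread): once the window has filled, every input sample past the first numTaps - 1 yields one steady-state output, so there is no per-buffer transient to click.

```c
#include <stddef.h>

/* Slide a numTaps-wide window over the input, emitting one output per
 * position. Writes inLen - numTaps + 1 steady-state outputs and
 * returns that count; the startup transient is simply never emitted. */
size_t slidingFir(const double *in, size_t inLen,
                  const double *taps, size_t numTaps,
                  double *out)
{
    if (inLen < numTaps)
        return 0;                       /* filter never fills up */
    size_t nOut = 0;
    for (size_t n = numTaps - 1; n < inLen; n++) {
        double acc = 0.0;
        for (size_t k = 0; k < numTaps; k++)
            acc += taps[k] * in[n - k]; /* taps[0] hits the newest sample */
        out[nOut++] = acc;
    }
    return nOut;
}
```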
Hi!

>The filter has to fill up with data before the full output can be
>realized. That means 100 samples. It's the startup transient. There
>will be a similar transient if the signal input stops.

That's exactly what I am saying. These 100 samples are zero, and as I
treat all "buffers" as independent units, this startup transient is
added throughout the complete file (once for every chunk of "buffers"
I read from the file).

Fred, I didn't understand the streaming approach. Could you please
elaborate on this? And here is another question of mine: how is
real-time processing done? How do you treat your data?

http://hschauhan.port5.com/files/DigitalFilters.zip

Thanks and regards
--Himanshu
From what I understand, you have a file that you process in chunks.
You get a chunk, process it and save it, get the next, and so on.
Correct?

If so, there's a problem there. When you take your first chunk, say 1000 samples, the first 99 output samples are not important because you have to precharge your filter. That's fine. However, think about the last 99 output samples you get: are they correct? Do you pump in zeros to flush out the filter? Does it make sense to do that? Are you then doing the same as if you had a buffer large enough to contain your complete original input file?

Think about what Fred wrote. If you don't give your filter input a continuous stream exactly the same as your input file, then how do you expect to get a continuous filtered output stream?

QN
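QN's point about chunk boundaries can be made concrete with a sketch (all names here are mine): keep the last numTaps - 1 input samples from the previous chunk as history, so the filter sees one continuous stream and each chunk produces exactly one output per input, with no startup transient after the very first samples.

```c
#include <stddef.h>
#include <string.h>

typedef struct {
    const double *taps;
    size_t numTaps;
    double *history;   /* numTaps - 1 samples, zeroed before the first chunk */
} FirState;

/* Produces exactly chunkLen outputs: no gaps, no clicks at chunk edges. */
void processChunk(FirState *s, const double *in, size_t chunkLen, double *out)
{
    size_t h = s->numTaps - 1;
    for (size_t n = 0; n < chunkLen; n++) {
        double acc = 0.0;
        for (size_t k = 0; k < s->numTaps; k++) {
            if (n >= k)                 /* sample lies in this chunk */
                acc += s->taps[k] * in[n - k];
            else                        /* sample lies in the previous chunk */
                acc += s->taps[k] * s->history[h - (k - n)];
        }
        out[n] = acc;
    }
    /* save the newest numTaps - 1 input samples for the next chunk */
    if (chunkLen >= h) {
        memcpy(s->history, in + chunkLen - h, h * sizeof(double));
    } else {
        memmove(s->history, s->history + chunkLen,
                (h - chunkLen) * sizeof(double));
        memcpy(s->history + h - chunkLen, in, chunkLen * sizeof(double));
    }
}
```

With this, filtering two chunks back to back gives the same samples as filtering the whole file in one pass.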
"Himanshu Chauhan" <hs.chauhan@gmail.com> wrote in message 
news:1114884781.139342.322200@l41g2000cwc.googlegroups.com...
Himanshu,

You can't get rid of the transient. All you can do is ignore it. This represents an input/output delay, and a loss of data if you choose to look at it that way. That is, there is no way to "filter", in the normal sense, 50 samples through a filter that is 100 samples long. That is not to say that you can't filter a burst that is 50 samples long, because you would have legitimate, known zero-valued samples around the burst, so you *can* fill the length-100 filter and filter as intended.

If this is in software, I'm not the best programmer here, so maybe someone else can help. Check out www.dspguru.com and look for circular buffers.

But maybe your implementation isn't in software. So, it might help a lot if you tell us what sort of implementation you're working on. It can make a difference.

Consider this: if you have streaming data, or simply a very long record, the situation can be treated similarly:

1) Apply the filter to 100 samples.
2) Get one more sample (the 101st, which will be the newest) and delete or ignore the oldest (the 1st) sample.
3) Apply the filter to these "new" 100 samples.

A circular buffer is a buffer in which you increment the "starting" address modulo the number of samples in the buffer. Let us say you have a 100-sample buffer. Fill the buffer from locations 0 to 99 (or 1 to 100 if you like to address that way). The oldest sample will be in position 99 and the newest in position 0. Operate the filter on these samples by aligning the coefficients accordingly. Now, write the newest sample into position 99 and point the filter at locations 99, 0, 1, 2, ..., 98. Continue in this circular manner forever.

With a circular buffer you have to write the newest data into the buffer. With a contiguous large block of data you don't write the data at all; it's already there. Or perhaps you write it into the large block in real time. With a circular buffer, you refer to the "beginning" of the data by incrementing the address in a circular / modulo manner. With a large block, you refer to the "beginning" of the data by simply incrementing the address.

I hope you can see that if the data set is going to be larger than the "large block" then you end up resorting to a circular buffer implementation - although the buffer may be very large indeed. This boils down to:

1) Use a circular buffer that is exactly the length of the filter and update its contents every sample.

2) Use a circular buffer that is larger than the length of the filter and update its contents as you will, perhaps in chunks, so long as you stay ahead of the block that the filter is operating on.

3) Use a large block of memory to hold the data and assure that the data won't overflow the buffer. Expect to stop processing data after the buffer full of data has been processed.

Sometimes in real implementations it is handy to "double buffer", so that one buffer is being written while the other is being processed. This is typical in video, where there are frame buffers.

If you are working in Scilab or Matlab with *conv* then I can understand your confusion. Each convolution will have a transient segment. You might think of the time-domain filtering process like this:

%Program to demonstrate filter response
%Filter length is 10
%Input is a gate of length 19, so there will be ten stable samples out.
%Output is 1 zero + 9 transient + 10 stable + 9 transient + 1 zero
%= 30 samples
N=10;
data(1:40)=0;      %Initially the filter is empty and the output is zero.
data(11:29)=1;     %The gate input
y(1:19,1:30)=0;    %Each column is a single convolution;
                   %each row, a successive one.
filter=[1 2 3 4 5 5 4 3 2 1];  %User-determined filter of length 10
for i=1:30         %i is the input sample number / time index
    x(1:10)=data(i:9+i);
    y(1:19,i)=conv(filter,x)'; %Convolving a length-10 filter with
end                            %10 samples gives 19 samples
plot(1:30,y(10,1:30))
axis tight
oops, it got sent too soon:

"Fred Marshall" <fmarshallx@remove_the_x.acm.org> wrote in message 
news:DJ6dna4ayL40iunfRVn-uQ@centurytel.net...
So, while it's correct that the output of a filter is the convolution of the filter's unit sample response with the input, that's only if you have the entire input. This demo successively calculates the convolution of a buffer limited to the length of the filter. So, y(10,:) is the output that one would see as new samples propagate through the filter.

It really obscures the point to use "conv". Instead of using "conv" it's much better to use the kernel of conv, which represents the output of the filter at each sample instant:

         N
y(n) =  sum  a(k)*x(n-k)
        k=1

This is the same as y(10,:) in the program above. It's pretty clear from this that the filter is only filled with data when all of the "x" terms are nonzero. If x(r) for r <= 0 is zero and x(1) is the first nonzero sample, then the filter is first filled only when n > N, so that the oldest term x(n-N) reaches x(1).

I don't know if this helps or not...

Fred
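The kernel sum y(n) = sum over k of a(k)*x(n-k) can be written out literally in C (0-based indices instead of Fred's 1-based ones; this is just the textbook direct form, not code from the thread):

```c
#include <stddef.h>

/* One output sample of an N-tap FIR filter: the caller guarantees
 * n >= N - 1, i.e. the filter is already filled with data. Taps
 * a[0..N-1] multiply x[n], x[n-1], ..., x[n-N+1]. */
double firOutput(const double *a, size_t N, const double *x, size_t n)
{
    double y = 0.0;
    for (size_t k = 0; k < N; k++)
        y += a[k] * x[n - k];
    return y;
}
```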
Hello Fred!

>I don't know if this helps or not...

It helped me a lot! Thanks a lot! I now understand exactly what you
meant.

>But, maybe your implementation isn't in software. So, it might help a
>lot if you tell us what sort of implementation you're working on. It
>can make a difference.

It's in software. How can it make a difference? Anyway, I read your above post and implemented the convolution using a circular buffer as the delay line (the delay line itself is now circular), and the output is really smooth! Believe me, I jumped out of my chair! :-) Below is what I have done. It's based on DSPGuru's circular convolve.

/*=================================================================
 * Function    : CircularConvolve
 *
 * Parameters  : inputSample   -> current sample in the input buffer.
 *               num_taps      -> number of taps in the filter.
 *               filter_coeffs -> filter coefficients.
 *               delay_line    -> pointer to the delay line.
 *               filter_state  -> saved state of the filter.
 *
 * By          : Himanshu Chauhan
 *==================================================================*/
BUFFER_T CircularConvolve(BUFFER_T inputSample, int num_taps,
                          PBUFFER_T filter_coeffs, PBUFFER_T delay_line,
                          int *filter_state)
{
    int kernel_counter, state;
    BUFFER_T accum;

    state = *filter_state;  /* copy the current state of the buffer to a local */

    delay_line[state] = inputSample;  /* write the input sample over the
                                         oldest slot in the delay line */

    if (++state >= num_taps)  /* if we are at the end, wrap around */
        state = 0;

    accum = 0;

    /* convolve the complete delay line with the filter coefficients */
    for (kernel_counter = num_taps - 1; kernel_counter >= 0; kernel_counter--) {
        accum += filter_coeffs[kernel_counter] * delay_line[state];
        if (++state >= num_taps)
            state = 0;
    }

    *filter_state = state;  /* save the filter state for the caller */

    return accum;
}

I am beautifying the code a bit and adding some interface to it. It's working absolutely fine for me.

Fred, I am very new to this field, so I have little or, to be honest, no experience with Matlab. I would love to learn Matlab, but can you tell me how it can make my job easier?

Thanks and regards
--Himanshu
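For what it's worth, here is how the routine above can be driven sample by sample (a sketch: I'm assuming BUFFER_T is double, which the post doesn't specify, and the driver names are mine; the function body is restated so the example is self-contained).

```c
typedef double BUFFER_T;   /* assumption: the post doesn't define it */
typedef double *PBUFFER_T;

/* CircularConvolve, as posted */
BUFFER_T CircularConvolve(BUFFER_T inputSample, int num_taps,
                          PBUFFER_T filter_coeffs, PBUFFER_T delay_line,
                          int *filter_state)
{
    int kernel_counter, state = *filter_state;
    BUFFER_T accum = 0;

    delay_line[state] = inputSample;   /* overwrite the oldest sample */
    if (++state >= num_taps)
        state = 0;

    for (kernel_counter = num_taps - 1; kernel_counter >= 0; kernel_counter--) {
        accum += filter_coeffs[kernel_counter] * delay_line[state];
        if (++state >= num_taps)
            state = 0;
    }
    *filter_state = state;
    return accum;
}

/* Driver: feed a stream one sample at a time. Because delay_line and
 * filter_state persist across calls, chunk boundaries cause no clicks. */
void filterStream(const double *in, int n, double *taps, int num_taps,
                  double *delay_line, int *filter_state, double *out)
{
    for (int i = 0; i < n; i++)
        out[i] = CircularConvolve(in[i], num_taps, taps,
                                  delay_line, filter_state);
}
```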
"Himanshu Chauhan" <hs.chauhan@gmail.com> wrote in message 
news:1114939915.463298.77370@o13g2000cwo.googlegroups.com...
Himanshu,

Great! Glad to be of help.

Regarding Scilab or Matlab: you might think of them as programming languages that have been written with engineering, math, and science in mind. Then, as is now common with languages, they come with lots of tools that make doing things even easier. It depends on what your job is.

My experience in the electronics industry over the last 10 years was that these tools weren't seen all that much, and I placed little value in them. I see them in the universities quite often, though. I think that's because in industry there was a lot more work to do in implementation and much less in analysis.

Scilab and Matlab are great for creating code quickly that can later be translated to another language environment if necessary (I'm not suggesting automatic translation, because I'm not familiar with whether that's an option and, if it works, how well). If you are doing much analysis then the tools can be pretty compelling. I'm seeing recent advertising that suggests going direct from Matlab to FPGA - no experience there... I also see more and more references to Simulink, which is a simulation capability integral to Matlab.

I'm sure that some folks would suggest that programming in C or C++ or Python or whatever is just as good or better. I have no strong opinion in that regard.

Fred
Hi Fred!

Frankly speaking, I also don't want to put my efforts into learning
Matlab, at least for the time being.

After getting the LP and HP filters to work, I was thinking about a
bass-boost effect. Could you please point out the problem with my
approach? What I am doing is this: I filter out all the bass
frequencies (up to 200 Hz) using the low-pass filter. Then I add this
to the original signal. I noticed a little improvement in the bass,
but it introduces some noise as well. If I choose to add the filtered
signal to the original signal twice or thrice, the output is horribly
noisy!

Maybe I am on a totally wrong track. Am I?

Thanks and regards
--Himanshu
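One thing worth checking (my guess, not something diagnosed in the thread): summing the low-passed signal into 16-bit samples two or three times can exceed the int16 range and wrap around, which sounds like harsh noise. A sketch of the described boost with saturating (clamped) addition instead of wraparound; all names here are mine:

```c
#include <stdint.h>

/* Clamp a 32-bit sum back into the 16-bit sample range. */
static int16_t saturate16(int32_t v)
{
    if (v > INT16_MAX) return INT16_MAX;
    if (v < INT16_MIN) return INT16_MIN;
    return (int16_t)v;
}

/* out[i] = original + gain * lowpassed, clamped to 16 bits, so loud
 * bass saturates audibly instead of wrapping to the opposite sign. */
void bassBoost(const int16_t *orig, const int16_t *lowpassed,
               int n, int gain, int16_t *out)
{
    for (int i = 0; i < n; i++)
        out[i] = saturate16((int32_t)orig[i]
                            + (int32_t)gain * lowpassed[i]);
}
```

Scaling the mix down (or using floating-point internally) avoids the clipping altogether.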
