Lars,
I don't have time to look it up, but IIRC (and I may not, it has been
quite a while) the lattice implementation using reflection coefficients
allowed one to adjust the filter gradually within a block by
interpolating between the current and next set of reflection
coefficients as you progress through the block. Interpolation may have
been done at pitch intervals. Look up 'MPE-LPC' and see if you find this
explained. More common IIR implementations do not allow this gradual
progression of the filter (can't interpolate the coefficients, for
example, and be guaranteed stability), and as you say the internal
state would not commonly transfer from one block to the next where the
filter is updated on a block by block basis.
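Sketching that idea in code (an illustrative Python lattice, not the
MPE-LPC implementation recalled above): an all-pole lattice driven by
reflection coefficients is stable in the frozen-coefficient sense
whenever every |k_i| < 1, and a linear (convex) interpolation between
two such coefficient sets keeps that bound at every sample, which is
what makes gradual within-block adjustment safe. Function name and
conventions here are mine, not from the thread:

```python
import numpy as np

def lattice_synth(excitation, k_start, k_end):
    """All-pole lattice synthesis; reflection coefficients are linearly
    interpolated from k_start to k_end across the block (illustrative)."""
    k_start = np.asarray(k_start, dtype=float)
    k_end = np.asarray(k_end, dtype=float)
    M = len(k_start)
    b = np.zeros(M)                           # b[i] holds b_i(n-1)
    N = len(excitation)
    y = np.zeros(N)
    for n, e in enumerate(excitation):
        t = n / max(N - 1, 1)
        k = (1.0 - t) * k_start + t * k_end   # convex mix keeps |k_i| < 1
        f = e
        for s in range(M, 0, -1):             # top stage down to stage 1
            f = f + k[s - 1] * b[s - 1]       # f_{s-1} = f_s + k_s * b_{s-1}
            if s < M:
                b[s] = b[s - 1] - k[s - 1] * f
        b[0] = f                              # b_0(n) = f_0(n) = y(n)
        y[n] = f
    return y
```

With constant coefficients (k_start == k_end) this reduces to an
ordinary all-pole filter; for order 1 and k = 0.5 the impulse response
is simply 0.5^n, which is a handy sanity check.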
I would not expect either of the filtering methods you have used to
sound good on reasonable quality audio, since you would get periodic
(annoying) errors at the block interval. When you say you got
output=input with noise-free speech, what criteria did you use to come
to this conclusion? I would guess it must have been qualitative rather
than quantitative; subtracting output from input, I would not expect
the difference to be close to zero. Consider the case where you don't
change the filter taps at all, but call the filter uninitialized
(zeroed internally) for each block, inputting K samples and outputting
only K samples: the output signal at the block edges would not be
correct. This last statement would hold even if the filter were an FIR
constant-coefficient filter.
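That last point is easy to check numerically. A small Python sketch
(scipy's lfilter plays the role of MATLAB's filter here; the tap values
and block size are made up): zeroing the state for every block corrupts
the first samples of each block, while carrying the final conditions
forward reproduces the one-pass result exactly, even though the FIR
taps never change.

```python
import numpy as np
from scipy.signal import lfilter

b = [0.5, 0.5]                       # constant 2-tap FIR (hypothetical)
x = np.arange(32, dtype=float)
ref = lfilter(b, 1.0, x)             # one pass over the whole signal

blocks = x.reshape(4, 8)

# (a) zero the internal state for every block
naive = np.concatenate([lfilter(b, 1.0, blk) for blk in blocks])

# (b) carry the final conditions of one block into the next
zi = np.zeros(1)
carried = []
for blk in blocks:
    y, zi = lfilter(b, 1.0, blk, zi=zi)
    carried.append(y)
carried = np.concatenate(carried)
```

Only variant (b) matches the one-pass reference; variant (a) is wrong
at the first sample of every block after the first.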
Hope this helps some,
Dirk
Reply by Lars Hansen ● October 25, 2005
> What type of signal are you filtering?
A noise-degraded speech signal x = s + n, where s is clean speech and n
is colored Gaussian noise.
> How important is continuity?
I don't know to be honest. Maybe you can tell me why you ask that particular
question?
> How/why are the coefficients changed from one block to the next?
One filter (a whitening FIR filter) receives its coefficients from
LPC analysis based on the short-term autocorrelation of a 256-sample
frame.
The other filter (a vocal-tract IIR filter) receives its coefficients
from LPC analysis based on the modified power spectrum of the same
256-sample frame.
> Do you piece the output blocks together?
Yes.
>How?
If the output blocks are B1,B2,B3,.....,Bn then the estimated speech signal
is:
B1B2B3B4B5......Bn
> Does K samples in produce more than K samples out (from zero-padding
> the end, for example)?
No. K samples in results in K samples out.
Here is a short overview of my algorithm:
1. Send 16 samples (a block) through a pre-emphasis filter P(z). The
output is stored in a cyclic buffer of length 256.
2. Update the frame (length 256) based on the contents of the cyclic
buffer (*)
3. Use the autocorrelation of the frame to calculate LPC coefficients
and LPC gain (Levinson-Durbin).
4. Send the output from P(z) through a whitening FIR filter A(z) to
obtain the residual signal. A(z)'s coefficients are the LPC
coefficients (numerator) and LPC gain (denominator).
5. Calculate the power spectrum of the frame and do some manipulation
with it to obtain a less noisy spectrum. Convert the spectrum to an
autocorrelation, and use that autocorrelation to calculate LPC
coefficients and LPC gain.
6. Send the residual signal through a "vocal-tract" filter B(z) to
obtain the estimated speech signal. B(z) is an IIR filter whose
coefficients are the LPC coefficients (denominator) and LPC gain from
item 5.
7. De-emphasize the estimated speech signal.
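Step 3 (and the second LPC computation in step 5) can be sketched with
the standard Levinson-Durbin recursion. This is an illustrative Python
version, not the code from the thread; the function name and the
predictor-sign convention are mine:

```python
import numpy as np

def levinson_durbin(r, order):
    """Levinson-Durbin recursion: autocorrelation r[0..order] ->
    predictor coefficients a (x[n] ~ sum_j a[j]*x[n-1-j]) and the
    final prediction-error power."""
    a = np.zeros(order)
    err = r[0]
    for i in range(1, order + 1):
        # partial correlation of lag i against the current predictor
        acc = r[i] - np.dot(a[:i - 1], r[i - 1:0:-1])
        k = acc / err                     # reflection coefficient
        a_prev = a[:i - 1].copy()
        a[:i - 1] = a_prev - k * a_prev[::-1]
        a[i - 1] = k
        err *= (1.0 - k * k)              # error power shrinks each order
    return a, err
```

A quick sanity check: the exact autocorrelation shape of an AR(1)
process with pole 0.9 is r[m] proportional to 0.9^m, and the recursion
should recover the predictor [0.9, 0] at order 2.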
In general, I implemented the filters as:
[output,final_conditions]=filter(numerator,denominator,signal,initial_conditions);
initial_conditions=final_conditions;
But the estimated speech sounded horrible until I replaced the above 2
commands with:
output=filter(numerator,denominator,signal);
In one of the tests I made I tried to use a noise-free speech signal as
input to the algorithm. The algorithm should then output a copy of the
input, but I could _only_ get output=input if I didn't use initial and final
conditions in the above described filters.
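For what it's worth, the output=input observation is reproducible when
each block is filtered with zero initial conditions and A(z) and B(z)
share the same coefficients within a block (as one would expect for
noise-free speech, where the spectral manipulation in step 5 should
change little): the zero-state FIR and the zero-state inverse IIR are
exact inverses of each other on every block, even when the coefficients
change from block to block. A small Python experiment, with made-up
per-block coefficients and scipy's lfilter standing in for MATLAB's
filter:

```python
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(0)
x = rng.standard_normal(64)

recon = []
for i, blk in enumerate(x.reshape(4, 16)):
    a = np.array([1.0, -0.5 - 0.1 * i])     # per-block "LPC" polynomial
    resid = lfilter(a, [1.0], blk)          # whitening FIR A(z), zero state
    recon.append(lfilter([1.0], a, resid))  # inverse IIR 1/A(z), zero state
recon = np.concatenate(recon)
```

Within each block the two zero-state filters realize the same
triangular convolution operator and its inverse, so the block is
recovered exactly; carrying a state computed under the previous block's
coefficients would break that exact cancellation.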
I hope that you can help me see what I am doing wrong or right :o) Thank
you....
Reply by dbell ● October 25, 2005
Lars,
What type of signal are you filtering?
How important is continuity?
How/why are the coefficients changed from one block to the next?
Do you piece the output blocks together? How? Does K samples in produce
more than K samples out (from zero-padding the end, for example)?
Dirk
Reply by Lars Hansen ● October 25, 2005
Hello,
I have a question about filter implementation.
If I have a sequence x[n] and divide it into blocks of M samples per block
and do the following:
1) Update filter-coefficients
2) Filter the kth block
3) k=k+1
4) Go to 1
How should the filter be implemented?
In matlab I have 2 options:
Either I implement the filter like this:
[output,final_conditions]=filter(numerator_coefficients,
denominator_coefficients, input_block, initial_conditions);
initial_conditions=final_conditions;
or
output=filter(numerator_coefficients, denominator_coefficients, input_block)
I would say that it doesn't really matter whether I use initial/final
conditions, because the validity of the initial and final conditions
relies on the filter coefficients being constant, not time-varying. So
since the filter coefficients are updated for each block, why would I
care about the initial and final conditions? Using initial and final
conditions only makes sense in the case where the filter coefficients
are constant. In that case it will ensure that I avoid transients,
right?
Or am I mistaken?
Thanks :o)
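As a concrete check of whether the choice matters when the coefficients
change per block, here is a small Python experiment (scipy's lfilter
mirrors MATLAB's filter, including the zi/zf state arguments; the
coefficients are made up). The two options agree on the first block,
where the carried state is still zero, and diverge from the second
block on, so the choice is not a no-op even with time-varying
coefficients:

```python
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(1)
blocks = rng.standard_normal(32).reshape(2, 16)
denoms = [np.array([1.0, -0.5]), np.array([1.0, -0.8])]  # per-block a(z)

# Option 1: carry the final conditions into the next block
zi = np.zeros(1)
carried = []
for blk, a in zip(blocks, denoms):
    out, zi = lfilter([1.0], a, blk, zi=zi)
    carried.append(out)
carried = np.concatenate(carried)

# Option 2: zero initial conditions for every block
reset = np.concatenate(
    [lfilter([1.0], a, blk) for blk, a in zip(blocks, denoms)])
```

Which option is *right* depends on what the surrounding algorithm
assumes, which is exactly the question under discussion in this thread.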