> I've been putting off replying to this, because I've been trying to
> work out different ways I can answer it in my head, but I'm afraid I
> cannot fully answer your question of how you can squeeze continuous
> time into discrete time. It is an extremely complicated subject that I
> haven't finished researching, and it crosses over into several
> different fields of study! However, I shall try to explain the
> perspective I am approaching the problem from.
...
[Explanation snipped.]
Phill,
Thanks for taking the time and expending the effort to explain in such
great detail. I haven't digested it yet (first reading), but I'll keep
working on it. I hope you find all the right trees in your forest!
Jerry
--
Engineering is the art of making what you want from things you can get.
Reply by moosedude ● November 29, 2006
>moosedude wrote:
>
> ...
>
>> It's part of the overall algorithm I'm applying to my source data,
>> called a DWT (discrete wavelet transform). I may have asked some
>> questions in the past here (a few months ago), relating to the same
>> subject. The idea is that I'm eventually going to adapt this DWT
>> algorithm to perform another algorithm called the CWT (continuous
>> wavelet transform).
>
>Phil,
>
>I try neither to think much about transforms nor to write about them
>here, but you just opened a new mystery for me. How can one do anything
>continuous on a finite computer? Your data are samples, are they not?
>
>> I must say, I'm overwhelmed by the amount of replies I've had to my
>> initial question; I have never visited such a helpful forum before!
>> I'm really appreciative to everyone for the response.
>
>Interesting problems often come from interesting people.
>
>> I think the problem is now solved, and as usual it was down to
>> something stupid: when I was multiplying the FFT results together, I
>> was performing a standard arithmetic multiplication rather than a
>> complex number multiplication !! oops.
>
>The official term for that is "Forrest-trees Syndrome". It happens to
>many explorers.
>
>Jerry
>--
>Engineering is the art of making what you want from things you can get.
>
I've been putting off replying to this, because I've been trying to work
out different ways I can answer it in my head, but I'm afraid I cannot
fully answer your question of how you can squeeze continuous time into
discrete time. It is an extremely complicated subject that I haven't
finished researching, and it crosses over into several different fields
of study! However, I shall try to explain the perspective I am
approaching the problem from.
A CWT takes the mathematical form of:
http://upload.wikimedia.org/math/c/b/5/cb57c0cec60fb9ab9c862375100dfa9f.png
(or for the full page...)
http://en.wikipedia.org/wiki/Continuous_wavelet_transform
Using this formula, and a mother wavelet (in this case Gabor), it's
possible to produce a scalogram of the original source signal. The CWT
stretches and dilates/scales this mother wavelet over the source signal
several times, filtering the result at each scale. In a sense it's a much
more sophisticated version of the FFT (for want of a better comparison);
in this case it produces a plot of frequency against time.
Implemented directly, this formula is extremely slow even with a very
small set of samples: it loops over the source samples over and over
again to produce the output (a collection of nested loops, three deep,
each typically running the length of the source).
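Those three nested loops can be sketched like so (a minimal C++ illustration; the Gaussian-windowed cosine here is a hypothetical stand-in for a properly normalized Gabor wavelet, not the poster's actual code):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Direct (naive) CWT: for every scale and every translation, correlate the
// signal with a scaled, shifted mother wavelet. Three nested loops, hence
// roughly O(scales * N * N) work -- the slowness described above.
static double gabor(double t) {
    // Real part of a Gabor/Morlet-style wavelet: a Gaussian-windowed cosine.
    // Illustrative choice only, not a normalized analysis wavelet.
    return std::exp(-t * t / 2.0) * std::cos(5.0 * t);
}

std::vector<std::vector<double>> naiveCWT(const std::vector<double>& x,
                                          const std::vector<double>& scales) {
    const std::size_t N = x.size();
    std::vector<std::vector<double>> scalogram(scales.size(),
                                               std::vector<double>(N, 0.0));
    for (std::size_t s = 0; s < scales.size(); ++s) {      // loop 1: scales
        const double a = scales[s];
        for (std::size_t b = 0; b < N; ++b) {              // loop 2: shifts
            double acc = 0.0;
            for (std::size_t n = 0; n < N; ++n) {          // loop 3: samples
                acc += x[n] * gabor((static_cast<double>(n) -
                                     static_cast<double>(b)) / a);
            }
            scalogram[s][b] = acc / std::sqrt(a);          // 1/sqrt(a) norm
        }
    }
    return scalogram;
}
```

Each output row is one scale of the scalogram; this is the cost the DWT-based schemes below try to avoid.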
This is where the DWT comes in. I have researched and studied many, many
papers detailing how the DWT can be used to optimize and speed up the
CWT, presumably by approximating its results. The DWT has far better
running time than the CWT and is far more efficient. I would post a link
to the paper I am working from at the moment, but I've misplaced the
link! (oops!)
Currently, as my previous posts suggested, I am testing this
algorithm/implementation using a Daubechies 4-tap filter: I perform an
FFT on this filter and on the source samples, and store the high/low
filtered output in a tree structure, with a little bit of reorganizing.
This is as far as I have got in the implementation; I've still to figure
out how I can fit the Gabor wavelet into this complicated arrangement,
and it may not even be possible with the form I have that defines the
Gabor wavelet. The man himself never actually got around to developing
tap filters (high and low pass) for his wavelets in the form of the
Daubechies set, so it is not possible to use the recursion algorithm to
generate this wavelet.
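As a sanity check, the Daubechies 4-tap coefficients in question (they also appear in the reset() function quoted later in the thread) satisfy the defining orthonormal-filter identities, which is easy to verify numerically (a sketch, not the poster's code):

```cpp
#include <cmath>

// Daubechies 4-tap (D4) low-pass coefficients h and the matching high-pass
// (quadrature mirror) filter g. For an orthonormal wavelet filter pair:
//   sum(h)   == sqrt(2)  (DC gain of the low-pass branch)
//   sum(h^2) == 1        (unit energy)
//   sum(g)   == 0        (the high-pass branch rejects DC)
//   sum(h*g) == 0        (the two branches are orthogonal)
struct Daub4 {
    double h[4], g[4];
    Daub4() {
        const double t = 4.0 * std::sqrt(2.0);
        h[0] = (1.0 + std::sqrt(3.0)) / t;
        h[1] = (3.0 + std::sqrt(3.0)) / t;
        h[2] = (3.0 - std::sqrt(3.0)) / t;
        h[3] = (1.0 - std::sqrt(3.0)) / t;
        // high-pass taps: reversed low-pass taps with alternating signs
        g[0] = h[3]; g[1] = -h[2]; g[2] = h[1]; g[3] = -h[0];
    }
};
```

These identities are what the FFT-based filter-bank step relies on; if any of them fails, the coefficient generation is wrong before the transform even starts.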
I have had to visit this subject several times, giving myself a little
break doing something else in between; I can get very bogged down in the
mathematics. It seems to me that people sometimes choose the most
complicated method of explaining concepts which are very simple, and
hide behind the maths. Also, one little equation can hide years of
research and understanding :) (maybe I'm grinding axes now)
Anyway, another method of simulating the CWT with DWTs is using spline
approximations. A good person to do a Google on is a guy called Michael
Unser, who has researched this topic extensively, although some of his
papers seem to be reworks of earlier papers... be warned, the
information is very compacted in some documents!
Of course, my choice of wavelet (Gabor) may also be open to question,
and another more convenient wavelet may exist to replace it in my
project; but it really does appear to be the best suited to my intended
job, that of analyzing musical data. The output of the scalogram is very
distinctive and exactly what I require.
Hopefully I've explained this as best I can! I'm always open to
suggestions in regard to helping me with my problem :)
Regards Phill.
Reply by Jerry Avins ● November 23, 2006
moosedude wrote:
...
> It's part of the overall algorithm I'm applying to my source data, called a
> DWT (discrete wavelet transform). I may have asked some questions in the
> past here (a few months ago), relating to the same subject. The idea is
> that I'm eventually going to adapt this DWT algorithm to perform
> another algorithm called the CWT (continuous wavelet transform).
Phil,
I try neither to think much about transforms nor to write about them
here, but you just opened a new mystery for me. How can one do anything
continuous on a finite computer? Your data are samples, are they not?
> I must say, I'm overwhelmed by the amount of replies I've had to my
> initial question; I have never visited such a helpful forum before! I'm
> really appreciative to everyone for the response.
Interesting problems often come from interesting people.
> I think the problem is now solved, and as usual it was down to something
> stupid: when I was multiplying the FFT results together, I was performing
> a standard arithmetic multiplication rather than a complex number
> multiplication !! oops.
The official term for that is "Forrest-trees Syndrome". It happens to
many explorers.
Jerry
--
Engineering is the art of making what you want from things you can get.
Reply by moosedude ● November 23, 2006
>
>moosedude skrev:
>> Hello,
>>
>> I'm writing a simple C++ application, which performs the following
tasks;
>>
>> 1) generates high/low pass daub 4-tap filters
>>
>> 2) performs a fft of those filters
>>
>> 3) performs a fft of the input, test signal
>>
>> 4) multiplies the fft of the filters, with the fft of the test signal
>>
>> 5) performs a ifft of the result of the previous step, to output the
>> filtered result of the test signal.
>
>Here's a detail I didn't notice before:
>
>Items 4) and 5) combined are a bit ambiguous. Exactly what
>do you do here? The phrasings above give the impression
>that you do something like (Matlab notation):
>
>%%%%%%%%%%%%%%%%%%%%%%%%%%%%
>X = fft(x);
>H1 = fft(h1,length(x)) ; % FFT of the first filter function
>H2 = fft(h2,length(x)); % FFT of the second filter function
>
>Y = X.*H1.*H2; % This is what item 4) says you do
>
>y = real(ifft(Y));
>%%%%%%%%%%%%%%%%%%%%%%%%%%%%
>
>A better way would be to apply EITHER h1 OR h2 to filter
>the signal:
>
>%%%%%%%%%%%%%%%%%%%%%%%%%%%%
>X = fft(x);
>H1 = fft(h1,length(x)) ; % FFT of the first filter function
>H2 = fft(h2,length(x)); % FFT of the second filter function
>
>Y1 = X.*H1; % Apply LP filter
>Y2 = X.*H2; % Apply HP filter
>
>y1 = real(ifft(Y1)); % LP-filtered data
>y2 = real(ifft(Y2)); % HP filtered data
>%%%%%%%%%%%%%%%%%%%%%%%%%%%%
>
>Rune
>
>
Hi Rune,
you are correct, statements 4 & 5 are completely ambiguous; I am applying,
as you suggested, the LP and HP separately on the same piece of data.
It's part of the overall algorithm I'm applying to my source data, called a
DWT (discrete wavelet transform). I may have asked some questions in the
past here (a few months ago), relating to the same subject. The idea is
that I'm eventually going to adapt this DWT algorithm to perform
another algorithm called the CWT (continuous wavelet transform).
I must say, I'm overwhelmed by the amount of replies I've had to my
initial question; I have never visited such a helpful forum before! I'm
really appreciative to everyone for the response.
I think the problem is now solved, and as usual it was down to something
stupid: when I was multiplying the FFT results together, I was performing
a standard arithmetic multiplication rather than a complex number
multiplication !! oops.
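For anyone hitting the same snag: the difference between the two multiplications is easy to demonstrate with std::complex (a minimal sketch; the function names are made up for illustration):

```cpp
#include <complex>

// The bug described above: multiplying two FFT bins component-wise
// (real*real, imag*imag) instead of as complex numbers.
std::complex<double> wrongProduct(std::complex<double> p,
                                  std::complex<double> q) {
    // the "standard arithmetic multiplication" mistake
    return {p.real() * q.real(), p.imag() * q.imag()};
}

// std::complex operator* does the right thing:
// (a + bi)(c + di) = (ac - bd) + (ad + bc)i
std::complex<double> rightProduct(std::complex<double> p,
                                  std::complex<double> q) {
    return p * q;
}
```

The component-wise product is not even a well-defined operation on spectra, which is why the filtered output came out wrong rather than merely scaled.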
Regards Phill.
Reply by Andor ● November 23, 2006
Ben Bridgwater wrote:
> Andor wrote:
...
> > No. FFT based filtering makes use of the fact that multiplication in
> > frequency domain of two DFTed data frames (or vectors, as you prefer)
> > corresponds to circular convolution in the time domain. All you need is
> > zero-padding in the correct places to transform the circular into
> > linear-convolution. No windows.
>
>OK, thanks. I had been thinking that the zero padding was only serving
> the purpose of making sure that the entire impulse response of the
> filter function was being included; I hadn't realized that it was also
> serving to take away the circularity (which is why I thought the window
> function was needed). Is there any intuitive way of seeing how this
> works, or is it just a matter of falling out in the math?
Well, the convolution y of two finite vectors, say h and x, with
N1=length(h) and N2=length(x), is defined as
y(n) = (x * h) (n)
= sum_{k=0}^n x(k) h(n-k),
and you can immediately see that, in general, y(n) != 0 for
n <= N1 + N2 - 2, and y(n) == 0 for n >= N1 + N2 - 1. So if you want to
filter in the frequency domain, the frame has to be long enough to
accommodate y, which has length N1 + N2 - 1, hence the zero-padding.
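This can be checked numerically without any FFT at all: circular convolution with frame length N >= N1 + N2 - 1 agrees with linear convolution, while a shorter frame wraps the tail around (a small C++ sketch of both definitions):

```cpp
#include <cstddef>
#include <vector>

// Linear convolution of x and h; result length is N2 + N1 - 1.
std::vector<double> linconv(const std::vector<double>& x,
                            const std::vector<double>& h) {
    std::vector<double> y(x.size() + h.size() - 1, 0.0);
    for (std::size_t n = 0; n < x.size(); ++n)
        for (std::size_t k = 0; k < h.size(); ++k)
            y[n + k] += x[n] * h[k];
    return y;
}

// Circular convolution of period N -- what multiplying two N-point DFTs
// computes. Both inputs are zero-padded up to the frame length N first.
std::vector<double> circconv(std::vector<double> x, std::vector<double> h,
                             std::size_t N) {
    x.resize(N, 0.0);
    h.resize(N, 0.0);
    std::vector<double> y(N, 0.0);
    for (std::size_t n = 0; n < N; ++n)
        for (std::size_t k = 0; k < N; ++k)
            y[n] += x[k] * h[(n + N - k) % N];  // index wraps modulo N
    return y;
}
```

With N large enough the wrap-around indices only ever hit padded zeros, which is the whole trick.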
Regards,
Andor
Reply by Rune Allnor ● November 22, 2006
moosedude skrev:
> Hello,
>
> I'm writing a simple C++ application, which performs the following tasks;
>
> 1) generates high/low pass daub 4-tap filters
>
> 2) performs a fft of those filters
>
> 3) performs a fft of the input, test signal
>
> 4) multiplies the fft of the filters, with the fft of the test signal
>
> 5) performs a ifft of the result of the previous step, to output the
> filtered result of the test signal.
Here's a detail I didn't notice before:
Items 4) and 5) combined are a bit ambiguous. Exactly what
do you do here? The phrasings above give the impression
that you do something like (Matlab notation):
%%%%%%%%%%%%%%%%%%%%%%%%%%%%
X = fft(x);
H1 = fft(h1,length(x)) ; % FFT of the first filter function
H2 = fft(h2,length(x)); % FFT of the second filter function
Y = X.*H1.*H2; % This is what item 4) says you do
y = real(ifft(Y));
%%%%%%%%%%%%%%%%%%%%%%%%%%%%
A better way would be to apply EITHER h1 OR h2 to filter
the signal:
%%%%%%%%%%%%%%%%%%%%%%%%%%%%
X = fft(x);
H1 = fft(h1,length(x)) ; % FFT of the first filter function
H2 = fft(h2,length(x)); % FFT of the second filter function
Y1 = X.*H1; % Apply LP filter
Y2 = X.*H2; % Apply HP filter
y1 = real(ifft(Y1)); % LP-filtered data
y2 = real(ifft(Y2)); % HP filtered data
%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Rune
Reply by Randy Yates ● November 22, 2006
"moosedude" <phill_holland@hotmail.com> writes:
>>
>>moosedude wrote:
>>> Thanks everybody for the replies, but maybe I should explain a little
>>> more!
>>>
>>> The code I use to generate the test signal, is as follows;
>>>
>>> void testSignal(buffer<complex<double>> *src)
>>> {
>>> double PI = 3.14159265359;
>>> double incr = 1.0 / src->length;
>>> double offset = 0.0;
>>>
>>> for(long i=0L;i<src->length;i++)
>>> {
>>> double term1 = sin(40.0 * PI * offset) * exp(-100.0 * PI *
>>> pow(offset - 0.2,2.0));
>>> double term2 = (sin(40.0 * PI * offset) + 2. * cos(160.0 * PI *
>>> offset)) * exp(-50.0 * PI * pow(offset-0.5,2.0));
>>> double term3 = 2.0 * sin(160.0 * PI * offset) * exp(-100.0 * PI *
>>> pow(offset - 0.8,2.0));
>>>
>>> double t = (term1 + term2 + term3);
>>> src->data[i].real(t);
>>>
>>> offset += incr;
>>> }
>>> }
>>>
>>> This basically generates three waveforms, the first and third are
>>> unique, and the middle wave is a combination of the first and third
>>> (I swiped the method for generating the signal from a wavelet book!)
>>>
>>> my filters are based upon the Daubechies (is that the correct
>>> spelling?) 4-tap co-efficients, which are computed using this function;
>>>
>>> void reset()
>>> {
>>> double t = 4.0 * sqrt(2.0);
>>>
>>> h[0] = (1.0 + sqrt(3.0)) / t;
>>> h[1] = (3.0 + sqrt(3.0)) / t;
>>> h[2] = (3.0 - sqrt(3.0)) / t;
>>> h[3] = (1.0 - sqrt(3.0)) / t;
>>>
>>> g[0] = h[3]; g[1] = -h[2]; g[2] = h[1]; g[3] = -h[0];
>>> }
>>>
>>> I generate the filter by implementing the recursion algorithm on these
>>> co-efficients.
>>>
>>> When I perform the steps I described earlier, the final result looks
>>> sort of how I expected, being that it's almost correct, apart from the
>>> fact that the outputted waveform of the filtered test signal, in the
>>> time domain, is quite obviously symmetrical, i.e. the first half and
>>> second half are a mirror of themselves. I wasn't sure whether this was
>>> down to coding error, or the results were supposed to look like this
>>> (as a result of the chosen test signal), hence the need for another
>>> computer program to compare the results with!
>>>
>>
>>Hmmm. If the time domain is mirrored as you describe, that suggests
>>that the frequency domain information applied to the IFFT is real. It
>>should be complex with conjugate symmetry.
>>
>>John
>>
>>
>
> Hi John,
>
> can you expand on this a bit further? When I'm filtering in the frequency
> domain, I'm not quite sure what to do with the imaginary part of the data..
> multiplying the imaginary part with the filter fft and the test signal fft,
> seemed to do nothing, as did leaving the imag. part alone, leaving it
> unprocessed.
>
> - Phill.
Here's the idea. Matlab comes with a function like this, but I think
this shorter m-file is easier to understand than their version.
John
*****************************
% fast convolution filter using overlap-add technique
% FFT size is twice filter length
function y = fastfilter(x,h)
x = x(:);
h = h(:);
Nfft = 2*length(h);
H = fft(h, Nfft);
Npoints = Nfft - length(h);
ip = 1;
op = 1;
y = zeros(2*length(x),1);
while(ip+Npoints-1 <= length(x))
% grab an input block of length Npoints
b = x(ip:(ip+Npoints-1));
% multiply FFT of input block by H, take inverse FFT
B = fft(b,Nfft);
B = B.*H;
t = real(ifft(B)); % imaginary part is round-off noise for real x and h
% concatenate all Nfft points of IFFT result onto output vector
y(op:(op+Nfft-1)) = y(op:(op+Nfft-1)) + t;
% advance input and output pointers by Npoints
ip = ip + Npoints;
op = op + Npoints;
end
y = y(1:(op+Nfft-1));
return
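The overlap-add bookkeeping in the m-file can be demonstrated in isolation by replacing the FFT step with a direct per-block convolution: partitioning the input, filtering each block, and adding the overlapping tails reproduces the full linear convolution exactly (a C++ sketch, not a translation of the m-file):

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Direct linear convolution; result length is length(x) + length(h) - 1.
std::vector<double> linconv(const std::vector<double>& x,
                            const std::vector<double>& h) {
    std::vector<double> y(x.size() + h.size() - 1, 0.0);
    for (std::size_t n = 0; n < x.size(); ++n)
        for (std::size_t k = 0; k < h.size(); ++k)
            y[n + k] += x[n] * h[k];
    return y;
}

// Overlap-add: split x into blocks, convolve each block with h, and add
// each (block + filter tail) result back at the block's input offset.
std::vector<double> overlapAdd(const std::vector<double>& x,
                               const std::vector<double>& h,
                               std::size_t blockLen) {
    std::vector<double> y(x.size() + h.size() - 1, 0.0);
    for (std::size_t ip = 0; ip < x.size(); ip += blockLen) {
        const std::size_t n = std::min(blockLen, x.size() - ip);
        std::vector<double> block(x.begin() + ip, x.begin() + ip + n);
        std::vector<double> t = linconv(block, h);  // length n + Nh - 1
        for (std::size_t k = 0; k < t.size(); ++k)
            y[ip + k] += t[k];  // tails of adjacent blocks overlap and add
    }
    return y;
}
```

In the m-file the per-block convolution is done by FFT/IFFT for speed; the partition-and-sum structure is identical.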
Reply by moosedude ● November 22, 2006
>
>moosedude wrote:
>> Thanks everybody for the replies, but maybe I should explain a little
>> more!
>>
>> The code I use to generate the test signal, is as follows;
>>
>> void testSignal(buffer<complex<double>> *src)
>> {
>> double PI = 3.14159265359;
>> double incr = 1.0 / src->length;
>> double offset = 0.0;
>>
>> for(long i=0L;i<src->length;i++)
>> {
>> double term1 = sin(40.0 * PI * offset) * exp(-100.0 * PI *
>> pow(offset - 0.2,2.0));
>> double term2 = (sin(40.0 * PI * offset) + 2. * cos(160.0 * PI *
>> offset)) * exp(-50.0 * PI * pow(offset-0.5,2.0));
>> double term3 = 2.0 * sin(160.0 * PI * offset) * exp(-100.0 * PI *
>> pow(offset - 0.8,2.0));
>>
>> double t = (term1 + term2 + term3);
>> src->data[i].real(t);
>>
>> offset += incr;
>> }
>> }
>>
>> This basically generates three waveforms, the first and third are
>> unique, and the middle wave is a combination of the first and third
>> (I swiped the method for generating the signal from a wavelet book!)
>>
>> my filters are based upon the Daubechies (is that the correct
>> spelling?) 4-tap co-efficients, which are computed using this function;
>>
>> void reset()
>> {
>> double t = 4.0 * sqrt(2.0);
>>
>> h[0] = (1.0 + sqrt(3.0)) / t;
>> h[1] = (3.0 + sqrt(3.0)) / t;
>> h[2] = (3.0 - sqrt(3.0)) / t;
>> h[3] = (1.0 - sqrt(3.0)) / t;
>>
>> g[0] = h[3]; g[1] = -h[2]; g[2] = h[1]; g[3] = -h[0];
>> }
>>
>> I generate the filter by implementing the recursion algorithm on these
>> co-efficients.
>>
>> When I perform the steps I described earlier, the final result looks
>> sort of how I expected, being that it's almost correct, apart from the
>> fact that the outputted waveform of the filtered test signal, in the
>> time domain, is quite obviously symmetrical, i.e. the first half and
>> second half are a mirror of themselves. I wasn't sure whether this was
>> down to coding error, or the results were supposed to look like this
>> (as a result of the chosen test signal), hence the need for another
>> computer program to compare the results with!
>>
>
>Hmmm. If the time domain is mirrored as you describe, that suggests
>that the frequency domain information applied to the IFFT is real. It
>should be complex with conjugate symmetry.
>
>John
>
>
Hi John,
can you expand on this a bit further? When I'm filtering in the frequency
domain, I'm not quite sure what to do with the imaginary part of the data.
Multiplying the imaginary part with the filter fft and the test signal fft
seemed to do nothing, as did leaving the imag. part alone, leaving it
unprocessed.
- Phill.
Reply by ● November 22, 2006
moosedude wrote:
> Thanks everybody for the replies, but maybe I should explain a little
> more!
>
> The code I use to generate the test signal, is as follows;
>
> void testSignal(buffer<complex<double>> *src)
> {
> double PI = 3.14159265359;
> double incr = 1.0 / src->length;
> double offset = 0.0;
>
> for(long i=0L;i<src->length;i++)
> {
> double term1 = sin(40.0 * PI * offset) * exp(-100.0 * PI * pow(offset -
> 0.2,2.0));
> double term2 = (sin(40.0 * PI * offset) + 2. * cos(160.0 * PI * offset))
> * exp(-50.0 * PI * pow(offset-0.5,2.0));
> double term3 = 2.0 * sin(160.0 * PI * offset) * exp(-100.0 * PI *
> pow(offset - 0.8,2.0));
>
> double t = (term1 + term2 + term3);
> src->data[i].real(t);
>
> offset += incr;
> }
> }
>
> This basically generates three waveforms, the first and third are unique,
> and the middle wave is a combination of the first and third (I swiped the
> method for generating the signal from a wavelet book!)
>
> my filters are based upon the Daubechies (is that the correct spelling?)
> 4-tap co-efficients, which are computed using this function;
>
> void reset()
> {
> double t = 4.0 * sqrt(2.0);
>
> h[0] = (1.0 + sqrt(3.0)) / t;
> h[1] = (3.0 + sqrt(3.0)) / t;
> h[2] = (3.0 - sqrt(3.0)) / t;
> h[3] = (1.0 - sqrt(3.0)) / t;
>
> g[0] = h[3]; g[1] = -h[2]; g[2] = h[1]; g[3] = -h[0];
> }
>
> I generate the filter by implementing the recursion algorithm, on these
> co-efficients.
>
> When I perform the steps I described earlier, the final result looks sort
> of how I expected, being that it's almost correct, apart from the fact that
> the outputted waveform of the filtered test signal, in the time domain, is
> quite obviously symmetrical, i.e. the first half and second half are a
> mirror of themselves. I wasn't sure whether this was down to coding
> error, or the results were supposed to look like this (as a result of the
> chosen test signal), hence the need for another computer program to
> compare the results with!
>
Hmmm. If the time domain is mirrored as you describe, that suggests
that the frequency domain information applied to the IFFT is real. It
should be complex with conjugate symmetry.
John
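John's observation can be demonstrated with a toy DFT: zeroing the imaginary parts of a spectrum keeps only the even part of the signal, so the inverse transform comes out mirrored about the middle of the frame (a sketch with a naive O(N^2) DFT, not the poster's code):

```cpp
#include <cmath>
#include <complex>
#include <cstddef>
#include <vector>

using cvec = std::vector<std::complex<double>>;

// Naive O(N^2) DFT / inverse DFT -- good enough for a demonstration.
cvec dft(const cvec& x, bool inverse) {
    const double PI = std::acos(-1.0);
    const std::size_t N = x.size();
    const double sign = inverse ? 1.0 : -1.0;
    cvec X(N);
    for (std::size_t k = 0; k < N; ++k)
        for (std::size_t n = 0; n < N; ++n)
            X[k] += x[n] * std::polar(1.0, sign * 2.0 * PI *
                                      static_cast<double>(k * n) / N);
    if (inverse)
        for (auto& v : X) v /= static_cast<double>(N);
    return X;
}

// The situation John describes: the spectrum fed to the IFFT is purely
// real. Keeping only Re(X) keeps only the even part of x, so the inverse
// transform returns (x[n] + x[(N-n) mod N]) / 2 -- a mirrored signal.
cvec keepRealPart(const cvec& X) {
    cvec R(X.size());
    for (std::size_t k = 0; k < X.size(); ++k) R[k] = X[k].real();
    return R;
}
```

A correctly filtered real signal instead has a conjugate-symmetric spectrum, X(N-k) = conj(X(k)), which is exactly what complex (rather than component-wise) multiplication of two such spectra preserves.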