Reply by April 22, 2015
On Thursday, April 23, 2015 at 7:33:51 AM UTC+12, Evgeny Filatov wrote:
> On 22.04.2015 20:49, gyansorova@gmail.com wrote:
>>
>> (snip)
>>
>> what does BCD adj mean at the last step of their algorithm?
>
> They call that stage "BCD adjustment".
>
> BCD means "binary-coded decimal". They use BCD to encode numbers as
> sequences of decimal digits. After the convolution they get a sequence
> of products of pairs of digits, and each may require up to two digits
> to write down. For example, 6*7 = 42 requires 2 digits. So to get the
> final result in decimal form they must sum up all such 2-digit numbers,
> propagating the carries.
>
> (snip)
>
> Regards,
> Evgeny.
I followed the integer version example - so this probably means that if a number is greater than 9 they do a carry to the next digit along. Thanks.
Reply by Evgeny Filatov April 22, 2015
On 22.04.2015 20:49, gyansorova@gmail.com wrote:
> On Wednesday, April 22, 2015 at 2:39:01 PM UTC+12, robert bristow-johnson wrote:
>>
>> (snip)
>>
> what does BCD adj mean at the last step of their algorithm?
They call that stage "BCD adjustment".

BCD means "binary-coded decimal". They use BCD to encode numbers as sequences of decimal digits. After the convolution they get a sequence of products of pairs of digits, and each may require up to two digits to write down. For example, 6*7 = 42 requires 2 digits. So to get the final result in decimal form they must sum up all such 2-digit numbers, propagating the carries.

Not sure I've explained it better than they did in their source code file "mul.cc":

"Let us consider A and B sequences as polynomial coefficients. Then the convolution operation of both signals will represent the polynomial product. Knowing that none of the coefficients can be greater than their base, if we do a decimal adjustment consisting in propagating the carry values towards higher weight digits, the final sequence will be the multiplication of both numbers. Thus, the multiplication can be performed with a convolution operation with BCD adjustment."

Regards,
Evgeny.
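To make the carry step concrete, here is a minimal NumPy sketch of the idea described above: convolve the digit sequences, then propagate carries towards the higher-weight digits. This is only an illustration of the principle, not the code from "mul.cc", and the function name is made up for the example.

```python
import numpy as np

def multiply_via_convolution(a_digits, b_digits):
    """Multiply two non-negative integers given as digit lists
    (least significant digit first) by convolving the digit
    sequences and then propagating carries ("BCD adjustment")."""
    # Linear convolution of the digit sequences: this is the polynomial
    # product, whose coefficients can exceed 9.
    raw = np.convolve(a_digits, b_digits)

    # Carry propagation towards higher-weight digits.
    digits = []
    carry = 0
    for c in raw:
        total = int(c) + carry
        digits.append(total % 10)
        carry = total // 10
    while carry:                      # flush any remaining carry
        digits.append(carry % 10)
        carry //= 10
    return digits

# 123 * 456 = 56088, digits stored least significant first
print(multiply_via_convolution([3, 2, 1], [6, 5, 4]))   # [8, 8, 0, 6, 5]
```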
Reply by April 22, 2015
On Wednesday, April 22, 2015 at 2:39:01 PM UTC+12, robert bristow-johnson wrote:
> On 4/21/15 5:26 PM, Evgeny Filatov wrote:
>
> (snip)
>
> it's similar to asking what is meant by a length-N real FFT? you can do
> a length-N real FFT with a length-(N/2) complex FFT by putting the N/2
> even samples in the real parts and the N/2 odd samples in the imaginary
> parts of N/2 complex samples. and untangle the results on the other
> end. is it a length-N FFT or a length-(N/2) FFT?
>
> in both cases you're getting two-for-one. two reals for one complex.
>
> --
>
> r b-j                  rbj@audioimagination.com
>
> "Imagination is more important than knowledge."
what does BCD adj mean at the last step of their algorithm?
Reply by robert bristow-johnson April 21, 2015
On 4/21/15 5:26 PM, Evgeny Filatov wrote:
> On 22.04.2015 0:01, gyansorova@gmail.com wrote:
>>
>> (snip)
>>
>> and I just found this simpler method using 2 FFTs to do convolution
>>
>> http://icereijo.com/fft-based-multiplication-algorithm/
>>
cool trick.
> True. However, the method just combines a complex signal from two real
> signals and computes an FFT of the complex signal. But then, one has to
> ask what was initially meant by "two FFTs"?
it's similar to asking what is meant by a length-N real FFT? you can do
a length-N real FFT with a length-(N/2) complex FFT by putting the N/2
even samples in the real parts and the N/2 odd samples in the imaginary
parts of N/2 complex samples. and untangle the results on the other
end. is it a length-N FFT or a length-(N/2) FFT?

in both cases you're getting two-for-one. two reals for one complex.

--

r b-j                  rbj@audioimagination.com

"Imagination is more important than knowledge."
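For readers who want to see that packing spelled out, below is a rough NumPy sketch of the even/odd trick described above (my own reading of it, not r b-j's code): the even samples go into the real parts, the odd samples into the imaginary parts, one length-(N/2) complex FFT is taken, and the two half-spectra are untangled and recombined.

```python
import numpy as np

def real_fft_via_half_length(x):
    """FFT of a real, even-length sequence x computed with one complex
    FFT of length N/2: even samples in the real parts, odd samples in
    the imaginary parts, then untangle."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    assert N % 2 == 0
    z = x[0::2] + 1j * x[1::2]                  # two reals per complex sample
    Z = np.fft.fft(z)                           # one length-(N/2) complex FFT

    # Untangle: E is the FFT of the even samples, O of the odd samples.
    k = np.arange(N // 2)
    Zr = np.conj(Z[(-k) % (N // 2)])            # Z[-k], conjugated
    E = 0.5 * (Z + Zr)
    O = -0.5j * (Z - Zr)

    # Recombine with the usual decimation-in-time twiddle factors.
    W = np.exp(-2j * np.pi * k / N)
    return np.concatenate([E + W * O, E - W * O])

x = np.random.randn(16)
print(np.allclose(real_fft_via_half_length(x), np.fft.fft(x)))   # True
```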
Reply by April 21, 2015
On Friday, April 17, 2015 at 7:36:27 AM UTC+12, gyans...@gmail.com wrote:
> Here's something you may not know. Suppose you want to expand a polynomial
> in a power series. Let's take a simple one:
>
> a0 + a1*z^-1
>
> Then if you write a lower triangular Toeplitz matrix
>
> [ a0  0
>   a1  a0 ]
>
> so that the first column is the polynomial coefficients, then its inverse becomes
>
> (1/a0)[ 1       0
>         -a1/a0  1 ]
>
> i.e. the first column is the power series expansion - or division into unity.
>
> You can extend this for any order.
>
> So matrix inversion of a (lower triangular) Toeplitz matrix is equivalent to
> expanding the reciprocal polynomial as a power series (division into unity).
>
> Loads of papers on this, and it turns out you can perform this with only 2 FFTs.
>
> The method is approximate with the FFTs but highly accurate.
>
> I am led to believe you can also multiply two polynomials (which is convolution)
> using only 2 FFTs instead of the usual 3!
>
> Google Bini's algorithm and modified Bini's algorithm for some of this stuff.
Well, the usual approach for real polynomial coefficients is that you need three FFTs.
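Just to illustrate the quoted claim numerically, here is a small NumPy check (my own sketch; the coefficients are arbitrary): build the lower triangular Toeplitz matrix from the coefficients of a(z), invert it exactly, and compare its first column with the power series of 1/a(z) obtained by dividing into unity.

```python
import numpy as np

# Coefficients of a(z) = a0 + a1*z^-1 + a2*z^-2 + ... (a0 nonzero).
a = np.array([2.0, -0.5, 0.25, 0.1])
n = len(a)

# Lower triangular Toeplitz matrix whose first column is a.
T = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1):
        T[i, j] = a[i - j]

first_col = np.linalg.inv(T)[:, 0]      # first column of the exact inverse

# Power series of 1/a(z) by division into unity:
# b0 = 1/a0,  b_k = -(1/a0) * (a1*b_{k-1} + ... + ak*b0).
b = np.zeros(n)
b[0] = 1.0 / a[0]
for k in range(1, n):
    b[k] = -np.dot(a[1:k + 1], b[k - 1::-1]) / a[0]

print(np.allclose(first_col, b))        # True: the two columns agree
```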
Reply by Evgeny Filatov April 21, 2015
On 22.04.2015 0:01, gyansorova@gmail.com wrote:
> On Wednesday, April 22, 2015 at 7:28:28 AM UTC+12, Evgeny Filatov wrote:
>>
>> (snip)
>>
> and I just found this simpler method using 2 FFTs to do convolution
>
> http://icereijo.com/fft-based-multiplication-algorithm/
True. However, the method just combines a complex signal from two real signals and computes an FFT of the complex signal. But then, one has to ask what was initially meant by "two FFTs".

Evgeny.
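For reference, here is a rough NumPy sketch of what that looks like (my own rendering of the idea in the linked post, not the author's code): pack the two real sequences as the real and imaginary parts of one complex signal, take a single forward FFT, separate the two spectra using the Hermitian symmetry of real signals, multiply pointwise, and take one inverse FFT. Counting the inverse transform, the whole convolution costs two FFTs.

```python
import numpy as np

def convolve_two_ffts(a, b):
    """Linear convolution of two real sequences using one forward FFT
    and one inverse FFT, by packing a and b into one complex signal."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    n = len(a) + len(b) - 1                 # length of the linear convolution
    z = np.zeros(n, dtype=complex)
    z[:len(a)] += a                         # real part carries a
    z[:len(b)] += 1j * b                    # imaginary part carries b
    Z = np.fft.fft(z)                       # FFT #1

    # Untangle the two spectra:
    # A[k] = (Z[k] + conj(Z[-k]))/2,  B[k] = (Z[k] - conj(Z[-k]))/(2j).
    Zr = np.conj(Z[(-np.arange(n)) % n])
    A = 0.5 * (Z + Zr)
    B = -0.5j * (Z - Zr)

    return np.fft.ifft(A * B).real          # FFT #2 (inverse)

a = [1.0, 2.0, 3.0]
b = [4.0, 5.0]
print(convolve_two_ffts(a, b))     # [ 4. 13. 22. 15.]
print(np.convolve(a, b))           # same result
```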
Reply by April 21, 2015
On Wednesday, April 22, 2015 at 7:28:28 AM UTC+12, Evgeny Filatov wrote:
> On 21.04.2015 21:50, gyansorova@gmail.com wrote:
>
> (snip)
>
>> I think it's more the other way around. They have a Toeplitz matrix and
>> they are looking for a fast way to invert it and find that this involves
>> polynomials.
>> For example in an abstract
>
> Thanks, that certainly helps. Curiously enough, there's a 1994 book by
> Bini and Pan named "Polynomial and Matrix Computations: Fundamental
> Algorithms". I have no idea if it's good, though.
>
> Evgeny.
and I just found this simpler method using 2 FFTs to do convolution

http://icereijo.com/fft-based-multiplication-algorithm/
Reply by Evgeny Filatov April 21, 2015
On 21.04.2015 21:50, gyansorova@gmail.com wrote:

(snip)

> I think it's more the other way around. They have a Toeplitz matrix and
> they are looking for a fast way to invert it and find that this involves
> polynomials.
> For example in an abstract
Thanks, that certainly helps. Curiously enough, there's a 1994 book by Bini and Pan named "Polynomial and Matrix Computations: Fundamental Algorithms". I have no idea if it's good, though.

Evgeny.
Reply by April 21, 2015
On Wednesday, April 22, 2015 at 12:49:49 AM UTC+12, Evgeny Filatov wrote:
> On 16.04.2015 22:36, gyansorova@gmail.com wrote:
>
> (snip)
>
>> I am led to believe you can also multiply two polynomials (which is convolution)
>> using only 2 FFTs instead of the usual 3!
>>
>> Google Bini's algorithm and modified Bini's algorithm for some of this stuff.
>
> Frankly, I fail to see huge benefits related to Bini's algorithm.
>
> I'm not a math geek, so maybe I'm missing some important detail. But
> (for me) it goes like saying that, okay, we want to multiply two
> polynomials. So we encode them as Toeplitz matrices so we have a nifty
> formalism and substitute convolution with matrix multiplication. Then we
> want an efficient method to multiply Toeplitz matrices and we say like
> whoa, it's the same thing as multiplying polynomials, so we can do it
> with 3 FFTs! But a question arises, why bother with Toeplitz matrices
> in the first place?
>
> (snip)
>
> Best regards,
> Evgeny.
I think it's more the other way around. They have a Toeplitz matrix and they are looking for a fast way to invert it and find that this involves polynomials.

For example, from one abstract:

"In this paper, we present an approximate inversion method for triangular Toeplitz matrices based on trigonometric polynomial interpolation. To obtain an approximate inverse of high accuracy for a triangular Toeplitz matrix of size n, our algorithm requires two fast Fourier transforms (FFTs) and one fast cosine transform of 2n-vectors. We then revise the approximate method proposed by Bini (SIAM J. Comput. 13 (1984) 268). The complexity of the revised Bini algorithm is two FFTs of 2n-vectors."

And also, on fast spectral factorization:

"We present a new method for factoring a real polynomial into the product of two polynomials which have their zeros inside and outside the unit circle, respectively. The approach is based on solving a nonlinear system by Newton's method. The Jacobi matrix is a polynomial of a companion matrix and thus a Bezoutian times the inverse of a triangular Toeplitz matrix. After getting a deeper understanding of the displacement structure of the Bezoutian, each Newton step can be reduced to a few applications of discrete Fourier or cosine transforms and the LU-decomposition of a conceived linear system with a Cauchy-like matrix. These structural features are employed to design a fast algorithm, which shows excellent numerical behavior."
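As far as I can tell from that abstract (this is only my reading of the idea, not the published algorithm verbatim), the two-FFT inversion works by sampling 1/a(z) on a circle of small radius eps, so that the infinite tail of the reciprocal series aliases in with weight of order eps^n. A rough NumPy sketch of that idea:

```python
import numpy as np

def approx_triangular_toeplitz_inverse(a, eps=0.1):
    """Approximate first column of the inverse of the lower triangular
    Toeplitz matrix whose first column is a, using two FFTs.

    The exact answer is the first n coefficients of 1/a(z); sampling
    a(z) on a circle of radius eps and inverting pointwise recovers
    them up to an aliasing error of order eps**n."""
    a = np.asarray(a, dtype=float)
    n = len(a)
    d = eps ** np.arange(n)            # diagonal scaling 1, eps, eps^2, ...
    A = np.fft.fft(a * d)              # FFT #1: a(z) at the scaled nodes
    c = np.fft.ifft(1.0 / A)           # FFT #2: back to (scaled) coefficients
    return (c / d).real                # undo the scaling

a = np.array([2.0, -0.5, 0.25, 0.1, -0.3, 0.2, 0.05, -0.1])
approx = approx_triangular_toeplitz_inverse(a)

# Exact reciprocal series by division into unity, for comparison.
n = len(a)
exact = np.zeros(n)
exact[0] = 1.0 / a[0]
for k in range(1, n):
    exact[k] = -np.dot(a[1:k + 1], exact[k - 1::-1]) / a[0]

print(np.max(np.abs(approx - exact)))   # small; shrinks like eps**n
```

The choice of eps is a trade-off: too large and the aliasing error dominates, too small and the final rescaling by 1/eps^k amplifies roundoff.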
Reply by Evgeny Filatov April 21, 2015
On 16.04.2015 22:36, gyansorova@gmail.com wrote:

(snip)

> I am led to believe you can also multiply two polynomials (which is convolution)
> using only 2 FFTs instead of the usual 3!
>
> Google Bini's algorithm and modified Bini's algorithm for some of this stuff.
Frankly, I fail to see huge benefits related to Bini's algorithm.

I'm not a math geek, so maybe I'm missing some important detail. But (for me) it goes like saying that, okay, we want to multiply two polynomials. So we encode them as Toeplitz matrices so we have a nifty formalism and substitute convolution with matrix multiplication. Then we want an efficient method to multiply Toeplitz matrices and we say like whoa, it's the same thing as multiplying polynomials, so we can do it with 3 FFTs! But a question arises, why bother with Toeplitz matrices in the first place?

In multiplying polynomials A and B there's an obvious case when 2 FFTs would suffice. It's when you know one polynomial beforehand (i.e., it cannot change), so you can use a precomputed FFT of it, which you e.g. store in memory. It's precisely the situation you have when finding the value of 1/A, which you model as inverting a Toeplitz matrix, which can be done with 2 FFTs by Bini's algorithm. You can do it with 2 FFTs precisely because you know one of the polynomials beforehand -- that is, 1.

I'm not saying it's impossible to multiply two polynomials with two FFTs, just afraid that Bini's algorithm, modified or not, won't provide much of assistance in that quest.

Best regards,
Evgeny.
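A tiny sketch of the precomputed-FFT case described above, in plain NumPy (the class name is mine): when B is fixed, its transform can be computed once and cached, so each new product costs one forward FFT plus one inverse FFT.

```python
import numpy as np

class FixedPolynomialMultiplier:
    """Multiplies arbitrary polynomials by one fixed polynomial b,
    reusing a precomputed FFT of b. The transform length n must be at
    least len(a) + len(b) - 1 for every a that will be passed in."""

    def __init__(self, b, n):
        self.n = n
        self.blen = len(b)
        self.B = np.fft.fft(b, n)          # computed once, stored

    def multiply(self, a):
        A = np.fft.fft(a, n=self.n)        # the only forward FFT per call
        full = np.fft.ifft(A * self.B).real
        return full[:len(a) + self.blen - 1]   # drop the zero padding

b = [1.0, -2.0, 1.0]                       # fixed polynomial (1 - z)^2
m = FixedPolynomialMultiplier(b, n=8)
a = [1.0, 3.0, 3.0, 1.0]                   # (1 + z)^3
print(np.round(m.multiply(a), 6))          # [ 1.  1. -2. -2.  1.  1.]
print(np.convolve(a, b))                   # same product, computed directly
```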