DSPRelated.com
Forums

Sample Rate conversion

Started by HardySpicer May 18, 2012
dvsarwate  <dvsarwate@yahoo.com> wrote:

>On May 19, 7:38 pm, spop...@speedymail.org (Steve Pope) wrote:
>> In the algebraic decoding application with polynomials, W becomes the
>> error locator so I think we must be stating the same "extended Euclidean"
>> algorithm; but it seems you are computing more values than I would
>> expect -- if you are explicitly calculating a[i] and b[i] at each step.
>In the algebraic decoding application, one of the
>a[i] and b[i] is of no interest whatsoever, and so
>only one sequence is computed. For what it is
>worth, the sequence left uncomputed follows the
>same recursion but with different initial conditions.
>Instead of
>
>   W[-1] = 0
>   W[0]  = 1
>   W[i]  = W[i-2] + Q[i] * W[i-1],  i = 1, 2, 3...
>
>it is
>
>   X[-1] = 1
>   X[0]  = 0
>   X[i]  = X[i-2] + Q[i] * X[i-1],  i = 1, 2, 3...
Okay, thanks. I think the terminal value of the X sequence above is N / gcd (as opposed to M / gcd for the W sequence).
>For using the Bezout theorem (and the up/down
>sampling ratios application) both the sequences
>would be needed.
>The advantage of the iterative approach is that
>at any stage, only 6 (or 4) numbers need be
>stored: two remainders, two values of a[i] (and
>possibly two values of b[i]). The alternative --
>compute the remainders till you find the gcd
>and then work backwards till you express
>the gcd in terms of M and N -- requires a larger
>(and unpredictably variable) amount of storage.
I see the advantage as getting a "free" long division.

For the algebraic decoding case, if you are doing the extended Euclidean algorithm you must store four polynomial values (as you note above) between Euclidean iterations.

Whereas if you are doing the plain Euclidean algorithm, you need only store two polynomial values during the Euclidean iteration. After the iteration you can discard one of these values, and use that storage element for performing a long division.

I do not think the plain Euclidean approach requires more storage; it is mostly just messier, because you have to explicitly do a polynomial division (although it's a fairly simple one, as the numerator is a monomial). (It's been a long time since I've done this other than with the extended algorithm, so maybe I am forgetting some complexity aspect.)

Steve
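The W and X recursions discussed above can be run alongside plain Euclid. Here is a minimal Python sketch (function and variable names are mine, not from the thread) that also illustrates the claim about the terminal values M/gcd and N/gcd:

```python
def euclid_sequences(M, N):
    # Carry the W and X sequences alongside the Euclidean remainders.
    # Initial conditions per the recursions above:
    #   W[-1] = 0, W[0] = 1;  X[-1] = 1, X[0] = 0
    r_prev, r = M, N
    w_prev, w = 0, 1
    x_prev, x = 1, 0
    while r != 0:
        q, rem = divmod(r_prev, r)        # Q[i] is the Euclidean quotient
        r_prev, r = r, rem
        w_prev, w = w, w_prev + q * w     # W[i] = W[i-2] + Q[i]*W[i-1]
        x_prev, x = x, x_prev + q * x     # X[i] = X[i-2] + Q[i]*X[i-1]
    # r_prev is now gcd(M, N); w and x hold the terminal W and X values
    return r_prev, w, x

print(euclid_sequences(333, 220))   # -> (1, 333, 220)
```

For the 333/220 pair mentioned earlier in the thread, the gcd is 1, the W sequence terminates at M/gcd = 333, and the X sequence at N/gcd = 220.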
On May 20, 10:36 am, dvsarwate <dvsarw...@yahoo.com> wrote:
> On May 18, 11:09 pm, robert bristow-johnson
> <r...@audioimagination.com> asked:
>
> > > 333/220 and use euclids algorithm?
> >
> > what's euclids algorithm?
>
> More accurately, it could be called Bezout's theorem.
> Given any two positive integers M and N, Euclid's
> algorithm for the greatest common divisor (gcd)
> can be used to compute gcd(M,N). Bezout's
> theorem (which I have known about for over forty
> years in the context of error-control coding but only
> recently learned is named after somebody called
> Bezout) says that there exist integers a and b
> such that
>
>    aM + bN = gcd(M,N)
>
> Note that a and b usually have opposite signs.
> Furthermore, there exist integers c and d such
> that
>
>    cM + dN = 0
>
> The application to DSP is obvious, I hope.
> c and d tell you how to upsample a stream
> at one rate and downsample at a different
> rate to effect a rate conversion.
>
> These ideas are related to what is sometimes
> called the extended Euclidean algorithm.
> Starting from M and N, the Euclidean algorithm
> computes a _sequence_ of *remainders* ending
> in 0; the penultimate remainder is gcd(M,N).
> The *extended* Euclidean algorithm computes
> pairs of numbers (ai, bi) such that
>
>    (ai)M + (bi)N = i-th remainder
>
> and the a, b, c, d used above are just the last
> two of these number pairs. Getting back to
> things of interest to comp.dsp, the extended
> Euclidean algorithm for *polynomials* is
> used for decoding BCH and Reed-Solomon
> codes, with one of the bi's being the error-locator
> polynomial and the i-th remainder being the
> error-evaluator polynomial.
>
> Hope this helps
>
> Dilip Sarwate
You forgot to mention Diophantine equations, which is what these integer equations are.
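The Bezout identities quoted above can be sketched with the signed form of the extended Euclidean algorithm. A small Python illustration (the 48000/44100 rate pair is my own example, not from the thread):

```python
def bezout(M, N):
    # Signed extended Euclid: returns (g, a, b) with a*M + b*N == g == gcd(M, N).
    # Invariant: r_prev == a_prev*M + b_prev*N and r == a*M + b*N throughout.
    r_prev, r = M, N
    a_prev, a = 1, 0
    b_prev, b = 0, 1
    while r != 0:
        q, rem = divmod(r_prev, r)
        r_prev, r = r, rem
        a_prev, a = a, a_prev - q * a
        b_prev, b = b, b_prev - q * b
    return r_prev, a_prev, b_prev

M, N = 48000, 44100
g, a, b = bezout(M, N)
c, d = N // g, -(M // g)                  # c*M + d*N == 0
print(g, a * M + b * N, c * M + d * N)    # -> 300 300 0
```

Here c = N/g = 147 and |d| = M/g = 160, which is the familiar recipe for 48 kHz to 44.1 kHz conversion: upsample by 147, downsample by 160.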
On May 19, 8:27 pm, spop...@speedymail.org (Steve Pope) wrote:

> For the algebraic decoding case, if you are doing the extended Euclidean
> algorithm you must store four polynomial values (as you note above)
> between Euclidean iterations.
>
> Whereas if you are doing the plain Euclidean algorithm, you need
> only store two polynomial values during the Euclidean iteration.
> After the iteration you can discard one of these values, and
> use that storage element for performing a long division.
Yes, for the simple Euclidean algorithm, it is necessary to store only two remainders. It is only if you need the a[i]'s and/or b[i]'s that it is necessary to either compute them iteratively, storing only two values for each, as the algorithm proceeds, or one can go through the algorithm to the end without even thinking about the a[i] and b[i], storing *all* the remainders, and then work backwards (as you noted in your original response to my comments) so that one can ultimately write aM + bN = gcd(M,N) or cM + dN = 0.

I don't know what you mean by a free division, but it seems to be free whichever option (compute a and b as you go, or do it at the end) one chooses.

Dilip Sarwate
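The second option described above (run plain Euclid to the end, then work backwards) can be sketched as follows. In this toy version the stored quotient list stands in for the stored remainders, since back-substitution only needs the quotients; names are mine, not from the thread:

```python
def bezout_backward(M, N):
    # Forward pass: plain Euclid, keeping every quotient.
    qs = []
    r_prev, r = M, N
    while r != 0:
        q, rem = divmod(r_prev, r)
        qs.append(q)
        r_prev, r = r, rem
    g = r_prev                      # last nonzero remainder is gcd(M, N)
    # Backward pass: maintain g == x*r[i] + y*r[i+1] while unwinding
    # r[i+1] = r[i-1] - q[i]*r[i], from the final division back to (M, N).
    x, y = 1, 0
    for q in reversed(qs):
        x, y = y, x - q * y
    return g, x, y                  # g == x*M + y*N

print(bezout_backward(48, 18))      # -> (6, -1, 3): -1*48 + 3*18 == 6
```

Note that the quotient list grows with the number of Euclidean steps, which illustrates the "unpredictably variable" storage mentioned earlier, in contrast to the constant storage of the iterative option.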
dvsarwate  <dvsarwate@yahoo.com> wrote:

>On May 19, 8:27 pm, spop...@speedymail.org (Steve Pope) wrote:
>> For the algebraic decoding case, if you are doing the extended Euclidean
>> algorithm you must store four polynomial values (as you note above)
>> between Euclidean iterations.
>>
>> Whereas if you are doing the plain Euclidean algorithm, you need
>> only store two polynomial values during the Euclidean iteration.
>> After the iteration you can discard one of these values, and
>> use that storage element for performing a long division.
>
>Yes, for the simple Euclidean algorithm, it is necessary
>to store only two remainders. It is only if you need the
>a[i]'s and/or b[i]'s that it is necessary to either compute
>them iteratively, storing only two values for each, as
>the algorithm proceeds, or one can go through the
>algorithm to the end without even thinking about the
>a[i] and b[i], storing *all* the remainders, and then
>work backwards (as you noted in your original
>response to my comments) so that one can
>ultimately write aM + bN = gcd(M,N) or
>cM + dN = 0. I don't know what you mean by
>a free division, but it seems to be free whichever
>option (compute a and b as you go, or do it at
>the end) one chooses.
Thanks. Yes, I'll retract what I previously said; there is more storage required. (Although I'm still thinking about if there's a way to possibly avoid that.) Steve
Steve Pope <spope33@speedymail.org> wrote:

>Thanks. Yes, I'll retract what I previously said; there is
>more storage required. (Although I'm still thinking about
>if there's a way to possibly avoid that.)
I think it can be done, as follows. This is a sketch, not a proof, so it is maybe not correct.

Recall that I said that in the extended Euclidean algorithm to find the GCD of M and N, W[n] = M/gcd(M,N). Tangentially, W[] is called the auxiliary polynomial during the iteration, and at the end of the iteration it is the error locator.

For the algebraic decoding case, M = x^r and N is the syndrome in some form (such as a power sum or Forney polynomial). Whereas the "GCD" that you have found is the error evaluator, which I will call N. So by analogy, x^r / N should be W. But is it?

Let's look at the two-error case. N has the form of a Lagrangian:

   N(x) = v1(x + w2) + v2(x + w1)        (two-error case)

For the three-error case it would be:

   N(x) = v1(x + w2)(x + w3)
        + v2(x + w1)(x + w3)
        + v3(x + w1)(x + w2)

where v1, v2... are the error values and w1, w2... are the locations.

For the two-error case, x^r / N equals

      x^r * (x + w1) * (x + w2)
   -----------------------------
   v1 * (x + w2) + v2 * (x + w1)

I claim this is an error locator polynomial because the numerator has a zero at every error location whereas the denominator is non-zero at each such location. And that this remains true for the three- or higher-error cases as well.

So I think the answer is: you do not need to save the entire series of remainders to use the plain Euclid algorithm without the auxiliary polynomial, since you can recover the error locator in this way by doing a final division (and maybe lopping off an additional factor of x^(some constant)...).

Like I said, I am not sure this is airtight, but at the moment it looks right.

Steve
Steve Pope <spope33@speedymail.org> wrote:

>Steve Pope <spope33@speedymail.org> wrote:
>>Thanks. Yes, I'll retract what I previously said; there is
>>more storage required. (Although I'm still thinking about
>>if there's a way to possibly avoid that.)
>I think it can be done, as follows. This is a sketch not a proof
>so is maybe not correct.
[..]

There are a couple of problems in what I had written in the above, so I again must think about this more before coming up with the formulation I am looking for.

Steve
>>Steve Pope <spope33@speedymail.org> wrote:
>
>>>Thanks. Yes, I'll retract what I previously said; there is
>>>more storage required. (Although I'm still thinking about
>>>if there's a way to possibly avoid that.)
I have played with this problem a little more and now believe I actually have a solution. (Yes, I've said this before, but am now more certain.)

My goal here is to use a plain (not extended) Euclidean algorithm for the algebraic decoding case without storing the entire series of remainders and working backwards to obtain the error locator.

1) Initialize the Euclidean algorithm with x^r and S (the syndrome polynomial). But save S for later use.

2) With the Euclidean algorithm obtain N, the error evaluator.

3) The following now is true:

   N = S * W mod x^r

where W is the error locator. This equation can be solved for W (e.g. by substitution).

So this version requires storing just three polynomials: S, and the two working polynomials used during the Euclidean iteration.

Steve
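The "solve by substitution" in step 3 can be made concrete. Working over GF(7) for simplicity (real BCH/RS decoders work in GF(2^m), and all polynomials and names below are my own toy data, not from the thread), the coefficients of W fall out one at a time whenever S has a nonzero constant term:

```python
P = 7   # toy prime field GF(7)

def polmulmod(a, b, r, p=P):
    # (a * b) mod x^r over GF(p); polynomials are low-order-first coefficient lists
    out = [0] * r
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < r:
                out[i + j] = (out[i + j] + ai * bj) % p
    return out

def solve_key_equation(S, N, t, p=P):
    # Given N == S * W mod x^r with deg(W) <= t and S[0] != 0, recover W
    # coefficient by coefficient from
    #   N[k] = sum_{j<=k} S[k-j] * W[j]
    inv_s0 = pow(S[0], p - 2, p)          # inverse of S[0] in GF(p), Fermat
    W = []
    for k in range(t + 1):
        acc = sum(S[k - j] * W[j] for j in range(k) if k - j < len(S))
        nk = N[k] if k < len(N) else 0
        W.append(((nk - acc) * inv_s0) % p)
    return W

# round trip: pick a W and S, form N = S*W mod x^r, then recover W
W_true = [1, 3, 5]            # hypothetical error locator, degree 2
S = [2, 4, 1, 6, 0, 5]        # hypothetical syndrome, S[0] != 0
N = polmulmod(S, W_true, r=6)
print(solve_key_equation(S, N, t=2))   # -> [1, 3, 5]
```

This only shows that the equation is solvable coefficient-by-coefficient given the degree of W; a full decoder would still have to determine the number of errors and the field arithmetic as usual.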
On Sunday, May 20, 2012 4:14:09 PM UTC-4, Steve Pope wrote:

Much material snipped....

> This equation can be solved for W
> (e.g. by substitution).
Congratulations! You have developed a better algorithm for BCH and Reed-Solomon decoding than anyone else thus far.

--Dilip Sarwate
dvsarwate <dvsarwate@yahoo.com> writes:

> On Sunday, May 20, 2012 4:14:09 PM UTC-4, Steve Pope wrote:
>
> Much material snipped....
>
>> This equation can be solved for W
>> (e.g. by substitution).
>
> Congratulations! You have developed a
> better algorithm for BCH and Reed-Solomon
> decoding than anyone else thus far.
Are you serious, Dilip, or is your tongue in your cheek?
--
Randy Yates
Digital Signal Labs
http://www.digitalsignallabs.com
On Monday, May 21, 2012 11:55:25 AM UTC-4, Randy Yates asked:

> Are you serious, Dilip, or is your tongue in your cheek?
Randy:

No, my tongue was not in my cheek when I wrote my response. The idea that Steve has put forward never occurred to me, nor, as far as I know, to anyone else who has studied the problem. But, many times, a great advance occurs when someone not fully embedded in the field takes a new look at an old problem; there is much to be said for fresh eyes.

So, I hope that Steve will write up the details of his idea (I, for one, don't fully understand how the recovery of W from S will work in practice, as opposed to the rather vague "e.g. by substitution"), though I suspect it might just go into a patent application first before it is revealed to comp.dsp, presented at a conference, or published as a journal article. So, I will await more details at some future date.

Dilip Sarwate