Reply by Albert van der Horst May 31, 2012
In article <Czdvr.33818$_l.8495@newsfe15.iad>,
Jerry Avins  <jya@ieee.org> wrote:
>On 5/23/2012 9:40 AM, Andre wrote:
>> On 19.05.2012 02:46, HardySpicer wrote:
>>> My A/D samples at 33.33kHz and after processing I am using the sound
>>> card to output the audio result. The sound card only accepts 44.1kHz,
>>> 22.05kHz etc. I am thinking of going 33.1 to 20.05. This doesn't have
>>> to be spot on, just approx since it is only for listening to. What is
>>> the best way - say
>>>
>>> 333/220 and use Euclid's algorithm?
>>>
>> 333 / 222 is probably much easier...
>
>Is slide-rule accuracy good enough?
In this case we are talking about being roughly a Pythagorean comma off
in absolute pitch. Unless you have absolute pitch, this may be good
enough. You're a Forth programmer, eh? Always changing the problem to
make it easier.
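For reference, the arithmetic (assuming the 2/3 ratio from 333/222 is
what gets used): resampling 33.33 kHz material by 2/3 gives data at a
nominal 22.22 kHz, and playing that back at 22.05 kHz scales every
frequency by 22.05/22.22, about 0.9924, i.e. 1200*log2(0.9924) or
roughly 13 cents flat. A full Pythagorean comma is about 23.5 cents,
so the shift is on the order of half a comma.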
>
>Jerry
Groetjes Albert

--
Albert van der Horst, UTRECHT, THE NETHERLANDS
Economic growth -- being exponential -- ultimately falters.
albert@spe&ar&c.xs4all.nl &=n
http://home.hccnet.nl/a.w.m.van.der.horst
Reply by Jerry Avins May 26, 2012
On 5/23/2012 7:09 PM, dvsarwate wrote:
> On May 23, 6:32 pm, Jerry Avins<j...@ieee.org> wrote:
>
>> >>> 333 / 222 is probably much easier...
>>
>> Is slide-rule accuracy good enough?
>
> Jerry:
>
> Though I still have my slide rule, I don't think I can
> work 333/222 on it. On the other hand, many people
> couldn't work 333/222 in their heads or with paper and
> pencil either.
Dilip,

By "slide-rule accuracy" I meant "approximately three decimal places of
precision". I too can figure in my head that (1/3)/(2/9) is 3/2. :-)

Jerry

--
Engineering is the art of making what you want from things you can get.
Reply by mnentwig May 25, 2012
Hi,

I didn't follow the whole discussion. Maybe there are already 99 solutions
- let's make it a hundred.

This piece of code
http://www.dsprelated.com/showcode/3.php
will do the job if you set "rate" accordingly.
You can find an open source C implementation in fluidsynth on sourceforge.

The quality is (IMHO) surprisingly good for such a simple polyphase filter,
but it won't be up to the job for professional audio.

BTW, there is a reference on audio interpolation that might be worth
reading.
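In case the link goes stale, here is a minimal sketch of the general
idea (my own toy version, not the linked code, which uses a real
polyphase filter): step through the input at a fractional increment set
by "rate" and interpolate between neighbouring samples. A production
design replaces the two-point interpolation with a proper
anti-imaging/anti-aliasing filter bank.

    import numpy as np

    def resample_linear(x, rate):
        """Resample x by the fs_out/fs_in ratio 'rate', linear interp."""
        n_out = int(len(x) * rate)
        pos = np.arange(n_out) / rate             # fractional read positions
        idx = np.minimum(pos.astype(int), len(x) - 2)
        frac = pos - idx                          # distance past sample idx
        return (1.0 - frac) * x[idx] + frac * x[idx + 1]

    # e.g. 33.33 kHz -> 22.05 kHz:  y = resample_linear(x, 22050 / 33330)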
Reply by dvsarwate May 23, 2012
On May 23, 6:32 pm, Jerry Avins <j...@ieee.org> wrote:

> > > 333 / 222 is probably much easier...
> >
> > Is slide-rule accuracy good enough?
Jerry:

Though I still have my slide rule, I don't think I can
work 333/222 on it. On the other hand, many people
couldn't work 333/222 in their heads or with paper and
pencil either.

Dilip
Reply by Jerry Avins May 23, 2012
On 5/23/2012 9:40 AM, Andre wrote:
> On 19.05.2012 02:46, HardySpicer wrote:
>> My A/D samples at 33.33kHz and after processing I am using the sound
>> card to output the audio result. The sound card only accepts 44.1kHz,
>> 22.05kHz etc. I am thinking of going 33.1 to 20.05. This doesn't have
>> to be spot on, just approx since it is only for listening to. What is
>> the best way - say
>>
>> 333/220 and use Euclid's algorithm?
>>
> 333 / 222 is probably much easier...
Is slide-rule accuracy good enough?

Jerry

--
Engineering is the art of making what you want from things you can get.
Reply by Andre May 23, 2012
On 19.05.2012 02:46, HardySpicer wrote:
> My A/D samples at 33.33kHz and after processing I am using the sound
> card to output the audio result. The sound card only accepts 44.1kHz,
> 22.05kHz etc. I am thinking of going 33.1 to 20.05. This doesn't have
> to be spot on, just approx since it is only for listening to. What is
> the best way - say
>
> 333/220 and use Euclid's algorithm?
>
333 / 222 is probably much easier...
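As a sketch of what that simplification buys (my own example, assuming
SciPy is available; this is not anyone's posted code): treat the
output/input ratio as 222/333, reduce it with Euclid's algorithm, and
hand the reduced factors to a polyphase resampler.

    from math import gcd
    import numpy as np
    from scipy.signal import resample_poly

    fs_in, fs_out = 33330, 22050     # the exact rates
    up, down = 222, 333              # approximate ratio fs_out/fs_in
    g = gcd(up, down)                # Euclid's algorithm gives g = 111
    up, down = up // g, down // g    # 222/333 reduces to 2/3

    x = np.random.randn(fs_in)       # one second of dummy 33.33 kHz audio
    y = resample_poly(x, up, down)   # upsample by 2, filter, decimate by 3

The exact ratio 2205/3333 only reduces to 735/1111, which is why the
approximate 2/3 is so attractive; the price is the small pitch error
discussed above.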
Reply by Eric Jacobsen May 22, 2012
On Tue, 22 May 2012 18:32:57 +0000 (UTC), spope33@speedymail.org
(Steve Pope) wrote:

>Eric Jacobsen <eric.jacobsen@ieee.org> wrote:
>
>>(Steve Pope) wrote:
>
>>>I see several problems besides the one you mention.
>>>This algorithm might not detect undecodable cases properly.
>>>And there may be no way to adapt it to process erasures.
>>
>>For systematic codes it may not matter that much that it can't detect
>>decoding failures in some applications. For RS codes the SNR
>>"window" over which they work well is pretty small in my experience,
>>and they just start mis-decoding to aliased "valid" codewords below a
>>particular SNR threshold rather more than they find "undecodable"
>>cases. An additional CRC or something is often needed because of
>>this, so even if the decoder can't indicate that there was a failure
>>there are often other mechanisms in place to handle that, anyway.
>>
>>Likewise erasure processing, again just in my experience, is most
>>useful for puncturing parity symbols to adjust the overall code rate.
>>That isn't always necessary, so a good decoder that can't do erasures
>>can still be very useful.
>
>Whether erasures and undecodables are useful is scenario-dependent.
>For example, the RS product codes used on CDs and DVDs make
>serious use of both these features in order to perform.
>
>Also any system that is doing iterative decoding around an RS code
>is probably using erasures usefully. In the first section of Heegard
>and Wicker's text on turbo codes, there is a historical description
>of how turbo coding developed from attempts to iteratively decode
>concatenated RS/convolutional codes.
Not sure what's in Chris' book (as I don't own a copy), but Claude
Berrou has described how they latched onto the idea, proposed by
Hagenauer, that a SISO decoder is an SNR amplifier, and that this
inspired them to put feedback around it as you would with an
amplifier. He tells it here, much as he did at the early turbo-code
conferences:

http://backup.itsoc.org/publications/nltr/98_jun/reflections.html

But that was all predated by Gallager inventing LDPCs in 1960. ;)
>Also, there are some standards that include impulse noise models that
>require erasure decoding to meet the stated performance. (Whether
>those models are realistic is another question.)
>
>But I agree with the basic statement that some (many) systems have no
>use for erasure decoding.
Yeah, that's all I meant, that there are still plenty of applications
left even if those features are missing.

Eric Jacobsen
Anchor Hill Communications
www.anchorhill.com
Reply by Steve Pope May 22, 2012
Eric Jacobsen <eric.jacobsen@ieee.org> wrote:

>(Steve Pope) wrote:
>>I see several problems besides the one you mention.
>>This algorithm might not detect undecodable cases properly.
>>And there may be no way to adapt it to process erasures.
>
>For systematic codes it may not matter that much that it can't detect
>decoding failures in some applications. For RS codes the SNR
>"window" over which they work well is pretty small in my experience,
>and they just start mis-decoding to aliased "valid" codewords below a
>particular SNR threshold rather more than they find "undecodable"
>cases. An additional CRC or something is often needed because of
>this, so even if the decoder can't indicate that there was a failure
>there are often other mechanisms in place to handle that, anyway.
>
>Likewise erasure processing, again just in my experience, is most
>useful for puncturing parity symbols to adjust the overall code rate.
>That isn't always necessary, so a good decoder that can't do erasures
>can still be very useful.
Whether erasures and undecodables are useful is scenario-dependent.
For example, the RS product codes used on CDs and DVDs make serious
use of both these features in order to perform.

Also any system that is doing iterative decoding around an RS code is
probably using erasures usefully. In the first section of Heegard and
Wicker's text on turbo codes, there is a historical description of
how turbo coding developed from attempts to iteratively decode
concatenated RS/convolutional codes.

Also, there are some standards that include impulse noise models that
require erasure decoding to meet the stated performance. (Whether
those models are realistic is another question.)

But I agree with the basic statement that some (many) systems have no
use for erasure decoding.

Steve
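For anyone skimming: the textbook budget behind the CD/DVD example is
that an (n, k) Reed-Solomon code decodes any pattern of e errors and f
erasures as long as 2e + f <= n - k. A trivial illustration of that
bound (my own sketch, using the CD C1 inner code RS(32, 28)):

    def rs_decodable(n, k, errors, erasures):
        """Standard error/erasure bound for an (n, k) Reed-Solomon code."""
        return 2 * errors + erasures <= n - k

    # CD C1 inner code, RS(32, 28): n - k = 4 redundant symbols.
    assert rs_decodable(32, 28, errors=2, erasures=0)      # 2 errors, or
    assert rs_decodable(32, 28, errors=0, erasures=4)      # 4 erasures
    assert not rs_decodable(32, 28, errors=2, erasures=1)  # but not both

Each erasure costs half of what an error costs, which is exactly the
leverage the product-code structure exploits.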
Reply by HardySpicer May 21, 2012
On May 22, 8:09 am, spop...@speedymail.org (Steve Pope) wrote:
> dvsarwate <dvsarw...@yahoo.com> wrote:
>
> >On Monday, May 21, 2012 11:55:25 AM UTC-4, Randy Yates asked:
> >
> >> Are you serious, Dilip, or is your tongue in your cheek?
> >
> >Randy:
> >
> >No, my tongue was not in my cheek when I
> >wrote my response.
> >
> >The idea that Steve has put forward never
> >occurred to me, nor, as far as I know, to
> >anyone else who has studied the problem.
> >But, many times, a great advance occurs when
> >someone not fully embedded in the field
> >takes a new look at an old problem; there is
> >much to be said for fresh eyes. So, I hope
> >that Steve will write up the details of his
> >idea (I, for one, don't fully understand
> >how the recovery of W from S will work in
> >practice as opposed to the quite vague
> >reference as in "e.g. by substitution")
> >though I suspect it might just go into a
> >patent application first before it is
> >revealed to comp.dsp or presented at a
> >conference or published as a journal
> >article. So, I will await more details at
> >some future date.
>
> I do plan to work on it further, and if it is workable
> will code up a decoder in C++, and will let you know
> the results. If I obtain results.
>
> I see several problems besides the one you mention.
> This algorithm might not detect undecodable cases properly.
> And there may be no way to adapt it to process erasures.
>
> I think I would stop short of calling this a "great advance". :-)
>
> Steve
Try Electronics Letters.

Hardy
Reply by Eric Jacobsen May 21, 2012
On Mon, 21 May 2012 20:09:05 +0000 (UTC), spope33@speedymail.org
(Steve Pope) wrote:

>dvsarwate <dvsarwate@yahoo.com> wrote:
>
>>On Monday, May 21, 2012 11:55:25 AM UTC-4, Randy Yates asked:
>>
>>> Are you serious, Dilip, or is your tongue in your cheek?
>>
>>Randy:
>>
>>No, my tongue was not in my cheek when I
>>wrote my response.
>>
>>The idea that Steve has put forward never
>>occurred to me, nor, as far as I know, to
>>anyone else who has studied the problem.
>>But, many times, a great advance occurs when
>>someone not fully embedded in the field
>>takes a new look at an old problem; there is
>>much to be said for fresh eyes. So, I hope
>>that Steve will write up the details of his
>>idea (I, for one, don't fully understand
>>how the recovery of W from S will work in
>>practice as opposed to the quite vague
>>reference as in "e.g. by substitution")
>>though I suspect it might just go into a
>>patent application first before it is
>>revealed to comp.dsp or presented at a
>>conference or published as a journal
>>article. So, I will await more details at
>>some future date.
>
>I do plan to work on it further, and if it is workable
>will code up a decoder in C++, and will let you know
>the results. If I obtain results.
>
>I see several problems besides the one you mention.
>This algorithm might not detect undecodable cases properly.
>And there may be no way to adapt it to process erasures.
For systematic codes it may not matter that much that it can't detect
decoding failures in some applications. For RS codes the SNR "window"
over which they work well is pretty small in my experience, and they
just start mis-decoding to aliased "valid" codewords below a
particular SNR threshold rather more than they find "undecodable"
cases. An additional CRC or something is often needed because of
this, so even if the decoder can't indicate that there was a failure
there are often other mechanisms in place to handle that, anyway.

Likewise erasure processing, again just in my experience, is most
useful for puncturing parity symbols to adjust the overall code rate.
That isn't always necessary, so a good decoder that can't do erasures
can still be very useful.
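As a hedged sketch of that "additional CRC" pattern (my own
illustration; the RS encode/decode itself is assumed to live
elsewhere): put a CRC inside the RS payload, so a mis-decode to a
wrong-but-valid codeword is still caught downstream.

    import struct
    import zlib

    def add_crc(payload: bytes) -> bytes:
        """Append a 32-bit CRC to the payload before RS encoding."""
        return payload + struct.pack(">I", zlib.crc32(payload))

    def check_crc(decoded: bytes) -> bytes:
        """After RS decoding: verify the CRC to catch silent mis-decodes."""
        payload, tag = decoded[:-4], decoded[-4:]
        if struct.pack(">I", zlib.crc32(payload)) != tag:
            raise ValueError("CRC mismatch: the RS decoder mis-decoded")
        return payload

The CRC adds four bytes of overhead per codeword, which is usually
cheap next to the RS parity itself.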
>I think I would stop short of calling this a "great advance". :-)
Let us know what you figure out regardless. ;)

Eric Jacobsen
Anchor Hill Communications
www.anchorhill.com