DSPRelated.com
Forums

The $10000 Hi-Fi

Started by Unknown May 3, 2015
Eric Jacobsen <eric.jacobsen@ieee.org> wrote:

>On Thu, 7 May 2015 17:12:37 +0000 (UTC), spope33@speedymail.org (Steve
>Pope) wrote:
>>>On 5/7/2015 12:51 PM, Steve Pope wrote:
>>>> Within a digital algorithm that includes feedback, switching from
>>>> undithered truncation to undithered rounding can make a large (generally
>>>> beneficial) difference in behavior, or even stability.
>>This has been in my DNA for so long that
>>I can't remember where/if I encountered it in the literature.
>>Possibly in very standard DSP textbooks, but I'll have to
>>do some browsing to find it.
>Truncation is biased, and feeding back truncated signals can lead to a
>growing DC offset. I'm always very careful in comm systems to make
>sure that accumulated pipeline stages get rounded whenever possible,
>since even a small DC bias can cause substantial performance
>degradation.
>Not sure if that's what you're thinking about, but forgetting to round
>in some stage buried somewhere has caused a lot of grief for this
>reason.
That is what I'm thinking about. Although, I am also thinking truncation
means "truncation towards negative infinity", not "truncation towards
zero" (as the latter is not a simple discarding of bits of a two's
complement value).

Steve
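A toy model (ours, not from the thread) makes the DC-offset mechanism concrete: a first-order decay `y[n] = (255/256)·y[n-1]` computed in integer arithmetic, with the feedback value requantized either by truncation toward negative infinity (a plain bit-discard) or by round-to-nearest. Truncation leaves the loop stuck with roughly twice the residual DC offset of rounding.

```python
# Sketch of truncation bias in a feedback loop (illustrative only).
# A leaky integrator decaying from y0 = -1000; the 8 low bits of the
# feedback product are dropped each iteration, two different ways.

def settle(requant, y0=-1000, steps=5000):
    y = y0
    for _ in range(steps):
        y = requant(255 * y)         # multiply by 255/256, drop 8 LSBs
    return y

trunc = lambda acc: acc >> 8         # floor: plain discard of low bits
rnd   = lambda acc: (acc + 128) >> 8 # add half an LSB first, then floor

print(settle(trunc))   # -255 : loop freezes with a large negative offset
print(settle(rnd))     # -127 : about half the offset
```

The deadband where the loop freezes is one-sided (all negative) for truncation but roughly symmetric about zero for rounding, which is why the truncated loop's bias is the larger of the two.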
Steve Pope <spope33@speedymail.org> wrote:
> Eric Jacobsen <eric.jacobsen@ieee.org> wrote:
(snip)
>>Truncation is biased, and feeding back truncated signals can lead to a
>>growing DC offset. I'm always very careful in comm systems to make
>>sure that accumulated pipeline stages get rounded whenever possible,
>>since even a small DC bias can cause substantial performance
>>degradation.
>>Not sure if that's what you're thinking about, but forgetting to round
>>in some stage buried somewhere has caused a lot of grief for this
>>reason.
> That is what I'm thinking about. Although, I am also thinking
> truncation means "truncation towards negative infinity",
> not "truncation towards zero" (as the latter is not a
> simple discarding of bits of a two's complement value).
But on most processors integer divide truncates toward zero.

-- glen
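The distinction the two Steves and glen are drawing can be shown in a few lines (our illustration; Python's `>>` is an arithmetic shift and `int()` truncates toward zero, matching the two behaviors being contrasted):

```python
# Two meanings of "truncation" for a two's-complement value:
#  - toward -inf : arithmetic right shift / floor, a plain bit-discard
#  - toward zero : what C-style integer divide does on most processors
x = -5
n = 1

print(x >> n)         # -3 : floor(-2.5), just dropping the low bit
print(int(x / 2))     # -2 : truncation toward zero, as in C's  x / 2

# Getting truncation-toward-zero out of shift hardware needs a
# sign-dependent fixup (add 2**n - 1 before shifting when negative),
# which is the extra logic that makes it awkward to hand to RTL:
tz = (x + (1 << n) - 1) >> n if x < 0 else x >> n
print(tz)             # -2
```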
glen herrmannsfeldt  <gah@ugcs.caltech.edu> wrote:

>Steve Pope <spope33@speedymail.org> wrote:
>> That is what I'm thinking about. Although, I am also thinking
>> truncation means "truncation towards negative infinity",
>> not "truncation towards zero" (as the latter is not a
>> simple discarding of bits of a two's complement value).
>But on most processors integer divide truncates toward zero.
Possibly a rather inconvenient thing to hand off to an RTL designer.

Steve
On Thu, 7 May 2015 20:22:09 +0000 (UTC), glen herrmannsfeldt
<gah@ugcs.caltech.edu> wrote:

>Steve Pope <spope33@speedymail.org> wrote:
>> Eric Jacobsen <eric.jacobsen@ieee.org> wrote:
>
>(snip)
>>>Truncation is biased, and feeding back truncated signals can lead to a
>>>growing DC offset. I'm always very careful in comm systems to make
>>>sure that accumulated pipeline stages get rounded whenever possible,
>>>since even a small DC bias can cause substantial performance
>>>degradation.
>
>>>Not sure if that's what you're thinking about, but forgetting to round
>>>in some stage buried somewhere has caused a lot of grief for this
>>>reason.
>
>> That is what I'm thinking about. Although, I am also thinking
>> truncation means "truncation towards negative infinity",
>> not "truncation towards zero" (as the latter is not a
>> simple discarding of bits of a two's complement value).
>
>But on most processors integer divide truncates toward zero.
>
>-- glen
Multiply-accumulate stages, or even just accumulation stages, often have
to reduce the calculated precision when passing a result to the next
stage. This is often true in hardware and many times in software as
well. The reduction in precision requires a decision on whether to
truncate or round, and possibly on whether to clip for saturation as
well. In hardware the clip or rounding requires additional complexity,
so it is sometimes left out or forgotten in an attempt to make things
fit in the available resources.

It can pay big dividends in both performance and debug/verification
time to be judicious with clip-and-round stages in hardware.

Eric Jacobsen
Anchor Hill Communications
http://www.anchorhill.com
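A minimal sketch of such a clip-and-round stage (the function name and parameters are ours, not from any particular library): round off the low bits of a wide accumulator, then saturate to a narrower signed output word instead of letting it wrap.

```python
# Reduce a full-precision accumulator to a narrower signed word:
# round off `drop_bits` LSBs, then saturate to a signed `out_bits` range.

def clip_and_round(acc, drop_bits, out_bits):
    y = (acc + (1 << (drop_bits - 1))) >> drop_bits   # round to nearest
    hi = (1 << (out_bits - 1)) - 1                    # e.g. +32767 for 16 bits
    lo = -(1 << (out_bits - 1))                       # e.g. -32768
    return max(lo, min(hi, y))                        # saturate, don't wrap

print(clip_and_round(1_000_000, 8, 16))    # 3906  : rounded, in range
print(clip_and_round(99_999_999, 8, 16))   # 32767 : clipped at +full scale
```

In RTL the same operation is an adder for the rounding constant plus a comparator/mux for the clip, which is exactly the "additional complexity" that tends to get dropped when resources are tight.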
Eric Jacobsen <eric.jacobsen@ieee.org> wrote:

>On Thu, 7 May 2015 20:22:09 +0000 (UTC), glen herrmannsfeldt
><gah@ugcs.caltech.edu> wrote:
>>Steve Pope <spope33@speedymail.org> wrote:
>>> That is what I'm thinking about. Although, I am also thinking
>>> truncation means "truncation towards negative infinity",
>>> not "truncation towards zero" (as the latter is not a
>>> simple discarding of bits of a two's complement value).
>>But on most processors integer divide truncates toward zero.
>Multiply-accumulate stages, or even just accumulation stages, often
>have to reduce the calculated precision when passing a result to the
>next stage. This is often true in hardware and many times in
>software as well. The reduction in precision requires a decision in
>whether to truncate or round, and possibly on whether to clip for
>saturation as well. In hardware the clip or rounding requires
>additional complexity, so it is sometimes left out or forgotten in an
>attempt to make things fit in the available resources.
>It can pay big dividends in both performance and debug/verification
>time to be judicious with clip-and-round stages in hardware.
I agree. And, this is one reason why I believe fixed-point standards
are valuable, and in particular, IEEE 1666, otherwise known as the
SystemC fixed-point types.

Within this standard, the three most useful overflow modes are two's
complement saturation, symmetric saturation, and two's complement
overflow, all of which have straightforward implementations. Similarly,
the two most useful quantization modes are two's complement rounding,
and truncation towards negative infinity. Truncation towards zero is
also in the standard.

Once a team of system and RTL designers has created templates and/or
functions implementing the above (modest, in my view) collection of
fixed-point features, you are basically done in the sense of having
avoided mis-communication or a mismatch of quantization and overflow
logic. And a new designer can quite quickly pick up what you're doing
with respect to these design details, since they have a standard to go
by.

None of the above covers dithering or noise shaping, however.

Steve
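The modes Steve lists can be sketched in a few lines. This is our Python illustration, loosely labeled with the IEEE 1666 `sc_fixed` mode names (`SC_TRN`, `SC_TRN_ZERO`, `SC_RND`; `SC_WRAP`, `SC_SAT`, `SC_SAT_SYM`), not the SystemC API itself:

```python
# Quantization: reduce precision by `drop` bits, three ways.
def quantize(acc, drop, mode):
    if mode == "SC_TRN":        # truncate toward -inf: just discard bits
        return acc >> drop
    if mode == "SC_TRN_ZERO":   # truncate toward zero
        return -((-acc) >> drop) if acc < 0 else acc >> drop
    if mode == "SC_RND":        # round to nearest, ties toward +inf
        return (acc + (1 << (drop - 1))) >> drop
    raise ValueError(mode)

# Overflow: fit a value into a signed `bits`-wide word, three ways.
def overflow(y, bits, mode):
    hi, lo = (1 << (bits - 1)) - 1, -(1 << (bits - 1))
    if mode == "SC_WRAP":       # two's-complement wraparound
        return ((y - lo) % (1 << bits)) + lo
    if mode == "SC_SAT":        # clip to [lo, hi]
        return max(lo, min(hi, y))
    if mode == "SC_SAT_SYM":    # clip to [-hi, hi]: no negative full scale
        return max(-hi, min(hi, y))
    raise ValueError(mode)

print(quantize(-5, 1, "SC_TRN"), quantize(-5, 1, "SC_TRN_ZERO"))       # -3 -2
print(overflow(-32768, 16, "SC_SAT"), overflow(-32768, 16, "SC_SAT_SYM"))  # -32768 -32767
```

Symmetric saturation exists precisely because two's complement is lopsided: clipping at -32767 instead of -32768 keeps negation from overflowing.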
On 5/7/15 8:59 AM, Greg Berchin wrote:
> On Wed, 06 May 2015 18:30:20 -0400, robert bristow-johnson
> <rbj@audioimagination.com> wrote:
>
>> whether it's with an old Freescale 56K or some other processor, a lot of
>> internal arithmetic in audio algorithms is done at 24 bits. 32-bit
>> floats have a 25-bit mantissa.
>
> As I understand it, single-precision IEEE 754 floating point has 24 bit
> precision with 23 fractional bits explicitly stored. Were you referring
> to a different single-precision format?
>
hidden one *and* sign bit plus 23. even without the scaling of floating
point, an IEEE single-precision float can exactly represent 2^25
different signed integers. maybe it's 2^25 + 1 with zero.

got your note, Greg, and i'll get back to you soon.

L8r,

--

r b-j rbj@audioimagination.com

"Imagination is more important than knowledge."
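A quick check of the count (ours): binary32's 24-bit significand (23 stored plus the hidden bit) together with the sign covers every integer n with |n| <= 2^24, which is the 2^25 + 1 values rbj arrives at, and the first gap appears at 2^24 + 1:

```python
# Round-trip an integer through IEEE-754 binary32 and see where
# exactness ends: every |n| <= 2**24 survives, 2**24 + 1 does not.
import struct

def to_f32(x):
    return struct.unpack("f", struct.pack("f", x))[0]

print(to_f32(2**24))        # 16777216.0 : exact
print(to_f32(2**24 + 1))    # 16777216.0 : 16777217 rounds back down
print(to_f32(-(2**24)))     # -16777216.0 : exact
```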
On 5/6/15 9:42 PM, rickman wrote:
> On 5/6/2015 8:01 PM, Steve Pope wrote:
>> robert bristow-johnson <rbj@audioimagination.com> wrote:
>>
>>> On 5/6/15 9:23 AM, Steve Pope wrote:
>>
>>>> Not sure Sony was the first, but they had an early 16-bit
>>>> quantizer (I believe marketed as a "PCM unit", although that's
>>>> a misnomer) that piggybacked onto a VCR used as a data recorder.
>>>> This was around 1978.
>>
>>> i think it's the Sony F1. and they used betamax, not VHS.
>>
>> Yes, it had to be Betamax, which I had completely forgotten
>> existed.
>>
>>> and this is
>>> the reason why 44.1 kHz (or, more precisely, 44.056 kHz with the F1)
>>> became the CD sample rate standard. very icky. too bad they didn't go
>>> with 48 kHz.
>>
>> I had not realized the relationship there. Thanks.
>
> I don't think the fact that it was Beta vs. VHS had anything to do with
> it. I think the sample rate is linked to the TV rates which are the same
> for both. There were early digital recordings on VCRs and the CD sample
> rate was set to be compatible.
>
but the Sony F1 *was* a betamax device, as i recall. i realize that the
same video can be recorded on a VHS. i just dunno how the box was.

--

r b-j rbj@audioimagination.com

"Imagination is more important than knowledge."
robert bristow-johnson  <rbj@audioimagination.com> wrote:

>On 5/6/15 9:42 PM, rickman wrote:
>> I don't think the fact that it was Beta vs. VHS had anything to do with
>> it. I think the sample rate is linked to the TV rates which are the same
>> for both. There were early digital recordings on VCRs and the CD sample
>> rate was set to be compatible.
>but the Sony F1 *was* a betamax device, as i recall. i realize that the
>same video can be recorded on a VHS. i just dunno how the box was.
The scenario that some Sony engineers designed the F1 to fit in with a
Betamax machine, ending up with a 44.1 KHz sample rate for semi-random
reasons, and then Sony managers forced some hapless unrelated group of
Sony engineers supporting the CD Red Book standards effort to insist
upon a 44.1 KHz sample rate ... against their better judgement ...
entirely believable.

Steve
Steve Pope <spope33@speedymail.org> wrote:
> robert bristow-johnson <rbj@audioimagination.com> wrote:
>>On 5/6/15 9:42 PM, rickman wrote:
>>> I don't think the fact that it was Beta vs. VHS had anything to do with
>>> it. I think the sample rate is linked to the TV rates which are the same
>>> for both. There were early digital recordings on VCRs and the CD sample
>>> rate was set to be compatible.
>>but the Sony F1 *was* a betamax device, as i recall. i realize that the
>>same video can be recorded on a VHS. i just dunno how the box was.
> The scenario that some Sony engineers designed the F1 to fit
> in with a Betamax machine, ending up with a 44.1 KHz sample rate
> for semi-random reasons, and then Sony managers forced some
> hapless unrelated group of Sony engineers supporting the CD Red Book
> standards effort to insist upon a 44.1 KHz sample rate ...
> against their better judgement ... entirely believable.
I suppose, but there weren't many choices for CD master making back
then. Nine track tape is 150MB at 6250BPI, and none of the current high
density tape systems existed. People were actually starting to use
video tape as computer backup systems.

I don't know the exact numbers, but a block size and block rate that
allows for easy writing to NTSC video is pretty convenient.

-- glen
glen herrmannsfeldt  <gah@ugcs.caltech.edu> wrote:

>Steve Pope <spope33@speedymail.org> wrote:
>> robert bristow-johnson <rbj@audioimagination.com> wrote: > >>>On 5/6/15 9:42 PM, rickman wrote: > >>>> I don't think the fact that it was Beta vs. VHS had anything to do with >>>> it. I think the sample rate is linked to the TV rates which are the same >>>> for both. There were early digital recordings on VCRs and the CD sample >>>> rate was set to be compatible. > >>>but the Sony F1 *was* a betamax device, as i recall. i realize that the >>>same video can be recorded on a VHS. i just dunno how the box was. > >> The scenario that some Sony engineers designed the F1 to fit >> in with a Betamax machine, ending up with a 44.1 KHz sample rate >> for semi-random reasons, and then Sony managers forced some >> hapless unrelated group of Sony engineers supporting the CD Red Book >> standards effort to insist upon a 44.1 KHz sample rate ... >> against their better judgement ... entirely believable. > >I suppose, but there weren't many choices for CD master making back >then.
>Nine track tape is 150MB at 6250BPI, none of the current high
>density tape systems existed. People were actually starting
>to use video tape as computer backup systems.
>I don't know the exact numbers, but a block size and block rate
>that allows for easy writing to NTSC video is pretty convenient.
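The numbers usually quoted for the PCM adaptor formats (our summary, not from the thread): three 16-bit stereo samples were stored per usable video line, with 294 lines per field in PAL and 245 in NTSC. That lands exactly on 44.1 kHz for PAL (and monochrome-NTSC) timing, and on the F1's 44.056 kHz mentioned earlier for NTSC color timing:

```python
# samples/line * lines/field * fields/second
pal  = 3 * 294 * 50           # PAL: 50 fields/s
ntsc = 3 * 245 * (60 / 1.001) # NTSC color: 60/1.001 ~ 59.94 fields/s

print(pal)          # 44100
print(round(ntsc))  # 44056
```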
For certain market players, yes. Going on memory -- Sony, and also
Ampex, were wedded to pre-existing helical scan data recorders. Whereas
3M came out with a 48 ksample/sec tape system, eventually in multitrack
(transverse scan?). I think it was market-dominant for awhile (well, 6
months anyway), and it was for an interval of time the right choice, a
dedicated digital audio recorder. (Am I misremembering this?)

Steve