Reply by Robert Orban September 25, 2007
In article <1189061830.798679.327070@19g2000hsx.googlegroups.com>, 
rbj@audioimagination.com says...

> Robert, with as much respect/deference as i can have, i think i must
> disagree. in ABX testing, the subject hears two sounds, A and B, then
> a third sound, X, (or X might actually come first) which ostensibly
> would be either A or B, and is asked "which is X or which is X most
> similar to? A or B?" and the subject must choose either A or B. is
> that not the case? ABX is to try to answer "which one is better?",
> not "is this one good enough or not?"
>
>> ABC/hr answers the question: "Can I hear a difference and if
>> so, how subjectively important is it?"
>>
>> See ITU Recommendation BS 1116-1 and pcabx.com for more info.
>
> indeed at the pcabx.com site:
>
> "1.2.3 PCABX is a test paradigm that compares the performance of all
> equipment to a well-known ideal: Sonic Accuracy. Two different pieces
> of equipment can also be compared to each other, but the focus of most
> comparisons is sound quality relative to a perfect ideal which is the
> original audio signal prior to passing through the audio product under
> test."
>
> i don't have access to the ITU pub, but the statement above appears to
> agree with my understanding of ABX.
The scoring system in ABX has no provision for recording "how much difference is there," although one could presumably modify it appropriately. OTOH, ABC/HR specifically uses a scoring system that requires the subject to assess the quality difference between the (hidden) reference and the device under test. ABC/HR is more commonly used to assess the quality of items like codecs that are expected to degrade the original source material by determining whether their artifacts are anywhere from "very annoying" to "imperceptible," typically on a 1-5 scale. ABC/HR is known to be very sensitive and repeatable, which is why it is the gold standard for subjective testing of codecs whose goal is perceptual transparency.
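As a footnote on the scoring: a plain ABX run is usually evaluated with a one-sided exact binomial test, since a listener who truly hears no difference picks the right answer half the time by chance. A minimal sketch in Python (the function name is my own, not from any of the posts):

```python
from math import comb

def abx_p_value(correct, trials):
    """One-sided exact binomial test for an ABX run.

    Under the null hypothesis (the listener cannot hear a difference),
    each trial is a fair coin flip, so the chance of getting at least
    `correct` answers right out of `trials` is the binomial tail sum.
    """
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2**trials

# 12 correct out of 16 trials is conventionally significant (p < 0.05):
print(round(abx_p_value(12, 16), 3))  # -> 0.038
```

A score of 12/16 or better happens by pure guessing less than 4% of the time, which is why that is a common pass criterion in informal ABX runs.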
Reply by glen herrmannsfeldt September 6, 2007
Scott Seidman wrote:

(snip on phase detection in the ear)

> http://www.amazon.com/Introduction-Physiology-Hearing-Second/dp/0125547544
> Of course, everything's a continuum, and we can argue about how high
> is high. This has more to do with the fastest neural discharge rates
> than anything else -- the neuron can only fire so fast, and above that
> rate, phase locking just can't happen. So, in some frequency range,
> both ITD and ILD are available, but above that frequency range, only
> ILD is available. ITD cues are dependent on the peak detection, which
> just can't happen above some frequency.
http://www.pnas.org/cgi/content/full/pnas;97/22/11787

I now realize that the person who told me about phase detection does (and did) experiments on owls. They may have even higher response than humans, but it seems likely that even human nerves can't keep up at 20 kHz. The link above may or may not answer the question, but it is by the person who told me about phase detection.

-- glen
Reply by glen herrmannsfeldt September 6, 2007
glen herrmannsfeldt wrote:

> Scott Seidman wrote:
>> glen herrmannsfeldt <gah@ugcs.caltech.edu> wrote in
>> news:6dednUImAoWPokLbnZ2dnUVZ_qCgnZ2d@comcast.com:
>>> I have been told that the ear generates nerve impulses at the
>>> peak of the sine for low frequencies, and at the peak of some
>>> (but not all) sines for higher frequencies.
>> The former is correct, the latter isn't.
While looking for a reference, I instead found this one:

http://www.cco.caltech.edu/~boyk/usenet.htm

I think it is relevant to the discussion, though not to this specific question.

-- glen
Reply by Scott Seidman September 6, 2007
glen herrmannsfeldt <gah@ugcs.caltech.edu> wrote in 
news:0YWdndSZU5Xd033bnZ2dnUVZ_jOdnZ2d@comcast.com:

> Scott Seidman wrote:
>
>> glen herrmannsfeldt <gah@ugcs.caltech.edu> wrote in
>> news:6dednUImAoWPokLbnZ2dnUVZ_qCgnZ2d@comcast.com:
>
>>> I have been told that the ear generates nerve impulses at the
>>> peak of the sine for low frequencies, and at the peak of some
>>> (but not all) sines for higher frequencies.
>
>> The former is correct, the latter isn't.
>
> The one who told me that actually does experiments putting electrodes
> in the aural nerves and watching the impulses. What source do
> you have that disagrees with that?
>
> -- glen
http://www.amazon.com/Introduction-Physiology-Hearing-Second/dp/0125547544

Of course, everything's a continuum, and we can argue about how high is high. This has more to do with the fastest neural discharge rates than anything else -- the neuron can only fire so fast, and above that rate, phase locking just can't happen. So, in some frequency range, both ITD and ILD are available, but above that frequency range, only ILD is available. ITD cues are dependent on the peak detection, which just can't happen above some frequency.

--
Scott
Reverse name to reply
Reply by glen herrmannsfeldt September 6, 2007
Scott Seidman wrote:

> glen herrmannsfeldt <gah@ugcs.caltech.edu> wrote in
> news:6dednUImAoWPokLbnZ2dnUVZ_qCgnZ2d@comcast.com:
>> I have been told that the ear generates nerve impulses at the
>> peak of the sine for low frequencies, and at the peak of some
>> (but not all) sines for higher frequencies.
> The former is correct, the latter isn't.
The one who told me that actually does experiments putting electrodes in the aural nerves and watching the impulses. What source do you have that disagrees with that?

-- glen
Reply by glen herrmannsfeldt September 6, 2007
Scott Seidman wrote:

> glen herrmannsfeldt <gah@ugcs.caltech.edu> wrote in
> news:6dednUImAoWPokLbnZ2dnUVZ_qCgnZ2d@comcast.com:
>> Musical signals are processed in a different part of the brain,
>> though with some overlap (otherwise the stereo people would
>> be out of business). It seems that this sense is not so
>> sensitive to phase.
> All the ITD and ILD processing are built right into the brain stem. > Music can't skip over them.
I didn't mean that the signals skipped over them, but the processing. That is, the part that senses musicalness. (Pleasing tones vs. harsh tones. Trombone from trumpet. Anything other than the direction of the source.)

-- glen
Reply by Scott Seidman September 6, 2007
glen herrmannsfeldt <gah@ugcs.caltech.edu> wrote in 
news:6dednUImAoWPokLbnZ2dnUVZ_qCgnZ2d@comcast.com:

> I have been told that the ear generates nerve impulses at the
> peak of the sine for low frequencies, and at the peak of some
> (but not all) sines for higher frequencies.
The former is correct, the latter isn't.

--
Scott
Reverse name to reply
Reply by Scott Seidman September 6, 2007
glen herrmannsfeldt <gah@ugcs.caltech.edu> wrote in 
news:6dednUImAoWPokLbnZ2dnUVZ_qCgnZ2d@comcast.com:

> Musical signals are processed in a different part of the brain,
> though with some overlap (otherwise the stereo people would
> be out of business). It seems that this sense is not so
> sensitive to phase.
All the ITD and ILD processing are built right into the brain stem. Music can't skip over them.

--
Scott
Reverse name to reply
Reply by robert bristow-johnson September 6, 2007
On Sep 5, 5:10 pm, "John E. Hadstate" <jh113...@hotmail.com> wrote:
> "robert bristow-johnson" <r...@audioimagination.com> wrote in
> message news:1189011064.363040.257240@k79g2000hse.googlegroups.com...
...
>> what the dispute (regarding audibility of phase) was about is if one
>> applies different amounts of time delays to different frequencies,
>> what happens if you pass through some hypothetical filter that
>> changes no amplitudes and is *not* phase linear. apply the same
>> filter to both ears.
>
> Phase changes can cause the shape of reproduced waves to change.
true.
> If the auditory system was not sensitive to the wave shape, how would
> one tell the difference between a flute, a trumpet, and a violin all
> playing the same note?
ah, but there is more different about them, John, than that the phases of their harmonics are scrambled up. the *amplitudes* of the harmonics are of differing magnitudes.

what some folks are saying (and they have some reason to say this, but to do so as a blanket statement is mistaken, and that's what i had disputed with Andrew Horner) is that if you have two waveforms with harmonic amplitudes that are exactly matched, you cannot hear a difference, even if the phases are not matched. those waveforms will have radically different shapes, but *may* under some conditions (like no nonlinearities coming later in the signal chain) be very difficult to discriminate.

consider the square wave:

   x(t) = cos(wt) - 1/3*cos(3wt) + 1/5*cos(5wt) - 1/7*cos(7wt) + ...

and this:

   x(t) = cos(wt) + 1/3*cos(3wt) + 1/5*cos(5wt) + 1/7*cos(7wt) + ...

you add up enough terms and the latter will look pretty spikey compared to the square wave. and, to defer to Andrew (with whom i've disagreed over his sweeping rejection of the audibility of phase), it *is* true that, with a very linear signal chain, you can hardly hear a difference.
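To make this concrete, here is a quick numerical sketch (my own Python, not from the thread): the two series above have identical harmonic magnitudes by construction, since |+1/k| = |-1/k|, yet the waveforms themselves look very different.

```python
import math

def partial_sum(t, signs):
    """Sum of odd-harmonic cosines with amplitudes 1/k and the given signs."""
    return sum(s * math.cos(k * t) / k
               for s, k in zip(signs, range(1, 2 * len(signs), 2)))

N = 50  # number of odd harmonics kept
ts = [2 * math.pi * i / 4096 for i in range(4096)]

alternating = [(-1) ** m for m in range(N)]  # +, -, +, -, ...: the square wave
all_positive = [1] * N                       # same magnitudes, all + signs

square = [partial_sum(t, alternating) for t in ts]
spiky = [partial_sum(t, all_positive) for t in ts]

# Identical magnitude spectra, radically different shapes:
print(max(abs(x) for x in square))  # stays below 1 (square wave + Gibbs ripple)
print(max(abs(x) for x in spiky))   # all harmonics pile up in phase at t = 0
```

The all-positive series peaks at t = 0 at the full harmonic sum (about 2.9 for 50 terms), while the square-wave partial sum never exceeds its Gibbs overshoot, so the "spikey" description above checks out numerically.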
> I understand that there are issues of harmonics mixed with different
> amplitudes, but would these instruments really sound the same if the
> phase differences of the harmonics were wildly screwed-up (and/or
> delayed) relative to the fundamentals?
if you consider our hearing to be merely a bank of Fourier analyzers where magnitude is the only data, just like looking at the magnitude of a spectrum (i don't think that), then even if the phase is completely off you can't tell, because the magnitudes of all the sinusoidal components remain the same. i don't buy into that, but there are circumstances in which it's pretty hard to tell.
> If so, it's not intuitively obvious to me.
Google Groups for "square_phase.m" (in the comp.dsp newsgroup) and you'll hit at least three times i posted this MATLAB file in which, for a bandlimited square wave, i fuck up the phases and you get some radically different looking waveforms that sound quite similar (play them out *good* speakers or headphones). but if you put in some non-linearity, *then* the amplitudes of the Fourier harmonics get messed up and you can certainly hear the difference.

but where i differ with Andrew Horner is why one would bother to throw the phase information away in Wavetable Synthesis, since it saves nothing in the computational cost of resynthesis. if it requires more bother in the off-line waveform analysis and construction of the wavetables, fine. MIPS are cheap in a non-real-time off-line process.

r b-j
Reply by robert bristow-johnson September 6, 2007
On Sep 5, 7:47 pm, Robert Orban <donotre...@spamblock.com> wrote:
> Here is a report on one such experiment:
>
> Discrimination of Group Delay in Clicklike Signals Presented via
> Headphones and Loudspeakers
>
> JAES Volume 53 Number 7/8 pp. 593-611; July/August 2005
>
> Thresholds were measured for the discrimination of a click reference
> stimulus from a similar stimulus with a group delay in a specific
> frequency region, introduced using an all-pass filter. For headphone
> presentation the thresholds were about 1.6 ms, were independent of
> the center frequency of the delayed region (1, 2, or 4 kHz), and did
> not differ significantly for monaural and binaural listening. For
> presentation via loudspeakers in a low-reverberation room the
> thresholds were only slightly higher than with headphones and did not
> differ significantly for distributed-mode loudspeakers (DMLs) and
> cone loudspeakers. For presentation via the same loudspeakers in a
> reverberant room the thresholds were larger than the corresponding
> thresholds measured in the low-reverberation room, and this effect
> increased with decreasing center frequency for both loudspeaker
> types. For monaural listening the thresholds for discriminating group
> delay were significantly larger for the DML than for the cone
> loudspeaker, probably due to the higher ratio of
> reverberant-to-direct sound for the former, associated with its lower
> directivity. However, for binaural listening the difference between
> DML and cone loudspeakers became nonsignificant.
>
> Authors: Flanagan, Sheila; Moore, Brian C. J.; Stone, Michael A.
> E-lib Location: (CD JAES53) /tmp/jaes53/7/pg593.pdf
thanks, Robert. i'm gonna look at that. i am presuming that for the binaural tests, the very same APF was applied to both channels, no? click stimulus is pretty wicked rigorous. ...
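For anyone who wants to play with the idea: the experiment introduces group delay with all-pass filters. A first-order digital all-pass (a minimal sketch of my own, far simpler than the band-centered filters in the paper) shows the defining property the thread keeps circling: magnitude exactly 1 at every frequency, group delay anything but constant.

```python
import cmath

a = 0.7  # real pole/zero parameter; |a| < 1 for stability

def H(w):
    """First-order all-pass H(z) = (a + z^-1)/(1 + a*z^-1) on the unit circle."""
    z1 = cmath.exp(-1j * w)
    return (a + z1) / (1 + a * z1)

def group_delay(w, dw=1e-6):
    """Numerical -d(phase)/dw, in samples."""
    return -(cmath.phase(H(w + dw)) - cmath.phase(H(w - dw))) / (2 * dw)

# Flat magnitude: the filter changes no amplitudes...
for w in (0.1, 0.5, 1.0, 2.0, 3.0):
    assert abs(abs(H(w)) - 1.0) < 1e-12

# ...yet it delays different frequencies by different amounts, i.e. it
# is *not* phase linear -- exactly the distortion the experiment probes:
print(group_delay(0.1), group_delay(2.0))
```

The closed form for this filter is tau(w) = (1 - a^2)/(1 + 2a*cos(w) + a^2), so for a = 0.7 the delay grows from about 0.18 samples near DC to about 0.56 samples at w = 2.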
> In article <1188954726.414162.141...@50g2000hsm.googlegroups.com>,
> rbj@audioimagination.com says...
...
>> i am envisioning listening to two, possibly different and possibly
>> identical, sounds, one after the other. this is what i would call "AB
>> Testing" as opposed to "ABX Testing" where in the latter you hear two
>> sounds (A&B) that are nominally different, and then a third sound (X)
>> that you assign to A or B. in AB Testing, you hear two sounds and you
>> have to say if they are the same or if they are not. there will be an
>> equal number of placebos put in (two sounds that are identical) and
>> every false positive (where the listener judges two identical sounds
>> to be different) will be subtracted from the number of true positives
>> (where the listener judges two different sounds to be different).
>> same for the false negatives (where the listener judges two different
>> sounds to be identical) and true negatives (where the listener judges
>> two identical sounds to be identical). that's how we subtract the
>> bias from the "Monster Cable partisans" who might be tempted to judge
>> any pair as different, just to make sure they hit the ones that
>> actually *are* different.
>
> ABX and ABC/hr methodologies are useful because they automatically
> determine whether subjects are able to discern differences in a
> statistically significant way. ABX answers the question: "Can I hear a
> difference?"
Robert, with as much respect/deference as i can have, i think i must disagree. in ABX testing, the subject hears two sounds, A and B, then a third sound, X, (or X might actually come first) which ostensibly would be either A or B, and is asked "which is X or which is X most similar to? A or B?" and the subject must choose either A or B. is that not the case? ABX is to try to answer "which one is better?", not "is this one good enough or not?"
> ABC/hr answers the question: "Can I hear a difference and if
> so, how subjectively important is it?"
>
> See ITU Recommendation BS 1116-1 and pcabx.com for more info.
indeed at the pcabx.com site:

"1.2.3 PCABX is a test paradigm that compares the performance of all equipment to a well-known ideal: Sonic Accuracy. Two different pieces of equipment can also be compared to each other, but the focus of most comparisons is sound quality relative to a perfect ideal which is the original audio signal prior to passing through the audio product under test."

i don't have access to the ITU pub, but the statement above appears to agree with my understanding of ABX.

r b-j
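The AB "same/different" scoring scheme described earlier in the thread (subtract false positives from true positives, and false negatives from true negatives, with half the trials being placebos) can be sketched like this; the function name and trial layout are my own illustration:

```python
def corrected_scores(trials):
    """Score an AB ('same or different?') test with placebo trials.

    `trials` is a list of (actually_different, judged_different) booleans.
    Per the scheme quoted above: false positives are subtracted from true
    positives, and false negatives from true negatives, which cancels the
    bias of a listener who simply calls every pair 'different'.
    """
    tp = sum(d and j for d, j in trials)          # different, judged different
    fp = sum((not d) and j for d, j in trials)    # identical, judged different
    tn = sum((not d) and (not j) for d, j in trials)
    fn = sum(d and (not j) for d, j in trials)
    return tp - fp, tn - fn

# A "Monster Cable partisan" who answers 'different' every time nets zero:
n = 10
always_diff = [(i % 2 == 0, True) for i in range(2 * n)]  # half placebos
print(corrected_scores(always_diff))  # -> (0, 0)
```

A listener who genuinely discriminates every pair would score (n, n), so the corrected score separates real discrimination from response bias, which is the whole point of the placebo trials.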