
Gain of an IIR Filter

Started by gokul_s1 September 2, 2007
>>>> because the ear cannot resolve phase, as discussed.

>> the gross example i cited (all-pass filter with very long delay
>> element inside) is such that there is *only* phase shift. are you
>> saying that such is inaudible?
I better say nothing, otherwise I contradict myself :)

It's easy to construct a pathological example: split an audio recording
with a steep filter into lower and higher halves, then time-delay one
side by one second (a linear phase shift). Of course the effect is
audible, and the result is a complete mess.

If one treats the ear as a bank of parallel narrowband power detectors,
it would be legitimate to talk about "phase delay" and "group delay"
separately. But I think in reality it's not that simple - given both
the frequency and time resolution, one might prove that hearing in the
conventional sense violates the time-bandwidth product and is
impossible... and that wouldn't help anybody ...

Cheers
Markus, listening to Pink Floyd's "Eclipse" while writing
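A minimal numpy/scipy sketch of this pathological example; the 8 kHz
rate, order-10 Butterworth halves, and 1 kHz crossover are illustrative
choices, not anything specified in the post:

import numpy as np
from scipy import signal

fs = 8000                          # sample rate, Hz (illustrative)
x = np.random.randn(2 * fs)        # stand-in for an audio recording

# steep complementary filters around a 1 kHz crossover
sos_lo = signal.butter(10, 1000.0, btype="low", fs=fs, output="sos")
sos_hi = signal.butter(10, 1000.0, btype="high", fs=fs, output="sos")
low = signal.sosfilt(sos_lo, x)
high = signal.sosfilt(sos_hi, x)

# delay the upper half by one full second and recombine; the magnitude
# response is nearly untouched, but the result is a complete mess
high_delayed = np.concatenate([np.zeros(fs), high])[: x.size]
y = low + high_delayed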
On Sep 5, 1:11 pm, "Ron N." <rhnlo...@yahoo.com> wrote:
> On Sep 5, 9:51 am, robert bristow-johnson <r...@audioimagination.com>
> wrote:
> > i don't think that there is much controversy regarding the
> > audibility of Inter-Aural Phase Difference (or time difference).
> > this is what the Blumlein stereo patent was about, i think.
> >
> > http://mixonline.com/TECnology-Hall-of-Fame/alan-dower-blumlein-09010...
> > (i can't seem to find a good one with a diagram showing the time
> > arrival difference as a function of azimuth angle.)
> >
> > if you applied the phase shift equally to both ears, would it make
> > a difference?
> >
> > what the dispute (regarding audibility of phase) was about is if
> > one applies different amounts of time delays to different
> > frequencies, what happens if you pass through some hypothetical
> > filter that changes no amplitudes and is *not* phase linear. apply
> > the same filter to both ears.
>
> Linear-phase filtering is already known to introduce some pre-ringing
> artifacts at low frequencies.
i have to confess that i was being deliberately a little pedantic about
this. the issue was effects of phase changes *only*, with the
assumption that there were no amplitude changes. that means an all-pass
filter of some manner. an all-pass filter with linear phase is normally
called a "delay line" or "pure delay" (but i pedantically didn't use
that term). anyway, a linear-phase APF, a.k.a. "delay line", isn't
gonna have much pre-ringing. now a linear-phase LPF, *that's* a
different story.
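To see the distinction numerically, a small sketch (lengths and cutoff
are illustrative): a pure 50-sample delay passes a step cleanly, while
a linear-phase FIR lowpass with the same nominal delay rings ahead of
the edge:

import numpy as np
from scipy import signal

fs = 8000
step = np.concatenate([np.zeros(200), np.ones(200)])  # edge at n = 200

# linear-phase "APF": a pure 50-sample delay (impulse at tap 50)
delay_line = np.zeros(101)
delay_line[50] = 1.0
y_apf = np.convolve(step, delay_line)[: step.size]    # a shifted step

# linear-phase FIR lowpass with the same nominal 50-sample delay
lpf = signal.firwin(101, 500.0, fs=fs)
y_lpf = np.convolve(step, lpf)[: step.size]           # rings before the edge

# y_apf is exactly zero ahead of the delayed edge; y_lpf is not
print(np.abs(y_apf[200:250]).max(), np.abs(y_lpf[200:250]).max())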
> Wouldn't any sharp phase discontinuity
> centered at a bass frequency energy peak cause a similar effect?
i dunno. if you exclude phase discontinuities of +/- pi and fold those
into the magnitude as a polarity inversion, i'm not sure what step-like
phase discontinuity can physically crop up.

r b-j
"robert bristow-johnson" <rbj@audioimagination.com> wrote in 
message 
news:1189011064.363040.257240@k79g2000hse.googlegroups.com...

> i don't think that there is much controversy regarding the
> audibility of Inter-Aural Phase Difference (or time difference).
> this is what the Blumlein stereo patent was about, i think.
Sorry. I should have given you more credit than that.
> what the dispute (regarding audibility of phase) was about is if one
> applies different amounts of time delays to different frequencies,
> what happens if you pass through some hypothetical filter that
> changes no amplitudes and is *not* phase linear. apply the same
> filter to both ears.
>
> r b-j
Phase changes can cause the shape of reproduced waves to change. If the
auditory system were not sensitive to the wave shape, how would one
tell the difference between a flute, a trumpet, and a violin all
playing the same note? I understand that there are issues of harmonics
mixed at different amplitudes, but would these instruments really sound
the same if the phase differences of the harmonics were wildly
screwed-up (and/or delayed) relative to the fundamentals? If so, it's
not intuitively obvious to me.
On Sep 5, 2:10 pm, "John E. Hadstate" <jh113...@hotmail.com> wrote:
> "robert bristow-johnson" <r...@audioimagination.com> wrote in > messagenews:1189011064.363040.257240@k79g2000hse.googlegroups.com... > > > > > i don't think that there is much controversy regarding the > > audibility > > of Inter-Aural Phase Difference (or time difference). > > this is what > > the Blumlein stereo patent was about, i think. > > Sorry. I should have given you more credit than that. > > > what the dispute (regarding audibility of phase) was about > > is if one > > applies different amounts of time delays to different > > frequencies, > > what happens if you pass through some hypothetical filter > > that changes > > no amplitudes and is *not* phase linear. apply the same > > filter to > > both ears. > > > r b-j > > Phase changes can cause the shape of reproduced waves to > change. If the auditory system was not sensitive to the > wave shape, how would one tell the difference between a > flute, a trumpet, and a violin all playing the same note? I > understand that there are issues of harmonics mixed with > different amplitudes, but would these instruments really > sound the same if the phase differences of the harmonics > were wildly screwed-up (and/or delayed) relative to the > fundamentals? If so, it's not intuitively obvious to me.
Somewhere, I read some research showing that in fact most people can't
tell the difference, for instruments that are only roughly harmonically
similar, if the transient attack is removed. In fact, you could take
the short transient attack of instrument #1, paste it onto a longer
segment of instrument #2's sound, and people would think they were
listening to some sort of instrument #1. The ear/brain system appears
to do a different type of measurement on transients than on sustained
waveforms.

I haven't yet seen good research on whether significant phase
distortion of short transients ("clicks" and such) is completely
inaudible. If you take the FFT of an impulse and invert the phase of
one bin, leaving all the magnitudes identical, the IFFT certainly looks
like it would be audibly different.

IMHO. YMMV.
--
rhn A.T nicholson d.0.t C-o-M
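The thought experiment in that last paragraph is easy to run; a minimal
numpy sketch (the transform size and the flipped bin are arbitrary):

import numpy as np

N = 64
x = np.zeros(N)
x[0] = 1.0                       # unit impulse: flat magnitude, zero phase

X = np.fft.rfft(x)
X[10] *= -1.0                    # invert the phase of one bin (a pi shift)
y = np.fft.irfft(X, n=N)

# magnitudes are untouched ...
assert np.allclose(np.abs(np.fft.rfft(y)), np.abs(np.fft.rfft(x)))
# ... but the waveform no longer looks like a clean click
print(np.max(np.abs(y - x)))     # clearly nonzero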
In article <1188954726.414162.141250@50g2000hsm.googlegroups.com>, 
rbj@audioimagination.com says...
> On Sep 4, 3:05 pm, Jerry Avins <j...@ieee.org> wrote:
> > robert bristow-johnson wrote:
> >
> > > well, it's not the same thing. a pure delay is just that. you
> > > hear the original signal *once* at some delay and no echoes after
> > > that. if you have no reference to compare the possibly delayed
> > > signal to, how would you know it sounds different than it would
> > > have sounded if it had arrived at your ears 500 ms earlier. if
> > > echoes start happening to a sound, it is not a pure delay (and
> > > might not even be an APF, so there would be magnitude variations
> > > in the frequency response), and you are hearing that.
> >
> > ...
> >
> > I'm not sure I like your first example. You may not hear phase, but
> > interference can be quite audible.
>
> and once you get multipath interference, it's no longer a pure delay.
>
> what i am trying to get down to, with this "audibility of phase"
> issue is something that we can nail down. i brought this up (among
> other issues) in 2003 March with a letter to the AES Journal taking
> on an author named Andrew Horner. he and i have both done stuff
> regarding Wavetable Synthesis, but Andrew totally discarded phase
> information of each harmonic, so the wave shape was not preserved at
> all (and i had a problem with it, particularly since he didn't even
> hand-wave a justification for this loss of information). but there
> are *some* phase inaudibility situations that i grant. how to set up
> an experiment to find out?
Here is a report on one such experiment:

Discrimination of Group Delay in Clicklike Signals Presented via
Headphones and Loudspeakers

JAES Volume 53 Number 7/8 pp. 593-611; July/August 2005

Thresholds were measured for the discrimination of a click reference
stimulus from a similar stimulus with a group delay in a specific
frequency region, introduced using an all-pass filter. For headphone
presentation the thresholds were about 1.6 ms, were independent of the
center frequency of the delayed region (1, 2, or 4 kHz), and did not
differ significantly for monaural and binaural listening. For
presentation via loudspeakers in a low-reverberation room the
thresholds were only slightly higher than with headphones and did not
differ significantly for distributed-mode loudspeakers (DMLs) and cone
loudspeakers. For presentation via the same loudspeakers in a
reverberant room the thresholds were larger than the corresponding
thresholds measured in the low-reverberation room, and this effect
increased with decreasing center frequency for both loudspeaker types.
For monaural listening the thresholds for discriminating group delay
were significantly larger for the DML than for the cone loudspeaker,
probably due to the higher ratio of reverberant-to-direct sound for
the former, associated with its lower directivity. However, for
binaural listening the difference between DML and cone loudspeakers
became nonsignificant.

Authors: Flanagan, Sheila; Moore, Brian C. J.; Stone, Michael A.
E-lib Location: (CD JAES53) /tmp/jaes53/7/pg593.pdf
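For reference, a minimal scipy sketch of the kind of manipulation the
abstract describes: a second-order all-pass that leaves magnitude
untouched but concentrates group delay near a chosen center frequency.
The 1 kHz center and pole radius are illustrative, not the paper's
actual stimulus design:

import numpy as np
from scipy import signal

fs = 48000
f0 = 1000.0                      # center of the delayed region, Hz
r = 0.99                         # pole radius: closer to 1 = more delay

w0 = 2.0 * np.pi * f0 / fs
# poles at r*e^(+/-j*w0); the numerator is the reversed denominator,
# so |H| = 1 at every frequency (an all-pass)
a = np.array([1.0, -2.0 * r * np.cos(w0), r * r])
b = a[::-1]

w, gd = signal.group_delay((b, a), fs=fs)      # group delay in samples
peak = gd[np.argmin(np.abs(w - f0))] / fs      # seconds, near f0
print(peak)     # several ms for r = 0.99, well above the ~1.6 ms threshold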
> i am envisioning listening to two, possibly different and possibly
> identical, sounds, one after the other. this is what i would call "AB
> Testing" as opposed to "ABX Testing" where in the latter you hear two
> sounds (A&B) that are nominally different, and then a third sound (X)
> that you assign to A or B. in AB Testing, you hear two sounds and you
> have to say if they are the same or if they are not. there will be an
> equal number of placebos put in (two sounds that are identical) and
> every false positive (where the listener judges two identical sounds
> to be different) will be subtracted from the number of true positive
> (where the listener judges two different sounds to be different).
> same for the false negatives (where the listener judges two different
> sounds to be identical) and true negatives (where the listener judges
> two identical sounds to be identical). that's how we subtract the
> bias from the "Monster Cable partisans" who might be tempted to judge
> any pair as different, just to make sure they hit the ones that
> actually *are* different.
ABX and ABC/hr methodologies are useful because they automatically
determine whether subjects are able to discern differences in a
statistically significant way. ABX answers the question: "Can I hear a
difference?" ABC/hr answers the question: "Can I hear a difference and
if so, how subjectively important is it?"

See ITU-R Recommendation BS.1116-1 and pcabx.com for more info.
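A minimal sketch of the bias-corrected AB scoring described in the
quoted text, plus the kind of binomial significance check that
ABX-style protocols automate. All tallies are hypothetical:

from scipy.stats import binomtest

# hypothetical tallies from an AB session with placebo (identical) pairs
true_pos = 18     # judged "different", actually different
false_pos = 4     # judged "different", actually identical (a placebo)
true_neg = 16     # judged "identical", actually identical
false_neg = 2     # judged "identical", actually different

# subtract the always-say-"different" bias, as described above
corrected_hits = true_pos - false_pos
corrected_rejections = true_neg - false_neg

# significance: could 18 correct calls out of 20 genuinely different
# pairs have happened by guessing (p = 0.5)?
result = binomtest(true_pos, true_pos + false_neg, p=0.5,
                   alternative="greater")
print(corrected_hits, corrected_rejections, result.pvalue)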
robert bristow-johnson wrote:

(snip)

> what the dispute (regarding audibility of phase) was about is if one
> applies different amounts of time delays to different frequencies,
> what happens if you pass through some hypothetical filter that
> changes no amplitudes and is *not* phase linear. apply the same
> filter to both ears.
It also happens if you listen to a multiple-driver loudspeaker from
different angles, though the time delay will depend more on the
position in the room, such that there is only one correct listening
position. Few of us manage that in our living rooms.

-- glen
Scott Seidman wrote:

(I wrote)

>> Not that phase isn't important, but the human auditory system is
>> somewhat (not completely) insensitive to phase, and it isn't easy
>> to control. For a loudspeaker it usually depends on position in
>> the room, which makes it hard to give a specific value.
> Actually, for lower frequency hearing (about 2 kHz and below) the
> human auditory system is exquisitely sensitive to phase (specifically
> time delays between the two ears), and clearly uses the information
> for localizing sound sources.
As far as I understand, these are two separate systems. If you are out
in the forest and hear a stick crack (an animal sneaking up on you),
you can easily pinpoint the direction. That has an obvious evolutionary
advantage. Assuming the same phase changes are made for both ears, the
system might still work. I have been told that the ear generates nerve
impulses at the peak of the sine for low frequencies, and at the peak
of some (but not all) sines for higher frequencies.

Musical signals are processed in a different part of the brain, though
with some overlap (otherwise the stereo people would be out of
business). It seems that this sense is not so sensitive to phase.

I do remember once going to a John Pierce
http://en.wikipedia.org/wiki/John_Robinson_Pierce
seminar with two speakers at the front of the room. He would play some
sounds and ask who heard it from the right speaker (some would raise
their hands), and who heard it from the left (others would raise their
hands). It was easy to see that the line went right down the middle of
the room.

-- glen
John E. Hadstate wrote:

   ...

> Phase changes can cause the shape of reproduced waves to change. If
> the auditory system were not sensitive to the wave shape, how would
> one tell the difference between a flute, a trumpet, and a violin all
> playing the same note?
Very much the same, if not exactly. The salient difference between a
Steinway and a Baldwin lies more in the harmonic structure of the notes
than in the phase. When my ears were better than they are now, I could
distinguish between those instruments even on vinyl recordings.
> I understand that there are issues of harmonics mixed at different
> amplitudes, but would these instruments really sound the same if the
> phase differences of the harmonics were wildly screwed-up (and/or
> delayed) relative to the fundamentals?
Very much the same.
> If so, it's not intuitively obvious to me.
It can be empirically tested. Run a French horn through one all-pass
filter and a trumpet through another. Each instrument will retain its
identity, but you won't be able to tell which filter is which.

Analog quadrature networks for single sideband intended for voice have
over 500 degrees of phase shift over about a decade. Despite the
restricted bandwidth, the instruments you mentioned would be
distinguished without difficulty.

Jerry
--
Engineering is the art of making what you want from things you can get.
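A minimal sketch of that empirical test, with cascades of first-order
all-pass sections standing in for the analog quadrature networks; the
coefficients and the sine-wave "instruments" are placeholders:

import numpy as np
from scipy import signal

def allpass_chain(x, coeffs):
    # cascade of first-order all-pass sections H(z) = (a + z^-1)/(1 + a*z^-1);
    # each has unit magnitude at every frequency and only shifts phase
    for a in coeffs:
        x = signal.lfilter([a, 1.0], [1.0, a], x)
    return x

fs = 8000
t = np.arange(fs) / fs
horn = np.sin(2 * np.pi * 220.0 * t)        # stand-ins for the instruments
trumpet = np.sin(2 * np.pi * 440.0 * t)

y1 = allpass_chain(horn, [0.3, -0.5, 0.7])       # "filter one"
y2 = allpass_chain(trumpet, [-0.2, 0.6, -0.8])   # "filter two"
# up to edge transients, the magnitude spectra of y1 and y2 match the
# inputs; only the phase has changed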
On Sep 5, 7:47 pm, Robert Orban <donotre...@spamblock.com> wrote:
> Here is a report on one such experiment:
>
> Discrimination of Group Delay in Clicklike Signals Presented via
> Headphones and Loudspeakers
>
> JAES Volume 53 Number 7/8 pp. 593-611; July/August 2005

...
thanks, Robert. i'm gonna look at that. i am presuming that for the binaural tests, the very same APF was applied to both channels, no? click stimulus is pretty wicked rigorous. ...
> In article <1188954726.414162.141...@50g2000hsm.googlegroups.com>,
> rbj@audioimagination.com says...
...
> ABX and ABC/hr methodologies are useful because they automatically
> determine whether subjects are able to discern differences in a
> statistically significant way. ABX answers the question: "Can I hear
> a difference?"
Robert, with as much respect/deference as i can have, i think i must
disagree. in ABX testing, the subject hears two sounds, A and B, then
a third sound, X (or X might actually come first), which ostensibly
would be either A or B, and is asked "which is X, or which is X most
similar to? A or B?" and the subject must choose either A or B. is
that not the case? ABX tries to answer "which one is better?", not "is
this one good enough or not?"
> ABC/hr answers the question: "Can I hear a difference and if
> so, how subjectively important is it?"
>
> See ITU-R Recommendation BS.1116-1 and pcabx.com for more info.
indeed at the pcabx.com site:

"1.2.3 PCABX is a test paradigm that compares the performance of all
equipment to a well-known ideal: Sonic Accuracy. Two different pieces
of equipment can also be compared to each other, but the focus of most
comparisons is sound quality relative to a perfect ideal which is the
original audio signal prior to passing through the audio product under
test."

i don't have access to the ITU pub, but the statement above appears to
agree with my understanding of ABX.

r b-j
On Sep 5, 5:10 pm, "John E. Hadstate" <jh113...@hotmail.com> wrote:
> "robert bristow-johnson" <r...@audioimagination.com> wrote in > messagenews:1189011064.363040.257240@k79g2000hse.googlegroups.com...
...
> > what the dispute (regarding audibility of phase) was about is if
> > one applies different amounts of time delays to different
> > frequencies, what happens if you pass through some hypothetical
> > filter that changes no amplitudes and is *not* phase linear. apply
> > the same filter to both ears.
>
> Phase changes can cause the shape of reproduced waves to change.
true.
> If the auditory system were not sensitive to the wave shape, how
> would one tell the difference between a flute, a trumpet, and a
> violin all playing the same note?
ah, but there is more different about them, John, than that the phases
of their harmonics are scrambled up. the *amplitudes* of the harmonics
are of differing magnitudes.

what some folks are saying (and they have some reason to say this, but
to do so as a blanket statement is mistaken, and that's what i had
disputed with Andrew Horner) is that if you have two waveforms with
harmonic amplitudes that are exactly matched, you cannot hear a
difference, even if the phases are not matched. those waveforms will
have radically different shapes, but *may* under some conditions (like
no nonlinearities coming later in the signal chain) be very difficult
to discriminate.

consider the square wave:

   x(t) = cos(wt) - 1/3*cos(3wt) + 1/5*cos(5wt) - 1/7*cos(7wt) + ...

and this:

   x(t) = cos(wt) + 1/3*cos(3wt) + 1/5*cos(5wt) + 1/7*cos(7wt) + ...

you add up enough terms and the latter will look pretty spikey compared
to the square wave. and, to defer to Andrew (whose sweeping rejection
of the audibility of phase i've disagreed with), it *is* true that,
with a very linear signal chain, you can hardly hear a difference.
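A minimal numpy sketch of exactly this comparison (the 100 Hz
fundamental and 48 kHz rate are arbitrary): both sums have identical
harmonic magnitudes, but the all-plus version peaks roughly four times
higher:

import numpy as np

fs, f0 = 48000, 100.0
t = np.arange(fs) / fs
ks = np.arange(1, 200, 2)                    # odd harmonics below Nyquist

# alternating signs: the bandlimited square wave
square = sum(((-1) ** (k // 2)) / k * np.cos(2 * np.pi * k * f0 * t)
             for k in ks)
# all plus signs: identical harmonic magnitudes, different phases
spiky = sum(1.0 / k * np.cos(2 * np.pi * k * f0 * t) for k in ks)

print(np.max(np.abs(square)), np.max(np.abs(spiky)))   # ~0.9 vs. ~3.3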
> I understand that there are issues of harmonics mixed at different
> amplitudes, but would these instruments really sound the same if the
> phase differences of the harmonics were wildly screwed-up (and/or
> delayed) relative to the fundamentals?
if you consider our hearing to be merely a bank of Fourier analyzers
where magnitude is the only data - just like looking at the magnitude
of a spectrum (i don't think that) - then, even if the phase is
completely off, you can't tell, because the magnitudes of all the
sinusoidal components remain the same. i don't buy into that, but
there are circumstances where it's pretty hard to tell.
> If so, it's not intuitively obvious to me.
Google group search for "square_phase.m" (in the comp.dsp newsgroup)
and you'll hit at least three times i posted this MATLAB file in
which, for a bandlimited square wave, i mess up the phases and you get
some radically different looking waveforms that sound quite similar
(play them out *good* speakers or headphones). but if you put in some
nonlinearity, *then* the amplitudes of the Fourier harmonics get
messed up and you can certainly hear the difference.

but where i differ with Andrew Horner is why one would bother to throw
the phase information away in Wavetable Synthesis, since it saves
nothing in the computational cost of resynthesis. if it requires more
bother in the off-line waveform analysis and construction of the
wavetables, fine. MIPS are cheap in a non-real-time off-line process.

r b-j
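square_phase.m itself is not reproduced here, but a minimal numpy
sketch in its spirit shows both halves of the argument: randomized
phases leave the harmonic magnitudes identical, and a downstream
nonlinearity then makes the magnitude spectra diverge. The
fundamental, rate, and tanh clipper are illustrative choices:

import numpy as np

rng = np.random.default_rng(0)
fs, f0 = 48000, 100.0
t = np.arange(fs) / fs
ks = np.arange(1, 200, 2)                    # odd harmonics below Nyquist

square = sum(1.0 / k * np.sin(2 * np.pi * k * f0 * t) for k in ks)
scrambled = sum(1.0 / k * np.sin(2 * np.pi * k * f0 * t
                                 + rng.uniform(0.0, 2.0 * np.pi))
                for k in ks)
# identical harmonic magnitudes, radically different waveform shapes

# a memoryless nonlinearity (a soft clipper) creates new harmonic
# amplitudes that depend on wave shape, so the spectra now differ
clip = lambda v: np.tanh(2.0 * v)
m1 = np.abs(np.fft.rfft(clip(square)))
m2 = np.abs(np.fft.rfft(clip(scrambled)))
print(np.max(np.abs(m1 - m2)))               # no longer (near) zero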