Reply by robert bristow-johnson December 15, 2004
in article xxpllbzgxrq.fsf@usrts005.corpusers.net, Randy Yates at
randy.yates@sonyericsson.com wrote on 12/15/2004 14:35:

> robert bristow-johnson <rbj@audioimagination.com> writes:
>> [...]
>>                                        .----<-- q = quantization error
>>                                        |
>>           x-y            v        Gv   |
>>   x --->(+)----->[H(z)]----->[G]----->(+)----.---> y = G*v + q
>>          ^                                   |
>>          |                                   |
>>          '-------[-1]<-----[z^-1]<-----------'
>>
>>
>> "G" is the gain inherent to the quantizer.  the gain from q to the
>> output y is:
>>
>>      Y(z)/Q(z) = 1/(1 + G*H(z))
>
> Robert,
>
> I get something different:
>
>   Y(z)/Q(z) = 1 / (1 + z^{-1}*G*H(z)).
>
> Did I analyze the loop incorrectly?
no, you got it right and that is one more mistake that i missed.  in
addition, this is not exactly the topology of the circuit that Vincent
had.  the real topology has an additional feedback in between the two
integrators with a gain of -2 (or -1/alpha, where "alpha" is how Vincent
defines it).  still, the point is that if you increase |H(z)| by jacking
up the feedforward coefs, then G will go down by the reciprocal factor
and the behavior, for both the real simulation and for the linearized
version, remains the same.  if G wasn't there, it would appear that
jacking up |H(z)| would help, but we know it cannot because the behavior
of the comparator would be unchanged (assuming no polarity change in H).

--
r b-j                  rbj@audioimagination.com

"Imagination is more important than knowledge."
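For a quick numerical cross-check of the corrected expression, here is a
minimal Matlab sketch; the single-integrator loop filter H(z) = 1/(1 - z^-1)
and the quantizer gain G = 1 are assumptions made only for this
illustration.  With those choices the corrected noise gain collapses to the
familiar first-order result, 1 - z^{-1}.

% minimal sketch: evaluate the corrected noise gain on a frequency grid,
% assuming H(z) = 1/(1 - z^-1) and G = 1 (illustrative choices only)
w = linspace(1e-3, pi, 512);            % radians/sample, avoid w = 0
z = exp(1j*w);
H = 1 ./ (1 - z.^-1);                   % assumed loop filter
G = 1;                                  % assumed quantizer gain
NTF_loop = 1 ./ (1 + z.^-1 .* G .* H);  % Y(z)/Q(z) with the z^-1 in the loop
NTF_ref  = 1 - z.^-1;                   % the familiar first-order NTF
max(abs(NTF_loop - NTF_ref))            % essentially zero: the two agree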
Reply by December 15, 2004
robert bristow-johnson <rbj@audioimagination.com> writes:
> [...]
>                                        .----<-- q = quantization error
>                                        |
>           x-y            v        Gv   |
>   x --->(+)----->[H(z)]----->[G]----->(+)----.---> y = G*v + q
>          ^                                   |
>          |                                   |
>          '-------[-1]<-----[z^-1]<-----------'
>
>
> "G" is the gain inherent to the quantizer.  the gain from q to the
> output y is:
>
>      Y(z)/Q(z) = 1/(1 + G*H(z))
Robert,

I get something different:

   Y(z)/Q(z) = 1 / (1 + z^{-1}*G*H(z)).

Did I analyze the loop incorrectly?

--
Randy Yates
Sony Ericsson Mobile Communications
Research Triangle Park, NC, USA
randy.yates@sonyericsson.com, 919-472-1124
Reply by robert bristow-johnson December 13, 2004
after some off-line conversation, i am reposting this with some mistakes
corrected and some of the explanation amplified by a few more dB i hope:

in article d4240f26.0412090245.52789a29@posting.google.com, Vincent Ma at
vinma55@hotmail.com wrote on 12/09/2004 05:45:

> But when I put a real one-bit quantizer to it (output +-1), it
> generated a bad result with sin wave amplitude of 1, I then put a
> scaling factor in the quantizer (output +-k) or I scale down the input
> sin wave amplitude, it becomes better. (It seems that scaling up
> quantizer output level generates same effect as scaling down the input
> level).
The "gain of the comparator" issue is something that i harped about
multiple times when i saw papers that simply replaced the comparator
with an additive noise source of gain 1.  why choose 1 for the gain?
For a multibit quantizer, this is not an issue but it is for
sigma-delta (or vice versa). ...  In any linearized model of a noise
shaping feedback system, the loop gain is a quantitative parameter
that affects the performance and behavior of the system.

(you need to look at this with a mono-spaced font.)

            x-y           v
  x --->(+)----->[H(z)]------>[Quantizer]----.---> y = quantized value
         ^                                   |     = +/- U
         |                                   |
         '-------[-1]<-----[z^-1]<-----------'

"linearized" model:

                                       .----<-- q = quantization error
                                       |
          x-y            v        Gv   |
  x --->(+)----->[H(z)]----->[G]----->(+)----.---> y = G*v + q
         ^                                   |
         |                                   |
         '-------[-1]<-----[z^-1]<-----------'

"G" is the gain inherent to the quantizer.  the gain from q to the
output y is:

     Y(z)/Q(z) = 1/(1 + G*H(z))

Now it would appear at first that you could easily decrease the gain
from the quantization to the output to an arbitrarily low amount by
just increasing H to be arbitrarily large.  But, of course, it can't
be that easy.

In a multibit quantizer noise-shaped system, the value of "G", the
quantizer gain, is essentially the mean slope of the staircase
function of the quantizer (or the slope of the plank that you would
lay diagonally on top of the staircase).  You can increase the
coefficients in H(z), but then the input to the quantizer gets larger
and you push it into saturation (thus limiting how much constant gain
you want to put into H(z)).  And once |H| is made large enough to put
the quantizer into saturation, you could double or triple |H| or scale
it up by 100 and it wouldn't make any difference (other than fry some
transistors).  It still simply remains in saturation.

For the one-bit quantizer, y is either +U or -U (Vincent had +k or -k)
and there is no slope that can be inferred by the quantizer staircase
function because there *is* no staircase function.  There is only one
step, and you can lay a plank on it and one position and slope is just
as plausible as any other position and slope.  It turns out that the
gain of that quantizer (G) is determined from the statistics of the
input to it (I think it's U times the mean abs value of the input
divided by the variance of the input).  I worked this out in about
1990 and showed it to Bob Adams only to find out later that John
Paulos published a similar result in 1986 or 87.

What that means is that you can crank up the gain in H(z) to whatever
you want and, as long as the polarity of the gain does not change, the
behavior of the quantizer remains unchanged.  (This should be obvious
since the quantizer is a comparator.  Changing the level, but not the
polarity, of the input to a comparator does not affect its operation.)
So changing the preceding gain to a one-bit quantizer does not affect
the operation of the quantizer.  But, in the linear model, the gain of
the quantizer would then have to change in a reciprocal manner to
absorb the increase in gain of H(z).  So in both models, increasing
the constant gain factor in H(z) will not change anything.

How is this gain G determined for the linearized model?  First, you
have to specify something about the nature of the additive noise
error, q.  q has to be uncorrelated from the quantizer input signal v
or G*v, the other input to the adder.  The reason why is that if q was
correlated to v (the cross-correlation was not zero), then q could be
represented as the sum of a scaled version of v, call it G1*v, and an
uncorrelated (not independent) noise, call it q1.  so you have (even
if you think G=1):

     y = G*v + q
       = G*v + G1*v + q1
       = (G + G1)*v + q1

And now we're right back to where we started with saying that, in the
linearized model, the input to the comparator is scaled by some
factor, whether it's called "G" or "G+G1", plus an uncorrelated noise,
"q" or "q1".  the difference is semantic, so we may as well just call
the gain "G" and the *uncorrelated* error noise "q".

Okay, now that we've gotten this far: if q is uncorrelated, then we
know that, in the adder of the linearized model, the power or variance
of the two uncorrelated signals, G*v and q, must add to be the power
of the output y, which is always +U or -U.  so the power of y is

     mean{ y^2 } = U^2 = mean{ (G*v)^2 } + mean{ q^2 }            (1)

no matter what.  The power of G*v is

     mean{ (G*v)^2 } = G^2 * mean{ v^2 } .                        (2)

G is the gain we're trying to determine and mean{ v^2 } is determined
from the statistics of the signal coming out of H(z).  The mean{ q^2 }
is a little harder to calculate.

     mean{ q^2 } = mean{ (y - G*v)^2 }
                 = mean{ y^2 } - 2*G*mean{ y*v } + G^2 * mean{ v^2 }
                 = U^2 - 2*G*mean{ y*v } + G^2 * mean{ v^2 }      (3)

If you plug eqs. (2) and (3) into eq. (1) above, you get

     U^2 = mean{ y^2 } = mean{ (G*v)^2 } + mean{ q^2 }
         = G^2 * mean{ v^2 } + U^2 - 2*G*mean{ y*v } + G^2 * mean{ v^2 }

     2*G*mean{ y*v } = 2*G^2 * mean{ v^2 }

     G = mean{ y*v } / mean{ v*v }

Not all positive values of G make equal sense.  If G was a zillion,
and say v was positive, you'd have this huge positive voltage G*v
going into the adder requiring a huge negative voltage of q to bring
it down to a reasonable value of +U.  So the adder is adding together
two signals of opposite polarity but *both* with enormous energy, and
the result is a signal of relatively small energy U^2.  Likewise, if G
was too small, the energies of G*v and q would be too small to add up
to U^2.  There is only one value of G that makes it so that the
energies of G*v and q add up to be the energy of y.

--
r b-j                  rbj@audioimagination.com

"Imagination is more important than knowledge."
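A minimal Matlab sketch of this last result; the zero-mean Gaussian input
standing in for v, its variance, U, and the sample count are all assumptions
made only for the illustration:

% estimate the one-bit quantizer gain G from the statistics of its input
N = 1e6;                        % number of samples (arbitrary)
U = 1;                          % comparator output levels are +/- U
v = 0.3*randn(1, N);            % assumed stand-in statistics for v
y = U*sign(v);                  % the one-bit quantizer
G = mean(y.*v) / mean(v.*v)     % = U*mean(|v|)/mean(v^2), as derived above
q = y - G*v;                    % the residual quantization error
mean(q.*v)                      % ~0: q is uncorrelated with v
G^2*mean(v.^2) + mean(q.^2)     % ~U^2 = mean(y.^2): the powers add up

With this G, and only this G, the residual q is uncorrelated with v and the
powers of G*v and q sum to the power of y, as in eq. (1).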
Reply by December 10, 2004
Randy Yates <randy.yates@sonyericsson.com> writes:
> [...]
> When k != 1, you get a pole in the NTF. This pole will boost the
> quantization noise and make it APPEAR that the noise-shaping is
> working better, but it's really just boosting the noise. In fact,
> I evaluated a first-order loop's in-band noise for k = 1 and k = 4 and
> k = 1 gets over 5 dB better:
>
> k   quantization noise power (dB)
> -   -----------------------------
> 1   -55.5357
> 4   -59.9940
Typo! This should be

k   quantization noise power (dB)
-   -----------------------------
1   -55.5357
4   -49.9940

--
Randy Yates
Sony Ericsson Mobile Communications
Research Triangle Park, NC, USA
randy.yates@sonyericsson.com, 919-472-1124
Reply by December 10, 2004
Hi Vincent,

Let me respond to a couple of your points below.

vinma55@hotmail.com writes:

> Dear DSP friends,
>
>
> Thanks for the reply. Some of the answers are below
>
> 1. I already have a working design (got it from somewhere else and not
> done by us) which uses alpha = 0.5 and k = 4. And I translated that
> design into Matlab code to test.
I'm a pessimist, Vincent. The design you obtained was implemented by
human beings just as capable of being fallible as you or I. I would not
trust for a second that something is correct just because it came from a
working prototype. Now that last statement might seem a bit strange
until you realize that "correct" means "perfectly correct." It is very
possible that a design works "ok" or "pretty good" but still has errors
in it. There is also the possibility that you translated the design
incorrectly into Matlab code.
> I also want to get some solid theory
> to back it up too, so that is why I studied many references trying to
> comfort myself that the design indeed will work.
>
> 2. I know that the quantizer is highly non-linear so the linearized
> model is only a rough means trying to get SOME idea. I am pretty OK
> with that. What makes me wonder is why many references said the
> linearized model works pretty well for loop order <= 2, which is not
> what I observed.
>
> So the question is: if I want to change the
> coefficients of the loop, what do I do? Starting from the linearized
> model and get one set of coefficients, then run extensive simulations to
> fine tune? I know for higher-order loops people do that (and that is
> what many references said too), but for a 2nd-order loop do we still
> need to do that?
I think you're confused about the meaning of the term "linearized model." This technique is a method of representing the quantizer as a sum of the input and a quantization signal: y[n] = x[n] + q[n], where x[n] is the input to the quantizer and y[n] is the output.
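A tiny Matlab sketch of that decomposition; the one-bit quantizer and the
stand-in input signal here are illustrative assumptions, not Vincent's
actual loop:

N = 1000;
x = 0.5*randn(1, N);    % stand-in input to the quantizer (arbitrary)
y = sign(x);            % an assumed one-bit quantizer with levels +/- 1
q = y - x;              % the quantization signal of the linearized model
max(abs((x + q) - y))   % 0: y[n] = x[n] + q[n] holds exactly, by definition
                        % the modeling step is then to treat q[n] as
                        % additive noise with assumed statistics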
> 3. For the linearized model in my Matlab code, I just want to test the > loop response, so I took the quantization noise away.
Err, with no quantization noise, how can you check the loop response? By
"loop response" I presume you mean "closed-loop response", and the
quantization noise is the ONLY thing affected by the closed-loop
response in a properly-designed modulator. In other words, the
quantization noise is the only thing that is filtered in a
properly-designed modulator. So if you take it away, you can't observe
anything. (The signal should pass through with the magnitude response
unmodified.)
> Sure, a white
> noise can be put there, and you can see noise being shaped with both k
> = 1 and k = 4.
Yes, that's the proper way to do it. That is, input white noise into the quantizer, NOT into the input. Actually what is best is to dither the input to the quantizer so that you get rid of those nasty limit cycles.
> 3. What I said "bad" is (with the real one-bit quantizer) that if you
> look at the FFT result of the output v, for k = 1, the spectrum looks
> bad, no noise shaping at all, but with k = 4 a noise shaping is
> apparent. And I also tried to decode (integrate & decimate) the bit
> stream, same conclusion.
I didn't spend time debugging your loop and analyzing it completely, but
there are a couple of things that don't look right to me about it.

1) The update equations are in the wrong order - you should compute y1
before e2 since e2 uses the output from y1. Then you used y1(n-1) in
computing e2(n). That sort of undoes the fact that you've got the loop
equations out of order, but it also adds a delay in the loop which
impacts the overall transfer function(s) (noise and signal). I redid
your loop the "right" way and included it at the end of this post.

If you set alpha = k = 1, your original loop results in an NTF of

   NTF(z) = (1 - z^{-1})^2 / (1 - z^{-1} + z^{-2})

This places a pole in the NTF that isn't normally there (normally for an
integrator-based modulator you only have zeros). If you "fix" the loop
as in my code at the end and include the gain k in the analysis, I get
the following resulting NTF:

   NTF(z) = k*(1 - z^{-1})^2 / (1 + (k-1)*z^{-1})

When k != 1, you get a pole in the NTF. This pole will boost the
quantization noise and make it APPEAR that the noise-shaping is working
better, but it's really just boosting the noise. In fact, I evaluated a
first-order loop's in-band noise for k = 1 and k = 4 and k = 1 gets over
5 dB better:

   k   quantization noise power (dB)
   -   -----------------------------
   1   -55.5357
   4   -59.9940

Finally, you may be getting some bogus quantization noise spectra when
you run your modulator without dither and use a highly correlated signal
such as a sine wave. I've included dither in the code below so that the
spectrum is more consistent independent of the input signal.

Finally, here's the reference I've used a lot in the past:

@BOOK{deltasigmadataconverters,
  title     = "Delta-Sigma Data Converters: Theory, Design, and Simulation",
  author    = "{Steven~R.~Norsworthy,} {Richard~Schreier,} and {Gabor~C.~Temes}",
  publisher = "IEEE Press",
  year      = "1997"}

Hope this helps.
--Randy

function [v,y1,y2,e1,e2,q,dn] = ds(k)

A = 0;
F = 500;
Fs = 8000;
Oversampling_Rate = 128;
%Len = 1024*Oversampling_Rate;
Len = 1024*16;
%Len = 100*Oversampling_Rate;
m = 1:Len;
In = A*cos(2*pi*F/(Fs*Oversampling_Rate)*(m-1));
alpha = 1;
%k = 4;

y1 = zeros(1, Len);
y2 = zeros(1, Len);
v  = zeros(1, Len);
q  = zeros(1, Len);
w  = zeros(1, Len);
dn = 2*k*(rand(1, Len) - 0.5);
%dn = zeros(1, Len);

for n = 2:Len-1
  e1(n) = In(n) - alpha*v(n-1);
  y1(n) = y1(n-1) + e1(n);
  e2(n) = y1(n) - v(n-1);
  y2(n) = y2(n-1) + e2(n);
  w(n)  = y2(n) + dn(n);
  if (w(n) >= 0)
    v(n) = +k;
  else
    v(n) = -k;
  end
  q(n) = In(n) - v(n);
end

figure;
psd(v);
title(sprintf('Signal Spectrum, k = %d, alpha = %d, A = %d', k, alpha, A));

figure;
psd(q);
title(sprintf('Quantization Spectrum, k = %d, alpha = %d, A = %d', k, alpha, A));

figure;
plot(decimate(v,128));
title(sprintf('Decimated Signal, k = %d, alpha = %d, A = %d', k, alpha, A));

%plot(20*log10(abs(fft(v))))

--
Randy Yates
Sony Ericsson Mobile Communications
Research Triangle Park, NC, USA
randy.yates@sonyericsson.com, 919-472-1124
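One possible way to exercise the function above and to look at the two NTFs
quoted in this post; the frequency grid and plotting choices below are
assumptions added for illustration and are not part of Randy's original
code:

% run the dithered 2nd-order loop for the two values of k in question
v1 = ds(1);                      % k = 1 (this pops up the figures ds makes)
v4 = ds(4);                      % k = 4

% plot the magnitude of the two NTFs given above
w = linspace(1e-3, pi, 1024);    % radians/sample
z = exp(1j*w);
figure; hold on;
for k = [1 4]
  NTF = k*(1 - z.^-1).^2 ./ (1 + (k-1)*z.^-1);
  plot(w/pi, 20*log10(abs(NTF)));
end
hold off; grid on;
xlabel('normalized frequency (\times \pi rad/sample)');
ylabel('|NTF|  (dB)');
legend('k = 1', 'k = 4');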
Reply by December 10, 2004
Robert,

You are correct, fixing k while lowering the input amplitude seems
similar to fixing input amplitude but scaling up k. They seem to
generate similar results, but I don't have a quantitative comparison of
the noise performance.

Reply by robert bristow-johnson December 10, 2004
in article 1102648467.866541.285820@f14g2000cwb.googlegroups.com,
vinma55@hotmail.com at vinma55@hotmail.com wrote on 12/09/2004 22:14:

> 3. What I said "bad" is (with the real one-bit quantizer) that if you
> look at the FFT result of the output v, for k = 1, the spectrum looks
> bad, no noise shaping at all, but with k = 4 a noise shaping is
> apparent. And I also tried to decode (integrate & decimate) the bit
> stream, same conclusion.
instead of increasing "k" by a factor of 4, what would happen if you
left k equal to 1 and *decreased* "A" by a factor of 4?  i would expect
an identical noise-shaping behavior.  it's all relative (as long as
there is no polarity reversal) and the only amplitude that "k" can be
measured against is "A".  try to leave "k" constant and investigate the
difference that you get with different input levels.

now imagine what the amplitude of the signal applied to your comparator
is when the input amplitude is decreased.  if the input amplitude is
decreased, so must also be the amplitude of the signal applied to the
comparator.  but then the inherent gain of the comparator (in the
linearized model) happens to be increased by the same amount to make up
for the decrease in level.  that increases the loop gain and causes
more noise to be steered to the higher frequencies.  think about that a
little.

r b-j
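A small Matlab sketch of that thought experiment, reusing the two-integrator
comparator loop discussed elsewhere in this thread (no dither, so the runs
are deterministic; alpha = 1, the run length, and the scale factor of 1/4
are choices made only for this illustration).  With k = 4, A = 1 and with
k = 1, A = 0.25 the comparator makes exactly the same sequence of decisions:

Len = 4096; F = 500; Fs = 8000; OSR = 128; alpha = 1;
m = 1:Len;
bits = zeros(2, Len);
runs = [4 1.0;        % [k  A]: k = 4 with full-scale input
        1 0.25];      %         k = 1 with the input scaled down by 4
for c = 1:2
  k  = runs(c,1);  A = runs(c,2);
  In = A*cos(2*pi*F/(Fs*OSR)*(m-1));
  y1 = zeros(1,Len);  y2 = zeros(1,Len);  v = zeros(1,Len);
  for n = 2:Len
    e1    = In(n) - alpha*v(n-1);
    y1(n) = y1(n-1) + e1;
    e2    = y1(n) - v(n-1);
    y2(n) = y2(n-1) + e2;
    if y2(n) >= 0
      v(n) = +k;
    else
      v(n) = -k;
    end
  end
  bits(c,:) = sign(v);          % just the comparator decisions, +/-1
end
isequal(bits(1,:), bits(2,:))   % 1: the two bit patterns are identical

(Because 1/4 is a power of two, the second run is an exact scaling of the
first in floating point, so the comparisons come out bit-for-bit the same.)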
Reply by December 9, 2004
Dear DSP friends,


Thanks for the reply. Some of the answers are below

1. I already have a working design (got it from somewhere else and not
done by us) which uses alpha = 0.5 and k = 4. And I translated that
design into Matlab code to test. I also want to get some solid theory
to back it up too, so that is why I studied many references trying to
comfort myself that the design indeed will work.

2. I know that the quantizer is highly non-linear so the linearized
model is only a rough means trying to get SOME idea. I am pretty OK
with that. What makes me wonder is why many references said the
linearized model works pretty well for loop order <= 2, which is not
what I observed. So the question is: if I want to change the
coefficients of the loop, what do I do? Starting from the linearized
model and get one set of coefficients, then run extensive simulations to
fine tune? I know for higher-order loops people do that (and that is
what many references said too), but for a 2nd-order loop do we still
need to do that?


3. For the linearized model in my Matlab code, I just want to test the
loop response, so I took the quantization noise away. Sure, a white
noise can be put there, and you can see noise being shaped with both k
= 1 and k = 4.

3. What I said "bad" is (with the real one-bit quantizer) that if you
look at the FFT result of the output v, for k = 1, the spectrum looks
bad, no noise shaping at all, but with k = 4 a noise shaping is
apparent. And I also tried to decode (integrate & decimate) the bit
stream, same conclusion.

4. Lasse, your explanation is very interesting and is quite the same as
what I observed. Do you have any other references for that?
Thanks, friends.

Reply by December 9, 2004
Lasse Langwadt Christensen <langwadt@ieee.org> writes:

> Vincent Ma wrote:
> > Dear friends,
> > I am encountering a confusion in Delta-Sigma Modulator. Most of the
> > references use a linearized model to model the DSM, and an analytical
> > transfer function can be derived. Refer to Shenoi's "Digital Signal
> > Processing in Telecommunications", page 492, the denominator of the
> > transfer function for a second loop DSM is derived like z^2-z+alpha,
> > where alpha is a parameter (alpha times the output fed back to the
> > input of first loop) to control the poles. Shenoi showed with a=1/2,
> > the loop will be stable. I wrote a simple Matlab code (as below) it
> > generated good result with a sin wave, Everything is fine up to here.
> > But when I put a real one-bit quantizer to it (output +-1), it
> > generated a bad result with sin wave amplitude of 1, I then put a
> > scaling factor in the quantizer (output +-k) or I scale down the input
> > sin wave amplitude, it becomes better. (It seems that scaling up
> > quantizer output level generates same effect as scaling down the input
> > level).
> > Many references said simulations need to be done to verify the
> > design,
> > but some references also said that the linearized model generated
> > satisfactory results for loop order <= 2 . I am confused. Any
> > help/explanation will be highly appreciated.
> >
> > snip
>
> It's been quite a while since I've worked with DSMs but I think it's
> something like this:
>
> If you take the noise transfer function from the linear model the gain is
> 1 for small inputs going towards 0 at saturation
> thus for second order the poles stay inside the unit circle for all inputs,
> but at saturation it's at the unit circle.
>
> So for quantizer output of +/-1 and a sine with amplitude 1 as input you are
> at the edge of unstable.
>
> afair first and second order should be unconditionally stable at less than
> fullscale input.
Lasse,

Can you please explain better? The plain-old, integrator-based
second-order modulator I've analyzed has NO poles in its NTF, which is
simply (1 - z^{-1})^2.

--
Randy Yates
Sony Ericsson Mobile Communications
Research Triangle Park, NC, USA
randy.yates@sonyericsson.com, 919-472-1124
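For reference, a short Matlab sketch of that all-zeros NTF; the frequency
grid is an arbitrary plotting choice:

w   = logspace(-3, log10(pi), 512);  % radians/sample
NTF = (1 - exp(-1j*w)).^2;           % two zeros at z = 1, denominator = 1
semilogx(w/pi, 20*log10(abs(NTF))); grid on;
xlabel('normalized frequency (\times \pi rad/sample)');
ylabel('|NTF|  (dB)');               % rises ~40 dB/decade in the signal band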
Reply by December 9, 2004
vinma55@hotmail.com (Vincent Ma) writes:
> [...]
Vincent,

Let me add that I have noted that your modulator structure is a bit off,
but it probably isn't going to make a big difference. Here's where:

   e2(n) = y1(n-1) - v(n);

This forms one of the integrators with the delay in-line with the
feed-forward path, which in turn screws up your signal transfer function
slightly. Replace this line with the following to fix it:

   e2(n) = y1(n) - v(n);

The test I performed which I commented on previously was using your
original modulator.

--
Randy Yates
Sony Ericsson Mobile Communications
Research Triangle Park, NC, USA
randy.yates@sonyericsson.com, 919-472-1124