I recently examined the feasibility of generating noise for testing a communication system and came to the disappointing conclusion that generating a digital signal and then converting it to analog via an ADC will never be capable of generating a stationary, continuous random process (signal) with an arbitrary distribution.

Here's my reasoning - please correct me if I'm wrong.

Let's assume the "digital" samples are infinite resolution. This is the best case, since introducing finite resolution restricts the problem even further. Let the digital samples be distributed with some arbitrary (desired) pdf f(x).

When that signal is then converted to analog via an ADC, the action of the reconstruction filter will be to scale and sum some number of the digital samples together. The scalings and the number of samples summed will probably both be functions of time. Then, in general, by the Central Limit Theorem, these sums will tend toward Gaussian. Further, the output process probably won't even be stationary. The precise characterization depends heavily on the type of reconstruction filter.

An interesting situation occurs when the reconstruction filter is the ideal lowpass. At the sample points (t = n*T), the analog output consists of just one digital sample, so at the sample points the analog output will have pdf f(x). However, between the sample points, varying numbers and amounts of the original digital samples will be scaled and summed, and almost certainly the analog output exactly halfway between the sample points will be very Gaussian-like.

Suppose we use a 1-bit converter and a 1-bit PN sequence (let's say it's perfectly uncorrelated and uniformly distributed, with

  f(x) = 0.5 * delta(x-1) + 0.5 * delta(x+1),

and let's use the ideal lowpass filter for reconstruction). Then the analog output will be particularly pathological, with a simple two-level discrete PDF at the sample points, something very close to Gaussian halfway between the sample points, and something in between at the other instants between the sample points. No? Yes?

I thought this was a VERY interesting problem, and I am surprised (once it was asked) that it really isn't treated in the texts (Papoulis, Leon-Garcia, etc.). The texts discuss the PSD of such systems, but not the distribution of the resulting continuous-time random process.

Comments?
--
% Randy Yates        % "My Shangri-la has gone away, fading like
%% Fuquay-Varina, NC %  the Beatles on 'Hey Jude'"
%%% 919-577-9882     %
%%%% <yates@ieee.org> % 'Shangri-La', *A New World Record*, ELO
http://www.digitalsignallabs.com
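The midpoint claim above is easy to probe numerically. Below is a minimal sketch (assuming NumPy; the window length, trial count, and seed are arbitrary choices) of the ±1 PN sequence reconstructed with the ideal lowpass, evaluated halfway between sample points:

```python
import numpy as np

# Ideal-lowpass reconstruction: y(t) = sum_k x[k] * sinc(t - k).
# At t = n the output is exactly x[n] (two-valued PDF); halfway between
# samples it is a sum of many sinc-weighted +/-1's.
rng = np.random.default_rng(0)
N, trials = 2001, 4000                    # window length and realizations
k = np.arange(N)
w = np.sinc((N - 1) / 2 + 0.5 - k)        # sinc weights at a midpoint

x = rng.choice([-1.0, 1.0], size=(trials, N))   # +/-1 PN realizations
mid = x @ w                                     # midpoint value of each

# sum_k sinc(t - k)^2 = 1 for all t, so the variance stays 1 at midpoints.
# For +/-1 inputs, E[y^4] = 3 - 2*sum(w^4) = 7/3, i.e. kurtosis ~ 2.33:
# closer to Gaussian (3) than the two-point input (1), but not Gaussian.
kurt = np.mean(mid**4) / mid.var() ** 2
print("midpoint std ~", round(mid.std(), 2), " kurtosis ~", round(kurt, 2))
```

So the sketch agrees with the post: the midpoint PDF is Gaussian-like but measurably non-Gaussian, and the first-order statistics change with the phase t mod T.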
Generating a continuous random variable with an arbitrary distribution
Started by ●April 18, 2008
Reply by ●April 19, 2008
Randy Yates <yates@ieee.org> writes:
> I recently examined the feasibility of generating noise for testing a
> communication system and came to the disappointing conclusion that
> generating a digital signal and then converting it to analog via an
> ADC

Doh! Substitute "DAC" for "ADC" in this entire post. Sorry!
--
% Randy Yates        % "I met someone who looks alot like you,
%% Fuquay-Varina, NC %  she does the things you do,
%%% 919-577-9882     %  but she is an IBM."
%%%% <yates@ieee.org> % 'Yours Truly, 2095', *Time*, ELO
http://www.digitalsignallabs.com
Reply by ●April 19, 2008
Randy Yates wrote:
> I recently examined the feasibility of generating noise for testing a
> communication system and came to the disappointing conclusion that
> generating a digital signal and then converting it to analog via an ADC
> will never be capable of generating a stationary, continuous random
> process (signal) with an arbitrary distribution.
>
> Let's assume the "digital" samples are infinite resolution. This is the
> best case since introducing a finite resolution limits the problem even
> further. Let the digital samples be distributed with some arbitrary
> (desired) pdf f(x).

I suppose I agree in the general case, but maybe not in specific cases. For example, 1/f noise falls off as, well, 1/f, and so could have an effect at fairly large frequencies. But if one, for example, wants to test the effects of noise on an audio system, then some reasonable factor above 20kHz should be enough.

Resolution might be important, but 24 bits shouldn't be too far off. (What is state-of-the-art in DACs?) One should sample fast enough that the effects of the filter aren't too significant.

-- glen
Reply by ●April 19, 2008
Randy Yates wrote:
> I recently examined the feasibility of generating noise for testing a
> communication system and came to the disappointing conclusion that
> generating a digital signal and then converting it to analog via an ADC
> will never be capable of generating a stationary, continuous random
> process (signal) with an arbitrary distribution.

DAC, but never mind. (I do that too.) Of course you're right. Aside from the discrete amplitude (which you recognize but choose to ignore), there is the bandwidth limitation imposed by the discrete nature of a digital signal. There's no getting around that.

Jerry
--
Engineering is the art of making what you want from things you can get.
Reply by ●April 19, 2008
On Fri, 18 Apr 2008 22:13:38 -0400, Randy Yates wrote:

> When that signal is then converted to analog via an ADC, the action of
> the reconstruction filter will be to scale and sum some number of the
> digital samples together. [...] Then, in general, by the Central Limit
> Theorem, these samples will tend toward Gaussian. Further, the output
> process probably won't even be stationary.

The Central Limit Theorem applies for large numbers of random variables summed together, and then only for distributions with finite variances.

You may have to accept that your output PDF isn't exactly discrete-valued, but you could come arbitrarily close. Similarly, while the output of the filter would be strictly non-stationary (because it's sourced by a time-varying system), you could get arbitrarily close to a stationary process.

> An interesting situation occurs when the reconstruction filter is the
> ideal lowpass. [...] almost certainly the analog output at exactly
> halfway between the sample points will be very Gaussian-like.

Possibly, possibly not. Regardless, if you needed some non-Gaussian PDF within some bandwidth, I think that with the right combination of oversampling, reconstruction filter design, and data massaging, you could achieve your goal to any finite degree of accuracy.

> If we use a 1-bit converter and a 1-bit PN sequence [...] then the
> analog output will be particularly pathological [...]

Yes, which would indicate that your arbitrary random-process generator shouldn't use the above-mentioned architecture!
--
Tim Wescott
Control systems and communications consulting
http://www.wescottdesign.com

Need to learn how to apply control theory in your embedded system?
"Applied Control Theory for Embedded Systems" by Tim Wescott
Elsevier/Newnes, http://www.wescottdesign.com/actfes/actfes.html
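On the digital side, hitting an arbitrary target PDF sample-by-sample is routine; one standard route is inverse-transform sampling. A minimal sketch assuming NumPy (the Laplace target and the scale b = 1 are just illustrative choices):

```python
import numpy as np

# Inverse-transform sampling: if U ~ Uniform(0,1) and F is the target CDF,
# then F^{-1}(U) is distributed with pdf f = F'.  Target here: a zero-mean
# Laplace distribution, a simple non-Gaussian PDF one might ask for.
rng = np.random.default_rng(1)

def laplace_inv_cdf(u, b=1.0):
    # Closed-form inverse CDF of Laplace(0, b)
    return -b * np.sign(u - 0.5) * np.log(1.0 - 2.0 * np.abs(u - 0.5))

u = rng.random(200_000)
x = laplace_inv_cdf(u)
# Laplace(0, b) has mean 0 and variance 2*b^2
print("mean ~", round(x.mean(), 2), " var ~", round(x.var(), 2))
```

Whether those per-sample statistics survive the trip through a reconstruction filter is, of course, exactly the point under debate in this thread.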
Reply by ●April 19, 2008
On Apr 18, 7:13 pm, Randy Yates <ya...@ieee.org> wrote:

> I recently examined the feasibility of generating noise for testing a
> communication system and came to the disappointing conclusion that
> generating a digital signal and then converting it to analog via an ADC
> will never be capable of generating a stationary, continuous random
> process (signal) with an arbitrary distribution. [...]

I don't understand why so many people seem to lose track of their basic knowledge when they contemplate noise in DSP. We know how to calculate and measure quantization noise in ADCs and DACs. If quantization noise is well under the signal level at the frequencies of interest, it doesn't present any more of a problem for noise than for determinate signals. Picking a 1-bit DAC may be a poor place to start, but we can tell from an analysis of the quantization noise whether it is a problem.

Digital noise generators have finite power supplies that inherently cause output clipping. Digital noise generators must use bandwidth-limiting filters to allow higher spectral levels while controlling clipping. The noise generation process in digital noise generators has inherent clipping because of the limited range of noise amplitude the process can be represented with. Now go back and substitute "analog" for "digital" in the three previous sentences, and you will have an accurate description of our traditional analog noise generators.

There are enough limitations in the instrumentation that the digital question is not a big one for the things we have been using noise generators for (remember the GR1390?), if we remember to apply the DSP principles we are used to here.

The real issue may be that we have actually believed the revered analog generators were doing something far beyond their actual performance. It may be far beyond the capability of DSP to provide the performance we can dream of (and believed we had), and knowledge of DSP will not be a comfort here. It is also important to remember that no *finite* sample of noise - analog, digital, or ideal - accurately represents the statistical measures of the distribution it is drawn from.

DSP does allow a variety of parameters of noise generation to be controlled precisely, which is something that digital might do better.

Dale B. Dalrymple
http://dbdimages.com
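Dale's point that quantization noise is something we can calculate rather than fear can be sketched with the familiar 6.02*N + 1.76 dB rule for a full-scale sine through an N-bit uniform quantizer. Assuming NumPy; the test frequency is an arbitrary non-harmonic choice:

```python
import numpy as np

# Measure the SQNR of an N-bit uniform (mid-tread) quantizer driven by a
# full-scale sine, and compare with the 6.02*N + 1.76 dB rule of thumb.
def sqnr_db(nbits, samples=1_000_000):
    t = np.arange(samples)
    x = np.sin(2 * np.pi * 0.123456 * t)   # full-scale sine in [-1, 1]
    q = 2.0 / (2 ** nbits)                 # step size over the 2.0 range
    xq = np.round(x / q) * q               # uniform quantizer
    e = xq - x                             # quantization error
    return 10 * np.log10(np.mean(x**2) / np.mean(e**2))

for nbits in (8, 16):
    print(nbits, "bits:", round(sqnr_db(nbits), 1), "dB;  rule:",
          round(6.02 * nbits + 1.76, 1), "dB")
```

By this measure, glen's 24-bit figure would put the quantization floor roughly 146 dB down, far below anything the analog instruments Dale mentions could resolve.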
Reply by ●April 19, 2008
Hi Tim,

Tim Wescott <tim@seemywebsite.com> writes:

> The Central Limit Theorem applies for large numbers of random variables
> summed together,

... which is precisely the situation we have in the case of most analog reconstruction filters, as they are IIR.

> and then only for distributions with finite variances.

Tim, I had hoped to bypass these sorts of nitpicks. From the tone of my post, I thought it would be clear that this isn't a rigorous treatment of the topic but rather an attempt to apply as much theory as necessary to find the result. However, what you say is true - we must theoretically limit the input distributions. Since the input will (in a practical system) be limited in amplitude, that doesn't seem to be a problem here.

> You may have to accept that your output PDF isn't exactly discrete-
> valued, but you could come arbitrarily close.

I have no idea what your point is. You seem to be saying that we could come arbitrarily close to a discrete PDF. Is that correct? However, that wasn't my goal.

> Similarly, while the output of the filter would be strictly non-
> stationary (because it's sourced by a time-varying system), you could
> get arbitrarily close to a stationary process.

Explain to me precisely HOW you could come arbitrarily close to a stationary process if the distribution isn't Gaussian. It sounds to me, Tim, that after I've shown how you cannot "jigger a thingamabob," you've responded with "you can jigger a thingamabob," and done so without saying exactly how. Perhaps I'm missing something.

> Possibly, possibly not. Regardless, if you needed some non-Gaussian PDF
> within some bandwidth, I think with the right combination of
> oversampling, reconstruction filter design, and data massaging, you
> could achieve your goal to any finite degree of accuracy.

You think? OK... Exactly HOW do you think?! ANY reconstruction filter will result in summing several of the input samples together. And those sums will be varying. So how can the output ever have a constant, stationary PDF?
--
% Randy Yates        % "My Shangri-la has gone away, fading like
%% Fuquay-Varina, NC %  the Beatles on 'Hey Jude'"
%%% 919-577-9882     %
%%%% <yates@ieee.org> % 'Shangri-La', *A New World Record*, ELO
http://www.digitalsignallabs.com
Reply by ●April 19, 2008
On Sat, 19 Apr 2008 10:53:06 -0400, Randy Yates wrote:

> ... which is precisely the situation we have in the case of most analog
> reconstruction filters, as they are IIR.

But an IIR filter weighs the more recent samples more heavily, so even though your current output value is affected by an infinite number of past input values, it is dominated by the last few input values. So the PDF of your output is "yanked around" by those last few input values, and you can use that to (within limits) make sure that your output PDF is what you want, subject to some interaction between the desired bandwidth, the desired PDF, and your required sampling rate.

> However, what you say is true - we must theoretically limit the input
> distributions. Since the input will (in a practical system) be limited
> in amplitude, that doesn't seem to be a problem here.

It wasn't intended as a nitpick so much as an example of where the Central Limit Theorem falls down. The Central Limit Theorem only says what happens in the limit as you sum an infinite number of independent, identically distributed random variables with finite variance; it doesn't, by itself, tell you how many variables have to be summed before the approximation to a Gaussian starts getting close - sometimes that number is huge.

> I have no idea what your point is. You seem to be saying that we could
> come arbitrarily close to a discrete PDF. Is that correct? However, that
> wasn't my goal.

You want to generate arbitrary PDFs. I strongly suspect that a discrete PDF is going to be the most difficult to generate, so I think that examples of how to adequately approximate one would be a pretty strong argument for being able to adequately approximate just about anything you want.

> Explain to me precisely HOW you could come arbitrarily close to a
> stationary process if the distribution isn't Gaussian.

You seem to be laboring under the misapprehension that a stationary process must be Gaussian. It doesn't have to be: stationarity constrains how the statistics vary over time, not what shape the distribution takes.

> It sounds to me, Tim, that after I've shown how you cannot "jigger a
> thingamabob," you've responded with "you can jigger a thingamabob," and
> done so without saying exactly how. Perhaps I'm missing something.

I'm sorry, it appears that your proofs got inadvertently deleted from your post -- perhaps you should review what you actually sent. All I see in your post is a statement that you've thought about it and have convinced yourself that you can't do what you want. I'm giving counterexamples, but since your rigorous proofs were lost, I was assuming that countering vague assertions of supposed fact with thought experiments that, with a bit of effort, show those assertions to be untrue would be sufficient. Please publish your paper, and I'll be happy to review it.

> You think? OK... Exactly HOW do you think?! ANY reconstruction filter
> will result in summing several of the input samples together. And those
> sums will be varying. So how can the output ever have a constant,
> stationary PDF?

Eh? A stationary random process has a varying output, else it wouldn't be a random process. "Stationary" means that a process has a probability distribution that is independent of time, not that it has an output that doesn't change.
--
Tim Wescott
Control systems and communications consulting
http://www.wescottdesign.com

Need to learn how to apply control theory in your embedded system?
"Applied Control Theory for Embedded Systems" by Tim Wescott
Elsevier/Newnes, http://www.wescottdesign.com/actfes/actfes.html
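Tim's claim that an IIR filter's output PDF is dominated by, and can be steered with, the last few inputs can be made concrete with a one-pole filter y[n] = a*y[n-1] + x[n] driven by ±1 noise. A rough sketch assuming NumPy; the pole values are illustrative (kurtosis 1.8 marks a uniform PDF, 3.0 a Gaussian):

```python
import numpy as np

# One-pole IIR driven by +/-1 noise: y[n] = a*y[n-1] + x[n].  The output is
# sum_k a^k x[n-k], so the pole sets how many past inputs matter.  a = 0.5
# gives an exactly uniform PDF on [-2, 2]; a near 1 sums many comparable
# terms and the PDF heads toward Gaussian, as the CLT predicts.
rng = np.random.default_rng(3)

def output_kurtosis(a, n=500_000):
    x = rng.choice([-1.0, 1.0], size=n)
    y = np.zeros(n)
    for i in range(1, n):
        y[i] = a * y[i - 1] + x[i]
    y = y[1000:]                               # discard the startup transient
    return np.mean(y**4) / np.mean(y**2) ** 2  # 1.8 uniform, 3.0 Gaussian

print("a=0.50:", round(output_kurtosis(0.50), 2))   # ~1.8 (uniform)
print("a=0.95:", round(output_kurtosis(0.95), 2))   # ~2.9 (near Gaussian)
```

The same ±1 input comes out uniform, near-Gaussian, or anything between, depending on where the pole sits; that filter-shape knob is exactly what Tim is pointing at.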
Reply by ●April 19, 2008
On Sat, 19 Apr 2008 01:50:16 -0700, dbd wrote:

> There are enough limitations in the instrumentation that the digital
> question is not a big one for the things we have been using noise
> generators for (remember the GR1390?), if we remember to apply the DSP
> principles we are used to here. [...]
>
> DSP does allow a variety of parameters of noise generation to be
> controlled precisely, which is something that digital might do better.

Thanks Dale, that's a better way of stating what I was trying to say.

Even more succinctly: if you treat the output of a hypothetical analog noise generator as an arbitrary wave, then you could reproduce it with an arbitrary wave generator, right? So a digital-based random noise generator would be nothing more than the DAC hardware from an arbitrary wave generator, with a DSP algorithm spewing out the appropriate numbers to the DAC.
--
Tim Wescott
Control systems and communications consulting
http://www.wescottdesign.com

Need to learn how to apply control theory in your embedded system?
"Applied Control Theory for Embedded Systems" by Tim Wescott
Elsevier/Newnes, http://www.wescottdesign.com/actfes/actfes.html
Reply by ●April 19, 2008
On Fri, 18 Apr 2008 22:13:38 -0400, Randy Yates <yates@ieee.org> wrote:

> I recently examined the feasibility of generating noise for testing a
> communication system and came to the disappointing conclusion that
> generating a digital signal and then converting it to analog via an ADC
> will never be capable of generating a stationary, continuous random
> process (signal) with an arbitrary distribution.

If such a signal can exist at all, it must have finite bandwidth; so suppose you have that signal, already in analog form. Couldn't you digitize it, following the necessary rules, and then put it through a DAC, again following the necessary rules, and be guaranteed a perfect reconstruction? If so, then you could generate it digitally with a digital recording of the original signal, and presumably could find a way to bypass having the original analog form.

-- John
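John's "necessary rules" argument can be checked in miniature: take a bandlimited "analog" signal, sample it above the Nyquist rate, and rebuild it by (windowed) sinc interpolation. A sketch assuming NumPy; the tone frequencies are confined to [0.05, 0.4] cycles/sample so the signal is safely bandlimited, and the residual error comes only from truncating the sinc sum:

```python
import numpy as np

# A random bandlimited "analog" signal: a handful of tones below Nyquist.
rng = np.random.default_rng(4)
freqs = rng.uniform(0.05, 0.4, size=8)     # cycles per sample period
amps = rng.normal(size=8)
phis = rng.uniform(0, 2 * np.pi, size=8)

def sig(t):
    return sum(a * np.cos(2 * np.pi * f * t + p)
               for a, f, p in zip(amps, freqs, phis))

n = np.arange(-2000, 2001)                 # the samples fed to the "DAC"
x = sig(n)
t = np.linspace(-5.0, 5.0, 101)            # evaluate between sample points
recon = np.array([np.dot(x, np.sinc(ti - n)) for ti in t])
err = np.max(np.abs(recon - sig(t)))
print("max reconstruction error:", err)    # small; shrinks as window grows
```

The reconstruction is as good between the sample points as at them, which is John's point: a bandlimited waveform, noise or not, survives the sample/reconstruct round trip.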






