DSPRelated.com
Forums

how to find the best ADC step size?

Started by lucy December 21, 2004
Hi all,

Suppose my input data follows a Gaussian distribution with mean u and
variance sigma^2.

If I want to design a 6-bit ADC... what should be the optimal step size for
this ADC?

Thanks a lot

-L 


"lucy" <losemind@yahoo.com> wrote in
news:cqacbp$bbc$1@news.Stanford.EDU: 

> Hi all,
>
> Suppose my input data follows Gaussian distribution with mean u and
> variance sigma^2.
>
> If I want to design 6-bit ADC... what should be the optimal step size
> for this ADC?
>
> Thanks a lot
>
> -L

If ADC is Analog to Digital Converter, the distribution of your data has
nothing to do with the resolution. The range of your data does, though.
You can sacrifice range for resolution, and resolution for range. Why 6
bits?

Scott

Scott Seidman wrote:
> "lucy" <losemind@yahoo.com> wrote in
> news:cqacbp$bbc$1@news.Stanford.EDU:
>
>> Suppose my input data follows Gaussian distribution with mean u and
>> variance sigma^2.
>>
>> If I want to design 6-bit ADC... what should be the optimal step size
>> for this ADC?
>
> If ADC is Analog to Digital Converter, the distribution of your data has
> nothing to do with the resolution. The range of your data does, though.
> You can sacrifice range for resolution, and resolution for range.
Well, Gaussian distributions have infinite range, though with ever
decreasing probability. This does sound like a homework problem, and
possibly some important information was left out.

What parameter is being optimized through step size choice? I don't
believe there is only one answer to this problem.

-- glen
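Glen's point that there isn't a single answer is worth stressing: the "optimal" step depends on the criterion being optimized. If the criterion is mean-squared error, the best step can be found numerically. A rough sketch (not from the thread; it assumes a unit-variance Gaussian, a uniform 6-bit midrise quantizer with saturation, and a simple grid search):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(200_000)   # samples from N(0, 1)

def mse_for_step(delta, bits=6):
    levels = 2 ** bits
    # uniform midrise quantizer: cell index, clipped at the overload points
    idx = np.floor(x / delta)
    idx = np.clip(idx, -levels // 2, levels // 2 - 1)
    xq = (idx + 0.5) * delta       # reconstruct at cell midpoints
    return np.mean((x - xq) ** 2)

steps = np.linspace(0.02, 0.5, 200)
best = min(steps, key=mse_for_step)
print(f"best step ~ {best:.3f} sigma")
```

With 64 levels the search tends to land near 0.1 sigma, i.e., an overload point around 3.3 sigma; a non-uniform (Lloyd-Max) quantizer matched to the Gaussian would do slightly better still.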
glen herrmannsfeldt <gah@ugcs.caltech.edu> wrote in
news:cqaejd$4gl$1@gnus01.u.washington.edu: 

> Well, Gaussian distributions have infinite range, though with
> ever decreasing probability. This does sound like a homework
> problem, and possibly some important information was left out.
>
> What parameter is being optimized through step size choice?
> I don't believe there is only one answer to this problem.
>
> -- glen
Lucy isn't a student, but she does hold something near a record for thread
initiations in cssm. Frankly, she tends to use this newsgroup instead of
hauling out a textbook, or even doing a Google search to try to find what
she needs to know. If I had noticed it was her post, I probably would have
skipped it.

Scott
"Scott Seidman" <namdiesttocs@mindspring.com> wrote in message 
news:Xns95C6C72198B8Ascottseidmanmindspri@130.133.1.4...
> Lucy isn't a student, but she does hold something near a record for
> thread initiations in cssm. Frankly, she tends to use this newsgroup
> instead of hauling out a textbook, or even doing a google search to try
> to find what she needs to know. If I had noticed it was her post, I
> probably would have skipped it.
>
> Scott
Hi Scott,

I did not know that my posts were so disgusting... to you...

But I did try to ask questions whose solutions are not easily found in
textbooks... some problems are in fact initiated by myself. I just like
problem-solving... It is hard to find people interested in problem-solving
nearby... I thought a newsgroup might be a better place... If you find my
problems so trivial that a little googling plus book hunting can work them
out, please do let me know... please let me know in which book you can find
the solution to the above problem. I'd really like to know... A newsgroup
serves as pointers, right?
Hello Lucy,

Try looking in "Digital Processing of Speech Signals" by Rabiner & Schafer.
There is a bunch of info on matching the quantization to the distribution
of a signal. Basically you are maximizing the entropy, i.e., gaining the
most info per bit that you can.

Clay




"lucy" <losemind@yahoo.com> wrote in message 
news:cqacbp$bbc$1@news.Stanford.EDU...
> Hi all,
>
> Suppose my input data follows Gaussian distribution with mean u and
> variance sigma^2.
>
> If I want to design 6-bit ADC... what should be the optimal step size
> for this ADC?
>
> Thanks a lot
>
> -L
"glen herrmannsfeldt" <gah@ugcs.caltech.edu> wrote in message 
news:cqaejd$4gl$1@gnus01.u.washington.edu...
> Well, Gaussian distributions have infinite range, though with
> ever decreasing probability. This does sound like a homework
> problem, and possibly some important information was left out.
Hello Glen,

One way is to divide the area under the bell curve into equal area
partitions, in this case 2^6 of them. The abscissal value for each of the
partitions becomes a transition point for the quantization. This maximizes
the entropy (i.e., the information) since each quantization level will be
equally represented, and the entropy is maximized when each state has
equal probability. This is not unlike Huffman's idea, where his coding
scheme attempts to make the length of each symbol times its frequency of
occurrence be the same for all symbols.

Clay
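Clay's equal-area construction is easy to sketch numerically. A minimal illustration (my own, not from the thread) using Python's standard-library inverse normal CDF, assuming mean u = 0 and sigma = 1:

```python
from statistics import NormalDist

# Split the Gaussian CDF into 2^6 equal-probability cells; the cell
# edges are the quantizer's transition points.
bits = 6
n = 2 ** bits
dist = NormalDist(mu=0.0, sigma=1.0)

# Interior transition points: inverse CDF evaluated at k/n, k = 1..n-1.
edges = [dist.inv_cdf(k / n) for k in range(1, n)]

# Each of the n cells now carries probability 1/n, maximizing entropy.
# The outermost transitions are symmetric about the mean.
print(edges[0], edges[-1])
```

Note the resulting transition points are not uniformly spaced: cells are narrow near the mean, where the density is high, and wide in the tails.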
> What parameter is being optimized through step size choice?
> I don't believe there is only one answer to this problem.
>
> -- glen
"Clay S. Turner" <Physics@Bellsouth.net> writes:
> [...]
> One way is to divide the area under the bell curve into equal area
> partitions. In this case 2^6 of them. The abscissal value for each of
> the partitions becomes a transition point for the quantization. This
> maximizes the entropy (i.e., the information) since each quantization
> will be equally represented. And the entropy is maximized when each
> state has equal probability.
Isn't this a mapping from Gaussian to uniform? And we all know that uniform has the greatest entropy. Is this related to the concept of "vector quantization"?
> This is not unlike Huffman's idea where his coding scheme attempts to
> make the length of each symbol times its frequency of occurrence be the
> same for all symbols.
I'm having trouble with this. Yes, I see that a Huffman code attempts to
make (symbol length)*(symbol frequency) constant over all symbols. How
does that change anything about the underlying distribution, though?

--
Randy Yates
Sony Ericsson Mobile Communications
Research Triangle Park, NC, USA
randy.yates@sonyericsson.com, 919-472-1124
"lucy" <losemind@yahoo.com> wrote in
news:cqajiv$hsm$1@news.Stanford.EDU: 

> Hi Scott,
>
> I did not know that my posts were so disgusting... to you...
>
> But I did try to ask questions that are not easily found solutions in
> textbooks... some problems are in fact initiated by myself. I just
> like problem-solving... It is hard to find guys interested in
> problem-solving nearby... I thought newsgroup might be a better
> place... if you find my problems are so trivial that a few googling +
> book hunting can work out, please do let me know... please let me
> know on which book you can find the solution to the above problem. I'd
> really like to know ... newsgroup serves as pointers, right?
lucy-

I don't find your posts disgusting, just somewhat tedious. The netscan
page lists your posts in cssm under your currently used email address (and
I seem to remember at least one more address before the current one) for
the time period between 7/1/04 and 9/30/04. In that three-month period,
you initiated 77 threads, and your first use of that name was on 7/28!
That's more threads, by about a factor of 5, than your nearest competitor.

As many of those posts were requests for very basic information about GUI
programming in Matlab (which participants seemed to have taught you, quite
patiently and generously), my impression during that binge of questioning
was that you were fairly bright, but embarked on a project in an
unfamiliar environment that you didn't want to take the time to learn.
Then the posts moved over to signal processing, where you appeared to have
absolutely no training, but you expected to learn this stuff on cssm.
Your signal processing questions covered basic FFT properties, the basic
sampling theorem, filter design, image processing, and some other topics,
every one of which is well covered in one or two textbooks that you can
find on just about every library's shelves. I'd suggest "Signals and
Systems" by Oppenheim and Willsky, "Digital Signal Processing" by
Oppenheim and Schafer, and the lovely image processing book that can be
found on the MathWorks site under textbooks. My recommendations are
somewhat dated, as I'm sure more modern references on signals and systems
cover a better mix of analog and digital techniques. Some of the docs for
the Matlab toolboxes have fine reference lists, and they don't put those
lists in just so end users can ignore them.

Personally, I'd recommend you consider taking a course on the topic, as it
will be a time saver in the long run. Help is help, but when you post 50
threads on one subject, consider going out and learning it right.
Also, a good rule of thumb is to consider spending 45 minutes or an hour
trying to do something (like everybody who contributes to your threads did
at some time) before asking for help with it. Then try googling for it to
see if you can help yourself without passing the hat. In fact, if you
google this particular question in usenet groups, you'll find that you've
already asked almost exactly the same question, but for the more complex
2D case! CSSM is a fine resource to get you over the rough spots, but it's
not the participants' responsibility to provide you with on-the-job
training in areas you aren't proficient in.

Scott
Hello Randy,
comments below:

"Randy Yates" <randy.yates@sonyericsson.com> wrote in message 
news:xxppt12cy6f.fsf@usrts005.corpusers.net...
> "Clay S. Turner" <Physics@Bellsouth.net> writes:
>> [...]
>> One way is to divide the area under the bell curve into equal area
>> partitions. In this case 2^6 of them. The abscissal value for each of
>> the partitions becomes a transition point for the quantization. This
>> maximizes the entropy (i.e., the information) since each quantization
>> will be equally represented. And the entropy is maximized when each
>> state has equal probability.
>
> Isn't this a mapping from Gaussian to uniform? And we all know that
> uniform has the greatest entropy.
Basically, except for the lack of a Jacobian. And going to a form which
maximizes our entropy is the goal.
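The Gaussian-to-uniform mapping Randy alludes to is the probability-integral transform: pushing samples through their own CDF yields uniformly distributed values. A quick numerical check (my own sketch, not from the thread):

```python
from statistics import NormalDist
import random

random.seed(1)
dist = NormalDist()  # standard normal, mean 0, sigma 1

# Apply the Gaussian CDF to Gaussian samples.
u = [dist.cdf(random.gauss(0, 1)) for _ in range(50_000)]

# If the transform works, u is flat on (0, 1): mean ~ 0.5, variance ~ 1/12.
mean = sum(u) / len(u)
print(round(mean, 3))
```

This is the sense in which the equal-area quantizer makes every output code equally likely: equal-width cells in the uniform domain map back to equal-probability cells in the Gaussian domain.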
> Is this related to the concept of "vector quantization"?
Probably, but I haven't done much with this, so I'll have to guess here.
Since the concept of maximizing entropy is a wonderful way of handling
quantization without history, I would be surprised if it hasn't been used
for quantizing vocoder vectors. Remember, in all of this process we are
assuming no sample-to-sample correlation, and in speech this is not quite
true.
>> This is not unlike Huffman's idea where his coding scheme attempts to
>> make the length of each symbol times its frequency of occurrence be
>> the same for all symbols.
>
> I'm having trouble with this. Yes, I see that a Huffman code attempts
> to make (symbol length)*(symbol frequency) constant over all symbols.
> How does that change anything about the underlying distribution,
> though?
The idea is to try to make each symbol contribute equally to the overall
process. Imagine looking at your data after a huge number of symbols has
been received: the idea is to make the info provided by each type of
symbol contribute equally. And in data compression, the idea is to find
the minimum size for all of the data for a fixed information content.

I hope this helps.

Clay
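Clay's Huffman analogy can be made concrete with a toy example (mine, not from the thread). For dyadic probabilities, Huffman code lengths come out exactly equal to -log2(p), so each symbol's length matches its information content:

```python
import heapq
from math import log2

# Dyadic source: probabilities are all powers of 1/2.
probs = {"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125}

# Build the Huffman tree with a min-heap of (probability, tiebreak, symbols).
heap = [(p, i, [s]) for i, (s, p) in enumerate(probs.items())]
heapq.heapify(heap)
lengths = {s: 0 for s in probs}
count = len(heap)
while len(heap) > 1:
    p1, _, s1 = heapq.heappop(heap)
    p2, _, s2 = heapq.heappop(heap)
    for s in s1 + s2:          # every symbol under a merge gains one bit
        lengths[s] += 1
    count += 1                 # fresh tiebreak key for the merged node
    heapq.heappush(heap, (p1 + p2, count, s1 + s2))

for s, p in probs.items():
    print(s, lengths[s], -log2(p))   # code length vs. information content
```

For non-dyadic probabilities the lengths can only approximate -log2(p) (they must be integers), which is why Huffman makes length times frequency only roughly, not exactly, constant in general.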