
Continuous-time DSP with no sampling

Started by Yannis November 2, 2005
Yannis wrote:
> Yes, in principle no levels are skipped, since there is no "time between samples" - there are no samples, and everything is done in continuous time. This, indeed, if implemented brute-force, makes the speed-resolution product low. Now, one could skip *some* steps with small penalty, but I will refrain from proposing this until there is more work to back it. And, indeed, the reason that Gray codes will not save us directly, is that the signals from the taps could be arriving at instances differing by arbitrarily small amounts, and this can produce glitches - it is a problem we are working on.
>
> The above having being said, I would not be as quick as to pronounce the technique "truly and fundamentally impractical", as you have done. In my reply to your first message, I offered an example of earlier skeptical attitudes with another idea, and what eventually happened. I will give you another example: In the early 80s, we had no resistors to speak of on CMOS chips, so we proposed a method to make filters using
CMOS devices have always had resistors. Before CMOS came PMOS and NMOS, which depended very heavily on resistors.
> the (heavily nonlinear) MOSFETs in lieu of resistors, and still attain
I'm not sure who introduced MOSFETs as resistors, to replace things like poly resistors, but surely that goes back further than 1985. Still, it didn't suddenly enable mixed signal. Mixed signal was working just fine by other methods.
> input-output linearity. People thought it was truly and fundamentally impractical back then, too. But it worked (see the IEEE Journal of
Rubbish
> Solid-State Circuits, Dec. 85), and such filters became a mainstream technology and were produced in the millions. As it turned out, they
Mixed signal CMOS was common before that, and mixed signal NMOS long before that. The first DSP to catch a lot of people's attention (although it was a commercial failure) was the Intel 2920 - a device with amps, A/D, DSP, and D/A all on the same device. That was around 1979.
> found a niche we had not even dreamt of when we propose them: equalization in the read electronics of computer disk drives.
and a thousand other places, many of which were being worked on from the very earliest days of CMOS.

It wasn't the apparent impossibility of analogue circuits in CMOS which held it back. Everyone was working on it. The speed and noise were the real issues people had to overcome for practical application. That took a while. Getting the power consumption of the digital stuff down, so that it didn't generate so much noise for the analogue stuff, was also an issue. A lot of early mixed signal was ECL. ECL consumes lots of current, but it is a constant current. It results in very little noise.
> I am not saying all this to brag, but rather to make people refrain from dispensing with the idea, so that they will take the time to keep offering us their valuable input.
>
> Yannis Tsividis
Steve
In article <dkeamo$c3u$1@nnews.pacific.net.hk>, Steve Underwood says...
>
> Yannis wrote:
>> Yes, in principle no levels are skipped, since there is no "time between samples" - there are no samples, and everything is done in continuous time. This, indeed, if implemented brute-force, makes the speed-resolution product low. Now, one could skip *some* steps with small penalty, but I will refrain from proposing this until there is more work to back it. And, indeed, the reason that Gray codes will not save us directly, is that the signals from the taps could be arriving at instances differing by arbitrarily small amounts, and this can produce glitches - it is a problem we are working on.
>>
>> The above having being said, I would not be as quick as to pronounce the technique "truly and fundamentally impractical", as you have done. In my reply to your first message, I offered an example of earlier skeptical attitudes with another idea, and what eventually happened. I will give you another example: In the early 80s, we had no resistors to speak of on CMOS chips, so we proposed a method to make filters using
>
> CMOS devices have always had resistors. Before CMOS came PMOS and NMOS, which depended very heavily on resistors.
I'm sure I don't need to remind an expert on analogue such as yourself that filters which use the bare resistors available in MOS processes would have very poorly controlled RC time constants and therefore very poorly controlled cutoff frequencies. Using MOS devices instead of resistors is one way of enabling some control of the time constant, but at the cost of linearity unless a proper structure is used. Search the literature if you're curious as to Yannis' contribution ;-).
>
>> the (heavily nonlinear) MOSFETs in lieu of resistors, and still attain
>
> I'm not sure who introduced MOSFETs as resistors, to replace things like poly resistors, but surely that goes back further than 1985. Still, it didn't suddenly enable mixed signal. Mixed signal was working just fine by other methods.
Well, tightly controlled RC time constants for filters with good linearity are a strict subset of "mixed signal."
>
>> input-output linearity. People thought it was truly and fundamentally impractical back then, too. But it worked (see the IEEE Journal of
>
> Rubbish
What, that people did consider the idea impractical? I would not underestimate the ability of people to consider a new idea impractical. (Whether the idea Yannis is describing was new in 1985 is open to debate, a debate which raged in the IEEE Transactions on CAS a few years back, if my memory serves me, although the debate might well have been about who did what in which month of 1985 as opposed to whether the idea was older than 1985.)

Some ideas get derided as impractical and turn out to be completely practical. The people who founded the company I work for got laughed at in public when they suggested the product which has done OK[1] for us over the last 6 years.

Best Regards

Jens

[1] OK is an understatement; this is a British company - understatement is pervasive :-) :-) :-) :-)

--
Key ID 0x09723C12, jensting@tingleff.org
Analogue filtering / 5GHz RLAN / Mdk Linux / odds and ends
http://www.tingleff.org/jensting/ +44 1223 211 585
"Never drive a car when you're dead!" Tom Waits
cs_posting@hotmail.com wrote:
> John Monro wrote:
>
>> To expand on this: The only advantage of a Gray code ADC arises in those cases where successive measured analog values are separated by no more than one quantisation level; monitoring a river height comes to mind as an example. In that type of situation successive output codes will indeed differ in only one bit position.
>
> The postulated continuous time processing would require that all intermediate codes be passed through in any transition of the input.
>
> Obviously A/D converters like that don't exist, unless you low pass filter the input signal to a small fraction of the sampling rate the converter would be capable of in a discrete time application.
>
> Instead of sampling rate, we'd talk of the slew rate of the system.
>
> It seems like you could probably find a video ADC fast enough to handle audio frequency at perhaps 8 bits resolution with no missing codes... but what advantage would this have over say 48 kHz 16 bit discrete time methods?
CS,

On your last point, a very high sampling rate can result in a greater resolution when the analog outputs are averaged.

As to whether you could find a video ADC that is fast enough, I don't think so.

Rough argument:
To reproduce a 10 kHz maximum-amplitude triangular wave at 16 bit resolution, the wave passes through 64K quantisation levels, twice per cycle. So, 128K 16-bit samples must be generated, ten thousand times per second.
That is a sample rate of around 1.3 Giga-samples per second.

It would probably be practical to make a flash converter with one or two bits resolution that could handle this rate, but not 16 bits.

Regards,
John
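The arithmetic behind that rough argument can be written out in a few lines of Python; this is only a sketch of the numbers quoted above, and the function name is illustrative.

def codes_per_second(signal_hz, bits):
    """Codes per second needed so that a full-scale triangular wave at
    signal_hz passes through every one of its 2**bits quantisation levels
    with no level skipped."""
    levels = 2 ** bits
    # Every level is crossed once on the way up and once on the way down,
    # i.e. 2 * levels distinct codes per cycle.
    return 2 * levels * signal_hz

print(codes_per_second(10e3, 16))   # ~1.31e9: John's "around 1.3 Giga-samples per second"
print(codes_per_second(10e3, 8))    # ~5.12e6: the 8-bit figure discussed below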
Jens Tingleff wrote:

   ...

> I would not underestimate the ability of people to consider a new idea impractical.
...
> Some ideas get derided as impractical and turn out to be completely practical. The people who founded the company I work for got laughed at in public when they suggested the product which has done OK[1] for us over the last 6 years.
These thoughts are not new. So what? :-)

Jerry
--
When a discovery is new, people say, "It isn't true." When it becomes demonstrably true, they say, "It isn't useful." Later, when its utility is evident, they say, "So what? It's old."
   a paraphrase of William James
in article 436b5122$0$26602$afc38c87@news.optusnet.com.au, John Monro at
johnmonro@optusnet.com.au wrote on 11/04/2005 07:16:

> On your last point, a very high sampling rate can result in a greater resolution when the analog outputs are averaged.
>
> As to whether you could find a video ADC that is fast enough, I don't think so.
>
> Rough argument:
> To reproduce a 10 kHz maximum-amplitude triangular wave at 16 bit resolution, the wave passes through 64K quantisation levels, twice per cycle. So, 128K 16-bit samples must be generated, ten thousand times per second.
> That is a sample rate of around 1.3 Giga-samples per second.
>
> It would probably be practical to make a flash converter with one or two bits resolution that could handle this rate, but not 16 bits.
i dunno if i quite understand the rough argument, but if simple averaging (not noise shaping, as done in sigma-delta) is used, you must quadruple the sample rate for every extra bit of resolution you get. so, to turn an 8 bit flash into an honest 16 bits, it's 4^8 (or 2^16) or an oversampling factor of 65536. i get 655 MHz if 10 kHz was your desired sample rate. ooops. you're saying a 10 kHz waveform, so the desired sample rate must be at least 20 kHz (and that gives 1.31 GHz, so i think we agree.)

--

r b-j                                  rbj@audioimagination.com

"Imagination is more important than knowledge."
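That 4-to-1 rule (half a bit per doubling of the sample rate when only plain averaging is used, with no noise shaping) can be sketched in Python; the function names and example values are only illustrations of the figures in the post.

def oversampling_ratio(extra_bits):
    # Plain averaging, no noise shaping: 4x the sample rate per extra bit,
    # i.e. 0.5 bit per doubling of the rate.
    return 4 ** extra_bits

def required_rate(output_rate_hz, extra_bits):
    return output_rate_hz * oversampling_ratio(extra_bits)

print(oversampling_ratio(8))            # 65536, to turn 8 bits into an "honest" 16
print(required_rate(10e3, 8) / 1e6)     # ~655 MHz for a 10 kHz output rate
print(required_rate(20e3, 8) / 1e9)     # ~1.31 GHz for a 20 kHz output rate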
John Monro wrote:
> cs_posting@hotmail.com wrote:
>
>> It seems like you could probably find a video ADC fast enough to handle audio frequency at perhaps 8 bits resolution with no missing codes...
> On your last point, a very high sampling rate can result in a greater resolution when the analog outputs are averaged.
>
> As to whether you could find a video ADC that is fast enough, I don't think so.
>
> Rough argument:
> To reproduce a 10 kHz maximum-amplitude triangular wave at 16 bit resolution, the wave passes through 64K quantisation levels, twice per cycle. So, 128K 16-bit samples must be generated, ten thousand times per second.
> That is a sample rate of around 1.3 Giga-samples per second.
>
> It would probably be practical to make a flash converter with one or two bits resolution that could handle this rate, but not 16 bits.
Notice I said 8 bits of resolution, 256 levels accomplished in your 10 kHz is a 2.56 MHz transition rate. I believe that is well within the range of video (as in television) converter performance. But I don't know if the proposed Gray code device could be built, or if it would be truly clockless in keeping with the continuous time premise. And the fidelity of the system would obviously be low.

Also averaging doesn't directly apply as without discrete samples, you have no divisor. However you could lowpass filter the converter output (a sort of self-decaying average) and I guess improve resolution that way. Yet you'd run into the problem of having extremely frequent bit transitions - back to your GHz - if you want to have a no-missing-codes representation of comparable fidelity to current discrete time methods.
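As an aside on the "self-decaying average": here is a rough Python sketch of the effect, using a discrete-time stand-in for the continuous case. A little dither is added so the quantization error behaves like broadband noise; that assumption, the one-pole filter, and all the numbers are illustrative, not anything from the thread.

import numpy as np

rng = np.random.default_rng(0)
lsb = 2.0 / 256                  # 8-bit quantizer spanning [-1, 1)
x = 0.34567                      # a "continuous" input that sits between codes

n = 200_000
dither = rng.uniform(-0.5 * lsb, 0.5 * lsb, n)
codes = np.round((x + dither) / lsb) * lsb     # stream of quantized output codes

# Leaky average, i.e. a one-pole lowpass acting as a self-decaying average.
alpha = 1e-3
acc = 0.0
for v in codes:
    acc += alpha * (v - acc)

print("single-code error  :", abs(codes[-1] - x) / lsb, "LSB")   # a good fraction of an LSB
print("leaky-average error:", abs(acc - x) / lsb, "LSB")         # roughly a hundredth of an LSB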
cs_post...@hotmail.com wrote:
>
> Notice I said 8 bits of resolution, 256 levels accomplished in your 10 kHz is a 2.56 MHz transition rate.
I mean twice that of course, as it has to change in both directions for one cycle. But 5.12 MHz is not fast for discrete time ADCs - the question remains whether continuous time operation adds difficulties by not letting you use gating, pipelines, or similar tricks. And I've yet to see the practical advantage of continuous time for most of these blocks, compared to a discrete time implementation fast enough to appear continuous at frequencies of interest, or - where that won't work - analog signal processing.
I started this topic to get the community's feedback on
continuous-time DSP; I wish this exchange had not deteriorated into the
history of mixed-signal MOS. I am very surprised to see Steve
Underwood's reply. I will probably regret answering him, but since what
he says is about not only my work, but also that of several colleagues
who published jointly with me, I feel obliged to set the record
straight.

Steve Underwood wrote:
> Yannis wrote:
>> Yes, in principle no levels are skipped, since there is no "time between samples" - there are no samples, and everything is done in continuous time. This, indeed, if implemented brute-force, makes the speed-resolution product low. Now, one could skip *some* steps with small penalty, but I will refrain from proposing this until there is more work to back it. And, indeed, the reason that Gray codes will not save us directly, is that the signals from the taps could be arriving at instances differing by arbitrarily small amounts, and this can produce glitches - it is a problem we are working on.
>>
>> The above having being said, I would not be as quick as to pronounce the technique "truly and fundamentally impractical", as you have done. In my reply to your first message, I offered an example of earlier skeptical attitudes with another idea, and what eventually happened. I will give you another example: In the early 80s, we had no resistors to speak of on CMOS chips, so we proposed a method to make filters using
>
> CMOS devices have always had resistors. Before CMOS came PMOS and NMOS, which depended very heavily on resistors.
Jens Tingleff answered this, but just in case let me expand a little. There were certainly resistors in common MOS digital processes, e.g. well, diffusion, or poly resistors. But these were no resistors to speak of for the purpose of making filters with any precision, as the RC product tolerances and temperature variations were 40% or more. Would you have been interested in a filter whose frequency response varied all over the place?

By using MOSFETs in lieu of resistors, *plus* automatic tuning through their gate voltage, we achieved frequency tuning accuracy of the order of 1%. But key to the whole idea was to do this so that the nonlinearities of the MOSFETs cancelled out, for *large* signals. This had not been done before, and resulted in US patent 4,509,019; take a look. Such filters were very robust and were produced on a massive scale for many years.
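For readers wondering how a strongly nonlinear device can still give a linear "resistor", here is a textbook sketch of the balanced cancellation, checked symbolically in Python with sympy. The simple triode-region model and the variable names are assumptions of the sketch, not the actual patented circuit.

import sympy as sp

K, Vg, Vt, v = sp.symbols('K Vg Vt v', real=True)

def triode_current(V1, V2):
    # Simplified triode-region MOSFET model, ignoring body effect:
    # I = K*((Vg - Vt)*(V1 - V2) - (V1**2 - V2**2)/2)
    return K * ((Vg - Vt) * (V1 - V2) - (V1**2 - V2**2) / 2)

# Two matched devices from +v and -v to virtual grounds (0 V); take the
# difference of their currents, as a balanced integrator would.
I_diff = triode_current(v, 0) - triode_current(-v, 0)
print(sp.factor(I_diff))   # 2*K*v*(Vg - Vt): the quadratic terms cancel,
                           # leaving a linear element tunable via the gate voltage Vg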
>> the (heavily nonlinear) MOSFETs in lieu of resistors, and still attain
>
> I'm not sure who introduced MOSFETs as resistors, to replace things like poly resistors, but surely that goes back further than 1985.
Yes, but the key here was to introduce them so that the overall circuit remains linear *for large signals*; see above.
> Still, it didn't suddenly enable mixed signal. Mixed signal was working just fine by other methods.
So, *who said* that this 1985 work "suddenly enabled mixed signals"?? Please do not put words in my mouth. Are you confusing this with another message of mine under this topic, in which I was talking about our MOS voice encoder *at the 1976 IEEE ISSCC*? But the impact of that work on MOS mixed-signal is well documented... Why don't you take a look at the IEEE ISSCC Virtual Museum - go to http://sscs.org/History/isscc50/index.html and hit "Communication Circuits".
>> input-output linearity. People thought it was truly and fundamentally impractical back then, too. But it worked (see the IEEE Journal of
>
> Rubbish
>
>> Solid-State Circuits, Dec. 85), and such filters became a mainstream technology and were produced in the millions. As it turned out, they
>
> Mixed signal CMOS was common before that, and mixed signal NMOS long before that. The first DSP to catch a lot of people's attention (although it was a commercial failure) was the Intel 2920 - a device with amps, A/D, DSP, and D/A all on the same device. That was around 1979.
You are confusing two different messages. *Nobody* said the 1985 work enabled mixed-signal CMOS. See above.
>> found a niche we had not even dreamt of when we propose them: equalization in the read electronics of computer disk drives.
>
> and a thousand other places, many of which were being worked on from the very earliest days of CMOS.
>
> It wasn't the apparent impossibility of analogue circuits in CMOS which held it back. Everyone was working on it. The speed and noise were the real issues people had to overcome for practical application. That took a while. Getting the power consumption of the digital stuff down, so that it didn't generate so much noise for the analogue stuff, was also an issue. A lot of early mixed signal was ECL. ECL consumes lots of current, but it is a constant current. It results in very little noise.
Again, and to summarize, in 1985 of course everybody was working on mixed-signal MOS, and I never indicated otherwise. In 1976, they were not. My point in bringing up our work from 1976 in *another* message was to mention how something that looked impossible to most people back then eventually made it in a big way. And, again, my purpose in mentioning this was to make an analogy for the possibility that continuous-time DSP may turn out to be feasible; it was not meant to start a discussion of the history of mixed-signal. I am very sorry you felt this way and had to "respond" to statements I never made.
>> I am not saying all this to brag, but rather to make people refrain from dispensing with the idea, so that they will take the time to keep offering us their valuable input.
>>
>> Yannis Tsividis
>
> Steve
cs_posting@hotmail.com wrote:
> John Monro wrote:
>
>> cs_posting@hotmail.com wrote:
>>
>>> It seems like you could probably find a video ADC fast enough to handle audio frequency at perhaps 8 bits resolution with no missing codes...
>>
>> On your last point, a very high sampling rate can result in a greater resolution when the analog outputs are averaged.
>>
>> As to whether you could find a video ADC that is fast enough, I don't think so.
>>
>> Rough argument:
>> To reproduce a 10 kHz maximum-amplitude triangular wave at 16 bit resolution, the wave passes through 64K quantisation levels, twice per cycle. So, 128K 16-bit samples must be generated, ten thousand times per second.
>> That is a sample rate of around 1.3 Giga-samples per second.
>>
>> It would probably be practical to make a flash converter with one or two bits resolution that could handle this rate, but not 16 bits.
>
> Notice I said 8 bits of resolution, 256 levels accomplished in your 10 kHz is a 2.56 MHz transition rate. I believe that is well within the range of video (as in television) converter performance. But I don't know if the proposed Gray code device could be built, or if it would be truly clockless in keeping with the continuous time premise. And the fidelity of the system would obviously be low.
>
> Also averaging doesn't directly apply as without discrete samples, you have no divisor. However you could lowpass filter the converter output (a sort of self-decaying average) and I guess improve resolution that way. Yet you'd run into the problem of having extremely frequent bit transitions - back to your GHz - if you want to have a no-missing-codes representation of comparable fidelity to current discrete time methods.
CS,

My apologies for accidentally misrepresenting you there. I had in mind the 16 bit resolution that was mentioned further on in another context.

By the way, I just checked the flash ADC situation and found that Maxim have an 8-bit flash ADC that works up to 1.0 Giga-samples/s. High-speed ADCs sure have improved since I last looked a few years ago! (And it only costs $395.00!) Interestingly, the preferred technique in these chips seems to be to convert from the internal comparator 'thermometer code' directly to Gray code using a ROM table, so there does not seem to be any chance of a glitch problem there. The outputs are latched, so the devices are not clockless, but I am not sure whether the latching is essential or just an interfacing convenience.

In your follow-up posting you say that the sample rate is around 5 Mega-samples/s, and I agree with that. Regarding any improvement in resolution, at this sampling rate the quantisation noise power will be spread over a 2.5 MHz bandwidth, so when there is any sort of band-limiting, either electronically or in our ears, this noise will be reduced and the effective resolution will be increased, as you suggest. In going down to, say, a 20 kHz sample rate we pick up an extra 4 bits of resolution. Despite the improved resolution, there will not be any 'missing codes' or 'bit-transition' problem, because the improvement is actually in the analog resolution, and not in the digital code itself.

Regards,
John
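The "extra 4 bits" figure follows from the same half-bit-per-doubling rule mentioned earlier in the thread; here is a quick Python check, with the rates taken from the posts above and the usual assumption that the quantisation noise is spread evenly across the band.

import math

f_conv = 5.12e6                      # converter code rate (the ~5 MSa/s figure)
f_out = 20e3                         # rate after band-limiting to the audio band
osr = f_conv / f_out
extra_bits = 0.5 * math.log2(osr)    # 0.5 bit per doubling with plain band-limiting
print(f"oversampling ratio {osr:.0f} -> about {extra_bits:.1f} extra bits")
# oversampling ratio 256 -> about 4.0 extra bits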
robert bristow-johnson wrote:
> in article 43699577$0$25855$afc38c87@news.optusnet.com.au, John Monro at johnmonro@optusnet.com.au wrote on 11/02/2005 23:43:
>
>> robert bristow-johnson wrote:
>>
>>> 3a. now i know there is this "Gray code" that could be used (if a flash A/D was designed to output Gray Code) in which only one bit toggles as your continuous (in amplitude) analog signal passes from one quantized level to the next, (see http://en.wikipedia.org/wiki/Gray_code ) and then you would never get more than one bit that should be changing at a time. accordingly, the combinatorial logic would be modified to do the same arithmetic, but with the Gray encoded words instead of 2's complement.
>>
>> in general, consecutive analog samples will differ by more than one quantisation level, and so the feature of Gray code that you mention will be of no particular significance.
>
> John,
>
> *what* samples??? that's the whole premise of Yannis's paper. there are no samples. the "sampling rate" is infinite.
>
> now, we all know that truly infinite quantities, including slew rate or ramp rate, are impossible. so, even if step discontinuities are mathematically conceivable, without infinite BW, they are not physically possible. so the input voltage that moves from level n to level n+2 must pass through level n+1 and that won't be missed because of any sampling times that have straddled it passing through level n+1.
Robert,

Not having seen Yannis's paper, I was making a few assumptions. I did not even consider that the A/D process is fast enough to produce a valid digital representation of every single quantisation level that the analog input passes through. If so, this would mean that for a digital resolution of 16 bits, different digital values (I will avoid the term 'samples') could be generated at a rate up to 1,000,000,000 values per second. If this is in fact happening in Yannis's scheme then my criticism was not valid.

If the digitisation process is slower than this, the criticism stands. Large-amplitude analog signals will cause successive digital values to be generated that are separated by more than one quantisation level. It follows that Gray coding will not help in avoiding glitches in this case.

Regards,
John
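The property being argued over is easy to see numerically. Here is a small Python illustration (the helper names are made up for the example) showing that adjacent Gray codes differ in exactly one bit, while codes two levels apart do not.

def to_gray(n):
    # Standard binary-reflected Gray code.
    return n ^ (n >> 1)

def bits_changed(a, b):
    return bin(a ^ b).count("1")

for level in range(6):
    step1 = bits_changed(to_gray(level), to_gray(level + 1))
    step2 = bits_changed(to_gray(level), to_gray(level + 2))
    print(level, step1, step2)
# step1 is always 1 (one bit toggles per level), but step2 is 2: if a level
# is ever skipped, more than one bit has to change at once.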