Reply by Anton Erasmus June 20, 2005
On Sun, 19 Jun 2005 21:43:20 -0700, "Jon Harris"
<jon_harrisTIGER@hotmail.com> wrote:

>"Anton Erasmus" <nobody@spam.prevent.net> wrote in message >news:1119169974.ea68157269af093d1f453f607f949e3c@teranews... >> On Sun, 19 Jun 2005 00:57:12 -0700, "Jon Harris" >> <jon_harrisTIGER@hotmail.com> wrote: >> >> >> "Anton Erasmus" <nobody@spam.prevent.net> wrote in message >> >> news:1119000420.d54828b53b9bcd51f76b2b5b640103a6@teranews... >> >> >>
[Snipped]
>> Yes one would need a sensor with a low noise floor. The lower the
>> better. AFAIK the more pixels they pack into a sensor, the higher the
>> noise floor. Currently the whole emphasis is on producing sensors with
>> more pixels. If the emphasis was on producing sensors with very low
>> noise floor, I am sure a suitable sensor can be developed.
>
>Yes, if you only required, say a 1MP image, this would certainly help, both in
>terms of noise and ability to read out the data quickly.
1MP is quite adequate for a postcard-size image or for a newspaper photo. I think there is at least a niche market for a 1MP digital camera that can take an unblurred photo without flash in conditions where other cameras would produce a blurred photo. If the sensor technology keeps improving as it has over the last 5 years or so, then the 1MP would go up to 3 or 5MP quite quickly.

Regards
Anton Erasmus
Reply by Jon Harris June 20, 2005
<tony.nospam@nospam.tonyRobinson.com> wrote in message
news:87mzpmr7ey.fsf@tonyRobinson.com...
> "Jon Harris" <jon_harrisTIGER@hotmail.com> writes: > > > <tony.nospam@nospam.tonyRobinson.com> wrote in message > > > > As explained in a previous past, using electronic compensation for > > single-frame images (i.e. with still cameras) is not useful. > > Having seen many nighttime long exposures where streetlights leave a > trail, I don't agree. I'm sure it would be possible to deconvolve the > camera motion so the long wiggley trails become roundish blobs and so > sharpen everything else up. Of course if they were car headlights, that > left the trail, then you'd blur the background - perhaps that's why it's > not useful?
OK, that's a whole different ballgame from what we were talking about previously (taking multiple frames to compose a single "shake free" image). I think you hit on one of the difficulties: how to distinguish between camera movement and subject movement/blur from just a single image. The other is that fine detail is irrevocably lost when nearby high-frequency details blur together (sorry for the crude explanation).
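For what it's worth, once you have an estimate of the blur kernel (e.g. traced from a streetlight trail), the deconvolution step itself is a standard linear restoration. Below is a minimal Wiener-style sketch in Python/NumPy; the image, the kernel estimate and the noise-to-signal constant nsr are placeholders assumed purely for illustration, and it runs into exactly the two problems mentioned above: it cannot tell camera motion from subject motion, and it cannot recover frequencies the blur has wiped out.

import numpy as np

def wiener_deconvolve(blurred, psf, nsr=0.01):
    """Frequency-domain Wiener deconvolution with a known blur kernel.

    blurred : 2-D float array, the motion-blurred image
    psf     : 2-D float array, estimate of the blur kernel (e.g. traced
              from a point-light trail), smaller than the image
    nsr     : assumed noise-to-signal power ratio (regularisation term)
    """
    # Pad the PSF to the image size and centre it on the origin so the
    # FFT-based filtering does not introduce an extra shift.
    psf_full = np.zeros_like(blurred, dtype=float)
    h, w = psf.shape
    psf_full[:h, :w] = psf / psf.sum()
    psf_full = np.roll(psf_full, (-(h // 2), -(w // 2)), axis=(0, 1))

    H = np.fft.fft2(psf_full)
    G = np.fft.fft2(blurred)
    # Attenuate frequencies where the kernel has little energy instead
    # of dividing by (near) zero -- this is where fine detail is lost.
    F_est = G * np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(F_est))

With a straight-line or wiggly-line kernel this does turn the streaks back into roughly point-like blobs, but anything that moved independently of the camera comes out wrong, which is the single-image caveat above.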
Reply by Jon Harris June 20, 2005
"Anton Erasmus" <nobody@spam.prevent.net> wrote in message
news:1119169974.ea68157269af093d1f453f607f949e3c@teranews...
> On Sun, 19 Jun 2005 00:57:12 -0700, "Jon Harris"
> <jon_harrisTIGER@hotmail.com> wrote:
>
> >> "Anton Erasmus" <nobody@spam.prevent.net> wrote in message
> >> news:1119000420.d54828b53b9bcd51f76b2b5b640103a6@teranews...
> >>
> >> > I have read a bit about the astronomical image processing mentioned in
> >> > one of the other posts. One thing that came up quite often is that
> >> > CCD sensors are linear, i.e. double the exposure time gives double the
> >> > energy received. (They do not mention CMOS, but it is probably the
> >> > same.)
> >> > I agree that blurring occurs when taking a single photo when the
> >> > camera is moved and the exposure is too long. If one can, say, instead
> >> > of 1x 2 sec exposure, take 20x 0.1 sec exposures and stack them
> >> > into a single picture in software, one should end up close to the
> >> > exposure level of the 2 sec picture, but without the blur.
> >
> >I'm assuming that the software wouldn't simply stack the pictures exactly on top
> >of each other, but move each one around slightly so as to best align them.
> >
> >One potential difficulty is that the time it takes to read out the data from the
> >sensor and "clear" it for the next exposure could be significant, especially for
> >very short exposures. I don't know specifics, but for example, it might take
> >3-4 seconds to take your 2 seconds worth of exposure. Clearly that is not a
> >good thing!
>
> Lots of cheap digital cameras can take short mpeg movies, so it cannot
> be that slow for the sensor to recover.
> If someone has better information regarding the technical specs of
> these devices, as well as what the theoretical limits are, it would be
> quite nice to know.
Keep in mind that the mpeg movies are at best 640x480 at 30 frames per second; 30 fps corresponds to a frame time of 33.3 milliseconds. Is 640x480 adequate for the still photography being discussed? I doubt it. Sorry I don't have specifics on the technical specs, so I am inferring the approximate order of magnitude of the speed. If someone else has better info, please chime in, but it seems to me that you can't get much more than 30 fps at ~VGA resolution out of today's consumer camera technology. (Maybe the HD camcorders can do a bit better, but that's still nothing close to the megapixels in a few milliseconds you would need for the original idea, "image stabilization by means of software".)
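To put rough numbers on that (a back-of-the-envelope calculation using the figures above; the 5 MP / 5 ms target is just an assumed example, not a real camera spec):

# Pixel rate implied by VGA video: 640 x 480 at 30 frames per second.
vga_rate = 640 * 480 * 30           # = 9,216,000 pixels per second

# What the burst-and-stack idea would like: an assumed 5 MP still
# read out in roughly 5 ms per sub-exposure.
needed_rate = 5e6 / 0.005           # = 1e9 pixels per second

print(needed_rate / vga_rate)       # ~108x faster than VGA-rate readout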
> >> > In the astronomical case, the "short" exposure pictures are in minutes
> >> > for a total exposure time of hours. Of course the subject must not
> >> > move during the "short" exposure time.
> >
> >Another difficulty I see is that when you have a severely underexposed picture,
> >you are going to lose a lot of shadow detail since it ends up below the noise
> >floor. A perfectly linear sensor with no noise wouldn't have this problem, but
> >real-world ones certainly do. Consider, for example, a picture that has a
> >black-to-white gradient. When exposed normally, the camera records levels from
> >0 (full black) to maximum (full white). But with an exposure that is 1/20 of
> >the proper value, much of the dark gray portion will be rendered as 0 (full
> >black). Even after you add the 20 exposures together, you still get full black
> >for those dark gray portions. So everything below a certain dark gray level is
> >clipped to full black.
>
> Yes one would need a sensor with a low noise floor. The lower the
> better. AFAIK the more pixels they pack into a sensor, the higher the
> noise floor. Currently the whole emphasis is on producing sensors with
> more pixels. If the emphasis was on producing sensors with very low
> noise floor, I am sure a suitable sensor can be developed.
Yes, if you only required, say a 1MP image, this would certainly help, both in terms of noise and ability to read out the data quickly.
Reply by Steve Underwood June 19, 2005
Paul Keinanen wrote:
> On Sun, 19 Jun 2005 00:57:12 -0700, "Jon Harris"
> <jon_harrisTIGER@hotmail.com> wrote:
>
>>Another difficulty I see is that when you have a severely underexposed picture,
>>you are going to lose a lot of shadow detail since it ends up below the noise
>>floor. A perfectly linear sensor with no noise wouldn't have this problem, but
>>real-world ones certainly do. Consider, for example, a picture that has a
>>black-to-white gradient. When exposed normally, the camera records levels from
>>0 (full black) to maximum (full white). But with an exposure that is 1/20 of
>>the proper value, much of the dark gray portion will be rendered as 0 (full
>>black). Even after you add the 20 exposures together, you still get full black
>>for those dark gray portions. So everything below a certain dark gray level is
>>clipped to full black.
>
> With an ideal noiseless sensor and preamplifier and an A/D converter
> this would indeed be a problem. Early digital audio recordings in the
> 1960s/70s suffered from this problem, but by adding dithering noise
> before the ADC at about 1 LSB, (low frequency) tones well below 1 LSB
> could be recorded.
LOL. That is rather revisionist. Do you know of any converters of that vintage which were quiet and linear enough for a lack of dither to have been an issue?
> In a camera system, the sensor thermal noise and the preamplifier
> noise act as the dithering noise, and by averaging several samples,
> the actual value between two ADC steps can be calculated.
>
> By turning up the preamplifier gain (the ISO setting), you will
> certainly get enough dithering noise to record and average levels
> below 1 LSB.
Regards, Steve
Reply by Paul Keinanen June 19, 2005
On Sun, 19 Jun 2005 10:33:58 +0200, Anton Erasmus
<nobody@spam.prevent.net> wrote:

>Yes, but at what exposure time does one currently hit the wall, and
>where does the actual theoretical wall lie? Also if one could cool
>down the sensor, by how much would the noise floor be reduced?
The sensor noise is halved for a temperature drop of 6-10 degrees centigrade, at least for CCDs. Current cameras consume their batteries quite fast, which also heats the sensor and thus generates extra noise. With Peltier elements the sensor could be cooled to -40 C, but the sensor would then have to be put into a vacuum to avoid condensation problems.

The photon (Poisson) noise should also be considered, but since the original aim was to use longer exposure times (allowed by image stabilisation), this should not be an issue.

Paul
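Taking that halving figure at face value, the benefit of Peltier cooling is easy to bound. A rough calculation (assuming an uncooled sensor sitting at about +25 C, which is an assumption, not a measured figure):

# Dark-noise reduction from cooling a sensor from +25 C to -40 C,
# assuming the noise halves for every 6-10 degrees C of cooling.
drop = 25.0 - (-40.0)                   # 65 degrees of cooling
for step in (6.0, 10.0):
    print(step, 2 ** (drop / step))     # ~1800x (6 C steps) to ~90x (10 C steps)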
Reply by Paul Keinanen June 19, 2005
On Sun, 19 Jun 2005 00:57:12 -0700, "Jon Harris"
<jon_harrisTIGER@hotmail.com> wrote:

>Another difficulty I see is that when you have a severely underexposed picture,
>you are going to lose a lot of shadow detail since it ends up below the noise
>floor. A perfectly linear sensor with no noise wouldn't have this problem, but
>real-world ones certainly do. Consider, for example, a picture that has a
>black-to-white gradient. When exposed normally, the camera records levels from
>0 (full black) to maximum (full white). But with an exposure that is 1/20 of
>the proper value, much of the dark gray portion will be rendered as 0 (full
>black). Even after you add the 20 exposures together, you still get full black
>for those dark gray portions. So everything below a certain dark gray level is
>clipped to full black.
With an ideal noiseless sensor and preamplifier and an A/D converter this would indeed be a problem. Early digital audio recordings in the 1960s/70s suffered from this problem, but by adding dithering noise before the ADC at about 1 LSB, (low frequency) tones well below 1 LSB could be recorded.

In a camera system, the sensor thermal noise and the preamplifier noise act as the dithering noise, and by averaging several samples, the actual value between two ADC steps can be calculated.

By turning up the preamplifier gain (the ISO setting), you will certainly get enough dithering noise to record and average levels below 1 LSB.

Paul
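The dither effect is easy to demonstrate numerically. A toy model (not tied to any particular sensor or ADC): quantise a constant level of 0.3 LSB, first with no noise, then with about 1 LSB of Gaussian noise standing in for the sensor/preamp noise.

import numpy as np

rng = np.random.default_rng(0)
true_level = 0.3            # a signal sitting at 0.3 LSB, below one ADC step
n = 200000                  # number of samples (or frames) averaged

# Ideal noiseless ADC: every sample quantises to 0, so averaging
# recovers nothing -- exactly the clipping problem described above.
no_dither = np.round(np.full(n, true_level))
print(no_dither.mean())                          # 0.0

# With ~1 LSB of Gaussian noise acting as dither, individual samples are
# still coarse (mostly -1, 0, 1, 2), but their mean converges on 0.3.
dithered = np.round(true_level + rng.normal(0.0, 1.0, n))
print(dithered.mean())                           # ~0.30

Turning up the ISO does exactly this, at the cost of more noise in each individual frame.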
Reply by Piergiorgio Sartor June 19, 2005
Kris Neot wrote:
[...]
> How does my idea work?
How do you distinguish between motion due to a shaky hand and motion due to external "object" motion? I mean, if I take a picture of a highway, there is a lot of "shakiness" in the subject already...

bye,

--
piergiorgio
Reply by June 19, 2005
"Jon Harris" <jon_harrisTIGER@hotmail.com> writes:

> <tony.nospam@nospam.tonyRobinson.com> wrote in message
>
> As explained in a previous post, using electronic compensation for
> single-frame images (i.e. with still cameras) is not useful.
Having seen many nighttime long exposures where streetlights leave a trail, I don't agree. I'm sure it would be possible to deconvolve the camera motion so the long wiggly trails become roundish blobs and so sharpen everything else up. Of course if it were car headlights that left the trail, then you'd blur the background - perhaps that's why it's not useful?
> most of the consumer camcorders do this digitally after capturing the
> image. Presumably, this is accomplished by looking at the
> frame-to-frame differences and essentially cropping different areas
> from each frame in order to keep the video as stable as possible. I
> don't know much more about the algorithms than that.
Again I'd imagine it's a trade-off - perhaps a good area for smart DSP algorithms.

Tony
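For reference, the frame-to-frame approach described above can be sketched in a few lines of Python/NumPy: estimate the shift between each frame and a reference (phase correlation is one common way), then crop a common window so the scene stays put. This is only a sketch under assumed conditions (grayscale frames, pure integer-pixel translation, no rotation); whatever camcorder firmware actually does is certainly more elaborate.

import numpy as np

def align_shift(ref, frame):
    """(dy, dx) such that np.roll(frame, (dy, dx), axis=(0, 1)) lines up
    with ref, estimated by phase correlation."""
    cross = np.fft.fft2(ref) * np.conj(np.fft.fft2(frame))
    corr = np.abs(np.fft.ifft2(cross / (np.abs(cross) + 1e-12)))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Peaks past the halfway point correspond to negative shifts.
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return int(dy), int(dx)

def stabilise(frames, margin=16):
    """Digital stabilisation by cropping: cut the same (shifted) window
    out of every frame so that the background stays still.

    frames : list of equally sized 2-D grayscale arrays
    margin : how many pixels of camera wander the crop can absorb
    """
    h, w = frames[0].shape
    out = []
    for f in frames:
        dy, dx = align_shift(frames[0], f)
        dy = int(np.clip(dy, -margin, margin))
        dx = int(np.clip(dx, -margin, margin))
        out.append(f[margin - dy : h - margin - dy,
                     margin - dx : w - margin - dx])
    return out

A global shift estimate like this gets confused when the subject itself moves (the highway example elsewhere in the thread), which is part of the trade-off.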
Reply by Anton Erasmus June 19, 2005
On Sun, 19 Jun 2005 00:57:12 -0700, "Jon Harris"
<jon_harrisTIGER@hotmail.com> wrote:

>> "Anton Erasmus" <nobody@spam.prevent.net> wrote in message >> news:1119000420.d54828b53b9bcd51f76b2b5b640103a6@teranews... >> >> >> > I have read a bit about the astronomical image processing mention in >> > one of the other posts. One thing that came up quite often, is that >> > CCD sensors are linear. i.e. double the exposure time gives double the >> > energy recieved. (They do not mention CMOS, but it is probably the >> > same.) >> > I agree that blurring occurs when taking a single photo when the >> > camaera is moved and the exposure is to long. If one can say take in >> > stead of 1x 2sec exposure, take 20x 0.1 sec exposures, and stack them >> > into a single picture in software, one should end up close to the >> > exposure level of the 2sec picture, but without the blur. > >I'm assuming that the software wouldn't simply stack the pictures exactly on top >of each other, but move each one around slightly so as to best align them. > >One potential difficulty is the time it takes to read out the data from the >sensor and "clear" it for the next exposure could be significant, especially for >very short exposures. I don't know specifics, but for example, it might take >3-4 seconds to take your 2 seconds worth of exposure. Clearly that is not a >good thing!
Lots of cheap digital cameras can take short mpeg movies, so it cannot be that slow for the sensor to recover. If someone has better information regarding the technical specs of these devices, as well as what the theoretical limits are, it would be quite nice to know.
>> > In the astronomical case, the "short" exposure pictures are in minutes
>> > for a total exposure time of hours. Of course the subject must not
>> > move during the "short" exposure time.
>
>Another difficulty I see is that when you have a severely underexposed picture,
>you are going to lose a lot of shadow detail since it ends up below the noise
>floor. A perfectly linear sensor with no noise wouldn't have this problem, but
>real-world ones certainly do. Consider, for example, a picture that has a
>black-to-white gradient. When exposed normally, the camera records levels from
>0 (full black) to maximum (full white). But with an exposure that is 1/20 of
>the proper value, much of the dark gray portion will be rendered as 0 (full
>black). Even after you add the 20 exposures together, you still get full black
>for those dark gray portions. So everything below a certain dark gray level is
>clipped to full black.
Yes one would need a sensor with a low noise floor. The lower the better. AFAIK the more pixels they pack into a sensor, the higher the noise floor. Currently the whole emphasis is on producing sensors with more pixels. If the emphasis was on producing sensors with very low noise floor, I am sure a suitable sensor can be developed.
>In astro-photography, this might not be such a problem, as the unlit sky is
>supposed to be black anyway, but with typical photographic scenes, it would be.
>
>> > If one scaled this down so that the "short" exposure time is in
>> > milli-seconds or even micro-seconds depending on what the sensors can
>> > do, then one can do the same thing just at a much faster scale.
>> > What is the minimum exposure time for a current CMOS sensor before one
>> > just sees the inherent sensor noise?
>
>As mentioned above, the shorter you try to make your individual exposures, the
>more the read/clear time becomes a factor. State-of-the-art high mega-pixel
>cameras today can shoot maybe 8-10 frames/second. So I doubt the sensors are
>capable of dealing with micro-seconds or even low milli-seconds. Video
>cameras can of course shoot 30 frames/second, but their sensors are typically
>quite low resolution (i.e. <1 mega-pixel).
Yes, but at what exposure time does one currently hit the wall, and where does the actual theoretical wall lie? Also if one could cool down the sensor, by how much would the noise floor be reduced?

Regards
Anton Erasmus
Reply by Jon Harris June 19, 2005
> "Anton Erasmus" <nobody@spam.prevent.net> wrote in message > news:1119000420.d54828b53b9bcd51f76b2b5b640103a6@teranews... > >> > > I have read a bit about the astronomical image processing mention in > > one of the other posts. One thing that came up quite often, is that > > CCD sensors are linear. i.e. double the exposure time gives double the > > energy recieved. (They do not mention CMOS, but it is probably the > > same.) > > I agree that blurring occurs when taking a single photo when the > > camaera is moved and the exposure is to long. If one can say take in > > stead of 1x 2sec exposure, take 20x 0.1 sec exposures, and stack them > > into a single picture in software, one should end up close to the > > exposure level of the 2sec picture, but without the blur.
I'm assuming that the software wouldn't simply stack the pictures exactly on top of each other, but move each one around slightly so as to best align them.

One potential difficulty is that the time it takes to read out the data from the sensor and "clear" it for the next exposure could be significant, especially for very short exposures. I don't know specifics, but for example, it might take 3-4 seconds to take your 2 seconds worth of exposure. Clearly that is not a good thing!
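In Python/NumPy terms, the align-and-stack step itself is tiny once the per-frame offsets are known. A sketch under simplifying assumptions (pure integer-pixel translation, wrap-around at the edges ignored, offsets estimated elsewhere, e.g. by phase correlation as in the camcorder stabilisation sketch above):

import numpy as np

def stack_exposures(frames, shifts):
    """Shift-and-add a burst of short exposures into one long exposure.

    frames : list of 2-D float arrays, e.g. the 20 x 0.1 sec sub-exposures
    shifts : list of (dy, dx) integer offsets registering each frame to
             the first one (estimated elsewhere, e.g. by phase correlation)
    """
    total = np.zeros_like(frames[0], dtype=float)
    for frame, (dy, dx) in zip(frames, shifts):
        # Because the sensor response is (nearly) linear, summing the
        # registered sub-exposures approximates the single 2 sec exposure,
        # minus whatever was lost below the noise floor of each frame.
        total += np.roll(frame.astype(float), (dy, dx), axis=(0, 1))
    return total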
> > In the astronomical case, the "short" exposure pictures are in minutes
> > for a total exposure time of hours. Of course the subject must not
> > move during the "short" exposure time.
Another difficulty I see is that when you have a severely underexposed picture, you are going to lose a lot of shadow detail since it ends up below the noise floor. A perfectly linear sensor with no noise wouldn't have this problem, but real-world ones certainly do. Consider, for example, a picture that has a black-to-white gradient. When exposed normally, the camera records levels from 0 (full black) to maximum (full white). But with an exposure that is 1/20 of the proper value, much of the dark gray portion will be rendered as 0 (full black). Even after you add the 20 exposures together, you still get full black for those dark gray portions. So everything below a certain dark gray level is clipped to full black.

In astro-photography, this might not be such a problem, as the unlit sky is supposed to be black anyway, but with typical photographic scenes, it would be.
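The gradient example is easy to reproduce numerically. A toy model with an ideal, noiseless 8-bit ADC (assumed purely for illustration):

import numpy as np

scene = np.linspace(0.0, 1.0, 256)        # black-to-white gradient, 1.0 = full white

# Proper single exposure, quantised by an ideal 8-bit ADC.
full = np.round(scene * 255)

# Twenty exposures at 1/20 of the proper level, each quantised, then summed.
short = np.round((scene / 20.0) * 255)    # one underexposed, quantised frame
stacked = 20 * short                      # sum of the 20 identical frames

# Anything darker than about 4% grey quantises to 0 in every short frame,
# so the sum is still 0 there: the shadow detail is gone for good.
print(np.count_nonzero(full[:10]))        # 9  -- the normal exposure keeps these levels
print(np.count_nonzero(stacked[:10]))     # 0  -- the stack clipped them all to black

With a little sensor noise added to each frame the situation improves, which is the dithering point made elsewhere in the thread.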
> > If one scaled this down so that the "short" exposure time is in
> > milli-seconds or even micro-seconds depending on what the sensors can
> > do, then one can do the same thing just at a much faster scale.
> > What is the minimum exposure time for a current CMOS sensor before one
> > just sees the inherent sensor noise?
As mentioned above, the shorter you try to make your individual exposures, the more the read/clear time becomes a factor. State-of-the-art high mega-pixel cameras today can shoot maybe 8-10 frames/second. So I doubt the sensors are capable of dealing with micro-seconds or even low milli-seconds. Video cameras can of course shoot 30 frames/second, but their sensors are typically quite low resolution (i.e. <1 mega-pixel).