Effective Pixels in Digital Cameras
Started by ●November 28, 2013

Back in the olden days I was a fairly serious amateur photographer. (Keywords: Kodachrome 25 -- though I preferred the palette and saturation of Fujichrome 50; Panatomic X.) But I dropped out of the hobby just about the time the first digital SLRs became available.

Lately my photographic interests have rekindled, so I have been doing my homework on digital photography. And what I've found surprises me.

I naively thought that image sensors were configured like computer monitors. That is to say, in a computer monitor each pixel consists of three subpixels: one red, one green, and one blue. So if a monitor has 1920x1080 pixels, then there are 1920x1080 red subpixels, 1920x1080 green subpixels, and 1920x1080 blue subpixels. By analogy, the "24 megapixel" 6000x4000 imaging sensor in the Sony Alpha 7 should consist of 6000x4000 red-sensitive subpixels, 6000x4000 green-sensitive subpixels, and 6000x4000 blue-sensitive subpixels, right?

Wrong. I learned about Bayer Filters. It turns out that there are 6000x4000 pixels on the image sensor, all right, but there are NO subpixels. The pixels are grouped into sets of four; one of those four pixels is red-sensitive, two of those four pixels are green-sensitive, and the remaining one of those four pixels is blue-sensitive. And the 6000x4000 color image is created by INTERPOLATION between samples of the same color sensitivity. The ad copy refers to the image as consisting of 24 million "effective" pixels.

Well, I don't know much about imaging sensors, but I do know something about signal processing, and one of the most fundamental rules of signal processing is that YOU CANNOT INCREASE RESOLUTION BY INTERPOLATION, particularly in signals that may be aliased (as images can easily be). One can resample a digital signal to a higher sampling rate, but in doing so one does NOT increase resolution.

So am I right in interpreting that 6000x4000 Bayer-Filtered image as consisting of 6 million REAL pixels, or have I misinterpreted something?

Greg
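To make the "interpolation between samples of the same color sensitivity" concrete, here is a minimal bilinear demosaic of an RGGB mosaic, written as a Python/NumPy sketch. It is illustrative only -- not any particular camera's raw-conversion pipeline -- and the function and variable names are invented for the example.

import numpy as np
from scipy.ndimage import convolve

def demosaic_bilinear(mosaic):
    """Naive bilinear demosaic of an RGGB Bayer mosaic (one sample per photosite)."""
    h, w = mosaic.shape
    r_mask = np.zeros((h, w), dtype=bool)
    b_mask = np.zeros((h, w), dtype=bool)
    r_mask[0::2, 0::2] = True          # red photosites
    b_mask[1::2, 1::2] = True          # blue photosites
    g_mask = ~(r_mask | b_mask)        # the two greens per 2x2 block
    # Averaging kernels: R and B are filled in from edge/diagonal neighbours,
    # G from its four nearest (cross) neighbours.
    k_rb = np.array([[0.25, 0.5, 0.25],
                     [0.5,  1.0, 0.5 ],
                     [0.25, 0.5, 0.25]])
    k_g  = np.array([[0.0,  0.25, 0.0 ],
                     [0.25, 1.0,  0.25],
                     [0.0,  0.25, 0.0 ]])
    out = np.empty((h, w, 3))
    for c, (mask, k) in enumerate([(r_mask, k_rb), (g_mask, k_g), (b_mask, k_rb)]):
        plane = np.where(mask, mosaic, 0.0)
        # Normalised convolution: average only over the samples that actually exist.
        out[..., c] = convolve(plane, k, mode='mirror') / \
                      convolve(mask.astype(float), k, mode='mirror')
    return out

# A 4000x6000 raw frame comes back as a 4000x6000x3 RGB image, but the red and
# blue planes each started from only 2000x3000 measured samples.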
Reply by ●November 28, 2013
On 28/11/13 13:44, Greg Berchin wrote:

> Back in the olden days I was a fairly serious amateur photographer. (Keywords:
> Kodachrome 25 -- though I preferred the palette and saturation of Fujichrome 50;
> Panatomic X.) But I dropped out of the hobby just about the time the first
> digital SLRs became available.
>
> Lately my photographic interests have rekindled, so I have been doing my
> homework on digital photography. And what I've found surprises me.

One figure that may be of use... When digitising 35mm slides of the natural world that were professionally shot and sold for professional uses, we decided to use a resolution of 3000*5000 pixels. Any more and we were digitising the grain structure.
Reply by ●November 28, 2013
On Thu, 28 Nov 2013 14:14:56 +0000, Tom Gardner <spamjunk@blueyonder.co.uk> wrote:

> When digitising 35mm slides of the natural world that were
> professionally shot and sold for professional uses, we
> decided to use a resolution of 3000*5000 pixels. Any more
> and we were digitising the grain structure.

Similarly, I have a 35mm film scanner with 4032x2688 pixel native resolution. It can NOT resolve grain in fine-grain films like Panatomic X.

Both of these, however, support an assertion that the 6000x4000 "effective" (3000x2000 "real") pixels in my example are not enough to match the resolution of film.

Greg
Reply by ●November 28, 2013
Greg Berchin <gjberchin@chatter.net.invalid> wrote: (snip)

> I learned about Bayer Filters. It turns out that there are 6000x4000 pixels on
> the image sensor, all right, but there are NO subpixels. The pixels are grouped
> into sets of four; one of those four pixels is red-sensitive, two of those four
> pixels are green-sensitive, and the remaining one of those four pixels is
> blue-sensitive. And the 6000x4000 color image is created by INTERPOLATION
> between samples of the same color sensitivity. The ad copy refers to the image
> as consisting of 24 million "effective" pixels.
>
> Well, I don't know much about imaging sensors, but I do know something about
> signal processing, and one of the most fundamental rules of signal processing is
> that YOU CANNOT INCREASE RESOLUTION BY INTERPOLATION, particularly in signals
> that may be aliased (as images can easily be). One can resample a digital signal
> to a higher sampling rate, but in doing so one does NOT increase resolution.
>
> So am I right in interpreting that 6000x4000 Bayer-Filtered image as consisting
> of 6 million REAL pixels, or have I misinterpreted something?

One is that the eye has lower resolution for color than for luminance. Even more, the eye color resolution is different for different colors. The systems for both analog and digital video take this into account. Given the Bayer array, you can generate a luminance signal at full resolution and chrominance at the lower resolution.

As for aliasing, that happens when you resample to a lower resolution without filtering. You can interpolate (oversample) without aliasing. As you note, that doesn't increase the actual resolution.

For DSLRs, it is usual to put a low pass filter in front of the sensor array to minimize aliasing. For cheaper cameras, the lens resolution is usually low enough that no filter is needed. (As I understand it, some DSLR sensors now have enough resolution that they are leaving out the low pass filter.)

-- glen
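A rough sketch of the luminance/chrominance split described above, using the BT.601 luma weights as one common choice (real video systems differ in the exact weights and in how the chroma is subsampled):

import numpy as np

def split_luma_chroma(rgb):
    """Full-resolution luma plus 2x2-subsampled chroma (4:2:0-style).
    Assumes an array of shape (rows, cols, 3) with even rows/cols."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b      # BT.601 luma weights (one common choice)
    cb, cr = b - y, r - y                      # simple colour-difference chroma
    # Average 2x2 blocks: keep only a quarter of the colour samples.
    h, w = y.shape
    cb_sub = cb.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    cr_sub = cr.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    return y, cb_sub, cr_sub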
Reply by ●November 28, 2013
On Thursday, November 28, 2013 11:49:02 AM UTC-6, glen herrmannsfeldt wrote:

> One is that the eye has lower resolution for color than for luminance.
> Even more, the eye color resolution is different for different colors.
> The systems for both analog and digital video take this into account.
> Given the Bayer array, you can generate a luminance signal at full
> resolution and chrominance at the lower resolution.

Interesting, but I'm not certain that it's relevant to the resolution issue in the context of turning the imager array of 3000x2000 sets of 4 pixels into a display array of 6000x4000 red subpixels, 6000x4000 green subpixels, and 6000x4000 blue subpixels. A sharp edge of a blue object cannot be converted from the imager's array of 3000x2000 blue-sensitive pixels into a full-resolution 6000x4000 display image. That's just basic math. How our eyes perceive it is not really the issue.

> As for aliasing, that happens when you resample to a lower resolution
> without filtering.

Or when you initially sample a high resolution image with a low resolution sensor. It is just like sampling a non-bandlimited 1-dimensional signal at too low a sampling rate -- sharp edges can only be discerned to the next sample period, and all the interpolation in the world won't change that.

> For DSLRs, it is usual to put a low pass filter in front of the sensor
> array to minimize aliasing.

So basically, that (spatial) LPF reduces the image bandwidth to something that is appropriate for the pixel spacing in the image sensor. In my example, even if the image sensor actually could resolve 6000x4000, the LPF guarantees that the image itself has a low enough maximum spatial frequency to be resolved by a 3000x2000 array of pixels. And interpolation back to 6000x4000 results in NO increase in resolution, but a quadrupling of file size!

> (As I understand it, some
> DSLR sensors now have enough resolution that they are leaving out the
> low pass filter.)

And I note many references to the resulting Moire patterns ...

Thanks,
Greg
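A toy 1-D version of that last point -- once the edge has been captured at the lower rate, interpolating back up only adds samples, it does not restore the sharpness:

import numpy as np

x = np.zeros(64)
x[33:] = 1.0                               # an ideal step edge, densely sampled
lo = x.reshape(-1, 2).mean(axis=1)         # average pairs (crude anti-alias LPF), keep 1 of 2
up = np.interp(np.arange(64.0),            # linear interpolation back to 64 samples
               np.arange(0, 64, 2) + 0.5, lo)

# Count samples that are neither fully dark nor fully bright:
print("transition samples, original :", int(np.sum((x  > 0.05) & (x  < 0.95))))   # 0
print("transition samples, upsampled:", int(np.sum((up > 0.05) & (up < 0.95))))   # 4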
Reply by ●November 28, 2013
On 2013-11-28 19:18, Greg Berchin wrote: [...]

> Interesting, but I'm not certain that it's relevant to the resolution issue in the context of turning the imager array of 3000x2000 sets of 4 pixels into a display array of 6000x4000 red subpixels, 6000x4000 green subpixels, and 6000x4000 blue subpixels. A sharp edge of a blue object cannot be converted from the imager's array of 3000x2000 blue-sensitive pixels into a full-resolution 6000x4000 display image. That's just basic math. How our eyes perceive it is not really the issue.

You can, believe me, you can. There are many non-linear algorithms capable of upscaling an image while preserving sharp edges.

You have another problem: the resolution of the lenses. Many sensors are far beyond that, especially for very small cameras.

bye,
-- piergiorgio
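For what it's worth, the core trick behind most of those edge-preserving (non-linear) interpolators is to choose the interpolation direction from the local gradients rather than averaging blindly. A toy version of the idea -- not any specific published algorithm:

def edge_directed_estimate(img, i, j):
    """Estimate a missing sample at (i, j) of a 2-D array from its four
    neighbours, interpolating along the direction of least change so that a
    strong edge running through the pixel is not smeared across."""
    dh = abs(img[i, j - 1] - img[i, j + 1])   # change along the row
    dv = abs(img[i - 1, j] - img[i + 1, j])   # change along the column
    if dh < dv:
        return 0.5 * (img[i, j - 1] + img[i, j + 1])   # interpolate horizontally
    if dv < dh:
        return 0.5 * (img[i - 1, j] + img[i + 1, j])   # interpolate vertically
    return 0.25 * (img[i, j - 1] + img[i, j + 1] + img[i - 1, j] + img[i + 1, j])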
Reply by ●November 28, 2013
On Thursday, November 28, 2013 5:44:45 AM UTC-8, Greg Berchin wrote:

> Back in the olden days I was a fairly serious amateur photographer. (Keywords:
> Kodachrome 25 -- though I preferred the palette and saturation of Fujichrome 50;

"Kodachrome 25" doesn't say "fairly serious", it just proves "old".

> I learned about Bayer Filters. It turns out that there are 6000x4000 pixels on
> the image sensor, all right, but there are NO subpixels. The pixels are grouped
> into sets of four; one of those four pixels is red-sensitive, two of those four
> pixels are green-sensitive, and the remaining one of those four pixels is
> blue-sensitive. And the 6000x4000 color image is created by INTERPOLATION
> between samples of the same color sensitivity. The ad copy refers to the image
> as consisting of 24 million "effective" pixels.
>
> Well, I don't know much about imaging sensors, but I do know something about
> signal processing, and one of the most fundamental rules of signal processing is
> that YOU CANNOT INCREASE RESOLUTION BY INTERPOLATION, particularly in signals
> that may be aliased (as images can easily be). One can resample a digital signal
> to a higher sampling rate, but in doing so one does NOT increase resolution.

Even the ad copy (and why would someone who claims to know signal processing use ad copy as a source?) makes no claim about resolution. Pixels are samples, not resolution, information content or bandwidth. Why are you shouting?

> So am I right in interpreting that 6000x4000 Bayer-Filtered image as consisting
> of 6 million REAL pixels, or have I misinterpreted something?

You have misinterpreted. And you have also substituted a shout of undefined REAL pixels for undefined ad copy "real" pixels. At least the ad copy reflects consistent industry practice for Bayer sensors.

Actual implementations of sensing and display of "red", "green" and "blue" colors seldom accurately match the RGB response definitions of the RGB standards such as sRGB or AdobeRGB. The conversion processes from sensor to RGB standard pixels and from RGB standard pixels to display or printer output involve interpolation in color space as well as 2D physical space. In digital camera photography, the interpolation from sensor to standard RGB is done in a "raw converter". The remapping from RGB standards to display or print is often called "color management". (Google "color management system" and results will include how Apple and Microsoft manage resampling maps for monitors in their software.) Printer ink sets may have more than 3 components to remap to. Printer driver software routinely upsamples from pixels to higher-density arrays of ink drop positions because it is convenient, and without the need to SHOUT IRRELEVANT REMARKS ABOUT RESOLUTION.

Dale B. Dalrymple
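As a deliberately simplified picture of the colour step a raw converter performs: a 3x3 matrix takes the demosaiced camera RGB into the standard primaries, then the transfer curve is applied. The matrix below is invented purely for illustration; real coefficients come from profiling the particular sensor and illuminant.

import numpy as np

# Invented, illustration-only camera-RGB -> linear-sRGB matrix (rows sum to 1
# so that white maps to white). Real values come from the raw converter's
# profile for the specific sensor and illuminant.
CAM_TO_SRGB = np.array([[ 1.6, -0.5, -0.1],
                        [-0.2,  1.4, -0.2],
                        [ 0.0, -0.4,  1.4]])

def camera_rgb_to_srgb(cam_rgb):
    """Matrix demosaiced camera RGB into sRGB primaries, then apply the
    standard sRGB encoding curve (linear toe below 0.0031308, ~gamma 2.4 above)."""
    linear = np.clip(cam_rgb @ CAM_TO_SRGB.T, 0.0, 1.0)
    return np.where(linear <= 0.0031308,
                    12.92 * linear,
                    1.055 * np.power(linear, 1.0 / 2.4) - 0.055)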
Reply by ●November 28, 2013
On 28/11/13 17:49, glen herrmannsfeldt wrote:

> One is that the eye has lower resolution for color than for luminance.
> Even more, the eye color resolution is different for different colors.
> The systems for both analog and digital video take this into account.
> Given the Bayer array, you can generate a luminance signal at full
> resolution and chrominance at the lower resolution.

I accept that when the eye is looking at a full-resolution image in which the eye's chrominance resolution is the limiting factor. But it seems less than valid if the image's chrominance resolution is the limiting factor, e.g. if the eye is presented with a highly-enlarged (cropped) image.
Reply by ●November 28, 2013
On Thu, 28 Nov 2013 10:18:42 -0800, Greg Berchin wrote:

> On Thursday, November 28, 2013 11:49:02 AM UTC-6, glen herrmannsfeldt
> wrote:
>
>> One is that the eye has lower resolution for color than for luminance.
>> Even more, the eye color resolution is different for different colors.
>> The systems for both analog and digital video take this into account.
>> Given the Bayer array, you can generate a luminance signal at full
>> resolution and chrominance at the lower resolution.
>
> Interesting, but I'm not certain that it's relevant to the resolution
> issue in the context of turning the imager array of 3000x2000 sets of 4
> pixels into a display array of 6000x4000 red subpixels, 6000x4000 green
> subpixels, and 6000x4000 blue subpixels. A sharp edge of a blue object
> cannot be converted from the imager's array of 3000x2000 blue-sensitive
> pixels into a full-resolution 6000x4000 display image. That's just basic
> math. How our eyes perceive it is not really the issue.

Few real-world objects are really monochromatic, however, and even fewer will exactly match the filters in the cameras. So deriving luminance from all the pixels is going to work for all but a few pathological cases.

>> snip <<

> So basically, that (spatial) LPF reduces the image bandwidth to
> something that is appropriate for the pixel spacing in the image sensor.
> In my example, even if the image sensor actually could resolve
> 6000x4000, the LPF guarantees that the image itself has a low enough
> maximum spatial frequency to be resolved by a 3000x2000 array of pixels.
> And interpolation back to 6000x4000 results in NO increase in
> resolution, but a quadrupling of file size!

I rather suspect that the LPF is designed with the assumption that the luminance signal is good across all the pixels, not just the green ones (or whatever).

>> (As I understand it, some
>> DSLR sensors now have enough resolution that they are leaving out the
>> low pass filter.)
>
> And I note many references to the resulting Moire patterns ...

Optical systems cannot focus light perfectly. Look up "blur spot" on the web, and possibly "Airy disk". The blur spot acts like a spatial low-pass filter; if it's big enough with respect to the pixels then you don't need to add a LPF. If it's not, but you're a cheap bastard, then you'll leave the LPF off to save $$.

I'm sure that the LPF in/out issue is one of great debate for camera makers, because the size of the blur spot from the optics changes not only with the F-number but with the quality of the lens and the skill of the guy bringing the image into focus. So on the one hand, if the LPF is not there, then at low F-number and good focus you'll see aliasing -- but if the LPF is there, and the focus and/or F-number blur just about matches the response of the LPF, you'd see some true image degradation as a consequence of having the LPF in place.

(I haven't done the math. If the optics are truly diffraction limited then the size of the Airy disk is proportional to the F-number times the wavelength. I think the proportionality constant is 1.4, but it's been a long time since I've had to care. If you can find out the pixel pitch at the detector, then you can work from that to figure out blur spot size -- assuming that you've got a good enough lens that the blur spot is due to diffraction, and not other issues.)

-- 
Tim Wescott
Control system and signal processing consulting
www.wescottdesign.com
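Putting rough numbers on that: the first-null diameter of the Airy disk is about 2.44 times the wavelength times the F-number (1.22 for the radius), and a roughly 36 mm wide full-frame sensor with 6000 photosites across has a 6 µm pitch. A quick back-of-envelope check, assuming those figures:

# Diffraction blur spot vs. pixel pitch, assuming a ~36 mm wide sensor with
# 6000 photosites across (6 um pitch) and green light.
wavelength_um = 0.55
pitch_um = 36000.0 / 6000

for f_number in (2.8, 5.6, 11, 22):
    airy_um = 2.44 * wavelength_um * f_number   # first-null diameter of the Airy disk
    print(f"f/{f_number:<4}  Airy disk {airy_um:5.1f} um   pixel pitch {pitch_um:.1f} um")
# By about f/5.6 the diffraction blur already spans more than one photosite, so
# at small apertures the lens itself behaves as the spatial low-pass filter.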
Reply by ●November 28, 2013
On Thursday, November 28, 2013 12:46:40 PM UTC-6, dbd wrote:

> Why are you shouting?

Not shouting. In a text-only format I have no way to represent italics or underlines. You appear to have taken offense at the perceived shouting, and for that I apologize. What do you recommend that I use for emphasis in the future?

> You have misinterpreted. And you have also substituted a shout of undefined REAL pixels for undefined ad copy "real" pixels. At least the ad copy reflects consistent industry practice for Bayer sensors.

What is undefined about real pixels in this context? (And the ad copy -- and the consistent industry practice -- refer to "effective" pixels. When they mean real, physical picture elements, they call them "pixels", not "effective" pixels.)

A pixel is a spatial sampling element. On a monitor each pixel is tri-colored, so the spacing from one color element to the next of the same color is the same as the pixel spacing. But in an image sensor, the spacing from one color element to the next of the same color is twice the pixel spacing. No caps. No shouting.

> Actual implementations of sensing and display of "red", "green" and "blue" colors seldom accurately match the RGB response definitions of the RGB standards such as sRGB or AdobeRGB. The conversion processes from sensor to RGB standard pixels and from RGB standard pixels to display or printer output involve interpolation in color space as well as 2D physical space. In digital camera photography, the interpolation from sensor to standard RGB is done in a "raw converter". The remapping from RGB standards to display or print is often called "color management". (Google "color management system" and results will include how Apple and Microsoft manage resampling maps for monitors in their software.) Printer ink sets may have more than 3 components to remap to. Printer driver software routinely upsamples from pixels to higher-density arrays of ink drop positions because it is convenient, and without the need to SHOUT IRRELEVANT REMARKS ABOUT RESOLUTION.

Irrelevant? Resolution is the main focus of my question. And while there may be many factors that affect the achieved spatial resolution, some of which you have mentioned above, there is a fundamental limit that is a direct consequence of the size and spacing of the pixels. That is the focus of my inquiry.

I am just a bit shocked at finding that the resolution limit of a digital camera appears to be only 1/2 (in a 1D context) of what one might expect from the basic pixel spacing. And nothing that I have read in this thread so far is changing my perception.

Greg
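To attach numbers to that factor of two, again assuming a ~36 mm wide sensor with 6000 photosites across:

# Per-colour sample spacing and the corresponding Nyquist limits.
sensor_width_mm = 36.0
photosites_across = 6000

pixel_pitch_mm = sensor_width_mm / photosites_across    # ~0.006 mm between photosites
same_colour_pitch_mm = 2 * pixel_pitch_mm               # R-to-R or B-to-B spacing

nyquist_all_sites = 1 / (2 * pixel_pitch_mm)            # ~83 cycles/mm
nyquist_per_colour = 1 / (2 * same_colour_pitch_mm)     # ~42 cycles/mm

print(f"pixel pitch              : {pixel_pitch_mm * 1000:.1f} um")
print(f"Nyquist, every photosite : {nyquist_all_sites:.0f} cycles/mm")
print(f"Nyquist, one colour plane: {nyquist_per_colour:.0f} cycles/mm")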






