> I might have said: "where X is the *original* spatial sampling interval."
> If T is the temporal sample interval, then 1/T is the sample frequency and
> 1/2T is the Nyquist frequency and the bandwidth is < 1/2T. If I want to
> decimate by 2, then I need to reduce the bandwidth to < 1/4T.
> If X is the spatial sample interval, then 1/X is the sample frequency and
> 1/2X is the Nyquist frequency and the bandwidth is < 1/2X. If I want to
> decimate by 2, then I need to reduce the spatial bandwidth to < 1/4X.
> The "new" sample interval will be 2X or 2T. So, the Nyquist frequency is
> 1/(2*2X) or 1/(2*2T)
I get it. Sorry I asked!
Engineering is the art of making what you want from things you can get.
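The decimation rule quoted above can be sketched in a few lines of NumPy (a minimal illustration, assuming only NumPy; the filter here is a crude moving average standing in for a proper low-pass design):

```python
import numpy as np

# Sketch of decimation by 2: before discarding every other sample,
# the signal must be band-limited to below 1/(4T), i.e. half the
# *new* Nyquist frequency 1/(2*2T).

T = 1.0 / 1000.0                  # original sample interval (1 kHz rate)
t = np.arange(0, 1, T)            # 1000 samples over one second
x = np.sin(2 * np.pi * 50 * t)    # 50 Hz tone, well below 1/(4T) = 250 Hz

# Crude low-pass filter: an 8-tap moving average
# (a real design would use a proper FIR/IIR low-pass)
h = np.ones(8) / 8.0
x_filtered = np.convolve(x, h, mode="same")

y = x_filtered[::2]               # keep every other sample: new interval 2T
# The new Nyquist frequency is 1/(2*2T) = 250 Hz
assert len(y) == len(x) // 2
```

The moving average is only a placeholder; the point is the order of operations, filter first, then discard samples.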
Reply by Piergiorgio Sartor●March 26, 2004
Jerry Avins wrote:
> Now that you point that out explicitly, it's retrospectively obvious.
> On the other hand, if the sensitive elements are just a little narrower
> than the spacing, then some DC is possible with inputs of n*Fs.
Currently that's the situation with CCD color sensors,
since a pixel is usually made of four sub-pixels:
so between sub-pixels of the _same_ color there is quite
a lot of space.
Not to mention the physical space between the single color sensors.
Reply by Ronald H. Nicholson Jr.●March 26, 2004
In article <firstname.lastname@example.org>,
Piergiorgio Sartor <piergiorgio.sartor@nexgo.REMOVETHIS.de> wrote:
>Jerry Avins wrote:
>> Now that you point that out explicitly, it's retrospectively obvious.
>> On the other hand, if the sensitive elements are just a little narrower
>> than the spacing, then some DC is possible with inputs of n*Fs.
>Currently that's the situation with CCD color sensor,
>since a pixel is made usually by four sub-pixels:
>so between each color of the _same_ type there is quite a lot of space.
>Not to mention the physical space between the single color sensors.
However, part of the "secret soup" used by some single-CCD digicams is
a degree of intentional physical blurring between sub-pixels of the color
mosaic to both low-pass filter the pixel image and to gather some light
from between the CCD sensor rectangles. But with this method you get
less usable resolution than the rated "megapixels" of the camera.
Some cameras which don't physically blur the focal plane image run
demosaicing algorithms on the sub-pixels before handing you interpolated
RGB or color-subsampled YUV data, which can alias certain colored
patterns of just the right 2D spatial frequency. This gives higher
perceived "resolution" for some images at the cost of annoying aliasing
on certain bright plaid shirts at just the right distance, etc. Also note
that some sensors pseudo-randomize the CCD color mosaic filter to make
it statistically unlikely that anyone has a shirt garish enough to
cause aliasing over a significant area.
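The aliasing mechanism described above can be shown in one dimension (a toy sketch, assuming NumPy; the Bayer geometry is simplified to a single row, and the period-2 "plaid" pattern is my own stand-in for "just the right spatial frequency"):

```python
import numpy as np

# One color plane of a mosaic sensor samples the scene only at every
# other pixel site. A pattern whose period equals the mosaic pitch
# then aliases down to a flat DC value on that plane.

n = 16
x = np.arange(n)
pattern = np.cos(np.pi * x)   # period-2 stripes: +1, -1, +1, -1, ...

red_samples = pattern[::2]    # "red" sites at even columns only
# Every red site lands on a +1 stripe, so the stripes alias to DC:
assert np.allclose(red_samples, 1.0)
```

The same pattern sampled on the odd columns would come out as a constant -1, which is why a demosaicing algorithm can "see" color in a pattern that has none.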
Is there a theory of randomized sampling applicable to DSP processing?
(where something like the maximum sample spacing is specified?)
Ron Nicholson rhn AT nicholson DOT com http://www.nicholson.com/rhn/
#include <canonical.disclaimer> // only my own opinions, etc.