I wonder if there is a spatial Nyquist sampling theory?

Started by walala March 19, 2004
Is there a Nyquist sampling theory for spatial dimensions?

Suppose point A to point B is about 1 mile; how many sensors should I place
to get a fair estimate of the property over the whole range?



Sure there is.  Imagine a very simple example: a sheet of graph paper, i.e.
gridded lines.  Your input frequency range corresponds to the spacing of the
lines.  Your sample rate must be high enough to get at least 2 samples per
line spacing.
If it is too low, you will get aliasing and it will look like the grid is wider
than it is.
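A quick numerical sketch of Jon's graph-paper example (the grid frequency and sample spacing below are made-up illustrative numbers, not from the thread):

```python
import math

f_grid = 1.0                 # hypothetical grid: 1 line per unit
dx = 0.75                    # sample spacing -> fs = 4/3 < 2 * f_grid, so aliasing
fs = 1.0 / dx
f_alias = abs(f_grid - round(f_grid / fs) * fs)   # folded (apparent) frequency

# Samples of the fine grid match samples of a much coarser "alias" grid.
fine   = [math.cos(2 * math.pi * f_grid  * n * dx) for n in range(8)]
coarse = [math.cos(2 * math.pi * f_alias * n * dx) for n in range(8)]
matches = all(abs(a - b) < 1e-9 for a, b in zip(fine, coarse))
```

With these numbers the alias folds down to about 1/3 cycle per unit, so the sampled grid looks three times wider than the real one.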

> walala wrote:
> >
> > Is there a Nyquist sampling theory for spatial dimensions?
> >
> > Suppose point A to point B is about 1 mile; how many sensors should I place
> > to get a fair estimate of the property over the whole range?
"walala" <mizhael@yahoo.com> wrote in message
news:c3fgba$nl0$1@mozo.cc.purdue.edu...
> Is there a Nyquist sampling theory for spatial dimensions?
>
> Suppose point A to point B is about 1 mile; how many sensors should I place
> to get a fair estimate of the property over the whole range?
Walala,

The answer is yes. Consider this: once the "function" is defined, what difference does it make where it came from? Temporal, spatial, etc.

I believe you will find that spatial sampling, such as in digital cameras, forces us to accept spatial aliasing whenever the scene has higher spatial frequencies than the sampling of the imaging array will allow to be captured while meeting the Nyquist criterion. We usually have no control over either one.

Think of a picture of a picket fence, a window, or a door screen. There are very well-defined spatial frequencies in their images. Moiré patterns are typical and very noticeable in movies where the scene is changing. Think of a highly sampled image of newspaper graphics, which is already sampled in a sense because it uses a dot structure. What happens if the sample rate is lower or higher? Some practitioners suggest scanning pictures with a dot structure at a dpi setting that will blur the dots; otherwise there will be Moiré patterns in the scanned image. The alternative is to fully resolve the dots, yielding a lot more pixels.

Fred
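Fred's scanning advice (blur the dots, or fully resolve them) can be sketched in a toy one-dimensional form. The dot period and subsampling step below are arbitrary choices for illustration:

```python
# Dot pattern with period 3 "pixels": one dark dot, then two light pixels.
dots = [1, 0, 0] * 40

step = 4                                    # subsample below the dot rate
raw = dots[::step]                          # no prefilter: Moire-like beat
blurred = [sum(dots[i:i + 3]) / 3 for i in range(len(dots) - 2)]
smooth = blurred[::step]                    # blur first: dots averaged away
```

The raw subsampling produces a slow false pattern (`[1, 0, 0, 1, 0, 0, ...]`), while the blurred version is a near-constant 1/3: the dot structure is gone before it can alias.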

walala wrote:
> > Is there a Nyquist sampling theory for spatial dimensions?
> >
> > Suppose point A to point B is about 1 mile; how many sensors should I place
> > to get a fair estimate of the property over the whole range?
Half as many as for 2 miles :~}

-jim
"Fred Marshall" <fmarshallx@remove_the_x.acm.org> wrote in message
news:PPydnZftv9pcz8bd4p2dnA@centurytel.net...
> [Fred's explanation of spatial aliasing in cameras and scanned images
> quoted in full, snipped]
I am not sure I understand the concept of spatial frequency.

For example, we know we have sensor networks now: suppose we place sensors
in a region to sense temperature. Do we need to consider a spatial Nyquist
rate?

I am curious about the emerging large-scale coordinated signal processing...
"Jon Harris" <goldentully@hotmail.com> wrote in message
news:c3fhp2$27cfmo$1@ID-210375.news.uni-berlin.de...
> [Jon's graph-paper example and walala's original question quoted in full,
> snipped]
I am not sure I understand the concept of spatial frequency.

For example, we know we have sensor networks now: suppose we place sensors
in a region to sense temperature. Do we need to consider a spatial Nyquist
rate?

I am curious about the emerging large-scale coordinated signal processing...
walala wrote:

> I am not sure I understand the concept of spatial frequency.
Take a picture and apply a two-dimensional Fourier transform to it. The result is (two-dimensional) spatial frequency.

Take a 35mm slide (black and white works a little better), illuminate it with a plane wave from a laser/lens system, place a lens on the other side, and a screen at the focal point of the lens. It is a little easier to see if you put a TV camera there with its lens removed, or just focus the camera at infinity. The "image" on the TV camera is the Fourier transform of the object (the picture on the slide).

Add a second lens, and place objects at the focal point of the first lens. These objects can then selectively block spatial frequencies in the Fourier transform, and are then called spatial filters. The second lens does an inverse transform so that you can see the results of the spatial filtering.
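The first paragraph (a 2-D Fourier transform of a picture yields spatial frequency) can be demonstrated with a brute-force 2-D DFT of a tiny synthetic stripe image; the image size and stripe frequency here are arbitrary:

```python
import cmath, math

N = 8
# Vertical stripes: a pure spatial frequency of 2 cycles across the image.
img = [[math.cos(2 * math.pi * 2 * x / N) for x in range(N)] for _ in range(N)]

# Brute-force 2-D DFT (fine at this size; an FFT would be used in practice).
F = [[sum(img[y][x] * cmath.exp(-2j * math.pi * (u * x + v * y) / N)
          for x in range(N) for y in range(N))
      for u in range(N)] for v in range(N)]

mag = [[abs(F[v][u]) for u in range(N)] for v in range(N)]
# All the energy sits at horizontal frequency u = 2 (and its mirror u = N - 2),
# with zero vertical frequency: the transform reads off the stripe spacing.
```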
> For example, we know we have sensor networks now:
> suppose we place sensors in a region to sense temperature. Do we need to
> consider a spatial Nyquist rate?
> I am curious about the emerging large-scale coordinated signal processing...
You need to consider the spatial Nyquist rate if there may be frequency components higher than the Nyquist frequency.

If I watch a TV weather report that gives temperatures in different cities, I usually believe that I can interpolate to the city I am in. There is always a possibility that that could be wrong.

-- glen
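Glen's interpolation remark is the Whittaker-Shannon picture: if the temperature field really is band-limited below the spatial Nyquist frequency, sinc interpolation between sample points recovers it. A sketch with made-up numbers (unit sensor spacing, a variation at 0.1 cycles per unit):

```python
import math

def sinc(t):
    return 1.0 if t == 0 else math.sin(math.pi * t) / (math.pi * t)

f = 0.1    # cycles per unit spacing, safely below the Nyquist limit of 0.5
samples = {n: math.sin(2 * math.pi * f * n) for n in range(-200, 201)}

def reconstruct(x):
    # Whittaker-Shannon interpolation from unit-spaced samples (truncated sum).
    return sum(v * sinc(x - n) for n, v in samples.items())

x = 10.5                          # halfway between two "sensors"
estimate = reconstruct(x)
truth = math.sin(2 * math.pi * f * x)
```

The truncated sum lands very close to the true midpoint value; had `f` exceeded 0.5, no interpolation scheme could have told the true variation from its alias.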
Jon Harris wrote:
> [Jon's graph-paper example and walala's original question quoted in full,
> snipped]
Here's a neat link: http://www.clarkvision.com/imagedetail/sampling1.html

Good day!

--
_____________________
Christopher R. Carlen
crobc@earthlink.net
Suse 8.1 Linux 2.4.19
glen herrmannsfeldt wrote:

> walala wrote:
>
>> I am not sure I understand the concept of spatial frequency.
>
> [glen's optical Fourier-transform demonstration snipped]
I think that spatial filtering is a bit afield from what Walala wants to know. For one thing, it is about filtering by modifying a representation of the signal at certain locations; while that is certainly related, it's not as direct as one would like. The question is also made simpler by being couched as linear (along a line) rather than as being about an area.
>> For example, we know we have sensor networks now:
>> suppose we place sensors in a region to sense temperature. Do we need to
>> consider a spatial Nyquist rate?
Yes. The highest-frequency temperature variation we can infer from one temperature sensor per mile along a route is one cycle every two miles. Actual variations may be closer together than that, but outdoors we have no way to filter the data before acquiring it. Much closer spacing of the sensors (a higher spatial sampling rate) might be needed just to avoid aliasing, even if frequencies higher than one cycle per two miles don't interest us.
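Jerry's arithmetic generalizes directly: spacing d gives a Nyquist limit of 1/(2d) cycles per mile, and a target maximum frequency dictates the sensor count. A small sketch (the 2-cycles-per-mile target is hypothetical):

```python
import math

def nyquist_freq(spacing_miles):
    # Highest spatial frequency capturable without aliasing, in cycles/mile.
    return 1.0 / (2.0 * spacing_miles)

def sensors_needed(span_miles, f_max):
    # Spacing must be at most 1/(2 * f_max); count both endpoints.
    spacing = 1.0 / (2.0 * f_max)
    return math.floor(span_miles / spacing) + 1

one_per_mile = nyquist_freq(1.0)        # 0.5 cycles/mile: one cycle per two miles
count = sensors_needed(1.0, 2.0)        # resolve up to 2 cycles/mile over 1 mile
```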
>> I am curious about the emerging large scale coordinated signal
>> processing...
>
> You need to consider the spatial Nyquist rate if there
> are possibly frequency components higher than the Nyquist
> frequency. If I watch a TV weather report that gives temperatures
> in different cities, I usually believe that I can interpolate to
> the city I am in. There is always a possibility that could
> be wrong.
Because of aliasing? I agree.
> -- glen
Another simple example of a spatial filter:

With a standard 10x microscope objective, an object at infinity comes to a focus at or near the rear surface of the rear element. (The object comes to a focus at the eyepiece, 169 mm or so further back.) A dot of india ink in the center of the rear element blocks the light from a small distant illuminant such as a bare bulb. The lens nevertheless makes a fine image of the object plane, which one can see in the eyepiece with brightness reversed: this is a form of dark-field microscopy. The dot of ink acts as a spatial filter which removes the "DC" level.

Jerry
--
Engineering is the art of making what you want from things you can get.
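A rough digital analogue of Jerry's ink dot: the zero-frequency ("DC") bin of a DFT is just the pixel sum, so blocking it amounts to subtracting the image mean. The tiny image below is made up for illustration:

```python
# A flat field with one bright feature; the values are arbitrary.
img = [[10.0, 10.0, 10.0],
       [10.0, 14.0, 10.0],
       [10.0, 10.0, 10.0]]

mean = sum(sum(row) for row in img) / 9          # the "DC" level
dark_field = [[p - mean for p in row] for row in img]
# The result sums to zero: the uniform background is removed and only the
# deviation (the feature's contrast) survives, as in a dark-field image.
```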
"walala" <mizhael@yahoo.com> wrote in message
news:c3fpl0$rc8$1@mozo.cc.purdue.edu...
> [Fred's earlier post and walala's reply quoted in full, snipped]

Walala,

I am curious *now* about the emerging large scale coordinated signal processing as well!

Here's the concept of spatial frequency. Let's do it in one-dimensional space:

Imagine sand dunes or ocean waves with a perfect sinusoidal shape North/South and perfectly flat (extending infinitely) East/West. So, we'll only consider wave height North and South - one-dimensional. We will call the North/South dimension "x".

We will make an instantaneous measurement of wave height for all "x" - resulting in a numerical record that is a sine wave plus a constant. Let's say that one "cycle" of the sinusoid spans 100 yards. So the frequency is 1 cycle per 100 yards. We can't use Hz because that has dimensions of cycles/second, but we can certainly talk about cycles/yard or cycles/foot or cycles/meter...

Above, I've assumed an infinitely high sampling rate, i.e. continuous measurement. If the measurement is to be sampled, then it's no different from sampling any other waveform where the independent variable is time or any other measure.

If we compute the Fourier transform of the record, we will see a component at zero spatial frequency corresponding to the constant term and a component at 1/100 cycles per yard. Note that the Fourier transform variables are amplitude vs. distance and amplitude vs. spatial frequency, rather than amplitude vs. time and temporal frequency respectively.

Fred
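Fred's dune record can be pushed through a DFT to confirm the two components he describes. This sketch uses his 100-yard period; the 10-yard sample spacing and 400-yard record length are my assumptions:

```python
import cmath, math

period = 100.0                  # yards per cycle, from Fred's example
dx = 10.0                       # assumed: one height sample every 10 yards
N = 40                          # 400 yards of record = 4 complete cycles

# Constant mean level plus the sinusoidal dune profile.
heights = [5.0 + 2.0 * math.sin(2 * math.pi * n * dx / period) for n in range(N)]

spectrum = [abs(sum(h * cmath.exp(-2j * math.pi * k * n / N)
                    for n, h in enumerate(heights))) for k in range(N)]

freqs = [k / (N * dx) for k in range(N)]    # bin frequencies in cycles/yard
# Bin 0 carries the constant term; bin 4 sits at 4/400 = 0.01 = 1/100
# cycles/yard, exactly the dune's spatial frequency.
```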