DSPRelated.com
Forums

Zero padding fftw

Started by simwes December 8, 2010
On Mon, 13 Dec 2010 08:50:06 -0800 (PST), robert bristow-johnson
<rbj@audioimagination.com> wrote:

>On Dec 13, 11:15 am, eric.jacob...@ieee.org (Eric Jacobsen) wrote:
>> On Mon, 13 Dec 2010 06:31:45 -0800 (PST), illywhacker
>> <illywac...@gmail.com> wrote:
>> >On Dec 13, 12:44 am, dbd <d...@ieee.org> wrote:
>> >> On Dec 12, 2:36 pm, illywhacker <illywac...@gmail.com> wrote:
>> >> > Obviously you have never heard of superresolution.
>> >>
>> >> You do not seem to be qualified to judge that of which other people
>> >> have heard.
>> >>
>> >> > In any case, the question is: what did the OP want?
>> >> > ...
>> >>
>> >> The OP asked for help in correctly performing an interpolation by fft,
>> >> zero extending in the transformed domain and ifft.
>> >>
>> >> > The case you are discussing is that in which:
>> >> >
>> >> > 1) only one signal in the range of the antialiasing is compatible with
>> >> > the values of the given samples;
>> >> >
>> >> > 2) the resampled signal required can be derived from the antialiased
>> >> > signal alone.
>> >> >
>> >> > We can assume (1), if the initial system is well-designed. But (2)
>> >> > renders the situation trivial,
>> >>
>> >> You are indeed fortunate to find 2) trivial since it is the only
>> >> sensible use case for DSP.
>> >
>> >As I said: apparently you have not heard of superresolution. You seem
>> >convinced that your knowledge and experience is the only knowledge and
>> >experience relevant to DSP. From what I have seen so far, I do not
>> >agree with you.
>>
>> You seem convinced that Darrell is incorrect or not cognizant in this
>> area. I'm convinced you're wrong about that.
>
>i would agree with you completely, Eric, except i think his name is
>"Dale". or did i completely screw up regarding who we're talking
>about.
>
>r b-j
Dammit. I'm getting older. Yeah, I meant Dale.

Eric Jacobsen
Minister of Algorithms
Abineau Communications
http://www.abineau.com
On Mon, 13 Dec 2010 09:26:53 -0800 (PST), illywhacker
<illywacker@gmail.com> wrote:

>On Dec 13, 4:15 pm, eric.jacob...@ieee.org (Eric Jacobsen) wrote:
>> On Mon, 13 Dec 2010 06:31:45 -0800 (PST), illywhacker
>> <illywac...@gmail.com> wrote:
>> >On Dec 13, 12:44 am, dbd <d...@ieee.org> wrote:
>> >> On Dec 12, 2:36 pm, illywhacker <illywac...@gmail.com> wrote:
>> >> > and it is hard to believe this is what
>> >> > the OP wanted (remember he or she wants to increase the resolution).
>> >> > ...
>> >>
>> >> Interpolation is a common tactic used to more accurately resolve the
>> >> location of features such as a peak or a zero crossing in a sampled
>> >> signal. In English, the ability "to more accurately resolve" is
>> >> correctly referred to as "increased resolution". This definition of
>> >> resolution is consistent with the OP's question, algorithm and
>> >> vocabulary and with 1) and 2).
>> >
>> >Perhaps we have a difference of terminology. If the anti-aliasing and
>> >sampling are correctly designed, then the anti-aliased signal is
>> >uniquely determined by the samples. Therefore, any feature whatsoever
>> >of the anti-aliased signal can be recovered from the samples. This is
>> >not interpolation: one might call it reconstruction, but really it is
>> >just a change of basis.
>>
>> What is not interpolation? Sampling?
>
>I would not call a change of basis 'interpolation', but this is just
>terminology. To give an example: I can uniquely represent a periodic
>signal (i.e. a function on the circle) that has an upper bound on its
>frequency content using a finite set of samples at regular intervals.
>Alternatively, I can represent it by its finite set of Fourier
>amplitudes. In either case, I can compute the value of the function at
>any point I wish. I would not say that I have interpolated in doing
>this. I have merely used a different description of the same thing. It
>is when I do not know how to reconstruct the signal, i.e. when there
>is more than one signal that could have produced the given samples,
>that it is non-trivial.
Usually in DSP one is not sampling a function, but a stream of information. So the features and characteristics likely aren't known (otherwise, why collect them?). So it is, indeed, often useful to resample the original collected data set to add new samples where there weren't ones before, using the information from the surrounding samples to determine the values of the new ones. That process is widely understood to be called "interpolation", and is what the thread is about. I don't know why that'd be in dispute.
>> > Whether you express this in terms of
>> >'resampling' is a matter of nomenclature only: it certainly does not
>> >increase the resolution in any meaningful sense. No matter how close
>> >together you make your samples, you will not be able to resolve any
>> >features that were not encoded in the original samples.
>>
>> But you may be able to more accurately locate things like peaks and
>> zero crossings, as Darrell mentioned.
>
>What you are saying is true, if you use a particular *algorithm* to
>locate zero crossings (or peaks): resampling the reconstructed
>anti-aliased signal and then eyeballing the result. This is indeed
>one way to do it. Again, using such a method, you will find only the
>zero crossing in the unique anti-aliased signal corresponding to the
>samples. If that is what interests you, fine.
>
>> Improving the ability to
>> locate a peak is sometimes said to better "resolve" its location.
>> It's not the more common usage of the term, but it is used that way.
>
>Sure - I do not really want to argue about the word 'resolution'. My
>point is that to do anything non-trivial requires a model of your
>signal class.
I think you just came full circle to what was claimed by others initially, including Dale. You seem to be in violent agreement with the finer points as already expressed by others. If you have some significant disagreement with things, you're not expressing it well.

Eric Jacobsen
Minister of Algorithms
Abineau Communications
http://www.abineau.com
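The procedure the OP asked about — forward FFT, zero-extend in the transformed domain, inverse FFT — can be sketched in a few lines. This is a minimal numpy illustration, not code from the thread; the function name and the even-length Nyquist handling are this sketch's own choices:

```python
import numpy as np

def fft_interpolate(x, factor):
    """Interpolate x by `factor` via zero padding in the frequency domain.

    Assumes x holds one full period of a band-limited periodic signal.
    """
    n = len(x)
    m = n * factor
    X = np.fft.fft(x)
    Y = np.zeros(m, dtype=complex)
    half = n // 2
    Y[:half] = X[:half]            # positive frequencies
    Y[-(n - half):] = X[half:]     # negative frequencies (and Nyquist)
    if n % 2 == 0:
        # Split the Nyquist bin between +/- halves so the result stays real.
        Y[half] = X[half] / 2
        Y[-half] = X[half] / 2
    # Rescale so the original samples are reproduced exactly.
    return np.real(np.fft.ifft(Y)) * factor

t = np.arange(8)
x = np.cos(2 * np.pi * t / 8)      # one cycle, 8 samples
y = fft_interpolate(x, 4)          # 32 samples of the same cosine
```

Every `factor`-th output sample reproduces an original sample, and the in-between values are those of the unique band-limited interpolant — which is the sense in which this is interpolation rather than added information.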
On Dec 13, 5:56 pm, eric.jacob...@ieee.org (Eric Jacobsen) wrote:
> On Mon, 13 Dec 2010 09:26:53 -0800 (PST), illywhacker
> <illywac...@gmail.com> wrote:
> >On Dec 13, 4:15 pm, eric.jacob...@ieee.org (Eric Jacobsen) wrote:
> >> On Mon, 13 Dec 2010 06:31:45 -0800 (PST), illywhacker
> >> <illywac...@gmail.com> wrote:
> >> >On Dec 13, 12:44 am, dbd <d...@ieee.org> wrote:
> >> >> On Dec 12, 2:36 pm, illywhacker <illywac...@gmail.com> wrote:
> Usually in DSP one is not sampling a function, but a stream of
> information. So the features and characteristics likely aren't known
> (otherwise, why collect them?). So it is, indeed, often useful to
> resample the original collected data set to add new samples where
> there weren't ones before, using the information from the surrounding
> samples to determine the values of the new ones. That process is
> widely understood to be called "interpolation", and is what the thread
> is about. I don't know why that'd be in dispute.
You are conflating at least two things here.

One is when the samples uniquely determine the underlying continuous signal, because of other knowledge we possess. For example, a well-designed anti-aliasing filter and sampling system allows the exact anti-aliased signal to be reconstructed, because we know how it was formed. In this case, computing values of the anti-aliased signal in between the original samples may help you to understand the nature of the signal when you look at a graph of it, but a computer has no need of this unless a particular algorithm requires it. The information is unchanged. This is the only situation that Dale will countenance, apparently.

On the other hand, the values you compute in this way may not be the values you want. The position of a peak in the anti-aliased signal may not be the same as its position in the original signal before anti-aliasing. In this case, you need to do more work. It is non-trivial, and a model of the original signal class is required. (Actually a model is there in the first case too, but turns out to be irrelevant to the inference.) An example is superresolution. Another would be identifying the locations of point sources. Contrary to what you suggest, this could be in 1d or n-d.
> I think you just came full circle to what was claimed by others
> initially, including Dale. You seem to be in violent agreement with
> the finer points as already expressed by others. If you have some
> significant disagreement with things, you're not expressing it well.
Please do not be insulting. Consider the possibility that you might simply be understanding badly.

Dale told me 'The only "model" that is relevant to the situation is whether the signal of interest survived the anti-aliasing filtering required before the original sampling'. This is false. (Well, actually, it is imprecise, but if we interpret 'signal of interest survived the anti-aliasing filtering' as meaning 'signal of interest is in bijective correspondence with the anti-aliased signal' and 'is relevant' as meaning 'can contribute', then it is false.) This is the argument, and no one else in this thread has expressed this opinion, although only two are disputing it.

illywhacker;
On Dec 13, 12:49 pm, Jerry Avins <j...@ieee.org> wrote:
> On Dec 13, 11:23 am, Clay <c...@claysturner.com> wrote:
> > On Dec 13, 11:06 am, eric.jacob...@ieee.org (Eric Jacobsen) wrote:
> > > On Mon, 13 Dec 2010 05:53:36 +0000 (UTC), glen herrmannsfeldt
> > > <g...@ugcs.caltech.edu> wrote:
> > > >dbd <d...@ieee.org> wrote:
> > > >(snip)
> > > >> The OP asked for help in correctly performing an interpolation by fft,
> > > >> zero extending in the transformed domain and ifft.
> > > >(snip)
> > > >> Interpolation is a common tactic used to more accurately resolve the
> > > >> location of features such as a peak or a zero crossing in a sampled
> > > >> signal. In English, the ability "to more accurately resolve" is
> > > >> correctly referred to as "increased resolution". This definition of
> > > >> resolution is consistent with the OP's question, algorithm and
> > > >> vocabulary and with 1) and 2).
> > > >
> > > >In optics, as I understand it, resolution comes from the ability to
> > > >distinguish between two objects in an image, in place of one larger
> > > >object. The wikipedia page Optical_resolution has much of the
> > > >explanation.
> > > >
> > > >If you are looking through a telescope, you would like to know
> > > >if you see two stars that are close together, or just one.
> > > >In most cases, interpolation doesn't help. Note that the
> > > >actual image is, as seen from earth, pretty much a point source.
> > > >Diffraction broadens that into an Airy pattern, which is pretty
> > > >much the circular version of the sinc.
> > > >
> > > >High quality lenses are diffraction limited, such that diffraction
> > > >effects limit the possible resolution. For lower quality lenses,
> > > >the lens itself limits the resolution.
> > > >
> > > >Interpolation doesn't increase the resolution. It may allow
> > > >one to see what the shape of the expected sinc or Airy disk is,
> > > >but that is all.
> > > >
> > > >Linear deconvolution also doesn't do much to improve resolution.
> > > >
> > > >Non-linear deconvolution uses other properties of the system
> > > >to improve resolution, but that takes more than interpolation.
> > > >For example, absorption spectra can't go below zero or
> > > >above one. Taking those constraints into account, and accurately
> > > >knowing the transfer function of the system, allows one to do
> > > >better than one might otherwise expect.
> > > >
> > > >-- glen
> > >
> > > Similar definitions of "resolution" exist in signal processing. When
> > > I worked on radar processing systems, the ability to resolve two point
> > > targets, similar to your optical example of resolving two stars, was
> > > the pertinent example. The idea there, which I've seen described
> > > fairly consistently for other applications, was that two sinx/x point
> > > responses had to be able to be separated to about their 3dB points in
> > > order to detect and "resolve" them separately.
> > >
> > > So that's one definition of "resolution" that seems to be fairly
> > > consistently applied. In this context, interpolating additional
> > > points does nothing to improve the resolution, which I think is why
> > > that's usually the case that's made.
> > >
> > > But there are other things that can be "resolved" from a signal, and
> > > as Darrell alluded the locations of peaks (e.g., accurately estimating
> > > frequency in a DFT output) or zero crossings can often be more
> > > accurately located using interpolation.
> > >
> > > So in that sense increasing the "resolution" by interpolation is
> > > certainly possible, but it's not the usual way the term is used. So,
> > > as usual, it is important to know what is meant when "resolution" is
> > > used in a discussion.
> > >
> > > Eric Jacobsen
> > > Minister of Algorithms
> > > Abineau Communications
> > > http://www.abineau.com
> >
> > Certainly in astro one can refer to Rayleigh's or Dawes limits, but
> > there are tricks for seeing a little more than what those limits
> > allow. In the astronomical case of stars, which for almost (there are
> > exceptions) all practical purposes are mathematical point sources, the
> > common trick to resolving close stars is to use an apodization filter
> > placed over the telescope's aperture. This filter will push each
> > star's diffraction rings around at the expense of increasing the size
> > of the central Airy discs. But this can be useful for locating
> > companion stars.
> >
> > So in an analogous way one can think of making a window function have
> > a null at a key location to reveal an item of interest but at the
> > expense of losing something else.
>
> One simple apodizing mask is a square. It increases the effective
> resolution along the diagonals of the square at the expense of
> resolution along the other symmetry axes, 45 degrees away. (This is
> great for spectroscopes, where resolution perpendicular to the slit is
> all that matters.) Like all apodizing masks, it has to be applied at
> the time of data collection. As far as I know, you can't improve an
> already existing image that way.
>
> Jerry
Yes, the mask is a "run time" thing. In fact if you are using a square mask, you can play with rotating it to place the desired companion object along the diagonal of the square.

Some people use round masks made out of several layers of window screen. I haven't tried this yet. Here is an example:

http://home.digitalexp.com/~suiterhr/TM/MakingApod.pdf

This article's author has written the "bible" for star testing telescopes.

Clay
On Mon, 13 Dec 2010 10:36:06 -0800 (PST), illywhacker
<illywacker@gmail.com> wrote:

>On Dec 13, 5:56 pm, eric.jacob...@ieee.org (Eric Jacobsen) wrote:
>> On Mon, 13 Dec 2010 09:26:53 -0800 (PST), illywhacker
>> <illywac...@gmail.com> wrote:
>> >On Dec 13, 4:15 pm, eric.jacob...@ieee.org (Eric Jacobsen) wrote:
>> >> On Mon, 13 Dec 2010 06:31:45 -0800 (PST), illywhacker
>> >> <illywac...@gmail.com> wrote:
>> >> >On Dec 13, 12:44 am, dbd <d...@ieee.org> wrote:
>> >> >> On Dec 12, 2:36 pm, illywhacker <illywac...@gmail.com> wrote:
>
>> Usually in DSP one is not sampling a function, but a stream of
>> information. So the features and characteristics likely aren't known
>> (otherwise, why collect them?). So it is, indeed, often useful to
>> resample the original collected data set to add new samples where
>> there weren't ones before, using the information from the surrounding
>> samples to determine the values of the new ones. That process is
>> widely understood to be called "interpolation", and is what the thread
>> is about. I don't know why that'd be in dispute.
>
>You are conflating at least two things here.
It's a broad topic.
>One is when the samples uniquely determine the underlying continuous
>signal, because of other knowledge we possess. For example, a
>well-designed anti-aliasing filter and sampling system allows the
>exact anti-aliased signal to be reconstructed, because we know how it
>was formed. In this case, computing values of the anti-aliased signal
>in between the original samples may help you to understand the nature
>of the signal when you look at a graph of it, but a computer has no
>need of this unless a particular algorithm requires it. The
>information is unchanged. This is the only situation that Dale will
>countenance apparently.
The "unless a particular algorithm requires it" is a floodgate. This is what many of us here do for a living, and have for a long time, so often people here are speaking from that point of view as the norm rather than the exception. Yeah, interpolation is useful.
>On the other hand, the values you compute in this way may not be the
>values you want. The position of a peak in the anti-aliased signal
>may not be the same as its position in the original signal before
>anti-aliasing. In this case, you need to do more work. It is
>non-trivial, and a model of the original signal class is required.
>(Actually a model is there in the first case too, but turns out to be
>irrelevant to the inference.) An example is superresolution. Another
>would be identifying the locations of point sources. Contrary to what
>you suggest, this could be in 1d or n-d.
You don't have to go that far at all. Many times it is desirable to extract the value of a signal at a point in between existing samples. Finding the symbol centers of a communication signal is a reasonable example. There are many, many examples of simple interpolation of the anti-aliased signal being a useful or necessary process. Are you really arguing otherwise?
>> I think you just came full circle to what was claimed by others
>> initially, including Dale. You seem to be in violent agreement with
>> the finer points as already expressed by others. If you have some
>> significant disagreement with things, you're not expressing it well.
>
>Please do not be insulting.
What's insulting about that?
> Consider the possibility that you might
> simply be understanding badly.
Could be. Could be any misunderstanding on my part is because you're not expressing yourself well.
>Dale told me 'The only "model" that is
>relevant to the situation is whether the signal of interest survived
>the anti-aliasing filtering required before the original sampling'.
>This is false. (Well, actually, it is imprecise, but if we interpret
>'signal of interest survived the anti-aliasing filtering' as meaning
>'signal of interest is in bijective correspondence with the
>anti-aliased signal' and 'is relevant' as meaning 'can contribute'
>then it is false.)
Sounds like you're going to lengths to adjust or narrow the interpretation to suit you. Have fun with that, but for general discussion I agree with Dale.
> This is the argument, and no one else in this
>thread has expressed this opinion, although only two are disputing
>it.
It may be interesting as a digression for those so inclined, but in the context of the OP's problem it seems irrelevant to me.

Eric Jacobsen
Minister of Algorithms
Abineau Communications
http://www.abineau.com
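Extracting the value of a band-limited signal at a point between existing samples, as mentioned above for symbol centers, does not require resampling the whole record. A minimal numpy sketch (the function name and the odd-length restriction are this example's assumptions, not anything from the thread):

```python
import numpy as np

def value_at(x, t):
    """Evaluate the band-limited periodic interpolant of samples x at
    fractional index t.  Assumes len(x) is odd, so there is no Nyquist
    bin to split."""
    n = len(x)
    X = np.fft.fft(x)
    k = np.fft.fftfreq(n) * n              # signed integer bin indices
    # Inverse DFT evaluated at a non-integer time index.
    return np.real(np.sum(X * np.exp(2j * np.pi * k * t / n)) / n)

n = 9
t = np.arange(n)
x = np.cos(2 * np.pi * 2 * t / n + 0.3)    # band-limited test signal
mid = value_at(x, 2.5)                     # halfway between samples 2 and 3
```

For a signal within the band limit the value recovered between samples is exact, consistent with the point that interpolation of the anti-aliased signal adds no information yet is routinely useful.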
Eric Jacobsen <eric.jacobsen@ieee.org> wrote:
(snip on resolution in optics)

> Similar definitions of "resolution" exist in signal processing.
(snip)
> So that's one definition of "resolution" that seems to be fairly
> consistently applied. In this context, interpolating additional
> points does nothing to improve the resolution, which I think is why
> that's usually the case that's made.
> But there are other things that can be "resolved" from a signal, and
> as Darrell alluded the locations of peaks (e.g., accurately estimating
> frequency in a DFT output) or zero crossings can often be more
> accurately located using interpolation.
I was trying to see if I could understand this one. With the assumption that there is one frequency in the source (or at least in the region of interest) I suppose it works. It would seem, though, that you could get much of the result from the ratio of the nearby bins.

Well, first of all it seems to me that zero padding is only applicable in the case that the source signal is actually zero outside the range of the transform, or that it should be assumed to be zero.
> So in that sense increasing the "resolution" by interpolation is
> certainly possible, but it's not the usual way the term is used. So,
> as usual, it is important to know what is meant when "resolution" is
> used in a discussion.
-- glen
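The single-tone case discussed here — locating a DFT peak more finely by zero padding the time record before transforming — can be illustrated numerically. The tone frequency and padding factor below are arbitrary choices for the sketch:

```python
import numpy as np

n, fs = 64, 64.0
f0 = 10.37                         # true tone frequency, off the bin grid
t = np.arange(n) / fs
x = np.cos(2 * np.pi * f0 * t)

# Coarse estimate: peak bin of the plain DFT (1 Hz bin spacing here).
coarse = np.argmax(np.abs(np.fft.rfft(x))) * fs / n

# Zero-pad the time record 16x before transforming: same data, denser
# frequency grid, so the peak can be located more finely.
pad = 16
fine = np.argmax(np.abs(np.fft.rfft(x, n * pad))) * fs / (n * pad)
```

The padded search does not separate two close tones any better (resolution in the optics sense is unchanged); it only locates the peak of the existing response more finely, which is the distinction the thread keeps circling.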
illywhacker <illywacker@gmail.com> wrote:
(snip)

> However, if you know something about the class of signals you are
> processing, then you can do better than this. To take Glen's example:
> if you know that your original signal consists of point sources, then
> you may be able to reconstruct their positions from the diffracted,
> sampled image.
More specifically, if you know the transfer function accurately enough, and have sufficient signal/noise, then you can do some pretty amazing things. See especially some of the early images from the Hubble telescope.

In that case, though, the actual resolution limit should still be the diffraction limit of the telescope. (As in the part I snipped, related to anti-aliasing filters, the higher spatial frequencies are lost at the diffraction limit.) Deconvolution allowed a correction for the very accurate, but wrong, curvature of the mirror, given enough signal.

-- glen
Eric Jacobsen <eric.jacobsen@ieee.org> wrote:
(snip)
 
> Usually in DSP one is not sampling a function, but a stream of
> information. So the features and characteristics likely aren't known
> (otherwise, why collect them?). So it is, indeed, often useful to
> resample the original collected data set to add new samples where
> there weren't ones before, using the information from the surrounding
> samples to determine the values of the new ones. That process is
> widely understood to be called "interpolation", and is what the thread
> is about. I don't know why that'd be in dispute.
I suppose it isn't in dispute. What is, is the meaning of resolution in the case of interpolation.

I remember learning about interpolation in the case of log tables, where linear interpolation allowed for one digit more than that of the printed table. (And not all that long before calculators would do logs faster and easier.) In that case, though, the table was computed with the assumption of doing linear interpolation, and generated with the appropriate number of digits.

In the case of sinc interpolation here, there has been no discussion of the uncertainty of the result. Even more, whether sinc interpolation is appropriate for a given data set. It is convenient in that it gives a visual representation of the interpolation, and that may lead one to believe that more information is there than is necessarily the case.

(snip)
>>Sure - I do not really want to argue about the word 'resolution'. My
>>point is that to do anything non-trivial requires a model of your
>>signal class.
> I think you just came full circle to what was claimed by others
> initially, including Dale. You seem to be in violent agreement with
> the finer points as already expressed by others. If you have some
> significant disagreement with things, you're not expressing it well.
-- glen
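The log-table recollection above can be checked numerically: for a table of log10 with spacing h, the worst-case linear-interpolation error is roughly h^2 |f''| / 8, well below the table's own rounding. The spacing below is an assumption for illustration:

```python
import numpy as np

h = 0.001
xs = np.linspace(1.0, 2.0, 1001)       # table arguments, spacing h
table = np.log10(xs)                   # the "printed" table values

# Midpoints between entries are where linear interpolation is worst.
mids = xs[:-1] + h / 2
approx = (table[:-1] + table[1:]) / 2  # linear interpolation at midpoints
err = np.max(np.abs(approx - np.log10(mids)))
```

Here `err` comes out near 5e-8, i.e. under the 5e-7 half-ulp rounding of a six-figure table, consistent with linear interpolation buying roughly one extra digit — provided, as noted, the table was generated with that use in mind.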
On Dec 13, 10:36 am, illywhacker <illywac...@gmail.com> wrote:
> ...
> Consider the possibility that you might
> simply be understanding badly. Dale told me 'The only "model" that is
> relevant to the situation is whether the signal of interest survived
> the anti-aliasing filtering required before the original sampling'.
> This is false. (Well, actually, it is imprecise ...
Yes it is imprecise, and intentionally so. In real world data analysis, we expect that our samples contain components from unknown models and even from models that cannot be accurately represented by a finite set of samples. The case you gave:
> For example, a
> well-designed anti-aliasing filter and sampling system allows the
> exact anti-aliased signal to be reconstructed, because we know how it
> was formed.
is only true in symbolic manipulations and homework problems. In fact, we may use the anti-aliased and sampled (and therefore faulty) data to determine if we have a signal with components that represent some model, even when we know that signals that fit the model cannot be accurately represented by the anti-aliased sampled data sets. To do this for some models we may need to 'resolve' the positions of things like peaks and zero crossings to a finer 'resolution' than allowed by the original sample positions.

The OP has given us no context to justify restricting our consideration to any more restrictive model than "anti-aliased and sampled". There are many games we can play if we know more, but you have failed to show any models as required limitations on the application of this group's usage of "interpolation" in response to the OP.

Dale B. Dalrymple
On Dec 13, 2:44 pm, Clay <c...@claysturner.com> wrote:
> On Dec 13, 12:49 pm, Jerry Avins <j...@ieee.org> wrote:
> (snip)
>
> Yes, the mask is a "run time" thing. In fact if you are using a square
> mask, you can play with rotating it to place the desired companion
> object along the diagonal of the square.
>
> Some people use round masks made out of several layers of window
> screen. I haven't tried this yet. Here is an example
>
> http://home.digitalexp.com/~suiterhr/TM/MakingApod.pdf
>
> This article's author has written the "bible" for star testing
Albert Ingalls's "Amateur Telescope Making" (Book 2, p. 357) shows a very clever opaque mask for eliminating the visible effect of spider diffraction. Only 4% of the original light is blocked. Of course, curved spider vanes work also.

Jerry