DSPRelated.com
Forums

column problem in image processing

Started by alb April 28, 2012
On 5/1/2012 7:31 AM, Tim Wescott wrote:
[...]
>> I think the first link in my previous post >>> (re: noise) mentions a number of noise sources that you'd only be able >>> to diminish by calibrating your sensor (as mentioned in the link: >>> master bias frame for read noise; dark frame for reduction of thermal >>> noise; etc.). >> >> It looks to me that this effect is present only when there's stray light >> coming in the CCD (which happens unfortunately). > > Do you know the physics behind where that's coming from? >
That is a good question and I doubt I have a reasonable answer for it. The CCD is a frame-transfer one with electron multiplication (so-called L3Vision technology). The sensitive area is transferred in a 'short time' w.r.t. the exposure time into a non-sensitive area of the same size and then 'slowly' read out. Every row is shifted down and goes through an avalanche diode which multiplies the value of each pixel. The multiplied row is then shifted pixel by pixel, goes through a charge amplifier and is finally converted by an ADC.

row 0  ****************    |   here it is shifted down
row 1  ****************    v
...
       ----------------    here it is multiplied
apd    ----------------
       ****************    ->  here it is shifted to the charge amplifier

If a pixel is saturated after the multiplication, it is possible that the charge amplifier gets saturated as well and takes some time to recover, so the next pixel would read a wrong charge (since the baseline of the charge amplifier has not come back to its original value). If this is true, then it is possible that a saturated channel may affect the next one in the same row.

Unfortunately this model does not fully explain why the area with stray light is all bright (if we had saturation we should see a column effect there as well). What is true is that in the area with stray light there is an increasing background (see the histogram on the top) with a column structure superimposed.
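Just to make that hypothesis concrete (this is only a toy model of the readout, not how the front-end actually behaves; the saturation level, offset and recovery factor are made-up numbers):

import numpy as np

def simulate_readout(row, sat_level=4095, recovery=0.5):
    """Toy model: after a saturated pixel the amplifier baseline is
    offset and decays by a factor 'recovery' at every following pixel."""
    out = np.empty_like(row, dtype=float)
    baseline = 0.0
    for i, value in enumerate(row):
        out[i] = min(value + baseline, sat_level)   # ADC clips at full scale
        if out[i] >= sat_level:
            baseline = 0.3 * sat_level              # amplifier pushed off its baseline
        else:
            baseline *= recovery                    # gradual recovery
    return out

# a flat row with one saturated pixel shows the 'tail' on the following pixels
row = np.full(16, 100.0)
row[5] = 5000.0
print(simulate_readout(row))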
> Before I saw your statement about it only happening in the presence of > stray light, I was thinking that it looks like garden-variety > nonuniformity on an IR CCD. But if it's only happening when you're > getting stray light then either there's something about the physics of > the detector that you're using that's letting the stray light "bleed" > into the picture nonuniformly, or (perhaps more likely) there's some > deficiency in the way that the CCD is calibrated, or the way that the > calibration is used, that is messing up your picture. > > How _is_ the imager calibrated?
The image shown is *not calibrated*, meaning it has not had any processing done before transmission. We call it a 'raw image': it is essentially what comes out of the hardware and is available to the processor for processing. Since there's very little memory on board to store raw images, we cannot use any approach that would require a (long) sequence of images to work on. Since the effect is present only with stray light, it really cannot be evened out with a calibration performed at some other time, therefore I'm hoping to get something out through some sort of filtering; otherwise I'm afraid we need to give up on those images. [...]
> > There should be an electro-optical egghead helping you out with this, in > a well-formed project team. If not -- start eating eggs.
Eating eggs is not the problem; the biggest problem is where to buy them!
>
On Monday, April 30, 2012 8:16:02 AM UTC-4, alb wrote:
> On 4/30/2012 7:11 AM, kevin wrote: > > > On Apr 28, 11:21 am, alb <alessandro.bas...@cern.ch> wrote: > >> Thanks for the pointers, but unfortunately is not a problem of blooming. > >> I uploaded a sample of the picture I'm talking about here: > >> > >> http://ams.cern.ch/AMS/Basili/Docs/snapshot1.png > >> > >> As you may see there is some stray light coming in and from the > >> histogram on the right it is clear that something is wrong with the > >> column projection (mean of the column). > >> > >> Even though there are a couple of 'stars' (many more indeed), their > >> brightness is rather dim compared to the surrounding noise and hard to > >> find if the columns are not equalized first (I'm not sure is the right > >> term here). > >> > >> On the bottom of the screenshot it is shown some info about the image > >> and if you compare the sigma of the last one w.r.t. the ones taken > >> @04:56:41 they differ a lot (~300 vs ~20). So any threshold which is > >> based on the sigma will likely fail to find the star. > > > > Yes, it doesn't look like blooming. I can see many faint points of > > light in the image you provided, but, as you said, they are very dim > > compared to the noise. > > Currently we are shooting 5 images in a row (130 ms each) and then > transmit them to ground. We don't have more space on board than that. > The sequence of 5 images allow us to clearly see objects moving, > therefore we are pretty confident that those dim points are stars. The > problem is how to make the software recognize them as stars. The system > should be capable of recognize the star, calculate the position w.r.t. > the CCD coordinates and transmit only a pair of numbers for each object > instead of the whole raw image. > > Not what I would expect. And that sigma > > variation is worrysome. > > Indeed. There are pictures which don't have so much noise and even > though the 'salt and pepper' noise is rather significant, it should not > be a problem to remove it. > > By the way, does anybody know how much a median filter would take on a > 50MHz dsp if the image is 512x512 pixels? I know about an implementation > which is O(1) (http://nomis80.org/ctmf.html) and does not depend on the > dimension of the kernel, but I have no feeling how that is suitable for > my dsp. > > I think the first link in my previous post > > (re: noise) mentions a number of noise sources that you'd only be able > > to diminish by calibrating your sensor (as mentioned in the link: > > master bias frame for read noise; dark frame for reduction of thermal > > noise; etc.). > > It looks to me that this effect is present only when there's stray light > coming in the CCD (which happens unfortunately). > > > > Given the large number of astronomy related sites, some of which are > > devoted to CCDs, you might ask at those sites to find someone who can > > more quickly identify your problem and help you solve it. My > > (inexperienced) guess is that it's related to calibration. Those > > alternating light/dark lines in your image don't look anything like > > the 'master dark frame' shown in the picture 'Example of a typical > > dark frame' in the first link. And I wouldn't expect the histogram of > > the column to show such a bias. > > Indeed we don't even have a 'dark frame' since we don't have a shutter > and we cannot take a dark picture. The CCD is a frame transfer one and > was chosen for the possibility to avoid the presence of a shutter and > therefore add complexity to the system... leaving the problems to the > software guy! > > > > Kevin McGee
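On the median-filter timing question quoted above: I can't give a number for a specific 50 MHz DSP, but as a very rough order of magnitude a straightforward 3x3 median needs about 20 compare/exchange operations per pixel, so on the order of 5 million operations for a 512x512 frame, while the constant-time algorithm linked above replaces the per-pixel sort with histogram updates whose cost does not grow with the kernel size. If you just want to see what a small median does to the salt-and-pepper noise on the ground (this assumes numpy/scipy on a desktop machine, nothing to do with the flight DSP):

import numpy as np
from scipy.ndimage import median_filter

rng = np.random.default_rng(0)
frame = rng.normal(100, 20, size=(512, 512))     # stand-in for a raw frame
frame[rng.random(frame.shape) < 0.01] = 4095     # sprinkle 'salt' (hot pixels)

cleaned = median_filter(frame, size=3)           # 3x3 median removes isolated hot pixels
print(frame.max(), cleaned.max())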
No shutter - I think you got screwed! Even though some sensors have electrical shutters, they are still subject to blooming due to light leakage through the shutter. Mechanical shutters don't have that issue. That is why pro DSLRs still have mechanical shutters. And if I understand you correctly, your imager is aboard a spacecraft, so no human intervention is possible. Is the CCD regulated to a constant temperature? The sensor's temperature will strongly affect the dark current. Sometimes a "glow" in part of the image reflects a hotter part of the CCD. You will see this in room-temperature DSLRs when used for long exposures.

Typically in astrophotometry (what I sometimes do in a professional setting) for calibration you shoot Bias, Dark, and Flat frames. Bias frames are very short exposures (usually the shortest exposure available) with no incident light (shutter closed). I usually shoot 10 of each and find their averages. Dark frames are shuttered exposures whose duration equals the duration of your planned image exposures. Flat frames are used to image a neutral-brightness scene. Flats are used to compensate for uneven illumination due to optical design and dust on the optics. 'Scopes are often brighter on axis than off axis, so a flat will remove the central hot spot in the image. Darks and biases will compensate out the uneven column gains and charge pump noise (unequal thermal distribution in the sensor). If your dark current is sufficiently low so that the dark noise is inconsequential for the exposure duration, then you don't need the dark frames and the bias frame suffices to even out the sensor's response. I get to do this with liquid-nitrogen-chilled sensors. Does your sensor at least have a Peltier chiller?

IHTH,

Clay
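P.S. In case it helps anyone reading along, the reduction described above boils down to something like the following sketch. The array names are made up and the flat normalization is just one common choice; it is not a recipe the OP can use without a shutter.

import numpy as np

def master_frame(frames):
    """Average a stack of calibration exposures into a master frame."""
    return np.mean(np.asarray(frames, dtype=float), axis=0)

def calibrate(raw, master_bias, master_dark, master_flat):
    # the dark frame already contains the bias, so subtracting it removes both
    science = raw.astype(float) - master_dark
    flat = master_flat.astype(float) - master_bias
    flat /= np.median(flat)                 # normalize the flat to ~1.0
    return science / flat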
On 5/2/2012 5:18 AM, Tim Wescott wrote:
[...]
>> Once again, the astronomy community has some expertise in this area, so >> it would be helpful to seek them out. >> > > I'm rusty. Yes, one needs to do a calibration for offset looking at a > neutral scene. You can sometimes do this just by defocusing the optics > -- but only if the light impinging on the detector is made uniform by > doing so.
Unfortunately there's no possibility to change the focus, nor to use a shutter to take a 'dark frame', nor to have a 'flat frame'. I have to work with what I have, which is not much indeed.
> > Actually, if it weren't for the stray light problem, given that what > you're looking at is essentially a field of black with little bright dots > that are guaranteed to be moving, you could average a bunch of frames for > your offset correction.
The pixel size is such that it covers 44 arcsec. I'll spare you the math, but essentially it means that between frames (130 ms each) any real object (i.e. not an artifact of the system) will move less than a pixel. In this situation, even if I could use a bunch of images, I would probably just get a false background where the object is.
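To spell out the math I skipped, using only the two numbers above (44 arcsec per pixel, 130 ms per frame, 5 frames per sequence):

pixel_arcsec, frame_s, n_frames = 44.0, 0.13, 5
print(pixel_arcsec / frame_s)               # ~338 arcsec/s needed to move 1 pixel per frame
print(pixel_arcsec / (frame_s * n_frames))  # ~68 arcsec/s needed to move 1 pixel per sequence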
> > Hey, Alb: does what you're seeing have some sort of automatic gain and > level applied to it before it gets to you? That may account for why you > sometimes see the nonuniformity and sometimes don't. And if it's there > -- it's going to greatly complicate your job. >
AFAIK there's no such thing, even though the idea was in the plans at some point in the past. I will definitely cross-check whether this part made it into the hardware, but I doubt it. And yes, it would complicate my job much more.
On 5/1/2012 8:21 PM, Frnak McKenney wrote:
>> Even though there are a couple of 'stars' (many more indeed), their >> brightness is rather dim compared to the surrounding noise and hard >> to find if the columns are not equalized first (I'm not sure is the >> right term here). > > Curious... I loaded a copy of the image into The GIMP and played with > the Brightness-Contrast controls (B=+10,C=+120). The result was a > lot of star-like points against a very black background. Given that > the original looked rather uniformly grey and stripey, I found this > surprising.
I also do something similar to what you describe; I call it 'window leveling' (http://www.cs.ioc.ee/~khoros2/one-oper/window-level/front-page.html) and the result is that the contrast is greatly increased. You will definitely see more objects, but the stripes will be enhanced as well, and even though our naked eye may very well see those objects (a star would be at least 2x2 pixels, up to 6x6 or more), I do not know how to implement an algorithm to find them.
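For reference, the window/level mapping I mean is essentially this (the level and window values in the comment are placeholders, not our actual settings):

import numpy as np

def window_level(img, level, window):
    """Map [level - window/2, level + window/2] linearly onto [0, 255],
    clipping everything outside that range."""
    lo, hi = level - window / 2.0, level + window / 2.0
    out = (np.clip(img, lo, hi) - lo) / (hi - lo)
    return (out * 255).astype(np.uint8)

# e.g. stretch a narrow band around the background level of a 16-bit frame:
# display = window_level(raw_frame, level=1200, window=300)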
> > If you were aimed at Ursa Minor (or some similar shape) this may be > helpful. On the other hand, if you aere aimed at, say, Orion, well, > feel free to ignore this interruption. <grin!> >
I'm sorry, I don't think I understood this paragraph.
> > Frank McKenney
On 5/1/2012 3:48 PM, clay@claysturner.com wrote:
> Several questions: Do the same columns exhibit the same bias from > capture to capture?
if and only if there is stray light coming in (like the picture I posted).
> If not, you may have a read noise problem here - > try reading the array slowly!
Unfortunately I do not have any control over the readout process; you should assume that the raw picture is already sitting in a buffer when the flag/interrupt is asserted.

> Some cameras in order to read the data
> quickly have multiple read paths each with its own biases. However > these stripes occur in a regular pattern. Yours seem to be irregular. > However if the biases are consistant, a dark frame may be used (and > they should always be used along with flat frames in astrophotometry) > to remove them. If not, then you can treat each column as a mostly > flat line with occasional blips. This isn't a good place to start > accurate astrophotometry. Have you shot any dark frames and > subtracted them out?
No chance to shoot dark frames, we do not have a shutter. Bear in mind that this device is used for pointing on a spacecraft. It is true that it would be better to have dark frames and flat frames, but in a normal frame (no stray light) the background is rather uniform and flat and there's no need to complicate the star-finding algorithm that much. On the contrary, when we do have stray light in the CCD, the background is non-uniformly distributed. In this case the background should be extracted somehow in order not to mess up the position (the centroid will certainly be wrong if the background is non-uniform, but I still don't know by how much).
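Just to illustrate the centroid worry, here is a minimal sketch of a windowed centroid with and without a crude local background removed first. The median-based background is only an assumption for illustration, not what we plan to fly:

import numpy as np

def centroid(patch):
    """Intensity-weighted centroid of a small patch, returned as (row, col)."""
    rows, cols = np.indices(patch.shape)
    total = patch.sum()
    return (rows * patch).sum() / total, (cols * patch).sum() / total

def centroid_bg_subtracted(patch):
    """Same, after removing a crude local background (the patch median)
    and clipping negatives, so a tilted background biases the result less."""
    cleaned = np.clip(patch - np.median(patch), 0, None)
    return centroid(cleaned)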
> > Clay > >
Hi, alb.

On Thu, 03 May 2012 08:31:15 +0200, alb <alessandro.basili@cern.ch> wrote:
> On 5/1/2012 8:21 PM, Frnak McKenney wrote: >>> Even though there are a couple of 'stars' (many more indeed), >>> their brightness is rather dim compared to the surrounding noise >>> and hard to find if the columns are not equalized first (I'm not >>> sure is the right term here). >> >> Curious... I loaded a copy of the image into The GIMP and played >> with the Brightness-Contrast controls (B=+10,C=+120). The result >> was a lot of star-like points against a very black background. >> Given that the original looked rather uniformly grey and stripey, I >> found this surprising. > > I also do something similar to what you said, I call it 'window > leveling' > (http://www.cs.ioc.ee/~khoros2/one-oper/window-level/front-page.html) > and the result is that I increase greatly the contrast. You will > definitely see more objects, but the stripes also will be enhanced
Ah. This is an area where communication gets tricky. What I see in the "image" portion of:

http://ams.cern.ch/AMS/Basili/Docs/snapshot1.png

is almost all a sort of vertical-stripe-y grey with a rounded triangular patch of white in the lower left corner. The grey isn't uniform: it consists of a reasonably consistent set of vertical "stripes" plus a "few" vaguely brighter spots which I assume are stars. If this were being done using film instead of a CCD(?), I'd probably recommend that you clean or polish up the film guide so it didn't scrape so badly. Or stop buying that cheap corduroy-patterned film. <grin!>

Now that I've explained my terminology: what surprised me was that applying a simple brightness+contrast adjustment made the "stripes" disappear (except near the lower left corner) and the "stars" pop out... except, of course, in the lower left corner. The GIMP displays a preview of the contrast-brightness adjustments, and if I set "contrast" to "full" I can then see how the stripes re-emerge as I gradually increase the "brightness". Looking at the "window levelling" curve, it appears that I'm obtaining a similar effect using different tools.

The vertical consistency of the "stripes" makes me think that there _ought_ to be some way to reduce that effect algorithmically. Suppose you took the image, trimmed off the artifacts around the perimeter, and then sliced off (say) the bottom quarter to eliminate the "white triangle". If you then computed a "column average intensity" on that result and subtracted it from the original image, it might give you a better idea of the intensity of your "star" points.
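If it helps, here is roughly what I mean as a sketch, assuming the frame is available as a 2-D array and that the bright patch can be excluded with a simple mask (both the mask and the use of the mean are just my assumptions):

import numpy as np

def destripe(frame, exclude_mask=None):
    """Subtract a per-column background estimate from the frame.
    'exclude_mask' marks pixels (e.g. the bright stray-light patch)
    that should not contribute to the column estimate."""
    data = frame.astype(float)
    if exclude_mask is not None:
        data = np.where(exclude_mask, np.nan, data)
    column_bg = np.nanmean(data, axis=0)        # one background value per column
    return frame - column_bg

# e.g. ignore the bottom quarter where the white triangle sits:
# mask = np.zeros(frame.shape, dtype=bool); mask[-frame.shape[0] // 4:, :] = True
# cleaned = destripe(frame, mask)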
> and even though our naked eye may very well see those objects (a > star would be at least a 2x2 up to 6x6 or more) I do not know how to > implement the algorithm to find them.
Oh. I've been addressing the question of making the "stars" stand out a little better. _Is_ that what you've been asking about? Or were you actually asking for an algorithm for "picking out single-to-multiple pixel groups and finding their locations"?
>> If you were aimed at Ursa Minor (or some similar shape) this may be >> helpful. On the other hand, if you aere aimed at, say, Orion, >> well, feel free to ignore this interruption. <grin!>
> I'm sorry I think I did not understand this paragraph.
All I was trying to say was that section of my result image had a "star" pattern that reminded me of the constellation Ursa Minor. If that's the portion of the sky your image represents, then I have slightly more trust that the algorithm I'm applying "works". If not, if you were taking "photos" of (say) a portion of Orion or Scorpio, then perhaps what I'm doing needs checking. And, of course, if what we're seeing is a really "small" portion of the night sky, one much smaller than a single constellation, then my "recognition" is obviously a victory of imagination over reality. <grin!> Frank -- "Light thinks it travels faster than anything but it is wrong. No matter how fast light travels, it finds the darkness has always got there first, and is waiting for it." -- Terry Pratchett -- Frank McKenney, McKenney Associates Richmond, Virginia / (804) 320-4887 Munged E-mail: frank uscore mckenney aatt mindspring ddoott com
On 5/3/2012 7:18 PM, Frnak McKenney wrote:
> Hi, alb.
[...]
>> I also do something similar to what you said, I call it 'window >> leveling' >> (http://www.cs.ioc.ee/~khoros2/one-oper/window-level/front-page.html) >> and the result is that I increase greatly the contrast. You will >> definitely see more objects, but the stripes also will be enhanced > > Ah. This is an area where communication gets tricky. What I see in > the "image" portion of: > > http://ams.cern.ch/AMS/Basili/Docs/snapshot1.png > > is almost all a sort of vertical-stripe-y grey with a rounded > triangular patch of white in the lower left corner. The grey isn't > uniform: it consists of a reasonably-consistent set of vertical > "stripes" plus a "few" vaguely brighter spots which I assume are > stars.
A star should appear as a group of at least 2x2 pixels. All the rest is only noise (pixels with high dark current).

> If this was being done using film instead of a CCD(?), I'd
> probably recommend that you clean or polish up the film guide so it > didn't scrape so badly. Or stop buying that cheap corduroy-patterned > film. <grin!>
We had the idea to use film... it was too expensive to bring it down to earth and develop it! <grin!>
> > Now that I've explained my terminology: What surprised me was that > throwing simple brightness+contrast adjustment made the "stripes" > disappear (except near the lower left corner) and the "stars" pop > out... except, of course, the lower left corner. The GIMP displays a > preview of the contrast-brighness adjustments, and if I set "contrast" > to "full" I can then see how the stripes re-emerge as I gradually > increase the "brightness".
that is interesting. Maybe I should be playing with GIMP also.
> > Looking at the "window levelling" curve, it appears that I'm obtaining > a similar effect using different tools. >
The problem with the leveling is that the stripes only get enhanced.
> The vertical consistency of the "stripes" makes me think that there > _ought_ to be some way to reduce that effect algorithmically. Suppose > you took the image, trimmed off the artifacts around the perimiter, > and then sliced off (say) the bottom quarter to eliminate the "white > triangle". If you then computed a "column average intensity" on that > result and subtracted it from the original image, it might give you a > better idea of the intensity of your "star" points.
I tried that, but unfortunately it is not so easy to get rid of the 'white triangle', since it can get bigger depending on the attitude of the spacecraft and the light coming in. If you look at the histogram - a mean over the column - on the top right, you will see how the column structure sits on top of the 'white triangle'. I get an image that is a little less stripey, but still not useful for tracking objects.
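One thing I may try (just an idea on my side, not tested on the real data): use a robust per-column statistic such as a low percentile instead of the mean, so the bright triangle pulls the estimate up much less and no explicit cropping is needed:

import numpy as np

def destripe_robust(frame, percentile=25):
    """Per-column background from a low percentile of each column.
    Stars and the bright stray-light patch mostly sit above this
    percentile, so they barely pull the estimate up."""
    column_bg = np.percentile(frame.astype(float), percentile, axis=0)
    return frame - column_bg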
> >> and even though our naked eye may very well see those objects (a >> star would be at least a 2x2 up to 6x6 or more) I do not know how to >> implement the algorithm to find them. > > Oh. I've been addressing the question of making the "stars" stand out > a little better. _Is_ that what you've been asking about? Or were you > actually asking for an algorithm for "picking out single-to-multiple > pixel groups and finding their locations"?
The second. I would need to implement an algorithm to search for stars within images like the one I posted. Fortunately there are extended periods of time when we do not have stray light into the CCD, but I'm not addressing those cases here.
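To be concrete about what I'm after, something along these lines is what I imagine doing on the ground first: remove the column background, threshold a few sigmas above a robust noise estimate, then keep only connected groups of at least 2x2 pixels and report their centroids. The thresholds and the use of scipy.ndimage are only for the sketch; the flight version would have to be rewritten for the DSP.

import numpy as np
from scipy import ndimage

def find_stars(frame, k_sigma=5.0, min_pixels=4):
    """Return (row, col) centroids of bright connected blobs."""
    bg = np.median(frame, axis=0)                  # per-column background
    residual = frame - bg
    sigma = 1.4826 * np.median(np.abs(residual - np.median(residual)))  # robust sigma (MAD)
    labels, n = ndimage.label(residual > k_sigma * sigma)
    centroids = []
    for i in range(1, n + 1):
        blob = labels == i
        if blob.sum() >= min_pixels:               # reject single hot pixels
            centroids.append(ndimage.center_of_mass(residual * blob))
    return centroids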
> >>> If you were aimed at Ursa Minor (or some similar shape) this may be >>> helpful. On the other hand, if you aere aimed at, say, Orion, >>> well, feel free to ignore this interruption. <grin!> > >> I'm sorry I think I did not understand this paragraph. > > All I was trying to say was that section of my result image had a > "star" pattern that reminded me of the constellation Ursa Minor. If > that's the portion of the sky your image represents, then I have > slightly more trust that the algorighm I'm applying "works". If not, > if you were taking "photos" of (say) a portion of Orion or Scorpio, > then perhaps what I'm doing needs checking. And, of course, if what > we're seeing is a really "small" portion of the night sky, one much > smaller than a single constellation, then my "recognition" is > obviously a victory of imagination over reality. <grin!> >
Wow, you recognized Ursa Minor? For me it's hard to say.
> > Frank
On Fri, 04 May 2012 12:49:13 +0200, alb <alessandro.basili@cern.ch> wrote:
> On 5/3/2012 7:18 PM, Frnak McKenney wrote:
[...]
> A star should look like as a group of at least 2x2 pixels. All the rest > is only noise (pixels with high dark current).
Ah.
> If this was being done using film instead of a CCD(?), I'd >> probably recommend that you clean or polish up the film guide so it >> didn't scrape so badly. Or stop buying that cheap corduroy-patterned >> film. <grin!> > > We had the idea to use film... it was to expensive to bring it down to > earth and develop it! <grin!>
Sigh. Another missed opportunity: I could have tried putting in a joint bid with Federal Express to handle the deliveries. <grin!>
>> Now that I've explained my terminology: What surprised me was that >> throwing simple brightness+contrast adjustment made the "stripes" >> disappear (except near the lower left corner) and the "stars" pop >> out... except, of course, the lower left corner. The GIMP displays a >> preview of the contrast-brighness adjustments, and if I set "contrast" >> to "full" I can then see how the stripes re-emerge as I gradually >> increase the "brightness". > > that is interesting. Maybe I should be playing with GIMP also.
I think we've reached a point where this thought-to-text-to-thought interface has become a bit unwieldy. It might be simpler if I send you some samples of my results so you can actually see what I'm talking about, and you can quickly determine whether it's something you can use.
>> Looking at the "window levelling" curve, it appears that I'm >> obtaining a similar effect using different tools. > > problem with the leveling is that the stripes get only enhanced.
Hm. Mine disappear, but (a) I'm not working with your original data (I'm chopping the CCD portion out of your alb_snapshot1.png file) and (b) I don't know how much useful data is being tossed by what I'm doing. <grin!>
>> The vertical consistency of the "stripes" makes me think that there >> _ought_ to be some way to reduce that effect algorithmically. >> Suppose you took the image, trimmed off the artifacts around the >> perimiter, and then sliced off (say) the bottom quarter to >> eliminate the "white triangle". If you then computed a "column >> average intensity" on that result and subtracted it from the >> original image, it might give you a better idea of the intensity of >> your "star" points. > > I tried that, but unfortunately is not so easy to get rid of the > 'white triangle' since it can get bigger depending on attitude of > the spacecraft and light coming in. If you look at histogram - a > mean over the column - on the top right you will see how the column > structure is going on top of the 'white triangle'. > > I get the image a little less stripey but still nor useful to track > objects.
>> Oh. I've been addressing the question of making the "stars" stand >> out a little better. _Is_ that what you've been asking about? Or >> were you actually asking for an algorithm for "picking out >> single-to-multiple pixel groups and finding their locations"? > > The second. I would need to implement an algorithm to search for > stars in within images like the one I posted. Fortunately there are > extended periods of time when we do not have stray light into the > CCD, but I'm not addressing those cases here.
>>> I'm sorry I think I did not understand this paragraph. >> >> All I was trying to say was that section of my result image had a >> "star" pattern that reminded me of the constellation Ursa Minor.
> Wow, you recognized the Ursa Minor? For me it's hard to say.
Well... let's say that I thought I saw, within the results of my image mangling, a pattern that _reminded_ me of Ursa Minor. Where _was_ the CCD aimed when that particular image was captured? <grin!> But I'll send you an "enhanced" image so you know what I think I'm seeing. I'll try to get something to you today (barring a SPAM trap or something at your end <grin>). Frank -- Perhaps the most valuable result of all education is the ability to make yourself do the thing you have to do, when it ought to be done, whether you like it or not; it is the first lesson that ought to be learned; and however early a man's training begins, it is probably the last lesson that he learns thoroughly. -- Thomas H. Huxley -- Frank McKenney, McKenney Associates Richmond, Virginia / (804) 320-4887 Munged E-mail: frank uscore mckenney aatt mindspring ddoott com
On Friday, May 4, 2012 12:01:54 PM UTC-4, Frnak McKenney wrote:
[...]
Chopping off the bottom end (thresholding) of an unnormalized image can work for astrometry if you accept the loss of fainter stars. Certainly photometry is out the window without a lot of work using brighter known stars as references. But it sounds like the OP is only interested in astrometry (knowing the positions of the stars). Clay
Unhampered by prior training or experience, I will offer an opinion:
1 - Think pinhole camera. If you build your image over a long period
of time (minutes, hours, a day?) you will lose everything but the
static environment. Applying (subtracting) that result to a given
single image may leave just the image data you are seeking. A day-long
time exposure with a pinhole camera will result in an image of a busy
street with all the traffic removed.

2 - I don't have a clue as to how this thing really works, but in the
image you have displayed, whatever happened to a vertical column looks
consistent throughout the column. The column projection shows evidence
of being clipped to a uniform maximum except where the AGC was totally
overwhelmed. Correlating that with the image leads me to believe that
the signal in question is amplitude modulated and overdriven. I believe
it would be worth the effort to discard each sample that comes close to
that clipping level. Further, a signal that is driven into AGC will
require some time to recover.
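In case it is useful, the "discard anything near clipping" step could
be as simple as this sketch (assuming a 12-bit ADC full scale of 4095;
the margin is an arbitrary placeholder):

import numpy as np

def mask_near_clipping(frame, full_scale=4095, margin=0.95):
    """Return a boolean mask of samples to keep: everything that is not
    within 'margin' of the ADC full scale (and so possibly distorted)."""
    return frame < margin * full_scale

# keep = mask_near_clipping(frame)
# valid_pixels = frame[keep]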

Bear in mind that I have no concept of how much effort these steps
might take...

I am an obsolete IBM Field Engineer that has been put out to
pasture.... 

On Thu, 03 May 2012 08:40:01 +0200, alb <alessandro.basili@cern.ch>
wrote:


> >On the contrary, when we do have stray light in the ccd, the background >is non uniformly distributed. In this case the background should be >extracted somehow in order not to mess up with the position (the >centroid will certainly be wrong if the background is non uniform, but I >still don't know how much). > >> >> Clay >> >>
John Ferrell W8CCW