Reply by glen herrmannsfeldt January 31, 2012
alb <alessandro.basili@cern.ch> wrote:

(snip, I wrote)
>> You might look up references to the ICT (Integer Cosine Transform),
>> specifically designed for computation-limited (and I believe memory-
>> limited) systems. Specifically, for image compression on the CDP1802
>> processor, which has no multiply instruction. ICT is designed to
>> optimize the shifts and adds needed by minimizing the 1 bits in
>> the coefficients.
> thanks a lot for the hint, I will try to see what the result would be
> on the raw images already taken from the device (at a rate of one per
> 30 minutes!).
> At first glance it looks like the ICT will be quite helpful to compress
> the data, but I haven't yet figured out how it can be used to filter
> out the noise; I guess I would need a filter before the transform is
> performed.
I don't know either. It was the fact that you seem to be memory limited that reminded me of it. The ICT was designed as a retrofit for Galileo while it was already in space. If this had been known in advance, a more powerful processor might have been used.
> Actually, what would be nice is to see whether "stars" appear different
> from random noise in the frequency domain; at that point I can set a
> threshold in the frequency domain and apply a filter there.
-- glen
Reply by alb January 31, 2012
Clay writes:

> On Jan 28, 11:46 pm, alb <alessandro.bas...@cern.ch> wrote:
[...]
>> Given the fact that I'm far from being an expert in image processing
>> or astrometry, I would be more than happy to listen to suggestions
>> and/or references on this topic.
>>
>> Al
>>
>> [1] No stars recognition is required onboard.
>
> Check out "PHD Guiding" by Stark Labs. They have made some of their
> code open source for autoguiding your telescope.
thanks for the reference, it is indeed a neat tool, but not really what I was looking for. It seems they need the user to identify the object in order to guide the camera towards it; in my case it is the software that has to identify the objects.
> Typically when one uses an autoguider, you shoot a dark frame for
> subsequent subtraction from all future frames. You identify a guide
> star (hopefully reasonably bright but not so bright as to saturate the
> CCD; you want the central image to appear peaked like a bell curve)
> and then you put a box around the star and only use the reduced-size
> field data for tracking.
>
> Are you just doing this for fun? There are quite a few CCD cameras now
> available for use as autoguiders and Stark Labs' software is free for
> anyone to use. With a good low-noise camera, typical tracking may be
> had to under 1/10 of a pixel in the autoguider camera. Your mount, on
> the other hand, may be an issue.
It is actually for a Star Tracker which is installed on the ISS. I find it fun though!
> Clay
Reply by alb January 31, 2012
Vladimir Vassilevsky writes:

> alb wrote:
>
>>> A lot depends on how noisy the CCD is, how blurry the image is and
>>> how much stray light gets into the picture. In the simple case, an
>>> algorithm like you suggested would be sufficient. In more complex
>>> cases, you would probably have to do edge enhancement and stray
>>> light artifacts removal before processing.
>>
>> At the moment we are only getting raw images, just to have a feeling
>> of what they look like, and to show them on a screen what I do is the
>> following:
>>
>> 1. get the mean value and sigma of the whole image;
>
> Make a histogram of values; set the thresholds accordingly.
this work has already been done, but when there is stray light coming in it is not applicable anymore.
>> 2. map all the values < (mean - 3*sigma) to 0x000
>> 3. map all the values > (mean + 3*sigma) to 0xFFF
>
> The statistics of images is usually very different from Gaussian;
> therefore the N x sigma rule doesn't apply.
Plotting the values of each pixel, it is clear that there is a "pedestal" with a Gaussian-like distribution, even when stray light is coming in. I think this is the intrinsic CCD noise from the dark current. The rest of the image is definitely nothing like a Gaussian, but I think the N x sigma rule still applies to separate the signal from the noise. Am I wrong?
>> 4. map all pixels in between linearly between 0x000 and 0xFFF
>>
>> As a result you have a contrast amplification which is similar (I
>> guess) to the edge enhancement you proposed.
>
> Google "Sobel filter"
hey thanks for the hint! I must admit there's a lot about image processing that I have no clue about. I will certainly need to study a lot.
>> I am not sure I understood what you had in mind with "stray light
>> artifacts removal".
>
> I consult for money; if your project is more than idle curiosity, my
> contact is at the web site in the signature.
I appreciate your offer and inputs, and I will certainly consider the possibility if the need arises in the future. FYI, I believe most of what we discuss here is for my personal education, but my personal education is certainly very useful to the people I work for.
> Vladimir Vassilevsky
> DSP and Mixed Signal Design Consultant
> http://www.abvolt.com
Reply by alb January 31, 2012
glen herrmannsfeldt writes:

> alb <alessandro.basili@cern.ch> wrote:
>
> (snip, I wrote)
>>> So there is a stream of similar images? In that case, you might want
>>> to do some motion estimation like MPEGs do. Figure out how the image
>>> shifted from the previous frame, encode the shift and difference
>>> between the two after the shift.
>
>> Lots of objects do not move from frame to frame and I think that is
>> something related to the non-uniformity of the CCD. I think I can get
>> rid of them simply by subtracting two consecutive frames
>> (unfortunately the available memory is not enough to store two
>> complete pictures onboard, therefore I need to do this process over a
>> subset of the frame and "calibrate" it in steps).
>
> You might look up references to the ICT (Integer Cosine Transform),
> specifically designed for computation-limited (and I believe memory-
> limited) systems. Specifically, for image compression on the CDP1802
> processor, which has no multiply instruction. ICT is designed to
> optimize the shifts and adds needed by minimizing the 1 bits in
> the coefficients.
thanks a lot for the hint, I will try to see what the result would be on the raw images already taken from the device (at a rate of one per 30 minutes!). At first glance it looks like the ICT will be quite helpful to compress the data, but I haven't yet figured out how it can be used to filter out the noise; I guess I would need a filter before the transform is performed. Actually, what would be nice is to see whether "stars" appear different from random noise in the frequency domain; at that point I can set a threshold in the frequency domain and apply a filter there.
> -- glen
Reply by glen herrmannsfeldt January 30, 2012
alb <alessandro.basili@cern.ch> wrote:

(snip, I wrote)
>> So there is a stream of similar images? In that case, you might want
>> to do some motion estimation like MPEGs do. Figure out how the image
>> shifted from the previous frame, encode the shift and difference
>> between the two after the shift.
> Lots of objects do not move from frame to frame and I think that is
> something related to the non-uniformity of the CCD. I think I can get
> rid of them simply by subtracting two consecutive frames
> (unfortunately the available memory is not enough to store two
> complete pictures onboard, therefore I need to do this process over a
> subset of the frame and "calibrate" it in steps).
You might look up references to the ICT (Integer Cosine Transform), specifically designed for computation-limited (and I believe memory-limited) systems. Specifically, for image compression on the CDP1802 processor, which has no multiply instruction. ICT is designed to optimize the shifts and adds needed by minimizing the 1 bits in the coefficients.

-- glen
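[The point about minimizing 1 bits can be illustrated with a small sketch. This is not the ICT itself, just the shift-and-add multiply it is designed around: on a CPU with no multiply instruction, multiplying by a coefficient costs one shifted add per set bit, so transform coefficients with few 1 bits are cheap.]

```python
def shift_add_mul(x, coeff):
    """Multiply x by a non-negative integer constant using only
    shifts and adds, one add per 1 bit in the coefficient (as on
    a CPU with no multiply instruction, e.g. the CDP1802)."""
    acc = 0
    shift = 0
    while coeff:
        if coeff & 1:          # this bit of the coefficient is set:
            acc += x << shift  # add x shifted into that bit position
        coeff >>= 1
        shift += 1
    return acc
```

For example, multiplying by 5 (binary 101) costs two adds, while multiplying by 7 (binary 111) costs three; an ICT picks coefficients near the DCT's that keep this count low.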
Reply by Vladimir Vassilevsky January 30, 2012

alb wrote:


>> A lot depends on how noisy the CCD is, how blurry the image is and
>> how much stray light gets into the picture. In the simple case, an
>> algorithm like you suggested would be sufficient. In more complex
>> cases, you would probably have to do edge enhancement and stray light
>> artifacts removal before processing.
>
> At the moment we are only getting raw images, just to have a feeling
> of what they look like, and to show them on a screen what I do is the
> following:
>
> 1. get the mean value and sigma of the whole image;
Make a histogram of values; set the thresholds accordingly.
> 2. map all the values < (mean - 3*sigma) to 0x000
> 3. map all the values > (mean + 3*sigma) to 0xFFF
The statistics of images is usually very different from Gaussian; therefore N x sigma rule doesn't apply.
> 4. map all pixels in between linearly between 0x000 and 0xFFF
>
> As a result you have a contrast amplification which is similar (I
> guess) to the edge enhancement you proposed.
Google "Sobel filter"
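[For reference, the standard 3x3 Sobel operator can be sketched in a few lines of plain numpy. This is a generic illustration, not code from the thread; real code would use a library convolution instead of the explicit loops.]

```python
import numpy as np

def sobel_magnitude(img):
    """Approximate gradient magnitude with the 3x3 Sobel kernels.
    A plain-numpy sketch (no SciPy); the 1-pixel border is left at 0."""
    kx = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]], dtype=float)   # horizontal gradient
    ky = kx.T                                  # vertical gradient
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            patch = img[i - 1:i + 2, j - 1:j + 2]
            gx[i, j] = np.sum(patch * kx)
            gy[i, j] = np.sum(patch * ky)
    return np.hypot(gx, gy)                    # sqrt(gx^2 + gy^2)
```

On a star field this highlights the rims of point sources; whether it helps here depends on how blobby the PSF is.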
> I am not sure I understood what you had in mind with "stray light
> artifacts removal".
I consult for money; if your project is more than idle curiosity, my contact is at the web site in the signature.

Vladimir Vassilevsky
DSP and Mixed Signal Design Consultant
http://www.abvolt.com
Reply by Clay January 30, 2012
On Jan 28, 11:46 pm, alb <alessandro.bas...@cern.ch> wrote:
> Dear all,
>
> I am working on the software of a Star Tracker device which should be
> capable of compressing the images for transmission [1]. The CCD is
> 512x512 and each pixel is digitized with a 12-bit ADC; the image
> frequency is ~10 Hz. Since the available bandwidth is rather limited
> (~1.5 Kb/s), I need to compress each image down to a few bytes to be
> able to cope with the input rate.
>
> Given that the normal picture is a field of randomly scattered dots,
> most of which are just noise, I should be able to find the brightest
> objects and transmit their coordinates in fractions of a pixel.
>
> Assuming I get rid of the noise with some calibration process, which
> may involve a digital low-pass filter, the image will look like a
> sparse set of dot-like objects with a 2D Gaussian-like shape. The
> simplest algorithm I can come up with is to look for a pixel over a
> value (threshold1) and look for pixels around it above another value
> (threshold2 < threshold1), performing a sort of "clusterization". Once
> each cluster is found I can sort them by magnitude and find the mean
> (or median) to extract the coordinates, and maybe send only the first
> N of them.
>
> Given the fact that I'm far from being an expert in image processing
> or astrometry, I would be more than happy to listen to suggestions
> and/or references on this topic.
>
> Al
>
> [1] No stars recognition is required onboard.
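[The two-threshold "clusterization" described in the post can be prototyped in a few lines. This is a sketch with made-up thresholds, not the flight code: seed a cluster at any pixel >= threshold1, grow it over 4-connected neighbours >= threshold2, then report weighted centroids sorted by total intensity.]

```python
import numpy as np

def find_clusters(img, t1, t2):
    """Two-threshold clustering (t2 < t1). Returns a list of
    (total_intensity, centroid_row, centroid_col), brightest first."""
    visited = np.zeros(img.shape, dtype=bool)
    clusters = []
    for seed in zip(*np.where(img >= t1)):     # bright seed pixels
        if visited[seed]:
            continue
        stack, members = [seed], []
        visited[seed] = True
        while stack:                           # flood fill at level t2
            r, c = stack.pop()
            members.append((r, c))
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if (0 <= nr < img.shape[0] and 0 <= nc < img.shape[1]
                        and not visited[nr, nc] and img[nr, nc] >= t2):
                    visited[nr, nc] = True
                    stack.append((nr, nc))
        w = np.array([img[m] for m in members], dtype=float)
        rows = np.array([m[0] for m in members], dtype=float)
        cols = np.array([m[1] for m in members], dtype=float)
        total = w.sum()
        clusters.append((total,
                         (w * rows).sum() / total,   # weighted mean row
                         (w * cols).sum() / total))  # weighted mean col
    clusters.sort(reverse=True)                # brightest first
    return clusters
```

Sending only the first N tuples (quantized to fractions of a pixel) is what brings a frame down to a few tens of bytes.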
Check out "PHD Guiding" by Stark Labs. They have made some of their code open source for autoguiding your telescope.

Typically when one uses an autoguider, you shoot a dark frame for subsequent subtraction from all future frames. You identify a guide star (hopefully reasonably bright but not so bright as to saturate the CCD; you want the central image to appear peaked like a bell curve) and then you put a box around the star and only use the reduced-size field data for tracking.

Are you just doing this for fun? There are quite a few CCD cameras now available for use as autoguiders and Stark Labs' software is free for anyone to use. With a good low-noise camera, typical tracking may be had to under 1/10 of a pixel in the autoguider camera. Your mount, on the other hand, may be an issue.

Clay
Reply by alb January 30, 2012
Vladimir Vassilevsky writes:

> alb wrote:
>
>> Dear all,
>>
>> I am working on the software of a Star Tracker device which should be
>> capable of compressing the images for transmission [1]. The CCD is
>> 512x512 and each pixel is digitized with a 12-bit ADC; the image
>> frequency is ~10 Hz. Since the available bandwidth is rather limited
>> (~1.5 Kb/s), I need to compress each image down to a few bytes to be
>> able to cope with the input rate.
>>
>> Given that the normal picture is a field of randomly scattered dots,
>> most of which are just noise, I should be able to find the brightest
>> objects and transmit their coordinates in fractions of a pixel.
>>
>> Assuming I get rid of the noise with some calibration process, which
>> may involve a digital low-pass filter, the image will look like a
>> sparse set of dot-like objects with a 2D Gaussian-like shape. The
>> simplest algorithm I can come up with is to look for a pixel over a
>> value (threshold1) and look for pixels around it above another value
>> (threshold2 < threshold1), performing a sort of "clusterization".
>> Once each cluster is found I can sort them by magnitude and find the
>> mean (or median) to extract the coordinates, and maybe send only the
>> first N of them.
>>
>> Given the fact that I'm far from being an expert in image processing
>> or astrometry, I would be more than happy to listen to suggestions
>> and/or references on this topic.
>
> A lot depends on how noisy the CCD is, how blurry the image is and how
> much stray light gets into the picture. In the simple case, an
> algorithm like you suggested would be sufficient. In more complex
> cases, you would probably have to do edge enhancement and stray light
> artifacts removal before processing.
At the moment we are only getting raw images, just to have a feeling of what they look like. To show them on a screen, what I do is the following:

1. get the mean value and sigma of the whole image;
2. map all the values < (mean - 3*sigma) to 0x000;
3. map all the values > (mean + 3*sigma) to 0xFFF;
4. map all pixels in between linearly between 0x000 and 0xFFF.

As a result you have a contrast amplification which is similar (I guess) to the edge enhancement you proposed. I am not sure I understood what you had in mind with "stray light artifacts removal".
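[Steps 1-4 above amount to a 3-sigma linear contrast stretch, which can be sketched directly; the 0xFFF ceiling matches the 12-bit ADC range.]

```python
import numpy as np

def stretch_3sigma(img, out_max=0xFFF):
    """Linear contrast stretch as in steps 1-4: clip the image at
    mean +/- 3*sigma, then map that interval linearly onto
    [0, out_max] (0xFFF for a 12-bit display range)."""
    mean, sigma = img.mean(), img.std()
    lo, hi = mean - 3 * sigma, mean + 3 * sigma
    clipped = np.clip(img.astype(float), lo, hi)
    return ((clipped - lo) / (hi - lo) * out_max).astype(np.uint16)
```

As noted in the reply below, this assumes the pixel statistics are roughly Gaussian; with a skewed histogram the percentile cut points would be chosen from the histogram instead.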
> Vladimir Vassilevsky
> DSP and Mixed Signal Design Consultant
> http://www.abvolt.com
Reply by alb January 30, 2012
Jerry Avins writes:
> On 1/28/2012 11:46 PM, alb wrote:
[...]
> Do you need information about a star other than its location?
I need the brightness, otherwise it will be difficult to sort them and send only the N brightest ones.
> Can you establish a size threshold and ignore anything smaller?
not really; sometimes there is stray light coming from diffraction and/or reflection in the baffle, which does not allow setting a fixed threshold.
> Can you establish a brightness threshold and ignore anything dimmer?
Every part of the sky is different, therefore I would need a list of thresholds corresponding to the portion of the sky I am in. I think this would require a much more complex algorithm.
> If the answers are no, yes, yes, then you need only send one coordinate > per star. Even if each possible coordinate is assigned to one bit, the > available bandwidth would be exceeded, but run-length encoding should > work well.
It will not be viable to send one coordinate per pixel; if you take a look at the raw image you'll immediately see why: the number of lit pixels is incredibly large compared to the number of stars.
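[Jerry's one-bit-per-coordinate idea with run-length encoding can be sketched as follows. This is an illustration only; actual telemetry would also need variable-length coding of the run lengths, and as noted above its payoff depends on how sparse the lit pixels really are.]

```python
import numpy as np

def rle_bitmap(mask):
    """Run-length encode a binary star mask, scanned row-major:
    emit alternating run lengths, starting with a run of zeros
    (which may have length 0 if the first pixel is lit)."""
    flat = np.asarray(mask, dtype=np.uint8).ravel()
    runs = []
    current, count = 0, 0
    for bit in flat:
        if bit == current:
            count += 1
        else:
            runs.append(count)         # close the current run
            current, count = bit, 1    # start the opposite run
    runs.append(count)
    return runs
```

For a truly sparse mask the zero runs dominate and the encoding is tiny; with many lit pixels (the situation described in the reply) the run list can exceed the raw bitmap, which is the objection being made.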
Reply by alb January 30, 2012
glen herrmannsfeldt writes:

> alb <alessandro.basili@cern.ch> wrote:
>
>> I am working on the software of a Star Tracker device which should be
>> capable of compressing the images for transmission [1]. The CCD is
>> 512x512 and each pixel is digitized with a 12-bit ADC; the image
>> frequency is ~10 Hz. Since the available bandwidth is rather limited
>> (~1.5 Kb/s), I need to compress each image down to a few bytes to be
>> able to cope with the input rate.
>
> So there is a stream of similar images? In that case, you might want
> to do some motion estimation like MPEGs do. Figure out how the image
> shifted from the previous frame, encode the shift and difference
> between the two after the shift.
Lots of objects do not move from frame to frame, and I think that is related to the non-uniformity of the CCD. I think I can get rid of them simply by subtracting two consecutive frames (unfortunately the available memory is not enough to store two complete pictures onboard, therefore I need to do this process over a subset of the frame and "calibrate" it in steps). How can I figure out how the picture shifted? Certainly the field of view (FOV) is big enough that consecutive frames at the nominal rate (10 Hz) are only slightly shifted.
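[The strip-by-strip subtraction described above can be sketched like this. `read_strip` is a hypothetical readout hook standing in for whatever the hardware provides; the point is that only one strip of the previous frame is ever buffered.]

```python
import numpy as np

def diff_in_strips(read_strip, n_strips, strip_rows, width):
    """Subtract two consecutive frames one horizontal strip at a
    time, so the full previous frame never needs to be in memory.
    read_strip(frame_idx, strip_idx) is assumed to return one
    (strip_rows x width) strip of the given frame."""
    diff = np.empty((n_strips * strip_rows, width), dtype=np.int32)
    for s in range(n_strips):
        prev = read_strip(0, s).astype(np.int32)  # strip of frame N-1
        curr = read_strip(1, s).astype(np.int32)  # same strip, frame N
        diff[s * strip_rows:(s + 1) * strip_rows] = curr - prev
    return diff
```

Static fixed-pattern structure (hot pixels, CCD non-uniformity) cancels in the difference; anything that moved between frames survives.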
>> Given that the normal picture is a field of randomly scattered dots,
>> most of which are just noise, I should be able to find the brightest
>> objects and transmit their coordinates in fractions of a pixel.
>
>> Assuming I get rid of the noise with some calibration process, which
>> may involve a digital low-pass filter, the image will look like a
>> sparse set of dot-like objects with a 2D Gaussian-like shape. The
>> simplest algorithm I can come up with is to look for a pixel over a
>> value (threshold1) and look for pixels around it above another value
>> (threshold2 < threshold1), performing a sort of "clusterization".
>> Once each cluster is found I can sort them by magnitude and find the
>> mean (or median) to extract the coordinates, and maybe send only the
>> first N of them.
>
> Integrated intensity over the spot, mean position, and width.
>
> The width will change from the filter, so you might want to correct
> for that. Or are they really points? (No clusters, galaxies, etc.)
They are clusters indeed, but I'm not 100% sure that the width is important. The way the reconstruction software works on the receiver side (a.k.a. offline) is by looking at the angles between one object and its two closest objects, which act as a unique identifier for a star. Certainly the width and sigma of each point will give an idea of how accurate the coordinate measurement is.
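[The nearest-neighbour angle mentioned above can be sketched like this. It is an illustration of the kind of geometric signature described, not the actual offline matcher: for each star, measure the angle at that star between its two nearest neighbours, which is invariant under image rotation and translation.]

```python
import numpy as np

def triangle_signature(stars, i):
    """Angle (at star i) between the directions to its two nearest
    neighbours. `stars` is an (N, 2) array of centroid coordinates;
    returns the angle in radians."""
    stars = np.asarray(stars, dtype=float)
    d = np.linalg.norm(stars - stars[i], axis=1)
    d[i] = np.inf                        # exclude the star itself
    a, b = np.argsort(d)[:2]             # two nearest neighbours
    va, vb = stars[a] - stars[i], stars[b] - stars[i]
    cosang = np.dot(va, vb) / (np.linalg.norm(va) * np.linalg.norm(vb))
    return float(np.arccos(np.clip(cosang, -1.0, 1.0)))
```

A catalogue lookup then matches these angles (within a measurement tolerance) against precomputed values for known stars.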
> If points, just total intensity and position.
>
>> Given the fact that I'm far from being an expert in image processing
>> or astrometry, I would be more than happy to listen to suggestions
>> and/or references on this topic.
>
> Also not being an expert, it might be that some of the "noise" is
> actually important.
If "noise" is defined as all the pixels that are lit on two consecutive frames, certainly I don't expect it to be so important. But as you mentioned earlier, the filter may suppress some important information; certainly it should be tuned such that available information is enough to extract pointing. Here there's another factor that is involved, given the frequency of the measurement, it is possible to "loose" part of the information and interpolate between two consecutive measurements offline, the overall pointing accuracy may be degraded, but given the inertial trajectory of the Star Tracker (LEO) I don't expect it to be dramatic.
> -- glen