Dear all,

I am working on the software for a Star Tracker device, which should be capable of compressing images for transmission [1]. The CCD is 512x512, each pixel is digitized with a 12-bit ADC, and the image rate is ~10 Hz. Since the available bandwidth is rather limited (~1.5 Kb/s), I need to compress each image down to a few bytes to be able to cope with the input rate.

Since the normal picture is a field of randomly scattered dots, most of which are just noise, I should be able to find the brightest objects and transmit their coordinates in fractions of a pixel.

Assuming I get rid of the noise with some calibration process, which may involve a digital low-pass filter, the image will look like a sparse set of dot-like objects with a 2D Gaussian-like shape. The simplest algorithm I can come up with is to look for a pixel above one value (threshold1) and then for pixels around it above another value (threshold2 < threshold1), performing a sort of "clusterization". Once each cluster is found, I can sort the clusters by magnitude, find the mean (or median) position to extract the coordinates, and maybe send only the first N of them.

Given that I am far from being an expert in image processing or astrometry, I would be more than happy to hear suggestions and/or references on this topic.

Al

[1] No star recognition is required onboard.
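[Editor's note: a minimal sketch of the two-threshold clusterization described above. Function names, thresholds, and the 8-connected flood fill are illustrative assumptions, not part of any flight code.]

```python
import numpy as np

def find_clusters(img, thr1, thr2):
    """Seed on pixels above thr1, grow each cluster over 8-connected
    neighbours above thr2 (thr2 < thr1); return (flux, y, x) per cluster,
    brightest first, with intensity-weighted sub-pixel centroids."""
    assert thr2 < thr1
    visited = np.zeros(img.shape, dtype=bool)
    clusters = []
    for y, x in np.argwhere(img > thr1):
        if visited[y, x]:
            continue
        stack, members = [(y, x)], []
        visited[y, x] = True
        while stack:
            cy, cx = stack.pop()
            members.append((cy, cx))
            for ny in range(max(cy - 1, 0), min(cy + 2, img.shape[0])):
                for nx in range(max(cx - 1, 0), min(cx + 2, img.shape[1])):
                    if not visited[ny, nx] and img[ny, nx] > thr2:
                        visited[ny, nx] = True
                        stack.append((ny, nx))
        flux = sum(img[p] for p in members)
        yc = sum(p[0] * img[p] for p in members) / flux
        xc = sum(p[1] * img[p] for p in members) / flux
        clusters.append((flux, yc, xc))
    clusters.sort(reverse=True)          # brightest first
    return clusters

img = np.zeros((16, 16))
img[4:7, 4:7] = [[1, 2, 1], [2, 9, 2], [1, 2, 1]]   # one fake star
print(find_clusters(img, thr1=5, thr2=0.5))          # one cluster at (5.0, 5.0)
```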
algorithms for star field compression
Started by ●January 29, 2012
Reply by ●January 29, 2012
alb <alessandro.basili@cern.ch> wrote:
> I am working on the software of a Star Tracker device which should be
> capable to compress the images for transmission [1]. The CCD is 512x512 and
> each pixel is digitized with a 12bit ADC and the image frequency is ~10Hz.
> Since the bandwidth available is rather limited (~1.5 Kb/s) I need to
> compress the image to a level of few bytes per image to be able to cope with
> the input rate.

So there is a stream of similar images? In that case, you might want to do some motion estimation like MPEGs do: figure out how the image shifted from the previous frame, then encode the shift and the difference between the two frames after the shift.

> Given the normal picture as a field of randomly sparsed dots, most of which
> are just noise, I should be able to find the brightest objects and transmit
> their coordinates in fraction of pixel units.
>
> (snip)

Integrated intensity over the spot, mean position, and width. The width will change from the filter, so you might want to correct for that. Or are they really points? (No clusters, galaxies, etc.) If points, just total intensity and position.

> Given the fact that I'm far from being an expert in image processing or
> astronometry, I would be more than happy to listen to suggestions and/or
> references on this topic.

Also not being an expert, I'd note that some of the "noise" may actually be important.

-- glen
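[Editor's note: a crude sketch of the frame-to-frame shift estimation suggested here, using a brute-force integer search that minimizes the mean absolute difference on the overlapping region. The search window and names are illustrative; real MPEG-style motion search works on blocks and is far more elaborate.]

```python
import numpy as np

def estimate_shift(prev, curr, max_shift=4):
    """Find the integer (dy, dx) such that curr[y+dy, x+dx] ~ prev[y, x],
    by exhaustive search over small shifts (sum-of-absolute-differences)."""
    best, best_err = (0, 0), np.inf
    h, w = prev.shape
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            a = prev[max(-dy, 0):h + min(-dy, 0), max(-dx, 0):w + min(-dx, 0)]
            b = curr[max(dy, 0):h + min(dy, 0), max(dx, 0):w + min(dx, 0)]
            err = np.abs(a - b).mean()
            if err < best_err:
                best, best_err = (dy, dx), err
    return best

prev = np.zeros((32, 32)); prev[10, 10] = 100.0
curr = np.roll(np.roll(prev, 2, axis=0), 3, axis=1)   # scene moved down 2, right 3
print(estimate_shift(prev, curr))                     # (2, 3)
```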
Reply by ●January 29, 2012
On 1/28/2012 11:46 PM, alb wrote:
> I am working on the software of a Star Tracker device which should be
> capable to compress the images for transmission [1].
>
> (snip)

Do you need information about a star other than its location? Can you establish a size threshold and ignore anything smaller? Can you establish a brightness threshold and ignore anything dimmer?

If the answers are no, yes, yes, then you need only send one coordinate per star. Even if each possible coordinate were assigned one bit, the available bandwidth would be exceeded, but run-length encoding should work well.

Jerry
--
Engineering is the art of making what you want from things you can get.
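[Editor's note: a toy sketch of the run-length-encoding idea. A raw 1-bit 512x512 map is 32 KB per frame, well beyond the stated link budget, but a sparse star map is mostly long runs of zeros, which RLE captures cheaply. Names and encoding format are illustrative.]

```python
def rle_encode(bits):
    """Run-length encode a flat sequence of 0/1 pixels as
    (value of first run, list of run lengths)."""
    runs, current, count = [], bits[0], 0
    for b in bits:
        if b == current:
            count += 1
        else:
            runs.append(count)
            current, count = b, 1
    runs.append(count)
    return bits[0], runs

# toy 1-bit "star map": three lit pixels out of 20
bitmap = [0]*8 + [1] + [0]*6 + [1, 1] + [0]*3
print(rle_encode(bitmap))   # (0, [8, 1, 6, 2, 3])
```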
Reply by ●January 29, 2012
alb wrote:
> I am working on the software of a Star Tracker device which should be
> capable to compress the images for transmission [1].
>
> (snip)

A lot depends on how noisy the CCD is, how blurry the image is, and how much stray light gets into the picture. In the simple case, an algorithm like the one you suggested would be sufficient. In more complex cases, you would probably have to do edge enhancement and stray-light artifact removal before processing.

Vladimir Vassilevsky
DSP and Mixed Signal Design Consultant
http://www.abvolt.com
Reply by ●January 30, 2012
glen herrmannsfeldt writes:
> alb <alessandro.basili@cern.ch> wrote:
>> (snip)
>
> So there is a stream of similar images? In that case, you might want
> to do some motion estimation like MPEG's do. Figure out how the image
> shifted from the previous frame, encode the shift and difference between
> the two after the shift.

Lots of objects do not move from frame to frame, and I think that is related to the non-uniformity of the CCD. I think I can get rid of them simply by subtracting two consecutive frames (unfortunately the available memory is not enough to store two complete pictures onboard, therefore I need to do this process over a subset of the frame and "calibrate" it in steps).

How can I figure out how the picture shifted? Certainly the field of view (FOV) is big enough that the frame is only slightly shifted at the nominal rate (10 Hz).

> Integrated intensity over the spot, mean position, and width.
>
> The width will change from the filter, so you might want to correct
> for that. Or are they really points? (No clusters, galaxies, etc.)

They are clusters indeed, but I'm not 100% sure that the width is important. The way the reconstruction software on the receiver side (a.k.a. offline) works is by looking at the angles between one object and the two closest objects, which form a unique identifier for a star. Certainly the width and sigma of each point will give an idea of how accurate the coordinate measurement is.

> Also not being an expert, it might be that some of the "noise" is
> actual important.

If "noise" is defined as all the pixels that are lit in two consecutive frames, I certainly don't expect it to be so important. But as you mentioned earlier, the filter may suppress some important information; certainly it should be tuned such that the available information is enough to extract the pointing. There is another factor involved: given the frequency of the measurement, it is possible to "lose" part of the information and interpolate between two consecutive measurements offline. The overall pointing accuracy may be degraded, but given the inertial trajectory of the Star Tracker (LEO) I don't expect that to be dramatic.
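[Editor's note: a sketch of the "calibrate in steps" idea, differencing two consecutive frames strip by strip so only one strip of each frame is ever in memory. The strip size and the callback-based strip readers are illustrative assumptions about the memory-limited readout.]

```python
import numpy as np

STRIP_ROWS = 64   # illustrative: process the 512-row frame in 64-row strips

def difference_in_strips(read_strip_prev, read_strip_curr, n_rows=512):
    """Subtract two consecutive frames strip by strip; fixed-pattern
    ("non-moving") pixels cancel in the difference, moving stars survive."""
    diffs = []
    for top in range(0, n_rows, STRIP_ROWS):
        prev = read_strip_prev(top, STRIP_ROWS)
        curr = read_strip_curr(top, STRIP_ROWS)
        diffs.append(curr - prev)
    return np.vstack(diffs)

# toy demo: a fixed hot pixel cancels, a moving star survives
frame0 = np.zeros((512, 512)); frame0[100, 100] = 50; frame0[200, 200] = 80
frame1 = np.zeros((512, 512)); frame1[100, 100] = 50; frame1[203, 201] = 80
diff = difference_in_strips(lambda t, n: frame0[t:t+n],
                            lambda t, n: frame1[t:t+n])
print(diff[100, 100], diff[203, 201])   # 0.0 80.0
```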
Reply by ●January 30, 2012
Jerry Avins writes:
> On 1/28/2012 11:46 PM, alb wrote:
> (snip)
>
> Do you need information about a star other than its location?

I need the brightness; otherwise it will be difficult to sort the stars and send only the N brightest ones.

> Can you establish a size threshold and ignore anything smaller?

Not really. Sometimes there is an amount of stray light coming from diffraction at the baffle and/or reflection in the baffle, which does not allow setting a fixed threshold.

> Can you establish a brightness threshold and ignore anything dimmer?

Every part of the sky is different, therefore I should have a list of thresholds corresponding to the portion I am in. I think this would require a much more complex algorithm.

> If the answers are no, yes, yes, then you need only send one coordinate
> per star. Even if each possible coordinate is assigned to one bit, the
> available bandwidth would be exceeded, but run-length encoding should
> work well.

It will not be viable to send one coordinate per pixel; if you take a look at the raw image you'll immediately see why. The number of lit pixels is incredibly huge compared to the number of stars.
Reply by ●January 30, 2012
Vladimir Vassilevsky writes:
> alb wrote:
>> (snip)
>
> A lot depends on how noisy is CCD, how blurry is the image and how much
> of stray light gets into the picture. In the simple case, an algorithm
> like you suggested would be sufficient. In more complex cases, you would
> probably have to do edge enhancement and stray light artifacts removal
> before processing.

At the moment we are only getting raw images, just to get a feeling for how they look, and to show them on a screen what I do is the following:

1. get the mean value and sigma of the whole image;
2. map all values < (mean - 3*sigma) to 0x000;
3. map all values > (mean + 3*sigma) to 0xFFF;
4. map all pixels in between linearly between 0x000 and 0xFFF.

As a result you get a contrast amplification which is similar (I guess) to the edge enhancement you proposed. I am not sure I understood what you had in mind with "stray light artifacts removal".
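[Editor's note: the four display steps above can be sketched as a linear 3-sigma contrast stretch; this is for visualization only, not compression, and the helper name is illustrative.]

```python
import numpy as np

def stretch_3sigma(img):
    """Linear contrast stretch of a 12-bit image: clip at mean +/- 3*sigma,
    then map the remaining range linearly onto 0x000..0xFFF."""
    mean, sigma = img.mean(), img.std()
    lo, hi = mean - 3 * sigma, mean + 3 * sigma
    out = np.clip(img.astype(float), lo, hi)
    return ((out - lo) / (hi - lo) * 0xFFF).astype(np.uint16)

# toy frame: Gaussian noise floor; tails beyond 3 sigma saturate to 0/0xFFF
img = np.random.default_rng(0).normal(200, 10, (512, 512))
disp = stretch_3sigma(img)
print(disp.min(), disp.max())   # 0 4095
```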
Reply by ●January 30, 2012
On Jan 28, 11:46 pm, alb <alessandro.bas...@cern.ch> wrote:
> I am working on the software of a Star Tracker device which should be
> capable to compress the images for transmission [1].
>
> (snip)

Check out "PHD Guiding" by Stark Labs. They have made some of their code open source for autoguiding your telescope. Typically, when one uses an autoguider, you shoot a dark frame for subsequent subtraction from all future frames. You identify a guide star (hopefully reasonably bright, but not so bright as to saturate the CCD; you want the central image to appear peaked like a bell curve), and then you put a box around the star and use only the reduced-size field data for tracking.

Are you just doing this for fun? There are quite a few CCD cameras now available for use as autoguiders, and Stark Labs' software is free for anyone to use. With a good low-noise camera, typical tracking may be had to under 1/10 of a pixel in the autoguider camera. Your mount, on the other hand, may be an issue.

Clay
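[Editor's note: a sketch of the autoguider recipe described above: subtract a stored dark frame, cut a small box around the guide star, and take an intensity-weighted centroid for sub-pixel tracking. The box size and function names are illustrative, not PHD Guiding's actual code.]

```python
import numpy as np

def guide_star_centroid(frame, dark, seed, box=5):
    """Dark-subtract, window a box around the guide star at integer
    position `seed`, and return the intensity-weighted (y, x) centroid."""
    img = frame.astype(float) - dark
    y0, x0 = seed
    half = box // 2
    win = img[y0 - half:y0 + half + 1, x0 - half:x0 + half + 1]
    ys, xs = np.mgrid[y0 - half:y0 + half + 1, x0 - half:x0 + half + 1]
    total = win.sum()
    return (ys * win).sum() / total, (xs * win).sum() / total

dark = np.full((64, 64), 10.0)      # stored dark frame (flat bias here)
frame = dark.copy()
frame[20, 30] += 60.0               # guide star, slightly asymmetric
frame[20, 31] += 20.0               # -> sub-pixel centroid in x
yc, xc = guide_star_centroid(frame, dark, seed=(20, 30))
print(yc, xc)                       # 20.0 30.25
```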
Reply by ●January 30, 2012
alb wrote:
>> A lot depends on how noisy is CCD, how blurry is the image and how
>> much of stray light gets into the picture. (snip)
>
> At the moment we are only getting raw images, just to have a feeling of
> how they look like, and to show them on a screen what I do is the
> following:
>
> 1. get the mean value and sigma of the whole image;

Make a histogram of the values; set the thresholds accordingly.

> 2. map all the values < (mean - 3*sigma) to 0x000
> 3. map all the values > (mean + 3*sigma) to 0xFFF

The statistics of images are usually very different from Gaussian; therefore the N x sigma rule doesn't apply.

> 4. map all pixels in between linearly between 0x000 and 0xFFF
>
> As a result you have a contrast amplification which is similar (I
> guess) to the edge enhancement you proposed.

Google "Sobel filter".

> I am not sure I understood what you had in mind with "stray light
> artifacts removal".

I consult for money; if your project is more than idle curiosity, my contact is at the web site in the signature.

Vladimir Vassilevsky
DSP and Mixed Signal Design Consultant
http://www.abvolt.com
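[Editor's note: a sketch of the histogram-based thresholding suggested here, picking a detection cut from the image's own cumulative histogram instead of assuming Gaussian statistics. The 99.9th-percentile choice and names are illustrative.]

```python
import numpy as np

def percentile_threshold(img, frac=0.999):
    """Choose a threshold that keeps only the brightest (1 - frac)
    fraction of pixels, read off the 12-bit image's own histogram."""
    hist, edges = np.histogram(img, bins=4096, range=(0, 4096))
    cdf = np.cumsum(hist) / img.size
    idx = np.searchsorted(cdf, frac)   # first bin where the CDF crosses frac
    return edges[idx]

# toy frame: flat noise floor (0..99) plus two bright "stars"
rng = np.random.default_rng(1)
img = rng.integers(0, 100, (512, 512)).astype(float)
img[10, 10] = img[200, 300] = 3000.0
thr = percentile_threshold(img)
print(thr, (img > thr).sum())
```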
Reply by ●January 30, 2012
alb <alessandro.basili@cern.ch> wrote:
(snip, I wrote)
>> So there is a stream of similar images? In that case, you might want
>> to do some motion estimation like MPEG's do. Figure out how the image
>> shifted from the previous frame, encode the shift and difference between
>> the two after the shift.

> Lots of objects do not move from frame to frame and I think that is
> something related to the non-uniformity of the CCD. I think I can get rid of
> them simply subtracting two consecutive frames (unfortunately the available
> memory is not enough to store two complete pictures onboard, therefore I
> need to do this process over a subset of the frame and "calibrate" it in
> steps.

You might look up references to the ICT (Integer Cosine Transform), specifically designed for computation-limited (and, I believe, memory-limited) systems; in particular, for image compression on the CDP1802 processor, which has no multiply instruction. The ICT is designed to optimize the shift-and-add work needed by minimizing the 1 bits in the coefficients.

-- glen