I understand how lucky imaging gets the results it gets, but I’m wondering specifically how the 10% of frames are chosen.

They’re not picked based on clarity/blur, because the problem is one of distorted images not blurry images, causing issues when averaging the stack.

Searching online gives me lots of answers about how lucky imaging produces clearer images, but not how the lucky frames are chosen.

Anyone know how lucky frames get chosen?

  • @count_of_monte_carlo · 1 year ago

    This isn’t exactly my area of expertise, but I have some information that might be helpful. Here’s the description of the frame selection from a paper on a lucky imaging system:

    The frame selection algorithm, implemented (currently) as a post-processing step, is summarised below:

    1. A Point Spread Function (PSF) guide star is selected as a reference to the turbulence-induced blurring of each frame.
    2. The guide star image in each frame is sinc-resampled by a factor of 4 to give a sub-pixel estimate of the position of the brightest speckle.
    3. A quality factor (currently the fraction of light concentrated in the brightest pixel of the PSF) is calculated for each frame.
    4. A fraction of the frames are then selected according to their quality factors. The fraction is chosen to optimise the trade-off between the resolution and the target signal-to-noise ratio required.
    5. The selected frames are shifted-and-added to align their brightest speckle positions.

    If you want all the gory details, the best place to look is probably the thesis the same author wrote on this work. That’s available here (PDF warning).
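
    If it helps to see those five steps laid out in code, here’s a rough numpy sketch of the idea. To be clear, this is my own paraphrase rather than the authors’ pipeline; the function name, the cutout size, the 10% cut, and the use of scipy for the sub-pixel shift are just placeholder choices.

    ```python
    import numpy as np
    from scipy import ndimage

    def lucky_select(frames, star_rc, box=32, keep_fraction=0.10, upsample=4):
        """Toy lucky-imaging frame selection (a sketch, not the authors' code).

        frames        : (n_frames, H, W) array of short exposures
        star_rc       : (row, col) rough integer position of the guide star
        box           : half-width of the square cutout around the guide star
        keep_fraction : fraction of frames to keep, e.g. the best 10%
        upsample      : sinc-resampling factor for sub-pixel peak estimates
        """
        r0, c0 = star_rc
        quality = np.empty(len(frames))
        peaks = np.empty((len(frames), 2))

        for i, frame in enumerate(frames):
            cutout = frame[r0 - box:r0 + box, c0 - box:c0 + box].astype(float)

            # Step 3: quality factor = fraction of the guide star's light
            # concentrated in the brightest pixel of its PSF.
            quality[i] = cutout.max() / cutout.sum()

            # Step 2: sinc-resample the cutout by zero-padding its Fourier
            # transform, then locate the brightest speckle on the finer grid.
            spec = np.pad(np.fft.fftshift(np.fft.fft2(cutout)), box * (upsample - 1))
            fine = np.abs(np.fft.ifft2(np.fft.ifftshift(spec)))
            peak = np.unravel_index(np.argmax(fine), fine.shape)
            peaks[i] = np.array(peak) / upsample

        # Step 4: keep only the frames with the highest quality factors.
        n_keep = max(1, int(round(len(frames) * keep_fraction)))
        best = np.argsort(quality)[::-1][:n_keep]

        # Step 5: shift each selected frame so its brightest speckle lines up
        # with the best frame's, then average them (shift-and-add).
        ref = peaks[best[0]]
        stack = np.zeros(frames[0].shape, dtype=float)
        for i in best:
            dy, dx = ref - peaks[i]
            stack += ndimage.shift(frames[i].astype(float), (dy, dx), order=1)
        return stack / n_keep
    ```

    Step 1 (choosing which star to use as the guide) is left to you here; in practice you’d just pick a bright, unsaturated star in the field.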

    • @[email protected] (OP) · 1 year ago

      Thanks, I’ll take a look at that! I think I actually already skimmed it, because those 5 points are familiar.

      I wasn’t sure what was meant by the PSF guide star. Is that just the function that selects the speckle in each frame used to shift/align the frames?

      Also, I wasn’t sure what “sinc-resampled” means. Shifted-and-upscaled?

      Reading this has given me an idea of how I might implement it myself, but I wasn’t familiar enough with the terminology to know whether my algorithm was the same as the one described.

      I’ll try reading further into the paper to see if it clears anything up

      • @count_of_monte_carlo · 1 year ago

        I believe the idea is that a single bright star in the frame (the guide star) is used for selecting the frames. The point spread function (PSF) is just some function describing the blurred shape the detector would record for an input point source. You then select the frames in which the guide star’s light is most tightly concentrated, i.e. the frames where the atmosphere happened to blur it the least, compared to the rest of the set.

        I think your guess on “sinc-resampled” is correct. They increased the “resolution” by a factor of 4, so that when they realign the chosen frames to center the guide star, they can do so with sub-pixel precision.
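
        If a concrete example helps, here’s a tiny 1-D numpy toy (mine, not from the paper) showing what that resampling buys you: zero-padding the FFT is equivalent to interpolating with a sinc kernel, so you can read the peak position off a 4× finer grid.

        ```python
        import numpy as np

        n, upsample = 64, 4
        x = np.arange(n)
        # A blurry "speckle" whose true peak sits between pixels, at 30.3.
        signal = np.exp(-0.5 * ((x - 30.3) / 2.0) ** 2)
        print(np.argmax(signal))            # 30 -- whole-pixel estimate

        # Sinc-resample by 4x: zero-pad the spectrum, transform back.
        spec = np.pad(np.fft.fftshift(np.fft.fft(signal)), n * (upsample - 1) // 2)
        fine = np.abs(np.fft.ifft(np.fft.ifftshift(spec)))
        print(np.argmax(fine) / upsample)   # 30.25 -- quarter-pixel estimate
        ```

        The same trick in 2-D is what gives the sub-pixel speckle positions used in the shift-and-add step.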

        You may want to check out chapter 3 in the thesis, particularly section 3.5.3. They give a lot more detail on the process than you’ll be able to find in the paper. A well-written PhD thesis can be 1000x more valuable than the journal article it ultimately produces, because it contains all the specific details that get glossed over in the final paper.