• @QuaternionsRock
    11 years ago

    Let me preface this by admitting that I’m not a camera expert. That being said, some of the claims made in this article don’t make sense to me.

    A sensor effectively measures the sum of the light that hits each photosite over a period of time. Assuming the correct signal gain (ISO) is applied, this is, in effect, the arithmetic mean of the light that hit each photosite during the exposure.
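    A rough way to write that measurement model (the notation here is mine, not from the article): with gain $g$, exposure time $T$, and $\phi_i(t)$ the light arriving at photosite $i$,

    $$p_i = g \int_0^{T} \phi_i(t)\,dt = g\,T\,\bar{\phi}_i,$$

    so, up to a constant scale factor, the raw value tracks the arithmetic mean $\bar{\phi}_i$ of the light over the exposure.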

    When you split each photosite into four, you have more options. If you simply take the average of the four photosites, the result should in theory be equivalent to the original sensor. However, you could also exploit certain known characteristics of the image as well as the noise to produce an arguably better image, such as by discarding outlier samples or by using a weighted average based on some expectation of the pixel value.
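    A minimal numpy sketch of that idea (the quad layout, numbers, and noise model here are made up for illustration): plain 2x2 averaging behaves like the original large photosite, while a robust combiner such as the median can reject an outlier sample.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical quad-pixel readout: a clean 4x4 image repeated into 2x2
    # groups of photosites, plus Gaussian read noise and one hot-pixel outlier.
    clean = rng.uniform(50.0, 200.0, size=(4, 4))
    quad = np.repeat(np.repeat(clean, 2, axis=0), 2, axis=1)
    noisy = quad + rng.normal(0.0, 5.0, size=quad.shape)
    noisy[1, 2] += 400.0  # simulated hot pixel (outlier sample)

    # Regroup so each output pixel sees its four photosite samples: (4, 4, 4)
    blocks = noisy.reshape(4, 2, 4, 2).transpose(0, 2, 1, 3).reshape(4, 4, 4)

    mean_bin = blocks.mean(axis=-1)          # plain average: acts like one big photosite
    robust_bin = np.median(blocks, axis=-1)  # robust combine: the outlier is discarded

    print(np.abs(mean_bin - clean).max())    # hot pixel drags its bin ~100 off
    print(np.abs(robust_bin - clean).max())  # median bin stays near the clean value
    ```

    A real pipeline would use something smarter than a plain median, but the point stands: four samples per output pixel give you choices a single large photosite doesn't.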

    • @[email protected]
      11 years ago

      However, you could also exploit certain known characteristics of the image as well as the noise to produce an arguably better image, such as by discarding outlier samples or by using a weighted average based on some expectation of the pixel value.

      Yes, that is one use case for pixel binning. Apple uses it to reduce noise in low-light photos, but it can also improve telephoto images, where the extra data from neighboring pixels yields cleaner results.
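      A quick sketch of the noise-reduction side (a purely illustrative shot-noise model, not Apple's actual pipeline): averaging the four photosites in each 2x2 group cuts the noise by sqrt(4), roughly doubling the SNR.

      ```python
      import numpy as np

      rng = np.random.default_rng(1)

      # Assumed low-light model: each photosite sees 25 photons on average,
      # with Poisson shot noise.
      mean_photons = 25.0
      frame = rng.poisson(mean_photons, size=(1000, 1000)).astype(float)

      # Bin each 2x2 group of photosites into one output pixel.
      binned = frame.reshape(500, 2, 500, 2).mean(axis=(1, 3))

      # Averaging four samples halves the noise, doubling the SNR.
      print(f"per-photosite SNR: {mean_photons / frame.std():.1f}")  # ~5
      print(f"2x2-binned SNR:    {mean_photons / binned.std():.1f}")  # ~10
      ```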