• @[email protected]
    7 months ago

    I see what you mean, and while you raise a few excellent points, you seem to forget that a human looking at mashed potatoes has far more data than a computer looking at an image.

    A human gets data about smell, temperature, texture and weight in addition to a simple visual impression.

    This is why I picked a book/letter example, I wanted to reduce the variables available to a human to get closer to what a computer has from a photo.

      • @[email protected]
        7 months ago

        But what use would it be then? You wouldn’t be able to compare one potato to another; both would register the same values.

        • @adeoxymus
          7 months ago

          I think the use case is not people doing a potato study, but people who want to lose weight and need to know the number of calories in the piece of cake that’s offered at the office cafeteria.

          • @[email protected]
            7 months ago

            And that means the feature is useless; there are so many things in a cake that can’t be seen from a simple picture.

            And if it just uses a generic “cake” value, it will show incorrect data.

            • @adeoxymus
              7 months ago

              The paper I showed earlier disagrees.

    • @[email protected]
      7 months ago

      You are correct, but you are speaking for yourself and not, for example, for the disabled community, who may lack those senses or the capacity to calculate a result. While AI is still improving its capabilities, they are the first to benefit.