• @[email protected]
    7 months ago

    I agree with you, but disagree with your reasoning.

    If you take 1 lb of potatoes and boil and mash them with no other add-ins, you can reasonably estimate the nutritional information through visual inspection alone, assuming you have enough visual reference to tell there is about a pound of potatoes. Many nutrition apps out there utilize this, and it’s essentially just lopping off the extremes and averaging out the rest, something like the sketch below.
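
    A minimal sketch of that trim-and-average idea (purely hypothetical: the candidate values, portion weight, and function names are made up for illustration and aren’t from any real app):

        # Hypothetical "drop the extremes, average the rest" calorie estimate.
        def trimmed_mean(values, trim_fraction=0.2):
            """Average the middle of the distribution, ignoring the extremes."""
            ordered = sorted(values)
            k = int(len(ordered) * trim_fraction)  # entries to drop from each end
            kept = ordered[k:len(ordered) - k] or ordered  # guard against over-trimming
            return sum(kept) / len(kept)

        # Plausible kcal-per-100g values for the dishes a classifier might match,
        # from plain mashed potatoes up to versions loaded with butter or cheese.
        candidate_kcal_per_100g = [83, 88, 90, 105, 113, 120, 174, 237]

        estimated_weight_g = 454  # ~1 lb, judged visually from the photo
        kcal = trimmed_mean(candidate_kcal_per_100g) * estimated_weight_g / 100
        print(f"Estimated: {kcal:.0f} kcal")  # one point estimate, no recipe info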

    The problem with this is, it’s impossible to accurately guess the recipe, and therefore the ingredients. Take the aforementioned mashed potatoes. You can’t accurately tell what variety of potato was used. Was water added back during the mashing? Butter? Cream cheese? Cheddar? Sour cream? There’s no way to tell visually, assuming uniform mashing, what is in the potatoes.

    Not to mention, the pin sees two pieces of bread on top of each other… what is in the bread? Who the fuck knows!

    • @[email protected]
      7 months ago

      It isn’t as magical (or accurate) as it looks. It’s just an extension of how various health tracking apps track food intake. There’s usually just one standard entry in the database for mashed potatoes based on whatever their data source thinks a reasonable default value should be. It doesn’t know if what you’re eating is mostly butter and cheese.
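
      As a toy illustration of that kind of lookup (the labels and values below are invented, not taken from any actual food database):

          # Toy single-generic-entry lookup; values invented for illustration.
          FOOD_DB = {
              # label -> kcal per 100 g for one "standard" preparation
              "mashed potatoes": 113,
              "white bread": 265,
              "chocolate cake": 371,
          }

          def estimate_kcal(label: str, grams: float) -> float:
              """Calories for the generic entry; knows nothing about the recipe."""
              return FOOD_DB[label] * grams / 100

          # Loaded mashed potatoes and plain ones get the exact same answer:
          print(estimate_kcal("mashed potatoes", 454))  # ~513 kcal either way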

      How useful a vague and not particularly accurate nutrition profile really can be is an open question, but it seems to be a popular feature for smartwatches.

    • @[email protected]
      7 months ago

      I see what you mean, and while you raise a few excellent points, you seem to forget that a human looking at mashed potatoes has far more data than a computer looking at an image.

      A human gets data about smell, temperature, texture, and weight in addition to a simple visual impression.

      This is why I picked a book/letter example: I wanted to reduce the variables available to a human to get closer to what a computer has from a photo.

        • @[email protected]
          7 months ago

          But what use would it be then? You wouldn’t be able to compare one potato to another; both would register the same values.

          • @adeoxymus
            7 months ago

            I think the use case is not people doing potato studies but people who want to lose weight and need to know the number of calories in the piece of cake that’s offered at the office cafeteria.

            • @[email protected]
              7 months ago

              And that means the feature is useless; there are so many things in a cake that can’t be seen from a simple picture.

              And if it is just a generic “cake” value, it will show incorrect data.

              • @adeoxymus
                7 months ago

                The paper I showed earlier disagrees.

      • @[email protected]
        7 months ago

        You are correct, but you are speaking for yourself and not, for example, for the disabled community, who may lack certain senses or the capacity to calculate a result. While AI is still improving its capabilities, they are the first to benefit.