• @dingus
    14 points · 1 month ago

    Yeah, I never get these strange AI results.

    Except, the other day I wanted to convert some units and the AI result was having a fucking stroke for some reason. The numbers did not make sense at all. Never seen it do that before, but alas, I did not take a screenshot.

    • kadup
      20 points · 1 month ago

      deleted by creator

      • argv minus one
        6 points · 1 month ago

        What do humans do? Does the human brain have different sections for language processing and arithmetic?

        • @[email protected]
          11 points · 1 month ago

          LLMs don’t verify that their output is true. Math is a domain where verifying truth is easy: ask an LLM how many Rs are in “strawberry” and it’s plain to see whether the answer is correct or not. Ask an LLM for a summary of Colombian history and it’s not as apparent. Ask an LLM for a poem about a tomato and there really isn’t a wrong answer.
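
          To illustrate how trivial that kind of check is, here’s a minimal Python sketch; llm_answer is a hypothetical value pasted in from a model’s reply, not anything from the thread:

          llm_answer = 2                      # hypothetical reply from the model
          actual = "strawberry".count("r")    # ground truth, computed directly
          print(actual)                       # 3
          print(llm_answer == actual)         # False: the mistake is plain to see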

    • argv minus one
      8 points · 1 month ago

      Meanwhile, GNU Units can do that, reliably and consistently, on a freaking 486. 😂
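
      For comparison, a rough Python sketch that shells out to GNU Units for that kind of conversion; the unit names and the exact output format are assumptions based on a typical GNU Units install:

      import subprocess

      # Ask GNU Units for the conversion factors between two unit expressions.
      # The two-argument form prints the multiply/divide factors for the conversion.
      result = subprocess.run(
          ["units", "100 km/hour", "mph"],
          capture_output=True,
          text=True,
          check=True,
      )
      print(result.stdout)  # e.g. "* 62.137119 ..." on a typical install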

    • @jj4211
      1 point · 1 month ago

      Usually I’ll see something mild or something niche get wildly messed up.

      I think a few times I managed to reproduce a query from one of these posts, but I suspect they monitor for viral bad queries and very quickly massage them one way or another so they stop returning the ridiculous answer. For example, a fair number of times the AI overview would seemingly just be disabled for queries I found in these sorts of posts.

      You also have to contend with the reality that people can trivially fake these results, and if the AI isn’t weird enough, they’ll inject some weirdness to make their content more interesting.