• @stjobe
    54 • 1 year ago

    Biggest problem with it is that it lies with the exact same confidence with which it tells the truth. Or, put another way, it’s confidently incorrect as often as it is confidently correct, and there’s no way to tell the difference unless you already know the answer.

    • @[email protected]
      19 • 1 year ago

      it’s kinda hilarious to me because one of the FIRST things ai researchers did was get models to identify things and output answers together with the confidence of each potential ID, and now we’ve somehow regressed back from that point
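That classifier-style setup can be sketched in a few lines: raw model scores go through a softmax, and every candidate ID comes back with an explicit confidence attached. The labels and scores below are made up purely for illustration:

```python
import math

def softmax(logits):
    """Convert raw model scores into probabilities that sum to 1."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical raw scores from an image classifier for three labels.
labels = ["cat", "dog", "fox"]
logits = [4.1, 1.2, 0.3]
probs = softmax(logits)

# The classifier reports each candidate ID together with its confidence.
for label, p in zip(labels, probs):
    print(f"{label}: {p:.2f}")
```

The point is that the confidence is a first-class output here: each answer carries its own probability, rather than everything being delivered in the same flat, assertive prose.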

      • did we really regress back from that?

        i mean giving a confidence for recognizing a certain object in a picture is relatively straightforward.

        But LLMs put together words by how likely they are to belong together, given your input (terribly oversimplified). The confidence behind that has no direct relation to how likely the statements are to be true. I remember an example where someone made ChatGPT say that 2+2 equals 5 because his wife said so. So ChatGPT was confident that something is right when the wife says it, simply because it considers those words likely to belong together.
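The distinction can be sketched with a toy next-token model: its "confidence" is just the conditional probability of a continuation given the context, which tracks the training text, not arithmetic truth. The tiny corpus below is invented for illustration:

```python
from collections import Counter

# Assumed toy corpus of (context, next-token) pairs. In the "wife says"
# context, the corpus happens to continue "2+2 equals" with "5".
corpus = [
    ("2+2 equals", "4"),
    ("2+2 equals", "4"),
    ("2+2 equals", "4"),
    ("my wife says 2+2 equals", "5"),
    ("my wife says 2+2 equals", "5"),
    ("my wife says 2+2 equals", "5"),
    ("my wife says 2+2 equals", "5"),
]

def next_token_probs(context):
    """Empirical probability of each next token, given an exact context."""
    counts = Counter(tok for ctx, tok in corpus if ctx == context)
    total = sum(counts.values())
    return {tok: n / total for tok, n in counts.items()}

# Conditioned on the "wife says" context, "5" is the most probable
# continuation: maximal model confidence, factually wrong.
print(next_token_probs("my wife says 2+2 equals"))  # {'5': 1.0}
print(next_token_probs("2+2 equals"))               # {'4': 1.0}
```

A real LLM is vastly more sophisticated, but the failure mode is the same shape: the probability measures how well the continuation fits the context, not whether the resulting statement is true.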

          • 𝕽𝖔𝖔𝖙𝖎𝖊𝖘𝖙
            2 • 1 year ago

            Gödel numbers are typically associated with formal mathematical statements, and there isn’t a formal proof for 2+2=5 in standard arithmetic. However, if you’re referring to a non-standard or humorous context, please provide more details.

            • metaStatic
              1 • 1 year ago

              Of course I don’t know enough about the actual proof for it to be anything but a joke, but there are infinitely many numbers, so there should be infinitely many proofs.

              There are also meme proofs out there that I assume could be given a Gödel number easily enough.
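That last part is right: assigning a Gödel number to an arbitrary finite string of symbols is purely mechanical. Fix a numeric code for each symbol, then take the product of successive prime powers. A minimal sketch, where the symbol-to-code assignment is an arbitrary choice of mine (any fixed assignment works):

```python
def primes(n):
    """First n primes by trial division (fine for short formulas)."""
    out, candidate = [], 2
    while len(out) < n:
        if all(candidate % p for p in out):
            out.append(candidate)
        candidate += 1
    return out

# Assumed symbol codes for a tiny fragment of Peano arithmetic.
codes = {"0": 1, "S": 2, "=": 3, "+": 4, "(": 5, ")": 6}

def godel_number(formula):
    """Encode a symbol sequence as prod(p_i ** code(sym_i))."""
    ps = primes(len(formula))
    n = 1
    for p, sym in zip(ps, formula):
        n *= p ** codes[sym]
    return n

# "SS0+SS0=SSSS0" is 2+2=4 written in successor notation;
# its Gödel number is a single (very large) integer.
print(godel_number(list("SS0+SS0=SSSS0")))
```

Whether a string *has* a Gödel number is never the issue, since every string gets one; what doesn’t exist is a string encoding a valid proof of 2+2=5 in standard arithmetic, which is the distinction the earlier reply was making.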