• @[email protected]
    link
    fedilink
    English
    -83 months ago

    The people here don’t get LLMs and it shows. This is neither surprising nor a bad thing imo.

    • @krashmo · 25 points · 3 months ago

      In what way is presenting factually incorrect information as if it’s true not a bad thing?

      • @[email protected]
        link
        fedilink
        English
        33 months ago

        LLMs operate on tokens, not letters, so this is expected behavior. A hammer sucks at controlling a computer, and that's okay. The problem is the people telling you to use a hammer to operate a computer, not the hammer's inability to do so.
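
        You can see this directly by inspecting a tokenizer. A minimal sketch using OpenAI's tiktoken library (the encoding name and the word "strawberry" are my assumptions for illustration, not from the thread):

        ```python
        import tiktoken

        # Load a BPE encoding used by recent OpenAI models.
        enc = tiktoken.get_encoding("cl100k_base")

        word = "strawberry"
        tokens = enc.encode(word)

        # The model receives integer token IDs, not characters, so
        # "how many r's are in strawberry?" asks about units it never sees.
        print(tokens)
        print([enc.decode_single_token_bytes(t) for t in tokens])
        ```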

          • @[email protected]
            link
            fedilink
            03 months ago

            It would be luck-based for pure LLMs, but now I wonder if the models that can use Python notebooks might be able to write a script to count it. It's actually possible for an AI to get this answer consistently correct these days.
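
            The counting script itself is trivial; a sketch of the kind of one-liner a code-execution model could write and run (the word and letter are my assumption of the usual example):

            ```python
            # Counting a letter in code sidesteps tokenization entirely.
            word, letter = "strawberry", "r"
            print(f"{word!r} contains {word.count(letter)} '{letter}'(s)")  # -> 3
            ```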