• @drislands
    English
    14 points
    8 months ago

    …because they frequently do? Glaring errors are like, the main thing LLMs produce besides hype.

    • @[email protected]
      English
      28 points
      8 months ago

      They make glaring errors in logic, and confidently state things that are not true. But their whole “deal” is writing proper sentences based on predictive models. They don’t make mistakes like the one highlighted in the excerpt.

      • @drislands
        English
        4 points
        8 months ago

        Y’know what, that’s a fair point. Though I’m not the original commenter from the top, heh.

          • @drislands
            English
            1 point
            8 months ago

            No worries mate. I appreciate the correction regardless.

      • @Garbanzo
        English
        2 points
        8 months ago

        I’m imagining that the first output didn’t cover everything they wanted, so they tweaked it, pasted the results together, and fucked it up.

      • @GlitterInfection
        English
        1 point
        8 months ago

        Pretty soon glaring errors like this will be the only way to identify human vs LLM writing.

        Then soon after that the LLMs will start producing glaring grammatical errors to match the humans.

    • Zammy95
      English
      2 points
      8 months ago

      I think he was being sarcastic lol. I…hope