• @[email protected]
    28 points · 10 months ago (edited)

    They make glaring errors in logic and confidently state things that are not true. But their whole “deal” is writing proper sentences based on predictive models. They don’t make mistakes like the one highlighted in the excerpt.

    • @drislands
      4 points · 10 months ago

      Y’know what, that’s a fair point. Though I’m not the original commenter from the top, heh.

        • @drislands
          1 point · 10 months ago

          No worries mate. I appreciate the correction regardless.

    • @Garbanzo
      2 points · 10 months ago

      I’m imagining that the first output didn’t cover everything they wanted, so they tweaked it, pasted the results together, and fucked it up.

    • @GlitterInfection
      1 point · 10 months ago

      Pretty soon, glaring errors like this will be the only way to tell human writing from LLM writing.

      Then soon after that the LLMs will start producing glaring grammatical errors to match the humans.