• @Carrolade
      34
      9 months ago

      Honestly, I think ChatGPT wouldn’t make that particular mistake. Sounding proper is its primary purpose. Maybe a cheap knockoff.

      • metaStatic
        -12
        9 months ago

        ChatGPT just guesses the next word. Stop anthropomorphizing it.
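
        Roughly what that “guess the next word” loop looks like, as a minimal sketch. It assumes the Hugging Face transformers library and the small gpt2 checkpoint purely for illustration; ChatGPT’s actual model isn’t public.

        ```python
        # Greedy next-token prediction: repeatedly pick the single most likely next token.
        import torch
        from transformers import AutoModelForCausalLM, AutoTokenizer

        tokenizer = AutoTokenizer.from_pretrained("gpt2")   # gpt2 is a stand-in, not ChatGPT
        model = AutoModelForCausalLM.from_pretrained("gpt2")
        model.eval()

        input_ids = tokenizer("Honestly, I think ChatGPT", return_tensors="pt").input_ids

        with torch.no_grad():
            for _ in range(10):                              # generate ten tokens, one at a time
                logits = model(input_ids).logits             # a score for every token in the vocabulary
                next_id = logits[0, -1].argmax()             # "guess the next word": take the top-scoring token
                input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

        print(tokenizer.decode(input_ids[0]))
        ```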

        • @Carrolade
          8
          9 months ago

          Yes, it guesses the next word because it was designed to sound convincing, and next-word prediction is a good method for accomplishing that. Sounding convincing is the primary goal behind the design of every chatbot, and it is what the Turing Test was intended to gauge. Anyone who makes a chatbot wants it to sound good first and foremost.

        • @acosmichippo
          8
          edited
          9 months ago

          it guesses the next word… based on examples created by humans. It’s not just making shit up out of thin air.
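
          If it helps, here is a minimal sketch of how those human-written examples come in: training pushes the model to predict each next token of real human text. Again assuming PyTorch plus the Hugging Face transformers library, with gpt2 as a stand-in for ChatGPT’s non-public model.

          ```python
          # Next-token training objective on a human-written example (a sketch, not ChatGPT's real pipeline).
          import torch
          from transformers import AutoModelForCausalLM, AutoTokenizer

          tokenizer = AutoTokenizer.from_pretrained("gpt2")
          model = AutoModelForCausalLM.from_pretrained("gpt2")

          human_text = "The cat sat on the mat."              # an example created by a human
          ids = tokenizer(human_text, return_tensors="pt").input_ids

          # labels=ids makes the model compare its next-token guesses against the human text;
          # the loss is large wherever its guess disagrees with what the human actually wrote.
          loss = model(ids, labels=ids).loss
          loss.backward()                                     # training updates weights to shrink this loss
          print(f"next-token prediction loss: {loss.item():.3f}")
          ```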

        • @[email protected]
          2
          9 months ago

          Lol, making a mistake isn’t unique to humans. Machines make mistakes.

          Congratulations for knowing that an LLM isn’t the same as a human though, I guess!