I hear people saying things like “chatgpt is basically just a fancy predictive text”. I’m certainly not in the “it’s sentient!” camp, but it seems pretty obvious that a lot more is going on than just predicting the most likely next word.

Even if it’s predicting word by word within a bunch of constraints & structures inferred from the question / prompt, that’s pretty interesting. Tbh, I’m more impressed by chatgpt’s ability to appear to “understand” my prompts than I am by the quality of the output. Even though its writing is generally a mix of bland, obvious and inaccurate, it mostly does provide a plausible response to whatever I’ve asked / said.

Anyone feel like providing an ELI5 explanation of how it works? Or any good links to articles / videos?

  • @[email protected]
    2 • 11 months ago

    That’s exactly right. It is a statistical model that is based on some training dataset. The quality of the predictions is only as good as the completeness and bias of the training set.
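
    To make the “statistical next-word prediction” idea concrete, here’s a toy sketch (not how GPT actually works internally — real models use neural networks trained on enormous corpora, and predict subword tokens, not whole words). This bigram model just counts which word follows which in some training text, then emits the most frequent successor:

    ```python
    from collections import Counter, defaultdict

    # Tiny stand-in for a training corpus.
    training_text = "the cat sat on the mat the cat ate the fish"
    words = training_text.split()

    # Count, for each word, which words follow it and how often.
    successors = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        successors[prev][nxt] += 1

    def predict_next(word):
        """Return the most frequently seen successor of `word`, or None."""
        if word not in successors:
            return None
        return successors[word].most_common(1)[0][0]

    print(predict_next("the"))  # "cat" — it follows "the" more often than "mat" or "fish"
    ```

    The “quality only as good as the training set” point falls straight out of this: the model can only ever reproduce patterns present in its data. Large language models are vastly more sophisticated (they learn long-range context, not just the previous word), but the data-dependence is the same.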

    • @dustyData
      5 • 11 months ago

      And that is one of the major issues with giving AI, and the corporations who make them, free rein to “think” and inform decision making. Feed it a racist dataset, and the AI will be racist. Feed it misinformation, and the AI will only reproduce misinformation.

      • snooggums
        1 • 11 months ago

        The proof that AI is just garbage in, garbage out is that it always reproduces what it’s fed, whereas at least some people manage to respond to racism and misinformation by rejecting them, even if most people fall for it.

        > Feed it a racist dataset, and the AI will be racist. Feed it misinformation, and the AI will only reproduce misinformation.