• @taiyang

    Not a good analogy, except there is one interesting parallel. My students who overuse a calculator in stats tend to do fine on basic arithmetic, but it does them a disservice when they try anything more elaborate. Granted, the calculator should follow PEMDAS, but for whatever weird reason it sometimes doesn't. And when there's a function that requires a sum and maybe multiple steps? Forget about it.
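
    Here's a toy sketch of the order-of-operations point (made-up expression, not modeling any particular calculator): under PEMDAS, 2 + 3 * 4 is 14, but a device that just chews through the keys left to right gets 20.

    ```python
    # Toy illustration of PEMDAS vs. naive left-to-right evaluation.
    # The expression and the "calculator" below are hypothetical.

    def left_to_right(tokens):
        """Evaluate a flat [number, op, number, op, ...] list with no precedence."""
        result = tokens[0]
        for i in range(1, len(tokens), 2):
            op, num = tokens[i], tokens[i + 1]
            result = result + num if op == "+" else result * num
        return result

    expr = [2, "+", 3, "*", 4]
    print("PEMDAS (Python itself):", 2 + 3 * 4)             # 14
    print("naive left-to-right:   ", left_to_right(expr))   # 20
    ```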

    Similarly, GPT can produce cliché copywriting, but good luck getting it to spit out anything complex. Trust me, I'm grading that drivel. So in that case, the analogy works.

    • @surph_ninja

      You think it won’t ever spit out anything complex?

      • @taiyang

        LLMs by their very nature drive toward clichés and the most common answers, since they're synthesizing data. Prompts can try to sway them away from that, but it's ultimately a regurgitation machine.
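
        Rough sketch of what I mean (a made-up toy next-word distribution, not any real model): whether you decode greedily or sample, the most common continuation dominates, which is exactly the cliché drift.

        ```python
        # Toy next-word probabilities after some prompt; the numbers are invented.
        import random

        next_word_probs = {
            "technology": 0.45,       # the most common, cliché continuation
            "society": 0.30,
            "education": 0.15,
            "bioluminescence": 0.07,  # the "creative" long-tail options
            "thermodynamics": 0.03,
        }

        def greedy(probs):
            """Always pick the single most probable word."""
            return max(probs, key=probs.get)

        def sample(probs, temperature=1.0):
            """Sample from the distribution; higher temperature flattens it,
            so rarer words show up a bit more often (but still rarely)."""
            words = list(probs)
            weights = [p ** (1.0 / temperature) for p in probs.values()]
            return random.choices(words, weights=weights, k=1)[0]

        print("greedy:", greedy(next_word_probs))        # always "technology"
        print("t=1.0:", sample(next_word_probs))         # usually "technology"
        print("t=2.0:", sample(next_word_probs, 2.0))    # more variety, still biased
        ```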

        Actual AI might be able to eventually, but it would require a lot more human-like experience (and, honestly, the chaos that gives us creativity). At that point it'll probably be sentient, and we'd have bigger things to worry about, lol

      • @ZILtoid1991

        Then enjoy your AI slop! I’m not stopping you.