“People ask LLMs to write code.

LLMs recommend imports that don’t actually exist.

Attackers work out what these imports’ names are, and create & upload them with malicious payloads.

People using LLM-written code then install the malware themselves.”
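That last step is the one a developer can actually guard against: before running LLM-generated code, check that every import it names actually resolves in your environment. Anything that doesn't resolve is exactly the set of names worth vetting on PyPI by hand before you `pip install` it. A minimal sketch (the hallucinated module name below is made up for illustration):

```python
import importlib.util

def import_resolves(module_name: str) -> bool:
    """True if the top-level module can be found in the current environment."""
    return importlib.util.find_spec(module_name) is not None

# "json" ships with Python; "qt_ui_helper_pro" is a hypothetical hallucinated name
suggested = ["json", "qt_ui_helper_pro"]
unresolved = [m for m in suggested if not import_resolves(m)]
print(unresolved)  # names printed here should be checked before installing
```

An unresolved name is not proof of hallucination (it may simply not be installed yet), but it flags the packages that deserve a look at their upload date and maintainer before they go anywhere near `pip install`.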

  • activepeopleOP
    1 year ago

    and the devs copy-pasting code from it are probably aware that it doesn’t know anything, and that it is likely synthesizing something based on StackOverflow, which they were happily copy-pasting from a few months ago anyway.

    If the libraries ChatGPT suggests work ~80% of the time, that leaves an opening for someone to supply a malicious “solution” for the other 20%.

    • Haus
      1 year ago

      This is pretty much my experience. It did a decent job with the grunt work of setting up a Qt UI in Python, but something like 5 of the 20 imports were wrong.