From https://twitter.com/llm_sec/status/1667573374426701824

  1. People ask LLMs to write code
  2. LLMs recommend imports that don’t actually exist
  3. Attackers work out what these hallucinated names are, then create & upload packages under those names with malicious payloads
  4. People using LLM-written code then auto-add malware themselves
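One mitigation for step 4 is to audit LLM-generated code for imports you haven't vetted before running `pip install`. A minimal sketch, assuming a hypothetical project allowlist (`VETTED`) and using only the standard-library `ast` module:

```python
import ast

# Hypothetical allowlist: packages this project has actually reviewed.
VETTED = {"json", "os", "sys", "requests", "numpy"}

def unvetted_imports(source: str) -> set[str]:
    """Return top-level module names imported by `source` that aren't vetted."""
    tree = ast.parse(source)
    found = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            # `import a.b` imports the top-level package `a`
            found.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            found.add(node.module.split(".")[0])
    return found - VETTED

# `totally_real_utils` stands in for a package name an LLM might invent
llm_code = "import numpy\nimport totally_real_utils\nfrom os import path\n"
print(unvetted_imports(llm_code))  # {'totally_real_utils'}
```

Anything this flags should be checked against the package index (and its publish history) before it goes anywhere near `pip install`.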
  • @[email protected]
    1 year ago

    It’s terrifying that someone would build off suggestions from ChatGPT without verifying the packages they are installing.