• @[email protected]
    10 months ago

    Is this really a worm if all it’s doing is sending a prompt that persuades an AI agent (one with authority to send emails) to email that same prompt on to other AI agents? Like the guy in the article says, AI output produced from user prompts should be assumed to be the same as unfiltered user input and treated accordingly:

    “With a lot of these issues, this is something that proper secure application design and monitoring could address parts of,” says Adam Swanda, a threat researcher at AI enterprise security firm Robust Intelligence. “You typically don’t want to be trusting LLM output anywhere in your application.”
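    In practice, “don’t trust LLM output” means running the same checks on model output that you’d run on raw user input before letting it trigger an action. Here’s a minimal sketch of that idea; the names (`ALLOWED_RECIPIENTS`, `safe_send`, the injection patterns) are hypothetical and not from the article:

    ```python
    import re

    # Illustrative guardrail: treat LLM output like unfiltered user input.
    # An email-sending agent only acts on model output after validation.

    ALLOWED_RECIPIENTS = {"[email protected]", "[email protected]"}

    # Crude, assumed patterns for text that tries to steer a downstream agent.
    INJECTION_PATTERNS = [
        re.compile(r"ignore (all )?previous instructions", re.I),
        re.compile(r"forward this (message|email) to", re.I),
    ]

    def looks_like_injection(text: str) -> bool:
        """Flag output that reads like a prompt aimed at another agent."""
        return any(p.search(text) for p in INJECTION_PATTERNS)

    def safe_send(recipient: str, llm_output: str, send_email) -> bool:
        """Gate the send action: validate recipient and body first."""
        if recipient not in ALLOWED_RECIPIENTS:
            return False  # the model can't invent new recipients
        if looks_like_injection(llm_output):
            return False  # don't propagate a possible worm payload
        send_email(recipient, llm_output)
        return True
    ```

    Pattern matching alone won’t catch every payload, which is the commenter’s point: the real fix is the application design, where model output never gets authority the user input behind it wouldn’t have.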