On Thursday, security firm Radware published research showing how a garden-variety attack known as a prompt injection is all it took for company researchers to exfiltrate confidential information when Deep Research was given access to a target’s Gmail inbox. This type of integration is precisely what Deep Research was designed to do—and something OpenAI has encouraged. Radware has dubbed the attack Shadow Leak.

“ShadowLeak weaponizes the very capabilities that make AI assistants useful: email access, tool use and autonomous web calls,” Radware researchers wrote. “It results in silent data loss and unlogged actions performed ‘on behalf of the user,’ bypassing traditional security controls that assume intentional user clicks or data leakage prevention at the gateway level.”

People considering connecting LLM agents to their inboxes, documents, and other private resources should think long and hard about doing so, since these sorts of vulnerabilities aren’t likely to be contained anytime soon.

  • Doug Holland
    link
    fedilink
    English
    arrow-up
    3
    ·
    8 days ago

    Whoa, spam filtering is Gmail’s only advantage over generic email.

    I’ve migrated away, but still have my legacy Gmail account, but this’ll get me gone completely.