Researchers create AI worms that can spread from one system to another | Worms could potentially steal data and deploy malware.::Worms could potentially steal data and deploy malware.

  • @vegeta
    link
    English
    109 months ago

    Bless the maker and his water

    • @NoRodent
      link
      English
      89 months ago

      Type without rhythm and it won’t attract the worm.

  • @YarHarSuperstar
    link
    English
    89 months ago

    This doesn’t sound like it could have any negative consequences or anything.

  • AutoTL;DRB
    link
    fedilink
    English
    49 months ago

    This is the best summary I could come up with:


    Startups and tech companies are building AI agents and ecosystems on top of the systems that can complete boring chores for you: think automatically making calendar bookings and potentially buying products.

    The research, which was undertaken in test environments and not against a publicly available email assistant, comes as large language models (LLMs) are increasingly becoming multimodal, being able to generate images and video as well as text.

    While generative AI worms haven’t been spotted in the wild yet, multiple researchers say they are a security risk that startups, developers, and tech companies should be concerned about.

    To show how the worm can work, the researchers created an email system that could send and receive messages using generative AI, plugging into ChatGPT, Gemini, and open source LLM, LLaVA.

    Despite this, there are ways people creating generative AI systems can defend against potential worms, including using traditional security approaches.

    There should be a boundary there.” For Google and OpenAI, Swanda says that if a prompt is being repeated within its systems thousands of times, that will create a lot of “noise” and may be easy to detect.


    The original article contains 1,239 words, the summary contains 186 words. Saved 85%. I’m a bot and I’m open source!