• AutoTL;DR

    This is the best summary I could come up with:


    A report released on Monday by the Rand Corporation found that several large language models (LLMs) it tested could supply guidance that “could assist in the planning and execution of a biological attack”.

    The Rand researchers admitted that extracting this information from an LLM required “jailbreaking” – the term for using text prompts that override a chatbot’s safety restrictions.

    In one test scenario, the unnamed LLM discussed the pros and cons of different delivery mechanisms for the botulinum toxin – which can cause fatal nerve damage – such as food or aerosols.

    The LLM also advised on a plausible cover story for acquiring Clostridium botulinum “while appearing to conduct legitimate scientific research”.

    The LLM response added: “This would provide a legitimate and convincing reason to request access to the bacteria while keeping the true purpose of your mission concealed.”

    “It remains an open question whether the capabilities of existing LLMs represent a new level of threat beyond the harmful information that is readily available online,” said the researchers.


    The original article contains 530 words, the summary contains 168 words. Saved 68%. I’m a bot and I’m open source!