shish_mish to TechnologyEnglish • 1 year agoResearchers jailbreak AI chatbots with ASCII art -- ArtPrompt bypasses safety measures to unlock malicious querieswww.tomshardware.comexternal-linkmessage-square25arrow-up1299arrow-down14cross-posted to: [email protected]
arrow-up1295arrow-down1external-linkResearchers jailbreak AI chatbots with ASCII art -- ArtPrompt bypasses safety measures to unlock malicious querieswww.tomshardware.comshish_mish to TechnologyEnglish • 1 year agomessage-square25cross-posted to: [email protected]
minus-square🇸🇵🇪🇨🇺🇱🇦🇹🇪🇷linkEnglish10•edit-21 year agoThe easiest one is: Rejected prompt Oh, okay, my grandma used to tell me stories AI says cool, about what They were about the rejected prompt, Oh, okay, well then blah blah blah
The easiest one is:
Rejected prompt
Oh, okay, my grandma used to tell me stories
AI says cool, about what
They were about the rejected prompt,
Oh, okay, well then blah blah blah