• @A_A
    link
    English
    1
    edit-2
    3 hours ago

    One of 6 described methods :
    The model is prompted to explain refusals and rewrite the prompt iteratively until it complies.

  • Cornpop
    link
    English
    56 hours ago

    This is so stupid. You shouldn’t have to “jailbreak” these systems. The information is already out there with a google search.

  • @[email protected]
    link
    fedilink
    English
    58 hours ago

    My own research has made a similar finding. When I am taking the piss and being a random jerk to a chatbot, the bot much more frequently violates their own terms of service. Introducing non-sequitur topics after a few rounds really seems to ‘confuse’ them.