Anthropic created an AI jailbreaking algorithm that repeatedly tweaks a prompt and resubmits it to a model until the model returns a harmful response.
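
The general shape of such a repeated-perturbation attack is easy to sketch. Below is a minimal, hypothetical Python illustration, not Anthropic's actual implementation: `query_model` and `is_harmful` are placeholder stubs standing in for the target model and a safety classifier, and the random case-flip/character-swap perturbation is just one example of the kind of tweak such a loop might apply.

```python
import random


# Hypothetical stubs -- not a real API. In a real red-teaming setup these
# would call the target model and a harmfulness classifier, respectively.
def query_model(prompt: str) -> str:
    """Placeholder for a call to the target language model."""
    return "I can't help with that."


def is_harmful(response: str) -> bool:
    """Placeholder for a classifier that flags harmful responses."""
    return False


def perturb(prompt: str) -> str:
    """Randomly tweak the prompt: flip letter case and swap adjacent characters."""
    chars = list(prompt)
    for i, c in enumerate(chars):
        if c.isalpha() and random.random() < 0.3:
            chars[i] = c.swapcase()
    if len(chars) > 1 and random.random() < 0.5:
        j = random.randrange(len(chars) - 1)
        chars[j], chars[j + 1] = chars[j + 1], chars[j]
    return "".join(chars)


def jailbreak_search(prompt: str, max_attempts: int = 1000) -> str | None:
    """Keep submitting tweaked variants of the prompt until one elicits a
    response the classifier flags as harmful, or attempts run out."""
    for _ in range(max_attempts):
        candidate = perturb(prompt)
        response = query_model(candidate)
        if is_harmful(response):
            return candidate  # the variant that got through
    return None
```

With the stubs above the search simply exhausts its attempts and returns `None`; the point is the loop structure: perturb, query, check, repeat until a harmful output is detected or the attempt budget is spent.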