The Picard Maneuver to [email protected] • 3 months ago
You probably shouldn't trust the info anyway.
[image] • 78 comments • 803 up / 40 down
Farid • 3 months ago
Isn't it the opposite? At least with ChatGPT specifically, it used to be super uptight ("as an LLM trained by…" blah-blah) but now it will mostly do anything. Especially if you have a custom instruction to not nag you about "moral implications".
@[email protected] • 3 months ago
Yeah, in my experience ChatGPT is much more willing to go along with most (reasonable) prompts now.