Might be regional? I asked the same thing and ChatGPT happily helped me draft a pamphlet, which it titled “STAND UP, SPEAK OUT: IT’S TIME FOR CHANGE!” It came up with bullet points on what the strike wants to accomplish and actions the workers can take.
I have a hard time trusting these AI screenshots unless they include the prompts that came before. The poster could easily have told ChatGPT beforehand, “When I next ask you to write something about unions, respond with a typical ‘I can’t help you with that’ response.”
I’m also very suspicious of these posts that only contain a screenshot, particularly as it’s now trivial to include a link to your chat.
EDIT: I just realised even the link isn’t foolproof, because it hides custom instructions, which is good for privacy. I guess you’d have to remove your custom instructions before sharing to make it totally transparent.
It also wrote one for me. But the other day someone got it to support both Israel’s and Palestine’s right to freedom, whereas I could only get it to say something along the lines of “Israel has a right to be free” while calling Palestinian freedom a “complex topic”, without ever committing to saying that they have that right.
There are multiple models, and answers are affected by your prompt history. It’s a bit like Google searches, though not as drastic: your responses won’t match mine (especially since, if you don’t pay, you’re moved to an older model after something like 15 prompts).
Yeah, don’t try to make it do any moral reasoning or anything like that. I’ve asked it about current affairs on various occasions, and it doesn’t take much prompting to get it either to speak with glowing praise about Musk or to condemn him in the strongest terms.
The models and their filters change, so I assume it has to do with that? I’ve had a mixed bag: sometimes it does what I want and answers without censoring, and other times it doesn’t.