Further, participants in most cases preferred ChatGPT’s take on the matter at hand. That was based on five factors: whether the response understood the speaker, showed empathy, was appropriate for the therapy setting, was relevant for various cultural backgrounds, and was something a good therapist would say.
A bit disingenuous not to mention this part:
Patients explaining they liked what they heard - not whether it was correct or relevant to the cause. There isn't even a pipeline for escalation, because AIs don't think.
Exactly. AI chatbots also cannot empathize, since they have no self-awareness.
But it can give the illusion of empathy, which is far more important.
You can’t say “Exactly” when you tl;dr’d and removed one of the most important parts of the article.
Your human summary was literally worse than AI 🤦
I’m getting downvoted, which makes me suspect people think I’m cheerleading for AI. I’m not. I’m sure it sucks compared to a therapist. I’m just saying that the tl;dr also sucked.