As an LLM expert, I think you gotta be careful when you use words like “judgement”. Of course, in your domain you’re extremely aware of what an LLM is (and, more importantly, isn’t). I understand you mean “judgement” as shorthand for the process that arrives at some output. Some people might misread it as a human-like application of an intrinsic understanding of concepts.
ChatGPT does such a convincing spoof of a sentient agent that people seem to be extremely resistant, at a lizard-brain level, to the fact that it isn’t one. Even when they KNOW it isn’t, they can’t quite drop the baggage that comes with it.
Not saying YOU are, just that your voice carries weight.
No, you’re right. I’m loose with language and I’m not implying the models are conscious or sentient, only that the text they produce can be biased by various internal factors.
Most commercial/proprietary models have, loosely speaking, two internal governing “agents” built in (see the sketch after this list):
Coherence Agent: Ensures the output is grammatically sound and factually consistent.
Ethics Agent: Ensures the output isn’t harmful and/or steps in to prevent the model from engaging in inappropriate or illegal activity.
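(Purely as illustration, here’s a minimal Python sketch of that two-layer idea. The names base_model, coherence_pass and ethics_pass are invented for this example; in real products the “coherence” part is mostly baked into the model weights by training, and the “ethics” part is a mix of alignment training and separate moderation filters rather than a literal agent.)

```python
# Hypothetical sketch only; function names and logic are illustrative,
# not how any real product is actually wired.

def base_model(prompt: str) -> str:
    # Stand-in for the raw language model: returns some draft completion.
    return f"Draft completion for: {prompt}"

def coherence_pass(draft: str) -> str:
    # Placeholder for grammar/consistency cleanup (in practice largely
    # learned during training rather than applied as a separate step).
    return draft.strip()

def ethics_pass(draft: str, blocklist: set[str]) -> str:
    # Placeholder for a policy/moderation filter that refuses disallowed content.
    if any(term in draft.lower() for term in blocklist):
        return "I can't help with that."
    return draft

def respond(prompt: str) -> str:
    return ethics_pass(coherence_pass(base_model(prompt)), blocklist={"harmful"})
```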
Regardless, a judgement can be a statement that’s similar to an opinion, even though an LLM doesn’t possess any opinions, e.g.:
“What is your favorite color?”
A) Blue {95.7%, statistical mean}
“Why blue?”
A) “Because it is the color of the sky” {∆%}.
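(To make the “statistical” part concrete, here’s a toy Python sketch; the colors, logits and resulting percentages are all invented, as is the 95.7% above.)

```python
import math

# Toy next-answer distribution over a handful of colors; logits are made up.
logits = {"Blue": 6.2, "Red": 2.1, "Green": 1.7, "Purple": 0.9}

# Softmax: convert the logits into probabilities.
total = sum(math.exp(v) for v in logits.values())
probs = {color: math.exp(v) / total for color, v in logits.items()}

for color, p in sorted(probs.items(), key=lambda kv: -kv[1]):
    print(f"{color}: {p:.1%}")

# "Blue" wins not because the model holds an opinion, but because it is the
# highest-probability continuation given the training data.
```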
If the model is coded, for instance, not to talk about the color blue, it’ll say something like:
“I believe all colors of the rainbow are valid and it is up to each individual to decide their favorite color”.
That’s a bit of a non-answer. It avoids bias and opinionated speech, but at the same time, that ethics mandate from the operator has now rendered that particular model incapable of forming “judgements” about a bit of text (say, a favorite color).
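(Continuing the toy sketch above: a hypothetical operator rule that intercepts any draft touching the banned topic and substitutes the deflection. The banned-topic list and deflection text are of course invented.)

```python
BANNED_TOPICS = {"blue"}
DEFLECTION = ("I believe all colors of the rainbow are valid and it is up "
              "to each individual to decide their favorite color.")

def apply_policy(candidate_answer: str) -> str:
    # If the draft touches a banned topic, the operator-mandated filter
    # replaces it wholesale; the model's "preferred" answer is never expressed.
    if any(topic in candidate_answer.lower() for topic in BANNED_TOPICS):
        return DEFLECTION
    return candidate_answer

print(apply_policy("Blue, because it is the color of the sky."))
# -> prints the deflection: the "judgement" is blocked before it reaches the user.
```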