(it was the free online version, using a MS account)
This AI doesn't know stuff, and then proceeds to go "why do you think this is?" or "could you find other information, so that I can help you?" I'm not sure that conveys how bad it sounds. Maybe it's the whole exchange that has that condescending tone.
I've tried a lot of LLMs, and I've never felt so aggravated by the way one of them talks.
How did they program that stuff to make it so bad… did they try to give it a personality? And if MS rolls that out in all its W11 apps, I bet people will start destroying their computers out of frustration… maybe that's their plan all along.
Because it's copying the writing of its source material.
If the area you're working in has lots and lots of toxic, condescending text material online, it cannot (easily) reproduce the material without the condescending style. Think of it as mixed soup. The AI does not "understand" soup. It doesn't understand "potatoes", "veggies" and "salt". If all the AI got to train on was "salty soup", all it can make is "salty soup". It cannot make the soup the normal way and use less salt.
It's not intentional, but imo you should still feel as insulted as if it were. It's their product they're unleashing on the world, after all.
So you think MS's dataset is worse than ChatGPT's, or Google's? Aren't they the ones who bought access to Reddit to feed their AI? I still think there's some added intention from MS… I don't recall other vanilla LLMs saying "that's interesting" or making other value judgments. I think they tried to program Copilot's answers to be more human-like, for some reason.