Original Reddit post

LLMs are trained on human-made data, so logically they "think" similarly to human beings. However, there are various cases where a human seems to think completely differently than AI does. What examples have you experienced in which the AI's way of thinking was just completely different from that of a human (or the other way around)? submitted by /u/say-what-floris

Originally posted by u/say-what-floris on r/ArtificialInteligence

  • discocactus
    3 hours ago

    LLMs don’t think at all, and if you think they do you don’t understand them whatsoever.