It’s clear that companies are currently unable to make chatbots like ChatGPT comply with EU law when processing data about individuals. If a system cannot produce accurate and transparent results, it cannot be used to generate data about individuals. The technology has to follow the legal requirements, not the other way around.

  • @CarbonatedPastaSauce

    You’re illustrating the issue so many people have with this technology. Without a fundamental understanding of how it works, people will attempt to use it in ways it shouldn’t be used, and won’t understand why it isn’t giving them correct information. It simply doesn’t have the ability to do anything but put words in an order that statistically resembles how a human might answer the question.

    LLMs don’t know anything. They can’t tell fact from fiction (and are incapable of even trying), and don’t understand concepts such as verifying info when requested. That’s the problem: they don’t ‘understand’ anything, including what they are telling you. But they do spit out words in a statistically probable order, even if the result is complete bullshit. They do it so well that they can fool most people into thinking the computer actually knows what it’s telling them.
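
    To make that concrete, here is a minimal sketch in Python of the kind of “statistically probable next word” process being described. The word table and its probabilities are entirely made up for illustration; a real LLM works over billions of learned parameters and tokens rather than a hand-written bigram table, but the point is the same: the code only picks likely continuations, and nothing in it checks whether the output is true.

    ```python
    import random

    # Hypothetical bigram table with made-up probabilities, purely for illustration.
    # Each word maps to the words that tend to follow it and how often.
    NEXT_WORD_PROBS = {
        "the":     [("capital", 0.6), ("answer", 0.4)],
        "capital": [("of", 1.0)],
        "of":      [("france", 0.5), ("mars", 0.5)],
        "france":  [("is", 1.0)],
        "mars":    [("is", 1.0)],
        "is":      [("paris.", 0.5), ("olympus.", 0.5)],
    }

    def generate(start: str, max_words: int = 8) -> str:
        """Emit a fluent-looking sequence by sampling statistically likely next words."""
        words = [start]
        for _ in range(max_words):
            options = NEXT_WORD_PROBS.get(words[-1])
            if not options:
                break  # no known continuation; stop
            choices, weights = zip(*options)
            # The only criterion is probability; truth never enters the loop.
            words.append(random.choices(choices, weights=weights)[0])
        return " ".join(words)

    print(generate("the"))  # may print "the capital of mars is paris." - fluent, confident, false
    ```

    Run it a few times and it produces sentences that read naturally, some of which happen to be wrong; the program has no way to notice the difference, which is the commenter’s point scaled down to a couple of dozen lines.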