It’s clear that companies are currently unable to make chatbots like ChatGPT comply with EU law when processing data about individuals. If a system cannot produce accurate and transparent results, it cannot be used to generate data about individuals. The technology has to follow the legal requirements, not the other way around.

  • @[email protected]
    101 · 8 months ago

    ChatGPT is not an information repository.

    ChatGPT is not an information repository.

    ChatGPT is not an information repository.

    The correct answer to this problem is not “we can’t correct it”; it is “this class of task is completely out of scope for ChatGPT, and we will do everything we can to make sure users understand that”. Unfortunately, OpenAI knows damn well this is how the public perceives and uses its product and seems happy to let this misconception persist.

    We do need laws to curb this, but it’s really more a marketing issue than a technological issue. The underlying technology is amazing; the applications built around it are mostly garbage. What we have here is a hype trainwreck.

    • @gedaliyahOPM
      -30 · 8 months ago

      Yet, LLMs are trained on data - an information repository. They are capable of accessing and recalling the contents of that information repository, and relaying information from that repository to an end user. It may not be an information repository functionally, but it legally seems to have the capabilities to be classified as one. (I am neither a lawyer nor a programmer, and I am not in the EU.)

      The software breaks the law, and the people who built it knew that this was likely the case. It was developed as a research project, which has very different legal requirements from a consumer product. They might not outright ban the software, but they might issue some hefty fines, etc. Banning a product is not the only recourse of the courts.

      • @CarbonatedPastaSauce
        40 · 8 months ago

        They are capable of accessing and recalling the contents of that information repository, and relaying information from that repository to an end user.

        This is not correct based on my understanding of LLMs, but I am certainly not an expert. As I understand it, it’s basically a statistics exercise in how they determine what order to put words into. They don’t ‘look stuff up’ in their training data. They probably don’t even have access to their training data once the model is complete. These models are trained on terabytes of data but are small enough to fit in memory, so it’s impossible for them to still have access to all that. But it wouldn’t matter if they did, because that’s not how they work.
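        That “statistics exercise” can be sketched with a toy bigram model. This is a deliberate oversimplification (real LLMs use neural networks, not count tables), and every name in it is invented for illustration, but it shows the key idea: only statistics about the training text survive, not the text itself.

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """Count how often each word follows each other word."""
    stats = defaultdict(Counter)
    words = text.split()
    for cur, nxt in zip(words, words[1:]):
        stats[cur][nxt] += 1
    return stats  # only the counts survive; the text itself is discarded

def predict_next(stats, word):
    """Return the statistically most likely next word, if any."""
    followers = stats.get(word)
    return followers.most_common(1)[0][0] if followers else None

model = train_bigram("the cat sat on the mat and the cat slept")
print(predict_next(model, "the"))  # "cat" (follows "the" twice, vs. "mat" once)
```

        The model can “recall” that “cat” tends to follow “the” only because of those counts; it has no way to look the original sentence back up.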

        • @UnpluggedFridge
          2 · 8 months ago

          LLMs do not look stuff up (except when they have an API that allows them to), but I think OP’s point still stands. The “statistical next-token predictor” metaphor is useful, but in many regards that’s what text and language are. If you can understand that certain words are linked to certain other words, then you should be able to appreciate that certain groups of words can be associated in a way that is functionally the same as data.

          I have not memorized the pytorch documentation, but I can use what I understand about pytorch and other libraries to infer specific aspects of the library that I am not familiar with. Functionally, this is no different than if I accessed the documentation directly. If I communicate this information to others I have functioned as a data repository. The repository works on a more abstract and error-prone level, but it works nonetheless.

          Here is another very concrete example: LLMs know George Washington’s birthday. Not because they look up that information, but because of the learned associations between George Washington, birthday, and his actual date of birth.

          • @CarbonatedPastaSauce
            11 · 8 months ago

            I can use what I understand about pytorch and other libraries to infer specific aspects of the library that I am not familiar with.

            This is what LLMs can’t do, though. They can’t use what they understand because they don’t understand anything. They can’t infer, they can’t reason, they can’t evaluate or compare. They can spit out words that make it look like they did those things, but they didn’t.

            • @UnpluggedFridge
              0 · edited · 8 months ago

              Here I think you are behind on the literature. LLMs can infer and reason, and there are whole series of papers that evaluate LLMs for these properties the exact same way we evaluate humans. So if you can’t trust the metrics, then you cannot even assert that humans can reason and infer and understand.

              • @[email protected]
                2 · 8 months ago

                Without commenting on the content of the paper, the author really doesn’t seem credible; I prefer nuanced and careful evaluations. From the author’s company:

                Novelis

                https://novelis.io/

                “Innovation in action: Optimizing Business Efficiency. Our mission is to revolutionize business efficiency through Innovation. Leveraging AI, automation and process intelligence, we transform operations, supercharge performance, and unlock growth for businesses worldwide”

                Hm. 🤔

        • @gedaliyahOPM
          -22 · 8 months ago

          To me that sounds like a distinction without a difference. A JPEG is not an image, but a set of data that can be algorithmically processed and rendered as an image - which is why it can fit in a smaller space than a BMP. Despite the technical differences, a JPEG and a BMP are legally equivalent. If something is illegal in a BMP, it’s also illegal in a JPEG. The same laws apply to EVs and gas vehicles. The same laws apply to vinyl records and cassette tapes. The law does not care about the mechanism.*

          *for the most part

          • @CarbonatedPastaSauce
            31 · 8 months ago

            You’re illustrating the issue so many people have with this technology. Without a fundamental understanding of how it works, people will attempt to use it in ways it shouldn’t be used, and won’t understand why it isn’t giving them correct information. It simply doesn’t have the ability to do anything but put words in an order that statistically will resemble how a human might answer the question.

            LLMs don’t know anything. They can’t tell fact from fiction (and are incapable of even trying), and don’t understand concepts such as verifying info when requested. That’s the problem, they don’t ‘understand’ anything, including what they are telling you. But they do spit out words in a statistically probable order, even if the result is complete bullshit. They do it so well that they can fool most people into thinking the computer actually knows what it’s telling you.

          • @Grimy
            23 · 8 months ago

            Here’s a better metaphor because yours completely misses the mark when it comes to the difference between an LLM and an actual encyclopedia.

            A painter will spend years honing his craft by studying other paintings as well as photos and real life. If you ask him to paint you a house from memory and then try to build it from what he gives you, that just makes you an idiot; it doesn’t make him a bad architect.

            ChatGPT is not an encyclopedia, and anything it says that is remotely important to your personal or work life should be verified. They explicitly tell you it can and will give false responses.

          • @[email protected]
            18 · 8 months ago

            An LLM isn’t a compilation of its training data, any more than a cake is a pile of eggs, flour, and sugar.

          • @kurwa
            12 · 8 months ago

            JPEG is a lossy compression algorithm. The statistical probability of words occurring in sequence is not compression. That’s like saying generative images are compression, and they aren’t. It’s not producing blurry matches of images; it’s producing something “novel”. Otherwise, that would be considered overfitting the data.

      • @antidote101
        11 · 8 months ago

        They don’t recall information from a repository, the repository is translated into a set of topic based weighted probabilities of what words come next.

        Those probabilities are then used to reconstruct a best-guess at what words are next when generating strings of language.

        It’s not recall, it’s a form of “free” association, which is quite tightly bounded to the context, topic, and weightings of the training data.

        This is not precise and is more likely to create average answers and sentences, rather than precise ones.

        It’s not recall, it’s really convincing lies.

        “He seems to know what he’s talking about, and speaks with a certain kind of authority which makes sense and sounds knowledgeable”.
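        The weighted “free association” described above can be sketched as repeated sampling from learned next-word probabilities. The weights table below is entirely made up for illustration; real models compute such distributions with a neural network rather than storing them in a dict.

```python
import random

# Hypothetical learned weights: how strongly each word "associates"
# with possible next words (numbers invented for illustration).
weights = {
    "the": {"cat": 0.6, "dog": 0.3, "mat": 0.1},
    "cat": {"sat": 0.7, "slept": 0.3},
    "dog": {"barked": 1.0},
}

def generate(start, max_words=5, seed=None):
    """Repeatedly sample a next word from the weights; no recall involved."""
    rng = random.Random(seed)
    words = [start]
    while len(words) < max_words:
        followers = weights.get(words[-1])
        if not followers:
            break
        words.append(rng.choices(list(followers), list(followers.values()))[0])
    return " ".join(words)

print(generate("the", seed=0))
```

        Each run with a different seed can drift down a different path, which is why the output is plausible on average rather than precise.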

      • @[email protected]
        4 · edited · 8 months ago

        To clarify, I mean to say that users should not consider it an information repository, because it does not function as one, by design. Whether it should be classified as such under the law is another matter, one on which I do not have enough knowledge to comment. I do think OpenAI is presenting ChatGPT inappropriately, and I hope they will be held accountable for that.

        I’m sure in the future we will see true databases built on the same technology (and they will be awesome, if implemented properly). But that’s not what ChatGPT is (or, as far as I know, any other existing LLM-based application). Any information it is able to “recall” is almost a coincidence of how it was trained. You can sort of think of it like lossy compression. The LLM gets all of its information from its training set, but it is not designed to retain any specific information from the training set in full. In cases where it does, that usually means one of two things:

        1. The information appeared many times in the training set, enough to prevent it from being washed out.
        2. The model is far bigger than it should be, and is overfitted to its training data.
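        Point 1 can be illustrated with the same kind of toy statistical model discussed earlier in the thread (a deliberately tiny stand-in for an LLM, with invented example text): a phrase repeated many times dominates the learned statistics and comes back verbatim, while a one-off variant is washed out.

```python
from collections import Counter, defaultdict

def train(text):
    """Toy 'training': keep only next-word counts, not the text."""
    stats = defaultdict(Counter)
    words = text.split()
    for cur, nxt in zip(words, words[1:]):
        stats[cur][nxt] += 1
    return stats

def greedy_generate(stats, start, n=4):
    """Always pick the most common next word."""
    out = [start]
    for _ in range(n - 1):
        followers = stats.get(out[-1])
        if not followers:
            break
        out.append(followers.most_common(1)[0][0])
    return " ".join(out)

common = "paris is in france . " * 50  # appears many times
rare = "paris is in texas . "          # appears once
model = train(common + rare)
print(greedy_generate(model, "paris"))  # "paris is in france"
```

        The rare phrase was “seen” during training but left almost no trace in the statistics; only heavy repetition (or an oversized, overfitted model, as in point 2) preserves specifics in full.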