I’m rather curious to see how the EU’s privacy laws are going to handle this.

(Original article is from Fortune, but Yahoo Finance doesn’t have a paywall)

  • Veraticus
    1 year ago

    Everyone here is busy describing the difference between memories and databases to me as if I don’t know what it is.

    Our memories are not a database. But they are like a database in one respect: both contain information. Our consciousness is informed by and can consult our memories.

    LLMs are not like memories, or like a database. They don’t contain information. An LLM is literally a mathematical formula: you put words in one end and words come out the other. The only difference between a rule like “always return the word Paris in response to any query” and what LLMs do is one of complexity, not kind. Whereas I think we can agree humans are something else entirely, right?
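    To make that concrete, here’s a toy sketch (purely illustrative, not a real model) of the two things being compared - both are just functions from input words to output words, and the second only adds weights and randomness:

    ```python
    import random

    def always_paris(prompt: str) -> str:
        # The trivial rule from above: ignore the input, return one word.
        return "Paris"

    def toy_llm(prompt: str) -> str:
        # A vastly simplified stand-in for an LLM: map the input to a
        # probability distribution over candidate next words and sample.
        # Real models do this with billions of learned weights, but the
        # words-in, words-out shape is the same.
        vocab = ["Paris", "London", "Berlin"]
        weights = [0.9, 0.05, 0.05] if "France" in prompt else [0.2, 0.4, 0.4]
        return random.choices(vocab, weights=weights)[0]

    print(always_paris("What is the capital of France?"))  # Paris
    print(toy_llm("What is the capital of France?"))       # Paris (usually)
    ```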

    The fact that they use neural networks does not make them similar to human cognition, consciousness, or memory. (Separately, artificial neural networks, while inspired by biological neural networks, are categorically different from them, and there are no “emergent properties” in such a network that make it anything other than a sophisticated way of doing math.)

    So… yeah, LLMs are nothing like us, unless you believe humans are deterministic machines with no inner thought processes and no consciousness.

    • @[email protected]
      1 year ago

      Ok, so here’s the misunderstanding: neural networks absolutely, 100% store information. You can download Alpaca right now and ask it about Paris, or llamas, or who invented the concept of the neural network. It will give you factual information embedded in its weights; there’s nowhere else the information could be.
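      For instance, here’s a rough sketch of doing exactly that, assuming the Hugging Face transformers library (the checkpoint name is only an example of an Alpaca-style model; substitute whatever you actually downloaded) - nothing is consulted except the weight files on disk:

      ```python
      from transformers import AutoTokenizer, AutoModelForCausalLM

      # Example identifier for an Alpaca-style checkpoint; any locally
      # downloaded causal language model would do.
      model_name = "chavinlo/alpaca-native"

      tokenizer = AutoTokenizer.from_pretrained(model_name)
      model = AutoModelForCausalLM.from_pretrained(model_name)

      prompt = "What is the capital of France?"
      inputs = tokenizer(prompt, return_tensors="pt")

      # No internet access, no database lookup: whatever comes back can
      # only have been encoded in the model's weights.
      output_ids = model.generate(**inputs, max_new_tokens=20)
      print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
      ```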

      People probably think you don’t understand databases because it seems self-evident that neural networks contain information - if they didn’t, where would the information come from?

      There’s no magic involved; you can prove this mathematically. We know how it works, and we can visualize the information - we can point and say “this number right here is part of how the model stores where the Eiffel Tower is.” It’s too complex for us to work with comfortably right now, but we understand what’s going on.
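      You can poke at this directly - a quick sketch (reusing the model object loaded above) showing that all of that information is just a pile of ordinary numbers we can enumerate and inspect:

      ```python
      # Every fact the model can recite is encoded somewhere in these tensors.
      total = sum(p.numel() for p in model.parameters())
      print(f"{total:,} parameters")  # billions of plain floating-point numbers

      # Pick one weight tensor and look at the raw values.
      name, param = next(iter(model.named_parameters()))
      print(name, tuple(param.shape))
      print(param.flatten()[:5])  # just numbers; interpretability research
                                  # tries to map facts like "where the Eiffel
                                  # Tower is" onto specific weights like these
      ```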

      Brains store information the same way, except they’re much more complex. Ultimately, the connections between neurons are where the data is stored - there are more layers to it, but it’s the same idea.

      And emergent properties absolutely are a thing in math. No sentience or understanding required, and nothing necessarily to do with life or physics at all - complexity is where emergent properties come from.
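      A concrete, math-only example of that is Conway’s Game of Life - a minimal sketch is below, where a “glider” that crawls across the grid emerges from trivial local rules that say nothing about movement:

      ```python
      from collections import Counter

      def step(live):
          # For every cell adjacent to a live cell, count its live neighbours.
          counts = Counter((x + dx, y + dy)
                           for (x, y) in live
                           for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                           if (dx, dy) != (0, 0))
          # Conway's rules: alive next step with exactly 3 live neighbours,
          # or with 2 live neighbours if already alive.
          return {cell for cell, n in counts.items()
                  if n == 3 or (n == 2 and cell in live)}

      # A "glider": five cells whose interactions make the whole pattern
      # travel diagonally - a property of none of the rules individually,
      # only of their combination.
      glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}

      cells = glider
      for _ in range(4):
          cells = step(cells)

      # After four steps the same five-cell shape reappears, shifted by (1, 1).
      print(sorted(cells) == sorted((x + 1, y + 1) for (x, y) in glider))  # True
      ```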

      • Veraticus
        1 year ago

        You are correct that there is a misunderstanding here. But it is your misunderstanding of neural networks, not mine of memory.

        An LLM is a mathematical model. It does not know any information about Paris, not in the way humans do or even the way Wikipedia does. It knows which words appear in response to questions about Paris. That is not the same thing as knowing anything about Paris. It does not know what Paris is.

        I agree with you that the word “Paris” exists in it. But I disagree that that information is relevant in any human sense.

        You have apparently been misled into believing that a word-generation tool contains any information at all other than word weights. Every word it contains is exactly as meaningless to it as every other word.

        Brains do not store data in this way. Firstly, artificial neural networks are mathematical approximations of neurons; they are not neurons and do not have the same properties as neurons, even in aggregate. Secondly, brains contain thoughts, memories, and consciousness. Even if those are representable in a vector space similar to that of an LLM’s neural network (a debatable conjecture), the contents of that vector space are as different as newts are from the color purple.

        I encourage you to do some more research on this before continuing to discuss it. Ask ChatGPT itself if its neural networks are like human brains; it will tell you categorically no. Just remember it also doesn’t know what it’s talking about. It is reporting word weights from its corpus and is no substitute for actual thought and research.