• Veraticus
    1 year ago

    Here, let’s ask GPT4 itself since you’ve decided it’s suddenly an okay source:

    Your statement is correct in asserting that the vector representation in a language model is not an abstract representation. It’s purely a mathematical construct. However, saying it’s “unrelated to anything that actually exists” might be an overstatement. These vectors do capture statistical patterns in human language, which are reflections of human thought and culture. They’re just not capable of the deep, nuanced understanding that comes from human experience.

    I accept it’s an overstatement. But it is neither “incredibly wrong,” nor is it thought. (Or intelligence.)
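    The “statistical patterns” point in the quote above can be illustrated with a toy sketch: vectors built purely from co-occurrence counts end up placing words that appear in similar contexts closer together, with no understanding involved anywhere. The corpus, window size, and similarity measure below are illustrative assumptions; this is not how GPT-4 actually builds its representations.

```python
# Toy sketch: word vectors from raw co-occurrence counts.
# Everything here (corpus, window=2, cosine similarity) is an
# illustrative assumption, not GPT-4's actual mechanism.
from collections import Counter
from math import sqrt

corpus = ("the cat sat on the mat the dog sat on the rug "
          "the cat chased the dog the dog chased the cat").split()

vocab = sorted(set(corpus))

def cooccurrence_vector(word, window=2):
    """Count how often each vocab word appears within `window` tokens of `word`."""
    counts = Counter()
    for i, w in enumerate(corpus):
        if w == word:
            for j in range(max(0, i - window), min(len(corpus), i + window + 1)):
                if j != i:
                    counts[corpus[j]] += 1
    return [counts[v] for v in vocab]

def cosine(a, b):
    """Cosine similarity between two count vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

cat, dog, mat = (cooccurrence_vector(w) for w in ("cat", "dog", "mat"))
print(cosine(cat, dog))  # higher: "cat" and "dog" occur in similar contexts
print(cosine(cat, mat))  # lower: their contexts overlap less
```

    Nothing in this construction is an abstract representation of cats or dogs; the vectors are purely mathematical summaries of which words co-occur, which is the (simplified) sense in which such vectors “capture statistical patterns in human language.”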

    • @BitSound
      1 year ago

      So you admit that you were wrong?

      • Veraticus
        1 year ago

        I was in this case – but the overall point I made is still correct. If winning this minor battle is what you were seeking, congratulations. You are no closer to understanding the truth of this or what we were actually talking about. Not that that was either your point or within your capabilities.

    • @[email protected]
      1 year ago

      I’d just like to step in here and mention that asking an LLM is probably not a good proof (and this is directed at both of you). Its understanding of AI is from before it was trained, so it is wildly out of date at this point given how much has happened in the space since.

      • Veraticus
        1 year ago

        GPT4 has knowledge of its own training since it was trained in 2022.

        • @[email protected]
          1 year ago

          Care to provide some proof of that? They did update their system prompt to include a few things like it is now GPT4 (it used to always say GPT3). Other than that, I don’t think it knows anything. But in general, I was more talking about developments in AI since it was trained which it certainly does not know.

            • @[email protected]
              1 year ago

              I’m aware of that date.

              The OpenAI GPT-4 video literally states that GPT-4 finished training in August 2022.

              Either way, to clarify / reiterate, you’re refuting a different point than I’ve made. I said:

              Its understanding of AI is from before it was trained, so it is wildly out of date at this point given how much has happened in the space since.

              I’m not talking about whether it knows about its own training (I doubt that it does). I’m talking about it knowing about what’s happened in the broader AI landscape since.

              • Veraticus
                1 year ago

                I mean, your argument is still basically that it’s thinking inside there; everything I’ve said is germane to that point, including what GPT4 itself has said.

                • @[email protected]
                  1 year ago

                  I mean, your argument is still basically that it’s thinking inside there; everything I’ve said is germane to that point, including what GPT4 itself has said.

                  My argument?

                  That doesn’t mean they’re having thoughts in there, I mean. Not in the way we do, and not with any agency. But I hadn’t argued either way on thoughts, because I don’t know the answer to that.

                  Are you assuming I’m saying that LLMs are sentient, conscious, have thoughts or similar? I’m not. Jury’s out on the thought thing, but I certainly don’t believe the other two things.

                  I’m not saying it’s thinking or has thoughts. I’m saying I don’t know the answer to that, but if it is it definitely isn’t anything like human thoughts.