• flere-imsaho · 18 points · 6 months ago

    “i guess it comes down to a philosophical question”

    no, it doesn’t, and it’s not a philosophical question (nor a question for philosophy).

    the software simply has no cognitive capabilities.

    • @[email protected]
      link
      fedilink
      English
      -56 months ago

      I’m not sure I agree, but that brings us back to my second question:

      What’s the effective difference?

      • flere-imsaho · 15 points · 6 months ago

        (…) perception, attention, thought, imagination, intelligence, comprehension, the formation of knowledge, memory and working memory, judgment and evaluation, reasoning and computation, problem-solving and decision-making (…)

      • @braxy29 · -8 points · 6 months ago

        don’t know why you got downvoted. an LLM is essentially a Chinese room, and whether such a room “knows” anything is still an open question.

        • @techMayhem · 10 points · 6 months ago

          Someone in the Chinese room would not know anything about their input or output. Sure, you memorized that a certain set of symbols means your output should contain another set of symbols, but what do you actually “know” about those symbols?

          You have no idea what the message is about. Is it a greeting? A recipe for some pasta? Instructions to build a bomb? It could be anything.
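          As a toy sketch of that point (Python; the two-entry rulebook and its contents are invented purely for illustration): the operator maps one symbol string to another, and nothing in the procedure ever touches meaning.

          ```python
          # Toy "Chinese room": pure symbol shuffling, no semantics anywhere.
          # The rulebook entries below are invented for illustration only.
          RULEBOOK = {
              "你好吗": "我很好",              # a greeting exchange (opaque to the operator)
              "怎么做面条": "煮水，下面，加盐",  # a noodle recipe (equally opaque)
          }

          def operate(symbols: str) -> str:
              """Return whatever output the rulebook dictates for the input.

              Nothing here models what the symbols are about: the function
              cannot tell a greeting from a recipe from bomb instructions.
              It only matches shapes to shapes.
              """
              return RULEBOOK.get(symbols, "？")

          print(operate("你好吗"))  # the room "answers" without understanding a word
          ```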

        • @[email protected]
          link
          fedilink
          English
          36 months ago

          I’m pretty well steeped in this question, from both a technological and philosophical perspective.

          And it’s funny to see all of these posters upvoting comments that expose a fundamental lack of understanding of how LLMs and AI work, while acting like the book is already closed on the answer.