I think AI is neat.

  • @Redacted
    8 months ago

    Understanding is a human concept, so attributing it to an algorithm is strange.

    It can be done by taking a very shallow definition of the word, but then we’re just entering a debate about semantics.

      • @Redacted
        8 months ago

        Yes, sorry, I probably shouldn’t have used the word “human”. It’s a concept that we apply to living things that experience the world.

        Animals certainly understand things, but it’s a sliding scale where we use human understanding as the benchmark.

        My point stands, though: to attribute it to an algorithm is strange.

          • @Redacted
            8 months ago

            Well it was a fun ruse while it lasted.

              • @Redacted
                8 months ago

                Yes you do, unless you have a really reductionist view of the word “experience”.

                Besides, that article doesn’t really support your statement, it just shows that a neural network can link words to pictures, which we know.

                • @[email protected]
                  8 months ago

                  Help me understand what you mean by “reductionism”. What parts do you believe I’m simplifying or overlooking? Also, could you explain why you think being alive is essential for understanding? Shifting the goalposts makes it difficult to discuss this productively. I’ve also provided evidence for my claims, while I haven’t seen any from you. If we focus on sharing evidence to clarify our arguments, we can both avoid bad faith maneuvering.

                  Besides, that article doesn’t really support your statement, it just shows that a neural network can link words to pictures, which we know.

                  It does, by showing it can learn associations with just limited time from a human’s perspective, it clearly experienced the world.

                  • @Redacted
                    8 months ago

                    That last sentence you wrote exemplifies the reductionism I mentioned:

                    It does, by showing it can learn associations with just limited time from a human’s perspective, it clearly experienced the world.

                    Nope, that does not mean it experienced the world; that’s the reductionist view. It’s reductionist because you said it learnt from a human perspective, which it didn’t. A human’s perspective is much more than a camera and a microphone in a cot, and experience is much more than being able to link words to pictures.

                    In general, you (and others with a similar view) reduce the complexity of words used to describe consciousness, like “understanding”, “experience” and “perspective”, so they no longer carry the weight they were intended to have. At that point you can attribute them to neural networks, which are just categorisation algorithms.

                    I don’t think being alive is necessarily essential for understanding; I just can’t think of any examples of non-living things that understand at present. I’d posit that there is something more we are yet to discover about consciousness and the inner workings of living brains that cannot yet be fully captured in the mathematics of neural networks. Otherwise we’d have already solved the hard problem of consciousness.

                    I’m not trying to shift the goalposts; it’s just difficult to convey this concisely without writing a wall of text. Neither of the links you provided is actual evidence for your view, because this isn’t really a discussion that evidence can settle. It’s really a philosophical one about the nature of understanding.