Some argue that bots should be entitled to ingest any content they see, because people can.

    • RickRussell_CA (OP)

      There is a so-called “hard problem of consciousness”, although I take exception with calling it a problem.

      The general problem is that you can’t really prove that you have subjective experience to others, and neither can you determine if others have it, or whether they merely act like they have it.

      But, a somewhat obvious difference between AIs and humans is that AIs will never give you an answer that is not statistically derivable from their training dataset. You can give a human a book on a topic, and ask them about the topic, and they can give you answers that seem to be “their own conclusions”, ones that are not explicitly from the book. Whether this is because humans have randomness injected into their reasoning, or imperfect reasoning, or some genuine animus of “free will” and consciousness, we cannot rightly say. But it is a consistent difference between humans and AIs.

      The Monty Hall problem discussed in the article – in which AIs are asked to answer the Monty Hall problem, but are given explicit information that violates its assumptions – is a good example of something a human will tend to get right, through creativity, while an AI will tend to get wrong, due to statistical regression to the mean.
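
      For reference, here is a minimal simulation of the classic setup (a sketch; the article’s modified wording isn’t reproduced here). It shows where the statistical prior comes from: a model anchored to this pattern will keep producing the 2/3-switch answer even when the stated assumptions no longer hold:

      ```python
      import random

      def monty_hall_trial(switch: bool) -> bool:
          """One round of the classic game; True means the player wins the car."""
          doors = [0, 1, 2]
          car = random.choice(doors)
          pick = random.choice(doors)
          # The host always opens a goat door that isn't the player's pick.
          opened = random.choice([d for d in doors if d != pick and d != car])
          if switch:
              pick = next(d for d in doors if d != pick and d != opened)
          return pick == car

      trials = 100_000
      stay = sum(monty_hall_trial(False) for _ in range(trials)) / trials
      swap = sum(monty_hall_trial(True) for _ in range(trials)) / trials
      print(f"stay: {stay:.3f}  switch: {swap:.3f}")  # ~0.333 vs ~0.667
      ```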

      • Gormadt

        Don’t we humans derive our answers from our own training dataset: our lives?

        If you had a human with no “training dataset”, they would have only just been born. But even then you run into an issue, as it’s been shown that fetuses respond to audio stimulation while they’re in the womb.

        The question of consciousness is a really hard one, and we may never have an answer that everyone agrees on.

        Right now we’re in the infant days of AI.

        • RickRussell_CA (OP)

          To be clear, I don’t think the fundamental issue is whether humans have a training dataset. We do. And it includes copyrighted work. It also includes our unique sensory perceptions and lots of stuff that is definitely NOT the result of someone else’s work. I don’t think anyone would dispute that copyrighted text, pictures, sounds are integrated into human consciousness.

          The question is whether it is ethical, and should it be legal, to feed copyrighted works into an AI training dataset and use that AI to produce material that replaces, displaces, or competes with the copyrighted work used to train it. Should it be legal to distribute or publish that AI-produced material at all if the copyright holder objects to the use of their work in an AI training dataset? (I concede that these may be two separate, but closely related, questions.)

          • @AEsheron

            What level of abstraction is enough? Training doesn’t store or reference the work at all; it derives a set of weights from it automatically. But what if you had a legion of interns manually deriving the weights and entering them in instead? Setting aside the impracticality: if I look at a picture, write down a long list of small adjustments (-2.343, -0.02, +5.327, and so on), and adjust the parameters of the algorithm without ever scanning the picture in, is that legal? If it is, does that mean the automation of that process is the illegal part?
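
            To make the intern analogy concrete, here is a toy sketch of what “deriving a set of weights” means (a hypothetical one-neuron model, not any real system’s training code). The picture is reduced to a handful of numbers, one training step produces exactly that kind of list of small adjustments, and only the adjustments touch the model:

            ```python
            import random

            # Hypothetical toy model: one linear neuron, y = sum(w_i * x_i).
            weights = [random.uniform(-1, 1) for _ in range(4)]

            def train_step(x, target, lr=0.1):
                """One gradient-descent step on squared error; returns the adjustments."""
                y = sum(w * xi for w, xi in zip(weights, x))
                error = y - target
                # dLoss/dw_i = 2 * error * x_i: the "long list of small adjustments"
                adjustments = [-lr * 2 * error * xi for xi in x]
                for i, dw in enumerate(adjustments):
                    weights[i] += dw
                return adjustments

            # A "picture" reduced to four pixel values, with a target label of 1.0.
            print(train_step([0.2, 0.7, 0.1, 0.9], target=1.0))
            # e.g. [-0.012, -0.043, -0.006, -0.055]: the image itself is never
            # stored, only these deltas are applied to the parameters.
            ```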

            • RickRussell_CA (OP)

              Right now our understanding of derivative works is mostly subjective. We look at the famous Obama “HOPE” image, and the connection to the original news photograph from which it was derived seems quite clear. We know it’s derivative because it looks derivative. And we know it’s a violation because the person who took the news photograph says that they never cleared the photo for re-use by the artist (and indeed, demanded and won compensation for that reason).

              Should AI training be required to work from legally acquired data, and what level of abstraction from the source data constitutes freedom from derivative work? Is it purely a matter of the output being “different enough” from the input, or do we need to draw a line in the training data, or…?

              All good questions.

          • Gormadt

            We were talking about consciousness, not AI-created works and copyright, but I do have some opinions on that.

            I think that if an artist doesn’t want their works included in an AI dataset then it is their right to say no.

            And yeah, all the extra data that we humans fundamentally acquire in life does change everything we make.

            • RickRussell_CA (OP)

              And yeah, all the extra data that we humans fundamentally acquire in life does change everything we make.

              I’d argue that it’s the crucial difference. People on this thread are arguing like humans never make original observations, or observe anything new, or draw new conclusions or interpretations of new phenomena, so everything humans make must be derived from past creations.

              Not only is that clearly wrong, but it also fails the test of infinite regress. If humans can only create from the work of other humans, how was anything ever created? It’s a risible suggestion.

      • @[email protected]

        Why don’t you like calling it a “problem”? That just means it’s something we have questions about, not that it’s problematic. It’s like a math problem: a question we don’t have an answer for.

          • RickRussell_CA (OP)

          I hesitate to call it a problem because, by the way it’s defined, subjective experience is innately personal.

          I’ve gotten into this question with others, and when I began to propose thought problems (like, what if we could replicate sensory inputs? If you saw/heard/felt everything the same as someone else, would you have the same subjective conscious experience?), I’d get pushback: “that’s not subjective experience, subjective experience is part of the MIND, you can’t create it or observe it or measure it…”.

          When push comes to shove, people define consciousness or subjective experience as that aspect of experience that CANNOT be shown or demonstrated to others. It’s baked into the definition. As soon as you venture into what can be shown or demonstrated, you’re out of bounds.

          So it’s not a “problem”, as such. It’s a limitation of our ability to self-observe the operating state of our own minds. An interesting question, perhaps, but not a problem. Just a feature of the system.

          • @[email protected]

            That’s just ridiculous imo, it seems like they’re afraid of the idea that maybe we’re just automata with a different set of random inputs and flaws. And to me, that’s the kind of idea that the problem of consciousness is trying to explore.

            But if you just say, “no, that’s off limits,” that’s not particularly helpful. Science can give us a lot of insight into how thoughts work, how people react vs other organisms to the same stimuli, etc. It can be studied, and we can use the results of those studies to reason about the nature of consciousness. We can categorize life by their sophistication, and we can make inferences about the experiences each category of life have.

            So I think it’s absolutely a problem that can and should be studied and reasoned about. Though I can see how that idea can be uncomfortable.

            • RickRussell_CA (OP)

              Well, it’s a “problem” for philosophers. I don’t think it’s a “problem” for neurology or hard science, that’s the only point I was trying to make.

              • @[email protected]

                Ah, I thought you were talking about philosophy.

                But it’s still relevant for neuroscience since we need to understand how brain chemistry impacts the mind to create effective treatments. So not knowing how the mind works is a problem that may limit our ability to solve problems. But there’s plenty we can and have done without understanding where consciousness comes from.

                • RickRussell_CA (OP)

                  understanding where consciousness comes from

                  Again, to be clear, I don’t think this is a fundamentally scientific question.

                  If you show a philosopher how a rose activates the retina and sends signals to the brain, you’ll get a response like, “sure, but when I say the subjective experience of a rose, I mean what the mind does when it experiences a rose”…

                  If you show a philosopher how the retinal signals activate the optical processing capabilities of the brain, you’ll get “sure, but when I say the subjective experience of a rose, I mean what the mind does when it experiences a rose”…

                  If you show a philosopher how the appearance of a rose consistently activates certain clusters of neurons and glial cells that are always activated when someone sees a rose, you’ll get a response like, “sure, but when I say the subjective experience of a rose, I mean what the mind does when it experiences a rose”…

                  Show the philosopher that the same region of the brain is excited when the person smells a rose or reads the word “rose”, and they’ll say, “sure, but when I say the subjective experience of a rose, I mean what the mind does when it experiences a rose”…

                  To the philosopher, they have posed a question about “what it’s like to experience a rose”, and I suggest that NO answer will satisfy them, because they’re not really asking a scientific question. They’re looking for, as the SEP puts it, an “intuitively satisfying way how phenomenal or ‘what it’s like’ consciousness might arise from physical or neural processes in the brain”. But science isn’t under any obligation to provide an intuitive, easy-to-understand answer. The assemblage of brain and nerve functions that fire when a living being experiences a phenomenon is the answer.

                  • @[email protected]

                    Current science does a pretty good job of explaining the what, and sometimes the how. But it doesn’t do a good job of explaining the “why”, or in other words, why do we like certain experiences and dislike others?

                    There’s also a gap in explaining how much our consciousness contributes to observable behavior. Like why do some people feel stress when others might feel excitement by the same stimuli? How much of behavior is explained by chemical processes in the brain, and how much relies on “personality”?

                    Some of that is closer to science, and some is closer to philosophy, but I do think science has a role to play in helping to guide philosophical thought.

    • @[email protected]

      Well what an interesting question.

      Let’s look at the definitions in Wikipedia:

      Sentience is the ability to experience feelings and sensations.

      Experience refers to conscious events in general […].

      Feelings are subjective self-contained phenomenal experiences.

      Alright, let’s do a thought experiment under the assumptions that:

      • experience refers to the ability to retain information and apply it in some regard
      • phenomenal experiences can be described by a combination of sensory data in some fashion
      • performance is not relevant; to establish theoretical possibility, we only need to assume that, with infinite time and infinite resources, the simulation of sentience through AI is possible

      An AI is trained by being shown what information goes in and what should come out; from that it infers outputs for new patterns of information, adjusting by “how wrong it was” to approximate the correction. Every feeling in our body is either chemical or physical, so for simplicity’s sake we can treat it as something that can be measured and simulated through data input.

      Let’s also say for our experiment that the appropriate output is to describe the feeling.

      Now I think, knowing this, and knowing how well different AIs can already comment on, summarize, or perform other transformative tasks on large texts that require them to interpret data, that such a system should be able to “express” what it feels. We can conclude this because everything needed to simulate feeling or sensation can be described using different inputs of data points.
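
      As a toy sketch of the setup above (the sensor values, labels, and model are hypothetical stand-ins, nothing like a real model of feeling): measurable readings go in, a described feeling comes out, and the model adjusts itself by “how wrong it was”:

      ```python
      import math
      import random

      # Hypothetical sensor readings -> feeling label (1 = "pain", 0 = "comfort").
      data = [([0.9, 0.8], 1), ([0.8, 0.9], 1), ([0.1, 0.2], 0), ([0.2, 0.1], 0)]
      w, b = [random.uniform(-1, 1), random.uniform(-1, 1)], 0.0

      def predict(x):
          """Sigmoid of a weighted sum: the model's confidence that x is 'pain'."""
          return 1 / (1 + math.exp(-(w[0] * x[0] + w[1] * x[1] + b)))

      for _ in range(1000):                  # training loop
          for x, target in data:
              error = predict(x) - target    # "how wrong it was"
              for i in range(2):             # adjust toward the correction
                  w[i] -= 0.5 * error * x[i]
              b -= 0.5 * error

      print("pain" if predict([0.85, 0.9]) > 0.5 else "comfort")  # -> pain
      ```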

      This brings me to my second, logical conclusion: scientifically speaking, there is nothing about sentience that we wouldn’t already be able to simulate (in light of our assumptions).

      Bonus: my little experiment is only designed to show theoretical possibility, and we’d need some proper statistical calculations to know whether it is practical within a realistic timeframe and with a limited amount of resources. But there’s nothing saying it can’t be; I guess we have to wait for someone to try it to be sure.

        • @tomi000

          Interesting, please tell me how ‘parroting back a convincing puree of the model it was trained on’ is in any way different from what humans are doing.

          • @[email protected]

            And that is the point.

            It sounds stupidly simple, but the whole idea of AI was to learn and solve problems more like a human would: by learning how to solve similar problems, and transferring that knowledge to a new problem.

            Technically there’s an argument that our brain is nothing more than an AI with some special features (chemicals for feelings, reflexes, etc.). But it’s good to remind ourselves that we are nothing inherently special. Although all of us are free to feel special, of course.
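
            A deliberately tiny sketch of that “transfer the knowledge” idea (the tasks and numbers are made up): weights learned on one problem become the starting point for a related one, so the new problem needs far less training:

            ```python
            import random

            def train(weight, data, lr=0.1, epochs=200):
                """Fit y = weight * x by gradient descent on squared error."""
                for _ in range(epochs):
                    for x, y in data:
                        weight -= lr * (weight * x - y) * x
                return weight

            # Task A: learn doubling from plenty of examples.
            task_a = [(x, 2 * x) for x in (0.1, 0.5, 1.0, 1.5, 2.0)]
            pretrained = train(random.uniform(-1, 1), task_a)

            # Task B: a related rule (y = 2.2 * x) with a single example.
            # Starting from the pretrained weight, a little extra training suffices.
            fine_tuned = train(pretrained, [(1.0, 2.2)], epochs=50)
            print(round(pretrained, 2), round(fine_tuned, 2))  # ~2.0 then ~2.2
            ```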

            • RickRussell_CA (OP)

              But we make the laws, and we have the privilege of making them pro-human. It may be important in the larger philosophical sense to meditate on the difference between AIs and human intelligence, but in the immediate term we have the problem that some people want AIs to be able to freely ingest and repeat what humans spent a lot of time collecting and authoring in copyrighted books, often without even paying for a copy of the book that was used to train the AI.

              As humans, we can write the law to be pro-human and facilitate human creativity.