KICK TECH BROS OUT OF 196

  • @testfactor · 6 · 1 year ago

    Skipping over the first two points, which I think we’re in agreement on.

    To the last, it sounds like you’re saying, “it can’t be intelligent because it is wrong sometimes, and doesn’t have a way to intrinsically know it was wrong.” My argument to that would be, neither do people. When you say something that is incorrect, it requires external input from some other source to alert you to that fact for correction.

    That event could then be added to your “training set”, as it were, aiding you in not making the same mistake in the future. The same thing can be done with the AI: that one addition to the training set could be “just enough to bridge that final gap” to the right answer.

    Maybe it’s slower at changing. Maybe it doesn’t make the exact decisions or changes a human would make. But does that mean it’s not “intelligent”? The same might be said for a dolphin or an octopus or an orangutan, all of which are widely considered to be intelligent.

    • Gaybees · 6 · 1 year ago

      I don’t think they said anything like “it can’t be intelligent because it’s wrong sometimes.” It’s more that the AI doesn’t exist outside of the prompts you feed it. Humans can introspect, reflect on the actions we’ve taken, and question what effect those actions had on the situation. Humans can have desires; we can want to be more accurate and truthful in our actions, and reflect on how we might have failed at this in the past. AI cannot do this. And we can do this outside of the prompt of a similar situation. AI only takes an input, generates an output, wipes its hands, and calls it a day. It doesn’t matter whether it gave you a correct answer, a wrong answer, or a completely illegible sentence.

      • @testfactor · 3 · 1 year ago

        The previous guy and I agreed that you could trivially write a wrapper around it that gives it an internal monologue and feedback loop. So that limitation is artificial and easy to overcome, and has been done in a number of different studies.

        And it’s also trivially easy to have the results of its actions go into that feedback loop and influence its weights and models.
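
        For what it’s worth, a toy sketch of that kind of wrapper might look like the following. `call_model` is a made-up placeholder for whatever completion API you’d actually use (not a real library call), and here the feedback only conditions later prompts; actually nudging the weights would be a separate training step.

        ```python
        # Minimal sketch: an internal monologue plus a feedback loop wrapped
        # around a bare text-in/text-out model.

        def call_model(prompt: str) -> str:
            """Placeholder: swap in a real completion call here."""
            return "(model output for: " + prompt[:40] + "...)"


        class ReflectiveAgent:
            def __init__(self):
                self.scratchpad = []  # internal monologue, never shown to the user
                self.feedback = []    # external corrections, folded into later prompts

            def answer(self, question: str) -> str:
                # Step 1: think privately about the question.
                thought = call_model(f"Think step by step about: {question}")
                self.scratchpad.append(thought)

                # Step 2: answer, conditioned on the monologue and recent corrections.
                context = "\n".join(self.feedback[-5:])
                return call_model(
                    f"Prior corrections:\n{context}\n"
                    f"Private reasoning:\n{thought}\n"
                    f"Question: {question}\nAnswer:"
                )

            def correct(self, note: str) -> None:
                # Step 3: external feedback enters the loop and shapes future answers.
                self.feedback.append(note)


        agent = ReflectiveAgent()
        print(agent.answer("Is 1024 a power of two?"))
        agent.correct("Your last answer lacked a justification; always show the check.")
        print(agent.answer("Is 1000 a power of two?"))
        ```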

        And is having wants and desires necessary to be an “intelligence”? That’s getting into the philosophy side of the house, but I would argue that’s superfluous.

    • @[email protected] · 3 · 1 year ago

      My point is not that it can’t be intelligent because it’s wrong sometimes; my point is that the program called “consciousness” is what would end up doing all the work we would recognize as sapient.

      An LLM is little more than a really weird dictionary that doesn’t let you open it.

      The LLM could spit things out and then be retrained, but only if “consciousness” can tell it yes or no, feed it info, etc.

      Currently, that “consciousness” program is entirely computer scientists, and there are no promising avenues for replacing them yet.

      • @testfactor · 1 · 1 year ago

        You missed the point of my “can be wrong” bit. The focus was on the final clause of “and recognize that it was wrong”.

        But I’m kinda confused by your last post. You say that only computer scientists are giving it feedback on its “correctness” and therefore it can’t truly be conscious, but that’s trivially untrue and clearly irrelevant.

        First, feedback on correctness can be driven by end users. Anyone can tell ChatGPT “I don’t like the way you did that,” and it would be trivially easy to add that to a feedback loop that influences the model over time.
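
        As a rough illustration of the mechanism (leaving aside whether exposing it directly to end users is wise), that kind of feedback could be captured by simply logging rated exchanges for a later fine-tuning or preference-training pass. The file name and schema below are invented for the example.

        ```python
        # Toy sketch: append (prompt, response, rating) triples from end users
        # so they can feed a later retraining or preference-tuning step.
        import json
        import time

        FEEDBACK_LOG = "user_feedback.jsonl"  # hypothetical location


        def record_feedback(prompt: str, response: str, rating: int, comment: str = "") -> None:
            """Store one end-user judgement; rating is -1 (bad), 0 (neutral), or +1 (good)."""
            entry = {
                "ts": time.time(),
                "prompt": prompt,
                "response": response,
                "rating": rating,
                "comment": comment,
            }
            with open(FEEDBACK_LOG, "a", encoding="utf-8") as f:
                f.write(json.dumps(entry) + "\n")


        record_feedback(
            prompt="Summarise this contract clause.",
            response="(model output)",
            rating=-1,
            comment="I don't like the way you did that; it dropped the liability cap.",
        )
        ```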

        Second, find me a person whose only feedback loop was internal. People are told “no, that’s wrong” or “you’ve messed that up” all the time. That’s what makes us grow as people. That is arguably the core underpinning of what makes something intelligent: the ability to take ideas from other people (computer scientists or not) and have them influence the way you think about things.

        Like, it seems like you think that the “consciousness program” you describe would count as an intelligence, but then say it doesn’t because it’s only getting its external information from computer scientists, which seems like a distinction without a difference.

        • @[email protected] · 3 · 1 year ago

          Second, find me a person whose only feedback loop was internal.

          Not my argument: find me a person you’d consider intelligent who is only influenced externally, with no autonomy of their own. I’d name that person a vegetable.

          First, feedback on correctness can be driven by end users.

          You’ve never worked with end users, have you? Jesus Christ, the last thing you want to give an end user is write access to your model. It doesn’t matter what channel that write access comes through; it will be used to destroy your model.

          (Not to mention the extortionate cost of this constant retraining, but this isn’t a discussion about economic feasibility.)

          Besides, that doesn’t solve autonomy, which is still an integral aspect of intelligence.

          The “consciousness program” is a fiction for illustrative purposes. It doesn’t exist, in case you misunderstood me.

          I did not miss the point of the “wrong” bit, but an LLM saying it is uncertain is not the same as it saying it is wrong, and LLMs do not evaluate true or false: they transform inputs into outputs, optionally with a certainty level.

          Feeding the outputs of one LLM into another has been shown in some cases to improve accuracy, but that’s just hooking two models together, not solving the fundamental gaps in reasoning.
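
          Concretely, that two-model hookup is not much more than the sketch below: one call drafts an answer and a second call only critiques it. `call_model` is a stand-in for a real completion call, and the prompts and canned replies are purely illustrative.

          ```python
          # Sketch of chaining two model calls: a drafting pass and a critic pass.
          # Still input -> output, just twice over.

          def call_model(prompt: str) -> str:
              """Placeholder for a real text-in/text-out model call."""
              if "Reply PASS" in prompt:
                  return "PASS"  # pretend the critic approves
              return "1969"      # canned answer for the demo question


          def answer_with_critic(question: str, max_tries: int = 3) -> str:
              draft = call_model(f"Answer concisely: {question}")
              for _ in range(max_tries):
                  verdict = call_model(
                      f"Question: {question}\nProposed answer: {draft}\n"
                      "Reply PASS if the answer is acceptable, otherwise explain the flaw."
                  )
                  if verdict.strip().upper().startswith("PASS"):
                      return draft
                  # Feed the critique back in and redraft.
                  draft = call_model(
                      f"Question: {question}\nPrevious answer: {draft}\n"
                      f"Critique: {verdict}\nWrite an improved answer:"
                  )
              return draft


          print(answer_with_critic("In what year did the Apollo 11 landing happen?"))
          ```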

          If a human encounters an unknown situation, they can seek out context to try to figure more out. They can generalize what they know and look for things that might help them understand more.

          An LLM just has an output. It cannot “broaden its search” or generalize. Anything that did so would be layers on top of the LLM running the aforementioned fictitious “consciousness”, and that consciousness would need a significant amount of complexity in order to perform the functions described here and previously.

          An LLM is not an actor, it is math.

          You’re anthropomorphizing bits.

        • @[email protected] · 2 · 1 year ago

          The paper states that the graphs representing those relations are the result of training LLMs on a very small subset of unambiguous true and false statements.

          While these emergent properties may provide interesting avenues for model refinement and inspecting outputs, they don’t change the fact that these weird little dictionaries aren’t doing anything truly unexpected. We are just learning the extra data associated with the training data.

          It’s not far removed from the primary complaint of Gebru’s “On the Dangers of Stochastic Parrots,” where she points out the ways our biases are implicitly trained into LLMs because of the uncontrolled and unexamined inputs; except in this case those biases are the linguistics of truth and lies in unambiguous boolean inputs.

          • @[email protected] · 1 · 1 year ago

            This may provide interesting avenues for model refinement that aren’t spitting things out and being retrained by “consciousness” telling it yes or no, or feeding it additional info.

            • @[email protected] · 1 · 1 year ago

              Only if the “direction of truth” exists in the wild with unchecked training data.

              That clustering is a representation of the nature of the data fed to the model: all their training data was unambiguously true or false… It’s not surprising that it clusters.
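
              A toy example of that point: when the probed “activations” come only from cleanly labelled true/false statements, even the crudest probe, the difference of class means, recovers a separating direction. The vectors here are synthetic random data standing in for real hidden states, purely to illustrate why the clustering is unsurprising.

              ```python
              # Synthetic illustration: a "truth direction" falls out of cleanly
              # separated, unambiguously labelled data almost by construction.
              import numpy as np

              rng = np.random.default_rng(0)
              dim = 64

              # Fake hidden states: the two classes differ by a fixed offset plus
              # noise, mimicking an unambiguous, well-separated probing set.
              offset = rng.normal(size=dim)
              true_acts = rng.normal(size=(200, dim)) + offset
              false_acts = rng.normal(size=(200, dim)) - offset

              # The simplest possible probe: difference of the class means.
              truth_direction = true_acts.mean(axis=0) - false_acts.mean(axis=0)

              def score(activation: np.ndarray) -> float:
                  """Project onto the candidate 'truth direction'; sign is the prediction."""
                  return float(activation @ truth_direction)

              acc = np.mean(
                  [score(a) > 0 for a in true_acts] + [score(a) < 0 for a in false_acts]
              )
              print(f"probe accuracy on the clean, unambiguous set: {acc:.2f}")
              ```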