Today the risks of artificial intelligence are clear — but the warning signs have been there all along

  • @who8mydamnoreos
    0 · 1 year ago

    I really wish they would call these programs large language models and not “AI”. AI doesn’t really exist. I think people would have an easier time accepting that the bias crap these things shit out is not truth if they didn’t assume they had intelligence.

    • @BitSound
      1 · 1 year ago

      That smacks of the AI effect. “Intelligence” is ill-defined, but these language models have built a model of the world via experiencing it through human-generated text that is difficult to just handwave away. They certainly have more intelligence than an insect.

      • @who8mydamnoreos
        -1 · 1 year ago

        What experience? It doesn’t experience anything; it’s a program. It has inputs and outputs, and if it outputs something that’s incorrect you can’t ask it why; you can’t explain to the model why it’s wrong and change its behavior. You’re selling insects way too short, and the wiki article you linked is trash that should be deleted.

        • @BitSound
          3 · 1 year ago

          It creates a model of the world that it experiences through its training data, i.e. human text.

          Suppose there was a robot that had sensory input (e.g. vision, touch, taste) of the world. It is experiencing the world through that sensory data. Would you disagree? If you disagree, then I think you are simply unwilling to accept anything but meat as capable of thinking. If you agree, then why is sensory data that comes from human text different from sensory data that comes from a physical probe? Both constitute sensory input that derives from physical interaction. The language model simply builds a model based on its sensory data, which is human text.

          Put another way, you can read about something you have never physically experienced and build a model of it. Why can’t machines do that?