• @burliman
    34 · 1 year ago

    Bad humans are prompting these AI engines. Still gotta fix that. You know, root of the problem. I can tell you as an older human, misinformation has been supercharged in every election. But yeah, let’s blame AI this time around so we don’t have to figure out the tough problem.

    • @saltesc
      12 · 1 year ago (edited)

      Correct. AI is simply a tool. People need to get their heads around this and stop perceiving it as some sentient magical entity with rogue prerogatives and uncontested liberties.

      Whenever AI does something whack, that was a human. Everything it knows and does comes from the knowledge and instructions of humans. It’s us. If AI produces misinformation, it’s simply doing what it was taught and instructed to do by someone, and therein lies the source of the bullshit.

    • Phanatik
      4 · 1 year ago

      The problem isn’t the misinformation itself, it’s the rate at which misinformation is produced. Generative models lower the barrier to entry so that anyone in their living room somewhere can make deepfakes of your favourite politician. The blame isn’t on AI for creating misinformation; it’s for making the situation worse.

    • HelloThere
      3 · 1 year ago

      Fallible humans are building them in the first place.

      No LLM - masquerading as AI - is free of biases.

      That’s not to say that ‘bad’ people prompting biased LLMs isn’t an issue (it very much is), but even ‘good’ people are not going to get objective results.