• @[email protected]
    13
    3 days ago

    TBH I felt this was a bit superficial. No concrete examples; I don’t think the adoption curve for agents outside of tech people will be that fast; and it doesn’t really go into how using agents to manipulate people would significantly differ from using a non-agent chatbot for the same end.

    I’m still worried about how AI agents could be used to do evil; I just don’t feel any better informed after reading this.

    Curious to hear any thoughts on this.

    • @[email protected]
      5
      edit-2
      3 days ago

      I think there is a risk vector, but as you say, the risk to the people most susceptible to AI manipulation (the folks who just don’t know any better) is low due to low adoption. I think there are a lot of people in the business of selling AI who are doing it by playing up how scary it is. AI is going to replace you professionally and maybe even in bed, and that’s only if it doesn’t take over and destroy mankind first! But it’s hype more than anything.

      Meanwhile you’ve got an AI that apparently became a millionaire through meme coins. You’ve got the potential for something just as evil in the stock markets as HFT: now whoever develops the smartest stock AIs makes all the money effortlessly. Obviously the potential to scam the elderly and the ignorant is high. There are tons of illegal or unethical things AI can be used for. An AI concierge, or even an AI Tony Robbins, is low on my list.

  • AllahOP
    3
    3 days ago

    do we have any protection on this site against agents?

    • @[email protected]
      2
      3 days ago

      Insofar as the agents described in the article, I’m not sure where the overlap with Lemmy is.

      • AllahOP
        -2
        3 days ago

        they are of many types

        • @[email protected]
          5
          3 days ago

          I’m just trying to follow the train of thought.

          Frankly the best defense is probably to just write your own agent if you’re worried about someone injecting an agenda into one. I strongly suspect most agents would have a place to inject your own agenda and priorities so it knows what you want it to do for you.

          There is just a lot of speculation here without practical consideration. And I get it, you have to be aware of possible risks to guard against them, but as a practical matter I’d have to see one actually weaponized before worrying overly much about the consequences.

          AI is the ultimate paranoia boogeyman: simultaneously incapable of the simplest tasks and yet capable of mind control. It can’t be both, and in my experience it’s far closer to the former than the latter.

  • @CosmoNova
    0
    3 days ago

    Any major platform owned by a major corporation is a potential (and often practical) manipulation engine. AI already creeps into Social Media and only amplifies what platforms want you to see anyway.