• @[email protected]
    link
    fedilink
    -35 • 1 month ago

    At some point in the future, there will be LLMs that will scan all voice, video, or text calls and do sentiment analysis. If they flag a call as hostile, they could send a signal to all intermediate network conduits that this person is a threat and will essentially ‘unmask’ their origin. Authorities can then take appropriate action.

    If we’re lucky, laws will be passed to enable it just for cases of assholes threatening someone they don’t like. And after a few public lawsuits, the whole doxxing/swatting/death-threat thing will cease to exist.

    More likely, it’ll be used by governments to stifle dissent. But you gotta dream 😔

    • @Nuke_the_whales
      link
      10 • 1 month ago

      Sounds like the kinda thing my grandpa would say back in the 90s. I’m still waiting for the microchip that’s supposed to make me worship Satan

    • Flying Squid
      link
      10 • 1 month ago

      That’s not what LLMs do. They’re sentence-construction software.

      • @[email protected]
        link
        fedilink
        2 • 1 month ago

        An LLM could do this, but running one on every single communication would be very expensive, and it wouldn’t be particularly good at it. Humans are good at communicating through subtext. “You have such a lovely wife, I would just be gutted if something tragic were to happen to her.”

        ChatGPT picked up the veiled threat there, but that’s a very unsubtle example.
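        To illustrate why subtext is the hard part, here’s a toy sketch (my own illustration, not any real system): a naive keyword-based “threat detector” of the kind that predates LLMs. The word list and function are made up for the example. It misses the veiled threat quoted above because no individual word in it is hostile.

        ```python
        # Toy keyword-based threat detector -- the hypothetical pre-LLM baseline.
        # The word list below is an assumption for illustration only.
        HOSTILE_WORDS = {"kill", "hurt", "destroy", "threat", "die"}

        def naive_threat_score(message: str) -> int:
            """Count hostile keywords; subtext is invisible to this approach."""
            words = {w.strip(".,!?").lower() for w in message.split()}
            return len(words & HOSTILE_WORDS)

        veiled = ("You have such a lovely wife, I would just be gutted "
                  "if something tragic were to happen to her.")
        overt = "I will hurt you."

        print(naive_threat_score(veiled))  # 0 -- the veiled threat slips through
        print(naive_threat_score(overt))   # 1 -- only the overt threat is flagged
        ```

        The veiled threat scores zero, which is the gap an LLM-based classifier would be closing, at the cost the comment above points out.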