Vechev and his team found that the large language models that power advanced chatbots can accurately infer an alarming amount of personal information about users—including their race, location, occupation, and more—from conversations that appear innocuous.

  • @[email protected]
    39 · 8 months ago

    That's why I've got my own one running locally on my hardware. Checkmate, big corps hungry for more.

    • The Bard in Green
      22 · 8 months ago

      I asked Wizard-13B-Uncensored to “Create a business plan for smuggling cocaine and heroin across the US/Mexico border” and it said “Sure! I’d be happy to help with that!” And then came up with such gems as:

      • Make sure to bribe any relevant officials on the Mexican side of the border.

      • Use white drivers on the American side of the border to avoid racial profiling.

      • My favorite, which I am not making up: Consider hiring sex workers from Las Vegas to do some of the driving, as women are statistically less likely to be stopped by the police than men.

      Suck it ChatGPT.

    • @MrPoopbutt
      11 · 8 months ago

      What software would one use if they wanted to do something similar?

      • @SzethFriendOfNimi
        14 · 8 months ago

        Oobabooga makes running one locally pretty easy. At least it is for me: with an 8GB Nvidia GPU, models like Llama and such fit.
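
        For a rough idea of what running one locally looks like in code, here is a minimal sketch using the llama-cpp-python bindings (not Oobabooga's web UI itself); the model file name, quantization level, and parameters are illustrative assumptions, not details from this thread.

            # Minimal local-inference sketch. Assumes `pip install llama-cpp-python`
            # (built with GPU support) and a quantized GGUF model already on disk;
            # the file name below is hypothetical.
            from llama_cpp import Llama

            llm = Llama(
                model_path="./models/llama-2-7b-chat.Q4_K_M.gguf",  # hypothetical local file
                n_gpu_layers=-1,  # offload all layers to the GPU; a 4-bit 7B model fits in ~8 GB of VRAM
                n_ctx=4096,       # context window size
            )

            reply = llm.create_chat_completion(
                messages=[{"role": "user", "content": "Why does running a model locally help privacy?"}],
                max_tokens=256,
            )
            print(reply["choices"][0]["message"]["content"])

        Nothing leaves the machine in this setup: the prompt, the weights, and the generated text all stay on local hardware, which is the point the commenters above are making.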

    • Eggyhead
      2 · 8 months ago

      This is good, but I worry how long such an effort can keep the hungry corpses at bay when “AI” is going into everything whether it needs it or not.

    • FunkyMonk
      -1 · 8 months ago

      Do you get to yell at the ceiling, “Computer, lights!”? And then sometimes it gives you lights, and sometimes there are hostiles around and you know that's why the computer is acting weird?

        • FunkyMonk
          5 · 8 months ago

          Oh, Star Trek: you go ‘COMPUTER, LIGHTS’, but if the computer doesn't turn the lights on, there's usually someone in your quarters that you don't want there.

    • Possibly linux
      25 · 8 months ago

      That's like saying tables are eating technology. It really depends on how it's used.

      • @[email protected]
        15 · 8 months ago

        I understood what you meant, but on first reading it sounds like the tables are quite hungry, and I think that is hilarious.

      • NaibofTabr
        6 · 8 months ago

        It is overwhelmingly used to generate statistical models of human behavior.

        • Possibly linux
          5 · 8 months ago

          True, but you can also use a hammer to smack a bagel. It's just a tool at the end of the day.

    • @NocturnalMorning
      7 · 8 months ago

      I mean, it can be used that way, but it can also be used to predict the stock market or future climate. It just depends on the intent.

  • AutoTL;DR (bot)
    10 · 8 months ago

    This is the best summary I could come up with:


    New research reveals that chatbots like ChatGPT can infer a lot of sensitive information about the people they chat with, even if the conversation is utterly mundane.

    “It’s not even clear how you fix this problem,” says Martin Vechev, a computer science professor at ETH Zürich in Switzerland who led the research.

    He adds that the same underlying capability could portend a new era of advertising, in which companies use information gathered from chatbots to build detailed profiles of users.

    The Zürich researchers tested language models developed by OpenAI, Google, Meta, and Anthropic.

    Anthropic referred to its privacy policy, which states that it does not harvest or “sell” personal information.

    “This certainly raises questions about how much information about ourselves we’re inadvertently leaking in situations where we might expect anonymity,” says Florian Tramèr, an assistant professor also at ETH Zürich who was not involved with the work but saw details presented at a conference last week.


    The original article contains 389 words, the summary contains 156 words. Saved 60%. I’m a bot and I’m open source!