I was using my SO’s laptop. I had been talking (not searching or otherwise typing) about some VPN solutions for my homelab, and was curious enough to hit the new big Copilot button and ask what it can do. The beginning of this context was actually me asking if it could turn off my computer for me (it cannot), and then I asked this.

Very unnerving. I hate being paranoid enough to think that it actually picked up on the context of me talking, but again: it’s my SO’s laptop, so there’s none of my technical search history to pull from.

  • @[email protected]
    link
    fedilink
    77
    3 months ago

    Is it possible that your chain of questions is very similar to that of other “paranoid” users who inevitably question Copilot about privacy, so this is a learned response?

    • @bbuezOP
      link
      24
      3 months ago

      I’ll pull the rest of the context when she’s back in town; I doubt she’s used it since, so it should still be saved. She looked at me when this typed out and said “you’re fucking with me, right?”. I am still just as shocked. I wish I were fucking around, and I have no other explanation for how it would remotely key onto saying this given the previous interactions.

  • @[email protected]
    link
    fedilink
    69
    edit-2
    3 months ago

    Absolutely amazing.

    My guess is that at this point its training set includes so many user prompts that bring up both Copilot and privacy concerns that it first interpreted the question, then retrieved the most common topic associated with itself (privacy), then spat out a hardcoded MSFT override response for ‘inquiry’ + ‘privacy’.
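
    That guessed pipeline is pure speculation, but it can be sketched as a toy. Everything below (the keyword list, the canned text, the intent/topic labels) is invented for illustration and has nothing to do with Copilot’s actual internals:

```python
# Purely hypothetical sketch of the guessed behavior: classify the prompt,
# then return a canned override response whenever the guessed topic is
# privacy. Keywords and canned text are invented for illustration.
CANNED = {
    ("inquiry", "privacy"): "Your privacy is important. Copilot respects your data...",
}

def classify(prompt):
    # Crude intent detection: questions end with "?".
    intent = "inquiry" if prompt.rstrip().endswith("?") else "statement"
    keywords = ("privacy", "listening", "spying", "my data")
    topic = "privacy" if any(k in prompt.lower() for k in keywords) else "general"
    return (intent, topic)

def respond(prompt):
    # Fall through to the base model only when no hardcoded override matches.
    return CANNED.get(classify(prompt), "(base model answer)")

print(respond("Are you listening to my conversations?"))
print(respond("Write me a poem about ducks."))
```

    A question that merely smells like the hot topic would hit the override every time, which would make the response look eerily targeted when it is really just a lookup.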

    • @bbuezOP
      link
      14
      3 months ago

      I want to believe that is the explanation. I really would’ve expected at least a hardcoded “features and capabilities” response, rather than a neutered ChatGPT that I’m sure neither of us is going to use.

      • @[email protected]
        link
        fedilink
        8
        edit-2
        3 months ago

        MSFT appears to still be using a fundamentally old chatbot model that they’ve just slapped a bunch of extra ‘features’ onto (namely: Wooow! It has APIs and works with other MSFT stuff!), much like Bethesda’s game engine.

        Probably barely different from Tay in terms of broad conceptual design, just patched and upgraded to do what it does faster.

        The core design is garbage, and just like Windows itself, it’s almost certainly a giant fucking mess of layers upon layers of different versions of itself hiding under a trench coat, all standing on top of something 10 to 20 years old.

        • @bbuezOP
          link
          2
          3 months ago

          I never needed Bing to open up in a side panel, and I’m not sure why I would want that now.

          They’ve poured billions into OpenAI and all, so why aren’t they using the most cutting-edge models from them? Or is it just so severely lobotomized with some preprompt that I’d still rather open up ChatGPT?

          • @[email protected]
            link
            fedilink
            3
            edit-2
            3 months ago

            As a person who used to work at MSFT:

            I can almost guarantee you there are a whoooole lot of people who have made their careers basically championing the very old chat bot model, and they are probably now either directly in charge of the OpenAI stuff, or at the very least ‘stakeholders’.

            They will do nonsense corporate bullshit to make themselves seem very important and never really wrong about anything, and this will result in extremely slow and gradual actual adoption of the GPT stuff, all the while stressing all the reasons their old stupid bullshit can’t be seriously modified, for reasons that have to do with synergizing with other MSFT products.

            The process of the company gradually figuring out that none of that matters when it comes to producing something that is actually better will be slow, painful and incremental.

            It’ll probably take half a decade.

            For reference, as an aside, I was doing a contract doing DBA kinda stuff when they unveiled Windows 8. We had to dogfood it, i.e., the MSFT process whereby everyone working at MSFT has to beta test everything else MSFT is making.

            Well… Windows 8 initially broke basically everything we were using to actually do DBA.

            I got angry and pointed out that Windows 8 had removed the ‘windows’ from Windows. The initial version was solely the tablet-based design, only allowing a maximum of two ‘panes’ open at a time.

            We had to wait about a month for the various problems with SQL Server Management Studio to be ironed out, and for them to basically allow the option of just using the more-or-less Windows 7 desktop for, you know, actually working on our PCs.

            Point of me mentioning this is: I saw how ludicrous this all was, and was frequently verbally abused by our team lead for pointing it out.

            You’re not allowed to go against the grain at MSFT unless you’re a big dog. And… you become a big dog by bullying people and vastly overstating the necessity of what your team is doing.

            The culture there is downright psycho and sociopathic.

            • @bbuezOP
              link
              2
              3 months ago

              Very informative insight. I’ve worked at a place or two where project management seemed very disconnected from what was happening, going as far as to argue that a competitor’s product quality didn’t matter while we were in a meeting about a customer threatening to return their device… Not surprised to find those mindsets have weaseled their way into (or even made) big tech what it is today.

    • @[email protected]
      link
      fedilink
      1
      3 months ago

      Occam’s razor dictates that it’s just overly permissive settings by default and an owner who doesn’t know how to turn off mic access

  • @muntedcrocodile
    link
    22
    edit-2
    3 months ago

    Looks to me like it’s not audio tracking, but that you somehow inadvertently triggered the privacy training Microsoft has given to Copilot. I’m guessing the AI was being too vocal about privacy, and Microsoft wanted to tame it and get it to downplay the topic.

  • @[email protected]
    link
    fedilink
    18
    3 months ago

    ChatGPT has a short but distinct history of encouraging paranoia in people who use it.

    When asked for help with a coding issue, ChatGPT wrote a long, rambling, and largely nonsensical answer that included the phrase “Let’s keep the line as if AI in the room”.

    • @bbuezOP
      link
      1
      3 months ago

      I do have to credit the novelty of the technology; there’s certainly a reason I want to self-host models. My concern really is with what data is being used, and with how much these models are being trusted.

      My goal is to contribute the least usable data to the likes of OpenAI “in the pursuit of AGI”, because it will inevitably end up as MS Tay did, especially if something changes on their end and it suddenly starts spitting out garbage at users who may be at risk from bad advice, or actually paranoid.

      That also doesn’t mean I haven’t and won’t use ChatGPT; it certainly has been a useful tool, knowing its limitations. But OpenAI has their head in the clouds, and it only leads to greed in pursuit of an end goal. /Imho

      • @[email protected]
        link
        fedilink
        4
        edit-2
        3 months ago

        I think AI is humanized and otherwise designed so that people will feel encouraged to give private data to it. The Kagi corporation wrote about this in their manifesto. In reality, giving your data to OpenAI is just as unsafe as typing a personal search query into Google or Bing. But by changing the context, it feels like you’re talking to a friend or a person you met at a bus stop.

        AI bros always say “it’s just a tool” as a sort of thought-terminating cliché (note: this wasn’t intended as a dig at your comment). Guns are a tool too. I wouldn’t want the richest corporations in the United States to personally own the most powerful missile systems, and in terms of AI, that’s kind of where we are.

    • Neato
      link
      fedilink
      English
      26
      3 months ago

      It’s not real. There are other ways that answer could come up. We only see the ones that seem weird, because boring answers don’t get posted.

      • @SzethFriendOfNimi
        link
        13
        edit-2
        3 months ago

        There’s a real risk of survivorship bias here. Somebody asking about a car gets that kind of answer and thinks nothing of it. A privacy-minded person, however, would find it odd, and being the kind of person concerned about what could have caused it, would consider the prior conversation.

        I’m not saying it’s an unreasonable concern, or that it’s technically not feasible. It’s just not how LLMs tend to work.

        I’d consider it more likely to be a bug, or general inquiries like you said, or that your SO had a bunch of documents locally that reference privacy, or browsing history (anything, really) that MS could have used as a kind of “here’s more about the person asking you a question”.

        • @[email protected]
          link
          fedilink
          5
          edit-2
          3 months ago

          A privacy-minded person probably wouldn’t use these tools to begin with, tbh; they would likely run their own LLM instead.

          • @[email protected]
            link
            fedilink
            English
            3
            3 months ago

            I guess that’s why OP brought up that they were using someone else’s computer.

            Also, a truly privacy-minded person wouldn’t necessarily refuse to use a hosted AI product at all. We generally just make ourselves aware that we don’t have privacy when using it, and never type anything sensitive into it. Also, have you seen what it costs to run a capable LLM?

            • @bbuezOP
              link
              1
              3 months ago

              Just don’t pull a Samsung

              I’ve just started messing with GPT4All for CPU-based language models, which can run relatively well on older gaming hardware, and a Coral accelerator module for my NVR’s presence detection with Frigate only cost $30

          • @SzethFriendOfNimi
            link
            2
            3 months ago

            That’s what I’ve been playing with. Cool stuff, even though it’s limited because of my 8 GB NVIDIA card.

      • @bbuezOP
        link
        4
        edit-2
        3 months ago

        I will tomorrow. I understand where the skepticism comes from, and I still very much doubt that it is listening. I do have my Firefox account on her laptop, but regardless, it leaves a nasty taste in my mouth.

        Edit: this is no more than about 6 messages into using it. The first few were garbage my SO tried out, then I was curious about its actual utility; I wasn’t really coming at it to find a problem.

        • @[email protected]
          link
          fedilink
          0
          edit-2
          3 months ago

          I believe they’re linking metadata. Regardless of my VPN, the ads that slip through are clearly linked between devices, so it’s my belief that they’re gathering and tracking data to link sources based on hardware info. I use randomized MACs, client spoofers, a VPN, ad-blocking DNS, everything, and they’re still able to link my devices based on hardware data.

    • Lath
      link
      fedilink
      1
      3 months ago

      I believe it uses your browser history to gauge your interests and bases its responses partly on the type of stuff you participate in repeatedly.
      So if for example you browse websites related to privacy more than anything else, it takes that into account and gets all creepy about it.
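
      If it really did work that way (which is speculation), the interest-gauging step might look something like this toy sketch; the history entries, topic names, and keyword lists are all made up:

```python
from collections import Counter

# Toy sketch of the speculated personalization: tally topic keywords across
# a browsing history and surface the dominant interest. All data is invented.
TOPIC_KEYWORDS = {
    "privacy": ("privacy", "vpn", "mullvad"),
    "gaming": ("steam", "twitch"),
}

def dominant_interest(history):
    counts = Counter()
    for url in history:
        # Count each topic at most once per URL, however many keywords match.
        for topic, keywords in TOPIC_KEYWORDS.items():
            if any(k in url.lower() for k in keywords):
                counts[topic] += 1
    return counts.most_common(1)[0][0] if counts else "general"

history = ["privacyguides.org", "mullvad.net/vpn", "store.steampowered.com"]
print(dominant_interest(history))
```

      With a history dominated by privacy sites, a system biased this way would indeed keep steering its answers toward privacy, which would read as “creepy” to exactly the users most primed to notice.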

  • Diotima
    link
    fedilink
    2
    3 months ago

    This may not have been an instance of it spying on you; “what can you do” may be similar to other searches involving privacy. But one would do well to remember that companies have been repeatedly caught spying on users.

    https://www.tomsguide.com/us/vizio-ftc-smart-tv-spying-privacy,news-24415.html Vizio spying without consent.

    https://www.news.com.au/technology/gadgets/how-google-is-secretly-recording-you-through-your-mobile-monitoring-millions-of-conversations/news-story/8089bf3084a430f4c4be46b81710c158 Google storing your conversations.

    https://www.techdirt.com/2024/01/02/cox-distances-itself-from-claim-it-spies-on-users-via-phones-cable-box-mics/ Cox cable BRAGGING about spying on users.

    https://www.forbes.com/sites/thomasbrewster/2021/08/09/apple-is-not-spying-on-your-imessages-and-this-one-switch-stops-it-scanning-your-photos/?sh=485a1696605f Apple gaslighting users over their on-device photo scanning.

    I’m sharing to say that whether this is an instance of spying or weird coincidence, you should absolutely assume that companies will violate your privacy at every opportunity because that’s what they’ve done.

    • @bbuezOP
      link
      2
      3 months ago

      This is exactly what I would hope to show up in conversation. To me it really doesn’t matter whether you think your privacy may be violated; there are more than enough examples of it actually being violated to warrant taking precautionary measures to reduce our digital footprints.

  • Lemongrab
    link
    fedilink
    2
    3 months ago

    If it’s anything like Cortana’s permissions, it’ll have access to all your web searches. Cortana also had speech and typing personalization, so Microsoft is definitely giving Copilot at least those permissions.

  • zeluko
    link
    fedilink
    2
    3 months ago

    Copilot is weird and can give out very weird responses that have little to do with your conversations.

    And of course it might just grab context depending on what you do (e.g. clicking the Copilot button might already do that).

    I found it works best as a plain GPT model if you disable the fancy stuff like search. It too easily loses track of what happened or completely goes off the rails.
    (I believe disabling search is a beta feature in some regions, but it’s a hidden flag you can theoretically set; I made a Tampermonkey script to add a button.)

    I hate the slow UI of Copilot, so I send requests from a different GPT interface.

  • @bbuezOP
    link
    0
    edit-2
    3 months ago

    I will post the full context tomorrow when I can use the laptop again. No previous chats had anything to do with privacy, and this was the first chat since the update. The first chat was something like “shit fart”, with which my SO had scientifically gauged the model.

    • @[email protected]
      link
      fedilink
      0
      3 months ago

      I doubt it was listening to your conversation, but regardless, this problem can be solved entirely by installing Linux and GPT4All or one of the many other local FOSS LLMs.

  • Shawdow194
    link
    fedilink
    0
    edit-2
    3 months ago

    It’s an LLM. You asked it “what can you even do”, and one of the hottest topics around AI is privacy concerns. With Copilot being neutered by MSFT to produce curated responses, asking it what it can do and having it branch to privacy concerns first seems totally reasonable.

  • Echo Dot
    link
    fedilink
    0
    3 months ago

    I doubt that it’s sending audio data back to Microsoft, although it probably does have access to your search history if you’ve used Bing / the built-in search bar.

  • @[email protected]
    link
    fedilink
    -3
    edit-2
    3 months ago

    If the response is not related to listening in on your convo, then it smacks of a buddy processing a personal insecurity.

    Actually my last girlfriend said I was “nicely accommodable.”