• @[email protected]
        link
        fedilink
        English
        1116 hours ago

        look, if we just rename it a bit harder, I’m sure we’ll get the promptfondler uptake that marketing had forecast. this can’t be an itanium situation, dave in sales assured me there’s a use!

          • @[email protected]
            link
            fedilink
            English
            615 hours ago

            bit late

            they’re gonna have to break out the emergency use tardis reserve… and they’re gonna need to use a few of them

      • @WagnasT
        link
        English
        413 hours ago

        Wait, am I supposed to use Microsoft 365 Copilot Office Live Premium Pro Clippy Edition (new) or Microsoft 365 Copilot Office Live Premium Pro Clippy Edition? They’re both installed…

        • @[email protected]
          link
          fedilink
          English
          5
          edit-2
          13 hours ago

          “hello everyone my name is Bill Ballmer and welcome to today’s Visual tour of Microsoft 365 Coilot Office Live Premium Pro Clippy Edition! this is of course being broadcast to all your favourite Zune stations using the Power of Azure, and wow COM-fans I’m just about as hyped up for this demo as I can be. Let’s get right into it, shall we?”

  • Snot Flickerman
    link
    fedilink
    English
    13
    edit-2
    15 hours ago

    So, let’s go over this shall we.

    Humans were always the weakest link in the security chain. Why? Because humans aren’t logical and can be tricked with words and ideas.

    So we’ve developed this new type of computer program that “thinks” and speaks naturally like a human, right? Responding naturally to human conversation with it.

    The issue is (as I’ve said before) that we’ve essentially created a computer program that is just as fallible as humans.

    In other words, no shit simple prompt engineering works. There’s no way to “secure” a human brain from dripping out things it shouldn’t on accident, and by extension, there’s no way to “secure” an LLM “brain” because they operate in a somewhat similar manner (or at least appear to). Prompt engineering is just social engineering for computers. We’ve created a computer that can be tricked with words and ideas, just like a human.

    Humans were the weakest link in security and we just made computers as weak of a link as humans. Who really thought making computers as bad at everything as humans was a good idea?

    • @[email protected]
      link
      fedilink
      English
      614 hours ago

      The issue is (as I’ve said before) that we’ve essentially created a computer program that is just as fallible as humans.

      Id say it is worse, as we have more physical presence. We can think it rains, look outside and realize somebody is spraying water on the windows and we were wrong. The LLM can only react to input, and after a correction will apologize, and then you have a high chance it will still talk about how it rains.

      We can also actually count and actually understand things, and not just predict what the next most likely word is.

      But yes, I don’t get from a security perspective people include LLMs in things, also with the whole data flows back into the LLM thing for training a lot of the LLM providers are prob doing.

  • @ramenshaman
    link
    English
    412 hours ago

    I guess I will continue not using them

  • @[email protected]
    link
    fedilink
    English
    17
    edit-2
    13 hours ago

    *…but don’t anyone worry about that! We’re going to take control of a nuclear power plant and keep pushing them to you whether you want them or not! We’ve never let the sanctity of users’ data stop us before, and we’re not about to start now!"

    • Nightwatch Admin
      link
      fedilink
      English
      1016 hours ago

      You’re not wrong. MS announced adding CoPilot or whatever the fuck it is called this week to the Office 365 subscriptions for an additional fee, while hiding the subscriptions without under unclear cancellations and shit.